\section{Introduction} Bayesian optimization (BO) is a well-established methodology to optimize expensive black-box functions~\cite{Shahriari2016}. It relies on a probabilistic model of an unknown target function $f({\mathbf x})$, which is repeatedly queried until one runs out of budget (e.g., time). The queries consist in evaluations of $f$ at hyperparameter configurations ${\mathbf x}^1,\ldots,{\mathbf x}^n$ selected according to an explore-exploit trade-off criterion (e.g., expected improvement). The hyperparameter configuration corresponding to the best query is then returned. One popular approach is to impose a Gaussian process (GP) prior over $f$ and, in light of the observed queries $f({\mathbf x}^1),\ldots,f({\mathbf x}^n)$, to compute the posterior GP. The GP model maintains posterior mean and posterior variance functions as required by the explore-exploit criterion. Despite their flexibility, GPs scale cubically with the number of observations~\cite{Rasmussen2006}. Hence, they cannot be applied in situations where $f$ has been or can be queried a very large number of times. In this work, we are interested in such a setting as we would like to warm start BO by, e.g., transferring information obtained from previous runs of the BO routine, or learn across similar problems (e.g., a given classifier applied across different datasets~\cite{Bardenet2013,Yogatama2014,Feurer2015,Fusi2017,Golovin2017}), which we will call \emph{tasks}. To tackle the scalability limitation of GPs and ease transfer learning in BO, we propose to fall back to adaptive Bayesian linear regression (BLR)~\cite{Bishop2006}, ABLR for short, which scales linearly with the number of observations and cubically in the dimension of the basis function expansion. Sparse GPs~\cite{McIntire2016} or multi-task GPs~\cite{Swersky2013} have been respectively developed to scale up GPs and make them suitable for multi-task learning. ABLR offers a simple alternative combining the strengths of these two approaches. Our main contribution is to learn a suitable representation of a variety of tasks with a feedforward neural network (NN), provided it is fed with enough data. We consider conditionally independent task-specific BLR models, which share a NN that learns the basis expansion. We compare to random Fourier basis expansions~\cite{Rahimi2007} as they have already been successfully applied to BO~\cite{Hernandez-Lobato2017,Jenatton2017b}. While more scalable, they are less flexible in learning a useful representation. Closest to our work is \cite{Snoek2015}, where BO is scaled up by replacing the GP with an ABLR model. The authors consider a single task setting, with a two-step inference procedure. First, they train the NN with a squared loss at the output layer to learn a maximum a posteriori estimate of the NN parameters. This requires evaluating a number of candidate queries to feed the NN training algorithm. They then fix the network architecture and replace the output layer by a BLR layer to run the BO routine. Instead, we \emph{jointly} learn the basis expansion, that is, the NN, and the task-specific BLR models in one go. Our objective function corresponds to a sum of log-marginal likelihood terms, each term corresponding to one of the underlying tasks. As a result, in contrast with \cite{Snoek2015} who use the squared loss, we can handle heterogeneous signals, each having its own marginal likelihood. 
In this sense, we borrow the strength of the likelihood of multi-output GPs while maintaining the scalability of~\cite{Snoek2015}. Another related model is presented in~\cite{Springenberg2016}. The authors propose Bayesian NNs to sample from the posterior over $f$, and add task-specific embeddings to the NN inputs to handle multiple tasks. While allowing for a principled treatment of uncertainties, fully Bayesian NNs are computationally more expensive and their training can be sensitive to the stochastic gradient MCMC hyperparameters. Our model allows for simpler inference and is more suitable for large scale deployment. \section{Multiple Adaptive Bayesian Linear Regression Model} \label{sec:model} Consider $T$ tasks defined by a set of black-box target functions $\{f_t(\cdot)\}_{t=1}^T$ we would like to optimize. Let $\mathcal{D}_t = \{({\mathbf x}^n_t, y^n_t) \}_{n=1}^{N_t}$ be the set of $N_t$ pairs of inputs and responses associated to task $t$. We further denote the stacked response vector associated to task $t$ by ${\mathbf y}_t \in {\mathbb{R}}^{N_t}$ and the corresponding stacked matrix of inputs by ${\mathbf X}_t\in\mathbb{R}^{N_t \times P}$. We assume the task responses $\{{\mathbf y}_t\}_{t=1}^{T}$ are drawn from independent BLR models conditioned on the shared feature map ${\boldsymbol\phi}_{\mathbf z}({\mathbf x}):\mathbb{R}^{P}\mapsto\mathbb{R}^{D}$, which is parametrized by ${\mathbf z}$, and the residual noise parameters $\{\alpha_t\}_{t=1}^T$: $$ {\mathbf y}_t \mid {\mathbf X}_t, {\mathbf w}_t, \alpha_t, {\mathbf z} \sim \mathcal{N} ({\boldsymbol\Phi}_{\mathbf z}({\mathbf X}_t) {\mathbf w}_t, \alpha_t^{-1} {\mathbf I}_{n_t}) , $$ where ${\boldsymbol\Phi}_{\mathbf z}({\mathbf X}_t) = [{\boldsymbol\phi}_{\mathbf z}({\mathbf x}^n_t)]_n \in\mathbb{R}^{N_t\times D}$ is the feature matrix, ${\mathbf w}_t \in \mathbb{R}^D$ a weight vector, and $\alpha_t\in\mathbb{R}^+$ a precision (i.e., inverse variance). To complete the model, we impose a zero-mean isotropic Gaussian prior on ${\mathbf w}_t$ and denote its precision by $\beta_t\in\mathbb{R}^+$. In the remainder, we will use ${\boldsymbol\Phi}_t$ for ${\boldsymbol\Phi}_{\mathbf z}({\mathbf X}_t)$. \subsection{Posterior inference} The posterior distribution over the weight parameters is analytically tractable in this model, as well as the predictive distribution, both of which are multivariate Gaussian distributions~\cite{Bishop2006}. The predictive mean and the predictive variance at a new input ${\mathbf x}_t^*$ are respectively given by \begin{align} {\boldsymbol\mu}_t({\mathbf x}_t^*; \mathcal{D}_t, \alpha_t, \beta_t, {\mathbf z}) &= \frac{\alpha_t}{\beta_t}{\boldsymbol\phi}_{{\mathbf z}}({\mathbf x}_t^*)^\top {\mathbf K}_t^{-1} {\boldsymbol\Phi}_t^\top {\mathbf y}_t = \frac{\alpha_t}{\beta_t} {\mathbf c}_t^\top {\mathbf L}_t^{-1} {\boldsymbol\phi}({\mathbf x}_t^*) ,\label{eq: predictive mean}\\ \sigma_t^2 ({\mathbf x}_t^*; \mathcal{D}_t, \alpha_t, \beta_t, {\mathbf z}) &= \frac{1}{\beta_t} {\boldsymbol\phi}_{{\mathbf z}}({\mathbf x}_t^*)^\top {\mathbf K}_t^{-1} {\boldsymbol\phi}_{{\mathbf z}}({\mathbf x}_t^*) + \frac{1}{\alpha_t} = \frac{1}{\beta_t} ||{\mathbf L}_t^{-1} {\boldsymbol\phi}({\mathbf x}_t^*) ||^2 + \frac{1}{\alpha_t}, \label{eq: predictive var} \end{align} where ${\mathbf K}_t = \frac{\alpha_t}{\beta_t} {\boldsymbol\Phi}_t^\top {\boldsymbol\Phi}_t + {\mathbf I}_D$. The right hand side reformulations (\ref{eq: predictive mean}) and (\ref{eq: predictive var}) ensure numerical stability. 
They are obtained by decomposing ${\mathbf K}_t$ in terms of its Cholesky factor ${\mathbf K}_t= {\mathbf L}_t {\mathbf L}_t^\top$, so that ${\boldsymbol\phi}({\mathbf x}_t^*)^\top {\mathbf K}_t^{-1} {\boldsymbol\phi}({\mathbf x}_t^*) = ||{\mathbf L}_t^{-1} {\boldsymbol\phi}({\mathbf x}_t^*) ||^2$ and $ {\mathbf K}_t^{-1} {\boldsymbol\Phi}_t^\top {\mathbf y}_t = {\mathbf L}_t^{-\top} {\mathbf c}_t$ with ${\mathbf c}_t= {\mathbf L}_t^{-1} {\boldsymbol\Phi}_t^\top {\mathbf y}_t$. Each task-specific BLR depends on the hyperparameters $\alpha_t$ and $\beta_t$, as well as the set of hyperparameters ${\mathbf z}$ defining the feature map. In particular, ${\mathbf z}$ will represent the weights of a NN (see Section~\ref{sec:nn features}). We adopt an empirical Bayes approach and jointly learn all these hyperparameters by optimizing the marginal likelihood of the data~\cite{MacKay2003}. More specifically, we integrate out the model parameters $\{{\mathbf w}_t\}_{t=1}^T$ and minimize the sum of the negative log-marginal likelihoods of each task: \begin{equation}\label{eq:marginal_likelihood} \rho\left({\mathbf z}, \{\alpha_t, \beta_t \}_{t=1}^T\right) = - \sum_{t=1}^T \left[ \frac{N_t}{2} \log \alpha_t - \frac{\alpha_t}{2} \left( ||{\mathbf y}_t||^2 - \frac{\alpha_t}{\beta_t} ||{\mathbf c}_t||^2 \right) - \sum_{i=1}^D \log ( [{\mathbf L}_t]_{ii}) \right]. \end{equation} \subsection{Learning a joint representation with feedforward neural networks} \label{sec:nn features} We learn the nonlinear map ${\boldsymbol\phi}_{{\mathbf z}}({\mathbf x})$ with a feedforward NN. For some input vector ${\mathbf x}$, we consider the following $L$-layer feedforward transformation parametrized by the weight matrices $\{{\mathbf Z}_l\}_{l=1}^L$: $$ {\boldsymbol\phi}_{{\mathbf z}}({\mathbf x}) = a_L\left( {\mathbf Z}_L a_{L-1} \left( \dots {\mathbf Z}_2 a_1\left( {\mathbf Z}_1 {\mathbf x} \right) \dots \right) \right) . $$ The parameter vector ${\mathbf z}$ is a flattened version of the stacked weight matrices. In practice, $a_l$ are set as \texttt{tanh} functions and $L=3$ (as~\cite{Snoek2015}), but any more complex NN architecture can be used. Interestingly, we depart from~\cite{Snoek2015} regarding the optimization of ${\mathbf z}$. While their squared-loss formulation naturally lends itself to stochastic gradient descent (SGD), in a regime with moderate values of $T$ (typically several tens in our settings) the evidence~(\ref{eq:marginal_likelihood}) is better suited to batch optimization. In our experiments, L-BFGS~\cite{Byrd1995} worked well. Unlike~\cite{Snoek2015}, an important by-product of this choice is that we need not find hyperparameters for SGD that should work robustly across a broad set of BO problems. \subsection{Random Fourier representation} \label{sec:rf features} An alternative approach is to use random kitchen sinks (RKS) for a random Fourier basis expansion~\cite{Rahimi2007}. Let ${\mathbf U} \in {\mathbb{R}}^{D \times P}$ and ${\mathbf b} \in {\mathbb{R}}^{D}$ be such that ${\mathbf U} \sim \mathcal{N} ({\mathbf 0},{\mathbf I})$ and $\{b_j \}_{j=1}^D \sim \mathcal{U}([0,2 \pi ])$. For a vector ${\mathbf x}$, RKS defines the mapping $ {\boldsymbol\phi}_{\mathbf z}({\mathbf x}) = \sqrt{2/D} \cos ( \frac{1}{\sigma} {\mathbf U} {\mathbf x} + {\mathbf b}), $ where $\sigma \in {\mathbb{R}}^+$ is the bandwidth of the approximated RBF kernel. The parameter vector ${\mathbf z}$ is a flattened version of $\{{\mathbf U},{\mathbf b},\sigma\}$. 
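Both feature maps enter the model only through the feature matrix ${\boldsymbol\Phi}_t$, so the per-task computations of Section~\ref{sec:model} are straightforward to reproduce. The following NumPy sketch is purely illustrative (it is not the authors' implementation, and all function and variable names are ours); it assumes a precomputed feature matrix and mirrors the Cholesky-based quantities in (\ref{eq: predictive mean}), (\ref{eq: predictive var}) and (\ref{eq:marginal_likelihood}).
\begin{verbatim}
import numpy as np
from scipy.linalg import cholesky, solve_triangular

def ablr_task(Phi, y, alpha, beta):
    # Phi: (N_t, D) feature matrix Phi_z(X_t); y: (N_t,) responses.
    # alpha, beta: residual and weight precisions of task t.
    N, D = Phi.shape
    K = (alpha / beta) * Phi.T @ Phi + np.eye(D)       # K_t
    L = cholesky(K, lower=True)                        # K_t = L_t L_t^T
    c = solve_triangular(L, Phi.T @ y, lower=True)     # c_t = L_t^{-1} Phi_t^T y_t

    # Per-task negative log-marginal likelihood term, as in the objective rho.
    nll = -(0.5 * N * np.log(alpha)
            - 0.5 * alpha * (y @ y - (alpha / beta) * (c @ c))
            - np.sum(np.log(np.diag(L))))

    def predict(phi_star):
        # Predictive mean and variance at a new input with features phi_z(x*).
        v = solve_triangular(L, phi_star, lower=True)  # L_t^{-1} phi_z(x*)
        return (alpha / beta) * (c @ v), (v @ v) / beta + 1.0 / alpha

    return nll, predict
\end{verbatim}
In the full model, \texttt{Phi} would be the output of the NN or RKS map above, and the sum of the \texttt{nll} terms over tasks would be minimized jointly over $\{{\mathbf z}, \alpha_t, \beta_t\}$, e.g. with L-BFGS as in Section~\ref{sec:nn features}.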
Unlike the NN, the RKS representation contains only one hyperparameter to optimize (${\mathbf U}$ and ${\mathbf b}$ are randomly generated). This reduces the complexity of learning the map, but is less expressive as we show in the following section. To optimize $\sigma$, we proceed as for the weights of the NN (see Section~\ref{sec:nn features}). \section{Results} \label{sec:experiments} The following subsections illustrate the benefits of multiple ABLR in a variety of settings. Sections \ref{sec:synthetic} and \ref{sec:openml} evaluate its ability to gather knowledge from multiple tasks, respectively on synthetic and \texttt{OpenML} data~\cite{Vanschoren2014}. Section \ref{sec:signals} shows how it can also be applied to exploit information from multiple heterogeneous signals. By doing so, we intend to learn more meaningful representations, which can be leveraged to accelerate the hyperparameter optimization. We could further generalize the model to handle multiple tasks and multiple signals at the same time, but leave this for future work. We implemented multiple ABLR in \texttt{GPyOpt}~\cite{Gpyopt2016}, with a backend in \texttt{MxNet}~\cite{Chen2015}, fully benefiting from the symbolic computation to obtain the derivatives of the mappings ${\mathbf z}, \{\alpha_t, \beta_t \}_{t=1}^T \rightarrow \rho({\mathbf z}, \{\alpha_t, \beta_t \}_{t=1}^T) $, together with ${\mathbf x}_t^* \rightarrow \mu_t({\mathbf x}_t^*; \mathcal{D}_t, \alpha_t, \beta_t, {\mathbf z})$ and ${\mathbf x}_t^* \rightarrow \sigma_t^2 ({\mathbf x}_t^*; \mathcal{D}_t, \alpha_t, \beta_t, {\mathbf z})$. In particular, we leverage the backward operator for the Cholesky decomposition~\cite{Seeger2017}. Interestingly, this allows us to jointly optimize all the model hyperparameters and perform exact BLR on top of an arbitrarily complex NN. \subsection{Transfer learning across parametrized quadratic functions} \label{sec:synthetic} We first consider a set of $T$ tasks. A task takes the form of a parametrized 3-dimensional quadratic function $ f_t({\mathbf x}) = \frac{1}{2}a_t \|{\mathbf x}\|_2^2 + b_t {\mathbf 1}^\top {\mathbf x} + c_t , $ where $(a_t, b_t, c_t) \in [0.1,10]^3$. We call the triplet $(a_t, b_t, c_t)$ the context associated to each task $t$. In a real-world setting, the contextual information would correspond to meta-data, e.g., the data set size or its dimensionality, as we shall see in the next~section. We generated $T=30$ different tasks by drawing $(a_t, b_t, c_t)$ uniformly at random, and evaluated ABLR in a leave-one-task-out fashion. Specifically, we optimized each one of the 30 tasks after warm starting the optimization with 10 observations for the remaining 29 tasks. We compared single task ABLR-based and standard GP-based hyperparameter optimization (HPO), both denoted by \texttt{plain}, with their transfer learning counterparts, both denoted by \texttt{transfer}. We perform transfer learning with standard GPs by stacking all observations together and augmenting the input space with the corresponding contextual information~\cite{Krause2011}. For ABLR with transfer, we took our approach, i.e., one marginal likelihood per task, with and without the contextual information. Figure \ref{transfer_quadratic_GP_vs_ABLR}(left) shows the current best minimum at each of 50 iterations of HPO. The results are averaged over 10 random initializations and 30 leave-one-task-out runs. HPO converges to the minimum much faster than plain ABLR or plain GP when we exploit the information from the related tasks. 
In addition, the RKS representation with $D=100$ performs slightly worse than the NN with 3 hidden layers of 50 units each (as advocated in~\cite{Snoek2015}). Including the contextual information did not yield clear improvements, hence, for simplicity, we do not use it in the following experiments. The GP-based HPO with transfer performs slightly better on this toy example, but is not applicable in large-scale settings, such as the one in the next section (with $\sum_t N_t \approx 7.5\times10^5$). Figure \ref{transfer_quadratic_GP_vs_ABLR}(right) compares the compute time of GP-based HPO and NN-based ABLR, suggesting that the linear scaling of the latter with the number of evaluations allows us to apply ABLR in the large-scale setting. The RKS basis expansion further decreases the computational time (at the expense of performance).
\begin{figure*}[t] \begin{subfigure}{.6\textwidth} \centering \includegraphics[width=0.95\textwidth]{quadratic_exp} \end{subfigure}% \begin{subfigure}{.4\textwidth} \centering \includegraphics[width=0.7\textwidth]{time_GPvsABR.pdf} \includegraphics[width=0.7\textwidth]{time_NNvsRKS.pdf} \end{subfigure}% \caption { \textit{Left}: Transfer learning across parametrized quadratic functions. \textit{Right-top}: GP (cubic scaling) vs ABLR (linear scaling). \textit{Right-bottom}: NN vs RKS basis expansion.} \label{transfer_quadratic_GP_vs_ABLR} \end{figure*}
\subsection{Transfer learning across OpenML black-box functions} \label{sec:openml} We consider the \texttt{OpenML} platform~\cite{Vanschoren2014}, which contains a large number of evaluations for a wide range of machine learning algorithms (referred to as flows in \texttt{OpenML}) over different datasets. In particular, we focus on a random forest model (\texttt{flow\_id} 6794) and apply ABLR to optimize its hyperparameters. We selected the $T=30$ most evaluated datasets for this \texttt{flow\_id}, which amounts to $\sum_{t} N_t \approx 7.5\times10^5$ evaluations (with $N_t$ ranging from $9{,}940$ to $64{,}284$). In this setting, the linear scaling of ABLR is particularly appealing. As before, we apply a leave-one-task-out protocol, where each task stands for a dataset. For the left-out task being optimized, say $t_0$, we use the surrogate modeling approach from~\cite{Eggensperger2012}. We compare \texttt{GP plain} and \texttt{ABLR plain}, which use evaluations of task $t_0$ only, with \texttt{ABLR transfer}, which is warm-started with the evaluations of all the other tasks. The results are reported in Figure \ref{openml_libsvm_exp}(left), showing that ABLR is able to gather knowledge from different datasets to speed up the convergence.
\begin{figure*}[h] \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=0.9\textwidth]{openML_exp.pdf}\vspace*{-0.2cm} \end{subfigure}% \begin{subfigure}{.5\textwidth} \centering \includegraphics[width=0.9\textwidth]{LIBSVM_exp.pdf}\vspace*{-0.2cm} \end{subfigure}% \caption {\textit{Left}: OpenML data, multiple tasks. \textit{Right}: LIBSVM data, multiple signals.} \label{openml_libsvm_exp} \end{figure*}
\subsection{Tuning of feedforward neural networks from heterogeneous signals} \label{sec:signals} Finally, we consider the tuning of feedforward NNs for binary classification. We show that our formulation can be seamlessly applied to the orthogonal problem of modeling $S$ output signals, possibly of heterogeneous nature, \textit{at once}. Here, we optimize for the validation accuracy, using the training accuracy and CPU time as side information.
Such side signals ``come for free'' while training machine learning algorithms, but are in general not exploited for efficient HPO. In comparison to multi-output GPs that scale as $\mathcal{O}(N^3+S^3)$, ABLR scales as $\mathcal{O}(S(D^2N + D^3))$. The NN hyperparameters to tune are the number of hidden layers in $\{1,\dots,4\}$, the number of units in $\{1,\dots,50\}$, the amount of $\ell_2$ regularization in $\{2^{-6},2^{-5},\dots,2^3\}$, the learning rate of Adam~\cite{Kingma2014} in $\{2^{-6},2^{-5},\dots,2^{-1}\}$, and the number of epochs in $\{3,\dots,10\}$. Figure~\ref{openml_libsvm_exp}(right) shows the results, which are averaged over 10 random initializations and 5 datasets (\texttt{w8a, sonar, w1a, phishing, australian}) from LIBSVM~\cite{Chang2011}. It can be observed that incorporating side signals in addition to the target signal, namely the validation accuracy of the NN classifier, speeds up the ABLR-based HPO. \bibliographystyle{abbrv} \input{nips_2017.bbl} \end{document}
{ "attr-fineweb-edu": 1.464844, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUdD3xK4tBVhat3xq5
\section{Introduction}
The equation \begin{equation} \label{l:10} u_t+ {\mathcal{W}} u_x+2 u u_x=0, \ \ \widehat{{\mathcal{W}} u}(k)=\sqrt{\frac{\tanh(k)}{k}} \hat{u}(k) \end{equation} was proposed by Whitham \cite{W} as an alternative model to the ubiquitous Korteweg-de Vries approximation ($u_t + u_{xxx} + 2 uu_x = 0$) for water waves. In particular, \eqref{l:10} is driven by the non-local operator ${\mathcal{W}}$, which (modulo some rescalings) gives the ``full-dispersion'' relation for the corresponding water waves equation. It also allows, in sharp contrast with the KdV model, for wave breaking (\cite{W1}), a desirable realistic feature for such models. In this article we study a generalization of \eqref{l:10}. More specifically, we allow for the following sort of ``pseudo-differential equations of Whitham type'': \begin{equation}\label{W} u_t + (Lu + n(u))_x = 0, \quad u = u(x,t) \in {\bf{R}},\quad x \in {\bf{R}} \quad \text{and}\quad t \in {\bf{R}}, \end{equation} where $n:{\bf{R}} \to {\bf{R}}$ is purely nonlinear. The operator $L$ is a Fourier multiplier operator with symbol $m$. That is, $$ \widehat{L f}(k)=m(k) \hat{f}(k), $$ where $\hat{f}(k)$ is the Fourier transform of $f(x)$. Precise conditions on $m$ and $n$ will be set forth below, but the prototypical choices will of course be $$ m(k) = \sqrt{\tanh(k)/k} \quad \text{and}\quad n(u) = u^2, $$ which then lead us to the original model \eqref{l:10}. The dynamical properties of \eqref{l:10}, such as local well-posedness and wave breaking, among others (for \eqref{l:10} as well as for some more general versions, similar to \eqref{W}), have been thoroughly explored in recent years. We do not review these developments here, as the main focus of the current work lies in the existence and the properties of a class of special solutions, namely traveling waves. More specifically, we make the traveling wave ansatz $u(x,t) = w(x-\nu t)$, where $\nu \in {\bf{R}}$ is an as yet undetermined wave speed. After one integration we arrive at: \begin{equation}\label{TWE1} (\nu - L)w = n(w). \end{equation} The question of existence and the corresponding properties of traveling waves, that is, solutions of \eqref{TWE1}, in either the whole line or the periodic context, has been the subject of numerous papers over the last ten years. We mention the papers \cite{EK1, EK}, where the question of existence of periodic waves is investigated, both rigorously and numerically. Traveling waves in a model with weak surface tension were considered in \cite{GW10}. Finally, in the {\it tour de force} \cite{EGW}, the authors constructed (through an involved constrained variational construction with penalization) traveling waves for the whole line problem, with speeds slightly bigger than the sonic speed $\nu=1$. The question of stability of these waves, mostly in the periodic context, was considered recently in \cite{SKCK}. It should be noted that, both in the analytical and numerical results discussed herein and elsewhere, it appears that there is some natural barrier for the wave speeds, $1<\nu<1.141...$, which is still not fully understood. Thus, the ``slightly supersonic'' assumption in these papers appears to be well-warranted. The methods in these papers are varied and rather technical. In some cases, the analysis is supplemented by numerical simulations, which is justified given the lack of precise formulas, even in the classical case \eqref{l:10}. In this article, we take a slightly different point of view.
A rescaling of the problem, together with some Fourier analysis, reformulates it in such a way that the governing equations for the traveling waves are small and regular perturbations of well-understood ordinary differential equations. Then we use an implicit function theorem to prove the existence of solutions when the scaling parameter is small. The main ideas of the method are inspired by the work of Friesecke \& Pego \cite{FP} and Friesecke \& Mikikits-Leitner \cite{FM} on traveling waves in Fermi-Pasta-Ulam-Tsingou lattices, whose governing equations are nonlocal in a way similar to those we study here. \subsection{Assumptions and main results} We make the following assumption regarding $n(u)$. \begin{assumption}\label{n ass} There exists $\delta_*>0$ such that the nonlinearity $n: (-\delta_*,\delta_*) \to {\bf{R}}$ is $C^{2,1}$ (that is, its second derivative exists and is uniformly Lipschitz continuous) and satisfies $$ n(0) = n'(0) = 0 \quad \text{and}\quad n''(0) > 0. $$ \end{assumption} And here is our assumption on the multiplier $m$, which is a sort of combination of convexity near zero with boundedness for large $k$: \begin{assumption}\label{m ass} The multiplier $m:{\bf{R}} \to {\bf{R}}$ is even and there exists $k_*>0$ with the following properties: \begin{itemize} \item $m$ is $C^{3,1}$ (that is, its third derivative exists and is uniformly Lipschitz continuous) on $[-k_*,k_*]$, $m(0)>0$ and \begin{equation} \label{m2 bound} m_2:=\max_{|k|\le k_*} m''(k) < 0. \end{equation} In particular $m''(0)<0$. \item \begin{equation}\label{upperbound} m_1:=\sup_{k \ge k_*} m(k) < m(0). \end{equation} \end{itemize} \end{assumption} An important quantity that will arise in the analysis is \begin{equation}\label{this is gamma} \gamma:= - {n''(0) \over m''(0)}>0, \end{equation} by Assumptions \ref{n ass} and \ref{m ass}. Both Assumptions \ref{n ass} and \ref{m ass} are easily verified for the choices which give the full-dispersion Whitham equation \eqref{l:10}. Here are our main results. Note that our construction provides explicit leading terms both for the wave speeds and the traveling wave profiles\footnote{In principle, one could compute explicitly the next terms, up to any degree of accuracy.}. \begin{theorem} \label{theo:10} The following hold when Assumptions \ref{n ass} and \ref{m ass} are met. There exists $\epsilon_0>0$, so that for every $\epsilon\in (0, \epsilon_0)$, there is a traveling wave solution $u(x,t)= \epsilon^2 W_\epsilon(\epsilon(x-\nu_\epsilon t))$ of \eqref{W}. Moreover, $W_\epsilon\in H^1_{even}(\mathbf R)$, \begin{eqnarray} \label{a:105} \nu_\epsilon &=& m(0) - {1 \over 2} m''(0) \epsilon^2, \\ \label{107} W_\epsilon(x) &=& \frac{3}{2\gamma} \sech^2\left(\frac{x}{2}\right)+O_{H^1({\bf{R}})}(\epsilon^2). \end{eqnarray} In addition, assume the boundedness of $m$. Then, the waves $\epsilon^2 W_\epsilon(\epsilon(x-\nu_\epsilon t))$ are in fact spectrally stable, for all small enough values of $\epsilon$. \end{theorem} {\bf Remarks:} \begin{enumerate} \item Assuming higher regularity of $n$, say $C^{l+2,1}(\mathbf R)$, we have that $W_\epsilon\in H^l(\mathbf R)$. \item In the proof, we can actually verify the non-degeneracy of the solution $\epsilon^2 W_\epsilon(\epsilon x)$ in the sense that the linearized operator has kernel spanned exactly by the group of symmetries\footnote{in this case, the only symmetry is the translation in the $x$ variable}.
By general results for Hamiltonian systems, see for example Theorem 5.2.11 in \cite{KP}, the spectral stability should imply orbital stability as well. Unfortunately, the conditions in Theorem 5.2.11 in \cite{KP} are not exactly met, since the anti-self-adjoint portion of the linearization, ${\mathcal J}=\partial_x$, is not boundedly invertible. This is likely only a technical issue and we expect orbital stability to hold as well. \end{enumerate} We also prove the existence of periodic ``cnoidal'' solutions of \eqref{W}. \begin{theorem} \label{theo:10P} The following hold when Assumptions \ref{n ass} and \ref{m ass} are met. There exists $P_0 > 0$ such that the following holds for all $P >P_0$. There exists $\epsilon_P>0$, so that for every $\epsilon\in (0, \epsilon_P)$ there is a $2P/\epsilon$-periodic, even, non-zero traveling wave solution $u(x,t)= \epsilon^2 W_{P,\epsilon}(\epsilon(x-\nu_\epsilon t))$ of \eqref{W}. Moreover, $W_{P,\epsilon}\in H^1_{even}({\bf{T}}_{P})$, \begin{eqnarray} \label{a:1055} \nu_\epsilon &=& m(0) - {1 \over 2} m''(0) \epsilon^2, \\ \label{1077} W_{P,\epsilon}(x) &=& \phi_P(x)+O_{H^1({\bf{T}}_P)}(\epsilon^2), \end{eqnarray} where $\phi_P$ is the unique even, non-zero $2P$-periodic solution of $-\phi_P''+\phi_P - \gamma \phi_P^2 = 0$. In addition, assume the boundedness of $m$. For $0<\epsilon\ll1$, the waves $W_{P,\epsilon}$ are spectrally and orbitally stable, with respect to co-periodic perturbations (that is, perturbations of the same period $2P/\epsilon$). \end{theorem} \subsection{Conventions} By $H^s({\bf{R}})$ we mean the usual $L^2$-based order $s$ Sobolev space defined on ${\bf{R}}$. By $H^s({\bf{T}}_P)$ we mean the usual $L^2$-based order $s$ Sobolev space of periodic functions with period $2P$. Restricting attention only to even functions in the above results in the spaces $H^s_{even}({\bf{R}})$ and $H^s_{even}({\bf{T}}_P)$. If $X$ is a Banach space then $B(X)$ is the space of bounded linear maps from $X$ to itself, endowed with the usual norm. For a function $f \in H^s({\bf{R}})$ we use the following normalizations for the Fourier transform and its inverse: $$ \hat{f}(k)=\displaystyle \frac{1}{2\pi}\int_{\bf{R}} f(x) e^{- i x k} dx \quad \text{and}\quad f(x) =\displaystyle \int_{\bf{R}} \hat{f}(k) e^{ i x k} dk. $$ For a function $f \in H^s({\bf{T}}_P)$ we use the following normalizations for the Fourier series and its inverse: $$ \hat{f}(k):={1 \over 2P}\int_{-P}^{P} f(x) e^{-ik \pi x/P} dx \quad \text{and}\quad f(x) = \sum_{k \in {\bf{Z}}} \hat{f}(k) e^{ik \pi x/P}. $$ If $X$ is a Banach space and $q_\epsilon$ is an $\epsilon$-dependent quantity in $X$, we write $$ q_\epsilon = O_X(\epsilon^p) $$ if there exist $\epsilon_0>0$ and $C>0$ such that $$ \|q_\epsilon\|_{X} \le C \epsilon^p $$ for $0 < \epsilon \le \epsilon_0$. \section{Existence of small solutions} We present a detailed proof for the whole line case. The result for the periodic waves, which proceeds in an almost identical fashion, is proved in Section \ref{sec:2.3}. Our approach consists of introducing and analyzing a rescaled system, which is then shown to approximate the standard equation which gives the traveling wave solutions for KdV. \subsection{The rescaled system} We make the ``long wave/small amplitude/nearly supersonic'' scalings $$ w(y) = \epsilon^{2} W(\epsilon y) \quad \text{and}\quad \nu = m(0) - {1 \over 2} m''(0) \epsilon^2 $$ where $0 < \epsilon \ll1$.
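For the prototypical Whitham choices $m(k)=\sqrt{\tanh(k)/k}$ and $n(u)=u^2$, a direct Taylor expansion gives $m(k)=1-\frac{k^2}{6}+O(k^4)$, so that
$$
m(0)=1, \quad m''(0)=-\frac{1}{3}, \quad n''(0)=2, \quad \gamma=-\frac{n''(0)}{m''(0)}=6,
$$
and these scalings read $\nu = 1+\frac{\epsilon^2}{6}$, with leading-order profile $\frac{3}{2\gamma}\sech^2\left(\frac{x}{2}\right)=\frac{1}{4}\sech^2\left(\frac{x}{2}\right)$ in \eqref{107}.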
With this, \eqref{TWE1} becomes \begin{equation}\label{TWE2} \left( m(0) - {1 \over 2} m''(0) \epsilon^2 - L_\epsilon \right) W = \epsilon^{-2} n( \epsilon^2 W) \end{equation} where $L_\epsilon$ is the Fourier multiplier operator with symbol \begin{equation} \label{mep} m_\epsilon(k) = m(\epsilon k). \end{equation} Since $n(u)$ is $C^{2,1}$ by assumption, Taylor's theorem tells us that $$ \epsilon^{-2} n( \epsilon^2 W) = {\epsilon^2 \over 2} n''(0) W^2 + \epsilon^4 \rho_\epsilon(W) $$ with \begin{equation}\label{rho est} |\rho_\epsilon(W)| \le C |W|^3 \quad \text{and}\quad |\partial_x[\rho(W(x))]|\leq C |W'(x)| |W^2(x)| \end{equation} when $|W| \le \delta_*/\epsilon^2$. Thus \eqref{TWE2} becomes: \begin{equation}\label{TWE3} \left( m(0) - {1 \over 2} m''(0) \epsilon^2 - L_\epsilon \right) W = {\epsilon^2 \over 2} n''(0) W^2 + \epsilon^4 \rho_\epsilon(W). \end{equation} Assumption \ref{m ass} implies the following result. \begin{lemma} \label{hinge} Given Assumption \ref{m ass}, there exists $C>0$ such that \begin{equation}\label{mult est} \sup_{K \in {\bf{R}}} \left \vert {\epsilon^2 \over m(0)-{1 \over 2} m''(0) \epsilon^2 - m(\epsilon K)} + {1 \over {1 \over 2}m''(0)(1+ K^2)}\right \vert \le C\epsilon^2 \end{equation} when $\epsilon$ is sufficiently close to zero. \end{lemma} We postpone the technical proof for the Appendix \ref{assorted proofs}, below. Note however that quite a bit of information is packed into this Lemma. The first piece is that it guarantees that \\ $ \left( m(0) - {1 \over 2} m''(0) \epsilon^2 - L_\epsilon \right) $ has a bounded inverse. And so we can rewrite \eqref{TWE3} as: \begin{equation}\label{TWE33} \bunderbrace{W - \epsilon^2 \left( m(0) - {1 \over 2} m''(0) \epsilon^2 - L_\epsilon \right)^{-1} \left( {1\over 2} n''(0) W^2 + \epsilon^2 \rho_\epsilon(W)\right)}{\Phi(W,\epsilon)} = 0. \end{equation} Our goal is to resolve \eqref{TWE33}, at least for $0<\epsilon\ll1$. To do so, we will rely on the implicit function theorem and as such we need the behavior of the limiting system at $\epsilon=0$. Lemma \ref{hinge} implies that \begin{equation}\label{limit} \epsilon^2 \left( m(0) - {1 \over 2} m''(0) \epsilon^2 - L_\epsilon \right)^{-1} = -\frac{2}{m''(0)}(1- \partial_x^2)^{-1} + O_{B(X)} (\epsilon^2) \end{equation} where $X$ is either $H^s({\bf{R}})$ or $H^s({\bf{T}}_P)$. Thus, if we set $\epsilon = 0$ in \eqref{TWE33}, we get: \begin{equation}\label{TWE4} W -{\gamma (1- \partial_X^2)^{-1}} W^2 = 0 \end{equation} or rather \begin{equation}\label{TWE5} -W''+ W -\gamma W^2=0. \end{equation} Here $\gamma>0$ is given above in \eqref{this is gamma}. \subsection{Existence of localized traveling waves} Observe that \eqref{TWE5}, and so \eqref{TWE4}, has (a unique!) non-zero even localized solution, namely \begin{equation} \label{a:110} W(X) =\sigma (x) := \frac{3}{2\gamma} \sech^2\left(\frac{x}{2}\right). \end{equation} In other words, we have $ \Phi(\sigma,0) = 0. $ Linearization of \eqref{TWE5} about $\sigma(x)$ results in the self-adjoint operator $${\mathcal L}:=-\partial_x^2+1 - 2\gamma\sigma,$$ which is well-studied in the literature. It is known to have exactly one negative eigenvalue, a single eigenvalue at zero, spanned by $\sigma'$, and outside of these two directions, the operator ${\mathcal L}$ is strictly positive. \subsubsection{Solvability of \eqref{TWE33}} If we compute ${\mathcal K}:=D_W\Phi(\sigma,0)$ we get $$ {\mathcal K}= Id - 2 \gamma (1-\partial_x^2)^{-1} \left(\sigma \cdot \right). 
$$ The following lemma is proved in \cite{FP}: \begin{lemma} \label{le:19} ${\mathcal K}: L^2_{even}(\mathbf R)\to L^2_{even}(\mathbf R)$ is bounded and has a bounded inverse. Also, ${\mathcal K}: H^1_{even}(\mathbf R)\to H^1_{even}(\mathbf R)$ is bounded and invertible. \end{lemma} Here is a brief explanation of the proof. It is by now a classical result that $(1-\partial_x^2)^{-1} \left(\sigma \cdot \right) :L^2(\mathbf R)\to L^2(\mathbf R)$ and indeed $(1-\partial_x^2)^{-1} \left(\sigma \cdot \right) : H^1(\mathbf R)\to H^1(\mathbf R)$ is a compact operator. Thus, the set $\sigma({\mathcal K})\setminus\{1\}$ consists only of eigenvalues of finite multiplicity. Note that when restricted to the even (and also the odd) subspaces, ${\mathcal K}$ acts invariantly, that is ${\mathcal K}: H^1_{even}(\mathbf R)\to H^1_{even}(\mathbf R)$. We claim that ${\mathcal K}$ is invertible on $H^1_{even}(\mathbf R)$. Indeed, assuming otherwise, it must be, by the Fredholm alternative, that there is an eigenfunction $f_0\in H^1_{even}: {\mathcal K} f_0=0$. One quickly realizes that this implies $f_0\in H^2(\mathbf R)$ and ${\mathcal L} f_0=0$. This is a contradiction, since $f_0\in Ker[{\mathcal L}]=span[\sigma']$, which then implies that $f_0$ is an odd function. We use the following version of the implicit function theorem: \begin{theorem} \label{theo:impl} Let $X$ be a Banach space and suppose that $\Phi : X \times {\bf{R}} \to X$ has the following properties: (a) $\Phi$ is continuously differentiable, (b) $\Phi(x_*,\mu_*) = 0$, and (c) $D_x \Phi(x_*,\mu_*)$ has a bounded inverse from $X$ to $X$. Then there exist neighborhoods $U$ of $x_*$ and $M$ of $\mu_*$ and a differentiable function $\chi: M \to U$ such that $\Phi(\chi(\mu),\mu) = 0$, and $\Phi(x,\mu) = 0$ iff $x = \chi(\mu)$, for all $(x,\mu) \in U \times M$. \end{theorem} According to Theorem \ref{theo:impl}, the solvability of \eqref{TWE33}, that is $\Phi(W, \epsilon)=0$, holds. Indeed, by our previous considerations, $\Phi(\sigma, 0)=0$ and the functional $\Phi: H^1_{even}({\bf{R}})\times \mathbf R\to H^1_{even}({\bf{R}})$ is continuously differentiable. In addition, ${\mathcal K}=D_W\Phi(\sigma,0): H^1_{even}(\mathbf R)\to H^1_{even}(\mathbf R)$ is invertible, according to Lemma \ref{le:19}. This gives a family of solutions, say $W_\epsilon\in H^{1}_{even}(\mathbf R)$, at least for $\epsilon$ in a small interval $(0, \epsilon_0)$, $\epsilon_0<1$. That $W_\epsilon - \sigma$ is $O_{H^1({\bf{R}})}(\epsilon^2)$ follows in a routine way from \eqref{TWE33}, \eqref{limit} and \eqref{rho est}. This finishes the proof of the existence part of Theorem \ref{theo:10}. {\bf Remark:} Note that with the current assumptions on $m$, one cannot obtain higher regularity results on $W_\epsilon$, since the operator $ \left( m(0) - {1 \over 2} m''(0) \epsilon^2 - L_\epsilon \right)^{-1}$ cannot be guaranteed to be smoothing\footnote{and in fact, for the Whitham example, where $m(k)=\sqrt{\frac{\tanh(k)}{k}}$, it is not smoothing}. We can however claim higher regularity, by essentially the same arguments as above, once we know a higher regularity of the remainder term $\rho_\epsilon(z)=\frac{n(\epsilon^2 z)-\frac{n''(0)}{2} \epsilon^4 z^2}{\epsilon^6}$ or, what is the same, a higher regularity of the nonlinearity $n$. Indeed, assuming $n\in C^{l+2,1}(\mathbf R)$, we obtain $\rho_\epsilon \in C^{l,1}(\mathbf R)$ and then we can claim that the map $\Phi: H^l(\mathbf R)\times \mathbf R\to H^l(\mathbf R)$ is continuously differentiable.
Since ${\mathcal K}$ will also be invertible on $H^l(\mathbf R)$, an application of the implicit function theorem will produce a solution $W_\epsilon\in H^l(\mathbf R)$. \subsection{Existence of periodic traveling waves} \label{sec:2.3} Return attention to \eqref{TWE5}. In addition to the solitary wave solution $\sigma(X)$, this equation has a one-parameter family of even periodic solutions. While there are explicit formulas available for these solutions (\cite{FM}) in terms of the elliptic function ``${\textrm{cn}}$'' (hence the nomenclature ``cnoidal'' waves), we do not need these formulas here. Instead, we summarize the properties of such waves. \begin{theorem}\label{cnoidal} For all $\gamma > 0$ there exist $P_0>0$ and a family of functions $\left\{\phi_P(x)\right\}_{P>P_0}$ with the following properties: \begin{enumerate} \item $\phi_P(x)$ is $C^\infty$, non-constant and even. \item $\phi_P(x)$ is periodic with principal period $2P$. \item $W(x) = \phi_P(x)$ solves \eqref{TWE5} (and thus \eqref{TWE4}). \item The kernel of $$ {\mathcal L}_P := - \partial_x^2 + 1- 2 \gamma \phi_P $$ (as an operator in $H^s({\bf{T}}_P)$) is exactly $\spn\left\{\phi_P'(x) \right\}$. \end{enumerate} \end{theorem} This theorem tells us that $\Phi(\phi_P,0) = 0$. Our strategy for continuing such solutions to $\epsilon >0$ via the implicit function theorem is not terribly different from the one used for the localized waves above. If we compute ${\mathcal K}_P:=D_W\Phi(\phi_P,0)$ we get $$ {\mathcal K}_P= Id - 2 \gamma (1-\partial_x^2)^{-1} \left(\phi_P \cdot \right). $$ In \cite{FM} (their Lemma 5.1) the following is shown: \begin{lemma} \label{le:19P} ${\mathcal K}_P: L^2_{even}({\bf{T}}_P)\to L^2_{even}({\bf{T}}_P)$ is bounded and has a bounded inverse. Also, ${\mathcal K}_P: H^1_{even}({\bf{T}}_P)\to H^1_{even}({\bf{T}}_P)$ is bounded and invertible. \end{lemma} This follows from part (4) of Theorem \ref{cnoidal} and the argument is very much the same as the proof of Lemma \ref{le:19}. At this stage we appeal to the implicit function theorem as above and arrive at the conclusions of Theorem \ref{theo:10P}. \section{Proof of Theorem \ref{theo:10}: the stability of the small Whitham waves} Now that we have constructed the solutions $W_\epsilon$ for $0<\epsilon\ll 1$, let us address the question of their stability. We first linearize around the traveling wave solution. \subsection{The linearized problem and stability} We take the perturbation of the solution $\epsilon^2 W_\epsilon (\epsilon(x-\nu t))$ in the form $u=\epsilon^2(W_\epsilon (\epsilon(x-\nu t))+v(\epsilon t, \epsilon(x-\nu t)))$. Plugging this ansatz into the equation \eqref{W}, ignoring terms of order $O(v^2)$ and transforming $x-\nu t\to x$, we obtain the following linearized system \begin{equation} \label{a:10} v_t+\partial_x[L_\epsilon v - \nu v+n'(\epsilon^2 W_\epsilon) v]=0. \end{equation} Introduce the linearized operator $$ {\mathcal L}_\epsilon:=-L_\epsilon + \nu - n'(\epsilon^2 W_\epsilon). $$ Passing to the time independent problem via the map $v(t,x)\to e^{\lambda t} z(x)$, we arrive at the eigenvalue problem \begin{equation} \label{a:20} \partial_x {\mathcal L}_\epsilon z=\lambda z. \end{equation} It is then time to introduce the notion of stability. \begin{definition} \label{defi:10} We say that the traveling wave $\epsilon^2 W_\epsilon (\epsilon(x-\nu t))$ is spectrally stable if the eigenvalue problem \eqref{a:20} does not have non-trivial solutions $(\lambda, z): \Re\lambda>0, z \in L^2(\mathbf R)$.
We say that the solution is orbitally (non-linearly) stable, if for every $\sigma>0$, there exists $\delta=\delta(\sigma, \epsilon)>0$, so that whenever $u_0\in H^1(\mathbf R): \|u_0 - \epsilon^2 W_\epsilon(\epsilon \cdot)\|_{H^1}<\delta$, then the solution $u$, with initial data $u_0$, $$ \inf_{y\in \mathbf R}\|u(t, \cdot)- \epsilon^2 W_\epsilon (\epsilon(\cdot+y-\nu t))\|_{H^1(\mathbf R)}<\sigma. $$ \end{definition} Next, we discuss the instability index count theory, which gives sufficient (and in many cases necessary) conditions for stability/instability, both spectral and orbital. We mostly follow the general theory, as developed in \cite{LZ}, although earlier relevant results are available, see \cite{KKS, KKS2, KP, KS}. \subsection{Instability index theory} \label{sec:3.2} For the eigenvalue problem \begin{equation} \label{a:30} {\mathcal J} {\mathcal L} f=\lambda f \end{equation} make the following assumptions regarding ${\mathcal L}, {\mathcal J}$: \begin{enumerate} \item ${\mathcal L}^*={\mathcal L}$, so that $L\in B(X,X^*)$ for some real Hilbert space\footnote{In the most common applications, $X=H^s, s>0$ is a Sobolev space of positive order, while $X^*=H^{-s}$ and one has $X=D({\mathcal L})\subset L^2 \subset X^*$} $X$, i.e. $\dpr{{\mathcal L} u}{v}: X\times X\to {\mathbf C}$ is continuous. \item $dim(Ker[{\mathcal L}])<\infty$ and there is the ${\mathcal L}$ invariant decomposition of the space $X$, $$ X=X_- \oplus Ker[{\mathcal L}]\oplus X_+, $$ where $dim(X_-)<\infty$, and for some $\delta>0$, ${\mathcal L}|_{X_-}\leq -\delta$, ${\mathcal L}|_{X_+}\geq \delta>0$. \item ${\mathcal J}: D({\mathcal J})\subset X^* \to X$, ${\mathcal J}^*=-{\mathcal J}$. \end{enumerate} Moreover, introduce the Morse index $n^-({\mathcal L})=dim(X_-)$, an integer. Consider the generalized eigenspace at zero for the operator ${\mathcal J} {\mathcal L}$, that is $E_0=\{u\in X: ({\mathcal J} {\mathcal L})^k u=0, k\geq 1 - \textup{integer} \}$. Clearly, $Ker[{\mathcal L}]$ is a (finite dimensional) subspace of $E_0$ and one can complete it: $E_0=Ker[{\mathcal L}]\oplus \tilde{E}_0$. Then, $$ k_0^{\leq 0}:=\max\{dim(Z): Z\ \textup{subspace of}\ \tilde{E}_0: \dpr{{\mathcal L} z}{z}\leq 0, z\in Z\}. $$ Under these assumptions, it was proved (see Theorem 2.3, \cite{LZ}) that\footnote{A much more precise result is contained in Theorem 2.3, \cite{LZ}, but we state this corollary, as it is enough for our purposes} \begin{equation} \label{a:40} k_{unstable}\leq n^-({\mathcal L})- k_0^{\leq 0}({\mathcal L}). \end{equation} where $k_{unstable}$ is the number of (non-trivial) unstable solutions to \eqref{a:30}, that is pairs $(\lambda, z)$ with $\Re\lambda>0, z \in X$. In the next section, we apply this theory to the linearized problem \eqref{a:20}. \subsection{Stability analysis for the small Whitham waves} \label{sec:3.3} For the eigenvalue problem \eqref{a:20}, we have ${\mathcal J}=\partial_x$, which is anti self-adjoint, while clearly ${\mathcal L}_\epsilon: {\mathcal L}^*_\epsilon={\mathcal L}_\epsilon$ is a bounded symmetric operator, if we assume the boundedness of its symbol $m$. We will establish below that ${\mathcal L}_\epsilon$ has, at least for small enough values of $\epsilon$, a single and simple negative eigenvalue (i.e. $n^-({\mathcal L}_\epsilon)=1$), while its kernel is one dimensional and it is in fact spanned by $W'_\epsilon$. Assuming that for the moment, let us proceed to establish a sufficient condition for the stability. According to \eqref{a:40}, $k_{unstable}\leq 1- k_0^{\leq 0}$. 
Thus, the stability of the solitary waves $\epsilon^2 W_\epsilon(\epsilon x)$, will be established, once we show that\footnote{and hence $k_0^{\leq 0}({\mathcal L}_\epsilon)=1$, since the left hand side of \eqref{a:40} is non-negative.} $k_0^{\leq 0}({\mathcal L}_\epsilon)\geq 1$. To this end, we can identify an element in $gKer(\partial_x {\mathcal L}_\epsilon)\setminus Ker[\partial_x {\mathcal L}_\epsilon]$. Note that $Ker[\partial_x {\mathcal L}_\epsilon]=Ker[{\mathcal L}_\epsilon]=span\{W'_\epsilon\}$. In addition, $W_\epsilon \perp W'_\epsilon $, whence $W_\epsilon \perp Ker[{\mathcal L}_\epsilon]$. Thus, $\Psi_\epsilon:={\mathcal L}_\epsilon^{-1}[W_\epsilon]$ is well-defined. Since, $$ (\partial_x {\mathcal L}_\epsilon)^2[\Psi_\epsilon]= \partial_x {\mathcal L}_\epsilon \partial_x[W_\epsilon ]= \partial_x {\mathcal L}_\epsilon[W'_\epsilon]=0, $$ we have that $\Psi_\epsilon \in gKer(\partial_x {\mathcal L}_\epsilon)\setminus Ker[\partial_x {\mathcal L}_\epsilon]$. According to the definition of $k_0^{\leq 0}({\mathcal L}_\epsilon)$, we will have established $k_0^{\leq 0}({\mathcal L}_\epsilon)\geq 1$, once we verify that $$ 0>\dpr{{\mathcal L}_\epsilon \Psi_\epsilon}{\Psi_\epsilon}=\dpr{{\mathcal L}_\epsilon^{-1}[W_\epsilon]}{\epsilon^2 W_\epsilon}. $$ Thus, we will need to verify the negativity of the Vakhitov-Kolokolov type quantity \begin{equation} \label{a:50} \dpr{{\mathcal L}_\epsilon^{-1}[W_\epsilon]}{W_\epsilon}<0, \end{equation} once we check that for all small enough $\epsilon$, $n^-({\mathcal L}_\epsilon)=1$, $Ker[{\mathcal L}_\epsilon]=span\{W'_\epsilon\}$. We do this in the next Lemma. \begin{lemma} \label{le:a10} There exists $\epsilon_0>0$ so that for all $\epsilon\in (0, \epsilon_0)$, $n^-({\mathcal L}_\epsilon)=1$, $Ker[{\mathcal L}_\epsilon]=span\{W'_\epsilon\}$. \end{lemma} \begin{proof} Start by taking a sufficiently large $\mu>0$, to be specified later. We will construct the operator $\left(\epsilon^{-2} {\mathcal L}_\epsilon+\mu\right)^{-1}$ for all small enough $\epsilon$. Indeed, since $$ n'(\epsilon^2 W_\epsilon)= n''(0)\epsilon^2 W_\epsilon+O_{H^1}(\epsilon^4)=n''(0)\epsilon^2 \sigma+O_{H^1}(\epsilon^4), $$ where $\sigma$ is the explicit $sech^2$ function, see \eqref{a:110}. We have \begin{eqnarray*} \epsilon^{-2} {\mathcal L}_\epsilon+\mu &=& \epsilon^{-2}[{\mathcal L}_\epsilon+\mu \epsilon^2]=\epsilon^{-2}[-L_\epsilon+\nu-\epsilon^2 n''(0) \sigma+\mu \epsilon^2+O_{H^1}(\epsilon^4)]=\\ &=& [Id -[ n''(0) \sigma-\mu +O_{H^1}(\epsilon^2)] \epsilon^2 (\nu-L_\epsilon)^{-1}]\epsilon^{-2}(\nu-L_\epsilon). \end{eqnarray*} Recall now that the operator $\epsilon^{2} (\nu-L_\epsilon)^{-1}$ is associated with the multiplier $\frac{\epsilon^2}{ m(0)-{1 \over 2} m''(0) \epsilon^2-m(\epsilon k)}$. So, according to Lemma \ref{hinge} (and more precisely \eqref{mult est}), \begin{equation} \label{a:70} \epsilon^{2} (\nu-L_\epsilon)^{-1}=-\frac{2}{m''(0)} (1-\partial_x^2)^{-1}+O_{B(L^2)}(\epsilon^2). \end{equation} Thus, \begin{eqnarray*} \epsilon^{-2} {\mathcal L}_\epsilon+\mu &=& \left(Id+\frac{2}{m''(0)} [n''(0) \sigma-\mu+O_{H^1}(\epsilon^2)](1-\partial_x^2)^{-1}\right)\epsilon^{-2}(\nu-L_\epsilon) =\\ &=& \left(Id +2[-\gamma \sigma -\frac{\mu}{m''(0)}+O_{H^1}(\epsilon^2)](1-\partial_x^2)^{-1}\right)\epsilon^{-2}(\nu-L_\epsilon). 
\end{eqnarray*} Note however \begin{eqnarray*} {\mathcal L}-\frac{2 \mu}{m''(0)}+O_{H^1}(\epsilon^2) &=& 1-\partial_x^2-2\gamma \sigma -\frac{2 \mu}{m''(0)}+O_{H^1}(\epsilon^2)=\\ &=& \left[Id +2[-\gamma \sigma -\frac{\mu}{m''(0)}+O_{H^1}(\epsilon^2)](1-\partial_x^2)^{-1}\right](1-\partial_x^2). \end{eqnarray*} Now, we select $\mu>0$ large and $\epsilon\ll1$, so that $ {\mathcal L}-\frac{2 \mu}{m''(0)}+O_{H^1}(\epsilon^2) $ is invertible. This is possible, since $-\frac{2 \mu}{m''(0)}>0$ and ${\mathcal L}$ is bounded from below\footnote{and in fact it has a single negative eigenvalue}. Moreover, $( {\mathcal L}-\frac{2 \mu}{m''(0)}+O_{H^1}(\epsilon^2))^{-1}: L^2\to H^{2}$. Thus, we can write \begin{eqnarray*} \left[Id +2[-\gamma \sigma -\frac{\mu}{m''(0)}+O_{H^1}(\epsilon^2)](1-\partial_x^2)^{-1}\right]^{-1}= (1-\partial_x^2)( {\mathcal L}-\frac{2 \mu}{m''(0)}+O_{H^1}(\epsilon^2))^{-1}:L^2\to L^2. \end{eqnarray*} Hence, we can invert (by means of the previous formula and \eqref{a:70}) \begin{eqnarray*} (\epsilon^{-2} {\mathcal L}_\epsilon+\mu)^{-1} &=& \epsilon^{2} (\nu-L_\epsilon)^{-1} \left[Id +2[-\gamma \sigma -\frac{\mu}{m''(0)}+O_{H^1}(\epsilon^2)](1-\partial_x^2)^{-1}\right]^{-1}= \\ &=& \left( -\frac{2}{m''(0)} (1-\partial_x^2)^{-1}+O_{B(L^2)}(\epsilon^2) \right) (1-\partial_x^2)\left( {\mathcal L}-\frac{2 \mu}{m''(0)}+O_{H^1}(\epsilon^2)\right)^{-1}=\\ &=& -\frac{2}{m''(0)} \left( {\mathcal L}-\frac{2 \mu}{m''(0)}\right)^{-1}+O_{B(L^2)}(\epsilon^2). \end{eqnarray*} That is, \begin{equation} \label{a:80} (\epsilon^{-2} {\mathcal L}_\epsilon+\mu)^{-1} = \left( -\frac{m''(0)}{2} {\mathcal L}+\mu \right)^{-1}+O_{B(L^2)}(\epsilon^2). \end{equation} We can now use this formula to study the spectrum of ${\mathcal L}_\epsilon$. Using min-max formulas for the eigenvalues of self-adjoint operators, we claim that \begin{equation} \label{a:90} \lambda_{\max}((\epsilon^{-2} {\mathcal L}_\epsilon+\mu)^{-1})=\lambda_{\max}\left(( -\frac{m''(0)}{2} {\mathcal L}+\mu )^{-1}+O_{B(L^2)}(\epsilon^2)\right)>\frac{1}{\mu} \end{equation} for all small enough $\epsilon$. Indeed, denoting the negative eigenvalue of ${\mathcal L}$ by $-\sigma_0^2: {\mathcal L} \psi_0=-\sigma_0^2 \psi_0, \|\psi_0\|=1$, we have that \begin{eqnarray*} \lambda_{\max}\left(( -\frac{m''(0)}{2} {\mathcal L}+\mu )^{-1}\right) &=& \sup_{f: \|f\|=1} \dpr{(-\frac{m''(0)}{2} {\mathcal L}+\mu )^{-1} f}{f} \geq \dpr{(-\frac{m''(0)}{2} {\mathcal L}+\mu )^{-1} \psi_0}{\psi_0}=\\ &=& \frac{1}{-\frac{m''(0)}{2} (-\sigma_0^2)+\mu}>\frac{1}{\mu}. \end{eqnarray*} It follows that for all small enough $\epsilon$, $\lambda_{\max}((\epsilon^{-2} {\mathcal L}_\epsilon+\mu)^{-1})>\frac{1}{\mu}$, or equivalently, $\epsilon^{-2} {\mathcal L}_\epsilon$ has the smallest eigenvalue in the form $\lambda_0(\epsilon^{-2} {\mathcal L}_\epsilon):=\frac{1}{\lambda_{\max}\left(( -\frac{m''(0)}{2} {\mathcal L}+\mu )^{-1}\right)}-\mu+O(\epsilon^2)<0$. Take $f:f\perp \psi_0, \|f\|=1$. Since we have ${\mathcal L}|_{\{\psi_0\}^\perp}\geq 0$ and ${\mathcal L}[\sigma']=0$, $$ \frac{1}{\mu}=\dpr{(-\frac{m''(0)}{2} {\mathcal L}+\mu )^{-1} \frac{\sigma'}{\|\sigma'\|}}{\frac{\sigma'}{\|\sigma'\|}} \leq \sup_{f\perp \psi_0, \|f\|=1} \dpr{(-\frac{m''(0)}{2} {\mathcal L}+\mu )^{-1} f}{f} \leq \frac{1}{\mu} $$ It follows that $\lambda_1((-\frac{m''(0)}{2} {\mathcal L}+\mu )^{-1})=\frac{1}{\mu}$, whence the second smallest eigenvalue for $(\epsilon^{-2} {\mathcal L}_\epsilon+\mu)^{-1}$ is of the form $\frac{1}{\mu}+O(\epsilon^2)$. 
Equivalently, the second smallest eigenvalue for $\epsilon^{-2} {\mathcal L}_\epsilon$ is $\lambda_1(\epsilon^{-2} {\mathcal L}_\epsilon)=O(\epsilon^2)$. Further, according to the spectral information for ${\mathcal L}$, its second eigenvalue is also simple, in particular, ${\mathcal L}|_{span\{\psi_0, \sigma'\}^{\perp}}\geq \delta Id>0$. Therefore, $$ \sup_{f\perp \psi_0, f\perp \sigma', \|f\|=1} \dpr{(-\frac{m''(0)}{2} {\mathcal L}+\mu )^{-1} f}{f} \leq \frac{1}{-\delta \frac{m''(0)}{2} +\mu}. $$ Equivalently, $$ \lambda_2(\epsilon^{-2} {\mathcal L}_\epsilon)\geq -\delta \frac{m''(0)}{2} +O(\epsilon^2) >0. $$ All in all, we have shown \begin{equation} \label{a:100} \lambda_0(\epsilon^{-2} {\mathcal L}_\epsilon)<0, \ \ \lambda_1(\epsilon^{-2} {\mathcal L}_\epsilon)=O(\epsilon^2), \ \ \lambda_2(\epsilon^{-2} {\mathcal L}_\epsilon)\geq -\delta \frac{m''(0)}{2} +O(\epsilon^2). \end{equation} A direct differentiation in $x$ of the profile equation \eqref{TWE2} shows that $[\nu-L_\epsilon-n'(\epsilon^2 W)]W'=0$ or equivalently, $0\in \sigma({\mathcal L}_\epsilon)$. This, combined with \eqref{a:100}, shows that $\lambda_1(\epsilon^{-2} {\mathcal L}_\epsilon)=0$. This finishes the proof of Lemma \ref{le:a10}. \end{proof} It finally remains to verify \eqref{a:50}. Now that we know that $Ker[{\mathcal L}_\epsilon]=span\{W_\epsilon'\}$, we conclude that ${\mathcal L}_\epsilon$ is invertible on the even subspace $L^2_{even}$. In fact, we may use the formula \eqref{a:80} with $\mu=0$. In addition, from \eqref{107}, we have \begin{eqnarray*} \dpr{{\mathcal L}_\epsilon^{-1}[W_\epsilon]}{W_\epsilon} &=& -\frac{2}{m''(0)} \epsilon^{-2} \dpr{ ({\mathcal L}^{-1}+O_{B(L^2)}(\epsilon^2))[\sigma+O_{H^1}(\epsilon^2)]}{\sigma+O_{H^1}(\epsilon^2)}=\\ &=& -\frac{2}{m''(0)} \epsilon^{-2}[\dpr{{\mathcal L}^{-1} \sigma}{\sigma}+O(\epsilon^2)]. \end{eqnarray*} The quantity $\dpr{{\mathcal L}^{-1} \sigma}{\sigma}$ is well-known in the theory of stability for the corresponding KdV/NLS models. Its negativity is, in exactly the same way, equivalent to the (well-known) stability of the corresponding traveling/standing waves. It may actually be computed explicitly as follows. Consider \eqref{TWE5} and a function $W_\lambda:=\lambda^2 \sigma(\lambda \cdot), \lambda>0$. This solves $$ -W_\lambda''+\lambda^2 W_\lambda - \gamma W_\lambda^2=0. $$ Taking a derivative in $\lambda$ and evaluating at $\lambda=1$ yields $$ {\mathcal L}[\frac{d}{d\lambda} W_\lambda|_{\lambda=1}]=-2\sigma. $$ Thus, ${\mathcal L}^{-1} \sigma=-\frac{1}{2} \frac{d}{d\lambda} W_\lambda|_{\lambda=1}= -\frac{1}{2}(2\sigma+x \sigma')$. It follows that $$ \dpr{{\mathcal L}^{-1} \sigma}{\sigma}=-\frac{1}{2} \dpr{2\sigma+x \sigma'}{\sigma}=-\frac{3}{4} \|\sigma\|^2<0. $$ Thus, the Vakhitov-Kolokolov condition \eqref{a:50} is verified and the proof of Theorem \ref{theo:10} is complete. \subsection{Stability of the periodic waves} The stability calculation for the periodic waves proceeds in an identical fashion. The eigenvalue problem is in the form \eqref{a:20}, where now the operators are acting on the corresponding periodic spaces $H^s({\bf{T}}_P)$. In fact, noting that for $\lambda\neq 0$ the right-hand side $z$ is an exact derivative allows us to restrict the consideration of \eqref{a:20} to the space $L^2_0({\bf{T}}_P)=\{f\in L^2({\bf{T}}_P): \int_{-P}^P f(x) dx=0\}$. The advantage of this is that now ${\mathcal J}=\partial_x$ is boundedly invertible, hence allowing the results of \cite{KP} to apply.
In particular, spectral stability and non-degeneracy do imply orbital stability. The instability index theory outlined in Section \ref{sec:3.2} applies. According to \eqref{a:40} and the analysis in Section \ref{sec:3.3}, \eqref{a:50} implies the spectral stability. Moreover, Lemma \ref{le:a10} applies as well to the periodic waves. That is, the Morse index of ${\mathcal L}_\epsilon$ is one and the wave is non-degenerate, in the sense that $Ker[{\mathcal L}_\epsilon]=span[W'_{P,\epsilon}]$. The verification of \eqref{a:50} is reduced, in the same way, to the verification of the inequality $\dpr{{\mathcal L}^{-1}_P \phi_P}{\phi_P}<0$. This quantity can be computed fairly precisely, in terms of elliptic functions, but we will not do so here. Instead, we remark that Theorem \ref{cnoidal} ties the spectral/orbital stability of the waves $\phi_P$ of the periodic KdV model to the same quantity. That is, the spectral stability of $\phi_P$ is equivalent to $\dpr{{\mathcal L}^{-1}_P \phi_P}{\phi_P}<0$. Since it is well-known that the waves $\phi_P$ are stable with respect to co-periodic perturbations\footnote{In fact, much more is known, namely the cnoidal waves are stable with respect to harmonic perturbations, that is, perturbations with periods $2m P, m=1,2, \ldots$, \cite{BD}, \cite{DK}.}, see for example \cite{BD}, \cite{DK}, it follows that $\dpr{{\mathcal L}^{-1}_P \phi_P}{\phi_P}<0$. By the invertibility of $\partial_x$ and the non-degeneracy of ${\mathcal L}_{P,\epsilon}$, we also conclude orbital stability for $W_{P,\epsilon}$.
{ "attr-fineweb-edu": 1.425781, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUdFY5qdmDC8BJnpg-
\section{Preliminaries} We focus on a particular type of {\em random walk in a random environment} (RWRE), where the environment is inherited from the orientations of the lattice on which the walker evolves, thus providing two independent sources of randomness: the horizontal orientations of the lattice, and the randomness of the walk performed on it afterwards, once the realization of the orientations has been fixed. We first introduce a {\em horizontally oriented} square lattice $\mathbb{L}^\eps$. The orientations $\eps=(\eps_y)_{y \in \Z}$ are a family of i.i.d. Rademacher random variables taking values in the product probability space $(E,\mathcal{E},\rho)=\big(\{-1,+1\},\mathcal{P}(\{-1,+1\}), \frac{1}{2} \delta_{-1} + \frac{1}{2} \delta_{+1} \big)^{\otimes \Z}$. A given horizontal level $y$ is then oriented to the right when $\eps_y=+1$, to the left when $\eps_y=-1$, and this induces a horizontally oriented version of $\Z^2$ for every realization of the random field $\eps$: \bd [Horizontally Oriented Lattice $\mathbb{L}^\eps$] Let $\eps=(\eps_y)_{y \in \Z} \in \{\pm 1\}^\Z$. The {\em oriented lattice} $\mathbb{L}^\eps=(\mathbb{V},\mathbb{A}^\eps)$ is the (random) directed graph with (deterministic) vertex set $\mathbb{V}=\Z^2$ and (random) edge set $\mathbb{A}^\eps$ defined by the condition that for $u=(u_1,u_2), v=(v_1,v_2) \in \Z^2$, $$ (u,v) \in \mathbb{A}^\eps \;\Longleftrightarrow \; v_1=u_1 \; {\rm and} \; v_2=u_2 \pm 1, \; {\rm or} \; v_2=u_2 \; {\rm and} \; v_1=u_1+ \eps_{u_2}. $$ \ed One then performs a {\em simple random walk} (SRW) $M=(M_n)_{n \in \N}$ on $\L^\eps$. For a given $\eps$, it is a $\mathbb{Z}^2$-valued Markov chain defined on a probability space $\big(\Omega_0, \mathcal{B}_0,\mathbb{P}^{(\eps)}\big)$, starting at the origin $(0,0)$, whose ($\eps$-dependent) transition probabilities are defined for all $(u,v) \in \mathbb{V}\times \mathbb{V}$ by $$ \pee^{(\eps)}[M_{n+1}=v | M_n=u]=\frac{1}{3} \; \rm{if} \; (u,v) \in \mathbb{A}^\eps, \; 0 \; \; \rm{otherwise.} $$ An interesting feature is that this SRW has been proven to be {\em transient for almost every orientation} $\eps$ \cite{CP2}. This almost sure approach is referred to as the {\em quenched case}, and we focus here on a more collective {\em annealed} approach: we consider the law of the process under the joint measure $\pee:= \rho \otimes \pee^{(\eps)}$. Thus, we study the behavior of the SRW as a discrete-time process on $$ (\Omega,\mathcal{B},\pee):=\big(E \times \Omega_0, \mathcal{E} \otimes \mathcal{B}_0, \rho \otimes \pee^{(\eps)} \big), $$ with its {\em annealed law} $\pee$ formally defined as $\pee= \int_E \pee^{(\eps)} d \rho(\eps).$ We write $\E$ (or $\E^{(\eps)}$ or $\E_\rho$) for the expectation under $\pee$ (or $\pee^{(\eps)}$ or $\rho$). Due to the non-local character of the orientations, the main drawback of this annealed model is that the walk is {\em not Markovian anymore}. Nevertheless: \bp Under the {\em annealed} law $\pee$, the process $M$ is {\em reversible}.
\ep Indeed, consider a trajectory $\omega=(\omega_0, \dots,\omega_n)$ and change $\eps$ into $-\eps$: it has the law of the reversed trajectory $\omega^*=(\omega_n,\dots,\omega_0)$, and one concludes using the symmetry of the law $\rho$ of $\eps$.\\ Under this annealed law, a non-standard functional limit theorem has been proven in \cite{GPLN2}, while we shall base our study of the range of the random walk on the following estimate of the probability of return to the origin, due to Castell {\em et al.} \cite{CGPS}: \begin{theorem}[Local Limit Theorem \cite{CGPS}] There exists a constant $C >0$ such that \be \label{LLT} u_{\rm{cp}}(n):=\pee[M_n =(0,0)] = C \cdot n^{-5/4} + \circ(n^{-5/4}) \;\;\; {\rm as} \;\; n \to \infty. \ee \end{theorem} The main tool is to embed the two-dimensional random walk into a vertical SRW and a horizontal {\em random walk in random scenery} \cite{KS}. The fluctuations of the latter are of order $n^{3/4}$; combined with the vertical SRW, whose fluctuations are of order $n^{1/2}$, this explains why a normalization of order $n^{5/4}$ is required, see also \cite{CP2,GPLN1,GPLN2}. This strong estimate (\ref{LLT}) implies the convergence of the {\em annealed Green function} \be\label{UCP} U_{\rm{cp}}:=\sum_{n=0}^\infty \pee[M_n=(0,0)] < \infty \ee which in turn implies that of the {\em quenched Green function} for $\rho$-a.e. orientation $\eps$: \be \label{GreenQue} 0 < U_{\rm{cp}}^{(\eps)}:=\sum_{n=0}^\infty \pee^{(\eps)} [M_n=(0,0)] < \infty,\; {\rm with} \; U_{\rm{cp}}=\E_\rho \big[U_{\rm{cp}}^{(\eps)}\big]>0. \ee This also implies\footnote{The transience under the quenched law had been proven before, using slightly weaker estimates but similar Fourier-analytic techniques, see \cite{CP2,GPLN1}.}, by Borel-Cantelli, the transience of the SRW on $\L^\eps$ for $\rho$-a.e. orientation $\eps$. Thus, the usual dichotomy on $\Z^d$ (P\'olya, 1923) between low dimensions (recurrence for $d=1,2$) and higher dimensions (transience for $d \geq 3$) is broken by the extra randomness of the orientations\footnote{It is also proved in \cite{CP2} that deterministic alternating horizontal orientations do not break this recurrence.}. In order to describe more precisely the characteristics of this two-dimensional transient random walk, we focus in this paper on the asymptotic behavior of its {\em range} $R_n$, defined to be the number of distinct sites visited by the walker during the first $n$ steps: $$ R_n={\rm Card} \big\{ M_0, M_1, \dots, M_{n-1} \big\}. $$ It was first studied for the SRW on $\Z^d$ by Dvoretzky and Erd\H{o}s (\cite{DE}, 1951), who provided estimates of its expectation together with (weak and strong) laws of large numbers in various forms for dimensions $d=2,3,4,\dots$\footnote{Later on, Jain {\em et al}. (\cite{JP2,JP5}, 1970s) established a {\em Central Limit Theorem} (CLT), see Section 5.}. \section{Results} \begin{theorem}\label{LinGrowth} The expectation of the range grows linearly: \beq \label{Qexp} {\rm For} \; \rho{\rm{-a.e.}} \; \eps, \; \E^{(\eps)}[R_n] \; = \; n \cdot\gamma_{\rm{cp}}^{(\eps)} + \circ\big(n\big) \; \; {\rm with} \; \; \gamma_{\rm{cp}}^{(\eps)}=(U_{\rm{cp}}^{(\eps)})^{-1} \in \; ]0,1]\\ \label{Annexp} \E[R_n] \; = \; n \cdot\gamma_{\rm{cp}} + \circ\big(n\big) \; \; {\rm with} \; \; \gamma_{\rm{cp}} =\E_\rho \Big[\frac{1}{U_{\rm{cp}}^{(\eps)}} \Big] \in \; ]0,1].
\eeq \end{theorem} The rates of growth $\gamma_{\rm{cp}}$ and $\gamma_{\rm{cp}}^{(\eps)}$ are well defined as {\em probabilities of escape}\footnote{They are related to the notion of capacity of a set reduced to a single point, see \cite{spi3}. The notation $\gamma_{\rm cp}$ refers to Campanino and P\'etritis, who first introduced this peculiar random walk in \cite{CP2}.}, introduced in the next section. We emphasize that $\gamma_{\rm cp}$ is {\em not} given by the inverse of the annealed Green function $U_{\rm cp}$, which coincides with the expectation of the quenched Green function $U_{\rm{cp}}^{(\eps)}$. It rather coincides with {\em the expectation of the inverse of the quenched Green function}, and when the orientations $\eps$ are truly random, these two quantities are not necessarily equal\footnote{This phenomenon occurs rather often in disordered systems or for random walks in random environments.}.\\ One thus gets a linear growth of the expected range, similar to the three-dimensional behavior described in \cite{DE}, where a rate $\gamma_3 >0$ is defined similarly, but here on a two-dimensional lattice instead of a three-dimensional one. The walker thus visits a strictly positive fraction of the $n$ sites, in contrast to the standard planar SRW, which typically visits a fraction $\frac{\pi}{\log{n}}$ of the $n$ sites, a fraction that goes to zero as $n$ goes to infinity, see \cite{BCR,DE,JP1,LG}. This can be explained by the larger fluctuations, which make the walker escape from the ball of radius $\sqrt{n}$ and visit fewer already visited points on the way. In dimension two, the estimate (2.20) of \cite{DE} yields $\lim_n \frac{\E[R_n]}{n}=0$ but also the convergence in probability. Here, we also get: \begin{theorem}[Weak Law of Large Numbers (WLLN)]\label{WLLN} \be \label{ALLN} \frac{R_n}{n} \stackrel{\mathbb{P}}{\longrightarrow}_n \; \gamma_{\rm{cp}} =\E_\rho \Big[\frac{1}{U_{\rm{cp}}^{(\eps)}} \Big]> 0. \ee \end{theorem} \section{Linear growth of the expected range} To prove Theorem \ref{LinGrowth}, we follow the approach of the original study \cite{DE}, later generalized by Spitzer \cite{spi3}, and write the range as a sum of (dependent) random variables $R_n=\sum_{k=0}^{n-1} \mathbf{1}_{A_k}$ where $A_k$ is the event that the walker discovers a new site at the $k^{\rm{th}}$ step, i.e., $$ A_0=\Omega,\; A_k:=\{M_k \neq M_j, \; \forall j=0, \dots, k-1\}. $$ We also introduce the {\em probability of escape at time $k$}, $\gamma_{\rm{cp}}(k):=\pee(A_k)$. As in \cite{DE,spi3}, but in a different manner, we prove that it in fact coincides with the probability that the walk does not return to the origin during the first $k$ steps. \bl \label{gammacpn0} Denote, for $k \geq 1, \; B_k:=\{M_l \neq (0,0), \; \forall l=1, \dots, k \}$. Then $\gamma_{\rm{cp}}(k)=\pee \big(B_k \big).$ \el \bpr Contrary to the SRW on $\Z^d$, we cannot write $M_n$ as a sum of i.i.d. random variables; the result can nevertheless be deduced from the reversibility of the walk.
Write $$\pee(A_k)= \sum_{x \in \Z^2} \pee(A_k \cap \{M_k=x\})= \sum_{x \in \Z^2}\E_\rho\big[\pee^{(\eps)}(A_k \cap \{M_k=x\}) \big]$$ and use that, for a fixed $\eps$, to any trajectory in $A_k$ starting from the origin corresponds a unique reversed trajectory in $B_k$, of equal length and equal weight, that ends at the origin at time $k$: \begin{eqnarray*} \pee(A_k \cap \{M_k=x\})&=& \pee \big[ \cap_{j=0}^{k-1} \{M_k \neq M_j\} \cap \{M_k=x\} \big]\\ &=& \sum_{m_l \neq m_k \in \Z^2, l <k} \E_\rho\big[ \pee^{(\eps)}\big[(M_0, \dots, M_l, \dots, M_k)=(0, \dots, m_l, \dots, x)\big]\big]\\ &=& \sum_{m_l \neq m_k \in \Z^2, l <k} \E_\rho \big[\pee^{(-\eps)} \big[(M_0, \dots, M_l, \dots M_k)=(x, \dots, m_l, \dots, 0) \big] \big]\\ &=& \sum_{m_l \neq m_k \in \Z^2, l <k} \E_\rho \big[\pee^{(-\eps)} \big[(M_0, \dots, M_l, \dots M_k)=(0, \dots, m_l, \dots, -x) \big] \big]\\ &=& \pee (B_k \cap \{M_k =-x\}) \end{eqnarray*} where in the last lines we use the symmetry and the translation invariance of $\rho$. Summing over all possible final points, one gets $\pee(A_k)=\sum_{x \in \Z^2} \pee (B_k \cap \{M_k =-x\}) = \pee(B_k)$. \epr Hence, the escape probability at time $k$ coincides with the probability of no return to the origin until time $k$. The events $B_k$ are, contrary to the $A_k$'s, decreasing ($B_{k+1} \subset B_k$), so that we get a decreasing sequence $1=\gamma_{\rm{cp}}(1) \geq \dots \geq \gamma_{\rm{cp}}(k) \geq \gamma_{\rm{cp}}(k+1) \geq \dots \geq 0$. Together with the transience of the walk, this proves that the so-called {\em probability of escape} $\gamma_{\rm{cp}}$ exists and is strictly positive: $0 < \gamma_{\rm{cp}}:= \lim_k \gamma_{\rm{cp}}(k) \leq \gamma_{\rm{cp}}(k)$ for all $k \geq 0$. We now use the LLT (\ref{LLT}) to estimate the growth of the average range, \be \label{range} \E \big[R_n\big]=\sum_{k=0}^{n-1} \pee[A_k]=\sum_{k=0}^{n-1} \gamma_{\rm{cp}}(k). \ee As in \cite{DE}, we partition the paths according to the last return to the origin occurring (strictly) before some given time $n$. The origin can only be reached at even times, so we consider $m=(n-1)/2$ for $n$ odd (and $m=n/2 -1$ for $n$ even) to write, for a given orientation $\eps$, \be \label{pathsone3} \sum_{k=0}^{m} \pee^{(\eps)} \big[ M_{2k}=(0,0), M_j \neq (0,0), \; \forall j,\; 2k < j \leq n-1 \big] = 1 \ee where, by the Markov property of the quenched measure, the summands of (\ref{pathsone3}) are $$ \pee^{(\eps)} \big[ M_{2k}=(0,0)\big]\cdot \pee^{(\eps)} \big[ M_j\neq(0,0), \; \forall j=2k+1, \dots, n-1 \; \mid M_{2k}=(0,0) \big]. $$ Introduce now the following characteristics of the quenched law, for a given orientation $\eps$: $$ u_{\rm{cp}}^{(\eps)}(k) := \pee^{(\eps)}[M_k=(0,0)] \; {\rm and} \; \gamma_{\rm{cp}}^{(\eps)}(k) := \pee^{(\eps)}[B_k]=\pee^{(\eps)}[M_j\neq (0,0), \; \forall j, \; 1 \leq j \leq k]. $$ For $\rho$-a.e. $\eps$, the quenched escape probability $\gamma_{\rm{cp}}^{(\eps)}:=\lim_k \gamma_{\rm{cp}}^{(\eps)}(k) >0$ exists and, by symmetry, the probability of discovering a new point at time $k$ is also $\pee^{(\eps)}[A_k]=\gamma_{\rm{cp}}^{(-\eps)}(k)=\gamma_{\rm{cp}}^{(\eps)}(k)$.\\ The techniques developed in \cite{DE} rely on the LLT, here valid in the annealed set-up, yielding the existence of a strictly positive and finite {\em annealed Green function} (\ref{UCP}) and, for $\rho$-a.e. $\eps$, of a {\em quenched Green function} (\ref{GreenQue}), in such a way that $U_{\rm{cp}}=\E_\rho \big[U_{\rm{cp}}^{(\eps)}\big]$.
The renewal structure inherited from the Markov property is enough to get $$ \pee^{(\eps)} \big[ M_j\neq(0,0), \; \forall j=2k+1, \dots, n-1 \; \mid M_{2k}=(0,0) \big]= \gamma^{(\eps)}_{\rm{cp}}(n-2k) $$ so that (\ref{pathsone3}) becomes here, for $\rho$-almost every orientation $\eps$ and for all $n \in \N$, \be\label{pathsone4} \sum_{k=0}^m u^{(\eps)}_{\rm{cp}}(2k) . \gamma^{(\eps)}_{\rm{cp}}(n-2k)=1 \ee with $m=(n-1)/2$ for $n$ odd and $m=n/2-1$ for $n$ even. This implies the following: \begin{lemma}\label{LEMAPROUVER} \begin{enumerate} \item ${\rm For} \; \rho{\rm{-a.e.}} \; \eps,\; \gamma_{\rm{cp}}^{(\eps)}.{U_{\rm{cp}}^{(\eps)}}=1 \; {\rm and} \; \gamma_{\rm{cp}}= \E_\rho\Big[ \frac{1}{U_{\rm{cp}}^{(\eps)}}\Big] >0.$ \item For all $n \in \N$, there exists $B(n)=\circ(1)$ such that \be\label{growth3} 0 < \gamma_{\rm{cp}} \leq \gamma_{\rm{cp}}(n) \leq \gamma_{\rm{cp}} + B(n). \ee \end{enumerate} \end{lemma} \bpr Let $\eps$ be such that (\ref{GreenQue}) holds, fix $1<l<m$ and split the lhs of (\ref{pathsone4}) to write it as $$\sum_{k=0}^l u^{(\eps)}_{\rm{cp}}(2k). \gamma^{(\eps)}_{\rm{cp}}(n-2k) + \sum_{k=l+1}^m u^{(\eps)}_{\rm{cp}}(2k) .\gamma^{(\eps)}_{\rm{cp}}(n-2k)=1.$$ Use the monotonicity of $\gamma_{\rm{cp}}^{(\eps)}(k)$ to bound the first term of the lhs from above, $$ \sum_{k=0}^l u^{(\eps)}_{\rm{cp}}(2k) . \gamma^{(\eps)}_{\rm{cp}}(n-2k) \leq \gamma^{(\eps)}_{\rm{cp}}(n-2l)\cdot \sum_{k=0}^l u^{(\eps)}_{\rm{cp}}(2k), $$ and the fact that the escape probabilities are at most one for the second term, $$ \sum_{k=l+1}^m u^{(\eps)}_{\rm{cp}}(2k) .\gamma^{(\eps)}_{\rm{cp}}(n-2k) \leq \sum_{k=l+1}^m u^{(\eps)}_{\rm{cp}}(2k), $$ to eventually get the lower bound $\gamma^{(\eps)}_{\rm{cp}}(n-2l) . \sum_{k=0}^l u^{(\eps)}_{\rm{cp}}(2k) \geq 1 - \sum_{k=l+1}^m u^{(\eps)}_{\rm{cp}}(2k).$ Now let $l\longrightarrow \infty$ in such a way that $n-2l \longrightarrow \infty$ as $n$ goes to infinity, to get that for $\rho{\rm{-a.e.}} \; \eps$ $$ \gamma^{(\eps)}_{\rm{cp}}. U_{\rm{cp}}^{(\eps)} \geq 1 $$ or, the quenched Green function being strictly positive, $\gamma^{(\eps)}_{\rm{cp}} \geq \frac{1}{U_{\rm{cp}}^{(\eps)}}, \; \rho$-a.s. By monotonicity one gets in particular, for all $n \in \N$ and for $\rho$-a.e. $\eps$, \be \label{controlescquen} \gamma^{(\eps)}_{\rm{cp}} (n) \geq \frac{1}{U_{\rm{cp}}^{(\eps)}}. \ee To get the reverse bound, we proceed as in \cite{DE} with a weaker result\footnote{Because we do not know whether a quenched local limit theorem is valid or not.} and subtract $\frac{1}{U_{\rm{cp}}^{(\eps)}} \sum_{k=0}^{m} u^{(\eps)}_{\rm{cp}}(2k)$ from both sides of (\ref{pathsone4}) to get first that for $\rho$-a.e. orientation $\eps$, $$ u_{\rm{cp}}^{(\eps)}(0) \cdot \Big(\gamma^{(\eps)}_{\rm{cp}} (n) - \frac{1}{U_{\rm{cp}}^{(\eps)}}\Big) + \sum_{k=1}^{m} u^{(\eps)}_{\rm{cp}}(2k) . \Big(\gamma^{(\eps)}_{\rm{cp}} (n-2k) - \frac{1}{U_{\rm{cp}}^{(\eps)}}\Big) = 1-\frac{1}{U_{\rm{cp}}^{(\eps)}} \sum_{k=0}^m u^{(\eps)}_{\rm{cp}}(2k) $$ $$ {\rm so \; that} \; \;\;\;\; \;\;\; u_{\rm{cp}}^{(\eps)}(0) \cdot \Big(\gamma^{(\eps)}_{\rm{cp}} (n) - \frac{1}{U_{\rm{cp}}^{(\eps)}}\Big) \; \leq \; 1-\frac{1}{U_{\rm{cp}}^{(\eps)}} \sum_{k=0}^m u^{(\eps)}_{\rm{cp}}(2k).$$ Using (\ref{controlescquen}) and $u_{\rm{cp}}^{(\eps)}(0)=1$, let $n$ (and $m$) go to infinity to get, for $\rho$-a.e.
$\eps$, $$ \gamma^{(\eps)}_{\rm{cp}} \leq \frac{1}{U_{\rm{cp}}^{(\eps)}} \; \; \; {\rm and \; thus} \; \; \gamma^{(\eps)}_{{\rm cp}} = \frac{1}{U_{{\rm cp}}^{(\eps)}}, \; {\rm and} \; \gamma_{{\rm cp}} = \E_\rho \Big[ \frac{1}{U_{{\rm cp}}^{(\eps)}} \Big]. $$ Finally, we also get that $\rho$-a.s., for all $n \in \N$, $$ 0 < \gamma^{(\eps)}_{\rm{cp}} \leq \gamma^{(\eps)}_{\rm{cp}} (n) \leq \gamma^{(\eps)}_{\rm{cp}} + B^{(\eps)}(n) $$ where $B^{(\eps)}(n)= 1- \frac{1}{U_{\rm{cp}}^{(\eps)}} \sum_{k=0}^m u^{(\eps)}_{\rm{cp}}(2k) = \frac{U_{\rm{cp}}^{(\eps)} - \sum_{k=0}^m u^{(\eps)}_{\rm{cp}}(2k)}{U_{\rm{cp}}^{(\eps)}}$ goes $\rho$-a.s. to $0$. Taking expectations w.r.t. $\rho$, this yields the annealed result (\ref{growth3}) where, by dominated convergence, $$ B(n)=\E_\rho \Big[\frac{U_{\rm{cp}}^{(\eps)} - \sum_{k=0}^{m} u^{(\eps)}_{\rm{cp}}(2k)}{U_{\rm{cp}}^{(\eps)}} \Big]=\E_\rho\Big[\frac{1}{U_{\rm{cp}}^{(\eps)}}. \sum_{k=m+1}^\infty u_{\rm{cp}}^{(\eps)}(2k)\Big] \; \longrightarrow_n \; 0. $$ \epr This provides an estimate of the expected range: using (\ref{range}), we get $$n \cdot \gamma_{\rm{cp}} \leq \E[R_n] \leq n \cdot \gamma_{\rm{cp}} + G(n)$$ where, by Ces\`aro's theorem, $$ G(n)=\sum_{k=0}^{n-1} B(k)= \sum_{k=0}^{n-1} \E_\rho\Big[\frac{1}{U_{\rm{cp}}^{(\eps)}}. \sum_{l=m(k)+1}^\infty u_{\rm{cp}}^{(\eps)}(2l)\Big]=\circ \big(n\big). $$ One can proceed similarly in the quenched case and eventually obtains Theorem \ref{LinGrowth}. \section{Weak law of large numbers} Theorem \ref{LinGrowth} thus provides a linear growth of the expectation of the range, $$ \frac{\E[R_n]}{n} \; \longrightarrow_n \; \gamma_{\rm{cp}} = \E_\rho \Big[\frac{1}{U_{\rm{cp}}^{(\eps)}} \Big] > 0, $$ similar to the three-dimensional behavior described in \cite{DE}, where the limit $\gamma_3 >0$ is defined similarly. This walker goes further than the usual planar one, visiting many more sites but each of them less often. For the standard SRW on the standard (unoriented) version of $\Z^2$, the estimate (2.20) of \cite{DE}, $$ \E[R_n]= n \cdot \frac{\pi}{\log{n}} + \mathcal{O} \Big(\frac{n \log{\log{n}}}{\log^2{n}} \Big), $$ yields $\lim_n \frac{\E[R_n]}{n} = 0$, while Spitzer \cite{spi3} also proved that $\frac{R_n}{n} \; \stackrel{\mathbb{P}}{\longrightarrow_n} \; 0.$ To investigate this weak LLN\footnote{Established for all $d \geq 2$ in \cite{DE}, who also derive strong LLNs.}, we need to estimate the variance of $R_n$, defined to be \be \label{annvar1} V_{{\rm cp}}(n):= \sigma^2(R_n) = \E\Big[\big(R_n-\E[R_n]\big)^2 \Big] = \E[R_n^2] - \big(\E[R_n]\big)^2 \ee which is also the $\rho$-expectation of the quenched variance, defined for a given orientation $\eps$ by \be \label{annvar2} V_{{\rm cp}}^{(\eps)}(n):=\E^{(\eps)}\Big[\big(R_n-\E[R_n]\big)^2 \Big] =\E^{(\eps)}[R_n^2] - \big(\E^{(\eps)}[R_n]\big)^2. \ee Introduce for all $j<k$ the events $A_{j,k}$ defined as $$ A_{0,k}=A_k,\; A_{j,k}= \big\{M_k \neq M_l, \; \forall l=j,\dots, k-1\big\}. $$ Now rewrite (\ref{annvar1}) and (\ref{annvar2}) as follows: \begin{eqnarray*} V_{{\rm cp}}(n)&=&\E\big[R_n^2\big] - \Big(\E\big[R_n\big]\Big)^2 = \E\Big[\big(\sum_{j=0}^{n-1} \mathbf{1}_{A_j} \big)^2\Big] - \Big( \E \big[\sum_{j=0}^{n-1}\mathbf{1}_{A_j} \big] \Big)^2\\ &=& \sum_{j,k=0}^{n-1} \Big(\pee \big[ A_j \cap A_k \big]- \pee\big[A_j\big] . \pee \big[A_{k} \big] \Big)\\ V_{{\rm cp}}^{(\eps)}(n) &=& \sum_{j,k=0}^{n-1} \Big(\pee^{(\eps)} \big[ A_j \cap A_k \big]- \pee^{(\eps)}\big[A_j\big] . \pee^{(\eps)} \big[A_{k} \big] \Big).
\end{eqnarray*} Carefully following again the approach of \cite{DE} or \cite{spi3}, we now establish the following bound, which is not optimal\footnote{Investigations around a quenched LLT should lead to $V_{{\rm cp}}(n)=\mathcal{O} \big( n^{3/2} \big)$, see Section 5.} but sufficient to derive a weak law of large numbers: \bp The variance of the range of the SRW on the oriented lattices satisfies \be\label{varn2} V_{{\rm cp}}(n)=\circ \big( n^{2} \big) . \ee \ep \bpr The main ingredient is a sub-additivity property of the summands of the variance, which we cannot obtain using the standard methods of \cite{DE,spi3}. Hence, we first work with the quenched law: \begin{lemma} \label{subadd} For all $0\leq j <k$ and for all $\eps$, \be \label{insubadd} \pee^{(\eps)}\big[A_j \cap A_k \big] \leq \pee^{(\eps)}\big[A_j\big] . \pee^{(\eps)} \big[ A_{j,k} \big]. \ee \end{lemma} \bpr Use that the quenched law $\pee^{(\eps)}$ is Markovian for any orientation $\eps$ to get, for $0 \leq j<k$, \begin{eqnarray*} \pee^{(\eps)} [A_j \cap A_k] &=& \pee^{(\eps)} \big[\{M_j \neq M_i, \forall i<j \} \cap \{M_k \neq M_l, \forall l < k \} \big]\\ &\leq& \pee^{(\eps)} \big[\{M_j \neq M_i, \forall i<j \} \cap \{M_k \neq M_l, \forall j \leq l < k \} \big]\\ &=& \pee^{(\eps)} [A_j ] . \pee^{(\eps)} \big[ \{M_k \neq M_l, \forall j \leq l < k \} \big]= \pee^{(\eps)}\big[A_j\big] . \pee^{(\eps)} \big[ A_{j,k} \big]. \end{eqnarray*} \epr \begin{remark} Inequality (\ref{insubadd}) relies on the Markovian character of the quenched law, which does not hold in the annealed case. Indeed, taking the expectation under $\rho$ on both sides yields $$ \pee \big[A_j \cap A_k \big] \leq \E_\rho \Big[ \pee^{(\eps)} \big[A_j \big] . \pee^{(\eps)} \big[ A_{j,k} \big] \Big] $$ and it is an open question whether the product structure of $\rho$ allows one to get $$ \E_\rho \Big[ \pee^{(\eps)} \big[A_j \big] . \pee^{(\eps)} \big[ A_{j,k}\big] \Big] \leq \E_\rho \Big[ \pee^{(\eps)} \big[A_j \big]\Big] \cdot \E_\rho \Big[\pee^{(\eps)} \big[ A_{j,k}\big]\Big]. $$ One would then get, by translation invariance of $\rho$, the standard inequality of \cite{DE,spi3}, because \be \label{HomAnn} \E_\rho \Big[\pee^{(\eps)} \big[ A_{j,k}\big] \Big]= \E_\rho \Big[ \pee^{(\eps)} \big[ A_{k-j} \big]\Big]=\pee \big[A_{k-j} \big]. \ee \end{remark} Using now the estimate (\ref{insubadd}) and the expression (\ref{range}), we can bound (\ref{annvar2}): \begin{eqnarray*} V_{{\rm cp}}^{(\eps)}(n)&=&2 \sum_{j=0}^{n-1}\sum_{k=j+1}^{n-1} \Big(\pee^{(\eps)} \big[ A_j \cap A_k \big]- \pee^{(\eps)}\big[A_j\big] . \pee^{(\eps)} \big[ A_{k} \big]\Big) + \sum_{j=0}^{n-1} \big(\pee^{(\eps)} \big[ A_j \big]-\pee^{(\eps)} \big[ A_j \big]^2 \big)\\ &\leq & 2 \sum_{j=0}^{n-1} \pee^{(\eps)} \big[ A_j \big] \cdot \sum_{k=j+1}^{n-1} \Big(\pee^{(\eps)} \big[ A_{j,k} \big] - \pee^{(\eps)} \big[ A_k \big] \Big) + \sum_{j=0}^{n-1} \pee^{(\eps)} \big[ A_j \big] \end{eqnarray*} so that $$ \frac{1}{n^2} V_{{\rm cp}}^{(\eps)}(n) \leq \frac{2}{n} \sum_{j=0}^{n-1} \Big( \pee^{(\eps)} \big[ A_j \big] \cdot \sum_{k=j+1}^{n-1} \frac{1}{n}\Big(\pee^{(\eps)} \big[ A_{j,k} \big] - \pee^{(\eps)} \big[ A_k \big] \Big) \Big) + \frac{\E^{(\eps)} \big[R_n \big]}{n^2} = G_n(\eps) + \frac{\E^{(\eps)} \big[R_n \big]}{n^2}. $$ The last term of the rhs goes $\rho$-a.s.
to zero by (\ref{Qexp}), while we write $$ G_n(\eps)=2 \gamma_{\rm cp} \cdot \E_\rho \Big[\frac{1}{n} \sum_{j=0}^{n-1} \sum_{k=j+1}^{n-1} \frac{1}{n} \Big(\pee^{(\eps)} \big[ A_{j,k} \big] - \pee^{(\eps)} \big[ A_k \big] \Big) \Big] + 2 D_n(\eps)=2 (F_n(\eps) + D_n(\eps)) $$ in such a way that we control the annealed variance by $\frac{1}{n^2} V_{{\rm cp}}(n) \leq 2\E_\rho[F_n] + 2\E_\rho[D_n]$.\\ To deal with the second term, remark that, for given $j$ and $k$, $A_{j,k}= A_k \cup \tilde{A}_{j,k}$, where the event $\tilde{A}_{j,k}$ consists of the trajectories visiting at time $k$ a point that has not been visited since time $j$ but had been visited before. In particular, since $A_k=A_{0,k} \subset A_{j,k}$, $$ 0 \leq \frac{1}{n} \sum_{k=j+1}^{n-1} \Big(\pee^{(\eps)} \big[ A_{j,k} \big] - \pee^{(\eps)} \big[ A_k \big] \Big) = \frac{1}{n} \sum_{k=j+1}^{n-1} \pee^{(\eps)} \big[ \tilde{A}_{j,k} \big]\leq 1 $$ so that $$ 0 \leq \E_\rho[D_n] \leq \E_\rho \Big[ \frac{1}{n} \sum_{j=0}^{n-1} \Big( \pee^{(\eps)} \big[ A_j \big] - \gamma_{\rm{cp}} \Big)\Big] = \E_\rho \Big[ \Big(\frac{1}{n} \sum_{j=0}^{n-1} \pee^{(\eps)} \big[ A_j \big] \Big) - \gamma_{\rm{cp}} \Big] = \frac{\E[R_n]}{n} - \gamma_{\rm{cp}}, $$ which goes to zero by (\ref{Annexp}). To deal with $F_n$, we write $$ \E_\rho[F_n] \leq \gamma_{\rm{cp}} \cdot \Big[ \sum_{j=0}^{n-1} \frac{1}{n} \sum_{k=j+1}^{n-1} \Big(\pee \big[ A_{j,k} \big] - \pee \big[ A_k \big] \Big) \Big]\leq \sum_{j=0}^{n-1} \frac{1}{n} \gamma_{\rm{cp}} \cdot \max_{j=0,\dots, n-1} \sum_{k=j+1}^{n-1} \Big(\pee \big[ A_{k-j} \big] - \pee \big[ A_k \big]\Big) $$ where to get the last inequality we have used (\ref{HomAnn}) for the annealed measure\footnote{This step is not true in the quenched case, so we cannot get the same bound, at least in this way.}. Now, we can work exactly as in the standard case treated in \cite{DE,spi3}: the balance between the number of possible points to discover and the number of points already visited reaches its maximum for $j=\big[\frac{n}{2}\big]$, so that, using (\ref{Annexp}), we get (\ref{varn2}) because $$ 0 \leq \E_\rho[F_n] \leq \gamma_{\rm cp} \cdot \frac{1}{n} \cdot \Big (\E \Big[ R_{n-[n/2]}+ R_{[n/2]} - R_n \Big]\Big) \leq \gamma_{\rm cp} \cdot \Big(\frac{1}{2} \gamma_{\rm{cp}} + \frac{1}{2} \gamma_{\rm{cp}} - \gamma_{\rm{cp}} \Big) + \circ (1 )= \circ (1 ). $$ \epr Using Markov's inequality applied to $(R_n - n \gamma_{\rm{cp}})^2$, one gets Theorem \ref{WLLN}, because for all $\delta >0$ $$ \pee \Big[ \big|\frac{R_n}{n} - \gamma_{\rm{cp}} \big| > \delta \Big] \leq \frac{1}{n^2 \delta^2} \E\big[ (R_n - n \gamma_{\rm{cp}} )^2 \big] \leq \frac{1}{n^2 \delta^2} V_{{\rm cp}}(n) + \frac{1}{\delta^2} \Big(\gamma_{\rm{cp}} - \E \Big[\frac{R_n}{n}\Big] \Big)^2 $$ and the WLLN in the annealed set-up follows by (\ref{varn2}) and (\ref{growth3}). As a by-product, one also recovers the quenched WLLN for $\rho$-a.e. orientation. \section{Conclusions and perspectives} Further investigations, in the spirit of Jain {\em et al.} \cite{JP2,JP5}, would require a quenched local limit theorem or at least more accurate asymptotics of the variance of the range, using e.g. a less crude inequality than (\ref{insubadd}), as well as, in this transient case, the relationship between the range and the number of points that are never revisited.
We suspect that in fact the variance is of order $n^{3/2}=n \sqrt{n}$, and that this should lead to an unconventional CLT: $$ \frac{R_n -n \gamma_{\rm{cp}}}{\sqrt{n \sqrt{n}}} \; \stackrel{\mathcal{L}}{\Longrightarrow} \; \mathcal{N}(0,1) $$ as in the three-dimensional case (where the normalization is $\sqrt{n \ln{n}}$), while in the two-dimensional case the limiting law is given in terms of the so-called self-intersection local time \cite{LG}.\\ {\bf Acknowledgements:} I am grateful to Jean-Baptiste Bardet (Rouen), Frank den Hollander (Leiden) and Bruno Schapira (Orsay) for their interest and advice. \addcontentsline{toc}{section}{\bf References}
{ "attr-fineweb-edu": 1.941406, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUdFnxaKgTskq9XpFw
\subsection{Time Complexity} \label{sec:time_complexity} \renewcommand{\arraystretch}{1.25} \begin{table} {\footnotesize \begin{center} \begin{tabular}{|l|l|p{4.0cm}|} \hline Name & Type & Time Complexity \\ \hline 2PS-L & Stateful Out-of-Core & $\mathcal{O}(|E|)$ \\ \hline \hline HDRF & Stateful Streaming & $\mathcal{O}(|E|*k)$ \\ \hline ADWISE & Stateful Streaming & $\mathcal{O}(|E|*k)$ \\ \hline \hline DBH & Stateless Streaming & $\mathcal{O}(|E|)$ \\ \hline Grid & Stateless Streaming & $\mathcal{O}(|E|)$ \\ \hline \hline DNE & In-memory & $\mathcal{O}(\frac{d *|E| * (k+d)}{n * k})$ with $d = $ max. vertex degree, $n = $ num. of CPU cores \\ \hline METIS & In-memory & $\mathcal{O}((|V|+|E|)*\log_2(k))$ \\ \hline HEP & Hybrid & $\mathcal{O}(|E|*(\log{}|V| + k) + |V|)$ \\ \hline \end{tabular} \end{center} } \caption{Comparison of time complexity.} \label{tab:time_complexity} \vspace{-10pt} \end{table} We analyze each phase of 2PS-L separately. Phase 1, specified in Algorithm~\ref{alg:clustering}, performs a fixed number of passes through the edge set. In each pass, a constant number of operations is performed on each edge. Hence, the time complexity of the first phase is in $\mathcal{O}(|E|)$. Phase 2, specified in Algorithm~\ref{alg:partitioning}, consists of three steps. First, clusters are mapped to partitions in decreasing volume order. To sort clusters by volume is in $\mathcal{O}(|V| * \log |V|)$, as in the worst case, there are as many clusters as vertices (note that, in natural graphs, we can expect the number of clusters to be orders of magnitude smaller than the number of vertices). Each cluster is assigned to the currently least loaded partition, which can be performed in $\mathcal{O}(|V| * \log k)$ time, provided that we keep the $k$ partitions sorted by their accumulated volume while assigning clusters to them. Second, edges are pre-partitioned, such that edges whose adjacent vertices are both in clusters of the same partition are assigned to that partition. This is a constant-time operation per edge, resulting in $\mathcal{O}(|E|)$ time complexity. Third, the remaining edges are partitioned using stateful streaming, which is done in $\mathcal{O}(|E|)$ time, as for each edge, we need to compute the score against two partitions. In summary, the second phase of 2PS-L has a time complexity of $\mathcal{O}(|E|)$, as $|E| >> |V|$. The total time complexity of 2PS-L is, hence, in $\mathcal{O}(|E|)$, i.e., linear in the number of edges. In Table~\ref{tab:time_complexity}, we compare the time complexity of 2PS-L to known results from the literature\footnote{METIS figures: http://glaros.dtc.umn.edu/gkhome/node/419}. 2PS-L is the only stateful out-of-core edge partitioner that has linear time complexity. \renewcommand{\arraystretch}{1.25} \begin{table} {\footnotesize \begin{center} \begin{tabular}{|l|l|p{4.0cm}|} \hline Name & Type & Space Complexity \\ \hline 2PS-L & Stateful Out-of-Core & $\mathcal{O}(|V|*k)$ \\ \hline\hline HDRF & Stateful Streaming & $\mathcal{O}(|V|*k)$ \\ \hline ADWISE & Stateful Streaming & $\mathcal{O}(|V|*k + b)$ with $b= $ buffer size \\ \hline \hline DBH & Stateless Streaming & $\mathcal{O}(|V|)$ \\ \hline Grid & Stateless Streaming & $\mathcal{O}(1)$ \\ \hline \hline --- & In-memory & $\geq \mathcal{O}(|E|)$ \\ \hline \end{tabular} \end{center} } \caption{Comparison of space complexity.} \label{tab:space_complexity} \vspace{-10pt} \end{table} \subsection{Space Complexity} \label{sec:space complexity} We analyze the data structures used in 2PS-L. 
In Algorithm~\ref{alg:clustering}, we use arrays to store the vertex degrees, cluster volumes and the mapping of vertices to clusters. Each of these data structures has a space complexity of $\mathcal{O}(|V|)$. In Algorithm~\ref{alg:partitioning}, besides these arrays, we use additional arrays to map the clusters to partitions and to keep the volumes of clusters per partition. These arrays all have a space complexity of $\mathcal{O}(|V|)$. Finally, we use a vertex-to-partition replication matrix, which has a space complexity of $\mathcal{O}(|V| * k)$. Hence, the overall 2PS-L algorithm has a space complexity of $\mathcal{O}(|V| * k)$. In particular, the space complexity is independent of the number of edges in the graph. The preprocessing phase has no additional memory overhead in excess of the streaming partitioning phase. All data structures needed for clustering (i.e., vertex-to-cluster assignments, vertex degrees and cluster volumes) are directly used in the partitioning phase. In Table~\ref{tab:space_complexity}, we compare the space complexity of 2PS-L to known results from the literature. 2PS-L has the same space complexity as other stateful out-of-core (streaming) partitioners. In-memory partitioners, by definition, have a space complexity that is at least linear in the number of edges. \renewcommand{\arraystretch}{1.15} \begin{table} {\small \begin{center} \begin{tabular}{l|l|l|l|l} \hline Name & \textbf{$|V|$} & \textbf{$|E|$} & Size & Type \\ \hline com-orkut (OK) & 3.1 M & 117 M & 895 MiB & Social \\ it-2004 (IT) & 41 M & 1.2 B & 9 GiB & Web \\ twitter-2010 (TW) & 42 M & 1.5 B & 11 GiB & Social \\ com-friendster (FR) & 66 M & 1.8 B & 14 GiB & Social \\ uk-2007-05 (UK) & 106 M & 3.7 B & 28 GiB & Web \\ gsh-2015 (GSH) & 988 M & 34 B & 248 GiB & Web \\ wdc-2014 (WDC) & 1.7 B & 64 B & 478 GiB & Web \\ \hline \end{tabular} \end{center} } \caption{Real-world graph datasets. Size refers to the graph representation as binary edge list with 32-bit vertex IDs.} \label{tab:graphs} \vspace{-12pt} \end{table} \subsection{Phase 1: Clustering} \label{sec:phase1} \subsubsection{Intuition} We observe that in edge partitioning, a group of vertices should be replicated on the same partition if there are many edges between vertices of that group, i.e., the group is densely connected. This way, many edges can be assigned to a partition while only few vertices are added to the vertex cover set of that partition, leading to a low overall replication factor. Finding groups of vertices that are densely connected is a well-known problem called \emph{clustering} or \emph{community detection}~\cite{Newman8577, FORTUNATO201075}. In existing streaming partitioners, e.g., HDRF~\cite{Petroni:2015:HSP:2806416.2806424} or Greedy~\cite{powergraph}, it is unknown to the partitioning algorithm whether an incoming edge is an intra-cluster edge or not, i.e., whether it is incident to vertices of the same cluster. Instead, these algorithms only consider the vertex degrees, which can be misleading, as it is not always best to cut through the highest-degree vertices. Introducing an edge buffer, as in ADWISE~\cite{8416335}, allows for ``looking into the future'' in the stream to detect clusters within that buffer. However, as shown in our evaluations (cf. Section~\ref{sec:evaluations}), the buffer-based approach fails for very large graphs, as the buffer only covers a small fraction of the complete graph and, hence, cannot detect all clusters. 
In 2PS-L, we go a different way: We first analyze the \emph{entire} graph in a streaming pre-processing phase to find the vertex clusters, and then exploit the global knowledge in the partitioning phase. Figure~\ref{fig:clustering_idea} illustrates this idea. In the left-hand figure, we depict a graph structure that has two clusters, a green one and a blue one. Most of the edges are intra-cluster edges (solid lines); there are only two inter-cluster edges (dashed lines). If partitioning is performed unaware of the clustering structure of the graph, intra-cluster edges may be distributed onto different partitions, which, as a consequence, leads to low partitioning quality. On the other hand, if partitioning is performed aware of the clustering structure of the graph, intra-cluster edges are assigned to the same partition, which leads to high partitioning quality. \begin{figure} \centering \includegraphics[width=0.95\linewidth]{figures/clustering_idea.pdf} \vspace{-5pt} \caption{The awareness of graph clustering in edge partitioning leads to better partitioning quality.} \vspace{-5pt} \label{fig:clustering_idea} \end{figure} \subsubsection{Streaming Clustering Algorithm} In spite of similarities, clustering and edge partitioning have a different nature and, hence, are addressed with different algorithms~\cite{Newman8577}. In particular, \emph{clustering is a less constrained problem} than partitioning. First, the size of the different clusters does not have to be balanced, i.e., clusters are allowed to have different sizes (although they may have to adhere to a maximum size). Contrary to this, in edge partitioning, every partition has to cover an equal (up to the imbalance factor $\alpha$) number of edges. Second, the number of clusters is not necessarily predetermined, but may originate from the structure of the graph. Contrary to this, in edge partitioning, the number of partitions is determined by the user. The less constrained nature of clustering allows for devising a more efficient and flexible streaming algorithm. Another advantage of clustering over edge partitioning is the possibility to change the assignment of a vertex to a cluster multiple times in one single pass through the edge stream. A vertex of degree $d$ is adjacent to $d$ edges and, therefore, is visited $d$ times in one single pass through the edge stream. Every time a vertex is visited, its assignment to a cluster can be refined, taking into account new information that has been gathered since the last time the vertex was visited. Contrary to this, in edge partitioning, in a single pass through the edge list, every edge is only visited once and is immediately assigned to a partition. It is not trivially possible to revoke an edge-to-partition assignment at a later point in time when more information about the graph structure has been accumulated. Re-assigning edges to different partitions would require tracking the mapping of \emph{edges} to partitions. Such a mapping, however, cannot be kept in memory for graphs with a large edge set.
\begin{algorithm}[t] \caption{2PS-L Phase 1: Clustering} \begin{algorithmic}[1] \footnotesize \State int[] \emph{d} \Comment{vertex degrees} \State int[] \emph{vol} \Comment{cluster volumes} \State int[] \emph{v2c} \Comment{map of vertex\_id to cluster\_id} \State int \emph{max\_vol} \Comment{maximum cluster volume} \State int \emph{next\_id} $\gets$ 0 \Comment{id of next new cluster} \vspace{0.1cm} \Procedure{streamingClustering}{} \State \texttt{performStreamingPass}() \State \emph{Optional: Further streaming passes.} \EndProcedure \vspace{0.1cm} \Procedure{performStreamingPass}{} \For{\textbf{each} $e \in $ edge\_stream} \For {\textbf{each} $v \in e$} \If{\emph{v2c}[$v$] = NULL} \State \emph{v2c}[$v$] $\gets$ next\_id \State \emph{vol}[\emph{next\_id}] $\gets$ \emph{d}[$v$] \State \emph{next\_id} $\gets$ \emph{next\_id} + 1 \EndIf \EndFor \If{\emph{vol}[\emph{v2c}[$v$]] $\leq$ \emph{max\_vol} $\forall v \in e$} \State $v_{\mathit{s}} \gets$ $v_i \in e : $ \emph{vol}[\emph{v2c}[$v_i$]] - d[$v_i$] $\leq$ \emph{vol}[\emph{v2c}[$v_j$]] - d[$v_j$] \State $v_{\mathit{l}} \gets$ $v_j \in e : v_j \neq v_{\mathit{s}}$ \If{\emph{vol}[\emph{v2c}[$v_l$]] $+$ \emph{d}[$v_s$] $\leq$ \emph{max\_vol}} \State \emph{vol}[\emph{v2c}[$v_l$]] $\gets$ \emph{vol}[\emph{v2c}[$v_l$]] $+$ \emph{d}[$v_s$]] \State \emph{vol}[\emph{v2c}[$v_s$]] $\gets$ \emph{vol}[\emph{v2c}[$v_s$]] $-$ \emph{d}[$v_s$]] \State \emph{v2c}[$v_s$] $\gets$ \emph{v2c}[$v_l$] \EndIf \EndIf \EndFor \EndProcedure \end{algorithmic} \label{alg:clustering} \end{algorithm} Our streaming clustering algorithm is an extension of an algorithm by Hollocou et al.~\cite{hollocou2017streaming}. The intuition of Hollocou's algorithm is as follows. A given random edge from the input stream is more likely an intra-cluster edge than an inter-cluster edge---this follows directly from the understanding of a cluster as a \emph{densly connected} sub-part of the graph. Therefore, when meeting an edge $e = (u,v)$ where vertices $u$ and $v$ are currently assigned to different clusters, we draw either $u$ or $v$ into the cluster of its corresponding neighbor. We prioritize the cluster with the larger \emph{volume} (i.e., the sum of the degrees of its vertices), as a vertex is more likely to have more connections to the larger cluster than to the smaller cluster. Algorithm~\ref{alg:clustering} processes the stream edge by edge (line 10). If $u$ or $v$ have no cluster yet, we create a new cluster and assign the vertex to it (lines 11--15). Now, we compare the cluster volumes of the clusters of $u$ and $v$. The vertex that is currently assigned to the cluster with the lower volume migrates to the neighboring cluster that has the higher volume (lines 16--22). However, such migration is only allowed if the new volume of the larger cluster does not exceed a volume bound. Our extension introduces two novelties: \emph{bounded cluster volumes} and \emph{re-streaming}. (1) The original algorithm by Hollocou et al.~\cite{hollocou2017streaming} cannot guarantee that cluster volumes are bounded. This is problematic for our use case because if there are too many intra-cluster edges, we have to cut through the clusters in the subsequent partitioning phase of 2PS-L to keep the balancing constraint, which can lead to a loss of partitioning quality. Therefore, different from Hollocou et al., we compute the degree of each vertex upfront (if not already known) and use the actual vertex degree instead of the partial degree in order to compute cluster volumes. 
The degree of each vertex is computed in a pass through the edge set, keeping a counter for each vertex ID that is seen in an edge, which is a lightweight, linear-time operation. Furthermore, we enforce an explicit volume cap on the clusters. As we consider the actual degree of vertices instead of the partial degree, we can enforce such a volume cap effectively. (2) Hollocou et al.~\cite{hollocou2017streaming} do not consider applying re-streaming~\cite{Nishimura:2013:RGP:2487575.2487696} to their clustering algorithm. In re-streaming, we perform another pass through the edge list and apply exactly the same clustering algorithm, using the state from the previous pass. We evaluate the impact of the number of streaming passes on the clustering quality in our evaluations (Section~\ref{sec:restreaming}, Fig.~\ref{eval:restreaming_rf} and~\ref{eval:restreaming_runtime}). \subsection{Phase 2: Partitioning} The edge partitioning algorithm (Algorithm~\ref{alg:partitioning}) has three steps. First, clusters are mapped to partitions. Second, a subset of edges is pre-partitioned by exploiting the vertex clustering. Third, the remaining edges are partitioned by linear-time stateful streaming edge partitioning. \textbf{\emph{Step 1: Mapping Clusters to Partitions.}} Our objective in the first step is to map clusters to partitions, such that the total volume of clusters across partitions is balanced. We model this problem as an instance of the classical \texttt{Makespan Scheduling Problem on Identical Machines} (\texttt{MSP-IM}). The problem can be defined as follows~\cite{graham1969bounds}: \vspace{-3pt} \begin{quote}Given a set of $k$ machines $M_1, ..., M_k$ and a list of $n$ jobs $j_1, ..., j_n$ with corresponding run-times $a_1, ..., a_n$, assign each job to a machine such that the makespan (i.e., the time to complete all jobs) is minimized.\end{quote} \vspace{-4pt} We cast our cluster assignment problem as an instance of \texttt{MSP-IM} as follows. Partitions correspond to ``machines'', clusters to ``jobs'' and volumes of the clusters to ``run-times'' of the jobs. The optimization goal is to minimize the cumulative volume of the largest partition. \texttt{MSP-IM} is NP-hard~\cite{Ullman:1975:NSP:1739944.1740138}, so we solve it by approximation. The \emph{sorted list scheduling algorithm} by Graham~\cite{graham1969bounds} is a $\frac{4}{3}$-approximation of \texttt{MSP-IM}, i.e., its result is at most $\frac{4}{3}$ times as large as the true optimum. Applied to our cluster assignment problem, sorted list scheduling means that the clusters are sorted by decreasing volume (Algorithm~\ref{alg:partitioning}, line 12) and then assigned one by one to the currently least loaded partition (lines 13 to 15). \textbf{Step 2: Pre-Partitioning.} In the second step, we exploit the clustering of vertices to pre-partition a subset of edges. To do so, the pre-partitioning algorithm performs one pass through the complete edge stream (Algorithm~\ref{alg:partitioning}, line 17). For each edge $e=(e.$first$, e.$second$)$, it checks whether the adjacent vertices $e$.first and $e$.second are either in the same cluster or in clusters that are assigned to the same partition $p$ (cf. Step 1 discussed above). In this case, $e$ is applicable to pre-partitioning and is assigned to $p$ (lines 18 to 21). If $p$ is already occupied to its maximum capacity $\alpha * \frac{|E|}{k}$, $e$ is assigned to a different partition instead, using linear-time stateful streaming edge partitioning (as in Step 3).
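To make Steps 1 and 2 concrete, the following minimal Python sketch shows one possible implementation of the sorted list scheduling and of the pre-partitioning pass. The function and variable names are ours and are not taken from the 2PS-L implementation; the sketch assumes that Phase 1 has produced the vertex-to-cluster map and the cluster volumes, and it simply defers edges that hit a full partition to the scoring-based pass of Step 3.
\begin{verbatim}
# Illustrative sketch of Steps 1 and 2 (assumed interfaces, not the authors' code).
# v2c: vertex id -> cluster id, vol: cluster id -> volume, both produced in Phase 1.
import heapq

def map_clusters_to_partitions(vol, k):
    """Sorted list scheduling: clusters (jobs) are assigned to partitions (machines)
    in decreasing volume order, each to the currently least loaded partition."""
    heap = [(0, p) for p in range(k)]            # (accumulated volume, partition id)
    heapq.heapify(heap)
    c2p = {}
    for c in sorted(vol, key=vol.get, reverse=True):
        load, p = heapq.heappop(heap)            # least loaded partition so far
        c2p[c] = p
        heapq.heappush(heap, (load + vol[c], p))
    return c2p

def prepartition(edge_stream, v2c, c2p, k, num_edges, alpha=1.05):
    """Assign edge (u, v) to partition p if both endpoints lie in clusters mapped
    to p and p still has capacity; all other edges are deferred to the scoring pass."""
    cap = alpha * num_edges / k
    load = [0] * k
    assigned, remaining = {}, []
    for u, v in edge_stream:
        p = c2p[v2c[u]]
        if c2p[v2c[v]] == p and load[p] < cap:   # same cluster implies same partition
            assigned[(u, v)] = p
            load[p] += 1
        else:
            remaining.append((u, v))
    return assigned, remaining, load
\end{verbatim}
Using a heap keeps the assignment of each cluster to the least loaded partition at $\mathcal{O}(\log k)$ per cluster, in line with the $\mathcal{O}(|V| * \log k)$ term in the time complexity analysis above.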
\begin{algorithm}[t] \caption{2PS-L Phase 2: Streaming Partitioning} \begin{algorithmic}[1] \footnotesize \State int[] \emph{d} \Comment{vertex degrees (from Phase 1)} \State int[] \emph{vol} \Comment{cluster volumes (from Phase 1)} \State int[] \emph{v2c} \Comment{map of vertex\_id to cluster\_id (from Phase 1)} \State int[] \emph{c2p} \Comment{map of cluster\_id to partition\_id} \State int[] \emph{vol\_p} \Comment{sum of volumes of clusters per partition} \State int[][] \emph{v2p} \Comment{vertex\_id to partition\_id replication bit matrix} \vspace{0.01cm} \Procedure{streamingPartitioning}{} \State \texttt{mapClustersToPartitions}() \State \texttt{prepartitionEdges}() \State \texttt{partitionRemainingEdges}() \EndProcedure \vspace{0.01cm} \Procedure{mapClustersToPartitions}{} \State sort clusters by volume (descending) \For{\textbf{each} cluster $c$} (from largest to smallest) \State \emph{target\_p} $\gets$ $\argmin_{p_i \in P}$\emph{vol\_p}[$p_i$] \State \emph{c2p}[$c$] $\gets$ \emph{target\_p} \EndFor \EndProcedure \vspace{0.01cm} \Procedure{prepartitionEdges}{} \For{\textbf{each} $e \in $ edge\_stream} \State \emph{c\_1} $\gets$ \emph{v2c}[$e$.first] \State \emph{c\_2} $\gets$ \emph{v2c}[$e$.sec] \If{\emph{c\_1} = \emph{c\_2} \textbf{OR} \emph{c2p}[\emph{c\_1}] = \emph{c2p}[\emph{c\_2}]} \State \emph{target\_p} $\gets$ \emph{c2p}[\emph{c\_1}] \If{|\emph{target\_p}| $> \alpha * \frac{|E|}{k} $} \State \emph{target\_p} is determined via scoring \EndIf \State \emph{v2p}[$e$.first][\emph{target\_p}] $\gets$ true \State \emph{v2p}[$e$.sec][\emph{target\_p}] $\gets$ true \State \texttt{output}: $e$ assigned to \emph{target\_p} \EndIf \EndFor \EndProcedure \vspace{0.01cm} \Procedure{partitionRemainingEdges}{} \For{\textbf{each} $e \in $ edge\_stream} \State \emph{c\_1} $\gets$ \emph{v2c}[$e$.first] \State \emph{c\_2} $\gets$ \emph{v2c}[$e$.sec] \If{\emph{c\_1} = \emph{c\_2} \textbf{OR} \emph{c2p}[\emph{c\_1}] = \emph{c2p}[\emph{c\_2}]} \State \textbf{continue} \Comment{skip pre-partitioned edge} \EndIf \State \emph{bestScore} $\gets 0$ \State \emph{target\_p} $\gets$ NULL \For{\textbf{each} $p_i \in \{ $\emph{c2p}[\emph{v2c}[$e$.first]], \emph{c2p}[\emph{v2c}[$e$.second]]$\}$} \State \emph{score} $\gets$ $s(e.\mathit{first}, e.\mathit{second}, p_i)$ \Comment{scoring function} \If{\emph{score} $>$ \emph{bestScore}} \State \emph{bestScore} $\gets$ \emph{score} \State \emph{target\_p} $\gets p_i$ \EndIf \EndFor \If{|\emph{target\_p}| $> \alpha * \frac{|E|}{k} $} \Comment{degree-based hashing} \State \emph{target\_p} $\gets$ \texttt{hash}($\argmax_{v \in \{e.\mathit{first}, e.\mathit{second}\}}$\emph{d}[$v$]) \EndIf \State \emph{v2p}[$e$.first][\emph{target\_p}] $\gets$ true \State \emph{v2p}[$e$.sec][\emph{target\_p}] $\gets$ true \State \texttt{output}: $e$ is assigned to \emph{target\_p} \EndFor \EndProcedure \end{algorithmic} \label{alg:partitioning} \end{algorithm} \textbf{Step 3: Streaming Partitioning.} Edges between vertices of different clusters that are mapped to different partitions are remaining. Partitioning the remaining edges is performed with linear-time scoring-based streaming edge partitioning. We enforce a hard balancing cap, i.e., we guarantee that no partition gets more than $\alpha * \frac{|E|}{k}$ edges assigned. Existing stateful streaming partitioning algorithms are not aware of the vertex clustering. This induces three problems. First and foremost, the streaming algorithm has no guidance which partitions could be most suitable to place an edge on. 
Therefore, a scoring function is computed for \emph{every} partition. Second, the streaming algorithm starts with an empty partitioning state. Therefore, early edges in the stream are partitioned randomly at low partitioning quality. Third, the global structure of the graph is disregarded, which can lead to low partitioning quality despite of expensive scoring. In our linear-time scoring-based partitioning algorithm, we tackle these shortcomings as follows. First, we constrain the scoring function to only take into account two different partitions, namely, the partitions associated to the clusters of the adjacent vertices (see Step 1). This is reasonable because it is highly likely that a vertex is already replicated on the partition that is associated to its cluster; it is much less likely that a vertex is replicated on any of the other $k-1$ partitions, so that we can forego checking \emph{every} partition's state. Second, we exploit the partitioning state from pre-partitioning (see Step 2). This way, we avoid the ``cold start'' or ``uninformed assignment'' problem of streaming edge partitioning, where early edges in the stream are assigned to partitions randomly as all partitioning state is empty~\cite{8416335}. Third, in the scoring function, we take into account the cluster volumes. If a cluster has a higher volume, it is more likely that further edges that have vertices incident to the cluster will be seen in the edge stream. Thus, we assign a higher score to placing an edge on the partition that is associated with the higher-volume cluster. \textbf{Scoring function:} We denote with $d_v$ the degree of a vertex $v$, with $c_v$ the cluster of a vertex $v$, and with $\mathit{vol(c_v)}$ the volume of the cluster of a vertex $v$. Then, the scoring function for an edge $(u,v)$ is defined as follows: \[ s(u,v,p) = g_u + g_v + sc_u + sc_v \] \[ g_{\{u, v\}} = \begin{cases} 1 + (1 - \frac{d_{\{u, v\}}}{d_u + d_v}) & \quad \text{if \{u,v\} is replicated on $p$} , \\ 0 & \quad \text{else.} \end{cases} \] \[ sc_{\{u, v\}} = \begin{cases} \frac{\mathit{vol}(c_{\{u, v\}})}{\mathit{vol}(c_u) + \mathit{vol}(c_v)} & \quad \text{if $c_{\{u, v\}}$ is assigned to $p$} , \\ 0 & \quad \text{else.} \end{cases} \] To perform streaming partitioning, 2PS-L makes a complete pass through the edge stream (Algorithm~\ref{alg:partitioning}, line 28). First, it determines whether an edge has already been pre-partitioned by checking the conditions for pre-partitioning (adjacent vertices are in the same cluster or in clusters that are mapped to the same partition). If the conditions for pre-partitioning are met, the edge is skipped (lines 29 to 33) as it has already been assigned. Else, scoring is performed on the two target partitions to determine the highest-scoring partition. If this partition has already reached its capacity bound, we hash the edge using the ID of the vertex that has the highest degree. If the hashed partition is fully occupied as well, we assign the edge to the currently least loaded partition as a last resort (not shown in the pseudo code). After Step 3 is finished, all edges have been assigned to partitions and none of the partitions have more than $\alpha * \frac{|E|}{k}$ edges. This concludes the 2PS-L algorithm. \subsection{Partitioning of Real-World Graphs} \label{sec:realworld} We perform our experiments for $k = \{4, 32, 128, 256\}$ partitions. We repeat each experiment 3 times and report the mean value along with error bars that show the standard deviation. 
The key performance metrics we report are replication factor, partitioning run-time, and memory overhead. We also track balancing. In most cases, the balancing constraint $\alpha = 1.05$ is met by all partitioners; if this is not the case, we report the measured $\alpha$ in the plot. The results comprise \emph{all} costs of 2PS-L, including any preprocessing. To allow for a separate analysis of the effects of I/O speeds on the performance of 2PS-L, in the following experiments, we perform several subsequent runs, so that the graph data is factually cached by the operating system in memory. We also perform evaluations with disabled caching in Section~\ref{sec:external} to evaluate the effect of I/O bottlenecks in memory-constrained scenarios. \emph{Main Observations.} In Figure~\ref{eval:perf}, we depict all performance measurements. Our main observations are as follows. (1) The run-time of 2PS-L is independent of the number of partitions. Therefore, 2PS-L is significantly faster than all other stateful partitioners (streaming as well as in-memory partitioners) at higher number of partitions ($k=128$ and $k=256$). For instance, at $k=256$ on the TW graph, 2PS-L is $5.8 \times$ faster than HEP-100, $13.4 \times $ faster than HEP-10, $25.7 \times $ faster than HEP-1, $630 \times $ faster than ADWISE, $12.3 \times $ faster than HDRF, $2500 \times $ faster than METIS, and $5.0 \times $ faster than DNE. Even at $k=4$ and $k=32$, 2PS-L is the fastest stateful partitioner in almost all cases. Only DBH, a stateless partitioner based on hashing, is faster than 2PS-L. Hence, 2PS-L is the first stateful partitioner with a run-time that is competitive to stateless partitioning. This way, 2PS-L can be used in scenarios where existing heavy-weight stateful partitioning would not pay off in an end-to-end comparison when considering the sum of partitioning and subsequent distributed graph processing run-time. \begin{figure*} \centering \captionsetup[subfloat]{captionskip=-2pt} \subfloat[OK: Rep. factor.]{\includegraphics[width=0.12\textwidth]{figures/eval/2ps-hdrf/ok_rf.pdf}} \subfloat[OK: Run-time.]{\label{b} \includegraphics[width=0.12\textwidth]{figures/eval/2ps-hdrf/ok_time.pdf}} \subfloat[IT: Rep. factor.]{\includegraphics[width=0.12\textwidth]{figures/eval/2ps-hdrf/it_rf.pdf}} \subfloat[IT: Run-time.]{\label{b} \includegraphics[width=0.12\textwidth]{figures/eval/2ps-hdrf/it_time.pdf}} \subfloat[TW: Rep. factor.]{\label{a} \includegraphics[width=0.12\textwidth]{figures/eval/2ps-hdrf/tw_rf.pdf}} \subfloat[TW: Run-time.]{\label{b} \includegraphics[width=0.12\textwidth]{figures/eval/2ps-hdrf/tw_time.pdf}} \subfloat[FR: Rep. factor.]{\label{a} \includegraphics[width=0.12\textwidth]{figures/eval/2ps-hdrf/fr_rf.pdf}} \subfloat[FR: Run-time.]{\label{b} \includegraphics[width=0.12\textwidth]{figures/eval/2ps-hdrf/fr_time.pdf}} \vspace{-2pt} \caption{Performance of 2PS-HDRF, normalized to the results of 2PS-L (blue line). } \label{eval:2ps-hdrf} \vspace{-12pt} \end{figure*} (2) 2PS-L yields a comparably good replication factor. In most cases, 2PS-L yields a lower replication factor than HDRF and ADWISE, which are stateful streaming partitioners that have much higher run-time (see discussion above). While in-memory partitioning (HEP, NE, DNE, METIS) still yields a better replication factor than 2PS-L, these partitioners also have a higher run-time and higher memory overhead. 
When comparing the partitioning quality of 2PS-L with DBH---the only partitioner that is continuously faster,---we see that 2PS-L yields significantly better replication factors on all graphs except for TW. The highest advantage of 2PS-L over DBH is on the GSH graph, where at $k=256$, the replication factor of DBH is $6.4 \times$ higher. In summary, 2PS-L as an out-of-core edge partitioner shows superior performance to stateful streaming edge partitioning. We can reduce both replication factor \emph{and} run-time at the same time. Therefore, 2PS-L is an attractive new choice for out-of-core graph partitioning when both partitioning quality \emph{and} run-time are critical. \subsection{Run-Time of Different Phases in 2PS-L.} \label{sec:runtime} In Figure~\ref{eval:phases}, we dissect the total run-time of 2PS-L into its two phases, i.e., clustering and partitioning, and also report the time for calculating the vertex degrees. Between $7$ and $20~\%$ of the run-time are spent on degree calculations. This time could be saved if the vertex degrees are already known (which may be the case in practice, depending on the data source and format). Clustering takes between $16$ and $22~\%$ of the run-time. This time will increase when more streaming clustering passes are performed (see Section~\ref{sec:restreaming}). Finally, between $58$ and $77~\%$ of the run-time are accounted for in the partitioning phase, which includes the assignment of clusters to partitions, the pre-partitioning and the scoring-based partitioning pass. We see similar patterns in the distribution of run-time between the degree calculation, clustering and partitioning phases among the group of social network graphs (OK, TW, FR) and web graphs (IT, UK, GSH, WDC). This correlates to the ratio of the two different parts of the streaming partitioning phase, i.e., prepartitioning (assigning edges of commonly placed clusters to the single candidate partition) and partitioning of remaining edges (using the scoring function to decide between two candidate partitions). We show this ratio for the evaluated graphs in Figure~\ref{eval:edge_ratio}. Different from social network graphs, prepartitioning dominates in web graphs, which is faster than scoring-based partitioning although it has the same algorithmic time complexity. Therefore, web graphs exhibit a lower overall partitioning time and a lower fraction of the run-time is associated with partitioning. \subsection{Re-Streaming} \label{sec:restreaming} We further evaluate how the partitioning quality is improved by re-streaming in the clustering phase of 2PS-L. We measured the relative gain in replication factor as compared to single-pass clustering in Figure~\ref{eval:restreaming_rf} as well as the run-time in Figure~\ref{eval:restreaming_runtime}. In terms of replication factor, the gains for re-streaming clustering are somewhat limited (up to 3.5~\% reduction). This needs to be paid for by a larger run-time; however, the increase in run-time is not proportional to the number of streaming passes. For example, for 8 streaming passes, the run-time roughly doubles as compared to single-pass clustering. This is because clustering only takes a minor portion of the total partitioning run-time (cf. Section~\ref{sec:runtime}). In the end, it depends on the concrete use cases to decide whether re-streaming pays off. Our recommended standard setting for 2PS-L is to perform a single streaming clustering pass, i.e., not apply re-streaming. 
Different from existing methods to reduce replication factors in out-of-core edge partitioning (e.g., ADWISE~\cite{8416335}), 2PS-L with re-streaming has a run-time that is independent of the number of partitions, so that the cost of re-streaming is still moderate compared to prior approaches. \subsection{Comparison to HDRF Scoring} \label{sec:hdrf_scoring} We implemented an alternative version of 2PS-L that employs ``traditional'' stateful streaming partitioning with the HDRF scoring function~\cite{Petroni:2015:HSP:2806416.2806424} in the second phase instead of linear-time streaming partitioning as in 2PS-L. We call this version 2PS-HDRF. In the following, we compare the replication factor and the run-time of 2PS-HDRF with 2PS-L (see Figure~\ref{eval:2ps-hdrf}). Using the HDRF scoring function improves the replication factor by up to $50~\%$. However, it comes at the cost of higher run-time with increasing number of partitions as a score is computed for every edge on every partition. At $k=4$, there is almost no run-time difference between 2PS-L and 2PS-HDRF. But at $k=256$, 2PS-L is up to $12 \times$ faster than 2PS-HDRF. Our recommendation is as follows: For a low number of partitions (like $k=4$), it pays off to use 2PS-HDRF, as the run-time is similar to 2PS-L, but the replication factor is lower. For a higher number of partitions ($k > 4$), the question of whether to use 2PS-L or 2PS-HDRF depends on the subsequent graph processing workload. It may pay off to invest more run-time into graph partitioning to yield a better partitioning quality that leads to faster graph processing, but it requires profiling of the graph processing performance to determine whether this is indeed the case. In Section~\ref{sec:processing}, we study the performance of graph processing under different partitionings to shed more light onto this question. \subsection{Distributed Graph Processing} \label{sec:processing} We evaluate the distributed graph processing performance under different graph partitionings. To this end, we set up a cluster of 8 machines on which we equally distribute 32 Spark executors; details can be found in the appendix. We deploy Spark/GraphX version 3.0.0 and use static PageRank (PR) with 100 iterations as graph processing workload. We partitioned the graphs into the respective number of executors used, i.e., $k=32$. As baselines, we used state-of-the-art streaming edge partitioners, as well as HEP-1 (i.e., HEP with $\tau = 1.0$) which is an out-of-core partitioner that comes close to stateful streaming partitioners in terms of memory overhead~\cite{hep}. Due to memory overheads in Spark (which internally uses property graphs represented as resilient distributed datasets), we could not process large graphs with more than one billion edges on our cluster. Therefore, we perform graph processing experiments on the OK graph (see Table~\ref{tab:graphs}) and a Wikipedia graph (WI) with 14 M vertices and 437 M edges~\cite{Kunegis:2013:KKN:2487788.2488173}. \begin{table} \scriptsize \begin{center} \begin{tabular}{|p{1.2cm}||l|l||l|l||l|l||l|l|} \hline \emph{Algor. /} & \multicolumn{2}{c||}{Rep. 
Factor} & \multicolumn{2}{c||}{Partitioning} & \multicolumn{2}{c||}{PageRank} & \multicolumn{2}{c|}{Total} \\ \emph{Graph} & OK & WI & OK & WI & OK & WI & OK & WI \\ \hline \hline 2PS-L & 9.00 & 4.55 & 20 & 80 & 240 & 786 & \textbf{260} & \textbf{866} \\ \hline 2PS-HDRF & 7.04 & 2.78 & 50 & 166 & \textbf{228} & 730 & 278 & 896 \\ \hline \hline HDRF & 10.78 & 3.98 & 52 & 220 & 246 & 769 & 298 & 989 \\ \hline DBH & 12.42 & 5.72 & \textbf{6} & \textbf{28} & 285 & FAIL & 291 & FAIL \\ \hline SNE & 4.57 & \textbf{2.21} & 110 & 574 & 230 & \textbf{621} & 340 & 1,195 \\ \hline HEP-1 & \textbf{4.52} & 2.59 & 45 & 244 & 261 & 632 & 306 & 876 \\ \hline \end{tabular} \end{center} \caption{Replication factor, partitioning time, and graph processing time (times in seconds).} \label{tab:processing} \vspace{-14pt} \end{table} Table~\ref{tab:processing} shows the resulting replication factor, partitioning run-time, and graph processing run-time (average of 3 runs). Neither the best partitioning quality (HEP-1 on OK, SNE on WI) nor the fastest partitioning (DBH) resulted in the best total run-time. HEP-1 and SNE yield the best replication factors, but as their partitioning run-time is relatively high, they do not perform best in an end-to-end comparison. On the other hand, DBH is the fastest partitioner. However, it also yields the worst replication factors, which makes graph processing slower. Therefore, in an end-to-end comparison, DBH does not perform best either. Even worse, when the WI graph was partitioned with DBH, Spark/GraphX could not complete graph processing at all: it ran out of disk space (35~GB per worker machine) because the high replication factor caused excessive shuffling. We conclude that there \emph{is} a need for good partitioning quality, but partitioning run-time is equally important. 2PS-L takes both factors into account, as it is fast and yields a good replication factor at the same time. As a consequence, the total run-time was always lowest when partitioning the graph with 2PS-L. 2PS-HDRF achieved a better replication factor than 2PS-L and a lower graph processing run-time; however, due to its higher partitioning run-time, it did not perform better in terms of total run-time. \begin{table} \scriptsize \setlength{\tabcolsep}{0.5em} \begin{center} \begin{tabular}{|p{1.2cm}|p{0.8cm}|p{0.8cm}|p{0.8cm}|p{0.8cm}|p{0.8cm}|p{0.8cm}|p{0.8cm}|} \hline & OK & IT & TW & FR & UK & GSH & WDC \\ \hline \hline Page Cache & 24 s & 96 s & 7.3 m & 15.4 m & 5.3 m & 69 m & 131 m \\ \hline SSD & 29 s \newline +22 \% & 135 s \newline +40 \% & 8.2 m \newline +12 \% & 16.5 m \newline +7 \% & 7.1 m \newline +34 \% & 78 m \newline +13 \% & 149 m \newline +14 \% \\ \hline HDD & 61 s \newline +159 \% & 393 s \newline +308 \% & 14.2 m \newline +93 \% & 23.8 m \newline +54 \% & 20.5 m \newline +285 \% & 206 m \newline +200 \% & 411 m \newline +214 \% \\ \hline \end{tabular} \end{center} \caption{Partitioning time of 2PS-L using different storage devices.} \label{tab:external} \vspace{-12pt} \end{table} \subsection{External Storage} \label{sec:external} Loading the graph data from external storage may slow down 2PS-L, in particular because multiple passes over the edge list are performed. To evaluate the impact of I/O on partitioning time, we set up a server with two different storage devices: a local SSD and a local HDD. 
We profiled the sequential read speed using \texttt{fio} (single-threaded reading of a single 5 GB file in 100 MB blocks, average of 3 runs), resulting in 938 MB/s for the SSD and 158 MB/s for the HDD. To force 2PS-L to read the graph completely from disk in every streaming iteration, we drop the page cache~\cite{pagecache} after each streaming pass, which invalidates the cached disk blocks for subsequent streaming passes. In Table~\ref{tab:external}, we compare the partitioning run-time under different storage solutions (page cache, SSD, HDD). Compared to reading the graph data from the page cache, the SSD is between 7~\% and 22~\% slower on social network graphs and between 13~\% and 40~\% slower on web graphs. Using an HDD comes with a performance penalty of 54~\% to 159~\% on social network graphs and 200~\% to 308~\% on web graphs. In conclusion, we recommend employing fast storage that achieves at least 1 GB/s of sequential read speed when using 2PS-L in memory-constrained situations where none of the graph data can be cached in memory. \subsection{Edge Partitioning Problem} \emph{Formalization.} The problem of \emph{edge partitioning} is commonly specified as follows (cf. also~\cite{Zhang:2017:GEP:3097983.3098033, Bourse:2014:BGE:2623330.2623660}). The graph $G = (V, E)$ is undirected or directed and consists of a set of vertices $V$ and a set of edges $E \subseteq V \times V$. Now, $E$ shall be split into $k>1, k \in \mathbb{N}$ partitions $P = \{p_1, ..., p_k\}$ such that $\bigcup_{i=1,...,k} p_i = E$ and $p_i \cap p_j = \emptyset, i \neq j$, while a balancing constraint is met: $\forall p_i \in P : |p_i| \leq \alpha \cdot \frac{|E|}{k} $ for a given $\alpha \geq 1, \alpha \in \mathbb{R}$. The balancing constraint ensures that no partition exceeds the average partition size $\frac{|E|}{k}$ by more than a factor of $\alpha$. We define $V(p_i)=\{x \in V | \exists y \in V : (x,y) \in p_i \lor (y,x) \in p_i \}$ as the set of vertices covered by a partition $p_i \in P$, i.e., the set of vertices that are adjacent to an edge in $p_i$. The optimization objective of edge partitioning is to minimize the \emph{replication factor} $\mathrm{RF}(p_1, \dots, p_k) = \frac{1}{|V|} \sum_{i=1,...,k}{|V(p_i)|}$. \emph{Interpretation.} The replication of a vertex on multiple partitions induces synchronization overhead in distributed processing. The lower the replication factor, the lower the synchronization overhead. This has positive effects on the performance of distributed computations. For instance, numerous studies~\cite{Zhang:2017:GEP:3097983.3098033, dne, Pacaci:2019:EAS:3299869.3300076} show that there is a direct correlation between the replication factor in edge partitioning and the run-time of distributed graph processing. \subsection{Streaming Edge Partitioning} Streaming is the dominant way of performing out-of-core edge partitioning: the graph is ingested edge by edge (one edge at a time), and each edge is immediately assigned to a partition. As a consequence, the space complexity of streaming partitioning is independent of the number of edges in the graph. Depending on how the edge assignment is computed, we can differentiate between stateless and stateful streaming edge partitioning. \emph{Stateless.} In stateless streaming edge partitioning, the assignment decision for a given edge $e$ is made independently of the assignment of other edges. 
This is commonly achieved by \emph{hashing} on the vertex IDs of the adjacent vertices of $e$. As a practical example, degree-based hashing (DBH)~\cite{dbh} computes a hash on the vertex ID of the vertex that has the lower degree. \emph{Stateful.} In stateful streaming edge partitioning, the assignment of an edge to a partition is based on a \emph{scoring function} that is computed for every partition. The scoring function can take into account \emph{graph properties} (e.g., the known or estimated degrees of the adjacent vertices of the edge~\cite{dbh, Petroni:2015:HSP:2806416.2806424}) as well as \emph{partitioning state} (e.g., the vertex cover sets of the partitions and the current size of the partitions~\cite{Petroni:2015:HSP:2806416.2806424}). The size of the partitioning state is limited to $\mathcal{O}(|V|)$, i.e., to only vertex-related information. As a practical example, HDRF~\cite{Petroni:2015:HSP:2806416.2806424} is a streaming partitioner that assigns an edge $e =(u,v)$ to the partition $p$ which maximizes a scoring function $C^{\mathit{HDRF}}(u,v,p) = C_{\mathit{REP}}(u,v,p) + C_{\mathit{BAL}}(p)$, where $C_{\mathit{REP}}(u,v,p)$ is a degree-weighted replication score and $C_{\mathit{BAL}}(p)$ is a balancing score. $C_{\mathit{REP}}(u,v,p)$ is highest if both vertices $u$ and $v$ adjacent to edge $e$ are in the vertex cover set of the same partition~$p$; $C_{\mathit{BAL}}(p)$ is highest when $p$ contains the fewest edges. \begin{figure} \centering \subfloat[Replication factor.]{\label{a} \includegraphics[width=0.49\linewidth]{figures/motivation/ok_rf.pdf}} \subfloat[Run-time.]{\label{b} \includegraphics[width=0.49\linewidth]{figures/motivation/ok_time.pdf}} \caption{Replication factor and run-time of 2PS-L against stateful (HDRF) and stateless (DBH) streaming partitioning on the OK graph (cf. Table~\ref{tab:graphs}) at different numbers of partitions.} \label{eval:motivation} \vspace{-5pt} \end{figure} \paragraph*{Discussion} In most cases, stateful partitioning yields a lower replication factor than stateless partitioning~\cite{verma-vldb, Abbas:2018:SGP:3236187.3269471, Pacaci:2019:EAS:3299869.3300076}. However, as the run-time of stateful partitioning increases linearly with the number of partitions, it becomes less and less profitable as the number of partitions grows. The core problem of stateful streaming edge partitioning is that the scoring function is computed for every edge on \emph{every} partition, making it inefficient at high values of $k$. We argue that if we gather information on the graph structure in a preprocessing step, we can reduce the search space of stateful streaming edge partitioning from all $k$ partitions to only two partitions (regardless of $k$). With our novel partitioning algorithm 2PS-L, we thus achieve linear run-time at competitive partitioning quality. In Figure~\ref{eval:motivation}, we perform streaming edge partitioning with DBH (a representative of stateless partitioning) and HDRF (stateful partitioning) at a growing number of partitions $k$. While the replication factor achieved with HDRF is better than with DBH, the run-time overhead increases considerably: At $k=256$, HDRF takes more than 5 minutes to partition the OK graph, while DBH is done in 7 seconds. In many cases, investing so much time into partitioning will not pay off. 2PS-L, with its linear time complexity, builds 256 partitions in 21 seconds \emph{and} achieves the lowest replication factor at the same time. 
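To make the cost structure of stateful scoring-based partitioning concrete, the following minimal Python sketch assigns each incoming edge to the partition with the highest score. The scoring terms only mirror the intuition described above (reward partitions that already cover the endpoints, weighted by the endpoint degrees, plus a balancing term); they are a simplified illustration rather than the exact HDRF formulas, and all function and variable names are ours. The inner loop over all $k$ partitions is exactly the overhead that 2PS-L avoids by reducing the candidate set to two partitions.

\begin{verbatim}
def stateful_stream_partition(edges, degrees, k, gamma=1.0):
    # Partitioning state: vertex cover set and current size of each partition.
    cover = [set() for _ in range(k)]
    sizes = [0] * k
    assignment = []
    for (u, v) in edges:
        best_p, best_score = 0, float("-inf")
        max_size, min_size = max(sizes), min(sizes)
        for p in range(k):  # O(k) score evaluations per edge -> the bottleneck
            # Replication term: prefer partitions that already cover u and/or v,
            # giving a larger bonus for covering the lower-degree endpoint.
            c_rep = 0.0
            if u in cover[p]:
                c_rep += 1.0 + degrees[v] / (degrees[u] + degrees[v])
            if v in cover[p]:
                c_rep += 1.0 + degrees[u] / (degrees[u] + degrees[v])
            # Balancing term: prefer the currently smallest partition.
            c_bal = gamma * (max_size - sizes[p]) / (1.0 + max_size - min_size)
            score = c_rep + c_bal
            if score > best_score:
                best_p, best_score = p, score
        cover[best_p].update((u, v))
        sizes[best_p] += 1
        assignment.append(best_p)
    return assignment
\end{verbatim}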
\section{Introduction} \label{sec:introduction} \input{content/introduction} \section{Problem Analysis} \label{sec:background} \input{content/problem} \section{Approach} \label{sec:approach} \input{content/approach} \section{Theoretical Analysis} \label{sec:analysis} \input{content/analysis} \section{Evaluations} \label{sec:evaluations} \input{content/evaluation} \section{Related Work} \label{sec:related} \input{content/related} \section{Conclusions} \label{sec:conclusions} \input{content/conclusion} \section*{Appendix: Experimental Settings} \label{sec:appendix} \input{content/appendix} \bibliographystyle{IEEEtran} \IEEEtriggeratref{55}
{ "attr-fineweb-edu": 1.775391, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUdH3xK19JmhArDWdT
\section{Introduction} \label{sec:introduction} During the past decade, deep learning (DL) has led to significant breakthroughs in many areas, such as image classification and natural language processing~\cite{densenet,resnet,big_nlp}. However, the large model sizes and computational complexity of existing networks limit the deployment of DL on resource-constrained devices and its large-scale adoption in edge computing. Multiple model compression techniques, such as network pruning~\cite{han_prune}, quantization~\cite{bnn}, and knowledge distillation~\cite{kd}, have been proposed to compress and deploy such complex models on resource-constrained devices without sacrificing the test accuracy. However, these techniques require a significant amount of manual tuning. Hence, neural architecture search (NAS) has been proposed to automatically design neural architectures with reduced model sizes~\cite{baker_17, quoc_le, lstm, Darts,elsken2019neural}. NAS is an optimization problem with specific targets (e.g., high classification accuracy) over a set of possible candidate architectures. The set of candidate architectures defines the (typically vast) search space, while the optimizer defines the search algorithm. Recent breakthroughs in NAS can simplify the tricky (and error-prone) ad-hoc architecture design process~\cite{lstm, hyper_nas}. Moreover, the networks obtained via NAS have higher test accuracy and significantly fewer parameters than hand-designed networks~\cite{Darts, real_17}. These advantages of NAS have attracted significant attention from researchers and engineers alike~\cite{nas_survey}. However, most of the existing NAS approaches do not explicitly consider hardware constraints (e.g., latency and energy consumption). Consequently, the resulting neural networks still cannot be deployed on real devices. To address this drawback, recent studies propose \textit{hardware-aware NAS}, which incorporates the hardware constraints of networks during the search process~\cite{jiang2020device}. Nevertheless, current approaches are time-consuming since they involve training the candidate networks and a tedious search process~\cite{wu2019fbnet}. To accelerate NAS, recent approaches rely on graph neural networks (GNNs) to estimate the accuracy of a given network~\cite{eccv_gates, yiran_gnn, brp_nas, pr_2020_gnn_acc_pre}. However, training a GNN-based accuracy predictor is still time-consuming (on the order of tens of minutes~\cite{chiang2019cluster} to hours~\cite{mao2019learning} on GPU clusters). Therefore, adapting existing NAS approaches to different hardware architectures is challenging due to their intensive computation and execution time requirements. To alleviate the computation cost of current NAS approaches, we propose to analyze the NAS problem from a \textit{network topology} perspective. This idea is motivated by the observation that the tediousness and complexity of current NAS approaches stem from the lack of understanding of what actually contributes to a neural network's accuracy. Indeed, innovations in the topology of neural architectures, especially the introduction of skip connections, have achieved great success in many applications~\cite{densenet,resnet}. This is because, in general, the topology (or structure) of a network strongly influences the phenomena taking place over it~\cite{newman2006structure}. For instance, how closely social network users are interconnected directly affects how fast information propagates through the network~\cite{barabasi2003scale_Free}. 
Similarly, a DNN architecture can be seen as a network of connected neurons. As discussed in~\cite{nn_mass}, the topology of deep networks has a significant impact on how effectively gradients can propagate through the network and, thus, on the test performance of neural networks. These observations motivate us to take an approach from network science to quantify the topological properties of neural networks and thereby accelerate NAS. From an application perspective, the performance and energy efficiency of DNN accelerators are other critical metrics besides the test accuracy. In-memory computing (IMC)-based architectures have recently emerged as a promising technique to construct high-performance and energy-efficient hardware accelerators for DNNs. IMC-based architectures can store all the weights on-chip, hence removing the latency caused by off-chip memory accesses. However, IMC-based architectures face the challenge of a tremendous increase in on-chip communication volume. While most state-of-the-art neural networks adopt skip connections to improve their performance~\cite{resnet, mobilenetv2, densenet}, the wide usage of skip connections requires large amounts of data transfer across multiple layers, thus causing a significant communication overhead. Prior work on IMC-based DNN accelerators proposed a bus-based network-on-chip (NoC)~\cite{chen2018neurosim} or a cmesh-based NoC~\cite{shafiee2016isaac} for communication between multiple layers. However, both bus-based and cmesh-based on-chip interconnects significantly increase the area, latency, and energy consumption of the hardware; hence, they do not offer a promising solution for future accelerators. Starting from these overarching ideas, this paper proposes FLASH -- a fast neural architecture search with hardware optimization -- to address the drawbacks of current NAS techniques. FLASH delivers a neural architecture that is co-optimized with respect to accuracy and hardware performance. Specifically, by analyzing the topological properties of neural architectures from a network science perspective, we propose a new topology-based metric, namely, the \textit{NN-Degree}. We show that NN-Degree can indicate the test performance of a given architecture. This makes our proposed NAS \textit{training-free} during the search process and accelerates NAS by orders of magnitude compared to state-of-the-art approaches. Then, we demonstrate that NN-Degree enables a lightweight accuracy predictor with only \textit{three parameters}. Moreover, to improve the on-chip communication efficiency, we adopt a mesh-based NoC for the IMC-based hardware. Based on this communication-optimized hardware architecture, we measure the hardware performance for a subset of neural networks from the NAS search space. Then, we construct analytical models for the area, latency, and energy consumption of a neural network based on our optimized target hardware platform. Unlike existing neural network-based and black-box style search algorithms~\cite{jiang2020device}, the proposed NAS methodology enables searching across the entire search space via a mathematically rigorous and time-efficient optimization algorithm. Consequently, our experimental evaluations show that FLASH significantly pushes forward the NAS frontier by enabling NAS in less than 0.1 seconds on a 20-core Intel Xeon CPU. Finally, we demonstrate that FLASH can be readily transferred to other hardware platforms (e.g., Raspberry Pi) by merely fine-tuning the hardware performance models. 
Overall, this paper makes the following contributions: \begin{itemize} \item We propose a new topology-based analytical metric (\textit{NN-Degree}) to quantify the topological characteristics of DNNs with skip connections. We demonstrate that the NN-Degree enables a \textit{training-free} NAS within seconds. Moreover, we use the NN-Degree metric to build a new lightweight (\textit{three-parameter}) accuracy predictor by training as few as 25 sample networks out of a vast search space with more than 63 billion configurations. Without any significant loss in accuracy, our proposed accuracy predictor requires 6.88$\times$ fewer samples and provides a $65.79\times$ reduction in fine-tuning time compared to existing GNN/GCN-based approaches~\cite{yiran_gnn}. \item We construct analytical models to estimate the latency, area, and energy consumption of various DNN architectures. We show that our proposed analytical models are applicable to multiple hardware architectures and achieve a high accuracy with less than one second of fine-tuning time. \item We design a hierarchical simplicial homology global optimization (SHGO)-based algorithm to search for the optimal architecture. Our proposed hierarchical SHGO-based algorithm enables 27729$\times$ faster (less than 0.1 seconds) NAS compared to an RL-based baseline approach. \item We demonstrate that our methodology enables NAS on a Raspberry Pi 3B in less than 3 seconds of computation time. To the best of our knowledge, this is the first work showing NAS running directly on edge devices with such low computational requirements. \end{itemize} The rest of the paper is organized as follows. In Section \ref{sec:related_work}, we discuss related work and background information. In Section \ref{sec:methodology}, we formulate the optimization problem, then describe the new analytical models and search algorithm. Our experimental results are presented in Section \ref{sec:experimental_results}. Finally, Section \ref{sec:conclusion} concludes the paper with remarks on our main contributions and future research directions. \section{Related Work and Background Information} \label{sec:related_work} \noindent\textbf{Hardware-aware NAS:} Hardware accelerators for DNNs have recently become popular due to the high performance demanded by multiple applications~\cite{img_net, manning1999foundations,benmeziane2021comprehensive}; they can significantly reduce the latency and energy associated with DNN inference. The hardware performance (e.g., latency, energy, and area) of accelerators varies with DNN properties (e.g., number of layers, parameters, etc.); therefore, hardware performance is also a crucial factor to consider during NAS. Several recent studies consider hardware performance for NAS. Authors in~\cite{jha_dac20} introduce a growing and pruning strategy that automatically maximizes the test accuracy and minimizes the FLOPs of neural architectures during training. A platform-aware NAS targeting mobile devices is proposed in~\cite{global_1}; the objective is to maximize the model accuracy with an upper bound on latency. Authors in~\cite{wu2019fbnet} create a latency-aware loss function to perform differentiable NAS. The latency of DNNs is estimated through a lookup table that contains the latency of each operation/layer. However, both of the latter studies consider latency as the only metric for hardware performance. Authors in~\cite{diana_modeling} propose a hardware-aware NAS framework to design convolutional neural networks. 
Specifically, by building analytical latency, power, and memory models, they create a hardware-aware optimization methodology to search for the optimal architecture that meets the hardware budgets. Authors in~\cite{jiang2020device} consider latency, energy, and area as metrics for hardware performance while performing NAS. Also, a reinforcement learning (RL)-based controller is adopted to tune the network architecture and device parameters. The resulting network is retrained to evaluate the model accuracy. There are two major drawbacks of this approach. First, RL is a slow-converging process that prohibits fast exploration of the design space. Second, retraining the network further exacerbates the search time, leading to hundreds of GPU hours for real applications~\cite{quoc_le}. Furthermore, most existing hardware-aware NAS approaches explicitly optimize the architectures for a specific hardware platform~\cite{cai2018proxylessnas, wu2019fbnet, edd_Dac}. Hence, if we switch to new hardware, we need to repeat the entire NAS process, which is very time-consuming under the existing NAS frameworks~\cite{cai2018proxylessnas, wu2019fbnet, edd_Dac}. The demand for reducing the overhead of adapting to new hardware motivates us to improve the transferability of hardware-aware NAS methodologies. \noindent\textbf{Accuracy Predictor-based NAS:} Several approaches perform NAS by estimating the accuracy of the network~\cite{eccv_gates,yiran_gnn,brp_nas,pr_2020_gnn_acc_pre}. These approaches first train a graph neural network (GNN), or a graph convolution network (GCN), to estimate the network accuracy while exploring the search space. During the search process, the test accuracy of the sampled networks is obtained from the estimator instead of through regular training. Although estimating the accuracy significantly accelerates the NAS process, the training cost of the accuracy predictor itself remains a bottleneck. GNNs require many training samples to achieve high accuracy, thus incurring a significant overhead for training the candidate networks sampled from the search space. Therefore, NAS based on such accuracy predictors still suffers from excessive computation and time requirements. \noindent\textbf{Time-efficient NAS:} To reduce the time cost of training candidate networks, authors in~\cite{hyper_nas, single_path_nas} introduced the weight sharing mechanism (WS-NAS). Specifically, candidate networks are generated by randomly sampling part of a large network (supernet). Hence, candidate networks share the weights of the supernet and update these weights during training. By reusing these trained weights instead of training from scratch, WS-NAS significantly improves the time efficiency of NAS. However, the accuracy of the models obtained via WS-NAS is typically far below that of models trained from scratch. Several optimization techniques have been proposed to close the accuracy gap between weight sharing and stand-alone training~\cite{big_nas, Cai2020Once-for-All}. For example, authors in \cite{Cai2020Once-for-All} propose a progressive shrinking algorithm to train the supernet. However, in many cases, the resulting networks still need some fine-tuning epochs to obtain the final architecture. To further accelerate NAS, some works propose differentiable NAS~\cite{Darts,cai2018proxylessnas}. Differentiable NAS approaches search for the optimal architecture by learning the architecture parameters during the training process. 
Hence, differentiable NAS only needs to train the supernet once, thus reducing the training time significantly. Nevertheless, due to the large number of parameters of the supernet, differentiable NAS requires a large amount of GPU memory. To further improve the time efficiency of NAS, several training-free NAS approaches have been proposed~\cite{tf_nas1,tf_nas2}. These approaches leverage a training-free proxy that indicates the test performance of a given architecture; hence, training time is eliminated from the entire NAS process. However, these methods usually use gradient-based information to build the proxy~\cite{tf_nas1,tf_nas2}. Therefore, in order to calculate the gradients, GPUs are still necessary for the backward propagation process. To completely decouple the NAS process from GPU platforms, our work proposes a GPU-free proxy for training-free NAS. We provide more details in Section~\ref{subsec:tf_nas}. \begin{figure} [b] \centering \includegraphics[width=0.8\textwidth]{figures/model/skip_connection_v8.pdf} \caption{Modeling a CNN as a network in network science: Each channel is modeled as a node; each convolution kernel/filter is modeled as a link/connection. (a) Illustration of a single cell with DenseNet-type skip connections (DTSC). (b) Illustration of a single cell with Addition-type skip connections (ATSC). (c) Decomposition of a network cell with skip connections into a Lattice Network $\mathcal{G}$ and a Random Network $\mathcal{R}$.} \label{fig:skip_link} \end{figure} \noindent\textbf{Skip connections and Network Science:} Networks obtained by both manual design and NAS have shown that long-range links (i.e., skip connections) are crucial for achieving higher accuracy~\cite{resnet,densenet,mobilenetv2,Darts}. Overall, there are two commonly used types of skip connections in neural networks. First, we have the \textit{DenseNet-type} skip connections (DTSC), which concatenate previous layers' outputs as the input for the next layer~\cite{densenet}. To study the topological properties and enlarge the search space, we do \textit{not} use the original DenseNets~\cite{densenet}, which contain all-to-all connections. Instead, we consider a generalized version where we vary the number of skip connections by randomly selecting only some channels for concatenation, as shown in Fig. \ref{fig:skip_link}(a). The other type of skip connections is the \textit{addition-type} skip connections (ATSC), which consist of links that bypass several layers and are directly added to the output of later layers (see Fig. \ref{fig:skip_link}(b))~\cite{resnet}. In network science, a small-world network is defined as a highly clustered network with a small distance (typically logarithmic in the number of nodes) between any two nodes inside the network~\cite{smallworldness}. Considering the skip connections in neural networks, we propose to use the \textit{small-world network} concept to analyze networks with both short- and long-range (or skip) links. Indeed, small-world networks can be decomposed into: (i) a lattice network $\mathcal{G}$ accounting for short-range links; (ii) a random network $\mathcal{R}$ accounting for long-range links (see Fig.~\ref{fig:skip_link}(c)). The co-existence of a rich set of short- and long-range links leads to both a high degree of clustering and a short average path length (logarithmic in the network size). 
We use the small-world network model to analyze the topological properties of neural networks in Section \ref{sec:methodology}. \noindent\textbf{Average Degree:} The \textit{average degree} of a network is the average number of connections a node has, i.e., the total number of edges divided by the total number of nodes. The average degree and the degree distribution (i.e., the distribution of node degrees) are important topological characteristics that directly affect how information flows through a network \cite{barabasi2003scale_Free}. Indeed, small-world network theory reveals that the average degree of a network has a significant impact on the network's average path length and clustering behavior \cite{smallworldness}. Therefore, we use network science to investigate the performance gains due to these topological properties. \section{Proposed Methodology} \label{sec:methodology} \begin{figure}[t] \centering \includegraphics[width=0.8\textwidth]{figures/big_pic_v18.pdf} \caption{Overview of the proposed approach. Stage 1 (red box): we build the hardware performance models (i.e., latency $\mathcal{L}$, energy $\mathcal{E}$, and area $\mathcal{A}$) and the accuracy predictor by randomly sampling candidate networks from the search space to evaluate the hardware characteristics (latency $\mathcal{L}$, energy $\mathcal{E}$, and area $\mathcal{A}$) and test accuracy $\theta$. Stage 2 (blue box): search for the optimal network architecture given the multi-objective function $f(\mathcal{L}, \mathcal{E}, \mathcal{A}, \theta)$.} \label{fig:overview} \end{figure} \subsection{Overview of New NAS Approach} The proposed NAS framework is a two-stage process, as illustrated in Fig. \ref{fig:overview}: (i) We first quantify the topological characteristics of neural networks by the newly proposed NN-Degree metric. Then, we randomly select a few networks and train them to fine-tune the accuracy predictor based on the network topology. We also build analytical models to estimate the latency, energy, and area of given neural architectures. (ii) Based on the accuracy predictor and analytical performance models from the first stage, we use a simplicial homology global optimization (\textit{SHGO})-based algorithm in a hierarchical fashion to search for the optimal network architecture. \subsection{Problem Formulation of Hardware-aware NAS} The overall target of the hardware-aware NAS approach is to find the network architecture that gives the highest test accuracy while achieving a small area, low latency, and low energy consumption when deployed on the target hardware. In practice, there are constraints (budgets) on the hardware performance and test accuracy. For example, battery-powered devices have a very constrained energy capacity~\cite{wang2020neural}. Hence, there is an upper bound on the energy consumption of the neural architecture. To summarize, the NAS problem can be expressed as: \begin{equation} \begin{aligned} \max &\quad f_{obj}=\frac{\theta}{\mathcal{A}\times\mathcal{L}\times\mathcal{E}} \\ \text{subject\ to:} &\quad \theta \geq \theta_M,\ \mathcal{A} \leq \mathcal{A}_M, \ \mathcal{L} \leq \mathcal{L}_M, \ \mathcal{E} \leq \mathcal{E}_M\\ \end{aligned} \label{eq:problem_definition} \end{equation} \noindent where $\theta_M$, $\mathcal{A}_M$, $\mathcal{L}_M$, and $\mathcal{E}_M$ are the constraints on the test accuracy, area, latency, and energy consumption, respectively. 
We summarize the symbols (and their meanings) used in this formulation in Table \ref{table:prob_form}. \begin{table}[t] \caption{Symbols and their corresponding definition/meaning used in our Problem Formulation.} \scalebox{0.88}{\begin{tabular} {|l|l|} \hline Symbol & Definition \\ \hline \hline $f_{obj}$ & Objective function of NAS \\ \hline $\theta$ & Test accuracy of a given network \\ \hline $\mathcal{A}$ & Chip area \\ \hline $\mathcal{L}$ & Inference latency of a given network \\ \hline $\mathcal{E}$ & Inference energy consumption of a given network \\ \hline $\theta_M$ & Constraint of test accuracy for NAS \\ \hline $\mathcal{A}_M$ & Constraint of area for NAS \\ \hline $\mathcal{L}_M$ & Constraint of inference latency for NAS \\ \hline $\mathcal{E}_M$ & Constraint of inference energy consumption for NAS \\ \hline \end{tabular}} \label{table:prob_form} \end{table} \subsection{NN-Degree and Training-free NAS} This section first introduces our idea of modeling a CNN based on network science~\cite{smallworldness}. To this end, we define a group of consecutive layers with the same width (i.e., number of output channels, $w_c$) as a \textit{cell}; then we break the entire network into multiple cells and denote the number of cells as $N_c$. Similar to MobileNet-v2~\cite{mobilenetv2}, we also adopt a width multiplier ($w_m$) to scale the width of each cell. Moreover, following most mainstream CNN architectures, we assume that each cell inside a CNN has the same number of layers ($d_c$). Furthermore, as shown in Fig.~\ref{fig:skip_link}, we consider each channel of the feature map as a node in a network and each convolution filter/kernel as an undirected link. These notations are summarized in Table \ref{table:acc_pred}. \begin{table}[htb] \caption{Symbols and their corresponding definition/meaning used in our NN-Degree based analytical accuracy predictor.} \scalebox{0.88}{\begin{tabular} {|l|l|} \hline Symbol & Definition \\ \hline \hline $g$ & NN-Degree (new metric we propose) \\ \hline $g_\mathcal{G}$ & NN-Degree of the lattice network (short-range connections)\\ \hline $g_\mathcal{R}$ & NN-Degree of the random network (long-range or skip connections)\\ \hline $N_c$ & Number of cells \\ \hline $w_c$ & Number of output channels per layer within cell $c$ (i.e., the width of cell $c$)\\ \hline $d_c$ & Number of layers within cell $c$ (i.e., the depth of cell $c$)\\ \hline $SC_c$ & Number of skip connections within cell $c$ \\ \hline $a_\theta,b_\theta,c_\theta$ & Learnable parameters for the accuracy predictor \\ \hline \end{tabular}} \label{table:acc_pred} \end{table} Combining the concept of small-world networks from Section \ref{sec:related_work} and our modeling of a CNN, we decompose a network cell with skip connections into a lattice network $\mathcal{G}$ and a random network $\mathcal{R}$ (see Fig.~\ref{fig:skip_link}(c)). \noindent\textbf{Proposed Metrics:} Our key objective is two-fold: (i) quantify which topological characteristics of DNN architectures affect their performance, and (ii) exploit such properties to accurately predict the test accuracy of a given architecture. To this end, we propose a new analytical metric called NN-Degree, as defined below. 
\noindent\textbf{Definition of NN-Degree:} \textit{Given a DNN with $N_c$ cells, $d_c$ layers per cell, width $w_c$ for each cell, and $SC_c$ skip connections within each cell, the NN-Degree metric is defined as the sum of the average degree of each cell:} \begin{equation} \begin{split} g &=\sum_{c=1}^{N_c}(w_c +\frac{SC_c}{w_c\times d_c})\\ \end{split} \label{eq:nn_deg} \end{equation} \noindent\textbf{Intuition:} The average degree of a given DNN cell is the sum of the average degrees of the lattice network $\mathcal{G}$ and the random network $\mathcal{R}$. Given a cell with $d_c$ convolutional layers and $w_c$ channels per layer, the number of nodes is $w_c\times d_c$. Moreover, each convolutional layer has $w_c\times w_c$ filters (kernels) accounting for the short-range connections; hence, in the lattice network $\mathcal{G}$, there are $w_c\times w_c\times d_c$ connections in total. Using the above analysis, we can express the NN-Degree as follows: \begin{equation} \begin{split} g &=g_\mathcal{G} + g_\mathcal{R}\\ &=\sum_{c=1}^{N_c}\frac{\text{number of connections in } \mathcal{G}}{\text{number of nodes in cell } c} +\sum_{c=1}^{N_c} \frac{\text{number of connections in } \mathcal{R}}{\text{number of nodes in cell } c}\\ &=\sum_{c=1}^{N_c}\frac{w_c\times d_c \times w_c}{w_c\times d_c} +\sum_{c=1}^{N_c} \frac{\text{number of skip connections}}{w_c\times d_c}\\ &=\sum_{c=1}^{N_c}(w_c +\frac{SC_c}{w_c\times d_c})\\ \end{split} \label{eq:nn_deg_dev} \end{equation} \noindent\textbf{Discussion:} The first term in Equation~\ref{eq:nn_deg_dev} (i.e., $g_\mathcal{G}$) reflects the width of the network $w_c$. Many successful DNN architectures, such as DenseNets~\cite{densenet}, Wide-ResNets~\cite{wide_resnet}, and MobileNets~\cite{mobilenetv2}, have shown that wider networks can achieve a higher test performance. The second term (i.e., $g_\mathcal{R}$) quantifies how densely the nodes are connected through the skip connections. As discussed in \cite{ensemble_resnet}, networks with more skip connections have more forward/backward propagation paths and thus a better test performance. Based on the above analysis, we claim that a higher NN-Degree value should indicate networks with higher test performance. We verify this claim empirically in the experimental section. Next, we propose an accuracy predictor based only on the NN-Degree. \vspace{2mm} \noindent\textbf{Accuracy Predictor:} Given the NN-Degree ($g$) definition, we build the accuracy predictor by using a variant of logistic regression. Specifically, the test accuracy $\theta$ of a given architecture is: \begin{equation} \theta= \frac{1}{a_\theta+\exp(b_\theta \times \frac{1}{g}+c_\theta)}\\ \label{acc_predictor} \end{equation} where $a_\theta,b_\theta,c_\theta$ are the parameters that are fine-tuned with the accuracy and NN-Degree values of sample networks from the search space. Section \ref{sec:experimental_results} shows that by using as few as 25 data samples (NN-Degree and corresponding accuracy values), we can generate an accurate predictor for a huge search space covering more than 63 billion configurations within 1 second on a 20-core Intel Xeon CPU. \noindent\textbf{Training-free NAS:} Section~\ref{sec:experimental_results} shows that NN-Degree can indicate the test accuracy of a given architecture. Hence, one can use NN-Degree as a proxy for the test accuracy to enable training-free NAS. Section~\ref{subsec:tf_nas} demonstrates that we can perform \textit{training-free} NAS within 0.11 seconds on a 20-core CPU. 
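For illustration, the following minimal Python sketch computes the NN-Degree of Equation~(\ref{eq:nn_deg}) for a candidate network and evaluates the three-parameter predictor of Equation~(\ref{acc_predictor}). The cell configuration and the values of $a_\theta$, $b_\theta$, $c_\theta$ used below are placeholders; in FLASH, these parameters are fitted on the (roughly 25) trained sample networks.

\begin{verbatim}
import math

def nn_degree(widths, depths, skips):
    # widths[c], depths[c], skips[c] correspond to w_c, d_c, SC_c of cell c.
    return sum(w + sc / (w * d) for w, d, sc in zip(widths, depths, skips))

def predicted_accuracy(g, a, b, c):
    # Three-parameter predictor: theta = 1 / (a + exp(b * (1/g) + c)).
    return 1.0 / (a + math.exp(b / g + c))

# Example: a 3-cell network with 16/32/64 channels per layer, 10 layers per
# cell, and 500/1000/2000 skip connections per cell (illustrative numbers).
g = nn_degree([16, 32, 64], [10, 10, 10], [500, 1000, 2000])
# a, b, c are placeholder values; the real ones are obtained by fitting.
theta = predicted_accuracy(g, a=1.0, b=50.0, c=-0.5)
\end{verbatim}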
\subsection{Overview of In-memory Computing (IMC)-based Hardware} Fig.~\ref{fig:imc_arch} shows the IMC architecture considered in this work. We note that the proposed FLASH methodology is not specific to IMC-based hardware. We adopt an IMC architecture since it has been proven to achieve lower memory access latency~\cite{horowitz20141}. Due to the high communication volume imposed by deeper and denser networks, communication between multiple tiles is crucial for hardware performance, as shown in~\cite{krishnan2020interconnect, mandal2020latency}. Our architecture consists of multiple tiles connected by network-on-chip (NoC) routers, as shown in Fig.~\ref{fig:imc_arch}(a). We use a mesh-based NoC due to its superior performance compared to bus-based architectures. Each tile consists of a fixed number of compute elements (CE), a rectified linear unit (ReLU), an I/O buffer, and an accumulation unit, as shown in Fig.~\ref{fig:imc_arch}(b). Each CE contains a fixed number of in-memory processing elements (imPE), a multiplexer, a switch, an analog-to-digital converter (ADC), a shift and add (S\&A) circuit, and a local buffer~\cite{chen2018neurosim}, as shown in Fig.~\ref{fig:imc_arch}(c). The ADC precision is set to four bits to avoid any accuracy degradation. There is no digital-to-analog converter (DAC) used in the architecture; instead, a sequential signaling technique is adopted to represent multi-bit inputs~\cite{peng2019inference}. Each imPE consists of 256$\times$256 IMC crossbars (the memory elements) based on ReRAM (1T1R) technology~\cite{krishnan2020interconnect, mandal2020latency, chen2018neurosim}. This work incorporates a sequential operation between DNN layers since a pipelined operation may cause pipeline bubbles during inference~\cite{song2017pipelayer, qiao2018atomlayer}. \begin{figure}[t] \centering \includegraphics[width=0.88\textwidth]{figures/model/layoutv2.pdf} \caption{Details of the IMC hardware. (a) The architecture consists of multiple tiles connected via routers; (b) The structure of a tile. Each tile consists of multiple computing elements (CE), an I/O buffer, a ReLU unit, and an accumulation unit; (c) The structure of each CE. 
Each CE consists of multiple in-memory processing elements (imPE), local buffers, a switch, a multiplexer, an analog-to-digital converter (ADC), and a shift and add (S\&A) circuit.} \label{fig:imc_arch} \end{figure} \begin{table}[htb] \caption{Symbols and their corresponding definition used in our analytical area, latency, and energy models.} \scalebox{0.86}{\begin{tabular}{|l|l||l|l|} \hline Symbol & Definition & Symbol & Definition \\ \hline \hline $N_c$ & Number of cells & $N_i^r$ & \begin{tabular}[c]{@{}l@{}}Number of rows of imPE arrays\\of $i^\mathrm{th}$ layer\end{tabular} \\ \hline $a_\theta,b_\theta,c_\theta$ & \begin{tabular}[c]{@{}l@{}}Learnable parameters for\\ accuracy predictor\end{tabular} & $N_i^c$ & \begin{tabular}[c]{@{}l@{}}Number of columns of imPE arrays\\ of $i^\mathrm{th}$ layer\end{tabular} \\ \hline $w_m$ & Width multiplier & $Kx_i, Ky_i$ & \begin{tabular}[c]{@{}l@{}}Kernel size \\of $i^\mathrm{th}$ layer\end{tabular} \\ \hline $d_c$ & Number of layers within cell $c$ & \begin{tabular}[c]{@{}l@{}}$N_i^{if}$,\ $N_i^{of}$\end{tabular} & \begin{tabular}[c]{@{}l@{}}Number of input and\\output features of $i^\mathrm{th}$ layer\end{tabular} \\ \hline $w_c$ & Width of cell $c$ & \begin{tabular}[c]{@{}l@{}}$(PE_x)_i$,\ $(PE_y)_i$\end{tabular} & \begin{tabular}[c]{@{}l@{}}Size of a single imPE \\ of $i^\mathrm{th}$ layer\end{tabular} \\ \hline $SC_c$ & \begin{tabular}[c]{@{}l@{}}Number of skip connections\\ within cell $c$\end{tabular} & $T_i$ & \begin{tabular}[c]{@{}l@{}}Number of tiles\\of $i^\mathrm{th}$ layer\end{tabular} \\ \hline $FLOP_c$ & Number of FLOPs of cell $c$ & $c$ & Number of CEs in each tile \\ \hline $Comm_c$ & \begin{tabular}[c]{@{}l@{}}The amount of data transferred\\ through NoC inside cell $c$\end{tabular} & $p$ & Number of imPEs in each CE \\ \hline $N_T$ & \begin{tabular}[c]{@{}l@{}}Total number of tiles\\ of the chip\end{tabular} & $A_T$ & Area of a tile \\ \hline $F_{\mathcal{E}}$ & Features for energy & $\mathcal{E}^T$ & \begin{tabular}[c]{@{}l@{}}Energy consumption\\ of a tile\end{tabular} \\ \hline \begin{tabular}[c]{@{}l@{}}$\Lambda_{comp}$, \ $\Lambda_{NoC}$\end{tabular} & \begin{tabular}[c]{@{}l@{}}Weight vectors to estimate\\ computation and NoC latency\end{tabular} & \begin{tabular}[c]{@{}l@{}}$F_{Comp}$,\ $F_{NoC}$\end{tabular} & \begin{tabular}[c]{@{}l@{}}Features to estimate computation\\ and NoC latency\end{tabular} \\ \hline \end{tabular}} \label{table:hw_model} \end{table} \subsection{Hardware Performance Modeling} This section describes the methodology for modeling hardware performance. We consider three metrics for hardware performance: area, latency, and energy consumption. We use customized versions of NeuroSim~\cite{chen2018neurosim} for circuit simulation (computing fabric) and BookSim~\cite{jiang2013detailed} for cycle-accurate NoC simulation (communication fabric). First, we describe the details of the simulator. \noindent\textbf{Input to the simulator:} The inputs to the simulator include the DNN structure, technology node, and frequency of operation. In this work, we consider a layer-by-layer operation. Specifically, we simulate each DNN layer and aggregate the per-layer results to obtain the total performance of the hardware for the DNN. \noindent\textbf{Simulation of computing fabric:} Table~\ref{tab:circuit_param} shows the parameters considered for the simulation of the computing fabric. At the start of the simulation, the number of in-memory computing tiles is computed. 
Then, the area and energy of one tile are computed through analytical models derived from HSPICE simulations. After that, they are multiplied by the total number of tiles to obtain the total area and energy of the computing fabric. The latency of the computing fabric is computed as a function of the workload (the DNN being executed). We note that the original version of NeuroSim considers point-to-point on-chip interconnects, while our proposed work uses a mesh-based NoC. Therefore, we skip the interconnect simulation in NeuroSim. \noindent\textbf{Simulation of communication fabric:} We consider cycle-accurate simulation for the communication fabric, performed with BookSim. First, the number of tiles required for each layer is obtained from the simulation of the computing fabric. In this work, we assume that each tile is connected to a dedicated router of the NoC. Then, a trace file is generated for each layer of the DNN. The trace file contains, for each packet, the source router, the destination router, and the timestamp at which the packet is generated. The trace file is simulated through BookSim to obtain the latency required to finish all the transactions between two layers. We also obtain the area and energy of the interconnect through BookSim. Table~\ref{tab:circuit_param} shows the parameters considered for the interconnect simulator. More details of the simulator can be found in~\cite{krishnan2021interconnect}. For hardware performance modeling, we first obtain the performance of the DNN through simulation; then, the performance numbers are used to construct the analytical performance models. \noindent\textbf{Analytical Area Model:} An in-memory computing-based DNN accelerator consists of two major components: computation and communication. The computation unit consists of multiple tiles and peripheral circuits; the communication unit includes an NoC with routers and other network components (e.g., buffers, links). To estimate the total area, we first compute the number of rows ($N^r_i$) and the number of columns ($N^c_i$) of imPEs required for the $i^\mathrm{th}$ layer of the DNN following Equation~\ref{eq:Nr} and Equation~\ref{eq:Nc}. \begin{equation}\label{eq:Nr} N^r_i = \Big\lceil \frac{Kx_i \times Ky_i \times N^{if}_i}{(PE_x)_i} \Big\rceil \end{equation} \begin{equation}\label{eq:Nc} N^c_i = \Big\lceil \frac{N^{of}_i \times N_{bits}}{(PE_y)_i} \Big\rceil \end{equation} where all the symbols are defined in Table \ref{table:hw_model}. Therefore, the total number of imPEs required for the $i^\mathrm{th}$ layer of the DNN is $N^r_i \times N^c_i$. Each tile consists of $c$ CEs, and each CE consists of $p$ imPEs. Accordingly, each tile comprises $c \times p$ imPEs. Therefore, the total number of tiles required for the $i^\mathrm{th}$ layer of the DNN ($T_i$) is: \begin{equation}\label{eq:T} T_i = \Big\lceil \frac{N^r_i \times N^c_i}{c \times p} \Big\rceil \end{equation} Hence, the total number of tiles ($N_T$) required for a given DNN is $N_T=\sum_i T_i$. 
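As a quick illustration of Equations~(\ref{eq:Nr})--(\ref{eq:T}), the following minimal Python sketch computes the number of tiles required for a DNN from its per-layer shapes. The layer shapes, crossbar dimensions, bit precision, and tile organization ($c$, $p$) used below are illustrative placeholders rather than the exact configuration of our accelerator.

\begin{verbatim}
import math

def tiles_per_layer(Kx, Ky, N_if, N_of, N_bits, PEx, PEy, c, p):
    # imPE rows, imPE columns, and tile count for one layer
    # (cf. the three equations above).
    Nr = math.ceil((Kx * Ky * N_if) / PEx)
    Nc = math.ceil((N_of * N_bits) / PEy)
    return math.ceil((Nr * Nc) / (c * p))

# Total number of tiles N_T for a small two-layer example (illustrative values).
layers = [dict(Kx=3, Ky=3, N_if=16, N_of=32),
          dict(Kx=3, Ky=3, N_if=32, N_of=32)]
N_T = sum(tiles_per_layer(**l, N_bits=8, PEx=256, PEy=256, c=4, p=8)
          for l in layers)
\end{verbatim}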
\begin{table}[t] \caption{Parameters used for simulation of computation and communication fabric.} \scalebox{0.88}{\begin{tabular}{|l|l|l|l|} \hline \multicolumn{2}{|c|}{Circuit} & \multicolumn{2}{c|}{NoC} \\ \hline imPE array size & $128 \times 128$ & Bus width & 32 \\ \hline Cell levels & 2 bit/cell & Routing algorithm & X--Y \\ \hline Flash ADC resolution & 4 bits & Number of router ports & 5 \\ \hline Technology used & RRAM & Topology & Mesh \\ \hline \end{tabular}} \label{tab:circuit_param} \end{table} As shown in Fig.~\ref{fig:imc_arch}(a), each tile is connected to an NoC router for on-chip communication. We assume that the total number of required routers is equal to the total number of tiles. Hence, the total chip area is expressed as follows: \begin{equation} \begin{aligned} \mathcal{A} &= A_{comp} + A_{NoC} \\ & = ( A_{Tile}^{Tot} + A_{Periphery} ) + ( A_{Router}^{Tot} + A_{others} ) \\ & = N_T \times A_T + N_T \times A_R + ( A_{Periphery} + A_{others} ) \\ & = N_T \times (A_T + A_R) + A_{rest} \\ \label{eq:area_model} \end{aligned} \end{equation} where $A_{Tile}^{Tot}$ is the total area of all tiles and $A_{Router}^{Tot}$ is the total area of all routers in the design. The area of a single tile is denoted by $A_{T}$; there are $N_T$ tiles in the design. Therefore $A_{Tile}^{Tot} = N_T \times A_T$. The area of the peripheral circuit ($A_{Periphery}$) includes the I/O interface, max pool unit, accumulation unit, and global buffer. The area of a single router is denoted by $A_{R}$; the number of routers is equal to the number of tiles ($N_T$). Therefore $A_{Router}^{Tot} = N_T \times A_R$. The area of the other components in the NoC ($A_{others}$) comprises the links and buffers. \begin{figure} \centering \vspace{2mm} \includegraphics[width=0.88\textwidth]{figures/model/break_all.pdf} \caption{Layerwise hardware performance breakdown of a DNN with 3 cells ($N_c=3$), 16 layers per cell ($d_c=16$), and a total of 48 layers. (a) Latency breakdown layer by layer: the computation latency accounts for 37.9\% of the total latency, while communication accounts for 62.1\%. (b) Energy consumption breakdown layer by layer: the computation energy accounts for 96.1\% of the total energy, while communication accounts for 3.9\%.} \label{fig:break} \end{figure} \noindent\textbf{Analytical Latency Model:} Similar to area, the total latency consists of computation latency and communication latency, as shown in Fig.~\ref{fig:break}(a). To construct the analytical model of latency, we use the floating-point operations (FLOPs) of the network to represent the computational workload. We observe that the FLOPs of a given network are roughly proportional to the total number of convolution filters (kernels), which is the product of the number of layers and the square of the number of channels per layer (i.e., the width). In the network search space we consider, the width is equivalently represented by the width multiplier $w_m$ and the number of layers is $N_c\times d_c$; hence, we express the number of FLOPs of a given network approximately as the product of the number of layers and the square of the width multiplier: \begin{equation} FLOPs\sim N_c d_c w_m^2 \end{equation} Moreover, communication volume increases significantly due to the skip connections. 
To quantify the communication volume due to skip connections, we define $Comm_c$ (the communication volume of a given network cell $c$) as follows: $$Comm_c=SC_c \times \text{feature map size of each skip connection}$$ Combining the above analysis of computation latency and communication latency, we use a linear model to build our analytical latency model as follows: \begin{equation} \mathcal{L}=\mathcal{L}_{comp} + \mathcal{L}_{NoC} = \Lambda^T_{comp} F_{comp} + \Lambda^T_{NoC} F_{NoC} \end{equation} where $\Lambda^T_{comp}$ is a weight vector and $F_{comp} = [ w_m,d_c,N_c,N_cd_cw_m^2 ]$ is the vector of features with respect to the computation latency; $\Lambda^T_{NoC}$ is another weight vector and $F_{NoC} = [ SC_{c}, Comm_c ]$ is the vector of features corresponding to the NoC latency. We randomly sample some networks from the search space and measure their latency to fine-tune the values of $\Lambda^T_{comp}$ and $\Lambda^T_{NoC}$. \noindent\textbf{Analytical Energy Model:} We divide the total energy consumption into computation energy and communication energy, as shown in Fig.~\ref{fig:break}(b). Specifically, the entire computation process inside each tile consists of three steps: \begin{itemize} \item Read the input feature map from the I/O buffer to the CE; \item Perform computations in the CE and ReLU unit, then update the results in the accumulator; \item Write the output feature map to the I/O buffer. \end{itemize} Therefore, both the feature map size and the FLOPs contribute to the computation energy of a single cell. Moreover, the communication energy consumption is primarily determined by the communication volume, i.e., $Comm_c$. Hence, we use a linear combination of features to estimate the energy consumption of each tile $\mathcal{E}^T$: \begin{equation} \mathcal{E}^T = \Lambda_\mathcal{E}^T F_\mathcal{E} \end{equation} where $\Lambda_\mathcal{E}^T$ is a weight vector and $F_\mathcal{E} = [w_m,d_c,N_c,SC_{c}, Comm_c, FLOP_c, FM_c]$ is the feature vector corresponding to the energy consumption of each tile, with $FM_c$ denoting the feature map size of cell $c$. We use the measured energy consumption values of several sample networks to fine-tune the values of $\Lambda_\mathcal{E}^T$. The total energy consumption ($\mathcal{E}$) is the product of $\mathcal{E}^T$ and the number of tiles: \begin{equation} \mathcal{E}=\Lambda_\mathcal{E}^T F_\mathcal{E} N_T \end{equation} We note that all the features used in both our accuracy predictor and our analytical hardware performance models depend on the network architecture only through the basic parameters $\{w_m,d_c,N_c,SC_{c}\}$. Therefore, the analytical hardware models are lightweight. We note that there exist no other lightweight analytical models for IMC platforms. Besides this, FLASH is general and can be applied to different hardware platforms. For a new hardware platform, the energy, latency, and area of a set of sample DNNs are first collected; then, the analytical hardware models are fitted to these performance data. 
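Since both the latency and the energy models are linear in their feature vectors, fitting them reduces to estimating the weight vectors from the measured samples. The sketch below shows one natural way to do this with ordinary least squares; the text above does not prescribe a specific fitting procedure, and the feature and measurement values used here are purely illustrative.

\begin{verbatim}
import numpy as np

def fit_linear_model(F, y):
    # Least-squares estimate of the weight vector Lambda in  y ~ F @ Lambda.
    Lambda, *_ = np.linalg.lstsq(F, y, rcond=None)
    return Lambda

# Rows: one sampled network each, with features
# [w_m, d_c, N_c, N_c*d_c*w_m^2, SC_c, Comm_c] (illustrative numbers).
F = np.array([[1, 10, 3,  30.0, 100, 4.0e5],
              [2, 16, 3, 192.0, 300, 1.2e6],
              [3, 24, 3, 648.0, 800, 3.1e6]])
y = np.array([2.1, 5.4, 13.7])   # measured latencies (arbitrary units)
Lambda = fit_linear_model(F, y)
pred = F @ Lambda                # predicted latency for the sampled networks
\end{verbatim}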
\begin{algorithm}[t] \SetAlgoLined \KwInput{ \\ \qquad Objective function: $f_{obj}$; \\ \qquad Global search space: \\ \qquad \quad $SP_{global}=[N_{cmin},N_{cmax}]\times [w_{m_{min}}, w_{m_{max}}] \times [d_{cmin}, d_{cmax}]\times [SC_{cmin}, SC_{cmax}]$; \\ \qquad Search constraints: $S_{cons}=\{L_M, E_M, A_M,\theta_M\}$ ;\\ \qquad Coarse-grain search step size: $\lambda $ } \KwOutput{\\ \qquad The optimal architecture $\{w_m^{*},N_c^{*}, d_c^{*},SC_c^{*}\}$;\\ \\ } \spprocess{ Initialize Candidate Architecture Set ($CAS$) as empty set; \textbf{level 1: Fixed-$w_m$ Search} \For {$w_m$ \text{in} $[w_{m_{min}}, w_{m_{max}}]$} { \textbf{level 2: Coarse-grain Search} fix $w_m$, search the optimum $N_c^{G}, d_c^{G},SC_c^{G}$ with large search step $\lambda$ $N_c^{G}, d_c^{G},SC_c^{G}$=SHGO(${f_{obj}}$, $SP_{global}$, $S_{cons}$, search step size=$\lambda$ ) \qquad \textbf{level 3: Fine-grain Search} \qquad within the neighbourhood of $N_c^{G}, d_c^{G},SC_c^{G}$, search the optimum $N_c^{L}, d_c^{L},SC_c^{L}$ \qquad Local search space: $SP_{local}=\{ N_c^{G}\pm 2\lambda, d_c^{G}\pm 2\lambda,SC_c^{G}\pm 2\lambda\}$ \qquad $N_c^{L}, d_c^{L},SC_c^{L}$=SHGO(${f_{obj}}$, $SP_{local}$, $S_{cons}$, search step size=$\mathrm{1}$ ) Add $\{w_m,N_c^{L}, d_c^{L},SC_c^{L}\}$ to $CAS$ } Compare the candidate architectures in $CAS$ and find the optimum $\{w_m^{*},N_c^{*}, d_c^{*},SC_c^{*}\}$. \textbf{Return} {$\{w_m^{*},N_c^{*}, d_c^{*},SC_c^{*}\}$} } \caption{Our hierarchical SHGO-based search algorithm} \label{alg:shgo} \end{algorithm} \subsection{Optimal Neural Architecture Search} Based on the above accuracy predictor and analytical hardware performance models, we perform the second stage of our NAS methodology, i.e., searching for the optimal neural architecture by considering both the test accuracy and the hardware performance on the target hardware. To this end, we use a modified version of the Simplicial Homology Global Optimization (SHGO \cite{shgo}) algorithm to search for the optimum architecture. SHGO has mathematically rigorous convergence properties on non-linear objective functions and constraints and can solve derivative-free optimization problems\footnote{The detailed discussion of SHGO is beyond the scope of this paper. More details are available in \cite{shgo}.}. Moreover, the convergence of SHGO requires far fewer samples and less time than reinforcement learning approaches \cite{jiang2020device}. Hence, we use SHGO for our new \textit{hierarchical} search algorithm. Specifically, as shown in Algorithm \ref{alg:shgo}, to further accelerate the search process, we propose a \textit{three-level} SHGO-based algorithm instead of using the original SHGO algorithm. At the first level, we enumerate $w_m$ in the search space. Usually, the range of $w_m$ is much narrower than that of the other architecture parameters; hence, without fixing $w_m$, we cannot use a large search step size for the second-level \textit{coarse-grain search}. At the second level, we use SHGO with a large search step size $\lambda$ to search for a coarse optimum $N_c^{G}, d_c^{G},SC_c^{G}$ for the fixed $w_m$. At the third level (\textit{fine-grain search}), we use SHGO with the smallest search step size (i.e., 1) to search for the optimum $N_c^{L}, d_c^{L},SC_c^{L}$ values for a specific $w_m$, within the neighborhood of the coarse optimum $N_c^{G}, d_c^{G},SC_c^{G}$, and add the result to the candidate set. 
After completing the three-level search, we compare all neural architectures in the candidate set and determine the (final) optimal architecture $\{w_m^{*},N_c^{*}, d_c^{*},SC_c^{*}\}$. To summarize, given the number of hyper-parameters $M$ and the number of possible values of each hyper-parameter $N$, the complexity of our hierarchical SHGO-based NAS is roughly proportional to $MN$, i.e., $O(MN)$. Experimental results in Section \ref{sec:experimental_results} show that our proposed hierarchical search accelerates the overall search process without any decrease in the performance of the obtained neural architecture. Moreover, our proposed hierarchical SHGO-based algorithm involves a much smaller computational workload than the original (one-level) SHGO-based algorithm and RL-based approaches~\cite{jiang2020device}; this even enables us to do NAS on a real Raspberry Pi-3B processor.

\section{Experimental Results} \label{sec:experimental_results} \subsection{Experimental setup} \noindent\textbf{Dataset:} Existing NAS approaches show that the test accuracy of CNNs on the CIFAR-10 dataset is indicative of their test accuracy on other datasets, such as ImageNet~\cite{dong2020nasbench201}. Hence, similar to most NAS approaches, we use CIFAR-10 as the primary dataset. Moreover, we also evaluate our framework on CIFAR-100 and Tiny-ImageNet\footnote{Tiny-ImageNet is a downscaled version of the ImageNet dataset with 64x64 resolution and 200 classes~\cite{img_net}. For more details, please check: \url{http://cs231n.stanford.edu/tiny-imagenet-200.zip} } to demonstrate the generality of our proposed metric NN-Degree and accuracy predictor.

\noindent\textbf{Training Hyper-parameters:} We train each of the selected neural networks five times with PyTorch and use the mean test accuracy of these five runs as the final result. All networks are trained for 200 epochs with the SGD optimizer and a momentum of 0.9. We set the initial learning rate to 0.1 and use the Cosine Annealing algorithm as the learning rate scheduler.

\noindent\textbf{Search Space:} DenseNets are more efficient in terms of model size and computation workload than ResNets while achieving the same test accuracy~\cite{densenet}. Moreover, DenseNets have many more skip connections; this provides us with more flexibility for exploration compared to networks with Addition-type skip connections (ResNets, Wide-ResNets, and MobileNets). Hence, in our experiments, we explore CNNs with DenseNet-type skip connections. To enlarge the search space, we generate a generalized version of standard DenseNets by randomly selecting channels for concatenation. Specifically, for a given cell $c$, we define $t_c$ as the maximum number of channels that \textit{each layer} can receive through skip connections; thus, we use $t_c$ to control the topological properties of the CNNs. Given the definition of $t_c$, layer $i$ can receive DenseNet-type skip connections (DTSC) from at most $t_c$ channels of the previous layers within the same cell; that is, we randomly select $\min\{w_c(i-1), t_c\}$ channels from layers ${0,1,...,(i-2)}$ and concatenate them with the output of layer $i-1$ to form the input of layer $i$. The concatenated channels then pass through a convolutional layer to generate the output of layer $i$ ($s_i$). Similar to recent NAS research~\cite{Darts}, we select the links randomly because random architectures are often as competitive as carefully designed ones. If the skip connections encompass all-to-all connections, this would result in the original DenseNet architecture~\cite{densenet}.
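The PyTorch sketch below shows how such a cell could be assembled; it illustrates only the channel-selection rule described above and is not our training code. The fixed random choice at construction time, the $3\times 3$ convolutions, and the BatchNorm/ReLU ordering are assumptions.
\begin{verbatim}
import torch
import torch.nn as nn

class DTSCCell(nn.Module):
    """One cell with DenseNet-type skip connections (DTSC).
    w_c: channels per layer, d_c: number of layers, t_c: max skip channels
    per layer.  The random channel choice is drawn once at construction time
    (an assumption of this sketch)."""

    def __init__(self, in_ch, w_c, d_c, t_c):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, w_c, 3, padding=1)    # layer 0
        self.layers = nn.ModuleList()
        self.skip_idx = []                                 # skip channels chosen for layer i >= 1
        for i in range(1, d_c):
            n_skip = min(w_c * (i - 1), t_c) if i >= 2 else 0
            self.skip_idx.append(torch.randperm(max(w_c * (i - 1), 1))[:n_skip])
            self.layers.append(nn.Sequential(
                nn.Conv2d(w_c + n_skip, w_c, 3, padding=1),
                nn.BatchNorm2d(w_c),
                nn.ReLU(inplace=True)))

    def forward(self, x):
        outs = [self.stem(x)]                              # s_0
        for i, layer in enumerate(self.layers, start=1):
            inp = outs[-1]                                 # output of layer i-1
            idx = self.skip_idx[i - 1]
            if idx.numel() > 0:
                bank = torch.cat(outs[:-1], dim=1)         # channels of layers 0..i-2
                inp = torch.cat([inp, bank[:, idx]], dim=1)
            outs.append(layer(inp))                        # s_i
        return outs[-1]

# Example: 16 channels per layer, 5 layers, at most 20 skip channels per layer.
cell = DTSCCell(in_ch=3, w_c=16, d_c=5, t_c=20)
print(cell(torch.randn(1, 3, 32, 32)).shape)               # torch.Size([1, 16, 32, 32])
\end{verbatim}
Stacking $N_c$ such cells, doubling the width and halving the feature map resolution from cell to cell, yields the candidate networks described next.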
An important advantage of the above setup is that we can control the number of DTSC (using $t_c$) to cover a vast search space with a large number of candidate DNNs. Like standard DenseNets, we can generalize this setup to contain multiple ($N_c$) cells of width $w_c$ and depth $d_c$; DTSC are present \textit{only within a cell} and not across cells. Furthermore, we increase the width (i.e., the number of output channels per layer) by a factor of 2 and halve the height and width of the feature map cell by cell, following the standard practice ~\cite{simonyan2014vgg}. After several cells (groups) of convolutions layers, the final feature map is average-pooled and passed through a fully-connected layer to generate the logits. The width of each cell is controlled using a width multiplier, $w_m$ (like in Wide-ResNets~\cite{wide_resnet}). The base number of channels of each cell is [16,32,64]. For $w_m = 3$, cells will have [48,96,192] channels per layer. To summarize, we control the value $\{w_m,N_c,d_c,t_{c}\}$ to sample candidate architectures from the entire search space. \begin{figure}[b] \centering \includegraphics[width=0.88\textwidth]{figures/search_space_v2.pdf} \vspace{-2mm} \caption{An example of candidate neural architectures from our search space. (The values of $w_c$ ,$t_c$, and $d_c$ are only for illustration and they do not represent the real search space). Not all skip connections are shown in the figure, for simplicity. The upper inset shows the contribution from all skip and short-range links to layer $i=2$: The feature maps for the randomly selected channels are concatenated as the input of the current layer $i=2$ (similar to DenseNets~\cite{densenet}). At each layer in a given cell, the maximum number of channels contributing to skip connections is controlled by $t_c$.} \label{fig:cnn_cell} \end{figure} Fig.~\ref{fig:cnn_cell} illustrates a sample CNN similar to the candidate architectures in our search space (small values of $w_c$ and $d_c$ are used for clarity). This CNN consists of three cells, each containing $d_c = 4$ convolutional layers. The three cells have a width (i.e., the number of channels per layer) of 2, 3, and 4, respectively. We denote the network width as $w_c = [2,3,4]$. Finally, the maximum number of channels that can supply skip connections is given by $t_c = [2,5,6]$. That is, the first cell can have a maximum of two skip connection candidates per layer (i.e., previous channels that can supply skip connections), the second cell can have a maximum of five skip connections candidates per layer, and so on. Moreover, as mentioned before, we randomly choose $min\{w_c(i-1), t_c\}$ channels for skip connections at each layer. The inset of Fig. \ref{fig:cnn_cell} shows for a specific layer, how skip connections are created by concatenating the feature maps from previous layers. In practice, we use three cells for the CIFAR-10 dataset, i.e., $N_c=3$. We constrain the $1\leq w_m\leq 3$ and $5\leq d_c\leq 30$. We also constrain $t_c$ of each cell: $5\leq t_{1}$, $2t_{1}\leq t_{2}$ and $2t_{2}\leq t_{3}$ for these three cells, respectively. In this way, we can balance the number of skip connections across each cell. Moreover, the maximum number of skip connections that a layer can have is the product of the width of the cell ($w_c$) and $d_c-2$ which happens for the last layer in a cell concatenating all of the output channels except the second last layer. Hence, the upper bound of $t_c$, for each cell, is $16w_m(d_c -2),32w_m(d_c -2),64w_m(d_c -2)$, respectively. 
Therefore, the size of the overall search space is: $$\sum_{w_m=1}^{3}\sum_{d_c =5}^{30}\sum_{t_{1} =5}^{16w_m(d_c -2)}\sum_{t_{2} =2t_{1}}^{32w_m(d_c -2)}({64w_m(d_c -2)- 2t_{2}}+1)=6.39 \times 10^{10}$$ \noindent\textbf{Hardware Platform:} The training of the sample neural architectures from the search space is conducted on an Nvidia GTX-1080Ti GPU. We use an Intel Xeon 6230, a 20-core CPU, to simulate the hardware performance of multiple candidate networks and to fine-tune the accuracy predictor and the analytical hardware models. Finally, we use the same 20-core CPU to conduct the NAS process. \subsection{Accuracy Predictor} \begin{figure} [b] \centering \includegraphics[width=0.96\textwidth]{figures/results/acc_deg.pdf} \vspace{-4mm} \caption{We randomly select multiple networks from the search space, then train and test their accuracy on the \textbf{CIFAR-10, CIFAR-100, and Tiny-ImageNet} datasets. (a) Real test accuracy vs. NN-Degree: networks with higher NN-Degree values have a higher test accuracy on the \textbf{CIFAR-10} dataset. (b) Real test accuracy vs. NN-Degree: networks with higher NN-Degree values have a higher test accuracy on the \textbf{CIFAR-100} dataset. (c) Real test accuracy vs. NN-Degree: networks with higher NN-Degree values have a higher test accuracy on the \textbf{Tiny-ImageNet} dataset. } \label{fig:acc_deg} \end{figure} \begin{figure} [b] \centering \includegraphics[width=0.96\textwidth]{figures/results/acc_pred.pdf} \vspace{-4mm} \caption{(a) Predictions of our NN-Degree based accuracy predictor vs. real test accuracy on the \textbf{CIFAR-10} dataset. (b) Predictions of our NN-Degree based accuracy predictor vs. real test accuracy on the \textbf{CIFAR-100} dataset. (c) Predictions of our NN-Degree based accuracy predictor vs. real test accuracy on the \textbf{Tiny-ImageNet} dataset. The red dotted lines in these figures show a very good correlation between the predicted and measured values.} \label{fig:acc_predict} \end{figure} \begin{table}[t] \caption{Our NN-Degree based accuracy predictor for neural architecture search vs. existing predictors implemented by graph-based neural networks. We calculate the improvement ratio for each metric by considering the best among all existing approaches in this table. 
(`-' denotes that the corresponding results are not reported or not applicable.)} \vspace{-3mm} \footnotesize \setlength\tabcolsep{3.5pt} \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline \multicolumn{1}{|c|}{\multirow{2}{*}{\begin{tabular}[c]{@{}c@{}}Accuracy Estimation\\ Technique\end{tabular}}} & \multicolumn{2}{c|}{Search Space (SS) Size} & \multicolumn{2}{c|}{\# Training Samples} & \multicolumn{1}{c|}{\multirow{2}{*}{RMSE (\%)}} & \multicolumn{2}{c|}{Training Time (s)} \\ \cline{2-5} \cline{7-8} \multicolumn{1}{|c|}{} & \multicolumn{1}{c|}{Value} & \multicolumn{1}{c|}{\% of FLASH SS } & \multicolumn{1}{c|}{Value} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Ratio ($\times$)\\ w.r.t FLASH \end{tabular}} & \multicolumn{1}{c|}{} & \multicolumn{1}{c|}{Value} & \multicolumn{1}{c|}{\begin{tabular}[c]{@{}c@{}}Ratio ($\times$)\\ w.r.t FLASH\end{tabular}} \\ \hline GNN+MLP~\cite{eccv_gates} & $4.2 \times 10^5$ & $6.6 \times 10^{-4}$ \% & $3.8 \times 10^5$ & 15250 & - & - & - \\ \hline GNN~\cite{pr_2020_gnn_acc_pre} & $4.2 \times 10^5$ & $6.6 \times 10^{-4}$ \% & $3.0 \times 10^5$ & 11862 & 0.05 & - & - \\ \hline GCN~\cite{brp_nas} & $1.6 \times 10^4$ & $2.5 \times 10^{-5}$ \% & $1.0 \times 10^3$ & 40 & \textgreater{}1.8 & - & - \\ \hline GCN~\cite{yiran_gnn} & $4.2 \times 10^5$ & $6.6 \times 10^{-4}$ \% & $1.7 \times 10^2$ & 6.88 & 1.4 & 25 & 66 \\ \hline \textbf{\begin{tabular}[c]{@{}l@{}}FLASH (NN-Degree +\\ Logistic Regression) \end{tabular}} & $\mathbf{6.4 \times 10^{10}}$ & \textbf{100\%} & $\mathbf{2.5 \times 10^1}$ & \textbf{1} & \textbf{0.152} & \textbf{0.38} & \textbf{1} \\ \hline \end{tabular} \label{tab:acc_predict} \end{table} We first derive the NN-Degree ($g$) for the neural architecture in our search space. Based on Equation \ref{eq:nn_deg}, we substitute $SC_c$ with the real number of skip connections in a cell as follows: \begin{equation} \begin{split} g =\sum_{c=1}^{N_c}(w_c +\frac{SC_c}{w_c\times d_c}) = \sum_{c=1}^{N_c} (w_c +\frac{\sum_{i=2}^{d_c-1} \text{min}\{(i-1)w_c,t_c\}}{d_c} ) \end{split} \end{equation} In Section \ref{sec:methodology}, we argue that the neural architecture with a higher NN-degree value tends to provide a higher test accuracy. In Fig. \ref{fig:acc_deg}(a), we plot the test accuracy vs. NN-Degree of 60 randomly sampled neural networks from the search space for CIFAR-10 dataset; our proposed network-topology based metric NN-Degree indicates the test accuracy of neural networks. Furthermore, Fig~\ref{fig:acc_deg}(b) and Fig~\ref{fig:acc_deg}(c) also show the test accuracy vs. NN-Degree of 20 networks on CIFAR-100 dataset and 27 networks on Tiny-ImageNet randomly sampled from the search space. Clearly, our proposed metric NN-Degree predicts the test accuracy of neural networks on these two datasets as well. Indeed, the results prove that our claim in Section \ref{sec:methodology} is empirically correct, i.e., networks with higher NN-Degree values have a better test accuracy. Next, we use our proposed NN-Degree to build the analytical accuracy predictor. We train as few as 25 sample architectures randomly sampled from the entire search space and record their test accuracy and NN-Degree on CIFAR-10, CIFAR-100, and Tiny-ImageNet datasets. Then, we fine-tune our NN-Degree based accuracy predictor described by Equation \ref{fig:acc_predict}. As shown in Fig. 
\ref{fig:acc_predict}(a), Fig~\ref{fig:acc_predict}(b), and Fig~\ref{fig:acc_predict}(c), our accuracy predictor achieves very high performance while using surprisingly few samples with only three parameters on all these datasets. We also compare our NN-Degree-based accuracy predictor with the current state-of-the-art approaches. As shown in Table~\ref{tab:acc_predict}, most of the existing approaches use Graph-based neural networks to make predictions~\cite{yiran_gnn,pr_2020_gnn_acc_pre,brp_nas,eccv_gates}. However, Graph-based neural networks require much more training data, and they are much more complicated in terms of computation and model structure compared to classical methods like logistic regression. Due to the significant reduction in the model complexity, our predictor requires $6.88\times$ fewer training samples, although a much larger search space ($1.5\times 10^5$ larger than the existing work) is covered. Moreover, our NN-Degree based predictor has only three parameters to be updated; hence it consumes $66\times$ less fine-tuning time than the existing approaches. Finally, besides such low model complexity and fast training process, our predictor achieves a very small RMSE (0.152\%) as well. During the search of our NAS methodology, we use the accuracy predictor to directly predict the accuracy of sample architectures as opposed to performing the time-consuming training. The high precision and low complexity of our proposed accuracy predictor also enable us to adopt very fast optimization methods during the search stage. Furthermore, because our proposed metric NN-Degree can predict the test performance of a given architecture, we can use NN-Degree as the proxy of the test accuracy to do NAS without the time-consuming training process. This \textit{training-free} property allows us to quickly compare the accuracy of given architectures and thus accelerate the entire NAS. \begin{figure}[t] \centering \includegraphics[width=0.96\textwidth]{figures/tf_free_nas_v8.pdf} \vspace{-3mm} \caption{Overview of the proposed training-free NAS approach. Stage 1 (red box): we build hardware (HW) performance models by randomly sampling candidate networks from the search space to evaluate the hardware characteristics (latency $\mathcal{L}$, energy $\mathcal{E}$, and area $\mathcal{A}$). Stage 2 (blue box): we search for the optimal network architecture with the hardware performance constraints (i.e., $\mathcal{L}_M$, $\mathcal{E}_M$, and $\mathcal{A}_M$); we randomly choose some architectures and use the HW performance models to estimate their hardware performance. Then, we select the neural architecture $D^*$ with the highest NN-Degree which meets the HW performance constraints. Finally, we train the obtained architecture $D^*$ to get the optimal neural architecture. } \label{train_free_nas} \end{figure} \subsection{NN-Degree based Training-free NAS}\label{subsec:tf_nas} \begin{table} \caption{Our NN-Degree based training-free NAS (\textbf{FLASH}) and several representative time-efficient NAS on CIFAR-10 Dataset. 
We select the optimal architectures with the highest NN-Degree values among 20,000 randomly sampled architectures on a 20-core CPU.} \vspace{-3mm} \scalebox{0.88}{ { \begin{tabular}{|l|l|l|l|l|l|} \hline Method& Search Method& {\#Params} & {Search Cost} & {Training needed} & {Test error (\%)} \\ \hline\hline ENAS\cite{hyper_nas}& RL+weight sharing & 4.6M & 12 GPU hours& Yes & 2.89\\\hline SNAS\cite{xie2018snas}& gradient-based & 2.8M & 36 GPU hours& Yes & 2.85\\\hline DARTS-v1\cite{Darts} & gradient-based & {3.3M}& {1.5 GPU hours}& {Yes}& {3.0}\\\hline DARTS-v2\cite{Darts} & gradient-based & {3.3M}& {4 GPU hours}& {Yes}& {2.76} \\\hline ProxylessNAS\cite{cai2018proxylessnas} & gradient-based & {5.7M}& {NA} & {Yes}& {2.08} \\\hline Zero-Cost\cite{tf_nas1} & Proxy-based& {NA}& {NA} & {Yes}& {5.78} \\\hline TE-NAS\cite{tf_nas2} & Proxy-based & {3.8M}& {1.2 GPU hours} & {No}& {2.63} \\\hline {\textbf{FLASH}}& \textbf{NN-Degree based}& {\textbf{3.8M}} & {\textbf{0.11 seconds}} & {\textbf{No}}& {\textbf{3.13}} \\ \hline \end{tabular}}} \label{notrain_tab} \end{table} To conduct the training-free NAS, we reformulate the problem described by Equation~\ref{eq:problem_definition} as follows: \begin{equation} \max \theta,\quad \text{subject\ to:} \ \mathcal{A} \leq \mathcal{A}_M, \ \mathcal{L} \leq \mathcal{L}_M, \ \mathcal{E} \leq \mathcal{E}_M\\ \label{eq:problem_definition_raw} \end{equation} \noindent{To maximize the values of $\theta$, we can search for the network with maximal \textit{NN-Degree} values, which eliminate the training time of candidate architectures. In Fig.~\ref{train_free_nas}, we show how we can use the NN-Degree to do training-free NAS. During the first stage, we profile a few networks on the target hardware and fine-tune our hardware performance models. During the second stage, we randomly sample candidate architectures and select those which meet the hardware performance constraints. We use the fine-tuned analytical models to estimate the hardware performance instead of doing real inference, which improves the time efficiency of the entire NAS. After that, we select the optimal architecture with the highest NN-Degree values which meets the hardware performance constraints. We note that the NAS process itself is training-free (hence lightweight), as only the final solution $D^*$ needs to be trained. } To evaluate the performance of our training-free NAS framework, we randomly sample 20,000 candidate architectures from the search space and select the one with the highest NN-Degree values as the obtained/optimal architecture. Specifically, it takes only 0.11 seconds to evaluate these 20,000 samples' NN-Degree on a 20-core CPU to get the optimal architecture (no GPU needed). As shown in Table~\ref{notrain_tab}, the optimal architecture among these 20,000 samples achieves a comparable test performance with the representative time-efficient NAS approaches but with much less time cost and computation capacity requirement. \begin{figure} [b] \centering \includegraphics[width=1\textwidth]{figures/results/img_hw_models.pdf} \vspace{-8mm} \caption{Performance of our analytical hardware models on \textbf{ImageNet} classification networks: (a) Predicted values by our analytical area model vs. measured area. (b) Predicted values by our analytical latency model vs. measured latency. (c) Predicted values by our analytical energy model vs. measured energy consumption. 
The red lines demonstrate that our proposed models generalize well for networks evaluated on ImageNet-scale datasets.} \label{fig:img_hw_model_perf} \end{figure} \subsection{Analytical hardware performance models} Our experiments show that using 180 samples offers a good balance between the analytical models' accuracy and the number of fine-tuning samples. Hence, we randomly select 180 neural architectures from the search space to build our analytical hardware performance models. Next, we perform the inference of these selected 180 networks on our simulator~\cite{krishnan2021interconnect} to obtain their area, latency, and energy consumption. After obtaining the hardware performance of 180 sample networks, we fine-tune the parameters of our proposed analytical area, latency, and energy models discussed in Section \ref{sec:methodology}. To evaluate the performance of these fine-tuned models, we randomly select another 540 sample architectures from the search space then conduct inference and obtain their hardware performance. Table~\ref{tab:hw_model_perf} summarizes the performance of our analytical models. The mean estimation error is always less than 4\%. Fig. \ref{fig:img_hw_model_perf} shows the estimated hardware performance obtained by our analytical model for the ImageNet dataset. We observe that the estimation coincides with the measured values from simulation. Our analytical models enable us to obtain very accurate predictions of hardware performance with the time cost of less than 1 second on a 20-core CPU. The high performance and low computation workload enable us to directly adopt these analytical models to accelerate our searching stage instead of conducting real inference. \begin{table} \caption{Summary of the performance of our proposed analytical models for Area, Latency, and Energy. } \scalebox{0.76}{ \begin{tabular}{|l|l|l|l|l|} \hline Model& \#Features & Mean Error (\%) & Max Error (\%) & Fine-tuning Time (s)\\ \hline\hline Area& 2 & 0.1 & 0.2 & 0.49 \\ \hline Latency& 9 & 3.0 & 20.8 & 0.52 \\ \hline Energy& 16 &3.7 & 24.4 & 0.56 \\ \hline \end{tabular}} \label{tab:hw_model_perf} \end{table} \begin{table}[t] \caption{Estimation error with different ML models for \textbf{ImageNet} with IMC as target hardware platform.} \scalebox{0.76}{\begin{tabular}{|l|l|l|l|} \hline & SVM & \begin{tabular}[c]{@{}c@{}}Random Forest\\ (Max. Depth = 16)\end{tabular} & \begin{tabular}[c]{@{}c@{}}Analytical Models\\ (Proposed)\end{tabular} \\ \hline Latency Est. Error (\%) & 58.98 & 8.23 & 6.7 \\ \hline Energy Est. Error (\%) & 78.49 & 11.01 & 3.5 \\ \hline Area Est. 
Error (\%) & 36.99 & 13.37 & 1.7 \\ \hline \end{tabular}} \label{tab:other_ml} \end{table} \begin{figure} [b] \centering \vspace{3mm} \includegraphics[width=1\textwidth]{figures/results/c10.pdf} \caption{Performance comparison between our mesh-NoC and the cmesh-NoC~\cite{shafiee2016isaac} on \textbf{CIFAR-10} classification networks for 16 different networks: (a) Our mesh-NoC needs much less area than the cmesh-NoC; (b) Our mesh-NoC has almost the same latency as the cmesh-NoC; (c) Our mesh-NoC consumes much less energy than the cmesh-NoC.} \label{fig:cmesh_vs_mesh} \end{figure} \begin{figure} [htb] \centering \vspace{2mm} \includegraphics[width=1\textwidth]{figures/results/img_mesh_cmesh.pdf} \vspace{-6mm} \caption{Performance comparison between our mesh-NoC and the cmesh-NoC~\cite{shafiee2016isaac} on \textbf{ImageNet} classification networks for 15 different networks: (a) Our mesh-NoC needs much less area than the cmesh-NoC; (b) Our mesh-NoC has almost the same latency as the cmesh-NoC; (c) Our mesh-NoC consumes much less energy than the cmesh-NoC.} \label{fig:cmesh_vs_mesh_img} \end{figure} \noindent\textbf{Comparison with other machine learning models:} Table~\ref{tab:other_ml} compares the estimation error of SVM, of a random forest with a maximum tree depth of 16, and of the proposed analytical hardware models on the ImageNet dataset. A maximum tree depth of 16 is chosen for the random forest since it provides the best accuracy among the random forest models. We observe that our proposed analytical hardware models achieve the smallest error among all three modeling techniques. SVM performs poorly since it tries to fit the data with a hyper-plane, and no such plane may exist given the complex relationship between the features and the performance of the hardware platform.

\subsection{On-chip communication optimization} As shown in Fig.~\ref{fig:cmesh_vs_mesh} and Fig.~\ref{fig:cmesh_vs_mesh_img}, we compare the NoC performance (area, energy, and latency) of the mesh-NoC used in FLASH against the cmesh-NoC~\cite{shafiee2016isaac} for 16 networks randomly selected from the search space for the CIFAR-10 dataset and 15 networks for the ImageNet dataset, respectively. We observe that the mesh-NoC occupies on average only 37\% of the area and consumes only 41\% of the energy of the cmesh-NoC. Since the cmesh-NoC uses extra links and repeaters to connect diagonal routers, its area and energy are significantly higher than those of the mesh-NoC. The additional links and routers in the cmesh-NoC result in lower hop counts than the mesh-NoC. However, the lower hop count reduces the latency only at low congestion. As the congestion in the NoC increases, the latency of the cmesh-NoC becomes higher than that of the mesh-NoC due to the increased utilization of the additional links. This phenomenon is also demonstrated in~\cite{grot2008scalable}. Therefore, the communication latency with the cmesh-NoC is higher than with the mesh-NoC for most of the DNNs; on average, the communication latency of the mesh-NoC is within 3\% of that of the cmesh-NoC. Moreover, we observe that the average utilization of the queues in the mesh-NoC varies between 20\%-40\% for the ImageNet dataset, and the maximum utilization of the queues ranges from 60\% to 80\%. Therefore, the NoC operates under heavy congestion, which is precisely the regime in which the mesh-NoC is preferable. Thus, our proposed communication optimization strategy outperforms the state-of-the-art approaches.
\subsection{Hierarchical SHGO-based neural architecture search}\label{sec:res_nas} After fine-tuning the NN-Degree based accuracy predictor and the analytical hardware performance models, we use our proposed hierarchical SHGO-based search algorithm to perform the neural architecture search.

\noindent\textbf{Baseline approach:} Reinforcement Learning (RL) is widely used in NAS~\cite{jiang2020device, hsu2018monas, cell_1}; hence, we have implemented an RL-based NAS framework as a baseline. For the baseline, we consider the objective function in Equation~\ref{eq:problem_definition}. Specifically, we incorporate a deep-Q network approach for the baseline-RL~\cite{mnih2013playing}. We construct four different controllers for the number of cells ($N_c$), the cell depth ($d_c$), the width multiplier ($w_m$), and the number of long skip connections ($SC_c$). The training hyper-parameters for the baseline-RL are shown in Table~\ref{tab:params_RL}. The baseline-RL approach estimates the optimal parameters ($N_c, d_c, w_m, SC_c$). We tune the baseline-RL approach to obtain the best possible results. We also implement a one-level SHGO algorithm (i.e., the original SHGO) as another baseline to show the efficiency of our hierarchical algorithm. \begin{table}[t] \caption{Parameters chosen for the baseline-RL approach.} \scalebox{0.76}{ \begin{tabular}{|l|l||l|l|} \hline Metric & Value & Metric & Value\\ \hline \hline Number of layers & 3 & Learning rate & 0.001 \\ \hline Number of neurons in each layer & 20 & Activation & softmax \\ \hline Optimizer & ADAM & Loss & MSE \\ \hline \end{tabular}} \label{tab:params_RL} \end{table} \begin{table} [b] \caption{Comparison between the RL-based search, the one-level SHGO-based search, and our proposed hierarchical SHGO-based search. ``No constraint'' means that we do not set any bounds on the accuracy, area, latency, and energy consumption of the networks; we compare FLASH with RL when there are no constraints. For searching with constraints, we set the minimum accuracy to 95.8\% ($\theta\geq \theta_M=95.8\%$) as an example; we compare FLASH with the one-level SHGO because RL does not converge. The quality of the model is calculated by the objective function in Equation~\ref{eq:problem_definition} (higher is better). } \scalebox{0.88}{ \begin{tabular}{|p{1.7cm}|p{4.4cm}|p{1.6cm}|p{1.6cm}|p{2.8cm}|l|} \hline Constraints involved? & Method & Search cost (\#Samples) & Search Time (s)& Quality of obtained model (Eq.~\ref{eq:problem_definition}) & Converge? \\ \hline\hline \multirow{4}{*}{No} & RL & 10000 & 1955 & 20984 & Yes \\ \cline{2-6} & one-level SHGO & 23 & 0.03 & 20984 & Yes \\ \cline{2-6} & \textbf{hierarchical SHGO (FLASH)} & 69 & 0.07 & 20984 & Yes \\ \cline{2-6} & \textbf{Improvement} & 144.93$\times$ & 27929$\times$ & $1\times$ & - \\ \hline\hline \multirow{4}{*}{Yes. $\theta\geq \theta_M$} & RL & >10000 & -& - & No \\ \cline{2-6} & one-level SHGO & 1195 & 3.82 & 10550 & Yes \\ \cline{2-6} & \textbf{hierarchical SHGO (FLASH)} &170 & 0.26 & 11969& Yes \\ \cline{2-6} & \textbf{Improvement} & 7.03$\times$ & 14.7$\times$ & 1.13$\times$& - \\ \hline \end{tabular}} \label{tab:sea_alg_comp} \end{table} We compare the baseline-RL approach with our proposed SHGO-based optimization approach. As shown in Table \ref{tab:sea_alg_comp}, when there is no constraint in terms of accuracy and hardware performance, our hierarchical SHGO-based algorithm brings negligible overhead compared to the one-level SHGO algorithm.
Moreover, our hierarchical SHGO-based algorithm needs much fewer samples ($144.93\times$) during the search process than RL-based methods. Our proposed search algorithm is as fast as 0.07 seconds and 27929$\times$ faster than the RL-based methods, while achieving the same quality of the solution! As for the searching with specific constraints, the training of RL-based methods cannot even converge after training with 10000 samples. Furthermore, our hierarchical SHGO-based algorithm obtains a better-quality model with $7.03\times$ fewer samples and $14.7\times$ less search time compared to the one-level SHGO algorithm. The results show that our proposed hierarchical strategy further improves the efficiency of the original SHGO algorithm. \begin{figure} [t] \centering \vspace{0mm} \includegraphics[width=0.88\textwidth]{figures/results/rpi.pdf} \vspace{-4mm} \caption{(a) Predictions of our analytical latency models vs. measured values for RPi-3B. (b) Predictions of our analytical energy consumption models vs. measured values for RPi-3B. The red dotted lines in these two figures show a high correlation between predicted and measured values.} \label{fig:rpi_hw_model_perf} \end{figure} \begin{figure} [ht] \centering \includegraphics[width=0.88\textwidth]{figures/results/mc1.pdf} \vspace{-4mm} \caption{(a) Predictions of our analytical latency models vs. measured values for MC1. (b) Predictions of our analytical energy consumption models vs. measured values for MC1. The red dotted lines in these two figures show a very good correlation between the predicted and measured values.} \label{fig:mc1_hw_model_perf} \end{figure} \begin{table}[b] \caption{Comparison between one-level and hierarchical SHGO-based search on RPi-3B and Odroid MC1. For searching with constraints, we set the minimal accuracy being 96\% ($\theta\geq \theta_M=96\%$) as an example. The quality of the model is calculated by Equation~\ref{eq:rasp_obj} (higher is better).} \vspace{-4mm} \scalebox{0.88}{ \begin{tabular}{|l|l|l|l|l|l|l|l|} \hline \multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Constraints\\ involved?\end{tabular}} & \multirow{2}{*}{Method} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}Search Cost\\ (\# Samples)\end{tabular}} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}Search time\\ (s)\end{tabular}} & \multicolumn{2}{l|}{\begin{tabular}[c]{@{}l@{}}Model Quality\\ (Equation~\ref{eq:rasp_obj})\end{tabular}} \\ \cline{3-8} & & RPi-3B & MC1 & RPi-3B & MC1 & RPi-3B & MC1 \\ \hline \hline \multirow{2}{*}{No} & one-level SHGO & 112 & 113 & 1.68 & 0.71 & 4.74 & 4.13 \\ \cline{2-8} & hierarchical SHGO (FLASH) & 180 & 135 & 2.21 & 0.45 & 4.74 & 4.13 \\ \hline\hline \multirow{3}{*}{Yes, $\theta \geq \theta_M$} & one-level SHGO & 1309 & 1272 & 45.98 & 9.65 & 0.35 & 0.38 \\ \cline{2-8} & hierarchical SHGO (FLASH) & 261 & 414 & 2.33 & 1.32 & 0.48 & 0.57 \\ \cline{2-8} & \textbf{Improvement} & \textbf{5.01 $\times$} & \textbf{3.07 $\times$} & \textbf{19.73 $\times$} & \textbf{20.5 $\times$} & \textbf{1.37 $\times$ } & \textbf{1.51 $\times$}\\ \hline \end{tabular}} \label{tab:rpi_sea_alg} \vspace{-5mm} \end{table} \vspace{-1mm} \subsection{Case study: Raspberry Pi and Odroid MC1} \vspace{-1mm} As discussed in previous sections, each component and stages of FLASH are very efficient in terms of both computation and time costs. To further demonstrate the efficiency of our FLASH methodology, we implement FLASH on two typical edge devices, namely, the Raspberry Pi-3 Model-B (RPi-3B) and Odroid MC1 (MC1). 
\noindent\textbf{Setup:} The RPi-3B has an Arm Cortex-A53 quad-core processor with a nominal frequency of 1.2GHz and 1GB of RAM. Furthermore, we use the Odroid Smart Power 2 power meter to measure voltage, current, and power. We use TensorFlow-Lite (TF-Lite) as the run-time framework on the RPi-3B. To achieve this, we first define the architecture of the models in TensorFlow (TF). Then we convert the TF model into the TF-Lite format and generate the binary file deployed on the RPi-3B. The Odroid MC1 is powered by the Exynos 5422, a heterogeneous multi-processor system-on-a-chip (MPSoC). This SoC consists of two clusters of ARM cores and a small GPU core. Besides the hardware platform itself, we use the same setup as for the RPi-3B.

\noindent\textbf{Accuracy predictor and analytical hardware performance models:} We adopt the same accuracy predictor used in Section \ref{sec:res_nas}. We only consider latency and energy consumption as the hardware performance metrics because the chip area is fixed. Hence, the objective function of the search on the RPi-3B and MC1 is: \begin{equation} \label{eq:rasp_obj} f_{obj}=\frac{Accuracy}{Latency \times Energy} \end{equation} To fine-tune the analytical latency and energy models, we randomly select 180 sample networks from the search space. Then we convert them into the TF-Lite format and record their latency and energy consumption on each device. Based on the recorded data, we update the parameters of the analytical latency and energy models. Fig. \ref{fig:rpi_hw_model_perf} and Fig.~\ref{fig:mc1_hw_model_perf} show that our analytical hardware performance models closely match the real performance of both the RPi-3B and MC1.

\noindent\textbf{Search Process on RPi-3B and MC1:} We do not show the results of RL-based methods because the training of RL models requires intensive computation resources; thus, they cannot be deployed on the RPi-3B and MC1. As shown in Table \ref{tab:rpi_sea_alg}, for searching without any constraint, our hierarchical SHGO-based algorithm has only a minimal overhead compared with the basic (one-level) SHGO algorithm. Moreover, our hierarchical SHGO-based algorithm is faster than the one-level SHGO algorithm on the MC1. For searching with constraints, the hierarchical SHGO-based algorithm obtains a better-quality model with $5.01\times$ fewer samples and $19.73\times$ less search time on the RPi-3B; we achieve similar improvements on the MC1 as well. These results prove the effectiveness of our hierarchical strategy once again. Overall, the total search time is as short as 2.33 seconds on the RPi-3B and 1.32 seconds on the MC1, i.e., on resource-constrained edge devices. To the best of our knowledge, this is the first time neural architecture search has been performed directly on edge devices.

\section{Conclusions and Future Work} \label{sec:conclusion} \vspace{-1mm} This paper presented a very fast methodology, called FLASH, to improve the time efficiency of NAS. To this end, we have proposed a new topology-based metric, namely the \textit{NN-Degree}. Using the NN-Degree, we have proposed an analytical accuracy predictor trained with as few as 25 samples out of a vast search space with more than 63 billion configurations. Our proposed accuracy predictor achieves the same performance with 6.88$\times$ fewer samples and a $65.79\times$ reduction in fine-tuning time compared to state-of-the-art approaches.
We have also optimized the on-chip communication by designing a mesh-NoC for communication across multiple layers; based on the optimized hardware, we have built new analytical models to predict the area, latency, and energy consumption. Combining the accuracy predictor and the analytical hardware performance models, we have developed a hierarchical simplicial homology global optimization (SHGO)-based algorithm to optimize the co-design process while considering both the test accuracy and the area, latency, and energy figures of the target hardware. Finally, we have demonstrated that our newly proposed hierarchical SHGO-based algorithm enables 27929$\times$ faster (less than 0.1 seconds) NAS compared to the state-of-the-art RL-based approaches. We have also shown that FLASH can be readily transferred to other hardware platforms by doing NAS on a Raspberry Pi-3B and an Odroid MC1 in less than 3 seconds. To the best of our knowledge, our work is the first to report NAS performed directly and efficiently on edge devices. We note that there is no fundamental limitation in applying FLASH to other machine learning tasks. However, no IMC-based architectures are widely adopted yet for other machine learning tasks like speech recognition or object segmentation. Therefore, the current work focuses on DNN inference and leaves the extension to other machine learning tasks as future work. Finally, we plan to incorporate more types of networks, such as ResNet and MobileNet-v2, as part of our future work. \vspace{-2mm} \titlespacing\section{0pt}{3pt plus 1pt minus 1pt}{3pt plus 1pt minus 1pt} \titlespacing\subsection{0pt}{3pt plus 1pt minus 1pt}{3pt plus 1pt minus 1pt} \titlespacing\subsubsection{0pt}{3pt plus 1pt minus 1pt}{3pt plus 1pt minus 1pt} \usepackage{mathtools} \usepackage{multirow} \usepackage{textcomp} \AtBeginDocument{% \providecommand\BibTeX{{% \normalfont B\kern-0.5em{\scshape i\kern-0.25em b}\kern-0.8em\TeX}}} \setcopyright{none} \newcommand{\rev}[1]{\textcolor{blue}{#1}} \newcommand{\red}[1]{\textcolor{red}{#1}} \begin{document} \title{FLASH: \underline{F}ast Neura\underline{l} \underline{A}rchitecture \underline{S}earch with \underline{H}ardware Optimization} \author[1]{Guihong Li} \email{lgh@utexas.edu} \affiliation{% \institution{The University of Texas at Austin} \city{Austin} \state{Texas} \country{USA} } \author[2]{Sumit K. Mandal} \email{skmandal@wisc.edu} \affiliation{% \institution{University of Wisconsin–Madison} \city{Madison} \state{Wisconsin} \country{USA} } \author[2]{Umit Y. Ogras} \email{uogras@wisc.edu} \affiliation{% \institution{University of Wisconsin–Madison} \city{Madison} \state{Wisconsin} \country{USA} } \author{Radu Marculescu} \email{radum@utexas.edu} \affiliation{% \institution{The University of Texas at Austin} \city{Austin} \state{Texas} \country{USA} } \renewcommand{\shortauthors}{G. Li, et al.} \begin{abstract} Neural architecture search (NAS) is a promising technique to design efficient and high-performance deep neural networks (DNNs). As the performance requirements of ML applications grow continuously, hardware accelerators are starting to play a central role in DNN design. This trend makes NAS even more complicated and time-consuming for most real applications. This paper proposes FLASH, a very fast NAS methodology that co-optimizes the DNN accuracy and performance on a real hardware platform. 
As the main theoretical contribution, we first propose the NN-Degree, an analytical metric to quantify the topological characteristics of DNNs with skip connections (e.g., DenseNets, ResNets, Wide-ResNets, and MobileNets). The newly proposed NN-Degree allows us to do \textit{training-free} NAS within one second and build an accuracy predictor by training as few as 25 samples out of a vast search space with more than 63 billion configurations. Second, by performing inference on the target hardware, we fine-tune and validate our analytical models to estimate the latency, area, and energy consumption of various DNN architectures while executing standard ML datasets. Third, we construct a hierarchical algorithm based on simplicial homology global optimization (SHGO) to optimize the model-architecture co-design process, while considering the area, latency, and energy consumption of the target hardware. We demonstrate that, compared to the state-of-the-art NAS approaches, our proposed hierarchical SHGO-based algorithm enables more than four orders of magnitude speedup (specifically, the execution time of the proposed algorithm is about 0.1 seconds). Finally, our experimental evaluations show that FLASH is easily transferable to different hardware architectures, thus enabling us to do NAS on a Raspberry Pi-3B processor in less than 3 seconds. \end{abstract} \begin{CCSXML} <ccs2012> <concept> <concept_id>10010147.10010178</concept_id> <concept_desc>Computing methodologies~Artificial intelligence</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10010147.10010178</concept_id> <concept_desc>Computing methodologies~Neural Network</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10010147.10010178</concept_id> <concept_desc>Computing methodologies~Neural Architecture Search</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10010147.10010178.10010224</concept_id> <concept_desc>Computing methodologies~Computer vision</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10010520.10010553.10010562</concept_id> <concept_desc>Computer systems organization~Embedded systems</concept_desc> <concept_significance>500</concept_significance> </concept> <concept> <concept_id>10010520.10010553.10010554</concept_id> <concept_desc>Computer systems organization~Robotics</concept_desc> <concept_significance>100</concept_significance> </concept> <concept> <concept_id>10003033.10003083.10003095</concept_id> <concept_desc>Networks~Network reliability</concept_desc> <concept_significance>100</concept_significance> </concept> </ccs2012> \end{CCSXML} \ccsdesc[500]{Computing methodologies~Artificial intelligence} \ccsdesc[500]{Computing methodologies~Computer vision} \ccsdesc[500]{Computer systems organization~Embedded systems} \keywords{Neural Networks, Network Science, Hardware Optimization, Neural Architecture Search, Model-Architecture Co-design, Resource-constrained Devices} \maketitle \input{0-abstract.tex} \input{1-introduction.tex} \input{2-related_work.tex} \input{3-approach.tex} \input{4-experimental_results.tex} \input{5-conclusion.tex} \section{Acknowledgments} This work was supported in part by the US National Science Foundation (NSF) grant CNS-2007284, and in part by Semiconductor Research Corporation (SRC) grants GRC 2939.001 and 3012.001. \bibliographystyle{ACM-Reference-Format}
{ "attr-fineweb-edu": 1.99707, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUdHM4uzki0qvyMyUw
\section{Introduction} Nonlocal boundary value problems of the type $$ \left\{ \begin{array}{ll} - \left( a+b\displaystyle\int_\Omega |\nabla u|^2 dx\right)\Delta u= f(x,u), & \hbox{ in } \Omega \\ \\ u=0, & \hbox{on } \partial \Omega \end{array} \right. $$ are related to the stationary version of the Kirchhoff equation $$\frac{\partial^2 u}{\partial t^2}- \left( a+b\displaystyle\int_\Omega |\nabla u|^2 dx\right)\Delta u=f_1(t,x,u),$$ first proposed by Kirchhoff to describe the transversal oscillations of a stretched string. Here $\Omega$ is a bounded domain of ${\mathbb R}^N$, $u$ denotes the displacement, $f_1$ is the external force, $b$ is the initial tension and $a$ is related to the intrinsic properties of the string. Note that, this type of nonlocal equations appears in other fields like biological systems, where $u$ describes a process depending on the average of itself, like population density (see for instance \cite{CL}). The first attempt to find solutions for subcritical nonlinearities, by means of variational methods, is due to Ma and Rivera \cite{MR} and Alves, Corr\^{e}a and Ma \cite{ACM} who combined minimization arguments with truncation techniques and a priori estimates. Using Yang index and critical group arguments or the theory of invariant sets of descent flows, Perera and Zhang (see \cite{PZ,ZP}) proved existence results for the above problem. Multiplicity theorems can be found for instance in \cite{CKW,MZ,R0}. The existence or multiplicity of solutions of the Kirchhoff type problem with critical exponents in a bounded domain (or even in the whole space) has been studied by using different techniques as variational methods, genus theory, the Nehari manifold, the Ljusternik--Schnirelmann category theory (see for instance \cite{CF,Fan,F,FS}). It is worth mentioning that Mountain Pass arguments combined with the Lions' Concentration Compactness principle \cite{L} are still the most popular tools to deal with such problems in the presence of a critical term. Applications to the lower dimensional case ($N<4$) can be found in \cite{ACF,LLG,N}, while for higher dimensions ($N\geq4$) we refer to \cite{H1,H2,N0,YM}. Notice that in order to employ the Concentration Compactness principle, $a$ and $b$ need to satisfy suitable constraints. In order to state our main result we introduce the following notations: we endow the Sobolev space $H^1_0(\Omega)$ with the classical norm $\|u\|=\left( \int_{\Omega }|\nabla u|^2 \ dx\right)^{\frac{1}{2}}$ and denote by $\|u\|_{q}$ the Lebesgue norm in $L^{q}(\Omega)$ for $1\leq q \leq 2^\star$, i.e. $\|u\|_{q}=\left(\int_{\Omega} |u|^{q} \ dx\right)^{\frac{1}{q}}$. Let $S_N$ be the embedding constant of $H^1_0(\Omega)\hookrightarrow L^{2^\star}(\Omega)$, i.e. \[\|u\|^2_{2^\star}\leq S_N^{-1} \|u\|^2 \qquad \mbox{for every } \ u\in H^1_0(\Omega). \] Let us recall that (see Talenti \cite{Talenti} and Hebey \cite{H1} for an explicit espression) \begin{equation}\label{2*} S_N=\frac{N(N-2)}{4}\omega_N^{\frac{2}{N}}, \end{equation} where $\omega_N$ is the volume of the unit ball in ${\mathbb R}^N$. For $N\geq4$ denote by $C_1(N)$ and $C_2(N)$ the constants \[ C_1(N)= \begin{cases}\displaystyle \frac{4(N-4)^{\frac{N-4}{2}}}{N^{\frac{N-2}{2}}S_{N}^{\frac{N}{2}}} & N>4\\ \\ \displaystyle \frac{1}{S_{4}^{2}}, & N=4, \end{cases} \qquad \mbox{ and } \qquad C_2(N)=\begin{cases} \displaystyle\frac{2(N-4)^{\frac{N-4}{2}}}{(N-2)^{\frac{N-2}{2}}S_{N}^{\frac{N}{2}}} & N>4\\ \\ \displaystyle \frac{1}{S_{4}^{2}}, & N=4. \end{cases}. 
\] Notice that $C_1(N)\leq C_2(N)$. Our result reads as follows: \begin{theor}\label{our theorem} Let $a, b$ be positive numbers, $N\ge4$. \\ \noindent (A) If $ a^{\frac{N-4}{2}} b\geq C_1(N)$, then, for each $\lambda>0$ large enough and for each convex set $C\subseteq L^2(\Omega)$ whose closure in $L^2(\Omega)$ contains $H^1_0(\Omega)$, there exists $v^*\in C$ such that the functional \[u\to \frac{a}{2} \int_{\Omega}|\nabla u|^2 dx +\frac{b}{4} \left( \int_{\Omega}|\nabla u|^2 dx\right)^2-\frac{1}{2^\star}\int_{\Omega}|u|^{2^\star} dx-\frac{\lambda}{2}\int_{\Omega}|u(x)-v^*(x)|^2 dx\] has two global minima. \noindent (B) If $a^{\frac{N-4}{2}} b> C_2(N)$, then, for each $\lambda>0$ large enough and for each convex set $C\subseteq L^2(\Omega)$ whose closure in $L^2(\Omega)$ contains $H^1_0(\Omega)$, there exists $v^*\in C$ such that the problem $$ \left\{ \begin{array}{ll} - \left( a+b\displaystyle\int_\Omega |\nabla u|^2 dx\right)\Delta u= |u|^{2^\star-2}u+\lambda (u-v^*(x)), & \hbox{ in } \Omega \\ \\ u=0, & \hbox{on } \partial \Omega \end{array} \right.\eqno{(\mathcal{P}_{\lambda})} $$ has at least three weak solutions, two of which are global minima in $H^1_0(\Omega)$ of the energy functional defined in (A). \end{theor} The paper is motivated by a recent work of Ricceri, where the author studied problem $(\mathcal P_\lambda)$ in the subcritical case, i.e. when $|u|^{2^\star-2}u$ is replaced by $|u|^{p-2}u$ with $p<2^\star$. In \cite[Proposition 1]{R}, the existence of two global minima for the energy functional (and three solutions for the associated Kirchhoff problem) is obtained for every $a\geq 0$ and $b>0$. In the same paper, the following challenging question was raised (see \cite[Problem 1]{R}): \begin{question} Does the conclusion of Proposition 1 hold if $N>4$ and $p=2^\star$? \end{question} Notice that, for $N>4$ and $p=2^\star$, the energy functional associated to $(\mathcal{P}_{\lambda})$ is bounded from below, while for $N=4$ (where $2^\star=4$) this is no longer true for arbitrary $b$. Moreover, when $p=2^\star$ the embedding of $H^1_0(\Omega)$ into $L^{p}(\Omega)$ fails to be compact and one cannot directly apply the abstract tool which leads to \cite[Theorem 1 \& Proposition 1]{R}. The main result of the present note gives a partial positive answer to the above question and proves that Proposition 1 of \cite{R} holds for $p=2^\star$ and $N\geq 4$ provided that $a$ and $b$ satisfy a suitable inequality. Namely, we prove that the interaction between the Kirchhoff type operator and the critical nonlinearity ensures the sequential weak lower semicontinuity of the energy functional, a key property which allows us to apply the minimax theory developed in \cite[Theorem 3.2]{R1} (see also Theorem \ref{minimax theorem} below). \section{Proofs} The proof of Theorem \ref{our theorem} relies on the following key lemma (see \cite{FFK} for a deeper study on this topic). \begin{lem}\label{semicontinuity} Let $N\geq 4$ and $a, b$ be positive numbers such that $ a^{\frac{N-4}{2}} b\geq C_1(N)$. Denote by $\mathcal F:H^1_0(\Omega)\to{\mathbb R}$ the functional \[\mathcal F(u)=\frac{a}{2}\|u\|^2+\frac{b}{4} \|u\|^4-\frac{1}{2^\star}\|u\|^{2^\star}_{2^\star} \qquad \mbox{for every }\ u \in H^1_0(\Omega).\] Then, $\mathcal F$ is sequentially weakly lower semicontinuous in $H^1_0(\Omega)$. \end{lem} \begin{proof} Fix $u \in H^1_0(\Omega)$ and let $\{u_n\} \subset H^1_0(\Omega)$ such that $u_n\rightharpoonup u$ in $H^1_0(\Omega)$. 
Thus, \begin{align*} \mathcal{F}(u_n)-\mathcal{F}(u) =&\frac{a}{2}(\|u_n\|^2-\|u\|^2)+\frac{b}{4}(\|u_n\|^4-\|u\|^4)\\ &-\frac{1}{2^\star}\left(\|u_n\|_{2^\star}^{2^\star}-\|u\|_{2^\star}^{2^\star}\right). \end{align*} It is clear that \begin{align*}\|u_n\|^2-\|u\|^2&=\|u_n-u\|^2+2\int_{\Omega}\nabla(u_n-u)\nabla u \\ &= \|u_n-u\|^2+o(1), \end{align*} and \begin{align*} \|u_n\|^4-\|u\|^4&=\left(\|u_n-u\|^2+o(1)\right)\left(\|u_n-u\|^2+2\int_{\Omega}\nabla u_n\nabla u\right)\\&=\left(\|u_n-u\|^2+o(1)\right)\left(\|u_n-u\|^2+2\int_{\Omega}\nabla (u_n-u)\nabla u+2\|u\|^2\right)\\ &=\left(\|u_n-u\|^2+o(1)\right)\left(\|u_n-u\|^2+2\|u\|^2+o(1)\right). \end{align*} Moreover, from the Br\'ezis-Lieb lemma, one has $$\|u_n\|_{2^\star}^{2^\star}-\|u\|_{2^\star}^{2^\star}=\|u_n-u\|_{2^\star}^{2^\star}+o(1).$$ Putting together the above outcomes, \begin{align*} \mathcal{F}(u_n)-\mathcal{F}(u)=&\frac{a}{2}\|u_n-u\|^2+\frac{b}{4}\left(\|u_n-u\|^4+2\|u\|^2\|u_n-u\|^2\right)-\frac{1}{2^\star}\|u_n-u\|_{2^\star}^{2^\star}+o(1) \\{\geq}& \frac{a}{2}\|u_n-u\|^2+\frac{b}{4}\left(\|u_n-u\|^4+2\|u\|^2\|u_n-u\|^2\right)-\frac{{S}_N^{-\frac{2^\star}{2}}}{2^\star}\|u_n-u\|^{2^\star}+o(1) \\ \geq& \frac{a}{2}\|u_n-u\|^2 +\frac{b}{4}\|u_n-u\|^4-\frac{{S}_N^{-\frac{2^\star}{2}}}{2^\star}\|u_n-u\|^{2^\star}+o(1)\\=& \|u_n-u\|^2 \left(\frac{a}{2}+\frac{b}{4}\|u_n-u\|^2-\frac{{S}_N^{-\frac{2^\star}{2}}}{2^\star}\|u_n-u\|^{2^\star-2}\right)+o(1). \end{align*} Denote by $f:[0,+\infty[\to{\mathbb R}$ the function $\displaystyle f(x)=\frac{a}{2}+\frac{b}{4}x^2-\frac{{S}_N^{-\frac{2^\star}{2}}}{2^\star}x^{2^\star-2}$. We claim that $f(x)\geq 0$ for all $x\geq 0$. \bigskip Indeed, when $N=4$, and $b{S}_4^2\geq 1$, \[f(x)=\frac{a}{2}+\frac{b}{4}x^2-\frac{{S}_4^{-2}}{4}x^2=\frac{a}{2}+\frac{1}{4}\left(b-\frac{1}{{S}_4^2}\right)x^2\geq \frac{a}{2}.\] If $N>4$, it is immediately seen that $f$ attains its minimum at $$x_0=\left(\frac{2^\star }{2(2^\star-2)}{S}_N^{\frac{2^\star}{2}}b\right)^{\frac{1}{2^\star-4}}$$ and the claim is a consequence of the assumption $\displaystyle a^\frac{N-4}{2}b\geq C_1(N)$. Thus, $$\liminf_{n\to \infty}(\mathcal{F}(u_n)-\mathcal{F}(u))\geq \liminf_{n \to \infty}\|u_n-u\|^2 f(\|u_n-u\|)\geq 0,$$ and the thesis follows. \end{proof} \begin{rem}We point out that the constant $C_1(N)$ in Lemma \ref{semicontinuity} is optimal, i.e. if $ a^{\frac{N-4}{2}} b< C_1(N)$ the functional $\mathcal F$ is no longer sequentially weakly lower semicontinuous (see \cite{FFK}). \end{rem} In the next lemma we prove the Palais Smale property for our energy functional. Notice that the same constraints on $a$ and $b$ appear in \cite{H1} where such property was investigated for the critical Kirchhoff equation on closed manifolds by employing the $H^1$ (which is the underlying Sobolev space) decomposition. \begin{lem}\label{Palais Smale} Let $N \ge 4$ and $a,b$ be positive numbers such that $a^{\frac{N-4}{2}}b>C_{2}(N)$. For $\lambda>0, v^*\in H_{0}^{1}(\Omega)$ denote by $\mathcal{E}:H_{0}^{1}(\Omega)\to\mathbb{R}$ the functional defined by \[ \mathcal{E}(u)=\frac{a}{2}\|u\|^{2}+\frac{b}{4}\|u\|^{4}-\frac{1}{2^{\star}}\|u\|_{2^{\star}}^{2^{\star}}-\frac{\lambda}{2}\|u-v^{\star}\|_{2}^{2} \qquad \mbox{for every }\ u \in H^1_0(\Omega).\] Then, $\mathcal E$ satisfies the Palais-Smale (shortly (PS)) condition. \end{lem} \begin{proof} Let $\{u_{n}\}$ be a (PS) sequence for $\mathcal E$, that is \[ \begin{cases} \mathcal{E}(u_{n})\to c\\ \mathcal{E}'(u_{n})\to0 \end{cases}\mbox{as }n\to\infty. 
\] Since $\mathcal E$ is coercive, $\{u_{n}\}$ is bounded and there exists $u\in H_{0}^{1}(\Omega)$ such that (up to a subsequence) \begin{align*} u_{n} & \rightharpoonup u\mbox{ in }H_{0}^{1}(\Omega),\\ u_{n} & \to u\mbox{ in }L^{p}(\Omega),\ p\in[1,2^{\star}),\\ u_{n} & \to u\mbox{ a.e. in }\Omega. \end{align*} Using the second concentration compactness lemma of Lions \cite{L}, there exist an at most countable index set $J$, a set of points $\{x_{j}\}_{j\in J}\subset\overline\Omega$ and two families of positive numbers $\{\eta_{j}\}_{j\in J}$, $\{\nu_{j}\}_{j\in J}$ such that \begin{align*} |\nabla u_{n}|^{2} & \rightharpoonup d\eta\geq|\nabla u|^{2}+\sum_{j\in J}\eta_{j}\delta_{x_{j}},\\ |u_{n}|^{2^\star} & \rightharpoonup d\nu=|u|^{2^\star}+\sum_{j\in J}\nu_{j}\delta_{x_{j}}, \end{align*} (weak star convergence in the sense of measures), where $\delta_{x_{j}}$ is the Dirac mass concentrated at $x_{j}$ and such that $$ S_{N} \nu_{j}^{\frac{2}{2^\star}}\leq\eta_{j} \qquad \mbox{for every $j\in J$}.$$ Next, we will prove that the index set $J$ is empty. Arguing by contradiction, we may assume that there exists a $j_{0}$ such that $\nu_{j_{0}}\neq0$. Consider now, for $\varepsilon>0$ a non negative cut-off function $\phi_\varepsilon$ such that \begin{align*} &\phi_{\varepsilon} =1\mbox{ on }B(x_{0},\varepsilon),\\ &\phi_{\varepsilon} =0\mbox{ on } \Omega\setminus B(x_{0},2\varepsilon),\\ &|\nabla\phi_{\varepsilon}| \leq\frac{2}{\varepsilon}. \end{align*} It is clear that the sequence $\{u_{n}\phi_{\varepsilon}\}_{n}$ is bounded in $H_{0}^{1}(\Omega)$, so that \[ \lim_{n\to\infty}\mathcal{E}'(u_{n})(u_{n}\phi_{\varepsilon})=0. \] Thus \begin{align}\label{calc 1} o(1) & =(a+b\|u_{n}\|^{2})\int_{\Omega}\nabla u_{n}\nabla(u_{n}\phi_{\varepsilon})-\int_{\Omega}|u_{n}|^{2^\star}\phi_{\varepsilon}-\lambda\int_{\Omega}(u_{n}-v^{*})(u_{n}\phi_{\varepsilon}) \nonumber \\ & =(a+b\|u_{n}\|^{2})\left(\int_{\Omega}|\nabla u_{n}|^{2}\phi_{\varepsilon}+\int_{\Omega}u_{n}\nabla u_{n}\nabla\phi_{\varepsilon}\right)-\int_{\Omega}|u_{n}|^{2^\star}\phi_{\varepsilon}-\lambda\int_{\Omega}(u_{n}-v^{*})(u_{n}\phi_{\varepsilon}). \end{align} Moreover, using H\"{o}lder inequality, one has \[ \left|\int_{\Omega}(u_{n}-v^{*})(u_{n}\phi_{\varepsilon})\right|\leq \left(\int_{B(x_{0},2\varepsilon)}(u_{n}-v^{*})^2\right)^\frac{1}{2} \left(\int_{B(x_{0},2\varepsilon)}u_n^2\right)^\frac{1}{2}, \] so that \[\lim_{\varepsilon\to0}\lim_{n\to\infty}\int_{\Omega}(u_{n}-v^{*})(u_{n}\phi_{\varepsilon})=0.\] Also, \begin{eqnarray*} \left|\int_\Omega u_{n}\nabla u_{n}\nabla\phi_{\varepsilon}\right|&=&\left|\int_{B(x_{0},2\varepsilon)}u_{n}\nabla u_{n}\nabla\phi_{\varepsilon}\right|\leq \left(\int_{B(x_{0},2\varepsilon)}|\nabla u_n|^2\right)^\frac{1}{2} \left(\int_{B(x_{0},2\varepsilon)}|u_n\nabla \phi_\varepsilon|^2\right)^\frac{1}{2}\\ &\leq& C \left(\int_{B(x_{0},2\varepsilon)}|u_n\nabla \phi_\varepsilon|^2\right)^\frac{1}{2}. 
\end{eqnarray*} Since $$\lim_{n\to\infty}\int_{B(x_{0},2\varepsilon)}|u_n\nabla \phi_\varepsilon|^2=\int_{B(x_{0},2\varepsilon)}|u\nabla \phi_\varepsilon|^2,$$ and \begin{eqnarray*} \left(\int_{B(x_{0},2\varepsilon)}|u\nabla \phi_\varepsilon|^2\right)^\frac{1}{2}&\leq & \left(\int_{B(x_{0},2\varepsilon)} |u|^{2^\star}\right)^\frac{1}{2^\star} \left(\int_{B(x_{0},2\varepsilon)}|\nabla \phi_\varepsilon|^N \right)^\frac{1}{N}\\ &\leq& C \left(\int_{B(x_{0},2\varepsilon)} |u|^{2^\star}\right)^\frac{1}{2^\star} \end{eqnarray*} we get \[ \lim_{\varepsilon\to0}\lim_{n\to\infty}(a+b\|u_{n}\|^{2})\left|\int_\Omega u_{n}\nabla u_{n}\nabla\phi_{\varepsilon}\right|=0. \] Moreover, as $0\leq \phi_\varepsilon\leq 1$, \begin{eqnarray*} \lim_{n\to\infty}(a+b\|u_{n}\|^{2})\int_{\Omega}|\nabla u_{n}|^{2}\phi_{\varepsilon}&\geq& \lim_{n\to\infty}\left[a\int_{B(x_{0},2\varepsilon)}|\nabla u_{n}|^{2}\phi_{\varepsilon}+b\left(\int_{\Omega}|\nabla u_{n}|^{2}\phi_{\varepsilon}\right)^{2}\right]\\&\geq& a\int_{B(x_{0},2\varepsilon)}|\nabla u|^{2}\phi_{\varepsilon}+b\left(\int_{\Omega}|\nabla u|^{2}\phi_{\varepsilon}\right)^{2}+a\eta_{j_{0}}+b\eta_{j_{0}}^{2}. \end{eqnarray*} So, as $\int_{B(x_{0},2\varepsilon)}|\nabla u|^{2}\phi_{\varepsilon}\to 0$ as $\varepsilon\to 0$, \[ \lim_{\varepsilon\to0}\lim_{n\to\infty}(a+b\|u_{n}\|^{2})\int_{\Omega}|\nabla u_{n}|^{2}\phi_{\varepsilon} \geq a\eta_{j_{0}}+b\eta_{j_{0}}^{2}.\] Finally, \begin{align*} \lim_{\varepsilon\to0}\lim_{n\to\infty}\int_\Omega|u_{n}|^{2^\star}\phi_{\varepsilon} & =\lim_{\varepsilon\to0}\int_\Omega |u|^{2^\star}\phi_{\varepsilon}+\nu_{j_{0}}=\lim_{\varepsilon\to0}\int_{B(x_{0},2\varepsilon)} |u|^{2^\star}\phi_{\varepsilon}+\nu_{j_{0}}=\nu_{j_{0}}. \end{align*} Summing up the above outcomes, from \eqref{calc 1} one obtains \begin{align*} 0 & \geq a\eta_{j_{0}}+b\eta_{j_{0}}^{2}-\nu_{j_0}\geq a\eta_{j_{0}}+b\eta_{j_{0}}^{2}-S_{N}^{-\frac{2^\star}{2}}\eta_{j_{0}}^{\frac{2^\star}{2}}\\ & =\eta_{j_{0}}\left(a+b\eta_{j_{0}}-S_{N}^{-\frac{2^\star}{2}}\eta_{j_{0}}^{\frac{2^\star-2}{2}}\right). \end{align*} Denote by $f_{1}:[0,+\infty[\to\mathbb{R}$ the function ${\displaystyle f_{1}(x)=a+bx-S_{N}^{-\frac{2^\star}{2}}x^{\frac{2^\star-2}{2}}}$. As before, assumptions on $a$ and $b$ imply that $f_{1}(x)>0$ for all $x\geq0$. Thus \[ a+b\eta_{j_{0}}-S_{N}^{-\frac{2^\star}{2}}\eta_{j_{0}}^{\frac{2^\star-2}{2}}>0, \] therefore $\eta_{j_{0}}=0,$ which is a contradiction. Such conclusion implies that $J$ is empty, that is \[\lim_{n\to\infty}\int_{\Omega}|u_n|^{2^\star}= \int_{\Omega}|u|^{2^\star}\] and the uniform convexity of $L^{2^\star}(\Omega)$ implies that \[ u_{n}\to u\mbox{ in }L^{2^\star}(\Omega). \] Now, recalling that the derivative of the function $$u\to \frac{a}{2}\|u\|^{2}+\frac{b}{4}\|u\|^{4}$$ satisfies the $(S_+)$ property, in a standard way one can see that $u_{n}\to u\mbox{ in }H_{0}^{1}(\Omega)$, which proves our lemma. \end{proof} In the proof of our result, the main tool is the following theorem: \begin{theor}[Ricceri \cite{R1}, Theorem 3.2]\label{minimax theorem} Let $X$ be a topological space, $E$ a real Hausdorff topological vector space, $C\subseteq E$ a convex set, $f : X\times C \to {\mathbb R}$ a function which is lower semicontinuous, inf--compact in $X$, and upper semicontinuous and concave in $C$. Assume also that \begin{equation}\label{minimax} \sup_{v\in C}\inf_{x\in X}f(x,v)<\inf_{x\in X}\sup_{v\in C} f(x,v). \end{equation} Then, there exists $v^*\in C$ such that the function $f(\cdot, v^*)$ has at least two global minima. 
\end{theor} \noindent {\bf Proof of Theorem \ref{our theorem}} We apply Theorem \ref{minimax theorem} with $X=H^1_0(\Omega)$ endowed with the weak topology, $E=L^2(\Omega)$ with the strong topology, $C$ as in the assumptions. Let $\mathcal F$ as in Lemma \ref{semicontinuity}, i.e. \[\mathcal F(u)=\frac{a}{2}\|u\|^2+\frac{b}{4} \|u\|^4-\frac{1}{2^\star}\|u\|^{2^\star}_{2^\star} \qquad \mbox{for every }\ u \in H^1_0(\Omega).\] From Lemma \ref{semicontinuity}, $\mathcal F$ is sequentially weakly lower semicontinuous, and coercive, thus, the set $M_\mathcal F$ of its global minima is non empty. Denote by \begin{equation}\label{lambdastar}\lambda^\star=\inf\left\{\frac{\mathcal F(u)-\mathcal F(v)}{\|u-v\|_2^2} \ : \ (v, u)\in M_{\mathcal F}\times H^1_0(\Omega), \ v\neq u \right\} \end{equation} and fix $\lambda>\lambda^\star$. Let $f: H^1_0(\Omega)\times C\to{\mathbb R} $ be the function \[f(u,v)=\mathcal F(u)-\lambda \|u-v\|_2^2.\] From the Eberlein Smulyan theorem it follows that $f(\cdot, v)$ has weakly compact sublevel sets in $H^1_0(\Omega)$. It is also clear that $f(u, \cdot)$ is continuous and concave in $L^2(\Omega)$. Let us prove \eqref{minimax}. Recalling that the closure of $C$ in $L^2(\Omega)$ (denoted by ${\overline C}$) contains $H^1_0(\Omega)$, one has \begin{align}\label{first} \inf_{u\in H^1_0(\Omega)}\sup_{v\in C } f(u,v)&=\inf_{u\in H^1_0(\Omega)}\sup_{v\in \overline {C}}f(u,v)\nonumber \\&\geq \inf_{u\in H^1_0(\Omega)}\sup_{v\in H^1_0(\Omega) } f(u,v)\nonumber\\&= \inf_{u\in H^1_0(\Omega)}\sup_{v\in H^1_0(\Omega) } (\mathcal F(u)-\lambda \|u-v\|_2^2)\nonumber \\\nonumber&=\inf_{u\in H^1_0(\Omega)}(\mathcal F(u)-\lambda \inf_{v\in H^1_0(\Omega)}\|u-v\|_2^2)\\&= \min_ {H^1_0(\Omega)}\mathcal F \end{align} Since $\lambda>\lambda^\star$, there exist $u_0, v_0\in H^1_0(\Omega), u_0\neq v_0$ and $\varepsilon>0 $ such that \begin{align*} &\mathcal F(u_0)-\lambda \|u_0-v_0\|_2^2<\mathcal F(v_0)-\varepsilon,\\ & \mathcal F(v_0)= \min_ {H^1_0(\Omega)}\mathcal F. \end{align*} Thus, if $h:L^2(\Omega)\to{\mathbb R}$ is the function defined by $h(v)=\inf_{u\in H^1_0(\Omega)}(\mathcal F(u)-\lambda \|u-v\|_2^2)$, then, $h$ is upper semicontinuous in $L^2(\Omega)$ and \[h(v_0)\leq \mathcal F(u_0)-\lambda \|u_0-v_0\|_2^2<\mathcal F(v_0)-\varepsilon.\] So, there exists $\delta>0$ such that $h(v)<\mathcal F(v_0)-\varepsilon$ for all $\|v-v_0\|_2\leq \delta.$ Therefore, \[\sup_{\|v-v_0\|_2\leq \delta }\inf_{u\in H^1_0(\Omega)}(\mathcal F(u)-\lambda \|u-v\|_2^2)\leq \mathcal F(v_0)-\varepsilon.\] On the other hand, \[\sup_{\|v-v_0\|_2\geq \delta }\inf_{u\in H^1_0(\Omega)}(\mathcal F(u)-\lambda \|u-v\|_2^2)\leq \sup_{\|v-v_0\|_2\geq \delta } (\mathcal F(v_0)-\lambda \|v_0-v\|_2^2)\leq \mathcal F(v_0)-\lambda\delta^2.\] Summing up the above outcomes, we obtain \begin{align}\label{second} \sup_{v\in C }\inf_{u\in H^1_0(\Omega)} f(u,v)&\leq \sup_{v\in L^2(\Omega) }\inf_{u\in H^1_0(\Omega)}f(u,v)\nonumber \\&= \sup_{v\in L^2(\Omega) }\inf_{u\in H^1_0(\Omega)}(\mathcal F(u)-\lambda \|u-v\|_2^2)\nonumber\\&<\mathcal F(v_0)=\min_{H^1_0(\Omega)}\mathcal F. \end{align} From \eqref{first} and \eqref{second}, claim \eqref{minimax} follows. Applying Theorem \ref{minimax theorem}, we deduce the existence of $v^*\in C$ such that the energy functional \[\mathcal E(u)=\mathcal F(u)-\frac{\lambda}{2}\|u-v^*\|_2^2\] associated to our problem has two global minima, which is claim $(A)$. 
In order to prove $(B)$ we observe that, since the functional is of class $C^1$, such global minima turns out to be weak solutions of our problem. The third solution follows by Lemma \ref{Palais Smale} (recall that $C_2(N)\geq C_1(N)$) and a classical version of the Mountain Pass theorem by Pucci and Serrin \cite{PS}.\qed \begin{rem} For sake of clarity, we calculate the approximate values of the constants $C_1(N)$ and $C_2(N)$ for some $N$: \begin{center} \begin{tabular}{|c|c|c|} \hline $N$ & $C_1(N)$ & $C_{2}(N)$\tabularnewline \hline \hline 5 & 0.002495906672 & 0.002685168050\tabularnewline \hline 6 & 0.0001990835458 & 0.0002239689890\tabularnewline \hline 7 & 0.00001712333233 & 0.00001985538802\tabularnewline \hline 9 & 1.269275934$\cdot10^{-7}$ & 1.529437355$\cdot10^{-7}$\tabularnewline \hline \end{tabular} \end{center} \end{rem} {\begin{question} Notice that if $N=4$ then, for $b S_N^2< 1$, $\mathcal E$ is unbounded from below. Indeed, if $\{u_n\}$ is such that $\frac{\|u_n\|^2}{\|u_n\|_4^2}\to S_N$, then we can fix $c$ and $\bar n$ such that $\frac{\|u_{\bar n}\|^2}{\|u_{\bar n}\|_4^2}<c<b^{-\frac{1}{2}}$. Thus \[ \mathcal{E}(\tau u_{\bar n})<\frac{a\tau^2}{2}\|u_{\bar n}\|^{2}+\frac{\tau^4}{4}\left(b-\frac{1}{c^2}\right)\|u_{\bar n}\|^{4}-\frac{\lambda}{2}\|\tau u_{\bar n}-v^{*}\|_{2}^{2}\to-\infty, \mbox{as} \ \tau\to+\infty. \] It remains an open question if, when $N>4$, Theorem \ref{our theorem} holds for every $a\geq 0, b>0$ with $ a^{\frac{N-4}{2}} b< C_1(N)$. \end{question} {\bf Acknowledgment} This work was initiated when Cs. Farkas visited the Department of Mathematics of the University of Catania, Italy. He thanks the financial support of Gruppo Nazionale per l'Analisi Matematica, la Probabilit\`a e le loro Applicazioni (GNAMPA) of the Istituto Nazionale di Alta Matematica (INdAM).
{ "attr-fineweb-edu": 1.508789, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUdHw4ubng04WQtWr4
\section{Introduction} It\^o formula for the square of the norm is an essential tool in the study of stochastic evolution equations of the type \begin{equation} \label{evolution} dv(t)=\mathbb A(t,v(t))\,dt+\sum_k\mathbb B_k(t,v(t))\,dW^k(t), \end{equation} where $(W^k)_{k=1}^{\infty}$ is a sequence of independent Wiener processes, and $\mathbb A(t,\cdot)$ and $\mathbb B_k(t,\cdot)$ are (possibly random nonlinear) operators on a separable real Banach space $V$, with values in a Banach space $V'$ and a Hilbert space $H$ respectively, such that $V\hookrightarrow H\hookrightarrow V'$ with continuous and dense embeddings. We assume there is a constant $K$ such that $(v,h)\leq K\|v\|_V\|h\|_{V'}$ for all $v\in V$ and $h\in H$. This means that for the linear mapping $\Psi:H\to H^{\ast}$, which identifies $H$ with its dual $H^{\ast}$ via the inner product in $H$, we have $\|\Psi(h)\|_{V^{\ast}}\leq K\|h\|_{V'}$. Therefore, since $H$ is dense in $V'$, $\Psi$ can be extended to a continuous mapping from $V'$ into $V^{\ast}$, the dual of $V$. It is assumed that this extension is one-to-one from $V'$ into $V^{\ast}$. Thus an initial value problem for equation \eqref{evolution} can be viewed as \begin{equation} \label{y} v(t)=\int_0^tv^{\ast}(s)\,ds+h(t)=:y(t) \end{equation} with the $V^{\ast}$-valued process $v^{\ast}(t):=\mathbb A(t,v(t))$ and $H\equiv H^{\ast}$-valued process $$ h(t):=h_0+\sum_k\int_0^t\mathbb B^k(s,v(s))\,dW^k(s), $$ where $h_0$ is a given initial value and the equality \eqref{y} in $V^{\ast}$ is required $ dt\times\P$ almost everywhere. In the special case $B_k=0$ for every $k$, and nonrandom $h_0$ and $A$, i.e., in the case $$ v(t)=h_0+\int_0^tv^{\ast}(s)\,ds, \quad dt\text{-a.e.}, $$ it is well-known that when $v\in L_p([0,T], V)$, $v^{\ast}\in L_q([0,T], V^{\ast})$ for $T>0$ and conjugate exponents $p$ and $q$, then there is $u\in C([0,T],H)$ such that $u=v$ for $dt$-almost all $t\in[0,T]$ and the ``energy equality" $$ |u(t)|_H^2=|h_0|_H^2+2\int_0^t\langle v^{\ast}(s),v(s)\rangle\,ds $$ holds for all $t\in[0,T]$, where $\langle\cdot,\cdot\rangle$ denotes the duality pairing of $V^{\ast}$ and $V$. This formula is used in proofs of existence and uniqueness theorems for PDEs, see e.g., \cite{Evans} and \cite{Lions}. A generalisation of it, a ``stochastic energy equality", i.e., an It\^o formula for the square of the $H$-norm of $y$, was first presented in Pardoux \cite{pardoux:thesis}, and was used to obtain existence and uniqueness theorems for SPDEs. The proof of it in \cite{pardoux:thesis} was not separated from the theory of SPDEs developed there. A proof, not bound to the theory of SPDEs, was given in Krylov and Rozovskii \cite{krylov:rozovskii:stochastic}, and then this stochastic energy equality was generalised in Gy\"ongy and Krylov~\cite{gyongy:krylov:on:stochastic:II} to $V^{\ast}$-valued semimartingales $y$ of the form \begin{equation} \label{eq y} y(t)=\int_{(0,t]}v^{\ast}(s)\,dA(s)+h(t), \end{equation} where $A$ is an adapted nondecreasing cadlag process and $h$ is an $H$-valued cadlag martingale. This generalisation is used in Gy\"ongy~\cite{gyongy} to extend the theory of SPDEs developed in \cite{pardoux:thesis} and \cite{krylov:rozovskii:stochastic} to SPDEs driven by random orthogonal measures and L\'evy martingales, written in the form \begin{equation} \label{SPDE} dv(t)=\mathbb A(t,v(t))\,dA(t)+\mathbb B(t,v(t))\,dM(t) \end{equation} with cadlag (quasi left-continuous) martingales $M$ with values in a Hilbert space. 
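As a purely illustrative special case of the deterministic energy equality above (the spaces and the operator below are chosen only to fix ideas and are not assumed anywhere in this paper), let $\mathscr D\subset\R^d$ be a bounded domain, let $V$ be the closure of $C_c^{\infty}(\mathscr D)$ in $W^1_p(\mathscr D)$ for some $p\geq2$, let $H=L_2(\mathscr D)$ and let $v^{\ast}(t)=\operatorname{div}\big(|\nabla v(t)|^{p-2}\nabla v(t)\big)$. Then $\langle v^{\ast}(s),v(s)\rangle=-\int_{\mathscr D}|\nabla v(s,x)|^{p}\,dx$, and the energy equality reads
$$
|u(t)|_H^2=|h_0|_H^2-2\int_0^t\int_{\mathscr D}|\nabla v(s,x)|^{p}\,dx\,ds,\quad t\in[0,T],
$$
which is the identity used to derive a priori estimates for the deterministic $p$-Laplace equation.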
In the present paper we are interested in stochastic energy equalities which can be applied to SPDEs \eqref{SPDE} when $\mathbb A$ is of the form $\mathbb A=\mathbb A_1+\mathbb A_2+\cdots+\mathbb A_m$ and the operators $\mathbb A_i$ have different analytic and growth properties. This means, $$ \mathbb A_i(t,\cdot):V_i\to V_i^{\prime}\quad i=1,2,\ldots,m $$ for some Banach spaces $V_i$ and $V'_i$, such that with a constant $R$ and a process $g$, locally integrable with respect to $dA$, one has for all $t$ $$ \|\mathbb A_i(t,w)\|_{V'_i} \leq |g_t|^{1/q_i} + R\|w\|_{V_i}^{p_i-1} $$ for all $w\in V$, $q_i=p_i/(p_i-1)$ with (possibly) different exponents $p_i\geq1$, which for $p_i=1$ means that $\|\mathbb A_i(t,w)\|_{V'_i}$ is bounded by a constant. In the special case when $A(t)=t$ and $M$ is a Wiener process the above situation was considered in \cite{pardoux:thesis}, and a related stochastic energy equality was also presented there. Our main result, Theorem \ref{thm:1} generalises the results on stochastic energy equalities from \cite{pardoux:thesis} and \cite{gyongy:krylov:on:stochastic:II}. We prove it by adapting the method of the proof of the main theorem in \cite{gyongy:krylov:on:stochastic:II}. In the present paper we consider a semimartingale $y$ of the form~\eqref{eq y} such that $dA \times \P$-almost everywhere $y$ takes values in $V=V_1\cap \ldots \cap V_m$, where $V_i$ are Banach spaces (over $\mathbb{R}$) such that $V$ with the norm $\|\cdot\| := \sum_{i=1}^m \|\cdot\|_{V_i}$ is continuously and densely embedded in $H$. The process $v^\ast$ in~\eqref{eq y} is of the form $v^\ast = \sum_{i=1}^m v_i^\ast$, where $v_i^\ast$ are $V_i^\ast$-valued progressively measurable processes. We prove that $y$ is almost surely cadlag as a process with values in $H$ and for $|y|_H^2$ an It\^o formula holds under the assumption that $\|y\|_{V_i}^{p_i}$ and $\|v_i^\ast\|_{V_i^\ast}^{q_i}$ are almost surely locally integrable with respect to $dA$ for some conjugate exponents $p_i, q_i$. See Section~\ref{sec:main} for precise formulation of the main theorem. To apply the result of~\cite{gyongy:krylov:on:stochastic:II} to $y$ given by~\eqref{eq y}, one needs the local integrability (with respect to $dA$) of \[ \|y\|_V\|v^\ast\|_{V^\ast} = \left(\|y\|_{V_1} + \cdots + \|y\|_{V_m} \right)\|v_1^\ast + \cdots + v_m^\ast\|_{V^\ast}, \] which, in general, is not satisfied under our assumptions. See Remark~\ref{rem integrability} and Example~\ref{ex spde}. We note that in the context of stochastic evolution equations it is possible to prove It\^o formulae for more general functions (satisfying appropriate differentiability assumptions), see again Pardoux~\cite{pardoux:thesis}, Krylov~\cite{krylov:ito_fla}, \cite{K2010}, \cite{K}, Da Prato, Jentzen and R\"ockner~\cite{daprato:jentzen:rockner}, as well as Dareiotis and Gy\"ongy~\cite{dareiotis:gyongy}. The It\^o formula for the square of the norm is used in particular to establish a priori estimates as well as uniqueness and existence of solutions of stochastic evolution equations. The more general It\^o formula can then be used to study finer properties of solutions of stochastic evolution equations, for example the maximum principle. For general theory of SPDEs in the variational setting we refer the reader to Krylov and Rozovskii~\cite{krylov:rozovskii:stochastic}, Pr\'ev\^ot and R\"ockner~\cite{PR} and Rozovskii~\cite{R}. 
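To make the integrability issue described above more explicit, the following elementary observation may be helpful (it is included for orientation only and is not used later; here $T>0$ and, for simplicity, $1<p_i<\infty$). By H\"older's inequality the assumed local integrability controls each ``matched'' product,
$$
\int_{(0,T]}\|y(t)\|_{V_i}\,\|v_i^{\ast}(t)\|_{V_i^{\ast}}\,dA(t) \leq\left(\int_{(0,T]}\|y(t)\|_{V_i}^{p_i}\,dA(t)\right)^{1/p_i} \left(\int_{(0,T]}\|v_i^{\ast}(t)\|_{V_i^{\ast}}^{q_i}\,dA(t)\right)^{1/q_i}<\infty,
$$
whereas a mixed product such as $\int_{(0,T]}\|y(t)\|_{V_1}\,\|v_2^{\ast}(t)\|_{V_2^{\ast}}\,dA(t)$ would require $\|y\|_{V_1}$ to belong to $L_{p_2}$ with respect to $dA$ on $(0,T]$, which does not follow from the integrability of $\|y\|_{V_1}^{p_1}$ when $p_2>p_1$. This is precisely the obstruction appearing in Remark~\ref{rem integrability} and Example~\ref{ex spde} below.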
\section{Main Results} \label{sec:main} For $i=1,\ldots,m$ let $(V_i,\|\cdot\|_{V_i})$ be real Banach spaces with duals $(V_i^*,\|\cdot\|_{V_i^*})$. Let $V$ denote the vector space $V_1\cap \cdots \cap V_m$ with the norm $\|\cdot\| := \|\cdot\|_{V_1} + \cdots + \|\cdot\|_{V_m}$. Then clearly, $V$ is a Banach space. Assume that it is separable and is continuously and densely embedded in a Hilbert space $(H, |\cdot|)$, which is identified with its dual $H^{\ast}$ with the help of the inner product $(\cdot,\cdot)$ in $H$. Thus we have $$ V\hookrightarrow H\equiv H^{\ast}\hookrightarrow V^{\ast}, $$ where $H^{\ast}\hookrightarrow V^{\ast}$ is the adjoint of the embedding $V\hookrightarrow H$. We use the notation $\langle\cdot, \cdot\rangle$ for the duality pairing between $V$ and $V^{\ast}$. Note that if $v^{\ast}\in V_i^{\ast}$ for some $i$, then its restriction to $V$ belongs to $V^{\ast}$ and $|\langle v^{\ast},v\rangle|\leq \|v^{\ast}\|_{V^{\ast}_i}\|v\|_{V_i}$ for all $v\in V$. Note also that $\langle v^{\ast},v\rangle=(h,v)$ for all $v\in V$ when $v^{\ast}=h\in H$. A complete probability space $(\Omega, \mathcal{F}, \P)$ together with an increasing family of $\sigma$-algebras $(\mathcal{F}_t)_{t\geq 0}$, $\mathcal{F}_t \subset \mathcal{F}$ will be used throughout the paper. Moreover it is assumed that the usual conditions are satisfied: $\bigcap_{s> t} \mathcal{F}_s = \mathcal{F}_t$ and $\mathcal{F}_0$ contains all subsets of $\P$-null sets of $\mathcal{F}$. We use the notation $\mathcal{B}(\mathbb{R}_+)$ for the $\sigma$-algebra of Borel subsets of $\mathbb{R}_+=[0,\infty)$, and for a real-valued increasing $\mathcal{B}(\mathbb{R}_+)\otimes\mathcal{F}$-measurable process $(A(t))_{t\geq0}$ the notation $dA\times\P$ stands for the measure defined on $\mathcal{B}(\mathbb{R}_+)\otimes\mathcal{F}$ by $$ (dA\times\P)(F)=\mathbb{E}\int_0^{\infty}{\bf 1}_F\,dA(t), \quad F\in \mathcal{B}(\mathbb{R}_+)\otimes\mathcal{F}. $$ Let $h=(h(t))_{t\geq0}$ be an $H$-valued locally square integrable martingale that is cadlag (continuous from the right with left-hand limits) in the strong topology on $H$. Its quadratic variation process is denoted by $[h]$, and $\langle h\rangle$ denotes the unique predictable process starting from zero such that $|h|^2-\langle h\rangle$ is a local martingale. Furthermore let $A$ be a real-valued nondecreasing adapted cadlag process starting from zero. Finally let $v=(v(t))_{t\geq0}$ be a $V$-valued progressively measurable process and for $i=1,\ldots,m$ let $v^{\ast}_i=(v^{\ast}_i(t))_{t\geq0}$ be $V_i^*$-valued processes such that $\langle\varphi,v_i^{\ast}\rangle$ are progressively measurable for any $\varphi \in V$. Notice that $v$ is also progressively measurable as a process with values in $\bar V_i$, the closure in $V_i$-norm of the linear hull of $ \{v(t):t\geq0, \omega\in\Omega\}. $ Let there be $p_i \in [1,\infty)$ and $q_i=p_i/(p_i-1)\in(1,\infty]$, where, as usual, $1/0:=\infty$. Assume that for each $i=1,2,\ldots,m$ and $T>0$ \begin{equation} \label{assumption main} \int_0^T\|v(t)\|_{V_i}^{p_i}\,dA(t)<\infty,\quad \left(\int_0^T\eta_i^{q_i}(t)\,dA(t)\right)^{1/q_i}<\infty, \end{equation} for some progressively measurable process $\eta_i$ such that $\|v_i^{\ast}\|_{V_i^{\ast}}\leq \eta_i$ $dA\times \P$-almost everywhere, where for $q_i=\infty$ the second expression means $$ \text{$dA$-ess}\,\sup_{t\leq T}\eta_i(t), $$ the essential supremum (with respect to $dA$) of $\eta_i$ over $[0,T]$. The following theorem is the main result of this paper.
\begin{theorem} \label{thm:1} Let $\tau$ be a stopping time. Suppose that for all $\varphi \in V$ and for $dA\times \P$ almost all $(\omega, t)$ such that $t\in (0,\tau(\omega))$ we have \begin{equation} \label{eq:1} (v(t),\varphi)= \sum_{i=1}^m \int_{(0,t]}\langle v^{\ast}_i(s),\varphi\rangle\, dA(s) + (h(t),\varphi). \end{equation} Then there is $\tilde{\Omega} \subset \Omega$ with $\P(\tilde{\Omega}) = 1$ and an $H$-valued cadlag process $\tilde v$ such that the following statements hold. \begin{enumerate} \item[(i)] For $dA\times \P$ almost all $(t, \omega)$ satisfying $t \in (0,\tau(\omega))$ we have $\tilde v=v$. \item[(ii)] For all $\omega \in \tilde{\Omega}$ and $t\in[0,\tau(\omega))$ we have \begin{equation} \label{eq:2} (\tilde v(t),\varphi) = \sum_{i=1}^m \int_{(0,t]}\langle v^{\ast}_i(s),\varphi\rangle\,dA(s) + (h(t),\varphi)\quad \textrm{for all }\, \varphi \in V. \end{equation} \item[(iii)] For all $\omega \in \tilde{\Omega}$ and $t\in[0, \tau(\omega))$ \begin{equation} \label{eq:3} \begin{split} |\tilde v(t)|^2 = & |h(0)|^2 + 2\sum_{i=1}^m \int_{(0,t]}\langle v^{\ast}_i(s),v(s)\rangle\,dA(s) + 2\int_{(0,t]}( \tilde v(s-),dh(s)) \\ & - \int_{(0,t]} |v^{\ast}(s)|^2 \Delta A(s) dA(s) + [h]_t, \end{split} \end{equation} where $v^{\ast}(t):=\sum_{i=1}^m v^{\ast}_i(t)\in H$ for $\Delta A(t) > 0$. \end{enumerate} \end{theorem} Consider now a situation where the assumptions on $h$ and $A$ are as above but $m=1$ and regarding $v$ and $v^{\ast}:=v^{\ast}_1$ we know that $\|v(t)\|$, $\|v^{\ast}(t)\|_{V^*}$ and $\|v(t)\|\|v^{\ast}(t)\|_{V^*}$ are almost surely locally integrable with respect to $dA(t)$. Let \begin{equation*} \bar{v}^{\ast}(t) := \frac{v^{\ast}(t)}{1+\|v^{\ast}(t)\|_{V^*}} \,\,\, \textrm{and} \,\,\, \bar A(t) := \int_{(0,t]}(1+\|v^{\ast}(s)\|_{V^*})\,dA(s). \end{equation*} Then $\|\bar{v}^{\ast}\|_{V^*} \leq 1$ and so $v$, $\bar{v}^{\ast}$ and $\bar A$ satisfy the conditions on $v$, $v^{\ast}$ and $A$, respectively, with $p_1 = 1$ and $q_1 = \infty$. If~\eqref{eq:1} holds for all $\varphi \in V$ and for $dA\times \P$ almost all $(\omega, t)$ such that $t\in (0,\tau(\omega))$ then \begin{equation*} ( v(t), \varphi ) = \sum_{i=1}^m \int_{(0,t]} \langle \bar{v}^{\ast}(s),\varphi \rangle\, d\bar A(s) + ( h(t),\varphi ). \end{equation*} Applying Theorem~\ref{thm:1} then means that we have all of its conclusions with $\bar{v}^{\ast}$ and $\bar A$ in place of $v^{\ast}$ and $A$ respectively. In particular, we get \begin{equation*} \begin{split} |\tilde v(t)|^2 = & |h(0)|^2 + 2 \int_{(0,t]} \langle \bar{v}^{\ast}(s), v(s) \rangle\, d\bar A(s) + 2\int_{(0,t]} (\tilde v(s-),dh(s)) \\ & - \int_{(0,t]} \left|\bar{v}^{\ast}(s)\right|^2 \Delta \bar A(s) d\bar A(s) + [h]_t\\ = & |h(0)|^2 + 2 \int_{(0,t]} \langle v^{\ast}(s), v(s) \rangle\, dA(s) + 2\int_{(0,t]} (\tilde v(s-),dh(s)) \\ & - \int_{(0,t]} \left|v^{\ast}(s)\right|^2 \Delta A(s)\,dA(s) + [h]_t. \end{split} \end{equation*} Hence we see that Theorem~\ref{thm:1} is a generalisation of the main theorem in Gy\"ongy and Krylov~\cite{gyongy:krylov:on:stochastic:II}. \begin{remark} \label{rem integrability} One might think that Theorem \ref{thm:1} follows from the main theorem in~\cite{gyongy:krylov:on:stochastic:II} by considering the process $v^{\ast}=\sum_iv^{\ast}_i$ as a process with values in $V^{\ast}$.
However, taking into account that for any $w^{\ast}\in V^{\ast}$ $$ \|w^{\ast}\|_{V^{\ast}} =\inf\Big\{\max_{i=1,\ldots,m}\|w_i^{\ast}\|_{V^{\ast}_i} :w^{\ast}=\sum_{i=1}^mw_i^{\ast}, w_i^{\ast}\in V_i^{\ast}\Big\} $$ (see for example Gajewski, Gr{\"o}ger and Zacharias~\cite[Chapter 1, Theorem 5.13]{ggz}), one can show that the local integrability condition in \cite{gyongy:krylov:on:stochastic:II} for $$ \|v\|_{V}\|v^{\ast}\|_{V^{\ast}}=(\|v\|_1+\cdots+\|v\|_m)\|v^{\ast}\|_{V^{\ast}} $$ is not implied by our assumption \eqref{assumption main}. Thus the main theorem in \cite{gyongy:krylov:on:stochastic:II} is not applicable in our situation. \end{remark} We consider the following motivating example. \begin{example} \label{ex spde} Consider the stochastic partial differential equation \begin{equation*} \begin{split} du = & \left[ \nabla(|\nabla u|^{p_1-2} \nabla u) + |u|^{p_2-2} u \right] dt\\ & + f(u,\nabla u)\,dW + \int_Z g(u)\,q(dt,dz)\,\, \textrm{in} \,\, \mathscr{D}\times (0,T). \end{split} \end{equation*} Here $W$ is a Wiener process (finite or infinite dimensional depending on the choice of $f$), $(Z,\Sigma)$ is a measurable space and $q(ds,dz)$ a stochastic martingale measure on $[0,\infty)\times Z$. See, for example, Gy\"ongy and Krylov~\cite{gyongy:krylov:on:stochastic:I} for detailed definition. We take $\mathscr{D}$ to be a bounded Lipschitz domain in $\R^d$. It is natural to assume that a solution $u$ should be such that $\|u\|_{W^{1}_{p_1}(\mathscr{D})}^{p_1}$ and $\|u\|_{L_{p_2}(\mathscr{D})}^{p_2}$ are almost surely locally integrable. To apply the result in Gy\"ongy and Krylov~\cite{gyongy:krylov:on:stochastic:II} one could try to take $V := W^{1}_{p_1}(\mathscr{D}) \cap L_{p_2}(\mathscr{D})$ with the norm $\|\cdot\|_V = \|\cdot\|_{W^{1}_{p_1}(\mathscr{D})} +\|\cdot\|_{L_{p_2}(\mathscr{D})}$. The dual of $V$ can be identified with the linear space \begin{equation*} V^* = \{f = f_1 + f_2 :f_1 \in W^{1}_{p_1}(\mathscr{D})^*, f_2 \in L_{p_2}(\mathscr{D})^*\} \end{equation*} equipped with the norm \begin{equation*} \begin{split} \|f\|_{V*} = \inf \{\max(&\|f_1\|_{W^{1}_{p_1}(\mathscr{D})^*}, \|f_2\|_{ L_{p_2}(\mathscr{D})^*}): \\ & f = f_1 + f_2, f_1 \in W^{1}_{p_1}(\mathscr{D})^*,\,\, f_2\in L_{p_2}(\mathscr{D})^* \}. \end{split} \end{equation*} One would then need to show that $\|u\|_V \, \|\nabla(|\nabla u|^{p_1-2}\nabla u)+|u|^{p_2-2} u\|_{V^*}$ is locally integrable. To ensure this in general we need, in particular, that \begin{equation*} \|u\|_{W^{1}_{p_1}(\mathscr{D})}\, \||u|^{p_2-2} u\|_{L_{p_2}(\mathscr{D})^*}= \|u\|_{W^{1}_{p_1}(\mathscr{D})}\, \|u\|^{p_2-1}_{L_{p_2}(\mathscr{D})} \end{equation*} is locally integrable, which we may not have if $p_1 < p_2$. Thus one cannot apply the It\^o formula from Gy\"ongy and Krylov. On the other hand it is easy to check that the assumptions of Theorem~\ref{thm:1} are satisfied. \end{example} An application of the above It\^o's formula to SPDEs driven by Wiener processes is given in \cite{pardoux:thesis} (Chapter 2, Example 5.1) and in \cite{GSS}. Further examples can be found in \cite[Chapter 2, Section 1.7]{Lions}. \section{Preliminaries} \begin{lemma} \label{lemma:1} For $r\in [0,\infty)$ let $\beta(r) := \inf\{ t\geq 0: A(t) \geq r\}$ and let $x(t)$ be a real valued process that is locally integrable with respect to $dA$ for all $\omega \in \Omega$. 
Then \begin{enumerate}[i)] \item $\beta(r)$ is a stopping time (not necessarily finite) for every $r\in [0,\infty)$, \item \begin{equation*} \begin{split} \int_{(0,t]} x(s)\,dA(s) & = \int_{(0,A(t)]} x(\beta(r))\, dr, \\ \int_{(0,t)} x(s)\,dA(s) & = \int_{(0,A(t-)]} x(\beta(r)) \,dr \end{split} \end{equation*} for every $t\in [0,\infty)$, \item \begin{equation*} A(\beta(t)-) - A(\beta(s)) \leq t-s \end{equation*} for every $s,t \in [0,\infty)$. \item If $0 = r^n_0 < r^n_1 < \cdots < r^n_k < \cdots $ is an increasing sequence of decompositions of $[0,\infty)$ such that $\sup_{k}|r^n_{k+1} - r^n_k| \to 0$ as $n\to \infty$ then for every $t\geq 0$ and $\omega \in \Omega$ \begin{equation*} \sum_k\left|X(\tau^n_{k+1}\wedge t) - X(\tau^n_k \wedge t)\right|^2 \to \sum_{s\leq t} |x(s)|^2 |\Delta A(s)|^2 \end{equation*} as $n\to \infty$, where $X(t):= \int_{(0,t]}x(s)dA(s)$ and $\tau^n_k := \beta(r^n_k)$. \end{enumerate} \end{lemma} This lemma is proved in Gy\"ongy and Krylov~\cite[Lemma 1]{gyongy:krylov:on:stochastic:II}. Let $\kappa_n^{(j)}$ for $j=1,2$ and integers $n\geq1$ denote the functions defined by $$ \kappa^{(1)}_n(t)=2^{-n}\lfloor 2^nt\rfloor, \quad \kappa^{(2)}_n(t)=2^{-n}\lceil 2^nt\rceil. $$ The following lemma is known and the authors believe it is due to Doob. \begin{lemma} \label{lemma:2} For integers $i\geq1$ let $(X_i,\|\cdot\|_{X_i})$ be Banach spaces, and let $p_i \in [1,\infty)$. Let $x_i: \R \times \Omega \to X_i$ be $ \mathscr{B}(\R) \otimes \mathcal{F}$ Bochner-measurable such that $x_i(r)= 0$ for $r\notin [0,1]$ and \begin{equation*} \alpha_i:=\mathbb{E}\int_{0}^{1} \|x_i(r)\|_{X_i}^{p_i} \,dr < \infty. \end{equation*} Then there exists a subsequence $n_k \to \infty$ such that for $dt$-almost all $t\in [0,1]$ \begin{equation*} \mathbb{E} \int_{(0,1]}\|x_i(r) - x_i(\kappa^{(j)}_{n_k}(r-t)+t)\|_{X_i}^{p_i}\, dr \to 0 \,\,\textrm{ as }\,\, k\to \infty \end{equation*} for $j=1,2$ and all $i\geq1$. \end{lemma} \begin{proof} Let $(c_i)_{i=1}^{\infty}$ be a sequence of positive numbers such that $$ \sum_{i=1}^{\infty}c_i2^{p_i}\alpha_i<\infty. $$ By change of variables and changing the order of integration $$ I_n:= \sum_{i=1}^{\infty}c_i\int_0^1 \mathbb{E} \int_{0}^{1}\|x_i(r) - x_i(\kappa^{(j)}_{n}(r-t)+t)\|_{X_i}^{p_i}\,dr\,dt $$ $$ \leq \sum_{i=1}^{\infty}c_i \mathbb{E}\int_{-1}^1\int_{0}^1\|x_i(s+t)-x_i(\kappa^{(j)}_n(s)+t)\|^{p_i}_{X_i}\,dt\,ds. $$ Note that by the shift invariance of the Lebesgue measure $$ J_{in}(s):=\int_{0}^1\|x_i(s+t)-x_i(\kappa^{(j)}_n(s)+t)\|^{p_i}_{X_i}\,dt\to 0\,(a.s.) $$ for $s\in(0,1)$, $i\geq1$, and $$ \sum_{i=1}^{\infty}c_i|J_{in}(s)| \leq \sum_{i=1}^{\infty}c_i2^{p_i-1}\left(\int_{0}^1\|x_i(s+t)\|^{p_i}_{X_i}\,dt +\int_{0}^1\|x_i(\kappa^{(j)}_n(s)+t)\|^{p_i}_{X_i}\,dt\right) $$ $$ \leq \sum_{i=1}^{\infty}{c_i}2^{p_i} \int_{0}^1\|x_i(t)\|^{p_i}_{X_i}\,dt. $$ Therefore by Lebesgue's theorem on dominated convergence $$ I_n=\int_0^1\left(\sum_{i=1}^{\infty}c_i \mathbb{E} \int_{0}^{1}\|x_i(r) - x_i(\kappa^{(j)}_{n}(r-t)+t)\|_{X_i}^{p_i}\,dr\right)\,dt \to0. $$ Hence for a subsequence $n_k\to\infty$ $$ \sum_{i=1}^{\infty}c_i \mathbb{E} \int_{0}^{1}\|x_i(r) - x_i(\kappa^{(j)}_{n_k}(r-t)+t)\|_{X_i}^{p_i}\,dr \to0 $$ for almost all $t\in[0,1]$, and the statement of the lemma follows. \end{proof} The following lemma is proved in Gy\"ongy and Krylov~\cite[Lemma 3]{gyongy:krylov:on:stochastic:II}. \begin{lemma} \label{lemma:3} Let $(\xi_n)_{n\in \mathbb{N}}$ be a sequence of $H$-valued predictable processes.
Suppose \begin{equation*} \P \left[\sup_{n\in \mathbb{N}, t\leq T} |\xi_n(t)| < \infty \right] = 1 \end{equation*} and \begin{equation*} \P\left[ \forall t \leq T,\,\,\forall \varphi \in H\,\,\, \lim_{n\to \infty} (\xi_n(t),\varphi) = 0\right] = 1. \end{equation*} Then for any $\varepsilon > 0$ \begin{equation*} \P \left[ \sup_{t\leq T} \left|\int_{(0,t]}(\xi_n(s), dh(s))\right| > \varepsilon \right] \to 0 \end{equation*} as $n\to \infty$. \end{lemma} \section{Proof of the Main Result} \label{sec:proof} The following standard steps, as in Krylov and Rozovskii~\cite{krylov:rozovskii:stochastic}, allow us to work under more convenient assumptions without any loss of generality. \begin{enumerate}[1)] \item We note that $\tau$ can be assumed to be a bounded stopping time. Indeed if we prove Theorem~\ref{thm:1} under this assumption then we can extend it to unbounded stopping times by considering $\tau \wedge n$ and letting $n\to \infty$. In fact using a non-random time change we may assume that $\tau\leq 1$. \item Recall the processes $\eta_i$ from assumption \eqref{assumption main}, and set $$ Q_i(t)=\left(\int_{(0,t]} \eta_i^{q_i}(s)\,dA(s)\right)^{1/q_i}\quad t\geq0 $$ when $q_i<\infty$, and for $q_i=\infty$ let $Q_i=(Q_i(t))_{t\geq0}$ denote a nondecreasing cadlag adapted process such that almost surely $$ \text{$dA$-ess\,sup}_{s\leq t}\eta_i(s)\leq Q_i(t)\quad \text{ for all $t\geq0$}. $$ It is not difficult to see that such a process $Q_i$ exists, we can take, e.g., the adapted right-continuous modification of the process $\text{$dA$-ess\,sup}_{s\leq t}\eta_i(s)$, i.e., $$ \lim_{n\to\infty}\text{$dA$-ess\,sup}_{s\leq t+1/n}\eta_i(s). $$ Let $(e^j)_{j\in \mathbb{N}} \subset V$ be an orthonormal basis in $H$ and define \begin{equation} \label{eq:r_def} \begin{split} & r(t):=|h(0)|+A(t) + \sum_{i=1}^m\left(\int_{(0,t]} \|v(s)\|_{V_i}^{p_i} dA(s)\right)^{1/p_i} \\ & + \sum_{i=1}^mQ_i(t) + \sum_{i=1}^m\sum_{k\in \mathbb{N}} 2^{-c_k} \left(\int_{(0,t]}\|w_k(s)\|^{p_i}_{V_i} dA(s)\right)^{1/p_i}, \end{split} \end{equation} with $c_k:=\max_{1\leq i\leq m}\sum_{j\leq k}|e_j|^2_{V_i}$ and $w_k := \Pi^k h$, where $\Pi^k$ denotes the orthogonal projection of $H$ onto its subspace spanned by $(e_i)_{i=1}^k$. We may and will assume, without loss of generality, that $r$ and $\langle h \rangle$ are bounded. Indeed, imagine we have proved Theorem~\ref{thm:1} under this assumption. Consider \begin{equation*} \tau_n := \inf\{t \geq 0: r(t) \geq n\}. \end{equation*} Then $\tau_n$ is a stopping time and $\tau_n\to\infty$ for $n\to\infty$. Since $\langle h\rangle$ is a predictable process starting from $0$, there is an increasing sequence of stopping times $\sigma_n$ such that $\sigma_n\to\infty$ and $\langle h\rangle_t\leq n$ for $t\in[0,\sigma_n]$. Therefore $\tau_n \wedge \sigma_n \wedge \tau \to \tau$ as $n\to \infty$, and for fixed $n$ we get $r(t)\leq n$ for $t\in(0,\tau_n\wedge\sigma_n)$ and $\langle h \rangle_t \leq n$ for $t\in[0,\tau_n\wedge\sigma_n]$. Thus we get~\eqref{eq:2} and~\eqref{eq:3} for the stopping time $\tau_n \wedge \sigma_n \wedge \tau$ in place of $\tau$. Letting $n\to \infty$ provides~\eqref{eq:2} and~\eqref{eq:3} for $\tau$. Thus we may assume that there is $n\geq1$ such that $r(t)\leq n$ for $t\in(0,\tau)$ and $\langle h\rangle_t\leq n$ for $t\in[0,\tau]$. 
Moreover, by taking $h{\bf1}_{|h(0)|<n}$, $v{\bf1}_{|h(0)|<n}$ and $A{\bf1}_{|h(0)|<n}$ in place of $h$, $v$ and $A$, respectively, and then taking $n\to\infty$, we may assume that $r(t)\leq n$ for $t\in [0,\tau)$ and $\langle h\rangle_t\leq n$ for $t\in[0,\tau]$. Furthermore, we can define $A(t) := A(\tau-)$, $h(t)=h(\tau)$, $v(t) = 0$ and $v_i^{\ast}(t) = 0$ for $t\geq \tau$. Then $r(t) \leq n$ and $\langle h \rangle_t \leq n$ for $t\in [0,\infty)$. \item Finally, we can assume that $r(t) \leq 1$ for $t\in [0,\tau)$ and $\langle h \rangle_t\leq 1$ for $t\in [0,\tau]$. Indeed let $v_n := n^{-1}v$, $A_n := n^{-1}A$ and $h_n := n^{-1}h$. Then $r_n$, defined analogously to $r$ in~\eqref{eq:r_def} but with $v$, $A$ and $h$ replaced by $v_n$, $A_n$ and $h_n$ respectively, satisfies $r_n(t) \leq n^{-1}r(t) \leq 1$. We thus get~\eqref{eq:2} and~\eqref{eq:3} with $v$, $A$ and $h$ replaced by $v_n$, $A_n$ and $h_n$ respectively. We can now multiply by $n$ and $n^2$ to obtain the desired conclusions. \end{enumerate} Now we proceed to prove Theorem~\ref{thm:1} under the assumption that $\tau \leq 1$, $r(t) \leq 1$ and $\langle h \rangle_t \leq 1$ for $t\in [0,\infty)$. Our approach is the same as in Gy\"ongy and Krylov~\cite{gyongy:krylov:on:stochastic:II}. The idea is to approximate $v$ by simple processes whose jumps happen at stopping times where equation~\eqref{eq:1} holds. But~\eqref{eq:1} only holds for every $\varphi \in V$ and $dA\times \P$ almost all $(t,\omega) \in \rrbracket0,\tau\llbracket$, and thus it is not immediately clear how to choose an appropriate piecewise constant approximation to $v$. Here and later on for stopping times $\tau$ the notation $\rrbracket0,\tau\llbracket$ means the stochastic interval $\{(t,\omega):t\in(0,\tau(\omega)),\omega\in\Omega\}$. \begin{proposition} \label{propn:1} There is a nested sequence of random partitions of $[0,\infty]$, \begin{equation*} 0 = \tau^n_0 < \tau^n_1 \leq \tau^n_2 \leq \cdots \leq \tau^n_{N(n)+1} = \infty, \end{equation*} with stopping times $\tau^n_j$, $j=1,\ldots,N(n)+1$, such that for every $\omega \in \Omega$ either $\tau^n_j(\omega) < \tau(\omega)$ or $\tau^n_j(\omega) = \infty$, and such that the following statements hold. \begin{enumerate}[(1)] \item There is $\Omega' \subset \Omega$ such that $\P(\Omega') = 1$ and with \begin{equation*} I(\omega):=\{\tau^n_j(\omega):n\in \mathbb{N}, j=1,\ldots,N(n)\}\cap (0,\infty) \end{equation*} we have~\eqref{eq:1} satisfied for every $\omega \in \Omega'$, $t\in I(\omega)$ and $\varphi \in V$. Moreover, if $\Delta A(t)>0$ for some $t>0$ and $\omega\in\Omega'$, then $t\in I(\omega)$. Furthermore, if $0\leq s<t$ and $(s,t]\cap I(\omega)=\emptyset$, then $A(s)=A(t)$. \item For $l\in\{1,2\}$, $i=1,\ldots,m$ and for all $k\geq1$ \begin{equation} \label{eq:lims1:in_proposition} \begin{split} \lim_{n\to \infty} \mathbb{E}\int_{(0,\infty)} \|v(s) - v^{(l)}_n(s)\|_{V_i}^{p_i}\,dA(s) = 0,\\ \lim_{n\to \infty} \mathbb{E}\int_{(0,\infty)} \|w_k(s) - w^{(l)}_{kn}(s)\|^{p_i}_{V_i}\,dA(s) = 0, \end{split} \end{equation} where $$ v_n^{(1)}(t):=\sum_{j=1}^{N(n)}v(\tau_j^n){\bf1}_{[\tau_j^n,\tau_{j+1}^n)}(t), \quad v_n^{(2)}(t):=\sum_{j=0}^{N(n)}v(\tau_{j+1}^n){\bf1}_{(\tau_j^n,\tau_{j+1}^n]}(t), $$ and $w^{(l)}_{kn}$ is defined analogously from $w_k=\Pi^kh$. \end{enumerate} \end{proposition} \begin{proof} Since $V$ is separable there is $\{\varphi_i\}_{i\in \mathbb{N}}\subset V$ which is dense in $V$. 
For each $\varphi_i$ there is an exceptional set $D_i \subset [0,\infty)\times\Omega$ such that~\eqref{eq:1} holds for $(t,\omega)\in\rrbracket0,\tau\llbracket\setminus D_i$ and $(dA\times \P)(D_i)=0$. Let $D = \bigcup_{i\in \mathbb{N}} D_i$. Then $(dA \times \P)(D)=0$ and~\eqref{eq:1} holds for all $\varphi\in V$ and all $(t,\omega)\in\rrbracket0,\tau\llbracket\setminus D$. Now using Lemma~\ref{lemma:1} and the Fubini theorem \begin{equation*} \begin{split} 0 & = \mathbb{E}\int_{(0,\tau)} \chi_D(s)\,dA(s) = \mathbb{E} \int_{(0,A(\tau-)]} \chi_D(\beta(r))\,dr\\ & = \int_{(0,\infty)} \P(r\leq A(\tau), (\beta(r),\omega)\in D)\,dr. \end{split} \end{equation*} From this we see that for $dr$ almost all $r\in (0,\infty)$ there is $\Omega(r) \subset \Omega$ with $\P(\Omega(r)) = 1$ such that for any $\omega \in \Omega(r)$ either $r > A(\tau(\omega),\omega)$ or $\beta(r,\omega) < \tau(\omega)$ and for $t=\beta(r)$ and for all $\varphi \in V$ \begin{equation} \label{eq:1_again} ( v(t), \varphi ) = \sum_{i=1}^m \int_{(0,t]} \langle v_i^{\ast}(s),\varphi \rangle \,dA(s) + ( h(t),\varphi ). \end{equation} By virtue of Lemma~\ref{lemma:2} there is a nested sequence of decompositions of $[0,1]$, \begin{equation} \label{eq r} 0 = r^n_0 < r^n_1 < \cdots < r^n_{N(n)+1} = 1, \end{equation} such that $\lim_{n\to \infty} \max_j |r^n_{j+1} - r^n_j| = 0$, and \begin{equation} \label{eq:lims_with_beta} \begin{split} \lim_{n\to \infty} \mathbb{E}\int_0^1 \|v(\beta(r)) - v(\beta(\kappa_n^{(l)}(r)))\|_{V_i}^{p_i}\,dr = 0,\\ \lim_{n\to \infty} \mathbb{E}\int_0^1 \|w_k(\beta(r)) - w_k(\beta(\kappa_n^{(l)}(r)))\|^{p_i}_{V_i}\,dr = 0 \end{split} \end{equation} for all $i=1,\ldots,m$, all $k\in \mathbb{N}$ and $l=1,2$, where $\kappa_n^{(1)}(r) = r^n_j$ if $r\in [r^n_j,r^n_{j+1})$ and $\kappa_n^{(2)}(r) = r^n_{j+1}$ if $r\in (r^n_j,r^n_{j+1}]$. Now let $\Omega' := \bigcap_{n\in \mathbb{N}} \left( \Omega(r^n_0)\cap \ldots \cap\Omega(r^n_{N(n)+1})\right)$, $\tau^n_j := \beta(r^n_j)$, and \begin{equation*} I(\omega):=\{\tau^n_i(\omega):n\in \mathbb{N}, i=1,\ldots,N(n)\}\cap (0,\infty). \end{equation*} Then $\P(\Omega') = 1$ and \begin{equation*} 0 = \tau^n_0 < \tau^n_1 \leq \tau^n_2 \leq \cdots \leq \tau^n_{N(n)+1} = \infty, \quad n=1,2,\ldots, \end{equation*} is a nested sequence of random partitions of $[0,\infty]$ by stopping times $\tau^n_j$ such that statement (1) holds. To prove (2) we notice that, just like in \cite{gyongy:krylov:on:stochastic:II}, for $r\in(r_j^n,r^n_{j+1}]$ $$ v_n^{(2)}(\beta(r))= \left\{ \begin{array}{lll} v(\beta(r^n_{j+1})) = v(\beta(\kappa^{(2)}_n(r))) & \text{if} & \beta(r^n_j) < \beta(r) \\ v(\beta(r^n_j)) = v(\beta(\kappa^{(1)}_n(r))) & \text{if} & \beta(r^n_j) = \beta(r). \end{array} \right. $$ Thus with appropriate sets $S_n\in\mathcal B(\R)\otimes\mathcal F$ $$ v^{(2)}_n(\beta(r))={\bf1}_{S_n}(r)v(\beta(\kappa^{(2)}_n(r))) + (1-{\bf1}_{S_n}(r))v(\beta(\kappa^{(1)}_n(r))). $$ Hence due to~\eqref{eq:lims_with_beta} and Lemma~\ref{lemma:1} we obtain the first equality in~\eqref{eq:lims1:in_proposition} for $l=2$, $i=1,\ldots,m$ and for all $k\in \mathbb{N}$. The rest of~\eqref{eq:lims1:in_proposition} is obtained similarly.
\end{proof} \begin{proposition} \label{proposition 2} For every $n\in \mathbb{N}$, every $\omega \in \Omega'$ and every $\tau^n_j(\omega) \in I(\omega)$ \begin{equation} \label{eq:towards_ito_3:in_proposition} \begin{split} |v(\tau^n_j)|^2 = & |h(0)|^2 + 2\sum_{i=1}^m \int_{(0,\tau^n_j]} \langle v_i^{\ast}(s), v^{(2)}_n(s) \rangle\,dA(s) \\ & + 2 \int_{(0,\tau^n_j]} (\bar{v}_n(s), dh(s)) + 2(h(0), h(\tau^n_1) - h(0)) \\ & + \sum_{k=0}^{j-1}|h(\tau^n_{k+1}) -h(\tau^n_k)|^2 - |v(\tau^n_1)-h(\tau^n_1)|^2\\ & - \sum_{k=1}^{j-1} |v(\tau^n_{k+1})-v(\tau^n_k) - (h(\tau^n_{k+1}) - h(\tau^n_k))|^2, \end{split} \end{equation} where $\bar v_n(s)=0$ for $s\in[0,\tau^n_1]$ and $\bar v_n(s)=v(\tau^n_j)$ for $s\in(\tau^n_j,\tau^n_{j+1}]$ for $j=1,\ldots,N(n)$. Moreover, \begin{equation} \label{eq:bnds_3} \mathbb{E} \sup_{t\in I} |v(t)|^2 < \infty. \end{equation} \end{proposition} \begin{proof} Let $\omega \in \Omega'$ and let $t,t' \in I(\omega)$ with $t' \geq t$. Clearly, $$ |v(t')|^2-|v(t)|^2=2(v(t'),v(t')-v(t))-|v(t')-v(t)|^2, $$ which by statement (1) of Proposition \ref{propn:1} gives \begin{equation*} \begin{split} & |v(t')|^2 - |v(t)|^2 \\ & = 2\sum_{i=1}^m \int_{(t,t']} \langle v_i^{\ast}(s), v(t') \rangle \,dA(s) + 2(h(t')-h(t),v(t')) - |v(t')-v(t)|^2 . \end{split} \end{equation*} Hence by the identity \begin{equation*} \begin{split} & 2( h(t')-h(t), v(t')-v(t)) \\ & = - |v(t')-v(t) - (h(t')-h(t))|^2 + |v(t')-v(t)|^2 + |h(t')-h(t)|^2, \end{split} \end{equation*} we have \begin{equation} \label{eq:towards_ito_2} \begin{split} |v(t')|^2 & - |v(t)|^2 = 2\sum_{i=1}^m \int_{(t,t']} \langle v_i^{\ast}(s), v(t') \rangle dA(s) + 2(v(t), h(t')-h(t))\\ & + |h(t')-h(t)|^2 - |v(t')-v(t) - (h(t')-h(t))|^2. \end{split} \end{equation} By (1) in Proposition \ref{propn:1} again \begin{equation*} 2|v(t)|^2 = 2\sum_{i=1}^m \int_{(0,t]} \langle v_i^{\ast}(s), v(t) \rangle\, dA(s) + 2(h(t),v(t)), \end{equation*} which by the identity $2(h(t),v(t)) = -|v(t)-h(t)|^2 + |v(t)|^2 + |h(t)|^2$ gives \begin{equation} \label{eq:towards_ito_1} |v(t)|^2 = 2\sum_{i=1}^m \int_{(0,t]} \langle v_i^{\ast}(s), v(t) \rangle\, dA(s) + |h(t)|^2 - |v(t)-h(t)|^2. \end{equation} Summing up for $k=1,\ldots,j-1$ equations \eqref{eq:towards_ito_2} with $t'=\tau^n_{k+1}$, $t=\tau^n_{k}$, and adding to it equation \eqref{eq:towards_ito_1} with $t=\tau^n_1$, we obtain \eqref{eq:towards_ito_3:in_proposition}. From \eqref{eq:towards_ito_3:in_proposition} we have \begin{equation*} \begin{split} \mathbb{E} \max_{1\leq j \leq N(n)} |v(\tau^n_j)|^2 \leq & 2\mathbb{E}|h(0)|^2 + 2\mathbb{E} \sum_{i=1}^m \int_{(0,\tau]} |\langle v_i^{\ast}(s), v^{(2)}_n(s) \rangle|\,dA(s)\\ & + 2\mathbb{E} \max_{1\leq j \leq N(n)} \left|\int_{(0,\tau^n_j]}(\bar{v}_n(s),dh(s))\right|\\ & + 2\mathbb{E} \sum_{k=0}^{N(n)}|h(\tau^n_{k+1}) -h(\tau^n_k)|^2 . \end{split} \end{equation*} Clearly $$ 2\mathbb{E} \max_{1\leq j \leq N(n)} \left|\int_{(0,\tau^n_j]}(\bar{v}_n(s),dh(s))\right| \leq 16 + \frac{1}{16} \mathbb{E} \sup_{t\geq 0} \left|\int_{(0,t]}(\bar{v}_n(s),dh(s))\right|^2, $$ and by Doob's inequality and $\langle h\rangle\leq 1$, \begin{equation*} \mathbb{E} \sup_{t\geq 0} \left|\int_{(0,t]}(\bar{v}_n(s),dh(s))\right|^2\leq 4 \mathbb{E} \int_{0}^{\infty} |\bar{v}_n(s)|^2 d\langle h \rangle_s \leq4\mathbb{E}\max_{1\leq j \leq N(n)} |v(\tau^n_j)|^2. \end{equation*} Since $h$ is a martingale, $$ \mathbb{E} \sum_{k=0}^{N(n)}|h(\tau^n_{k+1}) -h(\tau^n_k)|^2\leq \mathbb{E}|h(1)|^2 =\mathbb{E}|h(0)|^2+\mathbb{E}\langle h\rangle(1)\leq2. 
$$ By H\"older's inequality and $\sum_iQ_i\leq1$ we have $$ \sum_{i=1}^m\mathbb{E}\int_{(0,\tau]} |\langle v_i^{\ast}(s), v^{(2)}_n(s) \rangle|\,dA(s) $$ $$ \leq\sum_{i}\sup_{n\geq 1} \left(\mathbb{E}\int_{(0,\tau]} \|v^{(2)}_n(s)\|_{V_i}^{p_i}\, dA(s)\right)^{\tfrac{1}{p_i}}=:c\,, $$ which by virtue of \eqref{eq:lims1:in_proposition} is finite. Hence, taking also into account $\mathbb{E}|h(0)|^2\leq1$ we have \begin{equation*} \mathbb{E} \max_{1\leq j \leq N(n)} |v(\tau^n_j)|^2 \leq 22+2c+ \frac{1}{4} \mathbb{E} \max_{1\leq j \leq N(n)} |v(\tau^n_j)|^2, \end{equation*} which immediately yields \eqref{eq:bnds_3}, provided \begin{equation} \label{finite} \mathbb{E} \max_{1\leq j \leq N(n)} |v(\tau^n_j)|^2<\infty. \end{equation} To show \eqref{finite} note that due to~\eqref{eq:towards_ito_1}, for every $n\in \mathbb{N}$ and $j=1,\ldots,N(n)+1$, we get \begin{equation} \label{HV} \begin{split} \mathbb{E}|v(\tau^n_j)|^2 \leq & \mathbb{E} |h(\tau^n_j)|^2 + 2\mathbb{E} \sum_{i=1}^m \int_{(0,\tau^n_j]}\langle v_i^{\ast}(s), v(\tau^n_j)\rangle dA(s)\\ \leq & \mathbb{E} |h(0)|^2 +\mathbb{E}\langle h\rangle(1) + 2 \mathbb{E} \sum_{i=1}^m Q_i(\tau) \left(\int_{(0,\tau]} \|v(\tau^n_j)\|_{V_i}^{p_i} dA(s)\right)^{\tfrac{1}{p_i}}\\ \leq & 2 + 2\sum_i\mathbb{E} \|v(\tau^n_j)\|_{V_i}, \end{split} \end{equation} since $\tau \leq 1$, $r(t) \leq1$ and $\langle h\rangle_t\leq1$ for all $t\in [0,\infty)$. For $i=1,\ldots,m$ \begin{equation*} \begin{split} & \mathbb{E} \|v(\tau^n_j)\|^{p_i}_{V_i}\leq \mathbb{E} \sup_{s\in [0,\infty)} \|v^{(2)}_n(s)\|_{V_i}^{p_i} \leq \mathbb{E} \sup_{r\in (0,1]} \|v^{(2)}_n(\beta(r))\|_{V_i}^{p_i}\\ &\leq 2^{p_i-1}\sum_{l=1}^2\mathbb{E} \sup_{r\in (0,1]} \|v(\beta(\kappa^{(l)}_n(r)))\|_{V_i}^{p_i}\\ &\leq 2^{p_i-1} \sum_{l=1}^2 \mathbb{E} \sum_{k=0}^{N(n)}\frac{1}{r^n_{k+1}-r^n_k} \int_{r^n_k}^{r^n_{k+1}} \|v(\beta(\kappa^{(l)}_n(r)))\|_{V_i}^{p_i}\, dr \\ & \leq \frac{2^{p_i-1}}{d_n} \sum_{l=1}^2 \mathbb{E} \int_0^1 \|v(\beta(\kappa^{(l)}_n(r)))\|_{V_i}^{p_i}\, dr \leq 2^{p_i}\frac{c_i}{d_n},\\ \end{split} \end{equation*} where $r^n_k$ are given by~\eqref{eq r}, $d_n := \min_{k=1,\ldots,N(n)}|r^n_{k+1}-r^n_k|>0$ and \begin{equation*} c_i:=\max_l\sup_n\mathbb{E} \int_0^1 \|v(\beta(\kappa^{(l)}_n(r)))\|_{V_i}^{p_i}\, dr, \end{equation*} which due to~\eqref{eq:lims_with_beta} is finite. Hence by virtue of \eqref{HV} we have \eqref{finite}, which completes the proof of \eqref{eq:bnds_3}. \end{proof} We see that due to~\eqref{eq:bnds_3} there is $\Omega'' \subset \Omega'$ such that $\P(\Omega'') = 1$ and \begin{equation*} \sup_{t\in I(\omega)} |v(t)|^2 < \infty\quad\text{for all $\omega \in \Omega''$}. \end{equation*} Moreover, since $h$ is cadlag, for all $\omega \in \Omega''$ we have \begin{equation} \label{eq:bnds_4} \sup_{t\in I(\omega)} |v(t)-h(t)|^2 < \infty. \end{equation} Define \begin{equation} \label{eq:def_of_z} z^{(1)}(t) := \int_{(0,t)} \sum_{i=1}^mv_i^{\ast}(s)\,dA(s),\quad z^{(2)}(t) := \int_{(0,t]} \sum_{i=1}^mv_i^{\ast}(s)\,dA(s), \end{equation} for $t\geq0$, where the integrals are defined as weak* integrals. Recall that $v^{\ast}=\sum_iv_i^{\ast}$ is a $V^{\ast}$-valued process such that $\langle v^{\ast}(t),\varphi\rangle $ is a progressively measurable process for every $\varphi\in V$, and $$ \int_{(0,t]}|\langle v^{\ast}(s),\varphi\rangle|\,dA(s)\leq \sum_i\int_{(0,t]}|\langle v_i^{\ast}(s),\varphi\rangle|\,dA(s) $$ $$ \leq \sum_i\|\varphi\|_{V_i}\int_{(0,t]}\eta_i(s)\,dA(s) \leq \|\varphi\|_{V}\sum_i\int_{(0,t]}\eta_i(s)\,dA(s)<\infty.
$$ Therefore $z^{(1)}$ and $z^{(2)}$ are well-defined $V^{\ast}$-valued processes such that $\langle z^{(1)}, \varphi\rangle$ and $\langle z^{(2)}, \varphi\rangle$ are left-continuous and right-continuous adapted processes, respectively. In what follows we use the notation $\Delta^w f(t):=f(t)-\text{w-lim}_{s\nearrow t}f(s)$ for $H$-valued functions $f$, when the weak limit from the left exists at $t$. \begin{proposition} \label{propn:2} Let $z^{(l)}$, $l\in \{1,2\}$ be given by~\eqref{eq:def_of_z}. \begin{enumerate} \item If $\omega \in \Omega''$ and $t\in (0,\infty)$ then $z^{(l)}(t)\in H$ for $l\in\{1,2\}$. Moreover \begin{equation*} \sup_{t\in (0,\infty)} |z^{(l)}(t)| < \infty \,\,\,\, \forall \omega \in \Omega'',\,\, l \in \{1,2\}. \end{equation*} \item Let $\tilde v$ be given by \begin{equation*} \tilde v(t) := \chi_{\Omega''} z^{(2)}(t) + h(t). \end{equation*} Then $\tilde v$ is a $H$-valued adapted and weakly cadlag process such that $v(t) = \tilde v(t)$ for all $t\in I(\omega)$ and $\omega \in \Omega''$. Moreover \begin{equation*} \sup_{t\in (0,\infty)} |\tilde v(t)| < \infty \,\,\,\, \forall \omega \in \Omega''. \end{equation*} \item If $\omega \in \Omega''$ then for all $t\in (0,\tau(\omega))$ \begin{equation} \label{eq:Delta_w} \Delta^w (\tilde v-h)(t) = (\Delta A)(t)\sum_{i=1}^m v_i^{\ast}(t). \end{equation} \end{enumerate} \end{proposition} \begin{proof} Fix $\omega \in \Omega''$. If $t\in I(\omega)$ then for all $\varphi \in V$ \begin{equation*} \left(v(t) - h(t), \varphi\right) = \sum_{i=1}^m\int_{(0,t]} \langle v_i^{\ast}(s),\varphi \rangle\,dA(s), \end{equation*} and hence $z^{(2)}(t) \in H$. Consider now the situation when $t\in (0,\tau(\omega)] \setminus I(\omega)$. Let $\bar{I}^l(\omega)$ denote the left-closure of the set $I(\omega)$. If $t\in \bar{I}^l(\omega)\setminus I(\omega)$ then $\Delta A(t)=0$ by Proposition \ref{propn:1}, and there is a sequence $(t_n)_{n\in \mathbb{N}} \subset I(\omega)$ such that $t_n \nearrow t$. Moreover, due to~\eqref{eq:bnds_4} there is a subsequence $t_{n'}\nearrow t$ such that $v(t_{n'}) - h(t_{n'})$ converges weakly in $H$ to some $\xi\in H$. Hence for all $\varphi \in V$ \begin{equation*} \begin{split} (\xi, \varphi) & = \lim_{n'\to \infty} (v(t_{n'}) - h(t_{n'}),\varphi)\\ & = \lim_{n'\to \infty} \sum_{i=1}^m \int_{(0,t_{n'}]} \langle v_i^{\ast}(s), \varphi \rangle\,dA(s) = \sum_{i=1}^m \int_{(0,t)} \langle v_i^{\ast}(s), \varphi \rangle\,dA(s) \\ & = \sum_{i=1}^m \int_{(0,t]} \langle v_i^{\ast}(s), \varphi \rangle\,dA(s), \end{split} \end{equation*} which implies $z^{(2)}(t)=\xi \in H$. If $t\in (0,\infty)\setminus\bar{I}^l(\omega)$, then there is $s \in \{0\}\cup\bar{I}^l(\omega)$ such that $s<t$ and $(s,t] \cap I(\omega) = \emptyset$. So $\int_{(s,t]} v_i^{\ast}(s)\,dA(s) = 0$ and $z^{(2)}(t) = z^{(2)}(s) \in H$. Of course if $t=0$ then $z^{(2)}(t) = 0 \in H$. Finally, due to~\eqref{eq:bnds_4}, \begin{equation} \label{eq:bnds_5} \sup_{t\in (0,\infty)}|z^{(2)}(t)|^2 = \sup_{t\in (0,\infty)}|v(t)-h(t)|^2 < \infty. \end{equation} Now we consider $z^{(1)}(t)$ for $t\in (0,\infty)$. Take $(t_n)_{n\in \mathbb{N}}$ such that $t_n < t$ and $t_n \nearrow t$ as $n\to \infty$. From~\eqref{eq:bnds_5} we know that $\sup_{n\in \mathbb{N}} |z^{(2)}(t_n)|^2 < \infty$ and so there is a subsequence $t_{n'}\nearrow t$ such that $z^{(2)}(t_n)$ converges weakly in $H$ to some $\xi\in H$. 
Thus for any $\varphi \in V$ \begin{equation*} \begin{split} & (\xi, \varphi) = \lim_{n'\to \infty} (z^{(2)}(t_{n'}),\varphi)\\ & =\lim_{n'\to \infty} \sum_{i=1}^m \int_{(0,t_{n'}]} \langle v_i^{\ast}(s),\varphi \rangle\,dA(s) = \sum_{i=1}^m \int_{(0,t)} \langle v_i^{\ast}(s),\varphi \rangle\,dA(s) = \langle z^{(1)}(t), \varphi \rangle. \end{split} \end{equation*} Hence $z^{(1)}(t) = \xi \in H$, and due to~\eqref{eq:bnds_5} \begin{equation*} \sup_{t\in (0,\infty)}|z^{(1)}(t)|^2 \leq \sup_{t\in (0,\infty)}|z^{(2)}(t)|^2 < \infty. \end{equation*} By construction $\tilde v$ is weakly cadlag. Due to~\eqref{eq:bnds_5} for $\omega \in \Omega''$ \begin{equation*} \sup_{t\in (0,\infty)} |\tilde v(t)|^2 \leq \sup_{t\in (0,\infty)} | z^{(2)}(t)|^2 + \sup_{t\in (0,\infty)} | h(t)|^2 < \infty. \end{equation*} We note that for any $\varphi \in V$ the real valued random variable \begin{equation*} (\tilde v(t),\varphi) = \chi_{\Omega''} \sum_{i=1}^m \int_{(0,t]} \langle v_i^{\ast}(s), \varphi \rangle\,dA(s) + (h(t),\varphi) \end{equation*} is $\mathcal{F}_t$-measurable. Hence, since $H$ is separable, $\tilde v(t)$ is $\mathcal{F}_t$-measurable by the Pettis theorem. Finally notice that $$ \Delta ((\tilde v-h)(t),\varphi) = \sum_{i=1}^m \langle v_i^{\ast}(t),\varphi\rangle (\Delta A)(t) $$ for all $\varphi\in V$ and $\omega\in\Omega''$. Hence on $\Omega''$ $$ \Delta^w (\tilde v-h)(t)= \sum_{i=1}^m v_i^{\ast}(t)(\Delta A)(t)\,. $$ \end{proof} Let \begin{equation*} \tilde v_n(t) := \tilde v(\tau^n_j) \,\,\, \textrm{and}\,\,\,\, h_n(t) := h(\tau^n_j)\,\,\,\, \textrm{for}\,\,\,\, t\in (\tau^n_j,\tau^n_{j+1}], \,\,\,\, j = 0,1,\ldots,N(n). \end{equation*} Then from~\eqref{eq:towards_ito_3:in_proposition} it follows that for every $\omega \in \Omega''$ and $t:=\tau^n_j(\omega) \in I(\omega)$ \begin{equation} \label{eq:towards_ito_4} \begin{split} |\tilde v(t)|^2 = & |h(0)|^2 + 2\sum_{i=1}^m \int_{(0,t]} \langle v_i^{\ast}(s), v^{(2)}_n(s) \rangle\,dA(s) \\ & + 2 \int_{(0,t]} (\tilde v_n(s), dh(s)) + \sum_{k=0}^{j-1}|h(\tau^n_{k+1})-h(\tau^n_k)|^2-K_n(t), \end{split} \end{equation} where $$ K_n(t):= \sum_{k\geq 0:\,\tau^n_{k+1}\leq t} |\tilde v(\tau^n_{k+1})-\tilde v(\tau^n_k) - (h(\tau^n_{k+1}) - h(\tau^n_k))|^2. $$ In order to let $n\to\infty$ in the above equation we first rewrite it as \begin{equation} \label{eq:towards_ito_5} \begin{split} |\tilde v(t)|^2 = & 2\sum_{i=1}^m \int_{(0,t]} \langle v_i^{\ast}(s), v^{(2)}_n(s) \rangle\, dA(s) \\ & + 2 \int_{(0,t]} (\tilde v_n(s)-h_n(s), dh(s)) + |h(t)|^2-K_n(t) \\ \end{split} \end{equation} by noticing that \begin{equation*} 2\int_{(0,\tau^n_j]} (h_n(s), dh(s)) =|h(\tau^n_j)|^2 - |h(0)|^2 - \sum_{k=0}^{j-1} |h(\tau^n_{k+1}) - h(\tau^n_k)|^2. \end{equation*} To perform the limit procedure we use the following two propositions. \begin{proposition} \label{prop:4} There is $\tilde\Omega \subset \Omega''$ with $\P(\tilde\Omega) = 1$ such that for a subsequence $n'$ and for every $\omega \in \tilde\Omega$ \begin{equation*} \begin{split} & \int_{(0,\infty)} \|v(s) - v^{(l)}_{n'}(s)\|_{V_i}^{p_i}\,dA(s) \to 0\,\,\,\, (l=1,2),\\ & \int_{(0,\infty)} \|w_k(s) - w^{(l)}_{kn'}(s)\|^{p_i}_{V_i}\,dA(s) \to 0\,\,\,\, (l=1,2; k\in \mathbb{N}),\\ & \sup_{t\in (0,\infty)}\left|\int_{(0,t]} (\tilde v_{n'}(s) - h_{n'}(s),dh(s)) - \int_{(0,t]} (\tilde v(s-)-h(s-),dh(s))\right| \to 0 \end{split} \end{equation*} as $n'\to \infty$. Moreover, $$ K_{n'}(t)\to \int_{(0,t]}|v^{\ast}(s)|^2 \Delta A(s)\,dA(s) \quad\text{for $t\in I(\omega)$ and $\omega\in\tilde\Omega$}.
$$ \end{proposition} \begin{proof} Set $\xi(t) := \tilde v(t-) - h(t-)$ and $\xi_n(t):=\tilde v_n(t)-h_n(t)$. By Lemma~\ref{lemma:3}, taking into account that by Proposition~\ref{propn:2} on $\Omega''$ $$ \sup_n\sup_{t\in (0,\infty)} |\xi(t) - \xi_n(t)| \leq \sup_{t\in (0,\infty)}|z^{(1)}(t)| < \infty, $$ and that $V$ is dense in $H$, we have $$ \sup_{t\geq0}\left|\int_{(0,t]}(\xi(s)-\xi_n(s),dh(s))\right|\to 0\quad \text{in probability as $n\to\infty$}, $$ if we show that almost surely $$ \lim_{n\to\infty}(\xi(t)-\xi_n(t),\varphi)=0 \quad \text{for all $t>0$ and $\varphi\in V$}. $$ To this end set $$ v_i^{\ast}:=\int_{(\tau^n_j,t)} v_i^{\ast}(s)\,dA(s)\in V_i^{\ast}. $$ Then for all $\omega\in\Omega''$, $t>0$ and $\varphi\in V$ \begin{equation*} \begin{split} & (\xi(t) - \xi_n(t),\varphi)=\langle \xi(t) - \xi_n(t),\varphi\rangle = \bigg\langle \sum_{i=1}^m v_i^{\ast},\varphi\bigg\rangle = \sum_{i=1}^m \langle v_i^{\ast},\varphi\rangle\\ &= \sum_{i=1}^m \int_{(\tau^n_j,t)} \langle v_i^{\ast}(s),\varphi\rangle\,dA(s) \leq \sum_{i=1}^{m}\|\varphi\|_{V_i} \int_{(\tau^n_j,t)} \|v_i^{\ast}(s)\|_{V_i^*}\,dA(s) \\ & \leq \max_{j=1,\ldots,N(n)} \sum_{i=1}^{m}\|\varphi\|_{V_i} \left(A(\tau^n_{j+1})-A(\tau^n_j)\right)^{\frac{1}{p_i}} Q_i(\tau^n_{j+1})\\ & \leq \max_{j=1,\ldots,N(n)}\sum_{i=1}^{m}\|\varphi\|_{V_i} \left|r^n_{j+1} - r^n_j\right|^{\frac{1}{p_i}} \to 0 \,\,\, \textrm{as}\,\,\, n \to \infty, \end{split} \end{equation*} with $r^n_j$ given by~\eqref{eq r}. Consequently, taking also into account \eqref{eq:lims1:in_proposition} of Proposition \ref{propn:1} we have $\Omega'''\subset\Omega''$ and a subsequence $n'$ such that the first three limits are zero for $\omega\in\Omega'''$. Taking the limit along the subsequence $n'$ in \eqref{eq:towards_ito_5} we see that $K_{n'}(t)$ converges for $\omega\in\Omega'''$ and $t\in I(\omega)$ to some $K(t)$, and \begin{equation*} \begin{split} |\tilde v(t)|^2 = & 2\sum_{i=1}^m \int_{(0,t]} \langle v_i^{\ast}(s), v(s) \rangle\, dA(s) \\ & + 2 \int_{(0,t]} (\tilde v(s)-h(s), dh(s)) + |h(t)|^2-K(t). \\ \end{split} \end{equation*} From this point onwards we will always consider only the subsequence $n'$ but we will keep writing $n$ to ease notation. Our task is now to identify $K(t)$. We note that, using Parseval's identity, \begin{equation*} \begin{split} K_n(t) & = \sum_{0\leq \tau^n_{j+1} \leq t} \left|\sum_i\int_{(\tau^n_j,\tau^n_{j+1}]} v_i^{\ast}(s)\,dA(s) \right|^2 \\ & = \sum_{0\leq \tau^n_{j+1} \leq t} \sum_{k\in \mathbb{N}} \left(\sum_i\int_{(\tau^n_j,\tau^n_{j+1}]} v_i^{\ast}(s)\,dA(s), e_k \right)^2\\ & = \sum_{0\leq \tau^n_{j+1} \leq t} \sum_{k\in \mathbb{N}} \left\langle\sum_i\int_{(\tau^n_j,\tau^n_{j+1}]} v_i^{\ast}(s)\,dA(s), e_k \right\rangle^2\\ & = \sum_{0\leq \tau^n_{j+1} \leq t} \sum_{k\in \mathbb{N}} \left|\int_{(\tau^n_j,\tau^n_{j+1}]}\sum_i\langle v_i^{\ast}(s), e_k\rangle\,dA(s) \right|^2 . \end{split} \end{equation*} Hence, using Lemma \ref{lemma:1}, Parseval's identity and~\eqref{eq:Delta_w}, we get \begin{align} K(t) & = \lim_{n\to \infty} K_n(t) \geq \sum_{k\in \mathbb{N}} \varliminf_{n\to \infty} \sum_{0\leq \tau^n_{j+1} \leq t} \left|\int_{(\tau^n_j,\tau^n_{j+1}]} \sum_i\langle v_i^{\ast}(s), e_k \rangle\,dA(s) \right| ^2\nonumber\\ & = \sum_{k\in \mathbb{N}} \sum_{s\leq t} \left| \sum_i\langle v_i^{\ast}(s), e_k \rangle \Delta A(s) \right|^2 = \sum_{k\in \mathbb{N}} \sum_{s\leq t} \left| ( \Delta^w (\tilde v-h)(s), e_k ) \right|^2 \nonumber\\ & = \sum_{s\leq t} \left|\sum_iv_i^{\ast}(s)\right|^2 |\Delta A(s)|^2. 
\label{Knbelow} \end{align} To obtain an upper bound we first use the identity $$ |x+y|^2= |y|^2 +2(x,y+x)-|x|^2, \qquad x,y\in H, $$ to get \begin{equation} \label{Kn} \begin{split} K_n(t) & = \sum_{0\leq \tau^n_{j+1} \leq t} \left|\sum_i\int_{(\tau^n_j, \tau^n_{j+1})} v_i^{\ast}(s)\,dA(s) + \sum_iv_i^{\ast}(\tau^n_{j+1}) \Delta A(\tau^n_{j+1})\right|^2\\ & = \sum_{0\leq \tau^n_{j+1} \leq t} (J^{(1)}_j+J^{(2)}_j-J^{(3)}_j) \end{split} \end{equation} with \begin{equation*} \begin{split} J^{(1)}_j:=&\left|\sum_iv_i^{\ast}(\tau^n_{j+1})\right|^2 |\Delta A(\tau^n_{j+1})|^2, \\ J^{(2)}_j:=&2 \left(\sum_i \int_{(\tau^n_j, \tau^n_{j+1})} v_i^{\ast}(s)\,dA(s), \tilde v(\tau^n_{j+1})-\tilde v(\tau^n_j) - (h(\tau^n_{j+1})-h(\tau^n_j)) \right), \\ J^{(3)}_j:=&\left|\sum_i\int_{(\tau^n_j, \tau^n_{j+1})} v_i^{\ast}(s)\,dA(s)\right|^2. \end{split} \end{equation*} For $j\neq0$ we split $J^{(2)}_j=J^{(21)}_j-J^{(22)}_j$ with $$ J^{(21)}_j:=2\left(\sum_i\int_{(\tau^n_j, \tau^n_{j+1})} v_i^{\ast}(s)\,dA(s), \tilde v(\tau^n_{j+1})-\tilde v(\tau^n_j)\right), $$ $$ J^{(22)}_j:=2\left(\sum_i \int_{(\tau^n_j, \tau^n_{j+1})} v_i^{\ast}(s)\,dA(s), h(\tau^n_{j+1})-h(\tau^n_j) \right), $$ and notice that $$ J^{(21)}_j=2\sum_i\int_{(\tau^n_j, \tau^n_{j+1})}\langle v_i^{\ast}(s),v(\tau^n_{j+1})-v(\tau^n_j)\rangle \,dA(s) $$ $$ =2\sum_i\int_{[\tau^n_j, \tau^n_{j+1})}\langle v_i^{\ast}(s),v_n^{(2)}(s)-v_n^{(1)}(s)\rangle \,dA(s). $$ Using $\Pi_k$, the orthogonal projection of $H$ onto the space spanned by $(e_j)_{j=1}^k\subset V$, we have $$ J^{(22)}_j=J^{(22)}_{jk}+\bar J^{(22)}_{jk} $$ with $$ J^{(22)}_{jk}:=2\left(\sum_i \int_{(\tau^n_j, \tau^n_{j+1})} v_i^{\ast}(s)\,dA(s), \Pi_k(h(\tau^n_{j+1})-h(\tau^n_j)) \right), $$ $$ \bar J^{(22)}_{jk}:=2\left(\sum_i \int_{(\tau^n_j, \tau^n_{j+1})} v_i^{\ast}(s)\,dA(s), (I-\Pi_k)(h(\tau^n_{j+1})-h(\tau^n_j)) \right). $$ Notice that $$ J^{(22)}_{jk}=2\sum_i \int_{(\tau^n_j, \tau^n_{j+1})} \langle v_i^{\ast}(s),\Pi_k(h(\tau^n_{j+1})-h(\tau^n_j)) \rangle\,dA(s), $$ $$ =2\sum_i \int_{[\tau^n_j, \tau^n_{j+1})} \langle v_i^{\ast}(s),w^{(2)}_{kn}(s)-w^{(1)}_{kn}(s)\rangle\,dA(s), $$ and $$ \bar J^{(22)}_{jk}\leq J_j^{(3)}+\left|(I-\Pi_k)(h(\tau^n_{j+1})-h(\tau^n_j)) \right|^2. $$ Similarly, taking into account $\tilde v(0)=h(0)$, for $J^{(2)}_0$ we have $$ J^{(2)}_0=2\left(\sum_i \int_{(0, \tau^n_{1})} v_i^{\ast}(s)\,dA(s), \tilde v(\tau^n_{1})-h(\tau^n_{1}) \right) =J^{(21)}_0-J^{(22)}_0, $$ where $$ J^{(21)}_0:=2\sum_i\int_{(0, \tau^n_{1})}\langle v_i^{\ast}(s), \tilde v(\tau^n_{1})\rangle\,dA(s) $$ $$ =2\sum_i\int_{(0, \tau^n_{1})}\langle v_i^{\ast}(s), v^{(2)}_n(s)-v^{(1)}_n(s)\rangle\,dA(s), $$ and $$ J^{(22)}_0:=2\left(\sum_i \int_{(0, \tau^n_{1})} v_i^{\ast}(s)\,dA(s), h(\tau^n_{1})\right)=J^{(22)}_{0k}+\bar J^{(22)}_{0k} $$ with $$ J^{(22)}_{0k}:=2\sum_i \int_{(0, \tau^n_{1})}\langle v_i^{\ast}(s),w^{(2)}_{kn}(s)-w^{(1)}_{kn}(s)\rangle\,dA(s), $$ $$ \bar J^{(22)}_{0k}:=2\left(\sum_i \int_{(0, \tau^n_{1})} v_i^{\ast}(s)\,dA(s), (I-\Pi_k)h(\tau^n_1) \right) $$ $$ \leq J_0^{(3)}+\left|(I-\Pi_k)h(\tau^n_{1}) \right|^2.
$$ Thus from \eqref{Kn} we get \begin{equation*} \begin{split} K_n(t) \leq &\sum_{0\leq \tau^n_{j+1} \leq t} |\sum_iv_i^{\ast}(\tau^n_{j+1})|^2 |\Delta A(\tau^n_{j+1})|^2 \\ &+2\sum_i\int_{(0, t)} \langle v_i^{\ast}(s), v^{(2)}_n(s)-v^{(1)}_n(s) \rangle\, dA(s) \\ &-2 \sum_i\int_{(0,t)} \langle v_i^{\ast}(s), w^{(2)}_{nk}(s) - w^{(1)}_{nk}(s)\rangle\,dA(s) +\xi_{nk}(t) \end{split} \end{equation*} with $$ \xi_{nk}(t):=\sum_{j=1}^{N(n)} \left|(I-\Pi_k)(h(\tau^n_{j+1}\wedge t) - h(\tau^n_j\wedge t)) \right|^2 + \left|(I-\Pi_k)h(\tau^n_1\wedge t)\right|^2 $$ for every $n,k\in \mathbb{N}$. As $n\to \infty$ we see that \begin{equation*} \sum_{0\leq \tau^n_{j+1} \leq t} |\sum_iv_i^{\ast}(\tau^n_{j+1})|^2 |\Delta A(\tau^n_{j+1})|^2 \to \sum_{0<s \leq t} |v^{\ast}(s)|^2|\Delta A(s)|^2, \end{equation*} where we use the notation $v^{\ast}(s)=\sum_iv^{\ast}_i(s)$. By H\"older's inequality, taking into account $r(t)\leq1$, we have $$ \varlimsup_{n\to\infty}\int_{(0, t)} |\langle v_i^{\ast}(s), v^{(2)}_n(s)-v^{(1)}_n(s) \rangle|\, dA(s) $$ $$ \leq \varlimsup_{n\to\infty} \left(\int_{(0, t)} \|v^{(2)}_n(s)-v^{(1)}_n(s)\|_{V_i}^{p_i}\, dA(s)\right)^{\tfrac{1}{p_i}}=0 $$ and similarly, \begin{equation*} \varlimsup_{n\to\infty}\int_{(0,t)} |\langle v_i^{\ast}(s), w^{(2)}_{nk}(s) - w^{(1)}_{nk}(s)\rangle|\,dA(s)=0 \end{equation*} for all integers $k\geq1$ and $i=1,2,\ldots,m$. Thus \begin{equation} \label{K} K(t)=\varliminf_{n\to\infty}K_n(t) \leq \sum_{s\leq t} |{v}^{\ast}(s)|^2|\Delta A(s)|^2 + \xi_{k}(t) \end{equation} for every $k\in N$, where \begin{equation*} \xi_k(t): = \varliminf_{n\to \infty} \left(\sum_{j=1}^{N(n)} \left|(I-\Pi_k)(h(\tau^n_{j+1}\wedge t) - h(\tau^n_j\wedge t)) \right|^2 + \left|(I-\Pi_k)h(\tau^n_1\wedge t)\right|^2 \right). \end{equation*} Note that by Fatou's lemma and the martingale property of $h$ \begin{equation*} \begin{split} & \mathbb{E} \xi_k(t) \leq \mathbb{E}|(I-\Pi_k)h(1)|^2 \to 0 \,\,\, \textrm{as}\,\,\, k\to \infty. \end{split} \end{equation*} Note also that for each $\omega \in \Omega$ and $t\in (0,\infty)$ we have $\xi_k \geq \xi_{k+1}$ and $\xi_k \geq 0$. Thus there exists a set $\Omega''''\subset\Omega$ with $P(\Omega'''')=1$ such that for every $t\in [0,\infty)$ and $\omega\in\Omega''''$ we have $\xi_k(t) \to 0$. Letting here $k\to\infty$ in \eqref{K} we obtain \begin{equation*} K(t)\leq \int_{(0,t]}|{v}^{\ast}(s)|^2 \Delta A(s)\,dA(s), \end{equation*} which together with \eqref{Knbelow} gives \begin{equation*} K(t)= \int_{(0,t]}|v^{\ast}(s)|^2 \Delta A(s)\,dA(s) \end{equation*} for $\omega \in \tilde{\Omega}:=\Omega'''\cap\Omega''''$ and $t\in I(\omega)$. \end{proof} \begin{proposition} For $\omega\in\tilde\Omega$ \begin{equation} \label{formula} \begin{split} & |\tilde v(t)|^2 = |h(0)|^2 + 2\sum_{i=1}^m \int_{(0,t]} \langle v_i^{\ast}(s), v(s) \rangle\, dA(s) \\ & + 2 \int_{(0,t]} (\tilde v(s-), dh(s)) - \int_{(0,t]} \left|\sum_{i=1}^m v_i^{\ast}(s)\right|^2 \Delta A(s)\,dA(s)+[h]_t \end{split} \end{equation} for $t\in[0,\tau(\omega))$. \end{proposition} \begin{proof} Let $\omega \in \tilde{\Omega}$ be fixed and let $t\in[0,\tau(\omega))$. To ease notation we use $n\to\infty$ in place of the subsequence $n'\to\infty$ defined in the previous proposition. 
If $t\in I(\omega)$, then by virtue of the previous proposition taking $n\to \infty$ in~\eqref{eq:towards_ito_5} we obtain \begin{align*} |\tilde v(t)|^2 = &|h(t)|^2 + 2\sum_{i=1}^m \int_{(0,t]} \langle v_i^{\ast}(s), v(s) \rangle\, dA(s) + 2 \int_{(0,t]} (\tilde v(s-)-h(-), dh(s)) \\ & - \int_{(0,t]} \left|\sum_{i=1}^m v_i^{\ast}(s)\right|^2 \Delta A(s)\,dA(s). \end{align*} Hence using the It\^o formula for Hilbert space valued processes \begin{equation} \label{eq:hilbert_ito_fla} |h(t)|^2 = |h(0)|^2 + 2 \int_{(0,t]} (h(s-),dh(s)) + [h]_t, \end{equation} we get \eqref{formula} for $t\in I(\omega)$. If $t\in \bar{I}^l(\omega)\setminus I(\omega)$, then for sufficiently large $n$ there is $j=j(n)$ such that $t_n:=\tau^{n}_j(\omega)\in I(\omega)$ and $t_n\nearrow t$ for $n\to \infty$. Using the algebraic relationship \begin{equation} \label{algebra} |\tilde v(s)-\tilde v(r)|^2 =|\tilde v(s)|^2-|\tilde v(r)|^2-2\left(\tilde v(r),\tilde v(s)-\tilde v(r)\right), \end{equation} with $s:=t_n$, $r:=t_l$, and since~\eqref{formula} holds for every $t\in I(\omega)$, we get \begin{equation*} \begin{split} & |\tilde v(t_n) - \tilde v(t_l)|^2 = 2\sum_{i=1}^m \int_{(t_l,t_n]} \langle v_i^{\ast}(s),v(s) \rangle\, dA(s) + 2 \int_{(t_l,t_n]} (\tilde v(s-), dh(s)) \\ & - \int_{(t_l,t_n]} \left| \sum_{i=1}^m v_i^{\ast}(s) \right|^2 \Delta A(s)\, dA(s) + [h]_{t_n} - [h]_{t_l} - 2(\tilde v(t_l),\tilde v(t_n)-\tilde v(t_l)) \end{split} \end{equation*} for $n>l$. Moreover \begin{equation*} \begin{split} & 2(\tilde v(t_l),\tilde v(t_n)-\tilde v(t_l)) \\ & = 2\sum_{i=1}^m \int_{(t_l,t_n]} \langle v_i^{\ast}(s),v(t_l) \rangle dA(s) + 2(\tilde v(t_l),h(t_n)-h(t_l)). \end{split} \end{equation*} Hence by~\eqref{eq:hilbert_ito_fla} \begin{equation*} \begin{split} |\tilde v(t_n) - \tilde v(t_l)|^2 = & 2\sum_{i=1}^m \int_{(t_l,t_n]} \langle v_i^{\ast}(s),v(s) - v(t_l)\rangle dA(s)\\ & + 2 \int_{(t_l,t_n]} (\tilde v(s-)-h(s-)-(\tilde v(t_l)-h(t_l)), dh(s)) \\ & - \int_{(t_l,t_n]} \left|\sum_{i=1}^m v_i^{\ast}(s)\right|^2 \Delta A(s) dA(s) + |h(t_n) - h(t_l)|^2\\ & =: 2I^1_{ln} + 2I^2_{ln} - I^3_{ln} +I^4_{ln}. \end{split} \end{equation*} Since $h$ is cadlag we have \begin{equation*} \lim_{l\to \infty} \sup_{n>l} I^4_{ln} = \lim_{l\to \infty} \sup_{n>l} |h(t_n) - h(t_l)|^2 = 0. \end{equation*} By the previous proposition we get \begin{equation*} \begin{split} & \lim_{l\to \infty} \sup_{n>l} |I^2_{ln}| = \lim_{l\to \infty} \sup_{n>l} \left|\int_{(t_l,t_n]} (\tilde v(s-)-h(s-)-(\tilde v_l(s)-h_l(s)), dh(s))\right|\\ & \leq 2\lim_{l\to \infty} \sup_{t\in (0,\infty)} \left|\int_{(0,t]} (\tilde v(s-)-h(s-)-(\tilde v_l(s)-h_l(s)), dh(s))\right| = 0, \end{split} \end{equation*} and \begin{equation*} \begin{split} & \lim_{l\to \infty} \sup_{n>l} |I^1_{ln}| \leq \lim_{l\to \infty} \sum_{i=1}^m \int_{(0,\infty)} \|v_i^{\ast}(s)\|_{V_i^*}\|v(s)-v^{(1)}_l(s)\|_{V_i}\,dA(s) = 0, \end{split} \end{equation*} via $r(t)\leq1$ and H\"older's inequality. Thus \begin{equation*} \lim_{l\to \infty} \sup_{n>l} |\tilde v(t_n)-\tilde v(t_l)|^2 = 0, \end{equation*} and so the sequence $(\tilde v(t_n))_{n\in \mathbb{N}}$ converges strongly to some $\xi$ in $H$. Moreover since $\tilde v$ is weakly cadlag and $t_n \nearrow t$, we conclude that $\xi=\tilde v(t-)$. 
Hence using \eqref{formula} with $t_n$ in place of $t$, and letting $n\to\infty$ we obtain \begin{equation*} \begin{split} & |\tilde v(t-)|^2 = |h(0)|^2 + 2\sum_{i=1}^m \int_{(0,t)} \langle v_i^{\ast}(s), v(s) \rangle\, dA(s) \\ & + 2 \int_{(0,t)} (\tilde v(s-), dh(s)) - \int_{(0,t)} \left|\sum_{i=1}^m v_i^{\ast}(s)\right|^2 \Delta A(s)\,dA(s)+[h]_{t-} \end{split} \end{equation*} for $t\in \bar I^{l}(\omega)\setminus I(\omega)$, and so for this $t$ we get also \eqref{formula} by taking into account that $\Delta A(t)=0$. If $t \in (0,\tau(\omega)) \setminus \bar{I}^l(\omega)$, then there is $t'\in \{0\} \cup \bar{I}^l(\omega)$ such that $t'<t$ and $(t',t] \cap I(\omega) = \emptyset$. Thus $dA(s) = 0$ for $s\in (t',t]$, and so $\tilde v(s)-\tilde v(t') = h(s)-h(t')$. Hence applying~\eqref{formula} with $t:=t'$, and the formula $$ |\tilde v(t)|^2-|\tilde v(t')|^2=2(\tilde v(t'),\tilde v(t)-\tilde v(t'))+|\tilde v(t)-\tilde v(t')|^2 $$ together with the It\^o formula for Hilbert space valued martingales, \begin{equation*} |h(t) - h(t')|^2 = 2\int_{(t',t]} (h(s-)-h(t'),dh(s)) + [h]_t - [h]_{t'}, \end{equation*} we obtain \eqref{formula} for the $t$ under consideration. \end{proof} Now we can finish the proof of Theorem~\ref{thm:1} by noting that by the above proposition $|\tilde v(t)|^2$ is a cadlag process, and since by Proposition~\ref{propn:2} the process $\tilde v$ is $H$-valued and weakly cadlag, it follows by identity \eqref{algebra} that $\tilde v$ is an $H$-valued cadlag process. \paragraph{\bf Acknowledgements} The authors are sincerely grateful to the anonymous referees. Their corrections and valuable suggestions helped improve the presentation of the paper. \paragraph{\bf Open Access} This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (\url{http://creativecommons.org/licenses/by/4.0/}), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
{ "attr-fineweb-edu": 1.772461, "attr-cc_en_topic": 12, "domain": "arxiv" }
\section{Introduction} In many applications, the utility of 3D network layouts and visualization is limited by the fact that they must be projected onto a 2D plane. In contrast, Virtual Reality (VR) provides a natural setting for 3D network visualizations by allowing users to interact with complex network structures in a manner similar to how they might interact with real-life structures. Additionally, the current generation of high fidelity gaming-focused VR headsets such as the Oculus Rift, and HTC Vive are tethered and not designed for full-room interaction or real time collaboration among users in the same room. We believe full room usability (sensitivity to large translations) and real time collaboration will be essential components of future VR visualization applications. Correspondingly, the contributions of this paper are twofold: \begin{enumerate} \item We provide a setting designed to render 3D network layouts at scale, with a focus on the aesthetic and UX advantages gained via \textit{full room} VR \item We offer the means for multiple users to \textit{collaboratively} inspect complex network structures, interacting with them manually via high fidelity motion capture systems \end{enumerate} To test these methods, we work with networks derived from twitter friend/follow relationships, along with metadata about users (nodes) on the graph. We use Holojam \cite{Holojam}, a system that allows for low-latency data streaming to multiple clients in Samsung GearVR headsets in order to create a nomadic VR application for viewing the network graphs. This experience is further enhanced by the use of Perception Neuron gloves to do high-accuracy finger tracking, which allows us to interact with the virtual environment through gesture controls. Modern social networks, such as Twitter, contain complex and noteworthy structures at multiple scales. For example, a network of interest may include a set of twenty fake twitter accounts managed by one actor, embedded within a larger network of ten thousand users with a global structure influenced by political affiliations. In order to better visualize and appreciate the complexities of such a network, we have developed our system to fully immerse the user at any scale and allow them to use their hands to manipulate network structures. Our data visualization system is unique in that it is within an environment that is nomadic, collaborative, and manipulable with a user's hands and fingers. As a fundamental principle of this work, we believe that one is more likely to study and interact with an object (i.e. an intricate rendering of a network) if those around them can corroborate its existence and interact with it as well. When the users enter the virtual space, they see a representation of the network graph (Figure 1) along with other users represented as a mask and hand. From there, the user can move and scale the graph in order to appropriately view various graph clusters and highlight different nodes with their finger. All users within the system can see these manipulations. Users can take advantage of these features to analyze and discuss the complex graph structures before them in ways that were previously not possible. 
\begin{figure*}[!ht] \centering \captionsetup{justification=centering} \begin{minipage}{.33\textwidth} \centering \includegraphics[width=0.6 \linewidth, angle=90]{vr_neuron} \caption{Perception Neuron device with additional reflective markers for optical tracking} \label{fig:test1} \end{minipage}% \begin{minipage}{.33\textwidth} \centering \includegraphics[width=0.9 \linewidth, angle=270]{IMG_0680} \caption{One of the 12 wall-mounted OptiTrack sensors} \label{fig:test2} \end{minipage}% \begin{minipage}{.33\textwidth} \centering \includegraphics[width=0.7 \linewidth]{vr_gear3} \caption{GearVR headset with \\reflective markers} \label{fig:test3} \end{minipage} \end{figure*} \section{Related Work} There have been many decades of research and development into VR technology, both for hardware and software. VR systems have existed as early as the 1960s with Ivan Sutherland's ``Sword of Damocles'' \cite{sutherland1968head} and have greatly expanded since then. Interactive, multiperson systems, such as the CAVE \cite{Cruz-Neira:1992:CAV:129888.129892}, have broadened the applications for VR to be more collaborative and physical, like what our system aims to do, but have the downside of restricting the rendering to the viewpoint of one user. Since then, personal head-mounted displays (HMDs) have become lightweight, commodity products, such as the Oculus Rift or the HTC/Valve Vive system. These allow a user to get a high-quality, accurate depiction of their viewport in a virtual world. However, these systems fall short for our purposes, as they are tethered to a typically large computer. Instead, we aim to use the Holojam architecture, discussed in a later section, which allows multiple users to walk around in the space without having to worry about tangled cables. Thus, while other current systems have allowed for data visualization in a 3D virtual environment, ours is the first that allows for collaborative, HMD-based data visualization. The Gephi and Cytoscape desktop applications \cite{gephi,cytoscape} are some of the most widely used software for visualizing network topologies. These tools provide extensive functionality in terms of layout algorithms, clustering methods, styling and more. Our work can be viewed as a VR front-end to analytics tools like these. In a previous version, the system accepted \texttt{.gexf} files exported from Gephi as input, but using Gephi as the core analysis engine proved to be too constraining in terms of metadata annotation and analysis automation. In our current system, the input is a pickled python-igraph \texttt{Graph} object and the analytics and layout are delegated to an auxiliary server utilizing the igraph library (see figure 5). Work by Donalek et al. \cite{DBLP:journals/corr/DonalekDDCWLNZLYMGD14} uses Unity3D and Oculus VR headsets to visualize astronomical datasets, but has key differences in terms of capabilities and constraints. The rendering in \cite{DBLP:journals/corr/DonalekDDCWLNZLYMGD14} occurs on a personal computer as opposed to a mobile device, and is therefore less resource bound than in our context. Furthermore, the underlying system that we build upon is designed \textit{expressly} for collaborative full-room VR, allowing users to interact with a social network in much the same way they would a real object. With \textit{Vister}, Heer et al.~\cite{heer2005vizster} provided a precedent for social network visualization using force layouts. We provide much of the same basic functionality in VR, but at a much larger scale.
Although not the focus of this paper, a Three.js/webGL frontend that in many respects resembles a 3D version of Vister is additionally provided by the analysis server (discussed below). One contrast between our work and Vister is that the latter focuses on active layouts, while our layouts are computed ahead of time, due to the larger network size. The work of Munzner \cite{munzner} offers a comprehensive exposition of large network visualization, covering many types of layout techniques. \section{System Architecture} The architecture described below is largely based on the architecture of the NYU Holojam system, an untethered, multi-user VR system presented at the SIGGRAPH VR Village 2015 \cite{Holojam}. While this system is not the contribution of the paper, we present a brief outline of the hardware and network protocol specifications, as they are important to discuss the primary features of the paper. We have adopted these specifications from Holojam, which we briefly describe below. On top of this system, we have added additional features, such as integration with the Noitom Perception Neuron \cite{https://neuronmocap.com/} glove for hand pose based interactions with the virtual environment and a server for analyzing and distributing graph data from external Internet sources, such as Twitter, for data visualization. \subsection{Hardware} The system uses the Samsung GearVR \cite{gearvr}, a lightweight headset that contains a 1000Hz refresh rate inertial motion unit (IMU) to report the change in user orientation to the headset, with rendering powered by a Samsung phone. We chose to use the developer version of the GearVR along with the Samsung Galaxy Note 4, as it offered the largest screen size at the time of our experiments. The GearVR offers smooth head orientation tracking, but it lacks certain capabilities required for an untethered experience. Primarily, because the GearVR is not inherently built for a nomadic experience, it lacks ground truth in both positional and rotational tracking. Furthermore, it does not even include any form of positional tracking. This means that without an external positional tracking system, the user will not know where they are in space. Additionally, without the ground truth, the user may be facing a different direction in the virtual space than the physical space. If two or more users are in the space, they could potentially be viewing each other in the wrong location without a ground truth. These issues can be resolved by introducing some form of ground truth tracking into the system. Many such tracking solutions exist today. Holojam works using optical motion capture technology, which can provide the highest quality data and has the advantage of being relatively portable and easy to set up the system. OptiTrack Motive, a well-known motion capture system, allows for 6 degree of freedom (6DOF) tracking at up to 240 frames per second with under 10ms of latency \cite{optitrack}. This information is broadcast using the wireless protocol to each of our GearVR clients. Using this motion capture software for tracking has its advantages and disadvantages. The advantages of using OptiTrack is that it is fairly reliable and very fast, which is extremely important for delivering timely data to the headsets. The biggest downside to motion capture technology is its cost. High quality motion capture cameras are widely used in academia and industry but are well outside of the consumer price range. 
On top of this, motion capture is limited by visibility and physical interference. If a marker is obscured, the user loses all positional data until the marker set is visible again. Loss of frames can often cause motion sickness for the user, even if the user loses tracking for as little as half a second. Ultimately, the benefit of having quick, accurate data outweighs the disadvantage of the costs for research purposes, and the visibility issues are easy to avoid with careful camera and marker configurations. In addition to the optical motion tracking discussed above, we also use sensor-based tracking through the Noitom Perception Neuron system in order to track precise finger motions. These motions allow us to manipulate and interact with graphs with intuitive and natural gestures. This will be discussed further in a later section. The Perception Neuron can transmit data over WiFi, and thus matches our system requirement of being untethered from a central computer. It does, however, require an external power source, but this can be a relatively light battery pack, and so it does not overburden the user. Since the Perception Neuron system is based entirely on IMUs, it is prone to drift and it does not have a ground truth location. As with the GearVR, we resolve this issue by adding an optical tracker to the gloves, allowing our OptiTrack system to position the hand. From there, the Perception Neuron system positions and orients the fingers relative to the location reported by the OptiTrack. \begin{figure*}[!ht] \centering \includegraphics[width=15cm]{holojam-neuron-network2} \caption{A model of the communications within our system. Upon application launch, data is collected from the Network Analysis Server, which has network graph layouts of several datasets from sources such as Twitter. Once the program is running, there is a constant motion-to-photon loop. This loop starts with the user movement, which is tracked through the OptiTrack and Perception Neuron devices. The Neuron data is sent via WiFi to a receiving program, where it is then processed by the Noitom Axis software. The optical motion capture data is processed by the OptiTrack Motive software. Both forms of motion capture data are sent locally to a main server, the "BlackBox Server," where it is packaged into the Google Protobuf format, and sent over a WiFi UDP multicast stream to each of the phone clients, where it is combined with the GearVR sensor data and rendered on the Unity client.} \label{fig:network-layout} \end{figure*} \subsection{Low-Latency Network Protocol} Figure \ref{fig:network-layout} also briefly summarizes the combined schema of the low-latency network protocol and the Network Analysis Server discussed in the next section. In order to service tracking data to each of the mobile phone clients in a timely fashion, Holojam uses a lightweight protocol that emphasizes rapid delivery over guaranteed delivery. While this has its fallbacks, which will be discussed later, it allows us to transmit the majority of our data with low enough latency to be imperceptible to the user. In order to achieve the above, it uses a modified UDP multicast protocol in order to transmit the tracking data from the central server to the phone. Note that this protocol is not used to transmit the Twitter data to be visualized, as that data does not need to arrive at the same rate as the motion capture data. That transmission will be discussed in a later section. 
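To make the streaming model just described more concrete, the following is a minimal, illustrative sketch (in Python, and not the actual Holojam code) of a fire-and-forget UDP multicast sender for tracking updates. The multicast group, port, packet layout and update rate used here are hypothetical placeholders, and the binary packing is only a stand-in for the serialization format actually used by the system, which is described next.
\begin{verbatim}
import socket
import struct
import time

MCAST_GROUP = "239.255.42.99"   # hypothetical multicast group
MCAST_PORT = 9999               # hypothetical port
MAX_PAYLOAD = 1400              # stay below a typical WiFi MTU to avoid IP fragmentation

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
# Keep multicast traffic on the local network segment.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)

def pack_pose(frame_id, x, y, z, qx, qy, qz, qw):
    """Placeholder binary encoding of one tracked pose."""
    return struct.pack("<Ifffffff", frame_id, x, y, z, qx, qy, qz, qw)

frame_id = 0
while True:
    payload = pack_pose(frame_id, 0.0, 1.6, 0.0, 0.0, 0.0, 0.0, 1.0)
    assert len(payload) <= MAX_PAYLOAD
    # Fire-and-forget: no acknowledgements, so a lost packet is simply
    # superseded by the next tracking update a few milliseconds later.
    sock.sendto(payload, (MCAST_GROUP, MCAST_PORT))
    frame_id += 1
    time.sleep(1.0 / 120.0)     # stream at roughly 120 Hz
\end{verbatim}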
The Holojam protocol uses Google Protobuf as a data format, as it is much more compact than many other common data protocols, such as XML \cite{protocolbuffers}. As we mentioned above, since we care more about low-latency delivery than about guaranteed reliability, it is better to use UDP over TCP and multicast over unicast, as both of those methods avoid acknowledgements that can slow down transmission rates. Furthermore, Holojam uses a router with a modified firmware in order to avoid limitations that are set in place for general good practice, but would impede our software, such as multicast rate limiting. One other issue the server software avoids is IP fragmentation. Typically, WiFi packets will be fragmented into several chunks according to the specifications in the router, and are reassembled on the receiving end. If a number of chunks are missing, then the entire packet will be discarded. Since this can lead to undesirable frame drops, the server instead creates packets that are smaller than the router's maximum transmission size. If the data we wish to send exceeds this size, such as if many more users are present in the area and need to be tracked, those packets are manually broken up into small enough packets, and each one is sent individually. Finally, we have data packets coming from several different sources. In our case, these different types were the OptiTrack Motive software and the Noitom Axis Neuron software. We needed to ensure that each packet type would be sent fairly. That is, even if one packet type was being received by the server to forward to the phone clients in large volume, the other packet type must be sent through as well. Failing to do so resulted in loss of real-time data, which produced unacceptable latencies. To correct this, we modified the architecture by implementing a simple procedure to rotate through packet types sent, prioritizing packet types that had been sent the least recently. In doing this, we ensured that all packets could be transmitted at even rates. \subsection{Network Analysis Server} \subsubsection{System Design} The network analysis server exists on a different machine from the above system, connected to the optimized lossy WiFi LAN explained above as well as a wired internet connection. New layout requests, based on a pickled igraph object (pickle is a Python serialization protocol), are fielded via the browser-based web frontend of the analysis server. The server then forks a layout process which will operate independently until completion, at which point the result is uploaded to a remote storage service. The state of the layout and upload process is catalogued in a local Mongo instance. Once a layout process is finished, a JSON version of the network data is saved in Amazon S3 storage, annotated with vertex position data along with any other network-analysis metadata produced by the analysis server during processing. Since the analysis server is connected to the same LAN, the headset device(s) running Unity can then make requests directly to the local address of the analysis server to gain access to the completed JSON chunks. Upon downloading, the headset device must parse the JSON string (usually between 3 and 10 MB) and then configure the appropriate Unity objects. We found that on the Galaxy Note headset devices the string parsing often took much longer than the download itself. For rendering the network objects, we chose to use the \texttt{Mesh()} object attached to a \texttt{GameObject} in Unity.
Using individual GameObjects for each vertex would be convenient because of all the built-in functionality they provide, but this approach is untenable for larger network topologies with thousands or tens of thousands of nodes. \subsubsection{Layout and analysis} In order to compute layouts (prior to visualization), we use igraph's 3D Fruchterman-Reingold layout function \texttt{graph.layout\_fruchterman\_reingold\_3d()} using the default cooling exponent of $1.5$ and a maximum of $2000$ iterations. Fruchterman-Reingold takes $O(\left\vert N \right\vert ^2 + \left\vert E \right\vert)$ time per annealing step, and large network structures often require many iterations to produce a satisfactory layout. For this reason we opted to fork a separate long-running layout subprocess for each layout task. The coloration of the nodes corresponds to clusters determined by the modularity maximization algorithm described in \cite{modularity} and is intended to aid in identifying the different subgroups within the graph. The colors determined by modularity maximization almost always coincide with the agglomerations which are visible within the layout. This research was in part intended to test the rendering limits on the GearVR, and once those limits have been reached the experience can become highly unfavorable for the user; low framerates have even been known to induce nausea in some VR users. One strategy to deal with the rendering problem for large graphs is down-sampling the network structure. The network analysis server provides the following options for network sampling: \begin{description} \item[Random Node (RN): ]{Each vertex is included with probability $p$. This scheme is the simplest and yields networks that maintain the degree distribution of the original network relatively well.} \item[Random Edge (RE): ]{Each edge is included with probability $p$ and only the nodes they connect are included in the down-sampled version (no singletons). While this scheme directly addresses the rendering issues caused by too many edges, it drastically changes the degree distribution \cite{largegraphs}. We also found that in our experience, RE sampling yields a post-layout spatial structure that is visually very different from that of the network it is derived from.} \item[Random Walk (RW): ]{Begin a random walk, selecting the next node from the current set of neighbors uniformly. In order to prevent the walk from getting `stuck' in only one area of the network, with probability $p$ we transfer to a new random node. We continue the process until a certain proportion of nodes has been visited. When $p = 1$ this method is nearly identical to RN sampling. If we select the correct $p$, this method can yield down-sampled graphs with very similar degree distributions. One issue is that $p$ may depend on particulars of the network topology at hand.} \end{description} More graph sampling techniques with respect to the goal of preserving degree distribution are discussed at length in \cite{largegraphs}. \section{VR Interaction Design} In order to take full advantage of our collaborative VR data visualization system, we designed a few ways for users to interact with each other and with the network graphs. We use the Perception Neuron data to make a simple recreation of each user's hand, calculate the gesture from that recreation, and then use the gesture to control interactions. Currently, we have implemented gestures to allow the moving and scaling of network graphs.
\subsection{Hand Pose Recognition} The Perception Neuron data, as noted above, uses a WiFi interface to connect to a server computer that runs the Noitom software used to interpret the Neuron data. This reconstructs a skeleton of the user based on which Neurons are used. In our case, we only use the single-arm model, so we get data for each user's left arm, including finger movements. Since the program does a full skeletal reconstruction, we get data for several joints, including one for the upper arm, one for the lower arm, one for the hand, and four for each finger. However, we found that we can consistently infer poses with a subset of these data, so to minimize the amount of streaming data, we reduce the hand model to nine points per hand: two for each finger, and one for the hand itself. Once the data is forwarded through the central server and is received on the Unity phone client, we use the nine points to reconstruct the user's hand using a low-polygon mesh. Then, using the two points for each finger, we determine whether the finger is "open" or "closed" based on the angle between the fingertip and the knuckle. From here, we can get a 5-bit representation of the hand, one bit for each finger, to get 32 possible poses, although we recognize that only a subset of those poses will be comfortable for human use. Nonetheless, this simple yet effective pose recognition opens up many possibilities for controls in VR that avoid bulky and unnatural control schemes. \subsection{Network Interactions} For this particular project, we decided to focus on three interaction types that would allow for simple collaborative interaction with the network graphs. \begin{description} \item{\textbf{Grab}} First, we implemented a gesture that allows a user to grab and move a graph. This simply translates the entire graph in 3-dimensions. We found this to be primarily useful when a group of users would want to explore a portion of the graph that was outside of the bounds of the physical tracking space. \item{\textbf{Scale}} Second, we implemented a gesture that allows a user to re-scale the graph about the point at which the gesture begins. This provides two primary uses: a user can scale the graph down in order to see the entire structure, or a user can scale the graph up to explore dense clusters. \item{\textbf{Highlight}} Grabbing and scaling are \textit{network wide} interactions and therefore can be easily implemented as transformations applied to the network \texttt{GameObject} as a whole. To interact with individual nodes we use a kd-tree to look up which node is closest to the user's index finger. Kd-trees create efficient spatial subdivisions and allow collision testing in $O(\log(n))$ time. Upon initialization, we store relevant meatadata about each node in a hashtable and when a user selects one, the emanating edges are illuminated and text metadata is displayed near the node's position. In our current prototype we simply display the Twitter handle associated with the selected node, but this text could easily be augmented with the other metadata in the hashtable such as: location, description, and profile image. \end{description} From here, it would not be difficult to implement many more actions. For instance, individual nodes could be moved, selected, and analyzed. Graphs could be rotated, restructured around certain clusters, and much more. In other tests, we have implemented 3-dimensional drawing, which allows users to highlight and annotate certain portions of the graph. 
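As an illustration of the kd-tree lookup behind the Highlight interaction described above, the sketch below shows the kind of nearest-node query involved, written here in Python with SciPy purely for brevity (the actual lookup lives in the Unity client). The array names, the distance threshold and the handling of the graph transform are illustrative assumptions rather than the implemented interface.
\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

# node_positions: (N, 3) array of node coordinates in the graph's local frame;
# node_metadata: per-node records (e.g. Twitter handles) aligned with the rows.
node_positions = np.random.rand(10000, 3)                 # placeholder layout
node_metadata = [{"handle": "@user%d" % i} for i in range(10000)]

tree = cKDTree(node_positions)        # built once when the graph is loaded

def highlight(fingertip_world, graph_from_world, max_dist=0.05):
    """Return (index, metadata) of the node nearest to the index fingertip.

    fingertip_world:  (3,) tracked fingertip position in world space
    graph_from_world: 4x4 matrix mapping world space into the graph's local
                      frame, so grabbing/scaling the graph does not
                      invalidate the kd-tree
    """
    p = graph_from_world @ np.append(fingertip_world, 1.0)
    dist, idx = tree.query(p[:3])     # O(log N) nearest-neighbour lookup
    if dist <= max_dist:
        return idx, node_metadata[idx]
    return None

# Example call with a fingertip at the origin and an identity transform.
print(highlight(np.zeros(3), np.eye(4)))
\end{verbatim}
In a real client the returned index would then drive the edge illumination and the metadata overlay described above.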
However, it was necessary to follow certain design considerations imposed by our server architecture when designing gestures and the corresponding actions. Because all actions are handled on the client with no acknowledgements or data sent back to the main server, our actions were primarily state-based as opposed to event-based. In other words, we avoided having particular events, such as the opening or closing of a hand, trigger actions, and instead relied on the state of the hand model. This would sometimes cause issues if a user had experienced a large amount of packet loss, causing a desynchronization between graph positions or sizes. While these desynchronization problems were infrequent and typically minor, they could be remedied by having a simple master client that shares its state with all of the other clients, with the other clients following the master client's state. \section{Discussion} We were able to achieve our goal of data visualization in a collaborative VR environment. We found that the ability for users to see reconstructions of other users and their hands greatly enhanced collaborative analysis of complex graph structures. Clusters that would otherwise look overly cluttered due to high edge connections greatly benefited from being distributed across three dimensions. Our primary goal, which we satisfied, was to allow analysts to observe data together in an immersive environment, a task which was previously difficult or ineffective. \begin{figure}[h!] \centering \includegraphics[width=7cm]{fps_by_edge_count} \caption{A graph of the phone client's frame rate against a few graphs with different edge counts. Up to a certain point, the FPS remains relatively stable at around 60 FPS. However, after a certain point, it fluctuates by a significant amount and drops to an average of about 30 FPS.} \label{fig:my_label} \end{figure} However, there was still room for improvement. Due to the low rendering power of the Samsung Note 4 phones, we found that loading large graphs could be too cumbersome for the graphics processing unit. Graphs with large numbers of nodes and edges could produce significant graphical lag. We found that edges contributed more to this lag, as they required more pixels to render. Figure \ref{fig:my_label} shows sampled frames per second (FPS) counts for a few different sample graphs. We can see that larger graphs created a sudden drop-off in FPS. While we were able to avoid major graphical lag by loading graphs with fewer nodes and edges, we would ideally like to find ways to push that limit. Different shading models and other rendering techniques could assist here. Additionally, we aim to explore optimizations such as foveated rendering and re-sampling of the graph to maintain the graph representation in a meaningful way while lowering the number of visible nodes. In addition to the graphical optimizations listed above, we would also like to expand upon the interaction and gesture library we have created. While our work was sufficient to demonstrate the effectiveness of gesture controls in a virtual environment for data visualization, we believe that a broader toolset would be ideal for taking advantage of our system. Ultimately, we would like to see a system such as this contain all the tools necessary for data visualization and analysis, ranging from computation interfaces to graph manipulation tools. Finally, it is important to note that virtual reality is currently a constantly evolving technology.
As technology improves, we would like to adapt this work to adhere to the most lightweight system possible, as that was the goal when choosing the GearVR and Perception Neuron for this project. The OptiTrack system, while providing us with the possibility to do wireless tracking, would ideally be replaced with a cheaper tracking system that could be more financially available. \section{Conclusion} We have demonstrated a system that allows users to generate and collaboratively inspect large network layouts, using their hands in a way that is familiar and intuitive. We hope that new data visualization modalities like GraphiteVR will help make complex structures like social networks seem more familiar and intuitive as well. \acknowledgments{} \bibliographystyle{abbrv} \nocite{*}
{ "attr-fineweb-edu": 1.700195, "attr-cc_en_topic": 12, "domain": "arxiv" }
\section{Abstract} Using the {\it ab initio} computed Potential Energy Surface (PES) for the electronic interaction of the MgH$^+$($^1\Sigma$) ion with the He($^1$S) atom, we calculate the relevant state-changing rotationally inelastic collision cross sections from a quantum treatment of the multichannel scattering problem. We focus on the quantum dynamics at the low translational energies relevant to the earlier cold-ion-trap experiments on the present partners (see below), which we wish to model in detail. The corresponding state-changing rates computed between the lower rotational states of the molecular ion are employed to describe the time-evolution kinetics followed by recent experiments on Coulomb-crystallized MgH$^+$($^1\Sigma$), where the ions are rotationally cooled by micromotion tuning after the uploading into the trap of He as a buffer gas. The present computational modeling of the final ions' rotational temperatures turns out to agree very well with the experimental observations and points to a fast equilibration between the rotational and translational temperatures of the ions. \section{Introduction} The detailed control (for a general overview see \cite{1}) and the active manipulation of the internal, as well as the external, degrees of freedom of gas-phase molecules have been pursued and investigated by now for the best part of the last twenty years \cite{2,3,4}, and the information obtained has been of great value for furthering advances in several experimental fields. Hence, from the progress in methodology \cite{5}, to quantum information processing \cite{6}, to the quantum control of molecular reactions and transformations \cite{7} and to the collection of accurate data for chemical reactions in well-defined states \cite{8}, a great number of studies and computational models have been developed. They have involved a large variety of ensembles of cold molecules which could be further experimentally interrogated to follow their time evolution from well-defined initial conditions and to extract specific information on their state-to-state collision rate constants \cite{9}. Within these many fields of investigation, the research on the behaviour of cold molecules, whether neutral or ionic species, has developed fairly rapidly by employing a wide variety of techniques which will not be further discussed in this study, as they have already been presented various times in the current literature \cite{10,11,12,13,14,15,16}. The cooling techniques for molecular ions have also developed to the point that it has become realistically possible to work with ensembles of molecular ions that are sympathetically cooled into a Coulomb crystal through an efficient Coulomb interaction with laser-cooled atomic ions \cite{15}. The above techniques have shown that the translational-cooling schemes are indeed very versatile in bringing the molecular ions down to temperatures in the millikelvin range \cite{15}, although the further step of also ensuring that the extremely low-lying internal states are the most populated ones in the cold traps has to be designed and adapted to the specific molecules under study \cite{15}. In very recent analyses \cite{17,18}, a novel setting has been experimentally investigated whereby the usual helium buffer-gas technique for the cooling of internal molecular degrees of freedom has been employed for MgH$^+$($^1\Sigma$) ions.
The ions were previously trapped in a cryogenically cooled, linear, radio-frequency quadrupole trap and further translationally cooled, through a Coulomb-type interaction, with simultaneously trapped, laser-cooled atomic Mg$^+$ ions \cite{17}. It was found there that the interaction with the additional He buffer gas is chiefly employed for the cooling of the molecular ion's internal degrees of freedom, thereby requiring much lower gas densities (e.g. around 10$^{10}$ cm$^{-3}$) for the uploaded buffer atoms, which can therefore be four to five orders of magnitude lower than in a typical buffer-gas cooling setting \cite{18}. The vibrational degree of freedom of the MgH$^+$ partner is known to be already frozen out at room temperature, the molecular ion having a $>$ 99\% probability of being in its vibrational ground state. Hence, at the cryogenic temperatures of the Coulomb crystallization it can be entirely disregarded when modeling the present dynamics, so that the full rotational-state distributions of the cold molecules could be directly measured in the Coulomb trap \cite{17}. In the present work we shall analyze in detail this specific collisional cooling process, which involves the differently populated rotational states of MgH$^+$ when the He atoms of the buffer gas are uploaded into the trap after the formation of the Coulomb-crystallized ions. The following Section \ref{sec2} will provide specific information on the potential energy surface (PES) we have computed for the MgH$^+$/He system and will further outline the quantum dynamics of the rotationally inelastic collision processes. The next Section \ref{sec3} will analyze the relevant state-changing cross sections, which are then used in the ensuing Section \ref{sec4} to generate the state-changing rates at the temperatures of the traps. The master equations describing the system's time evolution as a function of various trap parameters/conditions will be presented and discussed in Section \ref{sec5}. The final Section \ref{sec6} will summarize our present conclusions. \section{Interaction forces and quantum dynamics} \label{sec2} Within the usual Born-Oppenheimer (B.O.) approximation that separates nuclear and electronic motions, the electronic interaction between the MgH$^+$($^1\Sigma$) molecular ion at its equilibrium geometry of 1.67 \AA\ and the He($^1S$) neutral atom is described by a 2D grid of points providing the single potential energy surface (PES) $V(R,\theta)$. In our earlier work on the same system \cite{19,20}, we computed the points by using the coupled-cluster method with single and double excitations and noniterative corrections for triple excitations, CCSD(T), coupled with a complete basis set (CBS) extrapolation and starting from the augmented correlation-consistent polarized valence (aug-cc-pVnZ) (with n=3,4,5) basis set series, as implemented in the software package GAUSSIAN08 \cite{21}. The employed Jacobi coordinates were the distance $R$ of the He atom from the center-of-mass (c.o.m.) of MgH$^+$ and the angle $\theta$ between $R$ and the bond, $r_{eq}$, of the partner molecular ion. The angular values were varied between 0$^\circ$ and 180$^\circ$ in intervals of 10$^\circ$. The radial coordinate ranged from 1.7 \AA\ to 16.0 \AA, generating a total of 1200 radial points for the full set of angles mentioned before.
In the current work we have taken advantage of the previously computed points of the 2D grid, but we have added several new points to better describe the short-range repulsive interaction over a broad range of angles. Thus, an additional set of 100 points was added to the previous 2D grid. The marked anisotropy of the present PES was already extensively discussed earlier \cite{19,20}, so we will not repeat the same analysis here. Suffice it to say that the most attractive well of the overall interaction is located along a linear structure with the He atom approaching the Mg$^+$ side of the molecular partner. The same PES becomes increasingly more repulsive as the He atom approaches the partner from the H-atom end of the molecular cation. Thus, the multipolar representation of the anisotropic interaction can be obtained by writing: \begin{equation} \label{expansion} V(R,\theta | r_{eq}) = \sum^{\lambda_{max}}_{\lambda=0} V_\lambda (R|r_{eq})P_\lambda(\cos\theta) \end{equation} where: \begin{equation} \label{lambda} V_\lambda(R | r_{eq}) = \frac{2\lambda+1}{2}\int^1_{-1} V(R,\theta|r_{eq})P_\lambda(\cos\theta)\, \mathrm{d}\cos\theta \end{equation} Hence, the range of action of each $V_\lambda$ coefficient gives an indication of the strength and range of the anisotropy present in the computed PES: each coefficient, in fact, will be directly involved in the dynamical coupling of rotational states of the cation during the collisional inelastic processes within the Coulomb trap, as we shall discuss below. We have reached good numerical convergence of expansion \ref{expansion} by extending the sum up to $\lambda_{max}=30$. As an example of the radial behaviour of the coefficients from eq. \ref{lambda}, we report in figure \ref{fig1} the first six coefficients for the present PES. \begin{figure} \includegraphics{fig1} \caption{Computed multipolar coefficients from eq. \ref{lambda} for the MgH$^+$/He system. Only the first six coefficients are reported in the figure.} \label{fig1} \end{figure} The inset of this figure shows an enlarged view of the higher multipolar terms, beyond $\lambda$=0 and 1, in the radial regions where the coefficients with $\lambda$ = 2, 3, 4 and 5 present an attractive behaviour, albeit with decreasing depth as the $\lambda$ value increases, while the next higher term shows a much shallower well. All these terms, however, are markedly less attractive than the dominant spherical term with $\lambda$=0 shown in the main figure. Since the data of figure \ref{fig1} mainly describe the short-range and the inner well regions, we need to further include the long-range (LR) behaviour of the total PES. This is done by a numerical interpolation between the short-range (SR) and LR regions included in our in-house scattering code (see below), and it turns out that the strongest attractive term is the long-range spherical polarizability contribution which appears in the standard treatment of the LR forces via perturbative expansions (e.g. see \cite{22}): \begin{eqnarray} V(R,\theta|r_{eq}) &\stackrel{R\rightarrow \infty}{=}& V_{LR}(R,\theta) \sim -\frac{\alpha_{He}}{2 R^4} - 2 \alpha_{He} \frac{\mu P_1(\cos\theta)}{R^5} - \frac{\alpha_{He}\mu^2}{R^6} \label{vlambda3}\\ &&- \left(\alpha_{He}\mu^2 + Q\alpha_{He}\right)\frac{ P_2(\cos\theta)}{R^7} + \ldots \label{perturbative} \end{eqnarray} The above array of asymptotic terms is dominated by the spherical term in the $\lambda$=0 coefficient.
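As a side note on the numerics, the projection integral of eq. \ref{lambda} can be evaluated by Gauss-Legendre quadrature over $\cos\theta$ at each radial distance. The short Python sketch below illustrates this step for a single value of $R$; the callable \texttt{V} is only a toy placeholder standing in for the interpolated \textit{ab initio} surface, and the quadrature order is an arbitrary choice.
\begin{verbatim}
import numpy as np
from scipy.special import eval_legendre

def multipole_coefficients(V, R, lam_max=30, n_quad=64):
    """Project V(R, cos_theta) onto Legendre polynomials at fixed R.

    V is a callable V(R, cos_theta) returning the interaction energy
    (here a placeholder for the interpolated ab initio PES).
    Returns an array V_lam[0..lam_max] of multipolar coefficients.
    """
    x, w = np.polynomial.legendre.leggauss(n_quad)   # nodes/weights on [-1, 1]
    v = np.array([V(R, xi) for xi in x])
    V_lam = np.empty(lam_max + 1)
    for lam in range(lam_max + 1):
        P = eval_legendre(lam, x)
        # the (2*lam + 1)/2 factor makes the projection consistent with
        # the Legendre expansion of the full potential
        V_lam[lam] = 0.5 * (2 * lam + 1) * np.sum(w * v * P)
    return V_lam

# Toy model potential with isotropic, P_1 and P_2 components (arbitrary units).
toy = lambda R, c: -1.0 / R**4 + 0.3 * c / R**5 + 0.1 * eval_legendre(2, c) / R**6
print(multipole_coefficients(toy, R=5.0, lam_max=4))
\end{verbatim}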
Next in importance after the spherical term are the coefficients with $\lambda$=1 and $\lambda$=2, which remain well above the higher coefficients, as one can also gather from the relative strengths of their short-range terms shown in figure \ref{fig1}. Hence, we can qualitatively say that rotational inelasticity at low collision energies will be mainly driven by the $\Delta j$=1 and $\Delta j$=2 rotational coupling terms of the PES, as we shall further discuss below. Once the full PES has been obtained and its multipolar coefficients generated from eqs. \ref{expansion}-\ref{vlambda3}, including the coefficients for the LR extension of the lowest three $\lambda$ values, one can then approach the calculations for the quantum multichannel dynamics of the inelastic scattering processes inducing state changes between rotational levels of the cation, taken to be in its $v$=0 vibrational state: we therefore describe the nuclear motion of the partners within the usual time-independent Schr\"odinger equation (TISE) containing the potential interaction of eq. \ref{expansion} and subjected to the usual boundary conditions within the coupled-channel approach, expanding the total wavefunctions on an ensemble of rotational functions for the molecular ion together with the continuum functions for the relative motion, numerically obtained at the positive relative collision energies of the scattering partners \cite{23}. We have employed our in-house numerical code ASPIN, and details of its implementation have been given before \cite{24,25}. We will therefore not discuss it again in the present work. Suffice it to say that the physical observables which we obtain from the ASPIN scattering code are in this case the state-to-state partial cross sections for each of the contributing total angular momenta $J$: $\sigma^J (j'\leftarrow j|E_i)$, with $E_i$ giving the initial relative energy between the partners. The further summing over the contributing angular momenta (which, in the present case, were taken up to $J_{max}$=50) will therefore yield the corresponding state-to-state partial integral cross sections: \begin{eqnarray} \label{xsec_def} \sigma(j'\leftarrow j|E_i) = \sum^{J_{max}}_J \sigma^J (j'\leftarrow j|E_i) \end{eqnarray} From them we can further obtain the partial rotational quenching/heating rate constants $K_{jj'}(T)$ at the temperature of interest: \begin{eqnarray} \label{rate_def} K_{jj'}(T) = \int \sigma(j'\leftarrow j|E) \sqrt{\frac{4 E}{\pi(k_B T)^3}} \exp{(-E/k_B T)}\, E\, dE \end{eqnarray} We have integrated the computed cross sections over an extended range of collision energies, ensuring that the threshold behaviour is well described by a dense grid of values. We have further used an extended range of energies, well beyond that strictly needed, in order to map the required interval of temperatures. Numerical convergence has been checked so that the final rates are stable to within 0.01. \section{Computing the state-changing collision cross sections} \label{sec3} As mentioned earlier, the inelastic cross sections were obtained using our in-house quantum CC code ASPIN \cite{25,26,27}. Therefore, we shall report here only a few specific details of the numerical procedure. We have included in each CC calculation rotational channels up to $j_{max}$=11, where at each collision energy at least five channels were included as closed channels, to ensure overall convergence of the inelastic cross sections.
The radial integration was extended, at the lowest collision energies which we needed to take into consideration, out to $R_{max}$=1000 \AA. The anisotropy of the PES was also included via a variable number of $\lambda$ values in the expansions of eqs. \ref{expansion} and \ref{lambda}. In practice, we found that we obtained converged inelastic cross sections by keeping $\lambda_{max}$=18 in our potential expansion. The $B_0$ value for the MgH$^+$ rotor was taken to be 6.3870 cm$^{-1}$ \cite{30,31}. It is worth noting here that the present calculations cover a range of energies/temperatures which is much higher than that studied earlier by us on the present system \cite{19,20}. This therefore means that none of the present cross sections were among those discussed in that earlier work. The data in figure \ref{fig2} show a pictorial presentation of the energy spacings of the lower rotational states of the MgH$^+$ cation considered in the present study. \begin{figure} \includegraphics{fig2} \caption{Computed rotational energy spacings between the lower five molecular levels of the ion which will be included in the present dynamical analysis of the collisional rotational energy transfer.} \label{fig2} \end{figure} One clearly sees in that figure how the lower three rotational levels are the closest in energy and will be the ones most effectively activated at the collisional temperatures of the present study. To reach numerical convergence of the state-changing cross sections from eq. \ref{xsec_def}, however, we have also included in the CC state expansion the higher rotational levels shown by figure \ref{fig2}. The significant role played by the anisotropic features of the PES, in conjunction with the values of the energy gaps between transitions, can be seen from the partial excitation cross sections reported in the two panels of figure \ref{fig3}. The upper panel shows excitation processes with $\Delta j>$1 transitions, while the processes involving $\Delta j$=1 transitions are shown in the lower panel. \begin{figure} \includegraphics{fig3} \caption{Computed excitation cross sections for collisional state-changing processes in the cold trap. The examined range of relative energies spans 100 cm$^{-1}$. Upper panel: excitation processes for $\Delta j>$1. Lower panel: excitation collisions for $\Delta j$=1 transitions.} \label{fig3} \end{figure} The marked interaction forces which act during the collisional events give rise to a rich pattern of resonant features, especially above the onset energies of the excitation processes and for transitions involving the lower rotational states. Such marked structural features are obviously linked to the occurrence of both open-channel (trapping) resonances and virtual excitations via Feshbach resonances involving closed rotational channels. The detailed analysis of the dynamical origins of such resonances will not be carried out here, as it is somewhat outside the scope of the present study. However, it is interesting to point out that near each threshold the two dominant excitation processes are those where the ground rotational state of the target ion is being excited via the $\Delta j$=1 coupling potential and the $\Delta j$=2 dynamical potential term. As discussed earlier from the PES features of figure \ref{fig1}, the largest cross sections pertain to the effects of the anisotropic coupling linked to the $\lambda$=1 multipolar coefficient, and to the extension of its radial range during the dynamics.
On the other hand, the next largest cross section is the one for which the $\lambda$=2 potential coupling chiefly causes the ($0\rightarrow 2$) rotational excitation process shown in the upper panel of the same figure \ref{fig3}. On the whole, however, the data for the excitation processes show the state-changing dynamics to be an effective collisional path for the target ion at the low collision energies shown here. \begin{figure} \includegraphics{fig4} \caption{Computed rotationally inelastic cross sections associated with the de-excitation paths from the lower rotational states of the molecular ion. Upper panel: rotational de-excitation processes with $|\Delta j|>$1; lower panel: rotational ``cooling'' processes for $|\Delta j|$=1 transitions. See main text for further details.} \label{fig4} \end{figure} The data reported in figure \ref{fig4} present the corresponding inelastic transitions relevant for the collisional rotational ``cooling'' (i.e. rotational de-excitation) dynamics in the trap. These de-excitation cross sections indicate that the relative sizes of the inelastic transitions with $\Delta j$=-1 (lower panel in the figure) decrease as the initial state moves up the energy ladder. This signifies that the processes where more internal rotational energy is released into the trap after the collision are more efficient, since the interaction times are shorter in comparison to the case where the least energy is being released (e.g. for the ($1\rightarrow 0$) process) and the de-excitation dynamics is least effective. On the other hand, when the rotational internal energy released becomes even larger (upper panel of figure \ref{fig4}), the presence of $|\Delta j|>$1 couplings makes the dynamical torques activated by the higher terms of the multipolar potential of eq. \ref{expansion} less efficient, so that all these inelastic cross sections are uniformly smaller than those for the $|\Delta j|$=1 state-changing processes. Here again, however, we see that the cross sections are rather significant in size and indicate the inelastic collisions to be an effective path for depopulating the rotational states of the trapped molecular ions. \section{Rotationally inelastic rates at low T ($\le$50 K)} \label{sec4} Following the relation shown by equation \ref{rate_def}, we have employed the inelastic cross sections discussed in the previous section to obtain the corresponding inelastic rates over a broad range of temperatures, spanning those of the Coulomb-crystal experiments \cite{16}. \begin{figure} \includegraphics{fig5} \caption{Computed rotational ``heating'' (excitation) rates between different rotational states of MgH$^+$ in the traps. Upper panel: excitation rates with $\Delta j>$1. Lower panel: excitation rates for $\Delta j$=1 transitions. See main text for further details.} \label{fig5} \end{figure} \begin{figure} \includegraphics{fig6} \caption{Same as in figure \ref{fig5}, but this time for the ``cooling'' (de-excitation) rates between rotational states. The upper panel reports de-excitation rates with $|\Delta j|>$1, while the lower panel shows transitions with $\Delta j$=-1. See main text for further details.} \label{fig6} \end{figure} The data in figures \ref{fig5} and \ref{fig6} present the state-to-state inelastic rates involving the lower five rotational states of the MgH$^+$ trapped ion.
More specifically, the excitation rates reported in figure \ref{fig5} show, in the upper panel of that figure, the processes for which the state-changing transitions involve $\Delta j>$1, while the lower panel presents the transition rates with $\Delta j$=1 involving rotational states from $|0\rangle$ up to $|4\rangle$. The following considerations can be made by examining the results given in the figure: \begin{enumerate} \item the excitation rates rapidly become fairly large as the temperature increases and show the ($0\rightarrow 1$) excitation to be by far the largest. All other single-quantum excitations from the higher states of the ion show rates which are smaller than the ($0\rightarrow 1$) rate by a factor of two or more; \item the multi-quantum excitation rates are seen to be markedly smaller than the former ones (see upper panel in the figure) and remain nearly one order of magnitude smaller in the temperature region between 10 K and 30 K; \item in both sets of processes the excitation from the ground rotational state corresponds to the largest values of the excitation rates within each panel. \end{enumerate} It is interesting to note that in a recent study of ours on a similar molecular cation, the OH$^+$($^2\Sigma$) molecule \cite{31}, the rotational excitation rates computed in a trap with He as a buffer gas turned out to be about a factor of three smaller over the same range of temperatures, in keeping with the differences between the energy spacings of their lower-lying rotational levels. If we now turn to the rotational relaxation rates over the same range of temperatures, we see in the two panels of figure \ref{fig6} their relative sizes and T-dependence. The lower panel reports single-quantum rotational ``cooling'' transitions, while the multiple-quantum rotational de-excitation transitions are in the upper panel of the same figure. The general trend of the relative sizes of the inelastic rotational de-excitation transitions is here very similar to that of the excitation rates. However, with the exception of the ($1\rightarrow 0$) process, all the computed rates show a fairly slow dependence on temperature, a result which is in keeping with the findings from our earlier calculations regarding the OH$^+$/He system \cite{31}. The single-quantum rotational relaxation rates are also uniformly larger than those for two- and three-quanta transitions, which are in some cases up to one order of magnitude smaller. The same size differences were also found for the OH$^+$($^2\Sigma$) cation \cite{31}. \section{Modeling the cooling kinetics in the trap} \label{sec5} As discussed in the Introduction, the experimental findings of ref. \cite{18} indicate that the He buffer gas uploaded within the setting of MgH$^+$ ions trapped in a cryogenically cooled, linear, radio-frequency quadrupole trap, and already translationally cooled through Coulomb interaction with atomic Mg$^+$ ions, can cause the molecular ions to be cooled into their ground rotational state even though a low density of He atoms of $\sim$ 10$^{10}$ cm$^{-3}$ is present in the trap. In those experiments, in fact, the He temperature is essentially kept constant while the effective collision ``temperature'' is changed via the scaling of the average micromotion amplitude. In this way, the molecular ions experience different effective temperatures within the uploaded gas environment (see ref. \cite{18} for further details).
Given the information we have obtained from the calculations presented in the previous Sections, we are now in a position to try and follow the microscopic evolutions of the cation's rotational state populations by setting up the corresponding rate eq.s describing such evolution as induced by collisional energy transfer with the uploaded He atoms in the trap \cite{28,29}: \begin{equation} \label{eq6} \frac{d\mathbf{p}}{dt} = n_{He} \mathbf{k}(T)\cdot \mathbf{p}(t) \end{equation} where the quantity $n_{He}$ indicates the density of He atoms in the trap, the vector $\mathbf{p}(t)$ contains the time-evolving fractional rotational populations of the ion partner's rotational state, p$_ j(t)$, from the initial conditions at t=t$_{initial}$, and the matrix $\mathbf{k}(T)$ contains the individual k$_{i\rightarrow j}(T)$ rate coefficients at the temperature of the trap's conditions. Both the p(t$_{initial}$) values and the collisional temperature T of the trap corresponding to the mean collisional energy between the partners are quantities to be specifically selected in each computational run and will be discussed in detail in the modelling examples presented below. In the present study we shall disregard for the moment the inclusion of the state-changing rates due to spontaneous radiative processes in the trap. These quantities are already known to be smaller than the collisionally-controlled rates between the lower rotational levels of such systems, as already shown by us in earlier studies \cite{29}, and are therefore not expected to have a significant effect under the present trap conditions \cite{18}. We have chosen the initial rotational temperature of the trap's ions to be at 400 K, so that the vector's components at t=t$_{initial}$ are given by a Boltzmann distribution at that chosen temperature. This was done in order to follow the kinetics evolution over an extended range of time and also test the physical reliability of our computed state-changing collisional rates. Obviously the actual kinetics evolution of physical interest in this study will be considered over the range of the much lower temperatures sampled by the experiments \cite{18}. If the rate coefficients of the $\mathbf{K}(T)$ matrix satisfy the detailed balance between state-changing transitions, then as t$\rightarrow \infty$ the initial Boltzmann distribution will approach that of the effective equilibrium temperature of the uploaded buffer gas as felt by the ions in the Coulomb Crystals. These asymptotic solutions correspond to the steady-state conditions in the trap and can be obtained by solving the corresponding homogeneous form of eq. \ref{eq6} given as: $d\mathbf{p} (t)/dt = 0$. We solved the homogeneous equations by using the singular-value decomposition technique (SVD) \cite{28}, already employed by us in previous studies. The non-homogeneous equations \ref{eq6}, starting from our t$_{initial}$ of 400 K, were solved using the Runge-Kutta method for different translational temperatures of the trap. Since the role of the He density is simply that of a scaling factor in the kinetics eq.s, we present in the figures only the actual value which was employed in the trap experiments \cite{18}. The results shown by figure \ref{fig7} indicate for the present system the steady-state population in the cold trap over a rather large range of temperatures up to 100 K. 
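Before discussing these results, we summarize the numerical procedure just outlined in schematic form: the rate matrix of eq. \ref{eq6} is assembled from the state-to-state coefficients, its null vector (obtained here from a singular-value decomposition, in the spirit of the SVD technique quoted above) provides the steady-state populations, and the initial 400 K Boltzmann vector is propagated with a Runge-Kutta integrator. The helper names, the rigid-rotor constant and the level truncation below are illustrative choices of ours, not those of the codes actually employed.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

N_HE = 1.0e10   # He density in cm^-3, value quoted by the experiments
B_CM = 6.3      # approximate MgH+ rotational constant in cm^-1 (illustrative)
KB_CM = 0.695   # Boltzmann constant in cm^-1 / K

def boltzmann(T, jmax=4):
    """Fractional Boltzmann populations of the levels j = 0 ... jmax."""
    j = np.arange(jmax + 1)
    p = (2 * j + 1) * np.exp(-B_CM * j * (j + 1) / (KB_CM * T))
    return p / p.sum()

def rate_matrix(k):
    """k[i, j] = k_{j -> i}(T) in cm^3 s^-1; returns the matrix of eq. (6)."""
    K = np.array(k, dtype=float)
    np.fill_diagonal(K, 0.0)
    K -= np.diag(K.sum(axis=0))      # diagonal = total loss out of each level
    return K

def steady_state(K):
    """Null vector of K (smallest singular value), normalized to unity."""
    _, _, vt = np.linalg.svd(K)
    p = np.abs(vt[-1])
    return p / p.sum()

def evolve(K, t_max=5.0, T_init=400.0):
    """Propagate dp/dt = n_He K p from the 400 K Boltzmann distribution."""
    rhs = lambda t, p: N_HE * (K @ p)
    return solve_ivp(rhs, (0.0, t_max), boltzmann(T_init),
                     method="RK45", dense_output=True)
\end{verbatim}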
The data of figure \ref{fig7} allow us to monitor numerically the equilibrium populations expected for the rotational levels of the ions in the crystal as the perceived translational temperature is increased. The inset in the figure clearly shows how, up to about 10 K, the only levels involved would be j=0 and j=1, with a very small presence of the j=2 populations. The rather small energy spacings between the MgH$^+$ rotational levels, whose first four levels span about 75 cm$^{-1}$, also indicate that, as the temperature increases, many more rotational levels will be occupied at the equilibrium trap temperatures indicated by figure \ref{fig7}. How fast such a collisional thermalization would occur will be shown by the results presented below. \begin{figure} \includegraphics{fig7} \caption{Asymptotic (steady-state) rotational populations of the MgH$^+$ internal states as a function of computed trap translational temperatures up to 100 K. The data for the lower T values up to $\sim$ 8 K are shown in the inset. See main text for further details.} \label{fig7} \end{figure} The calculations reported by figure \ref{fig8} indicate the time evolution of the collisionally-driven molecular ion populations for different values of the trap's temperature and for a fixed density of the buffer gas of $n_{He}$=10$^{10}$ cm$^{-3}$. All the temperatures reported in the panels correspond to those experimentally assessed in the Coulomb crystals of ref. \cite{18}, while the density of the He gas is also the one indicated by the experiments. \begin{figure} \includegraphics{fig8} \caption{Computed values of the time-evolution of the MgH$^+$ ion's rotational state populations by collisional perturbations in the Coulomb crystals induced by tuning the micromotion amplitudes after uploading He as a buffer gas. The temperatures are the same as those sampled in the experiments of ref. \cite{18} and the gas density is also the same as in the experiments: 10$^{10}$ cm$^{-3}$.} \label{fig8} \end{figure} The six panels of that figure report the six different effective temperatures perceived by the localized ions that are reported by the experimental data \cite{18}. We show in the panels the time evolution of the relative populations of the ion's lower five rotational levels, although the experimental data analysed the behaviour of only the first four rotational states as those having any significant population during the cooling process. The data indicate that the change of the perceived trap temperature is indeed a significant parameter for changing the relative rates of level populations during the collisional rotational de-excitation processes. As T increases, in fact, we see that the two dominant population fractions are those associated with the j=0 and j=1 states. Their relative importance, however, changes dramatically when moving from the 8-9 K region, where at the equilibrium time the j=0 population is about twice that of the j=1 state, to the 20-23 K region, where the two levels have now inverted populations and the j=1 state is more abundant than the j=0 state. This result is remarkably close to the experimental findings \cite{18}, which suggested that the rotational populations of the ion's levels very quickly become those given by the thermal distributions within the trap, at the average translational temperature generated by changes in the micromotion amplitude after the uploading of the buffer gas, whose temperature remains fixed at around 8.7 K \cite{18}.
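The inversion between the j=0 and j=1 populations noted above can also be checked directly from the equilibrium Boltzmann ratio. Assuming a rigid-rotor spacing $E_j \simeq B_e\, j(j+1)$ with $B_e \simeq 6.3$ cm$^{-1}$, an approximate value of ours which is consistent with the $\sim$75 cm$^{-1}$ span of the first four levels quoted above, one has
\begin{equation*}
\frac{p_{j=1}}{p_{j=0}} = 3\, e^{-2B_e hc/k_B T} \simeq 3\, e^{-18\,\mathrm{K}/T},
\end{equation*}
which is about 0.4 at 9 K, so that the j=0 level is roughly twice as populated as j=1, and about 1.3 at 21 K, where the two populations are indeed inverted, in agreement with the trend displayed by figures \ref{fig7} and \ref{fig8}.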
We see, in fact, from the data of the calculations shown by the panels of figure \ref{fig8}, that after at most about 3 s the relative populations have reached their steady-state values reported by the test calculations of figure \ref{fig7}. In other words, the efficient collisional energy transfer processes between MgH$^+$ and He, caused by changing their relative translational average energy in the trap through the tuning of the micromotion amplitudes, rapidly allow the internal rotational populations of the cation to thermalize at the experimentally achieved translational temperatures in the Coulomb trap. The experimental data indeed suggest also (see figure 3 of ref. \cite{18}) that, over a fairly broad range of temperatures, the rotational temperatures are the same as the translational temperatures after the He buffer gas is uploaded to the trap. It therefore stands to reason to expect, as found in the experiments, that, after the passing of a very short time interval, the ions localized within the CC environment will have reached the same temperatures for their rotational and translational degrees of freedom, as we shall further illustrate below. Another useful indicator which could be extracted from the present calculations is a characteristic time, $\tau$, which can be defined as: \begin{equation} \label{tau} \left\langle E_{rot} \right\rangle (\tau) - \left\langle E_{rot} \right\rangle (t=\infty) = \frac{1}{e}\left( \left\langle E_{rot} \right\rangle (t=0) -\left\langle E_{rot} \right\rangle(t=\infty)\right) \end{equation} where the quantity $\left\langle E_{rot} \right\rangle$ represents the level-averaged rotational internal energy of the molecule in the trap and $\tau$ is the characteristic time interval defined by equation \ref{tau}. It obviously depends on the physical collision frequency and therefore on the $n_{He}$ value present in the trap. The model calculations of figure \ref{fig9} report the behaviour of $\tau$ for the experimental value of the He density in the trap and for the expected range of effective thermal temperatures tested by the experiments \cite{18}. \begin{figure} \includegraphics{fig9} \caption{Computed characteristic relaxation time $\tau$, as defined in eq. \ref{tau}, for different translational (thermal) temperatures and for the buffer gas density value considered in the experiments.} \label{fig9} \end{figure} From the data reported in that figure, we see that $\tau$ is a slow function of T, while it depends markedly on the chosen $n_{He}$ value and is inversely proportional to it. The buffer gas density is the one estimated by the experiments \cite{18} and the range of temperatures covers the values given by the experimental data of figure 1 in \cite{18}. One clearly sees there that the characteristic relaxation time, i.e. the average elapsed time required to reach rotational-to-translational temperature equilibration, is well below 1 s, being around 0.50 s at the lower T values and only reducing to 0.30 s at the highest experimental thermal temperatures. Such values are once more indicative of the collisional efficiency of the rotational cooling processes for MgH$^+$, since similar calculations for the OH$^+$($^2\Sigma$) cation \cite{31} indicated a $\tau$ value which was a factor of two larger over the same range of temperatures.
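A minimal sketch of how the characteristic time of eq. \ref{tau} can be extracted from a computed population history is given below; it assumes that the populations $p_j(t)$ have been obtained as in the kinetics solution sketched earlier, and the rigid-rotor level energies are again an approximate, illustrative choice of ours.
\begin{verbatim}
import numpy as np

def mean_rot_energy(p, B_cm=6.3):
    """Level-averaged rotational energy <E_rot> (cm^-1) for populations p_j."""
    j = np.arange(len(p))
    return np.sum(p * B_cm * j * (j + 1))

def characteristic_time(t_grid, p_of_t):
    """tau of eq. (tau): time at which the excess <E_rot> has dropped to 1/e.

    t_grid : array of times in s
    p_of_t : array of shape (len(t_grid), nlevels) with the populations p_j(t)
    """
    E = np.array([mean_rot_energy(p) for p in p_of_t])
    # approximate <E_rot>(t = infinity) by the last computed point
    target = E[-1] + (E[0] - E[-1]) / np.e
    return t_grid[np.argmax(E <= target)]
\end{verbatim}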
To further make contact with the experimental findings, and link our present results with the $\tau$ indicator of figure \ref{fig9}, we report in figure \ref{fig10} the relative populations of the ion's rotational levels in the trap, as a function of the different temperatures sampled by the experiments and for different delay times after the uploading of the buffer gas in the trap and the start of the micromotion scaling to change the effective, relative translational energies within the trap. The time values shown correspond to 1, 2 and 3 s of delay after buffer gas loading. \begin{figure} \includegraphics{fig10} \caption{Computed relative populations of the MgH$^+$ rotational levels in the trap, as a function of the trap's thermal temperatures and for three different values (in s) of time delay after buffer gas uploading and ion micromotion tuning in the experiments.} \label{fig10} \end{figure} The following considerations could be made by looking at the results shown in that figure: \begin{enumerate} \item All panels in the figure indicate that, after 3 s at the most, the population of rotational states has reached the Boltzmann thermal distribution for each of the examined temperatures. This can be confirmed by the time evolutions in figure \ref{fig8} and the Boltzmann distributions of figure \ref{fig7}. This means that the rotational temperature of the molecular ions is in thermal equilibrium with their velocity distributions. \item With the exclusion of the two lowest temperatures of 8 K and 9 K, at all the other temperatures the relative populations change negligibly after increasing the time delays from 1 s to 3 s. In practice, the calculations indicate that after about 1 s the rotational populations have reached their steady-state value at that temperature in each panel, while only at the lowest T values the equilibration of the relative populations is reached after a slightly longer time (see also the lowest two panels of figure \ref{fig8}). \end{enumerate} One can therefore argue that the present collision-driven rotational population evolution in the Coulomb traps indicates a very rapid thermalization process and a very efficient energy redistribution within the MgH$^+$ rotational levels in order to bring the rotational temperature of the trapped ion in line with the translational temperature. The latter is the one achieved by the same CC ions after the micromotion tuning of the relative collision energies following the uploading of He as the buffer gas. Thus, the ions change their relative velocities with respect to the He gas atoms but rapidly attain rotational stabilization in equilibrium with their final thermal energy. To make the comparison with the experimental findings even more transparent, we further report in the five panels of figure \ref{fig11} the relative distributions of rotational state populations found for the five temperatures considered by the experiments (fig. 1 of ref. \cite{18}) and compare them with the same distributions found by us after a time delay of 3 s in the evolution of the kinetics eq.s. In line with what has been discussed above, one should also note the rapid thermalization of the molecular rotational temperatures to the ion's translational temperatures found by our calculations, a feature which confirms the experimental findings reported by fig.1 in ref. \cite{18} and shown by the panels on the r.h.s. of figure \ref{fig11}.
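Since this comparison involves assigning a rotational ``temperature'' to a set of computed level populations, we also sketch the standard Boltzmann-plot fit that can be used for that purpose; the rigid-rotor energies and the function name are again illustrative assumptions of ours and are not taken from ref. \cite{18}.
\begin{verbatim}
import numpy as np

def rotational_temperature(p, B_cm=6.3, kb_cm=0.695):
    """Fit T_rot from populations p_j via ln(p_j/(2j+1)) = c - E_j/(kB T_rot)."""
    j = np.arange(len(p))
    E = B_cm * j * (j + 1)            # rigid-rotor level energies in cm^-1
    y = np.log(p / (2 * j + 1))       # degeneracy-weighted populations
    slope, _ = np.polyfit(E, y, 1)    # slope = -1 / (kB * T_rot)
    return -1.0 / (kb_cm * slope)
\end{verbatim}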
\begin{figure} \includegraphics[width=0.6\textwidth]{fig11} \caption{ Observed relative populations of the rotational states, and the observed thermal distributions of the same (given by the lighter 'sticks'), for the trapped MgH$^+$ cation in the Coulomb crystal after the uploading of the He buffer gas (panels in the left column, reproduced with permission from ref. \cite{18}). We also report for comparison the calculated rotational population distributions after solution of the master eq.s \ref{eq6} and after a time evolution delay of 3 s (panels on the right-side column of the figure). See main text for further details.} \label{fig11} \end{figure} One can make the following considerations from a perusal of the data presented in that figure: \begin{enumerate} \item The experimental distributions at the various rotational temperatures we are considering here are very close to those obtained from solving the present Master eq.s and extracting from the latter the distributions after an uploading time between 1 s and 3 s; \item The calculations suggest a very rapid, collision-driven thermalization between the internal rotational degrees of freedom of the trapped MgH$^+$ cation and the translational temperature experimentally generated in the trap after the uploading of the buffer gas and the micromotion tuning effects. This result is in line with what has been suggested by the experimental data reported by \cite{18}; \item We also see from the experimental data reported in the figure that the best-fit thermal distributions given by the lighter "sticks" are very close to the rotational temperatures reported in the same panels. This is in keeping with our present findings, since we have shown that the computed rotational distributions on the r.h.s. of the figure were obtained after thermalization of the ion's rotational temperatures to its steady-state translational temperatures; \item The experimental estimates of the refilling rates for the depleted rotational levels in the trap which are given in ref. \cite{18} indicate a rate of about 1 s$^{-1}$, from which they extrapolate a rate of about 10 s$^{-1}$ for a gas density $n_{He}$=10$^{10}$ cm$^{-3}$. This means that the driving depletion rates, over the range of T spanned by the experiments, would be around 10$\times$10$^{-10}$ cm$^3$s$^{-1}$. If we look at the individual state-to-state depletion rates computed in the present study (see panels of figure \ref{fig6}), we see that our dominant rates for depleting the first three rotational states of MgH$^+$ around 10-20 K sum up to about 6.5$\times$10$^{-10}$ cm$^3$s$^{-1}$. This is in good accord with the above value, especially if we notice that the Langevin rate value employed by the experimental work is larger than the state-to-state cooling rates generated by the present calculations. Thus, we expect that the rate value extracted from the refilling frequencies would be smaller if our computed rates were employed. Either way, however, the computed and experimentally estimated values are close to each other; \item One should further notice that our calculated estimates of the characteristic cooling time $\tau$ of figure \ref{fig9} are around 0.4 s at the temperature of 15 K. This corresponds to a ''refilling'' rate after rotational depletion of about 2 s$^{-1}$, which is also consistent with the rates from the present, more realistic calculations as opposed to the larger Langevin rate employed by the experiments.
\end{enumerate} In conclusion, the present modeling of the collision-driven rotational cooling kinetics of MgH$^+$ ions trapped in a Coulomb crystal, and further exposed to the interaction with the uploaded He buffer gas, indicates this process to be rather efficient and to occur within a characteristic time below 1 s. The internal rotational state populations are shown to reach thermalization with the translational temperatures of the buffer gas under the different trap conditions which are induced by the experimental tuning of the micromotion amplitudes. This is in near quantitative agreement with the experimental findings of ref. \cite{18} and agrees with all the cooling dynamics features discussed by the experiments. \section{Present conclusions} \label{sec6} The study reported in this paper deals with the detailed computational modeling of the internal rotational state-changing kinetics of a molecular ion, the MgH$^+$($^1\Sigma$) ion, which, experimentally, first undergoes sympathetic cooling in a Coulomb crystal trap arrangement and is then further internally cooled by collisions with an uploaded buffer gas of He atoms, the tuning of its micromotion amplitudes simulating the change of its average relative collisional energy within the trap. In order to carry out a complete computational simulation from first principles, and using a quantum ab initio description of the various steps involved, we have first obtained the electronic potential energy surface for the interaction between MgH$^+$ and He atoms. To this aim, we have employed the set of ab initio points already reported in our earlier work \cite{19,20} and have supplemented them by generating additional points for the short-range regions of the repulsive part of the PES, as discussed and described in Section \ref{sec2}. The ensuing interaction potential has been used to calculate the partial, integral, state-to-state inelastic cross sections between the lower five rotational states of the molecular ion, although only four of them have been found to be significantly populated in the experiments. From the set of inelastic cross sections, which involve excitation and de-excitation transitions between the rotational states, we have obtained the corresponding inelastic rates for the rotational ''cooling'' and the rotational ''heating'' collision-driven dynamics over a range of temperatures up to about 50 K, which is well above the experimentally tested trap temperatures in ref. \cite{18} and also extends well beyond the range of our earlier calculations on this same system \cite{19,20}. The solution of the Master Eq.s for the time evolution of the level populations during the uploading of the buffer gas allowed us to obtain quantitative estimates of the time interval needed to deplete the rotational states of the ion in order to thermalize its rotational state populations to the translational temperature of the trap after gas uploading. Our present results are in close agreement with the experimental findings and suggest that: \begin{enumerate} \item After about 1 s the internal energy distributions of the trapped ion have reached the translational temperature perceived after the uploading of the buffer gas and achieved by the tuning of the micromotion amplitude around a fixed He temperature of about 8.7 K. Our estimated ''refilling'' rate is of the order of about 1-2 s$^{-1}$, a value which is close to the experimental estimates of about 1 s$^{-1}$ \cite{18}.
\item The experimental estimate of a global cooling rate in the traps is around 10$\cdot$10$^{-10}$ cm$^3$s$^{-1}$, in line with our dominant cooling rates at around 20 K of about 6.6$\cdot$10$^{-10}$ cm$^3$s$^{-1}$. \item All the experimentally observed distributions between rotational states of the ion at different temperatures, are in near quantitative agreement with our thermalized rotational distributions after time intervals between 1 s and 3 s. \item The experimentally observed equalization between rotational and thermal temperatures of the ions in the trap are confirmed by our calculations, which report rotational distributions at the various temperatures to be very close to the thermal distributions achieved after rapidly reaching the steady-state conditions in the traps. \end{enumerate} The calculations have therefore found very good agreement with the experimental data and suggest that the collision-driven state-changing rates for the present cations are indeed very large and indicate a very rapid process of thermalization of the rotational levels' ''temperature'' to the translational temperature achieved into the Coulomb crystal environment after the uploading of the He buffer gas and subsequent tuning of the ion micromotion. \section{Acknowledgments} L.G.S. acknowledges the financial support from the Spanish Ministry of Science and Innovation Grant No. CTQ2015-65033-P. F.A.G. and R.W. thank the support by the Austrian Science Fund (FWF), Project No. 29558-N36. The computational results have been obtained by using in-house computer codes running on the HPC infrastructure LEO of the University of Innsbruck. This work was supported by a STSM Grant from COST Action CM1401, being held by L. Gonz{\' a}lez-S{\' a}nchez. We thank I. Iskandarov and Lorenzo Petralia for their generous initial help in setting up the multipolar coefficients from the computed interaction potential discussed in Section \ref{sec2}. \section{Keywords} Keyword 1: inelastic collisions, keyword 2: intermolecular potentials, keyword 3: collisionally inelastic rates, keyword 4: rotational relaxation times , keyword 5: molecular dynamics in cold traps \bibliographystyle{unsrt}
\section{Introduction} Let $(M,g)$ be an $n$-dimensional compact Riemannian manifold ($n \geq 3$). In \cite{yamabe:60} Yamabe attempted to show that there is a metric $\tilde{g}$ conformal to $g$ such that the scalar curvature $S_{\tilde{g}}$ of ${\tilde{g}}$ is constant. However, Trudinger \cite{trudinger:68} realized that Yamabe's proof contained a serious gap. The problem is now solved, but it took a very long time to find the good approach. The problem of finding a metric ${\tilde{g}}$ with constant scalar curvature in the conformal class $[g]$ is called the Yamabe problem. The first step towards a rigorous solution of this problem was achieved by Trudinger~\cite{trudinger:68} who was able to repair the gap of Yamabe's article in the case that the scalar curvature of $g$ is non-positive. Eight years later, Aubin \cite{aubin:76} solved the problem for arbitrary non locally conformally flat manifolds of dimension $n \geq 6$. The problem was completely solved another eight years later in an article of Schoen \cite{schoen:84} in which the proof was reduced to the positive-mass theorem which had previously been proved by Schoen and Yau \cite{schoen.yau:79a,schoen.yau:88}. The reader can refer to \cite{lee.parker:87}, \cite{aubin:76} or \cite{hebey:97} for more information on this subject. The method to solve the Yamabe problem was the following. Let $u \in C^{\infty}(M)$, $u>0$ be a smooth function and $\tilde{g} = u^{N-2} g$ where $N= \frac{2n}{n-2}$. Then, multiplying $u$ by a constant, the following equation is satisfied: \begin{eqnarray*} L_g (u) = S_{\tilde{g}} |u|^{N-2} u. \end{eqnarray*} where $$L_g= c_n \Delta_g +S_g = {4(n-1) \over n-2} \Delta_g + S_g$$ is called the Yamabe operator. As a consequence, solving the Yamabe problem is equivalent to finding a positive smooth solution $u$ of \begin{eqnarray} \label{eqyam} L_g (u) = C_0 |u|^{N-2} u. \end{eqnarray} where $C_0$ is a constant. In order to obtain solutions of this equation Yamabe defined the quantity $$\mu(M,g)= \inf_{u \not= 0, u \in C^{\infty}(M)} Y(u)$$ where $$Y(u)= \frac{\int_M c_n |\nabla u|^2 + S_g u^2\,dv_g}{{ \left( \int_M |u|^N \,dv_g \right)}^{\frac{2}{N}}}.$$ Nowadays, $\mu(M,g)$ is called the \emph{Yamabe invariant}, and $Y$ the \emph{Yamabe functional}. Writing the Euler-Lagrange equation associated to $Y$, we see that there exists a one to one correspondence between critical points of $Y$ and solutions of equation (\ref{eqyam}). In particular, if $u$ is a positive smooth function such that $Y(u)= \mu(M,g)$, then $u$ is a solution of (\ref{eqyam}) and $\tilde{g}= u^{N-2} g$ is the desired metric of constant scalar curvature. The key point of the resolution of the Yamabe problem is the following theorem due to Aubin \cite{aubin:76}. In the theorem and in the whole article, $\mS^N$ will always denote the sphere $S^n$ with the standard Riemannian structure. \\ \begin{theorem} \label{aubin} Let $(M,g)$ be a compact Riemannian manifold of dimension $n \geq 3$. If $\mu(M,g) < \mu(\mS^n)$, then there exists a positive smooth function $u$ such that $Y(u)= \mu(M,g)$. \end{theorem} This strict inequality is used to show that a minimizing sequence does not concentrate in any point. Aubin \cite{aubin:76} and Schoen \cite{schoen:84} proved the following. \begin{theorem} Let $(M,g)$ be a compact Riemannian manifold of dimension $n \geq 3$. Then $\mu(M,g) \leq \mu(\mS^n)= n(n-1) \om_n^{\frac{2}{n}}$ where $\om_n$ stands for the volume of the standard sphere $S^n$. 
Moreover, we have equality in this inequality if and only if $(M,g)$ is conformally diffeomorphic to the sphere. \end{theorem} These theorems solve the Yamabe problem. \\ In this paper, we introduce and study an invariant that we will call the \emph{second Yamabe invariant}. It is well known that the operator $L_g$ has discrete spectrum $$\Spec(L_g)= \{ \la_1(g), \la_2(g), \cdots \}$$ where the eigenvalues $$\la_1(g) < \la_2(g) \leq \la_3(g) \leq \cdots \leq \la_k(g) \cdots \to +\infty$$ appear with their multiplicities. The variational characterization of $\la_1(g)$ is given by $$\la_1(g)= \inf_{u \not= 0, u \in C^{\infty}(M)} \frac{\int_M c_n |\nabla u|^2 + S_g u^2\,dv_g}{\int_M |u|^2 \,dv_g}.$$ Let $[g]$ be the conformal class of $g$. Assume now that the Yamabe invariant $\mu (M,g) \geq 0$. It is easy to check (a short justification is given at the end of this introduction) that $$\mu(M,g) = \inf_{\tilde{g} \in [g]} \la_1(\tilde{g}) \Vol(M,\tilde{g})^{\frac{2}{n}}.$$ We then enlarge this definition. \begin{definition} Let $k \in \mN^*$. Then, the $k^{th}$ Yamabe invariant is defined by $$\mu_k(M,g) = \inf_{\tilde{g} \in [g]} \la_k(\tilde{g}) \Vol(M,\tilde{g})^{\frac{2}{n}}.$$ \end{definition} With these notations, $\mu_1(M,g)$ equals the Yamabe invariant $\mu(M,g)$ in the case $\mu(M,g) \geq 0$, and $\mu_1(M,g)=-\infty$ in the case $\mu(M,g) < 0$. The goal of this article is to study the second Yamabe invariant $\mu_2(M,g)$ for manifolds whose Yamabe invariant is non-negative. As explained in Section \ref{negative}, the most interesting case is when $\mu(M,g)>0$. In particular, we discuss whether $\mu_2(M,g)$ is attained; this question is addressed in Subsection~\ref{attaine}. In particular, Proposition~\ref{notattained} asserts that, contrary to the standard Yamabe invariant, $\mu_2(M,g)$ cannot be attained by a metric if $M$ is connected. In other words, there does not exist $\tilde{g} \in [g]$ such that $\mu_2(M,g) = \la_2(\tilde{g}) \Vol(M,\tilde{g})^{\frac{2}{n}}$. In order to find minimizers, we enlarge the conformal class $[g]$ to what we call the class of \emph{generalized metrics} conformal to $g$. A generalized metric is a ``metric'' of the form $\tilde{g} = u^{N-2} g$, where $u$ is no longer necessarily positive and smooth, but $u \in L^{N}(M)$, $u\geq 0$, $u\not\equiv 0$. The definitions of $\la_2(\ti g)$ and of $\Vol(M,\ti g)$ can be extended to generalized metrics (see section 3). Then, we are able to prove the following result: \begin{theorem} \label{attain} Let $(M,g)$ be a compact Riemannian manifold of dimension $n \geq 3$ whose Yamabe invariant is non-negative. Then, $\mu_2(M,g)$ is attained by a generalized metric in the following cases: \begin{enumerate}[\ \ $\bullet$] \item $\mu_1(M,g) > 0 $ and $\mu_2(M,g) < \left[\mu_1(M,g)^{\frac{n}{2}} + \mu_1(\mS^n)^{\frac{n}{2}} \right]^{\frac{2}{n}} $; \medskip \item $\mu_1(M,g) = 0 $ and $\mu_2(M,g) < \mu_1(\mS^n)$ \end{enumerate} where $\mu_1(\mS^n)= n(n-1) \om_n^{\frac{2}{n}} $ is the Yamabe invariant of the standard sphere. \end{theorem} The result we obtain in the case $ \mu_1(M,g) = 0 $ is not surprising. Indeed, when $\mu_2(M,g) < \mu_1(\mS^n)$, Aubin's methods \cite{aubin:76} can be adapted here and allow one to avoid concentration of minimizing sequences. However, when $\mu_1(M,g) > 0 $ and $\mu_2(M,g) < \left[\mu_1(M,g)^{\frac{n}{2}} + \mu_1(\mS^n)^{\frac{n}{2}} \right]^{\frac{2}{n}} $, the result is much more difficult to obtain (see Subsection~\ref{sectatt}).
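For the convenience of the reader, let us briefly justify the identity claimed above, namely that $\mu(M,g)=\inf_{\tilde{g} \in [g]} \la_1(\tilde{g}) \Vol(M,\tilde{g})^{\frac{2}{n}}$ when $\mu(M,g)\geq 0$; the short argument below only uses the conformal covariance of $L_g$, the relation $dv_{\tilde{g}}=u^N\,dv_g$ and the H\"older inequality, and is stated here merely as a sketch. For $\tilde{g}=u^{N-2}g$ with $u>0$ smooth, testing $\la_1(\tilde{g})$ with $u^{-1}w$, $w \in C^{\infty}(M)$, $w \not\equiv 0$, gives
\begin{equation*}
\frac{\int_M (u^{-1}w)\, L_{\tilde{g}}(u^{-1}w)\, dv_{\tilde{g}}}{\int_M (u^{-1}w)^2\, dv_{\tilde{g}}}
= \frac{\int_M w\, L_g w\, dv_g}{\int_M u^{N-2} w^2\, dv_g}
\geq \frac{Y(w)\, {\left(\int_M |w|^N\, dv_g\right)}^{\frac{2}{N}}}{{\left(\int_M u^N\, dv_g\right)}^{\frac{2}{n}} {\left(\int_M |w|^N\, dv_g\right)}^{\frac{2}{N}}}
\geq \frac{\mu(M,g)}{\Vol(M,\tilde{g})^{\frac{2}{n}}},
\end{equation*}
where we used the H\"older inequality $\int_M u^{N-2}w^2\,dv_g \leq {\left(\int_M u^N\,dv_g\right)}^{\frac{2}{n}} {\left(\int_M |w|^N\,dv_g\right)}^{\frac{2}{N}}$ together with $\int_M w L_g w\, dv_g = Y(w) {\left(\int_M |w|^N \,dv_g\right)}^{\frac{2}{N}} \geq 0$, which holds since $\mu(M,g)\geq 0$. Taking the infimum over $w$ yields $\la_1(\tilde{g})\Vol(M,\tilde{g})^{\frac{2}{n}} \geq \mu(M,g)$. Conversely, if $\tilde{g}$ is a Yamabe metric, i.e. a minimizer of $Y$ in $[g]$ with constant scalar curvature $S_{\tilde{g}}$, then testing $\la_1(\tilde{g})$ with the constant function gives $\la_1(\tilde{g})\Vol(M,\tilde{g})^{\frac{2}{n}} \leq S_{\tilde{g}}\Vol(M,\tilde{g})^{\frac{2}{n}} = \mu(M,g)$, whence the equality.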
A second result is to find explicit examples for which the assumptions of Theorem \ref{attain} are satisfied. The method consists in finding an appropriate couple of test functions. \begin{theorem} \label{condi} The assumptions of Theorem \ref{attain} are satisfied in the following cases: \begin{enumerate}[\ \ $\bullet$] \item $\mu_1(M,g)>0$, $(M,g)$ is not locally conformally flat and $n \geq 11$; \medskip \item $\mu_1(M,g)=0$, $(M,g)$ is not locally conformally flat and $n \geq 9$. \end{enumerate} \end{theorem} One of our motivations is to find solutions of the Yamabe equation (\ref{eqyam}) with alternating sign, i.e. positive and negative values. If $M$ is connected, alternating sign implies that the zero set $u^{-1}(0)$ of $u$ is not empty. In the following we will use the standard definition to call the zero set $u^{-1}(0)$ of a function $u$ the \emph{nodal set of $u$}. A solution with a non-empty nodal set is usually called a \emph{nodal solution}. If $M$ is connected, then the maximum principle implies that a solution of the Yamabe equation is nodal if and only if it has alternating sign. They are called \emph{nodal solutions} of the Yamabe equation. The articles \cite{hebeyvaugon:94}, \cite{djadli.jourdain:02}, \cite{jourdain:99}, \cite{holcman:99} prove existence of nodal solutions under symmetry assumptions or under some assumptions which allow to use Aubin's methods, as in Theorem~\ref{attain} when $\mu_1(M,g) = 0 $ and $\mu_2(M,g) < \mu_1(\mS^n)$. If $\mu(M,g) \leq 0$, another method is given in Section \ref{negative}. The method we use here is completely different and we obtain solutions on a large class of manifolds. In particular, to our knowledge, there is no work which leads to the existence of such solutions if the Yamabe invariant is positive and if $(M,g)$ is not conformally equivalent to the round sphere. The result we obtain is the following: \begin{theorem} \label{eulerequ} Let $(M,g)$ be a compact Riemannian manifold of dimension $n \geq 3$. Assume that $\mu_2(M,g)$ is attained by a generalized metric $u^{N-2}g$ where $u \in L^N(M)$, $u \geq 0$ and $u \not\equiv 0$. Let $\Om $ be the nodal set of $u$. Then, there exists a nodal solution $w \in C^{\infty}(M \setminus \Om) \cap C^{3,\alpha}(M)$ ( $\alpha \leq N-2$) of equation (\ref{eqyam}) such that $|w| =u $. \end{theorem} A corollary of Theorems \ref{attain}, \ref{condi} and \ref{eulerequ} is then \begin{cor} \label{cor1} Let $(M,g)$ be a compact Riemannian manifold of dimension $n \geq 3$ whose Yamabe invariant is non-negative. We assume that one of the following assumptions is true: \begin{enumerate}[\ \ $\bullet$] \item $\mu_1(M,g)>0$, $(M,g)$ is not locally conformally flat and $n \geq 11$; \medskip \item $\mu_1(M,g)=0$, $(M,g)$ is not locally conformally flat and $n \geq 9$. \end{enumerate} Then, there exists a nodal solution of Yamabe equation (\ref{eqyam}). \end{cor} {\it Acknowledgement\\} The authors want to thank M. Ould Ahmedou for many interesting conversations about nodal solutions of the Yamabe equation. His large knowledge about such problems was a stimulating inspiration for this article. The author are also extremely obliged to Fr\'ed\'eric Robert for having pointed out a little mistake in the first version of this paper. 
\section{Variational characterization of $\mu_2(M,g)$} \label{varia} \subsection{Notation} In the whole article we will use the following notation $$L_+^N(M):=\left\{u\in L^N(M)\,|\,u\geq 0, \quad u\not \equiv 0\right\}.$$ \subsection{Grassmannians and the min-max principle} Let $\Gr{k}{}{C^\infty(M)}$ be the $k$-dimensional \emph{Grassmannian} in $C^\infty(M)$, i.e.\ the set of all $k$-dimensional subspaces of $C^\infty(M)$. The Grassmannian is an important ingredient in the min-max characterization of $\la_k(\ti g)$ $$\la_k(\ti{g}):=\inf_{V\in \Gr{k}{}{C^\infty(M)}} \sup_{v\in V\setminus\{0\}} {\int_M (L_{\ti g} v)v\, dv_{\ti g}\over \int_M v^2 \,dv_{\ti g}}.$$ We will also need a slightly modified Grassmannian. For any $u\in L_+^N(M)$ we define $\Gr{k}u{C^\infty(M)}$ to be the set of all $k$-dimensional subspaces of $C^\infty(M)$ such that the restriction operator to $M\setminus u^{-1}(0)$ is injective. More explicitly, we have $\mathop{\rm span}(v_1,\ldots,v_k) \in\Gr{k}{u}{C^\infty(M)}$ if and only if $v_1|_{M\setminus u^{-1}(0)},\ldots,v_k|_{M\setminus u^{-1}(0)}$ are linearly independent. Sometimes it will be convenient to use the equivalent statement that the functions $u^{{N-2\over 2}}v_1,\ldots,u^{{N-2\over 2}}v_k$ are linearly independent. Similarly, by replacing $C^\infty(M)$ by $H_1^2(M)$ we obtain the definitions of $\Gr{k}{}{H_1^2(M)}$ and $\Gr{k}u{H_1^2(M)}$. \subsection{The functionals} For all $u \in L_+^N(M)$, $v \in H_1^2(M)$ such that $u^{\frac{N-2}{2}} v \not\equiv 0$, we set $$F(u,v) = \frac{\int_M c_n | \nabla v|^2 + S_g v^2 \,dv_g }{\int_M v^2 u^{N-2}\,dv_g} {\left( \int_M u^N \,dv_g \right)}^{\frac{2}{n}}.$$ \subsection{Variational characterization of $\mu_2(M,g)$} The following characterization will be of central importance for our article. \begin{prop} We have \begin{eqnarray} \label{defmu} \mu_k(M,g)= \inf_{{\ss u\in L_+^N(M)\atop \ss V\in \Gr{k}u{H_1^2(M)}}} \sup_{v\in V\setminus\{0\}} F(u, v) \end{eqnarray} \end{prop} \proof{} Let $u$ be a smooth positive function on $M$. For all smooth functions $f$, $f \not\equiv 0$, we set $\tilde{g} = u^{N-2} g$ ($N= \frac{2n}{n-2}$) and $$F'(u,f)= \frac{\int_M f L_{ \tilde{g}} f \,dv_{\tilde{g}}} {\int_M f^2 \,dv_{\tilde{g}}}.$$ The operator $L_g$ is conformally invariant (see \cite{hebey:97}) in the following sense: \begin{eqnarray} \label{conf_inv} u^{N-1} L_{\tilde{g}} ( u^{-1} f) = L_g (f) \end{eqnarray} Together with the fact that \begin{eqnarray} \label{vol_elem} dv_{\tilde{g}}= u^N \,dv_g, \end{eqnarray} we get that $$F'(u,f)= \frac{\int_M (uf) L_{g}(uf)\,dv_g }{\int_M (u f)^2 u^{N-2} \,dv_g} .$$ Using the min-max principle, we can write that $$\lambda_k (\tilde{g}) = \inf_{ V \in \Gr{k}u{H_1^2(M)} }\sup_{f \in V\setminus\{0\}} F'(u,f).$$ Now, replacing $uf$ by $v$, we obtain that \begin{eqnarray} \label{def_lambda2} \lambda_k (\tilde{g}) = \inf_{V\in \Gr{k}{}{H_1^2(M)}} \sup_{v \in V\setminus\{0\}} \frac{\int_M v L_{g} v \,dv_g }{\int_M v^2 u^{N-2} \,dv_g}. \end{eqnarray} Using the definition of $\mu_k$ and $\Vol_{\tilde{g}}(M) = \int_M u^N \,dv_g$, we derive $$\mu_k(M,g)= \inf_{\ss u\in L_+^N(M)\atop \ss V\in \Gr{k}u{C^\infty(M)}} \sup_{v\in V\setminus\{0\}} F(u, v) $$ The result follows immediately. \section{Generalized metrics and the Euler-Lagrange equation} \subsection{A regularity result} We will need the following result. \begin{lem} \label{regu} Let $u \in L^N(M)$ and $v \in H_1^2(M)$. We assume that $$L_g v = u^{N-2} v$$ holds in the sense of distributions. Then, $v \in L^{N+\ep}(M)$ for some $\ep >0$.
\end{lem} This result is well known for the standard Yamabe equation. Proofs for the standard Yamabe equation can be found in \cite{trudinger:68} and \cite{hebey:97}, and the modifications for proving Lemma~\ref{regu} are obvious. Unfortunately, \cite{trudinger:68} contains some typos, and the book \cite{hebey:97} is difficult to obtain. This is why we included a proof in the appendix for the convenience of the reader. \subsection{The $k$-th eigenvalue of the Yamabe operator for a generalized metric} On a given Riemannian manifold $(M,g)$ we say that $\tilde{g} = u^{N-2} g$, $u\in L_+^N(M)$, is a \emph{generalized metric} conformal to $g$. For a generalized metric $\tilde{g}$, we can define \begin{eqnarray}\label{def_lambdad} \lambda_k (\tilde{g}) = \inf_{V\in \Gr{k}u{H_1^2(M)}} \sup_{v \in V\setminus\{0\}} \frac{\int_M v L_{g} v \,dv_g }{\int_M v^2 u^{N-2} \,dv_g}. \end{eqnarray} \begin{prop} \label{la1la2} For any $u\in L_+^N(M)$ and $\ti g=u^{N-2}g$, there exist two functions $v,w$ belonging to $H_1^2(M)$, with $v \geq 0$, such that, in the sense of distributions, \begin{eqnarray} \label{eqvl} L_g v = \la_1 (\tilde{g}) u^{N-2} v \end{eqnarray} and \begin{eqnarray} \label{eqwl} L_g w = \la_2 (\tilde{g}) u^{N-2} w. \end{eqnarray} Moreover, we can normalize $v,w$ by \begin{eqnarray} \label{vwort} \int_M u^{N-2} v^2 \,dv_g = \int_M u^{N-2} w^2 \,dv_g=1 \hbox{ and } \int_M u^{N-2} v w \,dv_g =0. \end{eqnarray} \end{prop} For $k=2$ the infimum in formula~(\ref{def_lambdad}) over all subspaces $V\in \Gr2{u}{H_1^2(M)}$ is attained by $V= \mathop{\rm span} (v,w)$ and the supremum over the functions in $V\setminus \{0\}$ is attained by $w$. The reader should pay attention to the fact that the space $V$ is in general not unique. As one can check, if $w$ changes the sign then the supremum over all $v\in V =\mathop{\rm span} (v,w )\setminus\{0\}$ and the supremum over all $v\in V_1= \mathop{\rm span} ( w, |w| )\setminus\{0\}$ coincide. {}From section (\ref{varia}), we get $$\mu_2(M,g) = \inf_{\tilde{g} \in \ol{[g]}} \lambda_2 (\tilde{g}) \Vol(M,\tilde{g})^{\frac{2}{n}}.$$ Hence, $\mu_2(M,g)$ can be attained by a regular metric, or by a generalized metric, or it may not be attained at all. These questions are discussed in Section~\ref{properties}. Let us now prove Proposition \ref{la1la2}. \\ {\bf Proof of Proposition \ref{la1la2}:} Let $(v_m)_m$ be a minimizing sequence for $\la_1(\tilde{g})$, i.e. a sequence $v_m\in H^2_1(M)$ such that $$\lim_{m\to \infty} \frac{\int_M c_n |\nabla v_m|^2 + S_g v_m^2 \,dv_g}{\int_M u^{N-2} v_m^2 \,dv_g}= \la_1(\tilde{g}).$$ It is well known that $(|v_m|)_m$ is also a minimizing sequence. Hence, we can assume that $v_m \geq 0$. If we normalize $v_m$ by $\int_M u^{N-2} v_m^2 \,dv_g=1$, then $(v_m)_m$ is bounded in $H^2_1(M)$ and after restriction to a subsequence we may assume that there exists $v \in H^2_1(M)$, $v \geq 0$ such that $v_m \to v$ weakly in $H^2_1(M)$, strongly in $L^2(M)$ and almost everywhere. If $u$ is smooth, then \begin{eqnarray} \label{eq.limit} \int_M u^{N-2} v^2 \,dv_g = \lim_m \int_M u^{N-2} v_m^2 \,dv_g=1 \end{eqnarray} and by standard arguments, $v$ is a non-negative minimizer of the functional associated to $\la_1(\tilde{g})$. We must show that (\ref{eq.limit}) still holds if $u \in L_+^N(M) $. Let $A>0$ be a large real number and set $u_A = \inf(u,A)$.
Then, using the H\"older inequality, we write \begin{eqnarray*} \left| \int_M u^{N-2} \left(v_m^2 - v^2\right) \,dv_g \right| & \leq & \left( \int_M u_A^{N-2} |v_m^2 - v^2| \,dv_g + \int_M (u^{N-2} - u_A^{N-2}) (|v_m|+|v|)^2 \,dv_g\right) \\ & \leq & A \int_M |v_m^2 - v^2| \,dv_g\\ &&{}+ {\left( \int_M (u^{N-2}-u_A^{N-2})^\frac{N}{N-2} \,dv_g \right)}^{\frac{N-2}{N}} {\left( \int_M (|v_m|+|v|)^N \,dv_g \right)}^{\frac{2}{N}}. \end{eqnarray*} By Lebesgue's theorem we see that $$\lim_{A \to +\infty} \int_M (u^{N-2}-u_A^{N-2})^\frac{N}{N-2} \,dv_g = 0. $$ Since $(v_m)_m$ is bounded in $H_1^2(M)$, it is bounded in $L^N(M)$ and hence there exists $C>0$ such that $\int_M (|v_m|+|v|)^N \,dv_g \leq C$. By strong convergence in $L^2(M)$, $$\lim_m \int_M |v_m^2 - v^2| \,dv_g =0.$$ Equation~\eref{eq.limit} easily follows and $v$ is a non-negative minimizer of the functional associated to $\la_1(\tilde{g})$. Writing the Euler-Lagrange equation of $v$, we find that $v$ satisfies equation (\ref{eqvl}). Now, we define $$\la_2'(\tilde{g}) = \inf \frac{\int_M c_n |\nabla w|^2 + S_g w^2 \,dv_g}{\int_M u^{N-2} |w|^2 \,dv_g}$$ where the infimum is taken over smooth functions $w$ such that $u^{\frac{N-2}{2} } w \not\equiv 0$ and such that $\int_M u^{N-2} vw \,dv_g= 0 $. With the same method, we find a minimizer $w$ of this problem that satisfies (\ref{eqwl}) with $\la_2'(\tilde{g})$ instead of $\la_2(\tilde{g})$. However, it is not difficult to see that $\la_2'(\tilde{g})=\la_2(\tilde{g})$ and Proposition~\ref{la1la2} easily follows. \subsection{Euler-Lagrange equation of a minimizer of $\la_2\Vol^{2/n}$} \begin{lemma}\label{lem.EL} Let $u\in L_+^N(M)$ with $\int u^N=1$. Suppose that $w_1,w_2\in H_1^2(M)\setminus\{0\}$, $w_1,w_2\geq 0$ satisfy \begin{align} \int (c_n|\na w_1|^2 +\Scal_g w_1^2)\, dv_g & \leq \mu_2(M,g)\,\int u^{N-2}w_1^2 \label{ineq.v1}\\ \int (c_n|\na w_2|^2 +\Scal_g w_2^2)\, dv_g & \leq \mu_2(M,g)\,\int u^{N-2}w_2^2\label{ineq.v2} \end{align} and suppose that $(M\setminus w_1^{-1}(0))\cap (M\setminus w_2^{-1}(0))$ has measure zero. Then $u$ is a linear combination of $w_1$ and~$w_2$ and we have equality in \eref{ineq.v1} and~\eref{ineq.v2}. \end{lemma} \proof{} We let $\bar{u} = a w_1 + b w_2$ where $a,b>0$ are chosen such that \begin{eqnarray} \label{v1v2} \frac{a^{N-2}}{b^{N-2}}\; \frac{\int_M u^{N-2} w_1^2 \,dv_g}{\int_M u^{N-2} w_2^2 \,dv_g}= \frac{\int_M w_1^N \,dv_g}{\int_M w_2^N\,dv_g} \end{eqnarray} and \begin{eqnarray} \label{baru} \int_M \bar{u}^N \,dv_g=a^N\int_M w_1^N + b^N \int w_2^N=1. \end{eqnarray} Because of the variational characterization of $\mu_2$ we have \begin{eqnarray} \label{mu<} \mu_2(M,g) \leq \sup_{(\la, \mu) \in \mR^2 \setminus \{(0,0)\}} F(\bar{u}, \la w_1+ \mu w_2) \end{eqnarray} By \eref{ineq.v1},\eref{ineq.v2} and \eref{baru}, and since $(M\setminus w_1^{-1}(0))\cap (M\setminus w_2^{-1}(0))$ has measure zero \begin{eqnarray} F(\bar{u}, \la w_1+ \mu w_2) & = & \frac{ \la^2 \int_M \left(c_n {|\nabla w_1|}^2 +S_g w_1^2\right) \,dv_g + \mu^2 \int_M\left( c_n {|\nabla w_2|}^2 +S_g w_2^2\right) \,dv_g }{\la^2 \int_M |\bar{u}|^{N-2} w_1^2 \,dv_g +\mu^2 \int_M |\bar{u}|^{N-2} w_2^2 \,dv_g}\nonumber \\ &\leq & \mu_2(M,g) \frac{ \la^2\int_M u^{N-2} w_1^2 \,dv_g + \mu^2 \int_M u^{N-2} w_2^2 \,dv_g}{\la^2 a^{N-2} \int_M w_1^N \,dv_g + \mu^2 b^{N-2} \int_M w_2^N \,dv_g}.\label{ineq.F} \end{eqnarray} As one can check, relation (\ref{v1v2}) implies that this expression does not depend on $\la, \mu$. 
Hence, setting $\la=a $ and $\mu= b$, the denominator is $1$, and we get \begin{eqnarray*} \sup_{ (\la, \mu) \in \mR^2 \setminus \{(0,0)\}} F(\bar{u}, \la w_1+ \mu w_2) & \leq & \mu_2(M,g) \int_M u^{N-2}(a^2 w_1^2 + b^2 w_2^2) \,dv_g \\ & = & \mu_2(M,g) \int_M u^{N-2} \bar{u}^2 \,dv_g. \end{eqnarray*} By the H\"older inequality, \begin{eqnarray} \sup_{ (\la, \mu) \in \mR^2 \setminus \{(0,0)\}} F(\bar{u}, \la w_1+ \mu w_2) \leq \mu_2(M,g) {\left( \int_M u^N \,dv_g \right)}^{\frac{N-2}{N}} {\left( \int_M {\bar{u}}^N \,dv_g \right)}^{\frac{2}{N}} =\mu_2(M,g).\label{ineq.hoelder} \end{eqnarray} Inequality (\ref{mu<}) implies that we have both equality in the H\"older inequality of \eref{ineq.hoelder} and in~\eref{ineq.F}. The equality in the H\"older inequality implies that there exists a constant $c>0$ such that $u= c \bar{u}$ almost everywhere. Moreover, since $\int u^N = \int \bar{u}^N =1$, we have $u = \bar{u}= a w_1 + b w_2 $. The equality in \eref{ineq.F} implies equality in \eref{ineq.v1} and~\eref{ineq.v2}. \qed \begin{theorem}[Euler-Lagrange equation] \label{theo.limit} Assume that $\mu_2(M,g)\neq 0$ and that $\mu_2(M,g)$ is attained by a generalized metric $\tilde{g} = u^{N-2} g$ with $ u \in L_+^N(M)$. Let $v,w$ be as in Proposition~\ref{la1la2}. Then, $u = |w|$. In particular, \begin{eqnarray} \label{eqlim} L_g w= \mu_2(M,g) |w|^{N-2} w \end{eqnarray} Moreover, $w$ has alternating sign and $w \in C^{3,\alpha}(M)$ ($\alpha \leq N-2$). \end{theorem} \begin{rem} Assume that $\mu_2(M,g)$ is equal to $0$ and is attained by a generalized metric $g'$. Then, using the conformal invariance of the Yamabe operator, it is easy to check that for all generalized metrics $\tilde{g}$ conformal to $g'$, we have $\lambda_2(\tilde{g}) = 0$. Consequently, each metric conformal to $g$ is a minimizer for $\mu_2(M,g)$ and Theorem \ref{theo.limit} is always false in this case. However, we will still get a nodal solution of~\eref{eqyam} if $\mu_2(M,g)=0$. Indeed, by Theorem~\ref{attain} and the remark above, $\la_2(g)=0$. Let $w$ be an eigenfunction associated to $\la_2(g)$. We have $L_g w = 0$. Then, we have a solution of \eref{eqlim}. \end{rem} \begin{rem} Assume that $\mu_2(M,g) \not= 0$ and that $\mu_2(M,g)$ is attained by a generalized metric. Let $w$ be the solution of equation (\ref{eqlim}) given by Theorem \ref{theo.limit}. We let $\Om_+= \{x \in M \hbox{ s.t. } w(x)>0 \}$ and $\Om_-= \{x \in M \hbox{ s.t. } w(x)<0 \}$. Then, an immediate consequence of Lemma \ref{lem.EL} is that $\Om_+$ and $\Om_-$ each have exactly one connected component. \end{rem} \proof{} Without loss of generality, we can assume that $\int_M u^N \,dv_g =1$. By assumption we have $ \la_2(\tilde{g})= \mu_2(M,g)$. Let $v,w \in H_1^2(M)$ be some functions satisfying equations (\ref{eqvl}), (\ref{eqwl}) and relation (\ref{vwort}). \begin{step} \label{ste1} We have $\la_1(\tilde{g}) < \la_2 (\tilde{g})$. \end{step} We assume that $\la_1(\tilde{g}) = \la_2 (\tilde{g})$. Then, after possibly replacing $w$ by a linear combination of $v$ and $w$, we can assume that the function $u^{\frac{N-2}{2}} w$ changes the sign. We apply Lemma~\ref{lem.EL} for $w_1 := \sup(w,0)$ and $w_2:= \sup(-w,0)$. We obtain the existence of $a,b>0$ with $u = a w_1 + b w_2$. Now, by Lemma~\ref{regu}, $w \in L^{N+\ep}(M)$. By a standard bootstrap argument, equation (\ref{eqwl}) shows that $w \in C^{2, \al}(M)$ for all $\al \in ]0,1[$. It follows that $u\in C^{0, \al}(M)$ for all $\al \in ]0,1[$.
Now, since $\la_1(\tilde{g}) = \la_2 (\tilde{g})$ and by definition of $\la_1(\tilde{g})$, $w$ is a minimizer of the functional $\bar{w} \mapsto F(u,\bar{w})$ among the functions belonging to $H_1^2(M)$ and such that $u^{\frac{N-2}{2}} \bar{w} \not\equiv 0$. Since $F(u,w) = F(u,|w|)$, we see that $|w|$ is a minimizer for the functional associated to $\la_1(\tilde{g})$ and hence, writing the Euler-Lagrange equation of the problem, $|w|$ satisfies the same equation as $w$. As a consequence, $|w|$ is in $C^2(M)$. By the maximum principle, we get $|w| >0$ everywhere. This is false. Hence, the step is proved.\\ \begin{step} The function~$w$ changes the sign. \end{step} Assume that $w$ does not change the sign, i.e.~ after possibly replacing $w$ by $-w$, we have $w\geq 0$. Using~\eref{vwort} we see that $(M\setminus v^{-1}(0))\cap (M\setminus w^{-1}(0))$ has measure zero. Setting $w_1:=v$ and $w_2:=w$ we have \eref{ineq.v1} and~\eref{ineq.v2}. While we have equality in \eref{ineq.v2}, Step~1 implies that inequality \eref{ineq.v1} is strict. However, using Lemma~\ref{lem.EL} we can derive equality in \eref{ineq.v1}. Hence we obtain a contradiction, and the step is proved. \begin{step} There exists $a,b >0$ such that $u= a \sup(w,0) + b \sup(-w,0)$. Moreover, $w\in C^{2,\al}(M)$ and $u\in C^{0,\al}(M)$ for all $\al \in ]0,1[$. \end{step} As in the proof of Step~1 we apply Lemma~\ref{lem.EL} for $w_1:=\sup(w,0)$ and $w_2:=\sup(-w,0)$. We obtain the existence of $a,b >0$ such that $u= a w_1 + b w_2$. As in Step~1 we get that $w\in C^{2,\al}(M)$ and $u\in C^{0,\al}(M)$ for all $\al \in ]0,1[$. This proves the present step. \begin{step} Conclusion. \end{step} Let $h \in C^{\infty}(M)$ be a function whose support is contained in $M \setminus \{u^{-1}(0) \}$. For $t$ close to $0$, set $u_t = |u + th|$. Since $u>0$ on the support of $h$ and since $u$ is continuous (see last step), we have for $t$ close to $0$, $u_t=u+th$. As $\mathop{\rm span}(v,w)\in \Gr{2}u{H_1^2(M)}$ we obtain using \eref{defmu} for all $t$ $$\mu_2(M,g) \leq \sup_{(\lambda, \mu) \in \mR^2 \setminus \{ (0,0) \}} F(u_t,\la v + \mu w).$$ Equations (\ref{eqvl}), (\ref{eqwl}), and relation (\ref{vwort}) yield \begin{eqnarray*} F(u_t,\la v + \mu w )& = & \frac{\la^2 \la_1(\ti g) \int_M u^{N-2} v^2 \,dv_g + \mu^2 \la_2(\ti g) \int_M u^{N-2} w^2 \,dv_g} {\la^2 \int_M u_t^{N-2} v^2 \,dv_g + 2 \la \mu \int_M u_t^{N-2} v w \,dv_g + \mu^2 \int_M u_t^{N-2} w^2 \,dv_g } { \left( \int_M u_t^N \,dv_g \right)}^{\frac{2}{n}}\\ & =& \frac{\la^2 \la_1(\ti g) + \mu^2 \la_2(\ti g)}{\la^2 a_t + \la\mu b_t + \mu^2 c_t} {\left( \int_M |u_t|^N \,dv_g \right)}^{\frac{2}{n}}, \end{eqnarray*} where $$a_t= \int_M u_t^{N-2} v^2 \,dv_g,$$ $$b_t = 2 \int_M u_t^{N-2} v w \,dv_g$$ and $$c_t = \int_M u_t^{N-2} w^2 \,dv_g.$$ The functions $a_t$, $b_t$ and $c_t$ are smooth for $t$ close to $0$; furthermore, $a_0 = c_0=1$ and $b_0=0$. The function $f(t,\al):= F(u_t,\sin(\al) v + \cos(\al) w)$ is smooth for small $t$.
Using $\la_1(\ti g)<\la_2(\ti g)$ one calculates \begin{align*} {\pa\over \pa\al}\,f(0,\al) &=0 &\Leftrightarrow\qquad&\al\in {\pi\over 2}\mZ\\ {\pa^2\over \pa\al^2}\,f(0,\al)&<0 &\mbox{for} \qquad&\al\in \pi\mZ\\ {\pa^2\over \pa\al^2}\,f(0,\al)&>0 &\mbox{for} \qquad&\al\in \pi\mZ+{\pi\over 2} \end{align*} Applying the implicit function theorem to ${\pa f \over \pa\al}$ at the point $(0,0)$, we see that there is a smooth function $t\mapsto\alpha(t)$, defined on a neighborhood of $0$ with $\al(0)=0$ and $$f(t,\al(t))=\sup_{\al\in \mR}f(t,\al)= \sup_{(\lambda, \mu) \in \mR^2 \setminus \{ (0,0) \}} F(u_t,\la v + \mu w).$$ As a consequence $${d\over dt}|_{t=0}\sin^2\al(t)={d\over dt}|_{t=0}\cos^2\al(t)={d\over dt}|_{t=0}(\sin^2\al(t)a_t)={d\over dt}|_{t=0}(\sin\al(t)\cos\al(t)b_t)=0.$$ Hence, ${d\over dt}|_{t=0}\,f(t,\al(t))$ exists and we have \begin{eqnarray*} {d\over dt}|_{t=0}\,f(t,\al(t)) & = & \la_2(M,\ti g)\left(- \frac{d}{dt}|_{t=0} c_t + \frac{d}{dt}|_{t=0} {\left( \int_M |u_t|^N \,dv_g \right)}^{\frac{2}{n}} \right) \\ & = & \la_2(M,\ti g) (N-2)\left( - \int_M u^{N-3} h w^2 \,dv_g + \int_M u^{N-1} h \,dv_g \right). \end{eqnarray*} By definition of $\mu_2(M,g)$, $f$ admits a minimum in $t= 0$. As $\la_2(M,\ti g)=\mu_2(M,g)\neq 0$ we obtain $$ \int_M u^{N-3} h w^2 \,dv_g = \int_M u^{N-1} h \,dv_g.$$ Since $h$ is arbitrary (we just have to ensure that its support is contained in $M \setminus \{ u^{-1}(0)\}$), we get that $u^{N-3}w^2 = u^{N-1}$ on $M \setminus \{ u^{-1}(0)\}$, hence $u=|w|$ on $M \setminus \{ u^{-1}(0)\}$. Together with Step~3, we get $u=|w|$ everywhere. This proves theorem~\ref{theo.limit}.\qed \section{A sharp Sobolev inequality related to $\mu_2(M,g)$} \subsection{Statement of the results} For any compact Riemannian manifold $(M,g)$ of dimension $n \geq 3$, Hebey and Vaugon have shown in (\cite{hebey.vaugon:96}) that there exists $B_0(M,g) > 0$ such that $$ \mu_1(\mS^n)= n(n-1)\,\om_n^{\frac{2}{n}} = \inf_{u \in H_1^2(M)\setminus \{0\}} \frac{ \int_M c_n |\nabla u |^2 + B_0 \int_M u^2 \,dv_g} {{\left(\int_M u^N \,dv_g \right)}^{\frac{2}{n}}} \eqno{\rm (S)}$$ where $\om_n$ stands for the volume of the standard $n$-dimensional sphere $\mS^n$ and where $\mu_1(\mS^n)$ is the Yamabe invariant of $\mS^n$. This inequality is strongly related to the resolution of the Yamabe problem. It allows to avoid concentration for the minimizing sequence of $\mu_1 (M,g)$. For the minimization of $\mu_2(M,g)$, this inequality is not sufficient and another one must be constructed. The following result is adapted to the problem of minimizing $\mu_2(M,g)$. \begin{theorem} \label{sobolev} On a compact connected Riemannian manifold $(M,g)$ of dimension $n \geq 3$ we have $$ 2^{\frac{2}{n}} \mu_1(\mS^n) = \inf_{\ss u \in L_+^N(M)\atop\ss V\in \Gr{k}u{H_1^2(M)}} \sup_{v\in V\setminus\{0\}} \frac{\left(\int_M c_n |\nabla v |^2 \,dv_g +B_0(M,g) \int_M v^2 \,dv_g \right) {\left( \int_M u^N \,dv_g \right)}^{\frac{2}{N}}}{ \int_M u^{N-2} v^2 \,dv_g} \eqno{\rm (S_1)}$$ where $B_0(M,g)$ is given by inequality {\rm (S)}. \end{theorem} We present now two corollaries of Theorem~\ref{sobolev}. \begin{cor} \label{muS} For the standard $n$-dimensional sphere we have $ \mu_2 (\mS^n) = 2^{\frac{2}{n}} \mu_1(\mS^n)$. 
\end{cor} \begin{cor} \label{muR} For all $u \in C^{\infty}_c(\mR^n)$ and $V\in \Gr{2}u{C^\infty_c(\mR^n)}$ we have $$ 2^{2/n} \mu_1(\mS^n) \leq \sup_{v\in V\setminus\{0\}} \frac{\left(\int_{\mR^n} c_n |\nabla v |^2 \,dv_g \right) {\left( \int_{\mR^n} |u|^N \,dv_g \right)}^{\frac{2}{N}}}{ \int_{\mR^n} |u|^{N-2} v^2 \,dv_g} $$ \end{cor} \subsection{Proof of theorem \ref{sobolev}} The functional $$G(u,v):=\frac{\left(\int_M c_n |\nabla v |^2 \,dv_g +B_0(M,g) \int_M v^2 \,dv_g \right) {\left( \int_M u^N \,dv_g \right)}^{\frac{2}{N}}} { \int_M u^{N-2} v^2 \,dv_g}$$ is continuous on $L_+^N(M)\times (H_1^2(M)\setminus\{0\})$. As a consequence $I(u,V):= \sup_{v\in V \setminus \{0\}} G(u,v)$ depends continuously on $u\in L_+^N(M)$ and $V\in \Gr{2}u{H_1^2(M)}$. Thus, in order to show the theorem it is sufficient to show that $I(u,V)\geq 2^{2/n} \mu_1(\mS^n)$ for all smooth $u>0$ and $V\in \Gr{2}{}{C^\infty(M)}$. Without loss of generality, we can assume \begin{eqnarray} \label{u=1} \int_M u^N \,dv_g =1. \end{eqnarray} The operator $v\mapsto P(v):= c_n u^{2-N\over 2}\Delta (u^{2-N\over 2}v) + B_0(M,g)u^{2-N}v$ is an elliptic operator on $M$, and $P$ is self-adjoint with respect to the $L^2$-scalar product. Hence, $P$ has discrete spectrum $\la_1\leq \la_2\leq \ldots$ and the corresponding eigenfunctions $\phi_1,\phi_2,\ldots$ are smooth. Setting $v_i:= u^{2-N\over 2}\phi_i$ we obtain $$\left(c_n\Delta+B_0\right)(v_i)= \la_i u^{N-2}v_i$$ $$\int u^{N-2}v_iv_j\,dv_g=0\qquad\mbox{if $\la_i\neq \la_j$}.$$ The maximum principle implies that an eigenfunction to the smallest eigenvalue $\la_1$ has no zeroes. Hence $\la_1<\la_2$, and we can assume $v_1>0$. We define $w_+:=a_+\sup(0,v_2)$ and $w_-:=a_-\sup(0,-v_2)$, where we choose $a_+,a_->0$ such that $$\int_M u^{N-2} w_-^2 \,dv_g = \int_M u^{N-2} w_+^2 \,dv_g =1.$$ We let $\Om_-= \{ w <0 \} $ and $\Om_+= \{ w \geq 0 \} $. By H\"older inequality, \[ \begin{array}{ccl} 2 & = & \int_M u^{N-2} w_-^2 \,dv_g + \int_M u^{N-2} w_+^2 \,dv_g \\ & \leq & {\left(\int_{\Om_-} u^N \,dv_g \right)}^{\frac{N-2}{N}} {\left(\int_M w_-^N \,dv_g \right)}^{\frac{2}{N}} + {\left(\int_{\Om_+} u^N \,dv_g \right)}^{\frac{N-2}{N}} {\left(\int_M w_+^N \,dv_g \right)}^{\frac{2}{N}}. \end{array} \] Using the sharp Sobolev inequality $(S)$, we get that \begin{eqnarray} \label{usingS} 2 \mu_1(\mS^n) &\leq & {\left(\int_{\Om_-} u^N \,dv_g \right)}^{\frac{N-2}{N}} \int_M w_- u^{N-2\over 2}P \left( u^{N-2\over 2}\, w_-\right) \,dv_g\\ & + & {\left(\int_{\Om_+} u^N \,dv_g \right)}^{\frac{N-2}{N}} \int_M w_+ u^{N-2\over 2}P \left( u^{N-2\over 2}\, w_+\right) \,dv_g \end{eqnarray} Since $w_-$ resp.\ $w_+$ are some multiples of $w$ on $\Om_-$ resp.\ $\Om_+$, they satisfy the same equation as $w$. Hence, we get that \[ \begin{array}{ccl} 2 & = & \mu_1(\mS^n)^{-1} \la_2 \left( {\left(\int_{\Om_-} u^N \,dv_g \right)}^{\frac{N-2}{N}} \int_M u^{N-2} w_-^2 \,dv_g + {\left(\int_{\Om_+} u^N \,dv_g \right)}^{\frac{N-2}{N}} \int_M u^{N-2} w_+^2 \,dv_g \right) \\ & = & \mu_1(\mS^n)^{-1} \la_2 \left( {\left(\int_{\Om_-} u^N \,dv_g \right)}^{\frac{N-2}{N}} + {\left(\int_{\Om_+} u^N \,dv_g \right)}^{\frac{N-2}{N}} \right) . \end{array} \] Now, for any real non-negative numbers $a,b \geq 0$, the H\"older inequality yields $$a + b \leq 2^{\frac{2}{N}}{\left( a^{\frac{N}{N-2}} + b^ {\frac{N}{N-2}} \right)}^{\frac{N-2}{N}} $$ We apply this inequality with $a = {\left(\int_{\Om_-} u^N \,dv_g \right)}^{\frac{N-2}{N}}$ and $b= {\left( \int_{\Om_+} u^N \,dv_g \right)}^{\frac{N-2}{N}} $. 
Using (\ref{u=1}), we obtain $$ 2 \leq 2^{\frac{2}{N}} \mu_1(\mS^n)^{-1} \la_2 \left( \int_{\Om_-} u^N \,dv_g + \int_{\Om_+} u^N \,dv_g \right)= 2^{\frac{2}{N}} \mu_1(\mS^n)^{-1} \la_2. $$ We obtain $\la_2 \geq 2^{\frac{2}{n}} \mu_1(\mS^n)$. Since $\la_2 = I(u,\mathop{\rm span}(v_1,v_2))$, this ends the proof of Theorem~\ref{sobolev}. \subsection{Proof of Corollaries~\ref{muS} and \ref{muR}} It is well known that $B_0(\mS^n)$ equals the scalar curvature of~$\mS^n$, i.e.\ $B_0(\mS^n)=n(n-1)$. Replacing $B_0(\mS^n)$ by its value and taking the infimum over $u,V$, the right-hand term of inequality $(S_1)$ is exactly the variational characterization of $\mu_2(\mS^n)$ (see equation \eref{defmu}). This proves that $\mu_2(\mS^n) \geq 2^{2/n} \mu_1(\mS^n)$. Corollary~\ref{muS} then follows from Theorem~\ref{upbound}. Since $\mR^n$ is conformal to $\mS^n \setminus \{p\}$ ($p$ is any point of $\mS^n$), we can use the conformal invariance to prove Corollary~\ref{muR}. \section{Some properties of $\mu_2(M,g)$} \label{properties} \subsection{Is $\mu_2(M,g)$ attained?} \label{attaine} Let $(M,g)$ be an $n$-dimensional compact Riemannian manifold. The Yamabe problem shows that $\mu_1(M,g)$ is attained by a metric $\tilde{g}$ conformal to $g$. Some questions arise naturally concerning $\mu_2(M,g)$: {\bf 1-} Is $\mu_2(M,g)$ attained by a metric? {\bf 2-} Is it possible that $\mu_2(M,g)$ is attained by a generalized metric? In this section, we give answers to these questions. The first result we prove is the following: \begin{prop} \label{sUs} Let $\mS^n \dot\cup \mS^n$ be the disjoint union of two copies of the sphere equipped with their standard metric. Then, $\mu_2(\mS^n \dot\cup \mS^n)= 2^{2/n} \mu_1(\mS^n)$ and it is attained by the canonical metric. \end{prop} \proof{} One computes $$\la_2(\mS^n \dot\cup \mS^n)\,\Vol(\mS^n \dot\cup \mS^n)^{2/n} = 2^{2/n}\la_1(\mS^n)\,\Vol(\mS^n)^{2/n}=2^{2/n} \mu_1(\mS^n).$$ Hence $\mu_2(\mS^n \dot\cup \mS^n)\leq 2^{2/n} \mu_1(\mS^n)$ follows. Now, let $\ti g$ be an arbitrary smooth metric on $S^n \dot\cup S^n$. We write $S^n_1$ for the first $S^n$ and $S^n_2$ for the second $S^n$. Then $\la_2(S^n \dot\cup S^n,\ti g)$ is the minimum of $\la_2(S^n_1,\ti g)$, $\la_2(S^n_2,\ti g)$ and $\max\{\la_1(S^n_1,\ti g),\la_1(S^n_2,\ti g) \}$. It follows from Corollary~\ref{muS} that $$ \la_2(S^n_1,\ti g)\,\Vol(S^n\dot\cup S^n,\ti g)^{2/n}\geq \la_2(S^n_1,\ti g)\, \Vol(S^n_1,\ti g)^{2/n}\geq 2^{2/n }\mu_1(\mS^n),$$ and obviously we have the same for $\la_2(S^n_2,\ti g)$. Summing $$\la_1(S^n_i,\ti g)^{n/2}\,\Vol(S^n_i,\ti g)\geq \mu_1(\mS^n)^{n/2}$$ over $i\in\{1,2\}$, we obtain the remaining inequality $$\max\{\la_1(S^n_1,\ti g),\la_1(S^n_2,\ti g) \}\, \Vol(S^n\dot\cup S^n,\ti g)^{2/n}\geq 2^{2/n} \mu_1(\mS^n),$$ and the proposition is proved. \qed Question~1 is solved by the following result. \begin{prop} \label{notattained} If $M$ is connected, then $\mu_2(M,g)$ cannot be attained by a metric. \end{prop} Indeed, otherwise by Theorem~\ref{theo.limit}, we would have that $u=|w|$ and hence $u$ cannot be positive. Theorem~\ref{attain} and the following result answer Question~2. \begin{prop} \label{notattainedS} The invariant $\mu_2(\mS^n)$ is not attained by a generalized metric. \end{prop} This proposition immediately follows from Proposition~\ref{lowbound}. \subsection{Some bounds of $\mu_2(M,g)$} At first, we give an upper bound for $\mu_2(M,g)$. \begin{theorem} \label{upbound} Let $(M,g)$ be an $n$-dimensional compact Riemannian manifold with $\mu_1(M,g) \geq 0$.
Then,
\begin{equation}\label{eq.upbound} \mu_2(M,g) \leq {( \mu_1(M,g)^{\frac{n}{2}}+ \mu_1(\mS^n)^{\frac{n}{2}})}^{\frac{2}{n}}. \end{equation}
This inequality is strict in the following cases:
\begin{enumerate}[\ \ $\bullet$] \item $\mu_1(M,g) > 0$, $(M,g)$ is not locally conformally flat and $n \geq 11$; \item $\mu_1(M,g)=0$, $(M,g)$ is not locally conformally flat and $n \geq 9$. \end{enumerate} \end{theorem}
\noindent {}From the solution of the Yamabe problem by Aubin and Schoen \cite{aubin:76,schoen:84} we know that if $(M,g)$ is not conformally equivalent to $\mS^n$, then $\mu_1(M,g)<\mu_1(\mS^n)$. Hence, \eref{eq.upbound} implies the following corollary.
\begin{cor} Let $(M,g)$ be an $n$-dimensional compact connected Riemannian manifold whose Yamabe invariant is non-negative. Then $\mu_2(M,g) \leq \mu_2(\mS^n)$ with equality if and only if $(M,g)$ is conformally diffeomorphic to the sphere $\mS^n$. \end{cor}
These inequalities are very important, because they can be used to avoid concentration of minimizing sequences for $\mu_2$, in a way which is similar to the resolution of the Yamabe problem. The following proposition gives a lower bound for~$\mu_2$.
\begin{prop} \label{lowbound} Let $(M,g)$ be an $n$-dimensional compact Riemannian manifold whose Yamabe invariant is non-negative. Then,
\begin{eqnarray} \label{lowerbound} \mu_2(M,g)\geq 2^{2 \over n} \mu_1(M,g). \end{eqnarray}
Moreover, if $M$ is connected and if $\mu_2(M,g)$ is attained by a generalized metric, then this inequality is strict. \end{prop}
When $\mu_1(M,g) = 0 $, inequality (\ref{lowerbound}) is trivial. If $\mu_1(M,g) >0$, after a possible change of metric in the conformal class, we can assume that the scalar curvature is positive. The proof of inequality (\ref{lowerbound}) is exactly the same as the one of Theorem~\ref{sobolev}. We just have to replace $B_0(M,g)$ by $S_g$. Moreover, if $M$ were connected and if $\mu_2(M,g)$ were attained by a generalized metric, then inequality (\ref{usingS}) would be an equality and we would have that $w_+$ or $w_-$ is a function for which equality in the Sobolev inequality $(S)$ is attained. By the maximum principle, we would get that $w_+$ or $w_-$ is positive on $M$, which is impossible.
\subsubsection{Proof of Theorem \ref{upbound}}
\begin{lem} \label{estim} For any $\al>2$, there is a $C >0$ such that $$|a+b|^{\al} \leq a^{\al} + b^{\al} + C ( a^{\al -1}b + a b^{\al -1})$$ for all $a,b >0$. \end{lem}
{\bf Proof of Lemma \ref{estim}.} Without loss of generality, we can assume that $a=1$. Then we set for $x > 0$, $$f(x) = \frac{|1+x|^{\al} - (1 + x^{\al})}{x^{\al-1} + x}.$$ One checks that $\lim_{x \to 0} f(x) = \lim_{x \to +\infty } f(x)= \al$. Since $f$ is continuous, $f$ is bounded by a constant $C$ on $\mR_+$. Clearly, this constant is the desired constant $C$ in the inequality of Lemma \ref{estim}.\\
{\bf Proof of Theorem \ref{upbound}.} For $u \in H_1^2(M)\setminus \{0\}$ let $$Y(u) = \frac{ \int_M c_n |\nabla u|^2 +S_g u^2 \,dv_g}{{\left( \int_M |u|^N \,dv_g \right)}^{\frac{2}{N}}}$$ be the Yamabe functional of $M$. The solution of the Yamabe problem provides the existence of a smooth positive minimizer $v$ of~$Y$, and we can assume
\begin{eqnarray} \label{intv} \int_M v^N \,dv_g = 1. \end{eqnarray}
Then, $v$ satisfies the Yamabe equation
\begin{eqnarray} \label{yamequ} L_g v = \mu_1(M,g) v^{N-1}. \end{eqnarray}
Let $x_0 \in M$ be fixed and choose a system $(x_1,\cdots, x_n)$ of normal coordinates at $x_0$. We set $r=dist_g(x_0,\cdot)$.
If $\delta >0$ is a small fixed number, let $\eta$ be a smooth cut-off function such that $0 \leq \eta \leq 1$, $\eta(B(x_0, \delta) ) = \{ 1 \}$ and $\eta( M \setminus B(x_0, 2 \delta)= \{ 0 \}$, $|\nabla \eta|\leq 2/\delta$. Then, we can define for all $\ep >0$ $$v_{\ep} = C_{\ep} \eta (\ep + r^2)^{\frac{2-n}{n}}.$$ where $C_{\ep}>0$ is such that \begin{eqnarray} \label{intve} \int_M v_{\ep}^N \,dv_g = 1. \end{eqnarray} By standard computations (see \cite{aubin:76}) \begin{eqnarray} \label{testsph} \lim_{\ep \to 0} Y(v_{\ep}) = \mu_1(\mS^n). \end{eqnarray} If $(M,g)$ is not locally conformally flat, if $g$ is well chosen in the conformal class and if $x_0$ is well chosen in $M$, it was also proven in \cite{aubin:76} that there exists a constant $C(M)>0$ such that \begin{eqnarray} \label{test} Y(v_{\ep})=\left|\; \begin{matrix} \mu_1(\mS^n) - C(M) \ep^2 + o(\ep^2)\hfill & \hbox{ if } n > 6 \\ \mu_1(\mS^n) - C(M) \ep^2 |\ln(\ep)| + o(\ep^2 |\ln(\ep)|)\hfill & \hbox{ if } n = 6. \end{matrix} \right. \end{eqnarray} Moreover, it follows from \cite{aubin:76} that $$ a \ep^{\frac{n-2}{4} } \leq C_{\ep} \leq b \ep^{\frac{n-2}{4} }$$ where $a,b >0$ are independent of $\ep$. If $p \geq 1$, standard computations made in \cite{aubin:76} show that there exist some constants $c,C>0$ independent of $\ep$ such that \begin{eqnarray} \label{normp} c \al_{p,\ep} \leq \int_M v_{\ep}^p \,dv_g \leq C \al_{p,\ep } \end{eqnarray} where \[ \al_{p,\ep} = \left| \begin{array}{lll} \ep^{\frac{2n - (n-2) p}{4}} & \hbox{if} & p> \frac{n}{n-2};\\ |\ln(\ep)| \ep^{\frac{n}{4}} & \hbox{if} & p= \frac{n}{n-2};\\ \ep^{ \frac{(n-2) p}{4}} & \hbox{if}& p< \frac{n}{n-2} \end{array} \right. \] Since the large inequality if easier to obtain, we only prove strict inequality. Assume first that $\mu_1(M,g)>0$, that $(M,g)$ is not locally conformally flat and that $n \geq 11$. We set, $$u_{\ep} = Y(v_{\ep})^{\frac{1}{N-2}} v_{\ep} + \mu_1(M,g)^{\frac{1}{N-2}} v.$$ Let us derive estimates for $F\big(u_{\ep},\la v_{\ep}+\mu v)\big)$. Let $(\lambda,\mu) \in \mR^2 \setminus \{(0,0)\}$. Using (\ref{intv}), (\ref{intve}) and the equation (\ref{yamequ}) of $v$, we get that \begin{eqnarray*} F(u_{\ep}, \la v_{\ep} + \mu v) & = & \frac{ \la^2 \int_M v_{\ep} L_g(v_{\ep})\,dv_g + \mu^2 \int_M v L_g(v) \,dv_g + 2 \la \mu \int_M v_{\ep} L_g v \,dv_g } {\la^2 \int_M |u_{\ep}|^{N-2}(\la v_{\ep} + \mu v)^2 \,dv_g} {\left( \int_M u_{\ep}^N \,dv_g \right)}^{\frac{2}{n}}. \end{eqnarray*} \begin{eqnarray} \label{testfunc} = \frac{ \la^2 Y(v_{\ep}) + \mu^2 \mu_1(M,g) + 2 \la \mu \mu_1(M,g) \int_M |v|^{N-2} v v_{\ep} \,dv_g}{ \la^2 \int_M |u_{\ep}|^{N-2} v_{\ep}^2 \,dv_g + \mu^2 \int_M |u_{\ep}|^{N-2} v^2 \,dv_g + 2\la \mu \int_M |u_{\ep}|^{N-2} v v_{\ep} \,dv_g} {\left( \int_M u_{\ep}^N \,dv_g \right)}^{\frac{2}{n}}. \end{eqnarray} Using the definition of $u_{\ep} $ \begin{eqnarray*} \la^2 \int_M |u_{\ep}|^{N-2} v_{\ep}^2 \,dv_g + \mu^2 \int_M |u_{\ep}|^{N-2} v^2 \,dv_g + 2\la \mu \int_M |u_{\ep}|^{N-2} v v_{\ep} \,dv_g \end{eqnarray*} \begin{eqnarray*} \;&\ \geq \la^2 Y(v_{\ep}) \int_M |v_{\ep}|^{N} \,dv_g + \mu^2 \mu_1(M,g) \int_M |v|^{N} \,dv_g + 2\la \mu \int_M |u_{\ep}|^{N-2} v v_{\ep} \,dv_g \\ \; & = \la^2 Y(v_{\ep})+ \mu^2 \mu_1(M,g) + 2\la \mu \int_M |u_{\ep}|^{N-2} v v_{\ep} \,dv_g. 
\end{eqnarray*} If $\la \mu \geq 0$, we have $$ 2\la \mu \int_M |u_{\ep}|^{N-2} v v_{\ep} \,dv_g \geq 2\la \mu \mu_1(M,g) \int_M v^{N-2} v_{\ep} \,dv_g.$$ This implies that $$ \frac{ \la^2 Y(v_{\ep}) + \mu^2 \mu_1(M,g) + 2 \la \mu \mu_1(M,g) \int_M |v|^{N-2} v v_{\ep} \,dv_g}{ \la^2 \int_M |u_{\ep}|^{N-2} v_{\ep}^2 \,dv_g + \mu^2 \int_M |u_{\ep}|^{N-2} v^2 \,dv_g + 2\la \mu \int_M |u_{\ep}|^{N-2} v v_{\ep} \,dv_g} \leq 1.$$ If $\la \mu < 0$ then, we write that since $N-2 \in ]0,1[$, $$ |u_{\ep}|^{N-2} \leq Y( v_{\ep}) v_{\ep}^{N-2} + \mu_1(M,g) v^{N-2}.$$ We obtain that \begin{eqnarray*} \la^2 \int_M |u_{\ep}|^{N-2} v_{\ep}^2 \,dv_g + \mu^2 \int_M |u_{\ep}|^{N-2} v^2 \,dv_g + 2\la \mu \int_M |u_{\ep}|^{N-2} v v_{\ep} \,dv_g \\ \geq \la^2 Y(v_{\ep})+ \mu^2 \mu_1(M,g) - C \left(\int_M v^{N-1} v_{\ep} \,dv_g + \int_M v_{\ep}^{N-1} v \,dv_g \right). \end{eqnarray*} where $C >0$ is as in in the following a positive real number independent of $\ep$. Together with (\ref{normp}), we get that $$\la^2 \int_M |u_{\ep}|^{N-2} v_{\ep}^2 \,dv_g + \mu^2 \int_M |u_{\ep}|^{N-2} v^2 \,dv_g + 2\la \mu \int_M |u_{\ep}|^{N-2} v v_{\ep} \,dv_g \geq \la^2 Y(v_{\ep})+ \mu^2 \mu_1(M,g) + O(\ep^{\frac{n-2}{4}}).$$ It follows that \begin{eqnarray} \label{r1} \sup_{(\lambda,\mu) \in \mR^2 \setminus \{(0,0)\}} \frac{ \la^2 Y(v_{\ep}) + \mu^2 \mu_1(M,g) + 2 \la \mu \mu_1(M,g) \int_M |v|^{N-2} v v_{\ep} \,dv_g}{ \la^2 \int_M |u_{\ep}|^{N-2} v_{\ep}^2 \,dv_g + \mu^2 \int_M |u_{\ep}|^{N-2} v^2 \,dv_g + 2\la \mu \int_M |u_{\ep}|^{N-2} v v_{\ep} \,dv_g} \leq 1 +O(\ep^{\frac{n-2}{4}}). \end{eqnarray} By Lemma \ref{estim}, \begin{eqnarray*} \int_M u_{\ep}^N \,dv_g & \leq & {(Y(u_{\ep}))}^{\frac{n}{2}} \int_M v_{\ep}^N \,dv_g + \mu_1(M,g)^{\frac{n}{2}} \int_M v^N \,dv_g \\ & & + C \left(\int_M v^{N-1} v_{\ep} \,dv_g + \int_M v_{\ep}^{N-1} v \,dv_g \right). \end{eqnarray*} By (\ref{intv}), (\ref{intve}), (\ref{test}) and (\ref{normp}), we obtain \begin{eqnarray} \label{r2} {\left( \int_M u_{\ep}^N \,dv_g \right)}^{\frac{2}{n}} \leq {( \mu_1(M,g)^{\frac{n}{2}}+ \mu_1(\mS^n)^{\frac{n}{2}})}^{\frac{2}{n}} -C \ep^2 + O(\ep^{\frac{n-2}{4}}) + o(\ep^2). \end{eqnarray} Since $\frac{n-2}{4} > 2$, we get from (\ref{r1}) and (\ref{r2}) that for $\ep$ small enough \begin{eqnarray*} \mu_2(M,g) &\leq & \sup_{(\lambda,\mu) \in \mR^2 \setminus \{(0,0)\}} F(u_{\ep}, \la v_{\ep} + \mu v) \\ &\leq & {( \mu_1(M,g)^{\frac{n}{2}}+ \mu_1(\mS^n)^{\frac{n}{2}})}^{\frac{2}{n}} -C \ep^2 + O(\ep^{\frac{n-2}{4}}) + o(\ep^2) < {( \mu_1(M,g)^{\frac{n}{2}}+ \mu_1(\mS^n)^{\frac{n}{2}})}^{\frac{2}{n}}. \end{eqnarray*} This proves Theorem \ref{upbound} if $\mu_1(M,g) >0$. Now, we assume that $\mu_1(M,g)=0$, that $(M,g)$ is not locally conformally flat and that $n \geq 9$. For more simplicity, We set $u_{\ep} = v_{\ep}$ instead of $ u_{\ep} = Y(v_{\ep})^{\frac{n-2}{4}} v_{\ep}$ as above. We proceed exactly as in the case $\mu_1(M,g)>0$. We obtain that for $(\lambda,\mu) \in \mR^2 \setminus \{(0,0)\}$ \begin{eqnarray*} F(u_{\ep}, \la v_{\ep} + \mu v) & = & \frac{ \la^2 Y(v_{\ep})}{ \la^2 \int_M v_{\ep}^N \,dv_g + \mu^2 \int_M |v_{\ep}|^{N-2} v^2 \,dv_g + 2\la \mu \int_M |v_{\ep}|^{N-1} v \,dv_g} {\left( \int_M v_{\ep}^N \,dv_g \right)}^{\frac{2}{n}} \\ & =& \frac{ \la^2 Y(v_{\ep}) }{ \la^2 + \mu^2 \int_M |v_{\ep}|^{N-2} v^2 \,dv_g + 2\la \mu \int_M |v_{\ep}|^{N-1} v \,dv_g}. 
\end{eqnarray*} Let $\la_{\ep}, \mu_{\ep}$ be such that $\la_{\ep}^2+ \mu_{\ep}^2 = 1$ and such that $$ F(u_{\ep}, \la_{\ep} v_{\ep} + \mu_{\ep} v) = \sup_{(\lambda,\mu) \in \mR^2 \setminus \{(0,0)\}} (u_{\ep}, \la v_{\ep} + \mu v).$$ If $\la_{\ep} = 0$, we obtain that $F(u_{\ep}, \la_{\ep} v_{\ep} + \mu_{\ep} v) = 0$ and the theorem would be proven. Then we assume that $\la_{\ep} \not= 0 $ and we write that $$F(u_{\ep}, \la_{\ep} v_{\ep} + \mu_{\ep} v) = \frac{Y(v_{\ep}) }{ 1 + 2 x_{\ep} b_{\ep} + x_{\ep}^2 a_{\ep}}$$ where $x_{\ep} = \frac{\mu_{\ep}}{\la_{\ep}}$ and where, using (\ref{normp}) $$b_{\ep} = \int_M v_{\ep}^{N-1} v \,dv_g \sim_{\ep \to 0} C \ep^{\frac{n-2}{4}}$$ and $$a_{\ep} = \int_M v_{\ep}^{N-2} v^2 \,dv_g \sim_{\ep \to 0} C \ep.$$ Maximizing this expression in $x_{\ep}$ and using (\ref{test}), we get that $$F(u_{\ep}, \la_{\ep} v_{\ep} + \mu_{\ep} v) \leq \frac{\mu_1(\mS^n) - C(M) \ep^2+ o(\ep^2)}{1 - \frac{b_{\ep}^2 }{a_{\ep}}} = \frac{\mu_1(\mS^n) - C(M) \ep^2+ o(\ep^2)}{1 - O(\ep^{\frac{n-4}{2}})}.$$ Since $n \geq 9$, $\frac{n-4}{2} > 2$ and we get that for $\ep$ small, $$F(u_{\ep}, \la_{\ep} v_{\ep} + \mu_{\ep} v)< \mu_1(\mS^n).$$ This proves Theorem \ref{upbound}. \section{Existence of a minimum of $\mu_2(M,g)$} \label{sectatt} The aim of this section is to prove Theorem~\ref{attain}. \setcounter{step}{0} We study a sequence of metrics $(g_m)_m= (u_m ^{N-2} g)_m$ ($u_m >0$, $u_m \in C^{\infty}(M)$) which minimizes the infimum in the definition of $\mu_2(M,g)$ i.e. a sequence of metrics such that $$\lim_m \lambda_2(g_m) {\Vol(M,g_m)}^{\frac{2}{n}}=\mu_2(M,g).$$ Without loss of generality, we may assume that $\Vol(M,g_m)= 1$ i.e. that \begin{eqnarray} \label{u_bound} \int_M u_m^N \,dv_g =1. \end{eqnarray} In particular, the sequence $(u_m)_m$ is bounded in $L^N(M)$ and there exists $u \in L^N(M)$, $u \geq 0$ such that $u_m \weakto u$ weakly in $L^N(M)$. We are going to prove that $u \not=0 $ and that the generalized metric $u^{N-2} g$ minimizes $\mu_2(M,g)$. Proposition \ref{la1la2} implies the existence of $v_m, w_m \in C^{\infty}(M)$, $v_m \geq 0$ such that \begin{eqnarray} \label{eqvm} L_g v_m = \la_{1,m} u_m^{N-2} v_m \end{eqnarray} and \begin{eqnarray} \label{eqwm} L_g w_m = \la_{2,m} u_m^{N-2} w_m. \end{eqnarray} where $\la_{i,m} = \la_i (g_m)$ and such that \begin{eqnarray} \label{vw_bound} \int_M u_m^{N-2} v_m^2 \,dv_g = \int_M u_m^{N-2} w_m^2 \,dv_g = 1 \; \hbox{and} \; \int_M u_m^{N-2} v_m w_m \,dv_g = 0 \end{eqnarray} With these notations and by (\ref{u_bound}), $$\lim_{m} \la_{2,m} = \mu_2(M,g).$$ Moreover, by the maximum principle, $v_m >0$. If $\la_{1,m} = \la_{2,m}$ then $w_m$ would be a minimizer of the functional associated to $\la_{1,m}$ and by the maximum principle, we would get that $w_m >0$. This contradicts (\ref{vw_bound}). Hence, $\la_{1,m} < \la_{2,m}$ for all $m$. The sequences $(v_m)_m$ and $(w_m)_m$ are bounded in $H_1^2(M)$. We can find $v,w \in H_1^2(M)$, $v \geq 0$ such that $v_m$ (resp. $w_m$) tends to $v$ (resp. $w$) weakly in $ H_1^2(M)$. Together with the weak convergence of the $(u_m)_m$ towards $u$ in $L^N(M)$, we get that in the sense of distributions \begin{eqnarray} \label{eqvlim} L_g v = \widehat\mu_1 u^{N-2} v \end{eqnarray} and \begin{eqnarray} \label{eqwlim} L_g w = \mu_2(M,g)\, u^{N-2} w. \end{eqnarray} where $\widehat\mu_1 = \lim_m \la_{1,m} \leq \mu_2(M,g)$. 
{}From what we know until now, it is not clear whether $v$ and $w$ are linearly independent, and even if they are, their restrictions to the set $M\setminus u^{-1}(0)$ might be linearly dependent. \noindent It will take a certain effort to prove the following claim. \begin{claim} The functions $u^{N-2\over 2} v$ and $u^{N-2\over 2} w$ are linearly independent. \end{claim} \noindent Once the claim is proved, we have $\mathop{\rm span}(v,w)\in \Gr2u{H_1^2(M)}$, and this implies that $$\sup_{(\la,\mu)\neq (0,0)} F(u, \la v + \mu w) = \mu_2(M,g).$$ Hence, by equations (\ref{eqvlim}) and (\ref{eqwlim}), the generalized metric $u^{N-2} g$ minimizes $\mu_2(M,g)$, i.e. Theorem~\ref{attain} is proved. The first step in the proof of the claim is an estimate that avoids concentration of $w_m$ and $v_m$. \begin{step} Let $x \in M$ and $\ep \in \mathopen]0, \frac{N-2}{2}\mathclose[$. We choose a cut-off function $\eta \in C^{\infty}$ such that $0 \leq \eta \leq 1$, $\eta(B_x(\de)) \equiv 1$ (where $\delta>0$ is a small number) and $\eta(M \setminus B_x(2\de) ) \equiv 0$, $|\na \eta|\leq 2/\delta$. We define $W_m = \eta |w_m|^{\ep} w_m$. Then, we have \begin{eqnarray} \label{inegstep1} {\left( \int_M |W_m|^N \,dv_g \right)}^{\frac{2}{N}} \leq \mu_2(M,g) (1 - \al_{\ep})^{-1} \mu_1(\mS^n)^{-1} {\left( \int_{B_x(2 \de)} u_m^N \right)}^{\frac{2}{n}} {\left( \int_M |W_m|^N \,dv_g \right)}^{\frac{2}{N}} +C_{\de}. \end{eqnarray} where $C_{\de}$ is a constant that may depend on $\de$ but not on $\ep$ and where $\lim_{\ep \to 0} \al_{\ep}=0$. Moreover, the same conclusion is true with $V_m = \eta |v_m|^{\ep} v_m$ instead of $W_m$. \end{step} The proof uses classical methods. We will explain the proof for $W_m$. The proof for $V_m$ uses exactly the same arguments. At first, we differentiate the definition of $W$ and obtain \begin{eqnarray} |\nabla W_m|^2 & \geq & \Bigl|\nabla( |w_m|^{\ep} w_m )\Bigr|^2 \eta^2 - \bigl(2 |\nabla \eta|\, |w_m|^{1+\ep}\bigr)\left( \Bigl|\nabla (|w_m|^{\ep} w_m) \Bigr| \eta\right) + |\nabla \eta|^2 \, |w_m|^{2+2\ep} \nonumber\\ & \geq & \Bigl|\nabla \bigl(|w_m|^{\ep} w_m\bigr) \Bigr|^2 \eta^2 - \left( \frac{1}{2} \Bigl|\nabla ( |w_m|^{\ep} w_m) |^2 \eta^2 + 2 |\nabla \eta|^2 |w_m|^{2+2\ep} \right) + |\nabla \eta|^2 |w_m|^{2+2\ep}\nonumber \end{eqnarray} This leads to \begin{equation} \eta^2 |\nabla (|w_m|^{\ep} w_m )|^2 \leq 2 |\nabla W_m|^2 + 2 |\nabla \eta|^2 |w_m|^{2+2\ep}. \label{ineq.etana} \end{equation} Now, we want to derive lower bound for \begin{eqnarray} \label{W1} ( \nabla (\eta^2 |w_m|^{2 \ep} w_m), \nabla w_m ) = | \nabla W_m|^2 - \bigl|\nabla (\eta |w_m|^{\ep })\bigr|^2\, |w_m|^2 \end{eqnarray} For the second summand on the right hand side in \eref{W1} we have the bound \begin{eqnarray*} |\nabla (\eta |w_m|^{\ep})|^2 |w_m|^2 & = &|\nabla \eta|^2 |w_m|^{2+2 \ep}+ 2 (\nabla \eta, \nabla |w_m|^{\ep}) \,\eta |w_m|^{2+\ep} + \eta^2 \Bigl|\nabla (|w_m|^{\ep})\Bigr|^2 w_m^2 \nonumber \\ & \leq & 2 |\nabla \eta|^2 |w_m|^{2+2 \ep} + 2 \eta^2\Bigl|\nabla (|w_m|^{\ep})\Bigr|^2 w_m^2\\ & \leq & 2 |\nabla \eta|^2 |w_m|^{2+ 2 \ep} +\frac{2 \eta^2\ep^2}{(1+\ep)^2} \Bigl|\nabla (|w_m|^{\ep} w_m) \Bigr|^2\\ & \leq & (2 +\frac{4 \ep^2}{(1+\ep)^2}) |\nabla \eta|^2 |w_m|^{2+ 2 \ep}+ \frac{4 \ep^2}{(1+\ep)^2} |\nabla W_m|^2. \end{eqnarray*} Here, we used \eref{ineq.etana} in the last line. Coming back to (\ref{W1}), we obtain that $$ ( \nabla (\eta^2 |w_m|^{2 \ep} w_m), \nabla w_m ) \geq (1-\al_{\ep}) |\nabla W_m|^2 - C |\nabla \eta|^2 |w_m|^{2+ 2 \ep}. 
$$ where $\al_{\ep} \to 0$ when $\ep \to 0$ and where $C>0$ is a constant independent of $\ep$. This relations shows that \begin{eqnarray*} \int_M \eta^2 |w_m|^{2\ep} w_m L_g(w_m) \,dv_g \geq (1-\al_{\ep})\int_M c_n |\nabla W_m|^2 \,dv_g -C \int_M |\nabla \eta|^2 |w_m|^{2+ 2 \ep} \,dv_g+ \min\Scal \int W_m^2\, dv_g. \end{eqnarray*} Now, since $\ep < \frac{N-2}{2}$,the sequence $(w_m)_m$ is bounded in $L^{2+2\ep}(M)$ (and hence the sequence $(W_m)_m$ is bounded in $L^2(M)$). As a consequence, there exists a constant $C_{\de}$ possibly depending on~$\de$ but not on~$\ep$, and such that \begin{eqnarray} \label{W4} \int_M \eta^2 |w_m|^{2\ep} w_m L_g(w_m) \,dv_g \geq (1-\al_{\ep})\int_M \left(c_n |\nabla W_m|^2 + B_0(M,g) W_m^2\right) \,dv_g - C_{\de}. \end{eqnarray} Using equation (\ref{eqwm}) in the left hand side of (\ref{W4}) and applying Sobolev inequality $(S)$ to the right hand side, we get that \begin{eqnarray*} \mu_2(M,g) \int_M u_m^{N-2} W_m^2 \,dv_g \geq (1 - \al_{\ep}) \mu_1(\mS^n) {\left( \int_M |W_m|^N \,dv_g \right)}^{\frac{2}{N}} - C_{\de}. \end{eqnarray*} By the H\"older inequality, we obtain \begin{eqnarray*} {\left( \int_M |W_m|^N \,dv_g \right)}^{\frac{2}{N}} \leq \mu_2(M,g) (1 - \al_{\ep})^{-1} \mu_1(\mS^n)^{-1} {\left( \int_{B_x(2 \de)} u_m^N \right)}^{\frac{2}{n}} {\left( \int_M |W_m|^N \,dv_g \right)}^{\frac{2}{N}} +C_{\de}. \end{eqnarray*} This ends the proof of the step. \begin{step} If $\mu_2(M,g) < \mu_1(\mS^n)$, then the generalized metric $u^{N-2} g$ minimizes $\mu_2(M,g)$. \end{step} {}From (\ref{inegstep1}), and the fact $\mu_2(M,g) < \mu_1(\mS^n)$, we get that for $\ep$ small enough, there exists a constant $K <1$ such that $${\left( \int_M |W_m|^N \,dv_g \right)}^{\frac{2}{N}} \leq K {\left( \int_{B_x(2 \de)} u_m^N \right)}^{\frac{2}{n}} {\left( \int_M |W_m|^N \,dv_g \right)}^{\frac{2}{N}} +C_{\de}.$$ Since $\int_{B_x(2 \de)} u_m^N \leq 1$, the sequence $\int_M |W_m|^N \,dv_g $ is bounded. This implies that $(w_m)_m$ is bounded in $L^{N+{\ep}}(B_x(\de))$ and since $x$ is arbitrary in $L^{N+\ep}(M)$. Weak convergences $w_m\to w$ in $H_1^2(M)$ implies strong convergence $w_m\to w$ in $L^{N-\ep}(M)$. The H\"older inequality yields then strong convergence in $L^N(M)$. After passing to a subsequence we obtain that $(w_m)_m$ tends to $w$ strongly in $L^N (M)$. This implies that we can pass to the limit in (\ref{vw_bound}) and hence that $u^{\frac{N-2}{2}} v$ and $u^{\frac{N-2}{2}} w$ are linearly independent. The claim follows in this case.\\ In the following, we assume that $\mu_1(M,g) >0$ and that $$\mu_2(M,g) < {\left( \mu_1(M,g)^{\frac{n}{2}} + \mu_1(\mS^n)^{\frac{n}{2}} \right)}^{\frac{2}{n}}.$$ We define the \emph{set of concentration points} $$\Om = \Biggl\{ x \in M \,\Big|\, \forall \de>0, \; \limsup_m \int_{B_x(\de)} u_m^N \,dv_g > \frac{1}{2} \Biggr\}.$$ Since $\int_M u_m^N \,dv_g = 1$, we can assume --- after passing to a subsequence --- that $\Om$ contains at most one point. We now prove that: \begin{step} \label{st2} Let $U$ be an open set such that $\overline{U} \subset M \setminus \Om$. Then, the sequence $(v_m)_m$ (and $(w_m)_m$ resp.) converges towards $v$ (and $w$ resp.) strongly in $H_1^2(\overline{U})$. \end{step} Without loss of generality, we prove the result only for $w$. 
For any $x \in M \setminus \Om$ we can find $\de>0$ with $$\limsup_m \int_{B_x(2 \de)} u_m^N \,dv_g\leq{1\over 2}.$$ Using $\mu_2(M,g) < {(\mu_1(M,g)^{\frac{n}{2}} + \mu_1(\mS^n)^{\frac{n}{2}})}^{\frac{2}{n}} \leq 2^{\frac{2}{n}} \mu_1(\mS^n)$ we obtain for a small $\ep>0$ $$ \mu_2(M,g) (1 - \al_{\ep})^{-1} \mu_1(\mS^n)^{-1} {\left( \int_{B_x(2 \de)} u_m^N \right)}^{\frac{2}{n}} \leq K <1$$ for almost all $m$. Together with inequality (\ref{inegstep1}), this proves that $\int_M |W_m|^N \,dv_g $ is bounded. This implies that $(w_m)_m$ is bounded in $L^{N+{\ep}}(B_x(\de))$. As in last step, this proves that up to a subsequence, $(w_m)_m$ tends to $w$ strongly in $L^N(U)$. Using equation (\ref{eqwm}) and (\ref{eqwlim}), we easily obtain that $$\lim_m \int_U |\nabla w_m|^2 dv_g =\int_U |\nabla w|^2 dv_g.$$ Together with the weak convergence of $(w_m)_m$ to $w$, this proves the step.\\ Now, we set for all $m$, $$S_m= \{ \lambda v_m + \mu w_m | \la^2+\mu^2 = 1 \} \; \hbox{ and } \; S= \{ \lambda v + \mu w | \la^2+\mu^2 = 1 \}.$$ \begin{step} \label{st1} There exists a sequence $(\overline{w}_m)_m$ ($\overline{w}_m \in S_m$) and $\overline{w} \in S$ such that $\overline{w}_m$ tends to $\overline{w}$ strongly in $H_1^2(M)$. \end{step} By theorem \ref{sobolev}, there exists $\la_m,\mu_m$ such that $\la_m^2+\mu_m^2= 1$ and such that \begin{align} 2^{2/n} \mu_1(\mS^n) &\int_M u_m^{N-2} {(\la_m(v_m-v) + \mu_m(w_m - w))}^2 \,dv_g\nonumber \\ \leq &\int_M c_n |\nabla(\la_m(v_m-v) + \mu_m(w_m - w)) |^2 \,dv_g \label{sob} \\ {}+ & \int_M B_0(M,g)(\la_m(v_m-v) + \mu_m(w_m - w))^2 \,dv_g \nonumber \end{align} Up to a subsequence, there exists $\la,\mu$ such that $\la^2+\mu^2=1$ and such that $\lim_m \la_m = \la$ and $\lim_m \mu_m = \mu$. We set $\overline{w}_m= \la_m v_m + \mu_m w_m \in S_m$ and $\overline{w}= \la v + \mu w$. Then, $\overline{w}_m$ tends to $\overline{w}$ weakly in $H_1^2(M)$. A first remark is that by strong convergence in $L^2(M)$ \begin{eqnarray} \label{sob1} \lim_m \int_M (\la_m(v_m-v) + \mu_m(w_m - w))^2 \,dv_g = 0. \end{eqnarray} Using the weak convergence of $\overline{w}_n$ to $\overline{w}$ in $H_1^2(M)$ and the weak convergence of $u_m$ to $u$ in $L^N(M)$, it is easy to compute that \begin{eqnarray} \label{sob2} \int_M u_m^{N-2} {(\la_m(v_m-v) + \mu_m(w_m - w))}^2 \,dv_g = \int_M u_m^{N-2} \overline{w}_m^2 \,dv_g - \int_M u^{N-2} \overline{w}^2 \,dv_g + o(1) \end{eqnarray} and that \begin{eqnarray*} \int_M c_n |\nabla(\la_m(v_m-v) + \mu_m(w_m - w)) |^2 \,dv_g & = & \la^2 \left( \int_M c_n |\nabla v_m |^2 \,dv_g - \int_M c_n |\nabla v |^2 \,dv_g \right)\\ & + & \mu^2 \left( \int_M c_n |\nabla w_m |^2 \,dv_g - \int_M c_n |\nabla w |^2 \,dv_g \right)\\ &+ & 2 \la \mu \left( \int_M c_n (\nabla v_m, \nabla w_m) \,dv_g - \int_M c_n (\nabla v, \nabla w) \,dv_g \right) +o(1). \end{eqnarray*} Using equations (\ref{eqvm}), (\ref{eqwm}), (\ref{eqvlim}) and (\ref{eqwlim}), we get that \begin{eqnarray*} \int_M c_n |\nabla(\la_m(v_m-v) + \mu_m(w_m - w)) |^2 \,dv_g & = & \la^2\,\widehat\mu_1 \left( \int_M u_m^{N-2} v_m^2 \,dv_g - \int_M u^{N-2} v^2) \,dv_g \right)\\ & + & \mu^2\, \mu_2(M,g) \left( \int_M u_m^{N-2} w_m^2 \,dv_g - \int_M u^{N-2} w^2) \,dv_g \right) \\ & + & 2 \la \mu\, \mu_2(M,g) \left( \int_M u_m^{N-2} v_ m w_m \,dv_g - \int_M u^{N-2} v w \,dv_g \right) + o(1). 
\end{eqnarray*} Since $\widehat\mu_1 \leq \mu_2(M,g)$ and since, by weak convergence $$\liminf_m \int_M u_m^{N-2} v_m^2 \,dv_g - \int_M u^{N-2} v^2) \,dv_g \geq 0,$$ we get that \begin{eqnarray*} \int_M c_n |\nabla(\la_m(v_m-v) + \mu_m(w_m - w)) |^2 \,dv_g & \leq & \la^2\,\mu_2(M,g) \left( \int_M u_m^{N-2} v_m^2 \,dv_g - \int_M u^{N-2} v^2 \,dv_g \right)\\ & + & \mu^2\,\mu_2(M,g) \left( \int_M u_m^{N-2} w_m^2 \,dv_g - \int_M u^{N-2} w^2 \,dv_g \right) \\ & +& 2 \la \mu\, \mu_2(M,g) \left( \int_M u_m^{N-2} v_ m w_m \,dv_g - \int_M u^{N-2} v w \,dv_g \right), \end{eqnarray*} and hence, \begin{eqnarray} \label{sob3} \int_M c_n |\nabla(\la_m(v_m-v) + \mu_m(w_m - w)) |^2 \,dv_g \leq \mu_2(M,g) \left( \int_M u_m^{N-2} \overline{w}_m^2 \,dv_g - \int_M u^{N-2} \overline{w}^2 \,dv_g \right) +o(1). \end{eqnarray} Together with (\ref{sob}), (\ref{sob1}) and (\ref{sob2}), we obtain that \begin{eqnarray*} &&2^{2/n} \mu_1(\mS^n) \left( \int_M u_m^{N-2} \overline{w}_m^2 \,dv_g - \int_M u^{N-2} \overline{w}^2 \,dv_g \right) \\ &&\leq \mu_2(M,g) \left( \int_M u_m^{N-2} \overline{w}_m^2 \,dv_g - \int_M u^{N-2} \overline{w}^2 \,dv_g \right) +o(1). \end{eqnarray*} Since $\mu_2(M,g) < {(\mu_1(M,g)^{\frac{n}{2}} + \mu_1(\mS^n)^{\frac{n}{2}})}^{\frac{2}{n}} \leq 2^{\frac{2}{n}} \mu_1(\mS^n)$, we get that $$\left( \int_M u_m^{N-2} \overline{w}_m^2 \,dv_g - \int_M u^{N-2} \overline{w}^2 \,dv_g \right) \leq K_0 \left( \int_M u_m^{N-2} \overline{w}_m^2 \,dv_g - \int_M u^{N-2} \overline{w}^2 \,dv_g \right) +o(1)$$ where $K_0 <1$. This implies that \begin{eqnarray} \label{unot=0} 1= \lim_m \int_M u_m^{N-2} \overline{w}_m^2 \,dv_g = \int_M u^{N-2} \overline{w}^2 \,dv_g \end{eqnarray} and hence by (\ref{sob3}). $$\lim_m \int_M c_n |\nabla(\la_m(v_m-v) + \mu_m(w_m - w)) |^2 \,dv_g =0.$$ The step easily follows.\\ As a remark, (\ref{unot=0}) implies that $u^{\frac{N-2}{2}} \overline{w} \not\equiv 0$. Now, we set $\overline{v}_m = - \mu_m v_m + \la_m w_m$ and $\overline{v}= -\mu v + \la w$. We prove that \begin{step} \label{st3} There exists $x \in M$ such that $$\limsup_m \int_{B_x{\delta}} u_m^2 (\overline{v}_m- \overline{v})^2 \,dv_g = 1$$ for all $\delta>0$. \end{step} The sequence $(\overline{v}_m)_m$ tends to $\overline{v}$ weakly in $H_1^2(M)$. If $\Om=\emptyset$, then we know from Step \ref{st2} that $(\overline{v}_m)_m$ tends to $\overline{v}$ strongly in $H_1^2(M)$, which implies $\int u^{N-2}\bar v \bar w=0$. Hence, in the case $\Omega=\emptyset$, the functions $ u^{N-2\over 2}\bar v$ and $ u^{N-2\over 2}\bar w$ are linearly independent, and the claim follows. Hence, without loss of generality let $\Om = \{ x \}$ where $x$ is some point of $M$. We assume that the claim is false, i.e. $u^{\frac{N-2}{2}} v$ and $ u^{\frac{N-2}{2}}w$ are linearly dependent. As $u^{\frac{N-2}{2}} \overline{w}\not\equiv 0$, there exists $b\in \mR$ with $u^{\frac{N-2}{2}} \overline{v}= b u^{\frac{N-2}{2}} \overline{w}$. Hence, $$ 0 = \int_M u^{N-2} \overline{v}^2 \,dv_g + b^2\int_M u^{N-2} \overline{w}^2 \,dv_g - 2 b \int_M u^{N-2} \overline{v}\, \overline{w} \,dv_g.$$ By strong convergence of $(\overline{w}_m)_m$ to $\overline{w}$ in $H_1^2(M)$, weak convergence of $(\overline{v}_m)_m$ to $\overline{v}$ in $H_1^2(M)$ and weak convergence of $(u_m)_m$ to $u$ in $L^N(M)$, we have $ \int_M u^{N-2} \overline{w}^2 \,dv_g= 1 $ and $\int_M u^{N-2} \overline{v} \, \overline{w} \,dv_g = 0$. We obtain $\int_M u^{N-2} \overline{v}^2 \,dv_g + b^2 = 0$. As a consequence, $u^{\frac{N-2}{2}} \overline{v} \equiv 0$. Let now $\delta>0$. 
We write that \begin{eqnarray*} \int_{B_{x}(\de)} u_m^{N-2}( \overline{v}_m-\overline{v})^2 \,dv_g & = & \int_{B_{x}(\de)} u_m^{N-2}\overline{v}_m^2 \,dv_g\\ & = & 1 - \int_{M \setminus B_{x}(\de)} u_m^{N-2} \overline{v}_m^2 \,dv_g. \end{eqnarray*} By step \ref{st2}, $$\lim_m \int_{M \setminus B_{x}(\de)} u_m^{N-2} \overline{v}_m^2 \,dv_g= \int_{M \setminus B_{x}(\de)} u^{N-2} \overline{v}^2 \,dv_g =0.$$ This proves the step. \begin{step} \label{st4} Conclusion. \end{step} Let $\de >0$ be a small fixed number. In the following, $o(1)$ denotes a sequence of real numbers which tends to $0$, however we do not claim that the convergence is uniform in $\delta$. By step \ref{st3} and the H\"older inequality, \begin{eqnarray*} 1 & = & \int_{B_x(\de)} u_m^{N-2} (\overline{v}_m- \overline{v})^2 \,dv_g + o(1)\\ & \leq & {\left( \int_{B_x(\de)} u_m^N \,dv_g \right)}^{\frac{2}{n}} {\left( \int_M |\overline{v}_m -\overline{v}|^N \,dv_g \right)}^{\frac{2}{n}}+o(1). \end{eqnarray*} Applying Sobolev inequality $(S)$, we get that \begin{eqnarray*} 1 & \leq & {\left( \int_{B_x(\de)} u_m^N \,dv_g \right)}^{\frac{2}{n}} \mu_1(\mS^n)^{-1} \left( \int_M c_n |\nabla (\overline{v}_m - \overline{v}) |^2 \,dv_g +B_0(M,g) \int_M (\overline{v}_m - \overline{v})^2 \,dv_g \right) + o(1). \end{eqnarray*} By strong convergence of $(\overline{v}_m-\overline{v})_m$ to $0$ in $L^2(M)$, \begin{eqnarray*} 1& \leq & {\left( \int_{ B_x(\de)} u_m^N \,dv_g \right)}^{\frac{2}{n}} \mu_1(\mS^n)^{-1} \left( \int_M c_n |\nabla (\overline{v}_m - \overline{v}) |^2+ S_g (\overline{v}_m - \overline{v})^2 \,dv_g \right)+ o(1) \end{eqnarray*} Using equations (\ref{eqvm}), (\ref{eqwm}), (\ref{eqvlim}), (\ref{eqwlim}) and the fact that $\widehat\mu_1 \leq \mu_2(M,g)$, we get that \begin{eqnarray*} 1& \leq & {\left( \int_{ B_x(\de)} u_m^N \,dv_g \right)}^{\frac{2}{n}} \mu_1(\mS^n)^{-1} \mu_2(M,g) \int_M u_m^{N-2}( \overline{v}_m - \overline{v})^2 \,dv_g \\ & =& {\left( \int_{B_x(\de)} u_m^N \,dv_g \right)}^{\frac{2}{n}} \mu_1(\mS^n)^{-1} \mu_2(M,g). \end{eqnarray*} Since $\mu_2(M,g) < {(\mu_1(M,g)^{\frac{n}{2}} + \mu_1(\mS^n)^{\frac{n}{2}})}^{\frac{2}{n}}$, we obtain that \begin{eqnarray*} \int_{ B_x(\de)} u_m^N \,dv_g > \frac{\mu_1(\mS^n)^{\frac{n}{2}}} {\mu_1(M,g)^{\frac{n}{2}} + \mu_1(\mS^n)^{\frac{n}{2}}}. \end{eqnarray*} and since $\int_M u_m^N \,dv_g = 1$, \begin{eqnarray} \label{in_final} \int_{M \setminus B_x(\de)} u_m^N \,dv_g < \frac{\mu_1(M,g)^{\frac{n}{2}}} {\mu_1(M,g)^{\frac{n}{2}} + \mu_1(\mS^n)^{\frac{n}{2}}}. \end{eqnarray} Now, we write that by strong convergence of $(\overline{w}_m)_m$ in $H_1^2(M)$, \begin{eqnarray*} a_{\de} & = & \int_{B_x(\de)} u_m^{N-2} \overline{w}_m^2 \,dv_g \end{eqnarray*} \begin{eqnarray*} 1 -a_{\de} & = & \int_{M \setminus B_x(\de)} u_m^{N-2} \overline{w}_m^2 \,dv_g \end{eqnarray*} where $a_{\de}$ does not depend of $m$ and tends to $0$ when $\de$ tends to $0$. By H\"older inequality, \begin{eqnarray*} 1 -a_{\de} & \leq & {\left( \int_{M \setminus B_x(\de)} u_m^N \,dv_g \right)}^{\frac{2}{n}} {\left( \int_M \overline{w}^N \,dv_g \right)}^{\frac{2}{n}}. \end{eqnarray*} Since $\mu_1(M,g)$ is the minimum of Yamabe functional, we get that \begin{eqnarray*} 1 -a_{\de} & \leq & {\left( \int_{M \setminus B_x(\de)} u_m^N \,dv_g \right)}^{\frac{2}{n}} \mu_1(M,g)^{-1} \int_M \left(c_n |\nabla \overline{w}_m|^2 + S_g \overline{w}_m^2\right) \,dv_g . 
\end{eqnarray*} As we did for $\overline{v}$, we obtain \begin{eqnarray*} 1 -a_{\de} & \leq & {\left( \int_{M \setminus B_x(\de)} u_m^N \,dv_g \right)}^{\frac{2}{n}} \mu_1(M,g)^{-1} \mu_2(M,g) \underbrace{\int_M u_m^{N-2} \overline{w}_m^2 \,dv_g}_1 \end{eqnarray*} By (\ref{in_final}), in the limit $\de \to 0$, this gives $$\mu_2(M,g) \geq {(\mu_1(M,g)^{\frac{n}{2}} + \mu_1(\mS^n)^{\frac{n}{2}})}^{\frac{2}{n}}.$$ This is false by assumption. Hence, the claim is proved, and Theorem~\ref{attain} follows. \section{The invariant $\mu_k(M)$ for $k \geq 3$} A natural question is: Can we do the same work for $\mu_k(M)$ with $k \geq 3$? This problem is still open but seems to be hard. Let $(M,g)$ be a compact Riemannian manifold of dimension $n \geq 3$. Using the variational characterization of $\mu_k(M)$, one can check that $\mu_k(M) \leq k^{\frac{2}{n}} \mu_1(\mS^n)$. It is natural to conjecture that one has equality if $M$ is the round sphere i.e. that $\mu_k(\mS^n)= k^{\frac{2}{n}} \mu_1(\mS^n)$. However, the following result shows that is false: \begin{prop} \label{muk} Let $n \in \mN^*$. Then, for $n\geq 7$ $$\mu_{n+2}(\mS^n) < (n+2)^{\frac{2}{n}} \mu_1(\mS^n).$$ \end{prop} {\bf Proof:} Let us study $\mS^n$ with its natural embedding into $\mR^{n+1}$. We have $L_g(1)=n(n-1)$. Hence, $\la_1(\mS^n) \leq n(n-1)$. Let also $x_i$ ($i \in [1,\cdots, n+1]$) be the canonical coordinates on $\mR^{n+1}$. As one can check, $$L_g(x_i) = \frac{n(n-1)(n+2)}{n-2} x_i$$ and hence $\la_{n+2}(\mS^n) \leq \frac{n(n-1)(n+2)}{n-2}$. This shows that $$\mu_{n+2}(\mS^n) \leq \frac{n(n-1)(n+2)}{n-2} \om_{n}^{\frac{2}{n}}.$$ As one can check, for $n\geq 7$ $$\frac{n(n-1)(n+2)}{n-2} \om_{n}^{\frac{2}{n}} < (n+2)^{\frac{2}{n}} n (n-1) \om_{n}^{\frac{2}{n}}= (n+2)^{\frac{2}{n}} \mu_1(\mS^n).$$ This ends the proof of Proposition \ref{muk}. \section{The case of manifolds whose Yamabe invariant is negative} \label{negative} We let $(M,g)$ be a compact Riemannian manifold of dimension $n \geq 3$. Then, we have: \begin{prop} Let $k \in \mN^*$. Assume that $\mu_k(M,g) < 0$. Then, $\mu_k(M,g)= - \infty$. \end{prop} \noindent {\bf Proof:} After a possible change of metric in the conformal class, we can assume that $\la_k(g) <0$. This implies that we can find some smooth functions $v_1, \cdots,v_k$ satisfying $$L_g v_i = \la_i(g) v_i $$ for all $i \in \{1, \cdots,k \}$ and such that $$\int_M v_i v_j dv_g = 0$$ for all $i,j \in \{1, \cdots, k\}$, $i \not = j$. Let $v_\ep$ be defined as in the proof of Theorem \ref{upbound}. We define $u_{\ep} = v_{\ep} + \ep$ to obtain a positive function. We set $V= \{v_1, \cdots, v_k \}$. It is easy to check that, uniformly in $v \in V$ $$\lim_{\ep to 0} \int_M v_{\ep}^{N-2} v^2 dv_g =0.$$ Since $\la_i <0$, it is then easy to see that $\sup_{v \in V} F(v_{\ep},v) = -\infty$. Together with the variational characterization of $\mu_k(M,g)$, we get that $\mu_k(M,g) = - \infty$.\\ \noindent This result proves for example that if the Yamabe invariant of $(M,g)$ is negative, then $\mu_1(M,g) = - \infty$. This is the reason why we restricted in this article to the case of non-negative Yamabe invariant. Many of our results and proofs remain valid in the case $\mu_2(M) \geq 0$. However, if the Yamabe invariant of $(M,g)$ is non-positive, there are other ways to find nodal solutions of Yamabe equation. Indeed, Aubin's methods \cite{aubin:76} can be applied to avoid concentration phenomenom. See for example \cite{djadli.jourdain:02}, \cite{jourdain:99}, \cite{holcman:99} for such methods. 
Here, we present very briefly one new method in this case. We just sketch it since it is not the purpose of our paper to find solutions of Yamabe equation with Aubin's type methods. At first, for any metric $\tilde{g}$ conformal to $g$, we let $\la_1^+(\tilde{g})$ be the first \emph{positive} eigenvalue of Yamabe operator. We then define $\la^+ = \inf \la_1^+(\tilde{g}) \Vol(M,\tilde{g})^{\frac{2}{n}}$ where the infimum is taken over the conformal class of $g$. Then, proceeding in a way analogous to \cite{ammann.habil,ammann_p04}, one shows that $$0 < \la^+ = \inf \frac{{\left( \int_M |L_g u |^{\frac{2n}{n+2}}\, dv_g \right)}^{\frac{n+2}{n}}}{\int_M u L_gu\,dv_g}$$ where the infimum is taken over the smooth functions $u$ such that $$\int_M u L_gu\,dv_g>0.$$ Then, one shows using test functions that $\la_+ \leq \mu_1(\mS^n)$. If the inequality is strict, then we can find a minimizer for the functional above which is a solution of the Yamabe equation. If the Yamabe invariant is positive, this solution is a Yamabe metric and hence is positive. However, if the Yamabe invariant is non-positive, this solution has an alternating sign. \section*{A.\ \ Appendix: Proof of Lemma~\ref{regu}} Let $(M,g)$ be a compact Riemannian manifold of dimension $n \geq 3$ and let $v \in H_1^2(M)$, $v \not\equiv 0$ and $u \in L^{N}_+(M)$ be two functions which satisfy in the sense of distributions $$L_gv = u^{N-2} v. \eqno{(Eq)}$$ We define $v_+ = \sup(v,0)$. We let $q \in ]1, \frac{n}{n-2}]$ be a fixed number and $l>0$ be a large real number which will tend to $+\infty$. We let $\beta= 2q -1 $. We then define the following functions for $x \in \mR $: \[ G_l(x) = \left| \begin{array}{ccc} 0 & \hbox{ if } & x <0 \\ x^{\beta} & \hbox{ if } & x \in [0,l[\\ l^{q-1}(ql^{q-1} x - (q-1) l^q) & \hbox{ if } & x \geq l \end{array} \right. \] and \[ F_l(x) = \left| \begin{array}{ccc} 0 & \hbox{ if } & x <0 \\ x^q & \hbox{ if } & x \in [0,l[\\ ql^{q-1} x - (q-1) l^q & \hbox{ if } & x \geq l \end{array} \right. \] \noindent It is easy to check that for all $x \in \mR$, \begin{eqnarray} \label{i1} (F_l'(x))^2 \leq q G_l'(x), \end{eqnarray} \begin{eqnarray} \label{i2} (F_l(x))^2 \geq x G_l(x) \end{eqnarray} and \begin{eqnarray} \label{i3} x G'(x) \leq \beta G_l(x). \end{eqnarray} \noindent Since $F_l$ and $G_l$ are uniformly lipschitz continuous functions, $F_l(v_+)$ and $G_l(v_+)$ belong to $H_1^2(M)$. Now, let $x_0 \in M$ be any point of $M$. We denote by $\eta$ a $C^2$ non-negative function supported in $B_{x_0}(2 \delta)$ ($\delta>0$ being a small number to be fixed) such that $0 \leq \eta \leq 1$ and such that $\eta(B_{x_0}(\delta)) = \{ 1 \}$. Multiply equation $(Eq)$ by $\eta^2 G_l(v_+)$ and integrate over $M$. Since the supports of $v_+$ and $G_l(v_+)$ coincide, we get: \begin{eqnarray} \label{equ1} c_n \int_M (\nabla v_+, \nabla \eta^2 G_l(v_+) )dv_g + \int_M S_g v_+ \eta^2 G_l(v_+) dv_g = \int_M u^{N-2} v_+ \eta^2 G_l(v_+) dv_g. \end{eqnarray} \noindent Let us deal with the first term of the left hand side of (\ref{equ1}). In the following, $C$ will denote a positive constant depending possibly on $\eta, q, \beta, \delta$ but not on $l$. 
We have \begin{eqnarray*} \int_M (\nabla v_+, \nabla \eta^2 G_l(v_+) )dv_g & = & \int_M G_l(v_+) (\nabla v_+,\nabla \eta^2)dv_g + \int_M G_l' (v_+) \eta^2 |\nabla v_+|^2 dv_g \\ & = & \int_M G_l(v_+) v_+ \Delta (\eta^2) - 2 \int_M v_+ G_l'(v_+) \eta (\nabla v_+,\nabla \eta) dv_g + \int_M G_l' (v_+) \eta^2 |\nabla v_+|^2 dv_g \\ & \geq & - C \int_M v_+ G_l(v_+)dv_g - 2 \int_M v_+^2 G_l'(v_+) |\nabla \eta|^2 dv_g + \frac{1}{2} \int_M G_l' (v_+) \eta^2 |\nabla v_+|^2 dv_g.\\ \end{eqnarray*} \noindent Using (\ref{i1}), (\ref{i2}) and (\ref{i3}), we get \begin{eqnarray} \int_M (\nabla v_+, \nabla \eta^2 G_l(v_+) )dv_g & \geq & - C \int_M (F_l(v_+))^2 dv_g + \frac{1}{2q} \int_M (F_l' (v_+))^2 \eta^2 |\nabla v_+|^2 dv_g \nonumber \\ & \geq& -C \int_M (F_l(v_+))^2 dv_g + \frac{1}{2q}\int_M \eta^2 |\nabla F_l(v_+)|^2 dv_g \nonumber \\ & \geq & -C \int_M (F_l(v_+))^2 dv_g + \frac{1}{4q} \int_M |\nabla (\eta F(v_+)) |^2 dv_g - \frac{1}{2q} \int_M |\nabla \eta|^2 (F_l(v_+))^2 dv_g \nonumber \\ & \geq & -C \int_M (F_l(v_+))^2 dv_g + \frac{1}{4q} \int_M |\nabla (\eta F(v_+)) |^2 dv_g. \label{i4} \end{eqnarray} \noindent Using the Sobolev embedding $H_1^2(M)$ into $L^N(M)$, there exists a constant $A>0$ depending only on $(M,g)$ such that $$ \int_M |\nabla (\eta F(v_+)) |^2 dv_g \geq A {\left( \int_M (\eta F(v_+))^N dv_g \right)}^{\frac{2}{N}} - \int_M (\eta F(v_+))^2 dv_g.$$ Together with (\ref{i4}), we obtain \begin{eqnarray} \label{i5} \int_M (\nabla v_+, \nabla \eta^2 G_l(v_+) )dv_g \geq -C \int_M (F_l(v_+))^2 dv_g +\frac{A}{4q} {\left( \int_M (\eta F(v_+))^N dv_g \right)}^{\frac{2}{N}} \end{eqnarray} \noindent Independently, we choose $\delta>0$ small enough such that $$\int_{B_{x_0}(2 \delta)} u^N dv_g \leq {\left( c_n \frac{A}{8q} \right)}^{\frac{n}{2}}.$$ Relation (\ref{i2}) and H\"older inequality then lead to \begin{eqnarray} \label{i6} \int_M u^{N-2} v_+ \eta^2 G_l(v_+) dv_g \leq \int_M u^{N-2} \eta^2 (F_l(v_+))^2 dv_g \leq c_n \frac{A}{8q} {\left( \int_M (\eta F(v_+))^N dv_g \right)}^{\frac{2}{N}}. \end{eqnarray} \noindent Since, by (\ref{i2}), $$\int_M S_g v_+ \eta^2 G_l(v_+) dv_g \geq -C \int_M (F_l(v_+)^2) dv_g,$$ we get from (\ref{equ1}), (\ref{i5}) and (\ref{i6}) that $$c_n \frac{A}{8q} {\left( \int_M (\eta F(v_+))^N dv_g \right)}^{\frac{2}{N}} \leq C \int_M (F_l(v_+))^2 dv_g. $$ Now, by Sobolew embedding, $v_+ \in L^N(M)$. Since $2q \leq N$ and since $C$ does not depend on $l$, the right hand side of this inequality is bounded when $l$ tends to $+\infty$. We obtain that $$\limsup_{l \to +\infty} \int_M (\eta F(v_+))^N dv_g < +\infty.$$ This proves that $v_+ \in L^{qN}(B_{x_0}(\delta))$. Since $x_0$ is arbitrary, we get that $v_+ \in L^{qN}(M)$. Doing the same with $\sup(-v,0)$ instead of $v_+$, we get that $v \in L^{qN}(M)$. This proves Lemma \ref{regu}.
\section{Introduction}
In many applications it is well known that the solution of the optimization problem can be approximated by low-rank matrices or tensors, i.e. it lies on a certain manifold \cite{oseledets-survey-2015, absil-opt-2009}. Thus, instead of minimizing the full functional, the framework of Riemannian optimization can be very effective in terms of storage \cite{udriste-riemannian-1994, ma-manifold-2011}. There are different approaches for the optimization over low-rank manifolds, including projection onto the tangent space \cite{lubich-timett-2015}, conjugate-gradient type methods \cite{sato-cg-2015}, and second-order methods \cite{absil-newton-2009}. The manifolds of matrices with bounded ranks and tensors with fixed tensor train and hierarchical ranks are of crucial importance in many high-dimensional problems, and are examples of Riemannian manifolds with a very particular polylinear structure. In this paper we consider the two-dimensional (matrix) case and study the convergence of projected gradient-type methods. We show that if the original method converges, its manifold version based on the so-called \emph{projector-splitting method} is guaranteed to converge at least with the same rate, under some additional conditions on the initial approximation. This is, up to a certain extent, an unexpected result, since the standard estimates include the curvature of the manifold. For the manifold of matrices of rank $r$, the curvature is given by $1 / \sigma_{\min}$, the inverse of the smallest positive singular value; i.e. if the matrix is close to a matrix of smaller rank, such estimates are useless in practice. Our results show that the curvature is not important for the convergence. Consider an iterative process
\begin{equation}\label{thm:fullit} X_{k+1} = \Phi(X_k), \quad k = 0, \ldots \end{equation}
where $X_k \in \mathbb{R}^{n \times m}$ and $\Phi$ is a contraction with parameter $\delta$. Then, $X_k$ converges linearly to the fixed point $X_*$ for $k \rightarrow \infty$, i.e.
$$ \Vert X_{k+1} - X_* \Vert \leq \delta \Vert X_k - X_* \Vert, $$
for some matrix norm $\Vert \cdot \Vert$. We also assume that the initial point and the fixed point are on the manifold, i.e.
$$ X_0, X_* \in \mathcal{M}_r, \quad \mathcal{M}_r = \left\{ X \bigg| \rank X \leq r \right\}. $$
From \eqref{thm:fullit} we create the projected version as
\begin{equation}\label{thm:projit} Y_{k+1} = I(Y_k, \Phi(Y_k) - Y_k), \quad k = 0, \ldots, \end{equation}
where $I(Z, H)$ is the \emph{projector-splitting integrator} \cite{lubich-timett-2015}, which is known to be a \emph{retraction} onto the manifold \cite{absil-newton-2009}. There are many other possible choices for the retraction, but in this paper we consider only this one, and all the convergence estimates are proven for the method \eqref{thm:projit}. Our approach is based on splitting the error $\Vert Y_k - X_* \Vert$ into two components. The first component is the projection onto the tangent space of the manifold at some intermediate point and shows how close the current point is to the stationary point in the sense of the Riemannian metric on the manifold. The second component is the projection onto the normal space at the same point and is related to the curvature of the manifold. The typical convergence behaviour is presented in Figure \ref{fig:figure1a}. However, a much more interesting pattern is also possible; see Figure \ref{fig:figure1b}.
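A minimal Python/NumPy sketch of the projected iteration \eqref{thm:projit} is given below; it is purely illustrative, the function and variable names are ours, and the retraction $I$ is the projector-splitting integrator written out as Algorithm~\ref{thm:algi} in the next section.
\begin{verbatim}
import numpy as np

def projector_splitting_retraction(U, S, V, D):
    # I(A, D) for A = U @ S @ V.T; returns the factors of the next iterate.
    U1, S1 = np.linalg.qr(U @ S + D @ V)         # U_1, S'    = QR(U_0 S_0 + D V_0)
    S1 = S1 - U1.T @ D @ V                       # S''        = S' - U_1^T D V_0
    V1, S1t = np.linalg.qr(V @ S1.T + D.T @ U1)  # V_1, S_1^T = QR(V_0 S''^T + D^T U_1)
    return U1, S1t.T, V1

def projected_iteration(phi, U, S, V, n_steps, X_star=None):
    # Runs Y_{k+1} = I(Y_k, phi(Y_k) - Y_k), keeping Y_k in factored form.
    errors = []
    for _ in range(n_steps):
        Y = U @ S @ V.T
        U, S, V = projector_splitting_retraction(U, S, V, phi(Y) - Y)
        if X_star is not None:
            errors.append(np.linalg.norm(U @ S @ V.T - X_star))
    return (U, S, V), errors
\end{verbatim}
The error history collected in this sketch corresponds to the kind of quantity shown in Figures~\ref{fig:figure1a} and \ref{fig:figure1b}.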
\begin{figure}[H] \centering \begin{subfigure}{.45\textwidth} \centering \scalebox{0.38}{\input{prj-est01.pgf}} \caption{Typical case convergence.}\label{fig:figure1a} \end{subfigure} \hspace{0.5cm} \begin{subfigure}{.45\textwidth} \centering \scalebox{0.38}{\input{prj-est02.pgf}} \caption{Staircase convergence.}\label{fig:figure1b} \end{subfigure} \end{figure}
In both cases, although the curvature influences only the normal component of the error, the convergence is not worse than for the full iteration \eqref{thm:fullit}.
\section{Projector-splitting integrator}
The projector-splitting integrator was originally proposed \cite{lubich-lrappr-2007} as an integration scheme for the equations of motion of dynamical low-rank approximation. However, the only information it requires is two matrices, $A_0$, $A_1$, at subsequent time steps. Thus it is very natural to consider it for discrete-time problems, and moreover, it can be formally viewed as a retraction onto the manifold of rank-$r$ matrices. It is formulated as follows. Given a rank-$r$ matrix in the form $A_0 = U_0 S_0 V_0^{\top}, U_0^{\top} U_0 = V_0^{\top} V_0 = I_r$ and a direction $D$, it provides the retraction of $A_0 + D$ back onto the manifold by the following steps:
\begin{algorithm}[!h] \KwData{$A_0 = U_0 S_0 V_0^{\top}, \quad D$} \KwResult{$A_1 = U_1 S_1 V_1^{\top}$} $ U_1, S' = \mathrm{QR}(U_0 S_0 + D V_0)$\; $S'' = S' - U_1^{\top} D V_0$\; $V_1, S^{\top}_1 = \mathrm{QR}(V_0 S''^{\top} + D^{\top} U_1)$\; \caption{The projector splitting retraction}\label{thm:algi} \end{algorithm}
Note that the QR-factorizations in the intermediate steps are non-unique, but the final result $U_1 S_1 V^{\top}_1$ does not depend on this choice. For the details we refer the reader to \cite{lubich-timett-2015}. We will denote the result of Algorithm \ref{thm:algi} as $I(A_0, D)$. Define $\mathcal{T}(X)$ as the tangent space of $\mathcal{M}_r$ at $X \in \mathcal{M}_r$. The following lemma provides a new interpretation of the projector-splitting integrator as a projection onto the tangent plane at some intermediate point.
\begin{lemma} Let $A_0 = U_0 S_0 V^{\top}_0$ be of rank $r$, with $U^{\top}_0 U_0 = V^{\top}_0 V_0 = I_r$, and let $D \in \mathbb{R}^{n \times m}$. Then,
\begin{equation}\label{thm:projectorform} I(A_0, D) = P_{\mathcal{T} (X)}(A_0 + D), \quad I(A_0, D), A_0 \in \mathcal{T} (X), \end{equation}
where $X$ is some matrix of rank $r$. \end{lemma}
\begin{proof} It is sufficient to select $X = U_1 S V^{\top}_0$ for any non-singular $S$, where $U_1$ is defined as in Algorithm \ref{thm:algi}. Note from the construction that both the initial and the final points lie in the tangent space $\mathcal{T}(X)$. \end{proof}
\section{Decomposition of the error into the normal and tangent parts}
Let us write one step of the iterative process \eqref{thm:projit} as
\begin{equation}\label{thm:onestep} Y_1 = I(Y_0, \Phi(Y_0) - Y_0). \end{equation}
Using the projector form \eqref{thm:projectorform} we have
$$ Y_1 = P_{\mathcal{T}(X)}(\Phi(Y_0)), $$
and, since $\Phi(X_*) = X_*$, the error can be written as
\begin{equation}\label{thm:errorcontrol} E_1 = Y_1 - X_* = P_{\mathcal{T}(X)}(\Phi(Y_0) - \Phi(X_*)) + P_{\mathcal{T}(X)}(X_*) - X_*. \end{equation}
Due to the contraction property we can bound
$$ \Vert \Phi(Y_0) - \Phi(X_*) \Vert \leq \delta \Vert E_0 \Vert, \quad E_0 = Y_0 - X_*. $$
It is natural to introduce the notation
$$ P_{\mathcal{T}(X)}(X_*) - X_* = -P^{\perp}_{\mathcal{T}(X)}(X_*), $$
since it is the component of $X_*$ normal to the tangent space at the point $X$.
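Both components can be formed explicitly once $U_1$ and $V_0$ are available, since the projection onto the normal space at $X = U_1 S V_0^{\top}$ acts as $Z \mapsto (I - U_1 U_1^{\top}) Z (I - V_0 V_0^{\top})$ (cf. \eqref{thm:prjcomp} below). A minimal NumPy sketch (the helper name is ours and the snippet is only illustrative):
\begin{verbatim}
import numpy as np

def error_components(Y1, X_star, U1, V0):
    # Frobenius-orthogonal split of E_1 = Y_1 - X_* into its tangent and
    # normal parts at the intermediate point X = U_1 S V_0^T.
    E = Y1 - X_star
    E_normal = E - U1 @ (U1.T @ E)                # apply (I - U_1 U_1^T) from the left
    E_normal = E_normal - (E_normal @ V0) @ V0.T  # apply (I - V_0 V_0^T) from the right
    E_tangent = E - E_normal
    return E_tangent, E_normal
\end{verbatim}
The Frobenius norms of the two returned matrices are exactly the tangent and normal error components $\varepsilon_{\tau}$ and $\varepsilon_{\perp}$ introduced below.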
Thus the error at the next step satisfies
$$ \varepsilon^2_1 = \Vert E_1 \Vert^2 = \varepsilon^2_{\tau} + \varepsilon^2_{\perp}. $$
From the definition it is easy to see that
$$ \varepsilon_{\tau} = \Vert P_{\mathcal{T}(X)}(\Phi(Y_0) - \Phi(X_*)) \Vert \leq \Vert \Phi(Y_0) - \Phi(X_*) \Vert \leq \delta \varepsilon_0. $$
The estimate for the decay of $\varepsilon_{\perp} = \Vert P^{\perp}_{\mathcal{T}(X)}(X_*)\Vert$ is much less trivial.
\section{Estimate for the normal component of the error}
From the definition of the error we have
$$ \Phi(Y_0) = X_* + H, $$
and $\Vert H \Vert \leq \delta \varepsilon_0$. Since $Y_0$ and $X_*$ are on the manifold, they admit factorizations
$$ Y_0 = U_0 S_0 V^{\top}_0, \quad X_* = U_* S_* V^{\top}_*, $$
where $U_*, V_*, U_0$ and $V_0$ have orthonormal columns. If $\varepsilon_0$ is small, one can expect that the subspaces spanned by the columns of $V_0$ and $V_*$ are close; however, the estimates depend on the smallest singular value of $X_*$. The following theorem gives a bound on the normal component.
\begin{theorem}\label{thm:normalcomponent} Let $X_* = U_* S_* V^{\top}_*$, where $V_*^{\top} V_* = U_*^{\top} U_* = I_q$, $q \leq r$, let $H$ be an $n \times m$ matrix, let $V_0$ be an $m \times r$ matrix with orthonormal columns, and let the columns of $U_1$ form an orthonormal basis of the column space of the matrix $(X_* + H)V_0.$ Then, the norm of $P^{\perp}(X_*)$ defined as
\begin{equation}\label{thm:prjcomp} P^{\perp}(X_*) = (I - U_1 U^{\top}_1) X_* (I - V_0 V_0^{\top}) \end{equation}
can be bounded as
\begin{equation}\label{thm:norm_normal} \Vert P^{\perp}(X_*) \Vert \leq \Vert H \Vert \Vert \tan \angle (V_0, V_*) \Vert. \end{equation}
\end{theorem}
\begin{proof} First, we find an $r \times r$ orthogonal matrix $Q$ such that
\begin{equation}\label{thm:psidef} \Psi Q = (V^{\top}_* V_0) Q = \begin{bmatrix} \widehat{\Psi} & 0_{r-q} \end{bmatrix}, \end{equation}
where the matrix $\widehat{\Psi}$ has size $q \times q$. Since the multiplication by the orthogonal matrix $Q$ does not change the projector
$$V_0 V^{\top}_0 = (V_0 Q) (V_0 Q)^{\top},$$
we can always assume that the matrix $\Psi$ is already in the form \eqref{thm:psidef}. Since the columns of $U_1$ span the column space of $(X_* + H) V_0$, we have
\begin{equation}\label{thm:singvecteq} (U_1 U^{\top}_1) (X_* V_0 + H V_0) = X_* V_0 + H V_0. \end{equation}
From this equation we have
\begin{equation}\label{thm:eq1} X_* V_0 = U_1 U^{\top}_1 X_* V_0 + U_1 U^{\top}_1 H V_0 - H V_0 = U_* S_* V^{\top}_* V_0 = U_* S_* \begin{bmatrix} \widehat{\Psi} & 0 \end{bmatrix}. \end{equation}
Introduce the matrix $V^{(q)}_0$ comprised of the first $q$ columns of the matrix $V_0$. From \eqref{thm:eq1} we have
$$ U_* S_* \widehat{\Psi} = U_1 U^{\top}_1 X_* V^{(q)}_0 + U_1 U^{\top}_1 H V^{(q)}_0 - H V^{(q)}_0. $$
Thus,
\begin{equation}\label{thm:ustar} U_* S_*= U_1 \Psi_1 - H V^{(q)}_0 \widehat{\Psi}^{-1}, \qquad \Psi_1 = U^{\top}_1 (X_* + H) V^{(q)}_0 \widehat{\Psi}^{-1}. \end{equation}
Note that
$$\Vert P^{\perp}(X_*) \Vert = \Vert (I - U_1 U^{\top}_1) X_* (I - V_0 V^{\top}_0) \Vert \leq \Vert (I - U_1 U^{\top}_1) X_* (I - V^{(q)}_0 (V^{(q)}_0)^{\top}) \Vert,$$
and from \eqref{thm:singvecteq} it follows also that
$$ (I - U_1 U^{\top}_1) (X_* + H) V^{(q)}_0 (V^{(q)}_0)^{\top} = 0. $$
For simplicity, denote
$$ P^{\perp}_q(X_*) = (I - U_1 U^{\top}_1) X_* (I - V^{(q)}_0 (V^{(q)}_0)^{\top}). $$
Then,
\begin{equation}\label{thm:perpproj} \begin{split} P_q^{\perp}(X_*) & = (I - U_1 U^{\top}_1) X_* - (I - U_1 U^{\top}_1) X_* V^{(q)}_0 (V^{(q)}_0)^{\top} = \\ &= (I - U_1 U^{\top}_1) X_* + (I - U_1 U^{\top}_1) H V^{(q)}_0 (V^{(q)}_0)^{\top}.
\end{split} \end{equation}
Replacing $U_* S_*$ in \eqref{thm:perpproj} by \eqref{thm:ustar} we get
\begin{equation} \begin{split} P^{\perp}_q(X_*) &= (I - U_1 U_1^{\top}) U_* S_* V^{\top}_* + (I - U_1 U^{\top}_1) H V^{(q)}_0 (V^{(q)}_0)^{\top} \\ &= (I - U_1 U^{\top}_1) H V^{(q)}_0 (V^{(q)}_0)^{\top} - (I - U_1 U^{\top}_1) H V^{(q)}_0 \widehat{\Psi}^{-1} V^{\top}_* \\ &= (I - U_1 U^{\top}_1) H V^{(q)}_0 \left( (V^{(q)}_0)^{\top} - \widehat{\Psi}^{-1} V^{\top}_* \right). \end{split} \end{equation}
To estimate the norm, note that
$$ \Vert P^{\perp}_q(X_*) \Vert \leq \Vert H \Vert \Vert (V^{(q)}_0)^{\top} - \widehat{\Psi}^{-1} V^{\top}_* \Vert. $$
Introduce the matrix
$$ B = (V^{(q)}_0)^{\top} - \widehat{\Psi}^{-1} V^{\top}_*. $$
We have
$$ \Vert X_* ( I - V_0 V^{\top}_0) \Vert = \Vert (X_* - Y_0) ( I - V_0 V^{\top}_0) \Vert \leq \Vert X_* - Y_0 \Vert. $$
Replacing $X_*$ by $U_* S_* V^{\top}_*$ we have
$$ \Vert X_* ( I - V_0 V^{\top}_0) \Vert = \Vert U_* S_* (V^{\top}_* - \Psi V^{\top}_0) \Vert = \Vert U_* S_* (V^{\top}_* - \widehat{\Psi} (V^{(q)}_0)^{\top}) \Vert. $$
Thus,
$$\Vert V^{\top}_* - \widehat{\Psi} (V^{(q)}_0)^{\top} \Vert \leq \frac{\Vert X_* - Y_0 \Vert}{\sigma_q},$$
where $\sigma_q$ denotes the smallest nonzero singular value of $X_*$. Introduce the matrix $C = V^{\top}_* - \widehat{\Psi} (V^{(q)}_0)^{\top}$. Then,
$$\Vert C \Vert^2 = \Vert C C^{\top} \Vert = \Vert I - \widehat{\Psi} \widehat{\Psi}^{\top} \Vert \leq \frac{\Vert X_* - Y_0 \Vert^2}{\sigma^2_q}.$$
Then, we have
$$\sin \theta \leq \frac{\Vert X_* - Y_0 \Vert}{\sigma_q},$$
whereas we need to bound
$$ \tan \theta = \frac{\sin \theta}{\sqrt{1 - \sin^2 \theta}}. $$
Let $\widehat{\Psi} = U \Lambda V^{\top}$ be the singular value decomposition of $\widehat{\Psi}$. From the definition of the angles between subspaces we have
$$ \Lambda = \cos \angle (V^{\top}_*, V^{(q)}_0) = \cos \angle (V^{\top}_*, V_0), $$
therefore
$$ \Vert B \Vert^2 = \Vert \cos^{-2} \angle (V^{\top}_*, V_0) - 1\Vert = \Vert \tan^2 \angle (V^{\top}_*, V_0) \Vert, $$
which completes the proof. \end{proof}
\section{Error estimate}
Theorem \ref{thm:normalcomponent} shows that the normal component can decay as the square of the tangent component. Unfortunately, convergence of the projector-splitting method in general is not guaranteed. In Section \ref{prj:counter_example_section} we give an example for which the sequence $Y_k$ converges to a matrix different from $X_*$. In this section we derive sufficient conditions for convergence of the projector-splitting method. We consider one step of the projector-splitting scheme.
\begin{lemma}\label{prj:y_lem} Let us denote the initial point $Y_0 = U_0 S_0 V_0^{\top}$, the next step point $ Y_1 = U_1 S_1 V_1^{\top}$ and the fixed point $X_* = U_* S_* V_*^{\top}.$ We assume that $S_*$ is a diagonal matrix:
$$ S_* = \sum_{k=1}^r\limits s_k e_k e_k^{\top},$$
where $s_k$ is the $k$-th singular value and $e_k$ is the corresponding vector from the standard basis. Let us denote
\begin{align*}\label{prj:cos_def} &\cos^2 \phi_{Li, k} = \Vert U_i U_i^{\top} U_* e_k \Vert_F^2, &\cos^2 \phi_{Ri, k} = \Vert e_k^{\top} V_*^{\top} V_i V_i^{\top} \Vert_F^2,\\ &\sin^2 \phi_{Li, k} = \Vert (I-U_i U_i^{\top}) U_* e_k \Vert_F^2, &\sin^2 \phi_{Ri, k} = \Vert e_k^{\top} V_*^{\top} (I-V_i V_i^{\top}) \Vert_F^2. \end{align*}
Assume that
\begin{equation}\label{prj:pq_cond} \begin{split} \delta^2 \Vert Y_0 - X_*\Vert_F^2 + \sum_{k=1}^r\limits s_k^2 \sin^2 \phi_{R0, k} \leq s_r^2 .
\end{split} \end{equation}
Then the following inequality holds:
\begin{equation}\label{prj:y_ineq} \begin{split} &\Vert Y_1 - X_* \Vert_F^2 \leq \delta^2 \Vert Y_0 - X_* \Vert_F^2 +\\ + \Big(\delta^2 &\Vert Y_0 - X_* \Vert_F^2 - \sum_{k=1}^r\limits s_k^2 \sin^2 \phi_{R1, k} \Big) \frac{ \sum_{k=1}^r\limits s_k^2\sin^2 \phi_{R0, k}}{ s_r^2 - \sum_{k=1}^r\limits s_k^2\sin^2 \phi_{R0, k} - \sum_{k=1}^r\limits s_k^2 \sin^2 \phi_{R1, k}}. \end{split} \end{equation}
\end{lemma}
\begin{proof} Without loss of generality we can assume that
$$ U_1 = \begin{bmatrix} I_r \\ 0_{(n-r)\times r}\\ \end{bmatrix}, \quad V_0 = \begin{bmatrix} I_r \\ 0_{(m-r)\times r}\\\end{bmatrix}. $$
Then we use the following block representation of $Y_0, \Phi(Y_0), Y_1$ and $X_*$:
\begin{equation*} \begin{split} Y_0 &= U_0 S_0 V_0^{\top} = \begin{bmatrix} D_1^0 & 0 \\ D_3^0 & 0 \end{bmatrix}, \quad \Phi(Y_0) = \begin{bmatrix} D_1^1 & D_2^1 \\ 0 & D_4^1 \\ \end{bmatrix},\\ Y_1 &= U_1 S_1 V_1^{\top} = \begin{bmatrix} D_1^1 & D_2^1 \\ 0 & 0 \\ \end{bmatrix}, \quad X_* = U_* S_* V_*^{\top} = \begin{bmatrix} E_1 & E_2 \\ E_3 & E_4 \\ \end{bmatrix}. \end{split} \end{equation*}
Therefore,
\begin{equation*} \begin{split} \Vert Y_1 - X_* \Vert_F^2 &= \Vert D_1^1 - E_1 \Vert_F^2 + \Vert D_2^1 - E_2 \Vert_F^2 + \Vert E_3 \Vert_F^2 + \Vert E_4 \Vert_F^2 \leq\\ &\leq \left(\Vert D_1^1 - E_1 \Vert_F^2 + \Vert D_2^1 - E_2 \Vert_F^2 + \Vert E_3 \Vert_F^2 + \Vert D_4^1 - E_4 \Vert_F^2\right) + \left( \Vert E_4 \Vert_F^2\right) =\\ &=\Vert \Phi(Y_0) - X_* \Vert_F^2 + \Vert (I - U_1 U_1^{\top}) X_* (I - V_0 V_0^{\top}) \Vert_F^2 \leq \\ &\leq \delta^2 \Vert Y_0 - X_* \Vert_F^2 + \Vert (I - U_1 U_1^{\top}) X_* (I - V_0 V_0^{\top})\Vert_F^2. \end{split} \end{equation*}
We want to estimate $ \Vert (I - U_1 U_1^{\top}) X_* (I - V_0 V_0^{\top})\Vert_F^2$. For that purpose we exploit the contraction property of $\Phi$:
\begin{equation*}\label{prj:y_left_ineq} \begin{split} \Vert U_1 U_1^{\top} ( \Phi(Y_0) - X_*)\Vert_F^2 &+ \Vert (I - U_1U_1^{\top} ) ( \Phi(Y_0) - X_*)\Vert_F^2 =\\ &= \Vert( \Phi(Y_0) - X_*)\Vert_F^2 \leq \delta^2 \Vert Y_0 - X_* \Vert_F^2, \\ \Vert U_1 U_1^{\top} ( X_*) (I - V_1 V_1^{\top})\Vert_F^2 &+ \Vert (I - U_1U_1^{\top} ) ( X_*)V_0 V_0^{\top}\Vert_F^2 \leq\delta^2 \Vert Y_0 - X_* \Vert_F^2,\\ \Vert (I - U_1U_1^{\top} ) ( X_*) V_0 V_0^{\top}\Vert_F^2 &-\Vert (I - U_1 U_1^{\top}) ( X_*) (I - V_1 V_1^{\top})\Vert_F^2 \leq \\ &\leq \delta^2 \Vert Y_0 - X_* \Vert_F^2 - \Vert ( X_*) (I - V_1 V_1^{\top})\Vert_F^2. \end{split} \end{equation*}
Then the inequality \eqref{prj:y_left_ineq} transforms to
\begin{equation}\label{prj:left_ineq1} \begin{split} &\sum_{k=1}^r s_k^2 \Vert (I - U_1 U_1^{\top} ) U_* e_k \Vert_F^2 \Vert e_k^{\top} V_*^{\top} V_0 V_0^{\top}\Vert_F^2 -\\ &- \sum_{k=1}^r s_k^2 \Vert (I - U_1 U_1^{\top})U_* e_k\Vert_F^2 \Vert e_k^{\top} V_*^{\top} (I - V_1 V_1^{\top})\Vert_F^2 \leq \\ &\leq \delta^2 \Vert Y_0 - X_* \Vert_F^2 - \sum_{k=1}^r s_k^2 \Vert U_* e_k \Vert_F^2 \Vert e_k^{\top} V_*^{\top} (I - V_1 V_1^{\top})\Vert_F^2. \end{split} \end{equation}
Using \eqref{prj:cos_def} we have
$$ \sum_{k = 1}^r\limits \sin^2 \phi_{L1, k} s_k^2 (\cos^2 \phi_{R0, k} - \sin^2 \phi_{R1, k}) \leq \delta^2 \Vert Y_0 - X_* \Vert_F^2 - \sum_{k=1}^r\limits s_k^2 \sin^2 \phi_{R1, k}.
$$
Inequality \eqref{prj:pq_cond} guarantees that
\begin{equation*} \begin{split}
\sum_{k=1}^r s_k^2\sin^2 \phi_{R0, k} + \sum_{k=1}^r s_k^2 \sin^2 \phi_{R1, k} < s_r^2, \\
0 < \max_{1\leq k\leq r} \left(\cos^2 \phi_{R0, k} - \sin^2 \phi_{R1, k}\right).\\
\end{split} \end{equation*}
Therefore
\begin{equation*} \begin{split}
&\sum_{k = 1}^r s_k^2 \sin^2 \phi_{L1, k} \sin^2\phi_{R0, k} \leq \\
\leq \Big(\delta^2 \Vert Y_0 - X_* \Vert_F^2 - &\sum_{k=1}^r s_k^2 \sin^2 \phi_{R1, k} \Big) \max_{1\leq k\leq r} \frac{\sin^2 \phi_{R0, k}}{\cos^2 \phi_{R0, k} - \sin^2 \phi_{R1, k}} \leq \\
\leq \Big(\delta^2 \Vert Y_0 - X_* \Vert_F^2 - &\sum_{k=1}^r s_k^2 \sin^2 \phi_{R1, k} \Big) \frac{ \sum_{k=1}^r s_k^2\sin^2 \phi_{R0, k}}{ s_r^2 - \sum_{k=1}^r s_k^2\sin^2 \phi_{R0, k} - \sum_{k=1}^r s_k^2 \sin^2 \phi_{R1, k}},
\end{split} \end{equation*}
i.e., \eqref{prj:y_ineq} is proven.
\end{proof}
For convenience we introduce new variables:
\begin{equation}\label{prj:pqs} \begin{split}
s = \delta^2, \quad p_k = \dfrac{\Vert Y_k - X_* \Vert^2_F}{s_r^2}, \quad q_k = \dfrac{1}{s_r^2}\sum_{j=1}^r s_j^2 \sin^2 \phi_{Rk, j}.
\end{split} \end{equation}
Now we can formulate the connection between the subsequent steps:
\begin{equation}\label{prj:pqs_ineq} \begin{split}
p_{k+1} \leq s p_k + \frac{(s p_k- q_{k+1})q_k}{1 - q_k - q_{k+1}}, \quad 0 \leq q_{k+1} \leq s p_k.
\end{split} \end{equation}
We can derive an upper estimate for $p_k$:
\begin{theorem}
Assume that $0<s<1$, $0 \leq q_0 \leq 1$, $0<p_0$.
Consider $p_k , q_k , k\in \mathbf{N}$ that satisfy \eqref{prj:pqs_ineq}. Assume that $4 \dfrac{p_0}{(1-q_0)^2}\dfrac{s}{1-s}<1$. Then the following inequalities hold:
\begin{equation}\label{prj:thm_spq} \begin{split}
p_k &\leq\frac{p_0}{c_*(s, p_0, q_0)} s^k,\quad 0 < c_*(s, p_0, q_0) \leq 1 - \sum_{j=0}^k q_j \leq s p_{k-1} + q_{k-1},
\end{split} \end{equation}
where
$$ c_*(s, p_0, q_0) = \frac{p_0}{1-q_0} \frac{s}{1-s} \left(\dfrac{2}{1 + \sqrt{1 - 4 \frac{p_0}{(1-q_0)^2}\frac{s}{1-s}}} \right). $$
\end{theorem}
\begin{proof}
The parameter $c_* (s, p_0, q_0)$ is a positive solution of the equation
$$c_* (s, p_0, q_0) = 1 - q_0 - p_0 \frac{s}{1-s} \frac{1}{c_* (s, p_0, q_0)}.$$
We will use mathematical induction to prove \eqref{prj:thm_spq}. The base case follows from $0 < c_* (s, p_0, q_0) < 1 $:
$$ p_0 \leq \frac{p_0}{c_* (s, p_0, q_0)}, \quad c_* (s, p_0, q_0) \leq 1 - q_0. $$
Consider the inductive step. Assume that \eqref{prj:thm_spq} holds for every $i<k$ for some $k$. Then,
\begin{equation} \begin{split}
p_{k+1} &\leq s p_k + \frac{(s p_k- q_{k+1})q_k}{1 - q_k - q_{k+1}} = \\
&= s p_k \frac{1 - q_k}{1 - q_k - q_{k+1}} - \frac{ q_{k+1}q_k}{1 - q_k - q_{k+1}} \leq s \frac{p_k}{1 - \frac{q_{k+1}}{1 - q_k}}.
\end{split} \end{equation}
We can expect the term $\dfrac{ q_{k+1}q_k}{1 - q_k - q_{k+1}}$ to be significantly smaller than $p_{k+1}$, since it decays as $p_{k+1}^2$ due to $q_k\sim p_k$. Finally,
\begin{equation}\label{prj:pqs_ineq2} \begin{split}
p_{k+1} &\leq \dfrac{sp_k}{1 - \left(\dfrac{q_{k+1}}{1 - q_k}\right) } \leq \dfrac{s^{k+1} p_0}{\prod\limits_{j = 0}^k \Big( 1 - \dfrac{q_{j+1}}{1 - q_j} \Big)}.
\end{split} \end{equation}
It is easy to prove that in the case $\sum_{j = 0}^k q_j < 1$ we have
$$ \prod\limits_{j = 0}^k \Big( 1 - \frac{q_{j+1}}{1 - q_j} \Big) \geq 1 - \sum_{j = 0}^{k+1} q_j.
$$
It leads to
\begin{equation*} \begin{split}
p_{k+1} &\leq \dfrac{s^{k+1} p_{0}}{1 - \sum_{j = 0}^k q_j } \leq \dfrac{s^{k+1} p_{0}}{c_* (s, p_0, q_0)},\\
\end{split} \end{equation*}
therefore
\begin{equation*} \begin{split}
c_*(s, p_0, q_0) &= 1 - q_0 - \dfrac{p_0}{c_*(s, p_0, q_0)} \frac{s}{1-s} = 1 - q_0 - s\sum_{i=0}^{\infty} \dfrac{p_0}{c_*(s, p_0, q_0)} s^i \leq \\
&\leq 1 - q_0 - s \sum_{j=0}^{k+1} p_j \leq 1 - \sum_{j=0}^{k+1} q_j \leq 1 - q_k - s p_k.\\
\end{split} \end{equation*}
The inductive step is proven.
\end{proof}
The final estimate is
\begin{equation*} \begin{split}
p_n \leq \dfrac{ p_{0} }{ c_*(s, p_0, q_0) } s^{n} = \dfrac{ p_{0} }{ 1 - q_0 } s^{n} \left(\dfrac{1 + \sqrt{1 - 4 \dfrac{p_0}{(1-q_0)^2}\dfrac{s}{1-s}}}{2\dfrac{p_0}{(1-q_0)^2}\dfrac{s}{1-s}}\right).
\end{split} \end{equation*}
Note that if the condition $4 \dfrac{p_0}{(1-q_0)^2}\dfrac{s}{1-s}<1$ holds, then the condition $s p_0 + q_0< 1$ holds as well.
\begin{corollary}
Define $Y_k$ as in \eqref{thm:projit}, $X_*$, $s_k$ and $\sin^2 \phi_{R0, k}$ as in Lemma \ref{prj:y_lem}. Assume that the following inequality holds
$$ 4 \dfrac{\delta^2 \Vert Y_0 - X_*\Vert^2}{(1-\delta^2)\left(s_r^2 - \sum_{k=1}^r s_k^2 \sin^2 \phi_{R0, k}\right)^2} < 1. $$
Then the sequence $Y_k$ converges to $X_*$ and the following inequality holds
$$ \Vert Y_k - X_*\Vert < c(\delta, Y_0, X_*) \Vert Y_0 - X_*\Vert \delta^k, $$
where
$$ c(\delta, Y_0, X_*) = \dfrac{1 + \sqrt{1 - 4 \dfrac{\delta^2\Vert Y_0 - X_*\Vert^2 }{(1-\delta^2)\left(s_r^2-\sum_{k=1}^r s_k^2 \sin^2 \phi_{R0, k}\right)^2}}}{2 \dfrac{\delta^2\Vert Y_0 - X_*\Vert^2 }{(1-\delta^2)\left(s_r^2-\sum_{k=1}^r s_k^2 \sin^2 \phi_{R0, k}\right)^2}}. $$
\end{corollary}
This estimate guarantees that if the initial point is close enough to the fixed point, then the projector splitting method has, in the worst case, the same convergence rate as the fixed-point iteration method. The estimate also requires that the distance between the initial point and the fixed point be smaller than the smallest singular value $s_r$ of the fixed point. In the next section we give an example for which this condition does not hold and the projector splitting method does not converge to the true solution.
\section{Counter-example}\label{prj:counter_example_section}
Consider the case $n=2,$ $r=1$.
We will need the following auxiliary result:
\begin{lemma}\label{prj:f_lemma}
Let the mapping $\Phi: \mathbb{R}^{2\times 2} \to \mathbb{R}^{2\times 2}$ be defined as
\begin{equation}\label{prj:phidef} \begin{split}
\Phi(Y) &= X_* + \delta \Vert Y - X_* \Vert_F X_{\perp}, \\
X_* &= \begin{pmatrix} 1 & 0 \\ 0 & 0\end{pmatrix}, \quad X_{\perp} = \begin{pmatrix} 0 & 0 \\ 0 & 1\end{pmatrix}.\\
\end{split} \end{equation}
Let us consider $\delta, d_*, q_{max}$ and $s$ that satisfy
\begin{equation} \begin{split}
0 &< \delta < 1, \quad 1< \delta^2 + \delta^6, \quad d_* = \frac{1}{\sqrt{1-\delta^2}}, \\
0 &< q_{max}, \quad 0<s, \quad \frac{1}{\delta^4 d_*^2} \Big( 1 +\frac{q_{max}}{\delta^2 d_*^2 } \Big) \leq \delta^2 - \frac{s}{\delta^2 d_*^2}.\\
\end{split} \end{equation}
Denote the set
$$\Omega = \left\{ \{p, q\} \Big| 0\leq p,\quad 0\leq q \leq q_{max},\quad \frac{q}{p} \leq s\right\} $$
and the function
\begin{equation*} \begin{split}
f: \Omega \to \mathbb{R}_{0, +}^2,\quad f(\{ p, q \}) = \left\{ \dfrac{1 + \delta^2 p}{1 + \dfrac{q}{\delta^2 d_*^2 (1 + p)} }-1,\quad \dfrac{q}{\delta^4 d_*^2 (1 + p)} \right\},
\end{split} \end{equation*}
and
$$ f^{*n}(x) = \underbrace{f(\ldots f(x) \ldots )}_{n \text{ times}}. $$
Then
$$ f(\Omega) \subset \Omega, \quad \forall x\in\Omega,\quad\lim_{n\to\infty} f^{*n}(x) = \{0, 0\}. $$
\end{lemma}
\begin{proof}
It is important to note that $1 < \delta^4 d_*^2 = \dfrac{\delta^4}{1 - \delta^2}$ because of the condition $1 < \delta^2 + \delta^6$. Let us denote $f(\{ p, q \}) = \{p_1, q_1 \}$. Then
\begin{equation}\label{prj:ineq_qmax} \begin{split}
q_1 = \frac{q}{\delta^4 d_*^2 (1 + p)} \leq \frac{q}{\delta^4 d_*^2} < q \leq q_{max},
\end{split} \end{equation}
and therefore
\begin{equation*} \begin{split}
\frac{q_1}{q} &= \frac{1}{\delta^4 d_*^2 (1 + p)} \leq \frac{1}{\delta^4 d_*^2},\\
\frac{p_1}{p} &= \frac{1}{p}\Big( \frac{1 + \delta^2 p}{1 + \dfrac{q}{\delta^2 d_*^2 (1 + p)} }-1\Big) \geq \frac{1}{p}\Big( \dfrac{1 + \delta^2 p}{1 + \dfrac{q}{\delta^2 d_*^2} }-1\Big) =\\
&= \Big(\delta^2 - \frac{1}{\delta^2 d_*^2}\frac{q}{p}\Big) / \Big(1 + \frac{q}{\delta^2 d_*^2} \Big) \geq \Big(\delta^2 - \frac{s}{\delta^2 d_*^2}\Big) / \Big(1 + \frac{q_{max}}{\delta^2 d_*^2} \Big) \geq \frac{1}{\delta^4 d_*^2}.\\
\end{split} \end{equation*}
Finally we have
\begin{equation}\label{prj:ineq_pq} \begin{split}
\frac{q_1}{p_1} \leq \dfrac{ q / \delta^4 d_*^2}{ p / \delta^4 d_*^2} = \frac{q}{p}\leq s.
\end{split} \end{equation}
The statement $f(\Omega) \subset \Omega$ follows from \eqref{prj:ineq_qmax} and \eqref{prj:ineq_pq}. Also the following inequalities hold
\begin{equation}\label{prj:conv_pq} \begin{split}
\frac{p_1}{p} = \frac{1}{p}\left( \dfrac{1 + \delta^2 p}{1 + \dfrac{q}{\delta^2 d_*^2 (1 + p)} }-1\right) \leq \delta^2,\quad \frac{q_1}{q} \leq \frac{1}{\delta^4 d_*^2}.
\end{split} \end{equation}
The inequalities \eqref{prj:conv_pq} guarantee linear convergence of $ f^{*n}(x)$ to $ \{ 0, 0\}$ for every $x \in \Omega.$
\end{proof}
\begin{lemma}\label{prj:pi_lemma}
Let the contraction mapping $\Phi$ be defined as in Lemma~\ref{prj:f_lemma}. Let the parameters $\delta, d_*$, the set $\Omega$ and the function $f$ satisfy the conditions of Lemma~\ref{prj:f_lemma}.
Let us denote by $M_{2,1}(\mathbf{R})$ the set of rank-$1$ $2\times2$ real matrices, by $\phi_R(X)$ the right angle of a rank-$1$ $2\times2$ matrix $X$, and
\begin{equation} \begin{split}
&\mathcal{M}_{2,1}^{'} = \Big\{ X \,\big|\, X \in M_{2,1}(\mathbf{R}), \sin^2(\phi_R(X)) > 0\Big\},\\
&\pi : \mathcal{M}_{2,1}^{'} \to \mathbf{R}_{0, +}^2,\quad \pi(X)= \left\{ \dfrac{\Vert X - X_*\Vert_F^2}{d_*^2} - 1, \quad \ensuremath{\mathrm{ctg}}^2 \phi_R(X) \right\}.\\
\end{split} \end{equation}
Assume that
\begin{equation} \begin{split}
Y_0 &= \begin{pmatrix} \cos \phi_{L0} \\ \sin \phi_{L0}\end{pmatrix} s_0 \begin{pmatrix} \cos \phi_{R0} & \sin \phi_{R0} \end{pmatrix} \in \pi^{-1}(\Omega), \\
Y_1 &= I(Y_0, \Phi(Y_0) - Y_0) = \begin{pmatrix} \cos \phi_{L1} \\ \sin \phi_{L1}\end{pmatrix} s_1 \begin{pmatrix} \cos \phi_{R1} & \sin \phi_{R1} \end{pmatrix}.
\end{split} \end{equation}
Then the following relations hold
\begin{equation}\label{prj:commuteq} \begin{split}
\pi (Y_1) = f(\pi(Y_0)),\quad \ensuremath{\mathrm{ctg}}^2 \phi_{L1} <\ensuremath{\mathrm{ctg}}^2 \phi_{R1}.
\end{split} \end{equation}
\end{lemma}
\begin{proof}
We will use the equivalent form of Algorithm \ref{thm:algi}
\begin{equation} \begin{split}
U_1, S' &= \mathrm{QR}( (A_0+ D) V_0),\\
V_1, S^{\top}_1 &= \mathrm{QR}( (A_0+ D)^{\top} U_1).\\
\end{split} \end{equation}
Let us consider $Y_0 = U_0 S_0 V_0^{\top}$, $d_0 = \Vert Y_0 - X_*\Vert_F$ and $V_0 = \begin{pmatrix} \cos \phi_{R0} \\ \sin \phi_{R0}\end{pmatrix}$. Then
\begin{equation} \begin{split}
\Phi(Y_0) &= \begin{pmatrix} 1 & 0 \\ 0 & \delta d_0\end{pmatrix}, \quad U_1, S' = \mathrm{QR}\left( \begin{pmatrix} \cos \phi_{R0} \\ \delta d_0 \sin \phi_{R0}\end{pmatrix}\right),\\
V_1, S^{\top}_1 &= \mathrm{QR}\left( \frac{1}{\sqrt{1 + (\delta^2 d_0^2-1) \sin^2 \phi_{R0}}}\begin{pmatrix} \cos \phi_{R0} \\ \delta^2 d_0^2 \sin \phi_{R0}\end{pmatrix} \right).\\
\end{split} \end{equation}
Finally we get:
\begin{equation}\label{prj:usveq} \begin{split}
U_1 S_1 V_1^{\top} = \begin{pmatrix} \cos \phi_{R0} \\ \delta d_0 \sin \phi_{R0} \end{pmatrix} \frac{1}{1 + (\delta^2 d_0^2-1) \sin^2 \phi_{R0}} \begin{pmatrix} \cos \phi_{R0} & \delta^2 d_0^2 \sin \phi_{R0} \end{pmatrix}.
\end{split} \end{equation}
It is important to note that $\cos^2 \phi_{L1} < \cos^2 \phi_{R1} < \cos^2 \phi_{R0} $ in the case $1 < \delta d_*$ (which our choice of $\delta$ ensures). The equality \eqref{prj:usveq} guarantees that if $0 < \sin^2 \phi_{R0}$, then $0 < \sin^2 \phi_{R1}$. So
\begin{equation} \begin{split}
d_1^2 &= S_1^2 + \Big(1 - \frac{\cos^2 \phi_{R0}}{\cos^2 \phi_{R0} + \delta^2 d_0^2 \sin^2 \phi_{R0}}\Big)^2 - \Big(\frac{\cos^2 \phi_{R0}}{\cos^2 \phi_{R0} + \delta^2 d_0^2 \sin^2 \phi_{R0}}\Big)^2 = \\
&= \frac{\cos^2 \phi_{R0} + \delta^4 d_0^4 \sin^2 \phi_{R0} }{\cos^2 \phi_{R0} +\delta^2 d_0^2 \sin^2 \phi_{R0}} + 1 - \dfrac{2\cos^2 \phi_{R0}}{\cos^2 \phi_{R0} + \delta^2 d_0^2 \sin^2 \phi_{R0}} = \frac{1+ \delta^2 d_0^2}{1 + \ensuremath{\mathrm{ctg}}^2 \phi_{R0} / (\delta^2 d_0^2)}.
\end{split} \end{equation}
Let us denote $p_0 = d_0^2 / d_*^2 - 1$ and $q_0 = \ensuremath{\mathrm{ctg}}^2(\phi_{R0})$.
Then
\begin{equation} \begin{split}
\frac{d_1^2}{d_*^2} - 1 = \frac{1}{d_*^2} \left(\frac{1+ \delta^2 d_0^2}{1 + \ensuremath{\mathrm{ctg}}^2 \phi_{R0} / (\delta^2 d_0^2)}\right) - 1 = \\
\frac{1}{d_*^2} \left(\frac{1+ \delta^2 d_0^2}{1+ q_0 / (\delta^2 d_0^2)}\right) - 1 = \dfrac{1 + \delta^2 p_0}{1 + \dfrac{q_0}{\delta^2 d_*^2 (1 + p_0)} }-1,\\
q_1 = \ensuremath{\mathrm{ctg}}^2(\phi_{R1}) = \frac{\ensuremath{\mathrm{ctg}}^2(\phi_{R0})}{\delta^2 d_0^2} = \frac{q_0}{\delta^2 d_*^2(1+p_0)}.
\end{split} \end{equation}
This completes the proof of \eqref{prj:commuteq}.
\end{proof}
\begin{theorem}\label{prj:counter_thm}
Let the mappings $\Phi, \pi$ and the set $\Omega$ be defined as in Lemma~\ref{prj:pi_lemma}. Consider a matrix $Y_0 \in \pi^{-1}(\Omega)$ and the projector splitting integrator $I(A, D)$ defined by \eqref{thm:algi}. Then the sequence $Y_k = I(Y_{k-1}, \Phi(Y_{k-1})-Y_{k-1})$ converges to $Y_* = d_* X_{\perp}.$
\end{theorem}
\begin{proof}
We apply Lemma \ref{prj:pi_lemma}
$$\pi(Y_k) = f(\pi(Y_{k-1})) = f^{*k} (\pi(Y_0) ) $$
and then, using Lemma \ref{prj:f_lemma}, we have
$$\lim_{k \to \infty} \pi(Y_k) = \lim_{k \to \infty} f^{*k} (\pi(Y_0) ) =\{0, 0\}.$$
Lemma \ref{prj:pi_lemma} guarantees that the squared cotangents of the left and right angles go to zero, so $\lim_{k \to \infty} Y_k = Y_*$.
\end{proof}
\begin{remark}
Note that the condition $1< \delta^2 + \delta^6$ (it requires $\delta > 0.8$) significantly restricts the applicability of Theorem \ref{prj:counter_thm}. However, our numerical experiments show that the projector splitting method might not converge in computer arithmetic even when this condition does not hold.
\end{remark}
\section{Numerical examples}
\subsection{Typical case}\label{prj:bunchcase}
We consider the ``linear'' contraction mapping
\begin{equation*} \begin{split}
\Phi: \mathbb{R}^{n\times m} \to \mathbb{R}^{n\times m}, \quad \Phi(X) = X_* + Q(X-X_*),
\end{split} \end{equation*}
where $X_*$ is a rank-$r$ $n\times m$ matrix, $Q$ is a linear operator (on matrices), $n=m=40$, $r = 7$, and $\Vert Q \Vert < 0.8$. Let us denote the singular values of $X_*$ by $\sigma_i$, $1\leq i \leq r.$ The typical case corresponds to $\sigma_1 / \sigma_r \approx 10$. In this case the orthogonal component converges quadratically, as shown in Figure~\ref{prj:fig11}.
\begin{figure}[H] \centering \scalebox{0.7}{ \input{prj-est01.pgf}} \caption{Convergence rates for the typical case.}\label{prj:fig11} \end{figure}
\subsection{Stair case}\label{prj:staircase}
The stair case corresponds to the same $n, m, r$ and exponentially decaying singular values $\sigma_k = 10^{4-2k}, \quad 1\leq k \leq 7, \quad \sigma_1 / \sigma_r = 10^{12}$. The results are shown in Figure \ref{prj:fig2}. The orthogonal component decays quadratically until the next singular value is achieved. Meanwhile, the tangent component decays linearly, and once it hits the same singular value, the orthogonal component drops again. The steps on the ``stair'' correspond to the singular values of $X_*$. Numerical experiments show that the projector splitting method has ``component-wise'' convergence. Until the first $j$ singular components of the current point converge to the first $j$ singular components of the fixed point, the last $r-j$ components of $Y_k$ are ``noisy'' and do not contain useful information.
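These experiments are easy to reproduce. The sketch below is a minimal NumPy illustration (not the code used to produce the figures): it implements one step of the projector splitting integrator in the two-QR form recalled in the proof of Lemma~\ref{prj:pi_lemma} and applies it to a linear contraction mapping $\Phi(X) = X_* + Q(X-X_*)$ with the sizes of the typical case; the random construction of $X_*$ and the choice of $Q$ as a plain scaling by $\delta=0.5$ are illustrative assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 40, 40, 7

def random_orthonormal(rows, cols):
    # Random matrix with orthonormal columns.
    Q, _ = np.linalg.qr(rng.standard_normal((rows, cols)))
    return Q

def splitting_step(U0, S0, V0, D):
    # One step Y1 = I(Y0, D) of the projector splitting integrator,
    # written in the two-QR form: QR of (Y0 + D) V0, then QR of (Y0 + D)^T U1.
    U1, _ = np.linalg.qr(U0 @ S0 + D @ V0)
    V1, R = np.linalg.qr(V0 @ S0.T @ (U0.T @ U1) + D.T @ U1)
    return U1, R.T, V1

# Fixed point X_* with sigma_1 / sigma_r ~ 10 and the contraction
# Phi(X) = X_* + Q(X - X_*), here with Q a scaling by delta (illustrative).
sigma = 10.0 ** np.linspace(1.0, 0.0, r)
X_star = random_orthonormal(n, r) @ np.diag(sigma) @ random_orthonormal(m, r).T
delta = 0.5
Phi = lambda X: X_star + delta * (X - X_star)

# Random rank-r starting point Y0 = U S V^T.
U, V = random_orthonormal(n, r), random_orthonormal(m, r)
S = np.eye(r)

for k in range(25):
    Y = U @ S @ V.T
    U, S, V = splitting_step(U, S, V, Phi(Y) - Y)
    E = U @ S @ V.T - X_star
    # Tangent-space projection of the error at the current iterate.
    P_T = U @ (U.T @ E) + (E @ V) @ V.T - U @ (U.T @ E @ V) @ V.T
    print(k, np.linalg.norm(E), np.linalg.norm(E - P_T))
\end{verbatim}
Printing the error norm and its component orthogonal to the tangent space, as done in the last line, is expected to reproduce qualitatively the behaviour discussed above: a linear decay of the error and a faster decay of the normal component.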
\begin{figure}[H] \centering \scalebox{0.7}{ \input{prj-est02.pgf}} \caption{Convergence rates for the stair case.}\label{prj:fig2} \end{figure}
\subsection{Counter-example case}\label{prj:counterexamplecase}
For the following experiment we consider the ``nonlinear'' contraction mapping
$$ \Phi(X) = X_* + \delta \Vert X - X_* \Vert X_{\perp},$$
where $X$ is a $2\times 2$ matrix, $X_* = \begin{pmatrix} 1 & 0 \\ 0 & 0\end{pmatrix},$ $X_{\perp} = \begin{pmatrix} 0 & 0 \\ 0 & 1\end{pmatrix},$ $\delta = 0.5.$ This experiment shows that the original projector splitting method fails and converges to a different stationary point. Nevertheless, this stationary point is unstable; to demonstrate this, we introduce a perturbed projector splitting method:
$$Y_{k+1}^{pert} = I(Y_k^{pert}, \Phi(Y_k^{pert}) -Y_k^{pert} + R_k),$$
where $R_k$ is an $n\times m$ matrix with elements taken from the normal distribution\\ $\mathcal{N}\left(0, \frac{1}{100nm}\Vert\Phi(Y_k^{pert}) -Y_k^{pert} \Vert\right)$. The convergence is shown in Figure \ref{prj:fig3}:
\begin{figure}[H] \centering \scalebox{0.7}{ \input{prj-est03.pgf}} \caption{Convergence rates for the ``bad functional''.}\label{prj:fig3} \end{figure}
\section{Related work}
The projector splitting method arises naturally as a numerical integrator for the dynamical low-rank approximation of ODEs \cite{lubich-timett-2015, lubich-prj-2014} and was originally proposed in \cite{lubich-lrappr-2007}. In this paper we focused on the properties of the projector splitting method as a retraction onto the low-rank manifold \cite{absil-opt-2009}. It was compared with other retraction methods in the survey \cite{oseledets-survey-2015}.
Closely related results about convergence in the presence of small singular values were obtained in \cite{lubich-smallsing-2015}. The problem formulation is as follows. Let $X(t)$ be the solution of the ordinary differential equation (ODE):
\begin{align*}
\dot{X}(t) = F(t, X(t)), \quad X(0) = X_0,& \quad X(t) \in \mathbb{R}^{n\times m}, &t\in [0, T], \\
\Vert F(t, X_1) - F(t, X_2)\Vert \leq L \Vert X_1 - X_2 \Vert,& \quad \forall X_1, X_2 \in \mathbb{R}^{n\times m}, &\forall t\in [0, T],\\
\Vert F(t, X) \Vert \leq B,& \quad \forall X \in \mathbb{R}^{n\times m}, &\forall t\in [0, T]. \\
\end{align*}
We want to obtain an approximation to a stationary point $X_*$: $F(t, X_*) = 0$. We seek a low-rank approximation $Y(t)$ to $X(t)$, where $Y(t)$ satisfies the modified ODE:
\begin{equation}\label{prj:modODE}
\dot{Y}(t) = P(Y(t))F(t, Y(t)), \quad Y(0) = Y_0,\quad \rank Y(t) = r,
\end{equation}
where $P(Y(t))$ is a projector onto the subspace determined by $Y(t)$. \cite[Theorem 2.1]{lubich-smallsing-2015} states that the numerical approximation $ \widetilde{Y}(t)$ is stable despite the presence of small singular values of $Y(t)$. However, this result cannot be directly applied to optimization problems, and $F$ has to satisfy certain restrictions.
Another closely related result is the guaranteed local linear convergence of the alternating least squares optimization scheme for convex optimization problems \cite{uschmajew-alsconv-2013}. Local convergence results have also been obtained for modified alternating least squares schemes, such as maximum block improvement \cite{uschmajew-mbi-2015} and alternating minimal energy \cite{dolgov-amen-2014}, but for these methods the low-rank manifold changes at every step.
\section{Conclusions and perspectives}
Our numerical results show that the staircase is a typical case for linear contraction mappings. However, the conditions of the proved theorem only cover convergence at the last ``step'' of the stair.
We plan to formulate conditions on the contraction mapping $\Phi$ under which ``component-wise'' convergence, as in the stair case, is guaranteed. Our current hypothesis is that the ``extended'' mapping $\Phi_m(X, X_*)$ should also satisfy the contraction property for $X_*$. It will be very interesting to explain the nature of the stair-case convergence.
Another important topic for further research is to determine a viable ``a posteriori'' error indicator, since we do not know the orthogonal component. This will allow the development of a rank-adaptive projector-splitting-based scheme.
The main conclusion of this paper is that projected iterations are typically as fast as the unprojected ones. We plan to generalize the results of this paper to the tensor case.
\section*{Acknowledgements}
This work was supported by Russian Science Foundation grant 14-11-00659. We thank Prof. Dr. Christian Lubich and Hanna Walach for fruitful discussions about the projector splitting scheme and retractions onto a low-rank manifold. We also thank Maxim Rakhuba for his help in improving the manuscript.
\bibliographystyle{siam}
{ "attr-fineweb-edu": 1.575195, "attr-cc_en_topic": 12, "domain": "arxiv" }
\section*{Acknowledgments} We thank the technical and administrative staff at CERN and other CMS Institutes, and acknowledge support from: FMSR (Austria); FNRS and FWO (Belgium); CNPq, CAPES, FAPERJ, and FAPESP (Brazil); MES (Bulgaria); CERN; CAS, MoST, and NSFC (China); COLCIENCIAS (Colombia); MSES (Croatia); RPF (Cyprus); Academy of Sciences and NICPB (Estonia); Academy of Finland, ME, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG, and HGF (Germany); GSRT (Greece); OTKA and NKTH (Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN (Italy); NRF (Korea); LAS (Lithuania); CINVESTAV, CONACYT, SEP, and UASLP-FAI (Mexico); PAEC (Pakistan); SCSR (Poland); FCT (Portugal); JINR (Armenia, Belarus, Georgia, Ukraine, Uzbekistan); MST and MAE (Russia); MSTDS (Serbia); MICINN and CPAN (Spain); Swiss Funding Agencies (Switzerland); NSC (Taipei); TUBITAK and TAEK (Turkey); STFC (United Kingdom); DOE and NSF (USA). Individuals have received support from the Marie-Curie IEF program (European Union); the Leventis Foundation; the A. P. Sloan Foundation; and the Alexander von Humboldt Foundation. \section{The CMS Collaboration \label{app:collab}}\begin{sloppypar}\hyphenpenalty=500\input{CFT-09-002-authorlist.tex}\end{sloppypar} \end{document} \section{Summary} The Cosmic Run At Four Tesla has been an important experience for commissioning the tracker. The control and readout systems were successfully commissioned, synchronised to the Level-1 Trigger, and operated in global runs with all the other sub-detectors of the CMS experiment. The total number of modules used corresponds to 98.0\% of the total system. About 15 million events with a muon in the tracker were collected. The hit and track reconstruction are seen to have an excellent performance and the Combinatorial Track Finder\xspace, which will be used in proton-proton collisions as the default reconstruction algorithm, was tested successfully. The signal-to-noise performance is in the range 25-30 for thin modules and 31-36 for thick ones. The efficiency of hit reconstruction is above 99.5\%. In addition, with the collected data sample, it has been possible to calibrate the measurement of energy loss in silicon and to measure the Lorentz angle. The track reconstruction efficiency has been measured with two different methods: one using only muons reconstructed in the muon chambers and one using only data from the tracker. The reconstruction efficiency in data was found to be high and well described by the Monte Carlo simulation. For tracks passing close to the centre of the detector and having a direction close to the vertical axis, the reconstruction efficiency was found to be higher than 99\%. The resolution on hit position and track parameters was also consistent with expectations from Monte Carlo simulation. CRAFT demonstrated the successful operation of the tracker integrated with the other CMS subsystems. It was an important milestone towards final commissioning with colliding beam data. \section{Introduction} The primary goal of the Compact Muon Solenoid (CMS) experiment~\cite{cms} is to explore particle physics at the TeV energy scale exploiting the proton-proton collisions delivered by the Large Hadron Collider (LHC)~\cite{LHC}. The central tracking detector~\cite{cms} built for the CMS experiment is a unique instrument, in both size and complexity. It comprises two systems based on silicon sensor technology: one employing silicon pixels and another using silicon microstrips. 
The Pixel Detector surrounds the beampipe and contains 66~million detector channels~\cite{craftPixel}. The Pixel system is, in turn, surrounded by the Silicon Strip Tracker (SST), which is the subject of this paper. \begin{figure}[b] \begin{center} \includegraphics[width=\textwidth]{Figures/general_layout} \caption{Schematic cross section of the CMS tracker. Each line represents a detector module. Double lines indicate double-sided modules which deliver stereo hits.} \label{fig:tk-layout} \end{center} \end{figure} The SST consists of four main subsystems, shown in Fig.~\ref{fig:tk-layout}: the four-layer Tracker Inner Barrel (TIB), the six-layer Tracker Outer Barrel (TOB) and, on each side of the barrel region, the three-disk Tracker Inner Disks (TID), and the nine-disk Tracker End Caps (TEC). Each TID disk is made of three rings of modules, while TEC disks have seven rings. Overall, the tracker cylinder is 5.5\,m long and 2.4\,m in diameter, with a total active area of $198\, {\rm m}^2$, consisting of 15\,148 detector modules and comprising 9.3 million detector channels. Each detector module consists of a carbon or graphite fibre frame, which supports the silicon sensor and the associated front-end readout electronics. Four barrel layers and three rings in the end cap disks are equipped with double-sided modules, each of which is constructed from two single-sided modules mounted back-to-back with a stereo angle of 100\,mrad between the strips. The silicon sensors are made up of single-sided $p^+$ strips on $n$-bulk sensors with two different thicknesses: $320 \mum$ and $500\mum$ in the inner four and outer six layers of the barrel, respectively; $320\mum$ in the inner disks; and $320\mum$ and $500\mum$ in the inner four and outer three rings of the end cap disks, respectively. There are a total of fifteen different types of sensors in the SST, which vary in terms of strip length and pitch~\cite{sensors} to ensure that the single strip occupancy is low even at full LHC luminosity. The first experience of the SST operation and detector performance study was gained in summer 2006, when a small fraction of the SST was inserted into the CMS detector. Cosmic ray muon data were recorded in the presence of a solenoidal field up to the maximum design value of 4\,T. The results from this period of data-taking are described elsewhere~\cite{mtccPaper}. Construction of the full SST was completed in 2007 and 15\% of the full SST was commissioned and operated for several months prior to installation in the underground CMS experimental hall. The results of this period of stand-alone operation, known as the Slice Test, are also described elsewhere~\cite{TIF_Note, tifPaper}. The installation of the SST within CMS was completed during 2008 and the system underwent its first round of {\it in situ} commissioning together with the other CMS sub-detectors during summer 2008. The first operation of the SST in a 3.8\,T magnetic field took place during October-November 2008, when the CMS Collaboration conducted a month-long data-taking exercise known as the Cosmic Run At Four Tesla (CRAFT)~\cite{CRAFTGeneral}. This exercise provided valuable operational experience, as well as allowing, for the first time, a full study of the SST performance after installation. First results from the study are presented here. This paper is laid out as follows. The procedures used to commission the SST and the results from the round of {\it in situ} commissioning are presented and discussed in Section 2. 
The final data samples from CRAFT and the corresponding Monte Carlo simulations are described in Section 3. The performance results obtained from the CRAFT data samples for hit and track reconstruction are presented in Sections 4 and 5, respectively. \section{Commissioning the SST Control and Readout Systems} In order to bring the SST detector into an operational state suitable for data-taking, several commissioning procedures are required to checkout, configure, calibrate, and synchronise the various hardware components of the control and readout systems. The majority of the commissioning procedures are performed with the SST operating independently of the rest of the CMS experiment. Only the procedures that concern synchronisation to an external trigger, described in Section~\ref{sec:ext-synch}, require reconstructed particle trajectories from cosmic ray muons or LHC pp collision data. The commissioning of the SST aims to maximise the signal identification efficiency for in-time particles and minimise pileup due to out-of-time particles. The ultimate objective is to maximise the tracking efficiency while minimising the number of tracks caused by out-of-time signals from adjacent bunch crossings. Section~\ref{sec:readout-system} provides an overview of the SST control and readout systems. Section~\ref{sec:checkout} summarises the checkout procedures used to determine the functional components of these systems. Sections~\ref{sec:int-synch}-\ref{sec:ext-synch} review the various commissioning procedures and their performances. \subsection{The control and readout systems} \label{sec:readout-system} The major components of the SST readout system~\cite{ttdr} are: 15\,148 front-end detector modules that host 76\,000 APV25~\cite{APV25} readout chips, an analogue optical link system comprising 38\,000 individual fibres~\cite{LINKS}, and 440 off-detector analogue receiver boards, known as Front-End Drivers (FED)~\cite{FED}. The SST control system~\cite{ieeeFred} is driven by 46 off-detector digital transceiver boards, known as Front-End Controllers (FEC)~\cite{FEC}. The FECs distribute the LHC clock, triggers and control signals to the front-end detector modules via Communication and Control Units (CCU)~\cite{CCU}, which are hosted on 368 {\it control rings}. The APV25 readout chip samples, amplifies, buffers, and processes signals from 128 detector channels at a frequency of 40~MHz. Fast pulse shaping is therefore required to provide bunch crossing identification and minimise pileup. This is difficult to achieve with low noise and power levels, so the APV25 chip uses pre-amplifier and shaper stages to produce a CR-RC pulse shape with a relatively slow rise-time of 50~ns in an operating mode known as {\it peak}. An alternative mode, {\it deconvolution}, performs additional signal processing to constrain the signal to a single bunch crossing~\cite{Deconvolution} at the expense of a reduced signal-to-noise ratio. Deconvolution is expected to be the standard mode of operation. However, the results presented in this paper are based on data accumulated with peak mode operation, unless stated otherwise. \begin{figure}[t] \begin{centering} \includegraphics[width=0.54\textwidth]{Figures/apv-frame} \includegraphics[width=0.44\textwidth]{Figures/tick-mark} \par \end{centering} \caption{\label{fig:apv-data} (left) Two APV25 data frames multiplexed, containing a time stamp and the sensor pulse height information. 
(right) A feature of the APV25 data stream, known as a tick mark, that is heavily used by the checkout and commissioning procedures. The left and right figures have sampling intervals of 25~ns and 1.04~ns, respectively. } \end{figure} Figure~\ref{fig:apv-data} (left) shows an example of the raw data captured at 40~MHz by a single FED readout channel on receipt of a trigger. The data contain frames from two APV25 chips that are multiplexed (interleaved) together. A single frame comprises 12 bits of binary information that encodes time and error information, known as the digital header, followed by analogue pulse height data from 128 sensor strips. A trailing {\it tick mark} identifies the end of the frame. The structure observed in the pulse height data across the 128 channels is due to static offsets, known as {\it pedestals}, which are unique to each detector channel. Small, time-varying {\it common mode} shifts in the levels of all 128 channels are observed when operating. Figure~\ref{fig:apv-data} (left) also shows an example of a signal left by a minimum ionising particle. Signals are superimposed on the pedestal and common mode levels, which must be subtracted before the signal can be identified. In the absence of a trigger, no data frames are output by the APV25 chip, but tick marks are produced every 70 clock cycles. Figure~\ref{fig:apv-data} (right) shows the pulse shape of multiplexed tick marks from two APV25 chips that are reconstructed with an effective sampling frequency of 960~MHz. This tick mark feature is used heavily in the checkout and commissioning procedures detailed below. The FEDs can format the pulse height data from the APV25 chips in different ways. The first is Scope Mode (SM), which is simply a capture of the raw data, as shown in Fig.~\ref{fig:apv-data} (left). The second is Virgin Raw (VR), which removes all of the binary information (digital header and tick marks) and simply provides the digitised pulse height data from the sensors. Both modes provide digital samples with a 10-bit range and are used when commissioning the SST system and for debugging. The third and normal mode of operation is Zero Suppressed (ZS). This uses Field Programmable Gate Array (FPGA) chips to implement algorithms that perform pedestal subtraction, common mode subtraction, and identification of channels potentially containing signals above threshold. A threshold of five times the detector channel noise is used for single channels, but a threshold of only twice the channel noise is used for signals in contiguous channels. The zero-suppressed data are output with an 8-bit range. \subsection{Checkout of the detector components and cabling} \label{sec:checkout} The checkout procedures are used to identify: responsive and functional devices in the control and readout systems; the cabling of the readout electronics chain, from the front-end detector modules to the off-detector FED boards; the cabling of the Low Voltage (LV) and High Voltage (HV) buses of the power supply system~\cite{POWER}; and the mapping of the detector modules to their geometrical position within the SST superstructure. Automation is possible as each detector module hosts a Detector Control Unit (DCU) chip~\cite{DCU}, which broadcasts a unique indentifier via the control system. This identifier is used to tag individual modules. The cabling of the LV power supply system is established by sequentially powering groups of detector modules and identifying responsive devices via the control system. 
Similarly, the HV cabling is determined by applying HV to an individual channel and identifying detector modules responding with a decreased noise, due to reduced strip capacitance. Each front-end detector module hosts a Linear Laser Driver (LLD) chip~\cite{LLD}, which drives the optical links that transmit the analogue signals to the off-detector FED boards. The cabling of the readout electronics chain is established by configuring individual LLD chips to produce unique patterns in the data stream of the connected FED channels. The final number of modules used in the CRAFT data-taking period corresponds to 98.0\% of the total system. The most significant losses were from one control ring in each of the TIB and TOB sub-systems. In the TIB, this was due to a single faulty CCU. The remaining CCUs on this ring have since been recovered using a built-in redundancy feature of the control ring design. The fraction of operational modules was increased to 98.6\% after data-taking, once problems identified during checkout were investigated more fully. \subsection{Relative synchronisation of the front-end} \label{sec:int-synch} Relative synchronisation involves adjusting the phase of the LHC clock delivered to the front-end so that the sampling times of all APV25 chips in the system are synchronous. Additionally, the signal sampling time of the FED Analogue/Digital Converters (ADC) is appropriately adjusted. This procedure accounts for differences in signal propagation time in the control system due to differing cable lengths. This synchronisation procedure is important because signal amplitude is attenuated by as much as 4$\%$ per nanosecond mis-synchronisation due to the narrow pulse shape in deconvolution mode. Using the FED boards in Scope Mode, the tick mark pulse shape is reconstructed with a 1.04~ns step width by varying the clock phase using a Phase Locked Loop (PLL) chip~\cite{PLL} hosted by each detector module, as shown in Fig.~\ref{fig:apv-data} (right). The ideal sampling point is on the signal plateau, 15~ns after the rising edge of the tick mark. The required delays are thus inferred from the arrival times of the tick mark edges at the FED ADCs. The pre-synchronisation timing spread of up to 160~ns is reduced to an RMS of 0.72~ns, with the largest deviation of 4~ns corresponding to a maximum signal attenuation of $\sim$16\% in deconvolution mode. \subsection{Calibration of the readout system gain} \label{sec:tick-mark-calib} One of the largest contributions to gain variation in the readout system is the distribution of laser output efficiencies caused by the variation of laser-to-fibre alignment from sample to sample during production of the transmitters. In addition some loss may have been introduced at the three optical patch panels in the fibre system. Changes in the LV power supply or environmental temperature can also significantly affect the gain at the level of a FED readout channel. The calibration procedure aims to optimise the use of the available dynamic range of the FED ADCs and also equalise the gain of the entire readout system. This is achieved by tuning the bias and gain register settings of the LLD chip for individual fibres. Four gain settings are possible. The amplitude of the tick mark, which is assumed to be roughly constant in time and across all APV25 chips within the system, is used to measure the gain of each readout channel. 
The setting that results in a tick mark amplitude closest to 640 ADC counts is chosen, as this amplitude corresponds to the expected design gain of 0.8. After tuning the system, a spread of $\pm$20\% is observed, which is expected because of the coarse granularity of the LLD gain settings. The response of all detector channels can be further equalised during offline reconstruction by correcting the signal magnitude by the normalisation factor $f = 640~\mathrm{ADC~counts}~/ a_{\mathrm{tick mark}}$, where $a_{\mathrm{tick mark}}$ is the tick mark amplitude in ADC counts. The tick mark amplitude is a good indicator of the maximum output of the APV25 chip, which corresponds to a charge deposit of 175\,000~$\mathrm{e}^{-}$. This method provides a calibration factor of $274\pm14$~$\mathrm{e}^-$/ADC~count. The estimated systematic uncertainty is 5\%, attributable to the sensitivity of the tick mark amplitude to variations in the LV power supply and environmental temperature~\cite{TIF_Note}. \begin{figure}[t] \begin{centering} \includegraphics[height=0.40\textwidth]{Figures/PulseShape} \includegraphics[height=0.40\textwidth]{Figures/calibTEC} \par \end{centering} \caption{\label{fig:pulse-calib} (Left) An example of the CR-RC pulse shape of a single APV25 chip, before and after the pulse shape tuning procedure. (Right) Pulse height measurements using the on-chip calibration circuitry of APV25 chips in the TEC+. } \end{figure} \subsection{Tuning of the APV25 front-end amplifier pulse shape} \label{sec:pulse-shape} The shape of the CR-RC pulse from the APV25 pre-amplifier and shaper stages is dependent on the input capacitance, which depends on the sensor geometry and evolves with total radiation dose. By default, all APV25 chips are configured with pre-defined settings appropriate to the sensor geometry, based on laboratory measurements~\cite{Timing}. However, non-uniformities in the fabrication process result in a small natural spread in the pulse shape parameters for a given input capacitance. This issue is important for performance in deconvolution mode, which is sensitive to the CR-RC pulse shape. In order to maximise the signal-to-noise ratio and confine the signal to a single bunch crossing interval when operating in deconvolution mode, the rise time of the CR-RC pulse shape must be tuned to 50~ns and the signal amplitude at 125~ns after the signal maximum should be 36\% of the maximum. This tuning also reduces the timing uncertainties associated with the synchronisation procedures. Figure~\ref{fig:pulse-calib} (left) demonstrates how the CR-RC pulse shape of an APV25, operating in peak mode, can be improved by the procedure. \begin{figure}[t] \begin{centering} \includegraphics[width=0.48\textwidth]{Figures/TOBRPhi-Layer3} \includegraphics[width=0.48\textwidth]{Figures/TEC-_RPHI5NoiseMin} \par \end{centering} \caption{\label{fig:noise-analysis} (Left) Mean calibrated noise for individual APV25 chips on modules in the TOB single side layer 3. (Right) The ratio of minimum noise to median noise per APV25 chip. The distinct populations reflect the different noise sources within a module. } \end{figure} Figure~\ref{fig:pulse-calib} (right) shows the pulse height amplitude (in ADC counts) observed for a charge injection of 60\,000~$\mathrm{e}^-$ using the on-chip calibration circuitry of the APV25 chip. The charge injection provided by the calibration circuit is known with a precision of 5\% and can be used to calibrate the detector signal amplitude. 
A mean signal of 223~ADC~counts with a RMS of 29~ADC~counts was observed, giving a calibration factor of $269\pm13\, \mathrm{e}^-$/ADC~counts. This measurement is compatible with the calibration based on tick mark amplitudes, described in Section~\ref{sec:tick-mark-calib}. \subsection{Calibration of the detector channel pedestals and noise} \label{sec:noise} The mean level of the pedestals for the 128 channels of a given APV25 chip, known as the {\it baseline} level, can be adjusted to optimise the signal linearity and the use of the available dynamic range of the APV25. The baseline level for each APV25 chip is adjusted to sit at approximately one third of the dynamic range. Following this baseline adjustment, the pedestal and noise constants for each individual detector channel must be measured, as these values are used by the zero-suppression algorithms implemented in the FPGA logic of the FEDs. Pedestals and noise are both measured using a random, low frequency trigger ($\sim$10~Hz) in the absence of signal. Pedestals are first calculated as the mean of the raw data in each detector channel from a large event sample. They are subsequently subtracted from the raw data values for each event. Common mode offsets are evaluated for each APV25 chip per event by calculating the median of these pedestal-subtracted data. The median value is then subtracted from each channel. The noise for each detector channel is then defined to be the standard deviation of the residual data levels, which can be calibrated using the measurements described in Sections~\ref{sec:tick-mark-calib}~and~\ref{sec:pulse-shape}. Figure~\ref{fig:noise-analysis} (left) shows a distribution of the mean noise measured per APV25 chip, for TOB single side layer 3. The outliers correspond to APV25 chips from modules with unbiased sensors, due to problems in the HV power supply. \begin{table} \caption{Summary of the mean normalised noise for each type of sensor geometry. } \label{tab:Summary-of-noise} \vspace{1ex} \begin{centering} \begin{tabular}{|c|c|c|c|c|} \hline {Partition} & {Strip length (cm)} & {Total noise ( $\mathrm{e}^-$)} & {Pitch adapter ( $\mathrm{e}^-$)} & {Bare APV ( $\mathrm{e}^-$)} \tabularnewline \hline TEC Ring 1 & 8.52 & 757 & 421 & 245 \tabularnewline TEC Ring 2 & 8.82 & 791 & 434 & 265 \tabularnewline TEC Ring 3 & 11.07 & 832 & 450 & 250 \tabularnewline TEC Ring 4 & 11.52 & 843 & 437 & 257 \tabularnewline TEC Ring 5 & 14.44 & 1024 & 461 & 265 \tabularnewline TEC Ring 6 & 18.10 & 1097 & 513 & 270 \tabularnewline TEC Ring 7 & 20.18 & 1146 & 510 & 258 \tabularnewline \hline TOB Layers 1-4 & 18.32 & 1184 & 583 & 254 \tabularnewline TOB Layers 5-6 & 18.32 & 1205 & 538 & 261 \tabularnewline \hline TIB Layers 1-2 & 11.69 & 925 & 454 & 265 \tabularnewline TIB Layers 3-4 & 11.69 & 851 & 445 & 256 \tabularnewline \hline \end{tabular} \par\end{centering} \end{table} Modules with different sensor geometries are studied separately to account for the different strip lengths and pitch adapter layouts that affect the input capacitance. The mean normalised noise measured for the different sensor geometries are summarised in Table~\ref{tab:Summary-of-noise}. Fitting the mean noise versus silicon strip length, the following parameterisation is obtained: \[noise(\mathrm{e}^-)=(427\pm39)+(38.7\pm3.0)\times length(cm)\] This is compatible with the measurement performed during the SST integration period, prior to installation~\cite{cms}. 
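As an illustration of this procedure, the following minimal NumPy sketch computes pedestals, common-mode shifts and noise from simulated random-trigger data for a single APV25 chip. The event count, pedestal range and noise levels are illustrative assumptions, and the code is neither the FED firmware nor the CMSSW implementation.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

# Simulated random-trigger (signal-free) raw data for one APV25 chip:
# n_events frames of 128 channels with per-channel pedestals, a per-event
# common-mode shift and Gaussian noise (all values in ADC counts).
n_events, n_channels = 5000, 128
true_pedestals = rng.uniform(250, 350, size=n_channels)
common_mode = rng.normal(0.0, 3.0, size=(n_events, 1))
raw = true_pedestals + common_mode + rng.normal(0.0, 4.0, size=(n_events, n_channels))

# 1) Pedestals: mean of the raw data per channel over a large event sample.
pedestals = raw.mean(axis=0)

# 2) Common mode: per-event median of the pedestal-subtracted data of the chip.
ped_subtracted = raw - pedestals
cm = np.median(ped_subtracted, axis=1, keepdims=True)

# 3) Noise: standard deviation of the residuals per channel.
residuals = ped_subtracted - cm
noise = residuals.std(axis=0)

# These constants feed the FED zero suppression (5x noise for single strips,
# 2x noise for contiguous strips) and the offline clustering thresholds.
print("mean pedestal [ADC]:", pedestals.mean().round(1))
print("mean noise    [ADC]:", noise.mean().round(2))
\end{verbatim}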
The individual sources of noise on the detector module can be identified and measured by plotting the ratio of the minimum to the median noise value for each APV25, as shown in Fig.~\ref{fig:noise-analysis}~(right) and summarised in Table~\ref{tab:Summary-of-noise}. The ratio takes advantage of the fact that broken wire bonds on the detector modules effectively reduce the input capacitance to individual channels of the APV25 chips. Broken wire bonds can occur between (in ascending order of capacitance): the APV25 and pitch adapter; the pitch adapter and silicon sensor; and sensors in two-sensor modules. Fitting to the first three populations, corresponding to the broken wire configurations listed above, provides an estimate of the different noise contributions. The fourth population corresponds to modules with no broken wires.
\begin{figure}
\begin{centering}
\includegraphics[width=0.48\textwidth]{Figures/latencyCurves2}
\includegraphics[width=0.48\textwidth]{Figures/finedelay}
\par
\end{centering}
\caption{\label{fig:latency-scan} (Left) Mean signal of leading strip in clusters associated to tracks as a function of the latency (25 ns steps), for each of the four partitions. (Right) Fine delay scan for TOB layer 3, in deconvolution mode. The mean position ($-14.2$\,ns) includes the mean time-of-flight of particles from the muon system to the silicon sensors (12\,ns).}
\end{figure}
\subsection{Absolute synchronisation to an external trigger}
\label{sec:ext-synch}
The last two commissioning procedures concern the synchronisation of all modules in the SST with the Level-1 trigger of CMS. This was done using a dedicated trigger provided by the Muon Drift Tube sub-detector~\cite{MUON}, based on a coincidence between centrally-located top and bottom chambers. The procedure requires track reconstruction and the analysis was performed offline~\cite{Timing}. Absolute synchronisation accounts for both the delays introduced by the hardware configuration and the effects due to the time-of-flight of particles.
The first of the two procedures is a coarse scan in time, in steps of 25~ns, by adjusting the latency between the trigger arrival and the sampling time of the APV25 chip. The mean signal of the channel with the largest signal amplitude ({\it leading strip}) in clusters associated to reconstructed tracks was extracted as a function of the latency. The signal magnitude was corrected for the track path length through the active sensor volume, inferred from the track angle. The latency measurement was performed for the tracker as a whole, but fine adjustments for each partition were made relative to the TOB results: TIB and TEC- were shifted by 12.5 ns and TEC+ by -12.5 ns, as shown by the fits in Fig.~\ref{fig:latency-scan} (left). Time-of-flight is not taken into account in this procedure, since the variations expected across the detector ($\leq$10~ns with cosmic ray muons, 5~ns in collisions) are lower than the target precision of 25~ns.
The last procedure comprises a fine tuning of the synchronisation. It involves skewing the clock delay in steps of 1~ns around the expected optimal value for all modules of a given test layer, with the configuration of all other modules in the SST unchanged with respect to the value obtained from the coarse latency scan. Clusters on the test layer compatible with a reconstructed track are used to reconstruct the pulse shape. Figure~\ref{fig:latency-scan} (right) shows the resulting pulse shape from clusters found in modules of TOB layer 3, acquired in deconvolution mode.
With collision data, the time-of-flight can be adjusted for each individual track, but this is not the case for cosmic ray muons, for which the jitter from the trigger cannot be subtracted. The 14~ns shift observed is consistent with the expected time-of-flight (12~ns) of cosmic ray muons from the Muon Drift Tube chambers to the TOB layer 3.
From analysis of the latency and fine delay scans, correction factors are computed to compensate for the residual mis-synchronisation of each partition. These corrections range from 1.0 to 1.06 with uncertainties of about 0.03 and are used to correct the cluster charge in calibration and $dE/dx$ studies, reported below.
\section{Data Samples and Monte Carlo Simulations}
\label{sec:data-samples}
In the following sections, the performance of the tracker will be analysed using the data collected during CRAFT. The event reconstruction and selection, data quality monitoring and data analysis were all performed within the CMS software framework, known as CMSSW~\cite{ptdr}. The data quality was monitored during both the online and offline reconstruction~\cite{CRAFTWorkflow}. The data were categorised and the results of this categorisation procedure propagated to the CMS Dataset Bookkeeping System~\cite{dbs}. Unless otherwise stated, only runs for which the quality was certified as good, i.e., no problems were known to affect the Trigger and Tracker performance, were used for the analyses presented in this paper.
The data-taking period can be split into three distinct intervals in time, based on magnetic field conditions and tracker performance. Each period has approximately uniform conditions. In the first period, period A, part of the SST was not correctly synchronised with the rest of the CMS detector. This problem was fixed for data taken in subsequent periods. The magnet was at its nominal field value of 3.8~T during periods A and B, while period C corresponds to data taken with the magnet switched off. Unless stated otherwise, the following results are based only on events from period B.
For the studies presented in this paper, the events selected by the Global Muon Trigger~\cite{CRAFTTrigger} were used. This data sample was additionally filtered to include only events that contain at least one reconstructed track in the tracker or that have a track reconstructed in the muon chambers whose trajectory points back into the SST barrel volume.
Several analyses use a simulated sample of 21 million cosmic ray muons to derive correction factors and compare results. The sample was generated using the CMSCGEN package~\cite{CMSCGEN, CMSCGEN_2}. The detector was simulated using the standard program of CMSSW. Modules known to be excluded from the read-out were masked in the simulation. Besides this, the simulation was not optimised to the conditions of CRAFT. Nevertheless, the agreement with the data was sufficient for the purpose of the studies presented.
\section{Performance of the Local Reconstruction}
\label{localreco}
In this section, the reconstruction at the level of the single detector module is presented. The cosmic ray muon rate is small and events with more than one track are rare. So with zero-suppression only a tiny fraction of the SST channels are read out in any one event. These channels, which pass zero-suppression and therefore have non-zero ADC counts, are known as {\em digis}. Despite the zero suppression, digis may still consist only of noise. Clusters are formed from digis by means of a three-threshold algorithm~\cite{ptdr}.
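A minimal sketch of such a three-threshold clustering is given below; the thresholds are those quoted in the next paragraph, while the function name, the module size and the toy input are illustrative assumptions (this is not the CMSSW implementation).
\begin{verbatim}
import numpy as np

def find_clusters(charge, noise, seed_thr=3.0, neigh_thr=2.0, clus_thr=5.0):
    """Three-threshold clustering on one module.

    charge, noise: per-strip charge and noise arrays (ADC counts).
    Returns a list of (first_strip, last_strip, cluster_charge).
    """
    clusters = []
    used = np.zeros(len(charge), dtype=bool)
    seeds = np.where(charge > seed_thr * noise)[0]
    for seed in seeds:
        if used[seed]:
            continue
        left = right = seed
        # Attach neighbouring strips above twice their own noise.
        while left - 1 >= 0 and charge[left - 1] > neigh_thr * noise[left - 1]:
            left -= 1
        while right + 1 < len(charge) and charge[right + 1] > neigh_thr * noise[right + 1]:
            right += 1
        used[left:right + 1] = True
        strips = slice(left, right + 1)
        total_charge = float(charge[strips].sum())
        cluster_noise = float(np.sqrt((noise[strips] ** 2).sum()))
        # Keep the candidate only if its charge exceeds 5x the cluster noise.
        if total_charge > clus_thr * cluster_noise:
            clusters.append((int(left), int(right), total_charge))
    return clusters

# Toy example: flat noise of 5 ADC counts, one deposit shared by two strips.
noise = np.full(768, 5.0)
charge = np.zeros(768)
charge[400:402] = [60.0, 25.0]
print(find_clusters(charge, noise))   # -> [(400, 401, 85.0)]
\end{verbatim}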
Clusters are seeded by digis which have a charge that is at least three times greater than the corresponding channel noise. For each seed, neighbouring strips are added if the strip charge is more than twice the strip noise. A cluster is kept if its total charge is more than five times the cluster noise, defined as $\sigma_{\mathrm{cluster}} = \sqrt{ \sum_i \sigma_i^2 }$, where $\sigma_i$ is the noise from strip $i$, and the sum runs over all strips in the cluster. In the following, the properties of both digis and clusters are studied and the performance of each SST subsystem is assessed.
\subsection{Occupancy}
The average number of digis per event and the occupancy are shown for each SST subsystem in Table~\ref{tab:digirate}. The strip occupancy is computed after removing the masked modules (2.0\%). The average occupancy in the SST is $4\times 10^{-4}$, as expected from simulation and from the properties of the zero suppression algorithm. The digi occupancy is dominated by noise, but the cluster algorithm reconstructs fewer than ten hits per event when there is no track within the SST acceptance.
\begin{table}
\caption{Strip occupancies in the SST subsystems. \label{tab:digirate} }
\begin{center}
\begin{tabular}{|l|c|c|c|c|}
\hline
 & TIB & TOB & TID & TEC \\
\hline
Average number of digis per event & 720 & 1000 & 300 & 1700 \\
Number of readout channels / 10$^6$ & 1.8 & 3.1 & 0.6 & 3.9 \\
Strip occupancy from digis (\%) & 0.04 & 0.03 & 0.05 & 0.04 \\
\hline
Average number of clusters per event due to noise & 1.0 & 2.0 & 0.3 & 3.0 \\
\hline
\end{tabular}
\end{center}
\end{table}
\subsection{Signal-to-noise ratio \label{sec:s2n} }
The signal-to-noise ratio is a benchmark for the performance of the SST. It is particularly useful for studying the stability over time. In the signal-to-noise ratio, the cluster noise is divided by $\sqrt{N_{\mathrm{strips}}}$, so that the resulting noise value is approximately equal to the strip noise, independently of the size of the cluster.
\begin{figure}[hbtp]
\begin{center}
\includegraphics[width=0.48\linewidth]{Figures/Signal_to_Noise_TIB}
\includegraphics[width=0.48\linewidth]{Figures/Signal_to_Noise_TOB}
\caption{Signal-to-noise ratio distributions of clusters associated to tracks in TIB layer 1 (left) and TOB layer 5 (right).}
\label{fig:CHG}
\end{center}
\end{figure}
The path-length corrected signal-to-noise ratio distributions are presented in Fig.~\ref{fig:CHG} for TIB layer 1 and TOB layer 5. The distributions have been fitted with a Landau function convoluted with a Gaussian function to determine the most probable value for the signal-to-noise ratio. The result is in the range 25-30 for thin modules and 31-36 for thick ones, and within 5\% of the expected values. Thick sensors collect about a factor of $5/3$ more charge than the thin sensors, but this does not simply scale up the signal-to-noise ratio, as the noise is also larger for thick sensors, because of the longer strips of these modules.
The fit of the signal-to-noise ratio can also be performed on a run-by-run basis; Figure~\ref{fig:SNFIT} shows the most probable value as a function of run number, allowing the stability to be monitored over time. Figure~\ref{fig:SNFIT} is divided into the three main data-taking periods as discussed in Section~\ref{sec:data-samples}. It can be seen that in period A the signal-to-noise ratio was lower because muons were out-of-time in the modules not correctly synchronised with the trigger.
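The run-by-run monitoring described above amounts to fitting the signal-to-noise distribution of each run and tracking its most probable value. The snippet below illustrates the idea with SciPy, using the Moyal distribution as a simple analytic stand-in for the Landau function convoluted with a Gaussian (the actual analysis uses the full convolution); the input sample is simulated and the numerical values are illustrative assumptions.
\begin{verbatim}
import numpy as np
from scipy.stats import moyal

def most_probable_value(s2n_values):
    """Fit a Moyal shape and return its mode as the most probable S/N."""
    loc, scale = moyal.fit(s2n_values)
    return loc  # the mode of the Moyal distribution is at loc

# Toy example: a thin-sensor-like sample with an MPV close to 27.
rng = np.random.default_rng(2)
sample = moyal.rvs(loc=27.0, scale=2.5, size=20000, random_state=rng)
print(round(most_probable_value(sample), 1))
\end{verbatim}
Repeating this fit for every certified run and plotting the returned value against the run number gives a stability plot of the kind shown in Fig.~\ref{fig:SNFIT}.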
Temporal variations of 5\% arise from residual pedestal and timing mis-calibrations. \begin{figure}[hbtp] \begin{center} \includegraphics[width=0.7\linewidth]{Figures/StoNTrend} \caption{Signal-to-noise ratio versus the run number. The error bars represent the uncertainty associated with the Landau fit described in the text.} \label{fig:SNFIT} \end{center} \end{figure} \subsection{Gain calibration \label{sec:gain}} The charge released in the silicon sensors by the passage of a charged particle is processed by the readout electronics chain described in Section~\ref{sec:readout-system}. The ratio of ADC counts output after FED digitisation to the originally-released charge corresponds to the gain of the electronics chain. Particle identification using energy loss in the silicon detectors~\cite{pid-dedx} is known to be sensitive both to the absolute calibration scale and to gain non-uniformities. It is therefore important that these non-uniformities be corrected for and that the conversion factor between deposited energy and ADC counts is measured precisely. \subsubsection{Inter-calibration of gain \label{sec:gainnorm}} The electronics gain can be made uniform throughout the SST simply by scaling the tick mark heights measured during calibration to an appropriate value. However, this procedure will not take into account gain changes due to temperature variations and non-uniformities in the sensor response to a traversing particle, e.g., because of trigger synchronization, or because the sensor is not fully depleted. For particle identification with energy loss, non-uniformity must not exceed 2\%~\cite{pid-dedx}. This level of inter-calibration can be achieved only using the signals produced by particles. The path length corrected charge of those clusters associated with tracks was fitted with a separate Landau curve for each APV25 chip. Figure~\ref{fig:gain} shows the distribution of most probable values for APVs with at least 50 clusters, subdivided by sensor thickness. The spread of these distributions is around 10\%. The most probable value of each distribution is then used to compute the inter-calibration constants by normalising the signal to 300 ADC counts/mm -- the value expected for a minimum ionising particle with a calibration of 270 e$^-$/ADC~count (Section~\ref{sec:pulse-shape}). The inter-calibration constants determined in this manner were used in the final reprocessing of the CRAFT data, resulting in a uniform response. \begin{figure} \begin{center} \includegraphics[width=0.45\linewidth]{Figures/MPVs_B} \caption{Most probable value of the cluster charge for different thicknesses before gain calibration.} \label{fig:gain} \end{center} \end{figure} \subsubsection{Absolute calibration using energy deposit information \label{sec:dedx}} In addition to the inter-calibration constants, for particle identification using energy loss, the ratio of deposited charge to ADC counts must be measured. The energy loss by particles traversing thin layers of silicon is described by the Landau-Vavilov-Bichsel theory~\cite{bichsel}. The most probable energy deposition per unit of length, $\Delta_p/x$, is described by the Bichsel function and depends on both the silicon thickness and the particle momentum. For muons, the function has a minimum at $0.5$ GeV/$c$ and then rises to reach a plateau for momenta greater than 10 $\ensuremath{\mathrm{GeV}/c}$. 
The absolute gain calibration can be determined by fitting the Bichsel function predictions to the measured $\Delta_p/x$ values from the CRAFT data sample. The quantity $\Delta_p/x$ is measured using the charge of clusters associated to tracks as a function of track momentum. The resulting charge distributions are fitted with a Landau convoluted with a Gaussian. Only tracks with at least six hits and $\chi^2/\mbox{ndf}$ less than 10 are considered. In addition, only clusters with fewer than four strips are taken into account. This last requirement is imposed in order to avoid mis-reconstructed clusters. Before the absolute calibration factor can be extracted from the cluster charge data, two corrections must be applied. Firstly, a correction is needed to take into account any charge loss in the zero-suppression process and during clustering. This is determined using Monte Carlo simulations for each subsystem and for both thin and thick sensors in the end caps. Secondly, a correction is needed to handle the imperfect synchronisation between the different subsystems. Overall, the uncertainty due to these corrections is estimated to be about $1.5$\%. Figure~\ref{fig:dEdx} shows the most probable value of energy deposition per unit length plotted as a function of the track momentum for both thin and thick sensors. The error bars reflect the uncertainty from the Landau fit, while the bands represent the fully-correlated systematic uncertainties from Monte Carlo corrections. The small dip at 5 $\ensuremath{\mathrm{GeV}/c}$ arises from a temporary problem in the trigger provided by a sector of the muon chambers, because of which this momentum region was contaminated with out-of-time particles. The absolute calibration factor is determined separately for each subsystem and for both thin and thick sensors in TEC+ and TEC-. The resulting values are given in Table~\ref{tab:dEdx}. If a fit is performed for all SST modules together, the absolute calibration factor is found to be 262$\pm$ 3 e$^-$/ADC count, which is very similar to the result in the TOB alone, which dominates the data sample. However, thick and thin modules are compatible and overall the result is in agreement with the value of $269\pm13$ e$^-$/ADC count obtained from the pulse calibration described in Section~\ref{sec:pulse-shape}. \begin{table} \begin{center} \caption{\label{tab:dEdx} Absolute gain calibration measured from energy deposit per unit length, $\Delta_p/x$.} \begin{tabular}{|l|c|c|c|c|c|c|} \hline Subsystem & TIB & TOB & TEC+ thin & TEC+ thick & TEC- thin & TEC- thick \\ \hline e$^-$/ADC count & $262.3^{+2.5}_{-3.5}$ & $261.5^{+0.5}_{-1.5}$ & $273^{+7}_{-9}$ & $270^{+7}_{-9}$ & $264^{+3}_{-4}$ & $261^{+3}_{-4}$ \\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[ht] \begin{center} \includegraphics[width=0.5\linewidth]{Figures/dEdx} \caption{Most probable energy deposit per unit of length $\Delta_p/x$ as a function of track momentum, for thin and thick sensors. The shaded bands show the correlated systematic uncertainties on the measurements. The curves are the expectations from the Bichsel function~\cite{bichsel} as explained in the text.} \label{fig:dEdx} \end{center} \end{figure} \subsection{Lorentz angle measurement \label{sec:lorentz} } In the silicon sensors, the electric field is perpendicular to the strips. For normal incidence particles, typically only one strip is hit and the cluster size increases with the angle of incidence. 
In the presence of a magnetic field, however, the drift direction is tilted by the Lorentz angle, as illustrated in Fig.~\ref{fig:lorentz}. The effect is shown, for one module in layer 4 of TOB, in Fig.~\ref{fig:TK-loren}, which shows a profile plot of cluster size versus the tangent of the incidence angle. To extract the Lorentz angle, this distribution is fitted with the function:
\begin{displaymath}
f(\theta_t)=\frac{h}{P}\cdot p_1 \cdot |\tan\theta_t - p_0| + p_2
\end{displaymath}
where $h$ is the detector thickness, $P$ is the pitch, and $p_0$, $p_1$ and $p_2$ are the fit parameters. The parameter $p_0$ is, in effect, $\tan \theta_L$, while $p_1$ represents the slope of the line divided by the ratio of thickness to pitch. The third parameter, $p_2$, is the average cluster size at the minimum.
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.5\linewidth]{Figures/lorentz_shift}
\caption{Lorentz drift in the microstrip sensors.}
\label{fig:lorentz}
\end{center}
\end{figure}
\begin{figure}[hbtp]
\begin{centering}
\includegraphics[width=0.5\linewidth]{Figures/TOB4module}
\caption{Cluster size versus incident angle in one module of TOB Layer 4.\label{fig:TK-loren}}
\end{centering}
\end{figure}
The Lorentz angle is measured for each individual module. The mean $\tan\theta_{L}$ is $0.07$ in TIB and $0.09$ in TOB, with an RMS of $0.02$ and $0.01$, respectively. A small difference between TIB and TOB is expected because the hole mobility depends on the electric field, and therefore, for the same applied voltage, on the thickness. The Lorentz angle correction applied to clusters during track reconstruction is relatively small -- of the order of 10 $\mu$m -- but it is still larger than the overall alignment precision~\cite{craftAlign}. The alignment procedure can therefore provide a useful method of cross-checking the Lorentz angle measurements. In particular, it is useful to compare the residual distributions from data with and without the magnetic field applied. Results from the tracker alignment procedure confirm the measurements presented here~\cite{craftAlign}.
\subsection{Hit efficiency}
The hit efficiency is the probability of finding a cluster in a given silicon sensor that has been traversed by a charged particle. In order to calculate the hit efficiency, track seeding, finding, and reconstruction must be performed. The results presented here have been determined using the Combinatorial Track Finder\xspace for cosmic ray muon events (see Section~\ref{sec:trackalgo} for further details), excluding the clusters in the layer of the SST for which the hit efficiency is to be determined. The efficiency for a given module in this layer is then calculated by finding tracks that pass through that module and determining if a cluster is, in fact, present. A single run from the CRAFT dataset has been used in order to ensure that the number of excluded modules did not change. A very long run was chosen so that the track statistics were sufficient. There were between 16\,400 and 104\,800 tracks per barrel layer and between 1700 and 6500 per end cap layer. The analysis was limited to events that contained only one track, which was required to have a minimum of eight hits and no more than four missing hits. To ensure that the muon has actually passed through the module under study, the location of the extrapolation of the track trajectory on the module surface was required to be no closer to the sensor edge than five times the position uncertainty of the extrapolated point.
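As an illustrative aside (not part of the reconstruction software used for these results), the following Python sketch shows how such a per-layer hit efficiency could be tallied from a list of track--module crossings, including the requirement that the extrapolated point be at least five times its position uncertainty away from the sensor edge. The data structure and all numerical values are hypothetical.
\begin{verbatim}
# Each crossing: (layer, uncertainty of the extrapolated point in um,
#                 distance of the extrapolated point to the sensor edge in um,
#                 whether a compatible cluster was found in the module).
crossings = [
    ("TIB L1", 35.0, 4000.0, True),
    ("TIB L1", 40.0, 150.0, True),    # too close to the edge -> not used
    ("TOB L3", 30.0, 2500.0, False),
    ("TOB L3", 25.0, 5000.0, True),
]

n_expected, n_found = {}, {}
for layer, sigma_pred, dist_edge, has_cluster in crossings:
    if dist_edge < 5.0 * sigma_pred:
        continue                      # edge requirement described in the text
    n_expected[layer] = n_expected.get(layer, 0) + 1
    n_found[layer] = n_found.get(layer, 0) + int(has_cluster)

for layer in sorted(n_expected):
    eff = n_found[layer] / n_expected[layer]
    print(f"{layer}: hit efficiency = {eff:.3f} "
          f"({n_found[layer]}/{n_expected[layer]})")
\end{verbatim}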
The efficiency results per SST layer are shown in Fig.~\ref{fig:hiteff}. These measurements, which include all SST modules, are compatible with the expected overall percentage of excluded modules. If the modules that were excluded because of known problems were ignored in the efficiency calculation, the resulting efficiency would be greater than 99\% for most layers. No more than about $0.001$ of the inefficiency arises from isolated dead strips~\cite{tifPaper}, which are not taken into account in the efficiency calculation for Fig.~\ref{fig:hiteff} (right). The rest is attributed to modules that were problematic only for a short period of time and were therefore not identified by the other procedures described in this paper. Subsequent improvements, such as detailed logging of modules affected by temporary power supply problems (HV trips etc.), will allow such sources of inefficiency to be tracked more accurately in future data-taking.
\begin{figure}[bhtp]
\begin{center}
\includegraphics[width=0.48\linewidth]{Figures/HitEffSummary69912QualBad}
\includegraphics[width=0.48\linewidth]{Figures/HitEffSummary69912QualGood}
\caption{Average module hit efficiency per layer/disk, without any correction for disconnected or otherwise excluded modules (left) and after applying such corrections (right). The efficiency cannot be measured in the outermost layers of TOB (layer 6) or TEC (layer 9) without modifying the track reconstruction algorithm, because the track reconstruction requires the presence of a hit in the outermost layer or disk, depending on the track trajectory.}
\label{fig:hiteff}
\end{center}
\end{figure}
\section{Track Reconstruction}
In this section, the performance of the track reconstruction using the full tracker, including the pixel detector, is presented. Details of the commissioning and the performance of the hit reconstruction in the pixel detector can be found elsewhere~\cite{craftPixel}.
\subsection{Track reconstruction algorithms\label{sec:trackalgo}}
The two main algorithms used to reconstruct tracks from cosmic ray muons in CRAFT data are the Combinatorial Track Finder\xspace (CTF) and the Cosmic Track Finder (CosmicTF). The Combinatorial Track Finder\xspace is the standard track reconstruction algorithm intended for use with proton-proton collisions and the main focus of the present study; for these runs, it has been specially re-configured to handle the different topology of cosmic muon events. The second algorithm was devised specifically for the reconstruction of single-track cosmic ray muon events. Since it is meant as a cross-check of the Combinatorial Track Finder\xspace, it has not been tuned to the same level of performance. A full description of these algorithms can be found elsewhere~\cite{tifPaper}. There have been two significant changes in the Combinatorial Track Finder\xspace since its first use in the Slice Test, both relating to the seed finding phase. The Slice Test was performed without the presence of a magnetic field and with only limited angular coverage. Now that the full tracker is available, seed finding in the barrel uses TOB layers only and both hit triplets and pairs are generated. In the end caps, hits in adjacent disks are used to form hit pairs. The presence of the 3.8\,T magnetic field means that for hit-triplet seeds, the curvature of the helix yields an initial estimate of the momentum. For hit-pair seeds, an initial estimate of $2\,\ensuremath{\mathrm{GeV}/c}$ is used, which corresponds to the most probable momentum value.
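For hit-triplet seeds, the initial momentum estimate follows from the bending radius of the trajectory in the 3.8\,T field through the standard relation $p_T\,[\mathrm{GeV}/c] \approx 0.3\, B\,[\mathrm{T}]\, R\,[\mathrm{m}]$ for a singly charged particle. The short Python sketch below illustrates this relation for a hypothetical triplet of transverse-plane hit positions; it is meant purely as an illustration, and the hit coordinates are invented for the example.
\begin{verbatim}
import numpy as np

def circumradius(p1, p2, p3):
    """Radius (m) of the circle through three points in the transverse plane."""
    a = np.linalg.norm(p2 - p3)
    b = np.linalg.norm(p1 - p3)
    c = np.linalg.norm(p1 - p2)
    cross_z = ((p2[0] - p1[0]) * (p3[1] - p1[1])
               - (p2[1] - p1[1]) * (p3[0] - p1[0]))
    area = 0.5 * abs(cross_z)          # triangle area spanned by the three hits
    return a * b * c / (4.0 * area)

# Hypothetical hit triplet (x, y) in metres, e.g. from three TOB layers.
hits = [np.array([0.60, 0.010]), np.array([0.75, 0.018]), np.array([0.90, 0.028])]

B = 3.8                                # magnetic field strength in tesla
R = circumradius(*hits)                # bending radius in metres
pt = 0.3 * B * R                       # pT in GeV/c for a singly charged particle
print(f"bending radius = {R:.1f} m  ->  seed pT estimate = {pt:.1f} GeV/c")
\end{verbatim}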
The detector has been aligned with the methods described in Reference~\cite{craftAlign}.
\subsection{Track reconstruction results}
The numbers of tracks reconstructed by the two algorithms in the data from Period B, without applying any track quality criteria beyond those used during the track reconstruction itself, are 2.2 million with the Combinatorial Track Finder\xspace and 2.7 million with the Cosmic Track Finder\xspace. The number of reconstructed tracks per event is shown in Fig.~\ref{fig:trkNbr}, and Fig.~\ref{fig:trkValidation} shows the distributions of a number of track-related quantities compared between a subset of the data and Monte Carlo simulation. The large number of events without reconstructed tracks is mainly due to muons outside of the fiducial volume for which fewer than five hits are reconstructed in the tracker. It can be seen that reasonable agreement is found between the data and the Monte Carlo simulation, although there are some discrepancies that require further investigation. These are thought to be due to the reconstruction of showers by the track reconstruction algorithms. The Combinatorial Track Finder\xspace is capable of reconstructing more than one track per event, but as it has not been optimised to reconstruct showers, multi-track events tend to contain a number of fake or badly reconstructed tracks. These are mostly low momentum tracks with a small number of hits and large \ensuremath{\chi^2}\xspace values, and the fake rate is estimated to be around 1\%. For this reason, only single track events are used in the rest of the results presented in this paper, and the distributions shown in Fig.~\ref{fig:trkValidation} are only for single track events. Small discrepancies remain for tracks with fewer hits and low momentum. These could be due to detector noise and to limitations of the simulation in describing the low momentum range of cosmic ray muons, such as the position of the concrete plug covering the shaft. The simulation assumed that the CMS access shaft was always closed by a thick concrete plug, while, during the data-taking period, it was at times open or half-open. The absence of the concrete plug allows more low momentum muons to reach the tracker~\cite{MUON2}. The noise is responsible for fake hits added to genuine tracks and, occasionally, for fake tracks, which contribute to the discrepancies in the \ensuremath{\chi^2}\xspace distribution. By design, the Cosmic Track Finder reconstructs only one track per event. The difference between the number of tracks reconstructed by the two algorithms is mainly due to the minimum number of hits required during the pattern recognition phase. In the Combinatorial Track Finder\xspace a minimum of five hits is required, while only four are required in the case of the Cosmic Track Finder\xspace. It can be seen that a small number of tracks have fewer hits than these minimum requirements. This is because hits deemed to be outliers can still be removed in the track fitting phase. It can also be seen from Fig.~\ref{fig:trkValidation} that there is a significant number of tracks with many hits, indicating that tracks can be followed through the whole tracker and be reconstructed with hits in both the upper and lower hemispheres.
\begin{figure}[th]
\begin{center}
\centerline{ \includegraphics[width=7.5cm]{Figures/trk_numTracks} }
\caption{ Distribution of the number of tracks reconstructed per event with the two different algorithms.
For each algorithm, the total number of simulated Monte Carlo tracks is normalised to the number of observed tracks. }
\label{fig:trkNbr}
\end{center}
\end{figure}
\begin{figure}[th]
\begin{center}
\centerline{ \includegraphics[width=5cm]{Figures/trk_nHitPerTrack} \includegraphics[width=5cm]{Figures/trk_chi2ndof} \includegraphics[width=5cm]{Figures/trk_pt} }
\caption{ Distributions of several track-related variables for the two different algorithms in single track events: the number of hits per track (left), \ensuremath{\chi^2/\mbox{ndf}}\ (middle) and the transverse momentum (right). Note that for the \ensuremath{\chi^2/\mbox{ndf}}\ distribution, a log-scale is used for the y-axis. For each algorithm, the total number of simulated Monte Carlo tracks is normalised to the number of observed tracks. }
\label{fig:trkValidation}
\end{center}
\end{figure}
\subsection{Track reconstruction efficiency}
The track reconstruction efficiency for the two algorithms described above has been measured using two different methods. First, the efficiencies were measured by searching for a reconstructed track and matching it to a muon reconstructed only in the muon chambers. In the second method, the efficiency was measured using data just from the tracker, by reconstructing tracks independently in the upper and lower hemispheres of the tracker. In addition, the likely performance of the Combinatorial Track Finder\xspace in proton-proton collisions was estimated by running the algorithm with the appropriate settings and measuring the efficiency by comparing the two segments of traversing cosmic ray muons, i.e.\ the second method.
\subsubsection{Track reconstruction efficiency using muons reconstructed by the muon chambers}
\label{sec:tkEffMu}
In the first method, the track reconstruction efficiency is measured with respect to muons reconstructed using information from the muon chambers, and required to point within the geometrical acceptance of the tracker. This ensures that the muons have been identified independently of the tracker. The muons are first reconstructed by the muon chambers, combining segments of muon tracks reconstructed in the top and bottom hemispheres of the muon detectors in a global fit. These reference muons are required to have at least 52 hits in the muon chambers, which corresponds to having hits in at least five Drift Tube chambers. Combining segments from the two hemispheres removes muons which are absorbed by the CMS steel yoke before reaching the tracker. It also improves the track direction reconstruction, which is needed for the propagation through the detector. The efficiency is estimated with respect to reference muons with a topology similar to that expected in proton-proton collisions. This is achieved by requiring that the point of closest approach of the extrapolated muon to the centre of the detector is less than 30~cm in both the transverse and longitudinal directions. The absolute value of the pseudorapidity, $|\eta|$, is required to be less than $1$ and the azimuthal angle is required to be in the range $0.5<|\phi|<2.5$, effectively restricting the tracks to the barrel. These cuts also ensure that the tracks cross most of the layers of the tracker and traverse most modules perpendicularly. The efficiency is then measured by searching for a corresponding track reconstructed in the tracker. The efficiencies measured in the data and in the Monte Carlo simulation are compared in Fig.~\ref{fig:trkEff}~(left) and summarised in Table~\ref{tab:stdEffLHC}.
The efficiencies are higher than 99\% for both data and Monte Carlo simulation and for the two tracking algorithms. The difference between data and Monte Carlo observed around $20\,\ensuremath{\mathrm{GeV}/c}$ for the Cosmic Track Finder\xspace, while statistically significant, is small and has not been pursued further, since this algorithm will not be used in proton-proton collisions. The overall differences between data and Monte Carlo simulation are found to be smaller than 0.5\%.
\begin{table} [th]
\caption{Track reconstruction efficiencies for the two algorithms in data and in Monte Carlo simulation, measured with the muon-matching method.}
\label{tab:stdEffLHC}
\begin{center}
\begin{tabular}{|l|c|c|c|c|}
\hline
 & \multicolumn{2}{c|}{CTF} & \multicolumn{2}{c|}{CosmicTF} \\
\cline{2-5}
 & Data & MC & Data & MC \\
\hline
Efficiency (\%) & 99.78 $\pm$ 0.02 & 99.88 $\pm$ 0.01 & 99.47 $\pm$ 0.04 & 99.72 $\pm$ 0.01 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}[tbh]
\begin{center}
\centerline{ \includegraphics[width=7.5cm]{Figures/eff_Mu_ptDistrib_LHC} \includegraphics[width=7.5cm]{Figures/eff_TopBot_vs_pt_lhc} }
\caption{ Track reconstruction efficiency as a function of the measured transverse momentum of the reference track, as measured with the track-muon matching method (left) and the Top/Bottom comparison method (right). \label{fig:trkEff} }
\end{center}
\end{figure}
\subsubsection{Track reconstruction efficiency using tracker data only}
In the second method, the efficiency is measured using data from the tracker; no muon chamber information is included. This method has been used in previous cosmic ray muon data-taking exercises, when the efficiency was evaluated using track segments reconstructed separately in the TIB and TOB~\cite{tifPaper}. As cosmic ray muons pass through the tracker from top to bottom, the tracker was divided into two hemispheres along the $y=0$ horizontal plane for this study. The tracks were reconstructed independently in the two hemispheres. Tracks reconstructed in the upper hemisphere are referred to as {\em top tracks} and those reconstructed in the lower hemisphere as {\em bottom tracks}. Tracks in one hemisphere are used as references to measure the efficiency in the other hemisphere. Two such measurements are performed: $\epsilon (T|B)$, where, given a bottom track, a matching top track is sought, and $\epsilon (B|T)$, defined in the same way with the roles of the two hemispheres exchanged. The matching is performed by requiring that the two opposite-half tracks have pseudorapidities that satisfy $\vert \Delta\eta\vert<0.5$. Only events containing a single track with a topology similar to that expected in proton-proton collisions are analysed and the same track requirements that were applied in Section~\ref{sec:tkEffMu} are used. To reconstruct the two track legs independently, only seeds with hits in the top or bottom hemisphere are selected and, before the final track fit, the hits in the other hemisphere are removed from the track. After track segment reconstruction, a track is only retained for further analysis if it contains at least 7 hits and satisfies the requirement $\ensuremath{\chi^2/\mbox{ndf}} < 10$. Furthermore, to ensure that a matching track can be reconstructed, the extrapolation of the reference track into the other hemisphere is required to cross at least five layers. The efficiencies measured using this method are shown in Fig.~\ref{fig:trkEff}~(right) and Table~\ref{tb:trkEffTB}.
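To make the bookkeeping of the two conditional efficiencies explicit, the following Python sketch tallies $\epsilon(T|B)$ and $\epsilon(B|T)$ from a list of per-event top and bottom pseudorapidities using the $|\Delta\eta|<0.5$ matching criterion. The event list and its values are hypothetical, and the additional track-quality requirements described above are omitted for brevity.
\begin{verbatim}
# Hypothetical per-event (eta_top, eta_bottom) values; None means that no track
# segment was reconstructed in that hemisphere.
events = [
    (0.12, 0.15), (None, -0.40), (0.55, 0.48), (-0.30, None), (0.05, 0.75),
]

def efficiency(events, ref_is_bottom, max_delta_eta=0.5):
    n_ref, n_match = 0, 0
    for eta_top, eta_bottom in events:
        ref, other = ((eta_bottom, eta_top) if ref_is_bottom
                      else (eta_top, eta_bottom))
        if ref is None:
            continue                 # no reference track in this event
        n_ref += 1
        if other is not None and abs(other - ref) < max_delta_eta:
            n_match += 1             # matching track found in the other hemisphere
    return n_match / n_ref

print("eps(T|B) =", efficiency(events, ref_is_bottom=True))
print("eps(B|T) =", efficiency(events, ref_is_bottom=False))
\end{verbatim}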
The difference seen for low momentum tracks for the Cosmic Track Finder\xspace is small, and has not been pursued further. The lower efficiency for top tracks is primarily caused by a large inactive area in the upper half of TOB layer 4, which would otherwise be used to build track seeds. This will not be an issue for the track reconstruction that will be used in proton-proton collisions as in this case, tracks are seeded principally in the pixel detector with the tracking then proceeding towards the outer layers of the SST. The efficiencies measured in the Monte Carlo simulation are consistent with those measured in the data to within $1\%$. \begin{table}[hbt] \caption{\label{tb:trkEffTB} Overall track reconstruction efficiency measured with the top/bottom comparison method.} \begin{center} \begin{tabular}{|l|c|c|c|c|} \hline & \multicolumn{2}{c|}{CTF} & \multicolumn{2}{c|}{CosmicTF} \\ \cline{2-5} & Data & MC & Data & MC \\ \hline $\epsilon (B|T)$ (\%) &97.03$\pm$0.07 &97.56$\pm$0.04 & 94.01$\pm$0.10 &93.41$\pm$0.06 \\ $\epsilon (T|B)$ (\%) &95.61$\pm$0.08 &95.79$\pm$0.05 & 92.65$\pm$0.11 &93.19$\pm$0.07 \\ \hline \end{tabular} \end{center} \end{table} \subsubsection{Inside-out tracking method}\label{lhc_trk} Finally, to evaluate the algorithm that will be used during proton-proton collisions, the efficiency of the Combinatorial Track Finder\xspace with the appropriate settings is measured. The reconstruction process~\cite{ptdr} starts in the centre of the tracker and proceeds to the outside, using seeds constructed primarily in the pixel detector. The default Combinatorial Track Finder\xspace is optimised to reconstruct tracks that originate near the interaction point. By contrast, very few cosmic ray muons will pass through this region. In order to take this into account, only tracks for which the point of closest approach to the centre of the detector is less than 4~cm in the transverse direction and 25~cm in the longitudinal direction are used, effectively crossing the three barrel layers of the pixel detector. The tracks are reconstructed from a seed made with hit pairs from any combination of the innermost three layers of the SST; the nominal beam spot is used as an additional constraint in the transverse plane to provide the initial estimate of the track parameters. This is a legitimate approximation as long as the transverse impact parameter of the tracks is much smaller than the radius of the innermost detector layer used. Hits in the silicon pixel detector are not used in this analysis in the seed finding phase, as this imposes too strong a constraint on the tracks to come from the nominal beam spot. They are, however, identified in the pattern recognition phase and added to the track. The reconstruction efficiencies are estimated with respect to a reference track in one hemisphere of the tracker. A compatible seed and track is sought in the other hemisphere within a cone of radius $\Delta R < 1.0$ (where $\Delta R = \sqrt {\Delta \eta ^2 + \Delta \phi^2}$) opposite to the reference track. The cone size is kept very large compared to the angular resolution so that the matching procedure cannot bias the efficiency measurements. To avoid multi-track events, a track is not used as a reference if there is another track in the same hemisphere or within the matching cone. Fake tracks created by noisy hits are rejected by requiring that the reference tracks have at least 10 hits. 
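The angular matching in this method relies on the quantity $\Delta R = \sqrt{\Delta\eta^2 + \Delta\phi^2}$, where the azimuthal difference must be wrapped into $(-\pi,\pi]$. The Python sketch below illustrates this computation and the $\Delta R < 1.0$ requirement for hypothetical directions; the handling of the relative orientation of the two track legs (the search direction opposite to the reference track) is left out, so the search direction is simply given as an input.
\begin{verbatim}
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance Delta R, with Delta phi wrapped into (-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

# Hypothetical search direction (derived from the reference track) and
# candidate directions in the other hemisphere, given as (eta, phi).
search = (0.35, 3.05)
candidates = [(0.30, -3.10), (0.90, 1.20)]

for eta, phi in candidates:
    dr = delta_r(search[0], search[1], eta, phi)
    status = "matched" if dr < 1.0 else "rejected"
    print(f"candidate (eta={eta:+.2f}, phi={phi:+.2f}): "
          f"DeltaR = {dr:.2f} -> {status}")
\end{verbatim}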
The efficiencies measured using this method are shown in Fig.~\ref{fig:trkEffIO} and in Table~\ref{tb:trkEffIO}. These efficiencies can be further divided into a {\em seed finding} efficiency, which is the efficiency of building a seed for a given reference track, and a {\em pattern recognition} efficiency, which is the efficiency of reconstructing a track once a seed has been found. Inefficiencies affecting only a few detector channels have not been taken into account when calculating the overall efficiency from the data. The efficiencies measured in the Monte Carlo simulation match those measured in the data to within $1$\%. \begin{figure}[tb] \begin{center} \centerline{ \includegraphics[width=5cm]{Figures/effAll_tkOppo_pt} \includegraphics[width=5cm]{Figures/effSeed_tkOppo_pt} \includegraphics[width=5cm]{Figures/effFromSeed_tkOppo_pt} } \caption{ Track reconstruction efficiency (left), seed finding efficiency (middle), and pattern recognition efficiency (right) as a function of the measured transverse momentum of the reference track for inside-out tracking method. Note that the Monte Carlo points are shifted by $2\ensuremath{\mathrm{GeV}/c}$ so as to allow the uncertainties to be seen.} \label{fig:trkEffIO} \end{center} \end{figure} \begin{table}[tbh] \caption{\label{tb:trkEffIO} Reconstruction efficiency of the Inside-out tracking method. } \begin{center} \begin{tabular}{|l|c|c|} \hline & Data & MC \\ \hline Seed finding efficiency (\%) & 99.17 $\pm$ 0.12 & 99.30 $\pm$ 0.08\\ Pattern recognition efficiency (\%) & 99.79 $\pm$ 0.06 & 99.64 $\pm$ 0.05 \\ Track reconstruction efficiency (\%) & 98.96 $\pm$ 0.13 & 98.94 $\pm$ 0.09\\ \hline \end{tabular} \end{center} \end{table} \subsubsection{Summary of the track efficiency measurements} The three methods of efficiency calculation presented in this section yield consistent results and indicate that a high track reconstruction efficiency is attained for vertical tracks passing close to the nominal beam line, which is the topology most similar to the tracks from proton-proton collisions. Although the results are similar, some small differences were observed. The main difference between the efficiencies determined by the first and second methods arises from the fact that tracks are sought in only one half of the detector in the second method, while in the first method, tracks may be found from seeds produced in both halves of the tracker. The Combinatorial Track Finder\xspace algorithm has been fully tested and is well understood, yielding a high quality performance. The Cosmic Track Finder\xspace algorithm, while not tuned to the level of the Combinatorial Track Finder\xspace, also achieves good performance and provides a fundamental cross-check. The measurements of the ``Inside-out tracking method'' give confidence that the track reconstruction will perform well in proton-proton collisions. Finally, the efficiencies measured in the Monte Carlo simulation agree very well with those measured in the data once the known detector inefficiencies are accounted for in the simulation. This indicates that the tracker and the reconstruction algorithms are well understood. \subsection{Track parameter resolution} The track reconstruction can be further validated using the CRAFT data sample by splitting the tracks into two separate parts. A measure of the resolution of the track parameters can be determined by comparing the two legs of the split tracks. To perform this study, tracks are split at the point of closest approach to the nominal beam-line. 
The top and bottom legs are treated as two independent tracks and re-fitted accordingly. The track parameters are then propagated to their respective points of closest approach to the beam-line. This method has been tested using Monte Carlo simulation and found to work well. For the purposes of this study, only events in which the Combinatorial Track Finder\xspace reconstructed a single track whose point of closest approach to the beam-line is inside the volume of the pixel barrel are considered. The transverse momentum of the track must be greater than $4\,\ensuremath{\mathrm{GeV}/c}$ and its \ensuremath{\chi^2}\xspace must satisfy the requirement $\ensuremath{\chi^2/\mbox{ndf}} < 100$. In addition, the track must contain a minimum of 10 hits, with at least two hits being on double-sided strip modules. There must also be six hits in the pixel barrel subsystem. After splitting, each track segment is required to have at least six hits, three of which must be in the pixel barrel. The results of this analysis are summarised in Table~\ref{tb:trkParam}, while the distributions of the residuals and pulls of the inverse transverse momentum and the azimuthal ($\phi$) and polar ($\theta$) angles are shown in Fig.~\ref{fig:trkParam}. The corresponding distributions for the transverse ($d_{xy}$) and longitudinal ($d_{z}$) impact parameters can be found elsewhere~\cite{craftPixel}. For each track parameter, the residuals are defined as $\delta x = (x_1 - x_2)/\sqrt{2}$. The factor of $\sqrt{2}$ is needed to account for the fact that the two legs are statistically independent. The standardised residuals (or pulls) are defined by ${\widetilde{\delta x}} = (x_1 - x_2)/\sqrt{\sigma_{x1}^2 + \sigma_{x2}^2}$. In Table~\ref{tb:trkParam} the mean and standard deviation (referred to as the {\em resolution}) of a Gaussian fitted to the peak of the distributions are given. In order to get an estimate of the tails of the distributions, the half-widths of the symmetric intervals covering $95$\% of the distribution (also known as the {\em 95\% coverage}), which, in the case of a Gaussian distribution, correspond to twice the standard deviation, are also given in Table~\ref{tb:trkParam}. The same quantities are used to characterise the pull distributions. In this case, the standard deviations of the fitted Gaussians are taken as the pull values. It can be seen that the resolution of the angles and the impact parameters are well described by a Gaussian. The resolution as a function of the momentum has been presented elsewhere~\cite{craftAlign}. \begin{table}[bt] \caption{\label{tb:trkParam} Standard deviation, mean, and 95\% coverage of the residual and pull distributions of the track parameters. The units indicated pertain only to the residual distributions. } \begin{center} \begin{tabular}{|l|c|c|c|c|c|c|} \hline Track parameter & \multicolumn{3}{c|}{Residual distributions} & \multicolumn{3}{c|}{Pull distributions}\\ \cline{2-7} &Std. Dev. & Mean & 95\% Cov. & Std. Dev. 
& Mean & 95\% Cov.\\ \hline $p_T$ (\ensuremath{\mathrm{GeV}/c}) & 0.083 & 0.000 & 1.92 & 0.99 & 0.01 & 2.1\\ Inverse $p_T$ ($\gev^{-1}c$) & 0.00035& 0.00003 & 0.00213& 0.99 &-0.01 & 2.1\\ $\phi$ (mrad) & 0.19 & 0.001 & 0.87 & 1.08 &-0.02 & 2.4\\ $\theta$ (mrad) & 0.40 & 0.003 & 1.11 & 0.93 &-0.01 & 2.1\\ $d_{xy}$ ($\mu$m) & 22 & 0.30 & 61 & 1.22 & 0.00 & 2.9\\ $d_{z}$ ($\mu$m) & 39 & 0.28 & 94 & 0.94 &-0.01 & 2.1\\ \hline \end{tabular} \end{center} \end{table} \begin{figure}[ht] \begin{center} \centerline{ \includegraphics[width=0.48\linewidth]{Figures/res_ptinv} \includegraphics[width=0.48\linewidth]{Figures/pull_dcurv} } \centerline{ \includegraphics[width=0.48\linewidth]{Figures/res_dphi} \includegraphics[width=0.48\linewidth]{Figures/pull_dphi} } \centerline{ \includegraphics[width=0.48\linewidth]{Figures/res_dtheta} \includegraphics[width=0.48\linewidth]{Figures/pull_dtheta} } \caption{ Residual distribution (left) and pull distribution (right) of the inverse transverse momentum $1/\pt$ (top), azimuthal $\phi$ (middle), and polar $\theta$ angle (bottom). } \label{fig:trkParam} \end{center} \end{figure} \subsection{Hit resolution} The hit resolution has been studied by measuring the track residuals, which are defined as the difference between the hit position and the track position. The track is deliberately reconstructed excluding the hit under study in order to avoid bias. The uncertainty relating to the track position is much larger than the inherent hit resolution, so a single track residual is not sensitive to the resolution. However, the track position difference between two nearby modules can be measured with much greater precision. A technique using tracks passing through overlapping modules from the same tracker layer is employed to compare the difference in residual values for the two measurements in the overlapping modules~\cite{tifPaper}. The difference in hit positions, $\Delta x_{hit}$, is compared to the difference in the predicted positions, $\Delta x_{pred}$, and the width of the resulting distribution arises from the hit resolution and the uncertainty from the tracking predictions. The hit resolution can therefore be determined by subtracting the uncertainty from the tracking prediction. This overlap technique also serves to reduce the uncertainty arising from multiple scattering, by limiting the track extrapolation to short distances. Any uncertainty from translational misalignment between the modules is also avoided by fitting a Gaussian to the distribution of the differences between the residuals. For the purposes of this study, only events in which the Combinatorial Track Finder\xspace reconstructed a single track are used, and only overlaps from barrel modules for which the residual rotational misalignment is less than $5\mum$ are analysed. The \ensuremath{\chi^2}\xspace probability of the track is required to exceed $0.1$\% and the tracks must be reconstructed with at least 6 hits. In addition, the track momenta are required to be greater than $20\, \ensuremath{\mathrm{GeV}/c}$, ensuring that the uncertainty arising from multiple scattering is reduced to less than $3\mum$. Remaining uncertainties from multiple scattering and rotational misalignment between the overlapping modules are included as systematic uncertainties in the measurement. The distribution of the differences between the residuals is fitted, with the width containing contributions from the hit resolutions and the uncertainty from the tracking predictions. 
The latter is subtracted out in quadrature to leave the resolution on the difference of the hit positions between the two modules. As the two overlapping modules are expected to have the same resolution, the resolution of a single sensor is determined by dividing by $\sqrt{2}$. The sensor resolution is known to depend strongly on the angle of the track and the pitch of the sensor. The results are therefore determined separately for different sensor pitches and in 10 degree intervals for the track incidence angle. The results are shown in Table~\ref{tb:trkHitRes}, where they are compared to the predictions from Monte Carlo simulation. The agreement between the data and the predictions is very good for normally incident tracks, but suggests that the simulation may underestimate the resolution for larger track angles, as can be seen in the first two layers of TIB. The resolutions vary from $20$ to $56\mum$ for the position difference, which corresponds to a variation between $14$ and $40\mum$ in the single sensor resolution. \begin{table}[bt] \caption{\label{tb:trkHitRes} Hit resolution measured on CRAFT data and predicted by the model in the Monte Carlo simulation, for the different local track angles. All values are in microns.} \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|} \hline Sensor & Pitch & Resolution & \multicolumn{4}{|c|}{Track angle}\\ \cline{4-7} &$(\mu \mathrm{m})$ & $(\mu \mathrm{m})$& $0\de-10\de$ & $10\de-20\de$ & $20\de-30\de$ & $30\de-40\de$\\ \multirow{2}{*}{TIB 1-2} & \multirow{2}{*}{ 80} & Measurement& $ 17.2 \pm 1.9 $ & $ 14.3 \pm 2.3 $ & $ 17.4 \pm 3.2$ & $ 25.7 \pm 6.0$ \\ && MC Prediction& $ 16.6 \pm 0.5 $ & $ 11.8 \pm 0.5 $ & $ 12.4 \pm 0.6$ & $ 17.9 \pm 1.5$ \\ \hline \multirow{2}{*}{TIB 3-4} & \multirow{2}{*}{ 120} & Measurement& $ 27.7 \pm 3.6 $ & $ 18.5 \pm 3.1 $ & $ 16.1 \pm 3.1$ & $ 24.1 \pm 6.7$ \\ && MC Prediction& $ 26.8 \pm 0.7 $ & $ 19.4 \pm 0.8 $ & $ 17.2 \pm 0.3$ & $ 21.4 \pm 2.0$ \\ \hline \multirow{2}{*}{TOB 1-4} & \multirow{2}{*}{ 183} & Measurement& $ 39.6 \pm 5.7 $ & $ 28.0 \pm 5.8 $ & $ 24.8 \pm 6.5$ & $ 32.8 \pm 8.3$ \\ && MC Prediction& $ 39.4 \pm 1.3 $ & $ 27.8 \pm 1.2 $ & $ 26.5 \pm 0.3$ & $ 32.5 \pm 2.1$ \\ \hline \multirow{2}{*}{TOB 5-6} & \multirow{2}{*}{ 122} & Measurement& $ 23.2 \pm 3.6 $ & $ 19.5 \pm 3.6 $ & $ 20.9 \pm 6.1$ & $ 29.3 \pm 9.7$ \\ && MC Prediction& $ 23.8 \pm 0.9 $ & $ 18.0 \pm 0.5 $ & $ 19.2 \pm 1.2$ & $ 25.4 \pm 1.6$ \\ \hline \end{tabular} \end{center} \end{table}
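As an illustration of the quadrature subtraction and the $\sqrt{2}$ factor used above, the toy Python sketch below generates hypothetical double differences of residuals, estimates their width, removes an assumed track-prediction uncertainty in quadrature, and converts the result into a single-sensor resolution. In the analysis the width is obtained from a Gaussian fit; for brevity the sketch simply uses the sample standard deviation, and all numerical values are invented.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

sigma_hit_true = 25.0  # single-sensor resolution used to generate the toy data (um)
sigma_pred = 12.0      # uncertainty of the predicted position difference (um)
n = 20000

# Double difference d = (x_hit,1 - x_hit,2) - (x_pred,1 - x_pred,2) per overlap.
d = (rng.normal(0.0, sigma_hit_true, n) - rng.normal(0.0, sigma_hit_true, n)
     + rng.normal(0.0, sigma_pred, n))

sigma_d = d.std()                                 # width of the double difference
sigma_pair = np.sqrt(sigma_d**2 - sigma_pred**2)  # remove prediction term in quadrature
sigma_single = sigma_pair / np.sqrt(2.0)          # two sensors of equal resolution
print(f"width = {sigma_d:.1f} um -> single-sensor resolution = {sigma_single:.1f} um")
\end{verbatim}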
{ "attr-fineweb-edu": 1.078125, "attr-cc_en_topic": 12, "domain": "arxiv" }
\section{Introduction}
Stochastic control is a well-studied framework for dynamical systems with inherent stochastic uncertainties \citep{aastrom2012introduction}. The probability distributions of the uncertain variables in the system are usually assumed to be fully known, although such knowledge is difficult to obtain in practice as it may require numerous expensive experiments. Using an approximate distribution is not always reliable, especially for safety-critical systems, where a poor approximation may lead to catastrophic system behaviors~\citep{nilim2005robust}. Moreover, an exact dynamical model can also be difficult to obtain due to, e.g., limited measurements and/or complicated dynamical couplings. In either case, a model-based controller might not always work well. Thus, it is necessary to design a model-free controller that is robust against distribution errors. To tackle the distribution uncertainties in the system, one approach is to use distributionally robust (DR) control~\citep{delage2010distributionally}. Recent years have witnessed significant research efforts on DR optimization~\citep{wiesemann2014distributionally, gao2016distributionally,esfahani2018data}, which has also been applied to machine learning~\citep{chen2018robust}, Markov decision processes (MDPs)~\citep{xu2010distributionally, yang2017convex, yu2015distributionally} and control~\citep{van2015distributionally, zymler2013distributionally,yang2018wasserstein}. It assumes that the groundtruth distribution is contained in a given set of probability distributions, also known as the ambiguity set, and then optimizes a reasonable performance index over this set. The design of the ambiguity set is critical and usually requires merging the available prior statistical knowledge with data to reduce the conservativeness of the DR controller~\citep{gao2016distributionally,yang2018wasserstein,schuurmans2019safe}. However, most of the existing methods are model-based~\citep{delage2010distributionally,ben2013robust,yang2018wasserstein,schuurmans2019safe}, i.e., they require an exact dynamical model of the system. Reinforcement learning (RL)~\citep{sutton1998introduction}, as a sample-based method, has recently achieved tremendous progress in various control problems~\citep{mnih2013playing, mnih2015human-level, levine2013guided, lillicrap2015continuous, schulman2017proximal}. It is model-free in the sense that the controller is not built directly on the dynamical model. Under the dynamic programming framework, RL aims to approximately solve MDP problems by solely using training samples of the decision process. Since collecting training samples in the physical world can be expensive and time-consuming, RL algorithms are usually trained in a simulator. As expected, the resulting policies may fail to generalize to practical scenarios due to the model discrepancy between the simulated environment and the physical system. Robust RL algorithms aim to enhance the robustness of the policy~\citep{tessler2019action,peng2018sim,morimoto2005robust,pinto2017robust}. Inspired by the concepts of DR control, robust RL algorithms that address distribution errors have been developed~\citep{abdullah2019wasserstein, smirnova2019distributionally}. In \citet{abdullah2019wasserstein}, the over-fitting to training environments is tackled by minimizing a long-term cost function under the worst-case distribution over a Wasserstein ambiguity set.
In \citet{smirnova2019distributionally}, a safety learning method is proposed under a DR policy iteration scheme. However, the aforementioned DR RL frameworks are designed for solving general sequential decision-making problems without exploiting structural system information, e.g., the linearity of the optimal policy for a linear stochastic system. Thus, they lack global convergence guarantees and require a large number of training samples. As a consequence, the resulting controller cannot be guaranteed to stabilize the system. Note that stability is not an explicit concern in the classical RL setting, though it is among the most important requirements in control theory. These issues have drawn increasing attention in the RL community~\citep{fazel2018global, yang2019global, zhang2019policy, zhang2020policy}, yet to the best of our knowledge, there are few convergence guarantees in the DR RL setting. In this paper, we focus on the stochastic control problem in the presence of distribution uncertainties, without an exact model of the system dynamics. In particular, we consider the sample-based control problem for linear stochastic systems. To this end, we formulate it as a DR optimal control problem in which the disturbance distribution errors are measured by a Wasserstein ball. For tractability, we reformulate it as a two-player zero-sum game with Wasserstein penalties to design an optimal controller under the worst-case disturbance distribution. Then, we propose a DR Q-learning algorithm to learn a robust controller by using training data from a simulator. Our main contributions are summarized below:
\begin{enumerate}
\renewcommand{\labelenumi}{\rm(\alph{enumi})}
\item \textbf{An explicit min-max solution with stability guarantees.} We derive an explicit solution to the zero-sum game with Wasserstein penalties via Riccati-type iterations. In particular, the optimal controller is an affine function of the state. Moreover, we show that under mild assumptions, the optimal controller is always able to stabilize the system.
\item \textbf{Relations to {$H_{\infty}$} optimal control.} Using insights from the Q-function of the zero-sum game, we show the equivalence between the DR problem and its deterministic counterpart, and reveal that the classical $H_{\infty}$ optimal control is a special case of our DR optimal control formulation.
\item \textbf{Model-free algorithm with global convergence.} Leveraging the affine structure of the optimal controller, we propose a DR Q-learning algorithm to learn an optimal controller by solely using data from simulated system trajectories, and show its global convergence by building connections between Q-function iterations and value iterations. This is fundamentally different from the general DR RL in~\citet{abdullah2019wasserstein, smirnova2019distributionally}.
\end{enumerate}
The remainder of the paper is organized as follows. In Section \ref{sec:background}, we describe the stochastic control problem with distribution uncertainties and formulate the DR optimal control problem. In Section \ref{sec:drc}, we derive its closed-form solution, and show the stability of the closed-loop system. In Section \ref{sec:q learning}, we derive the Q-function and discuss its relations to the classical $H_{\infty}$ optimal control. In Section \ref{sec:algorithm}, we propose a DR Q-learning algorithm and show its global convergence.
In Section \ref{sec:experiment}, we demonstrate the convergence and effectiveness of the proposed algorithm via simulations.
\section{Problem Formulation}\label{sec:background}
In this section, we first describe the stochastic control problem for linear systems with distribution uncertainties. To derive a DR controller, we formulate it as a two-player zero-sum dynamic game with the Wasserstein metric.
\subsection{Optimal Control for Linear Stochastic Systems}
We consider a time-invariant linear stochastic system with full state feedback
\begin{equation} \label{equ:sys}
x_{k+1} = Ax_k + Bu_k + Ew_k,
\end{equation}
where the next state $x_{k+1}$ is a linear combination of the current state $x_k \in \mathbb{R}^n$, the control $u_k \in \mathbb{R}^{m}$ and the random disturbance $w_k \in \mathbb{R}^{d}$. The disturbance $w_k$ is a stationary process with probability distribution $\nu$. The state feedback policy $\pi$ is written in the form $u_k = \pi(x_1,x_2,\dots,x_k)$. The goal of stochastic optimal control is to find an optimal control policy $\pi^*$ that minimizes the long-term cost $J_x(\pi)$ in the presence of the random disturbance $w_k$, i.e.,
\begin{equation} \label{equ:cost}
J_x(\pi)=\mathbb{E}_{w_{k} \sim \nu}^{\pi}\left[\sum_{k=0}^{\infty} \alpha^{k} c(x_k,u_k) | x_{0}={x}\right],
\end{equation}
where $\alpha \in (0,1)$ is a discount factor, and $c(x_k,u_k)$ is a user-chosen stage cost in a quadratic form
$$c(x_k,u_k)=x_k^{\top}Qx_k + u_k^{\top}Ru_k.$$
The main difficulties of stochastic control include the following: (a) the groundtruth distribution $\nu$ is generally unknown. A common approach is to simply set it as a Gaussian distribution, which is not suitable for safety-critical systems as the distribution errors may result in significant degradation of the control performance. (b) The exact dynamical model $(A,B,E)$ can be difficult to obtain as we can only collect a finite amount of noisy system input/output data.
\begin{figure}
\begin{center}
\includegraphics[width=70mm]{simu_tu}
\caption{The simulator serves as a black box: the inputs consist of the control $u_k$ and the disturbance $w_k$, and the output is the quadruple $\{x_k, u_k, w_k, x_{k+1}\}$.}
\label{pic:simu}
\end{center}
\end{figure}
In practice, we may have access to a finite number of samples $\left\{\hat{w}^{(1)}, \ldots, \hat{w}^{(N)}\right\}$ of the disturbance from the environment. For instance, experiments can be conducted to collect the disturbance samples with a simple controller. Then, a straightforward way to approximate the groundtruth distribution $\nu$ is to use an empirical distribution
\begin{equation} \label{equ:emp}
\nu_{N}:=\frac{1}{N}\sum_{j=1}^{N}\delta_{\hat{w}^{(j)}},
\end{equation}
where the Dirac delta measure $\delta_{\hat{w}^{(j)}}$ is concentrated at $\hat{w}^{(j)}$. As the number of samples tends to infinity, the empirical distribution $\nu_N$ weakly converges to the groundtruth distribution $\nu$. However, it may be costly to collect such a large number of samples, which motivates us to design a controller with robustness to the distribution error in $\nu_N$. Moreover, we assume access to a simulator that generates system trajectories, i.e., $\{x_k, u_k, w_k, x_{k+1}\}_{k=0}^M$, as illustrated in Fig. \ref{pic:simu}. The purpose of this work is to design a model-free reinforcement learning (RL) algorithm for the linear stochastic system (\ref{equ:sys}) to learn a controller with robustness to distribution errors in $\nu_N$ by using the simulator.
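To fix ideas, the following Python sketch mimics the data-collection setup of Fig.~\ref{pic:simu}: a black-box simulator of the linear system that returns the quadruple $(x_k, u_k, w_k, x_{k+1})$, and the empirical distribution $\nu_N$ built from recorded disturbance samples. The system matrices, the scalar disturbance dimension, and the simple data-collection feedback are placeholders chosen for illustration only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

# Placeholder system matrices; in the model-free setting they live inside the
# black-box simulator and are never exposed to the learning algorithm.
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
B = np.array([[0.0],
              [0.1]])
E = np.array([[0.05],
              [0.10]])

def simulator_step(x, u, w):
    """One black-box step returning the quadruple (x_k, u_k, w_k, x_{k+1})."""
    x_next = A @ x + B @ u + E * w       # scalar disturbance (d = 1) in this sketch
    return x, u, w, x_next

# Disturbance samples collected beforehand; the empirical distribution nu_N is
# uniform over these N atoms, so sampling from nu_N amounts to resampling them.
w_samples = rng.normal(0.5, 1.0, size=10)      # N = 10 recorded samples
def sample_from_nu_N():
    return w_samples[rng.integers(w_samples.size)]

x = np.zeros((2, 1))
for k in range(3):
    u = np.array([[-0.5 * x[1, 0]]])           # simple feedback for data collection
    _, _, _, x = simulator_step(x, u, sample_from_nu_N())
    print(f"k={k}: next state = {x.ravel()}")
\end{verbatim}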
\subsection{The Wasserstein DR Control}
To measure the distance between two probability distributions, we adopt the Wasserstein metric, which has recently been widely studied in the control and machine learning communities~\citep{abadeh2018wasserstein,esfahani2018data,lee2015performance}. For two $d$-dimensional distributions $\mu_i: \mathbb{R}^d \rightarrow [0, 1], \forall i\in\{1,2\}$, the Wasserstein metric of order $p \in [1,\infty)$ is defined as
$$ W_{p}(\mu_1, \mu_2) = \inf _{\kappa}\left(\int_{\mathbb{R}^d\times \mathbb{R}^d} \left\|w_{1}-w_{2}\right\|^{p} \kappa \left(\mathrm{d} w_{1}, \mathrm{d} w_{2}\right)\right)^{\frac{1}{p}}, $$
where $\kappa \in \Gamma (\mu_1, \mu_2)$ and $\Gamma (\mu_1, \mu_2)$ denotes the set of the joint distributions with marginal distributions $\mu_1$ and $\mu_2$. It can be thought of as the minimal cost of transporting the mass of the distribution $\mu_1$ to the distribution $\mu_2$, where $\kappa$ is interpreted as the transport plan; see, e.g.,~\citet{abadeh2018wasserstein,esfahani2018data} for details. In this work, the groundtruth probability distribution $\nu$ is assumed to be within an \textit{ambiguity set} $\mathbb{T}$ centred at the empirical distribution $\nu_N$. With the Wasserstein metric of order 2, the ambiguity set $\mathbb{T}$ is defined as
\begin{equation}\notag
\mathbb{T} := \left\{\omega | W_{2}\left(\omega, \nu_N \right) \leq \rho \right\}.
\end{equation}
Clearly, the radius $\rho$ of the Wasserstein ball reflects our confidence in $\nu_N$ and shrinks as the number of samples increases. Then we aim to find a DR optimal controller $\pi$ by solving the min-max optimization problem
\begin{equation}\label{equ:constraint}
\begin{aligned}
&\inf_{\pi} \sup_{\mu_k} \mathbb{E}_{w_{k} \sim \mu_k}^{\pi}\left[\sum_{k=0}^{\infty} \alpha^{k} c(x_k,u_k) | x_{0}={x}\right]\\
&\text{subject~to} ~~~~W_{2}(\mu_k, \nu_N) \leq \rho, ~~\forall k \in \mathbb{N},
\end{aligned}
\end{equation}
where $0<\alpha<1$ is the discount factor. It is well known that the infinitely many hard constraints in (\ref{equ:constraint}) are difficult to handle. To alleviate this, we consider its penalized version \citep{yang2018wasserstein} of the form
\begin{equation}\label{equ:wasprob}
\inf_{\pi} \sup_{\mu_k} \mathbb{E}_{w_{k} \sim \mu_k}^{\pi}\left[\sum_{k=0}^{\infty} \alpha^{k} c(x_k, u_k, \mu_k) | x_{0}={x}\right],
\end{equation}
where $c(x_k, u_k, \mu_k) = x_k^{\top}Qx_k + u_k^{\top}Ru_k - \lambda \cdot W_{2}^2\left(\mu_k, \nu_N \right).$ That is, we use a tunable hyperparameter $\lambda$ to penalize the deviation of a probability distribution $\mu_k$ from the empirical distribution $\nu_N$. For such a penalized problem, the optimal parameter $\lambda$ is expected to decrease as the radius $\rho$ of the ambiguity set increases. In this work, we avoid the use of the radius $\rho$ and manually tune the hyperparameter $\lambda$ to yield sub-optimal performance. To solve the min-max optimization problem (\ref{equ:wasprob}), we formulate it as a two-player zero-sum Markov game with full observations~\citep{gonzalez2002minimax,yang2018wasserstein,bacsar2008h}. Player \uppercase\expandafter{\romannumeral1} (controller) selects a control policy to minimize the cost in (\ref{equ:wasprob}), while Player \uppercase\expandafter{\romannumeral2} (adversary) seeks to thwart this goal by choosing adversarial disturbance distributions. Before proceeding, we mention that our motivation for using the Wasserstein metric is manifold.
It is symmetric and can measure the difference between any pair of discrete or continuous probability distributions, whereas other divergences, e.g., the Kullback-Leibler divergence~\citep{esfahani2018data, van2014renyi}, cannot. In particular, the support sets of the two distributions are allowed to be different. More importantly, it genuinely reflects the modeling errors of the empirical distribution: it is reasonable to expect that the shapes of the two distributions may differ, while large deviations should occur only with small probability. The Wasserstein metric formalizes this intuition.
\subsection{Landscape of This Work}
For ease of exposition, we summarize in Fig. \ref{pic:land} the main ideas that eventually yield a model-free DR optimal controller for (\ref{equ:wasprob}).
\begin{figure}
\begin{center}
\includegraphics[width=80mm]{landscape}
\caption{Landscape of this work.}
\label{pic:land}
\end{center}
\end{figure}
With the disturbance samples $\{\hat{w}^{(i)}\}_{i=1}^N$ and assuming that the system model $(A,B,E)$ is known, we first derive an explicit solution to the zero-sum game (\ref{equ:wasprob}) via dynamic programming, whose optimal value function $V^*(x_k)$ has a quadratic form and whose optimal control policy $u^*_k$ is affine with respect to the state. Then we develop a quadratic Q-function $\widetilde{Q}(x_k,u_k,\mu_k)$ with the triple $\{x_k,u_k,\mu_k\}$ as inputs. We further show the equivalence between the zero-sum game (\ref{equ:wasprob}) and a deterministic version, in the sense of having the same optimal controller. The remaining step is to design a model-free Q-learning algorithm for the equivalent deterministic game to learn a simpler Q-function $Q_c(x_k,u_k,w_k)$, the parameterization of which can leverage the structure of $V^*(x_k)$ and the disturbance samples $\{\hat{w}^{(i)}\}_{i=1}^N$. This learning process is carried out using the simulator in Fig. \ref{pic:simu}. The DR model-free controller is obtained once the Q-function is learned. Note that the global convergence to the DR optimal controller of (\ref{equ:wasprob}) is also established in this work.
\section{Distributionally Robust Control for Linear Stochastic Systems}\label{sec:drc}
In this section, we derive an explicit solution to the penalized problem (\ref{equ:wasprob}) via dynamic programming when the system model $(A,B,E)$ is known. The optimal value function has a quadratic form and the policy is affine in the state. We further show that the resulting closed-loop system is always stable under mild assumptions.
\subsection{An Explicit Solution to the Zero-sum Game}
We solve the zero-sum game (\ref{equ:wasprob}) by a backward dynamic programming paradigm. In particular, the optimal value function at state $x_k$ is given recursively through the well-known Bellman equation as
\begin{equation} \label{equ:optvalue}
V^*(x_k) = \min_{u_k} \max_{\mu_k} \mathbb{E} \bigl[ c_k(x_k, u_k, \mu_k) + \alpha V^*(x_{k+1}) \bigr].
\end{equation}
The Wasserstein metric $W_{2}(\mu_k, \nu_N )$ in $c_k(x_k, u_k, \mu_k)$ is seemingly difficult to handle. By leveraging recent techniques in DR optimization~\citep{gao2016distributionally, yang2018wasserstein}, we convert it to a tractable formulation, as stated in Lemma~\ref{lem:bellman} below.
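As an illustrative aside, when $\mu$ is a uniform discrete distribution with the same number of atoms as $\nu_N$ (the form taken by the worst-case distribution in Theorem~\ref{theorem:solution}), the squared 2-Wasserstein distance reduces to a linear assignment problem over the atoms. The Python sketch below computes $W_2(\mu,\nu_N)$ in this case for arbitrary placeholder atoms; it is meant only to illustrate the metric and is not part of the proposed algorithm.
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

def w2_uniform_discrete(X, Y):
    """2-Wasserstein distance between two uniform discrete distributions whose
    atoms are the rows of X and Y (same number of atoms in each)."""
    cost = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)  # squared distances
    row, col = linear_sum_assignment(cost)  # optimal plan is a permutation here
    return np.sqrt(cost[row, col].mean())

rng = np.random.default_rng(0)
nu_N_atoms = rng.normal(0.0, 1.0, size=(10, 2))             # atoms of nu_N
mu_atoms = nu_N_atoms + rng.normal(0.3, 0.1, size=(10, 2))  # perturbed candidate mu

print("W2(mu, nu_N) =", w2_uniform_discrete(mu_atoms, nu_N_atoms))
\end{verbatim}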
\begin{lem}[Proposition 6,\citet{yang2018wasserstein}] \label{lem:bellman} The Bellman equation (\ref{equ:optvalue}) can be equivalently expressed as $$ V^*(x_k) = \min_{u_k}\left\{ x_k^{\top}Qx_k + u_k^{\top}Ru_k \\ + \frac{1}{N} \sum_{j=1}^{N} \max_{w_k^j \in \mathbb{R}^d} \Phi(u_k, w_k^j) \right\}, $$ where $\Phi(u_k, w_k^j) = \alpha V^*(x_{k+1}) - \lambda \|w_k^j-\hat{w}^{(j)}\|^{2}$. \end{lem} Lemma ~\ref{lem:bellman} implies that the effect of an arbitrary distribution $\mu_k$ on the optimal value function can be fully captured by a uniform discrete distribution which can be parameterized by $N$ vectors $w_k^j, j \in \{1,2,\dots,N\}$. This enables us to derive an explicit solution to the zero-sum game (\ref{equ:wasprob}) under the following standard assumptions. \begin{assum} \label{assumption} $Q$ is positive semi-definite and $R$ is positive definite. The pair $(A,B)$ is stabilizable and $(A,Q^{{1}/{2}})$ is observable. \end{assum} We define the following statistics over the set of samples $\{\hat{w}^{(i)}\}_{i=1}^N$ as follows. \begin{defi}\label{def} The sample mean and covariance of $\{\hat{w}^{(i)}\}_{i=1}^N$ are defined as \begin{equation}\notag \begin{aligned} \text{Sample mean:}~~~~\bar{w} &:= \frac{1}{N} \sum_{j=1}^{N} \hat{w}^{(j)}, \\ \text{Sample covariance:}~~~~\Sigma &:= \frac{1}{N} \sum_{j=1}^{N} (\hat{w}^{(j)} - \bar{w}) (\hat{w}^{(j)} - \bar{w})^{\top}. \end{aligned} \end{equation} \end{defi} We show that the optimal value function $V^*(x_k)$ in (\ref{equ:optvalue}) is quadratic with respect to the state $x_k$, and the optimal controller has an affine state feedback form with a constant offset. For the ease of notation, let $ H_{xx}^i = Q+\alpha A^{\top}P_iA, H_{xu}^i = \alpha A^{\top}P_iB, H_{xw}^i = \alpha A^{\top}P_iE, H_{uu}^i= R+\alpha B^{\top}P_iB, H_{uw}^i= \alpha B^{\top}P_iE, H_{ww}^i = \alpha E^{\top}P_iE-\lambda I$ and $G_x^i = \alpha A^{\top}g_i, G_u^i = \alpha B^{\top}g_i, G_{w}^i = \alpha E^{\top}g_i+2\lambda \bar{w}$, $G_{w^j}^i = \alpha E^{\top}g_i+2\lambda \hat{w}^{(j)}$, then we have the following results. \begin{thm} \label{theorem:solution} Suppose that for $P_0 = 0$, the Riccati-type iteration \begin{equation} \label{def:P} \hspace{-0.085cm} P_{i+1}= H_{xx}^i- \begin{bmatrix} H_{xu}^i&H_{xw}^i \end{bmatrix} \begin{bmatrix} H_{uu}^i & H_{uw}^i \\ * & H_{ww}^i \end{bmatrix}^{-1} \begin{bmatrix} H_{xu}^{i^{\top}} \\ H_{xw}^{i^{\top}} \end{bmatrix} \end{equation} converges to a positive semi-definite matrix $P$ and $\lambda I -\alpha E^{\top} P E > 0 $. Then, we have the following results. \begin{enumerate} \renewcommand{\labelenumi}{\rm(\alph{enumi})} \item The optimal value function $V^*(x)$ in (\ref{equ:optvalue}) has a quadratic form, i.e., $$ V^{*}(x) =x^{\top} P x+ g^{\top}x + z, $$ where $g = \lim\limits_{i \rightarrow \infty} g_i$, $z=\lim\limits_{i \rightarrow \infty} z_i$ and \begin{equation}\label{def:g} \begin{aligned} &g_{i+1}= G_x^i- \begin{bmatrix} H_{xu}^i&H_{xw}^i \end{bmatrix} \begin{bmatrix} H_{uu}^i & H_{uw}^i \\ * & H_{ww}^i \end{bmatrix}^{-1} \begin{bmatrix} G_u^i \\ G_w^i \end{bmatrix},\\ &z_{i+1} = \alpha z_i - \lambda \|\bar{w}\|^2 - \text{tr}\{H_{ww}^{i^{-1}} (\lambda^2\Sigma + \frac{1}{4}G_w^iG_w^{i^{\top}}) \}\\ &-\frac{1}{4} (G_u^{i^{\top}}-G_w^{i^{\top}}H_{ww}^{i^{-1}} H_{uw}^{i^{\top}})(H_{u u}^i-H_{u w}^iH_{w w}^{i^{-1}} H_{uw}^{i^{\top}})^{-1}\\ & \times ( G_u^{i}- H_{uw}^{i} H_{ww}^{i^{-1}} G_w^{i}). 
\end{aligned} \end{equation} \item The optimal control policy to solve (\ref{equ:wasprob}) has an affine state feedback form, i.e., \begin{equation}\label{equ:opt_u} u^* = Kx + r \end{equation} where \begin{equation} \begin{aligned} &K = (H_{u u}-H_{u w} H_{w w}^{-1} H_{uw}^{\top})^{-1}(H_{u w} {H_{w w}^{-1}} H_{xw}^{\top}-H_{u x}^{\top}), \\ &r = -\frac{1}{2}(H_{u u}-H_{u w} H_{w w}^{-1} H_{uw}^{\top})^{-1}(G_u - H_{uw} H_{w w}^{-1} G_w ). \end{aligned} \notag \end{equation} \item One of the worst-case disturbance distributions $\mu^*$ to solve (\ref{equ:wasprob}) is stationary and discrete, whose support set has exactly N points ${w^j}^*, j\in \{1,2,\cdots, N\}$. Specifically, let ${w^j}^* = Lx + l_j$, where \begin{equation} \notag \begin{aligned} &L = (H_{w w}-H_{uw}^{\top} H_{uu}^{-1} H_{u w})^{-1}(H_{uw}^{\top} H_{uu}^{-1} H_{xu}^{\top}-H_{xw}^{\top}),\\ &l_j = -\frac{1}{2N}(H_{w w}-H_{uw}^{\top} H_{uu}^{-1} H_{u w})^{-1}(G_{w^j}- H_{uw}^{\top} H_{uu}^{-1} G_u ), \end{aligned} \end{equation} then $\mu^*=\frac{1}{N} \sum_{j=1}^{N} \delta_{{w^j}^*}$. \end{enumerate} \end{thm} \begin{pf}\label{proof:solution} We apply the backward dynamic programming for the finite horizon case, and the proof is completed by letting the horizon goes to infinity. The finite-horizon zero-sum game aims to solve the following problem \begin{equation}\label{equ:fini_prob} \inf_{\pi} \sup_{\mu_k} \mathbb{E}_{w_{k} \sim \mu_k}^{\pi}\left[\sum_{k=0}^{h-1} \alpha^{k} c(x_k, u_k, \mu_k) | x_{0}={x}\right], \end{equation} where $h$ denotes the time horizon. Let $V_k^h(x)$ be the corresponding optimal value function at time step $k$. We use mathematical induction to show that $V_k^h(x)$ has the following quadratic form, \begin{equation}\label{equ:finite} V_{k}^{h}(x) =x^{\top} P_k x+ g_{k}^{\top}x + z_{k}, \end{equation} where $P_{k}$ is a symmetric positive semi-definite matrix to be determined, $g_{k}$ is a column vector and $z_{k}$ is a scalar. Clearly, (\ref{equ:finite}) holds for $k = h$ with $P_h = 0$, $g_h = 0$ and $z_h = 0$. Suppose it also holds for $k+1 \in \{1,2,\dots,h\}$, it follows from Lemma \ref{lem:bellman} that at time step $k$, we obtain \begin{equation}\label{equ:finite_bellman} V_k^h(x_k) = \min_{u_k} \bigl\{ x_k^{\top}Qx_k + u_k^{\top}Ru_k + \frac{1}{N} \sum_{j=1}^{N} \max_{w_k^j} \Phi(u_k, w_k^j) \bigr\}, \end{equation} where \begin{equation}\notag \begin{aligned} &\Phi(u_k, w_k^j) = \\ &\alpha(Ax_k+Bu_k+Ew_k^j)^{\top}P_{k+1}(Ax_k+Bu_k+Ew_k^j)\\ &+ \alpha g_{k+1}^{\top}(Ax_k+Bu_k+Ew_k^j) + \alpha z_{k+1} -\lambda \|w_k^j-\hat{w}^{(j)}\|^{2} \end{aligned} \end{equation} is quadratic in $w_k^j$ and is concave if $\lambda I -\alpha E^{\top} P_{k+1} E > 0 $. Then, $\Phi(u_k, w_k^j)$ attains its maximum value at a unique point \begin{equation}\label{equ:w} \begin{aligned} w_k^{j*} =& \left(\lambda I-\alpha E^{\top} P_{k+1} E\right)^{-1}\big(\alpha E^{\top} P(A x_k+B u_k)\\ &+\lambda \hat{w}^{(j)} + \frac{1}{2}\alpha E^{\top}g_{k+1} \big), \end{aligned} \end{equation} and \begin{equation}\label{equ:phi} \begin{aligned} &\Phi(u_k, w_k^{j*}) = \alpha(Ax_k+Bu_k)^{\top}P_{k+1}(Ax_k+Bu_k)\\ &+\alpha g_{k+1}^{\top}(Ax_k+Bu_k)+\alpha z_{k+1} - \lambda \|\bar{w}\|^2\\ & - \big(\alpha E^{\top}P_{k+1}(Ax_k+Bu_k)+\frac{1}{2}\alpha E^{\top}g_{k+1}+ \lambda \hat{w}^{(j)} \big)^{\top} \\ &\times (\alpha E^{\top}P_{k+1}E-\lambda I)^{-1}\\ &\times\big(\alpha E^{\top}P_{k+1}(Ax_k+Bu_k)+\frac{1}{2}\alpha E^{\top}g_{k+1}+ \lambda \hat{w}^{(j)} \big). 
\end{aligned} \end{equation} Inserting $\Phi(u_k, w_k^{j*})$ in (\ref{equ:phi}) into (\ref{equ:finite_bellman}), we have that \begin{equation}\label{equ:v} \begin{aligned} &V_k^h(x_k) = \min_{u_k} \big\{ u_k^{\top}(H_{uu}^k- H_{uw}^kH_{w w}^{k^{-1}} H_{uw}^{k^{\top}})u_k + u_k^{\top} \\ &\times (G_u^k - H_{uw}^k H_{w w}^{k^{-1}} G_w^k - 2(H_{u w}^k {H_{w w}^{k^{-1}}} H_{xw}^{k^{\top}}-H_{u x}^{k^{\top}})x_k)\big\}\\ &+ \alpha z_k - \lambda \|\bar{w}\|^2 - \text{tr}\{H_{w w}^{k^{-1}} (\lambda^2\Sigma + \frac{1}{4}G_w^kG_w^{k^{\top}}) \}\\ &+ x_k^{\top}(H_{xx}^k - H_{xw}^k H_{w w}^{k^{-1}} H_{xw}^{k^{\top}})x_k \\ &+ (G_x^k - H_{xw} H_{w w}^{k^{-1}} G_w^k)^{\top}x_k \end{aligned} \end{equation} Solving the above quadratic optimization problem yields that $u_k^* = K_kx_k + r_k$, where \begin{equation} \begin{aligned} &K_k=(H_{u u}^k-H_{u w}^k H_{ww}^{k^{-1}} H_{uw}^{k^{\top}})^{-1} (H_{u w}^k H_{ww}^{k^{-1}} H_{xw}^{k^{\top}}-H_{u x}^{k^{\top}}),\\ &r_k = -\frac{1}{2}(H_{u u}^k-H_{u w}^k H_{ww}^{k^{-1}} H_{uw}^{k^{\top}})^{-1}(G_u^k - H_{uw}^k H_{ww}^{k^{-1}} G_w^k). \end{aligned} \notag \end{equation} Replacing $u_k$ with $u_k^*$ in (\ref{equ:w}) and (\ref{equ:v}), we finish the induction. Note that the assumption $\lambda I -\alpha E^{\top} P_{k+1} E > 0 $ in the derivation is automatically satisfied since $\lambda I -\alpha E^{\top} P E > 0 $ and $P \geq P_k$ \citep{bacsar2008h}. Since the Riccati-type iterations (\ref{def:P}) converge, (\ref{equ:fini_prob}) converges as $h$ goes to infinity. Note that the convergence of $g_i$ and $z_i$ in (\ref{def:g}) is trivial. \hfill \vrule height6pt width 6pt depth 0pt \end{pf} \begin{remark} A similar result has been developed in \citet[Theorem 4]{yang2018wasserstein} for the case $ \bar{w} = 0$. If the sample mean $\bar{w}$ is not zero, the zero-sum game (\ref{equ:wasprob}) is solved by augmenting the system state as $\widetilde{x} = \begin{bmatrix} (x - \bar{x})^{\top} & 1 \end{bmatrix}$ with $\bar{x}= (I-A)^{-1}E\bar{w}$. Clearly, this method implicitly requires the existence of $(I-A)^{-1}$, excluding an important class of open-loop unstable systems. Moreover, the augmented system is not controllable due to the constant in the last element of the augmented state $\widetilde{x}$, and cannot be applied to the RL setting as $(I-A)^{-1}E\bar{w}$ is not computable without the model information $(A,E)$. \end{remark} \subsection{Stability of the Closed-loop System} By the linear quadratic (LQ) dynamic game theory~\citep{bacsar2008h}, we first show that the conditions in Theorem \ref{theorem:solution} hold, which implies that the Riccati-type iteration (\ref{def:P}) converges. Then for an appropriate $\alpha$, the affine optimal controller (\ref{equ:opt_u}) is able to stabilize the system. \begin{thm}\label{coro} Let Assumption \ref{assumption} hold, then the Riccati-type iteration (\ref{def:P}) converges to a symmetric positive semi-definite matrix $P$. Moreover, if the discount factor $\alpha$ is sufficiently close to $1$, then $\rho(A+BK+EL)<1$ and $\rho(A+BK)<1$ where $(K,L)$ is given in (\ref{equ:opt_u}). \end{thm} \begin{pf} Let $ A_{\alpha} := \sqrt{\alpha} A, ~~B_{\alpha} := \sqrt{\alpha} B, \text { and } E_{\alpha} := \sqrt{\alpha} E, $ the convergence of (\ref{def:P}) follows from the standard LQ game theory~\citep{bacsar2008h}. Since the feedback gains $K$ and $L$ in Theorem \ref{theorem:solution} are functions of $\alpha$, we rewrite them as $K(\alpha)$ and $L(\alpha)$, respectively. 
We consider the spectral radius $\rho(A+BK(\alpha)+EL(\alpha))$, which is a continuous function of $\alpha$. It follows from~\citet{bacsar2008h} that $A+ BK(1) + EL(1)$ is stabilizing, namely $\rho(A+BK(1)+EL(1))<1$. Thus, we have $\rho(A+BK(\alpha)+EL(\alpha))<1$ as long as $\alpha$ is sufficiently close to 1. Similarly, we can show that $\rho(A+BK(\alpha))<1$. \hfill \vrule height6pt width 6pt depth 0pt \end{pf} The computation of the feedback gain pair $(K,r)$ in (\ref{equ:opt_u}) requires knowledge of the system model $(A,B,E)$. Next, we design an RL algorithm to learn an optimal control policy from training data in a trial-and-error fashion. Since it can be both time-consuming and expensive to collect data from the physical world, one can use a computer simulator to generate data at a relatively low cost; see Fig. \ref{pic:simu}. In the sequel, we shall only use such a simulator to find the optimal controller in Theorem \ref{theorem:solution}. \section{Distributionally Robust Q-learning}\label{sec:q learning} \label{sec:q_setup} In this section, we first find an equivalent DR Q-learning setup and derive the Q-function of the zero-sum game in (\ref{equ:wasprob}). By exploiting the structure of the Q-function, we then convert the stochastic zero-sum game (\ref{equ:wasprob}) to a deterministic version. Moreover, we discuss its relation to the classical $H_{\infty}$ optimal control. \subsection{Distributionally Robust Q-learning} In order to solve the zero-sum Markov game (\ref{equ:wasprob}) via a model-free approach, we adopt the approximate dynamic programming framework \citep{bertsekas1995dynamic, powell2007approximate, bertsekas2019reinforcement}. In particular, the Q-function $Q(x_k,u_k,\mu_k)$ in (\ref{equ:wasprob}) is given as \begin{equation}\notag \begin{aligned} Q(x_k,u_k,\mu_k) = & x_k^{\top}Qx_k + u_k^{\top}Ru_k - \lambda W_{2}^2(\mu_k, \nu_N ) \\ &+ \alpha \mathbb{E}_{w_{k} \sim \mu_k} V^*(x_{k+1}), \end{aligned} \end{equation} where $u_k$ and $\mu_k$ are actions taken by the controller and adversary, respectively. Once the Q-function is determined, the optimal $u_k^*$ can be obtained by simply setting the derivative to zero. The difficulty in determining a closed form of the Q-function lies in the Wasserstein distance $W_{2}(\mu_k, \nu_N)$. By Theorem \ref{theorem:solution}, the worst-case distribution $\mu^*$ is discrete with the same number of support points as the empirical distribution $\nu_N$. Thus, there is no loss of generality in restricting $\mu_k$ to the set of discrete distributions $\mathbb{D}_N$ and parameterizing it with $N$ vectors $\{w_k^j\}_{j=1}^N$. The following lemma formally confirms this observation. \begin{lem} \label{lemma:equvalence} Define an alternative Q-function \begin{equation} \label{equ:appro_q} \begin{aligned} \widetilde{Q}(x_k,u_k,\mu_k) = & x_k^{\top}Qx_k + u_k^{\top}Ru_k - \frac{\lambda}{N} \sum_{j=1}^{N} \|{w_k^j}-\hat{w}^{(j)}\|^{2}\\ & + \alpha \mathbb{E}_{w_{k} \sim \mu_k} V^*(x_{k+1}). \end{aligned} \end{equation} It follows that $$\min_{u_k} \max_{\mu_k} Q(x_k,u_k,\mu_k) = \min_{u_k} \max_{\mu_k \in \mathbb{D}_N} \widetilde{Q} (x_k,u_k,\mu_k)$$ and the optimal values of both sides are achieved at the same pair $(u_k^*, \mu_k^*)$. 
\end{lem} \begin{pf} By Lemma \ref{lem:bellman}, it follows that \begin{equation}\notag \begin{aligned} &\hspace{-0.8cm}\min_{u_k} \max_{\mu_k} Q(x_k,u_k,\mu_k)=\min_{u_k} \bigl\{ x_k^{\top}Qx_k + u_k^{\top}Ru_k \\ &~~~+ \frac{1}{N} \sum_{j=1}^{N} \max_{w \in \mathbb{R}^d} \bigl\{ \alpha V^*(x_{k+1}) - \lambda \|w-\hat{w}^{(j)}\|^{2} \bigr\} \bigr\} \\ & = \min_{u_k} \max_{\mu_k \in \mathbb{D}_N} \widetilde{Q}(x_k,u_k,\mu_k). ~~~~~~~~~~~~~~~~~~~~~~~~~\hfill \vrule height6pt width 6pt depth 0pt \end{aligned} \end{equation} \end{pf} Thus, we only need to focus on $\widetilde{Q}(x_k,u_k,\mu_k)$ over $ \mathbb{D}_N$. Since $V^*(x_{k+1})$ is quadratic, $\widetilde{Q}(x_k,u_k,\mu_k)$ also has a quadratic form. \begin{prop} \label{prop:q} The Q-function in \eqref{equ:appro_q} is explicitly given as \begin{equation}\notag \widetilde{Q}(x_k,u_k,\mu_k) = \begin{bmatrix} x_{k} \\ u_{k} \\ w_k^1 \\ \vdots \\ w_k^N \end{bmatrix}^{\top} \widetilde H \begin{bmatrix} x_{k} \\ u_{k} \\ w_k^1 \\ \vdots \\ w_k^N \end{bmatrix} + \widetilde G^\top \begin{bmatrix} x_{k} \\ u_{k} \\ w_k^1 \\ \vdots \\ w_k^N \end{bmatrix} + \widetilde s, \end{equation} where $\widetilde G^{\top} = \bigl[ G_x^{\top}~ G_u^{\top} ~ G_{w^1}^{\top}~\cdots~ G_{w^N}^{\top} \bigr], \widetilde s = \alpha z - \frac{\lambda}{N} \sum_{i=1}^{N} \|\hat{w}^{(j)}\|^2$ and \begin{equation}\notag \widetilde H = \begin{bmatrix} H_{xx} & H_{xu} & H_{xw} &\cdots & H_{xw} \\ & H_{uu} & H_{uw} &\cdots & H_{uw} \\ & & H_{ww} &\cdots & 0 \\ & * & & \ddots & 0 \\ & & & & H_{ww} \\ \end{bmatrix}, \end{equation} \end{prop} \begin{pf} We note that \begin{equation}\notag \begin{aligned} & \widetilde{Q}(x_k,u_k,\mu_k) \\ &= x_k^{\top}Qx_k + u_k^{\top}Ru_k - \frac{\lambda}{N} \sum_{i=1}^{N} \|{w_k^j}-\hat{w}^{(j)}\|^{2} + \alpha \mathbb{E}_{w_{k}} V^*(x_{k+1}) \\ & = x_k^{\top}Qx_k + u_k^{\top}Ru_k - \frac{\lambda}{N} \sum_{i=1}^{N} \|{w_k^j}-\hat{w}^{(j)}\|^{2} \\ &+ \frac{\alpha}{N} \sum_{i=1}^{N} \bigl((Ax_k+Bu_k + Ew_k^j)^{\top} P (Ax_k+Bu_k + Ew_k^j) \\ &+ g^{\top}(Ax_k+Bu_k + Ew_k^j) + z\bigr)\\ & = x_k^{\top}Qx_k + u_k^{\top}Ru_k \\ &- \frac{\lambda}{N} \sum_{i=1}^{N} (\|w_k^j\|^2 - 2 \langle \hat{w}^{(j)},w_k^j \rangle + \| \hat{w}^{(j)} \|^2 ) + \alpha z \\ &+ \frac{\alpha}{N} \sum_{i=1}^{N} \bigl((Ax_k+Bu_k)^{\top}P(Ax_k+Bu_k) + \langle w_k^j, E^{\top}PEw_k^j \rangle \\ &+ 2\langle w_k^j,E^{\top}P(Ax_k+Bu_k) \rangle+ g^{\top}(Ax_k+Bu_k + Ew_k^j) \bigr). \\ \end{aligned} \end{equation} By reorganizing the above terms with tedious algebraic manipulations, the proof is completed. \hfill \vrule height6pt width 6pt depth 0pt \end{pf} \vspace{-0.5cm} Since \begin{equation}\notag \begin{aligned} &\frac{\partial \widetilde{Q}}{\partial u_k} = \frac{2}{N} \sum_{i=1}^{N} \left(H_{xu}^{\top}x_k + H_{uu}u_k + H_{uw} w_k^j \right) + G_u ~\text{and}~ \\ &\frac{\partial \widetilde{Q}}{\partial w_k^j} = \frac{2}{N}\left(H_{ww} w_k^j + H_{xw}^{\top} x_k + H_{uw}^{\top} u_k \right) + G_{w^j}, \end{aligned} \end{equation} the solution to the zero-sum game (\ref{equ:wasprob}) depends only on the parameter of $\widetilde{Q}(x_k,u_k,\mu_k)$. Notice that $\widetilde H$ is sparse with only 6 undetermined blocks. Similar observation can also be found in $\widetilde G$, implying a practical way to learn $\widetilde H$ and $\widetilde G$ by only using data. \subsection{Equivalence of the Zero-sum Game} In the Q-learning framework, both $\widetilde H$ and $\widetilde G$ are learned on-line. 
However, this is a challenging task under our DR setting as the computation of $\widetilde{Q}(x_k,u_k,\mu_k)$ in (\ref{equ:appro_q}) involves an expectation operator $\mathbb{E}_{w_{k} \sim \mu_k} V^*(x_{k+1})$, which is not convenient to evaluate on-line since only one sample $\{x_k, u_k, w_k, x_{k+1}\}$ is available at each time instant. To remedy it, we further show that the DR problem can be converted to a deterministic version. By Proposition \ref{prop:q}, we only have the two parameter matrices to be learned, i.e., \begin{align*} H_c &=\begin{bmatrix} H_{xx} & H_{xu} & H_{xw} \\ & H_{uu} & H_{uw} \\ *& & H_{ww} \end{bmatrix} \\ &=\begin{bmatrix} \alpha A^{\top} P A+ Q & \alpha A^{\top} P B & \alpha A^{\top} P E \\ & \alpha B^{\top} P B+ R & \alpha B^{\top} P E \\ *& & \alpha E^{\top} P E-\lambda I \end{bmatrix} \end{align*} and $ G_c^{\top} = \begin{bmatrix} G_x^{\top} &G_u^{\top} & G_w^{\top} \end{bmatrix}=\begin{bmatrix} \alpha g^{\top}A &\alpha g^{\top}B &\alpha g^{\top}E+2\lambda \bar{w}^{\top} \end{bmatrix} $ and find that the pair of $(H_c,G_c)$ corresponds to the Q-function of another zero-sum game. Specifically, consider the following deterministic zero-sum game \begin{equation}\label{equ:newgame} \min_{\pi} \max_{w} \sum_{k=0}^{\infty} \alpha^{k} (x_k^{\top}Qx_k + u_k^{\top}Ru_k - \lambda \|w_k - \bar{w} \|^2 ), \end{equation} where $w$ denotes the policy of the adversary in the form that $w_k = w(x_k)$. \begin{thm} \label{theorem:certainty} Under the same conditions in Theorem \ref{theorem:solution}, we have the following results. \begin{enumerate} \renewcommand{\labelenumi}{\rm(\alph{enumi})} \item The optimal value function of the deterministic zero-sum game (\ref{equ:newgame}) has a quadratic form \begin{equation} \notag \begin{array}{lll} V_c^{*}(x) & =x^{\top} P x+ g^{\top}x + z ,& \\ \end{array} \end{equation} where $P$, $g$ and $z$ are obtained through iterations in (\ref{def:P}) and (\ref{def:g}). \item The Q-function of the zero-sum game in (\ref{equ:newgame}) is given by \begin{equation}\label{def:Q_c} \begin{aligned} Q_c(x_k,u_k,w_k) =& \begin{bmatrix} x_{k} \\ u_{k} \\ w_k \end{bmatrix}^{\top} H_c \begin{bmatrix} x_{k} \\ u_{k} \\ w_k \end{bmatrix} + G_c^{\top} \begin{bmatrix} x_{k} \\ u_{k} \\ w_k \end{bmatrix}\\ & + s_c, \end{aligned} \end{equation} where $s_c$ is a scalar. \item The optimal controller of the game in (\ref{equ:newgame}) is identical to that of the game in (\ref{equ:wasprob}), i.e., $u^* = Kx + r,$ where \begin{eqnarray*} &K = (H_{u u}-H_{u w} H_{w w}^{-1} H_{uw}^{\top})^{-1}(H_{u w} {H_{w w}^{-1}} H_{xw}^{\top} -H_{xu}^{\top})\\ &r = -\frac{1}{2}(H_{u u}-H_{u w} H_{w w}^{-1} H_{uw}^{\top})^{-1}(G_u - H_{uw} H_{w w}^{-1} G_w ). \end{eqnarray*} \item The optimal adversarial policy is given by $w^*= Lx+l,$ where \begin{equation} \notag \begin{aligned} &L = (H_{w w}-H_{uw}^{\top} H_{uu}^{-1} H_{u w})^{-1}(H_{uw}^{\top} H_{uu}^{-1} H_{xu}^{\top}-H_{xw}^{\top})\\ &l = -\frac{1}{2}(H_{w w}-H_{uw}^{\top} H_{uu}^{-1} H_{u w})^{-1}(G_w- H_{uw}^{\top} H_{uu}^{-1} G_u ). \end{aligned} \end{equation} \end{enumerate} \end{thm} \begin{pf} By following the same procedures (backward induction) as in the proof of Theorem \ref{theorem:solution}, it can be shown that the iterations in (\ref{def:P}) and (\ref{def:g}) are preserved. To save space, the details are omitted. 
\hfill \vrule height6pt width 6pt depth 0pt \end{pf} \begin{remark}\label{sebsec:H_inf} In comparison with (\ref{equ:newgame}), the well-known $H_\infty$ optimal control~\citep{bacsar2008h} solves the following zero-sum game \begin{equation} \min_{\pi} \max_{w} \sum_{k=0}^{\infty} (x_k^{\top}Qx_k + u_k^{\top}Ru_k - \lambda \|w_k\|^2).\notag \end{equation} Clearly, the deterministic game (\ref{equ:newgame}) covers this case by letting $\bar{w}=0$. \end{remark} Theorem \ref{theorem:certainty} implies that the game with Wasserstein penalties is equivalent to the deterministic version in \eqref{equ:newgame} in the sense that the resulting optimal controllers are identical. Thus, it suffices to design a model-free algorithm to solve the deterministic game (\ref{equ:newgame}). \section{Model-free Q-learning with Convergence Guarantees}\label{sec:algorithm} In this section, we develop a Q-learning algorithm with the simulator in Fig. \ref{pic:simu} to solve the deterministic game (\ref{equ:newgame}) and show its global convergence to the DR optimal controller in Theorem \ref{theorem:solution}. \subsection{An Online Q-learning Algorithm} Motivated by~\citet{al2007model}, we propose a Q-learning algorithm to learn $Q_c(x,u,w)$ in (\ref{def:Q_c}) by solely using data from a simulator. We refer to it as the DR Q-learning algorithm since it yields the same optimal controller as the zero-sum game in (\ref{equ:wasprob}). By Theorem \ref{theorem:certainty}, $Q_c(x,u,w)$ can be parameterized with a symmetric matrix $H$, a vector $G$ and a scalar $s$. By using the Kronecker operator, we can reformulate it as a linear function with a parameter vector $\theta$. Let $e = [x^{\top}~u^{\top} ~w^{\top}]^{\top} \in \mathbb{R}^q$ with $q = n+m+d$, $\bar{e} = [e_1^2, \cdots, e_1e_q, e_2^2, e_2e_3, \cdots, e_{q-1}e_q,e_q^2]^{\top}$ be the Kronecker product quadratic polynomial basis vector, $h$ be the vector formed by stacking the columns of the matrix $H$ and then removing the redundant terms introduced by the symmetry of $H$. Then the Q-function can be written as \begin{equation} \label{equ:para_q} \begin{aligned} Q_c(x,u,w |\theta) &= \begin{bmatrix} x\\ u \\ w \\ \end{bmatrix}^{\top} H \begin{bmatrix} x \\ u \\ w \\ \end{bmatrix} + G^{\top} \begin{bmatrix} x \\ u \\ w \end{bmatrix} +s\\ &= [h^{\top}~G^{\top}~s] \begin{bmatrix} \bar{e} \\ e \\ 1 \\ \end{bmatrix} = \theta^{\top} \tilde{e} \end{aligned} \end{equation} with $\theta = [h^{\top}~G^{\top}~s]^{\top}$ and \begin{equation}\label{def:e} \tilde{e} = [\bar{e}^{\top}~e^{\top}~1]^{\top}. \end{equation} A pair of optimal solutions to the zero-sum game (\ref{equ:newgame}) under the parameter vector $\theta$ is given as $u^*(x) = K x + r$ and $w^*(x) = Lx+l$. Since $$V_c(x_k) = \min_{u_k}\max_{w_k} Q_c(x_k, u_k, w_k),$$ it follows from the Bellman equation that \begin{equation}\label{equ:iter} Q_c(x_k,u^*(x_k),w^*(x_k)|\theta) = d(x_k,\theta) \end{equation} where \begin{equation}\label{def:d} \begin{aligned} d(x_k,\theta) &= x_k^{\top}Qx_k + u^*(x_k)^{\top}Ru^*(x_k)- \lambda \|w^*(x_k) - \bar{w}\|^2\\ &~~ + \alpha Q(x_{k+1},u^*(x_{k+1}),w^*(x_{k+1})|\theta). \end{aligned} \end{equation} Clearly, $d(x_k,\theta)$ can be computed by using the sample $\{x_k, u^*(x_k),w^*(x_k), x_{k+1}\}$ from the simulator. Now, we find $\theta$ by designing an iterative learning algorithm. Suppose that at the i-th iteration, the parameter vector is denoted as $\theta_{i}$ and the resulting pair of optimal policies are $u_i^*(x)=K_ix+r_i, w_i^*(x)= L_ix+l_i$. 
We sample a system trajectory $\{x_k^p, u_i^*(x_k^p), w_i^*(x_k^p), x_{k+1}^p\}_{p=1}^M$ of the length $M$ from the simulator, then $\theta_{i+1}$ is obtained by solving a least-squares problem \begin{equation} \label{equ:theta_update} \begin{aligned} {\theta}_{i+1} &=\arg \min_\theta \{ \sum_{p = 1}^{M} | Q_c(x_k^p,u_i^*(x_k^p),w_i^*(x_k^p)|\theta) - d(x_k^p, \theta_i) |^2 \}\\ &=\arg \min_\theta \{ \sum_{p = 1}^{M} | {\theta}^{\top} \tilde{e}(x_k^p) - d(x_k^p, \theta_i) |^2 \}, \end{aligned} \end{equation} where $\tilde{e}(x_k^p)$ and $d(x_k^p, \theta_i)$ are given by (\ref{def:e}) and (\ref{def:d}), respectively. Since $u_i^*(x_k)$ and $w_i^*(x_k)$ are linearly dependent on $[x_k^{\top} ~~1]^{\top}$, solving (\ref{equ:ls}) yields an infinite number of solutions. To this end, we manually add exploration noises to the control and disturbance inputs, i.e., \begin{equation}\label{equ:explor} {u}_i^*(x_k) = K_i x_k + r_i + o_k^1,~~~~{w}_i^*(x_k) = L_i x_k + l_i + o_k^2, \end{equation} where $o_k^1 \sim \mathcal{N}(0, \Sigma_1)$ and $o_k^2 \sim \mathcal{N}(0, \Sigma_2)$ with covariance matrices $\Sigma_1$ and $\Sigma_2$. This ensures that if $M > \frac{1}{2}(q+1)(q+2)$, there is a unique solution to (\ref{equ:theta_update}), i.e., \begin{equation} \label{equ:ls} {\theta}_{i+1} = \left(\sum_{p = 1}^{M}\tilde{e}(x_k^p)\tilde{e}(x_k^p)^{\top} \right)^{-1} \sum_{p = 1}^{M} \tilde{e}(x_k^p) d(x_k^p,\theta_i). \end{equation} It is shown in~\citet{al2007model} that the exploration noises do not result in any bias to $\theta$. \begin{algorithm}[t] \caption{The DR Q-learning algorithm} \label{alg:q_learning} \begin{algorithmic}[1] \Require Penalty parameter $\lambda$, discount factor $\alpha$, disturbance samples $\left\{\hat{w}^{(1)}, \ldots, \hat{w}^{(N)}\right\}$ from the physical world, length of a trajectory $M$, termination condition $\epsilon$. \Ensure An optimal controller $u^*(x)= Kx+r$. \State Initialize ${\theta}_0 = 0$, and the optimal feedback policies $K_0 = 0$, $r_0 = 0$, $L_0 = 0$ and $l_0 = 0$. \For{$i=0,1,\cdots $} \State \textbf{Step 1: Q-function Evaluation} \State Collect $\{x_k^p, {u}_i^*(x_k^p), {w}_i^*(x_k^p), x_{k+1}^p\}_{p=1}^M$ from the \phantom{......}simulator, where ${u}_i^*(x_k^p)$ and ${w}_i^*(x_k^p)$ are noisy \phantom{......}inputs given by (\ref{equ:explor}). \State Determine the target value $\{d(x_k^p, \theta_i)\}_{p=1}^M$ by \phantom{......}using the trajectory. \State Update $\theta_{i+1}$ by \begin{equation}\notag {\theta}_{i+1} = \arg \min_{\theta} \left\{ \sum_{p = 1}^{M} | {\theta}^{\top} \tilde{e}(x_k^p) - d(x_k^p, \theta_i) |^2 \right\}. \end{equation} \If{$\|{\theta}_{i+1} -{\theta}_{i}\| \leq \epsilon $} \State Terminate the loop and output $K_i,r_i$. \EndIf \State \textbf{Step 2: Policy Improvement} \State Update the optimal policy pair $(K_{i+1}, r_{i+1})$ and \phantom{......}$(L_{i+1}, l_{i+1})$ by $\theta_{i+1}$. \EndFor \end{algorithmic} \end{algorithm} We present our DR Q-learning algorithm in Algorithm \ref{alg:q_learning} which terminates when the increment of ${\theta}_{i}$ is smaller than a user-defined constant $\epsilon$. \subsection{Convergence of the Q-learning Algorithm} We show that $\theta = [h^{\top}~G^{\top}~s]^{\top}$ in (\ref{equ:para_q}) can be solved by the value iteration in (\ref{def:P}) and (\ref{def:g}), whose convergence has already been shown in Theorem \ref{coro}. Let $W=\text{diag}(Q, R, -\lambda I)$. Then we have the following results. 
\begin{lem}\label{lemma3} The update of $\theta_i = [h_i^{\top}~G_i^{\top}~s_i]^{\top}$ in (\ref{equ:ls}) can be written as \begin{equation}\label{equ:para_iter1} H_{i+1}= W+\alpha \begin{bmatrix} A & B & E \\ K_{i} A & K_{i} B & K_{i} E \\ L_{i} A & L_{i} B & L_{i} E \end{bmatrix}^{\top} H_{i}\begin{bmatrix} A & B & E \\ K_{i} A & K_{i} B & K_{i} E \\ L_{i} A & L_{i} B & L_{i} E \end{bmatrix}, \end{equation} \begin{equation}\label{equ:para_iter2} \begin{aligned} &G_{i+1}^{\top}= \alpha (G_i^{\top} + 2 \begin{bmatrix} 0 \\ r_i \\ l_i \end{bmatrix}^{\top} H_i) \begin{bmatrix} I \\ K_i \\ L_i \end{bmatrix}\bigl[A~B~E\bigr] + \begin{bmatrix} 0 \\ 0 \\ 2\lambda \bar{w} \end{bmatrix}^{\top},\\ & \text{and} ~s_{i+1} = \alpha \bigl(s_i + G_i^{\top} \begin{bmatrix} 0 \\ r_i \\ l_i \end{bmatrix} + \begin{bmatrix} 0 \\ r_i \\ l_i \end{bmatrix}^{\top} H_i \begin{bmatrix} 0 \\ r_i \\ l_i \end{bmatrix} \bigr) - \lambda \|\bar{w}\|^2. \end{aligned} \end{equation} \end{lem} \begin{pf} Define \begin{equation}\notag \begin{aligned} &V = [\tilde{e}(x_k^1), \tilde{e}(x_k^2), \cdots, \tilde{e}(x_k^M)], \\ &Y = [d(x_k^1, \theta_i), d(x_k^2, \theta_i), \cdots, d(x_k^M, \theta_i)]^{\top}. \end{aligned} \end{equation} It follows from (\ref{equ:ls}) that ${\theta}_{i+1} = (VV^{\top})^{-1}VY$. In the sequel, we show that $d(x_k, \theta_i)$ is linear with respect to $\tilde{e}(x_k)$, i.e., $Y = Vy(\theta_i)$, where $y(\theta_i)$ is a function of $\theta_{i}$. We derive $d(x_k, \theta_i)$ as a function of $e(x_k)$. Since \begin{equation}\notag \begin{aligned} &Q_c(x_{k+1},{u}_i^*(x_{k+1}),{w}_i^*(x_{k+1})|\theta_{i}) \\ &~~~~~~~~~~~~~~~~~=e(x_{k+1})^{\top}H_i e(x_{k+1}) + G_i^{\top} e(x_{k+1}) +s_i, \end{aligned} \end{equation} it follows from (\ref{equ:iter}) that \begin{equation}\label{equ:dd} \begin{aligned} d(x_k, \theta_i) & = e(x_k)^{\top}We(x_k) + \begin{bmatrix} 0 \\ 0 \\ 2\lambda \bar{w} \end{bmatrix}^{\top}e(x_k) - \lambda \|\bar{w}\|^2 \\ &+ \alpha \big[e(x_{k+1})^{\top}H_i e(x_{k+1}) + G_i^{\top} e(x_{k+1}) +s_i\big]. \end{aligned} \end{equation} Moreover, the term $e(x_{k+1})$ in (\ref{equ:dd}) can be written as \begin{equation}\label{equ:e_e} \begin{aligned} &e(x_{k+1})=e\left([A~~B~~E]e(x_k)\right)\\ &=e\big([A~~B~~E]\big( \begin{bmatrix} I \\ K_i \\ L_i \end{bmatrix} x_k + \begin{bmatrix} 0 \\ r_i \\ l_i \end{bmatrix} \big)\big) \\ &=\begin{bmatrix} I \\ K_i \\ L_i \end{bmatrix} [A~~B~~E]\big( \begin{bmatrix} I \\ K_i \\ L_i \end{bmatrix} x_k + \begin{bmatrix} 0 \\ r_i \\ l_i \end{bmatrix} \big) + \begin{bmatrix} 0 \\ r_i \\ l_i \end{bmatrix} \end{aligned} \end{equation} \begin{equation} \begin{aligned} &=\begin{bmatrix} I \\ K_i \\ L_i \end{bmatrix} [A~~B~~E]e(x_k) + \begin{bmatrix} 0 \\ r_i \\ l_i \end{bmatrix}. \\ \end{aligned} \end{equation} Inserting (\ref{equ:e_e}) into (\ref{equ:dd}), $d(x_k, \theta_i)$ is written as a function with respect to $e(x_k)$. By using the Kronecker product as in (\ref{equ:para_q}), it follows that $d(x_k, \theta_i) = \tilde{e}(x_k)^{\top}y(\theta_i)$. Combining ${\theta}_{i+1} = (VV^{\top})^{-1}VY$, the proof is completed. 
\hfill \vrule height6pt width 6pt depth 0pt \end{pf} \begin{lem}\label{lemma:2} The update of $H_i$, $G_i$, $s_i$ in (\ref{equ:para_iter1}) and (\ref{equ:para_iter2}) can be written as \begin{equation}\label{equ:hp} \begin{aligned} H_{i+1} &= \begin{bmatrix} \alpha A^{\top} P_i A+ Q & \alpha A^{\top} P_i B & \alpha A^{\top} P_i E \\ &\alpha B^{\top} P_i B+ R & \alpha B^{\top} P_i E \\ *& & \alpha E^{\top} P_i E-\lambda I \end{bmatrix},\\ G_{i+1}^{\top} &= \left[\alpha g_i^{\top}A ~~~ \alpha g_i^{\top}B ~~~ \alpha g_i^{\top}E+2\lambda \bar{w}^{\top} \right],\\ s_{i+1} &= \alpha z_i - \lambda \|\bar{w}\|^2, \end{aligned} \end{equation} where $P_{i}=\left[\begin{array}{lll} I & L_{i}^{\top} & K_{i}^{\top} \end{array} \right] H_{i} \left[\begin{array}{lll} I & L_{i}^{\top} & K_{i}^{\top} \end{array} \right]^{\top}$, and \begin{equation}\label{def:pgz} \begin{aligned} &g_i^{\top} = (G_i^{\top} + 2 \begin{bmatrix} 0 \\ r_i \\ l_i \end{bmatrix}^{\top} H_i) \begin{bmatrix} I \\ K_i \\ L_i \end{bmatrix},\\ &z_i = s_i + G_i^{\top} \begin{bmatrix} 0 \\ r_i \\ l_i \end{bmatrix} + \begin{bmatrix} 0 \\ r_i \\ l_i \end{bmatrix}^{\top} H_i \begin{bmatrix} 0 \\ r_i \\ l_i \end{bmatrix}. \end{aligned} \end{equation} \end{lem} \begin{pf} The iteration on $H_i$ in (\ref{equ:para_iter1}) can be written as \begin{equation}\notag H_{i+1}=W+[A~~B~~E]^{\top}[I~~L_i^{\top}~~K_i^{\top}]H_i[I~~L_i^{\top}~~K_i^{\top}]^{\top}[A~~B~~E]. \end{equation} By the definition of $P_i$ in (\ref{def:pgz}), we yield that \begin{equation}\notag \begin{aligned} H_{i+1}&=W+[A~~B~~E]^{\top}P_i[A~~B~~E]\\ &\hspace{-.8cm}=\begin{bmatrix} \alpha A^{\top} P_i A+ Q & \alpha A^{\top} P_i B & \alpha A^{\top} P_i E \\ &\alpha B^{\top} P_i B+ R & \alpha B^{\top} P_i E \\ *& & \alpha E^{\top} P_i E-\lambda I \end{bmatrix}. \end{aligned} \end{equation} Similarly, both $G_{i+1}$ and $s_{i+1}$ can be expressed in terms of $g_i$ and $z_i$. \hfill \vrule height6pt width 6pt depth 0pt \end{pf} \begin{lem}\label{lemma:3} Iterations (\ref{equ:para_iter1}) and (\ref{equ:para_iter2}) can be written as the value iteration in (\ref{def:P}) and (\ref{def:g}), respectively. \end{lem} \begin{pf} It follows from (\ref{def:pgz}) that \begin{equation}\label{hhh} P_{i+1} = [I~~L_{i+1}^{\top}~~K_{i+1}^{\top}]H_{i+1}[I~~L_{i+1}^{\top}~~K_{i+1}^{\top}]^{\top}. \end{equation} Since $K_{i+1}, L_{i+1}$ can be directly obtained using $H_{i+1}$ hence $P_i$ by (\ref{equ:hp}), one can verify that (\ref{hhh}) has the same expression as (\ref{def:P}). The iterations on $g_{i+1}$ and $z_{i+1}$ in (\ref{def:g}) can be analogously derived. \hfill \vrule height6pt width 6pt depth 0pt \end{pf} The following theorem shows the convergence of the proposed DR Q-learning algorithm. \begin{thm} Let Assumption \ref{assumption} hold and $M > \frac{1}{2}(q+1)(q+2)$, then the sequence $\{\theta_{i}\}$ in Algorithm \ref{alg:q_learning} converges to a unique optimal parameter vector $\theta$. \end{thm} \begin{pf} Lemma \ref{lemma:3} shows that the update of $\theta_i = [h_i^{\top}~G_i^{\top}~s_i]^{\top}$ in (\ref{equ:ls}) can be written as the value iteration (\ref{def:P}) and (\ref{def:g}), whose convergence has been proved in Theorem \ref{coro}. 
Thus, $H_i,G_i,s_i$ converge with $H_0 =0,G_0 =0,s_0=0$, namely, \begin{equation}\notag \begin{aligned} \lim\limits_{i \rightarrow \infty}H_{i} &= \begin{bmatrix} \alpha A^{\top} P A+ Q & \alpha A^{\top} P B & \alpha A^{\top} P E \\ &\alpha B^{\top} P B+ R & \alpha B^{\top} P E \\ *& & \alpha E^{\top} P E-\lambda I \end{bmatrix}\\ \lim\limits_{i \rightarrow \infty}G_{i}^{\top} &= \left[\alpha g^{\top}A ~~~ \alpha g^{\top}B ~~~ \alpha g^{\top}E+2\lambda \bar{w}^{\top} \right]\\ \lim\limits_{i \rightarrow \infty}s_{i} &= \alpha z - \lambda \|\bar{w}\|^2, \end{aligned} \end{equation} where $P,g,z$ are given by (\ref{def:P}) and (\ref{def:g}). \hfill \vrule height6pt width 6pt depth 0pt \end{pf} \begin{figure*}[htbp] \centering \subfigure[ ]{ \label{pic:comparex1} \includegraphics[width=0.31 \textwidth]{comp_x1} } \subfigure[ ]{ \label{pic:comparex2} \includegraphics[width=0.31 \textwidth]{comp_x2} } \subfigure[ ]{ \label{pic:compareu1} \includegraphics[width=0.31 \textwidth]{comp_u1} } \caption{{Sequences of (a) the state $x_{k,1}$, (b) the state $x_{k,2}$, (c) the input $u_{k,1}$.}} \end{figure*} \begin{figure}[t] \centerline{\includegraphics[width=70mm]{cost}} \caption{The long-term cost converges to the near optimal value within 30 iterations.} \label{pic:cost} \end{figure} \section{Numerical Examples}\label{sec:experiment} In this section, we demonstrate the effectiveness of our Q-learning algorithm on a regulation problem of a quadrotor and illustrate its convergence. \subsection{Experiment Setup} We consider a quadrotor that operates on a 2-D horizontal plane. The groundtruth discrete-time dynamical model is given by a double integrator as \begin{equation}\label{def:model} x_{k+1}=\begin{bmatrix} 1 & 0 & T & 0 \\ 0 & 1 & 0 & T \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} x_{k}+\begin{bmatrix} \frac{T^2}{2} & 0 \\ 0 & \frac{T^2}{2} \\ T & 0 \\ 0 & T \end{bmatrix}\left(u_{k}+w_{k}\right), \end{equation} where the sample period is set as $T=0.1s$, $[x_{k,1},x_{k,2}]$ denotes the position coordinates, $[x_{k,3},x_{k,4}]$ is the corresponding velocity, and the input $u_k$ is the acceleration. Disturbances $w_{k,1}$ and $w_{k,2}$ from the wind are independent random variables, and $w_{k,1} \sim \mathcal{N}(1.8,0.1)$, $w_{k,2} \sim \mathcal{N}(0.5,0.1)$. \begin{figure}[t] \centering \subfigure{ \includegraphics[width=39mm]{hybrid_x1_comp} } \subfigure{ \includegraphics[width=39mm]{hybrid_u1_comp} } \caption{Evolution of $x_{k,1}$ and $u_{k,1}$ under the Gaussian mixture disturbance distribution.} \label{pic:gmm} \end{figure} Our objective is to regulate the position of the quadrotor to the origin with minimum energy consumption. To this end, the parameters of the cost function are set as $ Q = I~\text{and}~R = 0.2\times I. $ By Theorem \ref{theorem:solution}, the distribution deviation penalty parameter $\lambda$ should satisfy $\lambda I -\alpha E^{\top} P E > 0$, which in our experiments is $\lambda > 0.22$. For a clear presentation we select $\lambda = 6$, and the effect of $\lambda$ is to be examined later. The discount factor $\alpha$ is set as $\alpha = 0.99$ such that Theorem \ref{coro} holds. The length of the trajectory $\{x_k^p, \hat{u}_i^*(x_k^p), \hat{w}_i^*(x_k^p), x_{k+1}^p\}_{p=1}^M$ in each iteration is set as $M = 900$. 
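For concreteness, a minimal sketch of this setup, assuming NumPy and using variable names of our own choosing, is given below; the disturbance enters through the input channel, so the matrix $E$ coincides with $B$ in (\ref{def:model}). Note that with $n=4$, $m=2$ and $d=2$ we have $q=8$, so $M=900$ comfortably satisfies the condition $M > \frac{1}{2}(q+1)(q+2)=45$ required for the least-squares update.
\begin{verbatim}
# Minimal sketch of the simulation setup (our own illustration, assuming NumPy).
import numpy as np

T = 0.1                                     # sampling period [s]
A = np.array([[1., 0., T , 0.],
              [0., 1., 0., T ],
              [0., 0., 1., 0.],
              [0., 0., 0., 1.]])
B = np.array([[T**2/2, 0.    ],
              [0.,     T**2/2],
              [T,      0.    ],
              [0.,     T     ]])
E = B.copy()                                # disturbance enters through the input channel

Q = np.eye(4)                               # state weight
R = 0.2 * np.eye(2)                         # input weight
lam, alpha = 6.0, 0.99                      # Wasserstein penalty and discount factor
N, M = 10, 900                              # disturbance samples / trajectory length

rng = np.random.default_rng(0)
# We read N(1.8,0.1) and N(0.5,0.1) as (mean, variance) pairs -- an assumption.
w_hat = np.column_stack([rng.normal(1.8, np.sqrt(0.1), N),
                         rng.normal(0.5, np.sqrt(0.1), N)])
w_bar = w_hat.mean(axis=0)                              # sample mean
Sigma = (w_hat - w_bar).T @ (w_hat - w_bar) / N         # sample covariance

def basis(x, u, w):
    # tilde_e = [bar_e^T, e^T, 1]^T with bar_e the quadratic monomials of e = [x; u; w];
    # its length is (q+1)(q+2)/2 = 45 for q = n+m+d = 8, hence the condition M > 45.
    e = np.concatenate([x, u, w])
    quad = np.concatenate([e[i] * e[i:] for i in range(e.size)])
    return np.concatenate([quad, e, [1.0]])
\end{verbatim}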
\begin{figure*}[t] \centering \subfigure[ ]{ \label{pic:mean} \includegraphics[width=0.31 \textwidth]{mean} } \subfigure[ ]{ \label{pic:variance} \includegraphics[width=0.31 \textwidth]{variance} } \subfigure[ ]{ \label{pic:cost_lambda} \includegraphics[width=0.31 \textwidth]{cost_lambda} } \caption{{ Illustration of the effects of $\lambda$: (a) the mean of the steady-state $\bar{x}_1$, (b) the variance of $\bar{x}_1$, (c) the long-term cost. The red line (LQR) is for comparison. }} \label{pic:7} \end{figure*} \subsection{Convergence of the DR Q-learning Algorithm} We now demonstrate the convergence of the proposed DR Q-learning algorithm. Suppose that the disturbance samples from the physical world are given as $\{\hat{w}^{(i)}\}_{i=1}^{10}$ with the sample mean $\bar{w} =[1.7974, 0.5405]^{\top}$. Then, we apply Algorithm \ref{alg:q_learning} to train the proposed controller, which we refer to as the Wasserstein DR linear quadratic regulator (WDR-LQR). We select the finite-horizon cost \begin{equation}\label{def:finite_cost} J_i = \sum_{k=0}^{h} \alpha^{k} (x_k^{\top}Qx_k + (u_k^{i})^{\top}Ru_k^i - \lambda \|w_k^i- \bar{w}\|^2 ) \end{equation} as an indicator of convergence, where $u_k^{i} = K_ix_k+ r_i$ and $w_k^{i} = L_ix_k+ l_i$ are the pair of optimal solutions at the $i$-th iteration. Clearly, the convergence of the policies $(K_i,r_i)$ and $(L_i,l_i)$ is reflected by the convergence of $J_i$ in (\ref{def:finite_cost}). As illustrated in Fig. \ref{pic:cost}, the finite-horizon cost (\ref{def:finite_cost}) converges almost exponentially fast with respect to the number of iterations. \subsection{Comparisons to Other Controllers} We illustrate the effectiveness of our WDR-LQR via comparisons with (a) the canonical LQR and (b) the classical $H_{\infty}$ control (${H}_{\infty}$-LQR), as detailed below. (a) The LQR controller is a linear state feedback, namely $u = K_qx$ with $K_q=\left(R+B^{\top} P_q B\right)^{-1} B^{\top} P_q A$, where the matrix $P_q$ is the solution to the algebraic Riccati equation $$A^{\top} P_q A+Q-A^{\top} P_q B\left(R+B^{\top} P_q B\right)^{-1} B^{\top} P_q A=P_q.$$ (b) The classical $H_{\infty}$ control~\citep{bacsar2008h} seeks to solve the following minimax problem \begin{equation}\label{equ:h_inf} \min_{\pi}\max_{\omega} \sum_{k=0}^{\infty} (x_k^{\top}Qx_k + u_k^{\top}Ru_k - \lambda \|w_k\|^2). \end{equation} We manually tune its parameter $\lambda$ to yield good control performance and finally set it to $\lambda = 0.25$. In Figs. \ref{pic:comparex1} and \ref{pic:comparex2}, the position $[x_{k,1},x_{k,2}]$ is regulated to the origin under our WDR-LQR controller with the initial state $x_0 = [1.2 ~~0.6~~ 0.5~-0.5]^{\top}$. In contrast, the LQR and ${H}_{\infty}$-LQR controllers fail to steer the position to the origin, though the latter has the well-known robustness and disturbance rejection capabilities~\citep{bacsar2008h}. Fig. \ref{pic:compareu1} shows that the WDR-LQR consumes less energy than the ${H}_{\infty}$-LQR. To test the robustness of the WDR-LQR controller, we apply it to a system where we instead use a Gaussian mixture distribution to generate the disturbance $w_{k,1}$ in (\ref{def:model}). The Gaussian mixture distribution is a mixture of $\mathcal{N}(1,0.2)$ and $\mathcal{N}(0.9,0.5)$ with equal weights. The evolution of the position $x_{k,1}$ and the input $u_{k,1}$ is displayed in Fig. \ref{pic:gmm}. It can be observed that the WDR-LQR exhibits the best trade-off between state bias and energy cost. 
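For reference, a minimal sketch of the LQR baseline in (a), assuming SciPy and reusing the matrices from the setup sketch above, is given below; as a convention choice of ours, the usual minus sign is absorbed into the gain so that $u = K_q x$ is the stabilizing feedback.
\begin{verbatim}
# Minimal sketch of the LQR baseline (our own illustration, assuming SciPy).
import numpy as np
from scipy.linalg import solve_discrete_are

P_q = solve_discrete_are(A, B, Q, R)                        # discrete algebraic Riccati equation
K_q = -np.linalg.solve(R + B.T @ P_q @ B, B.T @ P_q @ A)    # minus sign absorbed into the gain

def lqr_step(x, w):
    # One closed-loop step of the ground-truth model under the LQR baseline.
    u = K_q @ x
    return A @ x + B @ (u + w), u
\end{verbatim}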
\subsection{Effects of the Penalty Parameter $\lambda$} We observe from (\ref{equ:wasprob}) and (\ref{equ:h_inf}) that $\lambda$ plays a role in both the WDR-LQR and the $H_{\infty}$-LQR. To empirically show how $\lambda$ works, we apply Algorithm \ref{alg:q_learning} under 20 different values of $\lambda$ ranging from $\lambda = 0.22$ to $\lambda = 10$ and study the performance of the resulting controllers. Using Monte Carlo simulation, we conduct 500 independent trials for each controller and report the mean and variance of the steady-state position $\bar{x}_1$ at $t = 18$\,s; see Fig. \ref{pic:mean} and Fig. \ref{pic:variance}. We observe that for the $H_{\infty}$-LQR, there is an apparent steady-state bias. The effect of $\lambda$ on the variance of $\bar{x}_1$ is similar for the $H_{\infty}$-LQR and WDR-LQR. We further simulate the long-term cost with respect to $\lambda$ in Fig. \ref{pic:cost_lambda}. We provide a possible explanation for this result as follows. For the $H_{\infty}$-LQR, a large $\lambda$ implies that the adversarial disturbance should be close to zero; see (\ref{equ:h_inf}). Thus, its performance may degrade when the disturbance $w_{k,1}$ deviates significantly from zero. However, in the WDR-LQR, $\lambda$ penalizes the deviation of the adversarial disturbance distribution from the empirical one $\nu_N$. Since the ground-truth distribution is closer to $\nu_N$ than to zero, our WDR-LQR performs better. \section{Conclusion} This paper proposed a sample-based DR Q-learning algorithm to learn a controller with robustness to disturbance distribution errors. We formulated the stochastic optimal control problem as a zero-sum game with Wasserstein penalties. We first derived an explicit solution to the zero-sum game assuming that the system model $(A,B,E)$ was known. Then, we developed its Q-function and showed that the zero-sum game was equivalent to a deterministic version. We designed a Q-learning algorithm for the deterministic game and showed its global convergence. Finally, simulations were conducted to show the effectiveness of our DR RL algorithm. In recent years, policy gradient methods have shown tremendous success and could also be used to solve the zero-sum game; this is left for future work. \bibliographystyle{agsm}
{ "attr-fineweb-edu": 1.737305, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUdPw5qdmDHWQtq34O
\section{Introduction} The real-time Nonequilibrium Green's Function (NEGF) technique~\cite{Danielewicz1984,Stefanucci2013,Balzer2013} for inhomogeneous systems has received a boost in recent years. One of the reasons is the reinvention of the Generalized Kadanoff-Baym Ansatz (GKBA)~\cite{Lipavsky1986} for the solution of the NEGF equations, which has made it possible to perform \emph{ab-initio} simulations of atoms, molecules, and bulk systems thanks to a drastic reduction of the computational effort. The NEGF--GKBA has been used to study, e.g., atoms~\cite{Perfetto2015b}, biologically relevant molecules~\cite{Perfetto2018}, organic compounds~\cite{Pal2011,Bostrom2018} as well as a large class of extended systems~\cite{Sangalli2015,Sangalli2016} including several two-dimensional layered materials~\cite{Pogna2016,Molina-Sanchez2017}. Recently, the scheme has also been used to study model Hamiltonians with Hubbard or extended Hubbard interactions~\cite{Hermanns2012,Hermanns2013,Hermanns2014,Latini2014,BarLev2016}. The practical application of the NEGF--GKBA, however, suffers from a drawback. At present it is not known how to include \emph{initial correlations} in the equations of motion; hence correlations have to be built up in real time. This means taking a noncorrelated state as initial state, evolving the system with an adiabatically switched-on interaction and then continuing the evolution in the presence of time-dependent external fields if nonequilibrium properties are of interest. The NEGF--GKBA formalism, in the most common approximations, contains a memory kernel that makes the computational effort scale quadratically with the number of time steps. Thus, if we need $N_{\rm ic}$ time steps to build up initial correlations (using the adiabatic switching) and if the nonequilibrium properties of interest require $N_{\rm prop}$ more time steps, the overall simulation scales like $(N_{\rm ic}+N_{\rm prop})^{2}$. Depending on the system $N_{\rm ic}$ can be very large, up to the point of making the simulation computationally prohibitive in the physically relevant time window (from $N_{\rm ic}$ to $N_{\rm ic}+N_{\rm prop}$). Overcoming this drawback would therefore be of utmost practical value. We stress from the outset that the reduced computational complexity of NEGF--GKBA with respect to NEGF is currently possible only for many-body self-energies up to the second Born (2B) level, with first- and second-order exchange diagrams evaluated using either the bare Coulomb interaction $v$ or the statically or partially dynamically screened interaction $W$. Indeed, the implementation of, e.g., a full GW or T-matrix self-energy would give back the original NEGF scaling in the absence of a GKBA-like expression for the fully dynamically screened interaction $W$ or T-matrix $T$. This current limitation prevents the use of NEGF--GKBA for too strongly correlated systems. In this work, we extend the NEGF--GKBA equation to allow for starting the real-time evolution from an {\em initially correlated} (IC) state. This allows for driving the system out of equilibrium already at the beginning of the simulation, thereby reducing the scaling of a calculation from $(N_{\rm ic}+N_{\rm prop})^{2}$ to $N_{\rm prop}^{2}$. The resulting NEGF--GKBA+IC scheme is general and in principle applicable to any system. Existing NEGF--GKBA codes can easily be extended and the additional computational cost is negligible. The structure of the paper is as follows. 
We first give a brief introduction to the NEGF formalism and the GKBA. We then discuss the issue of initial correlations and extend the NEGF--GKBA formalism. Two schemes for calculating the initial correlated state are proposed. We present numerical results in a model donor-acceptor complex, show how our method works in practice and demonstrate its accuracy and improved performance with respect to standard NEGF--GKBA simulations. Finally, we conclude and provide an outlook for future directions. \section{Kadanoff-Baym Equations} We consider electrons described by the general time-dependent second-quantized Hamiltonian in a finite basis \begin{equation} \hat{H}(t) = \sum_{ij\sigma} h_{ij}(t) \hat{c}^\dagger_{i\sigma} \hat{c}_{j\sigma} + \frac{1}{2} \! \sum_{\substack{ijmn\\ \sigma \sigma'}} v_{ijmn}(t) \hat{c}^\dagger_{i\sigma} \hat{c}^\dagger_{j\sigma'} \hat{c}_{m\sigma'} \hat{c}_{n\sigma}. \label{Hamiltonian} \end{equation} The creation (annihilation) operator $\hat{c}^\dagger_{i\sigma} (\hat{c}_{i\sigma})$ creates (destroys) an electron in basis function $i$ with spin $\sigma$. The single-particle Hamiltonian $h(t)$ contains the kinetic energy as well as a general time-dependent external field. The two-body interaction $v_{ijmn}(t)$ is taken to be time-dependent in order to describe adiabatic switchings or interaction quenches; we do not specify its specific shape further here. Without any loss of generality we assume that the system is in equilibrium for times $t\leq 0$. For simplicity we consider spin-compensated systems, although no complications arise in the more general case. We describe the nonequilibrium dynamics of the electrons governed by the Hamiltonian in \Eq{Hamiltonian} using NEGF~\cite{Danielewicz1984,Haug2008,Stefanucci2013,Balzer2013}. The equations of motion for the lesser $\mathcal{G}^<$ and greater $\mathcal{G}^>$ single-particle Green's function are known as the Kadanoff-Baym Equations (KBE)~\cite{Kadanoff1962} and read (in matrix form): \begin{align} \left [ i \overset{\rightarrow}{\partial}_{t} - h_{\rm{HF}}(t) \right ]& \mathcal{G} ^\lessgtr(t,t') \nonumber \\ =&\left [\Sigmacal^\lessgtr \cdot \mathcal{G}^A + \Sigmacal^R \cdot \mathcal{G}^\lessgtr + \Sigmacal^\rceil \star \mathcal{G}^\lceil \right](t,t'), \label{KBE1} \end{align} \begin{align} \mathcal{G} ^\lessgtr(t,t') &\left [ -i \overset{\leftarrow}{\partial}_{t'} -h_{\rm{HF}}(t') \right ] \nonumber \\ &\quad\quad= \left [\mathcal{G}^\lessgtr \cdot \Sigmacal^A + \mathcal{G}^R \cdot \Sigmacal^\lessgtr + \mathcal{G}^\rceil \star \Sigmacal^\lceil \right ](t,t'), \label{KBE2} \end{align} where we have defined the real-time and imaginary-time convolutions according to \begin{align} \left [A \cdot B\right ] (t,t') &\equiv \int_{0}^\infty \text{d} \bar{t}\, A(t,\bar{t}) B(\bar{t},t'), \\ \left [ A \star B \right ](t,t') &\equiv -i \int_{0}^{\beta} \text{d} \bar{\tau} A(t,\bar{\tau}) B(\bar{\tau},t'), \end{align} with $\beta$ the inverse temperature. The imaginary-time convolutions involve the so-called \emph{mixed} functions with one real time and one imaginary time; they contain information about the IC state~\cite{Stefanucci2013}. The retarded and advanced functions are defined as \begin{equation} X^{R/A}(t,t') = \pm \theta(\pm(t-t')) \left [ X^>(t,t') - X^<(t,t') \right ]\!. \label{RetAdv} \end{equation} The quantity $\Sigmacal$ in the KBE is the correlation part of the self-energy. 
The time-local mean-field or Hartree-Fock (HF) part of the self-energy is incorporated in $h_{\rm{HF}}$, defined as \begin{equation} h_{{\rm{HF}},ij}(t) = h_{ij}(t) + \sum_{mn} w_{imnj}(t) \rho_{nm}(t), \label{HFhamiltonian} \end{equation} where $\rho(t)= -i \mathcal{G}^<(t,t)$ is the single-particle density matrix and we have defined $w_{imnj}(t) \equiv 2v_{imnj}(t) - v_{imjn}(t)$. In this work we consider the 2B approximation to the correlation self-energy~\cite{Perfetto2015b} \begin{equation} \Sigmacal_{ij}^\lessgtr (t,\bar{t}) = \smashoperator{\sum_{mnpqrs}}v_{irpn}(t) w_{mqsj}(\bar{t}) \mathcal{G}_{nm}^\lessgtr(t,\bar{t}) \mathcal{G}_{pq}^\lessgtr (t,\bar{t}) \mathcal{G}_{sr}^\gtrless (\bar{t},t). \label{2ndBornSigmaLesserGreater} \end{equation} For future reference, we note that the calculation of the 2B self-energy scales like $N_b^5$ with the number of basis functions $N_b$ and that for any fixed $t$ and $\bar{t}$ it does not scale with the number of time steps $N_t$. Knowledge of the lesser/greater Green's functions give access to many observables, e.g., density, current density, spectral function, total energy, etc. Unfortunately, the computational effort to solve the KBE is relatively high since these are integro-differential equations for {\em two-time} functions. Using a time-stepping technique the propagation up to $N_t$ time steps scales like $N_t^3$, provided that the calculation of the self-energy does not scale higher than that~\cite{Stan2009}. For the most common approximations used in the literature, i.e., the 2B, GW and T-matrix approximations, the full solution of the KBE does indeed scale {\em cubically} with $N_t$~\cite{Myohanen2008,Myohanen2009,Friesen2009,PuigvonFriesen2010}. This cubic scaling is what prohibits long time evolutions in many systems. To reduce the computational effort we reduce the information contained in the unknown functions. Instead of solving the KBE for the Green's function we solve the equation of motion for the single-particle density matrix $\rho(t)$ which is a {\em one-time} function. The equation for $\rho$ can be derived from the KBE by subtracting \Eq{KBE2} from \Eq{KBE1}, and then letting $t' \to t$~\cite{Kadanoff1962,Stefanucci2013} \begin{equation} \partial_t \rho(t) + i \left [ h_{\rm{HF}}(t), \rho(t) \right ] = - \left({\mathcal{I}}(t)+{\mathcal{I}}^{\rm ic}(t) + {\rm H.c.} \right ), \label{rhoEquation} \end{equation} where we have defined the {\em collision integral} \begin{equation} {\mathcal{I}} (t) \! = \! \int_{0}^t \! \! \text{d} \bar{t} \left [\Sigmacal^>(t,\bar{t}) \mathcal{G}^<(\bar{t},t) - \Sigmacal^<(t,\bar{t}) \mathcal{G}^>(\bar{t},t) \right ] \label{CollisionIntegral} \end{equation} and the {\em IC integral} \begin{equation}\ {\mathcal{I}}^{\rm ic}(t) = -i \int_{0}^{\beta} \text{d} \bar{\tau} \Sigmacal^\rceil(t,\bar{\tau}) \mathcal{G}^\lceil(\bar{\tau},t). \label{ICIntegral} \end{equation} The IC integral ${\mathcal{I}}^{\rm ic}(t)$ depends on $t$ only through the integrand, whereas the collision integral ${\mathcal{I}} (t)$ depends on $t$ through both the integrand and the upper integration limit. Thus, the calculation of the right hand side of \Eq{rhoEquation} scales linearly with the number of time steps $N_t$. This implies that the full propagation of the density matrix scales {\em quadratically} with $N_{t}$, provided that the calculation of the self-energy does not scale higher than that. 
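To make the scaling explicit, the following schematic Python sketch (our own illustration; the routines building $h_{\rm{HF}}$, the collision integral and the IC integral are placeholders and not spelled out here) propagates $\rho$ with a simple Euler rule: the memory kernel in \Eq{CollisionIntegral} forces a sum over the full history at each step, which is the origin of the quadratic cost in $N_{t}$.
\begin{verbatim}
# Schematic sketch (our own) of the density-matrix time stepping; the three
# callables are placeholders for the HF Hamiltonian, the collision integral
# and the IC integral of the equation of motion for rho.
import numpy as np

def propagate_rho(rho0, build_hHF, collision_integral, ic_integral, dt, n_steps):
    # Step k needs the whole history (memory kernel), so it costs O(k) and the
    # full run costs O(n_steps^2); the IC integral adds no time integration.
    rho = rho0.copy()
    history = [rho0.copy()]
    for step in range(n_steps):
        t = step * dt
        hHF = build_hHF(rho, t)
        I = collision_integral(history, t)          # integral from 0 to t
        Iic = ic_integral(rho, t)                   # initial-correlation term
        rhs = -(I + Iic + (I + Iic).conj().T)
        rho = rho + dt * (-1j * (hHF @ rho - rho @ hHF) + rhs)
        history.append(rho.copy())
    return rho
\end{verbatim}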
Although the time-stepping technique for $\rho$ is numerically cheaper than for the Green's function, Eq.~(\ref{rhoEquation}) suffers from a fundamental problem: it is not a closed equation for $\rho$. The collision integral ${\mathcal{I}}(t)$ involves the off-diagonal (in time) $\mathcal{G}^{\lessgtr}(t,t')$ and the IC integral contains the mixed functions. In the next Section we discuss the Generalized Kadanoff-Baym Ansatz (GKBA) to transform ${\mathcal{I}}$ into a functional of $\rho$ whereas in Section~\ref{GKBAICsec} we present the main result of this work, i.e., a suitable functional form of ${\mathcal{I}}^{\rm ic}$ in terms of $\rho$. \section{Collision Integral with GKBA} The GKBA~\cite{Lipavsky1986} is the following {\em ansatz} for the lesser and greater Green's function (in matrix form) \begin{align} \begin{split} \mathcal{G}^<(t,t') = &-\left [ \mathcal{G}^R(t,t') \rho(t') - \rho(t) \mathcal{G}^A(t,t')\right ], \\ \mathcal{G}^>(t,t') = &\left [ \mathcal{G}^R(t,t') \bar{\rho}(t') - \bar{\rho}(t) \mathcal{G}^A(t,t')\right ], \end{split}\label{GKBA} \end{align} where $\bar{\rho}(t) \equiv \oneh - \rho(t) = i \mathcal{G}^>(t,t)$. Of course, \Eq{GKBA} alone does not transform ${\mathcal{I}}$ into a functional of the density matrix. We also need to specify the retarded/advanced Green's functions $\mathcal{G}^{R/A}(t,t')$. These functions satisfy their own KBE and the computational advantage would be lost if we had to solve them numerically. For systems where the average collision time is smaller than the quasi-particle's lifetime the effect of the correlation self-energy on $\mathcal{G}^{R/A}(t,t')$ can be discarded, and we can employ the HF approximation to $\mathcal{G}^{R/A}(t,t')$, i.e. \begin{align} \mathcal{G}^R(t,t') = -i \theta(t-t') &\TimeOrdering{e^{-i \int_{t'}^{t} h_{\rm{HF}}(\bar{t}) d\bar{t}}}. \label{Gret} \end{align} The calculation of the HF $\mathcal{G}^R(t,t')$ for all $t'<t$ scales linearly in $t$. We mention that there are also other approximations to $\mathcal{G}^R(t,t')$ with the same scaling. They are written in terms of $\rho$ only and contain correlation effects to some extent~\cite{Haug1992,Bonitz1999,Arnaud2005,Marini2013,Latini2014}. The following discussion applies to these approximations as well. The expression for the retarded Green's functions, \Eq{Gret}, together with \Eq{GKBA}, define the GKBA. Since the HF hamiltonian depends only on $\rho$, see \Eq{HFhamiltonian}, the right hand side of \Eq{GKBA} and hence the self-energy of \Eq{2ndBornSigmaLesserGreater} are functionals of $\rho$. Consequently, the collision integral ${\mathcal{I}}(t)$, see \Eq{CollisionIntegral}, becomes a history-dependent functional of $\rho(\bar{t})$ with $\bar{t} \leq t$. \section{Initial Correlation Integral with GKBA} \label{GKBAICsec} \subsection{Drawbacks of a vanishing IC integral} Without an expression of ${\mathcal{I}}^{\rm ic}$ in terms of $\rho$, the equation of motion for the density matrix, \Eq{rhoEquation}, cannot be solved. NEGF-GKBA simulations are usually performed with ${\mathcal{I}}^{\rm ic}=0$. However, this is justified only provided that the initial state is noncorrelated. In fact, in the absence of external fields $\rho(t)=\rho^{\rm eq}$ should be stationary and consequently $h_{\rm{HF}}(t)=h_{\rm{HF}}^{\rm eq}$ is stationary too. If ${\mathcal{I}}^{\rm ic}=0$ then \Eq{rhoEquation} at time $t=0$ implies $[\rho^{\rm eq},h_{\rm{HF}}^{\rm eq}]=0$ since ${\mathcal{I}}(0)=0$. 
Therefore $\rho(t)=\rho^{\rm eq}$ is solution of \Eq{rhoEquation} with ${\mathcal{I}}^{\rm ic}=0$ only if ${\mathcal{I}}(t)=0$ for all $t$, i.e., only in the absence of correlations. Viceversa, a correlated density matrix $\rho^{\rm eq}$ does not commute with $h_{\rm{HF}}^{\rm eq}$ and for it to be stationary in the absence of external fields, ${\mathcal{I}}^{\rm ic}$ cannot vanish. This is easily seen by taking again into account that ${\mathcal{I}}(0)=0$ and hence \Eq{rhoEquation} at time $t=0$ implies \begin{equation} {\mathcal{I}}^{\rm ic}(0)+{\rm H.c.}=-i \left [ h_{\rm{HF}}^{\rm eq}, \rho^{\rm eq} \right ]. \label{stateq} \end{equation} The common way to circumvent the problem of initially noncorrelated states consists in starting from a noncorrelated $\rho(0)=\rho^{\rm eq}$ and then build up correlations by a slow switching-on of the interaction. The drawback of this procedure is that the correlation build-up time can be rather long, like in systems with a small gap between the ground state and the lowest excited states. Suppose that we are interested in studying the nonequilibrium dynamics for $N_{\rm prop}$ time steps and that $N_{\rm ic}$ time steps are necessary for the IC build-up. The computational effort to perform the $i$-th time step in the physically relevant time-window scales like $N_{\rm ic}+i$ (since ${\mathcal{I}}$ in \Eq{CollisionIntegral} contains an integral from time step 0 to time step $N_{\rm ic}+i$) and therefore the cost of the entire simulation scales like $(N_{\rm ic}+N_{\rm prop})^{2}$. \subsection{Equivalent expression of the IC integral} Let us now discuss the removal of the adiabatic switching from the numerical procedure. For this purpose, we inevitably need to find an expression of the IC integral in terms of $\rho$ which satisfies the {\em stationarity property} \begin{equation} {\mathcal{I}}^{{\rm ic}}(0)={\mathcal{I}}(t)+{\mathcal{I}}^{{\rm ic}}(t) \label{statprop} \end{equation} for any $\rho(t)=\rho^{\rm eq}$ solution of the stationary equation (\ref{stateq}). The difficulty in deriving such an expression stems from the fact that there is no GKBA-like form for the mixed functions appearing in ${\mathcal{I}}^{{\rm ic}}$, see again \Eq{ICIntegral}. The solution to the problem is found by rewriting the IC integral in an equivalent manner. In Appendix~\ref{generalizedFD} we prove a generalized version of the fluctuation-dissipation theorem and use this generalization in Appendix~\ref{iceqformapp} to show that the IC integral in \Eq{ICIntegral} can equivalently be expressed in terms of real-time Green's functions according to (see \Eq{equivalentForm}) \begin{equation} \!\! {\mathcal{I}}^{\rm ic} (t) = \int _{-\infty} ^{0} \!\!\! \text{d} \bar{t} \left [ \Sigmacal^>(t,\bar{t}) \mathcal{G}^<(\bar{t},t) - \Sigmacal^< (t,\bar{t}) \mathcal{G}^>(\bar{t},t) \right ] \! . \label{CollisionIntegralInit} \end{equation} For $t<0$, when the system is in equilibrium, \Eq{CollisionIntegralInit} follows from the standard fluctuation-dissipation theorems for $\mathcal{G}$ and $\S$~\cite{Stefanucci2013}. With the generalized fluctuation-dissipation theorem of Appendix~\ref{generalizedFD} one can show that \Eq{CollisionIntegralInit} is also valid out of equilibrium, i.e., for $t>0$. We emphasize that the equivalence between Eqs.~(\ref{CollisionIntegralInit}) and (\ref{ICIntegral}) is an {\em exact} result, at zero or finite temperature. For notational convenience, we suppress a convergence factor in \Eq{CollisionIntegralInit}, see \Eq{equivalentForm}. 
Let us now employ the GKBA approximation to \Eq{CollisionIntegralInit}. The main advantage of \Eq{CollisionIntegralInit} over \Eq{ICIntegral} is that it contains only lesser and greater Green's functions for which a GKBA exists, and we avoid the necessity of constructing a GKBA for the mixed functions. Therefore, \Eq{CollisionIntegralInit} allows us to transform ${\mathcal{I}}^{\rm ic}$ into a functional of $\rho$. While \Eq{CollisionIntegralInit} is an exact relation, it is not obvious that the application of GKBA to \Eq{CollisionIntegralInit} will yield a solution that satisfies the stationarity property. Let us prove that the functional ${\mathcal{I}}^{\rm ic}$ indeed fulfills \Eq{statprop}. For any stationary $\rho$ and in the absence of external fields $\mathcal{G}^{R/A}$ is a function of the time difference only, see \Eq{Gret}. Via the GKBA, \Eq{GKBA}, the same is true for the lesser and greater Green's functions and hence for the 2B self-energy of \Eq{2ndBornSigmaLesserGreater}. Renaming the integration variable in \Eq{CollisionIntegral} and \Eq{CollisionIntegralInit} according to $\bar{t}' = \bar{t} - t$ we have that $\mathcal{G}^{\lessgtr}(t,\bar{t}) = \mathcal{G}^{\lessgtr}(0,\bar{t}')$ and hence $\Sigmacal^{\lessgtr}(t,\bar{t}) = \Sigmacal^{\lessgtr}(0,\bar{t}')$. Using \Eq{CollisionIntegral} and \Eq{CollisionIntegralInit} this in turn implies that \begin{align*} {\mathcal{I}} (t) \! +\! {\mathcal{I}}^{\rm ic} (t) = \int_{-\infty}^t \! \! \! \! \text{d} \bar{t} \left [\Sigmacal^>(t,\bar{t}) \mathcal{G}^<(\bar{t},t) - \Sigmacal^<(t,\bar{t}) \mathcal{G}^>(\bar{t},t) \right ] \\ = \int_{-\infty}^0 \! \! \! \! \text{d} \bar{t} \left [\Sigmacal^>(0,\bar{t}) \mathcal{G}^<(\bar{t},0) - \Sigmacal^<(0,\bar{t}) \mathcal{G}^>(\bar{t},0) \right ] = {\mathcal{I}}^{\rm ic} (0). \end{align*} Therefore, a stationary $\rho^{\rm eq}$ satisfying \Eq{stateq} yields a stationary right-hand side in \Eq{rhoEquation} also for positive times, in the absence of external fields. This demonstrates the formal usefulness of \Eq{CollisionIntegralInit} in the GKBA context. In the next section we will discuss the practical implications. \subsection{Practical implementation of the IC integral with GKBA} \label{practsec} To make the NEGF--GKBA+IC scheme practical we have to perform the IC integral from minus infinity to zero analytically for arbitrary time-dependent drivings switched on at $t>0$. Let us insert the 2B self-energy of \Eq{2ndBornSigmaLesserGreater} into the expression for ${\mathcal{I}}^{\rm ic}$: \begin{equation} {\mathcal{I}}^{\rm ic} (t)={\mathcal{J}}^{{\rm ic}}(t) - \bar{{\mathcal{J}}}^{{\rm ic}}(t), \end{equation} where \begin{align} {\mathcal{J}}^{{\rm ic}}_{ik}(t)=&\sum_{mn pq rs j } v_{irpn}(t)\;w_{mqsj} \int _{-\infty} ^{0} \!\!\! \text{d} \bar{t} \nonumber \\ &\times \,\mathcal{G}_{nm}^>(t,\bar{t}) \mathcal{G}_{pq}^> (t,\bar{t}) \mathcal{G}_{sr}^< (\bar{t},t)\mathcal{G}_{jk}^< (\bar{t},t)e^{\eta \bar{t}}, \label{jicdef} \end{align} and $\bar{{\mathcal{J}}}^{{\rm ic}}_{ik}(t)$ is defined as in \Eq{jicdef} with the replacement $\mathcal{G}^{\lessgtr}\to \mathcal{G}^{\gtrless}$. We added the convergence factor $e^{\eta \bar{t}}$ [see \Eq{equivalentForm} for details]. In \Eq{jicdef} we took into account that the tensor $w$ is independent of time since we assumed that the Hamiltonian is constant at negative times (for otherwise the system would not be in equilibrium). 
The contributions ${\mathcal{J}}^{\rm ic}$ and $\bar{{\mathcal{J}}}^{{\rm ic}}$ have the same structure; we then discuss ${\mathcal{J}}^{\rm ic}$ only. Since $\bar{t} < 0 < t$, the GKBA of \Eq{GKBA} yields \begin{align} \begin{split} \mathcal{G}^>(t,\bar{t}) &= \mathcal{G}^R(t,\bar{t}) \bar{\rho}(\bar{t}), \\ \mathcal{G}^<(\bar{t},t) &= \rho(\bar{t}) \mathcal{G}^A(\bar{t},t). \end{split} \label{GKBA2} \end{align} Furthermore, the retarded/advanced Green's functions in the HF approximation, \Eq{Gret}, satisfies the group property \begin{align} \begin{split} &\mathcal{G}^R(t,\bar{t}) = i \mathcal{G}^R(t,0) \mathcal{G}^R(0,\bar{t}), \\ &\mathcal{G}^A(t,\bar{t}) = -i \mathcal{G}^A(\bar{t},0) \mathcal{G}^A(0,t). \end{split} \end{align} Therefore, we can rewrite the lesser and greater Green's functions in \Eq{GKBA2} as \begin{align} \begin{split} \mathcal{G}^> (t,\bar{t}) &= i\mathcal{G}^R(t,0) \mathcal{G}^>(0,\bar{t}), \\ \mathcal{G}^<(\bar{t},t) &= -i\mathcal{G}^<(\bar{t},0) \mathcal{G}^A(0,t). \end{split} \label{GKBAapprox} \end{align} As we shall see below, Eqs.~\eqref{GKBAapprox} allow for isolating the $t$-dependence in ${\mathcal{J}}^{\rm ic}(t)$ as well as for performing the integral over $\bar{t}$ analytically. To ease the notation we define the time-dependent tensor \begin{equation} \tilde{v} _{irpn} (t) \equiv \sum _{\tilde{n} \tilde{p} \tilde{r}}\, v_{i \tilde{r} \tilde{p} \tilde{n}}(t)\, \mathcal{G}^R_{\tilde{n} n} (t,0) \mathcal{G}^R_{\tilde{p} p} (t,0) \mathcal{G}^A_{r \tilde{r}} (0,t). \label{vtilde} \end{equation} We also find it useful to define $\tilde{\Jcalh}^{\rm ic} = {\mathcal{J}}^{\rm ic}(t) \mathcal{G}^R(t,0)$ from which we can get back the original ${\mathcal{J}}^{\rm ic}(t)$ through ${\mathcal{J}}^{\rm ic}(t) = \tilde{\Jcalh}^{\rm ic}(t) \mathcal{G}^A (0,t)$ [we have used that $ \mathcal{G}^R (t,0) \mathcal{G}^A (0,t)= \oneh$]. Inserting \Eq{GKBAapprox} into \Eq{jicdef} and taking into account the above definitions we have \begin{align} \tilde{\Jcalh}^{\rm ic}_{ik} (t) &= \sum_{mn pq rs j} \tilde{v}_{irpn}(t) w_{mqsj} \nonumber \\ \times &\int _{-\infty}^0 \! \! \! \! \! \text{d} \bar{t} \ \mathcal{G}^>_{nm}(0,\bar{t}) \mathcal{G}^>_{pq}(0,\bar{t}) \mathcal{G}^<_{sr}(\bar{t},0) \mathcal{G}^<_{j k}(\bar{t},0)e^{\eta \bar{t}}\!. \quad\label{TimeIntegrationSeparated} \end{align} As anticipated the $t-$dependence has been isolated since it is now contained only in the tensor $\tilde{v}$. To perform the integral over $\bar{t}$ we observe that $h_{\rm{HF}}(\bar{t})= h_{\rm{HF}}^{\rm eq}$ for all $\bar{t}<0$ and therefore \begin{equation} \mathcal{G}^R(0,\bar{t}) =[\mathcal{G}^A(\bar{t},0)]^{\dag}= -i e^{i h_{\rm{HF}}^{\rm eq} \bar{t}} . \label{graeq} \end{equation} Let us work in the eigenbasis of $h_{\rm{HF}}^{\rm eq}$. In general, this is {\em not} the basis resulting from a pure HF calculation since $\rho^{\rm eq}$ and $h_{\rm{HF}}^{\rm eq}$ do not commute in the correlated case, see again \Eq{stateq}. Denoting by $\epsilon_n$ the $n$-th eigenvalue of $h_{\rm{HF}}^{\rm eq}$, from \Eq{GKBA2} we have \begin{align} \begin{split} &\mathcal{G}^>_{nm} (0,\bar{t}) = -ie^{i \epsilon_n \bar{t}} \bar{\rho}^{\rm eq}_{nm} ,\\ &\mathcal{G}^<_{nm} (\bar{t},0) = i \rho^{\rm eq}_{nm} e^{-i \epsilon_m \bar{t}}. 
\end{split}\label{GlesserGreaterSmallT} \end{align} Inserting these expressions into \Eq{TimeIntegrationSeparated} and manipulating $\bar{{\mathcal{J}}}^{{\rm ic}}(t)$ in a similar way we eventually obtain \begin{equation} {\mathcal{I}}^{\rm ic}(t) = \tilde{{\mathcal{I}}}^{\rm ic}(t) \mathcal{G}^A (0,t), \label{Icalh} \end{equation} with \begin{equation} \tilde{{\mathcal{I}}}^{\rm ic}_{ik} (t) = i \sum_{n p r} \frac{\tilde{v}_{irpn}(t) \tilde{w}_{nprk}}{\epsilon_r + \epsilon_k - \epsilon_n - \epsilon_p + i \eta}, \label{Icalht} \end{equation} with the tensor $\tilde{w}$ defined according to \begin{equation} \tilde{w}_{nprk} \! \equiv \! \smashoperator{\sum_{mqsj}} w_{mqsj} \left (\bar{\rho}^{\rm eq}_{nm} \bar{\rho}^{\rm eq}_{pq} \rho^{\rm eq}_{sr} \rho^{\rm eq}_{jk} - \rho^{\rm eq}_{nm} \rho^{\rm eq}_{pq} \bar{\rho}^{\rm eq}_{sr} \bar{\rho}^{\rm eq}_{jk} \right). \label{wtilde} \end{equation} A few remarks are in order: \\ $(i)$ Equations~(\ref{Icalh},\ref{Icalht}) together with the definitions in Eqs.~(\ref{vtilde},\ref{wtilde}) allow for including initial correlations in the NEGF--GKBA scheme. The resulting NEGF--GKBA+IC scheme is the main result of this work and consists in solving \Eq{rhoEquation} with a nonvanishing collision integral and IC integral. The latter is a functional of the initial correlated equilibrium density matrix $\rho^{\rm eq}$ and of its time-dependent value $\rho(t)$ (through the retarded/advanced Green's functions). \\ $(ii)$ The Coulomb tensor $v$ and hence $w$ are written in the eigenbasis of $h_{\rm{HF}}^{\rm eq}$. Thus, interactions that are sparse in some basis, such as the Hubbard interaction in the site basis, do not necessarily yield a sparse tensor $v$ in the eigenbasis of $h_{\rm{HF}}^{\rm eq}$. \\ $(iii)$ In the noncorrelated case $\rho^{\rm eq}$ is diagonal and it is easy to show that the tensor $\tilde{w}$ is identically zero for $\epsilon_r + \epsilon_k - \epsilon_n - \epsilon_p=0$. For a general correlated density matrix $\tilde{w}_{nprk}$ vanishes whenever $r=n$ and $k=p$ or $r=p$ and $k=n$. We assume the same behavior even for accidental degeneracies and restrict the summation in \Eq{Icalht} to include only those indices for which the denominator is non-vanishing. Thus, we can safely set $\eta=0$. \\ $(iv)$ The extra computational effort for the implementation of the IC integral is minimal. The calculation of $\tilde{w}_{nprk}$ has to be done only once and the summation can be performed efficiently in sequence, scaling at most like $N_b^5$ where $N_b$ is the number of basis functions. The same efficient summation can be used to calculate $\tilde{v}_{irpn}(t)$ in \Eq{vtilde}, although in this case the summation has to be performed for every time step. Having $\tilde{w}$ and $\tilde{v}(t)$ we calculate $\tilde{{\mathcal{I}}}^{\rm ic}(t)$ from \Eq{Icalht}, another operation that scales like $N_b^5$. The scaling with the fifth power of $N_{b}$ is the same as that of the summation involved in the 2B self-energy of \Eq{2ndBornSigmaLesserGreater}. Thus, ${\mathcal{I}}(t)$ and ${\mathcal{I}}^{{\rm ic}}(t)$ scale in the same way with the number of basis functions. However, the IC integral does not scale with the number of time steps $N_{t}$ (no time integration) whereas the collision integral scales linearly with $N_{t}$ (integration from time step 0 to time step $N_{t}$). Consequently, the inclusion of initial correlations via ${\mathcal{I}}^{\rm ic}(t)$ adds a negligible computational cost to standard GKBA simulations.
Furthermore, the calculation of ${\mathcal{I}}^{\rm ic}(t)$ is completely independent of ${\mathcal{I}}$ and can be done separately; hence no internal modifications need to be made to an existing GKBA code in order to incorporate initial correlations. \\ ($v$) In Appendix~\ref{appendixA} we show that the above conclusions remain intact when using a given dynamically screened interaction $W(t-t')$, as that of Ref.~\cite{Pal2009,Pal2011}, in place of the bare time-local interaction $v$. \section{The equilibrium correlated density matrix} \label{eqmethods} In the NEGF--GKBA+IC scheme the initial and correlated density matrix $\rho^{\rm eq}$ satisfies \Eq{stateq}, and $\rho(t)=\rho^{\rm eq}$ is a solution of the equation of motion (\ref{rhoEquation}) in the absence of external fields. A scheme to obtain $\rho^{\rm eq}$ based on solving the equilibrium KBE for the lesser Green's function using the GKBA for the collision integral has recently been proposed in Ref.~\cite{Hopjan2018}. In the following we discuss two alternative methods. The first method consists in solving \Eq{stateq} self-consistently. This equation, however, admits infinitely many solutions since the diagonals of the left- and right-hand sides vanish in any real basis for Hamiltonians invariant under time-reversal. In fact, \Eq{stateq} is not a variational equation, rather it is a stationary equation, i.e., it stems from setting $\partial_{t}\rho=0$. The possible solutions therefore correspond to the infinitely many stationary density matrices of the system. A unique solution can be found by supplementing \Eq{stateq} with the value of the diagonal occupations $\rho_{nn}=\{f_{n}\}$ in some basis. To illustrate the self-consistent procedure let us first discuss the noncorrelated case, i.e., ${\mathcal{I}}^{{\rm ic}}=0$. Then, \Eq{stateq} tells us that $\rho^{\rm eq}$ is diagonal in the eigenbasis of $h_{\rm HF}$. We then diagonalize the noninteracting Hamiltonian $h$, find the eigenvectors $\varphi^{(0)}_{n}$, and construct $\rho^{(0)}_{nm}=\delta_{nm}f_{n}$ in the basis of these eigenvectors. In the $(i+1)$-th iteration step we use $\rho^{(i)}$ to calculate $h_{\rm{HF}}^{(i)}=h_{\rm{HF}}[\rho^{(i)}]$, find the eigenvectors $\varphi^{(i+1)}_{n}$ and construct $\rho^{(i+1)}_{nm}=\delta_{nm}f_{n}$ in the $(i+1)$-basis. At convergence we have the HF basis with HF occupations $\{f_{n}\}$. In particular, if $f_{n}=1$ for $n\leq N_{\rm el}$ and zero otherwise the procedure converges to the HF ground state with $2N_{\rm el}$ electrons. In the correlated case the procedure is identical but in the $(i+1)$-th iteration step $\rho^{(i+1)}_{nm}$ is not diagonal. In the eigenbasis $\varphi^{(i+1)}_{n}$ of $h_{\rm{HF}}^{(i)}=h_{\rm{HF}}[\rho^{(i)}]$ with eigenvalues $\epsilon^{(i+1)}_{n}$ we have for $n\neq m$ \begin{equation} \rho^{(i+1)}_{nm}=i\;\frac{{\mathcal{I}}^{{\rm ic}}_{nm}(0)+{\mathcal{I}}^{{\rm ic}*}_{mn}(0)} {\epsilon^{(i+1)}_{n}-\epsilon^{(i+1)}_{m}}. \label{offdiagrho} \end{equation} As already observed this result does not allow us to update the diagonal elements. We could either supplement \Eq{offdiagrho} with $\rho^{(i+1)}_{nn}=f_{n}$ for some reasonable set of occupations or take advantage of a self-consistent Matsubara Green's function calculation providing $\rho_{pq}=\delta_{pq}f_{q}$ in the natural orbital basis $\psi_{q}$ and supplement \Eq{offdiagrho} with \begin{equation} \rho^{(i+1)}_{nn}=\sum_{q}f_{q}|\langle\psi_{q}|\varphi^{(i+1)}_{n}\rangle|^{2}.
\end{equation} Independently of the prescription to fix the diagonal elements $\rho^{(i+1)}_{nn}$, at convergence $\rho^{\rm eq}$ satisfies \Eq{stateq}. The second method is instead borrowed from standard NEGF--GKBA simulations. We start from a noncorrelated density matrix at time $t=0$ and evolve the system with no external fields in the presence of a slowly increasing interaction $v(t)$ having the property that $v(t<0)=0$ and $v(t>T_{\rm ic})=v$. The time $T_{\rm ic}$ is the IC build-up time which should be chosen large enough for $\rho(t)=\rho(T_{\rm ic})$ to be sufficiently stationary when $t$ is larger than $T_{\rm ic}$. Taking advantage of the fact that $v(t)=0$ for $t\leq 0$, the IC integral vanishes at all times $t$ since $\Sigmacal^\lessgtr (t,\bar{t})=0$ for $\bar{t}\leq0$, as can be seen from \Eq{2ndBornSigmaLesserGreater} and \Eq{CollisionIntegralInit}. At the steady state $\rho(T_{\rm ic})=\rho^{\rm eq}$ satisfies \Eq{stateq}. We emphasize again that the number of time steps for the IC build-up does not affect the computational cost of the subsequent physically relevant time propagation with $\rho(0)=\rho^{\rm eq}$ as initial state. We also observe that this second method is limited to systems at zero temperature. In fact, due to correlation-induced level crossings and/or splittings of degenerate many-body states, the finite-temperature noninteracting density matrix does not, in general, evolve into the finite-temperature interacting one. \section{Example of GKBA with initial correlations} In this section we provide numerical evidence that our procedure works and is efficient. As a non-trivial example, we consider the donor-acceptor dyad used in Ref.~\cite{Latini2014} as a molecular junction to address the ultrafast charge dynamics at the donor-acceptor interface. The system is modelled by a two-level donor, the levels being the HOMO ($H$) and LUMO ($L$), and a linear chain of $N_{a}$ acceptor sites labelled by the site index $a$. The Hamiltonian reads \begin{align*} \begin{split} \hat{H} &= \epsilon_A\sum_{a=1}^{N_{a}}\hat{n}_a +T_{DA}\sum_\sigma \left ( \hat{c}^\dagger_{L\sigma} \hat{c}_{1\sigma} + {\rm H.c.}\right) \\ &+\sum_{i=H,L}\epsilon_i \hat{n}_i +T_A \sum_{\sigma,a=1}^{N_{a}-1} \left ( \hat{c}^\dagger_{a\sigma}\hat{c}_{a+1,\sigma} + {\rm H.c.} \right) \\ &+ U_{DA}(t) ( \hat{n}_H \!+\! \hat{n}_L \!-\! 2) \sum_{a=1}^{N_{a}} \frac{\hat{n}_a- 1}{a}, \end{split} \end{align*} where ${\rm H.c.}$ denotes the Hermitian conjugate. We define $\hat{n}_i = \sum_\sigma \hat{n}_{i\sigma}$ as the occupation of level $i=H,L$ with energy $\epsilon_i$, and likewise for the occupation of the acceptor sites. The system is isolated and the dimension of the single-particle basis is $N_b=2+N_{a}$. The LUMO is not coupled to the HOMO but to the first site of the acceptor chain with tunneling amplitude $T_{DA}$. The tunneling amplitude between two nearest neighbour acceptor sites is $T_{A}$. In accordance with Ref.~\cite{Latini2014} we set the level energies $\epsilon_H = -2.92$, $\epsilon_L = -0.92$ and $\epsilon_A = -2.08$, and the tunneling amplitudes $T_{DA} = -0.3$ and $T_A = -0.2$ (all quantities are in atomic units). The donor-acceptor dyad is half-filled with an equal number of up and down electrons. The electrons interact with a density-density type of interaction, and we set the interaction strength $U_{DA}(t) = U_{DA}=0.5$ for positive times.
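For orientation, the noninteracting single-particle part of the Hamiltonian above is just a small $(2+N_a)\times(2+N_a)$ matrix. A minimal Python sketch that builds it with the parameters quoted above and constructs the noncorrelated half-filled density matrix used as the starting point of the self-consistent procedure of Section~\ref{eqmethods} could read as follows; the spin index is omitted and all variable names are illustrative.
\begin{verbatim}
# Single-particle Hamiltonian of the donor-acceptor dyad (spin omitted).
# Basis ordering: [HOMO, LUMO, acceptor site 1, ..., acceptor site Na].
import numpy as np

Na = 4
eps_H, eps_L, eps_A = -2.92, -0.92, -2.08
T_DA, T_A = -0.3, -0.2

Nb = 2 + Na
h = np.zeros((Nb, Nb))
h[0, 0], h[1, 1] = eps_H, eps_L
for a in range(Na):
    h[2 + a, 2 + a] = eps_A
h[1, 2] = h[2, 1] = T_DA                      # LUMO - first acceptor site
for a in range(Na - 1):
    h[2 + a, 3 + a] = h[3 + a, 2 + a] = T_A   # acceptor chain hopping

# Noncorrelated starting density matrix at half filling:
# occupy the lowest Nb/2 eigenstates of h (per spin).
eps, phi = np.linalg.eigh(h)
Nel = Nb // 2
rho0 = phi[:, :Nel] @ phi[:, :Nel].conj().T
print(np.trace(rho0))   # = Nel electrons per spin
\end{verbatim}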
As the time-dependent perturbation we choose \begin{equation} \hat{H}_{\text{ext}} (t) = f(t)\sum_{\sigma} \left ( D_{LH} e^{i \Omega t} \hat{c}^\dagger_{H\sigma} \hat{c}_{L\sigma} + {\rm H.c.} \right ) \label{ExtPerturbation} \end{equation} describing the coupling between a monochromatic electric field of amplitude $f$ and frequency $\Omega$, and the HOMO-LUMO dipole moment $D_{LH}$. We consider a resonant frequency $\Omega = \epsilon_L - \epsilon_H = 2$ and set the value $D_{LH} = 0.3$. The electric field is very strong, $f=1$, and it is active from time $t=0$ until time $t= \frac{\pi}{4 D_{LH}}\simeq 2.6$. As we shall see, the external driving transfers one unit of electric charge from the initially filled HOMO to the initially empty LUMO. In all simulations below we have considered the number of acceptor sites $N_{a}=4$. \subsection{Simulations without external field} We first show calculations without external fields to illustrate that the system is stationary with the inclusion of the IC integral. We use the adiabatic switching method to obtain the initially correlated density matrix $\rho^{\rm eq}$, see Section~\ref{eqmethods}. The switching protocol was chosen to be \begin{equation} U_{DA}(t) = U_{DA} \times\left\{ \begin{array}{ll} \sin^2 \left ( \frac{\pi}{2} \frac{t}{T_{\rm ic}} \right ) & t<T_{\rm ic} \\ 1 & t\geq T_{\rm ic} \end{array} \right. \end{equation} We have used the CHEERS code~\cite{CHEERS} with time step $\Delta t=0.005$ to perform three separate calculations: (a) A calculation with ${\mathcal{I}}^{\rm ic}(t)=0$ that starts from $t=0$ with the noncorrelated HF density matrix and adiabatically switches on the interaction with $T_{\rm ic}=100$ (for this calculation we have shifted the time axis to set the time origin at $T_{\rm ic}$); (b) A NEGF--GKBA+IC calculation with the IC integral evaluated as described in Section~\ref{practsec} that starts from $t=0$ using $\rho(t=0) = \rho^{\rm eq}$; (c) A calculation with ${\mathcal{I}}^{\rm ic}(t)=0$ that starts from $t=0$ using $\rho(t=0) = \rho^{\rm eq}$. We recall that $\rho^{\rm eq} = \rho(T_{\rm ic})$ and hence calculations (a) and (b) are expected to coincide for large enough $T_{\rm ic}$. We also stress that the computational time for calculations (b) and (c) is practically equal. \begin{figure*}[ht] \begin{center} \includegraphics[width=0.78\textwidth]{noFieldt1000new} \end{center} \caption{LUMO occupation, without external fields, for the three types of calculations described in the main text. Total number of time steps $N_t=2 \cdot 10^5$ and time step $\Delta t=0.005$. Left panel: evolution using $T_{\rm ic}=100$ up to $t=100$. Top right panel: long time behavior for $T_{\rm ic} = 100$. Bottom right panel: long time behavior for $T_{\rm ic} = 1000$. In the right panels we do not show the curve corresponding to calculation (c) [${\mathcal{I}}^{\rm ic}(t)=0$, see main text] since its oscillations are too large. \label{noField1000}} \end{figure*} In \Fig{noField1000} we show the evolution of the LUMO occupation $n_L = \rho_{LL}(t)$ up to $t=1000$. From the left panel we conclude that the adiabatic evolution, calculation (a), yields a LUMO occupation that remains stationary for $t>0$, except for small oscillations due to the finiteness of $T_{\rm ic}$. The same quantity for calculation (b), which includes ${\mathcal{I}}^{\rm ic}$, is indeed stationary, even for very long propagation times.
Calculation (c), where ${\mathcal{I}}^{\rm ic}$ is artificially set to zero, does instead yield a nonstationary $\rho(t)$, as expected from the discussion of the previous Section. For long times, the LUMO occupation for both calculations (a) and (b) (top right panel) shows small oscillations due to the finite adiabatic switching time. Increasing the switching time to $T_{\rm ic}=1000$, the amplitude of the oscillations decreases for both calculations (bottom right panel in \Fig{noField1000}). Perhaps remarkably, the correlated density matrix $\rho^{\rm eq}$ resulting from the adiabatic switching with $T_{\rm ic}=100$ yields a reasonably stationary $\rho(t)$ in NEGF--GKBA+IC [certainly less oscillatory than that of calculation (a)], indicating that the NEGF--GKBA+IC equation is numerically stable. \subsection{Simulations with external field} We now show that the off-diagonal elements of the density matrix are also well reproduced in NEGF--GKBA+IC. We perform the three types of calculations of the previous section in the presence of the external driving in \Eq{ExtPerturbation}, and use a very long adiabatic switch-on time $T_{\rm ic}=1000$ to converge the calculations. The quantities chosen to illustrate the performance of the NEGF--GKBA+IC scheme are the LUMO density, the current $J(t) = 2 |T_{DA}| \Im [ \rho_{L1}(t)]$ flowing through the bond between the LUMO and the first acceptor site, and the real part of the off-diagonal HOMO-LUMO matrix element of $\rho(t)$. The results are shown in \Fig{all} up to $t = 1000$. \begin{figure*} \begin{center} \includegraphics[width=0.78\textwidth]{pi4T1000} \end{center} \caption{LUMO occupation (top panels), current between LUMO and the first acceptor site (middle panels) and real part of $\rho_{HL}$ (bottom panels) in the presence of the external driving in \Eq{ExtPerturbation} for the three types of calculations described in the main text. Total number of time steps $N_t=2 \cdot 10^5$ and time step $\Delta t=0.005$. The quantities are shown in the time range $(0,50)$ (left) and $(950,1000)$ (right). \label{all}} \end{figure*} As anticipated the NEGF--GKBA+IC scheme, calculation (b), correctly reproduces the outcome of standard NEGF--GKBA with an adiabatically switched-on interaction, calculation (a). The agreement is excellent all the way to the end of the simulation time. Neglecting the IC integral and starting from the correlated density matrix $\rho^{{\rm eq}}$, calculation (c), introduces an error that becomes more severe as the time increases. The general trend is that all quantities can be well-reproduced for short times even without properly accounting for initial correlations, but eventually the agreement tends to deteriorate. \section{Conclusions} Using the NEGF--GKBA+IC scheme we have shown how to separate the calculation of the correlated density matrix from that of the time-dependent responses. By generalizing the fluctuation--dissipation theorem for the Green's function and self-energy we have derived an equivalent expression of the IC integral suited to be evaluated using the GKBA. With the addition of this IC integral it is possible to use correlated states as initial states, thus removing the bottleneck of a preliminary adiabatic switching. For the most common approximations the computational effort of our method scales favorably and, most importantly, does not slow down an ordinary NEGF--GKBA implementation. Furthermore, the scheme can easily be implemented in any existing GKBA code without internal modifications.
The NEGF--GKBA+IC equation widens the class of nonequilibrium phenomena considered so far, allowing for larger systems and/or longer time propagations than was previously feasible. We also emphasize that the proposed scheme is compatible with any technique to obtain the initially correlated density matrix as it does not rely on the adiabatic switching procedure. In fact, the NEGF--GKBA+IC equation is also suitable to study systems at finite temperature (the adiabatic switching procedure is consistent only at zero temperature). An interesting future prospect is the implementation of many-body approximations to the correlation self-energy that go beyond the ones currently used within the GKBA. We derived a feasible form for the IC integral in the 2B approximation, but the fundamental idea is completely general. Indeed, in Appendix~\ref{appendixA} we provide an expression of the IC integral for the GW$^{{\rm eq}}$ approximation, where the dynamically screened interaction is taken from an equilibrium calculation. For other commonly used many-body approximations, like the full GW and T-matrix approximation, it is first necessary to find a GKBA-like form of the screened interaction $W$ and T-matrix $T$ for otherwise the favourable quadratic scaling with the number of time steps is lost. Perhaps a more immediate direction is the application of the NEGF--GKBA+IC scheme to open systems. This would allow for more efficiently studying, for example, transient quantum transport or photoionization in molecules. \begin{acknowledgments} D.K. acknowledges the Academy of Finland for funding under Project No. 308697. G.S. and E.P. acknowledge EC funding through the RISE Co-ExAN (Grant No. GA644076). E.P. also acknowledges funding from the European Union project MaX Materials design at the eXascale H2020-EINFRA-2015-1, Grant Agreement No. 676598 and Nanoscience Foundries and Fine Analysis-Europe H2020-INFRAIA-2014-2015, Grant Agreement No. 654360. \end{acknowledgments}
{ "attr-fineweb-edu": 1.527344, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUdQ3xK2li-LM1RrEV
\section{Supplemental Material for \\[0.5em] Dynamic properties of collective excitations in twisted bilayer Graphene } \subsection{Section I: Momentum space QMC methodology} Following the description in Ref.~\cite{XuZhang2021}, in this section, we elucidate the momentum space quantum Monte Carlo method in detail. First, the partition function of the TBG Hamiltonian in Eq. (3) of the main text is given by: \begin{equation} \begin{aligned} Z&=\operatorname{Tr}\left[e^{-\beta H}\right]\\ &=\operatorname{Tr}\left[\left(e^{-\Delta \tau H} \right)^{L_{\tau}}\right]\\ &=\operatorname{Tr}\left[\prod_{\tau=1}^{L_\tau}e^{-\Delta \tau H_0}e^{-\Delta\tau H_{int}}\right]+O(\Delta \tau^2) \end{aligned} \end{equation} For the interaction part $H_{int}=\frac{1}{2 \Omega} \sum_{\mathbf{q}, \mathbf{G},|\mathbf{q}+\mathbf{G}| \neq 0} V(\mathbf{q}+\mathbf{G}) \delta \rho_{\mathbf{q}+\mathbf{G}} \delta \rho_{-\mathbf{q}-\mathbf{G}}$, we have \begin{equation} \sum_{\mathbf{q}, \mathbf{G},|\mathbf{q}+\mathbf{G}| \neq 0}\frac{1}{2\Omega} V(\mathbf{q}+\mathbf{G}) \delta \rho_{\mathbf{q}+\mathbf{G}} \delta \rho_{-\mathbf{q}-\mathbf{G}}=\sum_{|\mathbf{q}+\mathbf{G}| \neq 0} \frac{V(\mathbf{q}+\mathbf{G})}{4\Omega}\left[\left(\delta \rho_{-\mathbf{q}-\mathbf{G}}+\delta \rho_{\mathbf{q}+\mathbf{G}}\right)^{2}-\left(\delta \rho_{-\mathbf{q}-\mathbf{G}}-\delta \rho_{\mathbf{q}+\mathbf{G}}\right)^{2}\right] \end{equation} then \begin{equation} \begin{aligned} e^{-\Delta \tau \hat{H}_{int}}&= \prod_{|{\mathbf{q}}+{\mathbf{G}}| \neq 0} e^{-\Delta\tau \frac{V(\mathbf{q}+\mathbf{G})}{4\Omega}\left[\left(\delta \rho_{-\mathbf{q}-\mathbf{G}}+\delta \rho_{\mathbf{q}+\mathbf{G}}\right)^{2}-\left(\delta \rho_{-\mathbf{q}-\mathbf{G}}-\delta \rho_{\mathbf{q}+\mathbf{G}}\right)^{2}\right]}. \end{aligned} \end{equation} The discrete Hubbard-Stratonovich transformation~\cite{Assaad2008,YDLiao2021PRX,YDLiao2019PRL,XuZhang2021} reads: \begin{equation} e^{\alpha \hat{O}^{2}}=\frac{1}{4} \sum_{l=\pm 1,\pm 2} \gamma(l) e^{\sqrt{\alpha} \eta(l) \hat{O}}+O\left(\alpha^{4}\right) \label{eq:eq8} \end{equation} where $l=\pm 1,\pm 2$, and \begin{equation}\begin{aligned} \gamma(\pm 1)=1+\sqrt{6} / 3, & \qquad \gamma(\pm 2)=1-\sqrt{6} / 3 \\ \eta(\pm 1)=\pm \sqrt{2(3-\sqrt{6})}, &\qquad \eta(\pm 2)=\pm \sqrt{2(3+\sqrt{6})} \end{aligned} \end{equation} This can be seen from the following simple derivation. Assuming \begin{equation} \gamma(1)=\gamma(-1)=a, \quad \gamma(2)=\gamma(-2)=b, \quad \eta(1)=\sqrt{c}=-\eta(1), \quad \eta(2)=\sqrt{d}=-\eta(2) \end{equation} and Taylor expanding both sides of Eq.~\eqref{eq:eq8} to $O\left(\alpha^{4}\right)$ and comparing the coefficients, we obtain: \begin{equation} 1=\frac{1}{2}(a+b),\quad 1=\frac{1}{4}(a c+b d) ,\quad \frac{1}{2}=\frac{1}{48}\left(a c^{2}+b d^{2}\right),\quad \frac{1}{6}=\frac{1}{1440}\left(a c^{3}+b d^{3}\right) \end{equation} Solving these equations, we have: \begin{equation} \begin{aligned} &a=1+\sqrt{6} / 3, \quad b=1-\sqrt{6} / 3 \\ &c=2(3-\sqrt{6}), \quad d=2(3+\sqrt{6}) \end{aligned} \end{equation} as quoted above. For a fermion bilinear, i.e., a free fermion system, the partition function can be expressed as a determinant, \begin{equation} \operatorname{Tr}\left[e^{-\sum_{i, j} c_{i}^{\dagger} A_{i, j} c_{j}-\sum_{i, j} c_{i}^{\dagger} B_{i, j} c_{j}}\right]=\operatorname{Det}\left(1+e^{-\mathbf{A}} e^{-\mathbf{B}}\right).
\label{eq:eq13} \end{equation} Put Eqs.~\eqref{eq:eq8} and ~\eqref{eq:eq13} together, the partition function of our interacting TBG system can be expressed as: \begin{equation} \begin{aligned} Z&=\sum_{\left\{ l_{|{\mathbf{q}}+{\mathbf{G}}|,a,\tau}=\pm 1,\pm 2\right\} } \prod_{\tau=1}^{L_\tau} e^{-\Delta \tau H_0} \operatorname{Tr}_{c}\left[\prod_{|{\mathbf{q}}+{\mathbf{G}}|\neq 0} \frac{1}{16} \gamma\left(l_{|{\mathbf{q}}+{\mathbf{G}}|,1,\tau}\right)\gamma\left(l_{|{\mathbf{q}}+{\mathbf{G}}|,2,\tau}\right) e^{i \eta\left(l_{|\mathbf{q}|_{1}, t}\right) A_{\mathbf{q}}\left(\delta \rho_{-\mathbf{q}}+\delta \rho_{\mathbf{q}}\right)} e^{\eta\left(l_{|\mathbf{q}|_{2}, t}\right) A_{\mathbf{q}}\left(\delta \rho_{-\mathbf{q}}-\delta \rho_{\mathbf{q}}\right)} \right] \\ &\qquad \qquad + O(\Delta \tau {}^2) \end{aligned} \end{equation} where $A_{\mathbf{q}+\mathbf{G}}=\sqrt{\frac{\Delta \tau}{4} \frac{V(\mathbf{q}+\mathbf{G})}{\Omega}}$, as shown in the main text. The free of sign-problem and the Monte Carlo sampling scheme are presented in Ref.~\cite{XuZhang2021}. \subsection{Section II: Order Parameter} As discussed in the main text. For the correlation functions of VP order parameter, we define \begin{equation} \begin{aligned} S_{VP}(\boldsymbol{q}) & \equiv \frac{1}{N^{2}}\left\langle\mathcal{O}_{a}(-\boldsymbol{q}) \mathcal{O}_{a}(\boldsymbol{q})\right\rangle \\ &\mathcal{O}_{a}(\boldsymbol{q}) \equiv \sum_{\boldsymbol{k}} d_{\boldsymbol{k}+\boldsymbol{q}}^{\dagger} \tau_z \eta_0 d_{\boldsymbol{k}} \end{aligned} \end{equation} where $\eta_0$ is for band index and $\tau_z$ is for valley index. Then its QMC implementation reads as, \begin{equation} \begin{aligned} S_{VP}(q)=&\frac{1}{N^{2}} \sum_{k_{1},k_{2}}\sum_{n_{1},n_{2}}\sum_{\tau_1,\tau_2=\pm} \left(\tau_1 \tau_2\right) \left \langle d_{k_{1}, n_{1}, \tau_1}^{\dagger} d_{k_{1}+q, m_{1}, \tau_1} d_{k_{2}+q, n_{2}, \tau_2}^{\dagger} d_{k_{2}, m_{2}, \tau_2} \right\rangle\\ =&\frac{1}{N^{2}}\langle \left(\sum_{k_{1}}\left(\sum_{n_{1}} d_{k_{1}, n_{1}, \tau}^{\dagger} d_{k_{1}+q, n_{1}, \tau}-\tilde{d}_{k_{1}, n_{1}-\tau} \tilde{d}_{k_{1}+q, n_{1},-\tau}^{\dagger}\right)\right) \\ & \qquad \left.\cdot\left(\sum_{k_{2}}\left(\sum_{n_{2}} d_{k_{2}+q, n_{2}, \tau}^{\dagger} d_{k_{2}, n_{2}, \tau}- \tilde{d}_{k_{2}+q, n_{2}-\tau} \tilde{d}_{k_{2}, n_{2}-\tau}^{\dagger}\right)\right)\right\rangle \\ =&\frac{1}{N^{2}} \sum_{k_{1},k_{2}} \sum_{n_{1},n_{2}}\mathrm{Gc}_{n_1 n_1,\tau}(k_{1},k_{1}+q)\mathrm{Gc}_{n_2 m_2,\tau}(k_{2}+q,k_{2})\\ & \qquad \qquad + \mathrm{G}^{*}_{n_1 n_1,\tau}(k_{1},k_{1}+q)\mathrm{G}^{*}_{n_2 n_2,\tau}(k_{2}+q,k_{2})\\ & \qquad \qquad + \mathrm{Gc}_{n_1 n_2,\tau}(k_{1},k_{2})\mathrm{G}_{n_1 n_2,\tau}(k_{1}+q,k_{2}+q)\\ & \qquad \qquad + \mathrm{G}^{*}_{n_1 n_2,\tau}(k_{1},k_{2})\mathrm{Gc}^{*}_{n_1 n_2,\tau}(k_{1}+q,k_{2}+q)\\ & \qquad \qquad -\mathrm{Gc}_{n_1 n_1,\tau}(k_{1},k_{1}+q)\mathrm{G}^{*}_{n_2 n_2,\tau}(k_{2}+q,k_{2})\\ & \qquad \qquad -\mathrm{G}^{*}_{n_1 n_1,\tau}(k_{1},k_{1}+q)\mathrm{Gc}_{n_2 n_2,\tau}(k_{2}+q,k_{2}) \end{aligned} \end{equation} where $\tilde{d}_{\mathbf{k}, m,-\tau}=m * d_{\mathbf{k},-m,-\tau}^{\dagger}$ and $d^\dagger_{\mathbf{k}_1, m,-\tau} d_{\mathbf{k}_2, n,-\tau}=(mn) \tilde{d}_{\mathbf{k}_1, -m,-\tau} \tilde{d}^\dagger_{\mathbf{k}_2, -n,-\tau}=(mn)G_{-m,-n}^{*}(k_1,k_2)$, note we define the fermion Green's function as $\mathrm{G}_{ij}=\langle d^{\dagger}_i d_j \rangle$ and define $\mathrm{Gc}_{i j}=\delta_{i j}-\operatorname{G}_{j i}$. 
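Before turning to the IVC order parameter, we note that the coefficients $\gamma(l)$ and $\eta(l)$ of the four-valued discrete Hubbard-Stratonovich identity of Section I can be verified numerically for a scalar ``operator''; the residual should scale as $\alpha^{4}$. A short, purely illustrative Python sketch is:
\begin{verbatim}
# Check of the four-valued Hubbard-Stratonovich identity for a scalar o:
# exp(alpha*o^2) = (1/4) sum_l gamma(l) exp(sqrt(alpha)*eta(l)*o) + O(alpha^4)
import numpy as np

gam = {+1: 1 + np.sqrt(6)/3, -1: 1 + np.sqrt(6)/3,
       +2: 1 - np.sqrt(6)/3, -2: 1 - np.sqrt(6)/3}
eta = {+1: +np.sqrt(2*(3 - np.sqrt(6))), -1: -np.sqrt(2*(3 - np.sqrt(6))),
       +2: +np.sqrt(2*(3 + np.sqrt(6))), -2: -np.sqrt(2*(3 + np.sqrt(6)))}

def residual(alpha, o=1.0):
    rhs = 0.25 * sum(gam[l] * np.exp(np.sqrt(alpha) * eta[l] * o)
                     for l in (+1, -1, +2, -2))
    return abs(np.exp(alpha * o**2) - rhs)

for alpha in (0.1, 0.05, 0.02):
    print(alpha, residual(alpha), residual(alpha) / alpha**4)
# the ratio residual/alpha^4 stays roughly constant, confirming the O(alpha^4) error
\end{verbatim}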
For the correlation function of the IVC order parameter, we define \begin{equation} \begin{aligned} S_{IVC}(q) & \equiv \frac{1}{N^{2}}\left\langle\mathcal{O}_{a}(-\boldsymbol{q}) \mathcal{O}_{a}(\boldsymbol{q})\right\rangle \\ &\mathcal{O}_{a}(\boldsymbol{q}) \equiv \sum_{\boldsymbol{k}} d_{\boldsymbol{k}+\boldsymbol{q}}^{\dagger} \tau_x \eta_y d_{\boldsymbol{k}} \end{aligned} \end{equation} and its QMC implementation reads \begin{equation} \begin{aligned} S_{IVC}(q)=\frac{1}{N^{2}} & \sum_{k_{1},k_{2}}\sum_{n_{1},n_{2}}\sum_{\tau_1,\tau_2=\pm} \left(n_1 n_2\right) \left \langle d_{k_{1}, n_{1}, \tau_1}^{\dagger} d_{k_{1}+q, -n_1, -\tau_1} d_{k_{2}+q, n_{2}, \tau_2}^{\dagger} d_{k_{2}, -n_{2}, -\tau_2} \right\rangle\\ =\frac{1}{N^{2}} & \sum_{k_{1},k_{2}}\sum_{n_{1},n_{2}}\sum_{\tau=\pm} \left(n_1 n_2\right)\mathrm{Gc}_{n_1 ,-n_2,\tau}(k_{1},k_{2})\mathrm{G}_{-n_1, n_2,-\tau}(k_{1}+q,k_{2}+q)\\ =\frac{1}{N^{2}} & \sum_{k_{1},k_{2}}\sum_{n_{1},n_{2}} \mathrm{Gc}_{n_1,-n_2,\tau}(k_{1},k_{2})\mathrm{Gc}^{*}_{n_1, -n_2,\tau}(k_{1}+q,k_{2}+q)\\ &\qquad \qquad + \mathrm{G}^{*}_{n_1,-n_2,\tau}(k_{1},k_{2})\mathrm{G}_{n_1, -n_2,\tau}(k_{1}+q,k_{2}+q) \end{aligned} \end{equation} \subsection{Section III: Analytic continuation} From QMC simulations we only obtain the imaginary time or imaginary frequency Green's functions; we therefore further perform the stochastic analytic continuation (SAC) method~\cite{Sandvik1998,beach2004identifying,Sandvik2016,Olav2008,HShao2017,NSMa2018,zhou2020amplitude,GYSun2018,ZYan2021,hu2020evidence,li2020kosterlitz,jiang2020,XuZhang2021} to obtain the real frequency spectral function $A(k,\omega)$. Here we give a brief description of the scheme. Firstly, we define $e^{-\beta \Omega}=\operatorname{Tr}\left(e^{-\beta(H-\mu N)}\right)$ and $K \equiv H-\mu N$. The imaginary time Green's function is: \begin{equation}\begin{aligned} G(\tau) &=\left\langle T_{\tau}d(\tau) d^\dagger(0) \right\rangle\\ &=\operatorname{Tr}\left[e^{-\beta(K-\Omega)} T_{\tau} e^{\tau K} d \,e^{-\tau K} d^{\dagger}\right] \end{aligned}\end{equation} where $K|m\rangle=E_{m}|m\rangle$. Then, in the Lehmann representation: \begin{equation}\begin{aligned} \tau>0: & G(\tau)=e^{\beta \Omega} \sum_{n, m}\left\langle n\left|e^{-\beta K} d(\tau)\right| m\right\rangle\left\langle m\left|d^{\dagger}(0)\right| n\right\rangle \\ & G(\tau)=e^{\beta \Omega} \sum_{n, m}|\langle n|d| m\rangle|^{2} e^{-\beta E_{n}} e^{\tau\left(E_{n}-E_{m}\right)} \end{aligned}\end{equation} In turn, the imaginary frequency Green's function is: \begin{equation}\begin{aligned} G\left(i \omega_{n}\right)&=\int_{0}^{\beta} d \tau e^{i \omega_{n}\tau} G(\tau) \\ &=-e^{\beta \Omega} \sum_{n, m}|\langle n|d| m\rangle|^{2} e^{-\beta E_{n}} \frac{\left. e^{\left(i \omega_n +E_n-E_m\right)\tau} \right|^{\beta}_{0}}{i\omega_n +E_n-E_m}\\ &=e^{\beta \Omega} \sum_{n, m}|\langle n|d| m\rangle|^{2} \frac{e^{-\beta E_n} \mp e^{-\beta E_m}}{i\omega_n +E_n-E_m} \label{Eq:A21} \end{aligned}\end{equation} here $\mp$ stands for bosons and fermions, respectively, and we used $e^{i \omega_n \beta}=\pm 1$.
Then we carry out the analytic continuation $i \omega_{n} \rightarrow \omega+i \delta$ and obtain the retarded real frequency Green's function $ G\left(i \omega_{n}\right) \rightarrow G^{ret}(\omega)$, where $G^{ret}\left( \omega \right)=\int_{-\infty}^{\infty} e^{i\omega t} G^{ret}\left(t\right) \mathrm{d}t$ and $G^{ret}\left(t-t^{\prime}\right)=-i \theta\left(t-t^{\prime}\right)\left\langle\left[d(t) d^{\dagger}\left(t^{\prime}\right)+d^{\dagger}\left(t^{\prime}\right) d(t)\right]\right\rangle$. The spectral function is obtained from the retarded Green's function: \begin{equation} \begin{aligned} A(k,\omega)&=-(1 / \pi) \operatorname{Im} G^{r e t}(k,\omega)\\ &=e^{\beta \Omega} \sum_{n, m}|\langle n|d| m\rangle|^{2} \left(e^{-\beta E_n} \mp e^{-\beta E_m}\right)\delta(\omega +E_n-E_m) \label{Eq:A22} \end{aligned} \end{equation} From Eqs.~\eqref{Eq:A21} and \eqref{Eq:A22} we get: \begin{equation} G(k,\tau)=\int_{-\infty}^{\infty} d \omega\left[\frac{e^{-\omega \tau}}{1\mp e^{-\beta \omega}}\right] A(k,\omega)\end{equation} Note again that $\mp$ refers to bosons and fermions, respectively. For the boson Green's function: \begin{equation} G(k,\tau)=\int_{0}^{+\infty} \mathrm{d} \omega \frac{e^{-\tau \omega}+e^{-(\beta-\tau) \omega}}{1-e^{- \beta \omega}} A(k,\omega). \label{eq:Aeq24} \end{equation} In spectroscopy measurements such as inelastic neutron scattering, the measured spectral function is $S(k, \omega)=\frac{1}{1-e^{-\beta \omega}} \operatorname{Im} \chi(k, \omega)$, where $\chi(k, \omega)$ is the dynamical spin susceptibility. We can see that $\operatorname{Im} \chi(k, \omega)$ is the spectral function $A(k,\omega)$ mentioned above. Now we discuss the details of stochastic analytic continuation. The idea is to give a very generic variational ansatz of the spectrum $A(k,\omega)$, and to obtain the corresponding Green's function $G(k,\tau)$ following Eq.~\eqref{eq:Aeq24}. This Green's function is then compared with the one obtained from QMC through the quantity $\chi^2_{F/B}$, defined as \begin{equation} \chi_{F}^{2}=\sum_{i j}\left(\bar{G}\left(\tau_{i}\right)-\int_{-\infty}^{\infty} d \omega\left[\frac{e^{-\omega \tau_i}}{1+e^{-\beta \omega}}\right] A(\omega)\right)\left(C^{-1}\right)_{i j}\left(\bar{G}\left(\tau_{j}\right)-\int_{-\infty}^{\infty} d \omega\left[\frac{e^{-\omega \tau_j}}{1+e^{-\beta \omega}}\right] A(\omega)\right)\end{equation} and \begin{equation} \chi_{B}^{2}=\sum_{i j}\left(\bar{G}\left(\tau_{i}\right)-\int_{0}^{\infty} d \omega\left[\frac{e^{-\omega \tau_i}+e^{-(\beta-\tau_i) \omega}}{1-e^{-\beta \omega}}\right] A(\omega)\right)\left(C^{-1}\right)_{i j}\left(\bar{G}\left(\tau_{j}\right)-\int_{0}^{\infty} d \omega\left[\frac{e^{-\omega \tau_j} + e^{-(\beta-\tau_j) \omega} }{1-e^{-\beta \omega}}\right] A(\omega)\right)\end{equation} where \begin{equation} C_{i j}=\frac{1}{N_{b}\left(N_{b}-1\right)} \sum_{b=1}^{N_b}\left(G^{b}\left(\tau_{i}\right)-\bar{G}\left(\tau_{i}\right)\right)\left(G^{b}\left(\tau_{j}\right)-\bar{G}\left(\tau_{j}\right)\right) \end{equation} and $\bar{G}\left(\tau_{i}\right)$ is the Monte Carlo average of the Green's functions over $N_b$ bins. Then we perform the Monte Carlo sampling~\cite{Sandvik2016,Olav2008} again to optimize the spectral function. We assume that the spectral function has the following form: $A(\omega)=\sum_{i=1}^{N_{\omega}} A_{i} \delta\left(\omega-\omega_{i}\right)$ and the weight of such a Monte Carlo configuration is: $ W \sim \exp \left(-\frac{\chi^{2}}{2 \,\Theta_T}\right)$.
Here $\Theta_T$ plays the role of a temperature. We then compute the average $\langle\chi^{2}\rangle$ at different $\Theta_T$ via a simulated annealing process; at the end of it, we choose the converged $\Theta_T$ such that \begin{equation} \langle\chi^{2}\rangle=\chi_{\min }^{2}+a \sqrt{\chi_{\min }^{2}}. \end{equation} Usually we set $a=2$, and the ensemble average of the spectra at such an optimized $\Theta_T$ is the final one presented in the main text. We note that the QMC-SAC scheme for obtaining dynamical spectral functions has been developed over the past decades and has been verified in many works on quantum many-body systems, with direct comparisons against the Bethe ansatz, exact diagonalization, field theoretical analysis and spectroscopy experiments, such as the works on the 1D Heisenberg chain~\cite{Sandvik2016}, the 2D Heisenberg model compared with neutron scattering and field theoretical analysis~\cite{HShao2017,zhou2020amplitude}, the $Z_2$ quantum spin liquid model with fractionalized spectra~\cite{GYSun2018,ZYan2021}, the quantum Ising model with direct comparison with neutron scattering and NMR experiments~\cite{hu2020evidence,li2020kosterlitz}, the non-Fermi-liquid and metallic quantum critical point~\cite{jiang2020,ChuangChen2021} and the TBG system in the flat-band limit~\cite{XuZhang2021}. \subsection{Section IV: Exact Analytic Charge $\pm 1$ Excitations} Here we follow Ref.~\cite{bernevig2020tbg5}. For $\nu=0$, the ground state $|\Psi\rangle$ satisfies: \begin{equation} O_{\mathbf{q}+\mathbf{G}} |\Psi\rangle=0 \end{equation} then: \begin{equation} \left[H_{int}, d_{{\mathbf{k}}, n, \eta,ss}^{\dagger}\right]|\Psi\rangle=\frac{1}{2 \Omega_{\mathrm{tot}}} \sum_{m_2} R_{m_2 n}^{\eta}({\mathbf{k}}) d_{{\mathbf{k}}, m_{2}, \eta,ss}^{\dagger}|\Psi\rangle \end{equation} where \begin{equation} R_{m_1 n_1}^{\eta}({\mathbf{k}})=\sum_{ m,{\mathbf{q}},{\mathbf{G}},|{\mathbf{q}}+{\mathbf{G}}|\neq 0}V(\mathbf{q}+\mathbf{G})\lambda^{*}_{m_1,m,\eta}({\mathbf{k}},{\mathbf{k}}+{\mathbf{q}}+{\mathbf{G}})\lambda_{n_1,m,\eta}({\mathbf{k}},{\mathbf{k}}+{\mathbf{q}}+{\mathbf{G}}) \ \end{equation} Diagonalizing $\frac{R_{m_1 n_1}^{\eta}({\mathbf{k}})}{2 \Omega}$ we obtain the charge $\pm 1$ excitations, plotted as the dashed lines in Fig.~\ref{fig:fig2} (c) and (d) of the main text for our model parameters. \end{widetext} \end{document}
{ "attr-fineweb-edu": 1.612305, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUdTA4dbghU_Qx69Oz
\section{Introduction} \par Recently a new time-evolution picture of the Dirac quantum mechanics was defined in charts with spatially flat Robertson-Walker metrics, under the name of Schr\" odinger picture \cite{B1}. Using the advantage offered by this picture, a new set of Dirac energy eigenspinors which behave as polarized plane waves was found in \cite{B2}. \par Also recently the Coulomb scattering on the de Sitter expanding universe was studied \cite{B4}, using the plane wave solutions derived in \cite{B3}. The main result in \cite{B4} was that the modulus of total momentum is not conserved but there is a tendency of helicity conservation. \par On the other hand, the expansion of the Universe is accelerating, and this could increase the interest in studying the scattering processes on de Sitter backgrounds. In the present paper we would like to analyse the Coulomb scattering using the method of \cite{B4} with the energy eigenspinors derived in \cite{B2}, pointing out a series of aspects in comparison with the study presented in \cite{B4}. We shall see that in this case the scattering has new important features since this time the energy is conserved. \par The paper is organized as follows. In section 2, we present a short review of the Schr\" odinger picture introduced in \cite{B1} and we write the form of the energy eigenspinors derived in \cite{B2}. In Section 3 we define the lowest order contribution for the scattering amplitude in the potential $A^{\hat{\mu}}$ in the new Schr\" odinger picture and then we calculate the scattering amplitude, showing that the total energy conservation holds in this case. Section 4 is dedicated to the problem of the cross section. Our conclusions are summarized in section 5, pointing out a series of aspects that remain to be clarified elsewhere. \par We use natural units throughout, i.e. $\hbar=c=1$. \section{Polarized plane wave in the Schr\" odinger picture} We start with the results of \cite{B1}, where it was shown that two time evolution pictures can be identified in the case of the Dirac theory on backgrounds with spatially flat Robertson-Walker metrics. The idea there was to define the natural picture as that in which the free Dirac equation is written directly as it results from its Lagrangean, in a diagonal gauge and Cartesian coordinates. In addition the Schr\" odinger picture was introduced in which the free Dirac equation is transformed such that its kinetic part takes the same form as in special relativity while the gravitational interaction is separated into a specific term. \par Let us take the local chart with Cartesian coordinates of a flat Robertson-Walker manifold, in which the line element reads: \begin{equation}\label{1} ds^{2}=dt^{2}-\alpha(t)^{2}d\vec{x}^{2}, \end{equation} where $\alpha$ is an arbitrary function. One knows that defining spinor fields on curved backgrounds requires the introduction of the tetrad fields $e_{\hat{\mu}}(x)$ and $\hat{e}^{\hat{\mu}}(x)$, fixing the local frames and corresponding coframes which are labelled by the local indices $\hat{\mu}, \hat{\nu}=0, 1, 2, 3$. The form of the line element allows one to choose the simple diagonal gauge where the tetrad fields have the non-vanishing components \cite{B2}, \cite{B3}: \begin{equation}\label{2} e^{0}_{\hat{0}}=1 \quad , e^{i}_{\hat{j}}=\frac{1}{\alpha(t)}\delta^{i}_{j}, \quad \hat{e}^{0}_{0}=1, \quad \hat{e}^{i}_{j}=\alpha(t)\delta^{i}_{j}.
\end{equation} The Dirac field $\psi$ of mass $m$ satisfies the free Dirac equation which can be easily written using the tetrad fields (\ref{2}) (see \cite{B2}). If $\psi(x)$ is the Dirac field in the natural picture, then the Dirac field of the Schr\" odinger picture, $\psi_{S}(x)$, can be obtained using the transformation $\psi_{S}(x)=W(x)\psi(x)$ produced by the operator of time dependent dilatations \cite{B1}, \begin{equation}\label{3} W(x)=\exp\left[-\ln(\alpha(t))(\vec{x}\cdot\vec{\partial})\right], \end{equation} which has the property \begin{equation}\label{4} W(x)^{+}=\sqrt{-g(t)}W(x)^{-1}\,. \end{equation} Using this operator, the Dirac equation of the Schr\" odinger picture was obtained in \cite{B2}, as well as the relativistic scalar product $\langle\psi_{S}, \psi^{'}_{S}\rangle=\int d^{3}x \bar{\psi}_{S}(x)\gamma^{0}\psi^{'}_{S}(x)$, which no longer depends on $\sqrt{-g(t)}$. \par Now, taking $\alpha(t)=e^{\omega t}$ in Eq. (\ref{1}), one obtains the de Sitter metric, which is the case of interest here. The form of the Dirac equation on de Sitter spacetime in the Schr\" odinger picture is given in \cite{B2} where a complete set of orthonormalized fundamental solutions was written down. These depend on the normalized Pauli spinors, $\xi_{\lambda}(\vec{n})$, of {\em helicity} $\lambda=\pm 1/2$ which satisfy \begin{equation}\label{5} (\vec{n}\cdot\vec{\sigma})\xi_{\lambda}(\vec{n})=2 \lambda \xi_{\lambda}(\vec{n}), \end{equation} where $\vec{\sigma}$ are the Pauli matrices while the momentum direction is given by $\vec{n}$ ($\vec{p}=p\vec{n}$). Then the fundamental spinor solutions of positive frequency with energy $E$, momentum direction $\vec{n}$ and helicity $\lambda$ obtained in \cite{B2} read \begin{equation}\label{6} U^{S}_{E, \vec{n}, \lambda}(t, \vec{x})=i\frac{ \omega e^{-iEt}}{(2\pi)^{3/2}\sqrt{2}}\int^{\infty}_{0}s ds\left (\begin{array}{c} \frac{1}{2}e^{\pi k/2}H^{(1)}_{\nu_{-}}(s)\xi_{\lambda}(\vec{n})\\ \lambda e^{-\pi k/2}H^{(1)}_{\nu_{+}}(s)\xi_{\lambda}(\vec{n}) \end{array}\right)e^{i\omega s\vec{n}\vec{x}-i\epsilon \ln s}\,. \end{equation} The notations used here are $\nu_{\pm}=\frac{1}{2}\pm ik$ with $k=m/\omega$, $s=p/\omega$ and $\epsilon=E/\omega$. The negative frequency modes can be obtained using the charge conjugation, $U^{S}_{E, \vec{n}, \lambda}(x)\rightarrow V^{S}_{E, \vec{n}, \lambda}(x)=i\gamma^{2}\gamma^{0}(\bar{U}^{S}_{E, \vec{n}, \lambda}(x))^{T}$, as in \cite{B2}, because the charge conjugation in a curved space is point independent \cite{B5}. However the negative frequency modes will be of no interest here. These spinors are normalized in the energy scale (in the generalized sense) with respect to the new relativistic scalar product defined in the Schr\" odinger picture \cite{B2}:
These spinors form a complete system of solutions: \begin{eqnarray} && \int_{0}^{\infty} dE \int_{S^{2}}d\Omega_{n} \sum_{\lambda}\left[U_{E, \vec{n}, \lambda}(t, \vec{x})U^{+}_{E, \vec{n}, \lambda}(t, \vec{x^{\prime}})\right.\nonumber\\ &&\hspace*{30mm}\left.+V_{E, \vec{n}, \lambda}(t, \vec{x})V^{+}_{E, \vec{n}, \lambda}(t, \vec{x^{\prime}})\right]= \delta^{3}(\vec{x}-\vec{x^{\prime}}).\label{8} \end{eqnarray} \section{The scattering amplitude} The solutions written in \cite{B2} will be the central piece of our calculations. In \cite{B4} it was pointed out that the necessary requirements for developing the scattering on the de Sitter background are the global hyperbolicity of the spacetime and the existence of a complete set of solutions of the free equation for the incident and scattered fields (Born approximation). Now, for defining the lowest order contribution to the scattering amplitude in the Schr\" odinger picture, let us recall the definition of this quantity from \cite{B4} in the natural picture: \begin{equation}\label{9} A_{i\rightarrow f}=-ie \int d^{4}x \left[-g(x)\right]^{1/2}\bar\psi_{f}(x)\gamma_{\mu}A^{\hat{\mu}}(x)\psi_{i}(x). \end{equation} This expression was obtained by analogy with Minkowski space \cite{B6,B9}, but can also be obtained from a reduction formalism on de Sitter spacetime \cite{B8}. Using now (\ref{4}) it is not hard to obtain the analogue of (\ref{9}) in the Schr\" odinger picture: \begin{equation}\label{10} A^{S}_{i\rightarrow f}=-ie \int d^{4}x \bar\psi_{Sf}(x)\gamma_{\mu}A^{\hat{\mu}}_{S}(x)\psi_{Si}(x), \end{equation} where $e$ is the unit charge of the field, $A^{\hat{\mu}}_{S}(x)$ is the potential in the Schr\" odinger picture, and the hatted indices label the components in local Minkowski frames. \par Our target is a fixed charge $Ze$ whose Coulomb potential on de Sitter spacetime in the natural picture \cite{B4} reads \begin{equation}\label{11} A^{\hat{0}}(x)=\frac{Ze}{|\vec{x}|} e^{-\omega t}, \end{equation} while in the new Schr\" odinger picture this becomes \begin{equation}\label{12} A^{\hat{0}}_{S}(x)=\frac{Ze}{|\vec{x}|}\,. \end{equation} \par Our aim is to calculate the amplitude of Coulomb scattering using the definition (\ref{10}) in which we replace our quantities of interest (\ref{6}) and (\ref{12}). We start with the waves freely propagating in the $in$ and $out$ sectors, $U^{S}_{E_{i}, \vec{n}, \lambda_{i}}(x)$ and $U^{S}_{E_{f}, \vec{n}, \lambda_{f}}(x)$, assuming that both of them are of positive frequency. If we replace the explicit form of the spinors and the Coulomb potential in (\ref{10}) we observe two remarkable properties. The first one is that we may split the four dimensional integral into a pure spatial integral and a temporal one. The second one is that these integrals have the same form as in Minkowski spacetime, i.e. \begin{eqnarray} \int d^{3}x \frac{e^{i(\vec{p_{i}}-\vec{p_{f}})\vec{x}}}{|\vec{x}|}&=&\frac{4 \pi}{|\vec{p_{f}}-\vec{p_{i}}|^{2}}\,, \nonumber\\ \frac{1}{2 \pi}\int^{\infty}_{-\infty} dt e^{i(E_{f}-E_{i})t}&=&\delta(E_{f}-E_{i})\,.\label{13} \end{eqnarray} Note that the limits of integration in (\ref{13}) for the time variable correspond to $t=\pm \infty$, assuming that the interaction extends into the past and future.
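The first integral in (\ref{13}) is understood in the distributional sense; it can be checked numerically by introducing a screening factor $e^{-\mu|\vec{x}|}$ and letting $\mu\to 0$, which gives $4\pi/(|\vec{p}_f-\vec{p}_i|^{2}+\mu^{2})$. A small Python sketch of this check is given below; the screening mass $\mu$ is an auxiliary regulator introduced only for the example and is not used in the text.
\begin{verbatim}
# Numerical check of  int d^3x e^{i q.x} e^{-mu|x|}/|x| = 4 pi/(q^2 + mu^2):
# after the angular integration only a 1D radial Fourier integral remains.
import numpy as np
from scipy.integrate import quad

def screened_coulomb_ft(q, mu):
    radial, _ = quad(lambda r: np.exp(-mu * r), 0.0, np.inf,
                     weight='sin', wvar=q)      # int_0^inf e^{-mu r} sin(q r) dr
    return 4.0 * np.pi / q * radial

q = 2.0
for mu in (1.0, 0.1, 0.01):
    print(mu, screened_coulomb_ft(q, mu), 4 * np.pi / (q**2 + mu**2))
\end{verbatim}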
\par In this case the integration over the variable $s=p/\omega$ is not quite simple but we can calculate our amplitude as \begin{eqnarray} && A^{S}_{i\rightarrow f}= \frac{-i\alpha Z \omega^{2}\delta(E_{f}-E_{i})}{8\pi|\vec{p_{f}}-\vec{p_{i}}|^{2}}\xi^{+}_{\lambda_{f}} \,(\vec{n})\xi_{\lambda_{i}}(\vec{n})\nonumber\\ &&\times\left[e^{\pi k}\int_0^{\infty} ds_{f} s^{1+iE_{f}/\omega}_{f}H^{(2)}_{\nu_{+}}(s_{f})\int_{0}^{\infty} ds_{i}s^{1-iE_{i}/\omega}_{i} H^{(1)}_{\nu_{-}}(s_{i})\right. \nonumber\\ &&\left. +sgn(\lambda_{f}\lambda_{i})e^{-\pi k}\int_0^{\infty} ds_{f}s^{1+iE_{f}/\omega}_{f}H^{(2)}_{\nu_{-}}(s_{f}) \int_{0}^{\infty} ds_{i}s^{1-iE_{i}/\omega}_{i}H^{(1)}_{\nu_{+}}(s_{i})\right]\label{14} \end{eqnarray} where $\alpha=e^{2}$. The evaluation of the integrals (\ref{14}) is given in Appendix A, the final result being expressed in terms of Euler gamma functions, \begin{eqnarray} A^{S}_{i\rightarrow f}&=& \frac{-i\alpha Z \omega^{2}\delta(E_{f}-E_{i})}{4\pi|\vec{p}_{f}-\vec{p}_{i}|^{2}}\, \xi^{+}_{\lambda_{f}}(\vec{n})\xi_{\lambda_{i}}(\vec{n})\nonumber\\ &&\times\left[ f_{k}(E_{f})f^{*}_{k}(E_{i})+sgn(\lambda_{f}\lambda_{i}) f_{-k}(E_{f})f^{*}_{-k}(E_{i})\right]\label{15} \end{eqnarray} where we introduced the following notations: \begin{eqnarray} && f_{k}(E)=e^{\pi k/2}\left[2^{iE/\omega}\frac{\Gamma(\frac{5}{4}+\frac{i k}{2}+\frac{i E}{2 \omega})}{\Gamma(\frac{1}{4}+\frac{i k}{2}-\frac{i E}{2 \omega})}-\frac{i2^{iE/\omega}}{\pi}\cos\left(\frac{\pi}{4}+\frac{i k \pi}{2}-\frac{i E \pi}{2 \omega}\right)\right.\nonumber\\ &&\left.\times \Gamma\left(\frac{5}{4}+\frac{i k}{2}+\frac{i E}{2 \omega}\right)\Gamma\left(\frac{3}{4}-\frac{i k}{2}+\frac{i E}{2 \omega}\right)\right],\label{16} \end{eqnarray} and $f_{-k}(E)$ is obtained when $k \rightarrow -k$ in (\ref{16}). Now let us take a look at our scattering amplitude (\ref{15}). We obtain that the energy is conserved in the scattering process as in the Minkowski case. This is however expected because the form of the external field (\ref{12}) allows us to consider that the scattering process takes place in a constant field. One knows that the energy of a system scattered on a constant field is conserved (but this does not mean that the momentum is conserved too), as we obtained here. It is also remarkable that we obtain the Rutherford denominator $|\vec{p_{f}}-\vec{p_{i}}|^{2}$ as in Minkowski scattering. In our previous work \cite{B4} the Coulomb scattering was analyzed with spinors having given momentum and helicity. The surprising result was that there exists a nonvanishing probability for a scattering process where the law of conservation of total momentum is lost. Here we obtain the nice result that the total energy is always conserved and the non-linear terms that may break the energy conservation have no contribution to the amplitude (\ref{15}). \par Let us make an analysis in the helicity space. We observe that the analysis can be done using only the terms in the brackets of (\ref{15}). Now the probability of scattering is proportional to the square of the amplitude, $P \sim |A_{i\rightarrow f}|^{2}$.
After a little calculation we obtain that the probability of transition between identical helicity states is bigger than the probability of transition between opposite helicity states ($P_{\lambda_{i}=\lambda_{f}}>P_{\lambda_{i}\neq\lambda_{f}}$) with the quantity: \begin{equation}\label{17} 2\left[f_{k}(E_{f})f^{*}_{-k}(E_{f}) f^{*}_{k}(E_{i})f_{-k}(E_{i}) +f^{*}_{k}(E_{f})f_{-k}(E_{f}) f^{*}_{-k}(E_{i})f_{k}(E_{i})\right] \end{equation} Hereby we conclude that in the scattering process a tendency for helicity conservation is manifested. This conclusion was also obtained in \cite{B4} where the analysis was done using the momentum eigenspinors. The obvious conclusion is that in the de Sitter space there is a tendency for total angular momentum conservation. \section{The cross section problem} The first observation here is that in this case we have just linear contributions to the cross section in contrast with \cite{B4} where the cross section was calculated as a sum of a linear contribution and a non-linear one. Moreover, the term $\delta(E_{f}-E_{i})$ will give us the opportunity to define the transition probability per unit time as in the Minkowski case. Then the definition of the cross section here will have the same form as in Minkowski space, \begin{equation}\label{18} d\sigma=\frac{1}{2}\sum_{\lambda_{i}\lambda_{f}}\frac{dP}{dt}\frac{1}{j}, \end{equation} where $\frac{dP}{dt}$ is the transition probability per unit time, $j$ is the incident flux, while the factor $\frac{1}{2}$ turns the sum over initial helicities into an average. \par The problem of calculating the incident flux is identical to that of \cite{B4}. First let us introduce the expression of the Dirac current in local frames, \begin{equation}\label{19} J^{\hat\mu}=e^{\hat\mu}_{\nu}\bar U_{\vec{p_{i}}, \lambda_{i}}(x)\gamma^{\nu}U_{\vec{p_{i}}, \lambda_{i}}(x)\,. \end{equation} Then the spatial components can be defined as follows: \begin{eqnarray} j(t)&=&e^{\omega t}\bar{U}^{S}_{E_{i}, \vec{n}, \lambda_{i}} (x)(\vec{n}\cdot\vec{\gamma} )U^{S}_{E_{i}, \vec{n}, \lambda_{i}}(x)\nonumber\\ &=&\frac{e^{\omega t}\omega^{2}}{32 \pi^{3}}\left|\left[1+i\cot\left(\frac{\pi}{4}+\frac{ik\pi}{2}\right)\right]\left(\frac{1}{2} +ik\right)\right|^{2}\,.\label{20} \end{eqnarray} From this equation it is immediate that our incident flux is a time dependent quantity. We know from the well-established picture in Minkowski spacetime that the incident flux does not depend on time. This property is no longer valid in a spacetime where the translational invariance with respect to time is lost, and our result is in agreement with this observation. We note that a similar time dependence of the incident flux was obtained in \cite{B4}. Another observation is that the incident flux calculated here does not depend on the incident momentum like in \cite{B4}. \par The cross section however must be evaluated using an incident flux that is independent of time. We will follow the formalism presented in \cite{B7}, where for the calculation of the incident flux one must know the state of the unperturbed system at the approximate moment of the collision. In \cite{B7}, this was taken to be $t\sim 0$, which in the case of our incident flux (\ref{20}) gives: \begin{equation}\label{21} j=j(0)=\frac{\omega^{2}}{32 \pi^{3}}\frac{2 e^{\pi k}}{\cosh(\pi k)}\left(k^2+\frac{1}{4}\right)\,.
\end{equation} \par The evaluation of the transition probability per unit time $\frac{dP_{l}}{dt}=\frac{d|A^{S}_{i\rightarrow f}|^{2}}{dt}\frac{d^{3}p_{f}}{(2\pi)^{3}}$ (where we use the fact that $[\delta(E_{f}-E_{i})]^{2}=\frac{t}{2\pi}\delta(E_{f}-E_{i})$) yields \begin{eqnarray} && \frac{dP_{l}}{dt}=\frac{(\alpha Z)^{2}\omega^{4}}{32 \pi^{3}|\vec{p}_{f}-\vec{p}_{i}|^{4}}\delta(E_{f}-E_{i})\left[ |f_{k}(E_{f})|^{2}|f_{k}(E_{i})|^{2}\right. \nonumber\\ &&\left. +|f_{-k}(E_{f})|^{2}|f_{-k}(E_{i})|^{2} +sgn(\lambda_{f}\lambda_{i})f_{k}(E_{f})f^{*}_{-k}(E_{f}) f^{*}_{k}(E_{i})f_{-k}(E_{i})\right. \nonumber\\ &&\left. +sgn(\lambda_{f}\lambda_{i})f^{*}_{k}(E_{f})f_{-k}(E_{f}) f^{*}_{-k}(E_{i})f_{k}(E_{i})\right] [\xi^{+}_{\lambda_{f}}(\vec{n})\xi_{\lambda_{i}}(\vec{n})]^{2} \frac{d^{3}p_{f}}{(2\pi)^{3}}\,.\label{22} \end{eqnarray} \par For obtaining the cross section when we have particles with given helicities we must average over the helicities of the incident particles and sum over the helicities of the emergent ones. In our case we obtain: \begin{equation}\label{23} \frac{1}{2}\sum_{\lambda_{i}\lambda_{f}} \left[\xi^{+}_{\lambda_{f}}(\vec{n})\xi_{\lambda_{i}}(\vec{n})\right]^{2}=2\,. \end{equation} \par The final expression of the differential cross section after we replace (\ref{21}), (\ref{22}) and (\ref{23}) in (\ref{18}) turns out to be: \begin{eqnarray} && d\sigma=\frac{(\alpha Z)^{2}\omega^{2}}{4 \pi^{3}|\vec{p}_{f}-\vec{p}_{i}|^{4}}\frac{1+e^{-2\pi k}}{1+4 k^2}\,\delta(E_{f}-E_{i}) \left[ |f_{k}(E_{f})|^{2}|f_{k}(E_{i})|^{2}\right. \nonumber\\ &&\left. +|f_{-k}(E_{f})|^{2}|f_{-k}(E_{i})|^{2} +sgn(\lambda_{f}\lambda_{i})f_{k}(E_{f})f^{*}_{-k}(E_{f}) f^{*}_{k}(E_{i})f_{-k}(E_{i})\right. \nonumber\\ &&\left. +sgn(\lambda_{f}\lambda_{i})f^{*}_{k}(E_{f})f_{-k}(E_{f}) f^{*}_{-k}(E_{i})f_{k}(E_{i})\right]d^{3}p_{f}\,.\label{24} \end{eqnarray} \par In the Minkowski case the factor with $\delta(E_{f}-E_{i})$ was eliminated after performing the integral with respect to the final momentum, because there the relation between momentum and energy is known. In de Sitter spacetime we do not know this relation, and the factor that contains the Dirac delta distribution cannot be eliminated when one performs the integration with respect to the final momentum in (\ref{24}). \par We note that in \cite{B4} integrals of this type were used for evaluating the cross section. These had the form \begin{eqnarray} \int_{0}^{\infty}dp_{f} f(p_{f})p^{2}_{f}\delta(p_{f}-p_{i}), \nonumber\\ \int_{0}^{\infty}dp_{f} f(p_{f})p^{2}_{f}\theta(p_{f}-p_{i})\,,\label{25} \end{eqnarray} since there we calculated the scattering process using spinors with a definite momentum but unknown energy. The last integral in (\ref{25}) was discussed in \cite{B4} ($\theta(p_{f}-p_{i})$ is the unit step function), and is solved when the modulus of the momentum is not conserved in the scattering process. \par We observe that our cross sections have a complicated dependence on the energy, which is quite unusual. This dependence on the energy was obtained after the integration with respect to $s=p/\omega$, which in physical terms means that our cross section still depends on the form of the incident wave. However, one can write $d^{3}p_{f}=p^{2}_{f}dp_{f}d\Omega_{p_{f}}$ and solve the integral with respect to the final momentum in (\ref{24}) to obtain $\frac{d\sigma}{d\Omega}$, restricting the limits of integration to between zero and a maximal value of the final momentum.
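Since the energy dependence of (\ref{24}) enters only through the functions $f_{\pm k}(E)$, the bracket can be evaluated numerically in a straightforward way. A minimal Python sketch is given below; it assumes $\omega=1$ and illustrative values of $k$ and $E$, and it computes only the bracket factor at $E_f=E_i$, not the full cross section.
\begin{verbatim}
# Evaluation of f_k(E) from Eq. (16) and of the helicity bracket of Eq. (24)
# at E_f = E_i = E, as enforced by the delta function (omega = 1 here).
import numpy as np
from scipy.special import gamma   # accepts complex arguments

def f(k, E, omega=1.0):
    a = 0.5j * k
    b = 0.5j * E / omega
    pref = np.exp(np.pi * k / 2) * np.exp(1j * (E / omega) * np.log(2.0))
    term1 = gamma(1.25 + a + b) / gamma(0.25 + a - b)
    term2 = (1j / np.pi) * np.cos(np.pi * (0.25 + a - b)) \
            * gamma(1.25 + a + b) * gamma(0.75 - a + b)
    return pref * (term1 - term2)

k, E = 2.0, 3.0                        # k = m/omega, illustrative values
fp, fm = f(k, E), f(-k, E)             # f_k(E) and f_{-k}(E)
cross = fp * np.conj(fm) * np.conj(fp) * fm     # = |f_k|^2 |f_-k|^2 at E_f = E_i
same     = abs(fp)**4 + abs(fm)**4 + 2 * cross.real   # sgn(lam_f lam_i) = +1
opposite = abs(fp)**4 + abs(fm)**4 - 2 * cross.real   # sgn(lam_f lam_i) = -1
print(same, opposite, same >= opposite)   # tendency of helicity conservation
\end{verbatim}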
\par Finally, it will be interesting to study how the results obtained here in the Schr\" odinger picture translate into the natural picture. This is because in the natural picture the potential is no longer constant. First of all, one knows from \cite{B2} that the fundamental spinor solution of positive frequency (\ref{6}) can be written in the natural picture as $U_{E_{i}, \vec{n}, \lambda_{i}}(t,\vec{x})=U^{S}_{E_{i}, \vec{n}, \lambda_{i}}(t,\vec{x}e^{\omega t})$. In addition, the external Coulomb field in the natural picture is given by Eq. (\ref{11}). Replacing these quantities in the definition of the scattering amplitude from the natural picture (\ref{9}), one obtains the same scattering amplitude as (\ref{15}). This can be checked by passing to a new integration variable $y=xe^{\omega t}$ when solving the spatial integrals. This means that our main conclusions from this paper (energy conservation and the tendency for helicity conservation) also remain valid in the natural picture. \section{Conclusion} \par In this paper we examined Coulomb scattering on de Sitter spacetime using the energy eigenspinors. In our considerations the initial and final states of the field are described by exact solutions (with a given energy and helicity) of the free Dirac equation on de Sitter space, which were written in the Schr\" odinger picture. \par Moreover, we found that the scattering amplitude and the cross sections depend on the expansion factor as $\omega^{2}$. In addition, we recover the result from our previous work that the amplitude and, implicitly, the cross section depend on the form of the incident wave. The incident flux was also found to be a time-dependent quantity. Needless to say, these consequences are the result of the loss of translational invariance with respect to time in de Sitter spacetime. \par In section 3 we found that the total energy is conserved in the scattering process and, in addition, that terms which could break energy conservation are absent, since the scattering was considered in a constant field of the form (\ref{12}). In section 3 we also recovered the tendency for helicity conservation, as in \cite{B4}. \par For further investigations it will be interesting to obtain the definition of the scattering amplitude (\ref{10}) in the new Schr\" odinger picture from a reduction formalism for the Dirac field. This will require using the form of the Dirac equation in the Schr\" odinger picture \cite{B2} and the fundamental spinor solutions (\ref{6}), with the distinction between positive/negative frequencies.
\section{Appendix A} The integrals that help us to arrive at the scattering amplitude (\ref{15}) are of the type: \begin{eqnarray} &&\int_0^{\infty} dz z^{1-iE/\omega}H^{(1)}_{\mu}(z)=2^{1-iE/\omega}\frac{\Gamma(\frac{\mu}{2}+1-\frac{i E}{2\omega})} {\Gamma(\frac{\mu}{2}+\frac{i E}{2\omega})}\nonumber\\ &&+i\frac{2^{1-iE/\omega}}{\pi}\cos\left(\frac{\mu \pi}{2}+\frac{i E \pi}{2\omega}\right)\Gamma\left(-\frac{\mu}{2}+1-\frac{i E}{2\omega}\right)\Gamma\left(\frac{\mu}{2}+1-\frac{i E}{2\omega}\right)\label{26} \end{eqnarray} and \begin{eqnarray} &&\int_0^{\infty} dz z^{1+iE/\omega}H^{(2)}_{\mu}(z)=2^{1+iE/\omega}\frac{\Gamma(\frac{\mu}{2}+1+\frac{i E}{2\omega})} {\Gamma(\frac{\mu}{2}-\frac{i E}{2\omega})}\nonumber\\ &&-i\frac{2^{1+iE/\omega}}{\pi}\cos\left(\frac{\mu \pi}{2}-\frac{i E \pi}{2\omega}\right)\Gamma\left(-\frac{\mu}{2}+1+\frac{i E}{2\omega}\right)\Gamma\left(\frac{\mu}{2}+1+\frac{i E}{2\omega}\right)\,.\label{27} \end{eqnarray} Now setting $z=p/\omega$ and $\mu=1/2\pm ik$ one can see that our result (\ref{15}) is correct. \par For calculating our incident flux we solve integrals of the form: \begin{eqnarray} \int_0^{\infty} dz z H^{(1)}_{\mu}(z)=\mu+i\mu\cot\left(\frac{\mu \pi}{2}\right)\,, \nonumber\\ \int_0^{\infty} dz z H^{(2)}_{\mu}(z)=\mu-i\mu\cot\left(\frac{\mu \pi}{2}\right)\,.\label{28} \end{eqnarray}
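\par For the reader's convenience we record an elementary consistency check of the passage from (\ref{20}) to (\ref{21}); it uses only the standard addition formulas for trigonometric functions of complex argument. With $\mu=\frac{1}{2}+ik$, as in (\ref{28}), one has
\begin{equation}
\cot\left(\frac{\pi}{4}+\frac{ik\pi}{2}\right)=\frac{1-i\sinh(\pi k)}{\cosh(\pi k)}\,,\qquad
1+i\cot\left(\frac{\pi}{4}+\frac{ik\pi}{2}\right)=\frac{e^{\pi k}+i}{\cosh(\pi k)}\,,
\end{equation}
so that
\begin{equation}
\left|\left[1+i\cot\left(\frac{\pi}{4}+\frac{ik\pi}{2}\right)\right]\left(\frac{1}{2}+ik\right)\right|^{2}
=\frac{e^{2\pi k}+1}{\cosh^{2}(\pi k)}\left(k^{2}+\frac{1}{4}\right)
=\frac{2 e^{\pi k}}{\cosh(\pi k)}\left(k^{2}+\frac{1}{4}\right)\,,
\end{equation}
which reproduces the incident flux (\ref{21}) from (\ref{20}) at $t=0$.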
{ "attr-fineweb-edu": 1.84082, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUdUE5qWTBJsoPRflf
\section{Introduction} \label{Intro} \vspace{-1mm} Organizing and formalizing results in the string theory literature, we start by noticing the following curious systematics, to be elaborated upon throughout the paper. \vspace{2mm} \noindent {\bf Toroidal orientifolds with ADE-singularities -- A curious pattern.} Consider type II superstring vacua compactified on fluxless toroidal orientifolds (\cite{Sagnotti88}\cite[p. 12]{DaiLeighPolchinski89}\cite{Mukhi97}\cite[3]{dBDHKMMS02}; see also \cite[5.3.4, 10.1.3]{IbanezUranga12}\cite[15.3]{BLT13}) with ADE-type singularities (\cite{AspinwallMorrison97}\cite{Intriligator97}; see \cite{ADE}), hence on orbifold quotients $\mathbb{T}^{\mathbf{4}_{\mathbb{H}}} \!\sslash\! G$ (e.g. \cite[13]{Ratcliffe06}) of 4-tori by crystallographic point groups \eqref{CrystallographicGroups}. These are finite subgroups $G \subset \mathrm{SU}(2) \simeq \mathrm{Sp}(1)$ of the group unit quaternions acting by left multiplication on the space $\mathbb{H} \simeq_{\mathbb{R}} \mathbb{R}^4$ of all quaternions \eqref{TheQuaternionicRepresentation}. \medskip The consistency condition on such compactifications known as (Ramond-Ramond) \emph{RR-field tadpole anomaly cancellation} (\cite[Sec. 3]{GimonPolchinski96}\cite[Sec. 9.3]{Witten12}; see \cite[4.4]{IbanezUranga12}\cite[9.4]{BLT13}), essentially says that the joint D-brane and O-plane charge in such compact orientifolds has to vanish, albeit with some subtle fine print. Explicitly, we observe that a case-by-case analysis of the string worldsheet superconformal field theory shows (\hyperlink{Table1}{\it Table 1}) that, for single wrapping number, RR-field tadpole anomaly cancellation is the following condition on the $G$-representation of D-brane charge and the $G$-set of O-planes: \begin{enumerate}[{\bf (i)}] \vspace{-.2cm} \item {\bf Local/twisted tadpole cancellation:} D-brane charge is a combination of a regular representation $\mathbf{k}_{\mathrm{reg}}$ and the trivial one $\mathbf{1}_{\mathrm{triv}}$, with coefficients the number of integral and fractional branes, respectively: \vspace{-4mm} $$\mathbf{c}_{{}_{\mathrm{Dbra}}} = N_{\mathrm{brane} \atop \mathrm{integ}} \cdot \mathbf{k}_{\mathrm{reg}} + N_{\mathrm{brane} \atop \mathrm{frac}} \cdot \mathbf{1}_{\mathrm{triv}}\;. $$ \vspace{-.4cm} \item {\bf Global/untwisted tadpole cancellation:} The dimension of D-brane charge is the cardinality of the $G$-set of O-planes: \vspace{-8mm} $$ \mathrm{dim}(\mathbf{c}_{{}_{\mathrm{Dbra}}}) = \mathrm{card}( \mathbf{c}_{{}_{\mathrm{Opla}}}) \;. 
$$ \end{enumerate} \vspace{-.1cm} \noindent In particular, $\mathbf{c}_{{}_{\mathrm{Dbra}}}$ comes from, and $\mathbf{c}_{{}_{\mathrm{Opla}}}$ gives rise to, a permutation representation, in the image of $\beta$ \eqref{BoardmanH} \cite{SS19b}: \vspace{-7mm} \begin{equation} \hspace{-3mm} \xymatrix@R=-4pt@C=40pt{ \mathbf{c}_{{}_{\mathrm{Dbra}}} \ar@{}[r]|{\in} & \mathrm{RO}(G) \ar@{}[r]|{\simeq} & \mathrm{KO}_G^0 & \mathbb{S}_G^0 \ar[l]_-{\beta} \ar@{}[r]|-{\simeq} & A(G) \ar@{<-}[r] & G \mathrm{Set}_{/\sim} \ar@{}[r]|-{\ni} & \mathbf{c}_{{}_{\mathrm{Opla}}} \\ \mathpalette\mathclapinternal{ \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} D-brane charge \\ on toroidal orientifold \end{tabular} } } & \mathpalette\mathclapinternal{ \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} representation \\ ring \end{tabular} } } & \mathpalette\mathclapinternal{ \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} equivariant \\ K-theory \end{tabular} } } & \mathpalette\mathclapinternal{ \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} equivariant \\ stable Cohomotopy \end{tabular} } } & \mathpalette\mathclapinternal{ \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} Burnside \\ ring \end{tabular} } } & \mathpalette\mathclapinternal{ \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} $G$-sets \\ ($G$-permutations) \end{tabular} } } & \mathpalette\mathclapinternal{ \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} O-plane charge \\ on toroidal orientifold \end{tabular} } } } \end{equation} \hypertarget{Table1}{} \vspace{-1mm} \hspace{-.7cm} {\small \begin{tabular}{|c|c|c|c|} \hline \begin{tabular}{c} \bf Single D-brane species \\ \bf on toroidal orientifold \end{tabular} & \begin{tabular}{c} \bf Local/twisted \\ \bf tadpole cancellation \\ \bf condition \end{tabular} & \begin{tabular}{c} \bf Global/untwisted \\ \bf tadpole cancellation \\ \bf condition \end{tabular} & \bf Comments \\ \hline \hline \begin{tabular}{l} Branes on \\ $\mathbb{T}^{\mathbf{4}_{\mathbb{H}}} \!\sslash\! G^{\mathrm{ADE}}$ \end{tabular} & \raisebox{17pt}{ $ \raisebox{-17pt}{ $ \begin{array}{rl} \mathbf{c}_{{}_{\mathrm{Dbra}}} = & \phantom{+} N_{\mathrm{brane} \atop \mathrm{int}} \cdot \mathbf{k}_{\mathrm{reg}} \\ & + N_{\mathrm{brane} \atop \mathrm{frac}} \cdot \mathbf{1}_{\mathrm{triv}} \end{array} $ } $ } & $ \begin{array}{l} \mathrm{dim}\big( \mathbf{c}_{\mathrm{tot}} \big) = \mathrm{card}\big( \mathbf{c}_{{}_{\mathrm{Opla}}} \big) \end{array} $ & \begin{tabular}{l} The general pattern \\ of the following \\ case-by-case results \end{tabular} \\ \hline \hline \begin{tabular}{l} D5/D9-branes \\ on $\mathbb{T}^{\mathbf{4}_{\mathbb{H}}} \!\sslash\! \mathbb{Z}_2$ \end{tabular} & \begin{tabular}{l} $\mathbf{c}_{{}_{\mathrm{Dbra}}} = N \cdot \mathbf{2}_{\mathrm{reg}}$ \\ (\cite[(19)]{BST99}) \end{tabular} & \begin{tabular}{l} $\mathbf{c}_{{}_{\mathrm{Dbra}}} = 16 \cdot \mathbf{2}_{\mathrm{reg}}$ \\ (\cite[(18)]{BST99}) \end{tabular} & \multirow{2}{*}{ \begin{tabular}{l} Following \\ \cite{GimonPolchinski96} \\ \cite{GC96} \end{tabular} } \\ \cline{1-3} \begin{tabular}{l} D5/D9-branes \\ on $\mathbb{T}^{\mathbf{4}_{\mathbb{H}}} \!\sslash\! 
\mathbb{Z}_4$ \end{tabular} & \begin{tabular}{l} $\mathbf{c}_{{}_{\mathrm{Dbra}}} = N \cdot \mathbf{4}_{\mathrm{reg}}$ \\ (\cite[(19)]{BST99}) \end{tabular} & \begin{tabular}{l} $\mathbf{c}_{{}_{\mathrm{Dbra}}} = 8 \cdot \mathbf{4}_{\mathrm{reg}}$ \\ (\cite[(18)]{BST99}) \end{tabular} & \\ \hline \begin{tabular}{l} D4-branes \\ on $\mathbb{T}^{\mathbf{4}_{\mathbb{H}}} \!\sslash\! \mathbb{Z}_k$ \end{tabular} & \begin{tabular}{l} $\mathbf{c}_{{}_{\mathrm{Dbra}}} = N \cdot \mathbf{k}_{\mathrm{reg}}$ \\ (\cite[4.2.1]{AFIRU00a}) \end{tabular} & & \begin{tabular}{l} Re-derived via M5-branes \\ below in \cref{M5MO5AnomalyCancellation} \end{tabular} \\ \hline \begin{tabular}{l} D4-branes \\ on $\mathbb{T}^{\mathbf{4}_{\mathbb{H}}} \!\sslash\! \mathbb{Z}_3$ \end{tabular} & \begin{tabular}{l} $\mathbf{c}_{{}_{\mathrm{Dbra}}} = N \cdot \mathbf{3}_{\mathrm{reg}}$ \\ \cite[(7.2)]{AFIRU00b} \end{tabular} & \begin{tabular}{l} $\mathbf{c}_{{}_{\mathrm{Dbra}}} = 4 \cdot \mathbf{3}_{\mathrm{reg}} + 4 \cdot \mathbf{1}_{\mathrm{triv}}$ \\ (\cite[(14)-(17)]{KataokaShimojo02}, \\ \end{tabular} & \begin{tabular}{l} The special case of $k = 3$ \\ (review in \cite[4]{Marchesano03}) \end{tabular} \\ \hline \begin{tabular}{l} D8-branes \\ on $\mathbb{T}^{{\mathbf{4}_{\mathbb{H}}}} \!\sslash\! \mathbb{Z}_3$ \end{tabular} & \begin{tabular}{l} $\mathbf{c}_{{}_{\mathrm{Dbra}}} = N \cdot \mathbf{3}_{\mathrm{reg}}$ \\ \\ \end{tabular} & \begin{tabular}{l} $\mathbf{c}_{{}_{\mathrm{Dbra}}} = 4 \cdot \mathbf{3}_{\mathrm{reg}} + 4 \cdot \mathbf{1}_{\mathrm{triv}}$ \\ (\cite[4]{Honecker01}, \cite[(29)]{Honecker02}) \end{tabular} & \begin{tabular}{l} Equivalent by T-duality \\ to previous case \\ (\cite[p.1 ]{Honecker01}, \cite[6]{Honecker02}) \end{tabular} \\ \hline \begin{tabular}{l} D3-branes \\ on $\mathbb{T}^{\mathbf{4}_{\mathbb{H }}} \!\sslash\! \mathbb{Z}_k$ \end{tabular} & \begin{tabular}{l} $\mathbf{c}_{{}_{\mathrm{Dbra}}} = N \cdot \mathbf{k}_{\mathrm{reg}} $ \\ (\cite[(25)]{FHKU01}) \end{tabular} & & \\ \hline \begin{tabular}{l} D7-branes \\ on $\mathbb{T}^{\mathbf{4}_{\mathbb{H }}} \!\sslash\! \mathbb{Z}_k$ \end{tabular} & \begin{tabular}{l} $\mathbf{c}_{{}_{\mathrm{Dbra}}} = N \cdot \mathbf{k}_{\mathrm{reg}} $ \\ (\cite[(5), (6)]{FHKU01}) \end{tabular} & & \\ \hline \begin{tabular}{l} D6-branes \\ on $\mathbb{T}^{6} \!\sslash\! \mathbb{Z}_4$ \end{tabular} & & \begin{tabular}{l} $\mathbf{c}_{{}_{\mathrm{Dbra}}} = 8 \cdot \mathbf{k}_{\mathrm{reg}} $ \\ (\cite[(25)]{IKS99}) \end{tabular} & \\ \hline \end{tabular} } \vspace{.15cm} \noindent {\bf \footnotesize Table 1 -- Tadpole cancellation conditions between D-branes and O-planes on toroidal ADE-orientifolds} {\footnotesize as derived from case-by-case analysis in perturbative string theory. The geometric content is shown in \hyperlink{FigureA}{\it Figure A}. The re-derivation from \hyperlink{HypothesisH}{\it Hypothesis H} is in \cref{EquivariantCohomotopyChargeOfM5AtMO5}.} \medskip The D-brane species in \hyperlink{Table1}{\it Table 1} with the most direct lift to M-theory are the D4-branes, lifting to M5-branes under double dimensional reduction (\cite[6]{APPS97a}\cite[6]{APPS97a}\cite{LPSS11}); see \hyperlink{Table7}{\it Table 7}. With an actual formulation of M-theory lacking, indirect plausibility arguments have been advanced \cite{DasguptaMukhi95}\cite[3.3]{Witten95b}\cite[2.1]{Hori98} that for M5-branes on M-theoretic orientifolds of the form $\mathbb{T}^{ \mathbf{5}_{\mathrm{sgn}} } \!\sslash\! 
\mathbb{Z}_2$, anomaly cancellation implies \hyperlink{Table2}{\it Table 2}: \vspace{-2mm} {\small \begin{center} \hypertarget{Table2}{} \begin{tabular}{|c|c|c|c|} \hline \begin{tabular}{c} \bf Single M-brane species \\ \bf on toroidal orientifold \end{tabular} & \begin{tabular}{c} \bf Local/twisted \\ \bf tadpole cancellation \\ \bf condition \end{tabular} & \begin{tabular}{c} \bf Global/untwisted \\ \bf tadpole cancellation \\ \bf condition \end{tabular} & \bf Comments \\ \hline \hline \multirow{2}{*}{ \begin{tabular}{c} M5-branes \\ on $\mathbb{T}^{\mathbf{5}_{\mathrm{sgn}}} \!\sslash\! \mathbb{Z}_2 $ \end{tabular} } & \begin{tabular}{c} $ \mathpalette\mathclapinternal{\phantom{\vert^{\vert^{\vert}}}} \phantom{AAA} \mathbf{c}_{{}_{\mathrm{Mbra}}} = N \cdot \mathbf{2}_{\mathrm{reg}} \phantom{AAA} \mathpalette\mathclapinternal{\phantom{\vert_{\vert_{\vert}}}} $ \end{tabular} & \begin{tabular}{c} $ \phantom{AAA} \mathbf{c}_{{}_{\mathrm{Mbra}}} = 16 \cdot \mathbf{2}_{\mathrm{reg}} \phantom{AAA} $ \end{tabular} & \multirow{2}{*}{ $\phantom{A}$ \begin{tabular}{c} \phantom{a} \\ plausibility arguments \end{tabular} $\phantom{A}$ } \\ & \multicolumn{2}{c|}{ $\mathpalette\mathclapinternal{\phantom{\vert^{\vert^{\vert}}}}$ (\cite{DasguptaMukhi95} \cite[3.3]{Witten95b} \cite[2.1]{Hori98}) $\mathpalette\mathclapinternal{\phantom{\vert_{\vert_{\vert}}}}$ } & \\ \hline \end{tabular} \end{center} } \vspace{-.15cm} \noindent {\bf \footnotesize Table 2 -- M5/MO5 anomaly cancellation in M-theory} {\footnotesize according to Folklore \ref{AnomalyCancellationOnMTheoreticOrientifolds}. While it has remained open in which cohomology theory the charge $\mathbf{c}_{\mathrm{Mbra}}$ is quantized, the geometric picture is again that illustrated in \hyperlink{FigureA}{\it Figure A}.} \medskip We highlight in \hyperlink{FigureA}{\it Figure A} the geometric interpretation of these tadpole cancellation conditions from \hyperlink{Table1}{\it Table 1} and \hyperlink{Table2}{\it Table 2}. The left side of \hyperlink{FigureA}{\it Figure A} shows a 2-dimensional slice through the toroidal orbifold $\mathbb{T}^{\mathbf{4}_{\mathbb{H}}} \!\sslash\! \mathbb{Z}_4 = (\mathbb{R}^4 / \mathbb{Z}^4) \!\sslash\! \mathbb{Z}_4$ with transversal branes/O-plane charges appearing as points. The O-plane charges (shown as open circles) are stuck one-to-one to the fixed points of the point reflection subgroup $\mathbb{Z}_2 \hookrightarrow \mathbb{Z}_4$ (see also \hyperlink{TableRT}{\it Table RT}) and, in the example shown, are permuted by the full orbifold group action of $\mathbb{Z}_4$ according to the permutation representation $2 \cdot \mathbf{1}_{\mathrm{triv}} + 1 \cdot \mathbf{2}_{\mathrm{perm}}$. The local/twisted tadpole cancellation condition says that the branes (shown as filled circles) appear in the vicinity of the O-planes with all their distinct mirror images under the full group action, thus contributing Chan-Paton fields in the regular representation $\mathbf{4}_{\mathrm{reg}}$. 
The global/untwisted tadpole cancellation condition says that the total charge of branes minus O-planes, hence the net charge if all branes/O-planes could freely move and pairwise annihilate, vanishes: \begin{center} \hypertarget{FigureA}{} \begin{tikzpicture}[scale=0.8, decoration=snake] \begin{scope}[shift={(0,-.4)}] \node at (1.4,5.3) {$ \overbrace{ \phantom{------------------} } $}; \node at (1.4,7) { \tiny \color{darkblue} \bf \begin{tabular}{c} local/twisted \\ tadpole cancellation \end{tabular} }; \node (EquivariantCocycle) at (1.4,5.3+.8) {\tiny $ \mathpalette\mathllapinternal{ \mathbf{c}_{\mathrm{tot}} = \; } 4 \cdot \big( \, \overset{ \mathbf{c}_{{}_{\mathrm{Dbra}}} }{ \overbrace{ 1 \cdot \mathbf{4}_{\mathrm{reg}} }} - \overset{ \beta\big( \mathbf{c}_{{}_{\mathrm{Opla}}}\big) }{ \overbrace{ ( 2 \cdot \mathbf{1}_{\mathrm{triv}} + 1 \cdot \mathbf{2}_{\mathrm{perm}} ) } } \; \big) $}; \node at (1.4+8,7) { \tiny \color{darkblue} \bf \begin{tabular}{c} global/untwisted \\ tadpole cancellation \end{tabular} }; \node at (1.4+8,5.3) {$ \overbrace{ \phantom{------------------} } $}; \node (PlainCocycle) at (1.4+8,5.3+.8) {\tiny \raisebox{-.6cm}{ $ \begin{aligned} & 4 \cdot \big( \, 1 \cdot 4 - ( 2 \cdot 1 + 1 \cdot 2 ) \, \big) \\ & = 0 \end{aligned} $}}; \draw[|->] (EquivariantCocycle) to node[above]{\tiny $\mathrm{dim}$ } (PlainCocycle); \end{scope} \begin{scope} \clip (-1.8,-1.8) rectangle (4.8,4.8); \draw[step=3, dotted] (-3,-3) grid (6,6); \draw[dashed] (-3,-3) circle (1); \draw[dashed] (0,-3) circle (1); \draw[dashed] (3,-3) circle (1); \draw[dashed] (6,-3) circle (1); \draw[dashed] (-3,0) circle (1); \draw[dashed] (0,0) circle (1); \draw[dashed] (3,0) circle (1); \draw[dashed] (-3,3) circle (1); \draw[dashed] (0,3) circle (1); \draw[dashed] (3,3) circle (1); \draw[dashed] (-3,6) circle (1); \draw[dashed] (0,6) circle (1); \draw[dashed] (3,6) circle (1); \draw[dashed] (6,6) circle (1); \draw[fill=white] (0,0) circle (.07); \draw[fill=white] (3,0) circle (.07); \draw[fill=white] (0,3) circle (.07); \draw[fill=white] (3,3) circle (.07); \draw[<->, dashed] (2.5,0) to[bend right=47] node { \colorbox{white}{ \tiny \color{darkblue} \bf \begin{tabular}{c} orientifold \\ action \end{tabular} } } (0,2.5); \draw (0,3) node[right] { \colorbox{white}{ \hspace{-.3cm} \tiny \color{darkblue} \bf O-plane \hspace{-.3cm} } }; \draw (3,0) node[right] { \colorbox{white}{ \hspace{-.5cm} \tiny \color{darkblue} \bf \begin{tabular}{c} mirror \\ O-plane \end{tabular} \hspace{-.3cm} } }; \draw[fill=black] (17:.7) circle (.07); \draw[fill=black] (17+90:.7) circle (.07); \draw[fill=black] (17+180:.7) circle (.07); \draw[fill=black] (17+270:.7) circle (.07); \draw (17+90:.7) node[right] { \colorbox{white}{ \hspace{-.3cm} \tiny \color{darkblue} \bf brane \hspace{-.3cm} } }; \draw (17+180:.7)+(.58,.03) node[right, below] { { \hspace{-.3cm} \tiny \color{darkblue} \bf mirror branes \hspace{-.3cm} } }; \end{scope} \begin{scope}[shift={(8,0)}] \clip (-1.8,-1.8) rectangle (4.8,4.8); \draw[step=3, dotted] (-3,-3) grid (6,6); \draw[dashed] (-3,-3) circle (1); \draw[dashed] (0,-3) circle (1); \draw[dashed] (3,-3) circle (1); \draw[dashed] (6,-3) circle (1); \draw[dashed] (-3,0) circle (1); \draw[dashed] (0,0) circle (1); \draw[dashed] (3,0) circle (1); \draw[dashed] (-3,3) circle (1); \draw[dashed] (0,3) circle (1); \draw[dashed] (3,3) circle (1); \draw[dashed] (-3,6) circle (1); \draw[dashed] (0,6) circle (1); \draw[dashed] (3,6) circle (1); \draw[dashed] (6,6) circle (1); \begin{scope}[shift={(1.1,1.1)}] 
\draw[fill=white] (0,0) circle (.07); \end{scope} \draw[->, decorate, lightgray] (0,0) to (.97,.97); \draw[fill=white] (3-1.1,0+1.1) circle (.07); \draw[->, decorate, lightgray] (3,0) to (3-.97,0+.97); \begin{scope}[shift={(1.1+.2,-1.1-.2)}] \draw[fill=white] (0,3) circle (.07); \end{scope} \draw[->, decorate, lightgray] (0,3) to (0+.97+.2,3-.97-.2); \begin{scope}[shift={(-1.2,-1.2)}] \draw[fill=white] (3,3) circle (.07); \end{scope} \draw[->, decorate, lightgray] (3,3) to (3-1.07,3-1.07); \draw[fill=black] (17:.7)+(1.36,1.36) circle (.07); \draw[fill=black] (17+90:.7)+(1.1,1.1) circle (.07); \draw[fill=black] (17+180:.7)+(1.5,1.5) circle (.07); \draw[fill=black] (17+270:.7)+(1.5,1.5) circle (.07); \draw[->, decorate, lightgray] (17:.7) to ++(1.23,1.23); \draw[->, decorate, lightgray] (17+90:.7) to ++(1.0,1.0); \draw[->, decorate, lightgray] (17+180:.7) to ++(1.37,1.37); \draw[->, decorate, lightgray] (17+270:.7) to ++(1.37,1.37); \end{scope} \begin{scope}[shift={(0,1.5)}] \draw (0,-3.5) node {\tiny $x_1 = 0$}; \draw (3,-3.5) node {\tiny $x_1 = \tfrac{1}{2}$}; \begin{scope}[shift={(8,0)}] \draw (0,-3.5) node {\tiny $x_1 = 0$}; \draw (3,-3.5) node {\tiny $x_1 = \tfrac{1}{2}$}; \end{scope} \end{scope} \draw (-3.1,0) node {\tiny $x_2 = 0$}; \draw (-3.1,3) node {\tiny $x_2 = \tfrac{1}{2}$}; % \end{tikzpicture} \end{center} \vspace{-.4cm} \noindent {\bf \footnotesize Figure A -- Illustration of the geometric situation of tadpole cancellation on toroidal ADE-orientifolds} {\footnotesize according to \hyperlink{Table1}{\it Table 1}, shown for the case $G^{\mathrm{ADE}} = \mathbb{Z}_4$. This is for single wrapping number of the branes along any further compact dimensions; but the general statement is just the tensor product of this situation with the cohomology of these further compact spaces.} \medskip \medskip \noindent In view of the evident pattern evidenced by \hyperlink{Table1}{\it Table 1} and \hyperlink{Table2}{\it Table 2}, here we ask the following question: \vspace{-1mm} \begin{center} \emph{ Is there a generalized cohomological brane charge quantization which enforces tadpole anomaly cancellation? } \end{center} \vspace{-1mm} \noindent We show in this paper that (see \hyperlink{FigureU}{\it Figure U}), for fluxless toroidal ADE-orientifolds, the answer to this question is \emph{unstable equivariant Cohomotopy} theory; see \eqref{EquivariantCohomotopySet} below. Before explaining this, we put the open problem in perspective: \vspace{5mm} \noindent {\bf The open problem -- Systematic understanding of tadpole cancellation by charge quantization.} While the RR-field tadpole cancellation conditions are thought to be crucial not just for mathematical consistency, but also for phenomenological accuracy of string model building \cite[Sec. 4.4]{IbanezUranga12}, a real understanding of the conditions in full detail and generality has remained an open problem; see \cite[p. 2]{BDS05}\cite[4.6.1]{Moore14}\cite[p. 2]{HMSV19} for critical discussion. In particular, most of the existing literature on tadpole cancellation simply regards D-brane charge as being in ordinary cohomology, while widely accepted arguments say that D-brane charge instead must be regarded in (a twisted differential enhancement of) K-theory; in this context, see \cite{SS19b} for review, and see \cite{GS-AHSS}\cite{GS19A}\cite{GS19B} for detailed constructions and accounts of the twisted differential case. 
D-brane charge in K-cohomology may be understood as a generalized \emph{charge quantization} rule, in analogy to how Dirac's classical argument for charge quantization \cite{Dirac31} (see \cite[16.4e]{Frankel97}) expresses the electromagnetic field as a cocycle in (the differential refinement of) ordinary cohomology; see \cite{Freed00}. Notice that cohomological charge quantization concerns the full non-perturbative structure of a physical theory, including its instanton/soliton charge content. \medskip Accordingly, in \cite[5]{Uranga00} it was suggested that RR-tadpole cancellation must be a consistency condition expressed in K-theory. Specifically, for orientifolds this could be Atiyah's \emph{R}eal K-theory \cite{Atiyah66}, i.e., KR-theory restricting on O-planes to KO-theory, which has been argued to capture D-brane charges on orientifolds in \cite[5]{Witten98c}\cite{Gukov99}\cite[\S 3]{BGS01}; explicit constructions are given in \cite{DMDR1}\cite{DMDR2}\cite{HMSV15}\cite{HMSV19}\cite{GS19B}. In more detail, D-brane charge on orbifolds is traditionally expected \cite[5.1]{Witten98c}\cite[4.5.2]{dBDHKMMS02}\cite{GarciaCompean99} to be in equivariant K-theory (see \cite{Greenlees05}). Hence orientifolds are expected to have charge quantization in a combination of these aspects in some Real equivariant K-theory \cite{Moutuou11}\cite{Moutuou12}\cite{FreedMoore12}\cite{Gomi17}. \medskip However, before even formulating tadpole cancellation in Real equivariant K-theory, the full formulation of O-plane charge has remained open: \medskip \noindent \textbf{\emph{Open issue 1: Single O-plane charge.}} While O-plane charge is not supposed to vary over all integers, perturbative string theory predicts it to vary in the set $\{0, \pm 1\}$ (e.g. \cite[p. 2]{HIS00}), illustrated in \hyperlink{FigureB}{\it Figure B}. 
\begin{center} \hypertarget{FigureB}{} \begin{tikzpicture}[decoration=snake] \draw (-2.5,0) node {\tiny $x_2 = 0$}; \begin{scope}[shift={(9.8,0)}] \draw[dashed] (0,0) circle (1); \draw[dotted] (0,2.5) to (0,-2.5); \draw[dotted] (1.9,0) to (-1.9,0); \draw (0,-2.7) node {\tiny $x_1 = 0$}; \draw[<->, dashed] (120:2.3) to node[very near start] { \tiny \color{darkblue} \bf \begin{tabular}{c} orientifold \\ action \end{tabular} } (120+180:2.3); \draw (0,-.3) node { \colorbox{white}{ \tiny \color{darkblue} \bf $O^{{}^{0}}\!$-plane } }; \end{scope} \begin{scope}[shift={(0,0)}] \draw[dashed] (0,0) circle (1); \draw[dotted] (0,2.5) to (0,-2.5); \draw[dotted] (1.9,0) to (-1.9,0); \draw (0,-2.7) node {\tiny $x_1 = 0$}; \draw[<->, dashed] (120:2.3) to node[very near start] { \tiny \color{darkblue} \bf \begin{tabular}{c} orientifold \\ action \end{tabular} } (120+180:2.3); \draw (0,-.3) node { \colorbox{white}{ \tiny \color{darkblue} \bf $O^{{}^{-}}\!$-plane } }; \draw[fill=white] (0,0) circle (.07); \end{scope} \begin{scope}[shift={(4.9,0)}] \draw[dashed] (0,0) circle (1); \draw[dotted] (0,2.5) to (0,-2.5); \draw[dotted] (1.9,0) to (-1.9,0); \draw (0,-2.7) node {\tiny $x_1 = 0$}; \draw[<->, dashed] (120:2.3) to node[very near start] { \tiny \color{darkblue} \bf \begin{tabular}{c} orientifold \\ action \end{tabular} } (120+180:2.3); \draw (0,-.3) node { \colorbox{white}{ \tiny \color{darkblue} \bf $O^{{}^{+}}\!$-plane } }; \draw[fill=black] (0,0) circle (.07); \begin{scope}[scale=.7, shift={(-1.9,-2.5)}] \draw[dashed] (0,0) circle (1); \draw[dotted] (0,1.2) to (0,-1.2); \draw[dotted] (1.2,0) to (-1.2,0); \draw[fill=white] (0,0) circle (.07); \draw[fill=black] (30:.85) circle (.07); \draw[fill=black] (30+180:.85) circle (.07); \draw[->, decorate, lightgray] (30:.8) to (30:.1); \draw[->, decorate, lightgray] (30+180:.8) to (30+180:.1); \draw (56:1.1) node { \begin{rotate}{56} \tiny \raisebox{-3pt}{ $\simeq$ } \end{rotate} }; \end{scope} \end{scope} \end{tikzpicture} \end{center} \vspace{-.4cm} \noindent {\bf \footnotesize Figure B -- The charge carried by a single O-plane} {\footnotesize takes values in the set $\{0, \pm 1\}$ (in units of corresponding integral D-brane charge), visualized here following the geometric illustration of \hyperlink{FigureA}{\it Figure A}. For O4-planes this situation lifts to MO5-planes in M-theory \cite{Hori98}\cite{Gimon98} \cite[II.B]{AKY98}\cite[3.1]{HananyKol00}. (The notation for $O^{{}^{0}}$ originates with \cite[p. 29]{Hori98}\cite[p. 4]{Gimon98}; see \hyperlink{FigureT}{\it Figure T} for more.)} \medskip But in plain KR-theory all O-planes are $\mathrm{O}^{{}^{-}}\!$-planes. To capture at least the presence of $\mathrm{O}^{{}^{+}}\!$-planes requires adding to KR-theory an extra sign choice \cite{DMDR1}. In some cases this may be regarded as part of a twisting of KR-theory \cite{HMSV19}, but the situation remains inconclusive \cite[p. 2]{HMSV19}. \footnote{Note that \cite[footnote 1]{HMSV19} claims a problem with the sign choice in \cite{DMDR1}, and hence also in \cite{Moutuou12}. These continuing issues with orbifold K-theory for D-brane charge may motivate but do not affect the discussion here, where instead we propose equivariant Cohomotopy theory for M-brane charge as an alternative. 
} {\small \begin{floatingtable}[r] { \hypertarget{Table3}{} \begin{tabular}{|c|c|c|c|c|} \hline \begin{tabular}{c} \bf O-plane \\ \bf species \end{tabular} & \begin{tabular}{c} \bf Charge \\ $q_{{}_{\mathrm{O}p^-}}/q_{{}_{\mathrm{D}p}}$ \end{tabular} & \begin{tabular}{c} \bf Transverse \\ \bf orientifold \end{tabular} & \begin{tabular}{c} \bf Number of \\ \bf singularities \end{tabular} \\ \hline \hline $\mathrm{O}9^{-}$ & -32 & $\mathbb{T}^0 \!\sslash\! \mathbb{Z}_2$ & 1 \\ \hline $\mathrm{O}8^{-}$ & -16 & $\mathbb{T}^1 \!\sslash\! \mathbb{Z}_2$ & 2 \\ \hline $\mathrm{O}7^{-}$ & -8 & $\mathbb{T}^2 \!\sslash\! \mathbb{Z}_2$ & 4 \\ \hline $\mathrm{O}6^{-}$ & -4 & $\mathbb{T}^3 \!\sslash\! \mathbb{Z}_2$ & 8 \\ \hline $\mathrm{O}5^{-}$ & -2 & $\mathbb{T}^4 \!\sslash\! \mathbb{Z}_2$ & 16 \\ \hline $\mathrm{O}4^{-}$ & -1 & $\mathbb{T}^5 \!\sslash\! \mathbb{Z}_2$ & 32 \\ \hline \end{tabular} } \\ {\bf \footnotesize Table 3 -- Absolute O$p$-plane charge} {\footnotesize \cite[(5.52)]{IbanezUranga12}\cite[10.212]{BLT13} $- 32$ is not implied by K-theory \cite{BGS01}, but is implied by Cohomotopy. } \end{floatingtable} } \medskip \noindent \textbf{\emph{Open issue 2: Total O-plane charge.}} As highlighted in \cite[p. 4, p. 25]{BGS01}, it remains open whether a putative formalization of tadpole cancellation via Real K-theory reflects the \emph{absolute total} charge to be carried by O-planes. This is a glaring open problem, since the absolute total charge -32 of O$p$-planes in toroidal orientifolds (see \hyperlink{Table3}{\it Table 3}) fixes the gauge algebra $\mathfrak{so}(32)$ of type I string theory required for duality with heterotic string theory (see, e.g., \cite[p. 250]{BLT13} \cite{AntoniadisPartoucheTaylor97}) with Green-Schwarz anomaly cancellation. This core result of string theory, is the basis of the ``first superstring revolution'' \cite[p. 21]{Schwarz11}, and a successful formalization of tadpole cancellation ought to reproduce it. \medskip A proposal for capturing absolute background charge of O-planes by equipping K-theory with a quadratic pairing has been briefly sketched in \cite{DFM11}, but the implications remain somewhat inconclusive \cite[p. 22]{Moore14}. We notice that the implications on M-brane charge quantization of analogous quadratic functions in M-theory \cite{HopkinsSinder05} are reproduced by charge quantization in twisted Cohomotopy theory \cite{FSS19b}. Here we further check this alternative proposal: That brane charge quantization is in \emph{Cohomotopy} cohomology theory, which lifts K-theory through the Boardman homomorphism; see \eqref{FromUnstableCohomotopyToEquivariantKTheory} below. \medskip \noindent {\bf The proposal -- Charge quantization on orientifolds in Equivariant Cohomotopy theory.} When educated guesswork gets stuck, it is desirable to identify principles from which to systematically \emph{derive} charge quantization in M-theory, if possible, and seek the proper generalized cohomology theory to describe the M-theory fields, as was advocated and initiated in \cite{Sa1}\cite{Sa2} \cite{Sa3}\cite{tcu}. A first-principles analysis of super $p$-brane sigma-models in rational homotopy theory shows \cite{S-top}\cite{FSS15}\cite{FSS16a}\cite{FSS16b} \newline that rationalized M-brane charge is quantized in rational \emph{Cohomotopy} cohomology theory; see \cite{FSS19a} for review. 
This naturally suggests the following hypothesis about charge quantization in M-theory \cite{S-top}\cite{FSS19b}\cite{FSS19c} \cite{SS19b}\cite{SS19c} (for exposition see \cite{Schreiber20}): \vspace{0mm} \hypertarget{HypothesisH}{} \begin{center} \fbox{\noindent {\bf Hypothesis H.} {\bf \it The M-theory C-field is charge-quantized in Cohomotopy theory}.} \end{center} Applied to toroidal orbifolds, the relevant flavor of unstable Cohomotopy theory is (see \hyperlink{Table4}{\it Table 4}) \emph{unstable equivariant Cohomotopy} (\cite[8.4]{tomDieck79}\cite{Cruickshank03}), denoted $\pi^\bullet_G$ \eqref{EquivariantCohomotopySet}. This is the cohomology theory whose degrees are labeled by orthogonal linear $G$-representations, called the \emph{RO-degree} (see, e.g., \cite[3]{Blu17}) \vspace{-2mm} \begin{equation} \label{RODegree} \xymatrix{ {\phantom{V}}\mathpalette\mathllapinternal{ \mbox{ \tiny \color{darkblue} \bf ``RO-degree'' } } \ar@[white]@(ul,ur)^{ \mathpalette\mathllapinternal{ \mbox{ \tiny \color{darkblue} \bf orthogonal linear $G$-representation } } } } \xymatrix{V \ar@(ul,ur)^{G \subset \mathrm{O}(\mathrm{dim}(V))} } \!\!\!\!\!\in \mathrm{RO}(G) \;\;\; \mbox{\tiny \color{darkblue} \bf representation ring} \end{equation} and whose value on a topological $G$-space $X$ (representing a global $G$-quotient orbifold $X \!\sslash\! G$) with specified point at infinity $\infty \in X$ -- see diagram \eqref{VanishingAtInfinity} -- is the \emph{set} of $G$-homotopy classes \eqref{GHomotopy} of pointed $G$-equivariant continuous functions \eqref{EquivariantFunction} from $X$ to the {$V$-representation sphere} $S^V$ \eqref{RepSpheres} (see \cref{EquivariantCohomotopyAndTadpoleCancellation} for details and illustration): \begin{equation} \label{EquivariantCohomotopySet} \begin{array}{ccc} \pi^V_G \big( X \big) & \coloneqq & \left\{ \raisebox{-6pt}{ {\xymatrix{ X \ar@(ul,ur)|-{\,G\,} \ar[rr]^-c && S^V \ar@(ul,ur)|-{\,G\,} } } } \right\}_{\raisebox{2pt}{\tiny$\!\!\!\Big/\sim$}} \\ \tiny \color{darkblue} \begin{tabular}{c} \bf equivariant Cohomotopy set \\ \bf of the orbifold $X \!\sslash\! G$ \\ \bf in RO-degree $V$ \end{tabular} && \tiny \color{darkblue} \begin{tabular}{c} \bf set of $G$-homotopy classes \\ \bf of $G$-equivariant continuous functions \\ \bf from $X$ to $S^V$ \end{tabular} \end{array} \end{equation} This is the evident enhancement to unstable $G$-equivariant homotopy theory (see \cite[1]{Blu17}) of unstable plain Cohomotopy theory $\pi^\bullet$ (\cite{Borsuk36}\cite{Spanier49}\cite{KMT12}\cite[3.1]{FSS19b}). \vspace{-.3cm} \begin{equation} \label{FromUnstableCohomotopyToEquivariantKTheory} \raisebox{0pt}{ \xymatrix@R=-20pt@C=16pt{ \mbox{ \raisebox{-10pt}{\footnotesize \begin{minipage}[l]{7.2cm} Equivariant Cohomotopy is a non-abelian (i.e. ``unstable'') Cohomology theory \cite{SSS12}\cite{NSS12} that maps to equivariant K-theory via stabilization followed by the Boardman homomorphism, see \cref{StableEquivariantHopfDegree} and \cite{SS19b}. 
\end{minipage} } } & \pi^\bullet_G \ar[rr]^-{\Sigma^\infty}_-{ \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} $\phantom{a}$ \\ stablilization \end{tabular} } } && \mathbb{S}_G \ar[rr]^-{\beta}_-{ \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} $\phantom{a}$ \\ Boardman \\ homomorphism \end{tabular} } } && \mathrm{KO}_G \\ & \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} unstable \\ equivariant \\ Cohomotopy \end{tabular} } && \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} stable \\ equivariant \\ Cohomotopy \end{tabular} } && \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} \\ equivariant \\ K-theory \end{tabular} } } } \end{equation} \noindent {\bf The solution -- From Hypothesis H.} In this article we explain how lifting brane charge quantization to ADE-equivariant Cohomotopy, regarded as the generalized Dirac charge quantization of the M-theory C-field (e.g. \cite{Duff99B}) on toroidal M-orientifolds (\cite{DasguptaMukhi95}\cite{Witten95b}\cite{Hori98}\cite{ADE}), gives the local O-plane charges in $\{0,\pm 1\}$ from \hyperlink{FigureB}{\it Figure B} and enforces on D-brane charge in the underlying equivariant K-theory \eqref{FromUnstableCohomotopyToEquivariantKTheory} the RR-field tadpole cancellation constraints from \hyperlink{Table1}{\it Table 1} via their M-theory lift from \hyperlink{Table2}{\it Table 2}. \medskip \noindent {\bf Overall picture -- M-Theory and Cohomotopy.} As we further explain in \cite{OrbifoldCohomology}, unstable equivariant Cohomotopy theory is the incarnation on flat orbifolds of \emph{unstable twisted Cohomotopy} cohomology theory, which we showed in \cite{FSS19b}\cite{FSS19c} implies a list of M-theory anomaly cancellation conditions on non-singular (i.e., ``smooth'') but topologically non-trivial spacetimes; see \hyperlink{Table4}{\it Table 4}: \vspace{-1mm} {\small \hypertarget{Table4}{} \begin{center} \hspace{-4.5mm} \begin{tabular}{cc} \setlength\tabcolsep{.4em} \begin{tabular}{|c||c|c|} \hline Spacetime & {\bf Flat} & {\bf Curved} \\ \hline \hline {\bf Smooth} & \begin{tabular}{c} plain \\ Cohomotopy \\ (\cite{FSS15}\cite{BSS18}) \end{tabular} & \begin{tabular}{c} twisted \\ Cohomotopy \\ (\cite{FSS19b}\cite{FSS19c}) \end{tabular} \\ \hline \begin{tabular}{c} \bf Orbi- \\ \bf singular \end{tabular} & \begin{tabular}{c} equivariant \\ Cohomotopy \\ (\cite{ADE}\cite{SS19b} \cref{M5MO5AnomalyCancellation}) \end{tabular} & \begin{tabular}{c} orbifold \\ Cohomotopy \\ (\cite{OrbifoldCohomology}) \end{tabular} \\ \hline \end{tabular} & \begin{minipage}[l]{8cm} {\bf \footnotesize Table 4 -- M-theory anomaly cancellation by C-field charge quantization in Cohomotopy.} \hspace{-2mm} {\footnotesize On smooth but curved spacetimes, Cohomotopy theory is twisted via the J-homomorphism by the tangent bundle. On flat orbi-orientifolds the spacetime curvature is all concentrated in the $G$-singularities, around which the tangent bundle becomes a $G$-representation and twisted Cohomotopy becomes equivariant Cohomotopy. In each case the respective charge quantization implies expected anomaly cancellation conditions. See also \hyperlink{Table8}{\it Table 8}}. \end{minipage} \end{tabular} \end{center} } \vspace{1mm} \noindent Each entry in \hyperlink{Table4}{\it Table 4} supports \hyperlink{HypothesisH}{\it Hypothesis H} in different corners of the expected phase space of M-theory. This suggests that \hyperlink{HypothesisH}{\it Hypothesis H} is a correct assumption about the elusive mathematical foundation of M-theory. 
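\medskip
\noindent {\bf An arithmetic consistency check.} As a simple illustration of the solution statement above, and of the pattern in \hyperlink{Table1}{\it Table 1}, consider the M5/MO5 case of \hyperlink{Table2}{\it Table 2}: for $\mathbf{c}_{{}_{\mathrm{Mbra}}} = 16 \cdot \mathbf{2}_{\mathrm{reg}}$ the global/untwisted condition reads
\vspace{-2mm}
$$
\mathrm{dim}\big( \mathbf{c}_{{}_{\mathrm{Mbra}}} \big)
\;=\;
16 \cdot \mathrm{dim}\big( \mathbf{2}_{\mathrm{reg}} \big)
\;=\;
32
\;=\;
2^5
\;=\;
\mathrm{card}\Big( \big( \mathbb{T}^{\mathbf{5}_{\mathrm{sgn}}} \big)^{\mathbb{Z}_2} \Big)
\;=\;
\mathrm{card}\big( \mathbf{c}_{{}_{\mathrm{Opla}}} \big)
\,,
$$
in agreement with the count of 32 singularities in \hyperlink{Table3}{\it Table 3}. This is merely the numerology already implicit in the tables above; the actual derivation of these charges from equivariant Cohomotopy is the content of \cref{M5MO5AnomalyCancellation}.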
\medskip \hypertarget{RelevanceOfUnstable}{} \noindent {\bf The necessity of unstable = non-abelian charge quantization for O-planes.} We highlight that most authors who discuss equivariant Cohomotopy consider \emph{stable} equivariant Cohomotopy theory (e.g. \cite{Segal71}\cite{Carlsson84} \cite{Lueck}), represented by the equivariant sphere spectrum $\mathbb{S}_G$ in equivariant stable homotopy theory (\cite{LMS86}\cite[Appendix]{HHR16}); see \cref{StableEquivariantHopfDegree} below. There are comparison homomorphisms \eqref{FromUnstableCohomotopyToEquivariantKTheory} from equivariant unstable Cohomotopy to stable Cohomotopy and further to K-theory but each step forgets some information (has a non-trivial kernel) and produces spurious information (has a non-trivial cokernel); see \cite{SS19b}. For the result presented here (just as for the previous discussion in \cite{FSS19b}\cite{FSS19c}), it is crucial that we use the richer \emph{unstable} version of the Cohomotopy theory, hence the \emph{non-abelian Cohomology theory} \cite{SSS12}\cite{NSS12}, which is the one that follows from analysis of super $p$-brane cocycles \cite{S-top}\cite{FSS19a}. We find that: \begin{enumerate}[{\bf (a)}] \vspace{-2mm} \item the difference in the behavior between the O-plane charges and the D-brane charges (in \hyperlink{Table1}{\it Table 1}, \hyperlink{TableMNTC}{\it Table 2} and \hyperlink{FigureP}{Figure P}) and \vspace{-2mm} \item the unstable/non-abelian nature of O-plane charge itself (\hyperlink{FigureOP}{\it Figure OP}) \end{enumerate} \vspace{-2mm} are reflected in the passage from the unstable to the stable range in unstable ADE-equivariant Cohomotopy, where the O-plane charges are distinguished as being in the unstable range; see \hyperlink{FigureC}{\it Figure C}: \vspace{-9mm} \begin{center} \hypertarget{FigureC}{} $$ \xymatrix@C=20pt@R=1.2em{ & \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} toroidal \\ orientifold \end{tabular} } \ar@{}[rrrr]|-{ \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} cocycle in \\ equivariant Cohomotopy \end{tabular} } } &&&& \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} equivariant cohomotopy \\ classifying space \\ (representation sphere) \end{tabular} } \\ & \mathbb{T}^{\mathbf{4}_{\mathbb{H}}} \ar@(ul,ur)|<<<<{ \mathbb{Z}_2 } \ar[rrrr]^{ c } &&&& S^{\mathbf{4}_{\mathbb{H}}} \ar@(ul,ur)|<<<<{\mathbb{Z}_2} \\ \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} set of fixed points \\ in orientifold \\ (O-planes) \end{tabular} } & \big( \mathbb{T}^{\mathbf{4}_{\mathbb{H}}} \big)^{\mathbb{Z}_2} = \big\{ 0, \tfrac{1}{2} \big\}^4 \ar[rrrr]^-{ c^{(\mathbb{Z}_2)} }_{ \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} O-plane charge \\ (a subset) \end{tabular} } } \ar@{^{(}->}[dd] &&&& \underset{ S^0 }{ \underbrace{ \{0,\infty\} } } = \big( S^{\mathbf{4}_{\mathbb{H}}} \big)^{\mathbb{Z}_2} \ar@{^{(}->}[dd] & \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} set of fixed points \\ in classifying space \\ (the 0-sphere) \end{tabular} } \\ \\ \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} underlying \\ plain 4-torus \end{tabular} } & \big( \mathbb{T}^{\mathbf{4}_{\mathbb{H}}} \big)^1 = T^4 \ar[rrrr]^-{ (c)^1 }_-{ \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} net charge \\ (an integer) \end{tabular} } } &&&& S^4 = \big( S^{\mathbf{4}_{\mathbb{H}}} \big)^{1} & \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} underlying \\ plain 4-sphere \end{tabular} } } $$ \end{center} \vspace{-3mm} \noindent {\bf \footnotesize Figure C -- A cocycle in 
unstable ADE-equivariant Cohomotopy on a toroidal orientifold} {\footnotesize according to \eqref{EquivariantCohomotopySet}, and its decomposition on fixed point strata into Elmendorf stages; see \cite[1.3]{Blu17}\cite[3.1]{ADE}.} \vspace{4mm} {\bf Characterizing brane/O-plane charges -- Unstable (equivariant) differential topology.} Since in \hyperlink{FigureC}{\it Figure C} the fixed locus in the classifying space is just a 0-sphere, and since the Hopf degree of maps $X^n \to S^n$ stabilizes only for $n \geq 1$ -- see diagram \eqref{HopfDegreesUnderSuspension} -- the fixed points in the spacetime (= O-planes) carry ``unstable'' or ``non-linear'' charge: not given by a group element, but by a subset, distinguishing $O^{{}^{\pm}}\!$-planes from $O^{{}^{0}}\!$-planes as in \hyperlink{FigureOP}{\it Figure OP}. The further distinction between $O^{{}^{-}}\!$-planes and $O^{{}^{+}}\!$-planes is implied by normal framing that enters in the unstable Pontrjagin-Thom theorem (discussed in \cref{NormalFramingAndBraneAntibraneAnnihilation}). Moreover, the local/twisted tadpole cancellation condition in the vicinity of O-planes is implied by the unstable equivariant Hopf degree theorem (discussed in \cref{LocalTadpoleCancellation}). Last but not least, it is the unstable Pontrjagin-Thom theorem, discussed in \cref{Sec-Coh}, which identifies all these charges with \emph{sub}-manifolds, hence with actual brane/O-plane worldvolumes as shown in \hyperlink{FigureA}{\it Figure A}, (while the stable PT-theorem instead relates stable Cohomotopy to manifolds equipped with any maps to spacetime). \vspace{.4cm} {\small \hspace{-.8cm} \setlength\tabcolsep{.25em} \begin{tabular}{|c|c|c|c|} \hline \begin{tabular}{c} \bf Classical theorem \end{tabular} & Reference & \begin{tabular}{c} \bf Interpretation for brane charge quantization \\ \bf in unstable Cohomotopy (\hyperlink{HypothesisH}{\it Hypothesis H}) \end{tabular} & \begin{tabular}{c} Discussed in \end{tabular} \\ \hline \hline \begin{tabular}{c} Unstable \\ Pontrjagin-Thom theorem \end{tabular} & \begin{tabular}{c} \cite[IX (5.5)]{Kosinski93} \end{tabular} & \begin{tabular}{c} Cohomotopy charge is sourced by submanifolds \\ hence by worldvolumes of branes and O-planes \end{tabular} & \cref{PTTheorem} \\ \hline \begin{tabular}{c} Unstable \\ Hopf degree theorem \end{tabular} & \begin{tabular}{c} \cite[IX (5.8)]{Kosinski93} \\ \cite[7.5]{Kobin16} \end{tabular} & \begin{tabular}{c} Charge of flat transversal branes is integer \\ while charge of flat transversal O-planes is in $\{0,1\}$ \end{tabular} & \cref{NormalFramingAndBraneAntibraneAnnihilation} \\ \hline \begin{tabular}{c} Unstable \\ equivariant Hopf degree theorem \end{tabular} & \cite[8.4]{tomDieck79} & \begin{tabular}{c} Branes appear in regular reps around O-planes \\ = local/twisted tadpole anomaly cancellation \end{tabular} & \cref{EquivariantCohomotopyAndTadpoleCancellation} \\ \hline \end{tabular} } \medskip \noindent {\bf Organization of the paper.} In \cref{Sec-Coh} we discuss how the classical unstable Pontrjagin-Thom isomorphism says that plain Cohomotopy classifies charge carried by brane worldvolumes. In \cref{EquivariantCohomotopyAndTadpoleCancellation} we introduce the enhancement of this situation to equivariant Cohomotopy on toroidal orbifolds, where it encodes joint D-brane and O-plane charge. 
We explain in \cref{EquivariantCohomotopyAndTadpoleCancellation} that now the \emph{equivariant Hopf degree theorem} encodes the form of local/twisted tadpole cancellation conditions, and explain in \cref{GlobalTadpoleCancellation} that super-differential refinement at global Elmendorf stage encodes the form of global/untwisted tadpole cancellation conditions as in \hyperlink{Table1}{\it Table 1} and \hyperlink{Table2}{\it Table 2}. The Pontrjagin-Thom theorem now serves to map these charges precisely to the geometric situations of the form shown in \hyperlink{FigureA}{\it Figure A}. Finally, in \cref{M5MO5AnomalyCancellation} we specify these general considerations to the physics of M5-branes at MO5-planes in toroidal ADE-orientifolds in M-theory, with the C-field charge-quantized in equivariant Cohomotopy theory, according to \hyperlink{HypothesisH}{\it Hypothesis H}. To set the scene, we first recall in \cref{HeteroticMTheoryOnADEOrbifolds} the situation of heterotic M-theory on ADE-orbifolds and highlight subtleties in the interpretation of MO5-planes. With this in hand, we apply in \cref{EquivariantCohomotopyChargeOfM5AtMO5} the general discussion of equivariant Cohomotopy from \cref{EquivariantCohomotopyAndTadpoleCancellation} to ADE-singularities intersecting MO9-planes in M-theory, and find (Cor. \ref{EquivariantCohomotopyOfSemiComplementSpacetime}, Cor. \ref{GlobalM5MO5CancellationImplied}) that this correctly encodes the expected anomaly cancellation of M5-branes at MO5-planes, and this, upon double dimensional reduction (see \hyperlink{Table7}{\it Table 7} and \hyperlink{FigureU}{\it Figure U}), the RR-field tadpole anomaly cancellation for D-branes on ADE-orientifolds. \section{Cohomotopy and brane charge } \label{Sec-Coh} \vspace{-1mm} Before turning to equivariant/orbifold structure in \cref{EquivariantCohomotopyAndTadpoleCancellation}, we first discuss basics of plain unstable Cohomotopy on plain manifolds. The key point is that the unstable \emph{Pontrjagin-Thom theorem}, reviewed in \cref{PTTheorem}, identifies cocycles in unstable Cohomotopy theory with cobordism classes of submanifolds carrying certain extra structure (normal framing). These submanifolds are naturally identified with the worldvolumes of branes that source the corresponding Cohomotopy charge, and the normal structure they carry corresponds to the charge carried by the branes, distinguishing branes from anti-branes. In \cref{NormalFramingAndBraneAntibraneAnnihilation} we highlight that coboundaries in unstable Cohomotopy accordingly correspond to brane pair creation/annihilation processes. This way the Pontrjagin-Thom theorem establishes Cohomotopy as a natural home for brane charges, as proposed in \cite{S-top}. \vspace{-1mm} \subsection{Pontrjagin-Thom theorem and brane worldvolumes} \label{PTTheorem} \noindent {\bf Cohomotopy cohomology theory.} The special case of unstable $G$-equivariant Cohomotopy \eqref{EquivariantCohomotopySet} with $G = 1$ the trivial group is unstable plain Cohomotopy theory (\cite{Borsuk36}\cite{Spanier49}\cite{KMT12}\cite[3.1]{FSS19b}), denoted $\pi^\bullet \coloneqq \pi^\bullet_1$. This is the unstable/non-abelian cohomology theory whose degrees are natural numbers $n \in \mathbb{N}$ and which assigns to an un-pointed topological space $X$ the \emph{Cohomotopy set} of free homotopy classes of continuous maps into the $n$-sphere: \vspace{-2mm} \begin{equation} \label{PlainCohomotopySet} \begin{array}{ccc} \pi^n ( X ) & \coloneqq & \big\{ \!\!\!\!\! 
\raisebox{+2pt}{ {\xymatrix{ X \ar[rr]^-c && S^n } } } \!\!\!\!\! \big\}_{\raisebox{16pt}{$/\sim$}} \\[-12pt] \tiny \color{darkblue} \begin{tabular}{c} \bf Cohomotopy set \\ \bf of the space $X$ \\ \bf in degree $n$ \end{tabular} && \tiny \color{darkblue} \begin{tabular}{c} \bf set of homotopy classes \\ \bf of continuous functions \\ \bf from $X$ to the $n$-sphere $S^n$ \end{tabular} \end{array} \end{equation} The contravariant assignment $X \mapsto \pi^n(X)$ is analogous to the assignment $X \mapsto H^n(X, \mathbb{Z})$ of integral cohomology groups, or of the assignment $X \mapsto K^n(X)$ of K-theory groups, and as such may be regarded as a generalized but \emph{non-abelian} cohomology theory \cite{SSS12}\cite{NSS12}: For $n \geq 1$ we have (as for any connected topological space) a weak homotopy equivalence between the $n$-sphere and the classifying space of its loop group, $ S^n \;\simeq_{\mathrm{whe}}\; B \big(\Omega S^{n}\big) $, which means that the Cohomotopy sets \eqref{PlainCohomotopySet} \vspace{-2mm} $$ \underset{ \mathpalette\mathclapinternal{ \mbox{ \tiny \color{darkblue} \bf $n$-Cohomotopy set } \; } }{ \pi^n(X) } \;\;\;\; \simeq\;\;\;\; \underset{ \mathpalette\mathclapinternal{ \; \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} non-abelian cohomology set \\ with coefficients in \\ loop group of $n$-sphere \end{tabular} } } }{ H^1(X, \Omega S^n) } $$ are equivalently the non-abelian cohomology sets with coefficient in the loop-group of the $n$-sphere, in direct generalization of the familiar case of non-abelian cohomology $H^1(X,G) \simeq G\mathrm{Bund}(X)_{/\sim}$ with coefficients in a compact Lie group $G$. \medskip In this way we may think of \eqref{PlainCohomotopySet} as defining a generalized cohomology theory, different from but akin to, say, K-theory, and as such we call it \emph{Cohomotopy cohomology theory}, or \emph{Cohomotopy theory} or just \emph{Cohomotopy}, for short. The capitalization indicates that this term is the proper name of a specific cohomology theory (we might abbreviate further to \emph{$C$-theory} to bring out the analogy with $K$-theory yet more) and \emph{not} on par with \emph{homotopy theory}, which instead is the name of the general mathematical framework within which we are speaking. In particular, \emph{Cohomotopy cohomology theory} is \emph{not} the dual concept of \emph{homotopy theory}, but is the dual concept of the unstable/non-abelian generalized homology theory which assigns homotopy groups $X \mapsto \pi_n(X)$ to pointed topological spaces $X$ (hence: \emph{Homotopy homology theory}, mostly familiar in its stable form). \medskip \noindent {\bf Unstable Pontrjagin-Thom theorem.} Thinking of $X$ here as spacetime, we are interested in the case that $X = X^D$ admits the structure of closed smooth manifold of dimension $D \in \mathbb{N}$. In this case, the unstable Pontrjagin-Thom theorem \eqref{UnstablePTTheorem} identifies (see e.g. 
\cite[IX.5]{Kosinski93}) the degree-$n$ Cohomotopy set of $X^D$ \eqref{PlainCohomotopySet} with the set of cobordism classes of normally framed codimension-$n$ closed submanifolds of $X^D$ (see e.g.\cite[IX.2]{Kosinski93}), hence of closed submanifolds $\Sigma^{d} \overset{i}{\hookrightarrow} X^D$ which are of dimension $d = D - n$ and equipped with a choice of trivialization \vspace{0mm} \begin{equation} \label{TrivializationOfNormalVectorBundle} \xymatrix@R=-4pt@C=4em{ N_i\Sigma \ar[rr]^-{ \mbox{\tiny \color{darkblue} \bf normal framing} }_-{\simeq} && \Sigma \times \mathbb{R}^{n} \\ \mathpalette\mathclapinternal{ \mbox{ \bf \tiny \color{darkblue} \begin{tabular}{c} normal bundle \\ of codimension $n$ submanifold $\Sigma$ \\ inside ambient manifold $X$ \end{tabular} } } && \mbox{ \bf \tiny \color{darkblue} \begin{tabular}{c} trivial vector bundle \\ of rank $n$ \end{tabular} } } \end{equation} of their normal vector bundle: \vspace{-3mm} \begin{equation} \label{UnstablePTTheorem} \hspace{-5mm} \xymatrix@R=-10pt@C=3.2em{ \mbox{ \raisebox{-10pt}{ \begin{minipage}[l]{3cm} \footnotesize \bf Unstable \\ Pontrjagin-Thom \\ theorem \end{minipage} \hspace{-1.9cm} } } & \pi^n \big( X^D \big) \ar@<+6pt>[rrr]^-{ \overset{ \mbox{ \bf \tiny \color{darkblue} take pre-image at 0 of regular representative } }{ \mathrm{fib}_0 \, \circ \, \mathrm{reg} } }_-{ \simeq } \ar@{<-}@<-6pt>[rrr]_-{ \mbox{\tiny ``PT collapse''} \atop \mbox{\bf \tiny \color{darkblue} assign directed asymptotic distance } } &&& \left\{ \!\!\!\!\!\!\!\! \mbox{ \raisebox{2pt}{\footnotesize \begin{tabular}{c} Closed submanifolds $\Sigma^d \overset{i}{\hookrightarrow} X^D$ \\ of dimension $d = D - n$ \\ and equipped with normal framing \end{tabular} } } \!\!\!\!\!\!\!\!\! \right\}_{\raisebox{5pt}{\tiny $\!\!\!\Big/{\!\!\mathrm{cobordism}}$}} \\ & \mbox{ \bf \tiny \color{darkblue} \begin{tabular}{c} Cohomotopy set in degree $n$ \\ of closed $D$-dim. manifold $X$ \end{tabular} } } \end{equation} The construction which exhibits this bijection is traditionally called the Pontrjagin-Thom \emph{collapse}, but a more suggestive description, certainly for our application to brane charges, is this: \emph{ The Cohomotopy class corresponding to a submanifold/brane is represented by the function which assigns \emph{directed asymptotic distance} from the submanifold/brane, as measured with respect to the given normal framing \eqref{TrivializationOfNormalVectorBundle} upon identifying the normal bundle with a tubular neighborhood and regarding all points outside the tubular neighborhood as being at infinite distance.} See \hyperlink{FigureD}{\it Figure D}: \vspace{-4mm} \begin{center} {\hypertarget{FigureD}{}} \begin{tikzpicture}[scale=0.69] \node (X) at (-4.5,6) {\small $X$}; \node (sphere) at (6,6) {\small $S^n = (\mathbb{R}^n)^{\mathrm{cpt}}$}; \draw[->] (X) to node[above] {\footnotesize $c$} (sphere); \node at (-4.5,5.4) {\tiny \color{darkblue} \bf manifold}; \node at (-4.5,4.4) {$\overbrace{\phantom{--------------------------}}$}; \node at (6,5.4) {\tiny \color{darkblue} \bf \begin{tabular}{c} $n$-sphere \\ Cohomotopy coefficient \end{tabular}}; \node at (6,4.4) {$\overbrace{\phantom{--------------}}$}; \node at (.25,5.7) {\tiny \color{darkblue} \bf Cohomotopy cocycle}; \begin{scope}[shift={(-6,-1.5)}] \clip (-2.9,-2.9) rectangle (5.9,5.9); \draw[step=3, dotted] (-3,-2) grid (6,6); \draw[very thick] (-4,1.3) .. controls (-1,-3.2) and (2.3,6.6) .. (7,4.2); \begin{scope}[shift={(0,.9)}] \draw[dashed] (-4,1.3) .. controls (-1,-3.2) and (2.3,6.6) .. 
(7,4.2); \end{scope} \begin{scope}[shift={(0,-.9)}] \draw[dashed] (-4,1.3) .. controls (-1,-3.2) and (2.3,6.6) .. (7,4.2); \end{scope} \begin{scope}[shift={(0,.45)}] \draw[dashed, thick] (-4,1.3) .. controls (-1,-3.2) and (2.3,6.6) .. (7,4.2); \end{scope} \begin{scope}[shift={(0,-.45)}] \draw[dashed, thick] (-4,1.3) .. controls (-1,-3.2) and (2.3,6.6) .. (7,4.2); \end{scope} \end{scope} \begin{scope}[shift={(4,0)}] \draw (2,0) circle (2); \node at (+.6,0) {{\tiny $0$} \raisebox{.0cm}{ $ \mathpalette\mathrlapinternal{ \!\!\!\!\!\!\! \mbox{ \bf \tiny \color{darkblue} \begin{tabular}{c} regular \\ value \end{tabular} }} $ } }; \node (zero) at (0,0) {$-$}; \node (infinity) at (4,0) {\colorbox{white}{$\infty$}}; \fill[black] (2,0) ++(40+180:2) node (minusepsilon) {\begin{turn}{-45} $)$ \end{turn}}; \fill[black] (2,0) ++(180-40:2) node (epsilon) {\begin{turn}{45} $)$ \end{turn}}; \fill[black] (2.3,0.25) ++(40+180:2) node { \tiny $-\epsilon$ }; \fill[black] (2.3,-0.25) ++(-40-180:2) node { \tiny $+\epsilon$ }; \end{scope} \draw[|->, thin, brown] (-5.1-.25,.95-.25) to[bend right=6.7] (epsilon); \draw[|->, thin, brown] (-5.1+.25,.05+.25) to[bend right=6.7] (minusepsilon); \draw[|->, thin, brown] (-5.1,.5) to[bend right=6.7] node { \tiny \color{darkblue} \colorbox{white}{\bf codimension $n$ submanifold } $\mathpalette\mathrlapinternal{ \;\;\;\;\;\;\;\; \raisebox{-47pt}{ \begin{turn}{90} \colorbox{white}{ \begin{tabular}{c} \tiny \bf tubular neighborhood \\ \tiny \bf $\leftrightsquigarrow$ normal framing \end{tabular} } \end{turn} } }$ } (zero); \draw[|->, thin, olive] (-5.1-.5,1.4-.45) to[bend left=26] (infinity); \draw[|->, thin, olive] (-4.9,3.2) to[bend left=26] (infinity); \draw[|->, thin, olive] (-5.1+.5,-.4+.45) to[bend right=33] node[below] {\colorbox{white}{\tiny \color{darkblue} \bf \begin{tabular}{c} constant on $\infty$ \\ away from tubular neighborhood\\\end{tabular}}} (infinity); \draw[|->, thin, olive] (-4.7,-2.7) to[bend right=30] (infinity); \end{tikzpicture} \end{center} \vspace{-9mm} \noindent {\bf \footnotesize Figure D -- The Pontrjagin-Thom construction} {\footnotesize which establishes the unstable Pontrjagin-Thom theorem \eqref{UnstablePTTheorem}. The cocycle $c$ in Cohomotopy \cref{PlainCohomotopySet} is the continuous function which sends each point to its directed asymptotic distance from the given submanifold.} \vspace{3mm} \noindent {\bf One-point compactifications by adjoining the point at infinity.} Here and in all of the following, we are making crucial use of the fact that the $n$-sphere is the one-point compactification $(-)^{\mathrm{cpt}}$ of the Cartesian space $\mathbb{R}^n$, \begin{equation} \label{SphereIsCompactificationOfCartesianSpace} S^n \;\simeq_{{}_{\mathrm{homeo}}}\; \big( \mathbb{R}^n\big)^{\mathrm{cpt}} \;\coloneqq\; \big( \{ x \in \mathbb{R}^n \;\mbox{or}\; x = \infty \} , \tau_{\mathrm{cpt}} \big) \phantom{AAAA} \mbox{for all $n \in \mathbb{N}$}, \end{equation} as indicated on the right of \hyperlink{FigureD}{\it Figure D}. Here the one-point compactification $X^{\mathrm{cpt}}$ of a topological space $X$ is defined (e.g. \cite[p. 150]{Kelly55}) by adjoining one point to the underlying set of $X$ -- denoted ``$\infty$'' as it becomes literally the \emph{point at infinity} -- and by declaring on the resulting set a topology $\tau_{{\mathrm{cpt}}}$ whose open subsets are those of $X$, not containing $\infty$, and those containing $\infty$ but whose complement in $X$ is compact. 
Notice that this construction also applies to topological spaces that already are compact, in which case the point at infinity appears disconnected \begin{equation} \label{BasepointFreelyAdjoined} X \;\mbox{already compact} \;\;\Rightarrow\;\; X^{\mathrm{cpt}} \;=\; X_+ \;\coloneqq\; X \sqcup \{\infty\} \,. \end{equation} This means that \eqref{SphereIsCompactificationOfCartesianSpace} indeed holds also in the ``unstable range'' of $n = 0$: \begin{equation} \label{OSphereAsCompactificationOfPoint} \big( \mathbb{R}^0 \big)^{\mathrm{cpt}} \;=\; \big( \{0\} \big)^{\mathrm{cpt}} \;=\; \{0\} \sqcup \{\infty\} \;=\; S^0 \,. \end{equation} \noindent {\bf Cohomotopy charge vanishing at infinity.} In view of the Pontrjagin-Thom theorem \eqref{UnstablePTTheorem}, it makes sense to say that a cocycle in Cohomotopy \emph{vanishes} wherever it takes as value the point at infinity $\infty \in \big( \mathbb{R}^n\big)^\mathrm{cpt} \simeq S^n$ in the coefficient sphere, identified under \eqref{SphereIsCompactificationOfCartesianSpace}. This means to regard the coefficient sphere as a pointed topological space, with basepoint $\infty \in S^n$. Given then a non-compact (spacetime) manifold $X$ (such as $X = \mathbb{R}^n$), a Cohomotopy cocycle $X \longrightarrow S^n$ \emph{vanishes at infinity} if it extends to the one-point compactification $X^{\mathrm{cpt}}$ \eqref{SphereIsCompactificationOfCartesianSpace} such as to send the actual point at infinity $\infty \in X^{\mathrm{cpt}}$ to the point at infinity in the coefficient sphere. \begin{equation} \label{VanishingAtInfinity} \hspace{-2cm} \mbox{ \begin{minipage}[l]{9cm} \footnotesize A Cohomotopy cocycle on a non-compact space $X$ which {\it vanishes at infinity} is a Cohomotopy cocycle on the one-point compactification $X^{\mathrm{cpt}}$ that sends the point at infinity in the domain to that in the coefficient $n$-sphere. 
\end{minipage} } \phantom{AAA} \raisebox{20pt}{ \xymatrix@R=1.5em{ X^{\mathrm{cpt}} \ar[rr]^-{c} && \big( \mathbb{R}^n\big) \mathpalette\mathrlapinternal{ \; \simeq S^n } \\ \{\infty\} \ar@{^{(}->}[u] \ar[rr]_-{ c_{\vert_{ \{\infty\}}} } && \{\infty\} \ar@{^{(}->}[u] } } \end{equation} \begin{example}[{\hyperlink{FigureE}{\it Figure E}}] For $X = \mathbb{R}^n$, we have that Cohomotopy $n$-cocycles on $X$ vanishing at infinity are equivalently maps from an $n$-sphere to itself: \end{example} \vspace{-8mm} \begin{center} {\hypertarget{FigureE}{}} \begin{tikzpicture} \begin{scope}[shift={(0,-1.3)}] \node (X) at (-4.5,6) {\small $(\mathbb{R}^n)^{\mathrm{cpt}}$}; \node (sphere) at (6,6) {\small $S^n = (\mathbb{R}^n)^{\mathrm{cpt}}$}; \draw[->] (X) to node[above] {\footnotesize $c = 1 - 3 = -2$} (sphere); \node at (-4.5, 5.3) {\tiny \color{darkblue} \bf \begin{tabular}{c} Euclidean $n$-space \\ compactified by \\ a point at infinity \end{tabular} }; \node at (-4.5,4.6) {$\overbrace{\phantom{----------------}}$}; \node at (6,5.4) {\tiny \color{darkblue} \bf \begin{tabular}{c} $n$-sphere \\ Cohomotopy coefficient \end{tabular} }; \node at (6,4.6) {$\overbrace{\phantom{--------------}}$}; \node at (.55,5.4) { \tiny \color{darkblue} \bf \begin{tabular}{c} Cohomotopy cocycle \\ counting net number \\ of charged submanifolds \end{tabular} }; \end{scope} \begin{scope}[shift={(-4.5,1.3)}] \draw (0,0) circle (2); \node (infinity1) at (2,0) {\colorbox{white}{$\infty$}}; \node (submanifold1) at (180-20:2) {$\bullet$}; \node (submanifold2) at (180+110:2) {}; \draw[fill=white] (180+110:2) circle (.07); \node (submanifold3) at (180+130:2) {$\bullet$}; \node (submanifold4) at (180+140:2) {$\bullet$}; \end{scope} \draw[|->, thin, olive] (infinity1) to[bend right=40] node {\colorbox{white}{\tiny \color{darkblue} \bf cocycle vanishes at infinity}} (7.7,-.2); \node at (-.2,-1.5) { \colorbox{white}{$\phantom{{A A A}\atop {A A} }$} }; \node at (5.1,-1.8) { \colorbox{white}{$\phantom{ A }$} }; \begin{scope}[shift={(4,1.3)}] \draw (2,0) circle (2); \node at (+.5,0) {{\footnotesize $0$} \raisebox{.1cm}{ $ \mathpalette\mathrlapinternal{ \!\!\!\!\!\!\! 
\mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} regular \\ value \end{tabular} }} $ } }; \node (zero) at (0,0) {$-$}; \node (infinity) at (4,0) {\colorbox{white}{$\infty$}}; \fill[black] (2,0) ++(40+180:2) node (minusepsilon) {\begin{turn}{-45} $)$ \end{turn}}; \fill[black] (2,0) ++(180-40:2) node (epsilon) {\begin{turn}{45} $)$ \end{turn}}; \fill[black] (2.3,0.25) ++(40+180:2) node {\footnotesize $-\epsilon$ }; \fill[black] (2.3,-0.25) ++(-40-180:2) node {\footnotesize $+\epsilon$ }; \end{scope} \draw[|->, thin, olive] (submanifold2) to[bend right=16] node[very near start, below] { \tiny \color{darkblue} \bf \begin{tabular}{c} here with opposite \\ normal framing \\ (see \hyperlink{FigureF}{\it Figure F}) \end{tabular} } (zero); \draw[|->, thin, olive] (submanifold3) to[bend right=16] (zero); \draw[|->, thin, olive] (submanifold4) to[bend right=16] node { \hspace{-4.4cm} \raisebox{-.9cm}{ \colorbox{white}{ \hspace{-.3cm} \tiny \color{darkblue} \bf more submanifolds \hspace{-.3cm} } } } (zero); \draw[white, line width=8pt] (2,.7) to (2.8,.7); \draw[|->, thin, brown] (submanifold1)+(.13,.3) to[bend left=11] (epsilon); \draw[|->, thin, brown] (submanifold1)+(-.1,-.4) to[bend left=11] (minusepsilon); \draw[|->, thin, brown] (submanifold1) to[bend left=11] node { \raisebox{-1cm}{ \hspace{1.4cm} \tiny \color{darkblue} \colorbox{white}{ \bf codimension $n$ submanifold } $\mathpalette\mathrlapinternal{ \;\;\;\;\;\;\;\;\; \raisebox{-40pt}{ \begin{turn}{90} \colorbox{white}{ \hspace{-.5cm} \begin{tabular}{c} \bf \tiny tubular neighborhood \\ \bf \tiny $\leftrightsquigarrow$ normal framing \end{tabular} } \end{turn} } }$ } } (zero); \end{tikzpicture} \end{center} \vspace{-1cm} \noindent {\bf \footnotesize Figure E -- Cohomotopy in degree $n$ of Euclidean $n$-space vanishing at infinity } {\footnotesize is given by Cohomotopy cocycles \eqref{PlainCohomotopySet} on the one-point compactification $(\mathbb{R}^n) \simeq S^n$ \eqref{SphereIsCompactificationOfCartesianSpace} that send $\infty$ to $\infty$ \eqref{VanishingAtInfinity}. } \medskip Of course, this is just the cohomotopical version of \emph{instantons} in ordinary gauge theory: \medskip \noindent {\bf Instantons and solitons.} If $G$ is a compact Lie group with classifying space $B G$ equipped with the canonical point $\ast \simeq B \{e\} \longrightarrow B G$, then a \emph{$G$-instanton sector} on Euclidean space $X = \mathbb{R}^n$ is the homotopy class of a continuous function from the one-point compactification of $X$ to $B G$, which takes the base points to each other \footnote{ An actual instanton in this instanton sector is a $G$-principal connection on $X^{\mathrm{cpt}}$ whose underlying $G$-principal bundle has this classifying map. Ultimately we are interested in such enhancement to \emph{differential cohomology}, but this is beyond the scope of the present article.} \vspace{-2mm} \begin{equation} \label{Instanton} \mbox{ \begin{minipage}[l]{9cm} \footnotesize A $G$ \emph{instanton sector} is a cocycle in degree-1 $G$-cohomology which \emph{vanishes at infinity} in that it is a cocycle on the one-point compactification $X^{\mathrm{cpt}}$ \eqref{SphereIsCompactificationOfCartesianSpace} which sends the point at infinity in the domain to the base point in the classifying space $B G$. 
\end{minipage} } \phantom{AAA} \raisebox{20pt}{ \xymatrix@R=1.5em{ \big( \mathbb{R}^n\big)^{\mathrm{cpt}} \ar[rr]^-{c} && B G \\ \{\infty\} \ar@{^{(}->}[u] \ar[rr]_-{ c_{\vert_{\{\infty\}}} } && B \{e\} \ar@{^{(}->}[u] } } \end{equation} \vspace{-3mm} \noindent {\bf Cohomotopy and $\mathrm{SU}(N)$-instanton sectors.} Specifically for $n = 4$ and $G = \mathrm{SU}(N)$ any map $S^4 \overset{\epsilon}{\longrightarrow} B \mathrm{SU}(N)$ representing a generator $1 \in \mathbb{Z} \simeq \pi_4\big( B \mathrm{SU}(N) \big)$ of the 4th homotopy group of the classifying space exhibits a bijection between the 4-Cohomotopy of $\mathbb{R}^4$ vanishing at infinity \eqref{VanishingAtInfinity}, and the set of $\mathrm{SU}(N)$-instanton sectors
$$ \pi^4\big( ( \mathbb{R}^4)^{\mathrm{cpt}} \big) \;=\; \big\{ \xymatrix{ ( \mathbb{R}^4)^{\mathrm{cpt}} \ar[r] & S^4 } \big\}_{\!\!\big/\sim} \;\; \overset{\epsilon_\ast}{\simeq} \;\; \big\{ \xymatrix{ ( \mathbb{R}^4)^{\mathrm{cpt}} \ar[r] & B \mathrm{SU}(N) } \big\}_{\!\!\big/\sim} \;\simeq\; \left\{ \!\!\!\!\! \mbox{ \footnotesize \begin{tabular}{c} $\mathrm{SU}(N)$-instanton sectors \\ on $\mathbb{R}^4$ \end{tabular} } \!\!\!\!\! \right\}. $$
\vspace{-1mm} \noindent Under this identification of $\mathrm{SU}(N)$-instanton sectors with Cohomotopy vanishing at infinity, the Pontrjagin-Thom construction \eqref{UnstablePTTheorem} produces precisely the distribution of \emph{instanton center points}, again illustrated by the left hand side in \hyperlink{FigureE}{\it Figure E}. To see all this in more detail, we next turn to further discussion of the charge structure encoded by Cohomotopy.
\subsection{Hopf degree theorem and brane-antibrane annihilation} \label{NormalFramingAndBraneAntibraneAnnihilation} {\bf The classical \emph{Hopf degree theorem}} describes the $n$-Cohomotopy \eqref{PlainCohomotopySet} of orientable closed $D$-manifolds $X$ \eqref{UnstablePTTheorem} in the special case where $n = D$.
It says that, in the ``stable range'' $n \geq 1$, the Cohomotopy set is in bijection with the set of integers, where the bijection is induced by sending the continuous function representing a Cohomotopy coycle to its mapping degree (see, e.g., \cite[7.5]{Kobin16}): \vspace{-3mm} \begin{equation} \label{HopfDegreeTheorem} \hspace{-1cm} \mbox{\footnotesize \begin{tabular}{c} \bf Hopf degree \\ \bf theorem \\ in stable range $n \geq 1$ \end{tabular} } \phantom{AA} \raisebox{30pt}{ \xymatrix@R=-2pt@C=3em{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} $n$-Cohomotopy \\ of $n$-manifold \end{tabular} } \\ \pi^{n} \big( X \big) \ar[rr]^ { S^n \overset{\epsilon_\ast}{\longrightarrow} K(\mathbb{Z}, n ) }_-{\simeq} && H^{n} \big( X, \mathbb{Z} \big) \ar@{}[r]|-{\simeq} & \mathbb{Z} \\ \big[X^n \overset{c}{\longrightarrow}S^n\big] \ar@{|->}[rr] && \big[X^n \overset{c}{\longrightarrow}S^n \overset{\epsilon}{\longrightarrow} K(\mathbb{Z},n) \big] \ar@{}[r]|-{ \eqqcolon } & \mathrm{deg}(c) } } \end{equation} Under the Pontrjagin-Thom theorem \eqref{UnstablePTTheorem} the Hopf degree theorem \eqref{HopfDegreeTheorem} translates into the following geometric situation for signed (charged) points in $X^n$ (see \cite[IX.4]{Kosinski93}): A codimension-$n$ submanifold in an $n$-manifold $X^n$ is a set of points in $X^n$, and a choice of normal framing \eqref{TrivializationOfNormalVectorBundle} is, up to normally framed cobordism, the same as choice of sign (charge) in $\{\pm 1\}$ for each point, as shown in \hyperlink{FigureF}{\it Figure F}: \vspace{-4mm} \begin{center} \hypertarget{FigureF}{} \begin{tikzpicture}[scale=0.75] \draw (1.5,6.7) node {$\overbrace{\phantom{------------------------}}$}; \draw (11,6.7) node {$\overbrace{\phantom{---------------}}$}; \draw (1.5,7.3) node {\tiny \color{darkblue} \bf manifold}; \draw (11,7.4) node { \tiny \color{darkblue} \bf \begin{tabular}{c} sphere \\ Cohomotopy coefficient \end{tabular} }; \draw (5.5,7.6) node { \tiny \color{darkblue} \bf Cohomotopy cocycle }; % \begin{scope}[shift={(0, 0.6)}] \begin{scope} \clip (-2.9,-2.9) rectangle (5.9,5.9); \draw[step=3, dotted] (-3,-3) grid (6,6); \draw[dashed] (0+2.3+.3,4.9) circle (1.8); \draw (0+2.3+.2-.8,4.9-1.2) node { { \tiny \color{darkblue} \bf \begin{tabular}{c} tubular \\ neighborhood \end{tabular} } }; \draw[dashed] (0+2.3+.3,-2.4) circle (1.8); \draw (0+2.3+.2-.6,-2.4+1.2) node { { \tiny \color{darkblue} \bf \begin{tabular}{c} tubular \\ neighborhood \end{tabular} } }; \end{scope} % \node at (11,2) {\colorbox{white}{$\phantom{a}$}}; \draw[dashed] (11-.1,2) circle (2); \node (zero) at (11,2) {\tiny $0$}; \node (infinity) at (11-.1,2+2.1) {\tiny $\infty$}; \node (leftinfinity) at (11-.1-2.2,2) {\tiny $\infty$}; \node (bottominfinity) at (11-.1,2-2.1) {\tiny $\infty$}; \node (rightinfinity) at (11-.1+2.2,2) {\tiny $\infty$}; % % \node (torus) at (1.5,7.35) {\raisebox{0pt}{\small $ X^n $}}; \node (sphere) at (11,7.35) {\raisebox{0pt}{\small $ S^{n} = D( \mathbb{R}^{n} )/S(\mathbb{R}^{n}) $}}; \draw[->, thin] (torus) to node[above]{\footnotesize $c$} (sphere); \begin{scope}[shift={(.13,0)}] \draw[fill=black] (0+2.3,4.9) circle (.07); \draw[|->, olive] (0+2.3+.2+.55,4.9-.05) to (11-.2+.55,2+.05); \draw[|->, olive] (0+2.3+.2+1.1,4.9-.05) to (11-.2+1.1,2+.05); \draw[|->, olive] (0+2.3+.2+1.65,4.9-.05) to (11-.2+1.65,2+.05); \draw[|->, olive] (0+2.3+.2-.55,4.9-.05) to (11-.2-.55,2+.05); \draw[|->, olive] (0+2.3+.2-1.1,4.9-.05) to (11-.2-1.1,2+.05); \draw[|->, olive] (0+2.3+.2-1.65,4.9-.05) to (11-.2-1.65,2+.05); 
\draw[|->, olive] (0+2.3+.2,4.9-.05) to node {\colorbox{white}{\tiny \color{darkblue} \bf positively charged submanifold} } (11-.2,2+.05); \draw[fill=white] (0+2.3,-2.4) circle (.07); \draw[|->, olive] (0+2.3+.2+.55,-2.4+.05) to (11-.2-.55,2-.05); \draw[|->, olive] (0+2.3+.2+1.1,-2.4+.05) to (11-.2-1.1,2-.05); \draw[|->, olive] (0+2.3+.2+1.65,-2.4+.05) to (11-.2-1.65,2-.05); \draw[|->, olive] (0+2.3+.2-.55,-2.4+.05) to (11-.2+.55,2-.05); \draw[|->, olive] (0+2.3+.2-1.1,-2.4+.05) to (11-.2+1.1,2-.05); \draw[|->, olive] (0+2.3+.2-1.65,-2.4+.05) to (11-.2+1.65,2-.05); \draw[|->, olive] (0+2.3+.2,-2.4+.05) to node {\colorbox{white}{\tiny \color{darkblue} \bf negatively charged submanifold} } (11-.2,2-.05); \end{scope} \begin{scope}[shift={(-.04,.2)}] \draw[fill=black] (0.4,1.4) circle (.07); \draw[fill=black] (0.6,1.05) circle (.07); \draw[fill=white] (0.4,1.2) circle (.07); \draw[fill=white] (0.7,1.3) circle (.07); \draw[dashed] (0.57,1.2) circle (.6); \draw (1.55,1.2) node {$\simeq$}; \draw (1.55,1.2-.9) node { \tiny \color{darkblue} \bf \begin{tabular}{c} opposite charges \\ cancel each other \end{tabular} }; \draw[dashed] (2.57,1.2) circle (.6); \end{scope} \draw[|->, olive] (4,1.4+.3) to[bend right=6] (leftinfinity); \draw[|->, olive] (4,1.4-.3) to[bend right=6] (leftinfinity); \draw[|->, olive] (4,1.4) to[bend right=6] node { \colorbox{white}{ \hspace{-.3cm} \tiny \color{darkblue} \bf \begin{tabular}{c} no charge here \end{tabular} \hspace{-.3cm} } } (leftinfinity); \end{scope} % \end{tikzpicture} \end{center} \vspace{-.4cm} \noindent {\bf \footnotesize Figure F -- Charge in Cohomotopy carried by submanifolds, under the PT-isomorphism \eqref{UnstablePTTheorem}} {\footnotesize is encoded in their normal framing \eqref{TrivializationOfNormalVectorBundle}. 
In full codimension the normal framing is a normal orientation and hence a choice in $\{\pm 1\}$, which we indicate graphically by $ \renewcommand{\arraystretch}{.4} \begin{array}{ccc} \bullet &\leftrightarrow& -1 \\ \circ &\leftrightarrow& +1 \end{array} $ } \vspace{2mm} \noindent Under this geometric translation, we have the correspondence \vspace{-2mm} $$ \xymatrix{ \mbox{\footnotesize \begin{tabular}{c} Hopf degree \\ of Cohomotopy cocycle on $X$ \end{tabular} } \ar@{<->}[rr]^-{\mbox{\tiny PT}} && \mbox{\footnotesize \begin{tabular}{c} Net number of $\pm$-charges \\ carried by points in $X$ \end{tabular} } } $$ \vspace{-2mm} \noindent The mechanism which implements this on the geometric right hand side is that points of opposite sign/normal framing are cobordant to the empty collection of points, hence mutually annihilate each other via coboundaries in Cohomotopy, as shown in \hyperlink{FigureG}{\it Figure G}: \vspace{-3mm} \begin{center} \hypertarget{FigureG}{} \begin{tikzpicture}[scale=.8] \begin{scope}[shift={(0,-.7)}] \node (X) at (-4.5,6) {\small $[0,1] \times X $}; \node (sphere) at (6,6) {\small $S^n = (\mathbb{R}^n)^{\mathrm{cpt}}$}; \draw[->] (X) to node[above] { \tiny $0 \simeq (-1) + (+1)$ } (sphere); \node at (-4.5,5.4) { \tiny \color{darkblue} \bf \begin{tabular}{c} product space \\ of interval with manifold \end{tabular} }; \node at (-4.5, 4.7) {$\overbrace{\phantom{----------------------}}$}; \node at (6,5.4) {\tiny \color{darkblue} \bf \begin{tabular}{c} $n$-sphere \\ Cohomotopy coefficient \end{tabular}}; \node at (6,4.7) {$\overbrace{\phantom{--------------}}$}; \node at (.5,5.7) {\tiny \color{darkblue} \bf Cohomotopy coboundary}; \end{scope} \begin{scope}[shift={(-1.5,0.0)}] \begin{scope}[shift={(-6,-1.5)}] \clip (0,-2.9) rectangle (6,5.9); \draw[step=3, dotted] (-3,-2.5) grid (6,5.5); \end{scope} \begin{scope}[shift={(-6,-1.5)}] \draw (6,-2.8) node {\tiny $\{1\} \times X $ }; \draw (0,-2.8) node {\tiny $\{0\} \times X $ }; \end{scope} \begin{scope}[rotate=-90] \draw[very thick] (-3,0) .. controls (-3,-2.5-.85) and (3,-2.5-.85) .. (+3,0); \draw[dashed, thick] (-3+.4,0) .. controls (-3+.4,-2.5+.5-.85) and (3-.4,-2.5+.5-.85) .. (+3-.4,0); \draw[dashed, thick] (-3-.4,0) .. controls (-3-.4,-2.5-.5-.85) and (3+.4,-2.5-.5-.85) .. (+3+.4,0); \draw[dashed] (-3+.4+.4,0) .. controls (-3+.4+.4,-2.5+.5+.5-.85) and (3-.4-.4,-2.5+.5+.5-.85) .. (+3-.4-.4,0); \draw[dashed] (-3-.4-.4,0) .. controls (-3-.4-.4,-2.5-.5-.5-.85) and (3+.4+.4,-2.5-.5-.5-.85) .. (+3+.4+.4,0); \end{scope} \end{scope} \begin{scope}[shift={(4,0)}] \draw (2,0) circle (2); \node at (+.6,0) {{\tiny $0$} \raisebox{.0cm}{ $ \mathpalette\mathrlapinternal{ \!\!\!\!\!\!\! 
\mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} regular \\ value \end{tabular} }} $ } }; \node (zero) at (0,0) {$-$}; \node (infinity) at (4,0) {\colorbox{white}{$\infty$}}; \fill[black] (2,0) ++(40+180:2) node (minusepsilon) {\begin{turn}{-45} $)$ \end{turn}}; \fill[black] (2,0) ++(180-40:2) node (epsilon) {\begin{turn}{45} $)$ \end{turn}}; \fill[black] (2.3,0.25) ++(40+180:2) node { \tiny $-\epsilon$ }; \fill[black] (2.3,-0.25) ++(-40-180:2) node { \tiny $+\epsilon$ }; \end{scope} \draw[|->, olive] (-1.35,2.9) to (3.8,0+0.05); \draw[|->, olive] (-1.35,2.9+.4) to (3.9,0+.4+0.05); \draw[|->, olive] (-1.35,2.9-.4) to (3.9,0-.4+0.05); \draw[|->, olive] (-1.35,3+.8) to[bend right=-17] (7.7,0.15); \draw[|->, olive] (-1.35,-2.9) to (3.8-.0,0-0.05); \draw[|->, olive] (-1.35,-2.9-.4) to (3.9-.0,0+.4-0.05); \draw[|->, olive] (-1.35,-2.9+.4) to (3.9-.0,0-.4-0.05); \draw[|->, olive] (-1.35,-3-.8) to[bend left=-17] (7.7,-0.15); \draw[fill=black] (-1.5,3) circle (.07); \draw[fill=white] (-1.5,-3) circle (.07); \draw (-1.5,3)+(1.4,-.2) node { \colorbox{white}{ \hspace{-.4cm} \tiny \color{darkblue} \bf \begin{tabular}{c} positively charged \\ submanifold \end{tabular} \hspace{-.4cm} } }; \draw (-1.5,-3)+(1.4,+.2) node { \colorbox{white}{ \hspace{-.4cm} \tiny \color{darkblue} \bf \begin{tabular}{c} negatively charged \\ submanifold \end{tabular} \hspace{-.4cm} } }; \draw (-8,0) node { \colorbox{white}{ \hspace{-.4cm} \tiny \color{darkblue} \bf \begin{tabular}{c} no \\ submanifold \end{tabular} \hspace{-.4cm} } }; \draw (-3.8,0) node { \colorbox{white} { \hspace{-.3cm} \tiny \color{darkblue} \bf cobordism \hspace{-.3cm} } }; \end{tikzpicture} \end{center} \vspace{-5mm} \noindent {\footnotesize \bf Figure G -- Cobordisms between submanifolds of opposite normal framing} {\footnotesize as in \hyperlink{FigureF}{\it Figure F} exhibit their pair creation/annihilation. This is the geometric mechanism which underlies the Hopf degree theorem \eqref{HopfDegreeTheorem} when translating via the Pontrjagin-Thom theorem \eqref{UnstablePTTheorem} between Cohomotopy charge and the submanifolds sourcing it, as in \hyperlink{FigureD}{\it Figure D}. } \medskip \noindent {\bf Hopf degree in unstable range.} The classical Hopf degree theorem \eqref{HopfDegreeTheorem} is stated only in the stable range $n \geq 1$, but it is immediate to extend it to the unstable range. While this is a simple statement in itself, it is necessary to conceptually complete the discussion of the equivariant Hopf degree theorem in \cref{LocalTadpoleCancellation} below, where the ordinary Hopf degree appears jointly in stable and unstable range, with the distinction being responsible for the difference in nature between O-plane charge (unstable range) and D-brane charge (stable range): For $X = X^0$ a compact 0-manifold, hence a finite set, and $X^{\mathrm{cpt}} = X_+ = X \sqcup \{\infty\}$ the same set with a ``point at infinity'' adjoined \eqref{BasepointFreelyAdjoined}, its unstable Cohomotopy classes \eqref{PlainCohomotopySet} in degree 0, being functions to the 0-sphere hence to the 2-element set $S^0 = \{0,\infty\}$ that take $\infty \mapsto \infty$ $$ \pi^0\big( X^{\mathrm{cpt}} \big) \;=\; \big\{ X \xrightarrow{ \;\;\; c \;\;\; } S^0 \big\}, $$ are in bijection to the subsets $S \subset X$ of $X$, by the assignment that sends $c$ to the pre-image $c^{-1}\big( \{0\}\big)$ of $0 \in S^0$ under $c$. 
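For instance, for $X = \{x_1, x_2\}$ a two-element set, there are exactly four such maps $c$, matched bijectively with the subsets of $X$ by taking pre-images of $0$:
$$
  c \;\longmapsto\; c^{-1}\big( \{0\} \big)
  \;\in\;
  \Big\{ \emptyset,\; \{x_1\},\; \{x_2\},\; \{x_1, x_2\} \Big\}
  \,,
$$
with the map constant on $\infty$ corresponding to the empty subset.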
We may think of these subsets as elements of the power set $\{0,1\}^X$ and as such call them the sets $\mathrm{deg}(c)$ of Hopf degrees in $\{0,1\}$ for $n = 0$: \vspace{-3mm} \begin{equation} \label{UnstableRangeHopfDegreeTheorem} \hspace{-1cm} \mbox{\footnotesize \begin{tabular}{c} \bf Hopf degree \\ \bf theorem \\ in unstable range $n = 0 $ \end{tabular} } \phantom{AA} \raisebox{30pt}{ \xymatrix@R=-2pt@C=3em{ \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} $0$-Cohomotopy \\ of $0$-manifold \end{tabular} } && & \mathpalette\mathclapinternal{ \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} sets of \\ unstable Hopf degrees \end{tabular} } } \\ \pi^{0} \big( X^{\mathrm{cpt}} \big) \ar[rr]^ { S^0 = \{0,\infty\} }_-{\simeq} && \mathrm{Subsets}(X) \ar@{}[r]|-{\simeq} & \{0,1\}^{X} \\ \big[X^0 \overset{c}{\longrightarrow}S^0\big] \ar@{|->}[rr] && \big[ c^{-1}\big( \{0\}\big) \subset X \big] \ar@{}[r]|-{ \eqqcolon } & \mathrm{deg}(c) } } \end{equation} \begin{example} For $X = \{0\}$ the single point so that, with \eqref{OSphereAsCompactificationOfPoint}, $X^{\mathrm{cpt}}$ is the 0-sphere, we have $ \pi^0\big( \{0\}^{\mathrm{cpt}} \big) \simeq \{0,1\} $, as illustrated in the following figure: \end{example} \vspace{-.5cm} \begin{center} \hyperlink{FigureH}{} \begin{tikzpicture}[scale=1.2] \begin{scope}[shift={(6,0)}] \begin{scope} \draw (0,2.3) node {$ \big( \mathbb{R}^0\big)^{\mathrm{cpt}} $}; \draw (0,1.7) node {$\overbrace{\phantom{--}}$}; \draw (-.01,1.3) to (.01,1.3); \draw (-.2,1.3) node {\tiny $\infty$}; \draw (0,.5) circle (.07); \draw (-.2,.5) node {\tiny $0$}; \end{scope} \draw[|->, olive] (0.1,1.3) to node {\colorbox{white}{\tiny \color{darkblue} \bf vanishing at infinity}} (3-.3,1.3); \draw[|->, olive] (0.1,.5) to node {\colorbox{white}{ \tiny \color{darkblue} \bf charge }} (3-.3,.5); \draw[->] (.7,2.25) to node[above]{\small $c = 1$} (3-.3,2.25); \begin{scope}[shift={(3,0)}] \draw (0,2.3) node {$ S^0 $}; \draw (0,1.7) node {$\overbrace{\phantom{--}}$}; \draw (-.01,1.3) to (.01,1.3); \draw (-.14,1.3) node {\tiny $\infty$}; \draw (-.01,.5) to (.01,.5); \draw (-.14,.5) node {\tiny $0$}; \end{scope} \end{scope} \begin{scope} \begin{scope} \draw (0,2.3) node {$ \big( \mathbb{R}^0\big)^{\mathrm{cpt}} $}; \draw (0,1.7) node {$\overbrace{\phantom{--}}$}; \draw (-.01,1.3) to (.01,1.3); \draw (-.2,1.3) node {\tiny $\infty$}; \draw (-.01,.5) to (.01,.5); \draw (-.2,.5) node {\tiny $0$}; \end{scope} \draw[|->, olive] (0.1,1.3) to node {\colorbox{white}{\tiny \color{darkblue} \bf vanishing at infinity}} (3-.3,1.3); \draw[|->, olive] (0.1,.5) to node {\colorbox{white}{\tiny \color{darkblue} \bf no charge}} (3-.3,1.3); \draw[->] (.7,2.25) to node[above]{\small $c = 0$} (3-.3,2.25); \begin{scope}[shift={(3,0)}] \draw (0,2.3) node {$ S^0 $}; \draw (0,1.7) node {$\overbrace{\phantom{--}}$}; \draw (-.01,1.3) to (.01,1.3); \draw (-.14,1.3) node {\tiny $\infty$}; \draw (-.01,.5) to (.01,.5); \draw (-.14,.5) node {\tiny $0$}; \end{scope} \end{scope} \end{tikzpicture} \end{center} \vspace{-.5cm} \noindent {\footnotesize \bf Figure H -- Hopf degree in the unstable range} {\footnotesize takes values in the set $\{0,1\}$ \eqref{UnstableRangeHopfDegreeTheorem}, corresponding to the binary choice of there being or not being a unit charge at the single point.} \medskip The point of unstable Hopf degree in $\{0,1\}$ is that it exhibits {\it homogeneous behavior under suspension} $\Sigma^1$ \eqref{EquSuspension} across the unstable and stable range of Hopf degrees, with the unstable Hopf degrees in 
$\{0,1\}$ injecting into the full set of integers in the stable range: \begin{equation} \label{HopfDegreesUnderSuspension} \raisebox{20pt}{ \xymatrix@R=13pt{ \overset{ \mathpalette\mathclapinternal{ \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} unstable \\ Hopf degrees \end{tabular} } } } { \{0,1\} } \ar@{}[d]|-{ \mathpalette\mathllapinternal{ \mbox{ \tiny \color{darkblue} \bf \eqref{UnstableRangeHopfDegreeTheorem} } \; } \begin{rotate}{270} $\!\!\!\!\!\simeq$ \end{rotate} } \ar@{^{(}->}[rr]^-{ \mbox{ \tiny \color{darkblue} \bf injection } } && \overset{ \mathpalette\mathclapinternal{ \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} stable \\ Hopf degrees \end{tabular} } } }{ \mathbb{Z} } \ar@{}[d]|-{ \mathpalette\mathllapinternal{ \mbox{ \tiny \color{darkblue} \bf \eqref{HopfDegreeTheorem} } \; } \begin{rotate}{270} $\!\!\!\!\!\simeq$ \end{rotate} } \ar[rr]^-{=} && \overset{ \mathpalette\mathclapinternal{ \mbox{ \bf \tiny \color{darkblue} \begin{tabular}{c} stable \\ Hopf degrees \end{tabular} } } }{ \mathbb{Z} } \ar@{}[d]|-{ \mathpalette\mathllapinternal{ \mbox{ \tiny \color{darkblue} \bf \eqref{HopfDegreeTheorem} } \; } \begin{rotate}{270} $\!\!\!\!\!\simeq$ \end{rotate} } \ar[rr]^-{=} && \overset{ \mathpalette\mathclapinternal{ \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} stable \\ Hopf degrees \end{tabular} } } }{ \mathbb{Z} } \ar@{}[d]|-{ \mathpalette\mathllapinternal{ \mbox{ \tiny \color{darkblue} \bf \eqref{HopfDegreeTheorem} } \; } \begin{rotate}{270} $\!\!\!\!\!\simeq$ \end{rotate} } \ar[r] & \cdots \ar@{}[d]|-{\vdots} \\ \pi^0\big( S^0 \big) \ar[rr]_-{ \Sigma^1 }^-{ \mbox{\tiny \color{darkblue} \bf suspension } } && \pi^{1}\big( S^1 \big) \ar[rr]_-{ \Sigma^1 } && \pi^{2}\big( S^2 \big) \ar[rr]_-{ \Sigma^1 } && \pi^{3}\big( S^3 \big) \ar[r] & \cdots } } \end{equation} As we next turn from plain to equivariant Cohomotopy in \cref{EquivariantCohomotopyAndTadpoleCancellation}, we find that unstable and stable Hopf degrees unify in the equivariant Hopf degree theorems, and that the {\it D-brane charge is what appears in the stable range}, while the {\it O-plane charge is what appears in the unstable range} (in particular, via the proof of Theorem \ref{CharacterizationOfStabilizationOfUnstableCohomotopy} below). \section{Equivariant Cohomotopy and tadpole cancellation} \label{EquivariantCohomotopyAndTadpoleCancellation} We now turn to the equivariant enhancement \eqref{EquivariantCohomotopySet} of Cohomotopy theory. We discuss in \cref{LocalTadpoleCancellation} and in \cref{GlobalTadpoleCancellation}, respectively, how this captures the form of the local/twisted (see Diagram \eqref{KernelOfTheGlobalElmendorfStageProjection} in \cref{GlobalTadpoleCancellation}) and of the global/untwisted tadpole cancellation conditions (see \cref{HeteroticMTheoryOnADEOrbifolds}) according to \hyperlink{Table1}{\it Table 1} and \hyperlink{Table2}{\it Table 2}, by appeal to the equivariant enhancement of the Hopf degree theorem applied to representation spheres, which we state as Theorem \ref{UnstableEquivariantHopfDegreeTheorem} and Theorem \ref{CharacterizationOfStabilizationOfUnstableCohomotopy}. \medskip \noindent {\bf Basic concepts of unstable equivariant homotopy theory.} To set up notation, we start with reviewing a minimum of underlying concepts from unstable equivariant homotopy theory (see \cite[1]{Blu17}\cite[3.1]{ADE} for more). 
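\medskip
\noindent As a lightweight computational companion to this review (a hypothetical Python sketch, for illustration only; the encodings and names are ad hoc), one may model a finite group by its multiplication table and a $G$-space by a function implementing the action, and then check the action axioms recalled in the next paragraph -- here for $G = \mathbb{Z}_2$ acting on $\mathbb{R}^2$ by point reflection at the origin, the Euclidean $G$-space of \hyperlink{FigureI}{\it Figure I} below:
{\footnotesize
\begin{verbatim}
# G = Z/2, presented by its multiplication table, acting on X = R^2 by
# point reflection at the origin (the sign representation in 2 dimensions).
E, S = "e", "sigma"
mult = {(E, E): E, (E, S): S, (S, E): S, (S, S): E}

def act(g, x):
    """The continuous G-action: sigma sends x to -x, while e acts trivially."""
    return x if g == E else (-x[0], -x[1])

sample = [(0.3, -1.2), (0.0, 0.0), (2.5, 2.5)]
for g1 in (E, S):                     # check  g1.(g2.x) = (g1 g2).x
    for g2 in (E, S):
        for x in sample:
            assert act(g1, act(g2, x)) == act(mult[(g1, g2)], x)
for x in sample:                      # check  e.x = x
    assert act(E, x) == x
\end{verbatim}
}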
\noindent {\it Topological $G$-spaces.} For $G$ a finite group, a \emph{topological $G$-space} $\xymatrix{ X \ar@(ul,ur)|-{\,G\,}} $ (or just \emph{$G$-space}, for short) is a topological space $X$ equipped with a continuous $G$-action, hence with a continuous function ${G \times X \xrightarrow{\cdot} X}$ such that for all $g_i \in G$ and $x \in X$ we have $g_1 \cdot (g_2 \cdot x) = (g_1 g_2) \cdot x$ and $e \cdot x = x$ (where $e \in G$ is the neutral element). \medskip Here we are concerned with the {\bf classes of examples of $G$-spaces} shown in \hyperlink{Table5}{\it Table 5}:\footnote{ For our purposes here, the covering $G$-space $X$ is all we need to speak about the corresponding orbifold $X \!\sslash\! G$. For a dedicated discussion of geometric orbifolds we refer to \cite[13]{Ratcliffe06}\cite{OrbifoldCohomology}. Note that \cite[13]{Ratcliffe06} says ``Euclidean orbifold'' for any flat orbifold. } {\small \begin{center} \hypertarget{Table5}{} \begin{tabular}{|c||c|c||c|c|} \hline {\bf $G$-representation} & {\bf $G$-space} & {\bf $G$-orbifold} & \multicolumn{2}{c|}{ {\bf Terminology} } \\ \hline \hline \multirow{2}{*}{ \begin{tabular}{c} $ \underset { \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} finite group \end{tabular} } } { G } $ \\ $\phantom{-}$ \\ $ \underset { \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} orthogonal linear \\ $G$-representation \end{tabular} } } { V \;\in\; \mathrm{RO}(G) } $ \end{tabular} } & $ \underset{ \mbox{ \bf \tiny \color{darkblue} \begin{tabular}{c} Euclidean \\ $G$-space \eqref{EuclideanGSpace} \end{tabular} } }{ \xymatrix{ \mathbb{R}^V \ar@(ul,ur)|{\,G\,} } } $ & $ \underset{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} Euclidean orbifold \end{tabular} } }{ \mathbb{R}^V \!\sslash\! G } $ & \begin{tabular}{l} {\bf singularity} \end{tabular} & \multirow{2}{*}{ \begin{tabular}{c} {\bf ADE-singularities} \\ \\ $ \underset{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} finite subgroup of $\mathrm{SU}(2)$ \\ \eqref{ADESubgroups} \end{tabular} } }{ G \subset \mathrm{SU}(2) } $ \\ $\phantom{-}$ \\ $ \underset{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} quaternionic representation \\ \eqref{TheQuaternionicRepresentation} \end{tabular} } }{ V = \mathbf{4}_{\mathbb{H}} } $ \end{tabular} } \\ \cline{2-4} & $ \underset{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} $G$-representation \\ sphere \eqref{RepSpheres} \end{tabular} } }{ \xymatrix{ S^V \ar@(ul,ur)|{\,G\,} } } $ & $ \underset{ \!\!\!\!\!\!\!\!\!\! \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} Euclidean orbifold \\ including point at infinity \eqref{VanishingAtInfinity} \end{tabular} } }{ S^V \!\sslash\! G = \big( \mathbb{R}^V \!\sslash\! G \big)^{\mathrm{cpt}} } $ & \begin{tabular}{l} {\bf vicinity of} \\ {\bf singularity} \end{tabular} & \\ \cline{2-4} $ \underset { \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} crystallographic group \eqref{CrystallographicGroups}) \end{tabular} } } { G \rtimes \mathbb{Z}^{\mathrm{dim}(V)} \subset \mathrm{Iso}\big( \mathbb{R}^V \big) } $ & $ \underset{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} $G$-representation \\ torus \eqref{RepresentationTorus} \end{tabular} } }{ \xymatrix{ \mathbb{T}^V \ar@(ul,ur)|{\,G\,} } } $ & $ \underset{ \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} toroidal orbifold \end{tabular} } }{ \mathbb{T}^V \!\sslash\! G = \big( \mathbb{R}^V \!\sslash\! 
G \big)/\mathbb{Z}^{\mathrm{dim}(V)} } $ & \begin{tabular}{l} {\bf flat, compact} \\ {\bf singular space} \end{tabular} & \\ \hline \end{tabular} \end{center} \vspace{-.2cm} \noindent {\bf \footnotesize Table 5 -- Flat $G$-orbifolds and the $G$-spaces covering them.} \ {\footnotesize Examples arising in application to M-theory are discussed in \cref{HeteroticMTheoryOnADEOrbifolds}.} } \medskip \noindent {\bf Orbifold terminology.} As common in string theory, we will be thinking of $G$-spaces $X$ as stand-ins for their homotopy quotients $X \!\sslash\! G$, which are the actual orbifolds. This is mathematically fully justified by the fact that the proper notion of generalized cohomology of such global quotient orbifolds $X \!\sslash\! G$ is equivalently the $G$-equivariant generalized cohomology of the space $X$. We relegate a comprehensive discussion of this technical point to \cite{OrbifoldCohomology}, but this is mathematical folklore: see \cite[\S 1]{PronkScull10}\cite[p. 1]{Schwede17}\cite[p. ix-x]{Schwede18}. Moreover, in the specific application to M-theory, below in \cref{M5MO5AnomalyCancellation}, the relevant orbifolds are always part of \emph{orbi-orientifolds}, in that a subgroup $\mathbb{Z}_2^{\mathrm{refl}}$ of the orbifold quotient group $G = G^{\mathrm{ADE}}$ combines with a Ho{\v r}ava-Witten-involution $\mathbb{Z}_2^{\mathrm{HW}}$ to an orientation-changing involution $\mathbb{Z}_2^{\mathrm{HW} + \mathrm{refl}}$ which fixes an ``$\mathrm{MO5}$-plane''. This is made precise in \cref{HeteroticMTheoryOnADEOrbifolds} below; see \eqref{OrbiOrientifoldGroupSequence} there. Since, with passage to the $\mathbb{Z}_2^{\mathrm{HW}}$-fixed locus (the ``$\mathrm{MO9}$-pane'') understood \eqref{SemiComplement}, the further localization to the $\mathrm{MO5}$-plane coincides with the orbifold singularity, we will often refer here to orbifold fixed points as orientifold fixed points, wherever this serves the preparation of the application in \cref{M5MO5AnomalyCancellation}. Accordingly, the orbifold singularities in the applications below in \cref{M5MO5AnomalyCancellation} are always inside an O-plane, so that the relevant flavor of equivariant K-theory considered below in Prop. \ref{TheoremLocalTadpoleCancellation} and in \hyperlink{FigureP}{\it Figure P}, \hyperlink{FigureM}{\it Figure M} is $\mathrm{KO}$. \medskip \noindent {\bf Linear $G$-representations.} The $G$-spaces of interest for the discussion of toroidal orbifolds all come from {\it orthogonal linear $G$-representations} $V$: finite-dimensional Euclidean vector spaces equipped with a linear action by $G$ factoring through the canonical action of the orthogonal group. We will denote concrete examples of such $V$ of dimension $n \in \mathbb{N}$ and characterized by some label ``$\mathrm{l}$'' in the form $V = \mathbf{n}_{\mathrm{l}}$, and also refer to them as an \emph{RO-degree} \eqref{RODegree}. 
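\medskip
\noindent As a concrete matrix illustration (a Python sketch for illustration only; the helper names are ad hoc), the quaternion group $Q_8 = \{\pm 1, \pm \mathrm{i}, \pm \mathrm{j}, \pm \mathrm{k}\}$ -- one of the finite subgroups of $\mathrm{SU}(2)$ appearing in the key class of examples recalled next -- acts on $\mathbb{H} \simeq_{{}_{\mathbb{R}}} \mathbb{R}^4$ by left quaternion multiplication through orthogonal $4 \times 4$ matrices, hence constitutes a 4-dimensional orthogonal linear $Q_8$-representation in the above sense:
{\footnotesize
\begin{verbatim}
import numpy as np

def qmul(p, q):
    """Quaternion product of p = (a,b,c,d) and q, meaning a + b i + c j + d k."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

# the eight unit quaternions  {+-1, +-i, +-j, +-k}  forming the group Q_8:
units = [tuple(s * x for x in e)
         for s in (+1, -1)
         for e in [(1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)]]

def left_mult_matrix(q):
    """Matrix of left multiplication by q on H = R^4 in the basis (1, i, j, k)."""
    basis = [(1,0,0,0), (0,1,0,0), (0,0,1,0), (0,0,0,1)]
    return np.array([qmul(q, e) for e in basis]).T

for q in units:
    R = left_mult_matrix(q)
    assert np.allclose(R @ R.T, np.eye(4))    # orthogonal, since |q| = 1
\end{verbatim}
}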
\medskip The {\it key class of examples} of interest here are finite subgroups (see, e.g., \cite[A.1]{SS19b}) \begin{equation} \label{ADESubgroups} G^{\mathrm{ADE}} \;\subset\; \mathrm{SU}(2) \;\simeq\; \mathrm{Sp}(1) \;\simeq\; U(1,\mathbb{H}) \;\simeq\; S(\mathbb{H}) \end{equation} of the multiplicative group of unit norm elements $q \in S(\mathbb{H})$ in the vector space $\mathbb{H} \simeq_{{}_{\mathbb{R}}} \mathbb{R}^4$ of quaternions, and their defining 4-dimensional linear representation on this space (by left quaternion multiplication), which we denote by \begin{equation} \label{TheQuaternionicRepresentation} \mathbf{4}_{\mathbb{H}} \;\in\; \mathrm{RO}\big( G^{\mathrm{ADE}} \big) \,. \end{equation} All of these, except the cyclic groups of odd order, contain the subgroup \begin{equation} \label{PointReflectionSubgroup} \mathbb{Z}_2^{\mathpalette\mathrlapinternal{\mathrm{refl}}} \;\;\;\coloneqq\; \big\langle -1 \in S(\mathbb{H})\big\rangle \;\subset\; G^{\mathrm{A}_{\mathrm{ev}}\mathrm{DE}} \end{equation} generated by the quaternion $-1 \in \mathbb{H}$. This acts on the 4-dimensional quaternionic representation \eqref{TheQuaternionicRepresentation} by point reflection at the origin, hence as the 4-dimensional sign representation $$ \xymatrix{ \mathbb{R}^{\mathbf{4}_{\mathbb{H}}} \ar@(ul,ur)|-{\; \mathbb{Z}_2^{\mathrm{refl}}\!\!\!\! } } \;\simeq\; \xymatrix{ \mathbb{R}^{\mathbf{4}_{\mathrm{sgn}}} \ar@(ul,ur)|-{ \; \mathbb{Z}_2 } } \,, $$ as illustrated for 2 of 4 dimensions in \hyperlink{FigureI}{\it Figure I}. \medskip \noindent {\bf Euclidean $G$-Spaces.} The underlying Euclidean space of a linear $G$-representation $V$ is of course a $G$-space, hence a {\it Euclidean $G$-space}, which we suggestively denote by $\mathbb{R}^V$: \vspace{-.5cm} \begin{equation} \label{EuclideanGSpace} \mbox{\tiny \color{darkblue} \bf linear $G$-representation } \;\;\;\; V \in \mathrm{RO}(G) \;\;\;\;\;\; \Rightarrow \;\;\;\;\;\; \xymatrix{ \mathbb{R}^V\ar@(ul,ur)|{\,G\,} } \;\;\;\; \mbox{ \tiny \color{darkblue} \bf Euclidean $G$-space } \end{equation} \begin{example}[{\hyperlink{FigureI}{\it Figure I}}] With $G = \mathbb{Z}_2$ and $V = \mathbf{2}_{\mathrm{sgn}}$ its 2-dimensional sign representation, the Euclidean $G$-spaces $\mathbb{R}^{\mathbf{2}_{\mathrm{sgn}}}$ is the Cartesian plane equipped with the action of point reflection at the origin: \end{example} \begin{center} \hypertarget{FigureI}{} \begin{tikzpicture}[scale=0.5] \draw (-16,0) node {\footnotesize \begin{minipage}[l]{6.4cm} { \bf Figure I -- The Euclidean $\mathbb{Z}_2$-space} \eqref{EuclideanGSpace} of the 2-dimensional sign representation $\mathbf{2}_{\mathrm{sgn}}$. The underlying topological space is the Euclidean plane $\mathbb{R}^2$, with group action by point reflection at the origin. 
\end{minipage} }; \begin{scope} \clip (-1.8-1.7,-3.4) rectangle (4.8-1.5,3.4); \draw[step=3, dotted] (-6,-6) grid (6,6); \draw[<->, dashed, color=darkblue] (-2,2) to node[near start] { \colorbox{white}{ \bf \tiny \color{darkblue} \begin{tabular}{c} $\mathbb{Z}_2$ \\ action \end{tabular} } } (2,-2); \draw[<->, dashed, color=darkblue] (2,2) to (-2,-2); \end{scope} \draw (-6,0) node {$\mathbb{R}^{\mathbf{2}_{\mathrm{sgn}}} = $}; \begin{scope}[shift={(0,-1.4)}] \draw (-3,-2.6) node {\tiny $x_1 = -\tfrac{1}{2}$}; \draw (0,-2.6) node {\tiny $x_1 = 0$}; \draw (3,-2.6) node {\tiny $x_1 = \tfrac{1}{2}$}; \end{scope} \draw (-4.1,0) node {\tiny $x_2 = 0$}; \draw (-4.1,3) node {\tiny $x_2 = \tfrac{1}{2}$}; \draw (-4.1,-3) node {\tiny $x_2 = -\tfrac{1}{2}$}; % \end{tikzpicture} \end{center} \vspace{-2mm} Notice that for $V, W \in \mathrm{RO}(G)$ two orthogonal linear $G$-representations, with $V \oplus W \in \mathrm{RO}(G)$ their direct sum representation, the Cartesian product of their Euclidean $G$-spaces \eqref{EuclideanGSpace} is the Euclidean $G$-space of their direct sum: \begin{equation} \label{CartesianProductOfEuclideanGSpaces} \mathbb{R}^V \times \mathbb{R}^W \;\simeq\; \mathbb{R}^{V \oplus W} \,. \end{equation} \noindent {\bf $G$-Representation spheres.} The one-point compactification \eqref{SphereIsCompactificationOfCartesianSpace} of a Euclidean space $\mathbb{R}^V$ \eqref{EuclideanGSpace} becomes itself a $G$-space, with the point at infinity declared to be fixed by all group elements; this is called the {\it representation sphere} of $V$ (see, e.g., \cite[1.1.5]{Blu17}): \vspace{-3mm} \begin{equation} \label{RepSpheres} \raisebox{10pt}{ \xymatrix@R=-4pt{ & \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} one-point compactification \\ of Euclidean space $V$ \end{tabular} } } && \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} unit sphere \\ in product of $V$ \\ with the 1d trivial representation \end{tabular} } } \\ S^V \ar@{}[r]|-{ \coloneqq } & \big( \mathbb{R}^V \big)^{\mathrm{cpt}} \ar@{}[r]|-{ \simeq } & D\big(\mathbb{R}^V\big)/S\big( \mathbb{R}^V\big) \ar@{}[r]|-{ \simeq } &\ S\big( \mathbb{R}^{\mathbf{1}_{\mathrm{triv}} \oplus V} \big)\;. 
\\ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} representation \\ sphere \end{tabular} } } && \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} unit disk in $V$ \\ with boundary collapsed \\ to the point at infinity \end{tabular} } } } } \end{equation} \vspace{-2mm} \begin{example}[{\hyperlink{FigureJ}{\it Figure J}}] With $G \coloneqq \mathbb{Z}_2$ the group of order 2 and $\mathbf{1}_{\mathrm{sgn}}$ its 1-dimensional sign-representation, the corresponding representation sphere \eqref{RepSpheres} is the circle equipped with the $\mathbb{Z}_2$-action that reflects across an equator: \end{example} \begin{center} \hypertarget{FigureJ}{} \begin{tikzpicture}[scale=0.6] \draw (-13,0) node {\begin{minipage}[l]{6.7cm} \footnotesize {\bf Figure J -- The $\mathbb{Z}_2$-representation sphere} {of the 1-dimensional sign representation $\mathbf{1}_{\mathrm{sgn}}$ is the $\mathbb{Z}_2$-space whose underlying topological space is the circle, and equipped with the $\mathbb{Z}_2$-action that reflects points across the equator through $0$ and the point at infinity.} \end{minipage} }; \draw (-4,.5) node {$ \xymatrix{ S^{\mathbf{1}_{\mathrm{sgn}}} \ar@(ul,ur)|{\,\mathbb{Z}_2\,} } = $}; \draw (0,0) circle (2); \node (infinity1) at (2,0) {\colorbox{white}{$\infty$}}; \node (submanifold1) at (-2-.3,0) {\footnotesize $0$}; \draw (180-0:2) node {$-$}; \node (submanifold2) at (180+50:2) {}; \node (mirrorsubmanifold2) at (180-50:2) {}; \node (submanifold3) at (180+35:2) {}; \node (mirrorsubmanifold3) at (180-35:2) {}; \node (submanifold4) at (90:2) {}; \node (mirrorsubmanifold4) at (-90:2) {}; \node (submanifold5) at (45:2) {}; \node (mirrorsubmanifold5) at (-45:2) {}; \draw[<->, dashed, darkblue] (submanifold3) to (mirrorsubmanifold3); \draw[<->, dashed, darkblue] (submanifold2) to node[near start] { \raisebox{1.2cm}{ \tiny \bf \color{darkblue} \hspace{.2cm} \begin{tabular}{c} $\mathbb{Z}_2$ \\ action \end{tabular} } } (mirrorsubmanifold2); \draw[<->, dashed, darkblue] (submanifold4) to (mirrorsubmanifold4); \draw[<->, dashed, darkblue] (submanifold5) to (mirrorsubmanifold5); \end{tikzpicture} \end{center} \vspace{-3mm} \noindent {\bf $G$-Representation tori.} Similarly, consider the linear $G$-representation $V$ such that $G \subset \mathrm{Iso}\big( \mathbb{R}^{\mathrm{dim}(V)}\big)$ is the point group of a crystallographic group $C$ (see, e.g., \cite{Farkas81}) of the underlying Euclidean space $\mathbb{R}^{\mathrm{dim}(V)}$ with corresponding translational sub-lattice $\mathbb{Z}^n \subset \mathrm{Iso}(n)$ inside the Euclidean group in $n = \mathrm{dim}(V)$ dimensions. 
This means we have an exact sequence of this form: \vspace{-2mm} \begin{equation} \label{CrystallographicGroups} \raisebox{25pt}{ \xymatrix@R=-2pt{ & \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} lattice of translations \\ normal subgroup \end{tabular} }} && \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} crystallographic \\ group \end{tabular} } } && \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} point group \\ $\simeq C/\mathbb{Z}^n$ \end{tabular} } } \\ 1 \ar[r] & \mathbb{Z}^n \ar@{^{(}->}[dddddd] \ar@{^{(}->}[rr] && C \ar@{^{(}->}[dddddd] \ar@{->>}[rr] && {\color{darkblue} G } \ar@{^{(}->}[dddddd] \ar[r] & 1 \\ \\ \\ \\ \\ \\ 1 \ar[r] & \mathbb{R}^n \ar@{^{(}->}[rr] && \mathrm{Iso}(n) \ar@{->>}[rr] && \mathrm{O}(n) \ar[r] & 1 \\ & \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} translation group \end{tabular} } } && \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} Euclidean group \\ (isometries of $\mathbb{R}^n$) \end{tabular} } } && \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} orthogonal group \end{tabular} } } } } \end{equation} Then the corresponding torus $\mathbb{T}^n \coloneqq \mathbb{R}^n/\mathbb{Z}^n$ inherits a $G$-action from $\mathbb{R}^V$. We may call the resulting $G$-space the {\it representation torus} of $V$. This is the type of $G$-space whose global quotients are {\it toroidal orbifolds}: \begin{equation} \label{RepresentationTorus} \begin{array}{ccccccc} V \in \mathrm{RO}(G) &\Rightarrow& \xymatrix{ \mathbb{R}^V \ar@(ul,ur)|{\,G\,} } &\Rightarrow& \xymatrix@R=-6pt{ \mathbb{T}^V \ar@(ul,ur)|-{\,G\,} \ar@{}[r]|-{\coloneqq} & \mathbb{R}^V \ar@(ul,ur)|-{\,G\,} & \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!/\mathbb{Z}^n } &\Rightarrow& \mathbb{T}^V \!\sslash\!G\;. \\ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} orthogonal linear \\ $G$-representation \end{tabular} } && \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} Euclidean \\ $G$-space \end{tabular} } & \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} if $G$ is point group \\ of crystallographic group \end{tabular} } & \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} representation torus \end{tabular} } && \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} toroidal \\ orbifold \end{tabular} } \end{array} \end{equation} \vspace{-2mm} \begin{example}[{\hyperlink{FigureK}{\it Figure K}}] \label{FixedPointsInSignRepTorus} For $G = \mathbb{Z}_4$ the cyclic group of order 4 and $\mathbf{2}_{\mathrm{rot}}$ its 2-dimensional linear representation given by rotations around the origin by integer multiples of $\pi/2$, this action descends to the 2-torus quotient to give the representation torus $\mathbb{T}^{\mathbf{2}_{\mathrm{rot}}}$: \end{example} \vspace{-8mm} \begin{center} \hypertarget{FigureK}{} \begin{tikzpicture}[scale=0.85] \draw (-12,.5) node {\footnotesize \begin{minipage}[l]{9cm} { \bf Figure K -- The $\mathbb{Z}_4$-representation torus} \eqref{RepresentationTorus} of the 2-dimensional rotational representation $\mathbf{2}_{\mathrm{rot}}$. The underlying topological space is the 2-torus $T^2 = \mathbb{R}^2/\mathbb{Z}^2$, of which we show the canonical covering $\mathbb{R}^2$-coordinate chart. 
Due to the coordinate identifications $$ \big([x_1], [x_2]\big) = \big([x_1 + n], [x_2 + m]\big) \;\in\; \mathbb{T}^2 = \mathbb{R}^2 / \mathbb{Z}^2 $$ the fixed point set \eqref{FixedLoci} of the $\mathbb{Z}_2$-subgroup has four points is $$ \big( \mathbb{T}^{\mathbf{2}_{\mathrm{rot}}} \big)^{\mathbb{Z}_2} \;=\; \Big\{ \big([0],[0]\big), \big([\tfrac{1}{2}], [\tfrac{1}{2}]\big), \big([0],[\tfrac{1}{2}]\big), \big([\tfrac{1}{2}], [0]\big) \Big\} \subset \mathbb{T}^2\,. $$ while that of the full group has two points $$ \big( \mathbb{T}^{\mathbf{2}_{\mathrm{rot}}} \big)^{\mathbb{Z}_4} \;=\; \Big\{ \big([0],[0]\big), \big([\tfrac{1}{2}], [\tfrac{1}{2}]\big) \Big\} \subset \mathbb{T}^2\,. $$ \end{minipage} }; \draw (-5.5,.6) node { $ \xymatrix{ \mathbb{T}^{\mathbf{2}_{\mathrm{rot}}} \ar@(ul,ur)|{\,\mathbb{Z}_4\,} } \;=\; $ }; \begin{scope} \clip (-1.8-1.7,-1.4) rectangle (4.8-1.5,3.5); \draw[step=3, dotted] (-3.4,-3) grid (6,6); \draw[<->, dashed, darkblue] (-.8,.8) to (.8,-.8); \draw[<->, dashed, greenii] (0-45+3:1.3) arc (0-45+3:90-45-3:1.3); \draw[<->, dashed, greenii] (0+45+3:1.3) arc (0+45+3:90+45-3:1.3); \draw[<->, dashed, greenii] (0-30+3:1.6) arc (0-30+3:90-30-3:1.6); \draw[<->, dashed, greenii] (0-30+3+90:1.6) arc (0-30+3+90:90-30-3+90:1.6); \draw[<->, dashed, darkblue] (0-30:1.5) to (0-30+180:1.5); \draw[<->, dashed, greenii] (0-20+3:1.9) arc (0-20+3:90-20-3:1.9); \draw[<->, dashed, greenii] (0-20+3+90:1.9) arc (0-20+3+90:90-20-3+90:1.9); \draw[<->, dashed, darkblue] (0-20:1.8) to (0-20+180:1.8); \draw[<->, dashed, greenii] (0-10+3:2.2) arc (0-10+3:90-10-3:2.2); \draw[<->, dashed, greenii] (0-10+3+90:2.2) arc (0-10+3+90:90-10-3+90:2.2); \draw[<->, dashed, darkblue] (0-10:2.1) to (0-10+180:2.2); \draw[<->, dashed, greenii] (0+3:2.5) arc (0+3:90-3:2.5); \draw[<->, dashed, greenii] (0+3+90:2.5) arc (0+3+90:90-3+90:2.5); \draw[<->, dashed, darkblue] (0:2.4) to (0+180:2.4); \draw[<->, dashed, greenii] (0+10+3:2.8) arc (0+10+3:90+10-3:2.8); \draw[<->, dashed, greenii] (0+10+3+90:2.8) arc (0+10+3+90:90+10-3+90:2.8); \draw[<->, dashed, darkblue] (0+10:2.7) to (0+10+180:2.7); \draw[<->, dashed, greenii] (0+20+3:3.1) arc (0+20+3:90+20-3:3.1); \draw[<->, dashed, greenii] (0+20+3+90:3.1) arc (0+20+3+90:90+20-3+90:3.1); \draw[<->, dashed, darkblue] (0+20:3) to (0+20+180:3); \draw (2,2) node { \colorbox{white}{ \hspace{-.4cm} \tiny \color{greenii} \begin{tabular}{c} $\mathbb{Z}_4$ \\ action \end{tabular} \hspace{-.4cm} } } (0,2.5); \draw (-1.3,0) node { \colorbox{white}{ \hspace{-.4cm} \tiny \color{darkblue} \begin{tabular}{c} $\mathbb{Z}_2 \subset \mathbb{Z}_4$ \\ action \end{tabular} \hspace{-.4cm} } } (0,2.5); \end{scope} \begin{scope}[shift={(0,1)}] \draw (-3,-2.6) node {\tiny $x_1 = -\tfrac{1}{2}$}; \draw (0,-2.6) node {\tiny $x_1 = 0$}; \draw (3,-2.6) node {\tiny $x_1 = \tfrac{1}{2}$}; \draw (-3,-2.8) .. controls (-3,-2.8-.8) and (3,-2.8-.8) .. node[below] {\tiny $\sim$} (3,-2.8); \end{scope} \draw (-3.9,0) node {\tiny $x_2 = 0$}; \draw (-3.9,3) node {\tiny $x_2 = \tfrac{1}{2}$}; % \end{tikzpicture} \end{center} \vspace{-1mm} \noindent {\bf $H$-Fixed subspaces and isotropy groups.} For $\xymatrix{X \ar@(ul,ur)|{\, G\,}}$ a $G$-space and $H \subset G$ any subgroup, the {\it $H$-fixed subspace} \begin{equation} \label{FixedLoci} X^H \;\coloneqq\; \big\{ x \in X \big\vert h\cdot x = x \; \mbox{for all}\; h \in H \big\} \;\subset\; X \end{equation} is the topological subspace of $X$ on those points which are fixed by the action of $H$. 
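As a sanity check on Example \ref{FixedPointsInSignRepTorus} (\hyperlink{FigureK}{\it Figure K}), the fixed subspaces \eqref{FixedLoci} of the representation torus $\mathbb{T}^{\mathbf{2}_{\mathrm{rot}}}$ may be verified by brute force over a rational grid (a Python sketch for illustration only; the grid resolution and the function names are ad hoc choices, and the search only tests grid points -- but all the actual fixed points happen to lie on this grid):
{\footnotesize
\begin{verbatim}
from fractions import Fraction as F
from itertools import product

# the generator of Z_4 acts on T^2 = R^2/Z^2 by rotation through pi/2,
# i.e. [x1, x2] |-> [-x2, x1]; its square is the point reflection.
def rot(p):   return ((-p[1]) % 1, p[0] % 1)
def refl(p):  return ((-p[0]) % 1, (-p[1]) % 1)

grid = [F(k, 8) for k in range(8)]            # rational sample points on T^2
points = list(product(grid, grid))

fixed_Z2 = [p for p in points if refl(p) == p]
fixed_Z4 = [p for p in points if rot(p) == p]

print(sorted(fixed_Z2))   # -> the four points (0,0), (0,1/2), (1/2,0), (1/2,1/2)
print(sorted(fixed_Z4))   # -> the two points  (0,0), (1/2,1/2)
\end{verbatim}
}
\noindent This reproduces the two fixed point sets stated in \hyperlink{FigureK}{\it Figure K}.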
In particular, for $1 \subset G$ the trivial group we have $X^1 = X$. We also write \begin{equation} \label{IsotropySubgroups} \mathrm{Isotr}_X(G) \;\coloneqq\; \big\{ \mathrm{Stab}_G(x) \subset G \,\big\vert\, x \in X \big\} \end{equation} for the set of {\it isotropy subgroups} of $G$, hence those that appear as stabilizer groups of some point, namely as maximal subgroups fixing a point: $ \mathrm{Stab}_G(x) \;\coloneqq\; \big\{ g \in G \vert g \cdot x = x \big\} \;\subset\; G \,. $ It is the isotropy subgroups \eqref{IsotropySubgroups}, but not necessarily the generic subgroups, which serve to filter a $G$-space in a non-degenerate way, since if one isotropy subgroup is strictly larger than another, then its fixed subspace \eqref{FixedLoci} is strictly smaller:
$$ H_1 \subsetneq H_2 \;\in\; \mathrm{Isotr}_X(G) \phantom{AAA} \Rightarrow \phantom{AAA} X^{H_2} \subsetneq X^{H_1}. $$
\begin{example}[fixed subspaces of ADE-singularities] \label{FixedSubspacesOfADESingularities} The non-trivial fixed subspaces of the Euclidean $G$-space \eqref{EuclideanGSpace} of the quaternionic representation $\mathbf{4}_{\mathbb{H}}$ \eqref{TheQuaternionicRepresentation} are all the singleton sets consisting of the origin: \begin{equation} \label{FixedSubspacesOfQuaternionRepresentation} \big( \mathbb{R}^{\mathbf{4}_{\mathbb{H}}} \big)^H \;=\; \left\{ \begin{array}{cc} \mathbb{R}^4 & \mbox{if}\;H = 1 \\ \{0\} & \mbox{otherwise}. \end{array} \right. \end{equation} \end{example}
\begin{example}[{\hyperlink{FigureK}{\it Figure K}}] \label{ExampleRT} For $G = \mathbb{Z}_2$ and $\mathbf{n}_{\mathrm{sgn}}$ the $n$-dimensional sign representation, the corresponding representation torus \eqref{RepresentationTorus} has as $\mathbb{Z}_2$-fixed space \eqref{FixedLoci} the 0-dimensional space which is the set of points whose canonical coordinates are all either 0 mod $\mathbb{Z}$ or $\tfrac{1}{2}$ mod $\mathbb{Z}$: \vspace{-2mm} \begin{equation} \label{RepresentationTorusOfSignRep} \mathbb{T}^{\mathbf{n}_{\mathrm{sgn}}} \;\coloneqq\; \xymatrix{ ( \mathbb{R}^n \ar@(ul,ur)^{\footnotesize [x] \mapsto [-x] } & \!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! /\mathbb{Z}^n) } \phantom{AAA} \Longrightarrow \phantom{AAA} \big( \mathbb{T}^{\mathbf{n}_{\mathrm{sgn}}} \big)^{\mathbb{Z}_2} \;=\; \big\{ [0], [\tfrac{1}{2}] \big\}^n \;\subset\; \mathbb{T}^n = \mathbb{R}^n/\mathbb{Z}^n. \end{equation} \end{example}
\begin{example}[\bf Kummer surface] \label{KummerSurface} The reflection ADE-action \eqref{PointReflectionSubgroup} $ \xymatrix{ \mathbb{R}^{\mathbf{4}_{\mathbb{H}}} \ar@(ul,ur)|-{\, \mathbb{Z}^{\mathrm{refl}}_2 \!\!\!} } $ is clearly crystallographic \eqref{CrystallographicGroups}. The orbifold $ \mathbb{T}^{\mathbf{4}_{\mathbb{H}}} \!\sslash\! \mathbb{Z}^{\mathrm{refl}}_2 \simeq \mathbb{T}^{\mathbf{4}_{\mathrm{sgn}}} \!\sslash\! \mathbb{Z}_2 $ presented by the corresponding representation torus \eqref{RepresentationTorus} is (when equivalently thought of as an orbifold of the complex 2-dimensional torus) known as the {\it Kummer surface} (e.g. \cite[5.5]{BDP17}). The cardinality of its fixed point set \eqref{FixedLoci} is (by Example \ref{ExampleRT}) $$ \left\vert \big( \mathbb{T}^{\mathbf{4}_{\mathbb{H}}} \big)^{\mathbb{Z}_2^{\mathrm{refl}}} \right\vert \;=\; \left\vert \{[0],[\tfrac{1}{2}]\}^4 \right\vert \;=\; 16 .
$$ \end{example} \medskip \noindent {\bf Residual action on fixed spaces.} There is a residual group action on any $H$-fixed subspace $X^H$ \eqref{FixedLoci} inherited from the $G$-action on all of $X$, with the residual group being the ``Weyl group'' \cite[p. 13]{May96} \begin{equation} \label{WeylGroup} W_G(H) \;\coloneqq\; N_G(H) / H \end{equation}{ which is the quotient group of the maximal subgroup $N_G(H) \subset G$ for which $H$ is a normal subgroup (the normalizer of $H$ in $G$) by $H$ itself. Thereby any $H$-fixed subspace becomes itself a $W_G(H)$-space: \begin{equation} \label{ResidualActionOnFixedSubspaces} \begin{array}{ccccc} \xymatrix{ X \ar@(ul,ur)|{\, G\,} } &~~~\colon~~~& (H \subset G) &~~ \longmapsto ~~& \xymatrix{ X^H \ar@(ul,ur)^{W_G(H)} }\!\!. \\ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} A $G$-space induces }} && \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} for each subgroup $H$ }} && \mathpalette\mathclapinternal{ \mbox{\bf \raisebox{-2pt}{ \tiny \color{darkblue} \begin{tabular}{l} the $H$-fixed space with \\ residual $W_G(H)$-action \end{tabular} } }} \end{array} \end{equation} Notice the two extreme cases of the Weyl group \eqref{WeylGroup}: \begin{equation} \label{ExtremeCasesOfWeylGroups} W_G(1) = G \phantom{AAA} \mbox{and} \phantom{AAA} W_G(G) = 1 \,. \end{equation} \medskip \noindent {\bf Maps between $G$-spaces and their Elmendorf stages.} The relevant {\it morphisms between $G$-spaces} are continuous functions between the underlying spaces that are $G$-equivariant: \begin{equation} \label{EquivariantFunction} \xymatrix{ X \ar@(ul,ur)|{\, G\,} \ar[rr]^-{f} && Y \ar@(ul,ur)|{\, G\,} } \phantom{AAA} \Leftrightarrow \phantom{AAA} \xymatrix{ X \ar[rr]^-{f} && Y } \;\; \mbox{\footnotesize \begin{tabular}{l} such that $f(g\cdot x) = g \cdot f(x)$ \\ for all $g \in G$ and all $x \in X$. 
\end{tabular} } \end{equation} This $G$-equivariance implies that $H$-fixed points are sent to $H$-fixed points, for every subgroup $H \subset G$, hence that every $G$-equivariant continuous function \eqref{EquivariantFunction} induces a system of plain continuous functions $f^H := f_{\vert X^H}$ between $H$-fixed point spaces \eqref{FixedLoci}, which are each equivariant with respect to the residual $W_G(H)$-action \eqref{WeylGroup} and compatible with each other with respect to inclusions $H_i \subset H_j$ of subgroups:
\begin{equation} \label{SystemOfMapsOnHFixedSubspaces} \begin{array}{ccc} {\xymatrix{ X \ar@(ul,ur)|{\,G\,} \ar[rr]^{f} && Y\ar@(ul,ur)|{\,G\,} }} & \phantom{AAA} \Rightarrow \phantom{AA} & \raisebox{60pt}{ \xymatrix{ X \ar@(ul,dl)_G \ar[rr]^-f && Y \ar@(ur,dr)^G &{\phantom{AAAAA}}& 1 \ar@{^{(}->}[d] \\ X^{H_i} \ar@(ul,dl)_{W_G(H_i)} \ar@{^{(}->}[u] \ar[rr]^-{f^{H_i}} && Y^{H_i} \ar@(ur,dr)^{W_G(H_i)} \ar@{^{(}->}[u] & \ar@{}[d]|{ \mbox{\hspace{1.3cm} for all} } & H_i \ar@{^{(}->}[d] \\ X^{H_j} \ar@(ul,dl)_{W_G(H_j)} \ar@{^{(}->}[u] \ar[rr]^-{f^{H_j}} && Y^{H_j} \ar@(ur,dr)^{W_G(H_j)} \ar@{^{(}->}[u] && H_j \ar@{^{(}->}[d] \\ X^{G} \ar@{^{(}->}[u] \ar[rr]^-{f^{G}} && Y^{G} \ar@{^{(}->}[u] && G } } \\ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} $G$-equivariant function between $G$-spaces \end{tabular} } } & \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} induces \end{tabular} } } & \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} system of $W_G(H)$-equivariant functions between $H$-fixed subspaces \end{tabular} } } \end{array} \end{equation}
We will refer to the component $f^H$ here as the {\it Elmendorf stage} labeled by $H$ \cite[1.3]{Blu17}\cite[3.1]{ADE}. \medskip Finally, a {\it $G$-homotopy} between two $G$-equivariant functions $f_0, f_1$ \eqref{EquivariantFunction}
\begin{equation} \label{GHomotopy} \xymatrix{ X \ar@(ul,dl)_G \ar@/^1pc/[rr]^-{f_0}_-{\ }="s" \ar@/_1pc/[rr]_-{f_1}^-{\ }="t" && Y \ar@(ur,dr)^G \ar@{=>}^\eta "s"; "t" } \end{equation}
\vspace{-.5cm} \noindent is a homotopy $ [0,1] \times X \overset{ \eta }{\longrightarrow} Y $ between the underlying continuous functions, hence such that $f_i = \eta(i,-)$, which is equivariant as a function on the product $G$-space $[0,1] \times X$, where the $G$-action on the interval $[0,1]$ is taken to be trivial. \medskip
\subsection{Equivariant Hopf degree on spheres and Local tadpole cancellation} \label{LocalTadpoleCancellation} We discuss the unstable (Theorem \ref{UnstableEquivariantHopfDegreeTheorem}) and the stabilized (Theorem \ref{CharacterizationOfStabilizationOfUnstableCohomotopy}) equivariant Hopf degree theorem for representation spheres, which characterizes equivariant Cohomotopy in compatible RO-degree (Def. \ref{CompatibleRODegree} below), on Euclidean $G$-spaces and vanishing at infinity, hence of the vicinity of $G$-singularities inside flat Euclidean space (Def. \ref{CohomotopyOfVicinityOfSingularity} below). Using this we show (Prop. \ref{TheoremLocalTadpoleCancellation}) that equivariant Cohomotopy implies the form of the local/twisted tadpole cancellation conditions from \hyperlink{Table1}{\it Table 1}, \hyperlink{Table2}{\it Table 2}. \medskip
\subsubsection{Unstable equivariant Hopf degree} \label{UnstableEquivariantHopfDegree} For stating the equivariant Hopf degree theorem, we need the following concept of \emph{compatible RO-degree} for equivariant Cohomotopy.
This condition is really a reflection of the structure of \emph{J-twisted} Cohomotopy (as in \cite{FSS19b}\cite{FSS19c}) in its version on flat orbifolds, and as such is further developed in \cite{OrbifoldCohomology}. \vspace{-.3cm} \begin{defn}[\bf Compatible RO-degree] \label{CompatibleRODegree} Given a $G$-space $\xymatrix{ X \ar@(ul,ur)|{\,G \,}}$ such that each $H$-fixed subspace $X^H$ \eqref{FixedLoci} for isotropy groups $H \in \mathrm{Isotr}_X(G)$ \eqref{IsotropySubgroups} admits the structure of an orientable manifold, we say that an orthogonal linear $G$-representation $V$ is a \emph{compatible RO-degree for equivariant Cohomotopy of $X$} if for each isotropy subgroup $H \in \mathrm{Isotr}_X(G)$ \eqref{IsotropySubgroups} the following two conditions hold: \footnote{ These conditions are a specializations of the conditions stated in \cite[p. 212-213]{tomDieck79}, streamlined here for our purpose.} \begin{enumerate}[{\bf (i)}] \vspace{-2mm} \item {\bf Compatible fixed space dimensions:} the dimension of the $H$-fixed subspace of $V$ equals that of the $H$-fixed subspace of $X$: \begin{equation} \label{FixedSpacesOfCompatibleDimension} \mathrm{dim}\big( X^H \big) \;=\; \mathrm{dim}\big( V^H \big). \end{equation} \vspace{-3mm} \item {\bf Compatible orientation behavior:} the action \eqref{ResidualActionOnFixedSubspaces} of an element $[g] \in W_G(H)$ \eqref{WeylGroup} on $V^H$ is orientation preserving or reversing, respectively, precisely if it is so on $X^H$ \begin{equation} \label{OrientationBehaviousCompatible} \mathrm{orient} \left( \raisebox{-10pt}{ \xymatrix{ X^H \ar@(ul,ur)^{ [g] \in W_H(H) } }} \right) \;=\; \mathrm{orient} \left( \raisebox{-10pt}{ \xymatrix{ (S^V)^H \ar@(ul,ur)^{ [g] \in W_H(H) } } } \right). \end{equation} \end{enumerate} \end{defn} \begin{example}[\bf Compatible RO-degree for representation-spheres and -tori] \label{ExamplesOfCompatibleRODegree} We observe that every real linear $G$-representation $V$ is a compatible RO-degree (Def. \ref{CompatibleRODegree}) \begin{enumerate}[{\bf (i)}] \vspace{-2mm} \item for the corresponding representation sphere $S^V$ \eqref{RepSpheres}; \vspace{-2mm} \item and for the corresponding representation torus $\mathbb{T}^{V}$ \eqref{RepresentationTorus} \end{enumerate} \vspace{-2mm} If the latter exists, hence if $G$ is the point group of a crystallographic group on $\mathbb{R}^V$ \eqref{CrystallographicGroups}. \end{example} For brevity, we introduce the following terminology, following \hyperlink{Table5}{\it Table 5}, for the situation in which we will now consider equivariant Cohomotopy in compatible RO-degree: \begin{defn}[Cohomotopy of vicinity of the singularity] \label{CohomotopyOfVicinityOfSingularity} Given a finite group $G$ and an orthogonal linear $G$-representation $V \in \mathrm{RO}(G)$, we say that the {\it Cohomotopy of the vicinity of the singularity} is the unstable $G$-equivariant Cohomotopy \eqref{EquivariantCohomotopySet} $$ \pi^V_G \big( (\mathbb{R}^V)^{\mathrm{cpt}} \big) \;=\; \pi^V_G \big( S^V \big) $$ in compatible RO-degree $V$ (Def. \ref{CompatibleRODegree}, Example \ref{ExamplesOfCompatibleRODegree}) of the Euclidean $G$-space $\mathbb{R}^V$ \eqref{EuclideanGSpace} and vanishing at infinity \eqref{VanishingAtInfinity}, hence of the representation sphere $S^V$ \eqref{RepSpheres} and preserving the point at infinity. 
\end{defn} The key implication of the first clause \eqref{FixedSpacesOfCompatibleDimension} on compatible RO-degrees is that each Elmendorf stage $c^H$ \eqref{SystemOfMapsOnHFixedSubspaces} of a $G$-equivariant Cohomotopy cocycle $c$ is a cocycle in ordinary Cohomotopy \eqref{PlainCohomotopySet} to which the ordinary Hopf degree theorem applies, either in its stable range \eqref{HopfDegreeTheorem} or in the unstable range \eqref{UnstableRangeHopfDegreeTheorem}: \vspace{-2mm} \begin{equation} \label{ElmedorfStageWiseHopfDegrees} \hspace{-2mm} \!\!\!\!\!\!\!\!\! \begin{array}{ccc} \underset{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} equivariant Cohomotopy cocycle \eqref{EquivariantCohomotopySet} \\ in compatible RO-degree $V$ \eqref{FixedSpacesOfCompatibleDimension} \end{tabular} } }{ {\xymatrix{ X \ar@(ul,ur)|{\, G\,} \ar[rr]^-{c} && S^V\ar@(ul,ur)|{\,G\,} }} } & \phantom{} \Rightarrow \phantom{A} & \raisebox{83pt}{ \xymatrix{ X \ar[rr]^-c && S^{\,\mathrm{dim}(X) > 0} & & \mathrm{deg}(c) \in \mathbb{Z} \\ \\ X^{H} \ar@{^{(}->}[uu]|-{ \raisebox{2pt}{$\vdots$} } \ar[rr]^-{c^{H}} && S^{\,\mathrm{dim}(X^{H}) > 0} \ar@{^{(}->}[uu]|-{ \raisebox{2pt}{$\vdots$} } & & \underset{ \tiny \color{darkblue} \bf \begin{tabular}{c} \bf ordinary stable Hopf degree \eqref{HopfDegreeTheorem} \end{tabular} }{ \mathrm{deg}(c^{H}) \in \mathbb{Z} } \\ & \ddots \ar@{^{(}->}[ul]|-{\raisebox{5pt}{$\ddots$}} \ar[rr] && \ddots \ar@{^{(}->}[ul]|-{\raisebox{5pt}{$\ddots$}} \\ \mathpalette\mathllapinternal{ \mbox{\bf \tiny \color{darkblue} Elmendorf stages \eqref{SystemOfMapsOnHFixedSubspaces} } \;\;\; } X^{K} \ar@{^{(}->}[uu]|-{ \raisebox{2pt}{$\vdots$} } \ar[rr]^>>>>>>>{c^{K}} && S^{\,\mathrm{dim}(V^{K})> 0} \ar@{^{(}->}[uu]|-{ \raisebox{2pt}{$\vdots$} } & & \mathrm{deg}(c^{K}) \in \mathbb{Z} \\ & \ddots \ar@{^{(}->}[ul]|-{\raisebox{5pt}{$\ddots$}} \ar[rr] \ar@{^{(}->}[uu]|-{ \raisebox{2pt}{$\vdots$} } && \ddots \ar@{^{(}->}[ul]|-{\raisebox{5pt}{$\ddots$}} \ar@{^{(}->}[uu]|-{ \raisebox{2pt}{$\vdots$} } \\ X^{J} \ar@{^{(}->}[uu]|-{ \raisebox{2pt}{$\vdots$} } \ar[rr]^>>>>>>>{c^{J}} & \ar@{..>}[u] & S^{\,\mathrm{dim}(V^J)= 0} \ar@{^{(}->}[uu]|-{ \raisebox{2pt}{$\vdots$} } & \ar@{..>}[u] & \underset{ \mbox{\bf \tiny \color{darkblue} \phantom{AA} ordinary unstable Hopf degree \eqref{UnstableRangeHopfDegreeTheorem} } }{ \mathrm{deg}(c^J) \in \mathpalette\mathrlapinternal{ \{0,1\}^{(X^J)} } \phantom{\mathbb{Z}} } \\ {\phantom{ {A \atop A} \atop {A \atop A} }} \ar@{..>}[u] & & {\phantom{ {A \atop A} \atop {A \atop A} }} \ar@{..>}[u] & } } \end{array} \end{equation} \vspace{-1.2cm} \begin{theorem}[\bf Unstable equivariant Hopf degree theorem for representation spheres] \label{UnstableEquivariantHopfDegreeTheorem} The unstable Cohomotopy of the vicinity of a $G$-singularity $\mathbb{R}^V$ (Def.
\ref{CohomotopyOfVicinityOfSingularity}) is in bijection to the product set of one copy of the integers for each isotropy group \eqref{IsotropySubgroups} with positive-dimensional fixed subspace $\mathrm{Isotr}^{d_{\mathrm{fix}} > 0}_X(G)$ \eqref{FixedLoci}, and one copy of $\{0,1\}$ if there is an isotropy group with 0-dimensional fixed subspace $\mathrm{Isotr}^{d_{\mathrm{fix}} = 0}_X(G)$ (which is then necessarily unique and, in fact, the group $G$ itself): \vspace{-3mm} \begin{equation} \label{UnstableEquivariantCohomotopyOfRepresentationSphereInCompatibleDegree} \xymatrix{ \pi^V_G\big( \big(\mathbb{R}^V\big)^{\mathrm{cpt}} \big) \ar[rrr]^-{ c \;\mapsto\; ( H \mapsto {\color{darkblue} \bf N_H}(c) ) }_-{\simeq} &&& \mathbb{Z}^{{}^{ \mathrm{Isotr}^{d_{\mathrm{fix}} > 0}_X(G) } } \times \{0,1\}^{{}^{ \mathrm{Isotr}^{d_{\mathrm{fix}} = 0}_X(G) } } }, \end{equation} where, for $H \in \mathrm{Isotr}^{d_{\mathrm{fix}} > 0 }_X(G)$, the ordinary Hopf degree at Elmendorf stage $H$ \eqref{ElmedorfStageWiseHopfDegrees} is of the form \begin{equation} \label{TheWeylGroupMultiples} \xymatrix@R=-2pt{ \mathrm{deg}\big( c^{H} \big) & \!\!\! \!\!\! \!\!\! \!\!\! = \!\!\! \!\!\! \!\!\! \!\!\! & \phi_H\big( \{ \mathrm{deg}\big( c^K \big) \big\vert K \supsetneq H \in \mathrm{Isotr}_X(G) \} \big) & \!\!\! \!\!\! \!\!\! \!\!\! - \!\!\! \!\!\! \!\!\! \!\!\! & {\color{darkblue} \bf N_H}(c) \cdot \big| \big( W_G(H)\big) \big| & \!\!\!\!\!\!\!\!\! \in \mathbb{Z}. \\ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} The ordinary Hopf degree \eqref{HopfDegreeTheorem} \\ at Elmendorf stage $K$ \eqref{SystemOfMapsOnHFixedSubspaces} \end{tabular} } } & \!\!\!\!\!\! \!\!\!\!\!\! \!\!\!\!\!\! \!\!\!\!\!\! & \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} offset, being a function $\phi_H$ of \\ the Hopf degrees at all lower stages. \end{tabular} } } & \!\!\!\!\!\! \!\!\!\!\!\! \!\!\!\!\!\! \!\!\!\!\!\! & \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} an integer multiple of \\ the order of the Weyl group \eqref{WeylGroup} \end{tabular} } } } \end{equation} The isomorphism \eqref{UnstableEquivariantCohomotopyOfRepresentationSphereInCompatibleDegree} is exhibited by sending an equivariant Cohomotopy cocycle $c$ to the sequence of the integers ${\color{darkblue} \bf N_H}(c)$ from \eqref{TheWeylGroupMultiples} in positive fixed subspace dimensions, together with possibly the choice of an element of $\{0,1\}$, which is the unstable Hopf degree in dimension 0 \eqref{UnstableRangeHopfDegreeTheorem}, at Elmendorf stage $G$ (if $\mathrm{dim}(V^G) = 0$). \end{theorem} \begin{proof} In the special case that no subgroup $H \subset G$ has a fixed subspace of vanishing dimension, this is \cite[Theorem 8.4.1]{tomDieck79} (the assumption of positive dimension is made ``for simplicity'' in \cite[middle of p. 212]{tomDieck79}). Hence we just need to convince ourselves that the proof given there generalizes: in the present case of representation spheres, the only possible 0-dimensional fixed subspace is the 0-sphere. Hence we need to consider the case that $( S^V)^G = S^0$. To generalize the inductive argument in \cite[p. 214]{tomDieck79} to this case, we just need to see that every function $( S^V)^G \to ( S^V)^G$ extends to a $W_G(H)$-equivariant function $( S^V)^H \to ( S^V)^H$ on a next higher Elmendorf stage $H$. 
But this holds in the present case: every function from $S^0 = \{0,\infty\}$ to itself (as in \hyperlink{FigureH}{\it Figure H}) readily extends even to a $G$-equivariant function $S^V \to S^V$, and by assumption of vanishing at infinity \eqref{VanishingAtInfinity} one of exactly two extensions will work, namely either the identity function or the function constant on $\infty \in S^V$: \begin{equation} \label{InductionStartForRepSpheres} \xymatrix@R=-4pt{ & \{0,1\} \ar@{<-}[rr]^-{ \mathrm{deg}\left( (-)^G \right) } && \pi^V\big( S^V \big) \\ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} configuration of \\ a single point in $S^0$ \\ sitting at $0 \in S^0$ \end{tabular} } & \big[ S^0 \xrightarrow{\mathrm{id}_{S^0}} S^0 \big] \ar@{}[rr]|-{\longmapsfrom} && \big[ S^V \xrightarrow{c = \mathrm{id}_{S^V}} S^V \big] & \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} configuration of \\ a single charged point in $S^V$ \\ which is sitting at $0 \in S^V$ \end{tabular} } \\ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} configuration of \\ no point in $S^0$ \end{tabular} } & \big[ S^0 \xrightarrow{\mathrm{const}_\infty} S^0 \big] \ar@{}[rr]|-{\longmapsfrom} && \big[ S^V \xrightarrow{c = \mathrm{const}_{\infty}} S^V \big] & \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} configuration of \\ no point in $S^V$ \end{tabular} } } \end{equation} From this induction forward, the proof of \cite[8.4.1]{tomDieck79} applies verbatim and shows that on top of this initial Hopf degree number of -1 (a charge at $0 \in S^0$) or $0$ (no charge at $0 \in S^0$) there may now be further $N_H \cdot \vert W_G(H)\vert$-worth of Hopf degree at the next higher Elmendorf stage $H$, and so on. \end{proof} \begin{example}[$\ensuremath{\mathbb Z}_2$-equivariant Cohomotopy] Consider $$ c \;\in\; \pi^{\mathbf{n}_{\mathrm{sgn}}}_{\mathbb{Z}_2} \big( (\mathbb{R}^{\mathbf{n}_{\mathrm{sgn}}})^{\mathrm{cpt}} \big) $$ (i.e., a cocycle in $\mathbb{Z}_2$-equivariant Cohomotopy vanishing at infinity \eqref{VanishingAtInfinity} of the $n$-dimensional Euclidean orientifold $\mathbb{R}^{\mathbf{n}_{\mathrm{sgn}}}$ \eqref{EuclideanGSpace} underlying the $n$-dimensional sign representation $\mathbf{n}_{\mathrm{sgn}}$, as in \hyperlink{FigureI}{\it Figure I}, hence the equivariant Cohomotopy of the representation sphere $S^{\mathbf{n}_{\mathrm{sgn}}}$ \eqref{RepSpheres}, as in \hyperlink{FigureJ}{\it Figure J}, in compatible RO-degree $\mathbf{n}_{\mathrm{sgn}}$, by Example \ref{ExamplesOfCompatibleRODegree}). Then the unstable equivariant Hopf degree theorem \ref{UnstableEquivariantHopfDegreeTheorem} says, when translated to a geometric situation via the unstable Pontrjagin-Thom theorem \eqref{UnstablePTTheorem}, that: \begin{enumerate}[{\bf (i)}] \vspace{-2mm} \item there either is, or is not, a single charge sitting at the finite fixed point $0 \in S^{\mathbf{n}_{\mathrm{sgn}}}$, corresponding, with \eqref{InductionStartForRepSpheres}, to an offset of $- 1$ or $0$, respectively, in \eqref{TheWeylGroupMultiples}; \vspace{-2mm} \item in addition, there is any integer number (the $N_{1} \in \mathbb{N}$ in \eqref{TheWeylGroupMultiples}) of orientifold mirror pairs (since $\vert W_{\mathbb{Z}_2}(1)\vert = \vert \mathbb{Z}_2\vert = 2$, by \eqref{ExtremeCasesOfWeylGroups}) of charges floating in the vicinity. 
\end{enumerate} \vspace{-4mm} \begin{center} {\hypertarget{FigureL}{}} \begin{tikzpicture}[scale=0.75] \begin{scope}[shift={(0,-1.3)}] \node (X) at (-4.5,6) { \raisebox{44pt}{ $ ( \xymatrix{ \mathbb{R}^{\mathbf{n}_{\mathrm{sgn}}} \ar@(ul,ur)^{ \overset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} sign \\ representation \end{tabular} } } } { \mathbb{Z}_2 } } } )^{\mathrm{cpt}} $}}; \node (sphere) at (6,6) { \raisebox{44pt}{ $ S^{\mathbf{n}_{\mathrm{sgn}}} = ( \xymatrix{ \mathbb{R}^{\mathbf{n}_{\mathrm{sgn}}} \ar@(ul,ur)^{ \overset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} sign \\ representation \end{tabular} } } }{ \mathbb{Z}_2 } } } )^{\mathrm{cpt}} $}}; \draw[->] (X) to node[above] {$c$} (sphere); \node at (-4.5,4.9) {\tiny \color{darkblue} \bf \begin{tabular}{c} Euclidean $n$-space \\ around orientifold singularity \\ compactified by a point at infinity \end{tabular} }; \node at (-4.5,3.9) {$\overbrace{\phantom{----------------}}$}; \node at (6,4.9) {\tiny \color{darkblue} \bf \begin{tabular}{c} representation sphere \\ equivariant Cohomotopy coefficient \end{tabular} }; \node at (6,3.9) {$\overbrace{\phantom{--------------}}$}; \node at (.25,5.7) { \tiny \color{darkblue} \bf equivariant Cohomotopy cocycle }; \end{scope} \begin{scope}[shift={(-4.5,0)}] \draw (0,0) circle (2); \node (infinity1) at (2,0) {\colorbox{white}{$\infty$}}; \node (submanifold1) at (180-0:2) {$ \mathpalette\mathllapinternal{ \mbox{ \bf \tiny \color{darkblue} \begin{tabular}{c} orientifold \\ singularity \end{tabular} } \; } $}; \draw[fill=white] (180-0:2) circle (.07); \node (submanifold2) at (180+50:2) {$\bullet$}; \node (mirrorsubmanifold2) at (180-50:2) {$\bullet$}; \node (submanifold3) at (180+35:2) {$\bullet$}; \node (mirrorsubmanifold3) at (180-35:2) {$\bullet$}; \draw[<->, dashed, darkblue] (submanifold3) to (mirrorsubmanifold3); \draw[<->, dashed, darkblue] (submanifold2) to node[near start] { \raisebox{.6cm}{ \bf \tiny \color{darkblue} \hspace{.2cm} \begin{tabular}{c} orientifold \\ action \end{tabular} } } (mirrorsubmanifold2); \end{scope} \draw[|->, thin, olive] (infinity1) to[bend right=40] node { \colorbox{white}{\bf \tiny \color{darkblue} \hspace{-.5cm} \begin{tabular}{c} cocycle vanishes \\ at infinity \\ (far away from the singularity) \end{tabular} \hspace{-.5cm} } } (7.7,-.2); \node at (-.2,-1.5) { \colorbox{white}{$\phantom{{A A A}\atop {A A} }$} }; \node at (5.1,-1.8) { \colorbox{white}{$\phantom{ A }$} }; \begin{scope}[shift={(4,0)}] \draw (2,0) circle (2); \node at (+.5,0) {$\!\!\!\!\!\!0$}; \node (zero) at (0,0) {$-$}; \node (infinity) at (4,0) {\colorbox{white}{$\infty$}}; \fill[black] (2,0) ++(40+180:2) node (minusepsilon) {\begin{turn}{-45} $)$ \end{turn}}; \fill[black] (2,0) ++(180-40:2) node (epsilon) {\begin{turn}{45} $)$ \end{turn}}; \fill[black] (2.3,0.25) ++(40+180:2) node (label+epsilon) { \tiny $-\epsilon$ }; \fill[black] (2.3,-0.25) ++(-40-180:2) node (label-epsilon) { \tiny $+\epsilon$ }; \draw[<->, dashed, darkblue] (label+epsilon) to node {\tiny $\mathbb{Z}_2$} (label-epsilon); \end{scope} \draw[|->, olive] (mirrorsubmanifold2) to[bend left=18] (zero); \draw[|->, olive] (mirrorsubmanifold3) to[bend left=18] node { \colorbox{white}{\bf \tiny \color{darkblue} \hspace{-.5cm} \begin{tabular}{c} submanifolds away from \\ fixed point/singularity \end{tabular} \hspace{-.5cm} } } (zero); \draw[|->, thin, olive] (submanifold2) to[bend right=18] (zero); \draw[|->, thin, olive] (submanifold3) to[bend 
right=18] node[near end] { \colorbox{white}{\bf \tiny \color{darkblue} \begin{tabular}{c} mirror submanifolds \end{tabular} } } (zero); \draw[|->, thin, brown] (submanifold1) to[bend left=11] node[near end] { \hspace{-.6cm} \colorbox{white}{\bf \tiny \color{darkblue} \hspace{-.6cm} \begin{tabular}{c} submanifold inside \\ fixed point/singularity \end{tabular} \hspace{-.6cm} } } (zero); \end{tikzpicture} \end{center} \vspace{-.6cm} \noindent {\bf \footnotesize Figure L -- The $\mathbb{Z}_2$-Equivariant Cohomotopy of Euclidean $n$-orientifolds vanishing at infinity} {\footnotesize according to the unstable equivariant Hopf degree theorem \ref{UnstableEquivariantHopfDegreeTheorem} applied to sign-representation spheres (\hyperlink{FigureJ}{\it Figure J}) and visualized by the corresponding configurations of charged points via the unstable Pontrjagin-Thom construction \eqref{UnstablePTTheorem}, in equivariant enhancement of the situation shown in \hyperlink{FigureE}{\it Figure E}. The same situation, just crossed with an interval, appears in the application to M5/MO5 charge in \hyperlink{FigureV}{\it Figure V}. } \end{example} \medskip It is possible and instructive to make this fully explicit in the simple special case of the 1-dimensional sign representation, where the statement of the equivariant Hopf degree theorem \ref{UnstableEquivariantHopfDegreeTheorem} may be verified in elementary terms: It is readily checked that all the continuous functions $c^1 : S^1 \to S^1$ which take $0$ to either of $0, \infty \in S^1$ and wind around at constant parameter speed are $\mathbb{Z}_2$-equivariant, hence are Elmendorf stages \eqref{SystemOfMapsOnHFixedSubspaces} of $\mathbb{Z}_2$-equivariant cocycles $c$: \begin{equation} \label{EquivariantMapS1sgnToItself} \xymatrix@R=2pt{ \mathpalette\mathllapinternal{ \big( \mathbb{R}^{\mathbf{1}_{\mathrm{sgn}}}\big)^{\mathrm{cpt}} \simeq \; } S^{\mathbf{1}_{\mathrm{sgn}}} \ar[rr]^-c && S^{\mathbf{1}_{\mathrm{sgn}}} \\ S^1 \ar[rr]^-{c^1} && S^1 \\ \\ S^0 \ar@{^{(}->}[uu] \ar[rr]^-{c^{\mathbb{Z}_2}} && S^0 \ar@{^{(}->}[uu] } \,.
\end{equation} If such a function vanishes at infinity \eqref{VanishingAtInfinity}, in that it takes $\infty \mapsto \infty$ as shown in \hyperlink{FigureL}{\it Figure L}, then we have one of two cases: \begin{enumerate}[{\bf (i)}] \vspace{-2mm} \item either $c^1$ {\it winds an odd number} of times, so that \eqref{TheWeylGroupMultiples} reads: \vspace{-2mm} $$ \; \mathrm{deg}(c^1) = \; \overset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} offset } } } { \overbrace{1} } \;-\; N_{1} \cdot 2 \,, $$ in which case it satisfies $c^1(0) = 0$, so that under the PT-theorem \eqref{UnstablePTTheorem} there is precisely one charge at the singular fixed point, together with the even integer number $2 \cdot N_1 \in \mathbb{Z}$ of net charges in its ``vicinity'' (namely: away from infinity) which are arranged in $\mathbb{Z}_2$-mirror pairs, due to the $\mathbb{Z}_2$-equivariance of $c$; this is what is shown on the left of \hyperlink{FigureL}{\it Figure L}; \vspace{-2mm} \item or $c^1$ {\it winds an even number} of times so that \eqref{TheWeylGroupMultiples} reads: \vspace{-2mm} $$ \; \mathrm{deg}(c^1) = \overset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} offset } } }{ \overbrace{0} } \;-\; N_{1} \cdot 2 \,, $$ in which case it satisfies $c^1(0) = \infty$, so that under the PT-theorem \eqref{UnstablePTTheorem} there is no charge at the singular fixed point, but a net even integer number $2 \cdot N_1 \in \mathbb{Z}$ of charges in its vicinity, as before. \end{enumerate} \begin{remark} [Number of branes and offset] Notice that: \begin{enumerate}[{\bf (i)}] \vspace{-3mm} \item For $N_1 = 0$ (no branes) this is the situation of \eqref{InductionStartForRepSpheres}: either there is a non-vanishing charge associated with the singular fixed point (O-plane charge), or not. \vspace{-3mm} \item Furthermore, if there is, it is either +1 or -1, so that in general the charge associated with the singular fixed point is in $\{0, \pm 1\}$, as befits O-plane charge according to \hyperlink{FigureOP}{\it Figure OP}. \vspace{-3mm} \item The offset is relevant only modulo 2, so that we could have chosen an offset of $+1$ instead of $-1$ in the first case. This choice just fixes the sign convention for D-brane/O-plane charge. \end{enumerate} \end{remark} \noindent {\bf Characterizing the brane content around a singularity.} In the above example in RO-degree $\mathbf{1}_{\mathrm{sgn}}$ \eqref{EquivariantMapS1sgnToItself}, it is clear that the configurations of branes implied by the unstable equivariant Hopf degree theorem (Theorem \ref{UnstableEquivariantHopfDegreeTheorem}) appear in multiples of the regular $G$-set around a fixed O-plane charge stuck in the singularity, as illustrated in \hyperlink{FigureL}{\it Figure L} and as demanded by the local/twisted tadpole cancellation conditions according to \hyperlink{Table1}{\it Table 1}. In order to prove that this is the case generally, we now turn to the stabilized equivariant Hopf degree theorem (Theorem \ref{CharacterizationOfStabilizationOfUnstableCohomotopy} below), which concretely characterizes the (virtual) $G$-sets of branes that may appear, as classified by equivariant Cohomotopy. \medskip \subsubsection{Stable equivariant Hopf degree} \label{StableEquivariantHopfDegree} In a homotopy-theoretic incarnation of perturbation theory, we may approximate unstable equivariant Cohomotopy by its homotopically linearized, namely stabilized (see \cite{BSS18}) version (Theorem \ref{CharacterizationOfStabilizationOfUnstableCohomotopy} below).
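For orientation, the simplest instance of this homotopical linearization is the non-equivariant case $G = 1$: for maps preserving the point at infinity, the unstable Hopf degrees stabilize as (compare \eqref{HopfDegreeTheorem}, \eqref{UnstableRangeHopfDegreeTheorem} and \eqref{HopfDegreesUnderSuspension})
$$
  \pi^n\big( S^n \big) \;\simeq\; \mathbb{Z} \;\;\mbox{for}\;\; n \geq 1\,,
  \phantom{AAA}
  \pi^0\big( S^0 \big) \;\simeq\; \{0,1\} \;\hookrightarrow\; \mathbb{Z} \;\simeq\; \mathbb{S}^0\,,
$$
so that stabilization retains the integer Hopf degrees in positive dimensions and enlarges the unstable degrees in dimension 0 to the full integers; the equivariant discussion below refines this picture Elmendorf stage-wise.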
We briefly recall the basics of stable equivariant Cohomotopy in RO-degree 0 (\cite{Segal71}, see \cite[7.6 \& 8.5]{tomDieck79}\cite{Lueck}) before applying this in Theorem \ref{CharacterizationOfStabilizationOfUnstableCohomotopy} and Prop. \ref{TheoremLocalTadpoleCancellation} below. \medskip \noindent {\bf Equivariant suspension.} For $V,W \in \mathrm{RO}(G)$ two orthogonal linear $G$-representations, and for $$ \big[ S^V \simeq \big( \mathbb{R}^V\big)^{\mathrm{cpt}} \overset{c}{\longrightarrow} \big( \mathbb{R}^V\big)^{\mathrm{cpt}} \simeq S^V \big] \;\in\; \pi^V_G \big( \big( \mathbb{R}^V \big)^{\mathrm{cpt}} \big) $$ the class of a cocycle in the equivariant Cohomotopy \eqref{EquivariantCohomotopySet} of the Euclidean $G$-space $\mathbb{R}^V$ \eqref{EuclideanGSpace} in compatible $\mathrm{RO}$-degree $V$ (Example \ref{ExamplesOfCompatibleRODegree}) and vanishing at infinity \eqref{VanishingAtInfinity}, we obtain the class of a cocycle vanishing at infinity on the product $G$-space $\mathbb{R}^{V \oplus W}$ \eqref{CartesianProductOfEuclideanGSpaces} in compatible degree $V \oplus W$, simply by forming the Cartesian product of $c$ with the identity on $\mathbb{R}^W$. This is the {\it equivariant suspension} of $c$ by RO-degree $W$: \begin{equation} \label{EquSuspension} \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} equivariant suspension \\ by RO-degree $W$ \\ of equivariant Cohomotopy cocycle \end{tabular} } } }{ \Sigma^W c } \hspace{1cm} \coloneqq \hspace{.2cm} \Big[ \big(\mathbb{R}^V \times \mathbb{R}^W\big)^{\mathrm{cpt}} \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} \phantom{a} \\ compactified Cartesian product \\ of cocycle with identity on $\mathbb{R}^W$ \end{tabular} } } }{ \xrightarrow{\;\; c \,\times\, \mathrm{id}_{\mathbb{R}^W} } } \big(\mathbb{R}^V \times \mathbb{R}^W\big)^{\mathrm{cpt}} \Big] \;\in\; \pi^{V \oplus W}_G \big( \big( \mathbb{R}^{V \oplus W} \big)^{\mathrm{cpt}} \big) \,. \end{equation} Note that this reduces to the ordinary suspension operation \cref{HopfDegreesUnderSuspension} for $G = 1$ the trivial group, hence for RO-degrees $\mathbf{n}_{\mathbf{triv}} = n$. These equivariant suspension operations form a directed system on the collection of equivariant Cohomotopy sets \eqref{EquivariantCohomotopySet}, indexed by inclusions of orthogonal linear representations: \begin{equation} \label{DirectedSystemOfEquivariantSuspensionMaps} \big( V \hookrightarrow V \oplus W \big) \;\;\longmapsto\;\; \Big( \pi^V_G\big( \big(\mathbb{R}^V\big)^{\mathrm{cpt}} \big) \xrightarrow{ \;\Sigma^W} \pi^{V \oplus W}_G\big( \big(\mathbb{R}^{V \oplus W}\big)^{\mathrm{cpt}} \big) \Big) \,. 
\end{equation} \medskip \noindent {\bf Stable equivariant Cohomotopy.} As a consequence of the above, one may consider the union of all unstable equivariant Cohomotopy sets of representation spheres in all compatible degrees, with respect to the identifications along the equivariant suspension maps \eqref{DirectedSystemOfEquivariantSuspensionMaps} (the colimit of this system): \begin{equation} \label{StableEquivariantCohomotopyOfThePoint} \hspace{-1.8cm} \underset{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} unstable equivariant Cohomotopy set \\ in compatible RO-degree $V$ \end{tabular} } }{ \pi^V_G \big( \big( \mathbb{R}^V \big)^{\mathrm{cpt}} \big) } \;\; \underset{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} stabilization map \\ (coprojection into colimit) \end{tabular} } }{ \xymatrix{ \;\;\;\; \ar[rr]^{\Sigma^\infty} && } } \;\;\;\;\;\;\;\;\;\; \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} union (colimit) of \\ all unstable Cohomotopy sets \\ in compatible RO-degrees \\ identified along equivariant suspensions \end{tabular} } } }{ \underset{ \underset{W}{\longrightarrow} }{\mathrm{lim}} \; \pi^W_G \big( \big( \mathbb{R}^W \big)^{\mathrm{cpt}} \big) } \hspace{.7cm}= \hspace{.8cm} \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} stable \\ equivariant Cohomotopy ring \\ in RO-degree 0 \end{tabular} } } }{ \mathbb{S}^0_G }. \end{equation} Since the resulting union/colimit is, by construction, stable under taking further such suspensions, this is called the {\it stable equivariant Cohomotopy in degree 0} (\cite[p. 1]{Segal71}, see \cite[p. 9-10]{Lueck}) also called the \emph{0th stable $G$-equivariant homotopy group of spheres} or the \emph{$G$-equivariant stable 0-stem} or similar (see \cite[IX.2]{May96}\cite[3]{Schwede}). Notice that here the stable RO-degree is the formal difference of the unstable RO-degree by the RO-degree of the singularity, so that vanishing stable RO-degree is another expression of compatibility of unstable degree, in the sense of Example \ref{ExamplesOfCompatibleRODegree}: \vspace{-5mm} $$ - \underset{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} RO-degree of \\ singularity \end{tabular} } }{ \underbrace{ V } } + \underset{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} compatible RO-degree of \\ unstable Cohomotopy \end{tabular} } }{ \underbrace{ V } } = \underset{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} RO-degree of \\ stable Cohomotopy \end{tabular} } }{ \underbrace{ 0 } } $$ via $$ \pi_V(S^W)=[S^V, S^W]=\pi^W(S^V) \xrightarrow{\Sigma^\infty} S^{W- V}. $$ Extensive computation of stable $\mathbb{Z}_2$-equivariant Cohomotopy of representation spheres in non-vanishing RO-degrees, i.e., computation of the abelian groups $\mathbb{S}^{\mathbf{n}_{\mathrm{sgn}} + \mathbf{m}_{\mathrm{triv}}}_{\mathbb{Z}_2}$, is due to \cite{ArakiIriye82}\cite{Iriye82}; see also \cite[5]{DuggerIsaksen16}\cite[p. 10-15]{Dugger08}. Under \hyperlink{HypothesisH}{\it Hypothesis H}, these groups are relevant for tadpole cancellation with branes wrapping orientifold singularities non-transversally. This is of interest to us but goes beyond the scope of this article. \medskip \noindent {\bf Equivalence to the Burnside ring.} Due to the stabilization, the stable equivariant Cohomotopy set \eqref{StableEquivariantCohomotopyOfThePoint} has the structure of an abelian group, in fact the structure of a ring. 
As such, it is isomorphic to the {\it Burnside ring} $A(G)$ of virtual $G$-sets (\cite{Burnside01}\cite{Solomon67}\cite[1]{tomDieck79}, for exposition in our context see \cite[2]{SS19b}): \vspace{-2mm} \begin{equation} \label{IsoToAG} \hspace{1cm} \overset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} stable \\ equivariant Cohomotopy \end{tabular} } } }{ \mathbb{S}_G^0 } \hspace{8mm} \simeq \hspace{3mm} \overset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} Burnside ring \end{tabular} } } }{ A(G) } \;\;=\;\; \big\{ \!\! \mbox{ Virtual $G$-sets } \!\! \big\}. \end{equation} This result is due to \cite[p. 2]{Segal71}; see \cite[7.6.7 \& 8.5.1]{tomDieck79}\cite[1.13]{Lueck}, we highlight its geometric meaning below; see \hyperlink{FigureM}{\it Figure M}. This is a non-linear analog (more precisely, the analog over the absolute base ``field'' $\mathbb{F}_1$ \cite[p. 3]{Cohn04}\cite[2.5.6]{Durov07}) of the fact that the equivariant K-theory in degree 0 is the representation ring of virtual linear $G$-representations over the field of real numbers (see, e.g., \cite[3]{Greenlees05}): \begin{equation} \label{ROG} \overset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} equivariant K-theory \end{tabular} } } }{ \mathrm{KO}_G^0 } \hspace{4mm} \simeq \hspace{4mm} \overset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} representation ring \end{tabular} } } }{ \mathrm{RO}(G) } \; \;=\; \big\{ \!\! \mbox{ Virtual $G$-representations } \!\! \big\}. \end{equation} In fact, the operation $S \mapsto \mathbb{R}[S]$ that sends a (virtual) $G$-set $S \in \mathrm{A}(G)$ to its linearization, hence to its linear span $\mathbb{R}[S]$, hence to the (virtual) permutation representation that it induces (see \cite[4]{tomDieck79}\cite[2]{SS19b}), is a ring homomorphism from the Burnside ring to the representation ring. 
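For concreteness, the simplest case $G = \mathbb{Z}_2$ of this linearization may be spelled out explicitly (a small illustration, in the notation for trivial, sign and regular representations used throughout): the Burnside ring $A(\mathbb{Z}_2)$ is free abelian on the two transitive $\mathbb{Z}_2$-sets $\mathbb{Z}_2/\mathbb{Z}_2$ (the fixed point) and $\mathbb{Z}_2/1$ (the free orbit), whose linear spans are
$$
  \mathbb{R}\big[ \mathbb{Z}_2/\mathbb{Z}_2 \big] \;=\; \mathbf{1}_{\mathrm{triv}}\,,
  \phantom{AAA}
  \mathbb{R}\big[ \mathbb{Z}_2/1 \big] \;=\; \mathbf{1}_{\mathrm{triv}} \oplus \mathbf{1}_{\mathrm{sgn}} \;=\; \mathbf{2}_{\mathrm{reg}}\,,
$$
so that a virtual $\mathbb{Z}_2$-set $a \cdot [\mathbb{Z}_2/\mathbb{Z}_2] + b \cdot [\mathbb{Z}_2/1]$ is sent to the virtual representation $a \cdot \mathbf{1}_{\mathrm{triv}} + b \cdot \mathbf{2}_{\mathrm{reg}}$; this is the pattern of O-plane and brane charges made precise in Prop. \ref{TheoremLocalTadpoleCancellation} below.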
Furthermore, it exhibits the value on the point of the unique multiplicative morphism from equivariant stable Cohomotopy theory to equivariant K-theory, called the \emph{Boardman homomorphism} \cite[II.6]{Adams74}, which is the Hurewicz homomorphism generalized from ordinary cohomology to generalized cohomology theories: \vspace{-2mm} \begin{equation} \label{BoardmanH} \xymatrix@C=1.6em@R=2pt{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} equivariant stable Cohomotopy \end{tabular} } & \mathbb{S}_G^0 \ar@{}[dd]|-{ \begin{rotate}{270} $\!\!\simeq$ \end{rotate} } \ar[rrrr]^{ \overset{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} Boardman homomorphism \end{tabular} } }{ \beta } } &&&& \mathrm{KO}_G^0 \ar@{}[dd]|-{ \begin{rotate}{270} $\!\!\simeq$ \end{rotate} } & \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} equivariant K-theory \end{tabular} } \\ \\ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} Burnside ring \end{tabular} } & \mathrm{A}(G) \ar[rrrr]_{ \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} linearization \\ sending $G$-sets $S$ to \\ linear $G$-representations $\mathbb{R}[S]$ \end{tabular} } } }{ S \, \mapsto \, \mathbb{R}[S] } } &&&& \mathrm{RO}(G) & \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} representation ring \end{tabular} } } \end{equation} In summary, the composite of the stabilization morphism \eqref{StableEquivariantCohomotopyOfThePoint} with the isomorphism \eqref{IsoToAG} to the Burnside ring explicitly extracts from any cocycle $c$ in unstable equivariant Cohomotopy a virtual $G$-set $\{\mathrm{branes}\}$, hence a virtual $G$-permutation representation $\mathbb{R}[ \{\mathrm{branes}\}]$. The following theorem explicitly identifies this $G$-set $\{\mathrm{branes}\}$ in terms of the Elmendorf stage-wise Hopf degrees of the cocycle $c$; see \hyperlink{FigureM}{\it Figure M} below for illustration. \begin{theorem}[\bf Stabilized equivariant Hopf degree theorem for representation spheres] \label{CharacterizationOfStabilizationOfUnstableCohomotopy} Consider a cocycle $c$ in unstable Cohomotopy of the vicinity of a $G$-singularity $\mathbb{R}^V$ (Def. \ref{CohomotopyOfVicinityOfSingularity}). Its image under stabilization in equivariant stable Cohomotopy \eqref{StableEquivariantCohomotopyOfThePoint} is, under the identification \eqref{IsoToAG} with the Burnside ring, precisely that virtual $G$-set $\{\mathrm{branes}\} \in A(G)$ whose net number of $H$-fixed points (``Burnside marks'', see \cite[2]{SS19b}) equals the Hopf degree of $c$ at any Elmendorf stage $H \in \mathrm{Isotr}_X(G)$ \eqref{ElmedorfStageWiseHopfDegrees}.
Hence if $H = \langle g\rangle$ is a cyclic group generated by an element $g \in G$, this number also equals the character value at $g$ (i.e., the trace of the linear action of $g$) on the linear representation $\mathbb{R}\big[\{\mathrm{branes}\}\big]$: \vspace{-8mm} \begin{equation} \label{MorphismFromUnstableEquivariantCohomotopyToRepresentationRing} \xymatrix@R=1pt{ & \overset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} cocycle in \\ unstable equivariant Cohomotopy \end{tabular} } } }{ c } \ar@{}[dd]|-{ \begin{rotate}{270} $\!\!\in$ \end{rotate} } \ar@{|->}[rr] && \overset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} cocycle in stable equivariant Cohomotopy \\ $\simeq$ virtual $G$-set of $\mathrm{branes}$ \end{tabular} } } }{ \{\mathrm{branes}\} } \ar@{}[dd]|-{ \begin{rotate}{270} $\!\!\in$ \end{rotate} } \ar@{|->}[rr] && \overset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} virtual linear $G$-representation \\ spanned by virtual $G$-set of branes \end{tabular} } } }{ \mathbb{R}[ \{\mathrm{branes}\}] } \ar@{}[dd]|-{ \begin{rotate}{270} $\!\!\in$ \end{rotate} } \\ \\ & \pi^V_G \big( \big( \mathbb{R}^V \big)^{\mathrm{cpt}} \big) \ar@{}[dd]|-{ \mathpalette\mathllapinternal{ \mbox{\bf \tiny \color{darkblue} \eqref{UnstableEquivariantCohomotopyOfRepresentationSphereInCompatibleDegree} } \; } \begin{rotate}{270} $\!\!\!\!\!\simeq$ \end{rotate} } \ar[rr]^-{ \overset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} stabilization } } }{ \Sigma^\infty } } && \mathbb{S}_G^0 \ar@{}[dd]|-{ \mathpalette\mathllapinternal{ \mbox{\bf \tiny \color{darkblue} \eqref{IsoToAG} } \; } \begin{rotate}{270} $\!\!\!\!\!\simeq$ \end{rotate} } \ar[rr]^-{\beta} && \mathrm{KO}_G^0 \ar@{}[dd]|-{ \mathpalette\mathllapinternal{ \mbox{\bf \tiny \color{darkblue} \eqref{ROG} } \; } \begin{rotate}{270} $\!\!\!\!\!\simeq$ \end{rotate} } \\ \\ & \mathbb{Z}^{{}^{\mathrm{Isotr}^{d_{\mathrm{fix}}> 0 }_X(G)}} \times \{0,1\}^{{}^{\mathrm{Isotr}^{d_{\mathrm{fix}}= 0 }_X(G)}} \ar[rr] && \mathrm{A}(G) \ar[rr]_-{ \underset{ \mbox{\bf \tiny \color{darkblue} linearization } } { \mathbb{R}[-] } } && \mathrm{RO}(G) \\ \underset { \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} Elmendorf \\ stage \eqref{SystemOfMapsOnHFixedSubspaces} \end{tabular} } } } { G \supset H \mathpalette\mathrlapinternal{\; \colon} } & \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} Hopf degree at stage $H$ \eqref{ElmedorfStageWiseHopfDegrees} \\ of Cohomotopy cocycle \end{tabular} } } }{ \mathrm{deg}\big( c^H \big) } \ar@{=}[rr] && \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} net number of $H$-fixed elements \\ = Burnside marks at stage $H$ \\ in virtual set of $\mathrm{branes}$ \end{tabular} } } }{ \Big\vert \{\mathrm{branes}\}^H \Big\vert } \\ \underset { \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} Elmendorf stage \\ at cyclic subgroup \\ generated by $g \in G$ \end{tabular} } } } { H = \langle g\rangle \mathpalette\mathrlapinternal{\; \colon} } & \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} Hopf degree at stage $H = \langle g\rangle$ \eqref{ElmedorfStageWiseHopfDegrees} \\ of Cohomotopy cocycle \end{tabular} } } }{ \mathrm{deg}\big( c^{\langle g \rangle} \big) } \ar@{=}[rr] && \underset{ 
\mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} virtual number of fixed points \\ under action of $g \in G$ \\ on virtual set of branes \end{tabular} } } }{ \Big\vert \{\mathrm{branes}\}^{\langle g \rangle} \Big\vert } \ar@{=}[rr] && \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} character value at $g$ \\ of virtual permutation representation \\ spanned by $\mathrm{branes}$ \end{tabular} } } }{ \mathrm{Tr}_{g}\big( \mathbb{R}[ \{\mathrm{branes} \} ] \big) } } \end{equation} \end{theorem} \begin{proof} For the case that all fixed subspace dimensions are positive, this is essentially the statement of \cite[8.5.1]{tomDieck79}, after unwinding the definitions there (see \cite[p. 190]{tomDieck79}). We just need to see that the statement generalizes as claimed to the case where the full fixed subspace $\big( S^V\big)^G = S^0$ is the 0-sphere. But, under the stabilization map $\Sigma^\infty$ \eqref{StableEquivariantCohomotopyOfThePoint}, a Cohomotopy cocycle $S^V \xrightarrow{\;\;c\;\;} S^V$ and its equivariant suspension \eqref{EquSuspension} $ S^{\mathbf{1}_{\mathrm{triv}}\oplus V } \xrightarrow{ \Sigma^{\mathbf{1}_{\mathrm{triv}}}c } S^{\mathbf{1}_{\mathrm{triv}} \oplus V} $ by, in particular, the trivial 1-dimensional representation, have the same image $ \Sigma^\infty ( c ) \simeq \Sigma^\infty \big( \Sigma^{\mathbf{1}_{\mathrm{triv}}} c \big) $. Now to the suspended cocycle $\Sigma^{\mathbf{1}_{\mathrm{triv}}} c$ the theorem \cite[8.5.1]{tomDieck79} applies, and hence the claim follows from the fact \eqref{HopfDegreesUnderSuspension} that the unstable Hopf degree in $\{0,1\}$ injects under suspension into the stable Hopf degrees: $$ \big[ S^0 = \big( S^V\big)^G \xrightarrow{\;c^G\;} \big( S^V\big)^G = S^0 \big] \;\in\; \{0,1\} \hookrightarrow \mathbb{Z} \,. $$ \vspace{-7mm} \end{proof} \medskip \noindent {\it For ADE-singularities}, this implies the following (see \hyperlink{FigureM}{\it Figure M}): \begin{prop}[\bf Classification of Cohomotopy charge in the vicinity of ADE-singularities] \label{TheoremLocalTadpoleCancellation} Consider $G = G^{\mathrm{ADE}} \subset \mathrm{SU}(2)$ a finite ADE-group \eqref{ADESubgroups} and $\mathbf{4}_{\mathbb{H}}$ its canonical quaternionic representation \eqref{TheQuaternionicRepresentation}. Then the homomorphism \eqref{MorphismFromUnstableEquivariantCohomotopyToRepresentationRing} from Theorem \ref{CharacterizationOfStabilizationOfUnstableCohomotopy} identifies the unstable Cohomotopy of the vicinity of the $G^{\mathrm{ADE}}$-singularity $\mathbb{R}^{\mathbf{4}_{\mathbb{H}}}$ (Def.
\ref{CohomotopyOfVicinityOfSingularity}) with its image in the representation ring \vspace{-3mm} $$ \xymatrix{ \pi^{\mathbf{4}_{\mathbb{H}}}_{G^{\mathrm{ADE}}} \big( \big( \mathbb{R}^{\mathbf{4}_{\mathbb{H}}} \big)^{\mathrm{cpt}} \big) \; \ar@{^{(}->}[rr]^{ \beta \circ \Sigma^\infty } && \mathrm{KO}_G^0 \simeq\mathrm{RO}(G) } $$ which consists of all the virtual representations of the form \begin{equation} \mathbb{R}[\{\mathrm{branes}\}] \;=\; N_{\color{darkblue} \bf \mathrm{Opla}} \cdot \mathbf{1}_{\mathrm{triv}} - N_{\color{darkblue} \bf \mathrm{brane} \atop \mathrm{int}} \cdot \mathbf{k}_{\mathrm{reg}} \phantom{AAAA} \mbox{for} \phantom{AA} \begin{array}{rcl} N_{\color{darkblue} \bf \mathrm{brane} \atop \mathrm{int}} &\in& \mathbb{N} \,, \\ N_{\color{darkblue} \bf \mathrm{Opla}} &\in& \{0, 1\} \end{array} \end{equation} hence of the form of the local/twisted tadpole cancellation conditions in \hyperlink{Table1}{\it Table 1} and \hyperlink{Table2}{\it Table 2}. \end{prop} \begin{proof} By \eqref{FixedSubspacesOfQuaternionRepresentation}, the representation $\mathbf{4}_{\mathbb{H}}$ is such that \emph{every} non-trivial subgroup $1 \neq H \subset G$ has a 0-dimensional fixed space: $$ \mathrm{dim} \left( \big( \mathbb{R}^{\mathbf{4}_{\mathbb{H}}} \big)^H \right) \;=\; \left\{ \begin{array}{cc} 4 & \mbox{if}\;H = 1 \\ 0 & \mbox{otherwise}. \end{array} \right. $$ This means that for $ c \in \pi^{\mathbf{4}_{\mathbb{H}}} \big( \big( \mathbb{R}^{\mathbf{4}_{\mathbb{H}}}\big)^{\mathrm{cpt}} \big) $ an equivariant Cohomotopy cocycle in the vicinity of an ADE-singularity, its only Elmendorf stage-wise Hopf degree \eqref{ElmedorfStageWiseHopfDegrees} in positive dimension is, by equation \eqref{TheWeylGroupMultiples} in Theorem \ref{UnstableEquivariantHopfDegreeTheorem}, of the form $$ \mathrm{deg}\big( c^1 \big) \;=\; \overset{ \mbox{\tiny $\in \{0, 1\}$} }{ \overbrace{ Q_{\mathrm{Opla}} } } - N_1 \cdot \vert G \vert \,, $$ where we used the fact that $W_G(1) = G$ \eqref{ExtremeCasesOfWeylGroups}. But, by Theorem \ref{CharacterizationOfStabilizationOfUnstableCohomotopy}, this implies that the virtual $G$-set $\{\mathrm{branes}\}$ of branes corresponding to $c$ has the following Burnside marks $$ \{\mathrm{branes}\}^H \;=\; \left\{ \begin{array}{cc} Q_{\mathrm{Opla}} - N_1 \cdot \left\vert G \right\vert & \mbox{if}\; H = 1 \\ Q_{\mathrm{Opla}} & \mbox{otherwise} \,, \end{array} \right. $$ hence that the corresponding permutation representation of branes has the following characters: $$ \mathrm{Tr}_g\big( \mathbb{R}\{\mathrm{branes}\}\big) \;=\; \left\{ \begin{array}{cc} Q_{\mathrm{Opla}} - N_1 \cdot \left\vert G \right\vert & \mbox{if}\; g = e \\ Q_{\mathrm{Opla}} & \mbox{otherwise}\,. \end{array} \right. 
$$ The unique $G$-set/$G$-representation with these Burnside marks/characters is the sum of the $N_1$-fold multiple of the regular $G$-set/$G$-representation and the $Q_{\mathrm{Opla}}$-fold multiple of the trivial representation (see \hyperlink{FigureM}{\it Figure M}): $$ \mathbb{R}[\{\mathrm{branes}\}] \;=\; Q_{\mathrm{Opla}} \cdot \mathbf{1}_{\mathrm{triv}} \;-\; N_1 \cdot \mathbf{k}_{\mathrm{reg}} \,, $$ \vspace{-7mm} \end{proof} \medskip \noindent The situation is illustrated by \hyperlink{FigureM}{\it Figure M}: \begin{center} \hypertarget{FigureM}{} \begin{tikzpicture}[scale=.8] \begin{scope}[shift={(0,1)}] \draw node at (0,7.2) { \tiny \color{darkblue} \bf \begin{tabular}{c} equivariant Cohomotopy \\ vanishing at infinity \\ of Euclidean $G$-space \\ in compatible RO-degree $V$ \end{tabular} }; \draw node at (0,6) { $ \pi^{V}_{{}_{G}} \big( \big( \mathbb{R}^V \big)^{\mathrm{cpt}} \big) $ }; \draw[->] (0+2,6) to node {\colorbox{white}{\small $\Sigma^\infty$}} node[above] { \raisebox{.3cm}{ \tiny \color{darkblue} \bf stabilization } } (6-.8,6); \draw node at (6,6.9) { \tiny \color{darkblue} \bf \begin{tabular}{c} stable \\ equivariant \\ Cohomotopy \end{tabular} }; \draw node at (6,6) { $ \mathbb{S}_G^0 $ }; \draw[->] (6+.7,6) to node{\colorbox{white}{\footnotesize $\beta$}} node[above] { \raisebox{.3cm}{ \tiny \color{darkblue} \bf \begin{tabular}{c} Boardman \\ homomorphism \end{tabular} } } (11-.9,6); \draw node at (6,5.5) { \begin{rotate}{270} $\!\!\simeq$ \end{rotate} }; \draw node at (6,5) { $ A_G $ }; \draw node at (6,4.3) { \tiny \color{darkblue} \bf \begin{tabular}{c} Burnside \\ ring \end{tabular} }; \draw[->] (6+.7,5) to node { \colorbox{white} { \small $ \underset { \mbox{ \tiny \color{darkblue} \bf linearization } } {\footnotesize \mathbb{R}[-] } $ } } (11-.7,5); \draw node at (11,6.8) { \tiny \color{darkblue} \bf \begin{tabular}{c} equivariant \\ K-theory \end{tabular} }; \draw node at (11,6) { $ \mathrm{KO}_G^0 $ }; \draw node at (11,5.5) { \begin{rotate}{270} $\!\!\simeq$ \end{rotate} }; \draw node at (11.2,5) { $ \mathrm{RO}(G) $ }; \draw node at (11,4.3) { \tiny \color{darkblue} \bf \begin{tabular}{c} representation \\ ring \end{tabular} }; \end{scope} \begin{scope}[shift={(0,.4)}] \draw node at (0,4) {$ \overset{ }{ \mbox{ \tiny e.g. 
one $O^{{}^{-}}\!\!$-plane and two branes } } $}; \draw node at (0,3) { $\overbrace{\phantom{AAAAAAAAAAAAAAAAAAA}}$ }; \draw node at (6,4) {$ \overset{ }{ \mbox{ \tiny \begin{tabular}{c} minus the trivial $G$-set \\ with two regular $G$-sets \end{tabular} } } $}; \draw node at (11,3) { $\overbrace{\phantom{AAAAAA}}$ }; \draw node at (11,4) {$ \overset{ }{ \mbox{ \tiny \begin{tabular}{c} minus the trivial $G$-representation \\ plus two times the regular $G$-representation \end{tabular} } } $}; \draw node at (6,3) { $\overbrace{\phantom{AAAAAA}}$ }; \draw[dashed] (0,0) circle (2); \draw (0,2+.2) node {\footnotesize $\infty$}; \draw (0,-2-.2) node {\footnotesize $\infty$}; \draw (2+.3,0) node {\footnotesize $\infty$}; \draw (-2-.3,0) node {\footnotesize $\infty$}; \draw[fill=white] (0,0) circle (.07); \draw[fill=black] (18:.6) circle (.07); \draw[fill=black] (18+90:.6) circle (.07); \draw[fill=black] (18+180:.6) circle (.07); \draw[fill=black] (18+270:.6) circle (.07); \draw[fill=black] (58:1.2) circle (.07); \draw[fill=black] (58+90:1.2) circle (.07); \draw[fill=black] (58+180:1.2) circle (.07); \draw[fill=black] (58+270:1.2) circle (.07); \draw[|->] (3.6,0) to ++(.5,0); \draw[|->] (8.4,0) to ++(.5,0); \begin{scope}[shift={(6,.2)}] \draw[fill=white] (0,2) circle (0.07); \draw (5,2) node {\small $+\mathbf{1}_{{}_{\mathrm{triv}}}$ }; \draw[|->, olive] (-.05,2.12) arc (30:325:.2); \begin{scope}[shift={(0,.3)}] \draw[fill=black] (0,1) circle (0.07); \draw[fill=black] (0,.5) circle (0.07); \draw[fill=black] (0,0) circle (0.07); \draw[fill=black] (0,-.5) circle (0.07); \draw[|->, olive] (0-.1,1-.05) arc (90+6:270-16:.2); \draw[|->, olive] (0-.1,.5-.05) arc (90+6:270-16:.2); \draw[|->, olive] (0-.1,0-.05) arc (90+6:270-16:.2); \draw[|->, olive] (0+.1,-.5+.1) to[bend right=60] (0+.1,1-.03); \draw (5,0.25) node {\small $-\mathbf{4}_{{}_{\mathrm{reg}}}$ }; \end{scope} \begin{scope}[shift={(0,-1.7)}] \draw[fill=black] (0,1) circle (0.07); \draw[fill=black] (0,.5) circle (0.07); \draw[fill=black] (0,0) circle (0.07); \draw[fill=black] (0,-.5) circle (0.07); \draw[|->, olive] (0-.1,1-.05) arc (90+6:270-16:.2); \draw[|->, olive] (0-.1,.5-.05) arc (90+6:270-16:.2); \draw[|->, olive] (0-.1,0-.05) arc (90+6:270-16:.2); \draw[|->, olive] (0+.1,-.5+.1) to[bend right=60] (0+.1,1-.03); \draw (5,0.25) node {\small $-\mathbf{4}_{{}_{\mathrm{reg}}}$ }; \end{scope} \end{scope} \end{scope} \end{tikzpicture} \end{center} \vspace{-4mm} \noindent {\bf \footnotesize Figure M -- Virtual $G$-representations of brane configurations classified by equivariant Cohomotopy} {\footnotesize in the vicinity of ADE-singularities (Def. \ref{CohomotopyOfVicinityOfSingularity}), according to Prop. \ref{TheoremLocalTadpoleCancellation}, following Theorem \ref{UnstableEquivariantHopfDegreeTheorem} and Theorem \ref{CharacterizationOfStabilizationOfUnstableCohomotopy}. The results reproduces the form of the local/twisted tadpole cancellation conditions in \hyperlink{Table1}{\it Table 1}, \hyperlink{Table2}{\it Table 2}. Shown is a situation for $G = \mathbb{Z}_4$ and $V = \mathbf{2}_{\mathrm{rot}}$ as in \hyperlink{FigureK}{\it Figure K}.} \subsection{Equivariant Hopf degree on tori and Global tadpole cancellation} \label{GlobalTadpoleCancellation} We now globalize the characterization of equivariant Cohomotopy from the vicinity of singular fixed points to compact toroidal orbifolds, in Theorem \ref{UnstableEquivariantHopfDegreeTheoremForTori} below. Prop. 
\ref{PushforwardOfVicinityOfSingularityToRepresentationTorus} below shows that the two are closely related, implying that the local/twisted tadpole cancellation carries over to toroidal orbifolds. Then we informally discuss the enhancement of unstable equivariant Cohomotopy to a super-differential cohomology theory \eqref{DifferentialEquivariantCohomotopyPullback} and show that its implications \eqref{KernelOfTheGlobalElmendorfStageProjection} on the underlying equivariant Cohomotopy enforce the form of the global/untwisted tadpole cancellation conditions. \medskip \noindent {\bf Globalizing from Euclidean orientifolds to toroidal orientifolds.} In \cref{LocalTadpoleCancellation} we discussed the characterization of equivariant Cohomotopy in the vicinity of singularities (according to \hyperlink{Table5}{\it Table 5}). We may globalize this to compact toroidal orientifolds by applying this local construction in the vicinity of each singularity, using that the condition of ``vanishing at infinity'' \eqref{VanishingAtInfinity} with respect to any one singularity means that the local constructions may be glued together. This \emph{local-to-global} construction is indicated in \hyperlink{FigureN}{\it Figure N}: \vspace{-3mm} \begin{center} {\hypertarget{FigureN}{}} \begin{tikzpicture}[scale=0.75] \draw (1.5,6.4) node {$\overbrace{\phantom{------------------------}}$}; \draw (11,6.4) node {$\overbrace{\phantom{---------------}}$}; \draw (1.5,7.3) node {\tiny \color{darkblue} \bf toroidal orientifold}; \draw (11,7.3) node { \tiny \color{darkblue} \bf \begin{tabular}{c} representation sphere \\ equivariant Cohomotopy coefficient \end{tabular} }; \draw (5.8,7.6) node { \tiny \color{darkblue} \bf $\mathbb{Z}_2$-equivariant Cohomotopy cocycle }; % \begin{scope} \clip (-2.9,-2.9) rectangle (5.9,5.9); \draw[step=3, dotted] (-3,-3) grid (6,6); \draw[dashed] (-3,-3) circle (1); \draw[dashed] (0,-3) circle (1); \draw[dashed] (3,-3) circle (1); \draw[dashed] (6,-3) circle (1); \draw[dashed] (-3,0) circle (1); \draw[dashed] (0,0) circle (1); \draw[dashed] (3,0) circle (1); \draw[dashed] (-3,3) circle (1); \draw[dashed] (0,3) circle (1); \draw[dashed] (3,3) circle (1); \draw[dashed] (-3,6) circle (1); \draw[dashed] (0,6) circle (1); \draw[dashed] (3,6) circle (1); \draw[dashed] (6,6) circle (1); \draw (0,.2) node {\colorbox{white}{\tiny $(0,0)$}}; \draw[fill=white] (0,0) circle (.07); \draw[<->, dashed, darkblue] (0-1.9,3+1.6) to node[near start, above] { \tiny \color{darkblue} \bf \begin{tabular}{c} orientifold \\ action \end{tabular} } (0+1.5,3-1.5); \draw (3,0) node {\colorbox{white}{\tiny $(\tfrac{1}{2},0)$}}; \draw (0,3) node {\colorbox{white}{\tiny $(0,\tfrac{1}{2})$}}; \draw (0,3-.25) node {{\tiny \color{darkblue} \bf a fixed point}}; \draw (0,3-.65) node {\colorbox{white}{\tiny \color{darkblue} \bf \begin{tabular}{c}disk around fixed point\end{tabular}}}; \draw (3,3) node { \colorbox{white}{ \tiny $(\tfrac{1}{2},\tfrac{1}{2})$ } }; \draw (3+.25,3+.51) node {$\bullet$}; \draw (3-.25,3-.51) node {$\bullet$}; \end{scope} \draw (0,-3.4) node {\tiny $x_1 = 0$}; \draw (3,-3.4) node {\tiny $x_1 = \tfrac{1}{2}$}; \draw (-3.7,0) node {\tiny $x_2 = 0$}; \draw (-3.7,3) node {\tiny $x_2 = \tfrac{1}{2}$}; % \draw[<->, dashed, darkblue] (11-1,2+1) to node[below, near end] {\tiny $\mathbb{Z}_2$} (11+1,2-1); \node at (11,2) {\colorbox{white}{$\phantom{a}$}}; \draw[dashed] (11,2) circle (2); \node (zero) at (11,2) {\tiny $0$}; \node (infinity) at (11,2+2.1) {\tiny $\infty$}; \node (bottominfinity) at (11,2-2.1) 
{\tiny $\infty$};
%
%
\node (torus) at (1.5,8) {\raisebox{42pt}{$ \mathbb{T}^{\mathbf{n}_{\mathrm{sgn}}} = \xymatrix{ \mathbb{R}^{\mathbf{n}_{\mathrm{sgn}}} \ar@(ul,ur)^{ \overset{ \mathpalette\mathclapinternal{ \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} sign \\ representation \end{tabular} } } } { \mathbb{Z}_2 } } } /\mathbb{Z}^n $}}; \node (sphere) at (11,8) {\raisebox{42pt}{$ S^{\mathbf{n}_{\mathrm{sgn}}} = D( \xymatrix{ \mathbb{R}^{\mathbf{n}_{\mathrm{sgn}}} \ar@(ul,ur)^{ \overset{ \mathpalette\mathclapinternal{ \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} sign \\ representation \end{tabular} } } } { \mathbb{Z}_2 } } } )/S(\mathbb{R}^{\mathbf{n}_{\mathrm{sgn}}}) $}}; \draw[->, thin] (torus) to node[above]{c} (sphere); \draw[|->, thin, olive] (3+.4,3+.5) to[bend left=5] node { \hspace{-1.3cm} \colorbox{white}{ \tiny \color{darkblue} \bf some charge at this singularity } } (zero); \draw[|->, thin, olive] (3-.1,3-.5) to[bend left=5] node { \hspace{-1.3cm} \colorbox{white}{ \tiny \color{darkblue} \bf mirror charge } } (zero); \draw[|->, thin, olive] (0+.2,0+0) to[bend left=6.7] (zero); \draw[|->, thin, olive] (0-0,0-.5) to[bend left=6] (11-.1,2-.3); \draw[|->, thin, olive] (0+1,0+0) to[bend left=6.7] node { \hspace{-.8cm} \colorbox{white}{ \tiny \color{darkblue} \bf O-plane unit charge at this singularity } } (11+1.9,2+0);
%
\draw[|->, thin, olive] (1.5+0,4+0) to[bend left=13] node[below] {\hspace{.4cm}\colorbox{white}{\tiny \color{darkblue} \bf vanishing charge far away from all singularities}} (infinity);
%
\draw[|->, thin, olive] (3+.2,0-.5) to[bend left=7] (bottominfinity); \draw[|->, thin, olive] (3+.4,0+0) to[bend left=7] node {\colorbox{white}{\tiny \color{darkblue} \bf no charge at this singularity}} (bottominfinity); \draw[|->, thin, olive] (3+.6,0-.8) to[bend left=7] (bottominfinity); \end{tikzpicture} \end{center} \vspace{-5mm} \noindent {\bf \footnotesize Figure N -- Equivariant Cohomotopy cocycle on toroidal orbifolds glued from local cocycles in the vicinity of singularities,} {\footnotesize as formalized in the proof of Theorem \ref{UnstableEquivariantHopfDegreeTheoremForTori}. Shown is a situation for $G = \mathbb{Z}_2$, as in \hyperlink{FigureI}{\it Figure I} and \hyperlink{FigureJ}{\it Figure J}.} \medskip \noindent {\bf Well-isolated singularities.} In order to formalize this local-to-global construction conveniently, we make the following sufficient assumption on the $G$-spaces to which we apply it: \begin{defn} We say that a $G$-space $\xymatrix{X \ar@(ul,ur)|{\, G\,}}$ has {\it well-isolated singularities} if all the \emph{minimal} subgroups with 0-dimensional fixed subspaces \eqref{FixedLoci} are in the center of $G$, i.e. if the following condition holds: \begin{equation} \label{WellIsolatedFixedPoints} H \subset G \;\mbox{minimal such that}\; \mathrm{dim}\big( X^H\big) = 0 \phantom{AA} \Rightarrow \phantom{AA} H \subset \mathrm{Center}(G)\;. \end{equation} \end{defn} \begin{example} The ADE-singularities (\hyperlink{Table5}{\it Table 5}) with well-isolated fixed points in the sense of \eqref{WellIsolatedFixedPoints} are all those in the $\mathbb{A}$-series, as well as the generalized quaternionic ones in the $\mathbb{D}$-series -- see \hyperlink{Table6}{\it Table 6}.
This is because, for ADE-singularities, all non-trivial subgroups have 0-dimensional fixed space \eqref{FixedSubspacesOfQuaternionRepresentation}, so that here the condition of well-isolated singularities \eqref{WellIsolatedFixedPoints} requires that all non-trivial minimal elements in the subgroup lattice be in the center. This is trivially true for the cyclic groups in the $\mathbb{A}$-series, since they are abelian. For the generalized quaternionic groups in the $\mathbb{D}$-series there is in fact a unique minimal non-trivial subgroup, and it in fact it is always the orientifold action $H_{\mathrm{min}} = \mathbb{Z}_2$ which coincides with the center, as shown for the first few cases in \hyperlink{Table6}{\it Table 6}. \end{example} \medskip The point of the notion of well-isolated fixed points \eqref{WellIsolatedFixedPoints} is that it is sufficient to guarantee that the action of the full group restricts to the union of the 0-dimensional fixed subspaces, since then \begin{equation} \label{SetOfIsolatedFixedPointsIsIndeedFixed} H \cdot x_{\mathrm{fixed}} \;=\; x_{\mathrm{fixed}} \phantom{AA} \Rightarrow \phantom{AA} H \cdot (g \cdot x_{\mathrm{fixed}}) \;=\; (H \cdot g) \cdot x_{\mathrm{fixed}} \;=\; (g \cdot H) \cdot x_{\mathrm{fixed}} \;=\; g \cdot (H \cdot x_{\mathrm{fixed}}) \;=\; g \cdot x_{\mathrm{fixed}}, \end{equation} for all $g \in G$. Hence, with \eqref{WellIsolatedFixedPoints}, the quotient set \begin{equation} \label{SetOfWellIsolatedFixedPoints} \mathrm{IsolSingPts}_G(X) \;\coloneqq\; \Bigg( \underset { \Scale[0.6] { H \subset G , \mathrm{dim}( X^H ) = 0 } } { \bigcup } X^H \Bigg) \big/ G \end{equation} exists and is the set of isolated singular points in the orbifold $X \!\sslash\! G$. {\footnotesize \begin{center} \hypertarget{Table6}{} \hspace{-.2cm} \small\addtolength{\tabcolsep}{-5pt} \begin{tabular}{|c||c|c|c|c|c|} \hline \begin{tabular}{c} {\bf Dynkin} \\ {\bf label} \end{tabular} & $\mathbb{A}3 = \mathbb{D}3$ & $\mathbb{D}4$ & $\mathbb{D}6$ & $\mathbb{D}10$ & $\mathbb{D}18$ \\ \hline \raisebox{-20pt}{ \begin{tabular}{c} $G \subset \mathrm{Sp}(1)$ \\ \phantom{A} \\ \phantom{A} \\ \begin{rotate}{+90} $ \mathpalette\mathllapinternal{ \mbox{\bf subgroup lattice } } $ \end{rotate} \end{tabular} } & $ \xymatrix@C=5pt{ & {\phantom{Q_8}} \\ & \mathbb{Z}_4 \mathpalette\mathrlapinternal{ \, \mbox{\tiny = center} } \\ & {\color{darkblue} \bf \mathbb{Z}^{\mathpalette\mathrlapinternal{\mathrm{refl}}}_2 } \ar@{^{(}->}[u] & {\phantom{\mathbb{Z_2}}} \\ & 1 \ar@{^{(}->}[u] } $ & $ \xymatrix@C=12pt{ & Q_8 \ar@{=}[d] \\ & 2 D_4 \\ \mathbb{Z}_4 \ar@{^{(}->}[ur] & \mathbb{Z}_4 \ar@{^{(}->}[u] & \mathbb{Z}_4 \ar@{^{(}->}[ul] \\ & {\color{darkblue} \bf \mathbb{Z}^{\mathpalette\mathrlapinternal{\mathrm{refl}}}_2 } \mathpalette\mathrlapinternal{ \;\;\; \mbox{\tiny = center} } \ar@{^{(}->}[ul] \ar@{^{(}->}[u] \ar@{^{(}->}[ur] \\ & 1 \ar@{^{(}->}[u] } $ & $ \xymatrix@C=12pt{ & Q_{16} \ar@{=}[d] \\ & 2 D_8 \\ 2 D_4 \ar@{^{(}->}[ur] & \mathbb{Z}_8 \ar@{^{(}->}[u] & 2 D_4 \ar@{^{(}->}[ul] \\ \mathbb{Z}_4 \ar@{^{(}->}[u] & \mathbb{Z}_4 \ar@{^{(}->}[ul] \ar@{^{(}->}[u] \ar@{^{(}->}[ur] & \mathbb{Z}_4 \ar@{^{(}->}[u] \\ & {\color{darkblue} \bf \mathbb{Z}^{\mathpalette\mathrlapinternal{\mathrm{refl}}}_2 } \mathpalette\mathrlapinternal{ \;\;\; \mbox{\tiny = center} } \ar@{^{(}->}[ul] \ar@{^{(}->}[u] \ar@{^{(}->}[ur] \\ & 1 \ar@{^{(}->}[u] } $ & $ \xymatrix@C=12pt{ & Q_{32} \ar@{=}[d] \\ & 2 D_{16} \\ 2 D_8 \ar@{^{(}->}[ur] & \mathbb{Z}_{16} \ar@{^{(}->}[u] & 2 D_8 \ar@{^{(}->}[ul] \\ 2 D_4 
\ar@{^{(}->}[u] & \mathbb{Z}_8 \ar@{^{(}->}[ul] \ar@{^{(}->}[u] \ar@{^{(}->}[ur] & 2 D_4 \ar@{^{(}->}[u] \\ \mathbb{Z}_4 \ar@{^{(}->}[u] & \mathbb{Z}_4 \ar@{^{(}->}[ul] \ar@{^{(}->}[u] \ar@{^{(}->}[ur] & \mathbb{Z}_4 \ar@{^{(}->}[u] \\ & {\color{darkblue} \mathbf \mathbb{Z}^{\mathpalette\mathrlapinternal{\mathrm{refl}}}_2 } \mathpalette\mathrlapinternal{ \;\;\; \mbox{\tiny = center} } \ar@{^{(}->}[ul] \ar@{^{(}->}[u] \ar@{^{(}->}[ur] \\ & 1 \ar@{^{(}->}[u] } $ & \xymatrix@C=12pt{ & Q_{64} \ar@{=}[d] \\ & 2 D_{32} \\ 2 D_{16} \ar@{^{(}->}[ur] & \mathbb{Z}_{32} \ar@{^{(}->}[u] & 2 D_{16} \ar@{^{(}->}[ul] \\ 2 D_{8} \ar@{^{(}->}[u] & \mathbb{Z}_{16} \ar@{^{(}->}[ul] \ar@{^{(}->}[u] \ar@{^{(}->}[ur] & 2 D_{8} \ar@{^{(}->}[u] \\ 2 D_{4} \ar@{^{(}->}[u] & \mathbb{Z}_{8} \ar@{^{(}->}[ul] \ar@{^{(}->}[u] \ar@{^{(}->}[ur] & 2 D_{4} \ar@{^{(}->}[u] \\ \mathbb{Z}_4 \ar@{^{(}->}[u] & \mathbb{Z}_4 \ar@{^{(}->}[ul] \ar@{^{(}->}[u] \ar@{^{(}->}[ur] & \mathbb{Z}_4 \ar@{^{(}->}[u] \\ & {\color{darkblue} \bf \mathbb{Z}^{\mathpalette\mathrlapinternal{\mathrm{refl}}}_2 } \mathpalette\mathrlapinternal{ \;\;\; \mbox{\tiny = center} } \ar@{^{(}->}[ul] \ar@{^{(}->}[u] \ar@{^{(}->}[ur] \\ & 1 \ar@{^{(}->}[u] } \\ \hline \end{tabular} \end{center} } \vspace{-2mm} \noindent {\footnotesize \bf Table 6 -- The ADE-Singularities $\xymatrix{\mathbb{R}^{\mathbf{4}_{\mathbb{H}}}\ar@(ul,ur)|{\;\;\;\; G^{\mathrm{ADE}}\, }}$ with well-isolated fixed points} {\footnotesize according to \eqref{WellIsolatedFixedPoints} are those in the $\mathbb{A}$-series $G^{\mathrm{ADE}} = \mathbb{Z}_n$ and the quaternionic groups $Q_{2^{n+2}} = 2 D_{2^{n} + 2}$ in the $\mathbb{D}$-series. For the latter and for the even-order cyclic groups, the minimal non-trivial central subgroup is unique and given by the point reflection group $\mathbb{Z}_2^{\mathrm{refl}}$ \eqref{PointReflectionSubgroup}. } \vspace{4mm} \noindent {\bf Unstable equivariant Hopf degree of representation tori.} With these preliminaries in hand, we may now state and prove the unstable equivariant Hopf degree theorem for representation tori with well-isolated singularities, Theorem \ref{UnstableEquivariantHopfDegreeTheoremForTori} below. Its statement and proof are directly analogous to the case for representation spheres in Theorem \ref{UnstableEquivariantHopfDegreeTheorem}. The difference here, besides the passage from spheres to tori, is the extra assumption on well-isolated singularities and the fact that the proof here invokes the construction of the previous proof around each one of the well-isolated singularities. 
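\medskip

\noindent For orientation, the simplest instance of \eqref{SetOfWellIsolatedFixedPoints} (a brief illustration only; compare Example \ref{KummerSurface} below): for the point-reflection action of $\mathbb{Z}_2^{\mathrm{refl}}$ \eqref{PointReflectionSubgroup} on the representation torus $\mathbb{T}^{\mathbf{4}_{\mathbb{H}}}$, condition \eqref{WellIsolatedFixedPoints} holds trivially, since the group is abelian, and
\begin{equation*}
  \mathrm{IsolSingPts}_{\mathbb{Z}_2^{\mathrm{refl}}}
  \big(
    \mathbb{T}^{\mathbf{4}_{\mathbb{H}}}
  \big)
  \;=\;
  \big(
    \mathbb{T}^{\mathbf{4}_{\mathbb{H}}}
  \big)^{\mathbb{Z}_2^{\mathrm{refl}}}
  \;=\;
  \big\{ 0, \tfrac{1}{2} \big\}^4
  \,,
\end{equation*}
a set of 16 isolated singular points, each of which constitutes its own $\mathbb{Z}_2^{\mathrm{refl}}$-orbit.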
\begin{theorem}[\bf Unstable equivariant Hopf degree theorem for representation tori] \label{UnstableEquivariantHopfDegreeTheoremForTori} The unstable equivariant Cohomotopy \eqref{EquivariantCohomotopySet} of a $G$-representation torus $\mathbb{T}^V$ \eqref{RepresentationTorus} with well-isolated singularities \eqref{WellIsolatedFixedPoints} and with a point at infinity adjoined \eqref{BasepointFreelyAdjoined} in compatible RO-degree $V$ (Example \ref{ExamplesOfCompatibleRODegree}) is in bijection to the product set of one copy of the integers for each isotropy group \eqref{IsotropySubgroups} with positive dimensional fixed subspace \eqref{FixedLoci}, and one copy of $\{0,1\}$ for each well-isolated fixed point \eqref{SetOfWellIsolatedFixedPoints} \begin{equation} \label{UnstableEquivariantCohomotopyOfRepresentationTorusInCompatibleDegree} \xymatrix{ \pi^V_G \big( \big( \mathbb{T}^V \big)_+ \big) \ar[rrr]^-{ c \;\mapsto\; ( H \mapsto {\color{darkblue} \bf N_H}(c) ) }_-{\simeq} &&& \mathbb{Z}^{{}^{ \mathrm{Isotr}^{d_{\mathrm{fix}} > 0}_X(G) } } \times \{0,1\}^{{}^{ \mathrm{IsolSingPts}_G(X) }} }, \end{equation} where, for $H \in \mathrm{Isotr}^{d_{\mathrm{fix}} > 0 }_X(G)$, the ordinary Hopf degree at Elmendorf stage $H$ \eqref{ElmedorfStageWiseHopfDegrees} is of the form \begin{equation} \label{TheWeylGroupMultiplesForRepresentationTori} \xymatrix@R=-2pt{ \mathrm{deg}\big( c^{H} \big) &=& \phi_H\big( \{ \mathrm{deg}\big( c^K \big) \big\vert K \supsetneq H \in \mathrm{Isotr}_X(G) \} \big) &-& {\color{darkblue} \bf N_H}(c) \cdot \big| \big( W_G(H)\big) \big| & \!\!\!\!\!\!\!\!\!\!\!\! \in \mathbb{Z}. \\ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} The ordinary Hopf degree \eqref{HopfDegreeTheorem} \\ at Elmendorf stage $K$ \eqref{SystemOfMapsOnHFixedSubspaces} \end{tabular} } } & & \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} a fixed offset, being a function of \\ the Hopf degrees at all lower stages. \end{tabular} } } & & \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} an integer multiple of \\ the order of the Weyl group \eqref{WeylGroup} \end{tabular} } } } \end{equation} The isomorphism \eqref{UnstableEquivariantCohomotopyOfRepresentationTorusInCompatibleDegree} is exhibited by sending an equivariant Cohomotopy cocycle to the sequence of the integers ${\color{darkblue} \bf N_H}$ from \eqref{TheWeylGroupMultiplesForRepresentationTori} in positive fixed subspace dimensions, together with the collection of elements in $\{0,1\}$, which are the unstable Hopf degrees in dimension 0 \eqref{UnstableRangeHopfDegreeTheorem}, at Elmendorf stage $G$ at each one of the well-isolated singularities. \end{theorem} \begin{proof} In the special case when no subgroup $H \subset G$ has a fixed subspace of vanishing dimension, this is \cite[Theorem 8.4.1]{tomDieck79}, where the assumption of positive dimension is made ``for simplicity'' in \cite[middle of p. 212]{tomDieck79}. Hence we just need to convince ourselves that the proof given there generalizes. To that end, assume that $\mathrm{dim}\big( V^G \big) = 0$. To generalize the inductive argument in \cite[p. 
214]{tomDieck79} to this case, we just need to see that every $G$-invariant function on the set of isolated fixed points \eqref{SetOfWellIsolatedFixedPoints} \begin{equation} \label{CohomotopyCocycleOnWellIsolatedFixedPoints} \xymatrix@R=1.2em{ \mathrm{IsolSingPts}_G(X) \ar[rr] && S^0 \\ \underset { \mathpalette\mathclapinternal{ \Scale[0.6] { H \subset G , \mathrm{dim}( X^H ) = 0 } } } { \bigcup } \;\;\;\;\; X^H \ar[rr]^{ (c^H) } \ar[u]^q && S^0 \ar@{=}[u] } \end{equation} extends to a $W_G(K)$-equivariant function $( S^V)^K \to ( S^V)^K$ on the next higher Elmendorf stage $K \in \mathrm{Isotr}^{d_{\mathrm{fix}}> 0}_X(G)$. For this, consider a $G$-equivariant tubular neighborhood around the well-isolated fixed points. This is guaranteed to exist on general grounds by the equivariant tubular neighborhood theorem, since, by assumption \eqref{WellIsolatedFixedPoints}, the set of points (in the bottom left of \eqref{CohomotopyCocycleOnWellIsolatedFixedPoints}) is an equivariant (and of course closed) subspace, by \eqref{SetOfIsolatedFixedPointsIsIndeedFixed}. In fact, in the present specific situation of global \emph{orthogonal} linear actions on a Euclidean space we obtain a concrete such equivariant tubular neighborhood by forming the union of Euclidean open balls of radius $\epsilon$ around each of the points, for any small enough positive real number $\epsilon$. This kind of tubular neighborhood is indicated by the collection of dashed circles in \hyperlink{FigureA}{\it Figure A} and \hyperlink{FigureN}{\it Figure N}. Given this or any choice of equivariant tubular neighborhood, the extensions \eqref{InductionStartForRepSpheres} in the proof of Theorem \ref{UnstableEquivariantHopfDegreeTheorem} apply to the vicinity of any one of the fixed points. This is a choice in $\{0,1\}$ for each element in $\mathrm{IsolSingPts}_G(X)$ \eqref{SetOfWellIsolatedFixedPoints}, hence, in total, the choice of an element in $\{0,1\}^{{}^{ \mathrm{IsolSingPts}_G(X) }}$, as it appears in \eqref{UnstableEquivariantCohomotopyOfRepresentationTorusInCompatibleDegree}. Since all these local extensions to the vicinity of any of the singularities ``vanish at infinity'' \eqref{VanishingAtInfinity}, i.e., at some distance $> \epsilon$ from any and all of the well-isolated fixed points, they may jointly be further extended to a global cocycle $\mathbb{T}^V \overset{c}{\longrightarrow} S^V$ by declaring that $c$ sends every point of $\mathbb{T}^V$ outside the given tubular neighborhood to $\infty \in S^V$ (shown in \hyperlink{FigureN}{\it Figure N}). From this induction onwards, the proof of \cite[8.4.1]{tomDieck79} applies verbatim and shows that on top of this initial Hopf degree choice in $\{0,1\}^{{}^{\mathrm{IsolSingPts}_G(X)}}$ there may now be further $N_H \cdot \vert W_G(H)\vert$-worth of Hopf degree at the next higher Elmendorf stage $H$, and so on. \end{proof} \vspace{1mm} \noindent {\bf Stable equivariant Hopf degree of representation tori.} Note that the unstable equivariant Hopf degrees of representation spheres (Theorem \ref{UnstableEquivariantHopfDegreeTheorem}) and of representation tori (Theorem \ref{UnstableEquivariantHopfDegreeTheoremForTori}) have the same form, \eqref{UnstableEquivariantCohomotopyOfRepresentationSphereInCompatibleDegree} and \eqref{UnstableEquivariantCohomotopyOfRepresentationTorusInCompatibleDegree}, respectively, away from the unstable Hopf degrees in vanishing fixed space dimensions.
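\medskip

\noindent For concreteness, the simplest instance of \eqref{TheWeylGroupMultiplesForRepresentationTori} (a brief illustration, in the situation of \hyperlink{FigureN}{\it Figure N}): for $G = \mathbb{Z}_2$ and the trivial isotropy group $H = 1$ the Weyl group \eqref{WeylGroup} is $W_{\mathbb{Z}_2}(1) = \mathbb{Z}_2$, of order 2, so that
\begin{equation*}
  \mathrm{deg}\big( c^{1} \big)
  \;=\;
  \phi_1\Big( \mathrm{deg}\big( c^{\mathbb{Z}_2} \big) \Big)
  \;-\;
  N_1(c) \cdot 2
  \,.
\end{equation*}
Hence, on top of the offset determined by the charges at the isolated fixed points, the underlying Hopf degree can change only in steps of 2, reflecting the fact that branes away from the singularities appear jointly with their mirror images, as indicated in \hyperlink{FigureN}{\it Figure N}.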
It follows immediately that, up to equivariant homotopy, all brane charge may be thought of as concentrated in the vicinity of the ``central'' singularity (see \hyperlink{FigureO}{\it Figure O}): \vspace{-2mm} \begin{prop}[\bf Pushforward in unstable equivariant Cohomotopy] \label{PushforwardOfVicinityOfSingularityToRepresentationTorus} Let$\xymatrix{\mathbb{T}^V\ar@(ul,ur)|-{G}}$ be a $G$-representation torus \eqref{RepresentationTorus} with well-isolated singularities \eqref{WellIsolatedFixedPoints}, and write $ D_\epsilon ( \mathbb{R}^V) \overset{i}{\hookrightarrow} \mathbb{T}^V $, $ 0 \hookrightarrow x_0 $, for the inclusion of the $G$-equivariant tubular neighborhood around the fixed point $x_0 \in \mathbb{T}^V$ covered by $0 \in \mathbb{R}^V$ that is given by the open $\epsilon$-ball around the point, for any small enough positive radius $\epsilon$. Then pushforward along $i$ from the unstable equivariant Cohomotopy of the vicinity of this fixed point (as in Theorem \ref{UnstableEquivariantHopfDegreeTheorem}) to that of the full representation torus (as in Theorem \ref{UnstableEquivariantHopfDegreeTheoremForTori}) $$ \xymatrix@R=-2pt{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} unstable equivariant Cohomotopy \\ of vicinity of $G$-singularity \end{tabular} } } \ar@{}[rrr]|-{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} identify with vicinity of $x_0$ \\ ${\phantom{\vert}}$ \end{tabular} } } &&& \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} unstable equivariant Cohomotopy \\ of $G$-representation torus \end{tabular} } } \\ \pi^V_G \big( \big( \mathbb{R}^{V} \big)^{\mathrm{cpt}} \big) \ar@{^{(}->}[rrr]^-{ i_\ast }_-{ \simeq_{{}_{ (d_{\mathrm{fix}} > 0 } ) } } &&& \pi^V_G \big( \big( \mathbb{T}^{V} \big)_+ \big) \\ \left[ \!\!\!\!\!\! \scalebox{.95}{ $ {\begin{array}{ccc} \mathbb{R}^{V} &\mathpalette\mathrlapinternal{\xrightarrow{\phantom{---}c\phantom{---}}}& S^V \\ x &\!\!\!\longmapsto\!\!\!& \left\{ \!\!\! \scalebox{.9}{ $ \begin{array}{cl} c(x) & \mbox{if $d(x,0) < \epsilon$} \\ \infty & \mbox{otherwise} \end{array} $ } \right. \end{array}} $ } \!\!\!\!\!\!\!\!\!\!\!\! \right] \ar@{}[rrr]|-{ \longmapsto } &&& \left[ \!\!\! \scalebox{.95}{ $ {\begin{array}{ccc} \mathbb{T}^{V} &\mathpalette\mathrlapinternal{\xrightarrow{\phantom{---}i_\ast(c)\phantom{---}}}& \;\;\;\;\;\;\;\;\;\; S^V \\ x &\!\!\!\longmapsto\!\!\!& \left\{ \!\!\!\!\!\! \scalebox{.9}{ $ \begin{array}{cl} c(x) & \mbox{if $d(x,x_0) < \epsilon$} \\ \infty & \mbox{otherwise} \end{array} $ } \right. \end{array}} $ } \!\!\!\!\!\!\!\!\!\!\!\! \right] } $$ is an isomorphism on Hopf degrees at Elmendorf stages $H_{> 0}$ of non-vanishing fixed space dimension and an injection on the unstable Hopf degree set at Elemendorf stages $H_{= 0}$ with vanishing fixed subspace dimension: $$ i_\ast \;\colon\; \left\{ \begin{array}{cl} N_{H_{= 0}}(c) \! & \hookrightarrow \; N_{H_{= 0}}(i_\ast(c)) \\ N_{H_{> 0}}(c) & \mapsto \; N_{H_{> 0}}(i_\ast(c))\,. \end{array} \right. 
$$ \end{prop} \noindent This is illustrated by \hyperlink{FigureO}{\it Figure O}: \begin{center} \hypertarget{FigureO}{} \begin{tikzpicture}[scale=0.8, decoration=snake] \begin{scope}[shift={(0,0)}] \begin{scope}[shift={(0,-.6)}] \node at (1.4,8) { \tiny \color{darkblue} \bf \begin{tabular}{c} unstable equivariant Cohomotopy \\ of vicinity of singularity \end{tabular} }; \node (EquivariantCohomotopy) at (1.4,5.3+1.6) {$ \pi^{ \mathbf{4}_{\mathbb{H}} }_{\mathbb{Z}_4} \big( \big( \mathbb{R}^{ \mathbf{4}_{\mathbb{H}} } \big)^{\mathrm{cpt}} \big) $}; \end{scope} \node at (1.4,5.3) {$ \overbrace{ \phantom{------------------} } $}; \begin{scope}[shift={(0,.8)}] \draw[<->, dashed, darkblue] (2.5,0) to[bend right=47] node { \colorbox{white}{\bf \tiny \color{darkblue} \begin{tabular}{c} orientifold \\ action \end{tabular} } } (0,2.5); \end{scope} \begin{scope}[shift={(0, .8)}] \begin{scope} \clip (-1.8,-1.5) rectangle (1.5,1.5); \draw[step=3, dotted] (-3,-2) grid (6,6); \draw[dashed] (-3,-3) circle (1); \draw[dashed] (0,-3) circle (1); \draw[dashed] (3,-3) circle (1); \draw[dashed] (6,-3) circle (1); \draw[dashed] (-3,0) circle (1); \draw[dashed] (0,0) circle (1); \draw[dashed] (3,0) circle (1); \draw[dashed] (-3,3) circle (1); \draw[dashed] (0,3) circle (1); \draw[dashed] (3,3) circle (1); \draw[dashed] (-3,6) circle (1); \draw[dashed] (0,6) circle (1); \draw[dashed] (3,6) circle (1); \draw[dashed] (6,6) circle (1); \draw[fill=white] (0,0) circle (.07); \draw[fill=white] (3,0) circle (.07); \draw[fill=white] (0,3) circle (.07); \draw[fill=white] (3,3) circle (.07); \draw (0,3) node[right] { \colorbox{white}{ \hspace{-.3cm} \tiny \color{darkblue} \bf O-plane \hspace{-.3cm} } }; \draw (3,0) node[right] { \colorbox{white}{ \hspace{-.5cm} \tiny \color{darkblue} \bf \begin{tabular}{c} mirror \\ O-plane \end{tabular} \hspace{-.3cm} } }; \end{scope} \begin{scope}[shift={(0,0)}] \draw[fill=black] (17:.8) circle (.07); \draw[fill=black] (17+90:.8) circle (.07); \draw[fill=black] (17+180:.8) circle (.07); \draw[fill=black] (17+270:.8) circle (.07); \begin{scope}[rotate=7] \draw[fill=black] (17:.3) circle (.07); \draw[fill=black] (17+90:.3) circle (.07); \draw[fill=black] (17+180:.3) circle (.07); \draw[fill=black] (17+270:.3) circle (.07); \end{scope} \draw (17+90+16:.62) node[right] { { \hspace{-.3cm} \tiny \color{darkblue} \bf branes \hspace{-.3cm} } }; \draw (17+180:.7)+(.58,.03) node[right, below] { { \hspace{-.3cm} \tiny \color{darkblue} \bf mirror branes \hspace{-.3cm} } }; \end{scope} \begin{scope}[shift={(0,1.6)}] \draw (0,-3.4) node {\tiny $x_1 = 0$}; \end{scope} \draw (-2.5,0) node {\tiny $x_2 = 0$}; % \end{scope} \end{scope} \begin{scope}[shift={(10,0)}] \begin{scope}[shift={(0,-.6)}] \node at (1.4,8) { \tiny \color{darkblue} \bf \begin{tabular}{c} unstable equivariant Cohomotopy \\ of representation torus \end{tabular} }; \node (EquivariantCohomotopy) at (1.4,5.3+1.6) {$ \pi^{ \mathbf{4}_{\mathbb{H}} }_{\mathbb{Z}_4} \big( \big( \mathbb{T}^{ \mathbf{4}_{\mathbb{H}} } \big)_+ \big) $}; \end{scope} \node at (1.4,5.3) {$ \overbrace{ \phantom{------------------} } $}; \begin{scope}[shift={(0,.8)}] \draw[<->, dashed, darkblue] (2.5,0) to[bend right=47] node { \colorbox{white}{ \tiny \color{darkblue} \bf \begin{tabular}{c} orientifold \\ action \end{tabular} } } (0,2.5); \end{scope} \begin{scope}[shift={(0, .8)}] \begin{scope} \clip (-1.8,-1.5) rectangle (4.8,4.4); \draw[step=3, dotted] (-3,-2) grid (6,6); \draw[dashed] (-3,-3) circle (1); \draw[dashed] (0,-3) circle (1); \draw[dashed] (3,-3) circle 
(1); \draw[dashed] (6,-3) circle (1); \draw[dashed] (-3,0) circle (1); \draw[dashed] (0,0) circle (1); \draw[dashed] (3,0) circle (1); \draw[dashed] (-3,3) circle (1); \draw[dashed] (0,3) circle (1); \draw[dashed] (3,3) circle (1); \draw[dashed] (-3,6) circle (1); \draw[dashed] (0,6) circle (1); \draw[dashed] (3,6) circle (1); \draw[dashed] (6,6) circle (1); \draw[fill=white] (0,0) circle (.07); \end{scope} \begin{scope}[shift={(0,0)}] \draw[fill=black] (17:.8) circle (.07); \draw[fill=black] (17+90:.8) circle (.07); \draw[fill=black] (17+180:.8) circle (.07); \draw[fill=black] (17+270:.8) circle (.07); \begin{scope}[rotate=7] \draw[fill=black] (17:.3) circle (.07); \draw[fill=black] (17+90:.3) circle (.07); \draw[fill=black] (17+180:.3) circle (.07); \draw[fill=black] (17+270:.3) circle (.07); \end{scope} \draw (17+90+16:.62) node[right] { { \hspace{-.3cm} \tiny \color{darkblue} \bf branes \hspace{-.3cm} } }; \draw (17+180:.7)+(.58,.03) node[right, below] { { \hspace{-.3cm} \tiny \color{darkblue} \bf mirror branes \hspace{-.3cm} } }; \end{scope} \begin{scope}[shift={(0,1.6)}] \draw (0,-3.4) node {\tiny $x_1 = 0$}; \draw (3,-3.4) node {\tiny $x_1 = \tfrac{1}{2}$}; \end{scope} \draw (-2.5,0) node {\tiny $x_2 = 0$}; \draw (-2.5,3) node {\tiny $x_2 = \tfrac{1}{2}$}; % \end{scope} \end{scope} \begin{scope}[shift={(0,-.6)}] \draw[->] (3.6,7) to node[above] {\tiny $i_\ast$} node[below] {\tiny $\simeq_{{}_{d_{\mathrm{fix}} > 0}}$ } (9.3,7); \end{scope} \end{tikzpicture} \end{center} \vspace{-.3cm} \noindent {\footnotesize \bf Figure O -- Pushforward in equivariant Cohomotopy from the vicinity of a singularity to the full toroidal orientifold} {\footnotesize is an isomorphism on brane charges and an injection on O-plane charges, by Prop. \ref{PushforwardOfVicinityOfSingularityToRepresentationTorus}. Shown is a case with $G = \mathbb{Z}_{4}$, as in \hyperlink{FigureM}{\it Figure M}. All integer number of branes (black dots) are in the image of the map, but only the O-plane at $(x_1, x_2) = (0,0)$ is in the image. } \medskip \noindent {\bf Local tadpole cancellation in toroidal ADE-orientifolds.} Under the identification from Prop. \ref{PushforwardOfVicinityOfSingularityToRepresentationTorus}, the stabilized equivariant Hopf degree theorem for representation spheres (Theorem \ref{CharacterizationOfStabilizationOfUnstableCohomotopy}) applies also to representation tori, and hence so does Prop. \ref{TheoremLocalTadpoleCancellation}, showing now for the case of toroidal orbifolds with ADE-singularities that the brane charges classified by equivariant Cohomotopy are necessarily multiples of the regular representation. 
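\medskip

\noindent For instance, in the situation shown in \hyperlink{FigureP}{\it Figure P} below ($G = \mathbb{Z}_4$, with three brane orbits of $\vert \mathbb{Z}_4 \vert = 4$ points each, together with the four O-plane fixed points), the total charge is
\begin{equation*}
  \underset{
    \mbox{\tiny \color{darkblue} \bf O-plane contribution}
  }{
    4 \cdot [\mathbb{Z}_4/\mathbb{Z}_4]
  }
  \;-\;
  \underset{
    \mbox{\tiny \color{darkblue} \bf multiple of the regular orbit}
  }{
    3 \cdot [\mathbb{Z}_4/1]
  }
  \;\;\longmapsto\;\;
  4 \cdot \mathbf{1}_{\mathrm{triv}}
  \;-\;
  3 \cdot \mathbf{4}_{\mathrm{reg}}
  \,,
\end{equation*}
so that the brane summand is indeed a multiple of the regular representation $\mathbf{4}_{\mathrm{reg}}$.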
This result is visualized in \hyperlink{FigureP}{\it Figure P}: \vspace{-.4cm} \begin{center} \hypertarget{FigureP}{} \begin{tikzpicture}[scale=0.8, decoration=snake] \node at (1.4,5.3) {$ \overbrace{ \phantom{------------------} } $}; \node at (1.4,8) { \tiny \color{darkblue} \bf \begin{tabular}{c} equivariant Cohomotopy \\ of representation torus \\ (orientifold Cohomotopy) \end{tabular} }; \node (EquivariantCohomotopy) at (1.4,5.3+1.6) {$ \pi^{ \mathbf{4}_{\mathbb{H}} }_{\mathbb{Z}_4} \big( \big( \mathbb{T}^{ \mathbf{4}_{\mathbb{H}} } \big)_+ \big) $}; \node (EquivariantCocycle) at (1.4,5.3+.8) {\tiny $ 4 \cdot [\mathbb{Z}_4/\mathbb{Z}_4] - 3 \cdot [\mathbb{Z}_4/1] $}; \node at (1.4+8,7.8) { \tiny \color{darkblue} \bf \begin{tabular}{c} equivariant K-theory \\ of representation torus \\ = representation ring \end{tabular} }; \node at (1.4+8,5.3) {$ \overbrace{ \phantom{--------} } $}; \node (PlainCohomotopy) at (1.4+8,5.3+1.6) {$ \mathrm{KO}_{\mathbb{Z}_4}^0 \;\simeq\; \mathrm{RO}(\mathbb{Z}_4) $}; \node (PlainCocycle) at (1.4+8,5.3+.8) {\tiny \raisebox{-.0cm}{ $ \begin{aligned} & 4 \cdot \mathbf{1} - 3 \cdot \mathbf{4}_{\mathrm{reg}} \end{aligned} $}}; \draw[->] (EquivariantCohomotopy) to node[above] { \tiny \color{darkblue} \bf stabilize and linearize } (PlainCohomotopy); \draw[|->] (EquivariantCocycle) to (PlainCocycle); \draw[<->, dashed, darkblue] (2.5,0) to[bend right=47] node { \colorbox{white}{ \tiny \color{darkblue} \bf \begin{tabular}{c} orientifold \\ action \end{tabular} } } (0,2.5); \begin{scope}[shift={(0, .8)}] \begin{scope} \clip (-1.8,-1.5) rectangle (4.8,4.4); \draw[step=3, dotted] (-3,-2) grid (6,6); \draw[dashed] (-3,-3) circle (1); \draw[dashed] (0,-3) circle (1); \draw[dashed] (3,-3) circle (1); \draw[dashed] (6,-3) circle (1); \draw[dashed] (-3,0) circle (1); \draw[dashed] (0,0) circle (1); \draw[dashed] (3,0) circle (1); \draw[dashed] (-3,3) circle (1); \draw[dashed] (0,3) circle (1); \draw[dashed] (3,3) circle (1); \draw[dashed] (-3,6) circle (1); \draw[dashed] (0,6) circle (1); \draw[dashed] (3,6) circle (1); \draw[dashed] (6,6) circle (1); \draw[fill=white] (0,0) circle (.07); \draw[fill=white] (3,0) circle (.07); \draw[fill=white] (0,3) circle (.07); \draw[fill=white] (3,3) circle (.07); \draw (0,3) node[right] { \colorbox{white}{ \hspace{-.3cm} \tiny \color{darkblue} \bf O-plane \hspace{-.3cm} } }; \draw (3,0) node[right] { \colorbox{white}{ \hspace{-.5cm} \tiny \color{darkblue} \bf \begin{tabular}{c} mirror \\ O-plane \end{tabular} \hspace{-.3cm} } }; \draw[fill=black] (38:.8) circle (.07); \draw[fill=black] (38+90:.8) circle (.07); \draw[fill=black] (38+180:.8) circle (.07); \draw[fill=black] (38+270:.8) circle (.07); \draw[fill=black] (70:.4) circle (.07); \draw[fill=black] (70+90:.4) circle (.07); \draw[fill=black] (70+180:.4) circle (.07); \draw[fill=black] (70+270:.4) circle (.07); \end{scope} \begin{scope}[shift={(3,3)}] \draw[fill=black] (17:.7) circle (.07); \draw[fill=black] (17+90:.7) circle (.07); \draw[fill=black] (17+180:.7) circle (.07); \draw[fill=black] (17+270:.7) circle (.07); \draw (17+90:.7) node[right] { \colorbox{white}{ \hspace{-.3cm} \tiny \color{darkblue} \bf brane \hspace{-.3cm} } }; \draw (17+180:.7)+(.58,.03) node[right, below] { { \hspace{-.3cm} \tiny \color{darkblue} \bf mirror branes \hspace{-.3cm} } }; \end{scope} \begin{scope}[shift={(0,1.6)}] \draw (0,-3.4) node {\tiny $x_1 = 0$}; \draw (3,-3.4) node {\tiny $x_1 = \tfrac{1}{2}$}; \end{scope} \draw (-2.5,0) node {\tiny $x_2 = 0$}; \draw (-2.5,3) node {\tiny $x_2 = 
\tfrac{1}{2}$}; % \draw[|->] (6.8,1.5) to ++(.6,0); \draw (1.4+8,1.5) node {\footnotesize $ \begin{array}{c} 4 \cdot \mathbf{1}_{{}_{\mathrm{triv}}} \\ - 3 \cdot \mathbf{4}_{{}_{\mathrm{reg}}} \end{array} $}; \end{scope} \end{tikzpicture} \end{center} \vspace{-.6cm} \noindent {\bf \footnotesize Figure P -- Local/twisted tadpole cancellation in a toroidal ADE-orientifold is enforced by equivariant Cohomotopy} {\footnotesize according to Prop. \ref{PushforwardOfVicinityOfSingularityToRepresentationTorus}, which reduces to the situation in the vicinity of a single singularity, as in \cref{LocalTadpoleCancellation}. Shown is a case with $G = \mathbb{Z}_4$ as in \hyperlink{FigureO}{\it Figure O}.} \noindent This is the local/twisted tadpole cancellation in toroidal ADE-orentifolds according to \hyperlink{Table1}{\it Table 1} and \hyperlink{Table2}{\it Table 2}. \medskip \noindent {\bf Global/untwisted tadpole cancellation from super-differential Cohomotopy.} This concludes our discussion of local tadpole cancellation in global (i.e. toroidal) ADE-orientifolds implied by C-field charge quantization in equivariant Cohomotopy. Finally, we turn to discuss how the global/untwisted tadpole cancellation condition on toroidal orbifolds follows from charge quantization in super-differential equivariant Cohomotopy. We state the concrete condition below in \eqref{KernelOfTheGlobalElmendorfStageProjection}, but first we explain how this condition arises from super-differential refinement: \medskip \noindent {\bf Super-differential enhancement of unstable equivariant Cohomotopy theory.} Given any generalized cohomology theory for charge quantization, it is its corresponding enhancement to a \emph{differential cohomology theory} which classifies not just the topological soliton/instanton sectors, but the actual geometric higher gauge field content, hence including the flux densities. For stable/abelian cohomology theories this is discussed for instance in \cite{Freed00}\cite{Bunke12}, while in the broader generality of unstable/non-abelian cohomology theories this is discussed in \cite{FSS10}\cite{SSS12}\cite{FSS12}\cite{FSS15}. For example, ordinary degree-2 integral cohomology theory $B U(1) \simeq B^2 \mathbb{Z}$ classifies magnetic charge sectors, but it is its differential cohomology enhancement $\mathbf{B}U(1)_{\mathrm{conn}}$ (Deligne cohomology) which is the universal moduli for actual electromagnetic field configurations. Similarly, plain (twisted) K-theory $K U$ and $K O$ classifies topological RR-charge sectors, but it is differential K-theory which classifies the actual RR-fields; see \cite{GS-AHSS}\cite{GS19A}\cite{GS19B}. 
\medskip Hence with \hyperlink{HypothesisH}{\it Hypothesis H} we are ultimately to consider the refinement of ADE-equivariant Cohomotopy theory $\pi^{\mathbf{4}_{\mathbb{H}}}_G$, discussed so far, to some \emph{differential} equivariant Cohomotopy theory, denoted $\big( \pi^{\mathbf{4}_{\mathbb{H}}}_G \big)_{\mathrm{conn}}$ and characterized as completing a homotopy pullback diagram of geometric unstable cohomology theories of the following form: \begin{equation} \label{DifferentialEquivariantCohomotopyPullback} \hspace{-10mm} \raisebox{43pt}{ \xymatrix@C=6em@R=18pt{ \mathpalette\mathclapinternal{ \mbox{ \tiny \begin{tabular}{c} super-differential \\ unstable equivariant Cohomotopy \\ \cite{OrbifoldCohomology} = \cite{FSS15} $\wedge$ \cite{ADE} \end{tabular} } } & \big(\pi^\bullet_G\big)_{\mathrm{conn}} \ar@{}[ddrr]|-{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} {\color{black} \large \begin{rotate}{-140} $\!\!\Rightarrow$ \end{rotate} } \\ \\ universal homotopy \end{tabular} } } \ar[rr]^{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} forget topological data, \\ retain only flux super-forms \\ \end{tabular} } } \ar[dd]|-{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} forget flux forms, \\ retain underlying cocycle in \\ plain equivariant Cohomotopy \end{tabular} } \;\;\;\;\;\;\; \;\;\;\;\;\;\; \;\;\;\;\;\;\; } && \Big\{ \big( \mu_{{}_{\rm M2/M5}}\big)_G \Big\} \ar[dd]|-{ \;\;\;\;\;\;\; \;\;\;\;\;\;\; \;\;\;\;\;\;\; \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} inject this cocycle, thereby \\ enforce 11d SuGra torsion constraint \end{tabular} } } & \mathpalette\mathclapinternal{ \mbox{ \tiny \begin{tabular}{c} $G$-equivariant enhancement \cite[5]{ADE} \\ of M2/M5-brane super WZW-terms \\ jointly regarded as a cocycle in \\ super-rational 4-Cohomotopy \\ \cite[3]{FSS15}\cite[2.3]{FSS16a}\cite[(57)]{FSS19a} \end{tabular} } } \\ \\ \mathpalette\mathclapinternal{ \mbox{ \tiny \begin{tabular}{c} unstable equivariant Cohomotopy \\ cohomology theory \\ \eqref{EquivariantCohomotopySet} \end{tabular} } } & \pi^\bullet_G \ar[rr]_-{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} rationalize, i.e.: \\ forget all torsion subgroups \\ in homotopy/cohomology groups \end{tabular} } } && \mbox{\bf $\Omega$}_G(-, \mathfrak{l}S^4_G) & \mathpalette\mathclapinternal{ \mbox{ \tiny \begin{tabular}{c} super-rational \\ unstable equivariant Cohomotopy theory \\ \cite[3.2]{ADE} \end{tabular} } } } } \end{equation} \noindent Discussing this construction $\big( \pi^{\mathbf{4}_{\mathbb{H}}}_G \big)_{\mathrm{conn}}$ in detail requires invoking concepts from $\infty$-stacks and $L_\infty$-algebroids \cite{FSS10}\cite{SSS12}, as well as their application to super-geometric orbifolds \cite{OrbifoldCohomology}, which is beyond the scope of this article. However, for the present purpose of seeing the global tadpole cancellation condition arise, all that matters are the following implications of super-differential refinement, which we make explicit by themselves: \medskip \noindent {\bf Rational flux constraints from equivariant enhancement of M2/M5-cocycle.} The homotopy pullback construction \eqref{DifferentialEquivariantCohomotopyPullback} amounts to equipping the rationalization of cocycles in plain unstable equivariant Cohomotopy \eqref{EquivariantCohomotopySet} with equivalences (connection data) to prescribed flux super-forms in super-rational equivariant Cohomotopy theory \cite[3.2]{ADE}. 
The flux super-forms relevant for charge-quantization of the M-theory C-field according to \hyperlink{HypothesisH}{\it Hypothesis H} are $G$-equivariant enhancements of the joint M2/M5-brane cocycle \cite[3]{FSS15}\cite[2.3]{FSS16a}\cite[3.42]{ADE}\cite[(57)]{FSS19a} with coefficients in the rationalized 4-sphere $\mathfrak{l}S^4$: \vspace{-3mm} \begin{equation} \label{TheM2M5Cocycle} \xymatrix{ \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} $D = 11$, $\mathcal{N} = 1$ \\ super-Minkowski spacetime \end{tabular} } } }{ \mathbb{R}^{10,1\vert \mathbf{32}} }\;\;\;\; \ar[rrrrr]^-{ \mu_{{}_{\rm M2/M5}} \coloneqq \left( { { \frac{i}{2} \overline{\psi}\Gamma_{a_1 a_2} \psi \wedge e^{a_1} \wedge e^{a_2} \,, } \atop { \frac{1}{5} \overline{\psi}\Gamma_{a_1 \cdots a_5} \psi \wedge e^{a_1} \wedge \cdots \wedge e^{a_5} } } \right) }_-{ \;\;\;\;\; \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} M2/M5-brane super-cocycle \\ (joint M2/M5 WZW-term curvatures) \end{tabular} } } &&&&& \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} rationalized \\ 4-sphere \end{tabular} } } }{\;\;\; \mathfrak{l} S^4 } }\;. \end{equation} Specifically, for $G^{\mathrm{ADE}}$-equivariance \eqref{ADESubgroups} at ADE-singularities $\mathbf{4}_{\mathbb{H}}$ \eqref{TheQuaternionicRepresentation}, a choice of equivariant extension of this cocycle is a choice of extension to an Elmendorf-stage diagram as in \eqref{ElmedorfStageWiseHopfDegrees} -- see \cite[5]{ADE}:\footnote{ For more general actions this involves extension to a functor on the \emph{orbit category}; see \cite[Lemma 5.4]{ADE}. } \begin{equation} \label{ElmendorfStagesOfEquivariantM2M5Cocycle} \xymatrix@R=6pt@C=5em{ & ( \mathbb{R}^{10,1\vert \mathbf{32}} \ar@(ul,ur)|-{\;\;G^{\mathrm{ADE}}\!\!\!\!}) \ar[rr]^{ (\mu_{{}_{\rm M2/M5}})_{G_{\mathrm{ADE}}} } && ( \mathfrak{l} S^{\mathbf{4}_{\mathbb{H}}} \ar@(ul,ur)|-{\;\;G^{\mathrm{ADE}}\!\!\!\!}) & \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} $G^{\mathrm{ADE}}$-equivariant enhancement \\ of M2/M5-brane super-cocycle \end{tabular} } \\ (-)^{H = 1} \ar@{}[dd]|-{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} Elmendorf stages \\ \eqref{SystemOfMapsOnHFixedSubspaces} \end{tabular} } } } & \mathbb{R}^{10,1\vert \mathbf{32}} \ar[rr]^{ \mu_{{}_{\rm M2/M5}} } && \mathfrak{l} S^{\mathbf{4}_{\mathbb{H}}} & \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} M2/M5-brane super-cocycle \\ \eqref{TheM2M5Cocycle} \end{tabular} } \\ \\ {(-)}^{H = G^{\mathrm{ADE}}} & \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} MK6 super-embedding \\ (see \cite[Thm. 4.3]{ADE} and Rem. \ref{TheRoleOfMK6EndingOnM5}) \end{tabular} } } }{ \mathbb{R}^{6,1\vert \mathbf{16}} } \ar@{^{(}->}[uu] \ar[rr]^{ \in \{0,1\} } && \mathfrak{l}S^0 \ar@{^{(}->}[uu] & \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} charge at fixed planes \end{tabular} } } \end{equation} This involves a binary choice at the lowest (and hence, by Example \ref{FixedSubspacesOfADESingularities}, at any other) Elmendorf stage. The homotopy in the diagram \eqref{DifferentialEquivariantCohomotopyPullback} enforces this local choice of rationalized flux globally onto the rationalized fluxes of the equivariant Cohomotopy cocycles. This has two effects: \medskip \noindent {\bf 1.
Super-differential enhancement at global Elmendorf stage implies vanishing total flux.} Note the M2/M5-brane super-cocycle $\mu_{{}_{\rm M2/M5}}$ \eqref{TheM2M5Cocycle} appearing at global Elmendorf stage in \eqref{ElmendorfStagesOfEquivariantM2M5Cocycle} has vanishing bosonic flux ( $\mu_{{}_{\rm M2/M5}}\vert_{\psi = 0} = 0$ by \eqref{TheM2M5Cocycle}). Also, the infinitesimal fermionic component $\psi$ does not contribute to the topology seen by plain equivariant Cohomotopy (see \cite{OrbifoldCohomology} for details). Hence the homotopy in \eqref{DifferentialEquivariantCohomotopyPullback} forces the underlying classes in plain equivariant Cohomotopy to be \emph{pure torsion} at global Elmendorf stage. But, since in compatible RO-degree (as in Example \ref{ExamplesOfCompatibleRODegree}) the Hopf degree theorem \eqref{HopfDegreeTheorem} implies non-torsion Cohomotopy groups at all positive Elmendorf stages \eqref{ElmedorfStageWiseHopfDegrees}, this means that super-differential refinement \eqref{DifferentialEquivariantCohomotopyPullback} of equivariant Cohomotopy in compatible RO-degree enforces \emph{vanishing} Hopf degrees at global Elmendorf stage $H = 1$ \eqref{ElmedorfStageWiseHopfDegrees}. \medskip Explicitly, this means that the super-differential enhancement \eqref{DifferentialEquivariantCohomotopyPullback} forces the underlying plain equivariant Cohomotopy cocycles of ADE-orientifolds in compatible RO-degree to be in the kernel of the forgetful map $(-)^1$ \eqref{ElmedorfStageWiseHopfDegrees} from equivariant to ordinary Cohomotopy, which projects out the global Elmendorf stage at $H = 1$: \begin{equation} \label{KernelOfTheGlobalElmendorfStageProjection} \xymatrix@R=1pt@C=3em{ \overset{ { \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} unstable equivariant Cohomotopy \\ admitting super-differential refinement \end{tabular} } } } }{ \pi^{\mathbf{4}_{\mathbb{H}}}_{G^{\mathrm{ADE}}} \big( \big( \mathbb{T}^{\mathbf{4}_{\mathbb{H}}} \big)_+ \big)_{ {}_{ \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} ``super-differentiable'' \\ \eqref{DifferentialEquivariantCohomotopyPullback} \end{tabular} } } }{\mathrm{Sdiffble}} }} } \ar@{^{(}->}[dd]_-{ \mathrm{kernel} } \ar[rrrr] \ar@{}[ddrrrr]|<<<<<<<<<<<<<<<<<<<{ \mbox{\tiny (pb)} } &&&& \{0\} \ar@{^{(}->}[dd] \\ \\ \;\;\; \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{ccc} $\phantom{A}$ && \\ && equivariant Cohomotopy \eqref{EquivariantCohomotopySet} \\ && of toroidal orbifold \eqref{RepresentationTorus} \\ && with ADE-singularities \eqref{TheQuaternionicRepresentation} \end{tabular} } } }{ \pi^{\mathbf{4}_{\mathbb{H}}}_{G^{\mathrm{ADE}}} } \big( \big( \mathbb{T}^{\mathbf{4}_{\mathbb{H}}} \big)_+ \big) \;\;\; \ar[rrr]^-{ \left\vert Q_{\mathrm{tot}}\right\vert \coloneqq (-)^1 }_-{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} project out \\ global charge = \\ Hopf degree at global Elmendorf stage \\ \eqref{ElmedorfStageWiseHopfDegrees} \end{tabular} } } &&& \;\;\;\; \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} plain Cohomotopy \\ of plain 4-torus \\ \eqref{PlainCohomotopySet} \end{tabular} } } }{ \pi^4 \big( \big( \mathbb{T}^4 \big)_+ \big) } \ar@{}[r]|-{\simeq} & \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} \\ global Hopf degree \\ = net brane/O-plane charge \\ \eqref{HopfDegreeTheorem} \end{tabular} } } }{ \mathbb{Z} } 
} \end{equation} \medskip \noindent It is now immediate, from Theorem \ref{UnstableEquivariantHopfDegreeTheorem} and Theorem \ref{UnstableEquivariantHopfDegreeTheoremForTori}, that this enforces the condition of vanishing net brane/O-plane charge, precisely in the form of the global/untwisted tadpole cancellation condition from \hyperlink{Table1}{\it Table 1} and \hyperlink{Table2}{\it Table 2} in the way illustrated in \hyperlink{FigureA}{\it Figure A}. \vspace{4mm} \noindent {\bf 2. Super-differential enhancement at lower Elmendorf stage implies choice of O-plane charge.} The globalization via \eqref{KernelOfTheGlobalElmendorfStageProjection} of the lower $S^0$-valued Elmendorf stage in the equivariantized M2/M5-brane cocycle \eqref{ElmendorfStagesOfEquivariantM2M5Cocycle} means to impose the chosen charge $\in \{0,1\}$ to all O-planes, via Prop. \ref{TheoremLocalTadpoleCancellation} as illustrated in \hyperlink{FigureH}{\it Figure H}. We will denote the ADE-equivariant Cohomotopy sets which admit super-differential refinement with the choice $-1 \in \{0,1\}$ in \eqref{ElmendorfStagesOfEquivariantM2M5Cocycle} by a subscript $(-)_-$: \begin{example}[\bf Super-differentiable equivariant Cohomotopy of ADE-orbifolds] \label{SuperDifferentiableEquivariantCohomotopyOfADEOrbifolds} Locally, the super-differentiable equivariant Cohomotopy of the vicinity of an ADE-singularity (\hyperlink{Table5}{\it Table 5}) with respect to the choice $-1 \in \{-0,-1\}$ in the equivariant enhancement \eqref{ElmendorfStagesOfEquivariantM2M5Cocycle} of the super-flux form \eqref{TheM2M5Cocycle} is \begin{equation} \label{SuperDifferentiableLocalCohomotopyCharge} \pi^{\mathbf{4}_{\mathbb{H}}}_{G^{\mathrm{ADE}}} \big( \big( \mathbb{R}^{\mathbf{4}_{\mathbb{H}}} \big)^{\mathrm{cpt}} \big)_- \;=\; \Big\{ \underset{ \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} local charge structure \\ (Prop. \eqref{TheoremLocalTadpoleCancellation}) \end{tabular} } }{ 1 \cdot \mathbf{1}_{\mathrm{triv}} - N_{\mathrm{brane}} \cdot \mathbf{k}_{\mathrm{reg}} } \;\Big\vert\; N_{\mathrm{brane}} \in \mathbb{Z} \Big\} \,. \end{equation} Globally, the super-differentiable equivariant Cohomotopy specifically of the Kummer surface ADE-orbifold $\mathbb{T}^{\mathbf{4}_{\mathbb{H}}}\sslash \mathbb{Z}^{\mathrm{refl}}_2$ (Example \ref{KummerSurface}) is \begin{equation} \label{SuperDifferentiableEquivariantCohomotopyOfKummerSurface} \underset{ \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} ADE-equivariant Cohomotopy \\ admitting super-differential lift \\ \eqref{DifferentialEquivariantCohomotopyPullback} \end{tabular} } }{ \pi^{\mathbf{4}_{\mathbb{H}}}_{\mathbb{Z}_2^{\mathrm{refl}}} \big( \big( \mathbb{T}^{\mathbf{4}_{\mathbb{H}}} \big)_+ \big)_{\mathrm{Sdiffble}_-} } \;=\; \Big\{ \underset{ \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} \\ super-differentiability \\ at low Elmendorf stage \\ \eqref{ElmendorfStagesOfEquivariantM2M5Cocycle} \end{tabular} } }{ 16 \cdot \mathbf{1}_{\mathrm{triv}} } - \underset{ { \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} \\ local charge structure \\ (Prop. \ref{TheoremLocalTadpoleCancellation}, Prop. \ref{PushforwardOfVicinityOfSingularityToRepresentationTorus}) \end{tabular} } } }{ N_{\mathrm{brane} } \cdot \mathbf{2}_{\mathrm{reg}} } \;\big\vert\; \underset{ \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} \\ super-differentiablity \\ at global Elmendorf stage \\ \eqref{KernelOfTheGlobalElmendorfStageProjection} \end{tabular} } }{ 2 N_{\mathrm{brane} } - 16 = 0 } \Big\}. 
\end{equation} \end{example} \section{M5/MO5 anomaly cancellation} \label{M5MO5AnomalyCancellation} We now apply the general discussion of equivariant Cohomotopy in \cref{EquivariantCohomotopyAndTadpoleCancellation} to cohomotopical charge quantization of the M-theory C-field, according to \hyperlink{HypothesisH}{\it Hypothesis H}, for compactifications of heterotic M-theory on toroidal orbifolds with ADE-singularities. The resulting M5/MO5-anomaly cancellation is discussed in \cref{EquivariantCohomotopyChargeOfM5AtMO5} below. In order to set the scene and to sort out some fine print, we first discuss in \cref{HeteroticMTheoryOnADEOrbifolds} relevant folklore regarding heterotic M-theory on ADE-orbifolds. \subsection{Heterotic M-theory on ADE-orbifolds} \label{HeteroticMTheoryOnADEOrbifolds} We now explain how the singularity structure (as in \hyperlink{Table5}{\it Table 5}), which must really be meant when speaking of MO5-planes \eqref{TheMO5} coinciding with black M5-branes \eqref{M5Singularity}, is that of ``$\tfrac{1}{2}\mathrm{M5}$-branes'' \eqref{TheHalfM5} \cite[2.2.7]{ADE}\cite[4]{FSS19d}; see \hyperlink{FigureS}{\it Figure S} below. This singularity structure goes back to \cite[3]{Sen97} with further discussion and development in \cite{FLO99}\cite{KSTY99}\cite{FLO00a}\cite{FLO00b}\cite{FLO00c}\cite{CabreraHananySperling19}; the type IIA perspective is considered in \cite{GKST01} and also briefly in \cite[p. 4]{KataokaShimojo02}. We highlight the systematic picture behind the resulting {\it heterotic M-theory on ADE-orbifolds} and its string theory duals, further below in \hyperlink{Table7}{\it Table 7}. \medskip \noindent {\bf Critique of pure $\mathrm{MO5}$-planes.} We highlight the following: \begin{enumerate}[{\bf (i)}] \vspace{-2mm} \item Seminal literature on M-theoretic orientifolds speaks of M5-branes parallel and/or coincident to {\it MO5} {\it singularities} \cite{DasguptaMukhi95}\cite[3.3]{Witten95b}\cite[2.1]{Hori98}, namely to Euclidean $\mathbb{Z}_2$-orientifolds \eqref{EuclideanGSpace} of the form (see \cite[2.2.2]{ADE}): \vspace{-.6cm} \begin{equation} \label{TheMO5} {\color{darkblue}\tiny \bf \mathrm{MO5}} \phantom{AAAAAA} \mathbb{R}^{5,1} \; \xymatrix{\ar@{^{(}->}[r]&} \mathbb{R}^{5,1} \times \xymatrix{ \mathbb{R}^{\mathbf{5}_{\mathrm{sgn}}} \ar@(ul,ur)|-{\, \mathbb{Z}_2} } \,, \end{equation} where $\mathbb{R}^{\mathbf{5}_{\mathrm{sgn}}}$ is the Euclidean singularity \eqref{EuclideanGSpace} of the 5-dimensional sign representation of the group $\mathbb{Z}_2$. \vspace{-2mm} \item But $\sfrac{1}{2}$BPS M5-brane solutions of $D=11$ supergravity themselves have been classified \cite[8.3]{MF10} and found to be given, in their singular far horizon limit \cite[3]{AFCS99}, by singularities for finite subgroups $G^{\mathrm{ADE}} \subset \mathrm{SU}(2) \simeq \mathrm{Sp}(1)$ \eqref{ADESubgroups} of the type \vspace{-.6cm} \begin{equation} \label{M5Singularity} {\color{darkblue}\tiny \mathrm{M5}} \phantom{AAAAAA} \mathbb{R}^{5,1} \; \xymatrix{\ar@{^{(}->}[r]&} \mathbb{R}^{5,1} \times \mathbb{R}^1 \times \xymatrix{ \mathbb{R}^{\mathbf{4}_{\mathbb{H}}} \ar@(ul,ur)|{\;\;\; G^{\mathrm{ADE}} } } \,, \end{equation} where the last factor is an ADE-singularity \eqref{TheQuaternionicRepresentation}. \vspace{-3mm} \item As orbifold singularities, this coincides with the far horizon geometry of coincident KK-monopole solutions to 11d supergravity (e.g. 
\cite[(47)]{IMSY98}\cite[(18)]{Asano00}; see \cite[2.2.5]{ADE}) \vspace{-.3cm} \begin{equation} \label{MK6Singularity} {\color{darkblue}\tiny \mathrm{MK6}} \phantom{AAAAAA} \mathbb{R}^{6,1} \; \xymatrix{\ar@{^{(}->}[r]&} \mathbb{R}^{6,1} \times \xymatrix{ \mathbb{R}^{\mathbf{4}_{\mathbb{H}}} \ar@(ul,ur)|{\;\;\; G^{\mathrm{ADE}} } } \,, \end{equation} which, from the perspective of type IIA theory, reflects the fact that NS5-branes are domain walls inside D6-branes (e.g. \cite[p. 5]{EGKRS00}, see \cite[3.3.1. 3.3.2]{Fazzi17}). This is illustrated by the central dot on the vertical axis in \hyperlink{FigureS}{\it Figure S}. Hence for the special case that $G^{\mathrm{ADE}} = \mathbb{Z}^{\mathrm{refl}}_2$ \eqref{PointReflectionSubgroup}, this yields the product $\mathbb{R}^1 \times \mathbb{R}^{\mathbf{4}_{\mathrm{sgn}}}$ of the 4-dimensional sign representation with the trivial 1-dimensional representation, instead of the 5-dimensional sign representation in \eqref{TheMO5}. \vspace{-3mm} \item In order to allow M5-singularities \eqref{M5Singularity} to coincide with MO5-singularities \eqref{TheMO5} we have to consider intersecting a $\sfrac{1}{2}$BPS 5-brane solution with an $\mathrm{MO9}$ locus fixed by a Ho{\v r}ava-Witten involution $\mathbb{Z}_2^{\mathrm{HW}}$ (\cite{HoravaWitten96a}, see \cite[2.2.1]{ADE}): \vspace{-4mm} \begin{equation} \label{TheMO9} {\color{darkblue}\tiny \mathrm{MO9}} \phantom{AAAAAA} \mathbb{R}^{9,1} \; \xymatrix{\ar@{^{(}->}[r]&} \mathbb{R}^{9,1} \times \xymatrix{ \mathbb{R}^{\mathbf{1}_{\mathrm{sgn}}} \ar@(ul,ur)|-{\, \mathbb{Z}^{\mathrm{HW}}_2\!\!\!\!\!} } \,. \end{equation} \vspace{-3mm} \item This intersection is called the $\tfrac{1}{2}\mathrm{M5}$ in \cite[2.2.7]{ADE}\cite[4]{FSS19d} \vspace{-.5cm} \begin{equation} \label{TheHalfM5} {\color{darkblue} \tfrac{1}{2}\mathrm{M5} } = \mathrm{MK6} \cap \mathrm{MO9} \phantom{AAAAAA} \mathbb{R}^{5,1} \; \xymatrix{\ar@{^{(}->}[r]&} \mathbb{R}^{5,1} \times \xymatrix{ \mathbb{T}^{\mathbf{1}_{\mathrm{sgn}}} \ar@(ul,ur)^{\mathbb{Z}_2^{\mathrm{HW}}} } \xymatrix{ \times \ar@[white]@(ul,ur)^{\times} } \xymatrix{ \mathbb{T}^{\mathbf{4}_{\mathbb{H}}} \ar@(ul,ur)^{ G^{\mathrm{ADE}} } } \end{equation} since its type IIA incarnation is known as the $\tfrac{1}{2}\mathrm{NS5}$ \cite[6]{GKST01}\cite[p. 18]{ApruzziFazzi17}. This is the brane configuration thought to geometrically engineer $D=6$, $\mathcal{N} = (1,0)$ field theories \cite{HananyZaffaroni97}\cite{HKLY15}\cite[6]{DHTV14}. \end{enumerate} \vspace{.0cm} \noindent \hspace{-5pt} $ \mbox{ \hyperlink{FigureR}{} \begin{minipage}[l]{9.3cm} Since the fixed point set of the toroidal orbifolds \eqref{RepresentationTorus} for both the $\tfrac{1}{2}\mathrm{M5}$ \eqref{TheHalfM5} and the $\mathrm{MO5}$ \eqref{TheMO5} is the same set \eqref{RepresentationTorusOfSignRep} of 32 points, all arguments about $\mathrm{MO5}$ \eqref{TheMO5} which depend only on the set of isolated orientifold fixed points, such as in \cite{DasguptaMukhi95}\cite[3.3]{Witten95b}\cite[2.1]{Hori98}, apply to $\tfrac{1}{2}\mathrm{M5}$ \eqref{TheHalfM5} as well. But the $\tfrac{1}{2}\mathrm{M5}$ orientifold has in addition fixed lines, namely the $\mathrm{MK6}$ loci, and fixed 4-planes, namely the $\mathrm{MO9}$, as shown on the right of \hyperlink{FigureS}{Figure S}. This reflects the fact that, by the classification of \cite[8.3]{MF10}, the black $\mathrm{M5}$ not only may, but must appear as a domain wall inside an $\mathrm{MK6}$ singular locus. 
We {\bf conclude} from this that: \emph{The $\tfrac{1}{2}\mathrm{M5}$ \eqref{TheHalfM5} orientifold is the correct model of orientifolded M5/MO5 geometry, while the pure $\mathrm{MO5}$ \eqref{TheMO5} is just its restriction along the diagonal subgroup inclusion \eqref{Z2ReflHW}, as shown in \hyperlink{FigureR}{\it Figure R}.} \end{minipage} } \phantom{AA} \raisebox{70pt}{\small \xymatrix@C=-5pt@R=-1pt{ & \mathpalette\mathclapinternal{ \overset{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} orientifold \\ subgroup \\ ${\phantom{A}}$ \end{tabular} } }{ H \subset G } } & & & & \mathpalette\mathclapinternal{ \overset{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} fixed/singular subspace \eqref{FixedLoci} \\ ${\phantom{A}}$ \end{tabular} } }{ \mathbb{R}^{5,1} \times \big( \mathbb{R}^{ \mathbf{1}^{\mathrm{HW}}_{\mathrm{sgn}} + \mathbf{4}^{\mathrm{ADE}}_{\mathbb{H}} } \big)^H } } \\ & \mathbb{Z}_2^{\mathrm{HW}} \times G^{\mathrm{ADE}} &&&& \overset{ \mbox{\bf \tiny \color{darkblue} $\tfrac{1}{2}\mathrm{M5}$ } }{ \mathpalette\mathrlapinternal{\phantom{\vert \atop \vert}} \mathbb{R}^{5,1} } \\ \mathbb{Z}_2^{\mathrm{HW}} \ar@{^{(}->}[ur] & & \mathbb{Z}_2^{\mathrm{refl}} \ar@{^{(}->}[ul] & {\phantom{AA}} & \overset{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} MO9 \end{tabular} } }{ \mathpalette\mathrlapinternal{\phantom{\vert \atop \vert}} \mathbb{R}^{9,1} } && \overset{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} MK6 \end{tabular} } }{ \mathpalette\mathrlapinternal{\phantom{\vert \atop \vert}} \mathbb{R}^{6,1} } \\ & \mathpalette\mathclapinternal{\phantom{\vert \atop \vert}} \mathbb{Z}_2^{ \mathrm{refl}+\mathrm{HW} } \ar@{^{(}->}[uu]|-{\mathrm{diag}} &&&& \underset{ \mbox{\bf \tiny \color{darkblue} $\mathrm{MO}5$ } }{ \mathpalette\mathrlapinternal{\phantom{\vert \atop \vert}} \mathbb{R}^{5,1} } \ar@{^{(}->}[ul] \ar@{_{(}->}[ur] \\ & & &&& \\ & 1 \ar@/^1pc/@{^{(}->}[uuul] \ar@{^{(}->}[uu] \ar@/_1pc/@{_{(}->}[uuur] &&&& \\ \mathpalette\mathrlapinternal{ \!\!\!\!\!\!\!\!\!\! \mbox{ \begin{minipage}[l]{7cm} {\footnotesize \bf Figure R -- Fixed subspaces in the $\tfrac{1}{2}\mathrm{M5}$-singularity \eqref{TheHalfM5} } {\footnotesize with MO5 \eqref{TheMO5} in the intersection of MK6 \eqref{MK6Singularity} with MO9 \eqref{TheMO9}, illustrated in \hyperlink{FigureS}{\it Figure S}. } \end{minipage} } } } } $ \vspace{2mm} \begin{equation} \label{Z2ReflHW} \xymatrix{ \underset{ \mbox{\bf \tiny \color{darkblue} \eqref{TheMO5} } }{ \mathbb{Z}_2^{\mathrm{refl}+\mathrm{HW}} } \; \ar@{^{(}->}[rr]^-{\small \mathrm{diag} } && \underset{ \mbox{\bf \tiny \color{darkblue} \eqref{TheMO9} } }{ \mathbb{Z}^{\mathrm{HW}}_2 } \times \underset{ \mbox{\bf \tiny \color{darkblue} \eqref{PointReflectionSubgroup} } }{ \mathbb{Z}^{\mathrm{refl}}_2 } \; \ar@{^{(}->}[rr] && \underset{ \mbox{\bf \tiny \color{darkblue} \eqref{TheMO9} } }{ \mathbb{Z}_2^{\mathrm{HW}} } \times \underset{ \mbox{\bf \tiny \color{darkblue} \eqref{ADESubgroups} } }{ G^{\mathrm{ADE}} } }. \end{equation} \noindent In summary, this data arranges into a short exact sequence of orbi-/orienti-fold group actions (as in \cite[p.
4]{DFM11}) \begin{equation} \label{OrbiOrientifoldGroupSequence} \hspace{-4mm} \xymatrix@C=2.5em@R=0pt{ 1 \ar[r] & \underset{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} \phantom{a} \\ orbifold \end{tabular} } }{ G^{\mathrm{ADE}} } \ar@{^{(}->}[rr]^-{ \mbox{ \tiny \begin{tabular}{c} index-2 subgroup \end{tabular} } } && \overset{ \{\mathrm{e}, \sigma\} }{ \overbrace{ \mathbb{Z}_2^{\mathrm{HW}} } } \times G^{\mathrm{ADE}} \ar[rr]^-{ \scalebox{.7}{ $ \begin{aligned} (\mathrm{e}, q) & \mapsto (\mathrm{e}, + q) \\ (\sigma, q) & \mapsto ( R , - q ) \end{aligned} $ } }_-{ \simeq } & \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} \phantom{a} \\ ---\!---\!---\!---\!---\!---\!---\!---\!---\!---\!--- orbi-orientifold ---\!---\!---\!---\!---\!---\!---\!---\!---\!---\!--- \end{tabular} } } } {\phantom{\mathpalette\mathclapinternal{A}}} & \overset{ \{\mathrm{e}, R\} }{ \overbrace{ \mathbb{Z}_2^{\mathrm{HW} + \mathrm{refl}} } } \times G^{\mathrm{ADE}} \ar@{->>}[r] & \underset{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} \phantom{A} \\ orientifold \end{tabular} } }{ \mathbb{Z}_2^{\mathrm{HW} + \mathrm{refl}} } \ar[r] & 1 \\ & \mathbb{R}^{\mathbf{1}_{\mathrm{triv}} + \mathbf{4}_{\mathbb{H}}} && \mathbb{R}^{ \mathbf{1}_{\mathrm{sgn}} + \mathbf{4}_{\mathbb{H}} } && \mathbb{R}^{ \mathbf{5} } && \mathbb{R}^{ \mathbf{5}_{\mathrm{sgn}} } } \end{equation} \noindent This situation is illustrated by the following figure: \vspace{3mm} {\small \hypertarget{FigureS}{} \begin{tabular}{cc} \begin{tabular}{|cc||c|c|} \hline \multicolumn{2}{|c||}{ $\mbox{\bf Orientifold}$ } & $\mathpalette\mathclapinternal{\phantom{\vert \atop \vert}} \mathrm{\bf MO5}$ & $\tfrac{1}{2}\mathrm{\bf M5}$ \\ \hline \hline \begin{tabular}{c} Global quotient \\ group \end{tabular} & $G=$ & $\mathbb{Z}_2$ & $\mathpalette\mathclapinternal{\phantom{A \atop A}}$ $\mathbb{Z}_2^{\mathrm{HW}} \times G^{\mathrm{ADE}}$ \\ \hline \begin{tabular}{c} Global quotient \\ group action \end{tabular} & $\xymatrix{ \mathbb{T}^V \ar@(ul,ur)^G } = $ & $ \xymatrix{ \mathbb{T}^{\mathbf{5}_{\mathrm{sgn}}} \ar@(ul,ur)^{\mathbb{Z}_2} } $ & $ \xymatrix{ \mathbb{T}^{\mathbf{1}_{\mathrm{sgn}}} \ar@(ul,ur)^{\mathbb{Z}_2^{\mathrm{HW}}} } \xymatrix{ \times \ar@[white]@(ul,ur)^{\times} } \xymatrix{ \mathbb{T}^{\mathbf{4}_{\mathrm{sgn}}} \ar@(ul,ur)^{ G^{\mathrm{ADE}} } } $ \\ \hline \begin{tabular}{c} Fixed/singular \\ points \end{tabular} & $\left( T^V\right)^G = $ & \multicolumn{2}{c|}{ $\{0,\tfrac{1}{2}\}^5 = \overline{32}$ } \\ \hline \multicolumn{2}{|c||}{ \begin{tabular}{c} Far horizon-limit \\ of M5 SuGra solution? \end{tabular} } & no & yes \\ \hline \end{tabular} \hspace{-.1cm} \scalebox{.76}{ \raisebox{-96pt}{ \includegraphics[width=.5\textwidth]{half-M5} }} \end{tabular} } \noindent {\footnotesize \bf Figure S -- Singularity structure of heterotic M-theory on ADE-singularities}, {\footnotesize as in \hyperlink{FigureR}{Figure R}, \cite[2.2.2, 2.2.7]{ADE}. 
The corresponding toroidal orbifolds (as per \hyperlink{Table5}{\it Table 5}) are illustrated in \hyperlink{FigureV}{\it Figure V} and \hyperlink{Table8}{\it Table 8}.} \vspace{5mm} \noindent {\bf $\mathrm{O}^0$-planes and M2-brane CS level.} There is one more ingredient to the $G$-space structure of heterotic M-theory on ADE-orbifolds (see \hyperlink{Table7}{\it Table 7} below for the full picture): While the MO5-planes \eqref{TheMO5} are supposed to be the M-theory lifts of the charged $\mathrm{O4}^{\pm}$-planes \cite[3]{Hori98}\cite[III.A]{Gimon98}\cite[3.1.1]{HananyKol00}, the M-theory lift of the un-charged $\mathrm{O4}^0$-planes (see \hyperlink{FigureOP}{\it Figure OP}) involves one more group action on spacetime \cite[III.B]{Gimon98}, being rotation of the circle fiber in M/IIA-duality, which we hence indicate as follows: \vspace{-.5cm} \begin{equation} \label{TheIIAZero} {\color{darkblue}\tiny \bf \mathrm{IIA}^0} \phantom{AAAAAA} \mathbb{R}^{9,1} \times \varnothing \; \xymatrix{\ar@{^{(}->}[r]&} \mathbb{R}^{9,1} \times \xymatrix{ S^1 \ar@(ul,ur)|-{\, \mathbb{Z}^{\mathrm{rot}}_k \!\!\!} } . \end{equation} Here on the right we have the circle regarded as a $\mathbb{Z}_k$-space (\cref{EquivariantCohomotopyAndTadpoleCancellation}) via rigid rotation by multiples of $2 \pi/k$, for any $k \in \mathbb{Z} \setminus \{0\}$. This is of course a free action (in particular, not a representation sphere \eqref{RepSpheres}) hence with empty fixed subspace \eqref{FixedLoci}, whence the superscript $(-)^0$ and the empty set $\varnothing$ of fixed points in \eqref{TheIIAZero}. But passing along the unique $\mathbb{Z}_k^{\mathrm{rot}}$-equivariant function \eqref{EquivariantFunction} \begin{equation} \label{KKReductionOnShiftCircle} \xymatrix{ S^1 \ar@(ul,ur)|-{\, \mathbb{Z}^{\mathrm{rot}}_k\!\!\!} \ar[rrr]^-{ \mathrm{KK}_{S^1_{\mathrm{rot}}} }_-{ \mbox{ \tiny \color{darkblue} \bf \begin{tabular}{c} KK-reduction on $S^1_{\mathrm{rot}}$ \end{tabular} } } &&& \ast \ar@(ul,ur)|-{\, \mathbb{Z}^{\mathrm{rot}}_k \!} } \end{equation} from the circle to the point $\ast$ with its necessarily trivial $\mathbb{Z}^{\mathrm{rot}}_k$-action, as befits KK-reduction from M-theory to type IIA string theory (see \cite{BSS18} for discussion in the context of \hyperlink{HypothesisH}{\it Hypothesis H}), we obtain a non-empty fixed subspace: \vspace{-2mm} \begin{equation} \label{TheIIA} {\color{darkblue}\tiny \bf \mathrm{IIA}} \phantom{AAAAAA} \mathbb{R}^{9,1} \; \xymatrix{\ar@{^{(}->}[r]&} \mathbb{R}^{9,1} \times \xymatrix{ \ast \ar@(ul,ur)|-{\, \mathbb{Z}^{\mathrm{rot}}_k\!} } . \end{equation} In these terms, we may phrase the core of M/IIA duality as saying that \begin{center} \emph{The lift of $\mathrm{IIA}$ \eqref{TheIIA} through $\mathrm{KK}_{S^1_{\mathrm{rot}}}$ \eqref{KKReductionOnShiftCircle} is $\mathrm{IIA}^0$ \eqref{TheIIAZero}}. \end{center} Notice that in the case where the global 11d-spacetime is $\mathrm{AdS}_4$ times $S^7$, regarded as an $S^1_{\mathrm{rot}}$-fibration $$ \xymatrix@R=10pt{ S^1\ar@(ul,ur)|-{\, \mathbb{Z}^{\mathrm{rot}}_k \!\!\!} \ar[r] & S^7 \ar@(ul,ur)|-{\, \mathbb{Z}_k^{\mathrm{rot}} \!\!\!} \ar[r] & \mathbb{C}P^3 } $$ the order $k$ of $\mathbb{Z}^{\mathrm{rot}}_k$ in \eqref{TheIIAZero} is the level of the dual 3d Chern-Simons-matter theory \cite{ABJM08}.
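\medskip

\noindent Concretely (a standard parametrization, recalled only for orientation): this is the complex Hopf fibration, with $S^7 = \big\{ \vec z \in \mathbb{C}^4 \,\big\vert\, \vert \vec z \vert = 1 \big\}$ and with $\mathbb{Z}^{\mathrm{rot}}_k \subset S^1 \subset \mathbb{C}^{\times}$ acting fiberwise by
\begin{equation*}
  [n]
  \;:\;
  \vec z
  \;\longmapsto\;
  e^{2 \pi \mathrm{i}\, n / k} \cdot \vec z
  \,,
\end{equation*}
which is a free action, as in \eqref{TheIIAZero}, covering the trivial action on $\mathbb{C}P^3$.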
\medskip The argument in \cite[III.B]{Gimon98}, together with our discussion above, suggests that the analogous statement for $\mathrm{O4}^0$-planes is this: \vspace{-2mm} \begin{center} \emph{The lift of $\mathrm{O4}^0$ through $\mathrm{KK}_{S^1_{\mathrm{rot}}}$ \eqref{KKReductionOnShiftCircle} is $\mathrm{MO5}^0$ \eqref{TheMO5Zero} }. \end{center} \vspace{-2mm} \noindent Hence we take $\mathrm{MO5}^0$ to be the following $G$-space/orbifold, combining $\mathrm{MO5}$ \eqref{TheMO5} with $\mathrm{IIA}^0$ \eqref{TheIIAZero}: \vspace{-.2cm} \begin{equation} \label{TheMO5Zero} {\color{darkblue}\tiny \bf \mathrm{MO5}^0} \phantom{AAAAAA} \mathbb{R}^{4,1} \times \varnothing \; \xymatrix{\ar@{^{(}->}[r]&} \mathbb{R}^{4,1} \times \xymatrix{ S^1 \ar@(ul,ur)|-{\, \mathbb{Z}^{\mathrm{rot}}_k\!\!} } \times \xymatrix{ \mathbb{T}^{\mathbf{5}_{\mathrm{sgn}}} \ar@(ul,ur)^-{\, \mathbb{Z}_2^{\mathrm{refl}+\mathrm{HW}}} } \,. \end{equation} As before in \eqref{TheIIAZero}, the fixed subspace of the diagonal group action (now for $k =2$, as in \cite[(3.2)]{Gimon98}) $$ \mathbb{Z}_2^{\mathrm{refl}+\mathrm{rot}+\mathrm{HW}} \; \xymatrix{\ar@{^{(}->}[r]^{\mathrm{diag}}&} \mathbb{Z}_2^{\mathrm{refl}+\mathrm{HW}} \times \mathbb{Z}_2^{\mathrm{rot}} $$ in \eqref{TheMO5Zero} is actually empty, since the action of $\mathbb{Z}_2^{\mathrm{rot}}$ and hence that of $\mathbb{Z}_2^{\mathrm{refl}+\mathrm{HW}+\mathrm{rot}}$ is free, whence the superscript $(-)^0$. But, as before in \eqref{TheIIA}, under M/IIA KK-reduction \eqref{KKReductionOnShiftCircle} we have an equivariant projection map to the orbifold \begin{equation} \label{TheO4Zero} {\color{darkblue}\tiny \bf \mathrm{O4}}^0 \phantom{AAAAAA} \mathbb{R}^{4,1} \; \xymatrix{\ar@{^{(}->}[r]&} \mathbb{R}^{4,1} \times \xymatrix{ \ast \ar@(ul,ur)|-{\, \mathbb{Z}^{\mathrm{rot}}_k\!\!\!} } \times \xymatrix{ \mathbb{T}^{\mathbf{5}_{\mathrm{sgn}}} \ar@(ul,ur)^-{\, \mathbb{Z}_2^{\mathrm{refl}+\mathrm{HW}}} } \,, \end{equation} with non-empty fixed/singular subspace being the $\mathrm{O4}$-worldvolume -- which is thereby exhibited as being un-charged, as its lift to M-theory in in fact non-singular. \medskip In the same manner, there is the analogous $\mathbb{Z}_k^{\mathrm{rot}}$-resolution of the $\mathrm{MK6}$-singularity \eqref{MK6Singularity} \begin{equation} \label{TheMK6Zero} {\color{darkblue}\tiny \bf \mathrm{MK6}^0} \phantom{AAAAAA} \mathbb{R}^{6,1} \times \varnothing \; \xymatrix{\ar@{^{(}->}[r]&} \mathbb{R}^{5,1} \times \xymatrix{ S^1 \ar@(ul,ur)|-{\, \mathbb{Z}^{\mathrm{rot}}_k \!\!\!} } \times \xymatrix{ \mathbb{T}^{ \mathbf{4}_{\mathbb{H}} } \ar@(ul,ur)|-{\;\; G^{\mathrm{ADE}} \!\! } } , \end{equation} as well as of the $\mathrm{MO9}$-singularity \eqref{TheMO9}: \vspace{-.2cm} \begin{equation} \label{TheMO9Zero} {\color{darkblue}\tiny \bf \mathrm{MO9}^0} \phantom{AAAAAA} \mathbb{R}^{9,1} \times \varnothing \; \xymatrix{\ar@{^{(}->}[r]&} \mathbb{R}^{8,1} \times \xymatrix{ S^1 \ar@(ul,ur)|-{\, \mathbb{Z}^{\mathrm{rot}}_k\!\!\!} } \times \xymatrix{ \mathbb{T}^{\mathbf{1}_{\mathrm{sgn}}} \ar@(ul,ur)|-{\, \mathbb{Z}_2^{\mathrm{HW}}\!\!\!\!\!} } . 
\end{equation} The reduction of the latter along $\mathrm{KK}_{S^1_{\mathrm{rot}}}$ \eqref{KKReductionOnShiftCircle} is \begin{equation} \label{TheO8Zero} {\color{darkblue}\tiny \bf \mathrm{O8}^0} \phantom{AAAAAA} \mathbb{R}^{8,1} \; \xymatrix{\ar@{^{(}->}[r]&} \mathbb{R}^{8,1} \times \xymatrix{ \ast \ar@(ul,ur)|-{\, \mathbb{Z}^{\mathrm{rot}}_k} } \times \xymatrix{ \mathbb{T}^{\mathbf{1}_{\mathrm{sgn}}} \ar@(ul,ur)|-{\, \mathbb{Z}_2^{\mathrm{HW}}\!\!\!\!\!} } . \end{equation} \noindent In summary, {\it the full singularity structure of heterotic M-theory on ADE-orbifolds}, such as to admit \begin{enumerate}[{\bf (i)}] \vspace{-2mm} \item black M5-branes coinciding with MO5-planes and \vspace{-2mm} \item the $\mathrm{MO5}^0$-lift of $\mathrm{O4}^0$-planes \end{enumerate} \vspace{-2mm} is as shown in \hyperlink{Table7}{\it Table 7}. \vspace{-4mm} \begin{center} \hypertarget{Table7}{} \begin{tikzpicture} \draw (0,0) node {\footnotesize \begin{tabular}{c} M5-branes at \\ MO9-planes intersecting \\ ADE-singularities in \\ M-theory on \\ \raisebox{-20pt}{ \fbox{ $ \xymatrix{ S^{\mathbf{1}_{\mathrm{sgn}}} \ar@(ul,ur)|<<<<{\color{darkblue} \bf {}_{\mathbb{Z}_2^{\mathrm{HW}}} } } \!\!\times\!\!\! \xymatrix{ \mathbb{T}^{ \mathbf{4}_{\mathbb{H}} } \ar@(ul,ur)|<<<<{\color{darkblue} \bf {}_{G^{\mathrm{ADE}}} } } \!\times\! \xymatrix{ S^1 \ar@(ul,ur)|<<<<{\color{darkblue} \bf {}_{\mathbb{Z}^{\mathrm{rot}}_k} } } $ } } $\mathpalette\mathrlapinternal{ \mbox{ \hspace{-.4cm} \raisebox{-10pt}{ {\tiny \begin{tabular}{l} (\cite[3]{Sen97}, \\ \cite{FLO99}, \\ \cite{KSTY99} ) \end{tabular} } }} }$ \end{tabular} }; \begin{scope}[shift={(0,-.1)}] \draw[->] (0,-1.5) to node { \small \colorbox{white}{\tiny \color{darkblue} \begin{tabular}{c} \bf reduction on \\ $S^1_{\mathrm{ADE}}$ \end{tabular} } } (0,-2.9); \draw[->] (1.5,-1.5) to node { \small \colorbox{white}{ \tiny \color{darkblue} \bf \begin{tabular}{c} reduction on \\ $S^1_{\mathrm{rot}}$ \eqref{KKReductionOnShiftCircle} \end{tabular} } $\mathpalette\mathrlapinternal{ \mbox{ \hspace{.5cm} \tiny (\cite{HoravaWitten96a}) } }$ } (4,-2.9); \draw[->] (-1.5,-1.5) to node { \small \colorbox{white}{\tiny \color{darkblue} \begin{tabular}{c} \bf reduction on \\ $S^1_{\mathrm{HW}}$ \end{tabular} } } (-4,-2.9); \end{scope} \begin{scope}[shift={(0,.4)}] \draw (0,-5) node {\footnotesize \begin{tabular}{c} NS5-branes at \\ O8-planes intersecting \\ D6-branes in \\ $\mathrm{I}'$-theory on \\ \raisebox{-20pt}{ \fbox{ $ \xymatrix{ S^{\mathbf{1}_{\mathrm{sgn}}} \ar@(ul,ur)|<<<<{\color{darkblue} \bf {}_{\;\; \mathbb{Z}_2^{\mathrm{HW}}} } } \!\times \! \xymatrix{ S^1 \ar@(ul,ur)|<<<<{\color{darkblue} \bf {}_{\mathbb{Z}_2^{\mathrm{rot}}} } } $ } } $\mathpalette\mathrlapinternal{\mbox{ \raisebox{-10pt}{ {\tiny (\cite{GKST01})} } }}$ \end{tabular} }; \draw (5,-5) node { \footnotesize \begin{tabular}{c} D4-branes at \\ O8-planes intersecting \\ ADE-singularities in \\ $\mathrm{I}'$-theory on \\ \raisebox{-20pt}{ \fbox{ $ \xymatrix{ S^{\mathbf{1}_{\mathrm{sgn}}} \ar@(ul,ur)|<<<<{\color{darkblue} \bf {}_{\;\;\mathbb{Z}_2^{\mathrm{HW}}} } } \!\!\times\!\!\! 
\xymatrix{ \mathbb{T}^{ \mathbf{4}_{\mathbb{H}} } \ar@(ul,ur)|<<<<{\color{darkblue} \bf {}_{G^{\mathrm{ADE}}} } } $ } } $\mathpalette\mathrlapinternal{ \mbox{ \tiny \raisebox{-10pt}{ \begin{tabular}{c} (\cite{BRG12}, \\ \cite[3.4.2]{HKKP15}) \end{tabular} } }}$ \end{tabular} }; \draw (-5,-5) node {\footnotesize \begin{tabular}{c} NS5-branes \\ at \\ ADE-singularities in \\ $\mathrm{HET}_{E}$-theory on \\ \raisebox{-20pt}{ \fbox{ $ \xymatrix{ S^1 \ar@(ul,ur)|<<<<{\color{darkblue} \bf {}_{\mathbb{Z}_2^{\mathrm{rot}}} } } \!\!\times\!\!\! \xymatrix{ \mathbb{T}^{ \mathbf{4}_{\mathbb{H}} } \ar@(ul,ur)|<<<<{\color{darkblue} \bf {}_{G^{\mathrm{ADE}}} } } $ } } $ \mathpalette\mathrlapinternal{ \raisebox{-10pt}{ \mbox{\tiny (\cite{Witten99})}}} $ \end{tabular} }; \end{scope} \end{tikzpicture} \end{center} \vspace{-4mm} \noindent {\bf \footnotesize Table 7 -- Singularity structure of heterotic M-theory on ADE-orbifolds and its string theory duals} {\footnotesize given by combining the $\tfrac{1}{2}\mathrm{M5}$-structure of \hyperlink{FigureS}{\it Figure S} with $\mathrm{IIA}^0$-structure \eqref{TheIIAZero}, hence admitting also $\mathrm{MO5}^0$-structure \eqref{TheMO5Zero}.} \vspace{.4cm} \noindent The following \hyperlink{FigureT}{\it Figure T} shows the corresponding subgroup lattice with its associated fixed/singular spaces: \begin{center} \hypertarget{FigureT}{} \begin{tikzpicture}[scale=0.75] \begin{scope} \draw (0,5.4) node { \small \color{darkblue} \bf \begin{tabular}{c} $\mathrm{M}_{\mathrm{HET}}/{\mathrm{ADE}}$-orbifold subgroups \\ $H \subset$ \\ $ G^{\mathrm{ADE}} \times \mathbb{Z}_k^{\mathrm{rot}} \times \mathbb{Z}_2^{\mathrm{HW}} $ \end{tabular} }; \draw (0,4) node {$\overbrace{\phantom{--------------------}}$}; \begin{scope}[shift={(0,1)}] \draw (0+30+180:2.2) to (0+30+180+60:3.5); \draw (0+30+180+120:2.2) to (0+30+180+60+120:3.5); \draw (0+30+180+240:2.2) to (0+30+180+60+240:3.5); \draw (0+30+180+60:3.5) to (0+30+180+120:2.2); \draw (0+30+180+60+120:3.5) to (0+30+180+120+120:2.2); \draw (0+30+180+60+240:3.5) to (0+30+180+120+240:2.2); \draw (0,0) to (0+30+180+60:3.5); \draw (0,0) to (0+30+180+60+120:3.5); \draw (0,0) to (0+30+180+60+240:3.5); \draw (0+30+180:2.2) node {\colorbox{white}{ $ \mathbb{Z}_2^{\mathrm{refl}+\mathrm{rot}} $}}; \draw (60+30+180:3.5) node {\colorbox{white}{$ \underset{ \mbox{ \tiny \color{darkblue} \bf \eqref{KKReductionOnShiftCircle} } }{ \mathbb{Z}_k^{\mathrm{rot}} } $}}; \draw (120+30+180:2.2) node {\colorbox{white}{$ \mathbb{Z}_2^{\mathrm{rot}+\mathrm{HW}} $}}; \draw (180+30+180:3.5) node {\colorbox{white}{$ \mathbb{Z}_2^{\mathrm{HW}} $}}; \draw (240+30+180:2.2) node {\colorbox{white}{$ \underset{ \mbox{ \tiny \color{darkblue} \bf \eqref{Z2ReflHW} } }{ \mathbb{Z}_2^{\mathrm{refl}+\mathrm{HW}} } $}}; \draw (300+30+180:3.5) node {\colorbox{white}{ $ \underset{ \mbox{ \tiny \color{darkblue} \bf \eqref{PointReflectionSubgroup} } }{ \mathbb{Z}_2^{\mathrm{refl}} } $}}; \draw (0,0) node {\colorbox{white}{$ \mathbb{Z}_2^{\mathrm{refl}+\mathrm{rot}+\mathrm{HW}} $}}; \end{scope} \end{scope} \begin{scope}[shift={(9.7,0)}] \draw (0,5.5) node {\small \color{darkblue} \bf \begin{tabular}{c} Fixed/singular subspaces {\footnotesize\eqref{FixedLoci}} \\ $ \Big( \xymatrix{ \mathbb{T}^{ \mathbf{4}_{\mathbb{H}} } \ar@(ul,ur)|<<<<{\color{darkblue} \bf {}_{G^{\mathrm{ADE}}} } } \!\times\! \xymatrix{ S^1 \ar@(ul,ur)|<<<<{\color{darkblue} \bf {}_{\mathbb{Z}^{\mathrm{rot}}_k} } } \!\! \times \!\!\! 
\xymatrix{ S^{\mathbf{1}_{\mathrm{sgn}}} \ar@(ul,ur)|<<<<{\color{darkblue} \bf {}_{\;\; \mathbb{Z}_2^{\mathrm{HW}}} } } \Big)^H $ \end{tabular} }; \draw (0,4) node {$\overbrace{\phantom{--------------------}}$}; \begin{scope}[shift={(0,1)}] \draw (0+30+180:2.2) to (0+30+180+60:3.5); \draw (0+30+180+120:2.2) to (0+30+180+60+120:3.5); \draw (0+30+180+240:2.2) to (0+30+180+60+240:3.5); \draw (0+30+180+60:3.5) to (0+30+180+120:2.2); \draw (0+30+180+60+120:3.5) to (0+30+180+120+120:2.2); \draw (0+30+180+60+240:3.5) to (0+30+180+120+240:2.2); \draw (0,0) to (0+30+180+60:3.5); \draw (0,0) to (0+30+180+60+120:3.5); \draw (0,0) to (0+30+180+60+240:3.5); \draw (0+30+180:2.2) node {\colorbox{white}{$ \underset{ \mbox{\bf \tiny \color{darkblue} \eqref{TheMK6Zero} } }{ \mathrm{MK6}^0 } $}}; \draw (60+30+180:3.5) node {\colorbox{white}{$ \underset{ \mbox{\bf \tiny \color{darkblue} \eqref{TheIIAZero} } }{ \mathrm{IIA}^0 } $}}; \draw (120+30+180:2.2) node {\colorbox{white}{$ \underset{ \mbox{\bf \tiny \color{darkblue} \eqref{TheMO9Zero} } }{ \mathrm{MO9}^0 } $}}; \draw (180+30+180:3.5) node {\colorbox{white}{$ \underset{\bf \tiny \color{darkblue} \eqref{TheMO9} }{ \mathrm{MO9} } $}}; \draw (240+30+180:2.2) node {\colorbox{white}{$ \underset{ \mbox{\bf \tiny \color{darkblue} \eqref{TheMO5} } }{ \mathrm{MO5} } $}}; \draw (300+30+180:3.5) node {\colorbox{white}{$ \underset{ \mbox{\bf \tiny \color{darkblue} \eqref{MK6Singularity} } }{ \mathrm{MK6} } $}}; \draw (0,0) node {\colorbox{white}{$ \underset{ \mbox{\bf \tiny \color{darkblue} \eqref{TheMO5Zero} } }{ \mathrm{MO5}^0 } $}}; \end{scope} \end{scope} \end{tikzpicture} \end{center} \vspace{-5mm} \noindent {\footnotesize \bf Figure T -- Subgroup lattice and fixed/singular subspaces in the singularity structure for heterotic M-theory on ADE-orbifolds from \hyperlink{Table7}{\it Table 7}}. {\footnotesize On the left, groups associated to the middle of a sub-simplex are diagonal subgroups inside the direct product of subgroups associated to the vertices, as indicated by the superscripts. On the right, all fixed loci with superscript $(-)^0$ are actually empty, but appear as superficially non-empty (un-charged) singularities after M/IIA KK-reduction \eqref{KKReductionOnShiftCircle}, e.g. $\mathrm{O4}^0$ \eqref{TheO4Zero}, $\mathrm{O8}^0$ \eqref{TheO8Zero}, as on the right of \hyperlink{FigureOP}{\it Figure OP}. The numbered subscripts $(xx)$ indicate the corresponding expression in the text.} \subsection{Equivariant Cohomotopy charge of M5 at $\mathrm{MO5}_{\mathrm{ADE}}$} \label{EquivariantCohomotopyChargeOfM5AtMO5} Applying the general mathematical results of \cref{EquivariantCohomotopyAndTadpoleCancellation} to the $\mathrm{M}_{\mathrm{HET}}/\mathrm{ADE}$-singularities from \cref{HeteroticMTheoryOnADEOrbifolds}, we finally show here (see \hyperlink{FigureV}{\it Figure V} ) that \hyperlink{HypothesisH}{\it Hypothesis H} formalizes and validates the following widely accepted but informal Folklore \ref{AnomalyCancellationOnMTheoreticOrientifolds}, concerning the nature of M-theory: \vspace{.0cm} \noindent \begin{minipage}[l]{10.8cm} \hypertarget{FigureU}{} \begin{folklore}[M5/MO5 anomaly cancellation {\cite{DasguptaMukhi95}\cite[3.3]{Witten95b} \cite[2.1]{Hori98}}] \label{AnomalyCancellationOnMTheoreticOrientifolds} For M-theory on the toroidal orientifold $\mathbb{R}^{5,1} \times \mathbb{T}^{ \mathbf{5}_{\mathrm{sgn}} } \!\sslash\! 
\mathbb{Z}_2$ (\hyperlink{Table5}{\it Table 5}) with MO5-singularities \eqref{TheMO5}, consistency requires the situation shown in \hyperlink{Table2}{\it Table 2}: \begin{enumerate} [\bf (i)] \vspace{-3mm} \item a charge of $q_{{}_{\mathrm{MO5}}}/q_{{}_{\mathrm{M5}}} = -1/2$ is carried by each of the fixed/singular MO5-planes \eqref{TheMO5}; \vspace{-3mm} \item the M5-brane charge is integral in natural units, hence on the covering $\mathbb{Z}_2$-space $\mathbb{T}^{\mathbf{5}_{\mathrm{sgn}}}$ the M5-branes appear in $\mathbb{Z}_2$-mirror pairs around the MO5-planes, as in \hyperlink{FigureL}{\it Figure L} and \hyperlink{FigureN}{\it Figure N}; \vspace{-3mm} \item the total charge of the $N_{{}_{\rm M5}}$ M5-branes has to cancel that of the 32 O-planes \eqref{RepresentationTorusOfSignRep}, $N_{{}_{\rm M5}} q_{{}_{\rm M5}} + 32 q_{{}_{\rm MO5}} = 0$, as indicated in \hyperlink{FigureA}{\it Figure A}. \end{enumerate} \vspace{-4mm} \end{folklore} \noindent Via the similarly widely accepted Folklore \ref{ReductionOfMO5ToO4}, the statement of Folklore \ref{AnomalyCancellationOnMTheoreticOrientifolds} implies tadpole anomaly cancellation in string theory. Notice that this is not so much a claim than part of the defining criterion for M-theory: \vspace{-2mm} \begin{folklore}[Double dimensional reduction of M5/MO5 to D4/O4 {\cite[3]{Hori98}\cite[III.A]{Gimon98}\cite[3.1.1]{HananyKol00}}] \label{ReductionOfMO5ToO4} Under M/IIA duality, the situation of Folklore \ref{AnomalyCancellationOnMTheoreticOrientifolds} becomes the string-theoretic tadpole cancellation condition from \hyperlink{Table1}{\it Table 1} for D4-branes and $\mathrm{O}4^-$-planes. \end{folklore} \vspace{-5mm} \begin{folklore}[T-duality relating $\mathrm{O}$-planes, e.g. {\cite[p.317-318]{BLT13}}] \label{TDualityRelatesOpPlanes} By iterative T-duality, the situation of Folklore \ref{ReductionOfMO5ToO4} implies general tadpole cancellation for $\mathrm{D}p$-branes and $\mathrm{O}p^-$-planes (\hyperlink{Table3}{\it Table 3}). \end{folklore} \end{minipage} $\phantom{a}$ \fbox{ $ \!\!\! \raisebox{156pt}{ \xymatrix@R=14pt{ \fbox{ \hyperlink{HypothesisH}{\it Hypothesis H} } \ar@{=>}[dd]_-{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} Equivariant Cohomotopy \\ of $\mathrm{M}_{\mathrm{HET}}/\mathrm{ADE}$-orbifolds \end{tabular} } }^-{ \mbox{\color{darkblue} \bf \tiny \begin{tabular}{l} Rigorous: Cor. \ref{EquivariantCohomotopyOfSemiComplementSpacetime}, \ref{GlobalM5MO5CancellationImplied} \end{tabular} } } \\ \\ \fbox{ \begin{tabular}{c} M5/MO5 anomaly cancellation \\ { \tiny (Folklore \ref{AnomalyCancellationOnMTheoreticOrientifolds}) } \end{tabular} } \ar@{<=>}[dd]_-{ \mbox{ \tiny \begin{tabular}{c} M/IIA duality \end{tabular} } }^-{ \mbox{ \tiny \begin{tabular}{c} Folklore \ref{ReductionOfMO5ToO4} \end{tabular} } } \\ \\ \fbox{ \begin{tabular}{c} D4/O4 tadpole cancellation \end{tabular} } \ar@{<=>}[dd]_-{ \mbox{ \tiny \begin{tabular}{c} T-duality \end{tabular} } }^-{ \mbox{ \tiny \begin{tabular}{l} Folklore \ref{TDualityRelatesOpPlanes} \end{tabular} } } \\ \\ \fbox{ \begin{tabular}{c} D$p$/O$p$-tadpole cancellation \end{tabular} } \\ \mbox{ \footnotesize \begin{minipage}[l]{5.7cm} \noindent {\bf Figure U -- Structure of the argument.} We demonstrate that \hyperlink{HypothesisH}{\it Hypothesis H} on C-field charge quantization in Cohomotopy, applied to heterotic M-theory on toroidal ADE-oribolds, implies M5/MO5-anomaly cancellation in M-theory. 
This directly subsumes and implies the statement of tadpole cancellation for D$4$/O$4$ branes in string theory. \end{minipage} } } } \!\!\! $ } \vspace{-2mm} \begin{center} \hypertarget{FigureV}{} \begin{tikzpicture}[scale=0.8] \begin{scope}[shift={(0,-.7)}] \draw (1.5,6.7) node {$\overbrace{\phantom{------------------------}}$}; \draw (11,6.7) node {$\overbrace{\phantom{---------------}}$}; \draw (1.5,7.6) node {\tiny \color{darkblue} \bf \begin{tabular}{c} semi-complement \eqref{SemiComplement} \\ of $\mathrm{MO5} \subset \tfrac{1}{2}\mathrm{M5}$-singularities (\hyperlink{FigureS}{\it Figure S}) \\ in heterotic M-theory on ADE-orbifolds (\hyperlink{Table7}{\it Table 7}) \end{tabular} }; \draw (11,7.6) node { \tiny \color{darkblue} \bf \begin{tabular}{c} C-field charge quantization \\ in ADE-equivariant Cohomotopy \eqref{EquivariantCohomotopySet} \end{tabular} }; % \draw[->] (4,8.5) to node[above] { \tiny \color{darkblue} \bf \begin{tabular}{c} M-theory C-field \\ charge-quantized by \hyperlink{Hyothesis}{\it Hypothesis H} \\ as a cocycle in equivariant Cohomotopy \end{tabular} } (9,8.4); \end{scope} \begin{scope}[shift={(0, 0)}] \clip (0,-1) rectangle (2.8,5.7); \draw[dotted, thick] (0,-3) to (0,6); \draw[step=3, dotted, thick] (-3,-3) grid (6,6); \draw[draw=black, fill=white] (0,0) circle (.1); \draw[draw=black, fill=white] (0,3) circle (.1); \draw[draw=black, fill=white] (3,3) circle (.1); \draw[draw=black, fill=white] (3,0) circle (.1); \draw[draw=black, fill=white] (3,6) circle (.1); \draw[draw=black, fill=white] (0,6) circle (.1); \draw[draw=black, fill=white] (3,-3) circle (.1); \draw[draw=black, fill=white] (0,-3) circle (.1); \draw[dashed] (0,3+.7) to (3,3+.7); \draw[dashed] (0,3-.7) to (3,3-.7); \draw[dashed] (0,+.7) to (3,+.7); \draw[dashed] (0,-.7) to (3,-.7); \end{scope} \draw (0,-.9) node {\tiny $x_1 = 0$}; \draw (3,-.9) node {\tiny $x_1 > 0$}; \draw (-3.7,0) node {\tiny $x_2 = 0$}; \draw (-3.7,3) node {\tiny $x_2 = \tfrac{1}{2}$}; % \node at (11,2) {\colorbox{white}{$\phantom{a}$}}; \draw (11,2) circle (2); \node (infinity) at (11+2,2) {\colorbox{white}{$\infty$}}; \node (zero) at (11-2,2) {$-\,\tiny \mathpalette\mathrlapinternal{0} $}; % \begin{scope}[shift={(9,2)}] \fill[darkblue] (2,0) ++(40+180:2) node (minusepsilon) {\begin{turn}{-45} $)$ \end{turn}}; \fill[darkblue] (2,0) ++(180-40:2) node (epsilon) {\begin{turn}{45} $)$ \end{turn}}; \fill[darkblue] (2.3,0.25) ++(40+180:2) node (label+epsilon) { \tiny $-\epsilon$ }; \fill[darkblue] (2.3,-0.25) ++(-40-180:2) node (label-epsilon) { \tiny $+\epsilon$ }; \draw[<->, dashed, gray] (label+epsilon) to node {\tiny $\mathbb{Z}_2$} (label-epsilon); \end{scope} % % \node (torus) at (1.5,7.5) {\raisebox{42pt}{$ \big( \mathbb{R}^{\mathbf{1}_{\mathrm{sgn}}} / \mathbb{Z}_2^{\mathrm{HW}} \big) \times \xymatrix{ \mathbb{T}^{ \mathbf{4}_{\mathrm{sgn}} } \ar@(ul,ur)|{\, \mathbb{Z}^{\mathrm{refl}}_2 \!\!\!\!\! 
} } $}}; \node (sphere) at (11,7.2) {\raisebox{42pt}{$ \xymatrix{ S^{\mathbf{4}_{\mathbb{H}}} \ar@(ul,ur)|-{\, \mathbb{Z}_2 } } $} }; % % \draw[<->, dashed] (1.5,3+1.5) to node[very near end] { \tiny \color{darkblue} \bf \begin{tabular}{c} \;\; residual \\ \;\; $\mathbb{Z}_2^{\mathrm{refl}}$-action \end{tabular} } (1.5,3-1.5); \draw[draw=darkblue, fill=darkblue] (0.01,3+.35-.05) rectangle (2.8,3+.35+.05); \draw[draw=darkblue, fill=darkblue] (0.01,3-.35-.05) rectangle (2.8,3-.35+.05); \draw[draw=lightgray, fill=lightgray] (0.09,-.05) rectangle (2.8,+.05); \draw[thin] (0.09,-.05) to (2.8,-.05); \draw[thin] (0.09,+.05) to (2.8,+.05); \draw[draw=lightgray, fill=lightgray] (0.09,3-.05) rectangle (2.8,3+.05); \draw[thin] (0.09,3-.05) to (2.8,3-.05); \draw[thin] (0.09,3+.05) to (2.8,3+.05); \draw[|->, olive] (2.1,3+.35) to[bend right=8] (zero); \draw[|->, olive] (2.1,3-.35) to[bend right=8] (zero); \draw[|->, olive] (2.1,3) to[bend right=8] node { \colorbox{white}{ \tiny \color{darkblue} \bf codimension 1 submanifolds } } (zero); \draw[|->, olive] (2.1,0) to[bend right=8] node { \colorbox{white}{ \tiny \color{darkblue} \bf codimension 1 submanifold } } (zero); \draw[|->, olive] (1.2,5) to[bend left=26] node { } (infinity); \draw[|->, olive] (1.8,3+.7) to[bend left=26] node { \colorbox{white} { \tiny \color{darkblue} \bf cocycle vanishes far away from fixed lines } } (infinity); \begin{scope}[shift={(0,-3)}] \node (MO9) at (-.7,4.7) {\tiny \color{darkblue} \bf $\mathrm{MO9}$}; \draw[->, gray] (MO9) to (-.1,4.3); \begin{scope}[shift={(.1,3.35)}] \node (MK6) at (-.7,3.9) {\tiny \color{darkblue} \bf $\mathrm{MK6}$}; \draw[->, gray] (MK6) to (.3,3.1); \end{scope} \node (MO5) at (-1,3) {\tiny \color{darkblue} \bf $\mathrm{MO5}$}; \draw[->, gray] (MO5) to (-.1,3); \node (M5) at (-.7,6.6) {\tiny \color{darkblue} \bf $\mathpalette\mathllapinternal{\tfrac{1}{2}}\mathrm{M5}$}; \draw[->, gray] (M5) to ++(.6,-.2); \node (halfM5) at (-1,6) { \tiny \color{darkblue} \bf $\mathpalette\mathllapinternal{-\tfrac{1}{2}\mathrm{M5} = \;} \mathrm{MO5}$}; \draw[->, gray] (halfM5) to ++(.9,0); \node (mirrorM5) at (-.7,5.4) { \tiny \color{darkblue} \bf $\mathpalette\mathllapinternal{\mbox{mirror}\;\tfrac{1}{2}}\mathrm{M5}$ }; \draw[->, gray] (mirrorM5) to ++(.6,+.2); \end{scope} \begin{scope}[shift={(0,+.35)}] \clip (0,3+.2) rectangle (1,3-.2); \draw[draw=green, fill=green] (0,3) circle (.1); \end{scope} \begin{scope}[shift={(0,-.35)}] \clip (0,3+.2) rectangle (1,3-.2); \draw[draw=green, fill=green] (0,3) circle (.1); \end{scope} \end{tikzpicture} \end{center} \vspace{-.4cm} \noindent {\bf \footnotesize Figure V -- Equivariant Cohomotopy of ADE-orbifolds in heterotic M-theory} {\footnotesize with singularity structure as in \hyperlink{FigureS}{\it Figure S}. The resulting charge classification (Cor. \ref{EquivariantCohomotopyOfSemiComplementSpacetime}) implies, via the unstable PT isomorphism (\cref{PTTheorem}), the $\tfrac{1}{2}\mathrm{M5} = \mathrm{MO9} \cap \mathrm{MK6}$-brane configurations \eqref{TheHalfM5} similarly shown in \cite[Fig. 1]{FLO99}\cite[p. 7]{KSTY99}\cite[Fig. 1]{FLO00a}\cite[Fig. 2]{FLO00b}\cite[Fig. 1]{FLO00c}\cite[p. 4, 68, 71]{GKST01}. This is as in \hyperlink{FigureL}{\it Figure L} but with points (M5s) extended to half-line (MK6s), see Remark \ref{TheRoleOfMK6EndingOnM5} and \hyperlink{Table8}{\it Table 8}. 
} \medskip \noindent {\bf C-Field flux quantization at pure MO5-Singularities.} To put the discussion below in perspective, it is instructive to first recall the success and the shortcoming of the existing argument \cite[2]{Hori98} for M5/MO5-brane charge quantization around a \emph{pure} MO5-singularity \eqref{TheMO5} (see the left column of\hyperlink{Table8}{\it Table 8}): Following the classical argument of \cite{Dirac31}, we consider removing the locus of the would-be M5-brane from spacetime and then computing the appropriate cohomology of the remaining complement. For the \emph{pure} MO5-singularity \eqref{TheMO5} the complement spacetime is, up to homotopy equivalence, the 4-dimensional real projective space: \vspace{-.2cm} \begin{equation} \label{M5ThreadedThroughRP4} \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} complement spacetime \\ around pure MO5-singularity \end{tabular} } } }{ X^{11}_{{}_{\mathrm{MO5}}} } \;\;\;\;\;\;\;\;=\; \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} full Euclidean orientifold \eqref{EuclideanGSpace} \\ with pure MO5-singularity \eqref{TheMO5} \end{tabular} } } }{ \big( \mathbb{R}^{5,1} \times \xymatrix{ \mathbb{R}^{\mathbf{5}_{\mathrm{sgn}}} } \overset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} pure \eqref{Z2ReflHW} \\ orbifold quotient \end{tabular} } } }{ \!\! \!\sslash\! \mathbb{Z}_2^{\mathrm{refl}+\mathrm{HW}} } \big) } \setminus \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} $\mathbb{Z}_2^{\mathrm{refl}+\mathrm{HM}}$-fixed \\ subspace \eqref{FixedLoci} \end{tabular} } } }{ \{ \mathbb{R}^{5,1} \times \{0\} \} } \;\simeq_{{}_{\mathrm{homotopy}}}\; \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} unit 4-sphere \\ around MO5 \end{tabular} } } }{ S\big( \mathbb{R}^{\mathbf{5}_{\mathrm{sgn}}} \big) } / \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} pure \eqref{Z2ReflHW} \\ MO5-quotient \end{tabular} } } }{ \mathbb{Z}_2^{\mathrm{refl}+\mathrm{HW}} } \;\simeq\; \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} real projective \\ 4-space \end{tabular} } } }{ \mathbb{R}P^4 } \,. \end{equation} Since this ambient spacetime \eqref{M5ThreadedThroughRP4} is a smooth but curved (i.e. non-parallelizable) manifold, the flavor of Cohomotopy theory that measures its M-brane charge, according to \hyperlink{HypothesisH}{\it Hypothesis H}, is, according to \hyperlink{Table4}{\it Table 4}, the $J$-twisted Cohomotopy theory of \cite[3]{FSS19b}. This implies, by \cite[Prop. 4.12]{FSS19b}, that rationalized brane charge (bottom of \eqref{DifferentialEquivariantCohomotopyPullback}) is measured by the integral of a differential 4-form $G_4 \in \Omega^4\big( X^{11}\big) $ (the C-field 4-flux density) which satisfies the half-integral shifted flux quantization condition \begin{equation} \label{HalfIntegralFluxQuantization} [G_4] + \big[ \tfrac{1}{4}p_1 \big] \;\in\; H^4\big( X^{11}, \mathbb{Z}\big) \to H^4\big( X^{11}, \mathbb{R}\big) \end{equation} as is expected from the M-theory folklore (recalled in \cite[2.2]{FSS19b}). 
Applying this to the complement $X^{11}_{\mathrm{MO5}}$ \eqref{M5ThreadedThroughRP4} around a pure MO5-plane implies, as pointed out in \cite[2.1]{Hori98}, that there must be an \emph{odd integer} of brane charge in the pure MO5-spacetime \vspace{-3mm} \begin{equation} \label{TheOddChargeAroundAPureMO5} \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} {\phantom{A}} \\ $J$-twisted Cohomotopy (\cite[3.1]{FSS19b}) \\ of pure MO5-complement \eqref{M5ThreadedThroughRP4} \end{tabular} } } }{ \pi^{{}^{ \overset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} twist } } }{ T \mathbb{R}P^4 } }} \!\! \big( X^{11}_{{{}_\mathrm{MO5}}} \big)_{\mathbb{R}} } \;\;\;\;\;\;=\; \underset{ \raisebox{13pt}{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} due to half-integral $G_4$-flux quantization \eqref{HalfIntegralFluxQuantization} \\ implied by twisted Cohomotopy \cite[Prop. 4.12]{FSS19b} \end{tabular} } } }{ \Bigg\{ 2 \int_{\mathbb{R}P^4} G_4 \;=\; \overset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} odd integer \\ net charge \\ {\phantom{-}} \end{tabular} } } }{ 1 - 2N } \;\vert\; N \in \mathbb{N} \Bigg\} } \,. \end{equation} \noindent {\bf The need to resolve further microscopic details.} If one could identify in \eqref{TheOddChargeAroundAPureMO5} the offset of $1 \,\mathrm{mod}\, 2$ in \eqref{TheOddChargeAroundAPureMO5} with the charge carried by the pure MO5-plane \eqref{TheMO5}, and the remaining even charge $2 N$ with that of $N$ M5-branes in its vicinity \begin{equation} \label{MissingBraneChargeIdentification} 1 - N \cdot 2 \;\overset{?}{=}\; Q_{\mathrm{MO5}} - N_{\mathrm{brane}} \cdot Q_{\mathrm{M5}} \end{equation} this would be the local/twisted M5/MO5-anomaly cancellation condition of \hyperlink{Table2}{\it Table 2}. Without such further information, the charge quantization \eqref{TheOddChargeAroundAPureMO5} around \emph{pure} MO5-planes \eqref{TheMO5} is only \emph{consistent with} the local/twisted M5/MO5-anomaly cancellation from \hyperlink{Table2}{\it Table 2}, as noticed in \cite[bottom of p. 5]{Hori98}. \medskip But with the results of \cref{EquivariantCohomotopyAndTadpoleCancellation} and in view of \cref{HeteroticMTheoryOnADEOrbifolds}, we may now complete this old argument (see the right column of \hyperlink{Table8}{\it Table 8}): \medskip \noindent {\bf Equivariant Cohomotopy implies local/twisted M5/MO5-anomaly cancellation at $\tfrac{1}{2}\mathrm{M5}$-singularities.} We know from \cref{LocalTadpoleCancellation} that the identification \eqref{MissingBraneChargeIdentification} missing from the result \eqref{TheOddChargeAroundAPureMO5} for twisted Cohomotopy on smooth but curved spacestimes \emph{is} implied by the result of Prop. \ref{TheoremLocalTadpoleCancellation} for equivariant Cohomotopy of singular but flat spacetimes. Moreover, we have argued in \cref{HeteroticMTheoryOnADEOrbifolds} that having black M5-branes actually coinciding with MO5-planes requires/implies that the pure MO5-planes are but the diagonally fixed sub-loci (shown in \hyperlink{Table6}{\it Table 6}) inside the richer $\tfrac{1}{2}\mathrm{M5} = \mathrm{MK6} \cap \mathrm{MO9}$-singularities \eqref{TheHalfM5} of heterotic M-theory on ADE-orbifolds (\hyperlink{FigureS}{\it Figure S}). 
Hence for a rigorous M5/MO5-anomaly cancellation result not just consistent with (as in \eqref{TheOddChargeAroundAPureMO5}), but actually \emph{implying} Folkore \ref{AnomalyCancellationOnMTheoreticOrientifolds}, we need to compute the M-brane charge at MO5-singularities inside $\tfrac{1}{2}\mathrm{M5}$-singularities \eqref{TheHalfM5}. Concretely, this means with \hyperlink{HypothesisH}{\it Hypothesis H} that M5/MO5-charge at a single MO5-singularity is measured by the equivariant Cohomotopy of the following $\tfrac{1}{2}\mathrm{M5}$-refinement of the naive MO5-complement spacetime \eqref{M5ThreadedThroughRP4}: \vspace{0mm} \begin{eqnarray} \label{SemiComplement} \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} semi-complement spacetime \\ around $\mathrm{MO5}$ in $\tfrac{1}{2}\mathrm{M5}$-singularity \end{tabular} } } }{ X^{11}_{{}_{\frac{1}{2}\mathrm{M5}}} } &\;\coloneqq\; & \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} full Euclidean orientifold \eqref{EuclideanGSpace} \\ with $\tfrac{1}{2}\mathrm{M5}$-singularity \eqref{TheHalfM5} \end{tabular} } } }{ \big( \mathbb{R}^{5,1} \!\!\times\!\! \xymatrix{ \mathbb{R}^{\mathbf{1}_{\mathrm{sgn}}} } \!\!\times\!\! \xymatrix{ \mathbb{R}^{\mathbf{4}_{\mathbb{H}}} } \!\!\!\sslash\! \mathbb{Z}_2^{\mathrm{HW}} \!\!\times\! \mathbb{Z}_2^{\mathrm{refl}} \big) } \setminus \big( \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} $\mathbb{Z}_2^{\mathrm{HW}}$-fixed subspace \eqref{FixedLoci} \\ with residual $\mathbb{Z}_{2}^{\mathrm{het}}$-action \eqref{WeylGroup} \end{tabular} } } }{ \mathbb{R}^{5,1} \!\!\times\!\! \{0\} \!\times\!\! \xymatrix{ \mathbb{R}^{\mathbf{4}_{\mathbb{H}}} } } \!\!\sslash\! \mathbb{Z}_2^{\mathrm{refl}} \big) \nonumber \\ & \; \underset{\mathrm{homotopy}}{\simeq}\; & \underset{ \simeq \;\ast }{ \underbrace{ \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} unit 0-sphere \\ around $\mathrm{MO9}$ \end{tabular} } \;\; } }{ S(\mathbb{R}^{\mathbf{1}_{\mathrm{sgn}}}) } / \underset{ \mathpalette\mathclapinternal{ \;\; \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} HW-quotient \\ \eqref{TheMO9} \end{tabular} } } }{ \mathbb{Z}_2^{\mathrm{HW}} } }} \; \times \!\!\!\! \underset{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} ADE-singularity \\ (\hyperlink{Table5}{\it Table 5}) \end{tabular} } }{ \xymatrix{ \mathbb{R}^{\mathbf{4}_{\mathbf{H}}} } \!\!\sslash\! \mathbb{Z}_2^{\mathrm{refl}} }. \end{eqnarray} As shown in the second line, this is homotopy-equivalent to a residual ADE-singularity (\hyperlink{Table5}{\it Table 5}). Therefore, the discussion from \cref{LocalTadpoleCancellation} applies: \begin{cor}[\bf Equivariant Cohomotopy implies local/twisted M5/MO5-anomaly cancellation] \label{EquivariantCohomotopyOfSemiComplementSpacetime} The super-differentiable \eqref{DifferentialEquivariantCohomotopyPullback} equivariant Cohomotopy charge of the vicinity (Def. 
\ref{CohomotopyOfVicinityOfSingularity}) of the semi-complement spacetime of a single charged $\mathrm{MO5}$-singularity \eqref{SemiComplement} $$ \pi^{\mathbf{4}_{\mathbb{H}}} \Big( \Big( X^{11}_{{{}_{\frac{1}{2}\mathrm{MO5}}}} \Big)^{\mathrm{cpt}} \Big)_- \;=\; \Big\{ \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} \\ MO5-plane \\ charge \end{tabular} } } }{ 1 \cdot \mathbf{1}_{\mathrm{triv}} } - \underset{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} \\ M5-brane \\ charge \end{tabular} } }{ N_{\mathrm{M5}} \cdot \mathbf{2}_{\mathrm{reg}} } \;\Big\vert\; N_{\mathrm{M5}} \in \mathbb{Z} \Big\} $$ as in Folklore \ref{AnomalyCancellationOnMTheoreticOrientifolds}, \hyperlink{Table2}{\it Table 2}, regarding the local/twisted form of M5/MO5-anomaly cancellation. \end{cor} \begin{proof} By $G$-homotopy invariance of $G$-equivariant homotopy theory, this follows as the special case of Prop. \ref{TheoremLocalTadpoleCancellation} with \eqref{SuperDifferentiableLocalCohomotopyCharge} in Example \ref{SuperDifferentiableEquivariantCohomotopyOfADEOrbifolds}, for $G = \mathbb{Z}_2$, hence with $k = \left\vert W_G(1)\right\vert = 2$. \end{proof} \begin{remark}[Super-exceptional geometry of $\mathrm{MO5}$ semi-complement] While here we consider only topological orientifold structure, the full super-exceptional geometry corresponding to \eqref{SemiComplement} is introduced in \cite[4]{FSS19d}; shown there to induce the M5-brane Lagrangian on any super-exceptional embedding of the $\tfrac{1}{2}\mathrm{M5}$-locus. \end{remark} \noindent {\bf Equivariant Cohomotopy implies local/untwisted M5/MO5-anomaly cancellation at $\tfrac{1}{2}\mathrm{M5}$-singularities.} It is immediate to consider the globalization of this situation to the semi-complement around one $\mathrm{MO9}$ in heterotic M-theory compactified on the toroidal $\mathbb{Z}_2^{\mathrm{refl}}$-orbifold $\mathbb{T}^{\mathbf{5}_{\mathrm{sgn}}} \sslash \mathbb{Z}_2^{\mathrm{refl}+\mathrm{HW}} $ with MO5-singularities: \vspace{-.2cm} \begin{equation} \label{SemiComplementOfToroidalMNO5Compactification} \underset{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} semi-complement spacetime \\ around $\mathrm{MO5}s$ in $\mathrm{M}_{\mathrm{HET}}/\mathbb{Z}_2^{\mathrm{refl}}$ \end{tabular} } }{ X^{11}_{\mathrm{M}_{\mathrm{HET}}/\mathbb{Z}^{\mathrm{refl}}_2} } \;\coloneqq\; \mathbb{R}^{5,1} \,\times \,\, \overset{ \simeq \, \ast }{ \overbrace{ \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} unit 0-sphere \\ around $\mathrm{MO9}$ \end{tabular} } \;\; } }{ S(\mathbb{R}^{\mathbf{1}_{\mathrm{sgn}}}) } / \underset{ \mathpalette\mathclapinternal{ \;\; \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} HW-quotient \\ \eqref{TheMO9} \end{tabular} } } }{ \mathbb{Z}_2^{\mathrm{HW}} } }} \;\;\;\;\; \times \!\!\!\!\! \underset{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} toroidal reflection-orbifold \eqref{PointReflectionSubgroup} \\ (\hyperlink{Table5}{\it Table 5}) \end{tabular} } }{ \xymatrix{ \mathbb{T}^{\mathbf{4}_{\mathbf{H}}} } \!\! \!\sslash\! \mathbb{Z}_2^{\mathrm{refl}}\;\;. } \end{equation} To this toroidal ADE-orbifold the discussion in \cref{GlobalTadpoleCancellation} applies as follows. 
\begin{cor}[\bf Equivariant Cohomotopy implies global/untwisted M5/MO5-anomaly cancellation] \label{GlobalM5MO5CancellationImplied} The super-differentiable \eqref{DifferentialEquivariantCohomotopyPullback} equivariant Cohomotopy charge \eqref{EquivariantCohomotopySet} of the semi-complement spacetime \eqref{SemiComplementOfToroidalMNO5Compactification} of heterotic M-theory on a toroidal MO5-orientifold (\cref{HeteroticMTheoryOnADEOrbifolds}) with charged MO5-planes in compatible RO-degree (Example \ref{ExamplesOfCompatibleRODegree}) and admitting equivariant super-differential refinement \eqref{KernelOfTheGlobalElmendorfStageProjection} is $$ \pi^{\mathbf{4}_{\mathbb{H}}} \Big( \big( X^{11}_{\mathrm{M}_{\mathrm{HET}}/\mathbb{Z}_2^{\mathrm{refl}}} \big)_{+} \Big)_{{}_{\mathrm{Sdiffble}_-}} \;=\; \Big\{ \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} \\ MO5-plane \\ charge \end{tabular} } } }{ 16 \cdot \mathbf{1}_{\mathrm{triv}} } - \underset{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} \\ M5-brane \\ charge \end{tabular} } }{ 8 \cdot \mathbf{2}_{\mathrm{reg}} } \big\} $$ \vspace{-2mm} \noindent as expected from Folklore \ref{AnomalyCancellationOnMTheoreticOrientifolds}, \hyperlink{Table2}{\it Table 2}, regarding the global/untwisted form of M5/MO5-anomaly cancellation (recalling that the semi-complement \eqref{SemiComplementOfToroidalMNO5Compactification} is that around \emph{one} of the two MO9-planes). \end{cor} \begin{proof} By $G$-homotopy invariance of equivariant Cohomotopy, this follows from the statement \eqref{SuperDifferentiableEquivariantCohomotopyOfKummerSurface} in Example \ref{SuperDifferentiableEquivariantCohomotopyOfADEOrbifolds}. \end{proof} \medskip \noindent More generally we have the following: \medskip \noindent {\bf M5/MO5-anomaly cancellation in heterotic M-theory on general ADE-orbifolds.} The statements and proofs of Corollary \ref{EquivariantCohomotopyOfSemiComplementSpacetime} and Cor. \ref{GlobalM5MO5CancellationImplied} directly generalize to heterotic M-theory on general $G^{\mathrm{ADE}}$-singularities $\mathbb{R}^{\mathbf{4}_{\mathbb{H}}}$ \cref{HeteroticMTheoryOnADEOrbifolds}, because the underlying results in \cref{EquivariantCohomotopyAndTadpoleCancellation} apply in this generality. Hence \hyperlink{HypothesisH}{\it Hypothesis H} implies that on the semi-complement spacetime of an $\mathrm{MO9}$ intersecting a toroidal ADE-orbifold \vspace{-.2cm} \begin{equation} \label{SemiComplementOfToroidalM5ADECompactification} \underset{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} semi-complement spacetime \\ around $\tfrac{1}{2}\mathrm{M5}_{\mathrm{ADE}}$ in $\mathrm{M}_{\mathrm{HET}}/\mathrm{ADE}$ \\ (\hyperlink{FigureS}{\it Figure S}) \end{tabular} } }{ X^{11}_{{}_{ \mathrm{M}_{\mathrm{HET}} / G^{\mathrm{ADE}} }} } \;\coloneqq\; \mathbb{R}^{5,1} \;\times \;\; \overset{ \simeq \, \ast }{ \overbrace{ \underset{ \mathpalette\mathclapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} \\ unit 0-sphere \\ around $\mathrm{MO9}$ \end{tabular} } \;\; } }{ S(\mathbb{R}^{\mathbf{1}_{\mathrm{sgn}}}) } / \underset{ \mathpalette\mathclapinternal{ \;\; \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} \\ HW-quotient \\ \eqref{TheMO9} \end{tabular} } } }{ \mathbb{Z}_2^{\mathrm{HW}} } } } \;\;\;\;\;\; \times \underset{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} \\ toroidal ADE-orbifold \\ (\hyperlink{Table5}{\it Table 5}) \end{tabular} } }{ \xymatrix{ \mathbb{T}^{\mathbf{4}_{\mathbf{H}}} } \!\! 
\!\sslash\! G^{\mathrm{ADE}} } \end{equation} the M5/MO5 charge, measured in equivariant Cohomotopy, is $$ Q_{\mathrm{tot}} \;=\; 16 \cdot \mathbf{1}_{\mathrm{triv}} - N_{\mathrm{M5}} \cdot \mathbf{k}_{\mathrm{reg}} \phantom{AAA} \left\vert Q_{\mathrm{tot}}\right\vert = 0 \,, $$ for $k = \left\vert G^{\mathrm{ADE}} \right\vert$ the order of the global quotient group. Under double dimensional reduction to type IIA string theory according to \hyperlink{Table7}{\it Table 7}, this implies the tadpole cancellation conditions for D4-branes in ADE-orientifolds, from \hyperlink{Table1}{\it Table 1}. \newpage \medskip {\small \hypertarget{Table8}{} \hspace{-.9cm} \begin{tabular}{|c|c|c|} \hline \multicolumn{3}{ |c| }{ \begin{tabular}{c} \\ \bf Spacetimes on which to measure flux sourced by M5/MO5-charge \\ $\phantom{-}$ \end{tabular} } \\ \hline \multirow{2}{*}{ {\bf Definition} } & $ \mathpalette\mathclapinternal{\phantom{ {\vert \atop \vert} \atop {\vert \atop \vert}}} X_{\mathrm{MO5}} \simeq_{{}_{\mathrm{htpy}}} S(\mathbb{R}^{\mathbf{1}_{\mathrm{sgn}} + \mathbf{4}_{\mathrm{sgn}}})/\mathbb{Z}_2^{\mathrm{het}+\mathrm{HW}} $ & $ X_{\sfrac{1}{2}\mathrm{M5}} \simeq_{{}_{\mathrm{htpy}}} S(\mathbb{R}^{\mathbf{1}_{\mathrm{sgn}}})/\mathbb{Z}_2^{\mathrm{HW}} \times \mathbb{T}^{\mathbf{4}_{\mathbb{H}}} \sslash \mathbb{Z}_2^{\mathrm{refl}} $ \\ & \eqref{M5ThreadedThroughRP4} & \eqref{SemiComplement} \\ \hline \hline \raisebox{2.5cm}{ {\bf Illustration} } & \begin{tikzpicture}[scale=.8] \begin{scope} \clip(-3,-3) -- (-3,0) -- (-2,0) arc (180:360:2cm and 0.6cm) -- (3,0) -- (3,-3) -- (-3,-3); \shade[ball color=lightblue!60!white, opacity=0.60] (0,0) circle (2cm); \end{scope} \shade[shading=radial, inner color=lightblue!20!white, outer color=lightblue!60!white, opacity=0.60] (2,0) arc (0:360:2cm and 0.6cm); \draw (2,0) arc (0:360:2cm and 0.6cm); \draw (-2,0) arc (180:360:2cm and 2cm); \draw[->, dashed] (0,0) to (30:1.06); \draw[->, dashed] (0,0) to (30+180:1.06); \draw[->, dashed] (0,0) to (180-30:1.06); \draw[->, dashed] (0,0) to (180-30+180:1.06); \end{tikzpicture} & \begin{tikzpicture}[scale=0.8] \draw (0,3.7) node {\phantom{$---$}}; \begin{scope}[shift={(1.5,1)}] \draw[dotted, thick] (0,0) circle (1.5); \begin{scope} \clip (1.5-.02,0-.1) rectangle (1.5+.1, 0+.1); \draw[fill=white] (1.5,0) circle (.1); \end{scope} \draw[draw=lightgray, fill=lightgray] (1.5+0.1,0-.05) rectangle (4,0+.05); \draw[] (1.5+0.1,0-.05) to (4,0-.05); \draw[] (1.5+0.1,0+.05) to (4,0+.05); \draw[draw=lightgray, fill=lightgray] (-1.5-0.1,0-.05) rectangle (-4,0+.05); \draw[] (-1.5-0.1,0-.05) to (-4,0-.05); \draw[] (-1.5-0.1,0+.05) to (-4,0+.05); \draw (4.6,0) node {\tiny $x_2 = \tfrac{1}{2}$}; \draw (-4.6,0) node {\tiny $x_2 = 0$}; \draw (0,-1.5) node {\colorbox{white}{\tiny $x_1 = 0$}}; \draw (-70:2.6) node {\colorbox{white}{\tiny $x_1 > 0$}}; \draw[dashed] (40:1.5) to (40:4); \draw[dashed] (-40:1.5) to (-40:4); \draw[very thick, darkblue] (18-1:1.5) to (18-.4:4); \draw[very thick, darkblue] (18-.5:1.5) to (18-.2:4); \draw[very thick, darkblue] (18-0:1.5) to (18-0:4); \draw[very thick, darkblue] (18+.5:1.5) to (18+.2:4); \draw[very thick, darkblue] (18+1:1.5) to (18+.4:4); \draw[very thick, darkblue] (-18-1:1.5) to (-18-.4:4); \draw[very thick, darkblue] (-18-.5:1.5) to (-18-.2:4); \draw[very thick, darkblue] (-18-0:1.5) to (-18-0:4); \draw[very thick, darkblue] (-18+.5:1.5) to (-18+.2:4); \draw[very thick, darkblue] (-18+1:1.5) to (-18+.4:4); \begin{scope} \clip (-1.5-.1,0-.1) rectangle (-1.5+0.02, 0+.1); \draw[fill=white] (-1.5,0) 
circle (.1); \end{scope} \draw (-1.5+.7,0) node {\tiny \color{darkblue} \bf MO5}; \draw[->, gray] (-1.5+.4,0) to ++(-.3,0); \begin{scope}[shift={(5.1,1.1)}] \draw (-2.1,.5) node {\tiny \color{darkblue} \bf MK6}; \draw[->, gray] (-2.1,.3) to ++(0,-.3); \end{scope} \draw (180-60:.9) node {\tiny \color{darkblue} \bf MO9 }; \draw[->, gray] (180-60:1.1) to (180-60:1.4); \draw node (halfM5) at (0:.75) { \tiny \color{darkblue} \bf $ \mathrm{MO5} $ }; \draw[->, gray] (halfM5) to ++(0:.7); \draw node (M5) at (36:.8) { \tiny \color{darkblue} \bf $ \mathpalette\mathllapinternal{\tfrac{1}{2}} \mathrm{M5} $ }; \draw[->, gray] (M5) to ++(.65,0); \draw node (mirrorM5) at (-36:.8) { \tiny \color{darkblue} \bf $ \mathpalette\mathllapinternal{\mbox{mirror}\;\tfrac{1}{2}} \mathrm{M5} $ }; \draw[->, gray] (mirrorM5) to ++(.65,0); \draw[dashed] (180-40:1.5) to (180-40:4); \draw[dashed] (180+40:1.5) to (180+40:4); \draw[<->, dashed, darkblue] (180-29:3) to[bend right=24] node[very near end] { $ \mathpalette\mathllapinternal{ \mbox{\bf \tiny \color{darkblue} \begin{tabular}{c} residual \\ $\mathbb{Z}_2^{\mathrm{refl}}$-action \end{tabular} } } $ } (180+29:3); \end{scope} \begin{scope}[shift={(1.5,1)}] \begin{scope}[rotate=(+18)] \clip (1.5-.02,.3) rectangle (1.5+.3,-.3); \draw[draw=green, fill=green] (1.5,0) circle (.1); \end{scope} \begin{scope}[rotate=(-18)] \clip (1.5-.02,.3) rectangle (1.5+.3,-.3); \draw[draw=green, fill=green] (1.5,0) circle (.1); \end{scope} \end{scope} \end{tikzpicture} \\ \hline $\phantom{ {\vert \atop \vert} \atop {\vert \atop \vert} }$ {\bf Geometry} & smooth but curved & singular but flat \\ \hline \hline \multicolumn{3}{|c|}{ \begin{tabular}{c} \\ \bf Cohomological charge quantization \\ by \hyperlink{HypothesisH}{\it Hypothesis H} \\ $\phantom{-}$ \end{tabular} } \\ \hline \begin{tabular}{c} \\ {\bf Cohomology theory} \\ (by \hyperlink{Table4}{\it Table 4}) \\ $\phantom{-}$ \end{tabular} & \begin{tabular}{c} $J$-twisted Cohomotopy $\pi^{T X}\big(X\big)$ \\ \cite{FSS19b}\cite{FSS19c} \end{tabular} & \begin{tabular}{c} equivariant Cohomotopy $\pi_{\mathbb{Z}_2}^{V}\big( \mathbb{T}^V \big)$ \\ \cref{EquivariantCohomotopyAndTadpoleCancellation} \end{tabular} \\ \hline \raisebox{53pt}{ \begin{tabular}{c} {\bf Illustration} \\ (Remark \ref{TheRoleOfMK6EndingOnM5}) \end{tabular} } & \raisebox{19pt}{ \begin{tikzpicture}[scale=.9] \draw[dashed] (0,0) circle (1.4); \draw[draw=green, fill=green] (0,0) circle (.15); \draw (0.45,0) node { \tiny \color{darkblue} \bf $\mathrm{M5}$ }; \draw (125:1.6) node { \small \color{darkblue} \bf $S^4$ }; \end{tikzpicture} } & \hspace{-1.7cm} \begin{tikzpicture}[scale=.9] \begin{scope} \ \clip (-3,2) rectangle (0,-2); \draw[dashed] (0,0) circle (1.4); \draw[draw=darkblue, fill=darkblue] (0,-.05) rectangle (-2.5,+.05); \draw[draw=darkblue, fill=darkblue] (-2.5-.05,-.05) rectangle (-2.5-.10,+.05); \draw[draw=darkblue, fill=darkblue] (-2.5-.15,-.05) rectangle (-2.5-.20,+.05); \draw[draw=darkblue, fill=darkblue] (-2.5-.25,-.05) rectangle (-2.5-.30,+.05); \draw[draw=green, fill=green] (0,0) circle (.15); \draw (-2.15,-.25) node { \tiny \color{darkblue} \bf $\mathrm{MK6}$ }; \draw (125:1.6) node { \small \color{darkblue} \bf $S^4$ }; \end{scope} \draw (0.45,0) node { \tiny \color{darkblue} \bf $\tfrac{1}{2}\mathrm{M5}$ }; \draw (0,-1.8) to (0,1.8); \draw (0,-2) node { \tiny \color{darkblue} \bf $\mathrm{MO9}$ }; \end{tikzpicture} \\ \hline $\mathpalette\mathclapinternal{\phantom{ {\vert \atop \vert} \atop {\vert \atop \vert} }}$ \begin{tabular}{c} \bf Charge 
classification \end{tabular} & \begin{tabular}{c} $c_{\mathrm{tot}} = 1 - N \cdot 2$ \\ \eqref{TheOddChargeAroundAPureMO5} \end{tabular} & \begin{tabular}{c|c} $ c_{\mathrm{tot}} = N_{\mathrm{MO5}} \cdot \mathbf{1}_{\mathrm{triv}} - N_{\mathrm{M5}} \cdot \mathbf{2}_{\mathrm{reg}} $ & $ \begin{array}{rcl} & \left\vert Q_{\mathrm{tot}}\right\vert & \!\!\!= 0 \\ \Leftrightarrow & N_{\mathrm{M5}} & \!\!\!= 8 \end{array} $ \\ (Cor. \ref{EquivariantCohomotopyOfSemiComplementSpacetime}) & (Cor. \ref{GlobalM5MO5CancellationImplied}) \end{tabular} \\ \hline \end{tabular} \vspace{.2cm} \noindent {\bf \footnotesize Table 8 -- Two ways of measuring M5/MO5-charge.} {\footnotesize On the left is the traditional approach not resolving the singularities. On the right (which shows the same situation as in \hyperlink{FigureV}{\it Figure V} but with the periodic identification indicated more explicitly) the fine-grained microscopic picture seen by C-field charge quantization in equivariant Cohomotopy.} } \medskip \medskip With these result in hand, we highlight that not only did equivariant Cohomotopy inform us about M-theory, but M-theory also shed light on a subtle point regarding the interpretation of equivariant Cohomotopy: \begin{remark}[\bf Equivariant Cohomotopy and MK6 ending on M5] \label{TheRoleOfMK6EndingOnM5} {\bf (i)} The heuristic way to see that ordinary Cohomotopy $\pi^4$ from \eqref{PlainCohomotopySet} canonically measures charges of 5-branes inside 11-dimensional spacetime is that the `\emph{classifying space}' $S^4$ of $\pi^4$ gets essentially identified with the (any) \emph{spacetime} 4-sphere \emph{around} a 5-brane in an 11-dimensional ambient space (see \cite[(6)]{ADE} for the heuristic picture, and \cite[4.5]{FSS19b} for the full mathematical detail). 
\vspace{-1mm} \item {\bf (ii)} But as we pass from plain to equivariant Cohomotopy, this picture \vspace{-.5cm} $$ \mbox{ \it Brane charge sourced in the center of $S^4$ \hspace{2.2cm} \raisebox{-32pt}{ \begin{tikzpicture}[scale=.9] \draw[dashed] (0,0) circle (1.4); \draw[draw=green, fill=green] (0,0) circle (.15); \draw (0.45,0) node { \tiny \color{darkblue} \bf $\mathrm{M5}$ }; \draw (125:1.6) node { \small \color{darkblue} \bf $S^4$ }; \end{tikzpicture} } } $$ may superficially appear to be in tension with the picture provided by the Pontrjagin-Thom theorem as in \hyperlink{FigureD}{\it Figure D} and \hyperlink{FigureL}{\it Figure L}, where instead \vspace{-.5cm} $$ \mbox{ \it Brane charge is sourced at the 0-pole of $S^{\mathbf{4}}$ \hspace{.7cm} \raisebox{-32pt}{ \begin{tikzpicture}[scale=.9] \begin{scope} \clip (-2.8,-1) rectangle (0,1); \draw[draw=darkblue, fill=darkblue] (0,-.05) rectangle (-2.5,+.05); \draw[draw=darkblue, fill=darkblue] (-2.5-.05,-.05) rectangle (-2.5-.10,+.05); \draw[draw=darkblue, fill=darkblue] (-2.5-.15,-.05) rectangle (-2.5-.20,+.05); \draw[draw=darkblue, fill=darkblue] (-2.5-.25,-.05) rectangle (-2.5-.30,+.05); \draw[draw=green, fill=green] (0,0) circle (.15); \draw (-2.15,-.25) node { \tiny \color{darkblue} \bf $\mathrm{MK6}$ }; \end{scope} \draw (125:1.6) node { \small \color{darkblue} \bf $S^4$ }; \draw (0.45,0) node { \tiny \color{darkblue} \bf $\tfrac{1}{2}\mathrm{M5}$ }; \draw[dashed] (0,0) circle (1.4); \end{tikzpicture} } } $$ However, in the orbi-geometry of heterotic M-theory on ADE-singularities \cref{HeteroticMTheoryOnADEOrbifolds} indeed \emph{both pictures apply simultaneously}, witnessing different but closely related brane species (see \hyperlink{Table8}{\it Table 8}): \vspace{-1mm} \item {\bf (iii)} The black $\tfrac{1}{2}\mathrm{M5}$-brane locus (\hyperlink{FigureS}{\it Figure S}) is the terminal point of an MK6-singularity which extends radially away from the M5. Hence, given any radial 4-sphere with the $\tfrac{1}{2}\mathrm{M5}$ at its center, the MK6 will pierce this 4-sphere at one point. Since the $\tfrac{1}{2}\mathrm{M5}$ and the MK6 are \emph{necessarily} related this way, the 5-brane charge inside the $S^4$ may equivalently be measured by 6-brane charge piercing through $S^4$. This is exactly what the Pontrjagin-Thom theorem says happens in Corollary \ref{EquivariantCohomotopyOfSemiComplementSpacetime}, as shown in \hyperlink{FigureV}{\it Figure V} and on the right of \hyperlink{Table8}{\it Table 8}. \end{remark} \vspace{.5cm} \noindent {\large \bf Acknowledgements.} H. S. acknowledges that this work was performed in part at Aspen Center for Physics, which is supported by National Science Foundation grant PHY-1607611. This work was partially supported by a grant from the Simons Foundation. We thank Matt Kukla for a hint on TikZ typesetting. \medskip
{ "attr-fineweb-edu": 1.915039, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUdUY5qsBDHn4Qzhqd
\section{Supplementary Material} \subsection{Parameters in the last layer} \label{sec:last_layer_parameters} The number of parameters in the classification layer grows linearly with the number of classes and typically dominates the total number of parameters in the model. Figure~\ref{fig:num-classes-vs-num-params} shows the number of parameters in the classification layer as a percentage of the total number of parameters in the MobileNetV3 model. Each curve shows this percentage as a function of the number of target classes for a fixed embedding size. \input{floats/fig-num-classes-vs-num-params} As the number of classes or the size of the image representation increases, so does the communication and local optimization cost of full softmax training in the federated setting. In either situation, our proposed method facilitates training at significantly lower cost. \subsection{Implementation Details} \label{sec:impl_details} For all the datasets we use the default MobileNetV3 architecture~\citep{howard2019searching}, except that instead of the 1280-dimensional embedding we output a 64-dimensional embedding. We replace Batch Normalization~\citep{ioffe2015batch} with Group Normalization~\citep{wu2018group} to improve the stability of federated learning~\citep{hsu2019measuring,hsieh2020noniid}. Input images are resized to 256$\times$256, from which a random crop of size 224$\times$224 is taken. All ImageNet-21k trainings start from scratch, whereas for Landmarks-User-160k and the SOP we start from an ImageNet-1k~\citep{russakovsky2015imagenet} pretrained checkpoint. For client-side optimization we go through the local data once and use a stochastic gradient descent optimizer with a batch size of 32. We use a learning rate of 0.01 for the SOP and Landmarks-User-160k, and a learning rate of 0.001 for ImageNet-21k. To have a fair comparison with the \textsl{FedAwS} method, we perform a hyperparameter search to find the best spreadout weight and report the corresponding performance. For all the experiments, we use scaled cosine similarity~\citep{wang2017normface} with a fixed scale value of 20 for computing the logits; the server-side optimization is done using a Momentum optimizer with a learning rate of 1.0 and momentum of 0.9. All \textsl{Centralized} baselines are trained with stochastic gradient descent. For a given dataset, all the FL methods are trained for a fixed number of rounds. The corresponding centralized experiment is trained for an equivalent number of model updates. \subsection{ImageNet-21k experiments} \label{sec:imagenet-21k-exps} Along with the Landmarks-User-160K~\citep{hsu2020federated} and SOP~\citep{song2016deep} datasets, we also experiment with the ImageNet-21k~\citep{deng2009imagenet} dataset. It is a superset of the widely used ImageNet-1k~\citep{russakovsky2015imagenet} dataset and contains 14.2 million images distributed across 21k classes organized by the WordNet hierarchy. For every class we do a random 80-20 split on its samples to generate the train and test splits, respectively. The train split is used to generate 25,691 clients, each containing approximately 400 images distributed across 20 class labels. ImageNet-21k requires a large number of FL rounds given its abundant training images; hence we set a training budget of 25,000 FL rounds to make our experiments manageable.
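At this scale the parameter-count argument of \ref{sec:last_layer_parameters} is easy to make concrete. The following minimal sketch is our own illustration (not the training code); \texttt{backbone\_params} is an assumed round figure rather than the exact MobileNetV3 count. It computes the share of model parameters held by the classification layer:
\begin{verbatim}
def classifier_param_share(num_classes, embedding_dim=64,
                           backbone_params=4_000_000):
    # The classification layer stores one embedding_dim-dimensional
    # class vector per class, so its size grows linearly in num_classes.
    classifier_params = embedding_dim * num_classes
    return classifier_params / (backbone_params + classifier_params)

# ~1.3M classifier parameters for a 21k-class label space with d = 64.
share_21k = classifier_param_share(21_000)
\end{verbatim}
Under \textsl{FedSS} only the $|\mathcal{S}_k|$ class vectors requested by a client are transmitted and optimized, so it is precisely this linearly growing share that never needs to be handled in full on the clients.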
Although the performance we report on ImageNet-21k is not comparable with the (converged) state of the art, we emphasize that the setup is sufficient to evaluate our \textsl{FedSS} method and demonstrate its effectiveness. \input{floats/table-imagenet21k} \input{floats/fig-accu-method-num-params-imagenet21k} Table~\ref{table:performance-imagenet21k} summarizes top-1 accuracy on the ImageNet-21k test split. We experiment with five different choices of $|\mathcal{S}_k|$. The \textsl{FullSoftmax} method reaches a (best) top-1 accuracy of $11.30\%$ by the end of 25,000 FL rounds, while our method achieves a top-1 accuracy of $10.02 \pm 0.5\%$ using less than 2\% of the classes on the clients. Figure~\ref{fig:accu-method-num-params-imagenet21k} summarizes the performance of the different methods with respect to the number of parameters in the classification layer transmitted to and optimized by the clients. Our client-driven negative sampling with positive inclusion method (\textsl{FedSS}) requires only a very small fraction of the parameters in the classification layer while performing comparably to full softmax training (\textsl{FullSoftmax}). \subsection{Overfitting in the SOP FullSoftmax experiments} The class labels in the train and test splits of the SOP dataset do not overlap. In addition, the dataset has, on average, only 5 images per class label. This makes the SOP dataset susceptible to overfitting (Table~\ref{table:table-sop-overfitting}). In this case, using \textsl{FedSS} mitigates the overfitting, as only a subset of the class representations is updated every FL round. \input{floats/table-sop-overfitting} \subsection{Derivation from Eq.~\ref{eq:local_sampled_softmax_loss} to Eq.~\ref{eq:rewritten_local_sampled_softmax_loss}} \label{sec:appendix-eq-rewrite} \begin{proof} Starting from Eq.~\ref{eq:local_sampled_softmax_loss}, we have \begin{align*} L^{(k)}_{\textrm{FedSS}}(\bm{x}, \bm{y}) &= - o_t' + \log \sum_{j\in \mathcal{S}_k} \exp(o_j') \\ & = \log \left( \exp(-o_t') \cdot \sum_{j\in \mathcal{S}_k} \exp(o_j') \right) \\ & = \log \sum_{j\in \mathcal{S}_k} \exp(o_j'-o_t') \\ & = \log \left(\exp(o_t'-o_t') + \sum_{j\in \mathcal{S}_k/\{t\}} \exp(o_j'-o_t') \right) \\ & = \log \left(1 + \sum_{j\in \mathcal{S}_k/\{t\}} \exp(o_j'-o_t') \right). \end{align*} This gives Eq.~\ref{eq:rewritten_local_sampled_softmax_loss}. \end{proof} \section{Introduction} The success of many computer vision applications, such as classification~\citep{kolesnikov2019big,yao2019deep,huang2016learning}, detection~\citep{lin2014microsoft,zhao2019object,Ouyang_2016_CVPR}, and retrieval~\citep{sohn2016improved, song2016deep, musgrave2020metric}, relies heavily on the quality of the learned image representation. Many methods have been proposed to learn better image representations from centrally stored datasets. For example, the contrastive~\citep{chopra2005learning} and the triplet losses~\citep{weinberger2009distance, qian2019softtriple} enforce local constraints among individual instances while taking a long time to train on $O(N^2)$ pairs and $O(N^3)$ triplets, respectively, for $N$ labeled training examples in a minibatch. A more efficient loss function for training image representations is the softmax cross-entropy loss, which involves only $O(N)$ inputs. Today's top-performing computer vision models~\citep{kolesnikov2019big,mahajan2018exploring,sun2017revisiting} are trained on centrally stored large-scale datasets using the classification loss.
In particular, using an extremely large number of classes has proven to be beneficial for learning universal feature representations~\citep{sun2017revisiting}. However, a few challenges arise when learning such image representations with the classification loss under the \emph{cross-device} federated learning scenario~\citep{kairouz2019advances} where the clients are edge devices with limited computational resources, such as smartphones. First, a typical client holds data from only a small subset of the classes due to the nature of non-IID data distribution among clients~\citep{hsieh2020noniid,hsu2019measuring}. Second, as the size of the label space increase, the communication cost and computation operations required to train the model will grow proportionally. Particularly for ConvNets the total number of parameters in the model will be dominated by those in its classification layer~\citep{krizhevsky2014one}. Given these constraints, for an FL algorithm to be practical it needs to be resilient to the growth of the problem scale. \input{floats/fig-intro} In this paper, we propose a method called \textit{federated sampled softmax} (\textsl{FedSS}) for using the classification loss efficiently in the federated setting. Inspired by sampled softmax~\citep{bengio2008adaptive}, which uses only a subset of the classes for training, we devise a client-driven negative class sampling mechanism and formulate a sampled softmax loss for federated learning. Figure~\ref{fig:intro} illustrates the core idea. The FL clients sample negative classes and request a sub network from the FL server by sending a set of class labels that anonymizes the clients' positive class labels in its local dataset. The clients then optimize a sampled softmax loss that involves both the clients' sampled negative classes as well as its local positive classes to approximate the global full softmax objective. To the best of our knowledge, this is the first work addressing the intersection of representation learning with Federated Learning and resource efficient sampled softmax training. Our contributions are: \begin{enumerate} \item We propose a novel federated sampled softmax algorithm, which extends the image representation learning via large-scale classification loss to the federated learning scenario. \item Our method performs on-par with full softmax training, while requiring only a fraction of its cost. We evaluate our method empirically and show that less than $10\%$ of the parameters from the classification layer can be sufficient to get comparable performance. \item Our method is resilient to the growth of the label space and makes it feasible for applying Federated Learning to train image representation and classification models with large label spaces. \end{enumerate} \subsection{Federated Sampled Softmax (FedSS)} Now we discuss our proposed federated sampled softmax (\textsl{FedSS}) algorithm listed in Algorithm~\ref{alg:fed}, which adopts sampled softmax in the federated setting by incorporating negative sampling under FedAvg~\citep{mcmahan2017communication} framework, the standard algorithm framework in federated learning. One of the main characteristics of FedAvg is that all the clients receive and optimize the exact same model. To allow efficient communication and local computing, our federated sampled softmax algorithm transmits a much smaller sub network to the FL clients for local optimization. 
Specifically, we view ConvNet classifiers parameterized by $\theta = (\varphi, W)$ as two parts: a feature extractor $f(\bm{x}; \varphi): \mathbb{R}^{h\times w\times c}\rightarrow \mathbb{R} ^d$ parameterized by $\varphi$ that computes a $d$-dimensional feature given an input image, and a linear classifier parameterized by a matrix $W \in \mathbb{R}^{d \times n}$ that outputs logits for class prediction~\footnote{We omit the bias term in discussion without loss of generality.}. The FL clients, indexed by $k$, train sub networks parameterized by $(\varphi, W_{\mathcal{S}_k})$ where $W_{\mathcal{S}_k}$ contains a subset of columns in $W$, rather than training the full model. With this design, federated sampled softmax is more communication-efficient than FedAvg since the full model is never transmitted to the clients, and more computation-efficient because the clients never compute gradients of the full model. In every FL round, every participating client first samples a set of negative classes $\mathcal{N}_k \subset [n]/\mathcal{P}_k$ that does not overlap with the class labels $\mathcal{P}_k = \{t: (\bm{x}, \bm{y}) \in \mathcal{D}_k, y_t = 1, t \in [n]\}$ in its local dataset $\mathcal{D}_k$. The client then communicates the union of these two disjoint sets $\mathcal{S}_k = \mathcal{P}_k \cup \mathcal{N}_k$ to the FL server for requesting a model for local optimization. The server subsequently sends back the sub network $(\varphi, W_{\mathcal{S}_k})$ with all the parameters of the feature extractor together with a classification matrix that consists of class vectors corresponding to the labels in $\mathcal{S}_k$. \input{floats/alg-federated-sampled-softmax} Then every client trains its sub network by minimizing the following sampled softmax loss with its local dataset \begin{align} \label{eq:local_sampled_softmax_loss} L^{(k)}_{\textrm{FedSS}}(\bm{x}, \bm{y}) &= - o_t' + \log \sum_{j\in \mathcal{S}_k} \exp(o_j'), \end{align} after which the same procedure as FedAvg is used for aggregating model updates from all the participating clients. In our federated sampled softmax algorithm, the set of positive classes $\mathcal{P}_k$ is naturally constituted by all the class labels from the client's local dataset, whereas the negative classes $\mathcal{N}_k$ are sampled by each client individually. Next we discuss negative sampling and the use of positive classes in the following two subsections respectively. \subsection{Client-driven uniform sampling of negative classes} \label{sec:generating-negatives} For centralized learning, proposal distributions and sampling algorithms are designed for efficient sampling of negatives or high quality estimations of the full softmax gradients. For example, \citet{jean2015using} partition the training corpus and define non-overlapping subsets of class labels as sampling pools. The algorithm is efficient once implemented, but the proposal distribution imposes sampling bias which is not mitigable even as $m\rightarrow \infty$. Alternatively, efficient kernel-based algorithms~\citep{blanc2018adaptive,rawat2019sampled} yield unbiased estimators of the full softmax gradients by sampling from the softmax distribution. These algorithms depend on both the current model parameters $(\varphi, W)$ and the current raw input $\bm{x}$ for computing feature vectors and logit scores. 
However, this is not feasible in the FL scenario, on the one hand due to the lack of resources on FL clients for receiving the full model, and on the other hand due to the constraint of keeping raw inputs only on the devices. In the \textsl{FedSS} algorithm, we assume the label space is known and take a client-driven approach, where every participating FL client uniformly samples negative classes $\mathcal{N}_k$ from $[n]/\mathcal{P}_k$. Using a uniform distribution over the entire label space is a simple yet effective choice that does not incur sampling bias. The bias on the gradient estimation can be mitigated by increasing $m$ (see~\ref{sec:gradient_noise_analysis} for an empirical analysis). Moreover, $\mathcal{N}_k$ can be viewed as noisy samples from the maximum entropy distribution over $[n]/\mathcal{P}_k$ that mask the client's positive class labels. The server is therefore not able to identify which labels in $\mathcal{S}_k$ belong to the client's dataset. In practice, private information retrieval techniques~\citep{chor1995private} can further be used such that no identity information about the set is revealed to the server. The sampling procedure can be performed on every client locally and independently, without requiring peer information or the latest model from the server. \subsection{Inclusion of positives in local optimization} \label{sec:effect_of_plo} When computing the federated sampled softmax loss, including the set of positive class labels $\mathcal{P}_k$ in Eq.~\ref{eq:local_sampled_softmax_loss} is crucial. To see this, Eq.~\ref{eq:local_sampled_softmax_loss} can be equivalently written as follows (shown in~\ref{sec:appendix-eq-rewrite}) \begin{equation} \label{eq:rewritten_local_sampled_softmax_loss} \mathcal{L}_{\text{FedSS}}^{(k)}(\bm{x}, \bm{y}) = \log \left[1 + \sum_{j\in \mathcal{S}_k/\{t\}}\exp(o_j' - o_t') \right]. \end{equation} Minimizing this loss function pulls the input image representation $f(\bm{x}; \varphi)$ and the target class representation $\bm{w}_t$ closer, while pushing the representations of the negative classes $W_{\mathcal{S}_k/\{t\}}$ away from $f(\bm{x}; \varphi)$. Utilizing $\mathcal{P}_k/\{t\}$ as an additional set of negatives to compute this loss encourages the separation of classes in $\mathcal{P}_k$ with respect to each other as well as with respect to the classes in $\mathcal{N}_k$ (Figure~\ref{fig:negatives}d). \input{floats/fig-negatives} Alternatively, not using $\mathcal{P}_k/\{t\}$ as additional negatives leads to a negatives-only loss function \begin{equation} \label{eq:local_sampled_softmax_negonly_loss} \mathcal{L}_{\text{NegOnly}}^{(k)}(\bm{x}, \bm{y}) = \log \left[1 + \sum_{j\in \mathcal{N}_k}\exp(o_j' - o_t') \right], \end{equation} where $t \in \mathcal{P}_k$ only contributes to computing the true logit for individual inputs, while the same $\mathcal{N}_k$ is shared across all inputs (Figure~\ref{fig:negatives}b). Minimizing this negatives-only loss admits trivial solutions for a client's local optimization, because it only encourages separation of the target class representations $W_{\mathcal{P}_k}$ from the negative class representations $W_{\mathcal{N}_k}$, which can easily be achieved by increasing the magnitudes of the former and reducing those of the latter. In addition, the learned representations can collapse, as the local optimization is reduced to a binary classification problem between the on-client classes $\mathcal{P}_k$ and the off-client classes $\mathcal{N}_k$.
In contrast, using only the local positives $\mathcal{P}_k$ without the sampled negative classes $\mathcal{N}_k$ gives \begin{equation} \label{eq:local_sampled_posonly_loss} \mathcal{L}_{\text{PosOnly}}^{(k)}(\bm{x}, \bm{y}) = \log \left[1 + \sum_{j\in \mathcal{P}_k/\{t\}}\exp(o_j' - o_t') \right]. \end{equation} Minimizing this loss function solves the client's local classification problem, which diverges from the global objective (Figure~\ref{fig:negatives}c), especially when $\mathcal{P}_k$ remains fixed over FL rounds and $|\mathcal{P}_k| \ll n$. \section{Experiments} \label{sec:experiments} \subsection{Setup} \paragraph{Notation and baseline methods.} We denote our proposed algorithm as \textsl{FedSS}, where both the sampled negatives and the local positives are used in computing the client's sampled softmax loss. We compare our method with the following alternatives: \begin{itemize} \item \textsl{NegOnly}: The client's objective is defined by the sampled negative classes only (Eq.~\ref{eq:local_sampled_softmax_negonly_loss}). \item \textsl{PosOnly}: The client's objective is defined by the local positive classes only; no negative classes are sampled (Eq.~\ref{eq:local_sampled_posonly_loss}). \item \textsl{FedAwS}~\citep{yu2020federated}: the client optimization is the same as in \textsl{PosOnly}, but a spreadout regularization is applied on the server. \end{itemize} In addition, we also provide two reference baselines: \begin{itemize} \item \textsl{FullSoftmax}: The client's objective is the full softmax cross-entropy loss (Eq.~\ref{eq:full-softmax-loss}), serving as a performance reference when it is affordable for the clients to compute the full model. \item \textsl{Centralized}: A model is trained with the full softmax cross-entropy loss (Eq.~\ref{eq:full-softmax-loss}) in a centralized fashion using IID data batches. \end{itemize} \paragraph{Evaluation protocol.} We conduct experiments on two computer vision tasks: multi-class image classification and image retrieval. Performance is evaluated on the test splits of the datasets, which have no sample overlap with the corresponding training splits. We report the mean and standard deviation of the performance metrics from three independent runs. For the \textsl{FullSoftmax} and \textsl{Centralized} baselines, we report the best result from three independent runs. Please see~\ref{sec:impl_details} for implementation details. \subsection{Multi-class Image Classification} For multi-class classification we use the Landmarks-User-160k dataset~\citep{hsu2020federated} and report top-1 accuracy on its test split. Landmarks-User-160k is a landmark recognition dataset created for FL simulations. It consists of 1,262 natural clients based on image authorship. On average, every client contains 130 images distributed across 90 class labels. For our experiments, $K=64$ clients are randomly selected to participate in each FL round. We train for a total of 5,000 rounds, which is sufficient for reaching convergence. \input{floats/table-landmarks} \input{floats/fig-learning} Table~\ref{table:performance-landmarks} summarizes the top-1 accuracy on the test split. For \textsl{FedSS} and \textsl{NegOnly} we report accuracy across different $|\mathcal{S}_k|$. Overall, we observe that our method performs similarly to the FullSoftmax baseline while requiring only a fraction of the classes on the clients. Our \textsl{FedSS} formulation also outperforms the alternative \textsl{NegOnly}, \textsl{PosOnly} and \textsl{FedAwS} formulations by a large margin.
Approximating the full softmax loss with \textsl{FedSS} does not degrade the rate of convergence either as seen in Figure~\ref{fig:learning}a. Additionally, Figure~\ref{fig:all-posneg-learning}a shows learning curves for \textsl{FedSS} with different $|\mathcal{S}_k|$. Learning with a sufficiently large $|\mathcal{S}_k|$ follows closely the performance of the \textsl{FullSoftmax} baseline. We also report performance on ImageNet-21k~\citep{deng2009imagenet} in~\ref{sec:imagenet-21k-exps}. \subsection{Image Retrieval} \input{floats/table-sop} The Stanford Online Products dataset~\citep{song2016deep} has 120,053 images of 22,634 online products as the classes. The train split includes 59,551 images from 11,318 classes, while the test split includes 11,316 different classes with 60,502 images in total. For FL experiments, we partition the train split into 596 clients, each containing 100 images distributed across 20 class labels. For each FL round, $K=32$ clients are randomly selected. Similar to metric learning literature, we use nearest neighbor retrieval to evaluate the models. Every image in the test split is used as a query image against the remaining ones. We use normalized euclidean distance to compare two image representations. We report MAP@$R$ ($R=10$) as the evaluation metric~\citep{musgrave2020metric}, which is defined as follows: \begin{equation} \textrm{MAP}@R = \dfrac{1}{R} \sum_{i=1}^{R} P(i), \quad \textrm{where } P(i) = \begin{cases} \text{precision at $i$}, & \text{if $i^{\text{th}}$ retrieval is correct} \\ 0, & \text{otherwise.} \end{cases} \end{equation} \input{floats/fig-all-posneg-learning} \input{floats/fig-accu-method-num-params} Table~\ref{table:performance-sop} summarizes MAP@$10$ on the SOP test split at the end of 2k FL rounds. Our \textsl{FedSS} formulation consistently outperforms the alternative methods while requiring less than $1\%$ of the classes on the clients. This reduces the overall communication cost by 16\% when $|\mathcal{S}_k| = 100$ for every client per round. For reasonably small value of $|\mathcal{S}_k|$ our method has a similar rate of convergence to the \textsl{FullSoftmax} baseline, as seen in Figure~\ref{fig:learning}b and Figure~\ref{fig:all-posneg-learning}b. Using the MobilenetV3 ~\citep{howard2019searching} architecture with embedding size 64, the classification layer contributes to 16\% of the total number of parameters in the SOP experiment and 3.4\% in the Landmarks-User-160k experiment. In the former, our \textsl{FedSS} method requires only 84\% of the model parameters on every client per round when $|\mathcal{S}_k| = 100$. In the latter, it reduces the model parameters transmitted by 3.38\% per client per round when $|\mathcal{S}_k| = 170$ (summarized in Figure~\ref{fig:accu-method-num-params-landmarks-sop}). These savings will increase as the embedding size or the total number of classes increases (Figure~\ref{fig:num-classes-vs-num-params} in~\ref{sec:last_layer_parameters}). For example with embedding size of 1280, which is default embedding size of MobileNetV3, above setup will result in 79\% and 38\% reduction in the communication cost per client per round for the SOP and Landmarks-User-160k datasets, respectively. \subsection{On importance of $\mathcal{P}_k$ in local optimization} One may note that the \textsl{NegOnly} loss (Eq.~\ref{eq:local_sampled_softmax_negonly_loss}) involves fewer terms inside the logarithm than \textsl{FedSS} (Eq.~\ref{eq:rewritten_local_sampled_softmax_loss}). 
To show that \textsl{NegOnly} is not unfairly penalized, we compare \textsl{FedSS} with \textsl{NegOnly} such that the number of classes providing pushing forces for every input is the same. This is done by sampling an additional $|\mathcal{P}_k| - 1$ negative classes for the \textsl{NegOnly} method. As seen in Figure~\ref{fig:effect-of-plo}, using the on-client classes ($\mathcal{P}_k$) as additional negatives instead of additional off-client negatives is crucial to the learning. \input{floats/fig-effect-of-plo} \input{floats/fig-posneg-negonly-subclassifier-confusion-matrix} This boost can be attributed to a better approximation of the global objective by the clients. Figure~\ref{fig:posneg-negonly-subclassifier-confusion-matrix} plots a client's confusion matrix corresponding to the \textsl{FedSS} and \textsl{NegOnly} methods. The \textsl{NegOnly} loss leads to a trivial solution for the client's local optimization problem such that the client's positive class representations collapse onto one representation, as reasoned in Section~\ref{sec:effect_of_plo}. \subsection{FedSS Gradient noise analysis} \label{sec:gradient_noise_analysis} \citet{bengio2008adaptive} provide a theoretical analysis of the convergence of the sampled softmax loss. Doing so for the proposed federated sampled softmax within the FedAvg framework is beyond the scope of this work. Instead, we provide an empirical gradient noise analysis for the proposed method. To do so, we compute the expected difference between FedAvg (with \textsl{FullSoftmax}) and \textsl{FedSS} gradients, \textit{i.e.\ } $\mathbb{E}(|\bar{\bm{g}}_{FedAvg} - \bar{\bm{g}}_{FedSS}|)$, where $\bar{\bm{g}}_{FedAvg}$ and $\bar{\bm{g}}_{FedSS}$ are the client model changes aggregated by the server for the FedAvg (with \textsl{FullSoftmax}) and \textsl{FedSS} methods, respectively. Given that \textsl{FedSS} is an estimate of FedAvg (with \textsl{FullSoftmax}), this difference essentially represents the noise in the \textsl{FedSS} gradients. \input{floats/fig-fedavg-gradient-noise} To compute a single instance of gradient noise, we assume that the clients participating in the FL round have the same $\mathcal{D}$ with $|\mathcal{D}|=32$. Note that the clients will have different $\mathcal{N}_k$. For a given $|\mathcal{N}_k|$ we compute the expectation of the gradient noise across multiple batches ($\mathcal{D}$) of the SOP dataset. Figure~\ref{fig:fedavg-gradient-noise} shows the \textsl{FedSS} gradient noise as a function of $|\mathcal{N}_k|$. For very small values of $|\mathcal{N}_k|$ the gradients can be noisy, but as $|\mathcal{N}_k|$ increases the gradient noise drops exponentially. \section{Conclusion} \label{sec:conclusion} Federated Learning is becoming a prominent field of research. Major contributing factors to this trend are the rise in privacy awareness among users, the surge in the amount of data generated by edge devices, and the noteworthy increase in computing capabilities of edge devices. In this work we presented a novel federated sampled softmax method that facilitates efficient training of large models on edge devices with Federated Learning. The clients solve small subproblems approximating the global problem by sampling negative classes and optimizing a sampled softmax objective. Our method significantly reduces the number of parameters transferred to and optimized by the clients, while performing on par with the standard full softmax method.
We hope that this encouraging result can inform future research on efficient local optimization beyond the classification layer. \section{Related Work} \label{sec:related-work} \textbf{Large scale classification.} The scale of a classification problem could be defined by the total number of classes involved, number of training samples available or both. Large vocabulary text classification is well studied in the natural language processing domain~\citep{bengio2008adaptive, liu2017deep, jean2015using, zhang2018deep}. On the contrary, image classification is well studied with small to medium number of classes~\citep{lecun1998gradient, cifar100, russakovsky2015imagenet} while only a handful of works~\citep{kolesnikov2019big, hinton2015distilling, mahajan2018exploring, sun2017revisiting} address training with large number of classes. Training image classification with a significant number of classes requires a large amount of computational resources. For example, \citet{sun2017revisiting} splits the last fully connected layer into sub layers, distributes them on multiple parameter servers and uses asynchronous SGD for distributed training on 50 GPUs. In this work, we focus on a cross-device FL scenario and adopt sampled softmax to make the problem affordable for the edge devices. \textbf{Representation learning.} Majority of works in learning image representation are based on classification loss~\citep{kolesnikov2019big,hinton2015distilling, mahajan2018exploring} and metric learning objectives~\citep{oh2016deep, qian2019softtriple}. Using full softmax loss with a large number of classes in the FL setting can be very expensive and sometimes infeasible for two main reasons: (i) exorbitant cost of communication and storage on the clients can be imposed by the classification layer's weight matrix; (ii) edge devices like smartphones typically do not have computational resources required to train on such scale. On the other hand, for metric learning methods~\citep{oh2016deep, qian2019softtriple} to be effective, extensive hard sample mining from quadratic/cubic combinations of the samples~\citep{sheng2020mining, schroff2015facenet, qian2019softtriple} is typically needed. This requires considerable computational resources as well. Our federated sampled softmax method addresses these issues by efficiently approximating the full softmax objective. \textbf{Federated learning for large scale classification.} The closest related work to ours is \citet{yu2020federated}, which considers the classification problem with large number of classes in the FL setting. They make two assumptions: (a) every client holds data for a single fixed class label (\textit{e.g.\ } user identity); (b) along with the feature extractor only the class representation corresponding to the client's class label is transmitted to and optimized by the clients. We relax these assumptions in our work since we focus on learning generic image representation rather than individually sensitive users' embedding. We assume that the clients hold data from multiple classes and the full label space is known to all the clients as well as the FL server. In addition, instead of training individual class representations we formulate a sampled softmax objective to approximate the global full softmax cross-entropy objective. \section{Method} \label{sec:method} \subsection{Background and Motivation} \textbf{Softmax cross-entropy and the parameter dominance. 
} Consider a multi-class classification problem with $n$ classes, where for a given input $\bm{x}$ only one class is correct, i.e., $\bm{y} \in \{0,1\}^n$ with $\sum_{i=1}^{n} y_i = 1$. We learn a classifier that computes a $d$-dimensional feature representation $f(\bm{x}) \in \mathbb{R}^d$ and a logit score $o_i = \bm{w}_i^T f(\bm{x}) + b \in \mathbb{R}$ for every class $i \in [n]$. A softmax distribution is formed by the class probabilities computed from the logit scores using the softmax function \begin{equation} p_i = \dfrac{ \exp(o_i) }{\sum_{j=1}^{n} \exp(o_j)}, \quad i \in [n]. \label{eq:softmax-probability} \end{equation} Let $t \in [n]$ be the target class label for the input $\bm{x}$ such that $y_t = 1$. The softmax cross-entropy loss for the training example $(\bm{x}, \bm{y})$ is then defined as \begin{align} \mathcal{L}(\bm{x}, \bm{y}) &= -\sum_{i=1}^{n} y_i \log p_i = - o_t + \log \sum_{j=1}^{n} \exp(o_j). \label{eq:full-softmax-loss} \end{align} The second term involves computing the logit score for all $n$ classes. As the number of classes $n$ increases, so does the number of columns in the weight matrix $W \equiv [\bm{w}_1, \bm{w}_2, \dots, \bm{w}_n ] \in \mathbb{R}^{d \times n}$ of the classification layer. The complexity of computing this full softmax loss also grows linearly with $n$. Moreover, for a typical ConvNet classifier for $n$ classes, the classification layer \textit{dominates} the total number of parameters in the model as $n$ increases, because the convolutional layers typically have small filters and their total number of parameters does not grow with $n$ (see Figure~\ref{fig:num-classes-vs-num-params} in~\ref{sec:last_layer_parameters} for concrete examples). This motivates us to use an alternative loss function to overcome the growing compute and communication complexity in the cross-device federated learning scenario. \textbf{Sampled softmax. } Sampled softmax~\citep{bengio2008adaptive} was originally proposed for training probabilistic language models on datasets with large vocabularies. It reduces the computation and memory requirement by approximating the class probabilities using a subset $\mathcal{N}$ of negative classes whose size is $m \equiv |\mathcal{N}| \ll n$. These negative classes are sampled from a proposal distribution $Q$, with $q_i$ being the sampling probability of class $i$. Using the adjusted logits $o_j' = o_{j} - \log(m q_{j}), \forall j \in \mathcal{N}$, the target class probability can be approximated with \begin{equation} \label{sampled_softmax_probability} p'_t = \dfrac{ \exp(o_t') }{\exp(o_t') + \sum_{j\in \mathcal{N}} \exp(o_j')}. \end{equation} This leads to the sampled softmax cross-entropy loss \begin{equation} \mathcal{L}_{\textrm{sampled}}(\bm{x}, \bm{y}) = - o_t' + \log \sum_{j\in \mathcal{N} \cup \{t\} } \exp(o_j'). \label{eq:sampled_softmax_loss} \end{equation} Note that the sampled softmax gradient is a biased estimator of the full softmax gradient. The bias decreases as $m$ increases. The estimator is unbiased only when the negatives are sampled from the full softmax distribution~\citep{blanc2018adaptive} or $m \to \infty$~\citep{bengio2008adaptive}.
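For concreteness, the client-side computation described above can be sketched in a few lines of NumPy. The snippet below is an illustrative sketch rather than the implementation used in our experiments; the function names, array shapes, and the ordering of $\mathcal{S}_k$ (the $|\mathcal{P}_k|$ positive classes first, followed by the $m$ sampled negatives) are assumptions made only for this example. It implements uniform client-driven sampling of $\mathcal{N}_k$ and the loss of Eq.~\ref{eq:local_sampled_softmax_loss} with the logit correction $o_j' = o_j - \log(m q_j)$ applied to the sampled negatives.
\begin{verbatim}
import numpy as np

def sample_negatives(n_classes, positives, m, rng):
    # Client-driven uniform sampling of m negatives from [n] excluding P_k.
    candidates = np.setdiff1d(np.arange(n_classes), positives)
    return rng.choice(candidates, size=m, replace=False)

def fedss_loss(features, W_sub, n_pos, m, n_classes, targets):
    # features: (B, d) batch of f(x; phi); W_sub: (d, |S_k|) columns of W for S_k,
    # ordered with the |P_k| = n_pos positive classes first, then the m negatives.
    # targets: (B,) index of each example's true class within S_k.
    logits = features @ W_sub                       # o_j for j in S_k
    adjusted = logits.copy()
    # uniform proposal over off-client classes: q_j = 1 / (n - |P_k|)
    adjusted[:, n_pos:] -= np.log(m / (n_classes - n_pos))
    # numerically stable  -o'_t + log sum_j exp(o'_j)
    shift = adjusted.max(axis=1, keepdims=True)
    log_norm = np.log(np.exp(adjusted - shift).sum(axis=1)) + shift[:, 0]
    return (log_norm - adjusted[np.arange(len(targets)), targets]).mean()
\end{verbatim}
Averaging this loss over the local minibatches and applying standard SGD to $(\varphi, W_{\mathcal{S}_k})$ plays the role of the client update in Algorithm~\ref{alg:fed}.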
\section{Summary} X-ray CT image reconstruction algorithms often are designed to trade-off a data fit term and an image roughness term. Typical examples of data fit terms are squared error\cite{thibault2007three} and log-likelihood. Typical examples of image roughness terms are total variation, Gauss-Markov random field priors\cite{bouman1993generalized}, and Huber class roughness penalties. A complementary approach for penalizing roughness is to represent the image using a wavelet (or other multiresolution) expansion, directly estimate the wavelet coefficients, and introduce a penalty (often $L_1$) on the wavelet coefficients\cite{ramani2012splitting,xu2014sparsity}. Multigrid approach\cite{MultigridSPIE04,pan1991numerical,oh2005general,oh2006multigrid} has been shown to be successful in some image reconstruction methods in terms of achieving faster convergence speed. The idea is to move through different grid levels over time. At any grid level, the voxel size is the same throughout the image domain. The approach we describe in this paper is closest to this with the following differences: \begin{itemize} \item Our data fit term is a Poisson log-likelihood with mean determined by Beer's law \item We use an alternating minimization framework to derive an algorithm that is guaranteed to decrease the cost function at every iteration \item We extend the prior alternating minimization framework to point spread functions with negative values \item We update only a subset of wavelet coefficients, constrained to be on a tree, thereby decreasing the computational complexity per iteration relative to fully updating the image \item We adaptively update the tree defining which wavelet coefficients are updated at each iteration \item Our wavelet tree structure results in image domain representation of voxels with different sizes \item We incorporate an adaptive threshold allowing the computation of a sequence of images of increasing resolution, with increasing roughness and increasing log-likelihood \end{itemize} The result is a fast, adaptive, iterative image reconstruction algorithm. \section{Introduction} The problem to be minimized for x-ray CT in this paper is penalized likelihood estimation with a Poisson log-likelihood data fitting term and a regularization term, optimized over image $\boldsymbol{\mu} \in \mathbb{R}^{N}_{+}$. It is shown in \cite{OSullivanBenac07} that maximization of the Poisson log-likelihood term is equivalent to minimizing the I-divergence\footnote{I-divergence between two vectors $ \boldsymbol{p}, \boldsymbol{q}\in\mathbb{R}^{N}_{+}$ is defined as $I(\boldsymbol{p}||\boldsymbol{q})=\sum_{i} p_{i}log(\frac{p_{i}}{q_{i}}) - p_{i} + q_{i}$.} between the transmission data $\boldsymbol{d} $ and the estimated mean $\boldsymbol{q}(\boldsymbol{\mu}) \in \mathbb{R}_{+}^{M}$, where $\boldsymbol{q}(\boldsymbol{\mu})(y) = \boldsymbol{I_{0}}(y) \exp(-\sum_{x}h(y|x)\mu(x))$, $\boldsymbol{I_{0}}(.)$ is the incident photon count vector, $h(y|x)$ is an element of the system matrix $\boldsymbol{H} \in \mathbb{R}^{M\times N}_{+} $ that represents the length of the intersection between the ray path of index $y \in \mathcal{Y}^{M}$ and voxel of index $x \in \mathcal{X}^{N}$. 
Then this penalized likelihood estimation problem can be formulated as \cite{SPIE2015} \begin{equation} \label{eq:1.1} \boldsymbol{\mu}^{*}_{PML} = \argmin_{\boldsymbol{\mu} \ge 0} I(\boldsymbol{d}||\boldsymbol{q}(\boldsymbol{\mu}) ) + \lambda R(\boldsymbol{\mu}), \end{equation} where $R(\boldsymbol{\mu})$ is a regularization term selected as a roughness penalty and $\lambda \ge 0$ is the parameter that controls the level of roughness imposed on the image. Also, it is important to note that the non-negativity constraint on $\boldsymbol{\mu}$ is due to the physical nature of the linear attenuation coefficients of materials. Since there is no closed-form solution to this problem, we solve it iteratively. At each iteration, a surrogate function that approximates the original objective function is minimized, which in turn decreases the original objective function. In our recent work \cite{SPIE2015}, we generalized the formulation of surrogate functions in \cite{OSullivanBenac07} for the data fitting term to the regularization term. The idea is to use Jensen's inequality to decouple the objective function and form many one-parameter convex functions, minimize them, and iterate. Assume that there exists an inverse discrete wavelet transform matrix $\boldsymbol{\Omega} \in \mathbb{R}^{N \times N}$ that is non-singular. Then, the image $\boldsymbol{\mu}$ can be represented as \begin{equation} \label{eq:1.2} \boldsymbol{\mu}=\boldsymbol{\Omega}\boldsymbol{\beta}, \end{equation} where $\boldsymbol{\beta}$ is the vector of wavelet coefficients. The problem in this paper can then be written as \begin{eqnarray} \label{eq:1.3} \boldsymbol{\beta}^{*}_{PML} = \argmin_{\boldsymbol{\beta}} I(\boldsymbol{d}||\boldsymbol{q}(\boldsymbol{\Omega}\boldsymbol{\beta}) ) + \lambda R(\boldsymbol{\Omega}\boldsymbol{\beta}) \\ \text{ subject to } \boldsymbol{\Omega}\boldsymbol{\beta} \ge 0 \nonumber \end{eqnarray} Below, the derivation of the surrogate functions for the data fitting term is shown. A similar approach yields surrogate functions for the regularization term as well. The I-divergence term can be written as \begin{align} \label{eq:1.4} I(\boldsymbol{d}||\boldsymbol{q}(\boldsymbol{\Omega}\boldsymbol{\beta})) &= \sum_{y} d(y) \sum_{x} h(y|x) \sum_{z} \omega(x|z) \beta(z) \nonumber \\ &+ \sum_{y} I_{0}(y) \exp \big(-\sum_{x} h(y|x) \sum_{z} \omega(x|z) \beta(z) \big) \nonumber \\ &+ constant(y). \end{align} For simplicity, define the matrix $\boldsymbol{\Phi} = \boldsymbol{H}\boldsymbol{\Omega}$, where $\phi(y|z)$ is the system matrix element between the ray path of index $y$ and the wavelet coefficient of index $z \in \mathcal{Z}^{N}$. Assume that there exists a known estimate $\boldsymbol{\hat{\beta}}$ and $\boldsymbol{\hat{q}}(y) = \boldsymbol{I_{0}}(y) \exp (-\sum_{x} h(y|x) \sum_{z} \omega(x|z) \hat{\beta}(z)) = \boldsymbol{I_{0}}(y) \exp (-\sum_{z} \phi(y|z) \hat{\beta}(z)) $. The terms in the I-divergence that depend on $\boldsymbol{\beta}$ are used to construct surrogate functions as follows.
\begin{align} \label{eq:1.5} &= \sum_{y} d(y) \sum_{z} \phi(y|z) \beta(z)\nonumber \\ &\quad \quad +\sum_{y} \hat{q}(y) \exp \big(-\sum_{z} \phi(y|z) (\beta(z) - \hat{\beta}(z)) \big) \nonumber \\ &\le \sum_{z} b(z) \beta(z) \nonumber\\ &\quad \quad +\sum_{y} \sum_{z} \hat{q}(y) r(z|y) \exp(-\frac{\phi(y|z)}{r(z|y)} (\beta(z) - \hat{\beta}(z))), \end{align} where \begin{equation} \label{eq:1.6} b(z) = \sum_{y} d(y)\phi(y|z), \end{equation} the convex decomposition lemma~\cite{OSullivanBenac07} is used for $r(z|y) \ge 0$, $\sum_{z} r(z|y) \le 1$. $r(z|y)$ can be chosen as \[ r(z|y) = \begin{cases} \frac{|\phi(y|z)|}{Z_{0}},& \text{ if } z \in \mathcal{Z}_{s} \\ 0,& \text{ if } z \notin \mathcal{Z}_{s}, \end{cases} \] \begin{equation} \label{eq:1.61} Z_{0} = \max_{y} \sum_{z \in \mathcal{Z}_{s}} |\phi(y|z)|, \end{equation} and $\mathcal{Z}_{s} \subseteq \mathcal{Z}$, $\mathcal{Z}_{s} \neq \emptyset$.\footnote{$\mathcal{Z}_{s}$ represents a subset of the wavelet domain to be chosen for update. In our approach, we choose it in a way that every voxel in the image domain is represented at any iteration, possibly with different numbers of coefficients. This subset can be fixed or varied over iterations.} \begin{align} \label{eq:1.7} &\le \sum_{z \in \mathcal{Z}_{s}} b(z) \beta(z) \nonumber \\ &\quad \quad +\sum_{y} \sum_{z \in \mathcal{Z}_{s}} \hat{q}(y) \frac{|\phi(y|z)|}{Z_{0}} \exp(-Z_{0}\frac{\phi(y|z)}{|\phi(y|z)|} (\beta(z) - \hat{\beta}(z))) \nonumber \\ &\quad \quad +\sum_{z' \notin \mathcal{Z}_{s}} const(z') \end{align} Adding the constant term in the I-divergence, we define our surrogate function, \begin{multline} \label{eq:1.8} \hat{I}_{\mathcal{Z}_{s}}(\boldsymbol{d}||\boldsymbol{q};\boldsymbol{\beta},\boldsymbol{\hat{\beta}}) = \sum_{z \in \mathcal{Z}_{s}} b(z) \beta(z)\\ +\sum_{y} \sum_{z \in \mathcal{Z}_{s}} \hat{q}(y) \frac{|\phi(y|z)|}{Z_{0}} \exp(-Z_{0}\frac{\phi(y|z)}{|\phi(y|z)|} (\beta(z) - \hat{\beta}(z))) \\ +\sum_{z' \notin \mathcal{Z}_{s}} const(z') + const(y) \end{multline} It is easy to see that this is a one-parameter convex function of each $\beta(z)$, and the gradient with respect to $\beta(z)$ is given by: \begin{eqnarray} \label{eq:1.9} \hat{I}'_{\mathcal{Z}_{s}}(\boldsymbol{d}||\boldsymbol{q};\boldsymbol{\beta},\boldsymbol{\hat{\beta}})&=& b(z) - \hat{b}_{+}(z)\exp(-Z_{0}(\beta(z)-\hat{\beta}(z))) \nonumber \\ &-& \hat{b}_{-}(z)\exp(Z_{0}(\beta(z)-\hat{\beta}(z))) \end{eqnarray} where \begin{align} \label{eq:1.10} \hat{b}_{+}(z) &= \sum_{y, \phi(y|z)>0} \hat{q}(y)\phi(y|z),\\ \hat{b}_{-}(z) &= \sum_{y, \phi(y|z)<0} \hat{q}(y)\phi(y|z). \end{align} The first-order necessary condition for a minimizer is to find the $\beta(z)$ for which the gradient is zero, which has a closed-form solution, written out below. The resulting algorithm is summarized in Algorithm~\ref{unr-wam}.
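For completeness, we record this closed form explicitly; the following displays are a direct calculation from (\ref{eq:1.9}) and are added only as a reading aid. Setting (\ref{eq:1.9}) equal to zero and substituting $u=\exp(-Z_{0}(\tilde{\beta}(z)-\hat{\beta}(z)))$ gives
\begin{align*}
b(z)-\hat{b}_{+}(z)\,u-\hat{b}_{-}(z)\,u^{-1}=0
\quad\Longleftrightarrow\quad
\hat{b}_{+}(z)\,u^{2}-b(z)\,u+\hat{b}_{-}(z)=0 .
\end{align*}
Since $\hat{b}_{+}(z)\ge 0$, $\hat{b}_{-}(z)\le 0$ and $u$ must be positive, the relevant root (when $\hat{b}_{+}(z)>0$ and $\hat{b}_{-}(z)<0$) is
\begin{align*}
u=\frac{b(z)+\sqrt{b(z)^{2}-4\,\hat{b}_{+}(z)\,\hat{b}_{-}(z)}}{2\,\hat{b}_{+}(z)},
\qquad
\tilde{\beta}(z)=\hat{\beta}(z)-\frac{1}{Z_{0}}\log u .
\end{align*}
When $\hat{b}_{+}(z)=0$ or $\hat{b}_{-}(z)=0$, the gradient contains a single exponential and the root is obtained directly.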
\begin{algorithm} \caption{Unregularized Wavelet AM Algorithm}\label{unr-wam} \begin{algorithmic} \State{\textbf{Inputs}: $\boldsymbol{\beta^{(0)}}, \boldsymbol{d}, \boldsymbol{I_{0}}, \boldsymbol{H}, \boldsymbol{\Phi}, \boldsymbol{\Omega}, \mathcal{Z}^{(j)}_{s} \text{ for } j=0, 1, ..., (J-1) $} \State {Precompute $b(z) = \sum_{y} d(y) \phi(y|z) $} \For{$j=0, 1, ..., (J-1)$} \State $\hat{q}^{(j)}(y) = I_{0}(y) \exp(-\sum_{z \in \mathcal{Z}^{(j)}_{s}} \phi(y|z) \hat{\beta}^{(j)}(z))$ \State {$ Z_{0}^{(j)} = \max_{y} \sum_{z \in \mathcal{Z}^{(j)}_{s}} |\phi(y|z)|$} \For {every $z \in \mathcal{Z}^{(j)}_{s}$} \State $\hat{b}^{(j)}_{+}(z) = \sum_{y, \phi(y|z)>0} \hat{q}^{(j)}(y)\phi(y|z)$ \State $\hat{b}^{(j)}_{-}(z) = \sum_{y, \phi(y|z)<0} \hat{q}^{(j)}(y)\phi(y|z)$ \State $\hat{\beta}^{(j+1)}(z) = \tilde{\beta}(z)$ where \State $b(z) - \hat{b}^{(j)}_{+}(z)\exp(-Z_{0}^{(j)}(\tilde{\beta}(z)-\hat{\beta}^{(j)}(z)))$ \State{$- \hat{b}^{(j)}_{-}(z)\exp(Z_{0}^{(j)}(\tilde{\beta}(z)-\hat{\beta}^{(j)}(z))) = 0$} \EndFor \State{\textbf{end for}} \EndFor \State{\textbf{end for}} \end{algorithmic} \end{algorithm} \section{Results} The multiresolution technique has been evaluated using a real data scan of the NIST Phantom Test Article A~\cite{NIST} acquired on a SureScan\textsuperscript{TM} \emph{x}1000 Explosive Detection System. A two-dimensional level-3 Haar discrete wavelet transform is used to represent each z-slice of the three-dimensional image domain. The wavelet tree $\mathcal{Z}^{(j)}_{s}$ is initialized to consist of approximation coefficients only. At iteration number $64$, the coefficients are back projected to the image space, voxel values across z-slices are summed up, and the pixels whose values are larger than $0.1$ times the maximum of the summed image are chosen to expand one level. Then, at iteration number $128$, the same procedure is applied with the same factor to expand one level further, and the last expansion is done at iteration number $256$. Figure \ref{fig_1} shows objective function values versus time for the unregularized alternating minimization (AM) algorithm~\cite{OSullivanBenac07} and the unregularized wavelet AM presented in this paper. The AM algorithm was run for 100 iterations, while the wavelet AM was run for 300 iterations. Figures \ref{fig_2} and \ref{fig_3} show image slices reconstructed by the two algorithms at the same objective function value. The difference between these two images (unregularized AM image subtracted from the wavelet AM image) is shown in Figure \ref{fig_4}. It is important to note that even though the two images are at the same objective function value, the image reconstructed using wavelet AM has sharper edges. \begin{figure}[!t] \centering \includegraphics[width=0.9\linewidth]{objfnc_v3} \caption{Objective function values vs.
time for AM and Wavelet AM.} \label{fig_1} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=1.0\linewidth]{am_100_iters_v2} \caption{Image reconstructed with unregularized AM after 100 iterations.} \label{fig_2} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=1.0\linewidth]{wam_100_iters_v2} \caption{Image reconstructed with wavelet AM after 300 iterations.} \label{fig_3} \end{figure} \begin{figure}[!t] \centering \includegraphics[width=1.0\linewidth]{diff_100_iters_v2} \caption{Difference image: unregularized AM image subtracted from the wavelet AM image.} \label{fig_4} \end{figure} \section{Conclusion} A fast, iterative, and adaptive algorithm for x-ray imaging was formulated and presented using the alternating minimization framework. The algorithm is guaranteed to decrease the objective function at each iteration, and the adaptive wavelet tree structure provides better utilization of computations. In other words, more computation is used for regions with high-frequency components, like edges, while less is used for smoother areas. The wavelet tree expansion used to reconstruct the image shown in the results section is only one of many possible expansion strategies. Different ways to expand the tree will be investigated in the future. Different scale levels of the discrete wavelet transform, different wavelet types, and the exploration of regularization are other directions to be pursued. Furthermore, this method can be combined with other acceleration methods like ordered subsets \cite{ErdoganFessler99}. Preliminary studies combining ordered subsets and wavelet AM showed promising results and will be investigated further. \section*{Acknowledgment} We thank Carl Bosch, Nawfel Tricha, Sean Corrigan, Michael Hollenbeck, and Assaf Mesika of SureScan\textsuperscript{TM} Corporation for their contributions to the collection, formatting, and sharing of the data.
\section{Introduction} \quad Local operations and classical communications (LOCC) are basic operations in quantum information theory. Many interesting studies have arisen from the question of what we can and cannot do using only LOCC. The question is highly non-trivial and difficult to solve due to the lack of a simple characterization of LOCC. The necessary and sufficient condition for the deterministic convertibility of one pure state to another was derived by Nielsen, for general bipartite systems, in \cite{Nielsen}. Furthermore, in \cite{Vidal}, Vidal obtained the optimal probability of converting one pure state to another non-deterministically. However, when we consider the simultaneous convertibility of more than one state, the problem becomes even more difficult, because the Lo-Popescu theorem \cite{lop} is not applicable there. The local distinguishability problem is one of these questions. The problem is as follows: We investigate a combined quantum system consisting of two parts $A$ and $B$ held by separated observers (Alice and Bob). We denote the associated Hilbert space by ${\cal H}_A\otimes{\cal H}_B$, where ${\cal H}_A$, ${\cal H}_B$ are separable (i.e., possibly infinite dimensional) Hilbert spaces that represent the systems of Alice and Bob, respectively. Let $\psi_1,\cdots,\psi_M$ be orthonormal vectors in ${\cal H}_A\otimes{\cal H}_B$, which represent $M$ pure states. Suppose that the system is in a state $\psi$, which is prepared to be one of $\psi_1,\cdots,\psi_M$. Alice and Bob know that $\psi$ is one of $\psi_1,\cdots,\psi_M$, but they do not know which one it is. The problem is whether Alice and Bob can find out which one it is when only LOCC is allowed. In \cite{WH}, Walgate et al. proved that any two orthogonal pure states in finite dimensional systems are distinguishable. Unfortunately, because of the nature of their proof, this important result has been restricted to finite dimensional systems so far. As it is indispensable to consider infinite dimensional systems in the real world, the analogous result for infinite dimensional systems is desirable. In this paper, we prove the infinite dimensional version: \newtheorem{wthm}{Theorem}[section] \renewcommand{\thewthm}{} \begin{wthm} Any two orthogonal pure states are distinguishable by LOCC, even for infinite dimensional systems. \end{wthm} In spite of these simple results for two pure states, it is known that more than two pure states are not always distinguishable by LOCC. It was proved that three Bell states cannot be distinguished with certainty by LOCC and four Bell states cannot, even probabilistically \cite{Ghosh}. A set of non-entangled pure states that are not locally distinguishable was introduced in \cite{ben}. The discrimination probability for the worst case was estimated in \cite{na}. In this paper, we give an estimate of the discrimination probability for some families of more than two pure states. This result also holds for infinite dimensional systems.\\ In order to investigate distinguishability, we look for a suitable decomposition of the states. Let us decompose the vectors $\psi_1,\cdots,\psi_M$ with respect to an orthonormal basis $\{e_k\}$ of ${\cal H}_B$: \begin{align} \psi_l=\sum_k\xi_k^l\otimes e_k ,\quad l=1,\cdots, M.
\label{decom} \end{align} (Here and below, if the dimension of ${\cal H}_B$ is finite $n$, $\sum_i \varphi'_i\otimes f_i'$ stands for the sum $\sum_{i=1}^n\varphi'_i\otimes f_i'$, while if ${\cal H}_B$ is infinite dimensional, it stands for the limit $\lim_{n\to\infty}\sum_{i=1}^n \varphi_i'\otimes f_i'$, when the limit converges in the norm topology of ${\cal H}_A\otimes{\cal H}_B$.) Suppose that the vectors $\{\xi_k^l\}$ satisfy the orthogonal conditions for each $k$: \begin{align} \langle \xi_k^l\ket{\xi_k^m}=0\quad\forall{l\neq m}, \quad\forall k. \label{ortho} \end{align} This orthogonality condition does not hold in general, but if this condition holds, Alice and Bob can distinguish these states by the following LOCC: First Bob performs a projective measurement $\{\ket{e_k}\bra{e_k}\}$ on his side. Then he tells the result $k$ of his measurement to Alice by a classical communication. For each $k$, let $S_k$ be a set of $1\le l\le M$ such that $\xi_k^l\neq0$. According to the information from Bob, Alice performs a projective measurement given by projections $\{\ket{\hat\xi_k^l}\bra{\hat\xi_k^l}\}_{l\in{S_k}}$ and $1-\sum_{l\in{S_k}}\ket{\hat\xi_k^l}\bra{\hat\xi_k^l}$. Here, a vector $\hat\xi_k^l\in{\cal H}_A$ is the normalization of the vector $\xi_k^l\in{\cal H}_A$. As $\{\xi_k^l\}_{l\in S_k}$ are mutually orthogonal for each $k$, the projections are orthogonal. Because the initial state $\psi$ was prepared to be one of $\psi_1,\cdots,\psi_M$, Alice obtains one of $\hat\xi_k^l$, $l\in{S_k}$. When Bob obtains $e_k$ and Alice obtains $\hat{\xi_k^l}$, they can say the original state $\psi$ was $\psi_l$, because if $\psi=\psi_m$ for $m\neq l$, the probability that they obtain $e_k$ and $\hat{\xi_k^l}$ is $0$. Hence a deterministic local discrimination is possible when the decomposition (\ref{decom}) with the orthogonality condition (\ref{ortho}) is given. Next let us consider probabilistic discriminations. Suppose that $\psi_1,\cdots,\psi_M$ are decomposed into the form (\ref{decom}), but now the orthogonal condition holds only partially, i.e., just for $k$ larger than some $N_p$: \begin{align} \langle \xi_k^l\ket{\xi_k^m}=0\quad \forall{l\neq m}\quad\quad \forall k> N_p. \label{prob} \end{align} In this case, $\psi_1,\cdots,\psi_M$ can be distinguished by conclusive LOCC protocol, probabilistically. Let ${P}_{d}$ be the largest probability that can be attained. The conclusive protocol below gives the lower bound of ${P}_d$: \begin{align} P_d\ge 1-\underset{1\le l\le M}{\max} \sum_{k=1}^{N_p}\Vert\xi_k^l\Vert^2. \label{pb} \end{align}First Bob performs the projective measurement $\{\ket{e_k}\bra{e_k}\}$ again. If he gets the result $k> N_p$, he tells the result to Alice. Then Alice performs the projective measurement given by projections $\{\ket{\hat\xi_k^l}\bra{\hat\xi_k^l}\}_{l\in{S_k}}$ and $1-\sum_{l\in S_k}\ket{\hat\xi_k^l}\bra{\hat\xi_k^l}$, and obtains one of $\hat\xi_k^l$. If she gets $\hat\xi_k^l$, then they can conclude $\psi=\psi_l$, as before. In this way, they can distinguish $\psi_1,\cdots,\psi_M$ if the result of Bob's measurement is $k> N_p$. On the other hand, if Bob obtains $k\le N_p$, we regard it as an error. When $\psi=\psi_l$, the probability the error occurs is $\sum_{k=1}^{N_p}\Vert\xi_k^l\Vert^2$. Hence $\psi_1,\cdots,\psi_M$ can be distinguished by LOCC with probability $P_d$, lower bounded as in (\ref{pb}). The problem is if there is a decomposition (\ref{decom}) of $\psi_1,\cdots,\psi_M$, satisfying the orthogonality condition (\ref{ortho}) or (\ref{prob}). 
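As a simple concrete illustration of the deterministic protocol described above (a two-qubit special case added here only as an example), let ${\cal H}_A={\cal H}_B={\mathbb C}^2$ with orthonormal bases $\{a_1,a_2\}$ of ${\cal H}_A$ and $\{e_1,e_2\}$ of ${\cal H}_B$, and consider
\begin{align*}
\psi_1=\frac{1}{\sqrt{2}}\left(a_1\otimes e_1+a_2\otimes e_2\right),\qquad
\psi_2=\frac{1}{\sqrt{2}}\left(a_2\otimes e_1+a_1\otimes e_2\right).
\end{align*}
With respect to $\{e_k\}$ we have $\xi_1^1=a_1/\sqrt{2}$, $\xi_2^1=a_2/\sqrt{2}$ and $\xi_1^2=a_2/\sqrt{2}$, $\xi_2^2=a_1/\sqrt{2}$, so the orthogonality condition (\ref{ortho}) holds for every $k$: whichever outcome $e_k$ Bob obtains and announces, Alice's projective measurement $\{\ket{a_1}\bra{a_1},\ket{a_2}\bra{a_2}\}$ identifies the state with certainty. In general, however, it is not obvious whether a basis $\{e_k\}$ of ${\cal H}_B$ achieving (\ref{ortho}), or (\ref{prob}), exists; this is the question addressed below.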
In order to deal with this problem, we will introduce a real vector space of trace class self-adjoint operators on the Hilbert space ${\cal H}_B$, determined by the states $\psi_1,\cdots,\psi_M$. We will denote the vector space by $\cal K$. Let $N$ be the dimension of $\cal K$ and $(A_1,\cdots, A_N)$ a basis of $\cal K$. For every orthogonal projection $P$ on ${\cal H}_B$, we investigate the subset of ${\mathbb R}^N$ given by \begin{align*} \left\{\left(\left\langle z,A_1 z\right\rangle, \cdots,\left\langle z,A_N z\right\rangle\right)\; : \; z\in P{\cal H}_B,\; \Vert z\Vert=1\right\}\subset{\mathbb R}^N. \end{align*} This set is the {\it joint numerical range} of the operators $(A_1,\cdots,A_N)$, restricted to the sub-Hilbert space $P{\cal H}_B$. We will show that the convexity of these sets implies the existence of the decomposition (\ref{decom}) with the orthogonality condition (\ref{ortho}), hence the local distinguishability of the states $\psi_1,\cdots,\psi_M$. One of the advantages of this method is that we can easily treat infinite dimensional systems. In this paper, we prove the infinite dimensional version of \cite{WH}: by convex analysis on joint numerical ranges, we show that any two orthogonal pure states can be decomposed as in (\ref{decom}) with the orthogonality condition (\ref{ortho}). We also apply our method to investigate the distinguishability of more than two pure states. We show that if the dimension of $\cal K$ is $3$, the condition (\ref{prob}) holds for $N_p=2$; hence the states are distinguishable probabilistically (Theorem \ref{snd}). The remainder of the paper is organized in the following way: In Section \ref{rep}, we introduce a representation of a vector in ${\cal H}_A\otimes{\cal H}_B$ as an operator from ${\cal H}_B$ to ${\cal H}_A$. From this representation, we define the real vector space $\cal K$. Then we state our main results in terms of the vector space $\cal K$. In Section \ref{proof}, by use of convex analysis on joint numerical ranges, we show the distinguishability of the states. \section{The distinguishability of states}\label{rep} In this section, we introduce a representation of pure states on ${\cal H}_A\otimes{\cal H}_B$ as operators from ${\cal H}_B$ to ${\cal H}_A$, and describe our main results in terms of the operator representation. In finite dimensional systems, the operator representation corresponds to the well-known matrix representation of states by use of a maximally entangled state (see for example \cite{mat}). Let ${\cal H}_A$, ${\cal H}_B$ be separable (possibly infinite dimensional) Hilbert spaces. Let us fix some orthonormal basis $\{f_i\}$ of ${\cal H}_B$. A vector $\psi$ in ${\cal H}_A\otimes{\cal H}_B$ can be decomposed as \begin{align*} \psi=\sum_i\varphi_i\otimes f_i, \end{align*} in general. Here, the limit $\lim_{n\to\infty}\sum_{i=1}^n\varphi_i\otimes f_i$ converges in the norm topology of ${\cal H}_A\otimes{\cal H}_B$ for the infinite dimensional case. The vectors $\varphi_i$ in ${\cal H}_A$ satisfy \begin{equation} \sum_i\Vert\varphi_i\Vert^2=\Vert\psi\Vert^2. \label{psum} \end{equation} Now we define a bounded linear operator $X$ from ${\cal H}_B$ to ${\cal H}_A$ by \begin{equation} X\eta\equiv \sum_i \langle f_i\vert \eta\rangle\cdot\varphi_i,\quad \forall\eta\in {\cal H}_B. \label{Xdef} \end{equation} From (\ref{psum}), the sum in (\ref{Xdef}) converges absolutely in the norm of ${\cal H}_A$, and we obtain $\Vert X\Vert\le \Vert\psi\Vert$. Then the vector $\psi$ is represented as \begin{align*} \psi=\sum_{i}\varphi_i\otimes f_i =\sum_i(Xf_i)\otimes f_i.
\end{align*} The bounded operator $X^*X$ on ${\cal H}_B$ satisfies \begin{align} Tr X^*X =\sum_i\Vert\varphi_i\Vert^2 =\Vert \psi\Vert^2 <\infty, \label{trace} \end{align} i.e., $X^*X$ is a trace class operator on ${\cal H}_B$. By operating $1\otimes\ket{f_i}\bra{f_i}$ on $\psi$, we see that $X$ is the unique operator such that $\psi=\sum_{i}Xf_i\otimes f_i$. On the other hand, for any bounded linear operator $X$ from ${\cal H}_B$ to ${\cal H}_A$ satisfying $TrX^*X<\infty$, there exists a unique vector $\sum_{i}X f_i\otimes f_i$ (i.e., for the infinite dimensional case, the limit $\lim_{n\to\infty}\sum_{i=1}^nX f_i\otimes f_i$ exists in the norm of ${\cal H}_A\otimes{\cal H}_B$). Hence we obtain the following one-to-one correspondence: \begin{align*} \psi\in{\cal H}_A\otimes{\cal H}_B\quad \Leftrightarrow\quad X\in B({\cal H}_B,{\cal H}_A), \quad s.t.\quad TrX^*X<\infty, \end{align*} through the relation \begin{align} \psi =\sum_{i}(Xf_i)\otimes f_i. \label{asso} \end{align} Here $B({\cal H}_B,{\cal H}_A)$ indicates the set of bounded operators from ${\cal H}_B$ to ${\cal H}_A$. Now let us consider a set of $M$ orthonormal vectors $\psi_1,\cdots, \psi_M$ in ${\cal H}_A\otimes{\cal H}_B$. We can associate each $\psi_l$ with an operator $X_l$ through (\ref{asso}). As in (\ref{trace}), $X_m^*X_l$ are trace class operators on ${\cal H}_B$ for all $1\le m,l\le M$ and satisfy \begin{align} Tr X_m^*X_l =\left\langle\psi_m,\psi_l\right\rangle=\delta_{m,l},\quad 1\le m,l\le M. \label{otg} \end{align} Let ${\cal K}$ be the real linear subspace of trace class self-adjoint operators on ${\cal H}_B$ spanned by the operators $\{X_m^*X_l+X_l^*X_m,\; i(X_m^*X_l-X_l^*X_m)\}_{m\neq l}$. Let $N$ be the dimension of ${\cal K}$ and $(A_1,\cdots,A_{N})$ an arbitrary basis of ${\cal K}$. The dimension $N$ is bounded as $N\le M(M-1)$. Because each $X^*_mX_l$ satisfies (\ref{otg}), we have \begin{align} Tr A_i=0,\quad i=1,\cdots,N. \label{atrace} \end{align} We will call $\cal K$ the real vector space of trace class self-adjoint operators associated with $\psi_1,\cdots,\psi_M$. Now we are ready to state our main results. In this paper, we show the following theorems: \begin{thm} Let ${\cal H}_A$, ${\cal H}_B$ be (possibly infinite dimensional) separable Hilbert spaces. Let $\psi_1,\cdots,\psi_M$ be a set of orthogonal pure states in ${\cal H}_A\otimes{\cal H}_B$ and $\cal K$ the associated real vector space of trace class self-adjoint operators on ${\cal H}_B$. Then if the dimension of ${\cal K}$ is $2$, the states $\psi_1,\cdots,\psi_M$ are distinguishable by LOCC with certainty. In particular, any two orthogonal pure states $\psi_1,\psi_2$ are distinguishable by LOCC with certainty. \label{fst} \end{thm} \begin{thm} Let ${\cal H}_A$, ${\cal H}_B$ be (possibly infinite dimensional) separable Hilbert spaces. Let $\psi_1,\cdots,\psi_M$ be a set of orthogonal pure states in ${\cal H}_A\otimes{\cal H}_B$ and $\cal K$ the associated real vector space of trace class self-adjoint operators on ${\cal H}_B$. Suppose that the dimension of $\cal K$ is $3$. Then $\psi_1,\cdots,\psi_M$ are distinguishable by a conclusive LOCC protocol with probability $P_d$ such that \[ P_d\ge 1-\underset{1\le l\le M}{\max}\left(\sum_{k=1}^2p_k^l\right). \] Here, $p^l_k$ represents the $k$-th Schmidt coefficient of $\psi_l$, ordered in decreasing order. \label{snd} \end{thm} \begin{rem} The last statement of Theorem \ref{fst} is the extension of \cite{WH} to infinite dimensional systems.
Applying the argument in \cite{WH}, we can extend the result to multipartite systems: any two orthogonal pure states in multipartite systems are distinguishable by LOCC even in infinite dimensional systems. \end{rem} \begin{rem} In \cite{virmani}, S. Virmani et al. showed that any two (even non-orthogonal) multipartite pure states in finite dimensional systems can be optimally distinguished using only LOCC. It was derived using the result of the orthogonal case in \cite{WH}. The argument there can be applied to our infinite dimensional case. Therefore, any two bipartite pure states can be optimally distinguished using only LOCC, even for infinite dimensional systems. \end{rem} \section{Proof} \label{proof} In this section, we prove the main theorems. We relate the problem of distinguishability to the convexity of joint numerical ranges. Let $(A_1,\cdots, A_N)$ be bounded self-adjoint operators on a Hilbert space $\cal H$. A subset of ${\mathbb R}^N$ given by \begin{align*} \left\{\left( \langle z,A_1 z\rangle,\;\langle z,A_2 z\rangle,\;\cdots, \langle z,A_N z\rangle \right);\; z\in{\cal H},\;\Vert z\Vert=1\right\} \subset{\mathbb R}^N \end{align*} is called the joint numerical range of $(A_1,\cdots,A_N)$. Furthermore, for an orthogonal projection $P$ on $\cal H$, we will call the set \begin{align*} C_P(A_1,\cdots,A_N)\equiv \left\{\left( \langle z,A_1 z\rangle,\;\langle z,A_2 z\rangle,\;\cdots, \langle z,A_N z\rangle \right);\; z\in P{\cal H},\;\Vert z\Vert=1\right\} \subset{\mathbb R}^N, \end{align*} the joint numerical range of $(A_1,\cdots,A_N)$ restricted to the sub-Hilbert space $P{\cal H}$. Theorems \ref{fst} and \ref{snd} are derived as corollaries of the following propositions: \begin{prop} Let $\psi_1,\cdots,\psi_M$ be a set of orthogonal pure states in ${\cal H}_A\otimes {\cal H}_B$, and $\cal K$ the associated real vector space of trace class self-adjoint operators on ${\cal H}_B$. Let $(A_1,\cdots, A_{N})$ be a basis of $\cal K$. Suppose that for any projection $P$ on ${\cal H}_B$, $C_P(A_1,\cdots,A_N)$ is convex. Then the states $\psi_1,\cdots,\psi_M$ are distinguishable by LOCC with certainty. \label{thm1} \end{prop} \begin{prop} Let $\psi_1,\cdots,\psi_M$ be a set of orthonormal pure states in ${\cal H}_A\otimes {\cal H}_B$, and $\cal K$ the associated real vector space of trace class self-adjoint operators on ${\cal H}_B$. Let $(A_1,\cdots, A_{N})$ be a basis of ${\cal K}$. Suppose that for any projection $P$ on ${\cal H}_B$ with dimension larger than $N_p$, $C_P(A_1,\cdots,A_N)$ is convex. Then the states $\psi_1,\cdots,\psi_M$ are distinguishable by LOCC with probability $P_d$ such that \[ P_d\ge1-\underset{1\le l\le M}{\max} \left(\sum_{k=1}^{N_p}p^l_k\right).\] Here, $p^l_k$ represents the $k$-th Schmidt coefficient of $\psi_l$, ordered in decreasing order. \label{thm2} \end{prop} First we prove Proposition \ref{thm1}. The proof consists of four steps: {\it Step 1}. First, we show that if ${\cal H}_B$ has an orthonormal basis $\{g_k\}$ such that $\langle g_k,A_i g_k\rangle=0$ for all $i=1,\cdots,N$ and $k$, then $\psi_1,\cdots,\psi_M$ are distinguishable by LOCC (Lemma \ref{base}). {\it Step 2}. Second, using convex analysis, we show that if the joint numerical range of $(A_1,\cdots,A_N)$ is convex, there exists at least one vector $z\in{\cal H}_B$ such that $\langle z,A_i z\rangle=0$ for all $i=1,\cdots,N$ (Lemma \ref{one}). {\it Step 3}.
Third, using Lemma \ref{one}, we show the existence of an orthonormal basis satisfying the desired condition in {\it Step 1} (Lemma \ref{convex}). {\it Step 4}. Finally, combining the results of {\it Step 1} and {\it Step 3}, we obtain Proposition \ref{thm1}. Now let us start the proof. First we show the following lemma: \begin{lem} Let $(A_1,\cdots,A_N)$ be a basis of $\cal K$ associated with $\psi_1,\cdots,\psi_M$. Suppose that there exists an orthonormal basis $\{g_k\}$ of ${\cal H}_B$ such that \begin{align} \langle g_k,A_i g_k\rangle=0,\quad \forall k,\quad i=1,\cdots, N. \label{asA} \end{align} Then the states $\psi_1,\cdots,\psi_M$ are distinguishable by LOCC. \label{base} \end{lem} {\it Proof} Let $\{f_i\}$ be the orthonormal basis fixed in Section \ref{rep}. (Recall that we defined the operators $X_l$ in terms of $\{f_i\}$.) We define an antilinear operator $J:{\cal H}_B\to{\cal H}_B$ to be the complex conjugation with respect to $\{f_i\}$: \begin{align*} J\sum_i\alpha_i f_i\equiv\sum_i\bar{\alpha_i}f_i. \end{align*} As $J$ is an antilinear isometry, $\{Jg_k\}$ is an orthonormal basis of ${\cal H}_B$. Therefore, we can decompose $\psi_1,\cdots,\psi_M$ with respect to $\{Jg_k\}$: \begin{align} \psi_l=\sum_k\xi^l_k\otimes Jg_k. \label{dcom1} \end{align} We show that for each $k$, $\{\xi_k^1,\cdots,\xi_k^M\}$ are mutually orthogonal. Let us decompose $\psi_l$ with respect to $\{f_i\}$: \begin{align} \psi_l=\sum_i\varphi_i^l\otimes f_i. \label{decom2} \end{align} Comparing (\ref{dcom1}) and (\ref{decom2}), we obtain \begin{align*} \xi_k^l=\sum_i\varphi_i^l\langle Jg_k,f_i\rangle =\sum_i\varphi_i^l\langle f_i, g_k\rangle =X_lg_k. \end{align*} As $(A_1,\cdots,A_N)$ is a basis of ${\cal K}$, the assumption (\ref{asA}) implies \begin{align*} \langle \xi^l_k, \xi_k^{m}\rangle= \langle X_l g_k, X_m g_k\rangle=0 \quad \forall\;l\neq m,\quad \forall k. \end{align*} Hence for each $k$, $\{\xi_k^1,\cdots,\xi_k^M\}$ are mutually orthogonal. Thus (\ref{dcom1}) takes the form of (\ref{decom}), with the orthogonality condition (\ref{ortho}). Therefore, from the arguments in the Introduction, we can distinguish $\psi_1,\cdots,\psi_M$ by LOCC with certainty. $\square$\\\\ Next we show the following lemma, which holds on a general Hilbert space ${\cal H}$: \begin{lem} Let $(A_1,\cdots,A_N)$ be a set of trace class self-adjoint operators on a Hilbert space $\cal H$ such that $Tr A_i=0$ for each $1\le i\le N$. Suppose that the joint numerical range of $(A_1,\cdots,A_N)$ is a convex subset of ${\mathbb R}^N$. Then there exists a vector $z\in{\cal H}$ with $\Vert z \Vert=1$ such that \begin{align*} \langle z, A_i z\rangle=0,\quad i=1,\cdots, N. \end{align*} \label{one} \end{lem} {\it Proof}\\ Before starting the proof, we review some basic facts from convex analysis \cite{fund}. Let $x_1,\cdots, x_k$ be elements in ${\mathbb R}^N$. An element $\sum_{i=1}^{k}\alpha_i x_i$ with real coefficients $\alpha_i$ satisfying $\sum_{i=1}^{k}\alpha_i=1$ is called an affine combination of $x_1,\cdots,x_k$. An affine manifold in ${\mathbb R}^N$ is a set containing all affine combinations of its elements. Let $S$ be a nonempty subset of ${\mathbb R}^N$. The affine hull of $S$ is defined to be the smallest affine manifold containing $S$. We denote the affine hull of $S$ by ${\rm aff} S$. In other words, ${\rm aff} S$ is the affine manifold generated by $S$. As is easily seen, it is a closed set, namely a translate of a linear subspace of ${\mathbb R}^N$. Its dimension may be lower than $N$ in general.
The relative interior of $S$, ${\rm ri}S$, is the interior of $S$ with respect to the topology relative to ${\rm aff}S$. In other words, \begin{align*} {\rm ri} S\equiv\{x\in S;\;\exists\; \varepsilon >0\; s.t.\; B(x,\varepsilon )\cap{\rm aff}S\subset S\}. \end{align*} Here, $B(x,\varepsilon)$ is a ball of radius $\varepsilon$, centered at $x$. The following fact is known: \begin{lem} Let $C$ be a nonempty convex subset of ${\mathbb R}^N$. Then for any point $x_0$ in ${\rm aff}C\backslash{\rm ri}C$, there exists a non-zero vector $s\in {\mathbb R}^N$ parallel to ${\rm aff}C$, such that \[ \left\langle\left\langle s,x-x_0\right\rangle\right\rangle \ge 0,\quad \forall x\in C. \] Here $\left\langle\left\langle\;,\;\right\rangle\right\rangle$ is the inner product of ${\mathbb R}^N$: \begin{align*} \left\langle\left\langle s, x\right\rangle\right\rangle \equiv\sum_{i=1}^Ns_i\cdot x_i. \end{align*} \label{hyper} \end{lem} Now we are ready to prove Lemma \ref{one}. The claim is equivalent to saying that $0$ is included in the joint numerical range of the operators $(A_1,\cdots, A_N)$. We denote the joint numerical range by $C_1$: \begin{align*} C_1\equiv\left\{\left( \langle z,A_1 z\rangle,\;\langle z,A_2 z\rangle,\;\cdots, \langle z,A_N z\rangle \right)\in{\mathbb R}^N;\; z\in{\cal H},\;\Vert z\Vert=1\right\}. \end{align*} By assumption, $C_1$ is a nonempty convex subset of ${\mathbb R}^N$. Let $\{e_k\}$ be an arbitrary orthonormal basis of $\cal H$. By the definition of $C_1$, \begin{align*} x_k\equiv\left(\left\langle e_k,A_1 e_k\right\rangle, \cdots,\left\langle e_k,A_N e_k\right\rangle\right) \end{align*} is an element of $C_1$ for each $k$. The finite dimensional case ${\cal H}={\mathbb C}^n$ is immediate: by the convexity of $C_1$, we obtain \begin{align*} 0=\frac{1}{n}\left( Tr A_1,\cdots,Tr A_N \right) =\frac{1}{n}\sum_{k=1}^n \left( \langle e_k, A_1 e_k\rangle, \cdots, \langle e_k, A_N e_k\rangle \right) \in C_1. \end{align*} Below we prove the infinite dimensional case. First we observe that $0$ is included in the closure of $C_1$. In particular, $0$ is in ${\rm aff}C_1$. To see this, note that for all $l\in{\mathbb N}$, we have \begin{align*} \frac{1}{l}\sum_{k=1}^l \left( \langle e_k, A_1 e_k\rangle, \cdots, \langle e_k, A_N e_k\rangle \right)\in C_1. \end{align*} As $A_i$ is a trace class operator, the sum $\sum_{k=1}^\infty\langle e_k, A_i e_k\rangle$ converges absolutely. By taking the $l\to\infty$ limit, we obtain \begin{align*} 0=\lim_{l\to\infty}\frac{1}{l}\sum_{k=1}^l \left( \langle e_k, A_1 e_k\rangle, \cdots, \langle e_k, A_N e_k\rangle \right)\in \overline{C_1}\subset{\rm aff}C_1. \end{align*} Hence $0$ is in ${\rm aff}C_1$. Second, we show that $0$ is actually in ${\rm ri}C_1$. To prove this, assume $0$ is not included in ${\rm ri}C_1$. Then it is an element of ${\rm aff}C_1\backslash{\rm ri} C_1$. As $C_1$ is a nonempty convex set, from Lemma \ref{hyper}, there exists a non-zero vector $s=(s_1,\cdots,s_N)\in {\mathbb R}^N$ parallel to ${\rm aff}C_1$, such that \begin{align*} \left\langle\left\langle s,x\right\rangle\right\rangle \ge 0,\quad \forall x\in C_1. \end{align*} As $x_k\in C_1$, we have \begin{align} \left\langle\left\langle s, x_k \right\rangle\right\rangle \ge 0, \label{each} \end{align} for all $k$. On the other hand, we have \begin{align} \sum_{k=1}^{\infty} \left\langle\left\langle s, x_k \right\rangle\right\rangle =\sum_{i=1}^N s_i\cdot\sum_{k=1}^\infty \left\langle e_k,A_i e_k\right\rangle =\sum_{i=1}^N s_i\cdot TrA_i=0.
\label{sum} \end{align} From (\ref{each}) and (\ref{sum}), we obtain $\left\langle\left\langle s, x_k\right\rangle\right\rangle=0$ for all $k$. As the orthonormal basis $\{e_k\}$ can be taken arbitrary, we obtain \begin{align*} \left\langle\left\langle s, x \right\rangle\right\rangle=0,\quad \forall x\in C_1. \end{align*} As $s$ is a non-zero vector parallel to ${\rm aff}C_1$, this means that $C_1$ is included in some affine manifold that is strictly smaller than ${\rm aff}C_1$. This contradicts the definition of ${\rm aff}C_1$. (Recall that ${\rm aff}C_1$ is the smallest affine manifold including $C_1$.) Therefore, we obtain $0\in {\rm ri}C_1$. In particular, $0\in C_1$ and this completes the proof. $\square$ \\\\ Using Lemma \ref{one}, we obtain the following lemma: \begin{lem} Let $(A_1,\cdots ,A_N)$ be a set of trace class self-adjoint operators on a Hilbert space $\cal H$ such that $Tr A_i=0$ for each $1\le i\le N$. Suppose that for every orthogonal projection $P$ on $\cal H$, $C_P(A_1,\cdots,A_N)$ is convex. Then there exists an orthonormal basis $\{g_k\}$ of $\cal H$, such that \[ \left\langle g_k,A_i g_k \right\rangle=0,\quad \forall i=1,\cdots N,\quad \forall k. \] \label{convex} \end{lem} {\it Proof}\\ We will say that a set of vectors $Z$ in $\cal H$ satisfies {\it Property \nolinebreak*} if it satisfies the following conditions:\\\\ {\it Property *} \begin{enumerate} \item $Z$ is a set of mutually orthogonal unit vectors of $\cal H$. \item $\left\langle z,A_i z \right\rangle=0,\quad i=1,\cdots,N\;$ for all $z\in Z$. \end{enumerate} By Zorn's lemma, there exists a maximal set of orthonormal vectors $\{g_k\}$ in $\cal H$ which satisfies the {\it Property *}. It suffices to show that $\{g_k\}$ is complete. Suppose that $\{g_k\}$ is not complete in $\cal H$, and let $P$ be the orthogonal projection onto the sub-Hilbert space spanned by $\{g_k\}$. From {\it Property *}, we have \begin{align*} Tr PA_iP=\sum_k \left\langle g_k, A_i g_k\right\rangle=0, \quad i=1,\cdots N. \end{align*} Let $\bar P$ be $\bar P=1-P$. Now we regard $(\bar P A_1 \bar P,\cdots,\bar P A_N \bar P)$ as self-adjoint trace class operators on the Hilbert space ${\bar P}{\cal H}$ such that \begin{align*} Tr_{\bar P{\cal H}}(\bar P A_i \bar P) =Tr(A_i)-Tr(PA_iP)=0,\quad i=1,\cdots N. \end{align*} By the assumption, the joint numerical range of $(\bar P A_1 \bar P,\cdots,\bar P A_N \bar P)$ on $\bar{P}{\cal H}$ is convex. Thus, applying Lemma \ref{one}, there exists a unit vector $z\in\bar{P}{\cal H}$ such that $\left\langle z, A_i z\right\rangle=0$ for all $i=1,\cdots,N$. As $z$ is orthogonal to all $g_k$, the set $\{z\}\cup\{g_k\}$ satisfies the {\it Property *}, and is strictly larger than $\{g_k\}$. This contradicts the maximality of $\{g_k\}$. Therefore, $\{g_k\}$ is complete. $\square$ \\\\ Now, let us complete the proof of Proposition \ref{thm1}. The basis of $\cal K$, $(A_1,\cdots, A_{N})$ are trace class self-adjoint operators satisfying $TrA_i=0,\;i=1,\cdots,N$ (\ref{atrace}). Therefore, if $C_P(A_1,\cdots,A_N)$ is a convex subset of ${\mathbb R}^N$ for any orthogonal projection $P$ on ${\cal H}_B$, there exists an orthonormal basis $\{g_k\}$ of ${\cal H}_B$ such that $\left\langle g_k,A_i g_k \right\rangle=0$, for all $i=1,\cdots N$ and $k$, from Lemma \ref{convex}. By Lemma \ref{base}, this concludes that $\psi_1,\cdots,\psi_M$ are distinguishable by LOCC.$\square$ \\\\ Proposition \ref{thm2} can be shown in the same way. 
We have the following lemma: \begin{lem} Let $(A_1,\cdots ,A_N)$ be a set of trace class self-adjoint operators on a Hilbert space $\cal H$ such that $Tr A_i=0$ for each $1\le i\le N$. Suppose that for every orthogonal projection $P$ on $\cal H$ with dimension larger than $N_p$, $C_P(A_1,\cdots,A_N)$ is convex. Then there exists an orthonormal basis $\{g_k\}$ of $\cal H$, such that \[ \left\langle g_k,A_i g_k \right\rangle=0,\quad i=1,\cdots N,\quad\forall k>N_p. \] \end{lem} {\it Proof}\\ The same as the proof of Lem 3.6. We can find a set of orthonormal vectors satisfying {\it Property *}, such that the dimension of its complementary subspace is $N_p$. $\square$\\ Decomposing each $\psi_l$ with respect to $\{Jg_k\}$, we obtain \begin{align} \psi_l=\sum_{k}\xi^l_k\otimes J g_k. \label{er} \end{align} By the argument in the proof of Lemma \ref{base}, (\ref{er}) takes the form of (\ref{decom}) with the orthogonality condition (\ref{prob}). Therefore, for the protocol in the Introduction, the probability that the error occurs is $\sum_{k=1}^{N_p}\Vert\xi^l_k\Vert^2$ when $\psi=\psi_l$. It is bounded from above as follows: \begin{align*} &\sum_{k=1}^{N_p}\Vert\xi^l_k\Vert^2 =\sum_{k=1}^{N_p} \left\Vert\left( 1\otimes\ket{Jg_k}\bra{Jg_k}\right) \psi_l\right\Vert^2 \\ &\le \sup\left\{\sum_{k=1}^{N_p} \left\Vert\left( 1\otimes\ket{z_k}\bra{z_k}\right)\psi_l\right\Vert^2; \;\left\{z_k\right\}_{k=1}^{N_p} :\;\;{\rm orthonormal\; set\; of\;} {\cal H}_B \right\} = \sum_{k=1}^{N_p} p_k^{l}. \end{align*} Here, $p^l_k$ is the $k$-th Schmidt coefficient of $\psi_l$, ordered in the decreasing order. Therefore, $\psi_1,\cdots,\psi_M$ are distinguishable by LOCC with probability $P_d$ such that \[ P_d \ge 1-\underset{1\le l\le M}{\max} \left( \sum_{k=1}^{N_p} \Vert\xi_k^l\Vert^2 \right) \ge 1-\underset{1\le l\le M}{\max} \left(\sum_{k=1}^{N_p}p_k^l\right), \] and we obtain Proposition \ref{thm2}.$\square$ \\\\ {\it Proof of Theorem \ref{fst} and Theorem \ref{snd}} \\\\ Now we apply the known results about joint numerical range to Proposition \ref{thm1}, \ref{thm2} and derive Theorem \ref{fst} and Theorem \ref{snd}. For $N=2$ case, the following Theorem is known \cite{Hal}: \begin{thm} For any bounded self-adjoint operators $T_1,T_2$ on a separable Hilbert space $\cal H$, the set \begin{align*} \left\{ \left( \left\langle z,T_1 z\right\rangle, \left\langle z,T_2 z\right\rangle\right) \in{\mathbb R}^2,\quad z\in {\cal H},\quad \Vert z\Vert=1 \right\} \end{align*} is a convex subset of ${\mathbb R}^2$. \label{top} \end{thm} This is called Toeplitz Hausdorff Theorem. By this Theorem, $C_P(A_1,A_2)$ is a convex subset of ${\mathbb R}^2$ for any projection $P$ on ${\cal H}_B$. Therefore, applying Proposition \ref{thm1}, we obtain Theorem \ref{fst}. The last statement comes from the fact $N\le M(M-1)=2$ for $M=2$. On the other hand, for $N=3$, the next Theorem is known \cite{bind},\cite{fan}. \begin{thm} Let $\cal H$ be a separable Hilbert space with $dim {\cal H}\ge 3$. Then for any self-adjoint operators $T_1,T_2,T_3$ in $\cal H$, the set \begin{align*} \left\{ \left( \left\langle z,T_1 z\right\rangle, \left\langle z,T_2 z\right\rangle, \left\langle z,T_3 z\right\rangle \right) \in{\mathbb R}^3,\quad z\in {\cal H},\quad \Vert z\Vert=1 \right\} \end{align*} is a convex subset of ${\mathbb R}^3$. \label{three} \end{thm} By this Theorem, $C_P(A_1,A_2,A_3)$ is a convex subset of ${\mathbb R}^3$ for any projection $P$ on ${\cal H}_B$ with dimension larger than $2$. 
Therefore, applying Proposition \ref{thm2}, we obtain Theorem \ref{snd}. $\square$ \\\\ \noindent {\bf Acknowledgement.}\\ { This work is supported by Research Fellowships of the Japan Society for the Promotion of Science for Young Scientists. }
{ "attr-fineweb-edu": 1.825195, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUdVo5qdmDNenrz2-F
\section{Introduction} The recent Cosmic Microwave Background (CMB) data is still consistent with the simple $\Lambda$CDM model with a nearly scale-invariant, adiabatic and Gaussian power spectrum which can well be represented by the single-field slow-roll inflation models~\cite{ben,valv,Ade:2015lrj}. The forthcoming cosmological data with even better precision however could reveal the potential deviations from such pure adiabatic perturbations, and it would be worth exploring the possibilities for the non-adiabatic perturbations in existence of correlations among the adiabatic and non-adiabatic modes along with their indications for the early Universe dynamics in the fundamental physics models. In this paper, we study the mixture of the adiabatic mode and cold dark matter (axion) isocurvature mode taking account of their possible cross-correlations~\cite{amen,pol,lang,cro,kur,mack,kadojinn,gon,bucher} through the concrete models based on superstring theory. The recent Planck analysis studied the generally correlated isocurvature perturbations. The robust parameter estimation, without significantly affecting the bounds on the conventional ($\Lambda$CDM) cosmological parameters even with the inclusion of isocurvature modes, was not previously realized due to strong degeneracies among the parameters involving the isocurvature perturbations which the previous CMB data sets suffered from and the Planck data could greatly reduce \cite{Ade:2015lrj}. With such a precise cosmological parameter estimation including the correlated isocurvature perturbations at hand, it would be intriguing to explore the indications of the generally correlated isocurvature perturbations for the early Universe phenomena, and we in this paper aim to study the presumably ubiquitous light degrees of freedom in the early Universe through their isocurvature fluctuations. In superstring theory, the higher-dimensional form fields predict a number of light degrees of freedom represented by the axions in addition to the QCD axion~\cite{Peccei:1977hh}. These axions are associated with the internal cycles of the extra-dimensional manifold. While, at the perturbative level, the axion potential is protected by the gauge symmetry in string theory, the non-perturbative effects can break the continuous gauge symmetry leading to the discrete one and generate the axion potential. It is thus expected that the axion potential is well controlled by the residual discrete symmetry and the mass scale of axions depends on the non-perturbative effects \cite{Blumenhagen:2006ci,Ibanez:2012zz,Baumann}. We, in this paper, focus on the single-field axion inflation models in coexistence of an isocurvature perturbation due to another light axion. Although the axion inflation is often considered as the single-field inflation, the axion potential, in general, has the axionic mixing due to the moduli-mixing gauge kinetic function. As concrete examples, we discuss the natural inflation~\cite{Freese:1990rb} in Sec.~\ref{sec:2} and axion monodromy inflation~\cite{McAllister:2008hb,Silverstein:2008sg} in Sec.~\ref{sec:3} where the cross-correlated isocurvature perturbations can arise due to such axionic mixings. In these illustrative examples, the adiabatic curvature perturbations are dominantly sourced by the axion-inflaton whereas the isocurvature perturbations originate from the fluctuations of the light axion (different from the heavy axion inducing the inflation). 
The mixing of the string axions arises from the non-perturbative effects in the sinusoidal form, and the consequent cross-correlations between the adiabatic and isocurvature modes are studied. We conclude our discussions in Sec.~\ref{sec:con}. \section{Natural inflation with sinusoidal correction} \label{sec:2} The natural inflation is among the simplest axion inflation models ~\cite{Freese:1990rb} which can be constructed in the field theory as well as superstring theory. Although, at the perturbative level, the axion potential is not generated due to the gauge symmetry in string theory, the non-perturbative effects in a hidden gauge sector can generate the axion potential terms. Especially, when the gauginos ($\lambda$) of the hidden gauge group condensate at a certain energy scale~\cite{Ferrara:1982qs}, the superpotential can be generated in the four-dimensional ($4$D) ${\cal N}=1$ supersymmetry, \begin{align} W\simeq \langle \lambda \lambda \rangle \simeq Ae^{-aT}, \end{align} where $A={\cal O}(1)$ and $a=24\pi^2/b_0$ with $b_0$ being the one-loop beta-function coefficient.\footnote{Here and in what follows, we employ the reduced Planck units, $M_{\rm Pl}=2.4\times 10^{18}\,{\rm GeV}=1$.} We consider the scenarios where the size of gauge coupling is determined by the real part of modulus field, $T$, which is typical for heterotic string theory, type I string theory and type II string theory with D-branes along the single cycle (see for reviews, e.g., Refs.~\cite{Blumenhagen:2006ci,Ibanez:2012zz}). By fixing the real part of moduli, for instance through another non-perturbative effect, we can obtain the effective inflaton potential for the imaginary part of modulus (axion), \begin{align} V_{\rm inf}=\Lambda_1^4 \left( 1-{\rm cos}\frac{\phi}{f} \right), \end{align} with $\phi$ and $f$ being the axion-inflaton and its decay constant. The conventional natural inflation model, in view of the recent Planck data, requires the trans-Planckian axion decay constant, $f>5$, even though its construction requires some care because the fundamental axion decay constant obtained after the dimensional reduction is typically much smaller than the Planck scale \cite{Choi:1985je}. The gauge couplings in the visible and hidden sectors, in general, depend on the linear combination of moduli fields through the gauge threshold correction and non-trivial brane configuration. For example, in type II string theory, the D-branes wrap the internal cycle of six extra-dimensional manifold and then the volume of this internal cycle is determined by the linear combination of moduli fields $T_i$ where the number of moduli $T_i$ is determined by the topology of the extra-dimensional manifold ~\cite{Blumenhagen:2006ci,Ibanez:2012zz}. Thus, the gauge coupling on Dp-branes is represented by the linear combination of them, \begin{align} \langle c_iT^i\rangle =\frac{1}{g^2}, \end{align} where $c_{i}$ are constant. Furthermore, if we consider the one-loop corrections for the gauge coupling, the superpotential also depends on the linear combination of moduli fields $T$ and $T^\prime$, \begin{align} W=Ae^{-aT-dT^\prime}, \label{eq:threshold} \end{align} where $A={\cal O}(1)$, $a=24\pi^2/b_0$ and $d=24\pi^2/b_0 \times b/48\pi$ with $b$ being the one-loop beta-function coefficient determined by massive modes \cite{Lust:2003ky}. The axion decay constant for the modulus $T^\prime$ here can be enhanced by the one-loop effect~\cite{Abe:2014pwa,Abe:2014xja}. 
Indeed, there are several scenarios to enhance the axion decay constant based on the moduli-mixing in the gauge kinetic function such as the alignment mechanism~\cite{Kim:2004rp}, N-flation~\cite{Dimopoulos:2005ac}, kinetic mixing~\cite{Bachlechner:2014hsa}, the threshold correction~\cite{Abe:2014pwa,Abe:2014xja} and the flux-induced enhancements~\cite{Hebecker:2015rya}. Since there exist, in general, ubiquitous axion fields in string theory, one can also expect that there are moduli-dependent correction terms in the potential, \begin{align} V_{\rm int}=\Lambda_2^4 \left( 1-{\rm cos}\left(\frac{\phi}{g_1} +\frac{\chi}{g_2}\right) \right), \label{eq:infponat2} \end{align} where $\phi$ and $ \chi$ represent an axion-inflaton and another light axion field. For the notational brevity, we in the following define the parameters \begin{align} \sigma =\frac{\phi}{f},\,\,\, \psi =\frac{\phi}{g_1},\,\,\, \theta =\frac{\chi}{g_2}, \end{align} so that the total potential can be written as \begin{align} V=\Lambda_1^4 (1-\cos\sigma) + \Lambda_2^4 \left( 1- \cos\left(\psi+\theta\right) \right). \label{eq:infponat3} \end{align} We hereafter focus on the scenarios where the adiabatic perturbations are dominantly sourced by the axion-inflaton fluctuation $\delta \phi$ and the additional axion fluctuation $\delta \chi$ leads to the isocurvature perturbations. Before discussing the cosmological perturbations and their indication for the model discrimination, we show an allowed parameter region for the spectral tilt of the adiabatic perturbations and the tensor-to-scalar ratio in both pure adiabatic (ADI) model and generally-correlated ADI + cold dark matter isocurvature (CDI) model. Fig.~\ref{nsvsr} shows that, for the natural inflation with isocurvature perturbations, the inclusion of a cross-correlated isocurvature mode tightens the constraints on the axion decay constant, $5<f<10$, and the e-folding number, $60<N$. On the other hand, the axion monodromy inflation, to be discussed in the next section, except for the quadratic one can be better fitted by the Planck data by including the cross-correlated isocurvature mode. The degeneracies among the parameters involving the correlated isocurvature perturbations result in the shift in the best-fit parameters compared with those in the pure adiabatic model, even though the strong degeneracies such as that between the isocurvature perturbation amplitude and adiabatic perturbation spectral index which WMAP data had greatly suffered from reduced significantly in Planck $TT$ + polarization data \cite{ben,valv}. The constraints on $r$ however turn out not to be significantly affected by the inclusion of the cross-correlated isocurvature modes partly because the Planck data including the polarization already gives sufficiently tight constraints on the isocurvature and tensor modes \cite{Ade:2015lrj}. In the rest of the paper, we for simplicity do not consider the significant tensor contribution and we adopt the Planck likelihood analysis results without including $r$ in the following discussions. \begin{figure \begin{center} \epsfxsize = 0.48\textwidth \psfrag{nRR}[B][B][1][0]{$n_s$} \psfrag{rrr}[B][B][1][0]{$r$} \includegraphics[scale=0.4]{dnsrsept8.eps} \end{center} \caption{$68\%$ and $95\%$ confidence level constraints on the adiabatic spectral index and tensor to scalar ratio from Planck \cite{Ade:2015lrj}. The filled contours are for generally-correlated adiabatic and CDM (axion) isocurvature modes. 
The unfilled dashed contours are for the pure adiabatic model without the isocurvature perturbations.} \label{nsvsr} \end{figure} We now discuss the cosmic perturbations for the axion fields, starting with the brief discussions for the conventional curvature and isocurvature perturbations to set up our notations followed by the exploration on their cross-correlations along with their indication for the string inflation model building \cite{Kadota:2014hpa}. The curvature and isocurvature perturbations in our scenarios are \cite{ewanlyth, Kadota:2014hpa} \begin{align} {\cal R} & = -\frac{H}{\dot\phi_0} \delta \phi \, , \\ {\cal I} & = 2\frac{\Omega_a}{\Omega_m} \frac{\delta \theta}{\theta_0} \, , \end{align} with $\Omega_a$ and $\Omega_m$ being the axion and matter densities with respect to the critical density. The factor $\Omega_a/\Omega_m$ appears here because we are interested in the isocurvature perturbations between the radiation and the non-relativistic matter, and the non-adiabatic fluctuations arise solely from an axion which contributes to the total matter density with the fraction $\Omega_a/\Omega_m$. The dot denotes the time derivative and the subscript $_0$ represents the background field values during the inflation and, in the following, we omit $\delta$ representing the fluctuations for the notational brevity when it is clear from the context. The corresponding power spectra are given by~\cite{ewanlyth} \begin{align} {\cal P}_{\cal R} &=\left( \frac{H}{\dot\phi_0} \right)^2 {\cal P}_\phi = \nonumber\\ & = \left( \frac{H}{2\pi} \right)^2 \left( \frac{H}{\dot\phi_0} \right)^2 \left( \frac{k}{aH} \right)^{3-2\nu_\phi} 2^{2\nu_\phi-3} \left[ \frac{\Gamma(\nu_\phi)}{\Gamma(3/2)} \right]^2, \nonumber\\ {\cal P}_{\cal I} &= \left(\frac{\Omega_a}{\Omega_m} \right)^2 \left( \frac{2}{\theta_0} \right)^2 {\cal P}_\theta = \nonumber\\ & = \left(\frac{\Omega_a}{\Omega_m} \right)^2 \left( \frac{H}{2\pi} \right)^2 \left( \frac{2}{g_1\theta_0} \right)^2 \left( \frac{k}{aH} \right)^{3-2\nu_\theta} 2^{2\nu_\theta-3} \left[ \frac{\Gamma(\nu_\theta)}{\Gamma(3/2)} \right]^2, \label{eq:PRPI} \end{align} in terms of $\nu_{\phi(\theta)}=\sqrt{9/4-m^2_{\phi(\theta)}/H^2}$ with \begin{align} m_{\phi}^2&= \frac{\Lambda_1^4}{f^2}\cos \sigma_0, \nonumber\\ m_{\theta}^2&= \frac{\Lambda_2^4}{g_2^2} \cos (\psi_0+\theta_0), \nonumber\\ H^2&=\frac{V}{3}\simeq \frac{\Lambda_1^4}{6} \left( \frac{\phi_0}{f} \right)^2. 
\end{align} The cross-correlation between the curvature and isocurvature perturbations can be obtained using the in-in formalism~\cite{Weinberg:2005vy}, and the relevant interaction term in the Hamiltonian at the quadratic order, $\Lambda_2^4 \cos (\psi_0+\theta_0) \delta \psi \delta \theta$, leads to the isocurvature cross-correlation power spectrum~\cite{Weinberg:2005vy,Kadota:2014hpa}, \begin{align} {\cal P}_{\cal C} &= - \frac{\pi}{2}\frac{\Lambda_2^4}{g_1 g_2 H^2} \cos (\psi_0 +\theta_0)\Re \left[ i\int_0^\infty \frac{dx}{x} H_{\nu_\phi}^{(2)}(x) H_{\nu_\theta}^{(2)}(x) \right] \sqrt{{\cal P}_{\cal R}{\cal P}_{\cal I}} \nonumber\\ &\sim 4.2 \left( \frac{1}{g_1 g_2} \right) \left( \frac{\Lambda_2}{\Lambda_1} \right)^4 \left( \frac{f}{\phi_0} \right)^2 \cos(\psi_0+\theta_0) \sqrt{{\cal P}_{\cal R}{\cal P}_{\cal I}}, \label{eq:PC} \end{align} where the numerical integral of the Hankel function gives a factor $\sim -0.45$ and the order of the Hankel functions is taken as $\nu_{\phi(\theta)}=\sqrt{9/4-m^2_{\phi(\theta)}/H^2}$.\footnote{This integral can be evaluated, at the leading order, at an arbitrary value of $x$ as long as $e^{-1/\xi} < x < 1$ with $\xi$ being the typical size of the slow-roll parameter \cite{gongsp,ks03}.} We plot the following cross-correlation parameter \begin{align} \beta_C \equiv \frac{{\cal P}_{\cal C}}{\sqrt{{\cal P}_{\cal R}{\cal P}_{\cal I}}} \sim 4.2 \left( \frac{1}{g_1 g_2} \right) \left( \frac{\Lambda_2}{\Lambda_1} \right)^4 \left( \frac{f}{\phi_0} \right)^2 \cos(\psi_0+\theta_0) \simeq 4.2 \left( \frac{1}{g_1g_2} \right) \left( \frac{\Lambda_2^4}{A_S} \right) \cos(\psi_0+\theta_0) \frac{\phi_0^2}{96\pi^2}, \end{align} by varying the axion decay constant $f$ from $1$ to $20$ and the e-folding number $N$ from $50$ to $60$ in Fig.~\ref{crosscnat}. Here, the prefactor of $\phi_0^2$ is set to be of order $0.001$ and the power spectrum of the adiabatic curvature perturbations \begin{align} {\cal P}_{\cal R}=A_Sk^{n_{s}-1}, \label{askn} \end{align} is fixed to be $A_S\simeq H^2/(8\pi^2 \epsilon) \simeq 2.2 \times 10^{-9}$ with $\epsilon\simeq 2/\phi_0^2$ being the slow-roll parameter at the pivot scale $k_{\ast}=0.05\,{\rm Mpc}^{-1}$. This figure also shows the Planck likelihood contours including the polarization data which greatly improve the constraints on the isocurvature perturbations compared with the WMAP results \cite{ben,valv}. The high-$l$ ($l\geq 30$) $TE, EE$ data turn out to drive the isocurvature cross-correlation towards a smaller value and disfavor the negative cross-correlations which would be allowed otherwise with the high-$l$ $TT$ data \cite{Ade:2015lrj}. We find that the coefficient $c$ in $\beta_{\calC}=c\,\phi_0^2$ has to be of order less than $10^{-3}$ to be within $2\sigma$ and the axion decay constant $f$ is constrained to the range between $5$ and $10$. The cross-correlation parameter $\beta_{\calC}$ is constrained to be $-0.1\lesssim \beta_{\calC} \lesssim 0.3$, or, in terms of the parameters in the sinusoidal correction term~(Eq.~(\ref{eq:infponat2})), to be within \begin{align} -0.1 \lesssim 4.2 \left( \frac{1}{g_1g_2} \right) \left( \frac{\Lambda_2^4}{2.2\times 10^{-9}} \right) \cos(\psi_0+\theta_0) \frac{\phi_0^2}{96\pi^2} \lesssim 0.3. \label{eq:betac1nat} \end{align} Moreover, the following conditions are taken into account to justify our calculations: \begin{description} \item[{$\bullet$}] The adiabatic perturbations come from $V_{\rm inf}$ and not from $V_{\rm int}$, that is, $V_{\rm inf} \gg V_{\rm int}$.
\item[{$\bullet$}] The inflaton dynamics is dominated by $\phi$, i.e., $\left|\frac{\partial V_{\rm inf}}{\partial \phi} \right|\gg \left|\frac{\partial V_{\rm int}}{\partial \phi}\right|$. \item[{$\bullet$}] The quantum fluctuations of axions are not over-damped during the inflation, $m_{\theta}^2,m_{\phi}^2 \ll H^2$. \item[{$\bullet$}] The standard slow-roll conditions, $\epsilon\ll 1$ and $|\eta|\ll 1$. \end{description} Some of the above conditions may be redundant depending on the parameter range of interest. In the light of these conditions, the cross-correlation parameter is bounded above by \begin{align} \beta_C \simeq 4.2 \left( \frac{1}{g_1g_2} \right) \left( \frac{\Lambda_2^4}{A_S} \right) \cos(\psi_0+\theta_0) \frac{\phi_0^2}{96\pi^2} \ll 2.1\left( \frac{1}{g_1g_2} \right), \label{eq:betac2nat} \end{align} where $|\Lambda_2^4 \cos(\psi_0+\theta_0)| \ll |V_{\rm inf}|$ is applied. For $g_1, g_2 <{\cal O}(1)$, the constraint of Eq.~(\ref{eq:betac2nat}) for $\beta_{\calC}$ is automatically satisfied if Eq.~(\ref{eq:betac1nat}) is satisfied. The illustrative values of isocurvature parameters are listed in Tab.~\ref{tab:1} by setting the typical values for the parameters in the scalar potential~in Eq. (\ref{eq:infponat3}). Note that, although the axion decay constants are typically of order the grand unification scale~($10^{16}$ GeV)~\cite{Choi:1985je} and hence one may expect $g_1\sim g_2$, the hierarchical values $g_1\ll g_2$ ($g_1\gg g_2$) can well be realized for the axion $\chi$ ($\phi$) by the non-perturbative effects through the (gauge) threshold correction~(Eq. (\ref{eq:threshold})). Next, we estimate the fraction of isocurvature perturbations \begin{align} \beta_{\rm iso}=\frac{{\cal P}_{\cal I}}{{\cal P}_{\cal R}+{\cal P}_{\cal I}} =\frac{ \frac{{\cal P}_{\cal I}}{{\cal P}_{\cal R}}}{ 1+\frac{{\cal P}_{\cal I}}{{\cal P}_{\cal R}}}, \end{align} where the power spectrum of the adiabatic perturbation is fixed as in Eq. (\ref{askn}), whereas the power spectrum of the isocurvature perturbation is given by \begin{align} {\cal P}_{\cal I} \approx \left(\frac{\Omega_a}{\Omega_m} \right)^2 \left( \frac{1}{2\pi} \right)^2 \left( \frac{2}{g_1\theta_0} \right)^2 \frac{\Lambda_1^4}{6} \left( \frac{\phi_0^2}{f^2} \right) = \left(\frac{\Omega_a}{\Omega_m} \right)^2 \left( \frac{1}{g_1\theta_0} \right)^2 \frac{16A_S}{\phi_0^2}. \end{align} Then, the fraction of isocurvature perturbations \begin{align} \frac{{\cal P}_{\cal I}}{ {\cal P}_{\cal R}} \approx 16\left(\frac{\Omega_a}{\Omega_m} \right)^2 \left( \frac{1}{g_1\theta_0} \right)^2 \phi_0^{-2}, \label{eq:fraciso} \end{align} can give a sizable contribution to the cosmological observables as illustrated in Fig.~\ref{crossisonat} where the prefactor of $\phi_0^{-2}$ in Eq.~(\ref{eq:fraciso}) is set to $1$ and $10$ for a varying $f$. A larger prefactor is preferred for a larger isocurvature contributions. The Planck bounds the uncorrelated axion isocurvature mode to $\beta_{\rm iso}\lesssim 0.038$, whereas the inclusion of isocurvature cross-correlation results in the constraint $0.034 \lesssim \beta_{\rm iso}\lesssim 0.28$ at the $95\%$ confidence level \cite{Ade:2015lrj}. Tab.~\ref{tab:1} summarizes the typical numerical values of parameters in the scalar potential~(Eq. (\ref{eq:infponat3})) which can realize a sizable fraction of isocurvature perturbations. 
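As a quick numerical plausibility check of Eq.~(\ref{eq:fraciso}), the short Python sketch below evaluates $\beta_{\rm iso}$ for parameter values of the type listed in Tab.~\ref{tab:1}. It assumes the quadratic slow-roll relation $\phi_0^2\simeq 4N$, consistent with $\epsilon\simeq 2/\phi_0^2$ used above; this relation and the specific input values are illustrative assumptions made only for this sketch.
\begin{verbatim}
# Minimal sketch: isocurvature fraction from Eq. (eq:fraciso).
# Assumption: phi_0^2 ~ 4N (quadratic slow-roll approximation).

def beta_iso(N, g1, theta0, omega_ratio):
    phi0_sq = 4.0 * N                                            # phi_0^2 in Planck units
    ratio = 16.0 * omega_ratio**2 / (g1 * theta0)**2 / phi0_sq   # P_I / P_R
    return ratio / (1.0 + ratio)

# Illustrative values similar to the second row of Tab. 1
print(beta_iso(N=55, g1=1e-2, theta0=2.0, omega_ratio=0.02))     # ~ 0.07
\end{verbatim}
The output, $\beta_{\rm iso}\simeq 0.07$, agrees with the corresponding entry of Tab.~\ref{tab:1} at the quoted precision.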
\begin{table}[htb] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline $f$ & $N$ & $ g_1$ & $ g_2$ & $\Lambda_2^4/\Lambda_1^4$ & $\Omega_a/\Omega_m$ & $\cos(\psi_0+\theta_0)$ & $\theta_0$ & $\beta_{\cal C}$ & $\beta_{\rm iso}$ & $n_{s}$\\ \hline $10$ & $55$ & $10^{-4}$ & $10^{-2}$ & $2\times 10^{-10}$ & $2\times 10^{-4}$ & $1/2$ & $2$ & $2\times 10^{-7}$ & $0.07$ & $0.964$\\ \hline $10$ & $55$ & $10^{-2}$ & $10^{-2}$ & $1\times 10^{-5}$ & $0.02$ & $1/2$ & $2$ & $1\times 10^{-4}$ & $0.07$ & $0.964$\\ \hline $10$ & $55$ & $10^{-2}$ & $1$ & $1\times 10^{-5}$ & $0.02$ & $1/2$ & $2$ & $1\times 10^{-6}$ & $0.07$ & $0.964$\\ \hline $10$ & $55$ & $1$ & $10^{-2}$ & $1\times 10^{-5}$ & $0.02$ & $1/2$ & $2$ & $1\times 10^{-6}$ & $7\times 10^{-6}$ & $0.964$\\ \hline \end{tabular} \caption{The typical numerical values of parameters, the e-folding number ($N$), the spectral index ($n_{s}$), the fraction of isocurvature perturbation ($\beta_{\rm iso}$) and the cross-correlation parameter ($\beta_{\calC}$) for the natural inflation with sinusoidal correction.} \label{tab:1} \end{center} \end{table} \begin{figure}[htb!] \begin{center} \psfrag{nRR}[B][B][1][0]{$n_s$} \psfrag{PRI/Sqrt[PRR*PII]}[B][B][1][0]{$\calP_{\calC}/\sqrt{\calP_\calR\calP_\calI}$} \includegraphics[scale=0.4]{dsept8cosDelnRRnatural.eps} \end{center} \caption{$\calP_{\calC}/\sqrt{\calP_{R}\calP_{I}}$ and the adiabatic spectral index $n_{s}$ for the natural inflation with sinusoidal correction (68 \% and 95 \% CL contours are from Planck \cite{Ade:2015lrj}). $\calP_{C}/\sqrt{\calP_{\calR}\calP_{\calI}} =c\times \phi_0^2$ for $c=10^{-3}, 5\times 10^{-3}$ are shown for varying $N$ and $f$ (the labels are in units of the reduced Planck mass). The anti-correlation cases (for $c=-10^{-3}, -5\times 10^{-3}$) are also shown with the dotted curves.} \label{crosscnat} \end{figure} \begin{figure}[htb!] \begin{center} \psfrag{nRR}[B][B][1][0]{$n_s$} \psfrag{bbb}[B][B][1][0]{$\beta_{iso}$} \includegraphics[scale=0.4]{daug24betaisonRRnatural.eps} \end{center} \caption{$\beta_{\rm iso}\equiv \calP_{\calI}/(\calP_{\calR}+\calP_{\calI})$ and $n_{s}$ for the natural inflation with sinusoidal correction ($68\%$ and $95\%$ CL contours are from Planck \cite{Ade:2015lrj}). $(\beta_{\rm iso}, n_s)$ are shown for $\calP_{\calI}/\calP_{\calR}=c/\phi_0^2$ with $c=1,10$ for varying $f$ and $N$. } \label{crossisonat} \end{figure} Before concluding this section to move on to the discussion on the monodromy inflation, let us comment on the microscopic description about the correction term in the inflaton potential given by Eq. (\ref{eq:infponat2}). Such a potential can be derived from the following K\"ahler and superpotential, \begin{align} K=-2\ln (T_1+\bar{T}_1) -\ln (T_2+\bar{T}_2), \nonumber\\ W=w_0 +Ae^{-b_1T_1} +Be^{-c_1T_1-c_2T_2}, \label{eq:KW} \end{align} where $w_0$ is the flux-induced constant term induced by the Gukov-Vafa-Witten superpotential $W_{\rm flux}=\int G \wedge \Omega$, where $G$ is the linear combination of Ramond-Ramond and Neveu-Schwarz three-form fluxes and $\Omega$ is the period vector in the framework of type II superstring theory. The second and third terms in the superpotential~(\ref{eq:KW}) denote the non-perturbative effects, such as the gaugino condensation terms, D-brane instanton and world-sheet instanton effects. 
Let us define the moduli as \begin{align} T_1=t_1+i a_1,\nonumber\\ T_2=t_2+i a_2, \end{align} and assume that the real parts of the moduli $T_{1,2}$ are stabilized at their minima and are sufficiently heavier than the remaining imaginary parts of $T_{1,2}$. Then, the four-dimensional scalar potential based on $4$D ${\cal N}=1$ supergravity is \begin{align} V=e^K(K^{I\bar{J}} D_I W D_{\bar J}\bar{W} -3|W|^2), \end{align} where $K^{I\bar{J}}$ is the inverse of the K\"ahler metric $K_{I\bar{J}}=\partial^2 K/\partial \Phi^I\partial\bar{\Phi}^{\bar{J}}$, $D_{I}W=W_{I}+K_{I}W$, with $W_{I}=\partial W/\partial \Phi^I$ and $K_{I}=\partial K/\partial \Phi^I$, for $\Phi^I=T_1, T_2$. We can obtain the axion potential for $a_{1,2}$ by further assuming that some uplifting sector lifts up the scalar potential from the AdS vacuum to the dS one with a very small vacuum energy, \begin{align} V\simeq \Lambda +\Lambda_1 {\rm cos}\left( \frac{\phi}{f}\right) +\Lambda_2 {\rm cos}\left( \frac{\phi}{g_1}+\frac{\chi}{g_2}\right), \end{align} where $\Lambda \simeq -\Lambda_1-\Lambda_2 \simeq -\Lambda_1$ is a constant and $\Lambda_{1,2}$ depend on the vacuum expectation values of ${\rm Re}\,T_{1,2}$. The fields $\phi$ and $\chi$ are the canonically normalized axions. The kinetic terms of $a_{1,2}$ are extracted from the second derivatives of the K\"ahler potential with respect to the moduli, \begin{align} K_{I\bar{J}}\partial \Phi^I\partial \bar{\Phi}^J &=K_{T_1\bar{T}_1} \partial T_1 \partial {\bar T}_1 +K_{T_2\bar{T}_2} \partial T_2 \partial {\bar T}_2. \end{align} As a result, the axion decay constants $f, g_{1,2}$ of the canonically normalized axions $\phi$ and $\chi$ are given by \begin{align} f&=\frac{\sqrt{2K_{T_1\bar{T}_1}}}{b_1} =\frac{2}{b_1(T_1+\bar{T}_1)}, \nonumber\\ g_1&=\frac{\sqrt{2K_{T_1\bar{T}_1}}}{c_1} =\frac{2}{c_1(T_1+\bar{T}_1)}, \nonumber\\ g_2&=\frac{\sqrt{2K_{T_2\bar{T}_2}}}{c_2} =\frac{\sqrt{2}}{c_2(T_2+\bar{T}_2)}. \end{align} \section{Axion monodromy inflation with sinusoidal correction} \label{sec:3} We now discuss the axion monodromy inflation, which offers another popular axion inflation scenario in string theory. Axion monodromy inflation is a successful large-field inflation model in which the inflaton can move around its configuration space over many cycles, and the field range of the inflaton can thus be much larger than its fundamental period determined by the axion decay constant. The Lagrangian for the axion monodromy inflation is represented by \begin{align} {\cal L}=-\frac{1}{2}(\partial \phi)^2 -\mu_1^{4-p}\phi^p, \label{eq:mono} \end{align} where $\phi$ is the axion originating from the higher-dimensional form fields, $\mu_1$ represents the energy scale and $p$ is a fractional number which depends on the model in string theory~\cite{Baumann}. Let us consider the spacetime-filling D$5$-brane in type IIB string theory~\cite{McAllister:2008hb}. The D$5$-brane wraps a certain internal two-cycle $\Sigma_2$ in the $6$D compact space in addition to the $4$D spacetime and its {\it Dirac-Born-Infeld} action is given by \begin{align} S_{D5}=\frac{1}{(2\pi)^5 g_s (\alpha^\prime)^3} \int d^6 \sigma \sqrt{-{\rm det} (G_{ab} +B_{ab}) }, \end{align} where $g_s$ is the string coupling, $\alpha^\prime$ is the Regge slope, $G_{ab}$, $a,b=0,1,2,3,4,5$, is the pullback of the metric of the target space, and $B_{ab}$ is the Kalb-Ramond field whose extra-dimensional component corresponds to the axion $b=\int_{\Sigma_2} B_2$, where $B_2$ is the Kalb-Ramond two-form.
We here do not consider the magnetic flux background. After carrying out the dimensional reduction, the axion potential can be extracted as \begin{align} V_{\rm eff}\simeq \frac{{\cal T}}{(2\pi)^5 g_s (\alpha^\prime)^2} \sqrt{l^4 +b^2}, \end{align} where ${\cal T}$ and $l$ are some warp factors and the volume of two-cycle $\Sigma_2$ in string units ($\alpha^\prime=1$). For a large field value of the inflaton $b\gg l^2$, the potential reduces to a linear type, \begin{align} V_{\rm eff}\simeq \frac{{\cal T}}{(2\pi)^5 g_s (\alpha^\prime)^2} b. \end{align} Then, the relevant Lagrangian of the inflaton is given by \begin{align} {\cal L}=-\frac{1}{2}(\partial \phi)^2 -\mu_1^{3}\phi, \end{align} where $\mu_1^{3} =\frac{{\cal T}}{f (2\pi)^5 g_s (\alpha^\prime)^2}$ with $f$ being the decay constant of the axion $\phi=b$. Furthermore, for the D$4$-brane in a nilmanifold (twisted torus) on type IIA string theory, the axion potential has the form of Eq. (\ref{eq:mono}) with $p=2/3$~\cite{Silverstein:2008sg}. When we consider the seven-branes~\cite{Palti:2014kza} or a four-form field strength~\cite{Kaloper:2008fb}, the axion monodromy inflation is that with $p=2$. The other types of axion monodromy inflation with $p=4/3,3$ can also be constructed by a coupling between NS-NS two-form and the Ramond-Ramond field strength~\cite{McAllister:2014mpa}. As mentioned for the natural inflation, the axion, in general, can receive the non-perturbative effects associated with the gaugino condensation, D-brane instanton and world-sheet instanton, and the scalar potential receives the moduli-dependent correction including the mixing with another light axion $\chi$, \begin{align} V= \mu_1^{4-p}\phi^p +\mu_2^4 {\rm cos}\left( \frac{\phi}{g_1}+\frac{\chi}{g_2}\right), \label{eq:mono2} \end{align} where $g_2$ denotes the decay constant of $\chi$. We here assume that the moduli except for the relevant axions under our discussion are fixed at their minimum and decoupled from our setup. For the notational brevity, we in the following define the parameters \begin{align} \psi =\frac{\phi}{g_1},\,\,\, \theta =\frac{\chi}{g_2}, \end{align} so that the total potential can be written as \begin{align} V= \mu_1^{4-p}\phi^p + \mu_2^4 \cos\left(\psi+\theta\right). \end{align} Analogously to the natural inflation discussed in the last section, the curvature and isocurvature perturbations in our monodromy inflation scenario read \begin{align} {\cal P}_{\cal R} & =A_Sk^{n_{s}-1}, \nonumber \\ {\cal P}_{\cal I} & \approx \left(\frac{\Omega_a}{\Omega_m} \right)^2 \left( \frac{H}{2\pi} \right)^2 \left( \frac{2}{g_1\theta_0} \right)^2 \simeq \left(\frac{\Omega_a}{\Omega_m} \right)^2 \left( \frac{2}{g_1\theta_0} \right)^2 A_S \left( \frac{p}{\phi_0} \right)^2, \label{eq:PRPImono} \end{align} with \begin{align} H^2&=\frac{\mu_1^{4-p}\phi_0^p}{3} =4\pi^2 A_S \left( \frac{p}{\phi_0} \right)^2, \end{align} replacing $\mu_1$ by $A_S$ through the CMB normalization, $\mu_1^{4-p}=12\pi^2p^2 A_S\phi_0^{-p-2}$, where $p=2$ corresponds to the natural inflation case. The cross-correlation power spectrum then become \begin{align} {\cal P}_{\cal C} &\sim 2.1 \left( \frac{1}{g_1 g_2} \right) \left( \frac{\mu_2}{\mu_1} \right)^4 \left( \frac{\mu_1}{\phi_0} \right)^p \cos(\psi_0+\theta_0) \sqrt{{\cal P}_{\cal R}{\cal P}_{\cal I}} \nonumber\\ &= 2.1 \left( \frac{1}{g_1 g_2} \right) \left( \frac{\mu_2^4}{12\pi^2 p^2 A_S} \right) \phi_0^2 \cos(\psi_0+\theta_0) \sqrt{{\cal P}_{\cal R}{\cal P}_{\cal I}}. 
\end{align} We plot the cross-correlation parameter \begin{align} \beta_C =\frac{{\cal P}_{\cal C}}{\sqrt{{\cal P}_{\cal R}{\cal P}_{\cal I}}} &= 2.1 \left( \frac{1}{g_1 g_2} \right) \left( \frac{\mu_2^4}{12\pi^2 A_S} \right) \cos(\psi_0+\theta_0) \left( \frac{\phi_0}{p}\right)^2, \end{align} by varying the index $p$ and the e-folding number $N$ in Fig.~\ref{crosscmono}, where the prefactor of $\phi_0^2/p^2$ is set to $\pm 0.005$ for concreteness. $\beta_{\cal C}$ increases for a larger $p$ because the initial field value of the axion-inflaton increases for a larger $p$. By considering the consistency conditions spelled out below Eq. (\ref{eq:betac1nat}) in the last section for the validity of our calculations, the cross-correlation is bounded above by \begin{align} \beta_C &\simeq 2.1 \left( \frac{1}{g_1 g_2} \right) \left( \frac{\mu_2^4}{12\pi^2 p^2 A_S} \right) \phi_0^2 \cos(\psi_0+\theta_0) \nonumber\\ &\ll 2.1\left( \frac{1}{g_1g_2} \right). \label{eq:betac2} \end{align} Tab.~\ref{tab:2} lists some illustrative values for the isocurvature perturbation parameters by setting the typical values for parameter sets in the scalar potential~(\ref{eq:mono2}). We next estimate the fraction of isocurvature perturbations \begin{align} \beta_{\rm iso}=\frac{{\cal P}_{\cal I}}{{\cal P}_{\cal R}+{\cal P}_{\cal I}} =\frac{ \frac{{\cal P}_{\cal I}}{{\cal P}_{\cal R}}}{ 1+\frac{{\cal P}_{\cal I}}{{\cal P}_{\cal R}}}, \end{align} where the power spectrum of adiabatic perturbations is fixed to be $A_S=2.2 \times 10^{-9}$ at the pivot scale $k_{\ast}=0.05\,{\rm Mpc}^{-1}$, whereas the ratio of ${\cal P}_{\cal I}$ to ${\cal P}_{\cal R}$ is given by \begin{align} \frac{{\cal P}_{\cal I}}{{\cal P}_{\cal R}} \approx \left(\frac{\Omega_a}{\Omega_m} \right)^2 \left( \frac{2}{g_1\theta_0} \right)^2 \left( \frac{p}{\phi_0} \right)^2. \end{align} Fig.~\ref{crossisomono} plots $\beta_{\rm iso}$ with $p=2/3,1,4/3,2$ as a function of $\phi_0$ by setting the parameters as \begin{align} \frac{{\cal P}_{\cal I}}{{\cal P}_{\cal R}} =c \times \left( \frac{p}{\phi_0} \right)^2, \end{align} with $c=1,10,50$, and we find that, as expected, the isocurvature contribution increases for a larger $p$. The Planck data hence favor sizable generally correlated isocurvature perturbations for the axion monodromy inflation with sinusoidal correction. Tab.~\ref{tab:2} exemplifies the parameters which can realize a sizable fraction of isocurvature perturbations. Figs.~\ref{nsvsr},~\ref{crosscmono} and~\ref{crossisomono} hence demonstrate that, for the axion monodromy inflation with $p=1,2/3$ including the sinusoidal correction, there is a preference for the existence of cross-correlated isocurvature modes in the currently available CMB data.
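A similar back-of-the-envelope evaluation can be done for the monomial potential. The Python sketch below computes $\beta_{\rm iso}$ and $n_s$ using the leading-order slow-roll relations $\phi_0^2\simeq 2pN$ and $n_s\simeq 1-(p+2)/(2N)$; these relations and the parameter choices, taken to mimic Tab.~\ref{tab:2}, are illustrative assumptions rather than part of the analysis above.
\begin{verbatim}
# Minimal sketch for V ~ mu_1^{4-p} phi^p with a light spectator axion.
# Assumptions: phi_0^2 ~ 2 p N and n_s ~ 1 - (p+2)/(2N) (leading slow roll).

def monodromy_obs(p, N=55, g1=1e-2, theta0=2.0, omega_ratio=0.03):
    phi0_sq = 2.0 * p * N
    ratio = (2.0 * omega_ratio / (g1 * theta0))**2 * p**2 / phi0_sq  # P_I / P_R
    beta_iso = ratio / (1.0 + ratio)
    n_s = 1.0 - (p + 2.0) / (2.0 * N)
    return beta_iso, n_s

for p in (2.0, 4.0/3.0, 1.0, 2.0/3.0):
    print(p, monodromy_obs(p))
# beta_iso ~ 0.14, 0.10, 0.08, 0.05 and n_s ~ 0.964, 0.970, 0.973, 0.976
\end{verbatim}
The resulting values track the $\beta_{\rm iso}$ and $n_{s}$ columns of Tab.~\ref{tab:2}.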
\begin{table}[htb] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline $p$ & $N$ & $g_1$ & $g_2$ & $\mu_2^{4-p}/H^2$ & $\Omega_a/\Omega_m$ & $\cos(\psi_0+\theta_0)$ & $\theta_0$ & $\beta_{\cal C}$ & $\beta_{\rm iso}$ & $n_{s}$\\ \hline $2$ & $55$ & $10^{-2}$ & $10^{-2}$ & $6\times 10^{-7}$ & $0.03$ & $1/2$ & $2$ & $0.002$ & $0.14$ & $0.964$\\ \hline $4/3$ & $55$ & $10^{-2}$ & $10^{-2}$ & $3\times 10^{-7}$ & $0.03$ & $1/2$ & $2$ & $0.001$ & $0.1$ & $0.97$\\ \hline $1$ & $55$ & $10^{-2}$ & $10^{-2}$ & $4\times 10^{-7}$ & $0.03$ & $1/2$ & $2$ & $0.001$ & $0.08$ & $0.973$\\ \hline $2/3$ & $55$ & $10^{-2}$ & $10^{-2}$ & $4\times 10^{-7}$ & $0.03$ & $1/2$ & $2$ & $0.001$ & $0.05$ & $0.976$\\ \hline \end{tabular} \caption{The typical numerical values for the axion monodromy inflation with sinusoidal correction.} \label{tab:2} \end{center} \end{table} \begin{figure}[htb!] \begin{center} \psfrag{nRR}[B][B][1][0]{$n_s$} \psfrag{PRI/Sqrt[PRR*PII]}[B][B][1][0]{$\calP_{\calC}/\sqrt{\calP_\calR\calP_\calI}$} \includegraphics[scale=0.4]{baug24cosDelnRRmonodromy.eps} \end{center} \caption{ $\calP_{\calC}/\sqrt{\calP_{\calR}\calP_{\calI}}$ and $n_{s}$ for the axion monodromy inflation ($V_{\rm inf}=\mu_1^{4-p}\phi^p$) with sinusoidal correction (68 \% and 95 \% CL contours are from Planck \cite{Ade:2015lrj}). $\calP_{\calC}/\sqrt{\calP_{\calR}\calP_{\calI}}= c \times \phi_0^2/p^2$ for $c=0.005$ is shown for varying the e-folding number $N$. The anti-correlation cases (for $c=-0.005$) are also shown with the dashed lines.} \label{crosscmono} \end{figure} \begin{figure}[htb!] \begin{center} \psfrag{nRR}[B][B][1][0]{$n_s$} \psfrag{bbb}[B][B][1][0]{$\beta_{iso}$} \includegraphics[scale=0.4]{daug24betaisonRRmonodromy.eps} \end{center} \caption{$\beta_{\rm iso}\equiv \calP_{\calI}/(\calP_{\calR}+\calP_{\calI})$ and $n_{s}$ for the axion monodromy inflation ($V_{\rm inf}=\mu_1^{4-p}\phi^p$) with sinusoidal correction (68 \% and 95 \% CL contours from Planck \cite{Ade:2015lrj}). ($\beta_{\rm iso}, n_s$) are shown for $\calP_{\calI}/\calP_{\calR}=c\times p^2/\phi_0^2$ with $c=1,5,50$.} \label{crossisomono} \end{figure} \clearpage \section{Conclusion} \label{sec:con} Moduli-dependent sinusoidal corrections to the natural inflation and axion monodromy inflation potentials can generically appear through non-perturbative effects. We demonstrated that probing the precise nature of isocurvature fluctuations can help us understand the nature of fundamental physics, using these popular inflation models as concrete examples. In this paper, we focused on the scenarios where the heavy axion induces the adiabatic perturbations while another light axion sources the isocurvature perturbations, with their cross-correlations taken into account. Sec.~\ref{sec:2} demonstrated that the cross-correlated isocurvature mode gives even tighter constraints on the decay constant of the axion-inflaton and the e-folding number for natural inflation. While Sec.~\ref{sec:2} showed that the cross-correlated isocurvature perturbations are not favored by Planck for the natural inflation, Sec.~\ref{sec:3} showed that there is a preference for the existence of a cross-correlated isocurvature mode for the axion monodromy inflation.
We also mention that, when, in contrast to our setup, the sinusoidal corrections are not suppressed enough, the scalar power spectrum could possess a modulating behaviour for the natural inflation~\cite{Abe:2014xja,Czerny:2014wua} and the axion monodromy inflation~\cite{Flauger:2009ab,Kobayashi:2010pz,Kobayashi:2014ooa,Higaki:2014sja}. Such an additional feature in the cosmological observables would also be of great interest to discriminate among the possible inflation models. We illustrated our findings through the simple models where a single axion is added besides the axion-inflaton, but, in general, there could appear multiple light axions $\chi_i$ in addition to the axion-inflaton $\phi$. They can then have the mixing terms \begin{eqnarray} \sum_j \Lambda^4_j \left( 1- \cos \left( \frac{\phi}{g} +\sum_i \frac{\chi_i}{g_i^j} \right) \right), \end{eqnarray} and an analysis analogous to what has been done in this paper can be performed for such multiple axion cases too. Even though Planck, in particular the addition of the polarization data, can significantly tighten the constraints on the isocurvature modes, we should be careful about the potential systematics in the current Planck data. For instance, it was pointed out that the apparent low power in the $TE$ spectrum could result in a preference for the positive cross-correlation, and this could cause an over-constraint on the isocurvature component if such a power spectrum feature is due to unidentified systematics~\cite{Ade:2015xua}. While the current polarization data at hand are not yet robust to the systematics on large scales, the forthcoming polarization data with a better handle on the systematics would certainly be able to probe the nature of isocurvature perturbations more precisely and consequently explore the properties of ubiquitous light degrees of freedom in the early Universe. \subsection*{Acknowledgements} We thank Jinn-ouk Gong for useful discussions and the hospitality of CTPU where the collaboration was initiated. K.~K. was supported by Institute for Basic Science (IBS-R018-D1) and thanks the Galileo Galilei Institute for Theoretical Physics for hospitality. T.~K. was supported in part by the Grant-in-Aid for Scientific Research No.~25400252 and No.~26247042 from the Ministry of Education, Culture, Sports, Science and Technology (MEXT) in Japan. H.~O. was supported in part by a Grant-in-Aid for JSPS Fellows No. 26-7296.
{ "attr-fineweb-edu": 1.177734, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUdY05qWTA9fx_AXqM
\section{Introduction and statement of the main results} Let $(M,\partial M)$ be a smooth compact manifold with boundary, and let $g\in C^k(M)$ be a Riemannian metric on it. We can always assume that $(M,\partial M)$ is equipped with a real analytic atlas, while $\partial M$ and $g$ may or may not be analytic. We define the geodesic X-ray transform $I$ of symmetric 2-tensor fields by \be{I_G} I f(\gamma) = \int_0^{l_\gamma} \langle f(\gamma(t)), \dot \gamma^2(t) \rangle \,\d t, \end{equation} where $[0,l_\gamma]\ni t\mapsto \gamma(t)$ is any geodesic with endpoints on $\partial M$ parameterized by its arc-length. Above, $\langle f, \theta^2\rangle$ is the action of $f$ on the vector $\theta$, which in local coordinates is given by $f_{ij} \theta^i \theta^j$. The purpose of this work is to study the injectivity, up to potential fields, and stability estimates for $I$ restricted to certain subsets $\Gamma$ (that we call $I_\Gamma$), and for manifolds with possible conjugate points. We require however that the geodesics in $\Gamma$ do not have conjugate points. We also require that $\Gamma$ is an open set of geodesics such that the collection of their conormal bundles covers $T^*M$. This guarantees that $I_\Gamma$ resolves the singularities. The main results are injectivity up to a potential field and stability for generic metrics, and in particular for real analytic ones. We are motivated here by the boundary rigidity problem: to recover $g$, up to an isometry leaving $\partial M$ fixed, from knowledge of the boundary distance function $\rho(x,y)$ for a subset of pairs $(x,y)\in \partial M\times \partial M$, see e.g., \cite{M,Sh, CDS, SU-rig,PU}. In the presence of conjugate points, one should study instead the lens rigidity problem: the recovery of $g$ from its scattering relation restricted to a subset. Then $I_\Gamma$ is the linearization of those problems for an appropriate $\Gamma$. Since we want to trace the dependence of $I_\Gamma$ on perturbations of the metric, it is more convenient to work with open $\Gamma$'s that have dimension larger than $n$, if $n\ge3$, making the linear inverse problem formally overdetermined. One can use the same method to study restrictions of $I$ to $n$-dimensional subvarieties, but this is beyond the scope of this work. Any symmetric 2-tensor field $f$ can be written as an orthogonal sum of a \textit{solenoidal} part $f^s$ and a \textit{potential} one $dv$, where $v=0$ on $\partial M$, and $d$ stands for the symmetric differential of the 1-form $v$, see Section~\ref{sec_prel}. Then $I(dv)(\gamma)=0$ for any geodesic $\gamma$ with endpoints on $\partial M$. We say that $I_\Gamma$ is \textit{s-injective} if $I_\Gamma f=0$ implies $f=dv$ with $v=0$ on $\partial M$, or, equivalently, $f=f^s$. This problem has been studied before for \textit{simple} manifolds with boundary, i.e., under the assumption that $\partial M$ is strictly convex and there are no conjugate points in $M$ (then $M$ is diffeomorphic to a ball). The book \cite{Sh} contains the main results up to 1994 on the integral geometry problem considered in this paper. Some recent results include \cite{Sh-sib}, \cite{Ch}, \cite{SU-Duke}, \cite{D}, \cite{Pe}, \cite{SSU}, \cite{SU}. In the two dimensional case, following the method used in \cite{PU} to solve the boundary rigidity problem for simple 2D manifolds, injectivity of the solenoidal part of the tensor field of order two was proven in \cite{Sh-2d}.
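The fact that potential fields $dv$ with $v=0$ on $\partial M$ are invisible to $I$ can be checked numerically in the simplest possible setting. The short Python sketch below takes the Euclidean unit disk, where the geodesics are straight chords, and a particular 1-form vanishing on the boundary; both choices are purely illustrative assumptions made for this sketch. It integrates $\langle dv(\gamma),\dot\gamma^2\rangle$ along a chord and returns zero up to discretization error.
\begin{verbatim}
import numpy as np

# Euclidean unit disk: geodesics are straight chords. For a 1-form v
# vanishing on the boundary, the X-ray transform of f = dv (the
# symmetrized differential) along any chord should vanish.

def v(x, y):
    bump = 1.0 - x**2 - y**2          # = 0 on the unit circle
    return np.array([bump * np.sin(3.0 * y), bump * np.cos(2.0 * x)])

def dv(x, y, h=1e-6):
    # f_ij = (d_i v_j + d_j v_i) / 2 via central differences
    J = np.zeros((2, 2))
    J[0] = (v(x + h, y) - v(x - h, y)) / (2.0 * h)
    J[1] = (v(x, y + h) - v(x, y - h)) / (2.0 * h)
    return 0.5 * (J + J.T)

def xray(f, a, b, n=4000):
    # trapezoid rule for f_ij(gamma(t)) gammadot^i gammadot^j along a -> b
    a, b = np.asarray(a, float), np.asarray(b, float)
    L = np.linalg.norm(b - a)
    tdot = (b - a) / L                # unit-speed direction
    ts = np.linspace(0.0, L, n)
    dt = ts[1] - ts[0]
    vals = [tdot @ f(*(a + t * tdot)) @ tdot for t in ts]
    return float((np.sum(vals) - 0.5 * (vals[0] + vals[-1])) * dt)

a = (np.cos(0.3), np.sin(0.3))        # endpoints on the boundary circle
b = (np.cos(2.5), np.sin(2.5))
print(xray(dv, a, b))                 # ~ 0, as expected for a potential field
\end{verbatim}
Any other chord, and any other 1-form vanishing on the boundary, gives the same null result, in agreement with the discussion above.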
In \cite{SU-rig}, we considered $I$ on all geodesics and proved that the set of simple metrics on a fixed manifold for which $I$ is s-injective is generic in $C^k(M)$, $k\gg1$. Previous results include s-injectivity for simple manifolds with curvature satisfying some explicit upper bounds \cite{Sh,Sh-sib,Pe}. A recent result by Dairbekov \cite{D} proves s-injectivity for non-trapping manifolds (not-necessarily convex) satisfying similar bounds, that in particular prevent the existence of conjugate points. Fix another compact manifold $M_1$ with boundary such that $M_1^\text{\rm int}\supset M$, where $M_1^\text{\rm int}$ stands for the interior of $M_1$. Such a manifold is easy to construct in local charts, then glued together. \begin{definition} \label{def_ms} We say that the $C^k(M)$ (or analytic) metric $g$ on $M$ is \textbf{regular}, if $g$ has a $C^k$ (or analytic, respectively) extension on $M_1$, such that for any $(x,\xi)\in T^*M$ there exists $\theta\in T_xM\setminus 0$ with $\langle \xi, \theta\rangle =0$ such that there is a geodesic segment $\gamma_{x,\theta}$ through $(x,\theta)$ such that (a) the endpoints of $\gamma_{x,\theta}$ are in $M^\text{\rm int}_1\setminus M$. (b) there are no conjugate points on $\gamma_{x,\theta}$. \noindent Any geodesic satisfying (a), (b) is called a \textbf{simple} geodesic. \end{definition} Note that we allow the geodesics in $\Gamma$ to self-intersect. Since we do not assume that $M$ is convex, given $(x,\theta)$ there might be two or more geodesic segments $\gamma_j$ issued from $(x,\theta)$ such that $\gamma_j\cap M$ have different numbers of connected components. Some of them might be simple, others might be not. For example for a kidney-shaped domain and a fixed $(x,\theta)$ we may have such segments so that the intersection with $M$ has only one, or two connected components. Depending on which point in $T^*M$ we target to recover the singularities, we may need the first, or the second extension. So simple geodesic segments through some $x$ (that we call simple geodesics through $x$) are uniquely determined by an initial point $x$ and a direction $\theta$ and its endpoints. In case of simple manifolds, the endpoints (of the only connected component in $M$, unless the geodesics does not intersect $M$) are not needed, they are a function of $(x,\theta)$. Another way to determine a simple geodesic is by parametrizing it with $(x,\eta)\in T(M^\text{\rm int}_1 \setminus M)$, such that $\exp_x{\eta}\in M^\text{\rm int}_1\setminus M$ then \be{01} \gamma_{x,\eta} = \left\{ \exp_x(t\eta), 0\le t\le1\right\}. \end{equation} This parametrization induces a topology on the set $\Gamma$ of simple geodesics through points of $M^\text{\rm int}_1$. \begin{definition} \label{def_complete} The set $\Gamma$ of geodesics is called \textbf{complete}, if (a) $\forall (x,\xi)\in T^*M$ there exists a simple geodesic $\gamma\in \Gamma$ through $x$ such that $\dot\gamma$ is normal to $\xi$ at $x$. (b) $\Gamma$ is open. \end{definition} In other words, a regular metric $g$ is a metric for which a complete set of geodesics exists. Another way to express (a) is to say that \be{N*} N^*\Gamma := \left\{N^*\gamma;\; \gamma\in \Gamma\right\} \supset T^*M, \end{equation} where $N^*\gamma$ stands for the conormal bundle of $\gamma$. We always assume that all tensor fields defined in $M$ are extended as $0$ to $M_1\setminus M$. 
Notice that $If$ does not change if we replace $M$ by another manifold $M_{1/2}$ close enough to $M$ such that $M\subset M_{1/2}\subset M_1$ but keep $f$ supported in $M$. Therefore, assuming that $M$ has an analytic structure as before, we can always extend $M$ a bit to make the boundary analytic and this would keep $(M,\partial M,g)$ regular. Then s-injectivity in the extended $M$ would imply the same in the original $M$, see \cite[Prop.~4.3]{SU-rig}. So from now on, we will assume that $(M,\partial M)$ is analytic but $g$ does not need to be analytic. To define correctly a norm in $C^K(M)$, respectively $C^k(M_1)$, we fix a finite analytic atlas. The motivation behind Definitions~\ref{def_ms},~\ref{def_complete} is the following: if $g$ is regular, and $\Gamma$ is any complete set of geodesics, we will show that $I_\Gamma f=0$ implies that $f^s\in C^l(M)$, where $l=l(k)\to\infty$, as $k\to\infty$, in other words, the so restricted X-ray transform resolves the singularities. The condition of $g$ being regular is an open one for $g\in C^k(M)$, i.e., it defines an open set. Any simple metric on $M$ is regular but the class of regular metrics is substantially larger if $\dim M\ge3$ and allows manifolds not necessarily diffeomorphic to a ball. For regular metrics on $M$, we do not impose convexity assumptions on the boundary; conjugate points are allowed as far as the metric is regular; $M$ does not need to be non-trapping. In two dimensions, a regular metric can not have conjugate points in $M$ but the class is still larger than that of simple metrics because we do not require strong convexity of $\partial M$. \medskip \paragraph{\bf Example 1.} To construct a manifold with a regular metric $g$ that has conjugate points, let us start with a manifold of dimension at least three with at least one pair of conjugate points $u$ and $v$ on a geodesic $[a,b]\ni t\mapsto \gamma(t)$. We assume that $\gamma$ is non-selfintersecting. Then we will construct $M$ as a tubular neighborhood of $\gamma$. For any $x_0\in\gamma$, define $S_{x_0} = \exp_{x_0}\{v;\; \langle v, \dot\gamma(x_0)\rangle=0,\; |v|\le\varepsilon \}$, and $M := \cup_{x_0\in\gamma} S_{x_0}$ with $\varepsilon\ll1$. Then there are no conjugate points along the geodesics that can be loosely described as those ``almost perpendicular'' to $\gamma$ but not necessarily intersecting $\gamma$; and the union of their conormal bundles covers $T^*M$. More precisely, fix $x\in M$, then $x\in S_{x_0}$ for some $x_0\in\gamma$. Let $0\not=\xi\in T^*_xM$. Then there exists $0\not=v\in T_xM$ that is both tangent to $S_{x_0}$ and normal to $\xi$. The geodesic through $(x,v)$ is then a simple one for $\varepsilon\ll1$, and the latter can be chosen in a uniform way independent of $x$. To obtain a smooth boundary, one can perturb $M$ so that the new manifold is still regular. \medskip \paragraph{\bf Example 2.} This is similar to the example above but we consider a neighborhood of a periodic trajectory. Let $M =\left\{(x^1)^2+(x^2)^2\le 1\right\}\times S^1$ be the interior of the torus in ${\bf R}^3$, with the flat metric $(dx^1)^2+(dx^2)^2+d\theta^2$, where $\theta$ is the natural coordinate on $S^1$ with period $2\pi$. All geodesics perpendicular to $\theta=\mbox{const.}$ are periodic. All geodesics perpendicular to them have lengths not exceeding $2$ and their conormal bundles cover the entire $T^*M$ (to cover the boundary points, we do need to extend the geodesics in a neighborhood of $M$). 
Then $M$ is a regular manifold that is trapping, and one can easily show that a small enough perturbation of $M$ is also regular, and may still be trapping. \medskip The examples above are partial cases of a more general one. Let $(M',\partial M')$ be a simple compact Riemannian manifold with boundary with $\dim M'\ge2$, and let $M''$ be a Riemannian compact manifold with or without boundary. Let $M$ be a small enough perturbation of $M'\times M''$. Then $M$ is regular. Let $g$ be a fixed regular metric on $M$. The property of $\gamma$ being simple is stable under small perturbations. The parametrization by $(x,\eta)$ as in \r{01} clearly has two more dimensions that what is needed to determine uniquely $\gamma|_M$. Indeed, a parallel transport of $(x,\eta)$ along $\gamma_{x,\eta}$, close enough to $x$, will not change $\gamma|_M$, similarly, we can replace $\eta$ by $(1+\varepsilon)\eta$, $|\varepsilon|\ll1$. We assume throughout this paper that $M$ satisfies the following. \medskip \textbf{Topological Condition:} Any path in $M$ connecting two boundary points is homotopic to a polygon $c_1\cup \gamma_1 \cup c_2\cup\gamma_2\cup\dots \cup\gamma_k \cup c_{k+1}$ with the properties: (i) $c_j$ are paths on $\partial M$; (ii) For any $j$, $\gamma_j =\tilde\gamma_j|_M$ for some $\tilde\gamma_j \in\Gamma$; $\gamma_j$ lie in $M^\text{\rm int}$ with the exception of its endpoints and is transversal to $\partial M$ at both ends. \medskip \begin{theorem} \label{thm_an} \ Let $g$ be an analytic, regular metric on $M$. Let $\Gamma$ be a complete complex of geodesics. Then $I_\Gamma$ is s-injective. \end{theorem} The proof is based on using analytic pseudo-differential calculus, see \cite{Sj-Ast, T}. This has been used before in integral geometry, see e.g., \cite{BQ, Q}, see also \cite{SU-rig}. To formulate a stability estimate, we will parametrize the simple geodesics in a way that will remove the extra two dimensions. Let $H_m$ be a finite collection of smooth hypersurfaces in $M^\text{\rm int}_1$. Let $\mathcal{H}_m$ be an open subset of $\{(z,\theta)\in SM_1; \; z\in H_m, \theta\not\in T_zH_m \}$, and let $\pm l_m^\pm(z,\theta)\ge0$ be two continuous functions. Let $\Gamma(\mathcal{H}_m)$ be the set of geodesics \be{5} \Gamma(\mathcal{H}_m) = \left\{\gamma_{z,\theta}(t); \; l_m^-(z,\theta)\le t\le l_m^+(z,\theta), \; (z,\theta)\in \mathcal{H}_m \right\}, \end{equation} that, depending on the context, is considered either as a family of curves, or as a point set. We also assume that each $\gamma\in \Gamma(\mathcal{H}_m)$ is a simple geodesic. If $g$ is simple, then one can take a single $H=\partial M_1$ with $l^-=0$ and an appropriate $l^+(z,\theta)$. If $g$ is regular only, and $\Gamma$ is any complete set of geodesics, then any small enough neighborhood of a simple geodesic in $\Gamma$ has the properties listed above and by a compactness argument on can choose a finite complete set of such $\Gamma(\mathcal{H}_m)$'s, that is included in the original $\Gamma$, see Lemma~\ref{lemma_H}. Given $\mathcal{H}=\{\mathcal{H}_m\}$ as above, we consider an open set $\mathcal{H'}=\{\mathcal{H}_m'\}$, such that $\mathcal{H}_m' \Subset \mathcal{H}_m$, and let $\Gamma(\mathcal{H}_m')$ be the associated set of geodesics defined as in \r{5}, with the same $l_m^\pm$. Set $\Gamma(\mathcal{H})=\cup\Gamma(\mathcal{H}_m)$, $\Gamma(\mathcal{H}')=\cup\Gamma(\mathcal{H}_m')$. 
The restriction $\gamma\in \Gamma(\mathcal{H}_m')\subset \Gamma(\mathcal{H}_m)$ can be modeled by introducing a weight function $\alpha_m$ in $\mathcal{H}_m$, such that $\alpha_m=1$ on $\mathcal{H}_m'$, and $\alpha_m=0$ otherwise. More generally, we allow $\alpha_m$ to be smooth but still supported in $\mathcal{H}_m$. We then write $\alpha=\{\alpha_m\}$, and we say that $\alpha\in C^k(\mathcal{H})$, if $\alpha_m\in C^k(\mathcal{H}_m)$, $\forall m$. We consider $I_{\alpha_m}=\alpha_mI$, or more precisely, in the coordinates $(z,\theta) \in \mathcal{H}_m$, \be{I_a0} I_{\alpha_m}f = \alpha_m(z,\theta) \int_0^{l_m(z,\theta)} \big \langle f(\gamma_{z,\theta}), \dot \gamma_{z,\theta}^2\big \rangle \,\d t, \quad (z,\theta)\in \mathcal{H}_m. \end{equation} Next, we set \be{Na} I_\alpha = \{ I_{\alpha_m} \}, \quad N_{\alpha_m} = I_{\alpha_m}^* I_{\alpha_m} = I^*|\alpha_m|^2I, \quad N_\alpha = \sum N_{\alpha_m}, \end{equation} where the adjoint is taken w.r.t.\ the measure $\d\mu := |\langle \nu(z),\theta\rangle | \,\d S_z\,\d \theta$ on $\mathcal{H}_m$, $\d S_z\,\d \theta$ being the induced measure on $SM$, and $\nu(z)$ being a unit normal to $H_m$. S-injectivity of $N_\alpha$ is equivalent to s-injectivity for $I_\alpha$, which in turn is equivalent to s-injectivity of $I$ restricted to $\supp\alpha$, see Lemma~\ref{lemma_1}. The space $\tilde{H}^2$ is defined in Section~\ref{sec_prel}, see \r{S24}. \begin{theorem} \label{thm_stab} \ (a) Let $g=g_0\in C^k$, $k\gg1$ be regular, and let $\mathcal{H}'\Subset\mathcal{H}$ be as above with $\Gamma(\mathcal{H}')$ complete. Fix $\alpha = \{\alpha_m\}\in C^\infty$ with $\mathcal{H}_m' \subset\supp\alpha_m\subset \mathcal{H}_m$. Then if $I_\alpha$ is s-injective, we have \be{est} \|f^s\|_{L^2(M)} \le C \|N_{\alpha} f\|_{\tilde H^2(M_1)}. \end{equation} (b) Assume that $\alpha=\alpha_g$ in (a) depends on $g \in C^k$, so that $C^k(M_1) \ni g \to C^l(\mathcal{H}) \ni \alpha_g$ is continuous with $l\gg1$, $k\gg1$. Assume that $I_{g_0,\alpha_{g_0}}$ is s-injective. Then estimate \r{est} remains true for $g$ in a small enough neighborhood of $g_0$ in $C^k(M_1)$ with a uniform constant $C>0$. \end{theorem} In particular, Theorem~\ref{thm_stab} proves a locally uniform stability estimate for the class of non-trapping manifolds considered in \cite{D}. Theorems~\ref{thm_an}, \ref{thm_stab} allow us to formulate generic uniqueness results. One of them is formulated below. Given a family of metrics $\mathcal{G}\subset C^k(M_1)$, and $U_g\subset T(M^\text{\rm int}_1\setminus M)$, depending on the metric $g\in \mathcal{G}$, we say that $U_g$ depends continuously on $g$, if for any $g_0\in \mathcal{G}$, and any compact $K\subset U^\text{int}_{g_0}$, we have $K\subset U^\text{int}_{g}$ for $g$ in a small enough neighborhood of $g_0$ in $C^k$. In the next theorem, we take $U_g=\Gamma_g$, that is identified with the corresponding set of $(x,\eta)$ as in \eqref{01}. \begin{theorem} \label{thm_I} Let $\mathcal{G}\subset C^k(M_1)$ be an open set of regular metrics on $M$, and let for each $g\in\mathcal{G}$, $\Gamma_g$ be a complete set of geodesics related to $g$ and continuously depending on $g$. Then for $k\gg0$, there is an open and dense subset $\mathcal{G}_s$ of $\mathcal{G}$, such that the corresponding X-ray transform $I_{\Gamma_g}$ is s-injective. \end{theorem} Of course, the set $\mathcal{G}_s$ includes all real analytic metrics in $\mathcal{G}$. 
\begin{corollary} \label{cor_1} Let $\mathcal{R}(M)$ be the set of all regular $C^k$ metrics on $M$ equipped with the $C^k(M_1)$ topology. Then for $k\gg1$, the subset of metrics for which the X-ray transform $I$ over all simple geodesics is s-injective, is open and dense in $\mathcal{R}(M)$. \end{corollary} The results above extend the generic results in \cite{SU-rig}, see also \cite{SU-Duke}, in several directions: the topology of $M$ may not be trivial, we allow conjugate points but we use only geodesics without conjugate points; the boundary does not need to be convex; and we use incomplete data, i.e., we use integrals over subsets of geodesics only. In Section~\ref{sec_f}, we discuss versions of those results for the X-ray transform of vector fields and functions, where the proofs can be simplified. Our results remain true for tensors of any order $m$, the necessary modifications are addressed in the key points of our exposition. To keep the paper readable, we restrict ourselves to orders $m=2,1,0$. \section{Preliminaries} \label{sec_prel} We say that $f$ is analytic in some subset $U$ of an analytic manifold, not necessarily open, if $f$ can be extended analytically to some open set containing $U$. Then we write $f\in \mathcal{A}(U)$. Let $g\in \mathrm{C^k}(M)$, $k\gg2$ or $g\in\mathcal{A}(M)$ be a Riemannian metric in $M$. We work with symmetric 2-tensors $f=\{f_{ij}\}$ and with 1-tenors/differential forms $v_j$ (the notation here and below is in any local coordinates). We use freely the Einstein summation convention and the convention for raising and lowering indices. We think of $f_{ij}$ and $f^{ij}= f_{kl}g^{ki}g^{lj}$ as different representations of the same tensor. If $\xi$ is a covector at $x$, then its components are denoted by $\xi_j$, while $\xi^j$ is defined as $\xi^i = g^{ij}\xi_j$. Next, we denote $|\xi|^2=\xi_i\xi^i$, similarly for vectors that we usually denote by $\theta$. If $\theta_1$, $\theta_2$ are two vectors, then $\langle \theta_1,\theta_2\rangle$ is their inner product. If $\xi$ is a covector, and $\theta$ is a vector, then $\langle \xi,\theta\rangle$ stands for $\xi(\theta)$. This notation choice is partly justified by identifying $\xi$ with a vector, as above. The geodesics of $g$ can be also viewed as the $x$-projections of the bicharacteristics of the Hamiltonian $E_g(x,\xi)=\frac12 g^{ij}(x) \xi_i\xi_j$. The energy level $E_g=1/2$ corresponds to parametrization with the arc-length parameter. For any geodesic $\gamma$, we have $f^{ij}(x)\xi_i\xi_j= f_{ij}(\gamma(x)) \dot\gamma^i(t) \dot\gamma^j(t)$, where $(x,\xi) = (x(t),\xi(t))$ is the bicharacteristic with $x$-projection equal to $\gamma$. \subsection{Semigeodesic coordinates near a simple geodesic and boundary normal coordinates.} \label{sec_sm} Let $[l^-,l^+]\allowbreak \ni t\mapsto \gamma_{x_0,\theta_0}(t)$ be a simple geodesic through $x_0 =\gamma_{x_0,\theta_0}(0) \in M_1$ with $\theta_0\in S_{x_0}M_1$. The map $t\theta\mapsto \exp_{x_0}(t\theta)$ is a local diffeomorphism for $\theta$ close enough to $\theta_0$ and $t\in[l^-,l^+]$ by our simplicity assumption but may not be a global one, since $\gamma_{x_0,\theta_0}$ may self-intersect. On the other hand, there can be finitely many intersections only and we can assume that each subsequent intersection happens on a different copy of $M$. In other words, we think of $\gamma_0$ as belonging to a new manifold that is a small enough neighborhood of $\gamma_0$, and there are no self-intersections there. 
The local charts of that manifold are defined through the exponential map above. Therefore, when working near $\gamma_{x_0,\theta_0}$ we can assume that $\gamma_{x_0,\theta_0}$ does not intersect itself. We will use this in the proof of Proposition~\ref{lemma_wf}. Then one can choose a neighborhood $U$ of $\gamma_0$ and normal coordinates centered at $x_0$ there, denoted by $x$ again, such that the radial lines $t\mapsto t\theta$, $\theta=\text{const.}$, are geodesics. If $g\in C^k$, then we lose two derivatives and the new metric is in $C^{k-2}$; if $g$ is analytic near $\gamma_0$, then the coordinate change can be chosen to be analytic, as well. If in the situation above, let $x_0\not\in M$, and moreover, assume that the part of $\gamma_{x_0,\theta_0}$ corresponding to $t<0$ is still outside $M$. Then, one can consider $(\theta,t)$ as polar coordinates on $T_{x_0}M$. Considering them as Cartesian coordinates there, see also \cite[sec.~9]{SU-Duke}, one gets coordinates $(x',x^n)$ near $\gamma_{x_0,\theta_0}$ so that the latter is given by $\{(0,\dots,0,t), \; 0\le t\le l^+\}$, $g_{in}=\delta_{in}$, and $\Gamma_{nn}^i=\Gamma_{in}^n=0$, $\forall i$. Given $x\in {\bf R}^n$, we write $x' = (x^1,\dots,x^{n-1})$. Moreover, the lines $x'=\text{const.}$, $|x'|\ll1$, $x^n=t\in[0,l^+]$ are geodesics in $\Gamma$, as well. We will call those coordinates semigeodesic coordinates near $\gamma_{x_0,\theta_0}$. We will often use boundary normal (semi-geodesic) coordinates $(x',x^n)$ near a boundary point. If $x'\in{\bf R}^{n-1}$ are local coordinates on $\partial M$, and $\nu(x')$ is the interior unit normal, for $p\in M$ close enough to $\partial M$, they are defined by $\exp_{(x',0)}x^n\nu=p$. Then $x^n=0$ defines $\partial M$, $x^n>0$ in $M$, $x^n = \text{dist}(x,\partial M)$. The metric $g$ in those coordinates again satisfies $g_{in}=\delta_{in}$, and $\Gamma_{nn}^i=\Gamma_{in}^n=0$, $\forall i$. We also use the convention that all Greek indices take values from $1$ to $n-1$. In fact, the semigeodesic coordinates in the previous paragraph are boundary normal coordinates to a small part of the geodesic ball centered at $x_0= \gamma_{x_0,\theta_0}(0)$ with radius $\varepsilon$, $0<\varepsilon\ll1$. \subsection{Integral representation of the normal operator.} \label{sec_int} We define the $L^2$ space of symmetric tensors $f$ with inner product \[ (f,h) = \int_M \langle f, \bar h\rangle (\det g)^{1/2}\,\d x, \] where, in local coordinates, $\langle f, \bar h\rangle = f_{ij}\bar h^{ij}$. Similarly, we define the $L^2$ space of 1-tensors (vector fields, that we identify with 1-forms) and the $L^2$ space of functions in $M$. Also, we will work in Sobolev $H^s$ spaces of 2-tensors, 1-forms and functions. In order to keep the notation simple, we will use the same notation $L^2$ (or $H^s$) for all those spaces and it will be clear from the context which one we mean. In the fixed finite atlas on $M$, extended to $M_1$, the norms $\|f\|_{C^k}$ and the $H^s$ norms below are correctly defined. In the proof, we will work in finitely many coordinate charts because of the compactness of $M$, and this justifies the equivalence of the correspondent $C^k$ and $H^s$ norms. We define the Hilbert space $\tilde{H}^2(M_1)$ used in Theorem~\ref{thm_stab} as in \cite{SU-Duke,SU-rig}. Let $x=(x',x^n)$ be local coordinates in a neighborhood $U$ of a point on $\partial M$ such that $x^n=0$ defines $\partial M$. 
Then we set \[ \|f\|^2_{\tilde{H}^1(U)} = \int_U \Big(\sum_{j=1}^{n-1} |\partial_{x^j}f|^2 +|x^n\partial_{x^n}f|^2+|f|^2\Big)\, \d x. \] This can be extended to a small enough neighborhood $V$ of $\partial M$ contained in $M_1$. Then we set \begin{equation} \label{S24} \|f\|_{\tilde{H}^2(M_1)} = \sum_{j=1}^{n} \|\partial_{x^j}f\|_{\tilde{H}^1(V)} + \|f\|_{\tilde{H}^1(M_1)}. \end{equation} The space $\tilde{{H}}^2(M_1)$ has the property that for each $f\in H^1(M)$ (extended as zero outside $M$), we have $N f \in \tilde{H}^2(M_1)$. This is not true if we replace $\tilde{H}^2(M_1)$ by $H^2(M_1)$. \begin{lemma} \label{lemma_H} Let $\Gamma_g$ and $\mathcal{G}$ be as in Theorem~\ref{thm_I}. Then for $k\gg1$, for any $g_0\in\mathcal{G}$, there exist $ \mathcal{H}'=\{\mathcal{H}_m'\}\Subset \mathcal{H}=\{\mathcal{H}_m\}$ such that $\Gamma(\mathcal{H})\Subset\Gamma_{g_0}$, and $\mathcal{H}'$, $\mathcal{H}$ satisfy the assumptions of Theorem~\ref{thm_stab}. Moreover, $\mathcal{H}'$ and $\mathcal{H}$ satisfy the assumptions of Theorem~\ref{thm_stab} for $g$ in a small enough neighborhood of $g_0$ in $C^k$. \end{lemma} \begin{proof} Fix $g_0\in\mathcal{G}$ first. Given $(x_0,\xi_0)\in T^*M$, there is a simple geodesic $\gamma: [l^-, l^+] \to M_1$ in $\Gamma_{g_0}$ through $x_0$ normal to $\xi_0$ at $x_0$. Choose a small enough hypersurface $H$ through $x_0$ transversal to $\gamma\in \Gamma_{g_0}$, and local coordinates near $x_0$ as in Section~\ref{sec_sm} above, so that $x_0=0$, $H$ is given by $x^n=0$, $\dot\gamma(0)=(0,\dots,0,1)$. Then one can set $\mathcal{H}_0 = \{x;\; x^n=0; \; |x'|<\varepsilon\} \times\{ \theta;\; |\theta'|<\varepsilon \}$, and $\mathcal{H}_0'$ is defined in the same way by replacing $\varepsilon$ by $\varepsilon/2$. We define $\Gamma(\mathcal{H}_0)$ as in \r{5} with $l^\pm(z,\theta)=l^\pm$. Then the properties required for $\mathcal{H}_0$, including the simplicity assumption are satisfied when $0<\varepsilon\ll1$. Choose such an $\varepsilon$, and replace it with a smaller one so that those properties are preserved under a small perturbation of $g$. Any point in $SM$ close enough to $(x_0,\xi_0)$ still has a geodesic in $\Gamma(\mathcal{H}_0')$ normal to it. By a compactness argument, one can find a finite number of $\mathcal{H}_m'$ so that the corresponding $\Gamma(\mathcal{H}') = \cup \Gamma(\mathcal{H}'_m)$ is complete. The continuity property of $\Gamma_g$ w.r.t.\ $g$ guarantees that the construction above is stable under a small perturbation of $g$. \end{proof} Similarly to \cite{SU-Duke}, one can see that the map $I_{\alpha_m} : L^2(M) \to L^2(\mathcal{H}_m,\,\d\mu)$ defined by \r{I_a0} is bounded, and therefore the {\em normal}\/ operator $N_{\alpha_m}$ defined in \r{Na} is a well defined bounded operator on $L^2(M)$. Applying the same argument to $M_1$, we see that $N_{\alpha_m} : M \to M_1$ is also bounded. By \cite{SU-Duke}, at least when $f$ is supported in the local chart near $x_0=0$ above, and $x$ is close enough to $x_0$, \be{N0} \left[N_{\alpha_m}f\right]^{i'j'} (x) = \int_0^\infty \int_{S_x\Omega} |\alpha_m^\sharp(x,\theta)|^2 \theta^{i'} \theta^{j'} f_{ij}( \gamma_{x,\theta}(t) ) \dot \gamma_{x,\theta}^i(t) \dot\gamma_{x,\theta}^j(t)\, \d \theta\, \d t, \end{equation} where $|\alpha_m^\sharp(x,\theta)|^2 = |\tilde\alpha_m(x,\theta)|^2 + |\tilde\alpha_m(x,-\theta)|^2$, and $\tilde\alpha_m$ is the extension of $\alpha_m$ as constant along the geodesic through $(x,\theta)\in \mathcal{H}_m$; and equal to $0$ for all other points not covered by such geodesics. 
Formula \r{N0} has an invariant meaning and holds without the restriction on $\supp f$. On the other hand, if $\supp f$ is small enough (but not necessarily near $x_0$), $y=\exp_x(t\theta)$ defines a local diffeomorphism $t\theta\mapsto y\in \supp f$, therefore after making the change of variables $y=\exp_x(t\theta)$, see \cite{SU-Duke}, this becomes \be{N} N_{\alpha_m}f (x) = \frac1{\sqrt{\det g}} \int A_m(x,y) \frac{f^{ij}(y)}{\rho(x,y)^{n-1}} \frac{\partial\rho}{\partial y^i} \frac{\partial\rho}{\partial y^j} \frac{\partial\rho}{\partial x^k} \frac{\partial\rho}{\partial x^l}\, \!\det\frac{\partial^2(\rho^2/2)}{\partial x\partial y} \,\d y, \end{equation} where \be{A} A_m(x,y) = \big|\alpha_m^\sharp\!\left(x, \text{grad}_x \rho(x,y)\right)\big|^2, \end{equation} $y$ are any local coordinates near $\supp f$, and $\rho(x,y) = |\exp^{-1}_x y|$. Formula \r{N} can be also understood invariantly by considering $\d_x \rho$ and $\d_y\rho$ as tensors. For arbitrary $f\in L^2(M)$ we use a partition of unity in $TM^\text{\rm int}_1$ to express $N_{\alpha_m}f(x)$ as a finite sum of integrals as above, for $x$ near any fixed $x_0$. We get in particular that $N_{\alpha_m}$ has the pseudolocal property, i.e., its Schwartz kernel is smooth outside the diagonal. As we will show below, similarly to the analysis in \cite{SU-Duke, SU-rig}, $N_{\alpha_m}$ is a $\Psi$DO\ of order $-1$. We always extend functions or tensors defined in $M$ as $0$ outside $M$. Then $N_\alpha f$ is well defined near $M$ as well and remains unchanged if $M$ is extended such that it is still in $M_1$, and $f$ is kept fixed. \subsection{Decomposition of symmetric tensors.} For more details about the decomposition below, we refer to \cite{Sh}. Given a symmetric 2-tensor $f= f_{ij}$, we define the 1-tensor $\delta f$ called {\em divergence} of $f$ by $$ [\delta f]_i = g^{jk} \nabla_k f_{ij}, $$ in any local coordinates, where $\nabla_k$ are the covariant derivatives of the tensor $f$. Given an 1-tensor (a vector field or an 1-form) $v$, we denote by $dv$ the 2-tensor called symmetric differential of $v$: $$ [d v]_{ij} = \frac12\left(\nabla_iv_j+ \nabla_jv_i \right). $$ Operators $d$ and $-\delta$ are formally adjoint to each other in $L^2(M)$. It is easy to see that for each smooth $v$ with $v=0$ on $\partial M$, we have $I(d v)(\gamma)=0$ for any geodesic $\gamma$ with endpoints on $\partial M$. This follows from the identity \be{v} \frac{\d }{\d t} \langle v(\gamma(t)), \dot\gamma(t) \rangle \allowbreak = \allowbreak \langle dv(\gamma(t)), \dot\gamma^2(t) \rangle. \end{equation} If $\alpha=\{\alpha_m\}$ is as in the Introduction, we get \be{dv} I_\alpha (dv)=0, \quad \forall v\in C_0^1(M), \end{equation} and this can be extended to $v\in H_0^1(M)$ by continuity. It is known (see \cite{Sh} and \r{10} below) that for $g$ smooth enough, each symmetric tensor $f\in L^2(M)$ admits unique orthogonal decomposition $f=f^s+d v$ into a {\em solenoidal}\/ tensor $\mathcal{S}f :=f^s $ and a {\em potential}\/ tensor $\mathcal{P}f :=d v$, such that both terms are in $L^2(M)$, $f^s$ is solenoidal, i.e., $\delta f^s=0$ in $M$, and $v\in H^1_0(M)$ (i.e., $v=0$ on $\partial M$). In order to construct this decomposition, introduce the operator $\upDelta^s = \delta d$ acting on vector fields. This operator is elliptic in $M$, and the Dirichlet problem satisfies the Lopatinskii condition. Denote by $\upDelta^s_D$ the Dirichlet realization of $\upDelta^s$ in $M$. 
Then \begin{equation} \label{9} v = \left(\upDelta^s_D\right)^{-1}\delta f, \quad f^s = f - d \left(\upDelta^s_D\right)^{-1}\delta f. \end{equation} Therefore, we have $$ \mathcal{P} = d \left(\upDelta^s_D\right)^{-1}\delta, \quad \mathcal{S} = \mbox{Id}-\mathcal{P}, $$ and for any $g \in C^1(M)$, the maps \be{10} (\upDelta^s_D)^{-1}: H^{-1}(M) \to H_0^{1}(M), \quad \mathcal{P}, \mathcal{S} : L^2(M) \longrightarrow L^2(M) \end{equation} are bounded and depend continuously on $g$, see \cite[Lemma~1]{SU-rig} that easily generalizes for manifolds. This admits the following easy generalization: for $s=0,1,\dots$, the resolvent above also continuously maps $H^{s-1}$ into $H^{s+1} \cap H_0^1$, similarly, $\mathcal{P}$ and $\mathcal{S}$ are bounded in $H^{s}$, if $g\in C^k$, $k\gg1$ (depending on $s$). Moreover those operators depend continuously on $g$. Notice that even when $f$ is smooth and $f=0$ on $\partial M$, then $f^s$ does not need to vanish on $\partial M$. In particular, $f^s$, extended as $0$ to $M_1$, may not be solenoidal anymore. To stress on the dependence on the manifold, when needed, we will use the notation $v_M$ and $f^s_M$ as well. Operators $\mathcal{S}$ and $\mathcal{P}$ are orthogonal projectors. The problem about the s-injectivity of $I_\alpha$ then can be posed as follows: if $I_\alpha f=0$, show that $f^s=0$, in other words, show that $I_\alpha$ is injective on the subspace $\mathcal{S}L^2$ of solenoidal tensors. Note that by \r{dv} and \r{Na}, \be{11} N_\alpha = N_\alpha\mathcal{S}=\mathcal{S} N_\alpha, \quad \mathcal{P} N_\alpha=N_\alpha\mathcal{P}=0. \end{equation} \begin{lemma} \label{lemma_1} Let $\alpha=\{\alpha_m\}$ with $\alpha_m\in C_0^\infty(\mathcal{H}_m)$ be as in the Introduction. The following statements are equivalent: (a) $I_\alpha$ is s-injective on $L^2(M)$; (b) $N_\alpha : L^2(M) \to L^2(M)$ is s-injective; (c) $N_\alpha : L^2(M) \to L^2(M_1)$ is s-injective; (d) If $\Gamma^\alpha_m$ is the set of geodesics issued from $(\supp\alpha_m)^\text{\rm int}$ as in \r{5}, and $\Gamma^\alpha = \cup\Gamma_m^\alpha$, then $I_{\Gamma^\alpha}$ is s-injective. \end{lemma} \begin{proof} Let $I_\alpha$ be s-injective, and assume that $N_\alpha f=0$ in $M$ for some $f\in L^2(M)$. Then $$ 0 = (N_\alpha f,f)_{L^2(M)} = \sum \|\alpha_m I f\|_{L^2(\mathcal{H}_m,\d\mu)}^2 \quad \Longrightarrow \quad f^s=0. $$ This proves the implication $(a) \Rightarrow (b)$. Next, $(b) \Rightarrow (c)$ is immediate. Assume (c) and let $f\in L^2(M)$ be such that $I_\alpha f=0$. Then $N_\alpha f=0$ in $M_1$, therefore $f^s=0$. Therefore, $(c) \Rightarrow (a)$. Finally, $(a) \Leftrightarrow (d)$ follows directly form the definition of $I_\alpha$. \end{proof} \paragraph{\bf Remark.} Lemma~\ref{lemma_1} above, and Lemma~\ref{lemma_bd}(a) in next section show that $(\supp\alpha_m)^\text{int}$ in (d) can be replaced by $\supp\alpha_m$ if $\Gamma^\alpha$ is a complete set of geodesics. \section{Microlocal Parametrix of $N_\alpha$} \begin{proposition} \label{pr_2}\ Let $g=g_0\in C^k(M)$ be a regular metric on $M$, and let $\mathcal{H}'\Subset \mathcal{H}$ be as in Theorem~\ref{thm_stab}. (a) Let $\alpha$ be as in Theorem~\ref{thm_stab}(a). Then for any $t=1,2,\dots$, there exists $k>0$ and a bounded linear operator $$ Q : \tilde H^2(M_1) \longmapsto \mathcal{S} L^2(M), $$ such that \be{F} QN_\alpha f = f_M^s +Kf, \quad \forall f\in H^1(M), \end{equation} where $K :H^1(M) \to \mathcal{S} H^{1+t}(M)$ extends to $K :L^2(M) \to \mathcal{S} H^{t}(M)$. If $t=\infty$, then $k=\infty$. 
(b) Let $\alpha =\alpha_g$ be as in Theorem~\ref{thm_stab}(b). Then, for $g$ in some $C^k$ neighborhood of $g_0$, (a) still holds and $Q$ can be constructed so that $K$ would depend continuously on $g$. \end{proposition} \begin{proof} A brief sketch of our proof is the following: We construct first a parametrix that recovers microlocally $f^s_{M_1}$ from $N_\alpha f$. Next we will compose this parametrix with the operator $f_{M_1}^s \mapsto f_M^s$ as in \cite{SU-Duke, SU-rig}. Part (b) is based on a perturbation argument for the Fredholm equation \r{F}. The need for such two step construction is due to the fact that in the definition of $f^s$, a solution to a certain boundary value problem is involved, therefore near $\partial M$, our construction is not just a parametrix of a certain elliptic $\Psi$DO. This is the reason for losing one derivative in \r{est}. For tensors of orders 0 and 1, there is no such loss, see \cite{SU-Duke} and \r{est1}, \r{est2}. As in \cite{SU-rig}, we will work with $\Psi$DO s with symbols of finite smoothness $k\gg1$. All operations we are going to perform would require finitely many derivatives of the amplitude and finitely many seminorm estimates. In turn, this would be achieved if $g\in C^k$, $k\gg1$ and the corresponding $\Psi$DO s will depends continuously on $g$. Recall \cite{SU-Duke,SU-rig} that for simple metrics, $N$ is a $\Psi$DO\ in $M^\text{\rm int}$ of order $-1$ with principal symbol that is not elliptic but $N+|D|^{-1}\mathcal{P}$ is elliptic. This is a consequence of the following. We will say that $N_\alpha$ (and any other $\Psi$DO\ acting on symmetric tensors) is {\em elliptic on solenoidal tensors}, if for any $(x,\xi)$, $\xi\not=0$, $\sigma_p(N_\alpha)^{ijkl}(x,\xi)f_{kl}=0$ and $\xi^i f_{ij}=0$ imply $f=0$. Then $N$ is elliptic on solenoidal tensors, as shown in \cite{SU-Duke}. That definition is motivated by the fact that the principal symbol of $\delta$ is given by $f_{ij} \mapsto \mathrm{i}\xi^if_{ij}$, and s-injectivity is equivalent to the statement that $Nf=0$ and $\delta f=0$ in $M$ imply $f=0$. Note also that the principal symbol of $d$ is given by $v_j \mapsto (\xi_i v_j +\xi_j v_i)/2$, and $\sigma_p(N)$ vanishes on tensors represented by the r.h.s.\ of the latter. We will establish similar properties of $N_\alpha$ below. Let $N_{\alpha_m}$ be as in Section~\ref{sec_int} with $m$ fixed. \begin{lemma} \label{lemma_2} $N_{\alpha_m}$ is a classical $\Psi$DO\ of order $-1$ in $M^\text{\rm int}_1$. It is elliptic on solenoidal tensors at $(x_0,\xi^0)$ if and only if there exists $\theta_0\in T_{x_0}M_1\setminus 0$ with $\langle \xi^0, \theta_0 \rangle=0$ such that $\alpha_0(x_0,\theta_0)\not =0$. The principal symbol $\sigma_p(N_{\alpha_m})$ vanishes on tensors of the kind $f_{ij} = (\xi_i v_j +\xi_j v_i)/2$ and is non-negative on tensors satisfying $\xi^if_{ij}=0$. \end{lemma} \begin{proof} We established the pseudolocal property already, and formulas \r{N0}, \r{N} together with the partition of unity argument following them imply that it is enough to work with $x$ in a small neighborhood of a fixed $x_0\in M^\text{\rm int}_1$, and with $f$ supported there as well. Then we work in local coordinates near $x_0$. To express $N_{\alpha_m}$ as a pseudo-differential operator, we proceed as in \cite{SU-Duke, SU-rig}, with a starting point \r{N}. 
Recall that for $x$ close to $y$ we have \[ \begin{split} \rho^2(x,y)&=G^{(1)}_{ij}(x,y)(x-y)^i(x-y)^j,\\ \frac{\partial\rho^2(x,y)}{\partial x^j} &=2 G^{(2)}_{ij}(x,y)(x-y)^i,\\ \frac{\partial^2\rho^2(x,y)}{\partial x^j\partial y^j} &=2 G^{(3)}_{ij}(x,y), \end{split} \] where $G^{(1)}_{ij}$, $G^{(2)}_{ij}$ $G^{(3)}_{ij}$ are smooth and on the diagonal. We have $$ G^{(1)}_{ij}(x,x)= G^{(2)}_{ij}(x,x)= G^{(3)}_{ij}(x,x)= g_{ij}(x). $$ Then $N_{\alpha_m}$ is a pseudo-differential operator with amplitude \begin{equation} \label{a20'} \begin{split} M_{ijkl}(x,y,\xi) & = \int e^{-\mathrm{i}\xi\cdot z}\left(G^{(1)}z\cdot z\right)^{\frac{-n+1}2-2} \big|\alpha_m^\sharp(x,g^{-1}G^{(2)} z)\big|^2 \\ &\qquad \times \big[G^{(2)}z\big]_i \big[G^{(2)}z\big]_j\big[\wtilde G^{(2)}z\big]_k \big[\wtilde G^{(2)} z\big]_l \frac{\det G^{(3)}}{\sqrt{\det g}} \,\d z, \end{split} \end{equation} where $\wtilde G^{(2)}_{ij}(x,y)= G^{(2)}_{ij}(y,x)$. As in \cite{SU-rig}, we note that $M_{ijkl}$ is the Fourier transform of a positively homogeneous distribution in the $z$ variable, of order $n-1$. Therefore, $M_{ijkl}$ itself is positively homogeneous of order $-1$ in $\xi$. Write \be{m} M(x,y,\xi) = \int e^{-\mathrm{i}\xi\cdot z}|z|^{ -n+1} m(x,y,\theta) \,\d z, \quad \theta=z/|z|, \end{equation} where \be{30} \begin{split} m_{ijkl}(x,y,\theta) = & \left(G^{(1)}\theta\cdot \theta\right)^{\frac{-n+1}2-2} \big|\alpha_m^\sharp(x,g^{-1}G^{(2)} \theta)\big|^2 \\ & \times \big[G^{(2)}\theta \big]_i \big[G^{(2)}\theta\big]_j\big[\wtilde G^{(2)}\theta\big]_k \big[\wtilde G^{(2)} \theta\big]_l \frac{\det G^{(3)}}{\sqrt{\det g(x)}}, \end{split} \end{equation} and pass to polar coordinates $z=r\theta$. Since $m$ is an even function of $\theta$, smooth w.r.t.\ all variables, we get (see also \cite[Theorem~7.1.24]{H}) \be{M} M(x,y,\xi) = \pi \int_{|\theta|=1} m(x,y,\theta)\delta(\theta\cdot\xi) \,\d \theta. \end{equation} This proves that $M$ is an amplitude of order $-1$. To obtain the principal symbol, we set $x=y$ above (see also \cite[sec.~5]{SU-Duke} to get \be{32} \sigma_p(N_{\alpha_m}) (x,\xi) = M(x,x,\xi) = \pi \int_{|\theta|=1} m(x,x,\theta)\delta(\theta\cdot\xi) \,\d \theta, \end{equation} where \be{33} m^{ijkl}(x,x,\theta) = \big|\alpha_m^\sharp(x,\theta)\big|^2 \sqrt{\det g(x)}\left(g_{ij}(x)\theta ^i \theta^j\right)^{\frac{-n+1}2-2} \theta^i \theta^j \theta^k \theta^l . \end{equation} To prove ellipticity of $M(x,\xi)$ on solenoidal tensors at $(x_0,\xi^0)$, notice that for any symmetric real $f_{ij}$, we have \be{33a} m^{ijkl}(x_0,x_0,\theta) f_{ij} f_{kl} = \big|\alpha_m^\sharp(x_0,\theta) \big|^2 \sqrt{\det g(x_0)}\left(g_{ij}(x_0)\theta ^i \theta^j\right)^{\frac{-n+1}2-2} \!\left( f_{ij} \theta^i \theta^j \right)^2\ge0. \end{equation} This, \r{32}, and the assumption $\alpha_m(x_0,\theta_0)\not= 0$ imply that $M^{ijkl}(x_0,x_0,\xi^0)f_{ij} f_{kl}=0$ yields $f_{ij}\theta^i \theta^j=0$ for $\theta$ perpendicular to $\xi^0$, and close enough to $\theta_0$. If in addition $(\xi^0)^j f_{ij}=0$, then this implies $f_{ij}\theta^i \theta^j=0$ for $\theta\in \n(\theta_0)$, and that easily implies that it vanishes for all $\theta$. Since $f$ is symmetric, this means that $f=0$. The last statement of the lemma follows directly from \r{32}, \r{33}, \r{33a}. Finally, we note that \r{33}, \r{33a} and the proof above generalizes easily for tensors of any order. \end{proof} We continue with the proof of Proposition~\ref{pr_2}. Since (b) implies (a), we will prove (b) directly. 
Notice that $\mathcal{H}'$ and $\mathcal{H}$ satisfy the properties listed in the Introduction, right before Theorem~\ref{thm_stab}, if $g=g_0$. On the other hand, those properties are stable under small $C^k$ perturbation of $g_0$. We will work here with metrics $g$ close enough to $g_0$. By Lemma~\ref{lemma_2}, since $\Gamma(\mathcal{H}')$ is complete, $N_\alpha$ defined by \r{Na} is elliptic on solenoidal tensors in $M$. The rest of the proof is identical to that of \cite[Proposition~4]{SU-rig}. We will give a brief sketch of it. To use the ellipticity of $N_\alpha$ on solenoidal tensors, we complete $N_\alpha$ to an elliptic $\Psi$DO\ as in \cite{SU-rig}. Set \be{W} W = N_\alpha + |D|^{-1}\mathcal{P}_{M_1}, \end{equation} where $|D|^{-1}$ is a properly supported parametrix of $(-\Delta_g)^{1/2}$ in $\n(M_1)$. The resolvent $(-\Delta^s_{M_1,D})^{-1}$ involved in $\mathcal{P}_{M_1}$ and $\mathcal{S}_{M_1}$ can be expressed as $R_1+R_2$, where $R_1$ is any parametrix near $M_1$, and $R_2 : L^2_{\text{comp}}(M_1) \to C^l(M_1)$, $R_2: H^l(M_1) \to H^{l+2}(M_1)$, where $l=l(k)\gg1$, if $k\gg1$. Then $W$ is an elliptic $\Psi$DO\ inside $M_1$ of order $-1$ by Lemma~\ref{lemma_2}. Let $P$ be a properly supported parametrix for $W$ of finite order, i.e., $P$ is a classical $\Psi$DO\ in the interior of $M_1$ of order $1$ with amplitude of finite smoothness, such that \be{34} PW=\mbox{Id} +K_1, \end{equation} and $K_1 : L^2_{\text{comp}} (M_1)\to H^l(M_1)$ with $l$ as above. Then $$ P_1 := \mathcal{S}_{M_1}P $$ satisfies \be{35} P_1 N_\alpha = \mathcal{S}_{M_1}+K_2, \end{equation} where $K_2$ has the same property as $K_1$. To see this, it is enough to apply $\mathcal{S}_{M_1}$ to the left and right of \r{34} and to use \r{11}. Next step is to construct an operator that recovers $f^s_{M}$, given $f^s_{M_1}$, and to apply it to $P_1 N_\alpha -K_2$. In order to do this, it is enough first to construct a map $P_2$ such that if $f^s_{M_1}$ and $v_{M_1}$ are the solenoidal part and the potential, respectively, corresponding to $f\in L^2(M)$ extended as zero to $M_1\setminus M$, then $P_2 : f^s_{M_1}\mapsto \left. v_{M_1}\right|_{\partial M}$. This is done as in \cite{SU-Duke} and \cite[Proposition~4]{SU-rig}. We also have $$ P_2P_1 : \wtilde H^2(M_1) \to H^{1/2}(\partial M). $$ Then we showed in \cite[Proposition~4]{SU-rig} that one can set $$ Q = (\mbox{Id} +dRP_2)P_1, $$ where $R : h\mapsto u$ is the Poisson operator for the Dirichlet problem $\upDelta^s u=0$ in $M$, $u|_{\partial M} =h$. As explained above, we work with finite asymptotic expansions that require finite number of derivatives on the amplitudes of our $\Psi$DO s. On the other hand, these amplitudes depend continuously on $g\in C^k$, $k\gg1$. As a result, all operators above depend continuously on $g\in C^k$, $k\gg1$. \end{proof} The first part of next lemma generalizes similar results in \cite[Thm~2]{SU-Duke}, \cite{ Ch, SSU} to the present situation. The second part shows that $I_\Gamma f=0$ implies that a certain $\tilde f$, with the same solenoidal projection, is flat at $\partial M$. This $\tilde f$ is defined by the property \r{l1_2} below. \begin{lemma} \label{lemma_bd} Let $g\in C^k(M)$ be a regular metric, and let $\Gamma$ be a complete set of geodesics. Then (a) $\Ker I_\Gamma \cap \mathcal{S}L^2(M)$ is finite dimensional and included in $C^l(M)$ with $l=l(k)\to\infty$, as $k\to\infty$. 
(b) If $I_\Gamma f=0$ with $f\in L^2(M)$, then there exists a vector field $v\in C^l(M)$, with $v|_{\partial M}=0$ and $l$ as above, such that for $\tilde f := f-dv$ we have \be{l1_1} \partial^\alpha \tilde f|_{\partial M} =0, \quad | \alpha|\le l, \end{equation} and in boundary normal coordinates near any point on $\partial M$ we have \be{l1_2} \tilde f_{ni}=0,\quad \forall i. \end{equation} \end{lemma} \begin{proof} Part (a) follows directly from Proposition~\ref{pr_2}. Without loss of generality, we may assume that $M_1$ is defined as $M_1 = \{x,\; \mbox{dist}(x,M)\le\epsilon\}$, with $\epsilon>0$ small enough. By Proposition~\ref{pr_2}, applied to $M_1$, \be{11_3} f_{M_1}^s \in C^l(M_1), \end{equation} where $l\gg1$, if $k\gg1$. Let $x=(x',x^n)$ be boundary normal coordinates in a neighborhood of some boundary point. We recall how to construct $v$ defined in $M$ so that \r{l1_2} holds, see \cite{SU1} for a similar argument for the non-linear boundary rigidity problem, and \cite{E,Sh-sib,SU-Duke,SU-rig} for the present one. The condition $(f-dv)_{in}=0$ is equivalent to \begin{equation} \label{a1} \nabla_n v_i+\nabla_i v_n= 2f_{in}, \quad v|_{x^n=0}=0, \quad i=1,\dots,n. \end{equation} Recall that $\nabla_i v_j = \partial_i v_j-\Gamma_{ij}^kv_k$, and that in those coordinates, $\Gamma_{nn}^k=\Gamma_{kn}^n=0$. If $i=n$, then \r{a1} reduces to $\nabla_n v_n=\partial_n v_n=f_{nn}$, $v_n=0$ for $x^n=0$; we solve this by integration over $0\le x^n\le \varepsilon \ll1$; this gives us $v_n$. Next, we solve the remaining linear system of $n-1$ equations for $i=1,\dots,n-1$ that is of the form $\nabla_nv_i=2f_{in}-\nabla_iv_n$, or, equivalently, \begin{equation} \label{a1'} \partial_n v_i-2\Gamma^\alpha_{ni}v_\alpha = 2f_{in}-\partial _iv_n, \quad v_i|_{x^n=0}=0, \quad i=1,\dots,n-1, \end{equation} (recall that $\alpha=1,\dots,n-1$). Clearly, if $g$ and $f$ are smooth enough near $\partial M$, then so is $v$. If we set $f=f^s$ above (they both belong to $\Ker I_\Gamma$), then by (a) we get the statement about the smoothness of $v$. Since the condition \r{l1_2} has an invariant meaning, this in fact defines a construction in some one-sided neighborhood of $\partial M$ in $M$. One can cut $v$ outside that neighborhood in a smooth way to define $v$ globally in $M$. We also note that this can be done for tensors of any order $m$, see \cite{Sh-sib}, then we have to solve consecutively $m$ ODEs. Let $\tilde f =f-dv$, where $v$ is as above. Then $\tilde f$ satisfies \r{l1_2}, and let \be{37} \tilde f^s_{M_1} = \tilde f - d\tilde v_{M_1} \end{equation} be the solenoidal projection of $\tilde f$ in $M_1$. Recall that $\tilde f$, according to our convention, is extended as zero in $M_1\setminus M$ that in principle, could create jumps across $\partial M$. Clearly, $\tilde f^s_{M_1} = f^s_{M_1}$ because $f-\tilde f=dv$ in $M$ with $v$ as in the previous paragraph, and this is also true in $M_1$ with $\tilde f$, $f$ and $v$ extended as zero (and then $v=0$ on $\partial M_1$). In \r{37}, the l.h.s.\ is smooth in $M_1$ by \r{11_3}, and $\tilde f$ satisfies \r{l1_2} even outside $M$, where it is zero. Then one can get $\tilde v_{M_1}$ by solving \r{a1} with $M$ replaced by $M_1$, and $f$ there replaced by $\tilde f^s_{M_1}\in C^l(M_1)$. Therefore, one gets that $\tilde v_{M_1}$, and therefore $\tilde f$, is smooth enough across $\partial M$, if $g\in C^k$, $k\gg1$, which proves \r{l1_1}. 
One can give the following alternative proof of \r{l1_1}: Let $N_\alpha$ be related to $\Gamma$, as in Theorem~\ref{thm_stab}. One can easily check that $N_\alpha$, restricted to tensors satisfying \r{l1_2}, is elliptic for $\xi_n\not=0$. Since $N_\alpha \tilde f=0$ near $M$, with $\tilde f$ extended as 0 outside $M$, as above, we get that this extension cannot have conormal singularities across $\partial M$. This implies \r{l1_1}, at least when $g\in C^\infty$. The case of $g$ of finite smoothness can be treated by using parametrices of finite order in the conormal singularities calculus. \end{proof} \section{S-injectivity for analytic regular metrics} In this section, we prove Theorem~\ref{thm_an}. Let $g$ be an analytic regular metrics in $M$, and let $M_1\supset M$ be the manifold where $g$ is extended analytically according to Definition~\ref{def_ms}. Recall that there is an analytic atlas in $M$, and $\partial M$ can be assumed to be analytic, too. In other words, in this section, $(M,\partial M,g)$ is a real analytic manifold with boundary. We will show first that $I_\Gamma f=0$ implies $f^s\in \mathcal{A}(M)$. We start with interior analytic regularity. Below, $\mathrm{WF}_{\mathrm{A}}(f)$ stands for the analytic wave front set of $f$, see \cite{Sj-Ast,T}. \begin{proposition} \label{lemma_wf} Let $(x_0,\xi^0)\in T^*M\setminus 0$, and let $\gamma_0$ be a fixed simple geodesic through $x_0$ normal to $\xi^0$. Let $If(\gamma)=0$ for some 2-tensor $f\in L^2(M)$ and all $\gamma\in \n(\gamma_0)$. Let $g$ be analytic in $\n(\gamma_0)$ and $\delta f=0$ near $x_0$. Then \be{wf} (x_0,\xi^0) \not\in\mathrm{WF}_{\mathrm{A}}(f). \end{equation} \end{proposition} \begin{proof} As explained in Section~\ref{sec_sm}, without loss of generality, we can assume that $\gamma_0$ does not self-intersect. Let $U$ be a tubular neighborhood of $\gamma_0$ with $x=(x',x^n)$ analytic semigeodesic coordinates in it, as in the second paragraph of Section~\ref{sec_sm}. We can assume that $x_0=0$, $g_{ij}(0)=\delta_{ij}$, and $x'=0$ on $\gamma_0$. In those coordinates, $U$ is given by $|x'|<\varepsilon$, $l^-<x^n<l^+$, with some $0<\varepsilon\ll1$, and we can choose $\varepsilon\ll1$ so that $\{x^n=l^\pm;\; |x'|\le\varepsilon\}$ lie outside $M$. Recall that the lines $x'=\text{const.}$ in $U$ are geodesics. Then $\xi^0=((\xi^0)',0)$ with $\xi^0_n=0$. We need to show that \be{39} (0,\xi^0) \not\in \text{WF}_{\text{A}}(f). \end{equation} We choose a local chart for the geodesics close to $\gamma_0$. Set first $Z = \{x^n=0;\; |x'|<7\varepsilon/8\}$, and denote the $x'$ variable on $Z$ by $z'$. Then $z'$, $\theta'$ (with $|\theta'|\ll1$) are local coordinates in $\n(\gamma_0)$ determined by $(z',\theta') \to \gamma_{(z',0),(\theta',1)}$. Each such geodesic is assumed to be defined on $l^-\le t\le l^+$, the same interval on which $\gamma_0$ is defined. Let $\chi_N(z')$, $N=1,2,\dots$, be a sequence of smooth cut-off functions equal to $1$ for $|z'|\le 3\varepsilon/4$, supported in $Z$, and satisfying the estimates \be{N} \left| \partial^\alpha \chi_N\right|\le (CN)^{|\alpha|}, \quad |\alpha|\le N, \end{equation} see \cite[Lemma~1.1]{T}. 
Set $\theta=(\theta',1)$, $|\theta'|\ll1$, and multiply $$ I f \left(\gamma_{(z',0),\theta}\right) =0 $$ by $\chi_N(z') e^{\mathrm{i}\lambda z'\cdot \xi'}$, where $\lambda>0$, $\xi'$ is in a complex neighborhood of $(\xi^0)'$, and integrate w.r.t.\ $z'$ to get \be{42} \iint e^{\lambda \mathrm{i} z'\cdot \xi'} \chi_N(z') f_{ij}\left( \gamma_{(z',0),\theta} (t)\right) \dot\gamma_{(z',0),\theta}^i(t) \dot\gamma_{(z',0),\theta}^j (t)\, \d t\, \d z'=0. \end{equation} For $|\theta'|\ll1$, $(z',t)\in Z\times(l^-,l^+)$ are local coordinates near $\gamma_0$ given by $x=\gamma_{(z',0),\theta} (t)$. If $\theta'=0$, we have $x=(z',t)$. By a perturbation argument, for $\theta'$ fixed and small enough, $(t,z')$ are analytic local coordinates, depending analytically on $\theta'$. In particular, $x=(z'+t\theta',t) + O(|\theta'|)$ but this expansion is not enough for the analysis below. Performing a change of variables in \r{42}, we get \be{43} \int e^{\mathrm{i}\lambda z'(x,\theta')\cdot \xi'} a_N(x,\theta') f_{ij}( x) b^i(x,\theta') b^j(x,\theta')\, \d x=0 \end{equation} for $|\theta'|\ll1$, $\forall\lambda$, $\forall\xi'$, where, for $|\theta'|\ll1$, the function $(x,\theta') \mapsto a_N$ is analytic and positive for $x$ in a neighborhood of $\gamma_0$, vanishing for $x\not\in U$, and satisfying \r{N}. The vector field $b$ is analytic on $\supp a_N$, and $b(0,\theta') = \theta$, $a_N(0,\theta')=1$. To clarify the arguments that follow, note that if $g$ is Euclidean in $\n(\gamma_0)$, then \r{43} reduces to $$ \int e^{\mathrm{i}\lambda (\xi',-\theta'\cdot\xi')\cdot x} \chi_N f_{ij}(x) \theta^i \theta^j\, \d x=0, $$ where $\chi_N = \chi_N (x'-x^n\theta')$. Then $\xi = (\xi',-\theta'\cdot\xi')$ is perpendicular to $\theta=(\theta',1)$. This implies that \be{44} \int e^{\mathrm{i}\lambda \xi\cdot x} \chi_N f_{ij}(x) \theta^i (\xi)\theta^j(\xi)\, \d x=0 \end{equation} for any function $\theta(\xi)$ defined near $\xi^0$, such that $\theta(\xi)\cdot\xi=0$. This has been noticed and used before if $g$ is close to the Euclidean metric (with $\chi_N=1$), see e.g., \cite{SU1}. We will assume that $\theta(\xi)$ is analytic. A simple argument (see e.g.\ \cite{Sh,SU1}) shows that a constant symmetric tensor $f_{ij}$ is uniquely determined by the numbers $f_{ij}\theta^i\theta^j$ for finitely many $\theta$'s (actually, for $N'=(n+1)n/2$ $\theta$'s); and in any open set on the unit sphere, there are such $\theta$'s. On the other hand, $f$ is solenoidal near $x_0$. To simplify the argument, assume for a moment that $f$ vanishes on $\partial M$ and is solenoidal everywhere. Then $\xi ^i\hat f_{ij}(\xi)=0$. Therefore, combining this with \r{44}, we need to choose $N=n(n-1)/2$ vectors $\theta(\xi)$, perpendicular to $\xi$, that would uniquely determine the tensor $\hat f$ on the plane perpendicular to $\xi$. To this end, it is enough to know that this choice can be made for $\xi=\xi^0$, then it would be true for $\xi\in \n(\xi^0)$. This way, $\xi ^i\hat f_{ij}(\xi)=0$ and the $N$ equations \r{44} with the so chosen $\theta_p(\xi)$, $p=1,\dots,N$, form a system with a tensor-valued symbol elliptic near $\xi=\xi^0$. The $C^\infty$ $\Psi$DO\ calculus easily implies the statement of the lemma in the $C^\infty$ category, and the complex stationary phase method below, or the analytic $\Psi$DO\ calculus in \cite{T} with appropriate cut-offs in $\xi$, implies the lemma in this special case ($g$ locally Euclidean). We proceed with the proof in the general case. 
Since we will localize eventually near $x_0=0$, where $g$ is close to the Euclidean metric, the special case above serves as a useful guideline. On the other hand, we work near a ``long geodesic'' and the lack of points conjugate to $x_0=0$ along it will play a decisive role in order to allow us to localize near $x=0$. Let $\theta(\xi)$ be a vector analytically depending on $\xi$ near $\xi=\xi^0$, such that \be{th} \theta(\xi)\cdot\xi=0, \quad \theta^n(\xi)=1, \quad \theta(\xi^0) = e_n. \end{equation} Here and below, $e_j$ stand for the vectors $\partial/\partial x^j$. Replace $\theta=(\theta',1)$ in \r{43} by $\theta(\xi)$ (the requirement $|\theta'| \ll1$ is fulfilled for $\xi$ close enough to $\xi^0$), to get \be{45} \int e^{\mathrm{i}\lambda \varphi(x,\xi)} \tilde a_N(x,\xi)\tilde f_{ij}( x)\tilde b^i(x,\xi)\tilde b^j(x,\xi)\, \d x=0, \ \end{equation} where $\tilde a_N$ is analytic near $\gamma_0\times \{\xi^0\}$, and satisfies \r{N} for $\xi$ close enough to $\xi^0$ and all $x$. Next, $\varphi$, $\tilde b$ are analytic on $\supp \tilde a_N$ for $\xi$ close to $\xi^0$. In particular, $$ \tilde b = \dot\gamma_{(z',0),(\theta'(\xi),1)}(t), \quad t=t(x,\theta'(\xi)), \; z'=z'(x,\theta'(\xi)), $$ and $$ \tilde b(0,\xi) = \theta(\xi), \quad \tilde a_N(0,\xi)=1. $$ The phase function is given by \be{45a} \varphi(x,\xi) = z'(x,\theta'(\xi))\cdot \xi'. \end{equation} To verify that $\varphi$ is a non-degenerate phase in $\n(0,\xi^0)$, i.e., that $\det \varphi_{x\xi}(0,\xi^0)\not =0$, note first that $z'=x'$ when $x^n=0$, therefore, $(\partial z'/\partial x')(0,\theta(\xi))=\mbox{Id}$. On the other hand, linearizing near $x^n=0$, we easily get $(\partial z'/\partial x^n)(0,\theta(\xi))=-\theta'(\xi)$. Therefore, \[ \varphi_x(0,\xi) = (\xi', -\theta'(\xi)\cdot \xi') = \xi \] by \r{th}. So we get $\varphi_{x\xi}(0,\xi) = \mbox{Id}$, which proves the non-degeneracy claim above. In particular, we get that $x\mapsto \varphi_\xi(x,\xi)$ is a local diffeomorphism in $\n(0)$ for $\xi\in\n(\xi^0)$, and therefore injective. We need however a semiglobal version of this along $\gamma_0$ as in the lemma below. For this reason we will make the following special choice of $\theta(\xi)$. Without loss of generality we can assume that \[ \xi^0 =e^{n-1}. \] Set \be{45_1} \theta(\xi) = \bigg( \xi_1,\dots,\xi_{n-2}, -\frac{\xi_1^2+\dots+\xi_{n-2}^2 +\xi_n}{\xi_{n-1}} ,1 \bigg). \end{equation} If $n=2$, this reduces to $\theta(\xi) = (-\xi_2/\xi_1,1)$. Clearly, $\theta(\xi)$ satisfies \r{th}. Moreover, we have \be{45_2} \frac{\partial\theta}{\partial \xi_\nu}(\xi^0) = e_\nu, \quad \nu=1,\dots,n-2, \quad \frac{\partial\theta}{\partial \xi_{n-1}}(\xi^0) =0, \quad \frac{\partial\theta}{\partial \xi_{n}}(\xi^0) = -e_{n-1}, \end{equation} In particular, the differential of the map $S^{n-1}\ni \xi \mapsto \theta'(\xi)$ is invertible at $\xi=\xi^0=e^{n-1}$. \begin{lemma} \label{lemma_phase} Let $\theta(\xi)$ be as in \r{45_1}, and $\varphi(x,\xi)$ be as in \r{45a}. Then there exists $\delta>0$ such that if \[ \varphi_\xi(x,\xi) = \varphi_\xi(y,\xi) \] for some $x\in U$, $|y|<\delta$, $|\xi-\xi^0|<\delta$, $\xi$ complex, then $y=x$. \end{lemma} \begin{proof} We will study first the case $y=0$, $\xi=\xi^0$, $x'=0$. Since $\varphi_\xi(0,\xi)=0$, we need to show that $\varphi_\xi((0,x^n),\xi^0)=0$ for $(0,x^n)\in U$ (i.e., for $l^-<x^n< l^+$) implies $x^n=0$. To compute $\varphi_\xi(x,\xi^0)$, we need first to know $\partial z'(x,\theta')/\partial \theta'$ at $\theta'=0$. 
Differentiate $\gamma'_{(z',0),(\theta',1)}(t)=x'$ w.r.t.\ $\theta'$, where $t=t(x,\theta')$, $z'=z'(x,\theta')$, to get \[ \partial_{\theta_\nu} \gamma'_{(z',0),(\theta',1)}(t) + \partial_{z'} \gamma'_{(z',0),(\theta',1)}(t)\cdot \frac{\partial z'}{\partial \theta_\nu} +\dot \gamma'_{(z',0),(\theta',1)}(t) \frac{\partial t}{\partial \theta_\nu} =0. \] Plug $\theta'=0$. Since $\partial t/\partial \theta'=0$ at $\theta'=0$, we get \[ \frac{\partial z'}{\partial \theta_\nu} = - \partial_{\theta_\nu} \gamma'_{(z',0),(\theta',1)}(x^n)\Big|_{\theta'=0,x'=0} = - J'_\nu(x^n), \] where the prime denotes the first $n-1$ components, as usual; $J_\nu(x^n)$ is the Jacobi field along the geodesic $x^n\mapsto \gamma_0(x^n)$ with initial conditions $J_\nu(0)=0$, $DJ_\nu(0)=e_\nu$; and $D$ stands for the covariant derivative along $\gamma_0$. Since $z'((0,x^n),\theta'(\xi^0))=0$, by \r{45a} we then get \[ \frac{\partial\varphi}{\partial\xi_l} ((0,x^n),\xi^0) = - \frac{\partial \theta^\mu}{\partial \xi_l}(\xi^0) J_\mu(x^n)\cdot (\xi^0)'. \] By \r{45_2}, (recall that $\xi^0=e^{n-1}$), \be{45_3} \frac{\partial\varphi}{\partial\xi_l} ((0,x^n),\xi^0) = \begin{cases} -J_l^{n-1}(x^n), & l =1,\dots,n-2,\\ 0, & l=n-1,\\ J_{n-1}^{n-1}(x^n), & l =n, \end{cases} \end{equation} where $J^{n-1}_\nu$ is the $(n-1)$-th component of $J_\nu$. Now, assuming that the l.h.s.\ of \r{45_3} vanishes for some fixed $x^n=t_0$, we get that $J_\nu^{n-1}(t_0)=0$, $\nu=1,\dots,n-1$. On the other hand, $J_\nu$ are orthogonal to $e_n$ because the initial conditions $J_\nu(0)=0$, $DJ_\nu(0)=e_\nu$ are orthogonal to $e_n$, too. Since $g_{in}=\delta_{in}$, this means that $J_\nu^n=0$. Therefore, $J_\nu(t_0)$, $\nu=1,\dots,n-1$, form a linearly dependent system of vectors, thus some non-trivial linear combination $a^\nu J_\nu(t_0)$ vanishes. Then the solution $J_0(t)$ of the Jacobi equation along $\gamma_0$ with initial conditions $J_0(0)=0$, $DJ_0(0)=a^\nu e_\nu$ satisfies $J(t_0)=0$. Since $DJ_0(0)\not=0$, $J_0$ is not identically zero. Therefore, we get that $x_0=0$ and $x=(0,t_0)$ are conjugate points. Since $\gamma_0$ is a simple geodesic $x_0$, we must have $t_0=0=x^n$. The same proof applies if $x'\not=0$ by shifting the $x'$ coordinates. Let now $y$, $\xi$ and $x$ be as in the Lemma. The lemma is clearly true for $x$ in the ball $B(0,\varepsilon_1) = \{|x|<\varepsilon_1\}$, where $\varepsilon_1\ll1$, because $\varphi(0,\xi^0)$ is non-degenerate. On the other hand, $\varphi_\xi(x,\xi)\not= \varphi_\xi(y,\xi)$ for $x\in \bar U\setminus B(0,\varepsilon_1)$, $y=0$, $\xi=\xi^0$. Hence, we still have $\varphi_\xi(x,\xi)\not= \varphi_\xi(y,\xi)$ for a small perturbation of $y$ and $\xi$. \end{proof} The arguments that follow are close to those in \cite[Section~6]{KSU}. We will apply the complex stationary phase method \cite{Sj-Ast}. For $x$, $y$ as in Lemma~\ref{lemma_phase}, and $|\eta-\xi^0|\le\delta/\tilde C$, $\tilde C\gg2$, $\delta\ll1$, multiply \r{45} by $$ \tilde \chi(\xi-\eta)e^{\i \lambda( \i (\xi-\eta)^2/2 -\varphi(y,\xi) )}, $$ where $\tilde \chi$ is the characteristic function of the ball $B(0,\delta)\subset \mathbf{C}^n$, and integrate w.r.t.\ $\xi$ to get \be{46aa} \iint e^{\i\lambda \Phi(y,x,\eta,\xi)}\tilde{\tilde{a}}_N(x,\xi,\eta) f_{ij}( x) \tilde b^i(x,\xi) \tilde b^j(x,\xi)\, \d x\, \d \xi=0. 
\end{equation} Here $\tilde{\tilde{a}}_N = \tilde \chi(\xi-\eta)\tilde a_N$ is another amplitude, analytic and elliptic for $x$ close to $0$, $|\xi-\eta| <\delta/\tilde C$, and \[ \Phi = -\varphi(y,\xi)+\varphi(x,\xi) +\frac{\i}2 (\xi-\eta)^2. \] We study the critical points of $\xi\mapsto \Phi$. If $y=x$, there is a unique (real) critical point $\xi_{\rm c}=\eta$, and it satisfies $\Im\Phi_{\xi\xi} >0$ at $\xi= \xi_{\rm c}$. For $y\not=x$, there is no real critical point by Lemma~\ref{lemma_phase}. On the other hand, again by Lemma~\ref{lemma_phase}, there is no (complex) critical point if $|x-y|>\delta/C_1$ with some $C_1>0$, and there is a unique complex critical point $\xi_{\rm c}$ if $|x-y|<\delta/C_2$, with some $C_2>C_1$, still non-degenerate if $\delta\ll1$. For any $C_0>0$, if we integrate in \r{46aa} for $|x-y|>\delta/C_0$, and use the fact that $|\Phi_\xi|$ has a positive lower bound (for $\xi$ real), we get \be{45_5} \bigg| \iint_{|x-y|>\delta/C_0} e^{\i\lambda \Phi(y,x,\eta,\xi)}\tilde{\tilde{a}}_N(x,\xi,\eta) f_{ij}( x) \tilde b^i(x,\xi) \tilde b^j(x,\xi)\, \d x\, \d \xi \bigg| \le C_3(C_3N/\lambda)^N +CNe^{-\lambda/C}. \end{equation} Estimate \r{45_5} is obtained by integrating $N$ times by parts, using the identity \[ Le^{\i\lambda \Phi} = e^{\i\lambda \Phi}, \quad L := \frac{\bar} \newcommand{\wtilde}{\tilde} \newcommand{\upDelta}{\Delta} \newcommand{\PARENS}[1]{ \left( #1 \right) \Phi_\xi\cdot \partial_\xi}{\i\lambda|\Phi_\xi|^2} \] as well as using the estimate \r{N}, and the fact that on the boundary of integration in $\xi$, the $e^{\i\lambda\Phi}$ is exponentially small. Choose $C_0\gg C_2$. Note that $\Im \Phi>0$ for $\xi\in\partial (\supp \tilde\chi(\cdot-\eta))$, and $\eta$ as above, as long as $\tilde C\gg1$, and by choosing $C_0\gg1$, we can make sure that $\xi_{\rm c} $ is as close to $\eta$, as we want. To estimate \r{46aa} for $|x-y|<\delta/C_0$, set $$ \psi(x,y,\eta) := \Phi\big|_{\xi=\xi_{\text{c}}}. $$ Note that $\xi_{\text{c}} =-\i(y-x)+\eta+O(\delta)$, and $\psi(x,y,\eta) = \eta\cdot (x-y) +\frac{\i}2 |x-y|^2+O(\delta)$. We will not use this to study the properties of $\psi$, however. Instead, observe that at $y=x$ we have \be{46a} \psi_y(x,x,\eta) = -\varphi_x(x,\eta), \quad \psi_x(x,x,\eta) = \varphi_x(x,\eta), \quad \psi(x,x,\eta)=0. \end{equation} We also get that \be{46b} \Im \psi(y,x,\eta) \ge |x-y|^2/C. \end{equation} The latter can be obtained by setting $h=y-x$ and expanding in powers of $h$. The stationary complex phase method \cite{Sj-Ast}, see Theorem~2.8 there and the remark after it, gives \be{47} \int_{|x-y|\le \delta/C_0} e^{\i \lambda \psi(x,\alpha)} f_{ij}( x) B^{ij}(x,\alpha; \lambda) \, \d x = O\big( \lambda^{n/2}(C_3N/\lambda)^N +Ne^{-\lambda/C} \big), \quad \forall N, \end{equation} where $\alpha = (y,\eta)$, and $B$ is a classical analytic symbol \cite{Sj-Ast} with principal part equal to $\tilde b\otimes \tilde b$, up to an elliptic factor. The l.h.s.\ above is independent of $N$, and choosing $N$ so that $N\le \lambda/(C_3e)\le N+1$ to conclude that the r.h.s.\ above is $O(e^{-\lambda/C})$. In preparation for applying the characterization of an analytic wave front set through a generalized FBI transform \cite{Sj-Ast}, define the transform $$ \alpha \longmapsto \beta = \left(\alpha_x, \nabla_{\alpha_x}\varphi(\alpha)\right), $$ where, following \cite{Sj-Ast}, $\alpha=(\alpha_x,\alpha_\xi)$. It is a diffeomorphism from $\n(0,\xi^0)$ to its image, and denote the inverse one by $\alpha(\beta)$. 
Note that this map and its inverse preserve the first (n-dimensional) component and change only the second one. This is equivalent to setting $\alpha=(y,\eta)$, $\beta = (y,\zeta)$, where $\zeta = \varphi_y(y,\eta)$. Note that $\zeta =\eta+O(\delta)$, and at $y=0$, we have $\zeta=\eta$. Plug $\alpha=\alpha(\beta)$ in \r{47} to get \be{48} \int e^{\i \lambda \psi(x,\beta)} f_{ij}( x)B^{ij}(x,\beta; \lambda) \, \d x = O\big( e^{-\lambda/C} \big), \end{equation} where $\psi$, $B$ are (different) functions having the same properties as above. Then \be{49} \psi_y(x,x,\zeta) = -\zeta, \quad \psi_x(x,x,\zeta) = \zeta, \quad \psi(x,x,\zeta)=0. \end{equation} The symbols in \r{48} satisfy \be{48_1} \sigma_p(B)(0,0,\zeta) \equiv\theta(\zeta)\otimes \theta(\zeta) \quad \text{up to an elliptic factor}, \end{equation} and in particular, $\sigma_p(B)(0,0,\xi^0)\equiv e_n\otimes e_n$, where $\sigma_p$ stands for the principal symbol. Let $\theta_1=e_n, \, \theta_2, \dots,\theta_N$ be $N=n(n-1)/2$ unit vectors at $x_0=0$, normal to $\xi^0=e^{n-1}$ such that any constant symmetric 2-tensor $f$ such that $f_i^{n-1}=0$, $\forall i$ (i.e., $f_i^j\xi^0_j=0$) is uniquely determined by $f_{ij}\theta^i\theta^j$, $\theta=\theta_p$, $p=1,\dots,N$. Existence of such vectors is easy to establish, as mentioned above, and one can also see that such a set exists in any open set in $(\xi^0)^\perp$. We can therefore assume that $\theta_p$ belong to a small enough neighborhood of $\theta_1=e_n$ such that the geodesics $[-l^-,l^+] \ni t\mapsto \gamma_{0,\theta_p}(t)$ through $x_0=0$ are all simple. Then we can rotate a bit the coordinate system such that $\xi^0=e^{n-1}$ again, and $\theta_p=e_n$, and repeat the construction above. This gives us $N$ phase functions $\psi_{(p)}$, and as many symbols $B_{(p)}$ in \r{48} such that \r{49} holds for all of them, i.e., in the coordinate system related to $\theta_1=e_n$, we have \be{48_1a} \int e^{\i \lambda \psi_{(p)}(x,\beta)} f_{ij}( x)B^{ij}_{(p)}(x,\beta; \lambda)\, \d x = O\big( e^{-\lambda/C} \big), \quad p=1,\dots,N, \end{equation} and by \r{48_1}, \be{48_2} \sigma_p(B_{(p)})(0,0,\xi^0) \equiv \theta_p \otimes\theta_p, \quad p=1,\dots,N. \end{equation} Recall that $\delta f=0$ near $x_0=0$. Let $\chi_0=\chi_0(x)$ be a smooth cutoff close enough to $x=0$, equal to $1$ in $\n(0)$. Integrate $\frac1{\lambda} \exp\big(\i \lambda \psi_{(1)}(x,\beta)\big) \chi_0\delta f =0$ w.r.t.\ $x$, and by \r{46b}, after an integration by parts, we get \be{49a} \int e^{\i \lambda \psi_{(1)}(x,\beta)} \chi_0(x)f_{ij}(x) C^j(x,\beta;\lambda)\, \d x= O\big( e^{-\lambda/C} \big), \quad i=1,\dots,n, \end{equation} for $\beta_x=y$ small enough, where $\sigma_p(C^j)(0,0,\xi^0)=(\xi^0)^j$. Now, the system of $N+n = (n+1)n/2$ equations \r{48_1a}, \r{49a} can be viewed as a tensor-valued operator applied to the tensor $f$. Its symbol, an elliptic factor at $(0,0,\xi^0)$, has ``rows'' given by $\theta_p^i \theta_p^j$, $p=1,\dots,N$; and $\delta^i_k(\xi^0)^j$, $k=1,\dots,n$. It is easy to see that it is elliptic; indeed, the latter is equivalent to the statement that if for some (constant) symmetric 2-tensor $f$, in Euclidean geometry (because $g_{ij}(0)=\delta_{ij}$), we have $f_{ij}\theta_p^i \theta_p^j=0$, $p=1,\dots,N$; and $f_i^{n-1}=0$, $i=1,\dots,n$, then $f=0$. This however follows from the way we chose $\theta_p$. Therefore, \r{39} is a consequence of \r{48_1a}, \r{49a}, see \cite[Definition~6.1]{Sj-Ast}. 
Note that in \cite{Sj-Ast}, it is required that $f$ be replaced by $\bar f$ in \r{48_1a}, \r{49a}. If $f$ is complex-valued, we could use the fact that $I(\Re f)(\gamma)=0$ and $I(\Im f)(\gamma)=0$ for $\gamma$ near $\gamma_0$, and then work with real-valued $f$'s only. Since the phase functions in \r{48_1a} depend on $p$, we need to explain why the characterization of the analytic wave front sets in \cite{Sj-Ast} can be generalized to this vector-valued case. The needed modifications are as follows. We define $h^{ij}_{(p)}(x,\beta;\lambda) = B_{(p)}^{ij}$, $p=1,\dots,N$; and $h^{ij}_{(N+k)}(x,\beta;\lambda) = C^{j}\delta^i_{k}$, $k=1,\dots,n$. Then $\{h^{ij}_{(p)}\}$, $p=1,\dots,N+n$, is an elliptic symbol near $(0,0,\xi^0)$. In the proof of \cite[Prop.~6.2]{Sj-Ast}, under the conditions \r{46b}, \r{49}, the operator $Q$ given by $$ [Qf]_p(x,\lambda) = \iint e^{\i \lambda( \psi_{(p)}(x,\beta) - \overline{\psi_{(p)}(y,\beta)} )} f_{ij}( y,\lambda) h_{(p)}^{ij}(x,\beta; \lambda) \, \d y\, \d \beta $$ is a $\Psi$DO\ in the complex domain with an elliptic matrix-valued symbol, where we view $f$ and $Qf$ as vectors in ${\bf R}^{N+n}$. Therefore, it admits a parametrix in $H_{\psi,x_0}$ with a suitable $\psi$ (see \cite{Sj-Ast}). Hence, one can find an analytic classical matrix-valued symbol $r(x,\beta,\lambda)$ defined near $(0,0,\xi^0)$, such that for any constant symmetric $f$ we have $$ \left[Q\left( r(\cdot,\beta,\lambda) e^{\i \lambda\psi_{(1)}} f\right) \right]_p = e^{\i \lambda\psi_{(1)}}f, \quad \forall p. $$ The rest of the proof is identical to that of \cite[Prop.~6.2]{Sj-Ast} and allows us to show that \r{48} is preserved with a different choice of the phase functions satisfying \r{46b}, \r{49}, and elliptic amplitudes; in particular, $$ \int e^{\i \lambda\psi_{(1)}(x,\beta)} \chi_2(x) f_{ij}(x) \, \d x = O\big( e^{-\lambda/C} \big) , \quad \forall i,j $$ for $\beta\in\n(0,\xi^0)$ and for some standard cut-off $\chi_2$ near $x=0$. This proves \r{39}, see \cite[Definition~6.1]{Sj-Ast}. This concludes the proof of Proposition~\ref{lemma_wf}. Notice that the proof works in the same way if $f$ is a distribution-valued tensor field supported in $M$. \end{proof} \begin{lemma} \label{pr_an} Under the assumptions of Theorem~\ref{thm_an}, let $f$ be such that $I_\Gamma f=0$. Then $f^s\in\mathcal{A}(M)$. \end{lemma} \begin{proof} Proposition~\ref{lemma_wf}, combined with the completeness of $\Gamma$, implies that $f^s$ is analytic in the interior of $M$. To prove analyticity up to the boundary, we do the following. We can assume that $M_1\setminus M$ is defined by $-\varepsilon_1\le x^n\le 0$, where $x^n$ is a boundary normal coordinate. Define the manifold $M_{1/2}\supset M$ by $x^n\ge -\varepsilon_1/2$, more precisely, $M_{1/2} = M\cup \{-\varepsilon_1/2\le x^n\le0\}\subset M_1$. We will show first that $f^s_{M_{1/2}}\in \mathcal{A}(M_{1/2})$. Let us first notice that, in $M_{1/2}\setminus M$, $f^s_{M_{1/2}} = -dv_{M_{1/2}}$, where $v_{M_{1/2}}$ satisfies $\Delta^s v_{M_{1/2}}=0$ in $M_{1/2}\setminus M$, $v|_{\partial M_{1/2}}=0$. Therefore, $v_{M_{1/2}}$ is analytic up to $\partial M_{1/2}$ in $M_{1/2}\setminus M$, see \cite{MN, SU-rig}. Hence, we only need to show that $f^s_{M_{1/2}}$ is analytic in some neighborhood of $M$. This however follows from Proposition~\ref{lemma_wf}, applied to $M_{1/2}$.
Note that if $\varepsilon_1\ll1$, simple geodesics through some $x\in M$ would have endpoints outside $M_{1/2}$ as well, and by a compactness argument, we need finitely many such geodesics to show that Proposition~\ref{lemma_wf} implies that $f^s_{M_{1/2}}$ is analytic in, say, $M_{1/4}$, where the latter is defined similarly to $M_{1/2}$ by $x^n\ge -\varepsilon_1/4$. To compare $f^s_{M_{1/2}}$ and $f^s = f^s_M$, see also \cite{SU-Duke, SU-rig}, write $f^s_{M_{1/2}} = f-dv_{M_{1/2}}$ in $M_{1/2}$, and $f^s_M = f-dv_M$ in $M$. Then $dv_{M_{1/2}}= -f^s_{M_{1/2}}$ in $M_{1/2}\setminus M$, and is therefore analytic there, up to $\partial M$. Given $x\in\partial M$, integrate $\langle dv_{M_{1/2}}, \dot\gamma^2 \rangle$ along geodesics in $M_{1/2}\setminus M$, close to ones normal to the boundary, with initial point $x$ and endpoints on $\partial M_{1/2}$. Then we get that $v_{M_{1/2}}|_{\partial M} \in \mathcal{A}(\partial M)$. Note that $v_{M_{1/2}}\in H^1$ near $\partial M$, so taking the trace on $\partial M$ is well defined, and moreover, if $x^n$ is a boundary normal coordinate, then $\n(0)\ni x^n \mapsto v_{M_{1/2}}(\cdot,x^n)$ is continuous. Now, \be{50} f^s_M = f-dv_M = f^s_{M_{1/2}} +dw \quad \mbox{in $M$,}\quad \mbox{where $w = v_{M_{1/2}}-v_M$.} \end{equation} The vector field $w$ solves $$ \upDelta^s w = 0, \quad w|_{\partial M} = v_{M_{1/2}}|_{\partial M} \in\mathcal{A}(\partial M). $$ Therefore, $w\in \mathcal{A}(M)$, and by \r{50}, $f^s_M\in \mathcal{A}(M)$. This completes the proof of Lemma~\ref{pr_an}. \end{proof} \begin{proof}[Proof of Theorem~\ref{thm_an}] Let $I_\Gamma f=0$. We can assume first that $f=f^s$, and then $f\in\mathcal{A}(M)$ by Lemma~\ref{pr_an}. By Lemma~\ref{lemma_bd}, there exists $h \in \mathcal{S}^{-1} \mathcal{S} f$ such that $\partial^\alpha h=0$ on $\partial M$ for all $\alpha$. The tensor field $h$ satisfies \r{l1_2}, i.e., $h_{ni}=0$, $\forall i$, in boundary normal coordinates, which is achieved by setting $h=f-dv_0$, where $v_0$ solves \r{a1} near $\partial M$. Then $v_0$, and therefore $h$, is analytic for small $x^n\ge0$, up to $x^n=0$. Lemma~\ref{lemma_bd} then implies that $h=0$ in $\n(\partial M)$. So we get that \be{50a} f=dv_0, \quad 0\le x^n<\varepsilon_0,\quad \text{with $ v_0|_{x^n=0}=0$}, \end{equation} where $x^n$ is a global normal coordinate, and $0<\varepsilon_0\ll1$. Note that the solution $v_0$ to \r{50a} (if it exists, and in this case we know it does) is unique, as can be easily seen by integrating $\langle f,\dot\gamma^2\rangle$ along paths close to normal ones to $\partial M$ and using \r{v}. We show next that $v_0$ admits an analytic continuation from a neighborhood of any $x_1\in\partial M$ along any path in $M$. Fix $x\in M$. Let $c(t)$, $0\le t\le1$, be a path in $M$ such that $c(0)=x_0 \in\partial M$ and $c(1)=x$. Given $\varepsilon>0$, one can find a polygon $x_0x_1\dots x_k x$ consisting of geodesic segments of length not exceeding $\varepsilon$ that is close enough to $c$, and therefore homotopic to it. One can also assume that the first segment is transversal to $\partial M$, and if $x\in\partial M$, the last one is transversal to $\partial M$ as well; and all other points of the polygon are in $M^\text{\rm int}$. We choose $\varepsilon\ll1$ so that there are no conjugate points on each geodesic segment above. We also assume that $\varepsilon\le\varepsilon_0$. Then $f=dv$ near $x_0x_1$ with $v=v_0$ by \r{50a}.
As in the second paragraph of Section~\ref{sec_sm}, one can choose semigeodesic coordinates $(x',x^n)$ near $x_1x_2$, and a small enough hypersurface $H_1$ through $x_1$ given locally by $x^n=0$. As in Lemma~\ref{lemma_bd}, one can find an analytic 1-form $v_1$ defined near $x_1x_2$, so that $(f-dv_1)_{in}=0$, $v_1|_{x^n=0}=v_0(x',0)$. Close enough to $x_1$, we have $v_1=v_0$ because $v_0$ is also a solution, and the solution is unique, see also \r{a1'}. Since $v_1$ is analytic, we get that it is an analytic extension of $v_0$ along $x_1x_2$. Since $f$ and $v_1$ are both analytic in $\n(x_1x_2)$, and $f=dv_1$ near $x_1$, the equality $f=dv_1$ also holds in $\n(x_1x_2)$. So we have extended $v_0$ along $x_0x_1x_2$; let us call this extension $v$. Then we do the same thing near $x_2x_3$, etc., until we reach $\n(x)$, and then $f=dv$ there. This defines $v$ in $\n(x)$, where $x\in M$ was chosen arbitrarily. It remains to show that this definition is independent of the choice of the path. Choose another path that connects some $y_1\in\partial M$ and $x$. Combine the two to get a path that connects $x_1\in \partial M$ and $y_1\in\partial M$. It suffices to prove that the analytic continuation of $v_0$ from $x_1$ to $y_1$ equals $v_0$ again. Let $c_1\cup \gamma_1 \cup c_2\cup\gamma_2\cup\dots\cup \gamma_k \cup c_{k+1}$ be the polygon homotopic to the path above. Analytic continuation along $c_1$ coincides with $v_0$ again by \r{50a}. Next, let $p_1$, $p_2$ be the initial and the endpoint of $\gamma_1$, respectively, where $p_1$ is also the endpoint of $c_1$. We continue $v_0$ analytically from $\n(p_1)$ to $\n(p_2)$ along $\gamma_1$; let us call this continuation $v$. By what we showed above, $f=dv$ near $\gamma_1$. Since $If(\gamma_1)=0$ and $v(p_1)=0$, we get by \r{v} that $\langle v(p_2),\dot \gamma_1(l)\rangle =0$ as well, where $l$ is such that $\gamma_1(l)=p_2$. Using the assumption that $\gamma_1$ is transversal to $\partial M$ at both ends, one can perturb the tangent vector $\dot \gamma_1(l)$, and this will define a new geodesic through $p_2$ that hits $\partial M$ transversely again near $p_1$, where $v=v_0=0$. Since $\Gamma$ is open, the integral of $f$ over this geodesic vanishes again; therefore $\langle v(p_2),\xi_2\rangle =0$ for $\xi_2$ in an open set. Hence $v(p_2)=0$. Choose $q_2\in\partial M$ close enough to $p_2$, and $\eta_2$ close enough to $\xi_2$ (in a fixed chart). Then the geodesic through $(q_2,\eta_2)$ will hit $\partial M$ transversally close to $p_1$, and we can repeat the same arguments. We have therefore shown that $v=0$ on $\partial M$ near $p_2$. On the other hand, $v_0$ has the same property. Since $f=dv=dv_0$ there, by the remark after \r{50a}, we get that $v=v_0$ near $p_2$. We repeat this along all the legs of the polygon until we get that the analytic continuation $v$ of $v_0$ along the polygon, from $x_1$ to $y_1$, equals $v_0$ again. As a consequence of this, we get that $f=dv$ in $M$ with $v=0$ on $\partial M$. Since $f=f^s$, this implies $f=0$. This completes the proof of Theorem~\ref{thm_an}. \end{proof} \section{Proof of Theorems~\ref{thm_stab} and \ref{thm_I}} \begin{proof}[Proof of Theorem~\ref{thm_stab}] Theorem~\ref{thm_stab}(b), which also implies (a), is a consequence of Proposition~\ref{pr_2}, as shown in \cite{SU-rig}, see the proof of Theorem~2 and Proposition~4 there. Part (a) alone follows more directly from \cite[Prop.~V.3.1]{Ta1} and its generalization, see \cite[Thm~2]{SU-Duke}.
\end{proof} \begin{proof}[Proof of Theorem~\ref{thm_I}] First, note that for any analytic metric in $\mathcal{G}$, $I_{\Gamma_g}$ is s-injective by Theorem~\ref{thm_an}. We build $\mathcal{G}_s$ as a small enough neighborhood of the analytic metrics in $\mathcal{G}$. Then $\mathcal{G}_s$ is dense in $\mathcal{G}$ (in the $C^k(M_1)$ topology) since it includes the analytic metrics. To complete the definition of $\mathcal{G}_s$, fix an analytic $g_0\in \mathcal{G}$. By Lemma~\ref{lemma_H}, one can find $\mathcal{H}'\Subset\mathcal{H}$ related to $g=g_0$ and $\Gamma_g$, satisfying the assumptions of Theorem~\ref{thm_stab}, and they have the properties required for $g$ close enough to $g_0$. Let $\alpha$ be as in Theorem~\ref{thm_stab} with $\alpha=1$ on $\mathcal{H}'$. Then, by Theorem~\ref{thm_stab}, $I_{\alpha,g}$ is s-injective for $g$ close enough to $g_0$ in $C^k(M_1)$. By Lemma~\ref{lemma_1}, for any such $g$, $I_{\Gamma^\alpha}$ is s-injective, where $\Gamma^\alpha = \Gamma(\mathcal{H}^\alpha)$, $\mathcal{H}^\alpha = \supp\alpha$. If $g$ is close enough to $g_0$, $\Gamma^\alpha\subset \Gamma_g$ because when $g=g_0$, $\Gamma^\alpha\subset \Gamma(\mathcal{H})\Subset\Gamma_{g_0}$, and $\Gamma_g$ depends continuously on $g$ in the sense described before the formulation of Theorem~\ref{thm_I}. These arguments show that there is a neighborhood of each analytic $g_0\in\mathcal{G}$ with an s-injective $I_{\Gamma_g}$. Therefore, one can choose an open dense subset $\mathcal{G}_s$ of $\mathcal{G}$ with the same property. \end{proof} \begin{proof}[Proof of Corollary~\ref{cor_1}.] It is enough to notice that the set of all simple geodesics related to $g$ depends continuously on $g$ in the sense of Theorem~\ref{thm_I}. Then the proof follows from the paragraph above. \end{proof} \section{X-ray transform of functions and 1-forms/vector fields} \label{sec_f} If $f$ is a vector field on $M$, which we identify with a 1-form, then its X-ray transform is defined quite similarly to \r{I_G} by \be{I_v} I_\Gamma f(\gamma) = \int_0^{l_\gamma} \langle f(\gamma(t)), \dot \gamma(t) \rangle \,\d t, \quad \gamma\in \Gamma. \end{equation} If $f$ is a function on $M$, then we set \be{I_f} I_\Gamma f(\gamma) = \int_0^{l_\gamma} f(\gamma(t))\,\d t, \quad \gamma\in \Gamma. \end{equation} The latter case is a special case of the X-ray transform of 2-tensors; indeed, if $f =\alpha g$, where $f$ is a 2-tensor, $\alpha$ is a function, and $g$ is the metric, then $I_\Gamma f = I_\Gamma\alpha$, where in the l.h.s., $I_\Gamma$ is as in \r{I_G}, and on the right, $I_\Gamma$ is as in \r{I_f}. The proofs for the X-ray transform of functions are simpler, however, and in particular, there is no loss of derivatives in the estimate \r{est}, as in \cite{SU-Duke}. This is also true for the X-ray transform of vector fields, and the proofs are more transparent than those for tensors of order 2 (or higher). Without going into details (see \cite{SU-Duke} for the case of simple manifolds), we note that the main theorems in the Introduction remain true. In the case of 1-forms, estimate \r{est} can be improved to \be{est1} \|f^s\|_{L^2(M)}/C \le \|N_{\alpha} f\|_{H^1(M_1)} \le C\|f^s\|_{L^2(M)}, \end{equation} while in the case of functions, we have \be{est2} \|f\|_{L^2(M)}/C \le \|N_{\alpha} f\|_{H^1(M_1)} \le C\|f\|_{L^2(M)}. \end{equation} If $(M,\partial M)$ is simple, then the full X-ray transform of functions and 1-forms (over all geodesics) is injective, respectively s-injective, see \cite{Mu2, Mu-R, BG, AR}.
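The identification $I_\Gamma(\alpha g) = I_\Gamma\alpha$ used above can be verified directly: for a geodesic $\gamma$ parametrized by arc length, \[ \langle \alpha g(\gamma(t)),\dot\gamma^2(t)\rangle = \alpha(\gamma(t))\, g_{ij}(\gamma(t))\,\dot\gamma^i(t)\,\dot\gamma^j(t) = \alpha(\gamma(t)), \] since $g_{ij}\dot\gamma^i\dot\gamma^j=1$ along $\gamma$; integrating over $[0,l_\gamma]$ gives $I_\Gamma(\alpha g)(\gamma)=I_\Gamma\alpha(\gamma)$, with $I_\Gamma$ understood as in \r{I_G} on the left and as in \r{I_f} on the right.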
{ "attr-fineweb-edu": 1.047852, "attr-cc_en_topic": 12, "domain": "arxiv" }
\subsubsection{$t\bar{t}$} \label{sec:sig_vis_ttbar} A search for resonant \ensuremath{t\bar{t}}\xspace production in the $0\ell$ channel has been conducted by the ATLAS Collaboration using 139~\ensuremath{{\rm fb}^{-1}}\xspace of $\sqrt{s}=13$~TeV data~\cite{ATLAS:2020lks}. This search targets heavy vector and axial-vector resonances (including DM mediators) with masses $>1.4$~TeV, resulting in two merged top-quark decays. Merged top-quark decays are identified using a deep-neural net (DNN) based top tagger trained on the distributions of various characteristic jet and jet substructure variables to distinguish top-quark from light-quark and gluon initiated jets. SM \ensuremath{t\bar{t}}\xspace production constitutes the main, irreducible background to this search, followed by strong multi-jet production. The background spectrum is derived from data by fitting a smoothly falling function to the reconstructed $m_{\ensuremath{t\bar{t}}\xspace}$ distribution, similar to the approach classically chosen in di-jet resonance searches. A larger range of resonance masses has been probed by a search for resonant \ensuremath{t\bar{t}}\xspace production in the $1\ell$ channel, conducted by the ATLAS Collaboration on 36~\ensuremath{{\rm fb}^{-1}}\xspace of $\sqrt{s}=13$~\ensuremath{\rm TeV}\xspace data~\cite{ATLAS:2018rvc}. This search targets both \textit{merged} and \textit{resolved} hadronic top-quark decays and is sensitive to resonance masses just above the \ensuremath{t\bar{t}}\xspace kinematic threshold ($>2\ensuremath{m_{\mathrm{top}}}\xspace$). The main, irreducible background from SM \ensuremath{t\bar{t}}\xspace production, as well as most other, smaller backgrounds, are estimated using MC simulation. Data-driven corrections are applied to the MC simulation of the $W$+jets background. The small background from strong multi-jet production is estimated with a fully data-driven approach. A first search for heavy spin-1 resonances combining final states with 0, 1 and 2 leptons has been performed by the CMS Collaboration using data recorded at \ensuremath{\sqrt{s}}\xspace=13~\ensuremath{\rm TeV}\xspace and corresponding to a total integrated luminosity of 35.9~\ensuremath{{\rm fb}^{-1}}\xspace~\cite{CMS:2018rkg}. The analysis utilises reconstruction techniques that are optimised for top quarks with high Lorentz boosts, which requires the use of non-isolated leptons partially overlapping with $b$-quark jets and jet substructure techniques for top-quark tagging. Except for the QCD multijet background in the 0-lepton channel, the shapes of all backgrounds are estimated from MC simulation. The signal strength is extracted from the distributions of the reconstructed invariant mass of the top quark pair for the 0- and 1-lepton channels and from the sum of missing transverse energy and the transverse momenta of all jets and leptons in the 2-lepton channel. Interference effects between the resonant signal and background processes are not taken into account in the searches discussed above as they are irrelevant for spin-1 and spin-2 particles. However, this is not true for scalar and pseudoscalar resonances, such as additional heavy Higgs bosons, which are produced from $gg$ initial states via heavy quark loops. The process $gg\rightarrow A/H \rightarrow \ensuremath{t\bar{t}}\xspace$ interferes strongly with the irreducible background from SM \ensuremath{t\bar{t}}\xspace production, which is dominated by $gg\rightarrow \ensuremath{t\bar{t}}\xspace$. 
Interference effects significantly distort the resonance lineshape from a Breit-Wigner peak to a characteristic peak-dip or even more complicated structures. The treatment of these effects is non-trivial and requires dedicated analysis methods, in particular in the statistical analysis. Searches for heavy scalars and pseudoscalars have been conducted by both the ATLAS~\cite{ATLAS:2017snw} and CMS Collaborations~\cite{CMS:2019pzc} in the $1\ell$ and $1\ell+2\ell$ channels, respectively. These searches are sensitive to the production of scalar and pseudoscalar DM mediators. However, due to the strong model-dependence of the interference patterns, no dedicated interpretation of these results in the context of DM models exists to date. An approximate re-interpretation of the results in Ref.~\cite{ATLAS:2017snw} in the context of the 2HDM+$a$ (Section~\ref{sec:2HDMa_model}) can be found in Ref.~\cite{Bauer:2017ota}. \subsubsection{$tbH^{\pm}(tb)$} \label{sec:sig_vis_tbtb} Final states with two top and two bottom quarks are sensitive to the associated production of a charged Higgs boson $H^{\pm}$ with a top and a bottom quark ($tb$) and its subsequent decay to $tb$. The ATLAS Collaboration has published a search for $tbH^{\pm}(tb)$ production using 139~\ensuremath{{\rm fb}^{-1}}\xspace of $\sqrt{s}=13$~TeV data~\cite{ATLAS:2021upq}. It targets charged Higgs boson masses in the range 0.2–2.0 TeV. Events are required to contain exactly one electron or muon to suppress the large backgrounds from strong multi(-$b$)-jet production. The selected events are further classified according to the number of reconstructed jets and the number of $b$-jets among them. A neural network is used to enhance the separation between signal and background. The dominant background for this search is composed of \ensuremath{t\bar{t}}\xspace+jets events as well as single-top production in the $Wt$ channel. The backgrounds are modelled using MC simulations with additional data-driven corrections derived in a dedicated control region. A search for charged Higgs bosons decaying into a top and a bottom quark in the 0-lepton final state has been performed by the CMS Collaboration using proton-proton collision data recorded at \ensuremath{\sqrt{s}}\xspace = 13~\ensuremath{\rm TeV}\xspace in 2016~\cite{CMS:2020imj}. Two different scenarios have been studied: associated production with a top and a bottom quark, and $s$-channel production of a charged Higgs boson. The results are combined with a search in final states with one or two leptons~\cite{CMS:2019rlz}. For production in association with a top quark, upper limits at the 95\% confidence level on the product of the charged Higgs boson production cross section and branching fraction, ranging from 9.25 to 0.005~pb, are obtained for charged Higgs boson masses in the range of 0.2 to 3~\ensuremath{\rm TeV}\xspace. While there is no DM interpretation of the result by the CMS Collaboration, the result from ATLAS was interpreted in a 2HDM+$a$ scenario, as further detailed in Section~\ref{sec:2HDMa_results}. \subsubsection{Same-sign $tt$} \label{sec:sig_vis_ss_tt} Events with a same-sign $tt$ pair are identified via the leptonic decays of the $W$ bosons from the two top quarks. They are required to contain two same-sign charged leptons, at least one $b$-jet, and significant \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace from the two neutrinos resulting from the leptonic $W$ boson decays.
A search in same-sign $tt$ events has been conducted by the ATLAS Collaboration, using 36~\ensuremath{{\rm fb}^{-1}}\xspace of $\sqrt{s}=13$~TeV data~\cite{ATLAS:2018alq}. The signal region of this search is defined by requiring the presence of two positively charged leptons ($e$,$\mu$) and at least one $b$-jet. Additionally, the scalar sum of the transverse momenta of all selected objects in the event, $H_T$, is required to be large ($H_T > 750$~GeV). Further requirements on the \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace and the angular separation of the two leptons are imposed. The signal region is split into three orthogonal channels based on the lepton flavour ($ee$, $e\mu$, $\mu\mu$). The main backgrounds of this search are estimated using MC simulation, while the sub-dominant background from fake leptons is estimated using data-driven techniques. \subsubsection{$t\bar{t}t\bar{t}$} \label{sec:sig_vis_4top} Final states with four top quarks ($\ensuremath{t\bar{t}}\xspace\ttbar$) can arise from non-resonant processes predicted in the SM, but are also predicted in BSM models that allow for the associated production of a heavy BSM resonance with a \ensuremath{t\bar{t}}\xspace pair, where the resonance subsequently decays to \ensuremath{t\bar{t}}\xspace. Four-top final states are particularly relevant in searches for heavy scalars and pseudoscalars, as the signal-background interference is negligible for associated production with \ensuremath{t\bar{t}}\xspace compared to loop-induced production from $gg$ initial states (Section~\ref{sec:sig_vis_ttbar}). It should be noted, though, that the production cross-section for associated production is significantly lower than for loop-induced production. Four-top final states are characterised by a high object multiplicity. Orthogonal signal regions can be defined based on the multiplicity of leptons ($e,\mu$) in the final state, which corresponds to the number of top quarks with a leptonically decaying $W$ boson. The ATLAS Collaboration has recently found evidence ($4.3~\sigma$ observed, $2.4~\sigma$ expected significance) for four-top quark production in a search focusing on the multi-lepton final state conducted on 139~\ensuremath{{\rm fb}^{-1}}\xspace of $\sqrt{s}=13$~TeV $pp$ collision data~\cite{ATLAS:2020hpj}. The result is consistent with the SM prediction for four-top production within $1.7\sigma$. A subsequent dedicated search for BSM four-top production on the same dataset specifically targets \ensuremath{t\bar{t}}\xspace associated production of heavy scalar or pseudoscalar Higgs bosons $A/H$ decaying to \ensuremath{t\bar{t}}\xspace (\ensuremath{t\bar{t}}\xspace $A/H\rightarrow \ensuremath{t\bar{t}}\xspace\ttbar$)~\cite{ATLAS:2022rws}. It is based on and extends the analysis strategy of Ref.~\cite{ATLAS:2020hpj} to increase the sensitivity to $A/H$ production. In both the SM and BSM searches, events are required to contain either a same-sign lepton pair or at least three leptons. A multivariate discriminant based on a Boosted Decision Tree (BDT) is used to separate SM four-top production from other background processes, using event-level information such as jet and $b$-jet multiplicity as well as additional kinematic variables. The BSM search relies on a second BDT to subsequently distinguish between BSM and SM four-top production. This second BDT is parameterised as a function of the mass of the heavy Higgs boson by introducing the mass as a labelled input in the training~\cite{Baldi:2016fzo}.
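The parameterised training of this second BDT, following Ref.~\cite{Baldi:2016fzo}, can be illustrated with a short sketch. The code below is a minimal illustration rather than the analysis implementation: the input features, the \texttt{toy\_events} generator and the mass grid are invented for the example, and a scikit-learn gradient-boosted classifier stands in for the actual BDT; only the key idea is retained, namely that signal events carry their true resonance mass as an additional input while background events are assigned a random mass from the same grid.
\begin{verbatim}
# Minimal sketch of a mass-parameterised classifier (illustration only).
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(42)
mass_grid = np.array([400.0, 600.0, 800.0, 1000.0])  # hypothetical mass points [GeV]

def toy_events(n, mass=None):
    """Generate toy kinematic features (stand-ins for HT, jet and b-jet counts)."""
    if mass is None:                      # background: no preferred mass scale
        ht = rng.exponential(300.0, n)
    else:                                 # signal: HT correlated with the resonance mass
        ht = rng.normal(0.8 * mass, 0.2 * mass, n)
    njet = rng.poisson(8, n)
    nbjet = rng.binomial(4, 0.5, n)
    return np.column_stack([ht, njet, nbjet])

# Signal: one sample per mass point, labelled with its true mass.
X_sig = np.vstack([toy_events(2000, m) for m in mass_grid])
m_sig = np.repeat(mass_grid, 2000)

# Background: mass label drawn randomly from the same grid (key idea of the method).
X_bkg = toy_events(8000)
m_bkg = rng.choice(mass_grid, size=len(X_bkg))

X = np.vstack([np.column_stack([X_sig, m_sig]), np.column_stack([X_bkg, m_bkg])])
y = np.concatenate([np.ones(len(X_sig)), np.zeros(len(X_bkg))])

clf = GradientBoostingClassifier(n_estimators=200, max_depth=3)
clf.fit(X, y)

# At evaluation time the same events can be scored under any mass hypothesis.
test = toy_events(5, 800.0)
for m in mass_grid:
    scores = clf.predict_proba(np.column_stack([test, np.full(len(test), m)]))[:, 1]
    print(f"mass hypothesis {m:6.0f} GeV -> mean score {scores.mean():.3f}")
\end{verbatim}
Scanning the mass input at evaluation time in this way lets a single training cover the full probed mass range.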
The main, irreducible backgrounds arise from associated production of a \ensuremath{t\bar{t}}\xspace pair with a boson and additional jets (\ensuremath{t\bar{t}}\xspace+$W$+jets, \ensuremath{t\bar{t}}\xspace+$Z$+jets, \ensuremath{t\bar{t}}\xspace+$h$+jets). They are estimated using MC simulations with additional data-driven corrections applied in the case of \ensuremath{t\bar{t}}\xspace+$W$+jets production. Smaller, reducible backgrounds arise mostly from \ensuremath{t\bar{t}}\xspace+jets and $tW$+jets production with mis-identified charge or fake/non-prompt leptons. These smaller backgrounds are estimated from data using dedicated control regions. No significant excess of events over the SM prediction is observed in the BSM four-top search and the results are interpreted in the context of a type-II 2HDM. No dedicated interpretation in the context of DM models has been performed. The constraints on the type-II 2HDM with $m_A=m_H$, however, indicate that this search can improve upon the current four-top constraints on the 2HDM+$a$ parameter space included in the latest 2HDM+$a$ summary plots of Ref.~\cite{ATLAS:DMSum} (Section~\ref{sec:2HDMa_results}), which are based on a search in the single-lepton channel using 36~\ensuremath{{\rm fb}^{-1}}\xspace of $\sqrt{s}=13$~TeV data~\cite{ATLAS:2017oes}. The CMS Collaboration has reported an observed (expected) significance for $\ensuremath{t\bar{t}}\xspace\ttbar$ of $2.6~\sigma$ ($2.7~\sigma$) in the multi-lepton channel using 137~\ensuremath{{\rm fb}^{-1}}\xspace of $\sqrt{s}=13$~TeV $pp$ collision data~\cite{CMS:2019rvj}. The search relies on a new multivariate classifier to maximize the sensitivity to the SM $\ensuremath{t\bar{t}}\xspace\ttbar$ signal. As in the equivalent ATLAS search, the main backgrounds from $\ensuremath{t\bar{t}}\xspace$+boson+jets production are estimated using MC simulations. Data-driven corrections are applied in the cases of \ensuremath{t\bar{t}}\xspace+$W$+jets and \ensuremath{t\bar{t}}\xspace+$Z$+jets production. Backgrounds arising from charge mis-identification or fake/non-prompt leptons are estimated from data. This result has been used to constrain scalar and pseudoscalar production in 2HDMs as well as in the simplified DM model with a scalar or pseudoscalar mediator (Section~\ref{sec:SPS_model}). No dedicated interpretation for the 2HDM+$a$ is available, although the constraints on type-II 2HDMs suggest that the search will also constrain the 2HDM+$a$ parameter space. The searches described above have been optimised for non-resonant $\ensuremath{t\bar{t}}\xspace\ttbar$ production and/or production of heavy scalar or pseudoscalar resonances, including resonance masses below 1 TeV. An additional search targeting top-philic vector and axial-vector ($Z'$) resonances with masses $>1$~TeV has been conducted by the ATLAS Collaboration. The preliminary result relies on 139~$\ensuremath{{\rm fb}^{-1}}\xspace$ of $\sqrt{s}=13$~TeV data~\cite{ATLAS:ttZp}. Unlike other searches in the $\ensuremath{t\bar{t}}\xspace\ttbar$ final state, this search is designed to reconstruct the BSM resonance explicitly from a pair of re-clustered jets identified as merged top quarks. The results can in principle be used to constrain purely top-philic vector or axial-vector mediators to which classic \ensuremath{t\bar{t}}\xspace resonance searches, which assume $Z'$ production from light-quark or gluon initial states (Section~\ref{sec:sig_vis_ttbar}) may not be sensitive. 
A dedicated interpretation of this search in the context of DM models is left to future work. \subsubsection{\ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tW$ and \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tj$} \label{sec:sig_inv_mettWtj} Like the \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t$ searches described in Section~\ref{sec:sig_inv_mett}, searches for \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tW$ target events with single top quarks produced in association with large \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace but additionally require the existence of a second visible object. This can be either a $W$ boson or a hadronic jet. The resulting signatures are referred to as \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tW$ and \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tj$, respectively. It should be noted that searches in these final states are not orthogonal to the \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t$ searches discussed in Section~\ref{sec:sig_inv_mett} as the latter do not veto the presence of additional visible objects in the event and hence implicitly include \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tj$ and \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tW$ signatures. While \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t$ searches are traditionally used to constrain resonant DM production via a colour-charged scalar mediator and non-resonant DM production via a vector mediator with a flavour-violating $V_{ut}$ coupling, as explained in Section~\ref{sec:sig_inv_mett}, \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tW$ searches in particular are used to probe the 2HDM+$a$ (Section~\ref{sec:2HDMa_model}) and more recently also simplified models with a scalar or pseudoscalar mediator (Section~\ref{sec:SPS_model}). Simplified models with a scalar or pseudoscalar mediator predict both \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tW$ and \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tj$ production, as illustrated by the two right-most diagrams in Figure~\ref{fig:SPS_Feyn}. The corresponding signal cross-sections are, up to mediator masses of 200~\ensuremath{\rm GeV}\xspace, smaller than those of the dominant \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t\bar{t}$ production mode discussed in Section~\ref{sec:sig_inv_ttmet}. Therefore, \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tW$ and \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tj$ searches have not been used to constrain these simplified models by the ATLAS Collaboration. However, with the increased sensitivity of recent searches, single top associated production becomes more and more relevant and a first search including \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tW$ and \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t\bar{t}$ signatures has been performed by the CMS Collaboration~\cite{CMS:2019zzl} as further discussed in Section~\ref{sec:sig_inv_mettttwtj}. \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tW$ and \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tj$ production is also predicted in the 2HDM+$a$. Compared to simplified models with a single (pseudo)scalar mediator, this model contains additional production modes, illustrated for example by the third diagram in Figure~\ref{fig:2HDM_Feyn}, which lead to higher predicted signal cross-sections for \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tW$ and \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tj$ production. 
A search for \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tW$ and \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tj$ signatures, optimised specifically for 2HDM+$a$ signal processes, has been conducted by the ATLAS Collaboration~\cite{ATLAS:2020yzc} using 139~fb$^{-1}$ of $\sqrt{s}=13$~TeV $pp$ collision data. The search considers events with one or two leptons ($e$,$\mu$), at least one $b$-tagged jet, and significant \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace in three orthogonal categories. Two of them target \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tW$ production in final states with one or two leptons, while the third channel targets \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tj$ production in final states with exactly one lepton. The search has been extended in the context of a preliminary analysis of the same dataset~\cite{ATLAS:METplusTopW} to include events with highly energetic $W$ boson decays in final states with zero leptons or one lepton. These channels provide additional sensitivity for large masses of the charged Higgs boson. The newly added zero- and improved one-lepton channels are statistically combined with the two-lepton channel of Ref.~\cite{ATLAS:2020yzc}. \subsubsection{\ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tW$, \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tj$ and \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+\ensuremath{t\bar{t}}\xspace} \label{sec:sig_inv_mettttwtj} A first result exploring topologies of single top quark and top-quark pair associated production has been released by the CMS Collaboration~\cite{cms_exo_18_010}. The analysis uses 36~\ensuremath{{\rm fb}^{-1}}\xspace of data recorded in 2016 at 13~\ensuremath{\rm TeV}\xspace and combines multiple selection categories in final states with 0 or 1 lepton. In the 1-lepton channel, the dominant background is suppressed using a strategy similar to the one discussed in Section~\ref{sec:sig_inv_ttmet}, while in the 0-lepton channel, the dominant background is reduced by cuts on the missing transverse energy, the ratio of the leading-jet transverse momentum to the total hadronic transverse energy in the event, and the minimum opening angle between the missing transverse energy and the two leading jets. To enhance the sensitivity to single top quark associated production, events are separated according to the number of identified $b$-quark jets. Events with a single $b$-tagged jet are further split into events with a central or forward jet. The categorisation in terms of forward jets allows a further enhancement of $t/\bar{t}$+DM $t$-channel events. This production mode leads to final states with one top quark and an additional jet, which tends to be in the forward region of the detector, while the additionally produced $b$ quark is typically low in transverse momentum and therefore not reconstructed. The key observable of this search is the \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace spectrum, explored in a combined fit to different orthogonal signal regions. Overall, the data are found to be in good agreement with the expected SM background. Due to the combination of single top quark and \ensuremath{t\bar{t}}\xspace associated production, this analysis was able to derive the most stringent limits from LHC data on spin-0 mediators at that time.
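The 0-lepton selection and categorisation just described can be summarised in a short pseudo-selection. The sketch below is purely illustrative: the event representation, the variable names and all numerical thresholds are assumptions made for the example (only the general structure of the cuts is taken from the text above) and are not the values used in the CMS analysis.
\begin{verbatim}
# Illustrative 0-lepton selection and categorisation (assumed thresholds).
import math
from dataclasses import dataclass, field

@dataclass
class Event:
    met: float                                   # missing transverse momentum [GeV]
    met_phi: float
    jet_pt: list = field(default_factory=list)   # leading jets first
    jet_phi: list = field(default_factory=list)
    jet_eta: list = field(default_factory=list)
    n_bjets: int = 0

def delta_phi(a, b):
    d = abs(a - b) % (2 * math.pi)
    return min(d, 2 * math.pi - d)

def category(ev, met_cut=250.0, r_cut=0.5, dphi_cut=1.2, forward_eta=2.4):
    """Return a signal-region label, or None if the event is rejected."""
    if ev.met < met_cut or len(ev.jet_pt) < 2:
        return None
    ht = sum(ev.jet_pt)
    if ev.jet_pt[0] / ht > r_cut:                # leading-jet pT over HT (assumed cut)
        return None
    if min(delta_phi(ev.met_phi, phi) for phi in ev.jet_phi[:2]) < dphi_cut:
        return None                              # suppress mismeasured multijet events
    if ev.n_bjets >= 2:
        return "2b"
    # single-b events: split by the presence of a forward jet (targets t-channel t+DM)
    has_forward = any(abs(eta) > forward_eta for eta in ev.jet_eta)
    return "1b_forward" if has_forward else "1b_central"

ev = Event(met=320.0, met_phi=0.1, jet_pt=[180.0, 150.0, 90.0],
           jet_phi=[2.0, -2.5, 1.0], jet_eta=[0.5, 3.1, -1.2], n_bjets=1)
print(category(ev))   # -> 1b_forward
\end{verbatim}
In the actual analysis, the \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace spectrum is then fitted simultaneously in all such categories.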
\subsubsection{\ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t\bar{t}$} \label{sec:sig_inv_ttmet} Searches for DM or DE production in association with a $t\bar{t}$ pair target final states characterised by sizeable \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace and the presence of the $t\bar{t}$ decay products. The CMS Collaboration has released a search for DM in association with a $t\bar{t}$ pair using 137~\ensuremath{{\rm fb}^{-1}}\xspace of data recorded at \ensuremath{\sqrt{s}}\xspace = 13~\ensuremath{\rm TeV}\xspace between 2016 and 2018~\cite{CMS:2021eha}. The analysis combines previous searches in final states with 0~\cite{cms_stop_0l}, 1~\cite{cms_stop_1l} or 2~\cite{cms_stop_2l} leptons. While the primary target of these analyses is top squark (stop) production, a re-interpretation of the combined result in a simplified DM model with scalar mediators is provided. The central feature of the analysis in the $0$-lepton channel is an advanced jet-tagging algorithm identifying hadronically decaying top quarks and $W$ bosons with low and high Lorentz boost. For the highly Lorentz-boosted regime, the DeepAK8 algorithm \cite{cms_deepak8} is used, whereas in the resolved regime the DeepResolved algorithm \cite{cms_stop_1l} is employed to tag top quarks in the intermediate transverse momentum range from 150 to 450~\ensuremath{\rm GeV}\xspace. The analysis includes a total of 183 non-overlapping signal regions. The contribution of each SM background process is estimated through measurements of event rates in dedicated background control samples that are translated to predicted event counts in the corresponding signal region with the aid of MC simulation. The key requirements in the $1$-lepton channel are exactly one lepton and $\ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace > 250~\ensuremath{\rm GeV}\xspace$. Moreover, the transverse mass computed from the lepton and the missing momentum is required to be larger than 150~\ensuremath{\rm GeV}\xspace to reduce the dominant background from SM \ensuremath{t\bar{t}}\xspace and $W$+jets production, for which the transverse mass has a natural cutoff at the mass of the $W$ boson. The SM production of dileptonic \ensuremath{t\bar{t}}\xspace events, where one of the leptons is lost, is the largest remaining background. It is estimated through a set of dedicated control regions and reduced by using the modified topness variable~\cite{cms_stop_1l}. The $1$-lepton channel also exploits the jet-tagging algorithms used in the $0$-lepton channel to identify hadronic top-quark decays. In order to enhance the sensitivity to different signal scenarios, including the case of small missing transverse momentum, events are categorised into a total of 39 non-overlapping signal regions. The search in the $2$-lepton channel explores orthogonal signal regions based on the flavour of the leptons and three characteristic observables: the so-called missing transverse momentum significance~\cite{cms_metsig} and two specific definitions of the stransverse mass~\cite{cms_stop_2l,cms_stransversemass}. The \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace significance is given by the ratio of the \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace to its resolution, and it is particularly powerful in suppressing events where detector effects and misreconstruction of particles from pileup interactions are the main source of missing transverse momentum.
The key feature of the stransverse mass built from the leptons (from the leptons and $b$-quark jets) is that it retains a kinematic endpoint at the $W$-boson (top-quark) mass for SM background events with two leptonically decaying $W$ bosons (top quarks). The dominant backgrounds arise from $t\bar{t}$ and $t\bar{t}+Z$ production as well as single-top quark production in the $Wt$ channel. After a veto of the $Z$-boson mass window, i.e.\ requiring $|m_{\ell\ell} - m_Z|> 15$~GeV, Drell-Yan production represents only a minor source of background. A similar search using 139~\ensuremath{{\rm fb}^{-1}}\xspace of LHC data has been released by the ATLAS Collaboration exploring separately the 0-lepton~\cite{ATLAS:2020dsf}, 1-lepton~\cite{ATLAS:2020xzu}, and 2-lepton~\cite{ATLAS:2021hza} channels. All three final states have subsequently been combined into a single result~\cite{ATLAS:METplustt}. In this context, the $0\ell$-channel search has been further optimised through an improved selection of triggers targeting $b$-jets. Searches for \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tW$ (Section~\ref{sec:sig_inv_mettWtj}) production have not been included in this combination as their datasets are not orthogonal to those of the \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t\bar{t}$ searches by construction. Including them in a statistical combination is left to future publications. While the \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t\bar{t}$ searches discussed above have so far been interpreted only in simplified models with a scalar or pseudoscalar mediator, see Section~\ref{sec:SPS_results}, earlier searches based on smaller datasets have already been used to constrain a 2HDM with a pseudoscalar mediator (Section~\ref{sec:2HDMa_results}) and a model of scalar DE (Section~\ref{sec:DE_model}). \subsubsection{\ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t$} \label{sec:sig_inv_mett} Searches for the production of large \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace in association with a single top quark have been conducted by both the ATLAS~\cite{ATLAS-CONF-2022-036} and CMS~\cite{CMS:2018gbj} Collaborations. The ATLAS Collaboration has performed a \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t$ search targeting merged hadronic top-quark decays using 139~fb$^{-1}$ of $\sqrt{s}=13$~TeV $pp$ collision data~\cite{ATLAS-CONF-2022-036}. Events are required to have $\ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace>250$~GeV and contain at least one large-$R$ (anti-$k_t$~\cite{Cacciari:2008gp} $R=1.0$) jet with transverse momentum $350 < p_T < 2500$~GeV and mass $40< m < 600$~GeV. Additionally, the selected jet must be identified as a top-quark candidate via a dedicated top-tagging algorithm~\cite{ATLAS:2018wis}, which relies on a deep neural network (DNN) that uses jet kinematics and substructure variables as input~\cite{ATLAS:2018wis,ATLAS:DNNTopPerf}. The working point for the top-tagging algorithm chosen for this analysis corresponds to a 50\% top-tagging efficiency. Dedicated signal regions targeting resonant DM production via a colour-charged scalar mediator (Section~\ref{sec:SCC_model}) and non-resonant DM production via a vector mediator with a $V_{ut}$ coupling (Section~\ref{sec:VFC_model}) are defined based on the output score of XGBoost classifiers~\cite{XGBoost} that are trained on several event observables. Control regions are defined to constrain the dominant backgrounds from $t\bar{t}$ and $V$+jets production. A similar search has been performed by the CMS Collaboration~\cite{CMS:2018gbj}.
In contrast to the ATLAS analysis, the result is based only on data recorded in 2016, corresponding to an integrated luminosity of 36~\ensuremath{{\rm fb}^{-1}}\xspace. To identify the hadronically decaying top quark, CA15 jets are used. CA15 jets are clustered from particle flow candidates using the Cambridge–Aachen algorithm \cite{Cacciari:2008gp} with a distance parameter of 1.5. The CA15 jets must have a transverse momentum $ p_T > 250~\ensuremath{\rm GeV}\xspace $, $|\eta| < 2.4$ and an invariant mass of $110~\ensuremath{\rm GeV}\xspace < m < 210~\ensuremath{\rm GeV}\xspace$. Furthermore, several substructure observables, such as the $N$-subjettiness~\cite{Thaler:2010tr} or so-called energy-correlation functions~\cite{Larkoski:2013eya,Moult:2016cvt}, are combined in a boosted decision tree (BDT)~\cite{Hocker:2007ht} to distinguish top-quark jets from the hadronisation products of single light quarks or gluons. At 50\% signal efficiency, the BDT background acceptance is 4.7\%. The dominant backgrounds from \ensuremath{t\bar{t}}\xspace and single vector bosons ($Z$, $W$, $\gamma$) are constrained using dedicated control regions. The signal is probed in distributions of the missing transverse momentum \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace in two signal regions, corresponding to a BDT output between 0.1 and 0.45 and above 0.45, respectively. The summary plots for the benchmark model with a colour-charged scalar mediator in Section~\ref{sec:SCC_results}, which show the interplay between the \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t$ and same-sign $tt$ (Section~\ref{sec:sig_vis_ss_tt}) searches, are based on an earlier search of the ATLAS Collaboration using 36~fb$^{-1}$ of $\sqrt{s}=13$~TeV $pp$ collision data~\cite{ATLAS:2018cjd}. This analysis statistically combines the results from two orthogonal channels, targeting semi-leptonic and hadronic top-quark decays, respectively. \subsubsection{Flavour-changing interaction} \label{sec:VFC_results} The strongest constraints on the VFC model are obtained from searches targeting same-sign $tt$ and \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t$ production on 36~fb$^{-1}$ of $pp$ collision data~\cite{ATLAS:2019wdu}. Results for two representative parameter planes are shown in Figure~\ref{fig:VFC_limits}. The left plot of Figure~\ref{fig:VFC_limits} shows a scan in the mediator mass versus the flavour-changing coupling $g_{ut}$ while fixing the remaining two parameters at $m_{\chi}=1$~GeV and $g_{\chi}=1$. The \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t$ search provides stronger constraints on $g_{ut}$ at lower mediator masses, excluding $g_{ut}$ down to 0.07 at 1~TeV, while the same-sign $tt$ search is more sensitive for mediator masses $>1.6$~TeV, still excluding $g_{ut}>0.3$ at 3~TeV. Mediator masses below 1~\ensuremath{\rm TeV}\xspace have been probed by the CMS Collaboration at \ensuremath{\sqrt{s}}\xspace = 13~\ensuremath{\rm TeV}\xspace; the corresponding constraints are shown in Figure~\ref{fig:VFC_limits_CMS}. The \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t$ search discussed in Section~\ref{sec:sig_inv_mett} is able to exclude couplings as low as 0.03 for mediator masses of 200~\ensuremath{\rm GeV}\xspace. The right plot of Figure~\ref{fig:VFC_limits} shows a scan in the invisible branching ratio of the mediator $\mathcal{BR}(\chi \chi)$ and the coupling $g_{ut}$.
The constraints derived from the same-sign $tt$ search exhibit only a weak dependence on $\mathcal{BR}(\chi \chi)$ due to the fact that the sensitivity of this search is dominated by the $t$-channel exchange of the mediator (middle and right diagrams in Figure~\ref{fig:VFC_Feyn}). This process is only indirectly sensitive to $g_{\chi}$ through the total width of the mediator in the $t$-channel exchange. The same-sign $tt$ analysis hence dominates the sensitivity at low values of $g_{\chi}$ (and hence low values of $\mathcal{BR}(\chi \chi)$), while the \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t$ analysis dominates the sensitivity at large values of $\mathcal{BR}(\chi \chi)$, excluding $g_{ut}$ down to almost 0.06 at $\mathcal{BR}(\chi \chi) = 1$. \begin{figure}[h!] \centering \includegraphics[width=0.49\textwidth]{figures/VFC_Limits1.png} \includegraphics[width=0.49\textwidth]{figures/VFC_Limits2.png} \caption{Regions in the ($m_{Z'_{\mathrm{VFC}}}$,$g_{ut}$) (left) and the ($\mathcal{BR}(\chi \chi)$,$g_{ut}$) plane (right) of the VFC model excluded at 95\% CL by searches in the same-sign $tt$ and \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t$ final states~\cite{ATLAS:2019wdu}.\label{fig:VFC_limits}} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.495\textwidth]{figures/VFC_Limits_CMS.png} \caption{Exclusion limits for the VFC model in the two-dimensional plane spanned by the mediator mass and the coupling between the mediator and quarks, released by the CMS Collaboration~\cite{CMS:2018gbj}. The observed exclusion range is shown as a yellow solid line, while the yellow dashed lines show the cases in which the predicted cross section is shifted by the assigned theoretical uncertainty. The expected exclusion range is indicated by a black solid line, and the experimental uncertainties are shown as black dashed lines.\label{fig:VFC_limits_CMS}} \end{figure} \subsubsection{Colour-neutral interaction} \label{sec:SPS_results} Simplified models with a colour-neutral scalar or pseudoscalar mediator have been constrained by searches targeting invisible mediator decays at the ATLAS and CMS experiments using data from $pp$ collisions at $\sqrt{s}=13$~TeV. The most recent constraints from the CMS Collaboration based on \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+\ensuremath{t\bar{t}}\xspace events are shown in Figure~\ref{fig:SPS_limits_CMS}, while Figure~\ref{fig:SPS_limits} shows the most recent summary from the ATLAS Collaboration. Up to now, only \ensuremath{t\bar{t}}\xspace associated DM production has been probed by the CMS Collaboration using the full Run 2 dataset of 137~\ensuremath{{\rm fb}^{-1}}\xspace~\cite{CMS:2021eha}. The interpretation of this analysis in simplified models of scalar and pseudoscalar mediators is shown in Figure~\ref{fig:SPS_limits_CMS}. Assuming a mediator coupling of 1 to DM and SM particles, masses up to 400~\ensuremath{\rm GeV}\xspace and 420~\ensuremath{\rm GeV}\xspace can be excluded for scalar and pseudoscalar mediators, respectively. While the sensitivities of the 0- and 1-lepton channels are comparable, the sensitivity of the 2-lepton channel is significantly weaker. The sensitivity of this channel can be further enhanced by exploring information sensitive to the spin of the mediator, which has not been done here. The exclusion limits for pseudoscalar mediators can be further extended up to 470~\ensuremath{\rm GeV}\xspace by \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+jet searches~\cite{CMS:2021far}.
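The cross-section limits discussed here are obtained from binned profile-likelihood fits over many signal regions with a full treatment of systematic uncertainties. Purely as an illustration of the underlying statistical construction, the sketch below computes a 95\% CL$_{\mathrm{s}}$ upper limit on the signal strength for a single counting experiment with a known background; all yields are hypothetical.
\begin{verbatim}
# Toy CLs upper limit for a single counting experiment (illustration only).
# Real analyses use binned profile-likelihood fits with systematic uncertainties.
import numpy as np

def cls(mu, s, b, n_obs, n_toys=200_000, seed=1):
    """CLs = CL(s+b) / CL(b) for signal strength mu, signal yield s, background b."""
    rng = np.random.default_rng(seed)
    toys_sb = rng.poisson(mu * s + b, n_toys)
    toys_b = rng.poisson(b, n_toys)
    cl_sb = np.mean(toys_sb <= n_obs)   # prob. of a result at least as background-like
    cl_b = np.mean(toys_b <= n_obs)
    return cl_sb / cl_b if cl_b > 0 else 1.0

def upper_limit(s, b, n_obs, cl=0.95):
    """Bisect the signal strength until CLs crosses 1 - cl."""
    mu_lo, mu_hi = 0.0, 50.0
    for _ in range(40):
        mu = 0.5 * (mu_lo + mu_hi)
        if cls(mu, s, b, n_obs) < 1.0 - cl:
            mu_hi = mu
        else:
            mu_lo = mu
    return mu_hi

# hypothetical yields: nominal signal 5 events, background 50, observed 48
print(f"95% CL upper limit on the signal strength: {upper_limit(5.0, 50.0, 48):.2f}")
\end{verbatim}
The excluded cross-section is the resulting limit on the signal strength multiplied by the predicted cross-section, which is essentially how ratios such as those shown in Figure~\ref{fig:SPS_limits_CMS} are constructed.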
The results shown in Figure~\ref{fig:SPS_limits} are obtained from analyses targeting \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t\bar{t}$, \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tW$, \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tj$, \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$b\bar{b}$, and \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+jet production using the full ATLAS Run 2 dataset of 139~fb$^{-1}$~\cite{ATLAS:DMSum}. The sensitivity across most of the mediator mass region is dominated by a statistical combination of three searches for \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t\bar{t}$ production in the 0-, 1-, and 2-lepton channels (Section~\ref{sec:sig_inv_ttmet}). In the scenario with a scalar mediator, the statistical combination of the \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t\bar{t}$ searches provides the strongest constraints across the probed mediator mass range, while in the pseudoscalar case, the dominant constraints for $m_{\phi/a}>300$~GeV are obtained from \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+jet searches. Searches targeting the \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$b\bar{b}$ signature provide significantly weaker constraints on this model. However, as explained in Section~\ref{sec:SPS_model}, in UV completions of the simplified model the couplings to up-type quarks can be suppressed compared to those to down-type quarks, making \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$b\bar{b}$ searches a relevant complement to \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t\bar{t}$ searches. Searches targeting DM production with a single top quark (\ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tj$ and \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tW$, see Section~\ref{sec:SPS_model}) have a sensitivity similar to that of the individual searches for \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t\bar{t}$ production. They have not been included in the statistical combination as they are not orthogonal to the searches in the \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+\ensuremath{t\bar{t}}\xspace final states by construction. \begin{figure}[p] \centering \includegraphics[width=0.495\textwidth]{figures/SPS_Limits_CMS_Scalar.png} \includegraphics[width=0.495\textwidth]{figures/SPS_Limits_CMS_PseudoScalar.png} \caption{Expected (dashed line) and observed (solid line) upper limits at the 95\% CL on the ratio of the excluded and predicted cross-section at leading order for a DM particle with a mass of 1~\ensuremath{\rm GeV}\xspace as a function of the mediator mass for a scalar (left) and pseudoscalar (right) mediator~\cite{CMS:2021eha}. The green and yellow bands represent the regions containing 68 and 95\%, respectively, of the distribution of limits expected under the background-only hypothesis. The mediator couplings are set to 1.\label{fig:SPS_limits_CMS}} \end{figure} \begin{figure}[p] \centering \includegraphics[width=0.495\textwidth]{figures/SPS_Limits1.png} \includegraphics[width=0.495\textwidth]{figures/SPS_Limits2.png} \caption{Upper limits at 95\% CL on the production of a scalar $\phi$ (left) and pseudoscalar $a$ (right) mediator as a function of the mediator mass~\cite{ATLAS:DMSum}. The limits are expressed in terms of the ratio of the excluded cross-section and the cross-section calculated for a coupling assumption of $g=g_q=g_{\chi}=1.0$.
The latter was calculated at NLO for the \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+\ensuremath{t\bar{t}}\xspace signatures and at LO for the \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tW$/$tj$ and \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$j$ signatures. \label{fig:SPS_limits}} \end{figure} If $m_{\phi/a} > 2\cdot m_{t}$, searches targeting visible mediator decays to top quarks are also sensitive to the production of scalar or pseudoscalar mediators. Two different modes can contribute: gluon-induced mediator production and production of a mediator in association with \ensuremath{t\bar{t}}\xspace. Searches targeting both modes have been performed, as discussed in Sections~\ref{sec:sig_vis_ttbar} and~\ref{sec:sig_vis_4top}, respectively. However, only the results of a search for four-top production conducted by the CMS Collaboration have been interpreted in the context of simplified models with a scalar or pseudoscalar mediator. The results are shown in Figure~\ref{fig:SPS_limits_4top} as upper limits on the cross-section for associated production of the mediator with top quarks times the branching ratio for the mediator decay to \ensuremath{t\bar{t}}\xspace. Mediator masses between 350~GeV and 450 (510)~GeV for a scalar (pseudoscalar) mediator are excluded. \begin{figure}[H] \centering \includegraphics[width=0.495\textwidth]{figures/SPS_Limits_4top_H.png} \includegraphics[width=0.495\textwidth]{figures/SPS_Limits_4top_A.png} \caption{Upper limits at 95\% CL on the production of a scalar (left, called $H$ here instead of $\phi$) and pseudoscalar (right, called $A$ here instead of $a$) mediator as a function of the mediator mass~\cite{ATLAS:DMSum}. The limits are expressed in terms of an upper limit on the production cross-section times the branching ratio of the mediator to \ensuremath{t\bar{t}}\xspace and compared to the cross-section calculated at LO for a coupling assumption of $g=g_q=g_{\chi}=1.0$ (here denoted as: $g_{\mathrm{SM}}=g_{\mathrm{DM}}=1.0$). \label{fig:SPS_limits_4top}} \end{figure} It should be noted that the re-interpretation of the results from searches targeting gluon-induced mediator production is significantly more involved than for the case of associated production due to the presence of strong signal-background interference (Section~\ref{sec:sig_vis_ttbar}). The resulting interference patterns are highly model-dependent, which means that a re-interpretation in the context of a different model requires generating the model-specific interference pattern and subsequently re-running the full profile-likelihood fit. \subsubsection{Colour-charged interaction} \label{sec:SCC_results} Models in which the colour-charged mediator decays to a top quark and a DM particle are constrained by the searches in \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t$ final states discussed in Section~\ref{sec:sig_inv_mett}. Mediator masses up to 5~TeV can be excluded by the ATLAS Collaboration for coupling strength values $\lambda_t=0.4$ and $g_{ds}=0.6$, assuming a DM mass of $m_{\chi}=10$~GeV~\cite{ATLAS-CONF-2022-036}. Results with a mixed scalar and pseudoscalar coupling to both SM quarks as well as DM and top quarks are provided by the CMS Collaboration~\cite{CMS:2018gbj}. Assuming a coupling of 0.1 to SM quarks and of 0.2 to DM and top quarks, mediators with masses up to 3.3~\ensuremath{\rm TeV}\xspace can be excluded for a dark matter mass of 100~\ensuremath{\rm GeV}\xspace.
\subsection{Scalar DE EFT model} \label{sec:DE_results} Searches in the \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t\bar{t}$ final state have been used to constrain the $\mathcal{L}_1$ operator in the EFT model of scalar DE (Section~\ref{sec:DE_model}). Results from three independent analyses, each targeting a different $t\bar{t}$ decay mode (0-, 1-, or 2-lepton channels), have been used. No statistical combination was performed. Instead, the constraint from the analysis yielding the smallest CL$_{\mathrm{s}}$ value for a given signal hypothesis was re-interpreted in the EFT model of DE. The strongest constraints arise from searches in the 0- and 1-lepton channels, with both contributing roughly equally. The constraints are derived as a function of the effective coupling $g_*$ associated with the UV completion of the EFT model and the effective mass scale $M_1$. It is assumed that the EFT is valid for momentum transfers $Q_\textrm{tr} < g_* M_1$~\cite{ATLAS:2019wdu}. For events failing this requirement, a conservative approach to correct the final limits based on the fraction of valid events, referred to as iterative rescaling~\cite{Abercrombie:2015wmb}, is applied. The regions excluded at 95\% CL are shown in Figure~\ref{fig:DE_limits}. Mass scales $<200$~GeV are excluded for $g_*>\pi^2$. The sensitivity of the \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t\bar{t}$ signature to smaller effective couplings $g_*$ is limited by the EFT validity criterion, as $t\bar{t}$ pair production typically involves large momentum transfers. \begin{figure}[H] \centering \includegraphics[width=0.5\textwidth]{figures/DE_L1_Limits.png} \caption{Regions in the plane of the effective coupling $g_*$ associated with the UV completion of the EFT model and the effective mass scale $M_1$ for the $\mathcal{L}_1$ operator excluded at 95\% CL by searches in the \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t\bar{t}$ final state~\cite{ATLAS:2019wdu}.\label{fig:DE_limits}} \end{figure} \subsubsection{Flavour-conserving interaction} \label{sec:AVV_results} Strong constraints on visible decays of the axial-vector (Figure~\ref{fig:AVV_limits}) or vector (Figure~\ref{fig:VV_limits}) mediator $Z'$ are obtained from a variety of resonance and related searches that probe mediator masses in the range between 50~GeV~\cite{CMS:DMSum} and 5000~GeV~\cite{ATLAS:DMSum}. The latest constraints on axial-vector mediators released by the ATLAS Collaboration and based on data from $pp$ collisions at $\sqrt{s}=13$~TeV are shown in Figure~\ref{fig:AVV_limits}. The coupling of the mediator to leptons is set to zero ($g_{\ell}=0$), while the coupling to DM is set to unity ($g_{\chi}=1.0$) and the DM mass is taken to be 10~TeV to kinematically suppress invisible mediator decays and highlight the interplay of constraints on visible mediator decays. In the high mediator mass range, the main sensitivity comes from two searches for di-jet resonances, referred to as \textit{di-jet} and \textit{di-jet angular}. The former aims to identify local resonant enhancements in the di-jet invariant mass spectrum and targets narrow mediator widths. The latter, for which no results on the full LHC Run 2 dataset are available, relies on the di-jet angular separation to identify broader mediator widths that cannot be probed by the search in the invariant mass spectrum. Neither of the searches imposes quark-flavour-specific selection requirements, and hence both are sensitive to all possible hadronic decays of the mediator.
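Both the di-jet searches and the $m_{\ensuremath{t\bar{t}}\xspace}$ search of Section~\ref{sec:sig_vis_ttbar} estimate the smoothly falling background by fitting an ansatz function directly to the observed mass spectrum. The sketch below illustrates this idea on toy data with a simplified three-parameter variant of the di-jet ansatz; the functional form, binning, parameter values and injected signal are chosen purely for illustration and are not those of the published analyses.
\begin{verbatim}
# Toy fit of a smoothly falling ansatz to a mass spectrum (illustration only).
import numpy as np
from scipy.optimize import curve_fit

SQRT_S = 13000.0  # GeV

def ansatz(m, p0, p1, p2):
    """Simplified di-jet-style form: p0 * (1-x)^p1 * x^(-p2), with x = m / sqrt(s)."""
    x = m / SQRT_S
    return p0 * (1.0 - x) ** p1 * x ** (-p2)

rng = np.random.default_rng(0)
edges = np.linspace(1500.0, 6000.0, 46)
centres = 0.5 * (edges[:-1] + edges[1:])
background = ansatz(centres, 1.0, 10.0, 5.0)
bump = 40.0 * np.exp(-0.5 * ((centres - 3000.0) / 100.0) ** 2)  # injected resonance
observed = rng.poisson(background + bump)

popt, _ = curve_fit(ansatz, centres, observed,
                    p0=[1.0, 10.0, 5.0], sigma=np.sqrt(observed + 1.0))

pull = (observed - ansatz(centres, *popt)) / np.sqrt(observed + 1.0)
i = int(np.argmax(pull))
print(f"largest local excess: {pull[i]:.1f} sigma at m = {centres[i]:.0f} GeV")
\end{verbatim}
In the published searches, such fits are accompanied by a careful treatment of the choice of functional form and of the significance of any local excess.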
Searches for $t\bar{t}$ resonances, which rely on top-quark identification algorithms to identify specifically the decays of the mediator to top quarks, have a slightly lower expected sensitivity to the coupling $g_q$ than di-jet searches, although the observed limit is stronger than that from the di-jet search in some small regions of the mediator mass where the di-jet observed limit fluctuates upward. The use of top-quark identification allows for a stronger suppression of SM backgrounds compared to di-jet and also di-$b$-jet searches, in particular the background from strong multi-jet production. This effect partially compensates for the disadvantage of probing only roughly $\frac{1}{6}$ of the hadronic mediator decays.
In Figure~\ref{fig:VV_limits}, constraints on vector mediators in the plane of the DM and the mediator mass from the CMS Collaboration~\cite{CMS:DMSum} are shown. In contrast to Figure~\ref{fig:AVV_limits}, results from both visible and invisible decays are summarised. While searches with invisible final states are only possible when the mediator mass is at least about twice the DM mass, the sensitivity of searches for visible decays only depends on the DM mass through the width of the mediator. When the decay channel to DM particles opens up, the width of the mediator increases and resonant searches become less sensitive. The best sensitivity to vector mediators from \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace searches is provided by DM searches with initial-state radiation either from a gluon/quark jet or from the hadronic decay of a vector boson~\cite{CMS:2021far}. Searches with visible final states achieve their best sensitivity down to 50~\ensuremath{\rm GeV}\xspace when looking for a large-radius jet that recoils against the mediator~\cite{CMS:2019emo}. At high mass, the strongest constraints are obtained from di-jet searches~\cite{CMS:2019gwf}. The searches discussed in Section~\ref{sec:sig_vis_ttbar} probing vector mediators decaying into \ensuremath{t\bar{t}}\xspace are not shown as no dedicated interpretation of these results was performed in models of DM by the CMS Collaboration. However, the interpretation of the searches in generic vector-particle models shows comparable sensitivity between the results released by the ATLAS and CMS Collaborations.
\begin{figure}[p] \centering \includegraphics[width=0.7\textwidth]{figures/AVV_Limits.png} \caption{Upper limits at 95\% CL on the coupling $g_q$ of the mediator to quarks in a simplified model with a vector or axial-vector mediator obtained from different types of resonance searches using data from $pp$ collisions at $\sqrt{s}=13$~TeV. The DM mass is $m_{\chi}=10$~TeV and its coupling to the mediator $g_{\chi}=1$~\cite{ATLAS:DMSum}.\label{fig:AVV_limits}} \end{figure}
\begin{figure}[p] \centering \includegraphics[width=0.7\textwidth]{figures/VV_Limits.png} \caption{95\% CL observed and expected exclusion regions on vector mediators in the DM-mediator mass plane from searches with visible and invisible final states released by the CMS Collaboration~\cite{CMS:DMSum}. Exclusions are computed for a lepto-phobic scenario with $g_l=0$, a universal quark coupling of $g_q = 0.25$ and a DM coupling of $g_{\rm DM} = 1.0$.\label{fig:VV_limits}} \end{figure}
\subsubsection{2HDM with a pseudoscalar mediator} \label{sec:2HDMa_results} Constraints on the 2HDM+$a$ are derived from a variety of searches targeting different production and decay modes of the mediator and the additional Higgs bosons.
The most comprehensive summary of constraints has been released by the ATLAS Collaboration~\cite{ATLAS:DMSum}. These summary plots are based on results obtained with the partial or full Run 2 dataset. Not all of the latest searches on the full Run 2 dataset have been re-interpreted in the context of the 2HDM+$a$; updated summary plots will be released in the near future. The constraints are evaluated as a function of the free parameters of the model described in Section~\ref{sec:2HDMa_model}. Two representative parameter scans in the ($m_a$,$m_{A}$) and the ($m_a$,$\tan\beta$) plane, highlighting the interplay of signatures involving top quarks with other types of signatures, are shown in Figure~\ref{fig:2HDMa_limits}. The constraints for other benchmark scans can be found in Ref.~\cite{ATLAS:DMSum}.
\begin{figure}[H] \centering \includegraphics[width=0.495\textwidth]{figures/2HDMa_Limits1.png} \includegraphics[width=0.495\textwidth]{figures/2HDMa_Limits2.png} \caption{Regions in the 2HDM+$a$ parameter space excluded at 95\% CL by several individual searches targeting different signatures and a statistical combination of \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$Z(\ell\ell)$ and \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$h(b\bar{b})$ searches. The results are shown in the ($m_a$,$m_{A}$) plane (left) and the ($m_a$,$\tan\beta$) plane (right). In the former case, $\tan\beta=1$, while in the latter case, $m_A = 600$~GeV. In both cases, the conditions $\sin\theta=0.35$ and $m_A = m_H = m_{H^{\pm}}$ are imposed. All results are based on either the full 139~fb$^{-1}$ of $pp$ collision data at $\sqrt{s}=13$~TeV or a subset of that dataset amounting to 36~fb$^{-1}$~\cite{ATLAS:DMSum}.\label{fig:2HDMa_limits}} \end{figure}
The sensitivity in the ($m_a$,$m_{A}$) plane for $\tan\beta=1$, $\sin\theta=0.35$, and $m_A = m_H = m_{H^{\pm}}$ is largely dominated by searches targeting the production of an invisibly decaying mediator with a Higgs or $Z$ boson, leading directly to \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$h$ and \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$Z$ signatures. These processes are dominated by diagrams involving the resonant production of a neutral Higgs boson $H$ or $A$ that decays to $ah$ or $aZ$. The sensitivity from searches for \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tW$ production, which can also proceed resonantly via a charged Higgs boson (Section~\ref{sec:2HDMa_model}), is sub-dominant in this parameter region. Constraints that are largely complementary to those from \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$X$ searches are obtained from a search targeting resonant associated production of a charged Higgs boson $H^{\pm}$ with a top-bottom quark pair ($tbH^{\pm}$), with subsequent decay of the charged Higgs boson to a top-bottom quark pair ($tb$). These constraints exhibit only a weak dependence on the mediator mass $m_a$ as this signature does not involve production of a mediator at leading order and is hence only indirectly dependent on the mediator mass via its effect on the branching ratio to $tb$ compared to those for other decays, such as $H^{\pm}\rightarrow aW^{\pm},AW^{\pm},HW^{\pm}$.
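For reference, the fixed parameters of the two benchmark scans shown in Figure~\ref{fig:2HDMa_limits} can be collected compactly as in the short sketch below; the dictionary keys are illustrative names rather than an official configuration format, and the values are those quoted above (masses in GeV).
\begin{verbatim}
# Fixed parameters of the two 2HDM+a benchmark scans shown above;
# the remaining two parameters are scanned in each case.
common = {"sin_theta": 0.35, "mass_relation": "m_A = m_H = m_Hpm"}

scan_1 = dict(common, tan_beta=1.0, scanned=("m_a", "m_A"))        # left plot
scan_2 = dict(common, m_A=600.0,    scanned=("m_a", "tan_beta"))   # right plot
\end{verbatim}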
Searches targeting resonant production of the neutral Higgs bosons $A/H$, either via gluon fusion or $t\bar{t}$ associated production, and their decay to $t\bar{t}$, leading to $t\bar{t}$ and $t\bar{t}t\bar{t}$ final states, respectively, are also expected to provide constraints complementary to those from \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$X$ searches in this parameter region, given that the choice $\tan\beta=1$ favours the coupling of those Higgs bosons to top quarks. No constraints from $A/H(t\bar{t})$ have been derived for the 2HDM+$a$ yet due to the presence of strong, model-dependent interference effects that make a straightforward re-interpretation of these searches in the context of other benchmark models difficult, as explained in Section~\ref{sec:SPS_results}. A search targeting $t\bar{t}A/H(t\bar{t})$ production has been used to constrain the 2HDM+$a$ parameter space (see below). It is based on 36~fb$^{-1}$ of LHC Run 2 data and is not sensitive at $\tan\beta=1$, as shown in Figure~\ref{fig:2HDMa_limits} (right plot). The results of a search for $t\bar{t}A/H(t\bar{t})$ production in multi-lepton final states using 139~fb$^{-1}$ of LHC Run 2 data indicate that $A/H$ masses up to 700~GeV could be excluded in the 2HDM+$a$ for the range of $\tan\beta$ under consideration here~\cite{ATLAS:2022rws}.
In the ($m_a$,$\tan\beta$) plane with $m_{A}=m_{H}=m_{H^{\pm}}=600$~GeV (right plot in Figure~\ref{fig:2HDMa_limits}), the sensitivity is again dominated by the statistical combination of the \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$h(b\bar{b})$ and \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$Z(\ell\ell)$ searches and the search for $tbH^{\pm}(tb)$ production, which provide complementary constraints in this region of parameter space. Low values of $\tan\beta$ are fully excluded by the search for charged Higgs bosons decaying to $tb$. The constraints from the search targeting $t\bar{t}t\bar{t}$ production on 36~fb$^{-1}$ of LHC Run 2 data are also shown. While they are notably weaker than the constraints from the charged-Higgs-boson search, which relies on the full Run 2 dataset amounting to 139~fb$^{-1}$, the results from the search for $t\bar{t}A/H(t\bar{t})$ on 139~fb$^{-1}$ of LHC Run 2 data~\cite{ATLAS:2022rws} (Section~\ref{sec:sig_vis_4top}) indicate that this final state may provide exclusion power comparable to that of the charged-Higgs-boson search if re-interpreted in the context of this model. Searches for \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t\bar{t}$ production, which dominate the sensitivity to the simplified model with a colour-neutral scalar or pseudoscalar mediator (Section~\ref{sec:SPS_results}), only weakly constrain the benchmark scenarios~\cite{LHCDarkMatterWorkingGroup:2018ufk,ATLAS:2HDMa_2021} probed at the LHC. It should, however, be noted that the \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t\bar{t}$ constraints shown in Figure~\ref{fig:2HDMa_limits} are based on only 36~fb$^{-1}$ of LHC Run 2 data and the sensitivity is mainly limited by low event rates. Hence, significantly stronger constraints are expected from a re-interpretation of searches using the full 139~fb$^{-1}$ of LHC Run 2 data~\cite{ATLAS:METplustt}.
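A rough idea of the improvement that can be expected from the larger dataset alone is given by the usual statistics-limited scaling, in which the expected cross-section limit of a background-dominated counting experiment improves approximately as $1/\sqrt{L}$ with the integrated luminosity $L$. The sketch below illustrates this scaling; it deliberately ignores systematic uncertainties and any improvements of the analysis techniques, so the numbers should be read as indicative only.
\begin{verbatim}
# Naive scaling of an expected cross-section upper limit with integrated
# luminosity L for a background-dominated, statistics-limited search:
# sigma_UL(L_new) ~ sigma_UL(L_old) * sqrt(L_old / L_new).
import math

def rescale_limit(sigma_ul_old, lumi_old, lumi_new):
    return sigma_ul_old * math.sqrt(lumi_old / lumi_new)

limit_36 = 1.0   # expected limit at 36 fb^-1 (arbitrary units)
for lumi in (139.0, 300.0, 3000.0):
    print(f"{lumi:6.0f} fb^-1: {rescale_limit(limit_36, 36.0, lumi):.2f}"
          " x limit(36 fb^-1)")
\end{verbatim}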
The sensitivity of the \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t\bar{t}$ final state is expected to become comparable to that of searches in the \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$h$ and \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$Z$ final states for an integrated luminosity of 300~fb$^{-1}$, expected to be available after the end of LHC Run 3 (2022--2025)~\cite{Bauer:2017ota}. In this context, it should be noted that the cross-section for \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t\bar{t}$ production is suppressed by $\sin\theta^2$, making this process more sensitive for large values of $\sin\theta$~\cite{Bauer:2017ota}. Furthermore, for $m_a > 2\cdot m_t$, visible mediator decays to $t\bar{t}$ are possible, reducing the invisible branching ratio $a \rightarrow \chi\chi$ and hence the sensitivity of the \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t\bar{t}$ searches~\cite{Bauer:2017ota}. \subsubsection{Flavour-changing interaction} \label{sec:VFC_model} DM signatures with top quarks are predicted in simplified models containing a vector mediator $Z'_{\mathrm{VFC}}$ with a flavour-changing coupling $V_{ut}$ to the top and up quark.This type of model, referred to as \textit{VFC model} in the following, is motivated, for example, by scenarios with DM in a hidden sector that only interacts with the SM sector via a flavour-changing coupling of a $Z'$ boson~\cite{Boucheneb:2014wza,Kamenik:2011nb}. The dominant production and decay modes of the VFC model are shown in Figure~\ref{fig:VFC_Feyn}. The mediator can be produced on-shell in association with a single top or anti-top (left diagram) and decay either invisibly into DM or visibly into a top and up quark. The former decay results in a \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t$ signature, often referred to as \textit {mono-top}. The latter decay yields a characteristic final state with two top quarks ($tt$) or two anti-top quarks $\bar{t}\bar{t}$ (same-sign $tt$). This signature can be easily distinguished from the more abundant $t\bar{t}$ production via SM processes by the sign of the lepton charges in fully leptonic decays. Similar $tt/\bar{t}\bar{t}$ final states arise from the other two diagrams in Figure~\ref{fig:VFC_Feyn}, which represent the $t$-channel exchange of the $Z'_{\mathrm{VFC}}$ mediator. The VFC model is fully characterised by four free parameters: the mass of the mediator, $m_{Z'_{\mathrm{VFC}}}$, the mass of the DM particle, $m_{\chi}$, the coupling of the mediator to DM, $g_{\chi}$, and the flavour-changing coupling, $g_{ut}$~\cite{ATLAS:2018alq}. The DM mass has no significant impact on the collider phenomenology of the VFC model, if $2 m_{\chi} < m_{Z'_{\mathrm{VFC}}}$ and is fixed to a value of 1~GeV for existing collider searches~\cite{ATLAS:2019wdu}. Constraints on the VFC model are accordingly derived in several parameter planes involving the remaining free parameters (or dependent parameters): $m_{Z'_{\mathrm{VFC}}}$, $g_{ut}$, and the invisible branching ratio $\mathcal{BR}(\chi \bar{ \chi})$ of the mediator. \begin{figure}[h!] 
\centering \includegraphics[width=0.3\textwidth]{figures/VFC_Feyn1.png} \includegraphics[width=0.27\textwidth]{figures/VFC_Feyn2.png} \includegraphics[width=0.3\textwidth]{figures/VFC_Feyn3.png} \caption{Schematic representation of the dominant production and decay modes of the VFC model~\cite{ATLAS:2019wdu}.\label{fig:VFC_Feyn}} \end{figure} \subsubsection{Colour-neutral interaction} \label{sec:SPS_model} A colour-neutral interaction between a SM and a DM particle is described by a simplified model with a neutral, scalar or pseudoscalar mediator~\cite{Buckley:2014fba,Abercrombie:2015wmb} with Yukawa-like couplings to the SM fermions. The model has four free parameters: the mass of the DM particle, $m_{\chi}$, the mass of the mediator, $m_{\phi/a}$, the coupling of the mediator to DM, $g_{\chi}$, and the coupling of the mediator to SM fermions. The latter is parameterised by a flavour-universal coupling constant $g_{q}\equiv g_{u} = g_{d} = g_{\ell}$, which modifies the SM-like Yukawa coupling of the mediator to fermions~\cite{Buckley:2014fba}, thus satisfying the requirements of MFV. It should be noted that couplings to leptons are explicitly included in the model but in practice the related signatures play no significant role in the parameter space accessible to collider searches~\cite{Abercrombie:2015wmb}. Couplings to vector bosons $W, Z$ are not included in this simplified model~\cite{Buckley:2014fba}. The Yukawa-like couplings imply that the mediator is mostly produced via loop-induced gluon fusion via a heavy-quark dominated loop or in association with heavy-flavour quarks, mostly top quarks. Additionally, visible decays of the mediator preferentially result in heavy quarks. The dominant production and decay modes of the mediator with heavy-flavour quarks in the final state are shown in Figure~\ref{fig:SPS_Feyn}. These are (from left to right): \begin{itemize} \item visible decay of a mediator produced via gluon-fusion to heavy-flavour quarks, resulting in a resonant $t\bar{t}$ or $b\bar{b}$ signal; \item associated production of a mediator that decays either visibly or invisibly with heavy-flavour quarks, leading to a \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t\bar{t}$/$b\bar{b}$ signature in the case of invisible mediator decay or characteristic fully visible $t\bar{t}t\bar{t}$, $t\bar{t}b\bar{b}$, $b\bar{b}b\bar{b}$ signatures; \item associated production of an invisibly decaying mediator with a top quark and a light ($d,u,s,c$) quark, leading to a \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tj$ signature; \item associated production of an invisibly decaying mediator with a top quark and a $W$ boson, resulting in a \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tW$ signature. \end{itemize} Additional signatures not shown here include \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+jet and \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$V/h$ production. \begin{figure}[h!] \centering \includegraphics[width=0.24\textwidth]{figures/SPS_Feyn1.png} \includegraphics[width=0.24\textwidth]{figures/SPS_Feyn2.png} \includegraphics[width=0.24\textwidth]{figures/SPS_Feyn3.png} \includegraphics[width=0.24\textwidth]{figures/SPS_Feyn4.png} \caption{Schematic representation of the dominant production and decay modes with heavy-flavour quarks in the final state in the simplified model with a scalar ($\phi$) or pseudoscalar ($a$) mediator~\cite{ATLAS:2019wdu}. 
\label{fig:SPS_Feyn}} \end{figure} It should be noted that, while the Yukawa-like coupling structure implies a greater importance of signatures involving top quarks than of those involving bottom quarks in the final state, signatures involving bottom quarks are still relevant as some UV completions of this simplified model involve a parameter modifying the relative importance of the couplings to up- and down-type quarks. In these UV completions, signatures involving bottom quarks can be more sensitive than signatures involving top quarks if the couplings to up-type quarks are suppressed.
\subsubsection{Colour-charged interaction} \label{sec:SCC_model} A colour-charged interaction between the SM quarks and DM is described in a class of simplified models containing a scalar, colour-triplet mediator particle. This type of simplified model is inspired by the Minimal Supersymmetric Standard Model (MSSM)~\cite{MSSM1,MSSM2} with first- and second-generation squarks and neutralino DM~\cite{ATLAS:2019wdu}. The mediator couplings to quarks and DM in the simplified models, however, can differ from those of the MSSM, leading to additional production diagrams. Several models of colour-charged mediators, differing in the mediator couplings to quarks, have been probed at the LHC. These include a model with preferred couplings of the mediator to the first and second quark generation, a model with preferred mediator couplings to bottom quarks, and a model with preferred mediator couplings to top quarks. Only the last of these will be discussed in this review. The concrete realisation of this model is documented in Ref.~\cite{Boucheneb:2014wza}. It contains a new SU(2)$_{\mathrm{L}}$ singlet field that couples to right-handed quarks. The mediator corresponding to this field is produced from a down-type quark-anti-quark pair and decays to a top quark and a DM particle, as illustrated in Figure~\ref{fig:SCC_Feyn}. This model can be related to the MSSM if an additional R-parity violating interaction of the top squark with the down-type quarks is assumed~\cite{ATLAS:2019wdu}. The free parameters of this model are the mass of the DM particle, $m_{\chi}$, the mass of the mediator, $m_{\eta_t}$, the $t$-DM coupling strength of the mediator, $\lambda_t$, and the coupling strength of the mediator to down-type quarks, $g_{ds}$. \begin{figure}[h!] \centering \includegraphics[width=0.285\textwidth]{figures/SCC_Feyn.png} \caption{Schematic representation of \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t$ production via a colour-charged scalar mediator $\eta_t$~\cite{ATLAS:2019wdu}. \label{fig:SCC_Feyn}} \end{figure}
\subsection{EFT model of scalar dark energy} \label{sec:DE_model} Searches for DM signatures involving top quarks provide a powerful tool to probe models of scalar DE. The first re-interpretation of DM searches in the context of DE, which relied on the analysis of 36~fb$^{-1}$ of LHC Run 2 data~\cite{ATLAS:2019wdu}, used an EFT implementation~\cite{Brax:2016did} of the Horndeski theories~\cite{Horndeski:1974} to describe DE production at the LHC~\cite{ATLAS:2019wdu}. The latter introduce a new scalar field, $\phi_{\mathrm{DE}}$, corresponding to DE, that couples to gravity. The EFT model contains two classes of operators: operators that are invariant under a shift symmetry $\phi_{\mathrm{DE}} \rightarrow \phi_{\mathrm{DE}} + \mathrm{constant}$ and operators that break this symmetry.
The former contain only derivative couplings of the DE field to SM fermions as direct Yukawa-type interactions break the shift symmetry. The latter induce direct couplings of the DE field to the SM fermions, such as Yukawa-type interactions, and are subject to tight experimental constraints~\cite{Joyce:2014kja}. Only shift-symmetric operators of the EFT model have been considered for the DE re-interpretation of LHC DM searches~\cite{ATLAS:2019wdu}. The model under consideration contains nine such operators, $\mathcal{O}^{(d)}_i$, where $d$ denotes the dimensionality of the operator. This leads to nine possible terms in the Lagrangian, each suppressed by powers of a characteristic energy scale $M_i^{d-4}$, according to the operator's dimensionality: \begin{equation*} \mathcal{L}=\mathcal{L}_{\mathrm{SM}}+\sum_{i=1}^9 c_i\mathcal{L}_i=\mathcal{L}_{\mathrm{SM}}+\sum_{i=1}^9 \frac{c_i}{M_i^{d-4}}\mathcal{O}^{(d)}_i, \end{equation*} where the $c_i$ denote the Wilson coefficients. Only the phenomenology of the two leading, i.e. least suppressed, terms has been considered by the LHC experiments so far. These are of dimension eight and can be expressed in terms of the conformal anomaly, $T^{\nu}_{\nu}$ ($=m\bar{\psi}\psi$ for a Dirac field), and the energy-momentum tensor of the SM Lagrangian $T^{\mu\nu}$ as follows: \begin{eqnarray*} \mathcal{L}_1&=&\frac{\partial_{\mu}\phi_{\mathrm{DE}}\partial^{\mu}\phi_{\mathrm{DE}}}{M_1^4}T^{\nu}_{\nu}\\ \mathcal{L}_2&=&\frac{\partial_{\mu}\phi_{\mathrm{DE}}\partial_{\nu}\phi_{\mathrm{DE}}}{M_2^4}T^{\mu\nu}. \end{eqnarray*} The coupling described by the first term, $\mathcal{L}_1$, is proportional to the mass of the SM fermions to which the DE field couples, thus making collider signatures involving top quarks a sensitive probe of DE. A schematic representation of DE production at the LHC via this operator is shown in Figure~\ref{fig:DE}. It describes the radiation of a pair of DE particles off a final-state top quark from SM $t\bar{t}$ production, leading to a \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t\bar{t}$ signature. \begin{figure}[h!] \centering \includegraphics[width=0.29\textwidth]{figures/DE_Top_Feyn.png} \caption{Schematic representation of the leading process of DE production in association with a $t\bar{t}$ pair in an EFT model of scalar DE via the operator $\mathcal{L}_1$~\cite{ATLAS:2019wdu}.\label{fig:DE}} \end{figure} The second operator, $\mathcal{L}_2$, involves derivatives of the SM fields and is therefore proportional to their momenta. Final states involving high-momentum intermediate states, of which a DE pair is radiated off, provide the best sensitivity to this operator. At a hadron collider like the LHC, the most likely high-momentum intermediate state particles are hadronically interacting particles, such as gluons, leading to characteristic \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+jet signatures as the smoking-gun signatures for DE production. Constraints on the EFT model of DE have been derived using searches for both \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t\bar{t}$ ($\mathcal{L}_1$ term) and \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+jet signatures~\cite{ATLAS:2019wdu} ($\mathcal{L}_2$ term). Only the former are discussed in this review. It should be noted that additional signatures, such as \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t$ production, are predicted based on the sub-leading operators. 
The exploration of these additional signatures and possible re-interpretations of further DM searches in the context of DE is left to future work. \subsubsection{Flavour-conserving interaction} \label{sec:AVV_model} A mediator with flavour-universal couplings to the SM quarks and leptons, respectively, is predicted in a simplified model that describes a flavour-conserving interaction between a fermionic WIMP DM particle $\chi$ and the SM fermions~\cite{Abercrombie:2015wmb}. It is based on a simple extension of the SM by a new $U(1)$ gauge symmetry under which $\chi$ as well as some of the SM fermions are charged, thus allowing the mediator to couple to the SM sector. The interaction described by this gauge group is mediated by the $s$-channel exchange of a new, electrically neutral spin-1 particle $Z'$ with either vector or axial-vector couplings to the DM and SM fields. It will be referred to as \textit{ vector mediator} or \textit{ axial-vector mediator} in the following. The model contains five free parameters~\cite{Abercrombie:2015wmb}: the masses of the mediator, $m_{Z'}$, and the DM particle, $m_{\chi}$, as well as the quark-flavour universal coupling $g_q$ of the mediator to quarks, the lepton-flavour universal coupling $g_{\ell}$ of the mediator to leptons, and the coupling $g_{\chi}$ of the mediator to DM. The mediator can decay either invisibly into a $\chi\bar{\chi}$ pair or visibly into a fermion-anti-fermion $f\bar{f}$ pair, as illustrated schematically by the left and right diagrams, respectively, in Figure~\ref{fig:AVV_Feyn}. The former process can be detected as a \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$X$ signature in the presence of initial-state radiation (ISR), where $X$ can be a gluon, photon, or vector boson, depending on the type of ISR, while the latter process results in a resonant enhancement in the invariant mass spectrum of the $f\bar{f}$ pair. Constraints on this model are derived in various parameter planes, including the $(m_{Z'},m_{\chi})$ plane for fixed couplings $g_q$, $g_{\ell}$, $g_{\chi}$~\cite{ATLAS:2019wdu} and as upper limits on $g_q$ as a function of $m_{Z'}$, as shown in Section~\ref{sec:AVV_results}. \begin{figure}[h!] \centering \includegraphics[width=0.3\textwidth]{figures/AVV_Feyn1.png} \includegraphics[width=0.3\textwidth]{figures/AVV_Feyn2.png} \caption{Schematic representation of the dominant production and decay modes of the simplified model with an $s$-channel vector or axial-vector mediator $Z'$~\cite{ATLAS:2019wdu}.\label{fig:AVV_Feyn}} \end{figure} \subsubsection{2HDM with a pseudoscalar mediator} \label{sec:2HDMa_model} A 2HDM with a pseudoscalar mediator $a$~\cite{Bauer:2017ota}, referred to as 2HDM+$a$ in the following, is a more complex simplified model that embeds the phenomenology of the simplified models with a colour-neutral pseudoscalar mediator (Section~\ref{sec:SPS_model}) in more complete model with a second complex SU(2) doublet. The 2HDM in this model has a CP-conserving potential with a softly broken $\mathbb{Z}_2$ symmetry~\cite{Gunion:2002zf}. Its Higgs sector contains five Higgs bosons: two scalars, $h$ and $H$, a pseudoscalar, $A$, and two charged Higgs bosons $H^{\pm}$. The alignment limit is assumed, meaning that one of the two scalars of the model is identified with the 125~GeV Higgs boson discovered in 2012. 
Furthermore, the Yukawa structure of the 2HDM is of type-II~\cite{Branco:2011iw}, meaning that couplings of the additional Higgs bosons to top quarks are preferred over those to other fermions at low values of the ratio of the two vacuum expectation values, $\tan\beta$, one of the model parameters with the largest impact on the collider phenomenology of the model. The pseudoscalar mediator $a$ mixes with the pseudoscalar $A$ of the 2HDM with mixing angle $\theta$. The phenomenology of the 2HDM+$a$ is fully defined by 14 free parameters, making it considerably more complex than the simplified models described in the previous sections. These parameters are: the masses $m_h$, $m_H$ and $m_A$ of the neutral Higgs bosons; the masses $m_{H^{\pm}}$ of the charged Higgs bosons; the mass $m_a$ of the mediator; the mass $m_{\chi}$ of the DM particle; the coupling $y_{\chi}$ between DM and the mediator; the three quartic couplings $\lambda_{\textrm{P1}}$, $\lambda_{\textrm{P2}}$, $\lambda_3$ of the mediator to the SU(2) fields; the vacuum expectation value (VEV) $v$ of the electroweak sector; the ratio $\tan\beta=\frac{v_2}{v_1}$ of the VEVs of the two Higgs fields; the mixing angle $\alpha$ between the two scalar Higgs bosons $h$ and $H$; and the mixing angle $\theta$ between the pseudoscalar Higgs boson $A$ and the mediator $a$. The choice of the alignment limit ($\cos(\beta-\alpha)=0$) implies $m_h=125$~GeV and $v=246$~GeV. The DM-mediator coupling is set to unity ($y_{\chi}=1.0$) without significant impact on the phenomenology of the model. The setting $\lambda_3=3$ is chosen to ensure the stability of the Higgs potential in the mass ranges of interest for the heavy Higgs bosons~\cite{ATLAS:2019wdu}. Furthermore, the choice $\lambda_{\textrm{P1}} = \lambda_{\textrm{P2}} = \lambda_3 = 3$ maximises the tri-linear couplings between the CP-even and CP-odd neutral states~\cite{ATLAS:2019wdu}. Finally, the choice $m_A = m_H = m_{H^{\pm}}$ ensures compatibility of the model predictions with flavour constraints~\cite{Bauer:2017ota} and additionally simplifies the phenomenology of the model~\cite{ATLAS:2019wdu}. With these constraints, the remaining 2HDM+$a$ parameter space can be described by the following five parameters: $m_A$, $m_a$, $m_{\chi}$, $\sin\theta$, and $\tan\beta$. Representative benchmark scans of this parameter space have been defined by the LHC Dark Matter Working Group~\cite{LHCDarkMatterWorkingGroup:2018ufk} with the aim of highlighting different aspects of the phenomenology of this benchmark model and the interplay between searches targeting different signal processes across this parameter space. Additional benchmark scans have been defined in Ref.~\cite{ATLAS:2HDMa_2021}. The 2HDM+$a$ predicts a rich phenomenology with a diverse range of final states. The dominant processes leading to final states with top quarks are shown in Figure~\ref{fig:2HDM_Feyn}, along with the leading diagrams for the resonant production of an invisibly decaying mediator with a Higgs or $Z$ boson, leading to \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$h$ and \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$Z$ final states, respectively, which are among the most sensitive probes of the 2HDM+$a$. A full overview of the phenomenology of the 2HDM+$a$ can be found in Refs.~\cite{Bauer:2017ota,LHCDarkMatterWorkingGroup:2018ufk}. \begin{figure}[h!]
\centering \includegraphics[width=0.24\textwidth]{figures/2HDMa_Feyn2.png} \includegraphics[width=0.24\textwidth]{figures/2HDMa_Feyn3.png} \includegraphics[width=0.24\textwidth]{figures/2HDMa_Feyn6.png} \includegraphics[width=0.24\textwidth]{figures/2HDMa_Feyn5.png} \caption{Schematic representation of relevant production and decay modes with top quarks leading to either top quarks in the final state or \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$h/Z$ signatures. From left to right: resonant production of a neutral scalar or pseudoscalar particle $H/A/a$ decaying to $t\bar{t}$ or $b\bar{b}$; associated production with $b\bar{b}$ or $t\bar{t}$ of a single $H/A/a$ decaying either visibly to heavy flavour or invisibly to DM; associated production of a top quark and a charged Higgs boson decaying to a $W$ boson and an invisibly decaying mediator $a$; resonant $A/H$ production with subsequent decay to a $Z/h$ boson and an invisibly decaying mediator $a$~\cite{ATLAS:2019wdu}.\label{fig:2HDM_Feyn}} \end{figure} \subsection{LHC Run 3} \label{sec:LHCRun3} The non-observation of WIMP DM at the LHC and various direct detection experiments to date has prompted the particle physics community to place a stronger focus on models and searches for non-WIMP DM as well as uncovered DM signatures at the LHC that can be probed during LHC Run 3 (2022-2025) and/or via re-interpretations of existing searches on LHC Run 2 data. A few notable examples involving signatures with top quarks are given in the following. \subsubsection{ALPs} \label{sec:ALPs} Axions and axion-like particles (ALPs)~\cite{ALPs1,ALPs2} have received increasing attention in recent years. A novel strategy to search for ALPs and, more generally, pseudo-Nambu-Goldstone bosons (pNGB) at the LHC has been proposed in Ref.~\cite{Gavela:2019cmq}, focusing on non-resonant searches that would be sensitive to ALPs produced as an off-shell $s$-channel mediator. It is motivated by the fact that the pNGB nature of the ALPs implies that their couplings to the SM are dominantly derivative, which leads to a cross-section enhancement for non-resonant ALPs production at centre-of-mass energies $\hat{s}>>m_a$, where $m_a$ denotes the mass of the ALP. The focus of recent studies has been on constraining the ALP-boson ($W$, $Z$, $h$, $g$, $\gamma$) coupling via non-resonant $ZZ$, $\gamma\gamma$, and $gg$~\cite{Gavela:2019cmq}, non-resonant $ZZ$ and $Zh$~\cite{CMS:2021xor}, and non-resonant $WW$, $Z\gamma$~\cite{Carra:2021ycg} production. The ALPs-fermion coupling can be predominantly probed via non-resonant \ensuremath{t\bar{t}}\xspace production (illustrated by the left diagram in Figure~\ref{fig:ALPs}) due to the Yukawa-like structure of the ALP-fermion couplings. No public results exist to date but studies are on-going. \begin{figure}[H] \centering \includegraphics[width=0.35\textwidth]{figures/ALPs_nonres.png} \includegraphics[width=0.425\textwidth,height=3.5cm]{figures/ALPs_LLP.png} \caption{Schematic representation of non-resonant \ensuremath{t\bar{t}}\xspace production via an off-shell $s$-channel ALP (left,~\cite{ALPS_NR_Feyn}) and SM \ensuremath{t\bar{t}}\xspace production with subsequent decay of one of the top quarks to an up-type quark and a long-lived ALP (right,~\cite{Carmona:2022jid}).\label{fig:ALPs}} \end{figure} The ALPs-fermion coupling can also be probed in \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+\ensuremath{t\bar{t}}\xspace final states. 
These are sensitive to \ensuremath{t\bar{t}}\xspace-associated production of a single ALP with couplings to quarks derived from couplings to the bosonic sector and proportional to the fermion mass~\cite{Brivio:2017ije}. It should be noted that the \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace distribution predicted for this signal process is softer on average than that predicted by e.g. stop production in supersymmetric models, emphasising the importance of keeping the \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace threshold low in future searches. Novel detector signatures involving exotic top quark decays are predicted in models with flavour-violating ALPs~\cite{Carmona:2022jid}, which are motivated by $t$-channel dark sector models~\cite{Renner:2018fhh} or Frogatt-Nielsen models of flavour~\cite{FVALPs}. These models predict flavour-violating decays of the top quark to an up-type quark and an ALP, with the ALP decaying predominantly to hadrons, either promptly or with a long lifetime. Precision measurements of single-top-quark production can constrain the parameter space of such models for prompt ALPs decays to jets and detector-stable ALPs. Displaced detector signatures are predicted for non-prompt ALPs decays within the detector volume. A novel search has been proposed~\cite{Carmona:2022jid} focusing on exotic top-quark decays from SM \ensuremath{t\bar{t}}\xspace production (right diagram in Figure~\ref{fig:ALPs}), where one of the top quarks decays into an up-type quark and an ALP, which in turn decays into a displaced narrow jet within the calorimeter volume. This and other signatures involving long-lived particles (LLP) in top-quark decays have not yet been probed in dedicated searches at the LHC. They remain an exciting prospect for the analysis of LHC Run 3 data within the currently fast-growing field of LLPs searches at the LHC, a field that benefits in particular from novel trigger and reconstruction algorithms deployed by the ATLAS and CMS experiments for Run 3 data taking. \subsubsection{Composite pseudo Nambu-Goldstone Bosons} \label{sec:pNGB} Signatures with top quarks can also be used to probe still viable WIMP models in which WIMP DM is made up of composite pNGBs~\cite{Haisch:2021ugv}. In these models, both the SM Higgs boson and DM emerge from a TeV-scale strongly-coupled sector as pNGBs and the SM-DM interaction is provided by higher-dimensional derivative couplings with the Higgs fields, which leads to a strong suppression of the DM scattering rates against SM particles. Thus, these models evade the strong constraints from direct detection experiments, making collider searches particularly relevant. The pNGB DM contains additional interactions with the SM sector, besides the derivative Higgs portal, with preferential couplings to third-generation fermions being well-motivated~\cite{Haisch:2021ugv}. If couplings to top quarks are preferred over couplings to bottom quarks, e.g. in the case of Yukawa-type couplings, pNGB models can be probed at the LHC via associated production of pNGB DM with \ensuremath{t\bar{t}}\xspace or a single top quark, i.e. in \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+\ensuremath{t\bar{t}}\xspace or \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t$+X final states. Two possible production modes of pNGB leading to \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tW$ final states via the Higgs portal and direct DM-top interactions are shown in Figure~\ref{fig:pNGBs}. 
Searches in these final states are complementary to searches for invisible Higgs boson decays in vector-boson fusion (VBF) production as they are sensitive to pNGB interactions with fermions not accessible via the latter. Re-interpretations of existing \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+\ensuremath{t\bar{t}}\xspace and \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tW$ searches as well as possible optimisations of future searches for pNGB production could be interesting to explore during LHC Run 3. \begin{figure}[H] \centering \includegraphics[width=0.35\textwidth]{figures/pNGB1.png} \includegraphics[width=0.35\textwidth]{figures/pNGB2.png} \caption{Schematic representation of \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tW$ production via DM-Higgs operators (left) and DM-top operators in an EFT of composite pNGBs~\cite{Haisch:2021ugv}.\label{fig:pNGBs}} \end{figure} \subsubsection{Dark Mesons} \label{sec:darkMesons} Final states with multiple top quarks are predicted in models with a strongly coupled dark sector consisting of composite particles that carry electroweak but no colour charges~\cite{Kribs:2018ilo}. These models not only address the hierarchy problem but can also provide a DM candidate in the form of a composite meson whose decays are suppressed via an automatic accidental symmetry. The most promising target for collider searches is the dark meson sector, consisting of dark vector mesons $\rho_D$ and dark pions $\pi_D$~\cite{Kribs:2018ilo}. Signatures with multiple top or bottom quarks are predicted if a pair of dark pions with gauge-phobic couplings to the SM is produced from the decay of a resonantly produced $\rho_D$ ($pp\rightarrow \rho_D \rightarrow \pi_D\pi_D$). The dark pions then decay predominantly into third-generation fermions, with decays to \ensuremath{t\bar{t}}\xspace ($tb$) dominating the branching fraction for $\pi_D^0$ ($\pi_D^{\pm}$) if the pion mass is above the \ensuremath{t\bar{t}}\xspace ($tb$) production threshold. Depending on the charge of the intermediate $\rho_D$, different final states involving third-generation quarks are possible: $b\bar{b}t\bar{b}$, $t\bar{t}b\bar{b}$, $t\bar{t}t\bar{b}$. Existing searches in multi-top final states only weakly constrain the parameter space of these models~\cite{Kribs:2018ilo}. This is due to the fact that small masses of the $\rho_D$ and $\pi_D$ are still viable, which means that the SM fermions in the final state tend to be rather soft. In searches at $\sqrt{s}=13$~TeV, in particular, higher thresholds are imposed on the energy/momenta of the final-state objects or their vector sum. In order to probe dark pions, or more generically strongly-coupled like models, dedicated searches targeting final states with a high multiplicity of low-momentum objects compatible with the decays of one or several low-momentum top quarks are needed. \subsection{HL-LHC and HE-LHC} \label{sec:out_HLLHC} The physics potential for DM searches involving top quarks during the high-luminosity phase of the LHC (HL-LHC, starting 2028) and the perspectives for a possible future high-energy LHC (HE-LHC) have been studied in the context of a 2019 CERN Yellow Report~\cite{HLLHC_YR}. The final HL-LHC dataset is expected to amount to an integrated luminosity of 3000~\ensuremath{{\rm fb}^{-1}}\xspace at a centre-of-mass energy $\sqrt{s}=14$~TeV. 
The HE-LHC scenario relies on the assumption of a possible further upgrade of the LHC to a 27~TeV $pp$ collider with a final integrated luminosity of 15,000~\ensuremath{{\rm fb}^{-1}}\xspace. Sensitivity studies have been performed for the \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+\ensuremath{t\bar{t}}\xspace, \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$tW$, \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t$, \ensuremath{t\bar{t}}\xspace, and $\ensuremath{t\bar{t}}\xspace\ttbar$ signatures within various benchmark models, including simplified models with a scalar or pseudoscalar mediator (Section~\ref{sec:SPS_model}), simplified models with a vector mediator with a flavour-changing coupling to the top and up quark (Section~\ref{sec:VFC_model}), and the 2HDM+$a$ (Section~\ref{sec:2HDMa_model}). These studies are mostly based on the analysis tools and strategies used for the analysis of the partial LHC Run 2 dataset (2015-2016). They do not include further improvements, such as new machine-learning based tools or background estimation strategies, implemented for the later analyses of the full LHC Run 2 dataset. A full review of the results of these sensitivity studies across the different final states and models is beyond the scope of this article but a few general observations can be made. Overall, both the increase in integrated luminosity (HL-LHC) and centre-of-mass energy (HE-LHC) lead to a significant sensitivity increase across the different final states. For example, the mass range for a (pseudo)scalar mediator expected to be excluded by \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+\ensuremath{t\bar{t}}\xspace searches in the simplified model of Section~\ref{sec:SPS_model} with $g=g_q=g_{\chi}=1.0$ (compare Figure~\ref{fig:SPS_limits}) is expected to increase by a factor of two for the HL-LHC compared to the expected sensitivity for LHC Run 3, and by another factor of two for the HE-LHC compared to the HL-LHC. The sensitivity of most of the searches is dominated by the systematic uncertainties on the main (often irreducible) background processes, for example $\ensuremath{t\bar{t}}\xspace+V$ in the case of \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+\ensuremath{t\bar{t}}\xspace searches. In \ensuremath{t\bar{t}}\xspace final states, these typically arise from two sources: firstly, uncertainties related to reconstructed objects, such as the energy scale for hadronic jets, and, secondly, uncertainties arising from the modelling of SM processes, such as missing higher-order corrections. These uncertainties can vary between a few percent and a few tens of percent, depending on the process and kinematic region. The former are expected to decrease with increasing integrated luminosity as the statistical uncertainties on the measurements from which they are derived are reduced accordingly. A further reduction of these uncertainties can be expected due to the development of better and more refined calibration methods. The latter can be reduced significantly through profiling in a likelihood fit to data if appropriate, background-enriched control regions are defined. Improved theoretical predictions, for example for differential cross-sections at higher orders in perturbation theory, can also significantly boost the sensitivity of many searches. 
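The role of systematic uncertainties described above can be illustrated with the commonly used approximation $Z \approx S/\sqrt{B + (\delta B)^2}$ for the significance of a counting experiment with signal yield $S$, background yield $B$, and relative background uncertainty $\delta$. The sketch below, with purely illustrative yields, shows how the gain from a larger dataset saturates once the systematic term dominates, in contrast to the purely statistical $\sqrt{L}$ scaling.
\begin{verbatim}
# Approximate significance Z ~ S / sqrt(B + (delta*B)^2) for a counting
# experiment, used here to illustrate how the sensitivity gain with
# luminosity saturates once background systematics dominate.
import math

def significance(s, b, delta):
    return s / math.sqrt(b + (delta * b) ** 2)

s_ref, b_ref, lumi_ref = 10.0, 400.0, 139.0   # illustrative yields at 139 fb^-1
for lumi in (139.0, 300.0, 3000.0):
    scale = lumi / lumi_ref
    row = [f"delta={d:.2f}: Z={significance(s_ref*scale, b_ref*scale, d):.2f}"
           for d in (0.0, 0.05, 0.20)]
    print(f"L={lumi:6.0f} fb^-1  " + "  ".join(row))
\end{verbatim}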
In the case of the HE-LHC, in addition to the improvements due to the larger integrated luminosity, the larger centre-of-mass energy provides access to mediator masses beyond the kinematic reach of the (HL-)LHC and to process with small signal cross-sections. \subsection{FCC-hh} \label{sec:out_FCChh} Similar considerations as for the HE-LHC apply to the case of a potential future hadron collider operating at centre-of-mass energies beyond that of the LHC and HE-LHC. The most prominent example is that of the FCC-hh, the Future Circular Collider in its operation mode as a hadron collider with a centre-of-mass energy of $\sqrt{s}=100$~TeV~\cite{FCChh}. Few dedicated studies regarding the sensitivity of DM searches with top quarks at the FCC-hh exist. For example, in Ref.~\cite{Dutta:2017sod} the sensitivity of the 2-lepton \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+\ensuremath{t\bar{t}}\xspace final state to Higgs portal models and their extensions is discussed. In general, a significant increase in the accessible mass range of both mediators and DM particles is expected, as well as a significant increase in the sensitivity to smaller DM-SM couplings, rendering detector signatures involving decays of long-lived particles away from the interaction point highly relevant. Moreover, top quarks appearing in the final states of FCC-hh collision can be extremely boosted, underlining the need for high-resolution detectors to identify very collimated decays, as well as the use of advanced pattern recognition methods for top-quark tagging. A particularly interesting observation is the fact that associated production of a single Higgs boson with \ensuremath{t\bar{t}}\xspace becomes the dominant Higgs boson production mode at Higgs boson transverse momenta of 1-2~TeV and above, a kinematic regime that would be well-populated at the FCC-hh~\cite{PHarris_2017}. According to initial studies~\cite{PHarris_2017}, searches for invisible Higgs boson decays in this production mode would feature a very low background contamination ($S/B \sim 1$) and hence provide excellent sensitivity to Higgs portal models with small couplings. The corresponding final state would be \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+\ensuremath{t\bar{t}}\xspace with highly boosted top quarks. \subsection{Future $e^+e^-$ colliders} \label{sec:out_FCCee} No studies of DM searches with top quarks exist for future $e^+e^-$ colliders, such as the International Linear Collider (ILC)~\cite{Behnke:2013xla}, the Compact Linear Collider (CLIC)~\cite{CLIC1,CLIC2}, the Future Circular Collider FCC-ee~\cite{FCCeePhys,FCCee}, and the Circular Electron-Positron Collider (CEPC)~\cite{CEPCStudyGroup:2018rmc,CEPCStudyGroup:2018ghi}. This can be mostly attributed to the fact that these machines are primarily designed for Higgs boson and top quark precision measurements rather than a broad range of BSM (including DM) searches and that their foreseen centre-of-mass energies are in many cases below or close to the \ensuremath{t\bar{t}}\xspace production threshold. For example, operation modes at $\sqrt{s}=240$~GeV (250~GeV), i.e. around the maximum of the $Zh$ production cross-section, are foreseen for the FCC-ee and the CEPC (ILC). Additional operation modes in the range 350-365~GeV (FCC-ee, CEPC) and 380~GeV (CLIC) are foreseen for top quark precision measurements. Higher centre-of-mass energies of 1 TeV (ILC) and 1-3~TeV could be possible for the linear $e^+e^-$ machines to allow for wider range of BSM searches. 
Hence direct DM production in association with at least one top quark, leading to \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+\ensuremath{t\bar{t}}\xspace and \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace+$t$+$X$ final states, while in principle possible, is trivially limited by the available centre-of-mass energy. Nevertheless, the foreseen precision scans of the \ensuremath{t\bar{t}}\xspace production threshold at the FCC-ee could in principle be sensitive to anomalous resonant or non-resonant \ensuremath{t\bar{t}}\xspace production linked with DM or DM mediators as well as anomalous top-quark decays. Further studies are needed to understand the prospects for DM searches with top quarks at future $e^+e^-$ colliders. \subsection{Conclusion} \label{sec:Conclusion} Collider signatures with top quarks provide sensitive probes of DM predicted by a wide range of models, and possibly even to DE signatures. Searches targeting top-quark production in association with DM or via visible decays of mediator particles have been performed by the ATLAS and CMS Collaborations, with many searches on the full LHC Run 2 collision data still on-going. As shown in this review, DM searches involving top quarks often provide sensitivity in parameter regions not covered by other DM searches, underlining their importance as sensitive probes of DM at colliders. The upcoming LHC Run 3 opens up further opportunities to improve upon existing results or to explore new signatures, for example involving top quarks in association with long-lived particle signatures. \section{Introduction} \input{sections/introduction} \section{Models with BSM signatures involving top quarks}\label{sec:models} Collider searches for DM are usually interpreted in the context of so-called \textit{simplified models}, which contain a minimal set of new particles and couplings. Most of these models contain only a single Dirac DM particle and a single mediator particle. They are characterised by a minimal set of free parameters, namely the masses of the DM and mediator particles and the couplings of the mediator to the SM and dark sector. Simplified models provide a convenient framework to compare searches in different final states and among different experiments. In the following, the simplified models used for the interpretation of DM searches involving top quarks are described. Additionally, an effective-field theory (EFT) description of scalar DE is introduced. \subsection{Vector and axial-vector mediators} \input{sections/models_AVV.tex} \input{sections/models_VFC.tex} \subsection{Scalar and pseudoscalar mediators} A preferred coupling of DM to top quarks is predicted in simplified models containing a spin-0 mediator with Yukawa-like couplings to SM fermions. The mediator can be either a scalar ($\phi$) or pseudoscalar ($a$). These models can be straightforwardly embedded in ultra-violet (UV) complete theories with extended Higgs sectors, such as Two-Higgs-Doublet Models (2HDMs, see also Section~\ref{sec:2HDM}). Assuming Yukawa-like couplings allows this class of models to satisfy strong constraints from flavour precision measurements. The dynamics of flavour violation are completely determined by the structure of the ordinary fermion Yukawa couplings, which is referred to as \textit{Minimal Flavour Violation (MFV)}~\cite{DAmbrosio:2002vsn}. The simplified models described in this section can be broadly categorised into models with a colour-neutral and a colour-charged interaction. 
An overview of the models falling into each category can be found in Ref.~\cite{ATLAS:2019wdu} and references therein. Two representative benchmark models used by the ATLAS and CMS Collaborations are presented in the following. \input{sections/models_SPS.tex} \input{sections/models_SCC.tex}
\subsection{Extended Higgs sectors} \label{sec:2HDM} Extended Higgs sectors are predicted by a range of BSM theories, such as supersymmetry~\cite{Djouadi:2008gy}, certain classes of axion models~\cite{PDG2020}, or theories predicting additional sources of CP violation in the Higgs sector to explain the observed baryon asymmetry in the universe~\cite{Carena:2015uoe,Fuchs:2017wkq}. Extensions of the SM Higgs sector by a second complex SU(2) doublet, referred to as Two-Higgs-Doublet Models (2HDMs), are among the simplest and most studied models with an extended Higgs sector, historically due to their strong motivation from supersymmetry. In the past years, 2HDMs have also received considerable attention from the DM community as a means of embedding the simplified, mediator-based models described in the previous sections in the context of a UV-complete and renormalisable framework with a broader collider phenomenology. Models of DM based on a 2HDM with a vector~\cite{Berlin:2014cfa}, pseudoscalar~\cite{Bauer:2017ota,Goncalves:2016iyg}, and scalar~\cite{Bell:2016ekl} mediator have been proposed. Concrete realisations of the former two have been used as benchmark models by the LHC experiments. Models with vector mediators are not discussed in this review as final states with top quarks do not play a dominant role in their phenomenology. Models with a pseudoscalar mediator, on the other hand, feature a rich phenomenology involving relevant signatures with top quarks due to the Yukawa-type coupling of the mediator to SM fermions. Pseudoscalar mediators are also particularly interesting to study at the LHC as they are not strongly constrained by direct-detection experiments because the DM-nucleon scattering cross-section for pseudoscalar couplings is strongly suppressed at tree level by the momentum transfer in the non-relativistic limit~\cite{Abe:2018emu}. A concrete realisation of a 2HDM with a pseudoscalar mediator that is used as a benchmark model by the LHC experiments is described in Section~\ref{sec:2HDMa_model}. \input{sections/models_2HDMa.tex} \input{sections/models_DE.tex}
\section{Experimental signatures} \label{sec:signatures} Searches for DM in $pp$ collisions involving single or multiple top quarks can be broadly split into two categories: searches for large \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace and searches for a DM mediator decaying into SM particles. The two classes rely on different analysis techniques. Common to all searches is a detailed exploration of the top-quark decay. Due to the almost diagonal structure of the CKM matrix and in particular $V_{tb}$ being close to one, the top quark decays almost 100\% of the time into a bottom quark and a $W$ boson. The $W$ boson itself decays with about 30\% probability into a charged lepton, i.e. an electron, muon, or tau, and the corresponding neutrino, or into two quarks otherwise. Similar to DM particles, neutrinos can only be inferred from missing transverse momentum in the detector. Events with two top quarks or with a single top quark and a $W$ boson are typically categorised in three orthogonal channels based on the lepton ($\ell = e,\mu$, including decays via $\tau$ leptons, i.e. $\tau \to e$, $\tau \to \mu$) multiplicity in the final state.
0-lepton (0$\ell$) final states arise in events in which both $W$ bosons decay hadronically; 1-lepton (1$\ell$) final states arise in events in which one $W$ boson decays hadronically, the other leptonically; 2-lepton (2$\ell$) final states arise if both $W$ bosons decay leptonically. When top quarks recoil against significant \ensuremath{p_{\mathrm{T}}^{\mathrm{miss}}}\xspace or result from the decay of a very heavy resonance, they are highly Lorentz-boosted and their decay products become highly collimated. In the case of hadronic top-quark decays, this means that the particle showers from the three final-state quarks can no longer be reconstructed as three separate small-radius (small-$R$) jets (\textit{resolved decay}) but merge into a single large-radius (large-$R$) jet with characteristic substructure (\textit{merged decay}). Merged top-quark decays are identified using dedicated \textit{top tagging} algorithms.
\subsection{Final states with invisible decays} \input{sections/sig_inv_mett.tex} \input{sections/sig_inv_mettWtj.tex} \input{sections/sig_inv_mettt.tex} \input{sections/sig_inv_mettttwtj.tex} \subsection{Final states without invisible decays} \input{sections/sig_vis_samesign_tt.tex} \input{sections/sig_vis_ttbar.tex} \input{sections/sig_vis_4top.tex} \input{sections/sig_vis_tbH.tex}
\section{Results} \label{sec:results} \subsection{Vector and axial-vector mediators} \input{sections/results_AVV.tex} \input{sections/results_VFC.tex} \subsection{Scalar and pseudoscalar mediators} \input{sections/results_SPS.tex} \input{sections/results_SCC.tex} \subsection{Extended Higgs sectors} \input{sections/results_2HDMa.tex} \input{sections/results_DE.tex}
\section{Discussion} \label{sec:discussion} A variety of searches targeting top-quark production in association with DM or via visible decays of mediator particles have been conducted by the ATLAS and CMS Collaborations. No significant deviation from the SM prediction has been observed. Therefore, the results are used to constrain DM in a variety of simplified models as well as scalar DE described in an EFT model. Signatures involving top quarks often provide sensitivity in parameter regions not covered by other DM searches, underlining their importance as sensitive probes of DM at colliders. They provide a particularly relevant probe of models involving new particles with Yukawa-like interactions, which imply preferred couplings to top quarks. It should be noted that many of the results and summary plots presented in this review are preliminary as various searches on the full LHC Run 2 collision data are still on-going. Furthermore, not all of the existing results have been interpreted in relevant benchmark models. Further results of DM searches with top quarks are expected to be released by both collaborations in the near future.
\section{Outlook} \label{sec:outlook} \input{sections/conclusion.tex}
\section*{Acknowledgements} K.B. thanks the Helmholtz Association for the support through the ``Young Investigator Group'' initiative. The authors acknowledge funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany's Excellence Strategy – EXC 2121 ``Quantum Universe'' – 390833306. \bibliographystyle{unsrtnat}
{ "attr-fineweb-edu": 1.484375, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUdYHxaJJQnL9IZz-R
\section{Introduction} Elemental Tellurium is known to have a wide variety of unusual optical characteristics. It is the only elemental semiconductor with a direct bandgap in the technically interesting mid infrared wavelength range near $3.8\mu\mathrm{m}$. Furthermore, Te is considered to have exceptional non-linear optical properties\cite{fee1970,Berezovskii1972} due to its chiral structure where the atoms form helical chains. Relatively few studies of the optical properties of Te have been published so far. Reflectivity and absorption spectra and their temperature dependent variations have been analyzed in Refs.\onlinecite{Sobolev63,Stuke64,Loferski54,Tutihasi1969} and references therein. These papers report a strong polarization dependence of the optical response, reflecting the uniaxial nature of the Te crystal. Bulk crystals of Te exhibit large refractive indices with a prominent difference between the ordinary and extraordinary directions (about 4.9 and 6.3 near the bandgap)\cite{Caldwell59}. The strongly directional crystal structure also leads to prominent optical nonlinearities. An exceptionally large nonlinear coefficient was confirmed by a phase-matched harmonic generation measurement on an elemental Te crystal in Ref.\onlinecite{Patel65}. Furthermore, the chiral structure has been shown to lead to gyroscopic nonlinear optical responses depending on the helicity of the light (see e.g. Ref.\onlinecite{Tsirkin18} and references therein). Measurements of the photoluminescence from bulk Te crystals have been reported for cryogenic temperatures in Ref.\onlinecite{Benoit65} and for room temperature in Ref.\onlinecite{Choi19}. These publications also document indications of stimulated emission and lasing as well as strong second- and third-order harmonic generation. To complement and extend the earlier investigations, we present in this paper a comprehensive analysis of the nonlinear optical properties of bulk Te. For this purpose, we performed a systematic microscopic study of its resonant incoherent and off-resonant coherent properties. We employ an $\it{ab \,\, initio}$ based approach where we use Density Functional Theory (DFT) together with the shell Local Density Approximation-1/2 (shLDA-1/2) method to obtain accurate structural and electronic parameters. We evaluate the dispersion of the energetically highest valence and the lowest conduction bands and determine the relevant dipole and Coulomb interaction matrix elements. Using these results as input for the semiconductor Bloch equations (SBE)\cite{lindberg1988}, we first evaluate the Te absorption spectra for different excitation conditions. Our results show excellent agreement with published experimental data. Assuming quasi equilibrium carrier populations in the relevant valence and conduction bands, we compute the transition from absorption to optical gain. The corresponding luminescence spectra are evaluated using the semiconductor luminescence equations (SLE)\cite{sle}. Both, gain and luminescence exhibit strong dependence on the light polarization direction. For strongly off-resonant excitation, we investigate the generation of high harmonics in a wide spectral range extending far above the fundamental Te bandgap. 
Currently, high harmonic generation (HHG) in semiconductors after excitation with short high-intensity pulses is a field of active research\cite{ghimire2011observation, luu2015extreme, yoshikawa2017high, vampa2015linking, Hohenleutner2015, ndabashimiye2016solid, liu2017high, you2017anisotropic, xia2018nonlinear, kemper2013theoretical, Vampa2014, hawkins2015effect, higuchi2014strong, tamaya2016diabatic, wu2015high, ghimire2012generation}. Microscopically, semiconductor HHG can be related to the nonequilibrium dynamics of the induced electron-hole excitations, including interband polarizations and intraband currents probing the conduction and valence bandstructure in the entire Brillouin zone (BZ). To analyze these effects, we use our DFT results as structural input for the SBE and compute HHG spectra for different excitation conditions. Besides local evaluations of HHG spectra, we also study the effects of different sample thicknesses by performing calculations which explicitly include field propagation effects. This paper is organized as follows: In Sec. 2, we give an overview of our DFT approach and discuss the resulting bandstructure and the relevant dipole matrix elements for bulk Te. Section 3 summarizes our calculations for optical absorption, gain, and photo luminescence, whereas Sec. 4 is devoted to the modeling of HHG in Te for different excitation conditions and sample lengths. A short summary and outlook in Sec. 5 concludes our presentation. \section{Electronic Structure Calculations} \subsection{Computational Details} In our approach to construct the electronic structure for bulk Te, we use the Vienna Ab initio Simulation Package\cite{Kresse1993, Kresse1994, Kresse1996, Kresse1996a} (VASP) version 5.4.4 which implements the Projector-Augmented Wave (PAW) method\cite{Kresse1999, Blochl1994}. Starting from the symmetry group of right-handed Tellurium $P3_121-D^4_3$, the crystal structure was relaxed using the Generalized Gradient Approximation (GGA) by Perdew, Burke and Ernzerhof (PBE)\cite{Perdew1996} for the exchange-correlation energy. A $\Gamma$-centered Monkhorst-Pack\cite{Monkhorst1976} grid of $15 \times 15 \times 15$ $k$-points and a plane wave basis-set cutoff energy of $500 \, \text{eV}$ were used. The cell volume, cell shape and ion positions were optimized using the conjugate gradient algorithm. The convergence criteria were set to $10^{-9} \, \text{eV}$ for electronic minimization and $3 \cdot 10^{-4} \, \text{eV}/\text{\AA}$ for the forces acting on the ions.\\ After relaxation, the PAW pseudopotential for Te was modified according to the shLDA-1/2 method as proposed by Xue et al.\cite{Xue2018}. This method is based on the LDA-1/2\cite{Ferreira2008, Ferreira2011a} method, which aims to avoid the underestimation of bandgaps with a GGA by correcting for the self-interaction of a localized hole in the valence band via adding a so-called self-energy potential to the pseudopotential. Based on Slater's half-occupation technique\cite{Slater1972}, the self-energy potential is found by subtracting the potential of the half-ionized atom from that of the un-ionized atom. Since this self-energy potential is added to every atom, it has to be trimmed to avoid divergent contributions.
In the LDA-1/2 method, this is achieved with a spherical trimming function \begin{equation} \Theta(r) = \begin{cases} \left[1-\left(\frac{r}{r_\text{cut}}\right)^n\right]^3 & r \leq r_\text{cut} \\ 0 & r > r_\text{cut} \end{cases}\quad , \end{equation} in which the cutoff radius has to be determined variationally with the condition that the resulting bandgap is maximized. In the shLDA-1/2 method, the trimming function is replaced by a spherical shell \begin{equation} \Theta(r) = \begin{cases} \left[1-\left(\frac{r}{r_\text{out}}\right)^m\right]^3 \frac{1+\text{tanh}[n(r-r_\text{in})]}{2} & r \leq r_\text{out} \\ 0 & r > r_\text{out} \end{cases} , \end{equation} which is more suitable for crystals where the charge is not centered around the atom cores, but lies between two atoms. In this case, in addition to the outer cutoff radius $r_\text{out}$ an inner cutoff radius $r_\text{in}$ has to be determined by the same method as before, keeping the outer cutoff radius constant. The self-energy corrected pseudopotentials for different cutoff radii have been constructed and the optimal cutoff radius determined by fitting a quadratic function of the cutoff radius to the resulting bandgaps and finding the maximum. The corresponding DFT calculations used the same computational parameters as the relaxation, however, the crystal structure was kept constant and spin-orbit coupling was included.\\ In a third set of calculations, the band structure and dipole matrix elements were determined. To this end, the charge-density of the self-consistently calculated ground-state obtained with the constructed pseudopotential was read in and kept constant. The $k$-points were chosen along high symmetry lines in the Brillouin zone and the number of bands was increased, since a significant amount of empty conduction bands is needed for the optical routines of the VASP program that calculate the dielectric properties\cite{gajdos2006}. \subsection{Bandstructure and Dipole Matrix Elements} \label{dftres} \begin{table} \caption{Comparison of structural and electronic parameters from \textit{ab-initio} DFT calculations using the shLDA-1/2 method with experimental results.}\label{tab:dftres} \begin{ruledtabular} \begin{tabular}{lccccc} & \multicolumn{3}{c}{Structural parameters} & \multicolumn{2}{c}{Electronic Properties} \\ & $a$ & $c$ & $u$ & $E_g$ & $E_{\text{LH-HH}}$ \\ \hline DFT & $4.51 \, \text{\AA}$ & $5.96 \, \text{\AA}$ & $0.27$ & $0.323 \, \text{eV}$ & $0.111 \, \text{eV}$ \\ Exp.\cite{Adenis1989, Anzin1977, Caldwell1959} & $4.46 \, \text{\AA}$ & $5.92 \, \text{\AA}$ & $0.267$ & $0.33 \, \text{eV}$ & $0.112 \, \text{eV}$ \end{tabular} \end{ruledtabular} \end{table} \begin{figure}[htbp] \includegraphics{fig1} \caption{Lowest six electron bands and highest six hole bands of Tellurium calculated with the shLDA-1/2 method.} \label{bstr} \end{figure} \begin{figure*}[ht] \includegraphics[width=.89\textwidth]{fig2} \caption{Dipole transition matrix elements between bands in a 2d plane of the 1. BZ spanned by the $\Gamma$-,M-,L- and A-points. The first row and first column show the band energies, while the inner plots show the dipole strengths. The dipole in a particular cell corresponds to the band combination given by the bands in the respective row and column. In the bottom left triangle, the dipoles for $E \parallel c$ direction are given, in the top right triangle, the dipoles for $E \perp c$ direction are given. The color bar in the bottom right cell pertains to all dipoles. 
Values higher than 10 are not distinguished in the color plot. The `max' value above each dipole plot indicates the maximum value of the respective dipole at any point in the plane. } \label{dips} \end{figure*} The results of the structural relaxation can be found in the first three columns of Table \ref{tab:dftres}, where $a$ and $c$ are the lattice constants and $u$ is the parameter that determines the position of the atoms in the plane perpendicular to the helical chains. Comparison to experimental values shows that both lattice constants are slightly overestimated. For the construction of the self-energy corrected pseudopotential, the optimized inner and outer cutoff radii were determined as $1.328 \, \text{\AA}$ and $3.395 \, \text{\AA}$, respectively. The resulting direct bandgap at the H-point, $E_g$, and splitting of the light-hole and heavy-hole valence band at the H-point, $E_{LH-HH}$, are compared to the experimental values in Table \ref{tab:dftres}. Both the gap and the valence band splitting are in very good agreement with the experiment, underestimating the experimental values slightly by $2\%$ and $1\%$, respectively. The complete band structure along high symmetry lines in the BZ is shown in Fig. \ref{bstr}. From the wavefunctions, $\phi$, obtained from DFT, the transition dipole moments (TDMs) $\mathbf{d}^{nn^\prime}_{\mathbf{k}}$ between bands $n$ and $n^\prime$ at every k-point $\mathbf{k}$ are determined, \begin{equation} \label{tdm} d^{nn^\prime}_{\mathbf{k}} = \frac{\hat{\mathbf{e}}}{\epsilon_{n\mathbf{k}} -\epsilon_{n'\mathbf{k}}} \cdot \left\langle \phi_{n'\mathbf{k}} \left| \frac{\partial (\mathbf{ H} - \epsilon_{n\mathbf{k}} \mathbf{S})}{\partial \mathbf{k}} \right| \phi_{n\mathbf{k}} \right\rangle \, . \end{equation} Here $\epsilon$ denotes the single-particle energies and $\hat{\mathbf{e}}$ is the polarization direction. $\mathbf{ H}$ is the Hamilton operator for the cell periodic wavefunctions and $\mathbf{S}$ is the corresponding overlap operator\cite{Gajdos06}. An overview of the TDMs projected onto the $z$-direction for optical fields polarized parallel to the $c$-axis ($E \parallel c$) and onto the $x$-direction for $E \perp c$ is shown in Fig. \ref{dips}. Here, the modulus of the TDMs between the two lowest conduction bands and four highest valence bands is shown in a momentum vector plane spanned by the $\Gamma$-, A-, H-, L-, M-, and K-point of the BZ. While the scale of the color map in Fig.~\ref{dips} is capped at 10, the maximum value of the dipoles between two valence bands or two conduction bands far exceeds that limit. This can be explained by Eq. \ref{tdm}, since the bands of the same type are very close to each other up to the point of almost becoming degenerate, so that the factor $1/(\epsilon_{n\mathbf{k}} -\epsilon_{n'\mathbf{k}})$ becomes very large. For the interband dipoles between a conduction and a valence band for $E\perp c$, the strongest dipole coupling is found around the direct bandgap at the H-point. For $E\parallel c$ the interband dipoles involving the two highest valence bands are vanishingly small at the H-point; however, they become stronger when moving away from the H-point along the H-L-line. Only the interband dipoles of the two lower-lying valence bands, VB3 and VB4, are significant at the H-point. We will utilize this feature to simplify the optical response calculations for $E\parallel c$ by omitting the two upper valence bands, VB1 and VB2.
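To illustrate how the TDM expression above would be evaluated numerically, the following minimal sketch assumes that the band energies and the $\mathbf{k}$-gradient matrix elements have already been exported as plain arrays; the array layout and the function name are placeholders chosen for illustration and do not correspond to the actual VASP interface.
\begin{verbatim}
import numpy as np

def transition_dipoles(eps, dHmat, e_hat, tol=1e-8):
    """eps[n, k]: single-particle energies.
    dHmat[n', n, k, :]: <phi_{n'k}| d(H - eps_{nk} S)/dk |phi_{nk}> (3-vector).
    e_hat: light-polarization unit vector, shape (3,).
    Returns d[n, n', k] following the TDM expression above."""
    n_bands, n_k = eps.shape
    d = np.zeros((n_bands, n_bands, n_k), dtype=complex)
    for n in range(n_bands):
        for m in range(n_bands):
            if m == n:
                continue
            de = eps[n] - eps[m]          # energy denominator eps_nk - eps_n'k
            proj = dHmat[m, n] @ e_hat    # projection onto the polarization direction
            ok = np.abs(de) > tol         # guard against (near-)degenerate bands
            d[n, m, ok] = proj[ok] / de[ok]
    return d
\end{verbatim}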
Generally, strong dipole coupling is found in parameter regions where the bands are close to each other. However, there are differences between the dipoles for $E\parallel c$ and $E\perp c$ although the band energies are the same for both polarization directions. E.g., the intraband dipoles are strong along the $\Gamma$-K-M line for $E\parallel c$, while there is no significant coupling for $E\perp c$. Conversely, the coupling along the H-K-line is much stronger for $E\perp c$ than for $E\parallel c$. \section{Incoherent Resonant Nonlinearities} \label{sec:abs} In order to test the results of our DFT calculations, we use the band structures, wavefunctions and TDMs to evaluate absorption spectra for Te and compare them to experimentally measured results. The absorption is calculated for two polarization directions of the exciting light field, $E \parallel c$ and $E \perp c$. In the BZ, these directions correspond to the H-K- and H-L-H-A-path, respectively. Linear absorption spectra are computed by applying an arbitrarily small field $E(t)$ and calculating the material response $P(t)$ by solving the equations of motion for the microscopic polarizations, $p^{j i}_{\mathbf{k}}$, i.e. the SBE\cite{lindberg1988,girndt1997}: \begin{align} \label{sbe_eq} \frac{\mathrm{d}}{\mathrm{d}t} p^{j_1 i_1}_{\mathbf{k}} = & \frac{1}{i \hbar} ( \sum_{i_2, j_2} \left[ \tilde{\epsilon}^{h}_{j_1 j_2,\mathbf{k}} \delta_{i_1 i_2} + \tilde{\epsilon}^{e}_{i_1 i_2, \mathbf{k}} \delta_{j_1 j_2} \right] p^{j_2 i_2}_{\mathbf{k}} \\ \notag & \quad \quad \quad + \left[1 - f^{e}_{i_1, \mathbf{k}} - f^{h}_{j_1, \mathbf{k}} \right] \Omega^{i_1 j_1}_{\mathbf{k}} ) \\ \notag &+ \left. \frac{\mathrm{d}}{\mathrm{d}t} p^{j_1 i_1}_{\mathbf{k}} \right\vert_{\text{corr}} \end{align} with the renormalized electron and hole energies \begin{align} \tilde{\epsilon}^{e}_{i_1 i_2, \mathbf{k}} &= \epsilon^{e}_{i_1,\mathbf{k}} \delta_{i_1 i_2} - \sum_{i_3, q} V^{i_1 i_3 i_2 i_3}_{\mathbf{k}-\mathbf{q}} f^{e}_{i_3, \mathbf{q}} \\ \tilde{\epsilon}^{h}_{j_1 j_2, \mathbf{k}} &= \epsilon^{h}_{j_1,\mathbf{k}} \delta_{j_1 j_2} - \sum_{j_3, q} V^{j_2 j_3 j_1 j_3}_{\mathbf{k}-\mathbf{q}} f^{h}_{j_3, \mathbf{q}} \end{align} and the renormalized generalized Rabi frequency \begin{align} \Omega^{i_1 j_1}_{\mathbf{k}} &= - d^{i_1 j_1}_\mathbf{k} E(t) - \sum_{i_2, j_2, q} V^{i_1 j_2 j_1 i_2}_{\mathbf{k}-\mathbf{q}} p^{j_2 i_2}_{\mathbf{q}} \, . \end{align} Here, $i_1, i_2, i_3$ are electron band indices and $j_1, j_2, j_3$ are hole band indices. Like the dipole matrix elements, the Coulomb matrix elements $V$ are evaluated using the DFT wavefunctions. For the linear absorption calculations, the material is assumed to be in the unexcited ground state and the field is too weak to create carriers such that the occupations for electrons/holes $f^{e/h}$ remain zero. For gain calculations, the carriers are assumed to be in thermal equilibrium and described by Fermi distributions within the respective bands. This fully microscopic approach has been shown to yield very good quantitative agreement with the experiment for a wide variety of materials spanning the mid-IR to visible wavelength ranges (see e.g. Ref.\onlinecite{nlcstr-web-page} for examples). The term $\left. \frac{\mathrm{d}}{\mathrm{d}t} p^{j_1 i_1}_{\mathbf{k}} \right\vert_{\text{corr}}$ summarizes higher order correlations that include the electron-electron and electron-phonon scattering which lead to the dephasing of the polarization and the resulting homogeneous broadening of the spectra.
We include the scatterings on a fully microscopic level by solving the corresponding quantum-Boltzmann-type scattering equations. Standard literature parameters are used for the phonon scattering as discussed in Ref.\onlinecite{girndt1997}. The explicit calculation of the dephasing processes not only eliminates adjustments requiring empirical parameters, but has also been shown to be essential to obtain correct lineshapes, amplitudes, spectral positions and density dependencies. From the Fourier transform of the macroscopic polarization $P(t) = \sum_{i,j,\mathbf{k}} p^{ji}_{\mathbf{k}} d^{ij*}_{\mathbf{k}}$, the absorption coefficient $\alpha$ is calculated according to \begin{align} \alpha(\omega) = \frac{\omega}{\epsilon_0 n_r(\omega) c E(\omega)} \text{Im} \left[ P(\omega) \right] \, . \end{align} \begin{figure}[htbp] \includegraphics[width=0.4\textwidth]{fig3} \caption{Room temperature material absorption of Te for light polarized $\parallel c$ (smaller) and $\perp c$ (larger). Solid lines: theoretical results based on DFT. Symbols: experimental data extracted from Ref.\onlinecite{Tutihasi1969}. The experimental data was shifted by $14\,\mathrm{meV}$ to lower energies.} \label{fig_abs} \end{figure} In Fig.~\ref{fig_abs}, we plot the resulting absorption spectra for the polarizations $E \perp c$ and $E \parallel c$. Especially near the bandgap, the absorption for $E \perp c$ is much larger than the one for $E \parallel c$ due to the weaker coupling between the topmost valence bands and the conduction bands near the bandgap for $E \parallel c$ (see Sec. \ref{dftres}). The blue and red dots in Fig.~\ref{fig_abs} show the results of measurements extracted from Ref.\onlinecite{Tutihasi1969}. As we can see, our computed results agree well with the experimentally measured spectra. As noted in Ref.\onlinecite{Tutihasi1969}, it is difficult to determine the reason for the strong polarization dependence of the absorption from measurement alone. While the authors of Ref.\onlinecite{Tutihasi1969} assumed that the absorption for $E\parallel c$ is suppressed due to an indirect gap, Refs.\onlinecite{Rigaux66, Grosse68} concluded that the real reasons are the selection rules leading to forbidden transitions for this configuration. This assumption is fully confirmed by our DFT calculations. The authors of Ref.\onlinecite{Tutihasi1969} state the bandgap of their sample to be around $0.335-0.337\,\mathrm{eV}$ compared to our value of $0.323\,\mathrm{eV}$ and the value of about $0.33\,\mathrm{eV}$ from Refs.\onlinecite{Adenis1989, Anzin1977, Caldwell1959}. In Ref.\onlinecite{Choi19}, the authors report that it was possible to shift the bandedge photoluminescence of their sample by about $24\,\mathrm{meV}$ through annealing. This indicates that the bandgap of Te can vary due to effects like sample quality or strain by amounts that can explain the difference found between our results and those in Ref.\onlinecite{Tutihasi1969}. In Fig.~\ref{fig_abs}, we account for the difference in bandgaps by shifting the experimental data extracted from Ref.\onlinecite{Tutihasi1969} by $14\,\mathrm{meV}$. \begin{figure}[htbp] \includegraphics[width=0.48\textwidth]{fig4} \caption{Room temperature material gain (negative absorption) spectra of Te for light polarized $\perp c$ (left) and $\parallel c$ (right) at various carrier densities.
The carrier densities are given in the labels in units of $10^{19}/\mathrm{cm}^3$.} \label{fig_gain} \end{figure} Encouraged by the good agreement of the computed and measured linear absorption spectra, we use our microscopic approach to investigate the nonlinear optical properties of bulk Te. In a first step, we assume that the material has been excited to generate significant densities of incoherent electron and hole populations in the respective bands. As an example, we show in Fig.\ref{fig_gain} the calculated optical material gain ($- \alpha(\omega)$) for $E \perp c$ and $E \parallel c$ and various carrier densities. We see that for $E\perp c$ gain begins to occur for carrier densities above $4\times 10^{19}/\mathrm{cm}^3$. For densities above about $7\times 10^{19}/\mathrm{cm}^3$ the peak gain shifts from the CB1-VB1 transition with a peak around $0.33-0.35\,\mathrm{eV}$ to the second conduction band transition, CB2-VB1, with a peak near $0.37\,\mathrm{eV}$. As has been seen in the linear absorption spectra, the TDMs are much smaller for $E\parallel c$ than for $E\perp c$ in the spectral range where gain would occur. This leads to virtually no gain at all for this polarization direction at realistic carrier densities. Assuming the same excitation conditions, the resulting photo luminescence (PL) is calculated by solving the SLE\cite{sle}, i.e., the microscopic equations of motion for the photon assisted polarizations. Structurally, the SLE have the same form as the SBE, Eq.(\ref{sbe_eq}), but include higher excitonic correlations as an additional source term. As for the SBE, we include in our SLE evaluations the electron-electron and electron-phonon scattering on a fully microscopic level. \begin{figure}[htbp] \includegraphics[width=0.48\textwidth]{fig5} \caption{Theoretical (lines) and experimental (symbols) room temperature photo luminescence spectra of Te for light polarized $\perp c$ (left) and $\parallel c$ (right) at various carrier densities. The theoretical spectra have been divided by the respective density squared. The carrier densities are given in the labels in units of $10^{19}/\mathrm{cm}^3$. The experimental data extracted from Ref.\onlinecite{Choi19} are given in arbitrary units.} \label{fig_pl} \end{figure} Fig.\ref{fig_pl} shows PL spectra for $E\perp c$ and $E\parallel c$ at various levels of electron-hole-pair populations. In the low density regime, the PL scales quadratically with the carrier density. Plotting the PL divided by the square of the density as in Fig.\ref{fig_pl} reveals deviations for higher excitation levels from this quadratic variation that are due to phase space filling\cite{apl-phase-space05}. In this regime, the density dependence becomes less than quadratic and the PL peak shifts to higher transition energies. For the case of $E\parallel c$ the peak shift is stronger and the amplitude reduction is slower. These features can be attributed to the fact that the TDMs for $E\parallel c$ increase significantly with increasing energy above the gap which enhances energetically higher PL contributions. Like the gain and absorption, the PL is much weaker for $E\parallel c$ than for $E\perp c$ due to the much smaller TDMs in the energy region of interest. This agrees with the experimentally observed dominant polarization $E\perp c$ of PL in Ref.\onlinecite{Benoit65}.
The spectral position as well as the lineshape of our calculated PL agrees very well with experimentally measured data from Ref.\onlinecite{Choi19} that we include in Fig.\ref{fig_pl} for comparison. This demonstrates the high accuracy of the fully microscopic modelling approach including the explicit treatment of scattering processes that lead to an almost perfect agreement with the experimentally observed linewidth of about $80\,\mathrm{meV}$. \section{Coherent Off-Resonant Nonlinear Response} \subsection{Microscopic Approach} In order to model the nonlinear optical response of a crystal to a strong exciting THz field, the coupled dynamics of interband polarizations and intraband currents have to be investigated. For this purpose, we again use the SBE. However, in contrast to the quasi-stationary nonlinear response investigated so far, we now have to explicitly include the nonequilibrium carrier dynamics. In particular, the strong long-wavelength excitation field leads to an acceleration of carriers along the bands throughout the entire BZ. Thus, the results depend critically on the dispersion relation across the whole BZ. Furthermore, pulse propagation effects have to be included in order to study the dependence of HHG on sample length. In earlier studies, we have shown that for the strongly off-resonant excitation assumed here, the Coulomb renormalizations have a negligible influence\cite{huttner-even-hhg17} such that the equations of motion can be simplified to \begin{align} \label{eq:sbep} i \hbar \frac{\mathrm{d}}{\mathrm{d} t} p^{\mathrm{h}_i \mathrm{e}_j}_{\mathbf{k}} &= \left( \epsilon^{\mathrm{e}_j}_{\mathbf{k}} + \epsilon^{\mathrm{h}_i}_{\mathbf{k}} + i \lvert e \rvert E_\text{THz} (t) \nabla_{\mathbf{k}} \right) p^{\mathrm{h}_i \mathrm{e}_j}_{\mathbf{k}} \\ \notag &- \hbar \Omega^{\mathrm{h}_i \mathrm{e}_j}_{\mathbf{k}} (t) \left( 1 - f^{\mathrm{e}_j}_{\mathbf{k}} - f^{\mathrm{h}_i}_{\mathbf{k}} \right) + \Gamma^{\mathrm{h}_i \mathrm{e}_j}_{\mathbf{k}} \\ \notag &+ \sum_{\mathrm{e}_\lambda \neq \mathrm{e}_j} \left[ \hbar \Omega^{\mathrm{h}_i \mathrm{e}_\lambda}_{\mathbf{k}} (t) p^{\mathrm{e}_\lambda \mathrm{e}_j}_{\mathbf{k}} - \hbar \Omega^{\mathrm{e}_\lambda \mathrm{e}_j}_{\mathbf{k}} (t) p^{\mathrm{h}_i \mathrm{e}_\lambda}_{\mathbf{k}} \right] \\ \notag &+ \sum_{\mathrm{h}_\lambda \neq \mathrm{h}_i} \left[ \hbar \Omega^{\mathrm{h}_i \mathrm{h}_\lambda}_{\mathbf{k}} (t) p^{\mathrm{h}_\lambda \mathrm{e}_j}_{\mathbf{k}} - \hbar \Omega^{\mathrm{h}_\lambda \mathrm{e}_j}_{\mathbf{k}} (t) p^{\mathrm{h}_i \mathrm{h}_\lambda}_{\mathbf{k}} \right]\\ \notag &+ \left. \frac{\mathrm{d}}{\mathrm{d}t} p^{\mathrm{h}_i \mathrm{e}_j}_{\mathbf{k}} \right\vert_{\text{corr}} \end{align} \begin{align} \hbar \frac{\mathrm{d}}{\mathrm{d} t} f^{\mathrm{e}_i}_{\mathbf{k}} &= - 2 \hbar \; \times \\ \notag \times \; &\text{Im} \left[ \sum_{\mathrm{e}_\lambda \neq \mathrm{e}_i} \Omega^{\mathrm{e}_\lambda \mathrm{e}_i}_{\mathbf{k}} (t) \left( p^{\mathrm{e}_\lambda \mathrm{e}_i}_{\mathbf{k}} \right)^* + \sum_{\mathrm{h}_\lambda} \Omega^{\mathrm{h}_\lambda \mathrm{e}_i}_{\mathbf{k}} (t) \left( p^{\mathrm{h}_\lambda \mathrm{e}_i}_{\mathbf{k}} \right)^* \right] \\ \notag &+ \lvert e \rvert E_\text{THz} (t) \nabla_{\mathbf{k}} f^{\mathrm{e}_i}_{\mathbf{k}} + \Gamma^{\mathrm{e}_i}_{\mathbf{k}} .
\end{align} We have similar expressions for the intraband polarizations between conduction bands $p^{\mathrm{e}_i \mathrm{e}_j}_{\mathbf{k}}$ and between valence bands $p^{\mathrm{h}_i \mathrm{h}_j}_{\mathbf{k}}$ and the carrier occupations in the valence band $f^{\mathrm{h}_i}_{\mathbf{k}}$, respectively. For HHG, we model the dephasing of the polarization as represented by the last term in Eq.(\ref{eq:sbep}) using a dephasing time $T_2=40\,$fs. The macroscopic polarization $P(t) = \sum_{\lambda, \lambda^\prime, \mathbf{k}} d^{\lambda \lambda^\prime}_{\mathbf{k}} p^{\lambda \lambda^\prime}_{\mathbf{k}}$ and the macroscopic current $J(t) = \sum_{\lambda, \mathbf{k}} j_\lambda (\mathbf{k}) f^{\lambda}_{\mathbf{k}}$ due to the acceleration of carriers along the bands contribute to the emitted electric field $E_{\text{out}} (t) \propto \frac{\partial}{\partial t} P(t) + J(t)$ and create the characteristic local high harmonic emission spectrum which is given by the emission intensity $I_\text{out} (\omega) \propto \lvert \omega P (\omega) + i J (\omega) \rvert^2$. In order to gain some insights before doing the full propagation calculations, we performed local evaluations which need significantly less numerical effort. Here, we use a one-dimensional $\mathbf{k}$-space model which assumes that carriers are predominantly excited near the fundamental gap, i.e. near the $H$-point with negligible momentum perpendicular to the field. For linearly polarized light, the carriers are then driven along a one-dimensional path through the BZ. For $E\parallel c$ the path is from $K$ to $H$ and back to $K$. For $E\perp c$ the path goes from $A$ to $H$ to $L$ and back. For all HHG simulations we assume excitation with a Gaussian pulse, $E(t)=E_0\exp[-(t/\sigma)^2]\cos(\omega_0 t)$, with a width $\sigma=100\,$fs and a central frequency $\omega_0$ corresponding to a wavelength of $10.6\,\mu$m. In a first step, we use this local model to identify those bands that are relevant for HHG under typical off-resonant excitation conditions. Clearly, the HHG signal is dominated by transitions between those bands which are energetically closest to the bandgap unless these transitions are suppressed due to symmetry selection rules leading to small TDMs. As can be seen in Fig.\ref{bstr}, only four valence and two electron bands are in the energetically relevant region. Since the TDMs presented in Fig.~\ref{dips} show that the coupling of the top two valence bands to the lowest two conduction bands vanishes at the H-point for $E \parallel c$, we studied whether these bands are significant for the resulting HHG spectrum. A comparison of the computed spectra including different valence bands is shown in Fig.~\ref{fig:model} a). We note that by considering only the bottom two valence bands, we obtain a spectrum that agrees rather well with the full six-band calculation, allowing us to reduce the complexity of our propagation studies for $E \parallel c$ by including only this subset of bands. In contrast, for the $E \perp c$ configuration, the top two valence bands dominate the response and are thus included in the HHG simulations. \subsection{Phase of Transition Dipole Matrix Elements} In general, the TDMs presented in Sec.~\ref{dftres} are complex valued. To illustrate the influence of the phases on the HHG emission, we consider a perturbative power series of the polarization response to an electric field for a situation with two valence bands $h_1, h_2$ and one conduction band $e$.
In first order of the field, all polarizations and occupations are $0$, so that we obtain from Eq.~\ref{eq:sbep} \begin{align} \left( p^{\mathrm{h}_1 \mathrm{e}}_{\mathbf{k}} (t) \right)^{(1)} \propto \frac{1}{\hbar \omega} d^{\mathrm{e} \mathrm{h}_1}_{\mathbf{k}} E(t) \, . \end{align} The resulting macroscopic polarization then yields \begin{align} \left( P_{\mathrm{h}_1 \mathrm{e}} (t) \right)^{(1)} = \sum_\mathbf{k} d^{\mathrm{h}_1 \mathrm{e}}_{\mathbf{k}} \left( p^{\mathrm{h}_1 \mathrm{e}}_{\mathbf{k}} (t) \right)^{(1)} \propto \frac{\lvert d^{\mathrm{h}_1 \mathrm{e}}_{\mathbf{k}} \rvert^2}{\hbar \omega} E(t) \, . \end{align} Hence, the phase of the TDMs in this first-order response is irrelevant. However, since the polarizations are non-zero in second order, the creation of a polarization between the valence bands allows for an indirect excitation into the conduction band, \begin{align} \left( p^{\mathrm{h}_1 \mathrm{e}}_{\mathbf{k}} (t) \right)^{(2)} \propto \frac{d^{\mathrm{e} \mathrm{h}_1}_{\mathbf{k}}}{\hbar \omega} E(t) + \frac{d^{\mathrm{h}_2 \mathrm{h}_1}_{\mathbf{k}} d^{\mathrm{e} \mathrm{h}_2}_{\mathbf{k}}}{2 \hbar^2 \omega^2} E^2 (t) + .. \end{align} This leads to a term in the macroscopic polarization \begin{align} \label{eq_ppp} \left( P_{\mathrm{h}_1 \mathrm{e}} (t) \right)^{(2)} \propto \sum_\mathbf{k} \frac{d^{\mathrm{h}_2 \mathrm{h}_1}_{\mathbf{k}} d^{\mathrm{h}_1 \mathrm{e}}_{\mathbf{k}} d^{\mathrm{e} \mathrm{h}_2}_{\mathbf{k}}}{2 \hbar^2 \omega^2} E^2 (t) + ... \end{align} where the phases of the TDMs do not vanish. If, e.g., one of the TDMs in Eq.(\ref{eq_ppp}) is antisymmetric in $\mathbf{k}$ and the other two are symmetric, the integration over $\mathbf{k}$ will lead to a zero contribution to the macroscopic polarization and resulting HHG signal while a strong non-zero contribution would be obtained if the phases are neglected. Thus, the phases need to be considered correctly in order to obtain the correct symmetry-related selection rules and amplitudes in the HHG calculations. It was shown in Ref. \onlinecite{huttner-even-hhg17} that quantum interference between intraband and interband polarizations can lead to the appearance of even harmonics. Moreover, if one neglects the phases of the TDMs, even harmonics would be allowed for all systems with three or more bands - which is known not to be the case. Thus, the correct inclusion of the phases is essential to obtain the correct selection rules for HHG. While the TDMs are complex valued, the plot in Fig.~\ref{dips} only shows the absolute value. In DFT, the Schr\"odinger equation of every $\mathbf{k}$-point is solved individually, so that there is no phase relation between different $\mathbf{k}$-points. Therefore, the computed TDMs contain a random phase which is not smooth across the BZ. As it turns out, this random phase can be eliminated for all $\mathbf{k}$-points by evaluating the product of the three complex TDMs connecting the bands $n$, $n^\prime$ and $n^{\prime \prime}$ in a circular way, e.g. $T^{n n^{\prime} n^{\prime \prime}}_{\mathbf{k}} = d^{n n^{\prime}}_{\mathbf{k}} d^{n^{\prime} n^{\prime \prime}}_{\mathbf{k}} d^{n^{\prime \prime} n}_{\mathbf{k}}$. The random phase of each band vanishes in the product, so that the phase of $T^{n n^{\prime} n^{\prime \prime}}_\mathbf{k}$ along any direction in the BZ is smooth. Since this only gives us the phase information about the product of three TDMs, this phase is applied to one of the constituent TDMs while taking the other ones as purely real. 
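A minimal numerical sketch of this gauge-fixing step is given below; the array names are placeholders chosen for illustration and do not refer to a specific implementation.
\begin{verbatim}
import numpy as np

def fix_tdm_phases(d_ab, d_bc, d_ca):
    """d_ab, d_bc, d_ca: complex TDM arrays over the k-grid connecting the
    bands a->b, b->c and c->a. The random per-band phases cancel in the
    cyclic triple product, whose smooth phase is imposed on one TDM while
    the remaining two are taken as purely real."""
    T = d_ab * d_bc * d_ca            # cyclic product, gauge independent
    phase = np.exp(1j * np.angle(T))  # smooth phase across the BZ
    return np.abs(d_ab) * phase, np.abs(d_bc), np.abs(d_ca)
\end{verbatim}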
In that way, the triple product will have the correct phase. \begin{figure}[htbp] \includegraphics[width=0.48\textwidth]{fig6} \caption{Polarization part of $E \parallel c$ HHG emission in Te. (a) Influence of different choices of bands on HHG emission. (b) Influence of choice of TDM phases on HHG emission.} \label{fig:model} \end{figure} As an example, we show in Fig.\ref{dip_e1h1}, the complex TDMs for the respective transitions between the lowest electron and highest hole bands taken into account for $E \parallel c$ and $E \perp c$. In all four plots, the momentum parallel to the field polarization is vertically aligned. Once the phases of the dipoles are taken into account it becomes obvious that the Te system does not have pure radial or inversion symmetry. Thus, the $\mathbf{k}$-domain has to be expanded from the positive sector $\Gamma$-$A$-$L$-$M$ to four times the size to include also negative $k_x$ and $k_z$. For the $e1-h1$ transitions presented in Fig.\ref{dip_e1h1}, the real and imaginary parts of the dipoles for $E\parallel c$, shown in the two right-hand plots, appear nearly antisymmetric along the polarization direction. In contrast, the symmetry properties of the real and imaginary parts of the dipoles for $E\perp c$ are a little more ambiguous. As for the Te crystal itself, the TDMs do not have perfect (anti-) symmetry. This can be seen e.g. in the real parts of the TDMs for $E \perp c$ in Fig.\ref{dip_e1h1}. These are nearly symmetric near $H$ while they appear mostly antisymmetric in most regions of small $k_\perp$. The imaginary parts for $E\perp c$ in Fig.\ref{dip_e1h1}(c) appear mostly antisymmetric, with slight deviations around the H-point. In our procedure to assign the TDM phase, we arbitrarily choose the dipoles onto which we impose the smoothed phase of the triple dipole products. In order to check how this choice influences the HHG spectrum, we calculated the polarization part of the spectra for different phase projections. As can be seen in Fig.~\ref{fig:model} b), our phase assignment does not influence the overall structure of the spectra, leading only to insignificant amplitude changes, so that the comparisons between HHG calculations for different intensities, propagation lengths etc. is robust against this choice for the dipole phases. \begin{figure}[htbp] \includegraphics[width=0.48\textwidth]{fig7} \caption{Complex dipole matrix elements between the lowest conduction and highest valence band. (a) and (c) are the real and imaginary parts for $E\perp c$. (b) and (d) are the real and imaginary parts for $E\parallel c$. $k_\perp$ ($k_\parallel$) is the momentum perpendicular (parallel) to the field polarization. $d_{max}= 6,\,8,\,4,\,$ and $5$ for (a), (b), (c), and (d), respectively.} \label{dip_e1h1} \end{figure} \subsection{High Harmonics in Te} In order to determine the dependence of HHG production in Te on the field strength, we performed calculations for the material response only, without pulse propagation. Figure \ref{hhg_ahl_hk_local} shows the resulting emission for various intensities of the exciting pulse. \begin{figure}[htbp] \includegraphics[width=0.48\textwidth]{fig8} \caption{HHG spectra in Te for $E\perp c$ (left) and $E\parallel c$ (right) and various peak intensities $I_0$. 
Intensities given in the labels are in units of $10^{14}W/m^2$.} \label{hhg_ahl_hk_local} \end{figure} For both polarization configurations, a significant signal above the bandgap (frequencies above the third harmonic) develops for peak intensities above about $10^{11}W/m^2$. A plateau starts to form for about 100 times higher intensities. Harmonics below the bandgap emerge rather quickly for $E\perp c$ and start to saturate already at amplitudes about three orders below that of the fundamental. For $E\parallel c$, the signal below the bandgap develops much slower with field intensity. In particular, the third harmonic shows less saturation for the intensities investigated here. The differences at and below the bandgap are due to the fact that the interband coupling is much weaker at and near the gap as can be seen from the absorption spectra. Even harmonics are strongly suppressed for $E\parallel c$ while for $E\perp c$ no obvious discrimination occurs between even and odd harmonics. This behavior is a consequence of the symmetry of the dipole matrix elements. As in the case for the lowest electron-hole transition shown in Fig.\ref{dip_e1h1}, all dipoles that are relevant for even harmonics are nearly inversion symmetric for $E\parallel c$. This leads to a destructive quantum interference that suppresses the even harmonics. In contrast, for $E\perp c$ the relevant dipoles are dominantly symmetric which effectively enables quantum interference and allows for the even harmonics to reach similar levels as the odd harmonics. \begin{figure}[htbp] \includegraphics[width=0.48\textwidth]{fig9} \caption{HHG spectra in Te for $E\perp c$ (left) and $E\parallel c$ (right), a peak pulse intensity of $0.128 \times 10^{14}W/m^2$ and for various propagation distances. Spectra for different propagation distances have been scaled by factors of 100 for better visibility. } \label{hhg_ahl_hk_prop} \end{figure} To evaluate HHG for samples of different thicknesses, we include pulse propagation effects by coupling the SBE (Eq.\ref{eq:sbep}) to a uni-directional pulse propagation solver as described in Ref. \cite{prl-pd} and references therein. As an example of the results, Fig.\ref{hhg_ahl_hk_prop} shows HHG spectra after propagation through Te for various distances. The initial pulse has a peak intensity of $0.128 \times 10^{14} W/m^2$. For $E\perp c$ the higher harmonics quickly weaken with propagation distance. In part this is a consequence of the gradually decreasing excitation pulse due to HHG and absorption of spectral components above the bandgap. In part this is also due to propagation induced dephasing \cite{prl-pd}. This weakening is less pronounced for $E\parallel c$ since the absorption is weaker and less HHG signal is produced. Over the maximum propagation distance investigated here ($50\,\mu\mathrm{m}$) the amplitude of the fundamental drops by about a factor of ten for $E\perp c$ and only a factor of four for $E\parallel c$. The reduced amount of even harmonics for $E\parallel c$ likely also leads to a reduced amount of quantum interference and resulting propagation induced dephasing within the remaining signal. \section{Summary and Outlook} In summary, we present a comprehensive microscopic analysis of optical nonlinearities in bulk Te. We determine the bandstructure, the optical dipoles, and the Coulomb interaction matrix elements using a DFT-based approach.
Investigating the near bandgap optical response for different levels of electron-hole-pair excitations, we numerically solve the stationary SBE and SLE to compute the strongly orientation dependent absorption and PL modifications. Comparing the linear absorption and PL spectra with experimental findings, we obtain excellent agreement. For elevated excitation levels, we obtain a transition from absorption to optical gain for $E\perp c$ with a peak in the technologically interesting mid-IR region. Since the TDMs are much smaller for $E\parallel c$ than for $E\perp c$, virtually no gain occurs for this polarization direction at realistic carrier densities. The generation of high-harmonic emission in Te is analyzed using the fully dynamic SBE, systematically treating the nonequilibrium dynamics of the optically induced polarizations and currents. Pulse propagation effects are modeled by coupling the SBE to a unidirectional propagation solver that allows us to study the sample length and field orientation dependence of the even- and odd-order HHG for the different field polarization configurations. The importance of a correct treatment of the complex phases of the dipole matrix elements for the description of optical selection rules is demonstrated. As a next step, we plan to evaluate the intrinsic losses in bulk Te, in particular the Auger losses that typically hamper the laser application potential of mid-IR emitting structures. Furthermore, we will extend our comprehensive microscopic approach to low dimensional Te \cite{Te2d} to investigate its nonlinear opto-electronic properties and device application potential. \begin{acknowledgments} The authors thank D. Matteo, S. Tochitsky, UCLA, for stimulating discussions during the early part of these investigations and I. Kilen, M. Kolesic, University of Arizona, for development of the numerical HHG propagation code. The Marburg work was supported by the Deutsche Forschungsgemeinschaft (DFG) in the framework of the Research Training Group ``Functionalization of Semiconductors'' (GRK~1782) and the Collaborative Research Center SFB 1083. The authors thank the HRZ Marburg and CSC-Goethe-HLR Frankfurt for computational resources. The Tucson work was supported by the Air Force Office of Scientific Research under award number FA9550-17-1-0246. \end{acknowledgments} \section*{Data Availability} The data that support the findings of this study are available from the corresponding author upon reasonable request.
{ "attr-fineweb-edu": 1.886719, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUdYM5qX_Bpe9RFa1N
\section{Software versions used in the experiments} Since we provide wall-clock time results, it is important to specify the versions of the libraries we used. For the full-precision (FP32) results, we used the \texttt{pytorch\_p36} virtual environment associated with the ``Deep Learning AMI (Ubuntu 18.04) Version 40.0 (ami-084f81625fbc98fa4)'' on Amazon EC2, {\it i.e.}, \texttt{PyTorch 1.4.0} with CUDA 10.1.243. Since AMP is only supported from PyTorch $1.6.0$ onwards, we use \texttt{PyTorch 1.6.0} with CUDA 10.1. \section{Detailed discussion on the computation complexity of various layers} In Section \ref{sec:pufferfish}, we state the complexity and number of parameters of the vanilla and low-rank factorized layers returned by \textsc{Pufferfish}{} without providing any details. We give the detailed discussion here. \paragraph{FC layer.} We start from the FC layer: assuming the input $x\in \mathbb{R}^{m}$ and the weight $W\in\mathbb{R}^{m\times n}$, the computation complexity is simply $\mathcal{O}(mn)$ for $xW$ and $\mathcal{O}(mr + rn)$ for $(xU)V^\top$. For the number of parameters, the vanilla FC layer contains $mn$ parameters in total while the low-rank FC layer contains $r(m+n)$ parameters in total. \paragraph{Convolution layer.} For a convolution layer, assuming the input is with dimension $x\in \mathbb{R}^{c_\text{in}\times H\times W}$ (the input ``image'' has $c_{\text{in}}$ color channels and size $H\times W$), the computation complexity of a vanilla convolution layer with weight $W \in \mathbb{R}^{c_{\text{in}}\times c_{\text{out}} \times k \times k}$ is $\mathcal{O}(c_{\text{in}}c_{\text{out}}k^2HW)$ for computing $W*x$ where $*$ is the linear convolution operation. The low-rank factorized convolution layer with dimensions $U \in \mathbb{R}^{c_{\text{in}}\times r \times k \times k}, V \in \mathbb{R}^{r\times c_{\text{out}} \times 1 \times 1}$ has a computation complexity of $\mathcal{O}(rc_{\text{in}}k^2HW)$ for $U*x$ and $\mathcal{O}(rHWc_{\text{out}})$ for convolving the output of $U*x$ with $V$. For the number of parameters, the vanilla convolution layer contains $c_{\text{in}}c_{\text{out}}k^2$ parameters in total while the low-rank convolution layer contains $c_{\text{in}}rk^2+rc_{\text{out}}$ parameters in total. \paragraph{LSTM layer.} For the LSTM layer, the computation complexity is similar to the computation complexity of the FC layer. Assuming the tokenized input is with dimension $x\in \mathbb{R}^d$, and the concatenated input-hidden and hidden-hidden weights $W_i \in \mathbb{R}^{d\times 4h}, W_h \in \mathbb{R}^{4h \times h}$, the computation complexity of the forward propagation of an LSTM layer is $\mathcal{O}(4dh + 4h^2)$. For the low-rank LSTM layer, the computation complexity becomes $\mathcal{O}(dr+4rh + 4hr+rh)$ (as mentioned in Section \ref{sec:pufferfish}, we assume that the same rank $r$ is used for both the input-hidden weight and hidden-hidden weight). For the number of parameters, the vanilla LSTM layer contains $4dh+4h^2$ parameters in total while the low-rank LSTM layer contains $4(dr+rh) + 4(hr+rh)=4dr+12hr$ parameters in total. \paragraph{Transformer.} For the encoder layer in the Transformer architecture, there are two main components, {\it i.e.}, the multi-head attention layer and the FFN layer. Note that, for the multi-head attention layer, the dimensions of the matrices are $Q, K, V \in \mathbb{R}^{n\times pd}, W^Q, W^K, W^V\in \mathbb{R}^{pd\times d}, W^O \in \mathbb{R}^{pd\times pd}$.
And the dimensions of the two FC layers in the FFN are with dimensions $W_1 \in \mathbb{R}^{pd\times 4pd}, W_2 \in \mathbb{R}^{4pd \times pd}$. And we assume a sequence of input tokens with length $N$ is batched to process together. Since the computation for each attention head is computed independently, we only analyze the computation complexity of a single head attention, which is $\mathcal{O}\big(\underbrace{d \cdot pd\cdot N}_{\text{proj. of $Q, K, V$}}+\underbrace{2N^2\cdot d}_{\text{attention layer}}+\underbrace{pd \cdot pd\cdot N}_{\text{proj. of the output of attention}}\big)=\mathcal{O}\big((p+p^2)Nd^2+N^2d\big)=\mathcal{O}\big(Np^2d^2+N^2d\big)$. Similarly, the computation complexity for the FFN layer is $\mathcal{O}\big(\underbrace{4\times p^2 d^2N}_{xW_1}+\underbrace{4\times p^2 d^2N}_{xW_1 W_2}\big)$. For the low-rank attention layer, the computation complexity becomes \\ $\mathcal{O}\big(\underbrace{(dr+rpd)\cdot N}_{\text{low-rank proj. }}+\underbrace{(pdr + rpd) \cdot N}_{\text{low-rank proj. of the output}}+2N^2\cdot d\big)=\mathcal{O}\big((p+1)drN+2Ndpr+2N^2d\big)=\mathcal{O}\big(pdrN + N^2d\big)$ and the computation complexity for FFN $\mathcal{O}\big(\underbrace{(p\cdot d\cdot r+4r\cdot h\cdot d)\cdot N}_{xW_1}+\underbrace{(p\cdot d\cdot r+4r\cdot p\cdot d)\cdot N}_{xW_1 W_2}\big)$. For the number of parameters, the vanilla multi-head attention layer contains $3pd^2\cdot p + p^2d^2=4p^2d^2$ parameters in total while the low-rank multi-head attention layer contains $3p(pdr+rd)+(pdr+rpd)=prd(3p+5)$. parameters in total. The vanilla FFN layer contains $4p^2d^2+4p^2d^2=8p^2d^2$ parameters in total while the low-rank FFN layer contains $(pdr+r4pd)+(4pdr+rpd)=10pdr$. parameters in total. \section{Details on the dataset and models used for the experiment} The details of the datasets used in the experiments are summarized in Table \ref{table:dataset-model}. \begin{table*}[ht] \caption{The datasets used and their associated learning models.} \label{table:dataset-model} \begin{center} \scriptsize{ \begin{tabular}{ccccc} \toprule \textbf{Method} & CIFAR-10 & ImageNet & WikiText-2 & WMT16' Gen-Eng \bigstrut\\ \midrule \# Data points & $60,000$ & $1,281,167$ & $29,000$ & $1,017,981$ \bigstrut\\ Data Dimension & $32\times32\times3$ & $224\times224\times3$ & $1,500$ & $9,521$ \bigstrut\\ Model & VGG-19-BN;ResNet-18 & ResNet-50;WideResNet-50-2 & 2 layer LSTM & Transformer ($p=8, N=6$) \bigstrut\\ Optimizer & \multicolumn{2}{c}{\textsc{SGD} } & \textsc{SGD} & \textsc{Adam} \bigstrut\\ Hyper-params. & \multicolumn{2}{c}{Init lr: $0.01$ } & lr: $20$(decay with $0.25$ when val. loss not decreasing) & Init lr: 0.001 \bigstrut\\ \multicolumn{3}{c}{momentum: 0.9, $\ell_2$ weight decay: $10^{-4}$} & grad. norm clipping $0.25$ & $\beta s=(0.9, 0.98), \epsilon=10^{-8}$ \bigstrut\\ \bottomrule \end{tabular}}% \end{center} \end{table*} \section{Details on the hybrid networks in the experiments} \paragraph{The hybrid VGG-19-BN architecture.} we generally found that using $K=10$ in the VGG-19-BN architecture leads to good test accuracy and moderate model compression ratio. \begin{table}[H] \vspace{-4 mm} \caption{Detailed information of the hybrid VGG-19-BN architecture used in our experiments, all non-linear activation function in this architecture is ReLU after each convolution layer (omitted in the Table). The shapes for convolution layers follows $(c_{in}, c_{out}, k, k)$. 
There is a BatchNorm layer after each convolution layer with number of neurons the same as $c_{\text{out}}$ (also omitted in the Table).} \label{table:supp_vgg_architecture} \begin{center} \scriptsize{ \begin{tabular}{ccc} \toprule \textbf{Parameter} & Shape & Layer hyper-parameter \bigstrut\\ \midrule \textbf{layer1.conv1.weight} & $3 \times 64 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{layer2.conv2.weight} & $64 \times 64 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{pooling.max} & N/A & kernel size:$2$;stride:$2$ \bigstrut\\ \textbf{layer3.conv3.weight} & $64\times 128 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{layer4.conv4.weight} & $128\times 128 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{pooling.max} & N/A & kernel size:$2$;stride:$2$ \bigstrut\\ \textbf{layer5.conv5.weight} & $128 \times 256 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{layer6.conv6.weight} & $256\times 256 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{layer7.conv7.weight} & $256 \times 256 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{layer8.conv8.weight} & $256 \times 256 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{pooling.max} & N/A & kernel size:$2$;stride:$2$ \bigstrut\\ \textbf{layer9.conv9.weight} & $256 \times 512 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{layer10.conv10\_u.weight} & $512 \times 128 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{layer10.conv10\_v.weight} & $128 \times 512 \times 1 \times 1$ & stride:$1$ \bigstrut\\ \textbf{layer11.conv11\_u.weight} & $512 \times 128 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{layer11.conv11\_v.weight} & $128 \times 512 \times 1 \times 1$ & stride:$1$ \bigstrut\\ \textbf{layer12.conv12\_u.weight} & $512 \times 128 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{layer12.conv12\_v.weight} & $128 \times 512 \times 1 \times 1$ & stride:$1$ \bigstrut\\ \textbf{pooling.max} & N/A & kernel size:$2$;stride:$2$ \bigstrut\\ \textbf{layer13.conv13\_u.weight} & $512 \times 128 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{layer13.conv13\_v.weight} & $128 \times 512 \times 1 \times 1$ & stride:$1$ \bigstrut\\ \textbf{layer14.conv14\_u.weight} & $512 \times 128 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{layer14.conv14\_v.weight} & $128 \times 512 \times 1 \times 1$ & stride:$1$ \bigstrut\\ \textbf{layer15.conv15\_u.weight} & $512 \times 128 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{layer15.conv15\_v.weight} & $128 \times 512 \times 1 \times 1$ & stride:$1$ \bigstrut\\ \textbf{layer16.conv16\_u.weight} & $512 \times 128 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{layer16.conv16\_v.weight} & $128 \times 512 \times 1 \times 1$ & stride:$1$ \bigstrut\\ \textbf{pooling.max} & N/A & kernel size:$2$;stride:$2$ \bigstrut\\ \textbf{layer17.fc17.weight} & $512 \times 512$ & N/A \bigstrut\\ \textbf{layer17.fc17.bias} & $512$ & N/A \bigstrut\\ \textbf{layer18.fc18.weight} & $512 \times 512$ & N/A \bigstrut\\ \textbf{layer18.fc18.bias} & $512$ & N/A \bigstrut\\ \textbf{layer19.fc19.weight} & $512 \times 10$ & N/A \bigstrut\\ \textbf{layer19.fc19.bias} & $10$ & N/A \bigstrut\\ \bottomrule \end{tabular}}% \end{center} \end{table} \paragraph{The low-rank LSTM architecture.} Note that we only use a 2-layer stacked LSTM as the model in the WikiText-2 next word prediction task. 
Our implementation is directly modified from the PyTorch original example \footnote{\url{https://github.com/pytorch/examples/tree/master/word_language_model}}. We used the tied version of LSTM, {\it i.e.}, enabling weight sharing for the encoder and decoder layers. \begin{table}[H] \caption{Detailed information on the low-rank LSTM architecture in our experiment.} \label{table:architecture-lstm} \begin{center} \scriptsize{ \begin{tabular}{ccc} \toprule \textbf{Parameter} & Shape & Hyper-param. \bigstrut\\ \midrule \textbf{encoder.weight} & $33278\times 1500$ & N/A \bigstrut\\ \textbf{dropout} & N/A & $p=0.65$ \bigstrut\\ \textbf{lstm0.weight.ii/f/g/o\_u} & $1500\times 375$ & N/A \bigstrut\\ \textbf{lstm0.weight.ii/f/g/o\_v} & $375 \times 1500$ & N/A \bigstrut\\ \textbf{lstm0.weight.hi/f/g/o\_u} & $1500\times 375$ & N/A \bigstrut\\ \textbf{lstm0.weight.hi/f/g/o\_v} & $375 \times 1500$ & N/A \bigstrut\\ \textbf{dropout} & N/A & $p=0.65$ \bigstrut\\ \textbf{lstm1.weight.ii/f/g/o\_u} & $1500\times 375$ & N/A \bigstrut\\ \textbf{lstm1.weight.ii/f/g/o\_v} & $375 \times 1500$ & N/A \bigstrut\\ \textbf{lstm1.weight.hi/f/g/o\_u} & $1500\times 375$ & N/A \bigstrut\\ \textbf{lstm1.weight.hi/f/g/o\_v} & $375 \times 1500$ & N/A \bigstrut\\ \textbf{decoder.weight}(shared) & $1500\times 33278$ & N/A \bigstrut\\ \bottomrule \end{tabular}}% \end{center} \end{table} \paragraph{The hybrid ResNet-18, ResNet-50, WideResNet-50-2 architectures.} For the CIFAR-10 dataset, we modified the original ResNet-18 architecture described in the original ResNet paper \citep{he2016deep}. The details about the modified ResNet-18 architecture for the CIFAR-10 dataset are shown in Table~\ref{tab:resnet18-cifar10-arch}. The network architecture is modified from the public code repository \footnote{\url{https://github.com/kuangliu/pytorch-cifar}}. For the first $2$ convolution blocks, {\it i.e.}, conv2\_x, we used a stride of $1$ and a padding of $1$ for all the convolution layers. For conv3\_x, conv4\_x, and conv5\_x we used a stride of $2$ and a padding of $1$. We also note that there is a BatchNorm layer after each convolution layer with the number of elements equal to the number of convolution filters. As shown in Table \ref{tab:resnet18-cifar10-arch}, our hybrid architecture starts from the $2$nd convolution block, {\it i.e.}, $K=4$. Our experimental study generally shows that this choice of hybrid ResNet-18 architecture leads to a good balance between the final model accuracy and the number of parameters. Moreover, we did not handle the downsample weights in the convolution blocks.
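For illustration, the following minimal PyTorch sketch (not the actual \textsc{Pufferfish}{} implementation; the class name is a placeholder) builds such a factorized convolution and reproduces the parameter counts $c_{\text{in}}c_{\text{out}}k^2$ versus $c_{\text{in}}rk^2+rc_{\text{out}}$ for the conv5\_x shapes listed in Table~\ref{tab:resnet18-cifar10-arch}.
\begin{verbatim}
import torch.nn as nn

class LowRankConv2d(nn.Module):
    """conv_u: (c_in -> r) with a k x k kernel; conv_v: (r -> c_out) with 1 x 1."""
    def __init__(self, c_in, c_out, k, rank, stride=1, padding=1):
        super().__init__()
        self.conv_u = nn.Conv2d(c_in, rank, kernel_size=k,
                                stride=stride, padding=padding, bias=False)
        self.conv_v = nn.Conv2d(rank, c_out, kernel_size=1, bias=False)

    def forward(self, x):
        return self.conv_v(self.conv_u(x))

vanilla = nn.Conv2d(512, 512, kernel_size=3, padding=1, bias=False)
low_rank = LowRankConv2d(512, 512, k=3, rank=128)
# 512*512*3*3 = 2,359,296 vs. 512*128*3*3 + 128*512 = 655,360 parameters
print(sum(p.numel() for p in vanilla.parameters()),
      sum(p.numel() for p in low_rank.parameters()))
\end{verbatim}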
\newcommand{\blocka}[2]{\multirow{3}{*}{\(\left[\begin{array}{c}\text{3$\times$3, #1}\\[-.1em] \text{3$\times$3, #1} \end{array}\right]\)$\times$#2} } \newcommand{\blockb}[3]{\multirow{3}{*}{\(\left[\begin{array}{c}\text{1$\times$1, #2}\\[-.1em] \text{3$\times$3, #2}\\[-.1em] \text{1$\times$1, #1}\end{array}\right]\)$\times$#3} } \renewcommand\arraystretch{1.1} \setlength{\tabcolsep}{3pt} \begin{table}[H] \begin{center} \resizebox{1.0\linewidth}{!}{ \begin{tabular}{ccccccc} \toprule Layer Name & ResNet-18 & Rank Information \\ \midrule conv1 & \multicolumn{1}{c}{3$\times$3, 64, stride 1, padding 1} & full-rank\\ \midrule \multirow{3}{*}{conv2\_x} & \blocka{64}{2} & 1st block full-rank \\ & & 2nd block low-rank & & & &\\ & & conv\_u $(64,16,3,3)$, conv\_v$(16, 64, 1, 1)$ & & & &\\ \midrule \multirow{3}{*}{conv3\_x} & \blocka{128}{2} & low-rank \\ & & conv\_u $(128, 32, 3, 3)$ & & & & \\ & & conv\_v $(32, 128, 1, 1)$ & & & & \\ \midrule \multirow{3}{*}{conv4\_x} & \blocka{256}{2} & low-rank \\ & & conv\_u $(256, 64, 3, 3)$ & & & \\ & & conv\_v $(64, 256, 1, 1)$ & & & \\ \midrule \multirow{3}{*}{conv5\_x} & \blocka{512}{2} & low-rank \\ & & conv\_u $(512, 128, 3, 3)$ & & & & \\ & & conv\_v $(128, 512, 1, 1)$ & & & & \\ \midrule & \multicolumn{1}{c}{Avg Pool, 10-dim FC, SoftMax} \\ \bottomrule \end{tabular} } \end{center} \caption{The ResNet-18 architecture for the CIFAR-10 dataset used in the experiments. } \label{tab:resnet18-cifar10-arch} \end{table} For the ResNet-50 architecture, the detailed information is shown in Table~\ref{tab:resnet50-imagenet-arch}. Since we observed that the last three convolution blocks, {\it i.e.}, conv5\_x, contain around $60\%$ of the total number of parameters in the entire network, we set only the last three convolution blocks as low-rank blocks while all other convolution blocks are full-rank blocks. Note that, different from the ResNet-18 architecture for the CIFAR-10 dataset described above, we also handle the downsample weight inside the ResNet-50 network, which is only contained in the very first convolution block of conv5\_x. The original downsample weight has shape $(1024, 2048, 1, 1)$. Our factorization strategy leads to the shape of conv\_u: $(1024, 256, 1, 1)$ and conv\_v: $(256, 2048, 1, 1)$.
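As a quick, illustrative sanity check, the parameter counts of this factorized $1\times 1$ downsample convolution follow directly from the expressions given earlier ($c_{\text{in}}c_{\text{out}}k^2$ versus $c_{\text{in}}rk^2+rc_{\text{out}}$ with $r=c_{\text{in}}/4$):
\begin{verbatim}
c_in, c_out, k, r = 1024, 2048, 1, 256
vanilla_params  = c_in * c_out * k**2           # 2,097,152
low_rank_params = c_in * r * k**2 + r * c_out   # 262,144 + 524,288 = 786,432
print(low_rank_params / vanilla_params)         # 0.375 of the vanilla parameters
\end{verbatim}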
\begin{table}[H]
\begin{center}
\resizebox{1.0\linewidth}{!}{
\begin{tabular}{ccccccc}
\toprule
Layer Name & output size & ResNet-50 & Rank Information \\
\midrule
conv1 & 112$\times$112 & \multicolumn{1}{c}{7$\times$7, 64, stride 2} & full-rank\\
\midrule
\multirow{4}{*}{conv2\_x} & \multirow{4}{*}{56$\times$56} & \multicolumn{1}{c}{3$\times$3 max pool, stride 2} \\\cline{3-7}
& & \blockb{256}{64}{3} \\
& & & all blocks full-rank & & &\\
& & & & & &\\
\hline
\multirow{3}{*}{conv3\_x} & \multirow{3}{*}{28$\times$28} & \blockb{512}{128}{4} & & \\
& & & all blocks full-rank & & & \\
& & & & & & \\
\hline
\multirow{3}{*}{conv4\_x} & \multirow{3}{*}{14$\times$14} & \blockb{1024}{256}{6} \\
& & & all blocks full-rank & & \\
& & & & & \\
\hline
\multirow{3}{*}{conv5\_x} & \multirow{3}{*}{7$\times$7} & \blockb{2048}{512}{3} & conv\_1\_u $(c_\text{in}, \frac{c_\text{in}}{4}, 1, 1)$; conv\_1\_v $(\frac{c_\text{in}}{4}, 512, 1, 1)$ \\
& & & conv\_2\_u $(512, 128, 3, 3)$; conv\_2\_v $(128, 512, 1, 1)$ & & & \\
& & & conv\_3\_u $(512, 128, 1, 1)$; conv\_3\_v $(128, 2048, 1, 1)$ & & & \\
\hline
& 1$\times$1 & \multicolumn{1}{c}{Avg pool, 1000-dim FC, SoftMax} \\
\bottomrule
\end{tabular}
}
\end{center}
\caption{The ResNet-50 architecture for the ImageNet dataset used in the experiments. }
\label{tab:resnet50-imagenet-arch}
\end{table}
For the WideResNet-50-2 architecture, the detailed architecture we used is shown in Table~\ref{tab:wideresnet50-imagenet-arch}. Similar to the ResNet-50 architecture, we only make the last three convolution blocks low-rank and keep all other convolution blocks full-rank. We also factorize the downsample weight inside the WideResNet-50-2 network, which appears only in the very first convolution block of conv5\_x. The original downsample weight has shape $(1024, 2048, 1, 1)$. Our factorization strategy leads to a conv\_u of shape $(1024, 256, 1, 1)$ and a conv\_v of shape $(256, 2048, 1, 1)$.
\begin{table}[H]
\begin{center}
\resizebox{1.0\linewidth}{!}{
\begin{tabular}{ccccccc}
\toprule
Layer Name & output size & WideResNet-50-2 & Rank Information \\
\midrule
conv1 & 112$\times$112 & \multicolumn{1}{c}{7$\times$7, 64, stride 2} & full-rank\\
\midrule
\multirow{4}{*}{conv2\_x} & \multirow{4}{*}{56$\times$56} & \multicolumn{1}{c}{3$\times$3 max pool, stride 2} \\\cline{3-7}
& & \blockb{256}{128}{3} \\
& & & all blocks full-rank & & &\\
& & & & & &\\
\hline
\multirow{3}{*}{conv3\_x} & \multirow{3}{*}{28$\times$28} & \blockb{512}{256}{4} & & \\
& & & all blocks full-rank & & & \\
& & & & & & \\
\hline
\multirow{3}{*}{conv4\_x} & \multirow{3}{*}{14$\times$14} & \blockb{1024}{512}{6} \\
& & & all blocks full-rank & & \\
& & & & & \\
\hline
\multirow{3}{*}{conv5\_x} & \multirow{3}{*}{7$\times$7} & \blockb{2048}{1024}{3} & conv\_1\_u $(c_\text{in}, \frac{c_\text{in}}{4}, 1, 1)$; conv\_1\_v $(\frac{c_\text{in}}{4}, 1024, 1, 1)$ \\
& & & conv\_2\_u $(1024, 256, 3, 3)$; conv\_2\_v $(256, 1024, 1, 1)$ & & & \\
& & & conv\_3\_u $(1024, 256, 1, 1)$; conv\_3\_v $(256, 2048, 1, 1)$ & & & \\
\hline
& 1$\times$1 & \multicolumn{1}{c}{Avg pool, 1000-dim FC, SoftMax} \\
\bottomrule
\end{tabular}
}
\end{center}
\caption{The WideResNet-50-2 architecture for the ImageNet dataset used in the experiments. }
\label{tab:wideresnet50-imagenet-arch}
\end{table}
\paragraph{The hybrid Transformer architecture.} The Transformer architecture used in the experiment follows the original Transformer paper \citep{vaswani2017attention}.
Our implementation is modified from the public code repository\footnote{\url{https://github.com/jadore801120/attention-is-all-you-need-pytorch}}. We use a stack of $N=6$ encoder and decoder layers inside the Transformer architecture with $p=8$ attention heads. Since the encoder and decoder layers are identical across the entire architecture, we describe the detailed encoder and decoder architectures in Table~\ref{table:architecture-transformer-encoder} and Table~\ref{table:architecture-transformer-decoder}. For the hybrid Transformer architecture, we keep the very first encoder layer and the first decoder layer full-rank, and all other layers are low-rank. For the low-rank encoder and decoder layers, we use a rank ratio of $\frac{1}{4}$; thus $U^Q, U^K, U^V, U^O \in \mathbb{R}^{512 \times 128}$ and $V^{Q\top}, V^{K\top}, V^{V\top}, V^{O\top} \in \mathbb{R}^{128 \times 512}$. For $W_1$ in the $\text{FFN}(\cdot)$ layer, $U_1\in \mathbb{R}^{512\times 128}$ and $V_1^\top \in \mathbb{R}^{128\times 2048}$. For $W_2$ in the $\text{FFN}(\cdot)$ layer, $U_2\in \mathbb{R}^{2048\times 128}$ and $V_2^\top \in \mathbb{R}^{128\times 512}$.
\begin{table}[H]
\caption{Detailed information of the encoder layer in the Transformer architecture in our experiment}
\label{table:architecture-transformer-encoder}
\begin{center}
\scriptsize{
\begin{tabular}{ccc}
\toprule
\textbf{Parameter} & Shape & Hyper-param. \bigstrut\\
\midrule
\textbf{embedding} & $9521\times 512$ & padding index: 1 \bigstrut\\
\textbf{positional encoding} & N/A & N/A \bigstrut\\
\textbf{dropout} & N/A & $p=0.1$ \bigstrut\\
\textbf{encoder.self-attention.wq}($W^Q$) & $512\times 512$ & N/A \bigstrut\\
\textbf{encoder.self-attention.wk}($W^K$) & $512\times 512$ & N/A \bigstrut\\
\textbf{encoder.self-attention.wv}($W^V$) & $512\times 512$ & N/A \bigstrut\\
\textbf{encoder.self-attention.wo}($W^O$) & $512\times 512$ & N/A \bigstrut\\
\textbf{encoder.self-attention.dropout} & N/A & $p=0.1$ \bigstrut\\
\textbf{encoder.self-attention.layernorm} & $512$ & $\epsilon=10^{-6}$ \bigstrut\\
\textbf{encoder.ffn.layer1} & $512\times 2048$ & N/A \bigstrut\\
\textbf{encoder.ffn.layer2} & $2048\times 512$ & N/A \bigstrut\\
\textbf{encoder.layernorm} & $512$ & $\epsilon=10^{-6}$ \bigstrut\\
\textbf{dropout} & N/A & $p=0.1$ \bigstrut\\
\bottomrule
\end{tabular}}%
\end{center}
\end{table}
\begin{table}[H]
\caption{Detailed information of the decoder layer in the Transformer architecture in our experiment}
\label{table:architecture-transformer-decoder}
\begin{center}
\scriptsize{
\begin{tabular}{ccc}
\toprule
\textbf{Parameter} & Shape & Hyper-param.
\bigstrut\\
\midrule
\textbf{embedding} & $9521\times 512$ & padding index: 1 \bigstrut\\
\textbf{positional encoding} & N/A & N/A \bigstrut\\
\textbf{dropout} & N/A & $p=0.1$ \bigstrut\\
\textbf{decoder.self-attention.wq}($W^Q$) & $512\times 512$ & N/A \bigstrut\\
\textbf{decoder.self-attention.wk}($W^K$) & $512\times 512$ & N/A \bigstrut\\
\textbf{decoder.self-attention.wv}($W^V$) & $512\times 512$ & N/A \bigstrut\\
\textbf{decoder.self-attention.wo}($W^O$) & $512\times 512$ & N/A \bigstrut\\
\textbf{decoder.self-attention.dropout} & N/A & $p=0.1$ \bigstrut\\
\textbf{decoder.self-attention.layernorm} & $512$ & $\epsilon=10^{-6}$ \bigstrut\\
\textbf{decoder.enc-attention.wq}($W^Q$) & $512\times 512$ & N/A \bigstrut\\
\textbf{decoder.enc-attention.wk}($W^K$) & $512\times 512$ & N/A \bigstrut\\
\textbf{decoder.enc-attention.wv}($W^V$) & $512\times 512$ & N/A \bigstrut\\
\textbf{decoder.enc-attention.wo}($W^O$) & $512\times 512$ & N/A \bigstrut\\
\textbf{decoder.enc-attention.dropout} & N/A & $p=0.1$ \bigstrut\\
\textbf{decoder.enc-attention.layernorm} & $512$ & $\epsilon=10^{-6}$ \bigstrut\\
\textbf{decoder.ffn.layer1} & $512\times 2048$ & N/A \bigstrut\\
\textbf{decoder.ffn.layer2} & $2048\times 512$ & N/A \bigstrut\\
\textbf{decoder.layernorm} & $512$ & $\epsilon=10^{-6}$ \bigstrut\\
\textbf{dropout} & N/A & $p=0.1$ \bigstrut\\
\bottomrule
\end{tabular}}%
\end{center}
\end{table}
\paragraph{The hybrid VGG-19-BN architecture used for the LTH comparison.} To compare \textsc{Pufferfish}{} with LTH, we use the open-source LTH implementation, {\it i.e.}, \url{https://github.com/facebookresearch/open_lth}. The VGG-19-BN model used in the open-source LTH repository is slightly different from the VGG-19-BN architecture described above. We thus use the VGG-19-BN architecture in the LTH code and deploy \textsc{Pufferfish}{} on top of it for a fairer comparison. Detailed information about the hybrid VGG-19-BN architecture we used in \textsc{Pufferfish}{} for the comparison with LTH is shown in Table~\ref{table:supp_vgg_architecture_lth}.
\begin{table}[H]
\vspace{-4 mm}
\caption{Detailed information of the hybrid VGG-19-BN architecture used in our LTH comparison experiments. All non-linear activation functions in this architecture are ReLU, applied after each convolution layer (omitted in the table). The shapes of the convolution layers follow $(c_{in}, c_{out}, k, k)$.
There is a BatchNorm layer after each convolution layer with number of neurons the same as $c_{\text{out}}$ (also omitted in the Table).} \label{table:supp_vgg_architecture_lth} \begin{center} \scriptsize{ \begin{tabular}{ccc} \toprule \textbf{Parameter} & Shape & Layer hyper-parameter \bigstrut\\ \midrule \textbf{layer1.conv1.weight} & $3 \times 64 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{layer2.conv2.weight} & $64 \times 64 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{pooling.max} & N/A & kernel size:$2$;stride:$2$ \bigstrut\\ \textbf{layer3.conv3.weight} & $64\times 128 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{layer4.conv4.weight} & $128\times 128 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{pooling.max} & N/A & kernel size:$2$;stride:$2$ \bigstrut\\ \textbf{layer5.conv5.weight} & $128 \times 256 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{layer6.conv6.weight} & $256\times 256 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{layer7.conv7.weight} & $256 \times 256 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{layer8.conv8.weight} & $256 \times 256 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{pooling.max} & N/A & kernel size:$2$;stride:$2$ \bigstrut\\ \textbf{layer9.conv9.weight} & $256 \times 512 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{layer10.conv10\_u.weight} & $512 \times 128 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{layer10.conv10\_v.weight} & $128 \times 512 \times 1 \times 1$ & stride:$1$ \bigstrut\\ \textbf{layer11.conv11\_u.weight} & $512 \times 128 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{layer11.conv11\_v.weight} & $128 \times 512 \times 1 \times 1$ & stride:$1$ \bigstrut\\ \textbf{layer12.conv12\_u.weight} & $512 \times 128 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{layer12.conv12\_v.weight} & $128 \times 512 \times 1 \times 1$ & stride:$1$ \bigstrut\\ \textbf{pooling.max} & N/A & kernel size:$2$;stride:$2$ \bigstrut\\ \textbf{layer13.conv13\_u.weight} & $512 \times 128 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{layer13.conv13\_v.weight} & $128 \times 512 \times 1 \times 1$ & stride:$1$ \bigstrut\\ \textbf{layer14.conv14\_u.weight} & $512 \times 128 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{layer14.conv14\_v.weight} & $128 \times 512 \times 1 \times 1$ & stride:$1$ \bigstrut\\ \textbf{layer15.conv15\_u.weight} & $512 \times 128 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{layer15.conv15\_v.weight} & $128 \times 512 \times 1 \times 1$ & stride:$1$ \bigstrut\\ \textbf{layer16.conv16\_u.weight} & $512 \times 128 \times 3 \times 3$ & stride:$1$;padding:$1$ \bigstrut\\ \textbf{layer16.conv16\_v.weight} & $128 \times 512 \times 1 \times 1$ & stride:$1$ \bigstrut\\ \textbf{pooling.max} & N/A & kernel size:$2$;stride:$2$ \bigstrut\\ \textbf{layer17.fc17.weight} & $512 \times 10$ & N/A \bigstrut\\ \textbf{layer17.fc17.bias} & $10$ & N/A \bigstrut\\ \bottomrule \end{tabular}}% \end{center} \end{table} \section{The compatibility of \textsc{Pufferfish}{} with other gradient compression methods} As \textsc{Pufferfish}{} is a training time parameter reduction method, the gradient of the factorized networks can be compressed further with any gradient compression methods. 
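For instance, assuming a recent PyTorch version that ships the built-in \textsc{PowerSGD} DDP communication hook, one possible way to layer gradient compression on top of the factorized model is sketched below; the model and rank variables are placeholders, and this illustrates the combination rather than the exact setup used in our experiments.
\begin{verbatim}
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.distributed.algorithms.ddp_comm_hooks import powerSGD_hook as powerSGD

# `factorized_model` (the hybrid, partially low-rank network) and `local_rank`
# are placeholders; the process group is assumed to be initialized already.
ddp_model = DDP(factorized_model, device_ids=[local_rank])

# Compress the (already smaller) gradients of the U_l / V_l layers further.
# A slightly higher PowerSGD rank (e.g., 4) is used, since both the weights
# and the gradients are approximated when the two methods are combined.
state = powerSGD.PowerSGDState(process_group=None,
                               matrix_approximation_rank=4,
                               start_powerSGD_iter=10)
ddp_model.register_comm_hook(state, powerSGD.powerSGD_hook)
\end{verbatim}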
As \textsc{PowerSGD} is the state-of-the-art gradient compression method and is compatible with \texttt{allreduce}, we consider another baseline, {\it i.e.}, ``\textsc{Pufferfish}{}+\textsc{PowerSGD}", and conduct an experimental study of this baseline on ResNet-18 trained on CIFAR-10 (results shown in Figure~\ref{fig:compat-pufferfish}). The experiment runs over $8$ \texttt{p3.2xlarge} EC2 nodes with a batch size of $256$ per node ($2048$ in total). The experimental results indicate that combining \textsc{Pufferfish}{} with \textsc{PowerSGD} can effectively reduce the gradient size of \textsc{Pufferfish}{}, allowing \textsc{Pufferfish}{} to enjoy high computation efficiency and communication efficiency as high as that of \textsc{PowerSGD}. However, as \textsc{PowerSGD} conducts layer-wise gradient encoding and decoding on both the $U_l$ and $V_l$ layers, the gradient encoding and decoding cost of the ``\textsc{Pufferfish}{}+\textsc{PowerSGD}" baseline is higher than that of \textsc{PowerSGD} alone. We observe that a slightly higher rank is desired when combining \textsc{Pufferfish}{} with \textsc{PowerSGD}, since both the model weights and the gradients are approximated in this case. In the experimental results shown in Figure~\ref{fig:compat-pufferfish}, we use \textsc{PowerSGD} with rank $4$ when combining it with \textsc{Pufferfish}{}, for both the vanilla warm-up training epochs and the consecutive low-rank training epochs. Moreover, we also find that under the large-batch setting, it is always helpful to re-warm up the learning rate for the ``\textsc{Pufferfish}{}+\textsc{PowerSGD}" baseline, {\it i.e.}, in the first $5$ epochs, we warm up the learning rate linearly from $0.1$ to $1.6$; then at the $80$-th epoch, where we switch from the vanilla warm-up training to low-rank training, we repeat the learning rate warm-up over another $5$ epochs (from $0.1$ to $1.6$). Our experimental results suggest that \textsc{Pufferfish}{} can be combined with gradient compression methods to attain better communication efficiency, but it is desirable to combine \textsc{Pufferfish}{} with gradient compression methods that can be deployed on the flattened gradients, {\it e.g.}, Top-$k$.
\begin{figure}[ht]
\vspace{-2 mm}
\centering
\subfigure[Breakdown per-epoch time]{\includegraphics[width=0.4\textwidth]{figs/breakdown_runtime_analysis_cifar10_extra.pdf}}\\
\subfigure[Convergence]{\includegraphics[width=0.4\textwidth]{figs/end2end_dist_cifar10_resnet18_extra.pdf}}
\vspace{-4 mm}
\caption{(a) Per-epoch breakdown runtime analysis and (b) convergence performance of \textsc{Pufferfish}{}, ``\textsc{Pufferfish}{}+\textsc{PowerSGD} (rank $4$)", \textsc{PowerSGD} (rank $2$), \textsc{signum}, and vanilla SGD over ResNet-18 trained on the CIFAR-10 dataset. }
\label{fig:compat-pufferfish}
\vspace{-4 mm}
\end{figure}
\section{Discussion on the communication efficiency of \textsc{Pufferfish}{}}
It is natural to ask: ``\textit{Why are the previously proposed lightweight gradient compression methods slow in practice, {\it e.g.}, the ones proposed in~\cite{suresh2016distributed}}?" We agree that there are many gradient compression methods that are computationally cheap. However, other important factors can affect the gradient compression efficiency in practice (taking the gradient compression method in~\cite{suresh2016distributed} as an example): (i) After the binary sign rounding, extra encoding and decoding steps, e.g.,
binary encoding, are required to aggregate the quantized bits into bytes in order to attain a real communication speedup. That is, optimizing the data structures to support low-communication quantized gradients is necessary for any benefit to surface, and is also quite non-trivial. (ii) For most gradient compression schemes, the encoded gradients are not compatible with all-reduce. Thus, all-gather has to be used instead. Unfortunately, in terms of communication cost, all-gather suffers a performance gap that increases with the number of nodes. (iii) In all-reduce, each worker receives a pre-aggregated gradient, making the cost of decompression independent of the number of workers. In all-gather, a worker receives as many compressed gradients as there are workers, which need to be individually decompressed and aggregated. The time for decompression with all-gather therefore scales linearly with the number of workers. In fact, we ran a test of the ``\textit{Stochastic binary quantization}" method in~\cite{suresh2016distributed} on ResNet-50+ImageNet over 16 EC2 \texttt{p3.2xlarge} nodes (per-node batch size 32), as it is the computationally cheapest method proposed in the paper. Though it has been shown that conducting a random rotation over the gradients can improve the compression error, we only care about the computational and communication efficiency of the method in this particular experiment. Per-epoch runtime results are shown in Figure~\ref{fig:sbq-comparisons}.
\begin{figure}[ht]
\vspace{-2 mm}
\centering
\includegraphics[width=0.4\textwidth]{figs/breakdown_runtime_analysis_sbq.pdf}
\vspace{-2 mm}
\caption{Breakdown per-epoch runtime comparison between \textsc{Pufferfish}{}, vanilla SGD, and stochastic binary quantization. }
\label{fig:sbq-comparisons}
\vspace{-4 mm}
\end{figure}
Note that in the ``compress.+decompress." stage, stochastic binary quantization takes $12.1\pm 0.6$ seconds for gradient compression and $118.4\pm 0.1$ seconds for gradient decompression. We observe that although stochastic binary quantization is efficient in the compression stage, its gradient decompression cost is expensive. Moreover, all-gather is less efficient than all-reduce at the scale of $16$ nodes.
\section{The effectiveness of using SVD to find the low-rank factorization}
In the vanilla warm-up training strategy proposed in \textsc{Pufferfish}{}, we decompose the network weights using SVD to find the initialization weights for the hybrid network. Though SVD is a computationally expensive method, \textsc{Pufferfish}{} only requires conducting the factorization over the network weights once during the entire training process. We explicitly test the overhead incurred by conducting SVD over the model weights here. All the runtimes are measured on a \texttt{p3.2xlarge} instance of Amazon EC2 (equipped with a Tesla V100 GPU). The results are shown in Table~\ref{table:svd-efficiency}. From the results, it can be observed that the runtime of using SVD to factorize the partially trained vanilla full-rank network is quite small, {\it e.g.}, on average it only costs $2.2972$ seconds for the ResNet-50 trained over the ImageNet dataset, which only takes $0.17\%$ of the per-epoch training time.
\begin{table}[ht]
\caption{The time costs of conducting SVD over the partially trained vanilla full-rank network to find the initialization model for the hybrid network.
The runtime results are averaged over $5$ independent trials.}
\label{table:svd-efficiency}
\begin{center}
\scriptsize{
\begin{tabular}{cc}
\toprule
\textbf{Method} & Time Cost (in sec.) \bigstrut\\
\midrule
ResNet-50 on ImageNet & $2.2972\pm0.0519$ \bigstrut\\
WideResNet-50-2 on ImageNet & $4.8700\pm0.0859$ \bigstrut\\
VGG-19-BN on CIFAR-10 & $1.5198\pm0.0113$ \bigstrut\\
ResNet-18 on CIFAR-10 & $1.3244\pm0.0201$ \bigstrut\\
LSTM on WikiText-2 & $6.5791\pm0.0445$ \bigstrut\\
Transformer on WMT16 & $5.4104\pm0.0532$ \bigstrut\\
\bottomrule
\end{tabular}}%
\end{center}
\end{table}
\section{Details of data preprocessing}
\paragraph{The CIFAR-10 dataset.} In preprocessing the images of the CIFAR-10 dataset, we follow the standard data augmentation and normalization process. For data augmentation, random cropping and random horizontal flipping are used. The color channels are normalized with means and standard deviations $\mu_r = 0.491, \mu_g = 0.482, \mu_b = 0.447$, $\sigma_r = 0.247, \sigma_g = 0.244, \sigma_b = 0.262$, {\it i.e.}, each pixel is normalized by subtracting the mean value of its color channel and then dividing by the standard deviation of that color channel.
\paragraph{The ImageNet dataset.} For ImageNet, we follow the data augmentation process of \citep{goyal2017accurate}, \textit{i.e.}, we use scale and aspect ratio data augmentation. The network input is a $224\times 224$ pixel image, randomly cropped from an augmented image or its horizontal flip. The input image is normalized in the same way as the CIFAR-10 images, using the following means and standard deviations: $\mu_r = 0.485, \mu_g = 0.456, \mu_b = 0.406$; $\sigma_r = 0.229, \sigma_g = 0.224, \sigma_b = 0.225$.
\section{Detailed hyper-parameters used in our experiments}
\paragraph{ResNet-50 and WideResNet-50-2 over the ImageNet dataset.} For the ResNet-50 and WideResNet-50-2 models, we follow the training hyper-parameters reported in \citep{goyal2017accurate}. We train the models using SGD with a momentum of $0.9$ and a batch size of $256$. We also apply $\ell_2$ regularization with coefficient $10^{-4}$ over the model weights but not the BatchNorm layers. The entire training process takes $90$ epochs. For both the ResNet-50 and WideResNet-50-2 models, we start from a learning rate of $0.1$ and decay it by a factor of $0.1$ at the $30$-th, $60$-th, and $80$-th epochs. For the vanilla warm-up training, we use warm-up epoch $E=10$. Note that at the $10$-th epoch we switch from the vanilla ResNet-50/WideResNet-50-2 models to the hybrid architecture, but we still use the same learning rate, {\it i.e.}, $0.1$, until the $30$-th epoch. In addition to the previously proposed recipe, we adopt the \textit{label smoothing} technique with probability $0.1$. The model initialization follows directly from the official PyTorch ImageNet example\footnote{\url{https://github.com/pytorch/examples/tree/master/imagenet}}.
\paragraph{ResNet-18 and VGG-19-BN over the CIFAR-10 dataset.} For the ResNet-18 and VGG-19-BN models, we train using SGD with a momentum of $0.9$ and a batch size of $128$. The entire training takes $300$ epochs. We also apply $\ell_2$ regularization over the model weights with coefficient $10^{-4}$. For both the ResNet-18 and VGG-19-BN models, we start from a learning rate of $0.1$ and decay it by a factor of $0.1$ at the $150$-th and $250$-th epochs.
For the vanilla warm-up training, we use warm-up epoch $E=80$. Note that at the $80$-th epoch we switch from the vanilla ResNet-18/VGG-19-BN models to the hybrid architecture, but we still use the same learning rate, {\it i.e.}, $0.1$, until the $150$-th epoch.
\paragraph{LSTM over the WikiText-2 dataset.} For the LSTM model, we conduct training using the vanilla SGD optimizer with a batch size of $20$. We also conduct gradient norm clipping with a norm bound of $0.25$. The entire training takes $40$ epochs. We start from a learning rate of $20$ and decay it by a factor of $0.25$ whenever the validation loss stops decreasing. For the vanilla warm-up training, we use warm-up epoch $E=10$. Note that at the $10$-th epoch, where we switch from the vanilla LSTM model to the hybrid architecture, we also decay the learning rate by a factor of $0.5$. We also tie the word embedding and SoftMax weights \citep{press2016using}.
\paragraph{The Transformer over the WMT16 dataset.} For the Transformer model, we conduct training using the Adam optimizer with an initial learning rate of $0.001$, $\beta\text{s}=(0.9, 0.98)$, $\epsilon=10^{-8}$, and a batch size of $256$. We also conduct gradient norm clipping with a norm bound of $0.25$. The entire training takes $400$ epochs. For the vanilla warm-up training, we use warm-up epoch $E=10$. We enable label smoothing, weight sharing between the source and target word embeddings, and weight sharing between the target word embedding and the last dense layer.
\section{Detailed information on the runtime mini-benchmark}
In the experiment section, we discussed that under the reproducibility-optimized setting, the factorized networks achieve promising runtime speedups over the vanilla networks. However, users sometimes prefer faster runtime to reproducibility, in which case the speed-optimized setting is used (with \texttt{cudnn.benchmark} enabled and \texttt{cudnn.deterministic} disabled). We also study the runtime of the factorized networks under the speed-optimized setting. The results are shown in Table~\ref{table:mini-benchmark-benchmark}, from which we observe that the speedup of the factorized networks is less promising compared to the reproducibility-optimized setting, especially for the VGG-19-BN network. However, the \textsc{Pufferfish}{} ResNet-18 still achieves a $1.16\times$ per-epoch speedup. We leave exploring the optimal training speed of the factorized networks as future work.
\begin{table}[ht]
\caption{The runtime mini-benchmark results of \textsc{Pufferfish}{} and the vanilla VGG-19-BN and ResNet-18 networks training on the CIFAR-10 dataset, results averaged over $10$ epochs. Experiments run on a single V100 GPU with a batch size of $128$, over the speed-optimized cuDNN implementation with \texttt{cudnn.benchmark} enabled and \texttt{cudnn.deterministic} disabled; speedup calculated based on the averaged per-epoch time.}
\label{table:mini-benchmark-benchmark}
\begin{center}
\scriptsize{
\begin{tabular}{cccc}
\toprule
\textbf{Model Archs.} & Epoch Time (sec.)
& Speedup & MACs (G) \bigstrut\\ \midrule Vanilla VGG-19 & $8.27 \pm 0.07$ & $-$ & $0.4$ \bigstrut\\ \textsc{Pufferfish}{} VGG-19 & ${\bf 8.16}\pm 0.12$ & $\bf{1.01\times}$& $\bf{0.29}$ \bigstrut\\ Vanilla ResNet-18 & $11.15\pm 0.01$ & $-$ & $0.56$ \bigstrut\\ \textsc{Pufferfish}{} ResNet-18 & ${\bf9.61} \pm 0.08$ & $\bf{1.16\times}$ & $\bf{0.22}$ \bigstrut\\ \bottomrule \end{tabular}}% \end{center} \end{table} \section{Time cost measurement on Amazon EC2} We use the \texttt{p3.2xlarge} instances for the distributed experiments, the bandwidth of the instance is ``Up to $10$ Gbps" as stated on the Amazon EC2 website, {\it i.e.}, \url{https://aws.amazon.com/ec2/instance-types/p3/}. For some tasks (especially for the ResNet-50 and WideResNet-50-2), we observe that the bandwidth of the \texttt{p3.2xlarge} instance decays sharply in the middle of the experiment. The time costs for ResNet-50 trained on the ImageNet dataset under our prototype \texttt{allreduce} distributed implementation are collected when there is no bandwidth decay, {\it e.g.}, under $10$ Gbps. For the DDP time cost results, we run the experiments till the per-epoch time costs become stable, then measure the per-epoch time. For ResNet-18 trained on the CIFAR-10 dataset experiments under our prototype \texttt{allreduce} distributed implementation, we do not observe significant bandwidth decay for the \texttt{p3.2xlarge} instances. All distributed experiments are conducted under the \texttt{us-west-2c} availability zone of EC2. \section{Additional experimental results} \paragraph{The ablation study on the accuracy mitigation strategy over CIFAR-10 and ImageNet.} The ablation study results are shown in Table ~\ref{table:ablation-resnet50-imagenet} for ResNet-50 trained on ImageNet and Table~\ref{table:ablation-vgg19-cifar10} for VGG-19-BN trained over CIFAR-10. For the vanilla low-rank ResNet-50 trained on ImageNet, we do not deploy the label smoothing and the extra learning rate decay (with a factor $0.1$) at the $80$-th epoch. \begin{table}[ht] \caption{The effect of vanilla warm-up training and hybrid network architectures of \textsc{Pufferfish}{} of the low-rank ResNet-50 trained over the ImageNet dataset} \label{table:ablation-resnet50-imagenet} \begin{center} \scriptsize{ \begin{tabular}{ccc} \toprule \textbf{Model architectures} & Test Acc. Top1 & Test Acc. Top5 \bigstrut\\ \midrule Low-rank ResNet-50 & $71.03\%$ & $90.26\%$ \bigstrut\\ Hybrid ResNet-50 (wo. vanilla warm-up) & $75.85\%$ & $92.96\%$ \bigstrut\\ Hybrid ResNet-50 (w. vanilla warm-up) & $\bf{76.43}\%$ & $\bf{93.10}\%$ \bigstrut\\ \bottomrule \end{tabular}% } \vspace{-6mm} \end{center} \end{table} \vspace{-2 mm} \begin{table}[ht] \caption{The effect of vanilla warm-up training and hybrid network architectures of \textsc{Pufferfish}{} of the low rank VGG-19-BN trained over the CIFAR-10 dataset. Results are averaged across $3$ independent trials with different random seeds.} \label{table:ablation-vgg19-cifar10} \begin{center} \scriptsize{ \begin{tabular}{ccc} \toprule \textbf{Model architectures} & Test Loss & Test Accuracy \bigstrut\\ \midrule Low-rank VGG-19-BN & $0.355\pm 0.012$ & $93.34\pm 0.08\%$ \bigstrut\\ Hybrid VGG-19-BN (wo. vanilla warm-up) & $0.407\pm 0.008$ & $93.53\pm 0.13\%$ \bigstrut\\ Hybrid VGG-19-BN (w. 
vanilla warm-up) & $0.375\pm 0.019$ & ${\bf93.89} \pm 0.14\%$ \bigstrut\\
\bottomrule
\end{tabular}%
}
\end{center}
\end{table}
\section{Conclusion}
We propose \textsc{Pufferfish}{}, a communication- and computation-efficient distributed training framework. Instead of compressing gradients, \textsc{Pufferfish}{} trains low-rank networks initialized by factorizing a partially trained full-rank model. The use of a hybrid low-rank model and warm-up training allows \textsc{Pufferfish}{} to preserve the accuracy of the fully dense SGD-trained model, while effectively reducing its size. \textsc{Pufferfish}{} achieves high computation and communication efficiency and completely bypasses gradient encoding and decoding, while yielding smaller and more accurate models compared to pruning methods such as LTH and EB Train, and avoiding the burden of ``winning the lottery".
\vspace{-2mm}
\section{Experiments}\label{sec:experiment}
We conduct extensive experiments to study the effectiveness and scalability of \textsc{Pufferfish}{} over various computer vision and natural language processing tasks, across real distributed environments. We also compare \textsc{Pufferfish}{} against a wide range of baselines including: (i) \textsc{PowerSGD}, a low-rank based gradient compression method that achieves high compression ratios~\citep{vogels2019powersgd}; (ii) \textsc{Signum}, a gradient compression method that only communicates the sign of the local momentum~\citep{bernstein2018signsgd,bernstein2018signsgd2}; (iii) the ``early bird'' structured pruning method \textit{EB Train}~\citep{you2019drawing}; and (iv) the LTH sparsification method (referred to as LTH for simplicity)~\citep{frankle2018lottery}. Our experimental results indicate that \textsc{Pufferfish}{} allows training a model that is up to $3.35\times$ smaller than its vanilla counterpart, with only marginal accuracy loss. Compared to \textsc{PowerSGD}, \textsc{Signum}, and vanilla SGD, \textsc{Pufferfish}{} achieves $1.22\times$, $1.52\times$, and $1.74\times$ end-to-end speedups respectively for ResNet-18 trained on CIFAR-10, while reaching the same accuracy as vanilla SGD. \textsc{Pufferfish}{} leads to a model with $1.3M$ fewer parameters while reaching $1.76\%$ higher top-1 test accuracy than EB Train on the ImageNet dataset. Compared to LTH, \textsc{Pufferfish}{} leads to a $5.67\times$ end-to-end speedup for achieving the same model compression ratio for VGG-19 on CIFAR-10. We also demonstrate that the performance of \textsc{Pufferfish}{} is stable under the ``mixed-precision training" implemented by PyTorch AMP. Our code is publicly available for reproducing our results\footnote{\url{https://github.com/hwang595/Pufferfish}}.
\vspace{-2 mm}
\subsection{Experimental setup and implementation details}
\paragraph{Setup.} \textsc{Pufferfish}{} is implemented in PyTorch~\cite{paszke2019pytorch}. We experiment using two implementations. The first implementation we consider is the data-parallel model training API in PyTorch, {\it i.e.}, DDP. However, as the gradient computation and communication are overlapped in DDP\footnote{the computed gradients are buffered and communicated immediately when hitting a certain buffer size, {\it e.g.}, 25MB.}, it is challenging to conduct a breakdown runtime analysis in DDP. We thus also develop a prototype \texttt{allreduce}-based distributed implementation that decouples the computation and communication to benchmark the breakdown runtime of \textsc{Pufferfish}{} and other baselines.
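A minimal sketch of one training step of such a prototype, including the single flat-buffer \texttt{allreduce} call described in the implementation details below, is given here; the model, optimizer, and loss names are placeholders, and the process group is assumed to be initialized with the NCCL backend.
\begin{verbatim}
import torch
import torch.distributed as dist

def train_step(model, optimizer, loss_fn, inputs, targets, world_size):
    """Sketch of one decoupled compute/communicate step: compute gradients
    locally, pack them into a single flat buffer, call allreduce once,
    then unpack the averaged gradients and apply the optimizer step."""
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()                                      # local computation

    grads = [p.grad for p in model.parameters() if p.grad is not None]
    flat = torch.cat([g.reshape(-1) for g in grads])     # one flat buffer

    dist.all_reduce(flat, op=dist.ReduceOp.SUM)          # a single allreduce
    flat.div_(world_size)                                # average over workers

    offset = 0                                           # unpack per layer
    for g in grads:
        g.copy_(flat[offset:offset + g.numel()].reshape(g.shape))
        offset += g.numel()
    optimizer.step()
\end{verbatim}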
Our prototype distributed implementation is based on \texttt{allreduce} in PyTorch and the NCCL backend. All our experiments are deployed on a distributed cluster consisting of up to 16 \texttt{p3.2xlarge} (Tesla V100 GPU equipped) instances on Amazon EC2. \vspace{-2 mm} \paragraph{Models and Datasets.} The datasets considered in our experiments are CIFAR-10~\cite{krizhevsky2009learning}, ImageNet (ILSVRC2012)~\cite{deng2009imagenet}, the WikiText-2 datasets~\cite{merity2016pointer}, and the WMT 2016 German-English translation task data~\cite{elliott2016multi30k}. For the image classification tasks on CIFAR-10, we considered VGG-19-BN (which we refer to as VGG-19) \citep{simonyan2014very} and ResNet-18~\cite{he2016deep}. For ImageNet, we run experiments with ResNet-50 and WideResNet-50-2 \citep{zagoruyko2016wide}. For the WikiText-2 dataset, we considered a 2-layer stacked LSTM model. For the language translation task, we consider a $6$-layer Transformer architecture \citep{vaswani2017attention}. More details about the datasets and models can be found in the Appendix. \vspace{-2 mm} \paragraph{Implementation details and optimizations.} In our prototype distributed implementation, the \texttt{allreduce} operation starts right after all compute nodes finish computing the gradient. An important implementation-level optimization we conduct is that we pack all gradient tensors into one flat buffer, and only call the \texttt{allreduce} operation \textbf{once} per iteration. The motivation for such an optimization is that \textsc{Pufferfish}{} factorizes the full-rank layer $W_l$ to two smaller layers, {\it i.e.}, $U_l, V^\top_l$. Though the communication cost of the \texttt{allreduce} on each smaller layer is reduced, the total number of \texttt{allreduce} calls is doubled (typically an \texttt{allreduce} is required per layer to synchronize the gradients across the distributed cluster). According to the run-time cost model of the ring-allreduce~\citep{thakur2005optimization}, each \texttt{allreduce} call introduces a network latency proportional to the product of the number of compute nodes and average network latency. This is not a negligible cost. Our optimization strategy aims at minimizing the additional latency overhead and leads to good performance improvement based on our tests. For a fair comparison, we conduct the same communication optimization for all considered baselines. \begin{table}[ht] \vspace{-2mm} \caption{The results (averaged across $3$ independent trials with different random seeds) of \textsc{Pufferfish}{} and the vanilla 2-layer stacked LSTMs trained over the WikiText-2 dataset (since the embedding layer is just a look up table, we do not count it when calculating the MACs).} \vspace{-1 mm} \label{table:lstm-main-results} \begin{center} \scriptsize{ \begin{tabular}{ccc} \toprule \textbf{Model archs.} & Vanilla LSTM & \textsc{Pufferfish}{} LSTM \bigstrut\\ \midrule \# Params. & $85,962,278$ & $67,962,278$ \bigstrut\\ Train Ppl. & $52.87 \pm 2.43$ & $62.2\pm 0.74$\bigstrut\\ Val Ppl. & $92.49\pm 0.41$ & $93.62 \pm 0.36$\bigstrut\\ Test Ppl. 
& $88.16\pm 0.39$ & $88.72 \pm 0.24$\bigstrut\\ MACs & $18$M & $9$M \bigstrut\\ \bottomrule \end{tabular}}% \vspace{-6mm} \end{center} \end{table} \begin{table}[ht] \caption{The results (averaged across $3$ independent trials with different random seeds) of \textsc{Pufferfish}{} and vanilla 6-layer Transformers trained over the WMT 2016 German to English Translation Task.} \vspace{-1 mm} \label{table:transformer-main-results} \begin{center} \scriptsize{ \begin{tabular}{ccc} \toprule \textbf{Model archs.} & Vanilla Transformer & \textsc{Pufferfish}{} Transformer \bigstrut\\ \midrule \# Params. & $48,978,432$ & $26,696,192$ \bigstrut\\ Train Ppl . & $13.68\pm 0.96$ & $\bf{10.27 \pm 0.65}$ \bigstrut\\ Val. Ppl . & $11.88\pm 0.43$ & $\bf{7.34 \pm 0.12}$ \bigstrut\\ Val. BLEU & $19.05\pm 0.59$ & $\bf{26.87\pm 0.17}$ \bigstrut\\ \bottomrule \end{tabular}}% \vspace{-5mm} \end{center} \end{table} \begin{table}[ht] \caption{The results (averaged across $3$ independent trials with different random seeds) of \textsc{Pufferfish}{} and vanilla VGG-19 and ResNet-18 trained over the CIFAR-10 dataset. Both full-precision training (FP32) and ``mixed-precision training" (AMP) results are reported.} \vspace{-2 mm} \label{table:cifar10-main-results} \begin{center} \scriptsize{ \begin{tabular}{cccc} \toprule \textbf{Model Archs.} & \# Params. & Test Acc. (\%) & MACs (G) \bigstrut\\ \midrule Vanilla VGG-19 (FP32) & $20,560,330$ & $93.91 \pm 0.01$ & $0.4$ \bigstrut\\ \textsc{Pufferfish}{} VGG-19 (FP32) & $8,370,634$ & $93.89\pm 0.14$ & $0.29$ \bigstrut\\ Vanilla VGG-19 (AMP) & $20,560,330$ & $94.12\pm 0.08$ & N/A \bigstrut\\ \textsc{Pufferfish}{} VGG-19 (AMP) & $8,370,634$ & $93.98\pm 0.06$ & N/A \bigstrut\\ Vanilla ResNet-18 (FP32) & $11,173,834$ & $95.09\pm 0.01$ & $0.56$ \bigstrut\\ \textsc{Pufferfish}{} ResNet-18 (FP32) & $3,336,138$ & $94.87\pm 0.21$ & $0.22$ \bigstrut\\ Vanilla ResNet-18 (AMP) & $11,173,834$ & $95.02\pm 0.1$ & N/A \bigstrut\\ \textsc{Pufferfish}{} ResNet-18 (AMP) & $3,336,138$ & $94.70\pm 0.37$ & N/A \bigstrut\\ \bottomrule \end{tabular}}% \vspace{-4mm} \end{center} \end{table} \begin{table*}[ht] \caption{The results of the vanilla and \textsc{Pufferfish}{} ResNet-50 and WideResNet-50-2 models trained on the ImageNet dataset. For the ResNet-50 results, both full precision training (FP32) and mixed-precision training (AMP) are provided. For the AMP training, MACs are not calculated.} \vspace{-1 mm} \label{table:imagenet-main-results} \begin{center} \scriptsize{ \begin{tabular}{ccccc} \toprule \textbf{Model Archs.} & Number of Parameters & Final Test Acc. (Top-1) & Final Test Acc. (Top-5) & MACs (G) \bigstrut\\ \midrule Vanilla WideResNet-50-2 (FP32) & $68,883,240$ & $78.09\%$ & $94.00\%$ & $11.44$ \bigstrut\\ \textsc{Pufferfish}{} WideResNet-50-2 (FP32) & $40,047,400$ & $77.84\%$ & $93.88\%$ & $9.99$ \bigstrut\\ Vanilla ResNet-50 (FP32) & $25,557,032$ & $76.93\%$ & $93.41\%$ & $4.12$ \bigstrut\\ \textsc{Pufferfish}{} ResNet-50 (FP32) & $15,202,344$ & $76.43\%$ & $93.10\%$ & $3.6$ \bigstrut\\ Vanilla ResNet-50 (AMP) & $25,557,032$ & $76.97\%$ & $93.35\%$ & N/A \bigstrut\\ \textsc{Pufferfish}{} ResNet-50 (AMP) & $15,202,344$ & $76.35\%$ & $93.22\%$ & N/A \bigstrut\\ \bottomrule \end{tabular}}% \vspace{-6mm} \end{center} \end{table*} \begin{table}[ht] \vspace{-1 mm} \caption{The runtime mini-benckmark results of \textsc{Pufferfish}{} and vanilla VGG-19 and ResNet-18 networks training on the CIFAR-10 dataset. 
Experiment running on a single V100 GPU with batch size at $128$, results averaged over $10$ epochs; under the reproducible cuDNN setup with \texttt{cudnn.benckmark} disabled and \texttt{cudnn.deterministic} enabled; Speedup calculated based on the averaged runtime.} \vspace{-1 mm} \label{table:mini-benchmark} \begin{center} \scriptsize{ \begin{tabular}{cccc} \toprule \textbf{Model Archs.} & Epoch Time (sec.) & Speedup & MACs (G) \bigstrut\\ \midrule Vanilla VGG-19 & $13.51 \pm 0.02$ & $-$ & $0.4$ \bigstrut\\ \textsc{Pufferfish}{} VGG-19 & $\bf{11.02\pm 0.01}$ & $\bf{1.23\times}$& $\bf{0.29}$ \bigstrut\\ Vanilla ResNet-18 & $18.89\pm 0.07$ & $-$ & $0.56$ \bigstrut\\ \textsc{Pufferfish}{} ResNet-18 & $\bf{12.78\pm 0.03}$ & $\bf{1.48\times}$ & $\bf{0.22}$ \bigstrut\\ \bottomrule \end{tabular}}% \vspace{-3mm} \end{center} \end{table} \vspace{-2 mm} \paragraph{Hyper-parameters for \textsc{Pufferfish}{}.} For all considered model architectures, we use a global rank ratio of $0.25$, {\it e.g.}, for a convolution layer with an initial rank of $64$, \textsc{Pufferfish}{} sets $r = 64\times 0.25=16$. For the LSTM on WikiText-2 experiment, we only factorize the LSTM layers and leave the tied embedding layer as is. Allocating the optimal rank for each layer can lead to better final model accuracy and smaller model sizes as discussed in~\cite{idelbayev2020low}. However, the search space for the rank allocation problem is large. One potential way to solve that problem is to borrow ideas from the literature of neural architectural search (NAS), which we leave as future work. We tune the initial low-rank layer index, {\it i.e.}, $K$ and the vanilla warm-up training period to balance the hybrid model size and the final model accuracy. More details of the hyper-parameters of \textsc{Pufferfish}{} can be found in the Appendix. \vspace{-2 mm} \subsection{Results} \begin{figure*}[t] \centering \includegraphics[width=0.25\textwidth]{figs/breakdown_runtime_analysis_imagenet.pdf} \includegraphics[width=0.25\textwidth]{figs/breakdown_runtime_analysis_cifar10.pdf} \includegraphics[width=0.25\textwidth]{figs/scalability_diff_cluster_size.pdf}\\ \vspace{-2mm} \subfigure[Proto. ResNet-50, ImageNet]{\includegraphics[width=0.25\textwidth]{figs/end2end_dist_imagenet_resnet50.pdf}\label{fig:proto-imagenet-resnet50}} \subfigure[Proto. ResNet-18, CIFAR-10]{\includegraphics[width=0.25\textwidth]{figs/end2end_dist_cifar10_resnet18.pdf}\label{fig:proto-cifar10-resnet18}} \subfigure[DDP ResNet-50, ImageNet]{\includegraphics[width=0.25\textwidth]{figs/end2end_convergence_imagenet_resnet50_ddp.pdf}\label{fig:ddp-resnet50-ddp}} \vspace{-4 mm} \caption{(a) Breakdown per-epoch runtime analysis (top) and end-to-end convergence (bottom) results for vanilla SGD, \textsc{Pufferfish}{}, and \textsc{signum} over ResNet-50 trained on the ImageNet dataset. Where \texttt{Comm.} and \texttt{Comp.} stands for computation and communication costs; (b) Breakdown per-epoch runtime analysis (top) and end-to-end convergence (bottom) results for vanilla SGD, \textsc{Pufferfish}{}, \textsc{signum}, and PowerSGD over ResNet-18 trained on CIFAR-10; (c) The scalability of \textsc{Pufferfish}{} compared to vanilla SGD for ResNet-50 training on ImageNet using PyTorch DDP over the distributed clusters that consist of $2, 4, 8, 16$ nodes (top); End-to-end convergence for vanilla SGD and \textsc{Pufferfish}{} with PyTorch DDP under the cluster with $8$ nodes (bottom). 
} \vspace{-4 mm} \end{figure*} \paragraph{Parameter reduction and model accuracy.} We extensively study the effectiveness of \textsc{Pufferfish}{}, and the comprehensive numerical results are shown in Table~\ref{table:lstm-main-results}, \ref{table:transformer-main-results}, \ref{table:cifar10-main-results}, and \ref{table:imagenet-main-results}. The main observation is that \textsc{Pufferfish}{} effectively reduces the number of parameters while introducing only marginal accuracy loss. In particular, \textsc{Pufferfish}{} ResNet-18 is $3.35\times$ smaller than vanilla ResNet-18 with only $0.22\%$ accuracy loss. Surprisingly, the \textsc{Pufferfish}{} Transformer leads to even better validation perplexity and test BLEU scores than the vanilla Transformer. One potential reason behind that is that factorizing the Transformer introduces some implicit regularization, leading to better generalization. Apart from the full precision training over FP32, we also conduct mixed-precision experiments over PyTorch AMP on both CIFAR-10 and ImageNet. Our results generally demonstrate that the performance of \textsc{Pufferfish}{} remains stable under mixed-precision training. We measure the computational complexity using ``\textit{multiply–accumulate operations}" (MACs) \footnote{\url{https://en.wikipedia.org/wiki/Multiply\%E2\%80\%93accumulate_operation}}. The MAC results are shown in Table~\ref{table:lstm-main-results}, \ref{table:cifar10-main-results}, and \ref{table:imagenet-main-results}. The computation complexity is estimated by passing a single input through the entire network, {\it e.g.}, for the CIFAR-10 dataset, we simulate a color image with size $32\times 32 \times 3$ and pass it to the networks. For the LSTM network, we assume a single input token is with batch size at $1$. We only report the MACs of forward pass. \textsc{Pufferfish}{} reduces the MACs of the vanilla model to up to $2.55\times$ over ResNet-18 on CIFAR-10. \vspace{-1mm} \paragraph{Runtime mini-benchmark.} It is of interest to investigate the actual speedup of the factorized networks as they are dense and compact. We thus provide mini-benchmark runtime results over VGG-19 and ResNet-18 on the CIFAR-10 dataset. We measure the per-epoch training speed of the factorized networks used in \textsc{Pufferfish}{} and the vanilla networks on a single V100 GPU with batch size at $128$. The results are shown in Table~\ref{table:mini-benchmark}. We report the results (averaged over $10$ epochs) under the reproducibility optimized cuDNN environment, {\it i.e.}, \texttt{cudnn.benckmark} disabled and \texttt{cudnn.deterministic} enabled. The results indicate that the factorized networks enjoy promising runtime speedups, {\it i.e.}, $1.23\times$ and $1.48\times$ over the vanilla VGG-19 and ResNet-18 respectively. We also study the runtime of the factorized networks under the speed optimized cuDNN setting, {\it i.e.}, \texttt{cudnn.benckmark} enabled and \texttt{cudnn.deterministic} disabled, the results can be found in the Appendix. \vspace{-2mm} \paragraph{Computation and communication efficiency.} To benchmark the computation and communication costs of \textsc{Pufferfish}{} under a distributed environment, we conduct a per-epoch breakdown runtime analysis and compare it to vailla SGD and \textsc{Signum} on ResNet-50, trained over ImageNet. The experiment is conducted over $16$ \texttt{p3.2xlarge} EC2 instances. We set the global batch size at $256$ ($16$ per node). We use tuned hyper-parameters for all considered baselines. 
The results are shown in Figure~\ref{fig:proto-imagenet-resnet50}, where we observe that the \textsc{Pufferfish}{} ResNet-50 achieves $1.35\times$ and $1.28\times$ per-epoch speedups compared to vanilla SGD and \textsc{Signum} respectively. Note that though \textsc{Signum} achieves a high compression ratio, it is not compatible with \texttt{allreduce}; thus \texttt{allgather} is used instead in our \textsc{Signum} implementation. However, \texttt{allgather} is less efficient than \texttt{allreduce}, which hurts the communication efficiency of \textsc{signum}. This effect has also been observed in previous literature~\citep{vogels2019powersgd}. We extend the per-epoch breakdown runtime analysis to ResNet-18 training on CIFAR-10, where we compare \textsc{Pufferfish}{} to \textsc{PowerSGD}, \textsc{signum}, and vanilla SGD. The experiments are conducted over $8$ \texttt{p3.2xlarge} EC2 instances with a global batch size of $2048$ ($256$ per node). We use a linear learning rate warm-up for $5$ epochs from $0.1$ to $1.6$, which follows the setting in~\cite{vogels2019powersgd,goyal2017accurate}. For \textsc{PowerSGD}, we set the rank to $2$, as it matches the accuracy of vanilla SGD~\cite{vogels2019powersgd}. The results are shown in Figure~\ref{fig:proto-cifar10-resnet18}, from which we observe that \textsc{Pufferfish}{} achieves $1.33\times, 1.67\times, 1.92\times$ per-epoch speedups over \textsc{PowerSGD}, \textsc{signum}, and vanilla SGD respectively. Note that \textsc{Pufferfish}{} is slower than \textsc{PowerSGD} in the communication stage, since \textsc{PowerSGD} massively compresses the gradients and is also compatible with \texttt{allreduce}. However, \textsc{Pufferfish}{} is faster in gradient computation and bypasses the gradient encoding and decoding steps. Thus, the overall per-epoch time of \textsc{Pufferfish}{} is lower than that of \textsc{PowerSGD}. Other model training overheads, {\it e.g.}, data loading and pre-processing, gradient flattening, etc., are not included in the ``computation" stage but are included in the overall per-epoch time. Since \textsc{Pufferfish}{} only requires modifying the model architecture rather than the gradients, it is directly compatible with current data-parallel training APIs, {\it e.g.}, DDP in PyTorch. Other gradient compression methods achieve high compression ratios, but they are not directly compatible with DDP without significant engineering effort. For PyTorch DDP, we study the speedup of \textsc{Pufferfish}{} over vanilla distributed training, measuring the per-epoch runtime on ResNet-50 and ImageNet over distributed clusters of size $2, 4, 8,$ and $16$. We fix the per-node batch size at $32$, following the setup in~\cite{goyal2017accurate}. The results are shown in Figure~\ref{fig:ddp-resnet50-ddp}. We observe that \textsc{Pufferfish}{} consistently outperforms vanilla ResNet-50. In particular, on the cluster with $16$ nodes, \textsc{Pufferfish}{} achieves a $1.52\times$ per-epoch speedup.
\vspace{-2 mm}
\paragraph{End-to-end speedup.} We study the end-to-end speedup of \textsc{Pufferfish}{} against the other baselines under both our prototype implementation and PyTorch DDP. The experimental setups for the end-to-end experiments are identical to those of our per-epoch breakdown runtime analysis. All reported runtimes include the overheads of the SVD factorization and vanilla warm-up training. The ResNet-50 on ImageNet convergence results with our prototype implementation are shown in Figure~\ref{fig:proto-imagenet-resnet50}.
We observe that to finish the entire $90$ training epochs, \textsc{Pufferfish}{} attains $1.3\times$ and $1.23\times$ end-to-end speedups compared to vanilla SGD and \textsc{Signum} respectively. The ResNet-18 on CIFAR-10 convergence results are shown in Figure~\ref{fig:proto-cifar10-resnet18}. For faster vanilla warm-up training in \textsc{Pufferfish}{}, we deploy \textsc{PowerSGD} to compress the gradients. We observe that it is generally better to use a slightly higher rank for \textsc{PowerSGD} in the vanilla warm-up training period of \textsc{Pufferfish}{}. In our experiments, we use \textsc{PowerSGD} with rank $4$ to warm up \textsc{Pufferfish}{}. We observe that to finish the entire $300$ training epochs, \textsc{Pufferfish}{} attains $1.74\times, 1.52\times, 1.22\times$ end-to-end speedup compared to vanilla SGD, \textsc{signum}, and \textsc{PowerSGD} respectively. \textsc{Pufferfish}{} reaches to the same accuracy compared to vanilla SGD. Moreover, we extend the end-to-end speedup study under PyTorch DDP where we compare \textsc{Pufferfish}{} with vanilla SGD under $8$ EC2 \texttt{p3.2xlarge} instances. The global batch size is $256$ ($32$ per node). The results are shown in Figure~\ref{fig:ddp-resnet50-ddp} where we observe that to train the model for $90$ epochs, \textsc{Pufferfish}{} achieves $1.64\times$ end-to-end speedup compared to vanilla SGD. We do not study the performance of \textsc{signum} and \textsc{PowerSGD} under DDP since they are not directly compatible with DDP. \begin{table*}[ht] \caption{Comparison of Hybrid ResNet-50 model compared to the Early-Bird Ticket structure pruned (EB Train) ResNet-50 model results with prune ratio $pr$ at $30\%, 50\%, 70\%$ over the ImageNet dataset} \label{table:comparison-eb-train} \begin{center} \scriptsize{ \begin{tabular}{ccccc} \toprule \textbf{Model architectures} & \# Parameters & Final Test Acc. (Top-1) & Final Test Acc. (Top-5) & MACs (G) \bigstrut\\ \midrule vanilla ResNet-50 & $25,610,205$ & $75.99\%$ & $92.98\%$ & $4.12$ \bigstrut\\ \textsc{Pufferfish}{} ResNet-50 & $15,202,344$ & $75.62\%$ & $92.55\%$ & $3.6$ \bigstrut\\ EB Train ($pr=30\%$) & $16,466,787$ & $73.86\%$ & $91.52\%$ & $2.8$ \bigstrut\\ EB Train ($pr=50\%$) & $15,081,947$ & $73.35\%$ & $91.36\%$ & $2.37$ \bigstrut\\ EB Train ($pr=70\%$) & $7,882,503$ & $70.16\%$ & $89.55\%$ & $1.03$ \bigstrut\\ \bottomrule \end{tabular}}% \vspace{-8mm} \end{center} \end{table*} \vspace{-4 mm} \paragraph{Comparison with structured pruning.} We compare \textsc{Pufferfish}{} with the EB Train method where structured pruning is conducted over the channel dimensions based on the activation values during the early training phase~\citep{you2019drawing}. EB Train finds compact and dense models. The result is shown in Table \ref{table:comparison-eb-train}. We observe that compared to EB Train with prune ratio $(pr)=30\%$, \textsc{Pufferfish}{} returns a model with $1.3M$ fewer parameters while reaching $1.76\%$ higher top-1 test accuracy. The EB Train experimental results are taken directly from the original paper~\citep{you2019drawing}. To make a fair comparison, we train \textsc{Pufferfish}{} with the same hyper-parameters that EB Train uses, {\it e.g.}, removing label smoothing and only decaying the learning rate at the $30$-th and the $60$-th epochs with the factor $0.1$. 
\vspace{-2 mm} \begin{figure}[ht] \centering \subfigure[Model size \textit{vs} Runtime]{\includegraphics[width=0.225\textwidth]{figs/pufferfish_vs_lth_time_vs_params.pdf}\label{fig:lth-comp-time}} \subfigure[Model size \textit{vs} Test Acc.]{\includegraphics[width=0.225\textwidth]{figs/pufferfish_vs_lth_param_acc.pdf}\label{fig:lth-comp-acc}} \vspace{-4 mm} \caption{The performance comparison between \textsc{Pufferfish}{} and LTH over a VGG-19 model trained over the CIFAR-10 dataset: (a) the number of parameters \textit{v.s.} wall-clock runtime; (b) the number of parameters pruned \textit{v.s.} the test accuracy. } \vspace{-6mm} \end{figure} \vspace{-2mm} \paragraph{Comparison with LTH.} The recent LTH literature initiated by Frankle et al.~\cite{frankle2018lottery}, indicates that dense, randomly-initialized networks contain sparse subnetworks (referred to as ``\textit{winning tickets}") that---when trained in isolation---reach test accuracy comparable to the original network~\citep{frankle2018lottery}. To find the winning tickets, an iterative pruning algorithm is conducted, which trains, prunes, and rewinds the remaining unpruned elements to their original random values repeatedly. Though LTH can compress the model massively without significant accuracy loss, the iterative pruning is computationally heavy. We compare \textsc{Pufferfish}{} to LTH across model sizes and computational costs on VGG-19 trained with CIFAR-10. The results are shown in Figure \ref{fig:lth-comp-time}, \ref{fig:lth-comp-acc} where we observe that to prune the same number of parameters, LTH costs $5.67\times$ more time than \textsc{Pufferfish}{}. \vspace{-4mm} \paragraph{Ablation study.} We conduct an ablation study on the accuracy loss mitigation methods in \textsc{Pufferfish}{}, {\it i.e.}, hybrid network and vanilla warm-up training. The results on ResNet-18+CIFAR-10 and LSTM+WikiText-2 are shown in Table~\ref{table:ablation-resnet18-cifar10} and Table~\ref{table:ablation-lstm}, which indicate that the hybrid network and vanilla warm-up training methods help to mitigate the accuracy loss effectively. Results on the other datasets can be found in the Appendix. \begin{table}[ht] \caption{The effect of vanilla warm-up training and hybrid network architectures of \textsc{Pufferfish}{} of the low rank ResNet-18 trained over the CIFAR-10 dataset. Results are averaged across $3$ independent trials with different random seeds.} \vspace{-2 mm} \label{table:ablation-resnet18-cifar10} \begin{center} \scriptsize{ \begin{tabular}{ccc} \toprule \textbf{Methods} & Test Loss & Test Acc. (\%) \bigstrut\\ \midrule Low-rank ResNet-18 & $0.31\pm 0.01$ & $93.75\pm 0.19$ \bigstrut\\ Hybrid ResNet-18 (wo. vanilla warm-up) & $0.30\pm 0.02$ & $93.92\pm 0.45$ \bigstrut\\ Hybrid ResNet-18 (w. vanilla warm-up) & ${\bf 0.25}\pm 0.01$ & ${\bf94.87} \pm 0.21$ \bigstrut\\ \bottomrule \end{tabular}% } \vspace{-4mm} \end{center} \end{table} \begin{table}[ht] \vspace{-3 mm} \caption{The effect of vanilla warm-up training on the low-rank LSTM trained over WikiText-2. Results are averaged across $3$ independent trials with different random seeds.} \vspace{-1 mm} \label{table:ablation-lstm} \begin{center} \scriptsize{ \begin{tabular}{ccc} \toprule \textbf{Methods} & Low-rank LSTM & Low-rank LSTM \\ & (wo. vanilla warm-up) & (w. vanilla warm-up) \bigstrut\\ \midrule Train Ppl. & $68.04 \pm 2.98$ & $\bf{62.2}\pm 0.74$ \bigstrut\\ Val. Ppl. & $97.59 \pm 0.69$ & $\bf{93.62}\pm0.36$ \bigstrut\\ Test Ppl. 
& $92.04 \pm 0.54$ & $\bf{88.72}\pm 0.24$ \bigstrut\\ \bottomrule \end{tabular}% } \vspace{-8mm} \end{center} \end{table} \vspace{-4mm} \paragraph{Limitations of \textsc{Pufferfish}{}.} One limitation of \textsc{Pufferfish}{} is that it introduces two extra hyper-parameters, {\it i.e.}, the initial low-rank layer index $K$ and the vanilla warm-up epoch $E_{\text{wu}}$, hence hyperparameter tuning requires extra effort. Another limitation is that although \textsc{Pufferfish}{} reduces the parameters in ResNet-18 and VGG-19 models effectively for the CIFAR-10 dataset, it only finds $1.68\times$ and $1.72\times$ smaller models for ResNet-50 and WideResNet-50-2 in order to preserve good final model accuracy. \vspace{-2mm} \section{\textsc{PufferFish}: effective deep factorized network training}\label{sec:pufferfish} In the following subsections, we discuss how model factorization is implemented for different model architectures. \subsection{Low-rank factorization for FC layers} For simplicity, we discuss a 2-layer FC network that can be represented as $h(x) = \sigma(W_1\sigma(W_2x))$ where $W_l, \forall l \in \{1, 2\}$ are weight matrices, $\sigma(\cdot)$ is an arbitrary activation function, and $x$ is the input data point. We propose to pre-factorize the matrices $W_l$ into $U_l V_l^T$ where the factors are of significantly smaller dimensions while also reducing the computational complexity of the full-rank FC layer. \vspace{-2 mm} \subsection{Low-rank factorization for convolution layers} \paragraph{Basics on convolution layers.} The above low-rank factorization strategy extends to convolutional layers (see Fig.~\ref{fig:lowrankDNN} for a sketch). In a convolution layer, a $c_{\text{in}}$-channel input image of size $H\times W$ pixels is convolved with $c_{\text{out}}$ filters of size $c_{\text{in}}\times k\times k$ to create a $c_{\text{out}}$-channel output feature map. Therefore, the computational complexity for the convolution of the filter with a $c_{\text{in}}$-channel input image is $\mathcal{O}(c_{\text{in}}c_{\text{out}}k^2HW)$. In what follows, we describe schemes for modifying the architecture of the convolution layers via low-rank factorization to reduce computational complexity and the number of parameters. The idea is to replace vanilla (full-rank) convolution layers with factorized versions. These factorized filters amount to the same number of convolution filters, but are constructed through linear combinations of a sparse, {\it i.e.}, low-rank filter basis. \vspace{-2mm} \paragraph{Factorizing a convolution layer.} For a convolution layer with dimension $W\in \mathbb{R}^{c_\text{in} \times c_\text{out} \times k \times k}$ where $c_\text{in}$ and $c_\text{out}$ are the number of input and output channels and $k$ is the size of the convolution filters, {\it e.g.}, $k=3$ or $5$. Instead of factorizing the 4D weight of a convolution layer directly, we consider factorizing the unrolled 2D matrix. Unrolling the 4D tensor $W$ leads to a 2D matrix with shape $W_{\text{unrolled}} \in \mathbb{R}^{c_\text{in}k^2 \times c_\text{out}}$ where each column represents the weight of a vectorized convolution filter. The rank of the unrolled matrix is determined by $\min\{c_{\text{in}}k^2,c_{\text{out}}\}$. Factorizing the unrolled matrix returns $U \in \mathbb{R}^{c_\text{in}k^2\times r}$, $V^\top \in \mathbb{R}^{r \times c_\text{out}}$, {\it i.e.}, $W_{\text{unrolled}} \approx UV^\top$. 
Reshaping the factorized $U, V^\top$ matrices back to 4D filters leads to $U \in \mathbb{R}^{c_{\text{in}} \times r \times k \times k}, V^\top \in \mathbb{R}^{r \times c_{\text{out}}}$. Therefore, factorizing a convolution layer returns a thinner convolution layer $U$ with width $r$, {\it i.e.}, the number of convolution filters, and a linear projection layer $V^\top$. In other words, the full-rank original convolution filter bank is approximated by a linear combination of $r$ basis filters. The $V^\top$s can also be represented by a $1\times 1$ convolution layer, {\it e.g.}, $V^\top_l \in \mathbb{R}^{r \times c_{\text{out}} \times 1 \times 1}$, which is more natural for computer vision tasks as it operates directly on the spatial domain~\cite{lin2013network}. In \textsc{Pufferfish}{}, we use the $1\times1$ convolution for all $V^\top_l$ layers in the considered CNNs. One can also use tensor decomposition, {\it e.g.}, the Tucker decomposition to directly factorize the 4D tensor weights~\cite{tucker1966some}. In this work, for simplicity, we do not consider tensor decompositions. \vspace{-2 mm} \subsection{Low-rank factorization for LSTM layers} LSTMs have been proposed as a means to mitigate the ``vanishing gradient'' issue of traditional RNNs~\cite{hochreiter1997long}. The forward pass of an LSTM is as follows \begin{align} i_t &= \sigma(W_{ii} x_t + b_{ii} + W_{hi} h_{t-1} + b_{hi})\nonumber \\ f_t &= \sigma(W_{if} x_t + b_{if} + W_{hf} h_{t-1} + b_{hf})\nonumber \\ g_t &= \tanh(W_{ig} x_t + b_{ig} + W_{hg} h_{t-1} + b_{hg}) \label{eq:lstm-rule}\\ o_t &= \sigma(W_{io} x_t + b_{io} + W_{ho} h_{t-1} + b_{ho}) \nonumber\\ c_t &= f_t \odot c_{t-1} + i_t \odot g_t \nonumber \\ h_t &= o_t \odot \tanh(c_t)\nonumber. \end{align} $h_t, c_t, x_t$ represent the hidden state, cell state, and input at time $t$ respectively. $h_{t-1}$ is the hidden state of the layer at time $t-1$. $i_t, f_t, g_t, o_t$ are the input, forget, cell, and output gates, respectively. $\sigma(\cdot)$ and $\odot$ denote the sigmoid activation function and the Hadamard product, respectively. The trainable weights are the matrices $W_{i\cdot} \in \mathbb{R}^{h\times d}, W_{h\cdot}\in \mathbb{R}^{h\times h}$, where $d$ and $h$ are the embedding and hidden dimensions. Thus, similarly to the low-rank FC layer factorization, the factorized LSTM layer is represented by \begin{align} i_t &= \sigma(U_{ii}V^\top_{ii} x_t + b_{ii} + U_{hi}V^\top_{hi} h_{t-1} + b_{hi})\nonumber \\ f_t &= \sigma(U_{if}V^\top_{if} x_t + b_{if} + U_{hf}V^\top_{hf} h_{t-1} + b_{hf})\nonumber \\ g_t &= \tanh(U_{ig}V^\top_{ig} x_t + b_{ig} + U_{hg}V^\top_{hg} h_{t-1} + b_{hg})\label{eq:lr-lstm-rule} \\ o_t &= \sigma(U_{io}V^\top_{io} x_t + b_{io} + U_{ho}V^\top_{ho} h_{t-1} + b_{ho})\nonumber \\ c_t &= f_t \odot c_{t-1} + i_t \odot g_t\nonumber \\ h_t &= o_t \odot \tanh(c_t)\nonumber. \end{align} \vspace{-6 mm} \subsection{Low-rank network factorization for Transformer}\label{sec:low-rank-transformer} A Transformer layer consists of a stack of encoders and decoders~\cite{vaswani2017attention}. Both encoder and decoder contain three main building blocks, {\it i.e.}, the \textit{multi-head attention} layer, \textit{position-wise feed-forward networks} (FFN), and \textit{positional encoding}. 
A $p$-head attention layer learns $p$ independent attention mechanisms on the input key ($K$), value ($V$), and queries ($Q$) of each input token: \begin{align*} \text{MultiHead}(Q, K, V)&=\text{Concat}(\text{head}_1,\cdots, \text{head}_p)W^O\\ \text{where head}_i&=\text{Attention}(QW^Q_i, KW^K_i, VW^V_i). \label{eq:scaled-dot-product-attention} \end{align*} In the above, $W_i^Q, W_i^K, W_i^V, i \in \{1, \cdots, p\}$ are trainable weight matrices. The particular attention, referred to as ``scaled dot-product attention", is used in Transformers, {\it i.e.}, $\text{Attention}(\tilde Q, \tilde K, \tilde V) = \text{softmax}\bigg(\frac{\tilde Q \tilde K^\top}{\sqrt{d}}\bigg)\tilde V$ where $\tilde Q = Q W^Q_i, \tilde K = K W^K_i, \tilde V = V W^V_i$. $W^O$ projects the output of the multi-head attention layer to match the embedding dimension. Following~\cite{vaswani2017attention}, we assume the projected key, value, and query are embedded to $pd$ dimensions, and are projected to $d$ dimensions in the attention layer. In Transformer, a sequence of $N$ input tokens are usually batched before passing to the model where each input token is embedded to a $pd$ dimensional vector. Thus, dimensions of the inputs are $Q, K, V \in \mathbb{R}^{N\times pd}$. The learnable weight matrices are $W_i^Q, W_i^K, W_i^V\in \mathbb{R}^{pd\times d}, W^O \in \mathbb{R}^{pd\times pd}$. The FFN in Transformer consists of two learnable FC layers: $\text{FFN}(x) = \text{max}(0, x W_1 + b_1)W_2 + b_2$ where $W_1 \in \mathbb{R}^{pd\times 4pd}, W_2 \in \mathbb{R}^{4pd \times pd}$ (the relationships between the notations in our paper and the original Transformer paper~\cite{vaswani2017attention} are $pd = d_{\text{model}}, d = d_k = d_v$, and $d_{ff} = 4pd$). In \textsc{Pufferfish}{}, we factorize all learnable weight matrices in the multi-head attention and the FFN layers. We leave the positional encoding as is, since there are no trainable weights. For the bias term of each layer and the ``\textit{Layer Normalization}" weights, we use the vanilla weights directly, as they are represented by vectors. \begin{table}[ht] \vspace{-4mm} \caption{The number of parameters and computational complexities for full-rank and low-rank FC, convolution, LSTM, and the Transformer layers where $m$, $n$ are the dimensions of the FC layer and $c_{\text{in}}, c_{\text{out}}, k$ are the input, output dimensions, and kernel size respectively. $h, d$ denote the hidden and embedding dimensions in the LSTM layer. $N,p,d$ denote the sequence length, number of heads, and embedding dimensions in the Transformer. $r$ denotes the rank of the factorized low-rank layer we assume to use. } \label{table:complexities} \begin{center} \scriptsize{ \begin{tabular}{ccc} \toprule \textbf{Networks} & \# Params. & Computational Complexity \bigstrut\\ \midrule Vanilla FC & $m \times n$ & $\mathcal{O}(m n)$ \bigstrut\\ Factorized FC & $r(m + n)$ & $\mathcal{O}(r(m + n))$ \bigstrut\\ Vanilla Conv. & $c_{\text{in}}\times c_{\text{out}} \times k^2$ & $\mathcal{O}(c_{\text{in}} c_{\text{out}} k^2 HW)$ \bigstrut\\ Factorized Conv. 
& $c_{\text{in}}rk^2+rc_{\text{out}}$ & $\mathcal{O}(rc_{\text{in}}k^2 HW+rHW c_{\text{out}})$ \bigstrut\\ Vanilla LSTM & $4(dh+h^2)$ & $\mathcal{O}(dh+h^2)$ \bigstrut\\ Factorized LSTM & $4dr+12hr$ & $\mathcal{O}(dr+hr)$ \bigstrut\\ Vanilla Attention & $4p^2d^2$ & $\mathcal{O}(Np^2d^2+N^2d)$ \bigstrut\\ Factorized Attention & $(3p+5)prd$ & $\mathcal{O}\big(rpdN +N^2 d \big)$ \bigstrut\\ Vanilla FFN & $8 p^2 d^2$ & $\mathcal{O}\big(p^2 d^2 N\big)$ \bigstrut\\ Factorized FFN & $10pdr$ & $\mathcal{O}\big(r p dN\big)$ \bigstrut\\ \bottomrule \end{tabular}}% \vspace{-6mm} \end{center} \end{table} \vspace{-4 mm} \subsection{Computational complexity and model size} A low-rank factorized network enjoys a smaller number of parameters and lower computational complexity. Thus, both the computation and communication efficiencies are improved, as the amount of communication is proportional to the number of parameters. We summarize the computational complexity and the number of parameters in the vanilla and low-rank FC, convolution, LSTM, and the Transformer layers in Table~\ref{table:complexities}. We assume the FC layer has shape $W_{FC} \in \mathbb{R}^{m \times n}$, the convolution layer has shape $W_{\text{Conv}} \in \mathbb{R}^{c_{\text{in}} \times c_{\text{out}} \times k \times k}$, the LSTM layer has shape $W_i\in \mathbb{R}^{4h\times d}; W_h \in \mathbb{R}^{4h\times h}$ (where $W_i$ and $W_h$ is the concatenated input-hidden and hidden-hidden weight matrices), and the shapes of the model weights in the encoder of a Transformer follow the discussion in Section~\ref{sec:low-rank-transformer}. For Transformers, we show the computational complexity of a single encoder block. We assume the low-rank layers have rank $r$. As the computation across the $p$ heads can be done in parallel, we report the computational complexity of a single attention head. Note that for the LSTM layer, our complexity analysis assumes the low-rank layer uses the same rank for the input-hidden weights $W_{i\cdot}$ and the hidden-hidden weights $W_{h\cdot}$. Similarly, for the Transformer layer, we assume the low-rank layer uses the same rank $r$ for all $W_i^Q, W_i^K, W_i^V, W^O$. Further details can be found in the Appendix. \vspace{0.1cm} \section{Strategies for mitigating accuracy loss} \vspace{-0.1cm} \begin{figure}[ht] \vspace{-4 mm} \centering \subfigure[VGG-11 on CIFAR-10]{\includegraphics[width=0.22\textwidth]{figs/vgg11_accuracy.pdf}} \subfigure[ResNet-50 on ImageNet]{\includegraphics[width=0.22\textwidth]{figs/resnet50_imagenet_accuracy.pdf}} \vspace{-4 mm} \caption{Model convergence comparisons between vanilla models and \textsc{Pufferfish}{} factorized models: (a) low-rank VGG-11 over the CIFAR-10 dataset; (b) ResNet-50 over the ImageNet dataset. For the low-rank networks, all layers except for the first convolution and the very last FC layer are factorized with a fixed rank ratio at $0.25$. } \label{fig:lr-vgg} \vspace{-4 mm} \end{figure} \begin{figure}[ht] \centering \subfigure[Hybrid network]{\includegraphics[width=0.215\textwidth]{figs/effect_of_hybrid_net.pdf}\label{fig:effect-hybrid}} \subfigure[Vanilla warm-up training]{\includegraphics[width=0.255\textwidth]{figs/hybrid_resnet50_warmup_e_vs_acc.pdf}\label{fig:effect-fr-warmup}} \vspace{-4 mm} \caption{The effect of the test accuracy loss mitigation methods in \textsc{Pufferfish}{}: (a) \textbf{Hybrid network}: The final test accuracy of the hybrid VGG-19 architectures with various initial low-rank layer indices ($K$) over the CIFAR-10 dataset. 
(b) \textbf{Vanilla warm-up training}: The final top-1 accuracy of the hybrid-ResNet-50 architecture trained on the ImageNet dataset under the different number of vanilla warm-up epochs: $\{2, 5, 10, 15, 20\}$. } \label{fig:lr-vgg} \vspace{-6 mm} \end{figure} In this section, we showcase that training low-rank models from scratch leads to an accuracy loss. Interestingly, this loss can be mitigated by balancing the degree of factorization across layers, and by using a short full-rank warm-up training phase used to initialize the factorized model. We conduct an experimental study on a version of \textsc{Pufferfish}{} where every layer of the network is factorized except for the first convolution layer and the last FC layer. On a relatively small task, {\it e.g.}, VGG-11 on CIFAR-10, we observe that \textsc{Pufferfish}{} only leads to $\sim 0.4\%$ accuracy loss (as shown in Figure~\ref{fig:lr-vgg}) compared to the vanilla VGG-19-BN. However, for ResNet-50 on the ImageNet dataset, a $\sim 3\%$ top-1 accuracy loss of \textsc{Pufferfish}{} is observed. To mitigate the accuracy loss of the factorized networks over the large-scale ML tasks, we propose two methods, {\it i.e.}, (i) \textit{hybrid network architecture} and (ii) \textit{vanilla warm-up training}. We then discuss each method separately. \vspace{-2 mm} \paragraph{Hybrid network architecture.} In \textsc{Pufferfish}{}, the low-rank factorization aims at approximating the original network weights, {\it i.e.}, $W_l \approx U_l V^\top_l$ for layer $l$, which inevitably introduces approximation error. Since the approximation error in the early layers can be accumulated and propagated to the later layers, a natural strategy to mitigate the model accuracy loss is to only factorize the later layers. Moreover, for most of CNNs, the number of parameters in later layers dominates the entire network size. Thus, factorizing the later layers does not sacrifice the degree of model compression we can achieve. Specifically, for an $L$ layer network $\{W_1, W_2, \cdots, W_L \}$, factorizing every layer leads to $\{U_1, V_1^\top, U_2, V^\top_2, \cdots, U_L, V^\top_L \}$. In the hybrid network architecture, the first $K-1$ layers are not factorized, {\it i.e.}, $\{W_1, W_2, \cdots, W_{K-1}, U_{K}, V^\top_{K}, \cdots, U_L, V^\top_L \}$ where we define $K$ as the index of the first low-rank layer in a hybrid architecture. We treat $K$ as a hyper-parameter, which balances the model compression ratio and the final model accuracy. In our experiments, we tune $K$ for all models. The effectiveness of the hybrid network architecture is shown in Figure~\ref{fig:effect-hybrid}, from which we observe that the hybrid VGG-19 with $K=9$ mitigates $\sim0.6\%$ test accuracy loss. 
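The construction can be sketched as follows in PyTorch (our own illustration; the helper names and the default rank ratio are ours, the latter matching the fixed rank ratio of $0.25$ used in our convergence experiments):
\begin{verbatim}
import torch.nn as nn

# Sketch of the hybrid architecture: conv layers with index < K stay full
# rank; layers with index >= K are replaced by a factorized pair, i.e., a
# thin k x k convolution with r filters (U) followed by a 1 x 1 conv (V^T).
def factorized_conv(c_in, c_out, k, rank_ratio=0.25):
    r = max(1, int(rank_ratio * min(c_in * k * k, c_out)))
    return nn.Sequential(
        nn.Conv2d(c_in, r, kernel_size=k, padding=k // 2, bias=False),  # U
        nn.Conv2d(r, c_out, kernel_size=1, bias=False),                 # V^T
    )

def hybrid_conv_stack(channels, K, k=3):
    # channels: e.g. [3, 64, 128, 256, 512]; layer indices start at 1.
    layers = []
    for idx, (c_in, c_out) in enumerate(zip(channels[:-1], channels[1:]), 1):
        conv = (nn.Conv2d(c_in, c_out, kernel_size=k, padding=k // 2,
                          bias=False) if idx < K
                else factorized_conv(c_in, c_out, k))
        layers += [conv, nn.BatchNorm2d(c_out), nn.ReLU(inplace=True)]
    return nn.Sequential(*layers)
\end{verbatim}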
\vspace{-2 mm} \begin{algorithm}[t] \small \SetKwInOut{Input}{Input} \SetKwInOut{Output}{Output} \Input{Randomly initialized weights of the vanilla $L$-layer architecture $\{W_{1}, W_{2}, \ldots ,W_{L}\}$, and the associated weights of the hybrid $L$-layer architecture $\{W_{1}, W_{2}, \ldots, W_{K-1}, U_{K}, V^\top_{K}, \ldots ,U_{L}, V^\top_{L}\}$, the total number of training epochs $E$, the number of vanilla warm-up training epochs $E_{wu}$, and the learning rate schedule $\{\eta_t\}^E_{t=1}$} \Output{Trained hybrid $L$-layer architecture weights $\{\hat W_{1}, \hat W_{2}, \ldots, \hat W_{K-1}, \hat U_{K}, \hat V^\top_{K}, \ldots , \hat U_{L}, \hat V^\top_{L}\}$} \For{$t \in \{1,\ldots, E_{wu}\}$}{ Train $\{W_{1}, W_{2}, \ldots ,W_{L}\}$ with learning rate schedule $\{\eta_t\}_{t=1}^{E_{wu}}$ \tcp*{vanilla warm-up training} } \For{$l \in \{1,\ldots, L\}$}{ \uIf{$l < K$}{ copy the partially trained weight $W_l$ to the hybrid network; } \Else{ $\tilde U_l \Sigma_l \tilde V_l^\top = \text{SVD}(W_l)$ \tcp*{Decomposing the vanilla warm-up trained weights} $U_l = \tilde U_l \Sigma_l^{\frac{1}{2}}, V_l^\top = \Sigma_l^{\frac{1}{2}}\tilde V_l^\top$ } } \For{$t \in \{E_{wu}+1,\ldots, E\}$}{ Train the hybrid network weights, {\it i.e.}, $\{W_{1}, W_{2}, \ldots, W_{K-1}, U_{K}, V^\top_{K}, \ldots , U_{L}, V^\top_{L}\}$ with learning rate schedule $\{\eta_t\}_{t=E_{wu}+1}^{E}$ \tcp*{consecutive low-rank training} } \caption{\textsc{Pufferfish}{} Training Procedure} \label{alg:pufferfish} \end{algorithm} \paragraph{Vanilla warm-up training.} It has been widely observed that epochs early in training are critical for the final model accuracy~\cite{jastrzebski2020break,keskar2016large,achille2018critical,leclerc2020two,agarwal2020accordion}. For instance, sparsifying gradients in early training phases can hurt the final model accuracy~\citep{lin2017deep}. Similarly, factorizing the vanilla model weights at the very beginning of the training procedure can also lead to accuracy loss, which may be impossible to mitigate in later training epochs. It has also been shown that good initialization strategies play a significant role in the final model accuracy~\citep{zhou2020go}. In this work, to mitigate the accuracy loss, we propose to use the partially trained vanilla, full-rank model weights to initialize the low-rank factorized network. We refer to this as ``\textit{vanilla warm-up training}". We train the vanilla model for a few epochs ($E_{wu}$) first. Then, we conduct truncated matrix factorization (via truncated SVD) over the partially trained model weights to initialize the low-rank factors. For instance, given a partially trained FC layer $W^{(l)}$, we apply SVD to it to obtain $\tilde U\Sigma \tilde V^\top$. The $U$ and $V^\top$ weights introduced in the previous sections are then given by $U = \tilde U \Sigma^{\frac{1}{2}}, V^\top = \Sigma^{\frac{1}{2}}\tilde V^\top$. For a convolution layer $W \in \mathbb{R}^{c_{\text{in}}\times c_{\text{out}} \times k\times k}$, we conduct SVD over the unrolled 2D matrix $W_{\text{unrolled}} \in \mathbb{R}^{c_\text{in}k^2 \times c_\text{out}}$, which leads to $U\in \mathbb{R}^{c_{\text{in}}k^2 \times r}, V^\top \in \mathbb{R}^{r \times c_{\text{out}}}$; reshaping $U, V^\top$ back to 4D yields the desired initial weights for the low-rank layer, {\it i.e.}, $U\in \mathbb{R}^{c_{\text{in}}\times r \times k \times k}, V^\top \in \mathbb{R}^{r \times c_{\text{out}} \times 1 \times 1}$.
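The SVD-based initialization described above can be sketched for a single convolution layer as follows (again our own illustration; the function and argument names are ours, and PyTorch stores convolution weights as $(c_{\text{out}}, c_{\text{in}}, k, k)$ whereas we write $W$ as $(c_{\text{in}}, c_{\text{out}}, k, k)$, hence the permutations):
\begin{verbatim}
import torch

# Warm-start one factorized conv layer from its partially trained full-rank
# counterpart: unroll, truncated SVD, split the singular values between the
# two factors, and load them into the thin k x k conv (U) and 1x1 conv (V^T).
@torch.no_grad()
def warm_start_factorized_conv(full_conv, u_conv, vt_conv, rank):
    c_out, c_in, k, _ = full_conv.weight.shape
    # (c_in * k^2, c_out): column j is the vectorized j-th filter.
    W_unrolled = full_conv.weight.permute(1, 2, 3, 0).reshape(c_in * k * k,
                                                              c_out)
    U, S, Vh = torch.linalg.svd(W_unrolled, full_matrices=False)
    sqrt_s = S[:rank].sqrt()
    U_r = U[:, :rank] * sqrt_s             # (c_in * k^2, r)
    Vt_r = sqrt_s[:, None] * Vh[:rank, :]  # (r, c_out)
    # Thin conv U has weight shape (r, c_in, k, k) in PyTorch layout.
    u_conv.weight.copy_(U_r.reshape(c_in, k, k, rank).permute(3, 0, 1, 2))
    # 1 x 1 conv V^T has weight shape (c_out, r, 1, 1).
    vt_conv.weight.copy_(Vt_r.t().reshape(c_out, rank, 1, 1))
\end{verbatim}
Here \texttt{u\_conv} and \texttt{vt\_conv} are assumed to be the two layers of the factorized pair sketched earlier, {\it i.e.}, a $k\times k$ convolution with \texttt{rank} output channels followed by a $1\times 1$ convolution.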
For the Batch Normalization layers (BNs) \cite{ioffe2015batch} we simply extract the weight vectors and the collected running statistics, {\it e.g.}, the \textit{running mean and variance}, for initializing the low-rank training. We also directly take the bias vector of the last FC layer. \textsc{Pufferfish}{} then finishes the remaining training epochs over the factorized hybrid network initialized with vanilla warm-up training. Figure~\ref{fig:effect-fr-warmup} provides experimental justification of the effectiveness of vanilla warm-up training, where we study a hybrid ResNet-50 trained on the ImageNet dataset. The results indicate that vanilla warm-up training helps to improve the accuracy of the factorized model. Moreover, a carefully tuned warm-up period $E_{wu}$ also plays an important role in the final model accuracy. Though SVD is computationally heavy, \textsc{Pufferfish}{} only requires conducting the SVD \textbf{once} throughout the entire training. We benchmark the SVD cost for all models in our experiments, and the results indicate that the SVD runtime is comparatively small, {\it e.g.}, on average it only costs $2.29$ seconds for ResNet-50. A complete study of the SVD factorization overheads can be found in the Appendix. \vspace{-0.5cm} \paragraph{Last FC layer.} The very last FC layer in a neural network can be viewed as a linear classifier over the features extracted by the previous layers. In general, its rank is equal to the number of classes in the predictive task at hand. Factorizing it below the number of classes will increase linear dependencies and may further increase the approximation error. Thus, \textsc{Pufferfish}{} does not factorize it. Putting all the techniques we discussed in this section together, the training procedure of \textsc{Pufferfish}{} is summarized in Algorithm~\ref{alg:pufferfish}. \vspace{-2 mm} \section{Introduction}\label{sec:intro} Distributed model training plays a key role in the success of modern machine learning systems. Data parallel training, a popular variant of distributed training, has demonstrated massive speedups in real-world machine learning applications and systems \citep{li2014scaling,dean2012large,chen2016revisiting}. Several machine learning frameworks such as TensorFlow \citep{abadi2016tensorflow} and PyTorch \cite{paszke2019pytorch} come with distributed implementations of popular training algorithms, such as mini-batch SGD. However, the empirical speed-ups offered by distributed training often fall short of a best-case linear scaling. It is now widely acknowledged that communication overheads are one of the key sources of this saturation phenomenon~\cite{dean2012large, seide20141, strom2015scalable, qi17paleo,grubic2018synchronous}. Communication bottlenecks are attributed to frequent gradient updates transmitted across compute nodes. As the number of parameters in state-of-the-art (SOTA) deep models scales to hundreds of billions, the size of the communicated gradients scales proportionally~\cite{he2016deep, huang2017densely,devlin2018bert,devlin2019bert,brown2020language}. To reduce the cost of communicating model updates, recent studies propose compressed versions of the computed gradients. A large number of recent studies have revisited the idea of low-precision training as a means to reduce communication~\cite{ seide20141, de2015taming, alistarh2017qsgd, zhou2016dorefa, wen2017terngrad, zhang2017zipml, de2017understanding, de2018high, bernstein2018signsgd, konevcny2016federated}.
Other approaches for low-communication training focus on sparsification of gradients, either by thresholding small entries or by random sampling~\cite{ strom2015scalable, mania2015perturbed, suresh2016distributed, leblond2016asaga, aji2017sparse, konevcny2016randomized, lin2017deep, chen2017adacomp, renggli2018sparcml, tsuzuku2018variance, wang2018atomo, vogels2019powersgd}. However, the proposed communication-efficient training techniques via gradient compression usually suffer from some of the following drawbacks: (i) The computation cost for gradient compression ({\it e.g.}, sparsification or quantization) can be high. For instance, \textsc{Atomo} \citep{wang2018atomo} requires to compute gradient factorizations using SVD for every single batch, which can be computationally expensive for large-scale models. (ii) Existing gradient compression methods either do not fully utilize the full gradients ~\citep{alistarh2017qsgd,wen2017terngrad,bernstein2018signsgd,wang2018atomo} or require additional memory. For example, the ``\textit{error feedback}" scheme~\citep{seide20141,stich2018sparsified,karimireddy2019error} utilizes stale gradients aggregated in memory for future iterations, but requires storing additional information proportional to the model size. (iii) Significant implementation efforts are required to incorporate an existing gradient compression technique within high-efficiency distributed training APIs in current deep learning frameworks {\it e.g.}, \texttt{DistributedDataParallel} (DDP) in PyTorch. Due to the above shortcomings of current communication-efficient techniques, it is of interest to explore the feasibility of incorporating elements of the gradient compression step into the model architecture itself. If this is feasible, then communication efficiency can be attained at no extra cost. In this work, we take a first step towards bypassing the gradient compression step via training low-rank, pre-factorized deep network, starting from full-rank counterparts. We observe that training low-rank models from scratch incurs non-trivial accuracy loss. To mitigate that loss, instead of starting from a low-rank network, we initialize at a full-rank counterpart. We train for a small fraction, {\it e.g.}, 10\% of total number epochs, with the full-rank network, and then convert to a low-rank counterpart. To obtain such a low-rank model we apply SVD on each of the layers. After the SVD step, we use the remaining 90\% of the training epochs to fine-tune this low-rank model. The proposed method bares similarities to the ``\textit{Lottery Ticket Hypothesis}" (LTH) \cite{frankle2018lottery}, in that we find ``winning tickets" within full-rank/dense models, but without the additional burden of ``winning the lottery''. Winning tickets seem to be in abundance once we seek models that are sparse in their spectral domain. \vspace{-2 mm} \paragraph{Our contributions.} In this work, we propose \textsc{Pufferfish}{}, a computation and communication efficient distributed training framework. \textsc{Pufferfish}{} takes any deep neural network architecture and finds a pre-factorized low-rank representation. \textsc{Pufferfish}{} then trains the pre-factorized low-rank network to achieve both computation and communication efficiency, instead of explicitly compressing gradients. \textsc{Pufferfish}{} supports several types of architectures including fully connected (FC), convolutional neural nets (CNNs), LSTMs, and Transformer networks~\citep{vaswani2017attention}. 
As \textsc{Pufferfish}{} manipulates the model architectures instead of their gradients, it is directly compatible with all SOTA distributed training frameworks, {\it e.g.}, PyTorch DDP and BytePS~\citep{jiang2020unified}. \begin{figure}[htp] \centering \includegraphics[width=0.35\textwidth]{figs/low_rank_DNN.pdf} \vspace{-2 mm} \caption{ We propose to replace fully connected layers represented by a matrix $W$, by a set of trainable factors $UV^T$, and represent each of the $N$ convolutional filters of each conv layer as a linear combination of $\frac{N}{R}$ filters. This latter operation can be achieved by using fewer filters per layer, and then applying a trainable up-sampling embedding to the output channels. } \label{fig:lowrankDNN} \vspace{-4 mm} \end{figure} We further observe that direct training of those pre-factorized low-rank deep networks leads to non-trivial accuracy loss, especially for large-scale machine learning tasks, {\it e.g.}, ImageNet \cite{deng2009imagenet}. We develop two techniques for mitigating this accuracy loss: (i) a \textit{ hybrid architecture} and (ii) \textit{vanilla warm-up training}. The effectiveness of these two techniques is justified via extensive experiments. We provide experimental results over real distributed systems and large-scale vision and language processing tasks. We compare \textsc{Pufferfish}{} against a wide range of SOTA baselines: (i) communication-efficient distributed training methods {\it e.g.}, \textsc{PowerSGD}~\cite{vogels2019powersgd} and \textsc{Signum}~\cite{bernstein2018signsgd}; (ii) structured pruning methods, {\it e.g.}, the \textit{Early Bird Ticket} (EB Train)~\cite{you2019drawing}; and model sparsification method, {\it e.g.}, the iterative pruning algorithm in LTH~\citep{frankle2018lottery}. Our experimental results indicate that \textsc{Pufferfish}{} achieves better model training efficiency compared to \textsc{PowerSGD}, \textsc{signum}, and LTH models. \textsc{Pufferfish}{} also leads to smaller and more accurate model compared to EB Train. We further show that the performance of \textsc{Pufferfish}{} remains stable under \textit{mixed-precision training}. \vspace{-2 mm} \paragraph{Related work.} \textsc{Pufferfish}{} is closely related to the work on communication-efficient distributed training methods. To reduce the communication cost in distributed training, the related literature has developed several methods for gradient compression. Some of the methods use quantization over the gradient elements ~\citep{seide20141,alistarh2017qsgd,wen2017terngrad,lin2017deep,luo2017thinet,bernstein2018signsgd,tang2019doublesqueeze,wu2018error}. Other methods study sparsifying the gradients in the element-wise or spectral domains~\citep{lin2017deep,wang2018atomo,stich2018sparsified,vogels2019powersgd}. It has also been widely observed that adopting the ``\textit{error feedback}" scheme is generally helpful for gradient compression methods to achieve better final model accuracy~\citep{stich2018sparsified,wu2018error,karimireddy2019error,vogels2019powersgd}. Compared to the previously proposed gradient compression methods, \textsc{Pufferfish}{} merges the gradient compression into model training, thus achieves communication-efficiency at no extra cost. \textsc{Pufferfish}{} is also closely related to model compression. Partially initialized by \textit{deep compression}~\citep{han2015deep}, a lot of research proposes to remove the redundant weights in the trained neural networks. 
The trained neural networks can be compressed via model weight pruning~\citep{li2016pruning,wen2016learning,hu2016network,zhu2017prune,he2017channel,yang2017designing,liu2018rethinking,yu2018nisp,yu2018slimmable}, quantization ~\citep{rastegari2016xnor,zhu2016trained,hubara2016binarized,wu2016quantized,hubara2017quantized,zhou2017incremental}, and low-rank factorization~\citep{xue2013restructuring,sainath2013low,jaderberg2014speeding,wiesler2014mean,konevcny2016federated}. Different from the model compression methods, \textsc{Pufferfish}{} proposes to train the factorized networks, which achieves better overall training time, rather than compressing the model after fully training it. Finally, our work is also related to efficient network architecture design, where the network layers are re-designed to be smaller, more compact, and more efficient ~\citep{iandola2016squeezenet,chen2016eyeriss,zhang2018shufflenet,tan2019efficientnet,howard2017mobilenets,chollet2017xception,lan2019albert,touvron2020fixing,waleffe2020principal}. The most related low-rank efficient training framework to \textsc{Pufferfish}{} is the one proposed in~\citep{ioannou2015training}, where a pre-factorized network is trained from scratch. However, we demonstrate that training the factorized network from scratch leads to non-trivial accuracy loss. In \textsc{Pufferfish}{}, we propose to warm-up the low-rank model via factorizing a partially trained full-rank model. Our extensive experiments indicate that \textsc{Pufferfish}{} achieves significantly higher accuracy compared to training the factorized network from scratch. Moreover, \citep{ioannou2015training} only studies low-rank factorizations for convolutional layers, whereas \textsc{Pufferfish}{} supports FC, CNN, LSTM, and Transformer layers. \vspace{-2 mm}
{ "attr-fineweb-edu": 1.774414, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUdYfxK2li-DeX0CmI
\section{General notes} As explained in the talk \cite{vvv-hsqcd}, our approach does not assume any ``nuclear democracy''. In contrast, it discriminates between stable particles and resonances. Only stable particles survive as asymptotic states, and it is the stable sector where the $S$-matrix is unitary (see, e.g. \cite{Veltman}). If we restrict ourselves by a consideration of the strong non-strange sector, then the only stable particles are pions and nucleons. Hence, to illustrate the application of our technique by the relatively simple process, we can choose among $\pi\pi$, $NN$, and $\pi N$-elastic scattering (along with the cross-symmetric processes). Our choice of ($\pi N$) is mainly dictated by the absence of extra phenomenological symmetries appearing in the former two reactions and, at the same time, by the relatively rich set of experimental data. When working in the framework of effective theory one has to take account of all possible vertices and resonances which can contribute to the amplitude of the reaction under consideration. Since the perturbation theory which we rely upon is of Dyson's type, we need to construct the perturbation series order by order, starting from the tree level. However, at this very first step we immediately meet the difficulty because to obtain the tree level amplitude we need to sum an infinite number of contact vertices and exchange graphs (Fig.~\ref{fig:1}). \begin{figure}[ht] \begin{center} \begin{picture}(350,20)(25,-10) \put(0,0){ \begin{picture}(100,20)(0,0) \put(0,-5){\shortstack{$\displaystyle\sum_{ \rm vertices \atop \mbox{} }^{\infty}$}} \put(40,0){\circle*{3}} \put(40,0){\line(-1,1){10}} \put(40,0){\line(-1,-1){10}} \put(30,-10){\vector(1,1){7}} \put(40,0){\line(1,-1){10}} \put(40,0){\vector(1,-1){7}} \put(40,0){\line(1,1){10}} \put(60,-10){\shortstack{\boldmath$,$}} \end{picture} } \put(100,0){ \begin{picture}(100,20)(0,0) \put(0,-5){\shortstack{$\displaystyle\sum_{ \rm vertices, \atop \rm resonances }^{\infty}$}} \put(40,0){\circle*{3}} \put(40,0){\line(-1,1){10}} \put(40,0){\line(-1,-1){10}} \put(30,-10){\vector(1,1){7}} \multiput(40,0)(1,0){20}{\circle*{2}} \put(40,0.5){\vector(1,0){13}} \put(40,-0.5){\vector(1,0){13}} \put(45,-13){\shortstack{$R_s$}} \put(60,0){\circle*{3}} \put(60,0){\line(1,1){10}} \put(60,0){\line(1,-1){10}} \put(60,0){\vector(1,-1){7}} \put(80,-10){\shortstack{\boldmath$,$}} \end{picture} } \put(200,0){ \begin{picture}(100,20)(0,0) \put(0,-5){\shortstack{$\displaystyle\sum_{ \rm vertices, \atop \rm resonances}^{\infty}$}} \put(50,-10){\circle*{3}} \put(50,-10){\line(-1,-1){10}} \put(40,-20){\vector(1,1){7}} \put(50,-10){\line(1,-1){10}} \put(50,-10){\vector(1,-1){7}} \multiput(50,-10)(0,1){20}{\circle*{2}} \put(53,-5){\shortstack{$R_t$}} \put(50,10){\circle*{3}} \put(50,10){\line(-1,1){10}} \put(50,10){\line(1,1){10}} \put(75,-10){\shortstack{\boldmath$,$}} \end{picture} } \put(300,0){ \begin{picture}(100,20)(0,0) \put(0,-5){\shortstack{$\displaystyle\sum_{ \rm vertices, \atop \rm resonances}^{\infty}$}} \put(40,0){\circle*{3}} \multiput(40,0)(1,0){20}{\circle*{2}} \put(40,0.5){\vector(1,0){13}} \put(40,-0.5){\vector(1,0){13}} \put(45,-13){\shortstack{$R_u$}} \put(60,0){\line(1,-1){10}} \put(60,0){\vector(1,-1){7}} \put(60,0){\circle*{3}} \put(30,-10){\line(1,1){15}} \put(30,-10){\vector(1,1){7}} \put(55,15){\line(1,1){10}} \put(60,0){\line(-1,1){25}} \put(55,5){\oval(20,20)[tl]} \put(80,-10){\shortstack{\boldmath$.$}} \end{picture} } \end{picture} \end{center} \caption{Tree graphs: $R_s$, $R_t$ and $R_u$ stand 
for all admissible resonances in the $s$-, $t$-, and $u$-channels, respectively; summation over all possible kinds of vertices is implied, though the summation order is still unspecified. \label{fig:1}} \end{figure} \noindent The resulting sum is nothing but functional series, thus the problem of summation order is essential one. As it is demonstrated in \cite{vvv-hsqcd,AVVV1,POMI,AVVV2}, our approach gives a way to overcome the obstacle. Simply speaking, the recipe we suggest reads: \begin{enumerate} \item Classifying all possible graphs and switching to the {\em minimal parametrization} \cite{AVVV2} single out the set of {\em resultant} parameters of the given level (here --- tree level). The latter are assigned the {\em physical} values with the help of relevant renormalization prescriptions (RP's). \item Being guided by the {\em uniformity} and {\em summability} \cite{vvv-hsqcd} principles use the Cauchy formula for given order (tree level) amplitude in certain domain of the space of kinematical variables. \item Equating different expressions for the amplitude (the latter results from the Cauchy formula application) in the domains of their mutual validity, obtain the system of {\em bootstrap} equations. The latter allow one to specify the exact expressions of the amplitude under consideration and give restrictions for the values of {\em physical} parameters of the theory. \end{enumerate} In this talk we shall take a closer look at the first and the last steps. \section{Minimal (resultant) vertices and renormalization conditions} As it is seen from Fig~\ref{fig:1}, there are Hamiltonian% \footnote{In \cite{AVVV2} it is explained why it is preferably to use the effective Hamiltonian, rather than Lagrangian when constructing a theory with unlimited number of field derivatives.} three- and four-leg couplings and masses which parametrize the tree level amplitude in our case. Minimal parametrization is a first step toward the constructing of so-called {\em essential} parameters \cite{WeinMONO,AVVV2} --- the {\em independent} parameters needed to describe the (on-shell) S-matrix. In case of general process amplitude of arbitrary loop order the minimal couplings are the natural building blocks for the resultant parameters of which, in turn, the essential parameters can be constructed. However, in case of triple vertices at tree level, this structure gets simplified, and all the contributing three-leg minimal couplings appear also to be ``resultant''. The minimal vertices are, roughly speaking, the on-shell vertices. One just needs to take the {\em effective vertex} of a given order (at tree level this is a matrix element of the sum of all Hamiltonian vertices constructed of a given set of fields with all possible derivatives and matrix structures), put it on the mass shell, present the result in a Lorentz-covariant form and cross the wave functions out. The structure surviving after this is done, being considered as a function of independent components of {\em off-shell} momenta% \footnote{ Energy-momentum conservation is, of course, implied. For the precise definition of minimal vertex and the related classification see \cite{AVVV2} } is called the minimal vertex. The coefficients in the formal series for the corresponding formfactors are called the minimal couplings% \footnote{ They are, of course, functions of initial Hamiltonian couplings. 
However the latter functional dependence is not of interest anymore: we are not going to fix any of couplings in the initial Hamiltonian, rather, we will prefer to operate with minimal (resultant) parameters directly. }. One easily observes that the tree-level triple minimal couplings are constants, because on the mass shell any triple vertex does not depend upon external momenta. For example, all the minimal vertices with resonances of isospin $\frac{1}{2}$ and half-integer spin $l+\frac{1}{2}$ contributing to our process at tree level can be listed as the following ``Hamiltonian monomials''% \footnote{ Lacking space here, we do not list the remaining vertices with half-integer spin resonances and those with integer spin contributing in $t$-channel. }: \begin{description} \item{} $ g_{\widehat{R}} \overline{N}\bo{\sigma} \widehat{R}_{\mu_1\ldots\mu_l} \partial^{\mu_1}\!\!\!\!\ldots\partial^{\mu_l}\bo{\pi} + H.c.\; $ for the resonance parity $P = (-1)^{l+1}$, and \item{} $ ig_{R} \overline{N} \bo{\sigma} \gamma_5 R_{\mu_1\ldots\mu_l} \partial^{\mu_1}\!\!\!\!\ldots\partial^{\mu_l}\bo{\pi} + H.c.\; $ for the resonance parity $P = (-1)^l$, \end{description} where $\sigma_a$ stands for Pauli matrix, $\pi$, $N$, and $R$ denote pion, nucleon and resonance fields, respectively, while $g$'s are the minimal coupling constants which, of course, depend on the resonance spin and mass. The essence of the reduction theorem proved in \cite{AVVV2} is that any vertex that differs from the listed above by the number (or/and position) of derivatives, when added to the Feynman rules will only result in certain {\em rescaling} of $g$'s as long as one computes the $S$-matrix. In the same way we can specify all the 4-leg minimal couplings contributing at tree level, but in our case it appears to be unnecessary. The reason is not simple, so let us not discuss their structure at this stage and suppose that transition to the minimal parametrization has been done. The main thing one should keep in mind is that the $S$-matrix is completely specified when the values of all the minimal couplings are given. The way one assigns certain values to the $S$-matrix parameters in perturbation theory is the renormalization prescriptions (RP's). To obtain our tree level amplitude, we need to specify 3- and 4-leg couplings and masses. Forgetting for a while about 4-leg couplings we concern ourselves with the remaining parameters. As pointed out in \cite{AVVV2}, the resultant parameters are the natural candidates to impose the RP's under the condition that the renormalization point is taken on shell and {\em renormalized perturbation theory} is used. In this scheme the action is written in terms of {\em physical} parameters plus counter terms, the latter are tuned in a way that the values of those parameters remains unchanged after renormalization. So, we imply that the Feynman rules are written in the form of physical part plus counter terms at every loop order and it is the {\em real parts of physical masses} that appear in bare propagators. Simply speaking, we impose the following set of RP's: \[ \bo{\rm Re}\ V(p_1,p_2,p_3) = G_{phys} {\rm \; at \;} p_i^2 = M_{i_{phys}}^2, \] and \[ \bo{\rm Re}\ \Sigma(p) = 0 {\rm \; at \; } p^2 = M_{phys}^2, \] for every self-energy $\Sigma$ and every three-point vertex $V$. Now we are at tree level, thus there are no counter terms relevant, therefore the couplings $g$ are also physical (experimentally measurable). 
There is no phenomenological evidence that the mass spectrum and spin values of resonances are bounded from above. Therefore we need to reserve the possibility of working with an infinite set of resonances of arbitrarily high spin. In other words, there is still an infinite number of minimal couplings coming from three-leg vertices alone. One of the main points of our work is that these couplings are {\em not independent}: there are {\em self-consistency conditions} that restrict their values. We call these conditions the {\em bootstrap} equations. \section{Bootstrap and experimental data} Because of lack of space we do not discuss here the method of constructing the well-defined expressions for the amplitude at tree level or at any given order of perturbation theory. It is enough to say that the main tool allowing one to do this is the celebrated Cauchy integral formula, together with the {\em summability} and {\em asymptotic uniformity} conditions discussed in \cite{vvv-hsqcd}. The final expression turns out to be completely parametrized by the minimal couplings. Moreover, in the case of the tree-level $\pi N$ elastic scattering amplitude only triple resultant vertices enter this expression. The joint contribution of four-leg vertices turns out to be uniquely determined by masses and triple couplings% \footnote{ This statement is by no means trivial and requires separate consideration. The main reason for it is the known values of the Regge intercepts which, by the uniformity principle, define the asymptotic behavior of the tree level amplitude. This analysis will be published elsewhere. }. The bootstrap equations mirror the crossing symmetry of a given order amplitude within our perturbation scheme. They can be rewritten in the form of an infinite set of numerical equations for the amplitude parameters \cite{AVVV1,POMI}. What is essential to stress here is that the parameters entering those equations are all minimal, and hence, as explained in the previous section, they are physical or (at least, in principle) {\em measurable}. Using the renormalized perturbation theory with on-shell RP's at each loop level, one obtains a certain set of bootstrap equations which should be satisfied to ensure self-consistency (usually crossing symmetry). The form of these equations may vary from level to level, but {\em all} of them are equations for physical parameters, and the full set of RP's should be compatible with all of them. To put it another way, the set of renormalization prescriptions for couplings and masses must be a solution to the full set of the bootstrap constraints. We do not know what the solution of this latter set looks like. Even at tree level the equations are highly non-linear. However, if our perturbation scheme can describe nature, then the experimentally fitted values of coupling constants and masses must fulfil the system of bootstrap conditions. That is why we have performed various calculations to check the consistency of our approach with the experimental data. Namely, we checked the tree-level bootstrap equations for the $\pi \pi$ and $\pi K$ elastic scattering amplitudes (see \cite{AVVV1} and references therein), and recently analogous calculations were performed for the cases of $\pi N$ \cite{menu} and $K N$ elastic scattering (the latter case is discussed in the talk by K.~Semenov-Tian-Shansky \cite{kstsh-hsqcd}). No contradictions have been found so far, and in most of the cases examined the experimental data seem to support our approach nicely.
Apart from the question of formal compatibility with experiment, there is the question of efficiency. One can ask how many loops should be taken into account and how many parameters fixed to obtain an amplitude that fits the data well at least in some kinematical region. To check this point we performed a calculation of the low-energy coefficients% \footnote{ Taylor expansion coefficients around the crossing symmetry point. } for the $\pi N$ amplitude. These coefficients, measured and fitted in \cite{Nagels}, are reproduced in our approach with very good accuracy already at tree level% \footnote{ Of course, this is partly because this region is relatively far from the branch cut points. If the latter appear close to the investigated region, one should necessarily include loops. }, and to gain reasonable precision it is enough to specify the parameters of just a few of the lightest resonances. The results of this analysis were summarized in \cite{menu}; the details will be published elsewhere. \section*{Acknowledgments} I am grateful to V.~Cheianov, H.~Nielsen, S.~Paston, J.~Schechter, K.~Semenov-Tian-Shansky, A.~Vasiliev, V.~Vereshagin and M.~Vyazovsky for stimulating discussions.
{ "attr-fineweb-edu": 1.673828, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUdZDxK19JmejM9mWw
\section{Introduction} Information leakage metrics seek to quantify an adversary's ability of inferring information about one quantity from another. Mutual information (MI) is a classic measure for quantifying information and often used to measure information secrecy \cite{SecrecySystem_Shannon49} or leakage in data publishing setting \cite{sankar_utility-privacy_2013,calmon2014allerton}. More recently, Issa \textit{et al.} introduced a measure, called \textit{maximal leakage} (MaxL), for a guessing adversary that quantifies the maximal multiplicative gain of an adversary, with access to a disclosed dataset, to guess \emph{any} (\emph{possible random}) \emph{function} of the original dataset \cite{IssaKW16}. Information leakage measures can be viewed through the lens of adversarial inference capabilities, and therefore, quantified via a loss function that the adversary seeks to minimize. The choice of a loss function provides a concrete measure of the gain in adversarial inference capability. For example, the definition of MaxL can be interpreted in terms of an adversary seeking to minimize the 0-1 loss function, which induces the adversary towards a hard decision, i.e., a maximum likelihood estimator. On the other hand, when MI is used as a leakage measure, the underlying loss function is the \emph{logarithmic loss} (log-loss) function \cite{Merhav1998,Courtade2011,Calmon_privacy_2012}, which models a (soft decision) belief-refining adversary. These two models capture two extremal actions of adversaries. Can these measures be viewed through the same framework? In this paper, we introduce a tunable measure, called \textit{maximal $\alpha$-leakage}, for information leakages, which encompasses MI (for $\alpha=1$) and MaxL (for $\alpha=\infty$) and allows continuous interpolation between the two extremes. The parameter $\alpha$ can be viewed as a tunable parameter that determines how much weight the adversary gives to its posterior belief. In this paper, we define two tunable measures for information leakages in Section \ref{Sec:Information Leakage Measures}: $\alpha$-leakage (Definition \ref{Def:alphaLeakge}) and maximal $\alpha$-leakage (Definition \ref{Def:GeneralLeakge}). In Section \ref{Sec:Information Leakage Measures}, we prove that the $\alpha$-leakage can be expressed as Arimoto mutual information (A-MI) (Theorem \ref{Thm:DefEquialentExpression_alphaleakage}), and the maximal $\alpha$-leakage is equivalent to the supremum of A-MI and Sibson mutual information (S-MI) (Theorem \ref{Thm:DefEquialentExpression}) over all distributions of the original dataset. In Section \ref{Sec:Properties}, we prove several important properties of the maximal $\alpha$-leakage. \section{Preliminaries}\label{Sec:Preliminaries} We begin by reviewing R{\'e}nyi entropy and divergence \cite{measures_renyi1961}. \begin{definition} Given a discrete distribution $P_X$ over a finite alphabet $\mathcal X$, the R{\'e}nyi entropy of order $\alpha\in (0,1)\cup(1,\infty)$ is defined as \begin{align} \label{eq:renyi_entropy} H_{\alpha}(P_X)= \frac{\alpha}{1-\alpha}\log\|P_X\|_{\alpha}. \end{align} Let $Q_X$ be a discrete distribution over $\mathcal X$. The R{\'e}nyi divergence (between $P_X$ and $Q_X$) of order $\alpha\in (0,1)\cup(1,\infty)$ is defined as \begin{align} \label{eq:renyi_divergence} D_{\alpha}(P_X\|Q_X)=\frac{1}{\alpha-1} \log\left(\sum\limits_{x}\frac{P_X(x)^{\alpha}}{Q_X(x)^{\alpha-1}}\right). \end{align} Both of the two quantities are defined by their continuous extension for $\alpha=1$ or $\infty$. 
\end{definition} The $\alpha$-leakage and max $\alpha$-leakage metrics can be expressed in terms of Sibson mutual information (S-MI) \cite{alphaMI_Sibson1969} and Arimoto mutual information (A-MI) \cite{AlphaMI_Arimoto1975}. These quantities generalize the usual notion of MI. We review these definitions next. \begin{definition} Let discrete random variables $(X,Y)\sim P_{XY}$ with $P_X$ and $P_Y$ as the marginal distributions, respectively, and $Q_Y$ be an arbitrary marginal distribution of $Y$. The Sibson mutual information (S-MI) of order $\alpha\in(0,1)\cup(1,\infty)$ is defined as \begin{align} \label{eq:Def_SibsionMI} I_\alpha^{\text{S}}(X;Y)&\triangleq \inf_{Q_Y}\,D_\alpha(P_{XY}\|P_X\times Q_Y),\\ \label{eq:Sibson_MI} &= \frac{\alpha}{\alpha-1}\log \sum\limits_{y}\left(\sum\limits_{x}P_X(x)P_{Y|X}(y|x)^{\alpha}\right)^{\frac{1}{\alpha}} \end{align} The Arimoto mutual information (A-MI) of order $\alpha\in(0,1)\cup(1,\infty)$ is defined as \begin{align} \label{eq:Def_ArimotoMI} I_\alpha^{\text{A}}(X;Y)&\triangleq H_{\alpha}(X)-H_{\alpha}(X|Y)\\ \label{eq:Arimoto_MI} &=\frac{\alpha}{\alpha-1}\log\sum\limits_{y }\mathsmaller{\left(\frac{\sum\limits_{x}P_X(x)^{\alpha}P_{Y|X}(y|x)^{\alpha}}{\sum\limits_{x }P_X(x)^{\alpha}}\right)^{\frac{1}{\alpha}}}, \end{align} where $H_{\alpha}(X|Y)$ is Arimoto conditional entropy of $X$ given $Y$ defined as \begin{align} \label{eq:Def_ArimotoConditionalEntropy} H_{\alpha}(X|Y)= \frac{\alpha}{1-\alpha}\log\sum\limits_{y}\mathsmaller{\left(\sum\limits_{x}\mathsmaller{P_X(x)^{\alpha}P_{Y|X}(y|x)^{\alpha}}\right)^{\frac{1}{\alpha}}}. \end{align} All of these quantities are defined by their continuous extension for $\alpha=1$ or $\infty$. \end{definition} \section{Information Leakage Measures} \label{Sec:Information Leakage Measures} In this section, we formally define the tunable leakage measures: $\alpha$-leakage and maximal $\alpha$-leakage. Let $X$, $Y$ and $U$ be three discrete random variables with finite supports $\mathcal X$, $\mathcal Y$ and $\mathcal U$, respectively. Let $\hat{X}$ be an estimator of $X$ and $P_{\hat{X}|Y}$ indicate a strategy for estimating $X$ given $Y$. We denote the probability of correctly estimating $X=x$ given $Y=y$ as \begin{align} \label{eq:Notation_ProbCorrectEst} P_c(P_{\hat{X}|Y},x,y)=P_{\hat{X}|Y}(x|y)=\mathbb{P}(\hat{X}=x|x,y). \end{align} Let $X$ and $Y$ represent the original data and disclosed data, respectively, and let $U$ represent an arbitrary (potentially random) function of $X$ that the adversary (a curious or malicious user of the disclosed data $Y$) is interested in learning. In \cite{MaximalLeakage_Issa2016}, Issa \textit{et al.} introduced MaxL to qualify the maximal gain in an adversary's ability of guessing $U$ by knowing $Y$. We review the definition below. \begin{definition}[{\cite[Def. 1]{MaximalLeakage_Issa2016}}]\label{Def:MaximalLeakage} Given a joint distribution $P_{XY}$, the \textit{maximal leakage} from $X$ to $Y$ is \begin{equation}\label{ml_op_def} \mathcal L_{\text{MaxL}}(X\to Y)\triangleq\sup_{U- X- Y} \log \frac{\max\limits_{u} \mathbb{E}\left[\mathbb{P}(\hat{U}=u|Y)\right]}{\max\limits_{u} \mathbb{P}(\tilde{U}=u)}. 
\end{equation} where both estimators $\hat{U}$ and $\tilde{U}$ take values from the same arbitrary finite support as $U$ \end{definition} \begin{remark}\label{Remark:MaxL} Note that from \eqref{eq:Notation_ProbCorrectEst}, the numerator of the logarithmic term in \eqref{ml_op_def} can be explicitly written as \begin{align} \max\limits_{u}\mathbb{E}\left[\mathbb{P}(\hat{U}=u|Y)\right]=\max\limits_{u}\sum\limits_{y}P_{Y}(y)P_{\hat{U}|Y}(u|y). \end{align} In Definition \ref{Def:MaximalLeakage}, $U$ represents any (possibly random) function of $X$. The numerator represents the maximal probability of correctly guessing $U$ based on $Y$, while the denominator represents the maximal probability of correctly guessing $U$ \emph{without} knowing $Y$. Thus, MaxL quantifies the maximal gain (in bits) in guessing any possible function of $X$ when an adversary has access to $Y$. \end{remark} We now present $\alpha$-leakage and maximal $\alpha$-leakage (under the assumptions of discrete random variables and finite supports). The $\alpha$-leakage measures \textit{various} aspects of the leakage (ranging from the probability of correctly guessing to the posteriori distribution) about data $X$ from the disclosed $Y$. \begin{definition}[{$\alpha$-Leakage}]\label{Def:alphaLeakge} Given a joint distribution $P_{XY}$ and an estimator $\hat{X}$ with the same support as $X$, the $\alpha$-leakage from $X$ to $Y$ is defined as \begin{align} \label{eq:alphaLeak_definition} \mathsmaller{\mathcal L_{\alpha}(X\hspace{-0.04in}\to \hspace{-0.04in}Y) \triangleq\frac{\alpha}{\alpha-1}\log\frac{\max\limits_{P_{\hat{X}|Y}}\mathbb{E}\left[\mathbb{P}(\hat{X}=X|X,Y)^{\frac{\alpha-1}{\alpha}}\right]}{\max\limits_{P_{\hat{X}}}\mathbb{E}\left[\mathbb{P}(\hat{X}=X|X)^{\frac{\alpha-1}{\alpha}}\right]}} \end{align} for $\alpha\in(1,\infty)$ and by the continuous extension of \eqref{eq:alphaLeak_definition} for $\alpha = 1$ and $\infty$. \end{definition} From \eqref{eq:Notation_ProbCorrectEst}, the numerator of the logarithmic term in \eqref{eq:alphaLeak_definition} can be explicitly written as \begin{align} &\max\limits_{P_{\hat{X}|Y}}\mathbb{E}\left[\mathbb{P}(\hat{X}=X|X,Y)^{\frac{\alpha-1}{\alpha}}\right]\nonumber\\ =&\max\limits_{P_{\hat{X}|Y}}\sum\limits_{xy}P_{XY}(xy)P_{\hat{X}|Y}(x|y)^{\frac{\alpha-1}{\alpha}}. \end{align} Analogous to the analysis for MaxL in Remark \ref{Remark:MaxL}, $\alpha$-leakage quantifies the multiplicative increase in the expected reward for correctly inferring $X$ when an adversary has access to $Y$. Whereas $\alpha$-leakage captures how much an adversary can learn about $X$ from $Y$, we also wish to quantify the information leaked about \textit{any function} of $X$ through $Y$. To this end, we define maximal $\alpha$-leakage below. \begin{definition}[Maximal $\alpha$-Leakage]\label{Def:GeneralLeakge} Given a joint distribution $P_{XY}$ on finite alphabets $\mathcal X\times\mathcal Y$, the maximal $\alpha$-leakage from $X$ to $Y$ is defined as \begin{align} \label{eq:GealLeak_definition} \mathcal L_{\alpha}^{\text{max}}(X\to Y) \triangleq\sup_{U- X- Y }\mathcal L_{\alpha}(U\to Y) \end{align} where $\alpha\in[1,\infty]$, $U$ represents any function of $X$ and takes values from an arbitrary finite alphabet. 
\end{definition} \begin{remark} Note that the optimal $P_{\hat{X}}^*$ of the maximization in the denominator of the logarithmic term in \eqref{eq:alphaLeak_definition} minimizes the expectation of the following loss function \begin{equation}\label{poly_loss} \ell(x,P_{\hat{X}})=\frac{\alpha}{\alpha-1} \big(1-P_{\hat{X}}(x)^{1-\frac{1}{\alpha}}\big), \end{equation} for each $\alpha\in(1,\infty)$. The limit of the loss function in \eqref{poly_loss} leads to the log-loss (for $\alpha=1$) and 0-1 loss (for $\alpha=\infty$) functions, respectively. In addition, for $\alpha=1$ and $\infty$, the maximal $\alpha$-leakage simplifies to MI and MaxL, respectively. These comments are formalized in the following theorems. \end{remark} The following theorem simplifies the expression of the $\alpha$- leakage in \eqref{eq:alphaLeak_definition} by solving the two maximizations in the logarithmic term. \begin{theorem}\label{Thm:DefEquialentExpression_alphaleakage} For $\alpha\in[1,\infty]$, $\alpha$-leakage defined in \eqref{eq:alphaLeak_definition} simplifies to \begin{align}\label{eq:alphaLeak_EquivDef} \mathcal L_{\alpha}(X\to Y)=I_{\alpha}^{\text{A}}(X;Y) \quad \alpha\in[1,\infty]. \end{align} \end{theorem} The proof hinges on solving the optimal estimations $P^*_{\hat{X}|Y}$ and $P^*_{\hat{X}}$ in \eqref{eq:alphaLeak_definition} for knowing $Y$ or not, respectively, as \begin{subequations} \begin{align} P^*_{\hat{X}|Y}(x|y)&=\frac{P_{X|Y}(x|y)^{\alpha}}{\sum_{x }P_{X|Y}(x|y)^{\alpha}}& (x,y)\in \mathcal X\times \mathcal Y\\ P^*_{\hat{X}}(x)&=\frac{P_{X}(x)^{\alpha}}{\sum_{x }P_{X}(x)^{\alpha}} & x\in \mathcal X, \end{align} \end{subequations} and therefore, the logarithm of the ratio in \eqref{eq:alphaLeak_definition} simplifies to A-MI. A detailed proof is in Appendix \ref{Proof:DefEquialentExpression_alphaleakage}. Making use of the conclusion in Theorem \ref{Thm:DefEquialentExpression_alphaleakage}, the following theorem gives equivalent expressions for the maximal $\alpha$-leakage. \begin{theorem}\label{Thm:DefEquialentExpression} For $\alpha\in[1,\infty]$, the maximal $\alpha$-leakage defined in \eqref{eq:GealLeak_definition} simplifies to\\ \vspace*{+0.3cm} \begin{subequations} $\mathcal L_{\alpha}^{\text{max}}(X\to Y)$ \vspace*{-0.3cm} \label{eq:GealLeak_EquivDef} \begin{empheq}[left={=\empheqlbrace\,}]{align} &\sup_{P_{\tilde{X}}}I^{\text{S}}_\alpha(\tilde{X};Y)=\sup_{P_{\tilde{X}}}I_{\alpha}^{\text{A}}(\tilde{X};Y)& \alpha\in(1,\infty] \label{eq:GealLeak_EquivDef_1infty}\\ & I(X;Y) &\alpha=1 \label{eq:GealLeak_EquivDef_1} \end{empheq} \end{subequations} where $P_{\tilde{X}}$ has the same support as $P_X$. \end{theorem} Note that the maximal $\alpha$-leakage is essentially the Arimoto channel capacity (with a support-set constrained input distribution) for $\alpha\geq 1$ \cite{AlphaMI_Arimoto1975}. This theorem is proved by first applying Theorem \ref{Thm:DefEquialentExpression_alphaleakage} to write the maximal $\alpha$-leakage as \begin{align} \label{eq:Thm_MaxAlphaLeak_ProofSketch} \mathcal L_{\alpha}^{\text{max}}(X\to Y)&=\sup_{U-X-Y}I_{\alpha}^{\text{A}}(U;Y)\quad \alpha\in[1,\infty]. \end{align} Subsequently, using the facts that A-MI and S-MI have the same supremum \cite[Thm. 5]{alphaMI_verdu} and that S-MI satisfies data processing inequality \cite[Thm. 3]{alphaMI_verdu}, we upper bound the supremum of \eqref{eq:Thm_MaxAlphaLeak_ProofSketch} by $\sup_{P_{\tilde{X}}}I^{\text{S}}_\alpha(\tilde{X};Y)$, and then, show that the upper bound can be achieved by a specific $U$ with $H(X|U)=0$. 
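As a quick numerical illustration of Theorem~\ref{Thm:DefEquialentExpression} (ours, and not part of the formal development), one can evaluate the closed-form expressions \eqref{eq:Sibson_MI} and \eqref{eq:Arimoto_MI} for a finite channel and check that, although the two quantities differ pointwise in $P_X$, their suprema over $P_X$ coincide:
\begin{verbatim}
import numpy as np

# Sibson and Arimoto mutual information of order alpha (natural log),
# computed from the closed-form expressions for a finite channel.
def sibson_mi(p_x, p_y_given_x, alpha):
    inner = (p_x[:, None] * p_y_given_x ** alpha).sum(axis=0) ** (1 / alpha)
    return alpha / (alpha - 1) * np.log(inner.sum())

def arimoto_mi(p_x, p_y_given_x, alpha):
    num = ((p_x ** alpha)[:, None] * p_y_given_x ** alpha).sum(axis=0)
    inner = (num / (p_x ** alpha).sum()) ** (1 / alpha)
    return alpha / (alpha - 1) * np.log(inner.sum())

# Toy binary channel; rows are x, columns are y.
p_y_given_x = np.array([[0.9, 0.1],
                        [0.3, 0.7]])
alpha = 2.0

# Crude grid search over binary input distributions P_X.
grid = np.linspace(1e-3, 1 - 1e-3, 1001)
sup_s = max(sibson_mi(np.array([p, 1 - p]), p_y_given_x, alpha) for p in grid)
sup_a = max(arimoto_mi(np.array([p, 1 - p]), p_y_given_x, alpha) for p in grid)
print(sup_s, sup_a)   # the two suprema agree up to the grid resolution
\end{verbatim}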
A detailed proof can be found in Appendix \ref{Proof:DefEquialentExpression}. \section{Properties of Maximal $\alpha$-Leakage}\label{Sec:Properties} In this section, we will prove that maximal $\alpha$-leakage has several properties that one would expect any reasonable leakage measure to have, including: (i) quasi-convexity in the conditional distribution $P_{Y|X}$; (ii) data processing inequalities; and (iii) a composition property. These properties are proved in the following theorem, which makes use of the equivalent form of maximal alpha-leakage found in Theorem \ref{Thm:DefEquialentExpression}, as well as known properties of S-MI from \cite{alphaMI_Sibson1969,ConvexityAlphaMI_Ho,alphaMI_verdu}. \begin{theorem}\label{Thm:Geneleak_qusiconvex_nondecreasing_dataprocessing} For $\alpha\in[1,\infty]$, maximal $\alpha$-leakage \begin{itemize} \item[1.] is quasi-convex in $P_{Y|X}$; \item[2.] is monotonically non-decreasing in $\alpha$; \item[3.] satisfies data processing inequalities: let random variables $X,Y,Z$ form a Markov chain, i.e., $X-Y-Z$, then \begin{subequations}\label{eq:GeneLeak_DataProcessIneq} \begin{align} \mathcal L_{\alpha}^{\text{max}}(X\to Z)\leq \mathcal L_{\alpha}^{\text{max}}(X\to Y) \label{eq:GeneLeak_DataProcessIneq_XY}\\ \mathcal L_{\alpha}^{\text{max}}(X\to Z)\leq \mathcal L_{\alpha}^{\text{max}}(Y\to Z) \label{eq:GeneLeak_DataProcessIneq_YZ}. \end{align} \end{subequations} \item[4.] satisfies \begin{align} \mathcal L_{\alpha}^{\text{max}}(X\to Y)\geq 0 \end{align}with equality if and only if $X$ is independent of $Y$, and \begin{align} \mathcal L_{\alpha}^{\text{max}}(X\to Y)\leq \begin{cases} \log|\mathcal{X}|\quad &\alpha>1\\ H(P_X)&\alpha=1 \end{cases} \end{align} with equality if $X$ is a deterministic function of $Y$. \item[5.] $ \mathcal L_{\alpha}^{\text{max}}(X\hspace{-0.05in}\to\hspace{-0.04in}Y)\leq I^{\text{S}}_{\infty}(P_X,P_{Y|X})$ with equality if $P_{Y|X}$ has either 0 or the maximal leakage in Part 4; \item[6.] $ \mathcal L_{\alpha}^{\text{max}}(X\hspace{-0.05in}\to\hspace{-0.04in}Y)\geq I^{\text{S}}_{\alpha}\left(P^{(\text{u})}_X,P_{Y|X}\right)$, where $P_X^{(\text{u})}$ indicates the uniform distribution of $X$, i.e., \begin{align} \mathsmaller{\mathcal L_{\alpha}^{\text{max}}(X\to Y)\geq\frac{\alpha}{\alpha-1}\log\frac{\sum\limits_{y\in\mathcal Y}\left(\sum\limits_{x\in \mathcal X}P_{Y|X}(y|x)^{\alpha}\right)^{\frac{1}{\alpha}}}{|\mathcal{X}|^{\frac{1}{\alpha}}}.} \end{align} The equality holds if either $P_{Y|X}$ is symmetric\footnote{All rows of $P_{Y|X}$ are permutations of other rows, and so are columns.} or $P_{Y|X}$ has 0 leakage. \end{itemize} \end{theorem} A detailed proof is in Appendix \ref{Proof:Geneleak_qusiconvex_nondecreasing_dataprocessing}. \begin{remark} Note that both MI and MaxL are convex in $P_{Y|X}$ so that $\mathcal L^{\text{max}}_{1}(X\to Y)$ and $\mathcal L^{\text{max}}_{\infty}(X\to Y)$ are convex in $P_{Y|X}$. \end{remark} Consider two disclosed versions $Y_1$ and $Y_2$ of $X$. The following theorem upper bounds the maximal $\alpha$-leakage to an adversary who has access to both $Y_1$ and $Y_2$ simultaneously. \begin{theorem}[Composition Theorem]\label{Thm:GeneLeak_CompositionTheory} Given a Markov chain $Y_1-X-Y_2$, we have $(\alpha\in [1,\infty])$ \begin{align} \mathcal L_{\alpha}^{\text{max}}(X\to Y_1,Y_2)\leq \sum_{i\in\{1,2\}}\mathcal L_{\alpha}^{\text{max}}(X\to Y_i). \end{align} \end{theorem} This composition theorem allows composing multiple releases under a total leakage constraint. 
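The statements above can also be checked numerically on small alphabets. The following sketch (again NumPy-based and purely illustrative; the supremum over $P_X$ is approximated by a crude random search on the probability simplex, and logarithms are base 2) verifies the monotonicity in $\alpha$, the bounds in parts 5 and 6, and the composition bound of Theorem \ref{Thm:GeneLeak_CompositionTheory} for randomly drawn channels:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
nx = 4
P_Y1gX = rng.random((nx, 3)); P_Y1gX /= P_Y1gX.sum(axis=1, keepdims=True)
P_Y2gX = rng.random((nx, 2)); P_Y2gX /= P_Y2gX.sum(axis=1, keepdims=True)

def sibson(P_X, W, alpha):
    """Sibson MI of order alpha (in bits) for input P_X and channel W (rows: x)."""
    return alpha / (alpha - 1.0) * np.log2(
        (((P_X[:, None] * W**alpha).sum(axis=0))**(1.0 / alpha)).sum())

def max_leak(W, alpha, trials=20000):
    """Crude approximation of sup_{P_X} Sibson MI by random search on the simplex."""
    return max(sibson(rng.dirichlet(np.ones(nx)), W, alpha) for _ in range(trials))

alphas = [1.2, 2.0, 5.0, 20.0]
leaks = [max_leak(P_Y1gX, a) for a in alphas]
print(leaks)                                  # non-decreasing in alpha (part 2)

maxl = np.log2(P_Y1gX.max(axis=0).sum())      # I^S_infty, i.e. MaxL (upper bound, part 5)
unif = np.full(nx, 1.0 / nx)
print(sibson(unif, P_Y1gX, 2.0), leaks[1], maxl)   # lower bound <= leakage <= MaxL (parts 6, 5)

# Composition (Theorem 4): product channel of the Markov chain Y1 - X - Y2;
# the first printed value should not exceed the sum of the other two.
P_pair = (P_Y1gX[:, :, None] * P_Y2gX[:, None, :]).reshape(nx, -1)
print(max_leak(P_pair, 2.0), max_leak(P_Y1gX, 2.0) + max_leak(P_Y2gX, 2.0))
\end{verbatim}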
A detailed proof is in Appendix \ref{proof:Thm:GeneLeak_CompositionTheory} \section{Concluding Remarks}\label{Sec:Conclusion Remarks} Via $\alpha$- and maximal $\alpha$-leakage, we have introduced novel tunable measures for information leakage. These measures can find direct applications in privacy and secrecy problems. The choice of restricting either specific variables or all possible functions of a dataset determines the choice of $\alpha$- and maximal $\alpha$-leakage measures, respectively. Future work includes characterizing privacy-utility tradeoffs for these measures and evaluating existing privacy mappings against these metrics. \appendices \section{Proof of Theorem \ref{Thm:DefEquialentExpression_alphaleakage}}\label{Proof:DefEquialentExpression_alphaleakage} \begin{proof}[\nopunct] The expression \eqref{eq:alphaLeak_definition} can be explicitly written as \begin{align} &\mathcal L_{\alpha}(X\to Y) =\lim_{\alpha'\to \alpha}\frac{\alpha'}{\alpha'-1}\nonumber\\ \label{eq:GealLeak_definition1} &\log\left(\frac{\max\limits_{P_{\hat{X}|Y}}\sum\limits_{xy}P_{XY}(xy)\left(P_{\hat{X}|Y}(x|y)\right)^{\frac{\alpha'-1}{\alpha'}}}{\max\limits_{P_{\hat{X}}}\sum\limits_{x}P_X(x)P_{\hat{X}}(x)^{\frac{\alpha'-1}{\alpha'}}}\right). \end{align} To simplify the expression in \eqref{eq:GealLeak_definition1}, we need to solve the two maximizations in the logarithm. First, we concentrate on the maximization in the denominator of the logarithm in \eqref{eq:GealLeak_definition1} and the one in the numerator can be solved following the same analysis. The maximization in the denominator can be equivalently written as \begin{subequations}\label{eq:GealLeak_DefMaxDenominator} \begin{align} \label{eq:GealLeak_DefMaxDenominator_obj} \max_{\substack{P_{\hat{X}}}}\quad &\sum_{x\in\mathcal X}P_X(x)P_{\hat{X}}(x)^{1-\frac{1}{\alpha'}}\\ \label{eq:GealLeak_DefMaxDenominator_const1} \text{s.t.}\quad & \sum_{x\in\mathcal X}P_{\hat{X}}(x)=1\\ \label{eq:GealLeak_DefMaxDenominator_const>0} & P_{\hat{X}}(x)\geq 0 \quad \text{ for all }x\in \mathcal{X} \end{align} \end{subequations} For $\alpha'\in[1,\infty)$, the problem in \eqref{eq:GealLeak_DefMaxDenominator} is a convex program. Therefore, by using Karush–-Kuhn–-Tucker (KKT) conditions, we obtain the optimal value of \eqref{eq:GealLeak_DefMaxDenominator} as \begin{align} \max_{P_{\hat{X}}}\sum_{x\in\mathcal{X}}P_X(x)P_{\hat{X}}(x)^{\frac{\alpha'-1}{\alpha'}} =\left(\sum_{x\in\mathcal{X}}P_X(x)^{\alpha'}\right)^{\frac{1}{\alpha'}}, \end{align} with the optimal solution $P^*_{\hat{X}}$ as \begin{align} \label{eq:GealLeak_DefMaxDenominatorOPTSol} P^*_{\hat{X}}(x)=\frac{P_{X}(x)^{\alpha'}}{\sum\limits_{x\in\mathcal X}P_{X}(x)^{\alpha'}}\quad \text{for all } x\in\mathcal X \end{align} Similarly, we attain the optimal solution $P^*_{\hat{X}|Y}$ of the maximization in the numerator of the logarithm in \eqref{eq:GealLeak_definition1} as \begin{align} \label{eq:GealLeak_DefMaxNumeratorOPTSol} P^*_{\hat{X}|Y}(x|y)=\frac{P_{X|Y}(x|y)^{\alpha'}}{\sum\limits_{x\in\mathcal X}P_{X|Y}(x|y)^{\alpha'}} \end{align} for all $x\in\mathcal X, y\in\mathcal Y$, and therefore, we have \begin{align} &\max_{P_{\hat{X}|Y}}\sum_{x\in\mathcal{X},y\in\mathcal{Y}}P_{XY}(xy)P_{\hat{X}|Y}(x|y)^{\frac{\alpha'-1}{\alpha'}}\nonumber\\ =&\sum_{y\in\mathcal Y}P_Y(y)\left(\sum_{x\in\mathcal{X}}P_{X|Y}(x|y)^{\alpha'}\right)^{\frac{1}{\alpha'}}. 
\end{align} Thus, for $\alpha\in[1,\infty)$, we have \begin{align} \label{eq:GealLeak_EquivalentInproof0} &\mathcal L_{\alpha}(X\to Y)=\nonumber\\ &\lim_{\alpha'\to \alpha}\frac{\alpha'}{\alpha'-1}\log \left(\frac{\sum\limits_{y}P_Y(y)\left(\sum\limits_{x}P_{X|Y}(x|y)^{\alpha'}\right)^{\frac{1}{\alpha'}}}{\left(\sum\limits_{x}P_X(x)^{\alpha'}\right)^{\frac{1}{\alpha'}}}\right), \end{align} i.e., A-MI of order $\alpha\in[1,\infty)$ in \eqref{eq:Arimoto_MI}.\\ Note that if $\alpha=\infty$, the optimal solution in \eqref{eq:GealLeak_DefMaxDenominatorOPTSol} is $\frac{0}{0}$. We go back to the expression in \eqref{eq:alphaLeak_definition} and observe that if $\alpha=\infty$, the expression $\mathcal L_{\infty}(X\to Y)$ becomes \begin{align}\label{eq:GealLeak_EquivalentMax_Inf} &\mathcal L_{\infty}(X\to Y)\nonumber\\ =&\log\left(\frac{\max\limits_{P_{\hat{X}|Y}}\sum\limits_{x,y}P_{XY}(xy)P_{\hat{X}|Y}(x|y)}{\max\limits_{P_{\hat{X}}}\sum\limits_{x}P_X(x)P_{\hat{X}}(x)}\right). \end{align} Since the largest convex combinations is the maximal involved value, the optimal values of the two maximizations in \eqref{eq:GealLeak_EquivalentMax_Inf} are \begin{subequations} \begin{align} &\max_{P_{\hat{X}|Y}}\sum_{xy}P_{XY}(xy)P_{\hat{X}|Y}(x|y)\nonumber\\ =&\sum_{y} P_Y(y)\max_{x} P_{X|Y}(x|y)\\ &\max_{P_{\hat{X}}}\sum_{x}P_X(x)P_{\hat{X}}(x)=\max_{x} P_X(x). \end{align} \end{subequations} Therefore, for $\alpha=\infty$, we have \begin{align} \mathcal L_{\infty}(X\to Y)=\log\left(\frac{\sum\limits_{y\in\mathcal Y} P_Y(y)\max\limits_{x} P_{X|Y}(x|y)}{\max\limits_{x} P_X(x)}\right), \end{align} which is exactly the A-MI of order $\infty$. Therefore, $\alpha$-leakage can be equivalently expressed as $I^{\text{A}}_{\alpha}(X;Y)$ for $\alpha\in[1,\infty]$. \end{proof} \section{Proof of Theorem \ref{Thm:DefEquialentExpression}}\label{Proof:DefEquialentExpression} \begin{proof}[\nopunct] From Theorem \ref{Thm:DefEquialentExpression_alphaleakage}, we have for $\alpha\in[1,\infty]$, \begin{align} \label{eq:Inproof_Thm:DefEquialentExpression} \mathcal L_{\alpha}^{\text{max}}(X\to Y)=\sup_{U- X- Y }I_{\alpha}^{\text{A}}(U;Y). \end{align} If $\alpha=1$, we have \begin{align} \mathcal L_{1}^{\text{max}}(X\to Y)&=\sup_{U- X- Y }I(U;Y)\leq I(X;Y) \end{align} where the inequality is from data processing inequalities of MI \cite[Thm 2.8.1]{IT_Cover}.\\ If $\alpha=\infty$, we have \begin{align} \mathcal L_{\infty}^{\text{max}}(X\to Y)=\sup_{U- X- Y }\log\frac{\mathsmaller{\sum\limits_{y} P_Y(y)\max\limits_{u} P_{U|Y}(u|y)}}{\mathsmaller{\max\limits_{u} P_U(u)}}, \end{align} which is exactly the expression of MaxL, and therefore, we have \cite[Thm. 1]{MaximalLeakage_Issa2016} \begin{align} \mathcal L_{\infty}^{\text{max}}(X\to Y)=\log\sum\limits_{y} \max\limits_{x} P_{Y|X}(y|x). \end{align} For $\alpha\in(1,\infty)$, we provide an upper bound for $\mathcal L_{\alpha}^{\text{max}}(X\to Y)$, and then, give an achievable scheme as follows. 
\\ \textbf{Upper Bound}: We have an upper bound of $\mathcal L_{\alpha}^{\text{max}}(X\to Y)$ as \begin{subequations}\label{eq:GealLeak_EquivalentInproofConverse} \begin{align} \label{eq:GealLeak_EquivalentInproofConverse0} &\mathcal L_{\alpha}^{\text{max}}(X\to Y)\nonumber\\ =&\sup_{U- X- Y}I_{\alpha}^{\text{A}}(U;Y)\\ \label{eq:GealLeak_EquivalentInproofConverse2} \leq &\sup_{P_{\tilde{X}|\tilde{U}}:P_{\tilde{X}|\tilde{U}}\left(\cdot|u\right)\ll P_{X}} \sup_{P_{\tilde{U}}} I_{\alpha}^{\text{A}}(\tilde{U};Y)\\ \label{eq:GealLeak_EquivalentInproofConverse3} = &\sup_{P_{\tilde{X}|\tilde{U}}:P_{\tilde{X}|\tilde{U}}\left(\cdot|u\right)\ll P_{X}} \sup_{P_{\tilde{U}}} I_{\alpha}^{\text{S}}(\tilde{U};Y)\\ \label{eq:GealLeak_EquivalentInproofConverse4} = &\sup_{P_{\tilde{X}}\ll P_{X}} I_{\alpha}^{\text{S}}(\tilde{X};Y)\\ \label{eq:GealLeak_EquivalentInproofConverse5} = &\sup_{P_{\tilde{X}}\ll P_{X}} I_{\alpha}^{\text{A}}(\tilde{X};Y) \end{align} \end{subequations} where $P_{\tilde{X}}\ll P_{X}$ means the alphabet of $P_{\tilde{X}}$ is a subset of that of $P_{X}$. The inequality in \eqref{eq:GealLeak_EquivalentInproofConverse2} holds because the supremum of A-MI over all $P_{\tilde{U},\tilde{X}}$ on $\mathcal U\times \mathcal X$ is no less than that (in \eqref{eq:GealLeak_EquivalentInproofConverse0}) over these $P_{U,X}$ constrained by the $P_X$. The equations in \eqref{eq:GealLeak_EquivalentInproofConverse3} and \eqref{eq:GealLeak_EquivalentInproofConverse5} result from that A-MI and S-MI of order $\alpha>0$ have the same supremum \cite[Thm. 5]{alphaMI_verdu}; and \eqref{eq:GealLeak_EquivalentInproofConverse4} obeys the data processing inequalities \cite[Thm. 3]{alphaMI_verdu}.\\ \textbf{Lower bound}: We lower bound \eqref{eq:Inproof_Thm:DefEquialentExpression} by consider a random variable $U$ such that $U-X-Y$ is a Markov chain and $H(X|U)=0$. Specifically, let the alphabet $\mathcal U$ consist of $\mathcal U_x$, a collection of $U$ mapped to a $x\in \mathcal X$, i.e., $\mathcal U=\cup_{x\in\mathcal X} \mathcal U_x $ with $U=u\in \mathcal U_x$ if and only if $X=x$. Therefore, for the specific variable $U$, we have \begin{align} \label{eq:GealLeak_EquivalentInproofAchieval0} P_{Y|U}(y|u)&=\begin{cases} P_{Y|X}(y|x) \quad &\text{ for all } u\in \mathcal U_x\\%y\in \{y:P_{Y|X}(y|x)>0\} 0 &\text{ otherwise}. \end{cases} \end{align} Construct a probability distribution $P_{\tilde{X}}$ over $\mathcal{X}$ from $P_U$ as \begin{align} \label{eq:GealLeak_EquivalentInproofAchievalConstructPX} P_{\tilde{X}}(x)=\frac{\sum_{u\in\mathcal{U}_x}P_U^{\alpha}(u)}{\sum_{x\in\mathcal X}\sum_{u\in\mathcal{U}_x}P_U^{\alpha}(u)} \quad \text{ for all } x\in \mathcal X. 
\end{align} Thus, \begin{align*} &I_{\alpha}^{\text{A}}(U;Y)\nonumber\\ =&\frac{\alpha}{\alpha-1}\log\frac{\sum\limits_{y\in\mathcal Y}\left(\sum\limits_{x\in\mathcal X}\sum\limits_{u\in\mathcal{U}_x}P_{Y|U}(y|u)^{\alpha}P_{U}(u)^{\alpha}\right)^{\frac{1}{\alpha}}}{\left(\sum\limits_{x\in\mathcal X}\sum\limits_{u\in\mathcal{U}_x}P_U(u)^{\alpha}\right)^{\frac{1}{\alpha}}} \end{align*} \begin{align*} =&\frac{\alpha}{\alpha-1}\log\frac{\sum\limits_{y\in\mathcal Y}\left(\sum\limits_{x\in\mathcal X}P_{Y|X}(y|x)^{\alpha}\sum\limits_{u\in\mathcal{U}_x}P_{U}(u)^{\alpha}\right)^{\frac{1}{\alpha}}}{\left(\sum\limits_{x\in\mathcal X}\sum\limits_{u\in\mathcal{U}_x}P_U(u)^{\alpha} \right)^{\frac{1}{\alpha}}} \\ =&\frac{\alpha}{\alpha-1}\log\left(\sum_{y\in\mathcal Y}\left(\sum_{x\in\mathcal X}P_{Y|X}(y|x)^{\alpha}P_{\tilde{X}}(x)^{\alpha}\right)^{\frac{1}{\alpha}}\right)\\ =&I_{\alpha}^{\text{S}}(\tilde{X};Y) \end{align*} Therefore, \begin{subequations}\label{eq:GealLeak_EquivalentInproofAchievable} \begin{align} \mathcal L_{\alpha}^{\text{max}}(X\to Y) =&\sup_{U-X-Y} I_{\alpha}^{\text{A}}(U;Y)\nonumber\\ \geq &\sup_{U:U-X-Y,H(X|U)=0} I_{\alpha}^{\text{A}}(U;Y)\\ \label{eq:GealLeak_EquivalentInproofAchieval3} =&\sup_{P_{\tilde{X}}\ll P_X}I_{\alpha}^{\text{S}}(\tilde{X};Y), \end{align} \end{subequations} where \eqref{eq:GealLeak_EquivalentInproofAchieval3} is because for any $P_{\tilde{X}}\ll P_X$, it can be obtained through \eqref{eq:GealLeak_EquivalentInproofAchievalConstructPX} by appropriately choosing $P_U$. Therefore, combining \eqref{eq:GealLeak_EquivalentInproofConverse} and \eqref{eq:GealLeak_EquivalentInproofAchievable}, we obtain \eqref{eq:GealLeak_EquivDef_1infty}. \end{proof} \section{Proof of Theorem \ref{Thm:Geneleak_qusiconvex_nondecreasing_dataprocessing}}\label{Proof:Geneleak_qusiconvex_nondecreasing_dataprocessing} \begin{proof}[\nopunct] \textbf{The proof of part 1}: We know that for $\alpha\geq 1$, $I^{\text{S}}_{\alpha}(X;Y)$ is quasi-convex $P_{Y|X}$ for given $P_X$ \cite[Thm. 2.7.4]{IT_Cover}, \cite[Thm. 10]{ConvexityAlphaMI_Ho}. In addition, the supreme of a set of quasi-convex functions is also quasi-convex, i.e., let function $f(a,b)$ is quasi-convex in $b$, such that $\sup_a f(a,b)$ is also quasi-convex in $b$ \cite{boydconvex}. Therefore, the maximal $\alpha$-leakage in \eqref{eq:GealLeak_EquivDef} is quasi-convex $P_{Y|X}$ for given $P_X$.\\ \textbf{The proof of part 2}: Let $\beta>\alpha\geq1$, and $P_{X\alpha}^*=\arg \sup_{P_X} I^{\text{S}}_{\alpha}(P_X,P_{Y|X})$ for given $P_{Y|X}$, such that \begin{subequations} \begin{align} \mathcal L_{\alpha}^{\text{max}}(X\to Y)&= I^{\text{S}}_{\alpha}(P_{X\alpha}^*,P_{Y|X})\\ \label{eq:GeneLeak_Property1inProof4} & \leq I^{\text{S}}_{\beta}(P_{X\alpha}^*,P_{Y|X})\\ \label{eq:GeneLeak_Property1inProof5} & \leq \sup_{P_X} I^{\text{S}}_{\beta}(P_X,P_{Y|X})\\ &=\mathcal L^{\text{max}}_{\beta}(X\to Y) \end{align} \end{subequations} where \eqref{eq:GeneLeak_Property1inProof4} results from that $I^{\text{S}}_{\alpha}$ is non-decreasing in $\alpha$ for $\alpha>0$ \cite[Thm. 4]{ConvexityAlphaMI_Ho}, and the equality in \eqref{eq:GeneLeak_Property1inProof5} holds if and only if $P_{X\alpha}^*=\arg \sup_{P_X} I_{\beta}(P_X,P_{Y|X})$.\\ \textbf{The proof of part 3}: Let random variables $X$, $Y$ and $Z$ form the Markov chain $X-Y-Z$. Making use of that S-MI of order $\alpha>1$ satisfies data processing inequalities \cite[Thm. 
3]{alphaMI_verdu}, i.e., \begin{subequations} \begin{align} I^{\text{S}}_{\alpha}(X; Z)\leq I^{\text{S}}_{\alpha}(X; Y) \label{eq:DPInq_inproof01}\\ I^{\text{S}}_{\alpha}(X; Z)\leq I^{\text{S}}_{\alpha}(Y; Z) \label{eq:DPInq_inproof02}, \end{align} \end{subequations} we prove that maximal $\alpha$-leakage satisfies data processing inequalities as follows.\\ We first prove \eqref{eq:GeneLeak_DataProcessIneq_XY}. Let $P^*_X=\arg\sup_{P_X} I^{\text{S}}_{\alpha}(P_X,P_{Z|X})$. For the Markov chain $X-Y-Z$, we have \begin{subequations} \begin{align} \mathcal L_{\alpha}^{\text{max}}(X\to Z)&=I^{\text{S}}_{\alpha}(P^*_X,P_{Z|X}) \label{eq:DPInq_inproof1}\\ &\leq I^{\text{S}}_{\alpha}(P^*_X,P_{Y|X}) \label{eq:DPInq_inproof2}\\ &\leq \sup_{P_X} I^{\text{S}}_{\alpha}(P_X,P_{Y|X}) \label{eq:DPInq_inproof3}\\ &=\mathcal L_{\alpha}^{\text{max}}(X\to Y) \label{eq:DPInq_inproof4} \end{align} \end{subequations} where the inequality in \eqref{eq:DPInq_inproof2} results from \eqref{eq:DPInq_inproof01}. Similarly, the inequality in \eqref{eq:GeneLeak_DataProcessIneq_YZ} can be proved directly from \eqref{eq:DPInq_inproof02}.\\ \textbf{The proof of part 4}: For $\alpha\in(1,\infty]$, referring to \eqref{eq:Sibson_MI} and \eqref{eq:GealLeak_EquivDef_1infty} we have \begin{subequations} \begin{align} &\mathcal L_{\alpha}^{\text{max}}(X\to Y)\nonumber\\ =&\sup_{P_X} \frac{\alpha}{\alpha-1}\log \sum_{y}\left(\sum_{x}P_X(x)P_{Y|X}(y|x)^{\alpha}\right)^{\frac{1}{\alpha}}\\ \label{eq:alphaLeak_SpecialMechanism_Inproof1} \geq & \sup_{P_X} \frac{\alpha}{\alpha-1}\log \sum_{y}\bigg(\sum_{x}P_X(x)P_{Y|X}(y|x)\bigg)^{\frac{\alpha}{\alpha}}\\ =& \sup_{P_X} \frac{\alpha}{\alpha-1}\log 1=0, \end{align} \end{subequations} where \eqref{eq:alphaLeak_SpecialMechanism_Inproof1} results from applying Jensen's inequality to the convex function $f: t\to t^{\alpha}$ ($t\geq 0$), such that the equality holds if and only if given any $y\in\mathcal Y$, $P_{Y|X}(y|x)$ are the same for all $x\in \mathcal X$, such that \begin{align} P_{Y|X}(y|x)=P_Y(y)\quad x\in \mathcal X, y\in\mathcal Y \end{align} which means $X$ and $Y$ are independent, i.e., $P_{Y|X}$ is a rank-1 row stochastic matrix. For $\alpha=1$, we have \begin{align} \mathcal L^{\text{max}}_{1}(X\to Y)=I(X;Y)\geq 0, \end{align} with equalities if and only if $X$ is independent of $Y$ \cite{IT_Cover}. \\ Let $P_{X\Leftarrow Y}$ be an conditional probability matrix with only one non-zero entry in each column, and indicate the only non-zero entries by $x_y$, i.e., $x_y=\arg_x P_{X\Leftarrow Y}(y|x)>0$ for all $y\in\mathcal Y$. For $\alpha=\infty$, we have \begin{align} \mathcal L^{\text{max}}_{\infty}(P_{X\Leftarrow Y}) &=\log \sum_{y\in\mathcal Y}P_{X\Leftarrow Y}(y|x_y)=\log |\mathcal X|, \end{align} which is exactly the upper bound of MaxL \cite[Lem. 
1]{MaximalLeakageHT_Liao2017} and hence also an upper bound of maximal $\alpha$-leakage due to its monotonicity in $\alpha$.\\ For $\alpha\in(1,\infty)$, from \eqref{eq:Sibson_MI} and \eqref{eq:GealLeak_EquivDef_1infty} we have \begin{subequations} \begin{align} &\mathcal L_{\alpha}^{\text{max}}(P_{X\Leftarrow Y})\nonumber\\ =&\sup_{P_X} \frac{\alpha}{\alpha-1}\log \sum_{y\in\mathcal{Y}}\left(P_X^{\frac{1}{\alpha}}(x_y)P_{X\Leftarrow Y}(y|x_y)\right)\\ \label{eq:GeneLeak_LemmaSpecialMech_proof1} = &\sup_{P_X} \frac{\alpha}{\alpha-1}\log \sum_{x\in\mathcal{X}}P_X^{\frac{1}{\alpha}}(x); \end{align} \end{subequations} in addition, since the function maximized in \eqref{eq:GeneLeak_LemmaSpecialMech_proof1} is symmetric and concave in $P_X$, it is Schur-concave in $P_X$, and therefore, the optimal distribution of $X$ achieving the supremum in \eqref{eq:GeneLeak_LemmaSpecialMech_proof1} is uniform. Thus, \begin{align} \mathcal L_{\alpha}^{\text{max}}(P_{X\Leftarrow Y})=\log |\mathcal{X}|\quad \text{ for } \alpha\in(1,\infty). \end{align} For $\alpha=1$, referring to \eqref{eq:GealLeak_EquivDef_1} we have \begin{subequations} \begin{align} &\mathcal L^{\text{max}}_{1}(X\to Y)\nonumber\\ =&\sum_{y\in\mathcal Y}P_X(x_y)P_{X\Leftarrow Y}(y|x_y)\log \frac{P_{X\Leftarrow Y}(y|x_y)}{P_X(x_y)P_{X\Leftarrow Y}(y|x_y)}\\ =&\sum_{y\in\mathcal Y}P_X(x_y)P_{X\Leftarrow Y}(y|x_y)\log \frac{1}{P_X(x_y)}\\ =&\sum_{x\in\mathcal X}P_X(x)\log \frac{1}{P_X(x)}=H(P_X), \end{align} \end{subequations} which is exactly the upper bound of $I(X;Y)$.\\ Therefore, if $X$ is a deterministic function of $Y$, maximal $\alpha$-leakage achieves its maximal value $\log|\mathcal{X}|$ for $\alpha>1$, and $H(P_X)$ for $\alpha=1$.\\ \textbf{The proof of part 5}: The upper bound is directly from the fact that maximal $\alpha$-leakage is non-decreasing in $\alpha$. In addition, from the results in part 4, we know that if $P_{Y|X}$ has either 0 or the maximal leakage in part 4, the upper bound is tight.\\ \textbf{The proof of part 6}: Given $P_{Y|X}$, the lower bound is actually the S-MI of order $\alpha$ for the uniform distribution of $X$. Due to the concavity of $I^{\text{S}}_{\alpha}(P_X,P_{Y|X})$ ($\alpha\geq 1$) in $P_X$ \cite[Thm. 8]{ConvexityAlphaMI_Ho} \footnote{The concavity of $I^{\text{S}}_{\alpha}(P_X,P_{Y|X})$ is based on the fact that a conditional R{\'e}nyi divergence is concave in $P_X$ \cite{ConvexityAlphaMI_Ho}.}, we know that $I^{\text{S}}_{\alpha}(P_X,P_{Y|X})$ is Schur-concave in $P_X$ for any symmetric $P_{Y|X}$. Therefore, the uniform distribution of $X$ maximizes \eqref{eq:GealLeak_EquivDef_1infty} and its S-MI is exactly the maximal $\alpha$-leakage \cite[Col. 9]{ConvexityAlphaMI_Ho}\footnote{Let $f(\mathbf{x})$ be a function which is Schur-concave in a vector variable $\mathbf{x}\in \mathbb{R}^n$, and let $\mathbf{x}_1$ and $\mathbf{x}_2$ be two decreasing-ordered vectors in the domain of $f(\mathbf{x})$. If $\mathbf{x}_1$ majorizes $\mathbf{x}_2$, i.e., $\sum_{1}^{k}x_{1i}\geq \sum_{1}^{k}x_{2i}$ (for all $k\leq n$) and $\sum_{1}^{n}x_{1i}= \sum_{1}^{n}x_{2i}$, then $f(\mathbf{x}_1)\leq f(\mathbf{x}_2)$.}. The $P_{Y|X}$ in part 4 with zero leakage makes the lower bound tight. \end{proof} \section{Proof of Theorem \ref{Thm:GeneLeak_CompositionTheory}}\label{proof:Thm:GeneLeak_CompositionTheory} \begin{proof}[\nopunct] Let $\mathcal Y_1$ and $\mathcal Y_2$ be the alphabets of $Y_1$ and $Y_2$, respectively.
For any $(y_1,y_2)\in \mathcal Y_1\times \mathcal Y_2$, due to the Markov chain $Y_1-X-Y_2$, the corresponding entry of the conditional probability matrix of $(Y_1,Y_2)$ given $X$ is \begin{align} P(y_1,y_2|x)=P(y_1|x)P(y_2|x,y_1)=P(y_1|x)P(y_2|x). \end{align} Therefore, for $\alpha\in(1,\infty)$ \begin{subequations} \begin{align} &\mathcal L_{\alpha}^{\text{max}}(X\to Y_1,Y_2)\nonumber\\ =&\sup_{P_X} \frac{\alpha}{\alpha-1}\log \sum_{y_1,y_2\in \mathcal Y_1\times \mathcal Y_2}\nonumber\\ &\quad\left(\sum_{x\in\mathcal{X}}P_X(x)P_{Y_1,Y_2|X}(y_1,y_2|x)^{\alpha}\right)^{\frac{1}{\alpha}}\\ =&\sup_{P_X} \frac{\alpha}{\alpha-1}\log \sum_{y_1,y_2\in \mathcal Y_1\times \mathcal Y_2}\nonumber\\ \label{eq:alphaLeakage_CompostionTheoremProof_0} &\quad\left(\sum_{x\in\mathcal{X}}P_X(x)P_{Y_1|X}(y_1|x)^{\alpha}P_{Y_2|X}(y_2|x)^{\alpha}\right)^{\frac{1}{\alpha}}. \end{align} \end{subequations} Let $K(y_1)=\sum_{x\in\mathcal{X}}P_X(x)P_{Y_1|X}(y_1|x)^{\alpha}$, for all $y_1\in\mathcal Y_1$, such that we can construct a set of distributions over $\mathcal X$ as \begin{align} P_{\tilde{X}}(x|y_1)=\frac{P_X(x)P_{Y_1|X}(y_1|x)^{\alpha}}{K(y_1)}. \end{align} Therefore, from \eqref{eq:alphaLeakage_CompostionTheoremProof_0}, $\mathcal L_{\alpha}^{\text{max}}(X\to Y_1,Y_2)$ can be rewritten as \begin{align} &\mathcal L_{\alpha}^{\text{max}}(X\to Y_1,Y_2)\nonumber\\ =&\sup_{P_X} \frac{\alpha}{\alpha-1}\log \sum_{y_1,y_2\in \mathcal Y_1\times \mathcal Y_2}\left(\sum_{x\in\mathcal{X}}K(y_1)P_{\tilde{X}}(x|y_1)\right.\nonumber\\ & P_{Y_2|X}(y_2|x)^{\alpha}\bigg)^{\frac{1}{\alpha}}\displaybreak[0]\\ =&\sup_{P_X} \frac{\alpha}{\alpha-1}\log \sum_{y_1,y_2\in \mathcal Y_1\times \mathcal Y_2}\left(\sum_{x\in\mathcal{X}}P_X(x)\right.\nonumber\\ &P_{Y_1|X}(y_1|x)^{\alpha}\bigg)^{\frac{1}{\alpha}}\left(\sum_{x\in\mathcal{X}} P_{\tilde{X}}(x|y_1)P_{Y_2|X}(y_2|x)^{\alpha}\right)^{\frac{1}{\alpha}}\\ = &\sup_{P_X} \frac{\alpha}{\alpha-1}\log \mathlarger{\sum}_{y_1\in \mathcal Y_1}\small{\left(\sum_{x\in\mathcal{X}}P_X(x)P_{Y_1|X}(y_1|x)^{\alpha}\right)^{\frac{1}{\alpha}}}\nonumber\\ &\small{\sum_{y_2\in \mathcal Y_2}\left(\sum_{x\in\mathcal{X}} P_{\tilde{X}}(x|y_1)P_{Y_2|X}(y_2|x)^{\alpha}\right)^{\frac{1}{\alpha}}} \end{align} \begin{align} \leq & \mathsmaller{\sup\limits_{P_X} \frac{\alpha}{\alpha-1}\log \left(\sum\limits_{y_1\in \mathcal Y_1}\left(\sum\limits_{x\in\mathcal{X}}P_X(x)P_{Y_1|X}(y_1|x)^{\alpha}\right)^{\frac{1}{\alpha}}\right.}\nonumber\\ \label{eq:alphaLeakage_CompostionTheoremProof_1} &\mathsmaller{\left.\max\limits_{y_1\in\mathcal Y_1}\sum\limits_{y_2\in \mathcal Y_2}\left(\sum\limits_{x\in\mathcal{X}} P_{\tilde{X}}(x|y_1)P_{Y_2|X}(y_2|x)^{\alpha}\right)^{\frac{1}{\alpha}}\right)}\\ =&\mathsmaller{\sup\limits_{P_X} \frac{\alpha}{\alpha-1}\log \left(\sum\limits_{y_1\in \mathcal Y_1}\left(\sum\limits_{x\in\mathcal{X}}P_X(x)P_{Y_1|X}(y_1|x)^{\alpha}\right)^{\frac{1}{\alpha}}\right.}\nonumber\\ \label{eq:alphaLeakage_CompostionTheoremProof_2} &\mathsmaller{\left.\sum\limits_{y_2\in \mathcal Y_2}\left(\sum\limits_{x\in\mathcal{X}} P_{\tilde{X}}(x|y_1^*)P_{Y_2|X}(y_2|x)^{\alpha}\right)^{\frac{1}{\alpha}}\right)}\\ \leq &\mathsmaller{\sup\limits_{P_X} \frac{\alpha}{\alpha-1}\log \left(\sum\limits_{y_1\in \mathcal Y_1}\left(\sum\limits_{x\in\mathcal{X}}P_X(x)P_{Y_1|X}(y_1|x)^{\alpha}\right)^{\frac{1}{\alpha}}\right.}\nonumber\\ \label{eq:alphaLeakage_CompostionTheoremProof_3} &\mathsmaller{+\sup\limits_{P_{\tilde{X}}}\frac{\alpha}{\alpha-1}\log\sum\limits_{y_2\in \mathcal Y_2}\left(\sum\limits_{x\in\mathcal{X}} 
P_{\tilde{X}}(x)P_{Y_2|X}(y_2|x)^{\alpha}\right)^{\frac{1}{\alpha}}}\\ =&\mathcal L_{\alpha}^{\text{max}}(X\to Y_1)+\mathcal L_{\alpha}^{\text{max}}(X\to Y_2). \end{align} where $y_1^*$ in \eqref{eq:alphaLeakage_CompostionTheoremProof_2} is the optimal $y_1$ achieving the maximum in \eqref{eq:alphaLeakage_CompostionTheoremProof_1}. Therefore, the equality in \eqref{eq:alphaLeakage_CompostionTheoremProof_1} holds if and only if, for all $y_1\in \mathcal Y_1$, \begin{align} &\sum_{y_2\in \mathcal Y_2}\left(\sum_{x\in\mathcal{X}} P_{\tilde{X}}(x|y_1)P_{Y_2|X}(y_2|x)^{\alpha}\right)^{\frac{1}{\alpha}}\nonumber\\ =&\sum_{y_2\in \mathcal Y_2}\left(\sum_{x\in\mathcal{X}} P_{\tilde{X}}(x|y_1^*)P_{Y_2|X}(y_2|x)^{\alpha}\right)^{\frac{1}{\alpha}}; \end{align} and the equality in \eqref{eq:alphaLeakage_CompostionTheoremProof_3} holds if and only if the optimal solutions $P_X^*$ and $P_{\tilde{X}}^*$ of the two maximizations in \eqref{eq:alphaLeakage_CompostionTheoremProof_3} satisfy, for all $x\in\mathcal X$, \begin{align} P_{\tilde{X}}^*(x)=\frac{P_X^*(x)P_{Y_1|X}^{\alpha}(y_1^*|x)}{\sum_{x\in\mathcal{X}}P_X(x)P_{Y_1|X}^{\alpha}(y_1^*|x)}. \end{align} Now we consider $\alpha=1$. For $Y_1-X-Y_2$, we have \begin{align} I(Y_2;X|Y_1)\leq I(Y_2;X). \end{align} From Theorem \ref{Thm:DefEquialentExpression}, there is \begin{subequations} \begin{align} &\mathcal L^{\text{max}}_{1}(X\to Y_1,Y_2)\nonumber\\ =&I(X;Y_1)+I(X;Y_2|Y_1)\\ \leq &I(X;Y_1)+I(X;Y_2)\\ =& \mathcal L^{\text{max}}_{1}(X\to Y_1)+\mathcal L^{\text{max}}_{1}(X\to Y_2). \end{align} \end{subequations} For $\alpha=\infty$, we also have \begin{subequations} \begin{align} &\mathcal L^{\text{max}}_{\infty}(X\to Y_1,Y_2)\nonumber\\ =&\log \sum_{y_1,y_2\in \mathcal Y_1\times \mathcal Y_2}\max_{x\in\mathcal X} P(y_1|x)P(y_2|x)\\ \leq &\log \sum_{y_1,y_2\in \mathcal Y_1\times \mathcal Y_2}\left(\max_{x\in\mathcal X} P(y_1|x)\right)\left(\max_{x\in\mathcal X} P(y_2|x)\right)\\ =&\log \sum_{y_1\in \mathcal Y_1}\max_{x\in\mathcal X} P(y_1|x)+\log \sum_{y_2\in \mathcal Y_2}\max_{x\in\mathcal X} P(y_2|x)\\ =& \mathcal L^{\text{max}}_{\infty}(X\to Y_1)+ \mathcal L^{\text{max}}_{\infty}(X\to Y_2). \end{align} \end{subequations} \end{proof} \section*{Acknowledgment} The authors would like to thank Prof. Vincent Y. F. Tan from National University of Singapore for many valuable discussions. \bibliographystyle{IEEEtran}
{ "attr-fineweb-edu": 1.724609, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUd_7xaKgQNLA_ec0L
\section{Introduction} \label{sec:introduction} Long range two-particle correlation functions in relative pseudorapidity and relative azimuthal angle in \ensuremath{pp}\xspace and \ensuremath{p\text{+Pb}}\xspace collisions measured at the LHC are consistent with a possible collective behaviour~\cite{CMS_ridge_pp,CMS_ridge_pPb, ATLAS_ridge_pp}, reminiscent of that observed in Pb+Pb collisions; see e.g.~\cite{ALICE_ridge_PbPb}. \\ Femtoscopic measurements in \ensuremath{pp}\xspace and \ensuremath{p\text{+Pb}}\xspace collisions~\cite{ATLAS_femto_pp,ALICE_femto_pPb} allow the investigation of the time evolution of small colliding systems. This proceeding reports the first measurement of the centrality and momentum dependence of same- and opposite-sign pion pair correlations in \ensuremath{p\text{+Pb}}\xspace collisions at $\ensuremath{\sqrt{s_{_{\text{NN}}}}}\xspace=5.02$~TeV measured by the ATLAS experiment. \section{Data analysis} \label{sec:data_analysis} This analysis uses a data sample with an integrated luminosity of $28.1$~nb$^{-1}$ of \ensuremath{p\text{+Pb}}\xspace collisions, measured in 2013 by the ATLAS detector~\cite{ATLAS_detector}. The Pb beam had an energy of $1.57$~TeV per nucleon and the opposing $p$ beam had an energy of $4$~TeV, resulting in a center of mass energy of $\ensuremath{\sqrt{s_{_{\text{NN}}}}}\xspace = 5.02$~TeV. The centrality is determined using the total transverse energy measured in the forward calorimeter on the Pb-going side; see e.g. Ref.~\cite{ATLAS_cent_pPb} for more information. \\ Reconstructed tracks are required to have a transverse momentum $\ensuremath{p_{\text{T}}}\xspace > 0.1$~GeV and to be within the pseudorapidity\footnote{ATLAS uses a right-handed coordinate system with its origin at the nominal interaction point (IP) in the centre of the detector and the $z$-axis along the beam pipe. The $x$-axis points from the IP to the centre of the LHC ring, and the $y$-axis points upward. Cylindrical coordinates $(r,\phi)$ are used in the transverse plane, $\phi$ being the azimuthal angle around the $z$-axis. The pseudorapidity is defined in terms of the polar angle $\theta$ as $\eta=-\ln\tan(\theta/2)$.} range $|\eta| < 2.5$. Pions are identified using the $\ensuremath{\text{d}E/\text{d}x}\xspace$ information from the silicon pixel detector. Pion pairs are required to have $|\Delta \phi | < \pi/2$ and to be within the pair rapidity region of $|\eta_k| < 1.5$. Opposite-sign pairs that have an invariant mass close to the masses of $\rho^0$, $K_S^0$ and $\phi$ are rejected. \\ The two-particle correlation function is defined as $ C(q) = A(q)/B(q)$, where $q$ is the relative momentum $q = p^a - p^b$, $\left. A(q) = \ensuremath{\text{d}}\xspace N / \ensuremath{\text{d}}\xspace q \right |_{\text{same}}$ is the same-event distribution and $\left. B(q) = \ensuremath{\text{d}}\xspace N / \ensuremath{\text{d}}\xspace q \right |_{\text{mixed}} $ is the mixed-event distribution in the same event class. \\ In three dimensions, a longitudinally co-moving frame is chosen, such that $p_z^a = -p_z^b$. The coordinates are defined in the Bertsch-Pratt convention~\cite{Bertsch89,Pratt86}, where $\ensuremath{q_{\text{out}}}\xspace$ points in the direction of the pair momentum, $\ensuremath{q_{\text{long}}}\xspace$ points in the direction of the beam and $\ensuremath{q_{\text{side}}}\xspace$ is perpendicular to $\ensuremath{q_{\text{out}}}\xspace$ and $\ensuremath{q_{\text{long}}}\xspace$. A graphical visualization of Bertsch-Pratt coordinates is shown e.g.
in Ref.~\cite{Lisa05}. Using the Bowler-Sinyukov~\cite{Bowler91,Sinyukow98} parametrization, the correlation function can be written as \begin{figure}[t] \begin{minipage}{14pc} \includegraphics[scale = 0.43]{./fig_05.pdf} \end{minipage}\hspace{3.5pc}% \begin{minipage}{14pc} \includegraphics[scale = 0.43]{./fig_06.pdf} \end{minipage} \caption{Invariant radii $\ensuremath{R_{\text{inv}}}\xspace$ as a function of $\ensuremath{k_{\text{T}}}\xspace$ in different centrality bins (left panel) and as a function of the cube root of the average charged particle multiplicity, $\ensuremath{\langle \text{d}N_{\text{ch}} / \text{d}\eta \rangle ^{1/3}}\xspace$, in two $\ensuremath{k_{\text{T}}}\xspace$ intervals (right panel). Figures from~\cite{ATLAS_femt_pPb_CONF}. }\label{fig:Rinv} \end{figure} \begin{equation} C (q) = \left[(1- \lambda) + \lambda K(q) C_{\text{BE} }(q) \right] \Omega(q), \end{equation} where $\lambda$, $K(q)$, $C_{\text{BE} }(q)$ and $\Omega(q)$ are the correlation strength, a correction factor for final-state interactions, the Bose-Einstein enhancement factor and the contribution from non-femtoscopic correlations, respectively. The Bose-Einstein factor $C_{\text{BE} }(q)$ in the correlation function is fit to an exponential function \begin{equation} C_{\text{BE} }(q) = 1 + \exp \left( - \hat{R} \hat{q} \right), \end{equation} where $\hat{R}$ and $\hat{q}$ are the invariant radius $\ensuremath{R_{\text{inv}}}\xspace$ and the invariant momentum $\ensuremath{q_{\text{inv}}}\xspace$ in the one-dimensional case, and a diagonal matrix with the entries $\mathcal{R} = \text{diag}(\ensuremath{R_{\text{out}}}\xspace,\ensuremath{R_{\text{long}}}\xspace,\ensuremath{R_{\text{side}}}\xspace)$ and $\vec{q} = (\ensuremath{q_{\text{out}}}\xspace,\ensuremath{q_{\text{long}}}\xspace,\ensuremath{q_{\text{side}}}\xspace)$ in the three-dimensional case, respectively. \\ The non-femtoscopic contribution to the correlation function $\Omega(q)$ is estimated by a fit to the opposite-sign pair distribution \begin{equation} \Omega(\ensuremath{q_{\text{inv}}}\xspace) = \mathcal{N} \left[ 1 + \lambda_{\text{bkg}} \exp \left( - \left| R_{\text{bkg}} \ensuremath{q_{\text{inv}}}\xspace \right|^{\alpha_{\text{bkg}}} \right) \right], \end{equation} where $\mathcal{N}$ is an arbitrary normalization factor, $\lambda_{\text{bkg}}$ is the experimental correlation strength, $R_{\text{bkg}}$ is a parameter for the width and $\alpha_{\text{bkg}}$ is a shape parameter. The origin of this background is identified as hard processes, such as particles from jet fragmentation. A mapping of the parameters $\lambda_{\text{bkg}}$ and $R_{\text{bkg}}$ between opposite-sign and same-sign pairs is extracted from Monte Carlo simulations. It should be noted that all parameters describing the background are constrained by this method. Systematic uncertainties are estimated by taking into account the hard-process background description, particle identification, the effective Coulomb correction size $R_{\text{eff}}$, charge asymmetry, and two-particle effects. More details on the data analysis procedure can be found in Ref.~\cite{ATLAS_femt_pPb_CONF}.
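As an illustration of the one-dimensional fitting procedure, the following sketch (simplified and illustrative only, not the ATLAS analysis code: it assumes NumPy/SciPy, a synthetic correlation function, fixed background parameters, $K(q)\simeq 1$, and the convention that $\ensuremath{R_{\text{inv}}}\xspace$ is in fm while $\ensuremath{q_{\text{inv}}}\xspace$ is in GeV, connected by $\hbar c$) fits the exponential Bose-Einstein term on top of $\Omega(\ensuremath{q_{\text{inv}}}\xspace)$ to extract $\lambda$ and $\ensuremath{R_{\text{inv}}}\xspace$:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

hbarc = 0.1973  # GeV*fm, converts R_inv [fm] * q_inv [GeV] to a dimensionless exponent

# Background parameters, assumed already fixed from the opposite-sign fit / MC mapping.
N_bkg, lam_bkg, R_bkg, alpha_bkg = 1.0, 0.1, 0.6, 1.5

def omega(q):
    return N_bkg * (1.0 + lam_bkg * np.exp(-np.abs(R_bkg * q) ** alpha_bkg))

def corr_model(q, lam, R_inv):
    # Bowler-Sinyukov form with K(q) ~ 1: C(q) = [(1-lam) + lam*C_BE(q)] * Omega(q)
    c_be = 1.0 + np.exp(-R_inv * q / hbarc)
    return ((1.0 - lam) + lam * c_be) * omega(q)

# Synthetic "measured" correlation function, for illustration only.
q = np.linspace(0.01, 1.0, 80)
rng = np.random.default_rng(2)
y = corr_model(q, lam=0.5, R_inv=2.5) * rng.normal(1.0, 0.01, q.size)

popt, pcov = curve_fit(corr_model, q, y, p0=[0.4, 2.0])
lam_fit, R_fit = popt
print(f"lambda = {lam_fit:.2f}, R_inv = {R_fit:.2f} fm")
\end{verbatim}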
\begin{figure}[!t] \begin{minipage}{14pc} \includegraphics[scale = 0.43]{./fig_07a.pdf} \end{minipage}\hspace{3.5pc}% \begin{minipage}{14pc} \includegraphics[scale = 0.43]{./fig_07b.pdf} \end{minipage} \begin{minipage}{14pc} \includegraphics[scale = 0.43]{./fig_07c.pdf} \end{minipage}\hspace{3.5pc}% \begin{minipage}{14pc} \includegraphics[scale = 0.43]{./fig_09.pdf} \end{minipage} \caption{The radii $\ensuremath{R_{\text{out}}}\xspace$ (upper left panel), $\ensuremath{R_{\text{side}}}\xspace$ (upper right panel) and $\ensuremath{R_{\text{long}}}\xspace$ (lower left panel) as a function of $\ensuremath{k_{\text{T}}}\xspace$ in different centrality bins. The lower right panel shows the ratio of $\ensuremath{R_{\text{out}}}\xspace/\ensuremath{R_{\text{side}}}\xspace$ as function of $\ensuremath{k_{\text{T}}}\xspace$. Figures from~\cite{ATLAS_femt_pPb_CONF}. }\label{fig:3d} \end{figure} \section{Results and conclusions} \label{sec:results_and_conclusions} Figure~\ref{fig:Rinv} shows the invariant radii $\ensuremath{R_{\text{inv}}}\xspace$ as a function of the pair momentum $\ensuremath{k_{\text{T}}}\xspace$ and as a function of cube root of the average charged particle multiplicity $\ensuremath{\langle \text{d}N_{\text{ch}} / \text{d}\eta \rangle ^{1/3}}\xspace$. The measured radii are observed to decrease with increasing $\ensuremath{k_{\text{T}}}\xspace$, which is consistent with collective expansion. This behavior is less severe for peripheral collisions.\\ Figure~\ref{fig:3d} shows the results for the radii $\ensuremath{R_{\text{out}}}\xspace$, $\ensuremath{R_{\text{long}}}\xspace$ and $\ensuremath{R_{\text{side}}}\xspace$ and the ratio $\ensuremath{R_{\text{out}}}\xspace/\ensuremath{R_{\text{side}}}\xspace$ as a function of the pair momentum $\ensuremath{k_{\text{T}}}\xspace$ for different centralities. The radii show a decreasing trend with increasing $\ensuremath{k_{\text{T}}}\xspace$. In the most central events $0\--1~\%$, the radii are about a factor $2.5$ larger than in peripheral collisions $70\--80~\%$. The ratio $\ensuremath{R_{\text{out}}}\xspace/\ensuremath{R_{\text{side}}}\xspace$ falls significantly below~$1$, indicating a very rapid and explosive expansion of the fireball. This behavior can be explained by a combination of pre-thermalized acceleration, a stiffer equation of state, and adding viscous corrections~\cite{Pratt09}.\\ In Fig.~\ref{fig:HVolume}, the product $\ensuremath{R_{\text{out}}}\xspace \ensuremath{R_{\text{side}}}\xspace \ensuremath{R_{\text{long}}}\xspace$, which scales linearly with the volume, is shown as a function of the average multiplicity $\ensuremath{\langle \text{d}N_{\text{ch}} / \text{d}\eta \rangle}\xspace$ for two intercepts of the pair momentum in the left panel. The volume is linearly increasing with increasing $\ensuremath{\langle \text{d}N_{\text{ch}} / \text{d}\eta \rangle}\xspace$, indicating a constant source density at the moment of freeze-out. The product is also shown as a function of $\ensuremath{\langle N_{\text{part}} \rangle}\xspace$ for the Glauber Model and for the Glauber-Gribov Color Fluctuation model (GGCF), including different values $\omega_{\sigma}$ for the magnitude of the color fluctuations. The extraction of centrality dependent values of $\ensuremath{\langle N_{\text{part}} \rangle}\xspace$ is described in Ref.~\cite{ATLAS_cent_pPb}.\\ \section{Acknowledgement} This work was supported by U.S. Department of Energy grant DE-FG02-86ER40281. 
\begin{figure}[t] \begin{minipage}{14pc} \includegraphics[scale = 0.43]{./fig_10.pdf} \end{minipage}\hspace{3.5pc}% \begin{minipage}{14pc} \includegraphics[scale = 0.43]{./fig_11.pdf} \end{minipage} \caption{The product $\ensuremath{R_{\text{out}}}\xspace \ensuremath{R_{\text{side}}}\xspace \ensuremath{R_{\text{long}}}\xspace$ as a function of $\ensuremath{\langle \text{d}N_{\text{ch}} / \text{d}\eta \rangle}\xspace$ for two different $\ensuremath{k_{\text{T}}}\xspace$ intervals (left panel) and as a function of $\ensuremath{\langle N_{\text{part}} \rangle}\xspace$ for three different models describing the initial geometry. Figures from~\cite{ATLAS_femt_pPb_CONF}.}\label{fig:HVolume} \end{figure} \bibliographystyle{elsarticle-num}
{ "attr-fineweb-edu": 1.914062, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUd_nxK7IACwaaAeE0
\section{Introduction} \label{introduction} Generative Adversarial Networks (GANs) \cite{goodfellow2014generative} can generate outstandingly realistic examples, but suffer from two recognized problems (illustrated in Fig.1(a),(b) respectively): (1) Mode collapse: the generators of GANs commonly miss modes in the training data while still successfully cheating the discriminators; (2) Training instability: the training process of GANs may fail at certain stages of training. Many researchers have been devoted to solving these problems theoretically \cite{arjovsky2017wasserstein,zhao2016energy,salimans2018improving} or empirically~\cite{salimans2016improved,metz2016unrolled,lin2018pacgan}, but they are still open. In this work, we tackle these problems by imposing an explicit manifold prior onto GANs. GAN is recognized to model a manifold from observed samples~\cite{zhu2016generative}. Since no latent representations are explicitly provided for the observed samples, GAN can be seen as implementing implicit manifold learning. It is worth noting that explicit manifold learning has corresponding advantages in addressing the above two problems: (1) By explicitly coding each observed sample on the generated manifold, all modes in the training data are guaranteed to be recovered and thus the mode collapse problem can be solved naturally; (2) Explicit manifold learning has the effect of pulling the generated manifold to the observed samples, which provides effective gradients to avoid the training instability at the beginning of training~\cite{salimans2016improved}. However, directly employing conventional manifold learning methods fails to recover the intrinsic manifold needed to generate realistic samples. Manifold learning methods usually rely on important assumptions, like neighboring points lying close to a locally linear patch or preserving local structures, which are generally not satisfied in the case of sparsely or unevenly distributed data. To construct an appropriate prior for GAN, we are motivated to further simplify the generated manifold to address the shortage of training data. Specifically, a new target for manifold learning, \emph{Minimum Manifold Coding} (MMC), is imposed to encourage a small Riemann volume of the generated manifold. The proposed MMC turns out to be a general form of the Shortest Hamiltonian Path (SHP) problem~\cite{polychronopoulos1996stochastic}, which aims to find a minimum manifold with fixed dimensions to cover all the samples and thus guarantees a simple and crease-free generated manifold. The standardized codes derived from MMC are then employed as a prior to regularize the generator training in GAN, which constitutes the proposed framework of MMCGAN. The samples generated by MMCGAN when addressing the mode collapse and training instability problems are illustrated in Fig.1(c),(d) correspondingly. We have conducted experiments on both the toy datasets 2D-SwissRoll and 25-Grid and the realistic datasets MNIST, Cifar10 and ImageNet to show the effectiveness of MMC and MMCGAN. The main contributions can be summarized as three-fold: \begin{itemize} \item We propose to employ explicit manifold learning as a prior to address the mode collapse and training instability problems of GAN. \item A new manifold learning target of Minimum Manifold Coding (MMC) is imposed to tackle the sparse and uneven data distribution and provide a more suitable prior for GAN training. An approximate solution is also provided for the MMC problem.
\item Extensive experiments show that MMCGAN can alleviate mode collapse, stabilize training, and improve the quality of generated samples on different GAN architectures. \end{itemize} \begin{figure*}[t] \centering \subfigure[WGAN-gp] { \includegraphics[width=1.5in]{gaussian_wgan.jpg} } \subfigure[standard GAN] { \includegraphics[width=1.5in]{gaussian_standard.jpg} } \subfigure[MMC+WGAN-gp] { \includegraphics[width=1.5in]{gaussian_wganour.jpg} } \subfigure[MMC+standard GAN] { \includegraphics[width=1.5in]{gaussian_standardour.jpg} } \caption{Examples of (a) mode collapse and (b) training instability on the 25-Grid dataset. (c) and (d) show the corresponding results from the proposed MMCGAN.} \end{figure*} \section{Related Work} \label{sec2} \subsection{Manifold Learning} \label{sec2.3} Manifold learning assumes that data are distributed around some low-dimensional manifolds rather than all over the data space. The goal of manifold learning is to discover this low-dimensional compact representation for the high-dimensional data. Classical manifold learning methods include LLE~\cite{roweis2000nonlinear}, Isomap~\cite{tenenbaum2000global}, Laplacian Eigenmaps~\cite{belkin2003laplacian}, ltsa~\cite{zhang2004principal}, t-SNE~\cite{maaten2008visualizing}, LargeVis~\cite{tang2016visualizing}, Umap~\cite{mcinnes2018umap}, etc. These methods basically consist of three steps: (1) finding k-nearest neighbors; (2) constructing a graph to preserve the structures of the raw data; and (3) embedding the raw data into a low-dimensional representation satisfying the manifold structure. While manifold learning methods are widely used in data visualization and dimensionality reduction problems, they are not readily used as priors for generating realistic samples. One of the most important reasons is that manifold learning assumes a topological space where every point has a neighborhood that is homeomorphic to the interior of a sphere in Euclidean space. Therefore, k-nearest neighbors represent the local structures only if the data are dense and evenly distributed, which is hardly satisfied for realistic samples like image datasets. This critically limits the integration of conventional manifold learning methods into generative models like GAN. In this work, we introduce a further MMC target to simplify the generated manifold and make it fit for GAN to generate realistic samples. Comparison results with conventional manifold learning methods will be reported in the experiment section. \subsection{Generative Adversarial Networks} \label{sec2.1} Generative Adversarial Networks have two major parts: Generator (G) and Discriminator (D). The original form of GAN~\cite{goodfellow2014generative} aims to find a Nash equilibrium to the following min-max problem: \begin{equation}\label{rawgan} \min_{G}{\max_{D}{\mathbb{E}_{x \sim q_{data}}[\log{D(x)}]+\mathbb{E}_{z \sim p_{z}}[\log{(1-D(G(z)))}]}} \end{equation} where $z \in R^m$ is a latent representation drawn from a distribution $p_z$ such as $\mathcal{N}(0,1)$ or $\mathcal{U}[-1,1]$. Theoretically, at the global optimum of Eqn.(\ref{rawgan}), the generator will produce samples with the same distribution as the data distribution. Unfortunately, standard GAN does not work well as it tends to be unstable during training, and its generator may cheat the discriminator without diversity, which is called mode collapse. Many researchers make efforts to solve these problems \cite{nowozin2016f,fedus2017many,salimans2018improving,mescheder2018training}.
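For concreteness, a minimal PyTorch-style sketch of one alternating update for the objective in Eqn.(\ref{rawgan}) is given below (illustrative only; $G$ and $D$ are assumed to be torch modules, with $D$ outputting raw logits):
\begin{verbatim}
import torch
import torch.nn.functional as F

def gan_step(G, D, opt_g, opt_d, x_real, m):
    """One alternating update for the min-max objective in Eqn.(1)."""
    z = torch.randn(x_real.size(0), m, device=x_real.device)

    # Discriminator update: ascend log D(x) + log(1 - D(G(z))).
    opt_d.zero_grad()
    real_logits = D(x_real)
    fake_logits = D(G(z).detach())
    d_loss = F.binary_cross_entropy_with_logits(real_logits, torch.ones_like(real_logits)) \
           + F.binary_cross_entropy_with_logits(fake_logits, torch.zeros_like(fake_logits))
    d_loss.backward()
    opt_d.step()

    # Generator update: the non-saturating variant (ascend log D(G(z))),
    # commonly used in place of descending log(1 - D(G(z))).
    opt_g.zero_grad()
    fake_logits = D(G(z))
    g_loss = F.binary_cross_entropy_with_logits(fake_logits, torch.ones_like(fake_logits))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
\end{verbatim}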
An important line of work starts from WGAN~\cite{arjovsky2017wasserstein}, which provides a comprehensive theoretical analysis and achieves good experimental performance. WGAN theoretically analyzes the reason why GAN is unstable and solves it by using the Wasserstein distance to substitute the Jensen-Shannon divergence in GAN. Then, the objective function changes to: \begin{equation}\label{wgan} \min_{G}{\max_{D}{\mathbb{E}_{x \sim q_{data}}[D(x)]+\mathbb{E}_{z \sim p_{z}}[-D(G(z))]}} \end{equation} Note that the discriminator here needs to be $1$-$Lipschitz$. WGAN achieves this target by clipping the weights. After that, WGAN-gp\cite{gulrajani2017improved} provides a more stable solution by imposing a gradient penalty: the derived gradients will not be limited to only two values $\{-1,1\}$. SNGAN\cite{miyato2018spectral} is the state-of-the-art choice in this line of work, which is faster than WGAN-gp and achieves better performance. Recently, SNGAN was implemented in BigGAN \cite{brock2018large}, which is the first photo-realistic GAN, with the following hinge loss as the objective function: \begin{equation}\label{sngan} \begin{split} &\min_{D}{\mathbb{E}_{x \sim q_{data}}[(1-D(x))_+]+\mathbb{E}_{z \sim p_{z}}[(1+D(G(z)))_+]}\\ &\min_{G}{\mathbb{E}_{z \sim p_{z}}[-D(G(z))]} \end{split} \end{equation} where $(\cdot)_+=\max(\cdot,0)$. We will compare the proposed MMCGAN with these typical GAN architectures to examine its effectiveness in addressing mode collapse and training instability. \subsection{GAN with Reconstruction Loss} \label{sec2.2} In this work, manifold learning serves as a prior for GAN by adding a manifold preserving reconstruction loss when training the generator. In fact, reconstruction loss is an intuitive and efficient way to guarantee that GAN does not lose information. This subsection reviews some GAN variants with reconstruction losses that penalize losing different types of information. CycleGAN\cite{zhu2017unpaired} employs a reconstruction loss to constrain the image-to-image translation to generate more diverse samples. EBGAN \cite{zhao2016energy} replaces the traditional discriminator loss with a reconstruction loss to show the performance of other energy functions. BAGAN \cite{yang2017dagan} uses the reconstruction loss to train a decoder as a better initialization for the generator; however, such initialization is a trick without theoretical analysis, and the coding of the auto-encoder has a completely different distribution from $p_z$, which makes its benefit less obvious. The most similar study is sinGAN~\cite{shaham2019singan}, which uses a reconstruction loss to guarantee that there exists an input noise to generate each of the raw image samples. The difference lies in that sinGAN is proposed for a single input, which will not converge in the case of multiple samples with random noise. Moreover, the reconstruction loss introduced in this work is motivated from a manifold preserving perspective, which is compatible with the manifold discovery nature of generative models. \section{Minimum Manifold Coding} \label{sec4} As discussed in the Introduction, in the case of sparse and uneven data distributions, it is difficult for the generator to correctly recover the manifold and generate realistic samples. We are motivated to introduce a new manifold learning target, Minimum Manifold Coding (MMC), to address these problems. Such a target encourages a simple and unfolded manifold, so that the generator can fit it easily.
In this section, we will first derive the formal definition of the MMC, and then analyze its correlation with the Shortest Hamiltonian Path to explain why MMC leads to an unfolded manifold. Finally, we will provide a practical algorithm to solve MMC. \subsection{Notations and Definitions} \label{sec4.1} \textbf{Notations}. Let the input data be $X=\{x_1,x_2,...,x_N\}$ where $\forall i,x_i \in R^n$, and we suppose all the samples are different from each other. The manifold learning methods embed $X$ into a low-dimensional space: we can obtain a set of codes corresponding to the input data: $C=\{c_1,c_2,...,c_N\}$, where $\forall i, c_i \in R^m$, $m<n$. Note that such codes represent the encoding mapping: $c_i=C(x_i), \forall i \in \{1,2,...,N\}$. The decoding function $f_C$ can recover the data $X$ from the coding $C$, and we denote $F_C$ as the set of all the decoding functions of $C$: $F_C=\{f_C:R^m\mapsto R^n| \forall c_i \in C, x_i=f_C(c_i)\}$. In addition, a decoding function $f_C$ can generate a corresponding manifold $M(f_C)=\{x|\forall c\in R^m, x=f_C(c)\}$. Obviously, all these manifolds of decoding functions in $F_C$ will intersect at the raw data points $X$. Recall that our target is to find a simple and unfolded manifold so that the generator of GAN can fit it easily. Intuitively, the manifold with the minimum Riemann volume is simple, and we derive a new objective named Minimum Manifold Coding (MMC) from this motivation. Before deriving the formal definition of MMC, we will define the \emph{Mapping Measure} first. \begin{Def}\label{MM} (Mapping Measure). Let the convex hull for a coding $C$ be $S=conv(C)$. A decoding function $f_C$ maps $S$ to the corresponding manifold: $f_C(S)\subset M(f_C)$. The mapping measure for $f_C$ is defined as the Riemann volume of $f_C(S)$: \begin{equation} \Lambda(f_C)=\int_{S}{\sqrt{\left | det(J_{f_C}(s)^TJ_{f_C}(s)) \right|}ds} \end{equation} where $det$ is the determinant of the matrix, and $J_{f_C}$ is the Jacobian matrix of $f_C$. \end{Def} \begin{Def}\label{MMC} (Coding Measure, Coding Manifold, and Minimum Manifold Coding). Let $F_C$ be the set of all the decoding functions with the coding $C$. The coding measure of the coding $C$ is the minimum mapping measure of the functions in $F_C$: \begin{equation} \rho(C)=\min_{f_C\in F_C} {\Lambda(f_C)} \end{equation} The coding manifold is the manifold generated by the decoding function $f_C$ with the minimum mapping measure: \begin{equation} M(C)=M(\arg\min_{f_C\in F_C} {\Lambda(f_C)}) \end{equation} Minimum manifold coding is to find a coding $C'$ with the minimum coding measure: \begin{equation} C'= \arg\min_{C}{\rho(C)} \end{equation} \end{Def} \begin{figure}[t] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=0.7\columnwidth, height=1.2in]{cross.png}} \caption{Illustration of the shortest Hamiltonian path with no crossing.} \end{center} \vskip -0.2in \end{figure} \subsection{General Form of the Shortest Hamiltonian Path} Such a definition of MMC has a good property: the coding measure only depends on the arrangement of the data on the manifold, rather than the scale of the codes. For example, if $m=1$, the manifolds will be curves, i.e., 1-D manifolds. As the coding measure is the minimum mapping measure, it represents a set of line segments which connects all the points. Specifically, we can visualize the coding manifolds by painting line segments from the data point with the minimum code to the point with the maximum code in the data space.
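To make Definitions \ref{MM} and \ref{MMC} concrete in the 1-D case, the following sketch (illustrative, assuming NumPy and a tiny toy point set) evaluates the coding measure as the length of the polyline visiting the points in code order; permuting the codes changes the measure, whereas rescaling them does not, and a brute-force search over all orderings recovers the shortest Hamiltonian path:
\begin{verbatim}
import numpy as np
from itertools import permutations

rng = np.random.default_rng(3)
X = rng.random((8, 2))                   # a tiny toy point set in R^2

def path_length(points):
    return np.linalg.norm(np.diff(points, axis=0), axis=1).sum()

def coding_measure_1d(data, codes):
    """1-D coding measure: length of the polyline visiting the points in code order."""
    return path_length(data[np.argsort(codes)])

codes = rng.random(len(X))
print(coding_measure_1d(X, codes))               # depends only on the ordering
print(coding_measure_1d(X, 5.0 * codes - 2.0))   # identical: rescaling keeps the order

# Brute-force MMC on this tiny set = the shortest Hamiltonian path (intractable in general).
best = min(path_length(X[list(p)]) for p in permutations(range(len(X))))
print(best)
\end{verbatim}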
It is clear that such a manifold only depends on the order of the codes, rather than the specific code values. As a matter of fact, the minimum manifold coding with a 1-D manifold is equivalent to the Shortest Hamiltonian Path (SHP). Since the convex hull for a 1-D coding $C$ is a line segment ranging from the minimum code to the maximum code, the coding measure of the coding $C$ represents a path visiting each vertex exactly once, which is called a Hamiltonian path~\cite{tutte1946hamiltonian}. Therefore, minimizing the manifold coding can retrieve the shortest Hamiltonian path. In other words, MMC can be seen as a general form of the shortest Hamiltonian path. It is worth noting that the shortest Hamiltonian path represents a simple curve with few crossings. Suppose there is a crossing ($AC+BD$) in the Hamiltonian path, see Fig.2. Without loss of generality, we suppose the crossing aims to connect $AB$ and $CD$. It is clear that $AD+BC$ can also achieve the same connection target with no loss, and $AD+BC<AC+BD$. As the general form of SHP, MMC is expected to discover a manifold with fewer crossings, or even an unfolded one. \subsection{Approximate Solution of MMC} \label{sec4.3} As is well known, the SHP is an NP-hard problem, so the MMC problem is also NP-hard and can only be approximately solved. In this subsection, we provide a practical approximate solution of the MMC problem. In brief, we split this problem into two parts: getting the decoding functions, and pursuing smaller mapping measures. For the first part, we use an auto-encoder with a reconstruction loss. For the second part, we have the following theorem: \begin{thm}\label{thm} Let $f_C\in F_C$ be a decoding function which satisfies the L-Lipschitz condition on $S=conv(C)$, then the mapping measure of $f_C$ has an upper bound: \begin{equation} \Lambda(f_C)=\int_{S}{\sqrt{\left | det(J_{f_C}(s)^TJ_{f_C}(s)) \right|}ds}\leq L^m \int_{S}{ds} \end{equation} where $m$ is the dimension of the coding space. \end{thm} The proof is provided in Supplement-A. According to this theorem, as there always exists an $L$ such that the decoder satisfies the $L$-$Lipschitz$ condition, we can use a minimum convex hull loss to get a smaller convex hull $S$ and obtain a lower upper bound. In this work, we choose the L2-norm constraint for simplicity, so the objective function for the auto-encoder becomes: \begin{equation}\label{gam} \min_{Dec,Enc}{\frac{1}{2}\mathbb{E}_{x \sim q_{data}}(\|x-Dec(Enc(x))\|^2+\gamma\|Enc(x)\|^2)} \end{equation} where $Dec$ is the decoder, and $Enc$ is the encoder of the auto-encoder. After training, we can obtain a coding with a small coding measure. Recall that the coding measure will not change if we use a transformation which does not alter the arrangement of the codes, so we can design a proper transformation to obtain an expected coding distribution. Note that the latent representations of GAN are drawn from the distribution $\mathcal{N}(0,1)$, and a code with zero mean and unit variance will be more reasonable as the prior. In this work, we use z-score standardization as the transformation function: $C'=\frac{C-\mathbb{E}C}{std(C)}$.
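A minimal PyTorch-style sketch of this objective, Eqn.(\ref{gam}), and of the final z-score transformation is given below (illustrative only; the encoder/decoder architectures are placeholders and the learning-rate schedule is omitted):
\begin{verbatim}
import torch
import torch.nn as nn

class MMCAutoEncoder(nn.Module):
    def __init__(self, n, m, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n, hidden), nn.ReLU(), nn.Linear(hidden, m))
        self.dec = nn.Sequential(nn.Linear(m, hidden), nn.ReLU(), nn.Linear(hidden, n))

    def forward(self, x):
        c = self.enc(x)
        return c, self.dec(c)

def mmc_ae_loss(model, x, gamma):
    """Reconstruction term plus the convex hull (L2-norm) penalty of Eqn.(8)."""
    c, x_rec = model(x)
    return 0.5 * ((x - x_rec).pow(2).sum(dim=1) + gamma * c.pow(2).sum(dim=1)).mean()

@torch.no_grad()
def standardized_codes(model, X):
    """z-score standardization C' = (C - E[C]) / std(C), used as the GAN prior."""
    C = model.enc(X)
    return (C - C.mean(dim=0)) / C.std(dim=0)
\end{verbatim}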
\begin{figure}[t] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=0.65\columnwidth, height=1.4in]{reconstruct.jpg}} \caption{The manifold preserving reconstruction loss pulls the generator manifold to the data points according to the corresponding codes.} \label{mc} \end{center} \vskip -0.2in \end{figure} \begin{figure*}[t] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=5.5in]{our_alg.png}} \caption{Illustration of the MMCGAN framework.} \end{center} \vskip -0.2in \end{figure*} \section{MMCGAN} \label{sec3.1} In this section, we will use the manifold obtained by MMC as a prior to improve the training of GAN, which we call MMCGAN. There are many ways to implement the idea of using an explicit manifold learning prior. We employ an intuitive one: constraining the generator to fit the prior manifold by an $L2$ loss between the generator manifold and the prior manifold: $R=\mathbb{E}_{x \sim q_{data}}\|x-G(C(x))\|^2$. We call this L2 loss the manifold preserving reconstruction loss. The overall framework of MMCGAN is shown in Fig.4, which consists of the auto-encoder component to derive the manifold prior, and the GAN component using the prior to regularize the generator training. The training process has three steps: in the first step, we employ an auto-encoder with the convex hull loss to get the latent code that minimizes the mapping measure. The derived code is then standardized with z-score to be compatible with the input distribution of GAN. In the second step, we use the standardized code as a prior to initialize the generator. Specifically, the manifold preserving reconstruction loss is imposed at the beginning of the generator training, e.g., for the hinge loss, we have: \begin{equation}\label{rlhinge} \begin{split} &\min_{D}{\mathbb{E}_{x \sim q_{data}}[(1-D(x))_+]+\mathbb{E}_{z \sim p_{z}}[(1+D(G(z)))_+]}\\ &\min_{G}{\mathbb{E}_{z \sim p_{z}}[-D(G(z))]}+\frac{\lambda}{2}\mathbb{E}_{x \sim q_{data}}\|x-G(C(x))\|^2 \end{split} \end{equation} Fig.3 illustrates the role of the manifold preserving reconstruction loss: it can be seen as anchors pulling the generator manifold close to the data manifold, and it ensures that GAN is capable of producing all the input samples. Furthermore, GAN training is usually unstable at the beginning because of the adversarial mechanism. The manifold preserving reconstruction loss can provide consistent gradients to stabilize training. When the generator manifold is close enough to the AE recovered manifold, the role of the manifold preserving reconstruction loss will be trivial. In contrast, further imposing the loss will prevent the generator from exploring its potential to cheat the discriminator. Moreover, it is difficult to achieve the Nash equilibrium and to guarantee that the generator distribution is the same as the data distribution. Therefore, in the third step, we remove the manifold preserving reconstruction loss and turn to training GAN in the standard way. Empirically, we use the moving average of the reconstruction loss to measure the closeness between the generator manifold and the AE recovered manifold. A threshold value $T$ is set, and the third step switches on when the moving average is below $T$.
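A PyTorch-style sketch of one second-step update with the hinge objective in Eqn.(\ref{rlhinge}) is given below (illustrative only; \verb|codes| denotes the standardized MMC codes $C(x)$ paired with the data batch, the threshold value is a placeholder, and the returned flag indicates when to switch to the standard third-step training):
\begin{verbatim}
import torch
import torch.nn.functional as F

ema_rec, ema_momentum, T = None, 0.999, 30.0   # T is dataset-dependent (see Sec. 5)

def mmcgan_step2(G, D, opt_g, opt_d, x_real, codes, lam=1.0):
    """One update of Eqn.(9): hinge GAN losses plus the manifold preserving reconstruction."""
    global ema_rec
    z = torch.randn(x_real.size(0), codes.size(1), device=x_real.device)

    # Discriminator hinge loss: E[(1 - D(x))_+] + E[(1 + D(G(z)))_+].
    opt_d.zero_grad()
    d_loss = F.relu(1.0 - D(x_real)).mean() + F.relu(1.0 + D(G(z).detach())).mean()
    d_loss.backward()
    opt_d.step()

    # Generator hinge loss plus (lam/2) * ||x - G(C(x))||^2.
    opt_g.zero_grad()
    rec = (x_real - G(codes)).pow(2).sum(dim=tuple(range(1, x_real.dim()))).mean()
    g_loss = -D(G(z)).mean() + 0.5 * lam * rec
    g_loss.backward()
    opt_g.step()

    # Moving average of the reconstruction loss; once below T, switch to step 3.
    ema_rec = rec.item() if ema_rec is None else \
        ema_momentum * ema_rec + (1.0 - ema_momentum) * rec.item()
    return ema_rec < T
\end{verbatim}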
\begin{figure*}[t] \centering \subfigure[MMC] { \includegraphics[width=1.2in]{MMC_swiss.jpg} } \subfigure[Laplace Eigenmap] { \includegraphics[width=1.2in]{LE_swiss.jpg} } \subfigure[LLE] { \includegraphics[width=1.2in]{LLE_swiss.jpg} } \subfigure[ltsa] { \includegraphics[width=1.2in]{ltsa_swiss.jpg} } \subfigure[UMAP] { \includegraphics[width=1.2in]{UMAP_swiss.jpg} } \subfigure[MMC] { \includegraphics[width=1.2in]{MMC_gaussian.jpg} } \subfigure[Laplace Eigenmap] { \includegraphics[width=1.2in]{LE_gaussian.jpg} } \subfigure[LLE] { \includegraphics[width=1.2in]{LLE_gaussian.jpg} } \subfigure[ltsa] { \includegraphics[width=1.2in]{ltsa_gaussian.jpg} } \subfigure[UMAP] { \includegraphics[width=1.2in]{UMAP_gaussian.jpg} } \caption{Recovered manifolds by MMC and traditional manifold learning methods on: (a)-(e) 2D-SwissRoll and (f)-(j) 25-Grid.} \label{fig2} \end{figure*} \section{Experiments} \label{sec5} In this section, after introducing the experimental settings of MMCGAN, we first compare MMC with traditional manifold learning methods on sparse and uneven data, and then evaluate MMCGAN on different datasets with widely used GAN architectures to show its effectiveness in avoiding mode collapse and stabilizing training. \subsection{Implementation Settings} Note that delicate tuning of model hyperparameters and learning parameters is not necessary for MMCGAN, as most settings are universal across datasets and model architectures. There are two model hyper-parameters: we report the experimental results using $\gamma=\frac{1}{10m}$ for Eqn.(\ref{gam}) and $\lambda=1$ for Eqn.(\ref{rlhinge}), where $m$ is the dimension of the latent representation in the GAN. For the learning parameters, we enumerate them according to the training process illustrated in Section 4. For the first step, we use the Adam optimizer~\cite{kingma2014adam} with $\beta_1 = 0.5$ and $\beta_2 = 0.9$. The learning rate scheme is the one described in SGDR~\cite{loshchilov2016sgdr}, which can accelerate convergence, with $T_0=10$, $\eta_{min}=0$ and $\eta_{max}=0.001$. For the second step, the momentum of the moving average is $0.999$ in this work, and the threshold $T$ is the moving average of the reconstruction loss of the first step. We have conducted experiments on 5 datasets, and the thresholds chosen for these experiments are: (1) \emph{2D-SwissRoll}: 0.1; (2) \emph{25-Grid}: 0.01; (3) \emph{MNIST}: 6; (4) \emph{Cifar10}: 30; (5) \emph{ImageNet20}: 1000. For the third step, the training settings are the same as for a normal GAN. The specific hyperparameters and architectures of the benchmark GANs used in practice are detailed in Supplement-B. In addition, all the experiments use data-parallel distributed training in PyTorch with 6 Nvidia Titan X 12G GPUs. The source code is provided in the supplementary materials. \subsection{MMC Evaluation} \label{sec5.1} The choice of the explicit manifold learning prior determines the performance. To evaluate the performance of the MMC prior, we conducted experiments on two synthetic datasets that are sparsely and unevenly distributed, respectively: (1) \emph{2D-SwissRoll}: 200 samples obtained by \verb|sklearn.datasets.make_swiss_roll| with a noise level of 0.25. To make the results clearer, we only use the first two dimensions and scale them by $\frac{2}{15}$.
(2) \emph{25-Grid}~\cite{lin2018pacgan}: 200 data samples from a mixture of 25 two-dimensional Gaussians with the same variance $\frac{1}{3200}$ and different means $(\frac{i}{2\sqrt{2}},\frac{j}{2\sqrt{2}})$, where $i, j \in \{-2, -1, 0, 1, 2\}$. We also examined the traditional manifold learning methods: PCA~\cite{wold1987principal}, Isomap~\cite{tenenbaum2000global}, Laplacian Eigenmaps~\cite{belkin2003laplacian}, LLE~\cite{roweis2000nonlinear}, HLLE~\cite{donoho2003hessian}, MLLE~\cite{zhang2007mlle}, ltsa~\cite{zhang2004principal}, t-SNE~\cite{maaten2008visualizing}, and UMAP~\cite{mcinnes2018umap}. These methods are all implemented with their official packages or sklearn, and the hyperparameters are \verb|n_neighbors=3, n_components=1|. To evaluate the performance intuitively, as analyzed in Section 3.2, we connect all the data points with line segments according to the order of the codes. Fig.5 shows the results of MMC and 4 of the traditional manifold learning methods. Results for the other manifold learning methods can be seen in Supplement-C. It can be seen that the manifolds recovered by the examined traditional manifold learning methods tend to be folded and twisted, while MMC derives simple manifolds based on the proposed approximate solution. \subsection{MMCGAN Evaluation} \begin{figure}[t] \vskip 0.2in \begin{center} \centerline{\includegraphics[width=0.7\columnwidth]{modecollapse.eps}} \caption{Mode collapse experiments: comparison between standard GAN architectures, PacGAN and the proposed MMCGAN.} \label{mc} \end{center} \vskip -0.2in \end{figure} \begin{figure*}[t] \centering \subfigure[WGAN-gp] { \includegraphics[width=1.6in]{gaussian_wgan.jpg} } \subfigure[WGAN-gp(PacGAN)] { \includegraphics[width=1.6in]{gaussian_wganpac.jpg} } \subfigure[MMC+WGAN-gp] { \includegraphics[width=1.6in]{gaussian_wganour.jpg} } \subfigure[SNGAN] { \includegraphics[width=1.6in]{gaussian_SN.jpg} } \subfigure[SNGAN(PacGAN)] { \includegraphics[width=1.6in]{gaussian_SNpac.jpg} } \subfigure[MMC+SNGAN] { \includegraphics[width=1.6in]{gaussian_SNour.jpg} } \caption{Qualitative results of mode collapse for the WGAN-gp and SNGAN architectures. } \label{fig} \end{figure*} \subsubsection{Mode Collapse Results} \label{sec5.2} Mode collapse indicates that the generator only produces data within a subset of modes. This phenomenon is not yet well understood, and previous works provide several hypotheses, such as improper objective functions~\cite{arora2017generalization,arjovsky2017wasserstein} and weak discriminators~\cite{li2017towards,salimans2016improved}. Based on these hypotheses, previous works have proposed many methods, e.g., ATI~\cite{dumoulin2016adversarially}, VEEGAN~\cite{srivastava2017veegan}, unrolled GAN~\cite{metz2016unrolled}. The state-of-the-art is PacGAN~\cite{lin2018pacgan}, which strengthens the discriminator by packing its inputs. In this subsection, we compare MMCGAN with PacGAN on \emph{25-Grid} for three different architectures: the standard GAN, WGAN-gp and SNGAN. Specifically, we set the latent representation dimension to $m=1$ to make the generator produce a low-dimensional manifold. Since \emph{25-Grid} is constructed from 25 Gaussian distributions, it has 25 different modes, and we use the distance between the generated samples and the 25 Gaussian centers to examine whether the generator can produce these modes; a sketch of this check is given below.
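The following is a minimal numpy sketch of this mode-coverage check; the sample size and distance threshold are the ones quantified in the next paragraph, and the generator call in the comment is only indicative.
\begin{verbatim}
# Minimal numpy sketch of the mode-coverage check on 25-Grid: a mode counts
# as recovered if at least one generated sample lies within the given radius
# of its Gaussian center.
import numpy as np

centers = np.array([[i / (2 * np.sqrt(2)), j / (2 * np.sqrt(2))]
                    for i in range(-2, 3) for j in range(-2, 3)])  # 25 modes

def covered_modes(samples, centers, radius=0.1):
    # pairwise distances, shape (n_samples, 25)
    d = np.linalg.norm(samples[:, None, :] - centers[None, :, :], axis=-1)
    return int((d.min(axis=0) < radius).sum())

# e.g., with a 1-D latent space (m = 1):
#   samples = G(torch.randn(200, 1)).detach().cpu().numpy()
#   n_modes = covered_modes(samples, centers)
\end{verbatim}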
Concretely, we drew 200 samples from the generator distribution and recorded how many modes have at least one sample close enough to the center, i.e., within a distance of $0.1$. Each experiment is repeated 5 times. In each experiment, to obtain stable results, we averaged the last 5 measurements before the end of training. The results are summarized in Fig.6. It can be seen that MMCGAN recovers more modes on the \emph{25-Grid} dataset for WGAN-gp and SNGAN, but works slightly worse for the standard GAN. That is because MMCGAN only improves the initial state of training, while the standard GAN has a well-recognized problem with its global minima~\cite{salimans2016improved}, which is largely resolved in WGAN-gp and SNGAN. PacGAN also contains a special mechanism to address this problem by strengthening the discriminator. The qualitative results for the WGAN-gp and SNGAN architectures are shown in Fig.7: MMCGAN successfully covers almost all the modes, while the raw GANs and PacGAN usually miss some modes, especially in the sparsely and unevenly distributed areas. \begin{figure}[t] \centering \subfigure[standard GAN] { \includegraphics[width=1.5in]{MNIST_standard.png} } \subfigure[MMC+standard GAN] { \includegraphics[width=1.5in]{MNIST_standardour.png} } \subfigure[SNGAN] { \includegraphics[width=1.5in]{MNIST_SN.png} } \subfigure[MMC+SNGAN] { \includegraphics[width=1.5in]{MNIST_SNour.png} } \caption{Visualization of samples produced by generators on MNIST: (a) standard GAN without BatchNorm; (c) SNGAN without BatchNorm; (b) and (d) illustrate the corresponding results with the MMC prior.} \label{fig} \end{figure} \subsubsection{Training Stability Results} \label{sec5.1} We use two datasets to show the performance of MMCGAN in stabilizing training: (1) \emph{25-Grid}: We visualize the generator distributions in Fig.1(b)(d), where the green points are the fake data and the yellow points are the training data. We also draw the contour lines of the discriminators to show the training trend. It can be seen that the GAN with the standard objective (Eqn.(\ref{rawgan})) is very fragile: the generator manifold deviates too far from the data. The proposed MMC prior successfully avoids such deviation and stabilizes training. (2) \emph{MNIST}~\cite{lecun1998mnist}: we use the training set, which consists of 60K $28\times28$ images of handwritten digits. The benchmark architecture is a 3-layer DCGAN \cite{radford2015unsupervised} without BatchNorm~\footnote{Batch normalization plays an important role in stabilizing the training of DCGAN; we remove it to obtain an unstable control group that shows the effect of MMCGAN.}~\cite{ioffe2015batch}. The generated images are visualized in Fig.8: the standard GAN (Eqn.(\ref{rawgan})) and SNGAN with hinge loss (Eqn.(\ref{sngan})), both without BatchNorm, failed, while adding the MMC prior successfully recovered the data manifold and generated realistic handwritten digit images. \begin{figure}[t] \centering \subfigure[IS] { \includegraphics[width=1.5in]{IS_C10.eps} } \subfigure[FID] { \includegraphics[width=1.5in]{FID_C10.eps} } \caption{Inception score and FID of MMCGAN and baseline on Cifar10. } \end{figure} \begin{figure}[t] \centering \subfigure[IS] { \includegraphics[width=1.5in]{IS_I128.eps} } \subfigure[FID] { \includegraphics[width=1.5in]{FID_I128.eps} } \caption{Inception score and FID of MMCGAN and baseline on ImageNet20.
} \end{figure} \subsubsection{Quantitative Results} \label{sec5.3} In this subsection, we examine the Inception Score~\cite{salimans2016improved} and FID \cite{heusel2017gans} to quantitatively evaluate the quality of the samples generated by MMCGAN. Experiments are conducted on CIFAR-10 and ImageNet20. \emph{Cifar-10}: The CIFAR-10 dataset consists of 60k 32$\times$32 color images in 10 classes. We use the 50k training images for the training of the GAN. In particular, we choose SNGAN with the implementation of BigGAN as the benchmark. We report the IS and FID measures in Fig.9: as training proceeds, MMCGAN improves the FID measure while keeping a similar IS measure. This validates that MMCGAN avoids mode collapse and keeps the same global minima. \emph{ImageNet20}: We select a subset of ImageNet ILSVRC 2012~\cite{deng2009imagenet} for evaluation: 20 categories whose identifiers start with 'n014' and 'n015', in total about 26k $128\times128$ images. SAGAN~\cite{zhang2018self} is selected as the baseline due to its efficiency on large-scale, high-resolution datasets. The model was implemented based on the code of BigGAN. Each experiment is repeated 3 times, and Fig.10 plots the means and standard deviations (error bars) of the IS and FID measures. All runs of the original SAGAN break down before $2\times 10^{4}$ iterations, while MMCGAN achieves better performance and remains stable up to $4\times 10^{4}$ iterations. Note that the training of SAGAN is stable on the complete ImageNet. The observed instability might be due to the lack of data at such a high resolution; the MMC prior successfully stabilizes the training process and compensates for the data shortage. \section{Conclusion and Future Work} \label{sec6} In this work, we introduce explicit manifold learning as a prior for GAN to avoid mode collapse and stabilize training. We further introduce a new objective, Minimum Manifold Coding, for the manifold learning step. This objective is validated to discover simple and unfolded manifolds even when the data are sparsely or unevenly distributed. There remain many interesting directions to be explored in the future. The first direction is a theoretical proof of equilibrium and convergence, together with an analysis of the improvements in mode collapse and training stability. The second direction is to pursue more characteristics of generative models from the perspective of manifold learning, e.g., regularizing the completeness of the manifold to obtain a balanced GAN distribution for data augmentation. Another interesting direction is to explore the potential of GAN beyond data generation. As the generator manifold can closely approach the data manifold with minimum Riemann volume, we can employ GAN to approximate the solution of MMC, SHP and other similar optimization problems. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:introduction} The discovery of neutrino oscillations, made jointly by Super--Kamiokande (SK)~\cite{Super-Kamiokande:1998kpq} and the Sudbury Neutrino Observatory (SNO)~\cite{SNO:2002tuh}, has given new insight into probing new--physics beyond the Standard Model (BSM). Neutrino oscillations essentially confirm that neutrinos are massive and provide the first clear experimental hint of BSM--physics. The parameters associated with neutrino oscillations are being widely probed in different neutrino experiments \cite{Super-Kamiokande:2004orf,KamLAND:2004mhv,MINOS:2008kxu,MINOS:2011neo}. Neutrinos are one of the most promising portals to explore new--physics in the leptonic sector. The BSM models which describe neutrino masses and mixing often involve new, unknown couplings of neutrinos, termed non--standard interactions (NSIs). Given the unprecedented accuracy and precision provided by current and upcoming neutrino experiments, these subdominant effects on neutrino oscillations may have a significant impact on their physics reach. In this work, we primarily explore the impacts of a scalar mediated NSI on the measurement of the leptonic phase $\delta_{CP}$ in three long baseline (LBL) neutrino experiments: DUNE \cite{Abi:2020loh}, T2HK \cite{Hyper-KamiokandeProto-:2015xww}, and T2HKK \cite{Hyper-Kamiokande:2016srs}. We have performed a synergy analysis combining these LBL experiments to probe the impact of scalar NSI in a model--independent way. The combination of different experiments often provides better sensitivity and also highlights various possible synergies among the experiments. For an unambiguous determination of the neutrino oscillation parameters, a combination of various neutrino experiments is needed, as the degenerate parameter space is different for different experiments \cite{Barger:2001yr,Burguet-Castell:2002ald,Minakata:2001qm,Fogli:1996pv}. In~\cite{Choubey:2017cba}, the authors showed that, in the presence of a light sterile neutrino, a combination of three different LBL experiments (DUNE, T2HK, T2HKK) gives better sensitivity (more than 5$\sigma$) towards the CP--violation measurement as compared to the individual sensitivities. In the same work, the authors also pointed out that the combination of the experiments significantly improved the mass hierarchy as well as the octant discovery potential sensitivities. It has also been shown \cite{Prakash:2012az} that the mass hierarchy--$\delta_{CP}$ degeneracy can be resolved using the synergy between the two LBL experiments T2K \cite{T2K:2011qtm} and NO$\nu$A \cite{Prakash:2012az}. In~\cite{Masud:2016bvp}, the authors combined DUNE, T2K and NO$\nu$A to explore possible synergies among these experiments towards a vector NSI. It is found that a combined sensitivity study of these experiments can be crucial to pin down CP--violation and the CP--measurement in the leptonic sector. In \cite{Choubey:2022gzv}, the authors have shown that the synergy between the T2HK \cite{Hyper-KamiokandeProto-:2015xww} and JUNO \cite{JUNO:2015zny} experiments can provide an improved sensitivity of up to 9$\sigma$ towards the mass ordering of neutrinos. In \cite{Agarwalla:2013ju}, the authors pointed out that the $\theta_{23}$ octant ambiguity can be resolved by combining the sensitivities of T2K and NO$\nu$A, irrespective of the hierarchy and $\delta_{CP}$.
The physics potential can be significantly enhanced by combining a number of experiments, as this broadens the sensitive energy range as well as the event distributions. The synergy between various neutrino experiments is often used for a better understanding and optimization of the fundamental knowledge of neutrino oscillations \cite{Cao:2020ans,Ghosh:2017ged,Ghosh:2015ena,Ghosh:2012px,Ghosh:2014dba,Bharti:2016hfb,Ballett:2016daj,Fukasawa:2016yue,Minakata:2003wq,Ghosh:2014zea}. In this precision era of neutrino physics, all the ongoing and upcoming neutrino experiments focus on measuring the neutrino mixing parameters with utmost accuracy. The primary goal of these experiments is to address the three main unknowns in the neutrino sector, i.e., the hierarchy of neutrino masses~\cite{Capozzi:2017ipn}, the octant of the mixing angle $\theta_{23}$~\cite{Agarwalla:2013ju} and the determination of the CP phase ($\delta_{CP}$) in the leptonic sector~\cite{Kobayashi:1973fv}. The robust nature of the ongoing and future neutrino experiments makes them sensitive to subdominant effects in the neutrino sector. One such subdominant effect is NSI, which may have a significant impact on the measurement of the oscillation parameters in various neutrino experiments. The idea of NSI~\cite{Wolfenstein:1977ue} was initially introduced as a coupling of neutrinos with the environmental fermions through a vector mediator. This kind of vector mediated NSI appears as a matter potential term in the neutrino oscillation Hamiltonian. The vector mediated NSI has been widely explored \cite{Miranda:2015dra, Farzan:2017xzy, Biggio:2009nt, Babu:2019mfe, Ohlsson:2012kf}, and it is an excellent candidate to probe physics beyond the Standard Model. It can have a significant effect on the physics reach of various neutrino experiments~\cite{Liao:2016orc,Friedland:2012tq,Coelho:2012bp,Rahman:2015vqa,Coloma:2015kiu,deGouvea:2015ndi,Liao:2016hsa,Forero:2016cmb,Huitu:2016bmb,Bakhti:2016prn,Kumar:2021lrn,Agarwalla:2015cta,Agarwalla:2014bsa,Agarwalla:2012wf,Blennow:2016etl,Blennow:2015nxa,Deepthi:2016erc,Masud:2021ves,Soumya:2019kto,Masud:2018pig,Masud:2017kdi,Masud:2015xva,Ge:2016dlx,Fukasawa:2016lew,Chatterjee:2021wac} and these effects are being widely probed~\cite{Khatun:2019tad,Chatterjee:2014gxa,Super-Kamiokande:2011dam, Davidson:2003ha,Choubey:2014iia,Denton:2018xmq,Farzan:2015hkd,Farzan:2015doa,Esmaili:2013fva,Khan:2021wzy,Liu:2020emq,Chatterjee:2020kkm,Denton:2020uda,Babu:2020nna,Flores:2020lji,Farzan:2019xor,Pandey:2019apj}. A global status of the bounds on the vector NSI parameters can be found in~\cite{Esteban:2019lfo,Coloma:2019mbs}. Here, we explore the non-standard coupling of neutrinos with a scalar~\cite{Ge:2018uhz, Yang:2018yvk, Khan:2019jvr, Medhi:2021wxj}. The scalar mediated NSI affects the neutrino mass term in the neutrino Hamiltonian and can provide unique phenomenology in neutrino oscillations. Unlike the vector NSI, the effects of scalar NSI scale linearly with the environmental matter density, which makes long-baseline neutrino experiments among the most suitable candidates to probe scalar NSI. In~\cite{Ge:2018uhz}, the authors introduced the idea of scalar NSI to fit the recent data from the Borexino experiment. Although there are currently no stringent bounds on the scalar NSI parameters, a few studies have attempted to place constraints using astrophysical and cosmological limits \cite{Babu:2019iml,Venzor:2020ova}.
In our work~\cite{Medhi:2021wxj}, we explored the possible impacts of scalar NSI on the CP-violation sensitivities at LBL experiments, taking DUNE as a case study. It was found that the presence of scalar NSI significantly impacts the CP--sensitivities of DUNE. These results are interesting and act as a motivation to further explore scalar NSI in LBL experiments. Combining various LBL experiments also becomes crucial, as such a synergy study provides a more precise sensitivity picture. In this paper we perform, for the first time, a synergy study of the effects of scalar NSI on three LBL experiments, viz. DUNE, T2HK and T2HKK, in a model--independent way. We have probed the effects of scalar NSI, one element at a time, and have found notable impacts of scalar NSI on the physics sensitivities of the chosen neutrino experiments. We have primarily explored the possible impacts of the scalar NSI parameters on the CP--violation (CPV) sensitivities. We have then performed a combined analysis of DUNE with T2HK as well as DUNE with T2HKK to test possible synergies among these experiments. We show that for some chosen values of the NSI parameters the CPV sensitivities get enhanced, giving improved precision in the $\delta_{CP}$ measurement. It is found that for all the chosen negative values of the NSI parameters, the CPV sensitivities always get suppressed. We also see that a positive NSI parameter can fake the CP effects and mimic the standard CPV sensitivity at DUNE and T2HKK. The joint study of the LBL experiments (DUNE+T2HK, DUNE+T2HKK, DUNE+T2HK+T2HKK) improves the overall sensitivities and can help in lifting the underlying degeneracy in CPV measurements. It is therefore crucial to put constraints on these NSI parameters for accurate measurements and a better understanding of the data coming from various neutrino experiments. The paper is organized as follows: In section \ref{sec:framework} we discuss the detailed formalism of scalar NSI. In section \ref{sec:methdology}, we describe the simulation methodology used in our analysis. The technical details of the three neutrino experiments used in our simulations are presented in section \ref{sec:experiment}. The impacts of NSI on the oscillation probabilities and on the CP--asymmetry are shown in section \ref{sec:oscillation_probabilities} and section \ref{sec:CP_asymmetry} respectively. We discuss the results of the $\chi^2$ analyses on the NSI parameter sensitivity and the CP-violation sensitivity in section \ref{sec:results}. We conclude our findings in section \ref{sec:summary}. \section{Scalar NSI Formalism} \label{sec:framework} The elusive neutrinos interact with matter only through the weak interaction and gravity. Neutrino interactions proceed through the exchange of a W$^{\pm}$ boson (Charged Current -- CC) or a Z boson (Neutral Current -- NC) \cite{Linder:2005fc}. Both interactions appear as matter potentials in the neutrino Hamiltonian; however, only the CC--interactions contribute to the oscillation probabilities. The NC--interactions do not contribute to the oscillations as they appear as a common term in the Hamiltonian.
The Lagrangian for neutrino--matter coupling via CC interactions may be written as \cite{Wolfenstein:1977ue, Nieves:2003in, Nishi:2004st, Maki:1962mu}, \begin{equation} \mathcal L^{\rm eff}_{\rm cc} = - \frac {4 G_F}{\sqrt 2} \left[ \overline{\nu_e}(p_3) \gamma_\mu P_L \nu_e(p_2) \right] \left[ \overline e(p_1) \gamma^\mu P_L e(p_4) \right], \label{eq:Leff} \end{equation} \noindent where, $G_F$ is the Fermi coupling constant, $p_{i}$'s are momenta of incoming and outgoing states and $P_L = (1 - \gamma_5)/2$, $P_R = (1+ \gamma_5)/2$) are left and right chiral projection operators. The effective Hamiltonian, $\mathcal{H_{\rm eff}}$, for neutrino oscillations in matter is framed as \cite{Bilenky:1987ty}, \begin{equation} \mathcal{H_{\rm eff}} = E_\nu + \frac{1}{2E_\nu} \, \mathcal{U} {\rm diag}(0, \Delta m^2_{21}, \Delta m^2_{31}) \mathcal{U}^\dag + {\rm diag} (V_{\rm CC}, 0 , 0)\,, \label{eq:matter_H2} \end{equation} \noindent where, \\ $\mathcal{U}$ = Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix \cite{Pontecorvo:1957cp,Pontecorvo:1957qd,Pontecorvo:1967fh,ParticleDataGroup:2020ssz},\\ $E_\nu$ = neutrino energy, \\ $\Delta m^2_{ij} = m_i^2 - m_j^2$, are the neutrino mass-squared differences, and \\ $V_{\rm SI} = \pm \sqrt 2 G_F n_e$, comes due to CC neutrino matter interactions. The non-standard coupling of neutrinos with a scalar~\cite{Ge:2018uhz,Babu:2019iml} is also an interesting sector to probe new--physics beyond SM. The effective Lagrangian for neutrinos coupling via a scalar, $\phi$ may be framed as, \begin{align} {\cal L}_{\rm eff}^{\rm S} \ = \ \frac{y_f y_{\alpha\beta}}{m_\phi^2}(\bar{\nu}_\alpha(p_3) \nu_\beta(p_2))(\bar{f}(p_1)f(p_4)) \,, \label{eq:nsi_L} \end{align}\\ where, \\ \noindent $\alpha$, $\beta$ refer to the neutrino flavours e, $\mu$, $\tau$,\\ \noindent $f$ = e, u, d indicate the matter fermions, (e: electron, u: up-quark, d: down-quark),\\ \noindent $\bar{f}$ is for corresponding anti fermions, \\ \noindent $y_{\alpha\beta}$ is the Yukawa couplings of the neutrinos with the scalar mediator $\phi$, \\ \noindent$y_f$ is the Yukawa coupling of $\phi$ with $f$, and, \\ \noindent $m_\phi$ is the mass of the scalar mediator $\phi$. The Lagrangian is composed of Yukawa terms and hence it is not possible to convert it into vector currents. So, the effect of scalar NSI appears as an addition to the neutrino mass term. The corresponding Dirac equation taking into account the effect of scalar NSI gets the following form, \begin{equation} \bar \nu_\beta \left[ i \partial_\mu \gamma^\mu + \left( M_{\beta \alpha} + \frac {\sum_f n_f y_f y_{\alpha \beta}}{m^2_\phi} \right) \right] \nu_\alpha = 0 \,, \end{equation} \noindent where, $n_f$ is the number density of the environmental fermions. Hence we see that the effect of scalar NSI appears as a perturbation to the neutrino mass term. So, the effective Hamiltonian in presence of scalar NSI takes the form, \begin{equation} \mathcal H_{\rm SNSI} \approx E_\nu + \frac { M_{\rm eff} M_{\rm eff}^\dagger}{2 E_\nu} \pm V_{\rm SI} \,, \label{eq:Hs} \end{equation} where, $M_{\rm eff}$ = $M + M_{\rm SNSI}$, is the effective mass matrix that includes both the regular mass matrix $M$ and the contribution from the scalar NSI, $M_{SNSI} \equiv \sum_f n_f y_f y_{\alpha\beta} / m^2_\phi$. The active neutrino mass ($\equiv$ $\mathcal{U^{'}} D_{\nu} \mathcal{U^{'}}^{\dagger}$) may be diagonalized by the mixing matrix $\mathcal{U^{'}} \equiv P \mathcal{U} Q^{\dagger}$. 
Here $D_\nu$ is the diagonal mass matrix of the neutrinos, $D_\nu \equiv {\rm diag}(m_1, m_2, m_3)$. The matrix $\mathcal{U^{'}}$ combines the Majorana rephasing matrix $Q$ and a diagonal rephasing matrix $P$. The Majorana rephasing matrix can be absorbed since $Q D_\nu Q^{\dagger} = D_\nu$; however, the unphysical rephasing matrix $P$ cannot be rotated away. The effective neutrino mass term, after rotating the unphysical rephasing matrix $P$ into the scalar NSI contribution, can therefore be written as \begin{equation} M_{\rm eff} \equiv \mathcal{U} D_\nu \mathcal{U}^{\dagger} + P^{\dagger} M_{\rm SNSI} P \equiv M + \delta M \label{effectiveM} \end{equation} The scalar NSI contribution $\delta M$ thus includes the unphysical rephasing matrix $P$ after this rotation. We use the following parametrization of $\delta M$ to probe the effects of scalar NSI in neutrino oscillations, \begin{equation} \delta M \equiv \sqrt{|\Delta m^2_{31}|} \left\lgroup \begin{matrix} \eta_{ee} & \eta_{e \mu} & \eta_{e \tau} \\ \eta_{\mu e} & \eta_{\mu \mu} & \eta_{\mu \tau} \\ \eta_{\tau e} & \eta_{\tau \mu} & \eta_{\tau \tau} \end{matrix} \right\rgroup \,. \label{eq:dM} \end{equation}\\ The dimensionless elements $\eta_{\alpha \beta}$ quantify the size of the scalar NSI. Hermiticity of the Hamiltonian requires the diagonal elements to be real, while the off-diagonal elements can in general be complex. In this work we explore the diagonal elements of the scalar NSI matrix, one at a time. For the three cases considered, with one non-zero diagonal element at a time, the effective mass matrix takes the forms shown below, \begin{equation} ~~~~~~~{\rm Case~I:}~ M_{\rm eff} = \mathcal{U} {\rm diag}\left(m_1, m_2, m_3 \right)\mathcal{U}^\dag + \sqrt{|\Delta m^2_{31}|}~ \rm diag \left( \eta_{ee}, 0, 0 \right). \label{MeffCase1} \end{equation} \begin{equation} ~~~~~~~~~{\rm Case~II:}~ M_{\rm eff} = \mathcal{U} {\rm diag}\left(m_1, m_2, m_3 \right)\mathcal{U}^\dag + \sqrt{|\Delta m^2_{31}|}~ \rm diag \left( 0, \eta_{\mu\mu}, 0 \right). \label{MeffCase2} \end{equation} \begin{equation} ~~~~~~~~~~~{\rm Case~III:}~ M_{\rm eff} = \mathcal{U} {\rm diag}\left(m_1, m_2, m_3 \right)\mathcal{U}^\dag + \sqrt{|\Delta m^2_{31}|}~ \rm diag \left( 0, 0, \eta_{\tau\tau} \right). \label{MeffCase3} \end{equation} \\ Interestingly, $\mathcal H_{\rm SNSI}$ depends directly on the absolute masses of the neutrinos. We have taken the value of $m_1$ to be $10^{-5}$ eV in this work. The values of $m_2$ and $m_3$ have been calculated accordingly from $\Delta m_{21}^2$ and $\Delta m_{31}^2$. \section{Methodology} \label{sec:methdology} To explore the impact of NSI on the various neutrino experiments we have used GLoBES (General Long Baseline Experiment Simulator) \cite{Huber:2004ka, Kopp:2006wp, Huber:2007ji}. GLoBES is a widely used, sophisticated simulator for long baseline neutrino experiments. The values of the mixing parameters used in our simulation studies are listed in table~\ref{tab:mixing_parameters}. Throughout the analysis, we have considered normal hierarchy to be the true hierarchy and the higher octant to be the true octant. We have considered three proposed super-beam experiments, DUNE, T2HK and T2HKK, to explore the impact of scalar NSI. The systematics and background information are incorporated from the corresponding Technical Design Reports (TDR) of the experiments. The uncertainties on signal and background are summarized in table \ref{tab:norm-uncertain-exp}. In this study, we have considered the diagonal scalar NSI elements one at a time.
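To make the construction of the effective mass matrix in eqs. \ref{MeffCase1}--\ref{MeffCase3} concrete, the following is a minimal numpy sketch that builds $M_{\rm eff}$ for one diagonal element at a time and forms the mass term of eq. \ref{eq:Hs}. It is only illustrative: the function names are ours, natural units (eV) are assumed, and the standard matter potential and the full unit conventions of an actual probability calculation (as done with GLoBES or NuOscProbExact) are not included here.
\begin{verbatim}
# Minimal numpy sketch: M_eff = U diag(m) U^dag + sqrt(|dm31|) diag(eta)
# and the flavor-basis term M_eff M_eff^dag / 2E of the Hamiltonian.
import numpy as np

def pmns(th12, th13, th23, dcp):
    s12, c12 = np.sin(th12), np.cos(th12)
    s13, c13 = np.sin(th13), np.cos(th13)
    s23, c23 = np.sin(th23), np.cos(th23)
    ep, em = np.exp(1j * dcp), np.exp(-1j * dcp)
    return np.array([
        [ c12*c13,                  s12*c13,                 s13*em ],
        [-s12*c23 - c12*s23*s13*ep,  c12*c23 - s12*s23*s13*ep, s23*c13],
        [ s12*s23 - c12*c23*s13*ep, -c12*s23 - s12*c23*s13*ep, c23*c13]])

def m_eff(eta, which, m1=1e-5, dm21=7.56e-5, dm31=2.43e-3,
          th12=np.radians(34.51), th13=np.radians(8.44),
          th23=np.radians(47.0), dcp=-np.pi/2):
    # m1 in eV, mass splittings in eV^2 (benchmark values of table 1, NH)
    m = np.array([m1, np.sqrt(m1**2 + dm21), np.sqrt(m1**2 + dm31)])
    U = pmns(th12, th13, th23, dcp)
    M = U @ np.diag(m) @ U.conj().T
    dM = np.zeros((3, 3))
    dM[which, which] = np.sqrt(abs(dm31)) * eta   # which = 0,1,2 for ee,mumu,tautau
    return M + dM

Meff = m_eff(eta=0.1, which=0)                    # Case I: eta_ee = 0.1
H_mass = Meff @ Meff.conj().T / (2.0 * 2.5e9)     # E = 2.5 GeV expressed in eV
\end{verbatim}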
We have first explored the impact of scalar NSI at the probability level as well as at the event level in the detectors. We have then studied the effects of scalar NSI on the CP-asymmetry parameter. In the following subsections we describe the technical details of the three experiments and the impact of scalar NSI on the oscillation probabilities as well as on the CP-asymmetry parameter. \begin{table}[h] \centering \begin{tabular}{|c|c|} \hline Parameters & True Values\\ \hline $\theta_{12}$ & 34.51$^\circ$ \\ $\theta_{13}$ & 8.44$^\circ$ \\ $\theta_{23}$ & 47$^\circ$ \\ $\delta_{CP}$ & -$\pi$/2 \\ $\Delta m_{21}^2$ & 7.56 $\times$ 10$^{-5}$ eV$^2$ \\ $\Delta m_{31}^2$ & 2.43 $\times$ 10$^{-3}$ eV$^2$ \\ \hline \end{tabular} \caption{The benchmark values of the oscillation parameters used \cite{NuFIT5.0}.} \label{tab:mixing_parameters} \end{table} \subsection{Experimental setup} \label{sec:experiment} The technical details of DUNE, T2HK and T2HKK are described here. \subsubsection{DUNE :} The Deep Underground Neutrino Experiment (DUNE) \cite{DUNE1,DUNE2,DUNE3,DUNE4,DUNE5} is a proposed long baseline neutrino experiment which will be located in the USA. The Near Detector will be located at the Long-Baseline Neutrino Facility (LBNF) at Fermilab, at a distance of 574 meters from the neutrino beam source and 60 meters underground. The neutrinos will be detected, after travelling a distance of 1300 km, at the Far Detector (FD) located in the Homestake Mine in South Dakota. The FD is made of four liquid argon time projection chamber (LArTPC) modules, each having a fiducial mass of 10 kt. The TPC, which detects the ionization charge from neutrino interactions, has good spatial and energy resolution, provides 3D track reconstruction, and can identify particles using the energy loss information along the track. The neutrino beam for DUNE will be produced at Fermilab by a 1.2 MW, 120 GeV proton beam and will deliver $10^{21}$ protons-on-target (POT) per year. The experiment is expected to start operation in 2026. \subsubsection{T2HK :} T2HK (Tokai to Hyper-Kamiokande) \cite{Hyper-KamiokandeProto-:2015xww} is a promising proposed long baseline experiment with a baseline of 295 km. In the proposed set--up, the intense neutrino beam will be produced at the J-PARC facility and detected in the Hyper-Kamiokande (HK) detector. The neutrino beam from J-PARC will have a power of 1.3 MW, delivering 27 $\times$ $10^{21}$ POT per year. The HK detector in Japan is an upgrade of the Super-Kamiokande (SK) detector and is expected to have about twenty times the fiducial mass of Super-Kamiokande. The detector will consist of two cylindrical water Cherenkov modules, each having a fiducial mass of 187 kt. It will be located 2.5$^\circ$ off-axis from the J-PARC neutrino beam in Japan. For our simulation studies, we have taken a baseline of 295 km and a total fiducial volume of 374 kt (two cylindrical detectors, each with a fiducial volume of 187 kt). The total run time of 10 years has been divided into 2.5 years in neutrino mode and 7.5 years in antineutrino mode (1:3 ratio) to have an equal contribution from the neutrino and the antineutrino signal events. \subsubsection{T2HKK :} T2HKK \cite{Hyper-Kamiokande:2016srs} is another proposed detector set--up involving T2HK, in which the second cylindrical detector of HK is planned to be placed in Korea.
The second detector will be located at a distance of 1100 km from the J-PARC proton synchrotron facility. Thus, the T2HKK experiment will have two far detector set--ups, one at a distance of 295 km at the HK site and another in Korea at a distance of 1100 km. Both detector modules will have fiducial volumes of 187 kt, and the detection principle will be based on the water Cherenkov technique. The detector will be placed at an angle of 2.5$^\circ$ off--axis from the neutrino beam, and the second oscillation maximum will peak at 0.66 GeV. In this work we have considered the background and systematic uncertainties of T2HKK to be identical to those of T2HK. \begin{table}[h!] \begin{center} \begin{tabular}{|c|c|c|c|} \hline Experiment & Baseline L (km) & L/E (km/GeV) & Fiducial Volume (kton) \\ \hline \hline T2HK & 295 & 527 & 187 $\times$ 2 \\ \hline T2HKK & 295; 1100 & 527 (295 km); 1964 (1100 km) & 187 (295 km) + 187 (1100 km) \\ \hline DUNE & 1300 & 1543 & 40 \\ \hline \end{tabular} \end{center} \caption{{\footnotesize The baselines, L/E and fiducial volumes of each detector for T2HK, T2HKK, and DUNE. }} \label{tab:expt-details} \end{table} \begin{table}[h!] \begin{center} \begin{tabular}{|c|c|c|c|} \hline Channel & T2HK (295 km) & T2HKK (1100 km) & DUNE (1300 km) \\ \hline \hline $\nu_e$ appearance & 3.2\% (5\%) & 3.8\% (5\%) & 3.2\% (5\%) \\ \hline $\bar{\nu}_e$ appearance & 3.9\% (5\%) & 4.1\% (5\%) & 3.9\% (5\%) \\ \hline $\nu_\mu$ disappearance & 3.6\% (5\%) & 3.8\% (5\%) & 3.6\% (5\%) \\ \hline $\bar{\nu}_\mu$ disappearance & 3.6\% (5\%) & 3.8\% (5\%) & 3.6\% (5\%) \\ \hline \end{tabular} \end{center} \caption{{\footnotesize The signal (background) normalization uncertainties for the various channels of T2HK, T2HKK and DUNE.}} \label{tab:norm-uncertain-exp} \end{table} \subsection{Effects on oscillation probabilities} \label{sec:oscillation_probabilities} In this section, we discuss the effects of scalar NSI (the three diagonal cases given in eqs. \ref{MeffCase1}, \ref{MeffCase2} and \ref{MeffCase3}) on the neutrino oscillation probabilities. To perform this analysis we have used the NuOscProbExact package \cite{Bustamante:2019ggq}. NuOscProbExact is a flexible, Python-based numerical oscillation probability calculator for both the two and three flavour cases. It employs SU(2) and SU(3) expansions of the evolution operators to compute the numerical probabilities for time-independent Hamiltonians. We have modified the neutrino Hamiltonian as in eq. \ref{eq:Hs} and have incorporated the three scalar NSI cases. We have used the oscillation parameter values listed in table \ref{tab:mixing_parameters}. Unless otherwise mentioned, we consider NH to be the true mass hierarchy and HO to be the true octant. The effects of the diagonal scalar NSI elements $\eta_{ee}$ (left column), $\eta_{\mu\mu}$ (middle column) and $\eta_{\tau\tau}$ (right column) on $P_{\mu e}$ as a function of neutrino energy are shown in fig. \ref{fig:probability3}. The plots corresponding to the baselines of DUNE (top--row), T2HK (middle--row) and T2HKK (bottom--row) are shown here. The probabilities are calculated for $\delta_{CP} = -90^\circ$ and $\theta_{23} = 47^\circ$. In all the plots, the solid--red line represents the case without scalar NSI, i.e., $\eta_{\alpha\beta} = 0$. The solid (dashed) lines in black, blue and magenta are for the chosen positive (negative) $\eta_{ee}$, $\eta_{\mu\mu}$ and $\eta_{\tau\tau}$ respectively.
We observe that, \begin{figure}[!h] \centering \includegraphics[width=0.32\linewidth, height = 5cm]{Pme_varying_eta_ee_v4_DUNE_v2.pdf} \includegraphics[width=0.32\linewidth, height = 5cm]{Pme_varying_eta_mm_v4_DUNE_v2.pdf} \includegraphics[width=0.32\linewidth, height = 5cm]{Pme_varying_eta_tt_v4_DUNE_v2.pdf} \includegraphics[width=0.32\linewidth, height = 5cm]{Pme_varying_eta_ee_v4_T2HK_v2.pdf} \includegraphics[width=0.32\linewidth, height = 5cm]{Pme_varying_eta_mm_v4_T2HK_v2.pdf} \includegraphics[width=0.32\linewidth, height = 5cm]{Pme_varying_eta_tt_v4_T2HK_v2.pdf} \includegraphics[width=0.32\linewidth, height = 5cm]{Pme_varying_eta_ee_v4_T2HKK_v2.pdf} \includegraphics[width=0.32\linewidth, height = 5cm]{Pme_varying_eta_mm_v4_T2HKK_v2.pdf} \includegraphics[width=0.32\linewidth, height = 5cm]{Pme_varying_eta_tt_v4_T2HKK_v2.pdf} \caption{The effects of $\eta_{ee}$ (left--column), $\eta_{\mu\mu}$ (middle--column) and $\eta_{\tau\tau}$ (right--column) on $P_{\mu e}$ at the baselines corresponding to DUNE (top--row), T2HK (middle--row) and T2HKK (bottom--row). Here, $\delta_{CP}$ = -$\pi$/2, $\theta_{23}$ = 47$^\circ$ and true mass Hierarchy = NH. In all the plots, the red solid--curve is for no--NSI case while other solid (dashed) curves are for positive (negative) NSI parameters.} \label{fig:probability3} \end{figure} \begin{itemize} \item The presence of Scalar NSI parameters show significant effects on the oscillation probabilities at all the three baselines, especially around the oscillation maxima. \item A positive (negative) $\eta_{ee}$ enhances (suppresses) the probabilities around the oscillation maxima while a positive (negative) $\eta_{\tau\tau}$ exhibits complementary variations. \item A positive (negative) $\eta_{\mu\mu}$ shifts the oscillation maxima towards the higher (lower) energies with minor suppression on the amplitude. \end{itemize} \newpage The visible effects of scalar NSI on neutrino oscillations are interesting and we explore it further by constructing a CP-asymmetry parameter at the probability level. \subsection{Effects on CP asymmetry} \label{sec:CP_asymmetry} In this work, we are primarily exploring the possible impact of scalar NSI on the CP-measurement potential of the three chosen long--baseline experiments. We construct the CP--asymmetry parameter at the probability level as, \begin{equation}\label{def_asym} A_{CP} = \frac{P_{\mu e}-\bar{P}_{\mu e}}{P_{\mu e}+\bar{P}_{\mu e}}\,, \end{equation} where, $P_{\mu e}$ and $\bar{P}_{\mu e}$ are the appearance probabilities of $\nu_e$ and $\bar{\nu_e}$ respectively. The CP asymmetry parameter ($A_{CP}$) can be an estimate of CP violation as it quantifies the change in oscillation probabilities when CP phase changes its sign. The shape and size of the CP--asymmetry curve largely depends on the baseline and energy. We show the CP--asymmetry in presence of scalar NSI as a function of $\delta_{CP}$ at the baselines and peak energies of DUNE (left--panel), T2HK (middle--panel) and T2HKK (right--panel) in fig. \ref{fig:CP_assymetry}. Note that, the peak energies for DUNE, T2HK and T2HKK have been considered as 2.5 GeV, 0.5 GeV and 0.66 GeV respectively. The solid--red curve in all the plots represent the no-scalar NSI case, i.e. $\eta_{\alpha\beta}$ = 0. The solid (dashed) curves in black, magenta and green are for positive (negative) values of scalar NSI elements. The observations from fig. \ref{fig:CP_assymetry} are listed below. 
\begin{itemize} \item The presence of scalar NSI results in degeneracies for different sets of ($\eta_{\alpha\beta}$, $\delta_{CP}$), which would impact the expected CP asymmetry at DUNE, T2HK and T2HKK. \item At DUNE, a positive $\eta_{ee}$ enhances $A_{CP}$ in the range $\delta_{CP}$ $\in$ [-150$^\circ$, 10$^\circ$], while a negative $\eta_{ee}$ enhances $A_{CP}$ in the range $\delta_{CP}$ $\in$ [0, 180$^\circ$]. At T2HK, a positive (negative) $\eta_{ee}$ enhances (suppresses) the $A_{CP}$ values in $\delta_{CP}$ $\in$ [-180$^\circ$, 0]. For T2HKK, however, the chosen positive $\eta_{ee}$ suppresses the $A_{CP}$ parameter throughout the entire $\delta_{CP}$ range. \item At DUNE, a positive $\eta_{\mu\mu}$ enhances the $A_{CP}$ values in the whole $\delta_{CP}$ range. For a negative $\eta_{\mu\mu}$, we see an enhancement in $A_{CP}$ in the range $\delta_{CP}$ $\in$ [60$^\circ$, 140$^\circ$], while for other values of $\delta_{CP}$ we observe a suppression. At T2HK, a positive (negative) $\eta_{\mu\mu}$ enhances (suppresses) $A_{CP}$ in the range $\delta_{CP}$ $\in$ [-180$^\circ$, 40$^\circ$]. At T2HKK, we mostly see a suppression in $A_{CP}$ for a negative $\eta_{\mu\mu}$. However, for a positive $\eta_{\mu\mu}$ at T2HKK we observe a fluctuation in the variation pattern. \item At DUNE, a positive $\eta_{\tau\tau}$ enhances $A_{CP}$ for $\delta_{CP} < 0$. We note a crossover and a suppression over the $\delta_{CP}$ range [30$^\circ$, 140$^\circ$]. We observe a similar trend at T2HK as well. For a negative $\eta_{\tau\tau}$, at both DUNE and T2HK, $A_{CP}$ appears to depend only mildly on $\delta_{CP}$. At T2HKK, we note a strong fluctuation with $\eta_{\tau\tau}$ of either sign. \end{itemize} \begin{figure}[!h] \centering \includegraphics[width=0.32\linewidth, height = 5.5cm]{CP_asymmetry_DUNE_v2.pdf} \includegraphics[width=0.32\linewidth, height = 5.5cm]{CP_asymmetry_T2HK_v2.pdf} \includegraphics[width=0.32\linewidth, height = 5.5cm]{CP_asymmetry_T2HKK_v2.pdf} \caption{The CP--asymmetry vs $\delta_{CP}$ plot for DUNE (left--panel), T2HK (middle--panel) and T2HKK (right--panel) in the presence of $\eta_{\alpha\beta}$ at the corresponding peak energies. Here, $\theta_{23}$ = 47$^\circ$ and true mass hierarchy = NH. In all the three plots, the solid--red curve is for the no scalar NSI case and the other coloured solid (dashed) curves are for chosen positive (negative) $\eta_{\alpha\beta}$. } \label{fig:CP_assymetry} \end{figure} \section{Results and Discussion} \label{sec:results} Motivated by the significant impact on the oscillation probabilities and on $A_{CP}$, we focus on the effects of the scalar NSI on the event rates at the three detectors. We then perform a statistical analysis by constructing various $\chi^2$ parameters to probe the scalar NSI effects on $\delta_{CP}$. \begin{figure}[h] \centering \includegraphics[width=0.32\linewidth, height = 5cm]{event_rate1.pdf} \includegraphics[width=0.32\linewidth, height = 5cm]{event_rate_t2hk.pdf} \includegraphics[width=0.32\linewidth, height = 5cm]{event_rate_T2HKK.pdf} \caption{The binned event rates of DUNE (left), T2HK (middle) and T2HKK (right) as a function of neutrino energy at $\delta_{CP}$ = -$\pi$/2, $\theta_{23}$ = 47$^\circ$ and NH for different choices of $\eta_{\alpha\beta}$.} \label{fig:event_rate_1} \end{figure} \subsection{Effects on event rates} We discuss here in detail the effects of the scalar NSI parameters on the binned event rates at the three LBL experiments. In fig.
\ref{fig:event_rate_1}, we show the raw binned event rates of DUNE (left--panel), T2HK (middle--panel) and T2HKK (right--panel) as a function of the true neutrino energy. We have varied the $\eta_{\alpha\beta}$ parameters, one at a time, in the range [-0.3, 0.3] while keeping $\delta_{CP}$ (true) = -$\pi/2$ and $\theta_{23}$ (true) = $47^\circ$. We quantify the effects of scalar NSI on the event rates in terms of the parameter $\Delta N_{evt}$ which is defined as \begin{equation} \Delta N_{evt} = N_{evt}^{NSI} - N_{evt}^{SI}, \end{equation} \noindent where, $N_{evt}^{NSI}$ ($N_{evt}^{SI}$) are the binned event rates at the far detector of the experiment in presence of scalar NSI (in absence of scalar NSI). The values of $\Delta N_{evt}$ quantify the impact of scalar NSI on the event rates. In fig. \ref{fig:event_rate_4}, we have shown $\Delta N_{evt}$ as a function of neutrino energy and $\eta_{\alpha\beta}$ for DUNE (top--row), T2HK (middle--row), and T2HKK (bottom--row). We see from fig. \ref{fig:event_rate_1} and fig.~\ref{fig:event_rate_4} that, \begin{figure}[h] \centering \includegraphics[width=0.32\linewidth, height = 5cm]{event_2d_DUNE_eta_ee_v2-1.png} \includegraphics[width=0.32\linewidth, height = 5cm]{event_2d_DUNE_eta_mm_v2-1.png} \includegraphics[width=0.32\linewidth, height = 5cm]{event_2d_DUNE_eta_tt_v2-1.png} \includegraphics[width=0.32\linewidth, height = 5cm]{event_2d_T2HK_eta_ee_v2-1.png} \includegraphics[width=0.32\linewidth, height = 5cm]{event_2d_T2HK_eta_mm_v2-1.png} \includegraphics[width=0.32\linewidth, height = 5cm]{event_2d_T2HK_eta_tt_v2-1.png} \includegraphics[width=0.32\linewidth, height = 5cm]{event_2d_T2HKK_eta_ee_v2-1.png} \includegraphics[width=0.32\linewidth, height = 5cm]{event_2d_T2HKK_eta_mm_v2-1.png} \includegraphics[width=0.32\linewidth, height = 5cm]{event_2d_T2HKK_eta_tt_v2-1.png} \caption{The variation of $\Delta N_{evt}$ of DUNE (top-row), T2HK (middle--row) and T2HKK (bottom--row) as a function of neutrino energy at fixed $\delta_{CP}$ = -$\pi$/2, $\theta_{23}$ = 47$^\circ$ and NH for different choices of $\eta_{\alpha\beta}$ $\in$ [-0.3, 0.3]. In the figure the left--column is for non-zero $\eta_{ee}$, the middle--column is for non-zero $\eta_{\mu\mu}$ and the right--column is for non-zero $\eta_{\tau\tau}$.} \label{fig:event_rate_4} \end{figure} \begin{itemize} \item A positive (negative) $\eta_{ee}$ increases (decreases) the binned events around the first oscillation maxima for DUNE and T2HK and around the second oscillation maxima for T2HKK. \item For $\eta_{\mu\mu}$, however, we observe a varied scenario with energy. For both positive and negative $\eta_{\mu\mu}$, there are certain increments and decrements of the event rates at various energy ranges. For example, if we look into the effects of a positive $\eta_{\mu\mu}$, we see that, at DUNE the event rates gets enhanced in E $\in$ [2.5 GeV, 5 GeV] and gets reduced in E $\in$ [1.5 GeV, 2.5 GeV]. At T2HK, we observe enhanced (suppressed) event rates in E $\in$ [2.0 GeV, 2.2 GeV] (E $\in$ [1 GeV, 1.8 GeV]) and at T2HKK, we find enhanced (suppressed) event rates in E $\in$ [0.6 GeV, 1.2 GeV] (E $\in$ [0.3 GeV, 0.5 GeV]). For a negative $\eta_{\mu\mu}$, we come across an opposite variation with energy at all the three experiments. \item A positive (negative) $\eta_{\tau\tau}$ mostly decreases (increases) the event rates around the oscillation maxima prominently for DUNE and T2HK. 
However, at some lower energies ([1 GeV, 2 GeV] for DUNE, [0.1 GeV, 0.3 GeV] for T2HK and [0.2 GeV, 0.8 GeV] for T2HKK) we see a nominal increase in the event rates with a positive $\eta_{\tau\tau}$. \item The behaviour of the binned event rates of the experiments, as shown in fig. \ref{fig:event_rate_1}, is in good agreement with the neutrino oscillation probabilities shown in fig.~\ref{fig:probability3}. \end{itemize} \subsection{Exploring the sensitivities using a $\chi^2$ analysis} We now focus on exploring the possible impact of $\eta_{\alpha\beta}$ on the $\delta_{CP}$ measurement potential of the three experiments. We probe the three experiments' sensitivity towards the CP--conserving and CP--violating values of $\delta_{CP}$ through the statistical $\chi^2$ defined below. \begin{equation} \label{eq:chisq} \chi^2 \equiv \min_{\eta} \sum_{i} \sum_{j} \frac{\left[N_{true}^{i,j} - N_{test}^{i,j} \right]^2 }{N_{true}^{i,j}}, \end{equation} where $N_{true}^{i,j}$ and $N_{test}^{i,j}$ are the numbers of true and test events in the $\{i,j\}$-th bin, respectively. We have performed a sensitivity analysis of the experiments' capability towards constraining $\eta_{\alpha\beta}$. We have also explored the effects of $\eta_{\alpha\beta}$ on the CP--violation measurements of these experiments. The CP--violation sensitivity may be defined as the experiments' ability to differentiate between CP--conserving and CP--violating values of $\delta_{CP}$. We have marginalized over the systematic uncertainties. The sensitivities are first obtained for the individual experiments. We then consider the DUNE+T2HK and DUNE+T2HKK combinations to explore the synergy. We discuss the results in the following. \subsection{Sensitivity to scalar NSI parameters} In fig. \ref{fig:fixed_chi2_1}, we show the experiments' sensitivity towards constraining the scalar NSI parameters $\eta_{\alpha\beta}$ for DUNE, T2HK and DUNE+T2HK. The plots for $\eta_{ee}$, $\eta_{\mu\mu}$ and $\eta_{\tau\tau}$ are shown in the left--panel, middle--panel and right--panel respectively. We have kept the true values of $\eta_{\alpha\beta}$ fixed at 0.1 and marginalized the test $\eta_{\alpha\beta}$ in the range [-0.5, 0.5]. We consider normal hierarchy (NH) to be the true neutrino mass hierarchy and the higher octant (HO) to be the true octant. Throughout the analysis, we have taken true $\delta_{CP}$ = -90$^\circ$ and true $\theta_{23}$ = 47$^\circ$ unless otherwise mentioned. We then plot $\Delta \chi^2$ as a function of the test $\eta_{\alpha\beta}$ parameters. The dashed green and dashed magenta lines represent the 3$\sigma$ and 5$\sigma$ CL respectively. We observe that, \begin{itemize} \item The sensitivity of DUNE towards constraining $\eta_{ee}$ (for a true $\eta_{ee}$ = 0.1) is marginally better at 3$\sigma$ than that of T2HK. On the other hand, T2HK shows a better constraining capability for $\eta_{\mu\mu}$ and $\eta_{\tau\tau}$ (for true $\eta_{\alpha\beta}$ = 0.1) as compared to DUNE. \item The combined study with DUNE+T2HK improves the sensitivity towards constraining the $\eta_{\alpha\beta}$ parameters and is capable of putting stronger bounds on $\eta_{\alpha\beta}$.
\end{itemize} \begin{figure}[h] \centering \includegraphics[width=0.32\linewidth, height = 5cm]{fixed_chi2_DUNE_T2HK.pdf} \includegraphics[width=0.32\linewidth, height = 5cm]{fixed_chi2_DUNE_T2HK_eta_mm.pdf} \includegraphics[width=0.32\linewidth, height = 5cm]{fixed_chi2_DUNE_T2HK_eta_tt.pdf} \caption{The sensitivity of DUNE, T2HK and DUNE + T2HK towards constraining non--zero $\eta_{ee}$ (left--panel), $\eta_{\mu\mu}$ (middle--panel), and $\eta_{\tau\tau}$ (right--panel) at true $\delta_{CP}$ = -$\pi$/2 and true $\theta_{23}$ = 47$^\circ$. In all the plots, the sensitivities for DUNE, T2HK and DUNE+T2HK are shown in red, black and blue respectively.} \label{fig:fixed_chi2_1} \end{figure} \begin{figure}[h] \centering \includegraphics[width=0.32\linewidth, height = 5cm]{fixed_chi2_DUNE_T2HKK_eta_ee.pdf} \includegraphics[width=0.32\linewidth, height = 5cm]{fixed_chi2_DUNE_T2HKK_eta_mm.pdf} \includegraphics[width=0.32\linewidth, height = 5cm]{fixed_chi2_DUNE_T2HKK_eta_tt.pdf} \caption{The sensitivity of DUNE, T2HKK and DUNE + T2HKK towards constraining $\eta_{ee}$ (left--panel), $\eta_{\mu\mu}$ (middle--panel), and $\eta_{\tau\tau}$ (right--panel) at true $\delta_{CP}$ = -$\pi$/2 and true $\theta_{23}$ = 47$^\circ$. In all the three plots the results for DUNE, T2HKK and DUNE+T2HKK are shown in red, black and blue respectively.} \label{fig:fixed_chi2_2} \end{figure} In fig. \ref{fig:fixed_chi2_2}, the sensitivity of DUNE, T2HKK and DUNE+T2HKK towards constraining $\eta_{\alpha\beta}$ are shown. The results for $\eta_{ee}$, $\eta_{\mu\mu}$ and $\eta_{\tau\tau}$ are shown in left--panel, middle--panel and right--panel respectively. We have plotted $\Delta \chi^2$ as a function of test $\eta_{\alpha\beta}$. Our observations are listed below. \begin{itemize} \item The constraining capability of T2HKK towards $\eta_{ee}$ and $\eta_{\mu\mu}$ are weaker than that of DUNE. For $\eta_{\tau\tau} (test) $ $\leq$ $\eta_{\tau\tau}$ (true), we see an overlap of the DUNE and T2HKK capabilities. For the rest of the $\eta_{\tau\tau}$ range, DUNE comes with a better sensitivity. \item Combining DUNE and T2HKK constrains $\eta_{\alpha\beta}$ with a stronger bound than those of DUNE and T2HKK individually. \end{itemize} \subsection{CP Violation sensitivity} The measurement of $\delta_{CP}$ in the leptonic sector is one of the prime goal of various ongoing and upcoming neutrino experiments. The detection of CP--violation may be crucial in explaining the baryon asymmetry of the Universe i.e. the dominance of matter over antimatter \cite{Steigman:1976ev, Cohen:1997ac, Fong:2012buy}. It is interesting to explore the subdominant effects of scalar NSI on $\delta_{CP}$ related measurements at the neutrino sector \cite{Medhi:2021wxj}. We discuss here the effects of $\eta_{\alpha\beta}$ on the CPV sensitivities at DUNE, T2HK and T2HKK. We have obtained the sensitivities by varying the true values of $\delta_{CP}$ in the allowed range [-$\pi$, $\pi$]. The true values of other mixing parameters used in this analysis are as listed in table~\ref{tab:mixing_parameters}. In the test spectrum of $\delta_{CP}$, we have only considered the CP-conserving values i.e. 0 and $\pm$ $\pi$. We have marginalized $\theta_{23}$ and $\Delta m_{31}^2$ over the allowed 3$\sigma$ ranges~\cite{NuFIT5.0} and have minimized the $\chi^2$ over all the marginalization ranges. 
The CPV sensitivity is calculated as , \begin{equation} {\Delta \chi}^{2}_{\rm CPV}~(\delta^{\rm true}_{\rm CP}) = {\rm min}~\left[\chi^2~(\delta^\text{true}_{CP},\delta^\text{test}_{CP}=0),~\chi^2 (\delta^\text{true}_{CP},\delta^\text{test}_{CP}=\pm \pi)\right ]. \end{equation} \begin{figure}[h] \centering \includegraphics[width=0.32\linewidth, height = 5cm]{cp_sensitivity_eta_ee_v2_DUNE.pdf} \includegraphics[width=0.32\linewidth, height = 5cm]{cp_sensitivity_eta_ee_v2_T2HK.pdf} \includegraphics[width=0.32\linewidth, height = 5cm]{cp_sensitivity_eta_ee_v2_T2HK_DUNE.pdf} \includegraphics[width=0.32\linewidth, height = 5cm]{cp_sensitivity_eta_mm_v2_DUNE.pdf} \includegraphics[width=0.32\linewidth, height = 5cm]{cp_sensitivity_eta_mm_v2_T2HK.pdf} \includegraphics[width=0.32\linewidth, height = 5cm]{cp_sensitivity_eta_mm_v2_T2HK_DUNE.pdf} \includegraphics[width=0.32\linewidth, height = 5cm ]{cp_sensitivity_eta_tt_v2_DUNE.pdf} \includegraphics[width=0.32\linewidth, height = 5cm ]{cp_sensitivity_eta_tt_v2_T2HK.pdf} \includegraphics[width=0.32\linewidth, height = 5cm ]{cp_sensitivity_eta_tt_v2_T2HK_DUNE.pdf} \caption{The CPV sensitivity of DUNE (left--column), T2HK (middle-column) and DUNE + T2HK (right-column) in presence of scalar NSI. The plots for $\eta_{ee}$, $\eta_{\mu\mu}$ and $\eta_{\tau\tau}$ are included in the top-row, middle-row and bottom--row respectively. The solid-red curve is for the no scalar NSI case whereas solid (dashed) black and blue curves are for positive (negative) $\eta_{\tau\tau}$.} \label{fig:cpv_3} \end{figure} In fig. \ref{fig:cpv_3}, we show the effects of scalar NSI on the CPV sensitivity for DUNE (left--column), T2HK (middle--column) and DUNE + T2HK (right--column). We have plotted here the statistical significance $\sigma$ (=$\sqrt{\Delta \chi^2_{CPV}}$) as a function of true $\delta_{CP}$. The plots for $\eta_{ee}$, $\eta_{\mu\mu}$ and $\eta_{\tau\tau}$ are shown on the top--row, middle--row and bottom--row respectively. For the $\chi^2$ study, we have marginalized over the NSI parameters. In all the plots, the solid--red curve represents the no scalar NSI case i.e. $\eta_{\alpha\beta}$ = 0. The solid (dashed) black and blue curves are for chosen positive (negative) values of $\eta_{\alpha\beta}$. The observations from fig. \ref{fig:cpv_3} are listed below. \begin{itemize} \item A positive (negative) $\eta_{ee}$ mostly enhances (suppresses) the CPV sensitivities at DUNE and T2HK. At $\eta_{ee}$ = 0.1 and $\delta_{CP}^{true}$ $\in$ [0, 90$^\circ$], we see that the sensitivities without and with scalar NSI almost overlap. The combined study of DUNE + T2HK improves the sensitivities (without and with NSI) for all cases including the overlapped region. \item A positive $\eta_{\mu\mu}$ deteriorates the CPV sensitivities in the upper half plane of $\delta_{CP}$ i.e. [0, $\pi$] at DUNE, while we observe a mild fluctuation for the rest of $\delta_{CP}$. At T2HK, we see enhancements for positive $\eta_{\mu\mu}$. For a negative $\eta_{\mu\mu}$, we observe significant suppression in the sensitivities for DUNE and T2HK. We find that, combining DUNE and T2HK improves the overall sensitivities (without and with NSI). \item At DUNE, for a positive $\eta_{\tau\tau}$, we see marginal fluctuations as compared to the no scalar NSI case. At T2HK, a positive $\eta_{\tau\tau}$ enhances the sensitivity. The analysis with DUNE+T2HK enhances the sensitivities (without and with NSI). 
\end{itemize} \begin{figure}[!h] \centering \includegraphics[width=0.32\linewidth, height = 5cm]{cp_sensitivity_eta_ee_v2_DUNE_v2.pdf} \includegraphics[width=0.32\linewidth, height = 5cm]{cp_sensitivity_eta_ee_v2_T2HKK.pdf} \includegraphics[width=0.32\linewidth, height = 5cm]{cp_sensitivity_eta_ee_v2_T2HKK_DUNE_v2.pdf} \includegraphics[width=0.32\linewidth, height = 5cm]{cp_sensitivity_eta_mm_v2_DUNE_v2.pdf} \includegraphics[width=0.32\linewidth, height = 5cm]{cp_sensitivity_eta_mm_v2_T2HKK_v2.pdf} \includegraphics[width=0.32\linewidth, height = 5cm]{cp_sensitivity_eta_mm_v2_T2HKK_DUNE_v2.pdf} \includegraphics[width=0.32\linewidth, height = 5cm]{cp_sensitivity_eta_tt_v2_DUNE_v2.pdf} \includegraphics[width=0.32\linewidth, height = 5cm]{cp_sensitivity_eta_tt_v2_T2HKK.pdf} \includegraphics[width=0.32\linewidth, height = 5cm]{cp_sensitivity_eta_tt_v2_T2HKK_DUNE_v2.pdf} \caption{The CPV sensitivity of DUNE (left--column), T2HKK (middle--column) and DUNE + T2HKK (right--column) in presence of scalar NSI. The plots for $\eta_{ee}$, $\eta_{\mu\mu}$ and $\eta_{\tau\tau}$ are included in the top--row, middle--row and bottom--row respectively. The solid--red curve is for the no scalar NSI case whereas solid (dashed) black and blue curves are for positive (negative) $\eta_{\alpha\beta}$.} \label{fig:cpv_7} \end{figure} In fig. \ref{fig:cpv_7}, we show the effects of scalar NSI on the CPV sensitivities at DUNE (left--column), T2HKK (middle--column) and DUNE + T2HKK (right--column) respectively. We have marginalized over the NSI parameters as well as $\theta_{23}$ in the allowed range [$40^\circ$, $50^\circ$]. The solid red line represents the standard case, whereas the other coloured solid (dashed) lines are for positive (negative) values of $\eta_{\alpha\beta}$. The effects of $\eta_{ee}$, $\eta_{\mu\mu}$ and $\eta_{\tau\tau}$ are shown in the top--row, middle--row and bottom--row respectively. The dashed green and dashed magenta lines show the 3$\sigma$ and 5$\sigma$ CL respectively. We see that, \begin{itemize} \item A positive (negative) $\eta_{ee}$ enhances (suppresses) the CPV sensitivities mostly at DUNE and T2HKK. In the region $\delta_{CP}^{true}$ $\in$ [0, 90$^\circ$] the sensitivities overlap for the no scalar NSI case and for $\eta_{ee}$ = 0.1. This implies that in this range DUNE alone will not be able to distinguish a fake sensitivity coming from scalar NSI. The joint analysis of DUNE + T2HKK can lift this degeneracy and can improve the overall sensitivities (without and with NSI). \item A negative $\eta_{\mu\mu}$ deteriorates the CPV sensitivity while a positive $\eta_{\mu\mu}$ can create various degeneracies in the CP measurement. For example, at T2HKK the standard CPV sensitivities overlap with the NSI sensitivities for $\eta_{\mu\mu}$ = 0.1 in $\delta_{CP}^{true}$ $\in$ [-180$^\circ$, -120$^\circ$] and $\delta_{CP}^{true}$ $\in$ [110$^\circ$, 180$^\circ$]. This degeneracy can be removed by the joint analysis of DUNE + T2HKK. \item A negative $\eta_{\tau\tau}$ suppresses the CPV sensitivities while a positive $\eta_{\tau\tau}$ mostly improves the sensitivities. The sensitivities without and with scalar NSI overlap in various regions of $\delta_{CP}^{true}$ for a positive $\eta_{\tau\tau}$. This makes the experiments unable to distinguish the effects of standard and non-standard interactions. The combined sensitivity of DUNE+T2HKK can lift this degeneracy with an overall improvement in the CPV sensitivities (without and with NSI). \end{itemize} In fig.
\ref{fig:cpv_8}, we have shown the combined CPV sensitivity of DUNE + T2HK + T2HKK for $\eta_{ee}$ (left--panel), $\eta_{\mu\mu}$ (middle--panel) and $\eta_{\tau\tau}$ (right--panel). We have plotted the statistical significance $\sigma$ (=$\sqrt{\Delta \chi^2_{CPV}}$) as a function of $\delta_{CP}^{true}$. In the three plots, the solid-red curve represents the no scalar NSI case and the other coloured solid (dashed) curves are for positive (negative) $\eta_{\alpha\beta}$. The dashed green (magenta) line represents the 3$\sigma$ (5$\sigma$) CL. The range of the statistical significance $\sigma$ is taken to be the same in the three plots for easy comparison. We observe that, \begin{itemize} \item A positive (negative) $\eta_{\alpha\beta}$ enhances (suppresses) the CPV sensitivity. We see that the impact of $\eta_{ee}$ on the CPV sensitivities is quite prominent as compared to that of $\eta_{\mu\mu}$ and $\eta_{\tau\tau}$. For example, at $|\eta_{ee}|$ = 0.2, we see significant fluctuations. This implies that the $\delta_{CP}$ measurement is predominantly sensitive to $\eta_{ee}$. \item Positive $\eta_{\mu\mu}$ and $\eta_{\tau\tau}$ show limited fluctuations from the no scalar NSI case. For negative $\eta_{\mu\mu}$ and $\eta_{\tau\tau}$, the sensitivities are mostly reduced as compared to the no scalar NSI case. \end{itemize} \begin{figure}[!h] \centering \includegraphics[width=0.32\linewidth, height = 5cm]{cp_sensitivity_eta_ee_v2_T2HK_T2HKK_DUNE.pdf} \includegraphics[width=0.32\linewidth, height = 5cm]{cp_sensitivity_eta_mm_v2_T2HK_T2HKK_DUNE.pdf} \includegraphics[width=0.32\linewidth, height = 5cm]{cp_sensitivity_eta_tt_v2_T2HK_T2HKK_DUNE.pdf} \caption{The CPV sensitivity of DUNE + T2HK + T2HKK in presence of $\eta_{ee}$ (left--panel), $\eta_{\mu\mu}$ (middle--panel) and $\eta_{\tau\tau}$ (right--panel). The solid--red curve is for the SI case whereas solid (dashed) black and blue curves are for positive (negative) $\eta_{\alpha\beta}$.} \label{fig:cpv_8} \end{figure} \section{Summary and concluding remarks } \label{sec:summary} With the rapid developments in the field of neutrino physics, in combination with state-of-the-art experimental set--ups, the neutrino oscillation parameters are expected to be measured with utmost accuracy. The highly ambitious upcoming flagship neutrino experiments aim at measuring the neutrino mixing parameters as precisely as possible. Currently, the least constrained parameters in neutrino physics are $\delta_{CP}$ and the octant of the mixing angle $\theta_{23}$. In this work, we have primarily explored the impact of scalar NSI on the CP--measurement sensitivities of three upcoming LBL experiments (DUNE, T2HK and T2HKK) in a model--independent way. We also look into the advantages in the sensitivity measurements from combined analyses (DUNE + T2HK, DUNE + T2HKK, DUNE + T2HK + T2HKK). If nature permits scalar NSI, we see that the impact of scalar NSI on the CPV sensitivity is significant. For chosen negative values of the NSI parameters, we observe a deterioration in the CP measurement sensitivities. We also notice an overlap of standard and non-standard CPV sensitivities for certain positive $\eta_{\alpha\beta}$ at DUNE and T2HKK. This makes the experiments insensitive towards the fake CP effects coming from scalar NSI in those regions. However, this can be removed by a combined sensitivity analysis of DUNE+T2HK and/or DUNE+T2HKK. We observe that T2HK has a better constraining capability towards the NSI parameters as compared to DUNE or T2HKK.
However, on combining two experiments, i.e., DUNE+T2HK or DUNE+T2HKK, the overall sensitivities improve for all non-zero NSI parameters. It may be noted that, for a positive (negative) $\eta_{\alpha\beta}$, an analysis combining all three experiments shows a significant improvement (deterioration) in the CPV sensitivities. We see that, among all the considered NSI parameters, the element $\eta_{ee}$ has the strongest impact on the CPV sensitivity. It is crucial to identify these subdominant effects and their impact on the physics reach of various neutrino experiments. This study primarily focused on understanding the impact of scalar NSI on three upcoming LBL experiments. We are also working on exploring the impact of scalar NSI on other physics sensitivities of different neutrino experiments. A combined effort of solar, atmospheric, reactor and other experiments is needed to understand the impact of NSI. It is equally important to put more stringent constraints on the effects of scalar NSI for an accurate interpretation of the data from various neutrino experiments. \newpage \section*{ACKNOWLEDGMENTS} We acknowledge the support of the Research and Innovation grant 2021 (DoRD/RIG/10-73/ 1592-A) funded by Tezpur University. AM and MMD also acknowledge the support of the DST SERB grant CRG/2021/002961. AM thanks Dr. Pritam Das for the useful suggestions and discussions during the work. The authors also acknowledge the support of the DST FIST grant SR/FST/PSI-211/2016(C) of the Department of Physics, Tezpur University. \bibliographystyle{apsrev4-1}
\section{Introduction} \begin{figure}[!t] \begin{center} \includegraphics[width=0.48\textwidth]{framework.pdf} \end{center} \caption{Basic acceleration block. The orange panel in the figure shows two different kinds of low-cost collaborative kernels. One uses $1 \times 1$ convolution, and the other uses shared kernels~($W_i^{'} = W_j^{'}$ for $i,j \in [1, T]$). The black response map represents the output of the original convolutional layer with the kernel $W$, and the orange response map is generated by the low-cost collaborative layer. The purple cells represent the zero elements, of which the calculation of corresponding positions can be skipped in the original convolutional layer. We apply element-wise multiplication on the activated response maps from the original convolutional layer and low-cost layer to generate the final results of this basic acceleration block.} \label{fig:basic_framework} \end{figure} Despite the continuously improved performance of convolutional neural networks (CNNs)~ \cite{chatfield2014return,han2015deep,krizhevsky2012imagenet,lin2013network,simonyan2014very,szegedy2015going}, their computation costs are still tremendous. Without the support of high-efficiency servers, it is hard to establish CNN models on real-world applications. For example, to process a $224 \times 224$ image, AlexNet~\cite{krizhevsky2012imagenet} requires 725M FLOPs with 61M parameters, VGG-S~\cite{chatfield2014return} involves 2640M FLOPs with 103M parameters, and GoogleNet~\cite{szegedy2015going} needs 1566M FLOPs with 6.9M parameters. Therefore, to leverage the success of deep neural networks on mobile devices with limited computational capacity, accelerating network inference has become imperative. In this paper, we investigate the acceleration of CNN models based on the observation that the response maps of many convolutional layers are usually sparse after ReLU~\cite{montufar2014number} activation. Therefore, instead of fully calculating the layer response, we can skip calculating the zero cells in the ReLU output and only compute the values of non-zero cells in each response map. Theoretically, the locations of zero cells can be predicted by a lower cost layer. The values of non-zero cells from this lower-cost layer can be collaboratively updated by the responses of the original filters. Eventually, the low-cost collaborative layer (LCCL) accompanied by the original layer constitute the basic element of our proposed low-cost collaborative network (LCCN). To equip each original convolutional layer with a LCCL, we apply an element-wise multiplication on the response maps from the LCCL and the original convolutional layer, as illustrated in Fig.~\ref{fig:basic_framework}. In the training phase, this architecture can be naturally trained by the existing stochastic gradient descent (SGD) algorithm with backpropagation. First we calculate the response map $V^{'}$ of the LCCL after the activation layer, and use $V^{'}$ to guide the calculation of the final response maps. Despite the considerable amount of research where a sparse-based framework is used to accelerate the network inference, \eg~\cite{figurnov2015perforatedcnns,graham2014spatially,lebedev2015fast,li2016pruning,liu2015sparse}, we claim that LCCN is unique. Generally, most of these sparsity-based methods~\cite{lebedev2015fast,liu2015sparse,soulie2015compression} integrate the sparsity property as a regularizer into the learning of parameters, which usually harms the performance of network. 
Moreover, to further accelerate performance, some methods even arbitrarily zeroize the values of the response maps according to a pre-defined threshold. Compared with these methods, our LCCN automatically sets the negatives to zero, and precisely calculates the positive values in the response map with the help of the LCCL. This two-stream strategy reaches a remarkable acceleration rate while maintaining a performance level comparable to the original network. The main contributions are summarized as follows: \begin{itemize} \item We propose a general architecture to accelerate CNNs, which leverages low-cost collaborative layers to accelerate each convolutional layer. \item To the best of our knowledge, this is the first work to leverage a low-cost layer to accelerate the network. Equipping each convolutional layer with a collaborative layer is quite different from the existing acceleration algorithms. \item Experimental studies show significant improvements by the LCCN on many deep neural networks when compared with existing methods (\eg, a 34\% speedup on ResNet-110). \end{itemize} \section{Related Work} {\bf Low Rank}. Tensor decomposition with low-rank approximation is commonly used to accelerate deep convolutional networks. For example, in~\cite{denton2014exploiting,jaderberg2014speeding}, the authors exploited the redundancy between convolutional filters and used low-rank approximation to compress convolutional weight tensors and fully connected weight matrices. Yang \etal \cite{yang2015deep} used an adaptive fastfood transform to replace a fully connected layer with a series of simple matrix multiplications, rather than the original dense and large ones. Liu \etal \cite{liu2015sparse} proposed a sparse decomposition to reduce the redundancy in convolutional parameters. In~\cite{zhang2015accelerating,zhang2015efficient}, the authors used generalized singular vector decomposition~(GSVD) to decompose an original layer into two approximated layers with reduced computation complexity. {\bf Fixed Point}. Some popular approaches to accelerate test-phase computation are based on ``fixed point''. In~\cite{courbariaux2014training}, the authors trained deep neural networks with a dynamic fixed point format, which achieves success on a set of state-of-the-art neural networks. Gupta \etal \cite{gupta2015deep} used stochastic rounding to train deep networks with a 16-bit wide fixed-point number representation. In~\cite{courbariaux2016binarynet,courbariaux2015binaryconnect}, a standard network with binary weights represented by 1 bit was trained to speed up networks. Then, Rastegari \etal \cite{rastegari2016xnor} further explored binary networks and extended them to also binarize the data tensor of each layer, increasing the speed by 57 times. {\bf Product Quantization}. Some other researchers focus on product quantization to compress and accelerate CNN models. The authors of \cite{wu2015quantized} proposed a framework to accelerate the test-phase computation with the network parameters quantized, and learned better quantization with error correction. Han \etal \cite{han2015deep} proposed to use a pruning stage to reduce the connections between neurons, and then fine-tuned the networks with weight sharing to quantize the convolutional parameters from 32 bits down to 5 bits. In another work~\cite{hubara2016quantized}, the authors trained neural networks with extremely low precision, and extended this success to quantized recurrent neural networks.
Zhou \etal \cite{zhou2016dorefa} generalized the method of binary neural networks to allow networks with arbitrary bit-width in weights, activations, and gradients. {\bf Sparsity}. Some algorithms exploit the sparsity property of convolutional kernels or response maps in CNN architecture. In~\cite{zhou2016less}, many neurons were decimated by incorporating sparse constraints into the objective function. In~\cite{graham2014spatially}, a CNN model was proposed to process spatially-sparse inputs, which can be exploited to increase the speed of the evaluation process. In~\cite{lebedev2015fast}, the authors used the group-sparsity regularizer to prune the convolutional kernel tensor in a group-wise fashion. In~\cite{figurnov2015perforatedcnns}, they increased the speed of convolutional layers by skipping their evaluation at some fixed spatial positions. In~\cite{li2016pruning}, the authors presented a compression technique to prune the filters with minor effects on the output accuracy. {\bf Architecture}. Some researchers improve the efficiency of networks by carefully designing the structure of neural networks. In~\cite{hinton2015distilling}, a simple model was trained by distilling the knowledge from multiple cumbersome models, which helps to reduce the computation cost while preserving the accuracy. Romero \etal \cite{romero2014fitnets} extended the knowledge distillation approach to train a student network, which is deeper but thinner than the teacher network, by extracting the knowledge of teacher network. In this way, the student network uses less parameters and running time to gain considerable speedup compared with the teacher network. Iandola \etal \cite{iandola2016squeezenet} proposed a small DNN architecture to achieve similar performance as AlexNet by only using 50x fewer parameters and much less computation time via the same strategy. \section{Low-Cost Collaborative Network} In this section, we present our proposed architecture for the acceleration of deep convolutional neural networks. First, we introduce the basic notations used in the following sections. Then, we demonstrate the detailed formulation of the acceleration block and extend our framework to general convolutional neural networks. Finally, we discuss the computation complexity of our acceleration architecture. \subsection{Preliminary} \begin{figure*}[htp] \begin{center} \includegraphics[width=0.7\textwidth]{position_connect.pdf} \end{center} \caption{Connection strategy of collaborating LCCL with the original convolutional layer. The top figure shows the pre-activation residual block~\cite{he2016identity}; the bottom figure shows a ``Bef-Aft" connection strategy to speed up the residual block. ``Activ" represents that the collaborative layer is followed by BN and ReLU activation. The first LCCL receives the input tensor before being activated by BN and ReLU, and the second one receives the input tensor after BN and ReLU. (Best viewed in the original pdf file.)} \label{fig:position_connect} \end{figure*} Let's recall the convolutional operator. For simplicity, we discuss the problem without the bias term. Given one convolution layer, we assume the shapes of input tensor~$U$ and output tensor~$V$ are $X \times Y \times C$ and $X \times Y \times T$, where $X$ and $Y$ are the width and height of the response map, respectively. $C$ and $T$ represent the channel number of response map $U$ and $V$. A tensor~$W$ with size $k \times k \times C \times T$ is used as the weight filter of this convolutional layer. 
$V_t(x,y)$ represents the element $V(x,y,t)$. Then, the convolutional operator can be written as: \vspace{-2mm} {\small \begin{align} V_t(x,y) = \sum_{i,j=1}^{k}\sum_{c=1}^{C}W_t(i,j,c)U(x+i-1,y+j-1,c) \end{align} } \vspace{-1mm} \noindent where $W_t(i,j,c)$ denotes the element $W(i,j,c,t)$. In the LCCN, the output map of each LCCL should have the same size as the corresponding convolutional layer, which means that the shape of tensor~$V^{'}$ is $X \times Y \times T$. Similarly, we assume the weight kernel of $V^{'}$ is $W^{'}$. Therefore, the formula of the LCCL can be written as: {\small \begin{align} V^{'}_t(x,y) = \sum_{i,j=1}^{k^{'}}\sum_{c=1}^{C}W^{'}_{t}(i,j,c)U(x+i-1,y+j-1,c) \end{align} } \vspace{-2mm} \subsection{Overall Structure} Our acceleration block is illustrated in Fig.~\ref{fig:basic_framework}. The green block $V^{*}$ represents the final response map collaboratively calculated by the original convolutional layer and the LCCL. Generally, it can be formulated as: \vspace{-1mm} {\small \begin{align} V^{*}_t(x,y) = \begin{cases} 0 & \text{if } V^{'}_t(x,y) = 0 \\ V^{'}_t(x,y) \times V_t(x,y) & \text{if } V^{'}_t(x,y) \neq 0 \end{cases} \end{align} } \vspace{-1mm} \noindent where $V$ is the output response map from the original convolutional layer and $V^{'}$ is from the LCCL. In this formula, the element-wise product is applied to $V$ and $V^{'}$ to calculate the final response map. Due to the small size of the LCCL, the computation cost of $V^{'}$ can be ignored. Meanwhile, since the zero cells in $V^{'}$ stay zero after the element-wise multiplication, the computation cost of $V$ is further reduced by skipping the calculation at the positions where $V^{'}$ is zero. Obviously, this strategy speeds up a single convolutional layer. To further accelerate the whole network, we can equip most convolutional layers with LCCLs. \subsection{Kernel Selection} As illustrated in the orange box in Fig.~\ref{fig:basic_framework}, the first form exploits a $1 \times 1 \times C \times T$ kernel ($k^{'} = 1$) for each original kernel to collaboratively estimate the final response map. The second structure uses a $k^{'} \times k^{'} \times C \times 1$ filter (we carefully tune the parameter $k^{'}$ and set $k^{'} = k$) shared across all the original filters to calculate the final result. Both these collaborative layers use less time during inference when compared with the original convolutional layer, and thus they are theoretically able to obtain acceleration. In many efficient deep learning frameworks such as Caffe~\cite{jia2014caffe}, the convolution operation is reformulated as matrix multiplication by flattening certain dimensions of tensors, such as: \vspace{-1mm} {\small \begin{align} V = U^{*} \times W^{*}~~~{\text{s.t.}} ~&~ U^{*} \in R^{XY \times k^{2}C}~,~W^{*} \in R^{k^{2}C \times T} \end{align} } \vspace{-3mm} \noindent Each row of the matrix $U^{*}$ corresponds to one spatial position of the output tensor and is extracted from the tensor $U$, and $W^{*}$ is reshaped from the weight filters $W$. These efficient implementations take advantage of the high efficiency of BLAS libraries, \eg, GEMM\footnote{matrix-matrix multiplication function} and GEMV\footnote{matrix-vector multiplication function}. Since each skipped cell in $V^{*}$ corresponds to one row of the matrix $U^{*}$, we can achieve a realistic speedup in BLAS libraries by reducing the matrix size in the multiplication function.
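To make the collaboration scheme concrete, the following sketch illustrates the weight-sharing variant of the acceleration block in plain NumPy. It is a minimal illustration rather than our Caffe implementation: the \texttt{im2col} routine producing the $XY \times k^2C$ matrix $U^{*}$ is assumed to be given, BN inside the LCCL is omitted, and the kernels are assumed to be flattened in the same order as the columns of $U^{*}$.
\begin{verbatim}
# Minimal sketch of the basic acceleration block (weight-sharing LCCL).
# Not the actual Caffe implementation; im2col(U, k) is assumed to
# return the XY x (k*k*C) matrix U* described in the text.
import numpy as np

def accelerated_conv(U, W, W_lccl, im2col, k):
    Ustar = im2col(U, k)                       # (XY, k*k*C)
    T = W.shape[-1]
    W_full = W.reshape(-1, T)                  # (k*k*C, T) original kernel
    w_share = W_lccl.reshape(-1)               # (k*k*C,) shared LCCL kernel

    v_lccl = np.maximum(Ustar @ w_share, 0.0)  # LCCL response + ReLU
    keep = v_lccl > 0                          # non-zero cells only

    V = np.zeros((Ustar.shape[0], T))
    V[keep] = Ustar[keep] @ W_full             # reduced (S' x k^2C) GEMM
    V[keep] *= v_lccl[keep, None]              # element-wise collaboration
    return V                                   # reshape to (X, Y, T) outside
\end{verbatim}
Only the rows of $U^{*}$ selected by the LCCL mask enter the expensive matrix-matrix multiplication, which is exactly the matrix-size reduction discussed above.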
Different structures of the LCCL need different implementations. For a $k \times k \times C \times 1$ kernel, the positions of the skipped cells in the original convolutional layer are the same across different channels. In this situation, we can reduce the size of $U^{*}$ to $S^{'} \times k^{2}C$, where $S^{'}$ is the number of non-zero elements in $V^{'}$. For a $1 \times 1 \times C \times T$ kernel, the positions of the zero cells are different in different channels, so it is infeasible to directly use the matrix-matrix multiplication function to calculate the skipped result $V^{*}$ of the original convolutional layer. In this case, we have to separate the matrix-matrix multiplication into multiple matrix-vector multiplications. However, it is difficult for this approach to achieve the desired acceleration effect. The unsatisfactory acceleration performance of $1 \times 1 \times C \times T$ filters is caused by the inferior efficiency of multiple GEMV calls, and some extra operations~(\eg, data reconstruction) also cost more time. Therefore, we choose the $k \times k \times C \times 1$ structure for our LCCL in our experiments, and leave the acceleration of $1 \times 1 \times C \times T$ filters as our future work. \subsection{Sparsity Improvement} According to the previous discussion, the simplest way for model acceleration is to directly multiply the tensor $V^{'}$ and the tensor $V$. However, this approach cannot achieve favourable acceleration performance due to the low sparsity rate of $V^{'}$. ReLU~\cite{montufar2014number} activation is a simple and effective way to improve the sparsity of $V^{'}$, as it sets the negative values to zero. Moreover, due to the redundancy of the positive activations, we can also append an $L_1$ loss to the LCCL to further improve the sparsity rate. In this way, we obtain a smooth $L_{1}L_{2}({\bf X}) = \mu\|{\bf X}\| + \rho|{\bf X}|$ regularizer penalty for each $V^{'}$: \vspace{-2mm} {\small \begin{align} \|{\bf X}\| = \sqrt{ \sum_{i = 1}^{n} {\bf X}_{i}^2 }~~,~~|{\bf X}| = \sum_{i = 1}^{n} |{\bf X}_{i}| \end{align} } \vspace{-2mm} \noindent However, there are thousands of free parameters in the regularizer term and the additional loss always degrades the classification performance, as it is difficult to balance the classification performance and the acceleration rate. \begin{table}[ht] \footnotesize \begin{center} \begin{tabular}{ c | c | c | c | c } \hline \multirow{2}{*}{Layer} & \multicolumn{2}{| c |}{With BN} & \multicolumn{2}{| c }{Without BN} \\ & conv1 & conv2 & conv1 & conv2 \\ \hline res-block-1.2 & 38.8\% & 28.8\% & 0.0\% & 0.0\% \\ res-block-2.2 & 37.9\% & 23.4\% & 0.0\% & 0.2\% \\ res-block-3.2 & 17.8\% & 40.4\% & 0.0\% & 40.7\% \\ \hline \end{tabular} \end{center} \caption{Sparsity of the LCCL for different activations with the same training setting. ``With BN'' means we activate the response map of the LCCL by BN and ReLU; ``Without BN'' means we only use ReLU activation. ``x.y'' means the y-th block at the x-th stage of ResNet. We equip six convolutional layers of the ResNet-20 model with LCCLs.} \label{table:BN_Sparsity} \end{table} Recently, Batch Normalization (BN)~\cite{ioffe2015batch} was proposed to improve the network performance and increase the convergence speed during training by stabilizing the distribution and reducing the internal covariate shift of the input data. During this process, we observe that the sparsity rate of each LCCL is also increased.
As shown in Table~\ref{table:BN_Sparsity}, we find that the BN layer increases the sparsity of the LCCL followed by ReLU activation, and thus can further improve the acceleration rate of our LCCN. We conjecture that the BN layer balances the distribution of $V^{'}$ and reduces the redundancy of positive values in $V^{'}$ by discarding some redundant activations. Therefore, to increase the acceleration rate, we carefully integrate the BN layer into our LCCL. Inspired by the pre-activation residual networks~\cite{he2016identity}, we exploit different strategies for the activation and integration of the LCCL. Generally, the input of this collaborative layer can be taken either before or after activation. Taking pre-activation residual networks~\cite{he2016identity} as an example, we illustrate the ``Bef-Aft'' connection strategy at the bottom of Fig.~\ref{fig:position_connect}. ``Bef'' represents the case where the input tensor is taken from the flow before BN and ReLU activation. ``Aft'' represents the case where the input tensor is the same as that of the original convolutional layer, \ie after BN and ReLU activation. According to the ``Bef-Aft'' strategy in Fig.~\ref{fig:position_connect}, the ``Bef-Bef'', ``Aft-Bef'' and ``Aft-Aft'' strategies can be easily derived. During our experiments, we find that input tensors under the ``Bef'' strategy differ considerably from those of the corresponding convolutional layer due to the different activations. In this strategy, the LCCL cannot accurately predict the zero cells for the original convolutional layer. Therefore, it is better to use the same input tensor as the original convolutional layer, \ie the ``Aft'' strategy. \subsection{Computation Complexity} Now we analyze the test-phase numerical calculation with our acceleration architecture. For each convolutional layer, the forward procedure mainly consists of two components, \ie the low-cost collaborative layer and the skip-calculation convolutional layer. Suppose the sparsity~(ratio of zero elements) of the response map $V^{'}$ is $r$. We formulate the detailed computation cost of the convolutional layer and compare it with the one equipped with our LCCL. \begin{table}[ht] \footnotesize \begin{center} \centering \begin{tabular}{c|c|c} \hline Architecture & FLOPs & Speed-Up Ratio \\\hline CNN & $XYTk^2C$ & 0 \\\hline basic & $XYTC(k{'}^2 + k^2r)$ & $1 - (k{'}^2/k^2 + r)$\\ ($1 \times 1$ kernel) & $XYTC(1 + k^2r)$ & $1 - (1/k^2 + r)$ \\ (weight sharing) & $XYTk^2(1 + Cr)$ & $1 - (1/C + r)$\\ \hline \end{tabular} \end{center} \caption{Theoretical numerical calculation acceleration for convolutional layers.} \label{table:theroretical_speedup} \end{table} As shown in Table~\ref{table:theroretical_speedup}, the speedup ratio is highly dependent on $r$. The term $1/C$ contributes little since the number of input channels is large in most CNN models, so it barely affects the acceleration performance. According to the experiments, the sparsity $r$ reaches a high ratio in certain layers. These two facts indicate that we can obtain a considerable speedup ratio. Detailed statistical results are described in the experiments section. In residual-based networks, if the output of one layer in the residual block is all zero, we can skip the calculation of the descendant convolutional layers and directly predict the results of this block. This property helps further accelerate the residual networks. \section{Experiments} In this section, we conduct experiments on three benchmark datasets to validate the effectiveness of our acceleration method.
\begin{figure*}[htp] \begin{center} \includegraphics[width=0.90\textwidth]{cifar10_sparse-eps-converted-to.pdf} \end{center} \caption{Sparsity of the response maps from each collaborative convolutional layer in ResNet-20. We use the LCCL to modify 18 convolutional layers to speed up ResNet-20. ``x.y'' represents the y-th residual block in the x-th generalized convolutional block. ``conv1'' and ``conv2'' represent the first and the second collaborative convolutional layer in the corresponding residual block. } \label{fig:cifar10_sparse} \end{figure*} \subsection{Benchmark Datasets and Experimental Setting} We mainly evaluate our LCCN on three benchmarks: CIFAR-10, CIFAR-100~\cite{krizhevsky2009learning} and ILSVRC-12~\cite{russakovsky2015imagenet}. The CIFAR-10 dataset contains 60,000 $32 \times 32$ images, which are categorized into 10 classes and each class contains 6,000 images. The dataset is split into 50,000 training images and 10,000 testing images. The CIFAR-100~\cite{krizhevsky2009learning} dataset is similar to CIFAR-10, except that it has 100 classes and 600 images per class. Each class contains 500 training images and 100 testing images. For CIFAR-10 and CIFAR-100, we split the 50k training dataset into 45k/5k for validation. The ImageNet 2012 dataset~\cite{russakovsky2015imagenet} is a famous benchmark which contains 1.28 million training images of 1,000 classes. We evaluate on the 50k validation images using both the top-1 and top-5 error rates. Deep residual networks~\cite{he2015deep} have shown impressive performance with good convergence behaviors. Their significance has increased, as shown by the amount of research~\cite{he2016identity,zagoruyko2016wide} being undertaken. We mainly apply our LCCN to increase the speed of these improved deep residual networks. In the CIFAR experiments, we use the same default parameter settings as~\cite{he2016identity,zagoruyko2016wide}. However, our LCCN is more complicated than the original CNN model, which requires more training epochs to converge to a stable solution. So we increase the number of training epochs and use a different learning rate strategy to train our LCCN. We start the learning rate at 0.01 to warm up the network and then increase it to 0.1 after 3\% of the total iterations. Then it is divided by 10 at 45\%, 70\% and 90\% of the total iterations, where the errors plateau. We tune the number of training epochs from \{200, 400, 600, 800, 1000\} according to the validation data. On ILSVRC-12, we follow the same parameter settings as~\cite{he2015deep,he2016identity} but use different data augmentation strategies. (1) Scale augmentation: we use the scale and aspect ratio augmentation~\cite{szegedy2015going} instead of the scale augmentation~\cite{simonyan2014very} used in~\cite{he2015deep,he2016identity}. (2) Color augmentation: we use the photometric distortions from~\cite{howard2013some} to improve the standard color augmentation~\cite{krizhevsky2012imagenet} used in~\cite{he2015deep,he2016identity}. (3) Weight decay: we apply weight decay to all weights and biases. These three differences should slightly improve performance (refer to the Facebook implementation\footnote{\url{https://github.com/facebook/fb.resnet.torch}}). According to our experience with CIFAR, we extend the training to 200 epochs, and use a learning rate starting at 0.1 which is then divided by 10 every 66 epochs.
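For clarity, the CIFAR learning-rate schedule described above can be summarized by the following sketch. It is purely illustrative: the exact iteration counts depend on the chosen number of epochs and batch size, and the ImageNet schedule (0.1, divided by 10 every 66 epochs) is handled analogously.
\begin{verbatim}
# Sketch of the CIFAR learning-rate schedule described above:
# warm-up at 0.01 for the first 3% of the iterations, then 0.1,
# divided by 10 at 45%, 70% and 90% of the total iterations.
def cifar_learning_rate(iteration, total_iterations):
    progress = iteration / float(total_iterations)
    if progress < 0.03:          # warm-up phase
        return 0.01
    lr = 0.1
    for milestone in (0.45, 0.70, 0.90):
        if progress >= milestone:
            lr /= 10.0
    return lr
\end{verbatim}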
For the CIFAR experiments, we report the acceleration performance and the top-1 error to compare with the results provided in the original papers~\cite{he2016identity,zagoruyko2016wide}. On ILSVRC-12, since we use different data augmentation strategies, we report the top-1 error of the original CNN models trained in the same way as ours, and we mainly compare the accuracy drop with other state-of-the-art acceleration algorithms including: (1) Binary-Weight-Networks~(BWN)~\cite{rastegari2016xnor} that binarizes the convolutional weights; (2) XNOR-Networks~(XNOR)~\cite{rastegari2016xnor} that binarizes both the convolutional weights and the data tensor; (3) Pruning Filters for Efficient ConvNets~(PFEC)~\cite{li2016pruning} which prunes filters with a small effect on the output accuracy from CNNs. \subsection{Experiments on CIFAR-10 and CIFAR-100} First, we study how the performance is influenced by the different connection strategies proposed in the Kernel Selection and Sparsity Improvement sections. We use the pre-activation ResNet-20 as our base model, and apply the LCCL to all convolutional layers within the residual blocks. Using the same training strategy, the results of the four different connection strategies are shown in Table~\ref{table:res20_connect}. Using the after-activation input for both collaborative layers (Aft-Aft) gives the best performance with a considerable speedup ratio, because the Aft strategy receives the same input distribution as the corresponding convolutional layer. We also try to use the $L_1L_2$ loss to restrict the output maps of each LCCL, but this adds thousands of extra values that need to be optimized in the $L_1L_2$ loss function. In this case, the networks hardly converge and the performance is too poor for comparison. \begin{table}[ht] \footnotesize \begin{center} \begin{tabular}{c|c|c} \hline Structure & Top-1 Err. & Speed-Up \\ \hline Aft-Aft & \textbf{8.32} & 34.9\% \\ Aft-Bef & 8.71 & 24.1\% \\ Bef-Bef & 11.62 & 39.8\% \\ Bef-Aft & 12.85 & \textbf{55.4\%} \\ \hline \end{tabular} \end{center} \caption{Before-activation and after-activation connection strategies on ResNet-20. Each LCCL uses a $3 \times 3 \times k$ kernel.} \label{table:res20_connect} \end{table} Furthermore, we analyze how the performance is influenced by using different kernels in the LCCL. There are two forms of LCCL that collaborate with the corresponding convolutional layer. One is a tensor of size $1 \times 1 \times C \times T$ (denoted as $1\times1$), and the other is a tensor of size $k \times k \times C \times 1$ (denoted as $k \times k$). As shown in Table~\ref{table:comparison_kernel}, the $k \times k$ kernel shows a significant performance improvement with a similar speedup ratio compared with the $1\times1$ kernel. This may be because the $k \times k$ kernel has a larger receptive field than the $1 \times 1$ kernel.
\begin{table}[t] \footnotesize \begin{center} \begin{tabular}{ c | c | c | c | c | c | c} \hline \multirow{2}{*}{Model} & \multicolumn{3}{| c |}{$1 \times 1 \times C \times T$} & \multicolumn{3}{| c }{$k \times k \times C \times 1$} \\ & FLOPs & Ratio & Error & FLOPs & Ratio & Error \\ \hline ResNet-20 & 3.2E7 & 20.3\% & 8.57 & 2.6E7 & \textbf{34.9\%} & \textbf{8.32} \\ ResNet-32 & 4.7E7 & \textbf{31.2\%} & 9.26 & 4.9E7 & 28.1\% & \textbf{7.44} \\ ResNet-44 & 6.3E7 & \textbf{34.8\%} & 8.57 & 6.5E7 & 32.5\% & \textbf{7.29} \\ \hline \end{tabular} \end{center} \caption{Comparison of top-1 error rate on two different collaborative layers.~(The `Ratio' represents the speedup ratio) } \label{table:comparison_kernel} \end{table} Statistics on the sparsity of each response map generated from the LCCL are illustrated in Fig.~\ref{fig:cifar10_sparse}. This LCCN is based on ResNet-20 with each residual block equipped with a LCCL configured by a $1 \times 1 \times C \times T$ kernel. To get stable and robust results, we increase the training epochs as many as possible, and the sparsity variations for all 400 epochs are provided. The first few collaborative layers show a great speedup ratio, saving more than 50\% of the computation cost. Even if the last few collaboration layers behave less than the first few, the $k \times k \times C \times 1$ based method is capable of achieving more than 30\% increase in speed. Hitherto, we have demonstrated the feasibility of training CNN models equipped with our LCCL using different low-cost collaborative kernels and strategies. Considering the performance and realistic implementation, we select the weight sharing kernel for our LCCL. This will be used in all following experiments as default. Furthermore, we experiment with more CNN models\cite{he2016identity,zagoruyko2016wide} accelerated by our LCCN on CIFAR-10 and CIFAR-100. Except for ResNet-164~\cite{he2016identity} which uses a bottleneck residual block {\tiny $\left\{ \begin{array}{ccc} 1 \times 1 \\ 3 \times 3 \\ 1 \times 1 \end{array} \right\} $ }, all other models use a basic residual block {\tiny $\left\{ \begin{array}{ccc} 3 \times 3 \\ 3 \times 3 \end{array} \right\} $ }. We use LCCL to accelerate all convolutional layers except for the first layer, which takes the original image as the input tensor. The first convolutional layer operates on the original image, and it costs a little time due to the small input channels~(RGB 3 channels). In a bottleneck structure, it is hard to reach a good convergence with all the convolutional layers accelerated. The convolutional layer with $1 \times 1$ kernel is mainly used to reduce dimension to remove computational bottlenecks, which overlaps with the acceleration effect of our LCCL. This property makes layers with $1 \times 1$ kernel more sensitive to collaboration with our LCCL. Thus, we apply our LCCL to modify the first and second convolutional layer in the bottleneck residual block on CIFAR-10. And for CIFAR-100, we only modify the second convolutional layer with $3 \times 3$ kernel in the bottleneck residual block. The details of theoretical numerical calculation acceleration and accuracy performance are presented in Table~\ref{table:cifar10_acc} and Table~\ref{table:cifar100_acc}. \begin{table}[t] \footnotesize \begin{center} \begin{tabular}{ c | c | c | c | c} \hline & Depth & Ori. 
Err & LCCN & Speed-up \\\hline \multirow{2}{*}{ResNet~\cite{he2016identity}} & 110 & 6.37 & 6.56 & 34.21\% \\ & 164* & 5.46 & 5.91 & 27.40\% \\ \hline \multirow{6}{*}{WRN~\cite{zagoruyko2016wide}} & 22-8 & 4.38 & 4.90 & 51.32\% \\ & 28-2 & 5.73 & 5.81 & 21.40\% \\ & 40-1 & 6.85 & 7.65 & 39.36\% \\ & 40-2 & 5.33 & 5.98 & 31.01\% \\ & 40-4 & 4.97 & 5.95 & 54.06\% \\ & 52-1 & 6.83 & 6.99 & 41.90\% \\ \hline \end{tabular} \end{center} \caption{Top-1 Error and Speed-Up of eight different CNN models on CIFAR-10~(symbol ``*" means the bottleneck structure). Ori. Err represents the top-1 error of the original convolution network.} \label{table:cifar10_acc} \end{table} \begin{table}[ht] \footnotesize \begin{center} \begin{tabular}{ c | c | c | c | c} \hline & Depth & Ori. Err & LCCN & Speed-up \\\hline \multirow{1}{*}{ResNet~\cite{he2016identity}} & 164* & 24.33 & 24.74 & 21.30\% \\\hline \multirow{6}{*}{WRN~\cite{zagoruyko2016wide}} & 16-4 & 24.53 & 24.83 & 15.19\% \\ & 22-8 & 21.22 & 21.30 & 14.42\% \\ & 40-1 & 30.89 & 31.32 & 36.28\% \\ & 40-2 & 26.04 & 26.91 & 45.61\% \\ & 40-4 & 22.89 & 24.10 & 34.27\% \\ & 52-1 & 29.88 & 29.55 & 22.96\% \\ \hline \end{tabular} \end{center} \caption{Top-1 error and speed-up of seven different CNN models on CIFAR-100~(symbol ``*" means the bottleneck structure). Ori. Err represents the top-1 error of the original convolution network.} \label{table:cifar100_acc} \end{table} Experiments show our LCCL works well on much deeper convolutional networks, such as pre-activation ResNet-164~\cite{he2016identity} or WRN-40-4~\cite{zagoruyko2016wide}. Convolutional operators dominate the computation cost of the whole network, which hold more than 90\% of the FLOPs in residual based networks. Therefore, it is beneficial for our LCCN to accelerate such convolutionally-dominated networks, rather than the networks with high-cost fully connected layers. In practice, we are always able to achieve more than a 30\% calculation reduction for deep residual based networks. With a similar calculation quantity, our LCCL is capable of outperforming original deep residual networks. For example, on the CIFAR-100 dataset, LCCN on WRN-52-1 obtains higher accuracy than the original WRN-40-1 with only about 2\% more cost in FLOPs. Note that our acceleration is data-driven, and can achieve a much higher speedup ratio on ``easy" data. In cases where high accuracy is not achievable, it predicts many zeros which harms the network structure. Theoretically, the LCCN will achieve the same accuracy as the original one if we set LCCL as an identity (dense) network. To improve efficiency, the outputs of LCCL need to be sparse, which may marginally sacrifice accuracy for some cases. We also observe accuracy gain for some other cases (WRN-52-1 in Table~\ref{table:cifar100_acc}), because the sparse structure can reduce the risk of overfitting. \subsection{Experiments on ILSVRC-12} We test our LCCN on ResNet-18, 34 with some structural adjustments. On ResNet-18, we accelerate all convolutional layers in the residual block. However, ResNet-34 is hard to optimize with all the convolutional layers accelerated. So, we skip the first residual block at each stage (layer 2, 3, 8, 9, 16, 17, 28, 29) to make it more sensitive to collaboration. The performance of the original model and our LCCN with the same setting are shown in Table~\ref{table:imagenet_acc}. 
\begin{table}[ht] \footnotesize \begin{center} \begin{tabular}{ c | c | c | c | c | c} \hline \multirow{2}{*}{Depth} & \multicolumn{2}{| c |}{Top-1 Error} & \multicolumn{2}{| c |}{Top-5 Error} & \multirow{2}{*}{Speed-up}\\ & ResNet & LCCN & ResNet & LCCN & \\\hline 18 & 30.02 & 33.67 & 10.76 & 13.06 & 34.6\% \\\hline 34 & 26.58 & 27.01 & 8.64 & 8.81 & 24.8\% \\\hline \end{tabular} \end{center} \caption{Top-1 and Top-5 Error of LCCN on the ImageNet classification task.} \label{table:imagenet_acc} \end{table} We demonstrate the success of the LCCN on ResNet-18 and ResNet-34~\cite{he2016identity}, and both obtain a meaningful speedup with a slight performance drop. \begin{table}[ht] \footnotesize \begin{center} \begin{tabular}{ c | c | c | c | c} \hline Depth & Approach & Speed-Up & Top-1 Acc. Drop & Top-5 Acc. Drop \\\hline \multirow{3}{*}{18} & LCCL & 34.6\% & 3.65 & 2.30 \\ & BWN & $\approx 50.0\%$ & 8.50 & 6.20 \\ & XNOR & $\approx 98.3\%$ & 18.10 & 16.00 \\\hline \multirow{2}{*}{34} & LCCL & 24.8\% & 0.43 & 0.17 \\ & PFEC & 24.2\% & 1.06 & - \\\hline \end{tabular} \end{center} \caption{Comparison with other acceleration methods on ResNet. Acc. Drop represents the accuracy drop.} \label{table:compare_acc_18} \end{table} We compare our method with other state-of-the-art methods in Table~\ref{table:compare_acc_18}. As we can see, similar to other acceleration methods, there is some performance drop. However, our method retains better accuracy than the other acceleration methods. \subsection{Theoretical vs. Realistic Speedup} There is often a wide gap between theoretical and realistic speedup ratios. It is caused by the limited efficiency of BLAS libraries, IO delays, buffer switches and other factors. So we compare the theoretical and realistic speedup of our LCCN. We test the realistic speed based on Caffe~\cite{jia2014caffe}, an open source deep learning framework. OpenBLAS is used as the BLAS library in Caffe for our experiments. We use CPU-only mode and a single thread to make a fair comparison. The results are shown in Table~\ref{table:Comparison_Speed}. \begin{table}[ht] \footnotesize \begin{center} \begin{tabular}{ c | c | c | c | c | c | c } \hline \multirow{2}{*}{Model} & \multicolumn{2}{| c |}{FLOPs} & \multicolumn{2}{| c |}{Time (ms)} & \multicolumn{2}{| c }{Speed-up} \\ & CNN & LCCL & CNN & LCCL & Theo & Real \\ \hline ResNet-18 & 1.8E9 & 1.2E9 & 97.1 & 77.1 & 34.6\% & 20.5\% \\ \hline ResNet-34 & 3.6E9 & 2.7E9 & 169.3 & 138.6 & 24.8\% & 18.1\% \\ \hline \end{tabular} \end{center} \caption{Comparison of the theoretical and realistic speedup.} \label{table:Comparison_Speed} \end{table} \textbf{Discussion.} As shown in Table~\ref{table:Comparison_Speed}, our realistic speedup ratio is less than the theoretical one, which is mainly due to two reasons. First, we implement the convolution operator with data reconstruction and matrix-matrix multiplication, as in Caffe~\cite{jia2014caffe}. The data reconstruction operation costs too much time, making the realistic cost of our LCCL much higher than the theoretical one. Second, the front convolutional layers usually take more time but exhibit less sparsity than the rear ones, which reduces the overall acceleration effect of the whole convolutional neural network. These two defects can be solved in theory, and we will focus on the realistic speedup in future work.
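As a quick numerical check of Table~\ref{table:Comparison_Speed}, the theoretical speedup for ResNet-18 implied by the (rounded) FLOPs is $1 - 1.2\times10^9/(1.8\times10^9) \approx 33\%$, consistent with the reported 34.6\% up to rounding of the FLOPs, while the measured wall-clock speedup is $1 - 77.1/97.1 \approx 21\%$; for ResNet-34 the corresponding numbers are $1 - 2.7\times10^9/(3.6\times10^9) = 25\%$ versus $1 - 138.6/169.3 \approx 18\%$, which quantifies the gap discussed above.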
\textbf{Platform.} The idea of reducing the matrix size in convolutional networks can in principle be applied to GPUs as well, even though some modifications of our LCCN should be made to better leverage the existing GPU libraries. Further, our method is platform independent, and should work on FPGA platforms with customization. \subsection{Visualization of LCCL} \begin{figure}[ht] \begin{center} \includegraphics[width=0.45\textwidth]{response_map.pdf} \end{center} \caption{ The feature maps (after ReLU) generated from the last LCCL of our LCCN and the corresponding convolutional layer of ResNet-50 are visualized for test samples of the PASCAL VOC2007 dataset. Each triplet represents one picture and its corresponding feature maps. The activated area of the LCCL seems to highlight more foreground objects than that of ResNet-50. In the meantime, the LCCL appears to suppress the background area. } \label{fig:response_map} \end{figure} Here is an interesting observation about our LCCL. We visualize the results of the LCCN on the PASCAL VOC2007~\cite{pascal-voc-2007} training dataset. We choose ResNet-50 as the competitor, and add an additional convolutional layer with 20 channels followed by an average pooling layer as the classifier. For our LCCN, we equip the last 6 layers of this competitor model with our LCCL. After fine tuning, the feature maps generated from the last LCCL and the corresponding convolutional layer of the competitor model are visualized in Fig.~\ref{fig:response_map}. As we can observe, our LCCL might have the ability to highlight the regions of foreground objects and eliminate the impact of the background via the collaboration property. For example, in the second triplet, the car and the person are activated simultaneously in the same response map by the LCCL. At first glance, these highlighted areas look similar to the locations obtained by attention models. However, they are intrinsically different in many ways, \eg, motivation, computational operations, response meaning and structure. \section{Conclusion} In this paper, we propose a more complicated network structure yet with less inference complexity to accelerate deep convolutional neural networks. We equip the original convolutional layer with a low-cost collaborative layer. This collaboration structure speeds up the test-phase computation by skipping the calculation of the zero cells predicted by the LCCL. In order to overcome the difficulty of achieving acceleration with basic LCCN structures, we introduce ReLU and BN to enhance sparsity and maintain performance. The acceleration of our LCCN is data-dependent, which is more reasonable than hard acceleration structures. In the experiments, we accelerate various models on CIFAR and ILSVRC-12, and our approach achieves a significant speed-up with only a slight loss in classification accuracy. Furthermore, our LCCN can be applied to most tasks based on convolutional networks (\eg, detection, segmentation and identification). Meanwhile, our LCCN is capable of incorporating other acceleration algorithms (\eg, fixed-point or pruning-based methods), which will further enhance the acceleration performance. {\small \bibliographystyle{ieee}
\section{Introduction}\label{sec:one We consider the problem of extracting a common signal from heterogeneous groups of data. This is exemplified by spatio-temporal array data on neuronal activity recorded repeatedly over time for 13 ferrets, the objective being to extract a common neuronal response to a visual stimulus. We will regard each 3D neuronal activity recording as a sample from a linear model with a mean component expressed in a basis expansion. If the mean components across all recordings are identical, the common mean component can be interpreted as the common signal and extracted via least squares estimation of the basis coefficient, say. However, the recordings are heterogeneous in the sense that the mean component cannot be regarded as fixed. Heterogeneity can, for instance, arise across recordings for a single animal due to slightly varying experimental conditions or to fatigue, and spatial heterogeneity is expected across animals due to dif\-fe\-ren\-ces in the cytoarchitecture. Various preprocessing techniques such as registration are used to alleviate heterogeneity, but preprocessing may only be partially successful, and human assessment for e.g. exclusion of outliers was needed in \cite{roland2006}. Explicit modeling of heterogeneity is possible and studied in the field of functional data analysis, \cite{Scheipl:2014, Staicu:2010, Wang:2016}, but we will not pursue this more sophisticated modeling framework. Though heterogeneity may represent structured variation, it may have many different known as well as unknown origins, and our focus is on fast, robust estimation of a common signal. \cite{meinshausen2015} proposed the maximin method as a way to aggregate heterogeneous data within the framework of linear models. Their population quantity called the maximin effect is the common signal, and they proposed families of estimators, see (9) in \cite{meinshausen2015}. These maximin estimators are, however, difficult to compute. Though they are given as solutions to convex minimization problems, the objective functions are nondifferentiable as well as nonseparable. An approach to circumvent the computational difficulties was proposed in another paper by \cite{buhlmann2016}. Using a theoretical representation of the maximin effect combined with the plug-in principle, they proposed magging (maximin aggregation) as an estimator of the maximin effect. Though magging is computationally applicable to the neuronal activity recordings, we will demonstrate that it does not successfully extract a common signal. We propose the soft maximin estimator, which may be viewed as a computationally well behaved approximation to maximin estimation and an alternative to magging. More importantly, it offers an entire range of estimators of independent interest interpolating magging and mean aggregation. By aggregating explained variances (or more generally convex group loss functions) using a type of soft minimum we obtain the estimator as a solution to a minimization problem with a differentiable loss. We refer to this loss function as the soft maximin loss and the estimator solves the soft maximin problem. Furthermore, to obtain a sparse solution across groups we consider an $\ell_1$-penalized version of this problem. For array data, such as the 3D neuronal activity recordings, we have previously demonstrated the efficiency of proximal gradient algorithms for sparse smoothing using tensor product bases, \cite{lund2017a}. 
In this paper we establish that the soft maximin loss is strongly convex under a full rank assumption on the design matrices (strongly convex group loss functions). We also show that the soft maximin loss has a Lipschitz continuous gradient when the design is identical across groups. Using this, it is possible to show convergence of a proximal gradient based algorithm when applied to the penalized soft maximin problem. As in \cite{lund2017a} we can then exploit the array-tensor structure of the data to obtain a time and space efficient solution algorithm for this type of problem. An implementation is provided in the R package \verb+SMMA+ available from CRAN, \cite{lund2017b}. The paper is organized as follows: The model setup and the soft maximin estimator are introduced in Section \ref{sec:two} and a small 1D example with simulated data is presented. In Section \ref{sec:three} we establish properties of the soft maximin loss and the convergence of the non-monotone proximal gradient (NPG) algorithm within this setup. We also discuss how to exploit the array-tensor structure with this algorithm, and illustrate our method on a 3D signal extraction example. Section \ref{sec:four} presents the application to the neuronal activity data, and in Section \ref{sec:five} we discuss soft maximin estimation and how it relates to alternative methods. \section{Soft maximin problem}\label{sec:two} We consider the linear model \begin{alignat}{4}\label{eq1} Y_{g, i} =X_{g, i}^\top B_g + \varepsilon_{g,i}, \quad g =1, \ldots, G, \ i = 1, \ldots, n_g \end{alignat} with $G$ groups, and with $X_{g, i}$ as well as $B_g$ $p$-dimensional vectors. Depending on the context, $X_{g,i}$ and $B_g$ may be regarded as fixed or they may be regarded as random as in \cite{meinshausen2015}. In any case, the errors, $\varepsilon_{g,i}$, are assumed uncorrelated with mean zero given $(X_{g,i}, B_g)_{g, i}$. Within this linear modeling framework, heterogeneity across the groups is captured by the variation in the $B_g$-coefficients. We let $Y_g = (Y_{g, 1},\ldots,Y_{g, n_g})^\top$ denote the group-specific response vector of length $n_g$, $X_g =(X_{g, 1} \ldots X_{g, n_g})^\top$ the corresponding $n_g\times p$ design matrix, and $\varepsilon_g = (\varepsilon_{g,1},\ldots,\varepsilon_{g,n_g})^\top$ the vector of error terms. The linear model for the $g$th group is then \begin{alignat}{4}\label{eq5} Y_g=X_gB_g+\varepsilon_g. \end{alignat} A \emph{common signal} in this framework is represented by a single $\beta \in \mathbb{R}^p$ such that $X_g \beta$ is a good approximation of $X_gB_g$ across all $G$ groups. Following \cite{meinshausen2015}, the empirical explained variance of $\beta \in \mathbb{R}^p$ for group $g$ is defined as \begin{alignat}{4}\label{eq9} \hat V_g(\beta) \coloneqq \frac{1}{n_g}(2\beta^\top X_g^\top y_g-\beta^\top X^\top_gX_g\beta). \end{alignat} Clearly, $\hat{\beta}_g = \argmax_{\beta} \hat V_g(\beta)$ is the OLS estimator within group $g$. The maximin effects estimator proposed in \cite{meinshausen2015} is obtained by maximizing the minimum of \eqref{eq9} across groups. The resulting optimization problem is difficult given the nondifferentiability and nonseparability of the $\min$ function. We propose the soft maximin estimator obtained by maximizing a soft minimum of \eqref{eq9} across groups. For $x\in \mathbb{R}^G$ and $\zeta \neq 0$ consider the scaled log-sum exponential function \begin{alignat*}{4} \mathrm{lse}_\zeta(x) \coloneqq \frac {\log(\sum_g e^{\zeta x_g} )}{\zeta}.
\end{alignat*} As argued below $\mathrm{lse}_{\zeta}$ behaves as a soft maximum (minimum) for large positive (negative) values of $\zeta$. Letting $\hat V(\beta) = (\hat V_1(\beta),\ldots, \hat V_G(\beta))^\top$ denote the vector of explained variances, we shall refer to \begin{alignat*}{4} l_{\zeta}(\beta) \coloneqq \mathrm{lse}_{\zeta}(-\hat V(\beta)) \end{alignat*} as the soft maximin loss function. Noting that $\mathrm{lse}_{-\zeta}(x)=-\mathrm{lse}_{\zeta}(-x)$, the soft maximin estimator is then defined for $\zeta > 0$ as \begin{alignat}{4} \beta_{smm}:=\argmax_{\beta\in\mathbb{R}^p} \mathrm{lse}_{-\zeta}(\hat V(\beta))=\argmin_{\beta\in\mathbb{R}^p} l_{\zeta}(\beta). \label{def:mm} \end{alignat} Note that l'H\^ospital's rule gives $\mathrm{lse}_{-\zeta}(x)\to\min\{x\}$ for $\zeta\to\infty$. For large $\zeta > 0$ we can therefore view the soft maximin estimator \eqref{def:mm} as an approximation to the maximin estimator proposed in \cite{meinshausen2015}. Note also that soft maximin estimation puts less weight on the groups with the smallest explained variance than maximin estimation. Especially, using that \begin{alignat*}{4} \frac {\log(\frac {1}{G}\sum_g e^{\zeta x_g} )}{\zeta}\to \frac {1}{G}\sum_g x_g \end{alignat*} for $\zeta \to 0$, we see that $\mathrm{lse}_{\zeta}(x)\sim \frac {1}{G}\sum_g x_g + \frac {\log(G)}{\zeta} $ for small $\zeta$. Thus the soft maximin loss can be seen as an interpolation between mean aggregation and max aggregation of minus the explained variances. \subsection{Smoothing}\label{subsec:2.1 As a main example of soft maximin aggregation we will consider smoothing of signals over a multivariate domain from $G$ groups. Thus \begin{alignat}{4}\label{eq6} Y_{g, i}= f_g(z_{g, i}) + \varepsilon_{g, i}, \quad z_{g, i}\in \mathbb{R}^d, \ i = 1,\ldots,n_g, \end{alignat} with $f_g$ a group specific smooth function. If we represent $f_g $ using a basis expansion as \begin{alignat}{4}\label{eq7} f_g(z)=\sum_{m=1}^{p} \Theta_{g,m}\varphi_m(z), \quad \end{alignat} for $\varphi_1, \ldots, \varphi_p$ a set of basis functions, we can collect the basis function evaluations into the $n_g\times p$ matrix $\Phi_g = (\varphi_m(z_{g, i}))_{i, m}$, in which case model \eqref{eq6} is given as the linear model \eqref{eq5} with $X_g = \Phi_g$ and $B_g = (\Theta_{g,1},\ldots,\Theta_{g,p})^\top$. \subsection{1-dimensional signal extraction}\label{subsec:1dim To illustrate how soft maximin estimation works, we reproduce and extend the numerical example from \cite{buhlmann2016}. We simu\-late signals with three components: i) a common signal of interest $f(x)=\cos(10 (2 \pi) x) + 1.5 \sin(5 (2 \pi ) x)$ superimposed with ii) periodic signals with randomly varying frequency and phase and iii) additive white noise. In particular, we simulate $G=50$ signals where for each $g\in \{1,\ldots,50\}$ \begin{alignat*}{4} Y_{g,i}=f(x_i)+ 50 \sum_{j\in J_g} \varphi_j (x_i + p_g)+\varepsilon_{g,i}, \quad i = 1,\ldots,2001. \end{alignat*} Here $J_g$ is a set of $7$ integers sampled uniformly from $ \{1,\ldots,101\} $, $\varphi_j$ is the $j$th Fourier basis function, $p_g\sim \mathrm{unif}(-\pi,\pi)$, and $\varepsilon_{g,i}\sim \mathcal{N}(0,10)$. We simulate observations for each $x_i= 0,1,\ldots, 2000$. \begin{figure}[H] \begin{center} {\includegraphics[scale=0.4]{1dsimtwocol.pdf}} \caption{True signal in red. 
From top left we have the magging estimate, the soft maximin estimates for $\zeta=2000$, $200$, and $20$, the mean aggregated estimate and the mean signal, which is simply the average across groups. The MSE for the magging estimate is $1.301 \times 10^{-4}$ and $1.953 \times 10^{-4}$ for the soft maximin estimate ($\zeta=2000$).}
\label{fig:1}
\end{center}
\end{figure}
With $\Phi$ containing the first 101 Fourier basis functions evaluated at $x_i= 0,1,\ldots, 2000$ we solved an $\ell_1$ penalized soft maximin problem (see \eqref{eq13} below) for a sequence of penalty parameters and for $\zeta = 20$, $200$, and $2000$. In addition, we aggregated the groupwise OLS estimates, $\hat{\beta}_1, \ldots, \hat{\beta}_{50}$, using magging as proposed in \cite{buhlmann2016} as well as by mean aggregation. The mean signal across groups was also computed. Figure \ref{fig:1} shows the results of the different estimation procedures. Both the magging estimate and the soft maximin estimate for $\zeta = 2000$ extracted the true common signal quite well, while the mean aggregated estimate resembled the mean signal, showing little similarity to the common signal. We note that for larger $\zeta$ soft maximin behaved similarly to magging, while for smaller $\zeta$ soft maximin resembled mean aggregation, as expected.
\section{Penalized soft maximin aggregation}\label{sec:three}
Here we formulate a general penalized soft maximin \emph{aggregation} problem. Instead of $-\hat V$ defined in \eqref{eq9} we consider a general set of group loss functions $h\coloneqq (h_1,\ldots,h_G)$ and the soft maximin aggregation loss $ s_\zeta:\mathbb{R}^p\to\mathbb{R}$, given by
\begin{alignat*}{4}
s_\zeta(\beta):=\mathrm{lse}_\zeta\circ h(\beta) = \frac{\log(\sum_{g=1}^G e^{\zeta h_g(\beta)})}{\zeta}, \quad \zeta>0.
\end{alignat*}
We are then interested in obtaining the penalized soft maximin aggregation estimator defined as the solution to the problem
\begin{alignat}{4}\label{eq13}
\min_{\beta\in \mathbb{R}^p} s_\zeta(\beta) +\lambda J(\beta), \quad \zeta>0,
\end{alignat}
where $J$ is a proper convex function and $\lambda\geq0 $ is the penalty parameter. When $h = -\hat V$ as in Section \ref{sec:two}, we refer to $s_\zeta = l_\zeta$ as the soft maximin loss and to \eqref{eq13} as the penalized soft maximin problem. Thus the term \emph{aggregation} is used to emphasize that we are considering general group loss functions $h_1,\ldots,h_G$. Solving \eqref{eq13} in a large scale setting requires an efficient optimization algorithm for non-differentiable problems. We note that when $h=-\hat V$, in contrast to the hard maximin problem from \cite{meinshausen2015}, \eqref{eq13} is a convex, nondifferentiable, but separable problem (see \cite{tseng2009}), implying that the coordinate descent algorithm is viable for problem \eqref{eq13}. Here, however, since we are particularly interested in solving \eqref{eq13} for data with array-tensor structure, we are going to consider modified versions of the proximal gradient algorithm. As demonstrated in \cite{lund2017a} this algorithm is very well suited to handle this particular setup and can outperform the coordinate descent algorithm. The proximal gradient algorithm fundamentally works by iteratively applying the proximal operator
\begin{alignat}{4}\label{eq:4.6}
\mathrm{prox}_{\delta J}(\beta) = \argmin_{ \gamma \in \mathbb{R}^p} \Big\{\frac{1}{2\delta}\Vert \gamma - \beta \Vert_{2}^2 + J(\gamma)\Big\},\quad \delta>0
\end{alignat}
to gradient based proposal steps.
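For the $\ell_1$ penalty used in our examples, $J(\beta)=\Vert\beta\Vert_1$, the operator $\mathrm{prox}_{\delta\lambda J}$ is available in closed form as coordinatewise soft thresholding. The following minimal sketch (in Python with numpy; purely illustrative and not the \verb+SMMA+ implementation) shows this operator together with a single proximal gradient update applied to a gradient-based proposal:
\begin{verbatim}
import numpy as np

def prox_l1(beta, delta, lam):
    # prox of delta*lam*||.||_1: coordinatewise soft thresholding
    return np.sign(beta) * np.maximum(np.abs(beta) - delta * lam, 0.0)

def prox_grad_step(beta, grad, delta, lam):
    # one proximal gradient update applied to a gradient-based proposal
    return prox_l1(beta - delta * grad, delta, lam)
\end{verbatim}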
For loss functions whose gradient is Lipschitz continuous with constant $L$, such an algorithm is guaranteed to converge to the solution as long as $\delta \in (0,2/L)$. In practice, $\delta$ is chosen as large as possible, and we are interested in finding the smallest possible Lipschitz constant $L$. With known $L$ and fixed $\delta \in (0,2/L)$ a proximal gradient algorithm consists of the following essential computations:
\begin{enumerate}
\item\label{gradeval} evaluation of the gradient of the loss
\item\label{proxeval} evaluation of the proximal operator $\mathrm{prox}_{\delta J}$
\item\label{objeval} evaluation of the loss function and penalty function.
\end{enumerate}
The computational complexity in steps \ref{gradeval} and \ref{objeval} is dominated by matrix-vector products (see, e.g., \eqref{eq9} for the soft maximin problem). The complexity in step \ref{proxeval} is determined by $J$. As noted in \cite{beck2009}, when $J$ is separable (e.g. the $\ell_1$-norm) $ \mathrm{prox}_{\delta J}$ can be computed analytically or at low cost. If $L$ is not known (or if $\delta \geq 2/L$ for a known, but perhaps conservative, $L$) we cannot guarantee convergence with a fixed choice of $\delta$, but adding a backtracking step will ensure convergence of the iterates. This extra step will increase the per-step computational complexity of the algorithm. When the gradient is not globally Lipschitz, it is no longer guaranteed that iterating steps \ref{gradeval}-\ref{objeval} will yield a solution to \eqref{eq13} for any fixed $\delta$. However, it is possible to show that the NPG algorithm will converge to a solution of \eqref{eq13} under some regularity conditions.
\begin{algorithm}
\caption{NPG minimizing $F = f + \lambda J$}
\label{alg:1}
\begin{algorithmic}[1]
\REQUIRE $\beta^{(0)}$, $L_{\max}\geq L_{\min}>0$, $\tau>1$, $c>0$, $M\geq 0$.
\FOR{$k=0$ to $K\in \mathbb{N}$}
\STATE\label{start} choose $L_k\in [L_{\min},L_{\max}]$
\STATE\label{prox} solve $\beta =\mathrm{prox}_{ \lambda J/L_k}(\beta ^{(k)}- \frac{1}{L_k}\nabla f (\beta ^{(k)}))$ \label{alg:1_3}
\IF{ $F(\beta)\leq \max_{[k-M]_+\leq i\leq k} F(\beta^{(i)})-c/2\Vert \beta-\beta^{(k)}\Vert^2$} \label{alg:1_4}
\STATE $\beta^{(k+1)} = \beta$
\ELSE
\STATE $L_k = \tau L_k$ and go to \ref{prox}
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
We show that $s_\zeta$ does not have a Lipschitz continuous gradient in general, but convergence of the NPG algorithm can be established under general conditions on the group loss functions $h_1,\ldots,h_G$. Furthermore, in the special case where $h_g = - \hat{V}_g$ with all groups sharing the same design, we establish that $s_\zeta$ has a globally Lipschitz continuous gradient, and we find a bound on the Lipschitz constant. The first result states that $s_\zeta$ inherits strong convexity from an individual group loss function $h_g$, given that all $h_1,\ldots,h_G$ are convex and twice continuously differentiable. The proof is given in the appendix.
\begin{thm_prop} \label{prop:one}
Assume $h_1,\ldots, h_G$ are twice continuously differentiable.
Define $w_{g,\zeta}(\beta) := e^{\zeta h_g(\beta) - \zeta s_\zeta(\beta)}$. Then $\sum_gw_{g,\zeta}(\beta) =1$ for all $\beta \in \mathbb{R}^p$ and
\begin{alignat}{4}\label{eq8new}
\nabla s_\zeta(\beta)&=&&\sum_{g=1}^Gw_{g,\zeta}(\beta)\nabla h_g(\beta)\\
\nabla^2 s_\zeta(\beta) &=&& \zeta\sum_{i=1}^G\sum_{j = i + 1}^G w_{i,\zeta}(\beta)w_{j,\zeta}(\beta) (\nabla h_i(\beta)-\nabla h_{j}(\beta))(\nabla h_i(\beta)-\nabla h_{j}(\beta))^\top\nonumber\\
&&&+ \sum_{g=1}^G w_{g,\zeta}(\beta) \nabla^2 h_g(\beta). \label{eq10new}
\end{alignat}
Furthermore, if $h_1,\ldots, h_G$ are convex with at least one $h_g$ strongly convex, then $s_\zeta$ and $e^{\zeta s_\zeta}$ are strongly convex.
\end{thm_prop}
Proposition \ref{prop:one} applies to the soft maximin loss with $h_g =-\hat{V}_g$. In this case $\nabla^2 h_g = 2X^\top_gX_g / n_g$, and $h_g$ is strongly convex if and only if $X_g$ has rank $p$. Proposition \ref{prop:one} thus implies that if one of the matrices $X_g$ has rank $p$, then $l_{\zeta}$ is strongly convex. However, we also see from Proposition \ref{prop:one} that $\nabla^2 s_\zeta(\beta)$ is not globally bounded in general, even for the soft maximin loss. Consider, for instance, the case with $G = 2$ and $p = n_1 = n_2 = 2$ with
$$X_1 = \left(\begin{array}{cc} 1 & 0 \\ 0 & 1 \end{array}\right) \quad \textrm{and} \quad X_2 = \left(\begin{array}{cc} 0 & 0 \\ \sqrt{2} & 0 \end{array}\right). $$
Take also $y_1=y_2=0$. When $\beta_1=\beta_2 = \kappa$ it holds that $h_1(\beta) = h_2(\beta) = \kappa^2$ and thus $w_{1,\zeta}=w_{2,\zeta}=1/2$ for any $\zeta$, while
\begin{align*}
(\nabla h_1(\beta)-\nabla h_{2}(\beta))(\nabla h_1(\beta)-\nabla h_{2}(\beta))^\top & \\
= \left(\begin{array}{cc} \beta_1^2 & -\beta_1\beta_2 \\ -\beta_1\beta_2 & \beta_2^2 \end{array}\right) & = \left(\begin{array}{cc} \kappa^2 & -\kappa^2 \\ -\kappa^2 & \kappa^2 \end{array}\right)
\end{align*}
is unbounded. The following result shows, on the other hand, that for soft maximin estimation with identical $X_g$-matrices across the groups, $\nabla l_\zeta$ is, in fact, Lipschitz continuous. The proof is in the appendix.
\begin{thm_cor}\label{coro:one}
Let $h_g =-\hat{V}_g, g\in\{1,\ldots,G\}$, with identical $n\times p$ design matrix $X$ across all $G$ groups. Then $\Vert\nabla^2 l_\zeta(\beta)\Vert$ is bounded by
\begin{alignat}{4} \label{eq11new}
L\Big(\frac{2\zeta}{n}\sum_{i=1}^G\sum_{j = i + 1}^G w_{i,\zeta}(\beta)w_{j,\zeta}(\beta) \Vert y_i-y_j\Vert_2^2+ 1\Big)\leq L\Big(\frac{2\zeta}{n}\sum_{i=1}^G\sum_{j = i + 1}^G \Vert y_i-y_{j}\Vert_2^2+1\Big),
\end{alignat}
where $L := 2\Vert X^\top X\Vert/n$ is the Lipschitz constant of $\nabla h_g$, implying that $l_\zeta$ has a Lipschitz continuous gradient.
\end{thm_cor}
By Corollary \ref{coro:one}, if we have identical design across groups, we can obtain the soft maximin estimator by applying the fast proximal gradient algorithm from \cite{beck2009} to the optimization problem \eqref{eq13}. Furthermore, in this setting the corollary also gives an explicit upper bound on the Lipschitz constant. When $L$, the Lipschitz constant of the gradient of the group loss, is computable, this provides a way to find an efficient step size. Finally, in the general setup the following proposition shows that the non-monotone proximal gradient (NPG) algorithm (see \cite{wright2009} and \cite{chen2016}), which does not rely on a global Lipschitz property, solves the problem \eqref{eq13} given the assumptions in Proposition \ref{prop:one}. The proof of the proposition is given in the appendix.
\begin{thm_prop}\label{prop:two}
Assume $h_1,\ldots, h_G$ satisfy the assumptions in Proposition \ref{prop:one}. Let $(\beta^{(k)})_k$ be a sequence of iterates obtained by applying the NPG algorithm to \eqref{eq13}. Then $\beta^{(k)}\to \beta^\ast$ where $\beta^\ast$ is a critical point of $s_\zeta+\lambda J$.
\end{thm_prop}
In summary, given strong convexity, e.g. satisfied in the maximin setup when one $X_g$ has full rank, we can always solve the problem \eqref{eq13} using a proximal gradient based algorithm. Furthermore, for soft maximin estimation with identical design across groups we can even apply a standard version of this algorithm. This is particularly convenient in the array-tensor setup described next, where the bound \eqref{eq11new} is easy to compute.
\subsection{Array tensor smoothing} \label{subsec:atsmooth}
Consider the situation where the observations in \eqref{eq6} are made on a $d$-dimensional grid $G$ times. That is, for each $g\in \{1,\ldots,G\}$ we have samples from all points in a product set
\begin{alignat}{4}\label{eq9new}
\mathcal{X}_1\times\mathcal{X}_2\times\ldots \times \mathcal{X}_{d}
\end{alignat}
where $ \mathcal{X}_j=\{x_{j,1},\ldots, x_{j,n_j}\}\subset \mathbb{R}$ with $x_{j,k_j}<x_{j,k_j+1}$ for $k_j = 1,\ldots,n_j-1$. We may organize such a sample as a $d$-dimensional (response) array $\bs{Y}_g$. Preserving this array structure when formulating the smoothing model in Section \ref{sec:two} leads to an estimation problem with array-tensor structure. In particular, when considering the smoothing model \eqref{eq6} with array data, the tensor structure arises if we use tensor product basis functions. Letting $n=\prod_{j=1}^{d}n_j$ and $p=\prod_{j=1}^{d}p_j$ we can use the tensor product construction to specify the multivariate basis functions appearing in \eqref{eq7} in terms of $d$ univariate functions as
\begin{alignat}{4}\label{eq14}
\varphi_{m} = \varphi_{1,m_1} \varphi_{2,m_2}\cdots \varphi_{d,m_d}.
\end{alignat}
Here $\varphi_{j,m_j} : \mathbb{R} \to \mathbb{R}$ for $j = 1, \ldots, d$ and $m_j = 1, \ldots, p_j$ are marginal basis functions. Evaluating each of the $ p_j$ univariate functions at the $n_j$ points in $\mathcal{X}_j$ results in an $n_j\times p_j$ marginal design matrix $\Phi_j = (\varphi_{j, m_j}(x_{j,k_j}))_{k_j,m_j}$. It follows that the tensor (Kronecker) product of these marginal design matrices,
\begin{alignat}{4}\label{eq15}
\Phi = \Phi_{d}\otimes \cdots\otimes \Phi_2 \otimes \Phi_{1},
\end{alignat}
is a design matrix for the $g$th group in \eqref{eq6}. Organizing the corresponding basis coefficients in a $p_1\times \cdots\times p_d$ array $\bs{\Theta}_g=(\Theta_{j_1,\ldots,j_d,g})_{j_1=1,\ldots,j_d=1}^{p_1,\ldots,p_d}$ and using the rotated $H$-transform $\rho$, see \cite{currie2006}, it follows that we can write the model \eqref{eq6} for the $g$th group as
\begin{alignat}{4}\label{eq12}
\bs{Y}_g=\rho(\Phi_{d},\rho(\Phi_{d-1},\ldots, \rho(\Phi_{1}, \bs{\Theta}_g))) + \bs{E}_g
\end{alignat}
where $\bs{E}_g$ is an $n_1\times n_2\times\cdots\times n_d$ array containing the error terms. As detailed in \cite{currie2006}, using $\rho$ the matrix-vector products needed when evaluating the gradient and the loss in steps \ref{gradeval} and \ref{objeval} above can be computed without having access to the (large) matrix $\Phi$. In addition, this computation is very efficient. Furthermore, because of the tensor structure in \eqref{eq15}, the constant $L$ from Corollary \ref{coro:one} is easy to compute, see (30) in \cite{lund2017a}.
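As a small illustration of these two points, the following sketch (in Python with numpy, for $d=3$; purely illustrative and not the \verb+SMMA+ implementation) evaluates the fitted array of the model \eqref{eq12} directly from the marginal design matrices, and computes the constant $L$ from Corollary \ref{coro:one} using the fact that the spectral norm of a Kronecker product is the product of the marginal spectral norms:
\begin{verbatim}
import numpy as np

def fitted_array(Theta, Phi1, Phi2, Phi3):
    # fitted values of the 3D array model, computed from the marginal
    # design matrices only, i.e. without forming the Kronecker matrix Phi
    return np.einsum('ia,jb,kc,abc->ijk', Phi1, Phi2, Phi3, Theta)

def lipschitz_constant(Phis):
    # L = 2*||Phi^T Phi||/n with Phi = Phi_d (x) ... (x) Phi_1 and n = prod n_j,
    # using ||Phi|| = prod_j ||Phi_j|| for the spectral norm
    n = np.prod([P.shape[0] for P in Phis])
    return 2.0 * np.prod([np.linalg.norm(P, 2) ** 2 for P in Phis]) / n
\end{verbatim}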
Thus the upper bound in the corollary is computable, which in turn implies that we can run the proximal gradient algorithm without performing any backtracking. Note, however, that the sum on the left hand side of \eqref{eq11new} is potentially much smaller than the sum on the right, since the weights $w_{g,\zeta}(\beta)$ sum to one and each product $w_{i,\zeta}(\beta)w_{j,\zeta}(\beta)$ is therefore at most one. Thus an efficient implementation could, e.g., entail scaling down this sum and then monitoring the convergence. Note that this type of step size optimization may also be used in the NPG algorithm to enhance performance. Following \cite{lund2017a}, we have implemented both a fast proximal gradient algorithm and an NPG algorithm in a way that exploits the array-tensor structure described above. These implementations are available for 1D, 2D, and 3D array data in the R package \verb+SMMA+. The result is a computationally efficient numerical procedure for solving the soft maximin problem \eqref{eq13} with a small memory footprint.
\subsection{3-dimensional signal extraction}\label{subsec:3dim}
To demonstrate soft maximin estimation in a multi-dimensional setting we simulated $G = 50$ groups of 3-dimensional signals. The signals were generated in a way similar to the 1-dimensional example from Section \ref{subsec:1dim} and bear some resemblance to the neuronal activity imaging data. Specifically, we simulated signals with the common signal $f(x,y,t)=\varphi_{12.5,4}(x)\varphi_{12.5,4}(y)\varphi_{50,25}(t)$ ($\varphi_{\mu,\sigma^2}$ is the density for the $\mathcal{N}(\mu, \sigma^2)$ distribution) that we want to extract. This signal was superimposed with random cyclic components and white noise. The 4-dimensional raw data array was generated as
\begin{alignat*}{4}
Y_{i,j,k,g}&=f(x_i,y_j,t_k)\\
&+5 \sum_{m\in J_g} \varphi_m (x_i + p_g)\varphi_m (y_j + p_g)\varphi_m (t_k + p_g)+\varepsilon_{i,j,k,g}
\end{alignat*}
with all components and quantities but $f$ as in Section \ref{subsec:1dim}, and with $x_i=1,2,\ldots,25$, $y_j=1,2,\ldots,25$ and $t_k=1,2,\ldots,101$. We note that compared to the 1-dimensional example the common signal is spatially as well as temporally localized.
\begin{figure}
\centering
\includegraphics[scale=0.65]{3dsimdat.pdf}
\caption{Three examples of 3D simulated signals at time $t_k=50$. The common signal is not visible.}
\label{fig:two}
\end{figure}
Figure \ref{fig:two} shows the simulated signals for three different groups at time $t_k=50$ where $f$ attains its maximum. The common signal is visually undetectable from the individual signals. However, systematic fluctuations caused by the spatial part of the periodic random signal are visible and can be seen to differ between groups. To extract the common signal we used the array-tensor formulation from Section \ref{subsec:atsmooth} of the smoothing model from Section \ref{subsec:2.1}. Using B-splines as basis functions in each dimension we obtained an array model with tensor design components $\Phi^x$, $\Phi^y$, and $\Phi^t$ given by the B-spline basis function evaluations. We solved the soft maximin problem \eqref{eq13} with $\ell_1$-norm penalty and $\zeta =100$.
\begin{figure}
\begin{center}
\includegraphics[scale=0.35]{CV3dsim.pdf}
\caption{Generalization error for soft maximin. Dashed line is the minimum.}
\label{fig:3new}
\end{center}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.5]{3dsimtemp.pdf}
\caption{Bottom: Temporal plots for $(x,y)=(12,12)$. True signal in red. Soft maximin estimate, model no.
7 and $\zeta=100$ (top left), magging estimate (top right), mean aggregated estimate (bottom left) and mean over trials (bottom right). Soft maximin MSE is $5.5 \times 10^{-4}$ and magging MSE $2.8 \times 10^{-3}$.}
\label{fig:3}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale=0.5]{3dsimspa.pdf}
\caption{Spatial plots for $t_k=50$. True signal (top left), soft maximin estimate (model no. 7, $\zeta=100$) (top right), magging estimate (bottom left), mean aggregated estimate (bottom right).}
\label{fig:4}
\end{figure}
To obtain the magging estimates we also solved an $\ell_1$-norm penalized least squares estimation problem for each group, using the same design components and the same sequence of 10 penalty parameters as for the soft maximin problem, using the R package \verb+glamlasso+, \cite{lund2018a}. Given the $G$ estimates we aggregated them as described in \cite{buhlmann2016}. We note that the time to compute the soft maximin estimate was around 30 seconds, while it took around 140 seconds to compute the magging estimate. For the magging estimate the bulk of the computational time was spent estimating the group parameters. Finally, we computed the mean aggregated estimate across groups as well as the mean signal. To select the penalty parameter we performed the following variation of 10-fold cross-validation. In each fold we left out all observations in a randomly selected $5\times 5\times 101$ block and fitted the model on the remaining data for each of the 10 penalty values $\lambda_1,\ldots,\lambda_{10}$ from the original fit. We did this 10 times and then computed the average (over folds) soft maximin loss on the held-out observations for each $\lambda_m$. The result is shown in Figure \ref{fig:3new}. Figure \ref{fig:3} shows the resulting estimate along the temporal dimension for one spatial coordinate. Soft maximin (for the optimal model no. 7) with $\zeta=100$ was able to extract the common signal quite well. The magging estimate (likewise using model no. 7 for each group) also extracted the common signal but with some additional fluctuations, giving the estimate more variability. The mean aggregated estimate (model no. 7) was not able to clearly extract the common signal but rather extracted some spurious periodic fluctuations. Finally, the mean signal across the groups does not reveal the common signal at all. Figure \ref{fig:4} shows the same results but plotted in the two spatial dimensions for the single time point $t_k=50$. The figure confirms the findings from Figure \ref{fig:3}.
\section{Brain imaging data}\label{sec:four}
The neuronal activity recordings were obtained using voltage-sensitive dye imaging (VSDI) in an experiment previously described in \cite{roland2006}. The experiment consisted of a total of $G=275$ trials (groups) of recordings on 13 different ferrets. Each recording consists of a movie representing neuronal activity, which we have mapped into a 3-dimensional array for our analysis. In short, the experimental setup was as follows. Part of the visual cortex of a live ferret was exposed and stained with a voltage-sensitive dye. Changes in neuron cell membrane potentials affect the absorption or emission fluorescence of the dye, and neuronal activity can be recorded indirectly in terms of emitted fluorescent light. The recording used 464 channels organized in a two-dimensional (hexagonal) array producing images of \textit{in vivo} neuronal activity.
In each trial a visual stimulus was presented to the live ferret (a white square on a grey screen) for 250 ms. Over the course of the trial images were recorded every $0.6136$ ms producing a movie of neuronal activity. For the purpose of our analysis, the 464 channels were mapped to a $25\times25$ array yielding an image with $625$ pixels. Note that data for 161 pixels are then unobserved. Several sources of heterogeneity are potentially present in the data. We list some here.
\begin{enumerate}
\item\label{list:iv} The heart beat affects the light emission by expanding the blood vessels in the brain, creating a cyclic heart rate dependent artefact. A changing heart rate over trials for one animal (fatigue) as well as differences in heart rate between animals will cause heterogeneity in the data.
\item\label{list:ii} Spatial inhomogeneities can arise due to differences in the cytoarchitectural borders between the animals causing misalignment problems.
\item\label{list:iii} The VSDI technique is very sensitive, see \cite{grinwald2002}. Even small changes in the experimental surroundings could affect the recordings and create heterogeneity.
\item\label{list:v} There are differences between animals in how they respond to the visual stimulus.
\end{enumerate}
To alleviate the heart rate artefact, the raw VSDI recordings were preprocessed as follows. Two consecutive recordings were actually made in each trial; one with a visual stimulus and one without stimulus. These recordings were temporally aligned using electrocardiography (ECG) data, and the difference between these two aligned recordings was computed and normalized with the pixel-specific pre-stimulus standard deviation. We refer to the result as the preprocessed recordings.
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{dattemptwocol.pdf}
\caption{Temporal evolution in the raw (left) and preprocessed (right) VSDI recording from pixel $(14, 14)$ for trials 30 (top), 40 (middle), and 50 (bottom). Vertical lines indicate stimulus start (200 ms) and stop (450 ms).}
\label{fig:5}
\end{center}
\end{figure}
Figures \ref{fig:5} and \ref{fig:6} show examples of the raw recordings as well as the preprocessed recordings for three trials. Figure \ref{fig:5} shows the recordings in the temporal dimension for one pixel, while Figure \ref{fig:6} shows the recordings in the spatial dimension around the time of an expected maximal stimulus response. Following the onset of the visual stimulus (200 ms), the recordings are expected to show the result of a depolarization of neuron cells in the visual cortex, but we do not observe a clear stimulus response for all trials. While trial 40 shows clear evidence of depolarization, the other two trials do not. Visual inspection of Figure \ref{fig:5} also indicates the presence of systematic noise components, that is, artefacts as described in point \ref{list:iv} in the list above, which are most pronounced for the raw recordings.
\begin{figure}
\begin{center}
\includegraphics[scale=0.5]{datspatwocol.pdf}
\caption{The raw recordings (left) and the preprocessed recordings (right) for three different trials around the time of an expected maximal response. Trial 40 shows the strongest response to the stimulus whereas the other two trials show less response.
The response is strongest in the preprocessed data.}
\label{fig:6}
\end{center}
\end{figure}
\subsection{Model fitting}
For both the raw and the preprocessed recordings we extracted a common signal across trials and animals by soft maximin estimation, which we compared to mean aggregation and magging of the OLS estimates. The data consists of 275 spatio-temporal recordings each with dimensions $25\times 25 \times 977$, that is, 625 pixels recorded over 977 time points (600 ms). We used 10 B-splines in each spatial dimension and 196 B-splines in the temporal dimension to obtain a linear array model with tensor design components $\Phi^x$, $\Phi^y$, and $\Phi^t$, as described in Section \ref{subsec:atsmooth}, given by the B-splines evaluated over the marginal domains. The resulting model has a total of $p = $ 19,600 parameters. The soft maximin problem \eqref{eq13} was solved for the entire data set using the $\ell_1$-penalty for 10 values of the penalty parameter $\lambda$ and for $\zeta = 2$ and $\zeta = 100$, while the magging estimate was obtained by computing the OLS estimate for each trial and then applying maximin aggregation. The mean aggregated fit was computed likewise. All estimates were computed for the raw as well as for the preprocessed recordings. We note that to compute the 10 soft maximin estimates it took around 60 seconds (110 seconds) for the raw (preprocessed) recordings. The computation of one magging estimate took around 100 seconds (110 seconds) for the raw (preprocessed) recordings. All computations were carried out on a Macbook Pro with a 2.8 GHz Intel core i7 processor and 16 GB of 1600 MHz DDR3 memory. Movies of the estimates for both raw and preprocessed recordings are available as supplementary material.
To choose the optimal penalty parameter we randomly excluded two $5\times 5 \times 977$ blocks of data for all trials and fitted the model on the remaining data using the 10 penalty values $\lambda_1,\ldots,\lambda_{10}$ from the original fit. The soft maximin loss was then computed on the excluded data blocks for each value of the
\begin{figure}
\begin{center}
\includegraphics[scale = 0.5]{CVdattwocol.pdf}
\caption{Validation estimates for the soft maximin loss with $\zeta=2$ (applied to the raw recordings (left) and preprocessed recordings (right)). Dashed lines indicate minimum average soft maximin loss on held-out observations.}
\label{fig:7}
\end{center}
\end{figure}
\begin{figure}[H]
\centering
\includegraphics[scale = 0.5]{Stemptwocol.pdf}
\caption{Temporal estimates for two different pixels using mean aggregation (black), soft maximin for $\zeta=2$ (red) and $\zeta=100$ (green), and magging (blue). For the raw recordings (top) model 8 was selected in the validation step while for the preprocessed recordings (bottom) model 7 was selected. Vertical lines indicate stimulus start and stop.}
\label{fig:8}
\end{figure}
\noindent penalty parameter. The entire procedure was repeated ten times, the average loss was computed, and the penalty parameter with the minimal average loss was selected. This resulted in model number 8 for the raw recordings and model number 7 for the preprocessed recordings, see Figure \ref{fig:7}. Figure \ref{fig:8} shows the soft maximin (model 8), mean aggregation and magging estimates in the temporal dimension for pixels $(14, 14)$ and $(10, 20)$.
Mean aggregation and soft maximin estimation extract fairly clear signals both for the raw and preprocessed recordings, and a clear on-signal (stimulus start) and off-signal (stimulus stop) are picked up for these pixels. Soft maximin gives some smoothing but also some shrinkage compared to mean aggregation. The magging estimator extracts mostly noise for the preprocessed data, while showing a weak signal for pixel $(14, 14)$ for the raw recordings. We note that for the raw recordings both estimates display some variation, which is possibly periodic. In particular, for pixel $(10,20)$ a notable polarization is picked up before the stimulus is presented. This could be due to the heart rate artefact.
\begin{figure}
\centering
\includegraphics[scale = 0.35]{Ssparaw.pdf}
\caption{Spatial estimates at six different time points using the raw recordings and mean aggregation (col. 1), soft maximin for model no. 8 and $\zeta=2$ (col. 2) and $\zeta=100$ (col. 3), and magging (col. 4).}
\label{fig:10}
\end{figure}
\begin{figure}
\centering
\includegraphics[scale = 0.35]{Sspapre.pdf}
\caption{Spatial estimates at six different time points for the preprocessed recordings using mean aggregation (col. 1), soft maximin (model no. 8) with $\zeta=2$ (col. 2), with $\zeta=100$ (col. 3), and magging (col. 4).}
\label{fig:11}
\end{figure}
Figures \ref{fig:10} and \ref{fig:11} show soft maximin, mean aggregation and magging estimates in the spatial dimensions for six different time points. For the preprocessed recordings, mean aggregation resulted in a signal with a clear stimulus response. Soft maximin provided a similar result with a greater spatial localization but also shrinkage of the signal magnitude. The more compactly supported spatial area identified by soft maximin corresponds to the representation of the center of the field of view on the image. For the raw data, mean aggregation resulted in some spurious spatial fluctuations that were smoothed away by soft maximin. Magging was not able to extract a signal from either the raw or the preprocessed recordings.
\section{Discussion}\label{sec:five}
The maximin estimator with the $\ell_1$-penalty, as defined in \cite{meinshausen2015}, solves the minimization problem
\begin{equation} \label{eq:maximin}
\min_{\beta} \max_g\{ - \hat{V}_g(\beta) \} + \lambda \| \beta \|_1.
\end{equation}
Though the objective function is convex, it is nondifferentiable as well as nonseparable, and contrary to the claim in Section 4 of \cite{meinshausen2015}, coordinate descent will not always solve \eqref{eq:maximin}. Two approximate approaches for solving \eqref{eq:maximin} were suggested in \cite{meinshausen2015}, the first consisting of a proposed smooth approximation of the term $\max_g \{ - \hat{V}_g(\beta)\}$. However, we did not find this approximation to work in practice, and we developed the soft maximin loss as a better alternative. We note that the solution path of \eqref{eq:maximin} is piecewise linear in $\lambda$, and it may thus be computed using a method like LARS, see \cite{roll2008}. A LARS-type algorithm, or a coordinate descent algorithm applied to a smooth majorant such as the soft maximin loss, were also proposed to us by Meinshausen (personal communication) as better alternatives to those suggested in \cite{meinshausen2015}. In our experience, the LARS-type algorithm scales poorly with the size of the problem, and neither LARS nor coordinate descent can exploit the array-tensor structure.
Magging, as proposed in \cite{buhlmann2016} as yet another alternative to \eqref{eq:maximin} for estimation of maximin effects, is computationally straightforward and easy to parallelize, but, as we demonstrated, not necessarily computationally faster than soft maximin aggregation. From the definition of the soft maximin loss, the purpose of $\zeta$ is to control the tradeoff in the estimation between groups with large explained variance and groups with small explained variance. The gradient representation \eqref{eq8new} shows explicitly how this tradeoff works in the NPG algorithm: the gradient of the soft maximin loss is a convex combination of the gradients of the groupwise squared error loss functions with weights controlled by $\zeta$. The largest weights are on those groups with the smallest explained variances, and as $\zeta \to \infty$ the weights concentrate on the groups with minimal explained variance. Thus our proposed algorithm and implementation in the R package \verb+SMMA+ provide a means for approximately minimizing \eqref{eq:maximin} and are, as such, an alternative to magging as an estimator of the maximin effect. More importantly, by the introduction of the tuning parameter $\zeta$ in the soft maximin loss we not only achieved an approximate solution of \eqref{eq:maximin} but also an interpolation between max aggregation and mean aggregation across groups. We have demonstrated via simulations and the application to VSDI recordings how soft maximin is able to extract a signal in the context of multivariate array data and how the choice of the tuning parameter $\zeta$ affects the extracted signal. The simulations showed that magging as well as soft maximin estimation can extract a signal even in the presence of large heterogeneous noise components, but for the VSDI recordings, magging was not successful. We expect that soft maximin aggregation will be practically useful in a number of different contexts as a way of aggregating explained variances across groups, in particular because it down-weights groups with a large explained variance, which might simply be outliers, while not going to the extreme of the maximin effect, which can kill the signal completely, as in the example of the VSDI recordings.
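As a small numerical illustration of this weighting behaviour, the following sketch (in Python with numpy; toy numbers, purely illustrative and not part of the \verb+SMMA+ package) computes the weights $w_{g,\zeta}$ from \eqref{eq8new} for a toy vector of group losses $h_g = -\hat V_g$ and shows how they move from near-uniform (mean aggregation) towards full concentration on the group with the smallest explained variance as $\zeta$ grows:
\begin{verbatim}
import numpy as np

def soft_maximin_weights(h, zeta):
    # w_g = exp(zeta*h_g) / sum_j exp(zeta*h_j), computed stably
    z = zeta * (h - h.max())
    w = np.exp(z)
    return w / w.sum()

h = np.array([-3.0, -1.0, -0.5])   # toy group losses h_g = -V_hat_g
for zeta in (0.1, 1.0, 10.0, 100.0):
    print(zeta, soft_maximin_weights(h, zeta))
# small zeta: weights close to 1/G; large zeta: all weight on the group
# with the largest h_g, i.e. the smallest explained variance
\end{verbatim}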
\section*{List of Acronyms}
\begin{acronym}
\acrodef{3GPP}[3GPP]{3\textsuperscript{rd} Generation Partnership Project}
\acrodef{BER}[BER]{bit error rate}
\acro{BIM}[BIM]{binary interference matrix}
\acro{BS}[BS]{base station}
\acro{CDF}[CDF]{cumulative distribution function}
\acro{CR}[CR]{cognitive radio}
\acro{CRN}[CRN]{cognitive radio network}
\acro{CSI}[CSI]{channel state information}
\acro{DFS}[DFS]{dynamic frequency selection}
\acro{DRRM}[DRRM]{distributed radio resource management}
\acro{DSA}[DSA]{dynamic spectrum allocation}
\acro{FSA}[FSA]{fixed spectrum allocation}
\acrodef{IMT}[IMT]{International Mobile Telecommunications}
\acrodef{ISM}[ISM]{Industrial, Scientific and Medical}
\acrodef{ITU}[ITU]{International Telecommunication Union}
\acro{LTE}[LTE]{Long Term Evolution}
\acro{MAC}[MAC]{medium access control}
\acro{MMF}[MMF]{max-min fair}
\acro{NE}[NE]{Nash equilibrium}
\acrodef{NGN}[NGN]{Next-generation network}
\acro{OFDMA}[OFDMA]{Orthogonal Frequency Division Multiple Access}
\acro{PCC}[PCC]{primary component carrier}
\acro{PF}[PF]{proportional fair}
\acro{PHY}[PHY]{physical layer}
\acro{PU}[PU]{primary user}
\acro{RAN}[RAN]{radio access network}
\acro{RAT}[RAT]{radio access technology}
\acro{SCC}[SCC]{secondary component carrier}
\acro{SINR}[SINR]{signal-to-interference plus noise ratio}
\acro{SPNE}[SPNE]{subgame perfect Nash equilibrium}
\acro{NBS}[NBS]{Nash bargaining solution}
\acro{SNR}[SNR]{signal-to-noise ratio}
\acro{QoS}[QoS]{Quality of Service}
\acrodef{UDN}[UDN]{ultra-dense network}
\acro{UE}[UE]{user equipment}
\acro{WRAN}[WRAN]{wireless regional area network}
\acro{WLAN}[WLAN]{wireless local area network}
\acrodef{4G}[4G]{4\textsuperscript{th} generation}
\acro{SPAI}[SPAI]{Sparse Inverse}
\acrodef{CN}[CN]{Core Network}
\acrodef{GSM}[GSM]{Global System for Mobile}
\acrodef{WCDMA}[WCDMA]{Wideband Code Division Multiple Access}
\acrodef{MSC}[MSC]{Mobile Switching Center}
\acrodef{RNC}[RNC]{Radio Network Controller}
\acrodef{1G}[1G]{1st Generation}
\acrodef{2G}[2G]{2nd generation}
\acrodef{3G}[3G]{3rd generation}
\acrodef{GPRS}[GPRS]{General Packet Radio Service}
\acrodef{EDGE}[EDGE]{Enhanced Data Rates for \ac{GSM} Evolution}
\acrodef{IS-95}[IS-95]{Interim Standard 95}
\acrodef{1xRTT}[1xRTT]{1 Times Radio Transmission Technology}
\acrodef{CDMA2000}[CDMA2000]{Code Division Multiple Access 2000}
\acrodef{IMT-2000}[IMT-2000]{International Mobile Telecommunications-2000}
\acro{WiMAX}[WiMAX]{Worldwide Interoperability for Microwave Access}
\acro{HSPA}[HSPA]{High Speed Packet Access}
\acro{IEEE}[IEEE]{Institute of Electrical and Electronics Engineers}
\acro{RN}[RN]{Relay Nodes}
\acrodef{CDMA}[CDMA]{Code Division Multiple Access}
\acrodef{FDD}[FDD]{Frequency Division Duplex}
\acrodef{TDD}[TDD]{Time Division Duplex}
\acrodef{OFDMA}[OFDMA]{Orthogonal Frequency Division Multiple Access}
\acro{MSE}[MMSE]{Minimising Mean Square Error}
\acro{SLNR}[SLNR]{Signal to Leakage Plus Noise Ratio}
\acrodef{ISD}[ISD]{Inter-site Distance}
\end{acronym}
\clearpage
\phantomsection
\addcontentsline{toc}{section}{List of Figures}
{\hypersetup{linkcolor=black}
\listoffigures
}
\clearpage
\phantomsection
\addcontentsline{toc}{section}{List of Tables}
{\hypersetup{linkcolor=black}
\listoftables
}
\cleardoublepage
\storeinipagenumber
\pagenumbering{arabic}
\acresetall
\setcounter{page}{1}
\section{Introduction}
\subsection{Motivation}
\noindent Radio spectrum is defined as part of
the electromagnetic spectrum with frequencies ranging from 3 Hz to 300 GHz. It is used for various wireless communication tasks: data communications, voice communications, video communications, broadcast messaging, command and control communications, emergency response communications, etc. In the past decade, wireless communication services have seen an unprecedented exponential growth~\cite{WP11NSN} and they are expected to grow tremendously in the future as well~\cite{WP13Huawei}. The studies in~\cite{WP11NSN,WP11Ericsson} projected 1000 times more traffic and 50 billion connected devices in mobile networks by 2020. According to a study in~\cite{WP13Ericsson}, the development of 4G systems based on \ac{3GPP} \ac{LTE} \ac{RAT} is progressing on a large scale, with 55 million users in November 2012 and nearly 1.6 billion users expected by 2018.\\
\begin{figure}[b]
\centering
\includegraphics[scale=1, trim = 0mm 4mm 0mm 6mm, clip]{DataGrowth.png}
\caption{U.S. spectrum surplus/deficit situation with growing traffic per cell site~\cite{RP12FCC}}
\label{fig:DataGrowth}
\end{figure}
\begin{figure}[b]
\centering
\includegraphics[scale=0.35, trim = 0mm 0mm 0mm 0mm, clip]{IMTFreq.png}
\caption{Summary of frequency allocation from 0.3 GHz to 30 GHz~\cite{RP11NTIA}}
\label{fig:IMTFreq}
\end{figure}
\noindent One possible solution to meet the ever-increasing demand is to allocate more spectrum for mobile services, e.g., through spectrum farming. In~\cite{RP12FCC}, quoted in Fig.~\ref{fig:DataGrowth}, it has been shown that by 2014 the mobile traffic per cell site in the U.S. will double that of 2012, causing an estimated spectrum deficit of 275 MHz, compared to the surplus of 87 MHz in 2012. Another study by the \ac{ITU} (report ITU-R M.2078~\cite{RP06ITU}) estimates that total spectrum bandwidth requirements for \ac{IMT} services will be up to 1720 MHz by 2020. It will be a challenge to identify such amounts of spectrum and to allocate it exclusively for mobile services.\\
\noindent Spectrum may be allocated using one of the following authorizations: dedicated, co-primary and unlicensed. Consider the frequency allocation of the main \ac{IMT} bands (0.3-30 GHz) by the ITU, shown in Fig.~\ref{fig:IMTFreq}. It is interesting to observe that in co-primary authorization usually more than one service shares the same spectrum, e.g., the frequency band of 3.4-4.2 GHz is allocated for both satellite and fixed services. In dedicated authorization, spectrum is exclusively allocated for a single service, e.g., in the European region the frequency band of 470-790 MHz is currently reserved only for broadcasting. Finally, in unlicensed authorization, multiple unlicensed radios coexist in the 2.4 GHz and 5 GHz \ac{ISM} bands, in which devices like Bluetooth, Wi-Fi, etc. operate.\\
\noindent Though the spectrum map in Fig.~\ref{fig:IMTFreq} looks crowded, it is important to remark that it does not indicate the actual spectrum in use. Based on spectrum usage activity, spectrum in a given geographical area can be regarded as fully utilized, underutilized (sporadically used) or fully unused. The unused or sporadically used spectrum in space and/or time could exist for many reasons, e.g., the system is idle, there is only intermittent activity (spectrum holes), or signals are unable to reach the receiver due to heavy losses. One major cause of this underutilization is the static (fixed) allocation of spectrum to the various systems. If a system with a static frequency allocation is not using its assigned spectrum, the resources are wasted.
If other systems could utilize the vacant spectrum, spectrum utilization could be improved.\\
\noindent Various benchmark studies and measurement campaigns have pointed out that a large portion of the allocated spectrum is not actively used in space and time. The FCC Spectrum Policy Task Force, in its 2002 report~\cite{RP02FCC}, reported vast temporal and geographic variations in the utilization of allocated spectrum, ranging from 15\% to 85\%. \v{C}abri\'{c} \textit{et al.} in~\cite{CP04Cabric} have shown measurements taken in an urban setting revealing a typical utilization of 0.5\% in the 3-4 GHz frequency band, further dropping to 0.3\% in the 4-5 GHz frequency band. In a global survey~\cite{CP10Valenta} conducted in 2010, it was found that in densely populated areas less than 20\% of the spectrum bands below 3 GHz are used during a working day, and the occupancy is even lower in rural areas.\\
\noindent The incongruence between \textquotedblleft spectrum allocation\textquotedblright~and \textquotedblleft spectrum utilization\textquotedblright~suggests that \textquotedblleft spectrum allocation\textquotedblright~is a more significant problem than an actual physical scarcity of spectrum. The fixed spectrum allocation generally worked well in the past because of limited traffic. Nowadays, the pressing demands for more wireless services and the inefficient spectrum utilization necessitate a new communication paradigm to use the existing spectrum opportunistically and more efficiently. Opportunistic use is not necessarily limited to different services but can also be within the same service. For example, multiple operators can share the spectrum resources opportunistically. One promising case could be operators in a shopping mall using the full spectrum resources by localizing themselves to their respective floors instead of the whole shopping mall area, and rendering mobile services on a co-primary basis with negligible inter-operator interference.
\subsection{Overview of Thesis Problem}
\noindent \acp{NGN} will have higher bandwidth requirements so that they can meet end-user capacity and \ac{QoS} demands. Nowadays, operators are largely following \ac{FSA}. Such static assignments are disadvantageous because they are time and space invariant, and prevent devices from efficiently utilizing the allocated spectrum, resulting in spectrum holes (no devices in the area) and poor utilization~\cite{RP03McHenry}.\\
\noindent Let us consider multiple \acp{RAN} owned by different operators providing wireless services within and around the small area they control, e.g., offices, restaurants, etc. in a marketplace. Within the same geographical area, there exist different classes of users, as well as different companies/business units, which may have different peak usage times. With orthogonal assignments, the spectrum is underutilized when load conditions of neighbouring operators are subject to temporal variations. In that scenario, a low load operator could transfer some of its spectrum resources to a high load operator by using \ac{DSA}, helping it, e.g., to reduce the blocking probability and to avoid high latency. \ac{DSA} can help operators to adapt to varying channel state conditions and radio frequency environments.
If the inter-\ac{RAN} interference is severe, operators tend to share the spectrum with a high degree of orthogonality; if the interference is negligible, operators tend to have a high degree of overlapped carriers (full spread).\\
\noindent With DSA, operators are able to share the spectrum resources according to their relative needs and achieve better performance in their access areas. For this, a protocol that coordinates the interaction between multiple operators is needed to achieve improved spectral efficiency by allowing flexible and efficient spectrum use. This is explored in this thesis.
\subsection{Thesis Contribution}
\noindent In this thesis, an efficient \ac{DSA} scheme is proposed to improve the operational bandwidth efficiency in a multi-operator scenario. Multiple operators coexist in the same geographical area, causing interference to each other. It is assumed that the operators' \acp{RAN} have a connection between them. However, the cooperation between operators is at a low level. They are unwilling to share their network and operational information due to mutual competition. They may also send false information to gain an advantage over other operators. Operators are thus considered as self-interested entities and will be contending for spectrum resources noncooperatively. By noncooperation, we mean that no operational information is shared amongst the operators. Hence, there is no need for tight synchronization (extra overhead) or for new interfaces.\\
\noindent Game theory provides tools that offer significant insight into the dynamics of noncooperation. It is a promising approach for studying mathematical models of conflict and cooperation between rational decision makers~\cite{BK03Osborne}. It has recently been applied in the telecommunications field and has been established as an important tool for modelling interactions and \ac{DSA} techniques for evolving technologies like \ac{CR} or inter-operator spectrum sharing.\\
\noindent The studied spectrum allocation problem is related to the frequency assignment problem~\cite{CP12Peltomaki}, where a carrier is either used or not. Following the carrier selection approach, two algorithms are developed for dynamic spectrum sharing based on \textit{noncooperative repeated games}. In this approach, operators adopt an interactive mode of communication and agree upon formulating a policy on how to share carriers amongst themselves. Because operators coexist in the same geographical area for a long time, they interact and build response sequences through a trust game. Interaction is modelled in terms of spectrum usage favors asked and received by them. The favors refer to the utilization of shared frequency carriers.
\subsection{Thesis Organization}
\noindent The remainder of this thesis is organized as follows. Chapter \ref{chap:Background} briefly reviews the utility criterion for resource allocation. It also discusses game theory and its models.\\
\noindent In Chapter \ref{chap:Works}, the related work pertaining to inter-operator spectrum sharing is presented. Besides that, standards closely related to spectrum sharing are also discussed.\\
\noindent In Chapter \ref{chap:Coop}, inter-operator cooperation is discussed, and its advantages and challenges are presented.
The system model used for the cooperative schemes is reviewed, and the implementation is analyzed mathematically.\\
\noindent In Chapter \ref{chap:GamePrice}, the proposed DSA scheme based on noncooperative repeated games and virtual carrier pricing is explained. The system model and the utility functions are described alongside its optimization criteria. Finally, an algorithm for distributed dynamic spectrum sharing among the operators is explained.\\
\noindent In Chapter \ref{chap:GameExpectation}, another distributed noncooperative game theoretic scheme is proposed, using the mutual history of gains/losses incurred between the participating operators. The system model, utility functions and algorithm are explained. Detailed mathematical analysis is presented to corroborate the algorithm.\\
\noindent In Chapter \ref{chap:Simulation}, simulation results are presented and analyzed. The simulated scenario, simulation parameters, user distributions, and channel models are explained. The benefits of the proposed \ac{DSA} schemes are then assessed in comparison with static allocation schemes, such as orthogonal and full spectrum allocations, and with a cooperative scheme. Finally, in the last chapter, conclusions are drawn and future work is suggested.
\clearpage
\section{Background} \label{chap:Background}
\noindent In this chapter, inter-operator spectrum sharing is considered with the aid of discussions from the literature. Inter-operator spectrum sharing opens opportunities for the operators to enhance the system level performance. To describe the operator-specific performance, we consider performance metrics in the form of utility functions. To describe interactions between the operators, we use the theory of games. So, this chapter provides an overview of utility functions and game theory background.\\
\subsection{Utility Criterion for Resource Allocation}
\noindent Utility-based approaches have recently been widely adopted to quantify the radio resource allocation problems in wireless communication. A \textit{utility function} represents the system's performance level or \ac{QoS}~\cite{JR97Kelly}. A utility function is a strictly non-decreasing (monotonic) and concave function of system parameters. The functions describing system-wide utility, as well as the welfare functions studied in economic sciences, bear the same characteristics.\\
\noindent Utility functions have been used to model various performance parameters such as data traffic (Shannon capacity)~\cite{JR03Sung}, bandwidth allocation~\cite{JR08Wu,CP02Cao}, multiuser diversity~\cite{CH05Navaie}, scheduling/delay-tolerant traffic~\cite{CP01Gao,CP04Liu}, \ac{SNR}/\ac{SINR} improvement, bandwidth pricing applications~\cite{CP02Siris,CP02Marbach,CP02Liu}, fairness~\cite{JR00Bianchi,JR01Liao}, \ac{BER}, energy efficiency~\cite{JR02Saraydar}, sigmoid-like functions of \ac{SINR}~\cite{JR07Huang,JR03Xiao}, etc. In this thesis, however, we focus on fairness in system-capacity-based utility, which is best described in Fig.~\ref{UtilityBehavior}.
\begin{figure}[h]
\centering
\includegraphics[scale=1, trim = 0mm 0mm 0mm 0mm, clip]{UtilityBehavior.png}
\caption{Example of utility function behaviour}
\label{UtilityBehavior}
\end{figure}
\noindent Assume a load of $\mathcal{L}=\left\{ 1,2,3,...,n \right\}$ users in the wireless network. The resources are quantified based on user preferences such as \ac{SINR}, throughput, or allocated bandwidth, etc.
The associated utility function can be expressed as ${U}\left( {{x}_{i}}\left( t \right) \right)$, where ${{x}_{i}}\left( t \right)$ denotes a user-specific quantity, such as the experienced rate of the $i$-th user at time $t$, and $U$ is a function that describes the user satisfaction level for a given quantity. The utility function ${U}\left( {{x}_{i}}\left( t \right) \right)$ is an increasing and strictly concave function, representing the decreasing additional benefit with increasing resource allocation.\\
\noindent From the user satisfaction perspective, networks are interested in finding a resource allocation $r$, e.g., a carrier allocation, within the resource constraint set $\mathcal{R}$, which maximizes the long-term expected aggregate utility,
\begin{equation}
\label{eq:UtilityProblemSolution}
\underset{r\in \mathcal{R}}{\mathop{\max }}\,\underset{\mathcal{T}\to \infty }{\mathop{\lim }}\,\frac{1}{\mathcal{T}}{\text{E}_{r}}\left\{ \int\limits_{0}^{\mathcal{T}} \sum\limits_{i\in n} {U} \left( {{x}_{i}}\left( t \right) \right) dt \right\}.
\end{equation}
\noindent The solution to Eq.~\eqref{eq:UtilityProblemSolution} is called the socially optimal solution. If the \ac{RAN} and the available resources (e.g., a fixed maximum transmit power constraint at the \ac{BS}, available bandwidth, etc.) are static, and the user's \ac{QoS} measurements are independent of time (e.g., a user's \ac{SINR}), then Eq.~\eqref{eq:UtilityProblemSolution} can be written as
\begin{equation*}
\underset{r\in \mathcal{R}}{\mathop{\max }}\,\sum\limits_{i\in n}{{{U}}\left( {{x}_{i}} \right)}
\end{equation*}
\noindent without time $t$.\\
\noindent A solution that maximizes the sum-throughput utility of all the players might not be practicable, as some of the players might consider it \textit{unfair}, in the sense that such a solution is achieved at the expense of some players. In many environments \textit{fairness} might be more important than optimality. Various definitions of fair allocations have been proposed, such as weighted fair~\cite{BK97Keshav}, \ac{MMF}~\cite{JR06HuangACM}, \ac{PF} allocations~\cite{JR97Kelly, JR98Kelly}, etc. Based on various fairness criteria, the utility function can be written as
\begin{subnumcases}{U(x_i) =}
\label{eq:fair}
\frac{w_i}{1-\alpha }x_{i}^{1-\alpha}, &$\text{weighted } \alpha\text{-fairness, }$\label{eq:weighted} \\
&$\alpha > 0, \text{weights } (w_i) \geq 0$ \notag \\
{{x}_{i}},&$\text{Max}$\label{eq:max}\\
\lim_{\alpha \to \infty} \frac{1}{1-\alpha }x_{i}^{1-\alpha},&$\text{MMF}$\label{eq:mmf}\\
\text{log}\left( {{x}_{i}} \right),&$\text{PF.}\label{eq:pf}$
\end{subnumcases}
\noindent The weighted $\alpha$-fair allocations are a parameterized family of fairness criteria. In Eq.~\eqref{eq:weighted}, if $w_i = 1$ and $\alpha \rightarrow 0$, then the $\alpha$-fair criterion recovers the max-throughput optimization. The max, or greedy, fairness criterion maximizes the network throughput. The disadvantage of such an allocation is that users with poor channel conditions are starved of resources, which seems somewhat unfair. It would seem fairer for all users to simultaneously have some access to the network's resources. If max-throughput is unfair then, perhaps, \ac{MMF} is the most fair. Amongst all rate allocations, the minimum rate allocated to any flow is maximized, eventually leading to equal rates for all users.
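To illustrate the family in Eq.~\eqref{eq:fair}, the following minimal sketch (in Python; the per-user rates are hypothetical toy values, purely illustrative) evaluates the weighted $\alpha$-fair utility of Eq.~\eqref{eq:weighted} for a range of $\alpha$ values, showing the transition from the max-throughput criterion (small $\alpha$) towards the \ac{PF} and \ac{MMF} criteria as $\alpha$ increases:
\begin{verbatim}
import numpy as np

def alpha_fair_utility(x, alpha, w=None):
    # aggregate weighted alpha-fair utility; alpha -> 1 gives the PF (log) limit
    x = np.asarray(x, dtype=float)
    w = np.ones_like(x) if w is None else np.asarray(w, dtype=float)
    if np.isclose(alpha, 1.0):
        return float(np.sum(w * np.log(x)))
    return float(np.sum(w * x**(1.0 - alpha) / (1.0 - alpha)))

rates = [0.5, 2.0, 8.0]                  # hypothetical per-user rates x_i
for alpha in (0.0, 0.5, 1.0, 2.0, 5.0):  # alpha -> 0 recovers max-throughput
    print(alpha, alpha_fair_utility(rates, alpha))
\end{verbatim}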
In Eq.~\eqref{eq:weighted}, if $w_i = 1$ and $\alpha \rightarrow \infty$, then the weighted $\alpha$-fair allocations reduce to \ac{MMF} allocations. \ac{PF} is a compromise-based scheduling algorithm. It is based upon maintaining a balance between two competing interests: trying to maximize network throughput while at the same time allowing all users at least a minimal level of service. In Eq.~\eqref{eq:weighted}, for $\alpha = 1$, the weighted $\alpha$-fair objective is not defined, but the limit $\alpha \to 1$ reduces to the \ac{PF} allocation. It is important to remark that achieving a fair allocation and achieving a socially optimal allocation do not always conflict with each other, and sometimes both objectives can be achieved by choosing the appropriate utility functions (e.g., ~\cite{JR98Kelly,JR00Mo}).\\
\noindent With the utility-based framework, a network can be modelled using a single function and the network resource allocation problems can be studied in a tractable way. The performance of different allocation schemes can be easily compared, e.g., how far they are from the socially optimal solution, or from the upper limit of resource usage. It also aids in examining the trade-off between social optimality and other performance objectives.
\subsection{Introduction to Game Theory}
\noindent Game theory~\cite{BK03Osborne} is concerned with predicting the outcome of \textit{games of strategies}. Expressed succinctly, game theory is the formal study of decision-making which analyzes or models the interactions between interdependent decision-making entities that have mutual and possibly conflicting objectives.\\
\noindent Developed since the first half of the 20\textsuperscript{th} century, it has been used primarily in economics, to describe animal behaviour, and to model competition between companies, and it is central to the understanding of various other fields, such as political science, psychology, logic and biology. In recent years, telecommunications has become one of the new fields with an emerging interest in game theory as a tool to analyze conflicts among players, e.g., congestion control, routing, power control, topology control, trust management, dynamic spectrum sharing, etc. The importance of modelling interaction via a game-theoretic approach is multifold:
\begin{itemize}
\item Offers a wide range of optimality criteria (e.g., in simultaneous, multistage games),
\item Optimizes problems where no centralized control is present (noncooperative games),
\item Players devise strategies independently and intelligently, and are given the power to make decisions locally (noncooperative one-shot games, noncooperative repeated games). With a good strategic mechanism, players can induce others to cooperate in a noncooperative environment (noncooperative repeated games).
\end{itemize}
\subsubsection{Game Definition}
\noindent A game is typically formalized as a triple of a set of players, a set of allowable strategies for each player, and a utility function. The utility function represents a player's evaluation of the consequences in a game. Players play strategies with the intention to maximize their utilities. Normally the strategies are conflicting, i.e., increasing one's own utility happens at the expense of decreasing the others' utility.
So, the players have to be rational while playing strategies, since a too greedy approach can harm them because of repercussions, while too much trust can lead to their exploitation by greedy opponents.\\
\noindent Mathematically, the game $\mathcal{G}$ is represented as
\begin{equation*}
\mathcal{G}=\left\langle \mathcal{P},\mathcal{S},\left. \mathcal{U} \right\rangle \right.,
\end{equation*}
\noindent where
\begin{itemize}
\item $\mathcal{P}$ is a finite set of players, s.t., $\mathcal{P}=\{1,2,3,...,m\}$,
\item $\mathcal{S}$ is an $m$-tuple of pure strategy sets, one for each player, s.t., $\mathcal{S}=\{{{s}_{1}},{{s}_{2}},{{s}_{3}},...,{{s}_{m}}\}$, where ${{s}_{i}}$ is the strategy profile of the $i$-th player, s.t., ${{s}_{i}}\in {{S}_{i}}$, and ${{S}_{i}}$ is the finite set of allowable strategies of the $i$-th player, ${{S}_{i}}=\{1,...,{{q}_{i}}\}$,
\item $\mathcal{U}$ is the utility function, whose intended interpretation is the reward given to a single player at the outcome of the game, s.t., $\mathcal{U}:{{S}_{1}}\times {{S}_{2}}\times ...\times {{S}_{m}}\to \mathbb{R}$.
\end{itemize}
\subsubsection{Game Strategies Type}
\noindent With different types of strategies, players can obtain various game resolutions: different equilibria, optimal/suboptimal solutions, etc. In this section, we outline the possible strategies, describing their behaviour and possible outcomes in the game $\mathcal{G}$.
\begin{itemize}
\item Mixed (Randomized) Strategy
\noindent A mixed strategy for player $i$, with ${{S}_{i}}=\{1,...,{{q}_{i}}\}$, is a probability distribution over ${{S}_{i}}$. In other words, $p_{i}:S_{i} \to [0,1]$, where we have $p_{i}(s_{i}) \ge 0$ for all $s_{i} \in S_{i}$ and $\sum\limits_{{{s}_{i}}\in {{S}_{i}}}{{{p}_{i}}\left( {{s}_{i}} \right)}=1$, i.e.,
\begin{equation*}
{{p}_{i}}\left( 1 \right)+{{p}_{i}}\left( 2 \right)+...+{{p}_{i}}\left( {{q}_{i}} \right)=1.
\end{equation*}
\noindent We interpret $p_{i}(s_{i})$ as the probability with which player $i$ chooses strategy $s_{i}$.
\item Pure Strategy
\noindent If, in the mixed strategy, the probability associated to ${{s}_{i}}={{s}_{i}}\left( j \right)$ for some $j$ is 1, i.e., $p_{i}(s_{i}(j)) = 1$, where $1 \le s_{i}(j) \le q_{i}$, while for the others it is 0, then it is called a pure strategy.
\item Strictly Dominant Strategy
\noindent A strategy $s_{i}^{*}\in {{S}_{i}}$ is a strictly dominant strategy to a given strategy $s_{i}^{'}\in {{S}_{i}}$ for player $i$ if $\forall {{s}_{-i}}\in {{S}_{-i}}$, we have
\begin{equation*}
{{U}_{i}}\left( s_{i}^{*},{{s}_{-i}} \right)>{{U}_{i}}\left( s_{i}^{'},{{s}_{-i}} \right).
\end{equation*}
\noindent In this case, we say that $s_{i}^{*}$ strictly dominates $s_{i}^{'}$.
\item Weakly Dominant Strategy
\noindent For any player $i$, a strategy $s_{i}^{*}\in {{S}_{i}}$ weakly dominates another strategy $s_{i}^{'}\in {{S}_{i}}$ if $\forall s_{-i}\in {{S}_{-i}}$,
\begin{equation*}
{{U}_{i}}\left( s_{i}^{*},{{s}_{-i}} \right)\ge {{U}_{i}}\left( s_{i}^{'},{{s}_{-i}} \right).
\end{equation*}
\item Maxmin Strategy
\noindent Player $i$ plays a strategy ${{s}_{i}}\in {{S}_{i}}$ against ${{s}_{-i}}\in {{S}_{-i}}$ in order to maximize its minimum utility,
\begin{equation*}
\underset{{{s}_{i}}}{\mathop{\max }}\,\underset{{{s}_{-i}}}{\mathop{\min }}\,{{U}_{i}}\left( {{s}_{i}},{{s}_{-i}} \right).
\end{equation*} \item Best Response \noindent A strategy $s_{i}^{*}\in S_{i}$ is a best response for player $i$ to $s_{-i}\in S_{-i}$ if $\forall s_{i}\in S_{i}$, \begin{equation*} U_{i}\left( s_{i}^{*},s_{-i} \right)\ge U_{i}\left( s_{i},s_{-i} \right). \end{equation*} \noindent Note that a best response differs from a dominant strategy in that a best response maximizes the utility against a specific strategy $s_{-i}\in S_{-i}$ over all $s_{i}\in S_{i}$, whereas a dominant strategy improves the utility over a given strategy $s_{i}^{'}\in S_{i}$ for all $s_{-i}\in S_{-i}$. \item Mixed Nash Equilibrium (mixed NE) \noindent For a strategic game $\mathcal{G}$, a strategy profile $s^{*}=\left( s_{1}^{*},s_{2}^{*},s_{3}^{*},\ldots,s_{m}^{*} \right)\in \mathcal{S}$ is a mixed \acs{NE} if, for every player $i$, $s_{i}^{*}$ is a best response to $s_{-i}^{*}\in S_{-i}$. In other words, for every player $i=1,\ldots,m$ and for every mixed strategy $s_{i}\in S_{i}$, \begin{equation} \label{eq:mixedNE} U_{i}\left( s_{i}^{*},s_{-i}^{*} \right)\ge U_{i}\left( s_{i},s_{-i}^{*} \right). \end{equation} \noindent Equivalently, no player can improve its own utility by unilaterally deviating from the mixed strategy profile $s^{*}=\left( s_{1}^{*},s_{2}^{*},s_{3}^{*},\ldots,s_{m}^{*} \right)$. \item Pure Nash Equilibrium (Pure NE) \noindent A strategy profile $s^{*}$ satisfying Eq.~\eqref{eq:mixedNE} is in addition called a pure \acs{NE} if every $s_{i}^{*}$ is a pure strategy, $s_{i}^{*}=s_{i}^{*}\left( j \right)$ for some $j\in S_{i}$. \item Subgame Perfect Nash Equilibrium (\acs{SPNE}) \noindent A strategy profile $s$ is a \acs{SPNE} if it represents a \acs{NE} of every subgame of the original game $\mathcal{G}$. A subgame is a subset of any game that includes an initial node (which has to be independent of any information set) and all its successor nodes. \item Pareto Optimal \noindent A strategy profile $s=\left( s_{1},s_{2},s_{3},\ldots,s_{m} \right)$ of game $\mathcal{G}$ is said to be Pareto optimal if it is impossible to make any one player better off without making at least one other player worse off. Essentially, it is often treated as a weakly efficient solution of the optimization problem, because a socially optimal solution is Pareto optimal but the converse is not always true. For example, if each $U_{i}\left( x_{i}\left( t \right) \right)$ is monotonic and strictly concave in $x_{i}\left( t \right)$, where $x_{i}\left( t \right)$ denotes the \ac{QoS} measurement of the $i$-th user at time $t$, and the resource constraint set is $\mathcal{W}=\{x \mid \sum\nolimits_{i\in m}{x_{i}}\le X\}$, then any resource allocation $w\in \mathcal{W}$ that achieves $\sum\nolimits_{i\in m}{x_{i}}=X$ is Pareto optimal, but there is only one socially optimal solution.
\item Nash Bargaining Solution (\acs{NBS}) \noindent A pair of utilities $\left( U_{i}^{*},U_{-i}^{*} \right)$ is a \acs{NBS} if it solves the following optimization problem, \begin{equation*} \begin{gathered} \underset{U_{i},U_{-i}}{\mathop{\max }}\,\left( U_{i}-d_{i} \right)\left( U_{-i}-d_{-i} \right) \\ \text{subject to }\left( U_{i},U_{-i} \right)\in \mathcal{U} \\ \left( U_{i},U_{-i} \right)\ge \left( d_{i},d_{-i} \right), \\ \end{gathered} \end{equation*} \noindent where $d_{i}$ and $d_{-i}$ are the status quo utilities (i.e., the utilities that are not subject to bargaining with the other player). The \acs{NBS} should satisfy certain axioms: \begin{itemize} \item [-] Invariance to affine transformations, or invariance to equivalent utility representations \item [-] Pareto optimality \item [-] Independence of irrelevant alternatives \item [-] Symmetry \end{itemize} \item Stackelberg Equilibrium \noindent The Stackelberg model can be solved to find the \acs{SPNE}. Assume there are two players; player $i$ acts as the leader and player $-i$ as its follower. To find the \acs{SPNE} of the game we use backward induction, as in any sequential game. Starting from the end (the 2\textsuperscript{nd} stage), player $-i$ (the follower) makes reactive choices depending on the actions of player $i$, \begin{equation*} s_{-i}^{f}\left( s_{i} \right)=\arg \underset{s_{-i}}{\mathop{\max }}\,U_{-i}\left( s_{-i},s_{i} \right). \end{equation*} \noindent In the 1\textsuperscript{st} stage, player $i$ (the leader) anticipates its rival's behaviour and makes its strategic choices accordingly, \begin{equation*} s_{i}^{l}=\arg \underset{s_{i}}{\mathop{\max }}\,U_{i}\left( s_{i},s_{-i}^{f}\left( s_{i} \right) \right). \end{equation*} \noindent So, the Stackelberg equilibrium or \acs{SPNE} strategies are $\left( s_{i}^{l},s_{-i}^{f} \right)$. \end{itemize} \subsubsection{Game Model Classification} \noindent In the game formulation, players act rationally according to their strategies with the objective of maximizing their outcome. However, the strategic profile of the players is highly influenced by the rules imposed by the nature of the game environment. This has led to a proliferation of game varieties, with the main taxonomies being:\\ \begin{itemize} \item One-shot vs. Repeated Game \noindent A one-shot game is also known as a non-repeated or single-stage game. It is played only once; the stakes are therefore high, but the game carries no further repercussions. Here, players may be uninformed about the moves made by other players and might act selfishly to get away with the highest payoff. If a game is played not once but numerous times, it is called a repeated game. It allows a strategy to be contingent on past moves, and to have reputation effects and retribution. Repeated games are further classified into finitely and infinitely repeated games, the most widely studied being the infinitely repeated game. According to the Folk Theorem, for an infinitely repeated game there exists a discount factor $\widehat{\delta }<1$ such that any feasible and individually rational payoff can arise as an equilibrium payoff for any discount factor $\delta \in \left( \widehat{\delta },1 \right)$. Thus, future payoffs are discounted and are less valuable. This is because consumption in the future is considered less valuable than in the present due to time preference (e.g., money).
Therefore, the player's total payoff in a repeated game is a discounted sum of the stage payoffs. A repeated game supports a variety of equilibrium properties because the threat of retaliation is real, owing to the repetitive nature of the game, and because its strategy space is much larger than that of the one-shot game. Unlike in a one-shot game, players can punish hostile players using a tit-for-tat strategy~\cite{JR81Axelrod,BK07Axelrod}. A repeated game with perfect monitoring, where players' actions are observable, is called a multistage game. In it, players announce their strategies publicly, and thus each stage of a multistage game resembles a single-stage game. \item Cooperative vs. Noncooperative Game \noindent A cooperative game is one in which a group of players enforces cooperative behaviour on each other. In it, players bargain or negotiate over payoffs and form joint strategies. Cooperative games are pragmatically undesirable because of excessive signalling overhead and trust issues, but they provide a unique Pareto optimal solution for the modelled problem. Conversely, if the competition is between potentially conflicting and self-interested players, the corresponding game is known as a noncooperative game. In a noncooperative game, without centralized control, the players do not cooperate or make deals, so any cooperation among them must be self-enforcing. \end{itemize} \subsubsection{Prisoner's Dilemma Example} \noindent The prisoner's dilemma is probably the most widely used game for pedagogical purposes in game theory. Nicknamed in 1950 by Albert W. Tucker, the prisoner's dilemma describes a situation where two prisoners are taken into custody in connection with a burglary. However, the authorities possess insufficient evidence to convict them of their crime; they can only convict them on the charge of possession of stolen goods. This game is summarized in Fig.~\ref{PrisonersDilemma}. \begin{figure}[h] \centering \includegraphics[scale=1, trim = 0mm 0mm 0mm 0mm, clip]{PrisonersDilemma.jpg} \caption{Prisoner's dilemma game} \label{PrisonersDilemma} \end{figure} \paragraph{Description} \noindent\\\\ If the prisoners help each other by not confessing the crime, they will both be charged with the lesser prison term of a year each. The authorities will question them in separate interrogation rooms, which means that the prisoners make their decisions simultaneously and do not know each other's decision. Thus, the process is noncooperative with imperfect information. The authorities will try to convince each prisoner to confess the crime by offering him an escape clause and his accomplice a prison term of ten years. If both prisoners defect and confess their crime, they will each be sentenced to eight years. Both prisoners have common knowledge of the same offered deal along with its consequences, but are completely unaware of each other's choices. The prison terms can be seen as the respective utilities (the negatives of the prison terms) for each set of choices, and each prisoner would like to have the largest utility for himself. \paragraph{Prisoner's Choices} \noindent\\\\ As the game carries no further repercussions, the prisoners are tempted to get away with the largest profit (the least prison term) and consequently will defect.
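\\ \noindent As a sanity check, the following minimal Python sketch (a sketch only; the utilities are simply the negatives of the prison terms given above) enumerates the strategy profiles of this one-shot game and verifies which of them are pure Nash equilibria:
\begin{verbatim}
# Pure-strategy NE check for the prisoner's dilemma described above.
# Utilities are the negatives of the prison terms: (-1,-1) if both stay
# silent, (0,-10)/(-10,0) if exactly one confesses, (-8,-8) if both confess.
from itertools import product

strategies = ["Cooperate", "Defect"]   # cooperate = stay silent, defect = confess
payoff = {("Cooperate", "Cooperate"): (-1, -1),
          ("Cooperate", "Defect"):    (-10, 0),
          ("Defect",    "Cooperate"): (0, -10),
          ("Defect",    "Defect"):    (-8, -8)}

def is_pure_ne(s1, s2):
    """True if no player can gain by a unilateral deviation from (s1, s2)."""
    u1, u2 = payoff[(s1, s2)]
    return (all(payoff[(d, s2)][0] <= u1 for d in strategies) and
            all(payoff[(s1, d)][1] <= u2 for d in strategies))

print([s for s in product(strategies, repeat=2) if is_pure_ne(*s)])
# -> [('Defect', 'Defect')]
\end{verbatim}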
\noindent Therefore, \textquotedblleft to confess\textquotedblright~is the dominant strategy, and thus \textit{(Noncooperation, Noncooperation)} is the \acs{NE} of this one-shot game.\\ \noindent Let us now assume that the two burglars work together over a long time, repeatedly end up being interrogated by the police, and are punished with prison terms. After each round (stage) of interrogation, and based on their choices, they get to know what the other did. Again, the prisoners would like to obtain the maximum sum-utility (minimal prison terms) over their repeated criminal acts. So, thinking rationally, it makes sense to cooperate over all the stages in order to obtain the maximum profit in the repeated sense of the game (the smaller prison term of one year is given every time for their crimes). Suppose prisoner $P1$ defects; no doubt he can get away with zero prison term in that stage, whereas prisoner $P2$ will receive a ten-year prison term. However, prisoner $P2$ will become aware of the hostile choice made by prisoner $P1$ and will later defect too, in order to punish prisoner $P1$ for his noncooperation. If they continue with this noncooperative behaviour, both will always get eight years of prison for every subsequent crime. Therefore, in the repeated game, \textquotedblleft to lie\textquotedblright~becomes the dominant choice, and thus \textit{(Cooperation, Cooperation)} is the \acs{NE}, as it maximizes the profit once the players become aware of each other's strategies and punish noncooperation with a tit-for-tat strategy~\cite{JR81Axelrod,BK07Axelrod}. \clearpage \section{Research Work and Standards for Spectrum Sharing} \label{chap:Works} \subsection{Related Work} \label{sec:ResWorks} \noindent Traditional channel allocation algorithms aim to improve carrier usage for a single system (e.g., cellular systems, femtocell deployments, etc.). In essence, these algorithms address inter-operator spectrum sharing needs provided that operators with neighbouring \acp{RAN} are willing to work together. Spectrum sharing algorithms can be classified according to the domain in which inter-operator interference is handled, i.e., the frequency, time and/or spatial domain. Further, there are two extremes in regard to the cooperative arrangement between the operators in distributed inter-operator spectrum sharing, depending on whether the operators cooperate with each other or not. On one hand, the operators may behave in a totally selfish manner and respond to their opponents by using a sequence of best responses. On the other hand, the operators may be honest and fully cooperative. \paragraph{Cooperative Games} \noindent\\\\ In~\cite{JR09Garcia}, an autonomous component carrier selection scheme is proposed as a distributed solution to cross-tier interference management in HetNets. The \acp{BS} exchange \ac{BIM} entries, each representing a single value over the entire bandwidth that encapsulates the information on outgoing and incoming inter-cell interference reduction. \acp{BS} select their \acp{PCC} based on their coverage (maximum path loss), whereas \acp{SCC} are selected based on the exchanged \acp{BIM}. In~\cite{CP11Prasad}, \acp{BS} select carriers if the corresponding capacity gain for their served \acp{UE} is greater than the losses of the neighbouring \acp{BS}. However, the network suffers from many carrier re-selections.
In~\cite{CP13Amin}, a dynamic carrier selection scheme is proposed using a \ac{BIM} per component carrier instead of over the full bandwidth (as in~\cite{JR09Garcia}), which avoids carrier re-selections by estimating capacity gains and losses. In~\cite{CP12Ahmed}, the interference condition is communicated in the form of interference prices instead of \acsp{BIM}. An upper bound on the sum-capacity of two operators is identified in~\cite{CP12Anchora}, assuming that the operators exchange their user-specific channel quality indicators over all shared channels. In~\cite{CP07Niyato}, a repeated game Cournot model for cooperative spectrum sensing in \ac{CR} is presented, in which multiple \acp{SU} share spectrum with a \ac{PU} through a bidding process using predetermined spectrum pricing functions set by the \ac{PU}.\\ \noindent In time domain spectrum sharing, the operators could, for instance, trade time slots~\cite{CP06Middleton,CP07Bennis}. Operators with a low load could lend their time resources to heavily loaded operators, helping them to reduce the blocking probability and frame delay~\cite{CP07Bennis}. This scheme improves spectrum utilization efficiency at the cost of high signalling overhead and a requirement for good time synchronization among operators. For higher efficiency, one could also allow \acp{UE} to connect to the \textquotedblleft best \ac{BS}\textquotedblright~whether it belongs to their home network or not, provided that all operators utilize the same \ac{RAT}~\cite{JR09Bennis}.\\ \noindent Inter-operator spectrum sharing in the spatial domain has been considered in~\cite{CP10Lindblom,CP11Jorsweik}, modelling each operator by a transmitter-receiver link. In~\cite{CP10Lindblom}, operators use inter-operator interference as a bargaining value to enable cooperation, and compute their beam-forming vectors. With cooperative beam-forming, operators increase their system throughput by lowering the overall inter-operator interference. In~\cite{CP11Jorsweik}, operators exchange their \ac{CSI} and utilize cooperative transmit beam-forming to steer their beams towards the desired receiver.\\ \noindent Besides spectrum utilization efficiency, cooperative algorithms for inter-operator spectrum sharing could jointly maximize a function that incorporates the utility function of each operator. In~\cite{JR06HuangIEEE}, the sum-utility is maximized by properly distributing the available power budget across multiple carriers. Joint power control and scheduling (as an operator may want to favor users based on their channel conditions) are considered in~\cite{CP12Ahmed} for maximizing the sum-\ac{PF} utility. In~\cite{CP13Amin}, multiple cells exchange interference prices and partition the spectrum so that the sum-utility is maximized. Unfortunately, the cooperation mechanism forces each operator/link to reveal its network-specific information, e.g., how much interference it receives in the form of interference prices, or possibly its utilities, \ac{CSI}, load, etc. The method can also be generalized so that operators cooperate with each other to maximize their sum-operator utility. In practice, operators are expected to show some elements of selfish behaviour and may send malicious information to others (e.g., falsified interference prices or erroneous gains/losses over the component carriers) in order to get a higher share of the available resources. Fully cooperative inter-operator spectrum sharing schemes cannot be used unless there is a mechanism to identify and punish the non-trustworthy parties.
\paragraph{Noncooperative One-shot Games} \noindent\\\\ The study in~\cite{JR09Bennis} considers one-shot noncooperative games between multiple operators, where each operator is modelled by a single transmitter-receiver link. The operators maximize their sum-rate over multiple carriers in a selfish manner. Since the utility function (sum-rate) is concave, an equilibrium exists, and the power allocation vector of each link at an equilibrium point is identified. Under certain conditions, there is a unique equilibrium and the noncooperative spectrum sharing game becomes predictable. However, the equilibrium point of one-shot noncooperative games can be inefficient for some if not all of the players~\cite{JR07Etkin}.\\ \noindent In the literature~\cite{JR09Bennis,JR07Etkin}, strict ways to apply punishment are considered, where the resource allocations of one-shot noncooperative games are enforced after a player deviates from cooperation. A common assumption is that the one-shot games are enforced forever, so that no player has an incentive to deviate. This kind of strategy is quite strict because it does not incorporate forgiveness and punishes all players even if only a single player deviates. Besides, the existing studies consider the power spectral density as the strategy space and optimize the power allocation over multiple carriers for the different players/links/operators. \paragraph{Noncooperative Repeated Games} \noindent\\\\ Operators are expected to share spectrum for a long time and to have a persistent, known identity. As a result, they can learn from each other's behaviour, build reputations and achieve higher utility in comparison with one-shot selfish strategies. In that case, the interaction between operators would rather be modelled by repeated games among selfish players. Besides the utility functions and strategy spaces used in the cooperative game model, repeated games of selfish players also require a punishment mechanism for when a player deviates from the specified rules, or mechanisms that otherwise specify reciprocity. In~\cite{JR09Wu}, a repeated game approach is used to improve the inefficient \acs{NE} of a one-shot game, where the punishment is optimized to eliminate the incentive for deviation within a finite time interval. However, the cooperation strategy is arbitrarily set, i.e., orthogonal spectrum sharing is used, and the networks are assumed to have complete information about each other's parameters. Because the operators agree in advance on orthogonal spectrum sharing, the finite punishment period introduces forgiveness and alleviates the demand for perfect detection accuracy.\\ \noindent Noncooperative repeated games have also been considered for scenarios where the players do not have equal rights, e.g., the primary-secondary property rights model in spectrum sharing. In~\cite{CP05Ileri}, noncooperative repeated games are considered to model and analyze the competition among multiple \acp{SU} to access the \ac{PU}'s channels. The spectrum leasing process is identified as a monopoly market in which the \ac{PU} has full control over the leasing process. The market is secondary-driven, with \acp{SU} acting as relays whose demand functions are based on an acceptance probability model for the users. The \acs{NE} is considered as the solution of this noncooperative game, where each \ac{SU} tries to maximize its payoff function in a selfish manner. The games are based on the assumption that \acp{SU} are honest and will not cheat.
Other game-theoretic spectrum leasing models, considered in~\cite{JR08Niyato,JR09Niyato}, consist of multiple strategic \acp{PU} (as opposed to the single one in~\cite{CP05Ileri}) and \acp{SU}. In these, the active trading of the \acp{PU}' spare radio resources against the \acp{SU}' stochastic demand is modelled using a Markov chain describing the \acp{SU}' buying opportunities from the \acp{PU}. The behaviour of both models is very similar in nature, with the only exception that~\cite{JR08Niyato} does not account for the users' wireless details. With the existence of multiple sellers (\acp{PU}) and buyers (\acp{SU}), the participants of both groups try to maximize their payoffs selfishly, and the problem is broken down into two sub-problems: the sellers' problem of revenue maximization and spectrum pricing, and the buyers' problem of spectrum access. For such active models,~\cite{JR08Niyato,JR09Niyato} use market-equilibrium-based approaches to understand the behavioural economics behind them. In~\cite{JR09Sengupta}, a competitive spectrum leasing model is presented where a central mediating entity acts as a spectrum broker and distributes spectrum among different competing service providers through auctioning~\cite{BK02Krishna}.\\ \noindent In~\cite{JR06Sun}, fair resource allocation is realized in the time domain using a game-theoretic approach that enables the concerned operators (auctioneers) to maximize their respective revenues while also ensuring the maximization of the users' payoffs (throughput). The revenue here is defined as the users' quantitative measure of channel reservation preference over time slots. In this approach, a second-price auction mechanism (Vickrey auction) is employed in which users bid for a wireless channel, competing for resources (securing time slots) to maximize their throughput. Users with better channel coefficients have a better chance of securing the bid. The bidding here is not related to the willingness to pay for the resources; it only acts as a tool for securing the possession of resources for the user with good channel conditions.\\ \noindent Most studies on noncooperative games~\cite{CP05Ileri,JR08Niyato,JR09Niyato,JR09Sengupta} are limited to centralized scenarios or \ac{CR}, where the \ac{PU} (or auctioneer) has full control over the exchange of information about the spectrum utilization state and over the negotiation of the spectrum allocation. Secondly, market-driven mechanisms have been explored widely as a promising approach for spectrum sharing, where \acp{PU} trade unused spectrum to \acp{SU} dynamically. However, operators favor decentralized resource management and, at the same time, are hesitant to adopt market-driven sharing schemes, as they may not want to touch their revenue model. Also, auction-based spectrum access~\cite{JR09Sengupta,JR06Sun} imposes larger overhead constraints and, besides, might require the government's approval for its adoption. Therefore, operators are reluctant to engage in any kind of monetary payoff, spectrum auctioning or load transfer, as discussed in the majority of the noncooperative games, which drives the need for new policies for modelling noncooperative games for spectrum sharing. \subsection{Related Standards} \noindent Regulators, operators, suppliers, and users of radio communication services and radio equipment rely on technical standards to ensure that radio systems perform as designed.
From the spectrum use perspective, some closely related standards already exist; below we discuss them and point out how they differ from the inter-operator spectrum sharing needs. \subsubsection{802.11 for Intra-cell/Inter-cell Transmission} \noindent IEEE 802.11~\cite{STD80211} is a set of \ac{MAC} and \ac{PHY} specifications for implementing \ac{WLAN} communication in the 2.4, 3.6, 5 and 60 GHz frequency bands. To cope with the special problems of wireless transmission in intra-cell/inter-cell data transmission, the IEEE 802.11 \ac{MAC} carries two different access mechanisms: the mandatory distributed coordination function (DCF), which provides distributed channel access based on CSMA/CA (carrier sense multiple access with collision avoidance), and the optional point coordination function (PCF), which provides centrally controlled channel access through polling.\\ \noindent The 802.11 protocol can be employed to tackle the \textit{intra-cell}/\textit{inter-cell} transmission problems in a low-density setting. However, the performance of 802.11 in resolving \textit{inter-operator} transmission problems in a \ac{UDN} scenario is no better than that of already existing technologies, e.g., LTE small cell deployments. In~\cite{WP11Qualcomm}, an analysis is presented showing that LTE co-channel picocells offer a better user experience and system capacity improvement than Wi-Fi nodes. In addition, Wi-Fi nodes also lack good support for mobility/handoff, \ac{QoS}, security and self-organized networks. So, there is a need for a set of protocols or policies that cope with the wireless transmission issues in the multi-operator \ac{UDN} system. \subsubsection{802.11h for Spectrum Management in 5 GHz Band} \noindent With the advent of the IEEE 802.11 \ac{WLAN} standard, the persistent thrust to open up spectrum for unlicensed use created a need for \ac{DFS}. \ac{DFS} is supported by the IEEE 802.11h~\cite{STD80211h} \ac{WLAN} standard, which allows 5 GHz capable IEEE 802.11 devices to share spectrum with radar (or satellite) devices without causing interference to the radar operation. The concept of \ac{DFS} is to have the unlicensed IEEE 802.11h device monitor the presence of a radar/satellite signal on the channel it is using; if the radar level is above a certain threshold, the device vacates the existing channel and then monitors and selects another channel on which no radar is detected.\\ \noindent This standard differs from the inter-operator spectrum sharing requirements in the sense that \textit{only the IEEE 802.11h} device adjusts its spectrum needs in the 5 GHz band via \ac{DFS} in order to avoid co-channel operation with radar systems, whereas in inter-operator spectrum sharing \textit{multiple players} share the spectrum dynamically and have equal rights on the same frequency bands at the same time. \subsubsection{802.16h for Improved Coexisting Mechanism} \noindent The task of inter-operator spectrum sharing requires the networks to coexist peacefully in the geographical area and policies to be formulated for this purpose. The IEEE 802.16h~\cite{STD80216h} License-Exempt Task Group, a unit of the IEEE 802.16 Broadband Wireless Access Standards Committee, realizes improved mechanisms for time domain spectrum sharing under the coordinated coexistence mode. It develops standards, e.g., \ac{MAC} enhancements or policies, and recommended practices enabling coexistence between license-exempt systems in wireless MAN.
It also focuses on hierarchical spectrum sharing applications where coexisting systems share spectrum with primary radio systems. The operation is not limited to license-exempt bands, but extends to all bands where 802.16-2004 is applicable.\\ \noindent For the execution of spectrum sharing policies, a distributed architecture for radio resource management is suggested (IEEE C802.16h-05/004) that enables communication and the exchange of parameters between multiple networks, each formed by one 802.16 \ac{BS} and its associated \acp{UE}. Each \ac{BS} has a \ac{DRRM} entity and builds up a database for sharing information related to the actual and intended future usage of radio spectrum. The 802.16h protocol realizes all the functions required for spectrum sharing amongst coexisting systems, e.g., detecting the co-located \ac{RAN} topology, registering with the \ac{DRRM} database, or negotiating radio spectrum sharing. While interacting with the \ac{MAC} or \ac{PHY}, the \ac{DRRM} uses the coexistence protocol to communicate with other \acp{BS} and regional license-exempt databases. In this manner, using inter-system communication, the IEEE 802.16h protocol helps to achieve harmonious sharing of unlicensed or shared radio spectrum.\\ \noindent The IEEE 802.16h coexistence protocol works in the \textit{time domain}. However, because of the tight requirement for good time synchronization between the operators, coexistence is desired in the \textit{frequency domain}, and in that case the IEEE 802.16h standard fails to serve the purpose. \subsubsection{802.22 for using White Spaces in the TV Frequency Spectrum} \noindent The development of the IEEE 802.22~\cite{STD80222} \ac{WRAN} standard aims at the opportunistic use of white spaces using \ac{CR} techniques. White spaces refer to geographically unused spectrum made available for use at locations where it is not being used by TV broadcasting services. IEEE 802.22 \acp{WRAN} are designed to operate in the VHF and UHF TV broadcast bands on a non-interfering basis, and to bring broadband access to hard-to-reach, low-population-density areas, e.g., rural environments up to 100 km from the transmitter. Each \ac{WRAN} delivers a connection speed of up to 22 Mbps per channel with no harmful interference to the existing TV broadcast stations. Therefore, it has timely potential for worldwide applicability.\\ \noindent The IEEE 802.22 standard is intended for \textit{centralized} inter-network resource sharing and targets a typical centralized scenario that enables \acp{SU} to reuse the unused spectrum of a \ac{PU}. In contrast, in non-centralized scenarios without a central coordinator, this standard is not well established, which pushes the need for more versatile inter-operator spectrum sharing standards for \textit{non-centralized}, distributed implementations. \clearpage \section{Cooperative Spectrum Sharing} \label{chap:Coop} \begin{figure}[b] \centering \includegraphics[scale=.52, trim = 0mm 0mm 0mm 0mm, clip]{Coop.png} \caption{Operators share spectrum in a cooperative manner} \label{fig:Coop} \end{figure} \noindent Cooperative spectrum sharing can effectively improve spectrum efficiency and thus mitigate network congestion or the wasteful usage of spectrum resources. Cooperative schemes are largely desirable in trustworthy networks where extra signalling overhead is not an issue. Network resources can be effectively utilized by implementing cooperative schemes in the cellular systems within an operator.
In this chapter, we discuss the notion behind cooperative spectrum sharing and investigate it mathematically, as it provides a benchmark for the noncooperative spectrum sharing techniques discussed later for untrusted and self-interested network operators.\\ \noindent \textit{Cooperative communications} amongst players can be realized in many ways: in the time~\cite{CP06Middleton,CP07Bennis,JR08Simeone,JR08Leshem}, frequency~\cite{CP13Amin,CP12Ahmed,JR09Garcia,CP12Anchora,CP07Niyato,JR06HuangIEEE,JR12Su,JR08Leshem,JR09Suris,JR10Yang}, code~\cite{JR11Liu}, or spatial~\cite{CP11Jorsweik,CP12Zhai,JR04Laneman,JR11Saad,JR08Huang} domain. In~\cite{JR04Laneman}, the network players exploit space diversity: whenever a player encounters poor channel access conditions, it relays its data through the other player's network, which acts as a cooperative relay. In~\cite{JR11Saad}, the benefit of MIMO communications has been studied by using cooperative players as relay nodes. In~\cite{JR08Leshem,JR11Liu}, cooperative game theory has been used to analyze the interference channel. The problem of opportunistic spectrum access has been addressed in~\cite{JR09Suris} using cooperative game theory; the author showed that the \acs{NBS} achieves the best trade-off between fairness and optimality in spectrum allocation. Distributed power control for \ac{CRN} has been analyzed in~\cite{JR10Yang} based on cooperative game theory. In~\cite{JR08Huang}, a cooperative auction-based algorithm is introduced for relay assignment and for allocating relay transmit power among bidding users, in which the unique \acs{NE} is achieved distributively by globally updating the best-response bid in a completely asynchronous manner\footnote{For additional information on related work, please refer to Chapter \ref{chap:Works} Section \ref{sec:ResWorks}.}.\\ \noindent Having briefly reviewed cooperation algorithms in the field of telecommunications, we now model and investigate the cooperation phenomenon in spectrum sharing in detail. The motivation behind the following study is~\cite{CP13Amin,CP11Prasad}, in which operators exchange interference prices and distribute spectrum resources amongst themselves by estimating their utility gains/losses and jointly making decisions so that their sum-utility is maximized. \subsection{System Model} \noindent We consider operators $O_{i}$ and $O_{-i}$ sharing the same spectrum, e.g., in a shopping mall scenario. Assume that each operator can construct a number that characterizes the level of service enjoyed by the users served by the operator. Such a number is here called a network utility. It may, e.g., be defined in terms of the distribution of the service provided to users, such as a suitable linear combination of average cell throughput and cell edge throughput. In a cooperative scheme, it is crucial that all operators have the same utility function. For simplicity, we will assume utility functions that are directly formed from the throughputs enjoyed by the users. Then, the operators schedule their users onto the available carriers and jointly maximize, e.g., the sum-\ac{PF} rate function with weights reflecting the portion of time a user is multiplexed onto a carrier. An algorithm to approximate the cooperative optimal solution is detailed in~\cite{CP13Amin,CP11Prasad} and is briefly summarized next.
Fig.~\ref{fig:Coop} illustrates the given cooperative scenario.\\ \noindent Let us assume that each user can measure the interference levels due to transmissions originating from the other operator and report them to its serving \ac{BS}. As the operators here are assumed to use the same spectrum, the ability to measure the interference levels originating from the \ac{BS} of another operator would be a straightforward generalization of \ac{LTE} handover measurements. By aggregating such measurements performed by the users, operator $O_{i}$ can form an approximation of the level of interference caused by a \ac{BS} of operator $O_{-i}$, following, for example, the principles outlined in~\cite{JR09Garcia}. The \ac{BS} of an operator, e.g., operator $O_{i}$, asks its users to conduct spectrum measurements over all the carriers utilized by the operator. On receiving the measurement information, operator $O_{i}$ computes for each carrier its utility gain (the gain if the users of operator $O_{-i}$, currently using the carrier, were to stop using it) or its utility loss (the loss if the users of operator $O_{-i}$ were to start using the carrier) and communicates the gain/loss to operator $O_{-i}$. Operator $O_{-i}$, in its turn, randomly selects a carrier: (i) if it uses the carrier, it compares its loss (from removing the carrier) to the gain operator $O_{i}$ would achieve; (ii) if it does not use the carrier, it compares its gain (from starting to use the carrier) to the loss operator $O_{i}$ would experience. Operator $O_{-i}$ makes the decision that increases the sum of the utilities of the two operators (a.k.a. the cooperative utility). For the new carrier allocation, operator $O_{-i}$ computes its own utility gain/loss, communicates it to operator $O_{i}$, and the interaction continues until the cooperative utility cannot be further increased by changing the carrier allocation between the operators. Note that the identification of the spectrum allocation and the user multiplexing weights across the carriers is a mixed integer programming problem. Approximations to the optimal solution can also be obtained in a centralized manner, but the computational complexity grows quickly for an increasing number of carriers and users. Natural solutions for mixed integer programming problems are iterative, and the protocol depicted above distributes these iterations naturally among the independent decision makers of the problem at hand, i.e., the operator networks. \subsection{Cooperative Algorithm} The discussed algorithm is summarised in pseudocode below. \begin{algorithm} [H] \renewcommand\thealgorithm{} \caption{Cooperative Spectrum Sharing} \begin{algorithmic}[1] \STATE Operator ${O_i}$, ${i \in \mathcal{I}}$, considers carrier ${k \in {K}}$ for dynamic selection. \STATE Add unused carrier $k$ by operator $O_{i}$: \STATEx Calculate the utility gain $G_{i,k}$ for the added carrier $k$. \STATEx Compare it with the utility loss $L_{-i,k}$ of the other operator $O_{-i}$. \STATEx \hskip\algorithmicindent \textbf{if} {$G_{i,k} > L_{-i,k}$} \textbf{then} \STATEx \hskip \algorithmicindent \hskip\algorithmicindent \textbf{do} \textit{START} using carrier $k$. \STATEx \hskip\algorithmicindent \textbf{end if} \STATE Remove used carrier $k$ by operator $O_{i}$: \STATEx Calculate the utility loss $L_{i,k}$ for the removed carrier $k$. \STATEx Compare it with the utility gain $G_{-i,k}$ of the other operator $O_{-i}$.
\STATEx \hskip\algorithmicindent \textbf{if} {$L_{i,k} < G_{-i,k}$} \textbf{then} \STATEx \hskip\algorithmicindent \hskip\algorithmicindent \textbf{do} \textit{STOP} using carrier $k$. \STATEx \hskip\algorithmicindent \textbf{end if} \STATE \textbf{go to} 1, and repeat until convergence is achieved. \end{algorithmic} \end{algorithm} \subsection{Mathematical Analysis} \label{sec:CoopMath} \noindent In this section, we present a mathematical analysis of the cooperative game between the operators under varying interference conditions and load factors. For the sake of simplicity, we make the following assumptions in order to gain insight into the behaviour of the algorithm: \begin{itemize} \item There are two operators, $O_{a}$ and $O_{b}$, each having a \ac{BS} and a load of $N_{a}$ and $N_{b}$ users, respectively, \item All users within a \ac{BS}'s access area experience the same \ac{SINR}/\ac{SNR} over the component carriers. With this, the scheduling weights are the same ($w_{n,k}=1/\text{load}$), \item No shadowing is considered; therefore, user rates are a function of the distance-dependent path loss only, \item Operators follow the \ac{PF} utility measure ($\sum\nolimits_{n}{\log {{r}_{n}}}$, where $r_{n}$ is the rate of the $n$-th user in the operator's access area). \end{itemize} \noindent We assume that both operators start with orthogonal sharing, and we show that under \begin{itemize} \item low interference, both operators tend to share the full spectrum, \item high interference and asymmetric loads, both operators tend to share the spectrum orthogonally, with the high-load operator utilizing more component carriers than the low-load operator, \item high interference and symmetric loads, both operators tend to share the spectrum orthogonally with an equal carrier allocation. \end{itemize} \subsubsection{Orthogonal Spectrum Sharing} \label{sec:CoopMathOrtho} \noindent Let operators $O_{a}$ and $O_{b}$ have respective loads $N_{a}$ and $N_{b}$. Both operators initially share the spectrum orthogonally, each having an equal and non-overlapping allocation of $K/2$ carriers, so that no inter-operator interference is generated. For the \ac{PF} measure, the utility of operator $O_{a}$ with orthogonal carrier allocation, $U_{o,a}$, reads \begin{equation*} {{U}_{o,a}}=\sum\limits_{n=1}^{{{N}_{a}}}{\log }\left( \sum\limits_{k=1}^{{{K}_{a}}}{{{w}_{n,k}}{{\log }_{2}}\left( 1+{{\gamma }_{n,k}} \right)} \right). \end{equation*} \noindent As per the assumptions and initial conditions, $U_{o,a}$ can be written as \begin{equation} \label{eq:CoopOrthoUtilA} {{U}_{o,a}}={{N}_{a}}\log \left( \frac{K}{2}\frac{1}{{{N}_{a}}}{{\log }_{2}}\left( 1+{{\gamma }_{a}} \right) \right), \end{equation} \noindent where ${\gamma }_{a}$ is the \ac{SNR}\footnote{As each operator has a single \ac{BS}, and according to the assumptions, the \ac{SNR} is the same over all component carriers for all users within an operator.} of the users of operator $O_{a}$.\\ \noindent Similarly, for operator $O_{b}$, the utility $U_{o,b}$ is\\ \begin{equation} \label{eq:CoopOrthoUtilB} {{U}_{o,b}}={{N}_{b}}\log \left( \frac{K}{2}\frac{1}{{{N}_{b}}}{{\log }_{2}}\left( 1+{{\gamma }_{b}} \right) \right). \end{equation} \subsubsection{Cooperative Spectrum Sharing} \noindent Operator $O_{a}$ has a higher load than operator $O_{b}$, ${{N}_{a}}\gg {{N}_{b}}$. Under asymmetric load it is beneficial (in a cooperative sense) for operator $O_{a}$ to use more carriers than operator $O_{b}$.
If the high-load operator $O_{a}$ cooperatively agrees with operator $O_{b}$ to switch on one of its unused carriers, the utility of operator $O_{a}$, i.e., $U_{c,a}$, becomes \begin{equation*} U_{c,a}={{N}_{a}}\log \left( \frac{K}{2}\frac{1}{{{N}_{a}}}{{\log }_{2}}\left( 1+{{\gamma }_{a}} \right)+\frac{1}{{{N}_{a}}}{{\log }_{2}}\left( 1+\gamma _{a}^{'} \right) \right), \end{equation*} \noindent where $\gamma^{'}_{a}$ is the new \ac{SINR} on the added carrier, which both operators now use at the same time in the same vicinity, thereby generating inter-operator interference to each other; all other symbols have their usual meanings. As a result, the utility gain of operator $O_{a}$ is $G_{a} = U_{c,a} - U_{o,a}$, \begin{equation*} {{G}_{a}}={{N}_{a}}\log \left( \frac{\left( \frac{K}{2} \right)\log_{2} \left( 1+{{\gamma }_{a}} \right)+\log_{2} \left( 1+\gamma _{a}^{'} \right)}{\left( \frac{K}{2} \right)\log_{2} \left( 1+{{\gamma }_{a}} \right)} \right). \end{equation*} \noindent Similarly, for operator $O_{b}$, the utility $U_{c,b}$ is \begin{equation*} U_{c,b}={{N}_{b}}\log \left( \left( \frac{K}{2}-1 \right)\frac{1}{{{N}_{b}}}{{\log }_{2}}\left( 1+{{\gamma }_{b}} \right)+\frac{1}{{{N}_{b}}}{{\log }_{2}}\left( 1+\gamma _{b}^{'} \right) \right), \end{equation*} \noindent and the respective utility loss is $L_{b} = U_{o,b} - U_{c,b}$, \begin{equation*} {{L}_{b}}={{N}_{b}}\log \left( \frac{\left( \frac{K}{2} \right)\log_{2} \left( 1+{{\gamma }_{b}} \right)}{\left( \frac{K}{2}-1 \right)\log_{2} \left( 1+{{\gamma }_{b}} \right)+\log_{2} \left( 1+\gamma _{b}^{'} \right)} \right). \end{equation*} \noindent Let us define the ratio of rates with and without interference, $R=\frac{\log_{2} \left( 1+{\gamma }' \right)}{\log_{2} \left( 1+\gamma \right)},\,R<1$ (because the SNR $\gamma$ exceeds the SINR $\gamma^{'}$, and the signal power exceeds the noise power, i.e., $\gamma>1$), and rewrite the gain/loss as \begin{equation*} {{G}_{a}}={{N}_{a}}\log \left( 1+2\frac{{{R}_{a}}}{K} \right), \end{equation*} \begin{equation*} {{L}_{b}}={{N}_{b}}\log {{\left( 1+2\frac{\left( {{R}_{b}}-1 \right)}{K} \right)}^{-1}}. \end{equation*} \noindent The cooperative utility (the sum of the operators' utilities) increases if ${{G}_{a}}>{{L}_{b}}$. The necessary condition is \begin{equation*} {{N}_{a}}\log \left( 1+2\frac{{{R}_{a}}}{K} \right)>{{N}_{b}}\log {{\left( 1+2\frac{\left( {{R}_{b}}-1 \right)}{K} \right)}^{-1}}, \end{equation*} \begin{equation} \label{eq:CoopHighLoadCond} {{{\left( 1+\frac{2{{R}_{a}}}{K} \right)}^{{{N}_{a}}}}{{\left( 1+\frac{2\left( {{R}_{b}}-1 \right)}{K} \right)}^{{{N}_{b}}}}>1}. \end{equation} \noindent Similarly, in the opposite case, if the low-load operator $O_{b}$ switches on one of its unused carriers starting from the initial equal orthogonal carrier allocation, the condition is \begin{equation} \label{eq:CoopLowLoadCond} {{{\left( 1+\frac{2\left( {{R}_{a}}-1 \right)}{K} \right)}^{{{N}_{a}}}}{{\left( 1+\frac{2{{R}_{b}}}{K} \right)}^{{{N}_{b}}}}>1}. \end{equation} \begin{enumerate} \item Under low interference, both $R_{a}$ and $R_{b}$ approach one because the SINR $\gamma^{'}$ tends to the SNR $\gamma$. Therefore, both Eq.~\eqref{eq:CoopHighLoadCond} and~\eqref{eq:CoopLowLoadCond} are satisfied, and both operators start using the unused carriers and share the full spectrum. \item Under high interference, the interference power becomes significant and tends to lie close to the signal power.
Thus, the SINR ${{\gamma }^{'}}$ tends to 1, and consequently ${{R}_{a}}\to 1/{\log_{2} \left( 1+{{\gamma }_{a}} \right)}$ and ${{R}_{b}}\to 1/{\log_{2} \left( 1+{{\gamma }_{b}} \right)}$ (because $\log_{2}(1+\gamma ')\approx {{\log }_{2}}2$), which implies that ${{R}_{a}}<1$ and ${{R}_{b}}<1$. Therefore, in Eq.~\eqref{eq:CoopHighLoadCond}, the component $\left( 1+{2{{R}_{a}}}/{K} \right) > 1$, whereas $\left( 1+{2{({R}_{b}-1)}}/{K} \right)<1$, and with ${{N}_{a}}\gg {{N}_{b}}$ the left-hand side of Eq.~\eqref{eq:CoopHighLoadCond} becomes greater than one and the inequality is satisfied. On the other hand, in Eq.~\eqref{eq:CoopLowLoadCond}, $\left( 1+{2{({R}_{a}-1)}}/{K} \right)<1$ and $\left( 1+{2{{R}_{b}}}/{K} \right) > 1$, and with ${{N}_{a}}\gg {{N}_{b}}$ the left-hand side of Eq.~\eqref{eq:CoopLowLoadCond} becomes less than one and the inequality is not satisfied. This implies that, under high interference, the high-load operator obtains more orthogonal carriers than the low-load operator as long as their sum-operator utility increases. \item With equal load, i.e., ${{N}_{a}}\approx {{N}_{b}}=N$, we can assume ${{R}_{a}}\approx {{R}_{b}}=R$, and on that account both Eq.~\eqref{eq:CoopHighLoadCond} and~\eqref{eq:CoopLowLoadCond} describing the spectrum sharing conditions reduce to a single condition, independent of the load, namely \begin{equation} {{\left( 1+2\frac{R}{K} \right)\left( 1+2\frac{\left( R-1 \right)}{K} \right) }}>1. \label{eq:CoopEqLoadHighIntf} \end{equation} \noindent Under high interference (i.e., $R=1/{\log_{2} \left( 1+\gamma \right)}$, $R<1$), there should be no transfer of carriers, because the loads are equal and the operators should remain as they started, with an equal orthogonal carrier allocation. Therefore, from Eq.~\eqref{eq:CoopEqLoadHighIntf}, the condition to remain in orthogonal sharing is obtained as \begin{equation*} \left( 1+2\frac{R}{K} \right)\left( 1+2\frac{\left( R-1 \right)}{K} \right)<1. \end{equation*} \noindent So, the condition to remain in an equal orthogonal share at high interference is \begin{equation} \label{eq:CoopEqLoadHighIntfCond} \frac{2R\left( R-1 \right)}{K}+2R-1<0. \end{equation} \noindent For a large number of carriers ($K$), the necessary condition obtained from Eq.~\eqref{eq:CoopEqLoadHighIntfCond} is $R<0.5$, which translates to $\gamma > 3$, i.e., 4.77 dB. In the limiting case with $K=2$, the necessary condition becomes $R<0.618$, i.e., $\gamma > 2.07$ or 3.15 dB. \end{enumerate} \noindent As a result, with high inter-operator interference, and for any number of channels, the operator with a low load abandons carriers provided that the \ac{SNR} is higher than 4.77 dB. \clearpage \section{Repeated Games using Virtual Carrier Price for Spectrum Sharing} \label{chap:GamePrice} \noindent Operators may not be willing to share their performance over the different parts of the spectrum, nor willing to decide invariably in favor of the cooperative utility. In this sense, a cooperative game model does not describe the interactions between operators in a realistic manner. Instead, the interaction between operators can be modelled as a noncooperative game. The operators are assumed to interact for long periods of time and to have a well-defined and publicly known identity. Accordingly, an appropriate framework is that of \textit{repeated games}. We assume that each operator has a carrier allocation strategy in which it acts based on predetermined rules.
One-shot games are not considered in our study, as they can result in poor performance for some if not all of the players~\cite{JR07Etkin}.\\ \noindent In this chapter, we model \textit{noncooperative repeated games} with a \textit{virtual carrier pricing} based utility for inter-operator spectrum sharing. In this model, operators distributively estimate their utility gain for a new carrier allocation strategy. For instance, in downlink transmission, an operator can ask its \acp{UE} to measure the carrier utilization and interference levels and report them to the home \ac{BS}. The operator uses this information to analyze a carrier allocation strategy in which it starts using one of its unused carriers, or asks the opponent to stop using one. The operators interact and approve each other's carrier allocation strategies if they see a utility improvement for themselves. The operators are self-interested; therefore, they scrutinize their mutual interaction in terms of the spectrum usage favors given to each other. Finally, a later chapter presents the simulation results for the proposed model and assesses its performance against traditional allocation schemes and the cooperative algorithm. \subsection{System Description} \label{sec:GamePriceSysDescription} \noindent We propose a dynamic spectrum sharing method in which the operators actively attempt to share their spectrum with the other operators in the downlink based on a given policy. Obviously, leasing would mean that the lessee operator has to pay a certain compensation to the lessor (owner) operator for the additional spectrum. However, instead of monetary compensation for the gained spectrum usage, operators keep track of their mutual spectrum transactions and can ask each other for their fair due, based on their mutual history, in demanding situations. As the operators' identities are publicly known, they strive to behave honestly.\\ \noindent We consider a geographical area served by a number of operators whose \acp{RAN} are connected with each other. For the discussion, each operator is considered to be a single-cell operator. The set of operators is denoted by $\mathcal{I}=\{1,2,3,\ldots,I\}$. The \ac{BS} distributes the $K$ carriers amongst the $J$ users according to their \ac{CSI} and fairness.\\ \noindent The total available spectrum ($K$ carriers) in the given geographical area is divided into two different allocations, namely (i) fixed spectrum allocation (FSA) and (ii) dynamic spectrum allocation (DSA). In \ac{FSA}, each operator has its own independent spectrum usage rights (or \acp{PCC}); therefore, no frequency overlapping occurs, nor is any inter-operator interference generated. On the other hand, there exists a common pool of spectrum (or \acp{SCC}) for which operators contend for spectrum usage rights based on established policies; this is termed \ac{DSA}. As multiple operators possess the right to access the spectrum in \ac{DSA} and there is no direct mechanism to control interference between the operators, inter-operator interference is generated, as depicted in Fig.~\ref{SpectrumPool}. \begin{figure}[h] \centering \includegraphics[scale=.52, trim = 13mm 0mm 0mm 0mm, clip]{SpectrumPool.png} \caption{Operators contend for spectrum within the common spectrum pool} \label{SpectrumPool} \end{figure} \noindent In the following, we denote by ${{r}_{i,j}}({{k}_{i}},{{k}_{-i}})$ the rate of the $j$-th user of the $i$-th operator.
Here, ${k}_{i}$ and ${k}_{-i}$ are the carrier allocations of operators ${O}_{i}$ and ${O}_{-i}$, respectively, where the ${O}_{-i}$ signal may interfere with the $O_i$ signal. We estimate the throughput ${{T}_{i}}\left( {{k}_{i}},{{k}_{-i}} \right)$ of a single-cell operator ${O}_{i}$ serving $J_i$ users by the Shannon capacity as \begin{equation*} {{T}_{i}}=\sum\limits_{j=1}^{J_i}{{{r}_{i,j}}({{k}_{i}},{{k}_{-i}})}, \end{equation*} \begin{equation*} \label{eq:GamPricThroughput} {{T}_{i}}=\sum\limits_{j=1}^{J_i}{\sum\limits_{k=1}^{K}{{{w}_{i,j,k}}({{k}_{i}},{{k}_{-i}})\log_{2}(1+{\text{SINR}}_{i,j,k}({{k}_{i}},{{k}_{-i}}))}}, \end{equation*} \noindent where ${\text{SINR}}_{i,j,k}({k}_{i}, {k}_{-i})$ and ${w}_{i,j,k}$ are the downlink \ac{SINR} and the time scheduling weight of the $k$-th carrier of the $j$-th user in the $i$-th operator, respectively. The ${\text{SINR}}_{i,j,k}({k}_{i}, {k}_{-i})$ is defined as \begin{equation*} {{\text{SINR}}_{i,j,k}}=\frac{P_{i}\left( {{k}_{i}} \right){{\mathcal{C}}_{i,j}}}{\left( \sum\nolimits_{q=1,q\ne i}^{I}{P_{q}\left( {{k}_{q}} \right)}{{\mathcal{C}}_{q,j}} \right)+{N_{o}}}, \end{equation*} \noindent where $P_{i}\left( {{k}_{i}} \right)$ is the signal power of the $k$-th carrier, s.t., the total power budget $P$ is uniformly distributed over the $K_i$ active component carriers out of the total $K$ carriers of the $i$-th operator, i.e., $P_{i}\left( {{k}_{i}} \right)=P/K_i$; ${\mathcal{C}}_{i,j}$ is the channel gain of the $j$-th user within the $i$-th operator; $N_o$ is the power density of the background noise; and $\sum\nolimits_{q=1,q\ne i}^{I}{P_q\left( {{k}_{q}} \right)}{{\mathcal{C}}_{q,j}}$ represents the total interference power perceived by the $j$-th user on the $k$-th carrier, which is engendered by the other operators while sharing the same carrier frequency in \ac{DSA}. We define the scheduling weights ${w}_{i,j,k}$, where the $j$-th user is scheduled on the $k$-th component carrier for a fraction ${w}_{i,j,k}$ of the time, in such a manner that the throughput $T_{i}$ is maximized, \begin{equation*} \begin{aligned} & \underset{w_{i,j,k}}{\text{max}} & & T_{i} \\ & \text{s.t.} & & \sum\limits_{j=1}^{J_i}{{{w}_{i,j,k}}}=1 \quad \forall k \\ & & & {{w}_{i,j,k}}\ge 0 \quad \forall \{j,k\}. \end{aligned} \end{equation*} \subsection{System Model} \subsubsection{Distributed Game Model} \noindent Briefly reviewing game theory, for a finite set $\mathcal{I}$ of operators, a game $\mathcal{G}$ in strategic form can be described as \begin{equation*} \mathcal{G}= \left\langle {{S}_{i}},{{U}_{i}} \right\rangle, \end{equation*} \noindent with the following ingredients: \begin{itemize} \item ${S}_{i}$ represents the set of strategies (or actions) of each operator $O_{i}$, $i\in \mathcal{I}$, that are feasible during the game $\mathcal{G}$, \item ${U}_{i}$ is the utility function (or objective function), on the basis of which the game $\mathcal{G}$ is played amongst the operators by applying strategies or actions $s_i \in {S}_i$ independently in an effort to obtain the best utility for themselves. \end{itemize} \noindent A key concept in noncooperative game theory is the \acf{NE}, which provides a benchmark for investigating how purely rational decision makers would behave~\cite{JR99Myerson}. A \ac{NE} is a profile of strategies ($S_{i}$, $S_{-i}$) such that each intelligent operator, having knowledge of its environment, rationally acts to maximize its own utility function $U_{i}$, which depends not only on its own actions but also on the others' actions.
Mathematically, a \ac{NE} is defined as \begin{equation*} {{U}_{i}}({{s}^{*}_{i}},{{s}_{-i}})\ge {{U}_{i}}(s_{i}^{'},{{s}_{-i}}), \end{equation*} \noindent where $s_{i}^{*}$ is a \ac{NE} strategy, given the \ac{NE} strategy ${{s}_{-i}}$, if the inequality holds for all $s_{i}^{'}\in {{S}_{i}}$ with $s_{i}^{'}\ne {{s}^{*}_{i}}$.\\ \noindent Fundamentally, it is assumed that the operators' strategies are independent and chosen intelligently at their own will. However, the game formulation extended to our work contains contingent strategies, where acceptance of a strategy requires cooperation from the opponent operator; e.g., one operator may request the other to switch off an interfering carrier, and the other accedes to the request only if it sees a utility gain in doing so.\\ \noindent To play such a game $\mathcal{G}$, operator $O_{i}$ evaluates its carrier allocation strategy $s_{i}^{*}$ and checks for its utility\footnote{Here, the utility is a function of the cell throughput and a carrier price component; refer to Section~\ref{sec:GamePriceUtil} of this chapter for the detailed description.} gain, accordingly, \begin{equation} \label{eq:GameOwnUtilIneq} {{U}_{i}}(s_{i}^{*},{{s}_{-i}})>{{U}_{i}}({{s}_{i}},{{s}_{-i}}), \end{equation} \noindent where $s_{i}$ and $s_{-i}$ are the existing strategy profiles of operators $O_{i}$ and $O_{-i}$, respectively. While there could be many viable strategies, the operator is likely to adopt the strategy that fetches it the highest possible utility gain. If Eq.~\eqref{eq:GameOwnUtilIneq} is satisfied, operator $O_{i}$ requests operator $O_{-i}$ to fulfil its strategy $s_{i}^{*}$. Operator $O_{-i}$ then analyzes its own utility function $U_{-i}$, accordingly, \begin{equation} \label{eq:GameOtherUtilIneq} {{U}_{-i}}(s_{i}^{*},{{s}_{-i}})>{{U}_{-i}}({{s}_{i}},{{s}_{-i}}). \end{equation} \noindent If the above inequality is satisfied, the evaluated strategy $s_{i}^{*}$ is confirmed. After mutual agreement, the newly yielded strategy comes into existence, $s_{i}^{*}\to {{s}_{i}}$, and eventually the strategy profile ${{({{s}_{i}})}_{i\in \mathcal{I}}}$ of the operators converges to a \ac{NE}. It has to be noted that the described game model is noncooperative even though the strategies are contingent, as the operators never reveal any of their utility-related information to each other. Besides, the decisions are made locally, unlike in~\cite{CP13Amin}, where operators compare their utility gains/losses with each other and make their decisions jointly. \subsubsection{Utility Function} \label{sec:GamePriceUtil} \noindent The utility function is a performance metric whose design is a decisive factor: the given operator tries to optimize this function every time a strategy is played by the other, and plays its own strategy afterwards. Normally, the cell throughput or one of its variants, e.g., \ac{MMF}, \ac{PF}, mean-rate or weighted fair utility~\cite{BK97Keshav,JR98Kelly}, is regarded as the true measure of user satisfaction and is the usual choice for the utility function. Operators playing noncooperative games neither have to maintain the same utility nor need to be aware of the utility of the other operator. The utility function is then defined as \begin{equation} \label{eq:UtilThroughput} {{U}_{i}}=f\left( {{T}_{i}}\left( {{k}_{i}},{{k}_{-i}} \right) \right), \end{equation} \noindent where $f$ represents the fairness criterion, as described in Eq.~\eqref{eq:fair} to~\eqref{eq:mmf}.
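\\ \noindent For illustration, the following minimal Python sketch (with illustrative SINR values and scheduling weights only) evaluates the user rates defined in Section~\ref{sec:GamePriceSysDescription} and a throughput-based utility of this form, taking the \ac{PF} measure $\sum_{j}\log r_{i,j}$ as $f$:
\begin{verbatim}
# Sketch of the throughput-based utility U_i = f(T_i): user rates follow the
# Shannon formula r_{i,j} = sum_k w_{i,j,k} * log2(1 + SINR_{i,j,k}), and f is
# taken here as the proportional-fair measure sum_j log(r_{i,j}).
# The SINR values and scheduling weights below are illustrative only.
import numpy as np

def user_rates(sinr, weights):
    # sinr, weights: arrays of shape (num_users, num_carriers)
    return np.sum(weights * np.log2(1.0 + sinr), axis=1)

def pf_utility(rates):
    return float(np.sum(np.log(rates)))

sinr = np.array([[8.0, 3.0],     # linear SINR of user 1 on carriers 1 and 2
                 [5.0, 6.0]])    # linear SINR of user 2 on carriers 1 and 2
w = np.full_like(sinr, 0.5)      # equal time-sharing: weights sum to 1 per carrier
rates = user_rates(sinr, w)
print("T_i =", rates.sum(), " f(T_i) =", pf_utility(rates))
\end{verbatim}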
Operators play strategies involving the carrier allocation $(k_{i},k_{-i})$ and aim to maximize their utility function continually, \begin{equation} \label{eq:UtilMax} \underset{{{k}_{i}},{{k}_{-i}}}{\mathop{\text{max}}}\,{{U}_{i}}\text{ s}\text{.t}\text{. }{{k}_{i}},{{k}_{-i}}\in K. \end{equation} \noindent However, the operators are interested in maximizing their throughput over the time horizon $\mathcal{T}$ of the repeated games rather than at every time instant, since the sacrificing operator (with a low load factor) loses a small amount of throughput in order to gain larger throughput benefits during its peak conditions, s.t., \begin{equation} \label{eq:GamePriceThTime} \underset{{{k}_{i}}\left( t \right),{{k}_{-i}}\left( t \right)}{\mathop{\text{max}}}\,\underset{\mathcal{T}\to \infty }{\mathop{\lim }}\,\frac{1}{\mathcal{T}}\int\limits_{0}^{\mathcal{T}}{{{T}_{i}}\left( {{k}_{i}}\left( t \right),{{k}_{-i}}\left( t \right) \right)dt}\text{ s}\text{.t}\text{. }{{k}_{i}},{{k}_{-i}}\in K. \end{equation} \noindent According to Eq.~\eqref{eq:GamePriceThTime}, operators cannot maximize their utility at all time instants if the utility function is based on throughput alone or one of its variants (like \ac{PF}, \ac{MMF}, etc. in Eq.~\eqref{eq:UtilThroughput}), which contradicts what is stipulated in Eq.~\eqref{eq:UtilMax}. The reason is that, if the utility function is based on throughput alone, then the operator sacrificing resources will always see an immediate drop in utility. Therefore, one of Eq.~\eqref{eq:GameOwnUtilIneq} and~\eqref{eq:GameOtherUtilIneq} will never be satisfied and thus the operators will never share resources.\\ \noindent However, if it is desirable to have game outcomes that are closer to a social optimum, one may augment the utility with a virtual carrier price, \begin{equation} \label{UtilThroughputPrice} {{U}_{i}}=f({{T}_{i}})-{{\lambda }_{i}}, \end{equation} \noindent where $\lambda_{i}$ is the virtual carrier price. There is a lot of literature available on spectrum pricing (e.g.,~\cite{CP07Gandhi,CP06Ryan,CP05Ileri,CP04Huang}). Here, the carrier pricing function is kept simple, instead of being modelled in terms of market-based forces as in most of the literature. Therefore, we select \begin{equation} \label{eq:VirCarPriFun} {{\lambda }_{i}}=p_1\left( {{e}^{p_2 \frac{\sum\nolimits_{k=1}^{K}{c_{i,k}}}{K_i} }}-1 \right), \end{equation} \noindent where $p_1$ and $p_2$ are the pricing constants, ${c}_{i,k}$ is the carrier utilization of the $k$-th carrier and $K_i$ is the number of active component carriers out of the total $K$ carriers of the $i$-th operator. For a particular case with two operators in a given geographical area, the carrier utilization $c(k)$ can be set to, \begin{equation*} c\left( k \right)=\begin{cases} 1, & k\ \text{a full carrier}, \\ 0.5, & k\ \text{a shared carrier}, \\ 0, & k\ \text{an unused carrier}. \end{cases} \end{equation*} \noindent \\ \textit{Virtual Carrier Price} $\lambda$ in the utility function $U$ penalises the operators for their carrier usage. In the game, operators aim to maximize their utility function at every game sequence, as shown by Eq.~\eqref{eq:UtilMax}. With increased carrier utilization, the heavily loaded operators can have a larger throughput component in comparison to the negative carrier pricing component in their utility function, and consequently their utility increases.
This allows the heavily loaded operators to afford more spectrum resources, which is typically not the case for the sparsely loaded operators. The exponential form of the carrier pricing component in Eq.~\eqref{eq:VirCarPriFun} is chosen because it penalises the operators for increased carrier usage while pricing the minimal carrier utilization requirement of every operator at a negligible cost. The label \textquoteleft Virtual\textquoteright~signifies that the price is not measured in monetary terms; rather, it is a virtual measure or tool by which operators share the spectrum according to their demands. In this manner, operators are able to share the spectrum resources opportunistically and maximize their sum-throughput noncooperatively. \subsubsection{Spectrum Usage Favors} \label{sec:GamePriceFavors} \noindent Operators are always motivated by self-interest; therefore, they model negotiations for carriers in terms of spectrum usage favors. A favor refers to component carrier utilization. It is assumed that the opponent operator cooperates provided that both operators have so far fulfilled about the same number of favors. To this end, each operator maintains a bookkeeping system listing the number of times each operator has been cooperative. The operators grant favors to the opponents if they see a cooperative spirit. This kind of strategy resembles a tit-for-tat~\cite{JR81Axelrod,BK07Axelrod} strategy in the sense that it is forgiving and avoids immediate punishment. Note that this idea can also be extended to more general cases where operators can grant a higher number of favors to the opponent provided they receive some sort of compensation. In this study, though, we consider neither inter-operator communication external to the radio access network nor monetary transactions between the different entities.\\ \noindent Let us assume that operator $O_{i}$ selects a carrier at random and constructs the possible favors for the different courses of strategic action - \begin{itemize} \item {Both operators utilize carrier $k$. Operator $O_{i}$ asks operator $O_{-i}$ to stop using the carrier if it sees a utility gain according to Eq.~\eqref{eq:GameOwnUtilIneq}. Operator $O_{-i}$ does the favor only if its own utility gain is positive too, as per Eq.~\eqref{eq:GameOtherUtilIneq}.} \item {Operator $O_{i}$ does not use carrier $k$, but operator $O_{-i}$ does. Operator $O_{i}$ evaluates a strategy using carrier $k$. If both operators see a utility gain according to the game (Eq.~\eqref{eq:GameOwnUtilIneq} and~\eqref{eq:GameOtherUtilIneq}), the new strategy is agreed and regarded as a favor, as it causes additional interference to operator $O_{-i}$.} \item {No operator utilizes the carrier and thus, operator $O_{i}$ can start using it.} \item {When only operator $O_{i}$ utilizes the carrier, there is no interaction between the operators.} \end{itemize} \noindent Tab.~\ref{tab:FavorsClassification} below summarises the strategic actions involving own carrier ($k_{i}$) and interfering carrier ($k_{-i}$) allocations that define a favor given to operator $O_{i}$ by operator $O_{-i}$.
\begin{table}[h] \centering \caption{Favors Classification} {\begin{tabular}{ p{7cm}p{3cm}} \hline \textbf{Operator} ${{O}_{i}}:\left( k_{i}^{t},k_{-i}^{t} \right)\to \left( k_{i}^{t+1},k_{-i}^{t+1} \right)$ & \textbf{Favor} \\ \hline $\left( \text{on,on} \right)\to \left( \text{on,off} \right)$ & Yes \\ $\left( \text{off,on} \right)\to \left( \text{on,on} \right)$ & Yes \\ $\left( \text{off,off} \right)\to \left( \text{on,off} \right)$ & No \\ $\left( \text{on,off} \right)\to \left( \text{on,off} \right)$ & No \\ \hline \end{tabular}} \label{tab:FavorsClassification} \end{table} \noindent To mitigate the selfish behaviour of the opponents, operators limit the number of outstanding favors. The operators incorporate a hard stopping criterion for the game: they model the utility function based on Eq.~\eqref{UtilThroughputPrice} and grant spectrum usage favors to each other as long as their outstanding favors are less than the surplus limit $S$, \begin{equation*} \begin{aligned} {{O}_{i}}~:&{{h}_{-i}}-{{h}_{i}}\le S,\\ {{O}_{-i}}:&{{h}_{i}}-{{h}_{-i}}\le S. \end{aligned} \end{equation*} \noindent For instance, if operator $O_{i}$ has received $S$ more favors than it has given to operator $O_{-i}$, operator $O_{-i}$ will not review its requests any further unless operator $O_{i}$ starts granting favors in return and brings the outstanding favors of operator $O_{-i}$ below $S$.\\ \noindent The surplus limit $S$ controls the width of the outstanding favors window; therefore its selection requires appropriate care. Choosing small values can subdue the game, whereas large values can polarize the game in favor of particular operators. With a fitting value for the surplus limit, the operators are able to trade resources fairly while keeping unfair requests for resources in check. \subsection{Proposed Algorithm I} \noindent The proposed algorithm in the form of pseudocode is summarised below, \setcounter{algorithm}{0} \begin{algorithm} [H] \renewcommand\thealgorithm{} \caption{Repeated Games Model using Virtual Carrier Price for Inter-operator Spectrum Sharing} \begin{algorithmic}[1] \STATE Operator $O_{i}$, where $i \in \mathcal{I}$, analyses strategy $s$ by switching on carrier $k_{i}$ or removing interfering carrier $k_{-i}$. Calculates new utility $U_{i,s}$ and compares it with present utility $U_{i}$. \STATEx\textbf{if} {$U_{i,s} > U_{i}$} \textbf{then} \STATE \quad Operator $O_{-i}$ compares its outstanding favors with surplus $S$. \STATEx\quad\textbf{if} {$h_{-i}-h_{i} \le S$} \textbf{then} \STATE \quad\quad\begin{varwidth}[t]{\linewidth} Operator $O_{-i}$ compares new utility $U_{-i,s}$ for strategy $s$ with present utility\\ $U_{-i}$. \end{varwidth} \STATEx \quad\quad\textbf{if} {$U_{-i,s} > U_{-i}$} \textbf{then} \STATE \quad\quad\quad Strategy $s$ is accepted. \STATE \quad\quad\quad Favors are updated: ${{h}_{-i}}\to {{h}_{-i}}+1$. \STATE \quad\quad\textbf{end if} \STATE \quad\textbf{end if} \STATE \textbf{end if} \end{algorithmic} \end{algorithm} \subsection{Mathematical Analysis} \noindent In this section, we analyze the algorithm mathematically, and provide theoretical results for the optimization of the pricing constants. For the analysis, we consider the same assumptions made in Section~\ref{sec:CoopMath}.
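\noindent Before proceeding, the negotiation round of Algorithm~I can be summarized in the following minimal Python sketch (our illustration; the data layout, function names and the identity fairness function are assumptions of the example, and the counters $h_i$, $h_{-i}$ are interpreted as favors granted so far by each operator):
\begin{verbatim}
import math

def virtual_price(c, K_active, p1=7.0, p2=0.8):
    # lambda_i = p1*(exp(p2 * sum_k c_{i,k} / K_i) - 1), Eq. (VirCarPriFun)
    return p1 * (math.exp(p2 * sum(c) / max(K_active, 1)) - 1.0)

def priced_utility(T, c, K_active, p1=7.0, p2=0.8):
    # U_i = f(T_i) - lambda_i, with f taken as the identity for brevity
    return T - virtual_price(c, K_active, p1, p2)

def negotiate(U_i, U_i_new, U_mi, U_mi_new, h_i, h_mi, S=2):
    """One round of Algorithm I between O_i (requester) and O_-i.

    Returns (accepted, h_mi): h_i / h_mi count favors granted so far by
    O_i and O_-i respectively; a granted favor increments h_mi.
    """
    if U_i_new > U_i:              # Eq. (GameOwnUtilIneq) holds for O_i
        if h_mi - h_i <= S:        # O_-i's outstanding-favor (surplus) check
            if U_mi_new > U_mi:    # Eq. (GameOtherUtilIneq) holds for O_-i
                return True, h_mi + 1
    return False, h_mi
\end{verbatim}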
\subsubsection{Orthogonal Spectrum Sharing} \noindent Referring to Section~\ref{sec:CoopMathOrtho}, the \ac{PF} throughput of operator $O_{a}$ with orthogonal carrier allocation, $T_{o,a}$, is given by Eq.~\eqref{eq:CoopOrthoUtilA} and that of operator $O_{b}$, $T_{o,b}$, is given by Eq.~\eqref{eq:CoopOrthoUtilB}.\\ \noindent The sum-\ac{PF} throughput of both operators, $T_{o}$, is ${{T}_{o}}={{T}_{o,a}}+{{T}_{o,b}}$, i.e., \begin{equation} {{T}_{o}}={{N}_{a}}\log \left( \frac{K}{2}\frac{1}{{{N}_{a}}}{{\log }_{2}}\left( 1+{{\gamma }_{a}} \right) \right)+{{N}_{b}}\log \left( \frac{K}{2}\frac{1}{{{N}_{b}}}{{\log }_{2}}\left( 1+{{\gamma }_{b}} \right) \right). \label{eq:GamePriceOrtho} \end{equation} \subsubsection{Repeated Games based Spectrum Sharing} \label{sec:GamPriceOpt} \noindent Let us assume ${{N}_{a}}>{{N}_{b}}$; since $O_{a}$ is the high load operator, it is appropriate that operator $O_a$ gets more resources in order to reduce congestion and blocking probability~\cite{CP13Amin,CP11Prasad}. Assume $\Delta K$ carriers are transferred to high load operator $O_{a}$ by low load operator $O_{b}$ at the end of a game sequence, starting from the initial orthogonal allocation (where $\Delta K\in \left( 0,{K}/{2} \right)$). Then the new respective \ac{PF} throughputs of the operators during the game, $T_{g,a}$ and $T_{g,b}$, are \begin{equation*} {{T}_{g,a}}={{N}_{a}}\log \left( \left( \frac{K}{2}+\Delta K \right)\frac{1}{{{N}_{a}}}{{\log }_{2}}\left( 1+{{\gamma }_{a}} \right) \right), \end{equation*} \begin{equation*} {{T}_{g,b}}={{N}_{b}}\log \left( \left( \frac{K}{2}-\Delta K \right)\frac{1}{{{N}_{b}}}{{\log }_{2}}\left( 1+{{\gamma }_{b}} \right) \right). \end{equation*} \noindent The sum-\ac{PF} throughput of both operators, ${{T}_{g}}$, is ${{T}_{g}}={{T}_{g,a}}+{{T}_{g,b}}$, \begin{equation*} {{T}_{g}}={{N}_{a}}\log \left( \left( \frac{K}{2}+\Delta K \right)\frac{1}{{{N}_{a}}}{{\log }_{2}}\left( 1+{{\gamma }_{a}} \right) \right)+{{N}_{b}}\log \left( \left( \frac{K}{2}-\Delta K \right)\frac{1}{{{N}_{b}}}{{\log }_{2}}\left( 1+{{\gamma }_{b}} \right) \right). \end{equation*} \noindent The game is only beneficial if the sum-throughput of the game exceeds the sum-throughput of orthogonal sharing (Eq.~\eqref{eq:GamePriceOrtho}), i.e., ${{T}_{g}}>{{T}_{o}}$. Thus, \begin{equation} \label{eq:GamPriGamOrt} {{N}_{a}}\log \left( \frac{K}{2}+\Delta K \right)+{{N}_{b}}\log \left( \frac{K}{2}-\Delta K \right)>{{N}_{a}}\log \left( \frac{K}{2} \right)+{{N}_{b}}\log \left( \frac{K}{2} \right). \end{equation} \noindent Further simplifying, \begin{equation} \label{eq:GamPriGamOrt2} \left( 1+2\frac{\Delta K}{K} \right)^{N_{a}} \left( 1-2\frac{\Delta K}{K} \right)^{N_{b}} > 1.
\end{equation} \noindent Besides, as per the algorithm, the transfer of $\Delta K$ spectrum resources will occur only if the game-based utility conditions ${{U}_{g}}\left( k^{t+1} \right)>{{U}_{g}}\left( {k}^{t} \right)$ are satisfied at the operators' end, where $k^{t+1}$ and $k^{t}$ are the present and past carrier allocations.\\ \noindent So, operator $O_{a}$ checks its utility $U_{g,a}$, accordingly, ${{U}_{g,a}}\left( \frac{K}{2}+\Delta K \right)>{{U}_{g,a}}\left( \frac{K}{2} \right)$, i.e., \begin{equation*} \begin{gathered} {{N}_{a}}\log \left( \left( \frac{K}{2}+\Delta K \right)\frac{1}{{{N}_{a}}}{{\log }_{2}}\left( 1+{{\gamma }_{a}} \right) \right)-p_1\left( {{e}^{p_2\left( \frac{\frac{K}{2}+\Delta K}{K} \right)}}-1 \right)>\\ {{N}_{a}}\log \left( \frac{K}{2} \frac{1}{{{N}_{a}}}{{\log }_{2}}\left( 1+{{\gamma }_{a}} \right) \right)-p_1\left( {{e}^{p_2\left( \frac{\frac{K}{2}}{K} \right)}}-1 \right), \end{gathered} \end{equation*} \begin{equation} \label{eq:GamPriHigLoaUtiCon} {{N}_{a}}\log \left( \frac{K}{2}+\Delta K \right)-{{N}_{a}}\log \left( \frac{K}{2} \right)>p_1{{e}^{p_2\left( \frac{\frac{K}{2}+\Delta K}{K} \right)}}-p_1{{e}^{p_2\left( \frac{\frac{K}{2}}{K} \right)}}. \end{equation} \noindent Similarly, operator $O_{b}$ checks its utility $U_{g,b}$, accordingly, ${{U}_{g,b}}\left( \frac{K}{2}-\Delta K \right)>{{U}_{g,b}}\left( \frac{K}{2} \right)$, i.e., \begin{equation} \label{eq:GamPriLowLoaUtiCon} {{N}_{b}}\log \left( \frac{K}{2}-\Delta K \right)-{{N}_{b}}\log \left( \frac{K}{2} \right)>p_1{{e}^{p_2\left( \frac{\frac{K}{2}-\Delta K}{K} \right)}}-p_1{{e}^{p_2\left( \frac{\frac{K}{2}}{K} \right)}}. \end{equation} \paragraph{Optimization of Pricing Constants} \noindent\\ \\ With the conditions in Eq.~\eqref{eq:GamPriGamOrt},~\eqref{eq:GamPriHigLoaUtiCon} and~\eqref{eq:GamPriLowLoaUtiCon}, the operators can perform optimization over the variables $p_1$, $p_2$ and $\Delta K$. However, the operators strive to maximize the cooperative gain, which corresponds to maximizing the left hand side of Eq.~\eqref{eq:GamPriGamOrt2}. Therefore, the value of $\Delta K$ that returns the best sum-throughput of the system is given by \begin{equation} \label{eq:GamePriceMaxProb} \begin{aligned} & \underset{\Delta K}{\text{max}} & & f(\Delta K) \\ & \text{s.t.} & & f(\Delta K)>0, \end{aligned} \end{equation} \noindent where, from Eq.~\eqref{eq:GamPriGamOrt2}, $f(\Delta K)= \left( 1+2{\Delta K}/{K} \right)^{N_{a}} \left( 1-2{\Delta K}/{K} \right)^{N_{b}}- 1$. The solution to the given maximization problem is obtained by setting $df(\Delta K)/d(\Delta K)=0$, assuming $\Delta K$ is a continuous resource. Let the obtained solution be $\Delta {{K}_{limit}}$.\\ \noindent Adding Eq.~\eqref{eq:GamPriHigLoaUtiCon} and~\eqref{eq:GamPriLowLoaUtiCon}, we get, \begin{equation} \label{eq:GamPriHigLowSumCon} \begin{gathered} {{N}_{a}}\log \left( \frac{K}{2}+\Delta K \right)+{{N}_{b}}\log \left( \frac{K}{2}-\Delta K \right)-{{N}_{a}}\log \left( \frac{K}{2} \right)-{{N}_{b}}\log \left( \frac{K}{2} \right)>\\ p_1\left( {{e}^{p_2\left( \frac{\frac{K}{2}+\Delta K}{K} \right)}}+{{e}^{p_2\left( \frac{\frac{K}{2}-\Delta K}{K} \right)}}-2{{e}^{p_2\frac{1}{2}}} \right). \end{gathered} \end{equation} \noindent Comparing Eq.~\eqref{eq:GamPriGamOrt} and~\eqref{eq:GamPriHigLowSumCon}, it can be inferred that \begin{equation} \label{eq:GamPriPQCon} p_1\left( {{e}^{p_2\left( \frac{\frac{K}{2}+\Delta K}{K} \right)}}+{{e}^{p_2\left( \frac{\frac{K}{2}-\Delta K}{K} \right)}}-2{{e}^{p_2\frac{1}{2}}} \right)>0.
\end{equation} \noindent From Eq.~\eqref{eq:GamPriHigLoaUtiCon},~\eqref{eq:GamPriLowLoaUtiCon} and~\eqref{eq:GamPriPQCon}, the pricing constants ($p_1$ and $p_2$) can be obtained accordingly, \begin{subequations} \label{eq:GamPricEq1} \begin{align} &\text{~~~}p_1 e^{\frac{p_2}{2}} \left(\text{cosh} \left(p_2\frac{\Delta K}{K}\right)-1\right)>0,\\ &{{e}^{\frac{p_2}{2}\left( 1+2\frac{\Delta K}{K} \right)}}-{{e}^{\frac{p_2}{2}}}<\log {{\left( 1+2\frac{\Delta K}{K} \right)}^{\frac{{{N}_{a}}}{p_1}}},\\ &{{e}^{\frac{p_2}{2}\left( 1-2\frac{\Delta K}{K} \right)}}-{{e}^{\frac{p_2}{2}}}<\log {{\left( 1-2\frac{\Delta K}{K} \right)}^{\frac{{{N}_{b}}}{p_1}}}, \end{align} \end{subequations} \noindent where $0<\Delta K<\frac{K}{2}$ and ${{N}_{a}}>{{N}_{b}}$.\\ \noindent $N_{a}$ and $N_{b}$ represent the overall load conditions in the network, and they do not refer to instantaneous load values. Substituting $\Delta K=\Delta K_{limit}$, ${{N}_{a}}={{\widehat{N}}_{high}}$, and ${{N}_{b}}={{\widehat{N}}_{low}}$ in Eq.~\eqref{eq:GamPricEq1}, the optimization equations can be rewritten as \begin{subequations} \label{eq:GamPricEq2} \begin{align} p_1&>0 \label{eq:GamPricEq2a},\\ p_2&\ne0 \label{eq:GamPricEq2b}, \\ {{e}^{\frac{p_2}{2}\left( 1+2\frac{\Delta K_{limit}}{K} \right)}}-{{e}^{\frac{p_2}{2}}}&<\log {{\left( 1+2\frac{\Delta K_{limit}}{K} \right)}^{\frac{{{\widehat{N}}_{high}}}{p_1}}} \label{eq:GamPricEq2c}, \\ {{e}^{\frac{p_2}{2}\left( 1-2\frac{\Delta K_{limit}}{K} \right)}}-{{e}^{\frac{p_2}{2}}}&<\log {{\left( 1-2\frac{\Delta K_{limit}}{K} \right)}^{\frac{{{\widehat{N}}_{low}}}{p_1}}} \label{eq:GamPricEq2d}. \end{align} \end{subequations} \noindent In the analysis, $\Delta K_{limit}$ represents the maximum number of additional carriers that a low load operator is allowed to transfer to a high load operator from their initial equal orthogonal carrier allocations. For $\widehat{N}_{high}=25$ and $\widehat{N}_{low}=5$, we obtain $\Delta K_{limit}=2.7$ (using Eq.~\eqref{eq:GamePriceMaxProb}); the values of the pricing constants $p_1$ and $p_2$ can then be chosen from the region depicted in Fig.~\ref{fig:OptPricConse}. In the simulation (Section~\ref{sec:Algo1Analysis}), where many of these assumptions are relaxed, we have fixed the parameters $p_1=7$ and $p_2=0.8$, consistent with the theoretical solution set (see Fig.~\ref{fig:OptPricConse}). We have observed, out of $K=8$ total carriers, a maximum carrier utilization of 5.97 carriers for the high load operator ($N_{a} = 25$) and a minimum carrier utilization of 2.03 carriers for the low load operator ($N_{b} = 5$) (see Fig.~\ref{fig:Algo1HighIntfSur2Sur4}). It shows that the observed $\Delta K_{limit} \approx 2$\footnote{From the simulation, $\Delta K_{limit}$ is calculated by using the initial and final carrier utilizations of operator $O_a$ as $\left| 8/2 - 5.97\right|$, i.e., 1.97 (or, using operator $O_b$'s carrier utilizations, $\left| 8/2 - 2.03\right|$).} for the given pricing constants is in close agreement with the theoretical result.
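\noindent As a cross-check (our addition, under the same assumption that $\Delta K$ is a continuous variable), the stationary point of $f(\Delta K)$ in Eq.~\eqref{eq:GamePriceMaxProb} can also be written in closed form. Setting $\frac{d}{d\Delta K}\log \left( f(\Delta K)+1 \right)=0$ gives
\begin{equation*}
\frac{{{N}_{a}}}{1+2\frac{\Delta K}{K}}=\frac{{{N}_{b}}}{1-2\frac{\Delta K}{K}}
\quad \Rightarrow \quad
\Delta K_{limit}=\frac{K}{2}\cdot \frac{{{N}_{a}}-{{N}_{b}}}{{{N}_{a}}+{{N}_{b}}},
\end{equation*}
\noindent which for $K=8$, $\widehat{N}_{high}=25$ and $\widehat{N}_{low}=5$ evaluates to $\Delta K_{limit}\approx 2.67$, consistent with the value of 2.7 quoted above.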
\begin{figure}[h] \begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=1, trim = 24mm 7mm 31mm 12mm, clip]{file1.jpg} \caption{Graphical portrayal of Eq.~\eqref{eq:GamPricEq2a}} \label{fig:OptPricConsa} \end{subfigure} \begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=1, trim = 24mm 7mm 31mm 12mm, clip]{file2.jpg} \caption{Graphical portrayal of Eq.~\eqref{eq:GamPricEq2b}} \label{fig:OptPricConsb} \end{subfigure}\\ \begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=1, trim = 24mm 7mm 31mm 12mm, clip]{file3.jpg} \caption{Graphical portrayal of Eq.~\eqref{eq:GamPricEq2c}} \label{fig:OptPricConsc} \end{subfigure} \begin{subfigure}{.5\linewidth} \centering \includegraphics[scale=1, trim = 24mm 7mm 31mm 12mm, clip]{file4.jpg} \caption{Graphical portrayal of Eq.~\eqref{eq:GamPricEq2d}} \label{fig:OptPricConsd} \end{subfigure}\\ \begin{subfigure}{1\linewidth} \centering \includegraphics[scale=1, trim = 24mm 7mm 31mm 12mm, clip]{file5.jpg} \caption{Intersection region of Fig.~\subref{fig:OptPricConsa}-\subref{fig:OptPricConsd}} \label{fig:OptPricConse} \end{subfigure} \caption{Intersection region in Fig.~\subref{fig:OptPricConse} delineating pricing constants $p_1$ and $p_2$, obtained from Eq.~\eqref{eq:GamPricEq2} for the parameters, $\widehat{N}_{high}=25$, $\widehat{N}_{low}=5$ and $\Delta K_{limit}=2.7.$} \label{fig:OptPricCons} \end{figure} \clearpage \section{Repeated Games using Mutual History for Spectrum Sharing} \label{chap:GameExpectation} \noindent In this chapter, we propose a coordination protocol different from the one discussed in Chapter~\ref{chap:GamePrice}. In that chapter, we addressed the issue using a carrier price based utility function. However, there might be a situation where operators are reluctant to entertain the carrier pricing factor in their utility function, as it penalises them for their carrier usage. Hence, in this chapter, we address the same issue using a throughput-based utility function and devise a new strategic mechanism for contending for spectrum resources amongst the operators.\\ \noindent We propose a \textit{noncooperative repeated games} model based on the \textit{mutual history} of utility gains/losses. Operators estimate their utility gains/losses and compare them with their expected gains/losses from past games. In order to estimate gains/losses in downlink transmissions, for instance, the operator can ask its \acp{UE} to measure the carrier utilization and interference levels from the opponent \ac{BS} and report them to the serving \ac{BS}, which is a simple extension of \ac{LTE} handover measurements. Assuming that operators use the same \ac{RAT}, this should be possible. As a result, an operator may select a carrier depending on (i) the carrier utilization by the operator and the opponent, as well as (ii) the history of interactions (i.e., previous games). An operator may ask the opponent to do a favor, e.g., not to use a carrier. The operators keep track of the history of favors (utility gains) provided to each other, and use it as a safeguard against unfair treatment in the game. Finally, in Chapter~\ref{chap:Simulation}, the benefits of the algorithm are assessed in terms of rate distribution by comparison with the static allocation schemes and the cooperative solution. \subsection{System Description} \noindent The system description is similar to the one described in Section~\ref{sec:GamePriceSysDescription}.
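\noindent As an illustration of the protocol outlined above, the bookkeeping and the decision rule can be summarized in the following minimal Python sketch (our addition; the class and function names are ours, and the running-average form of the expected gain/loss anticipates the definitions given in the subsections below):
\begin{verbatim}
class History:
    """Per-operator bookkeeping of gains/losses for the mutual-history game."""
    def __init__(self, delta=1e-3):
        # seeded with a small delta, as required by the initialization step
        self.gains, self.losses = [delta], [delta]

    def expected_gain(self):
        return sum(self.gains) / len(self.gains)

    def expected_loss(self):
        return sum(self.losses) / len(self.losses)

def ask_favor(G_i, hist_i, L_mi, hist_mi, h_i, h_mi, S=2):
    """O_i asks a favor with immediate gain G_i; O_-i would incur loss L_mi.

    h_i / h_mi count favors granted so far by O_i and O_-i.  Returns True
    (and updates both histories) if the favor is granted.
    """
    if G_i <= hist_i.expected_loss():      # requester's own check
        return False
    if h_mi - h_i > S:                     # surplus-limit check
        return False
    if L_mi >= hist_mi.expected_gain():    # grantor's expected-gain check
        return False
    hist_i.gains.append(G_i)               # history updated at both ends
    hist_mi.losses.append(L_mi)
    return True
\end{verbatim}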
\subsection{System Model} \noindent To begin with, we consider a simple scenario, where the operators have equal rights to the shared part of the spectrum. It is in an operator's interest to demand spectrum resources when it experiences high load, in order to avoid congestion and blocking. Demands for additional spectrum resources are treated as spectrum usage favors. Favors are granted for a single time slot. Prior to asking for a favor, the operator evaluates its utility, and if the favor would result in a utility gain, it goes ahead with its request.\\ \begin{table}[h] \centering \caption{Carrier Utilization} {\begin{tabular}{ c c} \hline \textbf{Operator} ${O}_{i}: k_{i}$ & \textbf{Operator} ${O}_{-i}: k_{-i}$ \\ \hline 1 & 1 \\ 0 & 1 \\ 0 & 0 \\ 1 & 0 \\ \hline \end{tabular}} \label{tab:CarrUtil} \end{table} \noindent The carrier utilization between the two operators $O_{i}$ and $O_{-i}$ has four possible outcomes, shown in Tab.~\ref{tab:CarrUtil}. Based on the carrier utilization, the operators are always motivated to seek a favor in which they gain a carrier or ask the other to stop using one. However, the operators cannot selfishly switch on their unused carriers, nor dictate that the others switch off their interfering carriers. The operators follow a proper mechanism to reach a decision, which allows them to ask for favors based on their relative requirements according to the game model. Unlike the game model presented in Chapter~\ref{chap:GamePrice}, where strategies depend on the present utilities only, the following game model accounts for the history of utility gains/losses when evaluating carrier allocation strategies. \subsubsection{Utility Function} \noindent The utility function has already been discussed in Section~\ref{sec:GamePriceUtil} and the operators construct their utility function according to Eq.~\eqref{eq:UtilThroughput}. \subsubsection{Evaluation of Strategy: Decision Making Process} \label{sec:GameHisDecis} \noindent \textit{Strategic Decision Making} is a cognitive process which produces a final outcome through the selection of a course of action among several alternatives. Several factors can influence the decision making process, such as past experiences~\cite{JR05Juliusson}, cognitive biases~\cite{JR08Stanovich}, etc. However, the fundamental idea behind the decision making process (or selection of a strategy) is based on the players' rational outlook, available information and their past experiences. This theory is known as rational expectations~\cite{BK94Snowdon}, a widely studied hypothesis in the sphere of economics. In this algorithm, we aim to exploit the rational behaviour of the operators by devising a strategic mechanism based on their past experiences and current state, which is described next.\\ \noindent Operator $O_{i}$ evaluates its new carrier allocation strategy $s^{*}_{i}$ and the corresponding increase in utility, referred to as the immediate gain $G$, which is defined as \begin{equation*} {{G}_{i}}={{U}_{i}}\left( s_{i}^{*},{{s}_{-i}} \right)-{{U}_{i}}\left( {{s}_{i}},{{s}_{-i}} \right), \end{equation*} \noindent where $s_{i}$ and $s_{-i}$ are the existing carrier allocation strategy profiles of operators $O_{i}$ and $O_{-i}$ respectively. Operator $O_{i}$ compares gain $G_{i}$ with its expected loss $\widehat L_{i}$ over the previous games.
The expected loss is an estimate of the future losses that operator $O_{i}$ will incur by sharing its spectrum resources with the other. The expected loss is calculated by averaging the past losses it has incurred by giving favors (say, $h_{-i}$ favors) to the other operator $O_{-i}$ during the game $\mathcal{G}$, \begin{equation*} \widehat {{L}_{i}}=\frac{\sum\limits_{h=1}^{{{h}_{-i}}}{{{L}_{i,h}}}}{{{h}_{-i}}}, \end{equation*} \noindent where the sum collects the past losses operator $O_{i}$ has incurred at the hands of operator $O_{-i}$ by giving it $h_{-i}$ spectrum usage favors through their mutual interactions. If operator $O_{i}$ finds its immediate gain $G_i$ larger than its expected loss $\widehat {{L}_{i}}$, i.e., \begin{equation*} {{G}_{i}}>\widehat {{L}_{i}}, \end{equation*} \noindent operator $O_{i}$ asks operator $O_{-i}$ for a favor in order to pursue its newly evaluated carrier allocation strategy $s^{*}_{i}$. Now, operator $O_{-i}$ estimates its new immediate loss $L_{-i}$ for the requested carrier allocation strategy $s^{*}_{i}$, which is calculated as \begin{equation*} {{L}_{-i}}={{U}_{-i}}\left( s_{i}^{*},{{s}_{-i}} \right)-{{U}_{-i}}\left( {{s}_{i}},{{s}_{-i}} \right), \end{equation*} \noindent and compares it with its expected gain $\widehat G_{-i}$ over the previous games. The expected gain is an estimate of the future gains operator $O_{-i}$ will obtain by receiving spectrum usage favors from the other. The expected gain is calculated by averaging the past gains it has collected during the game $\mathcal{G}$, \begin{equation*} \widehat G_{-i}=\frac{\sum\limits_{h=1}^{{{h}_{-i}}}{{{G}_{-i,h}}}}{{{h}_{-i}}}, \end{equation*} \noindent where the sum collects the past gains operator $O_{-i}$ has accumulated over the $h_{-i}$ spectrum usage favors received from operator $O_{i}$ through their mutual interactions. If operator $O_{-i}$ finds its immediate loss $L_{-i}$ smaller than its expected gain $\widehat G_{-i}$, i.e., \begin{equation*} {{L}_{-i}}<\widehat G_{-i}, \end{equation*} \noindent operator $O_{-i}$ grants the favor and strategy $s^{*}_{i}$ comes into existence, i.e., $s^{*}_{i} \to s_{i}$, leading to an equilibrium of the appropriate type. The history is updated at both ends, recording the respective immediate gain/loss. The game is played in repeated rounds, and in the next time slot the operator(s) again contest for resources at random. In every case, a decision is made based on the present gain/loss and the history of previous games. Further, the operators agree on a limit that restricts the maximum allowable number of outstanding spectrum usage favors (the surplus limit $S$) and helps mediate favors through normative pressure to reciprocate. \subsubsection{Constraints: Initialization and Spectrum Usage Favors} \label{sec:GameExpectationCons} \paragraph{Initialization} \noindent\\ \\ The operation of the distributed algorithm requires an initialization. During initialization, the operators at the beginning of the game contend for spectrum resources with a small load factor and register small, approximately equal gains/losses $\delta$. This is necessary because it makes an operator more liberal in asking for favors and, in contrast, equally conservative in granting them. This behavior enables operators to respond to asymmetric load conditions. This is illustrated in the mathematical analysis of Section~\ref{sec:GamHisRepeatedGame}.
\begin{equation*} \widehat L_{i}\approx \widehat G_{i}\approx \widehat L_{-i}\approx \widehat G_{-i} \approx \delta \text{ at time }t\to 0. \end{equation*} \noindent \paragraph{Spectrum Usage Favors} \noindent\\ \\ The discussion related to the spectrum usage favors and the surplus limit behavior has been presented in Section~\ref{sec:GamePriceFavors}. \subsection{Proposed Algorithm II} The proposed algorithm in the form of pseudocode is summarised below, \begin{algorithm} [H] \renewcommand\thealgorithm{} \caption{Repeated Games Model using Mutual History for Inter-operator Spectrum Sharing} \begin{algorithmic}[1] \STATE Operator $O_{i}$, where $i \in \mathcal{I}$, analyses strategy $s$ by switching on carrier $k_{i}$ or removing interfering carrier $k_{-i}$. Calculates immediate utility gain $G_{i,s}$ and compares it with expected utility loss $\widehat L_{i} =\frac{\sum\limits_{h=1}^{{{h}_{-i}}}{{{L}_{i,h}}}}{{{h}_{-i}}} $. \STATEx \textbf{if} {$G_{i,s} > \widehat L_{i}$} \textbf{then} \STATE \quad Operator $O_{-i}$ compares its outstanding favors with surplus $S$. \STATEx \quad\textbf{if} {$h_{-i}-h_{i} \le S$} \textbf{then} \STATE \quad\quad\begin{varwidth}[t]{\linewidth} Operator $O_{-i}$ compares immediate utility loss $L_{-i,s}$ for strategy $s$ with ex- \\pected utility gain $\widehat G_{-i} =\frac{\sum\limits_{h=1}^{{{h}_{-i}}}{{{G}_{-i,h}}}}{{{h}_{-i}}} $. \end{varwidth} \STATEx \quad\quad\textbf{if} {$L_{-i,s} < \widehat G_{-i}$} \textbf{then} \STATE \quad\quad\quad Strategy $s$ is accepted. \STATE \quad\quad\quad Operator $O_{i}$ records $G_{i,s}$. \STATE \quad\quad\quad Operator $O_{-i}$ records $L_{-i,s}$. \STATE \quad\quad\quad Favors are updated: ${{h}_{-i}}\to {{h}_{-i}}+1$. \STATE \quad\quad\textbf{end if} \STATE \quad\textbf{end if} \STATE \textbf{end if} \end{algorithmic} \end{algorithm} \subsection{Mathematical Analysis} \label{sec:GameExpectMath} \noindent In this section, we analyze the algorithm mathematically, and provide the theoretical results. For the analysis, we consider the same assumptions made in Section~\ref{sec:CoopMath}. \subsubsection{Orthogonal Spectrum Sharing} \noindent Initially, the operators were sharing the spectrum in an orthogonal manner, i.e., each operator was allocated a fixed, non-overlapping set of $K/2$ carriers. The orthogonal utilities of operators $O_{a}$ and $O_{b}$, i.e., $U_{o,a}$ and $U_{o,b}$, are described in Eq.~\eqref{eq:CoopOrthoUtilA} and~\eqref{eq:CoopOrthoUtilB} respectively. The operators play repeated games in order to improve their currently under-achieved utilities, as described in the next section. \subsubsection{Repeated Games based Spectrum Sharing} \label{sec:GamHisRepeatedGame} We begin the analysis by defining the following symbols for $i$ = $a$, $b$,\\ \noindent $G_{i}^{t}$, immediate gain of operator $O_{i}$ at time slot $t$,\\ $L_{i}^{t}$, immediate loss of operator $O_{i}$ at time slot $t$,\\ $\widehat G_{i}^{t}$, expected gain of operator $O_{i}$ till time slot $t$,\\ $\widehat L_{i}^{t}$, expected loss of operator $O_{i}$ till time slot $t$. \paragraph{Game Initialization} \noindent\\\\ The game needs to be initialized with a small load. Assume operators $O_{a}$ and $O_{b}$ have begun with a load of one user each. With equal loads, both operators play the game for some time and generate approximately equal and small gains/losses.
Thus, their expected gains/losses at time slot $t$ are assumed to satisfy \begin{equation} \label{eq:GainLosInit} \widehat{L}_{a}^{t}\approx \widehat{G}_{a}^{t}\approx \widehat{L}_{b}^{t}\approx \widehat{G}_{b}^{t}\approx \delta. \end{equation} \paragraph{Game Beginning} \noindent\\\\ Consider that operator $O_{a}$ is experiencing a high load while operator $O_{b}$ has a low load, i.e., ${{N}_{a}}>{{N}_{b}}$. Now, two cases arise: either high load operator $O_{a}$ receives more spectrum usage favors in the form of component carriers from low load operator $O_{b}$, or vice versa, over the next time slot $t+1$. Let us assess the likelihood of each case.\\ \noindent \textit{Case I}: High load operator $O_{a}$ receives more spectrum usage favor(s) from low load operator $O_{b}$ at time slot $t+1$, \begin{subequations} \label{eq:BeginGameCaseI} \begin{align} G_{a}^{t+1} & > \widehat L_{a}^{t},\\ L_{b}^{t+1} & <\widehat G_{b}^{t}. \end{align} \end{subequations} \noindent \textit{Case II}: Low load operator $O_{b}$ gets more favor(s) from high load operator $O_{a}$ at time slot $t+1$, \begin{subequations} \label{eq:BeginGameCaseII} \begin{align} G_{b}^{t+1} & >\widehat L_{b}^{t},\\ L_{a}^{t+1} & < \widehat G_{a}^{t}. \end{align} \end{subequations} \noindent In order to assess the likelihood of these cases, we need to know how $G_{a}$ or $G_{b}$ increases and how $L_{a}$ or $L_{b}$ decreases in comparison to the expected gain/loss, i.e., $\delta$ (from Eq.~\eqref{eq:GainLosInit}). For this, let us compute generalized expressions for both the immediate gain ($G$) and the immediate loss ($L$).\\ \noindent Let us assume an operator gets spectrum resources from another operator\footnote{We assume complete carrier transference is equivalent to an even number of favors.}; then the immediate gain can be calculated by subtracting the operator's past utility from its present utility, accordingly, \begin{equation*} {{G}^{t+1}}=N\log \left( \frac{{{k}^{t+1}}}{N}{{\log }_{2}}\left( 1+\gamma \right) \right)-N\log \left( \frac{{{k}^{t}}}{N}{{\log }_{2}}\left( 1+\gamma \right) \right), \end{equation*} \noindent where ${{k}^{t}}$ is the past orthogonal carrier allocation, ${{k}^{t+1}}$ is the present orthogonal carrier allocation (s.t., ${{k}^{t+1}}>{{k}^{t}}$), $\gamma$ is the SNR of the component carriers and $N$ is the load of the operator. Simplifying, \begin{equation} \label{eq:IGGen} {{G}^{t+1}}=N\log \left( \frac{{{k}^{t+1}}}{{{k}^{t}}} \right). \end{equation} \noindent Similarly, the immediate loss for an operator (with ${{k}^{t+1}}<{{k}^{t}}$) can be shown as \begin{equation} \label{eq:ILGen} {{L}^{t+1}}=N\log \left( \frac{{{k}^{t}}}{{{k}^{t+1}}} \right). \end{equation} \noindent It can be observed from Eq.~\eqref{eq:IGGen} and~\eqref{eq:ILGen} that both ${{G}^{t+1}}$ and ${{L}^{t+1}}$ grow with the load $N$. With $N_{a} > N_{b}$, using the initial orthogonal utilities $U_{o,a}$ and $U_{o,b}$ as past utilities, i.e., $k_{a}^{t}=k_{b}^{t}={K}/{2}$, $k_{a}^{t+1}=({K}/{2}) + x$ and $k_{b}^{t+1}=({K}/{2}) - x$, where $0<x<{K}/{2}$, it follows that \begin{equation} \label{eq:IGILProb} G_{a}^{t+1}>L_{b}^{t+1}. \end{equation} \noindent Even though ${k_{b}^{t}}/{k_{b}^{t+1}}$ is slightly larger than ${k_{a}^{t+1}}/{k_{a}^{t}}$, the load's influence is far more pronounced, i.e., due to the larger $N_{a}$ in comparison to $N_{b}$, $G_{a}$ comes out larger. A numerical example may clarify the mechanics of the presented analysis.
Using the simulation input parameters $K=8$, $N_{a}=25$ and $N_{b}=5$, the transfer of 1 carrier ($x=1$) from low load operator $O_{b}$ to high load operator $O_{a}$, starting from an initial equal orthogonal carrier allocation, yields $\text{log}_{10}(k_{a}^{t+1}/k_{a}^{t})=0.097$, which is lower than $\text{log}_{10}(k_b^{t}/k_{b}^{t+1})= 0.125$; in contrast, $G_{a}^{t+1}=2.423$ exceeds $L_{b}^{t+1}=0.624$.\\ \noindent From Eq.~\eqref{eq:GainLosInit},~\eqref{eq:BeginGameCaseI} and~\eqref{eq:IGILProb}, we get, \begin{equation*} G_{a}^{t+1}>\delta>L_{b}^{t+1}, \end{equation*} \noindent which demonstrates that \textit{Case I} is more likely to prevail over \textit{Case II}. Therefore, at the beginning of the game, the high load operator is favored over the low load operator for spectrum usage favors, i.e., operator $O_{a}$ starts gaining carriers. \paragraph{Game Progression} \noindent\\\\ If, for the time being, the load conditions remain the same, i.e., operator $O_{a}$ continues to bear a high load while operator $O_{b}$ has a low load, then in due course it becomes highly unlikely that low load operator $O_{b}$ will rent out further carriers. This is because each additional transfer increases its immediate loss ${{L}^{t+1}}$ substantially relative to the expected gain $\widehat G^{t}$ (see Eq.~\eqref{eq:ILGen}; ${{\lim }_{{{k}^{t+1}}\to 0}}{{L}^{t+1}}=\infty $). As a result, the carrier transfer ceases at the point where ${{L}^{t+1}}<\widehat G^{t}$ can no longer be satisfied. However, the number of carriers transferred to the high load operator by the low load operator does not guarantee an optimal carrier allocation. Therefore, we require a constraint in the form of a surplus limit which caps the carrier transference, resulting in an approximately optimal carrier allocation. \paragraph{Game: Role of Surplus} \noindent\\\\ Let us assume that, at the end of the game, operator $O_{a}$ has collected $\Delta K$ additional carriers from operator $O_{b}$ relative to its initial orthogonal carrier allocation. The value of $\Delta K$ for which the game yields approximately the best sum-utility for the operators is given by Eq.~\eqref{eq:GamePriceMaxProb}, and its solution is exactly the $\Delta K_{limit}$ described for that equation. By appropriately selecting the surplus limit $S$\footnote{By definition, the surplus limit is expressed in terms of favors (see Section~\ref{sec:GameExpectationCons}). However, it is possible to translate the surplus limit into a carrier limit, because $S_{favors} = 2S_{carriers}$.}, we can cap the carrier transference at $\Delta K_{limit}$. However, $\Delta K_{limit}$ changes with temporal load variations because the operators' loads $N_{a}$ and $N_{b}$ are not fixed. Besides, the game is noncooperative, and the operators are not allowed to convey their load-related information to each other. Therefore, at the beginning the operators can optimize $\Delta K_{limit}$ based on load estimates (commonly observed average, maximum, or minimum load, etc.) and fix the surplus limit parameter accordingly.\\ \noindent In the simulation, we have used $K=8$ (\acp{PCC} = 2, \acp{SCC} = 6), ${{N}_{a}}=25$, ${{N}_{b}}=5$ and the users are uniformly distributed in the operator's access area.
We observed $\Delta {{K}_{limit }}=2$\footnote{See Section~\ref{sec:Algo2Analysis}, for $S$ = 4, $O_{a}$'s \acp{SCC} = 5, $O_{b}$'s \ac{SCC} = 1 and both have a single \ac{PCC}; therefore, $\Delta K_{limit}$ = $\left| {8}/{2} - (5+1)\right|$ or $\left| {8}/{2} - (1+1)\right|$.} (for surplus limit $S=4$), which is close to the cooperative solution. For the same inputs, the theoretical analysis, in which several simplifying assumptions are made, gives $\Delta {{K}_{limit }}=2.7$, which lies close to the simulated result. \paragraph{Game Reversal} \noindent\\\\ In this section, we would like to show that with a change in the relative load status, the game behaviour also changes, i.e., the new high load operator starts collecting spectrum usage favors from the low load operator. Here, we discuss two cases: one in which the new low load (previously high load) operator gets more favor(s) from the other, and the reverse. We provide suggestive evidence that the first case becomes practically impossible, whereas the second case is plausible, with a dependency on the input specifications.\\ \noindent Proceeding with the analysis, we first compute the expected gain/loss for operators $O_{a}$ and $O_{b}$ up to time slot $t+1$. Denoting the operators' previous load status by $N_{a}^{-}$ and $N_{b}^{-}$, s.t., $N_{a}^{-} > N_{b}^{-}$\footnote{\textquoteleft -\textquoteright~denotes the past load status.}, and assuming $x$ component carriers had been transferred to operator $O_{a}$ by operator $O_{b}$ during time slot $t+1$, s.t., $x\le \Delta K_{limit}$, we calculate, \begin{equation} \label{eq:GamRevMeanLG} \widehat L_{a}^{t+1}\approx \widehat G_{b}^{t+1}\approx \delta, \end{equation} \begin{equation*} \widehat G_{a}^{t+1}=\frac{h_{a} \delta +G_{a}^{t+1}}{h_{a}+1}, \end{equation*} \begin{equation*} \widehat L_{b}^{t+1}=\frac{h_{a} \delta +L_{b}^{t+1}}{h_{a}+1}, \end{equation*} \noindent where $h_{a}$ is the number of times operator $O_{a}$ had received favors from the other during the game initialization. $G_{a}^{t+1}$ and $L_{b}^{t+1}$ can be calculated using Eq.~\eqref{eq:IGGen} and~\eqref{eq:ILGen} with carrier allocation status $k_a^{t+1}={K}/{2}+x$ and $k_b^{t+1}={K}/{2} - x$ at time slot $t+1$. Therefore, $\widehat G_{a}^{t+1}$ and $\widehat L_{b}^{t+1}$ can be rewritten as \begin{equation} \label{eq:EGt2} \widehat G_{a}^{t+1}=\frac{h_{a} \delta+{{N}_{a}^{-}}\log \left( \frac{\frac{K}{2}+x}{\frac{K}{2}} \right)}{h_{a}+1}, \end{equation} \begin{equation} \label{eq:ELt2} \widehat L_{b}^{t+1}=\frac{h_{a} \delta +{{N}_{b}^{-}}\log \left( \frac{\frac{K}{2}}{\frac{K}{2}-x} \right)}{h_{a}+1}. \end{equation} \noindent Let us assume that at time slot $t+2$, operator $O_{b}$ experiences a relatively higher load than operator $O_{a}$, i.e., $N_{a} < N_{b}$. The operators play the game and ask each other for $y$ additional carriers. Again two cases arise: either low load operator $O_{a}$ gets resources, or high load operator $O_{b}$ does.\\ \noindent \textit{Case I}: Low load operator $O_{a}$ receives more favor(s) from high load operator $O_{b}$ at time slot $t+2$, \begin{subequations} \label{eq:GameRevCase1} \begin{align} G_{a}^{t+2} & > \widehat L_{a}^{t+1},\\ L_{b}^{t+2} & < \widehat G_{b}^{t+1}.
\end{align} \end{subequations} \noindent Calculating $G_{a}^{t+2}$ and $L_{b}^{t+2}$ using Eq.~\eqref{eq:IGGen} and~\eqref{eq:ILGen}, we get, \begin{equation} \label{eq:GamRevLowLoadImGain} G_{a}^{t+2}={{N}_{a}}\log \left( \frac{\frac{K}{2}+x+y}{\frac{K}{2}+x} \right), \end{equation} \begin{equation} \label{eq:GamRevLowLoadImLoss} L_{b}^{t+2}={{N}_{b}}\log \left( \frac{\frac{K}{2}-x}{\frac{K}{2}-x-y} \right). \end{equation} \noindent Let us evaluate the feasibility of \textit{Case I}. Using Eq.~\eqref{eq:GamRevMeanLG}, we rewrite Eq.~\eqref{eq:GameRevCase1} as \begin{equation} \label{eq:GamRevLowLoadGen} G_{a}^{t+2}> \delta > L_{b}^{t+2}. \end{equation} \noindent Using the results of Eq.~\eqref{eq:GamRevLowLoadImGain} and~\eqref{eq:GamRevLowLoadImLoss} in Eq.~\eqref{eq:GamRevLowLoadGen}, we get, \begin{equation} \label{eq:GamRevLowLoad} {{\left( \frac{\frac{K}{2}+x+y}{\frac{K}{2}+x} \right)}^{{{N}_{a}}}}>{{\left( \frac{\frac{K}{2}-x}{\frac{K}{2}-x-y} \right)}^{{{N}_{b}}}}. \end{equation} \noindent With $x>0$, $y>0$, $x+y<\frac{K}{2}$, and ${{N}_{a}}<{{N}_{b}}$, Eq.~\eqref{eq:GamRevLowLoad} is not satisfied and \textit{Case I} becomes practically impossible.\\ \noindent \textit{Case II}: High load operator $O_{b}$ receives more favor(s) from low load operator $O_{a}$ at time slot $t+2$, \begin{equation*} \begin{aligned} G_{b}^{t+2} & >\widehat L_{b}^{t+1},\\ L_{a}^{t+2} & <\widehat G_{a}^{t+1}. \end{aligned} \end{equation*} \noindent Calculating $G_{b}^{t+2}$ and $L_{a}^{t+2}$ using Eq.~\eqref{eq:IGGen} and~\eqref{eq:ILGen}, we get, \begin{equation} \label{eq:GamRevHigLoadImGain} G_{b}^{t+2}={{N}_{b}}\log \left( \frac{\frac{K}{2}-x+y}{\frac{K}{2}-x} \right), \end{equation} \begin{equation} \label{eq:GamRevHigLoadImLoss} L_{a}^{t+2}={{N}_{a}}\log \left( \frac{\frac{K}{2}+x}{\frac{K}{2}+x-y} \right). \end{equation} \noindent Comparing $G_{b}^{t+2}$ with $\widehat L_{b}^{t+1}$ (see Eq.~\eqref{eq:GamRevHigLoadImGain} and~\eqref{eq:ELt2}), there is a good chance that $G_{b}^{t+2}$ exceeds $\widehat L_{b}^{t+1}$, because for some values of $N_{b}$ and $y$ it is possible to have \begin{equation*} {{N}_{b}}\log \left( \frac{\frac{K}{2}-x+y}{\frac{K}{2}-x} \right) > \delta + \frac{N_{b}^{-}}{h_{a}+1}\log \left( \frac{\frac{K}{2}}{\frac{K}{2}-x} \right), \end{equation*} \noindent where $N_{b}>N_{a}$, $N_{b}^{-}<N_{a}^{-}$, $0<x<K/2$, $0<y<(K/2) + x$ and $h_{a} > 0$.\\ \noindent Similarly, there is room for the possibility that $\widehat G_{a}^{t+1}$ exceeds $L_{a}^{t+2}$, i.e., \begin{equation*} \delta + \frac{N_{a}^{-}}{h_{a}+1}\log \left( \frac{\frac{K}{2}+x}{\frac{K}{2}} \right) > {{N}_{a}}\log \left( \frac{\frac{K}{2}+x}{\frac{K}{2}+x-y} \right) \end{equation*} \noindent for the same inputs.\\ \noindent From the analysis, it can be inferred that the noncooperative spectrum sharing games between the competing operators favor the heavily loaded operator in terms of spectrum usage favors and improve the spectrum efficiency of the network operators. \clearpage \section{Simulation Results and Analysis} \label{chap:Simulation} \noindent In this chapter, several system level simulation results of the proposed schemes in Chapters~\ref{chap:GamePrice} and~\ref{chap:GameExpectation} are presented. The \ac{DSA} techniques for inter-operator spectrum sharing are analyzed and compared against the static allocation schemes - orthogonal and full spread spectrum sharing - under varying interference conditions.
In addition, as a baseline for comparison, the simulation results are also compared with the Pareto optimal cooperative schemes. We have used Monte Carlo methods for the simulations. \subsection{Simulation Scenario} \label{sec:SimSce} \noindent We consider a small cell \ac{LTE} based network comprising 2 operators, each having 2 \acp{BS} and a Poisson distributed load with a mean of 25 and 5 users respectively in their given access areas. The \acp{BS} are deployed in a single storey building separated by walls and the users are uniformly distributed within the operator's access area. The \acp{BS}' locations and coverage areas are illustrated in Fig.~\ref{fig:BuildingLayout} and Tab.~\ref{tab:SimParam}.\\ \noindent The total bandwidth of the system comprising operators $O_{a}$ and $O_{b}$ is equally divided into 8 component carriers. The centre frequencies of the operators need not be adjacent. For example, if we assume that the \acp{UE} are \ac{LTE} Release-10 \acp{UE}, then carrier aggregation can be applied to serve the \acp{UE} in the shared frequency sub-band even if the operators' bandwidth is not contiguous. The additive white Gaussian noise (AWGN) for each component carrier is kept constant, $N_{o}$. Downlink power control is not exercised. The available power budget is divided equally among the used carriers for downlink transmissions. The transmitting \acp{BS} are full buffer in that they always have data to send. \\ \noindent Regarding the channel modelling, the signal power attenuates according to a power law model for distance-based path loss, i.e., $C{d^{-A}}$ with path loss exponent $A$ = 3.6, attenuation constant $C$ = 1e-4 and distance $d$ between the \ac{BS} and the \ac{UE}. For the sake of simplicity, we consider neither shadow fading, frequency selective fading, nor other indoor channel models, e.g., WINNER II, because the protocol's behaviour is independent of the fading or attenuation characterization. The degree of spectrum sharing depends on the inter-operator interference. Therefore, to model different interference environments, the wall attenuation between the neighbouring \acp{BS} is allowed to change. Details of the system parameters are given in Tab.~\ref{tab:SimParam}.
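\noindent For concreteness, the following Python sketch (our illustration; the function names are ours, and the interfering \ac{BS} is assumed to split its power over the same number of carriers) shows how a user's per-carrier rate is computed from the path loss model above and the link-level parameters of Tab.~\ref{tab:SimParam}:
\begin{verbatim}
import math

def path_loss_db(d, wall_db=0.0, A=3.6, C=1e-4):
    # PL[dB] = A*10*log10(d) + 10*log10(1/C) + W
    return A * 10 * math.log10(d) + 10 * math.log10(1.0 / C) + wall_db

def carrier_rate_bps(d_serv, d_interf, n_carriers, shared=True, wall_db=0.0,
                     p_bs_dbm=30.0, bw_hz=12.5e6, bw_eff=0.56,
                     noise_dbm_hz=-174.0, noise_fig_db=15.0):
    """Per-carrier rate r = BW_eff * BW * log2(1 + SINR) for one user."""
    p_car_dbm = p_bs_dbm - 10 * math.log10(n_carriers)   # equal power split
    s_mw = 10 ** ((p_car_dbm - path_loss_db(d_serv)) / 10)
    n_mw = 10 ** ((noise_dbm_hz + 10 * math.log10(bw_hz) + noise_fig_db) / 10)
    i_mw = 0.0
    if shared:  # co-channel BS of the other operator, behind a wall
        i_mw = 10 ** ((p_car_dbm - path_loss_db(d_interf, wall_db)) / 10)
    return bw_eff * bw_hz * math.log2(1 + s_mw / (i_mw + n_mw))
\end{verbatim}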
\begin{figure}[H] \begin{subfigure}{1\textwidth} \centering \includegraphics[scale=0.226, trim = 0mm 0mm 0mm 0mm, clip]{BuildingScenario.png} \caption{Multi-operator scenario in an office building} \end{subfigure} \\ \begin{subfigure}{1\textwidth} \centering \includegraphics[scale=1, trim = 0mm 0mm 0mm 0mm, clip]{BuildingLayout.png} \caption{Single floor layout of an office building} \end{subfigure} \caption{Indoor inter-operator deployment scenario} \label{fig:BuildingLayout} \end{figure} \begin{table}[h] \centering \caption{Simulation Parameters} {\begin{tabular}{ |p{7.0cm}|p{7.0cm}|} \hline \multicolumn{2}{|c|}{\bf{System Model}} \\ \hline Carrier frequency & 2.6 [GHz] \\ Carrier bandwidth & 12.5 [MHz] \\ Total component carriers & 8\\ Primary component carriers (\acp{PCC}) & 2\\ Secondary component carriers (\acp{SCC}) & 6\\ BS transmit power & 30 [dBm]\\ Antenna patterns & Omni directional \\ Noise figure & 15 [dB]\\ Thermal noise power density & -174 [dBm/Hz]\\ \hline \multicolumn{2}{|c|}{\bf{Path Loss Model}} \\ \hline Power law path loss model & $PL\text{ [dB]}=A*10{{\log }_{10}}\left( d [m] \right)+10{{\log }_{10}}\left( \frac{1}{C} \right)+W$\\ Path loss coefficients & $A = 3.6$\\ & $C=\text{1e-4}$\\ Wall attenuation ($W$) & 0 [dB] (High interference scenario)\\ & 10 [dB] (Low interference scenario)\\ \hline \multicolumn{2}{|c|}{\bf{Scenario Model}} \\ \hline Number of operators & 2\\ Number of BSs/operator & 2\\ Number of buildings & 1\\ Number of floors/building & 1\\ Number of rooms/floor & 4\\ Number of BSs/room & 1\\ \hline \multicolumn{2}{|c|}{\bf{Traffic Model}} \\ \hline Number of UEs/operator & Poisson distributed load with mean 25 or 5\\ UEs distribution & Uniformly distributed\\ \hline \multicolumn{2}{|c|}{\bf{Link Level Model}} \\ \hline Spectral efficiency & $r = \text{BW}_{\text{eff}}*\text{BW}*\text{log}_{2}\left(1+{\text{SINR}} \right)$ \\ Bandwidth efficiency & $\text{BW}_{\text{eff}} = .56$\\ \hline \multicolumn{2}{|c|}{\bf{Algorithm Parameters}} \\ \hline Maximum outstanding favors or surplus ($S$) & 2, 4\\ \hline \end{tabular}} \label{tab:SimParam} \end{table} \subsection{Performance Evaluation} \label{sec:PerfEval} \noindent The results are presented for 1000 random network instantiations, generated according to the aforementioned parameters. The scheduling weight per carrier $w_{i,j,k}$ (Eq.~\eqref{eq:GamPricThroughput}) is fixed and inversely proportional to the serving \ac{BS}'s load. The \acp{BS} within an operator use the same component carriers; however, the \acp{BS} of different operators can have different carrier allocations. For each deployment, the repeated games are allowed to run for 30 iterations. The operators' favors (see Sections~\ref{sec:GamePriceFavors} and~\ref{sec:GameExpectationCons}) and gains/losses (only in the case of Algorithm II, see Section~\ref{sec:GameHisDecis}) are recorded at the end of the game sequences and fed to the next deployment. \\ \noindent The data rates experienced by the individual users are tracked after each deployment and collected over 1000 deployments. A histogram is used to generate the user rate probability distribution over all realizations, which is then used to plot the user rate \acp{CDF}. The user rate \acp{CDF} are plotted for operators $O_{a}$ and $O_{b}$ with respective Poisson distributed mean loads of 25 and 5 users. Load reversal cases have also been considered to assess the performance under varying load conditions.
The user rate \acp{CDF} are plotted for 2000 instantiations, where halfway through the simulation the loads are reversed with the same Poisson distributed mean load, i.e., for the first 1000 deployments the loads of operators $O_{a}$ and $O_{b}$ are 25 and 5 users respectively, and during the latter half the respective loads are 5 and 25 users. The effect of temporal load variations depicted in the simulations reflects the practical relevance of the scenarios. Besides, the effect of the surplus $S$ has also been analyzed for the mentioned plots. Two cases with different interference conditions are taken into consideration - \begin{enumerate} \item High interference scenario (with wall loss of 0 dB), \item Low interference scenario (with wall loss of 10 dB). \end{enumerate} \noindent In the following, we present the analysis of both algorithms, which achieve roughly the same outcomes. \subsubsection{Algorithm I Analysis} \label{sec:Algo1Analysis} Algorithm I considers a carrier pricing based utility function within the repeated games framework. The operators pay a penalty on their carrier usage, which forces them to share the bandwidth resources according to their relative needs. In the simulation, the pricing constants $p_1$ and $p_2$ are set to 7 and 0.8 respectively, according to the optimization criteria discussed in Section~\ref{sec:GamPriceOpt}. Fig.~\ref{fig:Algo1HighIntfSur2} shows the user rate \acp{CDF} for the two operators $O_{a}$ and $O_{b}$ in a high interference scenario (wall loss, 0 dB). The maximum number of outstanding favors is set to $S = 2$. It can be seen that, with dynamic spectrum sharing, high load operator $O_{a}$ is able to improve its delivered throughput in comparison with the static orthogonal allocation. On the other hand, low load operator $O_{b}$'s throughput falls, but operators do not mind sacrificing their resources during low load conditions if they anticipate benefits under more demanding circumstances. This behaviour is captured in Fig.~\ref{fig:Algo1HighIntfSur2LoadRev} when the load gets reversed. The figure shows the user rate distribution for operator $O_{a}$ spanning the temporal load variations, combining its initial high load (25 users) and later low load (5 users) instances. In the figure, it is observable that operator $O_{a}$'s delivered throughput has improved over time in comparison to orthogonal sharing, even though it sacrificed spectrum resources later, when its load was sparse. Similar behaviour has been observed for operator $O_{b}$ during the simulation because, on average, the loads are the same over time for both operators in the load reversal scenario. \begin{figure}[H] \centering \includegraphics[scale=.73, trim = 1mm 0.6mm 9mm 6.5mm, clip]{Algo1HighIntfSur2.jpg} \caption{Rate distribution for operator $O_{a}$, $N_{a} = 25$ users and operator $O_{b}$, $N_{b} = 5$ users using the traditional orthogonal and full spread spectrum sharing, cooperative algorithm and proposed scheme based on Algorithm I, in a high interference environment (wall loss, 0 dB).
The maximum number of outstanding favors, $S = 2$.} \label{fig:Algo1HighIntfSur2} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=.73, trim = 1mm 0.6mm 9mm 6.5mm, clip]{Algo1HighIntfSur2LoadRev.jpg} \caption{Rate distribution for operator $O_{a}$ with temporal load variations using the traditional orthogonal and full spread spectrum sharing, cooperative algorithm and proposed scheme based on Algorithm I, in a high interference environment (wall loss, 0 dB). The maximum number of outstanding favors, $S = 2$.} \label{fig:Algo1HighIntfSur2LoadRev} \end{figure} \noindent In the game, the surplus parameter plays a crucial role in controlling the trading of spectrum resources. It limits the number of spectrum usage favors the operators give to each other and ensures that the sacrificing operator retains an adequate amount of spectrum for its own operation while dynamically sharing the spectrum. In Fig.~\ref{fig:Algo1HighIntfSur2Sur4}, the effect of the surplus limit on the user rate distributions of high load operator $O_{a}$ is analyzed. In the simulation, the average carrier utilizations in the \acp{SCC} (6 carriers) using a surplus limit of 2 are observed as 4.02 for high load operator $O_{a}$ and 1.99 for low load operator $O_{b}$, whereas with surplus limit 4, the average carrier utilizations are 4.97 and 1.03 respectively. It indicates that, as the surplus limit is relaxed, the operators exploit the frequency-domain degrees of freedom more and the throughput gain approaches the efficient cooperative solution. The operators have 6 \acp{SCC}, which means the maximal surplus limit can be set to 6. Nevertheless, it has been observed that a large surplus limit becomes redundant once the allocation reaches the approximate cooperative solution, i.e., surplus limits of 4 to 6 give almost the same performance. The reason is that the carrier pricing component in the utility function keeps the carrier allocations (of both operators) optimal if the surplus limit is larger than the optimal (cooperative) carrier allocation of the high load operator. However, the high load operator's carrier allocation reduces to the surplus limit if the surplus limit is lower than its optimal carrier allocation. \begin{figure}[H] \centering \includegraphics[scale=.73, trim = 1mm 0.6mm 9mm 6.5mm, clip]{Algo1HighIntfSur2Sur4.jpg} \caption{Rate distribution for operator $O_{a}$, $N_{a} = 25$ users using the cooperative algorithm and proposed scheme based on Algorithm I, in a high interference environment (wall loss, 0 dB). The maximum number of outstanding favors is varied and rate curves are analysed for, $S = 2$ and 4.} \label{fig:Algo1HighIntfSur2Sur4} \end{figure} \noindent The algorithm's efficiency is also tested in a low interference environment, where interference is suppressed by increasing the wall attenuation between the \acp{BS} to 10 dB. The analysis is much the same as discussed above for the high interference scenario. The only difference is that the operators now have a higher degree of carrier overlap. Briefly, Fig.~\ref{fig:Algo1LowIntfSur2} shows the user rate distributions for operators $O_{a}$ and $O_{b}$. In the figure, high load operator $O_{a}$ gathers more spectrum resources than low load operator $O_{b}$, in accordance with their load conditions.
Similarly, Fig.~\ref{fig:Algo1LowIntfSur2LoadRev} shows the user rate distribution for operator $O_{a}$ over time when it had a high load initially and later a low load. The figure shows that the operators are able to improve their throughput over time by dynamically sharing the spectrum, in comparison to the static allocations. The effect of the surplus is captured in Fig.~\ref{fig:Algo1LowIntfSur2Sur4}. In the simulation, the average carrier utilizations in \acp{SCC} with surplus limit 2 are observed as 6.0 for the high load operator $O_{a}$ and 3.99 for the low load operator $O_{b}$, and with surplus limit 4 the average carrier utilizations are 6.0 and 2.14 respectively. \begin{figure}[H] \centering \includegraphics[scale=.73, trim = 1mm 0.6mm 9mm 6.5mm, clip]{Algo1LowIntfSur2.jpg} \caption{Rate distribution for operator $O_{a}$, $N_{a} = 25$ users and operator $O_{b}$, $N_{b} = 5$ users using the traditional orthogonal and full spread spectrum sharing, cooperative algorithm and proposed scheme based on Algorithm I, in a low interference environment (wall loss, 10 dB). The maximum number of outstanding favors, $S = 2$.} \label{fig:Algo1LowIntfSur2} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=.73, trim = 1mm 0.6mm 9mm 6.5mm, clip]{Algo1LowIntfSur2LoadRev.jpg} \caption{Rate distribution for operator $O_{a}$ with temporal load variations using the traditional orthogonal and full spread spectrum sharing, cooperative algorithm and proposed scheme based on Algorithm I, in a low interference environment (wall loss, 10 dB). The maximum number of outstanding favors, $S = 2$.} \label{fig:Algo1LowIntfSur2LoadRev} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=.73, trim = 1mm 0.6mm 9mm 6.5mm, clip]{Algo1LowIntfSur2Sur4.jpg} \caption{Rate distribution for operator $O_{a}$, $N_{a} = 25$ users using the cooperative algorithm and proposed scheme based on Algorithm I, in a low interference environment (wall loss, 10 dB). The maximum number of outstanding favors is varied and rate curves are analysed for $S = 2$ and 4.} \label{fig:Algo1LowIntfSur2Sur4} \end{figure} \subsubsection{Algorithm II Analysis} \label{sec:Algo2Analysis} \noindent Algorithm II pursues the same objective as Algorithm I. It considers the mutual gain/loss history, on the basis of which the operators play repeated games to share the spectrum dynamically. The algorithm requires an initialization; in the simulation both operators are initialized with a load of a single user and the game is run for around 100 instants. In the performance curves, we have considered PF based user rates. The analysis of the results is similar to what we have presented for Algorithm I in Section~\ref{sec:Algo1Analysis}.\\ \noindent The simulation result in Fig.~\ref{fig:Algo2HighIntfSur2} shows the user rate \acp{CDF} of operators $O_{a}$ and $O_{b}$ with respective mean loads of 25 and 5 users. It is clearly visible from the figure that the high load operator $O_{a}$ is able to improve its delivered throughput at the expense of the low load operator $O_{b}$. Although operator $O_{b}$ suffers at the moment, when its load becomes high in the near future it will receive its rightful share of the spectrum resources. This behaviour is captured in Fig.~\ref{fig:Algo2HighIntfSur2LoadRev}, where the loads get reversed after some time, i.e., now operator $O_{a}$ has a mean load of 5 users while operator $O_{b}$ has a mean load of 25 users.
The user rate curves are plotted for operator $O_{a}$ over the time span when it had a mean load of 25 and later of 5 users. The game is modelled in a high interference scenario (wall loss, 0 dB), and the plotted rate curves confirm that the game based \ac{DSA} scheme provides a clear benefit over the orthogonal sharing under asymmetric loading and improves throughput with time. \begin{figure}[H] \centering \includegraphics[scale=.73, trim = 1mm 0.6mm 9mm 6.5mm, clip]{Algo2HighIntfSur2.jpg} \caption{Rate distribution for operator $O_{a}$, $N_{a} = 25$ users and operator $O_{b}$, $N_{b} = 5$ users using the traditional orthogonal and full spread spectrum sharing, cooperative algorithm and proposed scheme based on Algorithm II, in a high interference environment (wall loss, 0 dB). The maximum number of outstanding favors $S = 2$.} \label{fig:Algo2HighIntfSur2} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=.73, trim = 1mm 0.6mm 9mm 6.5mm, clip]{Algo2HighIntfSur2LoadRev.jpg} \caption{Rate distribution for operator $O_{a}$ with temporal load variations using the traditional orthogonal and full spread spectrum sharing, cooperative algorithm and proposed scheme based on Algorithm II, in a high interference environment (wall loss, 0 dB). The maximum number of outstanding favors $S = 2$.} \label{fig:Algo2HighIntfSur2LoadRev} \end{figure} \noindent Furthermore, in the game the maximum limit for outstanding favors (surplus limit $S$) is kept at 2, which means that neither operator can trade more spectrum resources than this limit allows while dynamically sharing the spectrum. Fig.~\ref{fig:Algo2HighIntfSur2Sur4} captures the effect of the surplus limit on the game. It can be seen that, with an increase in the surplus limit, the high load operator's throughput improvement steadily approaches the cooperative solution. In the simulation, the average carrier utilizations in \acp{SCC} (6 carriers) using a surplus limit of 2 are observed as 4.09 for the high load operator $O_{a}$ and 2.0 for the low load operator $O_{b}$, whereas with surplus limit 4 the average carrier utilizations are 5.0 and 1.0 respectively. This indicates that, as the surplus limit is relaxed, the operators' tendency to share the spectrum becomes more pronounced. Since the operators have an \ac{SCC} of 6 carriers, the maximal surplus limit can be fixed at 6. However, with a larger surplus limit (here, more than 4), the gain of the high load operator $O_{a}$ surpasses the cooperative solution. This indicates that the loss of the low load operator $O_{b}$ becomes larger than the gain of the high load operator $O_{a}$, and the sum-throughput of the operators diminishes. Therefore, it is essential to optimize the surplus limit parameter to obtain the maximal benefit, which is discussed in detail in Section~\ref{sec:GameExpectMath}.\\ \begin{figure}[H] \centering \includegraphics[scale=.73, trim = 1mm 0.6mm 9mm 6.5mm, clip]{Algo2HighIntfSur2Sur4.jpg} \caption{Rate distribution for operator $O_{a}$, $N_{a} = 25$ users using the cooperative algorithm and proposed scheme based on Algorithm II, in a high interference environment (wall loss, 0 dB). The maximum number of outstanding favors $S$ is varied and rate curves are analysed for $S = 2$ and 4.} \label{fig:Algo2HighIntfSur2Sur4} \end{figure} \noindent Similarly, the rate curves have also been plotted for a low interference environment (with wall loss of 10 dB). In this case, full spread is the dominant static allocation.
According to Figs.~\ref{fig:Algo2LowIntfSur2} and~\ref{fig:Algo2LowIntfSur2LoadRev}, it is observable that, with the game based spectrum sharing, the operators' spectrum allocations are now more closely aligned with full spread than with orthogonal sharing. Moreover, the rate curves are better than those of full spread. The reason is that the interference in this scenario is suppressed, but not completely eliminated. It has been observed, however, that with a wall loss of over 20 dB both the full spread and the game curves converge, and the operators utilize the full spectrum with negligible interference. The surplus behaviour is also captured in Fig.~\ref{fig:Algo2LowIntfSur2Sur4}, which shows that, with an increase in the surplus limit, the spectrum sharing improves and reaches the cooperative solution for some limit (of 4, as observed in the figure). In the simulation, the average carrier utilizations in \acp{SCC} with surplus limit 2 are observed as 5.9 for the high load operator $O_{a}$ and 4.3 for the low load operator $O_{b}$, and with surplus limit 4 the average carrier utilizations are 5.9 and 2.3 respectively. With a further increase in the surplus limit parameter, the overall sum-throughput starts declining and deviating from the optimal gain, as discussed for the high interference scenario. \begin{figure}[H] \centering \includegraphics[scale=.73, trim = 1mm 0.6mm 9mm 6.5mm, clip]{Algo2LowIntfSur2.jpg} \caption{Rate distribution for operator $O_{a}$, $N_{a} = 25$ users and operator $O_{b}$, $N_{b} = 5$ users using the traditional orthogonal and full spread spectrum sharing, cooperative algorithm and proposed scheme based on Algorithm II, in a low interference environment (wall loss, 10 dB). The maximum number of outstanding favors $S = 2$.} \label{fig:Algo2LowIntfSur2} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=.73, trim = 1mm 0.6mm 9mm 6.5mm, clip]{Algo2LowIntfSur2LoadRev.jpg} \caption{Rate distribution for operator $O_{a}$ with temporal load variations using the traditional orthogonal and full spread spectrum sharing, cooperative algorithm and proposed scheme based on Algorithm II, in a low interference environment (wall loss, 10 dB). The maximum number of outstanding favors $S = 2$.} \label{fig:Algo2LowIntfSur2LoadRev} \end{figure} \begin{figure}[H] \centering \includegraphics[scale=.73, trim = 1mm 0.6mm 9mm 6.5mm, clip]{Algo2LowIntfSur2Sur4.jpg} \caption{Rate distribution for operator $O_{a}$, $N_{a} = 25$ users using the cooperative algorithm and proposed scheme based on Algorithm II, in a low interference environment (wall loss, 10 dB). The maximum number of outstanding favors $S$ is varied and rate curves are analysed for $S = 2$ and 4.} \label{fig:Algo2LowIntfSur2Sur4} \end{figure} \clearpage \section{Conclusion and Future Work} \noindent This thesis reports findings whose main objective is to demonstrate how sharing paradigms in wireless networks, in particular spectrum sharing, improve spectral efficiency. We describe and evaluate different scenarios in which spectrum sharing is considered in the context of multi-operator cooperation, namely full spread and orthogonal sharing. More specifically, we investigate the impact of noncooperative games between the operators on spectrum sharing. Our numerical results show that properly modelled games may provide a gain in terms of system-level throughput with respect to the full spread and orthogonal spectrum sharing scenarios over time.
The performance of the specific techniques is strongly dependent on several system parameters, such as the number of users, QoS and the rational outlook of the operators. More importantly, the gains are significant when the number of users served by an operator is relatively large and the BSs have enough degrees of freedom to efficiently schedule the users. \subsection{Summary} \noindent In this thesis, we have investigated the inter-operator spectrum sharing problem between self-interested operators. The operators coexist in a nearby geographical area with neighbouring \acp{RAN}. The problem is modelled via a game theoretic approach for efficient \ac{DSA}. The spectrum resources of the operators are divided into (i) \ac{FSA} and (ii) \ac{DSA}. In \ac{FSA}, privately owned orthogonal frequency bands are allocated to the operators and no inter-operator interference exists, whereas in \ac{DSA} the operators contend for resources from a common spectrum pool. The operators follow the repeated games framework and devise strategies to fetch spectrum resources based on their requirements (e.g., load congestion, \ac{QoS}, etc.). The games are played entirely on a noncooperative basis, as no operational information is revealed to the other operator. Leveraging this analysis, two different taxonomies of the noncooperative \ac{DSA} algorithm have been proposed.\\ \noindent Chapter~\ref{chap:GamePrice} discusses the first algorithm, where a carrier pricing based utility function is considered for the repeated games framework. The utility design penalizes the operators for their spectrum usage. In the games, the operators aim to maximize their utility at every game sequence. This leads to sharing of spectrum resources between the operators based on their spectrum affordability, eventually favoring the congested operators.\\ \noindent In Chapter~\ref{chap:GameExpectation}, another coordination algorithm is introduced, where the mutual interactions between the operators are recognized as the basis for resource sharing. Here, the operators translate their past gains/losses due to spectrum sharing in previous games into future benefits. The operators play noncooperative repeated games, and if they expect promising future gains, they readily sacrifice their resources upon request and trade resources accordingly.\\ \noindent To curb favoritism towards an operator (or operators) in collecting spectrum usage favors, a limit is imposed in the algorithms in the form of the surplus. The surplus establishes a trust mechanism and ensures that the operators sacrifice resources for each other as long as the other was helpful in the past. Setting this parameter requires an appropriate measure, as too small a value does not let the game run effectively, while too much relaxation of the value might distinctly favor one of the operators.\\ \noindent For the purpose of performance analysis, a scenario has been considered comprising two operators with neighboring \acp{RAN} in a single storey building. Each operator has two \acp{BS}, and all the \acp{BS} are geographically separated by walls. The operators' loads are Poisson distributed, and the user locations are uniformly distributed within each operator's access area. A spectrum band of 8 carriers is available to the operators, which is partitioned exclusively into two allocations: \ac{FSA} and \ac{DSA}. In \ac{FSA}, each operator is allocated a single orthogonal carrier, whereas in \ac{DSA} a spectrum pool of 6 carriers is shared.
As a baseline for comparison, the benefits of the algorithms are assessed against the static allocations (orthogonal and full spread) and the Pareto efficient cooperative algorithm (see Chapter~\ref{chap:Coop}).\\ \noindent In the simulation results, the algorithms achieving dynamic spectrum sharing outperform the typical static allocation schemes (orthogonal or full spread) under varying interference conditions and load factors. The cooperative algorithm performs best in this scenario; however, such a choice is disregarded by the self-interested operators because of trust issues and significant overhead constraints. It nevertheless provides an optimal benchmark solution for the study of the game algorithms. The operators opportunistically share resources and maximize their sum-throughput noncooperatively, striving to converge to the ideal cooperative solutions. \subsection{Future Work} \noindent We do not claim this work to be complete, and there are several areas in which the research done in this thesis could be expanded. \begin{enumerate} \item In the simulations, only two operators, each with two \acp{BS}, have been considered. Extending this work to many \acp{BS} could lead to interesting findings in terms of interference management. One could also implement the cooperative schemes for intra-operator radio resource management. However, this makes the scheduling problem considerably more complicated and requires an efficient implementation to speed up the scheduling of users in the network. \item The proposed Algorithm II, based on the mutual history of gains/losses, could add many facets to its decision making mechanism. For instance, the operators could incorporate the outstanding favors into the decision making process rather than using them only as a hard check. The operators could also categorize favors into big and small favors and formulate policies for granting them, e.g., being more lenient in granting small favors. \item The operators consider only the current traffic load without anticipating the future. There is room to equip the algorithm with accurate load modelling, which could save the operators unnecessary processing time and energy consumption in forwarding requests and in the subsequent decision making. With accurate load modelling, the operators would convey requests only when appropriate and could thus make accurate reservations of spectrum resources proactively. \end{enumerate} \clearpage \phantomsection \addcontentsline{toc}{section}{References}
{ "attr-fineweb-edu": 1.376953, "attr-cc_en_topic": 12, "domain": "arxiv" }
\subsection{Assumptions on $\xi(p)$ and $\zeta(p)$} The result of Eq.~(\ref{D}) assumes that the derivative $\partial D/\partial \alpha_{\text{\tiny R}}$ at $\alpha_{\text{\tiny R}}=0$ does exist. The latter is not the case, e.\,g., for the model of Dirac fermions, where $D\propto 1/\alpha_{\text{\tiny R}}$~\cite{iDMI-theory-TI1,iDMI-theory-TI2,iDMI-theory-TI3}. Thus, a necessary condition for the validity of Eq.~(\ref{D}) is $\xi(p)\not\equiv 0$. In order to establish sufficient conditions, one should investigate the convergence of the integrals that define $\partial D/\partial \alpha_{\text{\tiny R}}$. Provided $\xi(p)$ and $\zeta(p)$ have no singularities at finite values of $p$, this amounts to studying the convergence of the corresponding integrals at $p=\infty$. Uniform convergence is guaranteed, for instance, if the distribution functions $f(\varepsilon^{\pm}(\boldsymbol p))$ decay fast enough at infinity. This will be the case if, at large $p$, the function $\xi(p)$ is positive, unbounded, and grows faster than $\vert p\,\zeta(p)\vert$. The result of Eq.~(\ref{A}) provides the value of the exchange stiffness in the absence of SOC, hence it depends on $\xi(\cdot)$ only. If $\xi(p)$ has no singularities at finite values of $p$, and it is positive and unbounded at large $p$, Eq.~(\ref{A}) is valid. \subsection{Derivation of Eqs.~(\ref{symmetric_energy_general}) and (\ref{A}) of the main text of the Letter} In order to compute the symmetric exchange contribution to the micromagnetic free energy density, one has to extract all terms proportional to $\nabla_\beta n_\gamma\nabla_{\beta'} n_{\gamma'}$ and $\nabla_\beta\nabla_{\beta'} n_\gamma$ in the electronic grand potential, Eq.~(\ref{Omega_general}). To do that, we extend the Dyson series of Eq.~(\ref{Dyson}) as \begin{multline} \label{Dyson_hardcore} \mathcal G(\boldsymbol r_0,\boldsymbol r_0)=G(\boldsymbol r_0-\boldsymbol r_0)+ J_{\text{sd}} S\int d\boldsymbol r'\, G(\boldsymbol r_0-\boldsymbol r') \left[ \sum\limits_{\beta\gamma}(\boldsymbol r'-\boldsymbol r_0)_\beta\nabla_\beta n_\gamma(\boldsymbol r_0)\,\sigma_\gamma \right] G(\boldsymbol r'-\boldsymbol r_0) \\ + (J_{\text{sd}} S)^2\int d\boldsymbol r'd\boldsymbol r''\, G(\boldsymbol r_0-\boldsymbol r') \left[ \sum\limits_{\beta\gamma}(\boldsymbol r'-\boldsymbol r_0)_\beta\nabla_\beta n_\gamma(\boldsymbol r_0)\,\sigma_\gamma \right] G(\boldsymbol r'-\boldsymbol r'') \left[ \sum\limits_{\beta'\gamma'}(\boldsymbol r''-\boldsymbol r_0)_{\beta'}\nabla_{\beta'} n_{\gamma'}(\boldsymbol r_0)\,\sigma_{\gamma'} \right] G(\boldsymbol r''-\boldsymbol r_0) \\ + \frac{J_{\text{sd}} S}{2}\int d\boldsymbol r'\, G(\boldsymbol r_0-\boldsymbol r') \left[ \sum\limits_{\beta\beta'\gamma}(\boldsymbol r'-\boldsymbol r_0)_\beta(\boldsymbol r'-\boldsymbol r_0)_{\beta'}\nabla_\beta\nabla_{\beta'} n_\gamma(\boldsymbol r_0) \right] G(\boldsymbol r'-\boldsymbol r_0), \end{multline} where the first line has already been analysed in the main text, the second line is a second order correction to the Green's function due to the first spatial derivatives of $\boldsymbol n$, and the third line is a first order correction due to the second spatial derivatives of $\boldsymbol n$.
We substitute the latter two into Eq.~(\ref{Omega_general}), switch to momentum representation, and symmetrize the outcome, arriving at \begin{equation} \label{Exc_general_0} \Omega_A[\boldsymbol n]= \sum_{\beta\beta'\gamma\gamma'}{\Omega^{\text{exc-I}}_{\beta\beta'\gamma\gamma'}\nabla_\beta\, n_\gamma\nabla_{\beta'}\, n_{\gamma'}} + \sum_{\beta\beta'\gamma}{\Omega^{\text{exc-II}}_{\beta\beta'\gamma}\,\nabla_\beta\nabla_{\beta'}\, n_\gamma}, \end{equation} where the tensors are defined as \begin{equation} \label{Exc_general_1} \Omega^{\text{exc-I}}_{\beta\beta'\gamma\gamma'}= T\frac{(J_{\text{sd}} S)^2}{2\pi} \im{ \int{d \varepsilon\, g(\varepsilon) \int{ \frac{d^2 p}{(2\pi)^2} \tr{ \Bigl( G^{R}\,v_\beta\,G^{R}\sigma_\gamma\,G^{R}\sigma_{\gamma'}\,G^{R}\,v_{\beta'}\,G^{R}+ G^{R}\,v_{\beta'}\,G^{R}\sigma_{\gamma'}\,G^{R}\sigma_\gamma\,G^{R}\,v_\beta\,G^{R} \Bigr) } } } } \end{equation} and \begin{equation} \label{Exc_general_2} \Omega^{\text{exc-II}}_{\beta\beta'\gamma}= -T\frac{J_{\text{sd}} S}{4\pi} \im{ \int{d \varepsilon\, g(\varepsilon) \int{ \frac{d^2 p}{(2\pi)^2} \tr{\left( \frac{\partial^2 G^{R}}{\partial p_\beta\partial p_{\beta'}}\sigma_\gamma\,G^{R}+ G^{R}\sigma_\gamma\,\frac{\partial^2 G^{R}}{\partial p_\beta\partial p_{\beta'}} \right)} } } }. \end{equation} The notation of the argument of $\boldsymbol n(\boldsymbol r_0)$ is dropped in Eq.~(\ref{Exc_general_0}) and further below. The Green's functions entering Eqs.~(\ref{Exc_general_1}) and (\ref{Exc_general_2}) are taken in the momentum representation of Eq.~(\ref{green's_functions}) of the main text, but with $\alpha_{\text{\tiny{R}}}=0$. Taking a matrix trace calculation and performing an integration over the angle, we obtain \begin{gather} \label{calculated_Omega_A_1} \Omega^{\text{exc-I}}_{\beta\beta'\gamma\gamma'}= A_1\,\delta_{\beta\beta'}\delta_{\gamma\gamma'}+ W\,\delta_{\beta\beta'}n_{\gamma}n_{\gamma'}, \\ \label{calculated_Omega_A_2} \Omega^{\text{exc-II}}_{\beta\beta'\gamma}= A_2\,\delta_{\beta\beta'}n_{\gamma}, \end{gather} where $\delta_{q_1 q_2}$ is Kronecker delta, while \begin{gather} \label{almost_A_1} A_1=\frac{\Delta_{\text{sd}}^2}{2\pi^2}T \int\limits_0^{\infty}p\,dp\int\limits_{-\infty}^{\infty}d\varepsilon\, g(\varepsilon)\, \im{\left( \frac{\left[\xi'(p)\right]^2\left[3\Delta_{\text{sd}}^2+(\varepsilon-\xi(p))^2\right] \left[\varepsilon-\xi(p)\right]}{[\varepsilon+i 0-\varepsilon^{+}_0(\boldsymbol p)]^4[\varepsilon+i 0-\varepsilon^{-}_0(\boldsymbol p)]^4} \right)}, \\ \label{almost_A_2} A_2=-\frac{\Delta_{\text{sd}}^2}{\pi^2}T \int\limits_0^{\infty}p\,dp\int\limits_{-\infty}^{\infty}d\varepsilon\, g(\varepsilon)\, \im{\left( \frac{\left[\xi'(p)+p\,\xi''(p)\right]\left[\Delta_{\text{sd}}^2+3(\varepsilon-\xi(p))^2\right]}{4 p[\varepsilon+i 0-\varepsilon^{+}_0(\boldsymbol p)]^3[\varepsilon+i 0-\varepsilon^{-}_0(\boldsymbol p)]^3} +2\frac{\left[\xi'(p)\right]^2\left[\Delta_{\text{sd}}^2+(\varepsilon-\xi(p))^2\right] [\varepsilon-\xi(p)]}{[\varepsilon+i 0-\varepsilon^{+}_0(\boldsymbol p)]^4[\varepsilon+i 0-\varepsilon^{-}_0(\boldsymbol p)]^4} \right)}, \end{gather} and the actual value of $W$ is not relevant for the final result. 
Combining Eqs.~(\ref{Exc_general_0}),~(\ref{calculated_Omega_A_1}), and (\ref{calculated_Omega_A_2}) we find \begin{equation} \label{A1+A2} \Omega_A[\boldsymbol n]= A_1\left[(\nabla_x\boldsymbol n)^2+(\nabla_y\boldsymbol n)^2\right]+ A_2\left[\boldsymbol n\,\nabla_x^2\boldsymbol n+\boldsymbol n\,\nabla_y^2\boldsymbol n\right] +W(\boldsymbol n\, \nabla_x \boldsymbol n)^2+W(\boldsymbol n\, \nabla_y \boldsymbol n)^2. \end{equation} Before we proceed, it is important to notice two consequences of the constraint $\boldsymbol n^2 \equiv 1$, namely, \begin{equation} \label{wisdom} \frac{1}{2}\nabla_\beta \boldsymbol n^2=\boldsymbol n\, \nabla_\beta \boldsymbol n=0 \qquad\text{and} \qquad \frac{1}{2}\nabla_\beta^2 \boldsymbol n^2=\nabla_\beta(\boldsymbol n\, \nabla_\beta \boldsymbol n)=(\nabla_\beta\boldsymbol n)^2+\boldsymbol n\, \nabla_\beta^2 \boldsymbol n=0. \end{equation} With the help of Eq.~(\ref{wisdom}) we are able to bring Eq.~(\ref{A1+A2}) to the form \begin{equation} \Omega_A[\boldsymbol n]= (A_1-A_2)\left[(\nabla_x\boldsymbol n)^2+(\nabla_y\boldsymbol n)^2\right], \end{equation} proving Eq.~(\ref{symmetric_energy_general}) of the main text with $A=A_1-A_2$. To complete the calculation of the exchange stiffness $A$, one should perform a partial fraction decomposition of the integrands in Eqs.~(\ref{almost_A_1}),~(\ref{almost_A_2}) and make use of the formula \begin{equation} \im{\left([\varepsilon-\varepsilon^{\pm}_0(\boldsymbol p)+i0]^{-n-1}\right)} = \frac{(-1)^{n+1}}{n!}\pi\,\delta^{(n)}(\varepsilon-\varepsilon^{\pm}_0(\boldsymbol p)) \end{equation} to integrate over $\varepsilon$ with the result \begin{multline} \label{almost_A} A=\frac{\Delta_{\text{sd}}}{32\pi}T \int_{0}^{\infty}{d p\,\frac{p\,[\xi'(p)]^2}{\Delta_{\text{sd}}^2}(g_-'-g_+')} + \frac{\Delta_{\text{sd}}}{32\pi}T \int_{0}^{\infty}{d p\,\frac{p\,[\xi'(p)]^2}{\Delta_{\text{sd}}}(g_-''+g_+'')} \\ +\frac{\Delta_{\text{sd}}}{16\pi}T \int_{0}^{\infty}{d p\,[\xi'(p)+p\,\xi''(p)](g_-''-g_+'')} +\frac{\Delta_{\text{sd}}}{16\pi}T \int_{0}^{\infty}{d p\,p\,[\xi'(p)]^2(g_-'''-g_+''')}, \end{multline} where $\xi'(p)=\partial \xi/\partial p$ and the derivatives of $g_{\pm}=g(\varepsilon^{\pm}_0(\boldsymbol p))=g(\xi(p)\pm\Delta_{\text{sd}})$ are taken with respect to the argument. The latter can also be assumed to be the derivatives with respect to $\xi$, \begin{equation} g_{\pm}^{(n)}=\frac{\partial^{n} g_{\pm}}{\partial \xi^{n}}. \end{equation} The third term cancels out the fourth term in Eq.~(\ref{almost_A}) after integration by parts with the help of \begin{equation} \xi'(p)+p\,\xi''(p)=\partial [p\,\xi'(p)]/\partial p. \end{equation} In the remaining terms, one replaces the derivatives of $g_{\pm}=g(\xi(p)\pm\Delta_{\text{sd}})$ with respect to $\xi$ by the derivatives with respect to $\Delta_{\text{sd}}$, reduces the resulting expression to a form of a full derivative with respect to $\Delta_{\text{sd}}$, and uses the relation $\partial g(\varepsilon)/\partial \varepsilon=-f(\varepsilon)/T$ to arrive at Eq.~(\ref{A}) of the main text. \end{document}
{ "attr-fineweb-edu": 1.114258, "attr-cc_en_topic": 12, "domain": "arxiv" }
\section{Introduction} Fix $\alpha\in \mathbb R$, and consider a Markov process $(Y_n^\alpha)_{n\ge 1}$ defined on some probability space $(\Omega, \mathcal F, \mathbb P)$ with the evolution governed by the transition kernel \begin{equation}\label{E:1.1} p(x, \cdot ) = \frac 1 2 \delta_{x+\alpha} + \frac 12 \delta_{x-\alpha}, \quad p : \mathbb S^1 \times \mathcal B (\mathbb S^1 ) \to [0,1], \end{equation} whose initial distribution, i.e. the distribution of $Y_1^\alpha$, is the Lebesgue measure (here $\mathcal B (\mathbb S^1 )$ stands for the $\sigma$-algebra of Borel subsets of $\mathbb S^1$). One can easily verify that the process is stationary. More work is needed to show that the Lebesgue measure is the only possible choice for the law of $Y_1^\alpha$ that makes the process stationary (see e.g. Theorem 7 and Remark 8 in \cite{Szarek_Zdunik_2016b}). In particular $(Y_n^\alpha)$ is ergodic, which means that if $A\in \mathcal B (\mathbb S^1 )$ is such that $p(x, A)=1$ for Lebesgue a.e. $x\in \mathbb S^1$ then $A$ has Lebesgue measure $0$ or $1$ (see e.g. Section 5 in \cite{Hairer_2006}, page 37, for characterizations of ergodicity and the relation to the notion of ergodicity in dynamical systems). This paper is devoted to the central limit theorem (\textbf{CLT} for short) for additive functionals of $(Y_n^\alpha)$, i.e. processes of the form $\big(\varphi(Y^\alpha_1)+\cdots+\varphi(Y^\alpha_n)\big)$, where a function $\varphi : \mathbb S^1 \to \mathbb R$ is usually called an observable. For convenience we assume that $\int \varphi(x)dx=0$. We say that \textbf{CLT} holds for the process if $$\frac{\varphi(Y^\alpha_1)+\cdots+\varphi(Y^\alpha_n)}{\sqrt n} \Rightarrow \mathcal N (0, \sigma) \quad \textrm{as $n\to \infty$}$$ for some $\sigma>0$. The validity of \textbf{CLT} depends on Diophantine properties of $\alpha$. An angle $\alpha$ is called Diophantine of type $(c,\gamma)$, $c>0$, $\gamma\ge 2$ if \begin{equation}\label{diophantine} \bigg|\alpha - \frac p q \bigg| \ge \frac{c}{q^\gamma} \quad \textrm{for all $p, q\in \mathbb Z$, $q\not=0$.} \end{equation} An angle $\alpha$ is Liouville if it is not Diophantine of type $(c,\gamma)$ for any choice of $c>0$, $\gamma \ge 2$. These and similar processes have been widely studied in the literature. \begin{itemize} \item Kesten \cite{Kesten_1960, Kesten_1961} investigated the limit distribution of $$D_N(x,\alpha)=\sum_{n=0}^{N-1} \varphi(x+n\alpha) - N\int_{\mathbb{S}^1}\varphi(x)dx,$$ where $\varphi$ is the characteristic function of some interval and $(x,\alpha)$ is uniformly distributed in $\mathbb S^1\times \mathbb S^1$. This was later generalized to higher dimensions by Dolgopyat and Fayad \cite{Dolgopyat_Fayad_2014, Dolgopyat_Fayad_2020}. \item Sinai and Ulcigrai \cite{Sinai_Ulcigrai_2008} considered a similar problem when $\varphi$ is a non-integrable meromorphic function. \item In the above examples a point in the space is chosen randomly, and thus one speaks of a spatial \textbf{CLT}. One can also fix a point in the space $x\in \mathbb S^1$ and an angle $\alpha$ and, given $N$, pick an integer $n\in [1, N]$ at random. The question then arises what the limit distribution of $D_n(x,\alpha)$ is as $N$ grows. Limit theorems of this kind are called temporal. The first limit theorem of this flavour was proven by Beck \cite{Beck_2010, Beck_2011}. For further development see e.g. \cite{Dolgopyat_Sarig_2017}, \cite{Bromberg_Ulcigrai_2018}, \cite{Dolgopyat_Sarig_2020}.
\item Sinai \cite{Sinai_1999} considered a situation where one draws $+\alpha$ or $-\alpha$ with a probability distribution depending on the position on the circle (the method was to study a related random walk in a random environment). He proved the unique ergodicity and stability of the process when $\alpha$ is Diophantine. Recently Dolgopyat et al. \cite{Dolgopyat_Fayad_Saprykina_2021} studied the behaviour in the Liouvillean case. \item Borda \cite{Borda_2021} considered an even more general situation where several angles are given and one chooses one of them randomly. Given $p\in (0,1]$, he formulated certain Diophantine conditions implying \textbf{CLT} for all $\varphi$ in the class of $p$-H{\"o}lder functions. Thus the author was concerned with what assumptions should be put on the angles of rotation to ensure \textbf{CLT} for all observables in a given class. \end{itemize} The situation here resembles the one from the last point, but we rather address the question of how regular an observable should be for \textbf{CLT} to hold when $\alpha$ is given. Namely, using the celebrated result by Kipnis and Varadhan \cite{Kipnis_Varadhan_1986} we prove the following statement. \begin{prop}\label{P:1} Let us assume $\alpha$ to be Diophantine of type $(c,\gamma)$, $\gamma \ge 2$. If a non-constant function $\varphi \in C^{r}$, $r>\gamma-1/2$ (possibly $r=\infty$), is such that $\int\varphi(x)dx=0$ then there exists $\sigma>0$ such that $$\frac{\varphi(Y^\alpha_1)+\cdots+\varphi(Y^\alpha_n)}{\sqrt{n}} \Rightarrow \mathcal N (0, \sigma).$$ In particular, \textbf{CLT} holds if $\alpha$ is Diophantine of an arbitrary type and $\varphi$ is smooth. \end{prop} \noindent The result is included for the sake of completeness, not because of novelty. This (or a slightly different) statement has been proven independently by several people using various methods related to harmonic analysis (section 8 in \cite{Derriennic_Lin_2001}, section 7.5 in \cite{Weber_2009}, \cite{Zdunik_2017}, \cite{Borda_2021}). By Proposition \ref{P:1}, \textbf{CLT} holds if $\varphi$ is smooth and $\alpha$ is Diophantine of an arbitrary type. It is then natural to ask whether for every Liouville $\alpha$ there exists a smooth $\varphi$ for which \textbf{CLT} fails. It is also natural to ask whether \textbf{CLT} can fail when analytic observables are considered. This leads us to the following theorems showing a dichotomy between the behaviour of Liouville and Diophantine random rotations, similar to the one appearing in smooth conjugacy results for circle diffeomorphisms (see the beginning of Chapter I.3 in \cite{deMelo_vanStrien_1993}). \begin{thm}\label{T:1} There exist an irrational $\alpha$ and $\varphi \in C^\omega(\mathbb S^1)$ such that \textbf{CLT} fails. \end{thm} \noindent Note that by Proposition \ref{P:1} the angle in the assertion must be Liouville. \begin{thm}\label{T:2} Let $\alpha$ be an irrational number. Let us assume there exist $c>0$, $\gamma>5$ such that $$\bigg|\alpha - \frac{p}{q} \bigg| \le \frac{c}{q^\gamma} \quad \textrm{for infinitely many $p,q \in \mathbb Z$, $q\not = 0$.} $$ Let $r$ be the largest positive integer with $r<\frac{\gamma}{2}-\frac 3 2$. Then there exists $\varphi \in C^r$ such that \textbf{CLT} fails. \end{thm} \noindent The only reason for making the assumption $\gamma>5$ is to ensure that $\frac{\gamma}{2}-\frac 3 2$ is greater than $1$, so that the condition $r<\frac{\gamma}{2}-\frac 3 2$ is satisfied for at least one positive integer $r$. A slightly changed proof of Theorem \ref{T:2} yields the following.
\begin{thm}\label{T:3} Let $\alpha$ be Liouville. Then there exists $\varphi\in C^\infty(\mathbb S^1)$ such that \textbf{CLT} fails. \end{thm} Let us end this section with an interesting open problem. An angle $\alpha$ is called badly approximable when it is Diophantine of type $(c, 2)$ for some $c>0$ (for instance, every quadratic irrational is badly approximable). Proposition \ref{P:1} yields that if $\varphi$ is $C^2$ then the additive functional satisfies \textbf{CLT}. Unfortunately, Theorem \ref{T:2} does not give any counterexample in that case. This leads to a natural question: does \textbf{CLT} hold if $\alpha$ is badly approximable (e.g. $\alpha$ is the golden ratio) and $\varphi$ is $C^1$? \section{The Poisson equation and central limit theorem}\label{S:3} One of the methods of proving \textbf{CLT} for additive functionals of Markov chains is the Gordin-Lif\v{s}ic method \cite{Gordin_Lifsic_1978}, which is roughly explained in the present section (note that in \cite{Weber_2009}, \cite{Zdunik_2017}, \cite{Borda_2021} different techniques have been used). Before that let us define the operator $$ T\varphi(x)=\frac 1 2 \varphi(x+\alpha) + \frac 12 \varphi(x-\alpha), \quad \varphi\in B(\mathbb S^1), \ T: B(\mathbb S^1)\rightarrow B(\mathbb S^1), $$ where $B(\mathbb S^1)$ is the space of Borel measurable functions. By the very definition of a Markov process, if $(Y_n^\alpha)$ is defined on $(\Omega, \mathcal F, \mathbb P)$ then \begin{equation}\label{E:dual} \mathbb E ( \varphi(Y^\alpha_{n+1} ) | Y_n^\alpha ) = \int_{\mathbb{S}^1} p( Y_n^\alpha, dy) \varphi(y) = T\varphi (Y_n^\alpha), \quad n\ge 1, \end{equation} where $p$ is the transition function (\ref{E:1.1}). Let $\varphi : \mathbb S^1 \rightarrow \mathbb R$ be a square integrable function (with respect to the Lebesgue measure) with $\int \varphi(x)dx=0$. To show the convergence of $\frac{1}{\sqrt{n}}(\varphi(Y_1)+\cdots+\varphi(Y_n))$ to the normal distribution we solve the so-called Poisson equation\footnote{In dynamical systems theory this equation (with $T$ replaced by a Koopman operator) is called a cohomological equation. The name ``Poisson equation'' is more common in the theory of stochastic processes, probably due to the fact that writing down the corresponding equation for a Brownian motion, which is a continuous time Markov process, gives $\frac 1 2 \Delta \psi = \varphi$, where $\Delta$ is the Laplace operator. Note that $\frac 1 2 \Delta$ is the infinitesimal generator of the Brownian motion.} $T\psi - \psi =\varphi$, where $\psi\in L^2(\mathbb S^1)$ is unknown. If the solution $\psi$ exists then we can write $$\varphi(Y_1)+\cdots+\varphi(Y_n)$$ \begin{equation}\label{E:P1.1} =\big[(T\psi(Y_1)-\psi(Y_2))+\cdots+(T\psi(Y_{n-1})-\psi(Y_n))\big]+(T\psi(Y_n) - \psi (Y_1)). \end{equation} When divided by $\sqrt{n}$, the second term tends to zero in probability. It then suffices to show \textbf{CLT} for the first process, which is an ergodic, stationary martingale by (\ref{E:dual}). For such processes \textbf{CLT} is valid (see \cite{Brown_1971}). Thus the assertion follows, provided that a solution of the Poisson equation exists. Observe that $(I-T) u_n=(1-\cos(2\pi n \alpha)) u_n$ for $u_n(x)=\exp(2\pi i n x)$, $x\in \mathbb S^1$, $n\in\mathbb Z$. Therefore the trigonometric system $(u_n)_{n\in\mathbb Z}$ is also the orthonormal system of eigenvectors of $I-T$ with corresponding eigenvalues $1-\cos(2\pi n \alpha)$, $n\in\mathbb Z$.
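As a purely numerical illustration of this spectral picture (not used anywhere in the arguments), the short sketch below computes, in Python, the weights $1-\cos(2\pi n\alpha)$ and the partial sums controlling the square-summability of the formal solution $\psi$ discussed next; the concrete choices made there (the golden mean for $\alpha$, an exponentially decaying model for $\hat{\varphi}$, the truncation level) are illustrative assumptions only.
\begin{verbatim}
import numpy as np

# Illustration only: for a badly approximable alpha (golden mean) and Fourier
# coefficients of phi decaying exponentially (as for an analytic observable),
# the partial sums of  sum |phi_hat(n)|^2 / (1 - cos(2 pi alpha n))^2
# stabilize, so the formal solution psi of the Poisson equation lies in L^2.
# The decay rate 0.1 and the truncation N are arbitrary choices for the sketch.

alpha = (np.sqrt(5.0) - 1.0) / 2.0      # golden mean (badly approximable)
N = 10_000                              # truncation of the Fourier series

n = np.arange(1, N + 1)
phi_hat = np.exp(-0.1 * n)              # assumed coefficients of phi
weights = 1.0 - np.cos(2.0 * np.pi * alpha * n)

partial_sums = np.cumsum(np.abs(phi_hat) ** 2 / weights ** 2)
print(partial_sums[[99, 999, 9999]])    # nearly identical: the series converges
\end{verbatim}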
We deduce that the $n$-th Fourier coefficient of $(I-T)\psi$, $\psi \in L^2(\mathbb S^1)$, is of the form $(1-\cos(2\pi n \alpha))\hat{\psi}(n)$, $n\in \mathbb Z$. This yields a recipe to find $\psi$ when $\varphi$ is given. Namely, $\psi$ should be a square integrable function whose Fourier coefficients are \begin{equation}\label{fourier} \hat{\psi}(n) = \frac{|\hat{\varphi}(n)|}{1-\cos(2\pi\alpha n)}, \quad n\in\mathbb Z \setminus \{0\}, \end{equation} while $\hat{\psi}(0)$ is an arbitrary real number. Note that here we also use the assumption that $\hat{\varphi}(0)=\int\varphi(x)dx=0$. Indeed, since $1-\cos(0)=0$, we must have $\hat{\varphi}(0)=0$ for the equation to be solvable. What remains is to show the convergence \begin{equation}\label{condition1} \sum_{n\in\mathbb Z\setminus \{0\}} \frac{|\hat{\varphi}(n)|^2}{(1-\cos(2\pi\alpha n))^2}<\infty, \end{equation} to make sure that the object with Fourier coefficients (\ref{fourier}) is indeed a square integrable function. In fact, the solution of the Poisson equation does not have to exist for \textbf{CLT} to hold. Note that the processes under consideration are reversible, which means that the distributions of the random vectors $(Y^\alpha_1, \ldots, Y^\alpha_n)$ and $(Y^\alpha_n, \ldots, Y^\alpha_1)$ are the same for every natural $n$ or, equivalently, that the operator $T$ is self-adjoint. In the celebrated paper \cite{Kipnis_Varadhan_1986} (see Theorem 1.3 therein) the authors proved that the condition $\varphi\in \textrm{Im}(I-T)$ can be relaxed to \begin{equation}\label{Kipnis_Varadhan} \varphi \in \textrm{Im}(\sqrt{I-T}), \end{equation} where $\sqrt{I-T}$ is the square root of $I-T$ (recall the square root of a positive semidefinite, self-adjoint operator $P$ acting on a Hilbert space is the operator $\sqrt{P}$ with the property $(\sqrt{P})^2=P$). Since the $n$-th Fourier coefficient of the function $(I-T)\psi$, $\psi \in L^2(\mathbb S^1)$ is given by $(1-\cos(2\pi n \alpha))\hat{\psi}(n)$, we easily deduce that $\sqrt{I-T}$ is well defined on $L^2(\mathbb S^1)$ and the $n$-th Fourier coefficient of the function $\sqrt{I-T}\psi$, $\psi \in L^2(\mathbb S^1)$, is given by $\sqrt{1-\cos(2\pi n \alpha)}\hat{\psi}(n)$. Thus (\ref{Kipnis_Varadhan}) leads to the condition \begin{equation}\label{condition2} \sum_{n\in\mathbb Z \setminus \{0\}} \frac{|\hat{\varphi}(n)|^2}{1-\cos(2\pi\alpha n)}<\infty, \end{equation} weaker than (\ref{condition1}). Moreover, \cite{Kipnis_Varadhan_1986} (see (1.1) therein) delivers a formula for $\sigma$, which reads here as $$\sigma^2=\sum_{n\in \mathbb Z \setminus \{0\}} \frac{1+\cos(2\pi \alpha n)}{1-\cos(2\pi \alpha n)} |\hat{\varphi}(n)|^2.$$ Clearly, $\sigma^2<\infty$ if (\ref{condition2}) is satisfied and $\sigma>0$ if $\varphi$ is non-constant. We are now in a position to prove Proposition \ref{P:1}. We recall the statement for the convenience of the reader. \setcounter{prop}{0} \begin{prop} Let us assume $\alpha$ to be Diophantine of type $(c,\gamma)$, $\gamma \ge 2$. If a non-constant function $\varphi \in C^{r}$, $r>\gamma-1/2$ (possibly $r=\infty$), is such that $\int\varphi(x)dx=0$ then there exists $\sigma>0$ such that $$\frac{\varphi(Y^\alpha_1)+\cdots+\varphi(Y^\alpha_n)}{\sqrt{n}} \Rightarrow \mathcal N (0, \sigma).$$ In particular, \textbf{CLT} holds if $\alpha$ is Diophantine of an arbitrary type and $\varphi$ is smooth. \end{prop} \begin{proof} We are going to prove that (\ref{condition2}) is satisfied. Fix $\alpha$ and $\varphi$ as above.
Clearly $\sum_{n\in \mathbb Z} |\hat{\varphi}(n)|^2<\infty$ since $\varphi$ is square integrable; therefore the difficulty arises when $\cos(2\pi \alpha n)$ is close to 1, which happens exactly when $\alpha n$ is close to some integer. To handle this we will use the fact that $\alpha$ is Diophantine of type $(c, \gamma)$. This means \begin{equation}\label{E:5.1} \bigg|\alpha - \frac{p}{n}\bigg| \ge \frac{c}{n^\gamma} \quad \textrm{for all $p, n\in \mathbb Z$, $n\not = 0$.} \end{equation} By Taylor's formula $|\cos(2 \pi (p+x))-1|=\frac{(2\pi x)^2}{2}+o(x^2)$ for $p\in \mathbb Z$. As a consequence there exists $\eta>0$ such that $$\big|\cos(2 \pi \alpha n)-1 \big|\ge 2\pi \eta|n\alpha - p|^2 \ge \frac{2\pi \eta c^2}{n^{2(\gamma-1)}}$$ for an arbitrary $n \in\mathbb Z$. If $\varphi \in C^r$ then $|\hat{\varphi}(n)|\le C|n|^{-r}$ for some constant $C$, thus $$ \frac{|\hat{\varphi}(n)|^2}{1-\cos(2\pi\alpha n)} \le \frac{C^2}{2\pi \eta c^2} |n|^{-2r+2(\gamma-1)}$$ for every $n$. It is immediate that if $r>\gamma-\frac 12$, then the series (\ref{condition2}) is convergent. This implies \textbf{CLT} by Theorem 1.3 in \cite{Kipnis_Varadhan_1986}. \end{proof} Clearly, if $\varphi$ is a trigonometric polynomial, then the series (\ref{condition2}) becomes a finite sum and thus the condition is trivially satisfied. This yields another proposition, which will be used in the proof of Theorem \ref{T:2}. \begin{prop} \label{P:2} Let us assume $\alpha$ to be irrational. If $\varphi$ is a non-constant trigonometric polynomial with $\int\varphi(x)dx=0$ then there exists $\sigma>0$ such that $$\frac{\varphi(Y^\alpha_1)+\cdots+\varphi(Y^\alpha_n)}{\sqrt{n}} \Rightarrow \mathcal N (0, \sigma).$$ \end{prop} \section{Auxiliary results} Three lemmas will be pivotal in the proofs. Given an integer $q\ge 1$ and $\eta\in (0,1/2)$, define $G_q^\eta$ to be the subset of $\mathbb S^1$ containing all points whose distance from the set $\{ 0, \frac{1}{q}, \ldots, \frac{ q-1}{q} \}$ (where $\cos(2\pi q x)$ attains the value 1) is less than $\frac {\eta}{q}$. Clearly $\textrm{Leb}(G_q^\eta)=2\eta$ regardless of $q$. Recall that $(Y^{\alpha}_n)$ stands for the Markov process defined on some probability space $(\Omega, \mathcal F, \mathbb P)$ with transition function (\ref{E:1.1}) and $Y_1^\alpha \sim \textrm{Leb}$. \begin{lem}\label{L:1} Let $\alpha=\frac p q$, $\varphi(x)=2^{-q} \cos(2 \pi q x)$ and let $s\in (0,1)$. Let $N$ be an arbitrary natural number with $2^{-q-1}N^{1-s}>2$. If $\alpha'$ is sufficiently close to $\alpha$ then $$\mathbb P \bigg(\frac{\varphi(Y_1^{\alpha'})+\cdots+\varphi(Y_N^{\alpha'})}{N^{s}} > 2 \bigg) > \frac{1}{6}.$$ \end{lem} \noindent Note that the assertion is more difficult to obtain when $s$ is close to 1. \begin{proof} The result is a consequence of the invariance of $\varphi$ under the action of the rotation of angle $\alpha$. In particular the set $G_q^\eta$ is invariant for every $\eta>0$. Take $N$ as in the statement, and choose $\alpha'$ so close to $\alpha$ that $x+n\alpha' \in G_q^{1/6}$ for $|n|\le N$ and $x\in G_q^{1/12}$. By the definition of $G_q^\eta$, the value of $\varphi$ on $G_q^{1/6}$ is greater than or equal to $2^{-q}\cos(2\pi/6)\ge 2^{-q}\cdot 1/2$. Thus $\varphi(x+n\alpha')\ge 2^{-q}\cdot 1/2=2^{-q-1}$ for $|n|\le N$ and $x\in G_q^{1/12}$.
This yields $$\{ Y^{\alpha'}_1 \in G_q^{1/12} \} \subseteq \bigg\{ \frac{\varphi(Y_1^{\alpha'})+\cdots+\varphi(Y_N^{\alpha'})}{N} > 2^{-q-1} \bigg\}$$ $$ = \bigg\{ \frac{\varphi(Y_1^{\alpha'})+\cdots+\varphi(Y_N^{\alpha'})}{N^{s}} > 2^{-q-1}N^{1-s} \bigg\}$$ \noindent Using the facts that $Y_1^{\alpha'}\sim \textrm{Leb}$, $\textrm{Leb}(G_q^{1/12})=1/6$ and $2^{-q-1}N^{1-s}>2$ we have $$\mathbb P \bigg(\frac{\varphi(Y_1^{\alpha'})+\cdots+\varphi(Y_N^{\alpha'})}{N^{s}} > 2 \bigg)$$ $$\ge \mathbb P \bigg( \frac{\varphi(Y_1^{\alpha'})+\cdots+\varphi(Y_N^{\alpha'})}{N^{s}} > 2^{-q-1}N^{1-s} \bigg) \ge \mathbb P (Y_1^{\alpha'} \in G_q^{1/12}) = \frac{1}{6},$$ which yields the assertion. \end{proof} A slightly different lemma is the following. \begin{lem}\label{L:3} Let $\alpha$ be an irrational number, $s\in (1/2, 1)$, $c>0$, $\gamma\ge 2$. If $\alpha$ satisfies $$\bigg|\alpha - \frac p q \bigg|\le \frac{c}{q^\gamma}$$ for some pair of integers $p,q$, $q\not= 0$, then $$\mathbb P \bigg( \frac{\varphi(Y_1^{\alpha})+\cdots+\varphi(Y_N^{\alpha})}{N^s} > \frac{\sqrt{2}}{2\cdot (16 c)^{1-s}} \bigg) > \frac{1}{8},$$ where $\varphi(x)=q^{-(\gamma-1)(1-s)}\cos (2\pi q x)$, $N=\lfloor\frac{q^{\gamma-1}}{16c}\rfloor$. \end{lem} \begin{proof} If $|\alpha - \frac p q|\le \frac{c}{q^\gamma}$ and $|k|\le \frac{q^{\gamma-1}}{16c}$ then \begin{equation}\label{L:3.1} \bigg|k\alpha- k\frac p q \bigg|\le |k| \frac{c}{q^\gamma} < \frac{1}{16q}. \end{equation} Thus $z+n\alpha \in G^{1/8}_q$ for all $z\in G^{1/16}_q$ and integers $n$ with $|n|\le N$. On the other hand, the value of $\varphi$ on $G^{1/8}_q$ is greater than or equal to $q^{-(\gamma-1)(1-s)}\cos(\frac{2 \pi}{8})=\frac{\sqrt{2}}{2}\cdot q^{-(\gamma-1)(1-s)}$. By the same reasoning as in the proof of Lemma \ref{L:1} we have $$\{Y_1^\alpha \in G^{1/16}_q \} \subseteq \bigg\{ \frac{\varphi(Y_1^{\alpha})+\cdots+\varphi(Y_N^{\alpha})}{N^s} > \frac{\sqrt{2}}{2\cdot (16c)^{1-s}} \bigg\}$$ and consequently $$\mathbb P \bigg( \frac{\varphi(Y_1^{\alpha})+\cdots+\varphi(Y_N^{\alpha})}{N^s} > \frac{\sqrt{2}}{2\cdot (16c)^{1-s}} \bigg)\ge \mathbb P (Y_1^\alpha\in G^{1/16}_q) =\frac{1}{8}.$$ \end{proof} Take $\alpha=p/q$ rational ($p/q$ in irreducible form) and the corresponding process $(Y_n^\alpha)$. If the initial point $Y_1^\alpha$ is already known, then we also know that each $Y^\alpha_n$, $n\in \mathbb N$, is almost surely contained in the orbit of $Y_1^\alpha$ under the action of the rotation of angle $\alpha$, $\{Y_1^\alpha, Y_1^\alpha+\alpha, \ldots, Y_1^\alpha+(q-1)\alpha\}$ (this set is finite, since $\alpha$ is rational). The process $(Y^\alpha_n)$ can therefore be treated as a finite state Markov chain. If $q$ is odd, then the process $(Y^\alpha_n)$ treated as a finite state Markov chain is aperiodic and irreducible. Its stationary distribution is the uniform distribution on the set $\{Y_1^\alpha, Y_1^\alpha+\alpha, \ldots, Y_1^\alpha+(q-1)\alpha\}$ (every state is of measure $1/q$). It follows from Theorem 8.9 (page 131) \cite{Billingsley_1995} that \begin{equation}\label{E:exp.conv.} |\mathbb P (Y^\alpha_n = Y_1^\alpha+i\alpha) - 1/q| \le A \rho^n \quad \textrm{for $i=0,1, \ldots, q-1$}, \end{equation} where the constants $A$ and $\rho\in(0,1)$ are independent of $x$ (since neither the space nor the transition probabilities depend on $x$). Let $\varphi(x)=a\cos(2\pi q' x)$ for some $a>0$ and $q'$ not a multiple of $q$.
Since $p/q$ is assumed to be in irreducible form, $p/q\cdot q'$ is not an integer and thus we have $$1/q\sum_{i=0}^{q-1} \varphi(x+i\alpha)=0$$ for every $x\in \mathbb S^1$, which is equivalent to saying that the integral of $\varphi$ with respect to the stationary distribution of $(Y_n^\alpha)$ (treated as a finite state Markov chain) equals zero. Moreover, using (\ref{E:exp.conv.}) gives $$\bigg| \mathbb E\big( \varphi(Y^\alpha_n) \ \big| \ Y^\alpha_1 \big) \bigg| = \bigg| \sum_{i=0}^{q-1} \mathbb P ( Y^\alpha_n = Y^\alpha_1+i\alpha ) \cdot \varphi(Y^\alpha_1+i\alpha) - 1/q\sum_{i=0}^{q-1} \varphi(Y^\alpha_1+i\alpha ) \bigg|$$ $$\le \sum_{i=0}^{q-1} \|\varphi\|_\infty \big| \mathbb P ( Y^\alpha_n = Y^\alpha_1+i\alpha ) - 1/q \big| \le A q \|\varphi \|_\infty \rho^n$$ for $n\ge 1$. Thus \begin{equation}\label{E:3.1} \sum_{n=1}^\infty \bigg| \mathbb E\big( \varphi(Y^\alpha_n) \ \big| \ Y^\alpha_1 \big) \bigg| \le \frac{A q \|\varphi \|_\infty}{1-\rho} \quad \textrm{a.s.} \end{equation} The next lemma is essentially a consequence of the central limit theorem for finite state irreducible and aperiodic Markov chains. However, using (\ref{E:3.1}) we may deduce it in a simpler way. \begin{lem}\label{L:2} Let $\alpha=\frac p q$ be rational (in irreducible form) with $q$ odd. Let $\varphi(x)=a\cos(2\pi q' x)$ for some $a>0$ and $q'$ not a multiple of $q$. If $s>1/2$ then for every $\varepsilon>0$ and $\delta>0$ there exists $N$ such that $$\mathbb P \bigg( \frac{\big| \varphi(Y^\alpha_1)+\cdots+\varphi(Y^\alpha_n)\big|}{n^s} > \delta \bigg) < \varepsilon$$ for $n \ge N$. \end{lem} \begin{proof} This follows from the Chebyshev inequality. We have $$ \mathbb P \bigg( \frac{\big|\varphi(Y^\alpha_1)+\cdots+\varphi(Y^\alpha_n)\big|}{n^s} > \delta \bigg) \le \frac{\mathbb E \big( \varphi(Y^\alpha_1)+\cdots+\varphi(Y^\alpha_n) \big)^2}{\delta^2 n^{2s}} $$ $$= \frac{ \big| \mathbb E \big(\sum_{i=1}^n \varphi^2(Y_i^\alpha) + 2\sum_{i=1}^{n-1} \varphi(Y^\alpha_i)(\varphi(Y^\alpha_{i+1})+\cdots+\varphi(Y^\alpha_n) ) \big)\big|}{\delta^2 n^{2s}}$$ $$\le \sum_{i=1}^n \frac{\mathbb E\varphi^2(Y_i^\alpha)}{\delta^2 n^{2s}} + 2\sum_{i=1}^n \frac{\big| \mathbb E \big(\varphi(Y^\alpha_i) \mathbb E[ \varphi(Y^\alpha_{i+1})+\cdots+\varphi(Y^\alpha_n) | Y^\alpha_i ]\big)\big|}{\delta^2n^{2s}}$$ $$=\frac{1}{\delta^2 n^{2s-1}} \int\varphi^2(x)dx + 2\sum_{i=1}^n \frac{\big| \mathbb E \varphi(Y^\alpha_i) \big|\cdot \big| \mathbb E[ \varphi(Y^\alpha_{i+1})+\cdots+\varphi(Y^\alpha_n) | Y^\alpha_i ]\big|}{\delta^2n^{2s}}.$$ $$\le\frac{1}{\delta^2 n^{2s-1}} \int\varphi^2(x)dx + 2 \sum_{i=1}^n \frac{\big| \mathbb E \varphi(Y^\alpha_i) \big|\cdot \bigg( \big|\mathbb E[ \varphi(Y^\alpha_{i+1})| Y^\alpha_i ]\big| +\cdots+ \big|\mathbb E[ \varphi(Y^\alpha_n) | Y^\alpha_i ]\big|\bigg)}{\delta^2n^{2s}}.$$ By (\ref{E:3.1}) and the stationarity of the process each of the numerators in the sum does not exceed $\frac{2A q \|\varphi \|^2_\infty}{1-\rho}$, thus the second term is bounded by $$n\cdot \frac{2A q \|\varphi \|^2_\infty}{(1-\rho)\delta^2n^{2s}}=\frac{2A q \|\varphi \|^2_\infty}{(1-\rho)\delta^2n^{2s-1}}.$$ The entire expression tends to zero since $s>1/2$. The assertion follows. \end{proof} \section{Proof of Theorem \ref{T:1}} Fix an arbitrary $s\in (\frac 1 2, 1)$.
We are going to construct an angle $\alpha$ and an observable $\varphi$ with $\int\varphi(x)dx=0$ such that there exist infinitely many $n$'s with $$\mathbb P \bigg( \frac{\varphi(Y_1^\alpha)+\cdots+\varphi(Y_n^\alpha)}{n^{s}} > 1 \bigg) > \frac{1}{12}.$$ Consequently the process does not satisfy \textbf{CLT}, since \textbf{CLT} would imply that the above quantity tends to zero. First we shall define inductively a sequence of numbers $\alpha_k$ convergent to some $\alpha$ along with certain observables $\varphi_k$. Then we will put $\varphi=\sum_k \varphi_k$ and use some relations between $\alpha_k$ and $\varphi_k$ established during the induction process to get the above assertion. Put $\alpha_1=\frac 1 3=\frac {p_1}{q_1}$ (when we represent a rational number as a fraction of integers we always assume it to be in an irreducible form, so here $p_1=1$ and $q_1=3$), and set $\varphi_1(x)=2^{-q_1} \cos(2\pi q_1 x)$. Take $N_1$ so large that $2^{-q_1-1}N_1^{1-s}>2$ and apply Lemma \ref{L:1} to obtain an angle $\alpha_2=\frac{p_2}{q_2}$, with $q_2>q_1$ and $q_2$ odd, such that \begin{equation}\label{E:4.1} \mathbb P \bigg( \frac{\varphi_1(Y_1^{\alpha_2})+\cdots+\varphi_1(Y_{N_1}^{\alpha_2})}{N_1^{s}} >2 \bigg) >\frac 1 6. \end{equation} Define $\varphi_2(x)=2^{-q_2}\cos(2\pi q_2 x)$. Take $N_2>N_1$ so large that $2^{-q_2-1} N_2^{1-s}>2$. Clearly $q_1$ is not a multiple of $q_2$, hence by Lemma \ref{L:2} we can assume that $N_2$ is so large that \begin{equation}\label{E:4.11} \mathbb P \bigg( \frac{\big|\varphi_1(Y_1^{\alpha_2})+\cdots+\varphi_1(Y_{N_2}^{\alpha_2})\big|}{N_2^{s}} >\frac 1 4 \bigg) < \frac 1 4 \cdot \frac{1}{6}.\end{equation} Again use Lemma \ref{L:1} to obtain an angle $\alpha_3=\frac{p_3}{q_3}$, with $q_3>q_2$ and $q_3$ odd, such that \begin{equation}\label{E:4.13} \mathbb P \bigg( \frac{\varphi_2(Y_1^{\alpha_3})+\cdots+\varphi_2(Y_{N_2}^{\alpha_3})}{N_2^{s}} > 2 \bigg) > \frac{1}{6}. \end{equation} We also assume that the number $\alpha_3$ is so close to $\alpha_2$ that (\ref{E:4.1}) and (\ref{E:4.11}) still hold with $\alpha_2$ replaced by $\alpha_3$. This combined with (\ref{E:4.13}) gives $$\mathbb P \bigg( \frac{\varphi_i(Y_1^{\alpha_3})+\cdots+\varphi_i(Y_{N_i}^{\alpha_3})}{N_i^{s}} >2 \bigg) > \frac 1 6, \quad \textrm{for $i=1,2$,}$$ and $$\mathbb P \bigg( \frac{\big|\varphi_1(Y_1^{\alpha_3})+\cdots+\varphi_1(Y_{N_2}^{\alpha_3})\big|}{N_2^{s}} >\frac 1 4 \bigg) < \frac 1 4 \cdot \frac{1}{6}.$$ Assume $\alpha_k=\frac{p_k}{q_k}$, $N_i$, $\varphi_i$ are already defined, $k\ge 3$, $i<k$. These objects satisfy the relations \begin{equation}\label{E:4.51} \mathbb P \bigg( \frac{\big|\varphi_i(Y_1^{\alpha_{k}})+\cdots+\varphi_i(Y_{N_j}^{\alpha_{k}})\big|}{N_j^{s}} >\frac{1}{4^i} \bigg) < \frac{ 1}{4^i} \cdot \frac{1}{6} \quad \textrm{for $j=1,\ldots, k-1$, $i<j$,} \end{equation} and \begin{equation}\label{E:4.41} \mathbb P \bigg( \frac{\varphi_i(Y_1^{\alpha_{k}})+\cdots+\varphi_i(Y_{N_i}^{\alpha_{k}})}{N_i^{s}} > 2 \bigg) > \frac{1}{6} \quad \textrm{for $i=1,\ldots, k-1$.} \end{equation} \noindent Define $\varphi_k(x)=2^{-q_k} \cos(2 \pi q_k x)$ and take $N_k>N_{k-1}$ so large that $2^{-q_k-1} N_k^{1-s}>2$ and \begin{equation}\label{E:4.2} \mathbb P \bigg( \frac{\big|\varphi_i(Y_1^{\alpha_k})+\cdots+\varphi_i(Y_{N_k}^{\alpha_k})\big|}{N_k^{s}} >\frac{1}{4^i} \bigg) < \frac{ 1}{4^i} \cdot \frac {1}{6} \end{equation} for $i=1,\ldots, k-1$, by Lemma \ref{L:2}.
Use Lemma \ref{L:1} to get a number $\alpha_{k+1}=\frac{p_{k+1}}{q_{k+1}}$, with $q_{k+1}>q_k$, $q_{k+1}$ odd, such that \begin{equation}\label{E:4.3} \mathbb P \bigg( \frac{\varphi_k(Y_1^{\alpha_{k+1}})+\cdots+\varphi_k(Y_{N_k}^{\alpha_{k+1}})}{N_k^{s}} > 2 \bigg) > \frac{1}{6}. \end{equation} We should take care that $\alpha_{k+1}$ is so close to $\alpha_k$ that (\ref{E:4.51}), (\ref{E:4.41}) and (\ref{E:4.2}) still hold with $\alpha_k$ replaced by $\alpha_{k+1}$. With this modification, (\ref{E:4.51}) and (\ref{E:4.2}) become \begin{equation}\label{E:4.5} \mathbb P \bigg( \frac{\big|\varphi_i(Y_1^{\alpha_{k+1}})+\cdots+\varphi_i(Y_{N_j}^{\alpha_{k+1}})\big|}{N_j^{s}} >\frac{1}{4^i} \bigg) < \frac{ 1}{4^i} \cdot \frac{1}{6} \quad \textrm{for $j=1,\ldots, k$, $i<j$}. \end{equation} while (\ref{E:4.41}) and (\ref{E:4.3}) can be rewritten as \begin{equation}\label{E:4.4} \mathbb P \bigg( \frac{\varphi_i(Y_1^{\alpha_{k+1}})+\cdots+\varphi_i(Y_{N_i}^{\alpha_{k+1}})}{N_i^{s}} > 2 \bigg) > \frac{1}{6} \quad \textrm{for $i=1,\ldots, k$.} \end{equation} This completes the induction. Observe that there is no inconsistency in assuming that the $q_{k+1}$'s grow so fast that \begin{equation}\label{E:4.8} 2^{-q_{k+1}} N_i^{1-s}<4^{-(k-i)} \quad \textrm{for $i=1,\ldots k$.} \end{equation} In this way the sequences of numbers $(\alpha_k)$, $(N_k)$ and functions $(\varphi_k)$ are defined. Set $\alpha=\lim_{k\to \infty} \alpha_k$ and $\varphi=\sum_{k=1}^\infty \varphi_k$. When passing to the limit, inequality (\ref{E:4.5}) becomes \begin{equation}\label{E:4.6} \mathbb P \bigg( \frac{\big|\varphi_i(Y_1^{\alpha})+\cdots+\varphi_i(Y_{N_j}^{\alpha})\big|}{N_j^{s}} \ge \frac{1}{4^i} \bigg) \le \frac{ 1}{4^i} \cdot \frac {1}{6} \quad \textrm{for $j>1$, $i<j$}. \end{equation} while (\ref{E:4.4}) yields \begin{equation}\label{E:4.7} \mathbb P \bigg( \frac{\varphi_i(Y_1^{\alpha})+\cdots+\varphi_i(Y_{N_i}^{\alpha})}{N_i^{s}} \ge 2 \bigg) \ge \frac{1}{6} \quad \textrm{for $i\ge 1$.} \end{equation} The function $\varphi$ is analytic. Indeed, by design $$\varphi(x) = \sum_{k=-\infty}^\infty c_k e^{2\pi i k x},$$ where $c_k= \|\varphi_j\|_\infty=2^{-q_j}$ if $|k|=q_j$ and zero otherwise. Thus the Fourier coefficients of $\varphi$ decay exponentially fast, which implies that $\varphi$ is analytic\footnote{Indeed, $\varphi$ is defined as a series on the circle; however, by the exponential convergence, it can be extended to some neighbourhood of the unit disc $\mathbb D$ in the complex plane $\mathbb{C}$. Then $\varphi$ becomes a sum of holomorphic functions convergent uniformly on compact subsets of the domain of $\varphi$. Theorem 10.28 (page 214) in \cite{Rudin_1987} implies that $\varphi$ is holomorphic.}. Obviously $\int \varphi(x)dx=0$ by the Lebesgue convergence theorem. Observe also that (\ref{E:4.8}) combined with $\|\varphi_i\|_\infty=2^{-q_i}$ yields \begin{equation}\label{E:4.9} \sum_{i>k} \|\varphi_i\|_\infty N_k^{1-s}<\sum_{i=1}^\infty 4^{-i}=\frac 1 2. \end{equation} We are now in a position to complete the proof, i.e. to show that $$\mathbb P \bigg( \frac{\varphi(Y_1^{\alpha})+\cdots+\varphi(Y_{N_k}^{\alpha})}{N_k^{s}} \ge 1 \bigg) \ge \frac {1}{12}$$ for every $k$.
To this end fix $k$ and write $$\frac{\varphi(Y_1^{\alpha})+\cdots+\varphi(Y_{N_k}^{\alpha})}{N_k^{s}} = \sum_{i\le k} \frac{\varphi_i(Y_1^{\alpha})+\cdots+\varphi_i(Y_{N_k}^{\alpha})}{N_k^{s}}$$ $$+ \sum_{i>k} \frac{\varphi_i(Y_1^{\alpha})+\cdots+\varphi_i(Y_{N_k}^{\alpha})}{N_k^{s}}.$$ From (\ref{E:4.9}) it easily follows that the absolute value of the second summand on the right-hand side is less than $\frac 1 2$ almost surely. Thus $$\mathbb P \bigg ( \frac{\varphi(Y_1^{\alpha})+\cdots+\varphi(Y_{N_k}^{\alpha})}{N_k^{s}} \ge 1 \bigg ) \ge \mathbb P \bigg( \sum_{i\le k} \frac{\varphi_i(Y_1^{\alpha})+\cdots+\varphi_i(Y_{N_k}^{\alpha})}{N_k^{s}} \ge 3/2 \bigg)$$ $$ \ge \mathbb P \bigg ( \frac{\varphi_k(Y_1^{\alpha})+\cdots+\varphi_k(Y_{N_k}^{\alpha})}{N_k^{s}}\ge 2 \bigg ) -\sum_{i<k} \mathbb P \bigg ( \frac{\big|\varphi_i(Y_1^{\alpha})+\cdots+\varphi_i(Y_{N_k}^{\alpha})\big|}{N_k^{s}} \ge \frac{1}{4^i} \bigg). $$ By (\ref{E:4.6}) and (\ref{E:4.7}) it follows that $$\mathbb P \bigg ( \frac{\varphi(Y_1^{\alpha})+\cdots+\varphi(Y_{N_k}^{\alpha})}{N_k^{s}} \ge 1 \bigg ) \ge \frac{1}{6} - \sum_{i=1}^\infty\frac{1}{4^i} \cdot \frac{1}{6} = \frac{1}{12},$$ which is the desired assertion. \section{Proof of Theorems \ref{T:2} and \ref{T:3}} The entire section is devoted to the proof of Theorem \ref{T:2}. At the end we give a short remark on how to change the proof to obtain Theorem \ref{T:3}. Fix an irrational $\alpha$ and numbers $c>0$, $\gamma\ge 2$ such that \begin{equation}\label{E:6.1} \bigg|\alpha - \frac{p}{q} \bigg| \le \frac{c}{q^\gamma} \end{equation} for infinitely many pairs $p,q \in \mathbb Z$, $q\not = 0$. Take $r$ to be the largest possible integer with $r<\frac{\gamma}{2}-\frac 3 2$. The function $s\longmapsto (\gamma-1)(1-s)-1$ is decreasing, $s\in [\frac 1 2, 1)$, and its value at $s=\frac 1 2$ is $\frac{\gamma}{2}-\frac 3 2$, thus by continuity we can choose $s>\frac 1 2$ such that $r<(\gamma-1)(1-s)-1$. For this choice of $s$ we are going to construct an observable $\varphi$ with $\int\varphi(x)dx=0$ such that $$\mathbb P \bigg( \frac{\varphi(Y_1^{\alpha})+\cdots+\varphi(Y_n^{\alpha})}{n^s} > \frac{\sqrt{2}}{4\cdot (16 c)^{1-s}} \bigg) > \frac{1}{16}$$ for infinitely many $n$'s. Consequently \textbf{CLT} is violated. Take arbitrary $p_1, q_1 \in \mathbb Z$, $q_1\not = 0$, satisfying (\ref{E:6.1}). Set $\varphi_1(x) = q_1^{-(\gamma-1)(1-s)}\cos(2 \pi q_1 x)$ and apply Lemma \ref{L:3} to get \begin{equation}\label{E:6.3} \mathbb P \bigg( \frac{\varphi_1(Y_1^{\alpha})+\cdots+\varphi_1(Y_{N_1}^{\alpha})}{N_1^s} > \frac{\sqrt{2}}{2\cdot (16c)^{1-s}} \bigg) > \frac{1}{8}, \end{equation} where $N_1=\lfloor\frac{q_1^{\gamma-1}}{16c}\rfloor$. By Proposition \ref{P:2} the additive functional $(\varphi_1(Y^\alpha_1)+\cdots+\varphi_1(Y^\alpha_n))$ satisfies \textbf{CLT}, thus for $N$ sufficiently large \begin{equation}\label{E:6.2} \mathbb P \bigg( \frac{\varphi_1(Y_1^{\alpha})+\cdots+\varphi_1(Y_N^{\alpha})}{N^s} > \frac{\sqrt{2}}{4 \cdot (16c)^{1-s}} \cdot \frac{1}{4} \bigg) < \frac{1}{8}\cdot\frac{1}{4}. \end{equation} Let us take $p_2, q_2\in \mathbb Z$, $q_2\not = 0$, such that (\ref{E:6.1}) holds, $N_2=\lfloor\frac{q_2^{\gamma-1}}{16c}\rfloor$ satisfies (\ref{E:6.2}) and \begin{equation} q_2^{-(\gamma-1)(1-s)} \cdot N_1^{1-s}<\frac{\sqrt{2}}{4 \cdot (16c)^{1-s}} \cdot \frac 1 4 \end{equation} (this will imply that the inequality (\ref{E:6.3}) is not affected too much when $\varphi_1$ is replaced by $\varphi_1+\varphi_2$).
Lemma \ref{L:3} yields $$\mathbb P \bigg( \frac{\varphi_2(Y_1^{\alpha})+\cdots+\varphi_2(Y_{N_2}^{\alpha})}{N_2^s} > \frac{\sqrt{2}}{2\cdot (16c)^{1-s}} \bigg) >\frac{1}{8}.$$ Assume $N_k$, $p_k$, $q_k$ are already defined. Let us choose a pair $q_{k+1}, p_{k+1} \in \mathbb Z$ with (\ref{E:6.1}), where $q_{k+1}>q_k$ is so large that \begin{equation}\label{E:6.5} q_{k+1}^{-(\gamma-1)(1-s)} \cdot N_i^{1-s}< \frac{\sqrt{2}}{4\cdot (16c)^{1-s}} \cdot 4^{-(k-i)} \quad \textrm{for $i=1,\ldots, k$.} \end{equation} Moreover, using Lemma \ref{L:2} we demand that $q_{k+1}$ is so large that \begin{equation}\label{E:6.4} \mathbb P \bigg( \frac{\varphi_j(Y_1^{\alpha})+\cdots+\varphi_j(Y_{N_{k+1}}^{\alpha})}{N_{k+1}^s} > \frac{\sqrt{2}}{4 \cdot (16c)^{1-s}}\cdot \frac{1}{4^j} \bigg) < \frac{1}{8}\cdot\frac{1}{4^j} \quad \textrm{for $j \le k$,} \end{equation} where $N_{k+1}=\lfloor\frac{q_{k+1}^{\gamma-1}}{16c}\rfloor$. Finally, we use Lemma \ref{L:3} to get \begin{equation}\label{E:6.6} \mathbb P \bigg( \frac{\varphi_{k+1}(Y_1^{\alpha})+\cdots+\varphi_{k+1}(Y_{N_{k+1}}^{\alpha})}{N_{k+1}^s} > \frac{\sqrt{2}}{2\cdot (16 c)^{1-s}} \bigg) > \frac{1}{8}, \end{equation} where $\varphi_{k+1}(x)=q_{k+1}^{-(\gamma-1)(1-s)}\cos (2\pi q_{k+1} x)$. \noindent When the induction is complete, put $$\varphi(x)=\sum_{k=1}^\infty \varphi_k(x)=\sum_{k=1}^\infty q_k^{-(\gamma-1)(1-s)}\cos(2 \pi q_k x).$$ By assumption $r<(\gamma-1)(1-s)-1$; therefore, we can take $\varepsilon>0$ so that $r=(\gamma-1)(1-s)-(1+\varepsilon)$. If one differentiates this series $r$ times, then it still converges uniformly (with rate at least $q_k^{-(1+\varepsilon)}$). Therefore Theorem 7.17 (page 152) in \cite{Rudin_1976} yields that $\varphi$ is $C^r$. Now it remains to show that $$\mathbb P \bigg( \frac{\varphi(Y_1^{\alpha})+\cdots+\varphi(Y_{N_k}^{\alpha})}{N_k^s} > \frac{\sqrt{2}}{4\cdot (16 c)^{1-s}} \bigg) > \frac{1}{16}$$ for every $k\in\mathbb N$. We proceed analogously to the proof of Theorem \ref{T:1}. Fix $k$. We have $$\frac{\varphi(Y_1^{\alpha})+\cdots+\varphi(Y_{N_k}^{\alpha})}{N_k^s} = \sum_{i\le k} \frac{\varphi_i(Y_1^{\alpha})+\cdots+\varphi_i(Y_{N_k}^{\alpha})}{N_k^s}$$ $$+\sum_{i> k} \frac{\varphi_i(Y_1^{\alpha})+\cdots+\varphi_i(Y_{N_k}^{\alpha})}{N_k^s}.$$ The application of (\ref{E:6.5}) yields that the second term is bounded by $\frac{\sqrt{2}}{8\cdot (16 c)^{1-s}}$ a.s. Therefore $$\mathbb P \bigg( \frac{\varphi(Y_1^{\alpha})+\cdots+\varphi(Y_{N_k}^{\alpha})}{N_k^s} > \frac{\sqrt{2}}{4\cdot (16 c)^{1-s}} \bigg)$$ $$\ge \mathbb P \bigg( \sum_{i\le k} \frac{\varphi_i(Y_1^{\alpha})+\cdots+\varphi_i(Y_{N_k}^{\alpha})}{N_k^s} > \frac{3\sqrt{2}}{8\cdot (16 c)^{1-s}} \bigg)$$ $$\ge \mathbb P \bigg( \frac{\varphi_k(Y_1^{\alpha})+\cdots+\varphi_k(Y_{N_k}^{\alpha})}{N_k^s} > \frac{\sqrt{2}}{2\cdot (16 c)^{1-s}} \bigg)$$ $$-\sum_{i<k} \mathbb{P} \bigg( \frac{\varphi_i(Y_1^{\alpha})+\cdots+\varphi_i(Y_{N_k}^{\alpha})}{N_k^s} > \frac{\sqrt{2}}{4\cdot (16 c)^{1-s}}\cdot \frac{1}{4^i} \bigg). $$ The application of (\ref{E:6.4}) and (\ref{E:6.6}) yields Theorem \ref{T:2}. To demonstrate Theorem \ref{T:3}, observe that for $\alpha$ Liouville there exist sequences of integers $p_k$, $q_k$ with $$\bigg|\alpha - \frac{p_k}{q_k} \bigg| \le \frac{1}{q_k^k} \quad \textrm{for every $k$.}$$ The only difference from the proof of Theorem \ref{T:2} is that the $p$, $q$ are chosen from this sequence. Then again $\varphi=\sum_m \varphi_m$, and the series is uniformly convergent after differentiating it $r$ times, for an arbitrary $r$. This implies $\varphi\in C^\infty$.
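For completeness, the termwise estimate behind the smoothness claims above can be spelled out explicitly (this is only a short verification based on the definitions already introduced). In the setting of Theorem \ref{T:2}, differentiating the $k$-th summand $r$ times gives
$$\big\|\varphi_k^{(r)}\big\|_\infty = (2\pi q_k)^r\, q_k^{-(\gamma-1)(1-s)} = (2\pi)^r\, q_k^{-(1+\varepsilon)},$$
and since the $q_k$ form a strictly increasing sequence of positive integers, $q_k\ge k$ and hence $\sum_k \|\varphi_k^{(r)}\|_\infty \le (2\pi)^r \sum_k k^{-(1+\varepsilon)}<\infty$. The differentiated series therefore converges uniformly, which is exactly what is needed to apply Theorem 7.17 in \cite{Rudin_1976}.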
The rest of the proof remains unchanged. \section{Acknowledgements} This research was supported by the Polish National Science Centre grant Preludium UMO-2019/35/N/ST1/02363. I am grateful to Anna Zdunik for fruitful discussions and for sharing her notes with the proof of \textbf{CLT}. I would also like to thank Corinna Ulcigrai for providing references \cite{Bromberg_Ulcigrai_2018, Sinai_Ulcigrai_2008}. I am grateful to the two anonymous referees for many comments that helped improve the manuscript and for providing me with reference \cite{Weber_2009}. Finally, I would like to thank Michael Lin for discussions and for pointing out that \cite{Kipnis_Varadhan_1986} applies here. \bibliographystyle{alpha}
{ "attr-fineweb-edu": 1.740234, "attr-cc_en_topic": 12, "domain": "arxiv" }
\section{Introduction} \label{sec:intro} The LHC has seen an indication of a diphoton resonance at 750 GeV in the CMS \cite{CMS:2015dxe,CMS-PAS-EXO-16-018,Moriond-CMS} and ATLAS \cite{ATLAS-diphoton,Moriond-ATLAS,ATLAS-CONF-2016-018} experiments. Many potential classes of new physics explanations have been catalogued in refs. \cite{Franceschini:2015kwy,Staub:2016dxq}, and a large number of papers have suggested additional possibilities. It has turned out to be challenging to create models that are consistent with the properties of the resonance, do not violate constraints established by previous experiments, and do not include unreasonably large numbers of new particles. This letter proposes a new explanation for the resonance that has the virtue of being part of a model that is already established to be consistent with other existing constraints on new electroweak physics. A challenge in proposing new states to explain the diphoton signal is that they must be produced readily enough to agree with the observed cross-section while evading constraints imposed by the lack of observed signals in the dijet, $WW$, and $ZZ$ channels at 750 GeV. Following the prescription used in \cite{1512.04939}, for a narrow resonance ${\mathcal R}$, the resonance production cross section times diphoton branching ratio needed to explain the signal at the 13 TeV LHC is estimated to be\footnote{ A more recent estimate, including initial data from the 13 TeV run, yields a somewhat lower value for the estimated cross section, with a central value of 4.8--5.5 fb \cite{Franceschini:2016gxv}. These revised cross section estimates are within the range considered here.} \begin{equation} \sigma(pp \to {\mathcal R} \to \gamma \gamma )= 6.26 \pm 3.32\ \text{fb}\ . \label{eq:signal} \end{equation} In what follows, we will consider regions of parameter space that produce 13 TeV signal cross sections of between 3 and 9.4 fb. At the same time, exclusions on a $750$ GeV resonance ${\mathcal R}$ decaying to other standard model (SM) particles are determined by using the following set of values taken from 8 TeV LHC experimental analyses: \begin{description} \item \qquad\qquad $\sigma({pp \to {\mathcal R} \to Z \gamma }) < 8.2\ \text{fb}$ \cite{Aad:2014fha}, \qquad $\sigma({pp \to {\mathcal R} \to W^{+} W^{-} }) < 37\ \text{fb}$ \cite{Aad:2015agg}, \item \qquad\qquad $\sigma({pp \to {\mathcal R} \to Z Z }) < 19\ \text{fb}$ \cite{Aad:2015kna}, \qquad $\sigma({pp \to {\mathcal R} \to g g }) < 2200\ \text{fb}$ \cite{CMS-PAS-EXO-14-005}, \item \qquad\qquad $\sigma({pp \to {\mathcal R} \to t \bar{t} }) < 700\ \text{fb}$ \cite{Aad:2015fna}. \end{description} We will show in this letter that the observed diphoton resonance could be due to scalar and pseudoscalar states in the renormalizable coloron model \cite{Bai:2010dj}, a model that has been previously studied in the literature \cite{Chivukula:1996yr,Hill:1993hs,Dicus:1994sw} and is already known to be consistent with electroweak precision constraints and theoretical constraints \cite{Chivukula:2013xka,Chivukula:2014rka,Chivukula:2015kua}. More specifically, either the scalar or pseudoscalar state in the model could be responsible for the diphoton signal -- or the two states could be degenerate and jointly responsible. The model contains an extended color gauge group, and the new scalar and pseudoscalar arise as part of the sector that spontaneously breaks the extended group down to standard QCD.
In consequence, the new scalars do not couple directly to quarks, and their mixing with the Higgs (which could induce a small indirect coupling to quarks) must be nearly zero to comport with precision electroweak data. Rather, the new scalars couple to spectator quarks that help cancel gauge anomalies in the theory. Gluon pairs coupled to loops of these spectators allow for s-channel production of the scalars at the LHC; photon pairs likewise coupled to spectator loops allow for decay. Because production and decay both occur through loop-level processes, the dijet, $WW$, $ZZ$, and $Z\gamma$ rates can be small enough to be consistent with the LHC constraints. One last key element arises because the extended color sector yields an octet of massive coloron bosons that would be visible at the LHC. The most recent limits on colorons have been set by CMS, which finds that the coloron mass must exceed 5.1 TeV \cite{Khachatryan:2015dcf,Chivukula:2011ng,Chivukula:2013xla}. Because the scalars are part of the color symmetry-breaking sector, their vacuum expectation value ($v_{s}$) is linked to the mass of the coloron~\cite{Chivukula:2013xka}; hence, the new limit on the coloron mass means that $v_{s}$ must be at least $1.7$ TeV.\footnote{As explained below, in what follows we will use $v_{s}=2$ TeV for illustration. Larger values are also allowed, though the fermion content of the theory must be adjusted accordingly to accommodate the observed diphoton signal.} Putting all of this information together, we find that the renormalizable coloron model is consistent with all of the data if one adds a few weak-singlet spectators to complement the weak-doublet spectators in the original model. The presence of the additional spectators enables the new scalar and/or pseudoscalar to be visible as a diphoton resonance without producing dijet, $WW$, or $ZZ$ events that would contravene the LHC bounds. Moreover, the addition of weak-singlet spectators leaves the model still in agreement with precision electroweak constraints and has only a small impact on the details of how the other theoretical constraints ({\it e.g.} triviality) are satisfied.\footnote{A previous paper in the literature~\cite{Liu:2015yec} suggested that coloron decay to diphoton + jet might be the source of the LHC diphoton signal. That work did not include any contribution from scalar or pseudoscalar states, which are the focus of the present work. Moreover, it assumed a coloron mass of $2$ TeV, which is now well below the LHC's exclusion limit of $5.1$ TeV.} In the rest of this letter, we lay out the details of how the diphoton resonance appears in the renormalizable coloron model, what model components are necessary to ensure compliance with all phenomenological constraints, and what open questions should be studied if the resonance is confirmed by additional LHC data. In Section 2, we briefly review the elements of the renormalizable coloron model. Section 3 presents our calculations related to the diphoton signal observed at the LHC. Section 4 presents a discussion and summarizes our conclusions. \section{Elements of the Model} \label{sec:model} \subsection{Bosonic Sector} \label{subsec:boson} The renormalizable coloron model is based on an extended $SU(3)_{1c} \times SU(3)_{2c}$ gauge symmetry, where color $SU(3)_C$ is identified with the diagonal subgroup of the larger group.
The extended group is broken down via the expectation value of a $(3,\bar{3})$ scalar, $\Phi$, which may be decomposed into gauge eigenstates of QCD as follows \begin{equation}\label{Phi} \Phi = \frac{1}{\sqrt{6}} \pbrac{v_{s} + s_{0} + i {\mathcal A}} {\cal I}_{3\times 3} + \pbrac{G^a_H + i G^a_G}t^a \qquad \pbrac{t^a \equiv \lambda^a/2} \ . \end{equation} Here the $t^a$ are the generators of $SU(3)$, $v_{s}$ is the magnitude of the vacuum expectation value breaking the extended color symmetry, $s_0$ and ${\mathcal A}$ are singlet scalar and pseudoscalar fields, and $G^a_H$ and $G^a_G$ are color-octet scalar and pseudoscalar fields. The $G^a_G$ fields are absorbed by the massive color octet vector fields, the colorons, after symmetry breaking; the $G^a_H$ remain as physical states of the theory and their phenomenology has been studied in \cite{Bai:2010dj,Hill:1993hs}. The $s_0$ (after mixing with the Higgs field, as described below) and the ${\mathcal A}$ fields are candidate states for a diphoton resonance at 750 GeV. The model also includes a color-singlet weak-doublet Higgs field ($\phi$), whose neutral component develops a vacuum expectation value $v_h/\sqrt{2}$ (with $v_h \approx 246$ GeV) and is responsible for electroweak symmetry breaking. The scalar component of the Higgs field that remains in the spectrum after electroweak symmetry breaking ($h_0$) mixes with the $s_0$ scalar via a mixing angle $\chi$ to form mass eigenstate scalars \begin{align} s & = \sin\chi \, h_0 + \cos \chi\, s_0~,\\ h & = \cos \chi\, h_0 - \sin\chi\, s_0~. \end{align} An analysis of the model's full scalar potential phenomenology is given in \cite{Chivukula:2013xka,Chivukula:2014rka,Chivukula:2015kua}; one key result is that the value of $\sin\chi$ is constrained to be very small ($\lesssim 0.1$). The coloron mass in this model is given by \begin{equation}\label{eq:coloronmass} M^2_C = \frac{v^2_s}{6}(g^2_{s_1} + g^2_{s_2})~, \end{equation} where $g_{s_{1,2}}$ are the coupling constants of the two $SU(3)$ gauge-groups. The couplings $g_{s_{1,2}}$ cannot be too large if the theory is to remain perturbative. Following \cite{Dobrescu:2009vz}, therefore, we require that the large-$N_c$ corrected loop-counting factor be less than one, \begin{equation}\label{eq:perturbative} \frac{N_c\ g^2_{s_{1,2}}}{16 \pi^2}\le 1~. \end{equation} Using Eq. \ref{eq:coloronmass} for $N_c=3$, we then find immediately that \begin{equation}\label{eq:massbound} M_C \lesssim 3.0 \cdot v_{s}~, \end{equation} and hence, from the experimental lower bound of 5.1 TeV on the coloron mass reported by CMS \cite{Khachatryan:2015dcf}, we deduce that $v_{s} \gtrsim 1.7$ TeV. This will have a significant impact on the model's phenomenology. For the purposes of illustration, in the rest of the paper we choose $v_{s} = 2$ TeV. As we will see, one could always choose larger values of $v_s$ as well.\footnote{Values of $v_s$ smaller than 2 TeV will result, via Eq. \ref{eq:coloronmass} and the experimental lower bound of 5.1 TeV on the coloron mass, in large values of $g_{s_{1,2}}$ which can result in the scalar sector's having a Landau pole at very low energy scales. 
See discussion in Appendix \ref{sec:app-RGE}.} \subsection{Fermion Sector} \label{subsec:fermion} As described in \cite{Chivukula:2015kua}, it is possible for the various chiralities and flavors of the standard quarks to be assigned charges under $SU(3)_{1c} \times SU(3)_{2c}$ in a range of ways, allowing for flavor-dependent and potentially chiral couplings to the colorons \cite{Frampton:1987dn,Bagger:1987fz,Hill:1991at,Chivukula:1996yr}. The model can also contain fermions beyond those identified with ordinary quarks. In particular, if the strong couplings of the ordinary fermions are taken to be chiral, additional spectator fermions will be {\it required} to cancel $SU(3)_{1c} \times SU(3)_{2c}$ anomalies. While arbitrary generation-changing flavor-dependent coloron couplings are strongly constrained by limits on flavor-changing neutral-currents~\cite{Chivukula:2013kw}, next-to-minimal flavor violation can be successfully implemented in a renormalizable coloron model so as to reproduce the observed fermion masses and mixings ~\cite{Chivukula:2013kw}. In what follows, therefore, we will assume that any flavor-dependent couplings are (at least to a good approximation) generation preserving, and that the subsequent coloron couplings are therefore flavor-diagonal. Furthermore, for simplicity of presentation, we will assume that both right-handed quarks of a given generation ({\it e.g.}, $t_R$ and $b_R$) have the same color properties. This last assumption can easily be relaxed in the analysis below, but unnecessarily complicates the discussion of the phenomenology at hand. Even with the constraints described above, there are still several possibilities for assigning the color charges of the ordinary quarks. For instance, if all three generations of the ordinary quarks are chirally charged under the extended color gauge group (e.g., with all left-handed quarks charged under $SU(3)_{1c}$ and all right-handed quarks charged under $SU(3)_{2c}$), then three corresponding spectator fermion generations (carrying opposite chiral charges with respect to the quarks) are required to cancel the induced anomalies. On the other hand, if the chiral charge assignment of the third quark generation is opposite to those of the first two generations, only one additional spectator fermion generation (one up-like and one down-like spectator) is necessary. When all ordinary quarks are vectorially charged under the extended color interactions, no anomalies are induced and no spectator fermions are needed. In the simplest cases we would generally expect there to be between zero and three chiral doublets of spectator fermions to cancel the anomalies of the extended color group. In what follows we will consider a slight generalization of these possibilities. We will consider spectators charged as follows under $SU(3)_{1c} \times SU(3)_{2c} \times SU(2)_L \times U(1)_Y$: \begin{itemize} \item $N_Q$ weak doublets $Q_{L,R}$, with the $Q_L$ transforming as a $(3,1,2)_{1/6}$ and the $Q_R$ as a $(1,3,2)_{1/6}$. \item $n_q$ weak singlet pairs, $q_{L,R}$, with the {$q_R$} transforming as $(3,1,1)_{2/3,-1/3}$ and { $q_L$} transforming as $(1,3,1)_{2/3,-1/3}$. \end{itemize} With these assignments, the effective or net number of spectator doublets whose chiral charges under $SU(3)_{1c} \times SU(3)_{2c}$ help cancel the $SU(3)$ anomalies of the ordinary generations is $N_Q-n_q$. We therefore expect $0\le N_Q-n_q\le 3$. 
Moreover, the following Yukawa couplings give masses proportional to $v_{s}$ to the spectator fermions \begin{equation}\label{Lferm} - \frac{\sqrt{6}\, M_Q}{v_{s}} \bar{Q}^{k}_{L} \, \Phi \, Q^{k}_{R} - \frac{\sqrt{6}\, M_q}{v_{s}} \bar{q}^{\ell}_{L} \, \Phi^\dagger \, q^{\ell}_{R} +h.c.~, \end{equation} where $k$ and $\ell$ index the $N_Q$ and $n_q$ families of spectators and, for convenience, we have taken each kind of spectator to be mass-degenerate.\footnote{We have also neglected additional Yukawa couplings of the form $\bar{Q}^k_L \phi q^\ell_R + h.c.$ , where $\phi$ is the Higgs field, which lead to weak-scale mixing among the various spectator fermions. Since we know that $v_{s} \gg v_h$, these couplings lead to small effects which are irrelevant to the analysis given below.} \section{The Diphoton Signal at LHC} \label{sec:diphoton} We will now demonstrate that the scalar $s$ or pseudoscalar ${\mathcal A}$ boson of the renormalizable coloron model could give rise to a 750 GeV diphoton resonance consistent with the signal reported from early high-energy LHC data \cite{CMS:2015dxe,ATLAS-diphoton}. Following the procedure in \cite{Chivukula:2013xka,Chivukula:2014rka}, we construct an effective Lagrangian coupling the scalar and pseudoscalar bosons to the gauge bosons (having integrated out the heavy color degrees of freedom) and the ordinary fermions. We then use this effective Lagrangian to compute the relevant production cross-sections and branching ratios. We outline the relevant computations in appendices \ref{sec:appendix-i} and \ref{sec:appendix-ii}; details may be found in \cite{Chivukula:2013xka,Chivukula:2014rka}. In the renormalizable coloron model, the width of the $750$ GeV resonance, be it scalar or pseudoscalar, must be small.\footnote{For the scalar, we expect $\sin\chi\sim 0$ in order to be consistent with phenomenological constraints \cite{Chivukula:2013xka,Chivukula:2014rka,Chivukula:2015kua} so that both scalar and pseudscalar decays are dominated by loop induced processes and the total width must therefore be small.} Hence it is possible to evaluate the total production cross-section in the Narrow Width Approximation (NWA), \begin{equation} \sigma_{s,{\mathcal A}}(gg \to s,\,{\mathcal A} \to \gamma \gamma) = 16\pi^2 \cdot {\cal N} \cdot \frac{ \Gamma_{s,\,{\mathcal A}}}{m_s} \cdot BR(s,\,{\mathcal A} \to \gamma \gamma) \cdot BR(s,\,{\mathcal A}\to gg) \cdot \left[ \frac{d L_{gg}}{d\hat{s}}\right]_{\hat{s} = m^2_s}~. \label{eq:simplest} \end{equation} Here ${\cal N}$ is a ratio of spin and color counting factors which, for a color-singlet scalar produced via gluon fusion is: \begin{equation} {\cal N} = \frac{N_{S_s}}{N_{S_g} N_{S_g}} \cdot \frac{C_{s}}{C_g C_g} = \frac{1}{4}\cdot \frac{1}{64}, \end{equation} where $N_i$ and $C_i$, respectively, count the number of spin- and color-states for the initial-state partons (denominator) and the resonance (numerator). Within the cross-section formula, $L_{gg}$ is the gluon luminosity function, which we evaluate using the {\tt CTEQ6L1} parton distribution function~\cite{Pumplin:2002vw} at both 8 and 13 TeV. In order to better match our theory predictions to the experimental results, we determine the NNLO $K$-factor using the {\tt SuSHi} program~\cite{Harlander:2012pb} in the infinite quark mass limit. We use the {\tt CT14NNLO} pdf set~\cite{Dulat:2015mca} and set the renormalization and factorization scales to be $\mu_R=\mu_F=750 $ GeV. 
We find the $K$-factor to be $K_{NNLO/LO}^{13 TeV}\sim 2.9$ and $K_{NNLO/LO}^{8 TeV}\sim 3.2$ and we apply this to our tree-level cross-section results to make the comparison with data more meaningful. For both the $s$ and ${\mathcal A}$ states in the renormalizable coloron model (when $\sin\chi \approx 0$), the branching ratio to $gg$ dominates so long as all of the other scalars, colorons, and spectator fermions are heavy. In fact, $BR(s,\, {\mathcal A} \to gg) \approx 1$ so that the expression for the cross-section in Eq. \ref{eq:simplest} is proportional to $\Gamma_{s,\, {\mathcal A}} \cdot BR(s,\, {\mathcal A} \to \gamma \gamma) \approx \Gamma(s,\, {\mathcal A} \to \gamma \gamma)$. Furthermore, as shown in appendices \ref{sec:appendix-i} and \ref{sec:appendix-ii}, the partial width to diphotons is dominated by the contribution from loops of the spectator quarks $Q$ and $q$. The resonant diphoton production rate is, therefore, proportional to the square of the total number of spectator fermions $(N_Q + n_q)^2$ and inversely proportional to $v_{s}^2$. Thus, as we illustrate below, for a given value of $v_{s}$ some minimum number of spectators $N_Q + n_q$ will be required to make the predicted signal match the data.\footnote{The corresponding decay to $WW$ and $ZZ$ arise through a similar loop process, but in this case only the weak-doublet spectator fermions contribute significantly -- and hence this amplitude (exactly, for $WW$, and only approximately for $ZZ$) is proportional to $N_Q$.} In Fig.~\ref{fig:scalar} we illustrate the region of parameter space in the renormalizable coloron model that can accomodate the observed diphoton signal if this signal arises solely from the scalar $s$ boson. These plots are for the parameter values $ M_{Q,q} \gg 750\, {\rm GeV},\, m_\mathcal{A} =m_{G_H}=1\, {\rm TeV},\, {\rm and}\, v_{s} =2 \, {\rm TeV}$ -- though their appearance depends only weakly on $M_{Q,q}$, $m_\mathcal{A}$ and $m_{G_H}$ so long as these particles are heavy enough to prevent $s$ from decaying to pairs of them. For $v_{s}=2$ TeV and $\sin\chi=0$, the decay width to diphotons is sufficiently large to reproduce the resonance diphoton cross section of Eq. \ref{eq:signal} provided that the spectators are sufficiently numerous $(9 \lesssim N_Q + n_q \lesssim 14)$; the corresponding region is indicated in the left plot by the green (diagonally hatched) region. For larger values of $v_{s}$, the required value of $N_Q + n_q$ rises proportionally. As noted in Appendix~\ref{sec:app-RGE}, the upper third of the allowed area may be excluded by the need to avoid a Landau pole in the RGE running of the weak $SU(2)$ gauge coupling. Also plotted are the constraints arising from the non-observation of a $WW$~\cite{Aad:2015agg}, $ZZ$~\cite{Aad:2015kna}, and dijet resonance~\cite{CMS-PAS-EXO-14-005} of the same mass. Evidently, the most difficult constraint to satisfy in this model when $\sin\chi = 0$ is simply of having a sufficiently large diphoton signal. If one increased the value of $v_{s}$ and increased $N_Q + n_q$ proportionally so as to keep the signal strength in the diphoton channel constant, the minimum number of spectator quarks required to violate the dijet (Eq. \ref{eqn:sggwidth}) or diboson (e.g., Eq. \ref{eqn:sWWwidth}) bounds would also rise, leaving the model consistent with the data. If the Higgs mixing angle $\sin\chi$ is not zero, two separate effects start to suppress the branching ratio to diphotons, making it difficult to sustain a large enough signal. 
First the decays to $WW$ and $ZZ$ become significant and start to cut into the available parameter space. Second there is a destructive interference in the diphoton loop amplitude between the contributions of spectator fermions and $W$ bosons running in the loop; Fig. \ref{fig:BRvariation} illustrates that as $\sin\chi$ grows, the $WW$ and $ZZ$ widths grow while the diphoton width falls. The right hand panel of Fig. \ref{fig:scalar} demonstrates that, to be consistent with the putative signal, the $s_0 - h_0$ mixing must therefore be very small, with $|\sin\chi | \lesssim 0.01$. { Note that, as for any model in which a scalar's gaining a vacuum expectation value is the origin of the diphoton signal, a small mixing angle is not the natural consequence of any symmetry and it only occurs for a narrow range of parameters in the scalar potential.} \begin{figure} \centering \includegraphics[width= 0.47\textwidth]{scalar-constraint_Ma_Mgh-1TeV_vs-2000GeV.png} \includegraphics[width= 0.47\textwidth]{scalar-Constraint-vs-chi.png} \caption{The heavy black rim (left pane) encloses the region of the $N_Q$ vs $n_q$ plane for which the renormalizable coloron model's scalar boson $s$ is consistent with the $750$ GeV diphoton signal and other constraints. The region shown in green (diagonally hatched) matches the 1-$\sigma$ resonance diphoton cross section of Eq. \ref{eq:signal}. Also shown are the regions excluded by $s\to WW$ searches depicted with cross-hatching \cite{Aad:2015agg}, by $s \to ZZ $ searches depicted in blue (dark gray) \cite{Aad:2015kna} and by dijet searches depicted in red (lighter gray) \cite{CMS-PAS-EXO-14-005}. The regions with translucent gray overlays correspond to values of $(N_Q,n_q)$ that are not theoretically preferred (see text for details). \textbf{Left:} Plot in the $(N_Q,n_q)$ plane for the values $\sin\chi=0,\, m_\mathcal{A} =m_{G_H}=1\, {\rm TeV},\, {\rm and}\, v_{s} =2\, {\rm TeV}$. Depending on the spectator fermions included, this region is sensitive to the RGE constraints discussed in Appendix~\ref{sec:app-RGE}. \textbf{Right:} Plot in the $(\sin\chi,v_{s})$ plane for parameter values $m_\mathcal{A} =m_{G_H}=1\, {\rm TeV},\,{\rm and}\, (N_Q, n_q) = (6,5)$. \label{fig:scalar} } \end{figure} The left pane of Fig. \ref{fig:pseudoscalar} shows the region of parameter space in the renormalizable coloron model that can accomodate the dipoton signal via a 750 GeV pseudoscalar ${\mathcal A}$ boson. Here there is no $\sin\chi$ dependence, and we find that the diphoton signal can be accomodated with fewer spectator fermions, $5 \lesssim N_Q + n_q \lesssim 8$ for $v_{s}=2$ TeV.\footnote{ This is due to the fact that the coloron and other scalars, which dominate the $s \to gg$ decay, do not contribute to $\mathcal{A} \to gg$ decays. In case of the scalar, there is destructive interference between the bosonic contributions and the spectator fermion loops, so $N_Q + n_q$ is pushed toward larger values where the fermionic contribution dominates.} As with the scalar, if $v_{s}$ is increased, the total number of spectator fermions must be increased proportionally, but the constraints from non-observation of dijet and diboson decays do not become harder to satisfy. As noted in Appendix~\ref{sec:app-RGE}, the extent of the allowed region should be unaffected by the need to avoid a Landau pole in the RGE running of the gauge couplings. 
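As a rough numerical illustration of these scaling statements, the short script below reproduces the $M_C \lesssim 3.0\, v_{s}$ coefficient of Eq.~(\ref{eq:massbound}) and the resulting $v_{s} \gtrsim 1.7$ TeV bound, and shows how the required number of spectators grows if $v_{s}$ is raised. It is only an illustrative sketch: it assumes the usual diagonal-subgroup matching $1/g_s^2 = 1/g_{s_1}^2 + 1/g_{s_2}^2$ for the QCD coupling and a representative value $\alpha_s \approx 0.08$ near the matching scale, neither of which is spelled out above.
\begin{verbatim}
import math

N_c      = 3
alpha_s  = 0.08                       # assumed QCD coupling near ~2 TeV
g_s_sq   = 4.0 * math.pi * alpha_s    # g_s^2 of the diagonal (QCD) subgroup
g_max_sq = 16.0 * math.pi**2 / N_c    # perturbativity: N_c g^2 / (16 pi^2) <= 1

# Maximize g_s1^2 + g_s2^2 subject to the assumed matching relation by
# pushing one coupling to its perturbative limit.
g1_sq = g_max_sq
g2_sq = 1.0 / (1.0 / g_s_sq - 1.0 / g1_sq)
ratio = math.sqrt((g1_sq + g2_sq) / 6.0)   # M_C / v_s from the coloron mass formula
print("M_C / v_s  <~ %.2f" % ratio)                 # ~3.0
print("v_s        >~ %.0f GeV" % (5100.0 / ratio))  # ~1.7 TeV for M_C > 5.1 TeV

# The diphoton rate scales as (N_Q + n_q)^2 / v_s^2, so the spectator count
# needed to hold the signal fixed grows linearly with v_s.  For example, the
# pseudoscalar window 5 <= N_Q + n_q <= 8 quoted for v_s = 2 TeV becomes:
for v_s in (2000.0, 4000.0):
    lo, hi = 5 * v_s / 2000.0, 8 * v_s / 2000.0
    print("v_s = %.0f GeV:  %d <= N_Q + n_q <= %d" % (v_s, round(lo), round(hi)))
\end{verbatim}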
\begin{figure} \centering \includegraphics[width=0.47\textwidth]{Scalar-BR-chi-NQ_6-nq_5-vs_2000-Ma_1000_MGH_1000.png} \caption{Scalar boson ($s$) decay width and branching ratios as a function of $\sin\chi$, for the parameter values $m_\mathcal{A} = m_{G_H} = 1 \text{ TeV}$ and $(N_Q, n_q) = (6,5)$. \label{fig:BRvariation}} \end{figure} \begin{figure} \includegraphics[width= 0.45\textwidth]{Pseudo-scalar-constraint_vs-2000_NQ_nq.png} \includegraphics[width= 0.45\textwidth]{Degenerate-constraint_vs-2000GeV_MGh-1TeV.png} \caption{The heavy black rim encloses the region of the $N_Q$ vs $n_q$ plane for which the renormalizable coloron model's pseudoscalar boson ${\cal A}$ alone (left pane) or a degenerate $s,{\cal A}$ pair (right pane) is consistent with the $750$ GeV diphoton signal and other constraints. Details are as in the caption for the left pane of Fig. \ref{fig:scalar} (except that $\sin\chi$ is irrelevant for the ${\cal A}$ boson). The allowed region in each pane is unaffected by the RGE constraints in Appendix~\ref{sec:app-RGE}. \label{fig:pseudoscalar}} \end{figure} Finally, in the right pane of Fig.~\ref{fig:pseudoscalar} we consider the case in which the scalar and pseudoscalar are {roughly degenerate (to within experimental resolution)},\footnote{ There is no symmetry that would enforce strict degeneracy between the scalar and psuedo-scalar resonances in this model. For other examples of models of the diphoton signal involving degenerate resonances see~\cite{Wang:2015omi,Bai:2016rmn,Djouadi:2016eyy}.} and both have masses of 750 GeV. Here we see that $4 \lesssim N_Q + n_q \lesssim 7$ can accommodate the signal when $v_{s}\sim 2$ TeV. For larger values of $v_{s}$, proportionally larger values of $N_Q + n_q $ would be able to explain the diphoton signal without generating dijet or diphoton rates in excess of the bounds. Here too, as shown in Appendix~\ref{sec:app-RGE}, the extent of the allowed region should be unaffected by the need to avoid a Landau pole in the RGE running of the gauge couplings. In the quasi-degenerate case, it is interesting to note that the pseudoscalar contribution to the diphoton rate is predicted to be larger than that of the scalar contribution to the signal. In fact, one could determine the relative sizes of the scalar and pseudoscalar components of the signal through angular observables (e.g., in $\mathcal{A} ,s \to ZZ \to 4l$ decays) as a test of whether degenerate $\mathcal{A}$ and $s$ were contributing (in a manner analogous to the spin-parity measurement of the Higgs boson \cite{Khachatryan:2014kca,Aad:2015mxa}). In Fig.~\ref{fig:pseudoscalar-ratio} we show the ratio \begin{equation} R_{s/\mathcal{A}} = \frac{\sigma(pp \to s \to ZZ)}{\sigma(pp \to \mathcal{A} \to ZZ)}\ , \end{equation} as a function of $N_Q$ for the three possible physical cases: $n_q=N_Q$, $n_q=N_Q -1$ and $n_q=N_Q-3$. Note that this ratio is independent of the value of $v_{s}$. The dip observed in the ratio for $N_Q\sim4,5$ is due to a cancellation between the fermion and boson (coloron and other scalar) loops that causes the $s\to gg$ branching fraction (and hence the overall scalar production cross-section) to vanish. 
Experimental determination of this ratio could help determine the value of $N_Q$ and $n_q$.\footnote{We have neglected interference effects between $s$ and $\mathcal{A}$ since the decay widths of both particles are very small and since, due to the CP symmetry of the total cross-section, there is no contribution to the total cross-section from the CP-odd interference term.} \begin{figure} \includegraphics[width= 0.45\textwidth]{Degen_ratio_sA_ZZ.png} \caption{Ratio of the scalar to the pseudoscalar component of the diphoton signal as a function of $N_Q$, for quasi-degenerate $m_\mathcal{A} =m_s=750$ GeV. All three possible physical cases are shown $n_q=N_Q$ (blue circles), $n_q=N_Q -1$ (red squares) and $n_q=N_Q-3$ (black triangles).\\ \label{fig:pseudoscalar-ratio} } \end{figure} \section{Discussion} \label{sec:conclusions} We have shown that the scalar sector of the renormalizable coloron model can be the source of the 750 GeV resonance for which evidence has been observed at the LHC. Either the scalar state $s$, the pseudoscalar state ${\cal A}$, or both (if degenerate) could play the role of the new diphoton resonance, while remaining consistent with precision electroweak physics and constraints from triviality and unitarity. If the 750 GeV resonance is verified by further analysis and accumulation of more statistics, there are clear avenues for verifying that the renormalizable coloron model is the underlying new physics involved. The most straightforward would be to look for direct evidence of the coloron resonance in dijet invariant mass or dijet angular distributions; indeed, the LHC experiments routinely look for signs of high-mass dijet resonances in each newly-collected data set ({\it e.g.} \cite{Khachatryan:2015dcf}). Alternatively, one could seek evidence within the LHC data for a second new spinless state ($s$ or ${\cal A}$) at a different mass or look for signs of the colored scalars $G^a_H$ as suggested in \cite{Bai:2010dj,Hill:1993hs}. In addition, one could study other decay modes of the 750 GeV resonance to predict the expected number of spectator quarks or to differentiate between the scalar, pseudoscalar and degenerate cases discussed here. In the longer term, aspects of the model that would warrant further study would include the detailed impact of the weak-singlet spectator quarks upon the theoretical constraints on the model and the precise flavor structure of the quark sector in the presence of the various spectators. We look forward to seeing what the next run reveals. \begin{acknowledgments} This material is based upon work supported by the National Science Foundation under Grant No. PHY-1519045. KM acknowledges the support of the Michigan State University High Performance Computing Center and the Institute for Cyber Enabled Research. \end{acknowledgments}
{ "attr-fineweb-edu": 1.930664, "attr-cc_en_topic": 12, "domain": "arxiv" }
\chapter*{Abstract}% \addcontentsline{toc}{chapter}{\numberline{}Abstract}% Although compression has been widely used for decades to reduce file sizes (thereby conserving storage capacity and network bandwidth when transferring files), there has been limited use of hardware-based compression within modern memory hierarchies of commodity systems. Why not? Especially as programs become increasingly data-intensive, the capacity and bandwidth within the memory hierarchy (including caches, main memory, and their associated interconnects) have already become increasingly important bottlenecks. If hardware-based data compression could be applied successfully to the memory hierarchy, it could potentially relieve pressure on these bottlenecks by increasing effective capacity, increasing effective bandwidth, and even reducing energy consumption. In this thesis, we describe a new, practical approach to integrating hardware-based data compression within the memory hierarchy, including on-chip caches, main memory, and both on-chip and off-chip interconnects. This new approach is fast, simple, and effective in saving storage space. A key insight in our approach is that access time (including decompression latency) is critical in modern memory hierarchies. By combining inexpensive hardware support with modest OS support, our holistic approach to compression achieves substantial improvements in performance and energy efficiency across the memory hierarchy. Using this new approach, we make several major contributions in this thesis. First, we propose a new compression algorithm, \emph{Base-Delta-Immediate Compression} (\emph{B$\Delta$I\xspace}), that achieves high compression ratio with very low compression/decompression latency. B$\Delta$I\xspace exploits the existing low dynamic range of values present in many cache lines to compress them to smaller sizes using Base+Delta encoding. Second, we observe that the compressed size of a cache block can be indicative of its reuse. We use this observation to develop a new cache insertion policy for compressed caches, the \emph{Size-based Insertion Policy} (\emph{SIP}), which uses the size of a compressed block as one of the metrics to predict its potential future reuse. Third, we propose a new main memory compression framework, \emph{Linearly Compressed Pages} (\emph{LCP}), that significantly reduces the complexity and power cost of supporting main memory compression. We demonstrate that \emph{any} compression algorithm can be adapted to fit the requirements of LCP, and that LCP can be efficiently integrated with the existing cache compression designs, avoiding extra compression/decompression. Finally, in addition to exploring compression-related issues and enabling practical solutions in modern CPU systems, we discover new problems in realizing hardware-based compression for GPU-based systems and develop new solutions to solve these problems. \chapter*{Acknowledgments} \addcontentsline{toc}{chapter}{\numberline{}Acknowledgments}% First of all, I would like to thank my advisers, Todd Mowry and Onur Mutlu, for always trusting me in my research experiments, giving me enough resources and opportunities to improve my work, as well as my presentation and writing skills. I am grateful to Michael Kozuch and Phillip Gibbons for being both my mentors and collaborators. I am grateful to the members of my PhD committee: Kayvon Fatahalian, David Wood, and Doug Burger for their valuable feedback and for making the final steps towards my PhD very smooth. 
I am grateful to Deb Cavlovich, who allowed me to focus on my research by magically solving all other problems. I am grateful to the SAFARI group members, who were more than just lab mates. Vivek Seshadri was always supportive of my crazy ideas and was willing to dedicate his time and energy to help me in my work. Chris Fallin was a rare example of pure smartness mixed with a great work ethic, but still always had time for an interesting discussion. From Yoongu Kim I learned a lot about the importance of details, and hopefully I learned something from his aesthetic sense as well. Lavanya Subramanian was my fellow cubicle mate who showed me an example of how to successfully mix work with personal life and how to be supportive of others. Justin Meza helped me to improve my presentation and writing skills in a very friendly manner (as with everything else he does). Donghyuk Lee taught me everything I know about DRAM and was always an example of work dedication for me. Nandita Vijaykumar was my mentee, collaborator, and mentor all at the same time, but, most importantly, a friend who was always willing to help. Rachata Ausavarungnirun was our food guru and one of the most reliable and friendly people in the group. Hongyi Xin reminded me about everything I almost forgot from biology and history classes, and also taught me everything I know now in the amazing field of bioinformatics. Kevin Chang and Kevin Hsieh were always helpful and supportive when it mattered most. Samira Khan was always available for a friendly chat when I really needed it. Saugata Ghose was my rescue guy during our amazing trip to Prague. I also thank other members of the SAFARI group for their assistance and support: HanBin Yoon, Jamie Liu, Ben Jaiyen, Yixin Luo, Yang Li, and Amirali Boroumand. Michelle Goodstein, Olatunji Ruwase, and Evangelos Vlachos, senior PhD students, shared their experience and provided a lot of feedback early in my career. I am grateful to Tyler Huberty and Rui Cai for contributing a lot to my research and for being excellent undergraduate/masters researchers who selected me as a mentor from all the other options they had. During my time at Carnegie Mellon, I met a lot of wonderful people: Michael Papamichael, Gabe Weisz, Alexey Tumanov, Danai Koutra, and many others who helped and supported me in many different ways. I am also grateful to the people in the PDL and CALCM groups for accepting me into their communities. I am grateful to my internship mentors for making my work at their companies mutually beneficial. At Microsoft Research, I had the privilege to work closely with Karin Strauss, Dimitrios Lymberopoulos, Oriana Riva, Ella Bounimova, Patrice Godefroid, and David Molnar. At NVIDIA Research, I had the privilege to work closely with Evgeny Bolotin, Steve Keckler, and Mike O'Connor. I am also grateful to my amazing collaborators from Georgia Tech: Hadi Esmaeilzadeh, Amir Yazdanbaksh, and Bradley Thwaites. And last, but not least, I would like to acknowledge the enormous love and support that I received from my family: my wife Daria and our daughter Alyssa, my parents, Gennady and Larissa, and my brother Evgeny.
\chapter{Compression-Aware Management Policies} \section{Introduction} \label{sec:intro} \blfootnote{Originally published as ``Exploiting Compressed Block Size as an Indicator of Future Reuse'' in the 21st International Symposium on High Performance Computer Architecture, 2015~\cite{camp}.} Off-chip main memory latency and bandwidth are major performance bottlenecks in modern systems. Multiple levels of on-chip caches are used to hide the memory latency and reduce off-chip memory bandwidth demand. Efficient utilization of cache space, and consequently better performance, depends upon the ability of the cache replacement policy to identify and retain useful data. Replacement policies, ranging from traditional (e.g.,~\cite{LRU,belady}) to state-of-the-art (e.g.,~\cite{mlp,RRIP,EAF,lacs,rw-samira,dip}), work using a combination of \textit{eviction} (identifies the block to be removed from the cache), \textit{insertion} (manages the initial block priority), and \textit{promotion} (changes the block priority over time) mechanisms. In replacement policies proposed for conventional cache organizations, these mechanisms usually work by considering \emph{only} the locality of the cache blocks.
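To make these three mechanisms concrete, the sketch below expresses a replacement policy as insertion, promotion, and eviction hooks, with LRU as the locality-only baseline. This is purely illustrative (the class and method names are ours, not part of any simulator used in this thesis); the compression-aware policies developed in this chapter extend exactly these hooks with the compressed block size.
\begin{verbatim}
class ReplacementPolicy:
    """Illustrative interface for the three mechanisms named above."""
    def on_insert(self, cache_set, block):   # insertion: initial priority
        raise NotImplementedError
    def on_hit(self, cache_set, block):      # promotion: adjust priority on reuse
        raise NotImplementedError
    def victim(self, cache_set):             # eviction: choose the block to remove
        raise NotImplementedError

class LRU(ReplacementPolicy):
    """Locality-only baseline: priority encodes recency; block size is ignored."""
    def on_insert(self, cache_set, block):
        block.priority = max((b.priority for b in cache_set), default=0) + 1
    def on_hit(self, cache_set, block):
        block.priority = max(b.priority for b in cache_set) + 1
    def victim(self, cache_set):
        return min(cache_set, key=lambda b: b.priority)  # least recently used
\end{verbatim}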
A promising approach to improving effective cache capacity is to use cache compression (e.g.,~\cite{fvc,ecm,fpc,c-pack,iic-comp,bdi,dcc,sc2}). In compressed caches, data compression algorithms, e.g., Frequent Pattern Compression (FPC)~\cite{fpc-tr}, Base-Delta-Immediate Compression (BDI)~\cite{bdi}, and Frequent Value Compression~\cite{fvc}, are used to achieve higher effective capacity (storing more blocks of data) and to decrease off-chip bandwidth consumption compared to traditional organizations without compression. This compression generates variable-size cache blocks, with larger blocks consuming more cache space than smaller blocks. However, most cache management policies in these compressed cache designs do not use block size in cache management decisions~\cite{fvc,fpc,c-pack,iic-comp,bdi,dcc,sc2}. Only one recent work---ECM~\cite{ecm}---uses the block size information, but its effectiveness is limited by its coarse-grained (big vs.~small) view of block size. The need to consider size along with temporal locality is well known in the context of web caches~\cite{elaarag1, elaarag2, size, lru-sp, luv}, but proposed solutions rely on a recency list of \emph{all} objects in the web cache~\cite{size} or consider frequency of object accesses~\cite{lru-sp} and are usually prohibitively expensive to implement in hardware for use with on-chip caches. In this chapter, we propose a \textit{Compression-Aware Management Policy (\carp{})} that takes into account compressed cache block size along with temporal locality to improve the performance of compressed caches. Compared to prior work (ECM~\cite{ecm}), our policies first use a finer-grained accounting for compressed block size and an optimization-based approach for eviction decisions. Second and more importantly, we find that size is not only a measure of the cost of retaining a given block in the cache, as previous works considered~\cite{ecm}, but it is sometimes also {\em an indicator of block reuse}. \carp\ contains two key components, Minimal-Value Eviction (\mineviction{}) and Size-based Insertion Policy (\insertionpolicy{}), which significantly improve the quality of replacement decisions in compressed caches (see Section~\ref{camp:sec:results} for a comprehensive analysis) at a modest hardware cost. \textbf{Minimal-Value Eviction (\mineviction{}).} \mineviction{} is based on the observation that one should evict an uncompressed block with good locality to make/retain room for a set of smaller compressed blocks of the same total size, even if those blocks individually have less locality, as long as the set of blocks collectively provides more hits cumulatively. A special case of this is that when two blocks have similar locality characteristics, it is preferable to evict the larger cache block. \mineviction{} measures the \emph{value} of each block as a combination of its locality properties and size. When an eviction is required (to make space for a new block), \mineviction{} picks the block with the least value as the victim. \REM{ , with each cache block to indicate its relative importance. We make two key observations. First, it is possible to build a better replacement policy if the size is directly used in decision making process. The larger the size, the more space it occupies in the cache, and hence its importance to the cache (to be useful) is less than that of a block of the a smaller size, but similar priority. This observation leads to our first mechanism -- \emph{Minimal-Value Eviction (\mineviction)}. 
The key idea behind \mineviction{} is that every block gets a \emph{value} assigned to it by a value function based on its expected reuse and size. \mineviction{} tries to maximize cache utilization, and if an eviction is needed (to create space for a new block), the first block to evict is the block with the currently \emph{minimal value}.} \textbf{Size-based Insertion Policy (\insertionpolicy{}).} SIP is based on our new observation that the compressed size of a cache block can sometimes be used as an indicator of its reuse characteristics. This is because elements belonging to the same data structure and having the same access characteristics are sometimes (but not always) compressed to the same size---e.g., in \emph{bzip2}~\cite{SPEC}, a compressed block of 34 bytes (with BDI compression~\cite{bdi}) likely belongs to one particular array with narrow values (e.g., small values stored in large data types) as we show in Section~\ref{sec:size-reuse}---and these structures more often than not have a specific pattern of access and/or reuse distance. By dynamically inserting blocks of different sizes with either \emph{high priority}---e.g., in the most-recently-used position for the LRU policy (ensuring blocks stay in cache longer)---or \emph{low priority}---e.g., in the least-recently-used position for the LRU policy (ensuring blocks get evicted quickly unless reused shortly)---\insertionpolicy{} learns the reuse characteristics associated with various compressed block sizes and, if such an association exists, uses this information to maximize the hit ratio. \REM{ This selection can be achieved by using a common set dueling mechanism~\cite{mlp}, where some sets of the cache prioritize one type of blocks (one specific size or range of sizes), and other sets - another type of blocks. \insertionpolicy{} detects the compressed block sizes, the prioritization (or deprioritization) of which leads to lower miss rate (and hence potentially better performance) during the training phase. This information is then used in the steady state, so that more important blocks stay longer in the cache. \textbf{Our approach.} We incorporate both the \mineviction{} and \insertionpolicy{} policies in a single \emph{Compression-Aware Management Policy (\carp)}. We implement \carp{} in two different compressed cache designs: (i) one with traditional cache organization (but with compression as was proposed in~\cite{fpc,bdi}) with \emph{local} replacement decisions made per set, and (ii) one with decoupled tag and data storage and \emph{global} replacement policy (as was proposed in the Variable Way or V-Way cache design~\cite{v-way} and Indirect Index cache design~\cite{iic,iic-comp}). As demonstrated later in this chapter, \carp{} provides the benefit of higher cache utilization for both classes of designs (both local and global) that leads to (i) better performance, (ii) lower off-chip bandwidth consumption, and (iii) lower energy consumed by the whole main memory hierarchy across variety of single- and multi-core systems. All these benefits are achieved with minimal hardware changes needed to the existing compressed cache designs. 
} \begin{figure*}[!h] \centering \includegraphics[width=0.85\textwidth]{figures/belady_example.pdf} \caption{Example demonstrating downside of not including block size information in replacement decisions.} \label{fig:belady} \end{figure*} As demonstrated later in this chapter, \carp{} (a combination of \mineviction{} and \insertionpolicy{}) works with both traditional compressed cache designs and compressed caches having decoupled tag and data stores (e.g., V-Way Cache~\cite{v-way} and Indirect Index Cache~\cite{iic,iic-comp}). It is general enough to be used with different compression mechanisms and requires only modest hardware changes. Compared to prior work, \carp{} provides better performance, more efficient cache utilization, reduced off-chip bandwidth consumption, and an overall reduction in the memory subsystem energy requirements. In summary, we make the following major contributions: \begin{itemize} \item We make the observation that the compressed size of a cache block can be indicative of its reuse. We use this observation to develop a new cache insertion policy for compressed caches, the Size-based Insertion Policy (\insertionpolicy{}), which uses the size of a compressed block as one of the metrics to predict its potential future reuse. \item We introduce a new compressed cache replacement policy, Minimal-Value Eviction (\mineviction{}), which assigns a value to each cache block based on both its size and its reuse and replaces the set of blocks with the least value. \item We demonstrate that both policies are generally applicable to different compressed cache designs (both with local and global replacement) and can be used with different compression algorithms (FPC~\cite{fpc} and BDI~\cite{bdi}). \item We qualitatively and quantitatively compare \carp{} (\insertionpolicy{} + \mineviction{}) to the conventional LRU policy and three state-of-the-art cache management policies: two size-oblivious policies (RRIP~\cite{RRIP} and a policy used in V-Way~\cite{v-way}) and the recent ECM~\cite{ecm}. We observe that \carp{} (and its global variant G-\carp{}) can considerably (i) improve performance (by 4.9\%/9.0\%/10.2\% on average in single-/two-/four-core workload evaluations and up to 20.1\%), (ii) decrease off-chip bandwidth consumption (by 8.7\% in single-core), and (iii) decrease memory subsystem energy consumption (by 7.2\% in single-core) on average for memory intensive workloads when compared with the best prior mechanism. \end{itemize} \section*{Acknowledgements} We thank the reviewers for their valuable suggestions. We thank Hadi Esmaeilzadeh from Georgia Tech for his helpful comments on earlier version of this paper. We thank the SAFARI group members for the feedback and stimulating research environment they provide. We acknowledge the support of our industrial partners: Facebook, Google, IBM, Intel, Microsoft, Qualcomm, VMware, and Samsung. This research was partially supported by NSF (grants 0953246, 1212962, 1065112, 1423172), the Semiconductor Research Corporation and the Intel Science and Technology Center for Cloud Computing. Gennady Pekhimenko is supported by a Microsoft Research Fellowship and a Qualcomm Innovation Fellowship. \section{Motivating Observations} \label{sec:motivation} Cache compression~\cite{fvc,fpc,bdi,iic-comp,c-pack} is a powerful mechanism that increases effective cache capacity and decreases off-chip bandwidth consumption. 
In this section, we show that cache compression adds an additional dimension to cache management policy decisions -- \emph{the compressed block size} (or simply \emph{the size}), which plays an important role in building more efficient management policies. We do this in three steps. \subsection{Size Matters} In compressed caches, replacement algorithms that take into account compressed cache block size along with locality to identify victim blocks can outperform existing policies that rely only on locality. In fact, Belady's optimal algorithm~\cite{belady} that relies only on locality (using perfect knowledge to evict the block that will be accessed furthest in the future) is sub-optimal in the context of compressed caches with variable size cache blocks. Figure~\ref{fig:belady} demonstrates one possible example of such a scenario. In this figure, it is assumed that cache blocks are one of two sizes: \begin{inparaenum}[(i)] \item uncompressed 64-byte blocks (blocks X and Y) and \item compressed 32-byte blocks (blocks A, B, and C). \end{inparaenum} Initially (see \ding{202}), the 160-byte capacity cache contains four blocks: three compressed blocks (A, B, C) and one uncompressed block (Y). Consider the sequence of memory requests X, A, Y, B, and C (see \ding{203}). In this case, after a request for X, Belady's algorithm (based on locality) will evict blocks B and C (to create 64-bytes of free space) that will be accessed furthest into the future. Over the next four accesses, this results in two misses (B and C) and two hits (A and Y). In contrast, a size-aware replacement policy can detect that it might be better to retain a set of smaller compressed cache blocks that receive more hits cumulatively than a single large (potentially uncompressed) cache block with better locality. For the access pattern discussed above, a size-aware replacement policy makes the decision to retain B and C and evict Y to make space for X (see \ding{204}). As a result, the cache experiences three hits (A, B, and C) and only one miss (Y) and hence outperforms Belady's optimal algorithm.\footnote{Note that if later (see \ding{205}) there are three additional requests to blocks B, Y, and A (all three hits), the final cache state becomes the same as the initial one. Hence, this example can represent steady state within a loop.} We conclude that using block size information in a compressed cache can lead to better replacement decisions. \subsection{Size Varies} Figure~\ref{fig:bdi} shows the distribution of compressed cache block sizes\footnote{Section~\ref{sec:methodology} describes the details of our evaluation methodology for this and other experiments.} for a set of representative workloads given a 2MB cache employing the BDI~\cite{bdi} cache compression algorithm (our results with FPC~\cite{fpc} compression algorithm show similar trends). Even though the size of a compressed block is determined by the compression algorithm, under both designs, \textbf{compressed cache block sizes can vary significantly}, both \begin{inparaenum}[(i)] \item within a single application (i.e., \emph{intra-application}) such as in \emph{astar, povray}, and \emph{gcc} and \item between applications (i.e., \emph{inter-application}) such as between \emph{h264ref} and \emph{wrf}. 
\subsection{Size Varies}
Figure~\ref{fig:bdi} shows the distribution of compressed cache block sizes\footnote{Section~\ref{sec:methodology} describes the details of our evaluation methodology for this and other experiments.} for a set of representative workloads given a 2MB cache employing the BDI~\cite{bdi} cache compression algorithm (our results with the FPC~\cite{fpc} compression algorithm show similar trends). Even though the size of a compressed block is determined by the compression algorithm, under both algorithms \textbf{compressed cache block sizes can vary significantly}, both
\begin{inparaenum}[(i)]
\item within a single application (i.e., \emph{intra-application}), such as in \emph{astar}, \emph{povray}, and \emph{gcc}, and
\item between applications (i.e., \emph{inter-application}), such as between \emph{h264ref} and \emph{wrf}.
\end{inparaenum}
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{figures/distribution_bdi.pdf}
\caption{Compressed block size distribution for representative applications with the BDI compression algorithm.}
\vspace{-0.2cm}
\label{fig:bdi}
\end{figure}
Size variation within an application suggests that size-aware replacement policies could be effective for individual single-core workloads. Intra-application variation exists because applications have data that belong to different common compressible patterns (e.g., zeros, repeated values, and narrow values~\cite{bdi,fpc}) and as a result end up with a mix of compressed cache block sizes. In the case of multiple cores with shared caches, inter-application variation suggests that even if an application has a single dominant compressed cache block size (e.g., \emph{lbm}, \emph{h264ref}, and \emph{wrf}), running these applications together will result in the shared cache experiencing a mix of compressed cache block sizes. Hence, size-aware management of compressed caches can be even more important for efficient cache utilization in multi-core systems (as we demonstrate quantitatively in Section~\ref{sec:multicore}).
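Why do these data patterns translate into different sizes? As a rough illustration (a simplified base+delta-style size estimate written for this discussion, \emph{not} the actual BDI~\cite{bdi} algorithm), the following sketch shows how zero blocks, narrow values, and incompressible floating-point data end up at very different compressed sizes.
\begin{lstlisting}[language=C, basicstyle=\footnotesize\ttfamily, breaklines=true]
/* Simplified, illustrative size estimate in the spirit of base+delta
 * compression (NOT the exact BDI algorithm): a 64-byte block is treated
 * as eight 8-byte values; all-zero blocks, and blocks whose deltas from
 * the first value fit in 1, 2, or 4 bytes, compress accordingly. */
#include <stdint.h>
#include <stdio.h>

static unsigned estimated_size(const uint64_t v[8]) {
    int all_zero = 1;
    for (int i = 0; i < 8; i++)
        if (v[i] != 0) all_zero = 0;
    if (all_zero) return 1;                 /* zero block            */
    int fits1 = 1, fits2 = 1, fits4 = 1;
    for (int i = 0; i < 8; i++) {
        int64_t d = (int64_t)(v[i] - v[0]);
        if (d < INT8_MIN  || d > INT8_MAX)  fits1 = 0;
        if (d < INT16_MIN || d > INT16_MAX) fits2 = 0;
        if (d < INT32_MIN || d > INT32_MAX) fits4 = 0;
    }
    if (fits1) return 8 + 8 * 1;            /* base + 1-byte deltas  */
    if (fits2) return 8 + 8 * 2;            /* base + 2-byte deltas  */
    if (fits4) return 8 + 8 * 4;            /* base + 4-byte deltas  */
    return 64;                              /* incompressible        */
}

int main(void) {
    uint64_t zeros[8]  = { 0 };
    uint64_t narrow[8] = { 1000, 1003, 1001, 1007, 1002, 1000, 1005, 1004 };
    uint64_t floats[8] = { 0x3FF0000000000000ULL, 0x4008F5C28F5C28F6ULL,
                           0xC000000000000000ULL, 0x3FB999999999999AULL,
                           0x40091EB851EB851FULL, 0xBFF8000000000000ULL,
                           0x4002666666666666ULL, 0x3FE0000000000000ULL };
    printf("zeros:  %u bytes\n", estimated_size(zeros));   /* 1 byte   */
    printf("narrow: %u bytes\n", estimated_size(narrow));  /* 16 bytes */
    printf("floats: %u bytes\n", estimated_size(floats));  /* 64 bytes */
    return 0;
}
\end{lstlisting}
Blocks dominated by zeros or narrow values cluster at a few small sizes, while blocks holding arbitrary floating-point data stay uncompressed, which is exactly the kind of mix visible in Figure~\ref{fig:bdi}.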
\subsection{Size Indicates Reuse}
\label{sec:size-reuse}
\textbf{Intuition.} We observe that elements belonging to the same data structure (within an application) sometimes lead to cache blocks that compress to the same size. This observation provides a new opportunity: using the compressed size of a cache block as an indicator of data reuse. Typically, an application's key data structures are accessed in a regular fashion, with each data structure having an identifiable access pattern~\cite{dataStructurePhase}. This regularity in accesses can lead to a dominant {\em reuse distance}~\cite{reuse} range for the cache blocks belonging to the data structure (captured in some prior works~\cite{madcache,cachebasedonreusedist,singleUsage,DataCacheManagement} by learning the relationship between the instruction address and the reuse distance). The same data structure can also have a dominant compressed cache block size, i.e., a majority of the cache blocks containing the data structure can be compressed to one or a few particular sizes. For such a data structure, the compressed cache block size can therefore be a good indicator of the reuse behavior of the cache blocks. In fact, different data structures can have different dominant compressed block sizes and different dominant reuse distances, enabling compressed block size to be used as a ``signature'' that indicates the reuse patterns of a data structure's cache blocks.

\textbf{Quantitative Evidence.} To verify the relationship between compressed size and reuse, we conducted an experiment with the memory traces of our 23 memory-intensive applications. For every block within an application, we computed the distance (measured in memory requests) between the time this block was inserted into the compressed cache and the time when it was reused next. We then accumulated this information for all different block sizes.
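The measurement itself is straightforward; the sketch below (which assumes a hypothetical textual trace format of one address/size pair per request, used here only for illustration) records, for every block, the number of requests since its previous access and accumulates a coarse per-size reuse-distance histogram.
\begin{lstlisting}[language=C, basicstyle=\footnotesize\ttfamily, breaklines=true]
/* Sketch of the measurement described above, assuming a hypothetical
 * textual trace with one "<block address> <compressed size>" pair per
 * memory request. For each block we remember the time of its previous
 * access; on the next access we log the distance (in requests) under
 * the size the block had when it was inserted. */
#include <stdio.h>

typedef struct { unsigned long addr; unsigned size, last; int valid; } Entry;

#define MAX_BLOCKS 100000   /* sketch: assumes fewer distinct blocks */
static Entry table[MAX_BLOCKS];

static Entry *lookup(unsigned long addr) {
    unsigned h = (unsigned)(addr % MAX_BLOCKS);
    while (table[h].valid && table[h].addr != addr)
        h = (h + 1) % MAX_BLOCKS;               /* linear probing */
    return &table[h];
}

/* hist[size][bin]: bin 0 = distance < 100, 1 = < 1000, 2 = >= 1000 */
static unsigned long hist[65][3];

int main(void) {
    unsigned long addr; unsigned size, t = 0;
    while (scanf("%lx %u", &addr, &size) == 2 && size >= 1 && size <= 64) {
        Entry *e = lookup(addr);
        if (e->valid) {
            unsigned d = t - e->last;
            hist[e->size][d < 100 ? 0 : d < 1000 ? 1 : 2]++;
        }
        e->addr = addr; e->size = size; e->last = t; e->valid = 1;
        t++;
    }
    for (unsigned s = 1; s <= 64; s++)
        if (hist[s][0] + hist[s][1] + hist[s][2])
            printf("size %2u B: %lu short, %lu medium, %lu long reuses\n",
                   s, hist[s][0], hist[s][1], hist[s][2]);
    return 0;
}
\end{lstlisting}
Per-size distributions of this kind are what the scatter plots below visualize.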
\begin{figure*}[t]
\vspace{-0.7cm}
\centering
\begin{subfigure}[b]{0.33\linewidth}
\centering
\includegraphics[width=0.99\textwidth]{figures/bzip2.pdf}
\vspace{-0.6cm}
\caption*{\scriptsize{Size (bytes)}}
\vspace{-0.2cm}
\caption{bzip2}
\label{fig:bzip2}
\end{subfigure}
\begin{subfigure}[b]{0.33\linewidth}
\centering
\includegraphics[width=0.99\textwidth]{figures/leslie3d.pdf}
\vspace{-0.6cm}
\caption*{\scriptsize{Size (bytes)}}
\vspace{-0.2cm}
\caption{leslie3d}
\label{fig:leslie3d}
\end{subfigure}
\begin{subfigure}[b]{0.33\linewidth}
\centering
\includegraphics[width=0.99\textwidth]{figures/gcc.pdf}
\vspace{-0.6cm}
\caption*{\scriptsize{Size (bytes)}}
\vspace{-0.2cm}
\caption{gcc}
\label{fig:gcc}
\end{subfigure}
\begin{subfigure}[b]{0.33\linewidth}
\centering
\includegraphics[width=0.99\textwidth]{figures/sphinx3.pdf}
\vspace{-0.6cm}
\caption*{\scriptsize{Size (bytes)}}
\vspace{-0.2cm}
\caption{sphinx3}
\label{fig:sphinx3}
\end{subfigure}
\begin{subfigure}[b]{0.33\linewidth}
\centering
\includegraphics[width=0.99\textwidth]{figures/tpch6.pdf}
\vspace{-0.6cm}
\caption*{\scriptsize{Size (bytes)}}
\vspace{-0.2cm}
\caption{tpch6}
\label{fig:tpch6}
\end{subfigure}
\begin{subfigure}[b]{0.33\linewidth}
\centering
\includegraphics[width=0.99\textwidth]{figures/soplex.pdf}
\vspace{-0.6cm}
\caption*{\scriptsize{Size (bytes)}}
\vspace{-0.2cm}
\caption{soplex}
\label{fig:soplex}
\end{subfigure}
\begin{subfigure}[b]{0.33\linewidth}
\centering
\includegraphics[width=0.99\textwidth]{figures/gobmk.pdf}
\vspace{-0.6cm}
\caption*{\scriptsize{Size (bytes)}}
\vspace{-0.2cm}
\caption{gobmk}
\vspace{-0.2cm}
\label{fig:gobmk}
\end{subfigure}
\begin{subfigure}[b]{0.33\linewidth}
\centering
\includegraphics[width=0.99\textwidth]{figures/mcf.pdf}
\vspace{-0.6cm}
\caption*{\scriptsize{Size (bytes)}}
\vspace{-0.2cm}
\caption{mcf}
\vspace{-0.2cm}
\label{fig:mcf}
\end{subfigure}
\begin{subfigure}[b]{0.33\linewidth}
\centering
\includegraphics[width=0.99\textwidth]{figures/sjeng.pdf}
\vspace{-0.6cm}
\caption*{\scriptsize{Size (bytes)}}
\vspace{-0.2cm}
\caption{sjeng}
\vspace{-0.2cm}
\label{fig:sjeng}
\end{subfigure}
\vspace{-0.3cm}
\caption{Plots demonstrate the relationship between the compressed block size and reuse distance. Dark red circles correspond to the most frequent reuse distances for every size.}
\vspace{-0.2cm}
\end{figure*}
Figures~\ref{fig:bzip2}--\ref{fig:sjeng} show the results of this experiment for nine representative applications from our workload pool (our methodology is described in Section~\ref{sec:methodology}). In seven of these applications (\emph{bzip2}, \emph{leslie3d}, \emph{gcc}, \emph{sphinx3}, \emph{tpch6}, \emph{soplex}, and \emph{gobmk}), compressed block size is an indicator of reuse distance (in other words, it can be used to distinguish blocks with different reuse distances). In two of the applications (\emph{mcf} and \emph{sjeng}), it is not. Each graph is a scatter plot that shows the reuse distance distribution experienced by various compressed cache block sizes in these applications. Reuse distance is defined as the number of distinct addresses accessed between two consecutive accesses to the same address. There are nine possible compressed block sizes (based on the description from the BDI work~\cite{bdi}). The size of each circle is proportional to the relative frequency of blocks of a particular size that exhibit a specified reuse distance. Dark red circles indicate the most frequent reuse distances (up to three) for every size. We can draw two major conclusions from these experiments.
First, there are many applications where block size is an indicator of reuse distance (Figures~\ref{fig:bzip2}--\ref{fig:gobmk}). For instance, in \emph{bzip2} (Figure~\ref{fig:bzip2}), a large number of cache blocks are 8, 36, or 64 (uncompressed) bytes and have a short reuse distance of less than 1000. In contrast, a significant number of blocks are 34 bytes and have a large reuse distance of greater than 5000. This indicates that the 34-byte blocks can be deprioritized by the cache when running \emph{bzip2} to improve performance. Similarly, in \emph{sphinx3}, \emph{tpch6}, and \emph{soplex} (Figures~\ref{fig:sphinx3}--\ref{fig:soplex}), a significant number of blocks are compressed to 1 byte with a long reuse distance of around 1000, whereas most of the blocks of other sizes have very short reuse distances of less than 100. In general, we observe that data from 15 out of 23 of our evaluated applications show that block size is indicative of reuse. This suggests that a compressed block size can be used as an indicator of future block reuse, which in turn can be used to prioritize the blocks of certain sizes (Section~\ref{sec:sip}), improving application performance (see the effect on \emph{soplex} in Section~\ref{sec:single-core}).

Second, there is usually no coarse-grained way to distinguish between block sizes that are indicative of different reuse distances. In other words, simply dividing the blocks into \emph{big} or \emph{small} blocks, as recent work on compressed cache management does~\cite{ecm}, is not enough to effectively identify the different reuse behavior of blocks of different sizes; the distinction between block sizes should be made at a finer granularity. This is evident for \emph{bzip2} (Figure~\ref{fig:bzip2}): while 8, 36, and 64-byte blocks have short reuse distances, a significant fraction of the 34-byte blocks have very long reuse distances (between 5000 and 6000). Hence, there is no single block size threshold that distinguishes blocks with high reuse from those with low reuse. Data from other applications (e.g., \emph{leslie3d}, \emph{gcc}, \emph{soplex}) similarly support this observation.
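To make the ``no single threshold'' argument concrete, the short sketch below (a standalone check; the size/reuse labels are simply those quoted above for \emph{bzip2}) exhaustively tries every possible cut point in both directions and finds none that separates the short-reuse sizes (8, 36, and 64 bytes) from the long-reuse size (34 bytes).
\begin{lstlisting}[language=C, basicstyle=\footnotesize\ttfamily, breaklines=true]
/* Sketch: check whether any single size threshold T can separate the
 * bzip2 block sizes with short reuse (8, 36, 64 bytes) from the size
 * with long reuse (34 bytes), in either orientation. */
#include <stdio.h>

int main(void) {
    const int short_reuse[] = { 8, 36, 64 };
    const int long_reuse[]  = { 34 };
    int found = 0;
    for (int T = 1; T <= 64; T++) {
        /* Orientation 1: "size <= T means long reuse". */
        int ok1 = 1;
        for (int i = 0; i < 3; i++) if (short_reuse[i] <= T) ok1 = 0;
        for (int i = 0; i < 1; i++) if (long_reuse[i]  >  T) ok1 = 0;
        /* Orientation 2: "size >= T means long reuse". */
        int ok2 = 1;
        for (int i = 0; i < 3; i++) if (short_reuse[i] >= T) ok2 = 0;
        for (int i = 0; i < 1; i++) if (long_reuse[i]  <  T) ok2 = 0;
        if (ok1 || ok2) { printf("threshold %d works\n", T); found = 1; }
    }
    if (!found)
        printf("no single size threshold separates short from long reuse\n");
    return 0;
}
\end{lstlisting}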
\textbf{Code Example.} The primary reason this relationship exists is that, in many applications, compression ratios and reuse patterns are similar within a data structure but differ across different data structures. Figure~\ref{fig:example} demonstrates one simplified source code example based on the data structures observed in \emph{soplex}. It shows why the compressed size can be a good indicator of future reuse (as we observe in Figure~\ref{fig:soplex}).\footnote{Note that our mechanisms are applicable to a variety of applications (Section~\ref{sec:results}) with very different data structures and access patterns.
The simplification in the example is done for clarity.} There are three data structures in this example: (i) array $A[N]$ of integer indexes that are smaller than value $M$ (well-compressible with BDI~\cite{bdi} to 20-byte cache blocks), (ii) small array $B[16]$ of floating-point coefficients (incompressible, 64-byte cache blocks), and (iii) sparse matrix $C[M][N]$ with the main data (very compressible, 1-byte cache blocks). These data structures not only have different compressed block sizes, but also different reuse distances. Array $A[N]$ is accessed as $A[i]$ inside the inner loop, where the index $i$ changes only once per iteration of the outer loop; hence, each element $A[i]$ is reused in every iteration of the inner loop. This leads to a short reuse distance for the elements of this array. Accesses to array $B$ ($B[(i+j)\%16]$) also lead to a short reuse distance (usually every $16^{th}$ iteration of the inner loop). On the other hand, the reuse distance of array $C$ is data-dependent -- it is usually long and depends on which indexes are currently stored in array $A$. Hence, this simplified example shows that {\em compressed block size can indicate the reuse distance of a cache block}: a 1-byte block is likely to be reused only long after it is inserted, whereas a 20-byte block is likely to be reused very quickly. If a cache learns this relationship, it can prioritize 20-byte blocks over 1-byte blocks in its management policy. As we will show in Section~\ref{sec:sip}, our compression-aware cache management policy learns exactly this, leading to significant performance improvements for \emph{soplex} (and other applications, as shown in Section~\ref{sec:single-core}).
\lstdefinestyle{customc}{
  belowcaptionskip=1\baselineskip,
  breaklines=true,
  frame=L,
  xleftmargin=\parindent,
  language=C,
  showstringspaces=false,
  basicstyle=\footnotesize\ttfamily,
  keywordstyle=\bfseries\color{green!40!black},
  commentstyle=\itshape\color{purple!40!black},
  identifierstyle=\color{blue},
  stringstyle=\color{orange},
}
\lstdefinestyle{customasm}{
  belowcaptionskip=1\baselineskip,
  frame=L,
  xleftmargin=\parindent,
  language=[x86masm]Assembler,
  basicstyle=\footnotesize\ttfamily,
  commentstyle=\itshape\color{purple!40!black},
}
\lstset{escapechar=@,style=customc}
\begin{figure}[h!]
\vspace{-0.5cm}
\centering
\begin{lstlisting}
int A[N];      // indexes (smaller than M): narrow values
double B[16];  // coefficients: incompressible values
double C[M][N];// sparse matrix: many zero values

for (int i=0; i<N; i++) {
  for (int j=0; j<N; j++) {
    sum += B[(i+j)%16] * C[A[i]][j];
  }
}
\end{lstlisting}
\vspace{-0.4cm}
\caption{Code example: size and reuse distance relationship.}
\label{fig:example}
\vspace{-0.4cm}
\end{figure}
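If the cache does learn such a size--reuse relationship, consulting it can be very cheap, because the compressed block size is already part of the cache block tag in a compressed cache~\cite{fpc,bdi,c-pack}. The sketch below is a purely hypothetical illustration of this idea (it is \emph{not} the \insertionpolicy{} or \mineviction{} mechanism described later in this chapter, and the listed block sizes are our assumption about the sizes a BDI-style design produces): a small table of saturating counters, indexed by compressed size, is trained on evictions and consulted at insertion time.
\begin{lstlisting}
/* Hypothetical illustration (not the policy proposed in this chapter):
 * per-size saturating counters track whether blocks of a given
 * compressed size tended to be reused before eviction; the counters
 * are consulted at insertion time to pick a priority. */
#include <stdint.h>

#define NUM_SIZES 9   /* assumed set of BDI-style block sizes */
static const unsigned sizes[NUM_SIZES] = { 1, 8, 16, 20, 24, 34, 36, 40, 64 };
static uint8_t reuse_ctr[NUM_SIZES];      /* 3-bit saturating counters */

static int size_index(unsigned size_bytes) {
    for (int i = 0; i < NUM_SIZES; i++)
        if (sizes[i] == size_bytes) return i;
    return NUM_SIZES - 1;                 /* treat unknown sizes as 64 B */
}

/* Called when a block is evicted: was it reused while resident? */
void train_on_eviction(unsigned size_bytes, int was_reused) {
    int i = size_index(size_bytes);
    if (was_reused  && reuse_ctr[i] < 7) reuse_ctr[i]++;
    if (!was_reused && reuse_ctr[i] > 0) reuse_ctr[i]--;
}

/* Called at insertion: 1 = insert with high priority, 0 = low priority. */
int high_priority_insertion(unsigned size_bytes) {
    return reuse_ctr[size_index(size_bytes)] >= 4;
}
\end{lstlisting}
In the \emph{soplex}-like example above, such a table would quickly learn to favor 20-byte blocks over 1-byte blocks at insertion time.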
This fine-grain relation between size and reuse can be exploited by a compression-aware cache management policy (see Section~\ref{sec:sip}) to improve application's performance (see Section~\ref{sec:single-core} for \emph{soplex}). \section{Motivating Observations} \label{sec:motivation} Cache compression~\cite{fvc,ecm,fpc,c-pack,iic-comp,bdi,dcc,sc2} is a powerful mechanism that increases effective cache capacity and decreases off-chip bandwidth consumption.\footnote{ Data compression can be also effective in increasing the size of the main memory~\cite{MMCompression,lcp-tech,lcp-micro} and reducing the off-chip memory bandwidth/energy consumption~\cite{lcp-micro,memzip}.} In this section, we show that cache compression adds an additional dimension to cache management policy decisions -- \emph{the compressed block size} (or simply \emph{the size}), which plays an important role in building more efficient management policies. We do this in three steps. \subsection{Size Matters} In compressed caches, one should design replacement policies that take into account compressed cache block size along with locality to identify victim blocks, because such policies can outperform existing policies that rely \emph{only} on locality. In fact, Belady's optimal algorithm~\cite{belady} that relies only on locality (using perfect knowledge to evict the block that will be accessed furthest in the future) is sub-optimal in the context of compressed caches with variable-size cache blocks. Figure~\ref{fig:belady} demonstrates one possible example of such a scenario. In this figure, we assume that cache blocks are one of two sizes: (i) uncompressed 64-byte blocks (blocks X and Y) and (ii) compressed 32-byte blocks (blocks A, B, and C). We assume the cache capacity is 160 bytes. Initially (see \ding{202}), the cache contains four blocks: three compressed (A, B, C) and one uncompressed (Y). Consider the sequence of memory requests X, A, Y, B, C, B, Y, and A (see \ding{203}). In this case, after a request for X, Belady's algorithm (based on locality) evicts blocks B and C (to create 64 bytes of free space) that will be accessed furthest into the future. Over the next four accesses, this results in two misses (B and C) and two hits (A and Y). In contrast, a size-aware replacement policy can detect that it might be better to retain a set of smaller compressed cache blocks that receive more hits cumulatively than a single large (potentially uncompressed) cache block with better locality. For the access pattern discussed above, a size-aware replacement policy makes the decision to retain B and C and evict Y to make space for X (see \ding{204}). As a result, the cache experiences three hits (A, B, and C) and only one miss (Y) and hence outperforms Belady's optimal algorithm.\footnote{Later (see \ding{205}), when there are three requests to blocks B, Y, and A (all three hits), the final cache state becomes the same as the initial one. Hence, this example can represent steady state within a loop.} We conclude that using block size information in a compressed cache can lead to better replacement decisions. 
\subsection{Size Varies} Figure~\ref{fig:bdi} shows the distribution of compressed cache block sizes\footnote{Section~\ref{camp:sec:methodology} describes the details of our evaluation methodology for this and other experiments.} for a set of representative workloads given a 2MB cache employing the Base-Delta-Immediate (BDI)~\cite{bdi} cache compression algorithm (our results with the FPC~\cite{fpc} compression algorithm show similar trends). Even though the size of a compressed block is determined by the compression algorithm, under both designs, \textbf{compressed cache block sizes can vary significantly}, both (i) within a single application (i.e., \emph{intra-application}) such as in \emph{astar, povray}, and \emph{gcc} and (ii) between applications (i.e., \emph{inter-application}) such as between \emph{h264ref} and \emph{wrf}. \begin{figure}[h!] \centering \centering \includegraphics[width=0.65\textwidth]{figures/distribution_bdi.pdf} \caption{Compressed block size distribution for representative applications with the BDI~\cite{bdi} compression algorithm.} \label{fig:bdi} \end{figure} \REM{ In order to show that compressed block sizes significantly vary both within and between multiple applications, we conducted an experiment\footnote{Section~\ref{camp:sec:methodology} describes the details of our evaluation methodology for this and other experiments.} where we observed the cache block size distribution (collected from snapshots of the 2MB last-level-cache). Figures~\ref{fig:bdi} and \ref{fig:fpc1} show the distributions of compressed block sizes for a representative selection of applications from our workload pool. In order to simplify both the data representation and analysis, we split all possible sizes into 8-byte bins with one special bin for 64-byte (uncompressed) blocks. We can draw two major conclusions from these figures. First, the compressed block sizes vary significantly both (i) within an application (e.g., \emph{astar}), and (ii) between the applications (e.g., compare \emph{h264ref} and \emph{wrf}). } Size variation within an application suggests that size-aware replacement policies could be effective for individual single-core workloads. Intra-application variation exists because applications have data that belong to different common compressible patterns (e.g., zeros, repeated values, and narrow values~\cite{bdi}) and as a result end up with a mix of compressed cache block sizes. In a system with multiple cores and shared caches, inter-application variation suggests that even if an application has a single dominant compressed cache block size (e.g., \emph{lbm, h264ref} and \emph{wrf}), running these applications together on different cores will result in the shared cache experiencing a mix of compressed cache block sizes. Hence, size-aware management of compressed caches can be even more important for efficient cache utilization in multi-core systems (as we demonstrate quantitatively in Section~\ref{sec:multicore}). \subsection{Size Can Indicate Reuse} \label{sec:size-reuse} We observe that elements belonging to the same data structure (within an application) sometimes lead to cache blocks that compress to the same size. This observation provides a new opportunity: using the compressed size of a cache block as an indicator of data reuse of the block. {\bf Intuition.} We first briefly provide intuition on why there can be a relationship between compressed size and the reuse characteristics of the cache block. 
As past work has shown, an application's key data structures are typically accessed in a regular fashion, with each data structure having an identifiable access pattern~\cite{dataStructurePhase}. This regularity in accesses to a data structure can lead to a dominant {\em reuse distance}~\cite{reuse} range for the cache blocks belonging to the data structure.\footnote{Some prior works (e.g.,~\cite{madcache,cachebasedonreusedist,singleUsage,DataCacheManagement}) captured this regularity by learning the relationship between the instruction address and the reuse distance.} The same data structure can also have a dominant compressed cache block size, i.e., a majority of the cache blocks containing the data structure can be compressed to one or a few particular sizes (e.g., due to narrow or sparse values stored in the elements of an array). For such a data structure, the compressed cache block size can therefore be a good indicator of the reuse behavior of the cache blocks. In fact, different data structures can have different dominant compressed block sizes and different dominant reuse distances; in such cases, the compressed block size serves as a type of \emph{signature} indicating the reuse pattern of a data structure's cache blocks. {\bf Example to Support the Intuition.} To illustrate the connection between compressed block size and reuse behavior of data structures intuitively, Figure~\ref{fig:example} presents an example loosely based on some of the data structures we observed in \emph{soplex}. There are three data structures in this example: (i) array $A[N]$ of integer indexes that are smaller than value $M$ (well-compressible with BDI~\cite{bdi} to 20-byte cache blocks), (ii) small array $B[16]$ of floating point coefficients (incompressible, 64-byte cache blocks), and (iii) sparse matrix $C[M][N]$ with the main data (very compressible zero values, many 1-byte cache blocks). These data structures not only have different compressed block sizes, but also different reuse distances. Accesses to cache blocks for array $A$ occur only once every iteration of the outer loop (long reuse distance). Accesses to cache blocks for array $B$ occur roughly every $16^{th}$ iteration of the inner loop (short reuse distance). Finally, the reuse distance of array $C$ is usually long, although it is dependent on what indexes are currently stored in array $A[i]$. Hence, this example shows that {\em compressed block size can indicate the reuse distance of a cache block}: 20-byte blocks (from $A$) usually have long reuse distance, 64-byte blocks (from $B$) usually have short reuse distance, and 1-byte blocks (from $C$) usually have long reuse distance. If a cache learns this relationship, it can prioritize 64-byte blocks over 20-byte and 1-byte blocks in its management policy. 
As we show in Section~\ref{sec:sip}, our \insertionpolicy{} policy learns exactly this kind of relationship, leading to significant performance improvements for several applications (including \emph{soplex}), as shown in Section~\ref{sec:single-core}.\footnote{Note that our overall proposal also accounts for the size of the block, e.g., that a 64-byte block takes up more space in the cache than a 20-byte or 1-byte block, via the use of \mineviction{} policy (Section~\ref{sec:mve}).} \lstdefinestyle{customc}{ belowcaptionskip=1\baselineskip, breaklines=true, frame=L, xleftmargin=\parindent, language=C, showstringspaces=false, basicstyle=\footnotesize\ttfamily, keywordstyle=\bfseries\color{green!40!black}, commentstyle=\itshape\color{purple!40!black}, identifierstyle=\color{blue}, stringstyle=\color{orange}, } \lstdefinestyle{customasm}{ belowcaptionskip=1\baselineskip, frame=L, xleftmargin=\parindent, language=[x86masm]Assembler, basicstyle=\footnotesize\ttfamily, commentstyle=\itshape\color{purple!40!black}, } \lstset{escapechar=@,style=customc} \begin{figure}[t!] \centering \begin{lstlisting} int A[N]; // small indices: narrow values double B[16]; // FP coefficients: incompressible double C[M][N];// sparse matrix: many zero values for (int i=0; i<N; i++) { int tmp = A[i]; for (int j=0; j<N; j++) { sum += B[(i+j } } \end{lstlisting} \caption{Code example: size and reuse distance relationship.} \label{fig:example} \end{figure} {\bf Quantitative Evidence.} To verify the relationship between block size and reuse, we have analyzed 23 memory-intensive applications' memory access traces (applications described in Section~\ref{camp:sec:methodology}). For every cache block within an application, we computed the average distance (measured in memory requests) between the time this block was inserted into the compressed cache and the time when it was reused next. We then accumulate this {\em reuse distance} information for all different block sizes, where the size of a block is determined with the BDI~\cite{bdi} compression algorithm. \ignore{ \begin{figure}[h!] 
\centering \subfigure[bzip2 application]{\label{fig:reuse} \includegraphics[width=0.4\linewidth]{figures/bzip2.pdf}} \subfigure[B]{\label{fig:leslie3d} \includegraphics[width=0.4\linewidth]{figures/leslie3d.pdf}} \caption{Plots demonstrate the relationship between the compressed block size and reuse distance.} \end{figure} } \begin{figure}[tbh] \vspace{-0.1cm} \centering \subfigure[bzip2]{\label{fig:bzip2}\includegraphics[width=0.4\linewidth]{figures/bzip2.pdf}} \subfigure[sphinx3]{\label{fig:sphinx3}\includegraphics[width=0.4\linewidth]{figures/sphinx3.pdf}} \subfigure[soplex]{\label{fig:soplex}\includegraphics[width=0.4\linewidth]{figures/soplex.pdf}} \subfigure[tpch6]{\label{fig:tpch6}\includegraphics[width=0.4\linewidth]{figures/tpch6.pdf}} \subfigure[gcc]{\label{fig:gcc}\includegraphics[width=0.4\linewidth]{figures/gcc.pdf}} \subfigure[mcf]{\label{fig:mcf}\includegraphics[width=0.4\linewidth]{figures/mcf.pdf}} \vspace{-0.2cm} \ignore { \subfigure[Figure A] \centering \includegraphics[width=0.33\textwidth]{figures/bzip2.pdf} \caption*{\scriptsize{Size (bytes)}} \caption{bzip2} \label{fig:bzip2} \end{subfigure} \begin{subfigure}[b]{0.3\linewidth} \centering \includegraphics[width=0.95\textwidth]{figures/sphinx3.pdf} \caption*{\scriptsize{Size (bytes)}} \caption{sphinx3} \label{fig:sphinx3} \end{subfigure} \begin{subfigure}[b]{0.3\linewidth} \centering \includegraphics[width=0.95\textwidth]{figures/soplex.pdf} \caption*{\scriptsize{Size (bytes)}} \caption{soplex} \label{fig:soplex} \end{subfigure} \begin{subfigure}[b]{0.3\linewidth} \centering \includegraphics[width=0.95\textwidth]{figures/tpch6.pdf} \caption*{\scriptsize{Size (bytes)}} \caption{tpch6} \label{fig:tpch6} \end{subfigure} \begin{subfigure}[b]{0.3\linewidth} \centering \includegraphics[width=0.95\textwidth]{figures/leslie3d.pdf} \caption*{\scriptsize{Size (bytes)}} \caption{leslie3d} \label{fig:leslie3d} \end{subfigure} \begin{subfigure}[b]{0.3\linewidth} \centering \includegraphics[width=0.95\textwidth]{figures/gcc.pdf} \caption*{\scriptsize{Size (bytes)}} \caption{gcc} \label{fig:gcc} \end{subfigure} \begin{subfigure}[b]{0.3\linewidth} \centering \includegraphics[width=0.95\textwidth]{figures/gobmk.pdf} \caption*{\scriptsize{Size (bytes)}} \caption{gobmk} \label{fig:gobmk} \end{subfigure} \begin{subfigure}[b]{0.3\linewidth} \centering \includegraphics[width=0.95\textwidth]{figures/mcf.pdf} \caption*{\scriptsize{Size (bytes)}} \caption{mcf} \label{fig:mcf} \end{subfigure} \begin{subfigure}[b]{0.3\linewidth} \centering \includegraphics[width=0.95\textwidth]{figures/sjeng.pdf} \caption*{\scriptsize{Size (bytes)}} \caption{sjeng} \label{fig:sjeng} \end{subfigure} } \caption{Plots demonstrate the relationship between the compressed block size and reuse distance. Dark red circles correspond to the most frequent reuse distances for every size. The first five workloads ((a)--(e)) have some relation between size and reuse, while the last one (f) do not show that size is indicative of reuse.} \end{figure} Figures~\ref{fig:bzip2}--\ref{fig:mcf} show the results of this analysis for nine representative applications from our workload pool (our methodology is described in Section~\ref{camp:sec:methodology}). In five of these applications (\emph{bzip2}, \emph{sphinx3}, \emph{soplex}, \emph{tpch6}, \emph{gcc}), compressed block size is an indicator of reuse distance (in other words, it can be used to distinguish blocks with different reuse distances). In one of the applications (\emph{mcf}), it is not. 
Each graph is a scatter plot that shows the reuse distance distribution experienced by various compressed cache block sizes in these applications. There are nine possible compressed block sizes (based on the description from the BDI work~\cite{bdi}). The size of each circle is proportional to the relative frequency of blocks of a particular size that exhibit a specified reuse distance. The dark red circles indicate the most frequent reuse distances (up to three) for every size. We make three major observations from these figures. First, there are many applications where block size is an indicator of reuse distance (Figure~\ref{fig:bzip2}--\ref{fig:mcf}). For instance, in \emph{bzip2} (Figure~\ref{fig:bzip2}), a large number of cache blocks are 8, 36, or 64 (uncompressed) bytes and have a short reuse distance of less than 1000. In contrast, a significant number of blocks are 34 bytes and have a large reuse distance of greater than 5000. This indicates that the 34-byte blocks can be deprioritized by the cache when running \emph{bzip2} to improve performance. Similarly, in \emph{sphinx3}, \emph{tpch6}, and \emph{soplex} (Figures~\ref{fig:sphinx3}--\ref{fig:tpch6}), a significant number of blocks are compressed to 1-byte with a long reuse distance of around 1000, whereas most of the blocks of other sizes have very short reuse distances of less than 100. In general, we observe that data from 15 out of 23 of our evaluated applications show that block size is indicative of reuse~\cite{tr-camp}. This suggests that a compressed block size can be used as an indicator of future block reuse which in turn can be used to prioritize blocks of certain sizes (Section~\ref{sec:sip}), improving application performance (e.g., see the effect on \emph{soplex} in Section~\ref{sec:single-core}). \REM{ First, there are many applications where block size is an indicator of reuse distance (Figure~\ref{fig:bzip2}--\ref{fig:gobmk}). For instance, in \emph{bzip2} (Figure~\ref{fig:bzip2}), a large number of cache blocks are 8, 36, or 64 (uncompressed) bytes and have a short reuse distance of less than 1000. In contrast, a significant number of blocks are 34 bytes and have a very large reuse distance of greater than 5000. This indicates that the 34-byte blocks can be deprioritized by the cache when running \emph{bzip2} to improve performance. Similarly, in \emph{sphinx3} (Figure~\ref{fig:sphinx3}), a significant number of blocks are compressed to 1-byte with a long reuse distance of slightly more than 1000, whereas most of the blocks of other sizes (34, 36, 64 bytes) have very short reuse distances of less than 100. This indicates that 1-byte blocks can be deprioritized by the cache when running \emph{sphinx3}. Similarly, the 1-byte blocks in \emph{soplex} and \emph{tpch6} have much larger reuse distances than blocks with many other sizes and can therefore be deprioritized by the cache. We observe that data from 15 out of 23 of our evaluated applications show that block size is indicative of reuse, even though space limit enables us to plot only seven of them. This suggests that a compressed block size can be used for such applications in cache management policy decisions to prioritize blocks that likely lead to short reuse distances (Section~\ref{sec:sip}), improving application performance (e.g., as seen by the effect on \emph{soplex} in Section~\ref{sec:single-core}). } Second, there are some applications where block size does not have a relationship with reuse distance of the block (e.g., \emph{mcf}). 
For example, in \emph{mcf} (Figure~\ref{fig:mcf}), almost all blocks, regardless of their size, have reuse distances of around 1500. This means that block size is less effective as an indicator of reuse for such applications (and the mechanism we describe in Section~\ref{sec:sip} effectively avoids using block size in cache management decisions for such applications).

Third, for applications where block size is indicative of reuse, there is usually no coarse-grained way to distinguish between block sizes that are indicative of different reuse distances. In other words, simply dividing the blocks into \emph{big} or \emph{small} blocks, as done in ECM~\cite{ecm}, is not enough to identify the different reuse behavior of blocks of different sizes. The distinction between block sizes should be made at a finer granularity. This is evident for \emph{bzip2} (Figure~\ref{fig:bzip2}): while 8-, 36-, and 64-byte blocks have short reuse distances, a significant fraction of the 34-byte blocks have very long reuse distances (between 5000 and 6000). Hence, there is no single block size threshold that would successfully {\em distinguish} blocks with high reuse from those with low reuse. Data from other applications (e.g., \emph{soplex}, \emph{gcc}) similarly support this.

We briefly discuss why compressed size is sometimes not indicative of reuse behavior. First, the data stored within a single data structure can vary, so blocks with the same reuse pattern can have multiple different compressed sizes (e.g., in \emph{mcf}). In this case, blocks of different sizes are equally important for the cache. Second, blocks of the same size(s) can have multiple different reuse patterns/distances (e.g., in \emph{milc} and \emph{gromacs}). In this case, size might not provide useful information to improve cache utilization, because blocks of the same size can be of very different importance.
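To make the analysis above concrete, the per-size reuse distance histograms behind these scatter plots can be gathered from a block-granularity access trace with a short routine like the minimal sketch below (a software model written for illustration, not the simulator used in this work; the trace format and helper names are assumptions):
\begin{verbatim}
# Sketch: per-compressed-size reuse distance histograms from an access trace.
# Assumption: `trace` yields (block_address, compressed_size_bytes) pairs, and
# reuse distance is the number of distinct blocks accessed between two
# consecutive accesses to the same block (LRU stack distance).
from collections import Counter, OrderedDict

def reuse_distance_by_size(trace):
    stack = OrderedDict()   # blocks, least- to most-recently used
    hist = {}               # compressed size -> Counter{reuse distance: count}
    for addr, size in trace:
        if addr in stack:
            depth = 0       # distinct blocks touched since the last access
            for key in reversed(stack):
                if key == addr:
                    break
                depth += 1
            hist.setdefault(size, Counter())[depth] += 1
            stack.move_to_end(addr)
        else:
            stack[addr] = None   # first access: no reuse distance recorded
    return hist
\end{verbatim}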
\section{Qualitative Comparison with Prior Work}
\label{sec:related}
\subsection{Size-Aware Management in On-Chip Caches}
Baek et al. propose Effective Capacity Maximizer (ECM)~\cite{ecm}. This mechanism employs size-aware insertion and replacement policies for an on-chip compressed cache. Unlike the size-oblivious DRRIP~\cite{RRIP} on which it is built, ECM inserts big blocks with lower priority than small blocks. The threshold for what is considered a ``big'' block is determined dynamically at runtime using an equation derived from heuristics and based on the current effective capacity and physical memory usage. During replacement, the biggest block in the eviction pool is selected as the victim. ECM is the first size-aware policy employed for compressed on-chip caches. We find that this approach has several shortcomings and underperforms relative to our proposed mechanisms (as we show in Section~\ref{camp:sec:results}). First, the threshold scheme employed by ECM is coarse-grained; especially in multi-core workloads, where a greater diversity of block sizes exists across applications, considering more sizes (as \carp{} does) yields better performance. Second, ECM's mechanism does not consider the relation between block reuse and size, whereas \carp{} exploits the new observation that block size and reuse can sometimes be related. Third, due to ECM's complex threshold definition, it is unclear how to generalize ECM to a cache with global replacement, where size-aware replacement policies demonstrate the highest benefit (as shown in Section~\ref{camp:sec:results}). In contrast, \carp{} is easily adapted to work with such caches.

Recently, Sardashti and Wood propose the decoupled compressed cache (DCC) design~\cite{dcc}, which exploits both locality and a decoupled sectored cache design to avoid the recompaction (and, partially, the fragmentation) overhead of previous compressed cache designs. The DCC design is largely orthogonal to the compression-aware management mechanisms proposed in this work and can be used in conjunction with them.

\subsection{Size-Aware Management in Web Caches}
Prior works on web caches have proposed many management strategies that consider object size, i.e., variable document size. ElAarag and Romano~\cite{elaarag1, elaarag2} provide one of the most comprehensive surveys. While these techniques serve the same high-level purpose as a management policy for an on-chip cache (e.g., making an informed decision on the optimal victim), they do so in a very different environment. Many of the proposed mechanisms rely on a recency list of \emph{all} objects in the cache (e.g., \cite{size}) or consider the frequency of object access (e.g., \cite{lru-sp}), which are prohibitively expensive techniques for an on-chip cache. In addition, these techniques do not consider the higher density of information that comes with smaller blocks after compression.
This higher density can lead to a higher importance of the smaller blocks for the cache, a factor that was mostly ignored in these prior mechanisms. Some prior works (e.g., \cite{luv, gd-size}) proposed function-based replacement policies that calculate the value of an object much like our proposed \mineviction{} policy. In particular, Bahn et al.~\cite{luv} proposed a mechanism where the \textit{value} of a block is computed by dividing its re-reference probability by the relative cost of fetching by size. Similar to other function-based techniques, however, these inputs cannot be efficiently computed or stored in hardware. Our proposed technique does not suffer from this problem and requires only simple metrics already built into on-chip caches.

\section{\carp{}: Overview}
\label{sec:ideas}
Our proposed Compression-Aware Management Policy (\carp{}) consists of two components: Minimal-Value Eviction (\mineviction{}) and Size-based Insertion Policy (\insertionpolicy{}). We explain the key ideas of each component in this section and the implementation of each in the next section. We also propose Global \carp{} (or G-\carp), an adaptation of \carp{} for a cache with a decoupled tag- and data-store and a global replacement policy.

\subsection{Minimal-Value Eviction (\mineviction)}
The key observation in \mineviction{} is that evicting one or more important blocks of larger compressed size may be more beneficial than evicting several more compressible, less important blocks (see Section~\ref{sec:motivation}). The idea behind \mineviction{} is that each block has a value to the cache. This value is a function of two key parameters: (i) the likelihood of future re-reference and (ii) the compressed block size. For a given $<$prediction of re-reference, compressed block size$>$ tuple, \mineviction{} associates \emph{a value with the block}. Intuitively, a block with a higher likelihood of re-reference is more valuable than a block with a lower likelihood of re-reference and is assigned a higher value. Similarly, a more compressible block is more valuable than a less compressible block because it takes up fewer segments in the data-store, potentially allowing for the caching of additional useful blocks. The block with the least value in the cache is chosen as the next victim for replacement.

\subsection{Size-based Insertion Policy (\insertionpolicy)}
The key observation behind \insertionpolicy{} is that there is a correlation between cache block reuse distance and compressed block size. \insertionpolicy{} exploits this observation and inserts blocks of certain sizes with higher priority if doing so reduces the cache miss rate. \insertionpolicy{} is effective because there is often a correlation between size and reuse distance either within an application or within an application phase. Altering the priority of blocks of certain sizes with short or long reuse distances helps ensure that more important blocks stay in the cache. At run-time, \insertionpolicy{} dynamically detects the set of sizes that, when inserted with higher priority, reduce the number of misses relative to a size-oblivious insertion policy. \insertionpolicy{} uses a simple mechanism based on dynamic set sampling~\cite{mlp} to make the prioritization decision for various compressed sizes.

\subsection{\carp: Combining \mineviction{} and \insertionpolicy{}}
Our final mechanism, \carp, combines \mineviction{} and \insertionpolicy{} into one comprehensive cache management policy for the compressed cache.
To summarize, \carp{} (i) prioritizes the insertion of blocks whose compressed sizes are dynamically identified to reduce the miss rate, based on the detected relationship between block size and reuse distance, and (ii) evicts the set of blocks with the least value, based on a value function of block size and a prediction of re-reference.

\subsection{Local vs. Global Insertion/Replacement}
In addition to being an effective mechanism for the traditional compressed cache with a local replacement policy, the key ideas behind \carp{} are even more effective when applied to a cache with a decoupled tag- and data-store and a global replacement policy. Towards this end, we propose Global \insertionpolicy{} (or G-\insertionpolicy) and Global \mineviction{} (or G-\mineviction). Together, we combine these into Global \carp{} (or G-\carp). For a traditional cache structure, a local replacement policy considers only the blocks within a \emph{single set} as candidates for replacement. The V-Way cache~\cite{v-way}, described in Section~\ref{sec:background}, with a decoupled tag- and data-store allows for a global replacement decision where the pool of potential candidates for replacement is much larger. In Section~\ref{sec:results} we show that this increases the effectiveness of our size-aware policies.

\section{\carp{}: Design and Implementation}
\label{sec:carp}
Our proposed Compression-Aware Management Policy (\carp{}) consists of two components: Minimal-Value Eviction (\mineviction{}) and Size-based Insertion Policy (\insertionpolicy{}). These mechanisms assume a compressed cache structure where the compressed block size is available to the hardware making the insertion and replacement decisions. Without loss of generality, we assume that the tag-store contains double the number of tags and is decoupled from the data-store to allow higher effective capacity (as proposed in several prior works~\cite{fpc,bdi,c-pack}). We also propose Global \carp{} (or G-\carp), an adaptation of \carp{} for a cache with a global replacement policy. In this section, we first provide the background information needed to understand some of our mechanisms (Section~\ref{sec:background}). Then, we describe the design and implementation of each mechanism in depth (Sections~\ref{sec:mve}--\ref{sec:gcarp}). We detail the implementation of our G-\carp{} mechanism assuming the structure proposed for the V-Way cache~\cite{v-way}. None of the mechanisms requires extensive hardware changes on top of the baseline compressed cache designs (both local and global; see Section~\ref{sec:complexity} for an overhead analysis).

\subsection{Background}
\label{sec:background}
Multiple size-oblivious cache management mechanisms (e.g.,~\cite{mlp,RRIP,EAF,lacs,rw-samira}) have been proposed to improve the performance of conventional on-chip caches (without compression). Among them, we select RRIP~\cite{RRIP} as both a comparison point in our evaluations and as a predictor of future re-reference in some of our algorithms (see Section~\ref{sec:mve}). This selection is motivated both by the simplicity of the algorithm and by its state-of-the-art performance (as shown in \cite{RRIP}).

\textbf{RRIP.} Re-Reference Interval Prediction (RRIP)~\cite{RRIP} uses an $M$-bit saturating counter per cache block as a Re-Reference Prediction Value ($RRPV$) to predict the block's re-reference distance. The key idea behind RRIP is to prioritize the blocks with lower predicted re-reference distance, as these blocks have a higher expectation of near-future reuse.
Blocks are inserted with a long re-reference interval prediction ($RRPV = 2^M-2$). On a cache miss, the victim block is a block with a predicted distant re-reference interval ($RRPV = 2^M-1$). If there is no such block, the $RRPV$ of all blocks is incremented by one and the process repeats until a victim is found. On a cache hit, the $RRPV$ of a block is set to zero (near-immediate re-reference interval). Dynamic RRIP (DRRIP) uses set dueling~\cite{mlp,dip} to select between the aforementioned policy (referred to as SRRIP) and one that inserts blocks with a distant re-reference interval prediction with high probability and with a long re-reference interval prediction with low probability.

\textbf{V-Way.} The Variable-Way, or V-Way~\cite{v-way}, cache is a set-associative cache with a decoupled tag- and data-store. The goal of V-Way is two-fold: providing flexible (variable) associativity together with global replacement across the entire data-store. A defining characteristic is that there are more tag entries than data entries. Forward and backward pointers are maintained in the tag- and data-store to link the entries. This design enables associativity to effectively vary on a per-set basis by increasing the number of tag-store entries relative to data-store entries. Another benefit is the implementation of a \emph{global replacement policy}, which is able to choose data-victims from anywhere in the data-store. This is in contrast to a traditional \emph{local replacement policy}, e.g.,~\cite{LRU,RRIP}, which considers data-store entries only within a single set as possible victims. The particular global replacement policy described in \cite{v-way} (called Reuse Replacement) consists of a Reuse Counter Table (RCT) with a counter for each data-store entry. Victim selection is done by starting at a pointer (PTR) to an entry in the RCT and searching for the first counter equal to zero, decrementing each counter while searching, and wrapping around if necessary. A block is inserted with an RCT counter equal to zero. On a hit, the RCT counter of the block is incremented. We use the V-Way design as a foundation for all of our global mechanisms (described in Section~\ref{sec:gcarp}).

\subsection{Minimal-Value Eviction (\mineviction)}
\label{sec:mve}
The key observation in our \mineviction{} policy is that evicting one or more important blocks of larger compressed size may be more beneficial than evicting several more compressible, less important blocks (see Section~\ref{sec:motivation}). The idea behind \mineviction{} is that each block has a value to the cache. This value is a function of two key parameters: (i) the likelihood of future re-reference and (ii) the compressed block size. For a given $<$prediction of re-reference, compressed block size$>$ tuple, \mineviction{} associates \emph{a value with the block}. Intuitively, a block with a higher likelihood of re-reference is more valuable than a block with a lower likelihood of re-reference and is assigned a higher value. Similarly, a more compressible block is more valuable than a less compressible block because it takes up fewer segments in the data-store, potentially allowing for the caching of additional useful blocks. The block with the least value in the associativity set is chosen as the next victim for replacement---sometimes multiple blocks need to be evicted to make room for the newly inserted block.
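The following minimal sketch illustrates this value-based victim selection, using the $V_i = p_i/s_i$ instantiation and the power-of-two size buckets detailed in the next paragraph (the block representation is a simplifying assumption, and the case where the data-store still has free space---handled by plain RRIP---is omitted):
\begin{verbatim}
# Sketch: MVE victim selection within one set (software model, M = 3).
RRPV_MAX = 7  # 2**M - 1

def bucketed_size(size_bytes):
    # Power-of-two buckets: 0-7B -> 2, 8-15B -> 4, 16-31B -> 8, and so on,
    # so that dividing by s_i reduces to a right shift in hardware.
    s, bound = 2, 8
    while size_bytes >= bound:
        s, bound = s * 2, bound * 2
    return s

def value(block):
    p = RRPV_MAX + 1 - block["rrpv"]   # p_i is never zero
    return p / bucketed_size(block["size"])

def select_victims(cache_set, needed_segments):
    # Evict the least-valued blocks until enough 8-byte segments are freed.
    victims, freed = [], 0
    for blk in sorted(cache_set, key=value):
        if freed >= needed_segments:
            break
        victims.append(blk)
        freed += (blk["size"] + 7) // 8
    return victims
\end{verbatim}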
In our implementation of \mineviction{}, the value $V_i$ of a cache block $i$ is computed as $V_i = p_i/s_i$, where $s_i$ is the compressed block size of block $i$ and $p_i$ is a predictor of re-reference, such that a larger value of $p_i$ denotes that block $i$ is more important and is predicted to be re-referenced sooner in the future. This function matches our intuition and is monotonically increasing with respect to the prediction of re-reference and monotonically decreasing with respect to the size. We have considered other functions with these properties (i.e., a weighted linear sum), but found the difference in performance to be negligible. Our mechanism estimates $p_i$ using RRIP\footnote{Specifically, the version of RRIP that our mechanism uses is SRRIP. We experimented with DRRIP, but found it offered little performance improvement for our mechanisms compared to the additional complexity. All of our evaluations assume an RRPV width $M=3$.}~\cite{RRIP} as the predictor of future re-reference due to its simple hardware implementation and state-of-the-art stand-alone performance.\footnote{Other alternatives considered (e.g., \cite{EAF}) provide only a binary value.} As described in Section~\ref{sec:background}, RRIP maintains a re-reference prediction value (RRPV) for each cache block, which predicts the re-reference distance. Since a larger RRPV denotes a longer predicted re-reference interval, we compute $p_i$ as $p_i=(RRPV_{MAX}+1-RRPV_i)$. Therefore, a block with a predicted short re-reference interval has more value than a comparable block with a predicted long re-reference interval. $p_i$ cannot be zero because, otherwise, $V_i$ would lose its dependence on $s_i$ and the policy would become size-oblivious.

Depending on the state of the cache, there are two primary conditions under which a victim block must be selected: (i) the data-store has space for the block to be inserted, but all tags are valid in the tag-directory, or (ii) the data-store does not have space for the block to be inserted (an invalid tag may or may not exist in the tag-directory). In the first case, where the data-store is not at capacity, \mineviction{} relies solely on the predictor of re-reference, i.e., the conventional replacement policy, such as RRIP. For the second case, the valid blocks within the set are compared based on $V_i$ and the set of blocks with the least value is evicted to accommodate the block requiring insertion. \mineviction{} likely remains off the critical path, but to simplify the microarchitecture, we eliminate division in the calculation of $V_i$ by bucketing block sizes such that $s_i$ is always a power of two, allowing a simple right-shift operation instead of floating-point division. For the purposes of calculating $V_i$, $s_i=2$ for blocks of size 0B -- 7B, $s_i=4$ for blocks of size 8B -- 15B, $s_i=8$ for blocks of size 16B -- 31B, and so on. The most complex step, comparing blocks by value, can be achieved with a fixed multi-cycle parallel comparison.

\subsection{Size-based Insertion Policy (\insertionpolicy)}
\label{sec:sip}
The key observation behind \insertionpolicy{} is that sometimes there is a relation between cache block reuse distance and compressed block size (as shown in Section~\ref{sec:size-reuse}). \insertionpolicy{} exploits this observation and inserts blocks of certain sizes with higher priority if doing so reduces the cache miss rate. Altering the priority of blocks of certain sizes with short or long reuse distances helps to ensure that more important blocks stay in the cache.
At run-time, \insertionpolicy{} dynamically detects the set of sizes that, when inserted with higher priority, reduce the number of misses relative to a size-oblivious insertion policy. \insertionpolicy{} uses a simple mechanism based on dynamic set sampling~\cite{mlp} to make the prioritization decision for various compressed sizes. It selects the best-performing policy among competing policies during a periodic training phase and applies that policy during steady state. The key observation behind dynamic set sampling is that it is possible to choose the better policy by sampling only a relatively small number of sets: each sampled set in the Main Tag Directory (MTD) has a corresponding set in an Auxiliary Tag Directory (ATD) participating in a tournament. Only the MTD is coupled with the data-store; the ATD is used only to decide which block size(s) should be inserted with higher priority. Therefore, our sampling does not degrade performance during training.

\begin{figure}[tb]
\vspace{-0.3cm}
\centering
\subfigure[]{\label{fig:sipImplementationAtd}\includegraphics[width=60mm]{figures/sip_implementation_atd.pdf}}
\subfigure[]{\label{fig:sipImplementationCtr}\includegraphics[width=60mm]{figures/sip_implementation_ctr.pdf}}
\caption{Set selection during training and decision of the best insertion policy based on the difference in miss rate in the MTD/ATD.}
\end{figure}

Let $m$ be the minimum number of sets that need to be sampled so that dynamic set sampling can determine the best policy with high probability, and let $n$ be the number of compressible block sizes possible with the compression scheme (e.g., 8B, 16B, 20B, ..., 64B). In \insertionpolicy{}, the ATD contains $m\cdot{}n$ sets, $m$ for each of the $n$ sizes. As shown in Figure~\ref{fig:sipImplementationAtd}, each set in the ATD is assigned one of the $n$ sizes. The \emph{insertion policy} in these sets of the ATD differs from the insertion policy in the MTD in that the assigned size is prioritized. For the example in Figure~\ref{fig:sipImplementationAtd}, there are only two possible block sizes. Sets A and F in the ATD \emph{prioritize} insertions of 8-byte blocks (e.g., by increasing $p_i$). Sets D and I prioritize the insertion of 64-byte blocks. Sets B, C, E, G, and H are not sampled in the ATD.

When a set in the MTD that has a corresponding set in the ATD receives a miss, a counter ${CTR}_i$ is incremented, where $i$ is the size prioritized in the corresponding ATD set. When an ATD set receives a miss, it decrements ${CTR}_i$ for the size associated with the policy this set is helping to decide. Figure~\ref{fig:sipImplementationCtr} shows the decision made based on the final value of ${CTR}_{64B}$. For each of the possible compressed block sizes, a decision is made independently based on the result of the counter. If ${CTR}_i$ is negative, prioritizing blocks of size $i$ negatively affects the miss rate (i.e., the insertion policy in the MTD resulted in fewer misses than the insertion policy in the ATD), so \insertionpolicy{} does not prioritize blocks of size $i$. Likewise, if ${CTR}_i$ is positive, prioritizing the insertion of blocks of size $i$ reduces the miss rate, and \insertionpolicy{} inserts size-$i$ blocks with high priority for best performance. For $n$ different sizes, there are $2^n$ possible insertion schemes and any of them may be chosen by \insertionpolicy.
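The training bookkeeping described above can be summarized with the following minimal sketch (a software model under our naming assumptions; counter saturation and the actual set-index sampling are omitted):
\begin{verbatim}
# Sketch: SIP training counters (one CTR_i per compressed size bin).
from collections import defaultdict

class SIPTrainer:
    def __init__(self, sampled_bin_of_set):
        # sampled_bin_of_set: sampled MTD set index -> size bin that its
        # ATD twin prioritizes on insertion
        self.sampled_bin_of_set = sampled_bin_of_set
        self.ctr = defaultdict(int)

    def on_mtd_miss(self, set_index):
        size_bin = self.sampled_bin_of_set.get(set_index)
        if size_bin is not None:
            self.ctr[size_bin] += 1    # default (size-oblivious) policy missed

    def on_atd_miss(self, size_bin):
        self.ctr[size_bin] -= 1        # size-prioritizing policy missed

    def prioritized_sizes(self):
        # Steady state: prioritize exactly the bins whose CTR_i ended positive.
        return {b for b, c in self.ctr.items() if c > 0}
\end{verbatim}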
For simplicity and to reduce power consumption, the dynamic set sampling occurs during a periodic training phase,\footnote{In our evaluations, we perform training for 10\% of the time, e.g., for 100 million cycles every 1 billion cycles.} during which the insertion policy of the MTD is unaffected by \insertionpolicy{}. At the conclusion of the training phase, steady state is entered and the MTD adopts the chosen policies and prioritizes the insertion of blocks of sizes for which $CTR$ was positive during training. \insertionpolicy{} is general enough to be applicable to many replacement policies (e.g., LRU, RRIP, etc.). In some cases (e.g., LRU), it is more effective to try inserting blocks with lower priority (e.g., at the LRU position) instead of higher priority as proposed above. We evaluate \insertionpolicy{} with RRIP, where blocks are by default inserted with a predicted long re-reference interval ($RRPV = 2^M-2$). Therefore, in the ATD sets, the appropriate sizes are prioritized and inserted with a predicted short re-reference interval ($RRPV=0$). For a 2MB cache with 2048 sets, we create an ATD with 32 sets for each of 8 possible block sizes. For simplicity, in our implementation we limit the number of sizes to eight by bucketing the sizes into eight size bins (i.e., bin one consists of sizes 0 -- 8B, bin two consists of sizes 9 -- 16B,\ldots, and bin eight consists of sizes 57 -- 64B).

\subsection{\carp{} for the V-Way Cache}
\label{sec:gcarp}
In addition to being an effective mechanism for the traditional compressed cache with a local replacement policy, the key ideas behind \carp{} are even more effective when applied to a cache with a decoupled tag- and data-store and a global replacement policy, where the pool of potential candidates for replacement is much larger. In this work, we apply these ideas to the V-Way cache~\cite{v-way} (described in Section~\ref{sec:background}), whose decoupled tag- and data-store increases the effectiveness of replacement algorithms. To demonstrate this effectiveness, we propose Global \insertionpolicy{} (or G-\insertionpolicy) and Global \mineviction{} (or G-\mineviction). Together, we combine these into Global \carp{} (or G-\carp).

\textbf{V-Way cache + compression.} The V-Way cache~\cite{v-way} design can be enhanced with compression in four main steps (as shown in Figure~\ref{fig:vway+c}). First, the tag entries need to be extended with encoding bits to represent the particular compression scheme used for a cache block (e.g., 4 bits for BDI~\cite{bdi}, see \ding{202}). The number of tags is already doubled in the V-Way cache. Second, the data-store needs to be split into multiple segments to get the benefit of compression (e.g., 8-byte segments, see \ding{203}). As in~\cite{bdi}, every cache block after compression consists of multiple adjacent segments. Third, the reverse pointers ($R_{n}$) that are used to perform the replacement need to track not only the validity (v bit) but also the size of each block after compression (measured in the number of 8-byte segments, \ding{204}). This simplifies the replacement policies, because there is no need to access the tags to find block sizes. Fourth, we double the number of reverse pointers per set, so that we can exploit the capacity benefits of compression (\ding{205}).
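For concreteness, the pointer widths quoted in the next paragraph follow directly from the cache parameters; the small sketch below reproduces that arithmetic (the parameter and function names are ours, assuming a 2MB cache, 64-byte blocks, 8-byte segments, and a doubled tag-store):
\begin{verbatim}
# Sketch: metadata widths for the compressed V-Way design.
from math import log2

def vway_compression_pointer_bits(cache_bytes=2 * 2**20, block_bytes=64,
                                  segment_bytes=8, tag_ratio=2):
    data_entries = cache_bytes // block_bytes    # 32768 data-store entries
    tag_entries = tag_ratio * data_entries       # 65536 tag-store entries
    fptr = int(log2(data_entries))               # forward pointer: 15 bits
    rptr = int(log2(tag_entries))                # reverse pointer: 16 bits
    # With compression, the forward pointer addresses 8-byte segments:
    fptr_c = fptr + int(log2(block_bytes // segment_bytes))   # 18 bits
    return fptr, rptr, fptr_c
\end{verbatim}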
\begin{figure}[tb]
\centering
\includegraphics[width=80mm]{figures/VWay-Compression.pdf}
\caption{V-Way + compression cache design.}
\label{fig:vway+c}
\end{figure}

For a 2MB V-Way-based L2 cache with 64-byte cache blocks, the sizes of the \emph{fptr} and \emph{rptr} pointers are 15 ($log_2{\frac{2MB}{64B}}$) and 16 ($log_2{\frac{2*2MB}{64B}}$) bits, respectively. After compression is applied and assuming 8-byte segments, fptr increases by 3 bits, to a total size of 18 bits.\footnote{Fptr and rptr pointers can be reduced in size (by 3 bits) by using regioning (as described later in Section~\ref{sec:gsip}).} The single \emph{validity} bit used in the V-Way cache is extended to 3 bits to represent the 7 different sizes of cache blocks after compression with BDI, as well as the validity itself.

\textbf{G-\mineviction.}
\label{sec:gmve}
As in \mineviction, G-\mineviction{} uses a value function to calculate the value of blocks. The changes required are in (i) computing $p_i$ and (ii) selecting a pool of candidate blocks from the much larger pool of replacement options for each global replacement decision. To compute $p_i$, we propose using the reuse counters from the Reuse Replacement policy~\cite{v-way} as a predictor of future re-reference. As in the Reuse Replacement policy \cite{v-way} (see Section~\ref{sec:background}), each data-store entry has a counter. On insertion, a block's counter is set to zero. On a hit, the block's counter is incremented by one, indicating its reuse. For the second change, we implement global replacement by maintaining a pointer (PTR) to a reuse counter entry. Starting at the entry PTR points to, the reuse counters of 64 valid data entries are scanned, decrementing each non-zero counter by one (as in the Reuse Replacement policy). The 64 blocks are assigned a value, $V_i$, and the least-valued block(s) are evicted to accommodate the incoming block. We consider 64 blocks because this guarantees both an upper bound on latency and that, in the worst case (i.e., when all 64 blocks are highly compressed), evicting all of them vacates enough data-store space for the incoming block.

A few applications (e.g., \emph{xalancbmk}~\cite{SPEC}) have a majority of blocks of very similar sizes that primarily belong to two adjacent size bins. When considering 64 such blocks, certain blocks in the smaller size bin can essentially become ``stuck'' in the cache (i.e., there is only a very small probability that these blocks will be chosen as the victim, because a block with the same prediction of re-reference that belongs to the larger size bin is present and will be chosen). This results from the microarchitectural simplifications and the approximate nature of the value function, and it can cause performance degradations in a few cases. We address this shortcoming later in this section.

\textbf{G-\insertionpolicy.}
\label{sec:gsip}
Dynamic set sampling (used by \insertionpolicy{}) relies on the observation that only a small number of sets needs to be sampled to estimate the performance of competing policies~\cite{mlp}. However, this assumption does not hold in a cache with global replacement, because evictions are not limited to the set in which a cache miss occurs, which interferes with sampling. For the V-Way cache, we instead propose a mechanism inspired by set dueling~\cite{dip} to select the best insertion policy. To apply set dueling to G-\insertionpolicy, we divide the data-store into $n$ (where $n$ is small; in our evaluations, $n=8$) equal regions.
Instead of considering all blocks within the data-store, the replacement policy considers only the blocks within a particular region. This still allows considerably more replacement options than a traditional cache structure. We observe that this division also simplifies the V-Way cache design with negligible impact on performance.\footnote{G-\mineviction{} supports regions by simply maintaining one PTR per region.} During a training phase, each region is assigned a compressed block size to prioritize on insertion. Figure~\ref{fig:globalSipImplementationAtd} shows this assignment for a simple cache with three regions and two block sizes, 8-byte and 64-byte. The third region is designated as a baseline (or control) region in which no blocks are inserted with higher priority. When a miss occurs within a region, the $CTR$ counter of that region is incremented. For example, in Figure~\ref{fig:globalSipImplementationAtd}, a miss to set A, B, or C increments ${CTR}_{8B}$. Likewise, a miss to set G, H, or I increments ${CTR}_{base}$, and so on. At the end of the training phase, the region $CTR$ counters are compared (see Figure~\ref{fig:globalSipImplementationCtr}). If ${CTR}_i < {CTR}_{base}$, blocks of size $i$ are inserted with higher priority in steady state in all regions. Therefore, G-\insertionpolicy{} detects at runtime the sizes that reduce the miss rate when inserted with higher priority than other blocks.

\begin{figure}[tb]
\centering
\subfigure[]{\label{fig:globalSipImplementationAtd}\includegraphics[width=60mm]{figures/global_sip_implementation_atd.pdf}}
\subfigure[]{\label{fig:globalSipImplementationCtr}\includegraphics[width=60mm]{figures/global_sip_implementation_ctr.pdf}}
\caption{Set selection during training and update of counters on misses to each region.}
\end{figure}

In our implementation, we divide the data-store into eight regions.\footnote{We conducted an experiment varying the number of regions (and therefore the number of distinct size bins considered) from 4 to 64 and found that having 8 regions performs best.} This number can be adjusted based on the cache size. Because one region is designated as the baseline region, we bin the possible block sizes into seven bins and assign one range of sizes to each region. During the training phase, sizes within this range are inserted with higher priority. The training duration and frequency are the same as in \insertionpolicy{}. Because training is short and infrequent, possible performance losses due to set dueling are limited.

\textbf{G-\carp.}
\label{sec:gcarp-implementation}
G-\mineviction{} and G-\insertionpolicy{} complement each other and can be easily integrated into one comprehensive replacement policy, referred to as G-\carp. We make one improvement over the simple combination of these two orthogonal policies to further improve performance in the few cases where G-\mineviction{} degrades performance. During the training phase of G-\insertionpolicy{}, we designate a region in which we insert blocks with simple Reuse Replacement instead of G-\mineviction{}. At the end of the training phase, the $CTR$ of this region is compared with that of the control region and, if the Reuse Replacement region incurred fewer misses, G-\mineviction{} is disabled in all regions during steady state. In G-\mineviction{}-friendly applications, it remains enabled.
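The region-based training can be summarized with the minimal sketch below (a software model under our naming assumptions; the additional G-\carp{} region that duels plain Reuse Replacement against G-\mineviction{} follows the same counter-comparison pattern):
\begin{verbatim}
# Sketch: G-SIP region set dueling.
from collections import defaultdict

class GSIPTrainer:
    def __init__(self, region_of_set, size_bin_of_region, baseline_region):
        self.region_of_set = region_of_set            # set index -> region id
        self.size_bin_of_region = size_bin_of_region  # region id -> size bin
        self.baseline = baseline_region
        self.ctr = defaultdict(int)                   # region id -> miss count

    def on_miss(self, set_index):
        self.ctr[self.region_of_set[set_index]] += 1

    def prioritized_sizes(self):
        # After training, prioritize any size bin whose region missed less
        # than the baseline region (CTR_i < CTR_base).
        base = self.ctr[self.baseline]
        return {self.size_bin_of_region[r] for r, m in self.ctr.items()
                if r != self.baseline and m < base}
\end{verbatim}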
\subsection{Overhead and Complexity Analysis}
\label{sec:complexity}
Table~\ref{table:complexity} shows the storage cost of six cache designs: the baseline uncompressed cache, the BDI compressed cache with LRU, V-Way with and without compression, as well as \carp{} and G-\carp{}. On top of our reference cache with BDI and LRU (2384kB), \mineviction{} does not add any additional metadata, and the dynamic set sampling in \insertionpolicy{} increases the cache size in bits by only 1.5\% (total \carp{} size: 2420kB). Adding BDI compression to the V-Way cache with 2x tags and 8 regions increases the cache size from 2458kB to 2556kB. G-\mineviction{}/G-\insertionpolicy{}/G-\carp{} do not add further metadata (with the exception of eight 16-bit counters for set dueling in G-\insertionpolicy{}/G-\carp{}). In addition, none of the proposed mechanisms is on the critical path of execution, and the logic is reasonably modest to implement (e.g., comparisons of CTRs). We conclude that the complexity and storage overhead of \carp{} are modest.

\setlength{\tabcolsep}{.4em}
\begin{table}[h]\scriptsize
\centering
\vspace{-0.3cm}
\hspace{-0.0cm}
\begin{minipage}{\columnwidth}
\hfill{}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
\textbf {} & \textbf{Base} & \textbf{BDI} & \textbf{\carp{}} & \textbf{V-Way} & \textbf{V-Way+C} & \textbf{G-\carp{}} \\ \hline
tag-entry(bits) & 21 & 35(\cite{bdi}) & 35 & 36~\footnote{+15 forward ptr; \textsuperscript{\textit{b}} +16 reverse ptr; \textsuperscript{\textit{c}} +1/8 set sampling in \textbf{\insertionpolicy{}}; \textsuperscript{\textit{d}} CTR's in \textbf{\insertionpolicy{}}; \textsuperscript{\textit{e}} +4 for comp.
encoding; \textsuperscript{\textit{f}} +32 (2 reverse ptrs per data entry, 13 bits each, and 2 extended validity bits, 3 bits each)} & 40~\textsuperscript{\textcolor{red}{\textit{e}}} & 40 \\ \hline data-entry(bits) & 512 & 512 & 512 & 528~\textsuperscript{\textcolor{red}{\textit{b}}} & 544~\textsuperscript{\textcolor{red}{\textit{f}}} & 544 \\ \hline \# tag entries & 32768 & 65536 & 73728~\textsuperscript{\textcolor{red}{\textit{c}}} & 65536 & 65536 & 65536 \\ \hline \# data entries & 32768 & 32768 & 32768 & 32768 & 32768 & 32768 \\ \hline tag-store (kB) & 86 & 287 & 323 & 295 & 328 & 328 \\ \hline data-store (kB) & 2097 & 2097 & 2097 & 2163 & 2228 & 2228 \\ \hline other & 0 & 0 & 8*16~\textsuperscript{\textcolor{red}{\textit{d}}} & 0 & 0 & 8*16 \\ \hline \hline \textbf{total (kB)} & 2183 & \textbf{2384} & \textbf{2420} & 2458 & \textbf{2556} & \textbf{2556} \\ \hline \end{tabular} \hfill{} \caption{Storage overhead of different mechanisms for a 2MB L2 cache. ``V-Way+C'' means V-Way with compression.} \label{table:complexity} \end{minipage} \vspace{-0.2cm} \end{table} \section{Methodology} \label{camp:sec:methodology} We use an in-house, event-driven 32-bit x86 simulator~\cite{MemSim} whose front-end is based on Simics~\cite{Simics}. All configurations have a two-level cache hierarchy, with private L1 caches and a shared, inclusive L2 cache. Table~\ref{camp:tbl:simulation-parameters} provides major simulation parameters. All caches uniformly use a 64B cache block size. All cache latencies were determined using CACTI~\cite{cacti} (assuming a 4GHz frequency). We also checked that these latencies match the existing last-level cache implementations from Intel and AMD, when properly scaled to the corresponding frequency.\footnote{Intel Xeon X5570 (Nehalem) 2.993GHz, 8MB L3 - 35 cycles~\cite{Nehalem}; AMD Opteron 2.8GHz, 1MB L2 - 13 cycles~\cite{Opteron}.} For single-core and multi-core evaluations, we use benchmarks from the SPEC CPU2006 suite~\cite{SPEC}, two TPC-H queries~\cite{tpc}, and an Apache web server. All results are collected by running a representative portion (based on PinPoints~\cite{pinpoints}) of the benchmarks for 1 billion instructions. We build our energy model based on McPAT~\cite{mcpat}, CACTI~\cite{cacti}, and on RTL of BDI~\cite{bdi} synthesized with Synopsys Design Compiler with a 65nm library (to evaluate the energy of compression/decompression with BDI). \subsection{{Evaluation Metrics}} We measure performance of our benchmarks using IPC (instruction per cycle), effective compression ratio (effective increase in L2 cache size without meta-data overhead, e.g., 1.5 for 2MB cache means effective size of 3MB), and MPKI (misses per kilo instruction). For multi-programmed workloads we use weighted speedup~\cite{weightedspeedup,ws2} as the performance metric. \subsection{{Energy}} We measure the memory subsystem energy, which includes the static and dynamic energy consumed by L1 and L2 caches, memory transfers, and DRAM, as well as the energy of BDI's compressor/decompressor units. Energy results are normalized to the energy of the baseline system with a 2MB compressed cache and an LRU replacement policy. BDI was fully implemented in Verilog and synthesized to create some of the energy results used in building our power model. The area overhead of the compression and decompression logic is $0.014$ $mm^2$ (combined). Decompression power is 7.4 mW, and compression power is 20.59 mW on average. 
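For reference, the evaluation metrics listed above can be summarized as follows (a small sketch; the instruction, cycle, and miss counts are assumed to come from the simulator):
\begin{verbatim}
# Sketch: evaluation metrics.
def ipc(instructions, cycles):
    return instructions / cycles

def mpki(misses, instructions):
    return misses * 1000.0 / instructions

def weighted_speedup(ipc_shared, ipc_alone):
    # Multi-programmed metric: sum over applications of IPC(shared)/IPC(alone).
    return sum(s / a for s, a in zip(ipc_shared, ipc_alone))
\end{verbatim}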
Our results show that there are benchmarks that are almost insensitive (IPC improvement is less than 5\% with 32x increase in cache size) to the size of the L2 cache: dealII, povray, calculix, gamess, namd. This typically means that their working sets mostly fit into the L1D cache, leaving almost no potential for any L2/memory optimization. Therefore, we do not present data in detail for these applications, although we verified that our mechanism does not affect their performance. \begin{table}[t] \vspace{-0.2cm} \centering \scriptsize{ \begin{tabular}{|l|c|} \hline Processor & 1--4 cores, 4GHz, x86 in-order \\ \hline L1-D cache & 32KB, 64B cache-line, 2-way, 1 cycle, uncompressed \\ \hline L2 caches & 1--16 MB, 64B cache-line, 16-way, 15--48 cycles\\ \hline Memory & 300-cycle latency, 32 MSHRs \\ \cline{1-2} \end{tabular}% } \caption{Major parameters of the simulated system.} \label{camp:tbl:simulation-parameters}% \end{table} \subsection{{Parameters of Evaluated Schemes}} For FPC (BDI), we used a decompression latency of 5 cycles~\cite{fpc-tr} (1 cycle~\cite{bdi}), respectively. We use a segment size of 1 byte for both designs to get the highest compression ratio as described in~\cite{fpc-tr,bdi}, and an 8-byte segment size for V-Way-based designs. As in prior works~\cite{fpc,bdi}, we assume double the number of tags compared to the conventional uncompressed cache (and hence the compression ratio cannot be larger than 2.0). \section{Results and Analysis} \label{camp:sec:results} \subsection{Single-core Results} \subsubsection{Effect on Performance} \label{sec:single-core} \begin{figure}[htb] \centering \includegraphics[width=0.95\textwidth]{figures/SingleCore.pdf} \caption{Performance of our local replacement policies vs. RRIP and ECM, normalized to LRU.} \label{fig:1-core} \end{figure} \begin{figure*}[htb] \centering \includegraphics[width=0.95\textwidth]{figures/GSingleCore.pdf} \caption{Performance of our global replacement policies vs. RRIP and V-Way, normalized to LRU.} \label{fig:global-1-core} \end{figure*} Figures~\ref{fig:1-core} and~\ref{fig:global-1-core} show the performance improvement of our proposed cache management policies over the baseline design with a 2MB compressed\footnote{Unless otherwise stated, we use 2MB BDI~\cite{bdi} compressed cache design.} L2 cache and an LRU replacement policy. Figure~\ref{fig:1-core} compares the performance of \carp's local version (and its components: \mineviction{} and \insertionpolicy{}) over (i) the conventional LRU policy~\cite{LRU}, (ii) the state-of-the-art size-oblivious RRIP policy~\cite{RRIP}, and (iii) the recently proposed ECM policy~\cite{ecm}. Figure~\ref{fig:global-1-core} provides the same comparison for G-\carp{} (with its components: G-\mineviction{} and G-\insertionpolicy{}) over (i) LRU, (ii) RRIP, and (iii) V-Way design ~\cite{v-way}. Both figures are normalized to the performance of a BDI-cache with LRU replacement. Table~\ref{table:perf} summarizes our performance results. Several observations are in order. 
\begin{table}[!ht]\small \centering \begin{tabular}{lccc} \toprule \textbf{Mechanism} & \textbf{LRU} & \textbf{RRIP} & \textbf{ECM} \\ \midrule MVE & 6.3\%/-10.7\% & 0.9\%/-2.7\% & 0.4\%/-3.0\% \\ \cmidrule(rl){1-4} SIP & 7.1\%/-10.9\% & 1.8\%/-3.1\% & 1.3\%/-3.3\% \\ \cmidrule(rl){1-4} CAMP & \textbf{8.1\%/-13.3\%} & \textbf{2.7\%/-5.6\%} & \textbf{2.1\%/-5.9\%} \\ \bottomrule \end{tabular} \begin{tabular}{lcccc} \toprule \textbf{Mechanism} & \textbf{LRU} & \textbf{RRIP} & \textbf{ECM} & \textbf{V-Way}\\ \midrule G-MVE & 8.7\%/-15.3\% & 3.2\%/-7.8\% & 2.7\%/-8.0\% & 0.1\%/-0.9\%\\ \cmidrule(rl){1-5} G-SIP & 11.2\%/-17.5\% & 5.6\%/-10.2\% & 5.0\%/-10.4\% & 2.3\%/-3.3\%\\ \cmidrule(rl){1-5} G-CAMP & \textbf{14.0\%/-21.9\%} & \textbf{8.3\%/-15.1\%} & \textbf{7.7\%/-15.3\%} & \textbf{4.9\%/-8.7\%}\\ \bottomrule \end{tabular} \caption{Performance (IPC) / Miss rate (MPKI) comparison between our cache management policies and prior works, 2MB L2 cache. All numbers are pairwise percentage improvements over the corresponding comparison points and averaged across fourteen memory-intensive applications.} \label{table:perf} \end{table} First, our G-\carp{} and \carp{} policies outperform all prior designs: LRU (by 14.0\% and 8.1\%), RRIP (by 8.3\% and 2.7\%), and ECM (by 7.7\% and 2.1\%) on average across fourteen memory-intensive applications (\emph{GMeanIntense}, with MPKI $>$ 5). These performance improvements come from both components in our design, which significantly decrease applications' miss rates (shown in Table~\ref{table:perf}). For example, \mineviction{} and G-\mineviction{} are the primary sources of improvements in \emph{astar}, \emph{sphinx3} and \emph{mcf}, while \insertionpolicy{} is effective in \emph{soplex} and \emph{GemsFDTD}. Note that if we examine all applications, then G-\carp{} outperforms LRU, RRIP and ECM by 8.9\%, 5.4\% and 5.1\% (on average). Second, our analysis reveals that the primary reasons why \carp{}/G-\carp{} outperforms ECM are: (i) ECM's coarse-grain view of the size (only large vs. small blocks are differentiated), (ii) ECM's difficulty in identifying the right threshold for an application. For example, in \emph{soplex}, ECM defines every block that is smaller than or equal to 16 bytes as a small block and prioritizes it (based on ECM's threshold formula). This partially helps to improve performance for some important blocks of size 1 and 16, but our \insertionpolicy{} mechanism additionally identifies that it is even more important to prioritize blocks of size 20 (a significant fraction of such blocks have short reuse distance as we show in Section~\ref{sec:size-reuse}). This in turn leads to much better performance in \emph{soplex} by using \carp{} (and G-\carp{}). Third, in many applications, G-\mineviction{} significantly improves performance (e.g., \emph{soplex} and \emph{sphinx3}), but there are some noticeable exceptions (e.g., \emph{xalancbmk}). Section~\ref{sec:gmve} describes the main reason for this problem. Our final mechanism (G-\carp), where we use set dueling~\cite{dip} to dynamically detect such situations and disable G-\mineviction{} (for these cases only) avoids this problem. As a result, our G-\carp{} policy gets the best of G-\mineviction{} when it is effective and avoids degradations otherwise. Fourth, global replacement policies (e.g., G-\carp) are more effective in exploiting the opportunities provided by the compressed block size. 
G-\carp{} not only outperforms local replacement policies (e.g., RRIP), but also global designs like V-Way (by 3.6\% on average across all applications and by \emph{4.9\%} across memory-intensive applications). We summarize the performance gains and the decrease in the cache miss rate (MPKI) for all our policies in Table~\ref{table:perf}. Based on our results, we conclude that our proposed cache management policies (G-\carp{} and \carp{}) are not only effective in delivering performance on top of the existing cache designs with an LRU replacement policy, but also provide significant improvement over state-of-the-art mechanisms.

\subsubsection{Sensitivity to the Cache Size}
The performance benefits of our policies are significant across a variety of systems with different cache sizes. Figure~\ref{fig:L2size} shows the performance of designs where (i) the L2 cache size varies from 1MB to 16MB, and (ii) the replacement policies also vary: LRU, RRIP, ECM, V-Way, \carp{}, and G-\carp{}.\footnote{All results are normalized to the performance of the 1MB compressed L2 cache with the LRU replacement policy. Cache access latency is modeled and adjusted appropriately for increasing cache size, using CACTI.} Two observations are in order.

\begin{figure}[h]
\centering
\includegraphics[width=0.95\textwidth]{figures/L2Size.pdf}
\caption{Performance with 1MB -- 16MB L2 caches.}
\label{fig:L2size}
\end{figure}

First, G-\carp{} outperforms all prior approaches for all corresponding cache sizes. The performance improvement varies from 5.3\% for a 1MB L2 cache to as much as 15.2\% for an 8MB L2 cache. \carp{} also outperforms all local replacement designs (LRU and RRIP).

Second, having size-aware cache management policies like G-\carp{} in many cases leads to performance that is better than that of a twice-as-large cache with the conventional LRU policy (e.g., 4MB G-\carp{} outperforms 8MB LRU). In some cases (e.g., 8MB), G-\carp{} performance is better than that of a twice-as-large cache with \emph{any other} replacement policy. We conclude that our management policies are efficient in achieving the performance of a higher-capacity last-level cache without making the cache physically larger.

\subsubsection{Effect on Energy}
By decreasing the number of transfers between the LLC and DRAM, our management policies also improve the energy consumption of the whole main memory hierarchy. Figure~\ref{fig:energy} shows this effect on the memory subsystem energy for two of our mechanisms (\carp{} and G-\carp) and three state-of-the-art mechanisms: (i) RRIP, (ii) ECM, and (iii) V-Way. Two observations are in order.
\begin{figure}[h]
\centering
\includegraphics[width=0.95\textwidth]{figures/Energy.pdf}
\caption{Effect on memory subsystem energy.}
\label{fig:energy}
\end{figure}

First, as expected, G-\carp{} is the most effective in decreasing energy consumption due to the highest decrease in MPKI (described in Table~\ref{table:perf}). The total reduction in energy consumption is 15.1\% on average for memory-intensive workloads (11.8\% for all applications) relative to the baseline system, and 7.2\% relative to the best prior mechanism. We conclude that our cache management policies are more effective in decreasing the energy consumption of the memory subsystem than previously proposed mechanisms.

Second, the applications that benefit the most are usually the same applications that also have the highest performance improvement and the highest decrease in off-chip traffic, e.g., \emph{soplex} and \emph{mcf}. At the same time, there are a few exceptions, like \emph{perlbench}, that demonstrate a significant reduction in energy consumed by the memory subsystem, but do not show significant performance improvement (as shown in Figures~\ref{fig:1-core} and~\ref{fig:global-1-core}). For these applications, the main memory subsystem is usually not a performance bottleneck due to their relatively small working set sizes that fit into the 2MB L2 cache, and hence the relative improvements in the main memory subsystem might not have noticeable effects on the overall system performance.

\subsubsection{Effect on Cache Capacity}

We expect that size-aware cache management policies increase the effective cache capacity by increasing the effective compression ratio. Figure~\ref{fig:compratio} aims to verify this expectation by showing the average compression ratios for applications in our workload pool (both the overall average and the average for memory-intensive applications). We make two major observations.

First, as expected, our size-aware mechanisms (\carp{}/G-\carp{}) significantly improve the effective compression ratio over the corresponding size-oblivious mechanisms (RRIP and V-Way) -- by 16.1\% and 14.5\% (on average across all applications). The primary reason for this is that RRIP and V-Way are designed to be aggressive in prioritizing blocks with potentially higher reuse (better locality). This aggressiveness leads to an even lower average compression ratio than that of the baseline LRU design (but still higher performance, as shown in Section~\ref{sec:single-core}).

Second, both \carp{} and G-\carp{} outperform ECM by 6.6\% and 6.7\% on average across all applications, for the reasons explained in Section~\ref{sec:related}. We conclude that our policies achieve the highest effective compression ratio in the cache compared to the other three state-of-the-art mechanisms.

\begin{figure}[h]
\centering
\includegraphics[width=0.95\textwidth]{figures/Capacity.pdf}
\caption{Effect on compression ratio with a 2MB L2 cache.}
\label{fig:compratio}
\end{figure}

\subsubsection{Comparison with Uncompressed Cache}

Note that the overhead of using a compressed cache design is mostly due to the increased number of tags (e.g., 7.6\% for BDI~\cite{bdi}).
If the same number of bits (or even a larger number, e.g., 10\%) is spent on having a larger L2 cache (i.e., a 2.2MB \textit{uncompressed} L2 cache with RRIP replacement), we find that the performance is 2.8\% lower than the performance of the baseline system with 2MB \textit{compressed} L2 and LRU replacement, and 12.1\% lower than the performance of the system with the 2MB L2 cache and G-\carp{} policy. We conclude that using a compressed cache with \carp{} provides a reasonable tradeoff in complexity for significantly higher performance. \begin{comment} \subsubsection{\insertionpolicy{} with Uncompressed Cache} Our \insertionpolicy{} policy can be applied to a cache without a compressed data-store, but still with knowledge of a block's compressibility. Evaluating such a design interestingly isolates the effects of smarter replacement and increased cache capacity. The performance improvement of our mechanisms ultimately comes from increasing the utility of the cache which may mean increasing capacity, smarter replacement, or both. Our evaluations with G-\insertionpolicy{} without compression shows a 2.2\% performance improvement over an uncompressed V-Way cache, and a 1.3\% performance improvement over the state-of-the-art PC-based mechanism~\cite{ship} without an overhead of storing a special hardware table. This performance comes from better management only, through the use of compressibility as an indicator of reuse. \end{comment} \subsection{Multi-core Results} \label{sec:multicore} \begin{comment} When the cache blocks from the working set of an application are compressed to mostly the same size, it is hard to expect that size-aware cache management policies would provide significant benefit. However, when different applications are running together in the multi-core system with a shared last-level-cache (LLC), there is a high chance that different applications will have different compressed sizes. As a result, we hypothesize that there is much more room for improvement with size-aware management in multi-core systems. \end{comment} We classify our applications into two distinct categories (\emph{homogeneous} and \emph{heterogeneous}) based on the distributions of the compressed sizes that they have. A homogeneous application is expected to have very few different compressed sizes for its data (when stored in the LLC). A heterogeneous application, on the other hand, has many different sizes. To formalize this classification, we first collect the access counts for different sizes for every application. Then, we mark the size with the highest access count as a ``peak'' and scale all other access counts with respect to this peak's access count. If a certain size within an application has over 10\% of the peak access count, it is also marked as a peak. The total number of peaks is our measure of the application's heterogeneity with respect to block size. If the application's number of peaks exceeds two, we classify it as heterogeneous (or simply \emph{Hetero}). Otherwise, the application is considered to be homogeneous (or simply \emph{Homo}). This classification matches our intuition that applications that have only one or two common sizes (e.g., one size for uncompressed blocks and one size for most of the compressed blocks) should be considered homogeneous. These two classes enable us to construct three different 2-core workload groups: (i) Homo-Homo, (ii) Homo-Hetero, and (iii) Hetero-Hetero. 
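To make the classification rule above concrete, the sketch below restates it as code (a minimal illustration; the input dictionary mapping compressed block sizes to LLC access counts, and the example counts, are hypothetical):

\begin{verbatim}
# Hedged sketch of the peak-based heterogeneity classification described above.
# access_counts_by_size maps a compressed block size (in bytes) to the number of
# LLC accesses observed for blocks of that size.

def classify_heterogeneity(access_counts_by_size):
    """Return 'Hetero' if more than two sizes qualify as peaks, else 'Homo'."""
    if not access_counts_by_size:
        return "Homo"
    top = max(access_counts_by_size.values())
    # A size is a peak if its access count exceeds 10% of the most-accessed size.
    peaks = [s for s, c in access_counts_by_size.items() if c > 0.10 * top]
    return "Hetero" if len(peaks) > 2 else "Homo"

# Hypothetical examples: two dominant sizes -> Homo; four distinct peaks -> Hetero.
print(classify_heterogeneity({64: 100000, 20: 45000}))                        # Homo
print(classify_heterogeneity({64: 100000, 36: 30000, 20: 25000, 8: 15000}))   # Hetero
\end{verbatim}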
We generate 20 2-core workloads per group (60 total) by randomly selecting applications from the corresponding categories. Figures~\ref{fig:2-core} and~\ref{fig:global-2-core} show the performance improvement provided by all \carp{} designs as well as previously proposed designs -- (i) RRIP, (ii) ECM, and (iii) V-Way -- over a 2MB baseline compressed cache design with LRU replacement. We draw three major conclusions.

\begin{figure}[h]
\centering
\subfigure[Local replacement]{\label{fig:2-core} \includegraphics[width=65mm]{figures/f12ml.pdf}}
\subfigure[Global replacement]{\label{fig:global-2-core}\includegraphics[width=65mm]{figures/f12mr.pdf}}
\caption{Normalized weighted speedup, 2 cores with 2MB L2.}
\end{figure}

First, both G-\carp{} and \carp{} outperform all prior approaches in all categories. Overall, G-\carp{} improves system performance by 11.3\%/7.8\%/6.8\% over LRU/RRIP/ECM (\carp{} improves by 5.9\%/2.5\%/1.6\% over the same designs). The effect of our mechanisms on system fairness, i.e., maximum slowdown~\cite{TCM,ATLAS,reetu,fairness,f2}, is negligible.

Second, the more heterogeneity present, the higher the performance improvement with our size-aware management policies. This effect is clearly visible in both figures, especially for the global replacement policies in Figure~\ref{fig:global-2-core}. G-\carp{} achieves the highest improvement (15.9\% over LRU and 10.0\% over RRIP) when both applications are heterogeneous, and hence there are more opportunities for size-aware replacement.

Third, when comparing the relative performance of \mineviction{} vs. \insertionpolicy{} in Figure~\ref{fig:2-core} and the similar pair of G-\mineviction{} vs. G-\insertionpolicy{} in Figure~\ref{fig:global-2-core}, we notice that in the first pair the relative performance is almost the same, while in the second pair G-\mineviction{} is significantly better than G-\insertionpolicy{}. The primary reason for this difference is that G-\mineviction{} can get more benefit from global cache replacement, because it can easily exploit size variation between different sets. At the same time, G-\insertionpolicy{} gets its performance improvement from the relation between the size and the corresponding data reuse, which does not change significantly between local and global replacement.

We conducted a similar experiment\footnote{We increased the LLC size to 4MB to provide the same core-to-cache capacity ratio as with 2 cores.} with 30 4-core workloads and observe similar trends to the 2-core results presented above. G-\carp{} outperforms the best prior mechanism by 8.8\% on average across all workloads (by 10.2\% across memory-intensive workloads).

\subsection{Sensitivity to the Compression Algorithm}

So far, we have presented results only for caches that use BDI compression~\cite{bdi}, but, as described in Section~\ref{sec:motivation}, our proposed cache management policies are applicable to different compression algorithms. We verify this by applying our mechanisms to a compressed cache design based on the FPC~\cite{fpc} compression algorithm. Compared to an FPC-compressed cache with LRU replacement, \carp{} and G-\carp{} improve the performance of memory-intensive applications by 7.8\% and 10.3\%, respectively. We conclude that our cache management policies are effective for different compression designs, delivering the highest overall performance compared to the state-of-the-art mechanisms.
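For reference, the two multi-programmed metrics reported in Section~\ref{sec:multicore} above, normalized weighted speedup (performance) and maximum slowdown (fairness), follow their standard definitions over the co-scheduled applications $i$:
\begin{equation*}
\textrm{Weighted Speedup} = \sum_{i} \frac{\textrm{IPC}_i^{\textrm{shared}}}{\textrm{IPC}_i^{\textrm{alone}}}, \qquad
\textrm{Maximum Slowdown} = \max_{i} \frac{\textrm{IPC}_i^{\textrm{alone}}}{\textrm{IPC}_i^{\textrm{shared}}},
\end{equation*}
where $\textrm{IPC}_i^{\textrm{shared}}$ is measured when application $i$ runs together with the other applications in the workload, and $\textrm{IPC}_i^{\textrm{alone}}$ is measured when it runs alone on the same system.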
\subsection{\insertionpolicy{} with Uncompressed Cache}

Our \insertionpolicy{} policy can be applied to a cache {\em without} a compressed data-store, while still using knowledge of a \emph{block's compressibility as an indicator of reuse}. We evaluate such a design to isolate the ``reuse prediction'' benefit of \insertionpolicy{} independently of its benefits related to cache compression. Our single-/two-core evaluations of G-\insertionpolicy{} show a 2.2\%/3.1\% performance improvement over an uncompressed LRU cache design, and a 1.3\%/1.2\% performance improvement over the state-of-the-art PC-based cache management mechanism~\cite{ship} (evaluated as a comparison to a state-of-the-art ``reuse predictor'').\footnote{In contrast to~\cite{ship}, \insertionpolicy{} does not require a special hardware table and does not track PCs with cache blocks.} We conclude that using compressibility as an indicator of future reuse can improve the performance of even uncompressed caches.

\section{Summary}
\label{camp:sec:conclusion}

In this chapter, we presented Compression-Aware Management Policies (\carp) -- a set of new and simple, yet efficient, \emph{size-aware} replacement policies for compressed on-chip caches. \carp{} improves system performance and energy efficiency compared to three state-of-the-art cache replacement mechanisms. Our policies are based on two key observations. First, we show that direct incorporation of the compressed cache block size into replacement decisions can be a basis for a more efficient replacement policy. Second, we find that the compressed block size can be used as an indicator of a block's future reuse in some applications. Our extensive evaluations show that \carp{}, applied to modern last-level caches (LLCs), improves performance by 4.9\%/9.0\%/10.2\% (on average for memory-intensive workloads) for single-core/two-core/four-core workloads over the best state-of-the-art replacement mechanisms we evaluated. We conclude that \carp{} is an efficient and low-complexity management policy for compressed caches in both single- and multi-core systems. We also hope that our observation that compressed block size indicates reuse behavior could be useful in other contexts.

\section{Related Work}

\textbf{Size-Aware Replacement in Web Caches}. Prior works on web caches have proposed many replacement strategies that consider object size, e.g., variable document sizes. ElAarag and Romano~\cite{elaarag1, elaarag2} provide one of the most comprehensive surveys. While these proposed techniques serve much the same purpose as a replacement policy for an on-chip cache (e.g., making an informed decision on the optimal victim), they do so in a very different environment. Many proposed mechanisms rely on a recency list of all objects in the cache (e.g., \cite{size}) or consider the frequency of object access (e.g., \cite{lru-sp}), which are extremely expensive techniques for an on-chip cache. In addition, many of these techniques aim to maximize the object hit rate rather than giving preference to compressed objects with a higher density of information. Some prior works (e.g., \cite{luv, gd-size}) proposed function-based replacement policies that calculate the value of an object, much like our proposed \mineviction{} policy. In particular, Bahn et al.~\cite{luv} proposed LUV, where the value of a block is computed as its re-reference probability and relative fetching cost divided by its size. Similar to other function-based techniques, however, these inputs cannot be efficiently computed or stored in hardware.
Our proposed technique does not suffer from this problem and requires only simple parameters already built into on-chip caches.
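As a rough illustration of this difference, the sketch below shows a generic function-based victim selection in the spirit of the policies discussed above. The value function and the reuse estimate are placeholders (the exact heuristics used by \mineviction{} are described earlier in this chapter), so this is an illustrative simplification rather than the actual hardware design:

\begin{verbatim}
# Hedged sketch: value-based victim selection over the blocks of one cache set.
# `expected_reuse` stands in for whatever reuse estimate the policy maintains
# (e.g., an RRIP-like prediction); `compressed_size` is the block's size in bytes.

from dataclasses import dataclass

@dataclass
class Block:
    tag: int
    compressed_size: int   # bytes occupied in the compressed data-store
    expected_reuse: float  # higher means more likely to be re-referenced soon

def pick_victim(candidate_blocks):
    """Evict the block with the smallest value (expected benefit per byte kept)."""
    def value(b):
        return b.expected_reuse / b.compressed_size
    return min(candidate_blocks, key=value)

# Hypothetical example: the large block is the least valuable per byte of cache
# space, so it is evicted even though a smaller block has lower absolute reuse.
blocks = [Block(0xA, 64, 0.30), Block(0xB, 20, 0.28), Block(0xC, 8, 0.05)]
print(hex(pick_victim(blocks).tag))  # 0xa
\end{verbatim}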
\section{Focus of This Dissertation: Efficiency of the Memory Hierarchy}

This dissertation focuses on performance and energy efficiency of the modern memory hierarchies. We observe that existing systems have significant redundancy in the data (i) \emph{stored} in the memory hierarchies (e.g., main memory, on-chip caches) and (ii) \emph{transferred} across existing communication channels (e.g., off-chip bus and on-chip interconnect). Figure~\ref{fig:full} shows parts of the system stack where we aim to apply data compression (in red/dark).

\begin{figure}[ht]
\centering
\includegraphics[width=0.95\textwidth]{chap1/Full.pdf}
\caption{Data compression from the core to the main memory.}
\label{fig:full}
\end{figure}

In this dissertation, we first propose a simple and fast yet efficient compression algorithm that is suitable for on-chip cache compression. This algorithm solves one of the key challenges for cache compression: achieving low decompression latency, which is on the critical path of the execution. Then, we show that \emph{compressed cache block size} is a new important factor when making cache replacement decisions that helps to outperform state-of-the-art cache replacement mechanisms. We then propose a new design for main memory compression that solves a key challenge in realizing data compression in main memory: the disparity between how the data is stored (i.e., at a \emph{page} granularity) and how it is accessed (i.e., at a \emph{cache line} granularity). Finally, we show that bandwidth compression---both on-chip and off-chip---can be efficient in providing high effective bandwidth in the context of modern GPUs (with more than a hundred real applications evaluated). At the same time, we find that there is a new important problem with bandwidth compression that makes it potentially energy inefficient -- the significant increase in the number of \emph{bit toggles} (i.e., the number of transitions between zeros and ones) that leads to an increase in dynamic energy. We provide an efficient solution to this problem.

\subsection{A Compelling Possibility: Compressing Data throughout the Full Memory Hierarchy}

At first glance, {\em data compression} may seem like an obvious approach to reducing the negative impacts of processing large amounts of data.
In theory, if data compression could effectively reduce the size of the data without introducing significant overheads, it would relieve pressure on both the {\em capacity} of the various layers of the memory hierarchy (including caches, DRAM, non-volatile memory technologies, etc.) as well as the {\em bandwidth} of the communication channels (including memory buses, etc.) that transfer data between these layers. This in turn would allow system designers to avoid over-provisioning these resources, since they could deliver performance more efficiently as a function of system cost and/or power budget. Perhaps surprisingly, although forms of data compression have been used for many years to reduce file system storage requirements (e.g., by using {\tt gzip} to compress files), there has been little to no use of compression within modern memory hierarchies.\footnote{The only real exception that we are aware of is IBM's MXT technology~\citep{MXT}, which was shipped in commercial products roughly 10 years ago, but which has not become widely adopted.} Why not? \subsection{Why Traditional Data Compression Is Ineffective for Modern Memory Systems} Traditional file compression algorithms such as Lempel-Ziv~\citep{lz} achieve high compression ratios by scanning through the file from the beginning, building up a dictionary of common character sequences (which is stored within the compressed file and used for decompression). In the context of storing files on disk, variations of Lempel-Ziv have been very popular because files are often accessed as sequential streams, and because the large decompression latencies are considered to be acceptable given that (i) disk accesses are already slow, and (ii) saving as much disk space as possible is typically a very high priority. In contrast to accessing compressed files on disk, two things are fundamentally different when a processor accesses data (via loads and stores) within its memory hierarchy: (i) {\em latency} is extremely critical, and (ii) data is commonly {\em accessed randomly} (rather than sequentially). Because processor performance is so sensitive to memory access latency, it is critical that the {\em decompression latency} must be as small as possible when accessing compressed data within the memory hierarchy. Otherwise, system designers and users will quickly become disenchanted with memory compression if it costs them significant performance. Ideally, if decompression latency is small enough, compression within the memory hierarchy should actually {\em improve performance} by improving cache hit rates and reducing bandwidth-related stalls. The fact that main memory is randomly accessed creates additional challenges, including {\em locating} (as well as decompressing) arbitrary blocks of data efficiently, plus achieving significant compression ratios without being able to use Lempel-Ziv's approach of building up dictionaries over large access streams. \section{Related Work} Several prior works have proposed different mechanisms to improve the efficiency of the memory hierarchy to provide (i) higher capacity, (ii) higher bandwidth, (iii) lower latency, and (iv) higher energy efficiency. In this section, we summarize some of the approaches that are related to our work. We summarize those works based on their high-level insight and compare them with the mechanisms proposed in this thesis. \subsection{3D-Stacked DRAM Architectures} One of the major limitations of the existing DRAM-based memories is their limited off-chip bandwidth. 
One way to overcome this limitation is by vertically stacking multiple DRAM chips that provide wider IO interfaces, and hence increase the available off-chip bandwidth to improve performance. Many recent works have proposed designs and architectures based on this idea (e.g.,~\cite{jedec-wideio2,jedec-hbm,lee-isscc14,hmc10,hmc11}) to obtain higher off-chip bandwidth, or to utilize 3D-stacked memory's higher capacity as a cache (e.g.,~\cite{black-micro08,loh-isca08,loh-micro09,woo-hpca10}). These designs are largely orthogonal to the ideas proposed in this thesis, and hence can be used together.

\subsection{In-Memory Computing}

Processing in memory (PIM) has been explored previously (e.g., \cite{LogicInMemory,NON-VON,EXECUBE,Terasys,computationalRAM,IRAM,ActivePages,FlexRAM2,FlexRAM}) and more recently (e.g., \cite{rowclone,GS-DRAM,AndOrDRAM,LazyPIM,PointerChasing,ContRunAhead,SchedPIM,Milad1,PIM2,PIM3}) to perform computation near the data in order to reduce the off-chip bandwidth bottleneck, improving both performance and energy efficiency. More recently, the idea of PIM has been actively explored again in the context of 3D-stacked memory (e.g., \cite{Ahn1,Ahn2,Akin,Babarinsa,NDA,Gao1,BBSync,in-memory1,Gao2,TOM,LazyPIM,SchedPIM}). These prior works might require (i) programmer effort to map regular computation and data to PIM, or (ii) a significant increase in the overall cost of the system and/or the cost-per-bit of modern DRAM. The mechanisms proposed in this dissertation are also applicable to systems that perform in-memory computation.

\subsection{Improving DRAM Performance}

Many prior works look at different ways to improve the efficiency of modern DRAM architectures, either by reducing the average access latency (e.g.,~\cite{lee-hpca2013,lee-hpca2015,rowclone,malladi-isca2012,ChangKHGHLLPKM16}) or by enabling higher parallelism within the DRAM itself (e.g.,~\cite{salp,Chang1}). The approaches used by these works include (i) exploiting DRAM heterogeneity (e.g., Tiered-Latency DRAM~\cite{lee-hpca2013}, Dynamic Asymmetric Subarray~\cite{DynamicAsymmetricSubarray}, Low-Cost Interlinked Subarrays~\cite{lisa}), (ii) improving DRAM parallelism~\cite{salp,Chang1}, (iii) exploiting variation in DRAM latency (e.g., Adaptive Latency DRAM~\cite{lee-hpca2015}, ChargeCache~\cite{chargecache}), (iv) smarter refresh and scheduling mechanisms (e.g.,~\cite{ESKIMO,raidr,Chang1,Avatar,Liu2,RAPID}), and (v) more intelligent memory scheduling and partitioning algorithms (e.g.,~\cite{parbs,stfm,TCM,ATLAS,Ebrahimi1,hps-tr,bliss,DASH,ASM,MISE,sch2,sch3,sch4,sch5,sch6,part1,part2,part3,part4}). Many of these techniques can significantly improve DRAM performance (in terms of latency and energy efficiency), but they are not capable of providing higher effective off-chip bandwidth or higher effective DRAM capacity by exploiting the existing redundancy in the data itself. The ideas in this dissertation can be exploited in conjunction with many of these techniques, e.g., intelligent memory scheduling.

\subsection{Fine-grain Memory Organization and Deduplication}

Several different proposals aim to improve memory performance by changing its page-granularity organization (e.g., fine-grain memory deduplication~\cite{HICAMP}, fine-grain virtual page management~\cite{overlays}). The proposed frameworks usually require significant changes to the existing virtual page organization, which frequently leads to a significant increase in cost.
The techniques proposed in this thesis are much less radical in the way they affect the higher levels of the system stack. The key difference from the deduplication approach~\cite{HICAMP} is that data redundancy is exploited at a much finer granularity (e.g., 1--4 bytes vs. 16--64 bytes), hence much higher compression ratios are possible for many applications. Our techniques are complementary to fine-grain virtual page management works (e.g.,~\cite{overlays}).

\subsection{Data Compression for Graphics}

Data compression is a widely used technique in the specialized area of texture compression~\cite{LDR,floating,bufferCompression} used in modern GPUs. These approaches have several major limitations. First, compressed textures are usually read-only, which is not acceptable for many applications. Second, the compression/decompression latency is quite significant, which limits the applicability of these algorithms to latency-insensitive applications. Our work is targeted towards more general-purpose workloads, where it is difficult to customize the compression algorithm to the very specialized characteristics found in graphics processing.

\subsection{Software-based Data Compression}

Several mechanisms have been proposed to perform memory compression in software (e.g., in the compiler~\cite{PointerComp} or in the operating system~\cite{vm-compression}) for various modern operating systems (e.g., Linux~\cite{linux}, MacOS~\cite{macos}, Windows~\cite{windows}, AIX~\cite{aix}). While these techniques can be quite efficient in reducing applications' memory footprint, their major limitation is very slow (usually software-based) decompression. This limits these mechanisms to compressing only ``cold'' pages (e.g., swap pages).

\subsection{Code Compression}

Compression has been successfully applied not only to application data, but also to the code itself~\cite{instr0, instr1,instr2,instr3,instr4,instr5,instr6,instr7,instr8,instr9,instr10}. The primary goal of these works was usually to reduce the program footprint (especially in the context of embedded devices). The reduced footprint allows more instructions to be stored in the instruction caches, and hence reduces the number of instruction cache misses, which, in turn, improves performance. In this dissertation, we do not specialize for code compression. Instead, our goal is to enable general data compression. Hence, the key difference between these prior works on code compression and the designs proposed in this dissertation is in the compression algorithms themselves: code compression algorithms are usually heavily tuned for a specific input -- instructions -- and are usually not effective for data compression.

\subsection{Hardware-based Data Compression}

Hardware-based data compression received some attention in the past (e.g., \cite{fvc,MXT,fpc,reetu1,c-pack,MMCompression}), but unfortunately the proposed general-purpose designs were not practical, either due to unacceptable compression/decompression latency or due to high design complexity and high overhead to support variable-size blocks after compression. In this thesis, we will show how to overcome these challenges in several practical designs across the whole memory hierarchy. We will provide comprehensive quantitative comparisons to multiple previous state-of-the-art works on hardware-based data compression (e.g.,~\cite{fpc,c-pack,zca,fvc,MMCompression,MXT}).
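To make the latency argument concrete, the sketch below illustrates a simplified base+delta encoding of the kind this thesis builds on: compression reduces to a range check on narrow deltas, and decompression reduces to adding the base back to each delta. This is a hedged, single-base illustration, not the actual B$\Delta$I design described in Chapter 3 (which considers several base/delta size combinations and also handles immediate values):

\begin{verbatim}
# Hedged sketch of a single-base, fixed-delta-width base+delta check.

def try_base_delta(words, delta_bytes):
    """Return (base, deltas) if every word fits as base + signed delta of
    `delta_bytes` bytes; otherwise return None (store the line uncompressed)."""
    base = words[0]
    limit = 1 << (8 * delta_bytes - 1)      # signed range of the narrow delta
    deltas = [w - base for w in words]
    if all(-limit <= d < limit for d in deltas):
        return base, deltas                  # compressed: one base + N narrow deltas
    return None

def decompress(base, deltas):
    """Decompression is just adding the base to every delta (a vector addition)."""
    return [base + d for d in deltas]

# Hypothetical example: eight 8-byte pointers into the same region compress with
# 2-byte deltas; with 1-byte deltas the check fails and the line stays uncompressed.
line = [0x7F3A2000 + off for off in (0, 8, 16, 24, 96, 104, 4096, 4104)]
packed = try_base_delta(line, delta_bytes=2)
assert packed is not None and decompress(*packed) == line
assert try_base_delta(line, delta_bytes=1) is None
\end{verbatim}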
\section{Thesis Statement: Fast and Simple Compression \\ throughout the Memory Hierarchy} The key insight in our approach is that (i) {\em decompression latency} and (ii) {\em simplicity of design} are far more critical than {\em compression ratio} when designing a compression scheme that is effective for modern memory systems (in contrast to traditional file compression techniques aimed at disk storage). We have identified simple and effective mechanisms for compressing data in on-chip caches (e.g., by exploiting {\em narrow dynamic ranges}) and in main memory (e.g., by adopting a common compression ratio for all cache blocks within a page) that achieve significant compression ratios (roughly a factor of two in most cases) while adding minimal access latency overhead~\cite{bdi,LCP,camp,toggles-hpca}. The simplicity of our proposed mechanisms enables elegant solutions for dealing with the practical challenges of how on-chip caches and main memories are organized in modern systems. The ultimate goal of this research is to validate the following thesis: \begin{quote} \textbf{\em It is possible to develop a new set of designs for data compression within modern memory hierarchies that are fast enough, simple enough, and effective enough in saving storage space and consumed bandwidth such that the resulting improvements in performance, cost, and energy efficiency will make such compression designs attractive to implement in future systems.} \end{quote} The hope is to achieve this goal through the following new mechanism: \begin{quote} {\em Data compression hardware (along with appropriate operating system support) that (i) efficiently achieves significant compression ratios with negligible latencies for locating and decompressing data, and (ii) enables the seamless transfer of compressed data between all memory hierarchy layers. } \end{quote} As a result of this, future computer systems would be better suited to the increasingly data-intensive workloads of the future. \section{Contributions} This dissertation makes the following contributions. \begin{enumerate} \item We propose a new compression algorithm (B$\Delta$I\xspace) that achieves a high compression ratio. B$\Delta$I\xspace exploits the existing low dynamic range of values present in many cache lines to compress them to smaller sizes using Base+Delta encoding. B$\Delta$I\xspace yields itself to a very low latency decompression pipeline (requiring only a masked vector addition). To our knowledge, no prior work achieved such low latency decompression at high compression ratio. \textbf{Chapter 3} describes B$\Delta$I\xspace implementation and its evaluation in more detail. \item We observe that the compressed size of a cache block can be indicative of its reuse. We use this observation to develop a new cache insertion policy for compressed caches, the Size-based Insertion Policy (SIP), which uses the size of a compressed block as one of the metrics to predict its potential future reuse. We introduce a new compressed cache replacement policy, Minimal-Value Eviction (MVE), which assigns a value to each cache block based on both its size and its reuse and replaces the set of blocks with the smallest value. Both policies are generally applicable to different compressed cache designs (both with local and global replacement) and can be used with different compression algorithms. \textbf{Chapter 4} describes our proposed design, Compression-Aware Management Policies (CAMP = MVE + SIP) in detail. 
\item We propose a new compression framework (LCP) that solves the problem of efficiently computing the physical address of a compressed cache line in main memory with much lower complexity and power consumption than prior proposals. We demonstrate that \emph{any} compression algorithm can be adapted to fit the requirements of LCP, and that LCP can be efficiently integrated with existing cache compression designs (\textbf{Chapter 7}), avoiding extra compression/decompression. \textbf{Chapter 5} provides detailed implementation and evaluation of this framework. \item We observe that hardware-based bandwidth compression applied to on-chip/off-chip communication interfaces poses a new challenge for system designers: a potentially significant increase in the bit toggle count as a result of data compression. Without proper care, this increase can lead to significant energy overheads when transferring compressed data that was not accounted for in prior works. We propose a set of new mechanisms to address this new challenge: Energy Control and Metadata Consolidation. We provide a detailed analysis and evaluation of a large spectrum of GPU applications that justify (i) the usefulness of data compression for bandwidth compression in many real applications, (ii) as well as the existence of the bit toggle problem for bandwidth compression, and (iii) effectiveness of our new mechanisms to address bit toggle problem, in \textbf{Chapter 6}. \end{enumerate} \chapter{Introduction} \input{chap1/introduction.tex} \section{Results} \label{sec:results} \subsection{Effect on DRAM Capacity} Our LCP design aims to provide the benefit of increased effective main memory capacity without making memory physically larger and without significantly increasing the access latency. Figure~\ref{fig:capacity} compares the compression ratio of LCP against that of other compression techniques: {\em i)} Zero-Page compression, in which accesses to zero pages are served without any cache/memory access (similar to LCP's zero page optimization), {\em ii)} FPC~\cite{MMCompression}, {\em iii)} MXT~\cite{MXT}, and {\em iv)} Lempel-Ziv (LZ)~\cite{lz}.\footnote{\small Our implementation of LZ performs compression at 4kB page-granularity and serves as an idealized upper boundary for the in-memory compression ratio. In contrast, MXT employs Lempel-Ziv at 1kB granularity.} We evaluate the LCP framework in conjunction with two compression algorithms: BDI, and BDI+FPC-fixed, in which each page can be encoded with either BDI or FPC-fixed (Section~\ref{sec:design-prev-algos}). We draw two conclusions from Figure~\ref{fig:capacity}. First, as expected, MXT, which employs the complex LZ algorithm, has the highest average compression ratio (2.30) of all practical designs and performs closely to our idealized LZ implementation (2.60). At the same time, LCP (with either BDI or BDI+FPC-fixed) provides a reasonably high compression ratio (up to 1.69 with BDI+FPC-fixed), outperforming FPC (1.59). Second, while the average compression ratio of Zero-Page-Compression is relatively low (1.29), it greatly improves the effective memory capacity for a number of applications (e.g., GemsFDFD, zeusmp, and cactusADM) . This justifies our design decision of handling zero pages specifically at TLB-entry level. 
\begin{figure}[htb] \centering \includegraphics[width=0.80\textwidth]{figures/Capacity.pdf} \caption{Effect of compression on main memory footprints.} \label{fig:capacity} \end{figure} \begin{figure*}[htb] \centering \includegraphics[width=0.99\textwidth]{figures/IPC.pdf} \caption{Performance comparison (IPC) of different compressed designs.} \label{fig:IPC} \end{figure*} \subsection{Effect on Performance} \label{sec:results-perf} Main memory compression can improve performance in two major ways: 1) reduced memory footprint can reduce long-latency disk accesses, 2) reduced memory bandwidth requirements can enable less contention on main memory bus, which is a increasingly important bottleneck in systems. In our system performance evaluations, we do not take into account the former benefit as we do not model disk accesses (i.e., we assume that the uncompressed working set fits entirely in memory). However, we do evaluate the performance improvement due to memory bandwidth reduction (including our optimizations for compressing zero values described in Section~\ref{sec:opt-zeros}). Evaluations using our LCP-framework show that the performance gains due to the bandwidth reduction more than compensate for the slight increase in memory access latency. In our experiments for both single-core and multi-core systems, we compare eight different schemes that employ compression either in the last-level cache or main memory or both. Table~\ref{table:schemes} describes the eight schemes. \begin{table}[h!]\small \centering \begin{tabular}{|l|l|l|} \hline \textbf{No.} & \textbf{Label} & \textbf{Description}\\ \hline 1 & (None, None) & Baseline with no compression\\ \hline 2 & FPC-Cache & LLC compression using FPC~\cite{fpc}\\ \hline 3 & BDI-Cache & LLC compression using BDI~\cite{bdi}\\ \hline 4 & FPC-Memory & Main memory compression (Ekman and Stenstrom~\cite{MMCompression})\\ \hline 5 & LCP-BDI & LCP-framework with BDI\\ \hline 6 & (FPC, FPC) & Designs 2 and 4 combined\\ \hline 7 & (BDI, LCP-BDI) & Designs 3 and 5 combined\\ \hline 8 & (BDI, LCP-BDI+FPC-Fixed) & Design 3 combined with LCP-framework using BDI+FPC-Fixed\\ \hline \end{tabular} \caption{List of evaluated designs.} \label{table:schemes} \end{table} \vspace{-0.3cm} \subsubsection{Single-Core Results} Figure~\ref{fig:IPC} shows the performance of single-core workloads using all our evaluated designs normalized to the baseline (None, None). We draw three major conclusions from the figure. First, compared against an uncompressed system (None, None), the LCP design using BDI compression (LCP-BDI) improves performance by 6.1\%. This DRAM-only compression scheme outperforms all LLC-only compression schemes and the DRAM-only compression scheme proposed by Ekman and Stenstrom~\cite{MMCompression} that uses the FPC algorithm (FPC-memory). We conclude that our LCP framework is effective in improving performance by compressing main memory. Second, the performance improvement of combined LLC and DRAM compression is greater than that of LLC-only or DRAM-only compression alone. For example, LCP-BDI improves performance by 6.1\%, whereas (BDI, LCP-BDI) improves performance by 9.5\%. Intuitively, this is due to the orthogonality of the benefits provided by cache compression (retains more cache lines that otherwise would have been evicted) and DRAM compression (brings in more cache lines that would otherwise have required separate memory transfers on the main memory bus). 
We will provide more analysis of this observation when analyzing the effect on bandwidth in Section~\ref{sec:results-bandwidth}. We conclude that our LCP framework integrates well with and complements cache compression mechanisms.

Third, a high compression ratio does not always imply an improvement in performance. For example, while GemsFDTD is an application with a highly compressible working set in both the cache and DRAM, its performance does not improve with cache-only compression schemes (due to the extra decompression latency), but improves significantly with DRAM-only compression schemes. In contrast, cache-only compression is significantly beneficial for omnetpp, whereas DRAM-only compression is not. This difference across applications can be explained by the difference in their memory access patterns. We observe that when temporal locality is critical for the performance of an application (e.g., omnetpp and xalancbmk), then cache compression schemes are typically more helpful. On the other hand, when applications have high spatial locality and less temporal locality (e.g., GemsFDTD has an overwhelmingly streaming access pattern with little reuse), they benefit significantly from the bandwidth compression provided by the LCP-based schemes. Hence, if the goal is to improve the performance of a wide variety of applications, which may have a mix of temporal and spatial locality, our results suggest that LCP-based designs with both DRAM and the LLC compressed are the best option. We conclude that combined LLC and DRAM compression that takes advantage of our main memory compression framework benefits a wide variety of applications.

\vspace{-0.3cm}
\subsubsection{Multi-Core Results}

When the system has a single core, the memory bandwidth pressure may not be large enough to take full advantage of bandwidth compression. However, in a multi-core system where multiple applications are running concurrently, savings in bandwidth (i.e., a reduced number of memory bus transfers) may significantly increase the overall system performance. To study this effect, we conducted experiments using 100 randomly generated multiprogrammed mixes of applications (for both 2-core and 4-core workloads). Our results show that bandwidth compression is indeed more critical for multi-core workloads. Using our LCP-based design, LCP-BDI, the average performance improvement\footnote{\small Normalized to the performance of the baseline system without any compression.} is 13.9\% for 2-core workloads and 10.7\% for 4-core workloads. We summarize our multi-core performance results in Table~\ref{tbl:multicore}.

Figure~\ref{fig:many-core} shows the effect of varying the last-level cache size on the performance benefit of our LCP-based design (using BDI compression in main memory), both for single-core and multi-core systems, across all evaluated workloads. LCP-based designs outperform the baseline across all evaluated systems, even when the L2 cache size of the system is as large as 16MB. We conclude that our memory compression framework is effective for a wide variety of core counts and last-level cache sizes.
\begin{table}[ht]\small \centering \begin{tabular}{cccc} \toprule \textbf{Cores} & \textbf{LCP-BDI} & \textbf{(BDI, LCP-BDI)} & \textbf{(BDI, LCP-BDI+FPC-fixed)} \\ \midrule 1 & 6.1\% & 9.5\% & 9.3\% \\ \cmidrule(rl){1-4} 2 & 13.9\% & 23.7\% & 23.6\% \\ \cmidrule(rl){1-4} 4 & 10.7\% & 22.6\% & 22.5\% \\ \bottomrule \end{tabular}% \caption{Average performance improvement (weighted speedup) using LCP-based designs.} \label{tbl:multicore}% \end{table} \begin{figure*}[htb] \centering \begin{minipage}{5.0cm} \centering \includegraphics[height=3.0cm]{figures/1-core.pdf}\\ a) 1-core \end{minipage} \begin{minipage}{5.0cm} \centering \includegraphics[height=3.0cm]{figures/2-core.pdf}\\ b) 2-core \end{minipage} \begin{minipage}{5.0cm} \centering \includegraphics[height=3.0cm]{figures/4-core.pdf}\\ c) 4-core \end{minipage} \caption{Effect of varying cache size on performance.} \label{fig:many-core} \end{figure*} \begin{figure*}[htb] \centering \includegraphics[width=0.99\textwidth]{figures/Bandwidth.pdf} \caption{Effect of main memory compression on memory bandwidth.} \label{fig:bandwidth} \end{figure*} \begin{figure*}[htb] \centering \includegraphics[width=0.99\textwidth]{figures/Energy.pdf} \caption{Effect of main memory compression on power consumption of bus between memory controller and DRAM.} \label{fig:energy} \end{figure*} \subsection{Effect on Bus Bandwidth and Power} \label{sec:results-bandwidth} When DRAM pages are compressed, the traffic between the LLC and DRAM can also be compressed. This can have multiple positive effects: {\em i)} reduction in the average latency of memory accesses, which can lead to improvement in the overall system performance, {\em ii)} decrease in the bus power consumption due to the decrease in the number of transfers. Figure~\ref{fig:bandwidth} shows the reduction in main memory bandwidth between LLC and DRAM (in terms of bytes per kiloinstruction, normalized to a system with no compression) using different compression designs. Two major observations are in order. First, DRAM compression schemes are more effective in reducing bandwidth usage than cache compression schemes. This is because cache-only compression schemes reduce bandwidth consumption by reducing the number of LLC misses but they cannot reduce the bandwidth required to transfer a cache line from main memory. Overall, combined cache-DRAM compression schemes such as (FPC, FPC) and (BDI, LCP-BDI+FPC-fixed) decrease bandwidth consumption by more than 46\% by combining the reduction in both LLC misses and bandwidth required to transfer each cache line. Second, there is a strong correlation between bandwidth compression and performance improvement (Figure~\ref{fig:IPC}). Applications that show a significant reduction in bandwidth consumption (e.g., GemsFDFD, cactusADM, soplex, zeusmp, leslie3d, tpc*) also see large performance improvements. There are some noticeable exceptions to this observation, e.g., h264ref, wrf and bzip2. Although the memory bus traffic is compressible in these applications, main memory bandwidth is not the bottleneck for their performance. \vspace{-0.3cm} \subsubsection{Effect on Main Memory Bus Power} By reducing the number of data transfers on the memory bus, a compressed main memory design also reduces the power consumption of the memory bus. Figure~\ref{fig:energy} shows the reduction in consumed power\footnote{\small Normalized to the power of the baseline system with no compression.} by the main memory bus with different compression designs. 
We observe that DRAM compression designs outperform cache compression designs, and LCP-based designs provide higher reductions than previous mechanisms for main memory compression. The largest power reduction, 33\% on average, is achieved by combined cache compression and LCP-based main memory compression mechanisms, i.e. (BDI, LCP-BDI) and (BDI, LCP-BDI+FPC-fixed). Even though we do not evaluate full system power due to simulation infrastructure limitations, such a large reduction in main memory bus power consumption can have a significant impact on the overall system's power, especially for memory-bandwidth-intensive applications. We conclude that our framework for main memory compression can enable significant memory power savings. \subsection{Analysis of LCP Structures and Parameters} \vspace{-0.1cm} \subsubsection{Effectiveness of the Metadata Cache} \label{sec:results-md} The metadata (MD) cache is a critical structure in the LCP framework as it helps the memory controller to avoid accesses to the LCP metadata (Section~\ref{sec:design-metadata-cache}). Figure~\ref{fig:mdcache} shows the hit rate of a 512-entry (32kB) MD cache for an LCP design that uses the BDI+FPC-fixed compression scheme for the single-core system.\footnote{\small Other previously discussed designs have similar hit rate.} We draw two conclusions from the figure. First, the average hit ratio is high (88\% on average), indicating that the use of MD cache can significantly reduce the number of LCP metadata accesses to main memory. This also justifies the absence of significant performance degradation using the LCP framework (Figure~\ref{fig:IPC}) even for applications that do not benefit from compression. Second, some applications have significantly low MD cache hit rate, especially, sjeng and astar. Analysis of the source code of these applications revealed that accesses of these applications exhibit very low locality. As a result, we also observed a low TLB hit rate for these applications. Since TLB misses are costlier than MD cache misses (former requires multiple memory accesses), the low MD cache hit rate does not lead to significant performance degradation for these applications. \begin{figure}[htb] \centering \includegraphics[width=0.89\textwidth]{figures/IndexCache.pdf} \caption{Effectiveness of the metadata cache.} \label{fig:mdcache} \end{figure} \vspace{-0.3cm} \subsubsection{Analysis of Page Overflows} As described in Section~\ref{sec:design-handling-overflows}, page overflows can stall an application for a considerable duration. As we mentioned in that section, we did not encounter any type-2 overflows (the more severe type) in our simulations. Figure~\ref{fig:overflows} shows the number of type-1 overflows per instruction. The y-axis uses a log-scale as the number of overflows per instruction is very small. As the figure shows, on average, less than one type-1 overflow occurs every million instructions. Although such overflows are more frequent for some applications (e.g., soplex and tpch2), our evaluations show that this does not degrade performance in spite of adding a 10000 cycle penalty for each type-1 page overflow. In fact, these applications gain significant performance from our LCP design. The main reason for this is that the benefits of bandwidth reduction far outweighs the performance degradation due to type-1 overflows. We conclude that page overflows do not prevent the proposed LCP framework from providing good overall performance. 
\begin{figure}[htb]
\centering
\includegraphics[width=0.89\textwidth]{figures/Overflows.pdf}
\caption{Type-1 page overflows for different applications.}
\label{fig:overflows}
\end{figure}

\vspace{-0.3cm}
\subsubsection{Number of Exceptions}

The number of exceptions in the LCP framework is critical for two reasons. First, it determines the size of the physical page required to store the LCP: the higher the number of exceptions, the larger the required physical page size. Second, it can affect an application's performance, as exceptions require three main memory accesses on an MD cache miss (Section~\ref{sec:basic-mcf-overview}). We studied the average number of exceptions (across all compressed pages) for each application. Figure~\ref{fig:exceptions} shows the results of these studies.

The number of exceptions varies from as low as 0.02/page for GemsFDTD to as high as 29.2/page in milc (17.3/page on average). The average number of exceptions has a visible impact on the compression ratio of applications (Figure~\ref{fig:capacity}): an application with a high compression ratio also has relatively few exceptions per page. Note that we do not restrict the number of exceptions in an LCP. As long as an LCP fits into a physical page no larger than the uncompressed page size (i.e., 4kB in our system), it will be stored in compressed form irrespective of how high the number of exceptions is. This is why applications like milc have a large number of exceptions per page. We note that better performance is potentially achievable by either statically or dynamically limiting the number of exceptions per page, but a complete evaluation of this design space is part of our future work.

\begin{figure}[htb]
\centering
\includegraphics[width=0.89\textwidth]{figures/Exclusions.pdf}
\caption{Average number of exceptions per page for different applications.}
\label{fig:exceptions}
\end{figure}

\subsection{Comparison to Stride Prefetching}
\label{sec:results-prefetching-hints}

Our LCP-based framework improves performance due to its ability to transfer multiple compressed cache lines using a single memory request. Since this benefit resembles that of prefetching cache lines into the last-level cache (LLC), we compare our LCP-based design to a system that employs a stride prefetcher~\cite{stride-prefetching}. Figures~\ref{fig:pref-ipc} and \ref{fig:pref-bandwidth} compare the performance and bandwidth consumption of three systems: 1)~one that employs stride prefetching, 2)~one that employs LCP, and 3)~one that employs LCP along with hints from a prefetcher to avoid cache pollution (Section~\ref{sec:opt-bandwidth}). Two conclusions are in order.

First, our LCP-based designs (second and third bars) outperform the stride prefetcher for all but a few applications (e.g., libquantum). The primary reason for this is that a stride prefetcher can considerably increase the memory bandwidth consumption of an application due to inaccurate prefetch requests. On the other hand, LCP obtains the benefits of prefetching without increasing (in fact, while significantly reducing) memory bandwidth consumption.

Second, the effect of using prefetcher hints to avoid cache pollution is not significant. The reason for this is that our systems employ a large, highly-associative LLC (2MB, 16-way), which is less susceptible to cache pollution. Evicting the LRU lines from such a cache has little effect on performance.
\begin{figure}[thb] \centering \includegraphics[width=0.89\textwidth]{figures/PrefetchIPC.pdf} \caption{Performance comparison with stride prefetching, and effect of using prefetcher hints with the LCP-framework.} \label{fig:pref-ipc} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=0.89\textwidth]{figures/PrefetchBandwidth.pdf} \caption{Bandwidth comparison with stride prefetching.} \label{fig:pref-bandwidth} \end{figure} \pagebreak \subsection{Effect on GPU Systems} \label{sec:gpu-bandwidth} To show the general applicability of DRAM compression for different architectures, we perform a preliminary experiment to analyze the effect of main memory compression on memory bandwidth reduction for a GPU architecture (AMD Evergreen ISA). Figure~\ref{fig:gpu-bandwidth} shows the memory bandwidth reduction with three compression schemes: 1)~Frequent Pattern Compression, 2)~Base-Delta-Immediate Compression, and 3)~Base-Delta-Immediate-rotate Compression (described in Section~\ref{sec:design-prev-algos}). As the figure shows, all three mechanisms significantly reduce the bandwidth requirements of most GPU applications, with BDI-rotate showing the best results (48\% on average). We conclude that our proposal is effective for GPU systems, and can enable significant performance and energy-efficiency benefits due to this reduction in main memory bandwidth, especially for memory-bandwidth-bound GPU applications. \begin{figure}[!htb] \centering \includegraphics[width=0.89\textwidth]{figures/GPU-Bandwidth.pdf} \caption{Bandwidth Reduction in GPUs.} \label{fig:gpu-bandwidth} \end{figure} \chapter{Putting It All Together} In the previous chapters, we analyzed hardware-based data compression on a per-layer basis; i.e., as applied to only main memory, only cache, or only interconnect. In this chapter, we focus on issues that arise when combining data compression applied to multiple layers of the memory system at the same time in a single design. In the context of modern GPUs, on-chip cache capacity is usually not the bottleneck. Instead, the bottleneck for most of our GPGPU applications is the off-chip bandwidth. In addition, all of our GPU workloads have working set sizes that are too small to benefit from main memory compression, and their compression ratios are very close to the corresponding off-chip compression ratios (since most of the data has little reuse/locality and most of the data in these GPGPU applications is typically accessed only once). Hence, there is little benefit in separately evaluating main memory compression and bandwidth compression for the GPGPU applications that were available to us. Thus, the focus of this chapter is on combining cache compression and main memory compression for modern CPUs. \section{Main Memory + Cache Compression} We now show how main memory compression can be efficiently combined with cache compression, using two compression algorithms: FPC~\cite{fpc} and BDI~\cite{bdi}. \subsection{Effect on Performance} Main memory compression (including the LCP-based designs we introduced in Chapter 5) can improve performance in two major ways: 1)~reducing the memory footprint can reduce long-latency disk accesses, and 2)~reducing memory bandwidth requirements can reduce contention on the main memory bus, which is an increasingly important bottleneck in systems. In our evaluations, we do not take into account the former benefit as we do not model disk accesses (i.e., we assume that the uncompressed working set fits entirely in memory).
However, we do evaluate the performance improvement due to memory bandwidth reduction (including our optimizations for compressing zero values). Evaluations using our LCP framework show that the performance gains due to the bandwidth reduction more than compensate for the slight increase in memory access latency due to memory compression. In contrast, cache compression (as we introduced it in Chapter 3) improves performance by reducing the number of main memory accesses, which is also an important bottleneck in many systems today. In our experiments, we compare eight different schemes that employ compression either in the last-level cache, main memory, or both. Table~\ref{table:schemes} describes the eight schemes. Each scheme is named (X, Y), where X defines the cache compression mechanism (if any) and Y defines the memory compression mechanism the scheme uses. \begin{table}[h!]\small \centering \begin{tabular}{|l|l|l|} \hline \textbf{No.} & \textbf{Label} & \textbf{Description}\\ \hline 1 & (None, None) & Baseline with no compression\\ \hline 2 & (FPC, None) or FPC-Cache & LLC compression using FPC~\cite{fpc}\\ \hline 3 & (BDI, None) or BDI-Cache & LLC compression using BDI~\cite{bdi}\\ \hline 4 & (None, FPC) or FPC-Memory & Main memory compression (Ekman and Stenstr\"{o}m~\cite{MMCompression})\\ \hline 5 & (None, LCP-BDI) or LCP-BDI & Main memory compression using the LCP framework with BDI~\cite{lcp-micro}\\ \hline 6 & (FPC, FPC) & Designs 2 and 4 combined\\ \hline 7 & (BDI, LCP-BDI) & Designs 3 and 5 combined\\ \hline 8 & (BDI, LCP-BDI+FPC-Fixed) & Design 3 combined with the LCP framework using BDI+FPC-Fixed\\ \hline \end{tabular} \caption{List of evaluated designs.} \label{table:schemes} \end{table} Figure~\ref{fig:IPC} shows the performance of single-core workloads using all our evaluated designs, normalized to the baseline (None, None). We draw two major conclusions from the figure. \begin{figure}[h] \centering \includegraphics[width=0.99\textwidth]{chap10/figures/IPC.pdf} \caption{Performance comparison (IPC) of different compressed designs.} \label{fig:IPC} \end{figure} First, the performance improvement of combined LLC and DRAM compression is greater than that of LLC-only or DRAM-only compression alone. For example, LCP-BDI improves performance by 6.1\%, whereas (BDI, LCP-BDI) improves performance by 9.5\%. Intuitively, this is due to the orthogonality of the benefits provided by cache compression (which retains more cache lines that would otherwise have been evicted) and DRAM compression (which brings in more cache lines that would otherwise have required separate memory transfers on the main memory bus). We conclude that main memory and cache compression frameworks integrate well and complement each other. Second, a high compression ratio does not always imply an improvement in performance. For example, while GemsFDTD is an application with a highly compressible working set in both the cache and DRAM, its performance does not improve with LLC-only compression schemes (due to the extra decompression latency), but improves significantly with DRAM-only compression schemes. In contrast, LLC-only compression is beneficial for omnetpp, whereas DRAM-only compression is not. This difference across applications can be explained by the difference in their memory access patterns. We observe that when temporal locality is critical for the performance of an application (e.g., omnetpp and xalancbmk), cache compression schemes are typically more helpful.
On the other hand, when applications have high spatial locality and less temporal locality (e.g., GemsFDTD has an overwhelmingly streaming access pattern with little reuse), they benefit significantly from the bandwidth compression provided by the LCP-based schemes. Hence, if the goal is to improve the performance of a wide variety of applications, which may have a mix of temporal and spatial locality, our results suggest that employing both memory and cache compression using our LCP-based designs is the best option. We conclude that combined LLC and DRAM compression that takes advantage of our main memory compression framework improves the performance of a wide variety of applications. \subsection{Effect on Bus Bandwidth} \label{sec:results-bandwidth} When cache blocks and DRAM pages are compressed, the traffic between the LLC and DRAM can also be compressed. This can have multiple positive effects: {\em i)} a reduction in the average latency of memory accesses, which can lead to improvement in the overall system performance, and {\em ii)} a decrease in the bus energy consumption due to the decrease in the number of transfers. Figure~\ref{fig:bandwidth} shows the reduction in main memory bandwidth between LLC and DRAM (in terms of bytes per kiloinstruction, normalized to a system with no compression) using different compression designs. Two major observations are in order. \begin{figure}[h] \centering \includegraphics[width=0.99\textwidth]{chap10/figures/Bandwidth.pdf} \caption{Effect of cache and main memory compression on memory bandwidth.} \label{fig:bandwidth} \end{figure} First, DRAM compression schemes are more effective in reducing bandwidth usage than cache compression schemes. This is because cache-only compression schemes reduce bandwidth consumption by reducing the number of LLC misses, but they cannot reduce the bandwidth required to transfer a cache line from main memory. Overall, combined cache-DRAM compression schemes such as (FPC, FPC) and (BDI, LCP-BDI+FPC-fixed) decrease bandwidth consumption by more than 46\%, by combining the reduction in both LLC misses and bandwidth required to transfer each cache line. Second, there is a strong correlation between bandwidth compression and performance improvement (Figure~\ref{fig:IPC}). Applications that show a significant reduction in bandwidth consumption (e.g., GemsFDTD, cactusADM, soplex, zeusmp, leslie3d, tpc*) also see large performance improvements. There are some notable exceptions to this observation, e.g., h264ref, wrf, and bzip2. Although the memory bus traffic is compressible in these applications, main memory bandwidth is not the bottleneck for their performance. \subsection{Effect on Energy} By reducing the number of data transfers on the memory bus, a compressed cache and main memory design also reduces the energy consumption of the memory bus. Figure~\ref{fig:energy} shows the reduction in the energy consumed by the main memory bus\footnote{\small Normalized to the energy of the baseline system with no compression.} with different compression designs. We observe that DRAM compression designs outperform cache compression designs, and LCP-based designs provide higher reductions than previous mechanisms for main memory compression. The largest energy reduction, 33\% on average, is achieved by combined cache compression and LCP-based main memory compression mechanisms, i.e., (BDI, LCP-BDI) and (BDI, LCP-BDI+FPC-fixed).
Even though we do not evaluate full system energy due to simulation infrastructure limitations, such a large reduction in main memory bus energy consumption can have a significant impact on the overall system energy, especially for memory-bandwidth-intensive applications. We conclude that our framework for main memory compression can enable significant energy savings, especially when compression is applied in both the last-level cache and main memory. \begin{figure}[htb] \centering \includegraphics[width=0.99\textwidth]{chap10/figures/Energy.pdf} \caption{Effect of cache and main memory compression on DRAM bus energy.} \label{fig:energy} \end{figure} \section{Compression and Decompression Latency} \subsection{Cache Compression} In order to make cache compression practical, we have to answer the following key question: what is the right compression algorithm for an on-chip memory hierarchy? The conventional wisdom is usually to aim for the highest possible compression ratio. This is usually achieved by using existing software-based compression algorithms that work by finding common subsets of data and storing them only once (i.e., dictionary-based compression), and then simplifying these algorithms so that they can be implemented in hardware. Instead of following this conventional path, another option is to prioritize simplicity of the compression algorithm over its efficiency (i.e., compression ratio). In summary, the major challenge is to balance the compression/decompression {\em speed} (decompression latency is especially important, because it is on the execution critical path) and {\em simplicity} (no complex or costly hardware changes), while still being {\em effective} (having a good compression ratio) in saving storage space. \subsection{Main Memory} For main memory, compression/decompression latency is still an important factor, but there is more latency headroom, since typical memory accesses can take hundreds of processor cycles. Similar to on-chip caches, decompression lies on the critical path of execution, and hence its latency is the top priority when selecting a compression algorithm. Prior attempts to use existing software-based algorithms (e.g., Lempel-Ziv~\cite{lz}) were not successful~\cite{MXT}, because even hardware-optimized versions of these algorithms had decompression latencies of 64 or more cycles. \subsection{On-Chip/Off-chip Buses} Data compression is not only effective in providing higher capacity, but can also provide higher effective bandwidth when applied to communication channels. We call this effect \emph{bandwidth compression}. For major memory communication channels (e.g., on-chip/off-chip buses), compression and decompression are usually equally important, since both of them are directly added to the data transfer latency: \emph{compression latency} (before sending the data) and \emph{decompression latency} (after the data is received). Hence, the challenge is to properly balance both of these latencies without sacrificing the compression ratio. It is possible to avoid some of these overheads by storing and transferring the data in compressed form. For example, if the main memory already stores compressed data, then there is no need to compress it again before transferring it to the on-chip caches, etc.
In a holistic approach, where compression is applied across many layers of the memory hierarchy (e.g., on-chip caches and main memory), it is possible that there is almost no overhead for bandwidth compression, since both the source and the destination can store data in the same compressed form. \section{Quickly Locating Compressed Data} While compression improves effective capacity and bandwidth, it generates data blocks of variable size, which poses several challenges. One of these challenges is the ability to quickly locate the compressed data. In the uncompressed memory organization, finding a certain cache line within a memory page is usually trivial: the cache line offset within a physical page is the same as the cache line offset within the virtual page. Unfortunately, compression adds yet another layer of indirection, where cache line offsets can vary significantly within a physical page, depending on the compressed sizes of the previous cache lines on the same page. \textbf{For main memory}, this means that we either need to store the offsets of all cache lines somewhere (either on-chip or in a different memory page) or continuously compute those offsets (multiple additions of the previous cache line sizes/offsets) from some metadata (which still needs to be stored somewhere). Both options can (i) lead to significant energy and latency overheads and (ii) significantly complicate the final design~\cite{MXT}. It is important to mention that this challenge affects only main memory compression because of the disparity in how the data is stored (e.g., 4KB page granularity) and how it is accessed (e.g., 64B cache line granularity). This is usually not an issue for compressed cache organizations, where tags and actual cache blocks utilize simple mapping algorithms. Similarly, it is not a problem for transferring compressed data over on-chip/off-chip communication channels, where data is usually transferred in small chunks (e.g., 16B flits in on-chip interconnects). \section{Fragmentation} Another challenge posed by the variable-size blocks after compression is data fragmentation. \textbf{For on-chip caches}, the key issue is that once a compressed block is stored in the data store, it occupies a fixed amount of space and is immediately followed by another cache block (except for the last block). The problem arises when this compressed cache line is updated with new data. In that case, the cache line might not compress to the same size as before, and hence there might not be enough space to simply store the new data for this cache block without moving data around. For a na\"{\i}ve compressed cache implementation, this could lead to significant energy waste and design complexity when shuffling data around after cache writebacks. \textbf{For main memory}, there can be two types of fragmentation: page level and cache line level. Page-level fragmentation arises because it is hard to support a completely flexible page size after compression, as this would severely complicate the OS memory management process. Hence, in most realistic designs (e.g., \cite{MMCompression}), only certain page sizes are possible (e.g., 1KB, 2KB, and 4KB). This means that every page that does not compress to exactly one of these sizes has its physical size rounded up to the closest size that can fit the page.
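To make this indirection and rounding concrete, the following minimal C sketch (an illustration, not an actual hardware design) computes a cache line's byte offset within a compressed page from per-line size metadata, and rounds a compressed page up to one of the supported physical page sizes; the difference between the two sizes is the page-level fragmentation. All constants and names are illustrative.

\begin{verbatim}
#include <stdint.h>
#include <stddef.h>

#define LINES_PER_PAGE 64            /* 4KB page / 64B cache lines */

/* Byte offset of cache line `index` inside a compressed page, given the
 * per-line compressed sizes stored as metadata. Without additional metadata
 * (e.g., cached cumulative offsets), this walk over all previous lines is
 * what the memory controller would effectively have to perform. */
static uint32_t compressed_line_offset(const uint8_t line_size[LINES_PER_PAGE],
                                       unsigned index)
{
    uint32_t offset = 0;
    for (unsigned i = 0; i < index; i++)
        offset += line_size[i];      /* sum of the sizes of all previous lines */
    return offset;
}

/* Round a compressed page up to the nearest supported physical page size
 * (e.g., 1KB, 2KB, 4KB); everything beyond the compressed bytes is wasted. */
static uint32_t rounded_physical_size(uint32_t compressed_bytes)
{
    static const uint32_t sizes[] = { 1024, 2048, 4096 };
    for (size_t i = 0; i < sizeof(sizes) / sizeof(sizes[0]); i++)
        if (compressed_bytes <= sizes[i])
            return sizes[i];
    return 4096;                      /* falls back to the uncompressed size */
}
\end{verbatim}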
Cache-line-level fragmentation arises because many designs limit the number of allowed compressed sizes for cache lines within a particular page in order to reduce the amount of metadata tracked per cache line. Similar to page-level fragmentation, this means that many cache lines may be padded to align with the smallest acceptable compressed block size that fits them. \section{Supporting Variable Size after Compression} The variable-sized nature of compression output causes significant challenges for \textbf{on-chip/off-chip communication channels}. For example, off-chip DRAM buses are usually optimized to transfer one cache line (e.g., 64 bytes) at a time. There is no easy mechanism (without changes to the existing DRAM) to transfer a smaller number of bytes faster. There are some exceptions with GPU-oriented memories (e.g., GDDR5~\cite{gddr5}), where cache lines are typically larger (128 bytes) and data buses are narrower (32 bytes): hence, every cache line is transferred in four pieces, and data compression with compression ratios of up to 4$\times$ is possible without major changes to DRAM. On-chip interconnects usually transfer cache lines in several data chunks called flits. In this case, the achievable compression ratio is also limited by the granularity of the flits. \section{Data Changes after Compression} Data compression inevitably changes the data itself, and, unfortunately, these changes can sometimes lead to significant energy overhead. There are several reasons for this. First, it matters whether a 0 or a 1 is transferred or stored. For example, on an on-chip interconnect, transferring a 0 over a pin that has just transferred a 0 is almost free in terms of energy, while transferring a 1 costs additional energy. Hence, a higher number of switches on an interconnect wire (called bit toggles) negatively affects the energy efficiency of data communication. Second, modern programming languages and compilers tend to store data in a regular fashion such that data is usually nicely aligned at a 4/8-byte granularity. This also aligns with how the data is then transferred over communication channels (e.g., 16-byte alignment for many modern on-chip networks). This means that similar bits are repeatedly transferred over the same pins, reducing the energy cost of data transfers. Unfortunately, data compression frequently breaks this unspoken assumption about ``nice'' data alignment, thereby significantly increasing the total number of bit toggles, and hence the energy of on-chip data transfers. \section{Summary of Our Proposal} In this dissertation, we aim to develop efficient solutions to overcome the described challenges. To this end, we first propose a simple and fast yet efficient compression algorithm that is suitable for on-chip cache compression (\textbf{Chapter 3}). This algorithm solves one of the key challenges for cache compression: achieving \emph{low decompression latency} (which is on the critical path of execution) while maintaining a \emph{high compression ratio}. Our algorithm is based on the observation that many cache lines have data with a \emph{low dynamic range}, and hence can be represented efficiently using base-delta encoding. We demonstrate the efficiency of the algorithm inspired by this observation (called \emph{Base-Delta-Immediate Compression}) and the corresponding compressed cache design.
Second, we show that \emph{compressed block size} is a new piece of information to be considered when making cache management decisions in a compressed (or even an uncompressed) cache. Taking this new piece of information into account helps outperform state-of-the-art cache management mechanisms. To this end, we introduce \emph{Compression-Aware Management Policies}, described in \textbf{Chapter 4}. Third, we propose a new design for main memory compression, called \emph{Linearly Compressed Pages} (\textbf{Chapter 5}). This mechanism solves a key challenge in realizing data compression in main memory -- the disparity between how the data is stored (i.e., page granularity) and how it is accessed (i.e., cache line granularity). Fourth, we show that bandwidth compression, both on-chip and off-chip, can efficiently provide a substantial effective bandwidth increase in the context of modern GPUs. Importantly, we discover that there is a new problem with bandwidth compression that makes compression potentially energy inefficient -- the number of \emph{bit toggles} (i.e., the number of transitions between zeros and ones) increases significantly with compression, which leads to an increase in dynamic energy. This problem was completely overlooked by prior work on bandwidth compression. We propose several potential solutions to this problem using our new \emph{Energy Control} mechanisms (\textbf{Chapter 6}).
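To make the bit-toggle metric concrete, the following minimal C sketch counts the 0-to-1 and 1-to-0 transitions observed on a link when consecutive flits are sent over the same wires; the flit width and the example data are illustrative only.

\begin{verbatim}
#include <stdint.h>
#include <stdio.h>

/* Count bit toggles (0<->1 transitions on the same wires) between
 * consecutive 32-bit flits sent over an on-chip link.
 * __builtin_popcount is a GCC/Clang intrinsic. */
static unsigned count_toggles(const uint32_t *flits, size_t n)
{
    unsigned toggles = 0;
    for (size_t i = 1; i < n; i++) {
        uint32_t changed = flits[i - 1] ^ flits[i];  /* bits that switched */
        toggles += (unsigned)__builtin_popcount(changed);
    }
    return toggles;
}

int main(void)
{
    /* Aligned, repetitive data (few toggles) vs. the same information after a
     * hypothetical compression pass that packs values densely. */
    uint32_t uncompressed[] = { 0x00000004, 0x00000005, 0x00000006, 0x00000007 };
    uint32_t compressed[]   = { 0x04050607, 0x1234ABCD, 0x00000000, 0xFFFFFFFF };

    printf("uncompressed toggles: %u\n", count_toggles(uncompressed, 4));
    printf("compressed toggles:   %u\n", count_toggles(compressed, 4));
    return 0;
}
\end{verbatim}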
\chapter{Key Challenges for Hardware-Based Memory Compression} \input{chap2/challenges.tex} \section{B$\Delta$I\xspace Compression} \label{sec:2-bdc} \subsection{Why Could Multiple Bases Help?} \label{sec:examples.2} Although B$+\Delta$\xspace proves to be generally applicable for many applications, it is clear that not every cache line can be represented in this form, and, as a result, some benchmarks do not have a high compression ratio, e.g., \emph{mcf}. One common reason why this happens is that some of these applications can mix data of different types in the same cache line, e.g., structures of pointers and 1-byte integers. This suggests that if we apply B$+\Delta$\xspace with multiple bases, we can improve compressibility for some of these applications. Figure~\ref{fig:2-bases-example} shows a 32-byte cache line from \emph{mcf} that is not compressible with a single base using B$+\Delta$\xspace, because there is no single base value that effectively compresses this cache line. At the same time, it is clear that if we use two bases, this cache line can be easily compressed using a compression technique similar to the one-base B$+\Delta$\xspace algorithm. As a result, the entire cache line data can be represented using 19 bytes: 8 bytes for two bases (\texttt{0x00000000} and \texttt{0x09A40178}), 5 bytes for five 1-byte deltas from the first base, and 6 bytes for three 2-byte deltas from the second base. This effectively saves 13 bytes of the 32-byte line. \begin{figure}[ht!] \centering \includegraphics[scale=0.5]{chap3/figures/2-BDC-Example.pdf} \caption{Cache line from \emph{mcf} compressed by B$+\Delta$\xspace (two bases).} \label{fig:2-bases-example} \end{figure} As we can see, multiple bases can help compress more cache lines, but, unfortunately, more bases can increase overhead (due to storage of the bases), and hence decrease the effective compression ratio. So, it is natural to ask \emph{how many bases are optimal for B$+\Delta$\xspace compression}? In order to answer this question, we conduct an experiment where we evaluate the effective compression ratio with different numbers of bases (selected suboptimally using a greedy algorithm). Figure~\ref{fig:multbases} shows the results of this experiment.
The ``0'' base bar corresponds to a mechanism that compresses only simple patterns (zero and repeated values). These patterns are simple to compress and common enough that we can handle them easily and efficiently without using B$+\Delta$\xspace; e.g., a cache line of only zeros is compressed to just one byte regardless of the number of bases. We assume this optimization for all bars in Figure~\ref{fig:multbases}.\footnote{If we do not assume this optimization, compression with multiple bases will have a very low compression ratio for such common simple patterns.} \begin{figure}[ht!] \centering \includegraphics[scale=0.5]{chap3/figures/L2MultBases.pdf} \caption{Effective compression ratio with different numbers of bases. ``0'' corresponds to zero and repeated value compression.} \label{fig:multbases} \end{figure} Results in Figure~\ref{fig:multbases} show that the empirically optimal number of bases in terms of effective compression ratio is 2, with some benchmarks having their optimum at one or three bases. The key conclusion is that B$+\Delta$\xspace with two bases significantly outperforms B$+\Delta$\xspace with one base (compression ratio of 1.51 vs. 1.40 on average), suggesting that it is worth considering for implementation. Note that having more than two bases does not provide additional improvement in compression ratio for these workloads, because the overhead of storing more bases is higher than the benefit of compressing more cache lines. Unfortunately, B$+\Delta$\xspace with two bases has a serious drawback: the necessity of finding a second base. The search for a second arbitrary base value (even a sub-optimal one) can add significant complexity to the compression hardware. This opens the question of how to find two base values efficiently. We next propose a mechanism that obtains the benefit of compression with two bases at minimal complexity. \subsection{B$\Delta$I\xspace: Refining B$+\Delta$\xspace with Two Bases and Minimal Complexity} \label{sec:bdi} Results from Section~\ref{sec:examples.2} suggest that the optimal (on average) number of bases to use is two, but having an additional base has the significant shortcoming described above. We observe that setting the second base to zero gains most of the benefit of having an arbitrary second base value. Why is this the case? Most of the time when data of different types are mixed in the same cache line, the cause is an aggregate data type: e.g., a structure (\texttt{struct} in C). In many cases, this leads to the mixing of wide values with low dynamic range (e.g., pointers) with narrow values (e.g., small integers). A first arbitrary base helps to compress wide values with low dynamic range using base+delta encoding, while a second zero base is efficient enough to compress narrow values separately from wide values. Based on this observation, we refine the idea of B$+\Delta$\xspace by adding an additional implicit base that is always set to zero. We call this refinement \textbf{Base-Delta-Immediate} or \textbf{B$\Delta$I\xspace} compression. \begin{figure} \centering \includegraphics[scale=0.5]{chap3/figures/L2CompRatios.pdf} \caption{Compression ratio comparison of different algorithms: ZCA~\cite{ZeroContent}, FVC~\cite{fvc}, FPC~\cite{fpc}, B$+\Delta$\xspace (two arbitrary bases), and B$\Delta$I\xspace.
Results are obtained on a cache with twice the tags to accommodate more cache lines in the same data space as an uncompressed cache.} \label{fig:2-bdc-compressibility} \end{figure} There is a tradeoff involved in using B$\Delta$I\xspace instead of B$+\Delta$\xspace with two arbitrary bases. B$\Delta$I\xspace uses an implicit zero base as the second base, and, hence, it has less storage overhead, which means a potentially higher average compression ratio for cache lines that are compressible with both techniques. B$+\Delta$\xspace with two general bases uses more storage to store an arbitrary second base value, but can compress more cache lines because the base can be any value. As such, the compression ratio can potentially be better with either mechanism, depending on the compressibility pattern of cache lines. In order to evaluate this tradeoff, we compare in Figure~\ref{fig:2-bdc-compressibility} the effective compression ratio of B$\Delta$I\xspace, B$+\Delta$\xspace with two arbitrary bases, and three prior approaches: ZCA~\cite{ZeroContent} (zero-based compression), FVC~\cite{fvc}, and FPC~\cite{fpc}.\footnote{All mechanisms are covered in detail in Section~\ref{sec:comparison}. We provide a comparison of their compression ratios here to demonstrate B$\Delta$I\xspace's relative effectiveness and to justify it as a viable compression mechanism.} Although there are cases where B$+\Delta$\xspace with two bases is better~--- e.g., \emph{leslie3d} and \emph{bzip2}~--- on average, B$\Delta$I\xspace performs slightly better than B$+\Delta$\xspace in terms of compression ratio (1.53 vs. 1.51). We can also see that both mechanisms are better than the previously proposed FVC mechanism~\cite{fvc}, and competitive in terms of compression ratio with the more complex FPC compression mechanism. Taking into account that B$+\Delta$\xspace with two bases is also a more complex mechanism than B$\Delta$I\xspace, we conclude that our cache compression design should be based on the refined idea of B$\Delta$I\xspace. Now we will describe the design and operation of a cache that implements our B$\Delta$I\xspace compression algorithm. \section{Background and Motivation} \label{bdi:sec:background} Data compression is a powerful technique for storing large amounts of data in a smaller space. Applying data compression to an on-chip cache can potentially allow the cache to store more cache lines in compressed form than it could if the cache lines were not compressed. As a result, a compressed cache has the potential to provide the benefits of a larger cache at the area and power of a smaller cache. Prior work~\cite{fpc,fvc,MMCompression} has observed that there is a significant amount of redundancy in the data accessed by real-world applications. There are multiple patterns that lead to such redundancy. We summarize the most common of these patterns below. \textbf{Zeros:} Zero is by far the most frequently seen value in application data~\cite{VL,MMCompression,fvc}. There are various reasons for this. For example, zero is most commonly used to initialize data, to represent NULL pointers or false boolean values, and to represent sparse matrices (in dense form). In fact, a majority of the compression schemes proposed for compressing memory data either base their design fully around zeros~\cite{MMCompression,ZeroContent,ZeroValue,DynamicZero}, or treat zero as a special case~\cite{fpc,vm-compression,fvl}.
\textbf{Repeated Values:} A large contiguous region of memory may contain a single value repeated multiple times~\cite{predictability}. This pattern is widely present in applications that use a common initial value for a large array, or in multimedia applications where a large number of adjacent pixels have the same color. Such a repeated value pattern can be easily compressed to significantly reduce storage requirements. Simplicity, frequent occurrence in memory, and high compression ratio make repeated values an attractive target for a special consideration in data compression~\cite{fpc}. \textbf{Narrow Values:} A narrow value is a small value stored using a large data type: e.g., a one-byte value stored as a four-byte integer. Narrow values appear commonly in application data due to over-provisioning or data alignment. Programmers typically provision the data types in various data structures for the worst case even though a majority of the values may fit in a smaller data type. For example, storing a table of counters requires the data type to be provisioned to accommodate the maximum possible value for the counters. However, it can be the case that the maximum possible counter value needs four bytes, while one byte might be enough to store the majority of the counter values. Optimizing such data structures in software for the common case necessitates significant overhead in code, thereby increasing program complexity and programmer effort to ensure correctness. Therefore, most programmers over-provision data type sizes. As a result, narrow values present themselves in many applications, and are exploited by different compression techniques~\cite{fpc,vm-compression,narrow}. \begin{table}[t] \centering \begin{tabular}{|@{ }>{\scriptsize\bgroup}c<{\egroup}@{ }|>{\scriptsize\bgroup}c<{\egroup}@{ }|@{ }>{\scriptsize\bgroup}c<{\egroup}@{ }| @{ }>{\scriptsize\bgroup}c<{\egroup}||>{\scriptsize\bgroup}c<{\egroup}@{ }|@{ }>{\scriptsize\bgroup}c<{\egroup}@{ }| @{ }>{\scriptsize\bgroup}c<{\egroup}@{ }|@{ }>{\scriptsize\bgroup}c<{\egroup}@{ }|} \hline \multirow{2}{*}{\rotatebox{30}{\textbf{}}}& \multicolumn{3}{c||}{\scriptsize \textbf{Characteristics}} & \multicolumn{4}{c|}{\scriptsize \textbf{Compressible data patterns}} \\ \cline{2-8} & {Decomp. Lat.} & {Complex.} & {C. Ratio} & {Zeros} & {Rep. Val.}& {Narrow} & {LDR}\\ \hline ZCA~\cite{ZeroContent} & \textbf{Low} & \textbf{Low} & Low & \ding{52} & \ding{53} & \ding{53} & \ding{53} \\ \hline FVC~\cite{fvc} & High & High & Modest & \ding{52} & Partly & \ding{53} & \ding{53} \\ \hline FPC~\cite{fpc} & High & High & \textbf{High} & \ding{52} & \ding{52} & \ding{52} & \ding{53} \\ \hline B$\Delta$I\xspace & \textbf{Low} & Modest & \textbf{High} & \ding{52} & \ding{52} & \ding{52} & \ding{52} \\ \hline \end{tabular}% \caption{Qualitative comparison of B$\Delta$I\xspace with prior work. LDR: Low dynamic range. Bold font indicates desirable characteristics.} \label{tbl:comparison}% \end{table} \textbf{Other Patterns:} There are a few other common data patterns that do not fall into any of the above three classes: a table of pointers that point to different locations in the same memory region, an image with low color gradient, etc. Such data can also be compressed using simple techniques and has been exploited by some prior proposals for main memory compression~\cite{vm-compression} and image compression~\cite{LDR}. In this work, we make two observations. 
First, we find that the patterns described above are widely present in many applications (SPEC CPU benchmark suites and some server applications, e.g., Apache, TPC-H). Figure~\ref{fig:motivation2} plots the percentage of cache lines that can be compressed using different patterns.\footnote{The methodology used in this and other experiments is described in Section~\ref{sec:methodology}. We use a 2MB L2 cache unless otherwise stated.} As the figure shows, on average, 43\% of all cache lines belonging to these applications can be compressed. This shows that there is significant opportunity to exploit data compression to improve on-chip cache performance. \begin{figure}[!htb] \centering \includegraphics[scale=0.55]{chap3/figures/Motivation2.pdf} \caption{ Percentage of cache lines with different data patterns in a 2MB L2 cache. ``Other Patterns'' includes ``Narrow Values''.} \label{fig:motivation2} \end{figure} Second, and more importantly, we observe that all the above commonly occurring patterns fall under the general notion of \emph{low dynamic range} -- a set of values where the differences between the values are much smaller than the values themselves. Unlike prior work, which has attempted to exploit each of these special patterns individually for cache compression~\cite{fpc,fvc} or main memory compression~\cite{MMCompression,vm-compression}, our \textbf{goal} is to exploit the general case of values with \emph{low dynamic range} to build a simple yet effective compression technique. \textbf{Summary comparison:} Our resulting mechanism, base-delta-immediate (B$\Delta$I\xspace) compression, strikes a sweet spot in the tradeoff between decompression latency (Decomp.~Lat.), hardware complexity of the implementation (Complex.), and compression ratio (C. Ratio), as shown in Table~\ref{tbl:comparison}. The table qualitatively compares B$\Delta$I\xspace with three state-of-the-art mechanisms: ZCA~\cite{ZeroContent}, which does zero-value compression, Frequent Value Compression (FVC)~\cite{fvc}, and Frequent Pattern Compression (FPC)~\cite{fpc}. (These mechanisms are described in detail in Section~\ref{sec:comparison}.) It also summarizes which data patterns (zeros, repeated values, narrow values, and other low dynamic range patterns) are compressible with each mechanism. At modest complexity, B$\Delta$I\xspace is the only design that achieves both low decompression latency and a high compression ratio. We now explain the design and rationale for our scheme in two parts. In Section~\ref{sec:bdc}, we start by discussing the core of our scheme, which is \emph{Base+Delta~(B$+\Delta$\xspace)} compression. Building upon B$+\Delta$\xspace, we then discuss our full-blown B$\Delta$I\xspace compression scheme (with multiple bases) in Section~\ref{sec:2-bdc}. \section{Base + Delta Encoding:~Basic Idea} \label{sec:bdc} We propose a new cache compression mechanism, \emph{Base+Delta} (B$+\Delta$\xspace) compression, which, unlike prior work~\cite{fpc,ZeroContent,fvc}, looks for compression opportunities at a cache line granularity -- i.e., B$+\Delta$\xspace either compresses the entire cache line or stores the entire cache line in uncompressed format. The key observation behind B$+\Delta$\xspace is that many cache lines contain data with low dynamic range. As a result, the differences between the words within such a cache line can be represented using fewer bytes than required to represent the words themselves.
We exploit this observation to represent a cache line with low dynamic range using a common \emph{base} and an array of \emph{deltas} (differences between values within the cache line and the common base). Since the \emph{deltas} require fewer bytes than the values themselves, the combined size of the \emph{base} and the array of \emph{deltas} can be much smaller than the size of the original uncompressed cache line. The fact that some values can be represented in base+delta form has been observed by others, and used for different purposes: e.g., texture compression in GPUs~\cite{LDR} and saving bandwidth on CPU buses by transferring only deltas from a common base~\cite{register-caching}. To our knowledge, no previous work examined the use of base+delta representation to improve on-chip cache utilization in a general-purpose processor. To evaluate the applicability of the B$+\Delta$\xspace compression technique for a large number of applications, we conducted a study that compares the effective compression ratio (i.e., effective cache size increase, see Section~\ref{sec:methodology} for a full definition) of B$+\Delta$\xspace against a simple technique that compresses two common data patterns (zeros and repeated values\footnote{Zero compression compresses an all-zero cache line into a bit that just indicates that the cache line is all-zero. Repeated value compression checks if a cache line has the same 1/2/4/8-byte value repeated. If so, it compresses the cache line to the corresponding value.}). Figure~\ref{fig:bdc-compressibility} shows the results of this study for a 2MB L2 cache with 64-byte cache lines for applications in the SPEC CPU2006 benchmark suite and for database and web-server workloads (see Section~\ref{sec:methodology} for methodology details). We assume a design where a compression scheme can store up to twice as many tags for compressed cache lines as the number of cache lines stored in the uncompressed baseline cache (Section~\ref{sec:design} describes a practical mechanism that achieves this by using twice the number of tags).\footnote{This assumption of twice as many tags as the baseline holds for all compressed cache designs, except in Section~\ref{sec:res3}.} As the figure shows, for a number of applications, B$+\Delta$\xspace provides a significantly higher compression ratio (1.4X on average) than the simple compression technique. However, there are some benchmarks for which B$+\Delta$\xspace provides very little or no benefit (e.g., \emph{libquantum}, \emph{lbm}, and \emph{mcf}). We will address this problem with a new compression technique called B$\Delta$I\xspace in Section~\ref{sec:2-bdc}. We first provide examples from real applications to show why B$+\Delta$\xspace works. \begin{figure}[!h] \centering \includegraphics[scale=0.55]{chap3/figures/Motivation.pdf} \caption{Effective compression ratio with different value patterns.} \label{fig:bdc-compressibility} \end{figure} \subsection{Why Does B$+\Delta$\xspace Work?} \label{sec:examples} B$+\Delta$\xspace works because of (1) regularity in the way data is allocated in memory (similar data values and types grouped together) and (2) the low dynamic range of cache/memory data. The first reason is typically true due to the common usage of arrays to represent large pieces of data in applications.
The second reason is usually due either to the nature of the computation (e.g., sparse matrices or streaming applications), or to over-provisioning of the data types used by many applications (e.g., a 4-byte integer type used to represent values that usually need only 1 byte). We have carefully examined different common data patterns in applications that lead to B$+\Delta$\xspace representation and summarize our observations in two examples. Figures~\ref{fig:bdc-example} and \ref{fig:bdc-example2} show the compression of two 32-byte\footnote{We use 32-byte cache lines in our examples to save space. 64-byte cache lines were used in all evaluations (see Section~\ref{sec:methodology}).} cache lines from the applications \emph{h264ref} and \emph{perlbench} using B$+\Delta$\xspace. The first example from \emph{h264ref} shows a cache line with a set of narrow values stored as 4-byte integers. As Figure~\ref{fig:bdc-example} indicates, in this case, the cache line can be represented using a single 4-byte base value, $0$, and an array of eight 1-byte differences. As a result, the entire cache line data can be represented using 12 bytes instead of 32 bytes, saving 20 bytes of the originally used space. Figure~\ref{fig:bdc-example2} shows a similar phenomenon where nearby pointers are stored in the same cache line for the \emph{perlbench} application. \begin{figure}[!h] \centering \includegraphics[scale=0.5]{chap3/figures/BDC-Example.pdf} \caption{Cache line from \emph{h264ref} compressed with B$+\Delta$\xspace.} \label{fig:bdc-example} \includegraphics[scale=0.5]{chap3/figures/BDC-Example2.pdf} \caption{Cache line from \emph{perlbench} compressed with B$+\Delta$\xspace.} \label{fig:bdc-example2} \end{figure} We now describe more precisely the compression and decompression algorithms that lie at the heart of the B$+\Delta$\xspace compression mechanism. \subsection{Compression Algorithm} The B$+\Delta$\xspace compression algorithm views a cache line as a set of fixed-size values, i.e., eight 8-byte, sixteen 4-byte, or thirty-two 2-byte values for a 64-byte cache line. It then determines if the set of values can be represented in a more compact form as a base value with a set of differences from the base value. For analysis, let us assume that the cache line size is $C$ bytes, the size of each value in the set is $k$ bytes, and the set of values to be compressed is $S = (v_1, v_2, ..., v_n)$, where $n = \frac{C}{k}$. The goal of the compression algorithm is to determine the value of the base, $B^*$, and the size of the values in the set, $k$, that provide maximum compressibility. Once $B^*$ and $k$ are determined, the output of the compression algorithm is $\{k, B^*, \Delta = (\Delta_1, \Delta_2, ..., \Delta_n)\}$, where $\Delta_i = v_i - B^* ~~\forall i \in \{1,..,n\}$. \textbf{Observation 1:} The cache line is compressible \emph{only if} $\max_i(\mathrm{size}(\Delta_i)) < k$, where $\mathrm{size}(\Delta_i)$ is the smallest number of bytes needed to store $\Delta_i$. In other words, for the cache line to be compressible, the number of bytes required to represent the differences must be strictly less than the number of bytes required to represent the values themselves. \textbf{Observation 2:} To determine the value of $B^*$, either the value of $\mathrm{min}(S)$ or $\mathrm{max}(S)$ needs to be found.
Here, $\mathrm{max}(S)$ and $\mathrm{min}(S)$ are the maximum and minimum values in the cache line. The reasoning is based on the observation that the values in the cache line are bounded by $\mathrm{min}(S)$ and $\mathrm{max}(S)$. Hence, the optimum value for $B^*$ lies between $\mathrm{min}(S)$ and $\mathrm{max}(S)$. In fact, the optimum can be reached only for $\mathrm{min}(S)$, $\mathrm{max}(S)$, or exactly in between them. Any other value of $B^*$ can only increase the number of bytes required to represent the differences. Given a cache line, the optimal version of the B$+\Delta$\xspace compression algorithm needs to determine two parameters: (1) $k$, the size of each value in $S$, and (2) $B^*$, the optimum base value that gives the best possible compression for the chosen value of $k$. \textbf{Determining $k$.} Note that the value of $k$ determines how the cache line is viewed by the compression algorithm -- i.e., it defines the set of values that are used for compression. Choosing a single value of $k$ for all cache lines will significantly reduce the opportunity for compression. To understand why this is the case, consider two cache lines, one representing a table of 4-byte pointers pointing to some memory region (similar to Figure~\ref{fig:bdc-example2}) and the other representing an array of narrow values stored as 2-byte integers. For the first cache line, the likely best value of $k$ is $4$, as dividing the cache line into a set of values with a different $k$ might lead to an increase in dynamic range and reduce the possibility of compression. Similarly, the likely best value of $k$ for the second cache line is $2$. Therefore, to increase the opportunity for compression by catering to multiple patterns, our compression algorithm attempts to compress a cache line using three different potential values of $k$ simultaneously: $2$, $4$, and $8$. The cache line is then compressed using the value of $k$ that provides the maximum compression ratio, or is not compressed at all.\footnote{ We restrict our search to these three values as almost all basic data types supported by various programming languages have one of these three sizes.} \textbf{Determining $B^*$.} For each possible value of $k \in \{2, 4, 8\}$, the cache line is split into values of size $k$, and the best value for the base, $B^*$, can be determined using Observation 2. However, computing $B^*$ in this manner requires computing the maximum or the minimum of the set of values, which adds logic complexity and significantly increases the latency of compression. To avoid increasing the compression latency and to reduce hardware complexity, we decide to use the \emph{first} value from the set of values as an approximation for $B^*$. For a compressible cache line with a low dynamic range, we find that choosing the first value as the base instead of computing the optimum base value reduces the average compression ratio by only 0.4\%. \subsection{Decompression Algorithm} To decompress a compressed cache line, the B$+\Delta$\xspace decompression algorithm needs to take the base value $B^*$ and an array of differences $\Delta = (\Delta_1, \Delta_2, ..., \Delta_n)$, and generate the corresponding set of values $S = (v_1, v_2, ..., v_n)$. The value $v_i$ is simply given by $v_i = B^* + \Delta_i$. As a result, the values in the cache line can be computed in parallel using a SIMD-style vector adder. Consequently, the entire cache line can be decompressed in the amount of time it takes to do an integer vector addition, using a set of simple adders.
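The following minimal C sketch illustrates the compression and decompression steps described above, using the first value of the line as the base and trying $k \in \{2, 4, 8\}$. The delta widths, the treatment of values as little-endian unsigned integers, and the omission of the zero and repeated-value checks (and of B$\Delta$I\xspace's implicit zero base) are simplifications of what the hardware actually does.

\begin{verbatim}
#include <stdint.h>
#include <string.h>

#define LINE_BYTES 64

/* Smallest number of bytes (1, 2, 4, or 8) needed to store the signed delta. */
static unsigned delta_bytes(int64_t d)
{
    if (d >= INT8_MIN  && d <= INT8_MAX)  return 1;
    if (d >= INT16_MIN && d <= INT16_MAX) return 2;
    if (d >= INT32_MIN && d <= INT32_MAX) return 4;
    return 8;
}

/* Read the k-byte value at index i of the line (little-endian host assumed). */
static uint64_t get_value(const uint8_t *line, unsigned k, unsigned i)
{
    uint64_t v = 0;
    memcpy(&v, line + i * k, k);
    return v;
}

/* Try B+Delta with value size k, using the first value as the base.
 * Returns the compressed size in bytes, or LINE_BYTES if not compressible. */
static unsigned bplusdelta_size(const uint8_t *line, unsigned k)
{
    unsigned n = LINE_BYTES / k;
    uint64_t base = get_value(line, k, 0);
    unsigned widest = 1;

    for (unsigned i = 0; i < n; i++) {
        int64_t delta = (int64_t)(get_value(line, k, i) - base);
        unsigned w = delta_bytes(delta);
        if (w > widest) widest = w;
    }
    if (widest >= k)            /* Observation 1: deltas must be narrower than k */
        return LINE_BYTES;
    return k + n * widest;      /* one k-byte base plus n deltas of `widest` bytes */
}

/* Pick the best of k = 2, 4, 8; the real design also checks the simple zero and
 * repeated-value patterns first, and BDI additionally tries an implicit zero base. */
static unsigned best_compressed_size(const uint8_t *line)
{
    static const unsigned ks[] = { 2, 4, 8 };
    unsigned best = LINE_BYTES;
    for (unsigned i = 0; i < 3; i++) {
        unsigned s = bplusdelta_size(line, ks[i]);
        if (s < best) best = s;
    }
    return best;
}

/* Decompression: every value is base + delta. In hardware, all n additions
 * are performed in parallel by a SIMD-style vector adder. */
static void bplusdelta_decompress(uint64_t base, const int64_t *delta,
                                  uint64_t *out, unsigned n)
{
    for (unsigned i = 0; i < n; i++)
        out[i] = base + (uint64_t)delta[i];
}
\end{verbatim}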
\section{Related Work} \label{sec:comparison} Multiple previous works investigated the possibility of using compression for on-chip caches~\cite{fvl,fpc,ZeroContent,ZeroValue,iic,c-pack} and/or memory~\cite{vm-compression,MXT,MMCompression}. All proposed designs have different tradeoffs between compression ratio, decompression/compression latency, and hardware complexity. The spectrum of proposed algorithms ranges from general-purpose compression schemes, e.g., the Lempel-Ziv algorithm~\cite{lz}, to specific pattern-based schemes, e.g., zero values~\cite{ZeroContent,ZeroValue} and frequent values~\cite{fvc}.
The fundamental difference between B$\Delta$I\xspace and previous cache compression mechanisms is the granularity at which data is compressed: prior techniques compress data at word granularity, i.e., each word within a cache line is compressed separately, whereas B$\Delta$I\xspace compresses data at cache-line granularity, i.e., all the words within a cache line are compressed using the same encoding, or the entire cache line is stored uncompressed. As a result, B$\Delta$I\xspace provides two major advantages. First, the decompression of all words in the same cache line can be performed in parallel (using a masked vector addition), since the starting point of each word is known in the compressed cache line. In contrast, compressing each word within a cache line separately, as in prior works, typically serializes decompression, as different words can be compressed to different sizes, making the starting point of each word in the compressed cache line dependent on the previous word. Second, B$\Delta$I\xspace exploits correlation across words within a cache line, which can lead to a better compression ratio -- e.g., when a cache line consists of an array of pointers. Prior works do not exploit this correlation as they compress words individually. As already summarized in Table 1, different prior works suffer from one or more of the following shortcomings, which B$\Delta$I\xspace alleviates: 1) high decompression latency, 2) low effective compression ratio, and 3) high hardware complexity. We now describe the prior designs in more detail.

\subsection{Zero-based Designs}

Dusser et al.~\cite{ZeroContent} propose the Zero-Content Augmented (ZCA) cache design, where a conventional cache is augmented with a specialized cache to represent zero cache lines. Decompression and compression latencies as well as hardware complexity for the ZCA cache design are low. However, only applications that operate on a large number of zero cache lines can benefit from this design. In our experiments, only 6 out of 24 applications have enough zero data to benefit from ZCA (Figure~\ref{fig:2-bdc-compressibility}), leading to relatively small performance improvements (as we show in Section~\ref{sec:results}).

Islam and Stenstr\"{o}m~\cite{ZeroValue} observe that 18\% of dynamic loads access zero data, and propose a cache design called Zero-Value Canceling where these loads can be serviced faster. Again, this can improve performance only for applications with substantial amounts of zero data. Our proposal is more general than these designs that are based only on zero values.

\subsection{Frequent Value Compression}

Zhang et al.~\cite{fvl} observe that a majority of values read or written by memory operations come from a small set of frequently occurring values. Based on this observation, they propose a compression technique~\cite{fvc} that encodes frequent values present in cache lines with fewer bits. They apply this technique to a direct-mapped L1 cache wherein each entry in the cache can store either one uncompressed line or two compressed lines.

Frequent value compression (FVC) has three major drawbacks. First, since FVC can only compress frequent values, it cannot exploit other commonly found patterns, e.g., narrow values or stride patterns in application data. As a result, it does not provide a high degree of compression for most applications, as we show in Section~\ref{sec:results}. Second, FVC compresses only the frequent values, while other values stay uncompressed.
Decompression of such a cache line requires sequential processing of every element (because the beginning of the next element can be determined only after the previous element is processed), significantly increasing the latency of decompression, which is undesirable. Third, the proposed mechanism requires profiling to identify the frequent values within an application. Our quantitative results in Section~\ref{sec:results} show that B$\Delta$I\xspace outperforms FVC for these reasons.

\subsection{Pattern-Based Compression Techniques}

Alameldeen and Wood~\cite{fpc} propose frequent pattern compression (FPC), which exploits the observation that a majority of words fall under one of a few compressible patterns, e.g., the upper 16 bits of a 32-bit word are all zeros or all ones, or all four bytes of a word are the same. FPC defines a set of these patterns~\cite{fpc-tr} and then uses them to encode applicable words with fewer bits of data. For compressing a cache line, FPC first divides the cache line into 32-bit words and checks if each word falls under one of seven frequently occurring patterns. Each compressed cache line contains the pattern encoding for all the words within the cache line, followed by the additional data required to decompress each word.

The same authors propose a compressed cache design~\cite{fpc} based on FPC, which allows the cache to store twice as many compressed lines as uncompressed lines, effectively doubling the cache size when all lines are compressed. For this purpose, they maintain twice as many tag entries as there are data entries. Similar to frequent value compression, frequent pattern compression also requires serial decompression of the cache line, because each word within the line can be compressed to a different size, making the location of a word dependent on the words preceding it. To mitigate the decompression latency of FPC, the authors design a five-cycle decompression pipeline~\cite{fpc-tr}. They also propose an adaptive scheme that avoids compressing data if the decompression latency nullifies the benefits of compression.

Chen et al.~\cite{c-pack} propose a pattern-based compression mechanism (called C-Pack) with several new features: (1) multiple cache lines can be compressed into one, and (2) multiple words can be compressed in parallel; parallel decompression, however, is not possible. Although the C-Pack design is more practical than FPC, it still has a high decompression latency (8 cycles due to serial decompression), and its average compression ratio is lower than that of FPC.

\subsection{Follow-up Work}

The publication of this work~\cite{bdi} inspired several new proposals for hardware-oriented compression algorithms~\cite{sc2,hycomp,morc,kimbit} and new compressed cache designs~\cite{dcc,scc,yacc}. Most of these works aim for higher compression ratios, but at the cost of much higher compression/decompression latency. This is why some of these works~\cite{morc,kimbit} are proposed in the context of modern GPUs, which are much more tolerant of increases in memory latency.

\section{Summary}
\label{sec:conclusion}

In this chapter, we presented B$\Delta$I\xspace, a new and simple, yet efficient, hardware cache compression technique that provides a high effective cache capacity increase and system performance improvement compared to three state-of-the-art cache compression techniques. B$\Delta$I\xspace achieves these benefits by exploiting the low dynamic range of in-cache data and representing cache lines in the form of two base values (with one implicit base equal to zero) and an array of differences from these base values.
We provide insights into why B$\Delta$I\xspace compression is effective via examples of existing in-cache data patterns from real programs. B$\Delta$I\xspace's key advantage over previously proposed cache compression mechanisms is its ability to combine low decompression latency (due to parallel decompression) with a high average compression ratio. We describe the design and operation of a cache that can utilize B$\Delta$I\xspace compression with relatively modest hardware overhead. Our extensive evaluations across a variety of workloads and system configurations show that B$\Delta$I\xspace compression in an L2 cache can improve system performance for both single-core (8.1\%) and multi-core workloads (9.5\%~/ 11.2\% for two/four cores), outperforming three state-of-the-art cache compression mechanisms. In many workloads, the performance benefit of using B$\Delta$I\xspace compression is close to the performance benefit of doubling the L2/L3 cache size. In summary, we conclude that B$\Delta$I\xspace is an efficient and low-latency data compression substrate for on-chip caches in both single- and multi-core systems.

\section{B$\Delta$I\xspace: Design and Operation}
\label{sec:design}

\subsection{Design}
\label{sec:design-design}

\textbf{Compression and Decompression}. We now describe the detailed design of the compression and decompression logic.\footnote{For simplicity, we first present the compression and decompression logic for B$+\Delta$\xspace. Compression for B$\Delta$I\xspace requires one more step, where elements are checked to see whether they can be compressed with the zero base; the decompression logic only requires additional selector logic to decide which base should be used in the addition. We describe the differences between the B$\Delta$I\xspace and B$+\Delta$\xspace designs later in this section.} The compression logic consists of eight distinct compressor units: six units for different combinations of base sizes (8, 4, and 2 bytes) and $\Delta$ sizes (4, 2, and 1 bytes), and two units for zero and repeated value compression (Figure~\ref{fig:compression2}). Every compressor unit takes a cache line as input and outputs whether or not this cache line can be compressed with this unit; if it can be, the unit also outputs the compressed cache line. The compressor selection logic determines the set of compressor units that can compress this cache line. If multiple compression options are available for the cache line (e.g., 8-byte base 1-byte $\Delta$ and zero compression), the selection logic chooses the one with the smallest compressed cache line size. Note that all potential compressed sizes are known statically and are listed in Table~\ref{tbl:ratios}. All compressor units can operate in parallel.

\begin{figure}[!htb]
\begin{center}
\includegraphics[scale=0.75]{chap3/figures/CompressorDesign.pdf}
\caption{Compressor design. CU: Compressor unit.}
\label{fig:compression2}
\end{center}
\end{figure}

Figure~\ref{fig:compression} describes the organization of the 8-byte-base 1-byte-$\Delta$ compressor unit for a 32-byte cache line. The compressor ``views'' this cache line as a set of four 8-byte elements ($V_0$, $V_1$, $V_2$, $V_3$) and, in the first step, computes the differences between the base element and all other elements. Recall that the base ($B_0$) is set to the first value ($V_0$), as we describe in Section~\ref{sec:bdc}.
The resulting difference values ($\Delta_0, \Delta_1, \Delta_2, \Delta_3$) are then checked to see whether their seven most significant bytes are all zeros or all ones (a 1-byte sign-extension check). If so, the cache line can be stored as the base value $B_0$ and the set of differences $\Delta_0, \Delta_1, \Delta_2, \Delta_3$, where each $\Delta_i$ requires only 1 byte. The compressed cache line size in this case is 12 bytes instead of the original 32 bytes. If the 1-byte sign-extension check returns false (i.e., at least one $\Delta_i$ cannot be represented using 1 byte), then the compressor unit cannot compress this cache line. The organization of all other compressor units is similar. This compression design can potentially be optimized, especially if hardware complexity is more critical than latency; e.g., all 8-byte-base compressor units can be merged into one to avoid partial logic duplication.

\begin{table}[!htb]
\begin{center}
\begin{tabular}{|@{ }>{\scriptsize\bgroup}c<{\egroup}@{ }|@{ }>{\scriptsize\bgroup}c<{\egroup}@{ }|>{\scriptsize\bgroup}c<{\egroup} |@{ }>{\scriptsize\bgroup}c<{\egroup}@{ }|>{\scriptsize\bgroup}c<{\egroup}|| @{ }>{\scriptsize\bgroup}c<{\egroup}@{ }|@{ }>{\scriptsize\bgroup}c<{\egroup}@{ }|@{ } >{\scriptsize\bgroup}c<{\egroup}@{ }| >{\scriptsize\bgroup}c<{\egroup}|>{\scriptsize\bgroup}c<{\egroup}|}
\hline
{\textbf{Name}} & {\textbf{Base}}& {\textbf{$\Delta$}} & {\textbf{Size}} & \textbf{Enc.} & {\textbf{Name}} & {\textbf{Base}}& {\textbf{$\Delta$}} & {\textbf{Size}} &\textbf{Enc.} \\ \hline \hline
Zeros & 1 & 0 & 1/1 & 0000 & Rep.Values & 8 & 0 & 8/8 & 0001 \\\hline
Base8-$\Delta$1 & 8 & 1 & 12/16 & 0010 & Base8-$\Delta$2 & 8 & 2 & 16/24 & 0011 \\\hline
Base8-$\Delta$4 & 8 & 4 & 24/40 & 0100 & Base4-$\Delta$1 & 4 & 1 & 12/20 & 0101 \\\hline
Base4-$\Delta$2 & 4 & 2 & 20/36 & 0110 & Base2-$\Delta$1 & 2 & 1 & 18/34 & 0111 \\\hline
NoCompr. & N/A & N/A & 32/64 & 1111 \\\cline{1-5}
\end{tabular}
\end{center}
\caption{B$\Delta$I\xspace encoding. All sizes are in bytes. Compressed sizes are given for 32-/64-byte cache lines.}
\label{tbl:ratios}
\end{table}

\begin{figure}[!htb]
\begin{center}
\centering
\includegraphics[scale=0.8]{chap3/figures/Compressor-8byte.pdf}
\caption{Compressor unit for 8-byte base, 1-byte $\Delta$}
\label{fig:compression}
\end{center}
\end{figure}

Figure~\ref{fig:decompression} shows the latency-critical decompression logic. Its organization is simple: for a compressed cache line that consists of a base value $B_0$ and a set of differences $\Delta_0, \Delta_1, \Delta_2, \Delta_3$, only additions of the base to the differences are performed to obtain the uncompressed cache line. Such decompression takes only as long as the latency of an adder, allowing the B$\Delta$I\xspace cache to perform decompression very quickly.

\begin{figure}[!htb]
\centering
\includegraphics[scale=0.8]{chap3/figures/Decompressor.pdf}
\caption{Decompressor design}
\label{fig:decompression}
\end{figure}

\textbf{B$\Delta$I\xspace Cache Organization}. In order to obtain the benefits of compression, the conventional cache design requires certain changes. Cache compression potentially allows more cache lines to be stored in the same amount of data storage than in a conventional uncompressed cache. But, in order to access these additional compressed cache lines, we need a way to address them.
One way to achieve this is to have more tags~\cite{fpc} than in a conventional cache of the same size and associativity, e.g., twice as many.\footnote{We describe an implementation with the number of tags doubled and evaluate sensitivity to the number of tags in Section~\ref{sec:results}.} We can then use these additional tags as pointers to more data elements in the corresponding data storage.

Figure~\ref{fig:2bdc} shows the required changes in the cache design. The conventional 2-way cache with 32-byte cache lines (shown on top) has a tag store with two tags per set, and a data store with two 32-byte cache lines per set. Every tag directly maps to the corresponding piece of the data storage. In the B$\Delta$I\xspace design (at the bottom), we have twice as many tags (four in this example), and every tag also has 4 additional bits to represent whether or not the line is compressed, and if it is, what compression type is used (see ``Encoding'' in Table~\ref{tbl:ratios}). The data storage remains the same size as before (2$\times$32 = 64 bytes), but it is partitioned into smaller fixed-size segments (e.g., 8 bytes each in Figure~\ref{fig:2bdc}). Every tag stores the starting segment of its cache block (e.g., $Tag_2$ points to segment $S_2$) and the encoding for the block. Since the encoding determines the compressed size, the number of segments used by the cache block can be easily derived from it.

\begin{figure}[hbt]
\includegraphics[scale=0.7]{chap3/figures/CacheOrganization.pdf}
\caption{B$\Delta$I\xspace vs. conventional cache organization. Number of tags is doubled, compression encoding bits are added to every tag, data storage is the same in size, but partitioned into segments.}
\label{fig:2bdc}
\end{figure}

\textbf{Storage Cost Analysis.} This cache organization potentially allows storing twice as many cache lines in the same data storage, because the number of tags in a set is doubled. As a result, it requires a modest increase in the tag store size (similar to some other designs~\cite{fpc-tr,iic,v-way}). We analyze the storage overhead in terms of raw additional bits in Table~\ref{tbl:cost} for a baseline 16-way 2MB cache. We have also used CACTI 5.3~\cite{cacti} to estimate the additional latency and area cost of our proposed cache organization, using parameters for the 32nm technology node. Cache access latency increases by 1--2 cycles (depending on cache size) for a 4GHz processor.
On-chip cache area increases by 2.3\%, which is small compared to the 137\% area increase that occurs if we double both the tag store and the data store size (by doubling the associativity).\footnote{As we show in Section~\ref{sec:results}, B$\Delta$I\xspace with our proposed cache organization achieves performance that is within 1--2\% of a cache that has double the tag and data store size.}

\begin{table}[h]
\centering
\begin{tabular}{|@{ }>{\scriptsize\bgroup}c<{\egroup}@{ }| @{ }>{\scriptsize\bgroup}c<{\egroup}@{ }| @{ }>{\scriptsize\bgroup}c<{\egroup}@{ }|}
\hline
{ } & {\textbf{Baseline}} & {\textbf{B$\Delta$I\xspace}}\\ \hline
{Size of tag-store entry} & {21 bits} & {32 bits (+4--encoding, +7--segment pointer)} \\ \hline
{Size of data-store entry} & {512 bits} & {512 bits} \\ \hline
{Number of tag-store entries} & {32768} & {65536} \\ \hline
{Number of data-store entries} & {32768} & {32768} \\ \hline
{Tag-store size} & {84kB} & {256kB} \\ \hline \hline
{Total (data-store+tag-store) size} & {2132kB} & {2294kB} \\ \hline
\end{tabular}
\caption{Storage cost analysis for 2MB 16-way L2 cache, assuming 64-byte cache lines, 8-byte segments, and 36 bits for address space.}
\label{tbl:cost}
\end{table}

\textbf{Cache Eviction Policy.} In a compressed cache, there are two cases under which multiple cache lines may need to be evicted, because evicting a single cache line (i.e., the LRU one in a cache that uses the LRU replacement policy) may not create enough space for the incoming or modified cache line: first, when a new cache line (compressed or uncompressed) is inserted into the cache; second, when a cache line already in the cache is modified such that its new size is larger than its old size. In both cases, we propose to use a slightly modified version of the LRU replacement policy wherein the cache evicts multiple LRU cache lines to create enough space for the incoming or modified cache line.\footnote{On average, 5.2\% of all insertions or writebacks into the cache resulted in the eviction of multiple cache lines in our workloads.} Although such a policy can increase the latency of eviction, it has a negligible effect on performance because evictions are off the critical path of execution. Note that more effective replacement policies that take into account compressed cache line sizes are possible -- e.g., a policy that does not evict a zero cache line unless there is a need for space in the tag store. We leave the study of such policies for future work.

\textbf{B$\Delta$I\xspace Design Specifics}. So far, we have described the parts of the design common to both B$+\Delta$\xspace and B$\Delta$I\xspace. However, there are some differences between the two designs. First, B$\Delta$I\xspace compression happens (off the critical path) in two steps (vs. only one step for B$+\Delta$\xspace). For a fixed $\Delta$ size, \emph{Step 1} attempts to compress all elements using an implicit base of zero. \emph{Step 2} tries to compress those elements that were not compressed in Step 1. The first element that cannot be compressed in Step 1 is chosen as the base for Step 2. The compression step stores a bit mask, one bit per element, indicating whether the corresponding element is compressed with the zero base. Note that we keep the size of $\Delta$ (1, 2, or 4 bytes) the same for both bases. Second, B$\Delta$I\xspace decompression is implemented as a masked addition of the base (chosen in Step 2) to the array of differences: the elements to which this base is added depend on the bit mask stored during compression.
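To illustrate these two steps, the following C sketch shows B$\Delta$I\xspace compression and decompression in software for the 8-byte-element, 1-byte-$\Delta$ case. The names and data layout are ours, and the hardware checks all elements in parallel rather than iterating over them.

\begin{verbatim}
#include <stdint.h>
#include <stdbool.h>

#define NUM_VALUES 4   /* 32-byte line viewed as four 8-byte elements */

typedef struct {
    uint64_t base;               /* non-zero base chosen in Step 2           */
    uint8_t  mask;               /* bit i set => element i uses the zero base */
    int8_t   delta[NUM_VALUES];  /* 1-byte deltas (same size for both bases)  */
} bdi_line_t;

static bool fits_1byte(uint64_t v, uint64_t base) {
    int64_t d = (int64_t)(v - base);
    return d >= INT8_MIN && d <= INT8_MAX;
}

/* Two-step BDI compression with an implicit zero base and one arbitrary base. */
bool bdi_compress(const uint64_t line[NUM_VALUES], bdi_line_t *out) {
    bool have_base = false;
    out->base = 0;
    out->mask = 0;
    for (int i = 0; i < NUM_VALUES; i++) {
        if (fits_1byte(line[i], 0)) {               /* Step 1: zero base      */
            out->mask |= (uint8_t)(1u << i);
            out->delta[i] = (int8_t)line[i];
        } else {
            if (!have_base) {                       /* first element that did  */
                out->base = line[i];                /* not compress in Step 1  */
                have_base = true;                   /* becomes the Step-2 base */
            }
            if (!fits_1byte(line[i], out->base))    /* Step 2: arbitrary base  */
                return false;                       /* line not compressible   */
            out->delta[i] = (int8_t)(int64_t)(line[i] - out->base);
        }
    }
    return true;
}

/* Decompression: masked addition of the Step-2 base to the deltas. */
void bdi_decompress(const bdi_line_t *in, uint64_t line[NUM_VALUES]) {
    for (int i = 0; i < NUM_VALUES; i++) {
        uint64_t base = ((in->mask >> i) & 1u) ? 0 : in->base;
        line[i] = base + (uint64_t)(int64_t)in->delta[i];
    }
}
\end{verbatim}

The bit mask plays the role of the per-element base selector described above: during decompression, the non-zero base is added only to the elements whose mask bit is cleared.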
\subsection{Operation}

We propose using our B$\Delta$I\xspace design at cache levels higher than L1 (e.g., L2 and L3). While it is possible to compress data in the L1 cache~\cite{fvc}, doing so would increase the critical path of latency-sensitive L1 cache hits, which can result in significant performance degradation for applications that do not benefit from compression. We now describe how a B$\Delta$I\xspace cache fits into a system with a 2-level cache hierarchy (L1 and L2) backed by main memory, with the L2 cache compressed using B$\Delta$I\xspace -- note that the only changes are to the L2 cache. We assume all caches use the writeback policy. There are four scenarios related to the compressed L2 cache operation: 1) an L2 cache hit, 2) an L2 cache miss, 3) a writeback from L1 to L2, and 4) a writeback from L2 to memory.

First, on an L2 hit, the corresponding cache line is sent to the L1 cache. If the line is compressed, it is decompressed before it is sent to the L1 cache. Second, on an L2 miss, the corresponding cache line is brought from memory and is sent to the L1 cache. In this case, the line is also compressed and inserted into the L2 cache. Third, when a line is written back from L1 to L2, it is first compressed. If an old copy of the line is already present in the L2 cache, the old (stale) copy is invalidated. The new compressed cache line is then inserted into the L2 cache. Fourth, when a line is written back from the L2 cache to memory, it is decompressed before it is sent to the memory controller. In both the second and third scenarios, multiple cache lines might potentially be evicted from the L2 cache, based on the cache eviction policy described in Section~\ref{sec:design-design}.

\section{Introduction}
\label{bdi:sec:introduction}
\blfootnote{Originally published as ``Base-Delta-Immediate Compression: Practical Data Compression for On-Chip Caches'' in the 21st International Conference on Parallel Architectures and Compilation Techniques, 2012~\cite{bdi}.}

To mitigate the latency and bandwidth limitations of accessing main memory, modern microprocessors contain multi-level on-chip cache hierarchies. While caches have a number of design parameters and there is a large body of work on using cache hierarchies more effectively (e.g., \cite{iic,RRIP,dip,line-distillation,EAF,Seshadri1,Seznec1,mlp,Qureshi1,Johnson1,Johnson2,Tyson1}), one key property of a cache that has a major impact on performance, die area, and power consumption is its {\em capacity}. The decision of how large to make a given cache involves tradeoffs: while larger caches often result in fewer cache misses, this potential benefit comes at the cost of a longer access latency and increased area and power consumption. As we look toward the future with an increasing number of on-chip cores, the issue of providing sufficient capacity in shared L2 and L3 caches becomes increasingly challenging. Simply scaling cache capacities linearly with the number of cores may be a waste of both chip area and power. On the other hand, reducing the L2 and L3 cache sizes may result in excessive off-chip cache misses, which are especially costly in terms of latency and precious off-chip bandwidth. One way to potentially achieve the performance benefits of larger cache capacity without suffering all of its disadvantages is to exploit {\em data compression}~\cite{fpc,register-caching,iic-comp,OldCompression,fvc,fvl}.
Data compression has been successfully adopted in a number of different contexts in modern computer systems~\cite{huffman,lz} as a way to conserve storage capacity and/or data bandwidth (e.g., downloading compressed files over the Internet~\cite{Networks} or compressing main memory~\cite{MXT}). However, it has not been adopted by modern commodity microprocessors as a way to increase effective cache capacity. Why not? The ideal cache compression technique would be {\em fast}, {\em simple}, and {\em effective} in saving storage space. Clearly, the resulting compression ratio should be large enough to provide a significant upside, and the hardware complexity of implementing the scheme should be low enough that its area and power overheads do not offset its benefits. Perhaps the biggest stumbling block to the adoption of cache compression in commercial microprocessors, however, is {\em decompression latency}. Unlike cache {\em compression}, which takes place in the background upon a cache fill (after the critical word is supplied), cache {\em decompression} is on the critical path of a {\em cache hit}, where minimizing latency is extremely important for performance. In fact, because L1 cache hit times are of utmost importance, we only consider compression of the L2 caches and beyond in this study (even though our algorithm could be applied to any cache). Because the three goals of having {\em fast}, {\em simple}, and {\em effective} cache compression are at odds with each other (e.g., a very simple scheme may yield too small a compression ratio, or a scheme with a very high compression ratio may be too slow, etc.), the challenge is to find the right balance between these goals. Although several cache compression techniques have been proposed in the past~\cite{fpc,c-pack,ZeroContent,iic-comp,fvc}, they suffer from either a small compression ratio~\cite{ZeroContent,fvc}, high hardware complexity~\cite{iic-comp}, or large decompression latency~\cite{fpc,c-pack,iic-comp,fvc}. To achieve significant compression ratios while minimizing hardware complexity and decompression latency, we propose a new cache compression technique called \textbf{Base-Delta-Immediate (B$\Delta$I\xspace)} compression. \subsection{Our Approach: B$\Delta$I\xspace Compression} The key observation behind \textbf{Base-Delta-Immediate~(B$\Delta$I\xspace)} compression is that, for many cache lines, the data values stored within the line have a {\em low dynamic range}: i.e., the relative difference between values is small. In such cases, the cache line can be represented in a compact form using a common {\em base} value plus an array of relative differences (``{\em deltas}''), whose combined size is much smaller than the original cache line. (Hence the {\em ``base''} and {\em ``delta''} portions of our scheme's name). We refer to the case with a single arbitrary base as {\em Base+Delta} (B$+\Delta$\xspace) compression, and this is at the heart of all of our designs. To increase the likelihood of being able to compress a cache line, however, it is also possible to have {\em multiple bases}. In fact, our results show that for the workloads we studied, the best option is to have {\em two bases}, where one base is always {\em zero}. (The deltas relative to zero can be thought of as small {\em immediate} values, which explains the last word in the name of our B$\Delta$I\xspace compression scheme.) 
Using these two base values (zero and something else), our scheme can efficiently compress cache lines containing a mixture of two separate dynamic ranges: one centered around an arbitrary value chosen from the actual contents of the cache line (e.g., pointer values), and one close to zero (e.g., small integer values). Such mixtures from two dynamic ranges are commonly found (e.g., in pointer-linked data structures), as we will discuss later. As demonstrated later in this chapter, B$\Delta$I\xspace compression offers the following advantages: (i) a {\em high compression ratio} since it can exploit a number of frequently-observed patterns in cache data (as shown using examples from real applications and validated in our experiments); (ii) {\em low decompression latency} since decompressing a cache line requires only a simple masked vector addition; and (iii) {\em relatively modest hardware overhead and implementation complexity}, since both the compression and decompression algorithms involve only simple vector addition, subtraction, and comparison operations. \ignore{ This paper makes the following contributions: \begin{itemize} \item We propose a new cache compression algorithm, Base-Delta-Immediate Compression (B$\Delta$I\xspace), which exploits the low dynamic range of values present in many cache lines to compress them to smaller sizes. Both the compression and decompression algorithms of B$\Delta$I\xspace have low latency and require only vector addition, subtraction and comparison operations. \item Based on the proposed B$\Delta$I\xspace compression algorithm, we introduce a new compressed cache design. This design achieves a high degree of compression at a lower decompression latency compared to two state-of-the-art cache compression techniques: Frequent Value Compression (FVC)~\cite{fvc} and Frequent Pattern Compression (FPC)~\cite{fpc}, which require complex and long-latency decompression pipelines~\cite{fpc-tr}. \item We evaluate the performance benefits of B$\Delta$I\xspace compared to a baseline system that does not employ compression, as well as against three state-of-the-art cache compression techniques~\cite{fpc,fvc,ZeroContent}. We show that B$\Delta$I\xspace provides a better or comparable degree of compression for the majority of the applications we studied. It improves performance for both single-core (8.1\%) and multi-core workloads (9.5\%~/ 11.2\% for two- / four-cores). For many applications, compression with B$\Delta$I\xspace provides the performance benefit of doubling the uncompressed cache size of the baseline system. \end{itemize} } \section{Evaluation Methodology} \label{sec:methodology} We use an in-house, event-driven 32-bit x86 simulator whose front-end is based on Simics~\cite{Simics}. All configurations have either a two- or three-level cache hierarchy, with private L1D caches. Major simulation parameters are provided in Table \ref{tbl:simulation-parameters}. All caches uniformly use a 64B cache block size and LRU policy for replacement. All cache latencies were determined using CACTI~\cite{cacti} (assuming a 4GHz frequency), and provided in Table~\ref{tbl:cache-latencies}. 
We also checked that these latencies match existing last-level cache implementations from Intel and AMD, when properly scaled to the corresponding frequency.\footnote{Intel Xeon X5570 (Nehalem) 2.993GHz, 8MB L3 - 35 cycles~\cite{Nehalem}; AMD Opteron 2.8GHz, 1MB L2 - 13 cycles~\cite{Opteron}.} For evaluations, we use benchmarks from the SPEC CPU2006 suite~\cite{SPEC}, three TPC-H queries~\cite{tpc}, and an Apache web server (shown in Table~\ref{tbl:benchmarks} and described in more detail in Section~\ref{sec:results}). All results are collected by running a representative portion of each benchmark for 1 billion instructions.

\begin{table}[ht]
\centering
\begin{tabular}{|>{\scriptsize\bgroup}l<{\egroup}|>{\scriptsize\bgroup}c<{\egroup}|}
\hline
Processor & 1--4 cores, 4GHz, x86 in-order \\ \hline
L1-D cache & 32kB, 64B cache-line, 2-way, 1 cycle \\ \hline
L2 caches & 0.5--16 MB, 64B cache-line, 16-way \\ \hline
L3 caches & 2--16 MB, 64B cache-line, 16-way \\ \hline
Memory & 300 cycle latency \\ \cline{1-2}
\end{tabular}%
\caption{Major parameters of the simulated system}
\label{tbl:simulation-parameters}%
\end{table}

\begin{table}[ht]
\centering
\begin{tabular}{|>{\scriptsize\bgroup}c<{\egroup}|>{\scriptsize\bgroup}c<{\egroup}| >{\scriptsize\bgroup}c<{\egroup}|>{\scriptsize\bgroup}c<{\egroup}| >{\scriptsize\bgroup}c<{\egroup}|>{\scriptsize\bgroup}c<{\egroup}| }
\hline
Size & Latency & Size & Latency & Size & Latency \\ \hline
512kB & 15 & 1MB & 21 & 2MB & 27 \\ \hline
4MB & 34 & 8MB & 41 & 16MB & 48\\ \hline
\end{tabular}
\caption{Cache hit latencies used in simulations (in cycles). B$\Delta$I\xspace caches have +1 cycle for 0.5--4MB (+2 cycles for others) on a hit/miss due to larger tag stores, and +1 cycle for decompression.}
\label{tbl:cache-latencies}
\end{table}

\textbf{{Metrics.}} We measure the performance of our benchmarks using IPC (instructions per cycle), effective compression ratio (effective cache size increase, e.g., 1.5 for a 2MB cache means an effective size of 3MB), and MPKI (misses per kilo instruction). For multi-programmed workloads we use the weighted speedup~\cite{weightedspeedup,ws2} as the performance metric: $\sum_i \frac{IPC_i^{shared}}{IPC_i^{alone}}$. For bandwidth consumption we use BPKI (bytes transferred over the bus per thousand instructions~\cite{BPKI}).

The effective compression ratio for all mechanisms is computed without meta-data overhead. We add all meta-data to the tag storage; e.g., for B$\Delta$I\xspace, we add four bits to encode the compression scheme, and a bit mask to differentiate between the two bases. We include these in the tag overhead, which was evaluated in Section~\ref{sec:design}. Our comparisons are fair because we do not include this overhead in the compression ratios of the previous works we compare to. In fact, the meta-data overhead is higher for FPC (3 bits for each word).

We conducted a study of each application's performance sensitivity to increased L2 cache size (from 512kB to 16MB). Our results show that several benchmarks are almost insensitive to the size of the L2 cache (IPC improvement of less than 5\% with a 32$\times$ increase in cache size): dealII, povray, calculix, gamess, namd, milc, and perlbench. This typically means that their working sets mostly fit into the L1D cache, leaving almost no potential for L2/L3/memory optimizations. Therefore, we do not present data for these applications, although we verified that our mechanism does not affect their performance.
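As a small, hypothetical illustration of the weighted speedup metric defined above (the function and the numbers below are ours, for illustration only):

\begin{verbatim}
#include <stdio.h>

/* Weighted speedup = sum over cores of IPC_shared / IPC_alone. */
double weighted_speedup(const double ipc_shared[], const double ipc_alone[], int n) {
    double ws = 0.0;
    for (int i = 0; i < n; i++)
        ws += ipc_shared[i] / ipc_alone[i];
    return ws;
}

int main(void) {
    /* Two co-running applications (illustrative numbers only). */
    double shared[] = {0.9, 1.2};   /* IPC when running together      */
    double alone[]  = {1.0, 1.5};   /* IPC when running in isolation  */
    printf("weighted speedup = %.2f\n", weighted_speedup(shared, alone, 2));
    /* Prints 1.70; a value of 2.0 would mean no slowdown from sharing. */
    return 0;
}
\end{verbatim}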
\textbf{{Parameters of Evaluated Schemes.}} For FPC, we used a decompression latency of 5 cycles, and a segment size of 1 byte (as for B$\Delta$I\xspace) to get the highest compression ratio as described in ~\cite{fpc-tr}. For FVC, we used static profiling for 100k instructions to find the 7 most frequent values as described in~\cite{fvc}, and a decompression latency of 5 cycles. For ZCA and B$\Delta$I\xspace, we used a decompression latency of 1 cycle. We also evaluated B$\Delta$I\xspace with higher decompression latencies (2-5 cycles). B$\Delta$I\xspace continues to provide better performance, because for most applications it provides a better overall compression ratio than prior mechanisms. When decompression latency of B$\Delta$I\xspace increases from 1 to 5 cycles, performance degrades by 0.74\%. \textbf{{Internal Fragmentation.}} In our simulations, we assumed that before every insertion, we can shift segments properly to avoid fragmentation (implementable, but might be inefficient). We believe this is reasonable, because insertion happens off the critical path of the execution. Previous work~\cite{fpc} adopted this assumption, and we treated all schemes equally in our evaluation. Several more recent works~\cite{dcc,scc,yacc} (after this work was published) looked at more efficient ways of handling fragmentation. \begin{table}[!ht] \centering \begin{tabular}{ |>{\scriptsize\bgroup}c<{\egroup}|>{\scriptsize\bgroup}c<{\egroup}|>{\scriptsize\bgroup}c<{\egroup}|>{\scriptsize\bgroup}c<{\egroup}|| >{\scriptsize\bgroup}c<{\egroup}|>{\scriptsize\bgroup}c<{\egroup}|>{\scriptsize\bgroup}c<{\egroup}|| >{\scriptsize\bgroup}c<{\egroup}|>{\scriptsize\bgroup}c<{\egroup}|>{\scriptsize\bgroup}c<{\egroup}|} \hline \textbf{Cat.} & \textbf{Name} & \textbf{Comp. Ratio} & \textbf{Sens.} & \textbf{Name} & \textbf{Comp. Ratio} & \textbf{Sens.} & \textbf{Name} & \textbf{Comp. Ratio} & \textbf{Sens.} \\ \hline \hline \multirow{3}{*}{\rotatebox{45}{LCLS}} & gromacs & 1.43 / L & L & hmmer & 1.03 / L & L & lbm & 1.00 / L & L \\ \cline{2-10} & leslie3d & 1.41 / L & L & sphinx & 1.10 / L & L & tpch17 & 1.18 / L & L \\ \cline{2-10} & libquantum & 1.25 / L & L & wrf & 1.01 / L & L \\ \hline \hline \multirow{3}{*}{\rotatebox{45}{HCLS}} & apache & 1.60 / H & L & zeusmp & 1.99 / H & L & gcc & 1.99 / H & L \\ \cline{2-10} & gobmk & 1.99 / H & L & sjeng & 1.50 / H & L & tpch2 & 1.54 / H & L \\ \cline{2-10} & tpch6 & 1.93 / H & L & GemsFDTD & 1.99 / H & L & cactusADM & 1.97 / H & L \\ \hline \hline \multirow{3}{*}{\rotatebox{45}{HCHS}} & astar & 1.74 / H & H & bzip2 & 1.60 / H & H & mcf & 1.52 / H & H \\ \cline{2-10} & omnetpp & 1.58 / H & H & soplex & 1.99 / H & H & h264ref & 1.52 / H & H \\ \cline{2-10} & xalancbmk & 1.61 / H & H & & & & & & \\ \hline \end{tabular} \caption{Benchmark characteristics and categories: \textbf{Comp. Ratio} (effective compression ratio for 2MB B$\Delta$I\xspace L2) and \textbf{Sens.} (cache size sensitivity). Sensitivity is the ratio of improvement in performance by going from 512kB to 2MB L2 (L - low ($\le$ 1.10) , H - high ($>$ 1.10)). For compression ratio: L - low ($\le$ 1.50), H - high ($>$ 1.50). 
\textbf{Cat.} denotes the category based on compression ratio and sensitivity.}
\label{tbl:benchmarks}
\end{table}

\section{Results \& Analysis}
\label{sec:results}

\subsection{Single-core Results}
\label{sec:results-1-core}

Figure~\ref{fig:L2RealAll}(a) shows the performance improvement of our proposed B$\Delta$I\xspace design over the baseline cache design for various cache sizes, normalized to the performance of a 512KB baseline design. The results are averaged across all benchmarks. Figure~\ref{fig:L2RealAll}(b) plots the corresponding results for MPKI, also normalized to a 512KB baseline design. Several observations are in order. First, the B$\Delta$I\xspace cache significantly outperforms the baseline cache for all cache sizes. By storing cache lines in compressed form, the B$\Delta$I\xspace cache is able to effectively store more cache lines and thereby significantly reduce the cache miss rate (as shown in Figure~\ref{fig:L2RealAll}(b)). Second, in most cases, B$\Delta$I\xspace achieves the performance improvement of doubling the cache size. In fact, the 2MB B$\Delta$I\xspace cache performs better than the 4MB baseline cache. This is because B$\Delta$I\xspace increases the effective cache size \emph{without} significantly increasing the access latency of the data storage. Third, the performance improvement of the B$\Delta$I\xspace cache decreases with increasing cache size. This is expected because, as the cache size increases, the working sets of more benchmarks start fitting into the cache, and storing the cache lines in compressed format therefore has increasingly less benefit. Based on our results, we conclude that B$\Delta$I\xspace is an effective compression mechanism to significantly improve single-core performance, and that it can provide the benefits of doubling the cache size without incurring the area and latency penalties associated with a cache of twice the size.

\begin{figure}[h]
\centering
\begin{minipage}[b]{0.45\linewidth}
\includegraphics[width=0.9\textwidth]{chap3/figures/L2RealGeoMean.pdf}
\caption{\small{(a) IPC}}
\label{fig:L2RealGeoMean}
\end{minipage}
\begin{minipage}[b]{0.45\linewidth}
\includegraphics[width=0.9\textwidth]{chap3/figures/L2MPKI.pdf}
\caption{\small{(b) MPKI}}
\label{fig:L2MPKI}
\end{minipage}
\caption{Performance of B$\Delta$I\xspace with different cache sizes. Percentages show improvement over the baseline cache (same size).}
\label{fig:L2RealAll}
\end{figure}

\begin{comment}
The graph in Figure~\ref{fig:L2RealAllSizesSelective} represents IPC for every benchmark for the cache size it was most sensitive according to the sensitivity study described in Section~\ref{sec:methodology}. As we can see all benchmarks presented have performance benefit from cache compression, but the cache size at which the improvement is the highest varies across different applications, and depends on the size of the working set.
\begin{figure}[!h]
\centering
\includegraphics[scale=0.5]{chap3/figures/L2RealAllSizesSelective.pdf}
\caption{IPC comparison for different cache sizes. Every benchmark presented for a cache size it is most sensitive to.}
\label{fig:L2RealAllSizesSelective}
\end{figure}
\end{comment}

\subsection{Multi-core Results}
\label{sec:mult}

When the working set of an application fits into the cache, the application will not benefit significantly from compression even though its data might have high redundancy.
However, when such an application is running concurrently with another cache-sensitive application in a multi-core system, storing its cache lines in compressed format will create additional cache space for storing the data of the cache-sensitive application, potentially leading to significant overall performance improvement. \sloppypar{ To study this effect, we classify our benchmarks into four categories based on their compressibility using B$\Delta$I\xspace (low (LC) or high (HC)) and cache sensitivity (low (LS) or high (HS)). Table~\ref{tbl:benchmarks} shows the sensitivity and compressibility of different benchmarks along with the criteria used for classification. None of the benchmarks used in our evaluation fall into the low-compressibility high-sensitivity (LCHS) category. We generate six different categories of 2-core workloads (20 in each category) by randomly choosing benchmarks with different characteristics (LCLS, HCLS and HCHS). } Figure~\ref{fig:l2ws2core2m} shows the performance improvement provided by four different compression schemes, namely, ZCA, FVC, FPC, and B$\Delta$I\xspace, over a 2MB baseline cache design for different workload categories. We draw three major conclusions. \begin{figure}[ht!] \begin{center} \includegraphics[scale=0.5]{chap3/figures/L2-2CoreWS.pdf} \caption{Normalized weighted speedup for 2MB L2 cache, 2-cores. Percentages show improvement over the baseline uncompressed cache.} \label{fig:l2ws2core2m} \end{center} \end{figure} First, B$\Delta$I\xspace outperforms all prior approaches for all workload categories. Overall, B$\Delta$I\xspace improves system performance by 9.5\% compared to the baseline cache design. Second, as we mentioned in the beginning of this section, even though an application with highly compressible data may not itself benefit from compression (HCLS), it can enable opportunities for performance improvement for the co-running application. This effect is clearly visible in the figure. When at least one benchmark is sensitive to cache space, the performance improvement of B$\Delta$I\xspace increases with increasing compressibility of the co-running benchmark (as observed by examining the bars labeled as High Sensitivity). B$\Delta$I\xspace provides the highest improvement (18\%) when \emph{both} benchmarks in a workload are highly compressible and highly sensitive to cache space (HCHS-HCHS). As the figure shows, the performance improvement is not as significant when neither benchmark is sensitive to cache space irrespective of their compressibility (as observed by examining the bars labeled Low Sensitivity). Third, although FPC provides a degree of compression similar to B$\Delta$I\xspace for most benchmarks (as we showed in Section~\ref{sec:bdi}, Figure~\ref{fig:2-bdc-compressibility}) its performance improvement is lower than B$\Delta$I\xspace for all workload categories. This is because FPC has a more complex decompression algorithm with higher decompression latency compared to B$\Delta$I\xspace. On the other hand, for high sensitivity workloads, neither ZCA nor FVC is as competitive as FPC or B$\Delta$I\xspace in the HCLS-HCHS category. This is because both ZCA and FVC have a significantly lower degree of compression compared to B$\Delta$I\xspace. However, a number of benchmarks in the HCLS category (\emph{cactusADM}, \emph{gcc}, \emph{gobmk}, \emph{zeusmp}, and \emph{GemsFDTD}) have high occurrences of zero in their data. 
Therefore, ZCA and FVC are able to compress most of the cache lines of these benchmarks, thereby creating additional space for the co-running HCHS application. \vspace{0.0cm} We conducted a similar experiment with 100 4-core workloads with different compressibility and sensitivity characteristics. We observed trends similar to the 2-core results presented above. On average, B$\Delta$I\xspace improves performance by 11.2\% for the 4-core workloads and it outperforms all previous techniques. We conclude that B$\Delta$I\xspace, with its high compressibility and low decompression latency, outperforms other state-of-the-art compression techniques for both 2-core and 4-core workloads, likely making it a more competitive candidate for adoption in modern multi-core processors. We summarize B$\Delta$I\xspace performance improvement against the baseline 2MB L2 cache (without compression) and other mechanisms in Table~\ref{tbl:summary}. \begin{table}[ht] \centering \begin{tabular}{|>{\scriptsize\bgroup}c<{\egroup}|>{\scriptsize\bgroup}c<{\egroup}|>{\scriptsize\bgroup}c<{\egroup}|>{\scriptsize\bgroup}c<{\egroup}|>{\scriptsize\bgroup}c<{\egroup}|} \hline \textbf{Cores} & \textbf{No Compression} & \textbf{ZCA} & \textbf{FVC} & \textbf{FPC} \\ \hline 1 & 5.1\% & 4.1\% & 2.1\% & 1.0\% \\ \hline 2 & 9.5\% & 5.7\% &3.1\% & 1.2\% \\ \hline 4 & 11.2\% & 5.6\% & 3.2\% & 1.3\% \\ \hline \end{tabular}\vspace{-1mm} \caption{Average performance improvement of B$\Delta$I\xspace over other mechanisms: No Compression, ZCA, FVC, and FPC.} \label{tbl:summary}% \end{table} \subsection{Effect on Cache Capacity} \label{sec:res3} \begin{figure}[!ht] \centering \includegraphics[scale=0.6]{chap3/figures/L2PerfBoundaries.pdf} \caption{IPC comparison of B$\Delta$I\xspace against lower and upper limits in performance (from 512kB 2-way - 4MB 16-way L2 cache). Percentages on the GeoMean bars show how close B$\Delta$I\xspace gets to the performance of the cache with twice the size (upper limit).} \label{fig:L2PerfBoundaries} \end{figure} Our proposed B$\Delta$I\xspace cache design aims to provide the benefits of increasing the cache size while not incurring the increased latency of a larger data storage. To decouple the benefits of compression using B$\Delta$I\xspace from the benefits of reduced latency compared to a larger cache, we perform the following study. We compare the performance of the baseline cache design and the B$\Delta$I\xspace cache design by progressively doubling the cache size by doubling the cache associativity. We fix the latency of accessing all caches. Figure~\ref{fig:L2PerfBoundaries} shows the results of this experiment. With the same access latency for all caches, we expect the performance of the B$\Delta$I\xspace cache (with twice the number of tags as the baseline) to be strictly between the baseline cache of the same size (lower limit) and the baseline cache of double the size (upper limit, also reflected in our results). However, with its high degree of compression, the B$\Delta$I\xspace cache's performance comes close to the performance of the twice as-large baseline cache design for most benchmarks (e.g., \emph{h264ref} and \emph{zeusmp}). On average, the performance improvement due to the B$\Delta$I\xspace cache is within 1.3\% -- 2.3\% of the improvement provided by a twice as-large baseline cache. 
We conclude that our B$\Delta$I\xspace implementation (with twice the number of tags as the baseline) achieves a performance improvement close to its upper bound, i.e., the performance of a cache twice the size of the baseline.

For an application with highly compressible data, the compression ratio of the B$\Delta$I\xspace cache is limited by the number of additional tags used in its design. Figure~\ref{fig:L2MultTags} shows the effect of varying the number of tags (from 2$\times$ to 64$\times$ the number of tags in the baseline cache) on compression ratio for a 2MB cache. As the figure shows, for most benchmarks, except \emph{soplex}, \emph{cactusADM}, \emph{zeusmp}, and \emph{GemsFDTD}, having more than twice as many tags as the baseline cache does not improve the compression ratio. The improved compression ratio for these four benchmarks is primarily due to the large number of zeros and repeated values present in their data. At the same time, having more tags does not benefit a majority of the benchmarks and also incurs higher storage cost and access latency. Therefore, we conclude that these improvements likely do not justify the use of more than 2$\times$ the tags in the B$\Delta$I\xspace cache design compared to the baseline cache.

\begin{figure}[htb]
\centering
\includegraphics[scale=0.5]{chap3/figures/L2MultTagsRatio.pdf}
\caption{Effective compression ratio vs. number of tags}
\label{fig:L2MultTags}
\end{figure}

\subsection{Effect on Bandwidth}

In a system with a 3-level cache hierarchy, where both the L2 and the L3 caches store cache lines in compressed format, there is an opportunity to compress the traffic between the two caches. This has two benefits: (1) it can lead to reduced latency of communication between the two caches, and hence, improved system performance, and (2) it can lower the dynamic power consumption of the processor as it communicates less data between the two caches~\cite{BandwidthCompression}. Figure~\ref{fig:L3Bandwidth} shows the reduction in L2-L3 bandwidth (in terms of bytes per kilo instruction) due to B$\Delta$I\xspace compression. We observe that the potential bandwidth reduction with B$\Delta$I\xspace is as high as 53X (for \emph{GemsFDTD}), and 2.31X on average. We conclude that B$\Delta$I\xspace can not only increase the effective cache size, but it can also significantly decrease the on-chip traffic.

\begin{figure}[ht!]
\centering
\includegraphics[scale=0.5]{chap3/figures/L3Bandwidth.pdf}
\caption{Effect of compression on bus bandwidth (in terms of BPKI) between L2 (256kB) and L3 (8MB)}
\label{fig:L3Bandwidth}
\end{figure}

\subsection{Detailed Comparison with Prior Work}

To compare the performance of B$\Delta$I\xspace against state-of-the-art cache compression techniques, we conducted a set of studies and evaluated IPC, MPKI, and effective compression ratio (Figure~\ref{fig:2-bdc-compressibility}) for single-core workloads, and weighted speedup (Figure~\ref{fig:l2ws2core2m}) for two- and four-core workloads.

Figure~\ref{fig:L2IPCComparison2M} shows the improvement in IPC using different compression mechanisms over a 2MB baseline cache in a single-core system. As the figure shows, B$\Delta$I\xspace outperforms all prior approaches for most of the benchmarks. For benchmarks that do not benefit from compression (e.g., \emph{leslie3d}, \emph{GemsFDTD}, and \emph{hmmer}), all compression schemes degrade performance compared to the baseline.
However, B$\Delta$I\xspace has the lowest performance degradation with its low 1-cycle decompression latency, and never degrades performance by more than 1\%. On the other hand, FVC and FPC degrade performance by as much as 3.1\% due to their relatively high 5-cycle decompression latency. We also observe that B$\Delta$I\xspace and FPC considerably reduce MPKI compared to ZCA and FVC, especially for benchmarks with more complex data patterns like \emph{h264ref}, \emph{bzip2}, \emph{xalancbmk}, \emph{hmmer}, and \emph{mcf} (not shown due to space limitations).
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.5]{chap3/figures/L2IPCComparison2M.pdf}
\caption{Performance of B$\Delta$I\xspace vs. prior work for a 2MB L2 cache}
\label{fig:L2IPCComparison2M}
\end{figure}
Based on our results, we conclude that B$\Delta$I\xspace, with its low decompression latency and high degree of compression, provides the best performance compared to all examined compression mechanisms.
\chapter{Base-Delta-Immediate Compression}
\input{chap3/introduction.tex}
\input{chap3/background.tex}
\input{chap3/bdc.tex}
\input{chap3/2-bdc.tex}
\input{chap3/design}
\input{chap3/comparison}
\input{chap3/methodology}
\input{chap3/results}
\input{chap3/conclusion}
\chapter{Compression-Aware Cache Management}
\input{camp/sections/1_introduction}
\input{camp/sections/2_motivation_onur}
\input{camp/sections/5_mechanism}
\input{camp/sections/3_background}
\input{camp/sections/6_methodology}
\input{camp/sections/7_evaluation}
\input{camp/sections/8_conclusion}
\section{Introduction}
\label{intro}
\blfootnote{Originally published as ``Linearly Compressed Pages: A Low Complexity, Low Latency Main Memory Compression Framework'' in the 46th International Symposium on Microarchitecture, 2013~\cite{lcp-micro}.}
To mitigate the latency and bandwidth limitations of accessing main memory, modern microprocessors contain multi-level on-chip cache hierarchies. While caches have a number of design parameters and there is a large body of work on using cache hierarchies more effectively (e.g., \cite{iic,RRIP,dip,line-distillation,EAF,Seshadri1,Seznec1,mlp,Qureshi1,Johnson1,Johnson2,Tyson1}), one key property of a cache that has a major impact on performance, die area, and power consumption is its {\em capacity}. The decision of how large to make a given cache involves tradeoffs: while larger caches often result in fewer cache misses, this potential benefit comes at the cost of a longer access latency and increased area and power consumption. As we look toward the future with an increasing number of on-chip cores, the issue of providing sufficient capacity in shared L2 and L3 caches becomes increasingly challenging. Simply scaling cache capacities linearly with the number of cores may be a waste of both chip area and power. On the other hand, reducing the L2 and L3 cache sizes may result in excessive off-chip cache misses, which are especially costly in terms of latency and precious off-chip bandwidth. One way to potentially achieve the performance benefits of larger cache capacity without suffering all disadvantages is to exploit {\em data compression}~\cite{fpc,register-caching,iic-comp,OldCompression,fvc,fvl}.
Further, such compression could be hidden from application (and most system\footnote{We assume that main memory compression is made visible to the memory management functions of the operating system (OS). In Section~\ref{sec:background-prior-work}, we discuss the drawbacks of a design that makes main memory compression mostly transparent to the OS~\cite{MXT}.}) software by materializing the uncompressed data as it is brought into the processor cache. Building upon the observation that there is significant redundancy in in-memory data, previous work has proposed a variety of techniques for compressing data in caches~\cite{fvc,fpc,fpc-tr,fvl,bdi,iic-comp,c-pack} and in main memory~\cite{MXT,MMCompression,vm-compression,the-compression-cache,adaptive-compressed-caching}. \subsection{Shortcomings of Prior Approaches} A key stumbling block to making data compression practical is that \emph{decompression} lies on the critical path of accessing any compressed data. Sophisticated compression algorithms, such as Lempel-Ziv and Huffman encoding~\cite{lz,huffman}, typically achieve high compression ratios at the expense of large decompression latencies that can significantly degrade performance. To counter this problem, prior work~\cite{fvl,fpc-tr,bdi} on cache compression proposed specialized compression algorithms that exploit regular patterns present in in-memory data, and showed that such specialized algorithms have reasonable compression ratios compared to more complex algorithms while incurring much lower decompression latencies. \sloppypar{ While promising, applying compression algorithms, sophisticated or simpler, to compress data stored in main memory requires first overcoming the following three challenges. First, {\em main memory compression complicates memory management}, because the operating system has to map fixed-size virtual pages to variable-size physical pages. Second, because modern processors employ on-chip caches with tags derived from the physical address to avoid aliasing between different cache lines (as physical addresses are unique, while virtual addresses are not), {\em the cache tagging logic needs to be modified} in light of memory compression to take the main memory address computation off the critical path of latency-critical L1 cache accesses. Third, in contrast with normal virtual-to-physical address translation, the physical page offset of a cache line is often different from the corresponding virtual page offset, because compressed physical cache lines are smaller than their corresponding virtual cache lines. In fact, the location of a compressed cache line in a physical page in main memory depends upon the sizes of the compressed cache lines that come before it in that same physical page. As a result, accessing a cache line within a compressed page in main memory {\em requires an additional layer of address computation to compute the location of the cache line in main memory} (which we will call the \emph{main memory address}). This additional {\em main memory address computation} not only adds complexity and cost to the system, but it can also increase the latency of accessing main memory (e.g., it requires up to 22 integer addition operations in one prior design for main memory compression~\cite{MMCompression}), which in turn can degrade system performance. 
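To make the cost of this address computation concrete, the following hypothetical C sketch (the names and page layout are ours) shows how locating a cache line in a conventionally compressed page requires summing the sizes of all preceding compressed lines. For contrast, it also shows the computation when every line in a page is compressed to the same size, the approach taken by the Linearly Compressed Pages framework introduced in the next subsection, in which case the offset reduces to a single multiplication (a shift when the compressed size is a power of two).

\begin{verbatim}
#include <stdint.h>
#include <stddef.h>

#define LINES_PER_PAGE 64   /* e.g., a 4KB page of 64-byte cache lines */

/* Conventional compressed page: each line may have a different compressed
 * size, so locating line 'idx' requires summing the sizes of all preceding
 * lines (a serial chain of additions). */
size_t offset_variable_size(const uint8_t size[LINES_PER_PAGE], int idx) {
    size_t offset = 0;
    for (int i = 0; i < idx; i++)
        offset += size[i];
    return offset;
}

/* Fixed-size-per-page layout: every line is compressed to the same target
 * size, so the offset is index * target_size, i.e., a shift when the target
 * size is a power of two. (Exception storage and metadata are omitted.) */
size_t offset_fixed_size(unsigned log2_target_size, int idx) {
    return (size_t)idx << log2_target_size;
}
\end{verbatim}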
} While simple solutions exist for these first two challenges (as we describe later in Section~\ref{lcp:sec:design}), prior attempts to mitigate the performance degradation of the third challenge are either costly or inefficient~\cite{MXT,MMCompression}. One approach (IBM MXT~\cite{MXT}) aims to reduce the number of main memory accesses, the cause of long-latency main memory address computation, by adding a large (32MB) uncompressed cache managed at the granularity at which blocks are compressed (1KB). If locality is present in the program, this approach can avoid the latency penalty of main memory address computations to access compressed data. Unfortunately, its benefit comes at a significant additional area and energy cost, and the approach is ineffective for accesses that miss in the large cache. A second approach~\cite{MMCompression} aims to hide the latency of main memory address computation by speculatively computing the main memory address of {\em every} last-level cache request in parallel with the cache access (i.e., before it is known whether or not the request needs to access main memory). While this approach can effectively reduce the performance impact of main memory address computation, it wastes a significant amount of energy (as we show in Section~\ref{sec:results-energy}) because many accesses to the last-level cache do not result in an access to main memory. \begin{comment} One approach (IBM MXT~\cite{MXT}) adds a large (32MB) uncompressed cache managed at the granularity at which blocks are compressed (1KB). While this large cache can avoid some requests that need to access main memory (if locality is present in the program) by reducing the number of requests that suffer the latency penalty of the main memory address computation (see Section~\ref{lcp:sec:background}), unfortunately this benefit comes at the expense of the significant additional area and power cost that are required for such a large cache. Another approach is to overlap a cache line's main memory address computation with the last-level cache access~\cite{MMCompression}. However, this latter approach wastes power (as we show in Section~\ref{sec:results-energy}) because not all accesses to the last-level cache result in an access to main memory. \end{comment} \subsection{Our Approach: Linearly Compressed Pages} We aim to build a main memory compression framework that neither incurs the latency penalty for memory accesses nor requires power-inefficient hardware. Our goals are: (i) having low complexity and low latency (especially when performing memory address computation for a cache line within a compressed page), (ii) being compatible with compression employed in on-chip caches (thereby minimizing the number of compressions/decompressions performed), and (iii) supporting compression algorithms with high compression ratios. To this end, we propose a new approach to compress pages, which we call \emph{Linearly Compressed Pages} (LCP). The key idea of LCP is to compress all of the cache lines within a given page to the same size. Doing so simplifies the computation of the physical address of the cache line, because the page offset is simply the product of the index of the cache line and the compressed cache line size (i.e., it can be calculated using a simple shift operation). Based on this idea, a target compressed cache line size is determined for each page. Cache lines that cannot be compressed to the target size for their page are called \emph{exceptions}.
All exceptions, along with the metadata required to locate them, are stored separately in the same compressed page. If a page requires more space in compressed form than in uncompressed form, then this page is not compressed. The page table indicates the form in which the page is stored. The LCP framework can be used with any compression algorithm. We adapt two previously proposed compression algorithms (Frequent Pattern Compression (FPC)~\cite{fpc} and Base-Delta-Immediate Compression (BDI)~\cite{bdi}) to fit the requirements of LCP, and show that the resulting designs can significantly improve effective main memory capacity on a wide variety of workloads. Note that, throughout this chapter, we assume that compressed cache lines are decompressed before being placed in the processor caches. LCP may be combined with compressed cache designs by storing compressed lines in the higher-level caches (as in~\cite{fpc,bdi}), but the techniques are largely orthogonal, and for clarity, we present an LCP design where only main memory is compressed.\footnote{We show the results from combining main memory and cache compression in our technical report~\cite{lcp-tech}.} An additional, potential benefit of compressing data in main memory, which has not been fully explored by prior work on main memory compression, is {\em memory bandwidth reduction}. When data are stored in compressed format in main memory, multiple consecutive compressed cache lines can be retrieved at the cost of accessing a single uncompressed cache line. Given the increasing demand on main memory bandwidth, such a mechanism can significantly reduce the memory bandwidth requirement of applications, especially those with high spatial locality. Prior works on bandwidth compression~\cite{LinkCompression,fvc-bus,GPUBandwidthCompression} assumed efficient variable-length off-chip data transfers that are hard to achieve with general-purpose DRAM (e.g., DDR3~\cite{micron-ddr3}). We propose a mechanism that enables the memory controller to retrieve multiple consecutive cache lines with a single access to DRAM, with negligible additional cost. Evaluations show that our mechanism provides significant bandwidth savings, leading to improved system performance. In summary, we make the following contributions: \begin{itemize} \item We propose a new main memory compression framework---{\em Linearly Compressed Pages} (LCP)---that solves the problem of efficiently computing the physical address of a compressed cache line in main memory with much lower cost and complexity than prior proposals. We also demonstrate that {\em any} compression algorithm can be adapted to fit the requirements of LCP. \item We evaluate our design with two state-of-the-art compression algorithms (FPC~\cite{fpc} and BDI~\cite{bdi}), and observe that it can significantly increase the effective main memory capacity (by 69\% on average). \item We evaluate the benefits of transferring compressed cache lines over the bus between DRAM and the memory controller and observe that it can considerably reduce memory bandwidth consumption (24\% on average), and improve overall performance by 6.1\%/13.9\%/10.7\% for single-/two-/four-core workloads, relative to a system without main memory compression. LCP also decreases the energy consumed by the main memory subsystem (9.5\% on average over the best prior mechanism). 
\end{itemize} \section{Summary} \label{lcp:sec:conclusion} Data compression is a promising technique to increase the effective main memory capacity without significantly increasing cost and power consumption. As we described in this chapter, the primary challenge in incorporating compression in main memory is to devise a mechanism that can efficiently compute the main memory address of a cache line without significantly adding complexity, cost, or latency. Prior approaches to addressing this challenge are either relatively costly or energy inefficient. We proposed a new main memory compression framework, called \emph{Linearly Compressed Pages} (LCP), to address this problem. The two key ideas of LCP are to use a fixed size for compressed cache lines within a page (which simplifies main memory address computation) and to enable a page to be compressed even if some cache lines within the page are incompressible (which enables high compression ratios). We showed that any compression algorithm can be adapted to fit the requirements of our LCP-based framework. We evaluated the LCP-based framework using two state-of-the-art compression algorithms (Frequent Pattern Compression and Base-Delta-Immediate Compression) and showed that it can significantly increase effective memory capacity (by 69\%) and reduce page fault rate (by 23\%). We showed that storing compressed data in main memory can also enable the memory controller to reduce memory bandwidth consumption (by 24\%), leading to significant performance and energy improvements on a wide variety of single-core and multi-core systems with different cache sizes. Based on our results, we conclude that the proposed LCP-based framework provides an effective approach for designing low-complexity and low-latency compressed main memory. \section*{Acknowledgments} Many thanks to Brian Hirano, Kayvon Fatahalian, David Hansquine and Karin Strauss for their feedback during various stages of this project. We thank the anonymous reviewers and our shepherd Andreas Moshovos for their feedback. We acknowledge members of the SAFARI and LBA groups for their feedback and for the stimulating research environment they provide. We acknowledge the support of AMD, IBM, Intel, Oracle, Samsung and Microsoft. This research was partially supported by NSF (CCF-0953246, CCF-1147397, CCF-1212962), Intel University Research Office Memory Hierarchy Program, Intel Science and Technology Center for Cloud Computing, Semiconductor Research Corporation and a Microsoft Research Fellowship. \section{Background on Main Memory Compression} \label{lcp:sec:background} Data compression is widely used in storage structures to increase the effective capacity and bandwidth without significantly increasing the system cost and power consumption. One primary downside of compression is that the compressed data must be decompressed before it can be used. Therefore, for latency-critical applications, using complex dictionary-based compression algorithms~\cite{lz} significantly degrades performance due to their high decompression latencies. Thus, prior work on compression of in-memory data has proposed simpler algorithms with low decompression latencies and reasonably high compression ratios, as discussed next. \subsection{Compressing In-Memory Data} Several studies~\cite{fvl,fpc-tr,bdi,fpc} have shown that in-memory data has exploitable patterns that allow for simpler compression techniques. 
Frequent value compression (FVC)~\cite{fvl} is based on the observation that an application's working set is often dominated by a small set of values. FVC exploits this observation by encoding such frequently-occurring 4-byte values with fewer bits. Frequent pattern compression (FPC)~\cite{fpc-tr} shows that a majority of words (4-byte elements) in memory fall under a few frequently occurring patterns. FPC compresses individual words within a cache line by encoding the frequently occurring patterns with fewer bits. Base-Delta-Immediate (BDI) compression~\cite{bdi} observes that, in many cases, words co-located in memory have small differences in their values. BDI compression encodes a cache line as a base-value and an array of differences that represent the deviation either from the base-value or from zero (for small values) for each word. These three low-latency compression algorithms have been proposed for on-chip caches, but can be adapted for use as part of the main memory compression framework proposed in this chapter. \subsection{Challenges in Memory Compression} \label{sec:background-challenges} LCP leverages the fixed-size memory pages of modern systems for the basic units of compression. However, three challenges arise from the fact that different pages (and cache lines within a page) compress to different sizes depending on data compressibility. \textbf{Challenge 1: Main Memory Page Mapping.} Irregular page sizes in main memory complicate the memory management module of the operating system for two reasons (as shown in Figure~\ref{fig:challenge2}). First, the operating system needs to allow mappings between the fixed-size virtual pages presented to software and the variable-size physical pages stored in main memory. Second, the operating system must implement mechanisms to efficiently handle fragmentation in main memory. \begin{figure}[h!] \centering \includegraphics[width=0.8\textwidth]{chap5/lcp/figures/challenge2.pdf} \caption{Main Memory Page Mapping Challenge} \label{fig:challenge2} \end{figure} \textbf{Challenge 2: Physical Address Tag Computation.} On-chip caches (including L1 caches) typically employ tags derived from the physical address of the cache line to avoid aliasing, and in such systems, every cache access requires the physical address of the corresponding cache line to be computed. Hence, because the main memory addresses of the compressed cache lines differ from the nominal physical addresses of those lines, care must be taken that the computation of cache line tag does not lengthen the critical path of latency-critical L1 cache accesses. \textbf{Challenge 3: Cache Line Address Computation.} When main memory is compressed, different cache lines within a page can be compressed to different sizes. The main memory address of a cache line is therefore dependent on the sizes of the compressed cache lines that come before it in the page. As a result, the processor (or the memory controller) must explicitly compute the location of a cache line within a compressed main memory page before accessing it (Figure~\ref{fig:challenge1}), e.g., as in~\cite{MMCompression}. This computation not only increases complexity, but can also lengthen the critical path of accessing the cache line from both the main memory and the physically addressed cache. 
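To make Challenge 3 concrete, the following C sketch (our own illustration, not taken from any specific prior design) shows the address arithmetic a memory controller would need when every cache line in a page may compress to a different size: locating line $i$ requires summing the sizes of all compressed lines that precede it, a serial chain of up to 63 dependent additions for a 64-line page.

\begin{verbatim}
#include <stdint.h>
#include <stdio.h>

#define LINES_PER_PAGE 64   /* assumed: 4KB page with 64B cache lines */

/* Byte offset of cache line `index' within a page whose cache lines are
 * compressed to arbitrary, per-line sizes: the controller must add up
 * the sizes of all preceding compressed lines before it can issue the
 * access.                                                              */
uint32_t variable_size_offset(const uint8_t size[LINES_PER_PAGE],
                              uint32_t index)
{
    uint32_t offset = 0;
    for (uint32_t i = 0; i < index; i++)
        offset += size[i];          /* serially dependent additions     */
    return offset;
}

int main(void)
{
    uint8_t size[LINES_PER_PAGE];
    for (uint32_t i = 0; i < LINES_PER_PAGE; i++)
        size[i] = (i % 2) ? 24 : 40;    /* made-up compressed sizes     */
    printf("offset of line 63: %u bytes\n",
           variable_size_offset(size, 63));
    return 0;
}
\end{verbatim}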
Note that systems that do \emph{not} employ main memory compression do not suffer from this problem because the offset of a cache line within the physical page is the \emph{same} as the offset of the cache line within the corresponding virtual page. \begin{figure}[h!] \centering \includegraphics[width=0.8\textwidth]{chap5/lcp/figures/challenge1.pdf} \caption{Cache Line Address Computation Challenge} \label{fig:challenge1} \end{figure} As will be seen shortly, while prior research efforts have considered subsets of these challenges, this work is the first design that provides a holistic solution to all three challenges, particularly Challenge 3, with low latency and low (hardware and software) complexity. \subsection{Prior Work on Memory Compression} \label{sec:background-prior-work} Of the many prior works on using compression for main memory (e.g.,~\cite{MXT,MMCompression,vm-compression,kaplan-thesis,adaptive-compressed-caching,the-compression-cache,GPUBandwidthCompression}), two in particular are the most closely related to the design proposed in this chapter, because both of them are mostly hardware designs. We describe these two designs along with their shortcomings. Tremaine {\em et al.}~\cite{pinnacle} proposed a memory controller design, Pinnacle, based on IBM's Memory Extension Technology (MXT)~\cite{MXT} that employed Lempel-Ziv compression~\cite{lz} to manage main memory. To address the three challenges described above, Pinnacle employs two techniques. First, Pinnacle internally uses a 32MB last level cache managed at a 1KB granularity, same as the granularity at which blocks are compressed. This cache reduces the number of accesses to main memory by exploiting locality in access patterns, thereby reducing the performance degradation due to the address computation (Challenge 3). However, there are several drawbacks to this technique: (i) such a large cache adds significant area and energy costs to the memory controller, (ii) the approach requires the main memory address computation logic to be present and used when an access misses in the 32MB cache, and (iii) if caching is not effective (e.g., due to lack of locality or larger-than-cache working set sizes), this approach cannot reduce the performance degradation due to main memory address computation. Second, to avoid complex changes to the operating system and on-chip cache-tagging logic, Pinnacle introduces a \emph{real} address space between the virtual and physical address spaces. The real address space is uncompressed and is twice the size of the actual available physical memory. The operating system maps virtual pages to same-size pages in the real address space, which addresses Challenge 1. On-chip caches are tagged using the real address (instead of the physical address, which is dependent on compressibility), which effectively solves Challenge 2. On a miss in the 32MB cache, Pinnacle maps the corresponding real address to the physical address of the compressed block in main memory, using a memory-resident mapping-table managed by the memory controller. Following this, Pinnacle retrieves the compressed block from main memory, performs decompression and sends the data back to the processor. Clearly, the additional access to the memory-resident mapping table on every cache miss significantly increases the main memory access latency. In addition to this, Pinnacle's decompression latency, which is on the critical path of a memory access, is 64 processor cycles. 
Ekman and Stenstr\"{o}m~\cite{MMCompression} proposed a main memory compression design to address the drawbacks of MXT. In their design, the operating system maps the uncompressed virtual address space directly to a compressed physical address space. To compress pages, they use a variant of the Frequent Pattern Compression technique~\cite{fpc,fpc-tr}, which has a much smaller decompression latency (5 cycles) than the Lempel-Ziv compression in Pinnacle (64 cycles). To avoid the long latency of a cache line's main memory address computation (Challenge 3), their design overlaps this computation with the last-level (L2) cache access. For this purpose, their design extends the page table entries to store the compressed sizes of all the lines within the page. This information is loaded into a hardware structure called the \emph{Block Size Table} (BST). On an L1 cache miss, the BST is accessed in parallel with the L2 cache to compute the exact main memory address of the corresponding cache line. While the proposed mechanism reduces the latency penalty of accessing compressed blocks by overlapping main memory address computation with L2 cache access, the main memory address computation is performed on {\em every} L2 cache access (as opposed to only on L2 cache misses in LCP). This leads to significant wasted work and additional power consumption. Even though BST has the same number of entries as the translation lookaside buffer (TLB), its size is at least twice that of the TLB~\cite{MMCompression}. This adds to the complexity and power consumption of the system significantly. To address Challenge 1, the operating system uses multiple pools of fixed-size physical pages. This reduces the complexity of managing physical pages at a fine granularity. Ekman and Stenstrom~\cite{MMCompression} do not address Challenge 2. In summary, prior work on hardware-based main memory compression mitigate the performance degradation due to the main memory address computation problem (Challenge 3) by either adding large hardware structures that consume significant area and power~\cite{MXT} or by using techniques that require energy-inefficient hardware and lead to wasted energy~\cite{MMCompression}. \section{Linearly Compressed Pages: Our Approach} \section{Linearly Compressed Pages} \label{sec:basic} In this section, we provide the basic idea and a brief overview of our proposal, Linearly Compressed Pages (LCP), which overcomes the aforementioned shortcomings of prior proposals. Further details will follow in Section~\ref{lcp:sec:design}. \vspace{-0.1cm} \subsection{LCP: Basic Idea} \label{sec:basic-lcp} The main shortcoming of prior approaches to main memory compression is that different cache lines within a physical page can be compressed to different sizes based on the compression scheme. As a result, the location of a compressed cache line within a physical page depends on the sizes of all the compressed cache lines before it in the same page. This requires the memory controller to explicitly perform this complex calculation (or cache the mapping in a large, energy-inefficient structure) in order to access the line. To address this shortcoming, we propose a new approach to compressing pages, called the \emph{Linearly Compressed Page} (LCP). 
The key idea of LCP is to \emph{use a fixed size for compressed cache lines within a given page} (alleviating the complex and long-latency main memory address calculation problem that arises due to variable-size cache lines), and yet still enable a page to be compressed even if not all cache lines within the page can be compressed to that fixed size (enabling high compression ratios). Because all the cache lines within a given page are compressed to the same size, the location of a compressed cache line within the page is simply the product of the index of the cache line within the page and the size of the compressed cache line---essentially a linear scaling using the index of the cache line (hence the name \emph{Linearly Compressed Page}). LCP greatly simplifies the task of computing a cache line's main memory address. For example, if all cache lines within a page are compressed to $16$ bytes, the byte offset of the third cache line (index within the page is 2) from the start of the physical page is $16 \times 2 = 32$, if the line is compressed. This computation can be implemented as a simple shift operation. \begin{comment} There are two key design choices made in LCP to improve compression ratio in the presence of fixed-size compressed cache lines. First, the target size for the compressed cache lines can be different for different pages, depending on the algorithm used for compression and the data stored in the pages. Our LCP-based framework identifies this target size for a page when the page is compressed for the first time (or recompressed), as we will describe in Section~\ref{sec:design-algos}. Second, not all cache lines within a page can be compressed to a specific fixed size. Also, a cache line which is originally compressed to the target size may later become incompressible due to a write. One approach to handle such cases is to store the entire page in uncompressed format even if a single line cannot be compressed into the fixed size. However, this inflexible approach can lead to significant reduction in the benefits from compression and may also lead to frequent compression/decompression of entire pages. To avoid these problems, LCP stores such incompressible cache lines of a page separately from the compressed cache lines (but still within the page), along with the metadata required to locate them. \end{comment} Figure~\ref{fig:page-organization} shows the organization of an example Linearly Compressed Page, based on the ideas described above. In this example, we assume that a virtual page is 4KB, an uncompressed cache line is 64B, and the target compressed cache line size is 16B. \begin{figure}[tb] \centering \includegraphics[width=0.8\textwidth]{chap5/lcp/figures/LCP.pdf} \caption{Organization of a Linearly Compressed Page} \label{fig:page-organization} \end{figure} As shown in the figure, the LCP contains three distinct regions. The first region, \emph{the compressed data region}, contains a 16-byte slot for each cache line in the virtual page. If a cache line is compressible, the corresponding slot stores the compressed version of the cache line. However, if the cache line is not compressible, the corresponding slot is assumed to contain invalid data. In our design, we refer to such an incompressible cache line as an ``exception''. The second region, \emph{metadata}, contains all the necessary information to identify and locate the exceptions of a page. We provide more details on what exactly is stored in the metadata region in Section~\ref{sec:design-lcp-organization}. 
The third region, \emph{the exception storage}, is the place where all the exceptions of the LCP are stored in their uncompressed form. Our LCP design allows the exception storage to contain unused space. In other words, not all entries in the exception storage may store valid exceptions. As we will describe in Section~\ref{lcp:sec:design}, this enables the memory controller to use the unused space for storing future exceptions, and also simplifies the operating system page management mechanism. Next, we will provide a brief overview of the main memory compression framework we build using LCP. \subsection{LCP Operation} \label{sec:basic-mcf-overview} Our LCP-based main memory compression framework consists of components that handle three key issues: (i) page compression, (ii) cache line reads from main memory, and (iii) cache line writebacks into main memory. Figure~\ref{fig:request-flow} shows the high-level design and operation. \begin{comment} Our LCP-based main memory compression framework consists of components that handle three key issues: 1) page compression/recompression, 2) handling a cache line read from main memory, and 3) handling a cache line writeback into main memory. We briefly provide an overview of each of these components below. Section~\ref{lcp:sec:design} presents a detailed description of our design. \end{comment} \begin{figure}[h!] \centering \includegraphics[width=0.9\linewidth]{chap5/lcp/figures/flow.pdf} \caption{Memory request flow} \label{fig:request-flow} \end{figure} \textbf{Page Compression.} When a page is accessed for the first time from disk, the operating system (with the help of the memory controller) first determines whether the page is compressible using the compression algorithm employed by the framework (described in Section~\ref{sec:design-algos}). If the page is compressible, the OS allocates a physical page of appropriate size and stores the compressed page (LCP) in the corresponding location. It also updates the relevant portions of the corresponding page table mapping to indicate (i) whether the page is compressed, and if so, (ii) the compression scheme used to compress the page (details in Section~\ref{sec:design-page-table-extension}). \begin{comment} The first component of our framework is concerned with compression of an uncompressed page and recompression of an LCP. The former happens when a page is accessed for the first time from disk and the latter happens when the size of an LCP increases beyond the original uncompressed size of a page (due to increase in the number of exceptions). In both these cases, the operating system (with the help of the memory controller) first determines if the page is compressible using the compression algorithm employed by the framework (described in Section~\ref{sec:design-algos}). If the page is compressible, the OS allocates a physical page of appropriate size and stores the compressed page (LCP) in the corresponding location. It also updates the relevant portions of the corresponding page table mapping to indicate that the page is compressed along with the compression scheme (details in Section~\ref{sec:design-page-table-extension}). \end{comment} \label{sec:basic-controller-read-operation} \textbf{Cache Line Read.} When the memory controller receives a \emph{read request} for a cache line within an LCP, it must find and decompress the data. Multiple design solutions are possible to perform this task efficiently. 
A na\"{i}ve way of reading a cache line from an LCP would require at least two accesses to the corresponding page in main memory. First, the memory controller accesses the \emph{metadata} in the LCP to determine whether the cache line is stored in the compressed format. Second, based on the result, the controller either (i) accesses the cache line from the \emph{compressed data region} and decompresses it, or (ii) accesses it uncompressed from the \emph{exception storage}. To avoid two accesses to main memory, we propose two optimizations that enable the controller to retrieve the cache line with the latency of just \emph{one} main memory access in the common case. First, we add a small \emph{metadata (MD) cache} to the memory controller that caches the metadata of the recently accessed LCPs---the controller avoids the first main memory access to the metadata in cases when the metadata is present in the MD cache. Second, in cases when the metadata is not present in the metadata cache, the controller speculatively assumes that the cache line is stored in the compressed format and \emph{first} accesses the data corresponding to the cache line from the compressed data region. The controller then \emph{overlaps} the latency of the cache line decompression with the access to the metadata of the LCP. In the common case, when the speculation is correct (i.e., the cache line is actually stored in the compressed format), this approach significantly reduces the latency of serving the read request. In the case of a misspeculation (uncommon case), the memory controller issues another request to retrieve the cache line from the exception storage. \label{sec:basic-controller-writeback-operation} \textbf{Cache Line Writeback.} If the memory controller receives a request for a cache line \emph{writeback}, it then attempts to compress the cache line using the compression scheme associated with the corresponding LCP. Depending on the original state of the cache line (compressible or incompressible), there are four different possibilities: the cache line (1) was compressed and stays compressed, (2) was uncompressed and stays uncompressed, (3) was uncompressed but becomes compressed, and (4) was compressed but becomes uncompressed. In the first two cases, the memory controller simply overwrites the old data with the new data at the same location associated with the cache line. In case 3, the memory controller frees the exception storage slot for the cache line and writes the compressible data in the compressed data region of the LCP. (Section~\ref{sec:design-lcp-organization} provides more details on how the exception storage is managed.) In case 4, the memory controller checks whether there is enough space in the exception storage region to store the uncompressed cache line. If so, it stores the cache line in an available slot in the region. If there are no free exception storage slots in the exception storage region of the page, the memory controller traps to the operating system, which migrates the page to a new location (which can also involve page recompression). In both cases 3 and 4, the memory controller appropriately modifies the LCP metadata associated with the cache line's page. Note that in the case of an LLC writeback to main memory (and assuming that TLB information is not available at the LLC), the cache tag entry is augmented with the same bits that are used to augment page table entries. 
Cache compression mechanisms, e.g., FPC~\cite{fpc} and BDI~\cite{bdi}, already have the corresponding bits for encoding, so that the tag size overhead is minimal when main memory compression is used together with cache compression. \section{Detailed Design} \label{lcp:sec:design} In this section, we provide a detailed description of LCP, along with the changes to the memory controller, operating system and on-chip cache tagging logic. In the process, we explain how our proposed design addresses each of the three challenges (Section~\ref{sec:background-challenges}). \newcommand{\mathcal{V}}{\mathcal{V}} \newcommand{\mathcal{P}}{\mathcal{P}} \newcommand{\mathcal{C}}{\mathcal{C}} \newcommand{\mathcal{C}^*}{\mathcal{C}^*} \newcommand{n}{n} \newcommand{n_{ex}}{n_{ex}} \newcommand{n_{avail}}{n_{avail}} \newcommand{\mathcal{M}}{\mathcal{M}} \newcommand{{\small{\texttt{c-bit}}}\xspace}{{\small{\texttt{c-bit}}}\xspace} \newcommand{{\small{\texttt{c-type}}}\xspace}{{\small{\texttt{c-type}}}\xspace} \newcommand{{\small{\texttt{c-size}}}\xspace}{{\small{\texttt{c-size}}}\xspace} \newcommand{{\small{\texttt{c-base}}}\xspace}{{\small{\texttt{c-base}}}\xspace} \newcommand{{\small{\texttt{p-base}}}\xspace}{{\small{\texttt{p-base}}}\xspace} \newcommand{{\small{\texttt{m-size}}}\xspace}{{\small{\texttt{m-size}}}\xspace} \newcommand{{\small{\texttt{e-bit}}}\xspace}{{\small{\texttt{e-bit}}}\xspace} \newcommand{{\small{\texttt{z-bit}}}\xspace}{{\small{\texttt{z-bit}}}\xspace} \newcommand{{\small{\texttt{v-bit}}}\xspace}{{\small{\texttt{v-bit}}}\xspace} \newcommand{{\small{\texttt{e-index}}}\xspace}{{\small{\texttt{e-index}}}\xspace} \vspace{0.2cm} \subsection{Page Table Entry Extension} \label{sec:design-page-table-extension} \sloppypar To keep track of virtual pages that are stored in compressed format in main memory, the page table entries need to be extended to store information related to compression (Figure~\ref{fig:pte-extension}). In addition to the information already maintained in the page table entries (such as the base address for a corresponding physical page, {\small{\texttt{p-base}}}\xspace), each virtual page in the system is associated with the following pieces of metadata: (i) {\small{\texttt{c-bit}}}\xspace, a bit that indicates if the page is mapped to a compressed physical page (LCP), (ii) {\small{\texttt{c-type}}}\xspace, a field that indicates the compression scheme used to compress the page, (iii) {\small{\texttt{c-size}}}\xspace, a field that indicates the size of the LCP, and (iv) {\small{\texttt{c-base}}}\xspace, a {\small{\texttt{p-base}}}\xspace extension that enables LCPs to start at an address not aligned with the virtual page size. The number of bits required to store {\small{\texttt{c-type}}}\xspace, {\small{\texttt{c-size}}}\xspace and {\small{\texttt{c-base}}}\xspace depends on the exact implementation of the framework. In the implementation we evaluate, we assume 3 bits for {\small{\texttt{c-type}}}\xspace (allowing 8 possible different compression encodings), 2 bits for {\small{\texttt{c-size}}}\xspace (4 possible page sizes: 512B, 1KB, 2KB, 4KB), and 3 bits for {\small{\texttt{c-base}}}\xspace (at most eight 512B compressed pages can fit into a 4KB uncompressed slot). Note that existing systems usually have enough unused bits (up to 15 bits in Intel x86-64 systems~\cite{ia64}) in their PTE entries that can be used by LCP without increasing the PTE size. \begin{figure}[h!] 
\vspace{-0.0cm} \centering \includegraphics[width=0.9\linewidth]{chap5/lcp/figures/pte.pdf} \caption{Page table entry extension.} \label{fig:pte-extension} \vspace{-0.0cm} \end{figure} When a virtual page is compressed (the {\small{\texttt{c-bit}}}\xspace is set), all the compressible cache lines within the page are compressed to the same size, say $\mathcal{C}^*$. The value of $\mathcal{C}^*$ is uniquely determined by the compression scheme used to compress the page, i.e., the {\small{\texttt{c-type}}}\xspace (Section~\ref{sec:design-algos} discusses determining the {\small{\texttt{c-type}}}\xspace for a page). We next describe the LCP organization in more detail. \subsection{LCP Organization} \label{sec:design-lcp-organization} We will discuss each of an LCP's three regions in turn. We begin by defining the following symbols: $\mathcal{V}$ is the virtual page size of the system (e.g., 4KB); $\mathcal{C}$ is the uncompressed cache line size (e.g., 64B);\footnote{ Large pages (e.g., 4MB or 1GB) can be supported with LCP through minor modifications that include scaling the corresponding sizes of the metadata and compressed data regions. The exception area metadata keeps the exception index for every cache line on a compressed page. This metadata can be partitioned into multiple 64-byte cache lines that can be handled similarly to 4KB pages. The exact ``metadata partition'' can be easily identified based on the cache line index within a page. } $n = \frac{\mathcal{V}}{\mathcal{C}}$ is the number of cache lines per virtual page (e.g., 64); and $\mathcal{M}$ is the size of LCP's metadata region. In addition, on a per-page basis, we define $\mathcal{P}$ to be the compressed physical page size; $\mathcal{C}^*$ to be the compressed cache line size; and $n_{avail}$ to be the number of slots available for exceptions. \vspace{-0.05cm} \subsubsection{Compressed Data Region} The compressed data region is a contiguous array of $n$ slots each of size $\mathcal{C}^*$. Each one of the $n$ cache lines in the virtual page is mapped to one of the slots, irrespective of whether the cache line is compressible or not. Therefore, the size of the compressed data region is $n\mathcal{C}^*$. This organization simplifies the computation required to determine the main memory address for the compressed slot corresponding to a cache line. More specifically, the address of the compressed slot for the $i^{th}$ cache line can be computed as ${\small{\texttt{p-base}}}\xspace + {\small{\texttt{m-size}}}\xspace*{\small{\texttt{c-base}}}\xspace + (i-1)\mathcal{C}^*$, where the first two terms correspond to the start of the LCP (${\small{\texttt{m-size}}}\xspace$ equals the minimum page size, 512B in our implementation) and the third indicates the offset within the LCP of the $i^{th}$ compressed slot (see Figure~\ref{fig:macroview}). Thus, computing the main memory address of a compressed cache line requires one multiplication (which can be implemented as a shift) and two additions, independent of $i$ (fixed latency). This computation requires a lower latency and simpler hardware than prior approaches (e.g., up to 22 additions in the design proposed in \cite{MMCompression}), thereby efficiently addressing Challenge 3 (cache line address computation).
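As a concrete illustration of this computation, the following C sketch (our own; the parameter values are hypothetical) computes the main memory address of a compressed slot. The cache line index is 0-based in the code, so the $(i-1)\mathcal{C}^*$ term above becomes a plain product, which reduces to a shift whenever $\mathcal{C}^*$ is a power of two.

\begin{verbatim}
#include <stdint.h>
#include <stdio.h>

#define MIN_PAGE_SIZE 512u  /* m-size: minimum physical page size (512B) */

/* Main memory address of the compressed slot of cache line `line'
 * (0-based) within an LCP:  p-base + m-size * c-base + line * C*       */
uint64_t lcp_slot_address(uint64_t p_base, uint32_t c_base,
                          uint32_t line, uint32_t c_star)
{
    uint64_t lcp_start = p_base + (uint64_t)MIN_PAGE_SIZE * c_base;
    /* One multiplication (a shift when C* is a power of two) and two
     * additions, independent of `line'.                                */
    return lcp_start + (uint64_t)line * c_star;
}

int main(void)
{
    /* Example: LCP starting at p-base = 0x10000 with c-base = 2 and a
     * target compressed size C* = 16B; the third cache line (index 2)
     * sits 32 bytes into the compressed data region.                   */
    printf("0x%llx\n",
           (unsigned long long)lcp_slot_address(0x10000, 2, 2, 16));
    return 0;
}
\end{verbatim}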
\begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{chap5/lcp/figures/macroview.pdf} \caption{Physical memory layout with the LCP framework.} \label{fig:macroview} \end{figure} \subsubsection{Metadata Region} The metadata region of an LCP contains two parts (Figure~\ref{fig:md-region}). The first part stores two pieces of information for each cache line in the virtual page: (i) a bit indicating whether the cache line is incompressible, i.e., whether the cache line is an \emph{exception}, {\small{\texttt{e-bit}}}\xspace, and (ii) the index of the cache line in the exception storage, {\small{\texttt{e-index}}}\xspace. If the {\small{\texttt{e-bit}}}\xspace is set for a cache line, then the corresponding cache line is stored uncompressed in location {\small{\texttt{e-index}}}\xspace in the exception storage. The second part of the metadata region is a \emph{valid} bit ({\small{\texttt{v-bit}}}\xspace) vector to track the state of the slots in the exception storage. If a {\small{\texttt{v-bit}}}\xspace is set, it indicates that the corresponding slot in the exception storage is used by some uncompressed cache line within the page. \begin{figure}[h] \centering \includegraphics[width=0.8\linewidth]{chap5/lcp/figures/metaregion.pdf} \caption{Metadata region, when $n=64$.} \label{fig:md-region} \end{figure} The size of the first part depends on the size of {\small{\texttt{e-index}}}\xspace, which in turn depends on the number of exceptions allowed per page. Because the number of exceptions cannot exceed the number of cache lines in the page ($n$), we will need at most $1 + \lceil \log_2 n \rceil$ bits for each cache line in the first part of the metadata. For the same reason, we will need at most $n$ bits in the bit vector in the second part of the metadata. Therefore, the size of the metadata region is given by $ \mathcal{M} = n(1 + \lceil \log_2 n \rceil) + n ~\textrm{bits}$. Since $n$ is fixed for the entire system, the size of the metadata region ($\mathcal{M}$) is the same for all compressed pages (64B in our implementation). \subsubsection{Exception Storage Region} The third region, the exception storage, is the place where all incompressible cache lines of the page are stored. If a cache line is present in the location {\small{\texttt{e-index}}}\xspace in the exception storage, its main memory address can be computed as: ${\small{\texttt{p-base}}}\xspace + {\small{\texttt{m-size}}}\xspace*{\small{\texttt{c-base}}}\xspace + n\mathcal{C}^* + \mathcal{M} + {\small{\texttt{e-index}}}\xspace \mathcal{C} $. The number of slots available in the exception storage ($n_{avail}$) is dictated by the size of the compressed physical page allocated by the operating system for the corresponding LCP. The following equation expresses the relation between the physical page size ($\mathcal{P}$), the compressed cache line size ($\mathcal{C}^*$) that is determined by {\small{\texttt{c-type}}}\xspace, and the number of available slots in the exception storage ($n_{avail}$): \vspace{-0.4cm}\\ \begin{equation} n_{avail} = \lfloor(\mathcal{P} - (n\mathcal{C}^* + \mathcal{M}))/\mathcal{C}\rfloor \label{eqn:avail-exceptions} \vspace{-0.05cm} \end{equation} \noindent As mentioned before, the metadata region contains a bit vector that is used to manage the exception storage. When the memory controller assigns an exception slot to an incompressible cache line, it sets the corresponding bit in the bit vector to indicate that the slot is no longer free. 
If the cache line later becomes compressible and no longer requires the exception slot, the memory controller resets the corresponding bit in the bit vector. In the next section, we describe the operating system memory management policy that determines the physical page size ($\mathcal{P}$) allocated for an LCP, and hence, the number of available exception slots ($n_{avail}$). \subsection{Operating System Memory Management} The first challenge related to main memory compression is to provide operating system support for managing variable-size compressed physical pages -- i.e., LCPs. Depending on the compression scheme employed by the framework, different LCPs may be of different sizes. Allowing LCPs of arbitrary sizes would require the OS to keep track of main memory at a very fine granularity. It could also lead to fragmentation across the entire main memory at a fine granularity. As a result, the OS would need to maintain large amounts of metadata to maintain the locations of individual pages and the free space, which would also lead to increased complexity. To avoid this problem, our mechanism allows the OS to manage main memory using a fixed number of pre-determined physical page sizes -- e.g., 512B, 1KB, 2KB, 4KB (a similar approach was proposed in~\cite{berger-thesis} to address the memory allocation problem). For each one of the chosen sizes, the OS maintains a pool of allocated pages and a pool of free pages. When a page is compressed for the first time or recompressed due to overflow (described in Section~\ref{sec:design-handling-overflows}), the OS chooses the smallest available physical page size that fits the compressed page. For example, if a page is compressed to 768B, then the OS allocates a physical page of size 1KB. For a page with a given size, the available number of exceptions for the page, $n_{avail}$, can be determined using Equation~\ref{eqn:avail-exceptions}. \subsection{Changes to the Cache Tagging Logic} As mentioned in Section~\ref{sec:background-challenges}, modern systems employ physically-tagged caches to avoid aliasing problems. However, in a system that employs main memory compression, using the physical (main memory) address to tag cache lines puts the main memory address computation on the critical path of L1 cache access (Challenge 2). To address this challenge, we modify the cache tagging logic to use the tuple $<$physical page base address, cache line index within the page$>$ for tagging cache lines. This tuple maps to a unique cache line in the system, and hence avoids aliasing problems without requiring the exact main memory address to be computed. The additional index bits are stored within the cache line tag. \subsection{Changes to the Memory Controller} In addition to the changes to the memory controller operation described in Section~\ref{sec:basic-mcf-overview}, our LCP-based framework requires two hardware structures to be added to the memory controller: (i) a small metadata cache to accelerate main memory lookups in LCP, and (ii) compression/decompression hardware to perform the compression and decompression of cache lines. \subsubsection{Metadata Cache} \label{sec:design-metadata-cache} As described in Section~\ref{sec:basic-controller-read-operation}, a small metadata cache in the memory controller enables our approach, in the common case, to retrieve a compressed cache block in a single main memory access. 
This cache stores the metadata region of recently accessed LCPs so that the metadata for subsequent accesses to such recently-accessed LCPs can be retrieved directly from the cache. In our study, we find that a small 512-entry metadata cache (32KB\footnote{We evaluated the sensitivity of performance to MD cache size and found that 32KB is the smallest size that enables our design to avoid most of the performance loss due to additional metadata accesses.}) can service 88\% of the metadata accesses on average across all our workloads. Some applications have a lower hit rate, especially \emph{sjeng} and \emph{astar}~\cite{SPEC}. An analysis of these applications reveals that their memory accesses exhibit very low locality. As a result, we also observed a low TLB hit rate for these applications. Because TLB misses are costlier than MD cache misses (the former requires multiple memory accesses), the low MD cache hit rate does not lead to significant performance degradation for these applications. We expect the MD cache power to be much lower than the power consumed by other on-chip structures (e.g., L1 caches), because the MD cache is accessed much less frequently (hits in any on-chip cache do not lead to an access to the MD cache). \subsubsection{Compression/Decompression Hardware} Depending on the compression scheme employed with our LCP-based framework, the memory controller should be equipped with the hardware necessary to compress and decompress cache lines using the corresponding scheme. Although our framework does not impose any restrictions on the nature of the compression algorithm, it is desirable to have compression schemes that have low complexity and low decompression latency -- e.g., Frequent Pattern Compression (FPC)~\cite{fpc} and Base-Delta-Immediate Compression (BDI)~\cite{bdi}. In Section~\ref{sec:design-algos}, we provide more details on how to adapt any compression algorithm to fit the requirements of LCP and also the specific changes we made to FPC and BDI as case studies of compression algorithms that we adapted to the LCP framework. \subsection{Handling Page Overflows} \label{sec:design-handling-overflows} As described in Section~\ref{sec:basic-controller-writeback-operation}, when a cache line is written back to main memory, the cache line may switch from being compressible to being incompressible. When this happens, the memory controller should explicitly find a slot in the exception storage for the uncompressed cache line. However, it is possible that all the slots in the exception storage are already used by other exceptions in the LCP. We call this scenario a \emph{page overflow}. A page overflow increases the size of the LCP and leads to one of two scenarios: (i)~the LCP still requires a physical page size that is smaller than the uncompressed virtual page size (type-1 page overflow), and (ii)~the LCP now requires a physical page size that is larger than the uncompressed virtual page size (type-2 page overflow). Type-1 page overflow simply requires the operating system to migrate the LCP to a physical page of larger size (without recompression). The OS first allocates a new page and copies the data from the old location to the new location. It then modifies the mapping for the virtual page to point to the new location. While in transition, the page is locked, so any memory request to this page is delayed. In our evaluations, we stall the application for 20,000 cycles\footnote{ To fetch a 4KB page, we need to access 64 cache lines (64 bytes each).
In the worst case, this will lead to 64 accesses to main memory, most of which are likely to be DRAM row-buffer hits. Since a row-buffer hit takes 7.5ns, the total time to fetch the page is 495ns. On the other hand, the latency penalty of two context-switches (into the OS and out of the OS) is around 4us~\cite{ContextSwitch}. Overall, a type-1 overflow takes around 4.5us. For a 4.4GHz or slower processor, this is less than 20,000 cycles. } when a type-1 overflow occurs; we also find that (on average) type-1 overflows happen less than once per two million instructions. We vary this latency between 10,000--100,000 cycles and observe that the benefits of our framework (e.g., bandwidth compression) far outweigh the overhead due to type-1 overflows. In a type-2 page overflow, the size of the LCP exceeds the uncompressed virtual page size. Therefore, the OS attempts to recompress the page, possibly using a different encoding ({\small{\texttt{c-type}}}\xspace). Depending on whether the page is compressible or not, the OS allocates a new physical page to fit the LCP or the uncompressed page, and migrates the data to the new location. The OS also appropriately modifies the {\small{\texttt{c-bit}}}\xspace, {\small{\texttt{c-type}}}\xspace and the {\small{\texttt{c-base}}}\xspace in the corresponding page table entry. Clearly, a type-2 overflow requires more work from the OS than a type-1 overflow. However, we expect page overflows of type-2 to occur rarely. In fact, we never observed a type-2 overflow in our evaluations. \subsubsection{Avoiding Recursive Page Faults} There are two types of pages that require special consideration: (i) pages that keep internal OS data structures, e.g., pages containing information required to handle page faults, and (ii) shared data pages that have more than one page table entry (PTE) mapping to the same physical page. Compressing pages of the first type can potentially lead to recursive page fault handling. The problem can be avoided if the OS sets a special \emph{do not compress} bit, e.g., as a part of the page compression encoding, so that the memory controller does not compress these pages. The second type of pages (shared pages) requires consistency across multiple page table entries, such that when one PTE's compression information changes, the second entry is updated as well. There are two possible solutions to this problem. First, as with the first type of pages, these pages can be marked as \emph{do not compress}. Second, the OS could maintain consistency of the shared PTEs by performing multiple synchronous PTE updates (with accompanying TLB shootdowns). While the second solution can potentially lead to better average compressibility, the first solution (used in our implementation) is simpler and requires minimal changes inside the OS. Another situation that can potentially lead to a recursive fault is the eviction of dirty cache lines from the LLC to DRAM due to some page overflow handling that leads to another overflow. In order to solve this problem, we assume that the memory controller has a small dedicated portion of the main memory that is used as a scratchpad to store cache lines needed to perform page overflow handling. Dirty cache lines that are evicted from LLC to DRAM due to OS overflow handling are stored in this buffer space. The OS is responsible for minimizing the memory footprint of the overflow handler. Note that this situation is expected to be very rare in practice, because even a single overflow is infrequent.
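Putting together the writeback cases from Section~\ref{sec:basic-controller-writeback-operation} and the overflow handling described above, the following simplified C sketch (our own illustration; the structure layout and names are hypothetical, and data movement, metadata writes, and the OS interaction are omitted) shows how a memory controller might classify a writeback and detect a page overflow. The number of exception slots follows Equation~\ref{eqn:avail-exceptions}.

\begin{verbatim}
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-page state derived from the PTE and LCP metadata.   */
struct lcp_page {
    uint32_t c_star;      /* target compressed cache line size C* (bytes) */
    uint32_t line_size;   /* uncompressed cache line size C (bytes)       */
    uint32_t n_lines;     /* cache lines per page, n                      */
    uint32_t md_size;     /* metadata region size M (bytes)               */
    uint32_t phys_size;   /* compressed physical page size P (bytes)      */
    uint64_t v_bits;      /* valid bits of the exception slots (<= 64)    */
    bool     e_bit[64];   /* per-line exception flags                     */
    uint8_t  e_index[64]; /* per-line exception slot index                */
};

/* n_avail = floor((P - (n*C* + M)) / C); assumes a valid LCP,
 * i.e., P >= n*C* + M.                                                  */
static uint32_t n_avail(const struct lcp_page *pg)
{
    return (pg->phys_size - (pg->n_lines * pg->c_star + pg->md_size))
           / pg->line_size;
}

/* Find a free exception slot and set its v-bit; -1 means page overflow. */
static int alloc_exception_slot(struct lcp_page *pg)
{
    for (uint32_t s = 0; s < n_avail(pg); s++)
        if (!(pg->v_bits & (1ull << s))) {
            pg->v_bits |= 1ull << s;
            return (int)s;
        }
    return -1;
}

/* Writeback of line `i': the four cases described earlier.              */
void handle_writeback(struct lcp_page *pg, uint32_t i, bool compressible)
{
    if (pg->e_bit[i] != compressible) {
        /* Cases 1 and 2: compressibility unchanged; overwrite in place. */
        return;
    }
    if (compressible) {
        /* Case 3: free the exception slot, write into the data region.  */
        pg->v_bits &= ~(1ull << pg->e_index[i]);
        pg->e_bit[i] = false;
    } else {
        /* Case 4: the line now needs an exception slot.                 */
        int slot = alloc_exception_slot(pg);
        if (slot < 0) {
            /* Page overflow: trap to the OS, which migrates (and
             * possibly recompresses) the page as described above.       */
        } else {
            pg->e_bit[i]   = true;
            pg->e_index[i] = (uint8_t)slot;
        }
    }
}
\end{verbatim}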
\subsubsection{Handling Special Cases} There are two types of scenarios that require special attention: (i) rapid changes in compressibility (e.g., a highly compressed page overwritten with non-compressible data), and (ii) multiple back-to-back page overflows. The first scenario leads to an increase in the number of page overflows, which are costly and time-consuming. This situation is common when a page is initialized with certain values (frequently zero values), and then, after some period of time, multiple updates (e.g., writebacks) bring completely different data into the page. For zero pages, the solution is simply not to store them at all -- only one bit in the TLB is needed -- until enough writebacks to such a page have occurred to estimate its compressibility. For other pages, especially those that are allocated (e.g., through malloc) but have never been updated, we likewise delay compression until there is enough evidence that the page can be successfully compressed. These simple optimizations allow us to avoid the major sources of page overflows. The second scenario, while possible in practice, was extremely rare in our experiments. Nevertheless, one possible solution to this problem is to detect such situations and, once the number of back-to-back page overflows exceeds a certain threshold, start decompressing the application's data in the background to avoid further overflows. \subsection{Compression Algorithms} \label{sec:design-algos} Our LCP-based main memory compression framework can be employed with any compression algorithm. In this section, we describe how to adapt a generic compression algorithm to fit the requirements of the LCP framework. Subsequently, we describe how to adapt the two compression algorithms used in our evaluation. \newcommand{f_c}{f_c} \newcommand{f_d}{f_d} \subsubsection{Adapting a Compression Algorithm to Fit LCP} \label{sec:generic-compression} Every compression scheme is associated with a compression function, $f_c$, and a decompression function, $f_d$. To compress a virtual page into the corresponding LCP using the compression scheme, the memory controller carries out three steps. In the first step, the controller compresses every cache line in the page using $f_c$ and feeds the sizes of each compressed cache line to the second step. In the second step, the controller computes the total compressed page size (compressed data + metadata + exceptions, using the formulas from Section~\ref{sec:design-lcp-organization}) for each of a fixed set of target compressed cache line sizes and selects a target compressed cache line size $\mathcal{C}^*$ that minimizes the overall LCP size. In the third and final step, the memory controller classifies any cache line whose compressed size is less than or equal to the target size as compressible and all other cache lines as incompressible (exceptions). The memory controller uses this classification to generate the corresponding LCP based on the organization described in Section~\ref{sec:basic-lcp}. To decompress a compressed cache line of the page, the memory controller reads the fixed-target-sized compressed data and feeds it to the hardware implementation of function $f_d$.
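To make the second step concrete, the following C sketch (our own illustration; it assumes the example parameters used in this chapter -- a 4KB page with 64B cache lines and a 64B metadata region -- and the fixed FPC target sizes discussed in the next subsection) selects the target compressed cache line size $\mathcal{C}^*$ that minimizes the overall LCP size, falling back to leaving the page uncompressed when compression does not help.

\begin{verbatim}
#include <stdint.h>
#include <stdio.h>

#define N_LINES   64u   /* n: cache lines per 4KB page                  */
#define LINE_SIZE 64u   /* C: uncompressed cache line size (bytes)      */
#define MD_SIZE   64u   /* M: metadata region size (bytes)              */

/* Step 2: given the compressed size of every line (from f_c), pick the
 * target C* that minimizes  n*C* + M + (#exceptions)*C.  Returning
 * LINE_SIZE means the page is left uncompressed.                       */
uint32_t pick_target_size(const uint16_t comp_size[N_LINES])
{
    const uint32_t candidates[] = { 16, 21, 32, 44 };  /* FPC targets   */
    uint32_t best_target = LINE_SIZE;
    uint32_t best_total  = N_LINES * LINE_SIZE;  /* uncompressed page   */

    for (uint32_t c = 0;
         c < sizeof candidates / sizeof candidates[0]; c++) {
        uint32_t exceptions = 0;
        for (uint32_t i = 0; i < N_LINES; i++)
            if (comp_size[i] > candidates[c])    /* step 3: classify    */
                exceptions++;
        uint32_t total = N_LINES * candidates[c] + MD_SIZE
                         + exceptions * LINE_SIZE;
        if (total < best_total) {
            best_total  = total;
            best_target = candidates[c];
        }
    }
    return best_target;
}

int main(void)
{
    uint16_t comp_size[N_LINES];
    for (uint32_t i = 0; i < N_LINES; i++)
        comp_size[i] = (i < 60) ? 20 : 64;  /* 4 incompressible lines   */
    printf("chosen C*: %uB\n", pick_target_size(comp_size));
    return 0;
}
\end{verbatim}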
\subsubsection{FPC and BDI Compression Algorithms} \label{sec:design-prev-algos} Although any compression algorithm can be employed with our framework using the approach described above, it is desirable to use compression algorithms that have low complexity hardware implementation and low decompression latency, so that the overall complexity and latency of the design are minimized. For this reason, we adapt to fit our LCP framework two state-of-the-art compression algorithms that achieve such design points in the context of compressing in-cache data: (i) Frequent Pattern Compression~\cite{fpc}, and (ii) Base-Delta-Immediate Compression~\cite{bdi}. Frequent Pattern Compression (FPC) is based on the observation that a majority of the words accessed by applications fall under a small set of frequently occurring patterns~\cite{fpc-tr}. FPC compresses each cache line one word at a time. Therefore, the final compressed size of a cache line is dependent on the individual words within the cache line. To minimize the time to perform the compression search procedure described in Section~\ref{sec:generic-compression}, we limit the search to four different target cache line sizes: 16B, 21B, 32B and 44B (similar to the fixed sizes used in \cite{MMCompression}). Base-Delta-Immediate (BDI) Compression is based on the observation that in most cases, words co-located in memory have small differences in their values, a property referred to as \emph{low dynamic range}~\cite{bdi}. BDI encodes cache lines with such low dynamic range using a base value and an array of differences ($\Delta$s) of words within the cache line from either the base value or from zero. The size of the final compressed cache line depends only on the size of the base and the size of the $\Delta$s. To employ BDI within our framework, the memory controller attempts to compress a page with different versions of the Base-Delta encoding as described by Pekhimenko {\em et al.}~\cite{bdi} and then chooses the combination that minimizes the final compressed page size (according to the search procedure in Section~\ref{sec:generic-compression}). \begin{comment} For our GPU workloads, multiple values are typically packed into a word -- e.g., three components of a color. As a result, in a number of cases, we observed cache lines for which the most significant byte of the values within the cache line are different while the remaining bytes are fixed. The original BDI algorithm will not be able to compress such cache lines as the differences between the words will be large. However, if words of such cache lines are shifted cyclically (by one byte), they can then be compressed using BDI. We call this modification to BDI as BDI-rotate and evaluate it in Section~\ref{sec:gpu-bandwidth}. \end{comment} \section{LCP Optimizations} In this section, we describe two simple optimizations to our proposed LCP-based framework: (i) memory bandwidth reduction via compressed cache lines, and (ii) exploiting zero pages and cache lines for higher bandwidth utilization. \subsection{Enabling Memory Bandwidth Reduction} \label{sec:opt-bandwidth} One potential benefit of main memory compression that has not been examined in detail by prior work on memory compression is bandwidth reduction.\footnote{Prior work~\cite{register-caching,fvc-bus,LinkCompression,GPUBandwidthCompression} looked at the possibility of using compression for bandwidth reduction between the memory controller and DRAM. 
While significant reductions in bandwidth consumption are reported, prior works achieve this reduction either at the cost of increased memory access latency~\cite{register-caching,fvc-bus,LinkCompression}, as they have to both compress and decompress a cache line for every request, or by relying on a specialized main memory design~\cite{GPUBandwidthCompression}, e.g., GDDR3~\cite{gddr3}.} When cache lines are stored in compressed format in main memory, multiple consecutive compressed cache lines can be retrieved at the cost of retrieving a single uncompressed cache line. For example, when cache lines of a page are compressed to 1/4 their original size, four compressed cache lines can be retrieved at the cost of a single uncompressed cache line access. This can significantly reduce the bandwidth requirements of applications, especially those with good spatial locality. We propose two mechanisms that exploit this idea. In the first mechanism, when the memory controller needs to access a cache line in the compressed data region of the LCP, it obtains the data from multiple consecutive compressed slots (which add up to the size of an uncompressed cache line). However, some of the cache lines that are retrieved in this manner may not be \emph{valid}. To determine if an additionally-fetched cache line is valid or not, the memory controller consults the metadata corresponding to the LCP. If a cache line is not valid, then the corresponding data is not decompressed. Otherwise, the cache line is decompressed and then stored in the cache. The second mechanism is an improvement over the first mechanism, where the memory controller additionally predicts if the additionally-fetched cache lines are \emph{useful} for the application. For this purpose, the memory controller uses hints from a multi-stride prefetcher~\cite{stride-prefetching}. In this mechanism, if the stride prefetcher suggests that an additionally-fetched cache line is part of a useful stream, then the memory controller stores that cache line in the cache. This approach has the potential to prevent cache lines that are not useful from polluting the cache. Section~\ref{sec:results-prefetching-hints} shows the effect of this approach on both performance and bandwidth consumption. Note that prior work~\cite{register-caching,fvc-bus,LinkCompression,GPUBandwidthCompression} assumed that when a cache line is compressed, only the compressed amount of data can be transferred over the DRAM bus, thereby freeing the bus for future accesses. Unfortunately, modern DRAM chips are optimized for full cache block accesses~\cite{variable-reads}, so they would need to be modified to support such smaller granularity transfers. Our proposal does not require modifications to DRAM itself or the use of specialized DRAM such as GDDR3~\cite{gddr3}. \subsection{Zero Pages and Zero Cache Lines} \label{sec:opt-zeros} Prior work~\cite{ZeroContent,fvc,fpc,MMCompression,bdi} observed that in-memory data contains a significant number of zeros at two granularities: all-zero pages and all-zero cache lines. Because this pattern is quite common, we propose two changes to the LCP framework to more efficiently compress such occurrences of zeros. First, one value of the page compression encoding (e.g., {\small{\texttt{c-type}}}\xspace of 0) is reserved to indicate that the entire page is zero. When accessing data from a page with {\small{\texttt{c-type}}}\xspace$=0$, the processor can avoid any LLC or DRAM access by simply zeroing out the allocated cache line in the L1-cache.
Second, to compress all-zero cache lines more efficiently, we can add another bit per cache line to the first part of the LCP metadata. This bit, which we call the {\small{\texttt{z-bit}}}\xspace, indicates if the corresponding cache line is zero. Using this approach, the memory controller does not require any main memory access to retrieve a cache line with the {\small{\texttt{z-bit}}}\xspace set (assuming a metadata cache hit). \section{Integration with Compressed Last-Level Caches} \label{sec:integration} While our LCP framework does not require any compression in the last-level cache (LLC), it might be desirable to consider LLC compression together with main memory compression for two reasons. First, different applications may have different performance bottlenecks, e.g., limited bandwidth or LLC capacity. Hence, compressing data at both levels of the memory hierarchy can improve overall performance significantly by increasing an application's cache hit rate, reducing its bandwidth requirements, or both. Second, if the same compression algorithm is employed to compress cache lines both in the LLC and in main memory, cache lines can be migrated between the two levels without requiring any additional compression or decompression, leading to significant improvements in energy efficiency and latency. Our framework facilitates seamless integration of LLC compression and main memory compression by ensuring that the compression algorithm is applied at the cache line granularity. To understand the benefits of integrating main memory compression with LLC compression, we evaluate a set of designs where both the LLC and main memory can be either uncompressed or compressed with different compression mechanisms, e.g., BDI and FPC. We present the results of these evaluations in Section~\ref{sec:results-perf}. \section{Related Work} \label{sec:relatedwork} Previous works looked at the possibility of bandwidth compression~\cite{register-caching,fvc-bus,LinkCompression} between the LLC and DRAM. While significant potential decreases in bandwidth consumption are reported, none of these works considered compressing bandwidth without an increase in latency (which is due to the presence of compressors/decompressors located on both ends of the memory bus). \section{Methodology} \label{lcp:sec:methodology} \begin{table}[t]\footnotesize \vspace{-0.05cm} \centering \begin{tabular}{ll} \toprule CPU Processor & 1--4 cores, 4GHz, x86 in-order \\ \cmidrule(rl){1-2} CPU L1-D cache & 32KB, 64B cache-line, 2-way, 1 cycle \\ \cmidrule(rl){1-2} CPU L2 cache & 2 MB, 64B cache-line, 16-way, 20 cycles \\ \cmidrule(rl){1-2} Main memory & 2 GB, 4 Banks, 8 KB row buffers, \\ & 1 memory channel, DDR3-1066~\cite{micron-ddr3} \\ \cmidrule(rl){1-2} LCP Design & Type-1 Overflow Penalty: 20,000 cycles \\ \bottomrule \end{tabular}% \caption{\small Major Parameters of the Simulated Systems.} \label{lcp:tbl:simulation-parameters}% \vspace{-0.2cm} \end{table} Our evaluations use an in-house, event-driven 32-bit x86 simulator whose front-end is based on Simics~\cite{Simics}. All configurations have private L1 caches and shared L2 caches. Major simulation parameters are provided in Table~\ref{lcp:tbl:simulation-parameters}. We use benchmarks from the SPEC CPU2006 suite~\cite{SPEC}, four TPC-H/TPC-C queries~\cite{tpc}, and an Apache web server. All results are collected by running a representative portion (based on PinPoints~\cite{pinpoints}) of the benchmarks for 1 billion instructions.
We build our energy model based on McPat~\cite{mcpat}, CACTI~\cite{cacti}, C-Pack~\cite{c-pack}, and the Synopsys Design Compiler with a 65nm library (to evaluate the energy of compression/decompression with BDI and of the address calculation in~\cite{MMCompression}). \textbf{{Metrics.}} We measure the performance of our benchmarks using IPC (instructions per cycle) and effective compression ratio (effective DRAM size increase, e.g., a compression ratio of 1.5 for 2GB DRAM means that the compression scheme achieves the size benefits of a 3GB DRAM). For multi-programmed workloads we use the weighted speedup~\cite{weightedspeedup} performance metric: ($\sum_i \frac{IPC_i^{shared}}{IPC_i^{alone}}$). For bandwidth consumption we use BPKI (bytes transferred over the memory bus per thousand instructions~\cite{BPKI}). \textbf{{Parameters of the Evaluated Schemes.}} As reported in the respective previous works, we used a decompression latency of 5 cycles for FPC and 1 cycle for BDI. \section{Results} \label{lcp:sec:results} In our experiments for both single-core and multi-core systems, we compare five different designs that employ different main memory compression strategies (frameworks) and different compression algorithms: (i) \textit{Baseline} system with no compression, (ii) robust main memory compression (\textit{RMC-FPC})~\cite{MMCompression}, (iii) and (iv) the LCP framework with the FPC and BDI compression algorithms, respectively (\textit{LCP-FPC} and \textit{LCP-BDI}), and (v) \textit{MXT}~\cite{MXT}. Note that it is fundamentally possible to build an RMC-BDI design as well, but we found that it leads to either low energy efficiency (due to an increase in the BST metadata table entry size~\cite{MMCompression} with many more encodings in BDI) or low compression ratio (when the number of encodings is artificially decreased). Hence, for brevity, we exclude this potential design from our experiments. In addition, we evaluate two hypothetical designs: Zero Page Compression (\textit{ZPC}) and Lempel-Ziv (\textit{LZ})\footnote{Our implementation of LZ performs compression at 4KB page-granularity and serves as an idealized upper bound for the in-memory compression ratio. In contrast, MXT employs Lempel-Ziv at 1KB granularity.} to show some practical upper bounds on main memory compression. Table~\ref{table:schemes} summarizes all the designs. \begin{table}[t]\footnotesize \vspace{-0.05cm} \centering \begin{tabular}{lll} \toprule \textbf{Name} & \textbf{Framework} & \textbf{Compression Algorithm}\\ \midrule \textit{Baseline} & None & None\\ \cmidrule(rl){1-3} \textit{RMC-FPC} & RMC~\cite{MMCompression} & FPC~\cite{fpc}\\ \cmidrule(rl){1-3} \textit{LCP-FPC} & LCP & FPC~\cite{fpc}\\ \cmidrule(rl){1-3} \textit{LCP-BDI} & LCP & BDI~\cite{bdi}\\ \cmidrule(rl){1-3} \textit{MXT} & MXT~\cite{MXT} & Lempel-Ziv~\cite{lz}\\ \midrule \midrule \textit{ZPC} & None & Zero Page Compression\\ \cmidrule(rl){1-3} \textit{LZ} & None & Lempel-Ziv~\cite{lz}\\ \bottomrule \end{tabular} \caption{List of evaluated designs.} \vspace{-0.2cm} \label{table:schemes} \end{table} \subsection{Effect on DRAM Capacity} Figure~\ref{fig:capacity} compares the compression ratio of all the designs described in Table~\ref{table:schemes}. We draw two major conclusions. First, as expected, MXT, which employs the complex LZ algorithm, has the highest average compression ratio (2.30) of all practical designs and comes close to our idealized LZ implementation (2.60).
At the same time, LCP-BDI provides a reasonably high compression ratio (1.62 on average), outperforming RMC-FPC (1.59) and LCP-FPC (1.52). (Note that LCP could be used with both the BDI and FPC algorithms together, and the average compression ratio in this case is as high as 1.69.) Second, while the average compression ratio of ZPC is relatively low (1.29), it greatly improves the effective memory capacity for a number of applications (e.g., {\em GemsFDTD}, {\em zeusmp}, and {\em cactus\-ADM}). This justifies our design decision of handling zero pages at the TLB-entry level. We conclude that our LCP framework achieves the goal of high compression ratio. \begin{figure}[htb] \centering \includegraphics[width=0.95\textwidth]{chap5/lcp/figures/Capacity.pdf} \caption{Main memory compression ratio.} \label{fig:capacity} \end{figure} \subsubsection{Distribution of Compressed Pages} The primary reason why applications have different compression ratios is the difference in the redundancy of their data. As a result, every application has its own distribution of compressed pages of different sizes (0B, 512B, 1KB, 2KB, 4KB). Figure~\ref{fig:distribution} shows these distributions for the applications in our study when using the LCP-BDI design. As we can see, the fraction of memory pages of each size varies significantly across applications, leading to different compression ratios (shown in Figure~\ref{fig:capacity}). For example, \emph{cactusADM} has a high compression ratio due to many 0B and 512B pages (there is a significant number of zero cache lines in its data), while \emph{astar} and \emph{h264ref} get most of their compression with 2KB pages due to cache lines with low dynamic range~\cite{bdi}. \subsubsection{Compression Ratio over Time} To estimate the efficiency of LCP-based compression over time, we conduct an experiment where we measure the compression ratios of our applications every 100 million instructions (for a total period of 5 billion instructions). The key observation we make is that the compression ratio for most of the applications is stable over time (the difference between the highest and the lowest ratio is within 10\%). Figure~\ref{lcp:fig:compression} shows all notable outliers from this observation: \emph{astar}, \emph{cactusADM}, \emph{h264ref}, and \emph{zeusmp}. Even for these applications, the compression ratio stays relatively constant for a long period of time, although there are some noticeable fluctuations in compression ratio (e.g., for \emph{astar} at around 4 billion instructions, for \emph{cactusADM} at around 500M instructions). We attribute this behavior to a phase change within an application that sometimes leads to changes in the application's data. Fortunately, these cases are infrequent and do not have a noticeable effect on the application's performance (as we describe in Section~\ref{sec:results-perf}). We conclude that the capacity benefits provided by the LCP-based frameworks are usually stable over long periods of time.
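To connect the effective compression ratios in Figure~\ref{fig:capacity} with the page-size distributions in Figure~\ref{fig:distribution}, the short calculation below shows the arithmetic involved. The distribution used here is made up purely for illustration and does not correspond to any particular benchmark.
\begin{verbatim}
# Illustrative arithmetic only: the distribution below is hypothetical.
UNCOMPRESSED_PAGE = 4096  # bytes

# Fraction of pages stored at each compressed physical size (bytes).
distribution = {0: 0.05, 512: 0.20, 1024: 0.15, 2048: 0.30, 4096: 0.30}

avg_compressed_page = sum(size * frac for size, frac in distribution.items())
compression_ratio = UNCOMPRESSED_PAGE / avg_compressed_page
print(round(compression_ratio, 2))  # ~1.95 for this hypothetical mix
\end{verbatim}
A shift of even a modest fraction of pages from the 4KB bin to the 512B or 0B bins noticeably raises the ratio, which is why zero-heavy applications such as \emph{cactusADM} compress so well.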
\begin{figure}[htb] \includegraphics[width=0.95\textwidth]{chap5/lcp/figures/PageDistribution.pdf} \caption{Compressed page size distribution with LCP-BDI.} \label{fig:distribution} \end{figure} \begin{figure}[htb] \includegraphics[width=0.95\textwidth]{chap5/lcp/figures/CompressionBW.pdf} \caption{Compression ratio over time with LCP-BDI.} \label{lcp:fig:compression} \end{figure} \subsection{Effect on Performance} \label{sec:results-perf} Main memory compression can improve performance in two major ways: (i) reduced memory bandwidth requirements, which can enable less contention on the main memory bus, an increasingly important bottleneck in systems, and (ii) reduced memory footprint, which can reduce long-latency disk accesses. We evaluate the performance improvement due to memory bandwidth reduction (including our optimizations for compressing zero values described in Section~\ref{sec:opt-zeros}) in Sections~\ref{lcp:sec:single-core} and~\ref{sec:multi-core}. We also evaluate the decrease in page faults in Section~\ref{sec:page-faults}. \begin{comment} \begin{table}[h!]\scriptsize \centering \begin{tabular}{llp{4.2cm}} \toprule \textbf{No.} & \textbf{Label} & \textbf{Description}\\ \midrule 1 & (None, None) & Baseline with no compression\\ \cmidrule(rl){1-3} 2 & FPC-memory & Only main memory compression (Ekman and Stenstrom~\cite{MMCompression})\\ \cmidrule(rl){1-3} 3 & LCP-BDI & Only main memory compression with LCP-framework using BDI~\cite{bdi}\\ \cmidrule(rl){1-3} 4 & (FPC, FPC-memory) & FPC cache compression~\cite{fpc} and design 2 combined\\ \cmidrule(rl){1-3} 5 & (BDI, LCP-BDI) & BDI cache compression~\cite{bdi} and design 3 combined\\ \bottomrule \end{tabular} \caption{List of evaluated designs.} \label{table:schemes} \end{table} \end{comment} \subsubsection{Single-Core Results} \label{lcp:sec:single-core} Figure~\ref{fig:IPC} shows the performance of single-core workloads using three key evaluated designs (RMC-FPC, LCP-FPC, and LCP-BDI) normalized to the \textit{Baseline}. Compared against an uncompressed system (\textit{Baseline}), the LCP-based designs (LCP-BDI and LCP-FPC) improve performance by 6.1\%/5.2\% and also outperform RMC-FPC.\footnote{Note that in order to provide a fair comparison, we enhanced the RMC-FPC approach with the same optimizations we did for LCP, e.g., bandwidth compression. The original RMC-FPC design reported an average degradation in performance~\cite{MMCompression}.} We conclude that our LCP framework is effective in improving performance by compressing main memory. \begin{figure}[htb] \centering \includegraphics[width=0.95\textwidth]{chap5/lcp/figures/IPC.pdf} \caption{Performance comparison (IPC) of different compressed designs for the single-core system.} \label{fig:IPC} \end{figure} Note that LCP-FPC outperforms RMC-FPC (on average) despite having a slightly lower compression ratio. This is mostly due to the lower overhead when accessing metadata information (RMC-FPC needs two memory accesses to \emph{different} main memory pages in the case of a BST table miss, while the LCP-based framework performs two accesses to the same main memory page, which can be pipelined). This is especially noticeable in several applications, e.g., \emph{astar}, \emph{milc}, and \emph{xalancbmk}, which have low metadata table (BST) hit rates (LCP can also degrade performance for these applications). We conclude that our LCP framework is more effective in improving performance than RMC~\cite{MMCompression}.
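The multi-core results in the next subsection are reported as weighted speedup, and bandwidth as BPKI, as defined in Section~\ref{lcp:sec:methodology}. As a reminder of the arithmetic behind these metrics, a minimal sketch follows; the per-core IPC values and byte counts are placeholders rather than measured numbers.
\begin{verbatim}
# Placeholder numbers; only the metric definitions matter here.
ipc_alone  = [1.20, 0.80, 0.95, 1.10]   # each core running alone
ipc_shared = [0.90, 0.60, 0.80, 0.85]   # same cores in the 4-core mix

# Weighted speedup: sum over cores of IPC_shared / IPC_alone.
weighted_speedup = sum(s / a for s, a in zip(ipc_shared, ipc_alone))

# BPKI: bytes transferred over the memory bus per thousand instructions.
def bpki(bytes_transferred, instructions):
    return bytes_transferred / (instructions / 1000.0)

print(round(weighted_speedup, 2))   # ~3.11 for these placeholder IPCs
print(bpki(3.2e9, 1e9))             # 3200.0 bytes per kilo-instruction
\end{verbatim}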
\begin{comment} Third, a high compression ratio does not always imply an improvement in performance. For example, while GemsFDTD is an application with a highly compressible working set in both the cache and DRAM, its performance does not improve with cache-only compression schemes~\cite{bdi}, but improves significantly for DRAM-only compression schemes. In contrast, cache-only compression is significantly beneficial for omnetpp, whereas DRAM-only compression is not. This difference across applications can be explained by the difference in their memory access patterns. We observe that when temporal locality is critical for the performance of an application (e.g., omnetpp and xalancbmk), then cache compression schemes are typically more helpful. On the other hand, when applications have high spatial locality and less temporal locality (e.g., GemsFDTD has an overwhelming streaming access pattern with little reuse), they benefit significantly from the bandwidth compression provided by the LCP-based schemes. Hence, if the goal is to improve performance of a wide variety of applications, that may have a mix of temporal and spatial locality, our results suggest that LCP-based designs with both DRAM and LLC compressed are the best option. We conclude that combined LLC and DRAM compression that takes advantage of our main memory compression framework benefits a wide variety of applications. \end{comment} \subsubsection{Multi-Core Results} \label{sec:multi-core} When the system has a single core, the memory bandwidth pressure may not be large enough to take full advantage of the bandwidth benefits of main memory compression. However, in a multi-core system where multiple applications are running concurrently, savings in bandwidth (reduced number of memory bus transfers) may significantly increase the overall system performance. To study this effect, we conducted experiments using 100 randomly generated multiprogrammed mixes of applications (for both 2-core and 4-core workloads). Our results show that the bandwidth benefits of memory compression are indeed more pronounced for multi-core workloads. Using our LCP-based design, LCP-BDI, the average performance improvement (normalized to the performance of the \textit{Baseline} system without compression) is 13.9\% for 2-core workloads and 10.7\% for 4-core workloads. We summarize our multi-core performance results in Figure~\ref{fig:ws}. We also vary the last-level cache size (1MB -- 16MB) for both single core and multi-core systems across all evaluated workloads. We find that LCP-based designs outperform the \emph{Baseline} across all evaluated systems (average performance improvement for single-core varies from 5.1\% to 13.4\%), even when the L2 cache size of the system is as large as 16MB. \begin{comment} Figure~\ref{fig:many-core} shows the effect of varying the last-level cache size on the performance benefit of our LCP-based design (using BDI compression in main memory) both for single core and multi-core systems across all evaluated workloads. LCP-based designs outperform the \textit{Baseline} design across all evaluated systems, even when the L2 cache size of the system is as large as 16MB. We conclude that our memory compression framework is effective for a wide variety of core counts and last-level cache sizes. 
\end{comment} \begin{comment} \begin{table}[ht]\footnotesize \centering \begin{tabular}{ccc} \toprule \textbf{Cores} & \textbf{LCP-BDI} & \textbf{(BDI, LCP-BDI)} \\ \midrule 1 & 6.1\% & 9.5\% \\ \cmidrule(rl){1-3} 2 & 13.9\% & 23.7\% \\ \cmidrule(rl){1-3} 4 & 10.7\% & 22.6\% \\ \bottomrule \end{tabular}% \caption{Average performance improvement (weighted speedup) using LCP-based designs.} \label{tbl:multicore}% \vspace{-0.4cm} \end{table} \end{comment} \begin{figure}[htb] \centering \includegraphics[width=0.6\textwidth]{chap5/lcp/figures/Speedup.pdf} \caption{Average performance improvement (weighted speedup).} \label{fig:ws} \end{figure} \begin{figure}[htb] \centering \includegraphics[width=0.6\textwidth]{chap5/lcp/figures/PF.pdf} \caption{Number of page faults (normalized to \textit{Baseline} with 256MB).} \label{fig:pf} \end{figure} \subsubsection{Effect on the Number of Page Faults} \label{sec:page-faults} Modern systems are usually designed such that concurrently-running applications have enough main memory to avoid most of the potential capacity page faults. At the same time, if the applications' total working set size exceeds the main memory capacity, the increased number of page faults can significantly affect performance. To study the effect of the LCP-based framework (LCP-BDI) on the number of page faults, we evaluate twenty randomly generated 16-core multiprogrammed mixes of applications from our benchmark set. We also vary the main memory capacity from 256MB to 1GB (larger memories usually lead to almost no page faults for these workload simulations). Our results (Figure~\ref{fig:pf}) show that the LCP-based framework (LCP-BDI) can decrease the number of page faults by 21\% on average (for 1GB DRAM) when compared with the \textit{Baseline} design with no compression. We conclude that the LCP-based framework can significantly decrease the number of page faults, and hence improve system performance beyond the benefits it provides due to reduced bandwidth. \begin{comment} \begin{figure}[htb] \begin{minipage}{2.8cm} \centering \includegraphics[height=2cm]{figures/1-core.pdf}\\ a) 1-core \end{minipage} \begin{minipage}{2.8cm} \centering \includegraphics[height=2cm]{figures/2-core.pdf}\\ b) 2-core \end{minipage} \begin{minipage}{2.8cm} \centering \includegraphics[height=2cm]{figures/4-core.pdf}\\ c) 4-core \end{minipage} \caption{Effect of varying cache size on performance.} \label{fig:many-core} \end{figure} \end{comment} \begin{comment} \begin{figure}[htb] \centering \includegraphics[width=0.5\textwidth]{figures/Energy.pdf} \caption{Effect of main memory compression on power consumption of bus between memory controller and DRAM.} \label{fig:energy} \end{figure} \end{comment} \subsection{Effect on Bus Bandwidth and Memory Subsystem Energy} \label{sec:results-bandwidth} \label{sec:results-energy} When DRAM pages are compressed, the traffic between the LLC and DRAM can be reduced. This can have two positive effects: (i) reduction in the average latency of memory accesses, which can lead to improvement in the overall system performance, and (ii) decrease in the bus energy consumption due to the decrease in the number of transfers. Figure~\ref{fig:bandwidth} shows the reduction in main memory bandwidth between LLC and DRAM (in terms of bytes per kilo-instruction, normalized to the \textit{Baseline} system with no compression) using different compression designs. 
The key observation we make from this figure is that there is a strong correlation between bandwidth compression and performance improvement (Figure~\ref{fig:IPC}). Applications that show a significant reduction in bandwidth consumption (e.g., \emph{GemsFDTD}, \emph{cactusADM}, \emph{soplex}, \emph{zeusmp}, \emph{leslie3d}, and the four \emph{tpc} queries) also see large performance improvements. There are some noticeable exceptions to this observation, e.g., \emph{h264ref}, \emph{wrf} and \emph{bzip2}. Although the memory bus traffic is compressible in these applications, main memory bandwidth is not the bottleneck for their performance. \begin{figure}[htb] \centering \includegraphics[width=0.95\textwidth]{chap5/lcp/figures/Bandwidth.pdf} \caption{Effect of different main memory compression schemes on memory bandwidth.} \label{fig:bandwidth} \end{figure} Figure~\ref{lcp:fig:energy} shows the reduction in memory subsystem energy of three systems that employ main memory compression---RMC-FPC, LCP-FPC, and LCP-BDI---normalized to the energy of \textit{Baseline}. The memory subsystem energy includes the static and dynamic energy consumed by caches, TLBs, memory transfers, and DRAM, plus the energy of additional components due to main memory compression: BST~\cite{MMCompression}, MD cache, address calculation, compressor/decompressor units. Two observations are in order. \begin{figure}[htb] \centering \includegraphics[width=0.95\textwidth]{chap5/lcp/figures/Energy.pdf} \caption{Effect of different main memory compression schemes on memory subsystem energy.} \label{lcp:fig:energy} \end{figure} First, our LCP-based designs (LCP-BDI and LCP-FPC) improve the memory subsystem energy by 5.2\% / 3.4\% on average over the \textit{Baseline} design with no compression, and by 11.3\% / 9.5\% over the state-of-the-art design (RMC-FPC) based on~\cite{MMCompression}. This is especially noticeable for bandwidth-limited applications, e.g., \emph{zeusmp} and \emph{cactusADM}. We conclude that our framework for main memory compression enables significant energy savings, mostly due to the decrease in bandwidth consumption. Second, RMC-FPC consumes significantly more energy than \textit{Baseline} (6.1\% more energy on average, as high as 21.7\% for \emph{dealII}). The primary reason for this energy consumption increase is the physical address calculation that RMC-FPC speculatively performs on \emph{every} L1 cache miss (to avoid increasing the memory latency due to complex address calculations). The second reason is the frequent (every L1 miss) accesses to the BST table (described in Section~\ref{lcp:sec:background}) that holds the address calculation information. Note that other factors, e.g., compression/decompression energy overheads or different compression ratios, are not the reasons for this energy consumption increase. LCP-FPC uses the same compression algorithm as RMC-FPC (and even has a slightly lower compression ratio), but does not increase energy consumption---in fact, LCP-FPC improves the energy consumption due to its decrease in consumed bandwidth. We conclude that our LCP-based framework is a more energy-efficient main memory compression framework than previously proposed designs such as RMC-FPC. \begin{comment} \subsection{Cache and Main Memory Compression} While our LCP-framework does not require any compression in the last-level cache (LLC), it might be desirable to consider LLC compression together with main memory compression for two reasons. 
First, different applications may have different performance bottlenecks, e.g., limited bandwidth or LLC capacity. Second, if the same compression algorithm is employed to compress cache lines both in LLC and in main memory, cache lines can be migrated between the two levels without requiring any additional compression or decompression, leading to significant improvements in energy efficiency and latency. Our framework facilitates seamless integration of LLC compression and main memory compression by ensuring that compression algorithm is applied at the cache line granularity. \begin{table}[h!]\small \centering \begin{tabular}{llll} \toprule \textbf{Memory Compr.} & \textbf{Cache Compr.} & \textbf{Performance} & \textbf{Bandwidth}\\ \midrul RMC-FPC & None & 2.6\% & 21.4\%\\ \cmidrule(rl){1-4} LCP-FPC & None & 5.2\% & 20.5\% \\ \cmidrule(rl){1-4} LCP-BDI & None & 6.1\% & 24.3\%\\ \cmidrule(rl){1-4} RMC-FPC & FPC & 4.8\% & 28.2\%\\ \cmidrule(rl){1-4} LCP-FPC & FPC & 7.5\% & 27.4\%\\ \cmidrule(rl){1-4} LCP-BDI & BDI & 9.6\% & 33.2\%\\ \bottomrule \end{tabular} \caption{Comparison of cache and memory compression designs. All numbers are percentage improvement over the \textit{Baseline} and averaged across all applications.} \label{table:all} \end{table} Table~\ref{table:all} summarizes both the performance and bandwidth consumption of several possible LLC and main memory compression designs. The performance improvement of combined LLC and DRAM compression is greater than that of DRAM-only compression alone. For example, LCP-BDI improves performance by 6.1\%, whereas (LCP-BDI, BDI) design (last row) improves performance by 9.6\%. Intuitively, this is due to the orthogonality of the benefits provided by cache compression (retains more cache lines that would otherwise have been evicted) and DRAM compression (brings in more cache lines that would otherwise have required separate memory transfers on the main memory bus). This intuition can be confirmed by looking at the reduction in bandwidth consumption (last column). The additional reduction is observed in all designs (7\% to 9\% on average) and is due to the reduction in LLC misses (achieved by the higher effective LLC size when cache compression is applied). Overall, we conclude that compressed memory hierarchy designs (where both LLC and DRAM are compressed with the same compression algorithm) can achieve significantly higher performance and bandwidth reduction than designs where only main memory is compressed. \end{comment} \subsection{Analysis of LCP Parameters} \begin{comment} \subsubsection{Effectiveness of the Metadata Cache} \label{sec:results-md} The metadata (MD) cache is a critical structure in the LCP framework as it helps the memory controller to avoid accesses to the LCP metadata (Section~\ref{sec:design-metadata-cache}). Figure~\ref{fig:mdcache} shows the hit rate of a 512-entry (32KB) MD cache for an LCP design that uses the BDI+FPC-fixed compression scheme for the single-core system.\footnote{Other previously discussed designs have similar hit rate.} We draw two conclusions from the figure. First, the average hit ratio is high (88\% on average), indicating that the use of the MD cache can significantly reduce the number of LCP metadata accesses to main memory. This is also the reason for the absence of significant performance degradation using the LCP framework (Figure~\ref{fig:IPC}) even for applications that do not benefit from compression. 
Second, some applications have significantly lower MD cache hit rate, especially, sjeng and astar. Analysis of the source code of these applications revealed that memory accesses of these applications exhibit very low locality. As a result, we also observed a low TLB hit rate for these applications. Since TLB misses are costlier than MD cache misses (the former requires multiple memory accesses), the low MD cache hit rate does not lead to significant performance degradation for these applications. \begin{figure}[htb] \centering \includegraphics[width=0.95\textwidth]{chap5/lcp/figures/IndexCache.pdf} \caption{Effectiveness of the metadata cache.} \label{fig:mdcache} \end{figure} \end{comment} \subsubsection{Analysis of Page Overflows} As described in Section~\ref{sec:design-handling-overflows}, page overflows can stall an application for a considerable duration. As we mentioned in that section, we did not encounter any type-2 overflows (the more severe type) in our simulations. Figure~\ref{fig:overflows} shows the number of type-1 overflows per instruction. The y-axis uses a log-scale as the number of overflows per instruction is very small. As the figure shows, on average, less than one type-1 overflow occurs every one million instructions. Although such overflows are more frequent for some applications (e.g., \emph{soplex} and the three \emph{tpch} queries), our evaluations show that this does not degrade performance in spite of adding a 20,000 cycle penalty for each type-1 page overflow.\footnote{We varied the type-1 overflow latency from 10,000 to 100,000 cycles and found that the impact on performance was negligible as we varied the latency. Prior work on main memory compression~\cite{MMCompression} also used 10,000 to 100,000 cycle range for such overflows.} In fact, these applications gain significant performance from our LCP design. The main reason for this is that the performance benefits of bandwidth reduction far outweigh the performance degradation due to type-1 overflows. We conclude that page overflows do not prevent the proposed LCP framework from providing good overall performance. \begin{figure}[htb] \centering \includegraphics[width=0.95\textwidth]{chap5/lcp/figures/Overflows.pdf} \caption{Type-1 page overflows for different applications.} \label{fig:overflows} \end{figure} \subsubsection{Number of Exceptions} The number of exceptions (uncompressed cache lines) in the LCP framework is critical for two reasons. First, it determines the size of the physical page required to store the LCP. The higher the number of exceptions, the larger the required physical page size. Second, it can affect an application's performance as exceptions require three main memory accesses on an MD cache miss (Section~\ref{sec:basic-mcf-overview}). We studied the average number of exceptions (across all compressed pages) for each application. Figure~\ref{fig:exceptions} shows the results of these studies. The number of exceptions varies from as low as 0.02/page for \emph{GemsFDTD} to as high as 29.2/page in \emph{milc} (17.3/page on average). The average number of exceptions has a visible impact on the compression ratio of applications (Figure~\ref{fig:capacity}). An application with a high compression ratio also has relatively few exceptions per page. Note that we do not restrict the number of exceptions in an LCP. 
As long as an LCP fits into a physical page not larger than the uncompressed page size (i.e., 4KB in our system), it will be stored in compressed form irrespective of how large the number of exceptions is. This is why applications like \emph{milc} have a large number of exceptions per page. We note that better performance is potentially achievable by either statically or dynamically limiting the number of exceptions per page---a complete evaluation of the design space is a part of our future work. \begin{figure}[htb] \centering \includegraphics[width=0.95\textwidth]{chap5/lcp/figures/Exclusions.pdf} \caption{Average number of exceptions per compressed page for different applications.} \label{fig:exceptions} \end{figure} \subsection{Comparison to Stride Prefetching} \label{sec:results-prefetching-hints} Our LCP-based framework improves performance due to its ability to transfer multiple compressed cache lines using a single memory request. Because this benefit resembles that of prefetching cache lines into the LLC, we compare our LCP-based design to a system that employs a stride prefetcher implemented as described in \cite{stride-prefetching}. Figures~\ref{fig:pref-ipc} and \ref{fig:pref-bandwidth} compare the performance and bandwidth consumption of three systems: (i)~one that employs stride prefetching, (ii)~one that employs LCP-BDI, and (iii)~one that employs LCP-BDI along with hints from a prefetcher to avoid cache pollution due to bandwidth compression (Section~\ref{sec:opt-bandwidth}). Two conclusions are in order. First, our LCP-based designs (second and third bars) are competitive with the more general stride prefetcher for all but a few applications (e.g., \emph{libquantum}). The primary reason is that a stride prefetcher can sometimes increase the memory bandwidth consumption of an application due to inaccurate prefetch requests. On the other hand, LCP obtains the benefits of prefetching without increasing (in fact, while significantly reducing) memory bandwidth consumption. Second, the effect of using prefetcher hints to avoid cache pollution is not significant. The reason for this is that our systems employ a large, highly-associative LLC (2MB 16-way) which is less susceptible to cache pollution. Evicting the LRU lines from such a cache has little effect on performance, but we did observe the benefits of this mechanism on multi-core systems with shared caches (up to 5\% performance improvement for some two-core workload mixes---not shown). \begin{figure}[thb] \includegraphics[width=0.9\textwidth]{chap5/lcp/figures/PrefetchIPC.pdf} \caption{Performance comparison with stride prefetching, and using prefetcher hints with the LCP-framework.} \label{fig:pref-ipc} \end{figure} \begin{figure}[htb] \includegraphics[width=0.9\textwidth]{chap5/lcp/figures/PrefetchBandwidth.pdf} \caption{Bandwidth comparison with stride prefetching.} \label{fig:pref-bandwidth} \end{figure} \begin{comment} \subsection{Effect on GPU Systems} \label{sec:gpu-bandwidth} To show the general applicability of DRAM compression for different architectures, we perform a preliminary experiment to analyze the effect of main memory compression on memory bandwidth reduction for a GPU architecture (AMD Evergreen ISA). Figure~\ref{fig:gpu-bandwidth} shows the memory bandwidth reduction with three compression schemes: 1)~Frequent Pattern Compression, 2)~Base-Delta-Immediate Compression, and 3)~Base-Delta-Immediate-rotate Compression (described in Section~\ref{sec:design-prev-algos}). 
As the figure shows, all three mechanisms significantly reduce the bandwidth requirements of most GPU applications, with BDI-rotate showing the best results (48\% on average). We conclude that our proposal is effective for GPU systems, and can enable significant performance and energy-efficiency benefits due to this reduction in main memory bandwidth, especially in memory-bandwidth-bound GPU applications. \begin{figure}[!htb] \centering \includegraphics[width=0.49\textwidth]{figures/GPU-Bandwidth.pdf} \vspace{-6mm} \caption{Bandwidth Reduction in GPUs.} \label{fig:gpu-bandwidth} \end{figure} \end{comment} \chapter{Main Memory Compression: Linearly Compressed Pages} \input{chap5/lcp/sections/1_introduction} \input{chap5/lcp/sections/2_background} \input{chap5/lcp/sections/3_basic} \input{chap5/lcp/sections/4_design} \input{chap5/lcp/sections/5_optimizations} \input{chap5/lcp/sections/8_methodology} \input{chap5/lcp/sections/9_results} \input{chap5/lcp/sections/10_conclusion} \chapter{Flexible and Efficient Bandwidth Compression for GPUs} \input{caba/sections/1_introduction} \input{caba/sections/2_motivation} \input{caba/sections/3_CABA_1} \input{caba/sections/4_compression_1} \input{caba/sections/5_methodology} \input{caba/sections/6_results} \input{caba/sections/8_related} \input{caba/sections/9_conclusion} \chapter{Toggle-Aware Bandwidth Compression} \input{toggles/sections/1_introduction} \input{toggles/sections/2_background} \input{toggles/sections/3_motivation} \input{toggles/sections/4_idea} \input{toggles/sections/5_design} \input{toggles/sections/6_methodology} \input{toggles/sections/7_results} \input{toggles/sections/8_related} \input{toggles/sections/9_conclusion} \chapter{Conclusions and Future Work} Memory hierarchies play a significant role in the performance and energy efficiency of many modern systems, from mobile devices to data centers and supercomputers. Unfortunately, the limited resources of these memory hierarchies are not always utilized efficiently. One of these sources of inefficiency is redundancy in the data that is stored and transferred. We observe that this redundancy can be efficiently exploited using hardware-based data compression. In Chapter 2, we described the key challenges in making hardware-based data compression practical across the major layers of the memory hierarchy: caches, main memory, and on-chip/off-chip buses. In this dissertation, we proposed three major sets of solutions to make hardware-based data compression efficient and practical in the context of all three layers of the memory hierarchy. First, we observed that a simple and fast, yet efficient compression algorithm can make data compression practical even for on-chip caches. In Chapter 3, we described such an algorithm, called \emph{Base-Delta-Immediate Compression}, and a corresponding on-chip cache design to support data compression. The observed performance benefits are on par with those of doubling the cache size. Then, in Chapter 4, we showed that compressed block size can sometimes be indicative of data reuse and can be efficiently used as a new dimension in cache management decisions. Our proposed compression-aware mechanism, which takes compressed block size into account in making cache replacement and insertion decisions, results in performance on par with that provided by doubling the cache size.
Overall, cache compression and compression-aware replacement policies that use compressed block size together deliver performance on par with that of a conventional cache with 4$\times$ the capacity. Second, we proposed a new main memory compression framework, called \emph{Linearly Compressed Pages (LCP)}, that can provide low-overhead support for data compression in memory with different compression algorithms, to achieve higher effective memory capacity (69\% on average) and higher off-chip bandwidth (24\% on average). LCP improves performance by 6\%/14\%/11\% for single-/two-/four-core workloads, relative to a system without main memory compression. Third, we observed that there is a high potential for bandwidth compression for modern GPGPU applications. However, in order to realize this potential in an energy efficient manner, a new problem---the significant increase in bit flips (bit toggles) due to compressed data transfers on the interconnect---needs to be properly addressed. This increase is so high that it can lead to a 2.1$\times$ average increase in the energy consumed by the on-chip communication channel. We showed two major potential solutions to this problem, called \emph{Energy Control} and \emph{Metadata Consolidation}, which can preserve most of the benefits of compression without a significant increase in energy consumption due to the bit toggle problem. \section{Future Work Directions} This dissertation on data compression significantly advances this subfield of computer architecture, but, as commonly happens, also highlights some completely new problems and opportunities. We conclude our dissertation by describing three such opportunities. \subsection{Compiler-Assisted Data Compression} One problem is the dependence of the existing compression algorithms on how the application data structures are mapped to main memory and on-chip caches (as we show in Chapter 3). For example, if pointer-like values are allocated side by side, they have a higher chance of being compressed well with the BDI compression algorithm, but putting together (e.g., in the same cache line) a pointer and a boolean value would obviously lead to higher dynamic range, and hence lower compressibility. The latter frequently happens when arrays or lists of structs are defined in the program with different types mixed together. For applications with such data types, we want to allocate objects such that the spatial locality of similar-valued members is preserved. More precisely, we would like to \emph{split} an object up into its respective members and allocate space for those members based on what kinds of values they hold. These decisions of splitting and allocation may be made at compile time or at runtime, depending on the implementation. The compression ratio improves when members with similar value types are \emph{pooled} (allocated) together, and our preliminary studies already show the significant potential of such an approach. We aim to extend this idea to improve the compressibility of main memory pages that suffer from mixing data of very different types. \begin{comment} \subsection{Execution on Compressed Data} Another major problem in all prior designs (e.g., Chapter 3 -- 7) is that compressed data needs to be decompressed somewhere on the way to the core.
While it might seem more attractive to always decompress data before execution, since it usually requires less change to the existing system, this internal limitation of performing all operations on the uncompressed data can potentially lead to significant energy waste and suboptimal performance. At the same time, if the final operation performed on the data is relatively simple (e.g., arithmetic comparison), it might be possible to perform these operations on the {\em compressed data itself} (this is somewhat similar to the execution on encrypted data in homomorphic encryption). Let us consider an example where an application performs a simple linear scan through an array searching for a certain value.\footnote{ This scenario can be quite common for existing database queries, search engine queries etc.} If this array is already compressed with a certain compression algorithm like BDI, then one simple strategy is to try to represent the searched value in the same base-delta form. If it cannot be represented in this form, then this value is not in this array, and there is no need to do any per-value comparisons. In cases where this representation is possible, we still need to do value comparisons, but more narrow -- instead of say 8-byte comparison for the original value, we can do 1--4 byte comparisons between deltas using SIMD (single instruction, multiple data) instructions. There are several major research questions that I would like to investigate. First, what are the correct abstractions to expose the compressed representation to the application? For example, can we combine this idea with some sort of virtual memory extension, so that the user can also see the compressed data directly in another part of the address space or even manipulate that data as long as the representation remains valid? Second, what is the best substrate to implement operations on compressed data? Should we use existing arithmetic instructions or special narrow SIMD-like instructions? What are the corresponding changes in hardware/software needed to support this idea? \subsection{Memory Bandwidth Compression for Visual Computing Workloads} While our preliminary results have demonstrated the potential benefits of using BDI (Chapter 3) and LCP (Chapter 5) for cache and main memory compression for traditional CPU workloads (e.g., SPEC~\cite{SPEC}, databases~\cite{tpc} and web workloads), we are interested in applying these techniques to a wider set of applications. In particular, we plan to explore data compression for {\em visual computing workloads}. There are two major overarching questions we pose in this work. First, what advantages can traditional CPU compression offer to visual computing workloads? Is it competitive with domain-specific compression schemes? Second, we plan to explore whether using {\em lossy compression} can reduce off-chip bandwidth even further. Can lossy compression be done in parallel with traditional compression mechanism (e.g., BDI)? In our preliminary experiments, we made several observations. The first one is that bandwidth savings is significantly dependent on the input image. The similarity in color of adjacent pixels increases the probability that a cache line contains similar data values, therefore increasing compressibility with BDI. Second, for lossy compression, we observe that dropping some (lower) bits in the significant of a floating point value (especially in the range of interest to us from 0.0 to 1.0) does not significantly change the floating point value. 
Even less significant is the change in 8-bit RGB color when the {\tt float} is converted back to the range 0-255. (Note that {\tt float} is the popular internal representation for images in imaging workloads that run on regular PCs.) We plan to use these initial observations in exploring the potential of general-purpose hardware-based compression for this type of workloads. In addition to the standard lossy mechanisms, we also aim to explore a ``lossy BDI'' mechanism where we can artificially truncate some bits for the deltas that {\em almost} fit with the specific flavor of BDI. \end{comment} \subsection{Data Compression for Non-Volatile Memories} The LCP~\cite{lcp-micro} main memory compression design was built on top of commodity DRAM main memory, but data compression is fundamentally independent of the technology used to build main memory. In our work, we aim to investigate the potential of extending LCP to other emerging non-volatile memory technologies (e.g., PCM~\cite{PCM,PCM2,PCM3,PCM4,PCM5,PCM6,PCM7}, STT-MRAM~\cite{STT-MRAM,STT-MRAM2}, RRAM~\cite{RRAM}) and hybrid memory technologies (e.g.,~\cite{hb1,hb2,hb3,hb4,hb5}). We expect that the longer access/write latencies of these emerging memory technologies will allow system designers to use more aggressive compression algorithms, and hence the capacity benefits of LCP-based designs can increase even further. \subsection{New Efficient Representations for Big Data} Many modern applications, such as machine learning applications, bioinformatics workloads, and modern databases, operate on data sets that significantly exceed the available main memory. At the same time, these applications do not always require full precision or accuracy in computation, as their input data are already significantly imprecise or noisy. In our future work, we would like to investigate the potential of partially replacing accesses to the huge data sets in these applications with accesses to much smaller representations or signatures of them. The key idea is to build a lower-resolution representation of the data set, keep it up-to-date in main memory, and refer to it when the corresponding information is missing in main memory. We then dynamically monitor whether the application meets its desired quality of output, and update the aggressiveness of our speculation accordingly. Our related work in recovery-free value prediction using approximate loads~\cite{rfvp-pact,rfvp-taco,rfvp-dt} hints that there is significant promise in this direction of research. \begin{comment} \section{Summary} In this dissertation, we showed that modern memory hierarchies not always utilize their limited resources efficiently by storing a lot of redundant bits of data. We proposed several techniques based on the general idea of hardware-based data compression to avoid this redundancy for (i) on-chip caches (Base-Delta-Immediate Compression and Compression-Aware Management Policies), (ii) main memory (Linearly Compressed Pages), and (iii) on-chip/off-chip bandwidth compression (Toggle-Aware bandwidth compression through Energy Control mechanism). As we showed in Section 8.1, the ideas behind these mechanisms can be extended in many directions for new research in this area of computer architecture and can also enable new mechanisms that could improve the efficiency of modern memory hierarchies even further. \end{comment} \chapter*{Other Works of This Author} I have been actively involved in research projects outside the scope of my thesis.
\textbf{Systems.} I worked on web search systems for mobile phones where users' interest in certain trending events can be predicted and the corresponding content efficiently prefetched to extend the phone's battery life~\cite{pockettrend}. Previously, I also worked on improving the compile time of existing compilers with machine learning techniques that can predict which optimizations are actually useful for performance~\cite{ml-compilers}. \textbf{Main Memory.} In collaboration with Vivek Seshadri, I proposed several ways of better utilizing existing DRAM-based main memories: (i) fast bulk data operations like copying and memory initialization using RowClone~\cite{rowclone}, and (ii) an enhanced virtual memory framework that enables fine-grained memory management~\cite{overlays}. In collaboration with Donghyuk Lee, I worked on (i) reducing the latency of existing DRAM memories~\cite{lee-hpca2015}, and (ii) increasing the bandwidth available for existing (and future) 3D stacking designs~\cite{smla}. In collaboration with Hasan Hassan, I also worked on reducing DRAM latency by exploiting our new observation that many DRAM rows can be accessed significantly faster since they have a sufficient amount of charge left~\cite{chargecache}. In collaboration with Kevin Chang, I investigated the potential of reducing different DRAM timing parameters to decrease DRAM latency, and the effect of doing so on the error rate~\cite{ChangKHGHLLPKM16}. \textbf{GPUs.} In collaboration with Nandita Vijaykumar, I worked on new ways of utilizing existing GPU resources through flexible data compression~\cite{caba,caba-book} and virtualization with oversubscription~\cite{proteus}. \textbf{Bioinformatics.} In collaboration with Hongyi Xin, I worked on new filters for alignment in genome read mapping~\cite{shd}, and techniques to find the optimal seeds for a particular read in the genome mapping process~\cite{oss}. \textbf{Approximate Computing.} Together with my collaborators from Georgia Tech, I worked on rollback-free value prediction mechanisms for both CPUs~\cite{rfvp-pact} and GPUs~\cite{rfvp-dt,rfvp-taco}. \chapter{\bibname} \bibliographystyle{plain} \section{Introduction} \label{sec:introduction} \blfootnote{Originally published as ``Toggle-Aware Bandwidth Compression for GPUs'' in the 22nd International Symposium on High Performance Computer Architecture, 2016~\cite{toggles-hpca}, and as ``Toggle-Aware Compression for GPUs'' in Computer Architecture Letters, 2015~\cite{toggles-cal}.} Modern data-intensive computing forces system designers to deliver good system performance under multiple constraints: shrinking power and energy envelopes ({\em power wall}), increasing memory latency ({\em memory latency wall}), and scarce and expensive bandwidth resources ({\em bandwidth wall}). While many different techniques have been proposed to address these issues, these techniques often offer a trade-off that improves one constraint at the cost of another. Ideally, system architects would like to improve one or more of these system parameters, e.g., on-chip and off-chip\footnote{Communication channel between the last-level cache and main memory.} bandwidth consumption, while simultaneously avoiding negative effects on other key parameters, such as overall system cost, energy, and latency characteristics. One potential way of addressing multiple constraints is to employ dedicated hardware-based \emph{data compression} mechanisms (e.g.,~\cite{fvc,fpc,c-pack,bdi,sc2}) across different data links in the system.
Compression exploits the high data redundancy observed in many modern applications~\cite{bdi,dcc,sc2,caba} and can be used to improve both capacity (e.g., of caches, DRAM, non-volatile memories~\cite{fvc,fpc,c-pack,bdi,sc2,lcp-micro,memzip,camp,caba,buri}) and bandwidth utilization (e.g., of on-chip and off-chip interconnects~\cite{reetu,CompressionPrefetching,LinkCompression,GPUBandwidthCompression,lcp-micro,memzip,caba}). Several recent works focus on bandwidth compression to decrease memory traffic by transmitting data in a compressed form in both CPUs~\cite{lcp-micro,LinkCompression,CompressionPrefetching} and GPUs~\cite{GPUBandwidthCompression,lcp-micro,caba}, which results in better system performance and lower energy consumption. Bandwidth compression proves to be particularly effective in GPUs because they are often bottlenecked by memory bandwidth~\cite{veynu,osp-isca13,OWL,sched,caba,SchedPIM,TOM,sch6}. GPU applications also exhibit high degrees of data redundancy~\cite{GPUBandwidthCompression,lcp-micro,caba}, leading to good compression ratios. While data compression can dramatically reduce the number of bit symbols that must be transmitted across a link, compression also carries two well-known overheads: (1) the latency, energy, and area overhead of the compression/decompression hardware~\cite{fpc,bdi}; and (2) the complexity and cost to support variable data sizes~\cite{iic-comp,dcc,lcp-micro,memzip}. Prior work has proposed solutions to both of these problems. For example, Base-Delta-Immediate compression~\cite{bdi} provides a low-latency, low-energy hardware-based compression algorithm. Decoupled and Skewed Compressed Caches~\cite{dcc,skewedCompressedCache} provide a mechanism to efficiently manage data recompaction and fragmentation in compressed caches. \subsection{Compression \& Communication Energy} In this chapter, we make a new observation that there is yet another important problem with data compression that must be addressed to implement energy-efficient communication: transferring data in compressed form (as opposed to uncompressed form) leads to a significant increase in the number of {\em bit toggles}, i.e., the number of wires that switch from 0 to 1 or 1 to 0\@. An increase in bit toggle count causes higher switching activity~\cite{nuca,tlc,desc} on the wires, leading to higher dynamic energy consumed by on-chip or off-chip interconnects. The bit toggle count increases for two reasons. First, the compressed data has a higher per-bit entropy because the same amount of information is now stored in fewer bits (the ``randomness'' of a single bit grows). Second, the variable-size nature of compressed data can negatively affect the word/flit data alignment in hardware. Thus, in contrast to the common wisdom that bandwidth compression saves energy (when it is effective), our key observation reveals a new trade-off: energy savings obtained by reducing bandwidth versus energy loss due to higher switching activity during compressed data transfers. This observation and the corresponding trade-off are the key contributions of this work. To understand (1) how applicable general-purpose data compression is for real GPU applications, and (2) the severity of the problem, we use six compression algorithms to analyze 221 discrete and mobile graphics application traces from a major GPU vendor and 21 open-source, general-purpose GPU applications.
Our analysis shows that although off-chip bandwidth compression achieves a significant compression ratio (e.g., more than 47\% average effective bandwidth increase with C-Pack~\cite{c-pack} across mobile GPU applications), it also greatly increases the bit toggle count (e.g., 2.2$\times$ average corresponding increase). This effect can significantly increase the energy dissipated in the on-chip/off-chip interconnects, which constitute a significant portion of the memory subsystem energy. \vspace{-0.2cm} \subsection{Toggle-Aware Compression} In this work, we develop two new techniques that make bandwidth compression for on-chip/off-chip buses more energy-efficient by limiting the overall increase in compression-related bit toggles. \emph{Energy Control (EC)} decides whether to send data in compressed or uncompressed form, based on a model that accounts for the compression ratio, the increase in bit toggles, and current bandwidth utilization. The key insight is that this decision can be made in a fine-grained manner (e.g., for every cache line), using a simple model to approximate the commonly-used $Energy \times Delay$ and $Energy \times Delay^2$ metrics. In this model, $Energy$ is directly proportional to the bit toggle count; $Delay$ is inversely proportional to the compression ratio and directly proportional to the bandwidth utilization. Our second technique, \emph{Metadata Consolidation (MC)}, reduces the negative effects of scattering the metadata across a compressed cache line, which happens with many existing compression algorithms~\cite{fpc,c-pack}. Instead, MC consolidates compression-related metadata in a contiguous fashion. Our toggle-aware compression mechanisms are generic and applicable to different compression algorithms (e.g., Frequent Pattern Compression (FPC)~\cite{fpc} and Base-Delta-Immediate (BDI) compression~\cite{bdi}), different communication channels (on-chip/off-chip buses), and different architectures (e.g., GPUs, CPUs, and hardware accelerators). We demonstrate that these mechanisms are mostly orthogonal to different data encoding schemes also used to minimize the bit toggle count (e.g., Data Bus Inversion~\cite{dbi}), and hence can be used together with them to enhance the energy efficiency of interconnects. Our extensive evaluation shows that our proposed mechanisms can significantly reduce the negative effect of bit toggling increase (in some cases the 2.2$\times$ increase in bit toggle count is completely eliminated), while preserving most of the benefits of data compression when it is useful -- hence the reduction in performance benefits from compression is usually within 1\%. This efficient trade-off leads to the reduction in (i) the DRAM energy that is as high as 28.1\% for some applications (8.3\% average reduction), and (ii) the total system energy (at most 8.9\%, 2.1\% on average). Moreover, we can dramatically reduce the energy cost to support data compression over the on-chip interconnect. For example, our toggle-aware compression mechanisms can reduce the original 2.1$\times$ increase in consumed energy with C-Pack compression algorithm to much more acceptable 1.1$\times$ increase. \begin{comment} In summary, we make the following contributions: \begin{itemize} \item We make a new observation that hardware-based bandwidth compression applied to on-chip/off-chip communication interfaces poses a new challenge for system designers: a potentially significant increase in the bit toggle count as a result of data compression. 
Without proper care, this increase can lead to significant energy overheads, not accounted for in prior works, when transferring compressed data. \item We propose a set of new mechanisms to address this new challenge: Energy Control and Metadata Consolidation. \item We provide a detailed analysis and evaluation of a large spectrum of GPU applications that justify both the usefulness of data compression for bandwidth compression in many real applications and the existence of the bit toggle problem for bandwidth compression. Our proposed solutions can deliver most of the benefits of bandwidth compression with only a minor increase in energy consumption, in contrast to the 2.2$\times$ growth in the energy consumption with the baseline compressed design. \end{itemize} \end{comment} \section{Background} \label{toggles:sec:background} Data compression is a powerful mechanism that exploits the existing redundancy in the applications' data to relax capacity and bandwidth requirements for many modern systems. Hardware-based data compression was explored in the context of on-chip caches~\cite{fvc,fpc,c-pack,bdi,dcc,sc2} and main memory~\cite{MXT,LinkCompression,MMCompression,lcp-micro,memzip}, but mostly for CPU-oriented applications. Several prior works~\cite{LinkCompression,lcp-micro,GPUBandwidthCompression,memzip,caba,Reetu1} looked at the specifics of memory bandwidth compression, where it is very critical to decide where and when to perform compression and decompression. While these works looked at the energy/power benefits of bandwidth compression, the compression overhead they considered was limited to the overhead of the compression/decompression logic and of the newly proposed mechanisms/designs. To the best of our knowledge, this is the first work that looks at the energy implications of compression for the data transferred over on-chip/off-chip buses. Depending on the type of communication channel, the transferred data bits have a different effect on the energy spent on communication. We summarize this effect for three major communication channel types. \textbf{On-chip Interconnect.} For full-swing on-chip interconnects, one of the dominant factors that defines the energy cost of a single data transfer (commonly called a flit) is the activity factor{\textemdash}the number of \emph{bit toggles} on the wires (i.e., wires switching from 0 to 1 or from 1 to 0). The bit toggle count for a particular flit depends on both the current flit's data and on the data that was just sent over the same wires. Several prior works~\cite{dbi,desc,zhang-1998,nuca,tlc} looked at more energy-efficient data communication in the context of on-chip interconnects~\cite{desc} where the number of bit toggles can be reduced. The key difference between our work and these prior works is that we aim to address the specific effect of the increase (sometimes a dramatic increase, see Section~\ref{toggles:sec:motivation}) in bit toggle count due to data compression. Our proposed mechanisms (described in Section~\ref{sec:idea}) are mostly orthogonal to these prior mechanisms and can be used in parallel with them to achieve even larger energy savings in data transfers. \textbf{DRAM bus.} In the case of DRAM (e.g., GDDR5~\cite{jedec-gddr5}), the energy attributed to the actual data transfer is usually less than the background and activate energy, but still significant (16\% on average based on our estimation with the Micron power calculator~\cite{micron-power}).
The second major distinction between on-chip and off-chip buses is the definition of bit toggles. In the case of DRAM, bit toggles are defined as the number of zero bits. Reducing the number of signal lines driving a low level (zero bit) results in reduced power dissipation in the termination resistors and output drivers~\cite{jedec-gddr5}. To reduce the number of zero bits, techniques like DBI (Data Bus Inversion) are commonly used. For example, DBI is part of the standard for GDDR5~\cite{jedec-gddr5} and DDR4~\cite{DDR4}. As we will show later in Section~\ref{toggles:sec:motivation}, these techniques are not effective enough to handle the significant increase in bit toggles due to data compression. \textbf{PCIe and SATA.} For SATA and PCIe, data is transmitted in a serial fashion at much higher frequencies than typical parallel bus interfaces. Under these conditions, bit toggles impose different design considerations and implications. Data is transmitted across these buses without an accompanying clock signal, which means that the transmitted bits need to be synchronized with a clock signal recovered by the receiver. This \emph{clock recovery} requires \emph{frequent} bit toggles to prevent loss of information. In addition, it is desirable that the \emph{running disparity}{\textemdash}which is the difference in the number of one and zero bits transmitted{\textemdash}be minimized. This condition is referred to as \emph{DC balance} and prevents distortion in the signal. Data is typically scrambled using encodings like the 8b/10b encoding~\cite{8b10b} to balance the number of ones and zeros while ensuring frequent transitions. These encodings have a high overhead in terms of the amount of additional data transmitted, but they obscure any difference in bit transitions between compressed and uncompressed data. As a result, we do not expect further compression or toggle-rate reduction techniques to apply well to interfaces like SATA and PCIe\@. \textbf{Summary.} With on-chip interconnects, \emph{any bit transitions} increase the energy expended during data transfers. In the case of DRAM, energy spent during data transfers increases with an increase in \emph{zero} bits. Data compression exacerbates the energy expenditure in both of these channels. For PCIe and SATA, data is scrambled before transmission, which obscures any impact of data compression; hence, our proposed mechanisms are not applicable to these channels. \section{Motivation and Analysis} \label{toggles:sec:motivation} In this work, we examine the use of six compression algorithms for bandwidth compression in GPU applications, taking into account bit toggles: (i) \emph{FPC} (Frequent Pattern Compression)~\cite{fpc}; (ii) \emph{BDI} (Base-Delta-Immediate Compression)~\cite{bdi}; (iii) \emph{BDI+FPC} (combined FPC and BDI)~\cite{lcp-micro}; (iv) \emph{LZSS} (Lempel-Ziv compression)~\cite{lz,MXT}; (v) \emph{Fibonacci} (a graphics-specific compression algorithm)~\cite{fibonacci}; and (vi) \emph{C-Pack}~\cite{c-pack}. All of these compression algorithms exploit different forms of redundancy in memory data. For example, the FPC and C-Pack algorithms look for different static patterns in the data (e.g., high-order bits are zeros, or the word consists of repeated bytes). At the same time, C-Pack allows partial matching with some locally defined dictionary entries, which usually gives it better coverage than FPC\@.
In contrast, the BDI algorithm is based on the observation that a whole cache line of data can commonly be represented as a set of one or two bases and the deltas from these bases. This allows some cache lines to be compressed much more efficiently than with FPC and even C-Pack, but potentially leads to lower coverage. For completeness of our compression algorithm analysis, we also examine the well-known software-based mechanism called LZSS, and the recently proposed graphics-oriented Fibonacci algorithm. To ensure our conclusions are practically applicable, we analyze both real GPU applications (\emph{discrete} and \emph{mobile} ones) with actual data sets provided by a major GPU vendor, and \emph{open-sourced} GPU computing applications~\cite{sdk,rodinia,mars,lonestar}. The primary difference is that discrete applications have more single- and double-precision floating-point data, mobile applications have more integers, and open-source applications are in between. Figure~\ref{fig:cr-all} shows the potential of these six compression algorithms in terms of effective bandwidth increase, averaged across all applications. These results exclude simple data patterns (e.g., zero cache lines) that are already handled by modern GPUs efficiently, and assume practical boundaries on bandwidth compression ratios (e.g., for on-chip interconnect, the highest possible compression ratio is 4.0, because the minimum flit size is 32 bytes while the uncompressed packet size is 128 bytes). \begin{figure}[t!] \centering \includegraphics[width=0.9\textwidth]{toggles/figures/CompRatioAll.pdf} \caption{Effective bandwidth compression ratios for various GPU applications and compression algorithms (higher bars are better).} \label{fig:cr-all} \end{figure} First, for the 167 discrete GPU applications (left side of Figure~\ref{fig:cr-all}), all algorithms provide a substantial increase in available bandwidth (25\%--44\% on average for different compression algorithms). It is especially interesting that simple compression algorithms are very competitive with the more complex GPU-oriented \emph{Fibonacci} algorithm and the software-based Lempel-Ziv algorithm~\cite{lz}. Second, for the 54 mobile GPU applications (middle part of Figure~\ref{fig:cr-all}), bandwidth benefits are even more pronounced (reaching up to 57\% on average with the Fibonacci algorithm). Third, for the 21 open-sourced GPU computing applications, the bandwidth benefits are the highest (as high as 72\% on average with the Fibonacci and BDI+FPC algorithms). Overall, we conclude that existing compression algorithms (including simple, general-purpose ones) can be effective in providing high on-chip/off-chip bandwidth compression for GPU compute applications. Unfortunately, the benefits of compression come with additional costs. Two overheads of compression are well-known: (i) additional data processing due to compression/decompression, and (ii) hardware changes needed to transfer variable-length cache lines. While these two problems are significant, multiple compression algorithms~\cite{fpc,fvc,bdi,ZeroContent} have been proposed to minimize the overheads of data compression/decompression. Several designs~\cite{memzip,GPUBandwidthCompression,lcp-micro,caba} integrate bandwidth compression into existing memory hierarchies. In this work, we identify a new challenge with data compression that needs to be addressed: the increase in the total number of bit toggles as a result of compression.
On-chip data communication energy is directly proportional to the number of bit toggles on the communication channel~\cite{nuca,tlc,desc}, due to the charging and discharging of the channel wire capacitance with each toggle. Data compression may increase or decrease the bit toggle count on the communication channel for any given data. As a result, the energy consumed for moving this data can change. Figure~\ref{fig:tr-all} shows the increase in bit toggle count for all GPU applications in our workload pool with the six compression algorithms over a baseline that employs zero-line compression (as this is already done efficiently in modern GPUs). The total number of bit toggles is computed such that it already includes the positive effects of compression (i.e., the decrease in the total number of bits sent due to compression). \begin{figure}[t!] \centering \includegraphics[width=0.9\textwidth]{toggles/figures/TogglesBaseAll.pdf} \caption{Bit toggle count increase due to compression.} \label{fig:tr-all} \end{figure} We make two observations. First, all compression algorithms consistently increase the bit toggle count. The effect is significant yet smaller (12\%--20\% increase) in discrete applications, mostly because they include floating-point data, which already has high toggle rates (31\% on average across discrete applications) and is less amenable to compression. This increase in bit toggle count happens even though we transfer less data due to compression. If this effect were only due to the higher density of information per bit, we would expect an increase in the bit toggle rate (the relative percentage of bit toggles per data transfer), but not in the bit toggle count (the total number of bit toggles). Second, the increase in bit toggle count is more dramatic for mobile and open-sourced applications (right two-thirds of Figure~\ref{fig:tr-all}), exceeding 2$\times$ in four cases.\footnote{The FPC algorithm is not as effective in compressing mobile application data in our pool, and hence does not greatly affect the bit toggle count.} For all types of applications, the increase in bit toggle count can lead to a significant increase in the dynamic energy consumption of the communication channels. We study the relationship between the achieved compression ratio and the resultant increase in bit toggle count. Figure~\ref{fig:cr-tr} shows the compression ratio and the normalized bit toggle count of each discrete GPU application after compression with the FPC algorithm.\footnote{We observe similarly-shaped curves for other compression algorithms.} Clearly, there is a positive correlation between the compression ratio and the increase in bit toggle count, although it is not a simple direct correlation---a higher compression ratio does not necessarily mean a higher increase in bit toggle count. To make things worse, the behaviour might change within an application due to phase and data pattern changes. We draw two major conclusions from this study. First, it strongly suggests that successful compression may lead to increased dynamic energy dissipation by on-chip/off-chip communication channels due to increased toggle counts. Second, these results show that any efficient solution for this problem should probably be dynamic in nature to adapt to data pattern changes during application execution. \begin{figure}[h!] \centering \includegraphics[width=0.9\textwidth]{toggles/figures/ToggleCompRatio.pdf} \caption{Normalized bit toggle count vs.
compression ratio (with the FPC algorithm) for each of the discrete GPU applications.} \label{fig:cr-tr} \end{figure} To understand the toggle increase phenomenon, we examined several example cache lines where bit toggle count increases significantly after compression. Figures~\ref{fig:nocomp-example} and \ref{fig:fpc-example} show one of these cache lines with and without compression (FPC), assuming 8-byte flits. Without compression, the example cache line in Figure~\ref{fig:nocomp-example}, which consists of 8-byte data elements (4-byte indices and 4-byte pointers) has a very low number of toggles (2 toggles per 8-byte flit). This low number of bit toggles is due to the favourable alignment of the uncompressed data with the boundaries of flits (i.e., transfer granularity in the on-chip interconnect). With compression, the toggle count of the same cache line increases significantly, as shown in Figure~\ref{fig:fpc-example} (e.g., 31 toggles for a pair of 8-byte flits in this example). This increase is due to two major reasons. First, because compression removes zero bits from narrow values, the resulting higher per-bit entropy leads to higher ``randomness'' in data and, thus, a larger toggle count. Second, compression negatively affects the alignment of data both at the byte granularity (narrow values replaced with shorter 2-byte versions) and bit granularity (due to the 3-bit metadata storage; e.g., $0\text{x}5$ is the encoding metadata used to indicate narrow values for the FPC algorithm). \begin{figure}[ht!] \centering \includegraphics[width=0.9\textwidth]{toggles/figures/NoCompExample.pdf} \caption{Bit toggles without compression.} \label{fig:nocomp-example} \end{figure} \begin{figure}[ht!] \centering \includegraphics[width=0.9\textwidth]{toggles/figures/FPCExample.pdf} \caption{Bit toggles after compression.} \label{fig:fpc-example} \end{figure} \section{Toggle-aware Compression} \label{sec:idea} \subsection{Energy vs. Performance Trade-off} Data compression can reduce energy consumption and improve performance by reducing communication bandwidth demands. At the same time, data compression can potentially lead to significantly higher energy consumption due to increased bit toggle count. To properly evaluate this trade-off, we examine commonly-used metrics like $Energy \times Delay $ and $Energy \times Delay^2$~\cite{ed1}. We estimate these metrics with a simple model, which helps to make compression-related performance/energy trade-offs. We define the $Energy$ of a single data transfer to be proportional to the bit toggle count associated with it. Similarly, $Delay$ is defined to be inversely proportional to performance, which we assume is proportional to bandwidth reduction (i.e., compression ratio) and bandwidth utilization. The intuition behind this heuristic is that compression ratio reflects on how much additional bandwidth we can get, while bandwidth utilization shows how useful this additional bandwidth is in improving performance. Based on the observations above, we develop two techniques to enable {\em toggle-aware compression} to reduce the negative effects of increased bit toggle count. 
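To make this model concrete, the following C sketch shows one way such a decision heuristic could be expressed. It is only an illustration under the assumptions stated in the comments (all names, the 50\% utilization threshold, and the exact weighting are ours, not the hardware implementation); $Energy$ is approximated by the bit toggle count of a transfer and $Delay$ by the inverse of the compression ratio, with the bandwidth-utilization weighting anticipating the Energy Control mechanism described next.
\begin{verbatim}
/* Minimal sketch (not the authors' hardware) of an Energy x Delay style
 * decision: Energy is approximated by the bit toggle count of a transfer,
 * Delay by the inverse of the compression ratio.  All names and the 0.5
 * utilization threshold are illustrative assumptions.                    */
#include <stdbool.h>

typedef struct {
    unsigned toggles_uncompressed;   /* T0: bit toggles if sent raw        */
    unsigned toggles_compressed;     /* T1: bit toggles if sent compressed */
    double   compression_ratio;      /* CR: uncompressed size / compressed */
    double   bandwidth_utilization;  /* BU in [0, 1)                       */
} transfer_stats_t;

/* Returns true if the cache line should be sent in compressed form.
 * delay_exponent = 1 approximates Energy x Delay, 2 approximates
 * Energy x Delay^2.                                                       */
bool ec_send_compressed(const transfer_stats_t *s, int delay_exponent)
{
    /* When the bus is heavily utilized, the extra bandwidth from
     * compression is more valuable, so give the compression ratio
     * more weight (assumed threshold: 50% utilization).                   */
    double cr = s->compression_ratio;
    if (s->bandwidth_utilization > 0.5)
        cr *= 1.0 / (1.0 - s->bandwidth_utilization);

    /* Energy ~ toggles, Delay ~ 1/CR; the uncompressed transfer has
     * CR = 1, so its metric is simply its toggle count.                   */
    double metric_compressed   = s->toggles_compressed;
    double metric_uncompressed = s->toggles_uncompressed;
    for (int i = 0; i < delay_exponent; i++)
        metric_compressed /= cr;

    return metric_compressed < metric_uncompressed;
}
\end{verbatim}
Comparing the two metrics directly, rather than a fixed threshold, lets the same sketch cover both the linear and the quadratic variants of the decision function.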
\subsection{Energy Control (EC)} \label{sec:ec} We propose a generic \emph{Energy Control} (EC) mechanism that can be applied along with any current (or future) compression algorithm.\footnote{In this work, we assume that only memory bandwidth is compressed, while on-chip caches and main memory still store data in uncompressed form.} It aims to achieve a high compression ratio while minimizing the bit toggle count. As shown in Figure~\ref{fig:ec-detailed}, the Energy Control mechanism uses a generic decision function that considers (i) the bit toggle count for transmitting the original data ($T_{0}$), (ii) the bit toggle count for transmitting the data in compressed form ($T_{1}$), (iii) compression ratio ($CR$), (iv) current bandwidth utilization ($BU$), and possibly other metrics of interest that can be gathered and analyzed dynamically to decide whether to transmit the data compressed or uncompressed. Using this approach, it is possible to achieve a desirable trade-off between overall bandwidth reduction and increase/decrease in communication energy. The decision function that compares the compression ratio ($A$) and toggle ratio ($B$) can be linear ($ A \times B > 1$, based on $Energy \times Delay $) or quadratic ($ A \times B^{2} > 1$, based on $Energy \times Delay^2$).\footnote{We also find the specific coefficient in the relative weight between $Energy$ and $Delay$ empirically.} Specifically, when the bandwidth utilization ($BU$) is very high (e.g., $BU > 50\%$), we incorporate it into our decision function by multiplying the compression ratio with $ \frac{1}{1 - BU} $, hence allocating more weight to the compression ratio. Since the data patterns during application execution could change drastically, we expect our mechanism to be applied dynamically (either per cache line or per region of execution) rather than statically for the whole application execution. \begin{figure}[h!] \centering \includegraphics[width=0.9\textwidth]{toggles/figures/EC-Detailed.pdf} \caption{Energy Control decision mechanism.} \label{fig:ec-detailed} \end{figure} \subsection{Metadata Consolidation} \label{sec:mc} Traditional energy-oblivious compression algorithms are not optimized to minimize the bit toggle count. Most of these algorithms~\cite{c-pack,fpc,fibonacci} have distributed metadata to efficiently track the redundancy in data, e.g., several bits per word to represent the current pattern used for encoding. These metadata bits can significantly increase the bit toggle count as they shift the potentially good alignment between different words within a cache line (Section~\ref{toggles:sec:motivation}). It is possible to enhance these compression algorithms (e.g., FPC and C-Pack) such that the increase in bit toggle count after compression is smaller. Metadata Consolidation (MC) is a new technique that aims to achieve this. The key idea of MC is to consolidate compression-related metadata into a {\em single contiguous metadata block} instead of storing (or scattering) such metadata in a fine-grained fashion, e.g., on a per-word basis. We can locate this single metadata block either before or after the actual compressed data (this can increase decompression latency, since the decompressor needs the metadata before it can interpret the compressed data). The major benefit of MC is that it eliminates misalignment at the bit granularity. In cases where a cache line has a majority of similar patterns, a significant portion of the toggle count increase can be avoided.
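As a simple software illustration (not the exact hardware encoding), the sketch below packs an FPC-like compressed line so that all per-word metadata codes form one contiguous block followed by the payloads; the structure, field sizes, and byte-level packing are illustrative assumptions, since the actual hardware packs the 3-bit codes more tightly.
\begin{verbatim}
/* Minimal sketch of Metadata Consolidation for an FPC-like scheme: each
 * compressed word has a small metadata code and a variable-length payload.
 * The struct layout and byte-level packing are illustrative assumptions.  */
#include <stddef.h>
#include <stdint.h>
#include <string.h>

typedef struct {
    uint8_t code;        /* per-word encoding metadata (e.g., 3 bits in FPC) */
    uint8_t payload[8];  /* compressed payload bytes for this word           */
    size_t  payload_len; /* number of valid payload bytes                    */
} compressed_word_t;

/* Packs n compressed words so that all metadata codes come first, followed
 * by all payloads; returns the number of bytes written to out.             */
size_t pack_with_mc(const compressed_word_t *words, size_t n, uint8_t *out)
{
    size_t pos = 0;
    for (size_t i = 0; i < n; i++)        /* contiguous metadata block    */
        out[pos++] = words[i].code;
    for (size_t i = 0; i < n; i++) {      /* then the compressed payloads */
        memcpy(&out[pos], words[i].payload, words[i].payload_len);
        pos += words[i].payload_len;
    }
    return pos;
}
\end{verbatim}
With such a layout, the payloads of consecutive words are no longer shifted by interleaved metadata bits, which is what restores much of the original word alignment.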
Figure~\ref{fig:mc} shows an example cache line compressed with the FPC algorithm, with and without MC. We assume 4-byte flits. Without MC, the bit toggle count between the first two flits is 18 (due to per-word metadata insertion). With MC, the corresponding bit toggle count is only 2, showing the effectiveness of MC in reducing bit toggles. \begin{figure}[h!] \centering \includegraphics[width=0.9\textwidth]{toggles/figures/Metadata.pdf} \caption{Bit toggle count w/o and with Metadata Consolidation.} \label{fig:mc} \end{figure} \section{EC Architecture} \label{toggles:sec:design} In this work, we assume a system where global on-chip network and main memory communication channels are augmented with compressor and decompressor units as described in Figure~\ref{fig:system-icnt} and Figure~\ref{fig:system-DRAM}. While it is possible to store data in the compressed form as well (e.g., to improve the capacity of on-chip caches~\cite{fvc,fpc,bdi,c-pack,dcc,sc2}), the corresponding changes come with potentially significant hardware complexity that we would like to avoid in our design.\ignore{In our system, the data traffic coming in and out of the channel is attempted to be compressed with one (or a few) compression algorithms.} We first attempt to compress the data traffic coming in and out of the channel with one (or a few) compression algorithms. The results of the compression, both the compressed cache line size and data, are then forwarded to the Energy Control (EC) logic that is described in detail in Section~\ref{sec:idea}. EC decides whether it is beneficial to send data in the compressed or uncompressed form, after which the data is transferred over the communication channel. It is then decompressed if needed at the other end, and the data flow proceeds normally. In the case of main memory bus compression (Figure~\ref{fig:system-DRAM}), additional EC and compressor/decompressor logic can be implemented in the already existing base-layer die assuming stacked memory organization~\cite{hbm,hmc}, or in the additional layer between DRAM and the main memory bus. Alternatively, the data can be stored in the compressed form but without any capacity benefits~\cite{GPUBandwidthCompression,memzip}. \begin{figure}[h!] \centering \includegraphics[width=0.9\textwidth]{toggles/figures/ICNT-System2.pdf} \caption{System overview with interconnect compression and EC.} \label{fig:system-icnt} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.9\textwidth]{toggles/figures/DRAM-System2.pdf} \caption{System overview with off-chip bus compression and EC.} \label{fig:system-DRAM} \end{figure} \subsection{Toggle Computation for On-Chip Interconnect} As described in Section~\ref{sec:idea}, our proposed mechanism, EC, aims to decrease the negative effect of data compression on bit toggling while preserving most of the compression benefits. GPU on-chip communication is performed via exchanging packets at a cache line size granularity. But the physical width of the on-chip interconnect channels is usually several times smaller than the size of a cache line (e.g., 32-byte wide channels for 128-byte cache lines). As a result, the communication packet is divided into multiple \emph{flits} that are stored at the transmission queue buffer before being transmitted over the communication channel in a sequential manner. Our approach adds a simple bit toggle computation logic that computes the bit toggle count across flits awaiting transmission. 
This logic consists of a flit-wide array of XORs and a tree adder to compute the \emph{Hamming distance}, i.e., the number of bits that differ, between two flits. We perform this computation for both compressed and uncompressed data, and the results are then fed to the EC decision function (as described in Figure~\ref{fig:ec-detailed}). This computation can be done sequentially while reusing the transmission queue buffers to store intermediate compressed or uncompressed flits, or in parallel with the addition of some dedicated flit buffers (to reduce the latency overhead). In this work, we assume the second approach. \subsection{Toggle Computation for DRAM} For modern DRAMs~\cite{jedec-gddr5,DDR4}, the bit toggle definition is different from the definition we used for on-chip interconnects. As we described in Section~\ref{toggles:sec:background}, in the context of the main memory bus, what matters is the number of zero bits per data transfer. This defines how we compute the toggle count for DRAM transfers: we simply count the zero bits{\textemdash}which is known as the \emph{Hamming weight} or the \emph{population count} of the inverted value. This difference in the definition of the toggle count also means that the toggle count of the current transfer does not depend on the previously transferred data, so no additional buffering is required to perform the computation. \subsection{EC and Data Bus Inversion} Modern communication channels use different techniques to minimize (and sometimes to maximize) the bit toggle count to reduce the energy consumption and/or preserve signal integrity. We now briefly summarize two major techniques used in existing on-chip/off-chip interconnects: Data Bus Inversion and Data Scrambling, and their effect on our proposed EC mechanism. \input{toggles/sections/dbi} \subsubsection{Data Scrambling} To minimize the signal distortion, some modern DRAM designs~\cite{ddr3-jedec,scrambling} use a \emph{data scrambling} technique that aims to minimize the running data disparity, i.e., the difference between the number of 0s and 1s, in the transmitted data. One way to ``randomize'' the bits is by XORing them with pseudo-random values generated at boot time~\cite{scrambling}. While techniques like data scrambling can potentially decrease signal distortion, they also increase the dynamic energy of DRAM data transfers. This approach also contradicts what several designs aimed to achieve by using DBI for GDDR5~\cite{jedec-gddr5} and DDR4~\cite{DDR4}, since the bits become much more random. In addition, using pseudo-random data scrambling techniques can be motivated by the existence of certain pathological data patterns~\cite{scrambling}, where signal integrity requires a much lower operational frequency. At the same time, those patterns can usually be handled well with data compression algorithms that can provide the appropriate data transformation to avoid repetitive failures at a certain frequency. For the rest of this chapter, we assume GDDR5 memory without scrambling. \subsection{Complexity Estimation} \label{sec:overhead} Toggle count computation is the main hardware addition introduced by the EC mechanism. We modeled and synthesized the toggle-computation block in Verilog. Our results show that the required logic can be implemented in an energy-efficient way (4pJ per 128-byte cache line with 32-byte flits for a 65nm process\footnote{This is significantly lower than the corresponding energy for compression and decompression~\cite{memzip}.}).
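The toggle-computation logic estimated above can also be modeled in software. The following C sketch (a functional model, not the synthesized Verilog) captures the two toggle definitions used in this chapter, with the 64-bit word type and flit layout as illustrative assumptions.
\begin{verbatim}
/* Software model (not the synthesized Verilog) of the two toggle
 * definitions: Hamming distance between consecutive flits for the on-chip
 * interconnect, and the number of zero bits for the DRAM bus.              */
#include <stddef.h>
#include <stdint.h>

static unsigned popcount64(uint64_t x)
{
    unsigned n = 0;
    while (x) { x &= x - 1; n++; }   /* clear the lowest set bit */
    return n;
}

/* On-chip interconnect: toggles = bits that differ between each flit and
 * the flit previously driven on the same wires.  A flit is modeled as
 * words_per_flit consecutive 64-bit words (e.g., 4 words for 32 bytes).    */
unsigned interconnect_toggles(const uint64_t *flits, size_t nwords,
                              size_t words_per_flit)
{
    unsigned toggles = 0;
    for (size_t i = words_per_flit; i < nwords; i++)
        toggles += popcount64(flits[i] ^ flits[i - words_per_flit]);
    return toggles;
}

/* DRAM bus (GDDR5-style): "toggles" = number of zero bits driven,
 * independent of the previously transferred data.                          */
unsigned dram_toggles(const uint64_t *words, size_t nwords)
{
    unsigned zeros = 0;
    for (size_t i = 0; i < nwords; i++)
        zeros += 64 - popcount64(words[i]);
    return zeros;
}
\end{verbatim}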
\ignore{ In our experiments, we observe that even this overhead can be significantly reduced with simple sampling techniques that monitor the decisions made with EC over time. More precisely, the decision made by EC stays very stable over time, and hence we can avoid recomputing the toggle count for every transferred cache line. } \section{Methodology} \label{toggles:sec:methodology} In our work, we analyze two distinct groups of applications. First, a group of 221 applications from a major GPU vendor in the form of memory traces with real application data. This group consists of two subgroups: \emph{discrete} applications (e.g., HPC workloads, general-purpose applications, physics, etc.) and \emph{mobile} applications. As there is no existing simulator that can run these traces for cycle-accurate simulation, we use them to demonstrate (i) the benefits of compression on a large pool of existing applications operating on real data, and (ii) the existence of the toggle count increase problem. Second, we use 21 \emph{open-sourced} GPU computing applications derived from CUDA SDK~\cite{sdk} (\emph{BFS, CONS, JPEG, LPS, MUM, RAY, SLA, TRA}), Rodinia~\cite{rodinia} (\emph{hs, nw}), Mars~\cite{mars} (\emph{KM, MM, PVC, PVR, SS}), and Lonestar~\cite{lonestar} (\emph{bfs, bh, mst, sp, sssp}) suites. We evaluate the performance of our proposed mechanisms with the second group of applications using the GPGPU-Sim 3.2.2~\cite{GPGPUSim} cycle-accurate simulator. Table~\ref{tab:meth} provides all the details of the simulated system. Additionally, we use GPUWattch~\cite{gpuwattch} for energy analysis, with proper modifications to reflect the bit-toggling effect. We run all applications to completion or 1 billion instructions (whichever comes first). Our evaluation in Section~\ref{toggles:sec:results} demonstrates detailed results for applications that exhibit at least 10\% bandwidth compressibility. \textbf{Evaluated Metrics.} We present Instructions per Cycle (\emph{IPC}) as the primary performance metric. In addition, we also use average bandwidth utilization, defined as the fraction of total DRAM cycles that the DRAM data bus is busy, and \emph{compression ratio}, defined as the effective bandwidth increase. For both on-chip interconnect and DRAM, we assume the highest possible compression ratio of 4.0. For on-chip interconnect, this is because we assume a flit size of 32 bytes for a 128-byte packet. For DRAM, there are multiple ways of achieving the desired flexibility in data transfers: (i) increasing the size of a cache line (from 128 bytes to 256 bytes), (ii) using sub-ranking as was proposed for DDR3 in MemZip~\cite{memzip}, (iii) transferring multiple compressed cache lines instead of one uncompressed line as in the LCP design~\cite{lcp-micro}, and (iv) any combination of the first three approaches. Existing GPUs (e.g., the GeForce FX series) are known to support 4:1 data compression~\cite{maxwell}.
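To illustrate how the effective compression ratio is bounded by these transfer constraints, the short sketch below (assuming the 128-byte packets and 32-byte flits listed above; the function name is ours) rounds the compressed size up to whole flits, which naturally caps the per-line ratio at 4.0.
\begin{verbatim}
/* Minimal sketch of the effective compression ratio of one cache line
 * under flit-granularity transfers; the constants match the configuration
 * described above and the function name is an illustrative assumption.    */
#include <stddef.h>

#define PACKET_BYTES 128u   /* uncompressed packet (cache line) size */
#define FLIT_BYTES    32u   /* minimum transfer granularity          */

double effective_compression_ratio(size_t compressed_bytes)
{
    /* Round the compressed size up to a whole number of flits;
     * at least one flit is always sent.                         */
    size_t flits = (compressed_bytes + FLIT_BYTES - 1) / FLIT_BYTES;
    if (flits == 0)
        flits = 1;
    return (double)PACKET_BYTES / (double)(flits * FLIT_BYTES);
}
\end{verbatim}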
\begin{table}[!t] \vspace{-0.3cm} \begin{scriptsize} \centering \begin{tabular}{ll} \toprule System Overview & 15 SMs, 32 threads/warp, 6 memory channels\\ \cmidrule(rl){1-2} Shader Core Config & 1.4GHz, GTO scheduler~\cite{tor-micro12}, 2 schedulers/SM\\ \cmidrule(rl){1-2} Resources / SM & 48 warps/SM, 32K registers, 32KB Shared Mem.\\ \cmidrule(rl){1-2} L1 Cache & 16KB, 4-way associative, LRU \\ \cmidrule(rl){1-2} L2 Cache & 768KB, 16-way associative, LRU \\ \cmidrule(rl){1-2} Interconnect & 1 crossbar/direction (15 SMs, 6 MCs), 1.4GHz \\ \cmidrule(rl){1-2} Memory Model & 177.4GB/s BW, 6 GDDR5 Memory Controllers,\\ & FR-FCFS scheduling, 16 banks/MC \\ \cmidrule(rl){1-2} GDDR5 Timing~\cite{jedec-gddr5} & $t_{CL}=12, t_{RP}=12, t_{RC}=40, t_{RAS}=28,$\\ &$t_{RCD}=12, t_{RRD}=6, t_{CLDR}=5, t_{WR}=12$ \\ \bottomrule \end{tabular}% \vspace{-0.1cm} \caption{Major Parameters of the Simulated Systems.} \label{tab:meth}% \end{scriptsize}% \vspace{-0.4cm} \end{table}% \section{Evaluation} \label{toggles:sec:results} We present our results for the two communication channels described above: (i) the off-chip DRAM bus and (ii) the on-chip interconnect. We exclude the LZSS compression algorithm from our detailed evaluation since its hardware implementation is not practical, with hundreds of cycles of compression/decompression latency~\cite{MXT}. \subsection{DRAM Bus Results} \subsubsection{Effect on Toggles and Compression Ratio} We analyze the effectiveness of the proposed EC optimization by examining how it affects both the number of toggles (Figure~\ref{fig:tr-ec-all}) and the compression ratio (Figure~\ref{fig:cr-ec-all}) for five compression algorithms. In both figures, results are averaged across all applications within the corresponding application subgroup and normalized to the baseline design with no compression. Unless specified otherwise, we use the EC mechanism with the decision function based on the $Energy \times Delay^2$ metric using our model from Section~\ref{sec:ec}. We make two observations from these figures. \begin{figure}[h!] \centering \includegraphics[width=0.9\textwidth]{toggles/figures/TogglesEC-All.pdf} \caption{Effect of Energy Control on the number of toggles on DRAM bus.} \label{fig:tr-ec-all} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.9\textwidth]{toggles/figures/CompRatioEC-All.pdf} \caption{Effective DRAM bandwidth increase for different applications.} \label{fig:cr-ec-all} \end{figure} First, we observe that EC can effectively reduce the overhead in terms of toggle count for both discrete and mobile GPU applications (Figure~\ref{fig:tr-ec-all}). For discrete GPU applications, the toggle reduction varies from 6\% to 16\% on average, and the toggle increase due to compression is almost completely eliminated in the case of the Fibonacci compression algorithm. For mobile GPU applications, the reduction is as high as 51\% on average for the BDI+FPC compression algorithm (more than 32$\times$ reduction in \emph{extra} bit toggles), with only a modest reduction\footnote{The compression ratio reduces because EC decides to transfer some compressible lines in the uncompressed form.} in compression ratio. Second, the reduction in compression ratio with EC is usually minimal. For example, in discrete GPU applications, this reduction for the BDI+FPC algorithm is only 0.7\% on average (Figure~\ref{fig:cr-ec-all}).
For mobile and open-sourced GPU applications, the reduction in compression ratio is more noticeable (e.g., 9.8\% on average for Fibonacci with mobile applications), which is still a very attractive trade-off since the 2.2$\times$ growth in the number of toggles is practically eliminated. We conclude that EC offers an effective way to control the energy efficiency of data compression for DRAM by applying it only when it provides a high compression ratio with only a small increase in the number of toggles. While the average numbers presented express the general effect of the EC mechanism on both the number of toggles and compression ratio, it is also interesting to see how the results vary for individual applications. To perform this deeper analysis, we pick one compression algorithm (\emph{C-Pack}), and a single subgroup of applications (\emph{Open-Sourced}), and show the effect of compression with and without EC on the toggle count (Figure~\ref{fig:toggles-c-pack}) and compression ratio (Figure~\ref{fig:cr-c-pack}). We also study two versions of the EC mechanism: (i) \emph{EC1} which uses the $Energy \times Delay$ metric and (ii) \emph{EC2} which uses the $Energy \times Delay^2$ metric. We make three major observations from these figures. \begin{figure}[h!] \centering \includegraphics[width=0.9\textwidth]{toggles/figures/Toggles-C-Pack.pdf} \caption{Effect of Energy Control with C-Pack compression algorithm on the number of DRAM toggles.} \label{fig:toggles-c-pack} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.9\textwidth]{toggles/figures/CompRatio-C-Pack.pdf} \caption{Effective DRAM bandwidth increase with C-Pack algorithm.} \label{fig:cr-c-pack} \end{figure} First, both the increase in bit toggle count and compression ratio vary significantly for different applications. For example, \emph{bfs} from the Lonestar application suite has a very high compression ratio of more than 2.5$\times$, but its increase in toggle count is relatively small (only 17\% for baseline C-Pack compression without EC mechanism). In contrast, \emph{PageViewRank} application from the Mars application suite has more than 10$\times$ increase in toggles with 1.6$\times$ compression ratio. This is because different data is affected differently by data compression. There can be cases where the overall toggle count is lower than in the uncompressed baseline even without EC mechanism (e.g., \emph{LPS}). Second, for most of the applications in our workload pool, the proposed mechanisms (EC1 and EC2) can significantly reduce the bit toggle count while retaining most of the benefits of compression. For example, for \emph{heartwall} we reduce the bit toggle count with our EC2 mechanism from 2.5$\times$ to 1.8$\times$ by only sacrificing 8\% of the compression ratio (from 1.83$\times$ to 1.75$\times$). This could significantly reduce the bit toggling energy overhead with C-Pack algorithm while preserving most of the bandwidth (and hence potentially performance) benefits. Third, as expected, EC1 is more aggressive in disabling compression, because it weights bit toggles and compression ratio equally in the trade-off, while in the EC2 mechanism, compression ratio has higher value (squared in the formula) than bit toggle count. Hence, for many of our applications (e.g., \emph{bfs}, \emph{mst}, \emph{Kmeans}, \emph{nw}, etc.) we see a gradual reduction in toggles, with corresponding small reduction in compression ratio, when moving from baseline to EC1 and then EC2. 
This means that depending on the application characteristics, we have multiple options with varying aggressiveness to trade off bit toggle count with compression ratio. As we will show in the next section, we can achieve these trade-offs with minimal effect on performance. \subsubsection{Effect on Performance} While the previous results show that the EC1 and EC2 mechanisms are very effective in trading off bit toggle count with compression ratio, it is still important to understand how much this trade-off ``costs'' in actual performance. This is especially important for DRAM, which is commonly one of the major performance bottlenecks in GPU applications, and hence even a minor degradation in compression ratio can potentially lead to a noticeable degradation in performance and overall energy consumption. Figure~\ref{fig:perf-c-pack} shows this effect on performance for both EC1 and EC2 mechanisms in comparison to a baseline employing compression with C-Pack. We make two observations here. \begin{figure}[h!] \centering \includegraphics[width=0.9\textwidth]{toggles/figures/Performance-C-Pack.pdf} \caption{Speedup with C-Pack compression algorithm.} \label{fig:perf-c-pack} \end{figure} First, our proposed mechanisms (EC1 and EC2) usually have minimal negative impact on the applications' performance. The baseline mechanism (\emph{Without EC}) provides 11.5\% average performance improvement, while the least aggressive EC2 mechanism reduces performance benefits by only 0.7\%, and the EC1 mechanism by 2.0\%. This is significantly smaller than the corresponding loss in compression ratio (shown in Figure~\ref{fig:cr-c-pack}). The primary reason is a successful trade-off between compression ratio, toggles, and performance. Both EC mechanisms consider the current DRAM bandwidth utilization, and only trade off compression when it is unlikely to hurt performance. Second, while there are applications (e.g., \emph{MatrixMul}) where we could lose up to 6\% of performance using the most aggressive mechanism (EC1), this is justified because we also reduce the bit toggle count from almost 10$\times$ to about 7$\times$. It is hard to avoid any degradation in performance for such applications since they are severely bandwidth-limited, and any loss in compression ratio directly translates into lower performance. If such performance degradation is unacceptable, then a less aggressive version of the EC mechanism, EC2, can be used. Overall, we conclude that our proposed mechanisms EC1 and EC2 are both very effective in preserving most of the performance benefit that comes from data compression while significantly reducing the negative effect of the bit toggling increase (and hence reducing the energy overhead). \subsubsection{Effect on DRAM and System Energy} Figure~\ref{fig:energy-c-pack} shows the effect of the C-Pack compression algorithm on the DRAM energy consumption with and without energy control (normalized to the energy consumption of the uncompressed baseline). These results include the overhead of the compression/decompression hardware~\cite{c-pack} and our mechanism (Section~\ref{sec:overhead}). We make two observations from the figure. First, as expected, many applications significantly reduce their DRAM energy consumption (e.g., \emph{SLA}, \emph{TRA}, \emph{heartwall}, \emph{nw}).
For example, for \emph{TRA}, the 28.1\% reduction in the DRAM energy (8.9\% reduction in the total energy) is the direct cause of the significant reduction in the bit toggle count (from 2.4$\times$ to 1.1$\times$ as shown in Figure~\ref{fig:toggles-c-pack}). Overall, the DRAM energy is reduced by 8.3\% for both EC1 and EC2. As DRAM energy constitutes on average 28.8\% out of total system energy (ranging from 7.9\% to 58.3\%), and the decrease in performance is less than 1\%, this leads to a total system energy reduction of 2.1\% on average across applications using EC1/EC2 mechanisms. \begin{figure}[h!] \centering \includegraphics[width=0.9\textwidth]{toggles/figures/DRAM-Energy-C-Pack.pdf} \caption{Effect on the DRAM energy with C-Pack compression algorithm.} \label{fig:energy-c-pack} \end{figure} Second, many applications that have significant growth in their bit toggle count due to compression (e.g., \emph{MatrixMul} and \emph{PageViewRank}) are also very sensitive to the available DRAM bandwidth. Therefore to provide any energy savings for these applications, it is very important to dynamically monitor their current bandwidth utilization. We observe that without the integration of current bandwidth utilization metric into our mechanisms (described in Section~\ref{sec:ec}), even a minor reduction in compression ratio for these applications could lead to a severe degradation in performance, and system energy. We conclude that our proposed mechanisms can efficiently trade off compression ratio and bit toggle count to improve both the DRAM and overall system energy. \ignore{ \gena{ Need to rework this, first I am not sure you need this, if you will mention it above. } Since there is a significant reduction in DRAM energy with almost no loss in compression ratio (and hence the performance) the total system energy is reduced by 2.1\% when EC2 mechanism is used. \gena{This below is not clear, it relates to DRAM energy and not system energy right?} We also observe that no application has energy consumption higher than that of the uncompressed baseline after EC mechanisms are applied, and there is only a few (e.g., \emph{bh}) where the system energy increases slightly after EC-based mechanism is applied (due to some loss in useful compression ratio). } \subsection{On-Chip Interconnect Results} \subsubsection{Effect on Toggles and Compression Ratio} Similar to the off-chip bus, we evaluate the effect of five compression algorithms on toggle count and compression ratio for the on-chip interconnect (Figure~\ref{fig:toggles-icnt} and Figure~\ref{fig:cr-icnt} correspondingly) using GPGPU-sim and open-sourced applications as described in Section~\ref{toggles:sec:methodology}. We make three major observations from these figures. \begin{figure}[h!] \centering \includegraphics[width=0.9\textwidth]{toggles/figures/Toggles-ICNT.pdf} \caption{Effect of Energy Control on the number of toggles in on-chip interconnect.} \label{fig:toggles-icnt} \end{figure} \begin{figure}[h!] \centering \includegraphics[width=0.9\textwidth]{toggles/figures/CompRatio-ICNT.pdf} \caption{Effect of Energy Control on compression ratio in on-chip interconnect.} \label{fig:cr-icnt} \end{figure} First, the most noticeable difference when compared with the DRAM bus is that the increase in bit toggle count is not as significant for all compression algorithms. It still increases for all but one algorithm (\emph{Fibonacci}), but we observe steep increases in bit toggle count (e.g., around 60\%) only for FPC and C-Pack algorithms. 
The reason for this behaviour is twofold: first, the on-chip data working set is different from the off-chip working set for some applications, and hence these data sets have different characteristics; second, we define \emph{bit toggles} differently for these two channels (see Section~\ref{toggles:sec:background}). Second, despite the variation in how different compression algorithms affect the bit toggle count, both of our proposed mechanisms are effective in reducing the bit toggle count (e.g., from 1.6$\times$ to 0.9$\times$ with C-Pack). Moreover, both mechanisms, EC1 and EC2, preserve most of the compression ratio achieved by the C-Pack algorithm. Therefore, we conclude that our proposed mechanisms are effective in reducing bit toggles for both on-chip interconnect and off-chip buses. Third, in contrast to our evaluation of the DRAM bus, our results with the interconnect show that for all but one algorithm (C-Pack), both EC1 and EC2 are almost equally effective in reducing the bit toggle count while preserving the compression ratio. This means that in the case of the on-chip interconnect, there is no need to use more aggressive decision functions to trade off bit toggles with compression ratio, because the EC2 mechanism{\textemdash}the less aggressive of the two{\textemdash}already provides most of the benefits. Finally, while the overall achieved compression ratio is slightly lower than in the case of DRAM, we still observe impressive compression ratios in the on-chip interconnect, reaching up to 1.6$\times$ on average across all open-sourced applications. While DRAM bandwidth traditionally is a primary performance bottleneck for many applications, the on-chip interconnect is usually designed such that its bandwidth will not be the primary performance limiter. Therefore, the achieved compression ratio in the case of the on-chip interconnect is expected to translate directly into overall area and silicon cost reductions, assuming fewer ports, wires, and switches are required to provide the same effective bandwidth. Alternatively, the compression ratio can be translated into lower power and energy, assuming a lower clock frequency can be applied due to the lower bandwidth demands on the on-chip interconnect. \subsubsection{Effect on Performance and Interconnect Energy} While it is clear that both EC1 and EC2 are effective in reducing the bit toggle count, it is important to understand how they affect performance and interconnect energy in our simulated system. Figure~\ref{fig:perf-icnt} shows the effect of both proposed techniques on performance (normalized to the performance of the uncompressed baseline). The key takeaway from this figure is that for all compression algorithms, both EC1 and EC2 are within less than 1\% of the performance of the designs without the energy control mechanisms. There are two reasons for this. First, both EC1 and EC2 are effective in deciding when compression is useful to improve performance and when it is not. Second, the on-chip interconnect is less of a bottleneck in our example configuration than the off-chip bus, hence disabling compression in some cases has a smaller impact on the overall performance. \begin{figure}[h!]
\centering \includegraphics[width=0.9\textwidth]{toggles/figures/Performance-ICNT.pdf} \caption{Effect of Energy Control on performance when compression is applied to on-chip interconnect.} \label{fig:perf-icnt} \end{figure} Figure~\ref{fig:energy-icnt-icnt} shows the effect of data compression and bit toggling on the energy consumed by the on-chip interconnect (results are normalized to the energy of the uncompressed interconnect). As expected, compression algorithms that have a higher bit toggle count have a much higher energy cost to support data compression, because bit toggling is the dominant part of the on-chip interconnect energy consumption. From this figure, we observe that our proposed mechanisms, EC1 and EC2, are both effective in reducing the energy overhead. The most notable reduction is for the \emph{C-Pack} algorithm, where we reduce the overhead from 2.1$\times$ to just 1.1$\times$. Overall, we conclude that our mechanisms are effective in reducing the energy overheads related to increased bit toggling due to compression, while preserving most of the bandwidth and performance benefits achieved through compression. \begin{figure}[h!] \centering \includegraphics[width=0.9\textwidth]{toggles/figures/Energy-ICNT.pdf} \caption{Effect of Energy Control on on-chip interconnect energy.} \label{fig:energy-icnt-icnt} \end{figure} \subsection{Effect of Metadata Consolidation} Metadata Consolidation (MC) is able to reduce the bit-level misalignment for several compression algorithms (we currently implement it for the FPC and C-Pack compression algorithms). We observe an additional toggle reduction on the \emph{DRAM bus} from applying MC (over EC2) of 3.2\% and 2.9\% for FPC and C-Pack, respectively, across applications in the discrete and mobile subgroups. Even though MC can mitigate some negative effects of bit-level misalignment after compression, it is not effective in cases where data values within the cache line are compressed to different sizes. These variable sizes frequently lead to misalignment at the byte granularity. While it is possible to insert some amount of padding into the compressed line to reduce the misalignment, this would counteract the primary goal of compression, which is to minimize data size. \begin{figure}[h!] \centering \includegraphics[width=0.9\textwidth]{toggles/figures/MC-DRAM.pdf} \caption{Effect of Metadata Consolidation on DRAM bit toggle count with FPC compression algorithm.} \label{fig:toggles-mc} \end{figure} We also conducted an experiment with open-sourced applications where we compare the impact of MC and EC separately, as well as together, for the FPC compression algorithm. We observe similar results with the C-Pack compression algorithm. The results in Figure~\ref{fig:toggles-mc} lead to two observations. First, when EC is not employed, MC can substantially reduce the bit toggle count, from 1.93$\times$ to 1.66$\times$ on average. Hence, in cases where the hardware changes related to the EC implementation are undesirable, MC can be used to avoid some of the increase in the bit toggle count. Second, when energy control is employed (see \emph{EC2} and \emph{MC+EC2}), the additional reduction in bit toggle count is relatively small. This means that the EC2 mechanism can capture most of the benefits that MC can provide. In summary, we conclude that the MC mechanism can be effective in reducing the bit toggle count when energy control is not used. It does not require significant hardware changes other than the minor modifications in the compression algorithm itself.
At the same time, in the presence of the energy control mechanism, the additional effect of MC on toggle reduction is marginal. \section{Related Work} To the best of our knowledge, this is the first work that (i) identifies increased bit toggle count in communication channels as a major drawback in enabling efficient data compression in modern systems, (ii) evaluates the impact and causes of this inefficiency in modern GPU architectures for different channels across multiple compression algorithms, and (iii) proposes and extensively evaluates different mechanisms to mitigate this effect to improve overall energy efficiency. We first discuss prior works that propose more energy-efficient designs for DRAM and interconnects, as well as mechanisms for energy-efficient data communication in on-chip/off-chip buses and other communication channels. We then discuss prior work that aims to address different challenges in efficiently applying data compression. \textbf{Low Power DRAM and Interconnects.} A wide range of previous works propose mechanisms and architectures to enable more energy-efficient operation of DRAM. Examples of these proposals include activating fewer bitlines~\cite{udipi-isca2010}, using shorter bitlines~\cite{lee-hpca2013}, more intelligent refresh policies~\cite{raidr, liu-asplos2011, taku-islped1998, ahn-asscc2006, kim-patmos2000,Avatar,khan-sigmetrics2014}, dynamic voltage and frequency scaling~\cite{david-icac11}, and better management of data placement~\cite{zhu-itherm2008,lin-islped2009,liu-hpca2011}. In the case of interconnects, Balasubramonian et al.~\cite{balasubramonian-hpca2005} propose a hybrid interconnect comprising wires with different latency, bandwidth, and power characteristics for better performance and energy efficiency. Previous works also propose different schemes to enable and exploit \emph{low-swing} interconnects~\cite{zhang-1998,nuca,tlc}, where reduced voltage swings during signalling enable better energy efficiency. These works do not consider energy efficiency in the context of data compression and are usually data-oblivious; hence, the proposed solutions cannot alleviate the negative impact of the increased toggle rates caused by data compression. \textbf{Energy Efficient Encoding Schemes.} \emph{Data Bus Inversion (DBI)} is an encoding technique proposed to enable energy-efficient data communication. Widely used DBI algorithms include \emph{bus invert coding}~\cite{dbi} and \emph{limited-weight coding}~\cite{limited-weight-codes1,limited-weight-codes2}, which selectively invert all the bits within a fixed granularity either to reduce the number of bit flips along the communication channel or to reduce the frequency of either 0's or 1's when transmitting data. Recently, \emph{DESC}~\cite{desc} was proposed in the context of on-chip interconnects to reduce power consumption by representing information by the delay between two consecutive pulses on a set of wires, thereby reducing the number of bit toggles. Jacobvitz et al.~\cite{coset-coding} applied \emph{coset coding} to reduce the number of bit flips while writing to memory by mapping each dataword into a larger space of potential encodings. These encoding techniques do not tackle the excessive bit toggle count generated by data compression and are largely orthogonal to our proposed mechanisms for toggle-aware data compression. \textbf{Efficient Data Compression.} Several prior works~\cite{LinkCompression,CompressionPrefetching,GPUBandwidthCompression, lcp-micro,memzip,MXT} study main memory and cache compression with several different compression algorithms~\cite{fpc,bdi,c-pack,dcc,sc2}. These works exploit the capacity and bandwidth benefits of data compression to enable higher performance and energy efficiency. These prior works primarily tackle improving compression ratios, reducing the performance/energy overheads of processing data for compression/decompression, or proposing more efficient architectural designs to integrate data compression.
These works address different challenges in data compression and are orthogonal to our proposed toggle-aware compression mechanisms. To the best of our knowledge, this is the first work to study the energy implications of transferring compressed data over different on-chip/off-chip channels. \section{Summary} We observe that data compression, while very effective in improving bandwidth efficiency in GPUs, can greatly increase the bit toggle count in the on-chip/off-chip interconnect. Based on this new observation, we develop two new {\em toggle-aware compression} techniques to reduce bit toggle count while preserving most of the bandwidth reduction benefits of compression. Our evaluations across six compression algorithms and 242 workloads show that these techniques are effective as they greatly reduce the bit toggle count while retaining most of the bandwidth reduction advantages of compression. We conclude that toggle-awareness is an important consideration in data compression mechanisms for modern GPUs (and likely CPUs as well), and encourage future work to develop new solutions for it. \subsubsection{Data Bus Inversion} Data Bus Inversion is an encoding technique proposed to reduce the power consumption in data channels. Two commonly used DBI algorithms are \emph{Bus invert coding}~\cite{dbi} and \emph{Limited-weight coding}~\cite{limited-weight-codes1,limited-weight-codes2}. \emph{Bus invert coding} places an upper bound on the number of bit flips while transmitting data along a channel. Consider a set of \emph{N} bit lines transmitting data in parallel. If the Hamming distance between the previous and current data value being transmitted exceeds \emph{N/2}, the data is transmitted in the inverted form. This limits the number of bit flips to \emph{N/2}. To preserve correctness, an additional bit line carries the inversion status of each data transmission. By reducing the number of bit flips, \emph{Bus invert coding} reduces the switching power associated with charging and discharging of bit lines.
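The bus-invert decision, and the toggle counting it relies on, are easy to make concrete. The following Python sketch is a software model of ours, not the hardware design evaluated in this work; the 32-bit bus width and the word values are illustrative assumptions. The same Hamming-distance computation is what the Energy Control mechanism reuses when it compares the toggle counts of the compressed and uncompressed versions of a cache line.

\begin{verbatim}
# Illustrative software model of bus-invert coding on an N-bit parallel bus.
# This is a sketch for intuition, not the hardware implementation.

N = 32                      # assumed bus width (illustrative)
MASK = (1 << N) - 1

def hamming(a, b):
    """Number of bus lines that toggle when the bus value changes from a to b."""
    return bin((a ^ b) & MASK).count("1")

def bus_invert_encode(words):
    """Transmit each word either as-is or inverted so that at most N/2 lines
    toggle. Returns (encoded_word, dbi_bit) pairs; dbi_bit = 1 means inverted.
    The extra toggles on the DBI bit line itself are ignored for simplicity."""
    prev, out = 0, []        # assume the bus lines start at 0
    for w in words:
        if hamming(prev, w) > N // 2:
            enc, dbi = (~w) & MASK, 1    # inverting keeps toggles <= N/2
        else:
            enc, dbi = w & MASK, 0
        out.append((enc, dbi))
        prev = enc
    return out

def toggle_count(words):
    """Total bit toggles for a sequence of words; this is the quantity that
    the Energy Control mechanism compares for compressed vs. uncompressed data."""
    prev, total = 0, 0
    for w in words:
        total += hamming(prev, w)
        prev = w
    return total

if __name__ == "__main__":
    raw = [0x00000000, 0xFFFFFFFF, 0x0F0F0F0F]   # toy data values
    print("raw toggles:", toggle_count(raw))
    print("with DBI   :", toggle_count([w for w, _ in bus_invert_encode(raw)]))
\end{verbatim}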
\emph{Limited weight coding} is a DBI technique that helps reduce power when one of the two bus states is more dissipative than the other. The algorithm only observes the \emph{current} state of the data. It decides whether to invert the data based on the goal of minimizing either the number of \emph{zeros} or \emph{ones} being transmitted. Implementing \emph{Bus invert coding} requires much the same circuitry as the toggle count determination in the proposed EC mechanism. Here, hardware logic is required to compute the XOR between the prior and current data at a fixed granularity. The Hamming distance is then computed by summing the number of 1's using a simple adder. Similar logic is required to compute the toggle count for compressed versus uncompressed data in the Energy Control mechanism. We expect that both EC and DBI can efficiently coexist. After compression is applied, we first apply DBI (to minimize the bit toggles), and after that we apply the EC mechanism to evaluate the tradeoff between the compression ratio and the bit toggle count. \subsection{PCIe and SATA interfaces} For SATA and PCIe, data is transmitted in a serial fashion at much higher frequencies than typical parallel bus interfaces. Bit toggles within these high-speed bus interfaces have different implications than those of on-chip or off-chip buses. First, since data is transmitted in a serial fashion, data alignment at larger byte sizes no longer plays a significant role in determining the toggle rate. Second, bit toggles themselves impose different design considerations and implications. Data is transmitted across these buses without an accompanying clock signal, which means that the transmitted bits need to be synchronized with a clock signal by the receiver. This \emph{clock recovery} requires \emph{frequent} bit toggles to prevent loss of information. In addition, it is desirable that the \emph{running disparity}{\textemdash}which is the difference in the number of one and zero bits transmitted{\textemdash}be minimized. This condition is referred to as \emph{DC balance} and prevents distortions in the signal. Data is typically scrambled using encodings like the 8b/10b encoding~\cite{8b10b} to balance the number of ones and zeros while ensuring frequent transitions. These encodings have high overhead in terms of the amount of additional data transmitted, but they obscure any difference in bit transitions between compressed and uncompressed data. We do not expect that the changed data after compression would pose any additional challenges to these interfaces; hence, we do not apply our proposed techniques to SATA and PCIe, and provide their description only for completeness.
\section{Introduction}\label{sec:intro} Consider a sample of observations from a high-dimensional normal model \begin{eqnarray} X_1,\ldots,X_n \mid \Sigma_n &\overset{i.i.d.}{\sim}& N_p(0, \Sigma_n), \label{model} \end{eqnarray} where $\Sigma_n \in \mathbb{R}^{p\times p}$ is a covariance matrix. There is often interest in inferring the structure in $\Sigma_n$ and in comparing different alternative covariance structures. This article focuses on this problem from a hypothesis testing perspective. Let $X = (X_1,\ldots, X_n)^T \in \mathbb{R}^{n\times p}$ be the data matrix. {\it A one-sample covariance test} can be reduced to the following simple form: \begin{eqnarray}\label{test} H_0: \Sigma_n = I_p \quad \text{versus}\quad H_1: \Sigma_n \neq I_p , \end{eqnarray} by noting that $H_0:\Sigma_n = I_p$ is equivalent to a null hypothesis $H_0: \Sigma_n = \Sigma_0$ for any given positive definite matrix $\Sigma_0$ by applying the linear transformation $X_i \mapsto \Sigma_0^{-1/2} X_i$. Another important problem is testing diagonality \begin{eqnarray*} H_{0}: \sigma_{ij} = 0 \text{ for any }i\neq j \quad\text{ versus }\quad H_{1}: \text{ not } H_0 , \end{eqnarray*} where $\Sigma_n = (\sigma_{ij})$. Finally, we consider the problem of {\it support recovery}, corresponding to estimating the nonzero elements of covariance matrices. We are interested in constructing novel Bayesian procedures that are practically applicable with theoretical guarantees for the (i) one-sample covariance test, (ii) diagonality test, and (iii) support recovery of the covariance matrix. We consider the high-dimensional setting in which the number of variables $p$ can grow to infinity as the sample size $n$ gets larger and possibly be much larger than $n$. Although it is well known that assuming a restricted covariance class is necessary for consistent {\em estimation} of large covariance matrices \citep{johnstone2009consistency,lee2018optimal}, in a {\em testing} context we focus on alternative hypotheses $H_1$ that are unconstrained. One natural possibility is to assume a conjugate inverse-Wishart prior $IW_p(\nu_n,A_n)$ for $\Sigma_n$ under $H_1$. However, in order for the resulting posterior to be proper, it is necessary to choose the degrees of freedom $\nu_n > p-1$, suggesting an extremely informative prior in high-dimensional settings. The resulting test will certainly be highly sensitive to the choice of $A_n$, and hence is not very useful outside of narrow applications having substantial prior information. One could instead choose a non-conjugate prior for $\Sigma_n$ under $H_1$, but then substantial computational issues arise in attempting to estimate the Bayes factor. From a frequentist perspective, \cite{chen2010tests} and \cite{cai2013optimal} suggested consistent one-sample covariance tests based on unbiased estimators of $\|\Sigma_n -I_p\|_F^2$, where $\|A\|_F = \big(\sum_{ij} a_{ij}^2 \big)^{1/2}$ is the Frobenius norm of a matrix $A=(a_{ij})$. Under the null hypothesis, they showed that their test statistic is asymptotically normal. The test also has power tending to one as $n$ goes to infinity, but it requires the condition, $\|\Sigma_n - I_p\|_F^2 \, n/p \to \infty$ as $n\to\infty$. This condition implies that they essentially adopted $H_1 = \{ \Sigma_n : \|\Sigma_n - I_p\|_F^2 \ge b_n p/n \}$ for some $b_n \to \infty$ as $n\to\infty$ as the alternative class. 
\cite{cai2013optimal} proved that if we consider an alternative class $H_1 = \{ \Sigma_n : \|\Sigma_n - I_p\|_F^2 \ge \epsilon_n \}$, say a dense alternative, the condition $\epsilon_n \ge b_n p/n$ is inevitable for any level $\alpha$ test to have power tending to one. This excludes cases in which a finite number of the components of $\Sigma_n-I_p$ have a magnitude $(p/n)^{1/2}$, although $(p/n)^{1/2}$ can be a significant signal when $p \ge n$. The above discussion motivates us to develop hypothesis tests that are easy to implement in practice while possessing theoretical guarantees. In particular, we wish to construct tests that can perform well even when the condition $\|\Sigma_n - I_p\|_F^2 \, n/p \to \infty$ fails to hold. We achieve this by proposing a novel Bayesian testing framework based on the maximum pairwise Bayes factor, which will be introduced in Section \ref{subsec:mpbf}. The basic strategy is to focus on the pairwise difference between $\Sigma_n$ and $I_p$ rather than the Frobenius norm or other matrix norms. More precisely, instead of considering a usual Bayes factor based on a prior on the whole covariance matrix, we first consider the pairwise Bayes factors for each element of the matrix and combine them by taking a maximum over all possible pairs. This approach is analogous to frequentist tests based on maximum-type statistics \citep{jeng2013simultaneous,enikeeva2019high}. Our construction enables us to consider a different alternative class, $H_1= \{\Sigma_n : \|\Sigma_n - I_p\|_{\max}^2 \ge C \log p/n \}$ for some constant $C>0$, say a sparse alternative, where $\|A\|_{\max} = \max_{i,j}|a_{ij}|$ for a matrix $A=(a_{ij})$. When the primary interest is not on a collection of very weak signals, but on detecting at least one \emph{meaningful signal}, our test is much more effective than the frequentist methods mentioned above. The proposed testing method is general, easily implementable and theoretically supported, being the first Bayesian test shown to be consistent in the high-dimensional setting for the one-sample or diagonal covariance testing problems. Our procedure yields proven false discovery rate control and power improvement compared to existing methods. The proposed one-sample test is rate-optimal in the sense that it can distinguish the sparse alternative class $H_1= \{\Sigma_n : \|\Sigma_n - I_p\|_{\max}^2 \ge \epsilon_n\}$ from the null with the fastest rate of $\epsilon_n$, while guaranteeing consistency under the null. We also propose a scalable graph selection method for high-dimensional covariance graph models using pairwise Bayes factors. The proposed method consistently recovers the true covariance graph structure under conditions that are weaker than or comparable to those in the existing frequentist literature. Recently, \cite{leday2018fast} suggested a fundamentally different pairwise approach to test marginal or conditional independence between two variables. Their focus is on the joint distribution of the $i$th and $j$th variables, and an inverse-Wishart prior was imposed on $\Sigma_n$. For each $i\neq j$, the hypothesis testing problem $H_{0,ij}^M: \sigma_{ij}=0$ versus $H_{1,ij}^M: \sigma_{ij}\neq 0$ was considered. Since the resulting Bayes factors for the pairwise tests are not scale-invariant, they proposed {\it scaled} versions. P-values under the {\it conditional null distribution} were obtained by shuffling or permuting labels of observations \citep{jiang2017bayesian}.
For support recovery, they suggest using standard multiplicity correction procedures to control the false discovery rate, obtaining a frequentist procedure. Selection consistency results were not provided. \verb|R| code implementing our empirical results is available at https://github.com/leekjstat/mxPBF. Proofs of our main results are included in the Supplementary Material. \section{Preliminaries}\label{sec:prel} \subsection{Notations}\label{subsec:notation} For any real values $a$ and $b$, we denote $a\vee b$ as the maximum between $a$ and $b$. For any positive sequences $a_n$ and $b_n$, we denote $a_n \ll b_n$ or $a_n = o(b_n)$ if $a_n / b_n \to 0$ as $n\to\infty$. For any vector $x = (x_1,\ldots,x_p)^T \in \mathbb{R}^p$, we define the vector $\ell_1$- and $\ell_2$-norms as $\|x\|_1= \sum_{j=1}^p |x_j|$ and $\|x\|_2 = (\sum_{j=1}^p x_j^2 )^{1/2}$, respectively. Let $\mathcal{C}_p$ be the set of all $p\times p$ positive definite matrices. We denote $\chi_k^2(\lambda)$ as the non-central chi-square distribution with degrees of freedom $k$ and non-centrality $\lambda \ge 0$, and let $\chi_k^2 = \chi_k^2(\lambda=0)$. For positive real values $a$ and $b$, $IG(a,b)$ denotes the inverse gamma distribution with shape $a$ and scale $b$. \subsection{Maximum Pairwise Bayes Factor}\label{subsec:mpbf} In this subsection, we introduce our approach focusing on the one-sample covariance test. As described before, the basic strategy is to concentrate on the \emph{pairwise difference} between $\Sigma_n$ and $I_p$. Let $\tilde{X}_j\in \mathbb{R}^n$ be the $j$th column vector of $X$. For any indices $i$ and $j$, based on the joint distribution \eqref{model}, the conditional distribution of $\tilde{X}_i$ given $\tilde{X}_j$ is \begin{eqnarray}\label{ij_reg_model} \tilde{X}_i \mid \tilde{X}_j &\sim& N_n \Big( a_{ij} \tilde{X}_j ,\, \tau_{ij}^2 I_n \Big), \end{eqnarray} where $a_{ij} \in \mathbb{R}$ and $\tau_{ij}>0$. We can view \eqref{ij_reg_model} as a linear regression model given a design matrix $\tilde{X}_j$. For each paired conditional model \eqref{ij_reg_model}, we consider a testing problem \begin{eqnarray}\label{hypo_ij} H_{0,ij}: a_{ij}=0 ,\, \tau_{ij}^2 =1 \quad\text{versus} \quad H_{1,ij}: \text{ not } H_{0,ij} . \end{eqnarray} If $H_{0,ij}$ is true, $\sigma_{ij} = 0 $ and $\sigma_{ii} =1$ because $a_{ij} = \sigma_{ij}/ \sigma_{jj}$ and $\tau_{ij}^2 = \sigma_{ii} (1- \rho_{ij}^2)$, where $\Sigma_n = (\sigma_{ij})$ and $R_n = (\rho_{ij})$ are covariance and correlation matrices, respectively. We suggest the following prior distribution under the alternative hypothesis $H_{1,ij}$ in \eqref{hypo_ij}, \begin{eqnarray} \begin{split}\label{prior} a_{ij} \mid \tau_{ij}^2 \,\,\sim\,\, N \Big( 0 , \, \frac{\tau_{ij}^2}{\gamma} \| \tilde{X}_j \|_2^{-2} \Big) , &\quad\, \tau_{ij}^2 \,\,\sim\,\, IG \big( a_0, \, b_{0,ij} \big) , \end{split} \end{eqnarray} where $\gamma= (n \vee p)^{-\alpha}$ and $a_0, b_{0,ij}$ and $\alpha$ are positive constants.
The induced Bayes factor is \begin{eqnarray*} B_{10}(\tilde{X}_i, \tilde{X}_j) &=& \frac{p(\tilde{X}_i\mid \tilde{X}_j , H_{1,ij}) }{p(\tilde{X}_i \mid \tilde{X}_j , H_{0,ij})} \\ &=& \frac{b_{0,ij}^{a_0}}{ \Gamma(a_0)} \Big(\frac{\gamma}{1+\gamma} \Big)^{1/2} \Gamma\Big(\frac{n}{2}+a_0 \Big)\, e^{ n \widehat{\tau}_i^2/2 } \, \Big(\frac{n}{2}\widehat{\tau}_{ij, \gamma}^2 + b_{0,ij} \Big)^{- n/2 - a_0} , \end{eqnarray*} where $n \widehat{\tau}_i^2 = \|\tilde{X}_i\|_2^2$, $n \widehat{\tau}_{ij, \gamma}^2 = \tilde{X}_i^T \{ I_n - (1+\gamma)^{-1} H_j \} \tilde{X}_i$ and $H_j = \tilde{X}_j (\tilde{X}_j^T \tilde{X}_j)^{-1} \tilde{X}_j^T$. The choice of hyperparameters $a_0$ and $b_{0,ij}$ is discussed in Section \ref{subsec:sim_one}. The null hypothesis in the one-sample covariance test, $H_0: \Sigma_n = I_p$, is true if $H_{0,ij}$ is true for all pairs $(i,j)$ such that $i \neq j$. We aggregate the information from each pairwise Bayes factor $B_{10}(\tilde{X}_i, \tilde{X}_j)$ via the {\it maximum pairwise Bayes factor}, \begin{eqnarray}\label{map_one} B_{\max, 10}(X) &=& \max_{ i \neq j} B_{10}(\tilde{X}_i, \tilde{X}_j) . \end{eqnarray} A large value for $B_{\max, 10}(X)$ provides evidence supporting the alternative hypothesis. By taking a maximum, $B_{\max, 10}(X)$ supports the alternative hypothesis if at least one of the pairwise Bayes factors supports the alternative. A natural question is whether false positives increase as we take a maximum over more and more pairs. Indeed, we find that this is not the case, either asymptotically based on our consistency results (Theorems \ref{thm:MBF} and \ref{thm:diag_BF}) or in finite samples based on simulations. \section{Main Results}\label{sec:main} \subsection{One-sample Covariance Test}\label{subsec:one-sample} In this subsection, we show consistency of $B_{\max, 10}(X)$ defined in \eqref{map_one} for the one-sample covariance test \eqref{test}. We first introduce assumptions for consistency under $H_1 : \Sigma_n \neq I_p$. Let $\Sigma_0 = (\sigma_{0,ij}) \in \mathcal{C}_p$ be the true covariance matrix, implying the conditional distribution of $\tilde{X}_i$ given $\tilde{X}_{j}$ is \begin{eqnarray}\label{true_conditional} \tilde{X}_i \mid \tilde{X}_{j} &\sim& N_n \big( a_{0,ij} \tilde{X}_{j} , \tau_{0,ij}^2 I_n \big) \end{eqnarray} under $\bbP_0$, where $a_{0,ij} = \sigma_{0,ij}/ \sigma_{0,jj}$, $\tau_{0,ij}^2 = \sigma_{0,ii} \{ 1- \sigma_{0,ij}^2 / (\sigma_{0,ii} \sigma_{0,jj} ) \}$, $\bbP_0$ is the probability measure corresponding to model \eqref{model} with $\Sigma_n =\Sigma_0$, and $\tau_{0,ij}^2 = \sigma_{0,ii}$ if and only if $a_{0,ij}=0$. Under the alternative $H_1: \Sigma_n \neq I_p$, we assume that $\Sigma_0$ satisfies {\it at least one} of the following conditions: \begin{itemize} \item[(A1)] There exists a pair $(i,j)$ satisfying \begin{eqnarray}\label{sigma_betamin2} \big| \sigma_{0,ii} - 1 \big| &\ge& \Big[ 4 \sigma_{0,ii} {C_1}^{1/2} + C_2 + \frac{2b_{0,ij}}{ \{n \log (n \vee p)\}^{1/2} } \Big] \left\{\frac{\log (n \vee p)}{n} \right\}^{1/2} \end{eqnarray} for some constants $C_1 >0$ and $C_2 > 2(\alpha +2)^{1/2}$. 
\item[(A2)] There exists a pair $(i,j)$ satisfying \begin{eqnarray}\label{sigma_betamin} \big|\tau_{0,ij}^2 - 1 \big| &\ge& \Big[ 4 \tau_{0,ij}^2 {C_1}^{1/2} + C_2 + \frac{2b_{0,ij} + \tau_{0,ij}^2}{ \{n \log (n \vee p)\}^{1/2} } \Big] \left\{\frac{\log (n \vee p)}{n} \right\}^{1/2} \end{eqnarray} \item[(A3)] There exists a pair $(i,j)$ satisfying \begin{eqnarray}\label{betamin} \sigma_{0,ij}^2 &\ge& \frac{\sigma_{0,jj} }{1- 2 {C_1}^{1/2}\epsilon_0} \left\{\frac{9C_1 \tau_{0,ij}^2}{(1-C_3)^2} \vee \frac{C_4 (\alpha+2)}{C_3} \right\} \frac{\log (n \vee p)}{n} \quad\quad \end{eqnarray} for some constants $0<C_3<1$ and $C_4 >1$. \end{itemize} Throughout the paper, $C_1, C_2, C_3$ and $C_4$ are fixed global constants. For a given small constant $\epsilon > 0$, they can be considered as $C_1 = \epsilon, C_2 = 2(\alpha+2)^{1/2} +\epsilon, C_3 = 1 - \epsilon^{1/4}$ and $C_4 = 1 + \epsilon$. Condition (A1) is required to detect a non-unit variance $\sigma_{0,ii}$, and can be interpreted as a {\it beta-min condition} for $|\sigma_{0,ii} -1|$. The beta-min condition gives a lower bound for nonzero parameters and is essential for model selection consistency \citep{castillo2015bayesian,martin2017empirical}. Interestingly, the rate of lower bound in (A1) is given by $\{\log (n \vee p)/n \}^{1/2}$, which has been commonly used in the variable selection literature. Condition (A2) is similar to condition (A1), which can be interpreted as a beta-min condition for $|\tau_{0,ij}^2-1|$. Condition (A3) is a beta-min condition for off-diagonal elements of the covariance matrix. In summary, conditions (A1)--(A3) imply the sparse alternative \begin{eqnarray*} \Sigma_0 &\in& H_1 \,=\, \Big\{ \, \Sigma_n : \| \Sigma_n - I_p \|_{\max}^2 \ge C \, \frac{\log p}{n} \, \Big\} \end{eqnarray*} for some constant $C>0$, which corresponds to the {\it meaningful} difference we mentioned earlier. In fact, the rate $\log p/n$ is {\it optimal} for guaranteeing the consistency under both hypotheses (Theorem \ref{thm:LB}). Our method is not designed to detect dense alternatives in which all differences are very small, but requires at least one difference to be sufficiently large. Theorem \ref{thm:MBF} shows consistency for the one-sample covariance test even in the high-dimensional setting as long as $\log p \le \epsilon_0^2 n$ for some small constant $\epsilon_0>0$. \begin{theorem}\label{thm:MBF} Consider model \eqref{model} and the one-sample covariance testing problem \eqref{test}. Consider prior \eqref{prior} under $H_{1,ij}$ in \eqref{hypo_ij} with $\alpha> 8 (1 + {2}^{1/2} \epsilon_0)^2/ \{1- {2}^{3/2}\epsilon_0 (1 + {2}^{1/2}\epsilon_0)\}$ for some small constant $0<\epsilon_0< 3 \,(4 C_2)^{-1}$. Assume that $\log p \le \epsilon_0^2 \, n$ for all large $n$. Then under $H_0: \Sigma_n =I_p$, for some constant $c>0$, \begin{eqnarray*} B_{\max, 10}(X) &=& O_p \big\{ (n\vee p)^{-c} \big\} . \end{eqnarray*} If, under $H_1: \Sigma_n \neq I_p$, $\Sigma_0$ satisfies at least one of conditions (A1)--(A3), for some constant $c' >0$, \begin{eqnarray*} B_{\max, 10}(X)^{-1} &=& O_p \big\{ (n\vee p)^{-c'} \big\}. \end{eqnarray*} \end{theorem} We first prove that the pairwise Bayes factor $B_{10}(\tilde{X}_i, \tilde{X}_j)$ is consistent on a large event $E_{ij}$ such that $\bbP_0 (E_{ij}^c) \to 0$ as $n\to\infty$. To show consistency under $H_0$, it suffices to prove that $\sum_{i\neq j} \bbP_0 (E_{ij}^c) \to 0$ as $n\to\infty$, which means that the false discovery rate converges to zero. 
The condition for $\alpha$ in Theorem \ref{thm:MBF} is closely related to this requirement. It also has connections with the variable selection literature in regression \citep{fernandez2001benchmark, narisetty2014bayesian,yang2016computational} where the prior dispersion needs to depend on $(n \vee p^2)$ or $p$ for consistency. Our theory requires a larger dispersion of order roughly $(n \vee p)^8$ mainly due to the larger number of parameters compared to the regression setting. To show consistency under $H_1$, it suffices to show $ \bbP_0 (E_{ij}^c) \to 0$ as $n\to\infty$ for some index $(i,j)$ satisfying at least one of conditions (A1)--(A3). Interestingly, the rate of convergence is similar under both hypotheses, unlike most Bayesian testing procedures with the notable exception of non-local prior based methods \citep{johnson2010use,johnson2012bayesian}. The next theorem shows the optimality of the alternative class which is considered in Theorem \ref{thm:MBF} (Conditions (A1)--(A3)). It says, when the alternative class is defined based on the element-wise maximum norm, the condition $\|\Sigma_0 - I_p\|_{\max}^2 \ge C \log p/n$ for some constant $C>0$ is necessary for any consistent test to have power tending to one. Thus, conditions (A1)--(A3) are rate-optimal to guarantee the consistency under $H_0$ as well as $H_1$. \begin{theorem}\label{thm:LB} Let $E_{\Sigma}$ be the expectation corresponding to model \eqref{model}. For a given constant $C_\star>0$, define $H_1(C_\star) = \Big\{ \Sigma \in \mathcal{C}_p : \|\Sigma- I_p\|_{\max}^2 \ge C_\star^2 \log p/n \Big\}.$ If $C_\star^2 \le 2$, then for any consistent test $\phi$ such that $E_{I_p} \phi \longrightarrow 0$ as $n\to\infty$, \begin{eqnarray*} \limsup_{n\to\infty} \inf_{\Sigma \in H_1(C_\star) } E_{\Sigma} (\phi) \le \frac{1}{2} . \end{eqnarray*} \end{theorem} \subsection{Testing Diagonality}\label{subsec:diag} We now consider testing of diagonality of the covariance matrix: \begin{eqnarray}\label{diag_test} H_{0}: \sigma_{ij} = 0 \text{ for any }i\neq j \quad\text{ versus }\quad H_{1}: \text{ not } H_0 , \end{eqnarray} where $\Sigma_n = (\sigma_{ij})$. The above hypothesis testing problem can be modularized into many pairwise independence tests \begin{eqnarray}\label{pair_test} H_{0,ij}: \sigma_{ij} =0 \quad\text{ versus }\quad H_{1,ij}: \sigma_{ij} \neq 0 \end{eqnarray} for all $1\le i< j \le p$. We can adopt the maximum pairwise Bayes factor idea to aggregate the pairwise testing information from \eqref{pair_test} for all possible pairs $(i,j)$ such that $i \neq j$ to test \eqref{diag_test}. Based on the conditional distribution \eqref{ij_reg_model}, the null hypothesis $H_{0,ij}$ in \eqref{pair_test} is equivalent to $H_{0,ij}': a_{ij}=0$. We suggest the prior $\pi(\tau_{ij}^2) \propto \tau_{ij}^{-2}$ under both $H_{0,ij}$ and $H_{1,ij}$, and the prior $\pi(a_{ij} \mid \tau_{ij}^2)$ defined in \eqref{prior} under $H_{1,ij}$, which leads to the pairwise Bayes factor \begin{eqnarray*} \tilde{B}_{10} (\tilde{X}_i, \tilde{X}_j) &=& \Big(\frac{\gamma}{1+\gamma} \Big)^{1/2} \left( \frac{ \widehat{\tau}_{ij, \gamma}^2 }{ \widehat{\tau}_{i}^2 } \right)^{- n/2 }. \end{eqnarray*} The improper prior $\pi(\tau_{ij}^2) \propto \tau_{ij}^{-2}$ does not cause any problem because we use the same priors under $H_{0,ij}$ and $H_{1,ij}$. We suggest using \begin{eqnarray}\label{mxPBF_tilde_diag} \tilde{B}_{\max, 10}(X) &=& \max_{ i < j} \tilde{B}_{10} (\tilde{X}_i, \tilde{X}_j) \end{eqnarray} for the hypothesis testing problem \eqref{diag_test}. 
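Both pairwise statistics have simple closed forms and are straightforward to evaluate. The following sketch, written in Python/NumPy rather than the \verb|R| implementation accompanying the paper, computes the pairwise log Bayes factors on the log scale and takes the maximum over pairs. It is a minimal illustration under our own simplifying assumptions: a single user-supplied scalar $b_0$ stands in for the pair-specific $b_{0,ij}$, and the hyperparameters $\alpha$, $a_0$ and $b_0$ are left to the user.

\begin{verbatim}
# Minimal NumPy sketch of the pairwise log Bayes factors and their maximum.
# Not the authors' R implementation; hyperparameters are user-supplied.
import itertools
import numpy as np
from scipy.special import gammaln

def _tau_hats(xi, xj, gamma):
    # n*hat{tau}_i^2 = ||X_i||^2 and
    # n*hat{tau}_{ij,gamma}^2 = X_i'(I - H_j/(1+gamma))X_i
    n_tau_i2 = xi @ xi
    n_tau_ij2 = n_tau_i2 - (xj @ xi) ** 2 / ((1.0 + gamma) * (xj @ xj))
    return n_tau_i2, n_tau_ij2

def log_pbf_onesample(xi, xj, gamma, a0, b0):
    # log B_10(X_i, X_j) for the one-sample test (Section 2.2)
    n = xi.shape[0]
    n_tau_i2, n_tau_ij2 = _tau_hats(xi, xj, gamma)
    return (a0 * np.log(b0) - gammaln(a0)
            + 0.5 * (np.log(gamma) - np.log1p(gamma))
            + gammaln(n / 2.0 + a0)
            + n_tau_i2 / 2.0
            - (n / 2.0 + a0) * np.log(n_tau_ij2 / 2.0 + b0))

def log_pbf_diag(xi, xj, gamma):
    # log tilde{B}_10(X_i, X_j) for the diagonality test (Section 3.2)
    n = xi.shape[0]
    n_tau_i2, n_tau_ij2 = _tau_hats(xi, xj, gamma)
    return (0.5 * (np.log(gamma) - np.log1p(gamma))
            - (n / 2.0) * (np.log(n_tau_ij2) - np.log(n_tau_i2)))

def log_mxpbf(X, alpha, a0=None, b0=None, diagonality=False):
    # Maximum pairwise log Bayes factor; gamma = (n v p)^{-alpha}
    n, p = X.shape
    gamma = float(max(n, p)) ** (-alpha)
    if diagonality:
        pairs = itertools.combinations(range(p), 2)
        vals = (log_pbf_diag(X[:, i], X[:, j], gamma) for i, j in pairs)
    else:
        pairs = itertools.permutations(range(p), 2)
        vals = (log_pbf_onesample(X[:, i], X[:, j], gamma, a0, b0)
                for i, j in pairs)
    return max(vals)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.standard_normal((100, 50))          # data generated under H_0
    print(log_mxpbf(X, alpha=4.5, diagonality=True))    # illustrative alpha
    print(log_mxpbf(X, alpha=8.5, a0=2.0, b0=1.0))      # illustrative values
\end{verbatim}

Following the definitions above, the diagonality statistic maximizes over unordered pairs $i<j$, while the one-sample statistic maximizes over ordered pairs $i\neq j$.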
Theorem \ref{thm:diag_BF} states the consistency of $\tilde{B}_{\max, 10}(X)$ for testing \eqref{diag_test} under regularity conditions. For consistency under the alternative hypothesis, we assume the following condition: \\ (A4) There exists a pair $(i,j)$ satisfying \begin{eqnarray*} \sigma_{0,ij}^2 &\ge& \frac{C_4 \sigma_{0,jj} }{1- 2\epsilon_0 {C_1}^{1/2}} \left\{ \frac{9C_1 \tau_{0,ij}^2}{(1-C_3)^2} \vee \frac{ \alpha(1+\gamma) (1+4\epsilon_0{C_1}^{1/2}) \sigma_{0,ii} }{C_3 } \right\} \frac{\log (n\vee p)}{n} \quad\quad \end{eqnarray*} for constants $C_1>0, 0<C_3<1$ and $C_4>1$ defined in Section \ref{subsec:one-sample}. \begin{theorem}\label{thm:diag_BF} Consider model \eqref{model} and the diagonality testing problem \eqref{diag_test}. For a given pair $(i,j)$ such that $i\neq j$, consider the prior $\pi(\tau_{ij}^2) \propto\tau_{ij}^{-2}$ under both $H_{0,ij}$ and $H_{1,ij}$, and the prior $\pi(a_{ij} \mid \tau_{ij}^2)$ defined in \eqref{prior} under $H_{1,ij}$ in \eqref{pair_test} with $\alpha > 4/(1- {2}^{1/2} 3 \epsilon_0)$ for some small constant $0<\epsilon_0< 1/({2}^{1/2} 3)$. Assume that $\log p \le \epsilon_0^2 \, n$ for all large $n$. Then under $H_{0}: \sigma_{ij} = 0 \text{ for any }i\neq j$, for some constant $c>0$, \begin{eqnarray*} \tilde{B}_{\max, 10} (X) &=& O_p \big\{ (n\vee p)^{-c} \big\} . \end{eqnarray*} If, under $H_{1}:\text{ not } H_0$, $\Sigma_0$ satisfies condition (A4), for some constant $c'>0$, \begin{eqnarray*} \tilde{B}_{\max, 10} (X)^{-1} &=& O_p\big\{ (n\vee p)^{-c'} \big\} . \end{eqnarray*} \end{theorem} Condition (A4) is the beta-min condition for off-diagonal elements of the true covariance matrix. It indicates that if one of the off-diagonal elements satisfies the beta-min condition (A4), $\tilde{B}_{\max, 10}(X)$ consistently detects the true alternative hypothesis. Similar to Theorem \ref{thm:MBF}, the condition for $\alpha$ is required to control the false discovery rate, and $\tilde{B}_{\max, 10}(X)$ has similar rates of convergence under both hypotheses. Although the maximum pairwise Bayes factor idea is not limited to the test of diagonality, we introduce a few procedures that have been proposed for testing diagonality in the literature. \cite{yao2018testing} and \cite{leung2018testing} proposed $L_2$-type tests for dependence in model-free settings. These tests are powerful against dense alternatives, while our focus is on the sparse setting. \cite{han2017distribution} proposed two families of maximum-type rank tests of diagonality, which include Kendall's tau and Spearman's rho as special cases, respectively. Although our procedure has a Bayesian motivation, one can use it as a frequentist test statistic. In the following proposition, we derive the {\it limiting null distribution} of the maximum pairwise Bayes factor in \eqref{mxPBF_tilde_diag}. This enables us to construct a test having size $\alpha$ asymptotically. \begin{proposition}\label{prop:diag_limiting_null} Under the conditions of Theorem \ref{thm:diag_BF}, further assume that $p = p_n \to \infty$ as $n\to\infty$ and $\log p = o(n^{1/3})$. If $H_0: \sigma_{ij} = 0 \text{ for any }i\neq j $ is true, $2\log \tilde{B}_{\max, 10}(X) - C_{n,p} $ converges in distribution to a type I extreme value distribution with distribution function \begin{eqnarray*} F(z) &=& \exp \big\{ - (8\pi)^{-1/2} e^{- z/2} \big\} , \quad z \in \mathbb{R} , \end{eqnarray*} as $n\to\infty$, where $C_{n,p} = 0.5 \log \{ \gamma /(1+\gamma) \} + 4\log p - \log (\log p) $. 
\end{proposition} \subsection{Support Recovery of Covariance Matrices}\label{subsec:support} The primary interest of this section is on the recovery of $S(\Sigma_0)$, where $S(\Sigma_0) \subseteq \big\{ (i,j): 1\le i <j\le p \big\}$ is the nonzero index set of the true covariance matrix $\Sigma_0$. We call $S(\Sigma_0)$ the {\it support} of $\Sigma_0$. Estimating $S(\Sigma_0)$ corresponds to graph selection in covariance graph models \citep{cox1993linear}. Despite its importance, few Bayesian articles have investigated this problem. \cite{kundu2019efficient} proposed the regularized inverse Wishart prior, which can be viewed as a group Lasso penalty \citep{yuan2006model} on the Cholesky factor. They showed the consistency of their selection procedure for the support of precision matrices when the dimension $p$ is fixed. Recently, \cite{gan2018bayesian} adopted the spike-and-slab Lasso prior \citep{rovckova2016fast,rovckova2018bayesian} for off-diagonal entries of the precision matrix. Their proposed graph selection procedure for the precision matrix also yields selection consistency. To the best of our knowledge, in the Bayesian literature, a consistent support recovery result for covariance matrices has not been established. Although \cite{leday2018fast} proposed a graph selection procedure based on Bayesian modeling, their procedure relies on $p$-values and they do not show consistency. To tackle this gap, we propose a scalable graph selection scheme for high-dimensional covariance matrices based on pairwise Bayes factors. Looking closely at the proof of Theorem \ref{thm:diag_BF}, each pairwise Bayes factor $\tilde{B}_{10}(\tilde{X}_i, \tilde{X}_j)$ can consistently determine whether the corresponding covariance element $\sigma_{0,ij}$ is zero or not. Thus, we suggest using the estimated index set \begin{eqnarray}\label{mxPBF_selection} \widehat{S}_{pair, C_{sel}} &=& \Big\{ \, (i,j): \,\, 2 \log \tilde{B}_{10}(\tilde{X}_i, \tilde{X}_j) > C_{sel} , \quad 1\le i<j \le p \,\, \Big\} \end{eqnarray} for some constant $C_{sel} >0$. Although any threshold $C_{sel}$ can be used for consistent selection asymptotically, the choice is crucial in practice. As a default method, we suggest using cross-validation to select $C_{sel}$, as described in detail in Section \ref{subsec:real_covsel}. The Supplemental Materials presents a simulation study investigating the quality of support recovery for various threshold values. In the frequentist literature, \cite{drton2004model,drton2007multiple} proposed selection procedures using a related idea to \eqref{mxPBF_selection}, which select a graph by multiple hypothesis testing on each edge. However, they considered only the low-dimensional setting, $n \ge p +1$. For the consistency of $\widehat{S}_{pair, C_{sel}}$, we introduce the following condition for some constants $0<C_3<1, C_4>1$ and $C_5>2$: \vspace{.2cm} \\ (A5) For a given pair $(i,j)$ such that $i\neq j$, \begin{eqnarray*} \sigma_{0,ij}^2 &\ge& \frac{C_4 \sigma_{0,jj} }{1- 2\epsilon_0 {C_5}^{1/2}} \left[ \frac{9 C_5 \tau_{0,ij}^2}{(1-C_3)^2} \vee \frac{ \alpha(1+\gamma) (1+4\epsilon_0{C_5}^{1/2}) \sigma_{0,ii} }{C_3 } \right] \frac{\log (n\vee p)}{n} . \quad\quad \end{eqnarray*} The beta-min condition (A5) is almost the same as (A4) except using $C_5>2$ instead of $C_1>0$ to control the probabilities of small events on which the pairwise Bayes factor might not be consistent. 
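Before stating the selection consistency result, we note that the rule \eqref{mxPBF_selection} is simple to implement in practice. The following self-contained Python sketch is ours; the threshold $C_{sel}$ is a user-supplied constant here, whereas Section \ref{subsec:real_covsel} selects it by cross-validation.

\begin{verbatim}
# Self-contained sketch of the support recovery rule based on pairwise
# Bayes factors; alpha and C_sel are user-supplied tuning constants.
import itertools
import numpy as np

def support_recovery(X, alpha, C_sel):
    # Returns {(i, j), i < j : 2 log tilde{B}_10(X_i, X_j) > C_sel}
    n, p = X.shape
    gamma = float(max(n, p)) ** (-alpha)
    S_hat = set()
    for i, j in itertools.combinations(range(p), 2):
        xi, xj = X[:, i], X[:, j]
        n_tau_i2 = xi @ xi
        n_tau_ij2 = n_tau_i2 - (xj @ xi) ** 2 / ((1.0 + gamma) * (xj @ xj))
        two_log_bf = (np.log(gamma / (1.0 + gamma))
                      - n * (np.log(n_tau_ij2) - np.log(n_tau_i2)))
        if two_log_bf > C_sel:
            S_hat.add((i, j))
    return S_hat
\end{verbatim}

Since the pairwise Bayes factor is not symmetric in $(i,j)$, one may also evaluate the $(j,i)$ ordering and keep the larger of the two values; this matches the requirement in Theorem \ref{thm:select} below that condition (A5) hold for either ordering.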
Theorem \ref{thm:select} states that \eqref{mxPBF_selection} achieves model selection consistency if condition (A5) holds with $(i,j)$ or $(j,i)$ for any $(i,j) \in S(\Sigma_0)$. \begin{theorem}\label{thm:select} Consider model \eqref{model} and prior \eqref{prior} with $\alpha > 4/(1- {2}^{1/2} 3 \epsilon_0)$ for some small constant $0<\epsilon_0< ({2}^{1/2} 3)^{-1}$ and each pair $(i,j)$ such that $i\neq j$. Assume that $\log p \le \epsilon_0^2 \, n$ for all large $n$ and condition (A5) holds with $(i,j)$ or $(j,i)$ for any $(i,j) \in S(\Sigma_0)$. Then, we have \begin{eqnarray*} \lim_{n\to\infty} \normalfont{\bbP_0} \big( \, \widehat{S}_{pair, C_{sel}} =S(\Sigma_0) \, \big) &=& 1 . \end{eqnarray*} \end{theorem} We note that $\widehat{S}_{pair, C_{sel}}$ consistently recovers the support of the true covariance matrix $\Sigma_0$ regardless of the true sparsity as long as $\log p \le \epsilon_0^2 n$ and nonzero entries satisfy the beta-min condition (A5). \cite{rothman2009generalized} proved a similar support recovery result for generalized thresholding of the sample covariance matrix while assuming $\log p = o(n)$, $\max_i \sigma_{0,ii} \le M$ for some $M>0$ and $\min_{(i,j)\in S(\Sigma_0) }\sigma_{0,ij}^2 \ge M' \log p/n$ for some sufficiently large $M'>0$. \cite{cai2011adaptive} assumed $\log p = o(n^{1/3})$ and $\min_{(i,j)\in S(\Sigma_0) }\sigma_{0,ij}^2 \ge C \sigma_{0,ii} \sigma_{0,jj} \log p/n$ for some $C>0$ and obtained consistent support recovery using adaptive thresholding. Our condition, $\log p \le \epsilon_0^2 n$, is much weaker than the conditions used in the literature. The beta-min condition (A5) is similar to that in \cite{cai2011adaptive} and also has the same rate to that in \cite{rothman2009generalized} if we assume $\max_i \sigma_{0,ii} \le M$ for some $M>0$. Thus, the required condition in Theorem \ref{thm:select} is weaker or comparable to the conditions used in the literature. \section{Numerical Results}\label{sec:sim} \subsection{Simulation Study: One-sample Covariance Test}\label{subsec:sim_one} In this section, we demonstrate the performance of our one-sample covariance test in various simulation cases. For the hyperparameters, we suggest using $a_0 = 2 + K^{-2}$ and $b_{0,ij} = \widehat{\tau}_{ij, \gamma=0}^2(a_0-1)$ for some large constant $K>0$, which leads to $E^{\pi}(\tau_{ij}^2) = \widehat{\tau}_{ij, \gamma=0}^2$ and a prior coefficient of variation $\{{\rm var}^\pi(\tau_{ij}^2)\}^{1/2} / E^{\pi}(\tau_{ij}^2) = K$. In the simulation studies, $K=100$ was used and the results are not sensitive to the choice of $K$. The hyperparameter $\alpha$ was chosen as $\alpha= 8.01 (1- 1/\log n)$. If we assume a small $\epsilon_0>0$, the above choice of $\alpha$ asymptotically satisfies $\alpha>8(1+{2}^{1/2}\epsilon_0)^2 /\{1-{2}^{3/2}\epsilon_0(1+{2}^{1/2}\epsilon_0) \}$. We compare our one-sample covariance test with frequentist tests, proposed by \cite{cai2013optimal}, \cite{srivastava2014tests} and \cite{gupta2014exact}. The test suggested by \cite{srivastava2014tests} is based on estimating the squared Frobenius norm, and has a similar perspective to the test proposed by \cite{cai2013optimal}. \cite{gupta2014exact} proposed an exact one-sample covariance test based on fixed columns of the sample covariance matrix. We first generated 100 data sets from the null hypothesis $H_0:\Sigma_n=I_p$ for various choices of $n$ and $p$. We considered two structures for the alternative hypothesis $H_1: \Sigma_n \neq I_p$. 
First, we chose $\Sigma_0 = (\sigma_{0,ij})$ to have a compound symmetry structure \begin{eqnarray}\label{al1} \sigma_{0,ij} &=& I(i=j) \,+\, \rho I (i \neq j) \end{eqnarray} for some signal strength constant $\rho$ ranging from $0.05$ to $0.15$ by $0.025$. In this case, the difference between $\Sigma_0$ and $I_p$ is {\it dense}. As a second case for $\Sigma_0$, we let \begin{eqnarray}\label{al2} \sigma_{0,ij} &=& I(i=j) \,+\, \rho I (i=1,j=2) \,+\, \rho I (i=2,j=1) , \end{eqnarray} for some constant $\rho$ ranging from $0.3$ to $0.8$ by $0.025$. Because \eqref{al2} has signals at only two locations, the difference between $\Sigma_0$ and $I_p$ is {\it sparse}. We generated 100 simulated data sets from $N_p(0, \Sigma_0)$ for each setting. \begin{figure*}[!tb] \centering \includegraphics[width=5.cm,height=4.7cm]{comp_n100p200_1} \includegraphics[width=5.cm,height=4.7cm]{comp_n100p200_2} \includegraphics[width=5.cm,height=4.7cm]{comp_n100p200_3} \includegraphics[width=5.cm,height=4.7cm]{comp_n200p500_1} \includegraphics[width=5.cm,height=4.7cm]{comp_n200p500_2} \includegraphics[width=5.cm,height=4.7cm]{comp_n200p500_3} \vspace{-.2cm} \caption{ Receiver operating characteristic curves are represented for the four tests based on 100 simulated data sets for each hypothesis $H_0: \Sigma_n=I_p$ and $H_1: \Sigma_n \neq I_p$, where \eqref{al1} was used for $H_1$. mxPBF, CM, SYK and GB represent the test proposed in this paper, \cite{cai2013optimal}, \cite{srivastava2014tests} and \cite{gupta2014exact}, respectively. } \label{fig:roc1} \end{figure*} \begin{figure*}[!tb] \centering \includegraphics[width=5.cm,height=4.7cm]{sp_n100p200_1} \includegraphics[width=5.cm,height=4.7cm]{sp_n100p200_2} \includegraphics[width=5.cm,height=4.7cm]{sp_n100p200_3} \includegraphics[width=5.cm,height=4.7cm]{sp_n200p500_1} \includegraphics[width=5.cm,height=4.7cm]{sp_n200p500_2} \includegraphics[width=5.cm,height=4.7cm]{sp_n200p500_3} \vspace{-.2cm} \caption{ Receiver operating characteristic curves are represented for the four tests based on 100 simulated data sets for each hypothesis $H_0: \Sigma_n=I_p$ and $H_1: \Sigma_n \neq I_p$, where \eqref{al2} was used for $H_1$. mxPBF, CM, SYK and GB represent the test proposed in this paper, \cite{cai2013optimal}, \cite{srivastava2014tests} and \cite{gupta2014exact}, respectively. } \label{fig:roc2} \end{figure*} We calculated receiver operating characteristic curves to illustrate and compare the performance of the tests. For each setting, points of the curves were obtained based on various thresholds and significance levels for $B_{\max, 10}(X)$ and the frequentist tests, respectively. We tried $n=100,200,300$ and $p=200, 500$ for each setting. Figure \ref{fig:roc1} shows results based on 100 simulated data sets from $N_p(0, I_p)$ ($H_0$) and 100 simulated data sets from $N_p(0,\Sigma_0)$ with a compound symmetry structured $\Sigma_0$ ($H_1$) given in \eqref{al1}, for $(n,p) = (100, 200)$ and $(n,p)=(200, 500)$. The false positive rate corresponds to the rate of $H_0$'s falsely detected as $H_1$'s. Similarly, the true positive rate is the rate of $H_1$'s correctly detected as $H_1$'s. In this setting, as expected, the tests in \cite{cai2013optimal}, \cite{srivastava2014tests} and \cite{gupta2014exact} work better than the test proposed in this paper. However, as we can see from the second and third columns in Figure \ref{fig:roc1}, $B_{\max, 10}(X)$ also performs well so long as there is a {\it meaningful signal} somewhere.
The only case in which our method is not as powerful is when weak signals are spread through the alternative covariance matrix, in which case one may question the meaningfulness of the signals. Figure \ref{fig:roc2} shows results based on 100 simulated data sets from $N_p(0, I_p)$ and 100 simulated data sets from $N_p(0,\Sigma_0)$ with $\Sigma_0$ given in \eqref{al2}, when $(n,p) = (100, 200)$ and $(n,p)=(200, 500)$. As expected, $B_{\max, 10}(X)$ is much more powerful than the frequentist tests when $\Sigma_0 - I_p$ is sparse. Furthermore, the performances of the frequentist tests based on the Frobenius norm are almost the same for every setting, while $B_{\max, 10}(X)$ has better performance when $(n,p)=(200,500)$ than $(n,p)=(100,200)$. Interestingly, the performance of the test in \cite{gupta2014exact} improves as the signal strength $\rho$ increases. Thus, the test in \cite{gupta2014exact} is more sensitive to sparse changes than the other tests based on the Frobenius norm difference. This makes sense because it focuses on the changes in a column of the covariance matrix rather than in the whole covariance matrix. \subsection{Simulation Study: Testing Diagonality}\label{subsec:sim_diag} We conducted a simulation study to illustrate the performance of our proposed diagonality test. The hyperparameter $\alpha$ was chosen as $\alpha = 4.01(1-1/\log n)$. We generated 100 data sets from the null $H_0: \sigma_{ij}=0$ for any $i\neq j$ using $\Sigma_0 = I_p$. The two structures of $\Sigma_0$ under $H_1$ used in the previous section, \eqref{al1} and \eqref{al2}, were considered. For each setting, 100 data sets were generated. We compare our test with some existing frequentist tests. \cite{cai2011limiting} proposed a diagonality test based on the maximum of sample correlations. Here $\widehat{\tau}_{ij, \gamma}^2$ in the pairwise Bayes factor $\tilde{B}_{10}(\tilde{X}_i, \tilde{X}_j)$ is a decreasing function of the absolute sample correlation between $\tilde{X}_i$ and $\tilde{X}_j$. \cite{lan2015testing} developed a test in the regression setting based on the squared Frobenius norm of a sample covariance matrix. Their test should be powerful against dense alternatives. We also conducted maximum-type tests based on Kendall's tau and Spearman's rho \citep{han2017distribution}. \cite{chen2018testing} assumed $p$-dimensional observations from a common multivariate normal distribution and investigated the dependence {\it among samples}. Since their method can be applied to the diagonality test by considering $X^T$ instead of $X$, we included it as a contender. Their test requires $p = O(n)$ and uniformly bounded eigenvalues of $\Sigma_0$ for its theoretical properties, excluding the high-dimensional setting $p \gg n$ and some interesting covariance classes like compound symmetry. Finally, we also considered frequentist union-intersection tests based on the p-values associated with the marginal independence tests. A $t$-test for Pearson's correlation was conducted for testing $H_{0,ij}:\sigma_{ij}=0$ for each pair $i>j$, and the null hypothesis $H_0: \sigma_{ij} =0$ for any $i\neq j$ was rejected if at least one $H_{0,ij}$ was rejected. To calculate the p-values, we used the \verb|cor0.test| function in the \verb|GeneNet| package.
\begin{figure*}[!tb] \centering \includegraphics[width=5.cm,height=4.7cm]{diag_comp_n50p150} \includegraphics[width=5.cm,height=4.7cm]{diag_comp_n50p300} \includegraphics[width=5.cm,height=4.7cm]{diag_comp_n50p500} \includegraphics[width=5.cm,height=4.7cm]{diag_sp_n50p150} \includegraphics[width=5.cm,height=4.7cm]{diag_sp_n50p300} \includegraphics[width=5.cm,height=4.7cm]{diag_sp_n50p500} \vspace{-.2cm} \caption{ Areas under the curves are represented for the tests based on 100 simulated data sets for each hypothesis $H_0: \sigma_{ij}=0$ for any $i\neq j$ and $H_1:$ not $H_0$. ``dense'' and ``sparse'' mean that the true covariance matrix $\Sigma_0$ under $H_1$ was generated from \eqref{al1} and \eqref{al2}, respectively. mxPBF, CL and Lan represent the tests proposed in this paper, \cite{chen2018testing} and \cite{lan2015testing}, respectively. HCL1 and HCL2 represent the tests based on Kendall's tau and Spearman's rho, respectively. ``multiple'' means the frequentist union-intersection test. } \label{fig:roc3} \end{figure*} Figure \ref{fig:roc3} shows the area under the receiver operating characteristic curve for varying signal strength $\rho$ for each fixed $(n,p)$. We omit the results of \cite{cai2011limiting}, which were almost identical to those of our test in every setting. As expected, the test of \cite{lan2015testing} is more powerful against dense alternatives. The other tests, except the test of \cite{chen2018testing}, have less power, but work reasonably well as the signal $\rho$ grows. The test of \cite{chen2018testing} does not work well, likely because \eqref{al1} violates their assumptions. When $\Sigma_0 - I_p$ is sparse, the test of \cite{lan2015testing} does not work well even when $\rho$ is large. The other tests show good results against sparse alternatives, but our test has better performance. \subsection{Support Recovery using Gene Expression Data}\label{subsec:real_covsel} To describe the practical performance of the support recovery procedure \eqref{mxPBF_selection}, $\widehat{S}_{pair}$, we analyzed a data set from a small round blue-cell tumor microarray experiment \citep{khan2001classification}. The data set originally had 6,567 gene expression values, and 2,308 gene expressions were selected by an initial filtering \citep{khan2001classification}. For comparison purposes, we focus on the preprocessed data used in \cite{rothman2009generalized} and \cite{cai2011adaptive}, consisting of $p=200$ gene expression values for each of $n=64$ training tissue samples. There are four types of tumors represented in these tissue samples. Data were centered prior to analysis. For pairwise Bayes factors, the hyperparameter was set at $\alpha = 4.01 (1 - 1/\log n)$. We used cross-validation to select $C_{sel}$. Let $n$ be the number of observations for a given data set. We randomly divided the data 50 times into two subsamples with sizes $n_1=\lceil n/3 \rceil$ and $n_2 = n-n_1$ as a test set and training set, respectively. Denote $I_1$ and $I_2$ as indices for the test set and training set, respectively; thus, $|I_1|= n_1$, $|I_2|=n_2$ and $I_1 \cup I_2 = \{1,\ldots, n\}$. Let $\hat{S}_j(C_{sel})$ be the estimated support for the $j$th column of the covariance matrix via pairwise Bayes factors, based on $\{ X_i \}_{i \in I_2}$ and a given threshold $C_{sel}$.
We calculated the averaged mean squared error \begin{eqnarray*} MSE(C_{sel}) &=& \sum_{j=1}^p \sum_{l \in \hat{S}_j(C_{sel}) } \Big\{ \sum_{i \in I_1} (X_{ij} - X_{il} \widehat{\beta}_{jl} )^2 /(n_1-1) \Big\} /|\hat{S}_j(C_{sel})|, \end{eqnarray*} where $\widehat{\beta}_{jl}$ is a least squares estimate with respect to the dependent variable $\{X_{ij}\}_{i\in I_1}$ and the covariate $\{X_{il} \}_{i\in I_1}$. The threshold $C_{sel}$ was varied from $-7$ to $10$ in increments of $0.2$, and we selected the value $\widehat{C}_{sel}$ that minimizes $50^{-1} \sum_{\nu=1}^{50} MSE_\nu(C_{sel})$, where $MSE_\nu(C_{sel})$ is the averaged mean squared error based on the $\nu$th split. We compared our method with the generalized thresholding estimators of \cite{rothman2009generalized} and \cite{cai2011adaptive}. \cite{rothman2009generalized} used a universal threshold $\lambda = \delta (\log p/n)^{1/2}$, while \cite{cai2011adaptive} used an individual threshold $\hat{\lambda}_{ij} = \delta ( \hat{\theta}_{ij}\log p/n)^{1/2}$ with a data-dependent $\hat{\theta}_{ij}$. We denote the thresholding estimators proposed by \cite{rothman2009generalized} and \cite{cai2011adaptive} by $\widehat{\Sigma}_\delta$ and $\widehat{\Sigma}_\delta^\star$, respectively. We used the adaptive lasso thresholding rule, $s_\lambda(\sigma) = \sigma \max( 1- |\lambda/\sigma|^\eta , 0)$ with $\eta=4$, because it gave good support recovery results in simulation studies in \cite{rothman2009generalized} and \cite{cai2011adaptive}. We adopted the cross-validation method described in Section 4 of \cite{cai2011adaptive} to select $\delta$ and denote the selected tuning parameter by $\hat{\delta}$. \begin{figure*}[!tb] \centering \includegraphics[width=10cm,height=7cm]{estimated_supports} \caption{ The absolute sample correlation matrix (top left) and estimated supports from various methods. Clockwise from the top right are plots for the estimated supports based on $\widehat{S}_{pair,\widehat{C}_{sel}}$, $\widehat{\Sigma}_{\hat{\delta}}^\star$ and $\widehat{\Sigma}_{\hat{\delta}}$, respectively. } \label{fig:estimated_supports} \end{figure*} \begin{figure*}[!tb] \centering \includegraphics[width=14cm,height=5.3cm]{two_supports} \caption{ The ordered absolute sample correlation matrix and estimated supports for the top 40 genes, with 1's representing the estimated supports from $\widehat{S}_{pair,\widehat{C}_{sel}}$ (left) and $\widehat{\Sigma}_{\hat{\delta}}^\star$ (right). } \label{fig:two_supports} \end{figure*} Figure \ref{fig:estimated_supports} shows the support recovery results and the absolute sample correlation matrix. The estimated supports based on $\widehat{S}_{pair,\widehat{C}_{sel}}$, $\widehat{\Sigma}_{\hat{\delta}}^\star$ and $\widehat{\Sigma}_{\hat{\delta}}$ are represented. One can see that $\widehat{S}_{pair,\widehat{C}_{sel}}$ and $\widehat{\Sigma}_{\hat{\delta}}^\star$ show the clustering structure between informative (top 40) and non-informative (bottom 160) genes, while the structure is somewhat blurred in $\widehat{\Sigma}_{\hat{\delta}}$. To compare $\widehat{S}_{pair,\widehat{C}_{sel}}$ and $\widehat{\Sigma}_{\hat{\delta}}^\star$ in more detail, we further focused on the top 40 genes. We applied hierarchical clustering to the genes based on the complete linkage method using the \verb|R| function \verb|hclust|, and the genes were ordered according to the clustering result. Figure \ref{fig:two_supports} shows the ordered absolute sample correlation matrix and estimated supports for the top 40 genes.
The clustering result suggests that there are four clusters, consistent with the four tumor types. Both support recovery procedures detect significant blocks in the sample correlation matrix. However, our support recovery procedure shows the clustering structure much more clearly, while $\widehat{\Sigma}_{\hat{\delta}}^\star$ gives a blurred structure due to a dense support estimate. The procedure based on pairwise Bayes factors has the advantage of producing a sparser, and hence potentially more interpretable, estimate of the support. \section{Discussion}\label{sec:disc} We have focused on covariance matrix structure testing in this paper, but the maximum pairwise Bayes factor idea can be easily applied to other related settings. For example, testing differences across groups in high-dimensional mean vectors is an interesting possibility. When the two mean vectors are almost the same but differ only at a few locations, a maximum pairwise Bayes factor approach should have relatively high power. Similarly, it can be applied to the high-dimensional two-sample covariance test. Two covariances from two populations may differ only in a small number of entries. There are some possible generalizations of the pairwise Bayes factor idea. To accelerate computation, a random subsampling method can be used instead of calculating the pairwise Bayes factor for every single pair $(i,j)$. It would be interesting to develop a suitable random subsampling or random projection scheme achieving desirable theoretical properties. Especially when $p$ is huge, it will effectively reduce the computational complexity. The maximum pairwise Bayes factor approach is also trivially parallelizable. Another possibility is to consider alternatives to the maximum for combining the information from the pairwise Bayes factors. If there are many weak non-zero covariances, then the average or summation may be preferable to the maximum. A suitable modification to learn parameters in the combining operator can potentially make the test powerful against a broad class of alternative hypotheses.
\section{Introduction} Convinced of cosmic speed up and not finding the dark energy hypotheses a compelling explanation, some cosmologists have looked for alternatives to Einstein's gravitation (Deffayet et al. 2002; Freese et al. 2002; Ahmed et al. 2002; Dvali et al. 2003; Capozziello et al. 2003; Carroll et al. 2003; Norjiri et al. 2003, 2004, and 2006; Das et al. 2005; Sotiriou 2005; Woodard 2006). There is a parallel situation in galactic studies. Dark matter hypotheses, intended to explain the flat rotation curves of spirals or the large velocity dispersions in ellipticals, have raised more questions than answers. Alternatives to newtonian dynamics have been proposed but have had their own critics. Foremost among such theories, the Modified Newtonian Dynamics (MOND) of Milgrom (1983 a,b,c) is able to explain the flat rotation curves (Sandres et al. 1998 and 2002) and justify the Tully-Fisher relation with considerable success. But it is often criticized for the lack of an axiomatic foundation; see, however, Bekenstein's (2004) TeVeS theory where he attempts to provide such a foundation by introducing a tensor, a vector, and a scalar field into the field equations of GR. Here we are concerned with galactic problems. We suggest following cosmologists and look for a modified Einstein gravity tailored to galactic environments. In Sects. 2 and 3 we design an action integral, different but close to that of Einstein-Hilbert, and find a spherically symmetric static solution to it. In Sect. 4 we analyze the orbits of test objects moving in this modified spacetime and demonstrate the kinship of the obtained dynamics with MOND. Section 5 is devoted to concluding remarks.\textbf{} \section{A modified field equation} The model we consider is an isolated mass point. As an alternative to the Einstein-Hilbert action, we assume \begin{eqnarray} \label{e1} &&S= \frac{1}{2}\int f(R)\sqrt{-g}d^4x, \end{eqnarray} where $R$ is the Ricci scalar and $f(R)$ an as yet unspecified, but differentiable function of $R$. Variations in $S$ with respect to the metric tensor lead to the following field equation (Capozziello et al. 2003): \begin{eqnarray} \label{e2} &&R_{\mu\nu}-\frac{1}{2}g_{\mu\nu}\frac{f}{h}=\left(h_{;\mu\nu}-{h_{;_\lambda}}^\lambda g_{\mu\nu}\right)\frac{1}{h}, \end{eqnarray} where $h=df/dR$. The case $f(R)=R+constant$ and $h=1$ gives the Einstein field equation with a cosmological constant included in it. For the purpose of galactic studies, we envisage a spherically symmetric static Schwarzschild-like metric, \begin{eqnarray}\label{e3} &&ds^2=-B(r)d t^2+ A(r) d r^2+ r^2\left(d\theta^2+\sin^2\theta d\varphi^2\right). \end{eqnarray}\vspace{1mm} From Eqs. (\ref{e2}) and (\ref{e3}) one obtains \begin{eqnarray}\label{e4} && \frac{B'}{B}+\frac{A'}{A}=-r\frac{h''}{h}+\frac{1}{2} r\left(\frac{B'}{B}+\frac{A'}{A}\right)\frac{h'}{h}, \\&&\cr \label{e5} &&\frac{B''}{B}-\frac{1}{2}\left(\frac{B'}{B}+\frac{2}{r}\right)\left(\frac{B'}{B} +\frac{A'}{A}\right)-\frac{2}{r^2}+\frac{2A}{r^2}\cr && \hspace{6mm}= 2\frac{h''}{h}-(\frac{A'}{A}+\frac{2}{r})\frac{h'}{h} ,\\ && \cr\label{e6}&\!\!\!\!\al \frac{B''}{B}-\frac{1}{2}\frac{B'}{B}\left(\frac{B'}{B}+\frac{A'}{A}\right)- \frac{2}{r}\frac{A'}{A}\cr &\!\!\!\!\al \hspace{6mm}= f\frac{A}{h} -\left(\frac{B'}{B}+\frac{4}{r}\right)\frac{h'}{h},\\ &&\cr \label{e7} && R=2\frac{f}{h}-\frac{3}{A}\left[\frac{h''}{h}+\left\{\frac{1}{2}\left( \frac{B'}{B}-\frac{A'}{A}\right)+\frac{2}{h}\right\}\frac{h'}{h}\right]. 
\end{eqnarray}\vspace{1mm} Equation (\ref{e4}) is the combination $R_{tt}/B+R_{rr}/A$, Eq. (\ref{e5}) is $R_{rr}/A-R_{\theta\theta}/r^2$, and Eq. (\ref{e6}) is the $rr$-component of the field equation. Finally, Eq. (\ref{e7}) is from the contraction of Eq. (\ref{e2}). In principle, for a given $h$ (or$f$) one should be able to solve the four Eqs. (\ref{e4})-(\ref{e7}) for the four unknowns, $A$, $B$, $R $, and $f$ (or $h$), as functions of $r$. \section{Solutions of Equations (4)-(7)} We are interested in solutions that differ from those of the classic GR by small amounts. For the classic GR one has $h=1$ and $A(r)B(r)= 1$. Here, we argue that, if the combination $B'/B+A'/A$ is a well-behaved differential expression, it should have a solution of the form $A(r)B(r)=g(r)$. Furthermore, $g(r)$ should differ from 1 only slightly, in order to remain in the vicinity of GR. There are a host of possibilities. For the sake of argument let us assume $g(r)=(r/s)^\alpha \approx1+\alpha\ln (r/s)$, where $\alpha$ is a small dimensionless parameter and $s$ is a length scale of the system to be identified shortly. Equation (\ref{e4}) splits into \vspace{1mm}\vspace{1mm} \begin{eqnarray}\label{e8} && \frac{B'}{B}+\frac{A'}{A}=\frac{\alpha}{r}, ~~~AB=\left(\frac{r}{s}\right)^\alpha, \\ \label{e9}&& h''-\frac{1}{2}\frac{\alpha}{r}h'+\frac{\alpha}{r^2}h=0. \end{eqnarray} Equation (\ref{e9}) has the solution $h=(r/s)^\beta$, $\beta=\alpha+O\left(\alpha^2\right)$, and $1-\frac{1}{2}\alpha+O\left(\alpha^2\right).$ Of these, the solution $h\approx (r/s)^\alpha$ satisfies the requirement $h\rightarrow1$ as $\alpha\rightarrow 0$. The second solution is discarded. Substituting $AB=h=(r/s)^\alpha$ in Eq. (\ref{e5}) gives \vspace{1mm} \begin{eqnarray} \label{e10} && \frac{1}{A}=\frac{1}{(1-\alpha)}\left[1-\left(\frac{s}{r}\right)^{(1-\alpha/2)}+\lambda \left(\frac{r}{s}\right)^{2(1-\alpha/2)}\right],\\\vspace{1mm} \label{e11}&& B=\left(\frac{r}{s}\right)^\alpha \frac{1}{A}, \end{eqnarray} where $\lambda$ is a constant of integration. Actually there is another constant of integration multiplying the $(s/r)$ term. We have, however, absorbed it in the expression for $s$ that we now define. For $\alpha=0$, Eqs. (\ref{e10}) and (\ref{e11}) are recognized as the Schwarzschild-deSitter metric. Therefore, $s$ is identified with the Schwarzschild radius of a central body, $2GM/c^2$, and $\lambda$ with a dimensionless cosmological constant. Substitution of Eqs. (\ref{e10}) and (\ref{e11}) into Eqs. (\ref{e6}) and (\ref{e7}) gives \vspace{1mm} \begin{eqnarray} \label{e12} && f=\frac{3}{(1-\alpha)}\frac{1}{r^2}\left[\alpha \left(\frac{r}{s}\right)^\alpha+(2+\alpha)\lambda \left(\frac{r}{s}\right)^2 \right], \\\cr\label{e13} && R=\frac{3}{(1-\alpha)}\frac{1}{r^2} \left[\alpha +(4-\alpha)\lambda \left(\frac{r}{s}\right)^{(2-\alpha)}\right]. \end{eqnarray} The Ricci scalar of the Schwarzschild space is zero and that of the deSitter or the Schwarzschild-deSitter space is constant. For non zero $\alpha$, however, $R$ is somewhere between these two extremes. At small distances it increases as $r^{-2}$ and at large $r$'s it behaves as $s^{-2}(s/r)^\alpha \approx s^{-2}(1- \alpha \ln{r/s})$. The spacetime is asymptotically neither flat nor deSitterian. Cosmologists may find this variable Ricci scalar relevant to their purpose ( see also Brevik et al, 2004, for a different modification of the Schwarzschild-deSitter metric). Likewise, we began with $f$ as a function of $R$ rather than $r$. Elimination of $r$ between Eqs. 
(\ref{e12}) and (\ref{e13}) provides one in terms of the other. For $\lambda = 0$, one easily finds \begin{eqnarray} \label{e14} && f =(3\alpha)^{\alpha/2} s^{-\alpha}R^{(1-\alpha/2)}\approx R [1-\frac{\alpha}{2}\ln (s^2 R)+\frac{\alpha}{2}\ln(3\alpha)]. \end{eqnarray} Once more we observe the mild logarithmic correction to the classic GR. \section{Applications to galactic environments} In this section we demonstrate that \begin{itemize} \item The logarithmic modification of the Einstein-Hilbert action, in the weak field regime, results in a logarithmic correction to the newtonian potential. A test star moving in such a potential acquires a constant asymptotic speed, $v_\infty=c\sqrt{\alpha/2}$. \item The asymptotic speed cannot be independent of the central mass. We resort to the observed rotation curves of spirals to find this dependence. \item The high- and low- acceleration limits of the weak-field regime are the same as those of MOND. A kinship with MOND follows. \end{itemize} \subsection{Orbits in the spacetime of Equations (\ref{e10})-(\ref{e13})} We assume a test star orbiting a central body specified by its Schwarzschild radius, $2GM/c^2$. We choose the orbit in the plane $\theta=\pi/2$. The geodesic equations for $r$, $\varphi$, and $t$ are \vspace{1mm} \begin{eqnarray}\label{e15} && \frac{d^2r}{d\tau^2}+\frac{1}{2}\frac{A'}{A}\left(\frac{dr}{d\tau}\right)^2-\frac{r}{A}\left(\frac{d \varphi}{d\tau}\right)^2+\frac{1}{2}\frac{B'}{A}\left(\frac{dt}{d\tau}\right)^2=0, \\\cr\label{e16} &&\left(\frac{d \varphi}{d\tau}\right)^{-1} \frac{d^2\varphi}{d\tau^2}+\frac{2}{r}\frac{dr}{d\tau}=0, \\\cr\label{e17} && \left(\frac{dt}{d\tau}\right)^{-1}\frac{d^2t}{d\tau^2}+\frac{B'}{B}\frac{dr}{d\tau}=0, \end{eqnarray} respectively. Equations (\ref{e16}) and (\ref{e17}) immediately integrate into \begin{eqnarray}\label{e18} && r^2d\varphi/d\tau=J, ~~\textrm{a constant}, \\\cr \label{e19} && dt/d\tau=1/B. \end{eqnarray} Substituting the latter into Eq. (\ref{e15}) and assuming a circular orbit, $dr/d\tau=0$, gives \begin{eqnarray}\label{e20} && \frac{J^2}{r^3}=\frac{1}{2}\frac{B'}{B^2}=\frac{1}{2}\left(\frac{r}{s}\right)^\alpha\frac{B'}{B^4}, \end{eqnarray} where we have used Eq. (\ref{e11}) to eliminate A. In galactic environments what one measures as the circular orbital speed is \begin{eqnarray}\label{e21} && v=\frac{rd\varphi}{\sqrt B dt}=\frac{r}{\sqrt B} \frac{d \varphi}{d\tau}\frac{d\tau}{d t}=\frac{\sqrt B J}{r}. \end{eqnarray} Eliminating $J$ between Eqs. (\ref{e21}) and (\ref{e20}) gives \begin{eqnarray}\label{e22} && v^2=\frac{1}{2} \frac{r B'}{B} = \frac{1}{2}\left[ \alpha - \frac{rA'}{A} \right]. \end{eqnarray} Further substitution for $B$ from Eqs. (\ref{e11}) and (\ref{e10}) yields \begin{eqnarray}\label{e23} v^2=\frac{1}{2}\alpha + \frac{1}{2}\left(1-\alpha/2\right) \frac{\left[\left(\frac{s}{r}\right)^{(1-\alpha/2)}+2\lambda \left(\frac{r}{s}\right)^{2(1-\alpha/2)}\right]}{\left[1-\left(\frac{s}{r}\right)^{(1-\alpha/2)}+\lambda \left(\frac{r}{s}\right)^{2(1-\alpha/2)}\right]}. \end{eqnarray} To put Eq. (\ref{e23}) in a tractable form: \begin{itemize} \item We neglect the $\lambda$ term and substitute $s=2GM/c^2$. \item We adopt the approximation $x^{-\alpha}=\exp(-\alpha\ln x)=1-\alpha\ln x+O\left(\alpha^2\right)$. \item The terms containing $s$ are small. We retain only the first order terms in $s$. \item $v$ is measured in units of $c$. We restore it hereafter. \end{itemize} With these provisions, Eq. 
(\ref{e23}) reduces to \begin{eqnarray}\label{e24} && v^2=\frac{1}{2}\alpha c^2 +\frac{GM}{r}\left[1-\frac{1}{2}\alpha\left\{1+\ln\left(\frac{2GM }{c^2r}\right)\right\}\right]. \end{eqnarray} A plot of $v^2$ as a function of $r$ has the horizontal asymptote $\frac{1}{2}\alpha c^2$. \subsection{Determination of $\alpha$} The asymptote in Eq. (\ref{e24}) cannot be a universal constant. It is not possible to imagine that a galaxy and a speck of dust dictate the same speed for distant passing objects. The parameter $\alpha$ should depend on the mass of the gravitating body residing at the origin, because any localized matter will betray no characteristics other than its mass when sensed from far distances. To find the mass dependence of $\alpha$ we resort to observations. From Sanders and Verheijn (1998) and Sanders and McGaugh (2002), we have compiled a list of 31 spirals for which total masses, asymptotic orbital speeds, and velocity curves are reported. The figures in their papers contain the observed circular speeds and the newtonian ones derived from the observed mass of the stellar and HI components of the galaxies. We have selected those objects that a) have a noticeable horizontal asymptote, b) have fairly reduced newtonian speeds by the time the flat asymptote is approached, and c) do not possess anomalously high HI content to hinder estimates of the total mass and the size of the galaxy. We also made the assumption that the total HI and stellar mass are distributed spherically symmetrically and mimic a point mass if observed from far distances. The relevant data along with $\alpha=2v_\infty^2/c^2$ are reported in the table, and the figure is a log-log plot of the calculated $\alpha$ versus the mass. A power law fit to the data gives \begin{eqnarray}\label{e25} &&\alpha = (3.07 \pm 0.18)\times 10^{-7} (M/ 10^{10}M_\odot)^{0.494}. \end{eqnarray}\\ It is important to note that Eq. (\ref{e25}) is not a consequence of the present theory, but rather an empirical relation dictated by observations and based on the masses and the asymptotic speeds of a selected list of galaxies reported by Sanders et al. Together with the popularly accepted rule that the masses and the luminosities of spirals are linearly related, it leads to a Tully-Fisher (TF) relation, $Luminosity \propto {v_{\infty}}^{4.05}$. Observational actualities, however, are complicated. In a recent paper, Kregel et al. (2005) distinguish between different TF relations based on the luminosity, disk mass, maximum disk stellar mass, baryonic mass (meaning stellar+HI mass), baryonic + bulge mass, etc. The reported exponents range from $3.23\pm 0.36$ to $4.2\pm 0.23$, depending on the type of qualification; see also Gurovich et al. (2004). A more elaborate discussion of the issue falls beyond the scope of the present paper. The main sources of error in Eq. (\ref{e25}), both in the exponent and in the slope, are a) the estimates of the total masses of the galaxies, b) the judgment whether what one measures as the asymptotic speed is indeed the orbital speed at the far outskirts of the galaxy, c) the popular assumption that the masses and luminosities of the spirals are linearly related, and finally, d) our heuristic assumption that the galaxies can be treated as spherically symmetric objects. In spite of all these uncertainties, we note that the exponent 0.494 is astonishingly close to 0.5, the figure that one finds from MOND. 
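For illustration, the power-law fit of Eq.~(\ref{e25}) can be reproduced with an ordinary least-squares regression in log-log space. The sketch below uses only a small subset of the galaxies listed in Table~\ref{tab1} (masses in units of $10^{10}M_\odot$, asymptotic speeds in km/s), so the fitted numbers are merely indicative of the quoted values.
\begin{verbatim}
import numpy as np

# Subset of (M, v_inf) pairs taken from Table 1.
M     = np.array([22.0, 10.8, 4.62, 1.57, 0.61, 0.14])   # 10^10 solar masses
v_inf = np.array([250., 214., 164., 134., 107., 73.])    # km/s

c = 2.998e5                            # speed of light in km/s
alpha = 2.0 * (v_inf / c)**2           # alpha = 2 (v_inf / c)^2

# least-squares fit of log10(alpha) = log10(A) + p * log10(M)
p, logA = np.polyfit(np.log10(M), np.log10(alpha), 1)
print(p, 10**logA)                     # exponent near 0.5, prefactor near 3e-7
\end{verbatim}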
We also demonstrate in the following section that the slope $3.00\times 10^{-7}$ is in very good agreement with the characteristic acceleration of MOND. \begin{table} \caption{ The data in the first four columns are from Sanders et al. 2002. The last two columns show the empirical relation between the asymptotic speeds and the masses of the galaxies.}\label{tab1} \begin{center} \begin{tabular}{cccccc} \hline Galaxy&R&M&$v_\infty$&$2(v_\infty/c)^2$&$\alpha_0$\\ &kpc&$10^{10}M_\odot$&km/s&$\times10^{7}$&$\times10^{12}$\\ \hline\hline NGC 5533 & 72.0 & 22.0 & 250 & 13.9 & 3.02 \\ NGC 3992 & 30.0 & 16.22& 242 & 13.0 & 3.28 \\ NGC 5907 & 32.0 & 10.8 & 214 & 10.2 & 3.15 \\ NGC 2998 & 48.0 & 11.3 & 213 & 10.1 & 3.05 \\ NGC 801 & 60.0 & 12.9 & 218 & 10.6 & 3.00 \\ NGC 5371 & 40.0 & 12.5 & 208 & 9.61 & 2.76 \\ NGC 4157 & 26.0 & 5.62 & 185 & 7.61 & 3.24 \\ NGC 4217 & 14.5 & 4.50 & 178 & 7.04 & 3.35 \\ NGC 4013 & 27.0 & 4.84 & 177 & 6.96 & 3.19 \\ NGC 4088 & 18.8 & 4.09 & 173 & 6.65 & 3.32 \\ NGC 4100 & 19.8 & 4.62 & 164 & 5.98 & 2.81 \\ NGC 3726 & 28.0 & 3.24 & 162 & 5.83 & 3.26 \\ NGC 4051 & 10.6 & 3.29 & 159 & 5.62 & 3.12 \\ NGC 4138 & 13.0 & 3.01 & 147 & 4.82 & 2.80 \\ NGC 2403 & 19.0 & 1.57 & 134 & 3.99 & 3.19 \\ UGC 128 & 40.0 & 1.48 & 131 & 3.81 & 3.14 \\ NGC 3769 & 33.0 & 1.33 & 122 & 3.31 & 2.88 \\ NGC 6503 & 21.8 & 1.07 & 121 & 3.25 & 3.14 \\ NGC 4183 & 18.0 & 0.93 & 112 & 2.79 & 2.89 \\ UGC 6917 & 9.0 & 0.74 & 110 & 2.69 & 3.12 \\ UGC 6930 & 14.5 & 0.73 & 110 & 2.69 & 3.14 \\ M 33 & 9.0 & 0.61 & 107 & 2.54 & 3.24 \\ UGC 6983 & 13.8 & 0.86 & 107 & 2.54 & 2.74 \\ NGC 7793 & 6.8 & 0.51 & 100 & 2.22 & 3.10 \\ NGC 300 & 12.4 & 0.35 & 90 & 1.80 & 3.02 \\ NGC 5585 & 12.0 & 0.37 & 90 & 1.80 & 2.94 \\ NGC 6399 & 6.8 & 0.28 & 88 & 1.72 & 3.22 \\ NGC 55 & 10.0 & 0.23 & 86 & 1.64 & 3.39 \\ UGC 6667 & 6.8 & 0.33 & 86 & 1.64 & 2.83 \\ UGC 6923 & 4.5 & 0.24 & 81 & 1.46 & 2.95 \\ UGC 6818 & 6.0 & 0.14 & 73 & 1.18 & 3.12 \\ \hline \end{tabular} \end{center} R: radius of the galaxy (kpc); M: stellar + HI mass ($10^{10}M_\odot$); $v_{\infty}$: asymptotic speed (km/sec); $\alpha_0$: $2(v_\infty/c)^2 M^{-0.494}$. \end{table} \begin{center} \begin{figure}[h] \special{psfile=5188fig1.eps vscale=35 hscale=35 hoffset=25 voffset=-205} \vspace{4.3cm} \caption {A log-log plot of $\alpha$ versus M. The equation for the power law fit is shown in the legend. }\label{f1} \end{figure} \end{center} \subsection{ Kinship with MOND} We recall that in the weak-field approximation, the newtonian dynamics is derived from the Einsteinian one by writing the metric coefficient $B=\left(1+2\phi/c^2\right),\phi=GM/r$ and by expanding all relevant functions and equations up to the first order in $\phi/c^2$. In a similar way one may find our modified newtonian dynamics from the presently modified GR by expanding $B(r)$ of Eq. (\ref{e11}) up the first order in $\alpha$ and $s/r$. Thus \begin{eqnarray}\label{e26} &&B(r) =1+\alpha+\alpha\ln(r/s)-s/r=1+2\phi(r)/c^2, \end{eqnarray} where the second equality defines $\phi(r)$. Let us write Eq. ($\ref {e25}$) (with slight tolerance) as $\alpha=\alpha_0 (GM/GM_\odot)^{1/2}$ and find the gravitational acceleration\\ \begin{eqnarray}\label{e27} ~~~g &\!\!\!\!~= &\!\!\!\!~ \left|d\phi/dr\right|=(a_0 g_n)^{1/2}+g_n \\ &\!\!\!\!~ = &\!\!\!\!~g_n ~~{\rm for}~~ g_n\gg a_0 \cr &\!\!\!\!~ = &\!\!\!\!~ (a_0 g_n)^{1/2}~~ {\rm for} ~~ a_0\gg g_n \rightarrow 0, \nonumber \end{eqnarray} where we have denoted \begin{eqnarray}\label{e28} &&a_0 = \alpha_0 ^2c^4/4GM_\odot ~~{\rm and}~~ g_n=GM/r^2. 
\end{eqnarray} The limiting behaviors of $g$ are the same as those of MOND. One may, therefore, comfortably identify $a_0$ as MOND's characteristic acceleration and calculate $\alpha_0$ anew from Eq (\ref{e28}). For $a_0=1.2\times10^{-8}$cm/sec${}^2$, one finds \begin{eqnarray}\label{e29} &&\alpha = 2.8\times 10^{-12} \left(M/M_\odot\right)^{1/2}. \end{eqnarray} \indent It is gratifying how close this value of $\alpha$ is to the one in Eq. ($\ref {e25}$) and how similar the low and high acceleration limits of MOND and the present formalism are, in spite of their totally different and independent starting points. It should also be noted that there is no counterpart to the interpolating function of MOND here. \section{Concluding remarks} We have developed an $f(R)\propto R^{1-\alpha/2}$ gravitation that is essentially a logarithmic modification of the Einstein-Hilbert action. In spherically-symmetric static situations, the theory allows a modified Schwarzschild-deSitter metric. This metric in the limit of weak fields gives a logarithmic correction to the newtonian potential. From the observed asymptotic speeds and masses of spirals we learn that the correction is proportional to almost the square root of the mass of the central body. Flat rotation curves, the Tully-Fisher relation (admittedly with some reservations), and a version of MOND emerge as natural consequences of the theory. Actions are ordinarily form invariant under the changes in sources. Mass dependence of $\alpha$ destroys this feature and any claim for the action-based theory should be qualified with such reservation in mind. This, however, should not be surprising, for it is understood that all alternative gravitations, one way or another, go beyond the classic GR. One should not be surprised if some of the commonly accepted notions require re-thinking and generalizations. Since the appearance of an earlier version of this paper in arXiv, Mendoza et al.(2006) have investigated the gravitational waves and lensing effects in the proposed spacetime. They find the following: a) in any $f(R)=R^n$ gravitation, gravitational waves travel with the speed of light in a vacuum, and b) in the present spacetime, there is a lensing in addition to what one finds in the classical GR. Their ratio of the additional deflection angle of a light ray, $\delta\beta$, to that in GR, $\beta_{GR}$, can be reduced to \begin{eqnarray}\label{e31} && \delta\beta/\beta_{GR} =\frac{1}{2}\alpha \ln {(r_m/s-1)}, \end{eqnarray} where $r_m$ is the impact parameter of the impinging light. The proportionality of $\delta\beta$ to $\alpha$ is expected, because the proposed metric is in the neighborhood of GR. Its increase with increasing $r_m$ also should not be surprising, since the theory is designed to highlight unexpected features at far rather than nearby distances. Soussa et al. (2004) maintain that ``no purely metric-based, relativistic formulation of MOND whose energy functional is stable can be consistent with the observed amount of gravitational lensing from galaxies". For at least two reasons, this no-go theorem does not apply to what we have highlighted above as the kinship with MOND: \noindent a) Apart from their common low and high acceleration regimes, the two theories are fundamentally different. The gravitational acceleration in the weak field limit of the present theory is the newtonian one to which a small $1/r$ correction is added. 
That of MOND, on the other hand, is a highly nonlinear function of the newtonian acceleration through an arbitrary interpolating function. \noindent b) More important, however, is one of the authors' assumptions that ``the gravitational force is carried by the metric, and its source is the usual stress tensor". This is not the case in the present theory. Although we have only worked out the vacuum solution for a point source, the mass dependence of the exponent $\alpha$ in Eqs. (\ref{e10}) and (\ref{e11}) makes the theory different from what the assumption requires. There are two practices for obtaining the field equations of $f(R)$ gravity, the metric approach, where $g_{\mu\nu}$'s are considered as dynamical variables, and that of Palatini, where the metric and the affine connections are treated as such (see Magnano 1995 for a review). Unless $f(R)$ is linear in $R$, the resulting field equations are not identical (see Ferraris et al. 1994). The metric approach is often avoided for leading to fourth-order differential equations. It is also believed to have instabilities in the weak field approximations (see e. g., Sotiriou 2005 and also Amarzguioui et al. 2005). In the present paper we do not initially specify $f(R)$. Instead, at some intermediate stage in the analysis we adopt an ansatz for $df(R)/dR$ as a function of $r$ and work from there to obtain the metric, the Ricci scalar, and eventually $f(R)$. This enables us to avoid the fourth-order equations. This trick should work in other contexts, such as cosmological ones. The theory presented here is preliminary. Further investigations are needed from both formal and astrophysical points of view. The author's list of priorities include the following: \begin{itemize} \item Stability of the metric of Eqs. (\ref{e10}) \&(\ref{e11}). The approach should be to impose a small perturbation $\delta g_{\mu \nu}$ on the metric, linearize the field Eq. (\ref{e2}), and ask for the condition of stability of the metric. Such a condition, if it exists at all, might throw some light on the mass dependence of $ \alpha $, the empirical relation of Eq. (\ref{e29}). Managing the linear problems is straightforward. Here, however, the bookkeeping is extensive and laborious. \item Extension of the theory, at least in the weak field regime, to many body systems and to cases with a continuous distribution of matter, in order to obtain the metric inside the matter. \item Developing the theory beyond the first order in $\alpha$ \item Solar system tests of the theory. \item Possible cosmological implications of the theory. \end{itemize} \noindent \textbf{Acknowledgement}: The author wishes to thank Bahram Mashhoun and Naresh Dadhich for comments and helpful suggestions. Reza Saffari has pointed out a typographical error in the earlier version of the paper, Eq. (15). The error had propagated making the sign of $\alpha$ appear positive in the factor multiplying the term $GM/r$ in Eq. (24). This is corrected here.
{ "attr-fineweb-edu": 1.947266, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUdig5qWTA8XvyTiWg
\section{} Recurrent cortical networks provide reservoirs of states that are thought to play a crucial role for sequential information processing in the brain. However, classical reservoir computing requires manual adjustments of global network parameters, particularly of the spectral radius of the recurrent synaptic weight matrix. It is hence not clear if the spectral radius is accessible to biological neural networks. Using random matrix theory, we show that the spectral radius is related to local properties of the neuronal dynamics whenever the overall dynamical state is only weakly correlated. This result allows us to introduce two local homeostatic synaptic scaling mechanisms, termed flow control and variance control, that implicitly drive the spectral radius towards the desired value. For both mechanisms the spectral radius is autonomously adapted while the network receives and processes inputs under working conditions. We demonstrate the effectiveness of the two adaptation mechanisms under different external input protocols. Moreover, we evaluated the network performance after adaptation by training the network to perform a time-delayed XOR operation on binary sequences. As our main result, we found that flow control reliably regulates the spectral radius for different types of input statistics. Precise tuning is however negatively affected when interneural correlations are substantial. Furthermore, we found a consistent task performance over a wide range of input strengths/variances. Variance control did however not yield the desired spectral radii with the same precision, being less consistent across different input strengths. Given the effectiveness and remarkably simple mathematical form of flow control, we conclude that self-consistent local control of the spectral radius via an implicit adaptation scheme is an interesting and biological plausible alternative to conventional methods using set point homeostatic feedback controls of neural firing. \tiny \fontsize{8}{11}\helveticabold { \section{Keywords:} recurrent networks, homeostasis, synaptic scaling, echo-state networks, reservoir computing, spectral radius} \end{abstract} \bigskip \section{Introduction} \label{sect:introduction} Cortical networks are highly recurrent, a property that is considered to be crucial for processing and storing temporal information. For recurrent networks to remain stable and functioning, the neuronal firing activity has to be kept within a certain range by autonomously active homeostatic mechanisms. It is hence important to study homeostatic mechanisms on the level of single neurons, as well as the more theoretic question of characterizing the dynamic state that is to be attained on a global network level. It is common to roughly divide adaptation mechanisms into intrinsic homeostasis, synaptic homeostasis, and metaplasticity. Synaptic scaling was identified as a mechanism that can postsynaptically regulate neural firing by adjusting synaptic efficacies in a proportional, multiplicative way. This finding has led to numerous studies investigating the role of synaptic scaling in controlling neural network activity \citep{Turrigiano_1998,Turrigiano_2000,Turrigiano_2008} and in stabilizing other plasticity mechanisms \citep{vanRossum_2000,Stellwagen2006,Tetzlaff2011,Toyoizumi2014}. Indeed, synaptic scaling has proven successful in stabilizing activity in recurrent neural networks \citep{Lazar_2009,Remme2012,Zenke2013,Effenberger_2015,Miner_2016}. 
However, these studies either used synaptic scaling as the sole homeostatic mechanism \citep{Zenke2013,Remme2012} or resorted to a variant of synaptic scaling where the scaling is not dynamically determined through a control loop using a particular target activity, but rather by a fixed multiplicative normalization rule \citep{Lazar_2009,Effenberger_2015,Miner_2016}. Therefore, these homeostatic models cannot account for higher moments of temporal activity patterns, i.e., their variance, as this would require at least the tuning of two parameters \citep{cannon2017stable}. Within more abstract models of rate encoding neurons, intrinsic homeostasis and synaptic scaling essentially correspond to adjusting a bias and gain factor on the input entering a nonlinear transfer function. Within this framework, multiple dual-homeostatic adaptation rules have been investigated concerning their effect on network performance. In this framework, the adaptation of the bias acts as an intrinsic plasticity mechanism for the control of the internal excitability of a neuron \citep{Franklin_1992,Abbott_1993,Borde_1995}, while the gain factors functionally correspond to a synaptic scaling of the recurrent weights. Learning rules for these types of models were usually derived by defining a target output distribution that each neuron attempts to reproduce by changing neural gains and biases \citep{Triesch_2007,steil2007intrinsicplasticity, schrauwen2008improving,boedecker2009initialization}, or were directly derived from an information-theoretic measure \citep{Bell_1995}. While these studies did indeed show performance improvements by optimizing local information transmission measures, apparently, optimal performance can effectively be traced back to a global parameter, the spectral radius of the recurrent weight matrix \citep{schrauwen2008improving}. Interestingly, to our knowledge, theoretical studies on spiking neural networks did not explicitly consider the spectral radius as a parameter affecting network dynamics. Still, the theory of balanced states in spiking recurrent networks established the idea that synaptic strengths should scale with $1/\sqrt{k}$, where $k$ is the average number of afferent connections \citep{VanVreeswijk1998}. According to the circular law of random matrix theory, this scaling rule simply implies that the spectral radius of the recurrent weight matrix remains finite as the number of neurons $N$ increases. More recent experiments on cortical cultures confirm this scaling \citep{Barral2016}. In the present study, we investigated whether the spectral radius of the weight matrix in a random recurrent network can be regulated by a combination of intrinsic homeostasis and synaptic scaling. Following the standard echo-state framework, we used rate encoding tanh-neurons as the model of choice. However, aside from their applications as efficient machine learning algorithms, echo state networks are potentially relevant as models of information processing in the brain \citep{nikolic2009distributed,Hinaut_2015,enel2016reservoir}. Note in this context that extensions to layered ESN architectures have been presented by \citet{gallicchio2017echo}, which bears a somewhat greater resemblance to the hierarchical structure of cortical networks than the usual shallow ESN architecture. 
This line of research illustrates the importance of examining whether local and biological plausible principles exist that would allow to tune the properties of the neural reservoir to the ``edge of chaos" \citep{livi2018determination}, particularly when a continuous stream of inputs is present. The rule has to be independent of both the network topology, which is not locally accessible information, and the distribution of synaptic weights. We propose and compare two unsupervised homeostatic mechanisms, which we denote by flow control and variance control. Both regulate the mean and variance of neuronal firing such that the network works in an optimal regime concerning sequence learning tasks. The mechanisms act on two sets of node-specific parameters, the biases $b_i$, and the neural gain factors $a_i$. We restricted ourselves to biologically plausible adaptation mechanisms, viz adaptation rules for which the dynamics of all variables are local, i.e., bound to a specific neuron. Additional variables enter only when locally accessible. In a strict sense, this implies that local dynamics are determined exclusively by the neuron's dynamical variables and by information about the activity of afferent neurons. Being less restrictive, one could claim that it should also be possible to access aggregate or mean-field quantities that average a variable of interest over the population. For example, nitric oxide is a diffusive neurotransmitter that can act as a measure for the population average of neural firing rates \citep{Sweeney_2015}. Following a general description of the network model, we introduce both adaptation rules and evaluate their effectiveness in tuning the spectral radius in Section \ref{sect:flow_control_results} and \ref{sect:variance_control_results}. We assess the performance of networks that were subject to adaptation in Section \ref{sect:XOR}, using a nonlinear sequential memory task. Finally, we discuss the influence of node-to-node cross-correlations within the population in Section \ref{sect:correlations}. \section{Results} \bigskip\subsection{Model} \label{sect_model} A full description of the network model and parameters can be found in the methods section. We briefly introduce the network dynamics as \begin{align} x_i(t) &= x_{{\rm r},i}(t) + I_i(t) \label{x_i_introduction} \\ x_{{\rm r},i}(t) &:= a_i\sum_{j=1}^N W_{ij} y_j(t-1) \label{x_r_i_introduction}\\ y_i(t) &= \tanh\left(x_i(t) - b_i\right) \; . \label{y_introduction} \end{align} Each neuron's membrane potential $x_i$ consists of a recurrent contribution $x_{{\rm r},i}(t)$ and an external input $I_i(t)$. The biases $b_i$ are subject to the following homeostatic adaptation: \begin{equation} b_i(t)= b_i(t-1) + \epsilon_{\rm b} \left[y_i(t) - \mu_{\rm t} \right] \; . \label{b_i_introduction} \end{equation} Here, $\mu_{\rm t}$ defines a target for the average activity and $\epsilon_{\rm b}$ is the adaptation rate. The local parameters $a_i$ act as scaling factors on the recurrent weights. We considered two different forms of update rules. Loosely speaking, both drive the network towards a certain dynamical state which corresponds to the desired spectral radius. 
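Before turning to the two rules in detail, the basic dynamics of Eqs.~(\ref{x_i_introduction})--(\ref{b_i_introduction}) can be summarized in a minimal NumPy sketch. The sketch is illustrative only: the Gaussian weight statistics, the initialization, and the bias learning rate $\epsilon_{\rm b}$ are assumptions, while $N$, $p_{\rm r}$, and $\mu_{\rm t}$ follow the values used in the simulations reported below.
\begin{verbatim}
import numpy as np

N, p_r = 500, 0.1                      # network size and connection probability (from the text)
mu_t, eps_b = 0.05, 1e-3               # target mean activity (text) and bias rate (assumed)
rng = np.random.default_rng(0)

# Sparse bare weight matrix; the Gaussian statistics are an assumption,
# scaled such that the bare spectral radius is of order one.
W = rng.normal(0.0, 1.0 / np.sqrt(p_r * N), size=(N, N))
W *= rng.random((N, N)) < p_r

a = np.ones(N)                         # synaptic scaling factors a_i
b = np.zeros(N)                        # biases b_i
y = rng.uniform(-0.1, 0.1, N)          # initial activities

def step(y, I):
    """One update step of the network and of the bias homeostasis."""
    x_r = a * (W @ y)                  # recurrent contribution x_{r,i}
    x = x_r + I                        # membrane potential x_i
    y_new = np.tanh(x - b)             # activity y_i
    b[:] += eps_b * (y_new - mu_t)     # bias adaptation towards mu_t
    return y_new, x_r

y, x_r = step(y, rng.normal(0.0, 0.5, N))   # example step with Gaussian input, sigma_ext = 0.5
\end{verbatim}
The two adaptation rules described next differ only in how the scaling factors $a_i$ are updated on top of this step.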
The difference between them lies in the variables characterizing this state: While flow control defines a relation between the variance of neural activity and the variance of the total recurrent synaptic current, variance control does so by a more complex relation between the variance of neural activity and the variance of the synaptic current from the external input. \bigskip \subsubsection{Flow control} The first adaptation rule, flow control, is given by \begin{equation} a_i(t) = a_i(t-1)\Big[1+ \epsilon_{\rm a} \Delta R_i(t)\Big], \quad\quad \Delta R_i(t) = R_{\rm t}^2 y_i^2(t-1) - x_{{\rm r},i}^2(t)\;. \label{a_i_flow_introduction} \end{equation} The parameter $R_{\rm t}$ is the desired target spectral radius and $\epsilon_{\rm a}$ the adaptation rate of the scaling factor. The dynamical variables $y_i^2$ and $x_{{\rm r},i}^2$ have been defined before in Eqs.~(\ref{x_i_introduction}) and (\ref{x_r_i_introduction}). We also considered an alternative global update rule where $\Delta R_i(t)$ is given by \begin{equation} \Delta R_i(t) = \frac{1}{N}\Big[ R_{\rm t}^2\,{||\mathbf{y}(t-1)||}^2- {||\mathbf{x}_{\rm r}(t)||}^2 \Big] \; , \label{delta_R_global_introduction} \end{equation} where $|| \cdot ||$ denotes the euclidean vector norm. However, since this is a non-local rule, it only served as a comparative model to Eq.~(\ref{a_i_flow_introduction}) when we investigated the effectiveness of the adaptation mechanism. Three key assumptions enter flow control, Eq.~(\ref{a_i_flow_introduction}): \begin{itemize} \item Represented by $x_{{\rm r},i}(t)$, we assume that there is a physical separation between the recurrent input that a neuron receives and its external inputs. This is necessary because $x_{{\rm r},i}(t)$ is explicitly used in the update rule of the synaptic scaling factors. \item Synaptic scaling only affects the weights of recurrent connections. However, this assumption is not crucial for the effectiveness of our plasticity rule, as we were mostly concerned with achieving a preset spectral radius for the recurrent weight matrix. If instead the scaling factors acted on both the recurrent and external inputs, this would lead to an ``effective" external input $I'_i(t) := a_i I_i(t)$. However, $a_i$ only affecting the recurrent input facilitated the parameterization of the external input by means of its variance, see Section~\ref{sect:XOR}, a choice of convenience. \item For (\ref{a_i_flow_introduction}) to function, neurons need to able to represent and store squared neural activities. \end{itemize} Whether these three preconditions are satisfied by biological neurons needs to be addressed in future studies. \bigskip \subsubsection{Variance control} The second adaptation rule, variance control, has the form \begin{align} \label{a_i_variance} a_i(t) &= a_i(t-1) + \epsilon_{\rm a} \left[ \sigma_{{\rm t},i}^2(t) - {\left( y_i(t) - \mu^{\rm y}_i(t) \right)}^2\right] \\ \label{sigm_target} \sigma_{{\rm t},i}^2(t) &= 1 - \frac{1}{\sqrt{1 + 2R_{\rm t}^2 y_i(t)^2 + 2\sigma_{{\rm ext},i}^2(t)}} \; . \end{align} Eq.~(\ref{a_i_variance}) drives the average variance of each neuron towards a desired target variance $\sigma_{{\rm t},i}^2(t)$ at an adaptation rate $\epsilon_{\rm a}$ by calculating the momentary squared difference between the local activity $y_i(t)$ and its trailing average $\mu^{\rm y}_i(t)$. 
Eq.~(\ref{sigm_target}) calculates the target variance as a function of the target spectral radius $R_{\rm t}$, the current local square activity $y^2_i(t)$ and a trailing average $\sigma^2_{{\rm ext},i}(t)$ of the local variance of the external input signal. When all $a_i(t)$ reach a steady state, the average neural variance equals the target given by (\ref{sigm_target}). According to a mean-field approach that is described in Section~\ref{sect:MF_theory}, reaching this state then results in a spectral radius $R_{\rm a}$ that is equal to the target $R_{\rm t}$ entering (\ref{sigm_target}). Intuitively, it is to be expected that $\sigma_{{\rm t},i}^2$ is a function of both the spectral radius and the external driving variance: The amount of fluctuations in the network activity is determined by the dynamic interplay between the strength of the external input as well as the recurrent coupling. A full description of the auxiliary equations and variables used to calculate $\mu^{\rm y}_i(t)$ and $\sigma^2_{{\rm ext},i}(t)$ can be found in Section~\ref{sect:model}. Similar to flow control, we also considered a non-local version for comparative reasons, where (\ref{sigm_target}) is replaced with \begin{equation} \label{sigm_target_global} \sigma_{{\rm t},i}^2(t) = 1 - \frac{1}{\sqrt{1 + 2R_{\rm t}^2 ||\mathbf{y}(t)||^2/N + 2\sigma_{{\rm ext},i}^2(t)}} \; . \end{equation} Again, $||\cdot||$ denotes the euclidean norm. Before proceeding to the results, we discuss the mathematical background of the proposed adaptation rules in some detail. \bigskip \subsection{Autonomous spectral radius regulation} \label{sect_specrad_reg} There are some interesting aspects to the theoretical framework at the basis of the here proposed regulatory scaling mechanisms. The circular law of random matrix theory states that the eigenvalues $\lambda_j$ are distributed uniformly on the complex unit disc if the elements of a real $N\times N$ matrix are drawn from distributions having zero mean and standard deviation $1/\sqrt{N}$ \citep{tao2008random}. Given that the internal weight matrix $\widehat{W}$ ( $\widehat{\cdot}$~denoting matrices) with entries $W_{ij}$ has $p_{\rm r} N$ non-zero elements per row ($p_{\rm r}$ is the connection probability), the circular law implies that the spectral radius of $a_i W_{ij}$, the maximum of $|\lambda_j|$, is unity when the synaptic scaling factors $a_i$ are set uniformly to $1/\sigma_{\rm w}$, where $\sigma_{{\rm w}}$ is the standard deviation of $W_{ij}$. Our goal is to investigate adaptation rules for the synaptic scaling factors that are based on dynamic quantities, which includes the membrane potential $x_i$, the neural activity $y_i$ and the input $I_i$. The circular law, i.\ e.\ a $N \times N$ matrix with i.i.d.\ entries with zero mean and $1/N$ variance approaching a spectral radius of one as $N \rightarrow \infty$, can be generalized. \citet{Rajan2006} investigated the case where the statistics of the columns of the matrix differ in their means and variances: given row-wise E-I balance for the recurrent weights, the square of the spectral radius of a random $N\times N$ matrix whose columns have variances $\sigma^2_i$ is $N\left\langle \sigma^2_i \right\rangle_i$ for $N \rightarrow \infty$. Since the eigenvalues are invariant under transposition, this result also holds for row-wise variations of variances and column-wise E-I balance. 
While the latter is not explicitly enforced in our case, deviations from this balance are expected to tend to zero for large $N$ given the statistical assumptions that we made about the matrix elements $W_{ij}$. Therefore, the result can be applied to our model, where node-specific gain factors $a_i$ are applied to each row of the recurrent weight matrix. Thus, the spectral radius $R_{\rm a}$ of the \emph{effective random matrix} $\widehat{W}_{\rm a}$ with entries $a_iW_{ij}$ (as entering (\ref{x_r_i_introduction})) is \begin{equation} R_{\rm a}^2 \approxeq \frac{1}{N} \sum_i R_{{\rm a},i}^2, \qquad\quad R_{{\rm a},i}^2 := a^2_i \sum_j \left(W_{ij}\right)^2\,, \label{R_a} \end{equation} for large $N$, when assuming that the distribution underlying the \textit{bare weight matrix} $\widehat{W}$ with entries $W_{ij}$ has zero mean. Note that $R^2_{\rm a}$ can be expressed alternatively in terms of the Frobenius norm $\left\lVert\widehat{W}_{\rm a} \right\rVert_{\rm F}$, via \begin{equation} R_{\rm a}^2 \approxeq \left\lVert\widehat{W}_{\rm a} \right\rVert^2_{\rm F} / N \, . \end{equation} We numerically tested Eq.~(\ref{R_a}) for $N=500$ and heterogeneous random sets of $a_i$ drawn from a uniform $[0,1]$-distribution and found a very close match to the actual spectral radii ($1$-$2\%$ relative error). Given that the $R_{{\rm a}, i}$ can be interpreted as per-site estimates for the spectral radius, one can use the generalized circular law (\ref{R_a}) to regulate $R_{\rm a}$ on the basis of local adaptation rules, one for every $a_i$. For the case of flow control, the adaptation rule is derived using a comparison between the variance of neural activity that is present in the network with the recurrent contribution to the membrane potential. A detailed explanation is presented in Section~\ref{sec:sing_values} and Section~\ref{sect:flow_theo}. In short, we propose that \begin{equation} {\big\langle\,{||\mathbf{x}_{\rm r}(t)||}^2\,\big\rangle}_{\rm t} \approxeq R_{\rm a}^2\, {\big\langle\,{||\mathbf{y}(t-1)||}^2\,\big\rangle}_{\rm t}\,, \label{flow_R_a_introduction} \end{equation} where $x_{{\rm r},i}$ is the recurrent contribution to the membrane potential $x_i$. This stationarity condition leads to the adaptation rule given in Eq.~(\ref{a_i_flow_introduction}). An analysis of the dynamics of this adaptation mechanisms can be found in Section \ref{sect:flow_theo}. Instead of directly imposing Eq.~(\ref{flow_R_a_introduction}) via an appropriate adaptation mechanism, we also considered the possibility of transferring this condition into a set point for the variance of neural activities as a function the external driving. To do so, we used a mean-field approach to describe the effect of recurrent input onto the resulting neural activity variance. An in-depth discussion is given in Section~\ref{sect:MF_theory}. This led to the update rule given by Eq.~(\ref{a_i_variance}) and (\ref{sigm_target}) for variance control. \bigskip \subsection{Testing protocols} \label{sect:testing_protocols} We used several types of input protocols for testing the here proposed adaptation mechanisms, as well as for assessing the task performance discussed in Section \ref{sect:XOR}. The first two variants concern distinct biological scenarios: \begin{itemize} \item {\em Binary.} Binary input sequences correspond to a situation when a neural ensemble receives input dominantly from a singular source, which itself has only two states, being either active or inactive. 
Using binary input sequences during learning is furthermore consistent with the non-linear performance test considered here for the echo-state network as a whole, the delayed XOR-task. See Section~\ref{sect:XOR}. For symmetric binary inputs, as used, the source signal $u(t)$ is drawn from $\pm1$. \item {\em Gaussian.} Alternatively one can consider the situation that a large number of essentially uncorrelated input streams are integrated. This implies random Gaussian inputs signals. Neurons receive in this case zero-mean independent Gaussian noise. \end{itemize} Another categorical dimension concerns the distribution of the afferent synaptic weights. Do all neurons receive inputs with the same strength, or not? As a quantifier for the individual external input strengths, the variances $\sigma^2_{{\rm ext}, i}$ of the local external input currents where taken into account. We distinguished two cases \begin{itemize} \item {\em Heterogeneous.} In the first case, the $\sigma^2_{{\rm ext}, i}$ are quenched random variables. This means that each neuron is assigned a random value $\sigma^2_{{\rm ext}, i}$ before the start of the simulation, as drawn from a half-normal distribution parameterized by $\sigma = \sigma_{{\rm ext}}$. This ensures that the expected average variance $\big\langle \sigma^2_{{\rm ext}, i} \big\rangle$ is given by $\sigma^2_{{\rm ext}}$. \item {\em Homogeneous.} In the second case, all $\sigma^2_{{\rm ext}, i}$ are assigned the identical global value $\sigma^2_{{\rm ext}}$. \end{itemize} Overall, pairing ``binary'' vs.\, ``Gaussian'' and ``heterogeneous'' vs.\, ``homogeneous'', leads to a total of four different input protocols, i.\,e.\ ``heterogeneous binary'', ``homogeneous binary'', ``heterogeneous Gaussian'' and ``homogeneous Gaussian''. If not otherwise stated, numerical simulations were done using networks with $N=500$ sites and a connection probability $p_{\rm r}=0.1$. \bigskip \subsection{Performance testing of flow control} \label{sect:flow_control_results} In Figure~\ref{fig_R_a_regulation}, we present a simulation using flow control for heterogeneous Gaussian input with an adaptation rate $\epsilon_{\rm a}=10^{-3}$. The standard deviation of the external driving was set to $\sigma_{\rm ext}=0.5$. The spectral radius of $R_{\rm a}$ of $\widehat{W}_{\rm a}$ was tuned to the target $R_{\rm t} = 1$ with high precision, even though the local, row-wise estimates $R_{{\rm a},i}$ showed substantial deviations from the target. We further tested the adaptation with other input protocols, see Section~\ref{sect:testing_protocols} and Figure~\ref{S_flow_control_local_Fig}. We found that flow control robustly led to the desired spectral radius $R_{\rm t}$ under all Gaussian input protocols, while binary input caused $R_{\rm a}$ to converge to higher values than $R_{\rm t}$. However, when using global adaptation, as given by Eq.~(\ref{delta_R_global_introduction}), all input protocols resulted in the correctly tuned spectral radius, see Figure~\ref{S_flow_control_global_Fig}. Numerically, we found that the time needed to converge to the stationary states depended substantially on $R_{\rm t}$, slowing down when the spectral radius becomes small. It is then advantageous, as we have done, to scale the adaptation rate $\epsilon_{\rm a}$ inversely with the trailing average $\bar{x}_{\rm r}^2$ of $||x_{\rm r}||^2$, viz as $\epsilon_{\rm a} \to \epsilon_{\rm a}/\bar{x}_{\rm r}^2$. 
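To make the procedure explicit, the flow-control update of Eq.~(\ref{a_i_flow_introduction}), including the rescaling of $\epsilon_{\rm a}$ by the trailing average $\bar{x}_{\rm r}^2$ just described, can be sketched as follows, continuing the NumPy sketch of Section~\ref{sect_model}. The smoothing factor of the trailing average is an assumption; $\epsilon_{\rm a}=10^{-3}$ and $R_{\rm t}=1$ correspond to the simulation of Figure~\ref{fig_R_a_regulation}.
\begin{verbatim}
eps_a, R_t = 1e-3, 1.0                 # adaptation rate and target spectral radius (from the text)
gamma = 0.99                           # smoothing factor of the trailing average (assumed)

def update_gains(a, y_prev, x_r, xr2_bar):
    """Flow-control update of the scaling factors a_i, called once per time step."""
    xr2_bar = gamma * xr2_bar + (1.0 - gamma) * np.sum(x_r**2)   # trailing average of ||x_r||^2
    delta_R = R_t**2 * y_prev**2 - x_r**2                        # local estimate Delta R_i(t)
    a *= 1.0 + (eps_a / xr2_bar) * delta_R                       # rescaled adaptation rate
    return a, xr2_bar
\end{verbatim}
In the global variant of Eq.~(\ref{delta_R_global_introduction}), \texttt{delta\_R} is replaced by its population mean.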
An exemplary plot showing the effect of this scaling is shown in Fig.~\ref{fig:flow_renorm}, see Section~\ref{sect:renorm_flow_control} for further details. \begin{figure}[t] \includegraphics[width=1.0\textwidth] {./plots/Figure1.png} \caption{{\bf Online spectral radius regulation using flow control.} The spectral radius $R_{\rm a}$ and the respective local estimates $R_{{\rm a},i}$ as defined by (\ref{R_a}). For the input protocol see Section~\ref{sect_input}. {\bf A}: Dynamics of $R^2_{{\rm a},i}$ and $R^2_{\rm a}$, in the presence of heterogeneous independent Gaussian inputs. Local adaptation. {\bf B}: Distribution of eigenvalues of the corresponding effective synaptic matrix $\widehat{W}_{\rm a}$, after adaptation. The circle denotes the spectral radius. } \label{fig_R_a_regulation} \end{figure} To evaluate the amount of deviation from the target spectral radius under different input strengths and protocols, we plotted the difference between the resulting spectral radius and the target spectral radius for a range of external input strengths, quantified by their standard deviation $\sigma_{{\rm ext}}$. Results for different input protocols are shown in Figure~\ref{S1_Fig} in the supplementary material. For correlated binary input, increasing the input strength resulted in stronger deviations from the target spectral radius. On the other hand, uncorrelated Gaussian inputs resulted in perfect alignment for the entire range of input strengths that we tested. \bigskip \bigskip \subsection{Performance testing of variance control} \label{sect:variance_control_results} In comparison, variance control, shown in Figure~\ref{fig_R_a_regulation_var} and Figure~\ref{S_var_control_local_Fig}, resulted in notable deviations from $R_{\rm t}$, for both uncorrelated Gaussian and correlated binary input. As for flow control, we also calculated the deviations from $R_{\rm t}$ as a function of $\sigma_{{\rm ext}}$, see Figure~\ref{S2_Fig}. For heterogeneous binary input, deviations from the target spectral radius did not increase monotonically as a function of the input strength (Figure~\ref{S2_Fig}A), reaching a peak at $\sigma_{{\rm ext}}\approx 0.4$ for target spectral radii larger than $1$. For homogeneous binary input, we observed a substantial negative mismatch of the spectral radius for strong external inputs, see Figure~\ref{S2_Fig}C. Overall, we found that variance control did not exhibit the same level of consistency in tuning the system towards a desired spectral radius, even though it did perform better in some particular cases (compare Figure~\ref{S1_Fig}A for large $\sigma_{{\rm ext}}$ with Figure~\ref{S2_Fig}). Moreover, variance control exhibited deviations from the target (shown in Figure~\ref{S_var_control_global_Fig}) even when a global adaptation rule was used, as defined in (\ref{sigm_target_global}). This is in contrast to the global variant of flow control, which, as stated in the previous section, robustly tuned the spectral radius to the desired value even in the presence of strongly correlated inputs. \begin{figure}[t] \includegraphics[width=1.0\textwidth] {./plots/Figure2.png} \caption{{\bf Online spectral radius regulation using variance control.} The spectral radius $R_{\rm a}$ and the respective local estimates $R_{{\rm a},i}$ as defined by (\ref{R_a}). For the input protocol see Section~\ref{sect_input}. {\bf A}: Dynamics of $R^2_{{\rm a},i}$ and $R^2_{\rm a}$, in the presence of heterogeneous independent Gaussian inputs. Local adaptation.
{\bf B}: Distribution of eigenvalues of the corresponding effective synaptic matrix $\widehat{W}_{\rm a}$. The circles denote the respective spectral radius. } \label{fig_R_a_regulation_var} \end{figure} \bigskip \subsection{Spectral radius, singular values and global Lyapunov exponents} \label{sec:sing_values} Apart from the spectral radius $R_{\rm a}$ of the matrix $\widehat{W}_{\rm a}$, one may consider the relation between the adaptation dynamics and the respective singular values $\sigma_i$ of $\widehat{W}_{\rm a}$. We recall that the spectrum of $\hat{U}_{\rm a}=\widehat{W}_{\rm a}^\dagger \widehat{W}_{\rm a}$ is given by the squared singular values, $\sigma_i^2$, and that the relation $||\mathbf{x}_{\rm r}||^2 = \mathbf{y}^\dagger \widehat{W}_{\rm a}^\dagger\widehat{W}_{\rm a} \mathbf{y}$ holds. Now, assume that the time-averaged projection of neural activity $\mathbf{y}=\mathbf{y}(t)$ onto all eigenvectors of $\hat{U}_{\rm a}$ is approximately the same, that is, there is no preferred direction of neural activity in phase space. From this idealized case, it follows that the time average of the recurrent contribution to the membrane potential can be expressed with \begin{equation} {\big\langle\,{||\mathbf{x}_{\rm r}||}^2\,\big\rangle}_{\rm t} \approx \frac{{\big\langle\,||\mathbf{y}||^2\,\big\rangle}_{\rm t} }{N} \sum_i \sigma_i^2 = \frac{{\big\langle\,||\mathbf{y}||^2,\big\rangle}_{\rm t} }{N} \sum_{i,j} {\big(a_iW_{ij}\big)}^2 \label{SVD_x_r} \end{equation} as the rescaled average of the $\sigma_i^2$. For the second equation, we used the fact that the $\sum_i\sigma_i^2$ equals the sum of all matrix elements squared \citep{sengupta1999distributions,shen2001singular}. With (\ref{R_a}), one finds that (\ref{SVD_x_r}) is equivalent to ${{\big\langle\,||\mathbf{x}_{\rm r}||}^2,\big\rangle}_{\rm t} = R_a^2 {\big\langle\,||\mathbf{y}||^2,\big\rangle}_{\rm t}$ and hence to the flow condition (\ref{flow_R_a_introduction}). This result can be generalized, as done in Section~\ref{sect:flow_theo}, to the case that the neural activities have inhomogeneous variances, while still being uncorrelated with zero mean. We have thus shown that the stationarity condition leads to a spectral radius of (approximately) unity. It is worthwhile to note that the singular values of $\hat{U}_{\rm a}=\widehat{W}_{\rm a}^\dagger \widehat{W}_{\rm a}$ do exceed unity when $R_{\rm a} = 1$. More precisely, for a random matrix with i.i.d.\ entries, one finds in the limit of large $N$ that the largest singular value is given by $\sigma_{\rm max} = 2 R_{\rm a}$, in accordance with the Marchenko-Pastur law for random matrices \citep{Marcenko1967}. Consequently, directions in phase space exist in which the norm of the phase space vector is elongated by factors greater than one. Still, this does not contradict the fact that a unit spectral radius coincides with the transition to chaos for the non-driven case. The reason is that the global Lyapunov exponents are given by \begin{equation} \lim\limits_{n\rightarrow \infty} \frac{1}{2n}\ln\left(\left(\widehat{W}_{\rm a}^n\right)^\dagger \widehat{W}_{\rm a}^n \right) \end{equation} which eventually converge to $\ln \lVert \lambda_i \rVert$, see Figure~\ref{S3_Fig} in the supplementary material and \citet{wernecke2019chaos}, where $\lambda_i$ is the $i$th eigenvalue of $\widehat{W}_{\rm a}$. The largest singular value of the $n$th power of a random matrix with a spectral radius $R_{\rm a}$ scales like $R_{\rm a}^{n}$ in the limit of large powers $n$. 
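These statements are straightforward to check numerically. The following sketch, assuming i.i.d.\ Gaussian entries with variance $1/N$, compares the spectral radius with the largest singular value of $\widehat{W}_{\rm a}$ and of its $n$th power; all numbers are illustrative.
\begin{verbatim}
import numpy as np

N = 500
rng = np.random.default_rng(1)
W_a = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))      # i.i.d. entries, R_a close to 1

R_a = np.abs(np.linalg.eigvals(W_a)).max()                 # spectral radius
s_max = np.linalg.svd(W_a, compute_uv=False).max()         # largest singular value, about 2 R_a

n = 50                                                     # largest singular value of W_a^n
s_n = np.linalg.svd(np.linalg.matrix_power(W_a, n), compute_uv=False).max()
print(R_a, s_max, np.log(s_n) / n)                         # the last value approaches ln(R_a)
\end{verbatim}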
The global Lyapunov exponent goes to zero as a consequence when $R_{\rm a}\to1$. \medskip \subsection{XOR-memory recall} \label{sect:XOR} To this point, we presented results regarding the effectiveness of the introduced adaptation rules. However, we did not account for their effects onto a given learning task. Therefore, we tested the performance of locally adapted networks under the delayed XOR task, which evaluates the memory capacity of the echo state network in combination with a non-linear operation. For the task, the XOR operation is to be taken with respect to a delayed pair of two consecutive binary inputs signals, $u(t\!-\!\tau)$ and $u(t\!-\!\tau\!-\!1)$, where $\tau$ is a fixed time delay. The readout layer is given by a single unit, which has the task of reproducing \begin{equation} \label{XOR_f_t} f_\tau(t) = \mathrm{XOR}\left[u(t\!-\!\tau),u(t\!-\!\tau\!-\!1)\right], \qquad\quad t,\tau=1,\,2,\, \dots\,, \end{equation} where $\mathrm{XOR}[u,u']$ is $0/1$ if $u$ and $u'$ are identical/not identical. The readout vector $\mathbf{w}_{\rm out}$ is trained with respect to the mean squared output error, \begin{equation} {||\widehat{Y} \mathbf{w}_{\rm out} - \mathbf{f}_\tau||}^2 + \alpha {|| \mathbf{w}_{\rm out} ||}^2\,, \label{XOR_error} \end{equation} using ridge regression on a sample batch of $T_{\rm batch} = 10 N$ time steps, here for $N=500$, and a regularization factor of $\alpha=0.01$. The batch matrix $\widehat{Y}$, of size $T_{\rm batch} \times (N+1)$, holds the neural activities as well as one node with constant activity serving as a potential bias. Similarly, the readout (column) vector $\mathbf{w}_{\rm out}$ is of size $(N+1)$. The $T_{\rm batch}$ entries of $\mathbf{f}_\tau$ are the $f_\tau(t)$, viz the target values of the XOR problem. Minimizing (\ref{XOR_error}) leads to \begin{equation} \mathbf{w}_{\rm out} = {\left(\widehat{Y}^\dagger \widehat{Y} + \alpha^2 \hat{\mathbb{1}} \right)}^{-1} \widehat{Y}^\dagger\, \mathbf{f}_\tau \,. \end{equation} The learning procedure was repeated independently for each time delay $\tau$. We quantified the performance by the total memory capacity, $\mathrm{MC}_{\rm XOR}$, as \begin{align} \mathrm{MC}_{\rm XOR} &= \sum_{k=1}^\infty \mathrm{MC}_{{\rm XOR},k} \label{MC_XOR} \\ \mathrm{MC}_{{\rm XOR},k} &= \frac{\mathrm{Cov}^2\left[f_k(t),y_{\rm out}(t)\right]_t} {\mathrm{Var} \left[f_k(t)\right]_t \mathrm{Var}\left[y_{\rm out}(t)\right]_t}\,. \label{MC_XOR_k} \end{align} This is a simple extension of the usual definition of short term memory in the echo state literature \citep{Jaeger2002}. The activity $y_{\rm out}=\sum_{i=1}^{N+1} w_{{\rm out},i}\, y_i$ of the readout unit is compared in (\ref{MC_XOR_k}) with the XOR prediction task, with the additional neuron, $y_{N+1}=1$, corresponding to the bias of the readout unit. Depending on the mean level of the target signal, this offset might actually be unnecessary. However, since it is a standard practice to use an intercept variable in linear regression models, we decided to include it into the readout variable $y_{\rm out}$. The variance and covariance are calculated with respect to the batch size $T_{\rm batch}$. The results for flow control presented in Figure~\ref{fig:het_performance_sweep_flow_composite} correspond to two input protocols, heterogeneous Gaussian and binary inputs. Shown are sweeps over a range of $\sigma_{\rm ext}$ and $R_{\rm t}$. 
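For concreteness, the readout training of Eq.~(\ref{XOR_error}) and the capacity measure of Eq.~(\ref{MC_XOR_k}) can be sketched as below. The routine assumes that the reservoir activities have already been recorded in a batch matrix \texttt{Y\_batch} of size $T_{\rm batch}\times(N+1)$, with the last column set to one; variable names are hypothetical.
\begin{verbatim}
import numpy as np

def xor_capacity(Y_batch, u, tau, alpha=0.01):
    """Ridge-regression readout and capacity MC_{XOR,tau} for a single delay tau.
    Y_batch: (T, N+1) recorded activities incl. constant bias node; u: (T,) binary 0/1 inputs."""
    T = len(u)
    f = np.zeros(T)                                              # target f_tau(t)
    f[tau + 1:] = np.logical_xor(u[1:T - tau], u[:T - tau - 1])
    A = Y_batch.T @ Y_batch + alpha**2 * np.eye(Y_batch.shape[1])
    w_out = np.linalg.solve(A, Y_batch.T @ f)                    # regularized least squares
    y_out = Y_batch @ w_out
    c = np.cov(f, y_out)                                         # 2x2 covariance matrix
    return c[0, 1]**2 / (c[0, 0] * c[1, 1])
\end{verbatim}
Summing the returned values over $\tau$ yields the total capacity ${\rm MC}_{\rm XOR}$ of Eq.~(\ref{MC_XOR}).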
The update rule (\ref{a_i_flow_introduction}) was applied to the network for each pair of parameters until the $a_i$ values converged to a stable configuration. We then measured the task performance as described above. Note that in the case of Gaussian input, this protocol was only used during the adaptation phases. Due to the nature of the XOR task, binary inputs with the corresponding variances are to be used during performance testing. See Figure~\ref{S4_Fig} in the supplementary material for a performance sweep using the homogeneous binary and Gaussian input protocol. Optimal performance was generally attained around the $R_{\rm a}\approx 1$ line. A spectral radius $R_{\rm a}$ slightly smaller than unity was optimal when using Gaussian input, but not for binary input signals. In this case the measured spectral radius $R_{\rm a}$ deviated linearly from the target $R_{\rm t}$, with increasing strength of the input, as parameterized by the standard deviation $\sigma_{\rm ext}$. Still, the locus of optimal performance was essentially independent of the input strength, with maximal performance attained roughly at $R_{\rm t}\approx0.55$. Note that the line $R_{\rm a}=1$ joins $R_{\rm t}=1$ in the limit $\sigma_{\rm ext}\to0$. \begin{figure}[t] \includegraphics[width=1.0\textwidth] {./plots/Figure3.png} \caption{{\bf XOR performance for flow control.} Color-coded performance sweeps for the XOR-performance (\ref{MC_XOR}) after adaptation using flow control. Averaged over five trials. The input has variance $\sigma_{\rm ext}^2$ and the target for the spectral radius is $R_{\rm t}$. A/B panels are for heterogeneous binary/Gaussian input protocols. Optimal performance for a given $\sigma_{\rm ext}$ was estimated as a trial average (yellow solid line) and found to be generally close to criticality, $R_{\rm a} = 1$, as measured (white dashed lines).} \label{fig:het_performance_sweep_flow_composite} \end{figure} Comparing these results to variance control, as shown in Figure~\ref{fig:het_performance_sweep_var_composite}, we found that variance control led to an overall lower performance. To our surprise, for external input with a large variance, Gaussian input caused stronger deviations from the desired spectral radius as compared to binary input. Therefore, in a sense, it appeared to behave opposite to what we found for flow control. However, similar to flow control, the value of $R_{\rm t}$ giving optimal performance under a given $\sigma_{\rm ext}$ remained relatively stable over the range of external input strength measured. On the other hand, using homogeneous input, see Figure~\ref{S5_Fig} in the supplementary material, did cause substantial deviations from the target spectral radius when using binary input. \begin{figure}[t] \includegraphics[width=1.0\textwidth] {./plots/Figure4.png} \caption{{\bf XOR performance for variance control.} Color-coded performance sweeps for the XOR-performance (\ref{MC_XOR}) after adaptation using variance control. Averaged over five trials. The input has variance $\sigma_{\rm ext}^2$ and the target for the spectral radius $R_{\rm t}$. A/B panels are for heterogeneous binary/Gaussian input protocols. 
Optimal performance (yellow solid line) is in general close to criticality, $R_{\rm a} = 1$, as measured (white dashed lines).}
\label{fig:het_performance_sweep_var_composite}
\end{figure}
\begin{figure}[t]
\includegraphics[width=1.0\textwidth]
{./plots/Figure5.png}
\caption{{\bf Size dependence of correlation.}
Comparison of the variance $\sigma_{\rm bare}^2$ of the bare recurrent input $x_{\rm bare}=\sum_j W_{ij}y_j$ with $\sigma_{\rm w}^2\sigma_{\rm y}^2$. Equality is given when the presynaptic activities are statistically independent. This can be observed in the limit of large network sizes $N$ for uncorrelated input data streams (homogeneous and heterogeneous Gaussian input protocols), but not for correlated inputs (homogeneous and heterogeneous binary input protocols). Compare Section~\ref{sect_input} for the input protocols. Parameters are $\sigma_{\rm ext}\!=\!0.5$, $R_{\rm a}\!=\!1$ and $\mu_{t}\!=\!0.05$.
}
\label{fig:variance_corr_N}
\end{figure}
\begin{figure}[t]
\includegraphics[width=1.0\textwidth]
{./plots/Figure6.png}
\caption{{\bf Input induced activity correlations.}
For heterogeneous binary and Gaussian inputs (A/B), the mean activity cross-correlation $\bar{C}$, see Eq.~(\ref{crossCorr}), is shown as a function of the spectral radius $R_{\rm a}$. Results are obtained for $N\!=\!500$ sites by averaging over five trials, with shadows indicating the standard error across trials. For the autonomous case $\sigma_{\rm ext}\!=\!0$, the residual correlations are due to finite-size effects.
}
\label{fig:het_corr_act_composite}
\end{figure}
\bigskip
\subsection{Input induced correlations}
\label{sect:correlations}
A crucial assumption leading to the proposed adaptation rules is the statistical independence of neural activity for describing the statistical properties of the bare recurrent contribution to the membrane potential, $x_{\rm bare}=\sum_j W_{ij}y_j$. In particular, the variance $\sigma^2_{\rm bare}$ of $x_{\rm bare}$ enters the mean-field approach described in Section~\ref{sect:MF_theory}. Assuming statistical independence across the population for $y_i(t)$, it is simply given by $\sigma^2_{\rm bare} = \sigma_{\rm w}^2\sigma_{\rm y}^2$, where
\begin{equation}
\sigma^2_{\rm w} \equiv \mathrm{Var}\left[\sum_{j=1}^N W_{ij}\right]
\end{equation}
is the variance of the sum of the bare afferent synaptic weights (see also Section~\ref{sect:model}). Since $\sigma^2_{\rm bare}$ is a crucial element of the proposed rules, deviations from this prediction would also negatively affect the precision of tuning the spectral radius. In Figure~\ref{fig:variance_corr_N}, a comparison of the deviations $|\sigma^2_{{\rm bare}} - \sigma^2_{{\rm w}}\sigma^2_{{\rm y}}|$ is presented for the four input protocols introduced in Section~\ref{sect_input}. For the Gaussian protocols, for which neurons receive statistically uncorrelated external signals, one observes that $\sigma_{\rm bare}^2\to\sigma_{\rm w}^2\sigma_{\rm y}^2$ in the thermodynamic limit $N\to\infty$ via a power law, which is to be expected when the presynaptic neural activities are decorrelated. On the other hand, binary inputs act synchronously on all sites, either with site-dependent or site-independent strengths (heterogeneous/homogeneous). Corresponding activity correlations are induced and a finite and only weakly size-dependent difference between $\sigma_{\rm bare}^2$ and $\sigma_{\rm w}^2\sigma_{\rm y}^2$ shows up. Substantial corrections to the analytic theory are to be expected in this case.
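The independence check underlying Figure~\ref{fig:variance_corr_N} can be illustrated with a small NumPy sketch that estimates both sides of $\sigma^2_{\rm bare} = \sigma_{\rm w}^2\sigma_{\rm y}^2$ from a recorded activity batch; the array names and the particular estimators are illustrative assumptions, not the original analysis code.
\begin{verbatim}
import numpy as np

def bare_variance_check(W, Y):
    # W: (N, N) bare recurrent weights, Y: (T, N) recorded activities y_j(t)
    X_bare = Y @ W.T                         # x_bare,i(t) = sum_j W_ij y_j(t)
    var_bare = X_bare.var(axis=0).mean()     # population-averaged Var[x_bare]
    sigma_w2 = (W ** 2).sum(axis=1).mean()   # estimate of Var[sum_j W_ij], zero-mean weights
    sigma_y2 = Y.var(axis=0).mean()          # mean activity variance sigma_y^2
    return var_bare, sigma_w2 * sigma_y2     # equal only for decorrelated activities
\end{verbatim}
For the Gaussian protocols the two returned numbers approach each other with increasing $N$, while a finite gap remains for the binary protocols.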
To quantify the induced correlations, we measured the cross-correlation $C(y_i,y_j)$, defined as
\begin{align}
\label{crossCorr}
\bar{C} = \frac{1}{N(N-1)}\sum_{i\ne j} |C(y_i,y_j)|,
\quad\quad
C(y_i,y_j) = \frac{\mathrm{Cov}(y_i,y_j)}
{\sqrt{\mathrm{Cov}(y_i,y_i)\mathrm{Cov}(y_j,y_j)}}\,,
\end{align}
with the covariance given by $\mathrm{Cov}(y_i,y_j) = \langle (y_i - \langle y_i \rangle_t ) (y_j - \langle y_j \rangle_t ) \rangle_t$. For a system of $N=500$ neurons the results for the averaged absolute correlation $\bar{C}$ are presented in Figure~\ref{fig:het_corr_act_composite} (see Figure~\ref{S6_Fig} in the supplementary material for homogeneous input protocols). Autonomous echo-state layers are in chaotic states when supporting a finite activity level, which implies that correlations vanish in the thermodynamic limit $N\to\infty$. The case $\sigma_{\rm ext}=0$, as included in Figure~\ref{fig:het_corr_act_composite}, serves consequently as a yardstick for the magnitude of correlations that are due to the finite number of neurons. Input correlations were substantially above the autonomous case for correlated binary inputs, with the magnitude of $\bar{C}$ decreasing when the relative contribution of the recurrent activity increased. This was the case for increasing $R_{\rm a}$. The effect was opposite for the Gaussian protocol, for which the input did not induce correlations, but contributed to decorrelating neural activity. In this case, the mean absolute correlation $\bar{C}$ was suppressed when the internal activity became small in the limit $R_{\rm a}\to0$. For larger $R_{\rm a}$, the recurrent input gained more impact on neural activity relative to the external drive and thus drove $\bar{C}$ towards the amount of correlation that would be expected in the autonomous case.
\section{Discussion}
\label{sect:discussion}
The mechanisms for tuning the spectral radius via a local homeostatic adaptation rule introduced in the present study require neurons to have the ability to distinguish and locally measure both external and recurrent input contributions. For flow control, neurons need to be able to compare the recurrent membrane potential with their own activity, as assumed in Section~\ref{sect_specrad_reg}. On the other hand, variance control directly measures the variance of the external input and derives the activity target variance accordingly. The limiting factor to a successful spectral radius control is the amount of cross-correlation induced by the external driving statistics. As such, the functionality and validity of the proposed mechanisms depended on the ratio of external input, i.e.\ feed-forward or feedback connections, to recurrent, or lateral, connections. In general, it is not straightforward to directly connect experimental evidence regarding the ratio between recurrent and feed-forward contributions to the effects observed in the model. It is, however, worthwhile to note that the fraction of synapses associated with interlaminar loops and intralaminar lateral connections is estimated to make up roughly $50\%$ \citep{binzegger2004cortexcircuit}. Relating this to our model, it implies that the significant interneural correlations that we observed when external input strengths were of the same order of magnitude as the recurrent inputs cannot generally be considered an artifact of biologically implausible parameter choices.
Synchronization \citep{echeveste2016drifting} is in fact a widely observed phenomenon in the brain \citep{Usrey_1999}, with possible relevance for information processing \citep{Salinas_2001}. On the other hand, correlations due to shared input reduce the amount of information that can be stored in the neural ensemble \citep{Bell_1995}. Maximal information is achieved if neural activities or spike trains form an orthogonal ensemble \citep{Foeldiak_1990,Bell_1995,Tetzlaff2012}. Furthermore, neural firing in cortical microcircuits was found to be decorrelated across neurons, even if common external input was present \citep{Ecker2010}, that is, under a common orientation tuning. Therefore, the correlation we observed in our network due to shared input might be significantly reduced by possible modifications/extensions of our model: First, a strict separation between inhibitory and excitatory nodes according to Dale's law might help to actively decorrelate neural activity \citep{Tetzlaff2012,Bernacchia2013}. Second, if higher-dimensional input were used, a combination of plasticity mechanisms in the recurrent and feed-forward connections could lead to the formation of an orthogonal representation of the input \citep{Foeldiak_1990,Bell_1995,Wick2010}, leading to richer, ``higher dimensional'' activity patterns, i.e.\ a less dominant largest principal component. Ultimately, if these measures helped in reducing neural cross-correlations in the model, we would thus expect them to also increase the accuracy of the presented adaptation mechanisms. We leave these modifications to possible future research. Overall, we found flow control to be generally more robust than variance control in the sense that, while still being affected by the amount of correlations within the neural reservoir, the task performance was less prone to changes in the external input strength. Comparatively stable network performance could be observed, in spite of certain deviations from the desired spectral radius (see Figure~\ref{fig:het_performance_sweep_flow_composite}). A possible explanation may be that flow control uses a distribution of samples from only a restricted part of phase space, that is, from the phase space regions that are actually visited or ``used'' for a given input. Therefore, while a spectral radius of unity ensures --statistically speaking-- the desired scaling properties in all phase-space directions, it seems to be sufficient to control the correct scaling for the subspace of activities that is actually used for a given set of input patterns. Variance control, on the other hand, relies more strictly on the assumption that neural activities are statistically independent. In consequence, the desired results could only be achieved under a rather narrow set of input statistics (independent Gaussian input with small variance). In addition, the approximate expression derived for the nonlinear transformation appearing in the mean-field approximation adds another potential source of systematic error to the control mechanism. This aspect also speaks in favor of flow control, since its rules are mathematically simpler. In contrast to variance control, the stationarity condition stated in Eq.~(\ref{flow_R_a_introduction}) is independent of the actual nonlinear activation function used and could easily be adopted in a modified neuron model. It should be noted, however, that the actual target $R_{\rm t}$ giving optimal performance might then also be affected.
Interestingly, flow control distinguishes itself from a conventional local activity-target perspective of synaptic homeostasis: There is no predefined set point in Eq.~(\ref{a_i_flow_introduction}). This allows heterogeneities of variances of neural activity to develop across the network, while retaining the average neural activity at a fixed predefined level. We would like to point out that, for all the results presented here, only stationary processes were used for generating the input sequences. Therefore, it might be worth considering the potential effects of non-stationary, yet bounded, inputs on the results in future work. It should be noted, however, that the temporal domain enters both adaptation mechanisms only in the form of trailing averages of first and second moments. As a consequence, we expect the issue of non-stationarity of external inputs to present itself simply as a trade-off between slower adaptation, i.e.\ longer averaging time scales, and the mitigation of the effects of non-stationarities. Slow adaptation is, however, completely in line with experimental results on the dynamics of synaptic scaling, which is taking place on the time scale of hours to days \citep{Turrigiano_1998,Turrigiano_2008}. \section{Conclusion} \label{sect:conclusion} Apart from being relevant from a theoretical perspective, we propose that the separability of recurrent and external contributions to the membrane potential is an aspect that is potentially relevant for the understanding of local homeostasis in biological networks. While homeostasis in neural compartments has been a subject of experimental research \citep{Chen_2008}, to our knowledge, it has not yet been further investigated on a theoretical basis, although it has been hypothesized that the functional segregation within the dendritic structure might also affect (among other intraneural dynamical processes) homeostasis \citep{Narayanan2012}. The neural network model used in this study lacks certain features characterizing biological neural networks, like strict positivity of the neural firing rate or Dale's law, viz E-I balance \citep{trapp2018ei}. Future research should therefore investigate whether the here presented framework of local flow control can be implemented within more realistic biological neural network models. A particular concern regarding our findings is that biological neurons are spiking. The concept of an underlying instantaneous firing rate is, strictly speaking, a theoretical construct, let alone the definition of higher moments, such as the ``variance of neural activity". It is however acknowledged that the variability of the neural activity is central for statistical inference \citep{echeveste2020cortical}. It is also important to note that real-world biological control mechanisms, e.g.\ of the activity, rely on physical quantities that serve as measurable correlates. A well-known example is the intracellular calcium concentration, which is essentially a linearly filtered version of the neural spike train \citep{Turrigiano_2008}. On a theoretical level, Cannon and Miller showed that dual homeostasis can successfully control the mean and variance of this type of spike-averaging physical quantities \citep{cannon2017stable}. An extension of the flow control to filtered spike trains of spiking neurons could be an interesting subject of further investigations. 
However, using spiking neuron models would have shifted the focus of our research towards the theory of liquid state machines \citep{Maass2002,Maass_2004}, exceeding the scope of this publication. We therefore leave the extension to more realistic network/neuron models to future work.
\section{Materials and methods}
\label{sect:methods}
\bigskip
\subsection{Model}
\label{sect:model}
We implemented an echo state network with $N$ neurons, receiving $D_{\rm in}$ inputs. The neural activity is $y_i\in[-1,1]$, $x_i$ the membrane potential, $u_i$ the input activities, $W_{ij}$ the internal synaptic weights and $I_i$ the external input received. The output layer will be specified later. The dynamics
\begin{equation}
x_i(t) = a_i\sum_{j=1}^N W_{ij} y_j(t-1) + I_i(t),
\qquad\quad
y_i(t) = \tanh\left(x_i(t) - b_i\right)
\label{x_i}
\end{equation}
is discrete in time, where the input $I_i$ is treated instantaneously. A tanh sigmoidal is used as the nonlinear activation function. The synaptic renormalization factor $a_i$ in (\ref{x_i}) can be thought of as a synaptic scaling parameter that neurons use to regulate the overall strength of the recurrent inputs. The strength of the inputs $I_i$ is unaffected, which is biologically plausible if external and recurrent signals arrive at separate branches of the dendritic tree \citep{Spruston2008}. The $W_{ij}$ are the bare synaptic weights, with $a_i W_{ij}$ being the components of the effective weight matrix $\widehat{W}_{\rm a}$. Key to our approach is that the propagation of activity is determined by $\widehat{W}_{\rm a}$, which implies that the spectral radius of the effective, and not of the bare, weight matrix needs to be regulated. The bare synaptic matrix $W_{ij}$ is sparse, with a connection probability $p_{\rm r}=0.1$. The non-zero elements are drawn from a Gaussian with standard deviation
\begin{equation}
\sigma=\frac{\sigma_{\rm w}}{\sqrt{N p_{\rm r}}}\,,
\label{sigma_w}
\end{equation}
and vanishing mean $\mu$. Here $Np_{\rm r}$ corresponds to the mean number of afferent internal synapses, with the scaling $\sim 1/\sqrt{Np_{\rm r}}$ enforcing size-consistent synaptic-weight variances. As discussed in the results section, we applied the adaptation rule
\begin{equation}
b_i(t)= b_i(t-1) + \epsilon_{\rm b} \left[y_i(t) - \mu_{\rm t} \right]
\label{b_i}
\end{equation}
for the thresholds $b_i$, together with one of the following two mechanisms for the gains.
\begin{itemize}
\item Adaptation of gains, using flow control:
\begin{equation}
a_i(t) = a_i(t-1)\Big[1+ \epsilon_{\rm a} \Delta R_i(t)\Big],
\quad\quad
\Delta R_i(t) = R_{\rm t}^2 {|y_i(t-1)|}^2 - {|x_{{\rm r},i}(t)|}^2\;.
\label{a_i_flow}
\end{equation}
\item Adaptation of gains, with variance control:
\begin{align}
\label{a_i_variance_methods}
a_i(t) &= a_i(t-1) + \epsilon_{\rm a} \left[
\sigma_{{\rm t},i}^2(t) - {\left( y_i(t) - \mu^{\rm y}_i(t) \right)}^2\right] \\
\label{sigm_target_methods}
\sigma_{{\rm t},i}^2(t) &= 1 - \frac{1}{\sqrt{1 + 2R_{\rm t}^2 y_i(t)^2 + 2\sigma_{{\rm ext},i}^2(t)}} \\
\label{mu_y_methods}
\mu^{\rm y}_i(t) &= \mu^{\rm y}_i(t-1) + \epsilon_\mu \left[y_i(t) - \mu^{\rm y}_i(t-1)\right] \\
\label{sigm_ext_methods}
\sigma_{{\rm ext},i}^2(t) &= \sigma_{{\rm ext},i}^2(t-1) + \epsilon_{\sigma}
\left[\left(I_i(t) - \mu_{{\rm ext},i}(t)\right)^2 - \sigma_{{\rm ext},i}^2(t-1)\right] \\
\label{mu_ext_methods}
\mu_{{\rm ext},i}(t) &= \mu_{{\rm ext},i}(t-1) + \epsilon_\mu \left[I_i(t) - \mu_{{\rm ext},i}(t-1)\right] \; .
\end{align}
Note that Eqs.~(\ref{mu_y_methods})--(\ref{mu_ext_methods}) have the same mathematical form
\begin{equation*}
\langle trail \rangle (t) = \langle trail \rangle (t-1) + \epsilon\left[\langle var \rangle (t) - \langle trail \rangle (t-1)\right]
\end{equation*}
since they only serve as trailing averages that are used in the two main equations (\ref{a_i_variance_methods}) and (\ref{sigm_target_methods}).
\end{itemize}
For a summary of all model parameters, see Table~\ref{tab_params}.
\begin{table}[b]
\centering
\caption{Standard values for model parameters}
\label{tab_params}
\renewcommand{\arraystretch}{1.5}
\begin{tabular}{ c|c|c|c|c|c|c|c }
$N$ & $p_{\rm r}$ & $\sigma_{\rm w}$ & $\mu_{\rm t}$&$\epsilon_{\rm b}$ & $\epsilon_{\rm a}$ & $\epsilon_\mu$ & $\epsilon_{\sigma}$ \\
\hline
$500$ & $0.1$ & $1$ & $0.05$ & $10^{-3}$ & $10^{-3}$ & $10^{-4}$ & $10^{-3}$
\end{tabular}
\end{table}
\bigskip
\subsection{Convergence acceleration for flow control}
\label{sect:renorm_flow_control}
For small values of $R_{\rm t}$ and weak external input, the average square activities and membrane potentials $y^2_i(t)$ and $x^2_{{\rm r},i}(t)$ can become very small. As a consequence, their difference entering $\Delta R_i(t)$ in (\ref{a_i_flow}) also becomes small in absolute value, slowing down the convergence process. To eliminate this effect, we decided to rescale the learning rate by a trailing average of the squared recurrent membrane potential, i.e.\ $\epsilon_{\rm a} \rightarrow \epsilon_{\rm a} / \bar{x}^2_{\rm r}$. The effect of this renormalization is shown in Fig.~\ref{fig:flow_renorm}. Rescaling the learning rate effectively removes the significant rise of convergence times for small $\sigma_{\rm ext}$ and small $R_{\rm t}$.
\begin{figure}
\begin{center}
\includegraphics[width=\textwidth]{./plots/Figure7.png}
\end{center}
\caption{{\bf Convergence time with and without adaptation rate renormalization.}
Number of time steps $T_{\rm conv}$ needed for $|R_{\rm a}(t) - R_{\rm a}(t-1)|^2$ to fall below $10^{-3}$. Shown are results using heterogeneous Gaussian input without ({\bf A}) and with ({\bf B}) a renormalization of the learning rate $\epsilon_{\rm a} \rightarrow \epsilon_{\rm a} / \bar{x}^2_{\rm r}$. Note that, due to computational complexity, an estimate of $R_{\rm a}$ given by (\ref{R_a}) was used. An initial offset of $0.5$ from the target $R_{\rm t}$ was used for all runs. Color coding of $R_{\rm t}$ is the same in both panels.}
\label{fig:flow_renorm}
\end{figure}
\bigskip
\subsection{Input protocols}
\label{sect_input}
Overall, we examined four distinct input protocols.
\begin{itemize}
\item {\sl Homogeneous Gaussian.} Nodes receive inputs $I_i(t)$ that are drawn individually from a Gaussian with vanishing mean and standard deviation $\sigma_{\rm ext}$.
\item {\sl Heterogeneous Gaussian.} Nodes receive stochastically independent inputs $I_i(t)$ that are drawn from Gaussian distributions with vanishing mean and node-specific standard deviations $\sigma_{i, {\rm ext}}$. The individual $\sigma_{i, {\rm ext}}$ are half-normal distributed, i.e.\ drawn from the positive part of a Gaussian with mean zero and variance $\sigma_{\rm ext}^2$.
\item {\sl Homogeneous binary.} Sites receive identical inputs $I_i(t)=\sigma_{\rm ext} u(t)$, where $u(t)=\pm1$ is a binary input sequence.
\item {\sl Heterogeneous binary.} We define with
\begin{equation}
I_i(t) = W^{\rm u}_{i}\, u(t),
\qquad\quad
u(t)=\pm1
\label{I_i}
\end{equation}
the afferent synaptic weight vector $W^{\rm u}_{i}$, which connects the binary input sequence $u(t)$ to the network. All $W^{\rm u}_{i}$ are drawn independently from a Gaussian with mean zero and standard deviation $\sigma_{\rm ext}$.
\end{itemize}
The Gaussian input variant simulates external noise. We used it in particular to test predictions of the theory developed in Section~\ref{sect:MF_theory}. In order to test the performance of the echo state network with respect to the delayed XOR task, the binary input protocols are employed. A generalization of the protocols defined here to the case of higher-dimensional input signals would be straightforward.
\bigskip
\subsection{Spectral radius adaptation dynamics}
\label{sect_R_dynamics}
For an understanding of the spectral radius adaptation dynamics of flow control, it is of interest to examine the effect of using the global adaptation constraint
\begin{equation}
\Delta R_i(t) = \frac{1}{N}\Big[
R_{\rm t}^2\,{||\mathbf{y}(t-1)||}^2- {||\mathbf{x}_{\rm r}(t)||}^2
\Big]
\label{delta_R_global}
\end{equation}
in (\ref{a_i_flow_introduction}). The spectral radius condition (\ref{flow_R_a_introduction}) is then enforced directly, with the consequence that (\ref{delta_R_global}) is stable and precise even in the presence of correlated neural activities (see Figure~\ref{fig_R_a_regulation}C). This rule, while not biologically plausible, provides an opportunity to examine the dynamical flow, besides the resulting state. There are two dynamic variables: the rescaling factor $a = a_i \; \forall i$, where, for the sake of simplicity, we assume all $a_i$ to be homogeneous, and the activity variance $\sigma_{\rm y}^2=||\mathbf{y}||^2/N$. The evolution of $(a,\sigma_{\rm y}^2)$ resulting from the global rule (\ref{delta_R_global_introduction}) is shown in Figure~\ref{fig_adaptation_dynamics}.
\begin{figure}[t]
\includegraphics[width=1.0\textwidth]
{./plots/Figure8.png}
\caption{{\bf Spectral radius adaptation dynamics.}
The dynamics of the synaptic rescaling factor $a$ and the squared activity $\sigma_{\rm y}^2$ (orange), as given by (\ref{delta_R_global_introduction}), for $R_{\rm t}=1$. Also shown is the analytic approximation to the flow (blue), see (\ref{eq:gain_dyn_approx}) and (\ref{eq:y_squ_dyn_approx}), and the respective nullclines $\Delta a=0$ (green) and $\Delta\sigma_{\rm y}^2=0$ (red). For the input, the heterogeneous binary protocol is used. Panels {\bf A} to {\bf D} correspond to different combinations of external input strengths and target spectral radii. The black dots show the steady-state configurations of the simulated systems. $\epsilon_{\rm a} = 0.1$.}
\label{fig_adaptation_dynamics}
\end{figure}
For the flow, $\Delta a= a(t+1)-a(t)$ and $\Delta\sigma_{\rm y}^2 = \sigma_{\rm y}^2(t+1) - \sigma_{\rm y}^2(t)$, the approximation
\begin{align}
\label{eq:gain_dyn_approx}
\Delta a &= \epsilon_{\rm a} a \left(R_{\rm t}^2-a^2\sigma^2_{\rm w}\right) \sigma_{\rm y}^2 \\
\Delta\sigma_{\rm y}^2 &= 1 - \sigma_{\rm y}^2 - \frac{1}{\sqrt{1+2a^2 \sigma^2_{\rm w} \sigma_{\rm y}^2 + 2 \sigma_{\rm ext}^2}}
\label{eq:y_squ_dyn_approx}
\end{align}
is obtained. For the scaling factor $a$, this leads to a fixed point at $a=R_{\rm t}/\sigma_{\rm w}$. We used the mean-field approximation for neural variances that is derived in Section~\ref{sect:MF_theory}.
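The reduced flow of Eqs.~(\ref{eq:gain_dyn_approx}) and (\ref{eq:y_squ_dyn_approx}) can be iterated directly, which is a convenient way to reproduce its qualitative behavior. The sketch below is illustrative only; the initial values and the number of iterations are arbitrary choices.
\begin{verbatim}
import numpy as np

def reduced_flow(R_t, sigma_w=1.0, sigma_ext=0.5, eps_a=0.1,
                 a0=0.5, sy2_0=0.1, steps=2000):
    # iterate the approximate (a, sigma_y^2) flow of Eqs. (gain/y_squ_dyn_approx)
    a, sy2 = a0, sy2_0
    for _ in range(steps):
        da = eps_a * a * (R_t**2 - a**2 * sigma_w**2) * sy2
        dsy2 = 1.0 - sy2 - 1.0 / np.sqrt(1.0 + 2.0 * a**2 * sigma_w**2 * sy2
                                         + 2.0 * sigma_ext**2)
        a, sy2 = a + da, sy2 + dsy2
    return a, sy2            # for sigma_ext > 0, a approaches R_t / sigma_w
\end{verbatim}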
The analytic flow compares well with numerics, as shown in Figure~\ref{fig_adaptation_dynamics}. For a subcritical rescaling factor $a$ and $\sigma_{\rm ext}=0$, the system flows towards a line of fixpoints defined by a vanishing $\sigma_{\rm y}^2$ and a finite $a\in[0,1]$, see Figure~\ref{fig_adaptation_dynamics}A. When starting with $a>1$, the fixpoint is instead $(a,\sigma_{\rm y}^2)=(1,0)$. The situation changes qualitatively for finite external inputs, viz when $\sigma_{\rm ext}>0$, as shown in Figure~\ref{fig_adaptation_dynamics}B--D. The nullcline $\Delta\sigma_{\rm y}^2=0$ is now continuous and the system flows to the fixed point, with the value of $\sigma_{\rm y}^2$ being determined by the intersection of the two nullclines. In addition, we also varied the target spectral radius, see Figure~\ref{fig_adaptation_dynamics}B/C. This caused a slight mismatch between the flow of the simulated systems and the analytic flow. It should be noted, however, that this is to be expected anyhow, since we used an approximation for the neural variances, again, see Section~\ref{sect:MF_theory}. This analysis shows that external input is necessary for a robust flow towards the desired spectral radius, the reason being that the dynamics dies out before the spectral radius can be adapted when the isolated system starts in the subcritical regime.
\bigskip
\subsection{Extended theory of flow control for independent neural activity}
\label{sect:flow_theo}
We would like to show that the stationarity condition in Eq.~(\ref{flow_R_a_introduction}) results in the correct spectral radius, under the special case of independently identically distributed neural activities with zero mean. We start with Eq.~(\ref{flow_R_a_introduction}) as a stationarity condition for a given $R_{\rm t}$:
\begin{equation}
{\big\langle {||\mathbf{x}_{\rm r}(t)||}^2 \big\rangle}_{\rm t} \overset{!}{=}
R_{\rm t}^2 {\big\langle \,{||\mathbf{y}(t-1)||}^2\big\rangle}_{\rm t} \, .
\label{eq:stat_cond}
\end{equation}
We can express the left-hand side of the equation as
\begin{equation}
\mathrm{E}\left[ \mathbf{y}^\dagger (t) \widehat{W}_{\rm a}^\dagger \widehat{W}_{\rm a} \mathbf{y}(t) \right]_t \, .
\end{equation}
We define $\widehat{U}_{\rm a} \equiv \widehat{W}_{\rm a}^\dagger \widehat{W}_{\rm a}$ with $\{\sigma^2_k\}$ being the set of eigenvalues, which are also the squared singular values of $\widehat{W}_{\rm a}$, and $\{\mathbf{u}_k\}$ the respective set of orthonormal (column) eigenvectors. We insert the identity $\sum_{k=1}^N \mathbf{u}_k \mathbf{u}^\dagger_k$ and find
\begin{align}
& \mathrm{E}\left[ \mathbf{y}^\dagger (t) \widehat{U}_{\rm a}
\sum_{k=1}^N \mathbf{u}_k \mathbf{u}^\dagger_k \mathbf{y}(t) \right]_t \\
= & \mathrm{E}\left[ \sum_{k=1}^N \sigma^2_k \mathbf{y}^\dagger (t) \mathbf{u}_k \mathbf{u}^\dagger_k \mathbf{y}(t) \right]_t \\
= & \sum_{k=1}^N \sigma^2_k \mathbf{u}^\dagger_k \mathrm{E}\left[\mathbf{y}(t)\mathbf{y}^\dagger (t)\right]_t \mathbf{u}_k \\
= & \sum_{k=1}^N \sigma^2_k \mathbf{u}^\dagger_k \widehat{C}_{\rm yy} \mathbf{u}_k \\
= & \mathrm{Tr}\left( \widehat{D}_{\sigma^2} \widehat{S}^\dagger_{\rm u} \widehat{C}_{\rm yy} \widehat{S}_{\rm u} \right) \, .
\end{align}
Given zero-mean neural activity, $\widehat{C}_{\rm yy} = \mathrm{E}[\mathbf{y}(t)\mathbf{y}^\dagger (t)]_t$ is the covariance matrix of neural activities.
$\widehat{D}_{\sigma^2}$ is a diagonal matrix holding the $\{\sigma^2_k\}$ and $\widehat{S}_{\rm u}$ is a unitary matrix whose columns are $\{\mathbf{u}_k\}$. $\widehat{S}^\dagger_{\rm u} \widehat{C}_{\rm yy} \widehat{S}_{\rm u}$ is expressing $\widehat{C}_{\rm yy}$ in the diagonal basis of $\widehat{U}_{\rm a}$. Including the right-hand side of (\ref{eq:stat_cond}), we get
\begin{equation}
\mathrm{Tr}\left( \widehat{D}_{\sigma^2} \widehat{S}^\dagger_{\rm u} \widehat{C}_{\rm yy} \widehat{S}_{\rm u} \right) = R^2_{\rm t} \mathrm{Tr}\left(\widehat{C}_{\rm yy}\right) \, .
\end{equation}
Since the trace is invariant under a change of basis, we can equally write
\begin{equation}
\mathrm{Tr}\left( \widehat{D}_{\sigma^2} \widehat{S}^\dagger_{\rm u} \widehat{C}_{\rm yy} \widehat{S}_{\rm u} \right) = R^2_{\rm t} \mathrm{Tr}\left( \widehat{S}^\dagger_{\rm u} \widehat{C}_{\rm yy} \widehat{S}_{\rm u} \right) \, .
\end{equation}
Defining $\widehat{C}^{\rm u} \equiv \widehat{S}^\dagger_{\rm u} \widehat{C}_{\rm yy} \widehat{S}_{\rm u}$, we get
\begin{equation}
\sum_{k=1}^N \sigma^2_k C^{\rm u}_{kk} = R^2_{\rm t} \sum_{k=1}^N C^{\rm u}_{kk} \, .
\label{eq:sum_sv_base_transf}
\end{equation}
If we assume that the node activities are independently identically distributed with zero mean, we get $(\widehat{C}_{\rm yy})_{ij} = (\widehat{C}^{\rm u} )_{ij} = \left\langle y^2\right\rangle_{t} \delta_{ij}$. In this case, which was also laid out in Section~\ref{sec:sing_values}, the equation reduces to
\begin{equation}
\sum_{k=1}^N \sigma^2_k = R^2_{\rm t} N \, .
\label{eq:sum_sv_specrad}
\end{equation}
The Frobenius norm of a square matrix $\widehat{A}$ is given by ${\lVert \widehat{A} \rVert}^2_{\rm F} \equiv \sum_{i,j} \widehat{A}^2_{ij}$. Furthermore, the Frobenius norm is linked to the singular values via ${\lVert \widehat{A} \rVert}^2_{\rm F} = \sum_k \sigma^2_k (\widehat{A})$ \citep{sengupta1999distributions,shen2001singular}. This allows us to state
\begin{equation}
\sum_{i,j} {\left(\widehat{W}_{\rm a}\right)}^2_{ij} = R^2_{\rm t} N
\end{equation}
which, by using (\ref{R_a}), gives
\begin{equation}
R^2_{\rm a} = R^2_{\rm t} \, .
\end{equation}
A slightly less restrictive case is that of uncorrelated but inhomogeneous activity, that is ${(\widehat{C}_{\rm yy})}_{ij} = {\left\langle y_i^2 \right\rangle}_{t} \delta_{ij}$. The statistical properties of the diagonal elements $C^{\rm u}_{kk}$ then determine to which degree one can still claim that Eq.~(\ref{eq:sum_sv_base_transf}) leads to Eq.~(\ref{eq:sum_sv_specrad}). Figure~\ref{S7_Fig} in the supplementary materials shows an example of a randomly generated realization of ${(\widehat{C}_{\rm yy})}_{ij} = {\left\langle y_i^2\right\rangle}_{t} \delta_{ij}$ and the resulting diagonal elements of $\widehat{C}^{\rm u}$, where the corresponding orthonormal basis $\widehat{S}_{\rm u}$ was generated from the SVD of a random Gaussian matrix. As one can see, the basis transformation has a strong smoothing effect on the diagonal entries, while the mean over the diagonal elements is preserved. Note that this effect was not disturbed by introducing random row-wise multiplications to the random matrix from which the orthonormal basis was derived. The smoothing of the diagonal entries allows us to state that $C^{\rm u}_{kk} \approx \left\langle y^2\right\rangle$ is a very good approximation in the case considered, which therefore reduces (\ref{eq:sum_sv_base_transf}) to the homogeneous case previously described.
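The smoothing argument can be checked numerically with a few lines of NumPy; the code below is an illustrative sketch (the matrix size, the range of the diagonal entries and the random seed are arbitrary), not the script used to produce Figure~\ref{S7_Fig}.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N = 500

c_diag = rng.uniform(0.05, 0.5, N)        # uncorrelated but inhomogeneous <y_i^2>
C_yy = np.diag(c_diag)

# orthonormal basis from the SVD of a random Gaussian matrix
U, s, _ = np.linalg.svd(rng.normal(size=(N, N)) / np.sqrt(N))
sigma2 = s ** 2                            # squared singular values

C_u = U.T @ C_yy @ U                       # covariance in the singular-vector basis
print(np.std(np.diag(C_u)) / np.std(c_diag))     # << 1: diagonal strongly smoothed
print(np.isclose(np.trace(C_u), np.trace(C_yy)))  # mean of the diagonal preserved

lhs = np.sum(sigma2 * np.diag(C_u))        # left-hand side of Eq. (sum_sv_base_transf)
rhs = c_diag.mean() * np.sum(sigma2)       # homogeneous approximation C^u_kk = <y^2>
print(lhs / rhs)                           # close to one for large N
\end{verbatim}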
We can conclude that the adaptation mechanism also gives the desired spectral radius under uncorrelated inhomogeneous activity. In the most general case, we can still state that if $C^{\rm u}_{kk}$ and $\sigma^2_k$ are uncorrelated, for large $N$, Eq.~(\ref{eq:sum_sv_base_transf}) will tend towards
\begin{equation}
N \left\langle \sigma^2\right\rangle \left\langle C^{\rm u}\right\rangle = N R^2_{\rm t} \left\langle C^{\rm u}\right\rangle
\end{equation}
which would also lead to Eq.~(\ref{eq:sum_sv_specrad}). However, we cannot generally guarantee statistical independence, since the recurrent contribution to the neural activities, and therefore the entries of $\widehat{C}_{\rm yy}$ and of $C^{\rm u}_{kk}$, are linked to $\widehat{S}_{\rm u}$ and $\sigma^2_k$, which derive from the SVD of the recurrent weight matrix.
\bigskip
\bigskip
\subsection{Mean field theory for echo state layers}
\label{sect:MF_theory}
In the following, we deduce analytic expressions that allow us to examine the state of echo-state layers subject to a continuous stream of inputs. Our approach is similar to the one presented by \citet{Massar2013}. The recurrent part of the input $x_i$ received by a neuron is a superposition of $Np_{\rm r}$ terms, which are assumed here to be uncorrelated. Given this assumption, the self-consistency equations
\begin{align}
\label{self_consistency_sigma_t}
\sigma_{{\rm y},i}^2&=\int_{-\infty}^{\infty} {\rm d}x\,\tanh^2(x) N_{\mu_i,\sigma_i}(x) - \mu^2_{{\rm y},i} \\
\label{self_consistency_mu_t}
\mu_{{\rm y},i} &= \int_{-\infty}^{\infty} {\rm d}x\,\tanh(x) N_{\mu_i,\sigma_i}(x) \\
\sigma^2_i&=a_i^2\sigma_{\rm w}^2\left\langle\sigma_{{\rm y},j}^2\right\rangle_j+ \sigma_{{\rm ext},i}^2,
\qquad\quad
\mu_i = \mu_{{\rm ext},i} - b_i
\label{self_consistency_sigma_mu}
\end{align}
determine the properties of the stationary state. We recall that $\sigma_{\rm w}$ parameterizes the distribution of bare synaptic weights via (\ref{sigma_w}). The general expressions (\ref{self_consistency_sigma_t}) and (\ref{self_consistency_mu_t}) hold for all neurons, with the site-dependency entering exclusively via $a_i$, $b_i$, $\sigma_{{\rm ext},i}$ and $\mu_{{\rm ext},i}$, as in (\ref{self_consistency_sigma_mu}), with the latter two characterizing the standard deviation and the mean of the input. Here, $a_i^2\sigma_{\rm w}^2\sigma_{\rm y}^2$ is the variance of the recurrent contribution to the membrane potential, $x$, and $\sigma^2$ the respective total variance. The membrane potential is Gaussian distributed, as $N_{\mu,\sigma}(x)$, with mean $\mu$ and variance $\sigma^2$, which are both to be determined self-consistently. Variances are additive for stochastically independent processes, which has been assumed in (\ref{self_consistency_sigma_mu}) to be the case for recurrent activities and the external inputs. The mean $\mu_i$ is the average of the argument of the activation function, which includes the threshold $b_i$. For a given set of $a_i$, $\sigma_{{\rm ext},i}$ and $b_i$, the variances and means of neural activities, $\sigma^2_{{\rm y},i}$ and $\mu_{{\rm y},i}$, follow implicitly. We compared the numerically determined solutions of (\ref{self_consistency_sigma_t}) and (\ref{self_consistency_mu_t}) against full network simulations using, as throughout this study, $N=500$, $p_{\rm r}=0.1$, $\sigma_{\rm w}=1$, $\mu_{\rm t}=0.05$. In Figure~\ref{fig_R_a_input_protocols}, the spectral radius $R_{\rm a}$ is given for the four input protocols defined in Section~\ref{sect_input}. The identical ensemble of input standard deviations $\sigma_{{\rm ext},i}$ enters both theory and simulations.
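The numerically determined solutions referred to above can be obtained by fixed-point iteration of Eqs.~(\ref{self_consistency_sigma_t})--(\ref{self_consistency_sigma_mu}), with the Gaussian integrals evaluated by quadrature. The sketch below does this for a homogeneous population with zero-mean input; the quadrature order, the initial values and the assumption $\mu_{\rm ext}=0$ are illustrative choices, not a description of the original code.
\begin{verbatim}
import numpy as np

def gauss_expect(f, mu, sigma, n=64):
    # E[f(x)] for x ~ N(mu, sigma^2), via Gauss-Hermite quadrature
    z, w = np.polynomial.hermite_e.hermegauss(n)
    return np.sum(w * f(mu + sigma * z)) / np.sqrt(2.0 * np.pi)

def self_consistent_state(a, b, sigma_w=1.0, sigma_ext=0.5, iters=200):
    # fixed-point iteration of Eqs. (self_consistency_*), homogeneous population
    sy2, my = 0.5, 0.0
    for _ in range(iters):
        sigma = np.sqrt(a**2 * sigma_w**2 * sy2 + sigma_ext**2)
        mu = -b                            # mu_ext = 0 assumed for zero-mean input
        my = gauss_expect(np.tanh, mu, sigma)
        sy2 = gauss_expect(lambda x: np.tanh(x)**2, mu, sigma) - my**2
    return sy2, my
\end{verbatim}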
\begin{figure}[t]
\includegraphics[width=1.0\textwidth]
{./plots/Figure9.png}
\caption{{\bf Variance control for the spectral radius.}
The spectral radius $R_{\rm a}$, given by the approximation $R_{\rm a}^2=\sum_ia_i^2/N$, for the four input protocols defined in Section~\ref{sect_input}. Lines show the numerical self-consistency solution of (\ref{self_consistency_sigma_t}), symbols the full network simulations. Note the instability for small $\sigma_{\rm y}$ and $\sigma_{\rm ext}$. {\bf A}: Homogeneous independent Gaussian input. {\bf B}: Homogeneous identical binary input. {\bf C}: Heterogeneous independent Gaussian input. {\bf D}: Heterogeneous identical binary input.
}
\label{fig_R_a_input_protocols}
\end{figure}
Theory and simulations are in good agreement for vanishing input. Here, the reason is that finite activity levels are sustained in an autonomous random neural network when the ongoing dynamics is chaotic and hence decorrelated. For reduced activity levels, viz for small variances $\sigma_{\rm y}^2$, the convergence of the network dynamics is comparatively slow, which leads to a certain discrepancy with the analytic prediction (see Figure~\ref{fig_R_a_input_protocols}).
\bigskip
\subsubsection{Gaussian~approximation}
\label{sec:Gaussian_Approximation}
The integral occurring in the self-consistency condition (\ref{self_consistency_sigma_t}) can be evaluated explicitly when a tractable approximation to the squared transfer function $\tanh^2()$ is available. A polynomial approximation would capture the leading behavior close to the origin, however without accounting for the fact that $\tanh^2()$ converges to unity for large absolute values of the membrane potential. Alternatively, an approximation incorporating both conditions, the correct second-order scaling for small arguments and the correct convergence for large arguments, is given by the Gaussian approximation
\begin{equation}
\tanh^2(x) \approx 1 - \exp\left(-x^2 \right)\,.
\label{gaussian_approximation}
\end{equation}
With this approximation the integral in (\ref{self_consistency_sigma_t}) can be evaluated explicitly. The result is
\begin{align}
\frac{1}{1-\sigma^2_{\rm y} - \mu^2_{\rm y}} &= \sqrt{1+2\sigma^2} / \exp\left(-\mu^2/\left(1 + 2 \sigma^2\right) \right)
\label{selfConsistency_GaussianApprox} \\
&= \sqrt{1+2a^2\sigma^2_{\rm w} \sigma^2_{\rm y} + 2\sigma^2_{\rm ext} } / \exp\left(-\mu^2/\left(1 + 2a^2\sigma^2_{\rm w} \sigma^2_{\rm y} + 2\sigma^2_{\rm ext}\right) \right) \,. \nonumber
\end{align}
Assuming that $\mu \approx 0$ and $\mu_{\rm y} \approx 0$, inverting the first equation yields a relatively simple analytic approximation for the variance self-consistency equation:
\begin{equation}
\sigma^2_{\rm y} = 1 - \frac{1}{\sqrt{1+2a^2\sigma^2_{\rm w} \sigma^2_{\rm y} + 2\sigma^2_{\rm ext} }} \; .
\label{eq:sigm_y_approx}
\end{equation}
This equation was then used for the approximate update rule in (\ref{sigm_target}) and (\ref{eq:y_squ_dyn_approx}). Alternatively, we can write (\ref{eq:sigm_y_approx}) as a self-consistency equation between $\sigma_{{\rm y}}^2$, $\sigma_{{\rm ext}}^2$ and $a^2\sigma_{{\rm w}}^2 = R^2_{\rm a}$, describing a phase transition at $R_{\rm a} = 1$:
\begin{equation}
2R^2_{\rm a} \sigma_{{\rm y}}^2 \left(1 - \sigma_{{\rm y}}^2 \right)^2 =
1 - \left(1+2 \sigma_{{\rm ext}}^2\right)\left(1- \sigma_{{\rm y}}^2\right)^2 \; .
\label{eq:sigm_y_approx_self_consist} \end{equation} See Fig.~\ref{fig:phase_trans_analytic} for solutions of (\ref{eq:sigm_y_approx_self_consist}) for different values of $\sigma^2_{{\rm ext}}$. Note that for vanishing external driving and values of $R_{\rm a}$ above but close to the critical point, the standard deviation $\sigma_{{\rm y}}$ scales with $\sigma_{{\rm y}} \propto (R_{\rm a} - 1)^{1/2}$, which is the typical critical exponent for the order parameter in classical Landau theory of second-order phase transitions \citep[p. 169]{Gros_ComplexSystems}. If combined with a slow homeostatic process, flow or variance control in our case, this constitutes a system with an absorbing phase transition \citep[p. 182-183]{Gros_ComplexSystems}, settling at the critical point $R_{\rm a} = 1$. This phase transition can also be observed in Fig.~\ref{fig_R_a_input_protocols} for $\sigma_{{\rm ext}} = 0$ as a sharp onset in $\sigma_{{\rm y}}$. \begin{figure}[t] \centering \includegraphics[width=\textwidth]{./plots/Figure10.png} \caption{{\bf Phase transition of activity variance} Shown are solutions of the analytical approximation given in (\ref{eq:sigm_y_approx_self_consist}), capturing the onset of activity (characterized by its variance $\sigma^2_{{\rm y}}$) at the critical point $R_{\rm a}=1$.} \label{fig:phase_trans_analytic} \end{figure} \section*{Conflict of Interest Statement} The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. \section*{Author Contributions} Both authors, F.S. and C.G., contributed equally to the writing and review of the manuscript. F.S. provided the code, ran the simulations and prepared the figures. \section*{Acknowledgments} The authors acknowledge the financial support of the German research foundation (DFG) and discussions with R.~Echeveste. This manuscript was published as a pre-print on biorxiv \citep{Schubert2020_biorxiv}. \section*{Data Availability Statement} The datasets generated for this study can be found in \url{https://itp.uni-frankfurt.de/~fschubert/data_esn_frontiers/}. Simulation and plotting code is available in \url{https://github.com/FabianSchubert/ESN_Frontiers}.
{ "attr-fineweb-edu": 1.832031, "attr-cc_en_topic": 12, "domain": "arxiv" }
\section{Introduction} Accurate 3D human pose estimation from monocular images and videos is the key to unlock several applications in robotics, human computer interaction, surveillance, animation and virtual reality. These applications require \emph{accurate} and \emph{real-time} 3D pose estimation from monocular image or video under challenging variations of clothing, lighting, view-point, self-occlusions, activities, background clutter etc.~\cite{sminchisescu2003estimating,SARAFIANOS20161}. With the advent of recent advances in deep learning, compute hardwares and, most importantly, large-scale \emph{real-world} datasets (ImageNet~\cite{2014arXiv1409.0575R}, MS COCO~\cite{MSCOCO:2014}, CityScapes~\cite{cordts2016cityscapes} etc.), computer vision systems have witnessed dramatic improvements in performance. Human-pose estimation has also benefited from synthetic and real-world datasets such as MS COCO~\cite{MSCOCO:2014}, MPII Pose~\cite{andriluka14cvpr}, Human3.6M~\cite{h36m_pami,IonescuSminchisescu11}, MPI-INF-3DHP~\cite{mono-3dhp2017}, and SURREAL~\cite{Varol_2017_CVPR}. Especially, 2D pose prediction has witnessed tremendous improvement due to large-scale in-the-wild datasets~\cite{MSCOCO:2014,andriluka14cvpr}. However, 3D pose estimation still remains challenging due to severely under-constrained nature of the problem and absence of any real-world 3D annotated dataset. A large body of prior art either directly regresses for 3D joint coordinates~\cite{li20143d,Li_2015_ICCV,Sun_2017_ICCV} or infers 3D from 2D joint-locations in a two-stage approach~\cite{mono-3dhp2017,Moreno-Noguer_2017_CVPR,Lin_2017_CVPR,zhou2016sparseness,Zhou_2017_ICCV}. These approaches perform well on synthetic 3D benchmark datasets, but lack generalization to the real-world setting due to the lack of 3D annotated in-the-wild datasets. To mitigate this issue, some approaches use synthetic datasets~\cite{ChenWLSWTLCC16,Varol_2017_CVPR}, green-screen composition~\cite{mono-3dhp2017,VNect_SIGGRAPH2017}, domain adaptation~\cite{ChenWLSWTLCC16}, transfer learning from intermediate 2D pose estimation tasks~\cite{mono-3dhp2017,li20143d}, and joint learning from 2D and 3D data~\cite{Zhou_2017_ICCV,Sun_2017_ICCV}. Notably, joint learning with 2D and 3D data has shown promising performance in-the-wild owing to large-scale real-world 2D datasets. We seek motivation from the recently published joint learning framework of Zhou et al.~\cite{Zhou_2017_ICCV} and present a novel structure-aware loss function to facilitate training of Deep ConvNet architectures using both 2D and 3D data to accurately predict the 3D pose from a single RGB image. The proposed loss function is applicable to 2D images during training and ensures that the predicted 3D pose does not violate anatomical constraints, namely joint-angle limits and left-right symmetry of the human body. We also present a simple learnable temporal pose model for pose-estimation from videos. The resulting system outperforms the best published system by 12\% on both Human3.6M and MPI-INF-3DHP and runs at 30fps on commodity GPU. Our proposed structure-aware loss is inspired by anatomical constraints that govern the human body structure and motion. We exploit the fact that certain body-joints cannot bend beyond an angular range; e.g. the knee(elbow) joints cannot bend forward(backward). We also make use of left-right symmetry of human body and penalize unequal corresponding pairs of left-right bone lengths. 
Lastly, we also use the bone-length ratio priors from~\cite{Zhou_2017_ICCV} that enforces certain pairs of bone-lengths to be constant. It is important to note that the illegal-angle and left-right symmetry constraints are complementary to the bone-length ratio prior, and we show that they perform better too. One of our contributions lies in formulating a loss function to capture joint-angle limits from an inferred 3D pose. We present the visualization of the loss surfaces of the proposed losses to facilitate a deeper understanding of their workings. The three aforementioned structure losses are used to train our \emph{Structure-Aware PoseNet}. Joint-angle limits and left-right symmetry have been used previously in the form of optimization functions~\cite{akhter2015pose,HERDA2005189,bogo2016keep}. To the best of our knowledge we are the first ones to exploit these two constraints, in the form of differentiable and tractable loss functions, to train ConvNets directly. Our structure-aware loss function outperforms the published state-of-the-art in terms of Mean-Per-Joint-Position-Error ( MPJPE ) by 7\% and 2\% on Human3.6M and MPI-INF-3DHP, respectively. We further propose to learn a temporal motion model to exploit cues from sequential frames of a video to obtain anatomically coherent and smoothly varying poses, while preserving the realism across different activities. We show that a moving-window fully-connected network that takes previous $N$ poses performs extremely well at capturing temporal as well as anatomical cues from pose sequences. With the help of carefully designed controlled experiments we show the temporal and anatomical cues learned by the model to facilitate better understanding. We report an additional 7\% improvement on Human3.6M with the use of our temporal model and demonstrate real-time performance of the full pipeline at 30fps. Our final model improves the published state-of-the-art on Human3.6M~\cite{h36m_pami} and MPI-INF-3DHP~\cite{mono-3dhp2017} by 11.8\% and 12\%, respectively. \vspace{-1em} \section{Related Work} \label{sec:relatedWork} This section presents a brief summary of the past work related to human pose estimation from three viewpoints: (1) ConvNet architectures and training strategies, (2) Utilizing structural constraints of human bodies, and (3) 3D pose estimation from video. The reader is referred to~\cite{SARAFIANOS20161} for a detailed review of the literature. \textbf{ConvNet architectures:} Most existing ConvNet based approaches either directly regress 3D poses from the input image~\cite{Sun_2017_ICCV,li20143d,zhou2016deep,zhou2016sparseness} or infer 3D from 2D pose in a two-stage approach~\cite{Tome_2017_CVPR,Zhou_2017_ICCV,VNect_SIGGRAPH2017,Moreno-Noguer_2017_CVPR,Lin_2017_CVPR}. Some approaches make use of volumetric-heatmaps~\cite{Pavlakos_2017_CVPR}, some define a pose using bones instead of joints~\cite{Sun_2017_ICCV}, while the approach in~\cite{VNect_SIGGRAPH2017} directly regresses for 3D location maps. The use of 2D-to-3D pipeline enables training with large-scale in-the-wild 2D pose datasets~\cite{andriluka14cvpr,MSCOCO:2014}. A few approaches use statistical priors~\cite{zhou2016sparseness,akhter2015pose} to lift 2D poses to 3D. Chen et al.~\cite{Chen_2017_CVPR} and Yasin et al.~\cite{yasin2016dual} use a pose library to retrieve the nearest 3D pose given the corresponding 2D pose prediction. 
Recent ConvNet based approaches~\cite{VNect_SIGGRAPH2017,Rogez_2017_CVPR,Zhou_2017_ICCV,Sun_2017_ICCV,zhou2016sparseness,Pavlakos_2017_CVPR} have reported substantial improvements in real-world setting by pre-training or joint training of their 2D prediction modules, but it still remains an open problem. \textbf{Utilizing structural information:} The structure of the human skeleton is constrained by fixed bone lengths, joint angle limits, and limb interpenetration constraints. Some approaches use these constraints to infer 3D from 2D joint locations. Akhter and Black~\cite{akhter2015pose} learn pose-dependent joint angle limits for lifting 2D poses to 3D via an optimization problem. Ramakrishna et al.~\cite{varunECCV2012} solve for anthropometric constraints in an activity-dependent manner. Recently, Moreno~\cite{Moreno-Noguer_2017_CVPR} proposed to estimate the 3D inter-joint distance matrix from 2D inter-joint distance matrix using a simple neural network architecture. These approaches do not make use of rich visual cues present in images and rely on the predicted 2D pose that leads to sub-optimal results. Sun et al.~\cite{Sun_2017_ICCV} re-parameterize the pose presentation to use bones instead of joints and propose a structure-aware loss. But, they do not explicitly seek to penalize the feasibility of inferred 3D pose in the absence of 3D ground-truth data. Zhou et al.~\cite{Zhou_2017_ICCV} introduce a weakly-supervised framework for joint training with 2D and 3D data with the help of a geometric loss function to exploit the consistency of bone-length ratios in human body. We further strengthen this weakly-supervised setup with the help of joint-angle limits and left-right symmetry based loss functions for better training. Lastly, there are methods that recover both shape and pose from a 2D image via a mesh-fitting strategy. Bogo et al.~\cite{bogo2016keep} penalize body-part interpenetration and illegal joint angles in their objective function for finding SMPL~\cite{DBLP:journals/tog/LoperM0PB15} based shape and pose parameters. These approaches are mostly offline in nature due to their computational requirements, while our approach runs at 30fps. \textbf{Utilizing temporal information:} Direct estimation of 3D pose from disjointed images leads to temporally incoherent output with visible jitters and varying bone lengths. 3D pose estimates from a video can be improved by using simple filters or temporal priors. Mehta et al.~\cite{VNect_SIGGRAPH2017} propose a real-time approach which penalizes acceleration and depth velocity in an optimization step after generating 3D pose proposals using a ConvNet. They also smooth the output poses with the use of a tunable low-pass filter~\cite{casiez20121} optimized for interactive systems. Zhou et al.~\cite{zhou2016sparseness} introduce a first order smoothing prior in their temporal optimization step. Alldieck et al.~\cite{Alldieck2017} exploit 2D optical flow features to predict 3D poses from videos. Wei et al.~\cite{Wei:2010} exploit physics-based constraints to realistically interpolate 3D motion between video keyframes. There have also been attempts to learn motion models. Urtasun et al.~\cite{urtasun2006temporal} learn activity specific motion priors using linear models while Park et al.~\cite{Park:2006} use a motion library to find the nearest motion given a set of 2D pose predictions followed by iterative fine-tuning. The motion models are activity-specific whereas our approach is generic. 
Recently, Lin et al.~\cite{Lin_2017_CVPR} used recurrent neural networks to learn temporal dependencies from the intermediate features of their ConvNet based architecture. In a similar attempt, Coskun et al.~\cite{Coskun_2017_ICCV} use LSTMs to design a Kalman filter that learns human motion model. In contrast with the aforementioned approaches, our temporal model is simple yet effectively captures short-term interplay of past poses and predicts the pose of the current frame in a temporally and anatomically consistent manner. It is generic and does not need to be trained for activity-specific settings. We show that it learns complex, non-linear inter-joint dependencies over time; e.g. it learns to refine wrist position, for which the tracking is least accurate, based on the past motion of elbow and shoulder joints. \section{Background and Notations} \label{sec:background} This section introduces the notations used in this article and also provides the required details about the weakly-supervised framework of Zhou et al.~\cite{Zhou_2017_ICCV} for joint learning from 2D and 3D data. A 3D human pose \(P = \{p_1, p_2, \ldots, p_k \}\) is defined by the positions of $k$ = 16 body joints in Euclidean space. These joint positions are defined relative to a root joint, which is fixed as the pelvis. The input to the pose estimation system could be a single RGB image or a continuous stream of RGB images \(I = \{ \ldots, I_{i-1}, I_i\}\). The $i^{th}$ joint $p_i$ is the coordinate of the joint in a 3D Euclidean space i.e. $p_i = (p_i^x, p_i^y, p_i^z )$. Throughout this article inferred variables are denoted with a $\tilde{*}$ and ground-truth is denoted with a $\hat{*}$, therefore, an inferred joint will be denoted as $\tilde{p}$ and ground-truth as $\hat{p}$. The 2D pose can be expressed with only the x,y-coordinates and denoted as $p^{xy} = (p^x, p^y)$; the depth-only joint location is denoted as $p^z = (p^z)$. The $i^{th}$ training data from a 3D annotated dataset consists of an image $I_i$ and corresponding joint locations in 3D, $\hat{P}_i$. On the other hand, the 2D data has only the 2D joint locations, $\hat{P}_i^{xy}$. Armed with these notations, below we describe the weakly-supervised framework for joint learning from~\cite{Zhou_2017_ICCV}. \begin{figure}[!h] \centering \includegraphics[width=1\linewidth]{architecture.jpg} \caption{ A schematic of the network architecture. The stacked hourglass module is trained using the standard Euclidean loss $\mathcal{L}_{HM}$ against ground truth heatmaps. Whereas, the depth regressor module is trained on either $\mathcal{L}^z_{3D}$ or $\mathcal{L}^z_{2D}$ depending on whether the ground truth depth $\hat{P}^z$ is available or not.} \vspace{-1em} \label{fig:architecture} \end{figure} Due to the absence of in-the-wild 3D data, the pose estimation systems learned using the controlled or synthetic 3D data fail to generalize well to in-the-wild settings. Therefore, Zhou et al.~\cite{Zhou_2017_ICCV} proposed a weakly-supervised framework for joint learning from both 2D and 3D annotated data. Joint learning exploits the 3D data for depth prediction and the in-the-wild 2D data for better generalization to real-world scenario. The overall schematic of this framework is shown in Fig.~\ref{fig:architecture}. It builds upon the stacked hourglass architecture~\cite{NewellYD16} for 2D pose estimation and adds a depth-regression sub-network on top of it. 
The stacked hourglass is trained to output the 2D joint locations, $\tilde{P}^{xy}$ in the image coordinate with the use of standard Euclidean loss between the predicted and the ground-truth joint-location heatmaps, please refer to~\cite{NewellYD16} for more details. The depth-regression sub-network, a series of four residual modules~\cite{he2016deep} followed by a fully connected layer, takes a combination of different feature maps from stacked hourglass and outputs the depth of each joint i.e. $\tilde{P}^z$. Standard Euclidean loss $\mathcal{L}_{e}(\tilde{P}^z, \hat{P}^z)$ is used for the 3D annotated data-sample. On the other hand, a weak-supervision in the form of a geometric loss function, $\mathcal{L}_{g}(\tilde{P}^{z}, \hat{P}^{xy})$, is used to train with a 2D-only annotated data-sample. The geometric loss acts as a regularizer and penalizes the pose configurations that violate the consistency of bone-length ratio priors. Please note that the ground-truth xy-coordinates, $\hat{P}^{xy}$, with inferred depth, $\tilde{P}^z$ are used in $\mathcal{L}_g$ to make the training simple. The geometric loss acts as an effective regularizer for the joint training and improves the accuracy of 3D pose estimation under controlled and in-the-wild test conditions, but it ignores certain other \emph{strong} anatomical constraints of the human body. In the next section, we build upon the discussed weakly-supervised framework and propose a novel structure-aware loss that captures richer anatomical constraints and provides stronger weakly-supervised regularization than the geometric loss. \section{Proposed Approach} This section introduces two novel anatomical loss functions and shows how to use them in the weakly-supervised setting to train with 2D annotated data-samples. Next, the motivation and derivation of the proposed losses and the analyses of the loss surfaces is presented to facilitate a deeper understanding and highlight the differences from the previous approaches. Lastly, a learnable temporal motion model is proposed with its detailed analysis through carefully designed controlled experiments. \begin{figure}[!h] \centering \includegraphics[width=1\linewidth]{eccv_pipeline_schema.jpg} \caption{Overall pipeline of our method: We sequentially pass the video frames to a ConvNet that produces 3D pose outputs (one at a time). Next, the prediction is temporally refined by passing a context of past N frames along with the current frame to a temporal model. Finally, skeleton fitting may be performed as an optional step depending upon the application requirement.} \vspace{-1em} \label{fig:pipeSchema} \end{figure} Fig.~\ref{fig:pipeSchema} shows our complete pipeline for 3D pose estimation. It consists of \begin{enumerate} \item \emph{\bf Structure-Aware PoseNet} or \emph{\bf SAP-Net}: A single-frame based 3D pose-estimation system that takes a single RGB image $I_i$ and outputs the inferred 3D pose $\tilde{P}_i$. \item \emph{\bf Temporal PoseNet} or \emph{\bf TP-Net}: A learned temporal motion model that can take a continuous sequence of inferred 3D poses $\{\ldots, \tilde{P}_{i-2}, \tilde{P}_{i-1}\}$ and outputs a temporally harmonized 3D pose $\Bar{P}_i$. \item \emph{\bf Skeleton fitting}: Optionally, if the actual skeleton information of the subject is also available, we can carry out a simple skeleton fitting step which preserves the directions of the bone vectors. 
\end{enumerate}
\subsection{Structure-Aware PoseNet or SAP-Net}
SAP-Net uses the network architecture shown in Fig.~\ref{fig:pipeSchema}, which is taken from~\cite{Zhou_2017_ICCV}. This network choice allows joint learning with both 2D and 3D data in a weakly-supervised fashion as described in Section~\ref{sec:background}. A 3D annotated data-sample provides a strong supervision signal and drives the inferred depth towards a unique solution. On the other hand, weak supervision, in the form of anatomical constraints, imposes a penalty on invalid solutions and therefore restricts the set of solutions. Hence, the stronger and more comprehensive the set of constraints, the smaller and better the set of solutions. We seek motivation from the discussion above and propose to use loss functions derived from joint-angle limits and the left-right symmetry of the human body, in addition to the bone-length ratio priors~\cite{Zhou_2017_ICCV}, for weak supervision. Together, these three constraints are stronger than the bone-length ratio prior alone and lead to better 3D pose configurations. For example, the bone-length ratio prior will consider an elbow bent backwards as valid if the bone ratios are not violated, but the joint-angle limits will invalidate it. Similarly, the symmetry loss eliminates configurations with asymmetric left-right halves in the inferred pose. Next, we describe and derive differentiable loss functions for the proposed constraints. \vspace{-1em}
\subsubsection{Illegal Angle Loss ($\mathcal{L}_a$):}
Most body joints are constrained to move within certain angular limits only. Our illegal angle loss, $\mathcal{L}_a$, encapsulates this constraint for the knee and elbow joints and restricts their bending beyond $180^{\circ}$. For a given 2D pose $P^{xy}$, there exist multiple possible 3D poses, and $\mathcal{L}_a$ penalizes the 3D poses that violate the knee or elbow joint-angle limits. To exploit such constraints, some methods~\cite{HERDA2005189,akhter2015pose,ChenNie2013TIP} use non-differentiable functions to infer the legality of a pose. Unfortunately, the non-differentiability restricts their direct use in training a neural network. Other methods resort to representing a pose in terms of rotation matrices or quaternions for imposing joint-angle limits~\cite{akhter2015pose,Wei:2010}, which affords differentiability. However, this imposition is non-trivial when representing poses in terms of joint positions, which are a more natural representation for ConvNets. Our novel formulation of illegal-angle discovery resolves the ambiguity involved in differentiating between the internal and external angle of a joint for a 3D joint-location based pose representation. Using our formulation, and keeping in mind the requirement of differentiability, we formulate $\mathcal{L}_a$ to be used directly as a loss function. We illustrate our formulation with the help of Fig.~\ref{fig:angLoss} and explain its derivation for the right elbow joint. Subscripts $n$, $s$, $e$, $w$, $k$ denote the neck, shoulder, elbow, wrist and knee joints in that order, and superscripts $l$ and $r$ represent the left and right body side, respectively. We define \(\mathbf{v_{sn}^r} = P_s^r - P_n\), \(\mathbf{v_{es}^r} = P_e^r - P_s^r \) and \(\mathbf{v_{we}^r} = P_w^r - P_e^r \) as the collar-bone, the upper-arm and the lower-arm, respectively (see Fig.~\ref{fig:angLoss}). Now, \(\mathbf{n_s^r} = \mathbf{v_{sn}^r} \times \mathbf{v_{es}^r}\) is the normal to the plane defined by the collar-bone and the upper-arm.
For the elbow joint to be legal, \(\mathbf{v_{we}^r}\) must have a positive component in the direction of $\mathbf{n_s^r}$, i.e. \(\mathbf{n_s^r} \cdot \mathbf{v_{we}^r} \) must be positive. We do not incur any penalty when the joint angle is legal and define \(E_e^r = \min(\mathbf{n_s^r} \cdot \mathbf{v_{we}^r}, 0)\) as a measure of implausibility. Note that this case is opposite for the right knee and left elbow joints (as shown by the right hand rule) and requires $E_k^r$ and $E_e^l$ to be positive for the illegal case. We exponentiate $E$ to strongly penalize large deviations beyond legality. $\mathcal{L}_a$ can now be defined as: \\ \begin{equation} \label{eq:langle} \mathcal{L}_a = -E_e^r e^{-E_e^r} + E_e^l e^{E_e^l} + E_k^r e^{E_k^r} - E_k^l e^{-E_k^l} \end{equation} All the terms in the loss are functions of bone vectors which are, in turn, defined in terms of the inferred pose. Therefore, $\mathcal{L}_a$ is differentiable. Please refer to the supplementary material for more details. \begin{figure}[!tb] \centering \includegraphics[width=0.4\linewidth]{skinned_angle_loss.jpg} \caption{Illustration of Illegal Angle loss: For the elbow joint angle to be legal, the lower-arm must project a positive component along $\mathbf{n_s^r}$ (normal to collarbone-upperarm plane) , i.e. $\mathbf{n_s^r} \cdot \mathbf{v_{we}} \geq 0$. Note that we only need 2D annotated data to train our model using this formulation.} \vspace{-1em} \label{fig:angLoss} \end{figure} \vspace{-1em} \subsubsection{Symmetry Loss ($\mathcal{L}_s$):} It is simple yet heavily constrains the joint depths, especially when the inferred depth is ambiguous due to occlusions. $\mathcal{L}_s$ is defined as the difference in lengths of left/right bone pairs. Let $\mathcal{B}$ be the set of all the bones on right half of the body except torso and head bones. Also, let $BL_b$ represent the bone-length of bone $b$. We define $L_s$ as\\ \begin{equation} \mathcal{L}_s = \sum_{b \in \mathcal{B}} \vert\vert{ BL_b - BL_{C(b)}}\vert\vert_2 \end{equation} where $C(.)$ indicates the corresponding left side bone. Finally, our structure-aware loss $\mathcal{L}^z_{SA}$ is defined as weighted sum of illegal-angle loss $\mathcal{L}^z_{a}$, symmetry-loss $\mathcal{L}^z_{s}$ and geometric loss $\mathcal{L}^z_{g}$ from~\cite{Zhou_2017_ICCV} - \begin{equation} \label{eq:l2d} \begin{split} \mathcal{L}^z_{SA}(\tilde{P}^z, \hat{P}^{xy}) & = \lambda_{a} \mathcal{L}_{a}(\tilde{P}^z, \hat{P}^{xy}) + \lambda_{s} \mathcal{L}_{s}(\tilde{P}^z, \hat{P}^{xy}) + \lambda_g\mathcal{L}_{g}(\tilde{P}^z, \hat{P}^{xy}) \end{split} \end{equation} \vspace{-3em} \subsubsection{Loss Surface Visualization:} Here we take help of local loss surface visualization to appreciate how the proposed losses are pushing invalid configurations towards their valid counterparts. In order to obtain the loss surfaces we take a random pose $P$ and vary the $(x_{le},z_{le})$ coordinates of left elbow over an $XZ$ grid while keeping all other joint locations fixed. Then, we evaluate $\mathcal{L}^z_{SA}$ at different $(x,z)$ locations in the $XZ$ grid to obtain the loss, which is plotted as surfaces in Fig.~\ref{fig:surfEnergy}. We plot loss surfaces with only 2D-location loss, 2D-location+symmetry loss, 2D-location+symmetry+illegal angle loss and 3D-annotation based Euclidean loss to show the evolution of the loss surfaces under different anatomical constraints. 
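As an illustration of how such a surface can be generated, the following Python sketch evaluates the illegal-angle term for the right elbow (as derived above) together with the symmetry term over an $XZ$ grid of elbow positions. The joint coordinates, the bone-pair subset and the weights are toy values chosen for illustration, and the 2D-location and geometric terms are omitted for brevity; this is not the actual training or plotting code.
\begin{verbatim}
import numpy as np

# Toy 3D pose: joint positions in metres (illustrative values only).
pose = {
    "neck":       np.array([0.00, 1.50, 0.00]),
    "r_shoulder": np.array([0.20, 1.45, 0.00]),
    "r_elbow":    np.array([0.45, 1.20, 0.05]),
    "r_wrist":    np.array([0.55, 0.95, 0.10]),
    "l_shoulder": np.array([-0.20, 1.45, 0.00]),
    "l_elbow":    np.array([-0.45, 1.20, 0.05]),
    "l_wrist":    np.array([-0.55, 0.95, 0.10]),
}

def illegal_angle_term(pose):
    """E_e^r = min(n_s^r . v_we^r, 0) for the right elbow, exponentiated
    as in the corresponding term of L_a (zero when the angle is legal)."""
    v_sn = pose["r_shoulder"] - pose["neck"]      # collar-bone
    v_es = pose["r_elbow"] - pose["r_shoulder"]   # upper-arm
    v_we = pose["r_wrist"] - pose["r_elbow"]      # lower-arm
    n_s = np.cross(v_sn, v_es)                    # normal to collarbone-upperarm plane
    E = min(np.dot(n_s, v_we), 0.0)
    return -E * np.exp(-E)

def symmetry_term(pose):
    """|BL_b - BL_C(b)| summed over a subset of left/right bone pairs."""
    pairs = [(("r_shoulder", "r_elbow"), ("l_shoulder", "l_elbow")),
             (("r_elbow", "r_wrist"),   ("l_elbow", "l_wrist"))]
    loss = 0.0
    for (r0, r1), (l0, l1) in pairs:
        loss += abs(np.linalg.norm(pose[r1] - pose[r0])
                    - np.linalg.norm(pose[l1] - pose[l0]))
    return loss

# Evaluate the weak-supervision penalty over an XZ grid of right-elbow
# positions, with all other joints held fixed (cf. the loss surfaces).
xs, zs = np.linspace(0.1, 0.8, 50), np.linspace(-0.4, 0.4, 50)
surface = np.zeros((len(xs), len(zs)))
for i, x in enumerate(xs):
    for j, z in enumerate(zs):
        pose["r_elbow"] = np.array([x, 1.20, z])
        # Illustrative weights for the two terms.
        surface[i, j] = 0.03 * illegal_angle_term(pose) + 0.05 * symmetry_term(pose)
\end{verbatim}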
From the figure it is clear that both the symmetry loss and the illegal angle loss morph the loss surface to facilitate moving away from illegal joint configurations.
\begin{figure*}[!h] \centering \includegraphics[width=1\textwidth]{ECCVSurfHorizontal.jpg} \caption{\textbf{Loss Surface Evolution}: Plots (a) to (d) show the local loss surfaces for (a) the 2D-location loss, (b) the 2D-location+symmetry loss, (c) the 2D-location+symmetry+illegal-angle loss and (d) the full 3D-annotation Euclidean loss. The points (1), (2) and (3) highlighted on the plots correspond to the 3D poses shown in (f), (g) and (h), with (3) being the ground-truth depth. The illegal angle penalty increases the loss for pose (1), which has the elbow bent backwards. Pose (2) has a legal joint angle, but the symmetry is lost. Pose (3) is correct. We can see that without the angle loss, the losses at (1) and (3) are equal and we cannot discern between the two points.} \vspace{-1em} \label{fig:surfEnergy} \end{figure*}
\vspace{-2em} \subsection{Temporal PoseNet or TP-Net}
\begin{figure*}[!h] \centering \includegraphics[width = \textwidth]{temporalHM.jpg} \caption{(a) The variation of the sensitivity of the output pose w.r.t.\ perturbations in the input poses of TP-Net, from $t$=0 to $t$=-19. (b) Strong structural correlations are learned from the pose input at the $t$=0 frame. (c) Past frames show smaller but more complex structural correlations. The self-correlations (diagonal elements) are an order of magnitude larger and the colormap range has been capped for better display.} \label{fig:tempInf} \vspace{-2em} \end{figure*}
In this section we propose to learn a temporal pose model, referred to as Temporal PoseNet, to exploit the temporal consistency and motion cues present in video sequences. Given independent pose estimates from SAP-Net, we seek to exploit the information from a set of adjacent pose-estimates $\mathbf{P_{adj}}$ to improve the inference for the required pose $P$. We propose to use a simple two-layer fully-connected network with 4096 hidden neurons and ReLU non-linearity that takes a fixed number, $N=20$, of adjacent poses as inputs and outputs the required pose $\Bar{P}$. The adjacent pose vectors are simply flattened and concatenated to make a single vector that goes into TP-Net, which is trained using a standard $L_2$ loss against the ground-truth pose. Despite being extremely simple in nature, we show that it outperforms more complex variants such as RNNs (see Table~\ref{tab:lstmComp}). Why? We believe this happens because the number of possible variations of intricate human motion grows with the size of the time window, which perhaps makes additional information from too far back in time useless or at least difficult to utilize. Therefore, a dense network with a limited context can effectively capture the useful consistency and motion cues. In order to visualize the temporal and structural information exploited by TP-Net, we carried out a simple sensitivity analysis in which we randomly perturbed the joint locations of the pose $P_t$ that is $t$ time-steps away from the TP-Net output $\Bar{P}$ and plotted the sensitivity for time-steps $t=-1$ to $t=-19$ for all joints in Fig.~\ref{fig:tempInf}(a). We can observe that poses beyond 5 time-steps (or a $200ms$ time window) do not have much impact on the predicted pose. Similarly, Fig.~\ref{fig:tempInf}(b) shows the structural correlations the model has learned just within the current frame. TP-Net learns to rely on the locations of the hips and shoulders to refine almost all the other joints.
We can also observe that the child joints are correlated with their parent joints, e.g., the wrists are strongly correlated with the elbows, and the shoulders are strongly correlated with the neck. Fig.~\ref{fig:tempInf}(c) shows the sensitivity to the input pose at $t$ = -1. Here, the correlations learned from the past are weak, but exhibit a richer pattern. The sensitivity of the child joints extends further upwards into the kinematic chain, e.g., the wrist shows higher correlations with the elbow, shoulder and neck for the $t$ = -1 frame. Therefore, we can safely conclude that TP-Net learns complex structural and motion cues despite being so simple in nature. We hope this finding will be useful for future research in this direction. Since TP-Net takes as input a fixed number of adjacent poses, we can choose to take all the adjacent poses before the required pose, referred to as the \emph{online} setting, or we can choose to have $N/2=10$ adjacent poses on either side of the required pose, referred to as the \emph{semi-online} setting. Since our entire pipeline runs at 30fps, even the semi-online setting runs with a lag of only 10 frames. From Fig.~\ref{fig:tempInf} we observe that TP-Net can learn complex, non-linear inter-joint dependencies over time; e.g., it learns to refine the wrist position, for which the tracking is least accurate, based on the past motion of the elbow and shoulder joints.
\subsection{Training and Implementation details} \label{sec:training}
While training SAP-Net, both 2D samples, from MPII2D, and 3D samples, from either of the 3D datasets, were consumed in equal proportion in each iteration with a minibatch size of 6. In the \emph{first stage} we obtain a strong 2D pose estimation network by pre-training the hourglass modules of SAP-Net on MPII and Human3.6M using SGD as in~\cite{NewellYD16}. Training with weakly-supervised losses requires a warm start~\cite{zhou2017brief}; therefore, in the \emph{second stage} we train the 3D depth module with only 3D annotated data-samples for 240k iterations so that it learns to output reasonable poses before switching on weak supervision. In the \emph{third stage} we train SAP-Net with $\mathcal{L}_g$ and $\mathcal{L}_a$ for 160k iterations with $\lambda_a = 0.03$ and $\lambda_g = 0.03$ and a learning rate of $2.5\times10^{-4}$. Finally, in the \emph{fourth stage} we introduce the symmetry loss, $\mathcal{L}_s$, with $\lambda_s = 0.05$ and a learning rate of $2.5\times10^{-5}$. TP-Net was trained using the Adam optimizer~\cite{kingma2014adam} for 30 epochs on the pose predictions generated by the fully-trained SAP-Net. In our experiments, we found that a context of $N = 20$ frames yields the best improvement in MPJPE (Fig.~\ref{fig:tempInf}) and we use that in all our experiments. It took approximately two days to train SAP-Net and one hour to train TP-Net using one NVIDIA 1080 Ti GPU. SAP-Net runs at an average testing time of $20ms$ per image while TP-Net adds negligible delay (\textless1ms).
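For concreteness, a minimal PyTorch-style sketch of TP-Net as described above is given below. The interpretation as two hidden layers of 4096 units plus a linear output layer, the absence of input normalisation and the training-step details are illustrative assumptions rather than the released implementation.
\begin{verbatim}
import torch
import torch.nn as nn

N_FRAMES = 20           # temporal context (past, or 10 past + 10 future)
N_JOINTS = 16
POSE_DIM = 3 * N_JOINTS

class TPNet(nn.Module):
    """Fully-connected network with 4096-unit hidden layers and ReLU,
    mapping N_FRAMES flattened poses to one refined pose."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FRAMES * POSE_DIM, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, 4096),
            nn.ReLU(inplace=True),
            nn.Linear(4096, POSE_DIM),   # refined pose for the current frame
        )

    def forward(self, poses):
        # poses: (batch, N_FRAMES, POSE_DIM) -> flatten and concatenate
        return self.net(poses.flatten(start_dim=1))

model = TPNet()
criterion = nn.MSELoss()                          # standard L2 loss
optimizer = torch.optim.Adam(model.parameters())

# One illustrative training step on random tensors standing in for
# SAP-Net outputs and the ground-truth pose.
poses = torch.randn(8, N_FRAMES, POSE_DIM)
target = torch.randn(8, POSE_DIM)
loss = criterion(model(poses), target)
optimizer.zero_grad()
loss.backward()
optimizer.step()
\end{verbatim}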
\begin{table*}[t] \centering \begin{tabular}{l c c c c c c c c } \hline Method & Direction & Discuss & Eat & Greet & Phone & Pose & Purchase & Sit \\ \hline \hline Zhou~\cite{zhou2016sparseness} & 68.7 & 74.8 & 67.8 & 76.4 & 76.3 & 84.0 & 70.2 & 88.0 \\ Jahangiri~\cite{Jahangiri:ICCV2017} & 74.4 & 66.7 & 67.9 & 75.2 & 77.3 & 70.6 & 64.5 & 95.6 \\ Lin~\cite{Lin_2017_CVPR} & 58.0 & 68.2 & 63.2 & 65.8 & 75.3 & 61.2 & 65.7 & 98.6 \\ Mehta~\cite{mono-3dhp2017} & 57.5 & 68.6 & 59.6 & 67.3 & 78.1 & 56.9 & 69.1 & 98.0 \\ Pavlakos~\cite{Pavlakos_2017_CVPR} & 58.6 & 64.6 & 63.7 & 62.4 & 66.9 & 57.7 & 62.5 & 76.8 \\ Zhou~\cite{Zhou_2017_ICCV} & 54.8 & 60.7 & 58.2 & 71.4 & 62.0 & 53.8 & 55.6 & 75.2 \\ Sun~\cite{Sun_2017_ICCV} & 52.8 & 54.8 & 54.2 & 54.3 & 61.8 & 53.1 & 53.6 & 71.7 \\ \hline Ours(SAP-Net) & 46.9 & 53.8 & 47.0 & 52.8 & 56.9 & 45.2 & 48.2 & 68.0 \\ Ours(TP-Net) & \textbf{44.8} & \textbf{50.4} & \textbf{44.7} & \textbf{49.0} & \textbf{52.9} & \textbf{43.5} & \textbf{45.5} & \textbf{63.1} \\ \hline \hline Method & SitDown & Smoke & Photo & Wait & Walk & WalkDog & WalkPair & Avg \\ \hline \hline Zhou~\cite{zhou2016sparseness} & 113.8 & 78.0 & 78.4 & 89.1 & 62.6 & 75.1 & 73.6 & 79.9 \\ Jahangiri~\cite{Jahangiri:ICCV2017} & 127.3 & 79.6 & 79.1 & 73.4 & 67.4 & 71.8 & 72.8 & 77.6 \\ Lin~\cite{Lin_2017_CVPR} & 127.7 & 70.4 & 93.0 & 68.2 & 50.6 & 72.9 & 57.7 & 73.1 \\ Mehta~\cite{mono-3dhp2017} & 117.5 & 69.5 & 82.4 & 68.0 & 55.3 & 76.5 & 61.4 & 72.9 \\ Pavlakos~\cite{Pavlakos_2017_CVPR} & 103.5 & 65.7 & 70.7 & 61.6 & 56.4 & 69.0 & 59.5 & 66.9 \\ Zhou~\cite{Zhou_2017_ICCV}& 111.6 & 64.1 & 65.5 & 66.0 & 51.4 & 63.2 & 55.3 & 64.9 \\ Sun~\cite{Sun_2017_ICCV}& \textbf{86.7} & 61.5 & 67.2 & 53.4 & 47.1 & 61.6 & 53.4 & 59.1 \\ \hline Ours(SAP-Net) & 94.0 & 55.7 & 63.6 & 51.6 & 40.3 & 55.4 & 44.3 & 55.5 \\ Ours(TP-Net) & 87.3 & \textbf{51.7} & \textbf{61.4} & \textbf{48.5} & \textbf{37.6} & \textbf{52.2} & \textbf{41.9} & \textbf{52.1} \\ \hline \end{tabular} \vskip 2mm \caption{Comparative evaluation of our model on Human 3.6 following Protocol 1. The evaluations were performed on subjects 9 and 11 using ground truth bounding box crops and the models were trained only on Human3.6 and MPII 2D pose datsets.} \label{tab: h36mp1} \vspace{-2em} \end{table*} \begin{table*}[!h] \fontsize{7}{8}\selectfont \centering \setlength\tabcolsep{1pt} \begin{tabular}{lcccccccccccccccc} \hline Method & Direct. & Discuss & Eat & Greet & Phone & Pose & Purch. 
& Sit & \shortstack{Sit\\Down} & Smoke & Photo & Wait & Walk & \shortstack{Walk\\Dog} & \shortstack{Walk\\Pair} & Avg \\ \hline \hline Yasin~\cite{yasin2016dual} & 88.4 & 72.5 & 108.5 & 110.2 & 97.1 & 91.6 & 107.2 & 119.0 & 170.8 & 108.2 & 142.5 & 86.9 & 92.1 & 165.7 & 102.0 & 108.3 \\ Rogez~\cite{rogezNIPS} & - & - & - & - & - & - & - & - & - & - & - & - & - & - & - & 88.1 \\ Chen~\cite{Chen_2017_CVPR} & 71.6 & 66.6 & 74.7 & 79.1 & 70.1 & 67.6 & 89.3 & 90.7 & 195.6 & 83.5 & 93.3 & 71.2 & 55.7 & 85.9 & 62.5 & 82.7 \\ Nie~\cite{Nie_2017_ICCV} & 62.8 & 69.2 & 79.6 & 78.8 & 80.8 & 72.5 & 73.9 & 96.1 & 106.9 & 88.0 & 86.9 & 70.7 & 71.9 & 76.5 & 73.2 & 79.5 \\ Moreno~\cite{Moreno-Noguer_2017_CVPR} & 67.4 & 63.8 & 87.2 & 73.9 & 71.5 & 69.9 & 65.1 & 71.7 & 98.6 & 81.3 & 93.3 & 74.6 & 76.5 & 77.7 & 74.6 & 76.5 \\ Zhou~\cite{zhou2016sparseness} & 47.9 & 48.8 & 52.7 & 55.0 & 56.8 & 49.0 & 45.5 & 60.8 & 81.1 & 53.7 & 65.5 & 51.6 & 50.4 & 54.8 & 55.9 & 55.3 \\ Sun~\cite{Sun_2017_ICCV} & 42.1 & 44.3 & 45.0 & 45.4 & 51.5 & 43.2 & 41.3 & 59.3 & 73.3 & 51.0 & 53.0 & 44.0 & 38.3 & 48.0 & 44.8 & 48.3 \\ \hline Ours(SAP-Net) & 32.8 & 36.8 & 42.5 & 38.5 & 42.4 & 35.4 & 34.3 & 53.6 & 66.2 & 46.5 & 49.0 & 34.1 & 30.0 & 42.3 & 39.7 & 42.2 \\ Ours (TP-Net) & \textbf{28.0} & \textbf{30.7} & \textbf{39.1} & \textbf{34.4} & \textbf{37.1} & \textbf{28.9} & \textbf{31.2} & \textbf{39.3} & \textbf{60.6} & \textbf{39.3} & \textbf{44.8} & \textbf{31.1} & \textbf{25.3} & \textbf{37.8} & \textbf{28.4} & \textbf{36.3} \\ \hline \end{tabular} \vskip 2mm \caption{Comparative evaluation of our model on Human 3.6M using Protocol 2. The models were trained only on Human3.6M and MPII 2D datasets.} \label{tab:h36mp2} \vspace{-2em} \end{table*} \begin{table}[!bht] \centering \footnotesize \parbox{.40\linewidth}{ \begin{tabular}{ l l c c c } {\bf Method} & {\bf MPJE}\\ \hline Zhou w/o $\mathcal{L}_g$~\cite{Zhou_2017_ICCV}& 65.69\\ + Geometry loss & 64.90\\ \hline Baseline & 58.50\\ + Geometry loss & 58.45\\ + Illegal Angle loss & 56.20\\ + Symmetry loss & 55.51\\ + TP-Net real-time & 52.10\\ + TP-Net bi-directional & \textbf{51.10}\\ \hline \end{tabular} \vskip 2mm \caption{Ablation of different loss terms on Human3.6M using Protocol 1.} \label{tab:ablation} } \hspace{1em} \parbox{.45\linewidth}{ \begin{tabular}{l l c c c } {\bf Model} & \multicolumn{3}{c}{\bf Number of input frames} \\ \hline & 4 & 10 & 20\\ \hline LSTM & - & - & 54.05 \\ Bi-LSTM & 53.86 & 53.72 & 53.65 \\ TP-Net (Ours) & 53.0 & 52.24 & 52.1 \\ Bi-TP-Net (Ours) & 52.4 & 51.36 & \textbf{51.1} \\ \hline \end{tabular} \vskip 2mm \caption{Comparison of different temporal models considered with varying context sizes. LSTM nets model the entire past context till time t. Bidirectional networks take half contextual frames from the future and half from the past.} \label{tab:lstmComp} }\\ \end{table} \section{Experiments} \label{experiments} In this section, we present ablation studies, quantitative results on Human3.6M and MPI-INF-3DHP datasets and comparisons with previous art, and qualitative results on MPII 2D and MS COCO datasets. We start by describing the datasets used in our experiments. \indent \textbf{Human3.6M} has $11$ subjects performing different indoor actions with ground-truth annotations captured using a marker-based MoCap system. We follow ~\cite{Tome_2017_CVPR} and evaluate our results under 1) \textit{Protocol 1} that uses Mean Per Joint Position Error (MPJPE) as the evaluation metric w.r.t. 
root relative poses and 2) \textit{Protocol 2} that uses Procrustes Aligned MPJPE (PAMPJPE) which is MPJPE calculated after rigid alignment of predicted pose with the ground truth. \textbf{MPI-INF-3DHP (test) dataset} is a recently released dataset of $6$ test subjects with different indoor settings ( green screen and normal background) and $2$ subjects performing in-the-wild that makes it more challenging than Human3.6M, which only has a single indoor setting. We follow the evaluation metric proposed in~\cite{mono-3dhp2017} and report Percentage of Correct Keypoints (PCK) within \textit{150mm} range and Area Under Curve (AUC). Like~\cite{Zhou_2017_ICCV}, we assume that the global scale is known and perform skeleton retargeting while training to account for the difference of joint definitions between Human3.6M and MPI-INF-3DHP datasets. Finally, skeleton fitting is done as an optional step to fit the pose into a skeleton of known bone lengths. \textbf{2D datasets:} MS-COCO and MPII are in-the-wild 2D pose datasets with no 3D ground truth annotations. Therefore, we show qualitative results for both of them in Fig. ~\ref{fig:coco_vis}. Despite lack of depth annotation, our approach generalizes well and predicts valid 3D poses under background clutter and significant occlusion. \begin{figure*}[t] \centering \includegraphics[width = \linewidth]{temp_mpii_coco_vis.jpg} \caption{(a) Comparison of our temporal model TP-Net with SAP-Net on a video. The highlighted poses demonstrate the ability of TP-Net to learn temporal correlations, and smoothen and refine pose estimates from SAP-Net. (b) Qualitative results of SAP-Net on some images from MPII and MS-COCO datasets, from multiple viewpoints.} \label{fig:coco_vis} \vspace{-2em} \end{figure*} \vspace{-1em} \subsection{Quantitative Evaluations} \label{quanteval} We evaluate the outputs of the three stages of our pipeline and show improvements at each stage. \begin{enumerate} \item {\bf Baseline}: We train the same network architecture as SAP-Net but with only the fully supervised losses i.e. 2D heatmap supervision and $\mathcal{L}^e$ for 3D data only. \item {\bf SAP-Net}: Trained with the proposed structure-aware loss following Section~\ref{sec:training}. \item {\bf TP-Net}: Trained on the outputs of SAP-Net from video sequences ( see Section~\ref{sec:training}). \item {\bf Skeleton Fitting (optional)}: We fit a skeleton based on the subject's bone lengths while preserving the bone vector directions obtained from the 3D pose estimates. \end{enumerate} Below, we conduct ablation study on SAP-Net and report results on the two datasets. \textbf{SAP-Net Ablation Study:} In order to understand the effect of individual anatomical losses, we train SAP-Net with successive addition of geometry $\mathcal{L}^z_{g}$, illegal-angle $\mathcal{L}^z_{a}$ and symmetry $\mathcal{L}^z_{s}$ losses and report their performance on Human3.6M under {\it Protocol 1} in Table~\ref{tab:ablation}. We can observe that the incorporation of illegal-angle and symmetry losses to geometry loss significantly improves the performance while geometry loss does not offer much improvement even over the baseline. Similarly, TP-Net offers significant improvements over SAP-Net and the \emph{semi-online} variant of TP-Net ( TP-Net bi-directional ) does even better than TP-Net. \textbf{Evaluations on Human3.6M:} We show significant improvement over the state-of-the-art and achieve an MPJPE of $55.5mm$ with SAP-Net which is further improved by TP-Net to $52.1mm$. 
Table~\ref{tab: h36mp1} and Table~\ref{tab:h36mp2} present a comparative analysis of our results under \textit{Protocol 1} and \textit{Protocol 2}, respectively. We outperform other competitive approaches by significant margins leading to an improvement of 12\%. \textbf{Evaluations on MPI-INF-3DHP:} The results from Table~\ref{tab:mpi_full} show that we achieve slightly worse performance in terms of PCK and AUC but much better performance in terms of MPJPE, improvement of 12\%, as compared to the current state-of-the-art. It is despite the lack of data augmentation through green-screen compositing during training. \begin{table*}[th] \centering \parbox{.36\linewidth}{ \begin{tabular}{ l c c c} \hline {\bf Method} & \shortstack{{\bf PCK}} & \shortstack{{\bf AUC}} & \shortstack{ {\bf MPJPE} } \\ \hline Mehta~\cite{mono-3dhp2017} & 75.7 & 39.3 & 117.6 \\ Mehta~\cite{VNect_SIGGRAPH2017} & 76.6 & \textbf{40.4} & 124.7 \\ \hline Ours & \textbf{76.7} & 39.1 & \textbf{103.8} \\ \hline \end{tabular} \vskip 2mm \small \caption{Results on MPI-INF-3DHP dataset. Higher PCK and AUC are desired while a lower MPJPE is better. Note that unlike ~\cite{mono-3dhp2017,VNect_SIGGRAPH2017}, the MPI-INF-3DHP training dataset was not augmented.}\label{tab:mpi_full} \vspace{-1em} } \hspace{1em} \parbox{.55\linewidth}{ \centering \footnotesize \begin{tabular}{ l c c c c } {\bf Bone} & {\bf Zhou~\cite{Zhou_2017_ICCV}} & {\bf SAP-Net} & {\bf TP-Net} \\ \hline Upper arm & 37.8 & $25.8_{\downarrow31.7\%}$ & $\textbf{23.9}_{\downarrow36.7\%}$ \\ Lower arm & 50.7 & $\textbf{32.1}_{\downarrow36.7\%}$ & $33.9_{\downarrow33.1\%}$ \\ Upper leg & 43.4 & $27.8_{\downarrow35.9\%}$ & $\textbf{24.8}_{\downarrow42.8\%}$ \\ Lower leg & 47.8 & $38.2_{\downarrow20.1\%}$ & $\textbf{29.2}_{\downarrow38.9\%}$ \\ \hline \hline Upper arm & -- & 49.6 & \textbf{39.8} \\ Lower arm & -- & 66.0 & \textbf{48.3} \\ Upper leg & -- & 61.3 & \textbf{48.8} \\ Lower leg & -- & 68.8 & \textbf{48.3} \\ \hline \end{tabular} \vskip 2mm \caption{Evaluating our models on (i) symmetry - mean $L_1$ distance in mm between left/right bone pairs (upper half), and (ii) the standard deviation (in mm) of bone lengths across all video frames (lower half) on MPI-INF-3DHP dataset.} \label{tab:symmetryTab} \vspace{-1em} } \end{table*} \vspace{-1em} \subsection{Structural Validity Analysis} This section analyzes the validity of the predicted 3D poses in terms of the anatomical constraints, namely left-right symmetry and joint-angle limits. Ideally, the corresponding left-right bone pairs should be of similar length; therefore, we compute the mean $L_1$ distance in mm between the corresponding left-right bone pairs on MPI-INF-3DHP dataset and present the results in the upper half of Table~\ref{tab:symmetryTab}. For fairness of comparison, we evaluate on model trained only on Human3.6M. We can see that SAP-Net, trained with symmetry loss, significantly improves the symmetry as compared to the system in~\cite{Zhou_2017_ICCV} which uses bone-length ratio priors and TP-Net offers further improvements by exploiting the temporal cues from adjacent frames. It shows the importance of explicit enforcement of symmetry. Moreover, it clearly demonstrates the effectiveness of TP-Net in implicitly learning the symmetry constraint. The joint-angle validity of the predicted poses is evaluated using~\cite{akhter2015pose} and we observe only 0.8\% illegal non-torso joint angles as compared to 1.4\% for~\cite{Zhou_2017_ICCV}. 
The lower-half of Table~\ref{tab:symmetryTab} tabulates the standard deviation of the bone lengths in mm across frames for SAP-Net and TP-Net. We can observe that TP-Net reduces the standard deviation of the bone lengths across the frames by 28.7\%. It is also worth noting that we do not use any additional filter (moving average, 1 Euro, etc.), which would introduce lag and make the motion look \textit{uncanny}. Finally, we present some qualitative results in Fig.~\ref{fig:coco_vis}, Fig.~\ref{fig:percentileAnalysis} and in the supplementary material to show that TP-Net effectively corrects the jerks in the poses predicted by SAP-Net. \vspace{1.5em}
\begin{figure*}[t] \centering \includegraphics[width = \linewidth]{percentile_all.jpg} \vspace{-2em} \caption{Percentile analysis on the Human3.6M (top row), MPI-INF-3DHP (middle row) and MPII (bottom row) datasets. The results are displayed at the $15^{th}$, $30^{th}$, $60^{th}$ and $90^{th}$ percentile of error (MPJPE for Human3.6M and MPI-INF-3DHP, 2D PCK for MPII) from left to right.} \label{fig:percentileAnalysis} \vspace{-2em} \end{figure*}
\vspace{-2em} \section{Conclusion} \vspace{-1em}
We proposed two anatomically inspired loss functions, namely the illegal-angle and symmetry losses. We showed them to be highly effective for training weakly-supervised ConvNet architectures for predicting valid 3D pose configurations from a single RGB image in the in-the-wild setting. We analyzed the evolution of local loss surfaces to clearly demonstrate the benefits of the proposed losses. We also proposed a simple, yet surprisingly effective, sliding-window fully-connected network for temporal pose modelling from a sequence of adjacent poses. We showed that it is capable of learning semantically meaningful short-term temporal and structural correlations. The temporal model was shown to significantly reduce jitter and noise in the pose predictions for video sequences while taking $< 1ms$ per inference. Our complete pipeline improved the published state-of-the-art by 11.8\% and 12\% on Human3.6M and MPI-INF-3DHP, respectively, while running at 30fps on an NVIDIA 1080 Ti GPU. \vspace{2.5em} { \small \bibliographystyle{ieee}
\section{Introduction}\label{sec:intro} EDA research contests and their released benchmark suites have successfully attracted research endeavors on timely and practical problems. These contests stimulated innovative solutions which indeed advanced the cutting edge technologies. Based on these outstanding point tools, the \textit{DATC Robust Design Flow (RDF)} is developed to provide an open-source academic design flow, which can facilitate design methodology and cross-stage optimization research. Our goals are 1) to provide an academic reference flow from logic synthesis to detailed routing based on existing contest results, 2) to construct a database for design benchmarks and point tool libraries, and 3) to interact with industrial designs by using industrial standard design input/output formats. \section{DATC Robust Design Flow}\label{sec:overall_flow} DATC RDF improves the preliminary versions~\cite{jung:2016od,jung:2017dr} to deliver a complete research infrastructure of VLSI design flow~\cite{datc_rdf_repo}. Our goal is to provide an open-source academic design flow that covers entire design stages, i.e., from logic synthesis to detailed routing, based on the public academic point tools from the previous EDA research contests~\cite{Nam:2005gx,Nam:2008gr,Ozdal:2012gs,Kim:2014td,tau2017,Darav:2017,Mantik:2018is}, \Cref{fig:overall_flow} illustrates the overview of DATC RDF. It includes academic point tools for logic synthesis, global placement, detailed placement, timing analysis, gate sizing, global routing, and detailed routing. These tools are interfaced via transition scripts that enable data exchange between tools of other domains. A design library for DATC RDF contains: \begin{itemize}[noitemsep] \item A circuit written in a structural \textbf{Verilog} netlist. \item Standard cell library in \textbf{Liberty} format. \item Physical information of standard cells along with technology information in \textbf{LEF} format, which defines physical dimensions of each cell. \item Initial floorplan described in \textbf{DEF} format. \item Design constraints in \textbf{SDC} (Synopsys Design Constraints) format, such as clock period, driver information of each input port, and load capacitance of each output. \end{itemize} Given a design library, DATC RDF starts with the logic synthesis and generates a logic-optimized gate-mapped Verilog netlist. Taking the netlist and LEF/DEF from the design library, global and detail placements are then performed. Wire parasitics are extracted so that the timing of the placement result can be analyzed. Gate sizing may optionally run to remove timing violations while minimizing leakage power. Legalization is called to remove illegal placement caused by the gate sizing. Finally, global routing and detailed routing is performed. \begin{figure}[ht!]% \centering \includegraphics[width=0.95\linewidth]{figs/overall_flow.pdf}% \caption{Overview of DATC Robust Design Flow.}% \label{fig:overall_flow}% \end{figure} Currently, RDF database contains: \begin{enumerate}[noitemsep] \item Benchmarks: 2017 TAU Contest, and IWLS 2005 Benchmarks. \item Logic synthesis: ABC. \item Global placers: NTUPlace3, ComPLx, mPL5/6, Capo, FastPlace3-GP, Eh?Placer. \item Detailed placers: FastPlace3-DP, MCHL. \item Global routers: NCTUgr, FastRoute4.1, BFG-R. \item Detailed routers: NCTUdr. \item Gate sizers: USizer 2013 and USizer 2012. \item Timers: OpenTimer, iTimerC2.0. \item Cell libraries: ISPD 2012/2013 Contests, ASAP 7nm library. 
\end{enumerate}
Users can customize their own flow based on the above options.
\section{Updates In This Version}
In this section, we highlight the extensions over the preliminary versions~\cite{jung:2016od,jung:2017dr} of DATC RDF as follows. Details of the logic synthesis, global placement, gate sizing and global routing stages can be found in~\cite{jung:2016od}.
\subsection{Technology and Standard Cell Libraries}
To fully cover the entire VLSI design flow, technology and standard cell libraries have to contain a cell timing library (for logic synthesis, gate sizing, and timing analysis), as well as technology information and design rules (for placement and routing). In this regard, two technology libraries are available in the current implementation of RDF. The default technology library is a variant of the library from the ISPD 2012/2013 Discrete Gate Sizing Contests~\cite{Ozdal:2012gs,Ozdal:2013ea}. We take the Liberty standard cell library from the contest library. Since the ISPD 2012/2013 contest benchmark suite does not include a LEF file, which includes technology information and the physical dimensions of each cell, we take the LEF file generated by the A2A methodology presented in~\cite{Kahng:2014fk}. Another library that DATC RDF supports is the ASAP 7nm library~\cite{Vashishtha:2017ap,Xu:2017sc}. We take a total of 89 standard cells from the library, which includes basic combinational gates along with some complex gates such as AOI221 or OAI222. Only a basic D-type flip-flop with no reset and set ports is included in our library because the logic synthesis tool~\cite{BerkeleyABC} incorporated in RDF does not support complex sequencing elements. This library also comes with technology and cell LEF files, which can be readily used for all the placement and routing stages.
\subsection{Circuit Netlists}
A set of circuit netlists is taken from the TAU 2017 Timing Contest~\cite{tau2017} as well as from the IWLS 2005 Benchmarks~\cite{iwls2005}. They are remapped to the standard cell libraries described in the previous section, and the most critical path delay of each circuit is measured. To set tight timing constraints, the clock period is set to $80\%$ of the critical path delay for each circuit. The number of cells in the netlists ranges from 352 to 571853.
\subsection{Detailed Placement}
The first EDA research contest, the ISPD 2005 contest, focused on mixed-size cell placement~\cite{Nam:2005gx}. In the past decade, most research endeavors have been devoted to global placement, and current state-of-the-art placers have become mature. Very recently, detailed placement and legalization have called for novel ideas to handle mixed-cell-height circuits for better power, area, routability, and performance trade-offs. A legalizer removes all cell overlaps, meets complicated design rules and constraints, and preserves the ``good'' solution provided by global placement as much as possible. Considering mixed-cell-height standard cell designs with various design rules at advanced technology nodes, ICCAD held a Mixed-Cell-Height Standard Cell Legalization Contest in 2017~\cite{Darav:2017}. In RDF, the recent mixed-cell-height legalizer~\cite{Zhu:2018}, which won the first-place award of the contest, is included.
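As an illustration of how point tools such as those above are chained when customizing a flow from the listed options, a minimal Python sketch is given below. The wrapper-script names, file names and arguments are placeholders for illustration only and do not correspond to the actual RDF scripts or tool command lines.
\begin{verbatim}
import subprocess
from pathlib import Path

WORK = Path("rdf_run")
WORK.mkdir(exist_ok=True)

def run_stage(name, cmd):
    """Run one point tool via a wrapper script and keep its log."""
    with open(WORK / f"{name}.log", "w") as log:
        subprocess.run(cmd, stdout=log, stderr=subprocess.STDOUT, check=True)

# Hypothetical wrapper scripts stand in for the chosen point tools and the
# transition scripts that convert file formats between stages.
flow = [
    ("logic_synthesis",    ["./wrappers/synth.sh",     "design.v", "cells.lib"]),
    ("global_placement",   ["./wrappers/global_pl.sh", "mapped.v", "tech.lef",
                            "floorplan.def"]),
    ("detailed_placement", ["./wrappers/detail_pl.sh", "gp.def"]),
    ("global_routing",     ["./wrappers/global_rt.sh", "dp.def"]),
    ("detailed_routing",   ["./wrappers/detail_rt.sh", "dp.def", "route.guide"]),
]

for name, cmd in flow:
    run_stage(name, cmd)
\end{verbatim}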
\begin{table}[!t] \caption{Design Rules and Routing Preference Metrics.} \vspace{-2mm} \small \begin{tabularx}{0.925\columnwidth}{@{}YY@{}} \toprule Design rules & Routing preference metrics \\ \midrule Open & Wrong-way routing \\ Short & Off-track routing \\ Parallel run length spacing & Routing guide honoring \\ End of line spacing & \\ Cut spacing & \\ Min area rule (MAR) & \\ \bottomrule \end{tabularx} \label{tab:design_rules} \end{table}
\begin{figure*}[!t]% \centering \includegraphics[width=\textwidth]{figs/rdf_cloud.pdf}% \caption{A conceptual illustration of DATC RDF in scalable cloud infrastructure. Source codes of individual point tools and of DATC RDF are maintained in source code repositories, which are continuously integrated and containerized using a continuous integration framework. The resulting container image is then deployed into cloud infrastructure, which can be accessed by end-users.}% \label{fig:rdf_cloud}% \end{figure*}
\subsection{Detailed Routing}
The ISPD 2018 Initial Detailed Routing Contest~\cite{Mantik:2018is} is the first contest that targets detailed routing considering practical design rules and honoring global routing guidance. DATC RDF is extended to accommodate the outcome of the detailed routing contest. In RDF, global routing and detailed routing read input files based on the ISPD 2008 Global Routing Contest~\cite{Nam:2008gr} and the ISPD 2018 Initial Detailed Routing Contest~\cite{Mantik:2018is}, respectively. Since there is no industrial standard format for connecting global routing and detailed routing, we develop a global routing guide translator to translate the output format of the ISPD 2008 Global Routing Contest into the routing-guide input format used in the ISPD 2018 Initial Detailed Routing Contest. In the ISPD 2018 contest, a set of design rules and routing preference metrics is defined (\Cref{tab:design_rules}) and stored in LEF/DEF files. As in commercial routers, the output of a detailed router follows the DEF format and can be read by any commercial layout tool. Currently, NCTUdr is included, and more tools from winning teams will be included.
\section{DATC RDF in Scalable Cloud Infrastructure}\label{sec:datc_rdf_cloud}
Because of today's crises of design complexity, quality, and cost, a truly new approach and paradigm for design tools and flows is highly required~\cite{ABK:2018op, OpenROAD}. To foster such research efforts toward open-source, cloud-based CAD tools built on previous CAD contests, we propose a development flow and an implementation of RDF~\cite{datc_rdf_cloud}, which can be readily deployed, especially in scalable cloud infrastructure. It consists of three fundamental parts as illustrated in~\Cref{fig:rdf_cloud}: code repositories, continuous integration and containerization, and container orchestration. We expect that they make it easier to collaborate on the development of point tools and the design flow, and to deploy the entire system on one's own machine or on public cloud infrastructure. Source codes of each point optimization tool are maintained in source code repositories, such as git and Mercurial; RDF itself is also maintained in the code repositories. They are then integrated and \textit{containerized} into a container image~\cite{Merkel:2014do}, which can be automatically done by continuous integration tools~\cite{Meyer2014:co}. As the source codes are maintained using source code repositories and continuous integration tools, the implementation of RDF can always stay up to date.
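As a sketch of this build-and-deploy step, the containerized flow could be exercised locally with the docker-py client as below. The image name, build path, mounted volumes and flow command are hypothetical, and cluster-side orchestration (e.g., Kubernetes manifests) is not shown.
\begin{verbatim}
import docker

client = docker.from_env()

# Build the RDF container image from a checkout that bundles the point tools
# (the path and tag are placeholders for illustration).
image, build_logs = client.images.build(path="./datc-rdf", tag="datc-rdf:latest")

# Launch one flow run inside the container; in a cluster deployment the same
# image would instead be referenced from an orchestration manifest.
container = client.containers.run(
    "datc-rdf:latest",
    command=["python3", "run_flow.py", "--design", "fft_ispd"],
    volumes={"/data/benchmarks": {"bind": "/rdf/benchmarks", "mode": "ro"}},
    detach=True,
)
container.wait()
print(container.logs().decode())
\end{verbatim}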
With the containerization, one can readily deploy the entire DATC RDF framework because the container keeps all the libraries and dependencies that are necessary to run RDF. In particular, current mainstream cloud providers, such as \textit{Amazon AWS}~\cite{AWS}, \textit{Microsoft Azure}~\cite{Azure}, and \textit{IBM Cloud}~\cite{IBMCloud}, all provide off-the-shelf solutions for automated container deployment engines, e.g., Kubernetes. Moreover, scaling the deployment can be easily achieved with the ready-to-use horizontal scaling and load balancing features of container orchestration systems. We also expect that the containerization will enable large-scale parallel architectures of design automation systems.
\section{Experiments and Demonstration}\label{sec:exp}
The DATC RDF framework is implemented using C++ and Python3. We demonstrate our flow on the benchmark circuit \texttt{fft\_ispd} from the TAU 2017 Timing Contest~\cite{tau2017}. The circuit netlist was first unmapped to a generic gate library, and subsequently remapped to our standard cell library using \textit{Synopsys DesignCompiler L-2016.03-SP5-5}~\cite{DC}. It was then synthesized using the ABC logic synthesis and verification platform~\cite{BerkeleyABC} with the AIG optimization script of the Lazy-man synthesis paradigm~\cite{Yang:2012lm}. Two placement instances were then created using ComPLx~\cite{Kim:2012jx} and NTUPlace3~\cite{Chen:2008fd}. They were then routed with the NCTU-GR 2.0~\cite{Liu:2013hp} and BFG-R~\cite{Hu:2010ct} global routers. Finally, detailed routing was performed with NCTUdr. The results are shown in~\Cref{fig:placement_plot}, \Cref{fig:congestion_map}, and \Cref{fig:detailed_route}.
\begin{figure}[!t]% \centering \includegraphics[width=0.95\linewidth]{figs/placement_plot.pdf}% \caption{Placement results of fft\_ispd after logic synthesis with the Lazy-man script of ABC. The sequencing elements are represented as the red boxes, and the combinational gates as the blue boxes. (a) ComPLx and (b) NTUPlace3.}% \label{fig:placement_plot}% \end{figure}
\begin{figure}[!t]% \centering \includegraphics[width=0.95\linewidth]{figs/congestion_map.pdf}% \caption{Global routing congestion map of fft\_ispd. The placement result is obtained using the ComPLx placer, and global routing is done by NCTU-GR 2.0. (a) Metal-3, (b) Metal-4, (c) Metal-5 and (d) Metal-6 layers.} \label{fig:congestion_map}% \end{figure}
\begin{figure}[!t]% \centering \includegraphics[width=0.95\linewidth]{figs/detailed_route.pdf}% \caption{Detailed routing results of fft\_ispd. Metal-3 to Metal-6 layers are colored with green, yellow, red, and orange, respectively. Placement is done with (a) ComPLx and (b) NTUPlace3.}% \label{fig:detailed_route}% \end{figure}
\section{Conclusion}\label{sec:conclusion}
In this paper, we present DATC RDF, which is an open design flow from logic synthesis to detailed routing. We include point tools based on previous EDA research contests and will keep expanding the flow coverage vertically and horizontally. We also demonstrate RDF in cloud infrastructure. RDF can be readily integrated with design methodology and cross-stage optimization research.
\section*{Acknowledgment}
This work was supported by the IEEE CEDA Design Automation Technical Committee (DATC). Special thanks go to the tool providers for their generosity. \bibliographystyle{ACM-Reference-Format}
\section{Conclusion} \label{sec:conclusion} One of the many challenges in $\phi_2$-sensitive studies is that similarity of the final states and overlapping analysis techniques inevitably leads to significant systematic correlations amongst the physics observables ultimately constraining $\phi_2$. A coordinated approach leads to an appreciably overall improved precision, while at the same time eliminating bias, which in the case of $B \ensuremath{\rightarrow}\xspace \rho\rho$ is shown to be at the level of $1^\circ$ in and of itself. Additional care may also need to be taken if ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace (\rho \pi)^0$ turns out to dominate the average whilst at the same time, the amplitude model uncertainty in $B \ensuremath{\rightarrow}\xspace \rho\rho$ is significant. Physics parameter correlations in data models can cross experimental lines for which minimal cooperation through the sharing of signed fit residuals on commonly defined systematic variations offers a good compromise to combined data analyses. Hopefully, this work inspires other investigations into the role of systematic correlations beyond single measurements in the combinations of other {\ensuremath{C\!P}}\xspace-violating weak phases such as $\phi_1$ ($\beta$), $\phi_3$ ($\gamma$) and $\phi_s$. \section{Model uncertainties and correlation matrix} \label{sec:app} For the considered $B \ensuremath{\rightarrow}\xspace \rho \rho$ and ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace (\rho \pi)^0$ amplitude models, the systematic uncertainties on the extracted physics parameters constraining $\phi_2$ that globally account for correlations arising from the $\rho$ pole parameters are given in table~\ref{tab:syst}. For comparison, how these uncertainties scale when ignoring such correlations within and between these systems is also shown. The associated correlation matrix in the global case is spread over tables~\ref{tab:corr1}-\ref{tab:corr3}, while the uncorrelated scenarios can be trivially inferred from these. \begin{table}[!htb] \centering \begin{tabular}{|c|c|c|} \hline Parameter & Model Uncertainty Proposed & Uncorrelated Scale\\ \hline $\mathcal{B}^{00}$ [S] & $3.1 \%$ & 1.10 \\ $|\lambda_{{\ensuremath{C\!P}}\xspace}^{00}|$ [S] & 0.021 & 0.94\\ $\phi_2^{00}$ [S] & $0.51^\circ$ & 0.99\\ $\mathcal{B}^{00}$ [P] & $0.090 \%$ & 1.13\\ $|\lambda_{{\ensuremath{C\!P}}\xspace}^{00}|$ [P] & 0.0010 & 1.14\\ $\phi_2^{00}$ [P] & $0.16^\circ$ & 1.12\\ $\mathcal{B}^{00}$ [D] & $1.39 \%$ & 1.09\\ $|\lambda_{{\ensuremath{C\!P}}\xspace}^{00}|$ [D] & 0.039 & 1.00\\ $\phi_2^{00}$ [D] & $0.53^\circ$ & 1.00\\ $\mathcal{B}^{+-}$ [S] & $0.26 \%$ & 1.05\\ $|\lambda_{{\ensuremath{C\!P}}\xspace}^{+-}|$ [S] & 0.00045 & 1.11\\ $\phi_2^{+-}$ [S] & $0.019^\circ$ & 1.07\\ $\mathcal{B}^{+-}$ [P] & $0.45 \%$ & 1.07\\ $|\lambda_{{\ensuremath{C\!P}}\xspace}^{+-}|$ [P] & 0.0013 & 1.05\\ $\phi_2^{+-}$ [P] & $0.022^\circ$ & 1.12\\ $\mathcal{B}^{+-}$ [D] & $1.7\%$ & 1.07\\ $|\lambda_{{\ensuremath{C\!P}}\xspace}^{+-}|$ [D] & 0.0018 & 1.07\\ $\phi_2^{+-}$ [D] & $0.015^\circ$ & 1.11\\ $\mathcal{B}^{+0}$ [S] & $0.25 \%$ & 0.85\\ $\mathcal{B}^{+0}$ [P] & $0.11 \%$ & 0.81\\ $\mathcal{B}^{+0}$ [D] & $1.55 \%$ & 0.78\\ $\phi_2$ [${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace (\rho \pi)^0$] & $0.081^\circ$ & 1.03\\ \hline \end{tabular} \caption{Model uncertainties under the Proposed practice scheme for each determined parameter entering the $\phi_2$ constraint. 
The final column shows how the uncertainties scale when correlations within and between systems are ignored. Apart from the final row entry, square brackets indicate the partial wave in the $B \ensuremath{\rightarrow}\xspace \rho \rho$ sector.} \label{tab:syst} \end{table} \begin{sidewaystable}[!htb] \centering \begin{tabular}{|c|ccccccccc|} \hline & $\mathcal{B}^{00}$ [S] & $|\lambda_{{\ensuremath{C\!P}}\xspace}^{00}|$ [S] & $\phi_2^{00}$ [S] & $\mathcal{B}^{00}$ [P] & $|\lambda_{{\ensuremath{C\!P}}\xspace}^{00}|$ [P] & $\phi_2^{00}$ [P] & $\mathcal{B}^{00}$ [D] & $|\lambda_{{\ensuremath{C\!P}}\xspace}^{00}|$ [D] & $\phi_2^{00}$ [D]\\ \hline $\mathcal{B}^{00}$ [S] & $+1.00$ &&&&&&&&\\ $|\lambda_{{\ensuremath{C\!P}}\xspace}^{00}|$ [S] & $-0.88$ & $+1.00$ &&&&&&&\\ $\phi_2^{00}$ [S] & $+0.85$ & $-0.99$ & $+1.00$ &&&&&&\\ $\mathcal{B}^{00}$ [P] & $-0.78$ & $+0.45$ & $-0.36$ & $+1.00$ &&&&&\\ $|\lambda_{{\ensuremath{C\!P}}\xspace}^{00}|$ [P] & $+0.61$ & $-0.22$ & $+0.12$ & $-0.97$ & $+1.00$ &&&&\\ $\phi_2^{00}$ [P] & $-0.49$ & $+0.09$ & $+0.02$ & $+0.92$ & $-0.99$ & $+1.00$ &&&\\ $\mathcal{B}^{00}$ [D] & $+1.00$ & $-0.90$ & $+0.87$ & $-0.76$ & $+0.58$ & $-0.45$ & $+1.00$ &&\\ $|\lambda_{{\ensuremath{C\!P}}\xspace}^{00}|$ [D] & $-0.96$ & $+0.97$ & $-0.94$ & $+0.66$ & $-0.45$ & $+0.32$ & $-0.97$ & $+1.00$ &\\ $\phi_2^{00}$ [D] & $-0.97$ & $+0.89$ & $-0.83$ & $+0.80$ & $-0.64$ & $+0.53$ & $-0.97$ & $+0.97$ & $+1.00$\\ $\mathcal{B}^{+-}$ [S] & $+0.87$ & $-0.54$ & $+0.48$ & $-0.98$ & $+0.92$ & $-0.85$ & $+0.85$ & $-0.74$ & $-0.85$\\ $|\lambda_{{\ensuremath{C\!P}}\xspace}^{+-}|$ [S] & $+0.65$ & $-0.33$ & $+0.23$ & $-0.95$ & $+0.95$ & $-0.93$ & $+0.63$ & $-0.54$ & $-0.70$\\ $\phi_2^{+-}$ [S] & $-0.80$ & $+0.45$ & $-0.38$ & $+0.96$ & $-0.93$ & $+0.87$ & $-0.77$ & $+0.66$ & $+0.79$\\ $\mathcal{B}^{+-}$ [P] & $-0.78$ & $+0.43$ & $-0.35$ & $+1.00$ & $-0.97$ & $+0.93$ & $-0.76$ & $+0.64$ & $+0.79$\\ $|\lambda_{{\ensuremath{C\!P}}\xspace}^{+-}|$ [P] & $-0.85$ & $+0.54$ & $-0.47$ & $+0.99$ & $-0.93$ & $+0.87$ & $-0.83$ & $+0.74$ & $+0.86$\\ $\phi_2^{+-}$ [P] & $+0.54$ & $-0.15$ & $+0.04$ & $-0.94$ & $+0.99$ & $-0.99$ & $+0.51$ & $-0.39$ & $-0.58$\\ $\mathcal{B}^{+-}$ [D] & $-0.80$ & $+0.46$ & $-0.37$ & $+1.00$ & $-0.96$ & $+0.91$ & $-0.77$ & $+0.67$ & $+0.81$\\ $|\lambda_{{\ensuremath{C\!P}}\xspace}^{+-}|$ [D] & $+0.80$ & $-0.46$ & $+0.38$ & $-0.99$ & $+0.96$ & $-0.91$ & $+0.78$ & $-0.66$ & $-0.80$\\ $\phi_2^{+-}$ [D] & $+0.63$ & $-0.30$ & $+0.20$ & $-0.95$ & $+0.95$ & $-0.94$ & $+0.60$ & $-0.51$ & $-0.68$\\ $\mathcal{B}^{+0}$ [S] & $+0.86$ & $-0.53$ & $+0.47$ & $-0.98$ & $+0.92$ & $-0.86$ & $+0.84$ & $-0.73$ & $-0.84$\\ $\mathcal{B}^{+0}$ [P] & $-0.77$ & $+0.43$ & $-0.34$ & $+1.00$ & $-0.97$ & $+0.93$ & $-0.75$ & $+0.64$ & $+0.79$\\ $\mathcal{B}^{+0}$ [D] & $-0.81$ & $+0.48$ & $-0.40$ & $+1.00$ & $-0.96$ & $+0.90$ & $-0.79$ & $+0.69$ & $+0.82$\\ $\phi_2$ [${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace (\rho \pi)^0$] & $-0.98$ & $+0.76$ & $-0.73$ & $+0.87$ & $-0.74$ & $+0.63$ & $-0.97$ & $+0.90$ & $+0.94$\\ \hline \end{tabular} \caption{Correlation matrix (1) under the Proposed practice scheme for each determined parameter entering the $\phi_2$ constraint. 
Apart from the final row entry, square brackets indicate the partial wave in the $B \ensuremath{\rightarrow}\xspace \rho \rho$ sector.} \label{tab:corr1} \end{sidewaystable} \begin{sidewaystable}[!htb] \centering \begin{tabular}{|c|ccccccccc|} \hline & $\mathcal{B}^{+-}$ [S] & $|\lambda_{{\ensuremath{C\!P}}\xspace}^{+-}|$ [S] & $\phi_2^{+-}$ [S] & $\mathcal{B}^{+-}$ [P] & $|\lambda_{{\ensuremath{C\!P}}\xspace}^{+-}|$ [P] & $\phi_2^{+-}$ [P] & $\mathcal{B}^{+-}$ [D] & $|\lambda_{{\ensuremath{C\!P}}\xspace}^{+-}|$ [D] & $\phi_2^{+-}$ [D]\\ \hline $\mathcal{B}^{+-}$ [S] & $+1.00$ &&&&&&&&\\ $|\lambda_{{\ensuremath{C\!P}}\xspace}^{+-}|$ [S] & $+0.89$ & $1.00$ &&&&&&&\\ $\phi_2^{+-}$ [S] & $-0.96$ & $-0.85$ & $+1.00$ &&&&&&\\ $\mathcal{B}^{+-}$ [P] & $-0.98$ & $-0.94$ & $+0.97$ & $+1.00$ &&&&&\\ $|\lambda_{{\ensuremath{C\!P}}\xspace}^{+-}|$ [P] & $-0.99$ & $-0.92$ & $+0.96$ & $+0.99$ & $+1.00$ &&&&\\ $\phi_2^{+-}$ [P] & $+0.87$ & $+0.96$ & $-0.88$ & $-0.94$ & $-0.89$ & $+1.00$ &&&\\ $\mathcal{B}^{+-}$ [D] & $-0.99$ & $-0.94$ & $+0.97$ & $+1.00$ & $+0.99$ & $-0.93$ & $+1.00$ &&\\ $|\lambda_{{\ensuremath{C\!P}}\xspace}^{+-}|$ [D] & $+0.99$ & $+0.91$ & $-0.99$ & $-0.99$ & $-0.99$ & $+0.92$ & $-1.00$ & $+1.00$ &\\ $\phi_2^{+-}$ [D] & $+0.89$ & $+1.00$ & $-0.85$ & $-0.94$ & $-0.92$ & $+0.97$ & $-0.94$ & $+0.91$ & $+1.00$\\ $\mathcal{B}^{+0}$ [S] & $+1.00$ & $+0.90$ & $-0.97$ & $-0.99$ & $-1.00$ & $+0.88$ & $-0.99$ & $+0.99$ & $+0.89$\\ $\mathcal{B}^{+0}$ [P] & $-0.98$ & $-0.94$ & $+0.97$ & $+1.00$ & $+0.99$ & $-0.95$ & $+1.00$ & $-0.99$ & $-0.94$\\ $\mathcal{B}^{+0}$ [D] & $-0.99$ & $-0.93$ & $+0.97$ & $+1.00$ & $+1.00$ & $-0.92$ & $+1.00$ & $-1.00$ & $-0.93$\\ $\phi_2$ [${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace (\rho \pi)^0$] & $-0.94$ & $-0.75$ & $+0.88$ & $+0.87$ & $+0.93$ & $-0.67$ & $+0.89$ & $-0.89$ & $-0.73$\\ \hline \end{tabular} \caption{Correlation matrix (2) under the Proposed practice scheme for each determined parameter entering the $\phi_2$ constraint. Apart from the final row entry, square brackets indicate the partial wave in the $B \ensuremath{\rightarrow}\xspace \rho \rho$ sector.} \label{tab:corr2} \end{sidewaystable} \begin{table}[!htb] \centering \begin{tabular}{|c|cccc|} \hline & $\mathcal{B}^{+0}$ [S] & $\mathcal{B}^{+0}$ [P] & $\mathcal{B}^{+0}$ [D] & $\phi_2$ [${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace (\rho \pi)^0$]\\ \hline $\mathcal{B}^{+0}$ [S] & $+1.00$ &&&\\ $\mathcal{B}^{+0}$ [P] & $-0.98$ & $+1.00$ &&\\ $\mathcal{B}^{+0}$ [D] &$-0.99$ & $+1.00$ & $+1.00$ &\\ $\phi_2$ [${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace (\rho \pi)^0$] & $-0.94$ & $+0.87$ & $+0.90$ & $+1.00$\\ \hline \end{tabular} \caption{Correlation matrix (3) under the Proposed practice scheme for each determined parameter entering the $\phi_2$ constraint. Apart from the final row entry, square brackets indicate the partial wave in the $B \ensuremath{\rightarrow}\xspace \rho \rho$ sector.} \label{tab:corr3} \end{table} \section{\boldmath Introduction} \label{sec:intro} Violation of the combined charge-parity symmetry~({\ensuremath{C\!P}}\xspace violation) in the Standard Model~(SM) arises from a single irreducible phase in the Cabibbo-Kobayashi-Maskawa~(CKM) quark-mixing matrix~\cite{Cabibbo:1963yz,Kobayashi:1973fv}. Various processes offer different yet complementary insight into this phase, which manifests in a number of experimental observables over-constraining the Unitarity Triangle (UT). 
The measurement of such parameters and their subsequent combination is important as New Physics~(NP) contributions can present themselves as an inconsistency within the triangle paradigm. \begin{figure}[!htb] \centering \includegraphics[height=115pt,width=!]{figs/pipi_tree.eps} \includegraphics[height=115pt,width=!]{figs/pipi_peng.eps} \put(-425,105){(a)} \put(-210,105){(b)} \caption{\label{fig:pipi} Leading-order Feynman diagrams shown producing ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\pion^+}}\xspace {\ensuremath{\pion^-}}\xspace$ decays, though the same quark transition can also produce ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace \rho^{\pm} {\ensuremath{\pion^\mp}}\xspace$, $\rho^+ \rho^-$ and $a_1^\pm {\ensuremath{\pion^\mp}}\xspace$. (a) depicts the dominant (tree) diagram while (b) shows the competing loop (penguin) diagram. In the penguin diagram, the subscript $x$ in $V_{xb}$ refers to the flavour of the intermediate-state quark $(x=u,c,t)$.} \end{figure} Decays that proceed predominantly through the ${\ensuremath{\overline \bquark}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\overline \uquark}}\xspace {\ensuremath{\Pu}}\xspace {\ensuremath{\overline \dquark}}\xspace$ tree transition (figure~\ref{fig:pipi}a) in the presence of {\ensuremath{\B^0}}\xspace--{\ensuremath{\Bbar{}^0}}\xspace mixing are sensitive to the interior angle of the UT, $\phi_2 = \alpha \equiv \arg(-{\ensuremath{V_{\tquark\dquark}^{\phantom{\ast}}}}\xspace {\ensuremath{V_{\tquark\bquark}^\ast}}\xspace)/({\ensuremath{V_{\uquark\dquark}^{\phantom{\ast}}}}\xspace {\ensuremath{V_{\uquark\bquark}^\ast}}\xspace)$, which can be accessed through mixing-induced {\ensuremath{C\!P}}\xspace violation observables measured from time-dependent, flavour-tagged analyses. This quark process manifests itself in multiple systems, including $B \ensuremath{\rightarrow}\xspace \pi \pi$~\cite{Lees:2012mma,Adachi:2013mae,Aaij:2018tfw,Aaij:2020buf,Aubert:2007hh,Duh:2012ie,Julius:2017jso}, $(\rho \pi)^0$~\cite{Lees:2013nwa,Kusaka:2007dv,Kusaka:2007mj}, $\rho \rho$~\cite{Aubert:2007nua,Vanhoefer:2015ijw,Aubert:2009it,Zhang:2003up,Aubert:2008au,Adachi:2012cz,Aaij:2015ria} and $a_1^\pm \pi^\mp$~\cite{Aubert:2006gb,Dalseno:2012hp,Aubert:2009ab}, where the angle $\phi_2$ has so far been constrained with an overall uncertainty of around $4^\circ$~\cite{Gronau:2016idx,Charles:2017evz,Bona:2006ah,Amhis:2019ckw}. With the dubious honour of being the least known input to the UT now falling to $\phi_2$, there has never been better motivation to improve its experimental precision. More often than not, this involves the combination of several physics parameters extracted from related decay channels raising an important general question that hitherto has not yet been explored in any great detail. As experimental measurements become more and more precise, a crucial unknown is the point at which it will become necessary to consider systematic correlations, not arising simply within individual analyses, but rather in between the relevant analyses in order to avoid bias. 
As a specific case study of a broader issue, which includes but is not limited to combination-based approaches designed to measure other {\ensuremath{C\!P}}\xspace-violating weak phases such as $\phi_1$ ($\beta$)~\cite{Ciuchini:2005mg,Faller:2008zc,Ciuchini:2011kd,Jung:2012mp,DeBruyn:2014oga,Frings:2015eva,Ligeti:2015yma,Barel:2020jvf}, $\phi_3$ ($\gamma$)~\cite{Lorier:2010xf,Imbeault:2010xg,Rey-LeLorier:2011ltd} and $\phi_s$~\cite{Faller:2008zc,DeBruyn:2014oga,Barel:2020jvf,Fleischer:1999zi,Faller:2008gt,Liu:2013nea}, I open the discussion in this paper within the context of the $\phi_2$ average. By and large, this problem is an internal matter for each collaboration; however, there are irreducible systematic uncertainties that transcend experiment, warranting a more cooperative approach, and these are the primary focus here. Experience in amplitude analysis suggests that the model uncertainty of a dominant vector resonance tends to derive more significantly from its own pole parameters than from the remainder of the model, which is converse to smaller contributions where the opposite trend appears to hold. This is because Breit-Wigner phases vary most rapidly at the poles, exacerbating the effect of small variations on the interference patterns in the regions of greatest physical interest. In this specific consideration, these pole parameters are those of the $\rho$ meson. I open in section~\ref{sec:isospin} with a description of the SU(2)-based approach for controlling distortions in experimental $\phi_2$ measurements arising from the ever-present strong-loop gluonic penguin processes. Following this, I introduce the impact of systematic correlations in section~\ref{sec:nbb} with a conceptually simpler example surrounding the branching fractions of the decay processes involved. In section~\ref{sec:rhorho}, I move into the primary study on the bias in $\phi_2$ caused by neglecting amplitude model correlations in the $B \ensuremath{\rightarrow}\xspace \rho \rho$ system arising from the $\rho$ pole masses and widths. This bias, if left unchecked, can then go on to affect the otherwise immune $B^0 \ensuremath{\rightarrow}\xspace (\rho \pi)^0$ analysis as explained in section~\ref{sec:rhopi}. Finally, conclusions are drawn in section~\ref{sec:conclusion} along with some recommendations on how the community going forward can reduce the $\phi_2$ bias induced by systematic and amplitude model correlations.
\section{\boldmath Strong-penguin containment in $\phi_2$ constraints}
\label{sec:isospin}
In general, the extraction of $\phi_2$ is complicated by the presence of interfering amplitudes that distort the experimentally determined value of $\phi_2$ from its SM expectation and would mask any NP phase if not accounted for. These effects primarily include ${\ensuremath{\overline \bquark}}\xspace \rightarrow {\ensuremath{\overline \dquark}}\xspace u {\ensuremath{\overline \uquark}}\xspace$ strong-loop decays (figure~\ref{fig:pipi}b), although isospin-violating processes such as electroweak penguins, ${\ensuremath{\pion^0}}\xspace$--$\eta$--$\eta^\prime$ mixing, $\rho^0$--$\omega$--$\phi$ mixing~\cite{Gronau:2005pq} and the finite $\rho$ width in $B \ensuremath{\rightarrow}\xspace \rho\rho$~\cite{Falk:2003uq} can also play a role.
\subsection{Original approach}
It is possible to remove the isospin-conserving component of this contamination by invoking SU(2) arguments.
The original method considers the three possible charge configurations of $B \rightarrow \pi\pi$ decays~\cite{Gronau:1990ka}. Bose-Einstein statistics rules out a total isospin $I=1$ contribution, leaving just the $I=0, 2$ amplitudes. Strong penguins can then only contribute an $I=0$ amplitude, since the mediating gluon is an isospin singlet. However, in the specific case of ${\ensuremath{\Bu}}\xspace \rightarrow {\ensuremath{\pion^+}}\xspace {\ensuremath{\pion^0}}\xspace$, the further limiting projection $I_{3} = 1$ additionally rules out $I=0$, thereby forbidding strong penguin contributions to this channel.
\begin{figure}[!htb] \centering \includegraphics[height=110pt,width=!]{figs/iso_anal.eps} \caption{\label{fig_iso} Complex isospin amplitude triangles from which $\Delta \phi_2$ can be determined.} \end{figure}
The complex $B \rightarrow \pi\pi$ and $\bar B \rightarrow \pi\pi$ decay amplitudes obey the isospin relations \begin{equation} A^{+0} = \frac{1}{\sqrt{2}}A^{+-} + A^{00}, \;\;\;\; \bar{A}^{+0} = \frac{1}{\sqrt{2}}\bar{A}^{+-} + \bar{A}^{00}, \label{eq_iso} \end{equation} respectively, where the superscripts refer to the combination of pion charges. The decay amplitudes can be represented as triangles in the complex plane as shown in figure~\ref{fig_iso}. As ${\ensuremath{\Bu}}\xspace \rightarrow {\ensuremath{\pion^+}}\xspace {\ensuremath{\pion^0}}\xspace$ is a pure tree mode, its amplitude in isospin space is identical to that of its {\ensuremath{C\!P}}\xspace-conjugate, and so the two triangles can be drawn with a common base, $A^{+0}=\bar{A}^{+0}$, allowing the shift in $\phi_2$ caused by strong penguin contributions, $\Delta \phi_2 \equiv \phi_2^\pm - \phi_2$, to be determined from the phase difference between $\bar{A}^{+-}$ and $A^{+-}$. These amplitudes can be constrained by 7 mostly independent physical observables, leaving a two-fold discrete ambiguity in $\phi_2$ over the range $[0, 180]^\circ$. The observables are related to the decay amplitudes as \begin{equation} \label{eq:old} \frac{1}{\tau_B^{i+j}} {\ensuremath{\mathcal{B}}}\xspace^{ij} = \frac{|\bar A^{ij}|^2 + |A^{ij}|^2}{2}, \hspace{10pt} \mathcal{A}_{{\ensuremath{C\!P}}\xspace}^{ij} = \frac{|\bar A^{ij}|^2 - |A^{ij}|^2}{|\bar A^{ij}|^2 + |A^{ij}|^2}, \hspace{10pt} \mathcal{S}_{{\ensuremath{C\!P}}\xspace}^{ij} = \frac{2\Im(\bar A^{ij} A^{ij*})}{|\bar A^{ij}|^2 + |A^{ij}|^2}, \end{equation} where {\ensuremath{\mathcal{B}}}\xspace, $\mathcal{A}_{{\ensuremath{C\!P}}\xspace}$ and $\mathcal{S}_{{\ensuremath{C\!P}}\xspace}$ are the branching fractions, {\ensuremath{C\!P}}\xspace violation in the decay and mixing-induced {\ensuremath{C\!P}}\xspace violation parameters, respectively. The superscript $ij$ represents the charge configuration of the final state pions and $\tau_B^{i+j}$ is the lifetime of the {\ensuremath{\Bu}}\xspace~($i+j=1$) or {\ensuremath{\B^0}}\xspace~($i+j=0$). Naturally for ${\ensuremath{\Bu}}\xspace \rightarrow {\ensuremath{\pion^+}}\xspace {\ensuremath{\pion^0}}\xspace$, {\ensuremath{C\!P}}\xspace violation in the decay is forbidden by the isospin argument and mixing-induced {\ensuremath{C\!P}}\xspace violation is not defined. The ambiguity in $\phi_2$ is also increased to 8-fold if $\mathcal{S}_{{\ensuremath{C\!P}}\xspace}^{00}$ of the colour-suppressed channel is not measured, as is currently the case. This approach is also applied to the $B \ensuremath{\rightarrow}\xspace \rho \rho$ system analogously, substituting the $\rho$ meson in place of each pion.
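For illustration of these relations, the short sketch below builds a hypothetical set of $B \rightarrow \pi\pi$ amplitudes from an assumed tree-plus-penguin ansatz arranged so that eq.~\ref{eq_iso} closes exactly, evaluates the observables of eq.~\ref{eq:old} and recovers the penguin-induced shift of the effective weak phase. None of the numerical values are taken from data; they serve only to make the mechanics explicit.
\begin{verbatim}
import numpy as np

phi2 = np.radians(81.4)            # assumed weak phase
T_pm, T_00 = 1.00, 0.55            # illustrative tree magnitudes (arbitrary units)
P_pm = 0.25 * np.exp(0.4j)         # illustrative penguin amplitude with a strong phase

# Tree + penguin ansatz arranged so that the isospin triangle relations close exactly.
A = {"+-": T_pm * np.exp(-1j * phi2) + P_pm,
     "00": T_00 * np.exp(-1j * phi2) - P_pm / np.sqrt(2.0)}
Abar = {"+-": T_pm * np.exp(+1j * phi2) + P_pm,
        "00": T_00 * np.exp(+1j * phi2) - P_pm / np.sqrt(2.0)}
for amp in (A, Abar):
    amp["+0"] = amp["+-"] / np.sqrt(2.0) + amp["00"]   # closes each triangle

def observables(ij):
    """Rate-like, direct-CP and mixing-induced-CP combinations of the amplitudes."""
    a, abar = A[ij], Abar[ij]
    norm = abs(abar) ** 2 + abs(a) ** 2
    return (norm / 2.0,
            (abs(abar) ** 2 - abs(a) ** 2) / norm,
            2.0 * np.imag(abar * np.conj(a)) / norm)

for ij in ("+0", "+-", "00"):
    print(ij, [round(x, 4) for x in observables(ij)])

# The charged mode is pure tree: equal magnitudes, hence no direct CP violation there.
assert np.isclose(abs(A["+0"]), abs(Abar["+0"]))

# Penguin-induced shift of the effective weak phase in the +- mode.
dphi2 = 0.5 * np.angle(Abar["+-"] * np.conj(A["+-"])) - phi2
print("Delta phi2 [deg]:", round(np.degrees(dphi2), 2))
\end{verbatim}
The penguin normalisation chosen for the colour-suppressed amplitude is one possible convention that respects the triangle relations; it is not intended to reproduce any particular measurement.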
\subsection{Next-generation approach}
The $B \ensuremath{\rightarrow}\xspace \rho \rho$ system presents a greater theoretical and experimental challenge than $B \ensuremath{\rightarrow}\xspace \pi \pi$. It has already been pointed out that isospin-breaking ($I=1$) $\rho$-width effects can be controlled by reducing the $\rho$ analysis window of ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace \rho^+ \rho^-$ and ${\ensuremath{\Bu}}\xspace \ensuremath{\rightarrow}\xspace \rho^+ \rho^0$ according to the method outlined in ref.~\cite{Gronau:2016nnc}. An open question to be studied is the extent to which this is systematically feasible in the presence of interfering and non-interfering backgrounds. In this work, I espouse an alternative viewpoint in which, in exchange for greater analysis complexity, the multi-body final state is exploited by directly modelling the structure of $\rho^0$--$\omega$ mixing and $I=1$ finite $\rho$-width effects. To that end, I have already outlined the amplitude analysis framework by which this can be achieved, replacing the measured physical observables from eq.~\ref{eq:old} by \begin{equation} \label{eq:new} \frac{1}{\tau_B^{i+j}} {\ensuremath{\mathcal{B}}}\xspace^{ij} = \frac{|\bar A^{ij}|^2 + |A^{ij}|^2}{2}, \hspace{10pt} |\lambda_{{\ensuremath{C\!P}}\xspace}^{ij}| = \biggl|\frac{\bar A^{ij}}{A^{ij}}\biggr|, \hspace{10pt} \phi_2^{ij} = \frac{\arg(\bar A^{ij} A^{ij*})}{2}, \end{equation} where $\lambda^{ij}_{{\ensuremath{C\!P}}\xspace}$ is a {\ensuremath{C\!P}}\xspace-violation parameter and $\phi_2^{ij}$ is its effective weak phase. As these quantities are now related to the isospin triangles at amplitude level, the solution degeneracy in $\phi_2$ for the range $[0, 180]^\circ$ is resolved~\cite{Dalseno:2018hvf} and, as an added incentive, the 8-fold solution degeneracy in $B^0 \ensuremath{\rightarrow}\xspace a_1^\pm {\ensuremath{\pion^\mp}}\xspace$ can also be lifted for the same range in the SU(3) approach~\cite{Dalseno:2019kps}. Naturally, this method raises concerns regarding the potential impact on $\phi_2$ coming from correlated amplitude model systematics, which will be studied here.
\subsection{Statistical method}
In this paper, I employ the frequentist approach adopted by the CKMfitter Group~\cite{Charles:2017evz}, where a $\chi^2$ is constructed comparing theoretical forms for physical observables expressed in terms of parameters of interest, $\bm{\mu}$, with their experimentally measured values, $\bm{x}$. The most general form, \begin{equation} \chi^2 \equiv (\bm{x}-\bm{\mu})^T \bm{\Sigma}^{-1} (\bm{x}-\bm{\mu}), \end{equation} is necessary here, where $\bm{\Sigma}$ is the total covariance matrix composed of the statistical and systematic covariance matrices as $\bm{\Sigma} \equiv \bm{\Sigma}_{\rm Stat} + \bm{\Sigma}_{\rm Syst}$. The statistical covariance matrix comes directly from the function minimisation procedure during the nominal fit to a sample, while the systematic covariance matrix is manually derived. Parameter variations are generated according to their uncertainties and the fit is repeated for each set of variations. Over $N$ fits, the covariance between a pair of physical observables is given by \begin{equation} \Sigma_{x,y} \equiv \sum^N_{i=1} \frac{(x_i-\bar x)(y_i-\bar y)}{N}, \end{equation} where the barred quantities, representing the means, are obtained from the nominal fit. A scan is then performed, minimising the $\chi^2$ to determine $\bm{\mu}$ for each value of $\phi_2$ fixed across a range.
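As a purely schematic illustration of this procedure, the sketch below profiles a toy $\chi^2$ of the above form over a grid of fixed $\phi_2$ values. The observable model, its nuisance parameters and the covariance entries are placeholders chosen only to expose the mechanics, and bear no relation to the real analyses.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def model(phi2, r, delta):
    """Placeholder observable vector depending on phi2 and two nuisance parameters."""
    a_cp = -2.0 * r * np.sin(delta) * np.sin(phi2)
    s_cp = np.sin(2.0 * phi2) + 2.0 * r * np.cos(delta) * np.sin(phi2)
    rate = 1.0 + r ** 2 + 2.0 * r * np.cos(delta) * np.cos(phi2)
    return np.array([a_cp, s_cp, rate])

truth = (np.radians(81.4), 0.15, 0.5)
x_meas = model(*truth)                                  # pseudo-measurement at the truth
sigma_stat = np.diag([0.03, 0.03, 0.05]) ** 2           # from the nominal fit, in reality
sigma_syst = np.outer([0.01, 0.01, 0.02], [0.01, 0.01, 0.02])  # fully correlated example
cov_inv = np.linalg.inv(sigma_stat + sigma_syst)

def chisq(nuisance, phi2):
    d = x_meas - model(phi2, *nuisance)
    return d @ cov_inv @ d

# Minimise over the nuisance parameters for each fixed value of phi2.
scan = np.radians(np.linspace(70.0, 95.0, 101))
profile = [minimize(chisq, x0=[0.1, 0.0], args=(p,), method="Nelder-Mead").fun
           for p in scan]
dchi2 = np.array(profile) - min(profile)
print("best phi2 [deg]:", round(np.degrees(scan[np.argmin(profile)]), 1))
\end{verbatim}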
The value of $\Delta \chi^2$ from the global minimum is finally converted into a $p$-value scan, assuming it is distributed with one degree of freedom, from which confidence intervals can be derived. \section{\boldmath Systematic correlations within systems} \label{sec:nbb} Before delving into the main point regarding amplitude model correlations, it may be advantageous to introduce this difficult topic by digressing to conceptually simpler systematic correlations that can be trivially accounted for here. One such example is the number of $B {\ensuremath{\offsetoverline{\PB}}}\xspace$ pairs produced at ${\ensuremath{\Pe^+\Pe^-}}\xspace$ machines operating at the $\Upsilon(4S)$ resonance, $N_{B{\ensuremath{\offsetoverline{\PB}}}\xspace}$, that enters the absolute branching fraction calculations in $B \ensuremath{\rightarrow}\xspace \pi \pi$ decays through \begin{equation} {\ensuremath{\mathcal{B}}}\xspace^{ij} = \frac{N^{ij}}{\epsilon^{ij}N_{B{\ensuremath{\offsetoverline{\PB}}}\xspace}}, \end{equation} where $N^{ij}$ is the extracted signal yield and $\epsilon^{ij}$ is the reconstruction efficiency of that mode. Although equal production of {\ensuremath{\Bu}}\xspace {\ensuremath{\Bub}}\xspace and {\ensuremath{\B^0}}\xspace {\ensuremath{\Bbar{}^0}}\xspace pairs is implicitly assumed here for simplicity, this will need to be evaluated at \mbox{Belle~II}\xspace as the current uncertainties on their rates~\cite{ParticleDataGroup:2020ssz} would otherwise constitute the dominant systematic instead of those arising from $N_{B{\ensuremath{\offsetoverline{\PB}}}\xspace}$. It can immediately be seen that all three branching fractions are 100\% systematically correlated in $N_{B{\ensuremath{\offsetoverline{\PB}}}\xspace}$, because as a quantity that is independent of the channel being studied, whatever direction it fluctuates in, all branching fractions must follow suit by the same factor. \begin{table}[!htb] \centering \begin{tabular}{|c|c|} \hline Parameter & \mbox{Belle~II}\xspace projection\\ \hline ${\ensuremath{\mathcal{B}}}\xspace({\ensuremath{\pion^+}}\xspace {\ensuremath{\pion^0}}\xspace)$ ($10^{-6}$) & $\phantom{+}5.86 \pm 0.03 \pm 0.09$\\ ${\ensuremath{\mathcal{B}}}\xspace({\ensuremath{\pion^+}}\xspace {\ensuremath{\pion^-}}\xspace)$ ($10^{-6}$) & $\phantom{+}5.04 \pm 0.03 \pm 0.08$\\ ${\ensuremath{\mathcal{B}}}\xspace({\ensuremath{\pion^0}}\xspace {\ensuremath{\pion^0}}\xspace)$ ($10^{-6}$) & $\phantom{+}1.31 \pm 0.03 \pm 0.03$\\ $\mathcal{A}_{{\ensuremath{C\!P}}\xspace}({\ensuremath{\pion^+}}\xspace {\ensuremath{\pion^-}}\xspace)$ & $+0.33 \pm 0.01 \pm 0.03$\\ $\mathcal{S}_{{\ensuremath{C\!P}}\xspace}({\ensuremath{\pion^+}}\xspace {\ensuremath{\pion^-}}\xspace)$ & $-0.64 \pm 0.01 \pm 0.01$\\ $\mathcal{A}_{{\ensuremath{C\!P}}\xspace}({\ensuremath{\pion^0}}\xspace {\ensuremath{\pion^0}}\xspace)$ & $+0.14 \pm 0.03 \pm 0.01$\\ \hline \end{tabular} \caption{Projections for $B \ensuremath{\rightarrow}\xspace \pi \pi$ physics observables with $50 \ensuremath{\aunit{ab}}\xspace^{-1}$ taken from ref.~\cite{Kou:2018nap} where the first uncertainty is statistical and the second is systematic.} \label{tab:nbb} \end{table} To illustrate, I repeat the $\phi_2$ projection for \mbox{Belle~II}\xspace with $50 \ensuremath{\aunit{ab}}\xspace^{-1}$ with and without accounting for systematic correlation arising from $N_{B{\ensuremath{\offsetoverline{\PB}}}\xspace}$. Input is borrowed from ref.~\cite{Kou:2018nap} and displayed verbatim in table~\ref{tab:nbb}. 
As the systematic uncertainty is considered to be irreducible and kept at the $1.37\%$ level from \mbox{Belle}\xspace, it is the dominant expected systematic by far. For the purposes of demonstrating impact on $\phi_2$, I will then assume that the branching fraction systematics are entirely due to the uncertainty in $N_{B{\ensuremath{\offsetoverline{\PB}}}\xspace}$, and thus the systematic correlation matrix can be immediately written down as shown in table~\ref{tab:nbbcorr}. The only known statistical correlation is between $\mathcal{A}_{{\ensuremath{C\!P}}\xspace}({\ensuremath{\pion^+}}\xspace {\ensuremath{\pion^-}}\xspace)$ and $\mathcal{S}_{{\ensuremath{C\!P}}\xspace}({\ensuremath{\pion^+}}\xspace {\ensuremath{\pion^-}}\xspace)$, set at $+0.10$ from the Belle result. \begin{table}[!htb] \centering \begin{tabular}{|c|cccccc|} \hline & ${\ensuremath{\mathcal{B}}}\xspace({\ensuremath{\pion^+}}\xspace {\ensuremath{\pion^0}}\xspace)$ & ${\ensuremath{\mathcal{B}}}\xspace({\ensuremath{\pion^+}}\xspace {\ensuremath{\pion^-}}\xspace)$ & ${\ensuremath{\mathcal{B}}}\xspace({\ensuremath{\pion^0}}\xspace {\ensuremath{\pion^0}}\xspace)$ & $\mathcal{A}_{{\ensuremath{C\!P}}\xspace}({\ensuremath{\pion^+}}\xspace {\ensuremath{\pion^-}}\xspace)$ & $\mathcal{S}_{{\ensuremath{C\!P}}\xspace}({\ensuremath{\pion^+}}\xspace {\ensuremath{\pion^-}}\xspace)$ & $\mathcal{A}_{{\ensuremath{C\!P}}\xspace}({\ensuremath{\pion^0}}\xspace {\ensuremath{\pion^0}}\xspace)$\\ \hline ${\ensuremath{\mathcal{B}}}\xspace({\ensuremath{\pion^+}}\xspace {\ensuremath{\pion^0}}\xspace)$ & $+1$ & & & & &\\ ${\ensuremath{\mathcal{B}}}\xspace({\ensuremath{\pion^+}}\xspace {\ensuremath{\pion^-}}\xspace)$ & $+1$ & $+1$ & & & &\\ ${\ensuremath{\mathcal{B}}}\xspace({\ensuremath{\pion^0}}\xspace {\ensuremath{\pion^0}}\xspace)$ & $+1$ & $+1$ & $+1$ & & &\\ $\mathcal{A}_{{\ensuremath{C\!P}}\xspace}({\ensuremath{\pion^+}}\xspace {\ensuremath{\pion^-}}\xspace)$ & $\phantom{+}0$ & $\phantom{+}0$ & $\phantom{+}0$ & $+1$ & &\\ $\mathcal{S}_{{\ensuremath{C\!P}}\xspace}({\ensuremath{\pion^+}}\xspace {\ensuremath{\pion^-}}\xspace)$ & $\phantom{+}0$ & $\phantom{+}0$ & $\phantom{+}0$ & $\phantom{+}0$ & $+1$ &\\ $\mathcal{A}_{{\ensuremath{C\!P}}\xspace}({\ensuremath{\pion^0}}\xspace {\ensuremath{\pion^0}}\xspace)$ & $\phantom{+}0$ & $\phantom{+}0$ & $\phantom{+}0$ & $\phantom{+}0$ & $\phantom{+}0$ & $+1$\\ \hline \end{tabular} \caption{Systematic correlation matrix between $B \ensuremath{\rightarrow}\xspace \pi \pi$ physics observables assuming only the uncertainty in $N_{B{\ensuremath{\offsetoverline{\PB}}}\xspace}$ contributes.} \label{tab:nbbcorr} \end{table} The $\phi_2$ scan is then conducted in the vicinity of the SM solution with and without systematic correlations, the results of which can be seen in figure~\ref{fig:nbb}. The leading edge of the solution consistent with the SM is seen to improve by $0.4^\circ$ when accounting for systematic correlations, a striking result within the context of the sub-degree precision anticipated at \mbox{Belle~II}\xspace. At a first glance, this may seem counter-intuitive as some may recall the familiar summation of correlated uncertainties linearly over the more favourable summation in quadrature for uncorrelated cases. However, this is more applicable to instances of single physics parameters, whereas between physics parameters, correlations restrict statistical freedom. 
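In terms of implementation, the systematic covariance encoded by table~\ref{tab:nbbcorr} can be assembled directly from the uncertainties of table~\ref{tab:nbb}, as sketched below; the only statistical correlation included is the $+0.10$ between $\mathcal{A}_{{\ensuremath{C\!P}}\xspace}({\ensuremath{\pion^+}}\xspace {\ensuremath{\pion^-}}\xspace)$ and $\mathcal{S}_{{\ensuremath{C\!P}}\xspace}({\ensuremath{\pion^+}}\xspace {\ensuremath{\pion^-}}\xspace)$ mentioned above. This is a minimal sketch of the covariance bookkeeping only, not of the full $\phi_2$ scan.
\begin{verbatim}
import numpy as np

# Ordering: B(pi+pi0), B(pi+pi-), B(pi0pi0), A_CP(+-), S_CP(+-), A_CP(00).
stat = np.array([0.03, 0.03, 0.03, 0.01, 0.01, 0.03])
syst = np.array([0.09, 0.08, 0.03, 0.03, 0.01, 0.01])

corr_stat = np.identity(6)
corr_stat[3, 4] = corr_stat[4, 3] = 0.10     # A_CP/S_CP(+-) statistical correlation

corr_syst = np.identity(6)
corr_syst[:3, :3] = 1.0                      # branching fractions move together via N_BB

cov_with = np.outer(stat, stat) * corr_stat + np.outer(syst, syst) * corr_syst
cov_without = np.outer(stat, stat) * corr_stat + np.diag(syst ** 2)

def correlation(cov):
    d = np.sqrt(np.diag(cov))
    return cov / np.outer(d, d)

# The branching-fraction block retains a large positive correlation only in the first case.
print(np.round(correlation(cov_with)[:3, :3], 2))
print(np.round(correlation(cov_without)[:3, :3], 2))
\end{verbatim}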
In this example, because the uncertainty in $N_{B{\ensuremath{\offsetoverline{\PB}}}\xspace}$ is not allowed to nonsensically follow three independent statistical distributions, as would be implied by the identity correlation matrix, the $\phi_2$ constraint improves in consequence.
\begin{figure}[!htb] \centering \includegraphics[height=150pt,width=!]{figs/nbb.eps} \caption{$p$-value scan of $\phi_2$ where the horizontal dashed line shows the $1\sigma$ bound. The blue curve shows the constraint ignoring systematic correlations, while the red considers them in $N_{B{\ensuremath{\offsetoverline{\PB}}}\xspace}$.} \label{fig:nbb} \end{figure}
Although the knowledge of $N_{B{\ensuremath{\offsetoverline{\PB}}}\xspace}$ is expected to dominate the systematic uncertainty on the branching fractions, ideally the full systematic covariance matrix will be constructed in future analyses considering all sources. For example, as common control samples provide the tracking, particle identification and {\ensuremath{\pion^0}}\xspace\ reconstruction uncertainties, the branching fractions are again systematically correlated in these categories. Concurrently, the {\ensuremath{C\!P}}\xspace-violating parameters are also affected, with the timing resolution and flavour-tagging performance being obtained from common studies. However, perhaps the most dangerous systematic here would be the shared method for evaluating tag-side interference from doubly-Cabibbo-suppressed decays~\cite{Long:2003wq}, which is also considered to be an irreducible systematic up until the point where it becomes statistically advantageous to rely exclusively on semileptonic flavour tags.
\section{Amplitude model correlations between systems} \label{sec:rhopi}
Now that it has been established that neglecting amplitude model correlations, either in part or entirely, can bias $\phi_2$ and leave precision on the table, attention turns to the other system also involving the $\rho$ lineshape in an amplitude analysis, ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace (\rho \pi)^0$. While this involves a single analysis in which the complete set of systematic uncertainties is already considered as standard, the question remains whether the global treatment of amplitude model correlations in $B \ensuremath{\rightarrow}\xspace \rho\rho$ is sufficient, or whether its scope should also be expanded to include ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace (\rho \pi)^0$. Recalling the discussion of section~\ref{sec:nbb}, this is analogous to the realisation that all branching fractions of the $B \ensuremath{\rightarrow}\xspace \pi \pi$ and $\rho \rho$ systems are also systematically correlated through $N_{B{\ensuremath{\offsetoverline{\PB}}}\xspace}$.
\subsection{Amplitude model}
The overlap of the three ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace (\rho \pi)^0$ charge combinations in the ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\pion^+}}\xspace {\ensuremath{\pion^-}}\xspace {\ensuremath{\pion^0}}\xspace$ phase space allows the tree ($T$) and penguin ($P$) processes involved to be distinguished, leading to the direct measurement of $\phi_2$ in a single analysis~\cite{Snyder:1993mx}.
Additional constraints from isospin pentagonal relations involving the charged $B$ modes can also help to improve the constraint~\cite{Lipkin:1991st}, though this extension will not be addressed further here; suffice it to say that a global approach to amplitude model correlations in the wider $B \ensuremath{\rightarrow}\xspace \rho \pi$ system will likely be needed as well in light of what has been seen so far. The single analysis applies the isospin symmetry argument to the penguin amplitudes rather than the tree amplitudes, as is the case in other systems, lessening the impact of theoretical uncertainties~\cite{Gronau:2005pq}. The decomposition of the complex amplitude couplings by charge and $B$ flavour is given in eq.~\ref{eq:rhopi}, \begin{alignat}{6} &A^{+-} &&= T^{+-}e^{-i\phi_2} &&+ P^{+-}, \hspace{20pt} &&\bar A^{-+} &&= T^{+-}e^{+i\phi_2} &&+ P^{+-}, \nonumber \\ &A^{-+} &&= T^{-+}e^{-i\phi_2} &&+ P^{-+}, \hspace{20pt} &&\bar A^{+-} &&= T^{-+}e^{+i\phi_2} &&+ P^{-+}, \nonumber \\ &A^{00} &&= T^{00}e^{-i\phi_2} &&+ \frac{1}{2}(P^{+-} + P^{-+}), \hspace{20pt} &&\bar A^{00} &&= T^{00}e^{+i\phi_2} &&+ \frac{1}{2}(P^{+-} + P^{-+}), \label{eq:rhopi} \end{alignat} where the superscripts represent the $\rho$ followed by the pion charge. It can be seen that sensitivity to the penguin amplitudes amongst the trees arises as a result of the isospin argument removing the independence of the colour-suppressed ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\rhomeson^0}}\xspace {\ensuremath{\pion^0}}\xspace$ penguin. However, by construction the tree and penguin couplings are highly correlated, so a parameterisation expanding the product of isobar sums for each of the three terms in eq.~\ref{eq:tdrate} was proposed~\cite{Quinn:2000by}. For each resulting amplitude-squared-level form factor composed solely of strong dynamics, the expression for the corresponding coupling combination is then substituted with an independent free parameter. The parameter space thus increases dramatically in return for statistical stability of the fit, after which a $\chi^2$ minimisation mapping the original tree and penguin amplitudes to the 27 bilinear coefficients can be executed to recover $\phi_2$. At this time, it is unclear whether this method will persist going forwards, due to the unphysical requirement that the higher $\rho$ resonances share the same $P/T$ ratios and the related impracticalities in adding further structures to the amplitude model. For the purposes of this study, since the tree and penguin amplitudes are known from MC generation, the fit will be performed directly in that paradigm for simplicity, with $\phi_2$ as a free parameter. The input amplitudes themselves are still obtained from a $\chi^2$ fit to the Belle bilinear coefficients, taking the solution consistent with SM expectations~\cite{Kusaka:2007dv,Kusaka:2007mj}. The parameters are given in table~\ref{tab:rhopi}.
\begin{table}[!htb] \centering \begin{tabular}{|c|c|} \hline Parameter & Value \\ \hline $\phi_2$ & $81.4^\circ$ \\ $\Re(T^{+-})$ & $+0.80$ (fixed)\\ $\Im(T^{+-})$ & $\phantom{+}0\phantom{.00}$ (fixed)\\ $\Re(T^{-+})$ & $+0.56$\\ $\Im(T^{-+})$ & $+0.11$\\ $\Re(T^{00})$ & $+0.07$\\ $\Im(T^{00})$ & $+0.37$\\ $\Re(P^{+-})$ & $+0.20$\\ $\Im(P^{+-})$ & $+0.10$\\ $\Re(P^{-+})$ & $-0.30$\\ $\Im(P^{-+})$ & $-0.01$\\ \hline \end{tabular} \caption{MC input for ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace (\rho \pi)^0$.} \label{tab:rhopi} \end{table} \subsection{Results} The ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace (\rho \pi)^0$ effective yield accounting for flavour-tagging dilution is set at 30k events for the $50 \ensuremath{\aunit{ab}}\xspace^{-1}$ expected at \mbox{Belle~II}\xspace, with two scenarios considered for systematic variations. The first is that each system is treated independently, with $B \ensuremath{\rightarrow}\xspace \rho\rho$ already adopting the globally treated correlations proposed in the previous section as standard, while in the second, the systematic variations are common to both systems. The model uncertainty on $\phi_2$ and how it correlates with the $B \ensuremath{\rightarrow}\xspace \rho \rho$ parameters are again given in appendix~\ref{sec:app}. A $\phi_2$ scan is performed over the lesser known quantities, which again includes the strength of the $B \ensuremath{\rightarrow}\xspace \rho \rho$ model uncertainty in that system and the total uncertainty in ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace (\rho \pi)^0$ relative to $B \ensuremath{\rightarrow}\xspace \rho \rho$. Figure~\ref{fig:rhopicorr}a shows the scope of the bias in $\phi_2$, while Figure~\ref{fig:rhopicorr}b demonstrates the loss in statistical power when treating each system independently. From these plots, it is clear that a coordinated approach to these analyses would be required if ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace (\rho \pi)^0$ dominates the precision when at the same time a sizeable model uncertainty is present in the $B \ensuremath{\rightarrow}\xspace \rho \rho$ system. \begin{figure}[!htb] \centering \includegraphics[height=139pt,width=!]{figs/Dphi2RhoPi.png} \includegraphics[height=139pt,width=!]{figs/phi2RhoPiScale.png} \put(-390,120){(a)} \put(-174,120){(b)} \caption{Performance of the $\phi_2$ constraint under various conditions. The bias in $\phi_2$ when treating amplitude model correlations in $B \ensuremath{\rightarrow}\xspace \rho \rho$ and ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace (\rho \pi)^0$ separately as opposed to globally is shown in (a), while the degradation of the total uncertainty in $\phi_2$ is shown in (b). The jagged edges indicate the limits of the scan, outside of which the contents can be ignored.} \label{fig:rhopicorr} \end{figure} \subsection{A word on amplitude model correlations between experiments} So far, no mention has been made on the role of \mbox{LHCb}\xspace, which at this time is expected to provide only partial input to the $\phi_2$ constraint, primarily from the ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\rhomeson^0}}\xspace {\ensuremath{\rhomeson^0}}\xspace$ decay. 
Depending on the statistical power of this analysis, which will be considerable, it has already been shown that leaving each analysis, and by extension each experiment, to its own devices concerning the handling of amplitude model correlations can be most detrimental to the average. The logical heresy is to combine \mbox{Belle~II}\xspace and \mbox{LHCb}\xspace data in a single analysis and jointly handle the systematics. In lieu of this ideal scenario, which is understandably fraught with political difficulties, an unbiased outcome can still be achieved without the sharing of data sets through rigorous bookkeeping. These experiments can communicate with each other to define the $\rho$ pole parameters, amongst others, and provide a standard set of variations for all analyses to use. In return, each analysis can report the signed fit residual obtained for each systematic variation on all physics observables measured, so that systematic covariance matrices can be properly constructed.
\section{\boldmath Amplitude model correlations within systems} \label{sec:rhorho}
Although amplitude analysis has seen limited involvement~\cite{Aaij:2015ria} in the $B \ensuremath{\rightarrow}\xspace \rho \rho$ constraint of $\phi_2$, it stands to reason that this approach will become more attractive in controlling uncertainties as data samples increase. For the small cost of modelling one additional variable over current analyses, the necessary degrees of freedom are acquired to harness the full statistics, particularly of the limiting colour-suppressed ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\rhomeson^0}}\xspace {\ensuremath{\rhomeson^0}}\xspace$ decay, thereby improving $\phi_2$ precision in this sector, and even opening the possibility to determine $\phi_2$ separately for each of the three polarisation configurations of $B \ensuremath{\rightarrow}\xspace \rho \rho$. Furthermore, amplitude analysis allows the direct modelling of interfering components such as the dipion $I=0, 1$ resonant contributions and other non-resonant S-wave effects such as elastic and inelastic particle rescattering processes in the vicinity of the $\rho$, thus reducing model uncertainty estimations. Perhaps most importantly, isospin-breaking contributions known to bias $\phi_2$ can also be accounted for in the amplitude model, such as with the $\rho^0$--$\omega$ mixing lineshape of refs.~\cite{Aaij:2019hzr,Aaij:2019jaq} and the structure of $I=1$ finite $\rho$-width effects suggested in ref.~\cite{Falk:2003uq}. One aspect these three analyses have in common is fixed $\rho$ pole parameters, so therein lies the potential for systematic model uncertainties to impact the $\phi_2$ average. Unlike the $N_{B{\ensuremath{\offsetoverline{\PB}}}\xspace}$ case discussed in section~\ref{sec:nbb}, these are not multiplicative factors to any physics parameter and as such the correlation matrix cannot immediately be written down. In order for the covariance matrix to be derived, repeated randomised systematic variations on a sample of each $B \ensuremath{\rightarrow}\xspace \rho \rho$ channel need to be applied. To generate these samples, amplitude models are first required, for which information is sparse, meaning that assumptions will have to be made on the magnitudes and relative phases between the $B \ensuremath{\rightarrow}\xspace \rho \rho$ polarisations.
In previous works~\cite{Dalseno:2018hvf,Dalseno:2019kps}, unknown physics parameters were uniformly distributed in an ensemble test to give a sense of what to expect on average in their respective $\phi_2$ studies. However in this case, applying systematic variations on top of all three amplitude analyses in an ensemble is not a practical endeavour and would be of unclear benefit, besides. Therefore, conclusions from this paper will be limited to identifying the scale of potential bias in $\phi_2$ induced by neglecting amplitude model correlations, as opposed to providing a definitive range. \subsection{Amplitude model} Yields are set based on \mbox{Belle}\xspace results according to expectations for $50 \ensuremath{\aunit{ab}}\xspace^{-1}$ to be collected with \mbox{Belle~II}\xspace. I consider rudimentary models with contributions to the 4-body phase space coming only from the channels known to exist in the analysis region. The amplitude for each intermediate state at position $\Phi_4$, is parameterised as \begin{equation} A_i(\Phi_4) = B^L_B(\Phi_4) \cdot [B^L_{R_1}(\Phi_4) T_{R_1} (\Phi_4)] \cdot [B^L_{R_2}(\Phi_4) T_{R_2} (\Phi_4)] \cdot S_i(\Phi_4), \end{equation} where $B^L_B$ represents the production Blatt-Weisskopf barrier factor~\cite{VonHippel:1972fg} depending on the orbital angular momentum between the products of the $B$ decays, $L$. Two resonances will appear in each isobar, denoted by $R_1$ and $R_2$, for which respective decay barrier factors are also assigned. The Breit-Wigner propagators are represented by $T$, while the overall spin amplitude is given by $S$. Each isobar is Bose-symmetrised as necessary so that the total amplitude is always symmetric under the exchange of like-sign pions. The Blatt-Weisskopf penetration factors account for the finite size of the decaying resonances by assuming a square-well interaction potential with radius $r$. They depend on the breakup momentum between the decay products $q$, and the orbital angular momentum between them $L$. Their explicit expressions used in this analysis are \begin{eqnarray} B^0(q) &=& 1, \nonumber \\ B^1(q) &=& \frac{1}{\sqrt{1+(qr)^2}}, \nonumber\\ B^2(q) &=& \frac{1}{\sqrt{9+3(qr)^2+(qr)^4}}. \end{eqnarray} Spin amplitudes are constructed with the covariant tensor formalism based on the Rarita-Schwinger conditions~\cite{Rarita:1941mf}. The spin $S$, of some state with 4-momentum $p$, and spin projection $s_z$, is represented by a rank-$S$ polarisation tensor that is symmetric, traceless and orthogonal to $p$. These conditions reduce the number of independent elements to $2S+1$ in accordance with the number of degrees of freedom available to a spin-$S$ state. The sum over these polarisation indices of the inner product of polarisation tensors form the fundamental basis on which all spin amplitudes are built. Called the spin projection operator $P$, it projects an arbitrary tensor onto the subspace spanned by the spin projections of the spin-$S$ state. Another particularly useful object is the relative orbital angular momentum spin tensor $L$, which for some process $R \ensuremath{\rightarrow}\xspace P_1 P_2$, is the relative momenta of the decay products $q_R \equiv p_1 - p_2$ projected to align with the spin of $R$, \begin{equation} \label{eq:orbital} L_{\mu_1 \mu_2 ... \mu_L}(p_R, q_R) = P_{\mu_1 \mu_2 ... \mu_L \nu_1 \nu_2 ... \nu_L} (p_R) q_R^{\nu_1} q_R^{\nu_2} ... q_R^{\nu_L}, \end{equation} where the number of indices representing the tensor rank is equal to the value of $L$. 
Finally, to ensure that the spin amplitude behaves correctly under parity transformation, it is sometimes necessary to include the Levi-Civita totally antisymmetric tensor $\epsilon_{abcd}p_R^d$. Each stage of a decay is represented by a Lorentz scalar obtained by contracting an orbital tensor between the decay products with a spin wavefunction of equal rank representing the final state. Three spin topologies are necessary for $B \ensuremath{\rightarrow}\xspace \rho \rho$ as $S$-, $P$- and $D$-waves are permitted between the vector resonances, with total spin densities, \begin{eqnarray} \label{eq:spin} &&S\text{-wave}: \hspace{10pt} S \propto L_a(p_{\rho_1}, q_{\rho_1})L^a(p_{\rho_2}, q_{\rho_2}), \nonumber\\ &&P\text{-wave}: \hspace{10pt} S \propto \epsilon_{abcd} L^d(p_{B}, q_{B}) L^c(p_{\rho_1}, q_{\rho_1}) L^b(p_{\rho_2}, q_{\rho_2}) p^a_{B}, \nonumber\\ &&D\text{-wave}: \hspace{10pt} S \propto L_{ab}(p_{B}, q_{B}) L^b(p_{\rho_1}, q_{\rho_1}) L^a(p_{\rho_2}, q_{\rho_2}). \end{eqnarray} In general, resonance lineshapes are described by Breit-Wigner propagators as a function of the energy-squared $s$, \begin{equation} T(s) = \frac{1}{M^2(s) - s - i\sqrt{s}\Gamma(s)}, \end{equation} where $M^2(s)$ is the energy-dependent mass and $\Gamma(s)$ is the total width, which is normalised such that it takes the nominal value $\Gamma_0$ at the pole mass $m_0$. For the {\ensuremath{\rhomeson^0}}\xspace\ resonance, the Gounaris-Sakurai parameterisation is used to provide an analytic expression for $M^2(s)$ and $\Gamma(s)$~\cite{Gounaris:1968mw}.
\subsection{Pseudo-experiment generation method}
The unknown strong complex couplings between contributions in the amplitude model are partly inspired by reverse-engineering the known branching fractions for each polarisation. The Monte Carlo (MC) is based on the decay rates in phase space, which for ${\ensuremath{\Bu}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\rhomeson^+}}\xspace {\ensuremath{\rhomeson^0}}\xspace$ is \begin{equation} \Gamma(q) = \frac{1 + q}{2} |A|^2 + \frac{1 - q}{2} |\bar A|^2, \end{equation} where $q = +1\,(-1)$ for {\ensuremath{\Bu}}\xspace\ ({\ensuremath{\Bub}}\xspace). On the other hand, the time-dependent decay rates of {\ensuremath{\B^0}}\xspace\ and {\ensuremath{\Bbar{}^0}}\xspace\ decays to a self-conjugate final state are given by \begin{eqnarray} \label{eq:tdrate} \Gamma(t) &\propto& e^{-t/\tau}[(|A|^2 + |\bar A|^2) + (|A|^2 - |\bar A|^2)\cos \Delta m_d t - 2 \Im (\bar A A^*) \sin \Delta m_d t], \nonumber\\ \bar \Gamma(t) &\propto& e^{-t/\tau}[(|A|^2 + |\bar A|^2) - (|A|^2 - |\bar A|^2)\cos \Delta m_d t + 2 \Im (\bar A A^*) \sin \Delta m_d t], \end{eqnarray} respectively, where $A$ is the static decay amplitude, $\tau$ is the {\ensuremath{\B^0}}\xspace\ lifetime and $\Delta m_d$ is the mass difference between the $B_H$ and $B_L$ mass eigenstates. This form assumes no {\ensuremath{C\!P}}\xspace violation in the mixing, $|q/p| = 1$, and that the total decay rate difference between the two mass eigenstates is negligible. The total amplitude $A$ can be written in the typical isobar approach as the coherent sum over the intermediate states in the model with amplitude $A_i$, as a function of 4-body phase space position $\Phi_4$, \begin{equation} A \equiv \sum_i a_iA_i(\Phi_4), \end{equation} where $a_i$ is a strong complex coupling determined directly from the data.
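To make these building blocks concrete, the sketch below assembles a toy isobar amplitude from Blatt-Weisskopf factors and a relativistic Breit-Wigner with an energy-dependent width, summed coherently over $S$-, $P$- and $D$-wave contributions. The Gounaris-Sakurai corrections, spin factors and Bose symmetrisation are omitted, the hadron radius and the couplings are assumed values, and the kinematics are deliberately simplified to a one-dimensional slice of the phase space.
\begin{verbatim}
import numpy as np

M_B, M_PI = 5.2797, 0.13957              # GeV/c^2
M_RHO, W_RHO, R_BW = 0.769, 0.151, 1.5   # pole mass, width (GeV), assumed radius (GeV^-1)

def breakup_q(s, m1, m2):
    """Breakup momentum for a state of mass-squared s decaying to masses m1, m2."""
    val = (s - (m1 + m2) ** 2) * (s - (m1 - m2) ** 2)
    return np.sqrt(np.maximum(val, 0.0)) / (2.0 * np.sqrt(s))

def blatt_weisskopf(q, L, r=R_BW):
    z = (q * r) ** 2
    if L == 0:
        return np.ones_like(z)
    if L == 1:
        return 1.0 / np.sqrt(1.0 + z)
    return 1.0 / np.sqrt(9.0 + 3.0 * z + z ** 2)      # L == 2

def breit_wigner_rho(s):
    """Relativistic Breit-Wigner with a P-wave energy-dependent width (no GS terms)."""
    q, q0 = breakup_q(s, M_PI, M_PI), breakup_q(M_RHO ** 2, M_PI, M_PI)
    gamma = W_RHO * (M_RHO / np.sqrt(s)) * (q / q0) ** 3 \
            * (blatt_weisskopf(q, 1) / blatt_weisskopf(q0, 1)) ** 2
    return 1.0 / (M_RHO ** 2 - s - 1j * np.sqrt(s) * gamma)

def isobar(s12, s34, L):
    """B_B^L(q_B) [B^1 T](s12) [B^1 T](s34); the spin factor S_i is omitted here."""
    q_b = breakup_q(M_B ** 2, np.sqrt(s12), np.sqrt(s34))
    leg = lambda s: blatt_weisskopf(breakup_q(s, M_PI, M_PI), 1) * breit_wigner_rho(s)
    return blatt_weisskopf(q_b, L) * leg(s12) * leg(s34)

# Coherent sum of S-, P- and D-wave isobars with illustrative couplings a_i.
couplings = {0: 1.0 + 0.0j, 1: 0.3 + 0.2j, 2: 0.0 - 0.5j}
s12 = np.linspace(0.3, 1.1, 400) ** 2        # one dipion mass scanned
s34 = np.full_like(s12, M_RHO ** 2)          # the other fixed at the rho pole
total = sum(a * isobar(s12, s34, L) for L, a in couplings.items())
peak = np.sqrt(s12[np.argmax(np.abs(total) ** 2)])
print(f"|A|^2 peaks near m(pipi) = {peak:.3f} GeV/c^2")
\end{verbatim}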
Incorporating a complex {\ensuremath{C\!P}}\xspace violation parameter $\lambda_i$, for each weak contribution in the phase space, the total $\bar A$ can be written as \begin{equation} \bar A \equiv \sum_i a_i \lambda_i \bar A_i(\bar \Phi_4) = \sum_i a_i \lambda_i A_i(\bar \Phi_4), \end{equation} for ${\ensuremath{\Bu}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\rhomeson^+}}\xspace {\ensuremath{\rhomeson^0}}\xspace$, where the phase space of the {\ensuremath{C\!P}}\xspace-conjugated process $\bar \Phi_4$, is set by convention to have the same sign as $\Phi_4$ for all amplitude contributions, leaving $A_i$ to contain only strong dynamics blind to flavour. Conversely, for ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\rhomeson^+}}\xspace {\ensuremath{\rhomeson^-}}\xspace$ and ${\ensuremath{\rhomeson^0}}\xspace {\ensuremath{\rhomeson^0}}\xspace$, \begin{equation} \bar A \equiv \sum_i a_i \lambda_i \bar A_i(\Phi_4) = \sum_i a_i \lambda_i A_i(\bar \Phi_4), \end{equation} the phase space of the {\ensuremath{C\!P}}\xspace-conjugated process $\bar \Phi_4$, must be transformed relative to the elected particle ordering under $C$ and $P$ conjugation in order to achieve $A_i$ containing only strong dynamics. Complex couplings are then evaluated through a $\chi^2$ fit relating the observed branching fractions for each isobar scaled to unity, to the fit fractions of each isobar calculated for the generated model in the 4-body phase space, \begin{equation} {\cal F}^{\rm pred}_i = \frac{\int (|A_i|^2 + |\bar A_i|^2) d\Phi_4}{\int \sum_i(|A_i|^2 + |\bar A_i|^2) d\Phi_4}. \end{equation} The branching fractions for each polarisation are set mostly with HFLAV input~\cite{Amhis:2019ckw} except where mentioned. As the longitudinal polarisation is known to dominate, the remainder is assigned exclusively to the $P$- or {\ensuremath{C\!P}}\xspace-odd P-wave for simplicity, while the longitudinal component is divided evenly between the $P$- or {\ensuremath{C\!P}}\xspace-even S- and D-waves for the flavour-specific and {\ensuremath{C\!P}}\xspace-conjugate final states, respectively. Naturally, there are 2 solutions for each free strong coupling, so whichever solution the fit converges to first is taken to generate the MC sample for each $B \ensuremath{\rightarrow}\xspace \rho \rho$ channel. Position in phase space is provided by the \texttt{GENBOD} algorithm~\cite{James:1968gu} and \texttt{qft++} gives the spin densities~\cite{Williams:2008wu}. \subsubsection{${\ensuremath{\Bu}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\rhomeson^+}}\xspace {\ensuremath{\rhomeson^0}}\xspace$} This amplitude analysis should be rather straight-forward as the phase space can be restricted to limit contributions from the $a_1(1260)$ resonances. Here, the yield is set to 100k events and the dipion range is restricted to be below the typical $1.1 \ensuremath{\aunit{Ge\kern -0.1em V\!/}c^2}\xspace$. The input branching fractions with the determined couplings are shown in table~\ref{tab:rhoprhoz}. Note that here and throughout, the spin amplitudes given in eq.~\ref{eq:spin} are not normalised over the phase space, so there is no direct relation between the fitted couplings and their corresponding branching fractions. This also means that the relative strengths of each partial wave cannot be inferred from the couplings either as each spin factor contains different momentum scales by eq.~\ref{eq:orbital}, depending on the number of orbital angular momentum tensors involved. 
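To illustrate this last point numerically, the toy calculation below converts the quoted couplings into fit fractions using assumed values for the unnormalised integrals $N_i$ of each isobar over the phase space. The integrals are reverse-engineered here purely so that the fractions reproduce the branching-fraction ratios of table~\ref{tab:rhoprhoz}; they are not the true phase-space integrals of the model.
\begin{verbatim}
import numpy as np

# Couplings of the S-, P- and D-wave contributions and assumed values for the
# unnormalised integrals N_i (integral of |A_i|^2 over the 4-body phase space).
couplings = {"S": 1.0 + 0.0j, "P": 3.8 + 1.6j, "D": 0.0 - 10.4j}
integrals = {"S": 4.75e-1, "P": 2.94e-3, "D": 4.39e-3}   # assumed, arbitrary scale

weights = {w: abs(couplings[w]) ** 2 * integrals[w] for w in couplings}
total = sum(weights.values())

for w in couplings:
    print(f"{w}-wave: |a_i| = {abs(couplings[w]):6.2f}"
          f"   fit fraction = {weights[w] / total:.3f}")
\end{verbatim}
Despite the $D$-wave coupling being an order of magnitude larger than the $S$-wave one, both carry the same fraction once the unnormalised integrals are folded in, which is precisely why the couplings alone carry no direct physical meaning.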
\begin{table}[!htb] \centering \begin{tabular}{|c|ccc|}\hline Wave & {\ensuremath{\mathcal{B}}}\xspace ($10^{-6}$) & $\Re(a_i)$ & $\Im(a_i)$\\ \hline S & 11.4 & 1 (fixed) & 0 (fixed)\\ P & \phantom{0}1.2 & $+3.8$ & \hspace{5pt}$+1.6$\\ D & 11.4 & $\phantom{+}0.0$ & $-10.4$\\ \hline \end{tabular} \caption{Branching fraction input with corresponding reverse-engineered couplings for ${\ensuremath{\Bu}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\rhomeson^+}}\xspace {\ensuremath{\rhomeson^0}}\xspace$.} \label{tab:rhoprhoz} \end{table} \subsubsection{${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\rhomeson^+}}\xspace {\ensuremath{\rhomeson^-}}\xspace$} As before, there should not be a lot of interference from the $a_1(1260)$ resonances to this colour-favoured decay, so the analysis region is kept the same. Though the total branching fractions are similar, dilution arising from imperfect flavour-tagging performance taken to be around the 30\% mark for \mbox{Belle~II}\xspace means that the yield is set to 30k events. The {\ensuremath{C\!P}}\xspace-violation parameter can be set from the known longitudinal quasi-two-body parameters assuming the solution closest to the SM expectation and uniformity between polarisations. The input branching fractions with the determined couplings and {\ensuremath{C\!P}}\xspace-violation parameters are shown in table~\ref{tab:rhoprhom}. \begin{table}[!htb] \centering \begin{tabular}{|c|ccccc|}\hline Wave & {\ensuremath{\mathcal{B}}}\xspace ($10^{-6}$) & $\Re(a_i)$ & $\Im(a_i)$ & $\Re(\lambda_i)$ & $\Im(\lambda_i)$\\ \hline S & 13.7 & 1 (fixed) & 0 (fixed) & $-0.99$ (fixed) & $-0.14$ (fixed)\\ P & \phantom{0}0.3 & $+1.8$ & $\phantom{+0}0.0$ & " & "\\ D & 13.7 & $\phantom{+}0.0$ & $-10.4$ & " & "\\ \hline \end{tabular} \caption{Branching fraction input with corresponding reverse-engineered couplings for ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\rhomeson^+}}\xspace {\ensuremath{\rhomeson^-}}\xspace$.} \label{tab:rhoprhom} \end{table} It should also be noted that despite an amplitude analysis being conducted here, it is very unlikely that a single solution for the effective $\phi_2^{+-}$ will emerge. As discussed in ref.~\cite{Dalseno:2018hvf}, a lack of interfering contributions with a sizeable penguin contribution means that the {\ensuremath{C\!P}}\xspace-violation parameter will likely factorise in the isobar sum such that $\Im(\bar A A^*) = \Im(\lambda A A^*) = \Im(\lambda|A|^2)$ in eq.~\ref{eq:tdrate}. As $|A|^2$ must be real-defined, the imaginary part of the aforementioned product evaluates to $\sin 2\phi^{+-}_2$, leaving two solutions remaining for $\phi^{+-}_2$. \subsubsection{${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\rhomeson^0}}\xspace {\ensuremath{\rhomeson^0}}\xspace$} According to ref.~\cite{Dalseno:2018hvf}, expanding the analysis space to include ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace a_1^\pm {\ensuremath{\pion^\mp}}\xspace$ can ultimately resolve the $\phi_2$ solution degeneracy in $B \ensuremath{\rightarrow}\xspace \rho \rho$. Novelty aside, this strategy is prudent as the colour-favoured ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace a_1^\pm {\ensuremath{\pion^\mp}}\xspace$ is otherwise either difficult to control systematically or statistically very expensive to remove. 
As such, the analysis range is defined as the dipion mass being less than $1.1 \ensuremath{\aunit{Ge\kern -0.1em V\!/}c^2}\xspace$ as before, or the 3-pion mass being below the production of open charm. The combined yield accounting for flavour-tagging performance is estimated in that previous work at 30k events. An additional topology is necessary for ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace a_1^\pm {\ensuremath{\pion^\mp}}\xspace$, arising from the S-wave between the products of the $a_1^\pm \ensuremath{\rightarrow}\xspace \rho^0 {\ensuremath{\pion^\pm}}\xspace$ decay, \begin{equation} S \propto L_a(p_{{\ensuremath{\B^0}}\xspace}, q_{{\ensuremath{\B^0}}\xspace}) P^{ab}(p_{a_1^\pm}) L_b(p_{{\ensuremath{\rhomeson^0}}\xspace}, q_{{\ensuremath{\rhomeson^0}}\xspace}). \end{equation} While a relative orbital angular momentum D-wave between the vector and pseudoscalar is possible, this has yet to be definitively seen, so is ignored at this time. Regarding the lineshape of the $a_1^\pm$, potential dispersive effects are neglected, setting $M^2(s)$ to its pole-mass squared. The energy-dependent width of the $a_1^\pm$ is calculated from the integral over its phase space as a function of $s$, \begin{equation} \Gamma_{a_1^\pm}(s) = \frac{1}{2\sqrt{s}} \int \sum_{\lambda=0,\pm 1} |A^\lambda_{a_1^\pm \ensuremath{\rightarrow}\xspace (\rho\pi)^\pm_S} (s)|^2 d\Phi_3, \end{equation} where $A$ is the transition amplitude of the cascade, itself being comprised of barrier factors, a spin density and lineshape, with a coherent sum taken over the open polarisation indices of the initial state. Its exclusive decay to $(\rho\pi)^\pm$ and isospin symmetry i.e., $\Gamma_{a_1^\pm \ensuremath{\rightarrow}\xspace {\ensuremath{\rhomeson^0}}\xspace {\ensuremath{\pion^\pm}}\xspace}(s) = \Gamma_{a_1^\pm \ensuremath{\rightarrow}\xspace \rho^\pm {\ensuremath{\pion^0}}\xspace}(s)$ is also assumed. The numerical form of the $a_1^\pm$ energy-dependent width can be seen in figure~\ref{fig:a1}. \begin{figure}[!htb] \centering \includegraphics[height=150pt,width=!]{figs/a1.eps} \caption{Energy-dependent width of the $a_1^\pm$.} \label{fig:a1} \end{figure} The input branching fractions with the determined couplings and {\ensuremath{C\!P}}\xspace-violation parameters are shown in table~\ref{tab:rhozrhoz}. In this case, only the \mbox{LHCb}\xspace results are used to set the ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\rhomeson^0}}\xspace {\ensuremath{\rhomeson^0}}\xspace$ branching fractions~\cite{Aaij:2015ria}, while the \mbox{Belle}\xspace result is used to determine the ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace a_1^\pm {\ensuremath{\pion^\mp}}\xspace$ model~\cite{Dalseno:2012hp}. 
\begin{table}[!htb] \centering \begin{tabular}{|c|ccccc|}\hline Wave & {\ensuremath{\mathcal{B}}}\xspace ($10^{-6}$) & $\Re(a_i)$ & $\Im(a_i)$ & $\Re(\lambda_i)$ & $\Im(\lambda_i)$\\ \hline $a_1^+ {\ensuremath{\pion^-}}\xspace$ & 8.6 & 1 (fixed) & 0 (fixed) & $-1.04$ (fixed) & $-0.27$ (fixed)\\ $a_1^- {\ensuremath{\pion^+}}\xspace$ & 2.5 & $+0.56$ & $-0.12$ & $-0.80$ (fixed) & $-0.54$ (fixed)\\ S & 0.4 & $\phantom{+}0.00$ & $+0.01$ & $-0.78$ (fixed) & $+0.25$ (fixed)\\ P & 0.1 & $+0.08$ & $\phantom{+}0.00$ & " & "\\ D & 0.4 & $+0.08$ & $+0.09$ & " & "\\ \hline \end{tabular} \caption{Branching fraction input with corresponding reverse-engineered couplings for ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\rhomeson^0}}\xspace {\ensuremath{\rhomeson^0}}\xspace$ and ${\ensuremath{\B^0}}\xspace \ensuremath{\rightarrow}\xspace a_1^\pm {\ensuremath{\pion^\mp}}\xspace$.} \label{tab:rhozrhoz} \end{table} \subsection{Results} An amplitude fit is performed to the MC sample for each decay in order to determine nominal values for the physics observables of interest against which to compare in the ensuing systematic variations. Three scenarios are defined: ``Current practice'', in which systematic correlations are not considered at all, ``Expected practice'' for which each analysis independently handles their own systematic correlations and finally ``Proposed practice'', where systematic correlations are globally accounted for. Sets of $\rho$ pole parameters are then generated by which to refit the MC samples and calculate covariance between physics observables. Although the {\ensuremath{\rhomeson^+}}\xspace and {\ensuremath{\rhomeson^0}}\xspace\ parameters are separately determined, they are essentially a manifestation of the same state with different charge, and so for the purposes of evaluating model uncertainties here, are also considered to be fully correlated at the theoretical level. Their parameters taken from ref.~\cite{ParticleDataGroup:2020ssz} are thus distributed with a multivariate normal distribution, recorded in table~\ref{res:rho}. \begin{table}[!htb] \centering \begin{tabular}{|c|c|cccc|} \hline & Value (GeV$/c^2$) & $m_0$({\ensuremath{\rhomeson^+}}\xspace) & $\Gamma_0({\ensuremath{\rhomeson^+}}\xspace)$ & $m_0$({\ensuremath{\rhomeson^0}}\xspace) & $\Gamma_0({\ensuremath{\rhomeson^0}}\xspace)$\\ \hline $m_0$({\ensuremath{\rhomeson^+}}\xspace) & $0.7665 \pm 0.0011$ & $+1$ & & &\\ $\Gamma_0({\ensuremath{\rhomeson^+}}\xspace)/c^2$ & $0.1502 \pm 0.0024$ & $\phantom{+}0$ & $+1$ & &\\ $m_0$({\ensuremath{\rhomeson^0}}\xspace) & $0.7690 \pm 0.0010$ & $+1$ & $\phantom{+}0$ & $+1$ &\\ $\Gamma_0({\ensuremath{\rhomeson^0}}\xspace)/c^2$ & $0.1509 \pm 0.0017$ & $\phantom{+}0$ & $+1$ & $\phantom{+}0$ & $+1$\\ \hline \end{tabular} \caption{Values and correlation matrix set for the $\rho$ pole parameters in systematic variations.} \label{res:rho} \end{table} Each MC sample is then refit applying each $\rho$ variation upon which amplitude model covariance matrices are constructed for each scenario. Obviously for the Current practice, the amplitude model correlation matrix is set to the identity. In the Expected practice scenario, 100 $\rho$ parameter variations are individually generated for each decay starting with a unique seed, while for the Proposed practice, a single final set of 100 variations is generated to be shared amongst the three analyses. 
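The distinction between the two correlated scenarios is purely one of bookkeeping, as the sketch below illustrates: pole parameters are drawn from the multivariate normal distribution defined by table~\ref{res:rho}, either independently per analysis (Expected practice) or as a single shared set (Proposed practice). The seeds themselves are arbitrary.
\begin{verbatim}
import numpy as np

# rho pole parameters: m0(rho+), Gamma0(rho+), m0(rho0), Gamma0(rho0) in GeV/c^2.
mean = np.array([0.7665, 0.1502, 0.7690, 0.1509])
sigma = np.array([0.0011, 0.0024, 0.0010, 0.0017])
corr = np.array([[1.0, 0.0, 1.0, 0.0],
                 [0.0, 1.0, 0.0, 1.0],
                 [1.0, 0.0, 1.0, 0.0],
                 [0.0, 1.0, 0.0, 1.0]])     # charged and neutral fully correlated
cov = np.outer(sigma, sigma) * corr
n_var, analyses = 100, ["rho+rho0", "rho+rho-", "rho0rho0"]

def draw(seed):
    return np.random.default_rng(seed).multivariate_normal(
        mean, cov, size=n_var, check_valid="ignore")

expected = {a: draw(seed) for seed, a in enumerate(analyses)}   # one seed per analysis
shared = draw(2024)
proposed = {a: shared for a in analyses}                        # one set for everybody

# The rho0 mass variations applied to different analyses are uncorrelated in the
# first scheme and identical (fully correlated) in the second.
m0_col = 2
print(np.corrcoef(expected["rho+rho0"][:, m0_col], expected["rho0rho0"][:, m0_col])[0, 1])
print(np.corrcoef(proposed["rho+rho0"][:, m0_col], proposed["rho0rho0"][:, m0_col])[0, 1])
\end{verbatim}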
The physics observables included are the branching fractions and, where appropriate, the magnitudes of the {\ensuremath{C\!P}}\xspace-violating parameters and their effective weak phases for each polarisation, resulting in a $21 \times 21$ covariance matrix. For completeness, the model uncertainties and their corresponding correlation matrix are given in appendix~\ref{sec:app}. The $\phi_2$ constraint summing over all polarisations is then conducted in each scenario. Considering the large yields involved, the model uncertainty is not taken to scale with statistics, because the variations of the $\rho$ parameters change the underlying interference pattern in the phase space in a predetermined albeit unknown way; this is why the systematic is irreducible without appreciable improvements to the pole properties themselves. As such, I repeat the $\phi_2$ constraint in the context of a changing statistical uncertainty, which is achieved by scaling the statistical covariance matrix obtained from the nominal fits, with the results shown in figure~\ref{fig:rhocorr1}.
\begin{figure}[!htb] \centering \includegraphics[height=139pt,width=!]{figs/phi2.eps} \includegraphics[height=139pt,width=!]{figs/phi2_err_model.eps} \put(-385,115){(a)} \put(-175,115){(b)} \caption{Performance of the $\phi_2$ constraint under various conditions. (a) shows the $\phi_2$ motion as a function of its statistical uncertainty, while (b) shows the fraction of the amplitude model uncertainty.} \label{fig:rhocorr1} \end{figure}
The raw drift of $\phi_2$ can be seen in figure~\ref{fig:rhocorr1}a. For reference, a statistical-only constraint ignoring the amplitude model covariance matrix is performed for which $\phi_2$ is perfectly flat as a function of its own overall statistical uncertainty as expected. For these values of statistical uncertainty, the trends of each analysis scenario are then determined. When accounting for the model uncertainty, but not any corresponding correlations, the constraint has no problem at the statistical precision of $1^\circ$, but rapidly deteriorates to become the worst performer once the model uncertainty begins to dominate. Surprisingly, the Expected practice scenario in which each analysis considers its own model correlations already has a visible bias at $1^\circ$ uncertainty, trending in the same direction as Current practice but without any intuitively discernible motion. Finally, the Proposed practice curve tightly follows the statistical-only curve, indicating that its covariance matrix by and large captures the correct structure of the model uncertainty. Perhaps the slight departure from the flat curve at low values indicates the point at which sensitivity to non-linear correlations begins to play a role. Figure~\ref{fig:rhocorr1}b indicates the size of the model uncertainty as a percentage of the total uncertainty in $\phi_2$, which is obtained by quadratically subtracting the statistical uncertainty at that point, thereby assuming that all other systematics behave like the statistical error. Current practice performs well when the statistical uncertainty dominates, but becomes overly large at the other end of the spectrum. Conversely, the model uncertainty fraction for Expected practice is noticeably larger even where the statistical uncertainty is supposed to be dominant, while its performance even improves towards that of the Proposed practice, though with substantial bias as already seen.
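For reference, the decomposition used in figure~\ref{fig:rhocorr1}b amounts to the simple quadrature subtraction sketched below, under the stated assumption that all remaining systematics scale like the statistical uncertainty; the numbers are placeholders.
\begin{verbatim}
import numpy as np

def model_fraction(total, stat):
    """Model uncertainty as a fraction of the total, by quadrature subtraction."""
    model = np.sqrt(np.maximum(total ** 2 - stat ** 2, 0.0))
    return model / total

# Placeholder scan: a fixed 0.2 degree model uncertainty against a shrinking
# statistical uncertainty, mimicking the horizontal axis of the figure.
stat = np.linspace(1.0, 0.1, 10)
total = np.sqrt(stat ** 2 + 0.2 ** 2)
print(np.round(model_fraction(total, stat), 2))
\end{verbatim}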
The plots of figure~\ref{fig:rhocorr2} are variations of those already shown, though now as a function of the model uncertainty fraction in the Current practice scenario and relative to Proposed practice, which allows the difference in performance to be understood at the \mbox{Belle~II}\xspace projection of $0.7^\circ$ total uncertainty with $50 \ensuremath{\aunit{ab}}\xspace^{-1}$~\cite{Kou:2018nap}. This is achieved by tuning the statistical covariance matrix scale factor, the result of which is shown in figure~\ref{fig:rhorho}. Back in figure~\ref{fig:rhocorr2}a, the bias in $\phi_2$ with the full \mbox{Belle~II}\xspace dataset would be only $+0.03^\circ$ with Current practice, but $-0.1^\circ$ with Expected practice for the studied model. Figure~\ref{fig:rhocorr2}b indicates how much worse the model uncertainty scales as a function of its fraction, showing that at \mbox{Belle~II}\xspace uncertainties, Current practice is worse-off than Proposed practice only by a factor of 1.2, while Expected practice has a model uncertainty 3 times worse.
\begin{figure}[!htb] \centering \includegraphics[height=139pt,width=!]{figs/Dphi2.eps} \includegraphics[height=139pt,width=!]{figs/phi2_err_scale.eps} \put(-245,30){(a)} \put(-30,35){(b)} \caption{Performance of the $\phi_2$ constraint under various conditions. Under Current practice handling of the model uncertainty fraction, the bias in $\phi_2$ relative to Proposed practice is shown in (a), while the degradation of the model uncertainty size is shown in (b). The vertical dotted line corresponds to the \mbox{Belle~II}\xspace\ projection point for their full data sample.} \label{fig:rhocorr2} \end{figure}
\begin{figure} \centering \includegraphics[height=150pt,width=!]{figs/rhorho.eps} \caption{$p$-value scan of $\phi_2$ where the horizontal dashed line shows the $1\sigma$ bound. The blue, red and magenta curves show the Current, Expected and Proposed practice scenarios, respectively, at the point where the statistical covariance matrix is aligned to give the total uncertainty expected by the end of \mbox{Belle~II}\xspace.} \label{fig:rhorho} \end{figure}
For the amplitude models studied, it would appear that Proposed practice is objectively superior. Current practice may initially perform well, but is expected to become problematic, while Expected practice just looks dangerous from the outset, which is perhaps an unexpected outcome. A natural question arising at this point is whether this whole issue can be sidestepped by releasing the $\rho$ pole properties in the fit. However, the meaning of the $\phi_2$ constraint would be unclear with multiple versions of these parameters present. In principle, it would be possible to release the $\rho$ pole parameters in a simultaneous fit to the three $B \ensuremath{\rightarrow}\xspace \rho \rho$ modes, trading relative analysis simplicity for easier systematics handling, though understandably this alternative is not very practical in terms of coordination and the short-term contractual nature of the field. In any case, there is more to each amplitude model than just the $B \ensuremath{\rightarrow}\xspace \rho\rho$ contributions as mentioned at the beginning of this section, so the full correlated model uncertainty accounting for other common lineshapes and additional contributions should also be studied. Minimal cooperation and overlap between analyses through the sharing of systematic variations and their signed fit residuals would seem to be the most sensible strategy moving forwards.
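As a final sketch of the bookkeeping advocated above, suppose two analyses each report the signed fit residuals of their observables for a common set of variations; a combiner can then construct the full cross-analysis systematic covariance without any exchange of data. The residuals below are entirely hypothetical and serve only to show the mechanics.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)
n_var = 100

# Hypothetical signed residuals (observable shift per shared variation); in reality
# each analysis obtains these by refitting its sample for every common rho-parameter
# set and subtracting the nominal result.
common = rng.normal(size=n_var)              # motion driven by the shared variation
res_analysis_1 = np.column_stack([0.020 * common,
                                  0.010 * common + 0.005 * rng.normal(size=n_var)])
res_analysis_2 = np.column_stack([-0.015 * common,
                                  0.030 * rng.normal(size=n_var)])

# Covariance over variations, with the nominal fit playing the role of the mean.
residuals = np.hstack([res_analysis_1, res_analysis_2])
cov_syst = residuals.T @ residuals / n_var
d = np.sqrt(np.diag(cov_syst))
print(np.round(cov_syst / np.outer(d, d), 2))   # cross-analysis correlation pattern
\end{verbatim}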
{ "attr-fineweb-edu": 1.595703, "attr-cc_en_topic": 12, "domain": "arxiv" }
BkiUdmE5qWTD6lY8X7va
\section{Introduction} \numberwithin{equation}{section} We consider the problem of existence of nontrivial weak solutions to the following doubly critical problem on $\ensuremath{\mathbb{R}}^n$ involving the Fractional Laplacian: \begin{equation}\label{Main problem} \left\{\begin{array}{lll} ({-}{ \Delta})^{\frac{\alpha}{2}}u- \gamma \frac{u}{|x|^{\alpha}}&= |u|^{2_{\alpha}^*-2} u + {\frac{|u|^{2_{\alpha}^*(s)-2}u}{|x|^s}} & \text{in } {\ensuremath{\mathbb{R}}^n}\\ \hfill u&>0 & \text{in } \ensuremath{\mathbb{R}}^n, \end{array}\right. \end{equation} where $0\leq s<\alpha<2$, $n>\alpha$, $2_{\alpha}^*:=\frac{2 n}{n-{\alpha}},$ ${2_{\alpha}^*(s)}:=\frac{2(n-s)}{n-{\alpha}},$ $\gamma \in \mathbb{R}$. The fractional Laplacian $({-}{ \Delta})^{\frac{\alpha}{2}}$ is defined on the Schwartz class (space of rapidly decaying $C^\infty$ functions in $\ensuremath{\mathbb{R}}^n$) through the Fourier transform, $$ (-\Delta)^{\frac{\alpha}{2}}u= \mathcal{F}^{-1}(|\xi|^{\alpha}(\mathcal{F}u)) \quad \forall\xi\in\ensuremath{\mathbb{R}}^n, $$ where $ \mathcal{F}u$ denotes the Fourier transform of $u$, $\mathcal{F}u(\xi)=\int_{\ensuremath{\mathbb{R}}^n} e^{-2\pi i x.\xi} u(x) dx$. See \cite{Hitchhikers guide} and references therein for the basics on the fractional Laplacian. Problems involving two non-linearities have been studied in the case of local operators such as the Laplacian $-\Delta$, the $p$-Laplacian $-\Delta_p$ and the Biharmonic operator $\Delta^2$ (See \cite{Bhakta}, \cite{Filippucci-Pucci-Robert}, \cite{Kang-Li} and \cite{Xuan-Wang}). Problem (\ref{Main problem}) above is the non-local counterpart of the one studied by Filippucci-Pucci-Robert in \cite{Filippucci-Pucci-Robert}, who treated the case of the $p$-Laplacian in an equation involving both the Sobolev and the Hardy-Sobolev critical exponents. Questions of existence and non-existence of solutions for fractional elliptic equations with singular potentials were recently studied by several authors. All studies focus, however, on problems with only one critical exponent --mostly the non-linearity $u^{2_{\alpha}^*-1} $-- and to a lesser extent the critical Hardy-Sobolev singular term ${\frac{u^{2_{\alpha}^*(s)-1}}{|x|^s}}$ (see \cite{Cotsiolis-Tavoularis}, \cite{Fall-Minlend-Thiam}, \cite{Yang} and the references therein). These cases were also studied on smooth bounded domains (see for example \cite{B-C-D-S 2}, \cite{B-C-D-S 1}, \cite{Barrios-Medina-Peral}, \cite{Fall}, \cite{Servadei} and the references therein). In general, the case of two critical exponents involve more subtleties and difficulties, even for local differential operators. The variational approach that we adopt here, relies on the following fractional Hardy-Sobolev type inequality: \begin{equation} C(\int_{\mathbb{R}^n} \frac{|u|^{2_{\alpha}^*(s)}}{|x|^{s}}dx)^\frac{2}{{2_{\alpha}^*(s)}} \leq \int_{\ensuremath{\mathbb{R}}^n} |({-}{ \Delta})^{\frac{\alpha}{4}}u|^2 dx - \gamma \int_{\mathbb{R}^n} \frac{|u|^{2}}{|x|^{\alpha}}dx \quad \hbox{for all $u \in H^{\frac{\alpha}{2}}(\ensuremath{\mathbb{R}}^n)$}, \end{equation} where $\gamma < \gamma_H:=2^\alpha \frac{\Gamma^2(\frac{n+\alpha}{4})}{\Gamma^2(\frac{n-\alpha}{4})}$ is the best fractional Hardy constant on $\ensuremath{\mathbb{R}}^n$. 
The fractional space $H^{\frac{\alpha}{2}}(\ensuremath{\mathbb{R}}^n)$ is defined as the completion of $C_0^{\infty}(\ensuremath{\mathbb{R}}^n)$ under the norm $$\|u\|_{H^{\frac{\alpha}{2}}(\mathbb{R}^n)}^2= \int_{\mathbb{R}^n}|2\pi \xi |^{\alpha} |\mathcal{F}u(\xi)|^2 d\xi =\int_{\mathbb{R}^n} |(-\Delta)^{\frac{\alpha}{4}}u|^2 dx.$$ The best constant in the above fractional Hardy-Sobolev inequality is defined as: \begin{equation} \label{Problem: the best fractional Hardy-Sobolev type constant } \mu_{\gamma,s}(\ensuremath{\mathbb{R}}^n):= \inf\limits_{u \in H^{\frac{\alpha}{2}} (\ensuremath{\mathbb{R}}^n)\setminus \{0\}} \frac{ \int_{\ensuremath{\mathbb{R}}^n} |({-}{ \Delta})^{\frac{\alpha}{4}}u|^2 dx - \gamma \int_{\ensuremath{\mathbb{R}}^n} \frac{|u|^2}{|x|^{\alpha}}dx }{(\int_{\ensuremath{\mathbb{R}}^n} \frac{|u|^{2_{\alpha}^*(s)}}{|x|^{s}}dx)^\frac{2}{2_{\alpha}^*(s)}}. \end{equation} One step towards addressing Problem (\ref{Main problem}) consists of proving the existence of extremals for $\mu_{\gamma,s}(\ensuremath{\mathbb{R}}^n),$ when $s \in [0,\alpha)$ and $ \gamma\in (-\infty, \gamma_H).$ Note that the Euler-Lagrange equation corresponding to the minimization problem for $\mu_{\gamma,s}(\ensuremath{\mathbb{R}}^n)$ is --up to a constant factor-- the following: \begin{equation}\label{one} \left\{\begin{array}{rl} ({-}{ \Delta})^{\frac{\alpha}{2}}u- \gamma \frac{u}{|x|^{\alpha}}= {\frac{u^{2_{\alpha}^*(s)-1}}{|x|^s}} & \text{in } {\ensuremath{\mathbb{R}}^n}\\ u>0 \,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\,\, & \text{in } \mathbb{R}^n. \end{array}\right. \end{equation} When $\alpha=2$, i.e., in the case of the standard Laplacian, the above minimization problem (\ref{Problem: the best fractional Hardy-Sobolev type constant }) has been extensively studied. See for example \cite{Catrina-Wang}, \cite{Chern-Lin}, \cite{Filippucci-Pucci-Robert}, \cite{Ghoussoub-Moradifam}, \cite{Ghoussoub-Robert 2014} and \cite{Ghoussoub-Yuan}. The non-local case has also been the subject of several studies, but in the absence of the Hardy term, i.e., when $\gamma=0$. In \cite{Fall-Minlend-Thiam}, Fall, Minlend and Thiam proved the existence of extremals for $\mu_{0,s}(\ensuremath{\mathbb{R}}^n)$ in the case $\alpha=1.$ Recently, J. Yang in \cite{Yang} proved that there exists a positive, radially symmetric and non-increasing extremal for $\mu_{0,s}(\ensuremath{\mathbb{R}}^n)$ when $\alpha \in (0,2).$ Asymptotic properties of the positive solutions were given by Y. Lei \cite{Lei}, Lu and Zhu \cite{Lu-Zhu}, and Yang and Yu \cite{Yang-Yu}. In section 3, we consider the remaining cases in the problem of deciding whether the best constant in the fractional Hardy-Sobolev inequality $\mu_{\gamma,s}(\ensuremath{\mathbb{R}}^n)$ is attained. We use Ekeland's variational principle to show the following. \begin{theorem} \label{Theorem the best fractional H-S constan} Suppose $0<\alpha<2$, $ 0 \le s < \alpha<n$, and $\gamma < \gamma_H := 2^\alpha \frac{\Gamma^2(\frac{n+\alpha}{4})}{\Gamma^2(\frac{n-\alpha}{4})}$. \begin{enumerate} \item If either $ \{ s > 0 \} \text{ or } \{ s=0 \text{ and } \gamma \ge 0 \}$, then $\mu_{\gamma,s}(\ensuremath{\mathbb{R}}^n)$ is attained. 
\item If $s=0$ and $\gamma < 0$, then there are no extremals for $\mu_{\gamma,s}(\ensuremath{\mathbb{R}}^n).$ \item If either $\{0 < \gamma < \gamma_H \} \text{ or } \{ 0<s<\alpha \text{ and } \gamma=0 \},$ then any non-negative minimizer for $\mu_{\gamma,s}(\ensuremath{\mathbb{R}}^n)$ is positive, radially symmetric, radially decreasing, and approaches zero as ${|x| \to \infty}.$ \end{enumerate} \end{theorem} In section 4, we consider problem (\ref{Main problem}) and use the mountain pass lemma to establish the following result. \begin{theorem}\label{Theorem Main result} Let $0<\alpha<2,$ $ 0 < s < \alpha<n$ and $ 0\le \gamma < \gamma_H.$ Then, there exists a nontrivial weak solution of (\ref{Main problem}). \end{theorem} Recall that $u \in H^{\frac{\alpha}{2}}(\ensuremath{\mathbb{R}}^n)$ is a weak solution of (\ref{Main problem}) if, for all $\varphi \in H^{\frac{\alpha}{2}}(\ensuremath{\mathbb{R}}^n),$ $$\int_{\ensuremath{\mathbb{R}}^n}({-}{ \Delta})^{\frac{\alpha}{4}}u ({-}{ \Delta})^{\frac{\alpha}{4}} \varphi dx = \int_{\ensuremath{\mathbb{R}}^n} \gamma \frac{u}{|x|^{\alpha}} \varphi dx + \int_{\ensuremath{\mathbb{R}}^n}|u|^{2_{\alpha}^*-2} u \varphi dx + \int_{\ensuremath{\mathbb{R}}^n} {\frac{|u|^{2_{\alpha}^*(s)-2}}{|x|^s}} u \varphi dx. $$ The standard strategy to construct weak solutions of (\ref{Main problem}) is to find critical points of the corresponding functional on $H^{\frac{\alpha}{2}}(\ensuremath{\mathbb{R}}^n)$. However, (\ref{Main problem}) is invariant under the following conformal one-parameter transformation group, \begin{equation} \label{the conformal invariance property of Main problem} \hbox{$ T_r: H^{\frac{\alpha}{2}}(\ensuremath{\mathbb{R}}^n) \rightarrow H^{\frac{\alpha}{2}}(\ensuremath{\mathbb{R}}^n); \qquad u(x) \rightarrow T_r[u](x)= r^{\frac{n-\alpha}{2}} u(rx), \quad r>0,$} \end{equation} which means that the convergence of Palais-Smale sequences cannot be taken for granted. As was argued in \cite{Filippucci-Pucci-Robert}, there is an asymptotic competition between the energy carried by the two critical nonlinearities. Hence, the crucial step here is to balance the competition so as to avoid the domination of one term over the other. Otherwise, the weaker of the two vanishes in the limit, leading to a solution of the same equation with only one critical nonlinearity. In order to deal with this issue, we choose a suitable minimax energy level, in such a way that, after a careful analysis of the concentration phenomena, we can eliminate the possibility of a vanishing weak limit for such well-chosen Palais-Smale sequences, while ensuring that neither of the two nonlinearities dominates the other. \section{Preliminaries and a description of the functional setting} We start by recalling and introducing suitable function spaces for the variational principles that will be needed in the sequel. We first recall the following useful representation given in \cite{B-C-D-S 2} and \cite{B-C-D-S 1} for the fractional Laplacian $(-\Delta)^{\frac{\alpha}{2}}$ as a trace class operator, as well as for the space $H^{\frac{\alpha}{2}}(\ensuremath{\mathbb{R}}^n)$. For a function $u \in H^{\frac{\alpha}{2}}(\ensuremath{\mathbb{R}}^n)$, let $w=E_{\alpha}(u)$ be its $\alpha$-harmonic extension to the upper half-space $\ensuremath{\mathbb{R}}_+^{n+1}$, that is, the solution to the following problem: \begin{equation*} \left\{\begin{array}{rl} {\rm div}\,(y^{1-{\alpha}} \nabla w)=0 & \text{in } \mathbb{R}_+^{n+1} \\ w= u & \text{on } \mathbb{R}^n \times \{y=0\}. \end{array}\right .
\end{equation*} Define the space $X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})$ as the closure of $ C_0^{\infty}(\overline{\mathbb{R}_+^{n+1}})$ for the norm $$\| w\|_{X^{\alpha}({\mathbb{R}_+^{n+1}})}:=\left( k_{\alpha} \int_{\mathbb{R}_+^{n+1}} y^{1-\alpha} | \nabla w |^2 dxdy \right)^\frac{1}{2},$$ where $k_\alpha=\frac{\Gamma(\frac{\alpha}{2})}{2^{1-\alpha}\Gamma(1-{\frac{\alpha}{2}})}$ is a normalization constant chosen in such a way that the extension operator $E_\alpha: {H^{\frac{\alpha}{2}}(\ensuremath{\mathbb{R}}^n) \to X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})}$ is an isometry, that is, for any $ u \in H^{\frac{\alpha}{2}}(\ensuremath{\mathbb{R}}^n),$ we have \begin{equation} \label{extension norm} \|E_{\alpha}(u)\|_{X^{\alpha} (\ensuremath{\mathbb{R}}^{n+1}_+)} = \|u \|_{H^{\frac{\alpha}{2}}(\ensuremath{\mathbb{R}}^n)}=\| (-\Delta)^{\frac{\alpha}{4}} u \|_{L^2(\ensuremath{\mathbb{R}}^n)}. \end{equation} Conversely, for a function $w \in X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1}),$ we denote its trace on $\ensuremath{\mathbb{R}}^n \times \{y = 0\}$ by $\text{Tr}(w):=w(.,0)$. This trace operator is also well defined and satisfies \begin{equation}\label{trace inequality between extension norm and fractional sobolev} \|w(.,0) \|_{H^{\frac{\alpha}{2}}(\ensuremath{\mathbb{R}}^n)} \le \|w\|_{X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})}. \end{equation} We shall frequently use the following useful fact: since $\alpha \in (0, 2)$, the weight $y^{1-\alpha}$ (extended as $|y|^{1-\alpha}$ to all of $\ensuremath{\mathbb{R}}^{n+1}$) belongs to the Muckenhoupt class $A_2$ \cite{Muckenhoupt}, which consists of all non-negative functions $w$ on $\ensuremath{\mathbb{R}}^{n+1}$ satisfying, for some constant $C$, the estimate \begin{equation} \sup\limits_B (\frac{1}{|B|}\int_B w dx)(\frac{1}{|B|}\int_B w^{-1} dx) \le C, \end{equation} where the supremum is taken over all balls $B$ in $\ensuremath{\mathbb{R}}^{n+1}.$ If $\Omega \subset \ensuremath{\mathbb{R}}^{n+1}$ is an open domain, we denote by $L^2(\Omega, |y|^{1-\alpha})$ the space of all measurable functions on $\Omega$ such that $\|w\|^2_{L^2(\Omega , |y|^{1-\alpha})} = \int_{\Omega} |y|^{1-\alpha} |w|^2 dxdy < \infty$, and by $H^1(\Omega, |y|^{1-\alpha})$ the weighted Sobolev space $$H^1(\Omega, |y|^{1-\alpha}) = \left\{ w \in L^2(\Omega, |y|^{1-\alpha}): \, \nabla w \in L^2(\Omega, |y|^{1-\alpha}) \right\}.$$ It is remarkable that most of the properties of classical Sobolev spaces, including the embedding theorems, have a weighted counterpart as long as the weight is in the Muckenhoupt class $A_2$; see \cite{Fabes-Kenig-Serapioni} and \cite{Gol'dshtein-Ukhlov}. Note that $H^1(\ensuremath{\mathbb{R}}^{n+1}_+ , y^{1-\alpha})$ --up to a normalization factor-- is also isometric to $X^{\alpha}(\ensuremath{\mathbb{R}}^{n+1}_+).$ In \cite{Caffarelli-Silvestre}, Caffarelli and Silvestre showed that the extension function $E_{\alpha}(u)$ is related to the fractional Laplacian of the original function $u$ in the following way: \begin{equation*} (-\Delta)^{\frac{\alpha}{2}}u(x)=\frac{\partial w}{\partial \nu^{\alpha}}:= - k_{\alpha} \lim\limits_{y \to 0^+} y^{1-\alpha} \frac{\partial w}{\partial y}(x,y).
\end{equation*} With this representation, the non-local problem (\ref{Main problem}) can then be written as the following local problem: \begin{equation} \label{Main problem.prime} \left\{\begin{array}{rll} - {\rm div}\,(y^{1-\alpha}\nabla w)=0 \hfill & \text{in} \ \mathbb{R}^{n+1}_+ \\ \frac{\partial w}{\partial \nu^{\alpha}}= \gamma \frac{w(.,0)}{|x|^{\alpha}} +w(.,0)^{2^*_\alpha-1}+ \frac{w(.,0)^{{2_{\alpha}^*(s)}-1}}{|x|^s}&\text{on } \ \mathbb{R}^n. \end{array}\right. \end{equation} A function $w \in X^{\alpha}(\ensuremath{\mathbb{R}}_+^{n+1}) $ is said to be a weak solution to (\ref{Main problem.prime}) if, for all $\varphi \in X^{\alpha}(\ensuremath{\mathbb{R}}_+^{n+1}),$ \begin{eqnarray*} k_{\alpha} \int_{\mathbb{R}_+^{n+1}} y^{1-\alpha} \langle \nabla w, \nabla \varphi \rangle dxdy& =& \int_{\ensuremath{\mathbb{R}}^n} \gamma \frac{w(x,0)}{|x|^{\alpha}} \varphi dx + \int_{\ensuremath{\mathbb{R}}^n} |w(x,0)|^{2^*_\alpha-2} w(x,0)\varphi dx\\ &&+ \int_{\ensuremath{\mathbb{R}}^n} \frac{|w(x,0)|^{{2_{\alpha}^*(s)}-2}w(x,0)}{|x|^s} \varphi dx. \end{eqnarray*} Note that for any weak solution $w$ in $X^{\alpha}(\ensuremath{\mathbb{R}}_+^{n+1})$ to (\ref{Main problem.prime}), the function $u=w(.,0)$, defined in the sense of traces, is in $H^{\frac{\alpha}{2}}(\ensuremath{\mathbb{R}}^n)$ and is a weak solution to problem (\ref{Main problem}). The energy functional corresponding to (\ref{Main problem.prime}) is \begin{equation*} \Phi(w)= \frac{1}{2} \| w\|^2_{X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})} - \frac{\gamma}{2}\int_{\mathbb{R}^n}\frac{|w(x, 0)|^{2}}{|x|^{\alpha}} dx -\frac{1}{2_{\alpha}^*} \int_{\ensuremath{\mathbb{R}}^n} |w(x, 0)|^{2_{\alpha}^*}\, dx -\frac{1}{2_{\alpha}^*(s)}\int_{\ensuremath{\mathbb{R}}^n} \frac{|w(x, 0)|^{2_{\alpha}^*(s)}}{|x|^{s}} dx. \end{equation*} Hence the associated trace of any critical point $w$ of $\Phi$ in $X^{\alpha}(\ensuremath{\mathbb{R}}_+^{n+1})$ is a weak solution for (\ref{Main problem}). The starting point of the study of existence of weak solutions to the above problems is therefore the following set of fractional trace inequalities, which will guarantee that the above functionals are well defined and bounded below on the right function spaces.
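Before stating these inequalities, let us make the link between critical points of $\Phi$ and weak solutions explicit. A direct computation of the G\^ateaux derivative of $\Phi$ shows that, for all $w, \varphi \in X^{\alpha}(\ensuremath{\mathbb{R}}_+^{n+1})$,
\begin{eqnarray*}
\langle \Phi'(w), \varphi \rangle &=& k_{\alpha} \int_{\mathbb{R}_+^{n+1}} y^{1-\alpha} \langle \nabla w, \nabla \varphi \rangle dxdy - \gamma \int_{\ensuremath{\mathbb{R}}^n} \frac{w(x,0)\varphi(x,0)}{|x|^{\alpha}} dx \\
&& - \int_{\ensuremath{\mathbb{R}}^n} |w(x,0)|^{2^*_\alpha-2} w(x,0)\varphi(x,0) dx - \int_{\ensuremath{\mathbb{R}}^n} \frac{|w(x,0)|^{{2_{\alpha}^*(s)}-2}w(x,0)\varphi(x,0)}{|x|^s} dx,
\end{eqnarray*}
so that the equation $\Phi'(w)=0$ is exactly the weak formulation of (\ref{Main problem.prime}) given above.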
We start with the fractional Sobolev inequality \cite{Cotsiolis-Tavoularis}, which asserts that for $n > \alpha$ and $ 0<\alpha<2$, there exists a constant $C(n,\alpha) >0 $ such that \begin{equation}\label{fractional sobolev inequality} \hbox{$ ( \int_{\ensuremath{\mathbb{R}}^n} |u|^{2_\alpha^*} dx )^{\frac{2}{2_\alpha^*}} \leq C(n,\alpha) \int_{\ensuremath{\mathbb{R}}^n} |({-}{ \Delta})^{\frac{\alpha}{4}}u|^2 dx$ \quad for all $u \in H^{\frac{\alpha}{2}} (\ensuremath{\mathbb{R}}^n),$ } \end{equation} where $2_{\alpha}^* = \frac{2n}{n-\alpha}.$ Another important inequality is the fractional Hardy inequality (see \cite{Frank-Lieb-Seiringer} and \cite{Herbst}), which states that under the same conditions on $n$ and $\alpha$, we have \begin{equation}\label{fractional Hardy inequality} \hbox{$ \gamma_H \int_{\ensuremath{\mathbb{R}}^n}\frac{|u|^2}{|x|^{\alpha}} dx \leq \int_{\ensuremath{\mathbb{R}}^n} |({-}{ \Delta})^{\frac{\alpha}{4}}u|^2 dx$ \quad for all $u \in H^{\frac{\alpha}{2}} (\mathbb{R}^n)$,} \end{equation} where $\gamma_H$ is the best constant in the above inequality on $\ensuremath{\mathbb{R}}^n,$ that is \begin{equation} \gamma_H=\gamma_H(\alpha):=\inf\left\{\frac{ \int_{\ensuremath{\mathbb{R}}^n} |({-}{ \Delta})^{\frac{\alpha}{4}}u|^2 dx}{ \int_{\ensuremath{\mathbb{R}}^n}\frac{|u|^2}{|x|^{\alpha}} dx}; \,\, u \in H^{\frac{\alpha}{2}} (\ensuremath{\mathbb{R}}^n) \setminus \{0\}\right\}. \end{equation} It has also been shown there that $\gamma_H (\alpha)= 2^\alpha \frac{\Gamma^2(\frac{n+\alpha}{4})}{\Gamma^2(\frac{n-\alpha}{4})}$. Note that $\gamma_H(\alpha)$ converges to the best classical Hardy constant $\gamma_H(2)=\frac{(n-2)^2}{4}$ as ${\alpha \to 2}$. By interpolating these inequalities via H\"older's inequality, one gets the following fractional Hardy-Sobolev inequalities. \begin{lemma}[Fractional Hardy-Sobolev Inequalities] Assume that $0<\alpha<2$, and $ 0 \le s \le \alpha <n$. Then, there exist positive constants $c$ and $C$ such that \begin{equation} \label{Fractional H-S inequality} (\int_{\mathbb{R}^n} \frac{|u|^{2_{\alpha}^*(s)}}{|x|^{s}}dx)^\frac{2}{{2_{\alpha}^*(s)}} \leq c \int_{\ensuremath{\mathbb{R}}^n} |({-}{ \Delta})^{\frac{\alpha}{4}}u|^2 dx \quad \hbox{for all $u \in H^{\frac{\alpha}{2}} (\mathbb{R}^n).$} \end{equation} Moreover, if $\gamma < \gamma_H:=2^\alpha \frac{\Gamma^2(\frac{n+\alpha}{4})}{\Gamma^2(\frac{n-\alpha}{4})}$, then \begin{equation} \label{fractional H-S-M inequality} C(\int_{\mathbb{R}^n} \frac{|u|^{2_{\alpha}^*(s)}}{|x|^{s}}dx)^\frac{2}{{2_{\alpha}^*(s)}} \leq \int_{\ensuremath{\mathbb{R}}^n} |({-}{ \Delta})^{\frac{\alpha}{4}}u|^2 dx - \gamma \int_{\mathbb{R}^n} \frac{|u|^{2}}{|x|^{\alpha}}dx \quad \hbox{for all $u \in H^{\frac{\alpha}{2}} (\mathbb{R}^n).$} \end{equation} \end{lemma} \begin{proof} Note that for $s = 0$ (resp., $s = \alpha$) the first inequality is just the fractional Sobolev (resp., the fractional Hardy) inequality. We therefore only have to consider the case $0 < s< \alpha$, in which case $2_{\alpha}^*(s)>2$.
By applying H\"older's inequality, then the fractional Hardy and the fractional Sobolev inequalities, we have \begin{align*} \int_{\mathbb{R}^n} \frac{|u|^{2_{\alpha}^*(s)}}{|x|^{s}}dx &= \int_{\mathbb{R}^n} \frac{|u|^\frac{2s}{\alpha}}{|x|^{s}} |u|^{2_{\alpha}^*(s)-\frac{2s}{\alpha}}dx \\ & \le (\int_{\mathbb{R}^n} \frac{ |u|^2}{|x|^{\alpha}}dx)^\frac{s}{\alpha} (\int_{\mathbb{R}^n} |u|^{(2_{\alpha}^*(s) - \frac{2s}{\alpha}) \frac{\alpha}{\alpha-s} }dx)^\frac{\alpha-s}{\alpha} \\ &= (\int_{\mathbb{R}^n} \frac{ |u|^2}{|x|^{\alpha}}dx)^\frac{s}{\alpha} (\int_{\mathbb{R}^n} |u|^{2_{\alpha}^*}dx)^\frac{\alpha-s}{\alpha} \\ &\le C_1 (\int_{\ensuremath{\mathbb{R}}^n} |({-}{ \Delta})^{\frac{\alpha}{4}}u|^2 dx)^\frac{s}{\alpha} C_2 (\int_{\ensuremath{\mathbb{R}}^n} |({-}{ \Delta})^{\frac{\alpha}{4}}u|^2 dx)^{\frac{2_{\alpha}^*}{2}.\frac{\alpha-s}{\alpha}}\\ &\le c (\int_{\ensuremath{\mathbb{R}}^n} |({-}{ \Delta})^{\frac{\alpha}{4}}u|^2 dx)^\frac{n-s}{n-\alpha} = c (\int_{\ensuremath{\mathbb{R}}^n} |({-}{ \Delta})^{\frac{\alpha}{4}}u|^2 dx)^ \frac{2_{\alpha}^*(s)}{2}. \end{align*} From the definition of ${\gamma_H}$, it follows that for all $u \in H^{\frac{\alpha}{2}}(\ensuremath{\mathbb{R}}^n),$ $$\frac{\int_{\ensuremath{\mathbb{R}}^n} |({-}{ \Delta})^{\frac{\alpha}{4}}u|^2 dx-\gamma \int_{\mathbb{R}^n} \frac{|u|^{2}}{|x|^{\alpha}}dx}{ (\int_ {\mathbb{R}^n} \frac{|u|^ {2_{\alpha}^*(s)}}{|x|^s}dx)^\frac{2}{2_{\alpha}^*(s)} } \geq (1- \frac{\gamma}{\gamma_H}) \frac{ \int_{\ensuremath{\mathbb{R}}^n} |({-}{ \Delta})^{\frac{\alpha}{4}}u|^2 dx}{ (\int_ {\mathbb{R}^n} \frac{|u|^ {2_{\alpha}^*(s)}}{|x|^s}dx)^\frac{2}{2_{\alpha}^*(s)} }.$$ Hence (\ref{Fractional H-S inequality}) implies (\ref{fractional H-S-M inequality}) whenever $\gamma < \gamma_H.$ \end{proof} \begin{remark} One can use (\ref{extension norm}) to rewrite inequalities (\ref{fractional Hardy inequality}), (\ref{Fractional H-S inequality}) and (\ref{fractional H-S-M inequality}) as the following trace class inequalities: \begin{equation} \label{fractional Trace Hardy inequality} \gamma_H \int_{\mathbb{R}^n} \frac{|w(x,0)|^{2}}{|x|^{\alpha}} \ dx \leq \ \| w\|^2_{X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})}, \end{equation} \begin{equation} \label{fractional Trace H-S inequality} (\int_{\mathbb{R}^n} \frac{|w(x,0)|^{2_{\alpha}^*(s)}}{|x|^{s}} \ dx)^\frac{2}{{2_{\alpha}^*(s)}} \leq c \ \| w\|^2_{X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})}, \end{equation} \begin{equation} \label{fractional Trace H-S-M inequality } C(\int_{\mathbb{R}^n} \frac{|w(x,0)|^{2_{\alpha}^*(s)}}{|x|^{s}} \ dx)^\frac{2}{{2_{\alpha}^*(s)}} \leq \ \| w\|^2_{X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})} - \gamma\int_{\mathbb{R}^n}\frac{|w(x,0)|^{2}}{|x|^{\alpha}}dx. \end{equation} \end{remark} The best constant $\mu_{\gamma,s}(\ensuremath{\mathbb{R}}^n)$ in inequality (\ref{fractional H-S-M inequality}), can also be written as: \begin{equation*} S(n,\alpha,\gamma,s)= \inf\limits_{w \in X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})\setminus \{0\}} \frac{k_{\alpha}\int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} |\nabla w|^2 dxdy - \gamma \int_{\ensuremath{\mathbb{R}}^n} \frac{|w(x,0)|^2}{|x|^{\alpha}} dx}{(\int_{\ensuremath{\mathbb{R}}^n} \frac{|w(x, 0)|^{2_{\alpha}^*(s)}}{|x|^{s}}dx)^\frac{2}{2_{\alpha}^*(s)}}. \end{equation*} We shall therefore investigate whether there exist extremal functions where this best constant is attained. 
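Let us briefly justify the identification of $\mu_{\gamma,s}(\ensuremath{\mathbb{R}}^n)$ with $S(n,\alpha,\gamma,s)$. On one hand, for every $w \in X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})$ whose trace $u:=w(.,0)$ is not identically zero, the trace inequality (\ref{trace inequality between extension norm and fractional sobolev}) gives
$$ k_{\alpha}\int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} |\nabla w|^2 dxdy - \gamma \int_{\ensuremath{\mathbb{R}}^n} \frac{|u|^2}{|x|^{\alpha}} dx \ \ge\ \int_{\ensuremath{\mathbb{R}}^n} |({-}{ \Delta})^{\frac{\alpha}{4}}u|^2 dx - \gamma \int_{\ensuremath{\mathbb{R}}^n} \frac{|u|^2}{|x|^{\alpha}}dx \ \ge\ \mu_{\gamma,s}(\ensuremath{\mathbb{R}}^n) \Big(\int_{\ensuremath{\mathbb{R}}^n} \frac{|u|^{2_{\alpha}^*(s)}}{|x|^{s}}dx\Big)^\frac{2}{2_{\alpha}^*(s)}, $$
so that $S(n,\alpha,\gamma,s) \ge \mu_{\gamma,s}(\ensuremath{\mathbb{R}}^n)$. On the other hand, evaluating the quotient defining $S(n,\alpha,\gamma,s)$ at the extension $w=E_{\alpha}(u)$ of an arbitrary $u \in H^{\frac{\alpha}{2}} (\ensuremath{\mathbb{R}}^n)\setminus \{0\}$ and using the isometry (\ref{extension norm}) yields the reverse inequality, so that $S(n,\alpha,\gamma,s)=\mu_{\gamma,s}(\ensuremath{\mathbb{R}}^n)$.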
Theorems \ref{Theorem the best fractional H-S constan} and \ref{Theorem Main result} can therefore be stated in the following way: \begin{theorem} \label{Theorem Existence for fractional H-S-M, using the best constant} Suppose $0<\alpha<2$, $ 0 \le s < \alpha <n$, and $\gamma < \gamma_H $. We then have the following: \begin{enumerate} \item If $ \{ s > 0 \} \text{ or } \{ s=0 \text{ and } \gamma \ge 0 \}$, then $S(n,\alpha,\gamma,s)$ is attained in $X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})$. \item If $s=0$ and $\gamma < 0$, then there are no extremals for $S(n,\alpha,\gamma,s)$ in $X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})$. \end{enumerate} \end{theorem} \begin{theorem}\label{Theorem Main result in extended form} Let $0<\alpha<2,$ $ 0 < s < \alpha<n$ and $ 0\le \gamma < \gamma_H.$ Then, there exists a non-trivial weak solution to (\ref{Main problem.prime}) in $ X^\alpha (\ensuremath{\mathbb{R}}_+^{n+1}) $. \end{theorem} \section{Proof of Theorem \ref{Theorem the best fractional H-S constan}} \label{Section: the proof of attainability of S(n,alpha,gamma,s)} We shall minimize the functional $$I_{\gamma,s}(w)= \frac{k_{\alpha}\int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} |\nabla w|^2 dxdy - \gamma \int_{\ensuremath{\mathbb{R}}^n} \frac{|w(x,0)|^2}{|x|^{\alpha}}dx }{(\int_{\ensuremath{\mathbb{R}}^n} \frac{|w(x,0)|^{2_{\alpha}^*(s)}}{|x|^{s}}dx)^\frac{2}{2_{\alpha}^*(s)}} $$ on the space $X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})$. If $S(n,\alpha,\gamma,s)$ is attained at some $w\in X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})$, then it is clear that $u = \text{Tr} (w):= w(.,0)$ is a function in $ H^{\frac{\alpha}{2}}(\ensuremath{\mathbb{R}}^n)$ at which $\mu_{\gamma,s}(\ensuremath{\mathbb{R}}^n)$ is attained. Note first that inequality (\ref{fractional Trace Hardy inequality}) asserts that the trace map embeds $X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})$ continuously into the weighted space $L^2(\ensuremath{\mathbb{R}}^n, |x|^{-\alpha})$. If $\gamma < \gamma_H$, it follows from (\ref{fractional Trace Hardy inequality}) that $$ \|w\| := \left( k_{\alpha}\int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} |\nabla w|^2 dxdy - \gamma \int_{\ensuremath{\mathbb{R}}^n} \frac{|w(x,0)|^2}{|x|^{\alpha}} dx \right)^\frac{1}{2} $$ is well-defined on $X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})$. Set $\gamma_+ = \text{max} \{\gamma,0\}$ and $\gamma_- = - \text{min} \{\gamma,0\}$. The following inequalities then hold for any $w \in X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})$, \begin{equation}\label{comparable norms} (1-\frac{\gamma_+}{\gamma_H}) \|w\|^2_{X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})} \le \|w\|^2 \le (1+\frac{\gamma_-}{\gamma_H}) \|w\|^2_{X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})} . \end{equation} Thus, $\| \ . \ \|$ is equivalent to the norm $\| \ . \ \|_{X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})}$. We start by considering the case when $s > 0$.
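Before doing so, we record, for later use, the scaling invariance of the quotient $I_{\gamma,s}$: for $r>0$ and $w \in X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})$, set $w_r(x,y):= r^{\frac{n-\alpha}{2}} w(rx,ry)$. A direct change of variables then gives
\begin{align*}
k_{\alpha}\int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} |\nabla w_r|^2 dxdy &= k_{\alpha}\int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} |\nabla w|^2 dxdy,\\
\int_{\ensuremath{\mathbb{R}}^n} \frac{|w_r(x,0)|^{2}}{|x|^{\alpha}}dx &= \int_{\ensuremath{\mathbb{R}}^n} \frac{|w(x,0)|^{2}}{|x|^{\alpha}}dx, \qquad
\int_{\ensuremath{\mathbb{R}}^n} \frac{|w_r(x,0)|^{2_{\alpha}^*(s)}}{|x|^{s}}dx = \int_{\ensuremath{\mathbb{R}}^n} \frac{|w(x,0)|^{2_{\alpha}^*(s)}}{|x|^{s}}dx,
\end{align*}
so that $I_{\gamma,s}(w_r)=I_{\gamma,s}(w)$ for every $r>0$. This invariance will be used below to renormalize the minimizing sequence provided by Ekeland's variational principle.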
Ekeland's variational principle \cite{Ekeland} applied to the functional $I(w):= I_{\gamma,s}(w)$ yields the existence of a minimizing sequence $(w_k)_k$ for $S(n,\alpha,\gamma,s)$ such that as $k \to \infty$, \begin{equation} \int_ {\mathbb{R}^n} \frac{|w_k(x,0)|^ {2_{\alpha}^*(s)}}{|x|^s}dx=1, \end{equation} \begin{equation} I(w_k) \longrightarrow S(n,\alpha,\gamma,s), \end{equation} and \begin{equation} {I'(w_k) \to 0} \text{ in } (X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1}))', \end{equation} where $(X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1}))'$ denotes the dual of $ X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})$. Consider the functionals $J,K:X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1}) \longrightarrow \ensuremath{\mathbb{R}} $ by $$J(w):= \frac{1}{2} \|w\|^2 = \frac{k_{\alpha}}{2} \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} |\nabla w|^2 dxdy - \frac{\gamma}{2} \int_{\mathbb{R}^n} \frac{|w(x,0)|^{2}}{|x|^{\alpha}}dx,$$ and $$K(w):= \frac{1}{2_{\alpha}^*(s)}\int_{\ensuremath{\mathbb{R}}^n} \frac{|w(x,0)|^{2_{\alpha}^*(s)}}{|x|^{s}} dx. $$ Straightforward computations yield that as $k \to \infty$, \begin{equation*} J(w_k) \longrightarrow \frac{1}{2} S(n,\alpha,\gamma,s), \end{equation*} and \begin{equation}\label{Ekeland principle} J'(w_k) - S(n,\alpha,\gamma,s) K'(w_k) \longrightarrow 0 \text{ in } (X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1}))'. % \end{equation} Consider now the Levy concentration functions $Q$ of $\frac{|w_k(x,0)|^ {2_{\alpha}^*(s)}}{|x|^s}$, defined as \begin{equation*} Q(r)= \int_{B_r} \frac{|w_k(x,0)|^ {2_{\alpha}^*(s)}}{|x|^s} dx \quad \text{for} \quad r> 0, \end{equation*} where $B_r$ is the ball of radius $r$ in $\mathbb{R}^n$. Since $\int_ {\mathbb{R}^n} \frac{|w_k(x,0)|^ {2_{\alpha}^*(s)}}{|x|^s}dx=1$ for all $k \in \mathbb{N}$, then by continuity, and up to considering a subsequence, there exists $r_k>0$ such that \begin{equation*} Q(r_k)= \int_{B_{r_k}} \frac{|w_k(x,0)|^ {2_{\alpha}^*(s)}}{|x|^s} dx = \frac{1}{2} \quad \hbox{for all $k \in \mathbb{N}$. } \end{equation*} Define the rescaled sequence $v_k(x,y):= r_k^{\frac{n-\alpha}{2}} w_k(r_k x, r_k y)$ for $k \in \mathbb{N}$ and $(x,y) \in \ensuremath{\mathbb{R}}_+^{n+1}$, in such a way that $(v_k)_{k \in \mathbb{N}}$ is also a minimizing sequence for $S(n,\alpha,\gamma,s)$. Indeed, it is easy to check that $v_k \in X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})$ and that \begin{equation*} k_{\alpha}\int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} |\nabla v_k|^2 dxdy - \gamma \int_{\mathbb{R}^n} \frac{|v_k(x,0)|^{2}}{|x|^{\alpha}}dx = k_{\alpha} \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} |\nabla w_k|^2 dxdy - \gamma \int_{\mathbb{R}^n} \frac{|w_k(x,0)|^{2}}{|x|^{\alpha}}dx, \end{equation*} \begin{equation} \lim\limits_{k \to \infty} \left( k_{\alpha} \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} |\nabla v_k|^2 dxdy - \gamma \int_{\mathbb{R}^n} \frac{|v_k(x,0)|^{2}}{|x|^{\alpha}}dx \right)= S(n,\alpha,\gamma,s) \end{equation} and \begin{equation*} \int_ {\mathbb{R}^n} \frac{|v_k(x,0)|^ {2_{\alpha}^*(s)}}{|x|^s}dx=\int_ {\mathbb{R}^n} \frac{|w_k(x,0)|^ {2_{\alpha}^*(s)}}{|x|^s}dx=1. \end{equation*} Moreover, we have that \begin{equation} \label{Levy-type for v_k} \int_{B_1} \frac{|v_k(x,0)|^ {2_{\alpha}^*(s)}}{|x|^s} dx = \frac{1}{2} \quad \hbox{for all $k \in \mathbb{N}$. 
} \end{equation} In addition, $\|v_k\|^2 = S(n,\alpha,\gamma,s) +o(1)$ as ${k \to \infty}$, so (\ref{comparable norms}) yields that $\left(\|v_k\|_{X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})} \right) _{k \in \mathbb{N}}$ is bounded. Therefore, without loss of generality, there exists a subsequence --still denoted by $v_k$-- such that \begin{equation} \label{extract weak and strong limit of minimizing sequence - ekeland} \hbox{$ v_k \rightharpoonup v \text{ in } X^{\alpha}(\ensuremath{\mathbb{R}}_+^{n+1})$ \ and \ $ {v_k(.,0) \to v(.,0)} \text{ in } L_{loc}^{q}(\ensuremath{\mathbb{R}}^n), \ \text{for every } 1\le q < 2_{\alpha}^*.$} \end{equation} We shall show that the weak limit of the minimizing sequence is not identically zero, that is, $v\not\equiv0$. Indeed, suppose $v\equiv0.$ It follows from (\ref{extract weak and strong limit of minimizing sequence - ekeland}) that \begin{equation} \label{weakly and strongly convergence to zero} \hbox{$ v_k \rightharpoonup 0 \text{ in } X^{\alpha}(\ensuremath{\mathbb{R}}_+^{n+1})$ \ and \ $ {v_k(.,0) \to 0} \text{ in } L_{loc}^{q}(\ensuremath{\mathbb{R}}^n), \ \text{for every } 1\le q < 2_{\alpha}^*.$} \end{equation} For $\delta>0$, define $B_\delta^+:= \{(x,y) \in \ensuremath{\mathbb{R}}_+^{n+1}: |(x,y)| < \delta \}$, $B_\delta:= \{x \in \ensuremath{\mathbb{R}}^n: |x| < \delta\}$ and let $\eta \in C_0^{\infty}(\overline{\ensuremath{\mathbb{R}}_+^{n+1}})$ be a cut-off function such that $\eta\equiv 1$ in $B^+_{\frac{1}{2}}$, $\eta\equiv 0$ in $\ensuremath{\mathbb{R}}_+^{n+1}\setminus B^+_1$ and $0 \le \eta \le 1$ in $\ensuremath{\mathbb{R}}_+^{n+1}.$ We use $\eta^2 v_k$ as a test function in (\ref{Ekeland principle}) to get that \begin{equation}\label{Use test eta^2 v_k in functional} \begin{aligned} &k_{\alpha} \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} \nabla v_k . \nabla (\eta^2 v_k ) dxdy - \gamma \int_{\mathbb{R}^n} \frac{v_k(x,0) (\eta^2 v_k(x,0) ) }{|x|^{\alpha}}dx\\ & \quad \quad = S(n,\alpha,\gamma,s) \int_ {\mathbb{R}^n} \frac{|v_k(x,0)|^{2_{\alpha}^*(s)-1} (\eta^2 v_k(x,0) ) }{|x|^s}dx+o(1). \end{aligned} \end{equation} Simple computations yield $ | \nabla(\eta v_k)|^2= |v_k \nabla \eta|^2 + \nabla v_k . \nabla(\eta^2 v_k),$ so that we have \begin{align*} & k_\alpha \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} | \nabla(\eta v_k)|^2 dxdy - k_\alpha \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} \nabla v_k . \nabla(\eta^2 v_k) dxdy\\ & \quad \quad = k_\alpha \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} |v_k \nabla \eta|^2 dxdy = k_\alpha \int_{E} y^{1-\alpha}|\nabla \eta|^2 |v_k |^2 dxdy, \end{align*} where $E:= \text{Supp}(|\nabla \eta|).$ Since $\alpha\in (0,2)$, $y^{1-\alpha}$ is an $A_2$-weight, and since $E$ is bounded, we have that the embedding $H^1(E, y^{1-\alpha}) \hookrightarrow L^2(E, y^{1-\alpha})$ is compact (see \cite{B-C-D-S 1} and \cite{Gol'dshtein-Ukhlov}). It follows from $(\ref{weakly and strongly convergence to zero})_1$ that $${ k_\alpha \int_{E} y^{1-\alpha} |v_k \nabla \eta|^2 dxdy \to 0} \text{ as } {k \to \infty }.$$ Therefore, $$k_\alpha \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} | \nabla(\eta v_k)|^2 dxdy = k_\alpha \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} \nabla v_k .
\nabla(\eta^2 v_k) dxdy + o(1).$$ By plugging the above estimate into (\ref{Use test eta^2 v_k in functional}), and using (\ref{Levy-type for v_k}), we get that \begin{equation} \label{estimate for grad term by test function eta^2 v_k} \begin{aligned} \|\eta v_k\|^2 & = k_{\alpha} \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} |\nabla (\eta v_k)|^2 dxdy - \gamma \int_{\mathbb{R}^n} \frac{|\eta v_k(x,0)|^2 }{|x|^{\alpha}}dx\\ & = S(n,\alpha,\gamma,s) \int_ {\mathbb{R}^n} \frac{|v_k(x,0)|^{2_{\alpha}^*(s)-2} (|\eta v_k(x,0)|^2) }{|x|^s}dx+o(1) \\ & \le S(n,\alpha,\gamma,s) \int_{B_1} \frac{|v_k(x,0)|^{2_{\alpha}^*(s)}}{|x|^s} dx + o(1) \\ &= \frac{S(n,\alpha,\gamma,s)}{2^{1-\frac{2}{2_{\alpha}^*(s)}}} \left( \int_{B_1} \frac{|v_k(x,0)|^ {2_{\alpha}^*(s)}}{|x|^s} dx \right)^\frac{2}{2_{\alpha}^*(s)}+o(1), \end{aligned} \end{equation} where we also used that $0\le \eta\le 1$ and that $\eta(.,0)$ vanishes outside $B_1$. By straightforward computations and Minkowski's inequality, we get that \begin{align*} \left( \int_{B_1} \frac{|v_k(x,0)|^ {2_{\alpha}^*(s)}}{|x|^s} dx \right)^\frac{1}{2_{\alpha}^*(s)} & = \left( \int_{B_1} \frac{|\eta v_k(x,0) +(1- \eta) v_k(x,0) |^ {2_{\alpha}^*(s)}}{|x|^s} dx \right)^\frac{1}{2_{\alpha}^*(s)}\\ &\le \left( \int_{B_1} \frac{|\eta v_k(x,0)|^ {2_{\alpha}^*(s)}}{|x|^s} dx \right)^\frac{1}{2_{\alpha}^*(s)} + \left( \int_{B_1} \frac{|(1- \eta )v_k(x,0)|^ {2_{\alpha}^*(s)}}{|x|^s} dx \right)^\frac{1}{2_{\alpha}^*(s)}\\ & \le \left( \int_{\ensuremath{\mathbb{R}}^n} \frac{|\eta v_k(x,0)|^ {2_{\alpha}^*(s)}}{|x|^s} dx \right)^\frac{1}{2_{\alpha}^*(s)} + C \left( \int_{B_1} |v_k(x,0)|^ {2_{\alpha}^*(s)} dx \right)^\frac{1}{2_{\alpha}^*(s)}. \end{align*} From $(\ref{weakly and strongly convergence to zero})_2$, and the fact that $2_{\alpha}^*(s) < 2_{\alpha}^*,$ we obtain $$ {\int_{B_1} |v_k(x,0)|^ {2_{\alpha}^*(s)} dx \to 0} \text{ as } {k \to \infty}. $$ Therefore, \begin{equation} \label{minkowski-type inequality for Hardy-Sobolev term} \left( \int_{B_1} \frac{|v_k(x,0)|^ {2_{\alpha}^*(s)}}{|x|^s} dx \right)^\frac{2}{2_{\alpha}^*(s)} \le \left( \int_{\ensuremath{\mathbb{R}}^n} \frac{|\eta v_k(x,0)|^ {2_{\alpha}^*(s)}}{|x|^s} dx \right)^\frac{2}{2_{\alpha}^*(s)} + o(1). \end{equation} Plugging the above inequality into (\ref{estimate for grad term by test function eta^2 v_k}), we get that \begin{align*} \|\eta v_k\|^2 &= k_{\alpha} \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} |\nabla (\eta v_k)|^2 dxdy - \gamma \int_{\mathbb{R}^n} \frac{|\eta v_k(x,0)|^2 }{|x|^{\alpha}}dx \\ & \le \frac{S(n,\alpha,\gamma,s)}{2^{1-\frac{2}{2_{\alpha}^*(s)}}} \left(\int_ {\mathbb{R}^n} \frac{|\eta v_k(x,0)|^{2_{\alpha}^*(s)}}{|x|^s}dx\right)^\frac{2}{2_{\alpha}^*(s)}+o(1). \end{align*} On the other hand, it follows from the definition of $S(n,\alpha,\gamma,s)$ that $$ S(n,\alpha,\gamma,s) \left(\int_ {\mathbb{R}^n} \frac{|\eta v_k(x,0)|^{2_{\alpha}^*(s)}}{|x|^s}dx\right)^\frac{2}{2_{\alpha}^*(s)} \le \|\eta v_k\|^2 \le \frac{S(n,\alpha,\gamma,s)}{2^{1-\frac{2}{2_{\alpha}^*(s)}}} \left(\int_ {\mathbb{R}^n} \frac{|\eta v_k(x,0)|^{2_{\alpha}^*(s)}}{|x|^s}dx\right)^\frac{2}{2_{\alpha}^*(s)}+o(1). $$ Note that $\frac{S(n,\alpha,\gamma,s)}{2^{1-\frac{2}{2_{\alpha}^*(s)}}} < S(n,\alpha,\gamma,s)$ for $ s \in (0, \alpha)$, hence $$\int_ {\ensuremath{\mathbb{R}}^n} \frac{|\eta v_k(x,0)|^{2_{\alpha}^*(s)}}{|x|^s}dx = o(1),$$ and (\ref{minkowski-type inequality for Hardy-Sobolev term}) then yields that $$\int_{B_1} \frac{|v_k(x,0)|^ {2_{\alpha}^*(s)}}{|x|^s} dx = o(1).$$ This contradicts (\ref{Levy-type for v_k}), and therefore $v\not\equiv 0$.
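In the remainder of the argument, we shall use the Brezis-Lieb lemma in the following weighted form (see \cite{Brezis-Lieb} and \cite{Yang}): if $(f_k)_{k \in \mathbb{N}}$ is bounded in $L^{2_{\alpha}^*(s)}(\ensuremath{\mathbb{R}}^n, |x|^{-s}dx)$ and $f_k \to f$ almost everywhere in $\ensuremath{\mathbb{R}}^n$, then
$$
\lim\limits_{k \to \infty} \left( \int_{\ensuremath{\mathbb{R}}^n} \frac{|f_k|^{2_{\alpha}^*(s)}}{|x|^{s}}dx - \int_{\ensuremath{\mathbb{R}}^n} \frac{|f_k-f|^{2_{\alpha}^*(s)}}{|x|^{s}}dx \right) = \int_{\ensuremath{\mathbb{R}}^n} \frac{|f|^{2_{\alpha}^*(s)}}{|x|^{s}}dx.
$$
This applies to $f_k=v_k(.,0)$: indeed, $\int_ {\mathbb{R}^n} \frac{|v_k(x,0)|^ {2_{\alpha}^*(s)}}{|x|^s}dx=1$ for all $k \in \mathbb{N}$, and, up to a further subsequence, $v_k(.,0) \to v(.,0)$ almost everywhere in $\ensuremath{\mathbb{R}}^n$ by (\ref{extract weak and strong limit of minimizing sequence - ekeland}).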
We now conclude by proving that $v_k$ converges weakly in $\ensuremath{\mathbb{R}}_+^{n+1}$ to $v$, and that $\int_ {\ensuremath{\mathbb{R}}^n} \frac{|v(x,0)|^ {2_{\alpha}^*(s)}}{|x|^s}dx=1.$ Indeed, for $k \in \mathbb{N},$ let $\theta_k = v_k-v,$ and use the Brezis-Lieb Lemma (see \cite{Brezis-Lieb} and \cite{Yang}) to deduce that $$1=\int_{\ensuremath{\mathbb{R}}^n} \frac{|v_k(x,0)|^{2_{\alpha}^*(s)}}{|x|^{s}}dx = \int_{\ensuremath{\mathbb{R}}^n} \frac{|v(x,0)|^{2_{\alpha}^*(s)}}{|x|^{s}}dx + \int_{\ensuremath{\mathbb{R}}^n} \frac{|\theta_k(x,0)|^{2_{\alpha}^*(s)}}{|x|^{s}}dx+o(1),$$ which yields that both \begin{equation}\label{Bound for H-S terms - v and theta_k. } \hbox{$\int_{\ensuremath{\mathbb{R}}^n} \frac{|v(x,0)|^{2_{\alpha}^*(s)}}{|x|^{s}}dx$ and $ \int_{\ensuremath{\mathbb{R}}^n} \frac{|\theta_k(x,0)|^{2_{\alpha}^*(s)}}{|x|^{s}}dx$ are in the interval $[0,1].$} \end{equation} The weak convergence $\theta_k \rightharpoonup 0$ in $X^\alpha(\ensuremath{\mathbb{R}}_+^{n+1})$ implies that $$\|v_k\|^2 = \|v+\theta_k\|^2 = \|v\|^2+\|\theta_k\|^2 + o(1).$$ By using (\ref{Ekeland principle}) and the definition of $S(n,\alpha,\gamma,s),$ we get that \begin{equation*} \begin{aligned} o(1) &= \|v_k\|^2 - S(n,\alpha,\gamma,s) \int_{\ensuremath{\mathbb{R}}^n} \frac{|v_k(x,0)|^{2_{\alpha}^*(s)}}{|x|^{s}}dx\\ & = \left(\|v\|^2 - S(n,\alpha,\gamma,s) \int_{\ensuremath{\mathbb{R}}^n} \frac{|v(x,0)|^{2_{\alpha}^*(s)}}{|x|^{s}}dx \right) + \left(\|\theta_k\|^2 - S(n,\alpha,\gamma,s) \int_{\ensuremath{\mathbb{R}}^n} \frac{|\theta_k(x,0)|^{2_{\alpha}^*(s)}}{|x|^{s}}dx \right) + o(1) \\ & \ge S(n,\alpha,\gamma,s) \left[ \left(\int_{\ensuremath{\mathbb{R}}^n} \frac{|v(x,0)|^{2_{\alpha}^*(s)}}{|x|^{s}}dx\right)^{\frac{2}{2^*_\alpha(s)}} - \int_{\ensuremath{\mathbb{R}}^n} \frac{|v(x,0)|^{2_{\alpha}^*(s)}}{|x|^{s}}dx \right] \\ &+ S(n,\alpha,\gamma,s) \left[ \left(\int_{\ensuremath{\mathbb{R}}^n} \frac{|\theta_k(x,0)|^{2_{\alpha}^*(s)}}{|x|^{s}}dx\right)^{\frac{2}{2^*_\alpha(s)}} - \int_{\ensuremath{\mathbb{R}}^n} \frac{|\theta_k(x,0)|^{2_{\alpha}^*(s)}}{|x|^{s}}dx \right]+o(1). \end{aligned} \end{equation*} Set now $$A:=\left(\int_{\ensuremath{\mathbb{R}}^n} \frac{|v(x,0)|^{2_{\alpha}^*(s)}}{|x|^{s}}dx\right)^{\frac{2}{2^*_\alpha(s)}} - \int_{\ensuremath{\mathbb{R}}^n} \frac{|v(x,0)|^{2_{\alpha}^*(s)}}{|x|^{s}}dx,$$ and $$B:= \left(\int_{\ensuremath{\mathbb{R}}^n} \frac{|\theta_k(x,0)|^{2_{\alpha}^*(s)}}{|x|^{s}}dx\right)^{\frac{2}{2^*_\alpha(s)}} - \int_{\ensuremath{\mathbb{R}}^n} \frac{|\theta_k(x,0)|^{2_{\alpha}^*(s)}}{|x|^{s}}dx.$$\\ Note that since $2_\alpha^*(s) > 2,$ we have $a^{\frac{2}{2_\alpha^*(s)}} \ge a $ for every $a \in [0,1]$, and equality holds if and only if $a=0$ or $a=1.$ It then follows from (\ref{Bound for H-S terms - v and theta_k. }) that both $A$ and $B$ are non-negative. On the other hand, the last inequality implies that $A+B =o(1),$ which means that $A=0 $ and $B=o(1)$, that is \begin{equation*} \int_{\ensuremath{\mathbb{R}}^n} \frac{|v(x,0)|^{2_{\alpha}^*(s)}}{|x|^{s}}dx = \left(\int_{\ensuremath{\mathbb{R}}^n} \frac{|v(x,0)|^{2_{\alpha}^*(s)}}{|x|^{s}}dx\right)^{\frac{2}{2^*_\alpha(s)}}, \end{equation*} hence $$ \text{ either } \int_{\ensuremath{\mathbb{R}}^n} \frac{|v(x,0)|^{2_{\alpha}^*(s)}}{|x|^{s}}dx =0 \text{ or } \int_{\ensuremath{\mathbb{R}}^n} \frac{|v(x,0)|^{2_{\alpha}^*(s)}}{|x|^{s}}dx =1. 
$$ The fact that $v \not\equiv 0$ yields $\int_{\ensuremath{\mathbb{R}}^n} \frac{|v(x,0)|^{2_{\alpha}^*(s)}}{|x|^{s}}dx \neq 0,$ and $\int_{\ensuremath{\mathbb{R}}^n} \frac{|v(x,0)|^{2_{\alpha}^*(s)}}{|x|^{s}}dx =1,$ which yields that $$k_{\alpha} \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} |\nabla v|^2 dxdy - \gamma \int_{\mathbb{R}^n} \frac{|v(x,0)|^{2}}{|x|^{\alpha}}dx = S(n,\alpha,\gamma,s). $$ Without loss of generality we may assume $v \ge 0$ (otherwise we take $|v|$ instead of $v$), and we then obtain a positive extremal for $S(n,\alpha,\gamma,s)$ in the case $s \in (0, \alpha).$\\ $\bullet$ Suppose now that $s=0$ and $\gamma \ge 0$. By a result in \cite{Cotsiolis-Tavoularis}, extremals exist for $S(n,\alpha,\gamma, s)$ whenever $s=0$ and $\gamma = 0$. Hence, we only need to show that there exists an extremal for $S(n,\alpha,\gamma,0)$ in the case $\gamma > 0$. First note that in this case, we have that \begin{equation} S(n,\alpha,\gamma,0) < S(n,\alpha,0,0). \end{equation} Indeed, if $w \in X^{\alpha}(\ensuremath{\mathbb{R}}_+^{n+1}) \setminus \{0\}$ is an extremal for $S(n,\alpha,0,0)$, then by estimating the functional at $w,$ and using the fact that $\gamma > 0$, we obtain \begin{equation*} \begin{aligned} S(n,\alpha,\gamma,0)&=\inf\limits_{u \in X^{\alpha}(\ensuremath{\mathbb{R}}_+^{n+1}) \setminus \{0\}} \quad \frac{\| u\|^2_{X^{\alpha}(\ensuremath{\mathbb{R}}_+^{n+1})}- \gamma\int_{\ensuremath{\mathbb{R}}^n}\frac{|u(x,0)|^{2}}{|x|^{\alpha}}dx}{(\int_{\ensuremath{\mathbb{R}}^n} |u(x,0)|^{2_{\alpha}^*}dx)^\frac{2}{2_{\alpha}^*}}\\ & \le \frac{\| w\|^2_{X^{\alpha}(\ensuremath{\mathbb{R}}_+^{n+1})}- \gamma\int_{\ensuremath{\mathbb{R}}^n}\frac{|w(x,0)|^{2}}{|x|^{\alpha}}dx}{(\int_{\ensuremath{\mathbb{R}}^n} |w(x,0)|^{2_{\alpha}^*}dx)^\frac{2}{2_{\alpha}^*}}< \frac{\| w\|^2_{X^{\alpha}(\ensuremath{\mathbb{R}}_+^{n+1})}}{(\int_{\ensuremath{\mathbb{R}}^n} |w(x,0)|^{2_{\alpha}^*}dx)^\frac{2}{2_{\alpha}^*}}=S(n,\alpha,0,0). \end{aligned} \end{equation*} Now we show that $S(n,\alpha,\gamma,0)$ is attained whenever $S(n,\alpha,\gamma,0) < S(n,\alpha,0,0).$ Indeed, let $(w_k)_{k\in \mathbb{N}} \subset X^{\alpha}(\ensuremath{\mathbb{R}}_+^{n+1}) \setminus \{ 0 \}$ be a minimizing sequence for $S(n,\alpha,\gamma,0)$. Up to multiplying by a positive constant, we assume that \begin{equation} \lim\limits_{k \to \infty} \left( k_{\alpha} \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} |\nabla w_k|^2 dxdy - \gamma \int_{\mathbb{R}^n} \frac{|w_k(x,0)|^{2}}{|x|^{\alpha}}dx \right)= S(n,\alpha,\gamma,0) \end{equation} and \begin{equation} \label{minimizing sequence for sobolev is 1} \int_ {\mathbb{R}^n} |w_k(x,0)|^ {2_{\alpha}^*}dx=1. 
\end{equation} The sequence $\left(\|w_k\|_{X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})} \right) _{k \in \mathbb{N}}$ is therefore bounded, and there exists a subsequence - still denoted $w_k$- such that $w_k \rightharpoonup w \text{ weakly }\text{in } X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1}).$ The weak convergence implies that \begin{align*} \label{Brezis-Lieb Lemma for extension-norm} \begin{split} \| w_k\|^2_{X^{\alpha}(\ensuremath{\mathbb{R}}_+^{n+1}) }&= \| w_k - w\|^2_{X^{\alpha}(\ensuremath{\mathbb{R}}_+^{n+1})} +\| w\|^2_{X^{\alpha}(\ensuremath{\mathbb{R}}_+^{n+1})}+ 2 k_\alpha \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} \langle \nabla w, \nabla (w-w_k) \rangle dx dy\\ & = \| w_k - w\|^2_{X^{\alpha}(\ensuremath{\mathbb{R}}_+^{n+1})} +\| w\|^2_{X^{\alpha}(\ensuremath{\mathbb{R}}_+^{n+1})}+o(1) \end{split} \end{align*} and \begin{align*} \int_{\ensuremath{\mathbb{R}}^n} \frac{|w(x,0)|^{2}}{|x|^{\alpha}}dx &= \int_{\ensuremath{\mathbb{R}}^n} \frac{|(w-w_k)(x,0)|^{2}}{|x|^{\alpha}}dx +\int_{\ensuremath{\mathbb{R}}^n} \frac{|w_k(x,0)|^{2}}{|x|^{\alpha}}dx + 2\int_{\ensuremath{\mathbb{R}}^n} \frac{w_k(x,0) (w-w_k)(x,0)}{|x|^{\alpha}}dx \\ & = \int_{\ensuremath{\mathbb{R}}^n} \frac{|(w-w_k)(x,0)|^{2}}{|x|^{\alpha}}dx +\int_{\ensuremath{\mathbb{R}}^n} \frac{|w_k(x,0)|^{2}}{|x|^{\alpha}}dx+ o(1). \end{align*} The Brezis-Lieb Lemma (\cite[Theorem 1]{Brezis-Lieb}) and (\ref{minimizing sequence for sobolev is 1}) yield that $\int_ {\ensuremath{\mathbb{R}}^n} |(w_k - w) (x,0)|^ {2_{\alpha}^*}dx \le 1,$ for large $k$, hence \begin{align*} S(n,\alpha,\gamma,0) &= \|w_k\|^2_{X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})}- \gamma \int_{\ensuremath{\mathbb{R}}^n} \frac{|w_k(x,0)|^{2}}{|x|^{\alpha}}dx+ o(1) \\ & \ge \| w_k - w \|^2_{X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})} +\| w\|^2_{X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})}- \gamma \int_{\ensuremath{\mathbb{R}}^n} \frac{|w(x,0)|^{2}}{|x|^{\alpha}}dx +o(1)\\ &\ge S(n,\alpha,0,0) (\int_ {\ensuremath{\mathbb{R}}^n} |(w_k - w) (x,0)|^ {2_{\alpha}^*}dx)^\frac{2}{2_{\alpha}^*}+ S(n,\alpha,\gamma,0)(\int_ {\ensuremath{\mathbb{R}}^n} |w(x,0)|^ {2_{\alpha}^*}dx)^\frac{2}{2_{\alpha}^*}+o(1)\\ &\ge S(n,\alpha,0,0) \int_ {\ensuremath{\mathbb{R}}^n} |(w_k - w) (x,0)|^ {2_{\alpha}^*}dx+ S(n,\alpha,\gamma,0) \int_ {\ensuremath{\mathbb{R}}^n} |w (x,0)|^ {2_{\alpha}^*}dx+o(1). \end{align*} Use the Brezis-Lieb Lemma again to get that \begin{align*} S(n,\alpha,\gamma,0) &\ge \left(S(n,\alpha,0,0) - S(n,\alpha,\gamma,0)\right) \int_ {\mathbb{R}^n} |(w_k - w) (x,0)|^ {2_{\alpha}^*}dx+S(n,\alpha,\gamma,0) \int_ {\mathbb{R}^n} |w_k (x,0)|^ {2_{\alpha}^*}dx+o(1) \\ & =\left(S(n,\alpha,0,0) - S(n,\alpha,\gamma,0)\right) \int_ {\mathbb{R}^n} |(w_k - w) (x,0)|^ {2_{\alpha}^*}dx+S(n,\alpha,\gamma,0)+o(1) . \end{align*} Since $S(n,\alpha,\gamma,0) < S(n,\alpha,0,0)$, we get that ${w_k(.,0) \to w(.,0)}$ in $L^{2_\alpha^*}(\ensuremath{\mathbb{R}}^n),$ that is $ \int_ {\mathbb{R}^n} |w (x,0)|^ {2_{\alpha}^*}dx =1.$ The lower semi-continuity of $I$ then implies that $w$ is a minimizer for $S(n,\alpha,\gamma,0).$ Note that $|w| $ is also an extremal in $X^\alpha(\ensuremath{\mathbb{R}}_+^{n+1})$ for $S(n,\alpha,\gamma,0),$ therefore there exists a non-negative extremal for $S(n,\alpha,\gamma,s)$ in the case $\gamma > 0$ and $s=0$, and this completes the proof of the case when $s=0$ and $\gamma \geq 0$. Now we consider the case when $\gamma <0$. 
\begin{claim} \label{Claim: no extremal when gamma <0 } If $\gamma\le 0$, then $S(n,\alpha,\gamma,0) = S(n,\alpha,0,0)$, hence there are no extremals for $S(n,\alpha,\gamma, 0)$ whenever $\gamma<0.$ \end{claim} Indeed, we first note that for $\gamma \le 0, $ we have $S(n,\alpha,\gamma,0) \ge S(n,\alpha,0,0).$ On the other hand, if we consider $w \in X^{\alpha}(\ensuremath{\mathbb{R}}_+^{n+1}) \setminus \{0\}$ to be an extremal for $S(n,\alpha,0,0)$ and define, for $\delta \in \ensuremath{\mathbb{R}}$ and a fixed $\bar{x} \in \ensuremath{\mathbb{R}}^n \setminus \{0\}$, the function $ w_{\delta}(x,y):= w(x-\delta \bar{x} , y)$ for $x\in\ensuremath{\mathbb{R}}^n$ and $y \in \ensuremath{\mathbb{R}}_+,$ then by a change of variables, we get $$S(n, \alpha, \gamma, 0) \leq I_{\delta}: =\frac{ \| w_\delta\|^2_{X^{\alpha}(\ensuremath{\mathbb{R}}_+^{n+1})}- \gamma \int_{\ensuremath{\mathbb{R}}^n} \frac{ |w_\delta(x,0)|^2 }{|x|^\alpha}dx}{(\int_{\mathbb{R}^n} | w_\delta(x,0)|^{2_{\alpha}^*}dx)^\frac{2}{2_{\alpha}^*}} = \frac{ \| w\|^2_{X^{\alpha}(\ensuremath{\mathbb{R}}_+^{n+1})}- \gamma \int_{\ensuremath{\mathbb{R}}^n} \frac{ |w(x,0)|^2 }{|x+\delta \bar{x}|^\alpha}dx}{(\int_{\mathbb{R}^n} | w(x,0)|^{2_{\alpha}^*}dx)^\frac{2}{2_{\alpha}^*}},$$ so that $$ S(n, \alpha, \gamma, 0) \leq \lim\limits_{\delta \to \infty} I_\delta = \frac{ \| w\|^2_{X^{\alpha}(\ensuremath{\mathbb{R}}_+^{n+1})}}{(\int_{\mathbb{R}^n} | w(x,0)|^{2_{\alpha}^*}dx)^\frac{2}{2_{\alpha}^*}}=S(n,\alpha,0,0).$$ Therefore, $S(n,\alpha,\gamma, 0) = S(n,\alpha,0,0).$ Moreover, if there were an extremal $w$ for $S(n,\alpha,\gamma,0)$ with $\gamma<0$, then, since $\int_{\ensuremath{\mathbb{R}}^n} \frac{ |w(x,0)|^2 }{|x|^\alpha}dx>0$, we would have $$S(n,\alpha,0,0) \le \frac{ \| w\|^2_{X^{\alpha}(\ensuremath{\mathbb{R}}_+^{n+1})}}{(\int_{\mathbb{R}^n} | w(x,0)|^{2_{\alpha}^*}dx)^\frac{2}{2_{\alpha}^*}} < \frac{ \| w\|^2_{X^{\alpha}(\ensuremath{\mathbb{R}}_+^{n+1})}- \gamma \int_{\ensuremath{\mathbb{R}}^n} \frac{ |w(x,0)|^2 }{|x|^\alpha}dx}{(\int_{\mathbb{R}^n} | w(x,0)|^{2_{\alpha}^*}dx)^\frac{2}{2_{\alpha}^*}} = S(n,\alpha,\gamma,0)=S(n,\alpha,0,0),$$ a contradiction. Hence there are no extremals for $S(n,\alpha,\gamma,0)$ whenever $\gamma<0.$ This establishes (2) and completes the proof of Theorem \ref{Theorem Existence for fractional H-S-M, using the best constant}. Going back to Theorem \ref{Theorem the best fractional H-S constan}, let $w \ge 0$ be an $\alpha$-harmonic minimizer for $S(n,\alpha,\gamma,s)$ in $X^{\alpha}(\ensuremath{\mathbb{R}}_+^{n+1}) \setminus \{0\},$ which exists by Theorem \ref{Theorem Existence for fractional H-S-M, using the best constant}. Then $ u:=\text{Tr}(w)= w(.,0) \in H^\frac{\alpha}{2}(\ensuremath{\mathbb{R}}^n)\setminus \{0\}$ and, by (\ref{extension norm}), $u$ is a minimizer for $\mu_{\gamma,s}(\ensuremath{\mathbb{R}}^n)$ in $H^\frac{\alpha}{2}(\ensuremath{\mathbb{R}}^n) \setminus \{0\}$. Therefore, (1) and (2) of Theorem \ref{Theorem the best fractional H-S constan} hold. For (3), let $u^*$ be the Schwarz symmetrization of $u$. By the fractional Polya-Szeg\"o inequality \cite{Y.J. Park P-S inequality}, we have $$\| (-\Delta)^{\frac{\alpha}{4}} u^* \|^2_{L^2(\ensuremath{\mathbb{R}}^n)} \le \| (-\Delta)^{\frac{\alpha}{4}} u \|^2_{L^2(\ensuremath{\mathbb{R}}^n)}. $$ Furthermore, it is clear (by Theorem 3.4
of \cite{Lieb-Loss}) that \begin{equation*} \hbox{$ \int_{\ensuremath{\mathbb{R}}^n}\frac{|u|^{2}}{|x|^{\alpha}}dx \le \int_{\ensuremath{\mathbb{R}}^n}\frac{|u^*|^{2}}{|x|^{\alpha}}dx $\quad and \quad $ \int_{\ensuremath{\mathbb{R}}^n} \frac{|u|^{2_{\alpha}^*(s)}}{|x|^{s}}dx \le \int_{\ensuremath{\mathbb{R}}^n} \frac{|u^*|^{2_{\alpha}^*(s)}}{|x|^{s}}dx.$} \end{equation*} Combining the above inequalities and the fact that $ \gamma \ge 0,$ we get that $$\mu_{\gamma,s}(\ensuremath{\mathbb{R}}^n) \le \frac{ \| (-\Delta)^{\frac{\alpha}{4}} u^* \|^2_{L^2(\ensuremath{\mathbb{R}}^n)}- \gamma\int_{\ensuremath{\mathbb{R}}^n}\frac{|u^*|^{2}}{|x|^{\alpha}}dx}{(\int_{\ensuremath{\mathbb{R}}^n} \frac{|u^*|^{2_{\alpha}^*(s)}}{|x|^{s}}dx)^\frac{2}{2_{\alpha}^*(s)}} \le \frac{ \| (-\Delta)^{\frac{\alpha}{4}} u \|^2_{L^2(\ensuremath{\mathbb{R}}^n)}- \gamma\int_{\ensuremath{\mathbb{R}}^n}\frac{|u|^{2}}{|x|^{\alpha}}dx}{(\int_{\ensuremath{\mathbb{R}}^n} \frac{|u|^{2_{\alpha}^*(s)}}{|x|^{s}}dx)^\frac{2}{2_{\alpha}^*(s)}}=\mu_{\gamma,s}(\ensuremath{\mathbb{R}}^n). $$ This implies that $u^*$ is also a minimizer and achieves the infimum of $\mu_{\gamma,s}(\ensuremath{\mathbb{R}}^n).$ Therefore the equality sign holds in all the inequalities above, that is \begin{equation*} \hbox{$ \gamma \int_{\ensuremath{\mathbb{R}}^n}\frac{|u|^{2}}{|x|^{\alpha}}dx = \gamma \int_{\ensuremath{\mathbb{R}}^n}\frac{|u^*|^{2}}{|x|^{\alpha}}dx $\quad and \quad $ \int_{\ensuremath{\mathbb{R}}^n} \frac{|u|^{2_{\alpha}^*(s)}}{|x|^{s}}dx = \int_{\ensuremath{\mathbb{R}}^n} \frac{|u^*|^{2_{\alpha}^*(s)}}{|x|^{s}}dx.$} \end{equation*} From Theorem 3.4 of \cite{Lieb-Loss}, in the case of equality, it follows that $u=|u|= u^*$ if either $\gamma \neq 0$ or $s \neq 0$. In particular, $u$ is positive, radially symmetric and decreasing about the origin. Hence $u$ approaches a limit as ${|x| \to \infty},$ which must be zero. \section{Proof of Theorem \ref{Theorem Main result}} We shall now use the existence of extremals for the fractional Hardy-Sobolev type inequalities, established in Section \ref{Section: the proof of attainability of S(n,alpha,gamma,s)}, to prove that there exists a nontrivial weak solution for (\ref{Main problem.prime}). The energy functional $\Psi$ associated to (\ref{Main problem.prime}) is defined as follows: \begin{equation}\label{Psi} \Psi(w)= \frac{1}{2} \| w\|^2 -\frac{1}{2_{\alpha}^*} \int_{\ensuremath{\mathbb{R}}^n} |u|^{2_{\alpha}^*}dx -\frac{1}{2_{\alpha}^*(s)}\int_{\ensuremath{\mathbb{R}}^n} \frac{|u|^{2_{\alpha}^*(s)}}{|x|^{s}} dx, \quad \text{ for } w \in X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1}), \end{equation} where again $u:= \text{Tr}(w)=w(.,0)$. The fractional trace Hardy, Sobolev and Hardy-Sobolev inequalities yield that $\Psi \in C^1(X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})).$ Note that any nontrivial critical point of $\Psi$ is a weak solution to (\ref{Main problem.prime}). Throughout this section, we use the following notation for any sequence $(w_k)_{k \in \mathbb{N}}$ in $X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})$: $$u_k:=\text{Tr}(w_k)=w_k(.,0), \ \text{ for all } k \in \mathbb{N}.$$ We split the proof into three parts: \subsection{Existence of a suitable Palais-Smale sequence} We first verify that the energy functional $\Psi$ satisfies the conditions of the Mountain Pass Lemma, leading to a minimax energy level that is below a suitable threshold. The following is standard.
\begin{lemma}[Ambrosetti and Rabinowitz \cite{Ambrosetti-Rabinowitz}] \label{Theorem MPT- Ambrosetti-Rabinowitz version} Let $(V,\|\,\|)$ be a Banach space and $\Psi: {V \to \ensuremath{\mathbb{R}}}$ a $C^1$-functional satisfying the following conditions: (a) $ \Psi(0)=0,$ \\ (b) There exist $\rho, R>0$ such that $\Psi(u) \ge \rho$ for all $u \in V$ with $\|u\|=R,$\\ (c) There exists $v_0 \in V $ such that $\limsup\limits_{t \to \infty} \Psi(tv_0) <0.$ Let $t_0>0$ be such that $\|t_0v_0\|> R$ and $\Psi(t_0v_0)<0,$ and define $$c_{v_0}(\Psi):=\inf\limits_{\sigma \in \Gamma} \sup\limits_{t \in[0,1]} \Psi(\sigma(t)) \text{ where } \Gamma:= \{\sigma\in C([0,1],V): \sigma(0)=0 \text{ and } \sigma(1)=t_0v_0 \}.$$ Then, $c_{v_0}(\Psi) \ge \rho>0$, and there exists a Palais-Smale sequence at level $c_{v_0}(\Psi)$, that is, a sequence $(w_k)_{k \in \mathbb{N}}$ in $V$ such that $$ \lim\limits_{k \to \infty} \Psi(w_k)=c_{v_0}(\Psi) \ \text{ and } \lim\limits_{k \to \infty} \Psi'(w_k)=0 \, \, \text{strongly in} \, V' .$$ \end{lemma} We now prove the following. \begin{proposition} \label{MPT with bound} Suppose $0 \le \gamma < \gamma_H \text{ and } 0 \le s<\alpha$, and consider $\Psi$ defined in (\ref{Psi}) on the Banach space $X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})$. Then, there exists $w \in X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1}) \setminus \{0\}$ such that $w\ge 0$ and $0<c_w(\Psi)<c^\star,$ where \begin{equation} \label{Definition of C^star} c^\star = \text{min} \left\{ \frac{\alpha}{2n} S(n,\alpha,\gamma,0)^{\frac{n}{\alpha}}, \frac{\alpha-s}{2(n-s)} S(n,\alpha,\gamma,s)^{\frac{n-s}{\alpha-s}} \right\}, \end{equation} and a Palais-Smale sequence $(w_k)_{k \in \mathbb{N}}$ in $X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})$ at energy level $c_w(\Psi)$, that is, \begin{equation} \label{P-S condition (Lim) on minimizing sequence} \lim\limits_{k \to \infty} \Psi'(w_k)=0 \text{ strongly in } (X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1}))' \text{ and } \lim\limits_{k \to \infty} \Psi(w_k)= c_w(\Psi). \end{equation} \end{proposition} \begin{proof}[Proof of Proposition \ref{MPT with bound}] In the sequel, we will use freely the following elementary identities involving $2_{\alpha}^*(s)$: $$\hbox{$ \frac{1}{2} - \frac{1}{2_{\alpha}^*}= \frac{\alpha}{2n}$,\quad $\frac{2_{\alpha}^*}{2_{\alpha}^*-2}= \frac{n}{\alpha}$,\quad $\frac{1}{2} - \frac{1}{2_{\alpha}^*(s)}= \frac{\alpha-s}{2(n-s)}$\quad and \quad $\frac{2_{\alpha}^*(s)}{2_{\alpha}^*(s)-2}=\frac{n-s}{\alpha-s}.$}$$ First, we note that the functional $\Psi$ satisfies the hypotheses of Lemma \ref{Theorem MPT- Ambrosetti-Rabinowitz version}, and that condition (c) is satisfied for any $w \in X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1}) \setminus \{0\}.$ Indeed, it is standard to show that $\Psi \in C^1(X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1}))$ and clearly $\Psi(0)=0$, so that (a) of Lemma \ref{Theorem MPT- Ambrosetti-Rabinowitz version} is satisfied.
For (b) note that by the definition of $S(n,\alpha,\gamma,s),$ we have that $$ (\int_{\ensuremath{\mathbb{R}}^n} |u|^{2_{\alpha}^*}dx)^\frac{2}{2_{\alpha}^*}\le S(n,\alpha,\gamma,0)^{-1} \|w\|^2 \text { and } (\int_{\ensuremath{\mathbb{R}}^n} \frac{|u|^{2_{\alpha}^*(s)}}{|x|^{s}}dx)^\frac{2}{2_{\alpha}^*(s)}\le S(n,\alpha,\gamma,s)^{-1} \|w\|^2.$$ Hence, \begin{equation} \begin{aligned} \Psi(w) &\ge \frac{1}{2} \| w\|^2-\frac{1}{2_{\alpha}^*} S(n,\alpha,\gamma,0)^{-\frac{2_{\alpha}^*}{2}} \| w\|^{2_{\alpha}^*} -\frac{1}{2_{\alpha}^*(s)} S(n,\alpha,\gamma,s)^{-\frac{2_{\alpha}^*(s)}{2}} \| w\|^{2_{\alpha}^*(s)} \\ & = \left( \frac{1}{2}-\frac{1}{2_{\alpha}^*} S(n,\alpha,\gamma,0)^{-\frac{2_{\alpha}^*}{2}} \| w\|^{2_{\alpha}^*-2} -\frac{1}{2_{\alpha}^*(s)} S(n,\alpha,\gamma,s)^{-\frac{2_{\alpha}^*(s)}{2}} \| w\|^{2_{\alpha}^*(s)-2} \right) \|w\|^2. \end{aligned} \end{equation} Since $s \in [0,\alpha),$ we have that $2_{\alpha}^*-2 >0 $ and $2_{\alpha}^*(s)-2 >0.$ Thus, by (\ref{comparable norms}), we can find $R>0$ such that $\Psi(w) \ge \rho$ for all $w \in X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})$ with $\|w\|_{X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})} = R.$ Regarding (c), note that $$\Psi(tw) = \frac{t^2}{2} \| w\|^2 -\frac{t^{2_{\alpha}^*}}{2_{\alpha}^*} \int_{\ensuremath{\mathbb{R}}^n} |u|^{2_{\alpha}^*}dx -\frac{ t^{ 2_{\alpha}^*(s)} }{2_{\alpha}^*(s)}\int_{\ensuremath{\mathbb{R}}^n} \frac{|u|^{2_{\alpha}^*(s)}}{|x|^{s}} dx,$$ hence $ \lim\limits_{t \to \infty} \Psi(tw)= -\infty$ for any $w \in X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1}) \setminus \{0\}$, which means that there exists $t_w>0$ such that $\|t_w w\|_{X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})} > R$ and $\Psi(tw) <0,$ for $t\ge t_w.$ Now we show that there exists $ w \in X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1}) \setminus \{0\}$ such that $w\ge 0$ and \begin{equation}\label{bound for c_w when s=0} c_w (\Psi)< \frac{\alpha}{2n} S(n,\alpha,\gamma,0)^{\frac{n}{\alpha}}. \end{equation} From Theorem \ref{Theorem Existence for fractional H-S-M, using the best constant}, we know that there exists a non-negative extremal $w$ in $X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})$ for $S(n,\alpha,\gamma,0)$ whenever $\gamma \ge 0.$ By the definition of $t_w$ and the fact that $c_w > 0,$ we obtain $$c_w(\Psi) \le \sup\limits_{t\ge0} \Psi(tw)\le \sup\limits_{t\ge0}f(t), \quad \text{ where } f(t) = \frac{t^2}{2} \| w\|^2 -\frac{t^{2_{\alpha}^*}}{2_{\alpha}^*} \int_{\ensuremath{\mathbb{R}}^n} |u|^{2_{\alpha}^*}dx \quad \forall t>0. $$ Straightforward computations yield that $f(t)$ attains its maximum at the point $\tilde{t} = \left( \frac{\|w\|^2}{\int_{\ensuremath{\mathbb{R}}^n} |u|^{2_{\alpha}^*}dx} \right)^\frac{1}{2^*_\alpha -2}.$ It follows that $$\sup\limits_{t\ge0}f(t) = (\frac{1}{2} - \frac{1}{2_{\alpha}^*}) \left( \frac{\|w\|^2}{(\int_{\ensuremath{\mathbb{R}}^n} |u|^{2_{\alpha}^*}dx)^\frac{2}{2_{\alpha}^*}}\right)^\frac{2_{\alpha}^*}{2_{\alpha}^*-2} = \frac{\alpha}{2n} \left( \frac{\|w\|^2}{(\int_{\ensuremath{\mathbb{R}}^n} |u|^{2_{\alpha}^*}dx)^\frac{2}{2_{\alpha}^*}}\right)^{\frac{n}{\alpha}}.$$ Since $w$ is an extremal for $S(n,\alpha,\gamma,0)$, we get that $$c_w(\Psi) \le \sup\limits_{t\ge0}f(t) = \frac{\alpha}{2n} S(n,\alpha,\gamma,0)^{\frac{n}{\alpha}}. $$ We now need to show that equality does not hold in (\ref{bound for c_w when s=0}). Indeed, otherwise we would have that $0<c_w = \sup\limits_{t\ge0}\Psi(tw) = \sup\limits_{t\ge0}f(t).$ Consider $t_1$ (resp. 
$t_2 > 0$) where $\sup\limits_{t\ge0} \Psi(tw) (\text{resp.,} \ \sup\limits_{t\ge0}f(t) )$ is attained. We get that $$f(t_1) - \frac{ t_1^{ 2_{\alpha}^*(s)} }{2_{\alpha}^*(s)}\int_{\ensuremath{\mathbb{R}}^n} \frac{|w(x,0)|^{2_{\alpha}^*(s)}}{|x|^{s}} dx = f(t_2),$$ which means that $f(t_1) > f(t_2)$ since $t_1 > 0$. This contradicts the fact that $t_2$ is a maximum point of $f(t)$, hence the strict inequality in (\ref{bound for c_w when s=0}). To finish the proof of Proposition \ref{MPT with bound}, we can assume without loss of generality that $$\frac{\alpha-s}{2(n-s)} S(n,\alpha,\gamma,s)^{\frac{n-s}{\alpha-s}} < \frac{\alpha}{2n} S(n,\alpha,\gamma,0)^{\frac{n}{\alpha}}.$$ Let now $w$ in $X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1}) \setminus \{0\} $ be a positive minimizer for $S(n,\alpha,\gamma,s)$, whose existence was established in Section \ref{Section: the proof of attainability of S(n,alpha,gamma,s)}, and set $$\bar{f}(t) = \frac{t^2}{2} \| w\|^2 -\frac{ t^{ 2_{\alpha}^*(s)} }{2_{\alpha}^*(s)}\int_{\ensuremath{\mathbb{R}}^n} \frac{|u|^{2_{\alpha}^*(s)}}{|x|^{s}} dx.$$ As above, we have $$c_w(\Psi) \le \sup\limits_{t\ge0}\bar{f}(t) = (\frac{1}{2} - \frac{1}{2_{\alpha}^*(s)}) \left( \frac{\|w\|^2}{(\int_{\ensuremath{\mathbb{R}}^n} \frac{|u|^{2_{\alpha}^*(s)}}{|x|^{s}}dx)^\frac{2}{2_{\alpha}^*(s)}}\right)^\frac{2_{\alpha}^*(s)}{2_{\alpha}^*(s)-2} = \frac{\alpha-s}{2(n-s)} S(n,\alpha,\gamma,s)^{\frac{n-s}{\alpha-s}}. $$ Again, if equality holds, then $0<c_w(\Psi) \le \sup\limits_{t\ge0}\Psi(tw) = \sup\limits_{t\ge0}\bar{f}(t)$, and if $t_1,t_2>0$ are points where the respective suprema are attained, then a contradiction is reached since $$\bar{f}(t_1) -\frac{ t_1^{ 2_{\alpha}^*} }{2_{\alpha}^*}\int_{\ensuremath{\mathbb{R}}^n} |u|^{2_{\alpha}^*} dx = \bar{f}(t_2).$$ Therefore, \begin{equation*} 0 < c_w(\Psi) < c^\star = \text{min} \left\{ \frac{\alpha}{2n} S(n,\alpha,\gamma,0)^{\frac{n}{\alpha}}, \frac{\alpha-s}{2(n-s)} S(n,\alpha,\gamma,s)^{\frac{n-s}{\alpha-s}} \right\}. \end{equation*} Finally, the existence of a Palais-Smale sequence at that level follows immediately from Lemma \ref{Theorem MPT- Ambrosetti-Rabinowitz version}. \end{proof} \subsection{Analysis of the Palais-Smale sequences} We now study the concentration properties of weakly null Palais-Smale sequences. For $\delta>0$, we shall write $B_\delta := \left\{ x \in \ensuremath{\mathbb{R}}^n : |x| < \delta \right\}.$ \begin{proposition} \label{Proposition lim of sobolev term in small ball when miniming sequence converges weakly to zero} Let $0 \le \gamma < \gamma_H \text{ and } 0 < s<\alpha.$ Assume that $(w_k)_{k \in \mathbb{N}}$ is a Palais-Smale sequence of $\Psi$ at energy level $c \in (0, c^\star )$. If $w_k \rightharpoonup 0 $ in $X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1}) \text{ as } {k \to \infty}$, then there exists a positive constant $\epsilon_0= \epsilon_0 (n, \alpha, \gamma,c,s)>0$ such that for every $\delta>0$, one of the following holds: \begin{enumerate} \item $\limsup\limits_{k \to \infty} \int_{B_{\delta}}|u_k|^{2^*_{\alpha}}dx = \limsup\limits_{k \to \infty} \int_{B_{\delta}} \frac{|u_k|^{2^*_{\alpha}(s)}}{|x|^s}dx =0;$ \item $\limsup\limits_{k \to \infty} \int_{B_{\delta}} |u_k|^{2^*_{\alpha}}dx\,\, \hbox{and} \,\, \limsup\limits_{k \to \infty} \int_{B_{\delta}} \frac{|u_k|^{2^*_{\alpha}(s)}}{|x|^s}dx \ge \epsilon_0.$ \end{enumerate} \end{proposition} The proof of Proposition \ref{Proposition lim of sobolev term in small ball when miniming sequence converges weakly to zero} requires the following two lemmas.
\begin{lemma} \label{Lemma limi of Hardy-Sobolev and grad terms are zero on D} Let $(w_k)_{k \in \mathbb{N}}$ be a Palais-Smale sequence as in Proposition \ref{Proposition lim of sobolev term in small ball when miniming sequence converges weakly to zero}. If $w_k \rightharpoonup 0 $ in $X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1}),$ then for any $D \subset\subset \ensuremath{\mathbb{R}}^n \setminus \{0\},$ there exists a subsequence of $(w_k)_{k \in \mathbb{N}}$, still denoted by $(w_k)_{k \in \mathbb{N}}$, such that
\begin{equation}\label{lim of Hardy and Hardy-Sobolev terms on D* are zero} \lim\limits_{k \to \infty} \int_{D} \frac{|u_k|^2}{|x|^\alpha} dx =\lim\limits_{k \to \infty} \int_{D} \frac{|u_k|^{2_{\alpha}^*(s)}}{|x|^{s}} dx =0 \end{equation}
and
\begin{equation}\label{lim of Sobolev and grad terms on D are zero} \lim\limits_{k \to \infty} \int_{D} |u_k|^{2_{\alpha}^*} dx= \lim\limits_{k \to \infty} \int_D |(-\Delta)^{\frac{\alpha}{4}} u_k|^2dx = 0 , \end{equation}
where $u_k:=w_k(.,0)$ for all $k\in \mathbb{N}.$ \end{lemma}
\begin{proof}[Proof of Lemma \ref{Lemma limi of Hardy-Sobolev and grad terms are zero on D}] Fix $D \subset\subset \ensuremath{\mathbb{R}}^n \setminus \{0\},$ and note that the following fractional Sobolev embedding is compact: $$H^{\frac{\alpha}{2}}(\ensuremath{\mathbb{R}}^n)\hookrightarrow L^q(D) \text{ for every } 1\le q < 2^*_\alpha .$$ Using the trace inequality (\ref{trace inequality between extension norm and fractional sobolev}), and the assumption that $w_k \rightharpoonup 0 $ in $X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1}),$ we get that $$ {u_k \to 0} \quad \text{ strongly in } L^q(D) \text{ for every } 1\le q < 2^*_\alpha.$$ On the other hand, the fact that $|x|^{-1}$ is bounded on $D \subset\subset \ensuremath{\mathbb{R}}^n \setminus \{0\}$ implies that there exist constants $C_1,C_2 >0$ such that $$0 \le \lim\limits_{k \to \infty} \int_{D} \frac{|u_k|^2}{|x|^\alpha} dx \le C_1 \lim\limits_{k \to \infty} \int_{D} |u_k|^2 dx $$ and $$0 \le \lim\limits_{k \to \infty} \int_{D} \frac{|u_k|^{2_{\alpha}^*(s)}}{|x|^{s}} dx \le C_2 \lim\limits_{k \to \infty} \int_{D} |u_k|^{2_{\alpha}^*(s)} dx . $$ Since $s \in (0,\alpha),$ both exponents $2$ and $2_{\alpha}^*(s)$ belong to $[1, 2_{\alpha}^*).$ Thus, (\ref{lim of Hardy and Hardy-Sobolev terms on D* are zero}) holds. To show (\ref{lim of Sobolev and grad terms on D are zero}), we let $\eta \in C_0^{\infty}(\ensuremath{\mathbb{R}}_+^{n+1})$ be a cut-off function such that $\eta_*:=\eta(.,0) \in C_0^{\infty}(\ensuremath{\mathbb{R}}^n \setminus \{0\}),$ $\eta_* \equiv 1$ in $D$ and $0 \le \eta \le 1$ in $\ensuremath{\mathbb{R}}_+^{n+1}.$ We first note that \begin{equation} \label{estimate for grad term with cut-off (eta w_k)} k_{\alpha} \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-{\alpha}} |\nabla ( \eta w_k )|^2 dxdy= k_{\alpha} \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-{\alpha}} | \eta \nabla w_k|^2 dxdy+ o(1). \end{equation} Indeed, apply the following elementary inequality for vectors $X,Y$ in $\ensuremath{\mathbb{R}}^{n+1}$, $$\left | |X+Y|^2 - |X|^2 \right | \le C (|X||Y|+|Y|^2),$$ with $ X= y^{\frac{1-\alpha}{2}} \eta \nabla w_k $ and $Y= y^{\frac{1-\alpha}{2}} w_k \nabla\eta $, to get for all $k \in \mathbb{N}$, that \begin{align*} \left| y^{1-\alpha} | \nabla( \eta w_k )|^2 - y^{1-\alpha} | \eta \nabla w_k|^2 \right| \le C \left( y^{1-\alpha} | \eta \nabla w_k | |w_k \nabla\eta| + y^{1-\alpha} |w_k \nabla\eta |^2 \right).
\end{align*} By H\"older's inequality, we get \begin{equation} \label{Holder for gradient 1} \begin{aligned} &\left | \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} | \nabla( \eta w_k )|^2 dxdy- \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} | \eta \nabla w_k|^2 dxdy \right| \\ & \le \ C_3 \|w_k\|_{X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})} \ (\int_{\text{Supp} (\nabla \eta) } y^{1-\alpha} |w_k|^2 dxdy )^\frac{1}{2} + C_3 \int_{\text{Supp} (\nabla \eta) } y^{1-\alpha} |w_k|^2 dxdy \\ & \le C_4 \left[(\int_{\text{Supp} (\nabla \eta) } y^{1-\alpha} |w_k|^2 dxdy)^{\frac{1}{2}} +\int_{\text{Supp} (\nabla \eta) } y^{1-\alpha} |w_k|^2 dxdy \right]. \end{aligned} \end{equation} Since the embedding $H^1(\text{Supp} (\nabla \eta), y^{1-\alpha}) \hookrightarrow L^2(\text{Supp} (\nabla \eta), y^{1-\alpha})$ is compact, and $w_k \rightharpoonup 0 $ in $X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1}),$ we get that $$\int_{\text{Supp} (\nabla \eta) } y^{1-\alpha} |w_k|^2 dxdy = o(1),$$ which gives $$\int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} | \nabla( \eta w_k )|^2 dxdy= \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} | \eta \nabla w_k|^2 dxdy+o(1). $$ Thus, (\ref{estimate for grad term with cut-off (eta w_k)}) holds. Now recall that the sequence $(w_k)_{k \in \mathbb{N}}$ has the following property: \begin{equation} \label{Derivative of the functional converges strongly to zero} \lim\limits_{k \to \infty} \Psi'(w_k)=0 \text{ strongly in } (X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1}))' . \end{equation} Since $\eta^2 w_k \in X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})$ for all $k \in \mathbb{N}$, we can use it as a test function in (\ref{Derivative of the functional converges strongly to zero}) to get that \begin{equation*} \begin{aligned} o(1) &= \langle \Psi' (w_k), \eta^2 w_k\rangle \\ & = k_{\alpha} \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} \langle \nabla w_k , \nabla (\eta^2 w_k )\rangle dxdy - \gamma \int_{\ensuremath{\mathbb{R}}^n} \frac{\eta_*^2 |u_k|^2 }{|x|^{\alpha}}dx - \int_ {\ensuremath{\mathbb{R}}^n} \eta_*^2 |u_k|^{2_{\alpha}^*} dx - \int_ {\ensuremath{\mathbb{R}}^n} \frac{\eta_*^2 |u_k|^{2_{\alpha}^*(s)} }{|x|^s}dx. \end{aligned} \end{equation*} Regarding the first term, we have $$k_{\alpha} \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} \langle \nabla w_k , \nabla (\eta^2 w_k )\rangle dxdy = k_\alpha \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} |\eta \nabla w_k |^2 dxdy +k_{\alpha} \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} w_k \langle \nabla (\eta^2) ,\nabla w_k \rangle dxdy .$$ From H\"older's inequality, and the fact that ${w_k \to 0}$ in $L^2(\text{Supp} (|\nabla \eta|), y^{1-\alpha}),$ it follows that as $k \to \infty$, \begin{align*} & \left| k_{\alpha} \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} \langle \nabla w_k , \nabla (\eta^2 w_k )\rangle dxdy - k_\alpha \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} |\eta \nabla w_k |^2 dxdy\right| = \left| k_{\alpha} \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} w_k \langle \nabla( \eta^2) ,\nabla w_k \rangle dxdy \right| \\ &\qquad \qquad \qquad \le k_{\alpha} \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} |w_k| |\nabla (\eta^2)| |\nabla w_k| dxdy \le C \int_{\text{Supp}(|\nabla \eta|)} y^{1-\alpha} |w_k| |\nabla w_k| dxdy \\ & \qquad \qquad \qquad \le C \| w_k \|_{X^{\alpha}(\ensuremath{\mathbb{R}}^{n+1}_+)} \left(\int_{\text{Supp}(|\nabla \eta|)} y^{1-\alpha} |w_k|^2 dxdy\right)^{\frac{1}{2}} \\ &\qquad \qquad \qquad = o(1). 
\end{align*} Thus, we have proved that $$k_{\alpha} \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} \langle \nabla w_k , \nabla (\eta^2 w_k )\rangle dxdy = k_\alpha \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} |\eta \nabla w_k |^2 dxdy + o(1).$$ Using the above estimate coupled with (\ref{estimate for grad term with cut-off (eta w_k)}), we obtain \begin{equation} \begin{aligned} o(1) &= \langle \Psi' (w_k), \eta^2 w_k\rangle \\ & = k_\alpha \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} |\eta \nabla w_k |^2 dxdy - \gamma \int_{K} \frac{\eta_*^2 |u_k|^2 }{|x|^{\alpha}}dx - \int_ {\ensuremath{\mathbb{R}}^n} \eta_*^2 |u_k|^{2_{\alpha}^*} dx - \int_ {K} \frac{\eta_*^2 |u_k|^{2_{\alpha}^*(s)} }{|x|^s}dx +o(1)\\ &=k_\alpha \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} |\nabla (\eta w_k ) |^2 dxdy - \int_ {\ensuremath{\mathbb{R}}^n} \eta_*^2 |u_k|^{2_{\alpha}^*} dx +o(1)\\ & \ge \|\eta w_k\|^2 - \int_ {\ensuremath{\mathbb{R}}^n} \eta_*^2 |u_k|^{2_{\alpha}^*} dx +o(1), \quad \text{ as } {k \to \infty}, \end{aligned} \end{equation} where $K= \text{Supp}(\eta_*).$ In the second equality, we also used that $K \subset\subset \ensuremath{\mathbb{R}}^n \setminus \{0\}$, so that both integrals over $K$ are $o(1)$ by (\ref{lim of Hardy and Hardy-Sobolev terms on D* are zero}) applied to $K$; the last inequality holds since $\gamma \ge 0.$ Therefore, \begin{equation} \|\eta w_k\|^2 \le \int_ {\ensuremath{\mathbb{R}}^n} |\eta_*u_k|^2 |u_k|^{2_{\alpha}^*-2} dx +o(1) \quad \text{ as } {k \to \infty}. \end{equation} By H\"older's inequality, and using the definition of $S(n,\alpha,\gamma,0),$ we then get that \begin{equation} \begin{aligned} \|\eta w_k\|^2 &\le \left(\int_ {\ensuremath{\mathbb{R}}^n} |\eta_*u_k|^{2_{\alpha}^*}dx \right)^ {\frac{2}{2_{\alpha}^*}} \left(\int_ {\ensuremath{\mathbb{R}}^n} |u_k|^{2_{\alpha}^*} dx \right)^{\frac{2_{\alpha}^*-2}{2_\alpha^*}} +o(1)\\ & \le S(n,\alpha,\gamma,0)^{-1} \| \eta w_k \|^2 \left(\int_ {\ensuremath{\mathbb{R}}^n} |u_k|^{2_{\alpha}^*} dx \right)^{\frac{2_{\alpha}^*-2}{2_\alpha^*}} +o(1). \end{aligned} \end{equation} Thus, \begin{equation}\label{estimate for new norm with (eta w_k) by o(1) } \left[ 1- S(n,\alpha,\gamma,0)^{-1} \left(\int_ {\ensuremath{\mathbb{R}}^n} |u_k|^{2_{\alpha}^*} dx \right)^{\frac{2_{\alpha}^*-2}{2_\alpha^*}} \right] \| \eta w_k \|^2 \le o(1). \end{equation} In addition, it follows from (\ref{P-S condition (Lim) on minimizing sequence}) that $$\Psi(w_k) - \frac{1}{2} \langle \Psi'(w_k), w_k\rangle = c+ o(1),$$ that is, \begin{equation} \label{Summation of Hardy and Hardy-Sobolev terms for P-S sequence is c+o(1) } (\frac{1}{2} - \frac{1}{2_{\alpha}^*}) \int_{\ensuremath{\mathbb{R}}^n} |u_k|^{2_{\alpha}^*} dx + (\frac{1}{2} - \frac{1}{2_{\alpha}^*(s)}) \int_{\ensuremath{\mathbb{R}}^n} \frac{|u_k|^{2_{\alpha}^*(s)}}{|x|^{s}}dx = c+ o(1), \end{equation} from which it follows that \begin{equation} \label{upper bounded for Hardy term P-S sequence 2n/c} \int_{\ensuremath{\mathbb{R}}^n} |u_k|^{2_{\alpha}^*} dx \le \frac{2n}{\alpha} c +o(1), \quad \text{ as } {k \to \infty}. \end{equation} Plugging (\ref{upper bounded for Hardy term P-S sequence 2n/c}) into (\ref{estimate for new norm with (eta w_k) by o(1) }), we obtain that $$ \left[ 1- S(n,\alpha,\gamma,0)^{-1} (\frac{2n}{\alpha} c )^{\frac{\alpha}{n}} \right] \| \eta w_k \|^2 \le o(1), \text{ as } {k \to \infty}. $$ On the other hand, by the upper bound (\ref{Definition of C^star}) on $c,$ we have that $$ c < \frac{\alpha}{2n} S(n,\alpha,\gamma,0)^{\frac{n}{\alpha}}.$$ This yields that $ 1- S(n,\alpha,\gamma,0)^{-1} (\frac{2n}{\alpha} c )^{\frac{\alpha}{n}} > 0,$ and therefore, $ \lim\limits_{k \to \infty} \| \eta w_k \|^2 = 0.$
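For clarity, we spell out the two elementary steps used above; this is a short verification involving no new ingredient. Since the second term in (\ref{Summation of Hardy and Hardy-Sobolev terms for P-S sequence is c+o(1) }) is nonnegative, we have
$$\frac{\alpha}{2n}\int_{\ensuremath{\mathbb{R}}^n} |u_k|^{2_{\alpha}^*} dx = \Big(\frac{1}{2}-\frac{1}{2_{\alpha}^*}\Big)\int_{\ensuremath{\mathbb{R}}^n} |u_k|^{2_{\alpha}^*} dx \le c+ o(1),$$
which is (\ref{upper bounded for Hardy term P-S sequence 2n/c}), and
$$c < \frac{\alpha}{2n} S(n,\alpha,\gamma,0)^{\frac{n}{\alpha}} \ \Longrightarrow \ \frac{2n}{\alpha} c < S(n,\alpha,\gamma,0)^{\frac{n}{\alpha}} \ \Longrightarrow \ \Big(\frac{2n}{\alpha} c\Big)^{\frac{\alpha}{n}} < S(n,\alpha,\gamma,0),$$
which gives the positivity of the bracket above.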
Using (\ref{extension norm}) and (\ref{comparable norms}), we obtain that $$\lim\limits_{k \to \infty} \int_{\ensuremath{\mathbb{R}}^n} |(-\Delta)^{\frac{\alpha}{4}} (\eta_* u_k)|^2dx = \lim\limits_{k \to \infty} k_{\alpha}\int_{\ensuremath{\mathbb{R}}^{n+1}_+} y^{1-\alpha} |\nabla (\eta w_k)|^2 dxdy =0.$$ It also follows from the definition of $S(n, \alpha, \gamma,0) $ that $\lim\limits_{k \to \infty} \int_{\ensuremath{\mathbb{R}}^n} | \eta_* u_k |^{2^*_{\alpha}}dx =0$, hence, $$\lim\limits_{k \to \infty} \int_{\ensuremath{\mathbb{R}}^n} |(-\Delta)^{\frac{\alpha}{4}} (\eta_* u_k)|^2dx= \lim\limits_{k \to \infty} \int_{\ensuremath{\mathbb{R}}^n} | \eta_* u_k |^{2^*_{\alpha}} dx=0.$$ Since ${\eta_*}_{|_D} \equiv 1,$ the last equality yields (\ref{lim of Sobolev and grad terms on D are zero}). \end{proof} \begin{lemma} \label{Lemma relation between theta zeta and mu with S(n,alpha,gamma,s)} Let $(w_k)_{k \in \mathbb{N}}$ be a Palais-Smale sequence as in Proposition \ref{Proposition lim of sobolev term in small ball when miniming sequence converges weakly to zero} and let $u_k:=Tr (w_k)= w_k(.,0)$. For any $\delta > 0,$ set \begin{equation} \label{Definition theta, zeta and mu} \begin{aligned} &\theta:= \limsup\limits_{k \to \infty} \int_{B_{\delta}} |u_k|^{2^*_{\alpha}}dx; \qquad \zeta:= \limsup\limits_{k \to \infty} \int_{B_{\delta}} \frac{|u_k|^{2^*_{\alpha}(s)}}{|x|^s}dx \, \, {\rm and} \\ &\mu:= \limsup\limits_{k \to \infty} \int_{B_\delta} \left( |(-\Delta)^{\frac{\alpha}{4}} u_k|^2 - \gamma \frac{|u_k|^2}{|x|^{\alpha}} \right) dx . \end{aligned} \end{equation} If $w_k \rightharpoonup 0 $ in $X^{\alpha} (\ensuremath{\mathbb{R}}_+^{n+1})$ as ${k \to \infty},$ then the following hold: \begin{enumerate} \item $ \theta^{\frac{2}{2^*_{\alpha}}} \le S(n,\alpha,\gamma,0)^{-1} \mu \quad \text{ and } \quad \zeta^{\frac{2}{2^*_{\alpha}(s)}} \le S(n,\alpha,\gamma,s)^{-1} \mu.$ \item $\mu \le \theta + \zeta.$ \end{enumerate} \end{lemma} \begin{proof}[Proof of Lemma \ref{Lemma relation between theta zeta and mu with S(n,alpha,gamma,s)}] First note that it follows from Lemma \ref{Lemma limi of Hardy-Sobolev and grad terms are zero on D} that $\theta, \zeta$ and $\mu $ are well-defined and are independent of the choice of $\delta > 0.$ Let now $\eta \in C_0^{\infty}(\ensuremath{\mathbb{R}}_+^{n+1})$ be a cut-off function such that $\eta_*:=\eta(.,0) \equiv 1$ in $B_\delta,$ and $0 \le \eta \le 1$ in $\ensuremath{\mathbb{R}}_+^{n+1}.$ 1. Since $\eta w_k \in X^{\alpha}(\ensuremath{\mathbb{R}}_+^{n+1})$, we get from the definition of $S(n,\alpha,\gamma,0)$ that \begin{equation}\label{fractional Hardy-Sobolev inequality when s=0} S(n,\alpha,\gamma,0)(\int_{\ensuremath{\mathbb{R}}^n} |\eta_* u_k|^{2_{\alpha}^*}dx)^\frac{2}{2_{\alpha}^*}\le k_{\alpha}\int_{\ensuremath{\mathbb{R}}^{n+1}_+} y^{1-\alpha} |\nabla(\eta w_k)|^2 dxdy - \gamma \int_{\ensuremath{\mathbb{R}}^n} \frac{|\eta_* u_k|^2}{|x|^{\alpha}} dx.
\end{equation} On the other hand, from the definition of $\eta$ and (\ref{extension norm}), it follows that \begin{align*} &k_{\alpha}\int_{\ensuremath{\mathbb{R}}^{n+1}_+} y^{1-\alpha} |\nabla(\eta w_k)|^2 dxdy - \gamma \int_{\ensuremath{\mathbb{R}}^n} \frac{|\eta_* u_k|^2}{|x|^{\alpha}} dx = \int_{\ensuremath{\mathbb{R}}^n} \left( |(-\Delta)^{\frac{\alpha}{4}} (\eta_* u_k)|^2 - \gamma \frac{|\eta_*u_k|^2}{|x|^{\alpha}} \right) dx \\ &= \int_{B_\delta} \left( |(-\Delta)^{\frac{\alpha}{4}} u_k|^2 - \gamma \frac{|u_k|^2}{|x|^{\alpha}} \right) dx + \int_{\text{ Supp}(\eta_*) \setminus B_\delta} \left( |(-\Delta)^{\frac{\alpha}{4}} (\eta_* u_k)|^2 - \gamma \frac{|\eta_*u_k|^2}{|x|^{\alpha}} \right) dx, \end{align*} and $$(\int_{B_\delta} |u_k|^{2_{\alpha}^*}dx)^\frac{2}{2_{\alpha}^*} \le (\int_{\ensuremath{\mathbb{R}}^n} |\eta_* u_k|^{2_{\alpha}^*}dx)^\frac{2}{2_{\alpha}^*}. $$ Note that $ \text{ Supp}(\eta_*) \setminus B_\delta \subset\subset \ensuremath{\mathbb{R}}^n \setminus \{0\}.$ Therefore, taking upper limits on both sides of (\ref{fractional Hardy-Sobolev inequality when s=0}), and using Lemma \ref{Lemma limi of Hardy-Sobolev and grad terms are zero on D}, we get that $$S(n,\alpha,\gamma,0) (\int_{B_\delta} |u_k|^{2_{\alpha}^*}dx)^\frac{2}{2_{\alpha}^*}\le \int_{B_\delta} \left( |(-\Delta)^{\frac{\alpha}{4}} u_k|^2 - \gamma \frac{|u_k|^2}{|x|^{\alpha}} \right) dx +o(1) \ \text{ as }{k \to \infty},$$ which gives $$\theta^{\frac{2}{2^*_{\alpha}}} \le S(n,\alpha,\gamma,0)^{-1} \mu.$$ Similarly, we can prove that $$\zeta^{\frac{2}{2^*_{\alpha}(s)}} \le S(n,\alpha,\gamma,s)^{-1} \mu.$$ 2. Since $\eta^2 w_k \in X^{\alpha}(\ensuremath{\mathbb{R}}_+^{n+1})$ and $\langle \Psi'(w_k) , \eta^2 w_k \rangle =o(1) \text{ as } {k \to \infty}$, we have \begin{equation}\label{Estimating Psi with eta w_k} \begin{aligned} o(1) &= \langle \Psi'(w_k) , \eta^2 w_k \rangle \\ &=k_{\alpha} \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} \langle \nabla w_k , \nabla (\eta^2 w_k ) \rangle dxdy - \gamma \int_{\ensuremath{\mathbb{R}}^n} \frac{|\eta_*u_k|^2}{|x|^{\alpha}} dx - \int_ {\ensuremath{\mathbb{R}}^n} \eta^2_*|u_k|^{2_{\alpha}^*} dx - \int_{\ensuremath{\mathbb{R}}^n} \frac{\eta^2_* |u_k|^{2_{\alpha}^*(s)}}{|x|^{s}}dx \\ & = \left(k_{\alpha} \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} |\eta \nabla w_k|^2 dxdy - \gamma \int_{\ensuremath{\mathbb{R}}^n} \frac{|\eta_*u_k|^2}{|x|^{\alpha}} dx \right) - \int_ {\ensuremath{\mathbb{R}}^n} \eta^2_* |u_k|^{2_{\alpha}^*} dx - \int_{\ensuremath{\mathbb{R}}^n} \frac{\eta^2_* |u_k|^{2_{\alpha}^*(s)}}{|x|^{s}}dx \\ & \quad + k_{\alpha} \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} w_k \langle \nabla (\eta^2) ,\nabla w_k \rangle dxdy.
\end{aligned} \end{equation} By H\"older's inequality, and the fact that ${w_k \to 0}$ in $L^2(\text{Supp} (|\nabla \eta|), y^{1-\alpha}),$ we obtain that \begin{align*} \left| k_{\alpha} \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} w_k \langle \nabla (\eta^2) ,\nabla w_k \rangle dxdy \right| &\le k_{\alpha} \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} |w_k| |\nabla (\eta^2)| |\nabla w_k| dxdy \\ &\le C \int_{\text{Supp}(|\nabla \eta|)} y^{1-\alpha} |w_k| |\nabla w_k| dxdy \\ & \le C \| w_k \|_{X^{\alpha}(\ensuremath{\mathbb{R}}^{n+1}_+)} \|w_k\|_{L^2(\text{Supp} (|\nabla \eta|), y^{1-\alpha})}\\ & = o(1) \quad \hbox{as ${k \to \infty}.$} \end{align*} Plugging the above estimate into (\ref{Estimating Psi with eta w_k}) and using (\ref{extension norm}), we get that \begin{equation*} \begin{aligned} o(1) &= \langle \Psi'(w_k) , \eta^2 w_k \rangle \\ & = \left(k_{\alpha} \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} |\nabla (\eta w_k)|^2 dxdy - \gamma \int_{\ensuremath{\mathbb{R}}^n} \frac{|\eta_*u_k|^2}{|x|^{\alpha}} dx \right) - \int_ {\ensuremath{\mathbb{R}}^n} \eta^2_* |u_k|^{2_{\alpha}^*} dx - \int_{\ensuremath{\mathbb{R}}^n} \frac{\eta^2_* |u_k|^{2_{\alpha}^*(s)}}{|x|^{s}}dx \\ & = \int_{\ensuremath{\mathbb{R}}^n} \left( |(-\Delta)^{\frac{\alpha}{4}} (\eta_* u_k)|^2 - \gamma \frac{|\eta_*u_k|^2}{|x|^{\alpha}} \right) dx - \int_ {\ensuremath{\mathbb{R}}^n} \eta^2_* |u_k|^{2_{\alpha}^*} dx - \int_{\ensuremath{\mathbb{R}}^n} \frac{\eta^2_* |u_k|^{2_{\alpha}^*(s)}}{|x|^{s}}dx \\ & \ge \int_{B_\delta} \left( |(-\Delta)^{\frac{\alpha}{4}} u_k|^2 - \gamma \frac{|u_k|^2}{|x|^{\alpha}} \right) dx - \int_ {B_\delta} |u_k|^{2_{\alpha}^*} dx - \int_{B_\delta} \frac{ |u_k|^{2_{\alpha}^*(s)}}{|x|^{s}}dx \\ &- \int_{\text{ Supp}(\eta_*) \setminus B_\delta} \left(\gamma \frac{|\eta_*u_k|^2}{|x|^{\alpha}} + \eta^2_* |u_k|^{2_{\alpha}^*} + \frac{\eta^2_* |u_k|^{2_{\alpha}^*(s)}}{|x|^{s}} \right) dx +o(1). \end{aligned} \end{equation*} Noting that $\text{ Supp}(\eta_*) \setminus B_\delta \subset\subset \ensuremath{\mathbb{R}}^n \setminus \{0\},$ and taking the upper limits on both sides, we get that $\mu \le \theta+\zeta.$ \end{proof} \begin{proof}[Proof of Proposition \ref{Proposition lim of sobolev term in small ball when miniming sequence converges weakly to zero}] It follows from Lemma \ref{Lemma relation between theta zeta and mu with S(n,alpha,gamma,s)} that $$ \theta^{\frac{2}{2^*_\alpha}} \le S(n,\alpha,\gamma,0)^{-1} \mu \le S(n,\alpha,\gamma,0)^{-1} \theta + S(n,\alpha,\gamma,0)^{-1} \zeta, $$ which gives \begin{equation} \label{estimate for theta and zeta , using estimates in Lemmas} \begin{aligned} &\theta^{\frac{2}{2^*_\alpha}} (1- S(n,\alpha,\gamma,0)^{-1} \theta^{\frac{2^*_\alpha - 2}{2^*_\alpha}}) \le S(n,\alpha,\gamma,0)^{-1} \zeta. \end{aligned} \end{equation} On the other hand, by (\ref{Summation of Hardy and Hardy-Sobolev terms for P-S sequence is c+o(1) }), we have $$\theta \le \frac{2n}{\alpha} c. $$ Substituting the last inequality into (\ref{estimate for theta and zeta , using estimates in Lemmas}), we get that $$ (1- S(n,\alpha,\gamma,0)^{-1} (\frac{2n}{\alpha} c)^{\frac{\alpha }{n}}) \theta^{\frac{2}{2^*_\alpha}} \le S(n,\alpha,\gamma,0)^{-1} \zeta.
$$ Recall that the upper bound (\ref{Definition of C^star}) on $c$ implies that $$1- S(n,\alpha,\gamma,0)^{-1} (\frac{2n}{\alpha} c)^{\frac{\alpha }{n}} > 0.$$ Therefore, there exists $\delta_1= \delta_1(n,\alpha,\gamma,c)>0$ such that $ \theta^{\frac{2}{2^*_\alpha}} \le \delta_1 \zeta.$ Similarly, there exists $\delta_2= \delta_2(n,\alpha,\gamma,c,s)>0$ such that $ \zeta^{\frac{2}{2^*_\alpha(s)}} \le \delta_2 \theta. $ These two inequalities yield that there exists $\epsilon_0= \epsilon_0 (n, \alpha, \gamma,c,s)>0$ such that \begin{equation} \label{Definition of epsilon_0} \text{ either } \quad \theta= \zeta = 0 \quad \text{ or } \ \quad \{\theta \ge \epsilon_0 \text{ and } \zeta \ge \epsilon_0 \}. \end{equation} Indeed, $\theta =0$ forces $\zeta =0$ and vice versa, while if both are positive, then combining the two inequalities gives lower bounds on $\theta$ and $\zeta$ that depend only on $\delta_1$ and $\delta_2$. It follows from the definition of $\theta$ and $\zeta$ that $$\text{ either }\limsup\limits_{k \to \infty} \int_{B_{\delta}} |u_k|^{2^*_{\alpha}}dx = \limsup\limits_{k \to \infty} \int_{B_{\delta}} \frac{|u_k|^{2^*_{\alpha}(s)}}{|x|^s}dx =0;$$ $$ \text{ or } \quad \ \ \limsup\limits_{k \to \infty} \int_{B_{\delta}} |u_k|^{2^*_{\alpha}}dx \ge \epsilon_0 \quad {\rm and} \quad \limsup\limits_{k \to \infty} \int_{B_{\delta}} \frac{|u_k|^{2^*_{\alpha}(s)}}{|x|^s}dx \ge \epsilon_0.$$ \end{proof} \subsection{End of proof of Theorem \ref{Theorem Main result in extended form}} We shall first eliminate the possibility of a zero weak limit for the Palais-Smale sequence of $\Psi$, then we prove that the nontrivial weak limit is indeed a weak solution of Problem (\ref{Main problem.prime}). In the sequel, $(w_k)_{k \in \mathbb{N}}$ will denote the Palais-Smale sequence for $\Psi$ at energy level $c \in (0, c^\star)$ obtained in Proposition \ref{MPT with bound}. First we show that \begin{equation}\label{limsup} \limsup\limits_{k \to \infty} \int_{\ensuremath{\mathbb{R}}^n} |u_k|^{2^*_\alpha} dx > 0. \end{equation} Indeed, otherwise $\lim\limits_{k \to \infty} \int_{\ensuremath{\mathbb{R}}^n} |u_k|^{2^*_\alpha} dx =0,$ which once combined with the fact that $\langle \Psi'(w_k),w_k \rangle \to 0$ yields that $ \|w_k\|^2 = \int_ {\ensuremath{\mathbb{R}}^n} \frac{ |u_k|^{2_{\alpha}^*(s)}}{|x|^s} dx +o(1). $ By combining this estimate with the definition of $S(n, \alpha, \gamma, s)$, we obtain $$ \left(\int_ {\ensuremath{\mathbb{R}}^n} \frac{ |u_k|^{2_{\alpha}^*(s)}}{|x|^s}dx\right)^{\frac{2}{2_\alpha^*(s)}}\le S(n, \alpha, \gamma, s)^{-1} \|w_k\|^2 \le S(n, \alpha, \gamma, s)^{-1} \int_ {\ensuremath{\mathbb{R}}^n} \frac{ |u_k|^{2_{\alpha}^*(s)}}{|x|^s} dx +o(1), $$ which implies that $$\left(\int_ {\ensuremath{\mathbb{R}}^n} \frac{ |u_k|^{2_{\alpha}^*(s)}}{|x|^s}dx\right)^{\frac{2}{2_\alpha^*(s)}} \left[ 1- S(n, \alpha, \gamma, s)^{-1} (\int_ {\ensuremath{\mathbb{R}}^n} \frac{ |u_k|^{2_{\alpha}^*(s)}}{|x|^s} dx )^{\frac{2^*_\alpha(s)-2}{2^*_\alpha(s)}} \right] \le o(1).$$ It follows from (\ref{Definition of C^star}) and (\ref{Summation of Hardy and Hardy-Sobolev terms for P-S sequence is c+o(1) }) that as ${k \to \infty}$, $$\int_ {\ensuremath{\mathbb{R}}^n} \frac{ |u_k|^{2_{\alpha}^*(s)}}{|x|^s} dx= 2c \frac{n-s}{\alpha-s} +o(1) \quad \text{ and } \quad (1- S(n,\alpha,\gamma,s)^{-1} (2 c \frac{n-s}{\alpha-s})^{\frac{\alpha-s}{n-s}}) >0.$$ Hence, \begin{equation} \label{lim of Hardy-Sobolev term is zero (Contradiction)} \lim\limits_{k \to \infty}\int_ {\ensuremath{\mathbb{R}}^n} \frac{ |u_k|^{2_{\alpha}^*(s)}}{|x|^s} dx = 0.
\end{equation} Using that $\lim\limits_{k \to \infty} \int_{\ensuremath{\mathbb{R}}^n} |u_k|^{2^*_\alpha} dx =0,$ in conjunction with (\ref{lim of Hardy-Sobolev term is zero (Contradiction)}) and (\ref{Summation of Hardy and Hardy-Sobolev terms for P-S sequence is c+o(1) }), we get that $c+o(1) = 0,$ which contradicts the fact that $c>0.$ This completes the proof of (\ref{limsup}). Now, we show that for small enough $\epsilon >0$, there exists another Palais-Smale sequence $(v_k)_{k \in \mathbb{N}}$ for $\Psi$ satisfying the properties of Proposition \ref{Proposition lim of sobolev term in small ball when miniming sequence converges weakly to zero}, which is also bounded in $X^\alpha(\ensuremath{\mathbb{R}}^{n+1}_+)$ and satisfies \begin{equation}\label{epsilon} \int_{B_1} |v_k(x,0)|^{2^*_\alpha} dx =\epsilon \quad \hbox{for all $k \in \mathbb{N}.$} \end{equation} For that, consider $\epsilon_0$ as given in Proposition \ref{Proposition lim of sobolev term in small ball when miniming sequence converges weakly to zero}. Let $\beta = \limsup\limits_{k \to \infty} \int_{\ensuremath{\mathbb{R}}^n} |u_k|^{2^*_\alpha} dx$, which is positive by (\ref{limsup}). Set $\epsilon_1 := \text{min} \{\beta , \frac{\epsilon_0}{2}\}$ and fix $\epsilon \in (0,\epsilon_1).$ Up to a subsequence, there exists by continuity a sequence of radii $(r_k )_k$ such that $ \int_{B_{r_k}} |u_k|^{2^*_\alpha} dx =\epsilon$ for each $k \in \mathbb{N}.$ Let now $$ v_k(x,y) := r_k^{\frac{n-\alpha}{2}} w_k(r_k x, r_k y) \quad \text{ for } x \in \ensuremath{\mathbb{R}}^n \text{ and } y \in \ensuremath{\mathbb{R}}_+ .$$ It is clear (see the change of variables recorded below) that \begin{equation} \int_{B_1} |v_k(x,0)|^{2^*_\alpha} dx= \int_{B_{r_k}} |u_k|^{2^*_\alpha} dx =\epsilon \quad \hbox{for all $k \in \mathbb{N}.$} \end{equation} It is easy to check that $(v_k)_{k \in \mathbb{N}}$ is also a Palais-Smale sequence for $\Psi$ that satisfies the properties of Proposition \ref{Proposition lim of sobolev term in small ball when miniming sequence converges weakly to zero}. We now show that it is bounded in $X^\alpha(\ensuremath{\mathbb{R}}^{n+1}_+).$ Since $ (v_k)_{k \in \mathbb{N}} $ is a Palais-Smale sequence, there exist positive constants $ C_1, C_2 >0$ such that \begin{equation}\label{Use P-S condition (v_k) to prove boundedness} \begin{aligned} C_1 + C_2 \|v_k\| &\ge \Psi(v_k) - \frac{1}{2^*_\alpha(s)} \langle \Psi'(v_k), v_k \rangle \\ & \ge\left( \frac{1}{2} - \frac{1}{2^*_\alpha(s)} \right) \|v_k\|^2+ \left(\frac{1}{2^*_\alpha(s)} - \frac{1}{2^*_\alpha} \right) \int_{\ensuremath{\mathbb{R}}^n} |v_k(x,0)|^{2^*_\alpha} dx\\ & \ge \left( \frac{1}{2} - \frac{1}{2^*_\alpha(s)} \right) \|v_k\|^2. \end{aligned} \end{equation} The last inequality holds since $2<2^*_\alpha(s) < 2^*_\alpha.$ Combining (\ref{Use P-S condition (v_k) to prove boundedness}) with (\ref{comparable norms}), we obtain that $ (v_k)_{k \in \mathbb{N}} $ is bounded in $X^\alpha(\ensuremath{\mathbb{R}}^{n+1}_+).$ It follows that there exists a subsequence -- still denoted by $v_k$ -- such that $ v_k \rightharpoonup v \text{ in } X^\alpha(\ensuremath{\mathbb{R}}^{n+1}_+)$ as ${k \to \infty}.$ We claim that $v$ is a nontrivial weak solution of (\ref{Main problem.prime}).
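Before proving the claim, let us record the change of variables behind (\ref{epsilon}); this is a short verification which only uses the expression $2^*_\alpha=\frac{2n}{n-\alpha}$ of the critical exponent. Setting $z= r_k x$, so that $dx = r_k^{-n} dz$, we get
$$\int_{B_1} |v_k(x,0)|^{2^*_\alpha} dx = r_k^{\frac{(n-\alpha)2^*_\alpha}{2}} \int_{B_1} |u_k(r_k x)|^{2^*_\alpha} dx = r_k^{\frac{(n-\alpha)2^*_\alpha}{2}-n} \int_{B_{r_k}} |u_k(z)|^{2^*_\alpha} dz = \int_{B_{r_k}} |u_k|^{2^*_\alpha} dx = \epsilon,$$
since $\frac{(n-\alpha)2^*_\alpha}{2}=n$ and by the choice of $r_k$. The same scaling leaves each term of $\Psi$ unchanged, which is behind the fact, noted above, that $(v_k)_{k \in \mathbb{N}}$ is again a Palais-Smale sequence for $\Psi$ at the same energy level. We now turn to the proof of the claim.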
Indeed, if $v \equiv 0$, then Proposition \ref{Proposition lim of sobolev term in small ball when miniming sequence converges weakly to zero} yields that $$ \text{ either } \ \limsup\limits_{k \to \infty} \int_{B_1} |v_k(x,0)|^{2^*_\alpha} dx=0 \ \text{ or } \ \limsup\limits_{k \to \infty} \int_{B_1} |v_k(x,0)|^{2^*_\alpha} dx \ge \epsilon_0.$$ Since $\epsilon \in (0,\frac{\epsilon_0}{2}),$ this is in contradiction with (\ref{epsilon}), thus, $v \not\equiv 0.$ To show that $v \in X^\alpha(\ensuremath{\mathbb{R}}^{n+1}_+) $ is a weak solution of (\ref{Main problem.prime}), consider any $\varphi \in C^\infty_0(\ensuremath{\mathbb{R}}^{n+1}_+),$ and write \begin{equation}\label{Psi'(v_k,varphi)=o(1) } \begin{aligned} o(1) &= \langle \Psi'(v_k) , \varphi \rangle\\ &= k_{\alpha} \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} \langle \nabla v_k , \nabla \varphi \rangle dxdy - \gamma \int_{\ensuremath{\mathbb{R}}^n} \frac{ v_k(x,0) \varphi }{|x|^{\alpha}}dx \\ &\quad - \int_ {\ensuremath{\mathbb{R}}^n} |v_k(x,0)|^{2_{\alpha}^*-2} v_k(x, 0) \varphi dx - \int_ {\ensuremath{\mathbb{R}}^n} \frac{ |v_k(x,0)|^{2_{\alpha}^*(s)-2} v_k(x, 0) \varphi }{|x|^s}dx. \end{aligned} \end{equation} Since $ v_k \rightharpoonup v \text{ in } X^\alpha(\ensuremath{\mathbb{R}}^{n+1}_+)$ as ${k \to \infty},$ we have that $$\int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} \langle \nabla v_k , \nabla \varphi \rangle dxdy \rightarrow \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} \langle \nabla v , \nabla \varphi \rangle dxdy, \quad \forall \varphi \in C^\infty_0(\ensuremath{\mathbb{R}}^{n+1}_+).$$ In addition, the boundedness of $v_k$ in $X^\alpha(\ensuremath{\mathbb{R}}^{n+1}_+) $ yields that $ v_k(.,0),$ $|v_k(.,0)|^{2_{\alpha}^*-2} v_k(.,0) $ and $ |v_k(.,0)|^{2_{\alpha}^*(s)-2} v_k(.,0) $ are bounded in $L^2(\ensuremath{\mathbb{R}}^n, |x|^{-\alpha}),$ $ L^{\frac{2_{\alpha}^*}{2_{\alpha}^*-1}}(\ensuremath{\mathbb{R}}^n)$ and $ L^{\frac{2_{\alpha}^*(s)}{2_{\alpha}^*(s)-1}}(\ensuremath{\mathbb{R}}^n, |x|^{-s})$ respectively. Therefore, we have the following weak convergence: \begin{align*} &v_k(.,0)\rightharpoonup v(.,0) \quad \text{ in } L^2(\ensuremath{\mathbb{R}}^n, |x|^{-\alpha})\\ &|v_k(.,0)|^{2_{\alpha}^*-2} v_k(.,0) \rightharpoonup |v(.,0)|^{2_{\alpha}^*-2} v(.,0) \quad \text{ in } L^{\frac{2_{\alpha}^*}{2_{\alpha}^*-1}}(\ensuremath{\mathbb{R}}^n) \\ &|v_k(.,0)|^{2_{\alpha}^*(s)-2} v_k(.,0) \rightharpoonup |v(.,0)|^{2_{\alpha}^*(s)-2} v(.,0) \quad \text{ in } L^{\frac{2_{\alpha}^*(s)}{2_{\alpha}^*(s)-1}}(\ensuremath{\mathbb{R}}^n, |x|^{-s}). \end{align*} Thus, taking limits as ${k \to \infty}$ in (\ref{Psi'(v_k,varphi)=o(1) }), we obtain that \begin{equation*} \label{Psi'(v, varphi) = 0} \begin{aligned} 0 &= \langle \Psi'(v) , \varphi \rangle\\ &= k_{\alpha} \int_{\ensuremath{\mathbb{R}}_+^{n+1}} y^{1-\alpha} \langle \nabla v , \nabla \varphi \rangle dxdy - \gamma \int_{\ensuremath{\mathbb{R}}^n} \frac{ v(x,0) \varphi }{|x|^{\alpha}}dx\\ & \quad - \int_ {\ensuremath{\mathbb{R}}^n} |v(x, 0)|^{2_{\alpha}^*-2} v(x,0)\varphi dx - \int_ {\ensuremath{\mathbb{R}}^n} \frac{ |v(x,0)|^{2_{\alpha}^*(s)-2} v(x, 0) \varphi }{|x|^s}dx . \end{aligned} \end{equation*} Hence $v$ is a weak solution of (\ref{Main problem.prime}). \\ {\bf Acknowledgments:} Part of this work was done while the authors were visiting the Fields Institute for Research in Mathematical Sciences (Toronto), during the Thematic program on variational problems in Physics, Economics and Geometry. 
The authors would like to thank the Fields Institute for its support and its hospitality.
{ "attr-fineweb-edu": 1.226562, "attr-cc_en_topic": 12, "domain": "arxiv" }