\section{Crossing-point analysis}
\label{sm:crossings}
The crossing-point analysis employed in Fig.~\ref{fig1} is an extension of Fisher's ``phenomenological renormalization''. We follow essentially
the formalism developed and tested with numerically exact transfer-matrix results for the Ising model in Ref.~\cite{fsref}, but apply it to QMC
data. In Sec.~\ref{sm:crossings_1} we discuss formalities and derivations of the exponents governing the drifts of crossing points in the standard case, when
there is a single divergent length scale. In Sec.~\ref{sm:crossings_2} we discuss why the single-length scaling form (\ref{aqlform1}) can still be used to
analyze crossing points and extract the exponent $\nu$ controlling the shorter length scale, even in the case when the criticality is described by the
two-length ansatz (\ref{aqlform2}) with the anomalous limit controlled by the longer length scale. In Sec.~\ref{sm:crossings_3} we discuss several
practical issues and potential error sources (statistical as well as systematical) that should be properly taken into account when analyzing crossing points.
We illustrate the procedure with data for the 2D Ising model, demonstrating the unbiased nature of the approach by reproducing the exactly known
critical temperature and critical exponents to within small statistical errors.
\subsection{Scaling corrections and crossing points}
\label{sm:crossings_1}
Consider first the standard case of a single divergent length scale (correlation length) $\xi \propto |\delta|^{-\nu}$ as a function of the distance
$\delta=g-g_c$ to a critical point (a classical transition driven by thermal fluctuations at $T>0$ or a quantum phase transition at $T=0$). For
some other singular quantity $A$ with the behavior $A \propto |\delta|^\kappa$ in the thermodynamic limit (valid for $g < g_c$, $g>g_c$, or both,
depending on the quantity) the finite size scaling is governed by the form
\begin{equation}
A(\delta,L) = L^{-\kappa/\nu}f(\delta L^{1/\nu}, \lambda_1 L^{-\omega_1}, \lambda_2 L^{-\omega_2}, \cdots),
\label{fullaform}
\end{equation}
where $0 < \omega_i < \omega_{i+1}$ and the variables $\lambda_i$ are irrelevant fields which in principle can be tuned by introducing some other
interactions in the Hamiltonian \cite{omeganote}. Keeping only the most important irrelevant field, using the notation $\omega \equiv \omega_1$ for convenience, and
suppressing the dependence on the unknown value of $\lambda_1$, we have Eq.~(\ref{aqlform1}) in the main text. The scaling function
is non-singular and we can Taylor expand it in the neighborhood of the critical point;
\begin{equation}
A(\delta,L) = L^{-\kappa/\nu}(a_0+a_1\delta L^{1/\nu} + b_1L^{-\omega} + \ldots).
\label{ataylor}
\end{equation}
For two system sizes $L_1=L$ and $L_2=rL$ ($r>1$), the two curves $A(\delta,L_1)$ and $A(\delta,L_2)$ take the same value (cross each other) at the point
\begin{equation}
\delta^* = \frac{a_0}{a_1}\frac{1-r^{-\kappa/\nu}}{r^{(1-\kappa)/\nu}-1}L^{-1/\nu}
+\frac{b_1}{a_1}\frac{1-r^{-\kappa/\nu-\omega}}{r^{(1-\kappa)/\nu}-1}L^{-1/\nu-\omega}.
\label{deltastar}
\end{equation}
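For completeness, Eq.~(\ref{deltastar}) follows from equating the expansions (\ref{ataylor}) for the two sizes,
\begin{equation}
L^{-\kappa/\nu}\left(a_0+a_1\delta L^{1/\nu} + b_1L^{-\omega}\right)
= (rL)^{-\kappa/\nu}\left(a_0+a_1\delta\,(rL)^{1/\nu} + b_1 (rL)^{-\omega}\right),
\end{equation}
and solving the resulting linear equation for $\delta$.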
Thus, in general the finite-size value $g^*(L)$ of the critical point defined using such curve-crossing points shifts with the system
size as $g^*(L) -g_c \equiv \delta^* \propto L^{-1/\nu}$. However, if the quantity $A$ is asymptotically size-independent at the critical point, $\kappa=0$, the
first term in Eq.~(\ref{deltastar}) vanishes and the shift is faster;
\begin{equation}
g^*(L)-g_c \propto L^{-(1/\nu+\omega)},
\label{extraform1}
\end{equation}
where the constant of proportionality depends on the chosen aspect ratio $r$ and the generally unknown coefficients of the Taylor expansion (\ref{ataylor}).
The value of the quantity $A$ at the crossing point is obtained by inserting $\delta^*$ into Eq.~(\ref{ataylor}), which for both the general
case $\kappa\not=0$ and the special case $\kappa=0$ can be written as
\begin{equation}
A^*(L)=A(\delta^*,L) = L^{-\kappa/\nu}(a + b L^{-\omega} + \ldots),
\label{extraform2}
\end{equation}
with some constants $a$ and $b$.
Thus, in principle a crossing point analysis can be used to obtain the leading critical exponents $\kappa$ and $\nu$ as well as the subleading
exponent $\omega$. However, it should be noted that the higher-order terms in Eq.~(\ref{ataylor}) can play a significant role for system sizes
attainable in practice, and often $1/\nu + \omega$ and $\omega$ extracted from fitting to power laws according to Eqs.~(\ref{extraform1}) and
(\ref{extraform2}) should be considered only as ``effective'' exponents which change with the range of system sizes considered (with the
correct exponents obtained only for very large system sizes where the subleading corrections become negligible). To extract the critical point,
a dimensionless quantity ($\kappa=0$) should be chosen, as the convergence is then the most rapid, given by Eq.~(\ref{extraform1}). The value
of the critical point $g_c$ obtained from fitting to this functional form is normally not very sensitive to the imperfection of the power-law
correction with the effective value of the exponent, as long as the fit is statistically sound.
There are many other ways of analyzing crossing points. For instance, the exponent $\nu$ can be obtained more directly than through the
difficult extraction based on the correction terms in the shift analysis above. Consider a dimensionless quantity $Q$, such as the Binder ratio
(or the corresponding cumulant). We then have, including also some terms of higher order in Eq.~(\ref{ataylor}),
\begin{equation}
Q(\delta,L)=a_0+a_1\delta L^{1/\nu} + a_2\delta^2 L^{2/\nu} + b_1L^{-\omega} + c_{1}\delta L^{1/\nu-\omega} + \ldots ,
\end{equation}
and from the derivative $s(\delta)$ with respect to $\delta$ or $g=g_c+\delta$ we have
\begin{equation}
s(\delta)= \frac{d Q(\delta,L)}{d\delta} = \frac{d Q(g,L)}{d g} =
a_1L^{1/\nu}+c_{1}L^{1/\nu-\omega} + a_2\delta L^{2/\nu} + \ldots .
\label{sderiv1}
\end{equation}
We will now assume that $s(\delta)$ is positive in the region of interest, and if not we redefine it with a minus sign.
At $\delta=0$ we then have
\begin{equation}
\ln[s(0)] = c + \frac{1}{\nu} \ln(L) + d L^{-\omega} + \ldots,
\end{equation}
with some constants $c$ and $d$. Thus, for large $L$, $\ln(s)$ at the critical point depends linearly on $\ln(L)$ and the slope is
the exponent $1/\nu$. A drawback of this method for extracting $\nu$ is that the critical point has to be determined first, and a careful
analysis should also take into account the uncertainties in the estimated value of $g_c$.
To circumvent the requirement of having to determine $g_c$ first, we observe that, instead of evaluating the derivative (\ref{sderiv1}) exactly at the
critical point, we can use the crossing point of the quantity $Q$ for two system sizes $(L_1,L_2)=(L,rL)$ [or, as in Ref.~\cite{fsref}, one
can use $L_2=L_1+\Delta L$ with a constant $\Delta L$, which only modifies some unimportant prefactors of the results derived below].
Inserting the crossing value (\ref{extraform1}) of $\delta$ into (\ref{sderiv1}) we obtain
\begin{eqnarray}
s(\delta^*,L_n)&=&a_1L_n^{1/\nu}+c_{1}L_n^{1/\nu-\omega} + a_2dL_n^{1/\nu-\omega} + \ldots \nonumber \\
&=&a_1L_n^{1/\nu}(1 + \tilde b_1L_n^{-\omega} + \ldots ),~~~~~n=1,2.
\end{eqnarray}
Having access to two different slopes at the crossing point, we can take the difference of the logarithms of these and obtain
\begin{equation}
\ln[s(\delta^*,rL)]-\ln[s(\delta^*,L)] = \frac{1}{\nu}\ln(r)+e L^{-\omega} + \ldots,
\end{equation}
with some constant $e$. We can therefore define an exponent estimate $\nu^*(L)$ corresponding to the crossing point,
\begin{equation}
\frac{1}{\nu^*(L)} = \frac{1}{\ln(r)}\ln{\left (\frac{s(\delta^*,rL)}{s(\delta^*,L)}\right )},
\label{nustar2}
\end{equation}
and this estimate approaches the correct exponent at the rate $L^{-\omega}$ for large $L$;
\begin{equation}
\frac{1}{\nu^*(L)} = \frac{1}{\nu} + g L^{-\omega} + \ldots,
\label{extraform3}
\end{equation}
with some constant $g$ and various higher-order terms again left out.
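As a simple illustration of how Eq.~(\ref{nustar2}) is used in practice, the following minimal Python sketch (the numerical values are hypothetical and serve only as an example) evaluates $1/\nu^*(L)$ from the two slopes at a crossing point:
\begin{verbatim}
import numpy as np

def inv_nu_star(s_L, s_rL, r):
    """Slope estimator 1/nu*(L), Eq. (nustar2), from the derivatives
    s(delta*, L) and s(delta*, rL) of a dimensionless quantity Q
    evaluated at the (L, rL) crossing point."""
    return np.log(s_rL / s_L) / np.log(r)

# Hypothetical slopes at the crossing point of the pair (L, 2L):
print(inv_nu_star(s_L=2.31, s_rL=4.65, r=2))
\end{verbatim}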
With all the crossing-point quantities discussed above, the infinite-size values $g_c$, $Q_c$, and $1/\nu$ can be obtained by fitting data for several
system-size pairs $(L,rL)$, using Eqs.~(\ref{extraform1}), (\ref{extraform2}), and (\ref{extraform3}). One can either use the leading form as written with
only the asymptotically dominant correction $L^{-(1/\nu+\omega)}$ (in the case of $g_c$) or $L^{-\omega}$ (for $Q_c$ and $1/\nu$) if the system sizes are large enough for
the higher-order terms to be safely neglected, or one can include higher-order terms explicitly and fit to a larger range of system sizes. The former
method has the advantage that the optimal fit is easier to find, while fits with multiple power laws are sometimes challenging or affected
by large fluctuations in the parameters unless the statistical errors are very small.
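As a sketch of the first type of fit, the following Python fragment (with synthetic data standing in for measured crossing points; the function and variable names are ours) fits crossing values $T^*(L)$ to the leading form (\ref{extraform1}) with an effective exponent:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def shift(L, gc, a, p):
    # Leading crossing-point drift, Eq. (extraform1): g*(L) = g_c + a L^(-p),
    # with p an effective exponent approximating 1/nu + omega.
    return gc + a * L**(-p)

# Synthetic stand-in for measured crossing temperatures T*(L) and error bars:
L = np.array([8., 12., 16., 24., 32., 48., 64.])
rng = np.random.default_rng(1)
sigma = 2e-6 * np.ones_like(L)
Tstar = shift(L, 2.269185, 0.05, 2.7) + rng.normal(0.0, sigma)

# Least-squares fit (for simplicity using only the diagonal error bars;
# the full covariance-matrix treatment is discussed later in the text).
popt, pcov = curve_fit(shift, L, Tstar, sigma=sigma, absolute_sigma=True,
                       p0=(2.27, 0.1, 2.5))
print("T_c = %.7f +/- %.7f" % (popt[0], np.sqrt(pcov[0, 0])))
\end{verbatim}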
\subsection{The case of two length scales}
\label{sm:crossings_2}
We now turn to systems with two divergent lengths, where the critical scaling is governed by Eq.~(\ref{aqlform2}). When the thermodynamic limit
corresponds to the scaling function $f(x,y)$ being a power of the first argument $x=\delta L^{1/\nu}$ for large $x$ and $y$, the effect of the
second argument $y=\delta L^{1/\nu'}$ is the same as in the standard case of a dangerously irrelevant field scaling as $L^{-\omega'}$.
The crossing-point analysis then remains
the same as in the previous section. In the anomalous case, which we have termed the {\it super dangerous} perturbation, the second scaling
argument (the longer length scale) generically controls the $L \to \infty$ behavior and demands the modified powers of $L$ in front of the
scaling function. This case requires some additional
discussion.
In general, the scaling in this case is much more complex. In the main paper we have discussed how the correct thermodynamic limit is obtained when the
scaling function is controlled by $y=\delta L^{1/\nu'}$. This limit corresponds directly to the intuitive physical picture of the shorter length $\xi$
saturating at $L^{\nu/\nu'}$ when the longer length $\xi'$ reaches $L$, and, therefore, $\xi$ should not be replaced by $L$ at criticality but instead
by $L^{\nu/\nu'}$. This change imposes an anomalous power law at criticality for any observable which can be written as some nonzero power of the correlation
length close to the critical point. It should be noted that there are special non-generic observables, such as the Binder ratio, which by construction
neither have any $L$-dependent prefactors of the finite-size scaling function $f$ nor any dependence on $\xi$ in the thermodynamic limit (e.g., the Binder ratio
takes constant values in the phases and a different value at the critical point). In such non-generic cases there are also no modified power laws, since
there are no powers to be modified by the ratio $\nu/\nu'$ in the first place. All other generic observables are expected to develop anomalous power laws.
We next note that, in the above large $L$ limit of $f(x,y)$, both the arguments $x=\delta L^{1/\nu}$ and $y=\delta L^{1/\nu'}$ become large. When we
are interested in crossing points close to $\delta=0$, we are far from this limit, however. We can anticipate crossing points as in the single-length
case when the first argument $x$ is of order one (i.e., $\delta$ is of order $L^{-1/\nu}$),
whence the second argument is very small, $y \approx L^{1/\nu'-1/\nu} \ll 1$. There is no {\it a priori}
reason to expect that this limit is controlled by $y$. The most natural assumption, which can be tested, is that $y$ is irrelevant in this regime.
Then we are back at a situation where the standard crossing-point analysis can be performed and the exponent delivered by such an analysis should
generically be $\nu$, not $\nu'$. An exception is an observable which is manifestly dependent only on the longer length scale, in which case the
shorter length scale will play the role of an irrelevant correction. The simplest quantity of this kind is a length scale which is proportional to
the longer length $\xi'$ itself. In the main text we have analyzed the size $\Lambda$ of the spinon bound state and found its crossing points
to be controlled by an exponent $\nu'$ which is indeed significantly larger than $\nu$, and also $\Lambda \sim L$ holds in the neighborhood
of the critical point, as expected from the scaling function controlled by $y$ when $\Lambda \sim \xi'$ in the thermodynamic limit.
We have now concluded that the limits of $f(x,y)$ when $y=\delta L^{1/\nu'} \to \infty$ and $y \to 0$ are controlled by different exponents in the generic case;
by both $\nu$ and $\nu'$ in the former case and only by $\nu$ in the latter case. This implies an interesting cross-over behavior between these limits. In principle,
such a cross-over can be tested explicitly by numerical data, by graphing results for a wide range of system sizes and couplings (in the case of the $J$-$Q$
model, that should be done inside the VBS phase) against both $\delta L^{1/\nu}$ and $\delta L^{1/\nu'}$. One should observe data collapse onto common scaling
functions in both cases, but only in the relevant regimes controlled by the different scaling arguments; small $\delta L^{1/\nu}$ or large $\delta L^{1/\nu'}$. It would
clearly be desirable to carry out such an analysis for the $J$-$Q$ model,
which we have not yet done due to the large computational resources required to do this properly for sufficiently
large system sizes. We anticipate the analysis of the cross-over to be complicated also by the small exponent $\omega$ of the leading scaling
corrections, as demonstrated in Fig.~\ref{fig1} in the main paper.
Even if no tests of the cross-overs are available currently, the two limits $y\to 0$ and $y \to \infty$ have already been confirmed in
this work; the former by the scaling of the Binder cumulant with the exponent $\nu$ (the shorter length scale) and the latter more indirectly by the
presence of anomalous powers of $L$. An anomalous exponent which is very well converged as a function of the system size and completely inconsistent
with any other previous scenario (neither large scaling corrections nor a first-order transition) is best provided by the domain-wall energy $\kappa$,
which is analyzed in Fig.~\ref{fig3}(a) of the main paper and also further below in Sec.~\ref{sm:domainwall_jq}.
\subsection{Tests on the 2D Ising model}
\label{sm:crossings_3}
In order to demonstrate the reliability of the method of obtaining the critical point and exponents from crossing points, and to discuss practical
issues in implementing it, we here present results based on the Binder cumulant $U$ of the standard 2D Ising model;
\begin{equation}
U=\frac{1}{2} \left ( 3 - \frac{\langle m^4\rangle}{\langle m^2\rangle^2} \right ),
\label{isingbinder}
\end{equation}
where $m$ is the magnetization
\begin{equation}
m = \frac{1}{N}\sum_{i=1}^N \sigma_i,~~~ \sigma_i \in \lbrace -1,+1\rbrace.
\end{equation}
MC simulations were carried out on lattices of size $L\times L$ with periodic boundary conditions,
using a mix of Wolff and Swendsen-Wang (SW) cluster updates, with each sweep of Wolff updates
(where on average $\approx N$ spins are flipped) followed by an SW update where the system is decomposed into clusters, each of
which is flipped with probability $1/2$. The SW clusters are also used to measure $\langle m^2\rangle$ and $\langle m^4\rangle$ with
improved estimators (after each SW update). We carried out simulations of sizes $L=6,7,\ldots,20,22,\ldots,36,40,\ldots, 64,72,
\ldots, 128$, at $20-30$ temperatures in the neighborhood of the relevant crossing points of the Binder cumulant for system-size
pairs $(L,2L)$, i.e., using aspect ratio $r=2$ in the expressions of Sec.~\ref{sm:crossings_1}.
Up to $5\times 10^9$ measurements were collected for the smaller sizes and $10^8$
for the largest sizes.
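For reference, a minimal sketch of the improved SW estimators in terms of the cluster sizes (assuming their standard form, with the clusters oriented independently with probability $1/2$; this code is our own illustration) is:
\begin{verbatim}
import numpy as np

def improved_moments(cluster_sizes, N):
    """Improved SW estimators of <m^2> and <m^4> from a single cluster
    decomposition of an N-spin system, obtained by averaging analytically
    over the independent +/-1 orientations of the clusters."""
    n = np.asarray(cluster_sizes, dtype=float)
    s2, s4 = np.sum(n**2), np.sum(n**4)
    m2 = s2 / N**2
    m4 = (3.0 * s2**2 - 2.0 * s4) / N**4
    return m2, m4

# Example: a 4x4 lattice decomposed into clusters of sizes 10, 4, 1, 1.
print(improved_moments([10, 4, 1, 1], N=16))
\end{verbatim}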
\begin{figure}
\center{\includegraphics[width=11cm, clip]{figs01.pdf}}
\vskip-0mm
\caption*{Figure S1: Binder cumulant of the 2D Ising model with $L=16,32,64$ in the neighborhood of the points at which
the curves cross each other. The vertical and horizontal dashed lines indicate the critical temperature $T_c$ and the value
of the cumulant at $T_c$, respectively. The solid curves are cubic polynomial fits to the data sets. Error bars are much smaller
than the plot symbols.}
\vskip-1mm
\end{figure}
Figure S1 shows examples of data for three different system sizes, where cubic polynomials have been fitted to the data. The crossing
points can be extracted numerically using bisection. In order to analyze $T_c$ and $U_c$ in the thermodynamic limit, it suffices
to consider a small number of points very close to each crossing point to be analyzed. To obtain $\nu$ from the slopes according to Eq.~(\ref{nustar2}),
where the derivative in Eq.~(\ref{sderiv1}) is taken of the fitted polynomials, it is better to have a more extended range of points. However,
for a very large range a high order of the polynomial has to be used in order to obtain a good fit, and it is then better in practice to adapt
the window size so that a relatively low order polynomial can be used. In the tests reported here, cubic polynomials were used and all fits
were statistically sound.
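A minimal sketch of this step (our own illustration, not the production code) fits cubic polynomials to the two data sets and locates the crossing by root finding on their difference:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def crossing(T, U_L, U_2L, deg=3):
    """Fit polynomials of degree deg to U(T) for sizes L and 2L in a window
    (assumed to bracket the crossing), then locate T* by bisection (brentq)
    on the difference of the two fits.  Returns T*, U* and the estimate of
    1/nu from Eq. (nustar2) with r = 2."""
    pL  = np.polynomial.Polynomial.fit(T, U_L,  deg)
    p2L = np.polynomial.Polynomial.fit(T, U_2L, deg)
    Tstar = brentq(lambda t: pL(t) - p2L(t), T.min(), T.max())
    Ustar = pL(Tstar)
    sL, s2L = pL.deriv()(Tstar), p2L.deriv()(Tstar)
    inv_nu = np.log(abs(s2L / sL)) / np.log(2.0)
    return Tstar, Ustar, inv_nu
\end{verbatim}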
In order to compute the statistical errors (error bars) a bootstrap method can be used, i.e., by generating a large number of random samples of the
binned MC data. Each bootstrap sample is computed using $B(L,T)$ randomly chosen bins for each system size and temperature, where $B(L,T)$ is also the total
number of data bins available from simulations at $(L,T)$. The standard deviations of the values (the horizontal and vertical crossing points and the
slope estimator for $1/\nu^*$) computed for these bootstrap samples correspond to the error bars, which later will be used in the fits to extrapolate
to infinite size. In evaluating the cumulant (\ref{isingbinder}), for the full data set or a bootstrap sample, the individual expectation values
$\langle m_i^2\rangle$ and $\langle m_i^4\rangle$ should be computed first based on all the bins included in the sample, after which the ratio is evaluated.
If one instead uses ratios computed for each bin separately, a statistically significant systematical error can be introduced in the ratio, due to
nonlinear contributions to the statistical error which do not vanish as the number of bins is increased (for fixed bin size) but do decrease properly
in the bootstrap method when the sample size is increased.
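The following sketch illustrates this ordering of the operations in the bootstrap (the array names are ours):
\begin{verbatim}
import numpy as np

def bootstrap_binder(m2_bins, m4_bins, n_boot=1000, seed=None):
    """Bootstrap the Binder cumulant, Eq. (isingbinder), from binned data:
    resample the bins with replacement, average <m^2> and <m^4> over each
    sample, and only then form the ratio."""
    rng = np.random.default_rng(seed)
    m2_bins = np.asarray(m2_bins)
    m4_bins = np.asarray(m4_bins)
    B = len(m2_bins)
    U = np.empty(n_boot)
    for k in range(n_boot):
        idx = rng.integers(0, B, B)      # B bins drawn with replacement
        U[k] = 0.5 * (3.0 - m4_bins[idx].mean() / m2_bins[idx].mean()**2)
    return U.mean(), U.std()             # estimate and its error bar
\end{verbatim}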
We next fit crossing points for a series of system pairs to the expected forms, Eqs.~(\ref{extraform1}), (\ref{extraform2}) with $\kappa=0$, and
(\ref{extraform3}), and compare with exact and previous
numerical results for the 2D Ising model. Onsager's rigorous analytical solution gives $T_c = 2\ln^{-1}(\sqrt{2}+1) \approx 2.269185314$ and $\nu=1$.
The value of $U$ at $T_c$ is not known exactly, but Bl\"ote obtained $U_c \approx 0.916035$ by extrapolating exact numerical finite-size
transfer-matrix data to infinite size \cite{blote93}. For the Binder cumulant the dominant subleading correction has the exponent $\omega=7/4$
\cite{blote93}. These results should all be obtained within statistical errors from the crossing point analysis of the MC data if sufficiently
large systems are used and the data are analyzed using appropriate statistical methods.
For small sizes the expected higher-order corrections will cause deviations beyond the statistical errors from the
leading-order forms, which can be detected in the goodness of the fits to the leading forms (\ref{extraform1}), (\ref{extraform2}), and (\ref{extraform3}).
Our strategy is to remove small system sizes until a statistically sound fit is obtained for a given quantity.
The crossing points for the different size pairs $(L_i,2L_i)$, $i=1,\ldots,M$,
are not all statistically independent, because the same system size can appear in two different
pairs. One should therefore define the goodness of the fit, $\chi^2$ per degree of freedom $N_{\rm dof}$ (the number of data points minus the number
of parameters of the fit), with the full covariance matrix instead of just its diagonal elements (which are the conventional variances).
Using $V_i$ to denote some quantity defined based on the $(L_i,2L_i)$ crossing point (the crossing temperature $T^*$, the value of $U^*$ of $U$
at the crossing point, or $1/\nu^*$ obtained from the slopes evaluated using the fitted polynomial), we thus use
\begin{equation}
\chi^2 = \sum_{i=1}^M\sum_{j=1}^M (\langle V_i\rangle - V^{\rm fit}_i)[C^{-1}]_{ij}(\langle V_j\rangle - V^{\rm fit}_j),
\label{chi2cov}
\end{equation}
where $\langle V_i\rangle$ is either the mean value obtained from all available bins or an average obtained from the bootstrap procedure
(the two estimates should differ only by an amount much smaller than the standard deviation based on the bootstrap analysis), $V_i^{\rm fit}$
is the value of the quantity evaluated using the fitted function (here a power-law correction to the infinite-size value), and $M$ is the total number
of system-size pairs used. The covariance matrix is defined as
\begin{equation}
C_{ij} = \Bigl\langle(V_i - \langle V_i\rangle)(V_j - \langle V_j\rangle)\Bigr\rangle,~~~~~ i,j \in \{1,\ldots,M\},
\label{covarmat}
\end{equation}
where the expectation value for each pair $i,j$ for which $C_{ij} \not= 0$ is again evaluated using bootstrap sampling (as explained above for
the error bars, which correspond to the square roots of the diagonal elements $C_{ii}$). We use on the order of $100$--$1000$ bins and generate several
thousand bootstrap samples to obtain accurate estimates of the covariance matrix.
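A compact sketch of Eqs.~(\ref{chi2cov}) and (\ref{covarmat}), assuming the bootstrap values are stored as an array with one row per sample and one column per size pair (names are ours), is:
\begin{verbatim}
import numpy as np

def chi2_full_covariance(V_boot, V_fit):
    """chi^2 of Eq. (chi2cov) with the covariance matrix of Eq. (covarmat).
    V_boot: array of shape (n_samples, M) with bootstrap values of the M
    crossing-point quantities; V_fit: the M fitted values."""
    V_mean = V_boot.mean(axis=0)
    C = np.cov(V_boot, rowvar=False)          # M x M covariance matrix
    d = V_mean - V_fit
    return d @ np.linalg.solve(C, d)          # d^T C^{-1} d
\end{verbatim}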
To compute error bars on the extracted quantities, we repeat the fits to Eqs.~(\ref{extraform1}), (\ref{extraform2}), and (\ref{extraform3}) several
hundred times using the bootstrap method and define the final means and statistical errors (one standard deviation) using these bootstrap samples.
When defining $\chi^2$ as in Eq.~(\ref{chi2cov}) for data fits based on bootstrap samples, the covariance matrix (\ref{covarmat}) should be multiplied
by a factor $2$, due to the two statistically equal sources of fluctuations; the original MC fluctuations and those in the bootstrap samples.
Then, for a statistically sound fit, $\langle \chi^2\rangle/N_{\rm dof} \approx 1$ is expected for the bootstrap-averaged goodness of the fit.
To quantitatively define a criterion for an acceptable fit, we consider the standard deviation of the $\chi^2$ distribution. For $N_{\rm dof}$ degrees of
freedom, the standard deviation of $\chi^2/N_{\rm dof}$ is $(2/N_{\rm dof})^{1/2}$. We systematically eliminate the small sizes until $\langle \chi^2\rangle/N_{\rm dof}$
falls roughly within two standard deviations of its expected mean;
\begin{equation}
\frac{\langle \chi^2 \rangle}{N_{\rm dof}} - 1 < 2\sqrt{\frac{2}{N_{\rm dof}}}.
\label{chi2crit}
\end{equation}
Clearly this criterion is sensitive to the quality of the data---if the elements of the covariance matrix are very small, even fits including
only relatively large system sizes can detect the presence of higher-order corrections and not pass our test, while with noisy data also small
system sizes can be included (but the error bar on the final extrapolated value will be larger).
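In code, the criterion (\ref{chi2crit}) amounts to a one-line check; a sketch:
\begin{verbatim}
def fit_is_acceptable(chi2_mean, n_dof):
    """Criterion (chi2crit): <chi^2>/N_dof within two standard deviations,
    sqrt(2/N_dof), of its expected value 1."""
    return chi2_mean / n_dof - 1.0 < 2.0 * (2.0 / n_dof) ** 0.5
\end{verbatim}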
If a fit satisfies the goodness-of-fit criterion (\ref{chi2crit}) it can still not be completely guaranteed that no effects of the
higher-order corrections are present in the final result, but in general one would expect any remaining systematical errors to be small
relative to the statistical error. In principle one can estimate the magnitude of the systematical error using the parameters obtained
from the fit and some knowledge or estimate of the nature of the higher-order corrections. We will not attempt to do that here, because in
general such knowledge will be very limited. To minimize possibly remaining systematical errors one can continue to exclude more system sizes
even after the soundness criterion (\ref{chi2crit}) is satisfied, at the price of increasing the statistical errors of the parameters extracted
from the fits.
The above method implies a ``curse of good data'', as fewer data points are actually included in the final fit when longer simulations are
carried out for a fixed set of system sizes. However, the discarded data still contain valuable information on the convergence properties
and can in principle be used to analyze higher-order scaling corrections (which we do not pursue here).
\begin{figure}[t]
\begin{center}
\includegraphics[width=14cm, clip]{figs02.pdf}
\end{center}
\vskip-3mm
\caption*{Figure S2: Results for the 2D Ising model.
(a) Crossing temperature of the Binder cumulant for size pairs $(L,2L)$ versus $1/L$, along with a fit of the $L \ge 12$ data to the form
(\ref{extraform1}). (b) The cumulant at the crossing points, along with a fit to the form (\ref{extraform2}) for $L \ge 14$. In both (a) and (b),
error bars are much too small to be visible. The insets show the difference $\Delta$ between the data and the fitted functions including the error
bars (for only the sizes included in the fits).}
\vskip-1mm
\end{figure}
Results for the horizontal (temperature) and vertical (cumulant) crossing values of the 2D Ising model are shown in Fig.~S2.
For the horizontal points in (a), our fits start to satisfy the criterion (\ref{chi2crit}) when including sizes $L \ge 12$ (the average goodness of
the fit is then $\langle \chi^2\rangle/N_{\rm dof} \approx 1.6$ with $N_{\rm dof} = 20$) and we show that case in the figure. The fit gives
$T_c = 2.2691855(5)$ and the exponent combination $1/\nu + \omega = 2.674(4)$. Thus, the critical temperature comes out correct within the remarkably small
error bar, while $1/\nu + \omega$ is about twenty error bars away from the true
(asymptotic) value $1/\nu + \omega = 2.75$. As discussed above, it is typical in finite-size scaling that corrections-to-scaling
exponents do not come out reliably until very large systems are used, and we therefore do not consider the mismatch as a failure here, rather as a
confirmation of the known fact that the exponent should be considered as an ``effective exponent'' which slowly changes as larger
system sizes are included.
For the crossing value of the cumulant we find a similar trend. In this case a good fit requires that
only the $L \ge 14$ points are used, giving $U_c = 0.916031(3)$ and $\omega=1.667(6)$, again with $\langle \chi^2\rangle/N_{\rm dof} \approx 1.6$
($N_{\rm dof} = 18$). The $U_c$ value deviates by about an error bar from Bl\"ote's result quoted above, while the correction exponent
again is relatively far (considering the size of the error bar) from its asymptotic value $\omega=1.75$. Interestingly, $1/\nu$ extracted
as the difference of the two exponents comes out close to the correct value $1/\nu=1$, within the statistical error.
The insets of Fig.~S2 show the differences between the data points and the fitted curves. Here it can be seen that the points
are not quite randomly distributed around $0$, as they should be if the fitted functions are of the correct form. The overall shape with noisy but
discernible minima and maxima suggests the presence of a correction which is barely detectable for the range of system sizes at this level of statistics.
One can then conclude that
the deviations of $\langle \chi^2\rangle /N_{\rm dof}$ by two standard deviations from $1$ in these fits are not purely statistical fluctuations (which is not
clear from the $\langle \chi^2\rangle/N_{\rm dof}$ values alone), but due to the neglected higher-order corrections. Nevertheless, the most important extrapolated
values $T_c$ and $U_c$ were not adversely affected statistically, thus demonstrating the ability of the effective exponent and the prefactor of
the correction term in Eqs.~(\ref{extraform1}) and (\ref{extraform2}) to reproduce the overall trend of the data sufficiently well for extrapolating
to infinite size.
To illustrate the effect of excluding even more system sizes, with the minimum size $L=28$ we obtain $T_c = 2.2691831(11)$, two error
bars away from the correct value (still a statistically acceptable match), and $U_c=0.916054(11)$, also about two error bars
from the previous (Bl\"ote's) value. From the $T_c$ fit we obtain $1/\nu +\omega = 2.70(4)$ in this case and from the $U$ fit $\omega = 1.73(5)$.
These exponents are now correct to within statistical errors, but the error bars are about 10 times larger than before, while the error bars on $T_c$
and $U$ only doubled. The average value of $\langle \chi^2\rangle /N_{\rm dof}$ is very close to $1$ for both these fits and the deviations from the fitted
function look completely random. Upon excluding even more points, the error bars increase rapidly but the extracted parameters remain
statistically in good agreement with their correct values.
\begin{figure}
\begin{center}
\includegraphics[width=10cm, clip]{figs03.pdf}
\end{center}
\vskip-3mm
\caption*{Figure S3: Estimates of the inverse of the correlation-length exponent $\nu$ of the 2D Ising model based on the slope expression
(\ref{nustar2}) applied to the Binder cumulant. The curve is a fit to the form (\ref{extraform1}) including all points ($L \ge 6$).}
\vskip-1mm
\end{figure}
Next, we extract the exponent $\nu$ using the log-slope formula (\ref{nustar2}). Fig.~S3 shows the results along with a fit including all the
system sizes ($L \ge 6$). Remarkably, the fit is statistically perfect, with $\langle \chi^2\rangle/N_{\rm dof} \approx 1.0$, already at this small minimum
size and the inverse exponent extrapolates to $1/\nu=1.0001(7)$, in excellent agreement with the exact result $1$. The slope data are much more noisy than the
underlying $U$ values and the error bars grow very rapidly with $L$ for the largest sizes. The fit is therefore dominated by the smaller sizes. Naturally,
the large error bars mask the effects of higher-order corrections, as discussed above. It is nevertheless remarkable that the extracted exponent $1/\nu$ does not show any effects of the neglected corrections at all, even though, again, the leading correction exponent, which comes out to $\omega=1.57(7)$, is not very close to
the correct value $1.75$ and its error bar is large. Again, the flexibility of the leading finite-size term allows it to mimic the effects of the correction
terms without significant effects in the extrapolation of the fit.
It should be noted that the 2D Ising model has logarithmic corrections in addition to the higher-order scaling corrections that we have neglected
here \cite{blote93}, which is not a generic feature of critical points (except for systems at their upper critical dimension).
The logarithms of $L$ multiply powers of $L$ higher than those of the leading
corrections and we therefore do not expect them to affect the procedures used above.
These results demonstrate the unbiased nature of the crossing-point analysis when it is carried out properly. We have used the same scheme
to analyze the results for the $J$-$Q$ model in Fig.~\ref{fig1} of the main text. In the left column, the behavior of $\Lambda/L$ is similar to
that of $U$ of the Ising model in Fig.~S2, with a relatively large correction exponent $\omega$ which makes the fits and extrapolations to
$L \to \infty$ stable and visually convincing. In the right column, it is clear that the leading correction exponent $\omega$ for $R_1$ is small,
$\omega < 0.5$, and that there are other significant corrections present in the top two panels. The fact that the critical point nevertheless agrees perfectly
to within small error bars with that extracted from the spinon bound state is very reassuring. As in the Ising model, the fit to $1/\nu^*$ only requires a single
scaling correction, though it cannot be excluded that this correction is an effective one, mimicking the collective effects of several corrections with the
same sign. In any case, the extrapolations are stable, e.g., excluding some of the small-$L$ points does not dramatically change the extrapolation,
though of course the error bar grows.
We advocate the systematic curve-crossing method as outlined above
to determine the critical temperature (or critical coupling of a quantum phase transition) and the critical exponents, instead of the often used
data-collapse techniques [also employed in DQC studies \cite{sandvik07,harada13,block13}], where many choices have to be made concerning the range of data included,
the use of corrections, etc. Although trends upon increasing the system size can also be studied with data collapse [as done in Ref.~\cite{harada13}], the
solid grounding of the present scheme directly to the finite-size scaling form (\ref{fullaform}) makes it the preferred method.
\section{Domain-wall energy}
\label{sm:domainwalls}
\begin{figure}
\begin{center}
\includegraphics[width=10cm, clip]{figs04.pdf}
\end{center}
\vskip-3mm
\caption*{Figure S4: A domain wall in a generic 2D system where a discrete order parameter is locked at different values (directions) to the left and right
and the twist between the two directions takes place over a region (domain-wall) of thickness $\xi'$.}
\vskip-1mm
\end{figure}
As we discussed in the main text, the fundamental longer length scale $\xi'$ in the DQC theory is the thickness of a domain wall in the VBS. In
Fig.~S4 we illustrate a generic domain wall in a 2D system in which a discrete symmetry is broken. In the case of a broken
continuous symmetry, e.g., the magnetization vector in the XY spin model, there is no domain wall but the order parameter (its direction) gradually twists
uniformly over the entire width $L$ of the system. This case will be discussed in Sec.~\ref{sm:stiffness} in the context of a twist of the N\'eel order
parameter of the $J$-$Q$ model. For a discrete broken symmetry it is energetically favorable for the system to instead restrict the size of the region
(the domain wall) over which the order parameter deviates significantly from the values imposed at the boundaries. Note, however, that the domain wall
is not strictly fixed at some location, and, e.g., in an MC simulation the local order parameter will not detect the intrinsic width of a domain wall,
because averaging is performed over all locations of the wall. Therefore, other means have to be employed to detect the intrinsic domain-wall thickness,
e.g., using suitably defined correlation functions.
As we showed in the main text, the length scale $\xi'$ is conveniently present in the $J$-$Q$ model in the finite-size scaling of the energy density
$\kappa$ of a VBS domain wall. Here, in Sec.~\ref{sm:domainwall_scaling} we derive the scaling form of $\kappa$, in the thermodynamic limit and for finite
system size, using a simple Ansatz generalizing the treatment by Fisher {\it et al.} \cite{fisher89} in a different context (considered further in
Sec.~\ref{sm:stiffness}) to the case of discrete symmetry breaking with two divergent length scales. The formalism applies both to classical and quantum
systems. We present our MC procedures to compute $\kappa$ at classical (thermal) phase transitions, using the 2D Ising model as a concrete
example in Sec.~\ref{sm:domainwall_ising}. We also present results for the 3D classical six-state clock model at its critical temperature
in Sec.~\ref{sm:domainwall_clock}, before describing the details of the QMC calculations of $\kappa$ for the $J$-$Q$ model at $T=0$ in
Sec.~\ref{sm:domainwall_jq}.
\subsection{Scaling forms}
\label{sm:domainwall_scaling}
Let us first consider the case of a $d$-dimensional system with a single divergent length scale $\xi \propto \delta^{-\nu}$. Following Fisher {\it et al.}
\cite{fisher89}, we consider the singular part of the free-energy density, which we can write for a classical system at finite temperature or a quantum
system at $T=0$ (in which case the free energy is just the ground state energy) as
\begin{equation}
f_s(\delta,L) \propto \delta^{\nu (d+z)} Y(\xi/L) \propto \xi^{-(d+z)} Y(\xi/L),
\end{equation}
where formally the dynamic exponent $z=0$ for a classical system. Introducing a domain wall, the free-energy difference with respect to the system without domain
wall should scale in a similar way but with a different size-dependent function \cite{fisher89};
\begin{equation}
\Delta f_s(\delta,L) \propto \xi^{-(d+z)} \tilde Y(\xi/L).
\end{equation}
This density should be understood as a quantity averaged over the inhomogeneous system (or, equivalently, in a finite system the domain wall location
is not fixed and all properties are averages over all locations of the domain wall), and the total free-energy difference is
\begin{equation}
\Delta F_s(\delta,L) \propto \xi^{-(d+z)} \tilde Y(\xi/L)L^d,
\label{dfs_a}
\end{equation}
where $L^d$ is the volume of the system.
We can also write down a different expression for the free-energy difference, by explicitly considering the cost of twisting the order
parameter. If the domain wall has width $\xi$ and the total twist of the order parameter across the wall is $\Delta\phi$, then the cost
per lattice link inside the wall is $\rho (\Delta\phi/\xi)^2$, which also defines the stiffness constant $\rho$. Outside the wall region the local
energy cost vanishes, and, since the total volume occupied by the domain wall is $\propto \xi L^{d-1}$ we have
\begin{equation}
\Delta F_s(\delta,L) \propto \rho (\Delta \phi)^2 \xi^{-1} L^{d-1}.
\label{dfs_b}
\end{equation}
Consistency in the $L$ dependence between this expression and Eq.~(\ref{dfs_a}) requires that the scaling function has the form
$\tilde Y \propto \xi/L$, and therefore
\begin{equation}
\Delta F_s(\delta,L) \propto \xi^{-(d+z-1)} L^{d-1}.
\label{dfs_c}
\end{equation}
The domain wall energy per generalized cross-section area $L^{d-1}$ of the wall (its length for $d=2$, area for $d=3$, etc.) is then
\begin{equation}
\kappa = \frac{\Delta F_s}{L^{d-1}} \propto \frac{1}{\xi^{d+z-1}},
\label{kappa_a}
\end{equation}
which no longer has any $L$ dependence and, thus, represents the behavior in the thermodynamic limit. We can also read off the
scaling of the stiffness constant,
\begin{equation}
\rho \propto \xi^{-(d+z-2)} \propto \delta^{\nu(d+z-2)},
\end{equation}
by comparing Eqs.~(\ref{dfs_b}) and (\ref{dfs_c}).
Since we have written all expressions in terms of the correlation length, we can now switch to finite-size scaling at a critical
point by simply making the substitution $\xi \to L$. For the domain wall energy (\ref{kappa_a}) of interest here we obtain
\begin{equation}
\kappa(L) \propto L^{-(d+z-1)}.
\label{kappa_b}
\end{equation}
Now consider a system with two length scales, with a conventional correlation length $\xi \sim \delta^{-\nu}$ and a domain wall thickness
$\xi' \sim \delta^{-\nu'}$, with $\nu' > \nu$. A simple generalization of Eq.~(\ref{dfs_a}) suggests that
\begin{equation}
\Delta F_s(\delta,L) \propto \xi^{-(d+z)} \tilde Y(\xi/L,\xi'/L)L^d.
\label{dfs_d}
\end{equation}
Note that only the shorter length scale should appear in front of the size-dependent scaling function $\tilde Y$ because the
free energy in the thermodynamic limit should only depend on the two lengths in an additive way, $f_s = a\xi^{-(d+z)} + b\xi'^{-(d+z)}$,
in order for the specific-heat exponent ($\alpha$) relation $2-\alpha = \nu(d+z)$ to hold, i.e., for hyper-scaling to apply (which we thus
assume). Since $\xi$ diverges slower than $\xi'$, $f_s$ is asymptotically dominated by the $\xi$ term, and (\ref{dfs_d}) should then describe
the leading singular behavior.
We can also easily generalize Eq.~(\ref{dfs_b}) to a domain wall of thickness $\xi'$;
\begin{equation}
\Delta F_s(\delta,L) = \rho (\Delta\phi)^2 \xi'^{-1} L^{d-1}.
\label{dfs_e}
\end{equation}
Now consistency between Eqs.~(\ref{dfs_d}) and (\ref{dfs_e}) for both the $L$ dependence and the $\xi'$ dependence requires that
$\tilde Y \propto (L/\xi')(\xi^2/L^2)$, and we arrive at
\begin{equation}
\kappa \propto \frac{1}{\xi^{d+z-2}\xi'}
\label{kappa_c}
\end{equation}
for the scaling of $\kappa$ in the thermodynamic limit. Note the consistency of this form and the single-length form (\ref{kappa_a}) when
$\xi' \to \xi$. In the particular case of a DQC point ($d=2$, $z=1$), Eq.~(\ref{kappa_c}) reduces to $\kappa \propto (\xi\xi')^{-1}$, which was
derived in a different way by Senthil {\it et al.}~\cite{senthil04b}.
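Explicitly, inserting $\tilde Y \propto (\xi^2/L^2)(L/\xi')$ into Eq.~(\ref{dfs_d}) gives
\begin{equation}
\Delta F_s(\delta,L) \propto \xi^{-(d+z)}\,\frac{\xi^2}{L^2}\,\frac{L}{\xi'}\,L^d = \xi^{-(d+z-2)}\,\xi'^{-1} L^{d-1},
\end{equation}
which reproduces both the $L^{d-1}$ dependence required by Eq.~(\ref{dfs_e}) and, after dividing by the cross-section area $L^{d-1}$, the thermodynamic-limit form (\ref{kappa_c}).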
To convert Eq.~(\ref{kappa_c}) to finite-size scaling, in the standard treatment of two length scales arising from a dangerously irrelevant
perturbation \cite{leonard15}, the longer scale is not present in the leading finite-size scaling behavior. This can be understood physically
as follows: Upon approaching the critical point from the ordered phase, when $\xi'$ reaches $L$ we simply replace $\xi'$ by $L$. However,
$\xi$ continues to grow and controls the scaling behavior until it reaches $L$. At the critical point also $\xi$ is replaced by $L$, and the
critical finite-size scaling of $\kappa$ obtained from (\ref{kappa_c}) is, thus, identical to the single-length form (\ref{kappa_b}). Since
neither $\nu$ nor $\nu'$ appear here, there is no information on these exponents in the finite-size scaling of $\kappa$ in the standard
scenario.
As we argued in the main text, there is also another possibility, namely, the growth of $\xi$ in Eq.~(\ref{kappa_c})
is halted when $\xi'$ reaches $L$. Then $\xi \propto L^{\nu/\nu'}$, leading to the finite-size scaling
\begin{equation}
\kappa(L) \propto L^{-1-(d+z-2)\nu/\nu'}.
\label{kappa_d}
\end{equation}
In the case of DQC, this reduces to $\kappa \propto L^{-(1+\nu/\nu')}$. It is very interesting that the ratio $\nu/\nu'$ appears here in a simple way and
can be extracted using critical finite-size scaling. The result in Fig.~\ref{fig3}(a) leaves little doubt that the exponent governing the decay of $\kappa$ is smaller than $2$, which represents unambiguous
evidence for anomalous scaling in the $J$-$Q$ model. Below, in Sec.~\ref{sm:domainwall_jq}, we will present details of these calculations, along with
additional results showing that $\nu/\nu' \approx 0.72$ for the $J$-$Q$ model.
\subsection{2D Ising model}
\label{sm:domainwall_ising}
\begin{figure}
\begin{center}
\includegraphics[width=5.5cm, clip]{figs05.pdf}
\end{center}
\vskip-3mm
\caption*{Figure S5: Boundary conditions used to induce a domain wall in the 2D Ising ferromagnet. The black open circles and red filled circles indicate down
and up boundary spins, respectively. The vertical location $r$ denotes the point at which the domain-wall inducing boundary is terminated. This location
is updated, $r \to r\pm 1$, in MC updates in addition to the updates of the bulk spins. A full vertical domain wall is present when $r=L$.}
\vskip-1mm
\end{figure}
It is instructive to first test the domain-wall scaling using a simple system such as the 2D Ising model. A domain wall in the ferromagnet can be
enforced in different ways using suitable boundary conditions. Here we use $L \times L$ systems with periodic boundaries in the $y$-direction and
compare two different $x$ boundaries, as illustrated in Fig.~S5. The boundaries are open, with the edge columns coupled with the same
strength $J$ as the bulk coupling to fixed spins $\sigma_i = +1$ and $\sigma_i = -1$, equivalent to boundary fields of strength $\pm J$. Here the
domain-wall imposing column of spins to the right extends only partially through the system, to illustrate the mechanism we use for computing
the required free-energy difference.
It is not easy to compute the free energy in MC simulations, but it is relatively easy to compute a free-energy {\it difference}, if the two systems of
interest, let us call them ``1'' and ``2'', can be simulated collectively as a partition function $Z_{12}=Z_1+Z_2$. If there are updates switching the
simulation between system states 1 and 2 with detailed balance satisfied, then the free-energy difference $\Delta F_{21} = F_2-F_1 = \ln(Z_2/Z_1) =
\ln(P_2/P_1)$, where $P_1,P_2$ are the probabilities of the simulation ``visiting'' the respective states. Such {\it multi-canonical} simulations
\cite{marinari97} can be extended to an arbitrary number of systems $s=1,\ldots,n$, and any $\Delta F_{ij}$ can then be accessed, provided that the simulation
can easily transition between the different states $s$.
In the studies of domain walls considered here, the different systems correspond to boundary conditions fluctuating between the normal periodic boundaries
and the domain-wall boundaries. To enhance the ability of the system to fluctuate between these boundary conditions of interest, the whole boundary is not
changed at once, but in small steps where the right boundary has a change from $\sigma_i = -1$ to $\sigma_i = +1$ at some vertical location $y=r$, as
illustrated in Fig.~S5. Thus, $r=0$ corresponds to the normal periodic boundaries (no domain wall) and $r=L$ corresponds to the boundary
enforcing a full vertical domain wall. For $0 < r < L$ the domain wall does not extend vertically through the whole system and instead has a horizontal part
connecting to the location $y=r$ where the boundary changes. MC updates are used to move this location, $r \to r \pm 1$, using heat-bath acceptance
probabilities.
\begin{figure}
\begin{center}
\includegraphics[width=11cm, clip]{figs06.pdf}
\end{center}
\vskip-3mm
\caption*{Figure S6: Scaling of the domain-wall energy per unit length in the 2D Ising model at the critical temperature. The inset shows the running decay
exponent obtained from data pairs $\kappa(L)$ and $\kappa(2L)$ as $\epsilon(L)=\ln[\kappa(L)/\kappa(2L)]/\ln(2)$. The results have been fitted
to a straight line, which extrapolates to the expected value, $\epsilon \to d-1=1$, for $L \to \infty$.}
\vskip-1mm
\end{figure}
We find that the probability $P(r)$ of the boundary conditions generated is the highest, as expected, for $r=0$. There is also a local maximum
at $r=L$, and a minimum around $r=L/2$. To further increase the efficiency of the boundary moves, a weight factor $V(r)$ is multiplied with the Boltzmann
probability for the spins and gradually adjusted until the histogram $H(r)$ of the relative number of times the boundary is at $r$ becomes almost flat.
Then, the actual probability without the re-weighting factor is $P(r)=H(r)/V(r)$, and the free-energy difference between the systems with and without
domain wall is (leaving out the unimportant temperature factor),
\begin{equation}
\Delta F = \ln \left (\frac{P(0)}{P(L)} \right ).
\end{equation}
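A minimal sketch of this final step (with hypothetical array names for the accumulated histogram and weight factors) is:
\begin{verbatim}
import numpy as np

def domain_wall_kappa(H, V, L):
    """Domain-wall energy per unit length from the histogram H(r) of the
    fluctuating boundary position and the reweighting factors V(r),
    r = 0, ..., L.  P(r) = H(r)/V(r) and Delta F = ln[P(0)/P(L)]
    (temperature factor omitted); kappa = Delta F / L^(d-1) with d = 2."""
    P = np.asarray(H, dtype=float) / np.asarray(V, dtype=float)
    return np.log(P[0] / P[-1]) / L
\end{verbatim}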
MC results for $\kappa$ are shown in Fig.~S6. The inset shows the running exponent $\epsilon(L)$ extracted on the basis of size pairs $(L,2L)$
by postulating $\kappa(L)=aL^{-\epsilon(L)}$ and $\kappa(2L)=a(2L)^{-\epsilon(L)}$, whence $\epsilon(L)=\ln[\kappa(L)/\kappa(2L)]/\ln(2)$. The results are
fully compatible with $\epsilon(L) \to 1$ when $L \to \infty$, as predicted by Eq.~(\ref{kappa_b}) when $d=2,z=0$, with a correction $\propto L^{-1}$. We
have also carried out simulations of the 3D Ising model and confirmed that $\epsilon(L) \to 2$.
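The running exponent used in the inset of Fig.~S6 can be computed directly from the measured domain-wall energies; a trivial sketch:
\begin{verbatim}
import numpy as np

def running_exponent(kappa_L, kappa_2L):
    """epsilon(L) = ln[kappa(L)/kappa(2L)] / ln(2) for size pairs (L, 2L)."""
    return np.log(np.asarray(kappa_L) / np.asarray(kappa_2L)) / np.log(2.0)
\end{verbatim}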
\subsection{3D clock model}
\label{sm:domainwall_clock}
The existence of two length scales in the DQC theory relies heavily \cite{senthil04a,senthil04b} on an analogy with the classical 3D clock model,
where the standard XY model is deformed by an external potential $h\cos{(q\Theta_i)}$ for all the angles $\Theta_i$. This term is known to act as a
dangerously-irrelevant perturbation, leading to a domain-wall thickness $\xi' > \xi$. It is therefore natural to also test the scaling of the domain-wall energy
in this case. Here we use the standard XY interaction between nearest neighbors on the 3D simple cubic lattice
\begin{equation}
H_{\rm XY} = -J \sum_{\langle ij\rangle} \cos(\Theta_i - \Theta_j),
\end{equation}
where the angles are constrained to the $q$ clock angles, $\Theta_i=n2\pi/q$, $n=0,1,\ldots,q-1$. The hard constraint is equivalent to
the limit $h/J \to \infty$ with the cosine perturbation.
\begin{figure}
\begin{center}
\includegraphics[width=11cm, clip]{figs07.pdf}
\end{center}
\vskip-3mm
\caption*{Figure S7: Scaling of the domain-wall energy per unit area in the 3D classical $q=6$ clock model at its critical point ($T_c/J \approx 2.202$).
The inset shows the running exponent obtained from data pairs $\kappa(L)$ and $\kappa(2L)$ as $\epsilon(L)=\ln[\kappa(L)/\kappa(2L)]/\ln(2)$
and a fit to the form $\epsilon(L) = 2-aL^{-\omega}$ with $\omega \approx 0.77$.}
\vskip-2mm
\end{figure}
The exponent $\nu'$ should be independent of $h/J$ (including the fully-constrained limit considered here) but depends on $q$,
diverging as $q\to \infty$. There has been some controversy regarding methods to compute the exponent in MC simulations, as
summarized in the recent Ref.~\cite{leonard15}, but for small $q$ several calculations are nevertheless in good agreement with each other
and we can use them as reference points.
In order for the exponent ratio $\nu/\nu'$ to be significantly different from one we here use $q=6$, in which case $\nu' \approx 1.44$ and,
since the 3D XY exponent $\nu \approx 0.67$, the ratio $\nu/\nu' \approx 0.47$. Results for the domain-wall energy scaling at the
critical point are shown in Fig.~S7. The results are completely consistent with the form (\ref{kappa_b}) with $d=3,z=0$, corresponding to
the expected standard scenario where finite-size scaling is obtained from the thermodynamic-limit form by replacing both divergent length
scales by $L$. The results are completely inconsistent with the alternative scenario (\ref{kappa_d}), where the decay exponent should approach
$1+\nu/\nu' \approx 1.47$. This result reinforces the unusual form of the scaling of $\kappa$ in the $J$-$Q$ model, Fig.~\ref{fig3}(a)
of the main text, which we will discuss in more detail in the next section.
We also comment on the applicability of the generic two-length scaling form (\ref{aqlform2}) in the main paper to $\kappa$ in the clock model. Using
the finite-size scaling we found above, we should have
\begin{equation}
\kappa(\delta,L) = L^{-2}f(\delta L^{1/\nu},\delta L^{1/\nu'}).
\end{equation}
To obtain the correct thermodynamic limit, $\kappa \to (\xi\xi')^{-1}$ when $L \to \infty$, we must have $f(x,y) \to x^\nu y^{\nu'}$,
which is also natural because, given the form in the thermodynamic limit, $f$ should be separable, $f(x,y)=f_x(x)f_y(y)$, where the
two factors just correspond to the expected scaling forms for the length scales $\xi$ and $\xi'$ themselves. In contrast, in the $J$-$Q$
model we have argued for an anomalous form which corresponds to a generally non-separable scaling function with the thermodynamic limit
controlled only by the second argument.
\subsection{J-Q model}
\label{sm:domainwall_jq}
In the $J$-$Q$ model we are interested in ground state energies of systems with and without domain walls and these can be computed in standard
QMC simulations. The multi-canonical approach employed in the previous section, developed to circumvent the difficulties of MC calculations of individual free
energies at $T>0$, is therefore neither useful nor needed here. We use the projector QMC approach with ${\rm e}^{-\beta H}$ applied to a valence-bond trial state of
the amplitude-product type \cite{sandvik10b,shao15}, choosing the ``projection time'' $\beta$ sufficiently large, up to $\beta=4L$, to converge the ground-state
energy. Domain walls are introduced by boundary conditions in two different ways, schematically illustrated in Fig.~S8.
\begin{figure}[t]
\begin{center}
\includegraphics[width=10cm, clip]{figs08.pdf}
\end{center}
\vskip-3mm
\caption*{Figure S8: Simplified pictures of VBS domain walls with total twist angle $n\pi/2$, $n=1,2$, of the order parameter between the left and right boundaries.
In the notation introduced in the text, the boundary conditions of these two cases are denoted as $(h,v_1)$ and $(v_2,v_1)$. In QMC simulations the
dimerization at the open $x$ boundaries is induced by weakening some of the interactions, thus explicitly breaking the symmetry between the possible
VBS patterns. Periodic boundary conditions are employed in the $y$ direction.}
\vskip-1mm
\end{figure}
The VBS order parameter is a vector ${\bf D} = (D_x,D_y)$, where the operators corresponding to the two components can be defined as
\begin{equation}
\hat D_x = \frac{1}{N}\sum_{i=1}^N (-1)^{x_i}{\bf S}_{x_i,y_i} \cdot {\bf S}_{x_i+1,y_i},~~~
\hat D_y = \frac{1}{N}\sum_{i=1}^N (-1)^{y_i}{\bf S}_{x_i,y_i} \cdot {\bf S}_{x_i,y_i+1},
\end{equation}
where $(x_i,y_i)$ are the integer lattice coordinates of site $i$. Inside a columnar VBS phase of a large system,
a histogram of the order parameter generated from the
estimators of $\hat D_x$ and $\hat D_y$ in QMC simulations exhibits sharp peaks at the points $(1,0)$, $(0,1)$, $(-1,0)$, $(0,-1)$ times the magnitude
$D$ of the order parameter. These peaks correspond to angles $m\pi/2$, $m=0,1,2,3$. As the critical point is approached, in simulations of the
$J$-$Q$ model the histograms develop a $U(1)$ symmetry, becoming completely circularly symmetric at the DQC point \cite{sandvik07,jiang08}. The length scale
$\xi'$ controls this emergent $U(1)$ symmetry \cite{senthil04a}; upon coarse-graining the order parameter on length scales larger than $\xi'$ the discrete
$Z_4$ symmetry of the VBS is apparent, while on shorter length-scales $U(1)$ symmetry develops. The thickness of a domain wall forced by suitable boundary
conditions is controlled by this same length scale.
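The emergent $U(1)$ symmetry can be visualized directly by histogramming the measured order parameter; a minimal sketch (array names are ours) is:
\begin{verbatim}
import numpy as np

def vbs_histogram(Dx, Dy, bins=101):
    """2D histogram of the VBS order parameter (D_x, D_y) accumulated over
    QMC measurements.  A ring-shaped distribution signals emergent U(1)
    symmetry; four sharp peaks on the axes signal Z_4 (columnar) order."""
    Dx, Dy = np.asarray(Dx), np.asarray(Dy)
    dmax = max(np.abs(Dx).max(), np.abs(Dy).max())
    rng = [[-dmax, dmax], [-dmax, dmax]]
    H, xedges, yedges = np.histogram2d(Dx, Dy, bins=bins, range=rng)
    return H, xedges, yedges
\end{verbatim}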
The four-fold symmetry of the VBS on the square lattice allows for two different types of boundary conditions, as illustrated in Fig.~S8. In the
case labeled $n=1$, the left and right sides of the lattice are forced to have VBS order with horizontal and vertical dimers, respectively, which corresponds
to an angular difference of the order parameter $\Delta\phi=\pi/2$. In the $n=2$ graph, there is vertical dimer order at both edges, but with a relative
shift of one lattice spacing, corresponding to an angular mismatch of $\Delta\phi=\pi$. In a large system, the elementary domain wall corresponds
to $\Delta\phi=\pi/2$ and a $\pi$ wall splits into two such elementary walls.
To compare the two cases and check for possible effects of interactions between two domain walls on the scaling of the energy, we have carried out projector
QMC simulations with domain walls induced with total twist angles $\Delta\phi=\pi/2$ and $\pi$. Simulations without domain walls were
carried out with similar boundary conditions, but with both the left and right walls at the same VBS angle $\phi$. The energy differences can then be computed
without any remaining effects of edge contributions to the total energy, which for a given type of edge is the same with and without domain walls present in
the bulk. Denoting boundary conditions enforcing horizontal dimerization at one of the edges (as in the left edge of the $n=1$ graph in Fig.~S8) by $h$
and vertical order with the two different phases (as shown in the $n=2$ graph) by $v_1$ and $v_2$, the systems we study with different combinations of left
and right boundaries are $(h,h)$, $(v_1,v_1)$, $(h,v_1)$, and $(v_2,v_1)$. The $v_1$ and $v_2$ boundaries are related by just a translation and therefore
the edge contribution to the energy from these are the same. The domain wall contributions to the energy with the edge effects eliminated are then
\begin{eqnarray}
\Delta E(\pi/2) & = & E(h,v_1)-[E(h,h)+E(v_1,v_1)]/2, \\
\Delta E(\pi) & = & E(v_2,v_1)-E(v_1,v_1),
\end{eqnarray}
and the corresponding size-normalized energy density is $\kappa(\Delta\phi)=\Delta E(\Delta \phi)/L$.
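Given the measured ground-state energies for the four boundary combinations, the domain-wall energy densities follow directly; a small sketch with illustrative names:
\begin{verbatim}
def domain_wall_energies(E_hh, E_v1v1, E_hv1, E_v2v1, L):
    """kappa(pi/2) and kappa(pi) from the ground-state energies of the four
    boundary-condition combinations, with edge contributions cancelled:
    Delta E(pi/2) = E(h,v1) - [E(h,h) + E(v1,v1)]/2,
    Delta E(pi)   = E(v2,v1) - E(v1,v1), and kappa = Delta E / L."""
    dE_half = E_hv1 - 0.5 * (E_hh + E_v1v1)
    dE_pi = E_v2v1 - E_v1v1
    return dE_half / L, dE_pi / L
\end{verbatim}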
\begin{figure}[t]
\begin{center}
\includegraphics[width=15cm, clip]{figs09.pdf}
\end{center}
\vskip-3mm
\caption*{Figure S9: (a) Domain-wall energy in the critical $J$-$Q$ model. (b) The exponent $1+\nu/\nu'$ extracted from the data in (a) as a running exponent
(defined as in Fig.~S6) from system-size pairs ($L,2L$). Fits with power-law corrections including all data points are shown.}
\vskip-1mm
\end{figure}
QMC results for $\kappa$ computed at the estimated critical point $J/Q=0.0447$ are shown in Fig.~S9(a) [where the $\Delta\phi=\pi$ results are
the same as those already presented in Fig.~\ref{fig3}(a)]. Here, to compare the energies on an equal footing, we divide $\kappa$ by the number $n=1,2$
of domain walls induced when the VBS twist angle is $\Delta\phi=n\pi/2$ and plot the results against $(L/n)^{-1}$, $L/n$ being the width over which a single
domain wall is (on average) distributed. It is interesting, and at first sight surprising, that the $\pi/2$ domain wall is energetically much more expensive,
since one would not expect any significant attractive interactions between the two domain walls in the $\Delta\phi=\pi$ case. We find that the lowering of
the energy is due to enhanced fluctuations in the system with two domain walls. Recalling the emergent $U(1)$ symmetry discussed above and considering a
$\pi/2$ domain wall between, say, boundaries at $\phi=0$ and $\phi=\pi/2$, we expect the VBS angle in the center of the system to fluctuate mainly between these
angles. In the case of the $\Delta\phi=\pi$ twist, there are similarly fluctuations between the angles at the edge, say $\phi=0$ and $\phi=\pi$, but here
the system has two possible paths to go between the edges, passing either through $\phi=\pi/2$ or $\phi=-\pi/2$. Since the system is critical, there is no
reason to expect any breaking of this symmetry.
By constructing histograms of the order parameter we have confirmed these behaviors for moderate system sizes, while for larger systems the amplitude of
the order parameter is reduced due to the critical nature of the domain walls and the histograms in both cases develop $U(1)$ symmetry. These results confirm
that the system with $\pi$ twist is ``softer'' than that with $\Delta\phi=\pi/2$, explaining the large overall differences between the $n=1$ and $n=2$
results in Fig.~S9(a).
Apart from the different overall magnitudes, the power-law decay of $\kappa$ with $L$ for the largest systems is similar for $n=1$ and $2$. Fig.~S9(b)
shows the running exponents $\epsilon(L)$ extracted from system sizes $(L,2L)$ in the same way as discussed in Sec.~\ref{sm:domainwall_ising} for the Ising
model and Sec.~\ref{sm:domainwall_clock} for the clock model.
The two data sets asymptotically extrapolate to the same exponent, which we have argued is $1 + \nu/\nu'$, with $\nu/\nu' = 0.715(15)$.
The corrections are well captured by a power-law term $\propto L^{-\omega}$ with the same exponent $\omega \approx 1.2 - 1.3$ for $n=1$ and $2$ but
different signs of the prefactor. We have also carried out calculations slightly away from the estimated critical coupling, at $J/Q=0.0450$ and $0.0445$, and
there are no significant differences in the exponent ratio extracted at these points.
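For reference, the running exponent shown in Fig.~S9(b) can be estimated from size pairs $(L,2L)$ by a two-point logarithmic derivative, as in the following schematic sketch (the exact conventions used in Figs.~S6 and S9, e.g., where the estimate is placed on the $L$ axis, may differ in detail):
\begin{verbatim}
import numpy as np

def running_exponent(kappa_L, kappa_2L):
    """Two-point estimate of the decay exponent epsilon(L), assuming
    kappa(L) ~ L**(-epsilon) locally between sizes L and 2L."""
    return np.log(kappa_L / kappa_2L) / np.log(2.0)
\end{verbatim}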
These results are key to our claims of anomalous finite-size scaling in the $J$-$Q$ model, as it is not possible to explain a non-integer decay exponent
$\epsilon < 2$ for the domain walls within the conventional quantum-criticality scenario (as discussed above in Sec.~\ref{sm:domainwall_scaling}),
and the results also are completely inconsistent with a first-order transition. In the latter case, VBS and N\'eel order would coexist at the transition point
and a domain wall induced in the way explained above could possibly also be affected by coexistence inside the domain wall. However, regardless of the nature
of the domain wall, the energy cost of the interface must scale linearly with the length of the domain wall, giving a finite $\kappa$ and a vanishing exponent
$\epsilon(L)$ when $L \to \infty$. This seems extremely unlikely, given our data in Fig.~S9(b).
In Ref.~\cite{shao15} we employed a different approach to studying domain walls in {\it periodic} systems, by restricting the trial state used in projector
QMC simulations in the valence-bond basis to a topological (winding number) sector corresponding to the presence of a given number of domain walls. We
found anomalous scaling for $\kappa$, but with a somewhat larger exponent ratio $\nu/\nu'=0.80(1)$, for a different variant of the $J$-$Q$ model with products
of three singlet projectors (the $J$-$Q_3$ model) instead of the two projectors used in the model (\ref{jqham}) (the $J$-$Q_2$ model). We have also repeated
this kind of calculation for the $J$-$Q_2$ model and again found $\nu/\nu' \approx 0.80$ for systems of small and moderate size. However, when larger
systems are considered and the statistical accuracy is sufficiently high, drifts in the exponent toward smaller values become apparent. The asymptotic behavior
is consistent with $\nu/\nu' \approx 0.72$ obtained above with the symmetry-breaking boundaries. The previous results in Ref.~\cite{shao15} were
likely affected in the same way by remaining scaling corrections, and $\nu/\nu' \approx 0.72$ should hold universally for different variants of the
$J$-$Q$ model and for different ways of generating domain walls.
\section{Finite-size scaling of the spin stiffness and susceptibility}
\label{sm:stiffness}
In the main text we discussed the generic two-length finite-size scaling form (\ref{aqlform2}) and its different limiting behaviors compatible with
the correct scaling of physical quantities in the thermodynamic limit. Here we discuss the behavior in the thermodynamic limit further, deriving the standard
forms assumed in the main text for the spin stiffness $\rho_s$ and the susceptibility $\chi$ in the presence of two divergent length scales. We then argue
for the unconventional size scaling. The scaling arguments generalize similar treatments by Fisher {\it et al.}~\cite{fisher89} for a system with a single
divergent length scale to a quantum phase transition with two divergent length scales, in a way analogous to the treatment of the domain-wall energy
in the previous section.
The standard scenario of Fisher {\it et al.} \cite{fisher89} was formulated for interacting bosons and gives the scaling behaviors of the superfluid
stiffness and the compressibility. The same formalism applies to a spin system as well \cite{chubukov94}, where the corresponding quantities are the
spin stiffness $\rho_s$ and uniform magnetic susceptibility $\chi$, which we will use in the notation here. As in Sec.~\ref{sm:domainwalls}, we again
start from the singular part of the free-energy density,
\begin{equation}
f_s(\delta,L,\beta) \propto \delta^{\nu (d+z)} Y(\xi/L, \xi^z/\beta),
\end{equation}
where we now explicitly include the dependence on the inverse temperature $\beta$, which was assumed to be zero in the case of the quantum system
($z>0$) in Sec.~\ref{sm:domainwalls}. In the end we will consider $\beta \to \infty$ but we will need finite $\beta$ in the derivation of the
susceptibility.
Upon imposing, by suitable boundary conditions, a total spatial phase twist $\Delta\phi$ of the continuous
N\'eel order parameter uniformly distributed over the system, the increase in free
energy is given by
\begin{equation}
\Delta f_s(\delta,L,\beta) = \rho_s \frac{(\Delta\phi)^2}{L^2} \propto \delta^{\nu (d+z)} \tilde Y_r(\xi/L, \xi^z/\beta).
\end{equation}
Internal consistency of this scaling form demands that $\tilde Y_r$ behaves as $(\xi/L)^2$, thus,
\begin{equation}
\rho_s \propto \xi^2 \delta^{\nu(d+z)} \propto \delta^{\nu(d+z-2)}.
\label{rhofisher}
\end{equation}
Similarly, $\chi(\Delta\phi)^2/\beta^2$ is the excess energy density needed to enforce a twist between $\tau=0$ and $\tau=\beta$ in the
imaginary-time direction;
\begin{equation}
\Delta f_s(\delta,L,\beta) = \chi \frac{(\Delta\phi)^2}{\beta^2} \propto \delta^{\nu (d+z)} \tilde Y_\tau(\xi/L, \xi^z/\beta),
\end{equation}
where $\tilde Y_\tau$ has to behave as $(\xi^{z}/\beta)^2$. Thus, the susceptibility scales as
\begin{equation}
\chi \propto \xi^{2 z}\delta^{\nu(d+z)} \propto \delta^{\nu(d-z)}.
\label{chifisher}
\end{equation}
The finite-size scaling properties at the critical point are simply obtained from Eqs.~(\ref{rhofisher}) and (\ref{chifisher}) by replacing
$\delta^{-\nu} \propto \xi$ by the system length $L$, leading to
\begin{equation}
\rho_s \propto L^{-(d+z-2)},~~~~\chi \propto L^{-(d-z)}.
\label{finiterhochi1}
\end{equation}
In the case of $z=1$ (as in the DQC theory) both quantities scale as $1/L$, but note that the dependence on $z$ is opposite for the two, which
implies that the behavior seen in Fig.~\ref{fig3} in the main text cannot be explained simply by $z\not = 1$.
We now generalize the above derivations to the case of two divergent length-scales, $\xi$ and $\xi'$, writing the free energy density as
\begin{equation}
f_s(\delta,L, \beta) \propto \delta^{\nu (d+z)} Y(\xi/L, \xi^z/\beta, \xi'/L, \xi'^{z}/\beta),
\end{equation}
where we have made the assumption that the same dynamic exponent governs the two time scales associated with $\xi$ and $\xi'$ (and in
principle we can generalize to two different exponents $z$ and $z'$). The excess energy due to a spatial twist is
\begin{equation}
\Delta f_s(\delta,L, \beta) = \rho_s\frac{(\Delta\phi)^2}{L^2} \propto
\delta^{\nu (d+z)} \tilde {Y}_r(\xi/L, \xi^z/\beta,\xi'/L, \xi'^{z}/\beta).
\end{equation}
Here, at first sight, there are many ways in which $\tilde {Y}_r$ can depend on its arguments in order to contain the correct $L$ dependence;
\begin{equation}
\tilde{Y}_r \propto \left (\frac{\xi}{L} \right )^{a} \left (\frac{\xi'}{L} \right )^{2-a},
\end{equation}
with arbitrary exponent $a$. However, upon approaching the critical point, when the longer length reaches $L$, we have $\xi'/L \approx 1$
and the only dependence on $L$ at that point is in the factor $(\xi/L)^a$. Thus, we can argue that $a=2$. For the thermodynamic limit we therefore
reproduce the standard results, Eq.~(\ref{rhofisher}). In a similar way we also reproduce Eq.~(\ref{chifisher}) for the susceptibility.
For the finite-size scaling there are two physically natural options, following from two possible behaviors of the shorter length scale $\xi$ upon further
approaching the critical point when $\xi'$ has already reached $L$: (i) $\xi$ continues to increase and eventually reaches $L$. The standard finite-size
scaling forms (\ref{finiterhochi1}) are then again obtained by replacing $\xi \propto \delta^{-\nu}$ by $L$. (ii) The two length scales are fundamentally
tied together, and once $\xi'$ has saturated $\xi$ is locked into its corresponding value; $\xi \propto (\xi')^{\nu/\nu'} \propto L^{\nu/\nu'}$. Making this
replacement in the thermodynamic-limit forms (\ref{rhofisher}) and (\ref{chifisher}) leads to
\begin{equation}
\rho_s \propto L^{-(d+z-2){\nu/\nu'}},~~~~\chi \propto L^{-(d-z){\nu/\nu'}},
\label{newlscaleforms}
\end{equation}
exactly as we argued in the main text based on a direct finite-size scaling ansatz with an appropriate limit of the scaling function. As was shown
in Fig.~\ref{fig3} in the main text, the forms (\ref{newlscaleforms}) are in excellent qualitative agreement with data for the $J$-$Q$ model, with both
$\rho_s$ and $\chi$ decreasing more slowly with $L$ than in the standard forms (\ref{finiterhochi1}). Quantitative agreement is observed when using the exponent
ratio $\nu/\nu'$ extracted from the scaling of the domain-wall energy.
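Concretely, inserting $d=2$, $z=1$, and $\nu/\nu'\approx 0.715$ into Eq.~(\ref{newlscaleforms}) gives $\rho_s \propto L^{-0.715}$ and $\chi \propto L^{-0.715}$, to be compared with the conventional $1/L$ decay of both quantities in Eq.~(\ref{finiterhochi1}).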
\section{Anomalous critical scaling at finite temperature}
\label{sm:temp}
One of the experimentally most important aspects of quantum criticality is that the quantum-critical point at $T=0$ governs the behavior
also in a wide $T>0$ region which expands out from $(g_c,T=0)$ with increasing $T$---the so-called quantum-critical fan. In the
standard scenario \cite{chubukov94}, the correlation length
exactly at $g_c$ diverges when $T\to 0$ as $\xi_T \propto T^{-1/z}$ and the uniform magnetic susceptibility approaches $0$ as $\chi_T \propto T^{d/z-1}$.
These forms are not seen in simulations of the $J$-$Q$ model in the neighborhood of its critical point, however \cite{sandvik10a,sandvik11}.
Given our findings where the ratio $\nu/\nu'$ modifies the standard power laws in finite-size scaling, it is also natural to expect modifications
of the powers of the temperature for the system in the thermodynamic limit. This expectation follows from the Euclidean path-integral mapping, where
the inverse temperature $1/T$ of a $d$-dimensional system corresponds to the thickness in the imaginary-time dimension of the $(d+1)$-dimensional
effective system ($L_T=c/T$, $c$ being the velocity of the critical excitations). Finite-temperature scaling is therefore obtained as a generalized
finite-size scaling in $L_T$ \cite{chubukov94}.
We here re-analyze the critical $J$-$Q$ data of Ref.~\cite{sandvik11} to test whether power laws modified by $\nu/\nu'$ can explain the observed scaling
anomalies. The data were generated in Ref.~\cite{sandvik11} using QMC calculations on $L\times L$ lattices with $L$ up to $512$, which allowed for
studies effectively in the thermodynamic limit down to temperatures $T/Q \approx 0.035$ ($L/L_T \gg 1$). Given that the correlation length
diverges faster than expected and the susceptibility approaches $0$ slower than expected, in Fig.~S10 we test the forms
\begin{eqnarray}
\xi_T &\propto & T^{-1/(z\nu/\nu')}(1 + aT^{\omega_\xi}), \label{xit} \\
\chi_T &\propto & T^{(d/z-1)\nu/\nu'}(1+bT^{\omega_\chi}), \label{chit}
\end{eqnarray}
using $d=2$, $z=1$, $\nu/\nu'=0.715$, and positive correction exponents $\omega_\xi,\omega_\chi$. The correction terms reflect expected non-asymptotic contributions
which become unimportant when $T \to 0$ but still affect the behavior for the temperatures reached in the simulations. We have multiplied $\xi_T$ by $T$
in Fig.~S10(a) and divided $\chi_T$ by $T$ in Fig.~S10(b), so that the results graphed versus $1/T$ should approach constants if the
conventional forms $\xi_T \propto 1/T$ and $\chi_T \propto T$ hold. The data agree very well with the proposed anomalous forms, lending
support to our hypothesis that finite-size anomalies carry over also to $T>0$ scaling with the same exponent ratio $\nu/\nu'$
modifying the power laws.
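The fits can be set up as in the following schematic sketch (the data arrays stand in for the QMC results of Ref.~\cite{sandvik11} and the initial parameter guesses are placeholders):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

nu_ratio = 0.715      # nu/nu' from the domain-wall analysis
d, z = 2, 1

def xi_form(T, c, a, om):    # the xi_T form above, with correction term
    return c * T**(-1.0 / (z * nu_ratio)) * (1.0 + a * T**om)

def chi_form(T, c, b, om):   # the chi_T form above, with correction term
    return c * T**((d / z - 1.0) * nu_ratio) * (1.0 + b * T**om)

# T, xi, chi: temperature grid and QMC data (placeholders, not shown here)
# p_xi,  _ = curve_fit(xi_form,  T, xi,  p0=[1.0, 0.1, 0.5])
# p_chi, _ = curve_fit(chi_form, T, chi, p0=[1.0, 0.1, 0.5])
\end{verbatim}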
\begin{figure}[t]
\begin{center}
\includegraphics[width=14cm, clip]{figs10.pdf}
\end{center}
\vskip-3mm
\caption*{Figure S10: Finite-temperature scaling in the critical $J$-$Q$ model based on QMC results from Ref.~\cite{sandvik11}. The
data are analyzed using the forms (\ref{xit}) and (\ref{chit}) with $d=2$, $z=1$, and the ratio $\nu/\nu'=0.715$ determined previously.
In (a) the correlation length has been
multiplied by $T$ and in (b) the susceptibility has been divided by $T$, so that conventional quantum-critical scaling demands
the results to approach constants when $1/T \to \infty$. The fits shown here gave the correction exponents $\omega_\xi \approx 0.40$
and $\omega_{\chi} \approx 0.55$ in Eqs.~(\ref{xit}) and (\ref{chit}).}
\vskip-1mm
\end{figure}
In Ref.~\cite{sandvik11} the scaling anomaly in the correlation length was used as input in a simple picture of a deconfined gas of spinons, leading to
quantitatively consistent relationships between numerical results for $\xi_T$, $\chi_T$, and the specific heat (the latter of which we do not analyze
here because its anomalies are very difficult to detect). Within the spinon gas picture the susceptibility was predicted to acquire a multiplicative
logarithmic correction. The present scenario strongly suggests a modified power law instead of a logarithm, but the consistent behaviors of the
three quantities found in Ref.~\cite{sandvik11} still hold numerically within the temperature regime considered (as demonstrated by the quality of the fits).
Since the spinons at the critical point
are not completely free particles in the DQC theory \cite{senthil04a,senthil04b} one cannot expect the free spinon-gas picture to remain
strictly correct down to $T\to 0$, but it appears to apply in a window of rather low temperatures, where the log-form used for the susceptibility in
Ref.~\cite{sandvik11} cannot be distinguished from the modified power-law form proposed here. It would be interesting to carry out simulations
at still lower temperatures, to study how the logarithmic fit to $\chi_T/T$ presumably breaks down eventually and to further test the anomalous
power laws in the regime where the correction terms in Eqs.~(\ref{xit}) and (\ref{chit}) become insignificant.
\section{Quantum Monte Carlo simulations}
\label{sm:qmc}
The QMC calculations of the spin stiffness and susceptibility were carried out with the standard Stochastic Series Expansion algorithm, using the
same program as in Ref.~\cite{sandvik10a}, to which we refer for technical details and further references. For a given system size, the method produces unbiased results
only affected by well-characterized statistical errors of the MC sampling.
Ground-state calculations in both the $S=0$ and $S=1$ sector were carried out with projector QMC simulations in the basis of valence bonds (singlet pairs) and
unpaired spins, following Refs.~\cite{tang11,sandvik10a} and references cited there [see also Ref.~\cite{shao15}].
For system size $N$ and total spin $S$, there are $(N-2S)/2$ valence bonds and $2S$ unpaired spins
with the same $z$-spin projection, i.e., the total spin-$z$ projection of the state $S^z=S$. The degrees of freedom of a bra and ket state are
importance-sampled, using the overlap of the two states as the sampling weight. This overlap is represented by a transition graph, where, in the case of the
ground state with $S=0$, the bonds form closed loops. For $S>0$ there are $2S$ ``open loops'', or strings, where for a given string the end points fall
on two unpaired spins, one in the bra and one in the ket. Such a configuration for $S=1$ is illustrated in Fig.~\ref{fig0} of the main text.
This string connecting an unpaired spin in the bra and ket states is a representation of a spinon.
The statistics of the individual strings and their cross-correlations provide information on the nature of the spinons and their collective states.
In particular, the size of the lowest-energy $S=1$ spinon bound state in a VBS can be defined in simulations with two unpaired spins. In Ref.~\cite{tang13}
the distance between the unpaired spins (the end points of the strings) was used for this purpose. Here we use a slightly different measure, inspired by
the arguments of Ref.~\cite{banerjee10} for a different problem, using the entire strings in the following way \cite{damle}: Each of the two spinons, $1$
and $2$, in the $S=1$ state is associated with a string covering lattice sites located at $r_1(i)$, $r_2(j)$, with $i=1,\ldots,n_1$ and $j=1,\ldots,n_2$.
We average the distance-squared $r_{ij}^2=|\vec r_1(i)-\vec r_2(j)|^2$ between two points on the two strings over all the $n_1n_2$ pairs of lattice sites
covered by the strings,
\begin{equation}
\langle r^2\rangle = \frac{1}{n_1n_2}\sum_{i=1}^{n_1}\sum_{j=1}^{n_2} r_{ij}^2,
\end{equation}
and define the size as $\Lambda=\langle r^2\rangle^{1/2}$. We find that this definition provides a clearer signal of the spinon bound state diverging
faster than the correlation length than do definitions of $\Lambda$ based only on the locations of the unpaired spins, as used in Refs.~\cite{tang11,tang13}.
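Schematically, the estimator can be evaluated as follows, where the two input arrays hold the lattice coordinates covered by the two strings in a given transition-graph configuration (placeholder input; periodic-boundary minimum-image conventions are omitted for brevity):
\begin{verbatim}
import numpy as np

def bound_state_size(r1, r2):
    """Lambda = <r^2>^(1/2), averaged over all n1*n2 pairs of sites
    covered by the two spinon strings; r1, r2 have shape (n1,2), (n2,2)."""
    diff = r1[:, None, :] - r2[None, :, :]      # all pairwise separations
    r2avg = np.mean(np.sum(diff**2, axis=-1))   # <r^2>
    return np.sqrt(r2avg)
\end{verbatim}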
\end{document}
\section{Introduction}
The gate-based model and the measurement-based model are two fundamentally different approaches to implementing quantum computations.
In the gate-based model \cite{NielsenChuang}, the bulk of the computation is performed via unitary (i.e.\ reversible) one- and two-qubit gates.
Measurements serve mainly to read out data and may be postponed to the end of the computation.
Conversely, in the measurement-based model, the bulk of the computation is performed via measurements on some general resource state, which is independent of the specific computation.
We focus here on the one-way model~\cite{MBQC1}, where the resource states are \emph{graph states} (see Section~\ref{sec:MBQC}).
In this paper, we study ways of converting between these two different approaches to quantum computation with a view towards optimizing the implementation of both.
While computations in the gate-based model are represented as quantum circuits,
computations in the one-way model are usually represented by \emph{measurement patterns}, which describe both the graph state and the measurements performed on it~\cite{Patterns,danos_kashefi_panangaden_perdrix_2009}.
Measurement patterns in the one-way model generally do not allow arbitrary single-qubit measurements.
Instead, measurements are often restricted to the `planes' of the Bloch sphere that are orthogonal to the principal axes, labelled the \normalfont XY\xspace-, \normalfont XZ\xspace-, and \normalfont YZ\xspace-planes.
In fact, most research has focused on measurements in just the \normalfont XY\xspace-plane, which alone are sufficient for universal quantum computation~\cite{Patterns}.
Similarly, measurements in the \normalfont XZ\xspace-plane are also universal~\cite{mhalla2012graph}, although this model has been explored less in the literature.
In this paper, we will consider measurements in all three of the planes, since this usually leads to patterns involving fewer qubits, and allows for more non-trivial transformations of the graph state.
Due to the non-deterministic nature of quantum measurements, a one-way computation needs to be adaptive, with later measurement angles depending on the outcomes of earlier measurements~\cite{MBQC1}.
While the ability to correct undesired measurement outcomes is necessary for obtaining a deterministic computation, not all sequences of measurements support such corrections.
The ability to perform a deterministic computation depends on the underlying graph state and the choice of measurement planes, which together form an object called a labelled open graph.
If all measurements are in the \normalfont XY\xspace-plane, \emph{causal flow} (sometimes simply called `flow') is a sufficient condition for the labelled open graph\ to support deterministically implementable patterns~\cite{Danos2006Determinism-in-}.
Yet causal flow is not necessary for determinism.
Instead, the condition of \emph{generalized flow} (or \emph{gflow}) \cite{GFlow} is both sufficient and necessary for deterministic implementability.
Gflow can be defined for labelled open graph{}s containing measurements in all three planes, in which case it is sometimes called \emph{extended gflow}~\cite[Theorems~2 \&~3]{GFlow}.
A given representation of some computation can be transformed into a different representation of the same computation using local rewriting.
The new representation may be chosen to have more desirable properties.
For quantum circuits, such desirable properties include low depth~\cite{amy2013meet}, small total gate count~\cite{CliffOpt}, or small counts of some particular type of gate, such as the T-gate~\cite{amy2014polynomial}.
For measurement patterns, desirable properties include a small number of qubits~\cite{eslamy2018optimization,houshmand2018minimal} or a particularly simple underlying graph state~\cite{mhalla2012graph}.
Local processes can also be used to translate a pattern into a circuit.
This is used, for example, to verify that the pattern represents the desired operation~\cite{Danos2006Determinism-in-,beaudrap2010unitary,duncan2010rewriting,daSilva2013compact,miyazaki2015analysis}.
Conversely, a translation of a circuit into a pattern can be used to implement known algorithms in the one-way model, or it can be combined with a translation back to a circuit to trade depth against width, to parallelise Clifford operations, or to reduce the number of T gates~\cite{broadbent_2009_parallelizing,daSilva2013global,houshmand2017quantum}.
No complete set of rewrite rules is known for quantum circuits or for measurement patterns, although a completeness result does exist for 2-qubit circuits over the Clifford+T gate set \cite{Bian2Qubit}.
Rewriting of patterns or circuits, as well as translations between the two models, can be performed using the \zxcalculus, a graphical language for quantum computation \cite{CD2}.
This language is more flexible than quantum circuit notation and also has multiple complete sets of graphical rewrite rules \cite{SimonCompleteness,HarnyAmarCompleteness,JPV-universal,euler-zx}.
While translating a measurement pattern to a quantum circuit can be difficult, the translation between patterns and \zxdiagrams is straightforward \cite{duncan2010rewriting,cliff-simp,kissinger2017MBQC}.
\subsection{Our contributions}
In this paper, we give an algorithm that extracts a quantum circuit from any measurement pattern whose underlying labelled open graph\ has extended gflow.
Our algorithm does not use ancillae.
This is the first circuit extraction algorithm for extended gflow, i.e.\ where patterns may contain measurements in more than one plane.
The algorithm works by translating the pattern into the \zxcalculus and transforming the resulting \zxdiagram into a circuit-like form.
It generalises a similar algorithm, which works only for patterns where all measurements are in the \normalfont XY\xspace-plane \cite{cliff-simp}.
The circuit extraction algorithm employs the \zxcalculus, so it can be used not only on diagrams arising from measurement patterns but on any \zxdiagram satisfying certain properties.
Thus, this procedure is not only the most general circuit extraction algorithm for measurement patterns but also the most general known circuit extraction algorithm for \zxcalculus diagrams.
In developing the circuit extraction algorithm, we derive a number of explicit rewrite rules for \zxdiagrams representing measurement patterns, in particular rewrites that involve graph transformations on the underlying resource state.
We show how the gflow changes for each of these rewrite rules, i.e.\ how the rewrites affect the instructions for correcting undesired measurement outcomes.
These rewrite rules unify and formalise several rules that were previously employed in the literature in a more ad-hoc manner, e.g.\ the pivot-minor transformation in Ref.~\cite{mhalla2012graph} or the elimination of Clifford measurements first derived in a different context in Ref.~\cite{hein2004multiparty}.
The rewrite rules serve not only to prove the correctness of the algorithm, but also to simplify the measurement patterns by reducing the number of qubits involved.
Combining the different rules allows us to remove any qubit measured in a Pauli basis, while maintaining deterministic implementability.
This shows that the number of qubits needed to perform a measurement-based computation is directly related to the number of non-Clifford operations required for the computation.
We also generalise several concepts originally developed for patterns containing only \normalfont XY\xspace-plane measurements to patterns with measurements in multiple planes.
In particular, we adapt the definitions of \emph{focused gflow}~\cite{mhalla2011graph} and \emph{maximally delayed gflow}~\cite{MP08-icalp} to the extended gflow case.
Our generalisation of focused gflow differs from the three generalisations suggested by Hamrit and Perdrix~\cite{hamrit2015reversibility}; indeed, the desired applications naturally lead us to one unique generalisation (see Section~\ref{sec:focusing-extended-gflow}).
Combined with the known procedure for transforming a quantum circuit into a measurement pattern using the \zxcalculus \cite{cliff-simp}, our pattern simplification and circuit extraction procedure can be used to reduce the T-gate count of quantum circuits.
This involves translating the circuit into a \zxcalculus diagram, transforming to a diagram which corresponds to a measurement pattern, simplifying the pattern, and then re-extracting a circuit.
A rough overview of the different translation and optimisation procedures is given in Figure~\ref{figRoughTranslationsAndOptimisations}.
\begin{figure}
\ctikzfig{translationsOverview}
\caption{A rough overview of the translation procedures between the three paradigms, indicating where the optimisation steps are carried out.
\label{figRoughTranslationsAndOptimisations}}
\end{figure}
The remainder of this paper is structured as follows.
Known definitions and results relating to \zxcalculus, measurement patterns and gflow are presented in Section~\ref{sec:preliminaries}.
We derive the rewrite rules for extended measurement patterns and the corresponding gflow transformations in Section~\ref{sec:rewriting}. These results are used in Section~\ref{sec:simplifying} to simplify measurement patterns in various ways. Then in Section~\ref{sec:circuitextract}, we demonstrate the algorithm for extracting a circuit from a measurement pattern whose underlying labelled open graph\ has extended gflow.
The conclusions are given in Section~\ref{sec:conclusion}.
\section{Preliminaries} \label{sec:preliminaries}
We give a quick overview of the \zxcalculus in Section~\ref{sec:zx}
and introduce the one-way model of measurement-based quantum computing in Section~\ref{sec:MBQC}.
The graph-theoretic operations of local complementation and pivoting
(on which the rewrite rules for measurement patterns are based)
and their representation in the \zxcalculus are presented in Section~\ref{sec:lc}.
Section~\ref{sec:gflow} contains the definitions and properties of different notions of flow.
\subsection{The ZX-calculus}
\label{sec:zx}
The \zxcalculus is a diagrammatic language similar to the widely-used
quantum circuit notation. We will provide only a brief overview here,
for an in-depth reference see~\cite{CKbook}.
A \emph{\zxdiagram} consists of \emph{wires} and \emph{spiders}.
Wires entering the diagram from the left are called \emph{input wires}; wires exiting to the right are called \emph{output wires}.
Given two diagrams we can compose them horizontally (denoted $\circ$)
by joining the output wires of the first to the input wires of the second, or
form their tensor product (denoted $\otimes$) by simply stacking the two diagrams vertically.
Spiders are linear maps which can have any number of input or output
wires. There are two varieties: $Z$ spiders, depicted as green dots, and $X$ spiders, depicted as red dots.\footnote{If you are reading this
document in monochrome or otherwise have difficulty distinguishing green and red, $Z$ spiders will appear lightly-shaded and $X$ spiders will appear darkly-shaded.}
Written explicitly in Dirac `bra-ket' notation, these linear maps are:
\[
\small
\hfill \tikzfig{Zsp-a} \ := \ \ketbra{0...0}{0...0} +
e^{i \alpha} \ketbra{1...1}{1...1} \hfill
\qquad
\hfill \tikzfig{Xsp-a} \ := \ \ketbra{+...+}{+...+} +
e^{i \alpha} \ketbra{-...-}{-...-} \hfill
\]
A \zxdiagram with $m$ input wires and $n$ output wires then represents a linear map $(\mathbb C^2)^{\otimes m} \to (\mathbb C^2)^{\otimes n}$ built from
spiders (and permutations of qubits) by composition and tensor product
of linear maps. As a special case, diagrams with no inputs and $n$ outputs represent vectors in $(\mathbb C^2)^{\otimes n}$, i.e.
(unnormalised) $n$-qubit states.
\begin{example}\label{ex:basic-maps-and-states}
We can immediately write down some simple state preparations and
unitaries in the \zxcalculus:
\[
\begin{array}{rclcrcl}
\tikzfig{ket-+} & = & \ket{0} + \ket{1} \ \propto \ket{+} &
\qquad &
\tikzfig{ket-0} & = & \ket{+} + \ket{-} \ \propto \ket{0} \\[1em]
\tikzfig{Z-a} & = & \ketbra{0}{0} + e^{i \alpha} \ketbra{1}{1} =
Z_\alpha &
&
\tikzfig{X-a} & = & \ketbra{+}{+} + e^{i \alpha} \ketbra{-}{-} = X_\alpha
\end{array}
\]
In particular we have the Pauli matrices:
\[
\hfill
\tikzfig{Z} = Z \qquad\qquad \tikzfig{X} = X \qquad\qquad
\hfill
\]
\end{example}
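These identities are easy to verify numerically. The following sketch (illustrative only, not part of the calculus itself) writes the spiders of Example~\ref{ex:basic-maps-and-states} as explicit matrices:
\begin{verbatim}
import numpy as np

alpha = 0.7                                    # arbitrary test phase
ket0, ket1 = np.array([1.0, 0]), np.array([0, 1.0])
ketp, ketm = (ket0 + ket1)/np.sqrt(2), (ket0 - ket1)/np.sqrt(2)

# Z spider with one input and one output equals Z_alpha
Z_sp = np.outer(ket0, ket0) + np.exp(1j*alpha)*np.outer(ket1, ket1)
assert np.allclose(Z_sp, np.diag([1, np.exp(1j*alpha)]))

# X spider with one input and one output; for alpha = pi it is Pauli X
X_sp = np.outer(ketp, ketp) + np.exp(1j*np.pi)*np.outer(ketm, ketm)
assert np.allclose(X_sp, np.array([[0, 1], [1, 0]]))

# phase-free Z spider with no input and one output: |0> + |1>, i.e. ~ |+>
state = ket0 + ket1
assert np.allclose(state, np.sqrt(2)*ketp)
\end{verbatim}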
It will be convenient to introduce a symbol -- a yellow square -- for
the Hadamard gate. This is defined by the equation:
\begin{equation}\label{eq:Hdef}
\hfill
\tikzfig{had-alt}
\hfill
\end{equation}
We will often use an alternative notation to simplify the diagrams,
and replace a Hadamard between two spiders by a blue dashed edge, as
illustrated below.
\ctikzfig{blue-edge-def}
Both the blue edge notation and the Hadamard box can always be
translated back into spiders when necessary. We will refer to the blue
edge as a \emph{Hadamard edge}.
\begin{definition}\label{def:interpretation}
The \emph{interpretation} of a \zxdiagram $D$ is the linear map that such a diagram represents
and is written $\intf{D}$.
For a full treatment of the interpretation of a ZX diagram see, e.g.\ Ref.~\cite{SimonCompleteness}.
We say two \zxdiagrams $D_1$ and $D_2$ are \emph{equivalent} when $\intf{D_1}=z\intf{D_2}$ for some non-zero complex number $z$.
\end{definition}
We define equivalence up to a global scalar, as those scalars will not be important for the class of diagrams we study in this paper.
\begin{remark}
There are many different sets of rules for the \zxcalculus. The version we present only preserves equality up to a global scalar. Versions of the \zxcalculus where equality is `on the nose' can be found in Ref.~\cite{Backens:2015aa} for the stabiliser fragment, in Ref.~\cite{SimonCompleteness} for the Clifford+T fragment, and in Ref.~\cite{JPV-universal,ng2017universal} for the full language.
\end{remark}
The interpretation of a \zxdiagram is invariant under
moving the vertices around in the plane, bending,
unbending, crossing, and uncrossing wires, so long as the connectivity
and the order of the inputs and outputs is maintained.
Furthermore, there is an additional set of equations that we call the \emph{rules} of the \zxcalculus; these are shown in
Figure~\ref{fig:zx-rules}. Two diagrams are equivalent if one can be transformed into the other using the rules of the \zxcalculus.
\begin{figure}[h]
\ctikzfig{ZX-rules}
\vspace{-3mm}
\caption{\label{fig:zx-rules} A convenient presentation for the ZX-calculus. These rules hold
for all $\alpha, \beta \in [0, 2 \pi)$, and due to $(\bm{h})$ and
$(\bm{i2})$ all rules also hold with the colours interchanged. Note the ellipsis should be read as `0 or more', hence the spiders on the LHS of \SpiderRule are connected by one or more wires.}
\end{figure}
\begin{remark}\label{rem:completeness}
The \zxcalculus is \emph{universal} in the sense that any linear map can be expressed as a \zxdiagram. Furthermore, when restricted to \textit{Clifford \zxdiagrams}, i.e. diagrams whose phases are all multiples of $\pi/2$, the version we present in Figure~\ref{fig:zx-rules} is \emph{complete}. That is, for any two Clifford \zxdiagrams that describe the same linear map, there exists a series of rewrites transforming one into the other. Recent extensions to the calculus have been introduced which are complete for the larger \textit{Clifford+T} family of \zxdiagrams \cite{SimonCompleteness}, where phases are multiples of $\pi/4$, and for \textit{all} \zxdiagrams~\cite{HarnyAmarCompleteness,JPV-universal,euler-zx}.
\end{remark}
Quantum circuits can be translated into \zxdiagrams in a straightforward manner.
We will take as our starting point circuits constructed
from the following universal set of gates:
\[
\CX
\qquad\qquad
Z_\alpha
\qquad\qquad
H
\]
We choose this gate
set because it admits a convenient representation in terms of
spiders:
\begin{align}\label{eq:zx-gates}
\CX & = \tikzfig{cnot} &
Z_\alpha & = \tikzfig{Z-a} &
H & = \tikzfig{h-alone}
\end{align}
These gates have the following interpretations:
\begin{align*}
\intf{\tikzfig{cnot}} &=
\left(\begin{matrix}
1 & 0 & 0 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 \\
\end{matrix}\right)
\qquad
\intf{\tikzfig{Z-a}} &=
\left(\begin{matrix}
1 & 0 \\
0 & e^{i \alpha}
\end{matrix}\right)
\qquad
\intf{\tikzfig{h-alone}} &= \frac{1}{\sqrt{2}}
\left(\begin{matrix}
1 & 1 \\
1 & -1
\end{matrix}\right)
\end{align*}
For the \CX gate, the green spider is the first (i.e. control) qubit and the red spider is the second (i.e. target) qubit. Other common gates can easily be expressed in terms of these gates. In particular, $S := Z_{\frac\pi2}$, $T := Z_{\frac\pi4}$ and:
\begin{align}\label{eq:zx-derived-gates}
X_\alpha & = \tikzfig{X-a-expanded} &
\ensuremath{\textrm{CZ}}\xspace & = \tikzfig{cz-small}
\end{align}
\begin{remark}
Note that the directions of the wires in the depictions of the \CX and \ensuremath{\textrm{CZ}}\xspace gates are irrelevant, hence we can draw them vertically without ambiguity. Undirected wires are a general feature of \zxdiagrams, and henceforth we will ignore wire directions without further comment. We will also freely draw inputs/outputs entering or exiting the diagram from arbitrary directions if the meaning is clear from context (as for example in Example~\ref{ex:gflow-in-action}).
\end{remark}
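As a sanity check (not needed for any of the results below), one can verify numerically that the spider decomposition of the \CX gate in~\eqref{eq:zx-gates} reproduces the \CX matrix up to a global scalar, in line with Definition~\ref{def:interpretation}:
\begin{verbatim}
import numpy as np

# The two 3-legged spiders of the CNOT diagram, written in the
# computational basis (all phases zero).
G = np.zeros((2, 2, 2), dtype=complex)     # Z spider: |000>- and |111>-deltas
G[0, 0, 0] = G[1, 1, 1] = 1.0
R = np.zeros((2, 2, 2), dtype=complex)     # X spider: |+><++| + |-><--|
for b in range(2):
    for w in range(2):
        for d in range(2):
            if (b + w + d) % 2 == 0:
                R[b, w, d] = 1.0 / np.sqrt(2)

# Contract the internal wire 'w' joining the two spiders; the remaining
# legs are (out_control, out_target, in_control, in_target).
M = np.einsum('awc,bwd->abcd', G, R).reshape(4, 4)

CNOT = np.array([[1,0,0,0],[0,1,0,0],[0,0,0,1],[0,0,1,0]], dtype=complex)
print(np.allclose(M, CNOT / np.sqrt(2)))   # True: equal up to a scalar
\end{verbatim}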
\noindent For our purposes, we will define quantum circuits as a special case of \zxdiagrams.
\begin{definition}\label{def:circuit}
A \emph{circuit} is a \zxdiagram generated by composition and tensor products of the \zxdiagrams in equations~\eqref{eq:zx-gates} and~\eqref{eq:zx-derived-gates}.
The interpretation of such a circuit is given by the composition and tensor products of the interpretations of the generating diagrams given
in equation~\eqref{eq:zx-gates}, in accordance with:
\begin{align*}
\intf{D \otimes D'} = \intf{D} \otimes \intf{D'} \qquad \intf{D \circ D'} = \intf{D} \circ \intf{D'}
\end{align*}
\end{definition}
Important subclasses of circuits are \textit{Clifford circuits}, sometimes called stabilizer circuits, which are obtained from compositions of only \CX, $H$, and $S$ gates. They are efficiently classically simulable, thanks to the \textit{Gottesman-Knill theorem}~\cite{aaronsongottesman2004}. A unitary is \textit{local Clifford} if it arises from a single-qubit Clifford circuit, i.e. a composition of $H$ and $S$ gates.
The addition of $T$ gates to Clifford circuits yields \textit{Clifford+T circuits}, which are capable of approximating any $n$-qubit unitary to arbitrary precision, whereas the inclusion of $Z_\alpha$ gates for all $\alpha$ enables one to construct any unitary exactly~\cite{NielsenChuang}.
\subsection{Measurement-based quantum computation}
\label{sec:MBQC}
Measurement-based quantum computing (MBQC) is a universal model for quantum computation, underlying the \emph{one-way quantum computing} scheme \cite{MBQC2}.
The basic resource for MBQC is a \emph{graph state}, built from a register of qubits by applying $CZ$-gates pairwise to obtain an entangled quantum state.\footnote{There are models of MBQC where the basic resource is not a graph state, but we do not consider those models in this paper.}
The graph state is then gradually consumed by measuring single qubits.
By choosing an appropriate resource state and appropriate measurements, any quantum computation can be performed.
The difficulty is the non-determinism inherent in quantum measurements, which means computations need to be adaptive to implement deterministic operations.
Measurement-based quantum computations are usually formalised in terms of \emph{measurement patterns}, a syntax describing both the construction of graph states and their processing via successive measurements.
In the following, we first present measurement patterns, and then introduce other
-- and, in our opinion, simpler --
formalisms for reasoning about these computations.
Instead of allowing arbitrary single-qubit measurements, measurements are usually restricted to three planes of the Bloch sphere, labelled XY, XZ, and YZ.
For each measurement, the state denoted `$+$' is taken to be the desired result of the measurement and the state denoted `$-$' is the undesired result, which will need to be adaptively corrected.
The allowed measurements are (\cite[p.~292]{danos_kashefi_panangaden_perdrix_2009}):
\begin{align*}
\ket{+_{\ensuremath\normalfont\textrm{XY}\xspace,\alpha}} &= \frac{1}{\sqrt{2}}\left(\ket{0} + e^{i\alpha} \ket{1} \right) &
\ket{-_{\ensuremath\normalfont\textrm{XY}\xspace,\alpha}} &= \frac{1}{\sqrt{2}}\left(\ket{0} - e^{i\alpha} \ket{1} \right) \\
\ket{+_{\normalfont\normalfont\textrm{XZ}\xspace,\alpha}} &= \cos\left(\frac{\alpha}{2}\right)\ket{0} + \sin\left(\frac{\alpha}{2}\right) \ket{1} &
\ket{-_{\normalfont\normalfont\textrm{XZ}\xspace,\alpha}} &= \sin\left(\frac{\alpha}{2}\right)\ket{0} - \cos\left(\frac{\alpha}{2}\right) \ket{1} \\
\ket{+_{\normalfont\normalfont\textrm{YZ}\xspace,\alpha}} &= \cos\left(\frac{\alpha}{2}\right)\ket{0} + i \sin\left(\frac{\alpha}{2}\right) \ket{1} &
\ket{-_{\normalfont\normalfont\textrm{YZ}\xspace,\alpha}} &= \sin\left(\frac{\alpha}{2}\right)\ket{0} - i \cos\left(\frac{\alpha}{2}\right) \ket{1}
\end{align*}
\noindent where $0 \leq \alpha \leq 2\pi$.
Usually, the desired measurement outcome is labelled 0 and the undesired measurement outcome is labelled 1.
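As a quick numerical check (illustrative only), each pair $\{\ket{+_{\lambda,\alpha}},\ket{-_{\lambda,\alpha}}\}$ is indeed orthonormal and hence defines a valid projective measurement:
\begin{verbatim}
import numpy as np

a = 0.3   # arbitrary test angle
plus = {
    'XY': np.array([1, np.exp(1j*a)]) / np.sqrt(2),
    'XZ': np.array([np.cos(a/2), np.sin(a/2)]),
    'YZ': np.array([np.cos(a/2), 1j*np.sin(a/2)]),
}
minus = {
    'XY': np.array([1, -np.exp(1j*a)]) / np.sqrt(2),
    'XZ': np.array([np.sin(a/2), -np.cos(a/2)]),
    'YZ': np.array([np.sin(a/2), -1j*np.cos(a/2)]),
}
for plane in plus:
    assert abs(np.vdot(plus[plane], minus[plane])) < 1e-12    # orthogonal
    assert abs(np.vdot(plus[plane], plus[plane]) - 1) < 1e-12 # normalised
\end{verbatim}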
\begin{definition}\label{def:meas_pattern}
A \emph{measurement pattern} consists of an $n$-qubit register $V$ with distinguished sets $I, O \subseteq V$ of input and output qubits and a sequence of commands consisting of the following operations:
\begin{itemize}
\item Preparations $N_i$, which initialise a qubit $i \notin I$ in the state $\ket{+}$.
\item Entangling operators $E_{ij}$, which apply a $CZ$-gate to two distinct qubits $i$ and $j$.
\item Destructive measurements $M_i^{\lambda, \alpha}$, which project a qubit $i\notin O$ onto the orthonormal basis $\{\ket{+_{\lambda,\alpha}},\ket{-_{\lambda,\alpha}}\}$, where $\lambda \in \{ \ensuremath\normalfont\textrm{XY}\xspace, \normalfont\normalfont\textrm{XZ}\xspace, \normalfont\normalfont\textrm{YZ}\xspace \}$ is the measurement plane and $\alpha$ is the measurement angle.
The projector $\ket{+_{\lambda,\alpha}}\bra{+_{\lambda,\alpha}}$ corresponds to outcome $0$ and $\ket{-_{\lambda,\alpha}}\bra{-_{\lambda,\alpha}}$
corresponds to outcome $1$.
\item Corrections $[X_i]^s$, which depend on a measurement outcome (or a linear combination of measurement outcomes) $s\in\{0,1\}$ and act as the Pauli-$X$ operator on qubit $i$ if $s$ is $1$ and as the identity otherwise,
\item Corrections $[Z_j]^t$, which depend on a measurement outcome (or a linear combination of measurement outcomes) $t\in\{0,1\}$ and act as the Pauli-$Z$ operator on qubit $j$ if $t$ is $1$ and as the identity otherwise.
\end{itemize}
\end{definition}
We will only consider \emph{runnable patterns}, which satisfy certain conditions ensuring they are implementable in practice.
\begin{definition}[{\cite[p.~4]{GFlow}}]\label{def:runnable_pattern}
A measurement pattern is \emph{runnable} if the following conditions hold.
\begin{itemize}
\item No correction depends on an outcome not yet measured.
\item All non-input qubits are prepared.
\item All non-output qubits are measured.
\item A command $C$ acts on a qubit $i$ only if $i$ has not already been measured, and one of (1)-(3) holds:
\begin{enumerate}[label=({\arabic*})]
\item $i$ is an input and $C$ is not a preparation,
\item $i$ has been prepared and $C$ is not a preparation,
\item $i$ has not been prepared, $i$ is not an input, and $C$ is a preparation.
\end{enumerate}
\end{itemize}
\end{definition}
Runnable measurement patterns can be \emph{standardized} \cite[p.~4]{GFlow}, so that all preparations $N_i$ appear first, then all the entangling operators $E_{ij}$, then the measurements $M_i^{\lambda, \alpha}$ and finally the corrections.
The entangling operators $E_{ij}$ in a pattern determine the resource graph state.
In fact, they correspond to the edges of the underlying \emph{labelled open graph}. This is formalised in the following definitions and remark, which will be used throughout the paper.
\begin{definition}
An \emph{open graph} is a tuple $(G,I,O)$ where $G=(V,E)$ is an undirected graph, and $I,O \subseteq V$ are distinguished (possibly overlapping) subsets representing \emph{inputs} and \emph{outputs}. We will write $\comp{O} := V\setminus O$ for the \emph{non-outputs} and $\comp{I}:= V\setminus I$ for the \emph{non-inputs}. If $v\not\in I$ and $v\not\in O$, we call $v$ an \emph{internal} vertex, and if $v\in I\cup O$, we call $v$ a \emph{boundary vertex}.
For vertices $u,v \in V$ we write $u\sim v$ when $u$ and $v$ are adjacent in $G$, and denote by $N_G(u)\coloneqq\{w\in V \mid w\sim u\}$ the set of neighbours of $u$.
\end{definition}
\begin{definition}\label{def:LOG}
A \emph{labelled open graph} is a tuple $\Gamma = (G,I,O, \lambda)$ where $(G,I,O)$ is an open graph, and $\lambda : \comp{O} \rightarrow \{ \ensuremath\normalfont\textrm{XY}\xspace, \normalfont\normalfont\textrm{YZ}\xspace, \normalfont\normalfont\textrm{XZ}\xspace\}$ assigns a measurement plane to each non-output vertex.
\end{definition}
\begin{remark}
A labelled open graph\ and an assignment of measurement angles $\alpha:\comp{O}\to [0,2\pi)$ carry the same information as a (standardized) runnable measurement pattern with no corrections.
Given the measurement pattern, the corresponding labelled open graph\ $(G,I,O,\lambda)$ may be determined as follows: the vertices of the graph $G$ are given by the set of qubits $V$.
The edges of $G$ are given by the set
\[
E = \{i\sim j \mid i,j\in V \text{ and $E_{ij}$ appears in the pattern}\}.
\]
The sets $I$ and $O$ are the ones given in Definition~\ref{def:meas_pattern}.
The functions $\lambda$ and $\alpha$ are determined by the measurement operators $M_i^{\lambda,\alpha}$; Definition~\ref{def:runnable_pattern} guarantees that both are well-defined.
Given a labelled open graph\ we can apply this process in reverse to construct a standardised runnable measurement pattern without corrections.
(In the absence of corrections, the order of the individual preparation commands, entangling commands, and measurement commands in the pattern does not matter since all commands of the same type commute.)
A labelled open graph\ and an assignment of measurement angles can also be determined from a measurement pattern including corrections by simply ignoring the latter (i.e.\ by assuming that all measurements yield the desired result).
In Section~\ref{sec:gflow}, we discuss necessary and sufficient conditions under which appropriate corrections commands can be determined when constructing a measurement pattern from a labelled open graph; these corrections then put constraints on the order of the measurements.
\end{remark}
In general, a single measurement pattern can result in a variety of measurement instructions, with each instruction depending on earlier measurement outcomes and the resulting corrections that must be applied.
We are, however, interested in the subset of measurement patterns that always implement the same linear map, regardless of the measurement outcomes.
For these patterns, we can identify the unique linear map implemented by the pattern with the set of measurement instructions obtained when all the measurement outcomes are $0$ (and thus no corrections are necessary).
This leads us to the following definition:
\begin{definition}\label{def:ogs-to-linear-map}
Suppose $\Gamma=(G,I,O,\lambda)$ is a labelled open graph, and let $\alpha:\comp{O}\to [0,2\pi)$ be an assignment of measurement angles.
The \emph{linear map associated with $\Gamma$ and $\alpha$} is given by
\[
M_{\Gamma,\alpha} := \left( \prod_{i\in\comp{O}} \bra{+_{\lambda(i),\alpha(i)}}_i \right) E_G N_{\comp{I}},
\]
where $E_G := \prod_{i\sim j} E_{ij}$ and $N_{\comp{I}} := \prod_{i\in\comp{I}} N_i$.
\end{definition}
\begin{remark}
Note that the projections $\bra{+_{\lambda(i),\alpha(i)}}_i$ on different qubits $i$ commute with each other.
Similarly, the controlled-Z operations $E_{ij}$ commute even if they involve some of the same qubits.
Finally, all the state preparations $N_i$ on different qubits commute.
Thus, $M_{\Gamma,\alpha}$ is fully determined by $\Gamma$ and $\alpha$.
\end{remark}
\begin{definition}\label{def:ogs-to-ZX}
Suppose $\Gamma=(G,I,O,\lambda)$ is a labelled open graph\ and $\alpha:\comp{O}\to [0,2\pi)$ is an assignment of measurement angles.
Then its \emph{associated \zxdiagram} $D_{\Gamma,\alpha}$ is defined by translating the expression for $M_{\Gamma,\alpha}$ from Definition~\ref{def:ogs-to-linear-map} according to Table~\ref{tab:MBQC-to-ZX}. The elements are composed in the obvious way and any sets of adjacent phase-free green spiders other than measurement effects are merged. In other words, merging affects green spiders which come from the translation of a preparation or entangling command.
\begin{table}
\centering
\renewcommand{\arraystretch}{2}
\begin{tabular}{c||c|c|c|c|c}
operator & $N_i$ & $E_{ij}$ & $\bra{+_{\ensuremath\normalfont\textrm{XY}\xspace,\alpha(i)}}_i$ & $\bra{+_{\normalfont\normalfont\textrm{XZ}\xspace,\alpha(i)}}_i$ & $\bra{+_{\normalfont\normalfont\textrm{YZ}\xspace,\alpha(i)}}_i$ \\ \hline
diagram & \tikzfig{plus-state} & \tikzfig{cz} & \tikzfig{XY-effect} & \tikzfig{XZ-effect} & \tikzfig{YZ-effect}
\end{tabular}
\renewcommand{\arraystretch}{1}
\caption{Translation from an associated linear map to a \zxdiagram.}
\label{tab:MBQC-to-ZX}
\end{table}
\end{definition}
\begin{example}
The measurement pattern with the qubit register $V=\{ 1,2,3,4\}$, input and output sets $I=\{ 1,2 \}$ and $O = \{ 1,4 \}$ and the sequence of commands
$$ M_2^{\ensuremath\normalfont\textrm{XY}\xspace,\frac{\pi} 2}M_3^{\normalfont\normalfont\textrm{YZ}\xspace,\pi}E_{14}E_{23}E_{24} E_{34} N_3 N_4$$
is represented by the following \zxdiagram:
\ctikzfig{example-MBQC-translation}
\end{example}
\begin{lemma}\label{lem:zx-equals-linear-map}
Suppose $\Gamma=(G,I,O,\lambda)$ is a labelled open graph\ and $\alpha:\comp{O}\to [0,2\pi)$ is an assignment of measurement angles.
Let $M_{\Gamma,\alpha}$ be the linear map specified in Definition~\ref{def:ogs-to-linear-map} and let $D_{\Gamma,\alpha}$ be the \zxdiagram constructed according to Definition~\ref{def:ogs-to-ZX}.
Then $\intf{D_{\Gamma,\alpha}}=M_{\Gamma,\alpha}$.
\end{lemma}
\begin{proof}
For each operator $M$ in Table~\ref{tab:MBQC-to-ZX} and its corresponding diagram $D_M$, it is straightforward to check that $\intf{D_M}=M$.
The result thus follows by the compositional properties of the interpretation $\intf{\cdot}$ and the fact that rewriting ZX-diagrams preserves semantics.
\end{proof}
In order to specify a converse direction to this result, we will define a special class of ZX-diagrams. Before we do that, we recall which ZX-diagrams correspond to graph states.
\begin{definition}\label{def:graph-state}
A \emph{graph state diagram} is a \zxdiagram where all vertices are green, all the connections between vertices are Hadamard edges and an output wire is incident on each vertex in the diagram.
The \emph{graph corresponding to a graph state diagram} is the graph whose vertices are the green spiders of the \zxdiagram and whose edges are given by the Hadamard edges of the \zxdiagram.
\end{definition}
\begin{definition}\label{def:MBQC-form}
A \zxdiagram is in \emph{MBQC form} if it consists of a graph state diagram in which each vertex of the graph may also be connected to:
\begin{itemize}
\item an input (in addition to its output), and
\item a measurement effect (in one of the three measurement planes) instead of the output.
\end{itemize}
\end{definition}
\begin{definition}\label{def:graph-of-diagram}
Given a \zxdiagram $D$ in MBQC form, its \emph{underlying graph} $G(D)$ is the graph corresponding to the graph state part of $D$.
\end{definition}
See Figure~\ref{fig:graph-state} for an example of a graph state diagram and a diagram in MBQC form.
\begin{figure}
\ctikzfig{graph-state-ex}
\caption{On the left, a graph state diagram. In the middle, a diagram in MBQC form with the same underlying graph. On the right, an MBQC+LC form diagram with the same underlying labelled open graph.\label{fig:graph-state}}
\end{figure}
\noindent Given these definitions we can now show the following:
\begin{lemma}\label{lem:ogs-to-ZX-is-MBQC-Form}
Let $\Gamma=(G,I,O,\lambda)$ be a labelled open graph\ and let $\alpha:\comp{O}\to [0,2\pi)$ be an assignment of measurement angles.
Then the \zxdiagram $D_{\Gamma,\alpha}$ constructed according to Definition~\ref{def:ogs-to-ZX} is in MBQC form.
\end{lemma}
\begin{proof}
Consider performing the translation described in Definition~\ref{def:ogs-to-ZX} in two steps.
The first step involves translating the preparation and entangling commands of the linear map $M_{\Gamma,\alpha}$ according to Table~\ref{tab:MBQC-to-ZX} and then merging any sets of adjacent green spiders.
This yields a graph state diagram with some additional inputs.
(The underlying graph is $G$.)
The second step is the translation of the measurement projections of $M_{\Gamma,\alpha}$.
This yields measurement effects on some of the outputs of the graph state diagram.
Thus, the resulting \zxdiagram is in MBQC form by Definition~\ref{def:MBQC-form}.
\end{proof}
The converse of Lemma~\ref{lem:ogs-to-ZX-is-MBQC-Form} also holds.
\begin{lemma}\label{lem:zx-to-pattern}
Suppose $D$ is a \zxdiagram in MBQC form.
Then there exists a labelled open graph\ $\Gamma=(G,I,O,\lambda)$ and an assignment of measurement angles $\alpha:\comp{O}\to [0,2\pi)$ such that $\intf{D} = M_{\Gamma,\alpha}$.
\end{lemma}
\begin{proof}
Let $G:=G(D)$ be the underlying graph of the \zxdiagram $D$, cf.\ Definition~\ref{def:graph-of-diagram}.
Define $I\subseteq V$ to be the set of vertices of $D$ on which an input wire is incident.
Analogously, define $O\subseteq V$ to be the set of vertices of $D$ on which an output wire is incident.
Fix $\lambda:\comp{O}\to\{\ensuremath\normalfont\textrm{XY}\xspace,\normalfont\normalfont\textrm{XZ}\xspace,\normalfont\normalfont\textrm{YZ}\xspace\}$ by using Table~\ref{tab:MBQC-to-ZX} in reverse to determine the measurement planes from the measurement effects in the \zxdiagram.
Let $\Gamma := (G,I,O,\lambda)$.
Finally, define $\alpha:\comp{O}\to [0,2\pi)$ to be the phase of the measurement effect connected to each non-output vertex in the \zxdiagram.
Then $D = D_{\Gamma,\alpha}$ and thus the desired result follows from Lemma~\ref{lem:zx-equals-linear-map}.
\end{proof}
\begin{remark}
Given a fixed labelling of the graph vertices,
Lemmas~\ref{lem:ogs-to-ZX-is-MBQC-Form} and \ref{lem:zx-to-pattern}
show that the correspondence between MBQC form \zxdiagrams and
the pairs $(\Gamma,\alpha)$, where $\Gamma$ is a labelled open graph\ and $\alpha$ is an assignment of measurement angles, is one-to-one.
\end{remark}
It will turn out to be useful to consider a `relaxed' version of the MBQC form for \zxdiagrams.
\begin{definition}
We say a \zxdiagram is in \emph{MBQC+LC} form when it is in MBQC form (see Definition~\ref{def:MBQC-form}) up to arbitrary single-qubit Clifford unitaries on the input and output wires (LC stands for `local Clifford').
When considering the underlying graph of a \zxdiagram in MBQC+LC form, we ignore these single qubit Clifford unitaries.
\end{definition}
Note that an MBQC form diagram is an MBQC+LC form diagram with trivial single-qubit unitaries on its inputs and outputs. An example diagram in MBQC+LC form is given in Figure~\ref{fig:graph-state}.
\subsection{Graph-theoretic rewriting}
\label{sec:lc}
The rewrites we will use are based on the graph-theoretic notions of \emph{local complementation} and \emph{pivoting}.
We present these operations (and their effects) in Definitions~\ref{def:loc-comp} and~\ref{def:pivot} as they appear in Ref.~\cite{DP3}.
Our interest is in the effect these operations have on a measurement pattern.
In particular, we consider whether a \zxdiagram in MBQC form will remain in MBQC form after applying a local complementation or pivot
(or remain close enough to MBQC form to still be useful).
\begin{definition}\label{def:loc-comp}
Let $G=(V,E)$ be a graph and $u\in V$ a vertex. The {\em local complementation of $G$ about the vertex $u$} is the operation resulting in the graph
$$G\star u\coloneqq \left( V, E\mathbin{\Delta}\xspace\{(b,c) : (b,u), (c,u)\in E\ \textrm{and}\ b\neq c\}\right) ,$$
where $\mathbin{\Delta}\xspace$ is the symmetric set difference, i.e.~$A\mathbin{\Delta}\xspace B\coloneqq (A\cup B)\setminus (A\cap B)$.
\end{definition}
In other words, $G\star u$ is a graph that has the same vertices as $G$. Two neighbours $b$ and $c$ of $u$ are connected in $G\star u$ if and only if they are not connected in $G$. All other edges are the same as in $G$.
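A minimal sketch of local complementation on an adjacency-set representation of a graph (illustrative only; the names are ours and not tied to any implementation referenced in this paper):
\begin{verbatim}
def local_complement(adj, u):
    """Return the adjacency dictionary of G * u."""
    new = {v: set(nbrs) for v, nbrs in adj.items()}
    nbrs = sorted(adj[u])
    for i, b in enumerate(nbrs):
        for c in nbrs[i+1:]:
            if c in new[b]:                    # toggle the edge b ~ c
                new[b].discard(c); new[c].discard(b)
            else:
                new[b].add(c); new[c].add(b)
    return new

# Example: complementing the path 1-2-3 about vertex 2 adds the edge 1~3.
G = {1: {2}, 2: {1, 3}, 3: {2}}
print(local_complement(G, 2))   # {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
\end{verbatim}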
\begin{definition}\label{def:pivot}
Let $G=(V,E)$ be a graph and $u,v\in V$ two vertices connected by an edge. The \emph{pivot of $G$ about the edge $u\sim v$} is the operation resulting in the graph $G\land uv\coloneqq G\star u\star v\star u$.
\end{definition}
If we denote the set of vertices connected to both $u$ and $v$ by $A$,
the set of vertices connected to $u$ but not to $v$ by $B$
and the set of vertices connected to $v$ but not to $u$ by $C$,
then pivoting consists of interchanging $u$ and $v$ and complementing the edges between each pair of sets $A$, $B$ and $C$.
That is, a vertex in $A$ is connected to a vertex in $B$ after pivoting if and only if the two vertices are not connected before pivoting; and similarly for the two other pairs.
All the remaining edges are unchanged, including the edges internal to $A$, $B$ and $C$.
We illustrate this by the following picture, where crossing lines between two sets indicate complementing the edges.
\[G \quad\tikzfig{pivot-L}\qquad\qquad \quad G\wedge uv \quad\tikzfig{pivot-R}
\]
\begin{remark}\label{rem:pivot_sym}
From the above characterisation it follows that pivoting is symmetric in the (neighbouring) vertices, that is, $G\land uv = G\land vu$.
\end{remark}
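Re-using the \texttt{local\_complement} sketch above, the pivot and the symmetry stated in Remark~\ref{rem:pivot_sym} can be checked directly:
\begin{verbatim}
def pivot(adj, u, v):
    assert v in adj[u]          # u and v must be adjacent
    return local_complement(
        local_complement(local_complement(adj, u), v), u)

G = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}   # the path 1-2-3-4
print(pivot(G, 2, 3) == pivot(G, 3, 2))      # True
\end{verbatim}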
In the \zxcalculus, a spider with a zero phase and exactly two incident wires is equivalent to a plain wire (representing the identity operation) by rule $(\textit{\textbf {i1}})$ in Figure~\ref{fig:zx-rules}. The following definition represents the corresponding graph operation, which will be used to remove such vertices.
\begin{definition}\label{def:identity-removal}
Let $G=(V,E)$ be a graph and let $u,v,w\in V$ be vertices such that $N_G(v)=\{u,w\}$ and $u\notin N_G(w)$, that is, the neighbours of $v$ are precisely $u$ and $w$, and $u$ is not connected to $w$.
We then define \emph{identity removal} as
$$G\idrem{v} w\coloneqq ((G\land uv)\setminus\{u\})\setminus\{v\}.$$
Since $v$ has exactly two neighbours, one of which is $w$,
the choice of $u$ is implicit in the notation.
We think of this as `dragging $u$ along $v$ to merge with $w$'.
\end{definition}
The effect of the identity removal is to remove the middle vertex $v$ and to fuse the vertices $u$ and $w$ into one, as illustrated in the picture below. Thus the operation is symmetric in $u$ and $w$, in the sense that $G\idrem{v} u$ and $G\idrem{v} w$ are equal up to a relabelling of one vertex. Note that $u$ and $w$ are allowed to have common neighbours (which will disconnect from the fused vertex $w$ as a result of identity removal).
\[G \quad\tikzfig{identity-removal-L}\qquad\qquad \quad G\idrem{v} w \quad\tikzfig{identity-removal-R}
\]
\begin{example}\label{ex:identity-removal}
Consider the following graph:
\[G \coloneqq \tikzfig{identity-removal-example1}.
\]
Note that the vertices $u,v$ and $w$ satisfy the condition for identity removal: $u$ and $w$ are not connected and are precisely the neighbours of $v$. Hence identity removal results in the graph
\[G\idrem{v} w = \tikzfig{identity-removal-example3}.
\]
\end{example}
\begin{remark}\label{rem:identity-removal-connected-vertices}
If we have vertices $u,v$ and $w$ with $N_G(v)=\{u,w\}$ but, unlike in the definition above, $u\in N_G(w)$, we can first perform a local complementation on $v$, so that $u$ and $w$ become disconnected, and then remove the identity vertex $v$. In symbols:
$$(G\star v)\idrem{v} w .$$
\end{remark}
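In the same adjacency-set representation, Definition~\ref{def:identity-removal} together with the variant of the preceding remark can be sketched as follows (again an illustration of ours, reusing the helpers defined above).
\begin{verbatim}
def delete_vertex(adj, x):
    # Adjacency sets of G with the vertex x removed.
    return {v: (n - {x}) for v, n in adj.items() if v != x}

def identity_removal(adj, v, w):
    # Computes G -v-> w, where N(v) = {u, w}.  If u and w are connected,
    # we first locally complement about v, as in the remark above.
    (u,) = adj[v] - {w}
    if u in adj[w]:
        adj = local_complement(adj, v)
    adj = pivot(adj, u, v)
    return delete_vertex(delete_vertex(adj, u), v)
\end{verbatim}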
The abstract application of a local complementation to a graph corresponds to the application of a specific set of local Clifford gates on the corresponding graph state:
\begin{theorem}[\cite{NestMBQC}, in the manner of {\cite[Theorem~2]{DP1}}]\label{thm:lc-in-zx}
Let $G=(V,E)$ be a graph with adjacency matrix $\theta$ and let $u\in V$, then
\[
\ket{G\star u} = X_{\pi/2,u}\otimes\bigotimes_{v\in V} Z_{-\pi/2,v}^{\theta_{uv}}\ket{G}.
\]
\end{theorem}
This result can be represented graphically in the \zxcalculus:
\begin{lemma}[{\cite[Theorem~3]{DP1}}]\label{lem:ZX-lcomp}
The following equality involving graph state diagrams and Clifford phase shifts follows from the graphical rewrite rules:
{
\ctikzfig{local-comp-ex}
}
Here, the underlying graph on the LHS is $G\star u$ and the underlying graph on the RHS is $G$.
Any vertices not adjacent to $u$ are unaffected and are not shown in the above diagram.
\end{lemma}
Combining this result with the definition of pivoting in terms of local complementations (cf.\ Definition~\ref{def:pivot}) we also get:
\begin{lemma}[{\cite[Theorem~3.3]{DP3}}]\label{lem:ZX-pivot}
The following diagram equality follows from the graphical rewrite rules:
\ctikzfig{pivot-desc}
Here $u$ and $v$ are a connected pair of vertices, and the underlying graph on the LHS is $G\wedge uv$ while the RHS is $G$.
Any vertices not adjacent to $u$ or $v$ are unaffected and are not shown in the above diagram.
\end{lemma}
\subsection{Generalised flow}
\label{sec:gflow}
The notion of \emph{flow} or \emph{causal flow} on open graphs was introduced by Danos and Kashefi~\cite{Danos2006Determinism-in-} as a sufficient condition to distinguish those open graphs capable of supporting a deterministic MBQC pattern with measurements in the \normalfont XY\xspace-plane.
Causal flow, however, is not a necessary condition for determinism.
That is, there are graphs that implement a deterministic pattern even though they do not have causal flow~\cite{GFlow,duncan2010rewriting}.
Browne et al.~\cite{GFlow} adapted the notion of flow to what they called
\emph{generalised flow} (gflow), which is both a necessary and sufficient condition for
`uniformly and strongly stepwise deterministic' measurement patterns (defined below).
Unlike causal flow, gflow can also be applied to arbitrary labelled open graph{}s, i.e.\ it supports measurement patterns with measurements in all three planes.
This even more general case is sometimes called \emph{extended gflow}.
\begin{definition}\label{def:determinism}
The linear map implemented by a measurement pattern for a specific set of measurement outcomes is called a \emph{branch} of the pattern.
A pattern is \emph{deterministic} if all branches are equal up to a scalar.
A pattern is \emph{strongly deterministic} if all branches are equal up to a global phase.
It is \emph{uniformly deterministic} if it is deterministic for any choice of measurement angles.
Finally, the pattern is \emph{stepwise deterministic} if any intermediate pattern -- resulting from performing some subset of the measurements and their corresponding corrections -- is again deterministic.
\end{definition}
The existence of gflow implies the uniform, strong and stepwise determinism of any pattern on the open graph (cf.~Theorem~\ref{t-flow} below).
Hence, by applying transformations to an open graph that preserve the existence of a gflow, we ensure that the modified open graph still supports a uniformly, strongly and stepwise deterministic pattern.
Note that the condition of interest is preservation of the \textit{existence} of gflow, not preservation of the specific gflow itself.
We will now give the formal definitions of (causal) flow and (extended) gflow.
\begin{definition}[{\cite[Definition~2]{Danos2006Determinism-in-}}]\label{def:causal-flow}
Let $(G,I,O)$ be an open graph. We say $G$ has \textit{(causal) flow} if there exists a map $f:\comp{O}\longrightarrow \comp{I}$ (from measured qubits to prepared qubits) and a strict partial order $\prec$ over $V$ such that for all $u\in \comp{O}$:
\begin{itemize}
\item $u \sim f(u)$
\item $u \prec f(u)$
\item $u \prec v$ for all neighbours $v\neq u$ of $f(u)$.
\end{itemize}
\end{definition}
When vertices are smaller than a vertex $v$ in the order~$\prec$, they are referred to as being `behind' or `in the past of' $v$.
The notion of gflow differs from the above definition of causal flow in two ways.
The value of $f(u)$ is allowed to be a set of vertices instead of a single vertex, so that corrections can be applied to more than one vertex at a time.
As a result of this change, the third condition of causal flow is now too strong: requiring that no element of $f(u)$ is adjacent to any vertex `in the past' of $u$ would be too restrictive.
Since corrections are applied to sets of vertices at a time, it is possible to make use of the following idea: if corrections are simultaneously applied to an even number of neighbours of $v$, then there is no net effect on $v$.
Thus, the second change takes the form of a parity condition: all vertices in the neighbourhood of $f(u)$ that lie `in the past' of $u$ are required to be in the even neighbourhood of $f(u)$.
As a result, net effects of corrections do not propagate into `the past'.
Allowing measurements in more than one measurement plane requires further careful adjustment of the parity conditions depending on the measurement plane of the vertex being measured.
\begin{definition}
Given a graph $G=(V,E)$, for any $K\subseteq V$, let $\odd{G}{K}= \{u\in V: \abs{N(u)\cap K}\equiv 1 \mod 2\}$ be the \emph{odd neighbourhood} of $K$ in $G$, i.e.\ the set of vertices having an odd number of neighbours in $K$.
If the graph $G$ is clear from context, we simply write $\odd{}{K}$.
The \emph{even neighbourhood} of $K$ in $G$, $\eve{G}{K}$, is defined in a similar way; $\eve{G}{K}= \{u\in V: \abs{N(u)\cap K}\equiv 0 \mod 2\}$.
\end{definition}
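Both notions are straightforward to compute; the following Python sketch (our own naming, for illustration only) will be reused in the later sketches.
\begin{verbatim}
def odd_neighbourhood(adj, K):
    # Vertices with an odd number of neighbours in K.
    return {v for v in adj if len(adj[v] & set(K)) % 2 == 1}

def even_neighbourhood(adj, K):
    # Vertices with an even number of neighbours in K (possibly none).
    return {v for v in adj if len(adj[v] & set(K)) % 2 == 0}
\end{verbatim}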
\begin{definition}[{\cite[Definition~3]{GFlow}}]
\label{defGFlow}
A labelled open graph{} $(G,I,O,\lambda)$ has generalised flow (or \emph{gflow}) if there exists a map $g:\comp{O}\to\pow{\comp{I}}$ and a partial order $\prec$ over $V$ such that for all $v\in \comp{O}$:
\begin{enumerate}[label=({g}\theenumi), ref=(g\theenumi)]
\item\label{it:g} If $w\in g(v)$ and $v\neq w$, then $v\prec w$.
\item\label{it:odd} If $w\in\odd{}{g(v)}$ and $v\neq w$, then $v\prec w$.
\item\label{it:XY} If $\lambda(v)=\ensuremath\normalfont\textrm{XY}\xspace$, then $v\notin g(v)$ and $v\in\odd{}{g(v)}$.
\item\label{it:XZ} If $\lambda(v)=\normalfont\normalfont\textrm{XZ}\xspace$, then $v\in g(v)$ and $v\in\odd{}{g(v)}$.
\item\label{it:YZ} If $\lambda(v)=\normalfont\normalfont\textrm{YZ}\xspace$, then $v\in g(v)$ and $v\notin\odd{}{g(v)}$.
\end{enumerate}
The set $g(v)$ is called the \emph{correction set} of $v$.
\end{definition}
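The conditions \ref{it:g}--\ref{it:YZ} can be verified mechanically for a candidate pair $(g,\prec)$. The following Python sketch (our own illustration, reusing \texttt{odd\_neighbourhood} from above; \texttt{order(v, w)} is assumed to decide whether $v\prec w$) checks them together with the requirement that correction sets avoid input vertices.
\begin{verbatim}
def is_gflow(adj, inputs, outputs, plane, g, order):
    # plane maps each non-output vertex to 'XY', 'XZ' or 'YZ';
    # g maps each non-output vertex to its correction set.
    for v in set(adj) - set(outputs):
        s = g[v]
        if s & set(inputs):                                # g(v) must avoid inputs
            return False
        odd = odd_neighbourhood(adj, s)
        if any(w != v and not order(v, w) for w in s):     # (g1)
            return False
        if any(w != v and not order(v, w) for w in odd):   # (g2)
            return False
        if plane[v] == 'XY' and not (v not in s and v in odd):   # (g3)
            return False
        if plane[v] == 'XZ' and not (v in s and v in odd):       # (g4)
            return False
        if plane[v] == 'YZ' and not (v in s and v not in odd):   # (g5)
            return False
    return True
\end{verbatim}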
\begin{remark}
Every causal flow is indeed a gflow, where $g(v):=\{f(v)\}$ and the partial order remains the same.
To see this, first note that causal flow can only be defined on labelled open graph{}s where $\lambda(v)=\ensuremath\normalfont\textrm{XY}\xspace$ for all $v\in\comp{O}$, so conditions \ref{it:XZ} and \ref{it:YZ} are vacuously satisfied.
Now, \ref{it:g} follows from the second bullet point of Definition~\ref{def:causal-flow}, \ref{it:odd} follows from the third bullet point, and \ref{it:XY} follows from the first bullet point.
\end{remark}
\begin{remark}
In the original definition of gflow in Ref.~\cite{GFlow}, condition \ref{it:odd} is given as:
\begin{align}\label{eq:wrong-g2}
\text{if }j \preccurlyeq i \text{ and } j \neq i \text{ then } j \notin \odd{}{g(i)}
\end{align}
In other publications, such as Ref.~\cite{danos_kashefi_panangaden_perdrix_2009}, the definition is changed to the version we give as \ref{it:odd}, yet this is usually done without comment.
For completeness, we provide an example which demonstrates that the condition \eqref{eq:wrong-g2} is insufficient for determinism.
Consider the following open graph:
\ctikzfig{bialgebra}
Here, the set of inputs is $\{i_1,i_2\}$, the set of outputs is $\{o_1,o_2\}$, and the non-outputs are measured in the planes $\lambda(i_1) = \lambda(i_2) = \ensuremath\normalfont\textrm{XY}\xspace$.
If we choose both measurement angles to be~$0$, it is straightforward to check that this diagram implements the linear map:
\[
\begin{pmatrix} 1&1&1&1\\ 1&-1&-1&1\\1&1&1&1\\ 1&-1&-1&1
\end{pmatrix}
\]
This has rank 2 and thus is not invertible, and in particular not unitary.
It therefore cannot be deterministically implementable and hence it should not have a gflow.
However, it is considered to have a gflow under condition \eqref{eq:wrong-g2} instead of \ref{it:odd}:
pick the partial order $i_1 \prec o_1, o_2$ and $i_2 \prec o_1, o_2$ with all other vertices incomparable.
Set $g(i_1) = \{o_1\}$ and $g(i_2) = \{o_2\}$. It is then easily checked that $(g,\prec)$ satisfies conditions \ref{it:g}, \eqref{eq:wrong-g2}, and \ref{it:XY}--\ref{it:YZ}.
Yet $(g,\prec)$ does not satisfy condition \ref{it:odd} and hence is not a gflow under the revised definition.
\end{remark}
To demonstrate that, with condition~\ref{it:odd}, the presence of a gflow indeed guarantees determinism of a pattern, we give a detailed proof of the following sufficiency theorem, which was first stated in Ref.~\cite{GFlow} as Theorem 2 with a sketch proof. The precise statement of the theorem requires some additional notation and the proof is quite lengthy and technical, so we state a coarse version of the theorem here and refer the reader to Appendix~\ref{sec:gflow-determinism} (and more specifically to Theorem~\ref{t-flow-app}) for the details.
\begin{theorem}\label{t-flow}
Let $\Gamma = (G,I,O,\lambda)$ be a labelled open graph~with a gflow and let $\alpha:\comp{O}\rightarrow [0,2\pi)$ be an assignment of measurement angles. Then there exists a runnable measurement pattern which is uniformly, strongly and stepwise deterministic, and which realizes the associated linear map $M_{\Gamma,\alpha}$ (cf.~Definition~\ref{def:ogs-to-linear-map}).
\end{theorem}
The converse also holds:
\begin{theorem}
If a pattern is stepwise, uniformly and strongly deterministic, then its underlying labelled open graph{} $(G, I, O, \lambda)$ has a gflow.
\end{theorem}
\begin{proof}
We present a proof sketch here; a complete treatment can be found in Ref.~\cite[Theorem~7.9.7]{danos_kashefi_panangaden_perdrix_2009}. The proof is by induction on the number of measurements. If the number of measurements is $n=0$, then the pattern trivially has a gflow. Suppose the pattern has $n+1$ qubits to be measured. Since the pattern is assumed to be stepwise deterministic, after performing the first measurement, the remaining pattern is still stepwise, uniformly and strongly deterministic. Hence it has a gflow by the induction hypothesis, where the partial order is given by the order in which the measurements are performed.
It remains to extend this gflow to include the first qubit to be measured. The essential part of this is to find a subset $S \subseteq \comp{I}$ that can act as the correction set of the first measurement (cf.\ Definition~\ref{defGFlow}). Given such a subset $S$, we define the full gflow as:
\begin{align*}
g^\prime(i) :=
\begin{cases}
g(i) & i\neq n \\ S& i=n,
\end{cases}
\end{align*}
where $g$ is the gflow of the smaller pattern.
\end{proof}
It is not straightforward to actually find a concrete gflow from the procedure in this proof. A constructive algorithm has since been given in Ref.~\cite{mhalla2011graph}, which finds gflow on labelled open graph{}s where all measurements are in the \normalfont XY\xspace plane. In Section~\ref{sec:MaximallyDelayedGflow}, we extend this algorithm to find extended gflow on labelled open graph{}s with measurements in all three planes.
Gflow is a property that applies to labelled open graph{}s. For convenience we also define it for ZX-diagrams.
\begin{definition}\label{dfn:zx-gflow}
We say a \zxdiagram in MBQC(+LC) form has \emph{gflow} $(g,\prec)$ if the corresponding labelled open graph\ $\Gamma$ has gflow $(g,\prec)$.
\end{definition}
The following result from Ref.~\cite{cliff-simp} shows that any unitary circuit can be converted into a deterministic measurement pattern.
\begin{proposition}[{\cite[Lemma 3.7]{cliff-simp}}]\label{prop:circuit-to-pattern}
Given a circuit there is a procedure for converting it into an equivalent measurement pattern.
Furthermore, this measurement pattern only contains \normalfont XY\xspace-plane measurements and has causal flow, so it also has gflow.
\end{proposition}
Note that Ref.~\cite{cliff-simp} does not explicitly talk about measurement patterns.
What they call graph-like diagrams, however, correspond in a straightforward manner to diagrams in MBQC form with every measured vertex being measured in the \normalfont XY\xspace-plane.
(We also note that the procedure of Proposition~\ref{prop:circuit-to-pattern} takes $O(n^2)$ operations,
where $n$ is the number of vertices.)
Below, we present a concrete example illustrating how the presence of gflow allows measurement errors to be corrected as the computation progresses.
\begin{example}\label{ex:gflow-in-action}
Consider the following labelled open graph{} $\Gamma$:
\[
\tikzfig{gflow-example-geometry}
\]
where $a$ is an input, $e$ and $f$ are outputs, and the measurement planes are given by $\lambda(a)=\lambda(b)=\ensuremath\normalfont\textrm{XY}\xspace$, $\lambda(c)=\normalfont\normalfont\textrm{XZ}\xspace$ and $\lambda(d)=\normalfont\normalfont\textrm{YZ}\xspace$. As usual, we denote the measurement angles by $\alpha : V \rightarrow [0,2\pi)$, where $V$ is the vertex set of $\Gamma$. Using Definition~\ref{def:ogs-to-ZX} (that is, we translate according to Table~\ref{tab:MBQC-to-ZX}), we obtain the corresponding \zxdiagram:
\[\tikzfig{gflow-example-zx}
\]
Note that the labelled open graph\ we started with has a gflow $(g,\prec)$ given by the following partial order
$$a\prec b\prec c\prec d\prec e,f,$$
with the function $g$ taking the values
\begin{align*}
g(a) &= \{b\} \\
g(b) &= \{c\} \\
g(c) &= \{c,d\} \\
g(d) &= \{d,e,f\}.
\end{align*}
It follows that we have
\begin{align*}
\odd{}{g(a)} &= \{a,c,d,e\} \\
\odd{}{g(b)} &= \{b,d,f\} \\
\odd{}{g(c)} &= \{c,d,f\} \\
\odd{}{g(d)} &= \varnothing,
\end{align*}
from which it is easy to verify that the conditions \ref{it:g}--\ref{it:YZ} hold.
By Theorem~\ref{t-flow}, the presence of gflow guarantees that we can correct the measurement errors provided that we measure the qubits according to the partial order. We demonstrate this for $\Gamma$ in Figure~\ref{fig:error-propagation}.
Thus suppose a measurement error of $\pi$ occurs when performing the measurement corresponding to the vertex $c$, as indicated in the top left part of Figure~\ref{fig:error-propagation}. The labels in this figure refer to the rules in Figure~\ref{fig:zx-rules}. In order to get the left diagram on the second row, we move each red $\pi$-phase past the corresponding Hadamard gate, which changes the colour of the phase to green. For the right diagram, the left green $\pi$ travels past the green node on the left and flips the sign of $\alpha(d)$. Next, to obtain the left diagram on the third row, the middle green $\pi$ travels along the middle triangle and past another Hadamard gate to become a red $\pi$. Finally, in the bottom left diagram, the red $\pi$ on the left has been fused with $-\alpha(d)$; and the red $\pi$ on the right has passed through the Hadamard gate switching its colour to green, and the adjacent green nodes have fused into a node with phase $\pi-\frac{\pi}{2}=\frac{\pi}{2}$. We then rearrange the diagram so that it looks like a measurement pattern again.
Note that all the vertices that are affected by the error are above $c$ in the partial order and hence `not yet measured' at this stage of the computation. Thus the necessary corrections may be applied to these vertices when they are measured.
\end{example}
\begin{figure}
\begin{align*}
&\tikzfig{gflow-example-corr2-11}\quad\stackrel{(\boldsymbol{\pi})}{\rightsquigarrow} & &\tikzfig{gflow-example-corr2-12} \\ \\
\stackrel{(\textit{\textbf h})\ \&\ (\textit{\textbf {i2}})}{\rightsquigarrow}\quad &\tikzfig{gflow-example-corr2-21}\quad\stackrel{(\textit{\textbf f})\ \&\ (\boldsymbol{\pi})}{\rightsquigarrow} & &\tikzfig{gflow-example-corr2-22} \\ \\
\stackrel{(\textit{\textbf f}), (\textit{\textbf h})\ \&\ (\textit{\textbf {i2}})}{\rightsquigarrow}\quad &\tikzfig{gflow-example-corr2-31}\quad\stackrel{(\boldsymbol{\pi})}{\rightsquigarrow} & &\tikzfig{gflow-example-corr2-32} \\ \\
\stackrel{(\textit{\textbf f}), (\textit{\textbf h})\ \&\ (\textit{\textbf {i2}})}{\rightsquigarrow}\quad &\tikzfig{gflow-example-corr2-41}\quad\stackrel{(\textit{\textbf f})}{\rightsquigarrow} & &\tikzfig{gflow-example-corr2-42}
\end{align*}
\caption{\label{fig:error-propagation} Propagation of a measurement error of the pattern in Example~\ref{ex:gflow-in-action}.}
\end{figure}
\subsection{Focusing gflow for {\normalfont XY\xspace} plane measurements}\label{sec:focusing-gflow}
For a labelled open graph{} $(G,I,O,\lambda)$ in which all measurements are in the \normalfont XY\xspace-plane, there is a special type of gflow which is specified by the correction function alone.
This gflow is called \emph{focused} because of the property that, among non-output vertices, corrections only affect the vertex they are meant to correct.
A labelled open graph{} where all measurements are in the \normalfont XY\xspace plane has gflow if and only if it has focused gflow.
\begin{definition}[{adapted from \cite[Definition~5]{mhalla2011graph}}]\label{def:focused-gflow}
Suppose $(G,I,O,\lambda)$ is a labelled open graph{} with the property that $\lambda(v)=\ensuremath\normalfont\textrm{XY}\xspace$ for all $v\in\comp{O}$.
Then $(g,\prec)$ is a \emph{focused gflow} on $(G,I,O,\lambda)$ if for all $v\in\comp{O}$, we have $\odd{G}{g(v)}\cap \comp{O}=\{v\}$, and furthermore $\prec$ is the transitive closure of the relation $\{(v,w) \mid v\in\comp{O} \wedge w\in g(v)\}$.
\end{definition}
\begin{theorem}[{reformulation of \cite[Theorem~2]{mhalla2011graph}}]\label{thm:mhalla2}
Suppose $(G,I,O,\lambda)$ is a labelled open graph{} with the property that $\lambda(v)=\ensuremath\normalfont\textrm{XY}\xspace$ for all $v\in\comp{O}$, then $(G,I,O,\lambda)$ has gflow if and only if it has a focused gflow.
\end{theorem}
A labelled open graph{} in which all measurements are in the \normalfont XY\xspace-plane can be \emph{reversed} by swapping the roles of inputs and outputs.
More formally:
\begin{definition}\label{def:reversed-LOG}
Suppose $(G,I,O,\lambda)$ is a labelled open graph{} with the property that $\lambda(v)=\ensuremath\normalfont\textrm{XY}\xspace$ for all $v\in\comp{O}$.
The corresponding \emph{reversed labelled open graph{}} is the labelled open graph{} where the roles of inputs and outputs are swapped, i.e.\ it is $(G,O,I,\lambda')$, where $\lambda'(v):=\ensuremath\normalfont\textrm{XY}\xspace$ for all $v\in\comp{I}$.
\end{definition}
Now if the number of inputs and outputs in the labelled open graph{} is the same, its focused gflow can also be reversed in the following sense.
\begin{corollary}\label{cor:reverse_unitary_gflow}
Suppose $(G,I,O,\lambda)$ is a labelled open graph{} with the properties that $\abs{I}=\abs{O}$ and $\lambda(v)=\ensuremath\normalfont\textrm{XY}\xspace$ for all $v\in\comp{O}$, and suppose it has a focused gflow $(g,\prec)$.
For all $v\in\comp{I}$, let $g'(v):=\{w\in\comp{O} \mid v\in g(w)\}$, and for all $u,w\in V$, let $u\prec' w$ if and only if $w\prec u$.
Then $(g',\prec')$ is a focused gflow for the reversed labelled open graph{} $(G,O,I,\lambda')$.
\end{corollary}
This follows immediately from the proofs of Ref.~\cite[Theorems~3--4]{mhalla2011graph} but it is not explicitly stated in that paper.
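As a small illustration (ours, under the assumptions of Corollary~\ref{cor:reverse_unitary_gflow}), the reversed correction function can be computed directly from its defining formula; the reversed order is simply the opposite of the original one.
\begin{verbatim}
def reverse_focused_gflow(g, measured_reversed):
    # g'(v) = { w : v in g(w) } for every non-output v of the reversed graph;
    # measured_reversed is the set of non-outputs of the reversed graph.
    return {v: {w for w, s in g.items() if v in s} for v in measured_reversed}
\end{verbatim}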
\section{Rewriting while preserving the existence of gflow}
\label{sec:rewriting}
In this section, we study a variety of topics dealing with labelled open graph{}s and gflow.
In the first subsection, we show how certain graph operations, such as local complementation and pivoting, affect the gflow.
In Section~\ref{sec:MaximallyDelayedGflow} we give a polynomial time algorithm for finding extended gflow using the concept of \emph{maximally delayed} gflow.
We combine this notion with that of a \emph{focused} extended gflow in Section~\ref{sec:focusing-extended-gflow} to transform a given gflow to give it certain useful properties.
\subsection{Graph operations that preserve the existence of gflow}
In this section, we prove some of our main technical lemmas, establishing that local complementation and related graph rewrites interact well with the gflow of the graph.
First, we show that a labelled open graph{} resulting from the local complementation of a labelled open graph{} with gflow will also have a gflow.
\newcommand{\statelcgflow}{
Let $(g,\prec)$ be a gflow for $(G,I,O,\lambda)$ and let $u\in\comp{O}$. Then $(g',\prec)$ is a gflow for $(G\star u, I, O,\lambda')$, where
\[
\lambda'(u) := \begin{cases} \normalfont\normalfont\textrm{XZ}\xspace &\text{if } \lambda(u)=\ensuremath\normalfont\textrm{XY}\xspace \\ \ensuremath\normalfont\textrm{XY}\xspace &\text{if } \lambda(u)=\normalfont\normalfont\textrm{XZ}\xspace \\ \normalfont\normalfont\textrm{YZ}\xspace &\text{if } \lambda(u)=\normalfont\normalfont\textrm{YZ}\xspace \end{cases}
\]
and for all $v\in \comp{O}\setminus\{u\}$
\[
\lambda'(v) := \begin{cases} \normalfont\normalfont\textrm{YZ}\xspace &\text{if } v\in N_G(u) \text{ and } \lambda(v)=\normalfont\normalfont\textrm{XZ}\xspace \\ \normalfont\normalfont\textrm{XZ}\xspace &\text{if } v\in N_G(u) \text{ and } \lambda(v)=\normalfont\normalfont\textrm{YZ}\xspace \\ \lambda(v) &\text{otherwise.} \end{cases}
\]
Furthermore,
\[
g'(u) := \begin{cases} g(u)\mathbin{\Delta}\xspace \{u\} &\text{if } \lambda(u)\in\{\ensuremath\normalfont\textrm{XY}\xspace,\normalfont\normalfont\textrm{XZ}\xspace\} \\ g(u) &\text{if } \lambda(u)=\normalfont\normalfont\textrm{YZ}\xspace \end{cases}
\]
and for all $v\in \comp{O}\setminus\{u\}$,
\[
g'(v) := \begin{cases} g(v) &\text{if } u\notin\odd{G}{g(v)} \\ g(v)\mathbin{\Delta}\xspace g'(u) \mathbin{\Delta}\xspace \{u\} &\text{if } u\in\odd{G}{g(v)}. \end{cases}
\]
}
\begin{lemma}\label{lem:lc_gflow}
\statelcgflow
\end{lemma}
\noindent The proof of this lemma can be found in Appendix~\ref{sec:proofs}. Note that the condition that the complemented vertex is not an output can in fact be dropped:
\begin{lemma}
Let $(g,\prec)$ be a gflow for $(G,I,O,\lambda)$ and let $u\in O$. Then $(g',\prec)$ is a gflow for $(G\star u, I, O,\lambda')$, where for all $v\in \comp{O}$
\[
\lambda'(v) := \begin{cases} \normalfont\normalfont\textrm{YZ}\xspace &\text{if } v\in N_G(u) \text{ and } \lambda(v)=\normalfont\normalfont\textrm{XZ}\xspace \\ \normalfont\normalfont\textrm{XZ}\xspace &\text{if } v\in N_G(u) \text{ and } \lambda(v)=\normalfont\normalfont\textrm{YZ}\xspace \\ \lambda(v) &\text{otherwise.} \end{cases}
\]
Furthermore, for all $v\in \comp{O}$,
\[
g'(v) := \begin{cases} g(v) &\text{if } u\notin\odd{G}{g(v)} \\ g(v) \mathbin{\Delta}\xspace \{u\} &\text{if } u\in\odd{G}{g(v)}. \end{cases}
\]
\end{lemma}
\begin{proof}
The proof is basically the same as that of Lemma~\ref{lem:lc_gflow} if we take $g(u)$ and $g'(u)$ to be empty.
The output vertex has no label, so its label does not need to be updated.
\end{proof}
Now by applying this lemma three times we see that a pivot also preserves the existence of a gflow.
\newcommand{\statecorpivotgflow}{
Let $(G,I,O,\lambda)$ be a labelled open graph\ which has a gflow, and let $u,v\in\comp{O}$ be connected by an edge. Then $(G\land uv, I, O,\hat\lambda)$, where
\[
\hat\lambda(a) = \begin{cases} \normalfont\normalfont\textrm{YZ}\xspace &\text{if } \lambda(a)=\ensuremath\normalfont\textrm{XY}\xspace \\
\normalfont\normalfont\textrm{XZ}\xspace &\text{if } \lambda(a)=\normalfont\normalfont\textrm{XZ}\xspace \\
\ensuremath\normalfont\textrm{XY}\xspace &\text{if } \lambda(a)=\normalfont\normalfont\textrm{YZ}\xspace \end{cases}
\]
for $a\in\{u,v\}$, and $\hat\lambda(w)=\lambda(w)$ for all $w\in \comp{O}\setminus\{u,v\}$ also has a gflow.
}
\begin{corollary}\label{cor:pivot_gflow}
\statecorpivotgflow
\end{corollary}
For more details regarding the correctness of this corollary, we refer to Appendix~\ref{sec:proofs}.
Somewhat surprisingly, the deletion of some types of vertices preserves the existence of gflow:
\begin{lemma}\label{lem:deletepreservegflow}
Let $(g,\prec)$ be a gflow for $(G,I,O,\lambda)$ and let $u\in \comp{O}$ with $\lambda(u) \neq \ensuremath\normalfont\textrm{XY}\xspace$. Then $(g',\prec)$ is a gflow for $(G\setminus\{u\},I,O,\lambda)$ where $\forall v\in V, v\neq u$:
\[g'(v) := \begin{cases} g(v) &\text{if } u\not \in g(v)\\ g(v)\mathbin{\Delta}\xspace g(u) &\text{if } u \in g(v) \end{cases}\]
\end{lemma}
\begin{proof}
First, observe that $u\in g(u)$ since $\lambda(u)\neq \ensuremath\normalfont\textrm{XY}\xspace$. Thus $u\not \in g'(v)$ in either case of the definition. Hence, $g'$ is indeed a function on the graph $G\setminus\{u\}$.
To check that $g'$ is indeed a gflow we check the necessary conditions for all $v\in G\setminus\{u\}$. If $u\not \in g(v)$, then $g'(v) = g(v)$ and hence we are done. If $u \in g(v)$, then $v\prec u$ and hence also $v\prec w$ for all $w \in g(u)$ or $w\in \odd{G}{g(u)}$. Since $g'(v) = g(v)\mathbin{\Delta}\xspace g(u)$, conditions \ref{it:g} and \ref{it:odd} are satisfied. For conditions \ref{it:XY}-\ref{it:YZ}, note that we cannot have $v \in g(u)$ or $v\in \odd{G}{g(u)}$ because $v\prec u$. As a result, $v\in g'(v) \iff v\in g(v)$ and $v\in \odd{G\setminus\{u\}}{g'(v)} \iff v \in \odd{G}{g(v)}$. Since the labels of all the vertices stay the same, \ref{it:XY}-\ref{it:YZ} remain satisfied.
\end{proof}
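The update of the correction sets in Lemma~\ref{lem:deletepreservegflow} is easy to carry out explicitly. The sketch below (ours, reusing \texttt{delete\_vertex} from the earlier sketches) returns the modified graph and correction function; the partial order is unchanged.
\begin{verbatim}
def delete_non_xy_vertex(adj, g, u):
    # Assumes u is a non-output vertex measured in the XZ or YZ plane,
    # so that u is contained in g(u).
    new_adj = delete_vertex(adj, u)
    new_g = {v: (s if u not in s else s ^ g[u]) for v, s in g.items() if v != u}
    return new_adj, new_g
\end{verbatim}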
\begin{remark}
The condition that $\lambda(u) \neq \ensuremath\normalfont\textrm{XY}\xspace$ is necessary in the previous lemma. Removing a vertex with label \normalfont XY\xspace will, in general, lead to a labelled open graph{} which no longer has a gflow. For instance consider the following labelled open graph:
\ctikzfig{line-graph}
where the first two vertices both have label \normalfont XY\xspace. This graph has a gflow specified by $I\prec u \prec O$ and $g(I) = \{u\}$, $g(u) = \{O\}$, but removing $u$ will disconnect the graph and hence the resulting graph does not have a gflow.
Note that if we were to consider the same labelled open graph, but with $u$ measured in a different plane, it would \emph{not} have a gflow to start with.
This is because, if it did, we would need $u\in g(I)$, so that $I\prec u$ but also $u\in g(u)$ so that $I\in \odd{}{g(u)}$ giving $u\prec I$.
Hence this does not contradict the lemma.
\end{remark}
The next corollary shows how the previous results can be combined to remove a vertex with arity 2 from a labelled open graph{} while preserving gflow. In the ZX-calculus, the idea behind this is that we use \IdRule to remove a vertex and then \SpiderRule to fuse the adjacent vertices (cf.~Definition~\ref{def:identity-removal}).
Recall from that definition that $G\idrem{v} w\coloneqq ((G\land uv)\setminus\{u\})\setminus\{v\}$.
\begin{corollary}\label{cor:id_removal}
Let $(g,\prec)$ be a gflow for the labelled open graph{} $(G,I,O,\lambda)$, and let $u,v\in\comp O$ and $w\in V$ be vertices such that $N_G(v)=\{u,w\}$ and $\lambda(u),\lambda(v)\neq \normalfont YZ\xspace$. Then $(\tilde g,\prec)$ as defined below is a gflow for $(G\idrem{v} w,I,O,\lambda)$.
For all $z\in \comp O\setminus\{u,v\}$ we have
\[
\tilde g(z) = \begin{cases} \hat g(z) &\text{if } u\notin\hat g(z), v\notin\hat g(z) \\
\hat g(z)\mathbin{\Delta}\xspace\hat g(v) &\text{if } u\notin\hat g(z)\mathbin{\Delta}\xspace\hat g(v), v\in\hat g(z) \\
\hat g(z)\mathbin{\Delta}\xspace\hat g(u)\mathbin{\Delta}\xspace\hat g(v) &\text{if } u\in\hat g(z)\mathbin{\Delta}\xspace\hat g(v), v\in\hat g(z)\mathbin{\Delta}\xspace\hat g(u) \\
\hat g(z)\mathbin{\Delta}\xspace\hat g(u) &\text{if } u\in\hat g(z), v\notin\hat g(z)\mathbin{\Delta}\xspace\hat g(u), \end{cases}
\]
where $\hat g$ is as defined in Corollary \ref{cor:pivot_gflow}.
\end{corollary}
\begin{proof}
A computation using Corollary \ref{cor:pivot_gflow} and Lemma \ref{lem:deletepreservegflow}.
\end{proof}
\begin{lemma}\label{lem:gflow-add-output}
Let $\Gamma=(G,I,O,\lambda)$ be a labelled open graph\ with $G=(V,E)$. Let $\Gamma'$ be the labelled open graph\ that results from converting an output $u\in O$ into a vertex measured in the \normalfont XY\xspace-plane and adding a new output vertex $u'$ in its stead:
\ctikzfig{gflow-add-output}
Formally, let $\Gamma'=(G',I,O',\lambda')$, where $G'=(V',E')$ with $V'=V\cup\{u'\}$ and $E'=E\cup\{u\sim u'\}$, $O'=(O\setminus\{u\})\cup\{u'\}$, and $\lambda'$ is the extension of $\lambda$ to domain $V'\setminus O'$ with $\lambda'(u)=\ensuremath\normalfont\textrm{XY}\xspace$. Then if $\Gamma$ has gflow, $\Gamma'$ also has gflow.
\end{lemma}
\begin{proof}
Suppose $\Gamma$ has gflow $(g,\prec)$.
Let $g'$ be the extension of $g$ to domain $V'\setminus O'$ which satisfies $g'(u)=\{u'\}$, and let $\prec'$ be the transitive closure of $\prec\cup\{(u,u')\}$.
The tuple $(g',\prec')$ inherits \ref{it:g} and \ref{it:XY}--\ref{it:YZ} for all $v\in V\setminus O$ because the correction sets have not changed for any of the original vertices.
Furthermore, $u'\in\odd{G'}{g'(v)}$ for any $v$ implies $u\in g'(v)$, as $u$ is the only neighbour of $u'$.
Hence $u'\in\odd{G'}{g'(v)}$ implies $v\prec' u \prec' u'$.
Therefore \ref{it:odd} continues to be satisfied for all $v\in V\setminus O$.
Now, for $u$, \ref{it:g} holds because $u\prec' u'$ by definition, \ref{it:odd} holds because $\odd{G'}{g'(u)}=\{u\}$, and \ref{it:XY} can easily be seen to hold.
Thus, $(g',\prec')$ is a gflow for $\Gamma'$.
\end{proof}
\begin{lemma}\label{lem:gflow-add-input}
Let $\Gamma=(G,I,O,\lambda)$ be a labelled open graph\ with $G=(V,E)$.
Let $\Gamma'$ be the labelled open graph\ that results from adding an additional vertex measured in the \normalfont XY\xspace-plane `before' the input $u\in I$:
\ctikzfig{gflow-add-input}
Formally, let $\Gamma'=(G',I',O,\lambda')$, where $G'=(V',E')$ with $V'=V\cup\{u'\}$ and $E'=E\cup\{u\sim u'\}$, $I'=(I\setminus\{u\})\cup\{u'\}$, and $\lambda'$ is the extension of $\lambda$ to domain $V'\setminus O$ which satisfies $\lambda'(u')=\ensuremath\normalfont\textrm{XY}\xspace$. Then if $\Gamma$ has gflow, $\Gamma'$ also has gflow.
\end{lemma}
\begin{proof}
Suppose $\Gamma$ has gflow $(g,\prec)$.
Let $g'$ be the extension of $g$ to domain $V'\setminus O$ which satisfies $g'(u')=\{u\}$, and let $\prec'$ be the transitive closure of $\prec\cup\{(u',w):w\in N_G(u)\cup\{u\}\}$.
The tuple $(g',\prec')$ inherits the gflow properties for all $v\in V\setminus O$ because the correction sets have not changed for any of the original vertices and because the additional inequalities in $\prec'$ do not affect the gflow properties for any $v\in V\setminus O$.
The latter is because
\begin{itemize}
\item $u'\notin g'(v)=g(v)$ for any $v\in V\setminus O$, and
\item $u'\notin\odd{G'}{g'(v)}=\odd{G'}{g(v)}$ for any $v\in V\setminus O$ since its only neighbour $u$ was an input in $\Gamma$ and thus satisfies $u\notin g(v)$ for any $v\in V\setminus O$.
\end{itemize}
Now, for $u'$, \ref{it:g} holds by the definition of $\prec'$.
Note that $\odd{G'}{g'(u')}=N_{G'}(u)$, so \ref{it:odd} also holds by the definition of $\prec'$.
Finally, \ref{it:XY} holds because $u'\notin g'(u')$ and $u'\in\odd{G'}{g'(u')}=N_{G'}(u)$.
Thus, $(g',\prec')$ is a gflow for $\Gamma'$.
\end{proof}
\subsection{Finding extended gflow} \label{sec:MaximallyDelayedGflow}
Ref.~\cite{MP08-icalp} gives a polynomial time algorithm for finding gflow for labelled open graph{}s with all measurements in the \normalfont XY\xspace-plane. In this section, we present an extension of this algorithm that works for measurements in all three measurement planes.
Before doing so, we note a few details from the algorithm. The intuition behind the procedure is to `maximally delay' any measurements,
thereby keeping potential correction options available for as long as possible. As a result, the algorithm finds a gflow of minimal `depth' (a notion we will make precise later).
The algorithm works backwards:
Starting from the output vertices, it iteratively constructs disjoint subsets of vertices
that can be corrected by vertices chosen in previous steps.
Viewed instead as information travelling forwards, from the inputs to the outputs,
this corresponds to only correcting a vertex at the last possible moment,
hence the name maximally delayed.
\begin{definition}[Generalisation of {\cite[Definition~4]{MP08-icalp}} to multiple measurement planes]
\label{defVk}
For a given labelled open graph $(G,I,O,\lambda)$ and a given gflow $(g,\prec)$ of $(G,I,O,\lambda)$, let
\[
V_k^\prec = \begin{cases} \max_\prec (V) &\text{if } k= 0 \\ \max_\prec (V\setminus(\bigcup_{i<k} V_i^\prec)) &\text{if } k > 0 \end{cases}
\]
where $\max_\prec(X) := \{u\in X \text{ s.t. } \forall v\in X, \neg(u\prec v)\}$ is the set of the maximal elements of $X$.
\end{definition}
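The sets $V_k^\prec$ can be computed by repeatedly stripping off maximal elements. The following Python sketch (our own illustration, with \texttt{order(u, v)} deciding whether $u\prec v$) does exactly this; since $\prec$ is a strict partial order, each layer is non-empty and the loop terminates.
\begin{verbatim}
def layers(vertices, order):
    # Returns the list [V_0, V_1, ...] of the definition above.
    remaining = set(vertices)
    result = []
    while remaining:
        layer = {u for u in remaining
                 if not any(order(u, v) for v in remaining)}
        result.append(layer)
        remaining -= layer
    return result
\end{verbatim}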
\begin{definition}[Generalisation of {\cite[Definition~5]{MP08-icalp}} to multiple measurement planes]
\label{defMoreDelayed}
For a given labelled open graph $(G,I,O,\lambda)$ and two given gflows $(g,\prec)$ and $(g',\prec')$ of $(G,I,O,\lambda)$,
$(g,\prec)$ is \emph{more delayed} than $(g',\prec')$ if for all $k$,
\[
\abs{\bigcup_{i=0}^k V_i^\prec} \geq \abs{\bigcup_{i=0}^k V_i^{\prec'}}
\]
and there exists a $k$ such that the inequality is strict.
A gflow $(g,\prec)$ is \emph{maximally delayed} if there exists no gflow of the same open graph that is more delayed.
\end{definition}
\begin{theorem}[Generalisation of {\cite[Theorem~2]{MP08-icalp}} to multiple measurement planes]
\label{thmGFlowAlgo}
There exists a polynomial time algorithm that decides whether a given
labelled open graph\ has an extended gflow, and that outputs such a gflow if it exists.
Moreover, the output gflow is maximally delayed.
\end{theorem}
The proof of this theorem can be found in Appendix~\ref{sec:FindingGflow}.
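Purely as an illustration of the backwards, layer-by-layer structure of that algorithm, the following brute-force Python sketch (ours; it runs in exponential time, so it is \emph{not} the polynomial algorithm of Theorem~\ref{thmGFlowAlgo}, which replaces the subset search by linear algebra over GF(2)) searches for correction sets among the vertices already assigned to later layers, reusing \texttt{odd\_neighbourhood} from above.
\begin{verbatim}
from itertools import chain, combinations

def subsets(xs):
    xs = list(xs)
    return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

def find_gflow_bruteforce(adj, inputs, outputs, plane):
    processed = set(outputs)          # vertices measured later than the current layer
    g, layer_list = {}, [set(outputs)]
    while processed != set(adj):
        new_layer = set()
        for v in set(adj) - processed:
            if plane[v] != 'XY' and v in inputs:
                continue              # XZ/YZ needs v in g(v), impossible for inputs
            base = {v} if plane[v] != 'XY' else set()
            pool = processed - set(inputs)
            for s0 in subsets(pool):
                s = set(s0) | base
                odd = odd_neighbourhood(adj, s)
                ok = ((plane[v] == 'XY' and v not in s and v in odd) or
                      (plane[v] == 'XZ' and v in s and v in odd) or
                      (plane[v] == 'YZ' and v in s and v not in odd))
                if ok and (s | odd) - {v} <= processed:
                    g[v] = s
                    new_layer.add(v)
                    break
        if not new_layer:
            return None               # no gflow exists
        layer_list.append(new_layer)
        processed |= new_layer
    return g, layer_list
\end{verbatim}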
\subsection{Focusing extended gflow}\label{sec:focusing-extended-gflow}
In Section~\ref{sec:focusing-gflow}, we introduced the notion of focused gflow for labelled open graph{}s in which all measurements are in the \normalfont XY\xspace plane.
There is no canonical generalisation of this notion to labelled open graph{}s with measurements in multiple planes.
Hamrit and Perdrix suggest three different extensions of focused gflow to the case of multiple measurement planes, which restrict correction operations on non-output qubits to only a single type of Pauli operator overall \cite[Definition~2]{hamrit2015reversibility}.
Here, we go a different route by requiring that non-output qubits only appear in correction sets, or odd neighbourhoods of correction sets, if they are measured in specific planes.
This means the correction operators which may be applied to some non-output qubit depend on the plane in which that qubit is measured.
The new notion of focused gflow will combine particularly nicely with the phase-gadget form of MBQC+LC diagrams of Section~\ref{sec:phasegadgetform}.
We begin by proving some lemmas that allow any gflow to be focused in our sense.
\begin{lemma}\label{lem:successor-gflow}
Let $(G,I,O,\lambda)$ be a labelled open graph which has gflow $(g,\prec)$.
Suppose there exist $v,w\in\comp{O}$ such that $v\prec w$.
Define $g'(v):=g(v)\mathbin{\Delta}\xspace g(w)$ and $g'(u):=g(u)$ for all $u\in\comp{O}\setminus\{v\}$, then $(g',\prec)$ is a gflow.
\end{lemma}
\begin{proof}
As the correction set only changes for $v$, the gflow properties remain satisfied for all other vertices.
Now, suppose $w'\in g'(v)$, then $w'\in g(v) \vee w'\in g(w)$.
In the former case, $v\prec w'$, and in the latter case, $v\prec w\prec w'$, since $(g,\prec)$ is a gflow, so \ref{it:g} holds.
Similarly, suppose $w'\in\odd{}{g'(v)}$, then by linearity of $\odd{}{\cdot}$ we have $w'\in\odd{}{g(v)} \vee w'\in\odd{}{g(w)}$.
Again, this implies $v\prec w'$ or $v\prec w\prec w'$ since $(g,\prec)$ is a gflow, so \ref{it:odd} holds.
Finally, $v\prec w$ implies $v\notin g(w)$ and $v\notin\odd{}{g(w)}$ since $(g,\prec)$ is a gflow.
Therefore $v\in g'(v) \Longleftrightarrow v\in g(v)$ and $v\in\odd{}{g'(v)}\Longleftrightarrow v\in\odd{}{g(v)}$.
Thus \ref{it:XY}--\ref{it:YZ} hold and $(g',\prec)$ is a gflow.
\end{proof}
\begin{lemma}\label{lem:focus-single-vertex}
Let $(G,I,O,\lambda)$ be a labelled open graph, let $(g,\prec)$ be a gflow for this open graph, and let $v\in\comp{O}$.
Then there exists $g':\comp{O}\to\pow{\comp{I}}$ such that
\begin{enumerate}
\item for all $w\in\comp{O}$, either $v=w$ or $g'(w)=g(w)$,
\item for all $w\in g'(v)\cap\comp{O}$, either $v=w$ or $\lambda(w) = \ensuremath\normalfont\textrm{XY}\xspace$,
\item for all $w\in \odd{}{g'(v)}\cap\comp{O}$, either $v=w$ or $\lambda(w)\neq \ensuremath\normalfont\textrm{XY}\xspace$, and
\item $(g',\prec)$ is a gflow for $(G,I,O,\lambda)$.
\end{enumerate}
We can construct this $g'$ in a number of steps that is polynomial in the number of vertices of $G$.
\end{lemma}
\begin{proof}
Let $g_0:=g$; we will modify the function in successive steps to $g_1,g_2$, and so on.
For each non-negative integer $k$ in turn, define
\begin{align*}
S_{k,\ensuremath\normalfont\textrm{XY}\xspace} &:= \{u\in (\odd{}{g_k(v)}\cap\comp{O}) \setminus\{v\} : \lambda(u)=\ensuremath\normalfont\textrm{XY}\xspace\}, \\
S_{k,\normalfont\normalfont\textrm{XZ}\xspace} &:= \{u\in (g_k(v)\cap\comp{O}) \setminus\{v\} : \lambda(u)=\normalfont\normalfont\textrm{XZ}\xspace\}, \\
S_{k,\normalfont\normalfont\textrm{YZ}\xspace} &:= \{u\in (g_k(v)\cap\comp{O}) \setminus\{v\} : \lambda(u)=\normalfont\normalfont\textrm{YZ}\xspace\},
\end{align*}
and set $S_k := S_{k,\ensuremath\normalfont\textrm{XY}\xspace} \cup S_{k,\normalfont\normalfont\textrm{XZ}\xspace} \cup S_{k,\normalfont\normalfont\textrm{YZ}\xspace}$.
Finding $S_k$ takes $O(\abs{V}^2)$ operations.
If $S_k=\emptyset$, let $g':=g_k$ and stop.
Otherwise, choose $w_k\in S_k$ among the elements minimal in $\prec$, and define
\[
g_{k+1}(u) := \begin{cases} g_k(v)\mathbin{\Delta}\xspace g_k(w_k) &\text{if } u=v \\ g_k(u) &\text{otherwise.} \end{cases}
\]
Note $w_k\in S_k$ implies $w_k\neq v$, as well as either $w_k\in g_k(v)$ or $w_k\in\odd{}{g_k(v)}$.
Thus if $(g_k,\prec)$ is a gflow, then $v\prec w_k$, and hence by Lemma~\ref{lem:successor-gflow}, $(g_{k+1},\prec)$ is also a gflow.
Since $(g_0,\prec)$ is a gflow, this means $(g_k,\prec)$ is a gflow for all $k$.
Now, if $w_k\in S_{k,\ensuremath\normalfont\textrm{XY}\xspace}$, then $w_k\in\odd{}{g_k(w_k)}$ by \ref{it:XY}.
This implies $w_k\notin\odd{}{g_{k+1}(v)}$, and thus $w_k\notin S_{k+1}$.
Similarly, if $w_k\in S_{k,\normalfont\normalfont\textrm{XZ}\xspace} \cup S_{k,\normalfont\normalfont\textrm{YZ}\xspace}$, then $w_k\in g_k(w_k)$ by \ref{it:XZ} or \ref{it:YZ}.
This implies $w_k\notin g_{k+1}(v)$, and thus $w_k\notin S_{k+1}$.
Hence, in each step we remove a minimal element from the set.
Suppose there exists $w'\in S_{k+1}\setminus S_k$, then either $w'\in g_k(w_k)$ or $w'\in\odd{}{g_k(w_k)}$; in either case $w_k\prec w'$.
In other words, we always remove a minimal element from the set and add only elements that come strictly later in the partial order.
Therefore, the process terminates after $n\leq\abs{V}$ steps, at which point $S_n=\emptyset$,
and the process requires $O(\abs{V}^2)$ operations at each step.
The total complexity is therefore $O(\abs{V}^3)$.
The function $g'=g_n$ has the desired properties: (1) holds because we never modify the value of the function on inputs other than $v$, (2) and (3) hold because $S_n=\emptyset$, and (4) was shown to follow from Lemma~\ref{lem:successor-gflow}.
\end{proof}
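The procedure in this proof is easily made explicit. The sketch below (ours, reusing \texttt{odd\_neighbourhood} from above; \texttt{order(v, w)} decides whether $v\prec w$) focuses the correction set of a single vertex $v$ by repeatedly taking symmetric differences with the correction set of a minimal offending vertex.
\begin{verbatim}
def focus_vertex(adj, outputs, plane, g, order, v):
    g = dict(g)                          # work on a copy of the correction function
    while True:
        odd = odd_neighbourhood(adj, g[v])
        offending = (
            {u for u in (odd - set(outputs)) - {v} if plane[u] == 'XY'} |
            {u for u in (g[v] - set(outputs)) - {v} if plane[u] != 'XY'})
        if not offending:
            return g
        # pick an offending vertex minimal in the partial order
        w = next(u for u in offending
                 if not any(order(x, u) for x in offending))
        g[v] = g[v] ^ g[w]               # symmetric difference of correction sets
\end{verbatim}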
Based on these lemmas, we can now show the focusing property: first for arbitrary labelled open graph{}s and then for labelled open graph{}s corresponding to an MBQC diagram in phase-gadget form.
These results state that correction sets can be simplified to only contain qubits measured in the \normalfont XY\xspace plane.
Moreover, side-effects of corrections (i.e.\ effects on qubits other than the one being corrected) never affect qubits measured in the \normalfont XY\xspace plane.
\begin{proposition}\label{prop:focused-gflow}
Let $(G,I,O,\lambda)$ be a labelled open graph which has gflow.
Then $(G,I,O,\lambda)$ has a maximally delayed gflow $(g,\prec)$ with the following properties for all $v\in V$:
\begin{itemize}
\item for all $w\in g(v)\cap\comp{O}$, either $v=w$ or $\lambda(w)= \ensuremath\normalfont\textrm{XY}\xspace$, and
\item for all $w\in \odd{}{g(v)}\cap\comp{O}$, either $v=w$ or $\lambda(w)\neq \ensuremath\normalfont\textrm{XY}\xspace$.
\end{itemize}
This maximally delayed gflow can be constructed in a number of steps that is polynomial in the number of vertices in $G$.
\end{proposition}
\begin{proof}
Let $(g_0,\prec)$ be a maximally delayed gflow of $(G,I,O,\lambda)$.
Set $n:=\abs{V}$ and consider the vertices in some order $v_1,\ldots,v_n$.
For each $k=1,\ldots,n$, let $g_k$ be the function that results from applying Lemma~\ref{lem:focus-single-vertex} to the gflow $(g_{k-1},\prec)$ and the vertex $v_k$.
Then $g_k$ satisfies the two properties for the vertex $v_k$.
The function $g_k$ also equals $g_{k-1}$ on all vertices other than $v_k$, so in fact $g_k$ satisfies the two properties for all vertices $v_1,\ldots,v_k$.
Thus, $g_n$ satisfies the two properties for all vertices.
Moreover, the partial order does not change, so $(g_n,\prec)$ is as delayed as $(g_0,\prec)$; i.e.\ it is maximally delayed.
Hence if $g:=g_n$, then $(g,\prec)$ has all the desired properties.
The construction of each successive $g_{k+1}$ via Lemma~\ref{lem:focus-single-vertex} takes $O(n^3)$ operations,
which we perform at most $n$ times, giving a complexity of $O(n^4)$.
\end{proof}
The extended notion of focused gflow also allows us to prove another result which will be useful for the optimisation algorithm later.
First, note that if a labelled open graph{} has gflow, then the labelled open graph{} that results from deleting all vertices measured in the \normalfont XZ\xspace or \normalfont YZ\xspace planes still has gflow.
\begin{lemma}\label{lem:gflow_drop_gadgets}
Suppose $(G,I,O,\lambda)$ is a labelled open graph{} which has gflow.
Let $(G',I,O,\lambda')$ be the induced labelled open graph{} on the vertex set $V'=\{v\in V\mid v\in O \text{ or } \lambda(v)=\ensuremath\normalfont\textrm{XY}\xspace\}$.
Then $(G',I,O,\lambda')$ has gflow.
\end{lemma}
\begin{proof}
Apply Lemma~\ref{lem:deletepreservegflow} to each vertex measured in the \normalfont XZ\xspace or \normalfont YZ\xspace plane one by one.
Recall from Definition~\ref{defGFlow} that input vertices are measured in the \normalfont XY\xspace plane and so
are not removed by this process.
\end{proof}
We can now show that in a labelled open graph{} which has gflow and which satisfies $\abs{I}=\abs{O}$, any internal \normalfont XY\xspace vertex must have more than one neighbour.\footnote{The condition $\abs{I}=\abs{O}$ is necessary: consider the labelled open graph{} $(G, \emptyset, \{o\}, \lambda)$, where $G$ is the connected graph on two vertices $\{v,o\}$, and $\lambda(v)=\ensuremath\normalfont\textrm{XY}\xspace$.
Then $v$ is internal and has only a single neighbour, yet the labelled open graph{} has gflow with $g(v)=\{o\}$ and $v\prec o$.}
\begin{proposition}\label{prop:XY-neighbours}
Let $(G,I,O,\lambda)$ be a labelled open graph{} which has gflow and for which $\abs{I}=\abs{O}$.
Suppose $v\in\comp{O}$ satisfies $\lambda(v)=\ensuremath\normalfont\textrm{XY}\xspace$.
Then either $v\in I$ and $\abs{N_{G}(v)}\geq 1$, or $v\notin I$ and $\abs{N_{G}(v)}\geq 2$.
\end{proposition}
\begin{proof}
Consider some $v\in\comp{O}$ such that $\lambda(v)=\ensuremath\normalfont\textrm{XY}\xspace$.
Note that a vertex with no neighbours is in the even neighbourhood of any set of vertices.
Therefore we must have $\abs{N_{G}(v)}\geq 1$, since $(G,I,O,\lambda)$ has gflow and $v$ must be in the odd neighbourhood of its correction set by \ref{it:XY}.
Now suppose for a contradiction that $v\notin I$ and $\abs{N_{G}(v)}=1$.
Denote by $u$ the single neighbour of $v$.
If the labelled open graph{} contains any vertices measured in the \normalfont XZ\xspace or \normalfont YZ\xspace planes, by Lemma~\ref{lem:gflow_drop_gadgets}, we can remove those vertices while preserving the property of having gflow.
Since $\lambda(v)=\ensuremath\normalfont\textrm{XY}\xspace$, the removal process preserves~$v$.
The new labelled open graph{} has gflow and $v$ cannot have gained any new neighbours, so $u$ must also be preserved by the argument of the first paragraph above.
Thus, without loss of generality, we will assume that all non-outputs of $(G,I,O,\lambda)$ are measured in the \normalfont XY\xspace plane.
The labelled open graph{} $(G,I,O,\lambda)$ has gflow and satisfies $\lambda(w)=\ensuremath\normalfont\textrm{XY}\xspace$ for all $w\in\comp{O}$, so by Theorem~\ref{thm:mhalla2} it has a focused gflow $(g,\prec)$.
To satisfy the gflow condition \ref{it:XY} for $v$, that is, to satisfy $v\in\odd{G}{g(v)}$, we must have $u\in g(v)$.
This then implies $v\prec u$ by \ref{it:g}.
Since $\abs{I}=\abs{O}$, the focused gflow $(g,\prec)$ can be reversed in the sense of Corollary~\ref{cor:reverse_unitary_gflow}.
Denote by $(G,O,I,\lambda')$ the reversed labelled open graph{} (cf.\ Definition~\ref{def:reversed-LOG}) and by $(g',\prec')$ the corresponding reversed focused gflow.
Since $v\notin I$, $v$ remains a non-output in the reversed graph, so it has a correction set.
But $g'(v)=\{w\in\comp{O}\mid v\in g(w)\}$, so it cannot contain $u$ because $v\notin g(u)$ by $v\prec u$.
Thus, since $u$ is the only neighbour of $v$, we have $v\notin\odd{G}{g'(v)}$, contradicting \ref{it:XY}.
Therefore, the initial assumption must be wrong, i.e.\ if $v\notin I$ then $\abs{N_{G}(v)}\geq 2$.
\end{proof}
\begin{remark}
This implies that in any unitary MBQC-form \zxdiagram with gflow, any vertex measured in the \normalfont XY\xspace-plane has at least two incident wires (plus the wire leading to the measurement effect), since being an input vertex of the labelled open graph{} implies being connected to an input wire in the \zxdiagram.
\end{remark}
\section{Simplifying measurement patterns}\label{sec:simplifying}
In the previous section, we saw several ways in which labelled open graph{}s can be modified while preserving the existence of gflow. In this section, we will see how these modifications can be done on measurement patterns in a way that preserves the computation being performed.
The goal of the simplifications in this section is to reduce the number of qubits needed to implement the computation.
Since we are specifically interested in patterns with gflow, we will represent a pattern by a ZX-diagram in MBQC+LC form, which carries essentially the same information.
Before we find qubit-removing rewrite rules, however, we establish how local Cliffords in an MBQC+LC form diagram can be changed into measurements in Section~\ref{sec:local-Cliffords} and how local complementations affect a pattern in Section~\ref{sec:pattern-local-complementation}. We use local complementations to remove Clifford vertices from a pattern in Section~\ref{sec:removecliffordqubits}, and to change a pattern so that only two measurement planes are necessary in Section~\ref{sec:phasegadgetform}. Finally, in Section~\ref{sec:further-opt} we find some further simplifications that allow the removal of additional qubits.
\subsection{Transforming local Cliffords into measurements}\label{sec:local-Cliffords}
We introduced MBQC+LC diagrams as an extension of MBQC form diagrams. In this section we will see that we can always convert the local Clifford gates into measurements to turn the diagram into MBQC form.
\begin{lemma}\label{lem:SQU-to-MBQC-form}
Any \zxdiagram $D$ which is in MBQC+LC form can be brought into MBQC form.
Moreover, if the MBQC-form part of $D$ involves $n$ qubits, of which $p$ are inputs and $q$ are outputs, then the resulting MBQC-form diagram contains at most $n+2p+4q$ qubits.
\end{lemma}
\begin{proof}
Any single-qubit Clifford unitary can be expressed as a composite of three phase shifts \cite[Lemma~3]{backens1}.
Note that this result holds with either choice of colours, i.e.\ any single-qubit Clifford unitary can be expressed as \tikzfig{SQC-red} or \tikzfig{SQC-green}.
Now, with the green-red-green version, for any Clifford operator on an input, we can `push' the final green phase shift through the graph state part onto the outgoing wire.
There, it will either merge with the measurement effect or with the output Clifford unitary:
\ctikzfig{SQC-in-replacement}
If $\gamma\in\{0,\pi\}$, merging the phase shift with a measurement effect may change the angle but not the phase label, e.g.\ if $\gamma=\pi$:
\begin{center}
\tikzfig{pivot-pi-phases-XY} \qquad \tikzfig{pivot-pi-phases-XZ} \qquad \tikzfig{pivot-pi-phases-YZ}
\end{center}
If $\gamma\in\{\frac\pi2,-\frac\pi2\}$, merging the phase shift with a measurement effect will flip the phase labels \normalfont XZ\xspace and \normalfont YZ\xspace, e.g.\ if $\gamma=-\frac\pi2$:
\begin{center}
\tikzfig{lc-N-XY} \qquad \tikzfig{lc-N-XZ} \qquad \tikzfig{lc-N-YZ}
\end{center}
Thus we need to add at most two new qubits to the MBQC-form part when removing a Clifford unitary on the input.
For a Clifford unitary on the output, we have
\ctikzfig{SQU-out-replacement}
Thus we add at most four new qubits.
Combining these properties, we find that rewriting to MBQC form adds at most $2p+4q$ new qubits to the pattern.
\end{proof}
\begin{proposition}
Suppose $D$ is a \zxdiagram in MBQC+LC form and that its MBQC part has gflow.
Let $D'$ be the \zxdiagram that results from bringing $D$ into MBQC form as in Lemma~\ref{lem:SQU-to-MBQC-form}.
Then $D'$ has gflow.
\end{proposition}
\begin{proof}
By applying Lemma~\ref{lem:SQU-to-MBQC-form} repeatedly, we can incorporate any local Clifford operators into the MBQC form part of the diagram.
Lemmas~\ref{lem:gflow-add-output} and~\ref{lem:gflow-add-input} ensure that each step preserves the property of having gflow.
\end{proof}
\subsection{Local complementation and pivoting on patterns}\label{sec:pattern-local-complementation}
Lemmas~\ref{lem:ZX-lcomp} and~\ref{lem:ZX-pivot} showed how to apply a local complementation and pivot on a ZX-diagram by introducing some local Clifford spiders. In this section we will show how these rewrite rules can be used on MBQC+LC diagrams.
\begin{lemma}\label{lem:lc-MBQC-form-non-input}
Let $D$ be an MBQC+LC diagram and suppose $u\in G(D)$ is not an input vertex.
Then the diagram resulting from applying Lemma~\ref{lem:ZX-lcomp} on $u$ (\ie locally complementing), can be turned back into an MBQC+LC diagram $D'$ with $G(D')=G(D)\star u$. If $D$ had gflow, then $D'$ will also have gflow.
\end{lemma}
\begin{proof}
Suppose $D$ is an MBQC+LC diagram, $\Gamma=(G,I,O,\lambda)$ the corresponding labelled open graph, and $\alpha:\comp{O}\to[0,2\pi)$ gives the associated measurement angles.
By assumption, $u\notin I$, so -- with the exception of the output wire or the edge to the measurement effect -- all edges incident on $u$ connect to neighbouring vertices in the graph.
The input wires on the other qubits can be safely ignored.
To get back an MBQC+LC diagram after Lemma~\ref{lem:ZX-lcomp} is applied to $u$, we only need to rewrite the measurement effects, and hence we need to construct new $\lambda'$ and $\alpha'$ for these measurement effects. We do that as follows.
First of all, there are no changes to the measurement effects on vertices $v\not\in N(u)\cup\{u\}$, and hence for those vertices we have $\lambda'(v)=\lambda(v)$ and $\alpha'(v)=\alpha(v)$.
The vertex $u$ gets a red $\frac\pi2$ phase from the application of Lemma~\ref{lem:ZX-lcomp}. If $u\in O$, then it has no associated measurement plane or angle. In this case, this red $\frac\pi2$ simply stays on the output wire, as allowed in an MBQC+LC diagram. When $u\notin O$, there are three possibilities, depending on $\lambda(u)$:
\begin{itemize}
\item If $\lambda(u)=\ensuremath\normalfont\textrm{XY}\xspace$, then the new measurement effect is
\ctikzfig{lc-u-XY}
i.e.\ $\lambda'(u)=\normalfont\normalfont\textrm{XZ}\xspace$ and $\alpha'(u)=\frac{\pi}{2}-\alpha(u)$.
\item If $\lambda(u)=\normalfont\normalfont\textrm{XZ}\xspace$, then the new measurement effect is
\ctikzfig{lc-u-XZ}
i.e.\ $\lambda'(u)=\ensuremath\normalfont\textrm{XY}\xspace$ and $\alpha'(u)=\alpha(u)-\frac{\pi}{2}$.
\item If $\lambda(u)=\normalfont\normalfont\textrm{YZ}\xspace$, then the new measurement effect is
\ctikzfig{lc-u-YZ}
i.e.\ $\lambda'(u)=\normalfont\normalfont\textrm{YZ}\xspace$ and $\alpha'(u)=\alpha(u)+\frac{\pi}{2}$.
\end{itemize}
The vertices $v$ that are neighbours of $u$ get a green $-\frac\pi2$ phase. Again, if such a $v$ is an output, this phase can be put as a local Clifford on the output. If it is not an output, then there are also three possibilities depending on $\lambda(v)$:
\begin{itemize}
\item If $\lambda(v)=\ensuremath\normalfont\textrm{XY}\xspace$, then the new measurement effect is
\ctikzfig{lc-N-XY}
i.e.\ $\lambda'(v)=\ensuremath\normalfont\textrm{XY}\xspace$ and $\alpha'(v)=\alpha(v)-\frac{\pi}{2}$.
\item If $\lambda(v)=\normalfont\normalfont\textrm{XZ}\xspace$, then the new measurement effect is
\ctikzfig{lc-N-XZ}
i.e.\ $\lambda'(v)=\normalfont\normalfont\textrm{YZ}\xspace$ and $\alpha'(v)=\alpha(v)$.
\item If $\lambda(v)=\normalfont\normalfont\textrm{YZ}\xspace$, then the new measurement effect is
\ctikzfig{lc-N-YZ}
i.e.\ $\lambda'(v)=\normalfont\normalfont\textrm{XZ}\xspace$ and $\alpha'(v)=-\alpha(v)$.
\end{itemize}
With these changes, we see that the resulting diagram $D'$ is indeed in MBQC+LC form. The underlying graph $G(D')$ results from the local complementation about $u$ of the original graph $G(D)$. Furthermore, the measurement planes changed in the same way as in Lemma~\ref{lem:lc_gflow}, and hence if $D$ had gflow, then $D'$ will also have gflow.
\end{proof}
\begin{proposition}\label{prop:MBQC-lc-MBQC}
Let $D$ be an MBQC+LC diagram and suppose $u$ is an arbitrary vertex.
Then after application of a local complementation to $u$, the resulting diagram can be turned into an MBQC+LC diagram.
\end{proposition}
\begin{proof}
If $u$ is not an input vertex, the result is immediate from Lemma~\ref{lem:lc-MBQC-form-non-input}.
If instead $u$ is an input vertex, we modify $D$ by replacing the input wire incident on $u$ by an additional graph vertex $u'$ measured in the \normalfont XY\xspace-plane at angle 0, and a Hadamard unitary on the input wire:
\ctikzfig{input-replacement}
Throughout this process, the measurement effect on $u$ (if any) does not change, so it is left out of the above equation.
In the resulting diagram $D'$, $u$ is no longer an input.
Furthermore, $D'$ is an MBQC+LC diagram.
Thus, the desired result follows by applying Lemma~\ref{lem:lc-MBQC-form-non-input} to $D'$.
\end{proof}
A pivot is just a sequence of three local complementations.
Thus, the previous proposition already implies that when Lemma~\ref{lem:ZX-pivot} is applied to an MBQC+LC diagram the resulting diagram can also be brought back into MBQC+LC form. Nevertheless, it will be useful to explicitly write out how the measurement planes of the vertices change.
\begin{lemma}\label{lem:pivot-MBQC-form-non-input}
Let $D$ be an MBQC+LC diagram and suppose $u$ and $v$ are neighbouring vertices in the graph state and are not input vertices of the underlying labelled open graph.
Then the diagram resulting from applying Lemma~\ref{lem:ZX-pivot} to $u$ and $v$ (\ie a pivot about $u\sim v$) can be brought back into MBQC+LC form.
The resulting \zxdiagram $D'$ satisfies $G(D') = G(D)\wedge uv$. If $D$ had gflow, then $D'$ will also have gflow.
\end{lemma}
\begin{proof}
Suppose $\Gamma=(G,I,O,\lambda)$ is the labelled open graph\ underlying $D$ and suppose $\alpha:\comp{O}\to[0,2\pi)$ gives the measurement angles.
We will denote the measurement planes after pivoting by $\lambda':\comp{O}\to\{\ensuremath\normalfont\textrm{XY}\xspace,\normalfont\normalfont\textrm{XZ}\xspace,\normalfont\normalfont\textrm{YZ}\xspace\}$ and the measurement angles after pivoting by $\alpha':\comp{O}\to[0,2\pi)$.
Let $a\in\{u,v\}$, then:
\begin{itemize}
\item If $a$ is an output, we consider the Hadamard resulting from the pivot operation as a Clifford operator on the output.
\item If $\lambda(a)=\ensuremath\normalfont\textrm{XY}\xspace$ then $\lambda'(a) = \normalfont\normalfont\textrm{YZ}\xspace$ and if $\lambda(a)=\normalfont\normalfont\textrm{YZ}\xspace$ then $\lambda'(a) = \ensuremath\normalfont\textrm{XY}\xspace$:
\ctikzfig{pivot-u-XY}
In both cases, the measurement angle stays the same: $\alpha'(a) = \alpha(a)$.
\item If $\lambda(a)=\normalfont\normalfont\textrm{XZ}\xspace$, then
\ctikzfig{pivot-u-XZ}
\ie $\lambda'(a) = \normalfont\normalfont\textrm{XZ}\xspace$ and $\alpha'(a) = \frac\pi2 - \alpha(a)$.
\end{itemize}
The only other changes are new green $\pi$ phases on vertices $w\in N(u)\cap N(v)$.
For measured (i.e.\ non-output) vertices, these preserve the measurement plane and are absorbed into the measurement angle in all three cases:
\begin{align*}
(\lambda'(w), \alpha'(w)) =
\begin{cases}
(\ensuremath\normalfont\textrm{XY}\xspace, \alpha(w) + \pi) & \text{if } \lambda(w) = \ensuremath\normalfont\textrm{XY}\xspace \mspace{-1.5mu} \quad \tikzfig{pivot-pi-phases-XY} \\
(\normalfont\normalfont\textrm{YZ}\xspace, -\alpha(w)) & \text{if } \lambda(w) = \normalfont\normalfont\textrm{YZ}\xspace \quad \tikzfig{pivot-pi-phases-YZ} \\
(\normalfont\normalfont\textrm{XZ}\xspace, -\alpha(w)) & \text{if } \lambda(w) = \normalfont\normalfont\textrm{XZ}\xspace \quad \tikzfig{pivot-pi-phases-XZ}
\end{cases}
\end{align*}
If instead $w$ is an output vertex, we consider the green $\pi$ phase shift as a Clifford operator on the output wire.
The measurement planes and the graph change exactly like in Corollary~\ref{cor:pivot_gflow} and hence $D'$ has gflow when $D$ had gflow.
\end{proof}
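Analogously to the sketch following Lemma~\ref{lem:lc-MBQC-form-non-input}, the relabelling for a pivot can be written down directly. In the Python sketch below (again our own illustration, with names of our own choosing), the graph pivot is performed as the three local complementations mentioned above, after which the planes and angles are changed according to the present lemma; Hadamards and $\pi$ phases landing on outputs are again not recorded.
\begin{verbatim}
from math import pi

def local_complement(adj, u):
    nbrs = list(adj[u])
    for i, v in enumerate(nbrs):
        for w in nbrs[i + 1:]:
            if w in adj[v]:
                adj[v].discard(w); adj[w].discard(v)
            else:
                adj[v].add(w); adj[w].add(v)

def pivot_and_update(adj, u, v, plane, angle, outputs):
    common = (adj[u] & adj[v]) - {u, v}   # these vertices get a pi phase
    # graph pivot about u~v = local complementations about u, v, u
    local_complement(adj, u)
    local_complement(adj, v)
    local_complement(adj, u)
    for a in (u, v):                      # u and v acquire a Hadamard
        if a in outputs:
            continue                      # kept as a Clifford on the output
        if plane[a] == 'XY':
            plane[a] = 'YZ'
        elif plane[a] == 'YZ':
            plane[a] = 'XY'
        else:                             # 'XZ'
            angle[a] = pi / 2 - angle[a]
    for w in common:                      # green pi phases on N(u) & N(v)
        if w in outputs:
            continue                      # kept as a Clifford on the output
        if plane[w] == 'XY':
            angle[w] = angle[w] + pi
        else:                             # 'YZ' or 'XZ'
            angle[w] = -angle[w]
\end{verbatim}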
\subsection{Removing Clifford vertices}\label{sec:removecliffordqubits}
In this section, we show that if a qubit is measured in one of the Pauli bases, i.e.\ at an angle which is an integer multiple of $\frac\pi2$, it can be removed from a pattern while preserving the semantics as well as the property of having gflow.
\begin{definition}\label{dfn:internal-boundary-Clifford}
Let $D$ be a \zxdiagram in MBQC+LC form, with underlying labelled open graph\ $(G,I,O,\lambda)$. Let $\alpha:\comp{O}\to [0,2\pi)$ be the corresponding set of measurement angles. We say a measured vertex $u\in G$ is \emph{Clifford} when $\alpha(u) = k\frac\pi2$ for some $k\in\mathbb{Z}$.
\end{definition}
Our goal will be to remove as many internal Clifford vertices as possible.
We make a key observation for our simplification scheme: a \normalfont YZ\xspace- or \normalfont XZ\xspace-plane measurement with a $0$ or $\pi$ angle can be removed from the pattern by modifying its neighbours in a straightforward manner.
\begin{lemma}\label{lem:ZX-remove-YZ-Pauli}
Let $D$ be a ZX-diagram in MBQC+LC form with vertices $V$.
Suppose $u\in V$ is a non-input vertex measured in the \normalfont YZ\xspace or \normalfont XZ\xspace plane with an angle of $a\pi$ where $a=0$ or $a=1$. Then there is an equivalent diagram $D'$ with vertices $V\setminus \{u\}$. If $D$ has gflow, then $D'$ does as well.
\end{lemma}
\begin{proof}
Using the axioms of the ZX-calculus, it is straightforward to show that:
\ctikzfig{remove-YZ-measurement}
These $a\pi$ phase shifts on the right-hand side can be absorbed into the measurement effects of the neighbouring vertices (or, for output vertices, considered as a local Clifford operator). Absorbing an $a\pi$ phase shift into a measurement effect does not change the measurement plane, only the angle. The resulting diagram $D'$ is then also in MBQC+LC form. Since $G(D')$ is simply $G(D)$ with a \normalfont YZ\xspace or \normalfont XZ\xspace plane vertex removed, $D'$ has gflow if $D$ had gflow by Lemma~\ref{lem:deletepreservegflow}.
\end{proof}
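In the representation used in the sketches above, this rewrite is a small bookkeeping step: the vertex $u$ is deleted and, in our reading of the rewrite above, each former neighbour absorbs a green $a\pi$ phase, which changes only its angle, following the same rule as for the green $\pi$ phases in Lemma~\ref{lem:pivot-MBQC-form-non-input}. The Python fragment below is again only an illustration.
\begin{verbatim}
from math import pi

def absorb_green_pi(plane_v, angle_v):
    # Absorbing a green pi phase keeps the measurement plane and changes
    # the angle as in the pivot relabelling above.
    if plane_v == 'XY':
        return angle_v + pi
    return -angle_v                       # 'YZ' and 'XZ'

def remove_pauli_vertex(adj, u, plane, angle, outputs):
    # Remove a non-input YZ/XZ vertex u with angle a*pi (a = 0 or 1);
    # each former neighbour absorbs a green a*pi phase (on outputs this
    # would become a local Clifford, not recorded here).
    assert plane[u] in ('YZ', 'XZ') and angle[u] in (0.0, pi)
    for v in list(adj[u]):
        adj[v].discard(u)
        if angle[u] == pi and v not in outputs:
            angle[v] = absorb_green_pi(plane[v], angle[v])
    del adj[u], plane[u], angle[u]
\end{verbatim}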
We can combine this observation with local complementation and pivoting to remove vertices measured in other planes or at other angles.
\begin{lemma}\label{lem:lc-simp}
Let $D$ be a ZX-diagram in MBQC+LC form with vertices $V$. Suppose $u\in V$ is a non-input vertex measured in the \normalfont YZ\xspace or \normalfont XY\xspace plane with an angle of $\pm\frac\pi2$. Then there is an equivalent diagram $D'$ with vertices $V\setminus \{u\}$. If $D$ has gflow, then $D'$ does as well.
\end{lemma}
\begin{proof}
We apply a local complementation about $u$ using Lemma~\ref{lem:ZX-lcomp} and reduce the diagram to MBQC+LC form with Lemma~\ref{lem:lc-MBQC-form-non-input}. By these lemmas, if the original diagram had gflow, this new diagram will also have gflow.
As can be seen from Lemma~\ref{lem:lc-MBQC-form-non-input}, if $u$ was in the \normalfont XY\xspace plane, then it will be transformed to the \normalfont XZ\xspace plane and will have a measurement angle of $\frac\pi2 \mp\frac\pi2$. As a result its measurement angle is of the form $a\pi$ for $a\in\{0,1\}$.
If instead it was in the \normalfont YZ\xspace plane, then it stays in the \normalfont YZ\xspace plane, but its angle is transformed to $\frac\pi2 \pm\frac\pi2$ in which case it will also be of the form $a\pi$ for $a\in\{0,1\}$.
In both cases we can remove the vertex $u$ using Lemma~\ref{lem:ZX-remove-YZ-Pauli} while preserving semantics and the property of having gflow.
\end{proof}
\begin{lemma}\label{lem:pivot-simp}
Let $D$ be a ZX-diagram in MBQC+LC form with vertices $V$, and let $u,v \in V$ be two non-input measured vertices which are neighbours.
Suppose that either $\lambda(u)=\ensuremath\normalfont\textrm{XY}\xspace$ with $\alpha(u) = a\pi$ for $a\in \{0,1\}$ or $\lambda(u) = \normalfont\normalfont\textrm{XZ}\xspace$ with $\alpha(u) = (-1)^a\frac\pi2$.
Then there is an equivalent diagram $D'$ with vertices $V\setminus \{u\}$. Moreover, if $D$ has gflow, then $D'$ also has gflow.
\end{lemma}
\begin{proof}
We apply a pivot to $uv$ using Lemma~\ref{lem:ZX-pivot} and reduce the diagram to MBQC+LC form with Lemma~\ref{lem:pivot-MBQC-form-non-input}. If the original diagram had gflow, this new diagram will also have gflow.
As can be seen from Lemma~\ref{lem:pivot-MBQC-form-non-input}, if $\lambda(u) = \ensuremath\normalfont\textrm{XY}\xspace$ then $\lambda'(u) = \normalfont\normalfont\textrm{YZ}\xspace$ with $\alpha'(u)=\alpha(u) = a\pi$. If instead we had $\lambda(u) = \normalfont\normalfont\textrm{XZ}\xspace$ (and thus $\alpha(u) = (-1)^a\frac\pi2$), then $\lambda'(u) = \normalfont\normalfont\textrm{XZ}\xspace$, but $\alpha'(u) = \frac\pi2 - \alpha(u) = \frac\pi2 - (-1)^a \frac\pi2 = a\pi$. In both cases, using Lemma~\ref{lem:ZX-remove-YZ-Pauli}, $u$ can be removed while preserving semantics and the existence of gflow.
\end{proof}
\begin{remark}
The `graph-like' \zxdiagrams studied in Ref.~\cite{cliff-simp} are MBQC+LC form diagrams where every vertex is measured in the \normalfont XY\xspace plane.
Our Lemmas~\ref{lem:lc-simp} and \ref{lem:pivot-simp} are generalisations of the work found in Ref.~\cite[Lemmas~5.2 and 5.3]{cliff-simp} and Ref.~\cite[(P2) and (P3)]{tcountpreprint}.
\end{remark}
Combining the three previous lemmas we can remove most non-input Clifford vertices. The exceptions are some non-input Clifford vertices which are only connected to input and output vertices. While it might not always be possible to remove such vertices, when the diagram has a gflow, we can find an equivalent smaller diagram:
\begin{lemma}\label{lem:removeboundaryPauli}
Let $D$ be a ZX-diagram in MBQC+LC form with vertices $V$ that has a gflow. Let $u$ be a non-input measured vertex that is only connected to input and output vertices. Suppose that either $\lambda(u)=\ensuremath\normalfont\textrm{XY}\xspace$ with $\alpha(u) = a\pi$ for $a\in \{0,1\}$ or $\lambda(u) = \normalfont\normalfont\textrm{XZ}\xspace$ with $\alpha(u) = (-1)^a\frac\pi2$. Then there exists an equivalent diagram $D'$ which has gflow and has vertices $V\backslash\{u\}$.
\end{lemma}
\begin{proof}
We prove the result for $\lambda(u)=\ensuremath\normalfont\textrm{XY}\xspace$ and $\alpha(u) = a\pi$. The other case is similar.
We claim that $u$ is connected to at least one output that is not also an input. In order to obtain a contradiction suppose otherwise. The diagram then looks as follows:
\ctikzfig{ZX-Pauli-projector}
Here `LC' indicates that there are local Clifford operators on the inputs.
Since $D$ has gflow, the entire diagram must be (proportional to) an isometry, and hence it must still be an isometry if we remove the local Cliffords on the inputs. But we then have the map
\ctikzfig{ZX-Pauli-projector2}
as the first operation in the diagram. This map is a projector, and it is not invertible. This is a contradiction, as the entire diagram cannot then be an isometry.
Therefore, $u$ must be connected to some output vertex $v$, which is not an input. We can thus perform a pivot about $uv$. This adds a Hadamard operator after $v$, and changes the label of $u$ to \normalfont YZ\xspace. We can then remove $u$ using Lemma~\ref{lem:ZX-remove-YZ-Pauli}. As all these operations preserve gflow, the resulting diagram still has gflow.
\end{proof}
The following result about removing Pauli measurements (i.e.\ Clifford vertices) from patterns while preserving semantics is already contained in Ref.~\cite[Section III.A]{hein2004multiparty} (albeit outside the context of MBQC), and is also mentioned in Ref.~\cite{houshmand2018minimal}.
Nevertheless, we are the first to explicitly state the effects of this process on the measurement pattern and the gflow.
\begin{theorem}\label{thm:simplifiedZXdiagram}
Let $D$ be a ZX-diagram in MBQC+LC form that has gflow. Then we can find an equivalent ZX-diagram $D'$ in MBQC+LC form, which also has gflow and which contains no non-input Clifford vertices.
The algorithm uses a number of graph operations that is polynomial in the number of vertices of $D$.
\end{theorem}
\begin{proof}
Starting with $D$ we simplify the diagram step by step using the following algorithm:
\begin{enumerate}
\item Using Lemma~\ref{lem:lc-simp} repeatedly, remove any non-input \normalfont YZ\xspace or \normalfont XY\xspace measured vertex with a $\pm \frac\pi2$ phase.
\item Using Lemma~\ref{lem:ZX-remove-YZ-Pauli} repeatedly, remove any non-input vertex measured in the \normalfont YZ\xspace or \normalfont XZ\xspace plane with angle $a\pi$.
\item Using Lemma~\ref{lem:pivot-simp} repeatedly, remove any non-input \normalfont XY\xspace vertex with an $a\pi$ phase, and any non-input \normalfont XZ\xspace vertex with a $\pm \frac\pi2$ phase, that is connected to another internal vertex. If any have been removed, go back to step 1.
\item If there are non-input measured Clifford vertices that are only connected to boundary vertices, use Lemma~\ref{lem:removeboundaryPauli} to remove them. Then go back to step 1. Otherwise we are done.
\end{enumerate}
By construction there are no internal Clifford vertices left at the end. Every step preserves gflow, so the resulting diagram still has gflow.
As every step removes a vertex, this process terminates in at most $n$ steps, where $n$ is the number of vertices in $D$. Each of the steps possibly requires doing a pivot or local complementation requiring $O(n^2)$ elementary graph operations. Hence, the algorithm takes at most $O(n^3)$ elementary graph operations.
\end{proof}
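The outer control flow of this algorithm is a standard loop that applies the first rule that still matches. The skeleton below is a sketch of one possible scheduling (slightly more eager than the step-wise description above, in that it returns to the first rule after every single successful application); the rule implementations, i.e.\ Lemmas~\ref{lem:lc-simp}, \ref{lem:ZX-remove-YZ-Pauli}, \ref{lem:pivot-simp} and \ref{lem:removeboundaryPauli}, are assumed to be supplied as callables and are not shown.
\begin{verbatim}
def remove_clifford_vertices(diagram, rules):
    # rules: the ordered list of steps 1-4 above; each callable returns
    # True if it modified the diagram and False otherwise.  any() tries
    # the rules in order and stops at the first success, so after every
    # successful application we restart from step 1.
    while any(rule(diagram) for rule in rules):
        pass
    return diagram
\end{verbatim}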
\begin{theorem}\label{thm:simplifiedMBQCpattern}
Suppose $(G,I,O,\lambda,\alpha)$ is a uniformly deterministic MBQC pattern representing a unitary operation.
Assume the pattern involves $q$ inputs and $n$ qubits measured at non-Clifford angles, i.e.\ $q:=\abs{I}$ and $n := \abs{\{u\in\comp{O} \mid \alpha(u)\neq k\frac\pi2 \text{ for any } k \in \mathbb{Z}\}}$.
Then we can find a uniformly deterministic MBQC pattern that implements the same unitary and uses at most $(n+8q)$ measurements.
This process finishes in a number of steps that is polynomial in the number of vertices of $G$.
\end{theorem}
\begin{proof}
Let $D$ be the ZX-diagram in MBQC form from Lemma~\ref{lem:zx-equals-linear-map} that implements the same unitary as the MBQC pattern $\pat:=(G,I,O,\lambda,\alpha)$.
Since $\pat$ is uniformly deterministic, it has gflow, and hence $D$ also has gflow by Definition~\ref{dfn:zx-gflow}.
Let $D'$ be the ZX-diagram in MBQC+LC form produced by Theorem~\ref{thm:simplifiedZXdiagram}.
Since $D'$ has no internal Clifford vertices, its MBQC-form part can have at most $n$ internal vertices.
It may still have boundary Clifford vertices, and by unitarity $\abs{O}=\abs{I}=q$, so the MBQC-form part contains at most $(n+2q)$ vertices.
Denote by $D''$ the MBQC-form diagram produced by applying Lemma~\ref{lem:SQU-to-MBQC-form} to $D'$.
Then $D''$ has at most $((n+2q)+6q)$ vertices in its MBQC-form part.
We can construct a new pattern $\pat'$ from $D''$ using Lemma~\ref{lem:zx-to-pattern}.
As $D''$ has gflow, $\pat'$ also has gflow, and hence is uniformly deterministic.
The new pattern $\pat'$ involves at most $(n+8q)$ qubits.
For the complexity of these operations:
\begin{itemize}
\item Constructing $D$ from $\pat$ takes $O(\abs{G})$ operations
\item Constructing $D'$ from $D$ takes $O(\abs{G}^3)$ operations
\item Constructing $D''$ from $D'$ takes $O(\abs{G})$ operations
\item Constructing $\pat'$ from $D''$ takes $O(\abs{G})$ operations
\end{itemize}
So the entire process is dominated by the $O(\abs{G}^3)$ step.
\end{proof}
\subsection{Phase-gadget form}\label{sec:phasegadgetform}
Using the local complementation and pivoting rules of Section~\ref{sec:pattern-local-complementation} we can transform the geometry of MBQC+LC form diagrams so that they no longer contain any vertices measured in the \normalfont XZ\xspace plane, nor any pair of adjacent vertices that are both measured in the \normalfont YZ\xspace plane.
\begin{definition}\label{def:phasegadgetform}
An MBQC+LC diagram is in \emph{phase-gadget form} if
\begin{itemize}
\item there does not exist any $v\in\comp{O}$ such that $\lambda(v) = \normalfont\normalfont\textrm{XZ}\xspace$, and
\item there does not exist any pair of neighbours $v,w\in\comp{O}$ such that $\lambda(v)=\lambda(w)=\normalfont\normalfont\textrm{YZ}\xspace$.
\end{itemize}
\end{definition}
The name `phase gadget' refers to a particular configuration of spiders in the \zxcalculus, which, in our setting, corresponds to spiders measured in the \normalfont YZ\xspace plane. Phase gadgets have been used particularly in the study of circuit optimisation~\cite{phaseGadgetSynth,tcountpreprint,pi4parity}.
In Section~\ref{sec:further-opt} we introduce another form,
called \emph{reduced} (Definition~\ref{def:reduced-form}),
which requires the pattern to be in phase-gadget form.
\begin{example}
The following MBQC+LC diagram is in phase-gadget form.
\ctikzfig{example-phase-gadget-form}
\end{example}
\begin{proposition}\label{prop:ZXtophasegadgetform}
Let $D$ be a ZX-diagram in MBQC+LC form with gflow.
Then we can find an equivalent ZX-diagram $D'$ in MBQC+LC form that has gflow and is in phase-gadget form.
This process takes a number of steps that is polynomial in the number of vertices of $D$.
\end{proposition}
\begin{proof}
Set $D_0:=D$ and iteratively construct the diagram $D_{k+1}$ based on $D_k$.
\begin{itemize}
\item
If the diagram $D_k$ contains a pair of vertices $u \sim v$ that are both measured in the \normalfont YZ\xspace-plane:
First, note that any input vertex $w$ has $\lambda(w) = \ensuremath\normalfont\textrm{XY}\xspace$, as
otherwise $w \in g(w)$ contradicting the co-domain of $g$ as given in Definition~\ref{defGFlow}.
Therefore $u,v \notin I$.
Let $D_{k+1}$ be the diagram that results from pivoting about the edge $u \sim v$ (Lemma~\ref{lem:ZX-pivot})
and then transforming to MBQC+LC form (Lemma~\ref{lem:pivot-MBQC-form-non-input}).
This changes the measurement plane for $u$ and $v$ from \normalfont YZ\xspace to \normalfont XY\xspace
and it does not affect the measurement planes for any other vertices:
\ctikzfig{rm-adj-red}
\item
If there is no such connected pair but there is some vertex $u$ that is measured in the \normalfont XZ\xspace-plane:
Note that $u$ cannot be an input vertex by the same reasoning as in the first subcase.
Let $D_{k+1}$ be the diagram that results from applying a local complementation
on $u$ and transforming back to MBQC+LC form (Lemmas~\ref{lem:ZX-lcomp} and \ref{lem:lc-MBQC-form-non-input}).
\ctikzfig{rm-adj-red2}
As can be seen from Lemma~\ref{lem:lc-MBQC-form-non-input},
this process changes the measurement plane of $u$ from \normalfont XZ\xspace to \normalfont YZ\xspace
and it does not affect the labels of any vertices that are measured in the \normalfont XY\xspace-plane.
\item
If there is no such connected pair, nor any vertex that is measured in the \normalfont XZ\xspace-plane
then $D_k$ is already in the desired form, so halt.
\end{itemize}
The number of vertices not measured in the \normalfont XY\xspace-plane decreases with each step,
and no vertices are added, so this process terminates in at most $n$ steps, where $n$ is the number of vertices in $D$.
Each step requires checking every pair of vertices,
or performing local complementation,
each of which have complexity $O(n^2)$, so the total complexity is $O(n^3)$.
Since a pivot is just a sequence of local complementations,
$D_{k+1}$ has gflow if $D_k$ had gflow
(Proposition~\ref{prop:MBQC-lc-MBQC}).
Finally every step preserves equivalence, so $D_{k+1}$ is equivalent to $D_k$.
\end{proof}
Proposition~\ref{prop:ZXtophasegadgetform} finds a phase-gadget form for an MBQC+LC diagram, but note that the phase-gadget form is not guaranteed to be unique.
\subsection{Further pattern optimisation}\label{sec:further-opt}
In Section~\ref{sec:removecliffordqubits} we saw that we can remove all non-input Clifford qubits from a pattern while preserving both determinism and the computation the pattern implements.
We will show in this section that it is also possible to remove certain qubits measured in non-Clifford angles.
These measurement pattern rewrite rules, seen then as transformations of ZX-diagrams, were used in Ref.~\cite{tcountpreprint} to reduce the T-count of circuits. We will see how in our context they can be used to remove a qubit from a pattern, again while preserving determinism.
First of all, any internal \normalfont YZ\xspace vertex with just one neighbour can be fused with this neighbour, resulting in the removal of the \normalfont YZ\xspace vertex:
\begin{lemma}\label{lem:removeidvertex}
Let $D$ be an MBQC+LC diagram with an interior vertex $u$ measured in the \normalfont YZ\xspace plane, and suppose it has a single neighbour $v$, which is measured in the \normalfont XY\xspace plane. Then there is an equivalent MBQC+LC diagram $D'$ with $G(D') = G(D)\setminus \{u\}$. If $D$ has gflow, then $D'$ also has gflow.
\end{lemma}
\begin{proof}
We apply the following rewrite:
\ctikzfig{id-simp-1}
The resulting diagram is again an MBQC+LC diagram. The change to the labelled open graph\ comes down to deleting a YZ vertex. By Lemma~\ref{lem:deletepreservegflow} this preserves gflow.
\end{proof}
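In the representation of the earlier sketches this rewrite is again a one-line bookkeeping step; in our reading of the rewrite above, the angle of $u$ is simply added to the angle of its unique \normalfont XY\xspace neighbour.
\begin{verbatim}
def fuse_single_leg_gadget(adj, u, plane, angle):
    # Remove an interior YZ vertex u with a unique neighbour v measured
    # in the XY plane; v absorbs the angle of u (our reading of the
    # rewrite above): alpha'(v) = alpha(v) + alpha(u).
    assert plane[u] == 'YZ' and len(adj[u]) == 1
    (v,) = adj[u]
    assert plane[v] == 'XY'
    angle[v] = angle[v] + angle[u]
    adj[v].discard(u)
    del adj[u], plane[u], angle[u]
\end{verbatim}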
Note that, by Proposition~\ref{prop:XY-neighbours}, if the diagram has gflow and equal numbers of inputs and outputs, then it has no internal \normalfont XY\xspace vertices with just one neighbour.
Thus, if the diagram is in phase-gadget form (cf.\ Definition~\ref{def:phasegadgetform}), the above lemma allows us to remove all internal vertices which have a single neighbour.
Our second rewrite rule allows us to also `fuse' \normalfont YZ\xspace vertices that have the same set of neighbours:
\begin{lemma}\label{lem:removepairedgadgets}
Let $D$ be an MBQC+LC diagram with two distinct interior vertices $u$ and $v$, both measured in the YZ plane and with $N(u) = N(v)$. Then there is an equivalent diagram $D'$ with $G(D') = G(D)\setminus\{u\}$. If $D$ has gflow, then $D'$ also has gflow.
\end{lemma}
\begin{proof}
We apply the following rewrite:
\ctikzfig{gadget-simp}
A straightforward sequence of \zxcalculus transformations shows this rewrite preserves semantics:
\begin{equation*}
\scalebox{0.9}{\tikzfig{gf-proof}}
\end{equation*}
The new diagram is still an MBQC+LC diagram, and the only change in the underlying labelled open graph\ is the deletion of a \normalfont YZ\xspace vertex. Hence, by Lemma~\ref{lem:deletepreservegflow}, this rewrite preserves gflow.
\end{proof}
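Concretely, and with the same caveats as before, the rewrite fuses the two gadgets: $u$ is deleted and its angle is added onto $v$, since two phase gadgets over the same set of vertices compose by adding their phases. A sketch:
\begin{verbatim}
def fuse_equal_gadgets(adj, u, v, plane, angle):
    # Fuse two interior YZ vertices with identical neighbourhoods:
    # delete u and add its angle onto v.
    assert plane[u] == plane[v] == 'YZ' and adj[u] == adj[v]
    angle[v] = angle[v] + angle[u]
    for w in adj[u]:
        adj[w].discard(u)
    del adj[u], plane[u], angle[u]
\end{verbatim}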
The analogous result is not true for a pair of \normalfont XY\xspace vertices. However, when the diagram has gflow, such pairs cannot exist to begin with:
\begin{lemma}\label{lem:nopairedXYvertices}
Let $G$ be a labelled open graph{} with gflow and distinct interior vertices $u$ and $v$ both measured in the \normalfont XY\xspace plane. Then $N(u) \neq N(v)$.
\end{lemma}
\begin{proof}
Assume for a contradiction that $N(u)=N(v)$ and that the diagram has gflow. Note that, for any subset of vertices $S$, we have $u\in \odd{}{S} \iff v\in \odd{}{S}$. In particular, as $u\in \odd{}{g(u)}$ by \ref{it:XY}, we have $v\in \odd{}{g(u)}$ and thus $u\prec v$ by \ref{it:odd}. Yet, swapping $u$ and $v$ in the above argument, we also find $v\prec u$, a contradiction.
Thus, if the diagram has gflow, distinct vertices $u$ and $v$ must have distinct neighbourhoods $N(u) \neq N(v)$.\end{proof}
We can now combine these rewrite rules with our previous results to get a more powerful rewrite strategy:
\begin{definition} \label{def:reduced-form}
Let $D$ be an MBQC+LC diagram. We say $D$ is \emph{reduced} when:
\begin{itemize}
\item It is in phase-gadget form (see Definition~\ref{def:phasegadgetform}).
\item It has no internal Clifford vertices.
\item Every internal vertex has more than one neighbour.
\item If two distinct vertices are measured in the same plane, they have different sets of neighbours.
\end{itemize}
\end{definition}
\begin{theorem}\label{thm:optimisation}
Let $D$ be an MBQC+LC diagram with gflow and equal numbers of inputs and outputs.
Then we can find an equivalent diagram $D'$ that is reduced and has gflow.
This process finishes in a number of steps that is polynomial in the number of vertices of $D$.
\end{theorem}
\begin{proof}
Starting with $D$, we simplify the diagram step by step with the following algorithm:
\begin{enumerate}
\item Apply Theorem~\ref{thm:simplifiedZXdiagram} to remove all interior Clifford vertices.
\item Apply Proposition~\ref{prop:ZXtophasegadgetform} to bring the diagram into phase-gadget form. Then every vertex is of type \normalfont YZ\xspace or \normalfont XY\xspace, and the \normalfont YZ\xspace vertices are only connected to \normalfont XY\xspace vertices.
\item Apply Lemma~\ref{lem:removeidvertex} to remove any YZ vertex that has a single neighbour.
\item Apply Lemma~\ref{lem:removepairedgadgets} to merge any pair of \normalfont YZ\xspace vertices that have the same set of neighbours.
\item If the application of these lemmas resulted in any new internal Clifford vertices, go back to step 1. Otherwise we are done.
\end{enumerate}
Each of the steps preserves gflow, and hence at every stage of the algorithm the diagram has gflow.
By construction,
when the algorithm terminates,
every vertex is of type \normalfont YZ\xspace or \normalfont XY\xspace, and \normalfont YZ\xspace vertices are only connected to \normalfont XY\xspace vertices.
Furthermore, every \normalfont YZ\xspace vertex must have more than one neighbour and have a different set of neighbours than any other \normalfont YZ\xspace vertex.
This is also true for the \normalfont XY\xspace vertices by the existence of gflow and the requirement that the number of inputs match the number of outputs (using Lemma~\ref{lem:nopairedXYvertices} and Proposition~\ref{prop:XY-neighbours}).
Hence, the resulting diagram has all the properties needed for it to be reduced.
To show that this process terminates, consider the lexicographic order:
\begin{itemize}
\item Number of vertices in the diagram
\item Number of vertices measured in the XZ or YZ planes
\item Number of vertices measured in the XY plane
\end{itemize}
The effect of each step of the algorithm on this order is as follows:
\begin{itemize}
\item Applying Theorem~\ref{thm:simplifiedZXdiagram} reduces the number of vertices in the diagram,
while possibly increasing the number of vertices in any given plane.
\item Applying Proposition~\ref{prop:ZXtophasegadgetform} reduces the number of vertices in the XZ or YZ planes,
while possibly increasing the number of vertices in the XY plane.
\item Applying Lemmas~\ref{lem:removeidvertex} and \ref{lem:removepairedgadgets}
reduces the number of vertices in the diagram,
and the number of vertices measured in the YZ plane.
\end{itemize}
Therefore each step in the algorithm reduces our order, so the process terminates.
Writing $n$ for the number of vertices in $D$
we see that the algorithmic loop can be called at most $n$ times (since we remove vertices each iteration),
and each of the steps in the loop take at most $O(n^3)$ operations,
giving a total complexity of $O(n^4)$.
\end{proof}
\begin{remark}
The algorithm described above uses the same idea as that described in Ref.~\cite{tcountpreprint}. But while they describe the procedure in terms of modifying a graph-like \zxdiagram, we describe it for MBQC+LC diagrams, a more general class of diagrams. Furthermore, we prove that the procedure preserves the existence of gflow. The existence of gflow is used in the next section to show how to recover a circuit from an MBQC+LC diagram.
\end{remark}
\section{Circuit extraction}\label{sec:circuitextract}
In this section we will see that we can extract a circuit from a measurement pattern whose corresponding labelled open graph\ has a gflow.
The extraction algorithm modifies that of Ref.~\cite{cliff-simp} so that it can handle measurements in multiple planes (not just the \normalfont XY\xspace plane).
Instead of describing the algorithm for measurement patterns, we describe it for the more convenient form of MBQC+LC diagrams.
The general idea is that we modify the diagram vertex by vertex to bring it closer and closer to resembling a circuit. We start at the outputs of the diagram and work our way to the inputs. The gflow informs the choice of which vertex is next in line to be `extracted' (specifically, this will always be a vertex maximal in the gflow partial order).
By applying various transformations to the diagram, we change it so that the targeted vertex can easily be pulled out of the MBQC-form part of the diagram and into the circuit-like part. The remaining MBQC-form diagram is then one vertex smaller. Since all the transformations preserve gflow, we can then repeat the procedure on this smaller diagram until we are finished.
Before we explain the extraction algorithm in detail in Section~\ref{sec:generalextractalgorithm}, we state some relevant lemmas.
\begin{lemma}\label{lem:cnotgflow}
The following equation holds:
\begin{equation}
\tikzfig{cnot-pivot}
\end{equation}
where $M$ is the biadjacency matrix between the output vertices of $D$ and their neighbours, and $M^\prime$ is the matrix produced from $M$ by adding row~1 to row~2, modulo~2. If the full diagram on the LHS has gflow, then so does the RHS.
\end{lemma}
\begin{proof}
The equality is proved in Ref.~\cite[Proposition~6.2]{cliff-simp}. There it is also shown that this preserves gflow when all measurements are in the \normalfont XY\xspace plane, but the same proof works when measurements in all three planes are present.
\end{proof}
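As a concrete illustration of the matrix statement (using numpy; this is only a toy check, not the PyZX code), the effect of such a CNOT on the biadjacency matrix is the elementary row operation over $\mathbb{Z}_2$ that adds the row of the target to the row of the control:
\begin{verbatim}
import numpy as np

def cnot_row_operation(M, control, target):
    # Effect of a CNOT between two outputs on the biadjacency matrix M
    # (rows = outputs, columns = their neighbours): the row of the
    # target is added to the row of the control, modulo 2.
    M = M.copy()
    M[control, :] = (M[control, :] + M[target, :]) % 2
    return M

M = np.array([[1, 1, 0],
              [0, 1, 1]])
print(cnot_row_operation(M, 0, 1))   # [[1 0 1]
                                     #  [0 1 1]]
\end{verbatim}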
\begin{lemma}\label{lem:remove-output-edges-preserves-gflow}
Suppose $(G,I,O,\lambda)$ is a labelled open graph{} with gflow.
Let $G'$ be the graph containing the same vertices as $G$ and the same edges except those for which both endpoints are output vertices.
Formally, if $G=(V,E)$, then $G'=(V,E')$, where
\[
E' = \{v\sim w \in E\mid v\in\comp{O} \text{ or } w\in\comp{O}\}.
\]
Then $(G',I,O,\lambda)$ also has gflow.
\end{lemma}
\begin{proof}
We claim that if $(g,\prec)$ is a gflow for $G$, then it is also a gflow for $G'$. Note that $\odd{G'}{g(v)}\cap \comp{O} = \odd{G}{g(v)}\cap \comp{O}$ as the only changes to neighbourhoods are among the output vertices. It is thus easily checked that all properties of Definition~\ref{defGFlow} remain satisfied.
\end{proof}
\begin{lemma}\label{lem:output-neighbours-are-XY}
Let $(G,I,O,\lambda)$ be a labelled open graph with a gflow and the same number of inputs as outputs: $\lvert I\rvert = \lvert O\rvert$.
Let $v\in O\cap \comp{I}$ be an output which is not an input.
Suppose $v$ has a unique neighbour $u\in\comp{O}$. Then $\lambda(u)=\ensuremath\normalfont\textrm{XY}\xspace$.
\end{lemma}
\begin{proof}
Suppose, working towards a contradiction, that $\lambda(u) \neq \ensuremath\normalfont\textrm{XY}\xspace$.
Form the labelled open graph\ $(G',I,O,\lambda')$ by removing from $G$ all vertices $w$
such that $w \in \comp{O}$ and $\lambda(w) \neq \ensuremath\normalfont\textrm{XY}\xspace$, restricting $\lambda$ accordingly.
By Lemma~\ref{lem:gflow_drop_gadgets} the labelled open graph\ $(G',I,O,\lambda')$ also has a gflow.
Note that $G'$ does contain $v$, which is still an output vertex in $G'$, but does not contain $u$,
and hence $v$ has no neighbours in $G'$.
By Theorem~\ref{thm:mhalla2}, $G'$ has a focused gflow, and
because $G'$ has the same number of inputs as outputs, its reversed graph also has a gflow $(g,\prec)$ by Corollary~\ref{cor:reverse_unitary_gflow}.
In this reversed graph $v$ is an input and, since it is not an output, it is measured in the \normalfont XY\xspace plane.
It therefore has a correction set $g(v)$ so that $v\in \odd{}{g(v)}$.
But because $v$ has no neighbours, this is a contradiction.
We conclude that indeed $\lambda(u)=\ensuremath\normalfont\textrm{XY}\xspace$.
\end{proof}
\noindent For any set $A\subseteq V$, let $N_G(A) = \bigcup_{v\in A} N_G(v)$.
Recall the partition of vertices according to the partial order of the gflow into sets $V_k^\prec$, which is introduced in Definition~\ref{defVk}.
\begin{lemma}\label{lem:maxdelayednotempty}
Let $(G,I,O,\lambda)$ be a labelled open graph in phase-gadget form, which furthermore satisfies $\comp{O}\neq\emptyset$.
Suppose $(G,I,O,\lambda)$ has a gflow.
Then the maximally delayed gflow, $(g,\prec)$, constructed in Proposition~\ref{prop:focused-gflow}
exists and moreover $N_G(V_1^\prec)\cap O \neq \emptyset$, \ie the gflow has the property that,
among the non-output vertices,
there is a vertex which is maximal with respect to the gflow order and also connected to an output vertex.
\end{lemma}
\begin{proof}
By Proposition~\ref{prop:focused-gflow}, there exists a maximally delayed gflow of $(G,I,O,\lambda)$ such that
no element of a correction set (other than possibly the vertex being corrected) is measured in the \normalfont YZ\xspace plane.
Since the open graph does not consist solely of outputs, the set $V_1^\prec$ (as defined in Definition~\ref{defVk}) is non-empty, so the following arguments are non-trivial.
For any $v\in V_1^\prec$ we must have $g(v)\subseteq O\cup \{v\}$.
Now if there is a $v\in V_1^\prec$ with $\lambda(v) = \ensuremath\normalfont\textrm{XY}\xspace$, then $v\in\odd{}{g(v)}$.
There are no self-loops, hence this $v$ must be connected to at least one output, and we are done.
As the graph is in phase-gadget form, there are no vertices labelled \normalfont XZ\xspace and hence from now on assume that $\lambda(v)=\normalfont\normalfont\textrm{YZ}\xspace$ for all $v\in V_1^\prec$.
We distinguish two cases.
\begin{enumerate}
\item If $V_2^\prec = \emptyset$, then the only non-output vertices are in $V_1^\prec$.
Now, any connected component of the graph $G$ must contain an input or an output.
The vertices in $V_1^\prec$ are all labelled \normalfont YZ\xspace and thus appear in their own correction sets; this means they cannot be inputs because inputs do not appear in correction sets.
The vertices in $V_1^\prec$ are not outputs either, so each of them must have at least one neighbour.
Yet the labelled open graph{} is in phase-gadget form.
This implies that two vertices both labelled \normalfont YZ\xspace cannot be adjacent, and all vertices in $V_1^\prec$ are labelled \normalfont YZ\xspace.
Thus any vertex $v\in V_1^\prec$ must have a neighbour in $O$, and we are done.
\item So now assume there is some vertex $w\in V_2^\prec$.
Then, regardless of $\lambda(w)$, we have $g(w) \subseteq V_1^\prec\cup O\cup\{w\}$ and $\odd{}{g(w)} \subseteq V_1^\prec\cup O\cup\{w\}$.
We distinguish three subcases according to whether one of $g(w)$ or $\odd{}{g(w)}$ intersects $V_1^\prec$.
\begin{itemize}
\item Suppose $g(w)\cap V_1^\prec = \odd{}{g(w)}\cap V_1^\prec = \emptyset$, i.e.\ $g(w) \subseteq O\cup\{w\}$ and $\odd{}{g(w)} \subseteq O\cup\{w\}$.
Let $\prec' = \prec \setminus \{(w,u): u\in V_1^\prec\}$.
Then $(g,\prec')$ is a gflow: dropping the given inequalities from the partial order does not affect the gflow properties since $u\in V_1^\prec$ implies $w\notin g(u)$ and $w\notin \odd{}{g(u)}$.
Furthermore, $(g,\prec')$ is more delayed than $(g,\prec)$ because $w$ (and potentially some of its predecessors) moves to an earlier layer, contradicting the assumption that $(g,\prec)$ is maximally delayed.
Hence this case cannot happen.
\item Suppose $g(w)\cap V_1^\prec \neq \emptyset$, then there exists a \normalfont YZ\xspace vertex in the correction set of $w$ since all elements of $V_1^\prec$ are measured in the \normalfont YZ\xspace plane.
But our gflow satisfies the properties of Proposition~\ref{prop:focused-gflow}, and hence this cannot happen.
\item Suppose $\odd{}{g(w)}\cap V_1^\prec \neq \emptyset$ and $g(w)\cap V_1^\prec = \emptyset$, then there is a $v\in V_1^\prec$ such that $v\in \odd{}{g(w)}$.
There are two further subcases.
\begin{itemize}
\item If $\lambda(w)=\ensuremath\normalfont\textrm{XY}\xspace$, we have $w\not\in g(w)$ and hence $g(w)\subseteq O$ so that there must be some $o\in O$ that is connected to $v$ and we are done.
\item Otherwise, if $\lambda(w)=\normalfont\normalfont\textrm{YZ}\xspace$, then $w\in g(w)$.
Yet both $v$ and $w$ are measured in the \normalfont YZ\xspace plane, so they are not neighbours, and hence there still must be an $o\in O$ that is connected to $v$ to have $v\in \odd{}{g(w)}$.
\end{itemize}
\end{itemize}
\end{enumerate}
Thus, the gflow $(g,\prec)$ has the desired property in all possible cases.
\end{proof}
\subsection{General description of the algorithm}\label{sec:generalextractalgorithm}
We first walk through a high-level description of how to extract a circuit from a diagram in MBQC+LC form with gflow, explaining why every step works. After that, we present a more practical algorithm in Section~\ref{s:more-practical}. As we wish the output to be a unitary circuit, we will assume that the diagram has an equal number of inputs and outputs.
The process will be to make sequential changes to the \zxdiagram that make the diagram look progressively more like a circuit. During the process, there will be a `frontier': a set of green spiders such that everything to their right looks like a circuit, while everything to their left (and including the frontier vertices themselves) is an MBQC+LC form diagram equipped with a gflow.
We will refer to the MBQC-form diagram on the left as the \emph{unextracted} part of the diagram, and to the circuit on the right as the \emph{extracted} part of the diagram.
For example:
\begin{equation}\label{ex:frontier-example}
\scalebox{1.2}{\tikzfig{frontier-example}}
\end{equation}
In this diagram, we have merged the \normalfont XY\xspace measurement effects with their respective vertices, in order to present a tidier picture.
The matrix $M$ is the biadjacency matrix between the vertices on the frontier and all their neighbours to the left of the frontier.
For the purposes of the algorithm below, we consider the extracted circuit as no longer being part of the diagram, and hence the frontier vertices are the outputs of the labelled open graph\ of the unextracted diagram.
\textbf{Step 0}: First, we transform the pattern into phase-gadget form using Proposition~\ref{prop:ZXtophasegadgetform}, ensuring that all vertices are measured in the \normalfont XY\xspace or \normalfont YZ\xspace planes, and that vertices measured in the \normalfont YZ\xspace plane are only connected to vertices measured in the \normalfont XY\xspace plane.
This can be done in polynomial time, and preserves the interpretation of the diagram. Furthermore, the resulting diagram still has gflow.
\textbf{Step 1}: We unfuse any connection between the frontier vertices as a CZ gate into the extracted circuit, and we consider any local Clifford operator on the frontier vertices as part of the extracted circuit. For example:
\[\scalebox{1.2}{\tikzfig{example-unfuse-gates}}\]
This process changes the unextracted diagram in two ways: by removing local Clifford operators and by removing connections among the frontier vertices.
The former does not affect the underlying labelled open graph{} and the latter preserves gflow by Lemma~\ref{lem:remove-output-edges-preserves-gflow}.
Thus, the unextracted diagram continues to be in MBQC form and it continues to have gflow.
If the only unextracted vertices are on the frontier, go to step~5, otherwise continue to step~2.
\textbf{Step 2}: The unextracted diagram is in phase-gadget form and has gflow.
Thus, by Lemma~\ref{lem:maxdelayednotempty}, it has a maximally delayed gflow $(g,\prec)$ such that $N_G(V_1^\prec)\cap O \neq \emptyset$, where $V_1^\prec$ is the `most delayed' layer before the frontier vertices, which are the outputs of the labelled open graph\ (see Definition~\ref{defVk}).
Such a gflow can be efficiently determined by first finding any maximally delayed gflow using the algorithm of Theorem~\ref{thmGFlowAlgo} and then following the procedure outlined in Proposition~\ref{prop:focused-gflow}.
Now, if any of the vertices in $V_1^\prec$ are labelled \normalfont XY\xspace, pick one of these vertices and go to step~3. Otherwise, all the maximal non-output vertices (with respect to $\prec$) must have label \normalfont YZ\xspace; go to step~4.
\textbf{Step 3}: We have a maximal non-output vertex $v$ labelled \normalfont XY\xspace, which we want to extract. Since it is maximal in $\prec$, we know that $g(v)\subseteq O$ by Definition~\ref{defVk}.
As the gflow is maximally delayed, we have $\odd{}{g(v)}\cap \comp O = \{v\}$.
We now follow the strategy used in Ref.~\cite{cliff-simp} for the `\normalfont XY\xspace-plane only' case, illustrating it with an example.
Consider the following diagram, in which the vertex $v$ and its correction set $g(v)$ are indicated:
\begin{equation}\label{eq:example-extracted-vertex}
\scalebox{1.2}{\tikzfig{example-extracted-vertex}}
\end{equation}
For clarity, we are ignoring the measurement effects on the left-hand-side spiders, and we are not showing any frontier vertices that are inputs (although note that by definition of a gflow, the vertices of $g(v)$ cannot be inputs).
In the above example, the biadjacency matrix of the bipartite graph between the vertices of $g(v)$ on the one hand, and their neighbours in the unextracted part on the other hand, is
\begin{equation}\label{eq:biadjacency-example}
\tikzfig{example-matrix}
\end{equation}
where the rows correspond to vertices of $g(v)$, and vertices are ordered top-to-bottom. We do not include the bottom-most frontier vertex in the biadjacency matrix, as it is not part of $g(v)$, and we do not include the bottom left spider, as it is not connected to any vertex in $g(v)$.
The property that $\odd{}{g(v)}\cap \comp O = \{v\}$ now corresponds precisely to the following property of the matrix: if we sum up all the rows of this biadjacency matrix modulo 2, the resulting row vector contains a single 1 corresponding to the vertex $v$ and zeroes everywhere else.
It is straightforward to see that this is indeed the case for the matrix of Eq.~\eqref{eq:biadjacency-example}.
Now pick any frontier vertex $w\in g(v)$.
Lemma~\ref{lem:cnotgflow} shows that the application of a CNOT to two outputs corresponds to a row operation on the biadjacency matrix, which adds the row corresponding to the target to the row corresponding to the control. Hence if, for each $w'\in g(v)\setminus\{w\}$, we apply a CNOT with control and target on the output wires of $w$ and $w'$, the effect on the biadjacency matrix is to add all the other rows of the vertices of $g(v)$ to that of $w$:
\[\scalebox{1.15}{\tikzfig{example-extracted-vertex-cnots}}\]
As a result, $w$ is now only connected to $v$, but $v$ may still be connected to other vertices in $O\setminus g(v)$. For each such vertex $u$, applying a CNOT with control $u$ and target $w$ removes the connection between $u$ and $v$:
\[\scalebox{1.15}{\tikzfig{example-extracted-vertex-cnots2}}\]
Now we can move $v$ to the frontier by removing $w$ from the diagram, adding a Hadamard to the circuit (this comes from the Hadamard edge between $v$ and $w$), adding the measurement angle of $v$ to the circuit as a Z-phase gate, and adding $v$ to the set of outputs of the graph (i.e.\ the frontier):
\begin{equation}\label{eq:extract-vertex}
\scalebox{1.15}{\tikzfig{extract-vertex}}
\end{equation}
On the underlying labelled open graph\ this corresponds to removing $w$ and adding $v$ to the list of outputs. We need to check that this preserves the existence of a gflow. The only change we need to make is that for all $v'\neq v$ with $w\in g(v')$ we set $g'(v') = g(v')\backslash\{w\}$. As $w$'s only neighbour is $v$, removing $w$ from $g(v')$ only toggles whether $v\in\odd{}{g'(v')}$. Since $v$ is a part of the outputs in the new labelled open graph, this preserves all the properties of being a gflow.
Now that the vertex $w$ has been removed, the number of vertices in the unextracted part of the diagram is reduced by 1. We now go back to step 1.
\textbf{Step 4}: All the maximal vertices are labelled \normalfont YZ\xspace. Since we chose our gflow according to Lemma~\ref{lem:maxdelayednotempty}, we know that at least one of these vertices is connected to an output, and hence a frontier vertex. Pick such a vertex $v$, and pick a $w\in O\cap N_G(v)$ (this set is non-empty). Pivot about $vw$ using Lemma~\ref{lem:ZX-pivot} and reduce the resulting diagram to MBQC form with Lemma~\ref{lem:pivot-MBQC-form-non-input}. Afterwards, $v$ has label \normalfont XY\xspace and $w$ has a new Hadamard gate on its output wire (which will be dealt with in the next iteration of step 1).
We have changed one vertex label in the unextracted part of the diagram from \normalfont YZ\xspace to \normalfont XY\xspace. Since no step introduces new \normalfont YZ\xspace vertices, step~4 can only happen as many times as there are \normalfont YZ\xspace vertices at the beginning. Go back to step 1.
\textbf{Step 5:} At this point, there are no unextracted vertices other than the frontier vertices, all of which have arity 2 and can be removed using rule $(\bm{i1})$ of Figure~\ref{fig:zx-rules}.
Yet the remaining frontier vertices might be connected to the inputs in some permuted manner and the inputs might carry some local Cliffords:
\ctikzfig{example-permutation}
This is easily taken care of by decomposing the permutation into a series of SWAP gates, at which point the entire diagram is in circuit form.
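The decomposition into SWAP gates can be obtained, for instance, by putting each wire into place one at a time, as in the short sketch below (illustrative only; any decomposition of the permutation works):
\begin{verbatim}
def permutation_to_swaps(perm):
    # Decompose a permutation, given as a list, into a sequence of
    # transpositions (i, j) that sort it to the identity; each
    # transposition is realised as a SWAP gate on wires i and j.
    perm = list(perm)
    swaps = []
    for i in range(len(perm)):
        if perm[i] != i:
            j = perm.index(i)
            swaps.append((i, j))
            perm[i], perm[j] = perm[j], perm[i]
    return swaps

print(permutation_to_swaps([2, 0, 1]))   # [(0, 1), (1, 2)]
\end{verbatim}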
\textbf{Correctness and efficiency:} Since step 3 removes a vertex from the unextracted diagram, and step 4 changes a measurement plane from \normalfont YZ\xspace to \normalfont XY\xspace (and no step changes measurement planes in the other direction), this algorithm reduces the lexicographic order ($\#$ unextracted vertices, $\#$ unextracted vertices with $\lambda = \normalfont\normalfont\textrm{YZ}\xspace$) each time we repeat an iteration of the process, so that the algorithm terminates. Each step described above takes a number of graph operations polynomial in the number of unextracted vertices,
and therefore this entire algorithm takes a number of steps polynomial in the number of vertices.
All steps correspond to ZX-diagram rewrites, so the resulting diagram is a circuit that implements the same unitary as the pattern we started with.
\subsection{A more practical algorithm}\label{s:more-practical}
Now that we know that the algorithm above is correct and will always terminate, we can take a couple of short-cuts that will make it more efficient.
In step 2, instead of using the gflow to find a maximal vertex, we do the following: Write down the biadjacency matrix of the bipartite graph consisting of frontier vertices on one side and all their neighbours on the other side. For example, the Diagram~\eqref{eq:example-extracted-vertex} would give the matrix:
\begin{equation}\label{eq:matrix2}
\begin{pmatrix}
1&1&0&0&0\\
0&0&1&1&0\\
0&1&1&1&0\\
1&1&0&1&1
\end{pmatrix}
\end{equation}
Now perform a full Gaussian elimination on this $\mathbb{Z}_2$ matrix. In the above case, this results in the matrix:
\begin{equation}\label{eq:matrix_after_elim}
\begin{pmatrix}
1&0&0&0&0\\
0&1&0&0&0\\
0&0&1&0&1\\
0&0&0&1&1
\end{pmatrix}
\end{equation}
Any row in this matrix containing a single 1 corresponds to an output vertex with a single neighbour. By Lemma~\ref{lem:output-neighbours-are-XY}, this neighbour is of type \normalfont XY\xspace. As an example, in the matrix in Eq.~\eqref{eq:matrix_after_elim}, the first row has a single 1 in the first column, and hence the top-left spider of Diagram~\eqref{eq:example-extracted-vertex} is the unique \normalfont XY\xspace neighbour to the first output. Similarly, the second row has a single 1, appearing in column 2, and hence the second spider from the top on the left in Diagram~\eqref{eq:example-extracted-vertex} is the unique neighbour to the second output.
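The full Gaussian elimination over $\mathbb{Z}_2$ is elementary; the sketch below (numpy-based and purely illustrative, since PyZX has its own routine) also records the row operations that were performed and reports which rows of the reduced matrix contain a single 1. By Lemma~\ref{lem:cnotgflow}, each recorded row operation will correspond to a CNOT gate, as explained next.
\begin{verbatim}
import numpy as np

def gauss_gf2(M):
    # Full Gaussian elimination of M over GF(2).  Returns the reduced
    # matrix, the list of row operations (r1, r2), each meaning
    # "row r1 += row r2 (mod 2)", i.e. a CNOT with control r1 and
    # target r2, and the rows that end up with exactly one 1.
    M = M.copy() % 2
    rows, cols = M.shape
    row_ops, pivot_row = [], 0
    for col in range(cols):
        pivots = [r for r in range(pivot_row, rows) if M[r, col]]
        if not pivots:
            continue
        p = pivots[0]
        if p != pivot_row:                        # swap rows p and pivot_row,
            M[[p, pivot_row]] = M[[pivot_row, p]] # realised as three additions
            row_ops += [(p, pivot_row), (pivot_row, p), (p, pivot_row)]
        for r in range(rows):                     # clear the rest of the column
            if r != pivot_row and M[r, col]:
                M[r, :] = (M[r, :] + M[pivot_row, :]) % 2
                row_ops.append((r, pivot_row))
        pivot_row += 1
        if pivot_row == rows:
            break
    single = [r for r in range(rows) if M[r, :].sum() == 1]
    return M, row_ops, single

M = np.array([[1, 1, 0, 0, 0],
              [0, 0, 1, 1, 0],
              [0, 1, 1, 1, 0],
              [1, 1, 0, 1, 1]])
M_red, ops, single = gauss_gf2(M)   # M_red equals the reduced matrix above
print(single)                       # [0, 1]
\end{verbatim}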
If we found at least one row with a single 1 with this method, we implement the row operations corresponding to the Gaussian elimination procedure as a set of CNOT gates using Lemma~\ref{lem:cnotgflow}. Doing this with Diagram~\eqref{eq:example-extracted-vertex} gives:
\begin{equation}
\scalebox{1.15}{\tikzfig{example-extracted-gauss}}
\end{equation}
We see that every row which had a single 1 now corresponds to a frontier spider with a single neighbour, and hence we can extract vertices using the technique of Eq.~\eqref{eq:extract-vertex}:
\begin{equation}\label{eq:example-extracted-3}
\scalebox{1.15}{\tikzfig{example-extracted-3}}
\end{equation}
As we now extract multiple vertices at a time, there could be connections between the new frontier vertices (for instance between the top two frontier spiders in Eq.~\eqref{eq:example-extracted-3}). These are taken care of in the next iteration of step 1, turning those into CZ gates.
If the Gaussian elimination does not reveal a row with a single 1, then we are in the situation of step 4. We perform pivots involving a vertex with label \normalfont YZ\xspace and an adjacent frontier vertex until there is no vertex with a label \normalfont YZ\xspace which is connected to a frontier vertex. We then go back to step 1.
With these short-cuts, it becomes clear that we do not need an explicitly calculated gflow in order to extract a circuit. The fact that there is a gflow is only used to argue that the algorithm is indeed correct and will always terminate. Pseudocode for this algorithm can be found in Appendix~\ref{sec:pseudocode}.
With the results of this section we have then established the following theorem.
\begin{theorem}\label{thm:extraction-algorithm}
Let $\pat$ be a measurement pattern with $n$ inputs and outputs containing a total of $k$ qubits, and whose corresponding labelled open graph\ has a gflow. Then there is an algorithm running in time $O(n^2k^2 + k^3)$ that converts $\pat$ into an equivalent $n$-qubit circuit that contains no ancillae.
\end{theorem}
\begin{proof}
The runtime for the extraction algorithm is dominated by Gaussian elimination of the biadjacency matrices which has complexity $O(n^2m)$, where $n$ is the number of rows, corresponding to the number of outputs, and $m$ is the number of neighbours these output vertices are connected to. In principle $m$ could be as large as the number of vertices in the graph and hence could be as large as $k$ (although in practice it will be much smaller than that).
In the worst case, performing a pivot operation also requires toggling the connectivity of almost the entire graph, which requires $k^2$ elementary graph operations.
Since we might have to apply a pivot and a Gaussian elimination process for every vertex in the graph, the complexity for the entire algorithm is bounded above by $O(k(n^2k + k^2)) = O(n^2k^2 + k^3)$.
\end{proof}
Note that if $k\geq O(n^2)$, which will be the case for most useful computations, the bound becomes $O(k^3)$. In practice, however, we would not expect to see this worst-case complexity, as it would only be attained if the graph stayed almost fully connected throughout. This does not seem possible because the pivots and Gaussian elimination always toggle connectivity, and hence a highly connected graph in one step will become less connected in the following step.
\section{Conclusions and Future Work}\label{sec:conclusion}
We have given an algorithm which extracts a circuit from any measurement pattern whose underlying labelled open graph\ has extended gflow.
This is the first algorithm which works for patterns with measurements in multiple planes, and does not use ancillae.
Simultaneously, it is the most general known algorithm for extracting quantum circuits from \zxcalculus diagrams.
We have also developed a set of rewrite rules for measurement patterns containing measurements in all three planes.
For each of these rewrite rules, we have established the corresponding transformations of the extended gflow.
The rewrite rules can be used to reduce the number of qubits in a measurement pattern, in particular eliminating all qubits measured in a Pauli basis.
Additionally, we have generalised the notions of focused gflow and maximally delayed gflow to labelled open graph{}s with measurements in multiple planes, and we have described algorithms for finding such gflows.
The pattern optimization algorithm of Theorem~\ref{thm:optimisation} and the circuit extraction algorithm of Section~\ref{sec:circuitextract} have been implemented in the \zxcalculus rewriting system \emph{PyZX}\footnote{PyZX is available at \url{https://github.com/Quantomatic/pyzx}. A Jupyter notebook demonstrating the T-count optimisation is available at \url{https://github.com/Quantomatic/pyzx/blob/5d409a246857b7600cc9bb0fbc13043d54fb9449/demos/T-count\%20Benchmark.ipynb}.}~\cite{pyzx}.
The reduction in non-Clifford gates using this method matches the state-of-the-art for ancillae-free circuits~\cite{tcountpreprint} at the time of development.
Our circuit extraction procedure resynthesizes the CNOT gates in the circuit. Depending on the input circuit this can lead to drastic decreases in the 2-qubit gate count of the circuit~\cite{cliff-simp,pyzx}, but in many cases it can also lead to an \emph{increase} of the CNOT count.
Such increases are avoided in the procedure of Ref.~\cite{tcountpreprint}, where two-qubit gates are not resynthesized.
Yet re-synthesis of two-qubit gates may be necessary anyway in many applications: current and near-term quantum devices do not allow two-qubit gates to be applied to arbitrary pairs of qubits.
Thus, general circuits need to be adapted to the permitted connections; this is called routing.
Our extraction algorithm currently uses a standard process of Gaussian elimination to produce the two-qubit gates required to implement the circuit, which implicitly assumes that two-qubit gates can be applied between any pair of qubits in the circuit.
It may be useful to replace this procedure with one incorporating routing, such as the \zxcalculus-based routing approach by Kissinger and Meijer-van~de~Griend~\cite{kissinger2019cnot}.
This would allow circuits to be adapted to the desired architecture as they are being extracted.
It would also be interesting to consider whether these routing algorithms can be used more abstractly to transform general measurement patterns to more regular patterns with restricted connectivity.
\medskip
\noindent {\small \textbf{Acknowledgements.}
Many thanks to Fatimah Ahmadi for her contributions in the earlier stages of this project.
The majority of this work was developed at the Applied Category Theory summer school during the week 22--26 July 2019; we thank the organisers of this summer school for bringing us together and making this work possible.
JvdW is supported in part by AFOSR grant FA2386-18-1-4028.
HJM-B is supported by the EPSRC.
}
\bibliographystyle{plainnat}
\section{Introduction}
Translation surfaces have been studied in depth for many years. However,
there is no clear picture for the shape of a random translation surface.
The goal of this paper is to study the asymptotic growth rate of the
expected value of the covering radius of a translation surface as a function of genus.
This is the maximal distance of any point to a singularity. In the case of the minimal
stratum the covering radius is the same as the diameter, up to a factor $2$.
A motivation for this study is a paper of Mirzakhani \cite{Mirzakhani} in which she computed
the expected value of several geometric functions (such as systole, Cheeger constant, etc.)
on ${\mathcal{M}}_g$, the moduli space
of Riemann surfaces of genus $g$, equipped with the Weil--Petersson volume measure
$\nu_{\mathrm{wp}}$. For example, she proved that the expected value of the diameter of a generic hyperbolic
surface of genus $g$ grows like $\log g$ as $g\to \infty$. Specifically
\begin{equation*}
\mathbb{E}_{{\mathcal{M}}_g} (\diam )
= \frac{\displaystyle{\int_{{\mathcal{M}}_g} \diam \, d\nu_{\mathrm{wp}} }} {\Vol_{\mathrm{wp}} ( {\mathcal{M}}_g )}
\asymp \log g
\end{equation*}
where $\asymp$ means that the two sides are equal up to uniform multiplicative constants
that are independent of $g$.
The space of all translation surfaces is naturally stratified by the number and the type
of the singularities they can have and the expected shape of a translation surface may be
different depending on the stratum.
To get some idea of the possible shapes, a first guess
would be to translate the result of Mirzakhani directly. Namely, a hyperbolic
surface $x$ of genus $g$ has an area comparable to $g$. To make $x$ have area $1$,
one needs to scale~$x$ down by a factor comparable to $\frac 1{\sqrt g}$. Then, the result
of Mirzakhani would suggest that the expected value of the diameter should be comparable
to $\frac{\log g }{\sqrt g}$.
However, the answer we find is different from this naive prediction.
Let $\kappa = (k_1, \dots, k_\ell)$ be a tuple of positive integers and let ${\mathcal{H}}_1(\kappa)$
be the stratum of unit area translation surfaces with $\ell$ singularities of degrees
$k_1, \dots, k_\ell$.
Let $\nu$ be the normalized Lebesgue measure on a stratum ${\mathcal{H}}_1(\kappa)$ as in
\cite{Masur-82,Veech-82}.
For a translation surface $(X,\omega) \in {\mathcal{H}}(\kappa)$, the maximum distance from a point in $X$ to the set of singularities of $(X, \omega)$ is called the \emph{covering radius} of $(X, \omega)$.
This is equal to the maximum radius of an immersed Euclidean disk in $(X, \omega)$.
We denote the covering radius of $(X, \omega)$ by~$\crad(X, \omega)$.
\begin{introthm}[Expected covering radius] \label{introthm:covering_radius}
Let ${\mathcal{H}}(\kappa)$ be a stratum of translation surfaces of genus $g$.
Then, for large values of $g$, we~have
\begin{equation*}
\mathbb{E}_{{\mathcal{H}}_1(\kappa)} ( \crad)
= \frac{\displaystyle \int_{{\mathcal{H}}_1(\kappa)} \crad(X) \, d\nu(X)}{\nu \big({\mathcal{H}}_1(\kappa)\big)}
\leq 20 \cdot \sqrt{\frac{\log g}{g}}
.
\end{equation*}
\end{introthm}
In the special case of the minimal stratum ${\mathcal{H}}_1(2g-2)$ where translation surfaces have exactly one singularity, the covering radius is the same as the diameter, up to a factor of $2$: for the unique singularity $\sigma$ and any two points $x, y \in X$, we have $d(x,y) \leq d(x,\sigma) + d(\sigma,y) \leq 2 \cdot \crad(X)$, while conversely $\crad(X) \leq \diam(X)$. We therefore have the following theorem, which is a direct corollary of Theorem~\ref{introthm:covering_radius}.
\begin{introthm}[Expected diameter] \label{introthm:diameter_goes_to_zero}
For large values of $g$, we have
\begin{equation*}
\mathbb{E}_{{\mathcal{H}}_1(2g-2)} ( \diam)
= \frac{\displaystyle \int_{{\mathcal{H}}_1(2g-2)} \diam(X) \, d\nu(X)}{\nu \big({\mathcal{H}}_1(2g-2)\big)}
\leq 40 \cdot \sqrt{\frac{\log g}{g}}
.
\end{equation*}
\end{introthm}
This shows in particular that the expected value of the diameter in ${\mathcal{H}}_1(2g-2)$ is smaller than what one would get from scaling a hyperbolic surface (by a factor $\frac 1 {\sqrt g}$) to have area $1$. So, the situation is different from that of hyperbolic surfaces.
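Indeed, the two growth rates differ by an unbounded factor: since
\begin{equation*}
\sqrt{\frac{\log g}{g}} = \frac{\sqrt{\log g}}{\sqrt{g}}
\qquad \text{and} \qquad
\frac{\log g}{\sqrt{g}} = \sqrt{\log g} \cdot \frac{\sqrt{\log g}}{\sqrt{g}} ,
\end{equation*}
the upper bound in \autoref{introthm:diameter_goes_to_zero} is smaller than the prediction from the hyperbolic picture by a factor of $\sqrt{\log g}$.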
It would be interesting to compute the expected value of the diameter of surfaces
in ${\mathcal{H}}_1(\kappa)$ but that cannot be achieved with our current methods.
Unlike for the covering radius, it is not even clear at the moment whether or not the expected value of the diameter in every stratum goes to zero as the genus of the underlying surface
goes to infinity.
In contrast with \autoref{introthm:covering_radius} and \autoref{introthm:diameter_goes_to_zero},
we have the following absolute lower bound for the
covering radius of any element of ${\mathcal{H}}_1(\kappa)$.
\begin{introprop}[Lower bound on diameter]
For every $(X,\omega) \in {\mathcal{H}}_1(\kappa)$, we have
\[
\crad(X) \geq \sqrt{\frac{2}{3\sqrt{3} \cdot (2g+\ell-2)}}.
\]
\begin{proof}
Let $\kappa = (k_1, \dots, k_\ell)$ and let $(X, \omega) \in {\mathcal{H}}_1(\kappa)$.
Consider a Delaunay triangulation of~$(X, \omega)$. The number of triangles is
$2(2g+\ell-2)$. Hence, the area of the largest triangle is greater than or equal to
$\frac{1}{2(2g+\ell-2)}$. This triangle is inscribed in a circle of radius at least
$\sqrt{\frac{2}{3\sqrt{3} \cdot (2g+\ell-2)}}$.
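Here we use that among all triangles inscribed in a circle of radius $R$, the equilateral triangle has the largest area, namely $\frac{3\sqrt{3}}{4} \cdot R^2$. Hence, for the circumradius $R$ of the largest triangle, we get
\begin{equation*}
\frac{3\sqrt{3}}{4} \cdot R^2 \geq \frac{1}{2(2g+\ell-2)}
\qquad \text{and therefore} \qquad
R \geq \sqrt{\frac{2}{3\sqrt{3} \cdot (2g+\ell-2)}} .
\end{equation*}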
As the triangulation is Delaunay, the corresponding disk is an immersed Euclidean
disk in $X$. Hence its radius is a lower bound for the covering radius of the
translation surface.
\end{proof}
\end{introprop}
\begin{rem}[Non-connectedness of ${\mathcal{H}}_1(\kappa)$]
The stratum ${\mathcal{H}}_1(\kappa)$ is not always connected.
For $\kappa=(k_1, \dots, k_\ell)$, when every $k_i$ is even, there are different
components corresponding to even and odd spin structure. Also, ${\mathcal{H}}_1(2g-2)$ and
${\mathcal{H}}_1(g-1, g-1)$ have a component consisting of hyperelliptic surfaces.
So, a stratum may have up to three connected components
(see \cite[Theorem 1]{kontsevich_zorich_03} for an exact statement).
In our formula for the expected value of the diameter or the
covering radius, we will not consider each component separately.
In fact, it is known that the volume of the hyperelliptic component
of ${\mathcal{H}}_1(2g-2)$ is approximately of order $(2g)^{-2g}$ \cite[Theorem 1.1]{athreya_eskin_zorich_16}.
So we cannot even conclude that the
expected value of the diameter for the hyperelliptic component goes to zero as $g\to \infty$.
It was recently shown that the components associated to even and odd spin structures
have asymptotically equal volumes as $g\to \infty$ \cite{chen_moeller_sauvaget_zagier_19}.
Hence, the statement of \autoref{introthm:diameter_goes_to_zero} does
hold for these two components with a different constant.
\end{rem}
We now outline the proof of \autoref{introthm:covering_radius} which immediately implies
\autoref{introthm:diameter_goes_to_zero}. For every $(X, \omega) \in {\mathcal{H}}_1(\kappa)$,
we find either an embedded disk or a cylinder that approximates the covering radius. When
there exists a large embedded disk, we take out a parallelogram whose area is proportional
to the area of the disk and glue the opposite sides to build a new translation surface.
The resulting translation surface is in the stratum ${\mathcal{H}}(\kappa, 2)$
and its area is smaller than that of $(X, \omega)$ by a definite amount.
We then renormalize this translation surface to have unit area.
We call this process taxing. Because of the renormalization process,
the Jacobian of the taxing map is very large but the volumes of ${\mathcal{H}}(\kappa)$
and ${\mathcal{H}}(\kappa, 2)$ are comparable. Hence, the volume of the subset of
${\mathcal{H}}(\kappa)$ where there is a large embedded disk is small.
This allows us to show that the integral of the covering radius on these
sets is small.
When there is a cylinder of large height, then either the cylinder has large area
or a small circumference. In these cases, we bound the measure of the set of translation
surfaces that have such cylinders by bounding the associated Siegel--Veech constant.
Namely, for given length $\delta>0$, area $A \in [0, 1)$ and a translation surface
$(X, \omega)\in {\mathcal{H}}_1(\kappa)$, let $N_{\mathrm{cyl}}(X, \delta, A)$ be the number of cylinders
in $X$ where the circumference is at most $\delta$ and the area is at
least~$A$.
\begin{introthm}[Expected number of cylinders] \label{Thm:cyl}
There exists a constant $C>0$ such that for large values of $g$, we have
\[
\mathbb{E}_{{\mathcal{H}}_1(\kappa)} \big( N_{\mathrm{cyl}}(\param, \delta, A) \big)=
\frac{\displaystyle \int_{{\mathcal{H}}_1(\kappa)} N_{\mathrm{cyl}}(X, \delta, A) \, d\nu (X)}
{\nu\big( {\mathcal{H}}_1(\kappa) \big)}
\leq C \cdot g \cdot \delta^2 \cdot (1-A)^{2g+\ell-3}.
\]
In particular, for ${\mathcal{H}_{\mathrm{thin\text{-}cyl}}}(\delta, A)$ the set of translation surfaces $(X,\omega) \in {\mathcal{H}}_1(\kappa)$ for which $N_{\mathrm{cyl}}(X, \delta, A)$ is not zero, we have
\begin{equation*}
\frac{\nu\left({\mathcal{H}_{\mathrm{thin\text{-}cyl}}}(\delta, A)\right)}{\nu\left({\mathcal{H}}_1(\kappa)\right)}
\leq C \cdot g \cdot \delta^2 \cdot (1-A)^{2g+\ell-3}
.
\end{equation*}
\end{introthm}
For $A=0$, this is analogous to the estimate given by Mirzakhani
\cite[Theorem 4.2]{Mirzakhani} for the Weil--Petersson volume of the set ${\mathcal{M}}_g^\delta$ of Riemann surfaces of genus $g$ with
at least one closed curve of length less than or equal to $\delta$. Namely
\[
\frac{\Vol_{\mathrm{wp}} ({\mathcal{M}}_g^\delta)}{\Vol_{\mathrm{wp}} ({\mathcal{M}}_g)} \asymp \delta^2.
\]
In the setting of translation surfaces, another notion of thin part is the set of translation
surfaces that have a short saddle connection.
In fact, for a stratum ${\mathcal{H}}(\kappa)$ of translation surfaces and ${\mathcal{H}_{\mathrm{thin}}}(\delta)$ the set
of translation surfaces in ${\mathcal{H}}(\kappa)$ that have a saddle connection of length at
most $\delta$, Masur and Smillie showed (compare equation (7) in the proof of
Theorem~10.3 in \cite{masur_smillie_91})
\[
\nu\big( {\mathcal{H}_{\mathrm{thin}}}(\delta) \big) = O(\delta^2)
.
\]
However, the dependence of the constant on the genus or more generally on the stratum
was not known. For a complete treatment of this topic, we also find an estimate for the number
$N_{\mathrm{sc}}(X,\delta)$ of saddle connections of length at most $\delta$ in $X$.
\begin{introthm}[Expected number of saddle connections] \label{introthm:sc}
There exists a constant $C'>0$ such that for large values of $g$, we have
\[
\mathbb{E}_{{\mathcal{H}}_1(\kappa)} \big( N_{\mathrm{sc}}(\param, \delta) \big)=
\frac{\displaystyle \int_{{\mathcal{H}}_1(\kappa)} N_{\mathrm{sc}}(X, \delta) \, d\nu(X)}
{\nu\big( {\mathcal{H}}_1(\kappa) \big)}\leq C' \cdot g^2 \cdot \delta^2.
\]
In particular, for ${\mathcal{H}_{\mathrm{thin}}}(\delta)$ the set of translation surfaces $(X,\omega) \in {\mathcal{H}}_1(\kappa)$
for which $N_{\mathrm{sc}}(X, \delta)$ is not zero, we have
\begin{equation*}
\frac{ \nu\left({\mathcal{H}_{\mathrm{thin}}}(\delta)\right)}{\nu\big({\mathcal{H}}_1(\kappa)\big)}
\leq C' \cdot g^2 \cdot \delta^2.
\end{equation*}
\end{introthm}
\begin{rem}[Siegel--Veech constants and non-connectedness]
Theorems \ref{Thm:cyl} and \ref{introthm:sc} are proven in the appendix; in the proofs,
we make use of Siegel--Veech constants. Explicit formulas for
values of various Siegel--Veech constants were computed by Eskin--Masur--Zorich in
\cite{eskin_masur_zorich_03} in terms of combinatorial data and volumes of related
strata with lower complexity. However, their methods give a precise
answer only for connected strata. Hence, we do not compute exact values, rather we find
upper bounds for Siegel--Veech constants that suffice for our purposes.
Recently, Aggarwal in \cite{aggarwal_18} used the recursive formula for volume of
strata given by Eskin--Okounkov \cite{EO} to compute the asymptotic growth rate of these
volumes (see also \cite{chen_moeller_zagier_18} for the principal stratum and
\cite{sauvaget_18} for the minimal~stratum).
One should also compare \autoref{Thm:cyl} and \autoref{introthm:sc} to computations
for values of various Siegel--Veech constants given in the appendix in
\cite{aggarwal_18} written by Anton Zorich. For example,
\autoref{Thm:cyl} is very similar to \cite[Corollary 5]{aggarwal_18}. The difference is that
in \cite[Corollary~5]{aggarwal_18}, only saddle connections bounding a cylinder of
multiplicity $1$ are counted. However, higher multiplicity saddle connections
do not pose a problem; this has been made precise by Aggarwal in \cite{aggarwal_18_SV}.
Also, the assumption on area being at least~$A$ contributes a factor of $(1-A)^{2g-2}$
to the estimate which is consistent with a result of Vorobets
\cite[Theorem 1.8]{vorobets_2005}. Hence, \autoref{Thm:cyl} and \autoref{introthm:sc}
essentially follow from a combination of these results. However, we write a detailed
proof in the case of ${\mathcal{H}}_1(2g-2)$, namely, we show that by a careful reading of
\cite{eskin_masur_zorich_03} and incorporating the estimates
given in \cite{aggarwal_18} one can obtain these theorems.
\end{rem}
\paragraph{Acknowledgements}
We would like to thank Jon Chaika for suggesting that the diameter of a generic
translation surface of high genus may go to zero which sparked this project.
We are indebted to Anton Zorich for very helpful discussions on calculations of
Siegel--Veech constants. We also thank Jon Chaika, Samuel Lelièvre,
Martin M\"oller, and Anton Zorich for their interest, helpful discussions, and pointing out
several references. Furthermore, we want to thank the referees for their careful reading and helpful suggestions on improving the paper. The first author ac\-knowledges support from NSF grant DMS 1607512, the second author from NSERC Discovery grant RGPIN 06486, and the third author from NSERC grant RGPIN 06521. Some of this work took place at the Fields Institute during the Thematic Program on Teichm\"uller Theory and its Connections to
Geometry, Topology and Dynamics.
\section{Four types of translation surfaces in \texorpdfstring{${\mathcal{H}}_1(\kappa)$}{a stratum}}
For this paper, a \emph{translation surface} $(X,\omega)$ is defined by a compact
connected Riemann surface~$X$, a finite set $\Sigma \subseteq X$, and a translation
structure on $X \setminus \Sigma$, i.e.\ a maximal atlas on $X \setminus \Sigma$ such
that the transition maps are translations. The second parameter $\omega$ refers to the
unique Abelian differential that is associated to a given translation structure on
$X \setminus \Sigma$.
The elements of $\Sigma$ correspond to zeros of $\omega$ and
are called \emph{singularities} of $(X,\omega)$. Every singularity $\sigma \in \Sigma$ is
a cone point of the translation structure with \emph{cone angle} $2\pi(k+1)$ where $k$
is the order of $\sigma$ as a zero of $\omega$. The total sum of the orders of the zeros
is equal to $2g-2$ where $g$ is the genus of~$X$. The translation structure defines also
a metric $d$ on $X$. With this metric, the \emph{diameter} of $X$ is defined to be
$\diam(X) \coloneqq \max\{d(x,y): x, y\in X\}$
and the \emph{covering radius} of $X$ is defined to be $\crad(X) \coloneqq \max \{d(x,\Sigma): x\in X\}$, where $d(x,\Sigma) = \min_{\sigma \in \Sigma} d(x,\sigma)$.
See \cite{strebel_84} and \cite{zorich_06} for background
information on translation surfaces.
Given a partition of $2g-2$ as a sum of integers $k_i\geq 1$, the \emph{stratum}
${\mathcal{H}}(k_1,\ldots,k_\ell)$ is defined to be the set of all translation surfaces $(X,\omega)$
of genus $g$ with $\ell$ singularities of orders $k_1,\ldots,k_\ell$. The subsets of translation
surfaces of area $1$ and area at most $1$ in ${\mathcal{H}}(k_1,\ldots,k_\ell)$ are denoted by
${\mathcal{H}}_1(k_1,\ldots,k_\ell)$ and ${\mathcal{H}}_{\leq 1}(k_1,\ldots,k_\ell)$, respectively.
A \emph{saddle connection} of $(X,\omega)$ is a geodesic segment from one
singularity of $(X, \omega)$ to another (not necessarily different)
singularity that is disjoint from $\Sigma$ in its interior. The translation structure
of~$(X, \omega)$ associates a vector in $\mathbb{C}$, called the \emph{holonomy vector}, to a given oriented saddle connection.
The holonomy vector of a saddle connection $\gamma$ can be expressed as $\int_\gamma \omega$.
A \emph{cylinder} in $(X,\omega)$ of \emph{circumference} $k>0$ and \emph{height} $a>0$ is an isometrically embedded Euclidean cylinder $\mathbb{R}/k\mathbb{Z} \times (0,a)$. We assume that every cylinder is maximal. This implies that there are saddle connections on both boundary components of the cylinder.
A saddle connection can also be thought of as an element of the relative homology group
of~$(X, \omega)$ relative to $\Sigma$. If ${\mathcal{B}}$ is a set of saddle connections that form a basis
for the relative homology, then the holonomy vectors of the elements of
${\mathcal{B}}$ determine $(X, \omega)$. For every $(X,\omega)$ and ${\mathcal{B}}$, there is a neighborhood $U$ of
$(X, \omega)$ in ${\mathcal{H}}(k_1,\ldots,k_\ell)$ such that for every $(X', \omega')$ in $U$,
all elements of~${\mathcal{B}}$ (thought of as elements in the relative homology group) can still
be represented in $(X', \omega')$ as saddle connections. Then the set of holonomy vectors
of saddle connections in ${\mathcal{B}}$ give coordinates for points in $U$.
We refer to this set of holonomy vectors as \emph{period coordinates} for ${\mathcal{H}}(k_1,\ldots,k_\ell)$ around $(X,\omega)$.
(See \cite{Masur-82} for details.)
The period coordinates give an embedding from $U$ to
$\mathbb{C}^{2g+\ell-1}$. The associated pullback measure in ${\mathcal{H}}(k_1,\ldots,k_\ell)$
is called the \emph{normalized Lebesgue measure} denoted by $\nu$ and was studied by Masur \cite{Masur-82} and Veech \cite{Veech-82}. This also defines
a measure on ${\mathcal{H}}_1(k_1,\ldots,k_\ell)$ in the following way. For an open set
$V \subseteq {\mathcal{H}}_1(k_1,\ldots,k_\ell)$, the measure is defined to be $\nu(U)$ where $U$ is the cone over $V$
of translation surfaces of area at most $1$. We abuse notation and denote the
measure on ${\mathcal{H}}_1(k_1,\ldots,k_\ell)$ also by $\nu$.
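For later use, we record how this measure behaves under scaling: if $U \subseteq {\mathcal{H}}_{\leq 1}(k_1,\ldots,k_\ell)$ is measurable and $0 < t \leq 1$, then scaling every translation surface in $U$ by the factor $\sqrt{t}$ multiplies each of the $2(2g+\ell-1)$ real period coordinates by $\sqrt{t}$ and the area by $t$. Hence
\begin{equation*}
\nu\left( \left\{ \left(\sqrt{t}\, X, \sqrt{t}\, \omega\right) : (X,\omega) \in U \right\} \right)
= \left(\sqrt{t}\,\right)^{2(2g+\ell-1)} \cdot \nu(U)
= t^{\,2g+\ell-1} \cdot \nu(U) .
\end{equation*}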
In the following, let $\kappa= (k_1,\ldots, k_\ell)$ be a fixed partition of $2g-2$ and ${\mathcal{H}}(\kappa)$ the corresponding stratum.
We show the main theorem by dividing the stratum ${\mathcal{H}}_1(\kappa)$ into four (not necessarily disjoint) parts and investigate the behaviour of the expected value of the covering radius separately for every part.
\begin{definition}[${\mathcal{H}_{\mathrm{small\text{-}diam}}}$, ${\mathcal{H}_{\mathrm{poor\text{-}cyl}}}$, ${\mathcal{H}_{\mathrm{rich\text{-}cyl}}}$, and ${\mathcal{H}_{\mathrm{rich\text{-}disk}}}$] \label{def:subsets_of_stratum}
For $g\geq 2$, define the following four subsets of ${\mathcal{H}}_1(\kappa)$:
\begin{itemize}
\item Let ${\mathcal{H}_{\mathrm{small\text{-}diam}}}$ be the subset of points $(X, \omega)$ where we have $\crad(X) < 18 \cdot \sqrt{\frac{\log g}{g}}$.
\item Let ${\mathcal{H}_{\mathrm{poor\text{-}cyl}}}$ be the subset of points $(X, \omega)$ where $\crad(X) \geq \frac{1}{\sqrt{g}}$ and where there exists a cylinder $C(X)$ of height at least $\crad(X)$ and $\area(C(X)) \leq \frac{1}{g}$. We call this the poor cylinder case.
\item Let ${\mathcal{H}_{\mathrm{rich\text{-}cyl}}}$ be the subset of points $(X, \omega)$ where $\crad(X) \geq \frac{1}{\sqrt{g}}$ and where there exists a cylinder $C(X)$ of height at least $\crad(X)$ and $\area(C(X)) \geq \frac{1}{g}$. We call this the rich cylinder case.
\item Let ${\mathcal{H}_{\mathrm{rich\text{-}disk}}}$ be the subset of points $(X, \omega)$ where $\crad(X) \geq 18 \cdot \sqrt{\frac{\log g}{g}}$ and where there exists an embedded disk $D(X)$ of diameter at least $\crad(X)$. We call this the rich disk case.
\end{itemize}
\end{definition}
Note that a given translation surface $(X,\omega)$ can be both in one of the cylinder cases and in the rich disk case.
Moreover, the choice of $C(X)$ in the cylinder case and $D(X)$ in the rich disk case is not canonical. However, for every translation surface $(X,\omega)$ in one of ${\mathcal{H}_{\mathrm{poor\text{-}cyl}}}$, ${\mathcal{H}_{\mathrm{rich\text{-}cyl}}}$, or ${\mathcal{H}_{\mathrm{rich\text{-}disk}}}$, we fix $C(X)$ or $D(X)$, respectively.
In particular, we fix $D(X)$ such that the holonomy vector defining the location of the center of $D(X)$ is locally constant in~${\mathcal{H}_{\mathrm{rich\text{-}disk}}}$ (see the proof of \autoref{lem:richdisk_taxing_isometric_embedding} for details on this choice).
\begin{lem}[The four cases cover ${\mathcal{H}}_1(\kappa)$]
For $g\geq 2$, we have
\begin{equation*}
{\mathcal{H}}_1(\kappa) = {\mathcal{H}_{\mathrm{small\text{-}diam}}} \cup {\mathcal{H}_{\mathrm{poor\text{-}cyl}}} \cup {\mathcal{H}_{\mathrm{rich\text{-}cyl}}} \cup {\mathcal{H}_{\mathrm{rich\text{-}disk}}}
.
\end{equation*}
\begin{proof}
Let $(X, \omega) \in {\mathcal{H}}_1(\kappa) \setminus {\mathcal{H}_{\mathrm{small\text{-}diam}}}$. Then there exists a point $x \in X$ such that $d(x,\Sigma) = \crad(X) \geq 18 \cdot \sqrt{\frac{\log g}{g}}$. In particular, there exists an immersed, locally flat, open disk around this point with radius $\crad(X)$.
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=0.8, rotate=110]
\draw[fill] (0,0) circle (2pt);
\draw (0,0) circle (4cm);
\draw (0,0) circle (2cm);
\draw[fill] (60:2) circle (2pt) node[left]{$z'$};
\draw[fill] (-60:2) circle (2pt) node[right]{$z$};
\draw[dotted] (60:2) -- (-60:2);
\draw[thick, dashed] (25.7:4) -- (154.3:4);
\draw[very thin] (60:2) -- (120:2);
\draw[thick, dashed] (-25.7:4) -- (-154.3:4);
\draw[very thin] (-60:2) -- (-120:2);
\end{tikzpicture}
\caption{If $z$ and $z'$ are identified in the smaller disk, then the dotted line between them is a core curve of a cylinder. Hence, the two dashed lines are identified and give a lower bound on the height of the cylinder.}
\label{fig:disk_wrapped_around_cylinder}
\end{center}
\end{figure}
Consider an immersed disk with the same center but radius $\frac{1}{2} \crad(X)$. If this disk is not embedded, then two points in the disk have to be identified. This defines a closed geodesic and hence a core curve of a cylinder. The circumference of this cylinder is the length of the core curve, which is at most $\crad(X)$. The height of this cylinder then has to be at least~$\crad(X)$ (see \autoref{fig:disk_wrapped_around_cylinder}). Furthermore, we have $\crad(X) \geq 18 \cdot \sqrt{\frac{\log g}{g}} \geq \frac{1}{\sqrt{g}}$.
So, if we are not in the rich disk case then we are in the (poor or rich) cylinder case.
\end{proof}
\end{lem}
\section{The poor cylinder case}
We will deal with the poor cylinder case and the rich cylinder case similarly by
estimating the measure of the subset of ${\mathcal{H}_{\mathrm{poor\text{-}cyl}}}$ (or ${\mathcal{H}_{\mathrm{rich\text{-}cyl}}}$) where the height of the cylinder
is in a certain given range. Essentially, we use a Riemann sum argument
to estimate the integral.
We will need the following result proven in the appendix.
\begin{repcor}{lem:measure_cylinders_small_circumference}[Measure of ${\mathcal{H}_{\mathrm{thin\text{-}cyl}}}(\delta)$]
There exists a constant $C>0$ such that for large values of $g$ and
${\mathcal{H}_{\mathrm{thin\text{-}cyl}}}(\delta)$ the set of translation surfaces $(X,\omega) \in {\mathcal{H}}_1(\kappa)$ for which $N_{\mathrm{cyl}}(X, \delta, 0)$ is not zero, we have
\begin{equation*}
\frac{\nu\left({\mathcal{H}_{\mathrm{thin\text{-}cyl}}}(\delta)\right)}{\nu\left({\mathcal{H}}_1(\kappa)\right)}
\leq C \cdot g \cdot \delta^2 .
\end{equation*}
\end{repcor}
Recall that for $(X, \omega) \in {\mathcal{H}_{\mathrm{poor\text{-}cyl}}}$, the height of $C(X)$ is at least
$\crad(X) \geq \frac{1}{\sqrt{g}}$ and the area of~$C(X)$ is at most
$\frac{1}{g}$. We divide ${\mathcal{H}_{\mathrm{poor\text{-}cyl}}}$ into subsets based on the height of the cylinder.
For every~$n\geq 1$, define
\begin{equation*}
{\mathcal{H}_{\mathrm{poor\text{-}cyl}}}(n) =
\left\{ (X,\omega) \in {\mathcal{H}_{\mathrm{poor\text{-}cyl}}} : \height(C(X)) \in
\left(2^{n-1} \cdot \frac{1}{\sqrt{g}} , 2^n \cdot \frac{1}{\sqrt{g}} \right) \right\}
.
\end{equation*}
In particular, if $(X, \omega)$ is a translation surface in ${\mathcal{H}_{\mathrm{poor\text{-}cyl}}}(n)$ then we have that $(X,\omega)$ contains a cylinder whose circumference is at most $\frac{1}{2^{n-1} \cdot \sqrt{g}}$.
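Indeed, the area of a Euclidean cylinder is the product of its circumference and its height, so for $(X,\omega) \in {\mathcal{H}_{\mathrm{poor\text{-}cyl}}}(n)$ we have
\begin{equation*}
\mathrm{circumference}(C(X)) = \frac{\area(C(X))}{\height(C(X))}
\leq \frac{\sfrac{1}{g}}{2^{n-1} \cdot \sfrac{1}{\sqrt{g}}}
= \frac{1}{2^{n-1} \cdot \sqrt{g}} .
\end{equation*}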
With the above corollary, we can calculate the measure of ${\mathcal{H}_{\mathrm{poor\text{-}cyl}}}(n)$.
\begin{cor}[Measure of ${\mathcal{H}_{\mathrm{poor\text{-}cyl}}}(n)$] \label{cor:measure_thincylinder_n}
For large values of $g$ and the constant $C$ from \autoref{lem:measure_cylinders_small_circumference}, we have
\begin{equation*}
\nu\left({\mathcal{H}_{\mathrm{poor\text{-}cyl}}}(n) \right)
\leq \nu\left({\mathcal{H}_{\mathrm{thin\text{-}cyl}}}\left( \frac{1}{2^{n-1} \cdot \sqrt{g}} \right)\right)
\leq \frac{C}{2^{2n-2}} \cdot\nu\left({\mathcal{H}}_1(\kappa)\right)
.
\end{equation*}
\end{cor}
\begin{thm}[Expected covering radius on ${\mathcal{H}_{\mathrm{poor\text{-}cyl}}}$] \label{thm:expected_diameter_Hpc}
For large values of $g$ and the constant $C$ from \autoref{lem:measure_cylinders_small_circumference}, we have
\begin{equation*}
\frac{ \int_{{\mathcal{H}_{\mathrm{poor\text{-}cyl}}}} \crad \, d\nu}{\nu\left({\mathcal{H}}_1(\kappa)\right)}
\leq 4C \cdot \frac{1}{\sqrt{g}}
.
\end{equation*}
\begin{proof}
A cylinder contains a singularity on each of its two boundary components. The distance from a point half way across the cylinder to the set of singularities is at least one half of the height of the cylinder, so the height is at most twice the covering radius.
Hence, for an $(X,\omega) \in {\mathcal{H}_{\mathrm{poor\text{-}cyl}}}(n)$, we have
$\crad(X) \in \left(2^{n-2} \cdot \frac{1}{\sqrt{g}}, \, 2^{n} \cdot \frac{1}{\sqrt{g}} \right)$.
We can now calculate the integral by using ${\mathcal{H}_{\mathrm{poor\text{-}cyl}}} = \cup_{n\geq 1} {\mathcal{H}_{\mathrm{poor\text{-}cyl}}}(n)$.
For the third inequality below we use \autoref{cor:measure_thincylinder_n}.
\begin{align*}
\int_{{\mathcal{H}_{\mathrm{poor\text{-}cyl}}}} \crad \, d\nu
&\leq \sum_{n=1}^\infty \int_{{\mathcal{H}_{\mathrm{poor\text{-}cyl}}}(n)} \crad \, d\nu \\
&\leq \sum_{n=1}^\infty \nu\left({\mathcal{H}_{\mathrm{poor\text{-}cyl}}}(n)\right) \cdot 2^{n} \cdot \frac{1}{\sqrt{g}} \\
&\leq \sum_{n=1}^\infty \frac{C}{2^{2n-2}} \cdot \nu\left({\mathcal{H}}_1(\kappa)\right) \cdot 2^{n} \cdot \frac{1}{\sqrt{g}} \\
&= 4C \cdot \frac{1}{\sqrt{g}} \cdot \nu\left({\mathcal{H}}_1(\kappa)\right) \cdot \sum_{n=1}^\infty \frac{1}{2^{n}} \\
&= 4C \cdot \frac{1}{\sqrt{g}} \cdot \nu\left({\mathcal{H}}_1(\kappa)\right)
\end{align*}
This finishes the proof of the statement.
\end{proof}
\end{thm}
\section{The rich cylinder case}
We can do a similar approach for the rich cylinder case as in the poor cylinder case.
Recall that for $(X, \omega) \in {\mathcal{H}_{\mathrm{rich\text{-}cyl}}}$, the height of the cylinder $C(X)$ is at least
$\crad(X) \geq \frac{1}{\sqrt{g}}$
and the area of $C(X)$ is at least $\frac{1}{g}$.
For $n, m \geq 1$, consider ${\mathcal{H}_{\mathrm{rich\text{-}cyl}}}(n,m)$ to be the subset of ${\mathcal{H}_{\mathrm{rich\text{-}cyl}}}$ where the height of $C(X)$
is in $\left(2^{n-1} \cdot \frac{1}{\sqrt{g}}, 2^n \cdot \frac{1}{\sqrt{g}}\, \right)$ and the area is in $\left( m \cdot \frac{1}{g}, (m+1) \cdot \frac{1}{g}\, \right)$.
The area of a translation surface in ${\mathcal{H}_{\mathrm{rich\text{-}cyl}}}(n,m)$ is $1$, therefore ${\mathcal{H}_{\mathrm{rich\text{-}cyl}}}(n,m)=\emptyset$ for $m \geq g$. We will state bounds on the volume of ${\mathcal{H}_{\mathrm{rich\text{-}cyl}}}(n,m)$ for all $m\geq 1$ noting that the statements are trivially true for $m \geq g$.
Since the area of the cylinder $C(X)$ for an $(X,\omega) \in {\mathcal{H}_{\mathrm{rich\text{-}cyl}}}(n,m)$ is bounded from above by $(m+1) \cdot \frac{1}{g}$ and the height is bounded from below by $2^{n-1} \cdot \frac{1}{\sqrt{g}}$, the circumference of $C(X)$ is bounded from above by~$\frac{m+1}{2^{n-1}} \cdot \frac{1}{\sqrt{g}}$.
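In formulas, as in the poor cylinder case,
\begin{equation*}
\mathrm{circumference}(C(X)) = \frac{\area(C(X))}{\height(C(X))}
\leq \frac{(m+1) \cdot \sfrac{1}{g}}{2^{n-1} \cdot \sfrac{1}{\sqrt{g}}}
= \frac{m+1}{2^{n-1}} \cdot \frac{1}{\sqrt{g}} .
\end{equation*}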
With \autoref{thm:measure_cylinders_large_area}, we can calculate the measure of ${\mathcal{H}_{\mathrm{rich\text{-}cyl}}}(n,m)$.
\begin{cor}[Measure of ${\mathcal{H}_{\mathrm{rich\text{-}cyl}}}(n,m)$] \label{cor:measure_thincylinder_n_m}
For large values of $g$ and the constant $C$ from \autoref{thm:measure_cylinders_large_area}, and for all $n,m \geq 1$
\begin{equation*}
\nu\left({\mathcal{H}_{\mathrm{rich\text{-}cyl}}}(n,m) \right)
\leq \frac{4C \cdot (m+1)^2}{2^{2n}} \cdot e^{-m} \cdot \nu({\mathcal{H}}_1(\kappa))
.
\end{equation*}
\begin{proof}
This follows directly from \autoref{thm:measure_cylinders_large_area} with the following calculation:
\pagebreak[2]
\begin{align*}
\nu( {\mathcal{H}_{\mathrm{rich\text{-}cyl}}}(n, m) )
& \leq \nu\left( {\mathcal{H}_{\mathrm{thin\text{-}cyl}}} \left( \frac{m+1}{2^{n-1}} \cdot \frac{1}{\sqrt{g}} , \, \frac{m}{g} \right) \right) \\
& \leq C \cdot g \cdot \frac{(m+1)^2}{2^{2n-2}} \cdot \frac{1}{g} \cdot \left( 1- \frac{m}{g} \right)^{2g+\ell-3} \cdot \nu({\mathcal{H}}_1(\kappa)) \\
& \leq \frac{4C \cdot (m+1)^2}{2^{2n}} \cdot \left( e^{- \sfrac{m}{g}} \right)^{2g+\ell-3} \cdot \nu({\mathcal{H}}_1(\kappa)) \\
& \leq \frac{4C \cdot (m+1)^2}{2^{2n}} \cdot e^{-m} \cdot \nu({\mathcal{H}}_1(\kappa)) \qedhere
\end{align*}
\end{proof}
\end{cor}
\begin{thm}[Expected covering radius on ${\mathcal{H}_{\mathrm{rich\text{-}cyl}}}$] \label{thm:expected_diameter_Hrc}
For large values of $g$ and the constant $C$ from \autoref{thm:measure_cylinders_large_area}, we have
\begin{equation*}
\frac{ \int_{{\mathcal{H}_{\mathrm{rich\text{-}cyl}}}} \crad \, d\nu}{\nu\left({\mathcal{H}}_1(\kappa)\right)}
\leq 44C \cdot \frac{1}{\sqrt{g}}
.
\end{equation*}
\begin{proof}
Note that the height of $C(X)$ cannot be larger than twice the covering radius of $X$. Hence, for an $(X,\omega) \in {\mathcal{H}_{\mathrm{rich\text{-}cyl}}}(n,m)$, we have
$\crad(X) \in \left(2^{n-2} \cdot \frac{1}{\sqrt{g}}, \, 2^{n} \cdot \frac{1}{\sqrt{g}} \right)$.
We can now calculate the integral by using ${\mathcal{H}_{\mathrm{rich\text{-}cyl}}} = \cup_{m\geq 1} \cup_{n\geq 1} {\mathcal{H}_{\mathrm{rich\text{-}cyl}}}(n,m)$.
For the third inequality below we use \autoref{cor:measure_thincylinder_n_m}.
\begin{align*}
\int_{{\mathcal{H}_{\mathrm{rich\text{-}cyl}}}} \crad \, d\nu
&\leq \sum_{m=1}^\infty \sum_{n=1}^\infty \int_{{\mathcal{H}_{\mathrm{rich\text{-}cyl}}}(n,m)} \crad \, d\nu \\
&\leq \sum_{m=1}^\infty \sum_{n=1}^\infty \nu\left({\mathcal{H}_{\mathrm{rich\text{-}cyl}}}(n,m)\right) \cdot 2^{n} \cdot \frac{1}{\sqrt{g}} \\
&\leq \sum_{m=1}^\infty \sum_{n=1}^\infty \frac{4C \cdot (m+1)^2}{2^{2n}} \cdot e^{-m} \cdot \nu({\mathcal{H}}_1(\kappa)) \cdot 2^{n} \cdot \frac{1}{\sqrt{g}} \\
&= 4C \cdot \frac{1}{\sqrt{g}} \cdot \nu\left({\mathcal{H}}_1(\kappa)\right) \cdot \sum_{m=1}^\infty (m+1)^2 \cdot e^{-m} \cdot \sum_{n=1}^\infty \frac{1}{2^{n}} \\
&= 4C \cdot \frac{1}{\sqrt{g}} \cdot \nu\left({\mathcal{H}}_1(\kappa)\right) \cdot \sum_{m=1}^\infty (m+1)^2 \cdot e^{-m}
\end{align*}
We now find a bound for the sum in this term. Note that $( (m+1)^2 \cdot e^{-m} )_{m \geq 1}$ is a decreasing sequence. In particular, we have for every $m\geq 6$:
\begin{equation*}
\frac{(m+2)^2 \cdot e^{-(m+1)}}{(m+1)^2 \cdot e^{-m}}
= \left( \frac{m+2}{m+1} \right)^2 \cdot e^{- 1}
\leq \left( \frac{8}{7} \right)^2 \cdot \frac{1}{e}
< \frac{1}{2}
.
\end{equation*}
Hence, the first six terms are each at most the first term, while the tail $\sum_{m \geq 7} (m+1)^2 \cdot e^{-m}$ is bounded by a geometric series with ratio $\frac{1}{2}$ and hence by the sixth term, which is again at most the first term. Therefore the sum is bounded by seven times the first term; that is, we have
\begin{equation*}
\sum_{m=1}^\infty (m+1)^2 \cdot e^{-m}
\leq 7 \cdot 2^2 \cdot e^{- 1}
< 11
.
\end{equation*}
This finally shows the statement.
\end{proof}
\end{thm}
\section{The rich disk case}
Recall that ${\mathcal{H}_{\mathrm{rich\text{-}disk}}}$ is the subset of translation surfaces $(X, \omega)$ in ${\mathcal{H}}_1(\kappa)$ where $\crad(X) \geq 18 \cdot \sqrt{\frac{\log g}{g}}$ and where there exists an embedded disk of diameter at least $\crad(X)$.
Note that the area of an embedded disk can never be larger than $1$, hence the diameter of the disk has to be smaller than $\frac{2}{\sqrt{\pi}}$. Since on ${\mathcal{H}_{\mathrm{rich\text{-}disk}}}$ the covering radius is at most the diameter of the embedded disk $D(X)$, the covering radius of a translation surface in ${\mathcal{H}_{\mathrm{rich\text{-}disk}}}$ is globally bounded by $\frac{2}{\sqrt{\pi}}$. In particular, for small genera we have $18 \cdot \sqrt{\frac{\log g}{g}} > \frac{2}{\sqrt{\pi}}$ and hence ${\mathcal{H}_{\mathrm{rich\text{-}disk}}}$ is empty.
The idea in the rich disk case is to take some area from the large embedded disk by removing a parallelogram and to distribute this area to the rest of the surface.
Note that when doing so, we leave the stratum and change the topology of the surface. We will show that we end up in a subset of the image stratum of small volume. We will argue that this forces the set of surfaces with large embedded disks to have small volume as well. In what follows, it will be convenient to consider surfaces of area less than or equal to $1$.
Let ${\mathcal{H}_{\leq 1, \mathrm{rich\text{-}disk}}}\subseteq {\mathcal{H}}_{\leq 1}(\kappa)$ be the cone over the set ${\mathcal{H}_{\mathrm{rich\text{-}disk}}} \subseteq {\mathcal{H}}_1(\kappa)$, that is, the set of translation surfaces
that are obtained from some $(X, \omega) \in {\mathcal{H}_{\mathrm{rich\text{-}disk}}}$ by scaling by a factor
$\sqrt \lambda$ with $0 < \lambda \leq 1$.
We now describe the parallelogram that is to be removed and explore its properties.
For the fixed $g\geq 2$, set $\xi = \sqrt{\frac{3}{2}} \cdot \sqrt{\frac{\log g}{g}}$.
Let $(X,\omega) \in {\mathcal{H}_{\mathrm{rich\text{-}disk}}}$ and let $(x_1,x_2,x_3) \in (-\xi,\xi)^6 \subseteq \mathbb{C}^3$.
By definition, there is a choice of an embedded disk $D(X)$ in $X$ with diameter
\[
d \geq \crad(X) \geq 18 \cdot \sqrt{\frac{\log g}{g}}
\]
and center~$c$.
We consider the parallelogram $P = P(X, x_1, x_2, x_3)$ in $X$ with center $c' = c+x_1$ and edges $(3\xi, 0) + x_2$ and $(0, 3\xi) + x_3$.
\begin{lem}[Properties of $P(X, x_1, x_2, x_3)$] \label{lem:properties_removed_parallelogram}
For $(X,\omega) \in {\mathcal{H}_{\mathrm{rich\text{-}disk}}}$ and $(x_1,x_2,x_3) \in (-\xi, \xi)^6$, the parallelogram $P=P(X, x_1, x_2, x_3)$ has the following properties:
\begin{enumerate}
\item The parallelogram $P$ is embedded.
\item The edges of $P$ are shorter than any geodesic segment from a corner of $P$ to itself or to another corner (except possibly the diagonals of $P$).
\item We have
\begin{equation*}
\frac{9}{2} \cdot \frac{\log g}{g} \leq \area(P(X, x_1, x_2, x_3)) \leq \frac{51}{2} \cdot \frac{\log g}{g}
.
\end{equation*}
\end{enumerate}
\begin{proof}
\begin{enumerate}
\item First note that the distance between $c'$ and $c$ is at most $\sqrt{2} \xi$. Second, the distance between $c'$ and a corner of the parallelogram is half the length of the corresponding diagonal, that is, it is bounded above by $\frac{1}{2} \cdot \sqrt{(3\xi + \xi + \xi)^2 + (3\xi + \xi + \xi)^2} = \frac{5\sqrt{2}}{2} \xi$. Therefore, every corner of $P$ has distance at most
\begin{equation*}
\frac{7\sqrt{2}}{2}\xi
= \frac{7\sqrt{3}}{2} \cdot \sqrt{\frac{\log g}{g}}
< 9 \cdot \sqrt{\frac{\log g}{g}}
\end{equation*}
from $c$ and hence is contained in $D(X)$. In particular, the whole parallelogram $P$ is contained in the embedded disk and therefore is embedded itself.
\item The length of an edge of $P$ is at most $\sqrt{(4\xi)^2 + \xi^2} = \sqrt{17}\xi$.
Note also that all the geodesic segments between corners of $P$ within $D(X)$ are edges or diagonals of $P$.
The distance from any corner of the parallelogram to the complement of $D(X)$ is at least
\begin{equation*}
\left( 9-\frac{7\sqrt{3}}{2} \right) \cdot \sqrt{\frac{\log g}{g}}
.
\end{equation*}
Therefore, any loop from a corner to itself or to another corner that leaves the disk has length at least
\begin{equation*}
\left( 18-7\sqrt{3} \right) \cdot \sqrt{\frac{\log g}{g}}
\geq \sqrt{\frac{51}{2}} \cdot \sqrt{\frac{\log g}{g}}
= \sqrt{17}\xi
,
\end{equation*}
so it is longer than any edge of the parallelogram.
\item The area of $P(X, x_1, x_2, x_3)$ can be calculated as the determinant of the matrix with entries $(3\xi, 0) + x_2$ and $(0, 3\xi) + x_3$. As $x_2, x_3 \in (-\xi, \xi)^2$, the determinant is bounded from above by $4\xi \cdot 4\xi - (-\xi) \cdot \xi = 17 \xi^2 = \frac{51}{2} \cdot \frac{\log g}{g}$ and bounded from below by $2\xi \cdot 2\xi - \xi \cdot \xi = 3 \xi^2 = \frac{9}{2} \cdot \frac{\log g}{g}$.
\end{enumerate}
\end{proof}
\end{lem}
To define now the taxing map locally, fix $(X_0,\omega_0)\in {\mathcal{H}_{\leq 1, \mathrm{rich\text{-}disk}}}$.
Let $c_0$ be the center of the embedded disk $D(X_0)$ in $(X_0, \omega_0)$.
Let $v$ be a vector connecting a singularity $\sigma\in \Sigma$ to $c_0$ that can be represented
as a geodesic segment.
We can choose the center $c$ of the rich disks in the translation surfaces in ${\mathcal{H}_{\mathrm{rich\text{-}disk}}}$ in such a way that the vector $v$ joining the point in $\Sigma$ to $c$ is constant in a neighborhood of~$(X_0,\omega_0)$ in ${\mathcal{H}}(\kappa)$ (see the remark after \autoref{def:subsets_of_stratum}).
\begin{definition}[Taxing map]
Define
\[
T \colon {\mathcal{H}_{\leq 1, \mathrm{rich\text{-}disk}}} \times (-\xi, \xi)^6 \to {\mathcal{H}}_{\leq 1}(\kappa,2)
\]
in the following way.
For a fixed $(X_0,\omega_0) \in {\mathcal{H}_{\leq 1, \mathrm{rich\text{-}disk}}}$, consider a neighborhood in ${\mathcal{H}_{\leq 1, \mathrm{rich\text{-}disk}}}$. And for any $(X,\omega)$ in this neighborhood and a chosen $\lambda \in (0,1)$, consider $(X', \omega') \in {\mathcal{H}_{\leq 1, \mathrm{rich\text{-}disk}}}$ with $\area(X') = \lambda$ such that $(X,\omega)$ is the scaled version of $(X',\omega')$ (we multiply every saddle connection in $(X', \omega')$ by a factor
of $\frac 1 {\sqrt \lambda}$).
For an $(x_1,x_2,x_3) \in (-\xi,\xi)^6 \subseteq \mathbb{C}^3$, let $P'= \sqrt{\lambda} P$ be the image of $P=P(X,x_1,x_2,x_3)$ in $(X', \omega')$ under scaling by $\sqrt \lambda$.
Now, we define $(Y', \zeta')$ to be the translation surface where we remove $P'$ from $(X', \omega')$ and glue the two pairs of parallel boundaries of $P'$. This introduces a new singularity $\tau$ with cone angle $3 \cdot 2\pi$, hence $(Y', \zeta') \in {\mathcal{H}}_{\leq 1}(\kappa,2) \coloneqq {\mathcal{H}}_{\leq 1}(k_1,\ldots,k_\ell,2)$.
The glued parallel sides give rise to a pair of loops based at the singularity $\tau$ that are shortest under all loops based at $\tau$.
As shown in \autoref{lem:properties_removed_parallelogram},
\[
\area(Y') \leq \left(1 - \frac{9}{2} \cdot \frac{\log g}{g} \right) \cdot \area(X').
\]
We rescale $(Y', \zeta')$ by this factor $\left(1 - \frac{9}{2} \cdot \frac{\log g}{g}\right)^{-\frac{1}{2}}$, and obtain a surface $T\big( (X', \omega'), (x_1,x_2,x_3)\big)$ that has area less than or equal to $1$.
\end{definition}
We study the map $T$ by considering it as the concatenation of the maps
\[
T_P \colon {\mathcal{H}_{\leq 1, \mathrm{rich\text{-}disk}}} \times (-\xi, \xi)^6 \to {\mathcal{H}}_{\leq 1}(\kappa,2)
\]
(removing $P'$) and
\[
T_S \colon {\mathcal{H}}(\kappa,2) \to {\mathcal{H}}(\kappa,2)
\]
(scaling the surface by $\left(1 - \frac{9}{2} \cdot \frac{\log g}{g}\right)^{-\frac{1}{2}}$).
\begin{lem}[Properties of $T_P$] \label{lem:richdisk_taxing_isometric_embedding} \leavevmode
\begin{enumerate}
\item Every translation surface in ${\mathcal{H}}_{\leq 1}(\kappa,2)$ has at most $g$ preimages under $T_P$.
\item We have $\Jac(T_P, ((X', \omega'), (x_1,x_2,x_3))) = (\area(X', \omega'))^3$ on ${\mathcal{H}_{\leq 1, \mathrm{rich\text{-}disk}}} \times (-\xi,\xi)^6$.
\end{enumerate}
\begin{proof}
Let $\ell$ be the number of singularities of $(X,\omega)\in {\mathcal{H}}(\kappa)$. Then the complex dimension of ${\mathcal{H}}_{\leq 1}(\kappa)$ is $2g+\ell-1$. Hence, ${\mathcal{H}_{\leq 1, \mathrm{rich\text{-}disk}}} \times (-\xi,\xi)^6$ has complex dimension $2g+\ell+2$. Also, the complex dimension of ${\mathcal{H}}_{\leq 1}(\kappa, 2)$ is
$2(g+1)+(\ell+1)-1 = 2g+\ell+2$. Hence the image and the domain have the same dimension.
We now show that a translation surface $(Y',\zeta')$ in ${\mathcal{H}}_{\leq 1}(\kappa,2)$ has at most $g$ preimages under~$T_P$.
We first note that it may have no preimages at all. This might happen, for instance, if the area of~$(Y',\zeta')$ is close to $1$ and so the parallelogram that we will add later forces the area of the possible preimage to be larger than $1$. In that case $(Y',\zeta')$ is not in the image of $T_P$.
Recall that $(Y', \zeta')$ has genus $g+1$, hence it can have at most $g$ singularities with cone angle $6\pi$. We now fix a singularity $\tau$ of $(Y', \zeta')$ with cone angle $6\pi$ and call the set of the other singularities~$\Sigma$. If the two shortest loops based at $\tau$ only intersect at $\tau$ then $(Y',\zeta')$ is possibly contained in the image of $T_P$. If the two shortest loops are not unique or intersect, $(Y',\zeta')$ cannot be in the image of $T_P$ for this given $\tau$. Thus assume the condition on two unique shortest loops at $\tau$ holds. We construct the only possible preimage $(X', \omega')$ of $(Y',\zeta')$ under~$T_P$ for the given singularity $\tau$.
We cut open along these unique shortest loops at $\tau$ giving us the possibility to glue in a parallelogram $P'$. Doing so, all corners of the former parallelogram will be regular
points (that is, their cone angle is $2\pi$) and we obtain a translation surface $(X', \omega')$. The area may be greater than $1$. In this case again $(Y',\zeta')$ has no preimages. Therefore assume $(X', \omega')$ has area~$\lambda \in(0,1]$, that is $(X', \omega') \in {\mathcal{H}}_{\leq 1}(\kappa)$.
Let $(X, \omega) = \left(\frac 1{\sqrt \lambda} X', \frac 1{\sqrt \lambda} \omega'\right)$, that is $\area(X, \omega) = 1$.
Let $c' \in (X,\omega) $ be the preimage of the point in $(X', \omega')$ which is the center of the parallelogram that was glued in to obtain~$(X', \omega')$.
Let $y_2$ be the holonomy vector of the edge $e_2$ of the parallelogram $P'$ with the smaller imaginary part and $y_3$ be the holonomy vector of the edge $e_3$ of $P'$ with the larger imaginary part.
Define $x_1 = c' - c$, $x_2 = \frac{1}{\sqrt \lambda} y_2 - (3\xi, 0)$, and $x_3 = \frac{1}{\sqrt{\lambda}} y_3 - (0, 3\xi)$.
Then $\big( \sqrt \lambda (X, \omega), (x_1, x_2, x_3)\big)$ is the only
possible preimage of $(Y',\zeta')$ when $\tau$ is a fixed singularity with angle $6\pi$.
In particular, $T_P$ is locally injective. The equality of dimensions and the local injectivity imply that the image of $T_P$ is a subset of ${\mathcal{H}}_{\leq 1}(\kappa,2)$ with non-empty interior.
To compare the measure in the domain and in the image, we locally (around the points
$(X',\omega')$ and $(Y', \zeta')$) choose \emph{compatible period coordinates} on
${\mathcal{H}_{\leq 1, \mathrm{rich\text{-}disk}}} \subseteq {\mathcal{H}}(\kappa)$ and on~${\mathcal{H}}(\kappa, 2)$.
For any triangulation $\Delta$ of $(Y',\zeta')$ by saddle connections, there is an open
set $U_\Delta$ in~${\mathcal{H}}(\kappa,2)$ where the edges of $\Delta$ (as a topological triangulation)
can still be represented by saddle connections. Also, any subset ${\mathcal{E}}$ of edges of
$\Delta$ that does not contain a contractible loop can be extended to a set ${\mathcal{B}}$ of edges
of $\Delta$ that form a basis for the relative homology of~$Y'$ relative to $\Sigma \cup \{\tau\}$.
Let ${\mathcal{E}}$ be the set of the following three saddle connections in~$(Y', \zeta')$:
the two shortest saddle connections $e_2$ and $e_3$ that connect $\tau$ to itself
and a saddle connection~$e_1$ that connects a singularity in $\Sigma$ to $\tau$ and is disjoint from
the other two such that the holonomy vector of $e_1$ is
$y_1= \sqrt{\lambda} \left( v + x_1 - (x_2 + x_3)/2-(3\xi, 3\xi)/2 \right)$. (That is, $e_1$ connects a singularity in $\Sigma$ to the lower left corner of
the parallelogram spanned by $e_2$ and $e_3$.)
Complete ${\mathcal{E}}$ to a triangulation $\Delta$ of $(Y', \zeta')$ by successively adding edges
that are disjoint from previous edges. Let ${\mathcal{E}}_\Sigma$ be the set of edges in $\Delta$
that start and end in singularities in $\Sigma$. Then ${\mathcal{E}} \cup {\mathcal{E}}_\Sigma$ spans the relative homology.
This is because any edge $e$ connecting a singularity in $\Sigma$ to $\tau$ is either in ${\mathcal{E}}$ or there is
a contractible loop in $\Delta$ consisting of $e$, the edge $e_1$ in~${\mathcal{E}}$ connecting a singularity in
$\Sigma$ to $\tau$, and an edge path in ${\mathcal{E}}_\Sigma$. Let ${\mathcal{B}}_\Sigma$ be a subset of
${\mathcal{E}}_\Sigma$ so that ${\mathcal{B}} = {\mathcal{B}}_\Sigma \cup {\mathcal{E}}$ forms a basis for the relative homology
of $Y'$ relative to $\Sigma \cup \{\tau\}$.
By construction, edges in ${\mathcal{E}}_\Sigma$ can also be represented as saddle connections
in $(X', \omega')$. In fact, the edges associated to ${\mathcal{B}}_\Sigma$ form a basis for
the relative homology of $X'$ relative to $\Sigma$ as~$|{\mathcal{E}}| = 3$ and the complex
dimension of ${\mathcal{H}}_{\leq 1}(\kappa)$ is $3$ less than the dimension of ${\mathcal{H}}_{\leq 1}(\kappa,2)$.
By making $U_\Delta$ smaller, we can ensure that any point in ${\mathcal{H}_{\leq 1, \mathrm{rich\text{-}disk}}}$ that is a
preimage of a point in $U_\Delta$ is contained in an open set $U_X$ where edges in
${\mathcal{B}}_\Sigma$ can still be represented by saddle connections. We call the period
coordinates given by ${\mathcal{B}}_\Sigma$ for points in $U_X$ and the period coordinates given
by ${\mathcal{B}}$ for points in $U_\Delta$ a pair of compatible period coordinates.
On a pair of compatible period coordinates (together with the three vectors
$x_1$, $x_2$, $x_3$), we therefore
have that $T_P$ is affine: it is the identity on all but the last three vectors and
the coefficients of $e_1$, $e_2$ and $e_3$ depend only on $x_1$, $x_2$ and $x_3$.
That is, $T_P$ can be represented in the following form
\[
\begin{pmatrix}
I_{2g+\ell-1} & 0 \\
0 & A
\end{pmatrix}
\begin{pmatrix}
{\mathcal{B}}_\Sigma\\
{\mathcal{E}}
\end{pmatrix}
+ \begin{pmatrix}
0\\
B
\end{pmatrix}.
\]
Recalling that
\begin{align*}
y_1&= \sqrt \lambda \left( x_1 + v -\bigg(\frac32 \xi, \frac32 \xi\bigg)-\frac{x_2}{2}-\frac{x_3}{2}\right)\\
y_2 &= \sqrt \lambda \big( x_2 + (3\xi, 0)\big)\\
y_3 &= \sqrt \lambda \big( x_3 + (0, 3\xi) \big)
\end{align*}
we have
\[
\begin{pmatrix}
y_1 \\
y_2\\
y_3
\end{pmatrix} =
A {\mathcal{E}} + B
= \sqrt \lambda \begin{pmatrix}
1 & -\frac{1}{2}& -\frac{1}{2}\\
0 & 1 & 0 \\
0 & 0 & 1
\end{pmatrix}
\begin{pmatrix}
x_1 \\
x_2\\
x_3
\end{pmatrix}
+
\sqrt \lambda \begin{pmatrix}
-(\frac32 \xi, \frac32 \xi) + v \\
(3\xi, 0)\\
(0, 3\xi)
\end{pmatrix}
\]
The transformation above should be thought of as a map from
$\mathbb{R}^6$ to $\mathbb{R}^6$. Hence,
\begin{equation*}
\Jac(T_P, ((X',\omega'), (x_1,x_2,x_3))) = \sqrt{\lambda}^{\, 6}
= (\area (X', \omega'))^3.\qedhere
\end{equation*}
\end{proof}
\end{lem}
\begin{lem}[Jacobian of the scaling map] \label{lem:richdisk_taxing_Jacobian}
We have $\Jac(T_S) \geq g^{\sfrac{9}{2}}$ on the image of $T_P$.
\begin{proof}
Let $(X, \omega) \in {\mathcal{H}}_{\leq 1}(\kappa,2)$ be in the image of $T_P$. The map $T_S$ changes all of the $2 \cdot (2g+\ell+2) \geq 2 \cdot 2g$ real period coordinates uniformly by the factor $\left(1-\frac{\frac{9}{2} \cdot \log g}{g}\right)^{-\frac{1}{2}}$. As this factor is greater than $1$, taking into account only $2 \cdot 2g$ of the coordinates gives a lower bound on the Jacobian. For the calculation, we also use that for every $x \in (0,1)$, we have $1 \geq 1-x^2$ and hence $\frac{1}{1-x} \geq 1 +x$.
\begin{align*}
\Jac(T_S) \ & \geq \left( \left( 1-\frac{\frac{9}{2} \cdot \log g}{g} \right)^{-\frac{1}{2}} \right)^{2 \cdot 2g} \\
& = \left( \frac{1}{1- \frac{\frac{9}{2} \cdot \log g}{g} } \right)^{2g} \\
& \geq \left( 1 + \frac{\frac{9}{2} \cdot \log g}{g} \right)^{2g} \\
& \geq \left( 1 + \frac{1}{\frac{g}{\sfrac{9}{2} \cdot \log g}} \right)^{\left( \frac{g}{\sfrac{9}{2} \cdot \log g} + 1 \right) \cdot \frac{2g}{2\cdot \frac{g}{\sfrac{9}{2} \cdot \log g}}} \\
& \geq e^{\sfrac{9}{2} \cdot \log g} = g^{\sfrac{9}{2}}. \qedhere
\end{align*}
\end{proof}
\end{lem}
Combining \autoref{lem:richdisk_taxing_isometric_embedding} and \autoref{lem:richdisk_taxing_Jacobian}, we get the following bound for the measure.
\begin{cor}[Measure of ${\mathcal{H}_{\mathrm{rich\text{-}disk}}}$]
We have
\begin{equation*}
\nu \left( {\mathcal{H}_{\mathrm{rich\text{-}disk}}} \right)
\leq \frac{2}{27} \cdot (\log g)^{-3} \cdot g^{-\sfrac{1}{2}} \cdot \nu\left({\mathcal{H}}_1(\kappa,2)\right)
.
\end{equation*}
\begin{proof}
Recall that we use the same notation $\nu$ for the measure on the whole stratum ${\mathcal{H}}(\kappa)$ and the measure on ${\mathcal{H}}_1(\kappa)$.
By definition, we have
\[
\nu \left( {\mathcal{H}_{\mathrm{rich\text{-}disk}}} \right) = \nu \left( {\mathcal{H}_{\leq 1, \mathrm{rich\text{-}disk}}} \right)
\qquad\text{and}\qquad
\nu\left({\mathcal{H}}_1(\kappa,2)\right) = \nu \left( {\mathcal{H}}_{\leq 1}(\kappa, 2) \right).
\]
Denote the image of ${\mathcal{H}_{\leq 1, \mathrm{rich\text{-}disk}}} \times (-\xi, \xi)^6$ under $T$ (or $T_P$)
with $\im(T)$ (or $\im(T_P)$, respectively). From \autoref{lem:richdisk_taxing_Jacobian}
and from the fact that $\im(T) \subseteq {\mathcal{H}}_{\leq 1}(\kappa,2)$, we get
\begin{equation*}
\nu\left(\im(T_P)\right) \cdot g^{\sfrac{9}{2}}
\leq \nu\left(\im(T)\right)
\leq \nu\left( {\mathcal{H}}_{\leq 1}(\kappa,2) \right)
.
\end{equation*}
Let ${\mathcal{H}_{\geq \frac{1}{2}, \mathrm{rich\text{-}disk}}}$ be the set of translation surfaces in ${\mathcal{H}_{\leq 1, \mathrm{rich\text{-}disk}}}$ that have area at least~$\frac{1}{2}$.
As the complex dimension of ${\mathcal{H}_{\leq 1, \mathrm{rich\text{-}disk}}}$ is $2g+\ell-1$, we have
\begin{equation*}
\nu\left( {\mathcal{H}_{\geq \frac{1}{2}, \mathrm{rich\text{-}disk}}} \right)
= \left(1- \left( \frac{1}{2} \right)^{2g+\ell-1} \right) \cdot \nu\left( {\mathcal{H}_{\leq 1, \mathrm{rich\text{-}disk}}} \right)
\geq \frac{1}{2} \cdot \nu\left( {\mathcal{H}_{\mathrm{rich\text{-}disk}}} \right)
.
\end{equation*}
By \autoref{lem:richdisk_taxing_isometric_embedding}, we have
\[
g \cdot \nu\left(T_P\left({\mathcal{H}_{\geq \frac{1}{2}, \mathrm{rich\text{-}disk}}} \times (-\xi, \xi)^6 \right) \right) \geq \left(\frac{1}{2}\right)^3 \cdot \nu\left({\mathcal{H}_{\geq \frac{1}{2}, \mathrm{rich\text{-}disk}}} \right) \cdot (2\xi)^6
.
\]
This implies
\begin{align*}
\nu \left(T_P \left({\mathcal{H}_{\leq 1, \mathrm{rich\text{-}disk}}} \times (-\xi, \xi)^6 \right) \right)
& \geq \nu \left(T_P\left({\mathcal{H}_{\geq \frac{1}{2}, \mathrm{rich\text{-}disk}}} \times (-\xi, \xi)^6 \right)\right) \\
& \geq 8 \xi^6 \cdot g^{-1} \cdot \nu\left({\mathcal{H}_{\geq \frac{1}{2}, \mathrm{rich\text{-}disk}}} \right) \\
& \geq 4\xi^6 \cdot g^{-1} \cdot \nu\left( {\mathcal{H}_{\mathrm{rich\text{-}disk}}} \right) .
\end{align*}
Combining these measure comparisons and inserting $\xi = \sqrt{\frac{3}{2}} \cdot \sqrt{\frac{\log g}{g}}$, we can now deduce
\begin{align*}
\nu\left( {\mathcal{H}_{\mathrm{rich\text{-}disk}}} \right)
& \leq \frac{1}{4} \xi^{-6} \cdot g \cdot \nu(T_P({\mathcal{H}_{\leq 1, \mathrm{rich\text{-}disk}}} \times (-\xi, \xi)^6 )) \\
& \leq \frac{2}{27} \cdot \frac{g^3}{(\log g)^3} \cdot g \cdot g^{-\sfrac{9}{2}} \cdot \nu\left( {\mathcal{H}}_{\leq 1}(\kappa,2) \right)
. \qedhere
\end{align*}
\end{proof}
\end{cor}
\begin{thm}[Expected covering radius on ${\mathcal{H}_{\mathrm{rich\text{-}disk}}}$] \label{thm:expected_diameter_Hrd}
For large values of $g$, we have
\begin{equation*}
\frac{ \int_{{\mathcal{H}_{\mathrm{rich\text{-}disk}}}} \crad \, d\nu}{\nu\left({\mathcal{H}}_1(\kappa)\right)}
\leq \frac{1}{45} \cdot \sqrt{\frac{\log g}{g}}
.
\end{equation*}
\begin{proof}
Recall that on ${\mathcal{H}_{\mathrm{rich\text{-}disk}}}$, the covering radius of a translation surface is globally bounded by~$\frac{2}{\sqrt{\pi}}$.
This gives us the following calculation.
\begin{align*}
\int_{{\mathcal{H}_{\mathrm{rich\text{-}disk}}}} \crad \, d\nu
& \leq \frac{2}{\sqrt{\pi}} \cdot \nu\left({\mathcal{H}_{\mathrm{rich\text{-}disk}}}\right) \\
& \leq \frac{2}{\sqrt{\pi}} \cdot \frac{2}{27} \cdot (\log g)^{-3} \cdot g^{-\sfrac{1}{2}} \cdot \nu\left({\mathcal{H}}_1(\kappa,2)\right) \\
& \leq \frac{4}{27 \sqrt{\pi}} \cdot \sqrt{\frac{\log g}{g}} \cdot (\log g)^{-\frac{7}{2}} \cdot \nu\left({\mathcal{H}}_1(\kappa,2)\right)
\end{align*}
By \cite[Theorem 1.4]{aggarwal_18}, we have the estimate
$\nu \left({\mathcal{H}}_1(\kappa) \right) = \frac{4}{ \prod_{i=1}^\ell (k_i +1)} \cdot (1+{\mathcal{O}}(\frac{1}{g}))$
for a stratum ${\mathcal{H}}_1(\kappa)$ with $\kappa = (k_1,\ldots,k_\ell)$.
In particular, this also gives us
$\nu \left({\mathcal{H}}_1(\kappa,2) \right) = \frac{4}{3 \cdot \prod_{i=1}^\ell (k_i +1)} \cdot (1+{\mathcal{O}}(\frac{1}{g}))$.
Hence, for large values of $g$, we have
\[
\frac{\nu \left({\mathcal{H}}_1(\kappa,2)\right)}{\nu \left({\mathcal{H}}_1(\kappa)\right)} \leq \frac 12.
\]
Since $(\log g)^{-\frac{7}{2}} \to 0$ as $g \to \infty$, we have $\frac{4}{27 \sqrt{\pi}} \cdot \frac{1}{2} \cdot (\log g)^{-\frac{7}{2}} = \frac{2}{27 \sqrt{\pi}} \cdot (\log g)^{-\frac{7}{2}} \leq \frac{1}{45}$ for large values of $g$, which finishes the proof.
\end{proof}
\end{thm}
\section{Proof of the main theorem}
Now we can put together the ingredients for the proof of our main theorem.
We have shown in the previous three sections that the expected value of the covering radius goes to zero on ${\mathcal{H}_{\mathrm{poor\text{-}cyl}}}$, on ${\mathcal{H}_{\mathrm{rich\text{-}cyl}}}$, and on ${\mathcal{H}_{\mathrm{rich\text{-}disk}}}$.
When looking at the explicit statements in \autoref{thm:expected_diameter_Hpc} and \autoref{thm:expected_diameter_Hrc}, it is also clear that the bounds $4C \cdot \frac{1}{\sqrt{g}}$ and $44C \cdot \frac{1}{\sqrt{g}}$ for some constant $C$ are each smaller than $\frac{1}{2} \cdot \sqrt{\frac{\log g}{g}}$ for large values of $g$. In \autoref{thm:expected_diameter_Hrd}, the bound is $\frac{1}{45} \cdot \sqrt{\frac{\log g}{g}}$.
The only missing part is ${\mathcal{H}_{\mathrm{small\text{-}diam}}}$. However, this set is defined so that the covering radius is smaller than $18 \cdot \sqrt{\frac{\log g}{g}}$, hence the expected value of the covering radius is also bounded by~$18 \cdot \sqrt{\frac{\log g}{g}}$.
Therefore, by summing up these four expected values, we have proven \autoref{introthm:covering_radius} that we state here again.
\begin{thm}[Expected covering radius]
For large values of $g$, we have
\begin{equation*}
\mathbb{E}_{{\mathcal{H}}_1(\kappa)} ( \crad )
= \frac{\displaystyle \int_{{\mathcal{H}}_1(\kappa)} \crad(X) \, d\nu(X)}{\nu \big({\mathcal{H}}_1(\kappa)\big)}
\leq 20 \cdot \sqrt{\frac{\log g}{g}}
.
\end{equation*}
\end{thm}
\section{Appendix}
In this appendix, we find an upper bound for the number of
cylinders on a translation surface where an upper bound on the circumference and a lower bound on the area of the cylinder are given. We also give an upper bound for the number of saddle connections where an upper bound on the length is given.
We then use these bounds to give upper bounds for the measure of the thin part of the stratum~${\mathcal{H}}_1(\kappa)={\mathcal{H}}_1(k_1,\ldots,k_\ell)$.
Here, we understand the $\delta$--thin part in three different ways.
First, we consider the set ${\mathcal{H}_{\mathrm{thin\text{-}cyl}}}(\delta)$ of translation surfaces of area $1$ on which there exists a cylinder that has circumference at most~$\delta$.
Second, we do a similar computation for the subset ${\mathcal{H}_{\mathrm{thin\text{-}cyl}}}(\delta, A)\subseteq {\mathcal{H}_{\mathrm{thin\text{-}cyl}}}(\delta)$ where we add the assumption that the area of the thin cylinder is bounded from below by some constant $A \in [0,\frac{1}{2}]$.
Third, for a complete treatment of the volume question, we consider the usual thin part, namely the set ${\mathcal{H}_{\mathrm{thin}}}(\delta)$ of translation surfaces that contain a saddle connection of length at most~$\delta$ that do not necessarily bound a cylinder.
Recall that the complex dimension $d$ of ${\mathcal{H}}(\kappa)$ is equal to $2g+\ell-1$.
\begin{thm}[Expected number of cylinders] \label{thm:measure_cylinders_large_area}
Consider a stratum ${\mathcal{H}}(\kappa)$ of complex dimension $d$.
Given $\delta>0$, $0<A<1$, and $(X, \omega)\in {\mathcal{H}}_1(\kappa)$, let $N_{\mathrm{cyl}}(X, \delta, A)$ be the number of cylinders in $X$ where the circumference is at most~$\delta$ and the area is at least~$A$.
There exists a constant $C>0$ such that for
large values of $g$, we have
\[
\mathbb{E}_{{\mathcal{H}}_1(\kappa)} \big( N_{\mathrm{cyl}}(\param, \delta, A) \big)=
\frac{\int_{{\mathcal{H}}_1(\kappa)} N_{\mathrm{cyl}}(X, \delta, A) \, d\nu (X)}
{\nu\big( {\mathcal{H}}_1(\kappa) \big)}
\leq C \cdot g \cdot \delta^2 \cdot (1-A)^{d-2}.
\]
In particular, for ${\mathcal{H}_{\mathrm{thin\text{-}cyl}}}(\delta, A)$ the set of translation surfaces $(X,\omega) \in {\mathcal{H}}_1(\kappa)$ for which $N_{\mathrm{cyl}}(X, \delta, A)$ is not zero, we have
\begin{equation*}
\frac{\nu\left({\mathcal{H}_{\mathrm{thin\text{-}cyl}}}(\delta, A)\right)}{\nu\left({\mathcal{H}}_1(\kappa)\right)}
\leq C \cdot g \cdot \delta^2 \cdot (1-A)^{d-2}
.
\end{equation*}
\end{thm}
From \autoref{thm:measure_cylinders_large_area}, we can directly deduce the measure of ${\mathcal{H}_{\mathrm{thin\text{-}cyl}}}(\delta) = {\mathcal{H}_{\mathrm{thin\text{-}cyl}}}(\delta, 0)$ where we do not have any restriction on the area of the cylinder. This is used in the proof of \autoref{cor:measure_thincylinder_n}.
\begin{cor}[Measure of ${\mathcal{H}_{\mathrm{thin\text{-}cyl}}}(\delta)$]\label{lem:measure_cylinders_small_circumference}
For $C$ as in \autoref{thm:measure_cylinders_large_area} and ${\mathcal{H}_{\mathrm{thin\text{-}cyl}}}(\delta)$ the set of translation surfaces $(X,\omega) \in {\mathcal{H}}_1(\kappa)$ for which $N_{\mathrm{cyl}}(X, \delta, 0)$ is not zero, we have
\begin{equation*}
\frac{\nu\left({\mathcal{H}_{\mathrm{thin\text{-}cyl}}}(\delta)\right)}{\nu\left({\mathcal{H}}_1(\kappa)\right)}
\leq C \cdot g \cdot \delta^2 .
\end{equation*}
\end{cor}
For the sake of completeness, we also give the corresponding statement on the number of saddle connections.
\begin{thm}[Expected number of saddle connections] \label{Thm:sc}
For given $\delta>0$ and a translation surface $(X, \omega)\in {\mathcal{H}}_1(\kappa)$, let $N_{\mathrm{sc}}(X, \delta)$ be the number of saddle connections in $X$ of length at most $\delta$.
There exists a constant $C'>0$ such that for
large values of~$g$, we have
\[
\mathbb{E}_{{\mathcal{H}}_1(\kappa)} \big( N_{\mathrm{sc}}(\param, \delta) \big)=
\frac{\int_{{\mathcal{H}}_1(\kappa)} N_{\mathrm{sc}}(X, \delta) \, d\nu(X)}
{\nu\big( {\mathcal{H}}_1(\kappa) \big)}\leq C' \cdot g^2 \cdot \delta^2.
\]
In particular, for ${\mathcal{H}_{\mathrm{thin}}}(\delta)$ the set of translation surfaces $(X,\omega) \in {\mathcal{H}}_1(\kappa)$ for which $N_{\mathrm{sc}}(X, \delta)$ is not zero, we have
\begin{equation*}
\frac{\nu\left({\mathcal{H}_{\mathrm{thin}}}(\delta)\right)}{\nu\left({\mathcal{H}}_1(\kappa)\right)}
\leq C' \cdot g^2 \cdot \delta^2
.
\end{equation*}
\end{thm}
The key tools for these computations is to compute upper bounds for the Siegel--Veech constants
associated to the two different situations. Siegel--Veech constants were introduced to study asymptotics for various counting problems of saddle connections or cylinders. Such a counting problem could be the asymptotics as $L$ goes to $\infty$ of the number of saddle connections whose holonomy vectors have multiplicity $1$ and length at most $L$. Given a translation surface $(X, \omega)$ in ${\mathcal{H}}_1(\kappa)$ and a counting problem we are interested in, we can consider the set $V = V(X) \subset \mathbb{R}^2$ of holonomy vectors of the collection of saddle connections of interest (see \autoref{subsec:C} for details).
There is an action of $\SL(2,\mathbb{R})$ on each (connected component of a) stratum as well as on~$\mathbb{R}^2$. Then $V(X)$ is a set of vectors equivariant under these actions. The Siegel-Veech formula~\eqref{eq:transform} below holds for any $\SL(2,\mathbb{R})$--invariant measure; in particular for the normalized Lebesgue measure that we consider in this paper.
Now, for a connected component ${\mathcal{H}}$ of a stratum,
a compactly supported continuous function
$f\colon\, \mathbb{R}^2\to \mathbb R$ (for example a characteristic function of a ball)
and a set of vectors~$V(X)$ for every $(X, \omega) \in {\mathcal{H}}$ define $\hat f\colon\, {\mathcal{H}}\to \mathbb R$ by
\begin{equation} \label{Eq:f-hat}
\hat f(X,\omega)=\sum_{v\in V(X)} f(v).
\end{equation}
Then there is a constant $c(V)$ \cite[Theorem 0.5]{Veech_98}, called the \emph{Siegel--Veech
constant}, which is independent of $f$ and such that
\begin{equation}
\label{eq:transform}
\frac{1}{\nu({\mathcal{H}})}\int_{{\mathcal{H}}} \hat f \, d\nu
= c(V) \int_{\mathbb {R}^2} f \, dxdy.
\end{equation}
We will not compute these Siegel--Veech constants but determine upper bounds
which suffice to calculate upper bounds for the counting problems of interest.
For connected strata, Zorich in an appendix to \cite{aggarwal_18} gives estimates for the Siegel--Veech constants of cylinders without conditions on the area but with conditions on the bounding saddle connections, in particular that the multiplicity is $1$. Combining this with \cite{aggarwal_18_SV} where Aggarwal shows that the constants for higher multiplicity are negligible in comparison to multiplicity $1$, we have a first estimate for connected strata without the area constraint on cylinders. We now have to add arguments for the area of the cylinder, arguments for non-connected strata, sum up the different types of cylinders, and compare the bound for cylinders with the one for saddle connections.
For the sake of clarity, we use the first four subsections of the appendix to
carry out the computations in detail in the (non-connected) minimal stratum ${\mathcal{H}}_1(2g-2)$.
To do this, we follow the arguments in \cite{eskin_masur_zorich_03}, but additionally consider the area condition for cylinders. In particular, we show that the Siegel--Veech constant for cylinders with bounding saddle connections of multiplicity greater than $1$ is dominated by the one for multiplicity~$1$ as $g$ goes to $\infty$.
\subsection{Setting to calculate Siegel--Veech constants in the minimal stratum}
We now begin the computations in the case of the minimal stratum ${\mathcal{H}}_1(2g-2)$.
The key idea of the proof from \cite{eskin_masur_zorich_03} is that a set of $p$ homologous short saddle connections gives a decomposition of the translation surface into several pieces that are themselves surfaces with boundary.
In our setting, there are three different types of surfaces with boundary
that can appear in this construction (see \autoref{fig:types_subsurfaces}):
\begin{figure}
\begin{center}
\begin{tikzpicture}[scale=0.6]
\newcommand\hole{
\draw[very thick] (0,0) to[out=180,in=-90]
(-2,1.5) to[out=90,in=180]
(-1.2,2.3) to[out=0,in=100]
(-0.1,1.3) to[out=-70,in=90,looseness=0.5] (0,0);
\draw[fill] (0,0) circle (2pt);
}
\newcommand\genus{
\draw[thick] (1,0.05) to[bend left] (2.5,0.05);
\draw[thick] (0.8,0.1) to[bend right] (2.7,0.05);
}
\hole
\begin{scope}[rotate=180]
\hole
\end{scope}
\draw[thick] (-2,1.6) to[out=-110,in=80]
(-2.5,0.3) to[out=-100, in=90]
(-5,-3.5) to[out=-90, in=180]
(-3,-6) to[out=0, in=-90]
(0,-4) to[out=90, in=-150]
(1.6,-2.2);
\begin{scope}[rotate=55, xshift=-5.5cm]
\genus
\end{scope}
\begin{scope}[xshift=9cm]
\hole
\begin{scope}[rotate=180, xshift=-1.3cm, yshift=0.7cm]
\hole
\end{scope}
\draw[thick] (0,0) .. controls +(-110:1.5cm) and +(-130:1.2cm) .. (1.3,-0.7);
\draw[thick] (-2,1.6) to[out=-110,in=80]
(-2.5,0) to[out=-100, in=80]
(-5,-3) to[out=-100, in=180]
(-2.5,-6) to[out=0, in=-110, looseness=0.8]
(0.1,-4.7) to[out=70, in=-150]
(2.9,-2.9);
\begin{scope}[scale=0.9, rotate=50, xshift=-6.5cm, yshift=0.5cm]
\genus
\end{scope}
\begin{scope}[scale=0.9, rotate=50, xshift=-6.5cm, yshift=-1.5cm]
\genus
\end{scope}
\end{scope}
\begin{scope}[xshift=18cm]
\draw[very thick, color=gray!50] (-2.95,-3.3) to[out=85,in=180]
(-2.2,-2.7) to[out=0,in=100]
(-1.1,-3.7) to[out=-70,in=90,looseness=0.5] (-1,-5);
\hole
\draw[thick] (-2,1.6) -- ++(-1,-5);
\draw[thick] (0,0) -- (-1,-5);
\draw[very thick] (-1,-5) to[out=180,in=-95] (-3,-3.4);
\draw[fill] (-1,-5) circle (2pt);
\end{scope}
\end{tikzpicture}
\caption{The three types of surfaces with boundary: figure eight type, two holes type, cylinder (from left to right).}
\label{fig:types_subsurfaces}
\end{center}
\end{figure}
\begin{itemize}
\item figure eight type: this has one singularity and one boundary component which consists of two saddle connections; it can have any genus at least $1$
\item two holes type: this has two boundary components, each consisting of one saddle connection; it has two singularities and can have any genus at least $1$
\item cylinder: this is a special case of the two holes type where we have two boundary components but no genus
\end{itemize}
Choosing any sequence of such pieces and gluing them together in a cyclic way gives
us a new surface. Note that it is not possible to use only figure eight type surfaces without
at least one surface of another type: if we did so, the result would be a surface with two distinct points glued to each other -- which is not possible for a surface.
Also, we are in the special situation that the obtained surface should have only one singularity. Because of the cyclic gluing, we can have only one surface with two singularities in the sequence,
that is, either a surface of two holes type or a cylinder.
\subsection{Upper bound for Siegel--Veech constants for a given configuration
with one cylinder in the minimal stratum} \label{subsec:C}
In this section, we assume that we have a cylinder, so there is no surface with two holes. We will then recover Formula 13.1 from \cite[page 142]{eskin_masur_zorich_03}, additionally imposing the condition on the area. The case of a saddle connection that does not bound a cylinder, and hence where there is a surface with two holes, will be discussed later in \autoref{subsec:loops}.
We first need to deal with each combinatorial type separately.
Recall that the minimal stratum ${\mathcal{H}}_1(2g-2)$ has three connected components which we denote by ${\mathcal{H}}^j$ for $j=1,2,3$ from now on. The Siegel--Veech constant for a given connected component ${\mathcal{H}}^j$ will be denoted by $c^j(V)$. We study a specific combinatorial type, described as follows. Let $p$ be the multiplicity of the saddle connection in its homology class; that is, the number of pieces in which we cut the surface (not counting the cylinder as the two boundary components of the cylinder are not both counted). Hence, we have that the surface is divided into $p$ surfaces with boundary $(X_i, \omega_i)$ of genera $g_i\geq 1$ with $\sum_{i=1}^p g_i = g-1$ and additionally a cylinder. Suppose that the first surface in the cyclic order is the cylinder.
For the other $p$ surfaces, let $a_i = 2g_i-2$ be the order of the corresponding singularity, that is, the singularity on $(X_i,\omega_i)$ has a cone angle of $(a_i+1) \cdot 2\pi$. As these $p$ surfaces are of figure eight type, we can also choose the cone angle $(a_i'+1)\cdot 2\pi$ on one side of the figure eight and have the cone angle $(a_i''+1)\cdot 2\pi$ with $a_i = a_i' + a_i''$ on the other side. Note that there are $a_i+1$ possibilities for the ordered pair $(a_i', a_i'')$.
For convenience of notation, we will refer to the datum $(p, ((a_1', a_1''),\ldots,(a_p', a_p'')))$ by $(p, {\mathcal{C}})$. That is, a saddle connection has the combinatorial type $(p, {\mathcal{C}})$ if the $p$ saddle connections in its homology class decompose the surface $(X,\omega)$ into a cylinder and $p$ surfaces of figure eight type with singularities of order $a_1,\ldots, a_p$ where the cone angles are distributed to both sides of the figure eight as given by $(a_1', a_1''), \ldots, (a_p', a_p'')$.
Let $V_{\mathrm{cyl}}(p,{\mathcal{C}},A)$ be the set of holonomy vectors of saddle connections of combinatorial type $(p, {\mathcal{C}})$ where the corresponding cylinder has area at least~$A\in [0,1)$.
Respectively, there is a Siegel--Veech constant $c^j(V_{\mathrm{cyl}}(p,{\mathcal{C}},A))$ on the connected component~${\mathcal{H}}^j$ which corresponds to counting saddle connections of combinatorial type~$(p,{\mathcal{C}})$ where the cylinder has area at least $A$.
We proceed to find this $c^j(V_{\mathrm{cyl}}(p,{\mathcal{C}},A))$ by mimicking the calculations from \cite[Section~13]{eskin_masur_zorich_03}.
For this, let $\gamma$ be the holonomy vector of a saddle connection of combinatorial type $(p, {\mathcal{C}})$ and~$h$ be the height of the cylinder that $\gamma$ bounds.
Note that the surface $(X,\omega)$ that we build can have any area less than or equal to $1$.
However, it can be rescaled to have area $1$, with a cylinder whose circumference is bounded from above by $\delta$ and whose area is bounded from below by $A$. Hence, we have
\begin{equation*}
|\gamma| \leq \delta \cdot \sqrt{\area (X,\omega)}
\qquad \text{and} \qquad
h \cdot |\gamma|\geq A \cdot \area(X,\omega).
\end{equation*}
As $\area (X,\omega) = \sum_{i=1}^p \area (X_i,\omega_i)+h|\gamma|$, the first inequality is equivalent to
\begin{equation}
\label{eq:nontrivial}
h \geq \frac{|\gamma|}{\delta^2}-\frac{\sum_{i=1}^p \area(X_i,\omega_i)}{|\gamma|}
\end{equation}
whereas the second inequality is equivalent to
\begin{equation}
\label{eq:condition_on_h_from_cylinder_area}
h\geq \frac{A}{1-A} \cdot \frac{\sum_{i=1}^p \area(X_i,\omega_i)}{|\gamma|}
.
\end{equation}
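To spell out the two equivalences, abbreviate $S \coloneqq \sum_{i=1}^p \area(X_i,\omega_i)$ (a shorthand used only in this computation). Since $\area(X,\omega) = S + h|\gamma|$, the first inequality can be rewritten as
\begin{equation*}
|\gamma|^2 \leq \delta^2 \bigl( S + h|\gamma| \bigr)
\quad \Longleftrightarrow \quad
h|\gamma| \geq \frac{|\gamma|^2}{\delta^2} - S
\quad \Longleftrightarrow \quad
h \geq \frac{|\gamma|}{\delta^2}-\frac{S}{|\gamma|},
\end{equation*}
and the second one as
\begin{equation*}
h|\gamma| \geq A \bigl( S + h|\gamma| \bigr)
\quad \Longleftrightarrow \quad
(1-A)\, h|\gamma| \geq A\, S
\quad \Longleftrightarrow \quad
h \geq \frac{A}{1-A}\cdot\frac{S}{|\gamma|}.
\end{equation*}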
Both of the lower bounds on $h$ have to be fulfilled and together they are sufficient to obtain a translation surface by the construction given in \cite[Section~13]{eskin_masur_zorich_03}. We distinguish two cases now, depending on whether the bound from \eqref{eq:nontrivial} or the bound from \eqref{eq:condition_on_h_from_cylinder_area} is larger and implies the other inequality.
Note~that
\begin{equation*}
\frac{|\gamma|}{\delta^2}-\frac{\sum_{i=1}^p \area (X_i,\omega_i)}{|\gamma|}
\geq \frac{A}{1-A} \cdot \frac{\sum_{i=1}^p \area (X_i,\omega_i)}{|\gamma|}
,
\end{equation*}
is equivalent to
\begin{equation*}
|\gamma|\geq \delta \sqrt{ \frac{\sum_{i=1}^p \area (X_i,\omega_i)}{1-A} }
.
\end{equation*}
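Indeed, multiplying both sides of the previous comparison by $|\gamma| > 0$ and again writing $S = \sum_{i=1}^p \area(X_i,\omega_i)$, it becomes
\begin{equation*}
\frac{|\gamma|^2}{\delta^2} - S \geq \frac{A}{1-A}\, S
\quad \Longleftrightarrow \quad
\frac{|\gamma|^2}{\delta^2} \geq \Bigl( 1 + \frac{A}{1-A} \Bigr) S = \frac{S}{1-A},
\end{equation*}
which, after taking square roots, is the stated condition on $|\gamma|$.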
For the following calculation, let $d=4g$ be the real dimension of ${\mathcal{H}}(2g-2)$ and $d_i = 4g_i$ be the real dimension of ${\mathcal{H}}(a_i)$. Note that we have $\sum_{i=1}^p d_i = d-4$.
Furthermore, we set
\[
r_i = \sqrt{\area(X_i, \omega_i)}
\qquad\text{and}\qquad
D(z) = \left\{ (r_1,\ldots,r_p) : \sum_{i=1}^p r_i^2 \leq z \right\}.
\]
Define ${\mathcal{H}_{\mathrm{thin\text{-}cyl}}}(\delta,{\mathcal{C}},A)$ to be the set of translation surfaces in ${\mathcal{H}_{\mathrm{thin\text{-}cyl}}}(\delta, A)$
where the saddle connection of length at most $\delta$ that bounds a cylinder of area
at least $A$ has the combinatorial type $(p,{\mathcal{C}})$. Similar to
\cite[Section 13.1]{eskin_masur_zorich_03}, we have
\begin{align*}
\nu&({\mathcal{H}_{\mathrm{thin\text{-}cyl}}}(\delta,{\mathcal{C}},A)) = \\
& WM \cdot
\Bigg(
\int_{D(1-A)} \prod_{i=1}^p r_i^{d_i-1} \,dr_i
\Bigg(
\int_{|\gamma| \leq \delta \sqrt{\frac {\sum r_i^2}{1-A}}} \int_{\frac{A}{1-A}\frac{\sum r_i^2}{|\gamma|} \leq h \leq \frac{1-\sum r_i^2}{|\gamma|}} \int_{0}^{|\gamma|} \,dt\,dh\,d\gamma +
\\
& \qquad \qquad \qquad \qquad \qquad +
\int_{\delta \sqrt{\frac {\sum r_i^2}{1-A}} \leq |\gamma| \leq \delta} \int_{\frac{|\gamma|}{\delta^2}-\frac{\sum r_i^2}{|\gamma|} \leq h \leq \frac{1-\sum r_i^2}{|\gamma|}}\int_0^{|\gamma|}\,dt\,dh\,d\gamma
\Bigg) \Bigg) + o(\delta^2)
\end{align*}
where $t$ is a twist parameter of the cylinder, $W = \prod \nu({\mathcal{H}}(a_i))$, $M$ is the combinatorial constant that counts how many different surfaces of area $1$ can be built for the fixed data $\delta$, $(p, {\mathcal{C}})$, and $A$, and $o(\delta^2)$ refers to a term that goes to zero faster than $\delta^2$ as $\delta$ goes to zero.
We first integrate over $t$ and $h$ and then over $\gamma$ to obtain
\begin{align*}
\nu({\mathcal{H}_{\mathrm{thin\text{-}cyl}}}&(\delta,{\mathcal{C}},A)) \\
= \ & W M \cdot \int_{D(1-A)}\prod_{i=1}^p r_i^{d_i-1}\Bigg(
\int_{|\gamma| \leq \delta \sqrt{\frac {\sum r_i^2}{1-A}}} \left( 1-\frac{1}{1-A} \sum r_i^2 \right) \, d\gamma \\
& \qquad\qquad\qquad\qquad\qquad + \int_{\delta \sqrt{\frac {\sum r_i^2}{1-A}} \leq |\gamma| \leq \delta} \left(1-\frac{|\gamma|^2}{\delta^2} \right) \, d\gamma \Bigg) \prod_{i=1}^p dr_i + o(\delta^2) \\
= \ & W M\cdot \delta^2\cdot \pi\cdot \int_{D(1-A)}\prod_{i=1}^p r_i^{d_i-1}
\left(1-\frac{1}{1-A}\sum r_i^2\right)\left(\frac{1}{1-A} \sum r_i^2\right)\prod_{i=1}^p dr_i \\
& \qquad \qquad + W M\cdot \delta^2\cdot \pi\cdot \int_{D(1-A)}\prod_{i=1}^p r_i^{d_i-1}
\left(1-\frac{1}{1-A}\sum r_i^2\right) \prod_{i=1}^p dr_i \\
& \qquad \qquad - W M \cdot \pi\cdot \int_{D(1-A)} \prod_{i=1}^p r_i^{d_i-1}
\left( \int_{\delta\sqrt{\frac{\sum r_i^2}{1-A}}}^\delta \, \frac{s^2}{\delta^2} \cdot 2s\,ds \right) \prod_{i=1}^p dr_i +o(\delta^2)
.
\end{align*}
With the substitution $u=\frac{s^2}{\delta^2}$, we can write the innermost integral in the last summand~as
$$\delta^2 \cdot \int_{\frac{1}{1-A}\sum r_i^2}^{1}u \,du ,$$
giving
\begin{align*}
\nu(&{\mathcal{H}_{\mathrm{thin\text{-}cyl}}}(\delta,{\mathcal{C}},A))= \\
= & W M\cdot \delta^2\cdot \pi \cdot \int_{D(1-A)}\prod_{i=1}^p r_i^{d_i-1}
\left(1-\frac{1}{1-A}\sum r_i^2\right)\left(1 + \frac{1}{1-A} \sum r_i^2\right)\prod_{i=1}^p dr_i \\
& - W M \cdot \delta^2 \cdot \pi \cdot \int_{D(1-A)} \prod_{i=1}^p r_i^{d_i-1}
\left(\frac{1}{2} - \frac{1}{2(1-A)^2} \left( \sum r_i^2 \right)^2 \right) \prod_{i=1}^p dr_i +o(\delta^2) \\
= & \pi \delta^2 \cdot W M \cdot \int_{D(1-A)}\prod_{i=1}^p r_i^{d_i-1} \\
& \cdot \left( 1 - \frac{1}{(1-A)^2} \left( \sum r_i^2 \right)^2 - \frac{1}{2} + \frac{1}{2(1-A)^2} \left( \sum r_i^2 \right)^2 \right) \prod_{i=1}^p dr_i +o(\delta^2) \\
= & \pi \delta^2\cdot W M \cdot \int_{D(1-A)}\prod_{i=1}^p r_i^{d_i-1}
\left( \frac{1}{2}-\frac{1}{2}\left(\sum \left(\frac{r_i}{\sqrt{1-A}} \right)^2\right)^2 \right) \prod dr_i +o(\delta^2)\\
= & \pi \delta^2\cdot W M \cdot \int_{D(1-A)}\prod_{i=1}^p r_i^{d_i-1} \\
& \cdot \left( \left( 1-\sum\left(\frac{r_i}{\sqrt{1-A}}\right)^{2}\right)-\frac{1}{2}\left(1-\sum \left(\frac{r_i}{\sqrt{1-A}}\right)^{2}\right)^{2} \right) \prod dr_i+o(\delta^2) .
\end{align*}
Now we make the substitution $s_i=\frac{r_i}{\sqrt{1-A}}$ and, since $\sum_{i=1}^p (d_i-1) =d-4-p$ and $d=4g$, we~get
\begin{align*}
\nu({\mathcal{H}_{\mathrm{thin\text{-}cyl}}}&(\delta,{\mathcal{C}},A)) \\
= &(1-A)^{\frac{p}{2}} \cdot \prod_{i=1}^p(1-A)^{\frac{(d_i-1)}{2}}\cdot \pi\delta^2 \cdot W M \\
& \qquad \qquad \cdot \int_{D(1)}\prod_{i=1}^p s_i^{d_i-1} \left(\left(1-\sum s_i^2\right)-\frac{1}{2}\left(1-\sum s_i^2\right)^2\right) \prod ds_i \\
= & (1-A)^{2g-2} \cdot \pi\delta^2 \cdot W M \\
& \qquad \qquad \cdot \int_{D(1)}\prod_{i=1}^p s_i^{d_i-1}
\left(\left(1-\sum s_i^2\right)-\frac{1}{2}\left(1-\sum s_i^2\right)^2 \right) \prod ds_i+o(\delta^2)
.
\end{align*}
Although its appearance is different at first sight, this formula has precisely the same content as Formula 13.1 in \cite{eskin_masur_zorich_03} for $q=1$, multiplied by the factor $(1-A)^{2g-2}$ which accounts for the area bound. This factor is the same for all combinatorial types $(p, {\mathcal{C}})$; in particular, it is independent of $p$.
The role of the factor $(1-A)^{2g-2}$ was also shown by Vorobets in \cite[Theorem 1.8]{vorobets_2005}.
Now we find the combinatorial constant $M$ and investigate its dependence on $g$.
Here we may exactly use the formulae from \cite[Section 13.3]{eskin_masur_zorich_03}.
As we are in the minimal stratum, most of the factors in the formula are equal to $1$. In this particular case, we can use the following estimate:
\begin{equation*}
M \leq \prod_{i=1}^p (a_i+1)
\end{equation*}
The above expression for $\nu({\mathcal{H}_{\mathrm{thin\text{-}cyl}}}(\delta,{\mathcal{C}},A))$ can now be used to calculate
$\int_{{\mathcal{H}}_1(2g-2)} \hat f \, d\nu$ where~$\hat f$ is the function associated to
$V_{\mathrm{cyl}}(p,{\mathcal{C}},A)$ which Eskin--Masur--Zorich use to
find the associated Siegel--Veech constant. In our setting, the above integral is
still an upper bound for $\int_{{\mathcal{H}}^j} \hat f \, d\nu$ and we get an upper bound
for $c^j(V_{\mathrm{cyl}}(p,{\mathcal{C}},A))$. Following the reasoning in
\cite[page 135]{eskin_masur_zorich_03}, we have
\begin{align*}
c^j(V_{\mathrm{cyl}}(p,{\mathcal{C}},A))
\leq & M \cdot (1-A)^{2g-2} \cdot \frac{1}{2^{p-1}} \cdot \frac{\prod_i (\frac{d_i}{2}-1)!}{(\frac{d}{2}-2)!} \cdot \frac {\prod_{i=1}^p \nu({\mathcal{H}}(a_i))}{\nu({\mathcal{H}}^j)} \\
\leq & (1-A)^{2g-2} \cdot \frac{1}{2^{p-1}} \cdot \prod_{i=1}^p (a_i+1) \cdot \frac{\prod_i (\frac{d_i}{2}-1)!}{(\frac{d}{2}-2)!} \cdot \frac {\prod_{i=1}^p \nu({\mathcal{H}}(a_i))}{\nu({\mathcal{H}}^j)}
.
\end{align*}
In this formula, the term $\prod_{i=1}^p \nu({\mathcal{H}}(a_i))$ is the previous $W$ and the factor $\frac{(1-A)^{2g-2}}{2^{p-1}} \cdot \frac{\prod_i (\frac{d_i}{2}-1)!}{(\frac{d}{2}-2)!}$ comes from the integral.
We now replace the exact values $a_i=2g_i-2$, $d_i=4g_i$, and $d=4g$.
Furthermore, we have from \cite[Theorem 1.9]{sauvaget_18} that $\nu({\mathcal{H}}(a_i)) = \frac{4}{2g_i-1} \cdot (1+{\mathcal{O}}(\frac{1}{g}))$ and $\nu({\mathcal{H}}_1(2g-2)) = \frac{4}{2g-1} \cdot (1+{\mathcal{O}}(\frac{1}{g}))$.
Explicit bounds $\nu({\mathcal{H}}(a_i)) \leq \frac{4}{2g_i-1} \cdot (1 + \frac{2^{2^{200}}}{g_i})$ and $\nu({\mathcal{H}}_1(2g-2)) \leq \frac{4}{2g-1} \cdot (1 + \frac{2^{2^{200}}}{g})$ are given in the more general \cite[Theorem 1.4]{aggarwal_18}.
To avoid keeping track of the error term, we define $k\coloneqq 4(1+2^{2^{200}})$ and use the bounds $\nu({\mathcal{H}}(a_i)) \leq \frac{k}{2g_i-1}$ and $\nu({\mathcal{H}}_1(2g-2)) \leq \frac{k}{2g-1}$. Then we have
\begin{align*}
c^j(V_{\mathrm{cyl}}(p,{\mathcal{C}},A))
\leq & (1-A)^{2g-2} \cdot \frac{1}{2^{p-1}} \cdot \prod_{i=1}^p (2g_i-1) \cdot \frac{\prod_{i=1}^p (2g_i-1)!}{(2g-2)!} \cdot \frac {k^p}{\nu({\mathcal{H}}^j)\cdot \prod_{i=1}^p (2g_i-1)} \\
\leq & k^p \cdot (1-A)^{2g-2} \cdot \frac{\prod_{i=1}^p (2g_i-1)!}{(2g-2)!} \cdot \frac{1}{\nu({\mathcal{H}}^j)}
.
\end{align*}
\subsection{Counting of cylinders in the minimal stratum}
\label{subsec:bounds_Siegel_Veech_cylinders}
Now we want to calculate upper bounds for the Siegel--Veech
constant for saddle connections that bound a cylinder. For a given multiplicity $p$, we
consider all ways to decompose the surface into a cylinder and $p$ surfaces with boundary, and
all ways to decompose the angle of the singularity of order $a_i$ in each of the surfaces.
That is, for a connected component ${\mathcal{H}}^j$, we define $V_{\mathrm{cyl}}(p, A)$ to be the set of
holonomy vectors for all saddle connections bounding a cylinder of area at least $A$
with multiplicity $p$. Let $c^j(V_{\mathrm{cyl}}(p,A))$ be the associated
Siegel--Veech constant. Then, $c^j(V_{\mathrm{cyl}}(p,A))$ is calculated as
sum over all possible combinatorial types $(p,{\mathcal{C}}) = (p, ((a_1',a_1''),\ldots,(a_p',a_p'')))$ for fixed~$p$:
\begin{align*}
c^j(V_{\mathrm{cyl}}(p,A)) & = \sum_{{\mathcal{C}}} c^j\big(V_{\mathrm{cyl}}(p,{\mathcal{C}}, A)\big)\\
& \leq \sum_{g_1+\ldots+g_p = g-1} \prod_{i=1}^p (a_i+1) \cdot k^p \cdot (1-A)^{2g-2} \cdot \frac{\prod_{i=1}^p (2g_i-1)!}{(2g-2)! \cdot \nu({\mathcal{H}}^j)} \\
& \leq \frac{k^p \cdot (1-A)^{2g-2}}{\nu({\mathcal{H}}^j)} \cdot \sum_{g_1+\ldots+g_p = g-1} \frac{\prod_{i=1}^p (2g_i)!}{(2g-2)!}
\end{align*}
We use $a_i + 1 \leq 2g_i$ here to obtain the third line.
Note that $\prod_{i=1}^p (2g_i)!$ and $(2g-2)!$ have the same number of factors. We have that $\prod_{i=1}^p (2g_i)!$ is the largest when all but one $g_i$ are equal to $1$. In this situation, let $g_1$ be the largest one, that is, $g_1 = g-1 - (p-1) = g-p$ while $g_2 = \dots = g_p = 1$. Therefore, we have
\begin{equation*}
\prod_{i=1}^p (2g_i)! \leq 2^{p-1} \cdot (2g-2p)!
\end{equation*}
which implies
\begin{align*}
\frac{\prod_{i=1}^p (2g_i)!}{(2g-2)!}
\leq & 2^{p-1} \cdot \frac{(2g-2p)!}{(2g-2)!} \\
= & \frac{2^{p-1}}{(2g-2)(2g-3)\cdot \cdots \cdot (2g-2p+1)} \\
\leq & \frac{1}{(2g-2)(2g-3)\cdot \cdots \cdot (2g-p)}
\end{align*}
for $p\geq 2$ and $\frac{(2g_1)!}{(2g-2)!} = 1$ for $p = 1$.
This upper bound is independent of the choice of the $g_i$. There are $\binom{g-2}{p-1}$ choices of the~$g_i$.
This is because we can consider an ordered set with $g-1$ elements and divide it into $p$ subsets of cardinality $\geq 1$ by specifying which elements are the last in their corresponding subsets. The last one in the whole ordered set has to be the last one of a subset. Apart from that, we can choose any $p-1$ elements out of the remaining $g-2$ elements to be last ones.
Therefore, we can calculate:
\begin{align*}
\sum_{g_1+\ldots+g_p = g-1} \frac{\prod_{i=1}^p (2g_i)!}{(2g-2)!}
& \leq \binom{g-2}{p-1} \cdot \frac{1}{(2g-2)(2g-3) \cdots (2g-p)} \\
& = \frac{1}{(p-1)!} \cdot \frac{(g-2)(g-3) \cdots (g-p)}{(2g-2)(2g-3) \cdots (2g-p)} \\
& \leq \frac{1}{(p-1)!}
\end{align*}
for $p\geq 2$ and the same upper bound is true for $p=1$.
Still fixing a connected component ${\mathcal{H}}^j$, we define $V_{\mathrm{cyl}}(A)$ to be the set of
holonomy vectors for all saddle connections bounding a cylinder of area at least $A$. Let $c^j(V_{\mathrm{cyl}}(A))$ be the associated Siegel--Veech constant.
To compute $c^j(V_{\mathrm{cyl}}(A))$, we have to sum over all $p$. This gives us:
\begin{align}
c^j(V_{\mathrm{cyl}}(A))
= & \sum_{p=1}^{g-1} c^j\big(V_{\mathrm{cyl}}(p,A)\big) \notag \\
\leq & \frac{(1-A)^{2g-2}}{\nu({\mathcal{H}}^j)} \cdot
\sum_{p=1}^{g-1} k^p \cdot \frac{1}{(p-1)!} \notag \\
\leq & \frac{(1-A)^{2g-2}}{\nu({\mathcal{H}}^j)} \cdot k \cdot \sum_{p=0}^{g-2} \frac{k^p}{p!} \notag \\
\leq & \frac{k \cdot e^{k} \cdot (1-A)^{2g-2}}{\nu({\mathcal{H}}^j)}
. \label{eq:cyl-estimate}
\end{align}
We are now ready to prove \autoref{thm:measure_cylinders_large_area} for the minimal stratum.
\begin{proof}[Proof of \autoref{thm:measure_cylinders_large_area}]
Let $V_{\mathrm{cyl}}(A)$ be defined as above, $f \colon \mathbb{R}^2\to \mathbb R$ be the characteristic function of the ball of radius $\delta$ and
$\hat f \colon\, {\mathcal{H}}^j \to \mathbb{R}$ be the associated function defined in
\autoref{Eq:f-hat}.
We argue that $\hat{f}(X) = N_{\mathrm{cyl}}(X, \delta, A)$ outside of
a measure zero set.
The two numbers are not equal if and only if there exists more than one cylinder that is bounded by a given saddle connection in $V_{\mathrm{cyl}}(A)$ or if there is a cylinder of the desired type with more than one saddle connection on a boundary component.
Recall that the measure on the stratum is given as Lebesgue measure in period coordinates and
in the minimal stratum, the relative homology is the same as the absolute homology.
Consider the set ${\mathcal{H}}_{\mathrm{parallel}}$ of translation surfaces that have two non-homologous
saddle connections whose holonomy vectors have the same direction. As ${\mathcal{H}}_{\mathrm{parallel}}$ can be defined locally in period coordinates, it is a lower-dimensional subset of ${\mathcal{H}}^j$ and hence has measure zero. This has two consequences. First, there is only a measure zero set of translation surfaces that has a cylinder with a boundary component that contains more than
one saddle connection. That is, generically, for every cylinder of area at least $A$,
we have a vector in $V_{\mathrm{cyl}}(A)$. Secondly, the set of translation surfaces
that have more than one cylinder giving the same vector in $V_{\mathrm{cyl}}(A)$
has also measure~zero.
We also have
\begin{equation*}
\int_{{\mathcal{H}}^j} \hat{f} \, d\nu
= c^j(V_{\mathrm{cyl}}(A)) \cdot \int_{\mathbb {R}^2} f \, dxdy \cdot \nu({\mathcal{H}}^j)
\leq k e^k \cdot \pi \,\delta^2 \, (1-A)^{2g-2}.
\end{equation*}
We can now estimate the desired integral by using the bounds on the integrals for all three connected components ${\mathcal{H}}^j$ on the stratum.
\begin{align*}
\frac{\int_{{\mathcal{H}}_1(2g-2)} N_{\mathrm{cyl}}(X, \delta, A) \, d\nu (X)}{\nu\big( {\mathcal{H}}_1(2g-2) \big)}
\leq & \frac{1}{\nu\big( {\mathcal{H}}_1(2g-2) \big)} \cdot
\sum_{j=1}^3 \int_{{\mathcal{H}}^j} \hat{f} \, d\nu \\
\leq & \frac{2g}{3.9} \cdot \sum_{j=1}^3 ke^k \cdot \pi \,\delta^2 \, (1-A)^{2g-2}\\
\leq & 2\pi \, k e^k \cdot g \, \delta^2 \, (1-A)^{2g-2}
.
\end{align*}
For the second to last line, we use again \cite[Theorem 1.4]{aggarwal_18}, this time in the form that $\nu\big( {\mathcal{H}}_1(2g-2) \big) \geq \frac{3.9}{2g}$ for large values of $g$.
The estimate for $\nu({\mathcal{H}_{\mathrm{thin\text{-}cyl}}}(\delta, A))$ follows from the fact that $X \in {\mathcal{H}_{\mathrm{thin\text{-}cyl}}}(\delta, A)$ if and only if $N_{\mathrm{cyl}}(X, \delta, A) \geq 1$.
\end{proof}
\subsection{Counting of saddle connections in the minimal stratum} \label{subsec:loops}
For the sake of completeness, we now turn to the case of saddle connections that do not bound a cylinder. Let $p$ again be the multiplicity of the saddle connection. Then the surface decomposes into $p$ surfaces with boundary of which $p-1$ are of figure eight type and exactly one is of two holes type. In particular, there is no cylinder.
Suppose that the first surface is the surface of two holes type.
Let $b_1',b_1''\geq 0$ be integers such that the interior angle at the holes is $(2b_1'+3)\pi$ and $(2b_1''+3)\pi$ with $b_1'+b_1''= 2g_1 -2$.
Then the real dimension of the stratum ${\mathcal{H}}(b_1', b_1'')$ of the first surface is $d_1 = 4g_1+2$ and the volume of the stratum ${\mathcal{H}}(b_1', b_1'')$ is approximately $\frac{4}{(b_1'+1)(b_1''+1)}$ (see \cite[Theorem 1.4]{aggarwal_18}). We use again the bound $\nu({\mathcal{H}}(b_1', b_1'')) \leq \frac{k}{(b_1'+1)(b_1''+1)}$ with $k = 4(1+2^{2^{200}})$.
Similarly to before, this data defines the combinatorial type $(p,{\mathcal{C}})$. We consider the corresponding set $V_{\mathrm{loop}}(p,{\mathcal{C}})$ of holonomy vectors of saddle connections of combinatorial type~$(p,{\mathcal{C}})$ that do not bound a cylinder.
Then Formula 13.1 from \cite{eskin_masur_zorich_03} gives us that in the situation of having exactly one surface of two holes type and no cylinder, the Siegel--Veech constant $c^j(V_{\mathrm{loop}}(p,{\mathcal{C}}))$ for this data is bounded in the following way.
\begin{align*}
c^j(V_{\mathrm{loop}} & (p,{\mathcal{C}})) \\
& \leq \frac{1}{2^{p-1}} \cdot (b_1'+1) (b_1''+1)\cdot \prod_{i=2}^p (a_i+1) \cdot \frac{\prod_i (\frac{d_i}{2}-1)!}{(\frac{d}{2}-2)!} \cdot \frac {\nu({\mathcal{H}}(b_1', b_1'')) \cdot \prod_{i=2}^p \nu({\mathcal{H}}(a_i))}{\nu({\mathcal{H}}^j)} \\
&\leq \frac{1}{2^{p-1}} \cdot (b_1 '+1) (b_1''+1) \\
& \quad \cdot \prod_{i=2}^p (2g_i-1) \cdot \frac{(2g_1)! \cdot \prod_{i=2}^p (2g_i-1)!}{(2g-2)!} \cdot \frac {k^p}{(b_1'+1) (b_1''+1) \cdot \prod_{i=2}^p (2g_i-1) \cdot \nu({\mathcal{H}}^j)}
\\
& \leq k^p \cdot \frac{2g_1 \cdot \prod_{i=1}^p (2g_i-1)!}{(2g-2)!} \cdot \frac {1}{\nu({\mathcal{H}}^j)}
.
\end{align*}
Note that the bound for this Siegel--Veech constant $c^j(V_{\mathrm{loop}}(p,{\mathcal{C}}))$ differs exactly by a factor of $2g_1$ from the bound for the Siegel--Veech constant $c^j(V_{\mathrm{cyl}}(p,{\mathcal{C}}, 0))$ where we have a cylinder without a condition on the area.
Hence, we can skip the calculations for a given multiplicity $p$ and deduce directly
\begin{equation} \label{eq:loop-estimate}
c^j(V_{\mathrm{loop}}) \leq 2g_1 \cdot \frac{k e^k}{\nu({\mathcal{H}}^j)} \leq g \cdot \frac{2k e^k}{\nu({\mathcal{H}}^j)}
.
\end{equation}
Let $c^j(V_{\mathrm{sc}})$ be the Siegel--Veech constant for all saddle connections.
Then
\[
c^j(V_{\mathrm{sc}}) \leq c^j(V_{\mathrm{loop}}) + c^j(V_{\mathrm{cyl}}(0)).
\]
Combining the estimates in \autoref{eq:cyl-estimate} for $A=0$ and
\autoref{eq:loop-estimate} we get
\[
c^j(V_{\mathrm{sc}}) \leq \frac{3ke^k \cdot g}{\nu({\mathcal{H}}^j)}.
\]
The rest of the proof of Theorem~\ref{Thm:sc} is then completely analogous to the
proof of \autoref{thm:measure_cylinders_large_area}.
\subsection{General strata}
We now turn our attention to the general case of a stratum ${\mathcal{H}}(\kappa)$.
The proof follows the same steps as for the minimal stratum.
We will outline here how to combine the arguments from the previous sections of the appendix with the results of \cite{eskin_masur_zorich_03, vorobets_2005, aggarwal_18}. We also want to refer to the newer article \cite{chen_moeller_sauvaget_zagier_19} from which more direct deductions are possible, in particular for disconnected strata.
We start with the case of cylinders.
For this, fix a connected component ${\mathcal{H}}$ of a stratum~${\mathcal{H}}(\kappa)$.
By applying the volume estimates from \cite{aggarwal_18} to the formulae from \cite{eskin_masur_zorich_03}, Zorich in an appendix to \cite{aggarwal_18} gets the following estimates for connected strata.
For cylinders where one boundary component is a saddle connection through a singularity of order $k_1$ and the other through a distinct singularity of order $k_2$ and where the multiplicity is~$1$, the Siegel--Veech constant is (up to lower order terms)
$\frac{(k_1+1)(k_2+1)}{d-2}$ for large $g$.
If the singularities are the same on both boundary components, we have that the Siegel--Veech constant is (again, up to lower order terms) $\frac{1}{2} \cdot \frac{(k_1+1)(k_1-1)}{d-2}$ for large $g$.
Note that in the case of non-connected strata, these are not estimates for the Siegel--Veech constants on the stratum but we have the upper bounds
$\frac{(k_1+1)(k_2+1)}{d-2} \cdot \frac{\nu({\mathcal{H}}_1(\kappa))}{\nu({\mathcal{H}}_1)}$ and $\frac{1}{2} \cdot \frac{(k_1+1)(k_1-1)}{d-2} \cdot \frac{\nu({\mathcal{H}}_1(\kappa))}{\nu({\mathcal{H}}_1)}$
on the Siegel--Veech constants on all of the connected components.
We now have to compare the Siegel--Veech constants for multiplicity~$1$ with those for higher multiplicity.
In \cite{aggarwal_18_SV}, Aggarwal shows that the Siegel--Veech constants for saddle connections of higher multiplicity are of lower order than the ones for saddle connections of multiplicity~$1$. As the terms for the saddle connection Siegel--Veech constants differ from the terms for the cylinder Siegel--Veech constants by a factor of $d-2$, the same combinatorial arguments hold for the cylinder Siegel--Veech constants.
Hence, the Siegel--Veech constants for cylinders with restrictions on the order of the singularity but without the restriction on the multiplicity are bounded by $K \cdot \frac{(k_1+1)(k_2+1)}{d-2}\cdot \frac{\nu({\mathcal{H}}_1(\kappa))}{\nu({\mathcal{H}}_1)}$ and $K \cdot \frac{(k_1+1)(k_1-1)}{d-2} \cdot \frac{\nu({\mathcal{H}}_1(\kappa))}{\nu({\mathcal{H}}_1)}$, respectively, for large values of $g$ and some constant $K>0$.
Recall that $V_\mathrm{cyl} \coloneqq V_\mathrm{cyl}(0)$ is the set of holonomy vectors for all saddle connections bounding a cylinder.
To obtain a bound on the Siegel--Veech constant $c(V_\mathrm{cyl})$ for ${\mathcal{H}}$, we have to sum over all possible ordered pairs $(k_1, k_2)$:
\begin{align*}
c(V_\mathrm{cyl})
& \leq K \cdot \frac{1}{d-2} \cdot \frac{\nu({\mathcal{H}}_1(\kappa))}{\nu({\mathcal{H}}_1)} \cdot
\Bigg( \sum_{i\neq j}(k_i+1)(k_j+1) + \sum_{k_i\geq 2} (k_i+1)(k_i-1)\Bigg) \\
& \leq K \cdot \frac{1}{d-2} \cdot \frac{\nu({\mathcal{H}}_1(\kappa))}{\nu({\mathcal{H}}_1)} \cdot \sum_{i,j} (k_i+1)(k_j+1) \\
& = K \cdot \frac{1}{d-2} \cdot \frac{\nu({\mathcal{H}}_1(\kappa))}{\nu({\mathcal{H}}_1)} \cdot \left(\sum_i (k_i+1)\right)^2 \\
& \leq K \cdot \frac{1}{d-2}(4g-4)^2 \cdot \frac{\nu({\mathcal{H}}_1(\kappa))}{\nu({\mathcal{H}}_1)}
\end{align*}
In the last inequality, we used $\sum_i (k_i + 1) \leq \sum_i k_i + \ell \leq (2g-2) + (2g-2)$. Note
that $d-2$ is larger than $2g-2$ and hence we have
\begin{align*}
c(V_\mathrm{cyl})
&\leq K \cdot \frac{1}{d-2}(4g-4)^2 \cdot \frac{\nu({\mathcal{H}}_1(\kappa))}{\nu({\mathcal{H}}_1)}\\
&\leq K \cdot \frac{1}{2g-2}(4g-4)^2 \cdot \frac{\nu({\mathcal{H}}_1(\kappa))}{\nu({\mathcal{H}}_1)}\\
&\leq 2K \cdot (4g-4)\cdot \frac{\nu({\mathcal{H}}_1(\kappa))}{\nu({\mathcal{H}}_1)}
\leq 8K \cdot g \cdot \frac{\nu({\mathcal{H}}_1(\kappa))}{\nu({\mathcal{H}}_1)} .
\end{align*}
As in the proof of \autoref{thm:measure_cylinders_large_area}, we let $f$ be the characteristic function of the ball of radius~$\delta$ and~$\hat f \colon\, {\mathcal{H}} \to \mathbb{R}$ be the associated function defined in \autoref{Eq:f-hat}.
Again, $\hat{f}(X) = N_{\mathrm{cyl}}(X, \delta)$ outside of a measure zero set.
This is because otherwise the holonomy vectors associated to two non-homologous
saddle connections would be parallel, which is a measure zero property.
As every stratum has at most three connected components, we can again use the calculation
\begin{align*}
\frac{\int_{{\mathcal{H}}_1(\kappa)} N_{\mathrm{cyl}}(X, \delta) \, d\nu (X)}{\nu\big( {\mathcal{H}}_1(\kappa) \big)}
\leq & \, \frac{1}{\nu\big( {\mathcal{H}}_1(\kappa) \big)} \cdot 3 \cdot \int_{{\mathcal{H}}} \hat{f} \, d\nu \\
\leq & \, \frac{1}{\nu\big( {\mathcal{H}}_1(\kappa) \big)} \cdot 3 \cdot 8K \cdot g \cdot \frac{\nu({\mathcal{H}}_1(\kappa))}{\nu({\mathcal{H}}_1)} \cdot \pi \delta^2 \cdot \nu\big({\mathcal{H}}_1 \big) \\
\leq & \, 24 \pi K \cdot g \, \delta^2
.
\end{align*}
Including the requirement on the area of the cylinder gives a factor of $(1-A)^{d-2}$ in the very first calculation of Siegel--Veech constants for a given configuration (compare the proof for the minimal stratum or \cite{vorobets_2005}).
Hence, this factor carries through the full proof and appears in the end as claimed.
For the case of saddle connections, a comparison of Corollaries 1 and 3 with Corollaries 4 and~5 in the appendix of \cite{aggarwal_18} shows that the estimates of the corresponding Siegel--Veech constants are larger by a factor of $d-2$ than in the case of cylinders. The inequality $3g \leq d-2$ then implies \autoref{Thm:sc}.
\bibliographystyle{amsalpha}
\section{Evaluation Plan}
\label{sec: plan}
We aim to evaluate, on our new benchmark, how \textsf{CardEst}\xspace algorithms behave in a real DBMS, including the end-to-end improvement they bring to query plan optimization as well as other practicality aspects.
This section introduces the detailed evaluation plan. Section~\ref{sec: plan-algo} presents all baseline \textsf{CardEst}\xspace algorithms chosen to be evaluated, Section~\ref{sec: plan-sys} describes our implementation method and system settings, and Section~\ref{sec: plan-met} lists the evaluation metrics of our interests.
\subsection{\textsf{CardEst}\xspace Algorithms}
\label{sec: plan-algo}
\revise{
We identify and choose twelve representative \textsf{CardEst}\xspace algorithms across the three classes (traditional, ML-based query-driven, and ML-based data-driven) reviewed in Section~\ref{sec: prelim}. The selection principles and details of algorithms in each class are elaborated as follows.
}
\noindent
\revise{
\underline{\textbf{Traditional \textsf{CardEst}\xspace Algorithms:}}
In this class, we choose five algorithms along the technical directions:
1) for histogram-based methods, we evaluate \textsf{PostgreSQL} and \textsf{MultiHist}, which apply one-dimensional and multi-dimensional histograms for \textsf{CardEst}\xspace, respectively;
2) for sampling-based methods, we evaluate the uniformly random sampling \textsf{UniSample} method and the more advanced \textsf{WJSample} method for join sampling;
and 3) for other methods, we evaluate \textsf{PessEst}, a recently proposed method that exhibits state-of-the-art performance in some aspects.
The details are as follows:
}
\indent
\revise{1) \textsf{\underline{{PostgreSQL}}}~\cite{psql2020} refers to the histogram-based \textsf{CardEst}\xspace method used in the well-known DBMS PostgreSQL. It assumes that all attributes are mutually independent and maintains a 1-D (cumulative) histogram to represent $P_{T}(A_i)$ for each attribute $A_i$. The probability $P_T(Q)$ can then be easily obtained by multiplying all $P_{T}(A_i \in R_i)$ together. In addition, optimization strategies, such as collecting the most common values, and a careful industrial implementation are used to enhance the performance. A toy illustration of this independence assumption is sketched after this list of traditional methods.
}
\indent
\revise{
2) \textsf{\underline{{MultiHist}}}~\cite{poosala1997selectivity} identifies subsets of correlated attributes and model them as multi-dimensional histograms. We use the implementation provided in the repository~\cite{yang2019naru}. We do not compare with the variant methods DBHist~\cite{deshpande2001independence}, GenHist~\cite{gunopulos2000approximating,gunopulos2005selectivity} and
VIHist~\cite{wang2003multi} over~\cite{poosala1997selectivity} since their improvements are not very significant and their open-sourced implementations are not provided.
}
\indent 3) \textsf{\underline{{\revise{UniSample}}}}~\cite{leis2017cardinality,zhao2018random} makes no assumption but randomly fetches records from $T$ on-the-fly according to $P_{T}(A)$ to estimate the probability $P_{T}(Q)$. It is also widely used in DBMS such as MySQL~\cite{mysql2020} and MariaDB~\cite{mdb2020}. We set the sampling size to $10^4$.
\indent
\revise{
4) \textsf{\underline{{\revise{WJSample}}}}~\cite{li2016wander} designs a random walk based method called wander join to sample tuples from multiple tables. It has been integrated into DBMSs and exhibits favorable performance in a recent study~\cite{park2020gcare}. The method~\cite{zhao2018random} then improves from biased to unbiased sampling. We do not compare with it to avoid redundancy.
}
\revise{
5) \textsf{\underline{{\revise{PessEst}}}}~\cite{cai2019pessimistic} leverages
randomized hashing and data sketching to tighten the bound for multi-join queries. It is a new class of estimator as it never underestimates the cardinality. Meanwhile, it has been verified to perform well in real world DBMS.
}
\revise{
We do not compare with the other variants of traditional methods~\cite{bruno2001stholes, srivastava2006isomer, khachatryan2015improving, fuchs2007compressed, stillger2001leo, wu2018towards, heimel2015self,kiefer2017estimating, leis2017cardinality} as they do not exhibit significantly better performance or provide open-source implementation.}
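To make the histogram-based and sampling-based ideas above concrete, the following Python sketches show a histogram estimator under the attribute-value-independence assumption and a uniform sampling estimator. They are purely illustrative (the function and variable names are ours and do not come from any evaluated system); a table is represented as a dictionary of attribute arrays and a query as per-attribute ranges.
\begin{verbatim}
import numpy as np

def build_histograms(table, n_bins=100):
    # table: dict mapping attribute name -> 1-D numpy array of values
    return {a: np.histogram(v, bins=n_bins) for a, v in table.items()}

def histogram_estimate(histograms, query, n_rows):
    # query: dict mapping attribute name -> (lo, hi) range constraint
    sel = 1.0
    for attr, (lo, hi) in query.items():
        counts, edges = histograms[attr]
        frac = 0.0
        for c, l, r in zip(counts, edges[:-1], edges[1:]):
            # values are assumed uniformly spread within each bucket
            if r > l:
                frac += (c / n_rows) * max(0.0, min(hi, r) - max(lo, l)) / (r - l)
        sel *= frac                      # attribute-value independence
    return sel * n_rows                  # Card(T, Q) = P_T(Q) * |T|

def uniform_sample_estimate(table, query, sample_size=10_000, seed=0):
    rng = np.random.default_rng(seed)
    n_rows = len(next(iter(table.values())))
    idx = rng.integers(0, n_rows, size=sample_size)   # random tuples
    keep = np.ones(sample_size, dtype=bool)
    for attr, (lo, hi) in query.items():
        vals = table[attr][idx]
        keep &= (vals >= lo) & (vals <= hi)
    return keep.mean() * n_rows          # scale matching fraction to |T|
\end{verbatim}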
\vspace{0.5em}
\noindent
\revise{
\underline{\textbf{ML-Based \textsf{CardEst}\xspace Algorithms:}}
In our evaluation, we choose four query-driven (\textsf{MSCN}, \textsf{LW-XGB}, \textsf{LW-NN} and \textsf{UAE-Q}) and four data-driven (\textsf{NeuroCard}, \textsf{BayesCard}, \textsf{DeepDB} and \textsf{FLAT}) \textsf{CardEst}\xspace methods. They are representative as they apply different statistical models, and exhibit state-of-the-art performance using each model. Specifically, for query-driven methods, \textsf{MSCN}, \textsf{LW-XGB}/\textsf{LW-NN} and \textsf{UAE-Q} use deep neural networks, classic lightweight regression models and deep auto-regression models to learn the mapping functions, respectively. For the data-driven methods, they build the data distribution utilizing deep auto-regression models and three probabilistic graphical models: BN, SPN, and FSPN, respectively. They use different techniques to balance estimation accuracy, inference efficiency, and model size. Besides, we also evaluate \textsf{UAE}, an extension of \textsf{UAE-Q} using both query and data information.
The details are as follows:
}
\indent 6) \textsf{\underline{MSCN}}~\cite{kipf2018learned} is a deep learning method built upon the multi-set convolutional network model. The features of attributes in table $T$, join relations in query $Q$, and predicates of query $Q$ are firstly fed into three separate modules, where each is comprised of a two-layer neural network. Then, their outputs are averaged, concatenated, and fed into a final neural network for estimation.
\indent 7) \textsf{\underline{LW-XGB}} and 8) \textsf{\underline{LW-NN}}~\cite{dutt2019selectivity}
formulate the \textsf{CardEst}\xspace mapping function as a regression problem and apply gradient boosted trees and neural networks for regression, respectively. Specifically, \textsf{LW-XGB} applies the XGBoost~\cite{chen2016xgboost} as it attains equal or better accuracy and estimation time than both LightGBM~\cite{ke2017lightgbm} and random forests~\cite{svetnik2003random} for a given model size. As the original models only support single table queries, we extend them to support joins with an additional neural network to combine single-table information.
\indent
\revise{
9) \textsf{\underline{UAE-Q}}~\cite{wu2021unified} applies the deep auto regression models to learn the mapping function. It proposes differentiable progressive sampling via the Gumbel-Sotfmax trick to enables deep auto-regression models to learn from queries.
}
For the above query-driven \textsf{CardEst}\xspace methods, we automatically generate $10^5$ queries as training examples to train these models.
10) \revise{\underline{\textsf{NeuroCard}}~\cite{yang2020neurocard}, the multi-table extension of \textsf{Naru}~\cite{yang2019deep}, is built upon a deep auto-regression model.}
It decomposes the joint PDF $P_{T}(A) = P_{T}(A_1) \cdot \prod_{i=2}^{k} P_{T}(A_i| A_1, A_2, \dots, A_{i - 1})$ according to the chain rule and models each (conditional) PDF parametrically by a 4-layer DNN ($4 \times 128$ neuron units). All \revise{tables} can be learned together using a \revise{single} masked auto-encoder~\cite{made}. Meanwhile, a progressive sampling technique~\cite{liang2020variable} is provided to sample points from the region of query $Q$ to estimate its probability. We set the sampling size to $8,000$.
We omit a very similar method in~\cite{hasan2019multi} as it has \revise{slightly worse} performance than \textsf{NeuroCard}.
\revise{Notice that the original \textsf{NeuroCard} method is only designed for datasets with a tree-structured join schema. On our STATS benchmark with a cyclic join schema, we partition the schema into multiple tree-structured join schemas and build one \textsf{NeuroCard} model for each of them. To avoid ambiguity, we denote this extended method as \textsf{NeuroCard}$^E$ in the following content.}
11) \textsf{\underline{BayesCard}}~\cite{wu2020bayescard} is fundamentally based on BN, which models the dependence relations among all attributes as a directed acyclic graph. Each attribute $A_i$ is assumed to be
conditionally independent of the remaining ones given its parent attributes $A_{\text{pa}(i)}$, so the joint PDF $P_{T}(A) = \prod_{i = 1}^{k} P_{T}(A_i | A_{\text{pa}(i)})$.
\textsf{BayesCard} revitalizes \textsf{BN} using probabilistic programming to improve its inference and model construction speed (i.e., learning the dependence graph and the corresponding probability parameters). Moreover, it adopts advanced ML techniques to process multi-table join queries, which significantly increases its estimation accuracy over previous BN-based \textsf{CardEst}\xspace methods~\cite{getoor2001selectivity, tzoumas2011lightweight, dasfaa2019}, which will not be evaluated in this paper. We use the Chow-Liu Tree~\cite{chow1968approximating} based method to build the structure of \textsf{BayesCard} and apply the compiled variable elimination algorithm for inference.
\indent 12) \textsf{\underline{DeepDB}}~\cite{hilprecht2019deepdb}, based on sum-product networks (SPN)~\cite{poon2011sum,desana2020sum}, approximates $P_T(A)$ by recursively decomposing it into local and simpler PDFs. Specifically, the tree-structured SPN contains sum nodes to split $P_T(A)$ into multiple $P_{T'}(A)$ on tuple subsets $T' \subseteq T$, product nodes to decompose $P_{T}(A)$ into $\prod_{S}P_{T}(S)$ for independent sets of attributes $S$, and leaf nodes if $P_{T}(A)$ is a univariate PDF. The SPN structure can be learned by splitting table $T$ in a top-down manner. Meanwhile, the probability $P_{T}(Q)$ can be obtained in a bottom-up manner with time cost linear in the SPN's node size. A toy sketch of this bottom-up evaluation is given after this list of methods.
\indent 13) \textsf{\underline{FLAT}}~\cite{zhu2020flat}, based on factorize-split-sum-product networks (FSPN)~\cite{wu2020fspn}, improves over SPN by
adaptively decomposing $P_T(A)$ according to the attribute dependence level. It adds the factorize node to split $P_T$ as $P_T(W) \cdot P_T(H | W)$ where $H$ and $W$ are highly and weakly correlated attributes in $T$. $P_T(W)$ is modeled in the same way as SPN. $P_T(H | W)$ is decomposed into small PDFs by the split nodes until $H$ is locally independent of $W$. Then, the multi-leaf node is used to model the multivariate PDF $P_T(H)$ directly. Similar to SPN, the FSPN structure and query probability can be recursively obtained in a top-down and bottom-up fashion, respectively.
For both \textsf{DeepDB} and \textsf{FLAT}, we set the RDC thresholds to $0.3$ and $0.7$ for filtering independent and highly correlated attributes, respectively. Meanwhile, we do not split a node when it contains less than $1\%$ of the input data.
\indent
\revise{
14) \textsf{\underline{UAE}}~\cite{wu2021unified} extends the \textsf{UAE-Q} method by unifying both query and data information using the auto-regression model. It is a representative work aiming at closing the gap between data-driven and query-driven \textsf{CardEst}\xspace methods.
}
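The two ML-based paradigms can likewise be illustrated with small sketches; again, the names are our own choices and these are not the evaluated implementations. A query-driven model maps a featurized query directly to a cardinality. The stand-in below featurizes a (single-table) query by its range bounds and uses scikit-learn's gradient boosting in place of the actual XGBoost or neural-network regressors, regressing on the log-cardinality:
\begin{verbatim}
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def featurize(query, attrs, domains):
    # one (lo, hi) pair per attribute; unconstrained attributes use the domain
    feat = []
    for a in attrs:
        lo, hi = query.get(a, domains[a])
        feat.extend([lo, hi])
    return feat

def train_query_driven(workload, attrs, domains):
    # workload: list of (query, true_cardinality) training examples
    X = np.array([featurize(q, attrs, domains) for q, _ in workload])
    y = np.log1p(np.array([c for _, c in workload]))
    return GradientBoostingRegressor().fit(X, y)

def predict_card(model, query, attrs, domains):
    x = np.array([featurize(query, attrs, domains)])
    return float(np.expm1(model.predict(x)[0]))
\end{verbatim}
A data-driven model instead represents $P_T(A)$ itself and evaluates $P_T(Q)$ bottom-up; multiplying by $|T|$ then gives the estimate. The toy SPN below uses histogram leaves, product nodes over independent attribute groups, and sum nodes over tuple clusters:
\begin{verbatim}
class Leaf:                        # univariate histogram of one attribute
    def __init__(self, attr, probs, edges):
        self.attr, self.probs, self.edges = attr, probs, edges
    def prob(self, query):
        lo, hi = query.get(self.attr, (self.edges[0], self.edges[-1]))
        p = 0.0
        for pr, l, r in zip(self.probs, self.edges[:-1], self.edges[1:]):
            if r > l:
                p += pr * max(0.0, min(hi, r) - max(lo, l)) / (r - l)
        return p

class Product:                     # independent groups of attributes
    def __init__(self, children):
        self.children = children
    def prob(self, query):
        res = 1.0
        for c in self.children:
            res *= c.prob(query)
        return res

class Sum:                         # mixture over clusters of tuples
    def __init__(self, weights, children):
        self.weights, self.children = weights, children
    def prob(self, query):
        return sum(w * c.prob(query)
                   for w, c in zip(self.weights, self.children))

# Card(T, Q) is then root.prob(query) * n_rows.
\end{verbatim}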
\vspace{0.5em}
\noindent
\underline{\textbf{Remarks:}}
For each \textsf{CardEst}\xspace algorithm, we adopt the publicly available implementation~\cite{hilp2019deepdb, yang2019naru, yang2020sb} if the authors provide it and otherwise implement it by ourselves. For other hyper-parameters, if they are known to control a trade-off between some metrics, we choose the default values recommended in the original paper. Otherwise, we run a grid search to explore the combination of their values that most improves the end-to-end performance on a validation set of queries. Notice that each of our evaluated \textsf{CardEst}\xspace algorithms is an independent and universal tool that can be easily integrated into a common DBMS. Some \textsf{CardEst}\xspace modules~\cite{sun2019end, wu2021unified} have also been proposed that are optimized together with other components of a query optimizer in an end-to-end manner. We do not compare with them as they do not fit our evaluation framework.
\subsection{Implementation and System Settings}
\label{sec: plan-sys}
To make our evaluation more realistic and convincing, we integrate each \textsf{CardEst}\xspace algorithm into the query optimizer of PostgreSQL~\cite{psql2020}, a well-recognized open-source DBMS. Then, the quality of each \textsf{CardEst}\xspace method can be directly reflected by the end-to-end query runtime with its injected cardinality estimates.
Before introducing the details of our integration strategy, we introduce an important concept called \textit{sub-plan query}. For each SQL query $Q$, each \textit{sub-plan query} is a query touching only a subset of tables in $Q$.
The set of all these queries is called the \textit{sub-plan query space}. For the example query $A \bowtie B \bowtie C$, its sub-plan query space contains queries on $A \bowtie B$, $A \bowtie C$, $B \bowtie C$, $A$, $B$, and $C$ with the corresponding filtering predicates. The built-in planner of the DBMS will generate the sub-plan query space, estimate the cardinality of each sub-plan query, and determine the optimal execution plan. For example, the sub-plan queries $A$, $B$, and $C$ only touch a single table, so their \textsf{CardEst}\xspace results may affect the selection of table-scan methods, i.e., index-scan or seq-scan. The sub-plan queries $A \bowtie B$, $A \bowtie C$, and $B \bowtie C$ touch two tables. Their cardinalities may affect the join order, i.e., joining $A \bowtie B$ with $C$ or $A \bowtie C$ with $B$, and the join method, i.e., nested-loop-join, merge-join, or hash-join. Therefore, the effects of a \textsf{CardEst}\xspace method on the final query execution plan are entirely decided by its estimation results over the sub-plan query space.
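For illustration, the sub-plan query space of a query joining a set of tables can be enumerated as the connected subsets of its join graph. The sketch below is a simplification (names are ours; the space actually produced by the PostgreSQL planner also depends on the plan shapes it considers):
\begin{verbatim}
from itertools import combinations

def sub_plan_space(tables, join_edges):
    # tables: list of table names; join_edges: set of frozenset({t1, t2})
    def connected(subset):
        subset = set(subset)
        start = next(iter(subset))
        seen, stack = {start}, [start]
        while stack:
            t = stack.pop()
            for e in join_edges:
                if t in e:
                    other, = e - {t}
                    if other in subset and other not in seen:
                        seen.add(other)
                        stack.append(other)
        return seen == subset
    return [s for k in range(1, len(tables) + 1)
              for s in combinations(tables, k) if connected(s)]

# sub_plan_space(["A", "B", "C"],
#     {frozenset({"A","B"}), frozenset({"A","C"}), frozenset({"B","C"})})
# -> [("A",), ("B",), ("C",), ("A","B"), ("A","C"), ("B","C"), ("A","B","C")]
\end{verbatim}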
To this end, in our implementation,
we overwrite the function ``\textsf{calc\_joinrel\_size\_estimate}'' in the planner of PostgreSQL to derive the sub-plan query space for each query in the workload.
Specifically, every time the planner needs a cardinality estimation of a sub-plan query, the modified function ``\textsf{calc\_joinrel\_size\_estimate}'' will immediately capture it.
Then, we call each \textsf{CardEst}\xspace method to estimate the cardinalities of the sub-plan queries and inject the estimates back into PostgreSQL. Afterward, we run the planner of PostgreSQL on $Q$ to generate the plan. It will directly read the injected cardinalities produced by each method. Finally, we execute the query with the generated plan. In this way, we can support any \textsf{CardEst}\xspace method without a large modification of the source code of PostgreSQL. We report the total time (excluding the sub-plan space generation time) as the end-to-end time cost of running a SQL query with any \textsf{CardEst}\xspace method.
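Conceptually, the per-query evaluation loop therefore looks as follows. This is only a schematic sketch: \texttt{capture\_sub\_plan\_queries}, \texttt{inject\_cardinalities}, and the estimator interface are hypothetical placeholders for the PostgreSQL hooks described above, not actual APIs.
\begin{verbatim}
def evaluate_query(q, estimator, pg):
    sub_plans = pg.capture_sub_plan_queries(q)   # via the modified planner hook
    estimates = {sp: estimator.estimate(sp) for sp in sub_plans}
    pg.inject_cardinalities(estimates)           # overwrite planner estimates
    plan = pg.generate_plan(q)                   # planner reads injected values
    return pg.execute(plan)                      # timed end-to-end execution
\end{verbatim}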
For the environment, we run all of our experiments on two different Linux servers. The first one,
with 32 Intel(R) Xeon(R) Platinum 8163 CPUs @ 2.50GHz, one Tesla V100 SXM2 GPU, and 64 GB of available memory, is used for model training. The other one, with 64 Intel(R) Xeon(R) E5-2682 CPUs @ 2.50GHz, is used for the end-to-end evaluation on PostgreSQL.
\subsection{Evaluation Metrics}
\label{sec: plan-met}
Our evaluation mainly focuses on \emph{quantitative} metrics that directly reflect the performance of \textsf{CardEst}\xspace algorithms from different aspects. We list them as follows:
\indent 1) \text{\underline{End-to-end time}} of the query workload, including both the query plan generation time and the physical plan execution time. It serves as the ``gold standard'' for \textsf{CardEst}\xspace algorithms, since improving the end-to-end time is the ultimate goal of optimizing \textsf{CardEst}\xspace in query optimizers.
\revise{
We report the end-to-end time of \textsf{TrueCard}, which injects the true cardinalities of all sub-plan queries into PostgreSQL. Ideally, if the cost model is very accurate, \textsf{TrueCard} obtains the optimal plan with the shortest execution time. For a fixed PostgreSQL cost model, we find that \textsf{TrueCard} obtains the optimal query plan most of the time. Thus, it serves as a good baseline.
}
\indent 2) \text{\underline{Inference latency}} reflects the time cost of \textsf{CardEst}\xspace, which directly relates to the query plan generation time. It is crucial as \textsf{CardEst}\xspace needs to be done numerous times when optimizing the plan of each query. Specifically, an accurate \textsf{CardEst}\xspace method may be very costly at inference time. Despite the fast execution of the plans generated by such a method, the end-to-end query performance can still be poor because of the long plan generation time.
\indent 3) \text{\underline{Space cost}} refers to the \textsf{CardEst}\xspace model size. A lightweight model is also desired as it is convenient to transfer and deploy.
\indent 4) \text{\underline{Training cost}} refers to the models' offline training time.
\indent 5) \text{\underline{Updating speed}} reflects the time cost for \textsf{CardEst}\xspace models to update in order to fit data changes. In real-world settings, this metric plays an important role as the underlying data constantly changes with tuple insertions and deletions.
Besides these metrics, \cite{wang2020ready} proposed some \emph{qualitative metrics} related to the stability, usage, and deployment of \textsf{CardEst}\xspace algorithms and made a comprehensive analysis. Thus, we do not consider them in this paper.
In the following, we first evaluate the overall end-to-end performance of all methods in Section~\ref{sec: static}. Then, we analyze the other practicality aspects in Section~\ref{sec: dynamic}. Finally, we point out the drawbacks of the existing evaluation metric and propose a new metric as its potential substitution in Section~\ref{sec:analysis}.
\section{Preliminaries and Background}
\label{sec: prelim}
In this section, we introduce some preliminaries and background, including a formal definition of the cardinality estimation (\textsf{CardEst}\xspace) problem,
a brief review of representative \textsf{CardEst}\xspace algorithms, and a short analysis of existing \textsf{CardEst}\xspace benchmarks.
\vspace{0.5em}
\noindent \underline{\textbf{\textsf{CardEst}\xspace Problem:}}
In the literature, \textsf{CardEst}\xspace is usually defined as a statistical problem. Let $T$ be a table with $k$ attributes $A = \{A_1, A_2, \dots, A_k \}$. $T$ could either represent a single relational table or a joined table.
In this paper, we assume each attribute $A_i$ for each $1 \leq i \leq k$ to be either categorical (whose values can be mapped to integers) or continuous, whose domain (all unique values) is denoted as $D_i$.
Thereafter, any selection query $Q$ on $T$ can be represented in a canonical form: $Q = \{A_1 \in R_1 \wedge A_2 \in R_2 \wedge \cdots \wedge A_k \in R_k\}$, where $R_i \subseteq D_i$ is the constraint region specified by $Q$ over attribute $A_i$ (i.e. filter predicates). Without loss of generality, we have $R_i = D_i$ if $Q$ has no constraint on $A_i$. Let $\textsf{Card}\xspace(T, Q)$ denote the \emph{cardinality}, i.e., the exact number of records in $T$ satisfying all constraints in $Q$. The \textsf{CardEst}\xspace problem requires estimating $\textsf{Card}\xspace(T, Q)$ as accurately as possible without executing $Q$ on $T$.
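As a toy illustration of this canonical form (with each $R_i$ taken to be a range and all names chosen by us), the exact cardinality is simply the number of tuples surviving all predicates:
\begin{verbatim}
import numpy as np

def exact_card(table, query):
    # table: dict attr -> numpy array; query: dict attr -> (lo, hi) for R_i
    n_rows = len(next(iter(table.values())))
    mask = np.ones(n_rows, dtype=bool)
    for attr, (lo, hi) in query.items():   # unconstrained attributes omitted
        mask &= (table[attr] >= lo) & (table[attr] <= hi)
    return int(mask.sum())
\end{verbatim}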
\revise{
In this paper, we concentrate on evaluating these selection queries on numerical/categorical (n./c.) attributes. We do not consider `LIKE'' (or pattern matching) queries on string attributes due to two reasons:
1) commercial \textsf{CardEst}\xspace methods for ``LIKE'' queries in DBMS often use magic numbers~\cite{psql2020, sqlserver2019, mysql2020, mdb2020}, which are not meaningful to evaluate;
and 2) \textsf{CardEst}\xspace solutions for n./c. queries mainly consider how to build statistical models summarizing attribute and/or query distribution information, whereas \textsf{CardEst}\xspace methods for ``LIKE'' queries~\cite{sun2019end, shetiya2020astrid, mikolov2013distributed} focus on applying NLP techniques to summarize semantic information in strings. Thus, they tackle different technical challenges, and statistical \textsf{CardEst}\xspace methods cannot effectively support ``LIKE'' queries.
}
\vspace{0.5em}
\noindent \underline{\textbf{\textsf{CardEst}\xspace Algorithms:}}
There exist many \textsf{CardEst}\xspace methods in the literature, which can be classified into three classes as follows:
\emph{Traditional \textsf{CardEst}\xspace methods}, such as histogram~\cite{selinger1979access} and sampling~\cite{leis2017cardinality,heimel2015self,kiefer2017estimating}, are widely applied in DBMS and generally based on simplified assumptions and expert-designed heuristics.
\revise{
Many variants are proposed later to enhance their performance.
Histogram-based variants include multi-dimensional histogram based methods~\cite{poosala1997selectivity, deshpande2001independence, gunopulos2000approximating, gunopulos2005selectivity, muralikrishna1988equi, wang2003multi}, correcting and self-tuning histograms with query feedbacks~\cite{bruno2001stholes, srivastava2006isomer, khachatryan2015improving, fuchs2007compressed} and updating statistical summaries in DBMS~\cite{stillger2001leo, wu2018towards}.
Sampling-based variants include query-driven kernel-based methods~\cite{heimel2015self,kiefer2017estimating}, index based methods~\cite{leis2017cardinality} and random walk based methods~\cite{zhao2018random, li2016wander}.
Some other work, such as the sketch-based method~\cite{cai2019pessimistic}, explores a new direction for \textsf{CardEst}\xspace.
}
\textit{ML-based query-driven \textsf{CardEst}\xspace methods} try to learn a model to map each featurized query $Q$ to its cardinality $\textsf{Card}\xspace(T, Q)$ directly. Some ML-enhanced methods improve the performance of \textsf{CardEst}\xspace methods by using more complex models such as DNNs~\cite{kipf2018learned} or gradient boosted trees~\cite{dutt2019selectivity}.
\textit{ML-based data-driven \textsf{CardEst}\xspace methods} are independent of the queries. They regard each tuple in $T$ as a point sampled according to the joint distribution $P_T(A) = P_T(A_1, A_2, \dots, A_k)$.
Let $P_T(Q) = P_T(A_1 \in R_1, A_2 \in R_2, \cdots , A_k \in R_k)$ be the probability specified by the region of $Q$. Then, we have $\textsf{Card}\xspace(T, Q) = P_T(Q) \cdot |T|$, so the \textsf{CardEst}\xspace problem reduces to modeling the probability density function (PDF) $P_T(A)$ of table $T$.
A variety of ML-based models have been used in existing work to represent $P_T(A)$, the most representative of which includes deep auto-regression model~\cite{yang2019deep,yang2020neurocard,hasan2019multi} and probabilistic graphical models (PGMs) such as
Bayesian networks (BN)~\cite{tzoumas2011lightweight, getoor2001selectivity, wu2020bayescard}, SPN~\cite{hilprecht2019deepdb}, and FSPN~\cite{zhu2020flat}.
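The following sketch illustrates the reduction $\textsf{Card}\xspace(T, Q) = P_T(Q) \cdot |T|$ with a naive per-attribute histogram under an independence assumption; it is only a stand-in for the SPN, BN, and auto-regressive models above, and the toy table is illustrative:
\begin{verbatim}
import pandas as pd

T = pd.DataFrame({"A1": [1, 3, 5, 7, 9],
                  "A2": ["x", "y", "x", "z", "x"]})

# "Training": estimate the marginal distribution of each attribute.
marginals = {c: T[c].value_counts(normalize=True) for c in T.columns}

def estimate_card(Q):
    """Estimate Card(T, Q) = P_T(Q) * |T| under independence."""
    p = 1.0
    for attr, region in Q.items():
        marg = marginals[attr]
        p *= sum(pr for val, pr in marg.items() if region(val))
    return p * len(T)

Q = {"A1": lambda v: 2 <= v <= 8,
     "A2": lambda v: v in {"x", "y"}}
print(estimate_card(Q))   # 2.4, while the exact answer is 2
\end{verbatim}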
\revise{
In addition, some recently proposed methods such as~\cite{wu2021unified} try to integrate both query and data information for \textsf{CardEst}\xspace.
}
\vspace{0.5em}
\noindent \underline{\textbf{\textsf{CardEst}\xspace Benchmark:}}
Prior works have proposed several benchmark datasets and query workloads for \textsf{CardEst}\xspace evaluation. We analyze their pros and cons as follows:
1) Synthetic benchmarks such as TPC-C~\cite{leis2015good}, TPC-H~\cite{tpch2021}, TPC-DS~\cite{tpcds2021}, and the Star Schema Benchmark (SSB)~\cite{o2009star} contain real-world data schemas and synthetically generated tuples. They are mainly used for evaluating query engines but are not suitable for \textsf{CardEst}\xspace because their data generators make oversimplified assumptions on the joint PDF of attributes, such as uniformity and independence.
In contrast, real-world datasets are often highly skewed and correlated~\cite{leis2015good}, which makes them more difficult for \textsf{CardEst}\xspace.
2) The IMDB dataset with its JOB workload~\cite{leis2015good} is a well-recognized benchmark containing complex data and string ``LIKE'' queries.
\revise{
To evaluate statistical \textsf{CardEst}\xspace methods on n./c. queries only, most of the existing works~\cite{kipf2018learned,yang2019deep,yang2020neurocard,hilprecht2019deepdb,zhu2020flat} use the JOB-LIGHT query workload containing 70 selection queries with a varied number of joined tables. However, these queries touch only 8 n./c. attributes within six tables of IMDB, and the joins between these tables are only star joins centered at one table.
Thus, this simplified IMDB dataset and its workload cannot comprehensively evaluate the performance of today's \textsf{CardEst}\xspace algorithms on more complex data and varied join settings.} On the other hand, some works~\cite{yang2020neurocard, negi2021flow} generate queries on the IMDB dataset including ``LIKE'' queries, which are not supported by most recent statistical methods.
Apart from these well-established and general-purpose benchmarks, there also exist other benchmarks with specific purposes. For example, Wang~\cite{wang2020ready} presents a series of real-world datasets to analyze whether existing \textsf{CardEst}\xspace algorithms are suitable to be deployed in real-world DBMSs. However, it only considers single-table datasets, which cannot reflect the behavior of these models in more practical multi-table settings.
\vspace{0.5em}
\noindent \underline{\textbf{Summary:}}
A surge of \textsf{CardEst}\xspace algorithms built on top of statistical models has been proposed in the literature, especially in the last decade. However, existing \textsf{CardEst}\xspace benchmarks are not sufficient to comprehensively evaluate their performance.
\section{What Other Aspects of \textsf{CardEst}\xspace Methods Matter?}
\label{sec: dynamic}
In addition to \textsf{CardEst}\xspace's improvement in execution time, we discuss model practicality aspects in this section: inference latency (in Section~\ref{sect6.1}), model size and training time (in Section~\ref{sect6.2}), and model update speed and accuracy (in Section~\ref{sect6.3}).
\revise{We only compare the recently proposed \textsf{CardEst}\xspace methods, which have been proved to significantly improve the \textsf{PostgreSQL} baseline, namely \textsf{PessEst}, \textsf{MSCN}, \textsf{NeuroCard}$^E$, \textsf{BayesCard}, \textsf{DeepDB}, and \textsf{FLAT}.}
\revise{
\begin{table}
\caption{\revise{OLTP/OLAP Performance on STATS-CEB.}}
\vspace{-1.2em}
\label{tab:TPAP}
\scalebox{0.71}{
\begin{tabular}{ccc|cc}
\hline
\rowcolor{mygrey}
\revise{\bf Methods} & \revise{\bf \textsf{TP Exec. Time}} & \revise{\bf \textsf{TP Plan Time}} & \revise{\bf \textsf{AP Exec. Time}} & \revise{\bf \textsf{AP Plan Time}} \\ \hline
\textsf{PostgreSQL}& 44.7s & 4.8s (9.7\%) & 11.32h & 20.3s ($0.05\%$) \\
\textsf{TrueCard} & 8.2s & 4.8s (36.9\%) & 5.68h & 20.3s ($0.1\%$) \\
\textsf{PessEst} & 19.3s & 8.4s (30.3\%) & 6.09h & 35.4s ($0.16\%$) \\
\textsf{MSCN} & 15.7s & 8.2s (34.3\%) & 8.11h & 38.0s ($0.13\%$) \\
\textsf{NeuroCard$^E$} & 26.3s & 73s (73.5\%) & 11.84h & 350s ($0.81\%$) \\
\textsf{BayesCard} & 10.7s & 7.3s (40.6\%) & 7.15h & 27.4s ($0.11\%$) \\
\textsf{DeepDB} & 11.5s & 33.6s (74.5\%) & 6.46h & 135s ($0.58\%$) \\
\textsf{FLAT} & 14.3s & 41.5s (74.4\%) & 5.80h & 396s ($1.86\%$) \\ \hline
\end{tabular}
}
\vspace{-2em}
\end{table}}
\subsection{Inference Latency}
\label{sect6.1}
The end-to-end query time comprises query execution and planning time, the latter of which is largely determined by the \textsf{CardEst}\xspace method's inference latency. Commercial DBMSs normally have negligible planning time due to their simplified cardinality estimators and the engineering effort devoted to accelerating inference. However, the inference latency of some ML-based data-driven methods can approach one second per sub-plan query, which slows down the end-to-end query time.
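As a side note, the decomposition into planning and execution time can be observed directly in PostgreSQL with \texttt{EXPLAIN (ANALYZE, FORMAT JSON)}; the following sketch is illustrative only, and the connection string and table/column names are placeholders rather than part of our evaluation pipeline:
\begin{verbatim}
import json
import psycopg2

conn = psycopg2.connect("dbname=stats")   # placeholder connection
cur = conn.cursor()

sql = ("SELECT count(*) FROM posts p "
       "JOIN comments c ON p.id = c.postid")   # placeholder names
cur.execute("EXPLAIN (ANALYZE, FORMAT JSON) " + sql)

out = cur.fetchone()[0]
report = (json.loads(out) if isinstance(out, str) else out)[0]
print("planning  (ms):", report["Planning Time"])
print("execution (ms):", report["Execution Time"])
\end{verbatim}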
\revise{To further illustrate the importance of inference latency, we divide the STATS-CEB queries into two workloads, OLTP and OLAP, based on their query execution time. We report the results in Table~\ref{tab:TPAP} and derive the following observation.}
\smallskip
\noindent \revise{\textbf{O7: Inference latency can have a significant impact on the OLTP workload but a trivial impact on the OLAP workload.}
On the OLTP workload of STATS-CEB, we observe that the planning time accounts for a large proportion of the total end-to-end time. Specifically, the inference speeds of some ML-based methods (\textsf{NeuroCard}$^E$, \textsf{DeepDB}, and \textsf{FLAT}) are relatively slow. Although their execution time on the OLTP workload is faster than \textsf{PostgreSQL}, they have worse end-to-end performance because of the long planning time.
For the OLAP workload of STATS-CEB, the \textsf{CardEst}\xspace methods' planning time is much shorter than their execution time because the OLAP workload contains extremely long-running queries. In this case, the quality of the generated query plans overshadows the slow inference latency.
Therefore, we believe that \textsf{CardEst}\xspace methods targeting different workloads should fulfill different objectives. For the OLTP workload, a desirable method should have fast inference speed, whereas methods targeting the OLAP workload can afford high inference latency as long as they produce high-quality query plans.}
Figure~\ref{fig: practicality} reports the average inference latencies of all sub-queries in the workload for each method.
Their inference speed can be ranked as \textsf{BayesCard} $>$ \revise{\textsf{NeuroCard}$^E$(GPU)} $>$ \textsf{FLAT}/\textsf{DeepDB} $\gg$ \revise{\textsf{NeuroCard}$^E$}.
The newly proposed inference algorithms on BN provide \textsf{BayesCard} with a very fast and stable inference speed on both benchmarks.
However, the inference speeds of \textsf{FLAT} and \textsf{DeepDB} are not as stable because they tend to build much larger models with more computation circuits for the more complicated database STATS.
The inference process of \textsf{NeuroCard} requires a large number of progressive samples and its underlying DNN is computationally demanding on CPUs. Therefore, we observe that the inference speed is greatly improved by running it on GPUs.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{practicality_new.eps}
\vspace{-2.5em}
\caption{\revise{Practicality aspects of \textsf{CardEst}\xspace algorithms.}}
\vspace{-2.5em}
\label{fig: practicality}
\end{figure}
\subsection{Model Deployment}
\label{sect6.2}
\revise{Figure~\ref{fig: practicality} reports the model size and training time of all aforementioned methods. We do not report \textsf{PessEst} because it is a model-free method that does not need training.
Based on the results on the STATS-CEB workload, we derive the following observation.}
\noindent \textbf{O8: The BN-based \textsf{CardEst}\xspace approach is very friendly for system deployment.}
\revise{First of all, the BN-based approaches, such as \break \textsf{BayesCard}, are generally interpretable and predictable, and thus easy to debug for DBMS analysts.}
More importantly, a \textsf{CardEst}\xspace method friendly for system deployment should have fast training and a lightweight model, and \textsf{BayesCard} has a dominant advantage over the other ML-based data-driven methods in these two aspects because of its underlying Bayesian model.
Specifically, in terms of both training time and model size, these methods can be ranked as \textsf{BayesCard} $\ll$ \textsf{FLAT}/\textsf{DeepDB} $<$ \revise{\textsf{NeuroCard}$^E$}. We provide the detailed reasoning as follows.
\textsf{BayesCard} proposes an accelerated BN construction process using probabilistic programming. Its model training is roughly 100 times faster than that of the other three data-driven methods. Moreover, the BNs in \textsf{BayesCard}, which exploit conditional independence among attributes to reduce model redundancy, are naturally compact and lightweight.
\textsf{FLAT} and \textsf{DeepDB} recursively learn the underlying FSPN and SPN models.
Their training time is less stable and varies greatly with the number of highly correlated attributes in the datasets.
Thus, we observe a much longer training time on STATS than on the IMDB dataset for these two methods.
The SPNs in \textsf{DeepDB} iteratively split the datasets into small regions, aiming to find local independence between attributes within each region.
However, in the presence of highly correlated attributes (e.g., STATS), the SPNs tend to generate a long chain of dataset-splitting operations, leading to long training time and a very large model size. The FSPNs in \textsf{FLAT} effectively address this drawback of SPNs by introducing the factorize operation, but their training time and model size suffer greatly on datasets with a large number of attributes (e.g., STATS) because of the recursive factorize operations.
\revise{The training of \textsf{NeuroCard}$^E$ is particularly long and its model size is the largest on STATS because the STATS join schema does not form a tree. As mentioned in Section~\ref{sec: plan-algo}, the original \textsf{NeuroCard} only supports tree-structured schemas.
Thus, \textsf{NeuroCard}$^E$ extracts $16$ tree sub-structures from the STATS schema graph and trains one model for each tree. Therefore, we argue that extending \textsf{NeuroCard} to non-tree-structured schemas can greatly improve its practicality.}
\subsection{Model Update}
\label{sect6.3}
Model update is a crucial aspect when deploying a \textsf{CardEst}\xspace method in OLTP databases. Frequent data updates in these DBs require the underlying \textsf{CardEst}\xspace method to swiftly update itself and adjust to the new data accurately. In the following, we first provide our observation regarding the updatability of ML-based query-driven methods and then provide the update experimental settings and results for ML-based data-driven methods on the STATS dataset.
\smallskip
\noindent \textbf{O9: Existing query-driven \textsf{CardEst}\xspace methods are impractical for dynamic DBs.} The query-driven models require a large number of executed queries to train, which might be unavailable for a new DB and very time-consuming to generate (e.g., the 146 STATS-CEB queries take more than ten hours to execute). More importantly, they need to re-collect and re-execute the queries whenever the dataset changes or the query workload shifts. Therefore, they cannot keep up with the frequent data updates in dynamic DBs.
\smallskip
\noindent \underline{\textbf{Experimental Settings:}} To simulate a practical dynamic environment, we split the STATS data into two parts based on the timestamps of tuples. We first train a stale model for each method on the data created before 2014 (roughly $50\%$) and insert the rest of the data to update these models. We only test the data insertion scenario since some methods (\revise{\textsf{NeuroCard}$^E$} and \textsf{DeepDB}) do not support data deletion. We use the update algorithm in these methods' publicly available source code.
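The splitting step of this protocol can be sketched as follows (a minimal illustration; the file path and timestamp column name are assumptions, and model training/updating is delegated to each method's own published routines):
\begin{verbatim}
import pandas as pd

SPLIT = pd.Timestamp("2014-01-01")

def split_by_time(df, ts_col="CreationDate"):   # assumed column
    ts = pd.to_datetime(df[ts_col])
    return df[ts < SPLIT], df[ts >= SPLIT]

posts = pd.read_csv("stats/posts.csv")          # placeholder path
stale, inserts = split_by_time(posts)
# 'stale' trains the initial (stale) model; 'inserts' is replayed
# through each method's own incremental-update routine afterwards.
print(len(stale), len(inserts))
\end{verbatim}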
\begin{table}
\caption{Update performance of \textsf{CardEst}\xspace algorithms.}
\vspace{-1em}
\label{tab:update}
\scalebox{0.8}{
\begin{tabular}{@{}ccccc@{}}
\toprule
\rowcolor{mygrey}
\bf Criteria & \bf \revise{\textsf{NeuroCard}$^E$} & \bf \textsf{BayesCard} & \bf \textsf{DeepDB} & \bf \textsf{FLAT} \\ \cmidrule{1-5}
Update time & 5,569s & \textbf{12s} & 248s & 360s \\
Original E2E time (Table~\ref{tab:overall}) &11.85h &7.16h &6.46h &5.80h \\
E2E time after update & 13.94h & 7.16h & 6.72h & 7.04h \\ \cmidrule{1-5}
\end{tabular}
}
\vspace{-2em}
\end{table}
\noindent \underline{\textbf{Experimental Results:}}
As shown in Table~\ref{tab:update}, we record the time these methods take to update their stale models and evaluate the end-to-end query performance of the updated models on STATS-CEB queries. We also cite the original model performance from Table~\ref{tab:overall} as comparison baselines. We first summarize the most important observation based on this table and then provide detailed reasoning from the update speed and accuracy perspectives.
\smallskip
\noindent \textbf{O10: Data-driven \textsf{CardEst}\xspace methods have the potential to keep up with fast data updates and can be applied in dynamic DBs.} Specifically, \textsf{BayesCard} takes only $12$s to update itself after an insertion of millions of tuples across multiple tables in the DB. More importantly, its end-to-end performance is unaffected by this massive update, making it very suitable for dynamic DBs.
\textbf{Update speed} of these methods can be ranked as \textsf{BayesCard} $\gg$ \textsf{DeepDB} $>$ \textsf{FLAT} $>$ \revise{\textsf{NeuroCard}$^E$}. \textsf{BayesCard} preserves its underlying BN structure and only incrementally updates the model parameters. Since its model size is relatively small, its update speed is more than $20$ times faster than the others. \textsf{DeepDB} and \textsf{FLAT} also preserve the underlying structures of their SPN and FSPN, but as these structures are significantly larger, incrementally updating their model parameters still takes a large amount of time.
\textbf{Update accuracy} can be ranked as \textsf{BayesCard} $>$ \textsf{DeepDB} $>$ \textsf{FLAT} $>$ \revise{\textsf{NeuroCard}$^E$}. The underlying BN structure of \textsf{BayesCard} captures inherent causality, which is unlikely to change when the data changes. Therefore, \textsf{BayesCard} preserves its original accuracy after the model update (i.e., the same as its comparison baseline). The structures of the SPN in \textsf{DeepDB} and the FSPN in \textsf{FLAT} are learned to fit the data before the update and cannot extrapolate well to the newly inserted data. Therefore, only updating the model parameters causes modeling inaccuracy (i.e., we observe a drop in their end-to-end performance compared with their baselines).
\section{Our New Benchmark}
\label{sec: bench}
In this section, we design a new benchmark with complex real-world data and a diverse multi-table join query workload for evaluating \textsf{CardEst}\xspace algorithms. To simulate practical scenarios, the benchmark should satisfy the following properties:
\indent \underline{\textit{1) Large scale}} with enough tables, attributes, and tuples in the full outer join;
\indent \underline{\textit{2) Complex distribution}} with skewed and correlated attributes whose joint distribution cannot be modeled in a straightforward manner (e.g., under an independence assumption);
\indent \underline{\textit{3) Rich join schema}} containing joins over various numbers of tables and diverse join forms (e.g., star and chain);
\indent \underline{\textit{4) Diverse workload}} with queries covering a wide range of true cardinalities and different numbers of filter and join predicates.
To this end, we establish our benchmark on a new real-world dataset with a hand-picked query workload. It overcomes the drawbacks of existing \textsf{CardEst}\xspace benchmarks and fully fulfills the properties listed above. We describe the data and workload settings in detail below.
\begin{figure}
\centering
\includegraphics[width=6cm]{stats_schema.eps}
\vspace{-1em}
\caption{Join relations between tables in STATS.}
\label{fig: benchjoin}
\vspace{-1em}
\end{figure}
\vspace{0.5em}
\noindent \underline{\textbf{Data Setting:}}
We adopt the real-world dataset STATS\footnote{\url{https://relational.fit.cvut.cz/dataset/Stats}}
in our benchmark. It is an anonymized dump of user-contributed content on the Stats Stack Exchange network. STATS occupies 658MB of storage space with 8 tables and 71 \revise{n./c.} attributes on users, posts, comments, tags, and their relations. A comparison of the statistical information between STATS and IMDB \revise{(the simplified subset supporting JOB-LIGHT)} is shown in Table~\ref{tab: benchmarkstat-data}. We argue that STATS is more suitable as a \textsf{CardEst}\xspace benchmark as follows:
\indent \underline{\textit{1) Larger scale:}}
STATS has more data tables and a larger number of \revise{n./c. attributes} than \revise{the simplified} IMDB. Moreover, its full outer join size is four orders of magnitude larger.
\indent \underline{\textit{2) More complex data distribution:}}
The distribution skewness and attribute correlation of STATS are more significant than those of \revise{the simplified} IMDB (one plausible way to compute such statistics is sketched after this list).
Moreover, STATS has $3 \times$ more attributes with a larger total domain size, suggesting that its PDF is much harder to model.
\indent \underline{\textit{3) Larger query space:}}
\revise{
Each table in STATS has $1$ to $8$ n./c. attributes to be filtered while the simplified IMDB contains at most two in each table.
}
Moreover, the full outer join size of STATS is much larger than that of \revise{the simplified} IMDB.
These two aspects give STATS a larger query space varying in cardinality and the number of predicates.
\indent \underline{\textit{4) Richer join settings:}}
The join relations between all tables in STATS are shown in Figure~\ref{fig: benchjoin}.
\revise{The simplified} IMDB contains only star joins between primary keys and foreign keys (i.e., 5 join relations). In contrast, STATS has richer and more diverse join types varying in the number of joined tables (from $2$ to $8$), join forms (chain, star, and mixtures of the two), and join keys (PK-FK and FK-FK).
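For concreteness, one plausible way to compute dataset-complexity statistics of the kind reported in Table~\ref{tab: benchmarkstat-data} is sketched below; the exact definitions behind the reported numbers may differ, and the file path is a placeholder:
\begin{verbatim}
import pandas as pd
from scipy.stats import skew

num = (pd.read_csv("stats/posts.csv")     # placeholder path
         .select_dtypes("number").dropna())

# average absolute skewness over the numerical attributes
avg_skew = num.apply(lambda c: abs(skew(c))).mean()

# average absolute Pearson correlation over attribute pairs
corr = num.corr().abs()
n = len(corr)
pairs = [corr.iloc[i, j] for i in range(n) for j in range(i+1, n)]
avg_corr = sum(pairs) / len(pairs)

print(avg_skew, avg_corr)
\end{verbatim}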
\begin{table}[t]
\caption{Comparison of IMDB \revise{(simplified subset to fit JOB-LIGHT workload)} and STATS dataset.}
\vspace{-1em}
\scalebox{0.87}{
\begin{tabular}{c|c|cc}
\hline
\rowcolor{mygrey}
\textbf{Criteria} & \textbf{Item} & \textbf{IMDB} & \textbf{STATS} \\ \hline
\multirow{4}{*}{Scale} &\# of tables & 6 & 8 \\
&\# of \revise{n./c.} attributes & 8 & 23 \\
&\# of \revise{n./c.} attributes per table & 1--2 & 1--8 \\
&full outer join size & $2\cdot 10^{12}$ & $3\cdot10^{16}$ \\ \hline
\multirow{3}{*}{Data}& total attribute domain size & 369,563 & 578,341 \\
&average distribution skewness & 9.159 & 21.798 \\
&average pairwise correlation & 0.149 & 0.221 \\ \hline
\multirow{2}{*}{Schema} & join forms & star & star/chain/mixed\\
& \# of join relations & 5 & 12\\ \hline
\end{tabular}
\label{tab: benchmarkstat-data}
}
\vspace{-2em}
\end{table}
\vspace{0.5em}
\noindent \underline{\textbf{Query Workload Setting:}}
We generate and then carefully handpick a query workload STATS-CEB on STATS to fulfill both practicality and diversity. The generation process is done in two phases.
In the first phase, we generate $70$ representative join templates based on the join schema in Figure~\ref{fig: benchjoin}, each of which specifies a distinct join pattern covering a set of tables. For these join templates, we do not consider:
1) cyclic joins as most of the ML-based \textsf{CardEst}\xspace algorithms~\cite{yang2020neurocard, hilprecht2019deepdb, zhu2020flat, wu2020bayescard} do not support them;
and 2) non-equal joins as they rarely occur in practice and many \textsf{CardEst}\xspace algorithms process them in the same way as many-to-many joins.
We manually check each join template and retain it only if it has occurred in the StackExchange log data or has clear real-world semantics.
To reduce redundancy, we also ensure that these join templates are not overly similar (e.g., differing only in inner or outer join conditions).
In the second phase, after deriving these $70$ join templates, we generate $146$ queries, with $1$--$4$ queries for each template, as the testing query workload STATS-CEB. We make sure all the generated filter predicates reflect real-world semantics and are diverse from multiple perspectives.
In comparison to JOB-LIGHT (illustrated in Table~\ref{tab: benchmarkstat-workload}), we find the following advantages of STATS-CEB:
\indent \underline{\textit{1) More diverse queries:}}
STATS-CEB contains twice as many queries as JOB-LIGHT, with $3 \times$ more join templates covering a wider range of numbers of joined tables.
\indent \underline{\textit{2) Richer join types:}}
Unlike the JOB-LIGHT benchmark, which has only star joins, STATS-CEB contains queries with chain joins and complex mixed join forms. Moreover, JOB-LIGHT only contains queries with one-to-many PK-FK joins, whereas STATS-CEB also includes queries with many-to-many FK-FK joins.
\indent \underline{\textit{3) More filter predicates:}}
STATS-CEB contains queries with up to 16 distinct filter predicates, $4 \times$ more than JOB-LIGHT.
\indent \underline{\textit{4) Wider range of true cardinality:}}
The cardinality range of STATS-CEB is an order of magnitude larger than that of JOB-LIGHT. The largest query in STATS-CEB has a true cardinality of 20 billion, which is more than $2 \times$ larger than that of the JOB-LIGHT benchmark.
\vspace{0.5em}
\noindent \underline{\textbf{Summary:}}
\revise{
Our new benchmark, with the STATS dataset and the STATS-CEB query workload, is comprehensive, featuring more complex data, more skewed distributions, more diverse queries, and more complicated join settings. According to our evaluation results in the following Sections~5--7, this new \textsf{CardEst}\xspace benchmark helps reveal insights about existing \textsf{CardEst}\xspace algorithms that have not been discovered in previous works.
}
\begin{table}[t]
\caption{Comparison of JOB-LIGHT and STATS-CEB benchmark query workload.}
\vspace{-1em}
\scalebox{0.9}{
\begin{tabular}{c|cc}
\hline
\rowcolor{mygrey}
\textbf{Item} & \textbf{JOB-LIGHT} & \textbf{STATS-CEB} \\ \hline
\# of queries & 70 & 146 \\
\# of joined tables & 2--5 & 2--8 \\
\# of join templates & 23 & 70 \\
\# of filtering \revise{n./c.} predicates & 1--4 & 1--16 \\
join type & PK-FK & PK-FK/FK-FK\\
true cardinality range & 9 --- $9\cdot10^9$ & 200 --- $2\cdot10^{10}$ \\ \hline
\end{tabular}
}
\label{tab: benchmarkstat-workload}
\vspace{-1em}
\end{table}
\section{Is Current Metric Good Enough?}
\label{sec:analysis}
\revise{
Most of the existing works~\cite{zhu2020flat,yang2020neurocard,hilprecht2019deepdb,kipf2018learned,dutt2019selectivity} use Q-Error~\cite{moerkotte2009preventing} to evaluate the quality of their \textsf{CardEst}\xspace methods. However, the ultimate goal of \textsf{CardEst}\xspace is to generate query plans with faster execution time. Therefore, in this section we explore whether Q-Error is a good metric to fulfill this goal. We first analyze the correlations between Q-Error and query execution time in Section~\ref{sect7.1}. The results show that a smaller Q-Error does not necessarily lead to a shorter execution time. Thus, we identify the limitations of Q-Error and propose another metric called P-Error in Section~\ref{sect7.2}.
We show that P-Error corresponds better to query execution time and advocate it as a potential substitute for Q-Error.
}
\subsection{Problems with Q-Error}
\label{sect7.1}
Q-Error is a well-known metric to evaluate the quality of different \textsf{CardEst}\xspace methods.
It measures the relative multiplicative error of the estimated cardinality from the actual one as:
\begin{equation*}
\textbf{Q-Error} = \max(\frac{\text{Estimated Cardinality}}{\text{True Cardinality}}, \frac{\text{True Cardinality}}{\text{Estimated Cardinality}}).
\end{equation*}
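Computing Q-Error and aggregating its percentiles over a workload is straightforward; in the sketch below the estimates and true cardinalities are illustrative values only:
\begin{verbatim}
import numpy as np

def q_error(est, true):
    # assumes positive values; estimators usually clamp est >= 1
    return max(est / true, true / est)

est  = [120.0, 8.0e3, 1.0, 5.0e6]    # illustrative estimates
true = [100.0, 1.0e4, 10.0, 1.0e6]   # illustrative true counts
qerrs = [q_error(e, t) for e, t in zip(est, true)]
print(np.percentile(qerrs, [50, 90, 99]))
\end{verbatim}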
\revise{
Q-Error penalizes both overestimation and underestimation of the true cardinality. However, existing works have not investigated whether Q-Error is a good evaluation metric for \textsf{CardEst}\xspace, i.e., would \textsf{CardEst}\xspace methods with smaller Q-Errors definitely generate query plans with shorter execution time, and vice versa? To answer this question, we revisit the experimental results. Table~\ref{tab:metric} reports the distributions ($50\%$, $90\%$, and $99\%$ percentiles) of the Q-Errors of all sub-plan queries generated by different \textsf{CardEst}\xspace methods on both the JOB-LIGHT and STATS-CEB benchmarks. We sort all \textsf{CardEst}\xspace methods in descending order of their execution time.
At a first glance, we derive the following observation:
}
\noindent \textbf{O11: The Q-Error metric cannot serve as a good indicator of query execution performance.}
This observation is supported by a large amount of evidence from Table~\ref{tab:metric}. We list three typical examples on STATS-CEB as follows:
\revise{
1) \textsf{NeuroCard}$^E$ has the worst Q-Errors among all methods, but its execution time is comparable to \textsf{PostgreSQL} and much better than the histogram- and sampling-based methods and \textsf{LW-XGB};
2) \textsf{BayesCard} has the best Q-Errors, yet its execution time is $1.4$h slower than \textsf{FLAT};
and 3) the Q-Errors of \textsf{MSCN} are significantly worse than those of \textsf{PostgreSQL}, but the execution time of \textsf{MSCN} largely outperforms it.
}
\revise{
Next, we analyze the underlying reasons behind O11. This is particularly important as the DB community has made great efforts in purely optimizing the Q-Error of \textsf{CardEst}\xspace methods, while sometimes neglecting the ultimate goal of \textsf{CardEst}\xspace in DBMSs. As shown in Section~\ref{sec: plan-sys}, a \textsf{CardEst}\xspace method is invoked for multiple sub-plan queries to decide the query plan. The estimation errors of different sub-plan queries have different impacts on the final query plan performance. However, the Q-Error metric cannot distinguish this difference and regards the estimation errors of all queries equally. This causes the phenomenon that an estimation that is more accurate as measured by Q-Error may lead to a worse query plan in reality. We list two typical scenarios in the benchmark where Q-Error fails to distinguish the difference as follows:
}
\noindent \textbf{\revise{O12: Q-Error does not distinguish queries with small and large cardinality that have the same Q-Error value but matter differently to the query plan.}}
\revise{
For Q-Error, an estimate of $1$ for a true cardinality of $10$ has the same Q-Error as an estimate of $10^{11}$ for a true cardinality of $10^{12}$.
The former case may barely affect the overall query plan, whereas the latter can be catastrophic, since the (sub-plan) queries with large cardinalities dominate the overall effectiveness of the query plan (shown in O5). For example, in Figure~\ref{fig:q57plans}, the overall Q-Error of \textsf{BayesCard} over all sub-plan queries of Q57 is better than that of \textsf{FLAT}. However, \textsf{BayesCard} fails to correctly estimate the root query, which matters most to the query execution time, and thus leads to a much slower plan.
}
\noindent \textbf{\revise{O13: Q-Error cannot distinguish between query underestimation and overestimation that have the same Q-Error value but matter differently to the query plan.}}
\revise{
For Q-Error, an underestimate of $10^{9}$ for a true cardinality of $10^{10}$ is the same as an overestimate of $10^{11}$. These two estimates are very likely to lead to different plans with drastically different execution times. Recall the Q57 example: \textsf{BayesCard} underestimates the cardinality of the root query by $7\times$ and selects a ``merge join'' operation. We test this query by instead injecting a $7\times$ overestimate for this sub-plan query, and the optimizer then selects a ``hash join'' operation, yielding a plan that is twice as fast.
}
\revise{
As a result, the Q-Error metric does not consider the importance of different sub-plan queries and may mislead the query plan generation, so it is not a good optimization goal for \textsf{CardEst}\xspace methods.
}
\subsection{An Alternative Metric: P-Error }
\label{sect7.2}
Obviously, the best way to evaluate the quality of a \textsf{CardEst}\xspace method is to directly record its query execution time on some benchmark datasets and query workloads (e.g. JOB-LIGHT and STATS-CEB).
\revise{
However, this is time-consuming and not suitable for situations where fast evaluation is needed, e.g., hyper-parameter tuning.
A desirable metric should be fast to compute and simultaneously correlated with the query execution time. In the following, we propose the P-Error metric to fulfill this goal and quantitatively demonstrate that P-Error can be a possible substitute for Q-Error.
}
\noindent \underline{\textbf{P-Error metric for \textsf{CardEst}\xspace:}}
\revise{
Although obtaining the actual query execution time is expensive, we can
approximate it using a built-in component of the DBMS. Note that, given a query plan, the cost model of a DBMS outputs an estimated cost, which is designed to directly reflect the actual execution time. Inspired by recent research~\cite{negi2021flow}, we believe that this estimated cost can serve as a good metric for evaluating \textsf{CardEst}\xspace methods.}
\revise{
Specifically, given a query $Q$ and a \textsf{CardEst}\xspace method $A$, let $C^{\text{T}}$ and $C^{\text{E}}$ denote the sets of true and estimated cardinalities of all sub-plan queries of $Q$.
When $C^{\text{E}}$ is fed into the query optimizer, it generates a query plan $P(C^{\text{E}})$ of $Q$.
During the actual execution of this query plan, the true cardinalities of all sub-plan queries along the plan are instantiated.
Therefore, to estimate the execution cost of $P(C^{\text{E}})$, we inject the true cardinalities $C^{\text{T}}$ of the sub-plan queries into the DBMS. The DBMS then outputs an estimated cost of this query plan, which is highly correlated with the actual time for an accurate cost model. Following prior work~\cite{negi2021flow}, we choose PostgreSQL to calculate this estimated cost, which is denoted as $PPC(P(C^{\text{E}}), C^{\text{T}})$.
}
\revise{
Ideally, if the cost model is accurate, the query plan $P(C^{\text{T}})$ found using the true cardinalities $C^{\text{T}}$ should be optimal, i.e., $PPC(P(C^{\text{T}}), C^{\text{T}}) = \min_{C} PPC(P(C), C^{\text{T}})$. Therefore, we define
\begin{equation*}
\textbf{P-Error} = PPC(P(C^{\text{E}}), C^{\text{T}})/ PPC(P(C^{\text{T}}), C^{\text{T}})
\end{equation*}}
\vspace{-1em}
\noindent \revise{as our \textsf{CardEst}\xspace metric. The P-Error for an existing workload of queries can be computed instantaneously using the modified
plugin \textsf{pg\_hint\_plan} provided in~\cite{pghintplan} as long as we pre-compute and store the true cardinalities of all sub-plan queries.}
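Once the two plan costs are obtained through this cardinality-injection procedure, the metric itself is a single ratio; the cost values in the sketch below are illustrative:
\begin{verbatim}
def p_error(ppc_est_plan, ppc_true_plan):
    """P-Error = PPC(P(C_E), C_T) / PPC(P(C_T), C_T).

    ppc_est_plan : cost of the plan chosen under the estimated
                   cardinalities, re-costed with true cardinalities.
    ppc_true_plan: cost of the plan chosen under the true
                   cardinalities, costed with true cardinalities.
    """
    return ppc_est_plan / ppc_true_plan

# Illustrative costs in PostgreSQL cost units:
print(p_error(1.8e6, 1.2e6))   # -> 1.5
\end{verbatim}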
\revise{
In P-Error, the effectiveness of a \textsf{CardEst}\xspace method's estimation $C^{\text{E}}$ is measured at the plan cost level. The impact of the estimation error of each sub-plan query is reflected by its importance in generating the query plan $P(C^{\text{E}})$ (e.g., small or large cardinality, underestimation or overestimation, etc.).
}
\revise{
Notice that, in a real-world DBMS, the cost model can sometimes be inaccurate~\cite{leis2015good}, which may lead to worse query plans having better estimated costs, i.e., $PPC(P(C^{\text{T}}), C^{\text{T}})$ may not be the minimal cost over all query plans. However, this is not an issue: since $PPC(P(C^{\text{T}}), C^{\text{T}})$ is identical across different \textsf{CardEst}\xspace methods, we can always compare their relative performance using P-Error regardless of whether $P(C^{\text{T}})$ is optimal.
Meanwhile, we find that $P(C^{\text{T}})$ is optimal in most cases.
On our STATS benchmark, the query plan generated from the true cardinalities is optimal for more than $98\%$ of the queries using the default cost model of PostgreSQL.
}
\vspace{0.5em}
\noindent
\revise{
\underline{\textbf{Advantages of P-Error Metric:}}
In Table~\ref{tab:metric}, we report the P-Error distributions ($50\%, 90\%, 99\%$ percentiles) over the query workload of all \textsf{CardEst}\xspace methods and derive the following observation:
}
\noindent \textbf{O14: P-Error is more strongly correlated with the query execution time than Q-Error.}
\revise{
We can roughly see that methods with better runtime tend to have smaller P-Error (e.g., \textsf{FLAT} has the best P-Error). We also compute the correlation coefficients between the query execution time and Q-Error/P-Error. On the STATS-CEB query workload, the correlation coefficients of the $50\%$ and $90\%$ percentiles of the Q-Error distribution w.r.t.~query time are $0.036$ and $0.037$, whereas those of the $50\%$ and $90\%$ percentiles of the P-Error distribution are $0.810$ and $0.838$. This indicates that P-Error corresponds better to the query execution time than Q-Error.
}
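The correlation check itself can be reproduced in a few lines; all per-method values in the sketch below are illustrative placeholders rather than our measured numbers:
\begin{verbatim}
import numpy as np

# One entry per CardEst method; all values are placeholders.
runtime_h = np.array([11.3, 8.1, 7.2, 6.5, 5.8])   # exec. time
qerr_p50  = np.array([1.6, 200.0, 1.2, 2.0, 1.9])  # median Q-Error
perr_p50  = np.array([3.1, 2.4, 1.8, 1.5, 1.3])    # median P-Error

print(np.corrcoef(runtime_h, qerr_p50)[0, 1])
print(np.corrcoef(runtime_h, perr_p50)[0, 1])
\end{verbatim}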
\revise{
In addition, P-Error is more convenient as it outputs a single value at the plan cost level, whereas Q-Error outputs a value for each sub-plan query of $Q$. Therefore, P-Error overcomes the limitations of Q-Error and is shown to be more suitable for measuring the actual performance of \textsf{CardEst}\xspace methods.
}
\section{Introduction}
The query optimizer is an integral component in modern DBMSs. It is responsible for generating high-quality execution plans for the input SQL queries. \emph{Cardinality estimation (\textsf{CardEst}\xspace)} plays a significant role in query optimization. It aims at estimating the result sizes of all sub-plans of each query and guiding the optimizer to select the optimal join operations. The performance of \textsf{CardEst}\xspace has a
critical impact on the quality of the generated query plans.
\vspace{0.5em}
\noindent{\underline{\textbf{Background:}}}
Due to its important role in DBMSs, \textsf{CardEst}\xspace has been extensively studied by both the academic and industrial communities. \revise{Current open-source and commercial \revise{DBMSs} mainly use two traditional \textsf{CardEst}\xspace methods, namely histograms~\cite{selinger1979access,gunopulos2005selectivity,bruno2001stholes,muralikrishna1988equi,wang2003multi,deshpande2001independence} in PostgreSQL~\cite{psql2020} and SQL Server~\cite{sqlserver2019}, and sampling~\cite{leis2017cardinality,heimel2015self,kiefer2017estimating,zhao2018random, li2016wander} in MySQL~\cite{mysql2020} and MariaDB~\cite{mdb2020}.}
The core task of \textsf{CardEst}\xspace is to build a compact model capturing data and/or query information. With the prosperity of machine learning (ML), we have witnessed a proliferation of learned methods for \textsf{CardEst}\xspace in the last three years~\cite{kipf2018learned,hilprecht2019deepdb, sun2019end, yang2019deep,yang2020neurocard,wu2020bayescard,zhu2020flat,hasan2020,wu2021uae,dutt2019selectivity}. These methods can be categorized into two classes, namely query-driven and data-driven.
\revise{Query-driven \textsf{CardEst}\xspace methods~\cite{kipf2018learned, dutt2019selectivity} build discriminative models mapping featurized queries to their cardinalities while data-driven \textsf{CardEst}\xspace methods~\cite{yang2019deep,yang2020neurocard, tzoumas2011lightweight, getoor2001selectivity,wu2020bayescard, hilprecht2019deepdb, zhu2020flat} directly model the joint distribution of all attributes.} In comparison with the traditional methods, their estimation accuracy stands out as their models are more sophisticated and fine-grained~\cite{zhu2020flat, yang2019deep, wu2020bayescard, hilprecht2019deepdb}.
\vspace{0.5em}
\noindent{\underline{\textbf{Motivation:}}}
Despite the recent advance of the \textsf{CardEst}\xspace methods, we notice that a fundamental problem has not yet been answered, which is \emph{``to what extent can these advanced \textsf{CardEst}\xspace methods improve the performance of query optimizers in real-world settings?''} Although existing studies have conducted extensive experiments, they suffer from the following shortcomings:
\indent 1. \emph{The data and query workloads used for evaluation may not well represent the real-world scenarios.}
The widely adopted JOB-LIGHT query workload on the IMDB benchmark data~\cite{leis2015good} touches at most 8 numerical or categorical attributes within six tables, whose schema forms a star join. The recent benchmark work~\cite{wang2020ready} only evaluates these methods in a single-table scenario. Therefore, the existing works are not sufficient to reflect the behavior of \textsf{CardEst}\xspace methods on complex real-world data with high skewness and correlation, and on multi-table queries with various join forms and conditions.
\indent 2. \emph{Most of the evaluations do not exhibit the end-to-end improvement of \textsf{CardEst}\xspace methods on the query optimizer.}
Existing works usually evaluate \textsf{CardEst}\xspace methods on algorithm-level metrics, such as estimation accuracy and inference latency. \revise{These metrics only evaluate the quality of the \textsf{CardEst}\xspace algorithm itself, but cannot reflect how these methods behave in a real DBMS for two reasons. First, estimation accuracy does not directly translate into query plan quality. As different sub-plan queries matter differently to the query plan~\cite{perron2019learned,chaudhuri2009exact,trummer2019exact}, a more accurate method may produce a much worse query plan if it mistakes a few very important estimations~\cite{negi2020cost}.
Second, the actual query time is affected by multiple factors, including both query plan quality and \textsf{CardEst}\xspace inference cost.
}
Therefore, the ``gold standard'' to examine a \textsf{CardEst}\xspace method is to integrate it into the query optimizer of a real DBMS and record the \emph{end-to-end query time}, including both query plan generation time and execution time. Unfortunately, this end-to-end evaluation has been ignored in most existing works.
To address these two problems, the DBMS community needs
1) \emph{new benchmark datasets and query workloads that can represent the real-world settings} and 2) \emph{an in-depth end-to-end evaluation to analyze performance of \textsf{CardEst}\xspace methods}.
\smallskip
\noindent{\underline{\textbf{Contributions and Findings:}}}
In this paper, we provide a systematic evaluation on representative \textsf{CardEst}\xspace methods and make the following contributions:
\indent 1. \emph{We establish a new benchmark for \textsf{CardEst}\xspace that can represent real-world settings.}
Our benchmark includes a real-world dataset STATS and a hand-picked query workload STATS-CEB. STATS has complex properties such as large attribute numbers, strong distribution skewness, high attribute correlations, and a complicated join schema. STATS-CEB contains a number of diverse multi-table join queries varying in the number of involved tables, true cardinality, and join type (e.g., chain/star/mixed, one-to-many/many-to-many, etc.).
This benchmark poses challenges that better reveal the advantages and drawbacks of existing \textsf{CardEst}\xspace methods in real-world settings. (in Section~3)
\indent \emph{2. We provide an end-to-end evaluation platform for \textsf{CardEst}\xspace and present a comprehensive evaluation and analysis of the representative \textsf{CardEst}\xspace methods.}
We provide an approach that can integrate any \textsf{CardEst}\xspace method into the built-in query optimizer of PostgreSQL, a well-known open-source DBMS. Based on this, we evaluate the performance of both traditional and ML-based \textsf{CardEst}\xspace methods in terms of the end-to-end query time and other important aspects affecting their applicability, including inference latency, model size, training time, update efficiency, and update accuracy. From the results, we make a dozen key observations (O). Some key take-away findings are listed as follows (in Sections~4--6):
\indent \textbf{K1. Improvement (O1):}
On numerical and categorical query workloads, the ML-based data-driven \textsf{CardEst}\xspace methods can achieve \revise{remarkable} performance, whereas \revise{most of} the other methods can hardly improve the \textsf{PostgreSQL} baseline.
\indent \textbf{K2. Method (O3, O8-10):}
\revise{Among the data-driven methods, probabilistic graphical models outperform deep models in terms of both end-to-end query time and other practicality aspects and are more applicable to deploy in real-world DBMS.}
\indent \textbf{K3. Accuracy (O5-6, O11-13):}
\revise{
Accurate estimation of some important queries, e.g., with large cardinality, is crucial to the overall query performance. The widely used accuracy metric Q-Error~\cite{moerkotte2009preventing} cannot reflect a method's end-to-end query performance.
}
\indent \textbf{K4. Latency (O7):}
\revise{The inference latency of \textsf{CardEst}\xspace methods has a non-trivial impact on the end-to-end query time on OLTP workload.
}
\indent \emph{3. We propose a new metric that can indicate the overall quality of \textsf{CardEst}\xspace methods.}
Previous \textsf{CardEst}\xspace quality metrics, such as Q-Error, can only reflect the estimation accuracy of each (sub-plan) query but not the overall end-to-end performance of \textsf{CardEst}\xspace methods.
Therefore, inspired by the recent work~\cite{negi2020cost,negi2021flow}, we propose a new metric called P-Error, which directly relates the estimation accuracy of (sub-plan) queries to the ultimate query execution plan quality of the \textsf{CardEst}\xspace methods.
Based on our analysis,
P-Error is highly correlated with the end-to-end query time improvement. Thus, it could serve as a potential substitute for Q-Error and a better optimization objective for learned \textsf{CardEst}\xspace methods. (in Section~7)
\indent
\revise{
\emph{4. We point out some future research directions for \textsf{CardEst}\xspace methods.}
On the application scope, future ML-based \textsf{CardEst}\xspace methods should enhance the ML models to support more types of queries. Moreover, it is also helpful to unify different approaches and/or models to adjust \textsf{CardEst}\xspace for different settings, i.e., OLTP and OLAP. On design principles, we should optimize ML models towards end-to-end performance metrics instead of pure accuracy metrics, with an emphasis on multi-table join queries.
(in Section~8)
}
\input{prelim.tex}
\input{benchmark.tex}
\input{workflow.tex}
\input{static.tex}
\input{dynamic.tex}
\input{analysis.tex}
\section{Conclusions and Future Work}
In this paper, we establish a new benchmark for \textsf{CardEst}\xspace, which contains the complex real-world dataset STATS and the diverse query workload STATS-CEB. This new benchmark helps to clearly identify the pros and cons of different \textsf{CardEst}\xspace methods. In addition, we propose the new metric P-Error as a potential substitute for the well-known Q-Error. \revise{Based on the exhaustive experimental analysis, we derive a series of important observations that will provide the DBMS community with a holistic view of the \textsf{CardEst}\xspace problem and help researchers design more effective and efficient methods. We summarize the following key take-away messages:}
\revise{
$\bullet$ \textbf{Overall performance of \textsf{CardEst}\xspace methods:}
Both estimation accuracy and inference time matter to the end-to-end performance of query optimization. Some ML-based data-driven \textsf{CardEst}\xspace methods built on top of PGMs, such as \textsf{FLAT}, \textsf{DeepDB} and \textsf{BayesCard}, can largely improve the end-to-end performance as they strike the right balance in tuning the strictness of the independence assumptions to maintain both inference efficiency and model effectiveness. In contrast, the ML-based query-driven methods barely have any improvement over the \textsf{PostgreSQL} baseline.
}
\revise{
$\bullet$ \textbf{Practicality aspects of \textsf{CardEst}\xspace methods:}
Some traditional methods and ML-based data-driven \textsf{CardEst}\xspace methods, such as \textsf{PessEst} and \textsf{BayesCard}, can be applied in real-world dynamic DBMSs as their update speed can keep pace with fast data updates. In contrast, existing ML-based query-driven methods are inherently impractical for DBs with frequent data updates.
}
\revise{
$\bullet$ \textbf{Challenges for multi-table join queries:}
Learning one large data-driven model on the (sample of) full outer join of all tables has poor scalability and low accuracy when a DB contains numerous tables. Moreover, ML-based data-driven \textsf{CardEst}\xspace methods exhibit degrading performance for queries with an increasing number of join tables. This brings big challenges for the actual deployment of ML-based methods in DBMS.}
\revise{
$\bullet$ \textbf{Importance of different queries:}
The estimation errors of different sub-plan queries matter differently to the final query plan quality. Sometimes, accurate estimation of queries with large cardinalities is much more important than that of small ones. Therefore, optimizing the accuracy of \textsf{CardEst}\xspace methods along the Q-Error metric does not always generate high-quality query plans.
}
\smallskip
\revise{
Based on the takeaway messages, we point out the following future research directions (RD) for \textsf{CardEst}\xspace methods:}
\indent $\bullet$ \textbf{RD1}:
Enlarging the application scope of ML-based \textsf{CardEst}\xspace methods to support complex queries with ``LIKE'' predicates and cyclic joins.
\indent
\revise{
$\bullet$ \textbf{RD2}:
Combining different models together to design \textsf{CardEst}\xspace methods that could adjust the estimation accuracy and inference cost to fit different settings, e.g., OLAP and OLTP workload.
}
\indent
\revise{
$\bullet$ \textbf{RD3}:
Optimizing \textsf{CardEst}\xspace methods towards the end-to-end performance, i.e., using our new P-Error metric as the objective function and fine-tuning the estimation quality on important, possibly large, sub-plan queries.
}
\indent
\revise{
$\bullet$ \textbf{RD4}:
Designing new factorization methods for join queries to better balance the estimation accuracy and training/inference cost.
}
\clearpage
\bibliographystyle{ACM-Reference-Format}
\section{Preliminaries and Background}
\label{sec: prelim}
In this section, we introduce some preliminaries and background, including a formal definition of the cardinality estimation (\textsf{CardEst}\xspace) problem,
a brief review on representative \textsf{CardEst}\xspace algorithms and a short analysis on existing \textsf{CardEst}\xspace benchmarks.
\vspace{0.5em}
\noindent \underline{\textbf{\textsf{CardEst}\xspace Problem:}}
In the literature, \textsf{CardEst}\xspace is usually defined as a statistical problem. Let $T$ be a table with $k$ attributes $A = \{A_1, A_2, \dots, A_k \}$. $T$ could either represent a single relational table or a joined table.
In this paper, we assume each attribute $A_i$ for each $1 \leq i \leq k$ to be either categorical (whose values can be mapped to integers) or continuous, whose domain (all unique values) is denoted as $D_i$.
Thereafter, any selection query $Q$ on $T$ can be represented in a canonical form: $Q = \{A_1 \in R_1 \wedge A_2 \in R_2 \wedge \cdots \wedge A_k \in R_k\}$, where $R_i \subseteq D_i$ is the constraint region specified by $Q$ over attribute $A_i$ (i.e., filter predicates). Without loss of generality, we have $R_i = D_i$ if $Q$ has no constraint on $A_i$. Let $\textsf{Card}\xspace(T, Q)$ denote the \emph{cardinality}, i.e., the exact number of records in $T$ satisfying all constraints in $Q$. The \textsf{CardEst}\xspace problem requires estimating $\textsf{Card}\xspace(T, Q)$ as accurately as possible without executing $Q$ on $T$.
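To make the definition concrete, the following minimal Python sketch (the toy table, attribute values, and constraint regions are purely illustrative) computes $\textsf{Card}\xspace(T, Q)$ by brute force; a \textsf{CardEst}\xspace method must approximate exactly this quantity without scanning $T$.
\begin{verbatim}
# Toy illustration of the CardEst problem; all data is hypothetical.
table_T = [(1, 10), (1, 20), (2, 20), (3, 30), (3, 10)]   # tuples (A1, A2)
domains = {0: {1, 2, 3}, 1: {10, 20, 30}}                  # D_1, D_2

# Query Q in canonical form: A1 in R1 AND A2 in R2.
Q = {0: {1, 3}, 1: {10, 20}}

def card(table, query, domains):
    # attributes without a predicate default to their full domain
    regions = {i: query.get(i, domains[i]) for i in domains}
    return sum(all(t[i] in regions[i] for i in regions) for t in table)

print(card(table_T, Q, domains))   # -> 3
\end{verbatim}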
\revise{
In this paper, we concentrate on evaluating these selection queries on numerical/categorical (n./c.) attributes. We do not consider ``LIKE'' (or pattern matching) queries on string attributes for two reasons:
1) commercial \textsf{CardEst}\xspace methods for ``LIKE'' queries in DBMS often use magic numbers~\cite{psql2020, sqlserver2019, mysql2020, mdb2020}, which are not meaningful to evaluate;
and 2) \textsf{CardEst}\xspace solutions for n./c. queries mainly consider how to build statistical models summarizing attribute and/or query distribution information, whereas \textsf{CardEst}\xspace methods for ``LIKE'' queries~\cite{sun2019end, shetiya2020astrid, mikolov2013distributed} focus on applying NLP techniques to summarize semantic information in strings. Thus, they tackle different technical challenges, and statistical \textsf{CardEst}\xspace methods cannot effectively support ``LIKE'' queries.
}
\vspace{0.5em}
\noindent \underline{\textbf{\textsf{CardEst}\xspace Algorithms:}}
There exist many \textsf{CardEst}\xspace methods in the literature, which can be classified into three classes as follows:
\emph{Traditional \textsf{CardEst}\xspace methods}, such as histogram~\cite{selinger1979access} and sampling~\cite{leis2017cardinality,heimel2015self,kiefer2017estimating}, are widely applied in DBMS and generally based on simplified assumptions and expert-designed heuristics.
\revise{
Many variants are proposed later to enhance their performance.
Histogram-based variants include multi-dimensional histogram based methods~\cite{poosala1997selectivity, deshpande2001independence, gunopulos2000approximating, gunopulos2005selectivity, muralikrishna1988equi, wang2003multi}, correcting and self-tuning histograms with query feedbacks~\cite{bruno2001stholes, srivastava2006isomer, khachatryan2015improving, fuchs2007compressed} and updating statistical summaries in DBMS~\cite{stillger2001leo, wu2018towards}.
Sampling-based variants include query-driven kernel-based methods~\cite{heimel2015self,kiefer2017estimating}, index based methods~\cite{leis2017cardinality} and random walk based methods~\cite{zhao2018random, li2016wander}.
Some other work, such as the sketch-based method~\cite{cai2019pessimistic}, explores a new direction for \textsf{CardEst}\xspace.
}
\textit{ML-based query-driven \textsf{CardEst}\xspace methods} try to learn a model to map each featurized query $Q$ to its cardinality $\textsf{Card}\xspace(T, Q)$ directly. Some ML-enhanced methods improve the performance of \textsf{CardEst}\xspace methods by using more complex models such as DNNs~\cite{kipf2018learned} or gradient boosted trees~\cite{dutt2019selectivity}.
\textit{ML-based data-driven \textsf{CardEst}\xspace methods} are independent of the queries. They regard each tuple in $T$ as a point sampled according to the joint distribution $P_T(A) = P_T(A_1, A_2, \dots, A_k)$.
Let $P_T(Q) = P_T(A_1 \in R_1, A_2 \in R_2, \cdots , A_k \in R_k)$ be the probability specified by the region of $Q$. Then, we have $\textsf{Card}\xspace(T, Q) = P_T(Q) \cdot |T|$, so the \textsf{CardEst}\xspace problem reduces to modeling the probability density function (PDF) $P_T(A)$ of table $T$.
A variety of ML-based models have been used in existing work to represent $P_T(A)$, the most representative of which include deep auto-regression models~\cite{yang2019deep,yang2020neurocard,hasan2019multi} and probabilistic graphical models (PGMs) such as
Bayesian networks (BN)~\cite{tzoumas2011lightweight, getoor2001selectivity, wu2020bayescard}, SPN~\cite{hilprecht2019deepdb}, and FSPN~\cite{zhu2020flat}.
\revise{
In addition, some recently proposed methods such as~\cite{wu2021unified} try to integrate both query and data information for \textsf{CardEst}\xspace.
}
\vspace{0.5em}
\noindent \underline{\textbf{\textsf{CardEst}\xspace Benchmark:}}
Prior work has proposed several benchmark datasets and query workloads for \textsf{CardEst}\xspace evaluation. We analyze their pros and cons as follows:
1) Synthetic benchmarks such as TPC-C~\cite{leis2015good}, TPC-H~\cite{tpch2021}, TPC-DS~\cite{tpcds2021}, and the Star Schema Benchmark (SSB)~\cite{o2009star} contain real-world data schemas and synthetically generated tuples. They are mainly used for evaluating query engines but are not suitable for \textsf{CardEst}\xspace because their data generators make oversimplified assumptions on the joint PDF of attributes, such as uniform distribution and independence.
However, real-world datasets are often highly skewed and correlated~\cite{leis2015good}, which are more difficult for \textsf{CardEst}\xspace.
2) IMDB dataset with its JOB workload~\cite{leis2015good} is a well-recognized benchmark, containing complex data and string ``LIKE'' queries.
\revise{
To evaluate statistical \textsf{CardEst}\xspace methods on n./c. queries only, most of the existing works~\cite{kipf2018learned,yang2019deep,yang2020neurocard,hilprecht2019deepdb,zhu2020flat} use the JOB-LIGHT query workload containing 70 selection queries with a varied number of joined tables. However, these queries touch only 8 n./c. attributes within six tables of IMDB, and the joins between these tables are only star joins centered at one table.
Thus, this simplified IMDB dataset and its workload cannot comprehensively evaluate the performance of today's \textsf{CardEst}\xspace algorithms on more complex data and varied join settings.} On the other hand, some works~\cite{yang2020neurocard, negi2021flow} generate queries on the IMDB dataset including ``LIKE'' queries, which are not supported by most recent statistical methods.
Apart from these well-established and general-purpose benchmarks, there also exist other benchmarks with specific purposes. For example, Wang~\cite{wang2020ready} presents a series of real-world datasets to analyze whether existing \textsf{CardEst}\xspace algorithms are suitable to be deployed in real-world DBMSs. However, it is only conducted on single-table datasets, which cannot reflect the behavior of these models in more practical multi-table settings.
\vspace{0.5em}
\noindent \underline{\textbf{Summary:}}
A surge of \textsf{CardEst}\xspace algorithms built on top of statistical models has been proposed in the literature, especially in the last decade. However, existing \textsf{CardEst}\xspace benchmarks are not sufficient to comprehensively evaluate their performance.
\section{How Good Are The \textsf{CardEst}\xspace Methods?}
\label{sec: static}
In this section, we first thoroughly investigate the true effectiveness of the aforementioned \textsf{CardEst}\xspace methods in improving query plan quality.
Our evaluation focuses on a static environment where data in the system has read-only access. This setting is ubiquitous and critical for commercial DBMSs, especially in OLAP workloads of data warehouses~\cite{nam2020sprinter,garcia1982read,pedreira2018rethinking,zhao2020efficient}.
We organize the experimental results as follows: Section~\ref{sect5.1} reports the overall evaluation results, and Section~\ref{sect5.2} provides a detailed analysis of the methods' performance on various query types and an in-depth case study on the performance of some representative methods.
\subsection{Overall End-to-End Performance }
\label{sect5.1}
\begin{table*}
\centering
\caption{Overall performance of \textsf{CardEst}\xspace algorithms.}
\vspace{-0.5em}
\resizebox{0.95\textwidth}{!}{
\begin{tabular}{cc|ccc|ccc}
\toprule
\rowcolor{mygrey}
& & \multicolumn{6}{c}{\textbf{Workload}} \\
\rowcolor{mygrey}
\multirow{-2}{*}[-2ex]{\textbf{Category}} &
\multirow{-2}{*}[-2ex]{\textbf{Method}} & \multicolumn{3}{c}{\textsf{\textbf{JOB-LIGHT}}} & \multicolumn{3}{c}{\textsf{\textbf{STATS-CEB}}}\\
\cmidrule[0.05em](lr){3-5} \cmidrule[0.05em](l){6-8}
\rowcolor{mygrey}
& & \textbf{End-to-End Time} & \textbf{Exec. + Plan Time} & \textbf{Improvement} & \textbf{End-to-End Time} & \textbf{Exec. + Plan Time} & \textbf{Improvement} \\
\midrule
\multirow{2}{*}{Baseline} &
\textsf{PostgreSQL} & 3.67h & 3.67h + 3s & $0.0\%$ & 11.34h & 11.34h + 25s & $0.0\%$ \\
& \textsf{\textbf{TrueCard}} & \textbf{3.15h} & \textbf{3.15h + 3s} & $\mathbf{14.2\%}$ & \textbf{5.69h} & \textbf{5.69h + 25s} & $\mathbf{49.8\%}$ \\ \hline
\multirow{4}{*}{Traditional} & \revise{\textsf{MultiHist}} & \revise{3.92h} & \revise{3.92h + 30s} & $-6.8\%$ & \revise{14.55h} & \revise{14.53h + 79s} & \revise{$-28.3\%$} \\
& \revise{\textsf{UniSample}} & 4.87h & 4.84h + 96s & $-32.6\%$& $>25h$ & $--$ & $--$\\
& \revise{\textsf{WJSample}} & \revise{4.15h} & \revise{4.15h + 23s} & \revise{$-13.1\%$}& \revise{19.86h} & \revise{19.85h + 45s} & \revise{$-75.0\%$}\\
& \revise{\textsf{PessEst}} & \revise{3.38h} & \revise{3.38h + 11s} & \revise{$7.9\%$}& \revise{6.10h} & \revise{6.10h + 43s} & \revise{$46.2\%$}\\
\hline
\multirow{4}{*}{Query-driven} & \textsf{MSCN} & \revise{3.50h} & \revise{3.50h + 12s} & \revise{$4.6\%$} & \revise{$8.13h$} & \revise{8.11h + 46s}
& \revise{$28.3\%$} \\
& \textsf{LW-XGB} & 4.31h & 4.31h + 8s & $-17.4\%$ & $>25h$ & $--$ & $--$\\
& \textsf{LW-NN} & 3.63h & 3.63h + 9s & $1.1\%$ & 11.33h & 11.33h + 34s & $0.0\%$ \\
& \revise{\textsf{UAE-Q}} & \revise{3.65h} & \revise{3.55h+356s} & \revise{$-1.9\%$} & \revise{11.21h} & \revise{11.03h+645s} & \revise{$1.1\%$} \\\hline
\multirow{4}{*}{Data-driven}
& \textsf{NeuroCard}$^E$ & 3.41h & 3.29h + 423s & $6.8\%$ & 12.05h & 11.85h + 709s & $-6.2\%$ \\
& \textsf{BayesCard} & 3.18h & 3.18h + 10s & $13.3\%$ & 7.16h & 7.15h + 35s & $36.9\%$ \\
& \textsf{DeepDB} & 3.29h & 3.28h + 33s & $10.3\%$ & 6.51h & 6.46h + 168s & $42.6\%$ \\
& \textsf{FLAT} & 3.21h & 3.21h + 15s & $12.9\%$ & 5.92h & 5.80h + 437s & $47.8\%$ \\ \hline
Query + Data & \revise{\textsf{UAE}} & \revise{3.71h} & \revise{3.60h + 412s} & \revise{$-2.7\%$} & \revise{11.65h} & \revise{11.46h + 710s} & \revise{$-0.02\%$} \\
\bottomrule
\end{tabular}
}
\label{tab:overall}
\end{table*}
We evaluate the end-to-end performance (query execution time plus planning time) on both JOB-LIGHT and STATS-CEB benchmarks for all \textsf{CardEst}\xspace methods including two baselines \textsf{PostgreSQL} and \textsf{TrueCard} shown in Table~\ref{tab:overall}.
We also report their relative improvement over the \textsf{PostgreSQL} baseline as an indicator of their end-to-end performance. In the following, we will first summarize several overall observations (O) regarding Table~\ref{tab:overall}, and then provide detailed analysis w.r.t. each of these \textsf{CardEst}\xspace methods.
\smallskip
\noindent \revise{\textbf{O1: Most of the ML-based data-driven \textsf{CardEst}\xspace methods can achieve remarkable performance, whereas most of the traditional and ML-based query-driven \textsf{CardEst}\xspace methods do not have much improvement over \textsf{PostgreSQL}.}}
The astonishing performance of these ML-based data-driven \textsf{CardEst}\xspace methods \break (\textsf{BayesCard}, \textsf{DeepDB}, and \textsf{FLAT}) comes from their accurate characterization of data distributions and \revise{reasonable independence assumptions over joined tables.} \revise{Traditional histogram- and sampling-based methods (\textsf{MultiHist}, \textsf{UniSample}, and \textsf{WJSample}) have worse performance than \textsf{PostgreSQL}, whereas the new traditional approach (\textsf{PessEst}) is significantly better. The query-driven \textsf{CardEst}\xspace methods' performance is not stable. They
rely on a large number of executed queries as training data, and the testing query workload should follow the same distribution as the training workload to produce accurate estimations~\cite{hilprecht2019deepdb}.}
\smallskip
\noindent \revise{\textbf{O2: The differences among the \textsf{CardEst}\xspace methods' improvements over \textsf{PostgreSQL} are much more drastic on datasets with more complicated data distributions and join schemas.}}
\revise{We observe that the execution times of the \textsf{CardEst}\xspace methods that outperform \textsf{PostgreSQL} (\textsf{PessEst}, \textsf{NeuroCard}$^E$, \textsf{BayesCard}, \textsf{DeepDB}, and \textsf{FLAT}) on JOB-LIGHT are all roughly $3.2h$, which is very close to the minimal execution time of \textsf{TrueCard} ($3.15h$).}
As explained in Section~\ref{sec: bench}, the data distributions in \revise{the simplified IMDB dataset} and the JOB-LIGHT queries are relatively simple. Specifically, the table \textsf{title} in the IMDB dataset plays a central role in the join schema: all other tables join on its primary key, \revise{so the joint distribution could be easily learned}. However, their performance differences on STATS are very drastic
because the STATS dataset is much more challenging with high attribute correlations and various join types.
Therefore, the STATS-CEB benchmark \revise{can help} expose the advantages and drawbacks of these methods.
\smallskip
\noindent \underline{\textbf{Analysis of Traditional \textsf{CardEst}\xspace Methods:}}
\revise{Histogram and sampling based methods perform significantly worse than \textsf{PostgreSQL} on both benchmarks because of their inaccurate estimation.
\textsf{MultiHist} and \textsf{UniSample} use the join uniformity assumption to estimate join queries, whose estimation error grows rapidly for queries joining more tables.
\textsf{WJSample} draws a random-walk-based sample across the join of multiple tables. However, as the cardinality increases with the number of joined tables, the relatively small sample size cannot effectively capture the data distribution, leading to large estimation errors.
Queries joining a larger number of tables are generally more important in determining a good execution join order~\cite{leis2015good}.
Therefore, these methods tend to yield poor join orders and long-running query plans.
\textsf{PostgreSQL} produces more accurate estimations because of its high-quality implementation and fine-grained optimizations on join queries.
The new traditional method \textsf{PessEst} improves significantly over \textsf{PostgreSQL} because it computes upper bounds on the estimated cardinalities and thereby avoids expensive physical join plans.
}
\smallskip
\noindent \underline{\textbf{Analysis of ML-based Query-driven \textsf{CardEst}\xspace Methods:}}
Overall, the query-driven methods have comparable performance to the \textsf{PostgreSQL} baseline. Specifically, \textsf{MSCN} slightly outperforms \textsf{PostgreSQL} (4.6\% faster runtime on JOB-LIGHT and 19.7\% faster on STATS-CEB), \textsf{LW-XGB} has a much slower query runtime, and \textsf{LW-NN} has comparable performance.
The unsatisfactory performance of these methods could be due to the following reasons.
$\bullet$ These methods are essentially trying to fit the probability distributions of all possible joins in the schema, which has super-exponential complexity. Specifically, there can exist an exponential number of possible joins in a schema, and for each joined table, the complexity of its distribution is exponential w.r.t. its attribute number and domain size~\cite{wu2020bayescard}.
Thus, these models themselves are not complex enough to fully understand all these distributions.
$\bullet$ Similarly, these methods would require an enormous amount of executed queries as training data to accurately characterize these complex distributions. In our experiment, our computing resources can only afford to generate $10^5$ queries (executing 146 queries in STATS-CEB takes 10 hours), which may not be enough for this task. Besides, it is unreasonable to assume that a \textsf{CardEst}\xspace method can have access to this amount of executed queries in reality.
$\bullet$ The well-known workload shift issue states that query-driven methods trained on one query workload are unlikely to produce accurate predictions on a different workload~\cite{hilprecht2019deepdb}. In our experiment, the training query workload is automatically generated, whereas the JOB-LIGHT and STATS-CEB testing query workloads are hand-picked. Therefore, the training and testing workloads of these methods have different distributions.
\smallskip
\noindent \underline{\textbf{Analysis of ML-based Data-driven Methods:}}
Data-driven ML methods (\textsf{BayesCard}, \revise{\textsf{NeuroCard}$^E$}, \textsf{DeepDB}, and \textsf{FLAT}) consistently outperform \textsf{PostgreSQL} by $7-13\%$ on JOB-LIGHT. Except for \revise{\textsf{NeuroCard}$^E$}, the other three improve over \textsf{PostgreSQL} by $37-48\%$ \revise{on STATS-CEB}.
\revise{Their performance indicates that data-driven methods could serve as a practical counterpart of the PostgreSQL \textsf{CardEst}\xspace component.} Through detailed analysis of \revise{\textsf{NeuroCard}$^E$} method, we derive the following observation:
\smallskip
\noindent \revise{\textbf{O3: Learning one model on the (sample of) full outer join of all tables in a DB may lead to poor scalability. We conjecture that an effective \textsf{CardEst}\xspace method should make appropriate independence assumptions for large datasets.}}
The advantages of \revise{\textsf{NeuroCard}$^E$} over \textsf{PostgreSQL} disappear when shifting from the JOB-LIGHT to the STATS-CEB benchmark for the following reasons. First, the STATS dataset contains significantly more attributes with larger domain sizes, which \revise{can be} detrimental to \revise{\textsf{NeuroCard}$^E$}'s underlying deep auto-regressive models~\cite{wang2020ready, wu2020bayescard}. Second, the full outer join size of STATS is significantly larger than that of \revise{the simplified} IMDB, making the sampling procedure of \revise{\textsf{NeuroCard}$^E$} less effective. Specifically, the full outer join size can get up to $3\times 10^{16}$ and an affordable training data sample size would be no larger than $3\times 10^8$. Therefore, the \revise{\textsf{NeuroCard}$^E$} model trained on this sample only contains $1\times 10^{-8}$ of the information in the original dataset.
\revise{Third, the join keys in the STATS dataset have very skewed distributions. E.g., some key values of a table match zero or one tuple in another table, while others match hundreds. This complicated distribution of join keys makes \textsf{NeuroCard}$^E$'s full outer join sample less effective.}
\revise{Therefore, \textsf{NeuroCard}$^E$} can hardly capture the correct data distributions, especially for joined tables with small cardinalities. Specifically, we find that for queries on the joins of a small set of tables, \revise{\textsf{NeuroCard}$^E$}'s prediction deviates significantly from the true cardinality because its training sample does not contain many non-null tuples for this particular set of joined tables.
\begin{table}[t]
\caption{End-to-end time improvement ratio of \textsf{CardEst}\xspace algorithms on queries with different number of join tables.}
\vspace{-1em}
\scalebox{0.69}{
\begin{tabular}{cc|cccccc}
\hline
\rowcolor{mygrey}
\bf \# tables & \bf \# queries & \bf \revise{\textsf{PessEst}} & \bf \textsf{MSCN} & \bf \textsf{BayesCard} & \bf \textsf{DeepDB} & \bf \textsf{FLAT} & \textbf{TrueCard}\\ \hline
{$2-3$} & 38 & \revise{$2.62\%$} & \revise{$2.04\%$} & $2.07\%$ & $1.98\%$ & $2.48\%$ & $\mathbf{3.66\%}$ \\
{$4$} & 50 & \revise{$53.1\%$} & \revise{$-12.3\%$} & $55.8\%$ & $48.0\%$ & $55.7\%$ & $\mathbf{55.9\%}$\\
{$5$} & 28 & \revise{$31.7\%$} & \revise{$29.8\%$} & $36.55\%$ & $32.90\%$ & $35.4\%$ & $\mathbf{37.0\%}$ \\
{$6-8$} & 34 & \revise{$29.6\%$} & \revise{$-4.06\%$} & $2.51\%$ & $26.3\%$ & $32.0\%$ & $\mathbf{34.6\%}$ \\
\hline
\end{tabular}}
\label{tab: join_number}
\vspace{-2em}
\end{table}
The other three data-driven \textsf{CardEst}\xspace methods can significantly outperform the \textsf{PostgreSQL} baseline because their models are not constructed on the full outer join of all tables.
Specifically, they all use the ``divide and conquer'' idea to divide the large join schema into several smaller subsets with each representing a join of multiple tables.
In this way, they can capture the rich correlation within each subset of tables;
simultaneously, they avoid constructing the full outer join of all tables by assuming some independence between tables with low correlations.
Then, \textsf{BayesCard}, \textsf{DeepDB}, and \textsf{FLAT} build a model (BN, SPN, and FSPN respectively) to represent the distribution of the corresponding small part. This approach solves the drawback of \revise{\textsf{NeuroCard}$^E$}, yields relatively accurate estimation, and produces effective query execution plans.
Among them, \textsf{FLAT} achieves the best performance (47.8\% improvement), which is \revise{very close to the 49.8\% improvement of \textsf{TrueCard}}. It outperforms \textsf{DeepDB} mostly because the STATS dataset is highly correlated, so the FSPN in \textsf{FLAT} has a more accurate representation of the data distribution than the SPN in \textsf{DeepDB}. On the other hand, \textsf{BayesCard} has an even more accurate representation of the data distribution and yields the best end-to-end time for most queries in STATS-CEB. It does not outperform \textsf{FLAT} overall mostly because of one extremely long-running query, which we will study in detail in Section~\ref{sect5.2}.
\subsection{Analysis of Different Query Settings}
\label{sect5.2}
In this section, we further examine to what extent the \textsf{CardEst}\xspace methods improve over \textsf{PostgreSQL} on various query types, i.e. different number of join tables ($\# tables$) and different intervals of true cardinalities.
Since JOB-LIGHT workload does not contain queries with very diverse types and the ML-based data-driven methods do not show significant difference on these queries, we only investigate queries on STATS-CEB.
Note that we only examine the methods with clear improvements over \textsf{PostgreSQL} on STATS-CEB: \textsf{MSCN}, \textsf{BayesCard}, \textsf{DeepDB}, and \textsf{FLAT}.
\smallskip
\noindent \textbf{\underline{Number of Join Tables:}}
Table~\ref{tab: join_number} shows performance improvement of different ML-based methods over the \textsf{PostgreSQL} baseline and we derive the following observation:
\noindent\textbf{\revise{O4: The improvement gaps between these methods and the performance of \textsf{TrueCard} increase with the number of join tables.}}
Specifically, \textsf{BayesCard} achieves near-optimal improvement for queries joining no more than 5 tables, but barely any improvement for queries joining 6 or more tables.
This observation suggests that the estimation qualities of these SOTA methods decline for queries joining more tables.
In fact, the fanout join estimation approach adopted by all these methods sacrifices accuracy for efficiency by assuming some tables are independent of others. This estimation error may accumulate for queries joining a large number of tables, leading to sub-optimal query plans.
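As a back-of-the-envelope illustration of this effect (the numbers below are hypothetical and not measured from any evaluated method), assume every additional join step contributes an independent multiplicative estimation error; the worst-case error on the final sub-plan then grows geometrically with the number of joined tables.
\begin{verbatim}
def compounded_error(per_step_q_error, num_joins):
    """Worst-case multiplicative error after chaining num_joins steps."""
    return per_step_q_error ** num_joins

for n in (1, 3, 5, 7):
    print(n, "joins -> worst-case factor", compounded_error(1.5, n))
# 1 -> 1.5, 3 -> 3.375, 5 -> ~7.59, 7 -> ~17.1
\end{verbatim}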
\begin{figure*}[htbp]
\centering
\includegraphics[width=16cm]{case_study.eps}
\vspace{-1em}
\caption{\revise{Case study of STATS Q57.}}
\label{fig:q57plans}
\vspace{-1em}
\end{figure*}
\smallskip
\noindent \textbf{\underline{Size of Cardinality:}}
\revise{We choose Q57 (in Figure~\ref{fig:q57plans}) of STATS-CEB as a representative query to study the effect of estimation accuracy w.r.t. different cardinalities and investigate when a \textsf{CardEst}\xspace method could go wrong. }
The execution time of Q57 for \textsf{TrueCard} and \textsf{FLAT} is $1.90h$ and $1.92h$, while the time for \textsf{BayesCard} is $3.23h$. We derive two important observations from this query, which are verified to be generalizable to other queries in JOB-LIGHT and STATS-CEB.
\smallskip
\noindent \revise{ \textbf{O5: Accurate estimation of (sub-plan) queries with large cardinalities is sometimes more important than the small ones.}}
When choosing the join method in the root node of execution plans for Q57, \textsf{BayesCard} underestimates the final join size and chooses the ``merge join'' physical operation. In contrast, \textsf{FLAT} produces a more accurate estimation for the final join size and chooses the ``hash join'' operation, which is twice as fast as the ``merge join''. Since the final join operation takes up $99\%$ of the total execution time, \textsf{FLAT} significantly outperforms \textsf{BayesCard} on this query.
Generally, a (sub-plan) query with a larger true cardinality requires a longer time to execute. It is very common that the join size of two intermediate tables is much larger than both of them. Therefore, some sub-plans can take a significantly longer time to execute than others. A bad estimation on these large sub-plan queries can have a detrimental effect on the overall runtime, whereas a series of good estimations on small sub-plan queries will not influence the runtime as much. Therefore, the estimation accuracy of sub-plan queries with very large true cardinalities dominates the overall quality of the query plan.
\smallskip
\noindent \textbf{O6: Choosing the correct physical operations sometimes is more important than selecting the optimal join order.}
As shown in Figure~\ref{fig:q57plans}, \textsf{BayesCard} generates the optimal join order of Q57 because of its near-perfect estimation of all sub-plan queries except for the one at the root node.
The join order selected by \textsf{FLAT} is very different from the optimal one. Surprisingly, \textsf{FLAT}'s plan is roughly twice as fast to execute as \textsf{BayesCard}'s plan due to the dominant large sub-plan query at the root node. For Q57, the sub-optimal join order is only $1\%$ slower to execute, but a single sub-optimal physical operation makes the plan $68\%$ slower.
These aforementioned two observations also hold for other queries in these two benchmarks, so we conjecture that they might be generalizable to all queries.
\section{Evaluation Plan}
\label{sec: plan}
We aim to evaluate how \textsf{CardEst}\xspace algorithms behave in a real DBMS, including the end-to-end improvement on optimizing query plans and other practicality aspects, on our new benchmark.
This section introduces the detailed evaluation plan. Section~\ref{sec: plan-algo} presents all baseline \textsf{CardEst}\xspace algorithms chosen to be evaluated, Section~\ref{sec: plan-sys} describes our implementation method and system settings, and Section~\ref{sec: plan-met} lists the evaluation metrics of interest.
\subsection{\textsf{CardEst}\xspace Algorithms}
\label{sec: plan-algo}
\revise{
We identify and choose fourteen representative \textsf{CardEst}\xspace algorithms across the three classes (traditional, ML-based query-driven, and ML-based data-driven) reviewed in Section~\ref{sec: prelim}. The selection principles and details of the algorithms in each class are elaborated as follows.
}
\noindent
\revise{
\underline{\textbf{Traditional \textsf{CardEst}\xspace Algorithms:}}
In this class, we choose five algorithms along three technical directions:
1) for histogram-based methods, we evaluate \textsf{PostgreSQL} and \textsf{MultiHist}, which apply one-dimensional and multi-dimensional histograms for \textsf{CardEst}\xspace, respectively;
2) for sampling-based methods, we evaluate the uniformly random sampling \textsf{UniSample} method and the more advanced \textsf{WJSample} method for join sampling;
and 3) for other methods, we evaluate \textsf{PessEst}, a recently proposed method that exhibits state-of-the-art performance in some aspects.
The details are as follows:
}
\indent
\revise{1) \textsf{\underline{{PostgreSQL}}}~\cite{psql2020} refers to the histogram-based \textsf{CardEst}\xspace method used in the well-known DBMS PostgreSQL. It assumes that all attributes are mutually independent and maintains a 1-D (cumulative) histogram to represent $P_{T}(A_i)$ for each attribute $A_i$. The probability $P_T(Q)$ can then be easily obtained by multiplying all $P_{T}(A_i \in R_i)$ together. In addition, optimization strategies such as collecting the most common values, as well as a careful industrial implementation, are used to enhance the performance.
}
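A minimal sketch of this attribute-value-independence estimate is shown below (toy data; \textsf{PostgreSQL}'s actual implementation additionally maintains most-common-value lists, histogram bucket boundaries, and many engineering refinements): keep one per-attribute distribution and multiply the per-attribute selectivities.
\begin{verbatim}
from collections import Counter

def per_attribute_dist(table, attr_idx):
    cnt = Counter(t[attr_idx] for t in table)
    return {v: c / len(table) for v, c in cnt.items()}     # P_T(A_i = v)

def independence_estimate(table, query):
    # query: {attr_idx: set of allowed values}; returns estimated card.
    sel = 1.0
    for i, region in query.items():
        dist = per_attribute_dist(table, i)
        sel *= sum(dist.get(v, 0.0) for v in region)       # P_T(A_i in R_i)
    return sel * len(table)

table_T = [(1, 10), (1, 20), (2, 20), (3, 30), (3, 10)]
print(independence_estimate(table_T, {0: {1, 3}, 1: {10, 20}}))  # 3.2 vs. true 3
\end{verbatim}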
\indent
\revise{
2) \textsf{\underline{{MultiHist}}}~\cite{poosala1997selectivity} identifies subsets of correlated attributes and models them as multi-dimensional histograms. We use the implementation provided in the repository~\cite{yang2019naru}. We do not compare with the variant methods DBHist~\cite{deshpande2001independence}, GenHist~\cite{gunopulos2000approximating,gunopulos2005selectivity} and
VIHist~\cite{wang2003multi} over~\cite{poosala1997selectivity} since their improvements are not very significant and open-source implementations are not provided.
}
\indent 3) \textsf{\underline{{\revise{UniSample}}}}~\cite{leis2017cardinality,zhao2018random} makes no assumption but randomly fetches records from $T$ on-the-fly according to $P_{T}(A)$ to estimate the probability $P_{T}(Q)$. It is also widely used in DBMS such as MySQL~\cite{mysql2020} and MariaDB~\cite{mdb2020}. We set the sampling size to $10^4$.
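The following sketch illustrates the sampling estimator in its simplest form (toy data and sample size; real systems sample pages or use indexes rather than drawing tuples one by one): evaluate the predicate on a uniform sample and scale up.
\begin{verbatim}
import random

def sampling_estimate(table, predicate, sample_size, seed=0):
    rng = random.Random(seed)
    sample = [rng.choice(table) for _ in range(sample_size)]
    hit_ratio = sum(predicate(t) for t in sample) / sample_size
    return hit_ratio * len(table)

table_T = [(1, 10), (1, 20), (2, 20), (3, 30), (3, 10)] * 1000
pred = lambda t: t[0] in {1, 3} and t[1] in {10, 20}
print(sampling_estimate(table_T, pred, sample_size=10000))
# close to the true cardinality 3000; variance grows for highly
# selective predicates and for joins of many tables
\end{verbatim}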
\indent
\revise{
4) \textsf{\underline{{\revise{WJSample}}}}~\cite{li2016wander} designs a random-walk-based method called wander join to sample tuples from multiple tables. It has been integrated into DBMSs and exhibits favorable performance in a recent study~\cite{park2020gcare}. The follow-up method~\cite{zhao2018random} improves it from biased to unbiased sampling. We do not compare with it to avoid redundancy.
}
\revise{
5) \textsf{\underline{{\revise{PessEst}}}}~\cite{cai2019pessimistic} leverages
randomized hashing and data sketching to tighten the bound for multi-join queries. It is a new class of estimator as it never underestimates the cardinality. Meanwhile, it has been verified to perform well in real-world DBMSs.
}
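A highly simplified illustration of the ``never underestimate'' idea is given below: each tuple of $R$ can join with at most the maximum per-key fanout of $S$, which yields a cheap upper bound on the two-way join size. The actual algorithm in~\cite{cai2019pessimistic} uses randomized hashing and bound sketches to obtain much tighter bounds for multi-join queries; the code below only conveys the flavor of an upper-bound estimator on toy data.
\begin{verbatim}
from collections import Counter

def join_upper_bound(R, S, key_r, key_s):
    deg_s = Counter(t[key_s] for t in S)          # per-key fanout in S
    deg_r = Counter(t[key_r] for t in R)
    bound_rs = len(R) * max(deg_s.values(), default=0)
    bound_sr = len(S) * max(deg_r.values(), default=0)
    return min(bound_rs, bound_sr)                # never below the true size

R = [(1, 'a'), (2, 'b'), (2, 'c')]
S = [(1, 'x'), (1, 'y'), (3, 'z')]
print(join_upper_bound(R, S, key_r=0, key_s=0))   # 6, true join size is 2
\end{verbatim}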
\revise{
We do not compare with the other variants of traditional methods~\cite{bruno2001stholes, srivastava2006isomer, khachatryan2015improving, fuchs2007compressed, stillger2001leo, wu2018towards, heimel2015self,kiefer2017estimating, leis2017cardinality} as they do not exhibit significantly better performance or provide open-source implementation.}
\vspace{0.5em}
\noindent
\revise{
\underline{\textbf{ML-Based \textsf{CardEst}\xspace Algorithms:}}
In our evaluation, we choose four query-driven (\textsf{MSCN}, \textsf{LW-XGB}, \textsf{LW-NN} and \textsf{UAE-Q}) and four data-driven (\textsf{NeuroCard}, \textsf{BayesCard}, \textsf{DeepDB} and \textsf{FLAT}) \textsf{CardEst}\xspace methods. They are representative as they apply different statistical models, and exhibit state-of-the-art performance using each model. Specifically, for query-driven methods, \textsf{MSCN}, \textsf{LW-XGB}/\textsf{LW-NN} and \textsf{UAE-Q} use deep neural networks, classic lightweight regression models and deep auto-regression models to learn the mapping functions, respectively. The data-driven methods model the data distribution using deep auto-regression models and three probabilistic graphical models (BN, SPN, and FSPN), respectively. They use different techniques to balance estimation accuracy, inference efficiency, and model size. Besides, we also evaluate \textsf{UAE}, an extension of \textsf{UAE-Q} using both query and data information.
The details are as follows:
}
\indent 6) \textsf{\underline{MSCN}}~\cite{kipf2018learned} is a deep learning method built upon the multi-set convolutional network model. The features of attributes in table $T$, join relations in query $Q$, and predicates of query $Q$ are firstly fed into three separate modules, where each is comprised of a two-layer neural network. Then, their outputs are averaged, concatenated, and fed into a final neural network for estimation.
\indent 7) \textsf{\underline{LW-XGB}} and 8) \textsf{\underline{LW-NN}}~\cite{dutt2019selectivity}
formulate the \textsf{CardEst}\xspace mapping function as a regression problem and apply gradient boosted trees and neural networks for regression, respectively. Specifically, \textsf{LW-XGB} applies the XGBoost~\cite{chen2016xgboost} as it attains equal or better accuracy and estimation time than both LightGBM~\cite{ke2017lightgbm} and random forests~\cite{svetnik2003random} for a given model size. As the original models only support single table queries, we extend them to support joins with an additional neural network to combine single-table information.
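The sketch below conveys the query-driven regression formulation in its simplest single-table form (the featurization, the synthetic training workload, and the choice of a gradient-boosted regressor are illustrative stand-ins, not the actual \textsf{MSCN}, \textsf{LW-XGB}, or \textsf{LW-NN} designs): each range query is encoded as a vector of per-attribute bounds and its log-cardinality is regressed.
\begin{verbatim}
import math, random
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = random.Random(0)
table = [(rng.randint(0, 99), rng.randint(0, 99)) for _ in range(10000)]

def true_card(lo1, hi1, lo2, hi2):
    return sum(lo1 <= a <= hi1 and lo2 <= b <= hi2 for a, b in table)

def random_query():
    lo1, hi1 = sorted(rng.sample(range(100), 2))
    lo2, hi2 = sorted(rng.sample(range(100), 2))
    return (lo1, hi1, lo2, hi2)

train = [random_query() for _ in range(2000)]
X = np.array(train, dtype=float)                 # query featurization
y = np.array([math.log1p(true_card(*q)) for q in train])

model = GradientBoostingRegressor(n_estimators=200).fit(X, y)

q = (10, 40, 20, 80)
est = math.expm1(model.predict(np.array([q], dtype=float))[0])
print("estimate:", round(est), "true:", true_card(*q))
\end{verbatim}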
\indent
\revise{
9) \textsf{\underline{UAE-Q}}~\cite{wu2021unified} applies deep auto-regression models to learn the mapping function. It proposes differentiable progressive sampling via the Gumbel-Softmax trick to enable deep auto-regression models to learn from queries.
}
For the above query-driven \textsf{CardEst}\xspace methods, we automatically generate $10^5$ queries as training examples to train these models.
10) \revise{\underline{\textsf{NeuroCard}}~\cite{yang2020neurocard}, the multi-table extension of \textsf{Naru}~\cite{yang2019deep}, is built upon a deep auto-regression model.}
It decomposes the joint PDF $P_{T}(A) = P_{T}(A_1) \cdot \prod_{i=2}^{k} P_{T}(A_i| A_1, A_2, \dots, A_{i - 1})$ according to the chain rule and models each (conditional) PDF parametrically by a 4-layer DNN ($4 \times 128$ neuron units). All \revise{tables} can be learned together using a \revise{single} masked auto-encoder~\cite{made}. Meanwhile, a progressive sampling technique~\cite{liang2020variable} is provided to sample points from the region of query $Q$ to estimate its probability. We set the sampling size to $8,000$.
We omit a very similar method in~\cite{hasan2019multi} as it has \revise{slightly worse} performance than \textsf{NeuroCard}.
\revise{Note that the original \textsf{NeuroCard} method is only designed for datasets with a tree-structured join schema. On our STATS benchmark with a cyclic join schema, we partition the schema into multiple tree-structured join schemas and build one \textsf{NeuroCard} model for each. To avoid ambiguity, we denote this extended method as \textsf{NeuroCard}$^E$ in the following.}
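The chain-rule factorization can be illustrated with exact conditional tables standing in for the learned deep auto-regressive model (the toy data below is hypothetical; \textsf{NeuroCard}$^E$ learns the conditionals with a DNN and replaces the exhaustive enumeration below with progressive sampling).
\begin{verbatim}
from collections import Counter, defaultdict

table = [(1, 10), (1, 20), (2, 20), (3, 30), (3, 10)]
n = len(table)

p_a1 = Counter(t[0] for t in table)              # counts for P(A1)
p_a2_given_a1 = defaultdict(Counter)             # counts for P(A2 | A1)
for a1, a2 in table:
    p_a2_given_a1[a1][a2] += 1

def prob(region1, region2):
    """P(A1 in R1, A2 in R2) via the chain rule P(A1) * P(A2 | A1)."""
    total = 0.0
    for a1 in region1:
        c1 = p_a1[a1]
        if c1 == 0:
            continue
        for a2 in region2:
            total += (c1 / n) * (p_a2_given_a1[a1][a2] / c1)
    return total

print(prob({1, 3}, {10, 20}) * n)                # estimated cardinality 3.0
\end{verbatim}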
11) \textsf{\underline{BayesCard}}~\cite{wu2020bayescard} is fundamentally based on BN, which models the dependence relations among all attributes as a directed acyclic graph. Each attribute $A_i$ is assumed to be
conditionally independent of the remaining ones given its parent attributes $A_{\text{pa}(i)}$, so the joint PDF $P_{T}(A) = \prod_{i = 1}^{k} P_{T}(A_i | A_{\text{pa}(i)})$.
\textsf{BayesCard} revitalizes \textsf{BN} using probabilistic programming to improve its inference and model construction speed (i.e., learning the dependence graph and the corresponding probability parameters). Moreover, it adopts advanced ML techniques to process multi-table join queries, which significantly increases its estimation accuracy over previous BN-based \textsf{CardEst}\xspace methods~\cite{getoor2001selectivity, tzoumas2011lightweight, dasfaa2019}, which are therefore not evaluated in this paper. We use the Chow-Liu-Tree~\cite{chow1968approximating} based method to build the structure of \textsf{BayesCard} and apply the compiled variable elimination algorithm for inference.
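The following sketch shows how such a factorization is used for estimation on a toy BN with a hypothetical structure $A_1 \rightarrow A_2$ and $A_1 \rightarrow A_3$, i.e., $A_2$ and $A_3$ are conditionally independent given $A_1$; the probability of a query region is obtained by summing out the parent, a minimal instance of variable elimination (\textsf{BayesCard}'s structure learning and compiled inference are of course far more involved).
\begin{verbatim}
p_a1 = {0: 0.5, 1: 0.5}                                      # P(A1)
p_a2_given_a1 = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}   # P(A2 | A1)
p_a3_given_a1 = {0: {0: 0.3, 1: 0.7}, 1: {0: 0.6, 1: 0.4}}   # P(A3 | A1)

def prob(R2, R3):
    """P(A2 in R2, A3 in R3) = sum_a1 P(a1) * P(R2|a1) * P(R3|a1)."""
    total = 0.0
    for a1, pa1 in p_a1.items():
        pr2 = sum(p_a2_given_a1[a1][v] for v in R2)
        pr3 = sum(p_a3_given_a1[a1][v] for v in R3)
        total += pa1 * pr2 * pr3
    return total

num_rows = 1000000
print(prob({1}, {0, 1}) * num_rows)       # 450000.0 estimated matching rows
\end{verbatim}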
\indent 12) \textsf{\underline{DeepDB}}~\cite{hilprecht2019deepdb}, based on sum-product networks (SPN)~\cite{poon2011sum,desana2020sum}, approximates $P_T(A)$ by recursively decomposing it into local and simpler PDFs. Specifically, the tree-structured SPN contains sum nodes that split $P_T(A)$ into multiple $P_{T'}(A)$ on tuple subsets $T' \subseteq T$, product nodes that decompose $P_{T}(A)$ into $\prod_{S}P_{T}(S)$ for independent sets of attributes $S$, and leaf nodes when $P_{T}(A)$ is a univariate PDF. The SPN structure can be learned by splitting table $T$ in a top-down manner. Meanwhile, the probability $P_{T}(Q)$ can be obtained in a bottom-up manner with time cost linear in the SPN's node size.
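A minimal sketch of bottom-up evaluation on a toy SPN is given below (the structure and numbers are hypothetical, not a learned \textsf{DeepDB} model): leaf nodes return univariate probabilities, product nodes multiply independent attribute groups, and sum nodes take a weighted mixture over row clusters.
\begin{verbatim}
class Leaf:                      # univariate distribution of one attribute
    def __init__(self, attr, dist):
        self.attr, self.dist = attr, dist
    def prob(self, query):       # P(A_i in R_i); unconstrained -> 1.0
        region = query.get(self.attr)
        if region is None:
            return 1.0
        return sum(self.dist.get(v, 0.0) for v in region)

class Product:
    def __init__(self, children):
        self.children = children
    def prob(self, query):
        p = 1.0
        for c in self.children:
            p *= c.prob(query)
        return p

class Sum:
    def __init__(self, weighted_children):
        self.wc = weighted_children
    def prob(self, query):
        return sum(w * c.prob(query) for w, c in self.wc)

spn = Sum([   # two row clusters holding 60% / 40% of the tuples
    (0.6, Product([Leaf('A1', {1: 0.7, 2: 0.3}), Leaf('A2', {10: 1.0})])),
    (0.4, Product([Leaf('A1', {3: 1.0}), Leaf('A2', {10: 0.5, 30: 0.5})])),
])
print(spn.prob({'A1': {1, 3}, 'A2': {10}}) * 100000)    # 62000.0
\end{verbatim}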
\indent 13) \textsf{\underline{FLAT}}~\cite{zhu2020flat}, based on factorize-split-sum-product networks (FSPN)~\cite{wu2020fspn}, improves over SPN by
adaptively decomposing $P_T(A)$ according to the attribute dependence level. It adds the factorize node to split $P_T$ as $P_T(W) \cdot P_T(H | W)$ where $H$ and $W$ are highly and weakly correlated attributes in $T$. $P_T(W)$ is modeled in the same way as SPN. $P_T(H | W)$ is decomposed into small PDFs by the split nodes until $H$ is locally independent of $W$. Then, the multi-leaf node is used to model the multivariate PDF $P_T(H)$ directly. Similar to SPN, the FSPN structure and query probability can be recursively obtained in a top-down and bottom-up fashion, respectively.
For both \textsf{DeepDB} and \textsf{FLAT}, we set the RDC thresholds to $0.3$ and $0.7$ for filtering independent and highly correlated attributes, respectively. Meanwhile, we do not split a node when it contains less than $1\%$ of the input data.
\indent
\revise{
14) \textsf{\underline{UAE}}~\cite{wu2021unified} extends the \textsf{UAE-Q} method by unifying both query and data information using the auto-regression model. It is a representative work aiming at closing the gap between data-driven and query-driven \textsf{CardEst}\xspace methods.
}
\vspace{0.5em}
\noindent
\underline{\textbf{Remarks:}}
For each \textsf{CardEst}\xspace algorithm, we adopt the publicly available implementation~\cite{hilp2019deepdb, yang2019naru, yang2020sb} if the authors provide it and otherwise implement it by ourselves. For other hyper-parameters, if they are known to be a trade-off of some metrics, we choose the default values recommended in the original paper. Otherwise, we run a grid search to explore the combination of their values that largely improves the end-to-end performance on a validation set of queries. Notice that each of our evaluated \textsf{CardEst}\xspace algorithms is an independent and universal tool that can be easily integrated into a common DBMS. Some \textsf{CardEst}\xspace modules~\cite{sun2019end, wu2021unified} that are optimized together with other components of a query optimizer in an end-to-end manner have also been proposed. We do not compare with them as they do not fit our evaluation framework.
\subsection{Implementation and System Settings}
\label{sec: plan-sys}
To make our evaluation more realistic and convincing, we integrate each \textsf{CardEst}\xspace algorithm into the query optimizer of PostgreSQL~\cite{psql2020}, a well-recognized open-source DBMS. Then, the quality of each \textsf{CardEst}\xspace method can be directly reflected by the end-to-end query runtime with their injected cardinality estimation.
Before introducing the details of our integration strategy, we introduce an important concept called \textit{sub-plan query}. For each SQL query $Q$, each \textit{sub-plan query} is a query touching only a subset of tables in $Q$.
The set of all these queries is called the \textit{sub-plan query space}. For the example query $A \bowtie B \bowtie C$, its sub-plan query space contains queries on $A \bowtie B$, $A \bowtie C$, $B \bowtie C$, $A$, $B$, and $C$ with the corresponding filtering predicates. The built-in planner in a DBMS will generate the sub-plan query space, estimate the cardinalities of its queries, and determine the optimal execution plan. For example, the sub-plan queries $A$, $B$, and $C$ touch only a single table, so their \textsf{CardEst}\xspace results may affect the selection of table-scan methods, i.e., index-scan or seq-scan. The sub-plan queries $A \bowtie B$, $A \bowtie C$, and $B \bowtie C$ touch two tables. Their cardinalities may affect the join order, i.e., joining $A \bowtie B$ with $C$ or $A \bowtie C$ with $B$, and the join method, i.e., nested-loop-join, merge-join, or hash-join. Therefore, the effects of a \textsf{CardEst}\xspace method on the final query execution plan are entirely decided by its estimation results over the sub-plan query space.
To this end, in our implementation,
we overwrite the function ``\textsf{calc\_joinrel\_size\_estimate}'' in the planner of PostgreSQL to derive the sub-plan query space for each query in the workload.
Specifically, every time the planner needs a cardinality estimation of a sub-plan query, the modified function ``\textsf{calc\_joinrel\_size\_estimate}'' will immediately capture it.
Then, we call each \textsf{CardEst}\xspace method to estimate the cardinalities of the sub-plan queries and inject the estimations back into PostgreSQL. Afterward, we run the compiler of PostgreSQL on $Q$ to generate the plan. It will directly read the injected cardinalities produced by each method. Finally, we execute the query with the generated plan. In this way, we can support any \textsf{CardEst}\xspace method without a large modification on the source code of PostgreSQL. We can report the total time (except the sub-plan space generation time) as the end-to-end time cost of running a SQL query using any \textsf{CardEst}\xspace method.
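The sketch below summarizes the resulting workflow (the table names are hypothetical, and \texttt{card\_est} stands for whichever \textsf{CardEst}\xspace method is plugged in): enumerate the sub-plan query space of a query joining tables $A$, $B$, and $C$, obtain one estimate per sub-plan query, and inject these numbers so that the PostgreSQL planner reads them instead of its own estimates.
\begin{verbatim}
from itertools import combinations

def sub_plan_space(tables):
    """All non-empty proper subsets of the joined tables, as in the
    A JOIN B JOIN C example above."""
    tables = sorted(tables)
    for k in range(1, len(tables)):
        for subset in combinations(tables, k):
            yield subset

injected = {}
for subset in sub_plan_space({'A', 'B', 'C'}):
    # card_est(...) is a placeholder for the plugged-in CardEst method,
    # called with the original query's predicates restricted to `subset`.
    injected[subset] = None
print(list(injected))
# [('A',), ('B',), ('C',), ('A', 'B'), ('A', 'C'), ('B', 'C')]
\end{verbatim}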
For the environment, we run all of our experiments in two different Linux Servers. The first one
with 32 Intel(R) Xeon(R) Platinum 8163 CPUs @ 2.50GHz, one Tesla V100 SXM2 GPU and 64 GB available memory is used for model training. The other one with 64 Intel(R) Xeon(R) E5-2682 CPUs @ 2.50GHz is used for the end-to-end evaluation on PostgreSQL.
\subsection{Evaluation Metrics}
\label{sec: plan-met}
Our evaluation mainly focuses on \emph{quantitative} metrics that directly reflect the performance of \textsf{CardEst}\xspace algorithms from different aspects. We list them as follows:
\indent 1) \text{\underline{End-to-end time}} of the query workload, including both the query plan generation time and physical plan execution time. It serves as the ``gold-standard'' for \textsf{CardEst}\xspace algorithm, since improving the end-to-end time is the ultimate goal for optimizing \textsf{CardEst}\xspace in query optimizers.
\revise{
We report the end-to-end time of \textsf{TrueCard}, which injects the true cardinalities of all sub-plan queries into PostgreSQL. Ideally, if the cost model is very accurate, \textsf{TrueCard} obtains the optimal plan with the shortest time. For the fixed PostgreSQL cost model, we find that \textsf{TrueCard} obtains the optimal query plan most of the time. Thus, it serves as a good baseline.
}
\indent 2) \text{\underline{Inference latency}} reflects the time cost for \textsf{CardEst}\xspace, which directly relates to the query plan generation time. It is crucial as \textsf{CardEst}\xspace needs to be done numerous times in optimizing the plan of each query. Specifically, an accurate \textsf{CardEst}\xspace method may be very time-costly in inference. Despite the fast execution time of the plans generated by this method, the end-to-end query performance can be poor because of its long plan generation time.
\indent 3) \text{\underline{Space cost}} refers to the \textsf{CardEst}\xspace model size. A lightweight model is also desired as it is convenient to transfer and deploy.
\indent 4) \text{\underline{Training cost}} refers to the models' offline training time.
\indent 5) \text{\underline{Updating speed}} reflects the time cost for \textsf{CardEst}\xspace models to update to fit data changes. For real-world settings, this metric plays an important role as the underlying data constantly changes with tuple insertions and deletions.
Besides these metrics, \cite{wang2020ready} proposed some \emph{qualitative metrics} related to the stability, usage, and deployment of \textsf{CardEst}\xspace algorithms and made a comprehensive analysis. Thus, we do not consider them in this paper.
In the following, we first evaluate the overall end-to-end performance of all methods in Section~\ref{sec: static}. Then, we analyze the other practicality aspects in Section~\ref{sec: dynamic}. At last, we point out the drawbacks of existing evaluation metric and propose a new metric as its potential substitution in Section~\ref{sec:analysis}.
\section{Introduction}
\label{sec:intro}
Twitter, Instagram, forums, blogs, on-line debates, and many other
forms of social media have become the outlets for people to freely and
frequently express ideas. Indeed, many research papers
have explored
social media usage in many application
areas.
Research has ranged from using social science techniques to
find indicators of phenomena such as increased health risks,
to studies on optimization of hugely scaled analytics computations,
to usability of analytics visualizations.
However, there has been little work on how to
bring together the myriad of analytics capabilities to
support knowledgeable business analysts
in
rapid, collaborative, and iterative exploration and analysis
of large data sets.
This requires a combination of several aspects,
including integration of numerous analytics tools,
efficient and scalable data and processing management,
a unified approach for data and results visualization,
and strong support for on-going knowledge-worker driven activity
to uncover and focus in on particular areas of interest.
The paper describes the Alexandria system, currently
under development at IBM Research, which supports these
several aspects.
The system is currently focused on the early stages of
the overall analytics lifecycle, namely, on enabling
rapid, iterative exploration and visualization of social media data in connection with
a given domain (e.g., consumption habits for beverages,
the growth of the market for vegan foods, or political opinions
about an upcoming election).
The system has been designed to support rich extensibility,
and has already been integrated with a complementary system
at IBM.
Figure \ref{fig:Alex-iterative-usage} shows the two main parts of
current Alexandria
processing, namely Background Processing and Iterative Exploration.
The Background Processing includes primarily (a) various analytics on
background text corpora that support several functionalities,
including similar term generation, parts-of-speech and collocation
analytics, and term-frequency-inverse-document-frequency (TF-IDF)
analytics; and (b) ingestion and indexing of social media data
(currently from Twitter) to enable main-memory access speeds
against both text and structured document attributes.
(Although not shown in the figure, there is also background
analytics to compute selected author profile attributes, e.g.,
geographic location, family aspects, interests).
Iterative Exploration enables users to build a number
of related {\em Projects} as part of an investigation of some
domain of interest.
Each Project includes
(i) the creation of a targeted {\em domain model}
used to focus on families of tweets and authors relevant to the investigation,
(ii) application of a variety of analytics against the
selected tweets and their authors,
and
(iii) several interactive visualizations of the resulting analytics.
At the beginning of an investigation there are
typically several {\em experimental} Projects, used by
individuals or small collaborating groups.
Over time some Projects
may be {\em published} with more stability for broader usage.
Alexandria advances the state of the art of
social media analytics in two fundamental ways
(see also Section \ref{sec:related}).
First, the system brings together several text analytics tools
to provide a broad-based environment to rapidly
create domain models.
This contrasts with research that has focused on perfecting
such tools in isolation.
Second, Alexandria applies data-centric and other design principles
to provide a working platform that supports
ad hoc, iterative, and collaborative exploration of social media data.
As such, the system extends upon themes presented
in \cite{rajaraman2011mining,analytics-process-mgmt:DAB-2014},
and develops them in
the context of social media applications.
Section \ref{sec:goals} highlights the key goals for Alexandria, including
both longer- and shorter-term ones.
Section \ref{sec:overview} describes a prototypical usage scenario for
the system, and illustrates its key functionalities.
Section \ref{sec:arch} highlights key aspects of the system architecture,
and describes how the design choices support the key goals.
Section \ref{sec:scoping} describes key technology underpinnings
for the domain scoping capability, and
Section \ref{sec:analytics} does the same for the currently supported analytics.
Section \ref{sec:meta-data} describes the data-centric approach taken
for managing exploratory Projects to enable rich flexibility.
Section \ref{sec:related} describes related work,
and Section \ref{sec:conc} discusses future directions.
\hide{
These studies aimed to understand how and what type of information was
used in certain domains. Authors of
\cite{health-stats-via-twitter-CHI-2014,curation-through-use-CHI-2014}
reported how social media
information can be built up overtime to form images of people or
events. These papers are just a few among many others in this area.
Through appropriate and thorough filtering,social mediacontains
in-formation usefulto spot trends, information movements, or compiled
into portfolios of people and stories. Product and service companies
are eager to use social information to their benefits but at the same
time concerned about how to manage the large amount of data and how to
interpretit. In the past few years, many emerging companies have
established their business based on helping clients understand social
trends and sentiments. Our company, IBM, too is competing in this
arena bringing analytics, cloud, and multitude of other expertise into
a more end-to-end solution.
Naturally, companies want to discover problems related to their
products or services, increase awareness of the competition, spot
rele-vant market trends or events. The knowledge when accurate leads
to actionable insights to help manage impact on their businesses.
Compil-ing a comprehensive picture of what is happening in social
media that affects our clients is not a well-defined task. It is
rather open-ended; the quality of the findings depends heavily on the
cleverness of the teamthat conducts the task. For the past couple of
years, research teams at the IBM Customer Experience Lab have
conducted this type of task to showcase research analytics
capabilitieson social media to our clients. Through an ad-hoc, slow
process, tremendous manual labor, many discussions with subject matter
experts, and diligence of re-searchers/data analysts to shape the
search and analytics, we created meaningful results. The process is
somewhat repeatable through expe-rience but not tool wise. Iteration
rarely happens. When it does,the off-the-cuff process makes it hard
to pinpoint the required changes and how the changes may
systematically improve the results.
In this paper, we discuss a research tool, Alexandria,developed
toad-dress some of the problems in this space. First we address the
speed of exploration problem by providing a tool that assistsusers to
easilybuild up compoundconcepts for media exploration. We attempt
tosupports speedy explorations without sacrificing quality. We also
assumeour usersmay not be well verse in the subjectsthey are
exploring; hence analytics for enriching vocabulary for extraction is
used. Another prob-lem we tackle is basedon our client engagement
experience, in which multiple segmentation foci were created. Tweets
and authors were ag-gregated along these foci for comparison and
crossover analyses. Alexandria provides the users with the ability to
craft and store multiple foci for short and long term social media
exploration.
Ultimately, our clients want to understand their customers, what their
characteristics are,and what kind of opinions they express, for
market-ing marketing purposes. Alexandria provides users with
analytics and visualization to explore profile segmentations along
with demographic information and their share of voices. Currently the
analytics andvisualizationis limited but the framework would allow us
to easily add more in the future.
}
\begin{figure}
\vspace*{-2mm}
\centerline{\includegraphics[width=3.5in]{Alex-iterative-scop-anal-viz--b.eps}}
\vspace*{-3mm}
\caption{Alexandria supports iterative exploration of social media,
and includes background text analytics processing.
(See also Figure \ref{fig:arch-flow}.)}
\label{fig:Alex-iterative-usage}
\vspace*{-2mm}
\end{figure}
\section{System Goals}
\label{sec:goals}
This section outlines the primary long- and shorter-term goals that have
motivated the design of the Alexandria framework and system.
\hide{
The longer-term goal is to provide an extensible approach
to support a broad variety of analytics, with
an eye towards supporting the overall analytics
lifecycle -- from exploration to callibration to production usage.
A shorter-term goal is to provide a flexible, easy to use, extensible
system for rapid exploration of social media.
}
The longer-term goals are as follows:
\smallskip
\noindent
{\bf LG1: Extensible platform to support business users with
numerous styles of analytics.}
This contrasts significantly with most previous works,
that are focused primarily on scalable performance,
support for targeted application areas,
or support primarily for data scientists.
\hide{
While there have been many advances in
recent years in tools for so-called Big Data Analytics,
most of the work has focused on
the development of new analytics algorithms,
improving the speed of widely used algorithms,
the development of highly scalable general-purpose
platforms (e.g., SPARK),
or the development of platforms for selected
kinds of data (e.g., TITAN for graph data).
There are also tools to support the data mining
process itself (e.g., IBM's SPSS, or the opensource RapidMiner).
}
Alexandria is focused on providing a layer above all of these,
to enable business users to more effectively use analytics, both
to find actionable insights,
and also to incorporate them
into on-going business processes.
\hide{
As such, Alexandria strives to expose different analytics
capabilities, and give users rapid access to them.
Implementation and optimization details are largely
``under the hood''.
Configuration of parameters should
be largely defaulted, and optionally adjustable by
more sophisticated users.
}
\hide{
The system should provide extensibility in two main
directions.
First, it is important to support new kinds of data and
new kinds of analytics as they become available or relevant.
Second,
because the technology for analytics processing
and visualization continues to advance, and because
different systems are more appropriate for different kinds of
data,
it is important
to provide an abstraction layer above the particular implementation
approaches.
}
\hide {
While REST services APIs are a natural choice to
enable a uniform approach for enabling the integration of
numerous data sources, analytics, and visualizations,
mechanisms must also be provided to pass data ``by reference'',
and indeed, to enable data to remain ``in place'' as much as
possible through multiple analytics steps.
}
\smallskip
\noindent
{\bf LG2: Support analytics process lifecycle, from exploration to prescription.}
As discussed in
\cite{analytics-process-mgmt:DAB-2014},
there are several stages in the lifecycle of analytics usage,
ranging from initial exploration, to refinement and hardening,
to incorporation into already existing business processes for
continuing value add,
to expanding the application to additional aspects of a business.
While the CRISP-DM method \cite{CRISP-DM}
addresses several elements of the lifecycle,
the method and associated tools are
geared towards data scientists rather than business users.
In contrast, a goal of Alexandria is to provide
business users with substantial exploration capabilities,
and also support the evolution of analytics approaches from
exploration to production usage.
Of course data scientists will still play a very key role,
and the Alexandria platform should
enable graceful incorporation of new algorithms as they
become available from the data scientists.
\hide{
On a more concrete level,
Alexandria is intended to support the following
key analytics lifecycle capabilities.
\begin{enumerate}
\item Make it easy to explore and
understand large volumes of unstructured data
\item Enable non-subject matter experts to explore a
domain of interest, find relevant social media, and discover insights
\item Support an interactive experience, including in particular
fast response times wherever possible
\item Enable flexible, composable intuitive analytic components
\item Help users find trends and emergent phenomena
\end{enumerate}
}
\noindent
{\bf LG3: Support for a collaborative production environment.}
Analytics is no longer the realm of a small team of data scientists
working largely in isolation.
Rather, it is increasingly performed by a multi-disciplinary
team that is in parallel digging more deeply into the data,
finding ways to add business value by
integrating analytics insights into
existing business processes,
and finding ways to make the usage of the insights
production grade.
\medskip
\hide{
As such, the Alexandria platform should support
several non-functional capabilities, including the following.
\begin{enumerate}
\item Enable sharable results, including catalogs of both building-block
and self-contained analytics
\item Enable repeatability and cloning of analytics flows
and Projects, as enabled through rich mechanisms for
recording and accessing provenance information
\item Support multiple levels of multi-tenancy, as well
as easy configurability of existing analytics to
block access to components that given users groups
may not have licenses or authorization to use
\end{enumerate}
}
\noindent
{\bf LG4: Scalable, e.g., work with billions of tweets and forum comments.}
The Alexandria system should be able to work with
state-of-the-art systems such as SPARK and TITAN, and
more generally with Hadoop-based and other distributed
data processing systems, to enable rapid turn-around
on large analytics processes.
Similarly, the system should
support main-memory indexing systems such as
Elastic Search or LUCENE/SOLR to enable split-second
access from very large data sets, including text-based searches.
\medskip
As a way to get started with the longer-term goals,
the initial version of Alexandria has focused more
narrowly on (a) Social Media analytics, and on (b) the
exploration and initial visualization phases of the
overall analytics process.
The key shorter-term goals include the following:
\def{\bf LG1}{{\bf LG1}}
\def{\bf LG2}{{\bf LG2}}
\def{\bf LG3}{{\bf LG3}}
\def{\bf LG4}{{\bf LG4}}
\def{\bf SG1}{{\bf SG1}}
\def{\bf SG2}{{\bf SG2}}
\def{\bf SG3}{{\bf SG3}}
\def{\bf SG4}{{\bf SG4}}
\def{\bf SG5}{{\bf SG5}}
\def{\bf SG6}{{\bf SG6}}
\noindent
{\bf SG1}:
Enable users to begin their exploration of a new topic
domain within a matter of hours.
\smallskip
\noindent
{\bf SG2}:
In particular, enable non-experts to quickly create a
domain model (i.e., keywords and extractors) that enables
a focus on Tweets and other social media that
are relevant to a given topic.
\smallskip
\noindent
{\bf SG3}:
Provide a variety of different analytics-produced views of the data,
to permit different styles of data and results examination
\smallskip
\noindent
{\bf SG4}:
Support iterative exploration based on info learned so far,
including management of meta-data about raw and derived data sets
\smallskip
\noindent
{\bf SG5}:
Minimize processing time through
to enable as much interactivity as
possible, by using main-memory indexes,
parallel processing, avoiding data transfers, etc.
\smallskip
\noindent
{\bf SG6}:
Enable easy and fast orchestration of capabilities, including
rapid creation of variations on the domain model and the
analytics processing.
This includes the automation of processing steps and
the defaulting of configuration parameters
wherever possible.
\section{Using the System}
\label{sec:overview}
\hide{
\begin{figure}
\vspace*{-2mm}
\centerline{\includegraphics[width=3.5in]{Alex-overall-flow.eps}
\vspace*{-3mm}
\caption{Key functional components of current Alexandria system. (Blue
indicates background processing and green indicates
user-requested processing}
\label{fig:Alex-overall-flow}
\end{figure}
}
This section illustrates the main capabilities currently supported in
Alexandria through an extended example.
\hide{
Figure \ref{fig:Alex-overall-flow} indicates the major processing steps of the
system; the user is directly involved with the five stages of activity shown in green,
and also the orchestration of those stages.
These are described in the following.
}
To extract "relevant" documents from social media, one needs to gather documents that mentioned terms, expressions, or opinions pertaining to the area one wants to explore.
Alexandria provides tools that support both laymen and experts in
finding terms that cover the space of interest, and also terms that can
drill more deeply into that space.
\hide{
Challenges are (1) we may not know what "good" relevant terms or topics are, especially in the absence of knowledge in the area. (2) There are many ways to express one same thing hence it's hard to know how to catch them all. This is often true with or without the knowledge of the area of exploration.
Alexandria is designed and implemented to alleviate the burden of these two challenges. This is evident in the first initial two steps when using the system. In Alexandria, once an area of exploration is identified, the discovery process is an iterative cycle between scoping the area – expanding the breadth by finding more topics related to the area – and focusing – ensuring the depth by accounting for variations of the chosen topics. This section will demonstrate how Alexandria supports this process.
}
We will explore a subject around vaccination as the running example for this paper.
Suppose that the government would like to encourage people to get vaccinated, but wonders what people's opinions around vaccination may be. The exploration starts with creating a Project with a few seed terms,
namely `vaccination', `flu' and `measles'. Based on these terms, we asked Alexandria to generate a family
of relevant collocated
terms in an effort to bound the scope.
These terms may be manually edited, to reach the terms listed in Figure \ref{fig:termGen-with-edits}.
Here, the black terms were generated automatically,
the red terms were added by hand, and the gray terms with strike-out were
auto-generated but deleted by hand.
\hide{
The user might be someone who is knowledgeable about the area being explored, or might be interested but lacks the vocabulary to provide breath to the exploration. Alexandria can be useful for both types of users. Those with familiarity with the subject may use Alexandria as a quick bootstrap where they can provide additional terms to explore after some terms are generated. Those without much of the subject knowledge will use Alexandria to gain breadth that oth-erwise would not be easily obtained.
}
In some cases the auto-generated terms will help the user learn more about the domain of
interest.
In this example, the term ``Dr. Anne'' arises, and a Google search reveals that Dr. Anne Schuchat is
a senior official of the U.S. Centers for Disease Control \cite{anne-schuchat-web-site},
so her name was left in the list.
Similarly, Dr. Gil remains because he is mentioned in a news article \cite{measles-in-disneyland}
concerning a measles outbreak at Disneyland.
\hide{
Please note in Figure \ref{fig:termGen-with-edits} that Alexandria's generated terms may not be all terms the users need. On one hand, the user can prune these terms as they see fit for various reasons. Those that are not relevant or consi-dered out of scope can be removed. Some of the terms are removed mainly because they do not make sense in the context of social media search. On the other hand, new terms can be added and existing terms modified to shape the scope that fits their needs. Some of the terms require some research such as Dr. Anne as it may lead to something interesting. We used Google with the key words Dr. Anne and vaccination to find out Dr.AnneSchuchat is the Director of Central of Disease Control, hence we left her name in the list. Her last name was included later in the list. We also left Dr. Gil in the list as he was mentioned in a news article concerning a measles outbreak at Disneryland. We didn't add any terms in this example. Table 2 shows the list from Table 1 pruned and enhanced. Terms in color orange are terms modified by us
}
\hide{
\begin{figure}
\begin{center}
\vspace*{-2mm}
\centerline{\includegraphics[width=3.5in]{termGen-no-edits.eps}}
\vspace*{-5mm}
\end{center}
\label{fig:termGen-no-edits}
\caption{List of relevant terms collocated with the seed terms used to launch Project}
\end{figure}
}
While the scoping step is meant to extend our vocabulary to cover various aspects of the subject, some terms appear to be rather similar. For example, many variations of vaccination are included in the list. We know that if a tweet mentions one of these terms, it is likely to have something to do with vaccination. Alexandria supports automatically clustering similar terms into groups called ``topics.'' Each topic provides a list such that, if a tweet mentions one or more of the terms in the topic, we can conclude that the tweet mentions this topic.
\begin{figure}
\begin{center}
\vspace*{-2mm}
\centerline{\includegraphics[width=3.5in]{termGen-with-edits.eps}}
\vspace*{-3mm}
\end{center}
\caption{List of relevant terms collocated with the seed terms, after manual edits}
\label{fig:termGen-with-edits}
\end{figure}
\begin{figure}
\vspace*{-2mm}
\centerline{\includegraphics[width=3.5in]{scoping-01a-terms-open-concepts.eps}}
\vspace*{-3mm}
\caption{Alexandria interface for domain scoping: After automatic term clustering}
\label{fig:scoping-01-terms-open-concepts}
\vspace*{-2mm}
\end{figure}
Figure \ref{fig:scoping-01-terms-open-concepts} above shows a snippet from the actual Alexandria page, where the terms are listed vertically in the first column and the second column shows the clusters suggested by Alexandria. Note that these clusters are generically named ``Cluster 1,'' ``Cluster 2,'' and so forth. In the figure
Cluster 2 is ``open'', to show the terms Alexandria placed in it. It appears that many vaccination-related terms and diseases preventable by vaccination are included. In Section \ref{sec:scoping}, we detail the analytics performed behind the scenes for topic clustering.
\hide{
\begin{figure}
\vspace*{-2mm}
\centerline{\includegraphics[width=3.5in]{scoping-02-concepts-after-edits.eps}
\vspace*{-3mm}
\caption{Topics after organizing around opinions about vaccinations}
\label{fig:scoping-02-concepts-after-edits}
\end{figure}
}
\begin{figure}
\vspace*{-2mm}
\centerline{\includegraphics[width=3.5in]{scoping-03a-adding-disbelieved.eps}}
\vspace*{-3mm}
\caption{Topics after adding terms in the ``Disbelieved'' topic}
\label{fig:scoping-03-adding-disbelieved}
\vspace*{-2mm}
\end{figure}
The numbers to the right of each topic indicate the number of tweets in which at least one term from the topic is found. One can use this number to gauge how widespread the topic is. Bear in mind that a general topic such as ``Disbelieved'' can be about any subject, not necessarily vaccination, hence the large count of over a million tweets.
These numbers are obtained within seconds through accesses to a SOLR index holding all of the tweet information.
Figure \ref{fig:scoping-03-adding-disbelieved} illustrates the state of the system after a few more steps.
First, Alexandria supports renaming the clusters and moving them around in the middle column.
Second, there is an automated ``similar term generation'' service for adding depth to an existing topic.
In the figure, the red terms in ``Disbelieved'' were inserted by hand, and the
terms in black below them were generated automatically to add depth.
\hide{
Once the topics are settled as shown in Figure 2, we requested Alexandria to do exactly that. Figure 3 shows terms in the group "Disbelieved," which now has many more added to the topic to the original 4 we created. Notice also the number of tweets for "Disbelieved" has increased to over 1.6 million. As a matter of facts, the numbers of tweets for all topics have increased after depth of topics is added, showing that by increasing the depth of each topic, we have a better reach in social media exploration. Because of space limitation for this paper, we will not show additional terms in other topics.
}
The third phase of scoping is the
building of the actual extractors (or queries) for selecting tweets of interest.
This is accomplished by creating
"composite topics," which are based on Boolean combinations of the topics.
Figure \ref{fig:scoping-04-composite-topics} shows several composite topics,
some of which are ``open'' to expose the topics that are used to form them.
(At present the UI supports only conjunctions of topics, but the underlying engine
supports arbitrary combinations.)
For example, for the composite topic ``Support Flu Vaccination'' we combine the ``Flu,'' ``Vaccination,'' and ``Encouraged'' topics to form the search statement: find any tweets that mention at least one (or more) of the terms in ``Flu,'' at least one (or more) of the terms in ``Vaccination,'' and at least one (or more) of the terms in ``Encouraged.''
(A further refinement would be to exclude tweets that include a negating term such as ``not''.)
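To make the mapping from composite topics to extractors concrete, the following sketch shows how a conjunction of topics could be turned into a Lucene/SOLR-style Boolean query over a tweet text field. This is only an illustration; the helper names and the field name \texttt{body} are assumptions, not the actual Alexandria implementation.
\begin{small}
\begin{verbatim}
# Illustrative sketch (not the Alexandria code): build a Lucene/SOLR-style
# query for a composite topic defined as a conjunction of topics, where
# matching any one term of a topic satisfies that topic.
def topic_clause(terms, field="body"):
    quoted = ['%s:"%s"' % (field, t) for t in terms]
    return "(" + " OR ".join(quoted) + ")"       # OR within a topic

def composite_topic_query(topics, field="body"):
    return " AND ".join(topic_clause(t, field)   # AND across topics
                        for t in topics)

# "Support Flu Vaccination" = Flu AND Vaccination AND Encouraged
flu         = ["flu", "influenza"]
vaccination = ["vaccine", "vaccination", "immunization"]
encouraged  = ["get vaccinated", "protect your family"]
print(composite_topic_query([flu, vaccination, encouraged]))
\end{verbatim}
\end{small}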
\begin{figure}
\vspace*{-2mm}
\centerline{\includegraphics[width=3.5in]{scoping-04-composite-topics.eps}}
\vspace*{-3mm}
\caption{Composite topics formed as Boolean combinations of topics}
\label{fig:scoping-04-composite-topics}
\vspace*{-2mm}
\end{figure}
Once the set of composite topics has been specified, it
is time to perform some data extractions, re-structurings, and indexing to
support various analytics.
Upon request, Alexandria extracts the tweets matching the composite topic combinations, annotates each tweet accordingly, and then launches multiple analytics activities on these tweets. One of these activities is extracting the author profiles of these tweets and aggregating attributes across these profiles. We detail this work in Section \ref{sec:analytics} on Analytics Views.
\hide{
\begin{figure*}[t]
\vspace*{-2mm}
\centerline{\includegraphics[width=6.5in]{FIG/scoping-05-terms-concepts-indicators}}
\vspace*{-3mm}
\caption{Alexandria interface for domain scoping: After creating composite topics}
\label{fig:scoping-05-terms-concepts-indicators}
\end{figure*}
}
We now describe some of the visualizations used to show the analytics
results associated with a Project.
In one direction, Alexandria infers profile attributes of Twitter
authors through background analysis of hundreds of tweets per author.
Information such
as education, gender, ethnicity, and location of residence is
inferred based on evidence of words found in tweets. Figure
\ref{fig:views-map} shows the demographic distribution of the tweet
authors of composite topics in the U.S. On the left, it shows the
numbers of authors for the various composite topics. On the map, darker
colors indicate states where higher numbers of authors reside.
Mousing over a state (not shown) gives more details about
these authors. The colored donuts below the map show the percentages of
various characteristics of the authors located in the U.S., for example male,
female, or unknown for gender. Mousing over a portion of a donut shows
the value of the characteristic and the number of profiles. For
example, the figure shows that 5898 tweet authors of all topics
combined are students.
\begin{figure}
\vspace*{-2mm}
\centerline{\includegraphics[width=3.5in]{views-map.eps}}
\vspace*{-3mm}
\caption{Interactive view for exploring the demographic distribution of tweet authors
who are negative about flu vaccination}
\label{fig:views-map}
\vspace*{-2mm}
\end{figure}
Figure \ref{fig:views-topic-anomaly}
illustrates another analytic view in which Alexandria shows ``share of voices'',
i.e., a comparison of the tweet volumes of the composite topics over time. In this paper we are working with tweets from January to June of 2014.
Notice the higher volumes of the topics ``Flu Vaccination'' and ``Other Vaccination'' in
Figure \ref{fig:views-topic-anomaly}, with a peak around mid-May for the ``Other Vaccination'' topic. One may wonder what happened during that week. In this view we can click on the graph to explore the frequently mentioned terms or anomalous terms mentioned in that week. Figure \ref{fig:views-black-boxes} shows
snippets of two images captured to highlight the two types of terms.
\begin{figure}
\vspace*{-2mm}
\centerline{\includegraphics[width=3.5in]{views-topic-anomaly.eps}}
\vspace*{-3mm}
\caption{Share of voices of tweets from different composite topics}
\label{fig:views-topic-anomaly}
\vspace*{-1mm}
\end{figure}
Specifically for Figure \ref{fig:views-black-boxes}, we selected the
``Flu Vaccination'' topic on the left to narrow the visualization down
to just this topic, hence the presence of only one line graph in the two
snippets. This line represents the volume over time of tweets
that match the ``Flu Vaccination'' extractor. For this topic, there
seems to be a peak around the second week in January. The snippet on
the left of the figure shows frequently mentioned terms in the week
while the snippet on the right shows terms that are considered
anomalous in that week. We moused over the term ``swine flu outbreak''
which was mentioned 19 times, hence showing up high in the word list.
However, this term is not considered anomalous, indicating that
it also shows up fairly often in other weeks.
In contrast, the term ``miscarriage'' is anomalous. Mousing over it reveals
some evidence of the news about a nurse refusing to get vaccinated and
subsequently being fired from a hospital.
\begin{figure}
\vspace*{-2mm}
\centerline{\includegraphics[width=3.5in]{views-black-boxes.eps}}
\vspace*{-3mm}
\caption{Exploring frequently mentioned and anomalous terms for the composite topic ``Flu Vaccination'' in the week of Jan 4 to Jan 11. The text boxes show contextual data for the term they point to. The box on the right also shows partial context of the tweets from which the term was extracted.}
\label{fig:views-black-boxes}
\vspace*{-2mm}
\end{figure}
There are other views that one can use to explore analytics results of social media insights,
including some that leverage the configurable Banana visualization tool \cite{banana}.
\hide{
\begin{figure}[t]
\center{\includegraphics[width=3.5in]
{FIG/lifecycle.png}}
\caption{The lifecycle model of {\tt CustomerOrder}}
\label{fig:lifecycle}
\end{figure}
}
\hide{
The high-level functional components of the current
Alexandria system are shown in Figure \ref{fig:Alex-overall-flow}.
The domain scoping interface is illustrated in
Figures \ref{fig:scoping-02-terms-and-multiple-open-concepts}
and
\ref{fig:scoping-05-terms-concepts-indicators}.
}
\section{System Architecture}
\label{sec:arch}
The Alexandria architecture will be described
from three perspectives:
(a) the overall processing flow
(see Figure \ref{fig:arch-flow}),
(b) the families of REST APIs supported (Figure \ref{fig:arch-REST}),
and
(c) the key systems components.
These descriptions will include discussion of how
the architectural choices support the long-term and short-term
goals.
\begin{figure}
\vspace*{-2mm}
\centerline{\includegraphics[width=3.5in]{arch-02-flow-ss.eps}}
\vspace*{-3mm}
\caption{Alexandria social media exploration process:
Ingestion and initial analytics in the background;
Domain Modeling using text analytics;
a broad variety of Social Media Analytics;
and identification of actionable insights through
visualizations. Insights can lead to iterative modifications
of the Domain Model and application of further analytics.}
\label{fig:arch-flow}
\vspace*{-2mm}
\end{figure}
Overall, the current Alexandria architecture flow
shown in Figure \ref{fig:arch-flow} expands on
Figure \ref{fig:Alex-iterative-usage},
and is focused
on supporting rapid exploration, analytics processing, and
visualization of Twitter data in a collaborative
environment, that is, on parts of goals {\bf LG2}, {\bf LG3}\ and {\bf LG4},
and all of goals {\bf SG1}\ through {\bf SG6}.
There are two forms of background processing.
One is to ingest and index the Tweets; this also includes
author-by-author processing of tweets to extract demographic
attributes such as gender and geographic location.
(This demographic processing uses the IBM Research SMARC
system \cite{SMARC-in-IEEE-Big-Data-2013}, a precursor to IBM's Social Media Analytics product
\cite{SMA},
but other systems could be used.)
The results are placed into a LUCENE SOLR main-memory index to
enable rapid searching, including against the Tweet text bodies,
a key enabler for goals {\bf LG4}, {\bf SG1}, {\bf SG3}, and {\bf SG5}.
The other background processing is to ingest, process, and index
various background corpora to support text analytics.
As described in further detail in Section \ref{sec:scoping}
below,
this is used to support the interactive domain scoping activity,
relevant to goals {\bf LG2}, {\bf SG1}, {\bf SG2}.
And as described in Section \ref{sec:analytics},
this is also used
to support the anomalous topics analytics and view (goals {\bf LG2}, {\bf SG3}).
Referring again to Figure \ref{fig:arch-flow},
once a Domain Model is established for a Project,
the Social Media analytics processing is performed.
This is described in more detail in section \ref{sec:analytics} below.
\hide{
One stage of this is to extract all of the relevant Tweets
and author profiles from the background, perform
some transformations and annotations on the data and
store the results, all to support a variety of analtyics.
At present the annotated data is stored in CouchDB, but
could also be placed onto, e.g., SPARK.
}
After extraction and annotation,
the desired analytics are invoked through REST APIs by
an orchestration layer and the results are again
placed into CouchDB.
Finally, these can be accessed through several interactive
visualizations.
\begin{figure*}[t]
\vspace*{-2mm}
\centerline{\includegraphics[width=5.5in]{arch-03-REST-ss.eps}}
\vspace*{-3mm}
\caption{Alexandria supports loosely coupled
RESTful services that orchestrate and invoke many
functionalities, all sharing a common data store}
\label{fig:arch-REST}
\vspace*{-3mm}
\end{figure*}
\hide{
\begin{figure}
\vspace*{-2mm}
\centerline{\includegraphics[width=3.5in]{arch-04-system-components.eps}}
\vspace*{-3mm}
\caption{System components that support the Alexandria capabilities.
(Words in gray indicate planned extensions.)}
\vspace*{-2mm}
\label{fig:arch-04-system-components}
\end{figure}
}
As illustrated in Figure \ref{fig:arch-REST},
most capabilities in Alexandria are accessed through REST services,
which is the basic approach to supporting goals {\bf LG1}, {\bf SG3}\ and {\bf SG6}.
\hide{
The current system is somewhat limited
with regards to support for exploration and collaboration
({\bf LG3}, {\bf SG4}) because the orchestration of components is
mostly hard-wired; but we are currently using the REST APIs
to develop a much more flexible orchestration and dashboarding layer.
}
For capabilities involving large data volumes, the
data is passed ``by reference'' for increased performance ({\bf LG4}, {\bf SG5}).
At present the REST services are grouped more-or-less according
to the architectural flow of Figure \ref{fig:arch-flow}.
(It is planned to REST-enable the background processing.)
The REST services rely on a shared logical Data Store, which
is currently comprised of LUCENE and CouchDB.
This can be extended to other storage and access technologies
without impacting the REST interfaces (goals {\bf LG1}, {\bf LG4}, {\bf SG5}).
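As an illustration of the pass-by-reference convention, the sketch below shows how an orchestration layer might invoke an analytics service over REST, handing it only identifiers of the Project and of the extracted tweet set in the shared data store. The endpoint URL, payload fields, and service names are hypothetical.
\begin{small}
\begin{verbatim}
# Hypothetical sketch: invoke an analytics REST service "by reference".
# Only identifiers into the shared data store are shipped, never the
# (potentially large) tweet data itself.
import requests

payload = {
    "projectId": "vaccination-2014",                   # ProjectDoc id
    "tweetSetRef": "solr://tweets/vaccination-2014",   # reference, not data
    "analytics": ["profileAggregation", "temporalAnomaly"],
}
resp = requests.post("http://alexandria.example.com/api/analytics/run",
                     json=payload, timeout=30)
resp.raise_for_status()
print(resp.json())    # e.g., a job id that a status dashboard can poll
\end{verbatim}
\end{small}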
The REST-based architecture has already been applied to
enable a rapid integration of Alexandria capabilities with
IBM Research's SystemG \cite{system-g-home-page},
a graph-based system that also supports social media analytics.
In particular, the Alexandria Domain Models are now
accessible to SystemG services, and the SystemG UI has been
extended to support both Domain Scoping and
Alexandria
analytics views.
Alexandria exists as a software layer that can access
raw repositories and streams of social media (and other) data,
and that resides on top of several application, middleware, and
data storage technologies.
The system currently uses the GNIP Twitter reader
and BoardReader to access social media and web-accessible
data.
The application stack is currently based on LUCENE, CouchDB, and HDFS
for data storage and access,
Hadoop for cluster management, IBM's BigInsights, SPSS, and
Social Media Analytics for analytics, and finally Tomcat and
Node.js to provide application server middleware.
Alexandria lives above these layers, and could be
extended to take advantage of other server capabilities
(goals {\bf LG1}, {\bf LG4}, {\bf SG3}, {\bf SG5}).
\hide{
\begin{figure}
\vspace*{-2mm}
\centerline{\includegraphics[width=3in]{arch-01-components.eps}}
\vspace*{-3mm}
\caption{High-level architecture and key data repositories illustrating the
components and data flows involved when performing a
Social Media Investigation}
\label{fig:arch-components}
\end{figure}
}
\hide{
One of the goals of this architecture is to automate the various steps
in the comprehensive social media investigation process, and to
curate, annotate and analyze end-to-end data as illustrated in Figure
\ref{fig:arch-REST} below. The steps include modeling and
discovery, extraction and annotation, analysis and visualization of
the investigation results, and most importantly the exploration
iterations as illustrated in Figure \ref{fig:arch-flow}.
}
\hide{
The architecture consists of a set of RESTful services. Alexandria
Orchestration Services coordinates all the underlying services to
ensure that dependencies between models, extracted data, and analytics
en-gines are handled appropriately. Figure \ref{fig:arch-REST}
illustrates the services in-volved in the social media investigation
process, and also illustrates that additional analytics can easily be
plugged into the framework.
The extensibility of Alexandria has already been demonstrated
through some integration with System G \cite{system-g-home-page},
a broadly focused platform
for social media analytics that uses rich graph-based analytics.
}
\section{Domain Scoping}
\label{sec:scoping}
Domain Scoping addresses the challenge of constructing Domain Models.
A Domain Model
is typically represented as families of keywords and
composite topics (a.k.a., text extractors),
which get applied to the corpus of text documents to realize the search
or filtering in the corpus. Traditionally, Domain Scoping is performed
by a subject matter expert who understands the domain very well and
can specify precisely what the particular queries and search criteria
should be for a given set of topics of interest. A central goal of Alexandria
is to simplify significantly the task of creation of Domain Models
as well as to lower the required domain expertise of the person
creating Domain Models. To achieve that, we developed several
techniques that leverage text analysis and data mining in order to
assist in the discovery and definition of relevant topics that will
drive the creation of search queries. In particular, we describe our approach
for (1) discovery of relevant collocated terms, for (2) term
clustering, and for (3) similar term generation.
As illustrated in Section \ref{sec:overview}, these three techniques combined together allow very easy,
iterative definition of terms and topics (i.e., sets of collocated terms)
relevant for a particular domain
with minimal input required from the user.
Other scoping tools can be incorporated into Alexandria, e.g.,
a tool based on using an ontology such as DBPedia.
\subsection{Collocated Term Discovery }
Alexandria employs two techniques \textendash{} term frequency\textendash{}inverse
document frequency (TF-IDF) scoring and collocation \textendash{} to
discover significant terms relevant to a specific set of seed terms.
Simply put, Alexandria finds the documents in which the seed terms
appear; these are called the \textquotedblleft{}foreground\textquotedblright{}
documents. It then harvests other terms mentioned in those
documents and computes their significance.
To support this analytic, we acquired sample documents \textendash{}documents
considered general and representative enough of many different topics
and domains \textendash{} as the \textquotedblleft{}background\textquotedblright{}
materials for this operation. For this purpose we collected a complete
week of documents (Sept 1-7 2014) from BoardReader. This extraction
amounts to about 9 million documents. The documents were then indexed
in SOLR \cite{solr}, a fast indexing and querying engine based on
Lucene, for later fast access. Next we queried \textquotedblleft{}NY
Times\textquotedblright{} from this large set of documents, which
resulted in news articles in many different areas including politics,
sports, science and technology, business, etc. This set of documents
is used to build, from a relatively small sample, a dictionary of terms
that is not limited to a specific domain. It is the basis for Alexandria
to calculate term frequencies in general documents.
From the foreground materials, Alexandria computes the significance
of other terms in the documents using TF-IDF scores. TF-IDF score
is a numerical statistic widely used in information retrieval and
text mining to indicate the importance of a term to a document \cite{manning99}.
The score of a term is proportional to the frequency of the term in
a document, but is offset by the frequency of the same term in general
documents. The TF-IDF score of a word is high if the term has high
frequency (in the given document) and a low frequency in the general
documents. In other words, if a term appears a lot in a document,
it may be worth special attention. However, if the term appears a lot in
other documents as well, then its significance is low.
$$
\begin{array}{rcl}
\mbox{TF-IDF}(t,d,D) & = & \mathrm{TF}(t,d)\times \mathrm{IDF}(t,D)
\\
\mathrm{IDF}(t,D) & = & \log\frac{N}{|\{d\in D \,:\, t\in d\}|}
\end{array}
$$
Here $N=|D|$ is the total number of documents in the background collection $D$.
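A minimal sketch of this scoring step is shown below, assuming the foreground and background documents have already been tokenized into lists of terms; the add-one smoothing and the use of the maximum score per term are assumptions made for the illustration.
\begin{small}
\begin{verbatim}
# Minimal TF-IDF sketch over tokenized documents (lists of terms).
# Foreground documents are those containing the seed terms; IDF is
# computed against the general background collection.
import math
from collections import Counter

def tf_idf_scores(foreground_docs, background_docs):
    n = len(background_docs)
    doc_freq = Counter()
    for doc in background_docs:
        doc_freq.update(set(doc))          # document frequency per term
    scores = Counter()
    for doc in foreground_docs:
        tf = Counter(doc)
        for term, count in tf.items():
            idf = math.log(n / (1.0 + doc_freq[term]))   # smoothed IDF
            scores[term] = max(scores[term], (count / len(doc)) * idf)
    return scores           # scores.most_common(k) gives candidate terms
\end{verbatim}
\end{small}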
A collocation is an expression consisting of two or more words that
corresponds to some conventional way of saying things. They include
noun phrases such as \textquotedblleft{}weapon of mass destruction\textquotedblright{},
phrasal verbs like \textquotedblleft{}make up\textquotedblright{}
and other stock phrases such as \textquotedblleft{}the rich and powerful.\textquotedblright{}
We applied collocation to bring in highly relevant terms as phrases
when the words collocate in the document and would make no sense as
individual terms. More details of this technique can be found in \cite{bdvj-nplm-03}.
Examples of such phrases include \textquotedblleft{}small
business,\textquotedblright{} \textquotedblleft{}retail categories,\textquotedblright{}
and \textquotedblleft{}men shirts.\textquotedblright{}
For collocated term generation, the larger the corpus, the more accurate
the results will be. However, a very large corpus suffers from
efficiency problems and is not practical to use in an interactive environment
such as Alexandria. Our hypothesis is that a week of general documents
as a background corpus is a good enough representative of the bigger
corpus, yet small enough to allow the TF-IDF and collocation
scores to be calculated in a responsive manner.
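The collocation step can be approximated with a simple pointwise mutual information (PMI) score over adjacent word pairs in the foreground documents, as sketched below; the frequency threshold and tokenization are illustrative assumptions rather than the exact method used in Alexandria.
\begin{small}
\begin{verbatim}
# Illustrative collocation scoring via pointwise mutual information (PMI)
# over adjacent word pairs in the foreground documents.
import math
from collections import Counter

def collocations(tokenized_docs, min_count=5):
    unigrams, bigrams, total = Counter(), Counter(), 0
    for doc in tokenized_docs:
        unigrams.update(doc)
        bigrams.update(zip(doc, doc[1:]))
        total += len(doc)
    scored = {}
    for (w1, w2), c in bigrams.items():
        if c < min_count:                 # ignore rare pairs
            continue
        pmi = math.log((c * total) / (unigrams[w1] * unigrams[w2]))
        scored[(w1, w2)] = pmi
    return sorted(scored.items(), key=lambda kv: -kv[1])
\end{verbatim}
\end{small}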
\subsection{Term Clustering and Similar Term Generation}
Alexandria uses a term-clustering algorithm based on semantic similarities
between terms to semantically group them into appropriate and strong
``topics''. Alexandria uses Neural Network Language Models (NNLMs)
that map words and bodies of text to latent vector spaces. Since they were initially proposed \cite{bdvj-nplm-03}, a great amount of progress has been
made in improving these models to capture many complex types of semantic
and syntactic relationships \cite{Mikolov-2013,PenningtonSM14}. NNLMs are generally trained in an unsupervised
manner over a large corpus (greater than 1 billion words) that contains
information relevant to downstream classification tasks. Popular
methods for extracting powerful vector spaces from these corpora rely
either on maximizing the log-likelihood of a word given its context
words \cite{Mikolov-2013} or on directly training from the probabilistic
properties of word co-occurrences \cite{PenningtonSM14}. In
Alexandria, we train our NNLMs on either a large corpus of Tweets
from Twitter or a large corpus of news documents to reflect the linguistic
differences in the target domain the end user is trying to explore.
We also extended the basic NNLM architecture to include phrases that
are longer than those directly trained in the corpus by introducing
language compositionality into our model \cite{socher2013recursive,goller96,mikolov2010recurrent}.
This way, our NNLM models can map any length of text into the same
latent vector spaces for comparison.
The similarity measure obtained to support the term clustering
is also used to generate new terms that are ``similar'' to the terms
already in a topic.
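As a sketch of how clustering and similar-term generation can be realized on top of such a vector space, assume \texttt{term\_vectors} maps each candidate term to its NNLM embedding; the use of $k$-means and cosine nearest neighbours below is illustrative, not the exact Alexandria algorithm.
\begin{small}
\begin{verbatim}
# Sketch: group candidate terms into topics and suggest similar terms,
# given term_vectors, an assumed mapping term -> embedding (NumPy array).
import numpy as np
from sklearn.cluster import KMeans

def cluster_terms(term_vectors, n_topics=10):
    terms = list(term_vectors)
    X = np.vstack([term_vectors[t] for t in terms])
    X = X / np.linalg.norm(X, axis=1, keepdims=True)   # cosine geometry
    labels = KMeans(n_clusters=n_topics, n_init=10).fit_predict(X)
    topics = {}
    for term, label in zip(terms, labels):
        topics.setdefault(label, []).append(term)
    return topics

def similar_terms(query_vec, term_vectors, k=10):
    q = query_vec / np.linalg.norm(query_vec)
    sims = {t: float(v @ q) / np.linalg.norm(v)
            for t, v in term_vectors.items()}
    return sorted(sims, key=sims.get, reverse=True)[:k]
\end{verbatim}
\end{small}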
\hide{
One challenge we are facing with the Alexandria user experience design is
handling concepts that have been altered by the users. When concepts
are manually created, or automatically computed ones are enhanced during
iterations, the concept-clustering algorithm has to take that into
account. The concepts that users created or changed can be leveraged by
the clustering system, which will train a neural network, aiming to
learn a belief function of the users target task. For this computation
it is important to prevent over fitting, and techniques like dropout
\cite{Hinton2012} lead to better clustering results. These vector
space models can also be used to create intelligent interface features.
For example, we can leverage text snippet similarity to automatically
name a new term list that bares strong conceptual resemblance to a
list that was previously created.
}
\hide{
\subsection{Profile Extraction}
Once tweets are extracted from the SOLR index, the corresponding author\textquoteright{}s
profiles are compiled. Both the tweets and profiles are annotated
along the foci, and stored for the initiative in both
CouchDB (noSQL
database) and SOLR indexes.
Alexandria incrementally fetches from the Twitter decahose to maintain
a 6-month rolling window of tweets. We also incrementally perform
analytics to compile authors \textquotedblleft{}user\textquotedblright{}
profiles. Attributes such as locations (used in showing geographic
distribution), whether authors are parents, intend to travel, whether
they are business owner, are computed using tweets as evidence. The
analytics based on previous research work done at IBM \cite{SMARC-in-IEEE-Big-Data-2013}
has shown to show around 82 \textendash{} 94 \% accuracy.
}
\section{Analytics Views}
\label{sec:analytics}
This section briefly surveys two of the four main
analytics algorithms currently supported by Alexandria;
the others are omitted due to lack of space.
\subsection{Profile Extraction}
As a pre-cursor to the other analytics in Alexandria, the tweets identified by
the composite topics are
extracted from the SOLR index and the corresponding
authors' profiles are compiled. Both the tweets and profiles are
annotated along the composite topics, and stored for the Project in both
CouchDB (noSQL database) and SOLR indexes. Alexandria incrementally
fetches from the Twitter decahose to maintain a 6-month rolling
window of tweets. We also incrementally perform analytics to compile
authors' "user" profiles. Attributes such as locations (used in
showing geographic distribution), whether authors are parents, and intent
to travel,
are computed using tweets
as evidence. The analytics based on previous research work done at
IBM \cite{SMARC-in-IEEE-Big-Data-2013}
has shown to show around 82\% to 94\% accuracy.
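As a purely illustrative example of this kind of evidence-based inference (and not the SMARC algorithm itself), a profile attribute can be assigned when enough of an author's tweets match indicative phrases; the phrase list and threshold below are invented for illustration.
\begin{small}
\begin{verbatim}
# Purely illustrative: flag an author as a likely parent when enough of
# their tweets contain indicative phrases. The phrases and threshold are
# invented; the production analytics are considerably more sophisticated.
PARENT_EVIDENCE = ("my daughter", "my son", "my kids", "school run")

def is_likely_parent(author_tweets, min_hits=3):
    hits = sum(1 for t in author_tweets
               if any(p in t.lower() for p in PARENT_EVIDENCE))
    return hits >= min_hits
\end{verbatim}
\end{small}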
\hide{
\begin{figure}
\begin{small}
\begin{center}
\begin{tabu} to 3.4in { | X[c] | X[c] | X[c] | }
\hline
Stand-alone machine & 10 nodes with 80 mappers & 10 nodes with 17K mappers \\
\hline
$\sim$15 hours & $\sim$2 hours & $\sim$1 hour \\
\hline
\end{tabu}
\end{center}
\end{small}
\caption{Time used for ingestion and indexing of
1 month of English-language
tweets from Twitter Decahose ($\sim$400 million tweets, with
resulting index $\sim$128 GB)}
\label{fig:tweet-ingestion-times}
\end{figure}
\begin{figure}
\begin{center}
\begin{small}
\begin{tabu} to 3.4in { | X[c] | X[c] | }
\hline
Number of tweets & Time for extraction and DB writes \\
\hline
$\sim$0.5M (453,931) & 4 min 29 sec \\
\hline
$\sim$1M (939,241) & 11 min 31 sec \\
\hline
\end{tabu}
\end{small}
\end{center}
\caption{Time used for extraction, annotation, and storage
of tweets matching a Domain Model}
\label{fig:tweet-extraction-times}
\end{figure}
}
We provide a brief illustration of the running time of various steps.
The current system is focused on a fixed set of English-language
Tweets from the Twitter Decahose (10\% of all Tweets).
With regards to background ingestion and initial processing,
the current Alexandria infrastructure uses a 4 node cluster,
with 1 as master and 3 as slaves; each node has
64MB of memory.
We focus on the time needed for this processing in Alexandria.
If a single stand-alone machine
were used, the processing would take about 15 hours.
With 10 nodes and 80 mappers there is a strong time reduction
down to about 2 hours; increasing to 17K mappers (the maximum number)
brings the time to about 1 hour.
\hide{
The parallelism afforded by the 4 nodes
gives a substantial increase.
The table shows the time needed to ingest and index one day's worth
of the English-language tweets from the Twitter Decahose
(about 400 million tweets), to build an index
(with size about 128 GB).
While the use of 80 mapper services reduces the time needed
to about 2 hours per day, using 17K mappers (the maximum available)
reduces the time to about 1 hour.
}
We also measured the end-to-end clock time for performing
the extraction and annotation stage for a set of tweets.
With a corpus of almost half a million tweets (452,201) the elapsed time was
4 minutes 29 seconds.
With a corpus of almost a million tweets (949,241) it took 11 minutes and 31 seconds.
(The numbers are not linear, probably because the system is
running on cloud-hosted virtual servers, which are subject
to outside work loads at arbitrary times.)
The processing includes writing the formatted data into both
a CouchDB and a SOLR database.
\hide{
In the current system there are four parts to the extraction
processing step.
First, SOLR is used to create an index in main memory specific
to the targeted tweets.
Second, the actual tweet data is pulled into SOLR.
Third, the tweet data is pulled over the wire into
the extraction service component.
Finally, this service annotates the tweet and author information,
and writes them over the wire into three databases
(two couchDB and one SOLR).
}
Looking forward, we expect to move towards an architecture
with a single indexed data store, so that
we can perform the annotations ``in-place''.
\subsection{Temporal Anomaly}
Lastly, Alexandria performs topic analytics to help the user explore
the topics discussed among tweets. Unlike many available topic
detection algorithms \cite{twitter-emerging-trends-2011},
we define anomalous topics as terms that
suddenly receive attention in a specific week when compared to the
rest of the weeks in the data set. Alexandria uses a technique
similar to those in the event detection domain \cite{beyond-twitter-trending-2011}.
It extracts terms from
tweets, computes TF-IDF scores and frequencies, and only retains terms
with high TF-IDF scores and high frequencies. To calculate the anomaly score
of a term, we consider the frequency of the term in each week and its
frequency over all the weeks in the data set. If the term's frequency
and score in a particular week deviate strongly from their overall values,
the term is considered anomalous. There could be an event
or an emerging trend that caused the buzz, leading people to discuss
the term more in that week. This can trigger the user to look further
and correlate the finding with events in that week. The following
formulas are used for the calculation.
\begin{small}
$$
\begin{array}{rcl}
{\rm anomalyScore}(term_i, week_j) & = &
\frac{{\rm normFreq}(term_i, week_j)}{{\rm normFreq}(term_i,{\rm all\_weeks})}
\\
{\rm normFreq}(term_i, week_j) & = &
\frac{{\rm count}(term_i, week_j)}{{\rm maxCount}(week_j)}
\\
{\rm normFreq}(term_i, {\rm all\_weeks}) & = &
\frac{{\rm count}(term_i, {\rm all\_weeks})}{{\rm maxCount}({\rm all\_weeks})}
\end{array}
$$
\end{small}
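A direct transcription of these formulas into code looks roughly as follows, assuming per-week term counts have already been computed; variable names are illustrative.
\begin{small}
\begin{verbatim}
# Sketch of the anomaly score defined above. weekly_counts maps each
# week to a Counter of term frequencies observed in that week.
from collections import Counter

def anomaly_score(term, week, weekly_counts):
    week_counts = weekly_counts[week]
    all_counts = Counter()
    for wc in weekly_counts.values():
        all_counts.update(wc)
    norm_week = week_counts[term] / max(week_counts.values())
    norm_all  = all_counts[term] / max(all_counts.values())
    return norm_week / norm_all if norm_all > 0 else 0.0
\end{verbatim}
\end{small}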
\section{Meta-data Support for Iterative Exploration}
\label{sec:meta-data}
Alexandria has been designed to support rapid, iterative, collaborative exploration of
a domain including the usage of multiple analytics (goals {\bf LG3}, {\bf SG4}, {\bf SG6}).
This is enabled in part by the disciplined use of REST APIs to wrap
the broad array of analytics capabilities (see Figure \ref{fig:arch-REST}).
But the fundamental enabler is the strongly data-centric approach taken
for managing the several Projects that are typically created
during the investigation of a subject area.
Data about all aspects of a Project (and pointers to more detailed information)
is maintained in a CouchDB document, called {\em ProjectDoc};
this can be used to support a dashboard about project status, and to
enable invocation of various services.
For example, the ProjectDoc holds a materialized copy of the domain model
used to select the tweets and authors that are targeted by the Project.
It maintains a record of which analytics have been invoked,
and also maintains status during the analytics execution to enable
a dashboard to show status and expected completion time to the end-user.
Provenance data is also stored, to enable a determination of how
data, analytics results, and visualizations were created in case
something needs to be reconstructed or verified.
The ProjectDoc provides a foundation for managing flexible, ad hoc styles of
iterative exploration.
For example, with the ProjectDoc it is easy to support ``cloning'' of
a Project to create a new one, and to combine the Topics and Composite Topics from
multiple Projects to create a new one.
It also allows for maintenance of information about whether analytics results
have become out-of-date, and to support the
incremental invocation of analytics,
e.g., as new tweets become available.
It also supports the inclusion of new Composite Topics into a Project's domain model,
along with controlled, incremental computation of the analytics for these additions.
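To make the data-centric design concrete, the following is a hypothetical and much simplified ProjectDoc, shown as a Python dictionary as it might be stored (in JSON form) in CouchDB; all field names are illustrative.
\begin{small}
\begin{verbatim}
# Hypothetical, simplified ProjectDoc; field names are illustrative only.
project_doc = {
    "_id": "project:vaccination-2014",
    "seedTerms": ["vaccination", "flu", "measles"],
    "topics": {"Flu": ["flu", "influenza"],
               "Disbelieved": ["hoax", "don't trust"]},
    "compositeTopics": {"Support Flu Vaccination":
                            ["Flu", "Vaccination", "Encouraged"]},
    "tweetSetRef": "solr://tweets/vaccination-2014",
    "analytics": [{"name": "temporalAnomaly", "status": "complete",
                   "resultRef": "couchdb://results/anomaly-0012"}],
    "provenance": {"clonedFrom": None,
                   "lastRun": "2014-07-01T12:00:00Z"},
}
\end{verbatim}
\end{small}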
\section{Related Work}
\label{sec:related}
\hide{
A precursor to Alexandria is LARIAT \cite{LARIAT-SCC-2014}.
Reference \cite{analytics-process-mgmt:DAB-2014}
describes the analytics process lifecycle.
}
Many papers focus on understanding social media. Various social media
studies provide understanding of how information is gathered. For
instance, \cite{hurricane-sandy-social-CHI-2014} analyses community
behaviors of social news site in the face of a disaster, \cite{twitter-H1N1-2010}
studies information sharing on Twitter during bird flu breakout, and
\cite{health-info-online-CHI-2014} studies how people use search
engines and twitter to gain insights on health information, providing
motivation for ad hoc exploration of social data.
Fundamentally, the authors of \cite{rajaraman2011mining} elaborated on the design features needed
in a tool for data exploration and analysis, and coined the term
\textquotedblleft{}Information
Building Applications.\textquotedblright{} They emphasized support
for tagging and categorizing raw data, and the
ability to restructure categories as their users (students) come to understand
more about the data or discover new facts. The authors also emphasized
the necessity of supporting a fluid shift between the concrete (raw data)
and the abstract (categories of data) during validation and iteration,
especially when faced with suspicious outcomes. While that
paper specifically discussed a tool for exploring streams of
images, the nature of the approach is very similar to the process
of exploring social media that we support in Alexandria.
From another direction, as discussed in \cite{analytics-process-mgmt:DAB-2014},
an environment for analytics exploration, and for applying its results,
must offer rich flexibility to pro-active knowledge workers,
and must incorporate best-practice approaches,
including Case Management and CRISP-DM \cite{CRISP-DM},
at a fundamental level.
Because project management in Alexandria is based on data-centric principles
(Section \ref{sec:meta-data}),
along with the services-API-centric design,
the system lays the foundation
for the next generation of support for the overall analytics lifecycle.
Another novelty in our work is the combination of
various text analytics and social media exploration tools
into a broad-based solution for rapid and iterative domain modeling.
While many tools
exist, such as Topsy \cite{topsy}, Solr \cite{solr}, and Banana \cite{banana},
we found that they do not adequately support the process, and
the human reasoning involved, in gathering quality results. Existing tools
typically aid in only a fraction of the overall exploration task.
More comprehensive commercial tools, such as HelpSocial \cite{HelpSocial}
and IBM Social Media Analytics \cite{SMA}, are geared towards a complete
solution. However, these tools are operated as consulting services,
requiring a team of consultants with deep domain expertise, and their
support for the exploration process relies heavily on human labor
and expertise.
In terms of the research literature,
Alexandria is helping to close a key gap in research on tooling for
data exploration that was identified in \cite{BertiniL09}.
\section{Conclusions and Directions}
\label{sec:conc}
This paper describes the Alexandria system, which provides
a combination of features aimed at enabling business analysts
and subject matter experts to more easily explore and derive
actionable insights from
social media.
The key novelties in the system are:
(a) support for rapid, iterative domain scoping that takes advantage of
several advanced text analytics tools, and
(b) a data-centric approach to supporting the overall lifecycle
of flexible, iterative
analytics exploration in the social media domain.
The Alexandria team is currently working on enhancements in
several dimensions.
Optimizations are underway, including a shift to
Spark for managing and pre-processing the background corpora
that support rapid domain scoping.
Tools to enable comparisons between term generation strategies
and other scoping tools
are under development.
A framework to enable ``crowd-sourced''
evaluation and feedback about the accuracy of extractors
is planned.
The team is working to support multiple kinds
of documents (e.g., forums, customer reviews, and marketing content),
for both background and foreground analytics.
The team is also developing a persistent catalog for
managing sets of topics and extractors; this will
be structured using a family of industry-specific
ontologies.
More fundamentally, a driving question is how to bring predictive
analytics into the
framework.
A goal is to provide intuitive mechanisms
to explore, view and compare the results of
numerous configurations of typical machine learning
algorithms (e.g., clustering, regression).
This appears to be crucial for enabling
business analysts (as opposed to data scientists)
to quickly discover one-off and on-going insights that can be applied
to improve business functions such as
marketing, customer support, and product planning.
\section*{Acknowledgements}
We would like to acknowledge other team members, Richard Goodwin,
Sweefen Goh and Chitra Venkatramani. We also would like to acknowledge
our colleagues from the SystemG project \cite{system-g-home-page},
including in particular
Ching-Yung Lin, Danny Yeh, Jie Lu, Nan Cao, Jui-Hsin (Larry) Lai,
and Roddrik Sabbah.
\section{Proof of \cref{sidetheorem} (Lower Bound for Balancing)}\label{sec:balanced}
Let $v_1, \dots, v_m$ denote the vectors in $V$, each independently drawn with i.i.d.~$N(0,1)$ entries. Enumerate the $k$-sparse vectors in $\{0,1\}^n$ as $x_1, x_2, \dots$, and let $X = \left\{1, \dots, {n \choose k}\right\}$.
Denote by $F_s$ the failure event for the $s$'th subset in the balancing problem, i.e.,~the event that $|x_s \cdot v_i| > d$ for all $v_i \in V$.
We establish a lower bound on $\Pr[\bigcup_{s \in X}F_s]$ using de Caen's lower bound (\cref{decaen}):
$$\Pr\left[\bigcup_{s \in X}F_s\right] \geq \sum_{s \in X} \frac{\Pr[F_s]^2}{\sum_{t \in X}\Pr[F_s \cap F_t]},$$
with the goal being to lower-bound the right-hand side by $\frac{1}{2}$.
Let $E_{s, i}$ be a single failure event, i.e.,~$|x_s \cdot v_i| > d$, such that $F_s = \bigcap_{i = 1}^{m}E_{s, i}$. We split the terms $\Pr[F_s \cap F_t]$ according to the amount of overlap between the two relevant subsets, and accordingly write $\Pr_{\beta}[F_s \cap F_t] = \Pr[F_s \cap F_t]$ in the case that $|x_s \cap x_t| = \beta k$ for some $\beta \in [0, 1]$.
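As a simple sanity check on this notation (not needed for the subsequent argument), note that when $\beta = 0$ the subsets $x_s$ and $x_t$ occupy disjoint coordinates, so the inner products $\{x_s \cdot v_i\}_i$ and $\{x_t \cdot v_i\}_i$ involve disjoint sets of i.i.d.~Gaussian entries and are therefore independent, giving
\begin{equation*}
\Pr_{0}[F_s \cap F_t] = \Pr[F_s]\,\Pr[F_t] = \Pr[F_s]^2.
\end{equation*}
The analysis below quantifies how this factorization degrades as the overlap $\beta$ grows.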
\begin{definition}
We introduce the quantities
\begin{equation}
\alpha = {n \choose k} \Pr[F_s]^2 \label{eq:def_alpha}
\end{equation}
and
\begin{equation}
v_{\beta} = {k \choose (1 - \beta)k}{n - k \choose (1 - \beta)k}\Pr_{\beta}[F_s \cap F_t]. \label{eq:def_v}
\end{equation}
\end{definition}
Here for a fixed $x_s$, $v_{\beta}$ represents the sum of the contributions of all the pairs $(x_s, x_t)$ where $|x_s \cap x_t| = \beta k$ in the denominator of de Caen's bound (i.e.,~$\sum_{t \in X}\Pr[F_s \cap F_t]$). Formally, $v_{\beta} = \sum_{t: |x_t \cap x_s| =\beta k}\Pr_{\beta}[F_s \cap F_t]$. Note that $v_{\beta}$ is well-defined only when $\beta k \in \mathbb{Z}_{\geq 0}$, which is understood to hold throughout our analysis.
For a fixed $x_s$, the number of $x_t$'s in $X$ such that $|x_s \cap x_t| = \beta k$ is ${k \choose (1 - \beta)k}{n - k \choose (1 - \beta)k} = {k \choose \beta k}{n - k \choose (1 - \beta)k}$, because ${k \choose \beta k}$ is the number of ways to choose $\beta k$ entries within the non-zero entries of $x_s$ to be a part of $x_t$ and ${n - k \choose (1 - \beta)k}$ is the number of ways to pick the remaining entries from outside of $x_s$, ensuring that the intersection size is exactly $\beta k$.
To simplify subsequent notation, we define
\[ \widetilde{k} = \frac{k}{d^2} \implies \widetilde{k}^{-0.5} = k^{-0.5}d.\]
We define three summations, which partition the sum in the denominator of de Caen's bound (i.e.,~$\sum_{t \in X}\Pr[F_s \cap F_t]$) as follows:
$$A = \sum_{\beta \in [0, \widetilde{k}^{-0.5}]}v_\beta,~~ B = \sum_{\beta \in [\widetilde{k}^{-0.5}, 0.9]}v_\beta, \text{ and } C = \sum_{\beta \in [0.9, 1]}v_\beta.$$
Here and subsequently, $\displaystyle \sum_{\beta \in [a, b]}v_{\beta}$ is compact notation meant to represent the sum over only the well-defined $\beta \in [a, b]$. Formally,
$$\sum_{\beta \in [a, b]}v_{\beta} = \sum_{j = \ceil{ak}}^{\floor{bk}}v_{\frac{j}{k}} = \sum_{j = \ceil{ak}}^{\floor{bk}}{k \choose k - j}{n - k \choose k - j}\Pr_{\beta = \frac{j}{k}}[F_s \cap F_t].$$
\noindent \textbf{Intuition behind why we split into three sums}:
\begin{itemize}
\item $A$ is the sum over all the pairs whose intersection is ``sufficiently small'' relative to $k$, i.e.,~with overlap fraction $\beta \leq \widetilde{k}^{-0.5}$. Since the intersection is so small, we are able to utilise the fact that $\Pr_{\beta}[F_s \cap F_t] \approx \Pr[F_s]^2$ to get $A \leq 1.5\alpha$ (\cref{lemmaA}).
\item For $B$, the sum of the binomial terms, i.e.,~${k \choose (1 - \beta)k}{n - k \choose (1 - \beta)k}$, is much smaller than ${n \choose k}$, and the largest asymptotic term in which $\Pr_{\beta}[F_s \cap F_t]$ differs from $\Pr[F_s]^2$ is $O(\beta\frac{d^2}{k})$. Both of these balance out and eventually give $B \leq 0.25\alpha$ (\cref{lemmaB}).
\item Lastly, $C$ is used for the specific case of $\beta = \Omega(1)$, where we loosely bound $\Pr_{\beta}[F_s \cap F_t] \leq 1$ and show that the product of the two binomial terms is small, so $C \leq 0.25\alpha$ (\cref{lemmaC}).
\end{itemize}
We now proceed more formally. For any fixed $s \in X$, we have $\sum_{t \in X}\Pr[F_s \cap F_t] \leq A + B + C$, and hence
\begin{align*}
\frac{1}{\sum_{t \in X}\Pr[F_s \cap F_t]} &\geq \frac{1}{A + B + C}.
\end{align*}
Let $Q$ be the event that the set $V = \{v_1, \dots, v_m\} \subseteq \mathbb{R}^n$, whose vectors have i.i.d.~standard Gaussian entries, is not $\left(n, k, d\right)$-balanced.
Then, since $\bigcup_{s \in X}F_{s} \implies Q$, we have
$$ \Pr[Q] \geq \Pr\left[\bigcup_{s \in X}F_{s}\right]. $$
Then, using de Caen's bound (\cref{decaen}), we have
\begin{align*}
\Pr\left[\bigcup_{s \in X}F_s\right] &\geq \sum_{s \in X} \frac{\Pr[F_s]^2}{\sum_{t \in X}\Pr[F_s \cap F_t]} \\
&\geq \frac{|X|\Pr[F_1]^2}{A + B + C}\\
&= \frac{{n \choose k} \Pr[F_1]^2}{A + B + C}.
\end{align*}
The numerator is equal to $\alpha$ by definition, and $A$, $B$, and $C$ will be bounded in the subsequent subsections (\cref{lemmaA}, \cref{lemmaB} and \cref{lemmaC} below) to deduce that
\begin{align*}
\Pr\left[\bigcup_{s \in X}F_s\right] &\geq \frac{\alpha}{(1.5 + 0.25 + 0.25)\alpha} = \frac{1}{2}.
\end{align*}
Combining the above, we have $\Pr[Q] \geq \frac{1}{2}$ as desired.
With some small constants $c'$ and $c''$, we will require $m \le c' \frac{k^{1.5}}{d^3}$ in \cref{lemmaA}, and $m \le c'' \frac{k^{1.5}}{d}\log{k}$ in \cref{lemmaB} and \cref{lemmaC}. Since we need these to hold simultaneously, the overall scaling on $m$ is
$$m \le c\frac{k^{1.5}}{d}\min{\left(\frac{1}{d^2}, \log{k}\right)}$$ for a sufficiently small constant $c$.
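For orientation (an illustrative special case only), in the application to binary signals one has $R = 1$ and $d = \Theta(\sqrt{\log m})$, and at the largest relevant $m$ we have $\log m = \Theta(\log k)$; the minimum above is then achieved by the $\frac{1}{d^2}$ term, and the condition reads
\begin{equation*}
m \le c\,\frac{k^{1.5}}{d} \cdot \frac{1}{d^2} = \Theta\!\left(\left(\frac{k}{\log k}\right)^{1.5}\right),
\end{equation*}
which matches the $R = 1$ case of \cref{maintheorem}.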
We now proceed with the analysis of $A$, $B$, and $C$.
\subsection{Initial Characterization of $E_{s,i}$ and $F_s$}
Recall that we set $ \widetilde{k} = \frac{k}{d^2}$, and that $d$ satisfies \eqref{eq:choice_d}.
Additionally, define $$r = \frac{1}{\sqrt{2\pi}}\frac{2d}{\sqrt{k}} = \sqrt{\frac{2}{\pi \widetilde{k}}}. $$
We proceed with several straightforward auxiliary lemmas; in the following, $\overline{(\cdot)}$ denotes the complement of an event.
\begin{lemma}
\label{lemmaE}
Under the preceding definitions, we have $r - O(\frac{1}{\widetilde{k}^{1.5}}) \leq \Pr[\overline{E_{s, i}}] \leq r$.
\end{lemma}
\begin{proof}
Observe that $x_s \cdot v_i$ is distributed as $N(0,k)$, since it is the sum of $k$ i.i.d.~$N(0,1)$ variables. Hence, for $Z \sim N(0,k)$, we have $\Pr[\overline{E_{s,i}}] = \Pr\left[|Z| \leq d\right]$. Invoking \cref{smallballlemma} with $\delta = \frac{d}{\sqrt{k}}$ and $\sigma^2 = k$, we obtain
\begin{align*}
\sqrt{\frac{2}{\pi}} \cdot \frac{d}{\sqrt{k}} - \frac{\left(\frac{d}{\sqrt{k}}\right)^3}{\sqrt{2\pi}} &\leq \Pr[\overline{E_{s,i}}] \leq \sqrt{\frac{2}{\pi}} \cdot \frac{d}{\sqrt{k}}.
\end{align*}
The lemma follows since $\sqrt{2/\pi}\cdot\frac{d}{\sqrt{k}} = \sqrt{\frac{2}{\pi \widetilde{k}}} = r$ and $\big(\frac{d}{\sqrt{k}}\big)^3 = \widetilde{k}^{-1.5}$.
\end{proof}
\begin{lemma}
\label{lemmaF2}
We have $\Pr[F_s]^2 \geq (1 - 2r + r^2)^m$.
\end{lemma}
\begin{proof}
Since $F_s = \bigcap_{i = 1}^{m}E_{s, i}$, we have
\begin{align*}
\Pr[F_s]^2 &= (\Pi_{i = 1}^{m}\Pr[E_{s, i}])^2\\
&\geq (1 - r)^{2m} \text{ [Using \cref{lemmaE}]}\\
&= (1 - 2r + r^2)^m.
\end{align*}
\end{proof}
\begin{lemma}
\label{lemmaalpha}
The quantity $\alpha = {n \choose k} \Pr[F_s]^2$ satisfies $\log{\alpha} \geq \log{n \choose k} - 2mr - O\big(\frac{m}{\widetilde{k}}\big)$.
\end{lemma}
\begin{proof}
Using \cref{lemmaF2} and the inequality $- x - x^2 \leq \log{(1 - x)}$ for $x \in [0, \frac{1}{2})$, we obtain
\begin{align*}
\log{\alpha} \geq \log{{n \choose k}} + m\log{(1 - 2r + r^2)} \geq \log{{n \choose k}} - 2mr - 2mr^2,
\end{align*}
and the lemma follows since $r^2 = \Theta(1/\widetilde{k})$.
\end{proof}
\subsection{Analysis of $A$}
We first obtain a bound on $\Pr_\beta[F_s \cap F_t]$ that applies for the analysis of both $A$ and $B$.
\begin{lemma}
\label{lemmaFbeta}
Given $\beta \leq 0.9$, $\Pr_{\beta}[F_s \cap F_t] \leq \big(1 - 2r + r^2 + O(\beta r^2) + O(\widetilde{k}^{-3/2})\big)^m$.
\end{lemma}
\begin{proof}
Since $F_s \cap F_t = \bigcap_{p = 1}^{m}(E_{s, p} \cap E_{t, p})$, we have $\Pr[F_s \cap F_t] = \prod_{p = 1}^{m}\Pr[E_{s, p} \cap E_{t, p}] = \prod_{p = 1}^{m} (1 - \Pr[\overline{E_{s, p}}] - \Pr[\overline{E_{t, p}}] + \Pr[\overline{E_{s, p}} \cap \overline{E_{t, p}}])$. \cref{lemmaE} already bounds $\Pr[\overline{E_{s,p}}]$.
The following lemma analyzes $\Pr[\overline{E_{s, p}} \cap \overline{E_{t, p}}]$. The event $\overline{E_{s, p}} \cap \overline{E_{t, p}}$ means that both $x_s$ and $x_t$ are balanced with respect to $v_p$. To upper bound this term, we utilise the fact that for both of them to be balanced, if we fix the contribution of the intersection region (i.e.,~of $x_s \cap x_t$ with respect to $v_p$) then the contribution of the exclusive regions (i.e.,~of $x_s \setminus x_t$ and $x_t \setminus x_s$ with respect to $v_p$) will be constrained to be in a region of size at most $2 d$.
\begin{lemma}
\label{lemmaEbeta}
Given $|x_s \cap x_t| = \beta k$, where $\beta \leq 0.9$, it holds for each $p \in [m]$ that $\Pr[\overline{E_{s, p}} \cap \overline{E_{t, p}}] \leq r^2(1 + \beta + O(\beta^2))$.
\end{lemma}
\begin{proof}
Let $f_{\beta k}(z)$ denote the density function of $N(0, \beta k)$ at point $z$. Write $x_s \cdot v_p = W + Y_s$ and $x_t \cdot v_p = W + Y_t$, where $W \sim N(0, \beta k)$ is the contribution of the $\beta k$ shared coordinates, and $Y_s, Y_t \sim N(0, (1 - \beta)k)$ are the contributions of the exclusive coordinates of $x_s$ and $x_t$; the variables $W, Y_s, Y_t$ are mutually independent. Conditioning on $W = z$, we have
\begin{align*}
\Pr[\overline{E_{s, p}} \cap \overline{E_{t, p}}] &= \Pr\left[|x_s \cdot v_p| \leq d, |x_t \cdot v_p| \leq d\right]\\
&= \int_{-\infty}^{\infty}f_{\beta k}(z)\Pr\left[-z - d \leq N(0, k - \beta k) \leq -z + d\right]^2 \,dz.
\end{align*}
Defining $q = \max_{z \in \mathbb{R}}\Pr\big[-z - d \leq N(0, k - \beta k) \leq -z + d\big]$, we have from \cref{smallballlemma} that $q \leq 2\frac{d}{\sqrt{(k - \beta k)2\pi}}=\frac{r}{\sqrt{1-\beta}}$.
Thus, for $\beta \leq 0.9$, a Taylor expansion gives
\[
\Pr[\overline{E_{s, p}} \cap \overline{E_{t, p}}]
\leq \int_{-\infty}^{\infty}f_{\beta k}(z)q^2 \,dz
\leq q^2
\leq \frac{r^2}{1 - \beta}= r^2(1 + \beta + O(\beta^2)).
\]
\end{proof}
We can now conclude the proof of \cref{lemmaFbeta}:
\begin{align*}
\Pr[F_s \cap F_t] &= \Pi_{p = 1}^{m}\Pr[E_{s, p} \cap E_{t, p}]\\
&= \Pi_{p = 1}^{m} (1 - \Pr[\overline{E_{s, p}}] - \Pr[\overline{E_{t, p}}] + \Pr[\overline{E_{s, p}} \cap \overline{E_{t, p}}]) \\
&\leq \left(1 - 2r + O\left(\frac{1}{\widetilde{k}^{1.5}}\right) + r^2(1 + \beta + O(\beta^2))\right)^m \text{ [Using \cref{lemmaE} and \cref{lemmaEbeta}]}\\
&\leq \left(1 - 2r + r^2 + O(\beta r^2) + O\left(\frac{1}{\widetilde{k}^{1.5}}\right)\right)^m.
\end{align*}
\end{proof}
We are now ready to prove our desired upper bound on $A$.
\begin{lemma}
\label{lemmaA}
Given $m \le c \frac{k^{1.5}}{d^3}$ for some sufficiently small constant $c$, and $k \le n^{\frac{2}{3} - \epsilon}$, it holds that $A \leq 1.5\alpha$.
\end{lemma}
\begin{proof}
Defining $l = (1 - \beta)k$, we have
\begin{align}\label{eqn:A}
A &\leq \sum_{\beta \in [0, \widetilde{k}^{-0.5}]}{k \choose (1 - \beta)k}{n - k \choose (1 - \beta)k}\Pr_{\beta}[F_s \cap F_t] \leq \sum_{l = k(1 - \widetilde{k}^{-0.5})}^{k}{k \choose l}{n - k \choose l}\Pr_{\beta}[F_s \cap F_t],
\end{align}
Since $\beta \leq \widetilde{k}^{-0.5}$, we have $O(\beta r^2) = O(\frac{\beta}{\widetilde{k}}) = O(\frac{1}{\widetilde{k}^{1.5}})$. Hence, \cref{lemmaF2} and \cref{lemmaFbeta} respectively yield:
\begin{align*}
\Pr[F_s]^2 &\geq \left(1 - 2r + r^2\right)^m, \\
\Pr_{\beta}[F_s \cap F_t] &\leq \left(1 - 2r + r^2 + O\left(\frac{1}{\widetilde{k}^{1.5}}\right)\right)^m,
\end{align*}
and combining these gives the following for some constant $C$:
\begin{align*}
\frac{\Pr_{\beta}[F_s \cap F_t]}{\Pr[F_s]^2} &\leq \left(1 + \frac{O(\widetilde{k}^{-1.5})}{1 - 2r + r^2}\right)^m \leq \left(1 + C\widetilde{k}^{-1.5}\right)^m \leq e^{mC\widetilde{k}^{-1.5}}.
\end{align*}
Taking $m \le \frac{1}{3C}\widetilde{k}^{1.5}$, it follows that $\Pr_\beta[F_s \cap F_t] \leq e^{\frac{1}{3}}\Pr[F_s]^2 \leq 1.5\Pr[F_s]^2$. Substituting into \eqref{eqn:A} and observing that
\begin{align*}
\sum_{l = k(1 - \widetilde{k}^{-0.5})}^{k} {k \choose l}{n - k \choose l} \leq \sum_{l = 0}^{k} {k \choose l}{n - k \choose l} = {n \choose k},
\end{align*}
we obtain $A \leq {n \choose k}1.5\Pr[F_s]^2= 1.5\alpha$.
\end{proof}
\subsection{Analysis of $B$}
The main step in the analysis of $B$ is showing that $v_{\beta} \leq \frac{0.25\alpha}{k}$ for all $\beta \in [\widetilde{k}^{-0.5}, 0.9]$. It will be convenient to study the logarithm (base $e$) of $v_\beta$.
\begin{lemma}
\label{lemmavbeta}
Given $\beta \leq 0.9$, for large enough $k$, we have $\log{v_{\beta}} \leq (1 + \epsilon)kH_2(\beta) + \log{n - k \choose (1 - \beta)k} - 2mr + O\big(\frac{m}{\widetilde{k}}\big)$.
\end{lemma}
\begin{proof}
We have
\begin{align*}
v_{\beta} &= {k \choose (1 - \beta)k}{n - k \choose (1 - \beta)k}\Pr_{\beta}[F_s \cap F_t] \\
&\leq {k \choose (1 - \beta)k}{n - k \choose (1 - \beta)k}\left(1 - 2r + r^2 + O(\beta r^2) + O\Big(\frac{1}{\widetilde{k}^{1.5}}\Big)\right)^m. \text{ [Using \cref{lemmaFbeta}]}
\end{align*}
Recalling the asymptotic results on binomial coefficients as specified in \cref{smallklognchoosek}, we have the following (once $k$ is large enough so that $1 + o(1) \leq 1 + \epsilon$):
\begin{align*}
\log{v_{\beta}} &\leq (1 + \epsilon)kH_2(\beta) + \log{n - k \choose (1 - \beta)k} + m\log{\left(1 - 2r + r^2 + O(\beta r^2) + O\Big(\frac{1}{\widetilde{k}^{1.5}}\Big)\right)}\\
&\leq (1 + \epsilon)kH_2(\beta) + \log{n - k \choose (1 - \beta)k} - 2mr + mr^2 + O(m\beta r^2) + O\left(\frac{m}{\widetilde{k}^{1.5}}\right) \\
&\hspace*{7.5cm}\text{ [Using $\log{(1 - x)} \leq -x$]} \\
&\leq (1 + \epsilon)kH_2(\beta) + \log{n - k \choose (1 - \beta)k} - 2mr + O\left(\frac{m}{\widetilde{k}}\right). \text{ [Using $r = O\big( \frac{1}{\sqrt{\widetilde{k}}} \big)$]}
\end{align*}
\end{proof}
We also need the following estimate of a difference of binomial coefficients.
\begin{lemma}
\label{lemmalogsub}
$\log{n \choose k} - \log{n - k \choose (1 - \beta)k} \geq \beta k \log{\left(\frac{n - k}{k}\right)}$.
\end{lemma}
\begin{proof}
We have
\begin{align*}
\frac{{n \choose k}}{{n - k \choose (1 - \beta)k}}
&= \prod_{a = 0}^{k - \beta k - 1}\left(\frac{n - a}{n - k - a}\right) \times \prod_{b = 0}^{\beta k - 1}\left(\frac{n - k + \beta k - b}{k - b}\right)\\
&= \prod_{a = 0}^{k - \beta k - 1}\left(1 + \frac{k}{n - k - a}\right) \times \prod_{b = 0}^{\beta k - 1}\left(\frac{n - k + \beta k - b}{k - b}\right)\\
&\geq \left(1 + \frac{k}{n}\right)^{k - \beta k} \times \left(\frac{n - k}{k}\right)^{\beta k}.
\end{align*}
Taking the log, it follows that
\begin{align*}
\log{n \choose k} - \log{n - k \choose (1 - \beta)k} &\geq (k - \beta k)\log{\left(1 + \frac{k}{n}\right)} + \beta k \log{\left(\frac{n - k}{k}\right)}
\geq \beta k \log{\left(\frac{n - k}{k}\right)}.
\end{align*}
\end{proof}
We can now obtain the key result of this subsection, namely, a bound on $B$.
\begin{lemma}
\label{lemmaB}
Given $m \le c \frac{k^{1.5}}{d}\log{k}$ for a small enough constant $c$, $k \le n^{\frac{2}{3} - \epsilon}$, and $d$ satisfying \eqref{eq:choice_d},
it holds that $B \leq 0.25\alpha$.
\end{lemma}
\begin{proof}
We introduce the function
\begin{equation}
f(\beta) = \beta k \log{\Big(\frac{n - k}{k}\Big)} - (1 + \epsilon)kH_{2}(\beta) \label{eq:f_def}
\end{equation}
which yields
\begin{equation}
\frac{\partial f}{\partial \beta} (\beta) = k\log{\bigg(\frac{n - k}{k} {\Big(\frac{\beta}{1 - \beta}\Big)}^{1 + \epsilon}\bigg)}. \label{eq:f_deriv}
\end{equation}
For $\beta \in [\widetilde{k}^{-0.5}, 0.9]$, we have from \cref{lemmaalpha} and \cref{lemmavbeta} that
\begin{align*}
\log{\alpha} - \log{v_\beta} &\geq \left(\log{n \choose k} - \log{n - k \choose (1 - \beta)k}\right) - (1 + \epsilon)kH_2(\beta) - O\left(\frac{m}{\widetilde{k}}\right) \\
&\geq \beta k \log{\left(\frac{n - k}{k}\right)} - (1 + \epsilon)kH_2(\beta) - O\left(\frac{m}{\widetilde{k}}\right) \text{ [Using \cref{lemmalogsub}]}\\
&= f(\beta) - O\left(\frac{m}{\widetilde{k}}\right).
\end{align*}
In \cref{fincreasing} below, we show that $f(\beta)$ is an increasing function in the range $[\widetilde{k}^{-0.5}, 0.9]$ with the minimum at $f(\widetilde{k}^{-0.5})$. Moreover, recalling $\widetilde{k} = \frac{k}{d^2}$, we have
\begin{align*}
kH_2(\widetilde{k}^{-0.5}) &= k\left(k^{-0.5}d\log{\left(\frac{1}{k^{-0.5}d}\right)} + \left(1 - k^{-0.5}d\right)\log{\left(\frac{1}{1 - k^{-0.5}d}\right)}\right)\\
&= k^{0.5}d\left(\log{\left(\frac{1}{k^{-0.5}d}\right)} - \log{\left(\frac{1}{1 - k^{-0.5}d}\right)}\right) + k\log{\left(\frac{1}{1 - k^{-0.5}d}\right)}\\
&\leq k^{0.5}d\left(\log{\left(\frac{1}{k^{-0.5}d}\right)}\right) - k\log{\left(1 - k^{-0.5}d\right)}\\
&\leq k^{0.5}d(0.5\log{k} - \log{d}) - k\left(-k^{-0.5}d - k^{-1}d^2\right) \\
& \hspace{2cm} \text{ [Using $\log{(1 - x)} \geq - x - x^2$, when $x \in [0, \frac{1}{2}$]]}\\
&= k^{0.5}d(1 + 0.5\log{k} - \log{d}) + d^2\\
&\leq k^{0.5}d\log{\left(\frac{2k^{0.5}}{d}\right)} + d^2.
\end{align*}
Hence,
\begin{align*}
f(\widetilde{k}^{-0.5}) &= \widetilde{k}^{-0.5}k\log{\left(\frac{n - k}{k}\right)} - (1 + \epsilon)kH_2(\widetilde{k}^{-0.5}) \\
&\geq k^{0.5}d\log{\left(\frac{n - k}{k}\right)} - (1 + \epsilon)\left(k^{0.5}d\log{\left(\frac{2k^{0.5}}{d}\right)} + d^2\right)\\
&\geq k^{0.5}d\log{\left(\frac{n}{2k}\right)} - k^{0.5}d\log{\left(\left(\frac{2k^{0.5}}{d}\right)^{1 + \epsilon}\right)} - 2d^2 \text{ [Since $\epsilon < 1$]}\\
&= k^{0.5}d\log{\left(\frac{nd^{1 + \epsilon}}{2^{2 + \epsilon}k^{1.5 + 0.5\epsilon}}\right)} - 2d^2 \\
&\geq k^{0.5}d\log{\left(\frac{nd^{1 + \epsilon}}{4k^{1.5 + 0.5\epsilon}}\right)} - 2d^2. \text{ [Since $\epsilon > 0$]}\\
\end{align*}
Using the condition $d^{1 + \epsilon} > \frac{4k^{1.5 + 0.5\epsilon + \epsilon}}{n}$ in \eqref{eq:choice_d}, we additionally have
\begin{gather*}
\frac{nd^{1 + \epsilon}}{4k^{1.5 + 0.5\epsilon}} > k^{\epsilon}\\
\implies \log{\left(\frac{nd^{1 + \epsilon}}{4k^{1.5 + 0.5\epsilon}}\right)} > \epsilon\log{k}.
\end{gather*}
Therefore,
\begin{align*}
\log{\alpha} - \log{v_{\beta}} &\geq f(\widetilde{k}^{-0.5}) - O\left(\frac{m}{\widetilde{k}}\right)\\
&\geq \epsilon k^{0.5}d\log{k} - 2d^2 - O\left(\frac{m}{\widetilde{k}}\right).
\end{align*}
Since $m < c\frac{k^{1.5}}{d}\log{k}$ for a small enough constant $c$, we can select $c$ such that $O\big(\frac{m}{\widetilde{k}}\big) \le \frac{\epsilon}{2} \frac{k^{1.5}}{d}\log{k} \times \frac{d^2}{k} = \frac{\epsilon}{2} k^{0.5} d \log{k}$.
Therefore, the above expression simplifies to
\begin{align*}
\log{\alpha} - \log{v_{\beta}} &\geq \epsilon k^{0.5}d\log{k} - 2d^2 - O\left(\frac{m}{\widetilde{k}}\right)\\
&\geq \frac{\epsilon}{2}k^{0.5}d\log{k} - 2d^2
\end{align*}
We analyse the two terms individually as follows, again considering the conditions on $d$ in \eqref{eq:choice_d}:
\begin{itemize}
\item For the first term,
\begin{align*}
\frac{\epsilon}{2} k^{0.5}d\log{k}
&\geq \frac{\epsilon}{2}k^{\epsilon}\sqrt{4\log{m}}\log{k} \text{ [Since $d \geq k^{\epsilon - 0.5}\sqrt{4\log{m}}$]}\\
&\geq \Omega(k^{\epsilon}).
\end{align*}
\item For the second term, using $d \le \sqrt{4\log{m}}$, we obtain
\begin{align*}
2d^2 &\le 8\log{m}\\
&= O(\log{k}). \text{ [Since $d \geq k^{\epsilon - 0.5}\sqrt{4\log{m}}$ and $m = O\big(\frac{k^{1.5}}{d}\log{k}\big)$]}
\end{align*}
\end{itemize}
Combining these, we have
$$\frac{\epsilon}{2} k^{0.5}d\log{k} - 2d^2 = \Omega(k^{\epsilon}) \geq \log{k} + \log{4},$$
where the last step holds for $k$ exceeding a sufficiently large constant. It follows that, for all $\beta \in [\widetilde{k}^{-0.5}, 0.9]$, we have $\log{\alpha} - \log{v_{\beta}} \geq \log{k} + \log{4}$, or equivalently $v_{\beta} \leq 0.25\frac{\alpha}{k}$. Summing over the relevant $\beta$ values then gives
\begin{align*}
B = \sum_{\beta \in [\widetilde{k}^{-0.5}, 0.9]}v_{\beta} \leq k \times 0.25\frac{\alpha}{k}
= 0.25\alpha.
\end{align*}
\end{proof}
In the above analysis, we focused on $\widetilde{k}^{-0.5}$ and then used monotonicity to handle all $\beta \in [\widetilde{k}^{-0.5}, 0.9]$. The following lemma gives the required monotonicity property.
\begin{lemma}
\label{fincreasing}
Given $d$ satisfying \eqref{eq:choice_d} and $k \le n^{\frac{2}{3} - \epsilon}$, we have for $\beta \ge \widetilde{k}^{-0.5}$ that $\frac{\partial f}{\partial \beta}(\beta) > 0$.
\end{lemma}
\begin{proof}
Since $\widetilde{k} = \frac{k}{d^2}$, the assumption $\beta \ge \widetilde{k}^{-0.5}$ becomes $\beta \ge k^{-0.5}d$. Additionally using the trivial bound $\frac{\beta}{1-\beta} \ge \beta$, it follows that
$$ \left(\frac{\beta}{1 - \beta}\right)^{1 + \epsilon} \ge k^{-0.5(1 + \epsilon)}d^{1 + \epsilon}. $$
Moreover, by assumption in \eqref{eq:choice_d}, we have
$$ d^{1 + \epsilon} \ge \frac{4k^{1.5 + 0.5\epsilon + \epsilon}}{n} \ge \frac{2k^{1.5 + 0.5\epsilon + \epsilon}}{n}, $$
and combining the two preceding equations gives
\begin{align*}
{\left(\frac{\beta}{1 - \beta}\right)}^{1 + \epsilon} &\ge k^{-0.5(1 + \epsilon)}\frac{2k^{1.5 + 0.5\epsilon + \epsilon}}{n}\\
&\ge \frac{2k^{1 + \epsilon}}{n}\\
&\ge \frac{2k}{n}\\
&\ge \frac{k}{n - k}.
\end{align*}
It follows that
\begin{equation*}
\frac{n - k}{k} {\left(\frac{\beta}{1 - \beta}\right)}^{1 + \epsilon} \ge 1,
\end{equation*}
or equivalently,
\begin{equation*}
k\log{\left(\frac{n - k}{k} {\left(\frac{\beta}{1 - \beta}\right)}^{1 + \epsilon}\right)} \ge 0.
\end{equation*}
Recalling from \eqref{eq:f_deriv} that the left-hand side is $\frac{\partial f}{\partial \beta}(\beta)$, this completes the proof.
\end{proof}
\subsection{Analysis of $C$}
\begin{lemma}
\label{lemmaC}
Given $m < c\frac{k^{1.5}}{d}\log{k}$ for a small enough constant $c$, $k \le n^{\frac{2}{3} - \epsilon}$, and $d$ satisfying \eqref{eq:choice_d},
it holds that $C \leq 0.25\alpha$.
\end{lemma}
\begin{proof}
We take $c$ to be small enough so that $\frac{2m}{\sqrt{\widetilde{k}}} + O\big(\frac{m}{\widetilde{k}}\big) = 2m\frac{d}{\sqrt{k}} + O\big(\frac{md^2}{k}\big) \le \frac{k}{2}\log{\frac{n}{k}}$. Then, we have
\begin{align*}
\log{\alpha} &\geq \log{{n \choose k}} - 2mr - O\left(\frac{m}{\widetilde{k}}\right) \text{ [Using \cref{lemmaalpha}]}\\
&\geq k\log{\frac{n}{k}} - 2\frac{m}{\sqrt{\widetilde{k}}} - O\left(\frac{m}{\widetilde{k}}\right) \text{ [Since $r = \sqrt{\frac{2}{\pi \widetilde{k}}}$ and $\frac{2}{\pi} \le 1$]}\\
&\geq \frac{k}{2}\log{\frac{n}{k}} \\
&\geq \frac{k}{6}\log{n}. \text{ [Since $k < n^{\frac{2}{3} - \epsilon}$]}
\end{align*}
We proceed by bounding $C$ as follows:
\begin{align*}
C &= \sum_{\beta \in [0.9, 1]}v_{\beta}\\
&= \sum_{\beta \in [0.9, 1]}{k \choose (1 - \beta)k}{n - k \choose (1 - \beta)k}\Pr_{\beta}[F_s \cap F_t]\\
&\leq 0.2k{k \choose 0.1k} {n - k \choose 0.1k} \Pr_{\beta}[F_s \cap F_t] \text{ [Since $k \geq 20 \implies \left\lceil 0.1k \right\rceil \leq 0.2k$]}\\
&\leq 0.2k {k \choose 0.1k} {n - k \choose 0.1k} \times 1.
\end{align*}
Taking the logarithm, we obtain
\begin{align*}
\log{C} &\leq \log{\left(0.2k{k \choose 0.1k}{n - k \choose 0.1k}\right)}\\
&\leq \log{(0.2)} + \log{k} + (0.1k)(\log{(10)} + 1) + (0.1k)(\log{(n)} + 1)\\
&\hspace{4cm} \text{ [Using $\log{{n \choose k}} \leq k\left(\log{\left(\frac{n}{k}\right)} + 1\right)$]}\\
&\leq 0.1k\log{n} + 0.6k + \log{k} + \log{(0.2)}\\
&\leq 0.15k\log{n} - \log{4} \text{ [For large $n$, $0.6k + \log{k} + \log{0.2} < 0.05k\log{n} - \log{4}$]}\\
&\leq \frac{k}{6}\log{n} - \log{4}.
\end{align*}
Combining the above, we obtain the desired bound $C \leq 0.25\alpha$ (recalling that the log has base $e$).
\end{proof}
\section{Lower Bound for Bounded Dynamic Range Signals}
\cref{maintheorem} below is the more detailed version of our main result, informally summarized earlier in \cref{mainthminf}.
\begin{theorem}
\label{maintheorem}
Fix $\epsilon \in \big(0,\frac{2}{3}\big)$, and suppose that $k \le n^{\frac{2}{3} - \epsilon}$ and $R \le \min{(n^{0.5\epsilon}, k^{0.5 - \epsilon})}$. If $A =(a_{i, j}) \in \mathbb{R}^{m \times n}$, where each $a_{i, j}$ is an i.i.d.~standard Gaussian random variable, and
\begin{equation}
m <cR\left(\frac{k}{\log k}\right)^{3/2}\min{\left({R^2}, \log^2{k}\right)} \label{eq:m_bound}
\end{equation}
for a sufficiently small constant $c>0$,
then with probability at least $1/3$, $A$ is not a valid universal $1$-bit measurement matrix for support recovery on $\mathcal{X}_k(R)$.
\end{theorem}
We observe that this lower bound matches the upper bound in \eqref{eq:m_Jacques2} up to logarithmic factors (note that when ignoring logarithmic factors, we can safely replace $\min\left({R^2}, \log^2{k}\right)$ by $1$). Regarding the assumed scaling on $k$ and $R$ here, we note the following:
\begin{itemize}
\item The assumption $k \le n^{\frac{2}{3} - \epsilon}$ covers the sparsity regimes of primary interest, since $k \ge n^{\frac{2}{3} + \epsilon}$ would amount to the right-hand side of \eqref{eq:m_bound} exceeding $n$. However, letting $m = n$ with $A$ being the identity matrix would then require fewer measurements. Thus, in such scaling regimes, this trivial measurement scheme is more appropriate than the i.i.d.~Gaussian scheme.
\item The assumption $R \le k^{0.5-\epsilon}$ covers the scaling regimes of primary interest on the number of measurements, because $R \ge k^{0.5+\epsilon}$ would imply that the right-hand side of \eqref{eq:m_bound} is $\Omega(k^2)$. However, it is known that $k^2$ dependence can be achieved even for {\em arbitrary} $k$-sparse signals regardless of the dynamic range \cite{Ach17}.
\item On the other hand, the assumption $R \le n^{0.5\epsilon}$ imposes a non-trivial restriction. Ideally this would be improved to $R \le n^{1.5\epsilon}$, which is a natural threshold because any higher would make the right-hand side of \eqref{eq:m_bound} again exceed $n$ when $k = \Theta(n^{2/3-\epsilon})$. We believe that our constant $0.5$ could be increased without too much extra effort, but that increasing it all the way to $1.5$ may be more challenging.
\end{itemize}
The ``failure'' probability of $1/3$ in \cref{maintheorem} is arbitrary, and can be adjusted to any fixed constant in $(0,1)$ while only affecting the unspecified constant $c$. This is because any constant positive ``success'' probability could trivially be improved, and made arbitrarily close to one, by independently generating the measurement matrix $O(1)$ times.
As discussed in \cref{sec:overview}, we obtain the lower bound through a connection to a vector balancing problem, formally defined as follows.
\begin{definition}
A set $V \subseteq \mathbb{R}^n$ is said to be {\em $(n, \ell, d)$-balanced} if for any $S \subseteq [n]$ of size $\ell$, there exists $v \in V$ satisfying $|\sum_{i \in S} v_i| \leq d$.
\end{definition}
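As a small illustration of this definition (a toy example with hand-picked vectors, not used elsewhere), take $n = 3$, $\ell = 2$, $d = 0$, and $V = \{v^{(1)}, v^{(2)}\}$ with $v^{(1)} = (1, -1, 1)$ and $v^{(2)} = (1, 1, -1)$. Every $2$-element subset of $[3]$ is then balanced by one of the two vectors, since
\begin{equation*}
v^{(1)}_1 + v^{(1)}_2 = 0, \qquad v^{(1)}_2 + v^{(1)}_3 = 0, \qquad v^{(2)}_1 + v^{(2)}_3 = 0,
\end{equation*}
so $V$ is $(3, 2, 0)$-balanced.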
\noindent The remainder of the section is devoted to proving \cref{maintheorem}.
\subsection{Reduction From Balancing}
In this section, we show that a valid universal 1-bit measurement matrix for $\mathcal{X}_k(R)$ implies a bound on the size of a balanced family of vectors. Hence, a lower bound on the latter will imply our desired lower bound on the number of measurements.
\begin{lemma}
\label{successreducelemma}
If $A = (a_{i, j}) \in \mathbb{R}^{m \times n}$ is a valid universal $1$-bit measurement matrix for support recovery on $\mathcal{X}_k(R)$, then either
\begin{itemize}
\item $V = \{(a_{i,1}, a_{i,2}, \dots, a_{i,n - 2}) \mid i \in [m]\}$ is $\big(n - 2, k - 1, \frac{\sqrt{4\log{m}}}{R}\big)$-balanced, or
\item there exist indices $i \in [1, m], j \in [n - 1, n]$ such that $|a_{i, j}| > \sqrt{4\log{m}}$.
\end{itemize}
\end{lemma}
\begin{proof}
Suppose that $|a_{i, j}| \leq \sqrt{4\log{m}}$ for all $i \in [1, m], j \in [n - 1, n]$. Consider any subset $S \subseteq [n - 2]$, where $|S| = k - 1$. Let $x, y$ be $R$-bounded $k$-sparse vectors such that
$$x_{i} = \begin{cases}
R & \text{if $i \in S$}\\
1 & \text{if $i = n - 1$}\\
0 & \text{otherwise}\\
\end{cases} \text{ and }
y_{i} = \begin{cases}
R & \text{if $i \in S$}\\
1 & \text{if $i = n$}\\
0 & \text{otherwise}.\\
\end{cases}
$$
Since $A$ is a valid universal $1$-bit measurement matrix for support recovery on $\mathcal{X}_k(R)$, we must have ${\rm sign}(Ax) \neq {\rm sign}(Ay)$, so there must exist a row $a_i$ in $A$ such that ${\rm sign}(a_i \cdot x) \neq {\rm sign}(a_i \cdot y)$. Without loss of generality, suppose that $a_i \cdot x \geq 0$ and $a_i \cdot y < 0$. Hence,
\begin{align}
\label{boundentries}
\begin{split}
\sum_{j \in S} Ra_{i,j} + a_{i,n - 1} \geq 0, \qquad \text{and} \qquad \sum_{j \in S} Ra_{i,j} + a_{i, n} < 0.
\end{split}
\end{align}
Since $|a_{i, n - 1}|, |a_{i, n}| \leq \sqrt{4\log{m}}$, it follows from \eqref{boundentries} that
\begin{align*}
\left|R\sum_{j \in S} a_{i, j}\right| \leq \sqrt{4\log{m}}.
\end{align*}
Hence, $V$ is $\big(n - 2, k - 1, \frac{\sqrt{4\log{m}}}{R}\big)$-balanced, as claimed.
\end{proof}
\ignore{
\begin{corollary}
\label{failreducelemma}
Suppose that each $a_{i, j}$ is an i.i.d.~standard Gaussian random variable. If
\begin{itemize}
\item $V = \{v | v = (a_{i, 1}, a_{i, 2} \dots, a_{i, n - 2}), i \in [1, m]\}$ is not $\left(n - 2, k - 1, \frac{\sqrt{4\log{m}}}{R}\right)$-balanced AND
\item $\forall i \in [n - 1, n], \forall j \in [1, m], |a_{i, j}| \leq \sqrt{4\log{m}}$
\end{itemize} then $A$ is not a valid universal $1$-bit measurement matrix for support recovery on $\mathcal{X}_k(R)$.
\end{corollary}
\begin{proof}
This is the contrapositive of the \cref{successreducelemma}
\end{proof}
}
\subsection{Proof of \cref{maintheorem}}
In this section, we prove \cref{maintheorem} by applying the following lower bound on the size of balanced vector families, which will in turn be proved in \cref{sec:balanced}.
\begin{theorem}
\label{sidetheorem}
Fix $\epsilon \in \big(0,\frac{2}{3}\big)$, and suppose that $k \le n^{\frac{2}{3} - \epsilon}$. Let $V \subseteq \mathbb{R}^n$ be a set of $m$ vectors, each independently drawn with i.i.d.~$N(0,1)$ entries. Then, for any $d$ satisfying
\begin{equation}
d \ge \max{\left(\frac{4^{1/(1 + \epsilon)}k^{1.5}}{n^{1/(1 + \epsilon)}}, k^{\epsilon - 0.5}\sqrt{4\log{m}}\right)}, \quad d \le \sqrt{4\log{m}}, \label{eq:choice_d}
\end{equation}
if $m$ satisfies
\[
m \le c\frac{k^{1.5}}{d}\min{\left(\frac{1}{d^2}, \log{k}\right)}
\]
for a sufficiently small constant $c>0$, then with probability at least $1/2$ the set $V$ is not $\big(n, k, d)$-balanced.
\end{theorem}
For the sake of generality, we have considered the full range of $d$ in \eqref{eq:choice_d} that our proof permits, but in view of \cref{successreducelemma}, proving \cref{maintheorem} only requires handling the much more specific choice of $d = \frac{\sqrt{4 \log m}}{R}$, as we now show.
\begin{proof}[Proof of \cref{maintheorem}]
\noindent Let $H$ be the event that the matrix $A = (a_{i, j}) \in \mathbb{R}^{m \times n}$ sampled from i.i.d.~standard Gaussian random variables is not a valid universal $1$-bit measurement matrix for support recovery on $\mathcal{X}_k(R)$. Let $G$ be the event that $\forall i \in [1, m], \forall j \in [n - 1, n], |a_{i, j}| \leq \sqrt{4\log{m}}$, and let $P$ be the event that the set $V$ in \cref{successreducelemma} is not $\big(n - 2, k - 1, \frac{\sqrt{4\log{m}}}{R}\big)$-balanced.
From \cref{successreducelemma}, $P \cap G \implies H$, and therefore, $\Pr[H] \geq \Pr[P] - \Pr[\overline{G}]$.
We show that $\Pr[\overline{G}]\leq \frac16$:
\begin{align*}
\Pr[\overline{G}] &= \Pr\left[\bigcup_{i = 1}^{m} \bigcup_{j = n - 1}^{n} \left\{|a_{i, j}| > \sqrt{4\log{m}}\right\}\right]\\
&\leq 2m \Pr[|Z| > \sqrt{4\log{m}}]\\
&\leq 4m e^{-2 \log m} \le \frac16,
\end{align*}
where $Z \sim N(0, 1)$, and in the last inequality we assume that $k$ (and hence $m$) exceeds a sufficiently large constant.
Therefore, $\Pr[H] \geq \frac{1}{2} - \frac{1}{6}= \frac{1}{3}$. The minor difference in $n$ and $k$ between Theorems \ref{maintheorem} and \ref{sidetheorem} can be absorbed by adjusting the constant $c$.
\cref{sidetheorem} provides the following condition on $m$:
$$m \le c\frac{k^{1.5}}{d}\min{\left(\frac{1}{d^2}, \log{k}\right)}$$
for $d$ satisfying \eqref{eq:choice_d}. We claim that the choice $d = \frac{\sqrt{4\log{m}}}{R}$ indeed satisfies \eqref{eq:choice_d}; this is seen as follows:
\begin{itemize}
\item \cref{lemmad} below establishes that
$\big(\frac{\sqrt{4\log{m}}}{R}\big)^{1 + \epsilon} \ge \frac{4k^{1.5(1 + \epsilon)}}{n}$, which implies $d \ge \frac{4^{1/(1 + \epsilon)}k^{1.5}}{n^{1/(1 + \epsilon)}}$.
\item The inequality $\frac{\sqrt{4\log{m}}}{R} \geq k^{\epsilon - 0.5}\sqrt{4\log{m}}$ follows immediately from $R \leq k^{0.5 - \epsilon}$.
\item $\frac{\sqrt{4\log{m}}}{R} \leq \sqrt{4\log{m}}$ follows immediately from $R \ge 1$.
\end{itemize}
Substituting this choice of $d$ into \cref{sidetheorem}, we obtain the condition
\begin{equation}
m \le c'\frac{R}{\sqrt{\log{m}}}k^{1.5}\min{\left(\frac{R^2}{\log{m}}, \log{k}\right)} \label{eq:m_init}
\end{equation}
for a sufficiently small constant $c'$. Manipulating this expression so as to obtain $m$ only on the left-hand side, we obtain the condition
$$m \le c''\frac{R}{\sqrt{\log{k}}}k^{1.5}\min{\left(\frac{R^2}{\log{k}}, \log{k}\right)}.$$
This manipulation uses the assumption $R \le k^{0.5 - \epsilon}$, which implies that $\log m$ and $\log k$ are of the same order when $m$ equals the largest value satisfying \eqref{eq:m_init}.
\end{proof}
\begin{lemma}
\label{lemmad}
When $R \le n^{0.5\epsilon}$ and $k \le n^{\frac{2}{3}-\epsilon}$, we have $\big(\frac{\sqrt{4\log{m}}}{R}\big)^{1 + \epsilon} \ge \frac{4k^{1.5(1 + \epsilon)}}{n}$.
\end{lemma}
\begin{proof}
We have
\begin{align*}
R &\le n^{0.5\epsilon}\\
&\le \frac{\sqrt{4\log{m}}}{4} \cdot n^{0.5\epsilon} \text{ [Assuming $\log{m} \ge 4$]}\\
&\le \frac{\sqrt{4\log{m}}}{4^{\frac{1}{1 + \epsilon}}} \cdot n^{\frac{0.5\epsilon + 0.5\epsilon^2}{1 + \epsilon}}\\
&\le \sqrt{4\log{m}} \cdot \left(\frac{n^{0.5\epsilon + 1.5\epsilon^2}}{4}\right)^{\frac{1}{1 + \epsilon}} \text{ [Since $0.5\epsilon^2 \le 1.5\epsilon^2$]}\\
&= \sqrt{4\log{m}} \cdot \left(\frac{n}{4n^{(\frac{2}{3} - \epsilon)(1.5 + 1.5\epsilon)}}\right)^{\frac{1}{1 + \epsilon}} \\
&\le \sqrt{4\log{m}} \cdot \left(\frac{n}{4k^{1.5(1 + \epsilon)}}\right)^{\frac{1}{1 + \epsilon}}, \text{ [Since $k < n^{\frac{2}{3} - \epsilon}$]}
\end{align*}
and re-arranging gives
\begin{align*}
\left(\frac{\sqrt{4\log{m}}}{R}\right)^{1 + \epsilon} &\ge \frac{4k^{1.5(1 + \epsilon)}}{n}.
\end{align*}
\end{proof}
\section{Introduction}
1-bit compressive sensing (CS) is a fundamental high-dimensional statistical problem and is of interest both from a theoretical point of view and in practical applications, where sensing is performed subject to quantization. Since its introduction in \cite{Bou08}, the theory of 1-bit CS has seen several advances, with the required number of measurements varying depending on the recovery guarantee, assumptions on the underlying signal being estimated, and the possible presence of noise.
Under the noiseless model, the measurements $b \in \{-1,1\}^m$ are generated according to
\begin{equation}
b = {\rm sign}(Ax^*),
\end{equation}
where $x^* \in \mathbb{R}^n$ is the unknown underlying signal, $A \in \mathbb{R}^{m \times n}$ is the measurement matrix, and the sign function ${\rm sign}(\cdot)$ is applied element-wise. The value of ${\rm sign}(0)$ is not important for the results of this paper, so we adopt the convention ${\rm sign}(0) = 1$.
We assume that the signal $x^*$ lies in some class $\mathcal{X}_k$. The most common choice for $\mathcal{X}_k$ is the set of all $k$-sparse signals for some $k \ll n$. Sparsity is satisfied by many natural families of signals (e.g., image and audio in appropriate representations), and assuming it often leads to statistical and computational benefits (see \cite{Has15}). As we discuss further below, we will assume that $\mathcal{X}_k$ is a {\em subset} of the $k$-sparse signals, but possibly a strict subset. Given such a class, the problem of 1-bit CS for signals in $\mathcal{X}_k$ comes with several important distinctions:
\begin{itemize}
\item For {\em universal 1-bit CS} (also known as the {\em for-all guarantee}), it is required that the measurement matrix $A$ achieves some success criterion uniformly for all $x^* \in \mathcal{X}_k$.
\item For {\em non-universal 1-bit CS} (also known as the {\em for-each guarantee}), the measurement matrix $A$ may be randomized, and the success criterion is only required to hold with high probability over this randomness for each fixed $x^* \in \mathcal{X}_k$.
\item Under the {\em support recovery} success criterion, the goal is to produce an estimate $\Shat \subseteq [n]$ such that $\Shat = {\rm supp}(x^*)$, where ${\rm supp}(\cdot)$ denotes the support.
\item Under the {\em approximate recovery} success criterion, the goal is to produce an estimate $\hat{x}$ such that\footnote{Note that the output of 1-bit CS measurements is invariant to scaling, so the magnitude cannot be recovered.} $\big\| \frac{x^*}{\|x^*\|} - \frac{\hat{x}}{\|\hat{x}\|} \big\|$ is no larger than some pre-specified threshold $\epsilon$.
\end{itemize}
The distinction between the universal and non-universal criteria has a drastic impact on the required number of measurements. For instance, when $\mathcal{X}_k$ is the set of all $k$-sparse vectors, support recovery is possible under the for-each guarantee using $O(k \log n)$ measurements \cite{Ati12,Ber17}, whereas attaining the for-all guarantee requires $\Omega\big( k^2 \frac{\log n}{\log k} \big)$ measurements \cite{Ach17}.
On the other hand, there is recent evidence that for more {\em restricted} classes $\mathcal{X}_k$, this gap can be narrowed. In particular, it was shown in \cite{Flo19} that for the set of {\em binary} $k$-sparse vectors (i.e., the non-zero entries are all equal to one), universal exact recovery is possible using $O\big(k^{3/2} + k\log\frac{n}{k} \big)$ measurements. This is achieved using a two-step design and decoding procedure, and improves on the i.i.d.~design of \cite{Jac13} requiring $O\big( k^{3/2}\log n \big)$ measurements. A simple counting argument shows that $\Omega\big(k \log \frac{n}{k}\big)$ measurements are needed even for binary input vectors, and this is the best lower bound to date.
The preceding observations raise the following two important questions:
\begin{enumerate}
\item Is the $k^{3/2}$ dependence unavoidable for universal recovery in the binary setting?
\item Under what broader classes $\mathcal{X}_k$, can we similarly avoid the $k^2$ dependence?
\end{enumerate}
In this paper, we partially address the first question by showing that the $k^{3/2}$ dependence is unavoidable, at least when considering i.i.d.~Gaussian measurements, as was considered in \cite{Jac13}.\footnote{In contrast, \cite{Flo19} used a ``two-step'' design consisting of a list-disjunct part followed by a Gaussian part. We do not prove $k^{3/2}$ to be unavoidable for such designs, though we expect this to be the case.} In more detail, we show that if significantly fewer measurements are used, then with constant probability, the matrix will fail to achieve the universal support recovery guarantee. Analogous hardness results {\em with respect to a given random design} have frequently appeared in other compressive sensing works, e.g., \cite{Wai09,Ati12,Sca15}.
To address the second question above, we consider the class of $k$-sparse vectors with {\em bounded dynamic range}:
\begin{equation}
\mathcal{X}_k(R) = \bigg\{ x \in \mathbb{R}^n \,:\, \| x \|_0 \le k \text{ and } \frac{\max_{i \in \supp(x)} |x_i| }{ \min_{i \in \supp(x)} |x_i| } \le R \bigg\}, \label{eq:set_Xk}
\end{equation}
where $R \ge 1$, and $R$ may grow as a function of $n$. This choice is motivated by the fact that the hardness result (showing $k^2$ dependence) for universal recovery in \cite{Ach17} implicitly considers signals with unbounded dynamic range. Additionally, the bounded dynamic range assumption has been studied previously in the context of for-each 1-bit CS in \cite{GNR10}, and recently, universal 1-bit CS with approximate support recovery \cite{Maz21}.
Observe that setting $R = 1$ in \eqref{eq:set_Xk} recovers the binary case $x \in \{0,1\}^n$ (the constant of $1$ is without loss of generality, since the 1-bit model is invariant to scaling). For signals in $\mathcal{X}_k(R)$, we provide a simple generalization of the above-mentioned $O\big( k^{3/2}\log n \big)$ achievability result as a corollary of \cite{Jac13}, and we provide an impossibility result for i.i.d.~Gaussian designs.
\subsection{Technical Overview}\label{sec:overview}
In order to recover the support of $x \in \mathcal{X}_k(R)$ via 1-bit CS using measurement matrix $A$, it is clearly necessary that for any $y \in \mathcal{X}_k(R)$ such that $\supp(x)\neq \supp(y)$, we also have $\textrm{sign}(Ax) \neq \textrm{sign}(Ay)$. Accordingly, we provide the following definition.
\begin{definition}
The matrix $A \in \mathbb{R}^{m \times n}$ is a {\em valid universal $1$-bit measurement matrix} for support recovery on $\mathcal{X}_k(R)$ if and only if
\begin{align*}
&\forall u \in \mathcal{X}_k(R),\forall v \in \mathcal{X}_k(R), \\
&\supp(u) \neq \supp(v) \implies {\rm sign}(Au) \neq {\rm sign}(Av)
\end{align*}
\end{definition}
Our main result is the following.\footnote{Asymptotic notation such as $\tilde{O}(\cdot)$ and $\tilde{\Omega}(\cdot)$ hides logarithmic factors of the argument.}
\begin{theorem}[Informal; see \cref{maintheorem}]\label{mainthminf}
Suppose $k$ and $R$ are sufficiently small in terms of $n$. Let $A \in \mathbb{R}^{m \times n}$ be a random matrix with i.i.d.~$N(0,1)$ entries. If it holds with constant probability that $A$ is a valid universal $1$-bit measurement matrix for support recovery on $\mathcal{X}_k(R)$, then $m = \tilde{\Omega}\big(Rk^{3/2} + k \log\frac{n}{k}\big)$.
\end{theorem}
The scaling regime of primary interest here is $k\ll n^{2/3}$, since if $k \gg n^{2/3}$ then one would improve on the $k^{3/2}$ scaling by simply letting $A$ be the $n \times n$ identity matrix (i.e., $m = n$). Similarly, the interesting range of $R$ is $R \ll k^{0.5}$, since an upper bound of $O(k^2 \log(n))$ was shown in \cite{Ach17} for the universal support recovery of arbitrary $k$-sparse signals. The assumed scalings of $k$ and $R$ are restricted accordingly in the formal statement, \cref{maintheorem}.
We now turn to an overview of the proof of \cref{mainthminf}. The lower bound of $\log {n \choose k} = \Omega(k \log(n/k))$ is simply due to the fact that the number of distinct measurement outcomes must be at least ${n \choose k}$. Hence, the main goal is to show that with $m \le \tilde{O}(Rk^{3/2})$, $A$ is not a valid universal $1$-bit measurement matrix, with constant probability. Let $v_1, \dots, v_m \in \mathbb{R}^{n-2}$ denote the first $n-2$ coordinates of each of the $m$ rows of $A$.
We will first show that for $A$ to be a valid measurement matrix with constant probability, it must be the case that $\{v_1, \dots, v_m\}$ is $(n-2, k-1, \sqrt{4\log m}/R)$-{\em balanced}, where we say that a collection $V \subseteq \mathbb{R}^{n}$ is $(n, \ell, d)$-balanced if for every set $S\subseteq [n]$ of size $\ell$, there exists $v \in V$ such that $\left| \sum_{i \in S} v_i\right| \leq d$. The idea is as follows: Suppose that there exists a set $S \subseteq [n-2]$ such that for every $v \in \{v_1, \dots, v_m\}$, $\left|\sum_{i \in S}v_i\right| > \frac{\sqrt{4\log m}}{R}$. In this case, the vectors $x$ and $y$ with
\[
x_{i} = \begin{cases}
R & \text{if $i \in S$}\\
1 & \text{if $i = n - 1$}\\
0 & \text{otherwise}\\
\end{cases} \text{ and }
y_{i} = \begin{cases}
R & \text{if $i \in S$}\\
1 & \text{if $i = n$}\\
0 & \text{otherwise}\\
\end{cases}
\]
cannot be distinguished by the measurement matrix $A$, since (as we will show) the entries in the last two columns of $A$ are all at most $\sqrt{4 \log m}$ with high probability. Thus, it suffices to prove a lower bound on the number of Gaussian vectors needed to form an $(n-2, k-1, \sqrt{4\log m}/R)$-{balanced} family.
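To spell out why $x$ and $y$ above cannot be distinguished (an elaboration of the preceding argument, with no new assumptions), note that for every row $a_i$ of $A$,
\begin{equation*}
a_i \cdot x = R\sum_{j \in S} a_{i,j} + a_{i, n-1}, \qquad a_i \cdot y = R\sum_{j \in S} a_{i,j} + a_{i, n},
\end{equation*}
so if $\big|R\sum_{j \in S} a_{i,j}\big| > \sqrt{4\log m} \ge \max(|a_{i,n-1}|, |a_{i,n}|)$, then both inner products have the same sign as $\sum_{j \in S} a_{i,j}$, and hence ${\rm sign}(Ax) = {\rm sign}(Ay)$ even though ${\rm supp}(x) \neq {\rm supp}(y)$.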
The study of bounds on the size of $(n, \ell, d)$-balanced families dates back to the early work of \cite{ABCO88}, which examined the setting $V \subseteq \{\pm 1\}^n$ and $\ell =n$. Indeed, one can view an $(n, \ell, d)$-balanced family as a collection of hyperplanes passing through the origin such that every $\ell$-sparse vector is within a margin $d$ of one of the hyperplanes. There is a long line of work on such problems and variants thereof, but {\em without} the sparsity constraint.
For instance, \cite[Theorem 14]{calkin2008cuts} shows that if the number of hyperplanes is $o(n^{3/2})$, then the expected number of edges that are not cut is unbounded. See also the recent work \cite{YY21} and the references therein.
Returning to our setting, for a subset $S$ of size $\ell$, let $F_S$ denote the event that for every $v \in V$, $|\sum_{i \in S} v_i| > d$. We want to argue that if $m = |V|$ is too small, then $\Pr[\cup_S F_S]$ is lower bounded by a constant. To this end, we invoke {\em de Caen's inequality} that lower bounds the probability of a union.
\begin{lemma}[de Caen's inequality, \cite{Dec97}]\label{decaen}
Let $\{A_i\}_{i \in I}$ be a finite family of events in a probability space. Then:
\[
\Pr\left[\cup_{i \in I} A_i \right] \geq \sum_{i \in I} \frac{\Pr[A_i]^2}{\sum_{j \in I} \Pr[A_i \cap A_j]}.
\]
\end{lemma}
In our application, there are ${n \choose \ell}$ events of interest. For any $S$, $\Pr[F_S]$ is easy to lower bound using standard anti-concentration results for Gaussians. What occupies the bulk of our analysis is an upper bound for the denominator, $\sum_{T} \Pr[F_S \cap F_T]$. Somewhat surprisingly, the calculations here turn out to be quite delicate. To see why, consider the following naive approach: Let $E_S$ denote the event that for a single random Gaussian vector $v \in \mathbb{R}^n$, $|\sum_{i \in S} v_i| \leq d$, and let $\gamma = \Pr[E_S]$. Hence, the numerator is $(1-\gamma)^{2m}$. For the denominator, the inclusion-exclusion formula gives:
\[
\Pr[F_S \cap F_T] = (1 - \Pr[E_S] - \Pr[E_T] + \Pr[E_S \cap E_T])^m.
\]
Now, for low values of $k$, a typical $T$ has small intersection with $S$, and hence, $E_S$ and $E_T$ are approximately independent, i.e., $\Pr[E_S \cap E_T] \approx \gamma^2 \ll \gamma$. Thus, for most $T$, $\Pr[F_S \cap F_T] \approx (1-\Omega(\gamma))^m$, and so one might expect, by invoking \cref{decaen}, that $\Pr[\cup_S F_S] \geq (1-O(\gamma))^m$. Unfortunately, this is not enough, as $\gamma \approx d/\sqrt{\ell}$, and hence, we do not get constant failure probability for $m =\Theta(\ell^{3/2}/d)$ as we desired.
We work more carefully to get a stronger bound. First of all, we do a more refined analysis of the denominator based on the size of the overlap $|T \cap S|$. While for most $T$, the overlap is small and $\Pr[E_S \cap E_T] \approx \gamma^2$, it also holds that for the few sets $T$ with large overlap, $\Pr[E_S \cap E_T]$ approaches $\gamma$. Secondly, we keep close track of the {\em constant} factors in the terms of the denominator, because roughly speaking, we need to show that the numerator and denominator match in the first-order terms and only differ in the second-order terms. A more detailed overview of the proof is available in \cref{sec:balanced}. Our proof also generalizes from Gaussian to Rademacher measurements in the case that $R = 1$; this extension is presented in \cref{sec:rademacher}.
\subsection{Other Related Work}
Under the setting of {\em universal 1-bit CS}, which is our focus, various works have established both upper and lower bounds for both {\em support recovery} and {\em approximate recovery} (i.e., accuracy to within some target Euclidean distance $\epsilon$, up to scaling).
In \cite{plan2012robust}, it was shown that both $O\left(\frac{k}{\epsilon^6}\log{\frac{n}{k}}\right)$ and $O\left(\frac{k}{\epsilon^5}(\log{\frac{n}{k}})^2\right)$ measurements are sufficient for universal approximate recovery. Then, a significant improvement with respect to $\epsilon$ was achieved in \cite{gopi2013one}, showing that $O(k^3\log{\frac{n}{k}} + \frac{k}{\epsilon}\log{\frac{k}{\epsilon}})$ measurements are sufficient. Recently, \cite{Ach17} showed that using RUFFs (Robust Union Free Families), a modification of UFFs (first used in \cite{gopi2013one}), one could further reduce the number of measurements to $O(k^2\log{n} + \frac{k}{\epsilon}\log{\frac{k}{\epsilon}})$. In addition, they provided a lower bound of $\Omega(k\log{\frac{n}{k}} + \frac{k}{\epsilon})$.
As for the support recovery problem, \cite{gopi2013one} established that $O(k^2\log{n})$ measurements are sufficient for non-negative signals, which was generalised to arbitrary real-valued signals with a matching number of measurements in \cite{Ach17}. In addition, \cite{Ach17} provided a nearly tight lower bound of $\Omega(k^2\frac{\log{n}}{\log{k}})$ for this setting.
Signals with bounded dynamic range have received relatively less attention.
Beyond binary vectors \cite{Flo19} (i.e., $R=1$), it has been shown that $O(R^2 k \log n)$ non-adaptive measurements suffice for the weaker {\em for-each} guarantee, while the dependence on $R$ can be avoided altogether using adaptive measurements \cite{Gup10}. More recently, for universal 1-bit CS with {\em approximate support recovery}, it was shown in \cite{Maz21} that having a known bounded dynamic range $R$ reduces the leading term in the number of tests from $k^{3/2}$ to $R k$. We additionally show in \cref{subsec:upperbound} that an upper bound for {\em exact} support recovery under the for-all guarantee can be deduced from an approximate recovery result given in \cite{Jac13} (deducing the same from \cite{Maz21} is also straightforward).
Other related problems include group testing \cite{Du93,Ald19}, standard CS \cite{Wai09,Ati12,Sca15}, and multi-bit CS \cite{Sla15}, but the precise details are less relevant for comparing to our own results.
\section{Preliminaries}
Throughout the paper, the function $\log(\cdot)$ has base $e$, and all information measures are in units of nats.
\subsection{Useful Technical Tools}
We will routinely use the following standard estimate for the Gaussian density.
\begin{lemma}[Small ball probabilities for Gaussians]
\label{smallballlemma}
If $X \sim N(0, \sigma^2)$, then for any $0<\delta<1$:
\[
\sqrt{\frac2\pi} \delta - \frac{\delta^3}{\sqrt{2\pi}} \leq \Pr[|X| \leq \delta \sigma] \leq \sqrt{\frac2\pi} \delta.
\]
In addition, the upper bound also holds for $\max_t \Pr[t-\delta\sigma \leq X \leq t+\delta\sigma]$.
\end{lemma}
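For completeness, we sketch the standard calculation behind this lemma: bounding the Gaussian density by its maximum value $\frac{1}{\sqrt{2\pi}\sigma}$ gives the upper bound, and the inequality $e^{-u} \ge 1 - u$ gives the lower bound, via
\begin{align*}
\Pr[|X| \leq \delta\sigma] = \int_{-\delta\sigma}^{\delta\sigma} \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-t^2/(2\sigma^2)}\,dt
&\leq \frac{2\delta\sigma}{\sqrt{2\pi}\,\sigma} = \sqrt{\frac{2}{\pi}}\,\delta,\\
\Pr[|X| \leq \delta\sigma] \geq \int_{-\delta\sigma}^{\delta\sigma} \frac{1}{\sqrt{2\pi}\,\sigma}\left(1 - \frac{t^2}{2\sigma^2}\right)dt
&= \sqrt{\frac{2}{\pi}}\,\delta - \frac{\delta^3}{3\sqrt{2\pi}} \;\geq\; \sqrt{\frac{2}{\pi}}\,\delta - \frac{\delta^3}{\sqrt{2\pi}}.
\end{align*}
The same bound on the density applies to any interval of length $2\delta\sigma$, which gives the final claim about $\max_t \Pr[t-\delta\sigma \leq X \leq t+\delta\sigma]$.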
We also recall the following well-known asymptotic results about $\log{{n \choose k}}$:
\begin{itemize}
\item If $k = o(n)$, then
\begin{equation}
\label{smallklognchoosek}
\log{{n \choose k}} = (1 + o(1))k\log{\left(\frac{n}{k}\right)}
\end{equation}
\item If $k = \Theta(n)$, then
\begin{equation}\log{{n \choose k}} = (1 + o(1))nH_2\left(\frac{k}{n}\right),
\end{equation}
where $H_2(p)$ is the binary entropy function in nats, i.e.,~$H_2(p) = p\log{\big(\frac{1}{p}\big)} + (1 - p)\log{\big(\frac{1}{1 - p}\big)}$.
\end{itemize}
\subsection{Upper Bound for Bounded Dynamic Range Signals}
\label{subsec:upperbound}
Here we show that the universal approximate recovery guarantee of \cite{Jac13} translates directly to a universal support recovery guarantee for signals in $\mathcal{X}_k(R)$. Such a result may be considered folklore, but we are not aware of a formal statement in the literature.
The result of interest for our purposes is \cite[Thm.~2]{Jac13}, which reveals that in order to produce an estimate $\hat{x}$ satisfying $\big\| \frac{x^*}{\|x^*\|} - \frac{\hat{x}}{\|\hat{x}\|} \big\| \le \epsilon_0$, an i.i.d.~Gaussian measurement matrix succeeds with probability at least $1-\delta$ (for any fixed constant $\delta > 0$) using a number of measurements satisfying
\begin{equation}
m = O\bigg( \frac{k}{\epsilon_0} \log n + k \log \frac{1}{\epsilon_0} \bigg) \label{eq:m_Jacques}
\end{equation}
with a large enough implied constant. To convert this to a support recovery guarantee, we need to establish how small $\epsilon_0$ should be so that the above guarantee also implies $\supp(\hat{x}) = \supp(x^*)$. To do so, let $x$ and $y$ be two $k$-sparse signals with different supports, and assume without loss of generality that $\|x\| = \|y\| = 1$. Then there must exist some index $i$ where one signal (say $x$) is non-zero but the other is zero. From this entry alone, we have $\|x - y\| \ge x_i$. The assumption of dynamic range $R$ additionally implies that $x_i \ge \frac{1}{\sqrt{(k-1)R^2 + 1}}$, which is attained in the extreme case that $x$ has $k-1$ other non-zero values that are $R$ times higher than $x_i$. It follows that $\|x - y\| \ge \frac{1}{\sqrt{(k-1)R^2 + 1}}$, which means that $\epsilon_0 = \Theta\big( \frac{1}{R\sqrt{k}} \big)$ suffices for support recovery. Hence, \eqref{eq:m_Jacques} becomes
\begin{equation}
m = O\big( R k^{3/2} \log n \big). \label{eq:m_Jacques2}
\end{equation}
This finding naturally generalizes the sufficiency of $O(k^{3/2} \log n)$ measurements for binary signals, and demonstrates that the $k^2$ barrier can be avoided when $R \ll \sqrt{k}$.
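For completeness, the extremal-case calculation behind the choice $\epsilon_0 = \Theta\big(\frac{1}{R\sqrt{k}}\big)$ can be spelled out as follows (the quantity $m_{\min}$ is shorthand introduced only for this remark): if $m_{\min}$ denotes the smallest non-zero magnitude of a unit-norm $x \in \mathcal{X}_k(R)$, then every non-zero entry of $x$ has magnitude at most $R\, m_{\min}$, so
\begin{equation*}
1 = \|x\|^2 \le m_{\min}^2 + (k-1)(R\, m_{\min})^2 = m_{\min}^2\big((k-1)R^2 + 1\big)
\quad\Longrightarrow\quad
m_{\min} \ge \frac{1}{\sqrt{(k-1)R^2 + 1}},
\end{equation*}
and the entry $x_i$ appearing in the argument above is at least $m_{\min}$.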
\section{Lower Bound for Rademacher Measurements}\label{sec:rademacher}
In the Rademacher measurement scheme, each entry of the measurement matrix is an i.i.d.~Rademacher random variable, i.e., $+1$ with probability $0.5$ and $-1$ with probability $0.5$. Our analysis for Gaussian measurements, with slight modifications, is able to provide a lower bound in the case of Rademacher measurements and binary signals.
Before proceeding, we note that support recovery for $\mathcal{X}_k(R)$ with $R \ge 2$ is impossible using Rademacher measurements. We demonstrate this via the following example.
\begin{example}
Let $n = 3$, $k = 2$ and $R = 2$, and consider $x_1 = [2, 1, 0]^T, x_2 = [2, 0, 1]^T$, noting that $\supp(x_1) \neq \supp(x_2)$.
In this case, none of the $2^3 = 8$ combinations of $b = (b_1, b_2, b_3) \in \{\pm 1\}^3$ can differentiate ${\rm sign}(b \cdot x_1)$ from ${\rm sign}(b \cdot x_2)$, i.e.,~${\rm sign}(2b_1 + 1b_2 + 0b_3)$ is the same as ${\rm sign}(2b_1 + 0b_2 + 1b_3)$ for all 8 possible choices of $b$. This is because once the $\pm 1$ coefficient multiplying the entry $2$ is specified, the $\pm 1$ coefficient multiplying the entry $1$ is too small to affect the sign. This example easily extends to general values of $R > 1$ and $k \geq 2$.
\end{example}
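As a purely illustrative sanity check (not part of the analysis), the exhaustive enumeration over the $2^3$ sign patterns in the example can be scripted in a few lines of Python; the names \texttt{x1}, \texttt{x2} below simply mirror the example.
\begin{verbatim}
from itertools import product

x1 = [2, 1, 0]
x2 = [2, 0, 1]

def sign(t):
    # the inner products below are never zero, so the convention at 0 is irrelevant
    return 1 if t >= 0 else -1

# enumerate all 2^3 = 8 Rademacher measurement vectors b
for b in product([+1, -1], repeat=3):
    m1 = sign(sum(bi * xi for bi, xi in zip(b, x1)))
    m2 = sign(sum(bi * xi for bi, xi in zip(b, x2)))
    assert m1 == m2  # a single measurement cannot separate x1 from x2
\end{verbatim}
All eight assertions pass, in agreement with the discussion above.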
In view of this example, we focus our attention on $R = 1$, with binary signals $x \in \{0,1\}^n$.
\begin{theorem}
\label{rademacherthm}
Fix $\epsilon \in \big(0,\frac{2}{3}\big)$, and suppose that $k \le n^{\frac{2}{3} - \epsilon}$. If $A = (a_{i, j}) \in \{\pm 1\}^{m \times n}$, where each $a_{i, j}$ is an independent Rademacher random variable and $m = O(k^{1.5})$ with a small enough implied constant, then with probability at least $\frac{1}{2}$, $A$ is not a valid $1$-bit measurement matrix for support recovery on $\mathcal{X}_k(1)$.
\end{theorem}
\subsection{Proof Sketch}
For the proof of the Rademacher case, we again relate $1$-bit CS to a balancing problem, this time the $(n, k - 1, 1)$-balanced problem.
\begin{lemma}
\label{rademacherreduction}
If $A \in \{\pm 1\}^{m \times n}$ is a valid $1$-bit measurement matrix for support recovery on $\mathcal{X}_k(1)$, then the row vectors of $A$ are $(n, k - 1, 1)$-balanced.
\end{lemma}
This lemma is proved in a near-identical manner to \cref{successreducelemma}, but the term $\frac{\sqrt{4 \log m}}{R}$ is replaced by one (which trivially equals the magnitude of any given entry of $A$). Another slight difference here is that we do not need to isolate two specific entries of $x$; with binary-valued $x$ and $A$, {\em any} two entries can play the role that entries $n-1$ and $n$ played in \cref{successreducelemma}. Since these differences are straightforward, we omit the details.
\begin{comment}
\begin{proof}
$\forall S \subseteq [n]$, where $|S| = k - 1, \exists v_1 \neq v_2 \subseteq [n]$, such that $x \cap y = S$ and $|x| = |y| = k$.
Let $\gamma_1$ and $\gamma_2$ be two indices not in $S$, then one feasible construction of $x$ and $y$ is as follows:
$$x_{i} = \begin{cases}
1 & \text{if $i \in S$ or $i = \gamma_1$}\\
0 & \text{otherwise}\\
\end{cases} \text{ and }
y_{i} = \begin{cases}
1 & \text{if $i \in S$ or $i = \gamma_2$}\\
0 & \text{otherwise}\\
\end{cases} \forall i \in [1, n]
$$
Let the rows of $A$ be denoted by $a_1, a_2, \dots, a_m$.
Since $A$ is a valid $1$-bit measurement matrix for support recovery on $\mathcal{X}_k(1)$, therefore ${\rm sign(Ax)} \neq {\rm sign(Ay)}$, therefore there must exist a row (let the row be $a_i$) in $A$ such that ${\rm sign(a_i \cdot x)} \neq {\rm sign(a_i \cdot y)}$
Without loss of generality, $a_i \cdot x \geq 0$ and $a_i \cdot y < 0$
\begin{align*}
\sum_{j = 1}^{n} x_{j}a_{i, j} \geq 0, \sum_{j = 1}^{n} y_{j}a_{i, j} < 0 \\
\sum_{j \in S} a_{i, j} \geq 0, \sum_{j \in S} a_{i, j} < 0\\
\sum_{j \in S} a_{i, j} + a_{i, \gamma_1} \geq 0, \sum_{j \in S} a_{i, j} + a_{i, \gamma_2} < 0 \\
-a_{i, \gamma_1} \leq \sum_{j \in S} a_{i, j} < -a_{i, \gamma_2}
\end{align*}
Since $a_{i, j} \in \{\pm 1\}$ for all entries, therefore $-1 \leq - a_{i, \gamma_1}, -a_{i, \gamma_2} \leq 1$.
\begin{align*}
-1 \leq -a_{i, \gamma_1} \leq \sum_{j \in S} a_{i, j} \leq -a_{i, \gamma_2} \leq 1 \\
-1 \leq \sum_{j \in S} a_{i, j} \leq 1 \\
\left|\sum_{j \in S} a_{i, j}\right| \leq 1
\end{align*}
So the row vectors of $A$ are balanced.
\end{proof}
\end{comment}
Again having established a suitable reduction, the main remaining task is to prove the following lower bound for the balancing problem.
\begin{theorem}
\label{rademachersidethm}
Fix $\epsilon \in \big(0,\frac{2}{3}\big)$, and suppose that $k$ is an even number satisfying $k \le n^{\frac{2}{3} - \epsilon}$. For a set $V \subseteq \{\pm 1\}^n$ where $|V| = m$ and each $V_{i, j}$ is an i.i.d.~Rademacher random variable, if $m = O(k^{1.5})$ with a small enough implied constant, then with probability at least $\frac{1}{2}$, the set $V$ is not $(n, k, 1)$-balanced.
\end{theorem}
Before outlining the proof, we show how this result yields \cref{rademacherthm}.
\begin{proof}[Proof of \cref{rademacherthm}]
Let $H$ be the event that the matrix $A = (a_{i, j}) \in \{\pm 1\}^{m \times n}$ with i.i.d.~Rademacher entries is not a valid universal $1$-bit measurement matrix for support recovery on $\mathcal{X}_k(1)$.
Let $P$ be the event that the set $V = \{(a_{i, 1}, a_{i, 2}, \dots, a_{i, n})| i \in [1, m]\}$ is not $(n, k - 1, 1)$-balanced.
Assuming momentarily that $k-1$ is even, we have from \cref{rademacherreduction} that $P \implies H$. From \cref{rademachersidethm}, we have $\Pr[P] \geq \frac{1}{2}$ when $m = O((k-1)^{1.5})$. Under the big O notation we are able to change $k - 1$ to $k$ with only a slight change in the constant, and the desired lower bound on $\Pr[H]$ follows from that on $\Pr[P]$.
If $k-1$ is odd (i.e., $k$ is even), then we can simply use the fact that $\mathcal{X}_{k-1}(1) \subseteq \mathcal{X}_{k}(1)$ and repeat the argument above with sparsity parameter $k-1$, which invokes \cref{rademachersidethm} with the even parameter $k-2$. Approximating $k-2$ by $k$ under the big-O notation in the same way as above, we deduce the same result.
\end{proof}
\subsection{Key Differences in Proof of \cref{rademachersidethm} w.r.t.~Proof of \cref{sidetheorem}}
Recall that $E_{s,i}$ is the balancing failure event for set $s$ and a single measurement $i$, and $F_s = \bigcap_{i=1}^m E_{s,i}$ is the overall failure event for set $s$.
We again split $\sum_{t \in X}Pr[F_s \cap F_t]$ into three terms $A$, $B$ and $C$. However in this analysis our cut-off point between $A$ and $B$ is $k^{-0.5}$ (instead of the original $\widetilde{k}^{-0.5}$). Thus, we define
$$A = \sum_{\beta \in [0, k^{-0.5}]}v_\beta, B = \sum_{\beta \in [k^{-0.5}, 0.9]}v_\beta \text{ and } C = \sum_{\beta \in [0.9, 1]}v_\beta,$$
with $v_{\beta}$ defined in \eqref{eq:def_v}.
Accordingly, in the original proof, all occurrences of $\widetilde{k}$ simplify to $k$, and $r = \sqrt{\frac{2}{\pi \widetilde{k}}}$ is redefined to $r = \sqrt{\frac{2}{\pi k}}$.
Since this is a discrete setting, the proofs of $r - O(\frac{1}{k}) \leq \Pr[\overline{E_{s, i}}] \leq r$ and $\Pr[\overline{E_{s, p}} \cap \overline{E_{t, p}}] \leq r^2(1 + \beta + O(\beta^2))$ (when $|x_s \cap x_t| = \beta k$) are established slightly differently, outlined as follows.
\begin{lemma}
\label{rademacherlemma2nchoosen} ${2n \choose n} = \frac{4^n}{\sqrt{\pi n}}\left(1 - \frac{1}{8n} + O(n^{-2})\right)$ for all integers $n \geq 1$.
\end{lemma}
\begin{proof}
This follows easily from Stirling's approximation:
\begin{align*}
{2n \choose n} &= \frac{2^{2n}}{\sqrt{\pi n}}\left(1 - \frac{1}{8n} + \frac{1}{128n^2} + \frac{5}{1024n^3} + \cdots\right) \\
&= \frac{4^n}{\sqrt{\pi n}}\left(1 - \frac{1}{8n} + O(n^{-2})\right).
\end{align*}
\end{proof}
\begin{lemma}
\label{rademacherlemmaE}
For even-valued $k$ we have $r - O\left(\frac{1}{k^{1.5}}\right) \leq \Pr[\overline{E_{s, i}}] \leq r$.
\end{lemma}
\begin{proof}
We have
\begin{align*}
\Pr[\overline{E_{s, i}}] &= \Pr[|x_s \cdot v_i| \leq 1]\\
&= \Pr[x_s \cdot v_i = 0] \text{ [Since $k$ is even]}\\
&= {k \choose {\frac{k}{2}}}2^{-k}\\
&= \sqrt{\frac{2}{\pi k}}\left(1 - \frac{1}{4k} + O(k^{-2})\right) \text{ [Using \cref{rademacherlemma2nchoosen}]}.
\end{align*}
It follows for large enough $k$ that $r - O\big(\frac{1}{k^{1.5}}\big) \leq \Pr[\overline{E_{s, i}}] \leq r$.
\end{proof}
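As a quick numerical illustration of the two lemmas above (not needed for the proofs), one can compare the exact probability with the asymptotic value $r$ for, say, $k = 100$:
\begin{verbatim}
from math import comb, pi, sqrt

k = 100                           # an even sparsity level
exact = comb(k, k // 2) / 2**k    # Pr[ x_s . v_i = 0 ]
r = sqrt(2 / (pi * k))            # asymptotic value
print(exact, r)                   # approx 0.07959 vs 0.07979
\end{verbatim}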
\begin{lemma}
\label{rademacherlemmaEbeta}
For even-valued $k$ and $|x_s \cap x_t| = \beta k$, where $\beta \leq 0.9$, we have for each index $p \in [m]$ that $\Pr[\overline{E_{s, p}} \cap \overline{E_{t, p}}] \leq r^2(1 + \beta + O(\beta^2))$.
\end{lemma}
\begin{proof}
Let $|x_s \cap x_t| = \beta k = d$, and observe that
\begin{align*}
\Pr[\overline{E_{s, p}} \cap \overline{E_{t, p}}] &= \Pr[|x_s \cdot v_p| \leq 1, |x_t \cdot v_p| \leq 1]\\
&= \Pr[x_s \cdot v_p = 0, x_t \cdot v_p = 0] \text{ [Since $k$ is even]}
\end{align*}
Fix $i$ to be the number of $+1$ entries of $v_p$ among the coordinates in $x_s \cap x_t$. Then the number of $+1$ entries of $v_p$ among the coordinates in $x_s - x_t$ must be exactly $\frac{k}{2} - i$, so that the total number of $+1$ entries within the support of $x_s$ is $i + \frac{k}{2} - i = \frac{k}{2}$. Similarly, there need to be exactly $\frac{k}{2} - i$ such entries among the coordinates in $x_t - x_s$. Hence,
\begin{align*}
\Pr[\overline{E_{s, p}} \cap \overline{E_{t, p}}] &= \sum_{i = 0}^{d}{d \choose i}2^{-d} \times \left({k - d \choose \frac{k}{2} - i}2^{-(k - d)}\right)^{2} \\
&\leq \frac{2}{\pi (k - d)}\sum_{i = 0}^{d}{d \choose i}2^{-d} \text{ [Using ${k - d \choose \frac{k}{2} - i} \leq {k - d \choose \frac{k - d}{2}} \leq \sqrt{\frac{2}{\pi (k - d)}}2^{k - d}$]}\\
&= \frac{2}{\pi (k - d)}\\
&= \frac{2}{\pi k (1 - \beta)} \\
&= r^2(1 + \beta + O(\beta^2)). \text{ [Using a Taylor Series]}
\end{align*}
\end{proof}
For the analysis showing $A \leq 1.5\alpha$ under $m = O(k^{1.5})$ (instead of the original $m = O(\widetilde{k}^{1.5})$), the technical details follow the same ideas and the proof goes through unchanged.
In the analysis of $B$ and $C$, the original choice $m = O(\frac{R}{\sqrt{\log{m}}}k^{1.5}\log{k})$ has the factor $\frac{R}{\sqrt{4\log{m}}}$ because of the $\big(n - 2, k - 1, \frac{\sqrt{4\log{m}}}{R}\big)$-balanced problem reduction.
Since we are now using a different reduction, namely to the $(n - 1, k - 1, 1)$-balanced problem, the analysis simplifies and requires only $m = O(k^{1.5}\log{k})$. But $m = O(k^{1.5})$ is strictly smaller and is already assumed in the analysis of $A$, so here we work under the common setting of $m = O(k^{1.5})$ for all three terms.
As for the analysis of $B$, one of the few things that differs in the proof is that now we need to show that $\frac{\partial f(\beta)}{\partial \beta} \ge 0$ for all $\beta > k^{-0.5}$ (instead of the original $\forall \beta > \widetilde{k}^{-0.5}$). Because it is possible that $k^{-0.5} \le \widetilde{k}^{-0.5}$, the correctness of this statement is non-trivial, and is described below. The remaining proof details are essentially unchanged.
\begin{lemma}
Given $k \le n^{\frac{2}{3} - \epsilon}$, we have for $\beta \ge k^{-0.5}$ that $\frac{\partial f}{\partial \beta}(\beta) \ge 0$, where $f(\cdot)$ is defined in \eqref{eq:f_def}.
\end{lemma}
\begin{proof}
The condition $\beta \ge k^{-0.5}$ implies that
\begin{gather*}
\frac{\beta}{1 - \beta} \ge \beta \ge k^{-0.5}\\
\implies \left(\frac{\beta}{1 - \beta}\right)^{1 + \epsilon} \ge k^{-0.5(1 + \epsilon)}.
\end{gather*}
In addition, the assumption $k \le n^{\frac{2}{3} - \epsilon}$ implies that
\begin{align*}
2k^{0.5(1 + \epsilon)} &\le 2n^{0.5(\frac{2}{3} - \epsilon)(1 + \epsilon)}\\
&= 2n^{\frac{1}{3} - \frac{\epsilon}{6} - \frac{\epsilon^2}{2}}\\
&\le n^{\frac{1}{3} + \epsilon} \text{ [for sufficiently large $n$]},
\end{align*}
and hence,
\begin{align*}
2k^{0.5(1 + \epsilon)} &\le \frac{n}{k} \text{ [Since $k \le n^{\frac{2}{3} - \epsilon}$]}\\
\frac{2k}{n} &\le k^{-0.5(1 + \epsilon)}.
\end{align*}
Therefore,
\begin{align*}
\left(\frac{\beta}{1 - \beta}\right)^{1 + \epsilon} &\ge \frac{2k}{n}\\
&\ge \frac{k}{n - k} \text{ [using $n \ge 2k$]},
\end{align*}
and re-arranging gives
\begin{gather*}
\frac{n - k}{k} \left(\frac{\beta}{1 - \beta}\right)^{1 + \epsilon} \ge 1\\
\implies k\log{\left(\frac{n - k}{k} \left(\frac{\beta}{1 - \beta}\right)^{1 + \epsilon}\right)} \ge 0.
\end{gather*}
From \eqref{eq:f_deriv}, the left-hand side equals $\frac{\partial f}{\partial \beta}(\beta)$, and the proof is complete.
\end{proof}
Lastly, for the analysis of $C$, the original proof goes through without any significant modification.

\section{Introduction}
At least 50\% of radio-quiet AGN exhibit evidence for photoionized
outflows in their X-ray spectra \citep{Reynolds97, George98, Crenshaw03,
Porquet04, Blustin05, McKernan07}. The signatures of these winds
consist of absorption and emission features at soft (0.4--2~keV) and
hard (6--8~keV) X-ray energies, coincident with ionized O, N, Ne, Mg,
Si and Fe lines blueshifted in the observer's rest-frame. The
inferred velocities of the winds are typically in the range
$100-1000$\,km\,s$^{-1}$, but can be as high as $\sim 0.1c$ in some
sources \citep{Chartas02, Chartas03, Pounds03, Reeves03, Reeves08}. It is also
possible that the energy budget of the outflows of some AGN can
approach a significant fraction of the bolometric or even Eddington
luminosity \citep{KP03}.
In stark contrast, the X-ray evidence for nuclear outflows is very scarce
in Broad-Line Radio Galaxies (BLRGs) and in radio-loud AGN generally.
Previously, the radio-loud quasar 4C~+74.26 showed weak
absorption features at $\sim$ 1~keV with {\it ASCA}\ \citep{Ballantyne05},
while the BLRG Arp\,102B showed neutral X-ray absorption with {\it ASCA}\ and a
UV outflow of a few hundred km\,s$^{-1}$ \citep{Erac03}.
Furthermore, two radio galaxies, 3C~445 and 3C~33 \citep{Sambruna07, Evans06},
also exhibit soft X-ray emission lines below 2~keV,
which could originate from spatially extended material (in
3C~33; \citet{Torresi09a}). Disk winds are also expected in
radio-loud AGN as ingredients for jet formation \citep{Blandford82}.
To differentiate between radio-loud and radio-quiet AGN, here we adopt
the radio-loudness parameter $R_{\rm L} = \log_{10}(f_{\rm 5GHz}/f_{4400})$,
where $f_{\rm 5GHz}$ is the core 5\,GHz radio flux and $f_{4400}$ is the
flux at 4400\AA, both in units of mJy \citep{Kellerman89}. Generally,
$R_{\rm L}>1$ for radio-loud AGN \citep{Wilkes87}, while for 3C\,382, $R_{\rm L}=1.9$
\citep{Lawson97}. If the extended radio emission from 3C\,382
is also included, then $R_{\rm L}$ may be considerably higher.
In this Letter we present direct evidence for outflowing gas from the nucleus
of the nearby ($z=0.05787$), bright BLRG 3C~382. A re-analysis of our
118~ks {\it Chandra}\ HETG (High Energy Transmission Grating) observations
\citep{Gliozzi07} revealed several blue-shifted absorption lines between
$0.7-2.0$\,keV which suggest the presence of a large-scale
($10-1000$\,pc) outflow in this source with a velocity of
800\,km\,s$^{-1}$.
The organization of this Letter is as follows. In \S~2 we describe the
{\it Chandra}\ data reduction and analysis; in \S~3 the
results of the spectral analysis; Discussion and Conclusions follow in
\S~4. Throughout this paper, a concordance cosmology with H$_0=71$ km
s$^{-1}$ Mpc$^{-1}$, $\Omega_{\Lambda}$=0.73, and $\Omega_m$=0.27
\citep{Spergel03} is adopted. Errors are quoted to 90\% confidence
for 1 parameter of interest (i.e. $\Delta \chi^{2}$ or $\Delta C=2.71$).
\section{The Chandra HETG Data}
Chandra observed 3C\,382 with the HETG for a net exposure of 118\,ks
between 27--30 November 2005. The $\pm1$ order spectra were summed
for the MEG (Medium Energy Grating) and HEG (High Energy Grating)
respectively, along with their response files. The summed first order
count rates for the MEG and HEG are 0.867\,counts\,s$^{-1}$ and
0.379\,counts\,s$^{-1}$ respectively, while the MEG data were fitted
between 0.5--7.0\,keV and the HEG from 1.0--9.0\,keV.
\section{The Warm Absorber in 3C\,382}
The {\it Chandra}\ HETG data were first analyzed by \citet{Gliozzi07},
who focused on the continuum and its variability. Our results are in
agreement with theirs. Specifically, the MEG and HEG data were fitted
by an absorbed power law with photon index $\Gamma=1.66\pm0.01$ plus a
blackbody with $kT=92\pm6$~eV to parameterize the soft excess below
1~keV \citep{Gliozzi07}, absorbed by a Galactic line of sight
column of $N_{\rm H, Gal}=7.0\times10^{20}$\,cm$^{-2}$ \citep{Dickey90}.
Figure~1 shows the broad-band HETG spectrum fitted
with an absorbed power-law only, to illustrate that a soft excess is clearly present
below 1\,keV. The 0.5-9\,keV band flux is
$6.4\times10^{-11}$\,erg\,cm$^{-2}$\,s$^{-1}$. Even upon adding a
blackbody to parameterize the soft excess, the fit is still formally
unacceptable ($\chi^2$/dof = 632/477, null probability $2.4 \times
10^{-6}$) as there are clear residuals around 1 keV that indicate the
presence of a warm absorber.
To analyse the warm absorber in detail, the HEG and MEG spectra were
binned more finely to sample the resolution of the detector, at
approximately the HWHM of the spectral resolution (e.g., $\Delta\lambda
=0.01$\,\AA\ bins for the MEG). For the fits, the C-statistic was used
\citep{Cash79}, as there are fewer than 20 counts per resolution bin. The
absorption lines were modelled with Gaussian profiles and the continuum model
was adopted from above. Table~1
lists the detected lines with their observed and inferred properties,
and their significance as per the C-statistic. Figure~2
shows the portions of the HETG spectrum containing the strongest
lines, with the model overlaid.
The seven absorption lines in Table~1 and Figure~2 are all detected at
high confidence (corresponding to $\Delta C>18$, or $>99.9\%$
confidence for 2 parameters of interest).
The lines likely arise from the $1s-2p$ transitions of
Ne\,\textsc{ix}, Ne\,\textsc{x}, Si\,\textsc{xiii}, and
Si\,\textsc{xiv} and the $2p-3d$ lines of Fe\,\textsc{xix-xxi}. The
two statistically weaker $1s-2p$ lines of Mg\,\textsc{xi} and
Mg\,\textsc{xii} may also be present, which have outflow velocities
consistent with the other lines.
Initially we assume that the lines have the same velocity width within
the errors. The velocity width of the absorption lines is then
$\sigma=340\pm70$\,km\,s$^{-1}$ (or $780\pm160$\,km\,s$^{-1}$ FWHM) and the
lines are clearly resolved. Even at 99\% confidence
($\Delta C=9.2$ for 2 parameters), the velocity width is constrained to
$\sigma=340\pm140$\,km\,s$^{-1}$. Upon allowing the velocity width of the
individual lines to vary, then they are constrained to lie
within the range $\sigma=250-500$\,km\,s$^{-1}$ as shown in Table\,1. The mean outflow
velocity is $-810$\,km\,s$^{-1}$. The overall fit statistic is $C =
3804$ for 3811 bins.
We used the photoionization code \textsc{xstar} \citep{Kallman04}
to derive the parameters of the absorber, assuming the baseline
continuum described above, including the soft excess. Solar abundances
are assumed throughout \citep{GS98}.
An important input parameter is the turbulent velocity, which can affect
the absorption line equivalent widths and hence the derived column
density. We experimented with two
different values of the turbulent velocity chosen to represent two
likely extremes: (i) a lowest value of $v_{\rm
turb}=100$\,km\,s$^{-1}$ and (ii) $v_{\rm turb}=300$\,km\,s$^{-1}$,
the latter being consistent with the measured width of the absorption
lines. The fitted continuum parameters are $\Gamma=1.68\pm0.02$ and
for the blackbody, $kT=110\pm8$\,eV. For case (i), then $N_{\rm H} =
(3.2\pm0.6) \times 10^{21}$\,cm$^{-2}$, the ionization parameter is
$\log \xi$\footnote{The units of $\xi$ are erg\,cm\,s$^{-1}$.}$= 2.45^{+0.13}_{-0.08}$
and the outflow velocity is $v_{\rm out} = -810^{+60}_{-55}$\,km\,s$^{-1}$. The fit
statistic is $C/{\rm bins} = 3795/3811$. For case (ii), then $N_{\rm
H} = (1.30\pm0.25) \times 10^{21}$\,cm$^{-2}$, $\log \xi =
2.45^{+0.06}_{-0.07}$ and the outflow velocity
$v_{\rm out} = -840^{+60}_{-50}$\,km\,s$^{-1}$. The fit statistic is
$C/{\rm bins} = 3783/3811$.
If the warm absorber is not included in the model, then the
fit statistic is substantially worse by $\Delta C=220$ (compared to model (ii)).
Only a single outflowing layer of gas is required to model the warm absorber.
The higher turbulence velocity model is statistically preferred and is
consistent with the measured 340\,km\,s$^{-1}$ widths of the
lines. Either model yields an outflow velocity of $-800$\,km\,s$^{-1}$
within a statistical error of $<10$\%.
Hereafter we adopt the parameters from model (ii), as the turbulent
velocity is consistent with widths of the individual absorption lines.
However in neither model is the fitted
column density of the absorber as high as the value of $N_{\rm H} \sim
3\times 10^{22}$\,cm$^{-2}$ reported by \citet{Torresi09b} from an
analysis of a short 34.5\,ks XMM-Newton/RGS observation of 3C\,382 on April 28, 2008.
If the column density of the warm absorber is fixed to the value of
$N_{\rm H} = 3\times 10^{22}$\,cm$^{-2}$ in the HETG spectrum, then the fit statistic is
considerably worse ($C/{\rm bins} = 8390/3811$).
As a consistency check, we analyzed
the archival RGS data of 3C\,382 with model (ii) and the same continuum form
as above. We found that the column density and ionization parameter were degenerate
with each other, given the short exposure of the RGS spectrum. Thus for the RGS,
$N_{\rm H} = 1.4^{+1.4}_{-1.3} \times 10^{22}$\,cm$^{-2}$,
$\log \xi = 3.4\pm1.0$ and the outflow velocity
$v_{\rm out} = -1200^{+300}_{-500}$\,km\,s$^{-1}$.
The fit statistic improves only by $\Delta C=25$ to $C/{\rm bins} =
826/820$ upon adding the absorber.
Thus within the larger errors, the parameters are consistent
with those obtained from the HETG. The individual lines detected in the Chandra HETG
observation
were also compared to the absorption lines claimed by \citet{Torresi09b} on the basis
of the RGS data. With the higher signal to noise ratio of the HETG spectrum compared to
the RGS spectrum,
we only confirm the detection
of one line reported by \citet{Torresi09b}, the Fe\,\textsc{xx} line at a rest--frame
energy of 1025\,eV\footnote{The line reported at 1.356\,keV by \citet{Torresi09b} may also be
associated with the $1s-2p$ line of Mg\,\textsc{xi}, as noted in Table~1.}.
\section{Discussion and Conclusions}
The Chandra HETG spectra have revealed an ionized outflow in the BLRG
3C\,382. The outflow parameters are well
determined, with $N_{\rm H} = (1.30\pm0.25) \times
10^{21}$\,cm$^{-2}$, $\log \xi =
2.45^{+0.06}_{-0.07}$ and $v_{\rm out} =
-840^{+60}_{-50}$\,km\,s$^{-1}$, while the absorption line widths are
resolved with $\sigma=340\pm70$\,km\,s$^{-1}$.
To characterize the outflow, we define the (unabsorbed) ionizing
luminosity $L_{\rm ion}$, which in \textsc{xstar} is defined from
$1-1000$\,Rydberg. This depends on the continuum model fitted to the
data and it is important to take into account the soft excess. Using
the best fit powerlaw plus blackbody continuum, then $L_{\rm ion} =
1.2\times10^{45}$\,erg\,s$^{-1}$. We note that if a different continuum form
is used to parameterize the soft excess, e.g. a broken power-law, then
the ionizing luminosity can be a factor of $\sim 2$ higher; keeping
this caveat in mind we adopt the more conservative lower luminosity
value of $1.2\times 10^{45}$\,erg\,s$^{-1}$.
\subsection{The Location of the Absorber}
The upper bound on the wind radius ($R_{\rm out}$) is determined by
geometrical constraints: if the absorber is a thin shell of thickness
$\Delta R/R \ll 1$, then $N_{\rm H} = n \Delta R$, where $n$ is the
electron number density. As the ionization parameter of the absorber
is defined as $\xi = L_{\rm ion}/nR^{2}$, substitution gives
$R_{\rm out} \ll L_{\rm ion} / (N_{\rm H} \xi) = 3.3 \times 10^{21}$\,cm (or
$\ll 1$\,kpc)\footnote{Note that the same radius is obtained by integrating
down the line of sight of a homogeneous radial outflow.}.
The lower bound is set by the escape velocity,
i.e., for the gas to escape the system as an outflow we require $R_{\rm esc}
> (c^{2} / v^{2})\, R_{\rm s}$, where $R_{\rm s} = 2GM/c^2$ is the black
hole Schwarzschild radius and $v=800$\,km\,s$^{-1}$. If for 3C\,382,
$M = 1\times10^{9} \hbox{$\rm ~M_{\odot}$}$ (with a 40\% uncertainty, see \citet{Mar04}),
then $R_{\rm esc} > 1.4 \times 10^{5}\, R_{\rm s} \simeq 4.2 \times 10^{19}\,{\rm cm} \simeq 13$\,pc.
In other words the location of the soft X-ray outflow is likely bounded between
approximately 10\,pc and 1 kpc.
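For transparency, the two limits above are simply the following substitutions of the adopted values ($L_{\rm ion}$, $N_{\rm H}$, $\xi$, $v$ and $M$ as quoted earlier):
\begin{align*}
R_{\rm out} &\ll \frac{L_{\rm ion}}{N_{\rm H}\,\xi} = \frac{1.2\times10^{45}\,{\rm erg\,s^{-1}}}{(1.3\times10^{21}\,{\rm cm^{-2}})\,(10^{2.45}\,{\rm erg\,cm\,s^{-1}})} \simeq 3.3\times10^{21}\,{\rm cm}, \\
R_{\rm esc} &> \left(\frac{c}{v}\right)^{2} R_{\rm s} \simeq (375)^{2}\,(3.0\times10^{14}\,{\rm cm}) \simeq 4.2\times10^{19}\,{\rm cm} \simeq 13\,{\rm pc},
\end{align*}
where $R_{\rm s}=2GM/c^{2}\simeq3.0\times10^{14}$\,cm for $M=10^{9}\hbox{$\rm ~M_{\odot}$}$.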
Furthermore, the outflow velocity and absorption line FWHM of
$\sim 800$\,km\,s$^{-1}$ are similar to the widths of the narrow optical
forbidden lines of [O\,\textsc{iii}], [O\,\textsc{i}] and [Si\,\textsc{ii}], which for 3C\,382
lie in the range $400-600$\,km\,s$^{-1}$, as measured from the spectra of \citet{Erac94}.
In particular the [O\,\textsc{iii}] emission line from the \citet{Erac94}
spectrum appears to have an asymmetric profile, with the blue-wing
extending to $\sim -1070$\,km\,s$^{-1}$ from the line centroid, which appears
$\sim370$\,km\,s$^{-1}$ broader than the red-wing.
Although this is not evidence
for observing outflowing gas through the direct line of sight, the [O\,\textsc{iii}] emission
may suggest we are viewing outflowing gas along a different sight-line.
Indeed this effect is also seen in other
radio galaxies \citep{Gelderman1994}.
The coincidence between the soft X-ray absorbing gas and extended [O\,\textsc{iii}]
emission has also been noted in the radio-quiet quasar, MR\,2251-178 \citep{Kaspi04}.
Thus the origin
of the X-ray absorption appears consistent with the optical Narrow Line Region (NLR).
The association between the soft X-ray absorption in 3C\,382
and any rest--frame UV absorption could be tested
with future simultaneous Chandra and HST observations.
The density of the outflow is then $n = L_{\rm ion} / (\xi R^{2})$. For
the parameters above, $n \simeq 0.4 - 2400$\,cm$^{-3}$. While loosely
constrained, this is consistent with typical expected NLR densities of
$\sim10^{3}$\,cm$^{-3}$ in AGN \citep{Koski78}.
The mass outflow rate for a uniform spherical flow is $\dot{M}_{\rm out} =
4\pi n R^{2} m_{\rm p} v_{\rm out}$, where $nR^{2} = L_{\rm ion} /\xi$
and $m_p$ is the proton mass. Hence for the measured warm absorber
parameters for 3C\,382, $\dot{M}_{\rm out} = 7.2 \times
10^{27}$\,g\,s$^{-1}$ or $\dot{M}_{\rm out}= 100\hbox{$\rm ~M_{\odot}$}$\,yr$^{-1}$.
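Written out explicitly, this estimate corresponds to
\begin{equation*}
\dot{M}_{\rm out} = \frac{4\pi\, m_{\rm p}\, v_{\rm out}\, L_{\rm ion}}{\xi}
\simeq \frac{4\pi\,(1.67\times10^{-24}\,{\rm g})\,(8\times10^{7}\,{\rm cm\,s^{-1}})\,(1.2\times10^{45}\,{\rm erg\,s^{-1}})}{10^{2.45}\,{\rm erg\,cm\,s^{-1}}}
\simeq 7.2\times10^{27}\,{\rm g\,s^{-1}}.
\end{equation*}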
However the low rate of appearance of
such absorbers amongst BLRG's or radio-loud AGN generally might suggest
that the solid angle subtended by individual absorbing filaments is much
smaller than $4\pi$\,steradians.
Therefore, the mass outflow rate could be considerably smaller than the above estimate.
Subsequently the kinetic power of the soft X-ray outflow (for $v_{\rm
out}=-800$\,km\,s$^{-1}$) is then $\dot{E} = 1/2 \dot{M}_{\rm out}
v_{\rm out}^{2} < 2 \times 10^{43}$\,erg\,s$^{-1}$, which
energetically is a fairly insignificant 2\% of the ionizing luminosity
and is unlikely to contribute significantly towards AGN feedback \citep{King03}.
\acknowledgements
This research has made use of data obtained from the High Energy
Astrophysics Science Archive Research Center (HEASARC), provided by
NASA's Goddard Space Flight Center.
R.M.S. acknowledges support from
NASA through the {\it Suzaku}\ and {\it Chandra}\ programs.
M.E. thanks the NSF for support via grant AST-0807993.
We would like to thank
Tahir Yaqoob for assistance with the Chandra data analysis.
\clearpage
\section{Introduction}
Interactions of organisms, humans, and objects are a common phenomenon, readily seen in collective behaviour across the natural and social sciences. Models for interacting particle systems (IPS) and their mesoscopic limits, as the number of particles grows to infinity, presently receive enormous attention given their applicability in areas such as finance, mathematical neuroscience, biology, machine learning, and physics --- animal swarming, cell movement induced by chemotaxis, opinion dynamics, particle movement in porous media, electrical battery modeling, self-assembly of particles (see for example \cite{stevens1997aggregation,Holm2006,Carrillo2010,bolley2011stochastic,Dreyer2011phasetransition,Baladron2012,marchioro2012mathematical,Kolokolnikov2013,Bossy2015,Guhlke2018,giesecke2020inference,Carrillo2019,li2019mean,jin2020rbm1} and references therein). In this work, we tackle the numerical approximation of interacting particle systems given by stochastic differential equations (SDEs) and of their mesoscopic limit equations (or a class thereof), the so-called McKean--Vlasov Stochastic Differential Equations (MV-SDEs), which arise as the scaling limit of an infinite number of particles.
In this work, we understand the IPS as a system of $N$ interacting $\mathbb{R}^d$-valued particles where each particle is governed by a Stochastic Differential Equation (SDE). Let $i=1,\cdots,N$ and consider $N$ particles $(X^{i,N}_t)_{t\in[0,T]}$ with independent and identically distributed initial conditions ${X}_{0}^{i,N}=X_{0}^{i}$ (the initial condition is random, but independent across particles) and satisfying the $(\mathbb{R}^d)^N$-valued SDE \eqref{Eq:MV-SDE Propagation}
\begin{align}
\label{Eq:MV-SDE Propagation}
& \mathrm{d} {X}_{t}^{i,N}
= \big( v (X_t^{i,N}, \mu^{X,N}_{t} )+ b (t,{X}_{t}^{i,N}, \mu^{X,N}_{t} )\big) \mathrm{d} t
+ \sigma (t,{X}_{t}^{i,N} , \mu^{X,N}_{t} ) \mathrm{d} W_{t}^{i}
, \quad X^{i,N}_0=X_0^i\in L_{0}^{m}( \mathbb{R}^{d}) ,
\\
\label{Eq:MV-SDE Propagation-drifts}
&
v ( {X}_{t}^{i,N} , \mu^{X,N}_{t} )
= \Big(\frac1N \sum_{j=1}^N f({X}_{t}^{i,N}-{X}_{t}^{j,N}) \Big) +u ( {X}_{t}^{i,N} , \mu^{X,N}_{t} ) \textrm{ with } \mu^{X,N}_{t}(\mathrm{d} x) := \frac{1}{N} \sum_{j=1}^N \delta_{X_{t}^{j,N}}(\mathrm{d} x),
\end{align}
where $\delta_{{X}_{t}^{j,N}}$ is the Dirac measure at point ${X}_{t}^{j,N}$, $\{W^{i}\}_{i=1,\cdots,N}$ are independent Brownian motions, and $L^m_0$ denotes the usual space of $m$th-moment integrable random variables.
For the IPS class \eqref{Eq:MV-SDE Propagation}, the limiting class as $N\to \infty$ is that of McKean--Vlasov SDEs, and the passage-to-the-limit operation is known as ``Propagation of Chaos''. This class was first described by McKean \cite{McKean1966}, who introduced the convolution-type interaction (the $v$ in \eqref{Eq:MV-SDE Propagation-drifts}); it is a class of Markov processes associated with nonlinear parabolic equations, in which the map $v$ in \eqref{Eq:MV-SDE Propagation-drifts} is also called ``self-stabilizing''. The IPSs underpinning our work \eqref{Eq:MV-SDE Propagation}-\eqref{Eq:MV-SDE Propagation-drifts} have been studied widely and from a variety of points of view, as early as \cite{Sznitman1991}, which provides a general survey (under global Lipschitz conditions and boundedness).
McKean-Vlasov Stochastic Differential Equations (MV-SDEs) having McKean drifts of convolution type have general dynamics given by
\begin{align}
\label{Eq:General MVSDE}
\mathrm{d} X_{t} &= \big( v(X_{t},\mu_{t}^{X}) + b(t,X_{t}, \mu_{t}^{X})\big)\mathrm{d} t + \sigma(t,X_{t}, \mu_{t}^{X})\mathrm{d} W_{t}, \quad X_{0} \in L_{0}^{m}( \mathbb{R}^{d}),
\\
\label{Eq:General MVSDE shape of v}
&
\textrm{where }~ v(x,\mu)
= \int_{\mathbb{R}^{d} } f(x-y) \mu(\mathrm{d} y) +u(x,\mu)
\quad\textrm{with}\quad \mu_{t}^{X}=\textrm{Law}(X_t),
\end{align}
where $\mu_{t}^{X}$ denotes the law of the solution process $X$ at time $t$, $W$ is a Brownian motion in $\mathbb{R}^d$, $v,b,\sigma$ and $f$ are measurable maps, and $X_0$ is a sufficiently integrable initial condition.
One embodiment (among many) of this typology of models for particle motion, at either the IPS or the MV-SDE level, is a model encapsulating three sources of forcing: the particle moves through the gradient of a multi-well potential landscape (the maps $u$ and $b$), its trajectory is perturbed by a Brownian motion (with associated diffusion coefficient $\sigma$), and a convolution-type self-stabilisation force characterises the influence of a large population of identical particles (subject to the same laws of motion, through $v$ and $f$). In effect, $v$ acts on the particle as an average attractive/repulsive force exerted on said particle by a population of similar particles (through the potential $f$); see \cite{2013doublewell,adams2020large} and further examples in \cite{jin2020rbm1}.
For instance, under certain constraints on $f$ the map $v$ adds inertia to the motion of the particle, which in turn delays exit times from domains of attraction and alters exit locations \cite{HerrmannImkellerPeithmann2008,dosReisSalkeldTugaut2017,adams2020large}. The self-stabilisation term in the system induces in the corresponding Fokker--Planck equation a non-linear term of the form $\nabla \cdot [\rho\, \nabla(f \star \rho)]$ (where $\rho$ stands for the process's density while `$\star$' is the usual convolution operator) \cite{Carrillo2010,Carrillo2019,jin2020rbm1}. The granular media Fokker--Planck equation from bio-chemistry is a good example of an equation featuring this kind of structure \cite{malrieu2006concentration,cattiaux2008probabilistic,adams2020large}.
The literature on MV-SDE is growing explosively with many contributions addressing wellposedness, regularity, ergodicity, nonlinear Fokker-Planck equations, large deviations \cite{HuangRenWang2021DDSDE-wellposedness,dosReisSalkeldTugaut2017,adams2021operatorsplitting,adams2021entropic}. The convolution framework has been given particular attention as it underpins many settings of interest \cite{malrieu2006concentration,cattiaux2008probabilistic,harang2020pathwise,2013doublewell}.
In fact, the literature is even richer under the restriction to constant diffusion coefficients, $\sigma=\textrm{const}$, as this gives access to methodologies based on Langevin-type dynamics and to the machinery of functional inequalities (e.g., log-Sobolev and Poincar\'e inequalities). We point to \cite{harang2020pathwise} for a nice overview of several \textit{open} problems of interest where $f$ is a singular kernel (and $\sigma=\textrm{const}$), including the Coulomb interaction $f(x)=x/|x|^d$; the Biot--Savart law $f(x)=x^{\bot}/|x|^d$; Cucker--Smale models $f(x)=(1+|x|^2)^{-\alpha}$ for $\alpha>0$; crystallisation $f(x) = |x|^{-2p} - 2|x|^{-p}$ with $p\to \infty$; and the 2D viscous vortex model with $f(x)=x/|x|^2$ \cite{fournier2014propagation}.
\smallskip
\emph{Super-linear interaction forces.}
For the IPS \eqref{Eq:MV-SDE Propagation}-\eqref{Eq:MV-SDE Propagation-drifts} or the MV-SDE \eqref{Eq:General MVSDE}-\eqref{Eq:General MVSDE shape of v}, we focus on the class where the map $v$ has super-linear growth in both the space and the measure components. From the theoretical point of view this class is presently well understood: e.g., \cite{herrmann2012self} investigate different properties of the invariant measures for particles in a double-well confining potential and later \cite{2013doublewell} investigate the convergence to stationary states. Large deviations and exit times for such self-stabilising diffusions are established in \cite{HerrmannImkellerPeithmann2008,adams2020large}. The study of probabilistic properties and parametric inference (under constant diffusion) for this class is given in \cite{GENONCATALOT2021}. Two recent studies on parametric inference \cite{belomestny2021semiparametric,comte2022nonparametric} include numerical studies for the particle interaction (\cite{GENONCATALOT2021} does not) but do not tackle superlinear growth in the interaction component (\cite{GENONCATALOT2021} does).
The landscape of corresponding numerical methods for this class is empty (to the best of our knowledge and except for \cite{Malrieu2003}). No general method allows for super-linear growth interaction kernels. For emphasis, standard SDE results for super-linear growth drifts do not yield convergence results independent of the number of particles $N$. In other words, by treating the interacting particle system \eqref{Eq:MV-SDE Propagation} as an SDE in $(\mathbb R^{d})^N$, known results from SDE numerics with coefficients of superlinear growth can be applied directly. \textit{However}, all estimates would depend on the system's dimension, $dN$, and hence ``explode'' as $N$ tends to infinity. In this work, we introduce technical elements to overcome this difficulty which, to the best of our knowledge, are new. It is also noteworthy that the direct numerical discretization of the IPS system \eqref{Eq:MV-SDE Propagation}-\eqref{Eq:MV-SDE Propagation-drifts} carries a computational cost of $\mathcal{O}(N^2)$ per time step, and hence care is needed.
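To make the cost statement concrete, note that the interaction part of the drift in \eqref{Eq:MV-SDE Propagation-drifts} requires, at every time step, a sum over all pairs of particles. A minimal, purely illustrative NumPy sketch (one-dimensional particles and the cubic kernel $f(x)=-x^3$ are assumptions made only for this snippet) reads as follows; the pairwise difference matrix makes the $\mathcal{O}(N^2)$ cost per step explicit.
\begin{verbatim}
import numpy as np

def interaction_drift(X, f):
    # evaluate v_i = (1/N) * sum_j f(X_i - X_j) for all i (O(N^2) cost)
    diffs = X[:, None] - X[None, :]   # N x N matrix of pairwise differences
    return f(diffs).mean(axis=1)      # average over j, for each i

# example: N = 1000 one-dimensional particles, cubic kernel f(x) = -x**3
X = np.random.randn(1000)
drift = interaction_drift(X, lambda x: -x**3)
\end{verbatim}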
Many of the current numerical methods in the MV-SDE literature rely on the particle approximation given by the IPS by means of a quantified rate for the propagation of chaos \cite{adams2020large,chaintron2021propagation,lacker2021hierarchies,Lacker2018}: taming \cite{reis2018simulation,kumar2020explicit}, time-adaptive schemes \cite{reisinger2020adaptive}, and early split-step methods (SSM) \cite{2021SSM} -- all these contributions allow for superlinear growth in space only. Further noteworthy contributions include \cite{belomestny2018projected,hutzenthaler2022multilevel,beck2021full,gobetpagliarani2018,agarwalstefano2021,bossytalay1997,crisanmcmurray2019,reis2018importance,talayvaillant2003}. Within the existing literature no method has been presented that is able to deal with a super-linear growth $f$ component; all cited works assume Lipschitz behaviour of $\mu \mapsto v(\cdot,\mu)$ (which, in essence, entails that $\nabla f$ is bounded).
\smallskip
\textbf{Our contribution.}
\emph{The results of this manuscript provide for both the numerical approximation of interacting particle SDE systems \eqref{Eq:MV-SDE Propagation}-\eqref{Eq:MV-SDE Propagation-drifts}, and McKean--Vlasov SDEs \eqref{Eq:General MVSDE}-\eqref{Eq:General MVSDE shape of v}.}
The main contribution of this work is the numerical scheme and its convergence analysis. We present a particle approximation SSM algorithm inspired by \cite{2021SSM} for the numerical approximation of MV-SDEs and associated particle systems with drifts featuring super-linear growth in space and measure, while the diffusion coefficient satisfies a general Lipschitz condition. The wellposedness result (Theorem \ref{Thm:MV Monotone Existence} below) and Propagation of Chaos (Proposition \ref{Prop:Propagation of Chaos} below) follow from known literature \cite{adams2020large} -- in fact, our Proposition \ref{Prop:Propagation of Chaos} establishes the wellposedness of the particle system, hence closing the small gap present in \cite[Theorem 3.14]{adams2020large}. The only existing work tackling this involved setting, via a fully implicit scheme, is \cite{Malrieu2003}. They rely on (Bakry--\'Emery) functional-inequality methodologies under specific structural assumptions (constant elliptic diffusion, $u=b=0$ and differentiability) that we do not make.
As mentioned, the scheme we propose is a split-step scheme inspired by \cite{2021SSM} (see Definition \ref{def:definition of the ssm} below): it first solves an implicit equation given by the SDE's drift component only, and then takes that outcome and feeds it to the remaining dynamics of the SDE via a standard Euler step. The idea is that the implicit step deals with the difficult super-linear growth part and the elements passed to the Euler step are better behaved. In \cite{2021SSM} there is only superlinear growth in the space variables and the measure component is assumed Lipschitz; here both the space and the measure components have super-linear growth. From a practical point of view, the implicit step in \cite{2021SSM} for a particle $i$ only depended on the elements of particle $i$ (the measure being fixed to the previous time step), hence one solves $N$ decoupled equations in $\mathbb{R}^d$. In this manuscript, the implicit step for particle $i$ involves the whole system of particles, entailing that one needs to solve a single system in $(\mathbb{R}^d)^N$ whose solution depends on all particles. This change makes it much harder to obtain moment estimates for the scheme. For the setting of \cite{2021SSM} there were already several competitive schemes in the literature, e.g., taming \cite{reis2018simulation,kumar2020explicit} and time-adaptive schemes \cite{reisinger2020adaptive}, and the numerical study there was comparative. For this work, no alternative numerical scheme exists -- see below for further discussion regarding the implementation of taming for this class.
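For orientation, recall (schematically) the classical split-step backward Euler method for a standard SDE $\mathrm{d} X_t = g(X_t)\,\mathrm{d} t + \sigma(X_t)\,\mathrm{d} W_t$ with one-sided Lipschitz drift $g$: on a grid of step size $h$ one sets
\begin{equation*}
Y^{\star}_{n} = X_{n} + h\, g(Y^{\star}_{n}), \qquad X_{n+1} = Y^{\star}_{n} + \sigma(Y^{\star}_{n})\, \Delta W_{n},
\end{equation*}
so that the implicit step absorbs the superlinearly growing drift while the diffusion is advanced explicitly. The precise MV-SDE/particle-system version used in this work is the one given in Definition \ref{def:definition of the ssm} below.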
In terms of results, we provide two convergence results in the strong-error\footnote{We understand a ``strong'' error metric as a metric that depends on the joint distribution of the true solution and the numerical approximation, in contrast to the weak error where one needs only the marginals separately. Theorems \ref{theorem:SSM: strong error 1} and \ref{theorem:SSM: strong error 2} showcase two ``strong'' but different error metrics.} sense. For the classical (path-space) root mean-square error, see Theorem \ref{theorem:SSM: strong error 2}, we achieve a nearly-optimal convergence rate of $1/2-\varepsilon$ with $\varepsilon>0$. The main difficulty, and where one of our main contributions lies, is in establishing higher-order moment bounds for the numerical scheme in a way that is compatible with the convolution component in \eqref{Eq:MV-SDE Propagation-drifts} or \eqref{Eq:General MVSDE shape of v} and with It\^o-type arguments -- see Theorem \ref{theorem:moment bound for the big theorm time extensions }. We provide a second strong (non-path-space) mean-square error criterion, see Theorem \ref{eq: scheme continous extension in SDE form}, that attains the optimal rate $1/2$. This second result requires only the higher moments of the IPS' solution process and the second moments of the numerical approximation \cite{2015ssmBandC} (which are easier to obtain).
We emphasise that this second notion of strong convergence (see Theorem \ref{eq: scheme continous extension in SDE form}) is also standard (albeit less common) within the Monte Carlo literature. It also controls the variance of the approximation error (simply not in path-space). Hence, it is sufficient for the many uses one can make of the simulation output -- as one would do with any other Monte Carlo estimator (e.g., confidence intervals). Lastly, we show that with a constant diffusion coefficient, one attains the higher convergence rate of $1.0$ (see Theorem \ref{theorem:gm strong error}).
We illustrate our findings with extended numerical tests showing agreement with the theoretical results and discussing other properties of the schemes: periodicity in phase-space, the impact of the number of particles and the numerical rate of Propagation of Chaos, and complexity versus runtime. For comparison, we implement the taming algorithm \cite{reis2018simulation} for this setting (without proof) and find that, in the example with constant diffusion, it performs similarly to the SSM. In the non-constant diffusion example, it performs poorly. This latter finding raises the question (for future research) of whether taming is a suitable methodology for this class.
\textbf{Organisation of the paper.} In Section \ref{sec:two} we set the notation and framework. In Section \ref{section:SSM scheme and main results.} we state the SSM scheme and the two main convergence results. Section \ref{sec:examples} provides numerical illustrations (for the granular media model and a double-well model with non-constant diffusion). All proofs are given in Section \ref{sec:theSSMresults}.
\section{The split-step method for MV-SDEs and interacting particle systems}
\label{sec:two}
We follow the notation and framework set in \cite{adams2020large,2021SSM}.
\subsection{Notation and Spaces}
\label{section: notations and space}
Let $\mathbb{N}$ be the set of natural numbers starting at $0$, $\mathbb{R}$ denotes the real numbers. For $a,b\in \mathbb{N}$ with $a\leq b$, define $\llbracket a,b\rrbracket:= [a,b] \cap \mathbb{N} = \{a,\cdots,b\}$.
For $x,y \in \mathbb{R}^d$ we denote the scalar product of vectors by $x \cdot y$, and by $|x|=(\sum_{j=1}^d x_j^2)^{1/2}$ the Euclidean norm. The symbol $\mathbf{0}$ denotes the origin in $\mathbb{R}^d$. Let $\mathbbm{1}_A$ be the indicator function of the set $A\subset \mathbb{R}^d$. For a matrix $A \in \mathbb{R}^{d\times n}$ we denote by $A^\intercal$ its transpose and by $|A|=\textrm{Trace}\{A A^\intercal\}^{1/2}$ its Frobenius norm. Let $I_d:\mathbb{R}^d\to \mathbb{R}^d$ be the identity map. For collections of vectors, upper indices denote the distinct vectors, whereas the lower index is a vector component, i.e., $x^l_j$ denotes the $j$-th component of the $l$-th vector. $\nabla$ denotes the vector differential operator and $\partial$ the partial differential operator.
We introduce over $\mathbb{R}^d$ the space of probability measures $\mathcal{P}(\mathbb{R}^d)$ and its subset $\mathcal{P}_2(\mathbb{R}^d)$ of those with finite second moment. The space $\mathcal{P}_2(\mathbb{R}^d)$ is Polish under the Wasserstein distance
\begin{align}
\label{eq:def of wasserstein distance}
W^{(2)}(\mu,\nu) = \inf_{\pi\in\Pi(\mu,\nu)} \Big(\int_{\mathbb{R}^d\times \mathbb{R}^d} |x-y|^2\pi(\mathrm{d} x,\mathrm{d} y)\Big)^\frac12, \quad \mu,\nu\in \mathcal{P}_2(\mathbb{R}^d) .
\end{align}
where $\Pi(\mu,\nu)$ denotes the set of couplings of $\mu$ and $\nu$, i.e., probability measures $\pi$ on $\mathbb{R}^d\times \mathbb{R}^d$ such that $\pi(\cdot\times \mathbb{R}^d)=\mu$ and $\pi(\mathbb{R}^d \times \cdot)=\nu$.
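For two empirical measures on $\mathbb{R}$ with the same number of equally weighted atoms, the optimal coupling in \eqref{eq:def of wasserstein distance} is the monotone one, so $W^{(2)}$ reduces to matching sorted samples. A short, purely illustrative Python sketch of this one-dimensional special case (a numerical convenience only) is:
\begin{verbatim}
import numpy as np

def w2_empirical_1d(x, y):
    # W^(2) between the empirical measures of two equal-size samples on the
    # real line: sort both samples and match them monotonically.
    x, y = np.sort(np.asarray(x)), np.sort(np.asarray(y))
    assert x.shape == y.shape
    return np.sqrt(np.mean((x - y) ** 2))

# example: two Gaussian samples of size 10^4 with shifted means
rng = np.random.default_rng(0)
print(w2_empirical_1d(rng.normal(0.0, 1.0, 10_000), rng.normal(0.5, 1.0, 10_000)))
\end{verbatim}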
Let our probability space be a completion of $(\Omega, \mathbb{F}, \mathcal{F},\mathbb{P})$ with $\mathbb{F}=(\mathcal{F}_t)_{t\geq 0}$ carrying an $l$-dimensional Brownian motion $W=(W^1,\cdots,W^l)$ generating the probability space's filtration, augmented by all $\mathbb{P}$-null sets, and with an additional, sufficiently rich sub-$\sigma$-algebra $\mathcal{F}_0$ independent of $W$. We denote by $\mathbb{E}[\cdot]=\mathbb{E}^\mathbb{P}[\cdot]$ the usual expectation operator with respect to $\mathbb{P}$.
We consider some finite terminal time $T<\infty$ and use the following notation for spaces, which is standard in the (McKean--Vlasov) literature \cite{reis2018simulation,2021SSM}. Let $L_{t}^{p}\left(\mathbb{R}^{d}\right)$ denote the space of $\mathbb{R}^{d}$-valued, $\mathcal{F}_{t}$-measurable random variables $X$ that satisfy $\mathbb{E}\left[|X|^{p}\right]^{1 / p}<\infty$.
Define $\mathbb{S}^{m}$, for $m \geqslant 1$, as the space of $\mathbb{R}^{d}$-valued, $\mathcal{F}_\cdot$-adapted processes $Z$ that satisfy $\mathbb{E}\left[\sup _{0 \leqslant t \leqslant T}|Z(t)|^{m}\right]^{1 / m}<\infty$.
\color{black}
Throughout the text, $C$ denotes a generic positive constant that may depend on the problem's data and may change from line to line, but is always independent of the constants $h,M,N$ (associated with the numerical scheme and specified below).
\subsection{Framework}
Let $W$ be an $l$-dimensional Brownian motion and take the measurable maps $v:\mathbb{R}^d \times\mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}^d$, $f:\mathbb{R}^d \to \mathbb{R}^d$, $b:[0,T] \times \mathbb{R}^d \times\mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}^d$ and $\sigma:[0,T] \times \mathbb{R}^d \times \mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}^{d\times l}$.
The MV-SDE of interest for this work is Equation \eqref{Eq:General MVSDE} (for some $m \geq 1$), where $\mu_{t}^{X}$ denotes the law of the process $X$ at time $t$, i.e., $\mu_{t}^{X}=\mathbb{P}\circ X_t^{-1}$.
We make the following assumptions on the coefficients.
\begin{assumption}
\label{Ass:Monotone Assumption} Let $b$ and $\sigma$ be $1/2$-H\"{o}lder continuous in time, uniformly in $x\in \mathbb{R}^d$ and $\mu\in \mathcal{P}_2(\mathbb{R}^d)$. Assume that $b,\sigma$ are uniformly Lipschitz in the sense that there exist $L_b, L_\sigma\ge0$ such that for all $t \in[0,T]$, all $x, x'\in \mathbb{R}^d$ and all $\mu, \mu'\in \mathcal{P}_2(\mathbb{R}^d)$ we have that
\begin{align*}
&(\mathbf{A}^b)
&|b(t, x, \mu)-b(t, x', \mu')|^2
\leq L_b \big(|x-x'|^2 + W^{(2)}(\mu, \mu')^2 \big),
\\
&(\mathbf{A}^\sigma)
&|\sigma(t, x, \mu)-\sigma(t, x', \mu')|^2\leq
L_\sigma \big(|x-x'|^2 + W^{(2)}(\mu, \mu')^2 \big).
\end{align*}
$(\mathbf{A}^u)~$ Let $u$ satisfy: there exist $ L_{u} \in \mathbb{R}$, $L_{\hat{u}}>0$, $L_{\tilde{u} } \ge 0$,
$q_1>0$
such that for all $t\in[0,T]$, $ x, x'\in \mathbb{R}^d$ and $\forall \mu, \mu'\in \mathcal{P}_2(\mathbb{R}^d)$, it holds that
\begin{align*}
\langle x-x', u(x,\mu)-u(x',\mu) \rangle
&
\leq L_{u}|x-x'|^{2}
& \textrm{(One-sided Lipschitz in space)},
\\
|u(x,\mu)-u( x',\mu)|
&
\leq L_{\hat{u}}(1+ |x|^{q_1} + |x'|^{q_1}) |x-x'|
& \textrm{(Locally Lipschitz in space)},
\\
|u(x,\mu)-u( x,\mu')|^2
&\leq L_{\tilde{u} } W^{(2)}(\mu, \mu')^2
& \textrm{(Lipschitz in measure)}.
\end{align*}
$(\mathbf{A}^f)~$ Let $f$ satisfy: there exist $ L_{f} \in \mathbb{R}$, $L_{\hat{f}}>0$,
$q_2>0$
such that for all $t\in[0,T]$, $ x, x'\in \mathbb{R}^d$, it holds that
\begin{align*}
\langle x-x', f(x)-f(x') \rangle
&
\leq L_{f}|x-x'|^{2}
& \textrm{(One-sided Lipschitz)},
\\
|f(x)-f( x')|
&
\leq L_{\hat{f}}(1+ |x|^{q_2} + |x'|^{q_2}) |x-x'|
& \textrm{(Locally Lipschitz)},\\
f(x)&=-f(-x),
& \textrm{(Odd function)}.
\end{align*}
Assume the normalisation\footnote{This constraint is soft, as the framework allows one to easily redefine $f$ as $\hat f(x):=f(x)-f(\mathbf{0})$ with $f(\mathbf{0})$ merged into $b$.} $f(\mathbf{0})=\mathbf{0}$. Lastly, and for convenience, we set $q=\max\{q_1,q_2\}$ (and we have $q>0$).
\end{assumption}
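For orientation, one simple one-dimensional pair of maps satisfying $(\mathbf{A}^u)$ and $(\mathbf{A}^f)$ (a double-well type illustration, not tied to any specific example below) is
\begin{equation*}
u(x,\mu) = x - x^{3}, \qquad f(x) = -x^{3},
\end{equation*}
for which one may take $L_u=1$, $L_{\tilde{u}}=0$, $L_f=0$ and $q_1=q_2=2$: indeed, $f$ is odd with $f(\mathbf{0})=\mathbf{0}$, $\langle x-x', -x^3+x'^3\rangle \le 0$ and $\langle x-x', (x-x^3)-(x'-x'^3)\rangle \le |x-x'|^2$ for all $x,x'\in\mathbb{R}$.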
The benefits of writing the drift as $v+b$ with $b$ uniformly Lipschitz are discussed below in Remark \ref{rem:Constraint on h is soft} (see also \cite{2021SSM}).
\begin{remark}[Implied properties]
\label{remark:ImpliedProperties}
Under Assumption \ref{Ass:Monotone Assumption}, let $C>0$ then for all $t \in [0,T]$, $x,x',z\in \mathbb{R}^{d}$ and $\mu\in \mathcal{P}_2(\mathbb{R}^d)$, since $f$ is a normalised odd function (i.e., $f(\mathbf{0})=\mathbf{0}$), we have
\begin{align*}
&\langle x,f(x)\rangle = \langle x-\mathbf{0},f(x)-f(\mathbf{0})\rangle+\langle x,f(\mathbf{0})\rangle \le L_f|x|^2+|x||f(\mathbf{0})|= L_f|x|^2.
\end{align*}
Also, for the function $u$, define $ \widehat L_u=L_u+1/2,~C_u= |u(0,\delta_0) |^2$, and thus by Young's inequality
\begin{align*}
\langle x,u(x,\mu)\rangle &
\le
C_u +\widehat L_u|x|^2+ L_{\tilde{u} } W^{(2)}(\mu,\delta_0)^2
,~\langle x-x',u(x,\mu)-u(x',\mu') \rangle
\le \widehat L_u|x-x'|^2 + \frac{ L_{\tilde{u} }}{2} W^{(2)}(\mu,\mu')^2.
\end{align*}
Using the properties of the convolution, $v$ of \eqref{Eq:General MVSDE} also satisfies a one-sided Lipschitz condition in space
\begin{align*}
\langle x-x',v(x,\mu)-v(x',\mu) \rangle
&\le \int_{\mathbb{R}^{d} } L_f |x-x'|^2 \mu(dz) + L_u |x-x'|^2
~=~ (L_f+L_u) |x-x'|^2.
\end{align*}
Moreover, for $\psi\in\{b,\sigma\}$, by Young's inequality, we have
\begin{align*}
\langle x,\psi(t,x,\mu) \rangle
\le
C(1+|x|^2+W^{(2)}(\mu,\delta_0)^2 )
\quad \textrm{and}\quad
|\psi(t,x,\mu)|^2
\le
C(1+|x|^2+W^{(2)}(\mu,\delta_0)^2 ).
\end{align*}
\end{remark}
We first recall a result from \cite{adams2020large} establishing wellposedness of the MV-SDE \eqref{Eq:General MVSDE}-\eqref{Eq:General MVSDE shape of v}.
\begin{theorem}[Theorem 3.5 in \cite{adams2020large}]
\label{Thm:MV Monotone Existence}
Let Assumption \ref{Ass:Monotone Assumption} hold and assume for some $m >2(q+1)$, $X_{0} \in L_{0}^{m}(\mathbb{R}^{d})$.
Then, there exists a unique solution $X$ to MV-SDE \eqref{Eq:General MVSDE} in $\mathbb{S}^{m}([0,T])$.
For some constant $C>0$ we have
\begin{align*}
\mathbb E \big[ \sup_{t\in[0,T]} |X_{t}|^{\widehat m} \big]
\leq C \big(1+ \mathbb{E}\big[|X_0|^{\widehat m}\big]\big) e^{C T},\qquad \textrm{for any }~ \widehat m \in [2,m].
\end{align*}
\end{theorem}
\begin{proof}
Our Assumption \ref{Ass:Monotone Assumption} is a particularisation of \cite[Assumption 3.4]{adams2020large} and hence our theorem follows directly from \cite[Theorem 3.5]{adams2020large}.
\end{proof}
\textbf{The interacting particle system \eqref{Eq:MV-SDE Propagation}.}
As mentioned earlier, the numerical approximation results of this work apply directly whether one's starting point is the interacting particle system \eqref{Eq:MV-SDE Propagation} or the MV-SDE \eqref{Eq:General MVSDE}. Concretely, for the latter, as in \cite{2021SSM,reisinger2020adaptive,reis2018simulation} one can approximate the MV-SDE \eqref{Eq:General MVSDE} (driven by the Brownian motion $W$) by the system of $N$ interacting $\mathbb{R}^d$-valued particles given in \eqref{Eq:MV-SDE Propagation} and approximate that system numerically, with the gap closed by the Propagation of Chaos.
For completeness we recall the setup of \eqref{Eq:MV-SDE Propagation}. Let $i\in \llbracket 1,N\rrbracket$ and consider $N$ particles $(X^{i,N})_{t\in[0,T]}$ with independent and identically distributed (i.i.d.)~${X}_{0}^{i,N}=X_{0}^{i}$ (the initial condition is random, but independent of other particles) and satisfying the $(\mathbb{R}^d)^N$-valued SDE \eqref{Eq:MV-SDE Propagation} (and $v$ given as in \eqref{Eq:General MVSDE shape of v})
\begin{align*}
\mathrm{d} {X}_{t}^{i,N}
= \big( v (X_t^{i,N}, \mu^{X,N}_{t} )+ b (t,{X}_{t}^{i,N}, \mu^{X,N}_{t} )\big) \mathrm{d} t
+ \sigma (t,{X}_{t}^{i,N} , \mu^{X,N}_{t} ) \mathrm{d} W_{t}^{i}
, \quad X^{i,N}_0=X_0^i,
\end{align*}
where $\mu^{X,N}_{t}(\mathrm{d} x) := \frac{1}{N} \sum_{j=1}^N \delta_{X_{t}^{j,N}}(\mathrm{d} x)$ with $\delta_{{X}_{t}^{j,N}}$ is the Dirac measure at point ${X}_{t}^{j,N}$, and $W^{i}, i\in \llbracket 1,N\rrbracket$ being independent Brownian motions (also independent of the BM $W$ appearing in \eqref{Eq:General MVSDE}; with a slight abuse of notation to avoid re-defining the probability space's filtration).
\begin{remark}[The system through the lens of $\mathbb{R}^{dN}$]
\label{remark:OSL for the whole function / system V}
We introduce the map $V$ to deal with \eqref{Eq:MV-SDE Propagation} as one system of equations in $\mathbb{R}^{Nd}$ instead of $N$ dependent equations each in $\mathbb{R}^d$. Namely, we define
\begin{align}
\label{eq:Define-Function-V}
V=(V_1,\cdots,V_N):\mathbb{R}^{dN}\to \mathbb{R}^{dN }
\ \textrm{for}\ i\in \llbracket 1,N\rrbracket \ \ V_i:\mathbb{R}^{dN}\to \mathbb{R}^{d},
~
V_i(X^N)= v(X^{i,N}, \mu^{X,N})
\end{align}
with the function $v$ given by \eqref{Eq:General MVSDE shape of v}, $X^N=(X^{1,N},\cdots,X^{N,N} )\in \mathbb{R}^{dN}$ where each $X^{i,N}$ solves \eqref{Eq:MV-SDE Propagation}, $\mu^{X,N}(\mathrm{d} x) := \frac{1}{N} \sum_{j=1}^N \delta_{X^{j,N}}(\mathrm{d} x) $. \color{black}
Then, for $X^N,Y^N\in \mathbb{R}^{dN}$ with corresponding empirical measures $\mu^{X,N},\mu^{Y,N}$, and letting Assumption \ref{Ass:Monotone Assumption} hold, the function $V$ also satisfies a one-sided Lipschitz condition
\begin{align*}
\langle& X^N-Y^N,V(X^N)-V(Y^N) \rangle
\\
&
=\frac{1}{2N}\sum_{i=1}^N\sum_{j=1}^N \Big\langle (X^{i,N}-X^{j,N})-(Y^{i,N}-Y^{j,N})
, f(X^{i,N}- X^{j,N})-f(Y^{i,N}- Y^{j,N})\Big\rangle
\\
&\quad +
\sum_{i=1}^N \Big\langle X^{i,N}-Y^{i,N},
u(X^{i,N}, \mu^{X,N})-u(Y^{i,N}, \mu^{X,N})+ u(Y^{i,N}, \mu^{X,N}) -u(Y^{i,N}, \mu^{Y,N})
\Big\rangle
\\
& \qquad
\le (2L_f^++L_u+\frac{1}{2}+\frac{L_{\tilde{u} }}{2})|X^N-Y^N|^2, \qquad L_f^+=\max\{0,L_f\}.
\end{align*}
In the first equality we changed the order of summation and used that $f$ is odd.
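Concretely, the reordering relies on the elementary identity
\begin{equation*}
\sum_{i=1}^N\sum_{j=1}^N \langle a_i, c_{ij}\rangle
= \frac12 \sum_{i=1}^N\sum_{j=1}^N \langle a_i - a_j, c_{ij}\rangle,
\qquad \textrm{valid whenever } c_{ij}=-c_{ji},
\end{equation*}
applied with $a_i = X^{i,N}-Y^{i,N}$ and $c_{ij} = f(X^{i,N}-X^{j,N})-f(Y^{i,N}-Y^{j,N})$, which is antisymmetric in $(i,j)$ precisely because $f$ is odd.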
\end{remark}
\textbf{Propagation of chaos (PoC).}
In order to show that the particle approximation \eqref{Eq:MV-SDE Propagation} is effective for approximating the MV-SDE \eqref{Eq:General MVSDE}, we provide a pathwise propagation of chaos result (convergence as the number of particles increases).
We introduce the auxiliary system of non-interacting particles
\begin{align}
\label{Eq:Non interacting particles}
\mathrm{d} X_{t}^{i} =
\big(
v(X_{t}^{i}, \mu^{X^{i}}_{t})+ b(t, X_{t}^{i}, \mu^{X^{i}}_{t}) \big)\mathrm{d} t + \sigma(t,X_{t}^{i}, \mu^{X^{i}}_{t}) \mathrm{d} W_{t}^{i}, \quad X_{0}^{i}=X_{0}^{i} \, ,\quad t\in [0,T] \, ,
\end{align}
which are just (decoupled) MV-SDEs with i.i.d. initial conditions $X_{0}^{i}$. Since the $X^{i}$s are
independent, $\mu^{X^{i}}_{t}=\mu^{X}_{t}$ for all $
i$ (and $\mu^{X}_{t}$ the law of the solution to \eqref{Eq:General MVSDE} with $v$ given as \eqref{Eq:General MVSDE shape of v}).
We are interested in strong error-type metrics for the numerical approximation, hence the relevant PoC result for our case is given in the next proposition.
The propagation of chaos estimate \eqref{eq:poc result} follows from \cite[Theorem 3.14]{adams2020large} under the assumption that the interacting particle system \eqref{Eq:MV-SDE Propagation} is well-posed. The first statement of Proposition \ref{Prop:Propagation of Chaos} establishes the well-posedness of the particle system, hence closing the small gap present in \cite[Theorem 3.14]{adams2020large}; the proof and further details are presented in Appendix \ref{appendix: proof of moment bound for PC}.
\color{black}
\begin{proposition}
\label{Prop:Propagation of Chaos}
Let the assumptions of Theorem \ref{Thm:MV Monotone Existence} hold for some $m>2(q+1)$. Let $X^{i}\in \mathbb{S}^m$ be the solution to \eqref{Eq:Non interacting particles}.
Then, there exists a unique solution $X^{i,N}$ to \eqref{Eq:MV-SDE Propagation} and for any $1\leq p\leq m$ there exists $C>0$ independent of $N$ such that
\begin{align}
\label{eq:momentboundParticiInteractingSystem}
\sup_{t\in[0,T]}\sup_{ i\in \llbracket 1,N \rrbracket} \mathbb{E}\big[ |X^{i,N}_t|^p \big] \leq C\Big(1+\mathbb{E}\big[\,|X_0^\cdot|^p \big]\Big).
\end{align}
Moreover, suppose that $m>\max\{ 2(q+1) , 4\} $;
then there exists a constant $C(T)>0$, depending on $T$, such that
\begin{align}
\label{eq:poc result}
\sup_{ i\in \llbracket 1,N \rrbracket}\sup_{0 \le t \le T}
\mathbb{E}\big[ |X_{t}^{i} - X_{t}^{i,N}|^{2}\big]
\le C(T) \begin{cases}N^{-1 / 2}, & d<4 \\ N^{-1 / 2} \log N, & d=4 \\ N^{\frac{-2}{d+4}}, & d>4\end{cases}.
\end{align}
\end{proposition}
\color{black}
This shows that the particle scheme converges to the MV-SDE at a quantified rate. Therefore, to show convergence between our numerical scheme and the MV-SDE, we only need to show that the numerical version of the particle scheme converges to the ``true'' particle scheme. The PoC rate can be optimised in the case of constant diffusion \cite[Remark 2.5]{2021SSM}.
\subsection{The scheme for the interacting particle system and main results}
\label{section:SSM scheme and main results.}
The split-step method (SSM) used here is inspired by that of \cite{2021SSM}, recast to the present setup. The critical difficulty arises from the convolution component in $v$ of \eqref{Eq:General MVSDE}; this term is the main hindrance in proving moment bounds. Before continuing, recall the definition of $V$ in Remark \ref{remark:OSL for the whole function / system V}. We now introduce the SSM numerical scheme. \color{black}
\begin{definition}[Definition of the SSM]
\label{def:definition of the ssm}
Let Assumption \ref{Ass:Monotone Assumption} hold.
Define the uniform partition of $[0,T]$ as $\pi:=\{t_n:=nh : n\in \llbracket 0,M\rrbracket, h:=T/M \}$ for a prescribed $M\in \mathbb{N}$.
Define recursively the SSM approximating \eqref{Eq:MV-SDE Propagation} as: set $ \hat{X} _{0}^{i,N}=X^i_0$ for $i\in \llbracket 1,N\rrbracket $; iteratively over $n\in \llbracket 0,M-1\rrbracket$ for all $i\in \llbracket 1,N\rrbracket $ (recall Remark \ref{remark:OSL for the whole function / system V} and the definition of the map $V$ given in \eqref{eq:Define-Function-V})\color{black}
\begin{align}
Y_{n}^{\star,N} &=\hat{ X}_{n}^{N}+h V (Y_{n}^{\star,N} ),
\quad \hat{ X}_{n}^{N}=(\cdots, \hat{X} _{n}^{i,N},\cdots),\quad Y_{n}^{\star,N}=(\cdots,Y_{n}^{i,\star,N},\cdots),
\label{eq:SSTM:scheme 0}
\\
\label{eq:SSTM:scheme 1}
&\textrm{where }~Y_{n}^{i,\star,N} = \hat{X} _{n}^{i,N}+h v (Y_{n}^{i,\star,N}, \hat{\mu} ^{Y,N}_n ),
\quad
\quad
\hat{\mu} ^{Y,N}_n(\mathrm{d} x):= \frac1N \sum_{j=1}^N \delta_{Y_{n}^{j,\star,N}}(\mathrm{d} x),
\\
\label{eq:SSTM:scheme 2}
\hat{X} _{n+1}^{i,N} &=Y_{n}^{i,\star,N}
+ b(t_n,Y_{n}^{i,\star,N}, \hat{\mu} ^{Y,N}_n) h
+\sigma(t_n,Y_{n}^{i,\star,N}, \hat{\mu} ^{Y,N}_n) \Delta W_{n}^i,\qquad \Delta W_{n}^i=W_{t_{n+1}}^i-W_{t_n}^i.
\end{align}
The stepsize $h$ is chosen so as to belong to the interval (this constraint is soft in the sense of Remark \ref{rem:Constraint on h is soft})
\begin{align}
\label{eq:h choice}
h\in \Big(0, \min\big\{1,\frac 1\zeta\big\} \Big)
\quad \textrm{for $\zeta$ defined as}\quad
\zeta= \max\Big\{ 2(L_f+L_u)~,~4L_f^++2L_u+2 L_{\tilde{u} } +1~,~0 \Big\}.
\end{align}
\end{definition}
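As an illustration of Definition \ref{def:definition of the ssm}, a minimal Python sketch of one SSM step follows. It is illustrative only: the names are ours, the time dependence of $b,\sigma$ is suppressed, the diffusion is applied componentwise, and the implicit equation \eqref{eq:SSTM:scheme 0} is solved with SciPy's generic root finder rather than the Newton iteration used in our experiments (Appendix \ref{appendix: discussion on Newton's method}).

\begin{verbatim}
import numpy as np
from scipy.optimize import fsolve

def ssm_step(X_hat, h, f, u, b, sigma, dW):
    """One SSM step: implicit half-step Y* = X_hat + h v(Y*), followed by the
    explicit update X_next = Y* + b(Y*) h + sigma(Y*) dW.

    X_hat, dW : (N, d) arrays (current state and Brownian increments).
    f, u, b, sigma : vectorised callables on (N, d) arrays; u, b, sigma receive
    the whole particle cloud as a stand-in for the empirical measure.
    """
    N, d = X_hat.shape

    def v(Y):                                # v(Y^i, mu^{Y,N}) for every particle
        diffs = Y[:, None, :] - Y[None, :, :]
        return f(diffs).mean(axis=1) + u(Y)

    def residual(y_flat):                    # implicit equation Y - X_hat - h v(Y) = 0
        Y = y_flat.reshape(N, d)
        return (Y - X_hat - h * v(Y)).ravel()

    Y_star = fsolve(residual, X_hat.ravel()).reshape(N, d)
    return Y_star + h * b(Y_star) + sigma(Y_star) * dW
\end{verbatim}

In the experiments of Section \ref{sec:examples} the implicit equation is instead solved with $2$ to $4$ Newton iterations; see Section \ref{sec:discussionOfNumerics}.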
In cases where the original functions $f,u$ make it difficult to find a suitable choice of $h$, the addition-and-subtraction trick of the remark below can be used to bypass the constraint; see Remark \ref{remark: choice of h} and \cite[Section 3.4]{2021SSM} for more discussion.
\color{black}
\begin{remark}[The constraint on $h$ in \eqref{eq:h choice} is soft]
\label{rem:Constraint on h is soft}
Our framework allows one to change $f,u,b$ in such a way as to have $\zeta=0$ in \eqref{eq:h choice} via addition and subtraction of linear terms to $f,u$ and $b$. Concretely, take $\theta,\gamma \in \mathbb{R}$ and redefine $f,u,b$ into $\widehat f, \widehat u, \widehat b$ as follows: for any $t\in [0,\infty),x\in\mathbb{R}^d, \mu\in \mathcal{P}_2(\mathbb{R}^d)$
\begin{align*}
\widehat f (x) = f(x) -\theta x,
\qquad
\widehat u(x,\mu) = u(x,\mu) -\gamma x - \theta \int_{\mathbb{R}^d} z \mu(\mathrm{d} z),
\quad \textrm{and}\quad
\widehat b(t,x,\mu) = b(t,x,\mu) +(\gamma + \theta) x.
\end{align*}
For judicious choices of $\theta,\gamma$ it is easy to see that $\zeta$ can be set to zero (we invite the reader to carry out the calculations). We remark that this operation increases the Lipschitz constant of $\widehat b$.
\end{remark}
Recall that the function $V$ satisfies a one-sided Lipschitz condition in $X\in\mathbb{R}^{Nd}$ (Remark \ref{remark:OSL for the whole function / system V}), and hence (under \eqref{eq:h choice}) a unique solution $Y_{n}^{\star,N}$ to \eqref{eq:SSTM:scheme 0} as a function of $\hat{ X}_{n}^{N}$ exists (details in Lemma \ref{lemma:SSTM:new functions def and properties1}).
Having introduced the discrete scheme, we now present its continuous-time extension and the main convergence results.
\begin{definition} [Continuous extension of the SSM]
\label{def:definition of the ssm continouse extension}
Under the same choice of $h$ and assumptions in Definition \ref{def:definition of the ssm}, for all $t\in[t_n,{t_{n+1}}]$, $n\in \llbracket 0,M-1\rrbracket$ , $i\in\llbracket 1,N \rrbracket,~\hat{X}_{0}^i
\in L_0^m(\mathbb{R}^d)$, the continuous extension of the SSM is
\begin{align}
\label{eq: scheme continous extension in SDE form}
\mathrm{d} \hat{X}_{t}^{i,N}
&
=
\big( v (Y_{\kappa(t)}^{i,\star,N}, \hat{\mu} ^{Y,N}_{\kappa(t)} )
+b(\kappa(t),Y_{\kappa(t)}^{i,\star,N}, \hat{\mu} ^{Y,N}_{\kappa(t)} ) \big) \mathrm{d} t
+ \sigma (\kappa(t),Y_{\kappa(t)}^{i,\star,N}, \hat{\mu} ^{Y,N}_{\kappa(t)} ) \mathrm{d} W_t^i,
\\
\nonumber
\quad
\hat{\mu} ^{Y,N}_n(\mathrm{d} x):&= \frac1N \sum_{j=1}^N \delta_{Y_{n}^{j,\star,N}}(\mathrm{d} x),~\quad
\kappa(t)=\sup\big\{t_n: t_n\le t,\ n\in \llbracket 0,M-1 \rrbracket \big\}\nonumber,~\quad
\hat{\mu} ^{Y,N}_{t_n}= \hat{\mu} ^{Y,N}_n.
\end{align}
\end{definition}
The next result states our first strong convergence finding. It is a pointwise-in-time mean-square convergence result, not yet in the classical path-space form. Recall that $C>0$ is independent of the parameters $h,M,N$.
\begin{theorem} [Non-path-space mean-square convergence]
\label{theorem:SSM: strong error 1}
Let Assumption \ref{Ass:Monotone Assumption} hold and choose $h$ as in \eqref{eq:h choice}.
Let $i\in\llbracket 1,N\rrbracket$, take $X^{i,N}$ as the solution \eqref{Eq:MV-SDE Propagation} and let $ \hat{X} ^{i,N}$ be the continuous-time extension of the SSM given by \eqref{eq: scheme continous extension in SDE form}.
If $m\ge 4q+4 >\max\{2(q+1),4\}$ where $X_0^i\in L^m_0$ and $q$ is as defined in Assumption \ref{Ass:Monotone Assumption}, then
\begin{align}
\label{eq:convergence theroem term 1}
\sup_{i\in \llbracket 1,N \rrbracket} \sup_{0\le t \le T}
\mathbb{E}\big[\, |X_{t}^{i,N}- \hat{X} _{t}^{i,N} |^2 \big] & \le Ch.
\end{align}
\end{theorem}
The proof is presented in Section \ref{subsection:properties of the scheme }.
This result does not need $L^p$-moment bounds of the scheme for $p>2$. It needs \textit{only} $L^p$-moments of the solution process of \eqref{Eq:MV-SDE Propagation} and $L^2$-moments of the scheme \cite{2015ssmBandC}. The proof takes advantage of the elegant structure induced by the SSM, where Propositions \ref{prop:yi-yj leq xi-xj} and \ref{prop:sum y square leq sum x square} are the crucial intermediate results to deal with the convolution term.
The next moment bound result is necessary for the subsequent uniform convergence result.
\begin{theorem}[Moment bounds]
\label{theorem:moment bound for the big theorm time extensions }
Let the settings of Theorem \ref{theorem:SSM: strong error 1} hold. Let $m\geq 2$ where $X^i_0\in L^m_0$ for all $i\in\llbracket 1,N\rrbracket$ and let $ \hat{X} ^{i,N}$ be the continuous-time extension of the SSM given by \eqref{eq: scheme continous extension in SDE form}. Let $2p\in [2,m]$; then there exists a constant $C>0$ (independent of $h,N,M$) such that
\begin{align}
\label{eq:SSTM:momentbound for split-step time extension process00}
\sup_{i\in \llbracket 1,N \rrbracket} \sup_{0\le t \le T}\mathbb{E}\big[ \,| \hat{X} _{t}^{i,N}|^{2p}\big]\le C\big( 1+ \mathbb{E}\big[\, | \hat{X} _{0}|^{2p}\big] \big) <\infty.
\end{align}
\end{theorem}
The proof is presented in Section \ref{subsection: Moment bound of the SSM} and builds around Theorem \ref{theorem: discrete moment bound}. There, we expand \eqref{eq:mmb:def of HXp} and \eqref{eq:mmb:def of HYp}, and leverage the properties of the SSM scheme stated in Propositions \ref{prop:yi-yj leq xi-xj} and \ref{prop:sum y square leq sum x square} to deal with the difficult convolution terms.
Next we state the classic mean-square error convergence result.
\begin{theorem}[Classical path-space mean-square convergence]
\label{theorem:SSM: strong error 2}
Let the settings of Theorem \ref{theorem:SSM: strong error 1} hold. If there exists some $\epsilon\in (0,1)$ such that $m \ge \max\{4q+4,2+q+q/\epsilon \} >\max\{2(q+1),4\}$ with $X_0^i\in L^m_0$ for $i\in\llbracket 1,N\rrbracket$ and $q$ given in Assumption \ref{Ass:Monotone Assumption}, then
\begin{align}
\label{eq:convergence theroem term 2}
\sup_{i\in \llbracket 1,N \rrbracket}
\mathbb{E}\big[ \sup_{0\le t \le T} |X_{t}^{i,N}- \hat{X} _{t}^{i,N} |^2 \big]
&\le Ch^{1-\epsilon}.
\end{align}
\end{theorem}
The proof is presented in Section \ref{subsection:Proof of strong error 2}. For this result we need the $L^p$-moments of both the scheme and the solution process. This is in contrast to the proof methodology of Theorem \ref{theorem:SSM: strong error 1} and is the reason we introduce Theorem \ref{theorem:moment bound for the big theorm time extensions } as a main result. The nearly optimal error rate of $(1-\epsilon)$ is a consequence of the estimation of \eqref{eq:se2:special term} (a product of three unbounded random variables): the expectation is taken after the supremum and we then use Theorems \ref{theorem:SSM: strong error 1} and \ref{theorem:moment bound for the big theorm time extensions }, which forces an $\epsilon$ sacrifice in the rate. The nearly optimal rate of $(1-\epsilon)$ is also, at present, the best one can reach even with higher-order differences $p>2$ (although we do not present these calculations). It is still open how to prove \eqref{eq:SSTM:momentbound for split-step time extension process00} with the $\sup_t$ inside the expectation --- the difficulty to be overcome relates to establishing \eqref{eq:sum y square leq sum x square} of Proposition \ref{prop:sum y square leq sum x square} under higher moments $p>2$ in a way that aligns with \textit{carr\'e-du-champ} type arguments and the convolution term (for the proof we provide; otherwise a new argument needs to be found). It also remains an open problem to show \eqref{eq:convergence theroem term 2} with $\epsilon=0$.
\subsubsection*{A particular result for granular media equation type models}
We recast the earlier results to granular media type models where the diffusion coefficient is constant (and higher convergence rates can be established).
\begin{assumption}
\label{Ass:GM Assumption}
Consider the following MV-SDE
\begin{align}
\label{Eq:GM type MVSDE}
\mathrm{d} X_{t} &= v(X_{t},\mu_{t}^{X}) \mathrm{d} t + \sigma \mathrm{d} W_{t}, \quad X_{0} \in L_{0}^{m}( \mathbb{R}^{d}),\quad
v(x,\mu)= \int_{\mathbb{R}^{d} } f(x-y) \mu(\mathrm{d} y).
\end{align}
Let $f:\mathbb{R}^d \to \mathbb{R}^d$ be continuously differentiable and satisfy ($\mathbf{A}^f$) of Assumption \ref{Ass:Monotone Assumption}. There exist $ L_{f'}, L_{f''} >0$ and $q\in \mathbb{N}$, $q>1$, with $q$ the same as in ($\mathbf{A}^f$), such that for all $ x, x'\in \mathbb{R}^d$
\begin{align}
\label{eq:ass:gm:f 1}
| \nabla f (x)|
&
\leq L_{f'}(1+ |x|^{q} )
,\quad
| \nabla f (x)
-\nabla f (x')
|
\leq L_{f''}(1+ |x|^{q-1}+|x'|^{q-1} )|x-x'| .
\end{align}
The diffusion coefficient $\sigma:[0,T] \times \mathbb{R}^d \times \mathcal{P}_2(\mathbb{R}^d) \to \mathbb{R}^{d\times l}$ is a constant matrix (it does not depend on its arguments).
\end{assumption}
In the language of the granular media equation, the MV-SDE \eqref{Eq:GM type MVSDE} corresponds to the Fokker-Planck PDE $\partial_t \rho =\nabla \cdot[\nabla \rho+\rho \nabla W * \rho]$ with $f=-\nabla W$ and $\rho$ the probability density \cite{Malrieu2003}. We have the following results.
\begin{theorem}
\label{theorem:gm strong error}
Let Assumption \ref{Ass:GM Assumption} hold and choose $h$ as in \eqref{eq:h choice}. Let $i\in\llbracket 1,N\rrbracket$, take $X^{i,N}$ to be the solution to \eqref{Eq:MV-SDE Propagation}, let $ \hat{X} ^{i,N}$ be the continuous-time extension of the SSM given by \eqref{eq: scheme continous extension in SDE form} and $X^i_0\in L^m_0$. Let $m\ge \max\{8q,~4q+4\}>\max\{2(q+1),4\}$ with $q$ as defined in Assumption \ref{Ass:GM Assumption}, then
\begin{align}
\label{eq:convergence theroem term 11 constant diffusion}
\sup_{i\in \llbracket 1,N \rrbracket} \sup_{0\le t \le T}
\mathbb{E}\big[ |X_{t}^{i,N}- \hat{X} _{t}^{i,N} |^2 \big] & \le Ch^2.
\end{align}
\end{theorem}
This result is proved in Section \ref{subsection:Proof of strong error 3 GM} with simulation results presented in Section \ref{exam:StochGinzburg}, confirming a strong error rate of $1.0$.
One can use a proof similar to that of Theorem \ref{theorem:SSM: strong error 2} to obtain \eqref{eq:convergence theroem term 11 constant diffusion} with the $\sup_t$ inside the expectation. This would deliver a rate of $h^{2-\epsilon}$; the key steps are similar to \eqref{eq:proof strong errr2:ep 1}-\eqref{eq:proof strong errr2:ep 2}.
\color{black}
\section{Examples of interest}
\label{sec:examples}
We illustrate the SSM on three numerical examples.\footnote{Implementation code in Python is available at
\url{https://github.com/AnandaChen/Simulation-of-super-measure}.
}
The ``true'' solution in each case is unknown, and the convergence rates for these examples are computed with respect to a proxy-true solution given by the approximating scheme run at a smaller timestep $h$ and a larger number of particles $N$ (particular details are below). The strong error between the proxy-true solution $X_T$ and the approximation $\hat X_T$ is estimated as follows
\begin{align*}
\textrm{root Mean-square error (Strong error)}
= \Big( \mathbb{E}\big[\, |X_T-\hat{X}_T|^2\big]\Big)^{\frac12}
\approx \Big(\frac1N \sum_{j=1}^N |X_T^j - \hat{X}_T^j|^2\Big)^\frac12.
\end{align*}
We also consider the path strong error defined as follows: for $Mh=T$, $t_n=nh$,
\begin{align*}
\textrm{ Strong error (Path) }
= \Big( \mathbb{E}\big[\,\sup_t |X_t-\hat{X}_t|^2\big]\Big)^{\frac12}
\approx \Big(\frac1N \sum_{j=1}^N \sup_{n\in \llbracket 0,M \rrbracket }|X_{t_n}^j - \hat{X}_{t_n}^j|^2\Big)^\frac12.
\end{align*}
\color{black}
The propagation of chaos (PoC) rate is estimated from particle systems of different sizes, $\{ \hat{X} _T^{i,N_l}\}_{i,l}$, where $i$ denotes the $i$-th particle and $N_l$ the size of the system, via
\begin{align}
\nonumber
\textrm{Propagation of chaos error (PoC error)}
\approx \Big(\frac{1}{N_l} \sum_{j=1}^{N_l} | \hat{X} _T^{j,N_l} - X_T^{j} |^2\Big)^\frac12.
\end{align}
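These Monte Carlo estimators translate directly into code; a minimal NumPy sketch follows (names are ours, and the proxy arrays are assumed to come from runs driven by the same Brownian paths, with particles matched by index).

\begin{verbatim}
import numpy as np

def strong_error_T(X_proxy_T, X_hat_T):
    """Root mean-square error at T: sqrt( (1/N) sum_j |X_T^j - Xhat_T^j|^2 ).
    Inputs have shape (N, d)."""
    return np.sqrt(np.mean(np.sum((X_proxy_T - X_hat_T) ** 2, axis=-1)))

def strong_error_path(X_proxy, X_hat):
    """Path version: the sup over the time grid is taken before averaging.
    Inputs have shape (M+1, N, d)."""
    sq = np.sum((X_proxy - X_hat) ** 2, axis=-1)       # (M+1, N)
    return np.sqrt(np.mean(np.max(sq, axis=0)))

def poc_error(X_hat_T_Nl, X_proxy_T):
    """PoC error of an N_l-particle run against the first N_l particles of the
    proxy (largest-N) run driven by the same Brownian motions."""
    Nl = X_hat_T_Nl.shape[0]
    return np.sqrt(np.mean(np.sum((X_hat_T_Nl - X_proxy_T[:Nl]) ** 2, axis=-1)))
\end{verbatim}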
\begin{remark}[`Taming' algorithm]
For comparative purposes we implement the `\textit{Taming}' algorithm \cite{2021SSM,reis2018simulation} -- any convergence analysis of the taming algorithm within the framework of this manuscript is an open question.
Of the many possible variants of taming, given the terminal time $T$ with $Mh=T$, we implement the following: $\int_{\mathbb{R}^{d} } f(\cdot-y) \mu(\mathrm{d} y)$ is replaced by
$\int_{\mathbb{R}^{d} } f(\cdot-y) \mu(\mathrm{d} y)/(1+M^{-\alpha}|\int_{\mathbb{R}^{d} } f(\cdot-y) \mu(\mathrm{d} y)|)$, and $u$ is replaced by $u/(1+M^{-\alpha}|u|)$, with the choice of $\alpha=1/2$ for non-constant diffusion and $\alpha=1$ for constant diffusion.
\end{remark}
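For illustration, a sketch of the tamed drift used in the comparison is given below (our own illustrative implementation of the replacement above, with the empirical measure of the particle cloud in place of $\mu$).

\begin{verbatim}
import numpy as np

def tamed_drift(X, f, u, M, alpha):
    """Replace each superlinear drift piece g by g / (1 + M^{-alpha} |g|),
    with alpha = 1/2 for non-constant diffusion and alpha = 1 for constant
    diffusion; X is the (N, d) particle cloud and M the number of time steps."""
    diffs = X[:, None, :] - X[None, :, :]
    conv = f(diffs).mean(axis=1)                 # (1/N) sum_j f(X^i - X^j)
    uX = u(X)
    conv_norm = np.linalg.norm(conv, axis=-1, keepdims=True)
    u_norm = np.linalg.norm(uX, axis=-1, keepdims=True)
    return (conv / (1.0 + M ** (-alpha) * conv_norm)
            + uX / (1.0 + M ** (-alpha) * u_norm))
\end{verbatim}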
Within each example, the error rates of Taming and SSM are computed using the same Brownian motion paths.
Moreover, for the simulation study below, we fix the algorithmic parameters as follows:
\begin{enumerate}
\item For the strong error, the proxy-true solution is calculated with $h=10^{-4}$ and the approximations are calculated with $h\in\{10^{-3},2\times10^{-3},\dots,10^{-1} \}$, with $N=1000$ at $T=1$ and using the same Brownian motion paths. For each method (SSM and Taming), the proxy-true solution is provided by that same method.
\item For the PoC error, the proxy-true solution is calculated with $N=2560$ and the approximations are calculated with $N\in\{40,80,\dots,1280\}$, with $h=0.001$ at $T=1$ and using the same Brownian motion paths.
\item The implicit step \eqref{eq:SSTM:scheme 0} of the SSM algorithm is solved, in our examples, via a Newton method iteration. We point the reader to Appendix \ref{appendix: discussion on Newton's method} for a full discussion. In practice, $2$ to $4$ Newton iterations are sufficient to ensure that the difference between two consecutive Newton iterates is not larger than $\sqrt{h}$ in the $\|\cdot\|_\infty$-norm (in $\mathbb{R}^{dN}$).
\end{enumerate}
Lastly, the symbol $\mathcal{N}(\alpha,\beta)$ denotes the normal distribution with mean $\alpha\in \mathbb{R}$ and variance $\beta\in (0,\infty)$.
\color{black}
\subsection{Example: the granular media equation}
\label{exam:StochGinzburg}
The first example is the granular media Fokker-Planck equation, taking the form $\partial_t \rho =\nabla \cdot[\nabla \rho+\rho \nabla W * \rho]$ with $W(x)=\frac13|x|^3$, where $\rho$ is the corresponding probability density \cite{Malrieu2003,cattiaux2008probabilistic}. In MV-SDE form we have
\begin{align}
\label{eq:example:granular}
\mathrm{d} X_{t} &= v(X_{t},\mu_{t}^{X}) \mathrm{d} t + \sqrt{2}~\mathrm{d} W_{t}, ~ X_{0} \in L_{0}^{m}( \mathbb{R}^{d}), ~
v(x,\mu)= \int_{\mathbb{R}^{d} } \Big(-\text{sign}(x-y)|x-y|^2 \Big) \mu(\mathrm{d} y),
\end{align}
where $\text{sign}(\cdot)$ is the standard sign function and $\mu_t^X$ is the law of the solution process $X$ at time $t$.
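For this example the interaction kernel is $f(z)=-\mathrm{sign}(z)|z|^2=-\nabla W(z)$ and, in dimension one, the drift evaluation reduces to the following sketch (illustrative; it can be plugged into the single-step sketch given after Definition \ref{def:definition of the ssm}).

\begin{verbatim}
import numpy as np

def granular_drift(X):
    """v(X^i, mu^{X,N}) = -(1/N) sum_j sign(X^i - X^j) |X^i - X^j|^2, d = 1.
    X is a length-N array of particle positions."""
    diffs = X[:, None] - X[None, :]
    return -(np.sign(diffs) * diffs ** 2).mean(axis=1)
\end{verbatim}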
\begin{figure}[h!bt]
\centering
\begin{subfigure}{.48\textwidth}
\setlength{\abovecaptionskip}{-0.02cm}
\setlength{\belowcaptionskip}{-0.05 cm}
\centering
\includegraphics[scale=0.25]{FIG-220531-N01-gm.png}
\caption{Density with $X_0\sim \mathcal{N}(0,1)$}
\end{subfigure}%
\begin{subfigure}{.48\textwidth}
\setlength{\abovecaptionskip}{-0.02cm}
\setlength{\belowcaptionskip}{-0.05 cm}
\centering
\includegraphics[scale=0.25]{FIG-220531-N24-gm.png}
\caption{Density with $X_0\sim \mathcal{N}(2,16)$}
\end{subfigure}%
\\
\begin{subfigure}{.24\textwidth}
\setlength{\abovecaptionskip}{-0.02cm}
\setlength{\belowcaptionskip}{-0.2 cm}
\centering
\includegraphics[scale=0.23]{FIG-0531-gm-se.png}
\caption{Strong Error}
\end{subfigure}
\begin{subfigure}{.24\textwidth}
\setlength{\abovecaptionskip}{-0.02cm}
\setlength{\belowcaptionskip}{-0.2 cm}
\centering
\includegraphics[scale=0.23]{FIG-0622-gm-time.png}
\caption{Strong error vs. runtime}
\end{subfigure}
\begin{subfigure}{.24\textwidth}
\setlength{\abovecaptionskip}{-0.02cm}
\setlength{\belowcaptionskip}{-0.2 cm}
\centering
\includegraphics[scale=0.23]{FIG-0707-gm-sepath.png}
\caption{Strong error (Path) }
\end{subfigure}
\begin{subfigure}{.24\textwidth}
\setlength{\abovecaptionskip}{-0.02cm}
\setlength{\belowcaptionskip}{-0.2 cm}
\centering
\includegraphics[scale=0.23]{FIG-0622-gm-poc.png}
\caption{PoC Error }
\end{subfigure}
\setlength{\belowcaptionskip}{-0.2cm}
\caption{Simulation of the granular media equation \eqref{eq:example:granular} with $N=1000$ particles. (a) and (b) show the density map for Taming (blue, left) and SSM (orange, right) with $h=0.01$ at times $T=1,3,10$ seen top-to-bottom and with different initial distributions. (c) Strong error (rMSE) of SSM and Taming with $X_0 \sim \mathcal{N}(2,16)$. (d) Strong error (rMSE) of SSM and Taming w.r.t.\ algorithm runtime with $X_0 \sim \mathcal{N}(2,16)$. (e) Strong error (Path) of SSM and Taming with $X_0 \sim \mathcal{N}(2,16)$. (f) PoC error rate in $N$ of SSM and Taming with $X_0 \sim \mathcal{N}(2,9)$, with perfect overlap of the errors. }
\label{fig:1-2:granular example}
\end{figure}
This granular media model has been well studied in \cite{Malrieu2003,cattiaux2008probabilistic} and is a reference model to showcase the numerical approximation. For this specific case, starting from a normal distribution, the particles concentrate and move around the initial mean value (which is also the steady state).
In Figure \ref{fig:1-2:granular example} (a) and (b) one sees the evolution of the density map across times $T\in\{1,3,10\}$ for the two initial distributions $\mathcal{N}(0,1)$ and $\mathcal{N}(2,16)$ respectively, with $h=0.01$. In this case, both methods approximate the solution well, without any apparent leading difference between Taming and SSM.
Figure \ref{fig:1-2:granular example} (c) shows the strong error of both methods, computed at $T=1$ across $h\in \{ 10^{-3},2\times10^{-3},\dots, 10^{-1} \}$. The proxy-true solution for each method is taken at $h=10^{-4}$, and
baseline slopes for the ``order 1'' and ``order 0.5'' convergence rates are provided for comparison.
\color{black}
The estimated rate of both methods is $1.0$, in accordance with Theorem \ref{theorem:gm strong error} (under a constant diffusion coefficient).
Figure \ref{fig:1-2:granular example} (d) shows the strong error against algorithm runtime for both methods under the same setup as in (c). The SSM performs slightly better than the Taming method.
Figure \ref{fig:1-2:granular example} (e) shows the path-type strong error of both methods; compared to the results in (c), the SSM preserves the error rate of near $1.0$ and performs better than the Taming method.
\color{black}
Figure \ref{fig:1-2:granular example} (f) shows the PoC error of both methods; the two results coincide since the differences between the two methods are within $0.001$. The PoC rates are near $0.5$, which is better than the rate of $1/4$ obtained after taking the square root in Proposition \ref{Prop:Propagation of Chaos}. This result is similar to \cite[Example 4.1]{reisinger2020adaptive}, and is explained theoretically in \cite[Lemma 5.1]{delarue2018masterCLT}, albeit under stronger assumptions.
\subsection{Example: Double-well model}
\label{section:exam:toy-toy2}
We consider a limit model of particles under a symmetric double-well confinement. We test a variant of the model studied in \cite{2013doublewell}, but change its diffusion coefficient to a non-constant one (in contrast to the previous example). Concretely, we study the following McKean-Vlasov equation
\begin{align}
\label{eq:example:toy2}
\mathrm{d} X_{t} &= \big( v(X_{t},\mu_{t}^{X})+X_{t} \big) \mathrm{d} t + X_t \mathrm{d} W_{t}, \quad
v(x,\mu)= -\frac14 x^3+\int_{\mathbb{R}^{d} } -\big(x-y \big)^3 \mu(\mathrm{d} y).
\end{align}
The corresponding Fokker-Planck equation is
$ \partial_t \rho=\nabla \cdot [~\nabla ( \frac{\rho|x|^2}{2})+\rho \nabla V+\rho \nabla W * \rho ]$
with $W=\frac14|x|^4$, $V=\frac{1}{16}|x|^4-\frac{1}{2}|x|^2$, and $\rho$ the corresponding density. There are three steady states $\{-2,0,2\}$ for this model \cite{2013doublewell}.
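In vectorised form, and in dimension one, the drift and the non-constant diffusion of \eqref{eq:example:toy2} read as follows (illustrative sketch, names ours).

\begin{verbatim}
import numpy as np

def double_well_drift(X):
    """Full drift v(X^i, mu^{X,N}) + X^i of the double-well model:
    -(1/4) x^3 - (1/N) sum_j (x - X^j)^3 + x, for a length-N array X."""
    diffs = X[:, None] - X[None, :]
    return -0.25 * X ** 3 - (diffs ** 3).mean(axis=1) + X

def double_well_diffusion(X):
    """Multiplicative (non-constant) diffusion sigma(x) = x."""
    return X
\end{verbatim}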
\begin{figure}[h!bt]
\centering
\begin{subfigure}{.48\textwidth}
\setlength{\abovecaptionskip}{-0.02cm}
\setlength{\belowcaptionskip}{-0.05 cm}
\centering
\includegraphics[scale=0.25]{FIG-220531-N01-dw.png}
\caption{Density with $X_0\sim \mathcal{N}(0,1)$}
\end{subfigure}%
\begin{subfigure}{.48\textwidth}
\setlength{\abovecaptionskip}{-0.02cm}
\setlength{\belowcaptionskip}{-0.05 cm}
\centering
\includegraphics[scale=0.25]{FIG-220531-N39-dw.png}
\caption{Density with $X_0\sim \mathcal{N}(3,9)$}
\end{subfigure}%
\\
\begin{subfigure}{.45\textwidth}
\setlength{\abovecaptionskip}{-0.02cm}
\setlength{\belowcaptionskip}{-0.03 cm}
\centering
\includegraphics[scale=0.4]{FIG-220531-paths-dw.png}
\caption{Simulated paths of Taming (top) and SSM (bottom)}
\end{subfigure}
\begin{subfigure}{.26\textwidth}
\setlength{\abovecaptionskip}{-0.02cm}
\setlength{\belowcaptionskip}{-0.2 cm}
\centering
\includegraphics[scale=0.25]{FIG-0531-dw-se.png}
\caption{Strong error }
\end{subfigure}
\begin{subfigure}{.26\textwidth}
\setlength{\abovecaptionskip}{-0.02cm}
\setlength{\belowcaptionskip}{-0.2 cm}
\centering
\includegraphics[scale=0.25]{FIG-0531-dw-time.png}
\caption{Strong error vs. runtime}
\end{subfigure}
\setlength{\belowcaptionskip}{-0.2cm}
\caption{Simulation of the Double-Well model \eqref{eq:example:toy2} with $N=1000$ particles.
(a) and (b) show the density map for Taming (blue, left) and SSM (orange, right) with $h=0.01$ at times $T=1,3,10$ seen top-to-bottom and with different initial distribution.
(c) Simulated paths by Taming (top) and SSM (bottom) with $h=0.01$ over $t\in[0,3]$ and with $X_0\sim \mathcal{N}(3,9)$.
(d) Strong error (rMSE) of SSM and Taming with $X_0 \sim \mathcal{N}(2,4)$.
(e) Strong error (rMSE) of SSM and Taming against algorithm runtime with $X_0 \sim \mathcal{N}(2,4)$.
}
\label{fig:3:the toy example3}
\end{figure}
The example of Section \ref{exam:StochGinzburg} was relatively mild, with additive noise, and both methods performed well. For the double-well model \eqref{eq:example:toy2}, the drift includes super-linear growth components in both space and measure, and the diffusion is non-constant.
In Figure \ref{fig:3:the toy example3} (a) and (b), Taming (blue, left) fails to produce acceptable results of any type -- Figure \ref{fig:3:the toy example3} (c) shows the simulated paths of both methods, where it is noteworthy that Taming becomes unstable while the SSM paths remain stable. With respect to Figure \ref{fig:3:the toy example3} (a) and (b), the SSM (orange, right) depicts the distribution's evolution towards one of the expected stable states ($x=2$) as time evolves. It is interesting to see that for the SSM in (a), where $X_0\sim \mathcal{N}(0,1)$, the particles shift from the zero (unstable) steady state to the positive stable steady state $x=2$. However, in (b) with $X_0 \sim \mathcal{N}(3,9)$, the particles remain within the basin of attraction of the stable state $x=2$.
Figure \ref{fig:3:the toy example3} (d) displays, under the same parameter choices for $h,T$ as in the granular media example of Section \ref{exam:StochGinzburg} and with $X_0\sim \mathcal{N}(2,4)$, the estimated rate of convergence of the schemes. It shows that the taming method fails to converge (but does not explode). The strong error rate of the SSM is the expected $1/2$, in line with Theorem \ref{theorem:SSM: strong error 1} (and Theorem \ref{theorem:SSM: strong error 2}).
The ``order 1'' and ``order 0.5'' lines are baselines corresponding to slopes of $1$ and $0.5$.
\color{black}
Figure \ref{fig:3:the toy example3} (e) shows that, to reach the same strong error level, Taming takes far more (over 100 times more) runtime than the SSM.
\color{black}
\subsection{Example: 2d degenerate Van der Pol (VdP) oscillator}
\label{section:example:vdp}
We consider the kinetic-type Van der Pol (VdP) model described in
\cite[Sections 4.2 and 4.3]{HutzenthalerJentzen2015AMSbook}, with added super-linearity in the measure and a non-constant diffusion. We study the following MV-SDE dynamics: set $x=(x_1,x_2)\in\mathbb{R}^2$, and for \eqref{Eq:General MVSDE} define the functions $f,u,b,\sigma$ as
\begin{align}
\label{eq:example:vdp}
f(x)=-x|x|^2
,~
u(x)=
\left[\begin{array}{c}
-\frac43 x_1^3 \\
0
\end{array}\right],
~
b(x)=
\left[\begin{array}{c}
4(x_1-x_2) \\
\frac{1}{4}x_1
\end{array}\right],
~
\sigma (x)=
\left[\begin{array}{ccc}
x_1 & 0 \\
0 & x_2
\end{array}\right],
\end{align}
which are known to satisfy the assumptions of this work.
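For completeness, the model functions in \eqref{eq:example:vdp} can be vectorised over an $N$-particle cloud as follows (illustrative sketch, names ours; the diagonal diffusion is returned as a vector and applied componentwise to the Brownian increments).

\begin{verbatim}
import numpy as np

def vdp_f(z):
    """f(z) = -z |z|^2, acting on (..., 2) arrays of differences."""
    return -z * np.sum(z ** 2, axis=-1, keepdims=True)

def vdp_u(X):
    """u(x) = (-(4/3) x_1^3, 0) for an (N, 2) particle array X."""
    out = np.zeros_like(X)
    out[:, 0] = -(4.0 / 3.0) * X[:, 0] ** 3
    return out

def vdp_b(X):
    """b(x) = (4 (x_1 - x_2), x_1 / 4)."""
    return np.stack([4.0 * (X[:, 0] - X[:, 1]), 0.25 * X[:, 0]], axis=-1)

def vdp_sigma(X):
    """Diagonal entries of sigma(x) = diag(x_1, x_2)."""
    return X
\end{verbatim}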
Figure \ref{fig:1-3:vdp example} (a) shows the strong error of both methods; the ``order 1'' and ``order 0.5'' lines are baselines with slopes of $1$ and $0.5$ for comparison. The estimated rate of the SSM is near $0.5$ while Taming fails to converge.
Figure \ref{fig:1-3:vdp example} (b) shows the PoC error of both methods; Taming fails to converge while the estimated rate of the SSM is near $0.5$ (see the discussion in Section \ref{exam:StochGinzburg}).
Figure \ref{fig:1-3:vdp example} (c) shows the system's phase-space portraits (i.e., the parametric plot of $t\mapsto (X_1(t),X_2(t))$ and $t\mapsto (\mathbb{E}[X_1(t)],\mathbb{E}[X_2(t)])$ over $t\in [0,20]$) of the SSM with respect to different choices of $N\in\{30,100,500,1000\}$. The impact of $N$ on the quality of the simulation is apparent, as is the ability of the SSM to capture the periodic behaviour of the true dynamics.
Figure \ref{fig:1-3:vdp example} (d)-(e)-(f)-(g) show the expectation fluctuation (of Figure \ref{fig:1-3:vdp example} (c)) and the system's phase-space path portraits of the SSM for different choices of $N$. The trajectories become smoother as $N$ becomes larger and the paths are similar for $N\ge 500$.
\begin{figure}[h!bt]
\centering
\begin{subfigure}{.26\textwidth}
\setlength{\abovecaptionskip}{-0.04cm}
\setlength{\belowcaptionskip}{-0.1 cm}
\centering
\includegraphics[scale=0.26]{FIG-0531-vdp-se.png}
\caption{Strong error}
\label{fig:13-2}
\end{subfigure}
\begin{subfigure}{.26\textwidth}
\setlength{\abovecaptionskip}{0.05cm}
\setlength{\belowcaptionskip}{-0.05 cm}
\centering
\includegraphics[scale=0.25]{FIG-0601-vdp-poc.png}
\caption{PoC error}
\label{fig:13-3}
\end{subfigure}
\begin{subfigure}{.46\textwidth}
\setlength{\abovecaptionskip}{-0.04cm}
\setlength{\belowcaptionskip}{-0.1 cm}
\centering
\includegraphics[scale=0.46]{FIG-0601-vdp-2.png}
\caption{Expectation paths of SSM w.r.t $N$}
\label{fig:13-1}
\end{subfigure}%
\\
\centering
\begin{subfigure}{.24\textwidth}
\setlength{\abovecaptionskip}{-0.04cm}
\setlength{\belowcaptionskip}{-0.1 cm}
\centering
\includegraphics[scale=0.58]{FIG-0614-vdp-p1.png}
\caption{Phase graph of $N=30$}
\end{subfigure}
\begin{subfigure}{.24\textwidth}
\setlength{\abovecaptionskip}{-0.04cm}
\setlength{\belowcaptionskip}{-0.1 cm}
\centering
\includegraphics[scale=0.58]{FIG-0614-vdp-p2.png}
\caption{Phase graph of $N=100$}
\end{subfigure} \begin{subfigure}{.24\textwidth}
\setlength{\abovecaptionskip}{-0.04cm}
\setlength{\belowcaptionskip}{-0.1 cm}
\centering
\includegraphics[scale=0.58]{FIG-0614-vdp-p3.png}
\caption{Phase graph of $N=500$}
\end{subfigure} \begin{subfigure}{.24\textwidth}
\setlength{\abovecaptionskip}{-0.04cm}
\setlength{\belowcaptionskip}{-0.1 cm}
\centering
\includegraphics[scale=0.58]{FIG-0614-vdp-p4.png}
\caption{Phase graph of $N=1000$}
\end{subfigure}
\setlength{\belowcaptionskip}{-0.2cm}
\caption{Simulation of the VdP model \eqref{eq:example:vdp} with $X_1\sim \mathcal{N}(0,4), X_2\sim \mathcal{N}(-2,4)$. (a) Strong error (rMSE) of the SSM and Taming with $T=1,~N=1000$. (b) PoC error of the SSM and Taming with $T=1,~h=0.001$.
(c) The expectation paths of the SSM with $T=20$, $h=0.01$ for different $N$.
(d)-(e)-(f)-(g) The corresponding phase-space portraits of (c) with $N\in\{ 30,100,500,1000\} $.
}
\label{fig:1-3:vdp example}
\end{figure}
\subsection{Numerical complexity, discussion and various open questions}
\label{sec:discussionOfNumerics}
Across the three examples the SSM converged, and all examples recovered the theoretical convergence rate ($1/2$ in general, and $1$ for the additive noise case). In the latter two examples Taming failed to converge, while in the first example the SSM and Taming behave mostly similarly. The main difference between the examples is the diffusion coefficient.
The SSM is robust with respect to small choices of $h$ and $N$. In all three examples the SSM remains convergent for all choices of $h$ (even for $h=0.1$), whereas in the latter two examples Taming fails to converge at all. In the degenerate Van der Pol (VdP) oscillator example of Section \ref{section:example:vdp}, when comparing across different particle sizes $N$, the SSM provides a good approximation for all choices of $N$ (even for $N=30$) and the PoC result is as expected. In general, we found that the runtime of the SSM is nearly double that of Taming for the same choice of $h$; on the other hand, Taming takes over 100 times more runtime to reach the same accuracy as the SSM (when considering the strong error against runtime).
\subsubsection*{Computational costs and open pathways to be explored in future research}
In the context of simulating particle systems like \eqref{Eq:MV-SDE Propagation}, which involve convolution-type operators and hence interaction terms that need to be computed for every single particle, assume one wants to simulate an $N$-particle system over a discretised finite time-domain with $M$ time points. In such a setting, a standard explicit Euler scheme incurs a computational cost of $\mathcal{O}(N^2M)$; without the convolution component, the cost is simply $\mathcal{O}(N M)$.
For the SSM scheme of Definition \ref{def:definition of the ssm}, since it has an implicit component there is an additional cost attached to it (more below).
At this level, two strategies can be considered to reduce the complexity.
The first is to control the cost of computing the interaction itself; such strategies have been proposed, for example, in the projected particle method \cite{belomestny2018projected} or the Random Batch Method (RBM) \cite{jin2020rbm1}. To date there is no general proof of the efficacy of these methods outside Lipschitz conditions (and constant diffusion coefficient in the RBM case); moreover, it is not clear how to use these methods in combination with the Newton solver for the SSM's implicit equation (more below). The second is to better address the competition between the number of particles $N$, as dictated by the PoC result of Proposition \ref{Prop:Propagation of Chaos}, and the time-step parameter $M$ (or $1/h$). Our experimental work estimating the propagation of chaos rate points to a convergence rate of order $1/2$ instead of the upper-bound rate $1/4$ guaranteed by \eqref{eq:poc result} in Proposition \ref{Prop:Propagation of Chaos}. This is not surprising in view of the theoretical result \cite[Lemma 5.1]{delarue2018masterCLT} and the numerical findings of \cite[Example 4.1]{reisinger2020adaptive}. To the best of our knowledge, no known PoC rate result covers the examples presented here, and Proposition \ref{Prop:Propagation of Chaos} is presently the best known general result.
\subsubsection*{Solving the implicit step in SSM - Newton's method}
The SSM scheme contains an implicit equation \eqref{eq:SSTM:scheme 0} that needs to be solved at each timestep. It is left to the user to choose the most suitable method for the given data and, in all generality, one needs an approximation scheme to solve \eqref{eq:SSTM:scheme 0}. Proposition \ref{prop: discussion of the approxi} below shows that, as long as said approximation is uniformly within a ball of radius $Ch$ of the true solution, the SSM's convergence rate of Theorem \ref{theorem:SSM: strong error 1} is preserved.
As mentioned in the initial part of Section \ref{sec:examples}, we use Newton's method (assuming extra differentiability of the involved maps) -- see Appendix \ref{appendix: discussion on Newton's method} for details, where \cite[Section 4.3]{Suli2003NumericsBook} is used to guarantee convergence.
The computational cost rises from $\mathcal{O}(N^2M )$ to $\mathcal{O}(\kappa N^2M)$, where $\kappa$ denotes the number of Newton iterations (the leading cost factor). In practice, we found that within $2$ to $4$ iterations (i.e., $\kappa\leq 4$) two consecutive Newton iterates are sufficiently close for the purposes of the scheme's accuracy: denoting Newton's $j$-th iterate by $y^j \in \mathbb{R}^{dN}$, we require $\|y^\kappa-y^{\kappa-1}\|_\infty<\sqrt{h}$ (which is the stopping criterion used, see Appendix \ref{appendix: discussion on Newton's method}).
Interacting particle systems like \eqref{Eq:MV-SDE Propagation} induce a certain structure in the associated Jacobian matrix when seen through the lens of $(\mathbb{R}^{d})^N$.
The closed-form expressions provided in Appendix \ref{sec:NewtonMethodJacobian} point to a very sparse Jacobian matrix with a specific block structure. For instance, the $\Gamma$ matrix (see Appendix \ref{sec:NewtonMethodJacobian}) is symmetric and is multiplied by $h/N$, making its entries very small: it stands to reason that $\Gamma$ can be removed from the Jacobian matrix as one solves the system (provided its entries can be controlled), which suggests that inexact or quasi-Newton methods are computationally more efficient. In \cite[Section 3]{LiCheng2007MoreonNewtonMethod} the authors review \cite{SolodovSvaiter1999ConvInexactNewton}, who address the use of inexact Newton methods when the equation of interest \eqref{eq:SSTM:scheme 0} is a monotone map, which is indeed our case. The usage of Newton's method is not a primary element of discussion and, as does \cite{LiCheng2007MoreonNewtonMethod}, we point the reader to the comprehensive review \cite{Martinez2000ReviewonNewton} on practical quasi-Newton methods for nonlinear equations. In conclusion, it remains to be explored how different versions of Newton's method for sparse systems can be used to reduce the computational cost; in light of our study, however, we found Newton's method very fast and efficient, even in comparison with the explicit Euler taming method in Section \ref{exam:StochGinzburg}.
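For illustration, a dense-Jacobian version of this Newton iteration with the $\sqrt{h}$ stopping rule reads as follows (a minimal sketch; the closed-form expressions and the sparse block structure of Appendix \ref{sec:NewtonMethodJacobian} are deliberately not exploited).

\begin{verbatim}
import numpy as np

def newton_implicit_step(X_hat, h, V, jac_V, max_iter=10):
    """Solve G(Y) = Y - X_hat - h V(Y) = 0 by Newton's method, started at X_hat.

    X_hat : flattened state in R^{dN}; V : R^{dN} -> R^{dN}; jac_V(Y) : (dN, dN)
    Jacobian of V.  Iterations stop once two consecutive iterates differ by less
    than sqrt(h) in the sup-norm (in practice 2 to 4 iterations suffice)."""
    Y = X_hat.copy()
    I = np.eye(Y.size)
    for _ in range(max_iter):
        G = Y - X_hat - h * V(Y)
        J = I - h * jac_V(Y)              # Jacobian of G
        step = np.linalg.solve(J, G)
        Y = Y - step
        if np.max(np.abs(step)) < np.sqrt(h):
            break
    return Y
\end{verbatim}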
\color{black}
\section{Proof of split-step method (SSM) for MV-SDEs and interacting particle systems: convergence and stability}
\label{sec:theSSMresults}
The proof appearing in Section \ref{subsection:Proof of strong error 1} depends in no way on Theorem \ref{theorem:moment bound for the big theorm time extensions } or its proof (in Section \ref{subsection: Moment bound of the SSM}). Nonetheless, Section \ref{subsection: Moment bound of the SSM} strongly complements the full understanding of the proof in Section \ref{subsection:Proof of strong error 1}.
\subsection{Some properties of the scheme}
\label{subsection:properties of the scheme }
Recall the SSM scheme in Definition \ref{def:definition of the ssm}. In this section we clarify further the choice of $h$ and then introduce the two critical results arising from the SSM's structure. Note that throughout $C>0$ is a constant always independent of $h,N,M$.
\begin{remark}[Choice of $h$]
\label{remark: choice of h}
Let Assumption \ref{Ass:Monotone Assumption} hold; the constraint on $h$ in \eqref{eq:h choice} comes from
\eqref{eq:prop:yi-yj leq xi-xj}, \eqref{eq:sum y square leq sum x square} and \eqref{eq:se1:y-y} below, where $L_f,L_u \in \mathbb{R}$ and $L_{\tilde{u} }\ge 0$. Following the notation of those inequalities, under \eqref{eq:h choice}
and for $\zeta>0$, there exists $\xi \in(0,1)$ such that $h< {\xi}/{\zeta}$ and
\begin{align*}
&\max \Bigg\{ \frac{1}{1-2(L_f+L_u)h},~\frac{1}{1-(4 L_f^++2 L_u+2L_{\tilde{u} }+1)h},~\frac{1}{1-(4 L_f^++2 L_u+L_{\tilde{u} }+1)h} \Bigg\}< \frac{1}{1-\xi}.
\end{align*}
For $\zeta=0$, the result is trivial and we conclude that there exist constants $C_1,C_2>0$, independent of $h$, such that
\begin{align*}
\max \Bigg\{ \frac{1}{1-2(L_f+L_u)h},~\frac{1}{1-(4 L_f^++2 L_u+2L_{\tilde{u} }+1)h},~\frac{1}{1-(4 L_f^++2 L_u+L_{\tilde{u} }+1)h} \Bigg\}\le C_1\le 1+C_2h.
\end{align*}
As argued in Remark \ref{rem:Constraint on h is soft} the constraint on $h$ \textit{can be lifted}.
\end{remark}
\begin{lemma}
\label{lemma:SSTM:new functions def and properties1}
Choose $h$ as in \eqref{eq:h choice}.
Then, given any $X\in \mathbb{R}^{Nd}$ there exists a unique solution $Y\in \mathbb{R}^{Nd}$ to
\begin{align}\label{sstmmod func def}
Y=X+h V(Y).
\end{align}
The solution $Y$ is a measurable map of $X$.
\end{lemma}
\begin{proof}
Recall Remark \ref{remark:OSL for the whole function / system V}. The proof is an adaptation of the proof of \cite[Lemma 4.1]{2021SSM} to the $\mathbb{R}^{dN}$ case.
\end{proof}
\begin{proposition} [Differences relationship]
\label{prop:yi-yj leq xi-xj}
Let Assumption \ref{Ass:Monotone Assumption} hold and choose $h$ as in \eqref{eq:h choice}. For any $n\in \llbracket 0,M \rrbracket$ and $Y^{\star,N}_n$ in \eqref{eq:SSTM:scheme 0}-\eqref{eq:SSTM:scheme 1}, there exists a constant $C>0$ such that for all $i,~j\in \llbracket 1,N \rrbracket$,
\begin{align}\label{eq:prop:yi-yj leq xi-xj}
|Y_{n}^{i,\star,N}-Y_{n}^{j,\star,N}|^2
&\le
| \hat{X} _{n}^{i,N}- \hat{X} _{n}^{j,N}|^2 \frac{1}{1-2(L_f+L_u)h} \le (1+Ch) | \hat{X} _{n}^{i,N}- \hat{X} _{n}^{j,N}|^2.
\end{align}
\end{proposition}
\begin{proof}
Take $n\in \llbracket 0,M \rrbracket$, $i,~j\in \llbracket 1,N \rrbracket$. Using Remark \ref{remark:ImpliedProperties} and Young's inequality we have
\begin{align*}
&
|Y_{n}^{i,\star,N}-Y_{n}^{j,\star,N}|^2
\\
&=
\Big\langle
Y_{n}^{i,\star,N}-Y_{n}^{j,\star,N}
,
\hat{X} _{n}^{i,N}- \hat{X} _{n}^{j,N}
\Big\rangle
+
\Big\langle
Y_{n}^{i,\star,N}-Y_{n}^{j,\star,N}
,
v\left(Y_{n}^{i,\star,N}, \hat{\mu} ^{Y,N}_n\right)
-
v\left(Y_{n}^{j,\star,N}, \hat{\mu} ^{Y,N}_n\right)
\Big\rangle h
\\
&\le
\frac{1}{2} |Y_{n}^{i,\star,N}-Y_{n}^{j,\star,N} |^2
+
\frac{1}{2} | \hat{X} _{n}^{i,N}- \hat{X} _{n}^{j,N}|^2
+
(L_f+L_u) |Y_{n}^{i,\star,N}-Y_{n}^{j,\star,N}|^2 h.
\end{align*}
Rearranging the terms yields the first inequality in \eqref{eq:prop:yi-yj leq xi-xj}; the uniformity of the constant $C$ follows from Remark \ref{remark: choice of h}.
\end{proof}
\begin{proposition} [Summation relationship]
\label{prop:sum y square leq sum x square}
Let Assumption \ref{Ass:Monotone Assumption} hold and choose $h$ as in \eqref{eq:h choice}. For the process in \eqref{eq:SSTM:scheme 1} there exists a constant $C>0$ such that, for all $i\in \llbracket 1,N \rrbracket,~n\in \llbracket 0,M \rrbracket$,
\begin{align}\label{eq:sum y square leq sum x square}
\sum_{i=1}^N |Y_{n}^{i,\star,N}|^2
&\le
Ch+(1+Ch) \sum_{i=1}^N | \hat{X} _{n}^{i,N}|^2.
\end{align}
\end{proposition}
\begin{proof}
From \eqref{eq:SSTM:scheme 1} we have
\begin{align}
&\sum_{i=1}^N |Y_{n}^{i,\star,N}|^2 =\sum_{i=1}^N \bigg\{\Big\langle Y_{n}^{i,\star,N}, \hat{X} _{n}^{i,N} \Big\rangle + \Big\langle Y_{n}^{i,\star,N}, v(Y_{n}^{i,\star,N}, \hat{\mu} ^{Y,N}_n) \Big\rangle h \bigg\}
\nonumber\\
\label{eq:ystar2}
&\le\sum_{i=1}^N \bigg\{\frac{1}{2}|Y_{n}^{i,\star,N}|^2 + \frac{1}{2}| \hat{X} _{n}^{i,N}|^2
+\Big\langle Y_{n}^{i,\star,N}, u(Y_{n}^{i,\star,N}, \hat{\mu} ^{Y,N}_n) \Big\rangle h
+
\frac{h}{N}\sum_{j=1}^N \Big\langle Y_{n}^{i,\star,N}, f(Y_{n}^{i,\star,N}-Y_{n}^{j,\star,N}) \Big\rangle
\bigg\}.
\end{align}
By Assumption \ref{Ass:Monotone Assumption}, Young's inequality and using that the particles are identically distributed, we have
\begin{align*}
\sum_{i=1}^N\sum_{j=1}^N \Big\langle Y_{n}^{i,\star,N}, f(Y_{n}^{i,\star,N}&-Y_{n}^{j,\star,N}) \Big\rangle
=\frac12\sum_{i=1}^N\sum_{j=1}^N \Big\langle Y_{n}^{i,\star,N}- Y_{n}^{j,\star,N}, f(Y_{n}^{i,\star,N}-Y_{n}^{j,\star,N}) \Big\rangle
\\
&\le \frac{1}{2}\sum_{i=1}^N\sum_{j=1}^N
L_f |Y_{n}^{i,\star,N}-Y_{n}^{j,\star,N}|^2
\le
2N L_f^+\sum_{i=1}^N |Y_{n}^{i,\star,N}|^2, \quad L_f^+=\max\{L_f,0\}.
\end{align*}
Plugging this into \eqref{eq:ystar2} and using Remark \ref{remark:ImpliedProperties} with $\Lambda= 4 L_f^++2 L_u+2L_{\tilde{u} }+1$, we have
\begin{align*}
\sum_{i=1}^N |Y_{n}^{i,\star,N}|^2
&
\le
\sum_{i=1}^N \bigg\{| \hat{X} _{n}^{i,N}|^2
+
2h \big(
2 L_f^+ |Y_{n}^{i,\star,N}|^2
+C_u +\widehat L_u|Y_{n}^{i,\star,N}|^2+ L_{\tilde{u} } W^{(2)}( \hat{\mu} ^{Y,N}_n,\delta_0)^2 \big)
\bigg\}
\\
&\le
\sum_{i=1}^N \bigg\{| \hat{X} _{n}^{i,N}|^2
+
2h \big(
2 L_f^+ |Y_{n}^{i,\star,N}|^2
+C_u +\widehat L_u|Y_{n}^{i,\star,N}|^2
+
\frac{L_{\tilde{u} } }{N}\sum_{j=1}^N |Y_{n}^{j,\star,N}|^2
\big)
\bigg\}
\\
&\le \frac{1}{1-\Lambda h} \sum_{i=1}^N \bigg\{| \hat{X} _{n}^{i,N}|^2 +2C_u h \bigg\}
=
\sum_{i=1}^N \bigg \{ | \hat{X} _{n}^{i,N}|^2 (1+ h\frac{\Lambda }{1-\Lambda h}) +
\frac{2C_u h}{1-\Lambda h}
\bigg \} .
\end{align*}
Since the particles are identically distributed, Remark \ref{remark: choice of h} yields the argument.
\end{proof}
We provide the following auxiliary proposition to deal with the cross-product terms in the later proofs.
\begin{proposition}
\label{prop:auxilary for multipy of r.v.}
Take $N\in\mathbb{N}$ and $p\in\mathbb{N}$. For any sequence $\{a_i\}_{i\in \llbracket 1,N \rrbracket}\subset\mathbb{N}$ with $\sum_{i=1}^N a_i=p$, and any collection of identically distributed $L^p$-integrable random variables $\{X_i\}_i$, we have
\begin{align*}
\mathbb{E} \big[ \prod_{i=1}^N |X_i|^{a_i} \big]
\le \mathbb{E} \big[ |X_1|^p \big].
\end{align*}
\end{proposition}
\begin{proof}
Using the notation above, by Young's inequality, for any $ i,j\in \llbracket 1,N \rrbracket$ we have
\begin{align*}
|X_i|^{a_i}|X_j|^{a_j}
\le
\frac{a_i}{a_i+a_j}|X_i|^{a_i+a_j}+\frac{a_j}{a_i+a_j}|X_j|^{a_i+a_j}.
\end{align*}
Thus, by induction and using that the $\{X_i\}_i$ are identically distributed, the result follows.
\end{proof}
\subsection[Proof of the point-wise convergence result]{Proof of Theorem \ref{theorem:SSM: strong error 1}: the point-wise mean-square convergence result}\label{subsection:Proof of strong error 1}
We provide here the proof of Theorem \ref{theorem:SSM: strong error 1}. Throughout this section we follow the notation introduced in Theorem \ref{theorem:SSM: strong error 1}, let Assumption \ref{Ass:Monotone Assumption} hold, choose $h$ as in \eqref{eq:h choice}, and take $m\ge 4q+4$, where $m$ is defined in \eqref{Eq:General MVSDE} and $q$ in Assumption \ref{Ass:Monotone Assumption}.
\begin{proof}
Let $i\in\llbracket 1,N\rrbracket$, $n\in\llbracket 0,M-1\rrbracket$, $s\in[0,h]$ and $p\ge 2$ with $m\ge 4q+4$, and define the following auxiliary processes, using the same notation as in \eqref{Eq:MV-SDE Propagation},
\begin{align*}
X_{n}^{i,N}&=X_{t_n}^{i,N},\quad \Delta X^i_{t_n+s} =X_{t_n+s}^{i,N}- \hat{X} _{t_n+s}^{i,N} ,\quad t_n=nh, \quad
\Delta W_{n,s}^i=W_{t_n+s}^i-W_{t_n}^i,
\\
Y_{n}^{i,X,N} &=X_{n}^{i,N}+h v (Y_{n}^{i,X,N},\mu^{Y,X,N}_n ),
\quad
\quad
\mu^{Y,X,N}_n(\mathrm{d} x):= \frac1N \sum_{j=1}^N \delta_{Y_{n}^{j,X,N}}(\mathrm{d} x),
\\
X_{t_n+s}^{i,N}
&
=
X_{n}^{i,N}+
\big( v (Y_n^{i,X,N},\mu^{Y,X,N}_{n} )
+b(t_n,Y_n^{i,X,N},\mu^{Y,X,N}_{n}) \big) s
+ \sigma(t_n,Y_n^{i,X,N},\mu^{Y,X,N}_{n}) \Delta W_{n,s}^i.
\end{align*}
For all $n\in\llbracket 0,M-1\rrbracket,~i\in \llbracket 1,N \rrbracket,~r\in[0,h]$, we have from \eqref{eq: scheme continous extension in SDE form},
\begin{align*}
&|\Delta X^i_{t_n+r}|^2
=
\Big|
\Delta X^i_{t_n}
+
\int_{t_n}^{t_{n}+r} \big( v(X_{s}^{i,N},\mu_{s}^{X,N})-v(Y_n^{i,X,N},\mu^{Y,X,N}_{n})
\big) \mathrm{d} s
\\
&+
\int_{t_n}^{t_{n}+r} \big( v(Y_n^{i,X,N},\mu^{Y,X,N}_{n}) -v\left(Y_{n}^{i,\star,N}, \hat{\mu} ^{Y,N}_n\right)
\big) \mathrm{d} s
+
\int_{t_n}^{t_{n}+r} \big( b(s,X_{s}^{i,N},\mu_{s}^{X,N})-b(t_n,Y_n^{i,X,N},\mu^{Y,X,N}_{n})
\big) \mathrm{d} s
\\
&+ \int_{t_n}^{t_{n}+r} \big( b(t_n,Y_n^{i,X,N},\mu^{Y,X,N}_{n})
- b(t_n,Y_{n}^{i,\star,N}, \hat{\mu} ^{Y,N}_n)
\big) \mathrm{d} s
\\
&
+
\int_{t_n}^{t_{n}+r} \big( \sigma(s,X_{s}^{i,N},\mu_{s}^{X,N})
-\sigma(t_n,Y_n^{i,X,N},\mu^{Y,X,N}_{n})
\big) \mathrm{d} W^i_s
\\
&
+ \int_{t_n}^{t_{n}+r} \big( \sigma(t_n,Y_n^{i,X,N},\mu^{Y,X,N}_{n})
- \sigma(t_n,Y_{n}^{i,\star,N}, \hat{\mu} ^{Y,N}_n)
\big) \mathrm{d} W^i_s \Big|^2 .
\end{align*}
Taking expectations on both sides, using Jensen's inequality and It\^o's isometry, we have
\begin{align}
\label{eq:se1:ultimate eqaution IIII}
\mathbb{E} \big[ |\Delta X^i_{t_n+r}|^2 \big]
&\le
(1+h) I_1+(1+\frac{1}{h}) I_2+2 I_3+2 I_4,
\end{align}
where the terms $I_1,I_2,I_3,I_4$ are defined as follows
\begin{align}
I_1= \mathbb{E} \Big[ \Big|&
\Delta X^i_{t_n}
+
\int_{t_n}^{t_{n}+r} \big( v(Y_n^{i,X,N},\mu^{Y,X,N}_{n}) -v\left(Y_{n}^{i,\star,N}, \hat{\mu} ^{Y,N}_n\right)
\big) \mathrm{d} s
\\
&+ \int_{t_n}^{t_{n}+r} \big( b(t_n,Y_n^{i,X,N},\mu^{Y,X,N}_{n})
- b(t_n,Y_{n}^{i,\star,N}, \hat{\mu} ^{Y,N}_n)
\big) \mathrm{d} s
\Big|^2\Big],
\end{align}
\begin{align}
I_2= \mathbb{E} \Big[ \Big| & \int_{t_n}^{t_{n}+r} \big( v(X_{s}^{i,N},\mu_{s}^{X,N})-v(Y_n^{i,X,N},\mu^{Y,X,N}_{n})
\big) \mathrm{d} s
\\
&+
\int_{t_n}^{t_{n}+r} \big( b(s,X_{s}^{i,N},\mu_{s}^{X,N})-b(t_n,Y_n^{i,X,N},\mu^{Y,X,N}_{n})
\big) \mathrm{d} s
\Big|^2\Big],
\end{align}
\begin{align}
I_3=& \mathbb{E} \Big[ \Big|
\int_{t_n}^{t_{n}+r} \big( \sigma(s,X_{s}^{i,N},\mu_{s}^{X,N})
-\sigma(t_n,Y_n^{i,X,N},\mu^{Y,X,N}_{n})
\big) \mathrm{d} W^i_s
\Big|^2\Big],
\end{align}
\begin{align}
I_4=& \mathbb{E} \Big[ \Big|
\int_{t_n}^{t_{n}+r} \big( \sigma(t_n,Y_n^{i,X,N},\mu^{Y,X,N}_{n})
- \sigma(t_n,Y_{n}^{i,\star,N}, \hat{\mu} ^{Y,N}_n)
\big) \mathrm{d} W^i_s
\Big|^2\Big].
\end{align}
For $I_1$, Young's inequality yields
\begin{align}
\nonumber
I_1& =
\mathbb{E} \Big[
\Big|
X_{n}^{i,N}+
\big(
V_n^{Y,i}
+b(t_n,Y_n^{i,X,N},\mu^{Y,X,N}_{n}) \big) r
-
\hat{X} _{n}^{i,N}-
\big(
V_n^{*,i}
+b(t_n,Y_n^{i,\star,N}, \hat{\mu} ^{Y,N}_{n}) \big) r
\Big|^2 \Big]
\\
\label{eq:I1:term1}
&\le \mathbb{E} \Big[
\Big|
X_{n}^{i,N}-
\hat{X} _{n}^{i,N}+
\big(
V_n^{Y,i}-V_n^{*,i}\big)
r
\Big|^2\Big](1+\frac{h}{2})
\\
\label{eq:I1:term2}
&\quad
+ \mathbb{E} \Big[ \Big|
b(t_n,Y_n^{i,X,N},\mu^{Y,X,N}_{n})
-b(t_n,Y_n^{i,\star,N}, \hat{\mu} ^{Y,N}_{n})
\Big|^2 \Big](\frac{h}{2}+h),
\end{align}
where $ V_n^{Y,i}$ and $V_n^{*,i}$ stand for
$V_n^{Y,i}= v (Y_n^{i,X,N},\mu^{Y,X,N}_{n} )$ and
$V_n^{*,i} = v (Y_n^{i,\star,N}, \hat{\mu} ^{Y,N}_{n} )$ respectively.
For \eqref{eq:I1:term1}, recalling the SSM step \eqref{eq:SSTM:scheme 1}, we have
\begin{align*}
\mathbb{E} \Big[
\Big|
X_{n}^{i,N}-
\hat{X} _{n}^{i,N}
+ &
\big( V_n^{Y,i}
-
V_n^{*,i} \big)r
\Big|^2\Big]
\\
=&\mathbb{E} \Big[
\Big\langle X_{n}^{i,N}-
\hat{X} _{n}^{i,N}
+
\big( V_n^{Y,i} - V_n^{*,i} \big)r,
Y_n^{i,X,N}-
Y_n^{i,\star,N}
+
\big( V_n^{Y,i} - V_n^{*,i} \big)(r-h)
\Big\rangle\Big]
\\
=&\mathbb{E} \Big[
\Big\langle X_{n}^{i,N}-
\hat{X} _{n}^{i,N},
Y_n^{i,X,N}-
Y_n^{i,\star,N}
\Big\rangle\Big]
+
\mathbb{E} \Big[
\Big\langle X_{n}^{i,N}-
\hat{X} _{n}^{i,N},
\big( V_n^{Y,i} - V_n^{*,i} \big)
\Big\rangle\Big](r-h)
\\
&
+
\mathbb{E} \Big[
\Big\langle Y_n^{i,X,N}-
Y_n^{i,\star,N},
\big( V_n^{Y,i} - V_n^{*,i} \big)
\Big\rangle\Big]r
-r(h-r)
\mathbb{E} \Big[
\Big| V_n^{Y,i} - V_n^{*,i} \Big|^2\Big].
\end{align*}
Using the relationship that \eqref{eq:SSTM:scheme 1} induces, we have
\begin{align*}
V_n^{Y,i} - V_n^{*,i} =\frac{ Y_n^{i,X,N}-X_{n}^{i,N}
+Y_n^{i,\star,N}-
\hat{X} _{n}^{i,N}}{h}.
\end{align*}
Since $r\in[0,h]$ and $r(h-r)\ge 0$, Young's inequality yields
\begin{align}
\nonumber
&\mathbb{E} \Big[
\Big|
X_{n}^{i,N}-
\hat{X} _{n}^{i,N}
+
\big( V_n^{Y,i}
-
V_n^{*,i} \big)r
\Big|^2\Big]
\\ \nonumber
& \le
\mathbb{E} \Big[
| X_{n}^{i,N}-
\hat{X} _{n}^{i,N}|^2
\Big] \frac{h-r}{h}
+
\mathbb{E} \Big[
\Big\langle X_{n}^{i,N}-
\hat{X} _{n}^{i,N},
Y_n^{i,X,N}-
Y_n^{i,\star,N}
\Big\rangle\Big]\frac{r}{h}
+
\mathbb{E} \Big[
\Big\langle Y_n^{i,X,N}-
Y_n^{i,\star,N},
V_n^{Y,i} - V_n^{*,i}
\Big\rangle\Big]r
\\
& \le
\mathbb{E} \Big[
| X_{n}^{i,N}-
\hat{X} _{n}^{i,N}|^2
\Big] \frac{2h-r}{2h}
+
\mathbb{E} \Big[
|
Y_n^{i,X,N}-
Y_n^{i,\star,N}
|^2\Big]\frac{r}{2h}
\label{eq:se1: y-y v-v source 0}
+
\mathbb{E} \Big[
\Big\langle Y_n^{i,X,N}-
Y_n^{i,\star,N},V_n^{Y,i} - V_n^{*,i}
\Big\rangle\Big]r.
\end{align}
Also, for \eqref{eq:I1:term2}, using Assumption \ref{Ass:Monotone Assumption} and that the particles are identically distributed
\begin{align}
\nonumber
\mathbb{E} \Big[ \Big|b(t_n&,Y_n^{i,X,N},~\mu^{Y,X,N}_{n})
-b(t_n,Y_{n}^{i,\star,N}, \hat{\mu} ^{Y,N}_n)\Big|^2 \Big]
\\
\nonumber
\le&
C\mathbb{E} \Big[ |Y_n^{i,X,N}-Y_n^{i,\star,N}|^2+ W^{(2)}(\mu^{Y,X,N}_{n}, \hat{\mu} ^{Y,N}_n)^2\Big]
\\
\label{eq:se1:by-by}
\le&
C\mathbb{E} \big[ |Y_n^{i,X,N}-Y_n^{i,\star,N}|^2\big]
+ C \mathbb{E} \Big[ \frac{1}{N} \sum_{j=1}^N |Y_n^{j,X,N}-Y_n^{j,\star,N}|^2\Big]
\le C\mathbb{E} \big[ |Y_n^{i,X,N}-Y_n^{i,\star,N}|^2\big].
\end{align}
By Assumption \ref{Ass:Monotone Assumption} and using Young's inequality once again
\begin{align}
\label{eq:se1:Yi-Yi star}
\mathbb{E} \big[ |&Y_n^{i,X,N} -Y_n^{i,\star,N}|^2 \big]
\le
\mathbb{E} \Big[ \Big\langle Y_n^{i,X,N}-Y_n^{i,\star,N},
X_{n}^{i,N}- \hat{X} _{n}^{i,N}+V_n^{Y,i}- V_n^{*,i}
\Big\rangle \Big]h
\\
&
\le
\mathbb{E} \Big[
\frac{1}{2} |Y_n^{i,X,N}-Y_n^{i,\star,N}|^2 +\frac{1}{2} |X_{n}^{i,N}- \hat{X} _{n}^{i,N}|^2 \Big]
\label{eq:se1: y-y v-v}
+\mathbb{E} \Big[
\Big\langle Y_n^{i,X,N}-Y_n^{i,\star,N},
V_n^{Y,i}- V_n^{*,i}
\Big\rangle \Big] h.
\end{align}
For the last term \eqref{eq:se1: y-y v-v}, since the particles are identically distributed, Assumption \ref{Ass:Monotone Assumption} and Remark \ref{remark:OSL for the whole function / system V} yield
\begin{align}
\nonumber
\label{eq:se1:y-y v-v}
\mathbb{E} \Big[
\Big\langle Y_n^{i,X,N}-Y_n^{i,\star,N},
V_n^{Y,i}- V_n^{*,i}
\Big\rangle \Big]
&
\le
\mathbb{E} \Big[ \frac{1}{N} \sum_{j=1}^N \Big\langle Y_n^{j,X,N}-Y_n^{j,\star,N},
V_n^{Y,j}- V_n^{*,j}
\Big\rangle \Big]
\\
&\le \Big(2L_f^++L_u+\frac{1}{2}+\frac{L_{\tilde{u} }}{2}\Big) \mathbb{E} \big[ | Y_n^{i,X,N}-Y_n^{i,\star,N}|^2 \big].
\end{align}
Thus, injecting \eqref{eq:se1:y-y v-v} back into \eqref{eq:se1: y-y v-v} and \eqref{eq:se1:Yi-Yi star}, and setting $\Gamma_2=4L_f^++2L_u+L_{\tilde{u} }+1$, Remark \ref{remark: choice of h} yields
\begin{align}
\label{eq:se1:y-y}
&\mathbb{E} \big[ |Y_n^{i,X,N} -Y_n^{i,\star,N}|^2 \big]
\le \frac{1}{1-\Gamma_2 h}
\mathbb{E} \big[ |X_{n}^{i,N}- \hat{X} _{n}^{i,N}|^2 \big]
\le \mathbb{E} \big[ |X_{n}^{i,N}- \hat{X} _{n}^{i,N}|^2 \big] (1+Ch).
\end{align}
Plugging \eqref{eq:se1:y-y} and \eqref{eq:se1:y-y v-v} back into \eqref{eq:se1: y-y v-v source 0}, \eqref{eq:se1:by-by} and \eqref{eq:I1:term1}, we conclude that
\begin{align}
\label{eq:se1:I1 final result}
I_1\le \mathbb{E} \big[ |X_{n}^{i,N}- \hat{X} _{n}^{i,N}|^2 \big] (1+Ch).
\end{align}
For $I_2$, by Young's and Jensen's inequality, we have
\begin{align}
\label{eq:se1:I2 term 1}
I_2\le h~\mathbb{E} \Big[ & \int_{t_n}^{t_{n}+h} \Big| v(X_{s}^{i,N},\mu_{s}^{X,N})-v(Y_n^{i,X,N},\mu^{Y,X,N}_{n})
\Big|^2 \mathrm{d} s
\\
\label{eq:se1:I2 term 2}
&+
\int_{t_n}^{t_{n}+h} \Big| b(s,X_{s}^{i,N},\mu_{s}^{X,N})-b(t_n,Y_n^{i,X,N},\mu^{Y,X,N}_{n})
\Big|^2 \mathrm{d} s
\Big].
\end{align}
For \eqref{eq:se1:I2 term 1}, from Assumption \ref{Ass:Monotone Assumption}, using Young's, Jensen's and the Cauchy-Schwarz inequalities,
\begin{align}
\nonumber
&\mathbb{E} \Big[ \Big|
v( X_{s}^{i,N},\mu_s^{X,N})
- v( Y_n^{i,X,N},\mu_n^{Y,X,N}) \Big|^2\Big]
\\ \label{eq:se1:f-f things}
&\le C
\mathbb{E} \Big[\Big|
u( X_{s}^{i,N},\mu_s^{X,N})
- u( Y_n^{i,X,N},\mu_n^{Y,N})
\Big|^2
+
\frac{1}{N} \sum_{j=1}^N
\Big|
f(X_{s}^{i,N}-X_{s}^{j,N} )-
f(Y_{n}^{i,N}-Y_{n}^{j,N} )
\Big|^2
\Big]
\\ \nonumber
&\le \frac{C}{N} \sum_{j=1}^N
\mathbb{E} \Big[ \Big| \Big(
1+|X_{s}^{i,N}-X_{s}^{j,N}|^{q} +|Y_{n}^{i,N}-Y_{n}^{j,N}|^{q} \Big)
|X_{s}^{i,N}-Y_{n}^{i,N}-(X_{s}^{j,N}-Y_{n}^{j,N})| \Big|^2 \Big]
\\ \nonumber
&~
+
C \mathbb{E} \Big[ (1+| X_{s}^{i,N}|^{2q} +| Y_n^{i,X,N}|^{2q})(|X_{s}^{i,N}-Y_n^{i,X,N}|^{2})+ \frac{1}{N} \sum_{j=1}^N | X_{s}^{j,N}- Y_n^{j,X,N} |^2 \Big]
\\
\label{eq:se1: v-v inequality 1}
&\le C
\sqrt{ \mathbb{E} \Big[ 1+| X_{s}^{i,N}|^{4q} +| Y_n^{i,X,N}|^{4q} \Big] \mathbb{E} \Big[ |X_{s}^{i,N}-Y_n^{i,X,N}|^{4} \Big] }
+
\mathbb{E} \Big[ \frac{1}{N} \sum_{j=1}^N | X_{s}^{j,N}- Y_n^{j,X,N} |^2 \Big]
\\
\label{eq:se1: v-v inequality 2}
&~ + \frac{C}{N} \sum_{j=1}^N
\sqrt{
\mathbb{E} \Big[ 1+|X_{s}^{i,N}-X_{s}^{j,N}|^{4q} +|Y_{n}^{i,N}-Y_{n}^{j,N}|^{4q} \Big]
\mathbb{E} \Big[ |X_{s}^{i,N}-Y_{n}^{i,N}|^{4} +|X_{s}^{j,N}-Y_{n}^{j,N}|^{4} \Big]
}.
\end{align}
Using the structure of the SSM, Young's and Jensen's inequalities, and Proposition \ref{prop:yi-yj leq xi-xj}, we have
\begin{align}
\label{eq:se1:Xs-Yn 2 estimate}
|X_{s}^{i,N}-Y_n^{i,N} |^2
&
\le
2 |X_{s}^{i,N}-X_{n}^{i,N} |^2+
2 |X_{n}^{i,N}-Y_n^{i,N} |^2,
\\
\nonumber
|X_{n}^{i,N}-Y_n^{i,N} |^2
& =\Big|v( Y_n^{i,X,N},\mu_n^{Y,X,N}) h \Big|^2
\le
2\Big|u( Y_n^{i,X,N},\mu_n^{Y,X,N}) h \Big|^2
+
\frac{2h^2}{N} \sum_{j=1}^N \Big|f( Y_n^{i,X,N}- Y_n^{j,X,N})\Big|^2
\\\nonumber
&\le
C \Big( 1+ |Y_n^{i,N}|^{2q+2} + \frac{1 }{N} \sum_{j=1}^N |Y_n^{j,N}|^{2}
\Big) h^2
+
\frac{C h^2 }{N} \sum_{j=1}^N
\Big( 1+ |Y_n^{i,N}- Y_n^{j,X,N}|^{2q+2}\Big)
\\\nonumber
&\le
C \Big( 1+ |Y_n^{i,N}|^{2q+2}
+ \frac{1 }{N} \sum_{j=1}^N |Y_n^{j,N}|^{2}
\Big) h^2
+
\frac{C h^2 }{N} \sum_{j=1}^N
\Big( 1+ |X_n^{i,N}-X_n^{j,N}|^{2q+2}\Big).
\end{align}
Similarly, we have
\begin{align}
\label{eq:se1:Xs-Yn 4 estimate}
|X_{s}^{i,N}-Y_n^{i,N} |^4
&\le
16 |X_{s}^{i,N}-X_{n}^{i,N} |^4+
16 |X_{n}^{i,N}-Y_n^{i,N} |^4,
\\
\nonumber
|X_{n}^{i,N}-Y_n^{i,N} |^4
&
\le
C \Big( 1+ |Y_n^{i,N}|^{4q+4} + \frac{1 }{N} \sum_{j=1}^N |Y_n^{j,N}|^{4}\Big) h^4
+
\frac{C h^4 }{N} \sum_{j=1}^N
\Big( 1+ |X_n^{i,N}-X_n^{j,N}|^{4q+4}\Big).
\end{align}
From \eqref{Eq:MV-SDE Propagation} and using \eqref{eq:momentboundParticiInteractingSystem} (since $m\ge 4q+4$) alongside Young's inequality and It\^o's isometry, we have
\begin{align*}
\mathbb{E} \big[ & |X_{s}^{i,N}-X_n^{i,N} |^2 \big]
\le \mathbb{E} \Big[ \Big|
\int_{t_n}^{s} v( X_{u}^{i,N},\mu_u^{X,N})
+b(u,X_{u}^{i,N},\mu_u^{X,N}) \mathrm{d} u
+
\int_{t_n}^{s} \sigma(u,X_{u}^{i,N},\mu_u^{X,N}) \mathrm{d} W_u^i \Big|^2 \Big]~\le C h,
\\
\mathbb{E} \big[ &|X_{s}^{i,N}-X_n^{i,N} |^4 \big]
\le
\mathbb{E} \Big[ \Big|
\int_{t_n}^{s} v( X_{u}^{i,N},\mu_u^{X,N})
+b(u,X_{u}^{i,N},\mu_u^{X,N}) \mathrm{d} u
+ \int_{t_n}^{s} \sigma(u,X_{u}^{i,N},\mu_u^{X,N}) \mathrm{d} W_u^i \Big|^4 \Big]~\le C h^2.
\end{align*}
Also, using \eqref{eq:momentboundParticiInteractingSystem} and Jensen's and Young's inequalities (since $m\ge 4q+4$), we have
\begin{align*}
\mathbb{E} \Big[ \frac{C h^2 }{N} \sum_{j=1}^N
\Big( 1+ |X_t^{i,N}-X_t^{j,N}|^{2q+2}\Big) \Big]
\le&
Ch^2
\quad\textrm{and}\quad
\mathbb{E} \Big[ \Big| \frac{C h^2 }{N} \sum_{j=1}^N
\Big( 1+ |X_t^{i,N}-X_t^{j,N}|^{2q+2} \Big) \Big|^2 \Big]
\le
Ch^4.
\end{align*}
This next argument uses steps similar to those used in \eqref{eq:mmb:def of HXp} and \eqref{eq:mmb:def of HYp} (appearing in the proof of Proposition \ref{theorem: discrete moment bound}). Since $X^{\cdot,N}$ has bounded moments via \eqref{eq:momentboundParticiInteractingSystem} (this refers to the true interacting particle system), we have for any $m\ge p\ge 2$ that
\begin{align*}
\mathbb{E}\big[|Y_n^{i,X,N}|^{p}\big]
&\le\Big(
4^{p} \mathbb{E} \Big[ \frac{1}{N} \sum_{j=1}^N |X_{n}^{i,N}- X_{n}^{j,N} |^{p} \Big]
+
4^p \mathbb{E} \Big[ \Big|\frac{1}{N} \sum_{j=1}^N ( 1+ |X_{n}^{j,N}|^2) \Big|^{p/2} \Big]+1\Big)(1+Ch)
\le C.
\end{align*}
Collecting all the terms above, using that the particles are identically distributed, we have
\begin{align}
\label{eq:se1: expec diff order 1}
\mathbb{E} \big[ |X_{s}^{i,N}-Y_n^{i,N} |^2 \big]
&\le Ch,
\qquad
\mathbb{E} \big[ |X_{s}^{i,N}-Y_n^{i,N} |^4 \big]
\le Ch^2,
\qquad
\mathbb{E}\big[|Y_n^{i,X,N}|^{p}\big]
\le C ,
\\
\label{eq:se1: expec diff order 2}
\mathbb{E} \Big[ \Big| W^{(2)}(\mu_s^{X,N},\mu_n^{Y,X,N} ) \Big|^2 \Big]
&\le \mathbb{E} \Big[ \frac{1}{N} \sum_{j=1}^N |X_{s}^{j,N}- Y_n^{j,X,N} |^2 \Big] \le Ch.
\end{align}
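The first inequality in \eqref{eq:se1: expec diff order 2} can be seen, for instance, by coupling the two empirical measures through the common index $j$: since both measures consist of $N$ atoms of equal weight,
\begin{align*}
\Big| W^{(2)}(\mu_s^{X,N},\mu_n^{Y,X,N} ) \Big|^2
\le \frac{1}{N} \sum_{j=1}^N |X_{s}^{j,N}- Y_n^{j,X,N} |^2 ,
\end{align*}
and the expectation of the right-hand side is of order $h$ by the first bound in \eqref{eq:se1: expec diff order 1}.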
Plugging all the above inequalities back into \eqref{eq:se1: v-v inequality 1} and \eqref{eq:se1: v-v inequality 2}, we conclude that
\begin{align}
\label{eq:se1:I2 result for term v}
\mathbb{E} \Big[ \Big|
v( X_{s}^{i,N},\mu_s^{X,N})
- v( Y_n^{i,X,N},\mu_n^{Y,X,N}) \Big|^2\Big]
\le Ch.
\end{align}
We now consider term \eqref{eq:se1:I2 term 2} of $I_2$. By Assumption \ref{Ass:Monotone Assumption}, using \eqref{eq:se1: expec diff order 1} and \eqref{eq:se1: expec diff order 2}
\begin{align}
\mathbb{E} \Big[ \Big|& b(s,X_{s}^{i,N},\mu_{s}^{X,N})-b(t_n,Y_n^{i,X,N},\mu^{Y,X,N}_{n})\Big|^2 \Big]
\label{eq:se1:I2 result for term b}
\le
C\mathbb{E}\Big[ h+ |X_{s}^{i,N}-Y_n^{i,N} |^2
+
\Big| W^{(2)}(\mu_s^{X,N},\mu_{n}^{Y,X,N} ) \Big|^2 \Big] \le Ch.
\end{align}
Thus, plugging \eqref{eq:se1:I2 result for term v}, \eqref{eq:se1:I2 result for term b} back into \eqref{eq:se1:I2 term 1} and \eqref{eq:se1:I2 term 2}, we have \begin{align}
\label{eq:se1:I2 final result}
I_2\le Ch^3.
\end{align}
For $I_3$, by It\^o's isometry, the results in \eqref{eq:se1: expec diff order 1} and \eqref{eq:se1: expec diff order 2}, and a similar argument to that in \eqref{eq:se1:I2 result for term b}, we have
\begin{align}
I_3=& \mathbb{E} \Big[ \Big|
\int_{t_n}^{t_{n}+r} \Big( \sigma(s,X_{s}^{i,N},\mu_{s}^{X,N})
-\sigma(t_n,Y_n^{i,X,N},\mu^{Y,X,N}_{n})
\Big) \mathrm{d} W^i_s
\Big|^2\Big]
\nonumber
\\
\label{eq:se1:I3 final result}
\le& \mathbb{E} \Big[
\int_{t_n}^{t_{n}+h} \Big| \Big( \sigma(s,X_{s}^{i,N},\mu_{s}^{X,N})
-\sigma(t_n,Y_n^{i,X,N},\mu^{Y,X,N}_{n})
\Big)\Big|^2\ \mathrm{d} s
\Big] \le Ch^2.
\end{align}
Similarly, for $I_4$, by It\^o's isometry, the previous result \eqref{eq:se1:y-y}, and a similar argument to that of \eqref{eq:se1:by-by}, we have
\begin{align}
I_4=& \mathbb{E} \Big[ \Big|
\int_{t_n}^{t_{n}+r} \Big( \sigma(t_n,Y_n^{i,X,N},\mu^{Y,X,N}_{n})
- \sigma(t_n,Y_{n}^{i,\star,N}, \hat{\mu} ^{Y,N}_n)
\Big) \mathrm{d} W^i_s
\Big|^2\Big]
\nonumber
\\
\label{eq:se1:I4 final result}
\le& \mathbb{E} \Big[
\int_{t_n}^{t_{n}+h} \Big| \Big( \sigma(t_n,Y_n^{i,X,N},\mu^{Y,X,N}_{n})
- \sigma(t_n,Y_{n}^{i,\star,N}, \hat{\mu} ^{Y,N}_n)
\Big)\Big|^2\ \mathrm{d} s
\Big] \le \mathbb{E} \Big[ |X_{n}^{i,N}- \hat{X} _{n}^{i,N}|^2 \Big] Ch.
\end{align}
Plugging \eqref{eq:se1:I1 final result}, \eqref{eq:se1:I2 final result}, \eqref{eq:se1:I3 final result} and \eqref{eq:se1:I4 final result} back into \eqref{eq:se1:ultimate eqaution IIII}, we have, for all $n\in \llbracket 0,M-1 \rrbracket$, $i\in \llbracket 1,N \rrbracket$ and $r\in[0,h]$,
\begin{align*}
\mathbb{E} \big[ |\Delta X^i_{t_n+r}|^2 \big]
&\le
(1+h) \mathbb{E} \big[ |X_{n}^{i,N}- \hat{X} _{n}^{i,N}|^2 \big] (1+Ch)+(1+\frac{1}{h})Ch^3+Ch^2+ \mathbb{E} \big[ |X_{n}^{i,N}- \hat{X} _{n}^{i,N}|^2 \big] Ch
\\
&\le
\mathbb{E} \big[ |X_{n}^{i,N}- \hat{X} _{n}^{i,N}|^2 \big] (1+Ch)+Ch^2.
\end{align*}
By backward induction, the discrete type Gr\"onwall's lemma delivers the result of \eqref{eq:convergence theroem term 1}.
\end{proof}
\subsection[Proof of the moment bound result]{Proof of Theorem \ref{theorem:moment bound for the big theorm time extensions }: the moment bound result}
\label{subsection: Moment bound of the SSM}
In this section we prove Theorem \ref{theorem:moment bound for the big theorm time extensions }. Throughout this section, we follow the notation introduced in Theorem \ref{theorem:moment bound for the big theorm time extensions }, let Assumption \ref{Ass:Monotone Assumption} hold, let $h$ be chosen as in \eqref{eq:h choice}, and let $m\ge 2p$, where $m$ is defined in \eqref{Eq:General MVSDE}.
We first prove the moment bound result on the time grid and then extend it to the continuous-time process, as stated in Theorem \ref{theorem:moment bound for the big theorm time extensions }.
\begin{theorem}[Moment bounds of SSM]
\label{theorem: discrete moment bound}
Let the setting of Theorem \ref{theorem:SSM: strong error 1} hold. Let $m\geq 2$ with $X^i_0\in L^m_0$ for all $i\in\llbracket 1,N\rrbracket$ and let $ \hat{X} ^{i,N}$ be the continuous-time extension of the SSM given by \eqref{eq: scheme continous extension in SDE form}. Let $2p\in [2,m]$. Then there exists a constant $C>0$ (independent of $h,N,M$) such that
\begin{align*}
\sup_{i\in \llbracket 1,N \rrbracket } \sup_{n\in \llbracket 0,M \rrbracket }
\mathbb{E} \big[ | \hat{X} _{n}^{i,N}|^{2p} \big]
+
\sup_{i\in \llbracket 1,N \rrbracket } \sup_{n\in \llbracket 0,M \rrbracket }
\mathbb{E} \big[ |Y_{n}^{i,\star,N}|^{2p} \big]
&\le C \big( 1+ \sup_{i\in \llbracket 1,N \rrbracket } \mathbb{E}\big[\, | \hat{X} _{0}^{i,N}|^{2p}\big] \big) <\infty.
\end{align*}
\end{theorem}
\begin{proof}
The next inequality introduces the quantities $H^{X,p}_{n}$ and $H^{Y,p}_{n}$. For any $ i\in \llbracket 1,N \rrbracket ,\ n\in \llbracket 0,M-1 \rrbracket$, by Young's and Jensen's inequalities,
\begin{align}
\nonumber
\mathbb{E}\big[ | \hat{X} _{n}^{i,N} |^{2p} \big]
&=
\mathbb{E} \Big[ \Big| \frac{1}{N} \sum_{j=1}^N \Big( \hat{X} _{n}^{i,N}- \hat{X} _{n}^{j,N}\Big)
+\frac{1}{N} \sum_{j=1}^N \hat{X} _{n}^{j,N}
\Big|^{2p}
\Big]
\\
\label{eq:mmb:def of HXp}
&\le 4^p \mathbb{E} \Big[ \frac{1}{N} \sum_{j=1}^N | \hat{X} _{n}^{i,N}- \hat{X} _{n}^{j,N} |^{2p} \Big]
+
4^p \mathbb{E} \Big[ \Big|\frac{1}{N} \sum_{j=1}^N ( 1+ | \hat{X} _{n}^{j,N}|^2) \Big|^{p} \Big]+1=H^{X,p}_{n},
\\
\mathbb{E}\big[ |Y_{n}^{i,\star,N} |^{2p} \big]
\label{eq:mmb:def of HYp}
&\le 4^p \mathbb{E} \Big[ \frac{1}{N} \sum_{j=1}^N |Y_{n}^{i,\star,N}- Y_{n}^{j,\star,N} |^{2p} \Big]
+
4^p \mathbb{E} \Big[ \Big|\frac{1}{N} \sum_{j=1}^N ( 1+ |Y_{n}^{j,\star,N}|^{2} ) \Big|^{p} \Big]+1=H^{Y,p}_{n}.
\end{align}
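The constant $4^p$ above can be obtained, for instance, from the elementary inequality $|a+b|^{2p}\le 2^{2p-1}\big(|a|^{2p}+|b|^{2p}\big)\le 4^p\big(|a|^{2p}+|b|^{2p}\big)$ together with Jensen's and the Cauchy-Schwarz inequalities, which give
\begin{align*}
\Big|\frac{1}{N} \sum_{j=1}^N \big( \hat{X} _{n}^{i,N}- \hat{X} _{n}^{j,N}\big)\Big|^{2p}
\le \frac{1}{N} \sum_{j=1}^N | \hat{X} _{n}^{i,N}- \hat{X} _{n}^{j,N} |^{2p},
\qquad
\Big|\frac{1}{N} \sum_{j=1}^N \hat{X} _{n}^{j,N}\Big|^{2p}
\le \Big|\frac{1}{N} \sum_{j=1}^N \big( 1+ | \hat{X} _{n}^{j,N}|^2\big) \Big|^{p};
\end{align*}
the additional $+1$ only enlarges the upper bound.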
Using the following inequalities from Propositions \ref{prop:yi-yj leq xi-xj} and \ref{prop:sum y square leq sum x square}, we have $H^{Y,p}_{n} \le H^{X,p}_{n}(1+Ch)$,
\begin{align*}
|Y_{n}^{i,\star,N}-Y_{n}^{j,\star,N}|^2
&\le
| \hat{X} _{n}^{i,N}- \hat{X} _{n}^{j,N}|^2 (1+Ch)
~ \textrm{and}~
\frac{1}{N} \sum_{j=1}^N (1+|Y_{n}^{j,\star,N}|^2)
\le \Big[
\frac{1}{N} \sum_{j=1}^N
(1+| \hat{X} _{n}^{j,N}|^2)\Big] (1+Ch) .
\end{align*}
We now prove that $H^{X,p}_{n+1} \le H^{Y,p}_{n}(1+Ch) $.
For the first element composing $H^{X,p}_{n+1} $ we have
\begin{align}
\mathbb{E}\Big[ \frac{1}{N} \sum_{j=1}^N | \hat{X} _{n+1}^{i,N}&- \hat{X} _{n+1}^{j,N} |^{2p} \Big]
=
\mathbb{E} \Big[ \frac{1}{N} \sum_{j=1}^N \Big|
\Big( Y_{n}^{i,\star,N}
+ b(t_n,Y_{n}^{i,\star,N}, \hat{\mu} ^{Y,N}_n) h
+\sigma(t_n,Y_{n}^{i,\star,N}, \hat{\mu} ^{Y,N}_n) \Delta W_{n}^i \Big)
\nonumber
\\
&
-\Big( Y_{n}^{j,\star,N}
+ b(t_n,Y_{n}^{j,\star,N}, \hat{\mu} ^{Y,N}_n) h
+\sigma(t_n,Y_{n}^{j,\star,N}, \hat{\mu} ^{Y,N}_n) \Delta W_{n}^j \Big)\Big|^{2p} \Big].
\end{align}
Introduce the extra (local) notation for $G^{i,j,n}_1$, $G^{i,j,n}_2$ and $G^{i,j,n}_3$ as
\begin{align}
G^{i,j,n}_1&=
Y_{n}^{i,\star,N}-Y_{n}^{j,\star,N},
\quad
G^{i,j,n}_2= b(t_n,Y_{n}^{i,\star,N}, \hat{\mu} ^{Y,N}_n) h-b(t_n,Y_{n}^{j,\star,N}, \hat{\mu} ^{Y,N}_n) h,
\\
G^{i,j,n}_3&= \sigma(t_n,Y_{n}^{i,\star,N}, \hat{\mu} ^{Y,N}_n) \Delta W_{n}^i-\sigma(t_n,Y_{n}^{j,\star,N}, \hat{\mu} ^{Y,N}_n) \Delta W_{n}^j .
\end{align}
For $a+b+c=2p$, $a< 2p$, $a,b,c \in \mathbb{N}$, by Assumption \ref{Ass:Monotone Assumption}, Young's and Jensen's inequalities, Proposition \ref{prop:auxilary for multipy of r.v.}, the independence of the Brownian increments, and the fact that the particles are conditionally independent and identically distributed, we have
\begin{align}
\mathbb{E} \big[ |G^{i,j,n}_1|^a |G^{i,j,n}_2|^b |G^{i,j,n}_3|^c \big]
\le
\mathbb{E} \big[ | Y_{n}^{i,\star,N} |^{2p} \big] Ch
\le H^{Y,p}_{n} Ch.
\end{align}
Thus, for the first term of $H^{X,p}_{n+1} $, we conclude that
\begin{align}
4^p\mathbb{E}\Big[ \frac{1}{N} \sum_{j=1}^N | \hat{X} _{n+1}^{i,N}- \hat{X} _{n+1}^{j,N} |^{2p} \Big]
\label{eq:moment bounde: first term estimate}
&\le
4^p\mathbb{E}\Big[ \frac{1}{N} \sum_{j=1}^N |Y_{n}^{i,\star,N}-Y_{n}^{j,\star,N} |^{2p} \Big]+ H^{Y,p}_{n} Ch.
\end{align}
For the second term of $H^{X,p}_{n+1} $ we have
\begin{align*}
&\mathbb{E} \Big[ \Big|\frac{1}{N} \sum_{j=1}^N (1+| \hat{X} _{n+1}^{j,N}|^2)\Big|^{p} \Big]
=
\mathbb{E} \bigg[ \Big| \frac{1}{N} \sum_{j=1}^N \Big[ 1+
\Big( Y_{n}^{j,\star,N}
+ b(t_n,Y_{n}^{j,\star,N}, \hat{\mu} ^{Y,N}_n) h
+\sigma(t_n,Y_{n}^{j,\star,N}, \hat{\mu} ^{Y,N}_n) \Delta W_{n}^j \Big)^2 \Big] \Big|^{p} \bigg].
\end{align*}
Set the following (extra local) notation
\begin{align*}
G_4^{n}&=
\frac{1}{N} \sum_{j=1}^N (1+|Y_{n}^{j,\star,N}|^2),
\quad
G_5^{n}= \frac{1}{N} \sum_{j=1}^N
\Big\langle 2 Y_{n}^{j,\star,N}+
\sigma(t_n,Y_{n}^{j,\star,N}, \hat{\mu} ^{Y,N}_n) \Delta W_{n}^j ,
\sigma(t_n,Y_{n}^{j,\star,N}, \hat{\mu} ^{Y,N}_n) \Delta W_{n}^j\Big\rangle,
\\
G_6^{n}&= \frac{1}{N} \sum_{j=1}^N
\Big \langle 2Y_{n}^{j,\star,N}+
b(t_n,Y_{n}^{j,\star,N}, \hat{\mu} ^{Y,N}_n)h+2\sigma(t_n,Y_{n}^{j,\star,N}, \hat{\mu} ^{Y,N}_n) \Delta W_{n}^j ,b(t_n,Y_{n}^{j,\star,N}, \hat{\mu} ^{Y,N}_n)h\Big\rangle.
\end{align*}
Once again arguing as before, by Young's and Jensen's inequalities, Proposition \ref{prop:auxilary for multipy of r.v.}, the conditional independence and identical distribution of the particles, the independence of the Brownian increments, the Lipschitz property of $b$ and $\sigma$, and the fact that $|x|^{l_2}\le 1+|x|^{l_1}$ for $l_1>l_2>1$, we have
\begin{align}
\mathbb{E} \big[ |G_4^{n}|^a |G_5^{n}|^b |G_6^{n}|^c \big]
\le
\mathbb{E} \big[ | Y_{n}^{i,\star,N} |^{2p}+1 \big] Ch
\le H^{Y,p}_{n} Ch.
\end{align}
Thus, for the second term of $~H^{X,p}_{n+1} $, we conclude that
\begin{align}
\label{eq:moment bounde: second term estimate}
4^p \mathbb{E} \Big[ \Big|\frac{1}{N} \sum_{j=1}^N ( 1+ | \hat{X} _{n+1}^{j,N}|^2) \Big|^{p} \Big]
&\le
4^p \mathbb{E} \Big[ \Big|\frac{1}{N} \sum_{j=1}^N ( 1+ |Y_{n}^{j,\star,N}|^{2} ) \Big|^{p} \Big]+ H^{Y,p}_{n} Ch .
\end{align}
Plugging \eqref{eq:moment bounde: first term estimate} and \eqref{eq:moment bounde: second term estimate} into $H^{X,p}_{n+1} $, we have
\begin{align}
\nonumber
H^{X,p}_{n+1}&
=4^p \mathbb{E} \Big[ \frac{1}{N} \sum_{j=1}^N | \hat{X} _{n+1}^{i,N}- \hat{X} _{n+1}^{j,N} |^{2p} \Big]
+
4^p \mathbb{E} \Big[ \Big|\frac{1}{N} \sum_{j=1}^N ( 1+ | \hat{X} _{n+1}^{j,N}|^2) \Big|^{p} \Big]+1
\\
&\le
4^p\mathbb{E}\Big[ \frac{1}{N} \sum_{j=1}^N |Y_{n}^{i,\star,N}-Y_{n}^{j,\star,N} |^{2p} \Big]
+
4^p \mathbb{E} \Big[ \Big|\frac{1}{N} \sum_{j=1}^N ( 1+ |Y_{n}^{j,\star,N}|^{2} ) \Big|^{p} \Big]+1+ H^{Y,p}_{n} Ch
\le H^{Y,p}_{n}(1+Ch).
\end{align}
Thus finally, for all $n\in \llbracket 0,M-1 \rrbracket,~i\in \llbracket 1,N \rrbracket $, by backward induction collecting all the results above, since $m\ge 2p$, where $m$ is defined in \eqref{Eq:General MVSDE}, we have (for some $C>0$ independent of $h,N,M$)
\begin{align*}
\mathbb{E}\big[ | \hat{X} _{n+1}^{i,N} |^{2p} \big]
&\le
H^{X,p}_{n+1}
\le H^{Y,p}_{n}(1+Ch) \le H^{X,p}_{n}(1+Ch)^2
\le
\cdots \le H^{X,p}_{0} e^{CT} \le C \mathbb{E}\big[ | \hat{X} _{0}^{i,N} |^{2p} \big] +C <\infty.
\end{align*}
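For the reader's convenience, the exponential factor in the last display arises from the following elementary estimate (assuming, as throughout, a uniform grid with $Mh=T$, and with a generic constant $C$ that may change from line to line):
\begin{align*}
H^{X,p}_{n+1}
\le H^{X,p}_{0}(1+Ch)^{2(n+1)}
\le H^{X,p}_{0}\, e^{2C(n+1)h}
\le H^{X,p}_{0}\, e^{2CT},
\qquad n\in \llbracket 0,M-1 \rrbracket,
\end{align*}
where we used $1+x\le e^x$ for $x\ge 0$.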
A similar argument yields the result for $ \mathbb{E}\big[ | Y_{n+1}^{i,\star,N} |^{2p} \big] $.
\end{proof}
\subsubsection*{Proof of Theorem \ref{theorem:moment bound for the big theorm time extensions }}
\begin{proof}[Proof of Theorem \ref{theorem:moment bound for the big theorm time extensions }]
Under the same assumptions and notation as in Theorem \ref{theorem: discrete moment bound}, one can apply a similar argument to that in \cite[Proposition 4.6]{2021SSM} to obtain the result.
\end{proof}
The final result of this section concerns the incremental (in time) moment bounds of $ \hat{X} ^{i,N}$. This result is in preparation for the next section.
\begin{proposition}
\label{prop:SSTM: local increment error00}
Under the same assumptions and notation as in Theorem \ref{theorem:moment bound for the big theorm time extensions }, there exists a constant $C>0$ such that for any $p\geq 2$ satisfying $m\ge (q+1)p$, where $m$ is defined in \eqref{Eq:General MVSDE} and $q$ is defined in Assumption \ref{Ass:Monotone Assumption}, we have
\begin{align}
\sup_{i\in \llbracket 1,N \rrbracket}
\sup_{0\le t \le T} \mathbb{E}\big[ | \hat{X} _{t}^{i,N}- \hat{X} _{ \kappa(t) }^{i,N} |^p \big]
\le C h^{\frac p2}.
\end{align}
\end{proposition}
\begin{proof}
Under Assumption \ref{Ass:Monotone Assumption}, by careful application of Young's and Jensen's inequalities, one can argue similarly to \cite[Proposition 4.7]{2021SSM} to obtain the result (we omit further details).
\end{proof}
\subsection{Proof of Theorem \ref{theorem:SSM: strong error 2}, the uniform convergence result in pathspace}
\label{subsection:Proof of strong error 2}
We now prove Theorem \ref{theorem:SSM: strong error 2}. Throughout we follow the framework introduced in Theorem \ref{theorem:SSM: strong error 2}.
\begin{proof}[Proof of Theorem \ref{theorem:SSM: strong error 2}] Let Assumption \ref{Ass:Monotone Assumption} hold. Let $i\in\llbracket 1,N\rrbracket$ and $t\in[0,T]$, and suppose $m \ge \max\{4q+4,\,2+q+q/\epsilon \}$, where $X^i_0\in L^m_0$ and $q$ is as given in Assumption \ref{Ass:Monotone Assumption}. From \eqref{eq:momentboundParticiInteractingSystem} and \eqref{eq:SSTM:momentbound for split-step time extension process00}, both processes $X^{i,N}$ and $ \hat{X} ^{i,N}$ have sufficiently many bounded moments for the following proof.
Define $\Delta X^i:=X^{i,N}- \hat{X} ^{i,N}$. It\^o's formula applied to $|X_{t}^{i,N}- \hat{X} _{t}^{i,N}|^2=|\Delta X^i_t|^2$ yields
\begin{align}
|\Delta X^i_t|^2 \label{eq:Conv of big theom:term 11 }
=&2\int_0^t \Big\langle v(X_{s}^{i,N},\mu_{s}^{X,N})-v(Y_{\kappa(s)}^{i,\star,N}, \hat{\mu} ^{Y,N}_{\kappa(s)}) ,\Delta X^i_s \Big\rangle \mathrm{d} s
\\\label{eq:Conv of big theom:term 22 }
&+2 \int_0^t \Big\langle b(s,X_{s}^{i,N},\mu_{s}^{X,N})-b(\kappa(s),Y_{\kappa(s)}^{i,\star,N}, \hat{\mu} ^{Y,N}_{\kappa(s)}),\Delta X^i_s\Big\rangle \mathrm{d} s
\\ \label{eq:Conv of big theom:term 33 }
&+ \int_0^t \Big| \sigma(s,X_{s}^{i,N},\mu_{s}^{X,N})-\sigma(\kappa(s),Y_{\kappa(s)}^{i,\star,N}, \hat{\mu} ^{Y,N}_{\kappa(s)})\Big|^2 \mathrm{d} s
\\\label{eq:Conv of big theom:term 44 }
&+2\int_0^t \Big\langle \Delta X^i_s,\Big(\sigma(s,X_{s}^{i,N},\mu_{s}^{X,N}) - \sigma(\kappa(s),Y_{\kappa(s)}^{i,\star,N}, \hat{\mu} ^{Y,N}_{\kappa(s)})\Big) \mathrm{d} W^i_s\Big\rangle.
\end{align}
We analyse the above terms one by one, taking the supremum over time and then expectations. For \eqref{eq:Conv of big theom:term 11 },
\begin{align}
\nonumber
\big\langle v(X_{s}^{i,N}&,\mu_{s}^{X,N})-v(Y_{\kappa(s)}^{i,\star,N}, \hat{\mu} ^{Y,N}_{\kappa(s)}) ,\Delta X^i_s \big\rangle
\\
\label{eq:se2:v-v term}
=&
\big\langle v(X_{s}^{i,N},\mu_{s}^{X,N})-v( \hat{X} _{s}^{i,N}, \hat{\mu} ^{X,N}_{s}),\Delta X^i_s \big\rangle
+
\big\langle v( \hat{X} _{s}^{i,N}, \hat{\mu} ^{X,N}_{s})-v(Y_{\kappa(s)}^{i,\star,N}, \hat{\mu} ^{Y,N}_{\kappa(s)}),\Delta X^i_s \big\rangle.
\end{align}
For the first term above, by Assumption \ref{Ass:Monotone Assumption} and using Remark \ref{remark:ImpliedProperties}
\begin{align}
\mathbb{E}\Big[ \sup_{0\le t \le T}
&
\int_0^t \Big\langle v(X_{s}^{i,N},\mu_{s}^{X,N})-v( \hat{X} _{s}^{i,N}, \hat{\mu} ^{X,N}_{s}),\Delta X^i_s \Big\rangle \mathrm{d} s \Big]
\\
\le&
\mathbb{E}\Big[ \int_0^T
\frac{C}{N} \sum_{j=1}^N
\Big|
f(X_{s}^{i,N}-X_{s}^{j,N})-f( \hat{X} _{s}^{i,N}- \hat{X} _{s}^{j,N}) \Big| |\Delta X^i_s |
\mathrm{d} s \Big]
\\
& +
\mathbb{E}\Big[ \sup_{0\le t \le T}
\int_0^t \Big\langle u(X_{s}^{i,N},\mu_{s}^{X,N})-u( \hat{X} _{s}^{i,N}, \hat{\mu} ^{X,N}_{s}),\Delta X^i_s \Big\rangle \mathrm{d} s \Big]
\\
\label{eq:se2:special term}
\le&
\mathbb{E}\Big[ \int_0^T \frac{C}{N} \sum_{j=1}^N
\Big\{
\Big( 1+|X_{s}^{i,N}-X_{s}^{j,N} |^q +| \hat{X} _{s}^{i,N}- \hat{X} _{s}^{j,N} |^q \Big)
| \Delta X^i_s-\Delta X^j_s | |\Delta X^i_s | \Big\} \mathrm{d} s \Big]
\\
&+
\mathbb{E}\Big[ \int_0^T \Big( \widehat L_u |\Delta X^i_s |^2
+ \frac{ L_{\tilde{u} } }{2N} \sum_{j=1}^N |\Delta X^j_s |^2 \Big) \mathrm{d} s \Big].
\end{align}
To deal with \eqref{eq:se2:special term}, we use the following notation: for all $i,j \in \llbracket 1,N \rrbracket$,
\begin{align}
G_7^{i,j,s}=\Big( 1+|X_{s}^{i,N}-X_{s}^{j,N} |^q +| \hat{X} _{s}^{i,N}- \hat{X} _{s}^{j,N} |^q \Big)
\quad\textrm{and}\quad
G_8^{i,j,s}= | \Delta X^i_s-\Delta X^j_s | |\Delta X^i_s |.
\end{align}
The combination of $G_7^{i,j,s}$ and $G_8^{i,j,s}$ makes it difficult to obtain a bound in terms of $ |\Delta X^i_s |^2$ alone; we overcome this difficulty by applying Chebyshev's inequality as follows.
We denote the indicator function of an event $\Omega$ by $\mathbbm{1}_{\{\Omega\}}$. Recall the moment bound results on $X$ and $ \hat{X} $ in \eqref{eq:momentboundParticiInteractingSystem} and \eqref{eq:SSTM:momentbound for split-step time extension process00}, respectively. Now, using Theorem \ref{theorem:SSM: strong error 1}, Proposition \ref{prop:auxilary for multipy of r.v.} and Young's inequality, we have
\begin{align}
\label{eq:proof strong errr2:ep 1}
\mathbb{E} \big[ & G_7^{i,j,s}G_8^{i,j,s} \big]
=
\mathbb{E} \Big[ G_7^{i,j,s}G_8^{i,j,s} (\mathbbm{1}_{\{ G_7^{i,j,s}<M^{\epsilon} \}}) \Big]
+\mathbb{E} \Big[ G_7^{i,j,s}G_8^{i,j,s} (\mathbbm{1}_{\{ G_7^{i,j,s}\ge M^{\epsilon} \}}) \Big]
\\
&\le
\mathbb{E} \big[ M^{\epsilon} G_8^{i,j,s} \big]
+
\mathbb{E} \Big[ \frac{ | G_7^{i,j,s}|^{1/\epsilon} }{ M} G_7^{i,j,s} G_8^{i,j,s} \Big]
\le
2 \mathbb{E} \big[ M^{\epsilon} |\Delta X^i_s |^2 \big]
+
h \mathbb{E} \big[ | G_7^{i,j,s}|^{1/\epsilon} G_7^{i,j,s} G_8^{i,j,s} \big]
\\
\label{eq:proof strong errr2:ep 2}
&
\le C h^{1-\epsilon} +h C\Big(1+\mathbb{E} \Big[ |X_{s}^{i,N} |^{2+q+q/\epsilon} +| \hat{X} _{s}^{i,N} |^{2+q+q/\epsilon} \Big] \Big)
\le C h^{1-\epsilon},
\end{align}
where, for the last inequality, we used that the particles are identically distributed and that sufficiently many bounded moments are available for the processes, since $m\ge 2+q+q/\epsilon$.
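To spell out the two main steps above (a sketch, assuming as elsewhere a uniform time grid with $h=T/M$): on the event $\{G_7^{i,j,s}\ge M^{\epsilon}\}$ one has $1\le |G_7^{i,j,s}|^{1/\epsilon}/M$ and $1/M\le Ch$, which produces the factor $h$ in the second term; for the first term, Theorem \ref{theorem:SSM: strong error 1} gives
\begin{align*}
\mathbb{E} \big[ M^{\epsilon} |\Delta X^i_s |^2 \big]
\le M^{\epsilon}\, C h
= C\, T^{\epsilon} h^{1-\epsilon}.
\end{align*}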
Thus, for the first term in \eqref{eq:se2:v-v term} and using that the particles are identically distributed, we conclude that
\begin{align}
\mathbb{E}\Big[ \sup_{0\le t \le T} \int_0^t \Big\langle v(X_{s}^{i,N},\mu_{s}^{X,N})-v( \hat{X} _{s}^{i,N}, \hat{\mu} ^{X,N}_{s}),\Delta X^i_s \Big\rangle \mathrm{d} s \Big]
\le
C\mathbb{E}\Big[ \int_0^T |\Delta X^i_s|^2 \mathrm{d} s \Big]+
C h^{1-\epsilon}.
\end{align}
For the second term in \eqref{eq:se2:v-v term}, under Assumption \ref{Ass:Monotone Assumption}, using Young's inequality, Jensen's inequality, and Proposition \ref{prop:SSTM: local increment error00} we have
\begin{align}
\label{eq:se2: v-v term 2 sup form}
\mathbb{E} \Big[& \sup_{0\le t \le T} \int_0^t
\Big\langle v( \hat{X} _{s}^{i,N}, \hat{\mu} ^{X,N}_{s})-v(Y_{\kappa(s)}^{i,\star,N}, \hat{\mu} ^{Y,N}_{\kappa(s)}),\Delta X^i_s \Big\rangle \mathrm{d} s \Big]
\\ \label{eq:se2:I2 source}
=&
\mathbb{E}\Big[ \sup_{0\le t \le T} \int_0^t
\Big\langle u( \hat{X} _{s}^{i,N}, \hat{\mu} ^{X,N}_{s})-u(Y_{\kappa(s)}^{i,\star,N}, \hat{\mu} ^{Y,N}_{\kappa(s)}),\Delta X^i_s \Big\rangle \mathrm{d} s \Big]
\\ \label{eq:se2:I3 source}
&
\quad + \mathbb{E}\Big[ \sup_{0\le t \le T} \int_0^t \frac{1}{N} \sum_{j=1}^N \langle f( \hat{X} _{s}^{i,N}- \hat{X} _{s}^{j,N})-f(Y_{\kappa(s)}^{i,\star,N}-Y_{\kappa(s)}^{j,\star,N}),\Delta X^i_s \Big\rangle \mathrm{d} s \Big]
\\
\nonumber
\le
&
\mathbb{E}\Big[ \int_0^T |\Delta X^i_s|^2 \mathrm{d} s \Big] +I_2+I_3.
\end{align}
For $I_2$ (given by the domination of \eqref{eq:se2:I2 source}), by Assumption \ref{Ass:Monotone Assumption}, Young's inequality and Cauchy-Schwarz inequality
\begin{align*}
I_2=&L_{\hat{u}} \mathbb{E}\Big[
\int_0^T
\Big(1+ | \hat{X} _{s}^{i,N}|^{q} + |Y_{\kappa(s)}^{i,\star,N}|^{q}\Big)^2
| \hat{X} _{s}^{i,N}-Y_{\kappa(s)}^{i,\star,N}|^2 \Big]
\mathrm{d} s
\\\nonumber
\le
&
C \int_0^T
\sqrt{ \mathbb{E}\Big[ \Big(1+| \hat{X} _{s}^{i,N}|^{2q}+|Y_{\kappa(s)}^{i,\star,N}|^{2q}\Big)^2
\Big]
\mathbb{E}\Big[
| \hat{X} _{s}^{j,N}-Y_{\kappa(s)}^{j,\star,N}|^4
\Big]
} \mathrm{d} s.
\end{align*}
For $I_3$ (given by the domination of \eqref{eq:se2:I3 source} after extracting the $|\Delta X^i|$ term), by Assumption \ref{Ass:Monotone Assumption}, Young's inequality and Cauchy-Schwarz inequality
\begin{align*}
I_3=&\frac{C L_{\hat{f}}}{N} \sum_{j=1}^N \mathbb{E}\Big[
\int_0^T
\Big(1+ | \hat{X} _{s}^{i,N}- \hat{X} _{s}^{j,N}|^{q} + |Y_{\kappa(s)}^{i,\star,N}-Y_{\kappa(s)}^{j,\star,N}|^{q}\Big)^2
\Big|( \hat{X} _{s}^{i,N}- \hat{X} _{s}^{j,N})-(Y_{\kappa(s)}^{i,\star,N}-Y_{\kappa(s)}^{j,\star,N})\Big|^2 \Big]
\mathrm{d} s
\\\nonumber
\le&
\frac{C}{N} \sum_{j=1}^N \int_0^T
\sqrt{ \mathbb{E}\Big[ \Big(1+| \hat{X} _{s}^{j,N}|^{2q}+| \hat{X} _{s}^{i,N}|^{2q}+|Y_{\kappa(s)}^{i,\star,N}|^{2q}+|Y_{\kappa(s)}^{j,\star,N}|^{2q}\Big)^2
\Big]
\mathbb{E}\Big[
| \hat{X} _{s}^{j,N}-Y_{\kappa(s)}^{j,\star,N}|^4
\Big]
} \mathrm{d} s.
\end{align*}
By \eqref{eq:SSTM:scheme 1}, Assumption \ref{Ass:Monotone Assumption}, Young's and Jensen's inequalities (since $m\ge 4q+4$), and Theorem \ref{theorem: discrete moment bound}, we have
\begin{align*}
\mathbb{E}\big[ &| \hat{X} _{\kappa(s)}^{i,N}-Y_{\kappa(s)}^{i,\star,N}|^4
\big]
=
\mathbb{E}\big[ | h v(Y_{\kappa(s)}^{i,\star,N}, \hat{\mu} ^{Y,N}_{\kappa(s)}) |^4
\big]
\\
&
\le
C h^4\mathbb{E}\big[ | u(Y_{\kappa(s)}^{i,\star,N}, \hat{\mu} ^{Y,N}_{\kappa(s)}) |^4
\big]
+
\frac{Ch^4}{N}
\sum_{j=1}^N\mathbb{E}\big[ |f(Y_{\kappa(s)}^{i,\star,N}-Y_{\kappa(s)}^{j,\star,N}) |^4
\big]
\\
&\le
C h^4\mathbb{E}\Big[ 1+ |Y_{\kappa(s)}^{i,\star,N}|^{4q+4}
+ \frac{1}{N} \sum_{j=1}^N |Y_{\kappa(s)}^{j,\star,N}|^{4}
\Big]
+
\frac{Ch^4}{N} \sum_{j=1}^N \mathbb{E}\Big[ (1+ |Y_{\kappa(s)}^{i,\star,N}-Y_{\kappa(s)}^{j,\star,N}|^{4q} ) |Y_{\kappa(s)}^{i,\star,N}-Y_{\kappa(s)}^{j,\star,N}|^4 \Big]
\\
&\le \frac{Ch^4}{N} \sum_{j=1}^N \mathbb{E}\big[ 1+ |Y_{\kappa(s)}^{j,\star,N}|^{4q+4} \big]
\le C h^4 .
\end{align*}
Using this inequality in combination with Proposition \ref{prop:SSTM: local increment error00} allows us to conclude that
\begin{align}
\label{eq:diff between X hat t and Y star kttt}
\mathbb{E}\big[ | \hat{X} _{s}^{j,N}-Y_{\kappa(s)}^{j,\star,N}|^4
\big]
\le
C\mathbb{E}\Big[ | \hat{X} _{s}^{j,N}- \hat{X} _{\kappa(s)}^{j,N}|^4+| \hat{X} _{\kappa(s)}^{j,N}-Y_{\kappa(s)}^{j,\star,N}|^4
\Big]\le Ch^2.
\end{align}
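Consequently, inserting \eqref{eq:diff between X hat t and Y star kttt} into the expressions for $I_2$ and $I_3$ above and using the moment bounds established earlier (so that the square roots of the bracketed moment terms are bounded by a constant), we obtain, for instance,
\begin{align*}
I_2+I_3 \le C\int_0^T \sqrt{Ch^2}\, \mathrm{d} s \le Ch,
\end{align*}
which is dominated by the term $Ch^{1-\epsilon}$ appearing below.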
Thus, injecting \eqref{eq:se2:v-v term} back into \eqref{eq:Conv of big theom:term 11 }, taking the supremum and expectation, and collecting all the necessary results above, we reach
\begin{align}
\label{eq:se2:term 1 final result}
&\mathbb{E} \Big[ \sup_{0\le t \le T} \int_0^t
\Big\langle v(X_{s}^{i,N},\mu^{N}_{s})-v(Y_{\kappa(s)}^{i,\star,N}, \hat{\mu} ^{Y,N}_{\kappa(s)}),\Delta X^i_s \Big\rangle \mathrm{d} s \Big]
\le
C\mathbb{E}\Big[ \int_0^T |\Delta X^i_s|^2 \mathrm{d} s \Big]+Ch^{1-\epsilon} .
\end{align}
For the second term \eqref{eq:Conv of big theom:term 22 }, the calculation is similar to that in \cite[Proof of Proposition 4.9]{2021SSM}, and we conclude that
\begin{align}
\label{eq:se2:term 2 final result}
&\mathbb{E} \Big[ \sup_{0\le t \le T} \int_0^t \Big\langle b(s,X_{s}^{i,N},\mu_{s}^{X,N})-b(\kappa(s),Y_{\kappa(s)}^{i,\star,N}, \hat{\mu} ^{Y,N}_{\kappa(s)}) ,\Delta X^i_s \Big\rangle \mathrm{d} s \Big]
\le
C h + C \mathbb{E} \Big[ \int_0^T |\Delta X^i_s|^2 \mathrm{d} s \Big].
\end{align}
Similarly, for the third term \eqref{eq:Conv of big theom:term 33 } (these are just Lipschitz terms), we have
\begin{align}
\label{eq:se2:term 3 final result}
\mathbb{E}\Big[ \sup_{0\le t \le T} \int_0^t \Big| \sigma(s,X_{s}^{i,N},\mu_{s}^{X,N})-\sigma(\kappa(s),Y_{\kappa(s)}^{i,\star,N}, \hat{\mu} ^{Y,N}_{\kappa(s)})\Big|^2 \mathrm{d} s \Big]
\le
C h + C \mathbb{E} \Big[ \int_0^T |\Delta X^i_s|^2 \mathrm{d} s \Big] .
\end{align}
Consider the last term \eqref{eq:Conv of big theom:term 44 }; this is a Lipschitz term and is dealt with similarly to \cite[Proof of Proposition 4.9]{2021SSM}.
Using the Burkholder-Davis-Gundy, Jensen and Cauchy-Schwarz inequalities, and the above results,
\begin{align}
\label{eq:se2:term 4 final result}
&\mathbb{E}\Big[ \sup_{0\le t \le T} \int_0^t \Big\langle \Delta X^i_s,\Big(\sigma(s,X_{s}^{i,N},\mu_{s}^{X,N}) - \sigma(\kappa(s),Y_{\kappa(s)}^{i,\star,N}, \hat{\mu} ^{Y,N}_{\kappa(s)})\Big) \mathrm{d} W^i_s\Big\rangle ~\Big]
\\
& \nonumber
\le
\frac{1}{4}\mathbb{E}\Big[~ \sup_{0\le t \le T } |\Delta X^i_t|^2 \Big]
+
\mathbb{E}\Big[\int_0^T \Big|\sigma(s,X_{s}^{i,N},\mu_{s}^{X,N}) - \sigma(\kappa(s),Y_{\kappa(s)}^{i,\star,N}, \hat{\mu} ^{Y,N}_{\kappa(s)})\Big|^2\mathrm{d} s~\Big] .
\end{align}
Again, gathering all the above results \eqref{eq:se2:term 1 final result},
\eqref{eq:se2:term 2 final result},
\eqref{eq:se2:term 3 final result},
and
\eqref{eq:se2:term 4 final result}, plugging them back into \eqref{eq:Conv of big theom:term 11 }, after taking supremum over $t\in[0,T]$ and expectation, for all $i\in \llbracket 1,N \rrbracket $ we have
\begin{align*}
\mathbb{E}\big[ \sup_{0\le t \le T}|\Delta X^i_t|^2 ~\big]
&\le
Ch^{1-\epsilon}+ C\mathbb{E} \Big[~ \int_0^T \sup_{0\le u \le s}|\Delta X^i_u|^2 \mathrm{d} s ~\Big]+\frac{1}{2} \mathbb{E} \Big[\sup_{0\le t \le T } |\Delta X^i_t|^2\Big]
\\
&
\le
Ch^{1-\epsilon} +C\int_0^T \mathbb{E} \big[\sup_{0\le u \le s} |\Delta X^i_u|^2 ~\big] \mathrm{d} s .
\end{align*}
Gr\"onwall's lemma delivers the final result after taking supremum over $i\in \llbracket 1,N \rrbracket $.
\end{proof}
\subsection{Discussion on the granular media type equation}
\label{subsection:Proof of strong error 3 GM}
\begin{proof}[Proof of Proposition \ref{Prop:Propagation of Chaos}]
Recall the proof of \eqref{eq:Conv of big theom:term 11 } in Section \ref{subsection:Proof of strong error 2}. Under Assumption \ref{Ass:GM Assumption}, for all $i\in\llbracket 1,N\rrbracket$, $t\in[0,T]$,
and using arguments similar to those of \eqref{eq:se2:v-v term} we have
\begin{align}
\nonumber
\Delta X^i_t=&~X_{t}^{i,N}- \hat{X} _{t}^{i,N} =\int_0^t v(X_{s}^{i,N},\mu_{s}^{X,N})-v(Y_{\kappa(s)}^{i,\star,N}, \hat{\mu} ^{Y,N}_{\kappa(s)})~ \mathrm{d} s,
\\
\Rightarrow \mathbb{E} \big[ |\Delta X^i_t|^2 \big] \label{eq:gm:proof: delta x term 1 }
\le&~
2\int_0^t \mathbb{E} \Big[ \Big\langle v(X_{s}^{i,N},\mu_{s}^{X,N})-v( \hat{X} _{s}^{i,N}, \hat{\mu} ^{X,N}_{s}) ,\Delta X^i_s \Big\rangle \Big] \mathrm{d} s
\\
\label{eq:gm:proof: delta x term 2 }
& ~+
2\int_0^t \mathbb{E} \Big[ \Big\langle v( \hat{X} _{s}^{i,N}, \hat{\mu} ^{X,N}_{s})-v(Y_{\kappa(s)}^{i,\star,N}, \hat{\mu} ^{Y,N}_{\kappa(s)}) ,\Delta X^i_s \Big\rangle \Big] \mathrm{d} s .
\end{align}
For \eqref{eq:gm:proof: delta x term 1 }, arguing as in \eqref{eq:se1:y-y v-v}, Remark \ref{remark:OSL for the whole function / system V} and using that the particles are identically distributed, we have
\begin{align}
\mathbb{E} &\Big[ \Big\langle v(X_{s}^{i,N},\mu_{s}^{X,N})-v( \hat{X} _{s}^{i,N}, \hat{\mu} ^{X,N}_{s}) ,\Delta X^i_s \Big\rangle \Big]
\label{eq:gm:proof: delta x term 1 results}
\le 2L_f^+ \mathbb{E} \big[ |\Delta X^i_s|^2 \big].
\end{align}
For \eqref{eq:gm:proof: delta x term 2 }, arguing similarly to the above, we have
\begin{align}
2\int_0^t \mathbb{E} &\Big[ \Big\langle v( \hat{X} _{s}^{i,N}, \hat{\mu} ^{X,N}_{s})-v(Y_{\kappa(s)}^{i,\star,N}, \hat{\mu} ^{Y,N}_{\kappa(s)}) ,\Delta X^i_s \Big\rangle \Big] \mathrm{d} s
\label{eq:gm:proof: delta x term 21}
=
\frac{2}{N} \sum_{j=1}^N\int_0^t
\mathbb{E}\Big[
\Big\langle f(\Delta^{X,i,j}_s) -f( \Delta^{Y,i,j}_ {\kappa(s)} )
,\Delta X^i_s \Big\rangle \Big] \mathrm{d} s,
\end{align}
where we introduce the following handy notation (recall \eqref{eq:SSTM:scheme 1} and \eqref{eq: scheme continous extension in SDE form})
\begin{align}
\nonumber
\Delta^{X,i,j}_s&= \hat{X} _{s}^{i,N}- \hat{X} _{s}^{j,N},
\qquad
\Delta^{Y,i,j}_ {\kappa(s)} =Y_{\kappa(s)}^{i,\star,N}-Y_{\kappa(s)}^{j,\star,N},
\\ \nonumber
\Delta^{X,i,j}_s
&= \Delta^{X,i,j}_ {\kappa(s)} +G_9^{i,j,s}(s- {\kappa(s)} )+G_{10}^{i,j,s},
\qquad
\label{eq:gm:G7 comes}
\Delta^{Y,i,j}_ {\kappa(s)}
= \Delta^{X,i,j}_ {\kappa(s)} +G_{9}^{i,j,s} h,
\\
G_{9}^{i,j,s}
&= \Big( v(Y_{\kappa(s)}^{i,\star,N}, \hat{\mu} ^{Y,N}_{\kappa(s)})-v(Y_{\kappa(s)}^{j,\star,N}, \hat{\mu} ^{Y,N}_{\kappa(s)})\Big)
\quad\textrm{and}\quad
G_{10}^{i,j,s}
=\sigma\Big( (W_s^i- W_{ {\kappa(s)} }^i)- (W_s^j- W_{ {\kappa(s)} }^j) \Big).
\end{align}
We now proceed to estimate \eqref{eq:gm:proof: delta x term 21}.
By the mean value theorem under Assumption \ref{Ass:GM Assumption}, for \eqref{eq:gm:proof: delta x term 21}, there exist $\rho_1,\rho_2 \in [0,1]$ such that \begin{align*}
f(\Delta^{X,i,j}_s)
=&
f (\Delta^{X,i,j}_ {\kappa(s)} )
+
\nabla f (\Delta^{X,i,j}_ {\kappa(s)} ) \Big( G_{9}^{i,j,s}(s- {\kappa(s)} )+G_{10}^{i,j,s} \Big)
+ \int_{\Delta^{X,i,j}_ {\kappa(s)} }^{\Delta^{X,i,j}_s} \Big( \nabla f(u) -\nabla f (\Delta^{X,i,j}_ {\kappa(s)} ) \Big) du
\\
=&
f (\Delta^{X,i,j}_ {\kappa(s)} )
+
\nabla f (\Delta^{X,i,j}_ {\kappa(s)} ) \Big( G_{9}^{i,j,s}(s- {\kappa(s)} )+G_{10}^{i,j,s} \Big)
\\
&+ \Big( \nabla f\big(\Delta^{X,i,j}_ {\kappa(s)} +\rho_1 ( G_{9}^{i,j,s}(s- {\kappa(s)} )+G_{10}^{i,j,s}) \big) -\nabla f (\Delta^{X,i,j}_ {\kappa(s)} ) \Big)
\Big( \Delta^{X,i,j}_s-\Delta^{X,i,j}_ {\kappa(s)} \Big) ,
\\
f(\Delta^{Y,i,j}_ {\kappa(s)} )
=&
f (\Delta^{X,i,j}_ {\kappa(s)} )
+
\nabla f (\Delta^{X,i,j}_ {\kappa(s)} ) \Big( G_{9}^{i,j,s} h \Big)
+
\Big( \nabla f\big(\Delta^{X,i,j}_ {\kappa(s)} +\rho_2 ( G_{9}^{i,j,s}h) \big) -\nabla f (\Delta^{X,i,j}_ {\kappa(s)} ) \Big)
\Big( \Delta^{Y,i,j}_ {\kappa(s)} -\Delta^{X,i,j}_ {\kappa(s)} \Big).
\end{align*}
Note that only $G_{10}$ contains the Brownian increments. From the above, there exist $\rho_{1,s},~\rho_{2,s}\in[0,1]$ for all $s\in[0,T]$, and by Young's inequality, we have
\begin{align}
\label{eq:gm:f delta term 00}
& \int_0^t
\mathbb{E}\Big[ \Big\langle f(\Delta^{X,i,j}_s)
-f(\Delta^{Y,i,j}_ {\kappa(s)} )
,\Delta X^i_s \Big\rangle \Big] \mathrm{d} s
\\
\label{eq:gm:f delta term 11}
&\le
\int_0^t
\mathbb{E}\Big[ \Big\langle \nabla f
(\Delta^{X,i,j}_ {\kappa(s)} )
\Big( G_{9}^{i,j,s}(s-h- {\kappa(s)} )+G_{10}^{i,j,s} \Big)
,\Delta X^i_s \Big\rangle \Big] \mathrm{d} s
+
C \int_0^t
\mathbb{E}\Big[ |\Delta X^i_s|^2 \Big] \mathrm{d} s
\\
\label{eq:gm:f delta term 22}
&+ C
\int_0^t
\mathbb{E}\Big[ \Big| \nabla f\Big(\Delta^{X,i,j}_ {\kappa(s)} +\rho_{1,s} ( G_{9}^{i,j,s}(s- {\kappa(s)} )+G_{10}^{i,j,s}) \Big) -\nabla f (\Delta^{X,i,j}_ {\kappa(s)} ) \Big|^2
\Big| \Delta^{X,i,j}_s-\Delta^{X,i,j}_ {\kappa(s)} \Big|^2 \Big] \mathrm{d} s
\\
\label{eq:gm:f delta term 33}
&+ C
\int_0^t
\mathbb{E}\Big[ \Big|\nabla f\big(\Delta^{X,i,j}_ {\kappa(s)} +\rho_{2,s} ( G_{9}^{i,j,s}h) \big) -\nabla f (\Delta^{X,i,j}_ {\kappa(s)} ) \Big|^2
\Big| \Delta^{Y,i,j}_ {\kappa(s)} -\Delta^{X,i,j}_ {\kappa(s)} \Big|^2 \Big] \mathrm{d} s.
\end{align}
For the first term of \eqref{eq:gm:f delta term 11}, by Young's inequality
\begin{align}
&\int_0^t
\mathbb{E}\Big[ \Big\langle \nabla f
(\Delta^{X,i,j}_ {\kappa(s)} )
\Big( G_{9}^{i,j,s}(s-h- {\kappa(s)} )+G_{10}^{i,j,s} \Big)
,\Delta X^i_s \Big\rangle \Big] \mathrm{d} s
\\
\label{eq:gm:f delta term 11-11}
&\le
C \int_0^t
\mathbb{E}\Big[ |\Delta X^i_s|^2 \Big] \mathrm{d} s
+
C \int_0^t
\mathbb{E}\Big[ \Big| \nabla f
(\Delta^{X,i,j}_ {\kappa(s)} )
G_{9}^{i,j,s}(s-h- {\kappa(s)} )\Big|^2
\Big] \mathrm{d} s
\\
\label{eq:gm:f delta term 11-22}
& +
\int_0^t
\mathbb{E}\Big[ \Big\langle
\nabla f
(\Delta^{X,i,j}_ {\kappa(s)} )~ G_{10}^{i,j,s},\Delta X^i_s -\Delta X^i_{ {\kappa(s)} } \Big\rangle \Big] \mathrm{d} s
+
\int_0^t
\mathbb{E}\Big[ \Big\langle
\nabla f
(\Delta^{X,i,j}_ {\kappa(s)} )~ G_{10}^{i,j,s},\Delta X^i_{ {\kappa(s)} } \Big\rangle \Big] \mathrm{d} s.
\end{align}
For the second term of \eqref{eq:gm:f delta term 11-11}, since $m\ge4q+2$, by Assumption \ref{Ass:GM Assumption} and Theorem \ref{theorem:moment bound for the big theorm time extensions }, using calculations similar to those in \eqref{eq:se1:f-f things} and Proposition \ref{prop:auxilary for multipy of r.v.}, we have
\begin{align*}
C \int_0^t
\mathbb{E}\Big[ \Big| \nabla f
(\Delta^{X,i,j}_ {\kappa(s)} )
G_{9}^{i,j,s}(s-h- {\kappa(s)} )\Big|^2
\Big] \mathrm{d} s
\le C h^2
\int_0^t
\mathbb{E}\Big[ 1+| \hat{X} _{ {\kappa(s)} }^{i,N}|^{4q+2}
+ | Y_{ {\kappa(s)} }^{i,\star,N}|^{4q+2}\Big] \mathrm{d} s
\le Ch^2.
\end{align*}
By Jensen's inequality and calculations close to those for $I_3$ in \eqref{eq:se2:I3 source}, since $m\ge4q+2$, we have
\begin{align}
\mathbb{E} \big[ |\Delta X^i_t &-\Delta X^i_{ \kappa(t) } |^2 \big]
=\mathbb{E} \Big[ \Big|\int_{ \kappa(t) }^t \Big( v(X_{s}^{i,N},\mu_{s}^{X,N})-v(Y_{\kappa(s)}^{i,\star,N}, \hat{\mu} ^{Y,N}_{\kappa(s)})\Big) ~ \mathrm{d} s \Big|^2 \Big]
\\
\le& h \int_{ \kappa(t) }^t
\frac{1}{N} \sum_{j=1}^N
\mathbb{E} \Big[ \Big| f(X_{s}^{i,N}-X_{s}^{j,N})-f(Y_{\kappa(s)}^{i,\star,N}-Y_{\kappa(s)}^{j,\star,N})\Big|^2\Big] ~ \mathrm{d} s
\le Ch^3.
\end{align}
Thus, for the first term of \eqref{eq:gm:f delta term 11-22}, by Cauchy-Schwarz inequality and the properties of the Brownian increment
\begin{align*}
\int_0^t
\mathbb{E}\Big[ & \Big\langle
\nabla f
(\Delta^{X,i,j}_ {\kappa(s)} )~ G_{10}^{i,j,s},\Delta X^i_s -\Delta X^i_{ {\kappa(s)} } \Big\rangle \Big] \mathrm{d} s
\leq
\int_0^t
\sqrt{\mathbb{E} \big[ \big|
\nabla f (\Delta^{X,i,j}_ {\kappa(s)} )~ G_{10}^{i,j,s}
\big|^2 \big] }
\sqrt{\mathbb{E} \big[ \big|
\Delta X^i_s -\Delta X^i_{ {\kappa(s)} }
\big|^2 \big] }
\mathrm{d} s
\le
Ch^2.
\end{align*}
For the second term of \eqref{eq:gm:f delta term 11-22}, since $ G_{10}^{i,j,s}$ of \eqref{eq:gm:G7 comes} is conditionally independent of $\Delta^{X,i,j}_ {\kappa(s)} $ and $\Delta X^{i}_ {\kappa(s)} $ (and contains the Brownian increments), the tower property yields
\begin{align}
\label{eq:gm:f delta term 11-22 result 2 }
\int_0^t
\mathbb{E}\Big[ & \Big\langle
\nabla f
(\Delta^{X,i,j}_ {\kappa(s)} )~ G_{10}^{i,j,s},\Delta X^i_{ {\kappa(s)} } \Big\rangle \Big] \mathrm{d} s
= 0.
\end{align}
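In more detail (writing, for this sketch only, $\mathcal{F}_{\kappa(s)}$ for the $\sigma$-algebra generated by the system up to time $\kappa(s)$), for every fixed $s$,
\begin{align*}
\mathbb{E}\Big[ \Big\langle
\nabla f (\Delta^{X,i,j}_ {\kappa(s)} )~ G_{10}^{i,j,s},\Delta X^i_{ {\kappa(s)} } \Big\rangle \Big]
=
\mathbb{E}\Big[ \Big\langle
\nabla f (\Delta^{X,i,j}_ {\kappa(s)} )~ \mathbb{E}\big[ G_{10}^{i,j,s} \,\big|\, \mathcal{F}_{\kappa(s)} \big],\Delta X^i_{ {\kappa(s)} } \Big\rangle \Big]
=0,
\end{align*}
since $\nabla f (\Delta^{X,i,j}_ {\kappa(s)} )$ and $\Delta X^i_{ {\kappa(s)} }$ are $\mathcal{F}_{\kappa(s)}$-measurable, while the Brownian increments entering $G_{10}^{i,j,s}$ are independent of $\mathcal{F}_{\kappa(s)}$ and have zero mean.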
Thus, plugging the above results back into \eqref{eq:gm:f delta term 11}, we conclude that
\begin{align}
\label{eq:gm:f delta term 11-result}
\int_0^t
\mathbb{E}\Big[ \Big\langle \nabla f
(\Delta^{X,i,j}_ {\kappa(s)} )
\Big( G_{9}^{i,j,s}(s-h- {\kappa(s)} )+G_{10}^{i,j,s} \Big)
,\Delta X^i_s \Big\rangle \Big] \mathrm{d} s \le Ch^2.
\end{align}
For \eqref{eq:gm:f delta term 22}, by Assumption \ref{Ass:GM Assumption}, Cauchy-Schwarz inequality and the properties of the Brownian increment, and the condition $m\ge \max\{8q,~4q+4\}$
\begin{align*}
&\mathbb{E}\Big[ \big| \nabla f\big(\Delta^{X,i,j}_ {\kappa(s)} +\rho_{1,s} ( G_{9}^{i,j,s}(s- {\kappa(s)} )+G_{10}^{i,j,s}) \big) -\nabla f \big(\Delta^{X,i,j}_ {\kappa(s)} \big) \big|^4 \Big]
\\
&\le C \mathbb{E}\Big[ \big| \Big( 1
+ \big|\Delta^{X,i,j}_ {\kappa(s)}
+ \rho_{1,s} \big( G_{9}^{i,j,s}(s- {\kappa(s)} )+G_{10}^{i,j,s}\big) \big|^{q-1} + \big|\Delta^{X,i,j}_ {\kappa(s)} \big|^{q-1}\Big)
\big| \rho_{1,s} \big( G_{9}^{i,j,s}(s- {\kappa(s)} )+G_{10}^{i,j,s}\big) \big|^4 \Big] \le Ch^2
\end{align*}
and
\begin{align*}
&\mathbb{E}\big[ \big| \Delta^{X,i,j}_s-\Delta^{X,i,j}_ {\kappa(s)} \big|^4 \big]
\le
C \mathbb{E}\Big[ \big| \big( G_{9}^{i,j,s}(s- {\kappa(s)} )+G_{10}^{i,j,s}\big)\big|^4 \Big]
\le Ch^2.
\end{align*}
Thus, using Cauchy-Schwarz inequality again and the results above we conclude that
\begin{align}
\label{eq:gm:f delta term 22-result}
\int_0^t
\mathbb{E}\Big[ \big| \nabla f\Big(\Delta^{X,i,j}_ {\kappa(s)}
+\rho_{1,s} \big( G_{9}^{i,j,s}(s- {\kappa(s)} )+G_{10}^{i,j,s}\big) \Big) -\nabla f (\Delta^{X,i,j}_ {\kappa(s)} ) \big|^2
\big| \Delta^{X,i,j}_s-\Delta^{X,i,j}_ {\kappa(s)} \big|^2 \Big] \mathrm{d} s
\le Ch^2 .
\end{align}
For \eqref{eq:gm:f delta term 33}, recall \eqref{eq:gm:G7 comes}. Similarly to the above, by the assumption $m\ge4q+2$, we have
\begin{align}
\label{eq:gm:f delta term 33-result}
\int_0^t
\mathbb{E}\Big[ ~ \big|\nabla f\big( \Delta^{X,i,j}_ {\kappa(s)} +\rho_{2,s} G_{9}^{i,j,s}h \big) -\nabla f \big(\Delta^{X,i,j}_ {\kappa(s)} \big) \big|^2 ~
\big| G_{9}^{i,j,s} h \big|^2 ~ \Big] \mathrm{d} s \le Ch^2.
\end{align}
Thus, plugging \eqref{eq:gm:f delta term 11-result}, \eqref{eq:gm:f delta term 22-result} and \eqref{eq:gm:f delta term 33-result} back into \eqref{eq:gm:f delta term 00} yields
\begin{align}
\label{eq:gm:proof: delta x term 2 results}
\int_0^t
\mathbb{E}\Big[ \Big\langle f(\Delta^{X,i,j}_s)
-f(\Delta^{Y,i,j}_ {\kappa(s)} )
,\Delta X^i_s \Big\rangle \Big] \mathrm{d} s
&\le
Ch^2+
C \int_0^t
\mathbb{E}\big[ |\Delta X^i_s|^2 \big] \mathrm{d} s.
\end{align}
Plugging the above result and \eqref{eq:gm:proof: delta x term 1 results} back into \eqref{eq:gm:proof: delta x term 1 }, we conclude that, for all $i\in\llbracket 1,N\rrbracket$, $t\in[0,T]$,
\begin{align}
\mathbb{E} \big[ |\Delta X^i_t|^2 \big]
\le&
C\int_0^t \mathbb{E} \big[ |\Delta X^i_s|^2 \big] \mathrm{d} s +Ch^2.
\end{align}
Gr\"onwall's lemma delivers the final result after taking supremum over $i\in \llbracket 1,N \rrbracket $.
\end{proof}
\section{Introduction}
The spectral properties of the Laplacians and Schr\"odinger operators on periodic graphs
have been studied by many authors
\cite{A, AnIsMo, HiNo, HiShi, HSSS, IsKo, IsMo, KoSa, RaRo, Su}
(see also the references therein).
In this paper, the essential spectrum of the Laplacian on a perturbed periodic graph is considered.
It is well-known that if the perturbation is compact,
the essential spectrum is stable (see Proposition \ref{prop2.1}).
We are interested in the case in which the perturbation is possibly non-compact,
{\rm i.e.}, the operator ``$L_{G^\prime} - L_G$" is not a compact operator,
where $L_G$ (resp., $L_{G^\prime}$) is the Laplacian on a periodic graph $G$
(resp., a perturbed graph $G^\prime$ of $G$).
If $G^\prime$ is a graph obtained from $G$ by removing and adding some vertices,
then $G$ is not a subgraph of $G^\prime$, and {\it vice versa}.
In such cases, the meaning of ``$L_{G^\prime} - L_G$" is unclear,
because $L_G$ and $L_{G^\prime}$ act on different Hilbert spaces.
The precise meaning of ``$L_{G^\prime} - L_G$" is given in \eqref{perturb}.
It is noteworthy that a perturbed periodic graph $G^\prime$ might not be periodic.
In general, it is difficult to determine the spectrum of an infinite graph,
if it does not have a nice symmetry, such as periodicity.
In this paper,
we present a class of perturbed periodic graphs $G^\prime$
such that the essential spectrum of an unperturbed graph $G$
is contained in that of $G^\prime$:
\begin{equation}
\label{Eq1.1}
\sigma_{\rm ess}(L_G) \subset \sigma_{\rm ess}(L_{G^\prime}).
\end{equation}
We emphasize that the converse of \eqref{Eq1.1}
cannot be expected in general.
As shown in Example \ref{counter_ex},
there exists a perturbed periodic graph $G^\prime$ such that
$\sigma_{\rm ess}(L_G) \subset \sigma_{\rm ess}(L_{G^\prime})$
and $\sigma_{\rm ess}(L_{G^\prime}) \setminus \sigma_{\rm ess}(L_G) \not=\emptyset$.
In our definition \eqref{DefLG},
$L_G$ is self-adjoint, and its spectrum $\sigma(L_G)$ is contained in $[-1,1]$.
This property raises the question of whether
$\sigma(L_G)$ is already all of $[-1,1]$.
In their paper \cite{HiShi},
Higuchi and Shirai stated that
$G$ has the full spectrum property (FSP) if $\sigma(L_G) = [-1,1]$,
and studied the problem of whether an infinite graph $G$ has the FSP.
If \eqref{Eq1.1} holds and $G$ has the FSP,
then $G^\prime$ has the FSP (see Corollary \ref{FSP}).
Therefore, it is possible to determine the spectra of perturbed graphs of $\mathbb{Z}^d$,
such as those of cones (Example \ref{ex_cone}) and the upper-half plane (Example \ref{ex_upp}).
We also discuss the spectrum of a graph obtained from $\mathbb{Z}^2$
by randomly adding pendants.
This paper is organized as follows.
In Section 2,
we present some basic facts on
perturbed and periodic graphs.
Section 3 is devoted to the study of
the essential spectra of perturbed periodic graphs.
In Section 4,
we demonstrate how to determine
the spectrum of a perturbed periodic graph,
using the results established in Section 3.
We present the proofs of technical lemmas
in the appendix.
\section{Preliminaries}
Let $G = (V(G), E(G))$ be an unoriented graph (possibly having
loops and multiple edges),
where $V(G)$ and $E(G)$ are the sets of vertices and unoriented edges, respectively.
We use an ordered pair $\{x, y\} \in V(G) \times V(G)$
to denote the endpoints of an edge $e \in E(G)$
and then write $V(e) = \{x, y\}$.
We consider that each edge $e \in E(G)$ has two orientations
and introduce the set $A(G)$ of all oriented edges $e$,
whose origins and terminals are denoted by $o(e)$ and $t(e)$,
respectively.
We denote the set of all oriented edges whose origin is $x$ by
\[ A_x(G) = \{ e \in A(G) \mid o(e) = x \} \]
and the number of all edges in $A_x(G)$ by ${\rm deg}_G x = \# A_x(G)$.
If there is no danger of confusion,
we omit $G$ in ${\rm deg}_G$.
Throughout this paper,
unless otherwise noted,
we assume that $G$ is locally finite and
\begin{equation}
\label{deg>=1}
1 \leq \inf_{x \in V(G)} {\rm deg} x \leq \sup_{x \in V(G)} {\rm deg} x < \infty.
\end{equation}
The left-hand side of \eqref{deg>=1} implies that there is no isolated vertex.
The Laplacian we address in this paper is defined as
\begin{equation}
\label{DefLG}
(L_G\psi)(x) = \frac{1}{{\rm deg} x}
\sum_{e \in A_x(G)} \psi(t(e)),
\quad \psi \in \ell^2(V(G)),
\end{equation}
where
\[ \ell^2(V(G))
= \{ \psi:V(G) \to \mathbb{C}
\mid \langle \psi, \psi \rangle < \infty \}
\]
is the Hilbert space with the inner product
\[ \langle \psi, \psi \rangle
= \sum_{x \in V(G)} |\psi(x)|^2 {\rm deg} x.
\]
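As a simple illustration of \eqref{DefLG} (a standard example, used again below): for the one-dimensional lattice $G=\mathbb{Z}$, every vertex has degree two and
\[ (L_{\mathbb{Z}}\psi)(n) = \frac{1}{2}\big( \psi(n+1)+\psi(n-1) \big),
\quad n \in \mathbb{Z}; \]
under the Fourier series $\hat\psi(k) = \sum_{n \in \mathbb{Z}} e^{-ikn}\psi(n)$, $k\in[-\pi,\pi)$, the operator $L_{\mathbb{Z}}$ acts as multiplication by $\cos k$, so that $\sigma(L_{\mathbb{Z}}) = \sigma_{\rm ess}(L_{\mathbb{Z}}) = [-1,1]$.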
We say that a graph (possibly having loops and multiple edges) $G^\prime$
is isomorphic to $G$ and write $G^\prime \simeq G$ if
there exists a pair of bijections $\varphi_V:V(G^\prime) \to V(G)$ and
$\varphi_E:E(G^\prime) \to E(G)$ such that
for all $e \in E(G)$
with endpoints $V(e) = \{x, y\}$,
\begin{equation}
\label{consv}
V(\varphi_E^{-1}(e)) = \{\varphi_V^{-1}(x), \varphi_V^{-1}(y)\}.
\end{equation}
In this case, we can introduce an orientation-preserving bijection
$\varphi_A:A(G^\prime) \to A(G)$ as
$\iota^\prime(\varphi_A^{-1}(e)) = \varphi_E^{-1}(\iota(e))$
and
\begin{equation}
\label{vecon}
o(\varphi_A^{-1}(e)) = \varphi_V^{-1}(o(e)),
\quad t(\varphi_A^{-1}(e)) = \varphi_V^{-1}(t(e))
\quad (e \in A(G)),
\end{equation}
where $\iota:A(G) \to E(G)$
and $\iota^\prime:A(G^\prime) \to E(G^\prime)$ are natural surjections.
We know from \eqref{vecon} that $\varphi^{-1}_A(A_x(G)) = A_{\varphi^{-1}_V(x)}(G^\prime)$
and ${\rm deg}_{G^\prime} \varphi^{-1}_V(x) = {\rm deg}_G x$ for all $x \in V(G)$.
If $G^\prime$ is isomorphic to $G$,
we can define a natural unitary operator
$\mathscr{U}:\ell^2(V(G)) \to \ell^2(V(G^\prime))$ as
\begin{equation}
\label{Vecid}
(\mathscr{U}\psi)(\varphi_V^{-1}(x)) = \psi(x),
\quad x \in V(G)
\end{equation}
for $\psi \in \ell^2(V(G))$.
By \eqref{vecon} and \eqref{Vecid},
\begin{equation}
\label{Lid}
L_{G^\prime}\mathscr{U} = \mathscr{U} L_G.
\end{equation}
\subsection{Perturbed graph}
We say that $G_0$ is a subgraph of $G$ and write $G_0 \subset G$
if $G_0$ is a graph satisfying $V(G_0) \subset V(G)$ and $E(G_0) \subset E(G)$.
A graph $G$ with $V(G) \not= \emptyset$ is called non-empty.
\begin{definition}
\label{def1.1}
{\rm
We say that $G^\prime$ is a {\it perturbed graph} of a graph $G$
if there exist non-empty subgraphs $G_0^\prime$ of $G^\prime$
and $G_0$ of $G$ such that $G^\prime_0 \simeq G_0$.
}
\end{definition}
\begin{remark}
\label{1.1}
{\rm
Definition \ref{def1.1} allows the case where $G \subset G^\prime$,
and {\it vice versa}.
If $G^\prime$ is a graph obtained by adding vertices and edges to $G$,
then $G$ is a subgraph of $G^\prime$.
On the other hand,
if $G^\prime$ is a graph obtained by removing vertices and edges from $G$,
then $G^\prime$ is a subgraph of $G$.
In cases where $G^\prime$ is a graph obtained from $G$
by adding and removing vertices and edges,
$G$ is not a subgraph of $G^\prime$,
and {\it vice versa}.
Such a case is also included in Definition \ref{def1.1}.
}
\end{remark}
Let $G^\prime$ be a perturbed graph of a graph $G$ with bijections
$\varphi_V:V(G_0^\prime) \to V(G_0)$ and $\varphi_E:E(G_0^\prime) \to E(G_0)$
satisfying \eqref{consv}.
If there is no danger of confusion,
we omit $V$ (resp. $E$, $A$) in $\varphi_V$ (resp. $\varphi_E$, $\varphi_A$).
By definition, $G^\prime_0 = (\varphi^{-1}(V(G_0)), \varphi^{-1}(E(G_0)))$.
We define an operator
$\mathscr{U}_0:\ell^2(V(G)) \to \ell^2(V(G^\prime))$ as
\begin{equation}
\label{Vecid0}
(\mathscr{U}_0\psi)(x^\prime)
= \begin{cases}
\psi(x), & x^\prime = \varphi^{-1}(x) \quad (x \in V(G_0)) \\
0, & \mbox{otherwise}
\end{cases}
\end{equation}
for $\psi \in \ell^2(V(G))$.
Since, in general, ${\rm deg}_G x \not= {\rm deg}_{G^\prime} \varphi^{-1}(x)$,
we cannot expect $\mathscr{U}_0$ to be a partial isometry.
As shown in the example below,
it can also be the case that $\mathscr{U}_0 L_G \not= L_{G^\prime} \mathscr{U}_0$.
\begin{example}[Lattice with pendants]
\label{ex_pendant}
{\rm
Let $G=\mathbb{Z}$ be the one-dimensional lattice
and $G^\prime = (V(G^\prime), E(G^\prime))$ be defined
by $V(G^\prime) = \mathbb{Z} \times \{0,1\}$
and
\begin{align*}
E(G^\prime)
& = \{ e \mid V(e) = \{(m,0), (n,0)\}, |m-n|=1 \} \\
& \quad \cup \{ e \mid V(e) = \{(m,0), (m,1)\} \}.
\end{align*}
A vertex of degree one is called a pendant vertex.
The vertices $(m, 1) \in V(G^\prime)$ ($m \in \mathbb{Z}$) are pendant vertices,
{\it i.e.}, ${\rm deg}_{G^\prime}(m,1) = 1$.
We set $G_0 = \mathbb{Z}$ and take $G^\prime_0$ to be the subgraph of $G^\prime$ induced by the vertices $\{ (m,0) \mid m \in \mathbb{Z} \}$. We define a bijection $\varphi:V(G^\prime_0) \to V(G_0)$ as
\[ \varphi((m,0)) = m, \quad (m,0) \in V(G^\prime_0). \]
The graph $G^\prime$ is a perturbed graph of
the one-dimensional lattice $\mathbb{Z}$,
which is a graph obtained from $\mathbb{Z}$
by adding a pendant vertex to each vertex of $\mathbb{Z}$.
We know that ${\rm deg}_G m = 2 \not= 3 = {\rm deg}_{G^\prime} \varphi^{-1}(m)$
and
\begin{align*}
\|\mathscr{U}_0\psi \|_{\ell^2(V(G^\prime))}^2
& = \frac{3}{2} \|\psi\|_{\ell^2(V(G))}^2,
\quad \psi \in \ell^2(V(G_0)).
\end{align*}
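The factor $3/2$ can be read off directly from \eqref{Vecid0} and the degrees computed above: for $\psi \in \ell^2(V(G_0))$,
\begin{align*}
\|\mathscr{U}_0\psi \|_{\ell^2(V(G^\prime))}^2
= \sum_{m \in \mathbb{Z}} |\psi(m)|^2 \, {\rm deg}_{G^\prime}(m,0)
= \sum_{m \in \mathbb{Z}} |\psi(m)|^2 \cdot 3
= \frac{3}{2} \sum_{m \in \mathbb{Z}} |\psi(m)|^2 \cdot 2
= \frac{3}{2} \|\psi\|_{\ell^2(V(G))}^2 .
\end{align*}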
By definition, $(\mathscr{U}_0\psi)(\cdot,1) \equiv 0$
and \begin{align*}
(\mathscr{U}_0L_G\psi)(\varphi^{-1}(m))
& = \frac{3}{2} (L_{G^\prime} \mathscr{U}_0\psi)(\varphi^{-1}(m)),
\quad \psi \in \ell^2(V(G)).
\end{align*}
See \cite[p.3465]{Su}, where $G^\prime$ is denoted by $G_{1,1}$.
The essential spectrum of $G^\prime$ is
\[ \sigma_{\rm ess}(L_{G^\prime})
= \left[-1, -\frac{1}{3} \right] \cup \left[ \frac{1}{3}, 1 \right], \]
whereas $\sigma_{\rm ess}(L_\mathbb{Z})=[-1,1]$.
Next we consider the graph $G$
obtained from $\mathbb{Z}$ by adding pendant vertices
to every other vertex of $\mathbb{Z}$,
which was studied in \cite{Su} and called $G_{2,1}$.
Let $G^\prime = G_{1,1}$ as above.
Then, $G^\prime$ is a perturbed graph of $G$.
Indeed, we can check the condition in Definition \ref{def1.1}
as $G_0 = G_0^\prime = \mathbb{Z}$.
From \cite[p.3465]{Su}, we know that
\[ \sigma_{\rm ess}(L_G)
= \left[-1, -\frac{1}{\sqrt{3}} \right] \cup \{0\}
\cup \left[ \frac{1}{\sqrt{3}}, 1 \right]. \]
In particular, we have
$\sigma_{\rm ess}(L_{G}) \not \subset \sigma_{\rm ess}(L_{G^\prime})$
and $\sigma_{\rm ess}(L_{G^\prime}) \not \subset \sigma_{\rm ess}(L_{G})$.
}
\end{example}
In general, the restriction $\mathscr{U}_0 \mid_{\ell^2(V(G_0))}$
is not an isometry but an injection.
\begin{lemma}
{\rm
$\mathscr{U}_0 \mid_{\ell^2(V(G_0))}$
is an injection and
\begin{equation}
\label{U0rest}
c_0
\leq \frac{\|\mathscr{U}_0 \psi\|_{\ell^2(V(G^\prime))}}{\|\psi\|_{\ell^2(V(G))}}
\leq C_0, \quad \psi \in \ell^2(V(G_0))\setminus \{0\},
\end{equation}
where $c_0$ and $C_0$ are positive:
\[ c_0 = \frac{\inf_{x \in V(G_0)} {\rm deg}_{G^\prime} \varphi^{-1}(x)}{\sup_{x \in V(G_0)} {\rm deg}_G x},
\quad C_0 = \frac{\sup_{x \in V(G_0)} {\rm deg}_{G^\prime}\varphi^{-1}(x)}{\inf_{x \in V(G_0)} {\rm deg}_G x}. \]
}
\end{lemma}
\begin{proof}
{\rm
It suffices to prove \eqref{U0rest}.
From \eqref{deg>=1}, we know that $0 < c_0 \leq C_0 < \infty$.
By direct calculation,
\begin{align*}
\|\mathscr{U}_0 \psi \|_{\ell^2(V(G^\prime))}^2
& = \sum_{x \in V(G_0)} |\psi(x)|^2{\rm deg}_{G^\prime}\varphi^{-1}(x) \\
& = \sum_{x \in V(G_0)} |\psi(x)|^2{\rm deg}_G x \frac{{\rm deg}_{G^\prime}\varphi^{-1}(x)}{{\rm deg}_G x},
\end{align*}
which yields \eqref{U0rest}.
}
\end{proof}
To obtain the result that
$\sigma_{\rm ess}(L_G) \subset \sigma_{\rm ess}(L_{G^\prime})$,
we must separate the unperturbed part of $G^\prime$, which preserves
the graph structure of $G$, from the perturbed part.
To this end, we set
\[ \Lambda
= \{ x \in V(G_0) \mid
{\rm deg}_{G^\prime} \varphi^{-1}(x) = {\rm deg}_G x,~
A_x(G) \subset A(G_0) \}.
\]
\begin{lemma}
\label{lem_bulk}
{\rm
Let $x \in \Lambda$.
Then,
\begin{equation}
\label{int_edge}
\varphi^{-1}(A_x(G)) = A_{\varphi^{-1}(x)}(G^\prime).
\end{equation}
Moreover, it follows that for all $\psi \in \ell^2(V(G))$,
\begin{equation}
\label{intertwine}
(\mathscr{U}_0 L_{G}\psi)(\varphi^{-1}(x))
= (L_{G^\prime}\mathscr{U}_0\psi)(\varphi^{-1}(x)).
\end{equation}
}
\end{lemma}
\begin{proof}
Because, by definition, $A_x(G) \subset A(G_0)$,
we have $A_x(G) = A_x(G_0)$.
Combining this with $G_0 \simeq G^\prime_0$
yields the result that
\begin{equation}
\label{eq_bulk01}
\varphi^{-1}(A_x(G)) = A_{\varphi^{-1}(x)}(G^\prime_0).
\end{equation}
To show \eqref{int_edge}, it suffices to prove
\begin{equation}
\label{eq_bulk02}
A_{\varphi^{-1}(x)}(G^\prime_0) = A_{\varphi^{-1}(x)}(G^\prime).
\end{equation}
Clearly, $A_{\varphi^{-1}(x)}(G^\prime_0) \subset A_{\varphi^{-1}(x)}(G^\prime)$.
We prove the equality.
Because $G^\prime_0 \simeq G_0$ and $A_x(G) = A_x(G_0)$,
${\rm deg}_{G^\prime_0} \varphi^{-1}(x) = {\rm deg}_{G_0} x$
and ${\rm deg}_{G} x = {\rm deg}_{G_0} x$.
Suppose that
$A_{\varphi^{-1}(x)}(G^\prime) \setminus A_{\varphi^{-1}(x)}(G^\prime_0)
\not= \emptyset$.
Then,
\[ {\rm deg}_{G^\prime} \varphi^{-1}(x)
> {\rm deg}_{G^\prime_0} \varphi^{-1}(x)
= {\rm deg}_{G_0} x
= {\rm deg}_{G} x. \]
This contradicts $x \in \Lambda$,
because, by the definition of $\Lambda$,
${\rm deg}_{G^\prime} \varphi^{-1}(x) = {\rm deg}_G x$.
This proves \eqref{eq_bulk02} and hence \eqref{int_edge}.
By \eqref{vecon}, \eqref{Vecid0}, and \eqref{int_edge},
\begin{align*}
(\mathscr{U}_0 L_{G}\psi)(\varphi^{-1}(x))
& = \frac{1}{{\rm deg}_G x}
\sum_{e \in A_x(G)} \psi(t(e))
\\
& = \frac{1}{{\rm deg}_{G^\prime} \varphi^{-1}(x)}
\sum_{\varphi^{-1}(e) \in A_{\varphi^{-1}(x)}(G^\prime)}
(\mathscr{U}_0\psi)(t(\varphi^{-1}(e))) \\
& = (L_{G^\prime}\mathscr{U}_0\psi)(\varphi^{-1}(x)) ,
\quad x \in \Lambda.
\end{align*}
This proves \eqref{intertwine}.
\end{proof}
\begin{remark}
{\rm
We use the condition
\begin{equation}
\label{eq_bulk00}
A_x(G) \subset A(G_0)
\end{equation}
in the definition of $\Lambda$
to prove Lemma \ref{lem_bulk}.
The condition \eqref{eq_bulk00} does not hold in general,
even if ${\rm deg}_{G^\prime} \varphi^{-1}(x) = {\rm deg}_G x$.
See Example \ref{ex_cone}.
The vertex $x = (x_1,0) \in V(G_0)$ satisfies
${\rm deg}_G x = {\rm deg}_{G^\prime} \varphi^{-1}(x)$.
However, \eqref{eq_bulk00} does not hold,
because $A_x(G) \setminus A_x(G_0) \not=\emptyset$.
}
\end{remark}
Let $P_{\varphi^{-1}(\Lambda)} : \ell^2(V(G^\prime)) \to \ell^2(V(G^\prime))$
be the orthogonal projection onto
the closed subspace
\[ \ell^2(\varphi^{-1}(\Lambda))
:= \{ \psi^\prime \in \ell^2(V(G^\prime)) \mid
{\rm supp}\psi^\prime \subset \varphi^{-1}(\Lambda) \} \]
and $P_{\varphi^{-1}(\Lambda)}^\perp := 1 - P_{\varphi^{-1}(\Lambda)}$.
Because by \eqref{intertwine},
$P_{\varphi^{-1}(\Lambda)} (L_{G^\prime}\mathscr{U}_0 -\mathscr{U}_0 L_G) = 0$,
\begin{equation}
\label{perturb}
L_{G^\prime}\mathscr{U}_0
= \mathscr{U}_0 L_G + K_{\Lambda},
\end{equation}
where
\begin{equation}
\label{K}
K_{\Lambda}
:= P_{\varphi^{-1}(\Lambda)}^\perp (L_{G^\prime}\mathscr{U}_0 -\mathscr{U}_0 L_G).
\end{equation}
In this sense, we say that $\varphi^{-1}(\Lambda)$
(resp. $\varphi^{-1}(\Lambda)^{\rm c}$) is
the unperturbed (resp. perturbed) part of the perturbed graph $G^\prime$.
If $\# \varphi^{-1}(\Lambda)^{\rm c} < \infty$,
then $P_{\varphi^{-1}(\Lambda)}^\perp$ is a finite rank operator
and hence by \eqref{K}, $K_{\Lambda}$ is compact.
\begin{proposition}
\label{prop2.1}
{\rm
Let $G^\prime$ be a perturbed graph of $G$.
If $\# \varphi^{-1}(\Lambda)^{\rm c} < \infty$,
then
\[ \sigma_{\rm ess}(L_{G}) \subset \sigma_{\rm ess}(L_{G^\prime}). \]
}
\end{proposition}
\begin{proof}
{\rm
Let $\lambda \in \sigma_{\rm ess}(L_{G})$ and $\{\psi_n\}$
be a Weyl sequence for $L_G$ such that
(i) $\lim_{n \to \infty} \|(L_G -\lambda )\psi_n\|=0$,
(ii) $\|\psi_n\| = 1$, and (iii) ${\rm w-}\lim_{n \to \infty} \psi_n = 0$.
$\mathscr{U}_0\psi_n/ \|\mathscr{U}_0\psi_n\|$
is a Weyl sequence for $L_{G^\prime}$.
Indeed, by \eqref{U0rest}, $\|\mathscr{U}_0\psi_n\| \geq c_0 > 0$
and hence ${\rm w-}\lim_{n \to \infty} \mathscr{U}_0\psi_n/\|\mathscr{U}_0\psi_n\| =0$.
By the compactness of $K_\Lambda$, it follows that
$\lim_{n \to \infty} \|(L_{G^\prime} - \lambda) (\mathscr{U}_0\psi_n/\|\mathscr{U}_0\psi_n\| )\|=0$.
Hence, $\lambda \in \sigma_{\rm ess}(L_{G^\prime})$.
}
\end{proof}
We want to prove $\sigma_{\rm ess}(L_{G}) \subset \sigma_{\rm ess}(L_{G^\prime})$
under the condition in which $K_\Lambda$ is allowed to be non-compact, {\rm i.e.},
$\# \varphi^{-1}(\Lambda)^{\rm c} = \infty$.
This fact is established in Section \ref{sec.3} in the case in which $G$ is a periodic graph.
\subsection{Periodic graph}
We end this section by providing the definition of $\mathbb{Z}^d$-periodic graphs,
which are not necessarily contained in $\mathbb{R}^d$
(see \cite{AnIsMo, KoSa} for periodic graphs contained in $\mathbb{Z}^d$)
and allow multiple edges and loops.
We set $\mathbb{N}_s = \{ v_1, v_2,\dots, v_s\}$ ($s \in \mathbb{N}$)
and define a translation on $\mathbb{Z}^d \times \mathbb{N}_s$ as
\[ \tau_a((m, v_i)) = (m - a, v_i),
\quad (m, v_i) \in \mathbb{Z}^d \times \mathbb{N}_s. \]
We use $A_{u,v}(G)$ to denote the set of edges with $o(e) = u$, $t(e) = v$
($u, v \in V(G)$):
\[ A_{u,v}(G) = \{ e \in A_u(G) \mid t(e) = v \}. \]
\begin{definition}
{\rm
We say that a graph $G =(V(G), E(G))$
is a {\it $\mathbb{Z}^d$-periodic graph} and write $G \in \mathscr{L}^d$
if $G$ is isomorphic to a locally finite graph $\Gamma = (V(\Gamma), E(\Gamma))$
satisfying ($\mathscr{L}_1$) - ($\mathscr{L}_2$).
\begin{itemize}
\item[($\mathscr{L}_1$)] There exists $s \in \mathbb{N}$ such that
$V(\Gamma) = \mathbb{Z}^d \times \mathbb{N}_s$.
\item[($\mathscr{L}_2$)] For all $u, v \in V(\Gamma)$,
$\# A_{\tau_a(v),\tau_a(u)}(\Gamma) = \# A_{v,u}(\Gamma)$.
\end{itemize}
}
\end{definition}
The condition ($\mathscr{L}_1$) ensures
that for a periodic graph $G \in \mathscr{L}^d$,
there is a bijection $\varphi:V(G) \ni x
\mapsto (m,v_i) \in \mathbb{Z}^d \times \mathbb{N}_s$:
\begin{equation}
\label{id}
x =\varphi^{-1} (m,v_i).
\end{equation}
We henceforth identify a vertex $x \in V(G)$ of a periodic graph
with $(m,v_i) \in \mathbb{Z}^d \times \mathbb{N}_s$
by \eqref{id}, and then write $x = (m, v_i)$.
In this case, we write the vertex set of a periodic graph as
$V(G) \simeq \mathbb{Z}^d \times \mathbb{N}_s$.
By the relations \eqref{Vecid} and \eqref{Lid} (replacing $G^\prime$ with $G$
and $G$ with $\Gamma$), we also identify
the Laplacian $L_{G}$ of a periodic graph with $L_\Gamma$.
Since, by ($\mathscr{L}_2$), ${\rm deg} \tau_a(x) = {\rm deg} x$ for $x \in V(G)$,
\begin{equation}
\label{di}
d_i:= {\rm deg}(m, v_i) \quad (i=1,\ldots,s)
\end{equation}
are independent of $m \in \mathbb{Z}^d$.
The condition ($\mathscr{L}_2$) implies that
for every $a \in \mathbb{Z}^d$ there exists a graph automorphism $\varphi_a$,
given by maps $\varphi_{a, V}:V(G) \to V(G)$ and
$\varphi_{a, E}:E(G) \to E(G)$, such that
$\varphi_{a,V}(x) = \tau_a(x)$ ($x \in V(G)$) and, if $V(e) = \{x,y\}$, then
$V(\varphi_{a,E}(e)) = \{ \tau_a(x), \tau_a(y) \}$.
We use the notation $\tau_a$ to denote the automorphism $\varphi_{a}$
(the subscripts $V$ and $E$ are omitted).
We define the translation $T_a$ on $\ell^2(V(G))$ ($a \in \mathbb{Z}^d$) as
\[ (T_a \psi)(x) = \psi(\tau_a(x)), \quad x \in V(G). \]
By ($\mathscr{L}_2$) again,
the Laplacian $L_G$ commutes with $T_a$
for all $a \in \mathbb{Z}^d$, {\it i.e.}, $[L_G, T_a] = 0$.
Hence, we expect that
$L_G$ and $T_a$ can be simultaneously decomposed.
Indeed, this can be accomplished as follows (for details, see \cite{KoSa} and \cite{Su}).
Let $\ell^2(V_s)= \mathbb{C}^s$ be the Hilbert space with the inner product
\[ \langle \xi, \eta \rangle_{V_s} = \sum_{i=1}^s \bar{\xi}_i \eta_i d_i,
\quad \xi, \eta \in \ell^2(V_s). \]
Let $\mathscr{F}:\ell^2(V(G)) \to \int_{\mathbb{T}^d} ^\oplus \ell^2(V_s) \frac{dk}{(2\pi)^d}$
be a unitary operator defined as
$(\mathscr{F} \psi) (k)
= \left( \hat\psi_i(k) \right)_{i=1}^s$,
where $\hat\psi_i(k) = \sum_{m \in \mathbb{Z}^d} e^{-i k \cdot m}\psi(m,v_i)$.
Then, we have
\[ \mathscr{F}T_a \mathscr{F}^{-1}
= \int_{\mathbb{T}^d}^\oplus e^{-i ak} \frac{dk}{(2\pi)^d} \]
and the Floquet-Bloch decomposition,
\begin{equation}
\label{FBD}
\mathscr{F} L_G \mathscr{F}^{-1}
= \int_{\mathbb{T}^d} ^\oplus L_G(k) \frac{dk}{(2\pi)^d},
\end{equation}
where $L_G(k)$ is the Floquet $s\times s$ matrix and
$k \in \mathbb{T}^d = \mathbb{R}^d/ (2\pi \mathbb{Z})^d$
is the quasimomentum.
Let $\pi_*$ and $\pi^*$ be projections on $V(G)$ defined by
\[ \pi_*(m,v_i) = m \in \mathbb{Z}^d, \quad \pi^*(m,v_i) = v_i \in \mathbb{N}_s,
\quad (m,v_i) \in V(G) \]
and set
\[ A_{i,j}(G) = \{ e \in A_{(0,v_i)}(G) \mid \pi^*(t(e)) = v_j \}. \]
In their paper \cite{KoSa},
Korotyaev and Saburova introduced a convenient notation
called the {\it edge index} $\chi(e)$:
\[ \chi(e) = \pi_*(t(e)) - \pi_*(o(e)) \in \mathbb{Z}^d,
\quad e \in A(G). \]
They called an edge $e$ with non-zero index a {\it bridge}.
By ($\mathscr{L}_2$) and \eqref{vecon},
$\chi$ is $\mathbb{Z}^d$-invariant, {\it i.e.},
\begin{align*}
\chi(\tau_a(e))
& = \pi_*(\tau_a(t(e))) - \pi_*(\tau_a(o(e))) \\
& = (\pi_*(t(e))-a) - (\pi_*(o(e))-a) = \chi(e),
\quad a \in \mathbb{Z}^d.
\end{align*}
We also have $\pi_*(o(\tau_{\pi_*(o(e))}(e))) = 0 \in \mathbb{Z}^d$ and
\[ \chi(e) = \pi_*(t(\tau_{\pi_*(o(e))}(e))), \quad e \in A(G). \]
In particular, if $e \in A_{i,j}(G)$ and $t(e) = (m,v_j)$,
then $\chi(e) = m$.
The following are known:
\begin{lemma}[\cite{KoSa}, \cite{HiNo}]
\label{lem2.3}
{\rm
Let $G \in \mathscr{L}^d$.
\begin{itemize}
\item[(i)]
$L_G(k) = ((L_G)_{i,j}(k))_{i,j=1}^s$
in \eqref{FBD} is given by
\[ (L_G)_{i,j}(k)
= \sum_{e \in A_{i,j}(G)}e^{i \chi(e) \cdot k}/d_i.
\]
\item[(ii)] $\displaystyle \sigma(L_G) = \sigma_{\rm ess}(L_G) = \bigcup_{i=1}^s \lambda_i(\mathbb{T}^d)$, where $\lambda_i(k)$ ($i=1,\ldots,s$) denote the eigenvalues of the Floquet matrix $L_G(k)$.
\end{itemize}
}
\end{lemma}
Using (i) of Lemma \ref{lem2.3}, we have the following.
\begin{proposition}
\label{prop2.4}
{\rm
Let $G \in \mathscr{L}^d$.
\begin{equation*}
(L_G \psi)(m,v_i)
= \sum_{j=1}^s \sum_{e \in A_{i,j}(G)} \psi(m + \chi(e), v_j)/d_i,
\quad (m,v_i) \in V(G).
\end{equation*}
}
\end{proposition}
\begin{proof}
{\rm
By direct calculation,
\begin{align*}
(L_G \psi)(m,v_i)
& = \int_{\mathbb{T}^d} \frac{dk}{(2\pi)^d}
e^{ik \cdot m} \sum_{j=1}^s (L_G)_{i,j}(k)\hat \psi_j(k) \\
& = \sum_{j=1}^s \sum_{e \in A_{i,j}(G)}
\int_{\mathbb{T}^d} \frac{dk}{(2\pi)^d}
e^{ik \cdot (m +\chi(e)) } \hat \psi_j(k)/d_i \\
&= \sum_{j=1}^s \sum_{e \in A_{i,j}(G)} \psi(m + \chi(e), v_j)/d_i.
\end{align*}
}
\end{proof}
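For the reader's convenience we add a purely illustrative numerical sketch of Lemma \ref{lem2.3}; it is not used anywhere in the proofs. For the $\mathbb{Z}$-periodic graph obtained by attaching one pendant vertex to every vertex of $\mathbb{Z}$ (cf. Example \ref{ex_pendant}), each period cell contains $s=2$ vertices with degrees $d_1 = 3$ and $d_2 = 1$, and sampling the eigenvalues of the $2\times 2$ Floquet matrix over the quasimomentum recovers the two bands $[-1,-1/3]$ and $[1/3,1]$ that reappear in Example \ref{counter_ex}. The discretization of $\mathbb{T}^1$ and the use of Python/NumPy below are ad hoc choices.
\begin{verbatim}
import numpy as np

# Band functions of the Z-periodic pendant graph: v1 on the line (d_1 = 3)
# and the pendant v2 (d_2 = 1) per period cell.
d = np.array([3.0, 1.0])
ks = np.linspace(-np.pi, np.pi, 2001)
bands = []
for k in ks:
    # Floquet matrix (L_G)_{i,j}(k) = sum_{e in A_{i,j}(G)} exp(i*chi(e)*k)/d_i
    L = np.array([[2*np.cos(k)/3, 1/3],
                  [1.0,           0.0]])
    # symmetrize w.r.t. the weighted inner product <xi,eta> = sum_i xi_i eta_i d_i
    S = np.diag(np.sqrt(d)) @ L @ np.diag(1/np.sqrt(d))
    bands.append(np.linalg.eigvalsh(S))
bands = np.array(bands)
for i in range(2):
    print("band", i + 1, ":", bands[:, i].min(), bands[:, i].max())
# prints (up to discretization) the bands [-1, -1/3] and [1/3, 1]
\end{verbatim}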
\section{Results}\label{sec.3}
We call a perturbed graph $G^\prime$ of $G \in \mathscr{L}^d$
a {\it perturbed periodic graph} of $G$.
We define the propagation length $l_G \in \mathbb{N}$ by
\begin{equation}
\label{proprang}
l_G = \sup_{j=1,\dots,d}~ \sup_{e \in A(G)}|\chi_j(e)|,
\end{equation}
where $\chi_j(e)$ is the $j$-th component of the edge index of $e$.
If $G$ is connected, then there exists a bridge, and hence $l_G \geq 1$.
We use the following condition.
\begin{itemize}
\item[($\mathscr{P}$)] There exists a sequence $\{x_n\}_{n=1}^\infty \subset V(G_0)$
such that
\[ I_n(x_n) := \{ x \in V(G) \mid \pi_*(x) - \pi_*(x_n) \in [-n-l_G+1,n+l_G-1]^d \} \subset \Lambda. \]
\end{itemize}
We are now in a position to state our main theorem:
\begin{theorem}
\label{mainthm}
{\rm
Let $G^\prime$ be a perturbed periodic graph of $G \in \mathscr{L}^d$.
Suppose that $G^\prime$ satisfies ($\mathscr{P}$).
Then,
\[ \sigma_{\rm ess}(L_G) \subset \sigma_{\rm ess}(L_{G^\prime}). \]
}
\end{theorem}
\begin{remark}
{\rm
From a physical point of view,
$\sigma_{\rm ess}(L_G)$
can be considered as the bulk spectrum.
As shown below,
the bulk spectrum $\lambda \in \sigma_{\rm ess}(L_G)$
corresponds to a Weyl sequence of states
with support in the unperturbed part $\varphi^{-1}(\Lambda)$ of $G^\prime$.
}
\end{remark}
Since $\sigma(L_{G^\prime})$ is contained in $[-1,1]$,
we have the following.
\begin{corollary}
\label{FSP}
{\rm
Let $G^\prime$ be a perturbed periodic graph of $G \in \mathscr{L}^d$.
Suppose that $G^\prime$ satisfies ($\mathscr{P}$) and $G$ has the FSP.
Then, $G^\prime$ has the FSP.
}
\end{corollary}
\begin{proof}[Proof of Theorem \ref{mainthm}]
As in the proof of Proposition \ref{prop2.1},
it suffices to show the existence of a Weyl sequence for $L_{G^\prime}$.
To this end, we fix $\lambda \in \sigma_{\rm ess}(L_G)$.
By Lemma \ref{lem2.3},
we have $\lambda = \lambda_h(k_0)$ for some $h =1,\ldots, s$
and $k_0 \in \mathbb{T}^d$.
Let $\xi_0 \in \ell^2(V_s)$ be a normalized eigenvector of
the Floquet matrix $L_G(k_0)$ corresponding to the eigenvalue $\lambda_h(k_0)$:
$L_G(k_0) \xi_0 = \lambda_h(k_0) \xi_0$.
For $n \in \mathbb{N}$, we define a function $\rho_n:\mathbb{Z}^d \to [0,1]$ as
\[ \rho_n(m) = \prod_{j=1}^d \rho(m_j/n),
\quad m = (m_1,\dots,m_d) \in \mathbb{Z}^d, \]
where
\begin{align*}
\rho(t) := \begin{cases}
1 - |t|, \quad & |t|\leq 1, \\
0, \quad & |t|> 1.
\end{cases}
\end{align*}
Then, $\rho_n$ is supported in $[-n+1,n-1]^d\cap \mathbb{Z}^d$ and
\begin{equation}
\label{rhon}
\sum_{m \in \mathbb{Z}^d} |\rho_n(m)|^2
= \prod_{j=1}^d \sum_{m_j \in [-n+1,n-1] \cap \mathbb{Z}}|\rho(m_j/n)|^2
= \left( \frac{ 2n^2+1}{3n} \right)^d.
\end{equation}
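Purely as an illustrative aside (it is not used in the argument), the identity \eqref{rhon} can be checked numerically in the case $d=1$; the values of $n$ in the following sketch are arbitrary.
\begin{verbatim}
import numpy as np

# Check of sum_{m in Z} |rho(m/n)|^2 = (2 n^2 + 1)/(3 n).
def rho(t):
    return np.maximum(1.0 - np.abs(t), 0.0)

for n in (3, 10, 50):
    m = np.arange(-n, n + 1)          # rho(m/n) vanishes for |m| >= n
    lhs = np.sum(rho(m / n)**2)
    rhs = (2*n**2 + 1)/(3*n)
    print(n, lhs, rhs)                # the last two columns agree
\end{verbatim}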
Let $\psi_n$ ($n \in \mathbb{N}$) be vectors in $\ell^2(V(G))$ defined by
\[ {\psi_n}(m,v_i) = e^{ik_0 \cdot m}\rho_n(m) (\xi_0)_i,
\quad (m,v_i) \in V(G), \]
where $(\xi_0)_i$ is the $i$-th component of $\xi_0$.
Because $\xi_0$ is a normalized vector, we know from \eqref{rhon} that
\begin{equation}
\label{normofpsin01}
\|\psi_n\|_{\ell^2(V(G))}^2
= \left(\sum_{m \in \mathbb{Z}^d} |\rho_n(m)|^2 \right) \|\xi_0\|_{V_s}^2
= \left( \frac{ 2n^2+1}{3n} \right)^d.
\end{equation}
Combining ($\mathscr{P}$) with the fact that ${\rm supp}\rho_n = [-n+1,n-1]^d \cap \mathbb{Z}^d$,
we have
\begin{equation}
\label{supp01}
{\rm supp} \psi_n(\tau_{\pi_*(x_n)}(\cdot)) \subset I_n(x_n) \subset \Lambda.
\end{equation}
Hence, by \eqref{U0rest} and \eqref{normofpsin01},
\begin{equation}
\label{normofpsin02}
\|\mathscr{U}_0T_{\pi_*(x_n)}{\psi}_n\|
\geq c_0 \left( \frac{ 2n^2+1}{3n} \right)^{d/2}.
\end{equation}
We now define a sequence $\{\Psi_n\} \subset \ell^2(V(G^\prime))$ of normalized vectors as
\begin{equation}
\label{Psin}
\Psi_n = \mathscr{U}_0T_{\pi_*(x_n)}\psi_n/\|\mathscr{U}_0 T_{\pi_*(x_n)} \psi_n\|.
\end{equation}
By definition, we know that $\sup_{x \in V(G)}|\psi_n(x)| \leq 1$.
By \eqref{normofpsin02},
\begin{equation*}
\sup_{x^\prime \in V(G^\prime)}
|\Psi_n(x^\prime)| \leq {c_0}^{-1} \left( \frac{ 2n^2+1}{3n} \right)^{-d/2}.
\end{equation*}
Hence, it follows that
\[ \lim_{n \to \infty} \langle \Phi, \Psi_n \rangle = 0 \]
for all finitely supported vectors $\Phi \in \ell^2(V(G^\prime))$, {\it i.e.},
$\#{\rm supp}\Phi < \infty$.
A standard limiting argument yields the result that ${\rm w-}\lim_{n \to \infty} \Psi_n = 0$.
It remains to prove the following:
\begin{equation}
\lim_{n \to \infty} \|(L_{G^\prime} - \lambda) \Psi_n\| = 0.
\end{equation}
By \eqref{perturb}, we observe that
\begin{equation}
\label{LGprimePsin}
(L_{G^\prime}-\lambda)\Psi_n
= C_n \left(\mathscr{U}_0 (L_G - \lambda) T_{\pi_*(x_n)} \psi_n
+ K_{\Lambda} T_{\pi_*(x_n)} \psi_n\right),
\end{equation}
where $C_n := \|\mathscr{U}_0T_{\pi_*(x_n)}{\psi}_n\|^{-1}$.
Since $T_a$ commutes with $L_G$ for all $a \in \mathbb{Z}^d$,
the first term of \eqref{LGprimePsin} is
\begin{equation}
\label{eq*01}
C_n \mathscr{U}_0 (L_G-\lambda) T_{\pi_*(x_n)} \psi_n
= C_n \mathscr{U}_0T_{\pi_*(x_n)} (L_G -\lambda) \psi_n.
\end{equation}
We will prove that the second term of \eqref{LGprimePsin} vanishes.
Because by the definition of $K_\Lambda$,
$(K_{\Lambda} T_{\pi_*(x_n)} \psi_n)\mid_{\varphi^{-1}(\Lambda) }= 0$,
it suffices to prove the following:
\begin{equation}
\label{*02}
(K_{\Lambda} T_{\pi_*(x_n)} \psi_n)\mid_{\varphi^{-1}(\Lambda)^{\rm c} }= 0.
\end{equation}
Let $x^\prime \in \varphi^{-1}(\Lambda)^{\rm c}$.
By \eqref{K},
\begin{equation*}
\left(K_\Lambda T_{\pi_*(x_n)}\psi_n\right)(x^\prime)
= \left(L_{G^\prime}\mathscr{U}_0 T_{\pi_*(x_n)}\psi_n \right)(x^\prime)
- \left(\mathscr{U}_0 L_{G} T_{\pi_*(x_n)}\psi_n \right)(x^\prime).
\end{equation*}
To show \eqref{*02}, it suffices to prove the following lemma,
which is proved in the appendix.
\begin{lemma}
\label{lem3.5*}
{\rm
Let $x^\prime \in \varphi^{-1}(\Lambda)^{\rm c}$.
\begin{itemize}
\item[(i)] $\left(L_{G^\prime}\mathscr{U}_0 T_{\pi_*(x_n)}\psi_n \right)(x^\prime) =0$.
\item[(ii)] $\left(\mathscr{U}_0 L_{G} T_{\pi_*(x_n)}\psi_n \right)(x^\prime) =0$.
\end{itemize}
}
\end{lemma}
Taking the above argument, \eqref{U0rest}, and \eqref{normofpsin02} into account,
we observe, from \eqref{LGprimePsin} and \eqref{eq*01} that
\begin{align}
\|(L_{G^\prime}-\lambda)\Psi_n\|_{\ell^2(V(G^\prime))}
& = C_n \|\mathscr{U}_0T_{\pi_*(x_n)}(L_G-\lambda)\psi_n\|_{\ell^2(V(G^\prime))} \notag \\
& \leq C_0 c_0^{-1} \left( \frac{ 2n^2+1}{3n} \right)^{-d/2} \|(L_G-\lambda)\psi_n\|_{\ell^2(V(G))}. \label{bound01}
\end{align}
Because $\xi_0$ is an eigenvector of $L_G(k_0)$
corresponding to $\lambda = \lambda_h(k_0)$,
it follows from Lemma \ref{lem2.3} that
\begin{align*}
\lambda\psi_n(m,v_i)
& = e^{ik_0 \cdot m} \rho_n(m) \lambda_h(k_0) (\xi_0)_i \\
& = e^{ik_0 \cdot m} \rho_n(m) (L_G(k_0)\xi_0)_i \\
& = \sum_{j=1}^s \sum_{e \in A_{i,j}(G)}
e^{i(m + \chi(e)) \cdot k_0}\rho_n(m) (\xi_0)_j/d_i .
\end{align*}
From Proposition \ref{prop2.4},
\begin{align*}
((L_G - \lambda)\psi_n) (m,v_i)
& = \sum_{j=1}^s \sum_{e \in B_{i,j}(G)}
e^{i(m + \chi(e))\cdot k_0} (\xi_0)_j (\rho_n(m + \chi(e)) -\rho_n(m))/d_i,
\end{align*}
where $B_{i,j}(G)$ is the set of all bridges contained in $A_{i,j}(G)$:
\[ B_{i,j}(G) = \{ e \in A_{i,j}(G) \mid \chi(e) \not= 0 \}. \]
Let $B(G)$ be the set of all bridges, that is,
\[ B(G) = \bigcup_{i, j=1}^{s} B_{i,j}(G). \]
By the Cauchy--Schwarz inequality and the fact that $d_i \geq 1$,
\begin{align*}
\|(L_G - \lambda)\psi_n\|^2
& = \sum_{m \in \mathbb{Z}^d} \sum_{i=1}^s|((L_G - \lambda)\psi_n) (m,v_i)|^2 \\
& \leq \sum_{m \in \mathbb{Z}^d} \sum_{i=1}^s
\left( \sum_{j=1}^s \sum_{e \in B_{i,j}(G)} |(\xi_0)_j|^2 \right) \\
& \qquad \times
\left( \sum_{j=1}^s \sum_{e \in B_{i,j}(G)}
|\rho_n(m + \chi(e)) -\rho_n(m)|^2\right) \\
& \leq \# B(G) \sum_{e \in B(G)}
\sum_{ m \in \mathbb{Z}^d} |\rho_n(m + \chi(e)) -\rho_n(m)|^2.
\end{align*}
Note that
\begin{align*}
|\rho_n(m + \chi(e)) -\rho_n(m)|
& = \prod_{\chi_j(e) =0} |\rho(m_j/n)| \\
& \qquad \times |\prod_{\chi_i(e) \not= 0} \rho((m_i-\chi_i(e))/n) - \prod_{\chi_i(e) \not=0} \rho(m_i/n)|
\end{align*}
and $\sum_{m \in \mathbb{Z}} |\rho(m/n)|^2 = \sum_{m \in \mathbb{Z}} |\rho((m-l)/n)|^2 $.
We observe that
\begin{align}
\|(L_G-\lambda)\psi_n\|^2
& \leq \#B(G) \left(\sum_{m \in \mathbb{Z}} |\rho(m/n)|^2 \right)^{d-1} \notag \\
& \qquad \times
\sum_{e \in B(G)} \sum_{\chi_i(e) \not=0}
\sum_{m \in \mathbb{Z}}
|\rho((m - \chi_i(e))/n) -\rho(m/n)|^2. \label{bound02}
\end{align}
Combining \eqref{bound01} with \eqref{bound02} yields the result that
\begin{align*}
\|(L_{G^\prime}-\lambda)\Psi_n\|_{\ell^2(V(G^\prime))}^2
& \leq C_0^2 c_0^{-2} \# B(G) \left( \frac{ 2n^2+1}{3n} \right)^{-1} \\
& \quad \times \sum_{e \in B(G)} \sum_{\chi_i(e) \not=0}
\sum_{m \in \mathbb{Z}}
|\rho((m - \chi_i(e))/n) -\rho(m/n)|^2.
\end{align*}
We complete the proof of Theorem \ref{mainthm}
by using the following lemma.
\end{proof}
\begin{lemma}
\label{lem3.3}
{\rm
For all $l \in \mathbb{Z}$,
\[ \sum_{m \in \mathbb{Z}} |\rho((m - l)/n) - \rho(m/n)|^2 = O(n^{-1}) \]
as $n \to \infty$.
}
\end{lemma}
We prove the lemma in the appendix.
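For the reader's convenience we also include a small numerical illustration of Lemma \ref{lem3.3}; the choice of $l$ and of the values of $n$ below is arbitrary, and the computation plays no role in the proofs.
\begin{verbatim}
import numpy as np

def rho(t):
    return np.maximum(1.0 - np.abs(t), 0.0)

l = 3
for n in (10, 100, 1000):
    m = np.arange(-2*n - abs(l), 2*n + abs(l) + 1)   # contains both supports
    S = np.sum((rho((m - l)/n) - rho(m/n))**2)
    print(n, S, n*S)     # n*S stays bounded, i.e. S = O(1/n) as n -> infinity
\end{verbatim}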
\section{Examples}
In this section, we present some examples of Theorem \ref{mainthm}
and Corollary \ref{FSP}.
\begin{example}(Random pendant graph)
\label{ex_random}
{\rm
We consider a graph obtained from $\mathbb{Z}^2$
by randomly adding pendant vertices.
Let $G = \mathbb{Z}^2$.
Note that $G \in \mathscr{L}^2$,
because $G \simeq \mathbb{Z}^2 \times\{v_1\}$
and $ \mathbb{Z}^2 \times\{v_1\}$ satisfies ($\mathscr{L}_1$) and ($\mathscr{L}_2$).
Then, the map $\pi_*$ is trivial, {\it i.e.}, $\pi_*(x)=x$ for all $x\in\mathbb{Z}^2$.
Let $q_x$ $(x\in\mathbb{Z}^2)$ be independent, identically distributed (i.i.d.) Bernoulli random variables
with $\mathbb{P}(q_x=1) = \mathbb{P}(q_x=0) = \frac{1}{2}$.
We define
\begin{align*}
G_0 &:= G \\
V(G_0') &:= \{(x,0) \mid x\in \mathbb{Z}^2 \}, \\
E(G_0') &:= \{ e \mid V(e) = \{(x,0),(y,0)\} , |x-y|=1 \}, \\
V(G') &:= V(G_0') \cup \{(x,1) \mid q_x=1\} \subset \mathbb{Z}^2 \times\{0,1\}, \\
E(G') &:= E(G_0') \cup \{ e \mid V(e) = \{(x,0),(x,1)\}, q_x=1 \}.
\end{align*}
Then, $G'=(V(G'),E(G'))$ is a perturbed graph of $G$ with
$\varphi((x,0)) = x$ $(x \in V(G_0))$.
The vertex $(x,1) \in V(G^\prime)$ is a pendant vertex,
which is added to the vertex $(x,0)$ of $G^\prime_0 \simeq \mathbb{Z}^2$
with probability $\mathbb{P}(q_x=1) =1/2$.
In this sense,
the graph $G^\prime$ is considered as
a graph obtained from $\mathbb{Z}^2$ by adding pendant vertices with probability $1/2$.
See Figure~\ref{fig_random}.
\begin{figure}[tbp]
\centering
\input{FigEx41.tex}
\caption{Graphs in Example \ref{ex_random}}
\label{fig_random}
\end{figure}
In this case, $l_G = 1$ and $I_n(x) = \{y \in\mathbb{Z}^2 \mid y-x \in [-n,n]^2\}$.
Since the random variables $q_x$ are i.i.d.,
$G^\prime$ satisfies ($\mathscr{P}$) almost surely.
Indeed, for each $x\in \mathbb{Z}^2$,
\begin{align*}
\mathbb{P}(I_n(x) \not\subset \Lambda)
= 1 - \Big(\frac{1}{2}\Big)^{\# I_n(x)}
= 1 - \Big(\frac{1}{2}\Big)^{(2n+1)^2}.
\end{align*}
Let $\xi_1,\dots,\xi_N \in \mathbb{Z}^2$ satisfy $I_n(\xi_j) \cap I_n(\xi_k) = \emptyset ~(j\neq k)$.
Then, for any $n\in \mathbb{N}$,
\begin{align*}
\mathbb{P}(\forall x \in \mathbb{Z}^2, ~ I_n(x) \not\subset \Lambda)
~\leq &~
\mathbb{P}(\forall j=1,\dots,N, I_n(\xi_j) \not\subset \Lambda) \\
= & \left[ 1- \left(\frac{1}{2} \right)^{(2n+1)^2} \right]^N
\longrightarrow 0 ~ (N \to \infty).
\end{align*}
Hence, there almost surely exists
a sequence $\{x_n \} \subset \mathbb{Z}^2$ such that $I_n(x_n) \subset \Lambda$.
By Corollary \ref{FSP},
we have
\begin{align*}
\sigma_\mathrm{ess}(L_{G'}) = [-1,1], \quad \mathrm{a.s.}
\end{align*}
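For illustration only, the following short simulation (the sample size, the box size $n=1$ and the random seed are arbitrary choices) indicates that boxes $I_n(x)$ containing no pendant vertices, {\it i.e.}, boxes contained in $\Lambda$, are easy to find in a finite sample.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n, N = 1, 300
q = rng.integers(0, 2, size=(N, N))   # Bernoulli(1/2) pendant indicators q_x
free = [(i, j) for i in range(N - 2*n) for j in range(N - 2*n)
        if q[i:i + 2*n + 1, j:j + 2*n + 1].sum() == 0]
print(len(free), "pendant-free boxes of side", 2*n + 1,
      "in a", N, "x", N, "sample")
\end{verbatim}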
}
\end{example}
\begin{example}(Cone-like graph)
\label{ex_cone}
{\rm
The unperturbed graph is $G = \mathbb{Z}^2$.
We set
\begin{align*}
V(G_0) &:= \{ x=(x_1,x_2) \mid x_i \geq 0, i=1,2 \}, \\
E(G_0) &:= \{ e \mid V(e) = \{x,y\}, |x-y|=1, ~ x,y \in V(G_0)\}.
\end{align*}
A cone-like graph $G'=(V(G'),E(G'))$ is defined by
\begin{align*}
V(G') &= V(G_0), \\
E(G') &= E(G_0) \cup \{e \mid V(e) = \{(x_1,0),(0,x_1)\} \}.
\end{align*}
Setting $V(G_0') := V(G_0)$ and $E(G_0') := E(G_0)$, $G'$ becomes
a perturbed graph of $\mathbb{Z}^2$.
It is easy to show that $\Lambda = \{(x_1,x_2) \in V(G') \mid x_1 \geq 1, x_2 \geq 1\}$.
As in the previous example, the maps $\pi_*$ and $\varphi$ are trivial and $l_G=1$.
Since $I_n(x):=\{y\in\mathbb{Z}^2 \mid y-x \in [-n,n]^2\}$,
if we take $x_n = (n+1,n+1) \in V(G_0)$,
then $I_n(x_n) \subset \Lambda$.
Thus, the graph $G'$ satisfies ($\mathscr{P}$).
Then, by Corollary \ref{FSP},
\begin{align*}
\sigma_\mathrm{ess}(L_{G'}) = [-1,1].
\end{align*}
}
\end{example}
\begin{example}(Upper-half plane)
\label{ex_upp}
{\rm
The unperturbed graph is $G = \mathbb{Z}^2$.
The upper-half plane is defined by
\begin{align*}
V(G') &= \{(x_1,x_2) \mid x_1 \in \mathbb{Z}, ~ x_2 \geq 0 \}, \\
E(G') &= \{ e \mid V(e) = \{x,y\}, |x-y|=1, ~ x,y \in V(G') \}.
\end{align*}
Let $G_0 = G_0^\prime = G^\prime$.
$G'$ is a perturbed graph of $\mathbb{Z}^2$.
One can check that $\pi_*$ and $\varphi$ are trivial maps, $l_G=1$, and
$\Lambda = \{(x_1,x_2) \in \mathbb{Z}^2 \mid x_2\geq 1\}$.
Setting $x_n = (0,n+1)$, $I_n(x_n) \subset \Lambda$.
By Corollary \ref{FSP}
we have
\begin{align*}
\sigma_\mathrm{ess}(L_{G'}) = [-1,1].
\end{align*}
}
\end{example}
\begin{example}
\label{counter_ex}
{\rm
Let $G \in \mathscr{L}^1$ be the graph obtained from $\mathbb{Z}$
by adding a pendant vertex to each $x \in \mathbb{Z}$
(see Example \ref{ex_pendant}).
Let $G^\prime$ be a graph obtained from $G$
by adding a pendant vertex to each vertex $x \geq 0$.
$G^\prime$ is a graph obtained from $\mathbb{Z}$
by adding one pendant vertex to each vertex $x <0$
and two pendant vertices to each vertex $x \geq 0$.
See Figure~\ref{fig_counter}.
\begin{figure}[htbp]
\centering
\input{FigEx44.tex}
\caption{Graphs in Example \ref{counter_ex}}
\label{fig_counter}
\end{figure}
More precisely, we set
\begin{align*}
V(G^\prime) &= V(G) \cup \{(x,2) \mid x \geq 0 \}, \\
E(G^\prime) &= E(G) \cup \{ e \mid V(e) = \{ (x,0), (x,2) \}, x \geq 0 \}, \\
V(G) &= \{(x,s) \mid x \in \mathbb{Z}, s \in \{ 0,1\} \}, \\
E(G) & = \{ e \mid V(e) = \{(x,0), (y,0)\}, |x-y|=1 \} \\
&\qquad \cup \{ e \mid V(e) = \{(x,0), (x,1)\}, x \in \mathbb{Z} \}.
\end{align*}
The vertices $(x,1)$ and $(x,2)$ are pendant vertices adjacent to $(x,0)$.
Let $G_0 = G_0^\prime = G \subset G^\prime$.
Then, $\Lambda = \{ (x,s) \in V(G) \mid x < 0, s \in \{0,1\} \}$.
Clearly, $G^\prime$ satisfies ($\mathscr{P}$).
By Theorem \ref{mainthm}, we have
\[ \sigma_{\rm ess}(L_{G})
= \left[ -1, -\frac{1}{3} \right] \cup \left[ \frac{1}{3}, 1 \right]
\subset \sigma_{\rm ess}(L_{G^\prime}). \]
On the other hand, $L_{G^\prime}$ has the eigenvalue zero with infinite multiplicity.
Indeed, the vectors $\Psi^{(n)} \in \ell^2(V(G^\prime))$ ($n \geq 0$) defined below
are eigenstates corresponding to the eigenvalue zero:
\[ \Psi^{(n)}(x,s)
= \begin{cases}
1/\sqrt{2}, & (x,s) = (n,1) \\
-1/\sqrt{2}, & (x,s) = (n,2) \\
0, & {\rm otherwise}.
\end{cases}
\]
Thus, $0 \in \sigma_{\rm ess}(L_{G^\prime}) \setminus \sigma_{\rm ess}(L_G)$.
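For illustration, the following short computation on a finite truncation of $G^\prime$ (the truncation size and the value of $n$ are arbitrary choices) verifies numerically that $\Psi^{(n)}$ is annihilated by $L_{G^\prime}$.
\begin{verbatim}
import numpy as np

# Vertices: (x,0) on the line, pendant (x,1) for every x,
# and an extra pendant (x,2) for x >= 0.
N = 20
verts = [(x, 0) for x in range(-N, N + 1)] + [(x, 1) for x in range(-N, N + 1)] \
        + [(x, 2) for x in range(0, N + 1)]
idx = {v: i for i, v in enumerate(verts)}
A = np.zeros((len(verts), len(verts)))
for x in range(-N, N + 1):
    if x < N:
        A[idx[(x, 0)], idx[(x + 1, 0)]] = A[idx[(x + 1, 0)], idx[(x, 0)]] = 1
    A[idx[(x, 0)], idx[(x, 1)]] = A[idx[(x, 1)], idx[(x, 0)]] = 1
    if x >= 0:
        A[idx[(x, 0)], idx[(x, 2)]] = A[idx[(x, 2)], idx[(x, 0)]] = 1
deg = A.sum(axis=1)
L = A / deg[:, None]          # (L u)(x) = (1/deg x) * sum of u over the neighbours of x
n = 5
Psi = np.zeros(len(verts))
Psi[idx[(n, 1)]] = 1/np.sqrt(2)
Psi[idx[(n, 2)]] = -1/np.sqrt(2)
print(np.abs(L @ Psi).max())  # prints 0.0, so L_{G'} Psi^{(n)} = 0
\end{verbatim}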
}
\end{example}
\section{Introduction}
\label{sec: introduction}
In recent years many different nonlocal inverse problems have been studied. The prototypical example is the inverse problem for the fractional Schr\"odinger operator $(-\Delta)^s+q$, where the measurements are encoded in the (exterior) Dirichlet to Neumann (DN) map $f\mapsto\Lambda_qf=(-\Delta)^su_f|_{\Omega_e}$. Here $\Omega_e={\mathbb R}^n\setminus\overline{\Omega}$ is the exterior of a smoothly bounded domain $\Omega\subset{\mathbb R}^n$ and $0<s<1$. This problem, nowadays called \emph{fractional Calder\'on problem}, was first considered for $q\in L^{\infty}(\Omega)$ in \cite{GSU20} and initiated many of the later developments. The classical proof of the (interior) uniqueness for the fractional Calder\'on problem, that is of the assertion that $\Lambda_{q_1}=\Lambda_{q_2}$ implies $q_1=q_2$ in $\Omega$, relies on the Alessandrini identity, the unique continuation principle (UCP) of the fractional Laplacian and the Runge approximation. Following a similar approach, in the works \cite{bhattacharyya2021inverse,CMR20,CMRU20,GLX,CL2019determining,CLL2017simultaneously,cekic2020calderon,feizmohammadi2021fractional,harrach2017nonlocal-monotonicity,harrach2020monotonicity,GRSU18,GU2021calder,ghosh2021non,lin2020monotonicity,LL2020inverse,LL2022inverse,LLR2019calder,LLU2022calder,KLW2021calder,RS17,ruland2018exponential,RZ2022unboundedFracCald}, it has been shown that one can uniquely recover lower order, local perturbations of many different nonlocal models.
On the other hand, the author together with different collaborators considered in \cite{RZ2022unboundedFracCald,counterexamples,RGZ2022GlobalUniqueness,RZ2022LowReg,StabilityFracCond} the \emph{inverse fractional conductivity problem}. The main objective in this problem is to uniquely determine the conductivity $\gamma\colon{\mathbb R}^n\to{\mathbb R}_+$ from the DN map $f\mapsto \Lambda_{\gamma}f$ related to the Dirichlet problem
\begin{equation}
\label{eq: frac cond eq intro}
\begin{split}
L^s_\gamma u &= 0 \enspace\text{in}\enspace\Omega,\\
u &= f \enspace\text{in}\enspace\Omega_e.
\end{split}
\end{equation}
Here $L_{\gamma}^s$ denotes the \emph{fractional conductivity operator}, which can be strongly defined via
\begin{equation}
\label{eq: frac cond operator intro}
L_{\gamma}^su(x)=C_{n,s}\gamma^{1/2}(x)\,\text{p.v.}\int_{{\mathbb R}^n}\gamma^{1/2}(y)\frac{u(x)-u(y)}{|x-y|^{n+2s}}\,dy.
\end{equation}
In this formula, $C_{n,s}>0$ is some positive constant and $\text{p.v.}$ denotes the Cauchy principal value. More concretely, in the aforementioned articles it has been shown that a conductivity $\gamma$ with background deviation $m_{\gamma}=\gamma^{1/2}-1$ in $H^{s,n/s}({\mathbb R}^n)$ can be uniquely recovered from the DN data, that in the measurement set the conductivity can be explicitly reconstructed with a Lipschitz modulus of continuity, and that on smooth, bounded domains the full data inverse fractional conductivity problem is, under suitable a priori assumptions, logarithmically unstable.
Let us note that as $s$ converges to 1, the fractional conductivity operator $L_{\gamma}^s$ becomes the \emph{conductivity operator} $L_{\gamma}u=-\text{div}(\gamma\nabla u)$. Hence the above inverse problem can be considered as a nonlocal analogue of the classical \emph{Calder\'on problem} \cite{calderon}, that is, the problem of uniquely recovering the conductivity $\gamma\colon\overline{\Omega}\to{\mathbb R}_+$ from the DN map $f\mapsto\Lambda_{\gamma}f=\gamma\partial_{\nu}u_f|_{\partial\Omega}$, where $u_f\in H^1(\Omega)$ is the unique solution to the Dirichlet problem of the \emph{conductivity equation}
\begin{equation}
\label{calderon prob intro}
\begin{split}
L_\gamma u &= 0 \,\enspace\text{in}\enspace\Omega,\\
u &= f \enspace\text{on}\enspace\partial\Omega
\end{split}
\end{equation}
and $\nu$ denotes the outward pointing unit normal vector of the smoothly bounded domain $\Omega\subset{\mathbb R}^n$. The mathematical investigation of the inverse conductivity problem dates at least back to the work \cite{Langer33-calderon-halfspace} of Langer. The uniqueness proof of the Calder\'on problem is based on the \emph{Liouville reduction}, which allows one to reduce this inverse problem for a variable coefficient operator to the inverse problem for the Schr\"odinger equation $-\Delta+q$, on the construction of complex geometric optics (CGO) solutions \cite{SU87}, and on a boundary determination result \cite{KV84}. The first uniqueness proof for the inverse fractional conductivity problem also relied on a reduction of the problem via a \emph{fractional Liouville reduction} to the inverse problem for the fractional Schr\"odinger equation, and the boundary determination of Kohn and Vogelius was replaced by an exterior determination result (cf.~\cite{RGZ2022GlobalUniqueness} for the case $m_{\gamma}\in H^{2s,n/2s}({\mathbb R}^n)$ and \cite{RZ2022LowReg} for $m_{\gamma}\in H^{s,n/s}({\mathbb R}^n)$). Since the UCP and the Runge approximation are much stronger for nonlocal operators than for local ones, which in turn relies on the fact that solutions to $(-\Delta)^s+q$ are much less rigid than those to the local Schr\"odinger equation $-\Delta+q$, the uniqueness for the nonlocal Schr\"odinger equation can be established without the construction of CGO solutions. In fact, it is an open problem whether these exist for the fractional Schr\"odinger equation.
\subsection{The optical tomography equation}
Recently, in the articles \cite{OptTomHarrach,SimDetDiffAbs}, it has been investigated whether the diffusion $\gamma$ and the absorption coefficient $q$ in the \emph{optical tomography equation}
\begin{equation}
\label{eq: opt tom prob}
\begin{split}
L_{\gamma}u +qu &= F \,\enspace\text{in}\enspace\Omega
\end{split}
\end{equation}
can be uniquely recovered from the partial Cauchy data $(u|_{\Gamma},\gamma\partial_{\nu}u|_{\Gamma})$, where $\Omega\subset{\mathbb R}^n$ is a bounded domain and $\Gamma \subset\partial\Omega$ is an arbitrarily small region of the boundary. This problem arises in (stationary) diffusion based optical tomography and therefore we refer to \eqref{eq: opt tom prob} as the optical tomography equation. Generally speaking, in optical tomography one uses low energy visible or near infrared light (wavelength $\lambda\sim 700$--$1000\,$nm) to probe highly scattering media (such as a tissue sample of a human body) and aims to reconstruct the optical properties within the sample from intensity measurements on the boundary. In a possible experimental situation, light is sent via optical fibres to the surface of the medium under investigation and the transilluminated light is measured by some detecting fibres.
The starting point to describe the radiation propagation in highly scattering media is the radiative transfer equation (Boltzmann equation)
\begin{equation}
\label{eq: radiation equation}
\begin{split}
&\partial_tI(x,t,v)+v\cdot\nabla I(x,t,v)+(\mu_a+\mu_s)I(x,t,v)\\
&\quad=\mu_s\int_{S^{n-1}}f(v,v')I(x,t,v')\,d\sigma(v')+G(x,t,v),
\end{split}
\end{equation}
which describes the change of the radiance $I=I(x,t,v)$ at spacetime point $(x,t)$ into the direction $v\in S^{n-1}=\{x\,;\, |x|=1\}$. Here, we set $c=1$ (speed of light) and the other quantities have the following physical meaning:\\
\begin{tabular}{ c l }
$\mu_a$ & absorption coefficient \\
$\mu_s$ & scattering coefficient \\
$f(v,v')$ & scattering phase function - probability that the wave \\
& incident in direction $v'$ is scattered into direction $v$ \\
$G$ & isotropic source
\end{tabular}\\
In the diffusion approximation, as explained in detail in \cite{OptTomArridge} or \cite[Appendix]{schweiger1995finite}, one gets equation \eqref{eq: opt tom prob}, where the quantities are related as follows:\\
\begin{tabular}{ c l }
$u$ & photon density - $u(x,t)=\int_{S^{n-1}}I(x,t,v')\,d\sigma(v')$ \\
$\gamma$ & diffusion coefficient of the medium - $\gamma=[3(\mu_a+\mu'_s)]^{-1}$ \\
& with $\mu'_s$ being the reduced scattering coefficient \\
$q$ & absorption coefficient $\mu_a$ \\
$F$ & isotropic source
\end{tabular}\\
and
$-\gamma\partial_{\nu}u|_{\Gamma}$ describes the normal photon current (or exitance) across $\Gamma\subset\partial\Omega$. Let us remark that in the diffusion approximation one assumes $\mu_a\ll \mu_s$ and that the light propagation is weakly anisotropic, which is incorporated in $\mu'_s$. For further discussion on this classical model, we refer to the above cited articles and \cite{gibson2005recent}.
\subsubsection{Non-uniqueness in diffusion based optical tomography}
\label{subsubsec: nonunique OT}
In \cite{arridge1998nonuniqueness}, Arridge and Lionheart constructed counterexamples to uniqueness for the inverse problem of the diffusion based optical tomography equation \eqref{eq: opt tom prob}. They consider a smoothly bounded domain $\Omega\subset{\mathbb R}^n$ containing a compact subdomain $\Omega_0\Subset \Omega$ such that the isotropic source is supported in $\Omega_1\vcentcolon = \Omega\setminus\overline{\Omega}_0$. Then they observe that if the diffusion coefficient $\gamma$ is sufficiently regular, the optical tomography equation \eqref{eq: opt tom prob} is reduced via the Liouville reduction to
\begin{equation}
\label{eq: reduced opt tom prob}
\begin{split}
-\Delta v +\eta v &= \frac{F}{\gamma^{1/2}} \,\enspace\text{in}\enspace\Omega
\end{split}
\quad\text{with}\quad \eta\vcentcolon = \frac{\Delta\gamma^{1/2}}{\gamma^{1/2}}+\frac{q}{\gamma},
\end{equation}
where $v=\gamma^{1/2}u$. Now, one can change the coefficients $(\gamma,q)$ to
\begin{equation}
\label{eq: equivalent coeff}
\widetilde{\gamma}\vcentcolon = \gamma+\gamma_0,\quad \widetilde{q}\vcentcolon = q+q_0\quad\text{and}\quad\widetilde{\eta}\vcentcolon=\frac{\Delta\widetilde{\gamma}^{1/2}}{\widetilde{\gamma}^{1/2}}+\frac{\widetilde{q}}{\widetilde{\gamma}},
\end{equation}
where these new parameters satisfy
\begin{enumerate}[(i)]
\item\label{cond 1 non} $\gamma_0\geq 0$ with $\gamma_0|_{\Omega_1}=0$
\item\label{cond 2 non} and $\widetilde{\eta}=\eta$ in $\Omega$.
\end{enumerate}
The latter condition means nothing else than
\begin{equation}
\label{eq: effective potentials}
\frac{\Delta(\gamma+\gamma_0)^{1/2}}{(\gamma+\gamma_0)^{1/2}}+\frac{q+q_0}{\gamma+\gamma_0}=\frac{\Delta\gamma^{1/2}}{\gamma^{1/2}}+\frac{q}{\gamma}\quad\text{in}\quad\Omega.
\end{equation}
Hence, if we have given $\gamma_0$, then this relation can always be used to calculate $q_0$ by
\begin{equation}
\label{eq: calculation of potential perturb}
q_0=(\gamma+\gamma_0)\left(\frac{\Delta\gamma^{1/2}}{\gamma^{1/2}}-\frac{\Delta(\gamma+\gamma_0)^{1/2}}{(\gamma+\gamma_0)^{1/2}}+\frac{q}{\gamma}\right)-q.
\end{equation}
As the transformations \eqref{eq: equivalent coeff} under the conditions \ref{cond 1 non}, \ref{cond 2 non} leave the Dirichlet and Neumann data of solutions to \eqref{eq: reduced opt tom prob} invariant, this leads to the desired counterexamples.
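As a purely illustrative sanity check, the following symbolic computation verifies the identity \eqref{eq: effective potentials} in one space dimension (with $\Delta$ replaced by $d^2/dx^2$) for ad hoc choices of $\gamma$, $q$ and $\gamma_0$; only the algebraic relation \eqref{eq: calculation of potential perturb} is tested here, the support condition \ref{cond 1 non} is not modelled.
\begin{verbatim}
import sympy as sp

x = sp.symbols('x', real=True)
gamma  = 2 + sp.sin(x)**2          # illustrative diffusion
q      = 1 + sp.cos(x)**2          # illustrative absorption
gamma0 = sp.exp(-x**2)             # illustrative perturbation gamma_0
eta   = sp.diff(sp.sqrt(gamma), x, 2)/sp.sqrt(gamma) + q/gamma
gt    = gamma + gamma0
# q_0 computed from the displayed formula
q0    = gt*(sp.diff(sp.sqrt(gamma), x, 2)/sp.sqrt(gamma)
            - sp.diff(sp.sqrt(gt), x, 2)/sp.sqrt(gt) + q/gamma) - q
eta_t = sp.diff(sp.sqrt(gt), x, 2)/sp.sqrt(gt) + (q + q0)/gt
print(sp.simplify(eta_t - eta))    # prints 0, i.e. eta is left invariant
\end{verbatim}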
\subsubsection{Uniqueness in diffusion based optical tomography}
Harrach considered in \cite{OptTomHarrach,SimDetDiffAbs} the discrepancy between the counterexamples of the last section and the positive experimental results in \cite[Section~3.4.3]{gibson2005recent} of recovering $\gamma$ and $q$ simultaneously in more detail. In these works it is established that uniqueness in the inverse problem for the optical tomography equation is obtained, when the diffusion $\gamma$ is piecewise constant and the absorption coefficient piecewise analytic. The main tool to obtain this result is the technique of localized potentials (see~\cite{LocPotential}), which are solutions of \eqref{eq: opt tom prob} that are large on a particular subset but otherwise small. The use of special singular solutions to prove uniqueness in inverse problems for (local or nonlocal) PDEs became in recent years a popular technique (see for example \cite{KV84,KV85,Alessandrini-singular,Nachman1996GlobalUniqueness,SU87} for local PDEs and \cite{RGZ2022GlobalUniqueness,RZ2022LowReg,LRZ22,KLZ22FracpLap} for nonlocal PDEs).
\subsection{Nonlocal optical tomography equation and main results}
The main goal of this article is to study a nonlocal variant of the previously introduced inverse problem for the optical tomography equation. More concretely, we consider the \emph{nonlocal optical tomography equation}
\begin{equation}
\label{eq: nonlocal tomography equation intro}
L_{\gamma}^su+qu=0\quad\text{in}\quad\Omega,
\end{equation}
where $\Omega\subset{\mathbb R}^n$ is a domain bounded in one direction, $0<s<1$, $\gamma\colon {\mathbb R}^n\to{\mathbb R}_+$ is a diffusion coefficient, $q\colon{\mathbb R}^n\to{\mathbb R}$ an absorption coefficient (aka potential) and $L_{\gamma}^s$ the variable coefficient nonlocal operator defined in \eqref{eq: frac cond operator intro}. Then we ask:
\begin{question}
\label{question uniqueness}
Let $W_1,W_2\subset \Omega_e$ be two measurement sets. Under what conditions does the DN map $C_c^{\infty}(W_1)\ni f\mapsto \Lambda_{\gamma,q}f|_{W_2}$ related to \eqref{eq: nonlocal tomography equation intro} uniquely determine the coefficients $\gamma$ and $q$?
\end{question}
By \cite[Theorem~1.8]{RZ2022LowReg}, we know that the measurement sets need to satisfy $W_1\cap W_2\neq\emptyset$ and hence, we consider the setup illustrated in Figure~\ref{figure 1}.
\begin{figure}[!ht]
\centering
\begin{tikzpicture}
\filldraw[color=blue!50, fill=blue!5, xshift=11cm, yshift=1.5cm] (3,-2.5) ellipse (0.9 and 0.9);
\node[xshift=11cm, yshift=1.5cm] at (3,-2.5) {$\raisebox{-.35\baselineskip}{\ensuremath{\Lambda_{\gamma,q}f|_{W_2}}}$};
\filldraw[color=green!50, fill=green!5, xshift=11cm, yshift=0.1cm, opacity=0.8] (3,-2.5) ellipse (1.3 and 0.75);
\node[xshift=11cm, yshift=0.1cm] at (3,-2.5) {$\raisebox{-.35\baselineskip}{\ensuremath{f\in C_c^{\infty}(W_1)}}$};
\filldraw [color=orange!80, fill = orange!5, xshift=8cm, yshift=-2cm,opacity=0.8] plot [smooth cycle] coordinates {(-1,0.9) (3,1.5) (4.5,-0.5) (3,-1) (-1.5,-0.25)};
\node[xshift=3cm] at (6.3,-1.75) {$\raisebox{-.35\baselineskip}{\ensuremath{L_{\gamma}^su+qu=0\enspace\text{in}\enspace\Omega}}$};
\end{tikzpicture}
\caption{\begin{small} Here, $\Omega$ represents the scattering medium, $\gamma$, $q$ the diffusion and absorption coefficients, $f$ a light pulse in $W_1$ and $\Lambda_{\gamma,q}f|_{W_2}$ the nonlocal photon current in $W_2$.\end{small}}
\label{figure 1}
\end{figure}
Moreover, motivated by the counterexamples in Section~\ref{subsubsec: nonunique OT}, we expect that the potentials $q_1,q_2$ should coincide in the measurement sets $W_1,W_2\subset\Omega_e$. Indeed, under slightly weaker assumptions we establish that the DN map $\Lambda_{\gamma,q}$ uniquely determines the coefficients $\gamma$ and $q$. More precisely, we will prove in Section~\ref{sec: inverse problem} the following result:
\begin{theorem}[Global uniqueness]
\label{main thm}
Let $0 < s < \min(1,n/2)$, suppose $\Omega\subset {\mathbb R}^n$ is a domain bounded in one direction and let $W_1,W_2\subset \Omega_e$ be two non-disjoint measurement sets. Assume that the diffusions $\gamma_1, \gamma_2\in L^{\infty}({\mathbb R}^n)$ with background deviations $m_{\gamma_1},m_{\gamma_2}\in H^{s,n/s}({\mathbb R}^n)$ and potentials $q_1,q_2\in \distr({\mathbb R}^n)$ satisfy
\begin{enumerate}[(i)]
\item\label{uniform ellipticity diffusions} $\gamma_1,\gamma_2$ are uniformly elliptic with lower bound $\gamma_0>0$,
\item\label{continuity diffusions} $\gamma_1, \gamma_2$ are a.e. continuous in $W_1\cap W_2$,
\item\label{integrability potentials} $q_1,q_2\in M_{\gamma_0/\delta_0,+}(H^s\to H^{-s})\cap L^p_{loc}(W_1\cap W_2)$ for $\frac{n}{2s}< p\leq \infty$
\item\label{equal potentials in measurement sets} and $q_1|_{W_1\cap W_2}=q_2|_{W_1\cap W_2}$.
\end{enumerate}
If $\Lambda_{\gamma_1,q_1}f|_{W_2}=\Lambda_{\gamma_2,q_2}f|_{W_2}$ for all $f\in C_c^{\infty}(W_1)$, then there holds $\gamma_1=\gamma_2$ in ${\mathbb R}^n$ and $q_1=q_2$ in $\Omega$.
\end{theorem}
\begin{remark}
\label{remark: def of constant}
In the above theorem and throughout this article, we set $\delta_0\vcentcolon =2\max(1,C_{opt})$, where $C_{opt}=C_{opt}(n,s,\Omega)>0$ is the optimal fractional Poincar\'e constant defined via
\begin{equation}
\label{eq: optimal fractional Poincare constant}
C_{opt}^{-1}=\inf_{0\neq u\in\widetilde{H}^s(\Omega)}\frac{[u]^2_{H^s({\mathbb R}^n)}}{\|u\|_{L^2({\mathbb R}^n)}^2}<\infty
\end{equation}
(see~Theorem~\ref{thm: Poinc Unbounded Doms}).
\end{remark}
\begin{remark}
Let us note that when we change $q$ away from $\Omega$ and the measurement sets $W_1,W_2$, then the DN data $C_c^{\infty}(W_1)\ni f\mapsto \Lambda_{\gamma,q}f|_{W_2}$ remain the same.
\end{remark}
Next, let us discuss the assumption that the potentials $q_1,q_2$ coincide in $W=W_1\cap W_2$, where $W_1,W_2\subset\Omega_e$ are two non-disjoint measurement sets. First of all, one can observe that the proofs given in Sections~\ref{subsec: exterior reconstruction} and \ref{subsec: determination of diffusion coeff} still work under the seemingly weaker assumption $W\cap \text{int}(\{q_1=q_2\})\neq\emptyset$. Hence, one can again conclude that $\gamma_1=\gamma_2$ in ${\mathbb R}^n$. Now, the UCP of the fractional conductivity operator $L_{\gamma}^s$ (see~Theorem~\ref{thm: uniqueness q}) and \cite[Corollary~2.7]{RZ2022unboundedFracCald} show that $q_1=q_2$ in $W$. Therefore, if the DN maps coincide, then the assumption $W\cap \text{int}(\{q_1=q_2\})\neq\emptyset$ is as strong as the assumption $q_1=q_2$ in $W$. This leads us to the following question:
\begin{question}
\label{question non-uniqueness}
For a measurement set $W\subset\Omega_e$, can one find two distinct pairs of diffusion and absorption coefficients $(\gamma_1,q_1),(\gamma_2,q_2)$ satisfying the conditions \ref{uniform ellipticity diffusions}-\ref{integrability potentials} in Theorem~\ref{main thm} that generate the same DN data, i.e. $\Lambda_{\gamma_1,q_1}f|_{W}=\Lambda_{\gamma_2,q_2}f|_{W}$ for all $f\in C_c^{\infty}(W)$, but $q_1\not\equiv q_2$ in $W$?
\end{question}
We establish the following result:
\begin{theorem}[Non-uniqueness]
\label{thm: non uniqueness}
Let $0 < s < \min(1,n/2)$, suppose $\Omega\subset {\mathbb R}^n$ is a domain bounded in one direction and let $W\subset \Omega_e$ be a measurement set. Then there exist two different pairs $(\gamma_1,q_1)$ and $(\gamma_2,q_2)$ satisfying $\gamma_1,\gamma_2\in L^{\infty}({\mathbb R}^n)$, $m_{\gamma_1},m_{\gamma_2}\in H^{s,n/s}({\mathbb R}^n)$, \ref{uniform ellipticity diffusions}--\ref{integrability potentials} of Theorem~\ref{main thm} and $\left.\Lambda_{\gamma_1,q_1}f\right|_{W}=\left.\Lambda_{\gamma_2,q_2}f\right|_{W}$ for all $f\in C_c^{\infty}(W)$, but there holds $q_1(x)\neq q_2(x)$ for all $x\in W$.
\end{theorem}
Finally, let us note that whether uniqueness or non-uniqueness holds in the general case where $q_1\not\equiv q_2$ on $W$ but $W\cap \{q_1=q_2\}$ has no interior points is not answered by the above results. In fact, if $q_1,q_2$ are arbitrary potentials and the assumption $\Lambda_{\gamma_1,q_1}f|_{W}=\Lambda_{\gamma_2,q_2}f|_{W}$ for all $f\in C_c^{\infty}(W)$ implies $\gamma_1=\gamma_2$ in ${\mathbb R}^n$, then \cite[Corollary~2.7]{RZ2022unboundedFracCald} again shows $q_1=q_2$ in $W$. Hence, if one wants to establish uniqueness also for potentials $q_1,q_2\in M_{\gamma_0/\delta_0,+}(H^s\to H^{-s})$ satisfying $q_1\not\equiv q_2$ on $W$ and $W\cap \text{int}(\{q_1=q_2\})=\emptyset$, one would need to come up with a proof which does not rely on the separate determination of the coefficients, as the one given in this article does.
\section{Preliminaries}
\label{sec: Preliminaries}
Throughout this article $\Omega\subset {\mathbb R}^n$ is always an open set and the space dimension $n$ is fixed but otherwise arbitrary.
\subsection{Fractional Laplacian and fractional conductivity operator}
We define for $s> 0$ the fractional Laplacian of order $s$ by
\begin{equation}
(-\Delta)^su\vcentcolon = \mathcal{F}^{-1}(|\xi|^{2s}\widehat{u}),
\end{equation}
whenever the right hand side is well-defined. Here, $\mathcal{F}$ and $\mathcal{F}^{-1}$ denote the Fourier transform and the inverse Fourier transform, respectively. In this article we use the following convention
\[
\mathcal{F} u(\xi)\vcentcolon = \hat u(\xi) \vcentcolon = \int_{{\mathbb R}^n} u(x)e^{-ix \cdot \xi} \,dx.
\]
If $u\colon{\mathbb R}^n\to{\mathbb R}$ is sufficiently regular and $s\in(0,1)$, the fractional Laplacian can be calculated via
\begin{equation}
\label{eq: singular int def frac Lap}
\begin{split}
(-\Delta)^su(x)&=C_{n,s}\,\text{p.v.}\int_{{\mathbb R}^n}\frac{u(x)-u(y)}{|x-y|^{n+2s}}\,dy\\
&= -\frac{C_{n,s}}{2}\int_{{\mathbb R}^n}\frac{u(x+y)+u(x-y)-2u(x)}{|y|^{n+2s}}\,dy,
\end{split}
\end{equation}
where $C_{n,s}>0$ is a normalization constant. Based on formula \eqref{eq: singular int def frac Lap}, we introduce the fractional conductivity operator $L_{\gamma}^s$ by
\begin{equation}
\label{eq: frac cond op}
L_{\gamma}^su(x)=C_{n,s}\gamma^{1/2}(x)\,\text{p.v.}\int_{{\mathbb R}^n}\gamma^{1/2}(y)\frac{u(x)-u(y)}{|x-y|^{n+2s}}\,dy
\end{equation}
where $\gamma\colon{\mathbb R}^n\to{\mathbb R}_+$ is the so-called conductivity.
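As an aside, the agreement between the Fourier definition and the singular integral formula \eqref{eq: singular int def frac Lap} can be checked numerically; the following one-dimensional sketch uses the standard normalization constant $C_{1,s}=\frac{4^s\Gamma(1/2+s)}{\sqrt{\pi}|\Gamma(-s)|}$, a Gaussian test function and simple quadrature, all of which are ad hoc choices that play no role in the rest of the article.
\begin{verbatim}
import numpy as np
from scipy.special import gamma
from scipy.integrate import quad

s = 0.4
u = lambda y: np.exp(-y**2)

def frac_lap_fourier(x):
    # (2 pi)^{-1} int |xi|^{2s} u^(xi) e^{i x xi} dxi, u^(xi) = sqrt(pi) e^{-xi^2/4}
    f = lambda xi: abs(xi)**(2*s)*np.sqrt(np.pi)*np.exp(-xi**2/4)*np.cos(x*xi)/(2*np.pi)
    return 2*quad(f, 0, 50)[0]                       # the integrand is even in xi

C = 4**s*gamma(0.5 + s)/(np.sqrt(np.pi)*abs(gamma(-s)))
def frac_lap_integral(x):
    f = lambda y: (u(x + y) + u(x - y) - 2*u(x))/y**(1 + 2*s)
    return -C/2*2*quad(f, 0, np.inf, limit=200)[0]   # the integrand is even in y

for x in (0.0, 0.7, 1.5):
    print(x, frac_lap_fourier(x), frac_lap_integral(x))   # the two columns agree
\end{verbatim}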
\subsection{Sobolev spaces}
The classical Sobolev spaces of order $k\in{\mathbb N}$ and integrability exponent $p\in [1,\infty]$ are denoted by $W^{k,p}(\Omega)$. Moreover, we let $W^{s,p}(\Omega)$ stand for the fractional Sobolev spaces, when $s\in {\mathbb R}_+\setminus{\mathbb N}$ and $1\leq p < \infty$. These spaces are also called Slobodeckij spaces or Gagliardo spaces. If $1\leq p<\infty$ and $s=k+\sigma$ with $k\in {\mathbb N}_0$, $0<\sigma<1$, then they are defined by
\[
W^{s,p}(\Omega)\vcentcolon =\{\,u\in W^{k,p}(\Omega)\,;\, [\partial^{\alpha} u]_{W^{\sigma,p}(\Omega)}<\infty\quad \forall |\alpha|=k\, \},
\]
where
\[
[u]_{W^{\sigma,p}(\Omega)}\vcentcolon =\left(\int_{\Omega}\int_{\Omega}\frac{|u(x)-u(y)|^p}{|x-y|^{n+\sigma p}}\,dxdy\right)^{1/p}
\]
is the so-called Gagliardo seminorm. The Slobodeckij spaces are naturally endowed with the norm
\[
\|u\|_{W^{s,p}(\Omega)}\vcentcolon =\left(\|u\|_{W^{k,p}(\Omega)}^p+\sum_{|\alpha|=k}[\partial^{\alpha}u]_{W^{\sigma,p}(\Omega)}^p\right)^{1/p}.
\]
We define the Bessel potential space $H^{s,p}({\mathbb R}^n)$ for $1\leq p<\infty$, $s\in{\mathbb R}$ by
\begin{equation}
\label{eq: Bessel pot spaces}
H^{s,p}({\mathbb R}^n) \vcentcolon = \{ u \in \mathscr{S}^{\prime}({\mathbb R}^n)\,;\, \vev{D}^su \in L^p({\mathbb R}^n)\},
\end{equation}
which we endow with the norm $\norm{u}_{H^{s,p}({\mathbb R}^n)} \vcentcolon = \norm{\vev{D}^su}_{L^p({\mathbb R}^n)}$. Here $\mathscr{S}^{\prime}({\mathbb R}^n)$ denotes the space of tempered distributions, which is the dual of the space of Schwartz functions $\mathscr{S}({\mathbb R}^n)$, and $\langle D\rangle^s$ is the Fourier multiplier with symbol $\langle\xi\rangle^s=(1+|\xi|^2)^{s/2}$. In the special case $p=2$ and $0<s<1$, the spaces $H^{s,2}({\mathbb R}^n)$ and $W^{s,2}({\mathbb R}^n)$ coincide and they are commonly denoted by $H^s({\mathbb R}^n)$.
More concretely, the Gagliardo seminorm $[\,\cdot\,]_{H^s({\mathbb R}^n)}$ and $\|\cdot\|_{\dot{H}^s({\mathbb R}^n)}$ are equivalent on $H^s({\mathbb R}^n)$ (cf.~\cite[Proposition~3.4]{DINEPV-hitchhiker-sobolev}).
Throughout this article, we will assume that $0<s<\min(1,n/2)$ such that $H^s({\mathbb R}^n)\hookrightarrow L^{2^*}({\mathbb R}^n)$, where $2^*$ is the critical Sobolev exponent given by $2^*=\frac{2n}{n-2s}$.
If $\Omega\subset {\mathbb R}^n$, $F\subset{\mathbb R}^n$ are given open and closed sets, then we define the following local Bessel potential spaces:
\begin{equation}\label{eq: local bessel pot spaces}
\begin{split}
\widetilde{H}^{s,p}(\Omega) &\vcentcolon = \mbox{closure of } C_c^\infty(\Omega) \mbox{ in } H^{s,p}({\mathbb R}^n),\\
H^{s,p}_F({\mathbb R}^n) &\vcentcolon = \{ u\in H^{s,p}({\mathbb R}^n)\,;\, \mathrm{supp}(u)\subset F \}.
\end{split}
\end{equation}
We close this section by introducing the notion of domains bounded in one direction and recalling the related fractional Poincar\'e inequalities. We say that an open set $\Omega_\infty \subset{\mathbb R}^n$ of the form $\Omega_\infty={\mathbb R}^{n-k}\times \omega$, where $n\geq k\geq 1$ and $\omega \subset {\mathbb R}^k$ is a bounded open set, is a \emph{cylindrical domain}. An open set $\Omega \subset {\mathbb R}^n$ is called \emph{bounded in one direction} if there exists a cylindrical domain $\Omega_\infty \subset {\mathbb R}^n$ and a rigid Euclidean motion $A(x) = Lx + x_0$, where $L$ is a linear isometry and $x_0 \in {\mathbb R}^n$, such that $\Omega \subset A\Omega_\infty$. Fractional Poincaré inequalities in Bessel potential spaces on domains bounded in one direction were recently studied in \cite{RZ2022unboundedFracCald}. In this article a $L^p$ generalization of the following result is established:
\begin{theorem}[{Poincar\'e inequality, \cite[Theorem~2.2]{RZ2022unboundedFracCald}}]
\label{thm: Poinc Unbounded Doms} Let $\Omega\subset{\mathbb R}^n$ be an open set that is bounded in one direction and $0<s<1$. Then there exists $C(n,s,\Omega)>0$ such that
\begin{equation}
\label{eq: poincare on L1}
\|u\|^2_{L^2({\mathbb R}^n)}\leq C[u]_{H^s({\mathbb R}^n)}^2
\end{equation}
for all $u\in \widetilde{H}^{s}(\Omega)$.
\end{theorem}
\begin{remark}
Let us note that in \cite[Theorem~2.2]{RZ2022unboundedFracCald} the right-hand side of \eqref{eq: poincare on L1} is actually stated with the seminorm $\|u\|_{\dot{H}^s({\mathbb R}^n)}=\|(-\Delta)^{s/2}u\|_{L^{2}({\mathbb R}^n)}$, but as already noted these two expressions are equivalent for $H^s({\mathbb R}^n)$ functions.
\end{remark}
\subsection{Sobolev multiplier}
In this section we briefly introduce the Sobolev multipliers between Bessel potential spaces; for more details we refer to the book \cite{MS-theory-of-sobolev-multipliers} of Maz'ya and Shaposhnikova.
Let $s,t\in{\mathbb R}$. If $f\in \distr({\mathbb R}^n)$ is a distribution, we say that $f\in M(H^s\rightarrow H^t)$ whenever the norm
\begin{equation}
\|f\|_{s,t} \vcentcolon = \sup \{\abs{\ip{f}{u v}} \,;\, u,v \in C_c^\infty(\mathbb R^n), \norm{u}_{H^s({\mathbb R}^n)} = \norm{v}_{H^{-t}({\mathbb R}^n)} =1 \}
\end{equation}
is finite.
In the special case $t=-s$, we write $\|\cdot\|_s$ instead of $\|\cdot\|_{s,-s}$. Note that for any $f\in M(H^s\rightarrow H^t)$ and $u,v \in C_c^\infty(\mathbb R^n)$, we have the multiplier estimate
\begin{equation}
\label{multiplier-estimate}
\abs{\ip{f}{uv}} \leq \|f\|_{s,t}\norm{u}_{H^s({\mathbb R}^n)} \norm{v}_{H^{-t}({\mathbb R}^n)}.
\end{equation}
By a density argument one easily sees that there is a unique linear multiplication map $m_f \colon H^s({\mathbb R}^n) \to H^t({\mathbb R}^n)$, $u\mapsto m_f(u)$. To simplify the notation we will write $fu$ instead of $m_f(u)$.
Finally, we define certain subclasses of Sobolev multipliers from $H^s({\mathbb R}^n)$ to $H^{-s}({\mathbb R}^n)$.
For all $\delta > 0$ and $0<s<1$, we define the following convex sets
\begin{equation}
\label{eq: special classes of Sobolev multipliers}
\begin{split}
M_{\delta}(H^s \to H^{-s}) &\vcentcolon = \{\,q \in M(H^s \to H^{-s}) \,;\,\|q\|_{s} < \delta\,\},\\
M_{+}(H^s \to H^{-s})& \vcentcolon =M(H^s\to H^{-s})\cap \distr_+({\mathbb R}^n),\\
M_{\delta,+}(H^s\to H^{-s})&\vcentcolon =M_{\delta}(H^s\to H^{-s})+M_{+}(H^s\to H^{-s}),
\end{split}
\end{equation}
where $\distr_+({\mathbb R}^n)$ denotes the non-negative distributions.
Note that by definition of the multiplication map $u\mapsto fu$ one has $\langle qu,u\rangle \geq 0$ for all $u\in H^s({\mathbb R}^n)$, whenever $q\in M_{+}(H^s \to H^{-s})$.
\section{Well-posedness and DN map of forward problem}
\label{subsec: forward}
We start in Section~\ref{sec: basics} by recalling basic properties of the operator $L_{\gamma}^s$, like the fractional Liouville reduction, and then in Section~\ref{subsec: well-posedness and DN maps} we establish well-posedness results for the nonlocal optical tomography equation and the related fractional Schr\"odinger equation as well as introduce the associated DN maps.
\subsection{Basics on the fractional conductivity operator $L_{\gamma}^s$}
\label{sec: basics}
In this section, we recall several results related to the operator $L_{\gamma}^s$.
First, for any uniformly elliptic coefficient $\gamma\in L^{\infty}({\mathbb R}^n)$ and $0<s<1$, the operator $L_{\gamma}^s$ is weakly defined via the bilinear map $B_{\gamma}\colon H^s({\mathbb R}^n)\times H^s({\mathbb R}^n)\to{\mathbb R}$ with
\begin{equation}
\label{eq: bilinear frac cond eq}
B_{\gamma}(u, v)
\vcentcolon =\frac{C_{n,s}}{2}\int_{{\mathbb R}^{2n}}\gamma^{1/2}(x)\gamma^{1/2}(y)\frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{n+2s}}\,dxdy
\end{equation}
for all $u,v\in H^s({\mathbb R}^n)$. Similarly, if $q\in M(H^s\to H^{-s})$, the bilinear map $B_q\colon H^s({\mathbb R}^n)\times H^s({\mathbb R}^n)\to {\mathbb R}$ representing the weak form of the fractional Schr\"odinger operator $(-\Delta)^s+q$ is defined via
\begin{equation}
\label{eq: schroedinger operator}
B_{q}(u,v)\vcentcolon = \langle (-\Delta)^{s/2}u,(-\Delta)^{s/2}v\rangle_{L^2({\mathbb R}^n)}+\langle qu,v\rangle
\end{equation}
for all $u,v\in H^s({\mathbb R}^n)$.
In \cite[Section~3]{RZ2022LowReg}, we showed that if the background deviation $m_{\gamma}=\gamma^{1/2}-1$ belongs to $H^{s,n/s}({\mathbb R}^n)$, then the \emph{fractional Liouville reduction} is still valid, which was first established in \cite{RZ2022unboundedFracCald} for conductivities having background deviation in $H^{2s,n/2s}({\mathbb R}^n)$ and hence $(-\Delta)^sm_{\gamma}\in L^{n/2s}({\mathbb R}^n)$. More precisely, we established the following results:
\begin{lemma}[{Fractional Liouville reduction}]
\label{Lemma: fractional Liouville reduction}
Let $0<s<\min(1,n/2)$, suppose $\Omega\subset{\mathbb R}^n$ is an open set and assume that the background deviation $m_{\gamma}=\gamma^{1/2}-1$ of the uniformly elliptic conductivity $\gamma\in L^{\infty}({\mathbb R}^n)$ belongs to $H^{s,n/s}({\mathbb R}^n)$. Then the following assertions hold:
\begin{enumerate}[(i)]
\item\label{estimate mathcal M} If $\mathcal{M}=m_{\gamma}$ or $\frac{m_{\gamma}}{m_{\gamma}+1}$, then $\mathcal{M}\in L^{\infty}({\mathbb R}^n)\cap H^{s,n/s}({\mathbb R}^n)$ and one has the estimate
\begin{equation}
\label{eq: continuity estimate background}
\begin{split}
\|\mathcal{M}v\|_{H^s({\mathbb R}^n)}\leq C(\|\mathcal{M}\|_{L^{\infty}({\mathbb R}^n)}+\|\mathcal{M}\|_{H^{s,n/s}({\mathbb R}^n)})\|v\|_{H^s({\mathbb R}^n)}
\end{split}
\end{equation}
for all $v\in H^s({\mathbb R}^n)$ and some $C>0$. Moreover, if $u\in \widetilde{H}^s(\Omega)$, then there holds $\gamma^{\pm 1/2}u\in \widetilde{H}^s(\Omega)$.
\item\label{potential frac Liouville reduction} The distribution $q_{\gamma}=-\frac{(-\Delta)^sm_{\gamma}}{\gamma^{1/2}}$, defined by
\[
\langle q_{\gamma},\varphi\rangle\vcentcolon =-\langle (-\Delta)^{s/2}m_{\gamma},(-\Delta)^{s/2}(\gamma^{-1/2}\varphi)\rangle_{L^2({\mathbb R}^n)}
\]
for all $\varphi\in C_c^{\infty}({\mathbb R}^n)$, belongs to $M(H^s\rightarrow H^{-s})$. Moreover, for all $u,\varphi \in H^s({\mathbb R}^n)$, we have
\begin{equation}
\langle q_{\gamma}u,\varphi\rangle = -\langle (-\Delta)^{s/2}m_{\gamma},(-\Delta)^{s/2}(\gamma^{-1/2}u\varphi)\rangle_{L^2({\mathbb R}^n)}
\end{equation}
satisfying the estimate
\begin{equation}
\label{eq: bilinear estimate}
\begin{split}
|\langle q_{\gamma}u,\varphi\rangle|&\leq C(1+\|m_{\gamma}\|_{L^{\infty}({\mathbb R}^n)}+\|m_{\gamma}\|_{H^{s,n/s}({\mathbb R}^n)})
\\ &\quad\quad \cdot \|m_{\gamma}\|_{H^{s,n/s}({\mathbb R}^n)}\|u\|_{H^s({\mathbb R}^n)}\|\varphi\|_{H^s({\mathbb R}^n)}.
\end{split}
\end{equation}
\item\label{liouville red identity} There holds $B_{\gamma}(u,\varphi)=B_{q_{\gamma}}(\gamma^{1/2}u,\gamma^{1/2}\varphi)$ for all $u,\varphi\in H^s({\mathbb R}^n)$, where $B_{q_{\gamma}}\colon H^s({\mathbb R}^n)\times H^s({\mathbb R}^n)\to {\mathbb R}$ is defined via \eqref{eq: schroedinger operator}.
\end{enumerate}
\end{lemma}
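The identity in \ref{liouville red identity} has an exact finite-dimensional analogue (a discrete ground state representation): for any finite set of points, any symmetric non-negative kernel and any positive weight $\gamma^{1/2}$, the corresponding discrete forms satisfy the same relation. The following short sketch, with ad hoc grid, kernel truncation and coefficients, illustrates this; it is included for intuition only and is not used in the sequel.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, s = 40, 0.4
x = np.linspace(-2, 2, N)
K = np.abs(x[:, None] - x[None, :] + np.eye(N))**(-(1 + 2*s))*(1 - np.eye(N))
g = 1 + 0.5*np.exp(-x**2)                 # gamma^{1/2}, uniformly elliptic
u, v = rng.standard_normal(N), rng.standard_normal(N)
w, z, m = g*u, g*v, g - 1
B_gamma = 0.5*np.sum(K*np.outer(g, g)
                     *np.subtract.outer(u, u)*np.subtract.outer(v, v))
E_wz    = 0.5*np.sum(K*np.subtract.outer(w, w)*np.subtract.outer(z, z))
Lm = K.sum(axis=1)*m - K @ m              # (Lm)_i = sum_j K_ij (m_i - m_j)
q  = -Lm/g                                # discrete analogue of q_gamma
print(B_gamma, E_wz + np.sum(q*w*z))      # the two numbers coincide up to rounding
\end{verbatim}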
\subsection{Well-posedness results and DN maps}
\label{subsec: well-posedness and DN maps}
First, let us introduce for a given uniformly elliptic function $\gamma\in L^{\infty}({\mathbb R}^n)$ and a potential $q\in M(H^s\to H^{-s})$ the bilinear map $B_{\gamma,q}\colon H^s({\mathbb R}^n)\times H^s({\mathbb R}^n)\to {\mathbb R}$ representing the weak form of the nonlocal optical tomography operator $L_{\gamma}^s+q$ via
\begin{equation}
B_{\gamma,q}(u,v)=B_{\gamma}(u,v)+\langle qu,v\rangle
\end{equation}
for all $u,v\in H^s({\mathbb R}^n)$. As usual we say that a function $u\in H^s({\mathbb R}^n)$ solves the Dirichlet problem
\begin{equation}
\begin{split}
L_{\gamma}^s u+qu&= F\quad\text{in}\quad\Omega,\\
u&= f\quad\text{in}\quad\Omega_e
\end{split}
\end{equation}
for a given function $f\in H^s({\mathbb R}^n)$ and $F\in (\widetilde{H}^s(\Omega))^*$ if there holds
\[
B_{\gamma,q}(u,\varphi)=\langle F,\varphi\rangle \enspace\text{for all}\enspace \varphi\in \widetilde{H}^s(\Omega)
\]
and $u-f\in \widetilde{H}^s(\Omega)$. We have the following well-posedness result for the nonlocal optical tomography equation.
\begin{theorem}[Well-posedness and DN map for nonlocal optical tomography equation]
\label{thm: well-posedness opt tom eq}
Let $\Omega\subset {\mathbb R}^n$ be an open set which is bounded in one direction and $0<s<1$. Moreover, assume that the uniformly elliptic diffusion $\gamma\in L^{\infty}({\mathbb R}^n)$ is bounded from below by $\gamma_0>0$ and the potential $q$ belongs to $M_{\gamma_0/\delta_0,+}(H^s\to H^{-s})$.
Then the following assertions hold:
\begin{enumerate}[(i)]
\item\label{item 1 well-posedness opt tom eq} For all $f\in X\vcentcolon = H^s({\mathbb R}^n)/\widetilde{H}^s(\Omega)$ there is a unique weak solution $u_f\in H^s({\mathbb R}^n)$ of the nonlocal optical tomography equation
\begin{equation}
\label{eq: nonlocal opt tomography problem}
\begin{split}
L_{\gamma}^su+qu&= 0\quad\text{in}\quad\Omega,\\
u&= f\quad\text{in}\quad\Omega_e.
\end{split}
\end{equation}
\item\label{item 2 DN map opt tom eq} The exterior DN map $\Lambda_{\gamma,q}\colon X\to X^*$ given by
\begin{equation}
\label{eq: DN map opt tom eq}
\begin{split}
\langle \Lambda_{\gamma,q}f,g\rangle \vcentcolon =B_{\gamma,q}(u_f,g),
\end{split}
\end{equation}
where $u_f\in H^s({\mathbb R}^n)$ is the unique solution to \eqref{eq: nonlocal opt tomography problem} with exterior value $f$, is a well-defined bounded linear map.
\end{enumerate}
\end{theorem}
\begin{remark}
In the above theorem and everywhere else in this article, we write $f$ instead of $[f]$ for elements of the trace space $X$. Let us note that on the right hand side of the formula \eqref{eq: DN map opt tom eq}, the function $g$ can be any representative of its equivalence class $[g]$.
\end{remark}
\begin{proof}
\ref{item 1 well-posedness opt tom eq}: First, let us note that the bilinear form $B_{\gamma}$ is continuous on $H^s({\mathbb R}^n)$ and that, by the multiplier estimate \eqref{multiplier-estimate}, any Sobolev multiplier $q\in M(H^s\to H^{-s})$ induces a continuous bilinear form on $H^s({\mathbb R}^n)$. Hence, $B_{\gamma,q}\colon H^s({\mathbb R}^n)\times H^s({\mathbb R}^n)\to {\mathbb R}$ is continuous. Moreover, as $q\in M_{\gamma_0/\delta_0,+}(H^s\to H^{-s})$ we may decompose $q$ as $q=q_1+q_2$, where $q_1\in M_{\gamma_0/\delta_0}(H^s\to H^{-s})$ and $q_2\in M_+(H^s\to H^{-s})$. Therefore, we can calculate
\begin{equation}
\label{eq: coercivity estimate potential q}
\begin{split}
B_{\gamma,q}(u,u)&\geq \gamma_0[u]_{H^s({\mathbb R}^n)}^2+\langle q_1u,u\rangle+\langle q_2u,u\rangle\\
&\geq \frac{\gamma_0}{2}\left([u]_{H^s({\mathbb R}^n)}^2+C_{opt}^{-1}\|u\|_{L^2({\mathbb R}^n)}^2\right)-|\langle q_1u,u\rangle|\\
&\geq \frac{\gamma_0}{2\max(1,C_{opt})}\|u\|_{H^s({\mathbb R}^n)}^2-\|q_1\|_{s}\|u\|_{H^s({\mathbb R}^n)}^2\\
&\geq (\gamma_0/\delta_0-\|q_1\|_{s})\|u\|_{H^s({\mathbb R}^n)}^2=\alpha\|u\|_{H^s({\mathbb R}^n)}^2
\end{split}
\end{equation}
for any $u\in \widetilde{H}^s(\Omega)$, where we used the (optimal) fractional Poincar\'e inequality (see~Theorem~\ref{thm: Poinc Unbounded Doms} and eq.~\eqref{eq: optimal fractional Poincare constant}). Using the fact that $q_1\in M_{\gamma_0/\delta_0}(H^s\to H^{-s})$, we deduce $\alpha>0$ and hence the bilinear form $B_{\gamma,q}$ is coercive over $\widetilde{H}^s(\Omega)$.
Next note that for given $f\in H^s({\mathbb R}^n)$, the function $u\in H^s({\mathbb R}^n)$ solves \eqref{eq: nonlocal opt tomography problem} if and only if $v=u-f\in H^s({\mathbb R}^n)$ solves
\begin{equation}
\label{eq: hom nonlocal opt tomography problem}
\begin{split}
L_{\gamma}^sv+qv&= F\quad\text{in}\quad\Omega,\\
v&= 0\quad\text{in}\quad\Omega_e
\end{split}
\end{equation}
with $F=-(L_{\gamma}^sf+qf)\in (\widetilde{H}^s(\Omega))^*$. Now since $B_{\gamma,q}$ is a continuous, coercive bilinear form, the Lax--Milgram theorem implies that \eqref{eq: hom nonlocal opt tomography problem} has a unique solution $v\in \widetilde{H}^s(\Omega)$ and so the same holds for \eqref{eq: nonlocal opt tomography problem}. Next, we show that if $f_1,f_2\in H^s({\mathbb R}^n)$ satisfy $f_1-f_2\in\widetilde{H}^s(\Omega)$ then $u_{f_1}=u_{f_2}$ in ${\mathbb R}^n$, where $u_{f_j}\in H^s({\mathbb R}^n)$, $j=1,2$, is the unique solution to \eqref{eq: nonlocal opt tomography problem} with exterior value $f_j$. Define $v=u_{f_1}-u_{f_2}\in \widetilde{H}^s(\Omega)$. Then $v$ solves
\begin{equation}
\label{eq: uniqueness trace space}
\begin{split}
L_{\gamma}^sv+qv&= 0\quad\text{in}\quad\Omega,\\
v&= 0\quad\text{in}\quad\Omega_e.
\end{split}
\end{equation}
By testing \eqref{eq: uniqueness trace space} with $v$ and using the coercivity of $B_{\gamma,q}$ over $\widetilde{H}^s(\Omega)$, it follows that $v=0$ in ${\mathbb R}^n$. Hence, for any $f\in X$, there is a unique solution $u_f\in H^s({\mathbb R}^n)$.
\noindent \ref{item 2 DN map opt tom eq}: For any $f\in X$, let us define $\Lambda_{\gamma,q}f$ via the formula \eqref{eq: DN map opt tom eq}, where $g\in H^s({\mathbb R}^n)$ is any representative of the related equivalence class in $X$. First, we verify that this map is well-defined. If $h\in H^s({\mathbb R}^n)$ is any other representative, that is $g-h\in \widetilde{H}^s(\Omega)$, then since $u_f$ solves \eqref{eq: nonlocal opt tomography problem} we have
\[
B_{\gamma,q}(u_f,g)=B_{\gamma,q}(u_f,g-h)+B_{\gamma,q}(u_f,h)=B_{\gamma,q}(u_f,h)
\]
and so the expression for $\langle \Lambda_{\gamma,q}f,g\rangle$ is unambiguous. By the continuity of the bilinear form $B_{\gamma,q}$ it is easily seen that $\Lambda_{\gamma,q}f\in X^*$ for any $f\in X$.
\end{proof}
\begin{theorem}[Well-posedness and DN map for fractional Schr\"odinger equation]
\label{thm: well-posedness for fractional Schrödinger type equation}
Let $\Omega\subset {\mathbb R}^n$ be an open set which is bounded in one direction and $0<s<\min(1,n/2)$. Moreover, assume that the uniformly elliptic diffusion $\gamma\in L^{\infty}({\mathbb R}^n)$ with lower bound $\gamma_0>0$ satisfies $m_{\gamma}\in H^{s,n/s}({\mathbb R}^n)$ and the potential $q$ belongs to $M_{\gamma_0/\delta_0,+}(H^s\to H^{-s})$. Then the following assertions hold:
\begin{enumerate}[(i)]
\item\label{item 1 well-posedness schrödinger} The distribution $Q_{\gamma,q}$ defined by
\begin{equation}
\label{eq: reduced potential}
Q_{\gamma,q}=-\frac{(-\Delta)^sm_{\gamma}}{\gamma^{1/2}}+\frac{q}{\gamma}
\end{equation}
belongs to $M(H^s\to H^{-s})$.
\item\label{item 2 well-posedness schrödinger} If $u\in H^s({\mathbb R}^n)$, $f\in X$ and $v\vcentcolon =\gamma^{1/2}u,\,g\vcentcolon =\gamma^{1/2}f$, then $v\in H^s({\mathbb R}^n), g\in X$ and $u$ is a solution of
\begin{equation}
\label{eq: NOTE well-posedness schrödinger}
\begin{split}
L_{\gamma}^su+qu&= 0\quad\text{in}\quad\Omega,\\
u&= f\quad\text{in}\quad\Omega_e
\end{split}
\end{equation}
if and only if $v$ is a weak solution of the fractional Schr\"odinger equation
\begin{equation}
\label{eq: FSE well-posedness schrödinger}
\begin{split}
((-\Delta)^s+Q_{\gamma,q})v&=0\quad\text{in}\quad\Omega,\\
v&=g\quad\text{in}\quad\Omega_e.
\end{split}
\end{equation}
\item\label{item 3 well-posedness schrödinger} Conversely, if $v\in H^s({\mathbb R}^n), g\in X$ and $u\vcentcolon =\gamma^{-1/2}v,\,f\vcentcolon =\gamma^{-1/2}g$, then $v$ is a weak solution of \eqref{eq: FSE well-posedness schrödinger} if and only if $u$ is a weak solution of \eqref{eq: NOTE well-posedness schrödinger}.
\item\label{item 4 well-posedness schrödinger} For all $g\in X$ there is a unique weak solution $v_g\in H^s({\mathbb R}^n)$ of the fractional Schr\"odinger equation
\begin{equation}
\label{eq: well-posedness fractional Schrödinger equation}
\begin{split}
((-\Delta)^s+Q_{\gamma,q})v&=0\quad\text{in}\quad\Omega,\\
v&=g\quad\text{in}\quad\Omega_e.
\end{split}
\end{equation}
\item\label{item 5 well-posedness schrödinger} The exterior DN map $\Lambda_{Q_{\gamma,q}}\colon X\to X^*$ given by
\begin{equation}
\label{eq: well-defined DN map Schrödinger}
\begin{split}
\langle \Lambda_{Q_{\gamma,q}}f,g\rangle\vcentcolon =B_{Q_{\gamma,q}}(v_f,g),
\end{split}
\end{equation}
where $v_f\in H^s({\mathbb R}^n)$ is the unique solution to \eqref{eq: FSE well-posedness schrödinger} with exterior value $f$, is a well-defined bounded linear map.
\end{enumerate}
\end{theorem}
\begin{proof}
\ref{item 1 well-posedness schrödinger}: Since $q\in M(H^s\to H^{-s})$, we can estimate
\begin{equation}
\label{eq: continuity estimate potential}
\begin{split}
|\langle q/\gamma u,v\rangle|&=|\langle q (\gamma^{-1/2}u),\gamma^{-1/2}v\rangle|\\
&\leq \|q\|_s\|\gamma^{-1/2}u\|_{H^s({\mathbb R}^n)}\|\gamma^{-1/2}v\|_{H^s({\mathbb R}^n)}\\
&\leq \|q\|_s\|(1-\frac{m}{m+1})u\|_{H^s({\mathbb R}^n)}\|(1-\frac{m}{m+1})v\|_{H^s({\mathbb R}^n)}\\
&\leq C\|q\|_s\|u\|_{H^s({\mathbb R}^n)}\|v\|_{H^s({\mathbb R}^n)}
\end{split}
\end{equation}
for all $u,v\in H^s({\mathbb R}^n)$, where we used that the assertion \ref{estimate mathcal M} of Lemma~\ref{Lemma: fractional Liouville reduction} implies $\gamma^{-1/2}w\in H^s({\mathbb R}^n)$ for all $w\in H^s({\mathbb R}^n)$ with $\|\frac{m}{m+1}w\|_{H^s({\mathbb R}^n)}\leq C\|w\|_{H^s({\mathbb R}^n)}$ for some constant $C>0$ only depending polynomially on the $L^{\infty}$ and $H^{s,n/s}$ norms of $\frac{m}{m+1}$. Now, the estimate \eqref{eq: continuity estimate potential} can be used to see that $q/\gamma$ is a distribution and belongs to $M(H^s\to H^{-s})$. On the other hand, by the statement \ref{potential frac Liouville reduction} of Lemma~\ref{Lemma: fractional Liouville reduction} we know that $q_{\gamma}=\frac{(-\Delta)^sm_{\gamma}}{\gamma^{1/2}}\in M(H^s\to H^{-s})$. This in turn implies $Q_{\gamma,q}\in M(H^s\to H^{-s})$.
\noindent\ref{item 2 well-posedness schrödinger}: The assertions $v\in H^s({\mathbb R}^n)$, $g\in X$ and $u-f\in \widetilde{H}^s(\Omega)$ if and only if $v-g\in \widetilde{H}^s(\Omega)$ are direct consequences of the property \ref{estimate mathcal M} of Lemma~\ref{Lemma: fractional Liouville reduction}. Furthermore, the fact that $u$ solves $L_{\gamma}^su+qu=0$ in $\Omega$ if and only if $v$ solves $(-\Delta)^sv+Q_{\gamma,q}v=0$ in $\Omega$ follows by the definition of $Q_{\gamma,q}$, \ref{liouville red identity} and \ref{estimate mathcal M} of Lemma~\ref{Lemma: fractional Liouville reduction}.
\noindent\ref{item 3 well-posedness schrödinger}: The proof of this fact is essentially the same as for \ref{item 2 well-posedness schrödinger} and therefore we omit it.
\noindent\ref{item 4 well-posedness schrödinger}: By \ref{item 3 well-posedness schrödinger}, we know that $v\in H^s({\mathbb R}^n)$ solves \eqref{eq: well-posedness fractional Schrödinger equation} if and only if $u$ solves \eqref{eq: NOTE well-posedness schrödinger} with exterior value $f=\gamma^{-1/2}g$. The latter Dirichlet problem is well-posed by Theorem~\ref{thm: well-posedness opt tom eq} and hence it follows from \ref{item 2 well-posedness schrödinger} and \ref{item 3 well-posedness schrödinger} that the unique solution of \eqref{eq: well-posedness fractional Schrödinger equation} is given by $v_g=\gamma^{1/2}u_{\gamma^{-1/2}g}\in H^s({\mathbb R}^n)$.
\noindent\ref{item 5 well-posedness schrödinger}: The fact that $\Lambda_{Q_{\gamma,q}}$ defined via formula \eqref{eq: well-defined DN map Schrödinger} is well-defined follows from the properties \ref{item 4 well-posedness schrödinger}, \ref{item 1 well-posedness schrödinger} and the same calculation as in the proof of Theorem~\ref{thm: well-posedness opt tom eq}, \ref{item 2 DN map opt tom eq}.
\end{proof}
\begin{remark}
\label{remark: interior source problem}
Let us note that essentially the same proofs as in Theorems~\ref{thm: well-posedness opt tom eq} and \ref{thm: well-posedness for fractional Schrödinger type equation} can be used to show that the problems
\[
\begin{split}
L_{\gamma}^su+qu&= F\quad\text{in}\quad\Omega,\\
u&= u_0\quad\text{in}\quad\Omega_e
\end{split}
\]
and
\[
\begin{split}
((-\Delta)^s+Q_{\gamma,q})v&=G\quad\text{in}\quad\Omega,\\
v&=v_0\quad\text{in}\quad\Omega_e
\end{split}
\]
are well-posed for all $u_0,v_0\in H^s({\mathbb R}^n)$ and $F,G\in(\widetilde{H}^s(\Omega))^*$.
\end{remark}
\section{Inverse problem}
\label{sec: inverse problem}
In Section~\ref{sec: uniquness} we first prove Theorem~\ref{main thm} and hence provide an answer to Question~\ref{question uniqueness}. We establish this result in four steps. First, in Section~\ref{subsec: exterior reconstruction} we extend the exterior determination result of the fractional conductivity equation to the nonlocal tomography equation (Theorem~\ref{thm: exterior reconstruction}). Then in Lemma~\ref{lemma: relation of sols} we show that $\gamma_1^{1/2}u_f^{(1)}$ and $\gamma_2^{1/2}u_f^{(2)}$ coincide in ${\mathbb R}^n$ whenever $\gamma_1=\gamma_2$ and $q_1=q_2$ hold in the measurement set and the two coefficient pairs generate the same DN data. These two preparatory steps then allow us to prove that the diffusion coefficients are the same in ${\mathbb R}^n$ (Section~\ref{subsec: determination of diffusion coeff}) and to conclude that in that case also the absorption coefficients are necessarily identical (Section~\ref{subsubsec: equality of q}). Then in Section~\ref{sec: non-uniqueness}, we provide an answer to Question~\ref{question non-uniqueness}. Following a similar strategy as in \cite{RZ2022LowReg}, we first derive a characterization of uniqueness in the inverse problem for the nonlocal optical tomography equation and then use this to construct counterexamples to uniqueness when the potentials are non-equal in the measurement set (see~Theorem~\ref{thm: non uniqueness}).
\subsection{Uniqueness}
\label{sec: uniquness}
\subsubsection{Exterior reconstruction formula}
\label{subsec: exterior reconstruction}
The main result of this section is the following reconstruction formula in the exterior.
\begin{theorem}[Exterior reconstruction formula]
\label{thm: exterior reconstruction}
Let $\Omega\subset {\mathbb R}^n$ be an open set which is bounded in one direction, $W\subset\Omega_e$ a measurement set and $0<s<\min(1,n/2)$. Assume that the uniformly elliptic diffusion $\gamma\in L^{\infty}({\mathbb R}^n)$, which is bounded from below by $\gamma_0>0$, and the potential $q\in M_{\gamma_0/\delta_0,+}(H^s\to H^{-s})$ satisfy the following additional properties
\begin{enumerate}[(i)]
\item\label{prop 1 reconstruction} $\gamma$ is a.e. continuous in $W$
\item\label{prop 2 reconstruction} and $q\in L^p_{loc}(W)$ for some $\frac{n}{2s}<p\leq \infty$.
\end{enumerate}
Then for a.e. $x_0\in W$ there exists a sequence $(\Phi_N)_{N\in{\mathbb N}}\subset C_c^{\infty}(W)$ such that
\begin{equation}
\label{eq: reconstruction formula}
\gamma(x_0)=\lim_{N\to\infty}\langle \Lambda_{\gamma,q}\Phi_N,\Phi_N\rangle.
\end{equation}
\end{theorem}
Before giving the proof of this result, we prove the following interpolation estimate:
\begin{lemma}[Interpolation estimate for the potential term]\label{lem: cont potential term}
Let $0 < s < \min(1,n/2)$ and assume $W\subset{\mathbb R}^n$ is a non-empty open set. If $q\in M(H^s\to H^{-s})\cap L^p_{loc}(W)$ for some $\frac{n}{2s} < p \le \infty$, then for any $V\Subset W$ the following estimate holds
\begin{equation}
\label{eq: potential goes to zero estimate}
|\langle qu,v\rangle|\leq C \|u\|^{1-\theta}_{H^s({\mathbb R}^n)}\|u\|_{L^2(V)}^{\theta} \|v\|_{H^s({\mathbb R}^n)}
\end{equation}
for all $u,v\in C_c^{\infty}(V)$ and some $C>0$, where $\theta\in (0,1]$ is given by
\begin{equation}
\theta=\begin{cases}
2-\frac{n}{sp},&\enspace\text{if}\enspace \frac{n}{2s}<p\leq \frac{n}{s}, \\
1,&\enspace\text{otherwise}.
\end{cases}
\end{equation}
\end{lemma}
\begin{proof}
Without loss of generality we can assume that there holds $\frac{n}{2s}<p\leq \frac{n}{s}$. First, by H\"older's inequality and Sobolev's embedding we have
\begin{equation}
\label{eq: first estimate}
|\langle qu,v\rangle|\leq \|qu\|_{L^{\frac{2n}{n+2s}}(V)}\|v\|_{L^{\frac{2n}{n-2s}}(V)}\leq C\|qu\|_{L^{\frac{2n}{n+2s}}(V)}\|v\|_{H^s({\mathbb R}^n)}.
\end{equation}
Next, observe that if $\theta=2-\frac{n}{sp}\in (0,1]$, then there holds
\[
\frac{n+2s}{2n}=\frac{1}{p}+\frac{1-\theta}{\frac{2n}{n-2s}}+\frac{\theta}{2}.
\]
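Indeed, inserting $\theta=2-\frac{n}{sp}$ one checks directly that
\[
\frac{1}{p}+\left(\frac{n}{sp}-1\right)\frac{n-2s}{2n}+\frac{1}{2}\left(2-\frac{n}{sp}\right)
=\frac{1}{p}+\frac{n-2s}{2sp}-\frac{n-2s}{2n}+1-\frac{n}{2sp}
=1-\frac{n-2s}{2n}=\frac{n+2s}{2n},
\]
which is exactly the required H\"older relation.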
Therefore, by interpolation in $L^q$ and Sobolev's embedding we can estimate
\begin{equation}
\label{eq: second estimate}
\begin{split}
\|qu\|_{L^{\frac{2n}{n+2s}}(V)}&\leq \|q\|_{L^p(V)}\|u\|_{L^{\frac{2n}{n-2s}}(V)}^{1-\theta}\|u\|_{L^2(V)}^{\theta}\\
&\leq C\|q\|_{L^p(V)}\|u\|_{H^s({\mathbb R}^n)}^{1-\theta}\|u\|_{L^2(V)}^{\theta}.
\end{split}
\end{equation}
Combining the estimates \eqref{eq: first estimate} and \eqref{eq: second estimate}, we obtain \eqref{eq: potential goes to zero estimate}.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{thm: exterior reconstruction}]
Let $x_0\in W$ be such that $\gamma$ is continuous at $x_0$. By \cite[Theorem~1.1]{KLZ22FracpLap}, there exists a sequence $(\Phi_N)_{N\in{\mathbb N}}\subset C_c^{\infty}(W)$ satisfying the following conditions:
\begin{enumerate}[(i)]
\item\label{support cond} $\supp(\Phi_N)\to \{x_0\}$ as $N\to\infty$,
\item\label{normalization cond} $[\Phi_N]_{H^s({\mathbb R}^n)}=1$ for all $N\in{\mathbb N}$
\item\label{convergence cond} and $\Phi_N\to 0$ in $H^t({\mathbb R}^n)$ as $N\to\infty$ for all $0\leq t<s$.
\end{enumerate}
The last condition implies that $\Phi_N\to 0$ in $L^p({\mathbb R}^n)$ for all $1\leq p <\frac{2n}{n-2s}$ as $N\to\infty$. Next, let $u_N\in H^s({\mathbb R}^n)$ be the unique solution to
\begin{equation}
\begin{split}
L^s_{\gamma}u+qu&= 0\quad\enspace\text{in}\enspace\Omega,\\
u&= \Phi_N\,\enspace\text{in}\enspace\Omega_e.
\end{split}
\end{equation}
By linearity $v_N\vcentcolon = u_N-\Phi_N\in \widetilde{H}^s(\Omega)$ is the unique solution to
\begin{equation}
\label{eq: sol hom ext cond}
\begin{split}
L_{\gamma}^s v+qv&= -B_{\gamma,q}(\Phi_N,\cdot)\enspace\text{in}\enspace\Omega,\\
v&= 0\quad\,\,\,\quad\quad\quad\quad\text{in}\enspace\Omega_e.
\end{split}
\end{equation}
One easily sees that $B_{\gamma,q}(\Phi_N,\cdot)\in (\widetilde{H}^s(\Omega))^*$. Similarly as in \cite[Lemma~3.1]{RZ2022LowReg}, for any $v\in \widetilde{H}^s(\Omega)$ we may calculate
\allowdisplaybreaks
\[
\begin{split}
&|B_{\gamma,q}(\Phi_N,v)|=|B_{\gamma}(\Phi_N,v)|=C\left|\int_{W\times\Omega}\gamma^{1/2}(x)\gamma^{1/2}(y)\frac{\Phi_N(x)v(y)}{|x-y|^{n+2s}}\,dxdy\right|\\
&\quad\leq C\int_{\Omega}\gamma^{1/2}(y)|v(y)|\left(\int_W\frac{\gamma^{1/2}(x)|\Phi_N(x)|}{|x-y|^{n+2s}}\,dx\right)dy\\
&\quad\leq C\|\gamma\|_{L^{\infty}(\Omega\cup W)}\|v\|_{L^2(\Omega)}\left\|\int_W\frac{|\Phi_N(x)|}{|x-y|^{n+2s}}\,dx\right\|_{L^2(\Omega)}\\
&\quad\leq C\|\gamma\|_{L^{\infty}(\Omega\cup W)}\|v\|_{L^2(\Omega)}\int_W|\Phi_N(x)|\left(\int_{\Omega}\frac{dy}{|x-y|^{2(n+2s)}}\right)^{1/2}\,dx\\
&\quad\leq C\|\gamma\|_{L^{\infty}(\Omega\cup W)}\|v\|_{L^2(\Omega)}\int_W|\Phi_N(x)|\left(\int_{(B_r(x))^c}\frac{dy}{|x-y|^{2(n+2s)}}\right)^{1/2}\,dx\\
&\quad\leq \frac{C}{r^{\frac{n+4s}{2}}}\|\gamma\|_{L^{\infty}({\mathbb R}^n)}\|v\|_{L^2(\Omega)}\|\Phi_N\|_{L^1(W)}.
\end{split}
\]
In the above estimates we used that $\gamma\in L^{\infty}({\mathbb R}^n)$ is uniformly elliptic, $\supp(\Phi_N)\subset \supp(\Phi_1)\Subset W$ (see~\ref{support cond}), H\"older's and Minkowski's inequality and set $r\vcentcolon = \dist(\Omega,\supp(\Phi_1))>0$. This implies
\begin{equation}
\label{eq: bounded on norm}
\|B_{\gamma,q}(\Phi_N,\cdot)\|_{(\widetilde{H}^s(\Omega))^*}\leq \frac{C}{r^{\frac{n+4s}{2}}}\|\gamma\|_{L^{\infty}({\mathbb R}^n)}\|\Phi_N\|_{L^1(W)}.
\end{equation}
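Here the power of $r$ simply comes from the elementary tail estimate for the kernel: for every $x\in\supp(\Phi_1)$ one has
\[
\int_{(B_r(x))^c}\frac{dy}{|x-y|^{2(n+2s)}}=\omega_{n-1}\int_r^{\infty}\rho^{-(n+4s)-1}\,d\rho=\frac{\omega_{n-1}}{n+4s}\,r^{-(n+4s)},
\]
where $\omega_{n-1}$ denotes the surface measure of the unit sphere, and taking the square root produces the factor $r^{-\frac{n+4s}{2}}$ in \eqref{eq: bounded on norm}.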
Now, testing equation \eqref{eq: sol hom ext cond} by $v_N\in\widetilde{H}^s(\Omega)$, using the fractional Poincar\'e inequality (see~Theorem~\ref{thm: Poinc Unbounded Doms}), the uniform ellipticity of $\gamma$ and the coercivity estimate \eqref{eq: coercivity estimate potential q}, we get
\[
\begin{split}
\|v_N\|_{H^s({\mathbb R}^n)}^2&\leq C|B_{\gamma,q}(\Phi_N,v_N)|\leq C\|B_{\gamma,q}(\Phi_N,\cdot)\|_{(\widetilde{H}^s(\Omega))^*}\|v_N\|_{H^s({\mathbb R}^n)}\\
&\leq \frac{C}{r^{\frac{n+4s}{2}}}\|\gamma\|_{L^{\infty}({\mathbb R}^n)}\|\Phi_N\|_{L^1(W)}\|v_N\|_{H^s({\mathbb R}^n)},
\end{split}
\]
which in turn implies
\[
\|v_N\|_{H^s({\mathbb R}^n)}\leq \frac{C}{r^{\frac{n+4s}{2}}}\|\gamma\|_{L^{\infty}({\mathbb R}^n)}\|\Phi_N\|_{L^1(W)}.
\]
Recalling that $v_N=u_N-\Phi_N$ and the property \ref{convergence cond} of the sequence $\Phi_N\in C_c^{\infty}(W)$, we deduce
\begin{equation}
\label{eq: conv to zero of diff}
\|u_N-\Phi_N\|_{H^s({\mathbb R}^n)}\to 0\quad\text{as}\quad N\to\infty.
\end{equation}
Let us next define the energy
\[
E_{\gamma,q}(v)\vcentcolon = B_{\gamma,q}(v,v)
\]
for any $v\in H^s({\mathbb R}^n)$. Using the computation in the proof of \cite[Theorem~3.2]{RZ2022LowReg} we have
\begin{equation}
\label{eq: concentration of energy}
\begin{split}
\lim_{N\to\infty}E_{\gamma,q}(\Phi_N)&=\lim_{N\to\infty}B_{\gamma}(\Phi_N,\Phi_N)+\lim_{N\to\infty}\langle q\Phi_N,\Phi_N\rangle_{L^2({\mathbb R}^n)}\\
&=\lim_{N\to\infty}B_{\gamma}(\Phi_N,\Phi_N)=\gamma(x_0)
\end{split}
\end{equation}
where we used Lemma~\ref{lem: cont potential term} and the properties \ref{normalization cond}, \ref{convergence cond} of the sequence $(\Phi_N)_{N\in{\mathbb N}}$ to see that the term involving the potential $q$ vanishes.
On the other hand, we can rewrite the DN map as follows
\[
\begin{split}
\langle \Lambda_{\gamma,q}\Phi_N,\Phi_N\rangle &=B_{\gamma,q}(u_N,\Phi_N)=B_{\gamma,q}(u_N,u_N)\\
&=E_{\gamma,q}(u_N-\Phi_N)+2B_{\gamma,q}(u_N-\Phi_N,\Phi_N)+E_{\gamma,q}(\Phi_N).
\end{split}
\]
Thus, arguing as above for the convergence $E_{\gamma,q}(\Phi_N)\to\gamma(x_0)$, we see that the first two terms on the right hand side vanish in the limit $N\to\infty$ and we can conclude the proof.
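More explicitly, by the continuity of $B_{\gamma,q}$ on $H^s({\mathbb R}^n)$ and \eqref{eq: conv to zero of diff} one has
\[
|E_{\gamma,q}(u_N-\Phi_N)|+2|B_{\gamma,q}(u_N-\Phi_N,\Phi_N)|\leq C\left(\|u_N-\Phi_N\|_{H^s({\mathbb R}^n)}+2\|\Phi_N\|_{H^s({\mathbb R}^n)}\right)\|u_N-\Phi_N\|_{H^s({\mathbb R}^n)}\to 0
\]
as $N\to\infty$, since $\|\Phi_N\|_{H^s({\mathbb R}^n)}$ remains bounded thanks to the properties \ref{normalization cond} and \ref{convergence cond}.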
\end{proof}
\subsubsection{Uniqueness of the diffusion coefficient $\gamma$}
\label{subsec: determination of diffusion coeff}
\begin{lemma}[Relation of solutions]
\label{lemma: relation of sols}
Let $\Omega\subset {\mathbb R}^n$ be an open set which is bounded in one direction, suppose $W_1,W_2\subset\Omega_e$ are two measurement sets and $0<s<\min(1,n/2)$. Assume that the uniformly elliptic diffusions $\gamma_1,\gamma_2\in L^{\infty}({\mathbb R}^n)$ with lower bound $\gamma_0>0$ satisfy $m_{\gamma_1},m_{\gamma_2}\in H^{s,n/s}({\mathbb R}^n)$ and the potentials $q_1,q_2$ belong to $M_{\gamma_0/\delta_0,+}(H^s\to H^{-s})$. If $\gamma_1|_{W_2} = \gamma_2|_{W_2}$ and $\Lambda_{\gamma_1, q_1} f|_{W_2} = \Lambda_{\gamma_2, q_2} f|_{W_2}$ for some $f\in \widetilde{H}^s(W_1)$ with $W_2\setminus \supp(f)\neq \emptyset$, then there holds
\begin{equation}
\gamma_1^{1/2}u_f^{(1)} = \gamma_2^{1/2}u_f^{(2)}\enspace \text{a.e.\@ in }{\mathbb R}^n
\end{equation}
where, for $j = 1, 2$, $u_f^{(j)}\in H^s({\mathbb R}^n)$ is the unique solution of \begin{equation}
\label{eq: NOTE relation solutions}
\begin{split}
L_{\gamma_j}^su+q_ju&= 0\quad\text{in}\quad\Omega,\\
u&= f\quad\text{in}\quad\Omega_e
\end{split}
\end{equation}
(see~Theorem~\ref{thm: well-posedness opt tom eq}).
\end{lemma}
\begin{proof}
First let $\gamma,q$ satisfy the assumptions of Lemma~\ref{lemma: relation of sols} and assume that $f,g\in H^s({\mathbb R}^n)$ have disjoint support. Then there holds
\begin{equation}
\label{eq: disjoint support}
\begin{split}
B_{\gamma,q}(f,g)&=B_{\gamma}(f,g)=C_{n,s}\int_{{\mathbb R}^{2n}}\gamma^{1/2}(x)\gamma^{1/2}(y)\frac{f(x)g(y)}{|x-y|^{n+2s}}\,dxdy\\
&=\langle (-\Delta)^{s/2}(\gamma^{1/2}f),(-\Delta)^{s/2}(\gamma^{1/2}g)\rangle_{L^2({\mathbb R}^n)}.
\end{split}
\end{equation}
Now, let $f\in \widetilde{H}^s(W_1)$ and $u_f^{(j)}\in H^s({\mathbb R}^n)$ for $j=1,2$ be as in the statement of the lemma. Set $V\vcentcolon = W_2\setminus\supp(f)$ and take any $\varphi\in \widetilde{H}^s(V)$. Then we have $\supp(u_f^{(j)})\cap \supp(\varphi)=\emptyset$ and the assumption that the DN maps coincide implies
\[
\begin{split}
B_{\gamma_1,q_1}(u_f^{(1)},\varphi)=\langle \Lambda_{\gamma_1,q_1}f,\varphi\rangle =\langle \Lambda_{\gamma_2,q_2}f,\varphi\rangle=B_{\gamma_2,q_2}(u_f^{(2)},\varphi).
\end{split}
\]
By \eqref{eq: disjoint support} and the assumption $\gamma_1=\gamma_2$ on $W_2$, this is equivalent to
\[
\langle (-\Delta)^{s/2}(\gamma_1^{1/2}u_f^{(1)}-\gamma_2^{1/2}u_f^{(2)}),(-\Delta)^{s/2}(\gamma_1^{1/2}\varphi)\rangle_{L^2({\mathbb R}^n)}=0
\]
for all $\varphi\in \widetilde{H}^s(V)$. By our assumptions on the diffusion coefficients $\gamma_j$ and Lemma~\ref{Lemma: fractional Liouville reduction}, we can replace $\varphi$ by $g=\gamma_1^{-1/2}\varphi$ to obtain
\[
\langle (-\Delta)^{s/2}(\gamma_1^{1/2}u_f^{(1)}-\gamma_2^{1/2}u_f^{(2)}),(-\Delta)^{s/2}g\rangle_{L^2({\mathbb R}^n)}=0
\]
for all $g\in \widetilde{H}^s(V)$. We know that $\gamma_1^{1/2}u_f^{(1)}-\gamma_2^{1/2}u_f^{(2)}=0$ on $V$ as $u_f^{(j)}=0$ on $V$. Therefore, Lemma~\ref{Lemma: fractional Liouville reduction} and the usual UCP for the fractional Laplacian for $H^s$ functions implies $\gamma_1^{1/2}u_f^{(1)}=\gamma_2^{1/2}u_f^{(2)}$ a.e. in ${\mathbb R}^n$.
\end{proof}
\begin{figure}[!ht]
\centering
\begin{tikzpicture}
\filldraw[color=blue!50, fill=blue!15, xshift=11cm, yshift=1.35cm] (3,-2.5) ellipse (0.9 and 0.9);
\node[xshift=13cm, yshift=1.5cm] at (3,-2.5) {$\raisebox{-.35\baselineskip}{\ensuremath{W_2}}$};
\filldraw[color=green!70, fill=green!15, xshift=11cm, yshift=0.1cm, opacity=0.8] (3,-2.5) ellipse (1.3 and 0.75);
\filldraw[color=red!60, fill=red!15, xshift=11cm, yshift=0.1cm, opacity=0.8] (3,-2.5) ellipse (1.1 and 0.65);
\node[xshift=11cm, yshift=0.1cm] at (3,-2.5) {$\raisebox{-.35\baselineskip}{\ensuremath{\supp(f)}}$};
\node[xshift=13cm, yshift=0.1cm] at (3,-2.5) {$\raisebox{-.35\baselineskip}{\ensuremath{W_1}}$};
\draw[xshift=13cm, yshift=0.1cm] (2.4,-2.5) -- (2.6,-2.5);
\draw[xshift=13cm, yshift=0.1cm] (2,-1.1) -- (2.6,-1.1);
\filldraw[color=yellow!70, fill=yellow!45, xshift=11cm, yshift=1.35cm, opacity=0.8] (3,-2.5) ellipse (0.8 and 0.55);
\node[xshift=11cm, yshift=1.35cm] at (3,-2.5) {$\raisebox{-.35\baselineskip}{\ensuremath{\supp(\varphi)}}$};
\filldraw [color=orange!80, fill = orange!15, xshift=8cm, yshift=-2cm,opacity=0.8] plot [smooth cycle] coordinates {(-1,0.9) (3,1.5) (4.5,-0.5) (3,-1) (-1.5,-0.25)};
\node[xshift=3cm] at (6.3,-1.75) {$\raisebox{-.35\baselineskip}{\ensuremath{L_{\gamma_j}^su_f^{(j)}+q_ju_f^{(j)}=0\enspace\text{in}\enspace\Omega}}$};
\end{tikzpicture}
\caption{\begin{small} A graphical illustration of the sets and functions used in the proof of Lemma~\ref{lemma: relation of sols}. \end{small}}
\end{figure}
\begin{theorem}[Uniqueness of $\gamma$]
\label{thm: uniqueness of gamma}
Let $0 < s < \min(1,n/2)$, suppose $\Omega\subset {\mathbb R}^n$ is a domain bounded in one direction and let $W_1,W_2\subset \Omega_e$ be two non-disjoint measurement sets. Assume that the diffusions $\gamma_1, \gamma_2\in L^{\infty}({\mathbb R}^n)$ with background deviations $m_{\gamma_1},m_{\gamma_2}\in H^{s,n/s}({\mathbb R}^n)$ and potentials $q_1,q_2\in \distr({\mathbb R}^n)$ satisfy
\begin{enumerate}[(i)]
\item\label{uniform ellipticity diffusions} $\gamma_1,\gamma_2$ are uniformly elliptic with lower bound $\gamma_0>0$,
\item\label{continuity diffusions} $\gamma_1, \gamma_2$ are a.e. continuous in $W_1\cap W_2$,
\item\label{integrability potentials} $q_1,q_2\in M_{\gamma_0/\delta_0,+}(H^s\to H^{-s})\cap L^p_{loc}(W_1\cap W_2)$ for $\frac{n}{2s}< p\leq \infty$
\item\label{equal potentials in measurement sets} and $q_1|_{W_1\cap W_2}=q_2|_{W_1\cap W_2}$.
\end{enumerate}
If $\Lambda_{\gamma_1,q_1}f|_{W_2}=\Lambda_{\gamma_2,q_2}f|_{W_2}$ for all $f\in C_c^{\infty}(W_1)$, then there holds $\gamma_1=\gamma_2$ in ${\mathbb R}^n$.
\end{theorem}
\begin{proof}
Let $W\vcentcolon = W_1\cap W_2$. Then Theorem~\ref{thm: exterior reconstruction} ensures that $\gamma_1=\gamma_2$ on $W$. Next choose $V\Subset W$ and let $f\in \widetilde{H}^s(V)$. By assumption there holds
\[
\begin{split}
0 &=\langle (\Lambda_{\gamma_1,q_1}-\Lambda_{\gamma_2,q_2} )f,f\rangle =B_{\gamma_1,q_1}(u_f^{(1)},f)-B_{\gamma_2,q_2}(u_f^{(2)},f)\\
&=B_{\gamma_1}(u_f^{(1)},f)-B_{\gamma_2}(u_f^{(2)},f)+\langle (q_1-q_2)f,f\rangle\\
&=\langle (-\Delta)^{s/2}(\gamma_1^{1/2}u_f^{(1)}),(-\Delta)^{s/2}(\gamma_1^{1/2}f)\rangle_{L^2({\mathbb R}^n)}\\
&\quad-\left\langle \frac{(-\Delta)^sm_{\gamma_1}}{\gamma_1^{1/2}}\gamma_1^{1/2}u_f^{(1)},\gamma_1^{1/2}f\right\rangle\\
&\quad -\langle (-\Delta)^{s/2}(\gamma_2^{1/2}u_f^{(2)}),(-\Delta)^{s/2}(\gamma_2^{1/2}f)\rangle_{L^2({\mathbb R}^n)}\\
&\quad+\left\langle \frac{(-\Delta)^sm_{\gamma_2}}{\gamma_2^{1/2}}\gamma_2^{1/2}u_f^{(2)},\gamma_2^{1/2}f\right\rangle\\
&\quad +\langle (q_1-q_2)f,f\rangle\\
&=\left\langle \frac{(-\Delta)^s(m_{\gamma_2}-m_{\gamma_1})}{\gamma_1^{1/2}}\gamma_1^{1/2}f,\gamma_1^{1/2}f\right\rangle+\langle (q_1-q_2)f,f\rangle\\
&\quad +\langle (-\Delta)^{s/2}(\gamma_1^{1/2}u_f^{(1)}-\gamma_2^{1/2}u_f^{(2)}),(-\Delta)^{s/2}(\gamma_1^{1/2}f)\rangle_{L^2({\mathbb R}^n)},
\end{split}
\]
where in the fourth equality we used the fractional Liouville reduction (Lemma~\ref{Lemma: fractional Liouville reduction}, \ref{liouville red identity}) and in the fifth equality that $\gamma_1=\gamma_2$ in $W$. By Lemma~\ref{lemma: relation of sols} with $W_1=V$ and $W_2=W\setminus\overline{V}$, the term in the last line vanishes. Moreover, since $q_1=q_2$ in $W$, the term involving the potentials is zero as well. Using the polarization identity, we deduce that there holds
\[
\left\langle \frac{(-\Delta)^s(m_{\gamma_2}-m_{\gamma_1})}{\gamma_1^{1/2}}\gamma_1^{1/2}f,\gamma_1^{1/2}g\right\rangle=0
\]
for all $f,g\in \widetilde{H}^s(V)$. In particular, by first changing $f\mapsto \gamma_1^{-1/2}f\in \widetilde{H}^s(V)$ and $g\mapsto \gamma_1^{-1/2}g\in \widetilde{H}^s(V)$ (see~Lemma~\ref{Lemma: fractional Liouville reduction}, \ref{estimate mathcal M}) and then selecting $U\Subset V$, $g\in C_c^{\infty}(V)$ with $0\leq g\leq 1$, $g|_{\overline{U}}=1$, this implies
\[
\left\langle (-\Delta)^s(m_{\gamma_2}-m_{\gamma_1}),\gamma_1^{-1/2}f\right\rangle=0
\]
for all $f\in \widetilde{H}^s(U)$. Using again the assertion \ref{estimate mathcal M} of Lemma~\ref{Lemma: fractional Liouville reduction}, we deduce
\[
\left\langle (-\Delta)^s(m_{\gamma_2}-m_{\gamma_1}),f\right\rangle=0
\]
for all $f\in \widetilde{H}^s(U)$. Hence, $m=m_{\gamma_2}-m_{\gamma_1}\in H^{s,n/s}({\mathbb R}^n)$ satisfies
\[
(-\Delta)^sm=m=0\quad \text{in}\quad U.
\]
Now, the UCP for the fractional Laplacian in $H^{s,n/s}({\mathbb R}^n)$ (see~\cite[Theorem~2.2]{KRZ2022Biharm}) guarantees that $\gamma_1=\gamma_2$ in ${\mathbb R}^n$.
\end{proof}
\subsubsection{Uniqueness of the potential $q$}
\label{subsubsec: equality of q}
In this section, we finally establish the uniqueness assertion in Theorem~\ref{main thm}. In fact, under the given assumptions of Theorem~\ref{main thm}, Theorem~\ref{thm: uniqueness of gamma} implies $\gamma_1=\gamma_2$ in ${\mathbb R}^n$. The next theorem now ensures that there also holds $q_1=q_2$ in $\Omega$.
\begin{theorem}
\label{thm: uniqueness q}
Let $0 < s < \min(1,n/2)$, suppose $\Omega\subset {\mathbb R}^n$ is a domain bounded in one direction and let $W_1,W_2\subset \Omega_e$ be two non-disjoint measurement sets. Assume that the diffusions $\gamma_1, \gamma_2\in L^{\infty}({\mathbb R}^n)$ with background deviations $m_{\gamma_1},m_{\gamma_2}\in H^{s,n/s}({\mathbb R}^n)$ and potentials $q_1,q_2\in \distr({\mathbb R}^n)$ satisfy
\begin{enumerate}[(i)]
\item\label{uniform ellipticity diffusions} $\gamma_1,\gamma_2$ are uniformly elliptic with lower bound $\gamma_0>0$,
\item\label{continuity diffusions} $\gamma_1, \gamma_2$ are a.e. continuous in $W_1\cap W_2$,
\item\label{integrability potentials} $q_1,q_2\in M_{\gamma_0/\delta_0,+}(H^s\to H^{-s})\cap L^p_{loc}(W_1\cap W_2)$ for $\frac{n}{2s}< p\leq \infty$
\item\label{equal potentials in measurement sets} and $q_1|_{W_1\cap W_2}=q_2|_{W_1\cap W_2}$.
\end{enumerate}
If $\gamma\vcentcolon=\gamma_1=\gamma_2$ in ${\mathbb R}^n$ and $\Lambda_{\gamma,q_1}f|_{W_2}=\Lambda_{\gamma,q_2}f|_{W_2}$ for all $f\in C_c^{\infty}(W_1)$, then there holds $q_1=q_2$ in $\Omega$.
\end{theorem}
\begin{proof}
We first show that the fractional conductivity operator $L_{\gamma}^s$ has the UCP on $H^s({\mathbb R}^n)$ as long as $m_{\gamma}\in H^{s,n/s}({\mathbb R}^n)$. For this purpose, assume that $V\subset{\mathbb R}^n$ is a nonempty, open set and $u\in H^s({\mathbb R}^n)$ satisfies $L_{\gamma}^su=u=0$ in $V$. By the fractional Liouville reduction (Lemma~\ref{Lemma: fractional Liouville reduction}, \ref{liouville red identity}) and $u|_V=0$, there holds
\[
\begin{split}
0&=\langle L_{\gamma}^su,\varphi\rangle \\
&=\langle (-\Delta)^{s/2}(\gamma^{1/2}u),(-\Delta)^{s/2}(\gamma^{1/2}\varphi)\rangle_{L^2({\mathbb R}^n)}-\left\langle \frac{(-\Delta)^sm_{\gamma}}{\gamma^{1/2}}\gamma^{1/2} u,\gamma^{1/2}\varphi \right\rangle\\
&=\langle (-\Delta)^{s/2}(\gamma^{1/2}u),(-\Delta)^{s/2}(\gamma^{1/2}\varphi)\rangle_{L^2({\mathbb R}^n)}
\end{split}
\]
for any $\varphi\in C_c^{\infty}(V)$. By approximation the above identity holds for all $\varphi\in \widetilde{H}^s(V)$. By the property \ref{estimate mathcal M} of Lemma~\ref{Lemma: fractional Liouville reduction}, we can replace $\varphi\in \widetilde{H}^s(V)$ by $\psi=\gamma^{-1/2}\varphi\in \widetilde{H}^s(V)$ to see that $(-\Delta)^{s/2}(\gamma^{1/2}u)=0$ in $V$. Now, the UCP for the fractional Laplacian implies $\gamma^{1/2}u=0$ in ${\mathbb R}^n$. Hence, the uniform ellipticity of $\gamma$ ensures $u=0$ in ${\mathbb R}^n$.
Hence, the problem at hand satisfies the conditions in \cite[Theorem~2.6]{RZ2022unboundedFracCald} (see also \cite[Remark~4.2]{RZ2022unboundedFracCald}, Theorem~\ref{thm: well-posedness for fractional Schrödinger type equation} and Remark~\ref{remark: interior source problem}) and we obtain $q_1=q_2$ in $\Omega$.
\end{proof}
\subsection{Non-uniqueness}
\label{sec: non-uniqueness}
In this section, we construct counterexamples to uniqueness when the potentials are non-equal in the whole measurement set $W$ and hence prove Theorem~\ref{thm: non uniqueness}. Similarly, as in the articles \cite{counterexamples,RZ2022LowReg}, the construction of counterexamples relies on a PDE characterization of the equality of the DN maps. To derive such a correspondence between DN maps and a PDE for the coefficients, we need the following lemma:
\begin{lemma}[Relation to fractional Schr\"odinger problem]
\label{Auxiliary lemma}
Let $\Omega\subset{\mathbb R}^n$ be an open set which is bounded in one direction, $W\subset\Omega_e$ an open set and $0<s<\min(1,n/2)$. Assume that $\gamma,\Gamma\in L^{\infty}({\mathbb R}^n)$ with background deviations $m_{\gamma},m_{\Gamma}$ satisfy $\gamma(x),\Gamma(x)\geq \gamma_0>0$ and $m_{\gamma},m_{\Gamma}\in H^{s,n/s}({\mathbb R}^n)$. Moreover, let $q\in M_{\gamma_0/\delta_0,+}(H^s\to H^{-s})$. If $\gamma|_{W}=\Gamma|_{W}$, then
\begin{equation}
\label{eq: identity DN maps}
\langle \Lambda_{\gamma,q}f,g\rangle=\langle \Lambda_{Q_{\gamma,q}}(\Gamma^{1/2}f),(\Gamma^{1/2}g)\rangle
\end{equation}
holds for all $f,g\in\widetilde{H}^{s}(W)$, where the potential $Q_{\gamma,q}\in M(H^s\to H^{-s})$ is given by formula \eqref{eq: reduced potential}.
\end{lemma}
\begin{proof}
First recall that if $u_f\in H^s({\mathbb R}^n)$ is the unique solution to
\[
\begin{split}
L_{\gamma}^s u+qu&= 0\enspace\text{in}\enspace\Omega,\\
u&= f\enspace\text{in}\enspace\Omega_e
\end{split}
\]
with $f\in \widetilde{H}^s(W)$, then $\gamma^{1/2}u_f\in H^s({\mathbb R}^n)$ is the unique solution to
\[
\begin{split}
((-\Delta)^s+Q_{\gamma,q})v&=0\quad\,\,\,\,\enspace\text{in}\enspace\Omega,\\
v&=\gamma^{1/2}f\enspace\text{in}\enspace\Omega_e
\end{split}
\]
(see~Theorem~\ref{thm: well-posedness for fractional Schrödinger type equation}, \ref{item 2 well-posedness schrödinger}). Since $\gamma|_W = \Gamma|_W$, we have $\gamma^{1/2}f=\Gamma^{1/2}f$ and therefore $\gamma^{1/2}u_f$ is the unique solution to
\[
\begin{split}
((-\Delta)^s+Q_{\gamma,q})v&=0\quad\,\,\,\,\enspace\text{in}\enspace\Omega,\\
v&=\Gamma^{1/2}f\enspace\text{in}\enspace\Omega_e,
\end{split}
\]
which we denote by $v_{\Gamma^{1/2}f}$. Using the property \ref{liouville red identity} of Lemma~\ref{Lemma: fractional Liouville reduction} and the definition of $Q_{\gamma,q}$ via formula \eqref{eq: reduced potential}, we deduce
\[
\begin{split}
\langle \Lambda_{\gamma,q}f,g\rangle&=B_{\gamma,q}(u_f,g)=B_{\gamma}(u_f,g)+\langle qu_f,g\rangle\\
&=B_{q_{\gamma}}(\gamma^{1/2}u_f,\gamma^{1/2}g)+\left\langle \frac{q}{\gamma}(\gamma^{1/2}u_f),\gamma^{1/2}g\right\rangle\\
&=B_{Q_{\gamma,q}}(\gamma^{1/2}u_f,\gamma^{1/2}g)=B_{Q_{\gamma,q}}(v_{\Gamma^{1/2}f},\Gamma^{1/2}g)\\
&=\langle \Lambda_{Q_{\gamma,q}}(\Gamma^{1/2}f),(\Gamma^{1/2}g)\rangle
\end{split}
\]
for all $f,g\in\widetilde{H}^s(W)$. In the last equality sign we used the definition of the DN map $\Lambda_{Q_{\gamma,q}}$ given in Theorem~\ref{thm: well-posedness for fractional Schrödinger type equation}, \ref{item 5 well-posedness schrödinger}.
\end{proof}
With this at hand, we can now give the proof of Theorem~\ref{thm: non uniqueness}:
\begin{proof}[{Proof of Theorem~\ref{thm: non uniqueness}}]
First assume that the coefficients $(\gamma_1,q_1)$ and $(\gamma_2,q_2)$ satisfy the regularity assumptions of Theorem~\ref{main thm}. Next, denote by $\Gamma\colon{\mathbb R}^n\to{\mathbb R}_+$ any function satisfying the following conditions
\begin{enumerate}[(a)]
\item $\Gamma\in L^{\infty}({\mathbb R}^n)$,
\item $\Gamma\geq \gamma_0$,
\item $\Gamma|_W=\gamma_1|_W=\gamma_2|_W$
\item and $m_{\Gamma}=\Gamma^{1/2}-1\in H^{s,n/s}({\mathbb R}^n)$.
\end{enumerate}
By Lemma~\ref{Auxiliary lemma}, Theorem~\ref{thm: well-posedness for fractional Schrödinger type equation} and Theorem~\ref{thm: exterior reconstruction}, one sees that $\Lambda_{\gamma_1,q_1}f|_W=\Lambda_{\gamma_2,q_2}f|_W$ for all $f\in C_c^{\infty}(W)$ is equivalent to $\Lambda_{Q_{\gamma_1,q_1}}f|_W=\Lambda_{Q_{\gamma_2,q_2}}f|_W$ for all $f\in C_c^{\infty}(W)$ and $\gamma_1|_W=\gamma_2|_W$. Next, we claim this is equivalent to the following two assertions:
\begin{enumerate}[(i)]
\item $\gamma_1=\gamma_2$ in $W$,
\item\label{item 1 equal potentials} $Q_{\gamma_1,q_1}=Q_{\gamma_2,q_2}$ in $\Omega$
\item\label{item 2 equality in exterior set} and
\begin{equation}
\label{eq: equivalence measurement set}
(-\Delta)^sm+\frac{q_2-q_1}{\gamma_2^{1/2}}=0\enspace\text{in}\enspace W,
\end{equation}
where $m=m_{\gamma_1}-m_{\gamma_2}$.
\end{enumerate}
If $\Lambda_{Q_{\gamma_1,q_1}}f|_W=\Lambda_{Q_{\gamma_2,q_2}}f|_W$ for all $f\in C_c^{\infty}(W)$, then \cite[Theorem~2.6, Corollary~2.7]{RZ2022unboundedFracCald} ensure that $Q_{\gamma_1,q_1}=Q_{\gamma_2,q_2}$ in $\Omega$ and $W$. Next note that
\begin{equation}
\label{eq: some calculation}
\begin{split}
0&=Q_{\gamma_1,q_1}-Q_{\gamma_2,q_2}=-\frac{(-\Delta)^sm_{\gamma_1}}{\gamma_1^{1/2}}+\frac{(-\Delta)^sm_{\gamma_2}}{\gamma_2^{1/2}}+\frac{q_1}{\gamma_1}-\frac{q_2}{\gamma_2}\\
&=-\frac{(-\Delta)^sm}{\gamma_1^{1/2}}+\left(\frac{1}{\gamma_2^{1/2}}-\frac{1}{\gamma_1^{1/2}}\right)(-\Delta)^sm_{\gamma_2}+\frac{q_1}{\gamma_1}-\frac{q_2}{\gamma_2}\\
&=-\frac{(-\Delta)^sm}{\gamma_1^{1/2}}+\frac{m}{\gamma_1^{1/2}\gamma_2^{1/2}}(-\Delta)^sm_{\gamma_2}+\frac{q_1}{\gamma_1}-\frac{q_2}{\gamma_2},
\end{split}
\end{equation}
where we set $m=m_{\gamma_1}-m_{\gamma_2}$. As $\gamma_1=\gamma_2$ in $W$ (so that also $m=0$ there), the identity \eqref{eq: some calculation} reduces to the one in statement \ref{item 2 equality in exterior set}.
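Indeed, testing \eqref{eq: some calculation} in $W$, the middle term drops out since $m|_W=0$, and using $\gamma_1=\gamma_2$ there we arrive at
\[
0=-\frac{(-\Delta)^sm}{\gamma_2^{1/2}}+\frac{q_1-q_2}{\gamma_2}\quad\text{in}\quad W,
\]
which, after multiplication by $\gamma_2^{1/2}$, is precisely \eqref{eq: equivalence measurement set}.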
Next, assume the converse, namely that $\gamma_1=\gamma_2$ in $W$ and $m=m_{\gamma_1}-m_{\gamma_2}$ as well as $Q_{\gamma_j,q_j}$ for $j=1,2$ satisfy \ref{item 1 equal potentials} and \ref{item 2 equality in exterior set}. Then for any given Dirichlet value $f\in C_c^{\infty}(W)$, the Dirichlet problems for $(-\Delta)^sv+Q_{\gamma_1,q_1}v=0$ in $\Omega$ and $(-\Delta)^sv+Q_{\gamma_2,q_2}v=0$ in $\Omega$ have the same solution $v_f^{(1)}=v_f^{(2)}$. Hence, one has \[
\begin{split}
B_{Q_{\gamma_1,q_1}}(v_f^{(1)},g)&=\langle (-\Delta)^{s/2}v_f^{(1)},(-\Delta)^{s/2}g\rangle+\langle Q_{\gamma_1,q_1}v_f^{(1)},g\rangle\\
&=\langle (-\Delta)^{s/2}v_f^{(2)},(-\Delta)^{s/2}g\rangle+\langle Q_{\gamma_1,q_1}f,g\rangle\\
&=\langle (-\Delta)^{s/2}v_f^{(2)},(-\Delta)^{s/2}g\rangle+\langle Q_{\gamma_2,q_2}f,g\rangle\\
&=\langle (-\Delta)^{s/2}v_f^{(2)},(-\Delta)^{s/2}g\rangle+\langle Q_{\gamma_2,q_2}v_f^{(2)},g\rangle
\end{split}
\]
for any $g\in C_c^{\infty}(W)$, but this is nothing else than $\Lambda_{Q_{\gamma_1,q_1}}f|_W=\Lambda_{Q_{\gamma_2,q_2}}f|_W$.
Next, choose $\gamma_2=1$ and $q_2=0$ and assume that $(\gamma_1,q_1)$ satisfies the assumptions of Theorem~\ref{main thm}. This implies that there holds $\Lambda_{\gamma_1,q_1}f|_W=\Lambda_{1,0}f|_W$ for all $f\in C_c^{\infty}(W)$ if and only if we have
\begin{enumerate}[(I)]
\item\label{item 2 measurement set 2} $\gamma_1=1$ on $W$,
\item\label{item 1 equal potentials 2} $Q_{\gamma_1,q_1}=0$ in $\Omega$
\item\label{item 3 equality in exterior set} and $(-\Delta)^sm_{\gamma_1}=q_1\enspace\text{in}\enspace W$.
\end{enumerate}
Therefore, if we define $q_1$ via
\begin{equation}
\label{eq: specification of potential}
q_1=\gamma_1^{1/2}(-\Delta)^sm_{\gamma_1}\enspace\text{in}\enspace{\mathbb R}^n
\end{equation}
for a given sufficiently regular function $\gamma_1\colon{\mathbb R}^n\to{\mathbb R}_+$ with $\gamma_1|_W=1$, then the conditions \ref{item 2 measurement set 2}, \ref{item 1 equal potentials 2} and \ref{item 3 equality in exterior set} are satisfied. Hence, the remaining task is to select $\gamma_1$ in such a way that the required regularity properties of Theorem~\ref{main thm} are met. We construct $m_{\gamma_1}\in H^{s,n/s}({\mathbb R}^n)\cap H^s({\mathbb R}^n)$ as follows: First, choose open sets $\Omega',\omega\subset{\mathbb R}^n$ satisfying $\Omega'\Subset\Omega$ and $\omega\Subset \Omega\setminus\overline{\Omega'}$. Next, let us fix some $\epsilon>0$ such that $\Omega'_{5\epsilon}, \omega_{5\epsilon}, \Omega_e$ are disjoint. Here and in the rest of the proof, we denote by $A_{\delta}$ the open $\delta$-neighborhood of the set $A\subset{\mathbb R}^n$.
Now, choose any nonnegative cut-off function $\eta\in C_c^{\infty}(\omega_{3\epsilon})$ satisfying $\eta|_{\omega}=1$. We define $\widetilde{m}\in H^s({\mathbb R}^n)$ as the unique solution to
\begin{equation}
\label{eq: PDE in extended domain}
(-\Delta)^s\widetilde{m}=0\quad \text{in}\quad \Omega'_{2\epsilon},\quad \widetilde{m}=\eta\quad\text{in}\quad {\mathbb R}^n\setminus\overline{\Omega'}_{2\epsilon}.
\end{equation}
Since $\eta\geq 0$, the maximum principle for the fractional Laplacian shows $\widetilde{m}\geq 0$ (cf.~\cite[Proposition~4.1]{RosOton16-NonlocEllipticSurvey}). Proceeding as in \cite[Proof of Theorem 1.6]{RZ2022LowReg} one can show that
\[
m_{\gamma_1}\vcentcolon =C_{\epsilon}\rho_{\epsilon}\ast\widetilde{m}\in H^s({\mathbb R}^n)\quad\text{with}\quad C_{\epsilon}\vcentcolon=\frac{\epsilon^{n/2}}{2|B_1|^{1/2}\|\rho\|_{L^{\infty}({\mathbb R}^n)}^{1/2}\|\widetilde{m}\|_{L^2({\mathbb R}^n)}},
\]
where $\rho_{\epsilon}\in C_c^{\infty}({\mathbb R}^n)$ is the standard mollifier of width $\epsilon$, solves
\[
(-\Delta)^sm=0\quad \text{in}\quad \Omega',\quad m=m_{\gamma_1}\quad\text{in}\quad \Omega'_e.
\]
Furthermore, $m_{\gamma_1}$ has the following properties
\begin{enumerate}[(A)]
\item\label{item 2 m} $m_{\gamma_1}\in L^{\infty}({\mathbb R}^n)$ with $\|m_{\gamma_1}\|_{L^{\infty}({\mathbb R}^n)}\leq 1/2$ and $m_{\gamma_1}\geq 0$,
\item\label{item 3 m} $m_{\gamma_1}\in H^{s}({\mathbb R}^n)\cap H^{s,n/s}({\mathbb R}^n)$
\item\label{item 4 m} and $\supp(m_{\gamma_1})\subset \Omega_e$.
\end{enumerate}
Now, we define $\gamma_1\in L^{\infty}({\mathbb R}^n)$ via $\gamma_1=(m_{\gamma_1}+1)^2\geq 1$. Therefore, $\gamma_1$ satisfies all required properties and even belongs to $C^{\infty}_b({\mathbb R}^n)$, since $m_{\gamma_1}$ is defined via mollification of an $L^2$ function. Using a similar calculation as for Lemma~\ref{Lemma: fractional Liouville reduction}, \ref{potential frac Liouville reduction}, we have $q_1\in M(H^s\to H^{-s})$ and by scaling of $m_{\gamma_1}$ we can make the norm $\|q_1\|_s$ as small as we want. In particular, this allows us to guarantee $q_1\in M_{\gamma_0/\delta_0}(H^s\to H^{-s})$ with $\gamma_0=1$. Note that we cannot have $q_1|_W=0$, as then the UCP would imply $m_{\gamma_1}=0$. Hence, we can conclude the proof.
\end{proof}
\begin{figure}[!ht]
\centering
\begin{tikzpicture}[scale=1.2]
\draw[color=cyan!70, fill=cyan!30] plot [smooth cycle, tension=0.05] coordinates {(0,0) (0,3) (2,3) (2,2) (1,2) (1,1) (2,1) (2,0)};
\filldraw[color=red!60, fill=red!20, opacity = 0.4] plot [smooth cycle, tension=0.2] coordinates {(-0.47,-0.6) (-0.47,3.6) (4.3,3.62) (4.3,-0.65) };
\draw[color=cyan!70] plot [smooth cycle, tension=0.05] coordinates { (-0.1,-0.1) (-0.1,3.1) (2.1,3.1) (2.1,1.9) (1.1,1.9) (1.1,1.1) (2.1,1.1) (2.1,-0.1)};
\draw[color=cyan!70] plot [smooth cycle, tension=0.05] coordinates { (-0.2,-0.2) (-0.2,3.2) (2.2,3.2) (2.2,1.8) (1.2,1.8) (1.2,1.2) (2.2,1.2) (2.2,-0.2)};
\draw[color=cyan!70] plot [smooth cycle, tension=0.05] coordinates { (-0.3,-0.3) (-0.3,3.3) (2.3,3.3) (2.3,1.7) (1.3,1.7) (1.3,1.3) (2.3,1.3) (2.3,-0.3)};
\draw[color=cyan!70] plot [smooth cycle, tension=0.05] coordinates { (-0.4,-0.4) (-0.4,3.4) (2.4,3.4) (2.4,1.6) (1.4,1.6) (1.4,1.4) (2.4,1.4) (2.4,-0.4)};
\draw[color=gray!70] (-0.03,2.1)--(-1.1,2.1);
\node at (-1.3,2.1) {$\raisebox{-.35\baselineskip}{\ensuremath{\Omega'}}$};
\draw[color=gray!70] (-0.68,1.6)--(-1.1,1.6);
\node at (-1.3,1.6) {$\raisebox{-.35\baselineskip}{\ensuremath{\Omega}}$};
\filldraw[color=blue!60, fill=blue!15, opacity=0.5] (5.5,1) ellipse (0.8 and 0.6);
\node at (5.5,1) {$\raisebox{-.35\baselineskip}{\ensuremath{W}}$};
\filldraw[color=green!70, fill=green!40, opacity=0.8] (3.5,1) ellipse (0.3 and 0.7);
\draw[color=green!70] (3.5,1) ellipse (0.4 and 0.8);
\draw[color=green!70] (3.5,1) ellipse (0.5 and 0.9);
\draw[color=green!70] (3.5,1) ellipse (0.6 and 1);
\draw[color=green!70] (3.5,1) ellipse (0.7 and 1.1);
\draw[color=green!70] (3.5,1) ellipse (0.8 and 1.2);
\node at (3.5,1) {$\raisebox{-.35\baselineskip}{\ensuremath{\omega}}$};
\end{tikzpicture}
\caption{A graphical illustration of the sets used in the proof of Theorem~\ref{thm: non uniqueness}.}\label{fig: Geometric setting 2}
\end{figure}
\medskip
\newpage
\section{Introduction}
The theory of massive gravity, which had once been thought to be ruled out because of the propagation of a ghost
at the non-linear level \cite{BD,Creminelli}, was recently revitalized in \cite{Giga_FP,Giga_Resum}, where it
was observed that there is a two-parameter family of generalizations of the Fierz-Pauli mass term \cite{FP}
which is free from ghostlike instability at least in the decoupling limit, and also in the full theory
at least to quartic order in nonlinearities \cite{Giga_Resum}. For further interesting studies
of this model see \cite{Koyama1+2,Nieuwenhuizen,HassanRosen,dato}.
This, on the other hand, strongly motivates a quest for underlying setups that
would naturally explain or at least elegantly reproduce the unusual
structures that emerged in \cite{Giga_FP,Giga_Resum}.
The theories of gravity supplemented with an auxiliary extra dimension (AED) \cite{GG,Claudia},
inspired by the DGP brane-world gravity model \cite {DGP}, appeared as a promising step in this
direction as they correctly predicted a cubic ghost-free completion of Fierz-Pauli \cite{Giga_tuned}.
In these models, while all matter fields live on a 4-dimensional brane, the metric is extended to
an extra dimension $-1<u<1$ and is denoted by $\tilde g_{\mu\nu}(x,u)$. The brane is located at $u=0$ and a
$\mathbf Z_2$ symmetry is imposed on the fields, $\tilde g_{\mu\nu}(x,u)=\tilde g_{\mu\nu}(x,-u)$, using which
the graviton's Lagrangian takes the form
\begin{eqnarray}
\label{lagr}
{\cal L}=M_{\rm Pl}^2\sqrt{-g}R-M_{\rm Pl}^2m^2\int_0^1du\sqrt{-\tilde g}(k_{\mu\nu}^2-k^2)\,,
\end{eqnarray}
where the first term is the Einstein-Hilbert Lagrangian on the brane as a function of $g_{\mu\nu}(x)=\tilde g_{\mu\nu}(x,u=0)$, while $k_{\mu\nu}=\frac{1}{2}\partial_u\tilde g_{\mu\nu}$, $k=\tilde g^{\mu\nu}k_{\mu\nu}$, and all indices in the second term are contracted using inverse extended metric $\tilde g^{\mu\nu}$. The coordinate $u$ is called an auxiliary dimension because after choosing a second boundary condition, say at $u=1$, $\tilde g_{\mu\nu}(x,u)$ is algebraically determined in terms of $g_{\mu\nu}$ and the second term in \eqref{lagr} describes just a potential for the induced metric on the brane.
Of course the choice of this second boundary condition is by no means unique at the level of the auxiliary extra dimension: while $\tilde g_{\mu\nu}(x,u=1)=\eta_{\mu\nu}$ was originally chosen to describe a particular completion of Fierz-Pauli massive gravity, nothing prevents us from considering a more general boundary condition to accommodate a larger family of completions. In this way we have written the most generic graviton potential in terms of a geometrical construct that naturally arises in higher dimensional theories of gravity, where $k_{\mu\nu}$ is taken to be the extrinsic curvature.
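As a quick illustration at lowest order, take the original boundary condition and $g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}$: the bulk profile interpolating linearly between the two boundary values (which solves the bulk equation at linear order, cf.~\eqref{H1} below) is $\tilde g_{\mu\nu}=\eta_{\mu\nu}+(1-u)h_{\mu\nu}$, so that $k_{\mu\nu}=-\frac{1}{2}h_{\mu\nu}$ and the $u$-integral in \eqref{lagr} produces, up to the overall normalization, precisely the Fierz-Pauli combination,
\begin{eqnarray}
-M_{\rm Pl}^2m^2\int_0^1du\,\sqrt{-\tilde g}\,(k_{\mu\nu}^2-k^2)=-\frac{M_{\rm Pl}^2m^2}{4}\left(h_{\mu\nu}^2-h^2\right)+{\cal O}(h^3)\,.
\end{eqnarray}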
The abovementioned generalization of the boundary condition is straightforward and will be the subject of \S2, where we show that one can consistently take $\tilde g_{\mu\nu}(x,u=1)$ a generic function of $g_{\mu\nu}$, the 4D-brane metric. In \S3 we perform the dimensional reduction to obtain the 4D potential for graviton, and explicitly show that in the original choice of boundary condition the absence of the ghost does not persist at higher than cubic order (in agreement with \cite{Rachel}). Thereafter a recipe will be provided to reproduce the potential of \cite{Giga_FP} by a proper choice of the boundary condition, ensuring the stability of the full theory in the decoupling limit. In \S4 we consider the generalization of the model to higher extra dimensions and prove that in the most tractable case with a spherically symmetric boundary condition in the bulk, one and many extra dimensions are equivalent.
We further provide a simple explanation for the absence of the cubic ghost and the presence of higher order ones in the original choice of boundary condition ($\tilde g_{\mu\nu}(x,u=1)=\eta_{\mu\nu}$), in appendix A. In appendix B we address the possibility of modifying the bulk action by inclusion of other geometrical constructs so as to avoid the adjustment of the boundary condition. It is shown there that the most natural candidate which is (the auxiliary version of) Gauss-Bonnet term, as the only ``Lovelock'' term other than Einstein-Hilbert in 5D for which the variation principle is well defined, cannot do much better than the original action in predicting right coefficients. Nevertheless one can build a ghost-free theory by adding higher powers of $k_{\mu\nu}$ with specially tuned coefficients. Finally, in appendix C we give the most general boundary condition which gives healthy theory at quartic order.
\section{Generalized Boundary Conditions}
We begin this letter by considering the generalization of the boundary conditions for \eqref{lagr}. For convenience we will assume the location of the boundaries to be at $u=0,1$ rather than $u=\pm 1$. As usual we fix $\tilde{g}_{\mu\nu}(x,u=0)=g_{\mu\nu}(x)$, while the choice at $u=1$ is in principle arbitrary since this boundary does not have an intrinsic dynamics and the values of the fields on it are completely determined by imposed conditions. Hence, the most general boundary conditions would be
\begin{eqnarray}
\tilde{g}_{\mu\nu}(x,u=0)=g_{\mu\nu}(x) \quad \text{and} \quad \tilde{g}_{\mu\nu}(x,u=1)=\hat g_{\mu\nu}(g_{\alpha\beta}),
\label{bound}
\end{eqnarray}
where $\hat g_{\mu\nu}-\eta_{\mu\nu}$ has to vanish when $g_{\mu\nu}=\eta_{\mu\nu}$, if the theory is to describe massive gravity around Minkowski. The variation of the action \eqref{lagr} then generalizes to
\begin{eqnarray}
{\delta} S=M^2_{pl}\int d^4x\left\{ \sqrt{-g} G_{\mu\nu} {\delta} g^{\mu\nu}\vert_{u=0}-m^2\left.\left[ \sqrt{-\tilde{g}}(k^{\mu\nu}-\tilde g^{\mu\nu}k){\delta} \tilde g_{\mu\nu} \right]\right\vert_{u=0} ^{u=1} \right.\nonumber \\
\left. -m^2\int_0^1 du \sqrt{-\tilde{g}} \left[ \partial_u k_{\mu \nu}-\tilde g _{\mu\nu}\partial_u k-\frac{1}{2}\tilde g_{\mu\nu}(k_{\alpha\beta}^2+k^2)-2k_{\mu}^{\alpha}k_{\alpha\nu}+kk_{\mu\nu} \right]{\delta} \tilde g ^{\mu\nu} \right\},
\end{eqnarray}
from which it follows that the bulk ($u>0$) equation of motion is
\begin{eqnarray}
\partial_u k_{\mu \nu}-\tilde g _{\mu\nu}\partial_u k=\frac{1}{2}\tilde g_{\mu\nu}(k_{\alpha\beta}^2+k^2)+2k_{\mu}^{\alpha}k_{\alpha\nu}-kk_{\mu\nu},
\label{bulk}
\end{eqnarray}
the same as in the case with a field-independent boundary condition at $u=1$. The brane equation of motion, on the other hand, gets slightly modified to
\begin{eqnarray}
\sqrt{-g}G_{\mu\nu}-m^2 \left\{ \left[ \sqrt{-\tilde g} (k_{\mu\nu}-\tilde g_{\mu\nu} k) \right]_0-\left[ \sqrt{-\tilde g} (k^{\alpha\beta}-\tilde g^{\alpha\beta} k) \right]_1\frac{\partial \hat g_{\alpha\beta}}{\partial g_{\rho\sigma}}\hat g_{\mu\rho}\hat g_{\nu\sigma}\right\}=0.\nonumber
\end{eqnarray}
Notice that this differs from the 4D equation of \cite{GG} by the last term, which arises because of the non-vanishing ${\delta} \tilde g _{\mu\nu}\vert_{u=1}$\footnote{It should be mentioned that this does not make the variation principle ill-defined.}.
An equivalent and often simpler way of analyzing the AED models is to integrate over the $u$-coordinate using the solution to the bulk equation \eqref{bulk} with the boundary condition \eqref{bound} in order to obtain an effective 4-dimensional potential for graviton. This was the strategy pursued in \cite{Giga_tuned} for the special case of $\tilde g_{\mu\nu}(x,u=1)=\eta_{\mu\nu}$, treating $h_{\mu\nu}$ perturbatively, but the generalization is straightforward. One simply needs to write
\begin{eqnarray}
g_{\mu\nu}=\eta_{\mu\nu}+h_{\mu\nu}=\hat g_{\mu\nu}+\hat h_{\mu\nu}\,, \quad \text{with} \quad \hat{h}_{\mu\nu}\equiv \eta_{\mu\nu}+h_{\mu\nu}-\hat g_{\mu\nu},
\end{eqnarray}
and expand all $\tilde g_{\mu\nu}$ around $\hat g_{\mu\nu}$ so that all computations go through identically if the following replacements are made:
\begin{eqnarray}
\label{replace}
\eta_{\mu\nu}\to\hat g_{\mu\nu}\,,\qquad\text{and}\qquad h_{\mu\nu}\to\hat h_{\mu\nu}\,.
\end{eqnarray}
Therefore one just needs to take the original 4D effective potential and make the substitution \eqref{replace} (including the overall factor $\sqrt{-\det(\eta)}=1\to\sqrt{-\det(\hat g)}$).
\section{\label{quartic} Quartic Ghost and a Remedy}
Perhaps the simplest method to check models of massive gravity against the Boulware-Deser ghost \cite{BD} is to take the routes of refs.~\cite{Giga_tuned} and \cite{Lasha}, namely, to write the graviton potential in the unitary gauge, expand it around the Minkowski background and compare it with the unique set of conceivably ghost-free potentials constructed in \cite{Giga_FP}. Obviously, to perform this comparison the latter should also be re-expanded in the unitary gauge, where no St\"uckelberg fields are present.
Following the logic of the previous section, in order to perform the dimensional reduction perturbatively in $\hat{h}_{\mu\nu}$, we define
\begin{eqnarray}
\label{h_tilde}
\tilde h_{\mu\nu}(x,u)=\tilde g_{\mu\nu}(x,u)-\hat g_{\mu\nu}=H^{(1)}_{\mu\nu}(x,u)+H^{(2)}_{\mu\nu}(x,u)+\dots\,,
\end{eqnarray}
where $H^{(n)}_{\mu\nu}$ is of n$^{\text{th}}$ order in $\hat h_{\mu\nu}$ and except $H^{(1)}_{\mu\nu}$ which is given by
\begin{eqnarray}
\label{H1}
H^{(1)}_{\mu\nu}(x,u)=(1-u)\hat h_{\mu\nu}(x)\,,
\end{eqnarray}
all the rest vanish both at $u=0$ and $u=1$, so that $\tilde g_{\mu\nu}(x,u)$ satisfies the desired boundary conditions at each order. This property of the higher order bulk solutions, together with the absence of second or higher derivatives of the fields in the bulk action, ensures that to obtain the n$^{\text{th}}$ order graviton potential one only needs to solve for $\tilde h_{\mu\nu}$ up to (n-2)\nd order. To wit, note that the action \eqref{lagr} starts at quadratic order in $\tilde h_{\mu\nu}$ and thus contains an n$^{\text{th}}$ order term of the schematic form
\begin{eqnarray}
\int_0^1 du\;\partial_uH_{\mu\nu}^{(n-1)}\partial_uH_{\mu\nu}^{(1)}= \left. H_{\mu\nu}^{(n-1)}\partial_uH_{\mu\nu}^{(1)}\right \vert_0^1-\int_0^1 du \;H_{\mu\nu}^{(n-1)}\partial_u^2 H_{\mu\nu}^{(1)}\,,
\end{eqnarray}
but the first term on the r.h.s. vanishes because of the vanishing of $H^{(n-1)}_{\mu\nu}(u)$ on the boundaries, and the second term because of the linear equation of motion which is satisfied by $H^{(1)}_{\mu\nu}(u)$.
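For completeness, let us also note where the linear profile \eqref{H1} comes from: at first order in $\hat h_{\mu\nu}$ the bulk equation \eqref{bulk} reduces to
\begin{eqnarray}
\partial_u^2\tilde h_{\mu\nu}-\hat g_{\mu\nu}\,\hat g^{\alpha\beta}\partial_u^2\tilde h_{\alpha\beta}=0\,,
\end{eqnarray}
whose trace gives $\hat g^{\alpha\beta}\partial_u^2\tilde h_{\alpha\beta}=0$ and therefore $\partial_u^2\tilde h_{\mu\nu}=0$; the profile linear in $u$ obeying $\tilde h_{\mu\nu}(u=0)=\hat h_{\mu\nu}$ and $\tilde h_{\mu\nu}(u=1)=0$ is then precisely \eqref{H1}.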
In the case of the field-independent boundary value $\hat g_{\mu\nu}=\eta_{\mu\nu}$, the comparison with the tuned polynomial was carried out in \cite{Giga_tuned} up to cubic order, yielding agreement with one class of (decoupling limit) ghost-free potentials characterized by $c_3=1/4$. Here we show that this agreement breaks down at quartic order and, in appendix A, using a simple covariantization we explain why this could have been expected. We also find the most general modification of the boundary condition that gives rise to a ghost-free 4D action up to quartic order. This is achieved by a perturbative adjustment of the boundary value at $u=1$, hence one can continue this process to all orders.
As already mentioned, to obtain fourth order 4D effective potential one needs only up to the second order bulk solution:
\begin{eqnarray}
\label{H2}
H^{(2)}_{\mu\nu}=\frac{1}{2}u(1-u)\left[\frac{1}{12}\eta_{\mu\nu}(\hat h_{\sigma\rho}^2-\hat h^2)-\hat h_\mu^\sigma \hat h_{\sigma\nu}+\frac{1}{2}\hat h\hh_{\mu\nu}\right]\,,
\end{eqnarray}
which after some work yields the following unitary-gauge potential
\begin{eqnarray}
\label{V_4}
V^{(4)}(h_{\mu\nu})&=&\sqrt{-\hat g}\left(\hat h_{\mu\nu}^2-\hat h^2-\hat h_{\mu\nu}^3+\frac{5}{4}\hat h\hh_{\mu\nu}^2-\frac{1}{4}\hat h^3\right.\nonumber\\
&&\left.+\frac{11}{12}\hat h_{\mu\nu}^4-\frac{11}{12}\hat h\hh_{\mu\nu}^3-\frac{53}{144}(\hat h_{\mu\nu}^2)^2+\frac{29}{72}\hat h^2\hat h_{\mu\nu}^2-\frac{5}{144}\hat h^4\right)\,,
\label{general}
\end{eqnarray}
with indices contracted by matrix $\hat g^{\mu\nu}$, inverse to $\hat g_{\mu\nu}$. This expression should be compared to
\begin{eqnarray}
\label{V(H)}
V^{(4)}_{\rm{tuned}}(H_{\mu\nu})&=&\sqrt{-g}\left[H_{\mu\nu}^2-H^2+c_1H_{\mu\nu}^3+c_2HH_{\mu\nu}^2+c_3H^3\right.\nonumber\\
&&\left. +d_1H_{\mu\nu}^4+d_2HH_{\mu\nu}^3+d_3(H_{\mu\nu}^2)^2+d_4H^2H_{\mu\nu}^2+d_5H^4\right]\,,
\end{eqnarray}
where $H_{\mu\nu}$, which is a covariant version of $h_{\mu\nu}=g_{\mu\nu}-\eta_{\mu\nu}$, reduces to $h_{\mu\nu}$ in the unitary gauge, indices are now contracted with the full metric $g^{\mu\nu}$ and all the coefficients are expressed in terms of $c_3$ and $d_5$ as given in \cite{Giga_FP}.
Concentrating on the case of the constant boundary value $\hat g_{\mu\nu}=\eta_{\mu\nu}$, where we know that $c_3=1/4$ guarantees agreement between \eqref{V_4} and \eqref{V(H)} up to 3\rd order, and expanding the latter in the unitary gauge around the Minkowski background, we get
\begin{eqnarray}
V^{(4)}_{\rm{tuned}}(h_{\mu\nu})&=&h_{\mu\nu}^2-h^2-h_{\mu\nu}^3+\frac{5}{4}hh_{\mu\nu}^2-\frac{1}{4}h^3\nonumber\\
&&+d_1h_{\mu\nu}^4+d_2hh_{\mu\nu}^3+d_3(h_{\mu\nu}^2)^2+d_4h^2h_{\mu\nu}^2+d_5h^4\,.
\end{eqnarray}
However this can never coincide with \eqref{V_4} for any value of $d_5$ and consequently the theory suffers from a ghost at quartic order. This can most easily be seen by introducing St\"uckelberg fields in terms of which there remains a quartic self-interaction of the helicity-0 part $\pi$ of the schematic form
\begin{eqnarray}
\label{pi_4}
M_{\rm Pl}^2m^2(\partial\partial\pi)^4 = \frac{1}{M_{\rm Pl}^2m^6}(\partial\partial\pi_c)^4\equiv \frac{1}{\Lambda_4^8}(\partial\partial\pi_c)^4 \,,
\end{eqnarray}
where $\pi_c=M_{\rm Pl} m^2 \pi\equiv\Lambda_3^3\pi$ is the canonically normalized field. In terms of interactions of $\pi$ the agreement with the adjusted potential to 3\rd order translates into the absence of the cubic interactions:
\begin{eqnarray}
\label{pi_3}
\frac{1}{M_{\rm Pl} m^4}(\partial\partial\pi_c)^3\equiv \frac{1}{\Lambda_5^5}(\partial\partial\pi_c)^3 \,,
\end{eqnarray}
which are generically present in theories of massive gravity \cite{AGS}. Consequently \eqref{pi_4} is the most strongly coupled interaction in the theory or equivalently the scale $\Lambda_4=(M_{\rm Pl} m^3)^{1/4}$ is the smallest mass scale by which any interaction in this model may be suppressed. We can therefore define a decoupling limit
\begin{eqnarray}
M_{\rm Pl}\to\infty,\qquad m\to 0,\qquad \Lambda_4\;\text{fixed},
\end{eqnarray}
in which \eqref{pi_4} is the only interaction that survives, and it propagates ghosts lighter than the cutoff around any reasonable astrophysical background (as shown in \cite{Creminelli} for the similar case of the cubic interaction \eqref{pi_3}). The fact that the action contains only a finite number of terms leaves no ambiguity about the instability of the model, since no resummation may be invoked to remove the ghost.
Having discussed the problems of the original model, let us comment on \eqref{general} with a general boundary condition. A similar analysis shows that the ghost can be avoided by an order-by-order adjustment of the boundary condition
\begin{eqnarray}
\hat g_{\mu\nu}&=&\eta_{\mu\nu}+b^{(1)}_1\eta_{\mu\nu}h+b^{(1)}_2h_{\mu\nu}\nonumber \\
&+&b^{(2)}_1\eta_{\mu\nu}h_{\alpha\beta}^2+b^{(2)}_2\eta_{\mu\nu}h^2+b^{(2)}_3h_{\mu\nu}h+b^{(2)}_4[h^2]_{\mu\nu}\nonumber \\
&+&b^{(3)}_1\eta_{\mu\nu}h_{\alpha\beta}^3+b^{(3)}_2\eta_{\mu\nu}hh_{\alpha\beta}^2+b^{(3)}_3\eta_{\mu\nu}h^3+b^{(3)}_4h_{\mu\nu}h_{\alpha\beta}^2\nonumber\\
&+&b^{(3)}_5h_{\mu\nu}h^2+b^{(3)}_6[h^2]_{\mu\nu}h+b^{(3)}_7[h^3]_{\mu\nu}\nonumber \\
&&\ldots
\label{expansion}
\end{eqnarray}
The requirement of the Fierz-Pauli structure constrains $b^{(1)}_{1,2}$, the absence of the cubic Boulware-Deser ghost relates the quadratic coefficients $b^{(2)}_{1,\ldots,4}$ to each other, and so on. The most general healthy coefficients of the above expansion are explicitly given in appendix C. Here, on the other hand, we give just one of the simplest expressions for $\hat g_{\mu\nu}$ which cures the instability of the theory at quartic order
\begin{eqnarray}
\hat g_{\mu\nu}=\eta_{\mu\nu}+\frac{1}{96}\eta_{\mu\nu}h_{\alpha\beta}^3-\frac{7}{432}\eta_{\mu\nu}hh_{\alpha\beta}^2+\frac{5}{864}\eta_{\mu\nu}h^3-\frac{17}{288}h_{\mu\nu}h_{\alpha\beta}^2+\frac{11}{96}[h^3]_{\mu\nu}.
\end{eqnarray}
The theory with this choice of boundary condition, once reduced to 4D, gives the 4$^{\text{th}}$ order potential of \cite{Giga_FP} corresponding to $c_3=1/4$ and $d_5=0$.
We would like to stress that our approach is perturbative, in contrast to \cite{Rachel}, where the authors performed the $u$-integral exactly and obtained the nonlinear 4D action as a function of $\hat g_{\mu\nu}$ (in their notation the metric at $u=1$ is labeled $f_{\mu\nu}$). The advantage of that framework is that one may try to find an exact ghost-free boundary condition by equating $F(g^{\mu\nu}\hat g_{\nu\alpha})$ of \cite{Rachel} to the ghost-free potential $\mathcal{U}_{\text{gh-fr}}(g_{\mu\nu},\eta_{\mu\nu})$ of \cite{Giga_Resum}. However, because of the transcendental nature of the equation
\begin{eqnarray}
&&\left[\text{det}(g^{\mu\alpha}\hat g_{\alpha\nu})\right]^{1/2}-2\left[\text{det}(g^{\mu\alpha}\hat g_{\alpha\nu})\right]^{1/4}\times\nonumber\\
&&\text{cosh}\left( \frac{1}{2\sqrt{3}}\sqrt{\text{Tr}[\text{ln}(g^{\mu\alpha}\hat g_{\alpha\nu})]^2-\frac{1}{4}[\text{Tr ln}(g^{\mu\alpha}\hat g_{\alpha\nu} ) ]^2} \right)+1=\frac{1}{3}\mathcal{U}_{\text{gh-fr}},
\end{eqnarray}
one seems to be forced to do the perturbative analysis.
\section{Generalization to Multi-D}
Another natural generalization of AED is to consider a multi-dimensional auxiliary space instead of a one-dimensional one. In this section we investigate the special case where a rotationally invariant condition is imposed on the boundaries of the $d$-dimensional bulk, and show that it gives the same 4D effective action as the original AED model.
Using Cartesian coordinates $y_a$, where $a = 1,2,\dots,d$, the action of the extra dimensions generalizes to
\begin{eqnarray}
\label{S_d_dim}
S_d=\int d^d y \sqrt{-\tilde{g}}(k_{a\mu \nu}^2-k_a^2),
\end{eqnarray}
where $k_{a\mu\nu}\equiv\frac{1}{2}\partial_a\tilde g_{\mu\nu}$ and summation on repeated indices is implied. We wish to impose a spherically symmetric boundary condition; however, one should be cautious in this case since the solutions of the Laplace equation are singular at the origin. This is a generic feature of frameworks with co-dimension $>1$ branes \cite{derham-tolley}, and it is well known that the singularity can be regularized by assigning a finite width $\epsilon$ to the brane. It suffices to take the radial coordinate, defined as $r\equiv\sqrt{y_ay_a}$, to range in the interval $r\in [\epsilon,1]$. Therefore the region $r<\epsilon$ is excluded from the integral in \eqref{S_d_dim}, and the boundary conditions are modified to
\begin{eqnarray}
\tilde g_{\mu\nu}(x,r=\epsilon)=g_{\mu\nu},\qquad\tilde g_{\mu\nu}(x,r=1)=\hat g_{\mu\nu}\,.
\end{eqnarray}
From this boundary condition it follows that the solution to the bulk equations of motion will also be spherically symmetric, and therefore \eqref{S_d_dim} can be simplified considerably by going to spherical coordinates. Integrating over the angular variables and dropping an unimportant normalization constant, which can always be absorbed in the definition of the graviton's mass, we obtain
\begin{eqnarray}
\label{r_d}
S_d=\int_\epsilon^1 r^{d-1}dr \sqrt{-\tilde{g}}\left[(k_{r \mu \nu})^2-k_r^2\right],
\end{eqnarray}
with $k_{r\mu\nu}\equiv\frac{1}{2}\partial_r\tilde g_{\mu\nu}$. However, by a change of the integration variable to
\begin{eqnarray}
du=\frac{dr}{r^{d-1}}\,,
\end{eqnarray}
and a rescaling such that $u$ ranges in $[0,1]$, the action \eqref{r_d} transforms into that of the single extra dimension model \eqref{lagr}. Hence the spherically symmetric multi-D analog is equivalent to the original model with one extra dimension. However, this equivalence does not generically survive modifications of the bulk action; see appendix B for more details.
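To make the equivalence explicit, note that $du=dr/r^{d-1}$ implies $\partial_r=r^{-(d-1)}\partial_u$, and hence $k_{r\mu\nu}=r^{-(d-1)}k_{u\mu\nu}$ with $k_{u\mu\nu}\equiv\frac{1}{2}\partial_u\tilde g_{\mu\nu}$, so that the integrand of \eqref{r_d} combines into
\begin{eqnarray}
r^{d-1}dr\left[(k_{r\mu\nu})^2-k_r^2\right]=r^{d-1}\left(r^{d-1}du\right)r^{-2(d-1)}\left[(k_{u\mu\nu})^2-k_u^2\right]=du\left[(k_{u\mu\nu})^2-k_u^2\right],
\end{eqnarray}
which reproduces the $u$-integrand of \eqref{lagr} (the factor $\sqrt{-\tilde g}$ contains no derivatives and is untouched by the change of variables); the subsequent affine rescaling of $u$ to the interval $[0,1]$ only produces an overall constant that, as before, can be absorbed in the definition of the graviton's mass.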
\section{Acknowledgements}
We are grateful to Gregory Gabadadze for his guidance, useful discussions and support. LB and MM are supported respectively by MacCracken and James Arthur Graduate Fellowships at NYU.
\renewcommand{\theequation}{A-\Roman{equation}}
\setcounter{equation}{0}
\section*{Appendix A. Covariantization by $N_\mu$}
As first pointed out in \cite{Claudia}, AED models with a fixed (e.g. $\eta_{\mu\nu}$) boundary condition at $u=1$ have their own natural candidate for playing the role of St\"uckelberg fields, namely the ADM shift vector $N_\mu$. Consider the Einstein-Hilbert action in 5 dimensions with metric $G_{MN}$ and decompose it on spatial slices of constant 5$^{\text{th}}$ coordinate using the ADM parameters: lapse $N=(G^{55})^{-1/2}$, shift $N_\mu = G_{5\mu}$, and 4D metric $\tilde g_{\mu\nu}\equiv G_{\mu\nu}$
\begin{eqnarray}
S=2M_5^2 \int d^4x du \sqrt{-\det(\tilde g_{\mu\nu})}N [R^{(4)}(\tilde g) - K^{\mu\nu}K_{\mu\nu}+ K^2]\,,
\end{eqnarray}
where
\begin{eqnarray}
K_{\mu\nu}=\frac{1}{2N}(\partial_u \tilde g_{\mu\nu}- D_\mu N_\nu - D_\nu N_\mu)\,,
\end{eqnarray}
all indices are raised using $(\tilde g^{-1})^{\mu\nu}\equiv\tilde g^{\mu\nu}$, and $D_\mu$ is the covariant derivative with respect to the 4D metric. The action is invariant under the full 5D diffeomorphisms and in particular the subclass of $u$-dependent 4D diffeomorphisms
\begin{eqnarray}
\label{diff}
u\to u'=u\,,\qquad x^\mu \to {x'}^\mu(x,u)\,,
\end{eqnarray}
under which $N$ and $R^{(4)}$ behave as scalars which implies that $K_{\mu\nu}$ must transform as a covariant tensor.
Likewise, we can restore the $u$-dependent diffeomorphisms by replacing $k_{\mu\nu}$ in the action of AED with $K_{\mu\nu}=(\partial_u \tilde g_{\mu\nu}/2- D_{(\mu }N_{\nu)})$ and stipulating that $N_\mu$, which is now some auxiliary field, transforms the same way as the 5D gravity's shift vector does, namely
\begin{eqnarray}
N^\mu \to N^\nu \frac{\partial x'^\mu}{\partial x^\nu} +\partial_u x'^\mu\,.
\end{eqnarray}
Having restored this class of diffeomorphisms, the action is now invariant if one reparametrizes the 4D metric $g_{\mu\nu}$ on the $u=0$ brane but keeps the other boundary fixed at $\tilde g_{\mu\nu}(x,u=1)=\eta_{\mu\nu}$. In other words, $N_\mu$ covariantizes the 4D effective Lagrangian, and can be regarded as the St\"uckelberg field. This covariantization fails to work in the more general case where the boundary condition at $u=1$ ($\hat g_{\mu\nu}$) depends on $g_{\mu\nu}$, because now a reparametrization at the $u=0$ brane changes $\hat g_{\mu\nu}$, but not necessarily in the manner of a coordinate transformation.
In terms of $N_\mu$ it is easy to understand the absence of the cubic ghost and the emergence of the quartic one: Working around Minkowski where $N_\mu$ is also taken to be small, the first order bulk equations of motion become
\begin{eqnarray}
&\partial_u^2 \tilde h_{\mu\nu} - 2 \partial_u \partial_{(\mu}N_{\nu)}=0\,,&\\
&\partial_\sigma \partial_u (\tilde h^\sigma_\mu-\delta^\sigma_\mu \tilde h)-\partial^\sigma(\partial_\sigma N_\mu - \partial_\mu N_\sigma) = 0\,,&
\end{eqnarray}
which are solved by $\tilde h^{(1)}_{\mu\nu}=(1-u) h_{\mu\nu}$ (as in the non-covariant case \eqref{H1}), and an $N_\mu$ that is constant along the $u$ direction. This linear solution, as before, is sufficient to obtain the cubic effective action, which consequently contains at most two powers of $N_\mu$, because there are originally two $N_\mu$'s present in the action and $\tilde h^{(1)}_{\mu\nu}$ does not contain any. This explains the absence of the cubic ghost, since there cannot be any cubic self-interaction of the helicity-0 mode, which is contained completely in the St\"uckelberg field $N_\mu$ in the decoupling limit \cite{Claudia}. This argument, however, breaks down beyond that order because higher order solutions will necessarily contain $N_\mu$ (e.g. $\tilde h^{(2)}_{\mu\nu}\supset {N_\mu}^2 $), which upon reduction to 4D results in quartic and higher order terms in $N_\mu$, explaining the presence of the interaction \eqref{pi_4}.
\renewcommand{\theequation}{B-\Roman{equation}}
\setcounter{equation}{0}
\section*{Appendix B. AED with 5D Gauss-Bonnet Term}
In this section we extend the framework of the auxiliary extra dimension (AED) by the addition of terms descending from the Gauss-Bonnet (GB) action. The latter, being the only other five-dimensional Lovelock invariant besides the Einstein-Hilbert term, is given by
\begin{eqnarray}
\mathcal{L}_{GB}=\frac{\kappa}{4}\left(R^2-4R_{AB} R^{AB}+R_{ABCD}R^{ABCD}\right),
\label{GB}
\end{eqnarray}
with $A,\ldots=0,1,2,3,5$ and $\kappa$ being an arbitrary constant.
In order to find the AED analog of \eqref{GB} we impose $g_{\mu 5}=0$ and $g_{55}=1$ on the metric tensor. Furthermore, since we are not interested in generating derivative self-interactions for the graviton, we remove terms containing four-dimensional derivatives. As a result \eqref{GB} reduces to
\begin{eqnarray}
\mathcal{L}_{GB}^{AED}=\kappa[g^{\mu \nu} \partial_5 k_{\mu \nu} \left( k^2-k_{\alpha \beta}^2 \right) +2\partial_5 k_{\mu \nu} \left( k^{\mu \alpha} k_{\alpha}^{\nu}-k^{\mu \nu}k \right) \nonumber \\
+\frac{1}{4} \left( -14k_{\mu \nu}^4+16 k k_{\mu \nu}^3+7k_{\mu \nu}^2 k_{\alpha \beta}^2-10k^2k_{\mu \nu}^2+k^4 \right) ]\,.
\label{GBAED}
\end{eqnarray}
However, from the definition of $k_{\mu \nu}$ it follows that there are terms with more than one derivative per field in \eqref{GBAED}; hence, one needs to introduce a boundary term in order for the variational principle to be well-defined. Including those we get the following modification to the Einstein-Hilbert Lagrangian
\begin{eqnarray}
\label{L_GB}
\mathcal{L}_{\rm{mass}} = M_{pl}^2m^2\left[ \left. \frac{\kappa}{3} \sqrt{-g} (2k_{\mu \nu}^3-3k k_{\mu \nu}^2+k^3)\right \vert _{u=0}^{u=1}
- \int_{0}^{1} du\sqrt{-\tilde g}(k_{\mu \nu}^2-k^2+\mathcal{L}_{GB}^{AED})\right]\!,\nonumber \\
\end{eqnarray}
In order to integrate out the $u$-dimension, one has to find the solution to the bulk equations of motion, which now generalize to
\begin{eqnarray}
g_{\mu \nu} \partial _u k-\partial_u k_{\mu \nu}=-\frac{1}{2}g_{\mu \nu}(k^2+k_{\alpha \beta}^2)-2k_{\mu \alpha}k^\alpha_\nu+kk_{\mu \nu}\nonumber \\
+\kappa \left[ \partial_u^2 k_{**}k^{**}+\partial_u k_{**}k^{**} k_{**} +k_{**}k^{**} k_{**} k^{**} \right]_{\mu \nu}\,,
\label{eqGB}
\end{eqnarray}
where we have presented the contribution of the GB term schematically since it is sufficient for our purposes, as will be seen shortly. We choose the boundary condition to be $\tilde g_{\mu\nu}(x,u=1)=\eta_{\mu\nu}$.
The easiest way of solving \eqref{eqGB} for $\tilde{h}_{\mu \nu}(x,u)\equiv \tilde g_{\mu \nu}(x,u)-\eta_{\mu \nu}$ is to proceed order-by-order in the four-dimensional metric perturbation $h_{\mu \nu}(x)\equiv \tilde{h}_{\mu \nu}(x,u=0)$. One immediately notices that the newly added terms proportional to $\kappa$ in \eqref{eqGB} start to change the solution only at the $4^{th}$ order in $h_{\mu \nu} (x)$. Using therefore the old linear solution \eqref{H1}, it follows that the Gauss-Bonnet term does not contribute to the cubic 4D effective action: the cubic bulk terms in \eqref{GBAED} are proportional to $\partial_u^2 H^{(1)}$, which vanishes, while the boundary terms in \eqref{L_GB} evaluated to 3\rd order are identical at $u=0$ and $u=1$ and cancel each other.
As in \S\ref{quartic}, to find the 4D action up to 4$^{th}$ order one only needs the 2\nd order bulk solution \eqref{H2} which leads to the following four-dimensional Lagrangian
\begin{eqnarray}
\mathcal{L}&=&M^2_{pl}\sqrt{-g}R-\frac{m^2M^2_{pl}}{4}\sqrt{-g}\times (h_{\mu \nu}^2-h^2-h_{\mu \nu}^3+\frac{5}{4} hh_{\mu \nu}^2-\frac{1}{4} h^3\nonumber \\
&&+\frac{1}{144}\left( 6(22+3\kappa)h_{\mu \nu}^4-(53+9\kappa)h_{\mu\nu}^2 h_{\alpha\beta}^2 - 12(11+2\kappa)hh_{\mu\nu}^3\right.\nonumber \\
&&\left. +2(29+9 \kappa)h^2 h_{\mu \nu}^2-(5+3\kappa)h^4 \right)+O(h^5)),
\end{eqnarray}
with indices contracted by the Minkowski metric $\eta^{\mu \nu}$. It is easy to see that this will never match the tuned potential of \cite{Giga_FP} for any value of $\kappa$, meaning that the quartic ghost-like pathology of the original model cannot be cured by terms of geometrical origin, since the GB term is the only available one in 5D.
So far we have limited ourselves to potentials motivated by some 5D geometrical construct; however, by giving up that requirement one can naturally generalize the potential term in \eqref{lagr} to
\begin{eqnarray}
\label{gen_pot}
V(g_{\mu\nu})= {m^2}\int_{0}^{1}du \sqrt{-\tilde g}
\left ( k_{\mu\nu}^2 - k^2 +a_1k_{\mu\nu}^3+a_2kk_{\mu\nu}^2+a_3k^3+\dots\right)\,,
\end{eqnarray}
where now the coefficients at each order should be chosen such that, after reduction to 4D and the introduction of St\"uckelberg fields, the pure helicity-0 interactions add up to a total derivative at each order.
One interesting observation that follows from the $N_\mu$ covariantization (appendix A) is that the n$^{\text{th}}$ order terms in $k_{\mu\nu}$ make no contribution to the (n+1)$^{\text{th}}$ order ghost-like interactions. This is because, as was the case for the cubic effective action in the original model, only the first order solution needs to be substituted in the $(k_{\mu\nu})^n$ terms. After covariantization (i.e. replacing $k_{\mu\nu}$ with $K_{\mu\nu}$), $H^{(1)}_{\mu\nu}$ remains independent of $N_\mu$ and therefore the highest power of $N_\mu$ at (n+1)\st order is $(N_\mu)^n$. This, however, cannot give rise to an (n+1)\st order self-interaction of the helicity-0 mode.
It is also worth mentioning that the equivalence between one and several spherically symmetric auxiliary dimensions does not survive general modifications of the bulk action that include higher powers of $k_{\mu\nu}$. The addition of $k_{\mu\nu}^n$ terms causes the multi-dimensional model to deviate from its co-dimension one counterpart at (n+1)\st order in perturbations. Similarly, the Gauss-Bonnet terms lift this degeneracy because they contain higher powers of $\partial_r$; moreover, there is one new GB term for each extra dimension which can in principle be included in the action.
\renewcommand{\theequation}{C-\Roman{equation}}
\setcounter{equation}{0}
\section*{Appendix C. Fine-Tuning of the Boundary}
There are two sets of coefficients in \eqref{expansion} that do not give rise to the ghost. One of them is given by
\begin{eqnarray}
b^{(1)}_1&=&0,\qquad \forall~ b^{(1)}_2\neq1,\nonumber \\
b^{(2)}_1&=&-b^{(2)}_2=\frac{b^{(2)}_3}{3}+\frac{1}{24} \left(1-4c_3-2b^{(1)}_2+4c_3b^{(1)}_2+{b^{(1)}_2}^2\right),\quad\forall~ b^{(2)}_3,\nonumber\\
b^{(2)}_4&=&\frac{1}{4}(b^{(1)}_2-1)(-1+4c_3+2b^{(1)}_2),\nonumber\\
b^{(3)}_1&=&\frac{b^{(3)}_6}{3}+\frac{1}{48}(8c_3^2(b^{(1)}_2-1)-(b^{(1)}_2-1)(-3+16d_5+3b^{(1)}_2)-4(3+4b^{(1)}_2)b^{(2)}_3\nonumber\\
&&+4c_3(4+(-5+b^{(1)}_2)b^{(1)}_2+4b^{(2)}_3))\nonumber\\
b^{(3)}_2&=&\frac{{b^{(2)}_3}^2}{18(b^{(1)}_2-1)}+\frac{1}{864}((b^{(1)}_2-1)(-96c_3^2+432d_5+12c_3(27-5b^{(1)}_2)\nonumber\\
&&+(b^{(1)}_2-1)(61+5b^{(1)}_2))+24(9-10c_3+16b^{(1)}_2)b^{(2)}_3+288(b^{(3)}_5-b^{(3)}_6))\nonumber\\
b^{(3)}_3&=&-\frac{b^{(3)}_5}{3}-\frac{{b^{(2)}_3}^2}{18(b^{(1)}_2-1)}-\frac{1}{864}((b^{(1)}_2-1)(48c_3^2+144d_5\nonumber\\
&&+12c_3(3+b^{(1)}_2)+(b^{(1)}_2-1)(7+5b^{(1)}_2))+48(c_3+2b^{(1)}_2)b^{(2)}_3)\nonumber
\end{eqnarray}
\begin{eqnarray}
b^{(3)}_4&=&\frac{2{b^{(2)}_3}^2}{3(-1+b^{(1)}_2)}+\frac{1}{72}((b^{(1)}_2-1)(84c_3^2+108d_5+12c_3b^{(1)}_2+(b^{(1)}_2-1)(1+2b^{(1)}_2))\nonumber\\
&&+6b^{(2)}_3(-9+20c_3+4b^{(1)}_2)),\nonumber\\
b^{(3)}_7&=&\frac{b^{(1)}_2-1}{24}(7-72d_5-14b^{(1)}_2+4(-3c_3(3+c_3)+6c_3b^{(1)}_2+{b^{(1)}_2}^2)),\quad \forall~ b^{(3)}_{5,6}.\nonumber
\end{eqnarray}
while the other one is
\begin{eqnarray}
b^{(1)}_1&=&\frac{1}{2}(1-b^{(1)}_2),\qquad \forall~ b^{(1)}_2\neq1,\nonumber \\
b^{(2)}_1&=&-\frac{b^{(2)}_3}{3}-\frac{b^{(1)}_2-1}{24}(-8+16c_3+3b^{(1)}_2),\quad\forall~ b^{(2)}_3,\nonumber\\
b^{(2)}_2&=&\frac{1}{12}((-1+2c_3)(-1+b^{(1)}_2)-2b^{(2)}_3),\nonumber\\
b^{(2)}_4&=&\frac{b^{(1)}_2-1}{4}(-1+4c_3+2b^{(1)}_2),\nonumber\\
b^{(3)}_1&=&\frac{1}{48}(4(b^{(1)}_2-1)(c_3^2+22d_5)-4c_3(13+b^{(1)}_2(-16+3b^{(1)}_2)+4b^{(2)}_3)\nonumber\\
&&+b^{(1)}_2(-23+b^{(1)}_2(5+4b^{(1)}_2)+16b^{(2)}_3)+2(7+6b^{(2)}_3-8b^{(3)}_6)),\nonumber\\
b^{(3)}_2&=&-\frac{7{b^{(2)}_3}^2}{18(-1+b^{(1)}_2)}+\frac{1}{864}((1-b^{(1)}_2)(-85+408c_3^2+1080d_5+12c_3(27+7b^{(1)}_2)\nonumber\\
&&+b^{(1)}_2(14+53b^{(1)}_2))-12(-9+40c_3+20b^{(1)}_2)b^{(2)}_3-144(2b^{(3)}_5+b^{(3)}_6)),\nonumber
\end{eqnarray}
\begin{eqnarray}
b^{(3)}_3&=&\frac{{b^{(2)}_3}^2}{18(-1+b^{(1)}_2)}+\frac{1}{864}((b^{(1)}_2-1)(48c_3^2+144d_5+12c_3(3+b^{(1)}_2)\nonumber\\
&&+(b^{(1)}_2-1)(7+5b^{(1)}_2))+24b^{(2)}_3(2c_3+b^{(1)}_2)-144b^{(3)}_5),\nonumber\\
b^{(3)}_4&=&\frac{2{b^{(2)}_3}^2}{3(-1+b^{(1)}_2)}+\frac{1}{72}((b^{(1)}_2-1)(84c_3^2+108d_5+12c_3b^{(1)}_2+(b^{(1)}_2-1)(1+2b^{(1)}_2))\nonumber\\
&&+6b^{(2)}_3(-9+20c_3+4b^{(1)}_2)),\nonumber\\
b^{(3)}_7&=&\frac{b^{(1)}_2-1}{24}(7-72d_5-14b^{(1)}_2+4(-3c_3(3+c_3)+6c_3b^{(1)}_2+{b^{(1)}_2}^2)),\quad \forall~ b^{(3)}_{5,6}.\nonumber
\end{eqnarray}
These coefficients give the most general boundary condition at $u=1$ for which the theory is ghost-free (in the decoupling limit) up to 4$^{\text{th}}$ order. It is quite straightforward to continue this tuning to arbitrary order.
\section{Introduction}
The routine use of online experiments at Internet firms motivates the development of tools and methods which are robust and reliable when used for thousands of experiments per day, either through the use of manual self-serve tools by non-experts, or automated experimentation techniques like multi-armed bandit optimization.
These experiments commonly have many variants or \emph{arms}; this often leads to the observed best performing arm being an overestimate of the true effect when using standard maximum likelihood estimation~\citep{gelman2012we}.
This behavior, then, can lead to suboptimal inferences about which arm is best.
In this paper, we present a simple and more accurate estimation tool---shrinkage---which is particularly well suited to routinized digital experimentation and which leads to better resulting decisions by hedging more efficiently when the best arm is uncertain.
It is easy to understand, effective, and perhaps most importantly, it is highly robust.
This makes it readily able to serve as a default tool in experimental analyses.
A conception of ``scalability'' is the distinguishing feature of online experimentation.
In this context, that implies a number of particular features: (i) sample sizes range from hundreds to hundreds of millions (ii) the number of treatment groups in an experiment range from a few to hundreds (iii) many practitioners will have no expert training (e.g. they are software engineers, not statisticians) (iv) many business-relevant effects are small in magnitude (v) most experiments are run with a view towards optimization~\citep{letham2018}\footnote{This paper will deal with more traditional experimentation under SUTVA~\citep{imbens2015causal}, rather than difficult to study phenomena such as peer effects~\citep{bakshy2012,eckles2016}}.
This work provides an estimator of causal effects in experiments which aligns with these principles. Respectively, it (i) requires only summary statistics (means and variances) which are already, by necessity, collected for experiments \footnote{This is unlike covariate adjustment~\citep{bloniarz2016,lin2013} which requires additional infrastructure and domain knowledge to implement well~\citep{deng2013improving}.}, (ii) provides larger benefits as the number of similar treatments are increased, (iii) is easy to implement and understand, (iv) increases the accuracy of estimated effects, allowing less uncertainty for the same size of experiment, and (v) finds the best arm faster and more effectively.
The optimization-based perspective on experimentation is not isolated to technology firms, but is also common more broadly within the social sciences~\citep{imai2010,benartzi2017,madrian2014} and elsewhere~\citep{zhang2012robust}.
Whenever there is agreement within a group or organization about what outcomes should be improved, experimentation can be used as a tool to achieve these ends, whether those outcomes are revenue, voting behavior, or public health.
This paper will proceed by first laying out the proposed shrinkage estimator (a standard James-Stein estimator) for the estimation of causal effects as well as an appropriate variance estimator (not previously analyzed in depth in the extant literature).
We demonstrate the consistency of these estimators and then proceed to examine various properties of them.
Next, we introduce a series of seventeen experiments run on Facebook from April to June 2017.
This experimental data will be used to examine the performance of the shrinkage estimators on representative data.
This performance will be evaluated in two broad ways.
First, we will evaluate the performance and properties of the shrinkage estimator in the context of one-shot experimentation, addressing such questions as its accuracy and its coverage.
We will conclude with an evaluation of its performance in sequential experimentation, demonstrating its effectiveness in optimization, a less well-understood application of empirical Bayes.
We find that shrinkage estimation makes sequential decision-making more robust by efficiently exploring arms in the vicinity of the optima.
This thereby increases the likelihood that the best arm will be found and reduces the regret accrued by experimentation, resulting in better sequential decision making.
\section{Shrinkage}
Our setting for the remainder of the paper is that of a randomized control trial, performed online or otherwise.
We consider the experiment in which we observe data from $K$ treatment groups (used interchangeably with `arm').
Our inferential target will be the average response in each group.
For each group, we only observe the sample mean, $m_k = \frac{1}{n_k} \sum_{i:d_i=k} y_i$ where $d_i$ is the $i$th unit's treatment assignment, $n_k$ is the number of units assigned to the $k$th treatment group and $y_i$ is the $i$th unit's observed outcome.
We also observe the standard error of this quantity.
Since we assume a simple setting with randomization and no interference / spillovers, the sample means are unbiased for our target parameters, $\mu_k = \mathbb{E}[y_i(k)]$ \citep{imbens2015causal}.
Our goal is to take these highly aggregated quantities and construct an estimator better than $m_k$ by using the fact that each of these true effects share some underlying common distribution with a central tendency.
That is, we wish to improve our estimation without using any additional, difficult to acquire unit-level information.
We will do this using shrinkage.
To motivate our shrinkage estimator, we assume that the observed sample means $m_k$ are drawn from a normal distribution centered at $\mu_k$ with variation $\sigma^2$.
That is, we assume that the sampling distribution of $m_k$ is actually normal.
Of course, this is a fairly weak assumption, as we know that the Central Limit Theorem guarantees that this will be true asymptotically.
That is, we are simply assuming that our sample size is large enough that the Central Limit Theorem holds.
Note also that we are assuming homoskedasticity amongst our various $m_k$.
Similar derivations are straightforward in the case of variances that differ by treatment group, but those will not be addressed here.
The problem of estimating $K$ means, $m_k$, is very familiar to students of statistics.
\citet{stein1956} shows that, in terms of aggregate accuracy (e.g. compound mean squared error), the $m_k$ for $K > 3$ are inadmissible for estimating the underlying means $\mu_k$ of a set of normally distributed variables.
That is, there is guaranteed to be a better estimator of $\mu_k$ than the maximum likelihood-based approach.
In a certain sense, this isn't surprising -- one of the reasons we analyze experiments with the sample means is that this allows for \emph{unbiased} estimation of causal effects.
This formulation, however, clearly demonstrates that while these estimates are efficient among the class of unbiased estimators, they are not the best possible estimators if we broaden our scope to include estimators which admit small amounts of bias in exchange for greater overall accuracy.
In cases like online experimentation, trading off small, well-understood biases for greater accuracy is likely a trade worth making.
Thus, we motivate an improved estimator which is guaranteed to be both consistent and more accurate than the na\"ive estimator.
In particular, the estimator we will focus our attention on is the positive-part James-Stein estimator~\citep{efron1971,efron1975}:
\begin{equation}
\label{ass:normal}
m_k^{JS} = \bar{m} + (1 - \xi_k) (m_k - \bar{m})
\end{equation}
where $\bar{m}$ is the mean of all $m_k$, $\sigma_k$ is the standard error of the $k$th sample mean, and where
\[
\xi_k = \min\left(\sigma_k^2 \frac{K-3}{s^2},\, 1\right),
\]
with $s^2 = \sum_{k=1}^K (m_k - \bar{m})^2$. We replicate a standard derivation of this estimator in appendix~\ref{app:bayes_basic}.
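For concreteness, these point estimates can be computed directly from per-arm summary statistics. The following is a minimal sketch (Python/NumPy; the function and variable names are ours and purely illustrative) of equation~\ref{ass:normal}:
\begin{verbatim}
import numpy as np

def james_stein(m, se):
    """Positive-part James-Stein estimates from per-arm sample means `m`
    and standard errors `se` (length-K arrays); assumes K > 3."""
    m, se = np.asarray(m, float), np.asarray(se, float)
    K = len(m)
    m_bar = m.mean()                     # estimated common mean
    s2 = np.sum((m - m_bar) ** 2)        # dispersion of the observed means
    xi = np.minimum(se ** 2 * (K - 3) / s2, 1.0)   # per-arm shrinkage factor
    return m_bar + (1.0 - xi) * (m - m_bar)
\end{verbatim}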
With a fixed number of arms, fixed $\mu_k$, and increasing sample size (allocated with nonzero probability to each arm), $\frac{K-3}{s^2}$ will converge to a constant, while each $\sigma_k^2$ will asymptotically approach zero.
Thus, $\xi_k$ (the amount of shrinkage) will approach zero by Slutsky's theorem. This implies that the estimator will converge to the same limit as the unbiased MLE.
In finite samples, some amount of shrinkage will be performed so as to increase accuracy, but this will be reduced as samples increase.
Typical treatments of James-Stein estimators treat $\bar{m}$ and $\frac{s^2}{K-3}$ as fixed and known a priori~\citep{efron2012}.
Thus, the most commonly used variance estimator for equation~\ref{ass:normal} would be, simply:
\begin{equation}
\label{eq:varbasic}
\mathbb{V}(m_k^{JS}) = (1-\xi_k) \sigma_k^2
\end{equation}
This shows that with a known common mean, as the amount of shrinkage (indicated by $\xi$) increases, the variance mechanically decreases to zero.
In our setup, however, these quantities are not known, but estimated, so it is extremely undesirable for the variance to converge to zero, as it will lead to strongly anti-conservative inference.
We can thus construct a variance estimator which incorporates the necessary additional sources of estimation error as follows:
\begin{equation}
\label{eq:varbetter}
\mathbb{V}(m_k^{JS}) \approx (1-\xi_k) \sigma_k^2 + \frac{\xi_k s^2}{K} + \frac{2 \xi_k^2 (m_k - \bar{m})^2}{K-3}
\end{equation}
There are a few things to note about this expression.
First, the first term is identical to the variance estimator in equation~\ref{eq:varbasic} that takes the common distribution as known.
This term represents the contribution of the uncertainty in the estimation of $m_k$.
The second term incorporates the uncertainty from the estimation of $\bar{m}$.
The final term adds in the uncertainty from the dispersion of the effects.
We derive this expression for the variance from Bayesian foundations with uninformative priors in appendix~\ref{app:bayes_full}.
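A corresponding sketch of the variance estimator in equation~\ref{eq:varbetter}, again purely illustrative and using the same notation, is:
\begin{verbatim}
import numpy as np

def james_stein_variance(m, se):
    """Approximate variance of the positive-part James-Stein estimates."""
    m, se = np.asarray(m, float), np.asarray(se, float)
    K = len(m)
    m_bar = m.mean()
    s2 = np.sum((m - m_bar) ** 2)
    xi = np.minimum(se ** 2 * (K - 3) / s2, 1.0)
    return ((1.0 - xi) * se ** 2                           # uncertainty in m_k
            + xi * s2 / K                                  # uncertainty in the common mean
            + 2.0 * xi ** 2 * (m - m_bar) ** 2 / (K - 3))  # uncertainty in the dispersion
\end{verbatim}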
The clearest objection to these formulations of a James-Stein estimator is that they will perform less well when the distribution of effects is non-normal.
For instance, if effects are mostly near zero with some outlying `true' effects, then simple James-Stein will tend to perform poorly.
This motivates the use of a limited translation variant (\`a la \citet{ghosh2009}) of the James-Stein estimator which reduces or eliminates shrinkage for outlying arms.
In what follows, we prefer to be more conservative in our estimates for extreme arms in order to best insulate us from excessive optimism in the face of uncertainty.
Thus, the remainder of this paper will focus on the benefits of the estimators in equations \ref{ass:normal} and \ref{eq:varbetter}.
\section{Experiments}
In the simulation studies that follow, we take the means and variances from seventeen recent routine online experiments conducted on Facebook intended to optimize content in the normal course of operation of the platform.
These experiments appear at the top of News Feed as in figure \ref{fig:example_condition}.
These experiments were full-factorial \citep{box2005statistics} experiments testing different factors like wording, image, button text and layout of ``calls to action''.
That is, each of the factors were independently randomized, generating an arm for every possible combination of (for instance) wording and image.
In general, the substance of these experiments is mundane: encouraging users to upgrade their app versions, update their email addresses or (as in figure \ref{fig:example_condition}) donate to a charity of one's choice.
Treatments were essentially persuasion strategies for inducing individuals to take an action (the action varied by experiment), and the outcome was whether the user took the desired action or not (called a conversion).
In the example in figure \ref{fig:example_condition}, the action would be whether the user chose to create a charitable fundraiser or not.
The number of arms in these experiments varied from a minimum of 3 treatment groups to a maximum of 72 groups (see Figure~\ref{fig:num_arms}).
The sample sizes varied from 3000 to over 5 million (see Figure~\ref{fig:num_users}).
Combined, these experiments consisted of over 20 million individuals.
The typical conversion rates were less than 10\%, with the sample-size-weighted average being approximately 2.5\% (see Figure~\ref{fig:conversion}).
Given the factorial structure (and relative unlikelihood of there being strongly interactive effects), we wouldn't expect any single arm to be vastly superior to all other arms.
As such, we would expect there to be at least some central tendency of effect sizes and, therefore, shrinkage to be effective.
Our evaluations focus on three dimensions: accuracy (measured by mean squared error), frequentist coverage and sequential decision making (measured by frequency that the best arm is played and regret).
\begin{figure}
\begin{center}
\begin{subfigure}[t]{0.225\textwidth}
\begin{center}
\fbox{\includegraphics[width=.8\textwidth]{images/quick_promotion_example.png}}
\end{center}
\caption{Example messaging experimental condition. Factors in the factorial design are elements like image (1), title (2), text (3), and button text (4).}
\label{fig:example_condition}
\end{subfigure}
\begin{subfigure}[t]{.225\textwidth}
\begin{center}
\includegraphics[width=.9\textwidth]{images/pres_number_of_arms.pdf}
\end{center}
\caption{\centering Number of arms in each experiment.}
\label{fig:num_arms}
\end{subfigure}
\begin{subfigure}[t]{.225\textwidth}
\begin{center}
\includegraphics[width=.9\textwidth]{images/pres_number_of_units.pdf}
\end{center}
\caption{\centering Kernel density estimate of sample size in each experiment.}
\label{fig:num_users}
\end{subfigure}
\begin{subfigure}[t]{.225\textwidth}
\begin{center}
\includegraphics[width=.9\textwidth]{images/pres_conversion_rate.pdf}
\end{center}
\caption{\centering Kernel density estimate of conversion rate (for each arm and experiment), weighted by sample size.}
\label{fig:conversion}
\end{subfigure}
\end{center}
\caption{Summary information on the experiments used in this study.}
\end{figure}
\subsection{Static Simulations}
In the simulations that follow, we will treat the estimates from these experiments as the ground truth.
Our simulation studies will redraw data from a (Normal) parametric bootstrap downsampled to a significantly smaller sample size (20\%).
Note, however, that the true distribution of effects will still be non-normal.
This will generate similar data to that which we encounter in our day-to-day experimentation at Facebook.
The performance of our methods on this data, then, will do a good job of indicating how well we will do when we apply them to future experiments.
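Concretely, each simulation replicate can be drawn from per-arm summary statistics alone. The sketch below is our own illustrative simplification; in particular, inflating the standard errors by the square root of the downsampling fraction is an assumption about how the 20\% downsampling is reflected in the bootstrap:
\begin{verbatim}
import numpy as np

def bootstrap_replicate(mu_true, se_full, frac=0.2, rng=None):
    """Draw one parametric-bootstrap replicate of the per-arm sample means,
    treating mu_true as ground truth and inflating the standard errors to
    mimic an experiment downsampled to a fraction frac of its original size."""
    rng = np.random.default_rng() if rng is None else rng
    se_down = np.asarray(se_full, float) / np.sqrt(frac)
    return rng.normal(loc=mu_true, scale=se_down), se_down
\end{verbatim}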
\begin{figure}
\begin{center}
\includegraphics[width=.65\textwidth]{images/pres_shrinkage.pdf}
\end{center}
\caption{Empirical Bayes shrinks extreme arms the most. Each point is a single point estimate of one arm from a single experiment. Arms lying near the dashed line have less shrinkage than those far from the line. Displayed effects are relative to the overall average within the experiment. Colors indicate experiments.}
\label{fig:shrinkage_by_arm}
\end{figure}
The first thing to note is the amount of shrinkage that is performed on each arm in Figure~\ref{fig:shrinkage_by_arm}.
All else equal, the most extreme values are shrunk the most towards the middle of the distribution (note that the sample sizes vary by arm, leading to some non-linearities).
This is an extremely desirable property.
Outlying values are, in general, likely to be over- or under-estimates of the true effect.
This is true through the same mechanism as the statistical significance filter: conditioning on observing a treatment as the `best' among a set of treatments, it is more likely to be an overestimate of the true effect.
For example, it may be readily observed that the expectation of the maximum of two identical Normal distributions is larger than the expectation of either~\citep{nadarajah2008}.
In practice, particularly when our experimental objective is optimization, we focus our attention on the extreme arms, but more important than estimating the magnitude of this effect with minimal error, is discovering \emph{which arm is best}.
By concentrating shrinkage on these extreme arms, we guard against the multiple comparisons issue omnipresent in large online experiments.
This is true for the same reason that estimates from multilevel models do not tend to suffer from the same problems around multiple comparisons as do the maximum likelihood estimates~\citep{gelman2012}.
The overall performance of the method in reducing compound mean squared error (that is, the sum, for each experiment, of the MSE of the arms) can be seen in Figure~\ref{fig:mse_overall}.
The MSE does not increase for any experiment under examination.
Indeed, most see around a 10\% reduction in MSE (with an average reduction of around 15\%).
Some experiments, however, see significantly greater improvements in accuracy; experiments 14 \& 15, for instance, reduce MSE on the order of 50\%.
These results match exactly the sense in which a James-Stein estimator renders the MLE inadmissable.
\begin{figure}
\begin{center}
\includegraphics[width=.6\textwidth]{images/mse_overall.pdf}
\end{center}
\caption{All experiments gain accuracy from empirical Bayes. This plot shows the change in compound mean squared error for each experiment attained by switching from the MLEs to the empirical Bayes estimates. Standard errors of the ratio are calculated using the Delta method.}
\label{fig:mse_overall}
\end{figure}
Figure~\ref{fig:mse_by_num_arms} demonstrates how the accuracy enhancing benefits accrue as a function of the number of arms.
To generate this figure, we simply subsampled (without replacement) a number of the original arms in the experiment and calculated compound MSE as above.
This figure demonstrates that a large number of arms are not necessary for empirical Bayes to substantially improve accuracy.
Indeed, so long as there are three arms empirical Bayes provides greater accuracy than the MLEs, and by around 10 groups, the gains are nearly as large as they'll ever be.
\begin{figure}
\centering
\includegraphics[width=.6\textwidth]{images/mse_by_num_arms.pdf}
\caption{Empirical Bayes confers benefits even with relatively few arms. The relative gain in compound MSE varies as a function of the number of arms. Each line represents a series of simulations with a variable number of arms derived a single real-world experiment.}
\label{fig:mse_by_num_arms}
\end{figure}
Clearly, if the goal of estimation is to reduce the overall error across all arms, shrinkage estimation is to be greatly preferred over the MLE.
But in fact, there are stronger statements we can make.
85\% of arms see a reduction in MSE from using empirical Bayes -- therefore, the \emph{modal} and \emph{median} arm also see gains.
Such reductions in MSE are of particular importance when seeking to understand tradeoffs between multiple metrics.
In such cases, it's important to have as much accuracy as possible for as many arms as possible.
Nevertheless, a method which provides small improvements in MSE for most arms, but a huge loss in MSE for the most important arms might not be a reasonable tradeoff to make.
Figure~\ref{fig:mse_by_arm} shows the change in MSE from using empirical Bayes for each arm in each experiment.
What can be seen is that only extreme arms tend to suffer in terms of MSE, while arms toward the center of the distribution tend to reap substantial benefits.
Overall, around 90\% of arms improve their MSE from using empirical Bayes.
Of course, this increase in MSE for the extreme arms is also exactly why we are better insulated against type M errors (overestimation of effect size)---estimates are brought more in line with the central tendency.
The cost is that we may erroneously believe such arms to be more similar to the other arms than they truly are.
In practice, the desirability of this tradeoff will be based on how it affects the decisions made, analyzed in section~\ref{sec:dynamic}.
For some experiments, (like experiment 14) there is no real tradeoff -- every arm has lower MSE than the average MSE from the MLE.
That said, other experiments (such as experiment 13) have both arms with substantially lower MSE and some with higher MSE.
\begin{figure}
\begin{center}
\includegraphics[width=.6\textwidth]{images/mse_by_fx.pdf}
\end{center}
\caption{The majority of arms are estimated more accurately with empirical Bayes. Each point shows the relative change in MSE from switching from the MLE to EB for a single arm in one experiment. The x-axis is the ratio of the arm's effect size divided by the standard deviation of effect sizes in its experiment. Colored by experiment.}
\label{fig:mse_by_arm}
\end{figure}
Point estimates are not everything; we also want to be sure that our measures of uncertainty are appropriate, particularly given the improved variance estimator we have provided in equation~\ref{eq:varbetter}.
We examine this through the frequentist properties of the shrunk estimates.
We examine the coverage (the proportion of time our 95\% confidence intervals contain the ground truth) attained for the estimates of individual arms, displayed in Figure~\ref{fig:coverage_by_arm}.
\begin{figure}
\begin{center}
\includegraphics[width=.6\textwidth]{images/coverage_by_fx.pdf}
\end{center}
\caption{Most arms retain strong frequentist coverage through the use of empirical Bayes. Each point is a single arm in one experiment. The realized coverage for each arm based off of a nominal 95\% confidence interval. The x-axis is the ratio of the arm's effect size divided by the standard deviation of effect sizes in its experiment. Colored by experiment.}
\label{fig:coverage_by_arm}
\end{figure}
We can see that most arms actually have higher than nominal coverage (60\% of points have higher than nominal coverage, and 90\% have at least 90\% coverage).
By comparing Figure \ref{fig:mse_by_arm} to figure \ref{fig:coverage_by_arm}, we can see that the same extreme arms which received the most shrinkage tend to suffer most in terms of both MSE and coverage.
These are the arms which move their means most through shrinkage, so it is unsurprising that they suffer in terms of their frequentist properties.
That said, however, this reduction in coverage is not entirely bad.
We are particularly concerned with reducing the risk of type M errors, in which we estimate the effects of an arm too optimistically~\citep{gelman2014beyond}.
The reduced coverage we observe is intricately connected with the results observed in the following section.
\subsection{Dynamic Simulations}
\label{sec:dynamic}
In this section, we seek to validate the performance of shrinkage estimation when combined with sequential experimentation.
First, an aside on our methodology for performing sequential experimentation.
The method we use for sequential experimentation is batch-based Thompson sampling~\citep{scott2010}.
Thompson sampling is a common means of navigating the exploration-exploitation tradeoff in practice which is both simple and easy to understand: at each step, we assign units in proportion to our posterior probability that each arm is the best.
The process we follow is to first run a large ``exploration'' experiment with a wide array of different arms (for instance, a large full-factorial experiment).
After this phase, we perform Thompson sampling to choose the appropriate arms for the next phase.
This can be estimated simply by drawing samples from the posterior distribution of all arms.
That is, we take each arm's mean as distributed independent normal by the CLT.
Thompson sampling then amounts to sampling a large number of times from $K$ independent normals.
We build a posterior by recording, for each set of $K$ samples, which arm's sample was largest.
The distribution of maximal arms, then, is the distribution over treatment assignment to be used in the following batch of the experiment.
In addition to working well in practice~\citep{chapelle2011}, Thompson sampling has more recently been shown to have good theoretical properties as well for minimizing discrepancy in the outcome between the best possible arm and the chosen arms in a sequential experiment~\citep{agrawal2012,agrawal2013}.
In these simulations, because we can't assume that the normality assumption will hold (due to the small conversion rates relative to sample size), we will form our posterior for Thompson Sampling using beta distributions.
After each batch, that is, we will fit a beta distribution to the set of $m_k$ in the simulation, and add in the estimated shape parameters to the observed successes and failures for each arm (which make up the shape parameters under maximum likelihood estimation).
We then draw the posterior distribution of optimal arms from these Beta distributions with empirical Bayes priors.
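A minimal sketch of this batch-allocation step is given below (Python/NumPy; the method-of-moments Beta fit and all names are our own illustrative choices, and the fit assumes the dispersion of the observed rates is small enough to yield positive shape parameters):
\begin{verbatim}
import numpy as np

def thompson_weights(successes, failures, n_draws=100000, rng=None):
    """Allocation probabilities for the next batch: an empirical-Bayes Beta
    prior is fit to the observed per-arm rates, then Thompson sampling is
    carried out by Monte Carlo. Assumes every arm has some observations."""
    rng = np.random.default_rng() if rng is None else rng
    s, f = np.asarray(successes, float), np.asarray(failures, float)
    m = s / (s + f)                               # per-arm conversion rates m_k
    mean, var = m.mean(), m.var(ddof=1)
    common = mean * (1.0 - mean) / var - 1.0      # method-of-moments Beta fit
    alpha0, beta0 = mean * common, (1.0 - mean) * common
    draws = rng.beta(alpha0 + s, beta0 + f, size=(n_draws, len(s)))
    best = np.argmax(draws, axis=1)               # best arm in each posterior draw
    return np.bincount(best, minlength=len(s)) / n_draws
\end{verbatim}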
Figure~\ref{fig:ab_improve} shows the improvement of empirical Bayes relative to MLE for constructing a posterior.
That is, the figure plots how (over 500 simulations) cumulative regret (the cumulative difference between the best possible reward and the realized reward) changes by using empirical Bayes.
In Figure~\ref{fig:ab_improve}, the three panels demonstrate the performance in a `best case' at the 2.5th percentile (over simulations), the median case, and a `worst case' at the 97.5th percentile.
Moreover, this figure presents results for two points in time: early in the experiment (25\% through the procedure) and at the end of the experiment.
In the far left panel, it is clear that empirical Bayes provides substantial gains on the order of 10\% to 20\% by the end of the experiment.
In terms of median performance, empirical Bayes results in typical gains of around 10\% at the end of the experiment.
These gains tend to be smaller (but nonzero) early in the experiment, with empirical Bayes accruing fairly similar median regret to the MLEs as each method concentrates on exploration.
Together, these two results are indicative of a more efficient exploration strategy by empirical Bayes which results in a greater chance of successfully finding the optimal arm to play.
In the ``worst case'' shown in the panel on the right, empirical Bayes has relatively similar performance to the MLE.
Crucially, across all three panels there is not a clear example in which the MLE clearly outperforms empirical Bayes.
At worst, there are three cases where the median-case performance of empirical Bayes is a few percentage points below that of the MLE at the end of the experiment.
In two of these cases, empirical Bayes still outperforms the MLE in the best-case analysis.
In short, empirical Bayes improves the typical and the best case performance of sequential optimization.
\begin{figure}
\begin{center}
\includegraphics[width=.65\textwidth]{images/regret_change.pdf}
\end{center}
\caption{Empirical Bayes shrinkage tends to improve best and median-case performance of bandit optimization. Each panel shows how cumulative regret changes when posteriors for Thompson sampling are computed using the empirical Bayes estimator, rather than the MLE. Panels represent the 2.5th, 50th, and 97.5th percentile of cumulative regret over simulations from left to right. Lower regret is better. Comparisons are shown for both the end of the sequential experiment ("End") and 25\% through the experiment ("Early").}
\label{fig:ab_improve}
\end{figure}
The same general trends may be observed along a related metric: allocating units to the best arm~\citep{audibert2010}.
Figure~\ref{fig:best_arm} shows how often Thompson sampling with MLEs and empirical Bayes estimates allocate users into the best treatment group.
Shrinkage is able to ensure that the best arm is played more often than relying on the MLEs alone.
While early in the experiment, empirical Bayes often plays the best arm less often than one would when using the MLEs, by the end of the experiment there are substantial gains.
See, for instance, how in experiments 1 and 5, Thompson sampling on the empirical Bayes estimates provide as much as a 30\% to 40\% increase in the chance of allocating units to the best arm.
For no experiment does empirical Bayes play the best arm less often -- at worst, the use of empirical Bayes performs comparably to the MLEs.
The typical increase in playing the best arm is between 5\% and 10\%.
\begin{figure}
\begin{center}
\includegraphics[width=.65\textwidth]{images/best_arm_change.pdf}
\end{center}
\caption{Empirical Bayes is more likely to play the best arm, particularly in later rounds. Displayed is the probability that the sampling algorithm chooses the best arm for early in the experiment and at the end of the experiment.}
\label{fig:best_arm}
\end{figure}
A comparison to the results on coverage in the previous section (e.g. Figure~\ref{fig:coverage_by_arm}) demonstrates an important fact: it's exactly the experiments with arms that have below-nominal coverage which attain substantial gains in sequential optimization.
In other words, the under-coverage belies the fact that shrinkage \emph{makes better decisions about the best arm} when used within Thompson sampling.
Since our whole exercise is aimed at improving our statistical decisions, this is resoundingly a tradeoff worth making.
\begin{figure}
\begin{center}
\includegraphics[width=.5\textwidth]{images/pr_top_k_lines.pdf}
\end{center}
\caption{Empirical Bayes concentrates exploration on the \emph{set} of the best arms. Displayed is the probability of sampling the top 6 arms as a function of time averaged over 50 simulations. Compared are Thompson sampling using a posterior constructed from the MLEs and from the empirical Bayes estimator.}
\label{fig:pr_top_k}
\end{figure}
It's easy to see why empirical Bayes out-performs the MLEs from Figure~\ref{fig:pr_top_k}.
In this simulation, Experiment 1 was taken as the ground truth, and samples were drawn in batches of 1000 according to Thompson Sampling over 50 simulation iterations.
The proportion of the time each of the top 6 arms were played is plotted.
The early exploration period is shown here.
The benefits in this period from empirical Bayes can be seen very easily as improved exploration among the set of best arms.
While the MLE typically does a good job of playing the very best arm, empirical Bayes does a much better job of concentrating play among the top \emph{four} arms.
This, in turn, means that smart shrinkage avoids some of the common difficulties in Thompson sampling around distinguishing among the very best arms.
In essence, then, Thompson sampling with empirical Bayes shrinkage behaves similarly to algorithms like ``top two Thompson sampling''~\citep{russo2016simple}.
In this algorithm, Thompson sampling incorporates information about both of the top two performing arms in its construction of a posterior of the probability of an arm lying in the top \emph{two} ranks.
This, in turn, results in greater ability to statistically distinguish the two best arms (and thus does better in terms of best arm identification by gaining greater confidence in which arm is best).
Empirical Bayes with standard Thompson sampling, however, chooses the number of top arms to distinguish in a data-adaptive way.
When there is no clear evidence for the superiority of an arm, shrinkage causes the algorithm to hedge more strongly among the \emph{set} of best arms, not merely the top two.
\section{Conclusion}
This paper has introduced an approach which increases the accuracy and reliability of online experimentation in both one-shot and sequential experimentation.
It provides clear gains in accuracy, and it tends to do a better job of optimization and best-arm identification than do the MLEs.
Furthermore, the method is easy to describe and implement with little technical debt (e.g., it works entirely with commonly available summary statistics).
As such, it provides the basis for an agnostic approach to analyzing experimental data that is particularly well suited to the online environment.
Because of this, the empirical Bayes estimators presented here are used as our default estimator for many-armed experiments and automated experimentation services, such as those used to optimize creatives like the ones featured in Figure~\ref{fig:example_condition}.
Interesting avenues of future work might attempt to provide similar methods which provide analogous guarantees while taking into account slightly more structure in the data.
Two such opportunities present themselves.
First, the shrinkage estimator examined here did not take into account the factorial structure observed in the data. A similar approach tailored towards factorial designs could shrink towards a simpler main-effects only model.
Second, online experiments typically collect a wide variety of outcome variables.
Constructing an estimator using a multi-variate shrinkage method like Curds-and-whey~\citep{breiman1997} would likely result in further gains by considering the correlation structure.
More broadly, this paper demonstrates how shrinkage / regularization can greatly benefit the performance of experimentation with very few drawbacks by reining in extreme effect estimates when they differ greatly from those observed in other arms.
These benefits of regularization are well-understood in a machine-learning and regression context, but are underused in online experimentation, despite their clear applicability.
\section*{Acknowledgements}
This research was conducted under the auspices of Adaptive Experimentation, a part of Facebook's Core Data Science team. We thank Steven Howard for feedback and for his help with the derivations in the Appendix.
\bibliographystyle{ACM-Reference-Format}
\subsection{Adapter Networks}
Adapter networks have been introduced in the context of multi-task learning~\cite{rebuffi2017learning}. \citet{houlsby_2019_param_efficient}, and later \citet{pmlr-v97-stickland19a} attach the adapter module to the transformer blocks of a PLM, and learn a task by only updating the adapter parameters, while keeping the PLM's parameters unchanged. These studies show that the highly parameter-efficient adapter approach in general performs on par with fine-tuning all parameters of a BERT model~\cite{devlin_2019_bert} for many tasks.
Other studies investigate various characteristics of adapter-based models such as the parameter efficiency, architectural variations, and transfer learning across tasks. \citet{ruckle2021adapterdrop} show the training efficiency of adapter models in comparison with full model finetuning ($\sim\! 60\%$). \citet{han2021robust} examine the robustness to initialization seeds and training stability of the adapter approach. \citet{DBLP:conf/acl/MahabadiR0H20} propose more parameter-efficient models by sharing adapter parameters across layers, followed by \citet{DBLP:conf/nips/MahabadiHR21}, who introduce even more compact adapter modules. \citet{poth2021pre} investigate the potential knowledge transfer across tasks. Recently, \citet{pfeiffer2021adapterfusion} introduce AdapterFusion, where a fusion layer is defined on top of pre-trained adapter modules, and learns to combine information of adapters via an attention mechanism. This approach, avoiding the common pitfalls of catastrophic forgetting~\cite{parisi2019continual}, enables an effective knowledge transfer across tasks. Our work contributes to this direction by extending the concept of AdapterFusion to the task of bias mitigation via supervised interaction between the downstream task and the protected attribute.
\subsection{Fairness \& Bias Mitigation in NLP}
The existence of biases and stereotypes in PLMs has been reported and discussed in several studies~\cite{nadeem-etal-2021-stereoset, bhardwaj2021investigating,liang2021towards,kirk2021bias,vig2020investigating}. PLMs may even exacerbate these biases in downstream tasks as shown e.\,g. in the context of IR~\cite{rekabsaz2020neural}. To reduce biases, \textit{in-processing} methods -- the focus of this work -- aim at reducing the (cor)relation between the model's internal representations and the protected attributes~\cite{ganhoer2022mitigating}. Debiasing PLMs has been approached for instance by linearly projecting embeddings to a new space that removes correlations to protected attributes~\cite{kaneko2021debiasing,bolukbasi2016man}. In a similar spirit, \citet{guo2022auto} introduce a distribution alignment loss to force the model's outputs to become independent of the protected attribute. \citet{schick2021self} recently show that the encoded information in models can be exploited to spot the biases and hence to penalize them.
Adversarial training is a commonly used method to learn representations invariant to a specific factor of variation. \citet{xie2017controllable} and later \citet{madras2018learning} introduce adversarial learning to the context of fair representation learning, where an adversary network learns to predict the protected attribute, and exploits the gradient of this prediction to remove the protected information using a gradient reversal layer~\cite{ganin2015unsupervised}. Several works further investigate the use of adversarial training for removing demographic information from neural/PLM-based text classifiers~\cite{elazar2018adversarial,barrett2019adversarial,wang2021dynamically}. Notably, \citet{han-etal-2021-diverse} show the benefit of having an ensemble of orthogonal adversaries. Beyond classification, \citet{rekabsaz2021societal} show that by applying adversarial training in the context of IR, one can achieve a more balanced retrieval of documents with respect to the presence of protected attributes in their contents.
As alternatives to adversarial debiasing, other works approach bias mitigation by minimizing an approximated upper bound of the mutual information between the task and the protected attribute~\cite{cheng-etal-2020-improving,colombo2021novel}, through contrastive learning for fair representation disentanglement~\cite{cheng2020fairfil,zhang2021disentangling}, and by introducing list-wise fairness regularization~\cite{zerveas2022mitigating}. While the mentioned methods are applied to the whole model, Iterative Nullspace Projection (INLP)~\cite{ravfogel2020null} achieves debiased representations by finding a linear mapping applied to output embeddings. The linear mapping of INLP and similarly the one of \citet{ravfogel2022linear} offer effective bias mitigation particularly designed for models with a linear decoder, but they are not necessarily suited for cases with non-linear decoders (e.g., a language generation decoder, or non-linear attackers).
Regarding adapter-based debiasing and closely related to our work, \citet{lauscher2021sustainable} recently utilize adapters for debiasing PLMs by training an adapter which removes a protected attribute using counterfactually augmented data. When applied to a downstream task, another adapter is added on top of the debiasing adapter and trained according to the task's objective. While shown effective in practice, in this stacking architecture the adapters in the higher levels inherently depend on the ones in the lower levels; they cannot be learned stand-alone. In contrast, the fusion-based approach of \textsc{DAM}\xspace enables control by learning modular and independent debiasing adapters, which supports flexible plug-in/plug-out on demand.
\subsection{Model Architecture}
\textsc{DAM}\xspace consists of three main parts: task adapter, $k$ debiasing adapters, and fusion. Following \citet{pfeiffer2021adapterfusion}, and as shown in Figure~\ref{fig:ecnoder}, these parts extend the architecture of a transformer block by being added to its last layer.
\paragraph{Adapters} Each adapter is defined as a multilayer perceptron (one hidden layer with a $\text{tanh}$ activation in our experiments), where the hidden layer typically has the same or a smaller dimension than the input. In \textsc{DAM}\xspace, the task adapter and the $k$ debiasing adapters receive the output vector of the preceding transformer components, denoted as ${\bm{u}}$, and apply the corresponding transformations to output the vectors ${\bm{v}}_t$ and ${\bm{v}}_{b_1},...,{\bm{v}}_{b_k}$, respectively.
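A minimal PyTorch sketch of such an adapter is given below; the class name, hidden size, and reduction factor are illustrative assumptions rather than the exact values of our implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class Adapter(nn.Module):
    # Bottleneck MLP: down-project, tanh, up-project back to the
    # transformer hidden size.
    def __init__(self, hidden_size=768, reduction=2):
        super().__init__()
        bottleneck = hidden_size // reduction
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, u):
        # u: output vector of the preceding transformer components
        return self.up(torch.tanh(self.down(u)))
\end{verbatim}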
\paragraph{Fusion} The fusion module combines the outputs of the task and debiasing adapters through the attention mechanism to produce the final output vector. This module receives $\left[{\bm{v}}_t,{\bm{v}}_{b_1},...,{\bm{v}}_{b_k} \right]$ as keys and values, and the ${\bm{u}}$ vector as the query of a single-head multiplicative attention network, and calculates the output as the weighted sum of the value vectors. These weights are calculated as attention scores and form a probability distribution over value vectors. Essentially, the fusion module learns to perform a \emph{linear combination} of the embedding containing information for the task with the embeddings of the debiasing adapters, to provide the final, debiased embedding.
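Continuing the sketch above, the fusion module can be written as a projection-free single-head attention; the shapes and names are again illustrative assumptions (the AdapterFusion layer of \citet{pfeiffer2021adapterfusion} additionally learns query, key, and value projections).
\begin{verbatim}
class Fusion(nn.Module):
    # Single-head multiplicative attention over adapter outputs.
    def __init__(self, hidden_size=768):
        super().__init__()
        self.scale = hidden_size ** -0.5

    def forward(self, u, adapter_outputs):
        # u: (batch, hidden) query vector
        # adapter_outputs: (batch, k+1, hidden) holding
        # [v_t, v_{b_1}, ..., v_{b_k}] as keys and values
        scores = torch.einsum("bh,bnh->bn", u, adapter_outputs) * self.scale
        attn = torch.softmax(scores, dim=-1)  # distribution over adapters
        return torch.einsum("bn,bnh->bh", attn, adapter_outputs)
\end{verbatim}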
\begin{figure}[t]
\centering
\includegraphics[width=0.35\textwidth]{figures/adv_test_small_font.pdf}
\caption{Schematic view of applying \textsc{DAM}\xspace to adversarial bias removal.}
\label{fig:architecture}
\end{figure}
\subsection{Learning On-demand Bias Mitigation}
\textsc{DAM}\xspace introduces a generic approach to bias mitigation and representation disentanglement, and can, in principle, be integrated with any bias removal optimization method, provided that the method defines separate losses for the task and each of the protected attributes. Irrespective of the optimization method, the training procedure of \textsc{DAM}\xspace is the following: first, the task adapter is trained according to the task's objective. Then, one debiasing adapter for each protected attribute is trained using only its own debiasing objective (see next paragraph), without involving the task loss. Finally, all adapters' parameters are kept frozen and the fusion layer is trained using a combination of the task objective and debiasing objectives. Throughout training, the parameters of the underlying PLM remain frozen. In what follows, we describe using \textsc{DAM}\xspace together with the adversarial bias mitigation method as depicted in Figure~\ref{fig:architecture}.
\paragraph{Adversarial Training} Adversarial bias mitigation aims to make the output embedding of a model invariant to the protected attribute, by removing information that allows predicting protected attributes from the latent representation. In this sense, adversarial training follows a \emph{fairness through blindness}~\cite{barocas-hardt-narayanan} approach, where the system is made agnostic of the variations in underlying protected attributes.
To apply adversarial debiasing with \textsc{DAM}\xspace (see Figure~\ref{fig:architecture}) in the context of text classification, the model first encodes the input sequence $x$ into the latent vector ${\bm{z}}$, based on which the corresponding class is predicted using a task classification head. The loss of this prediction, denoted as $\mathcal{L}_{t}$, is defined as cross entropy, and its gradient is used to update the parameters of the task adapter. Similarly, a dedicated classification head is defined for each protected attribute $b_i$, which also receives ${\bm{z}}$ as input, predicts the corresponding protected attribute, and calculates the cross entropy loss function $\mathcal{L}_{b_i}$. A common approach to removing the information of $b_i$ encoded in ${\bm{z}}$ is a gradient reversal layer (GRL)~\cite{ganin2015unsupervised} added before the debiasing head. The GRL multiplies the gradient of $\mathcal{L}_{b_i}$ by a factor of $-\gamma_{i}$, and thereby simplifies the learning process to a standard gradient-based optimization. In \textsc{DAM}\xspace, this reversed gradient of $\mathcal{L}_{b_i}$ is used to learn the parameters of the debiasing adapter corresponding to the protected attribute $b_i$. Once the adapters are trained, the fusion layer is learned jointly according to all task and debiasing objectives, as formulated below:
\begin{equation}
\mathcal{L}_{\text{Fusion}}^{\text{Adv}} = \mathcal{L}_{t}+\sum_{i=1}^{k}{\mathcal{L}_{b_i}}
\end{equation}
Note that, while no weights are used to directly scale the individual loss functions, the effects of bias mitigation losses on model parameters are adjusted via their corresponding $\gamma_i$ hyperparameters.
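To make the mechanics concrete, a minimal sketch of the gradient reversal layer and of the fusion-stage loss is given below; the helper names are ours and the code is only one possible realization.
\begin{verbatim}
import torch

class GradientReversal(torch.autograd.Function):
    # Identity in the forward pass; multiplies the incoming gradient
    # by -gamma in the backward pass.
    @staticmethod
    def forward(ctx, z, gamma):
        ctx.gamma = gamma
        return z.clone()

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.gamma * grad_output, None

def fusion_loss(task_logits, task_labels, attr_logits, attr_labels):
    # L_Fusion = L_t + sum_i L_{b_i}; the gamma_i factors act through
    # the GRL placed before each debiasing head, not as loss weights.
    ce = torch.nn.functional.cross_entropy
    loss = ce(task_logits, task_labels)
    for logits, labels in zip(attr_logits, attr_labels):
        loss = loss + ce(logits, labels)
    return loss
\end{verbatim}
In this sketch, \texttt{GradientReversal.apply(z, gamma\_i)} would feed the debiasing head of attribute $b_i$ while its adapter is trained, and in the final stage only the fusion parameters receive the gradient of the combined loss.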
\section{Experiment Settings -- Additional Details}
\label{sec:appendix:setting}
\subsection{Datasets}
\label{sec:appendix:setting:dataset}
In the FDCL18 dataset, we use the TwitterAAE model~\cite{blodgett-etal-2016-demographic} to assign racial dialect classes. The TwitterAAE model predicts four racial classes, \emph{African American}, \emph{White American}, \emph{Hispanic}, and \emph{Others}. We labeled a tweet as \emph{African American} or \emph{White American} if the prediction score was greater than $0.5$. For the PAN16 dataset, following \cite{sap-etal-2019-risk} we balanced the task labels and sampled 200K data items. The age groups of this dataset are 18-24, 25-34, 35-49, 50-64, and 65+. The proportions of data items regarding the labels of the task and protected attributes in the three datasets are as follows:
\begin{itemize}
\item BIOS dataset: Task (dentist: 0.03, professor: 0.27, teacher: 0.04, psychologist: 0.04, nurse: 0.07, poet: 0.01, photographer: 0.06, journalist: 0.04, filmmaker: 0.02, physician: 0.08, composer: 0.2, attorney: 0.08, model: 0.03, painter: 0.02, accountant: 0.01, pastor: 0.01, comedian: 0.01, surgeon: 0.05, architect: 0.03, paralegal: 0.01, dj: 0.01, chiropractor: 0.01, software engineer: 0.02, dietitian: 0.02, rapper: 0.01, personal trainer: 0.003, yoga teacher: 0.01, interior designer: 0.01); Gender (Female: 0.5, Male: 0.5)
\item FDCL18 dataset: Task (normal: 0.73, spam: 0.12, abusive: 0.12, hateful: 0.04); Race (White: 0.5, AA: 0.5)
\item PAN16 dataset: Task (Mention: 0.5, Not Mention: 0.5); Gender (male: 0.54, female: 0.46); Age ( 3: 0.18, 1: 0.34, 2: 0.40, 0: 0.07, 4: 0.01)
\end{itemize}
\subsection{Hyperparameter setting}
\label{sec:appendix:setting:hyperparam}
Across experiments, we keep specific hyperparameters consistent. The batch size is 64, the learning rate is 2e-5 (except for training the task and debiasing adapters, as explained below), the number of training epochs is 20, the dropout probability is 0.3, and the adversarial debiasing coefficient is 1 for all models (when applicable).
For the task adapter and the debiasing adapters, we tune the learning rate and the hidden layer dimension of the adapter. We conduct a brute-force search over the learning rate values [1e-5, 1e-4, 1e-3, 1e-2], and over the hidden layer dimension with division factors of [1, 1/2, 1/4, 1/8, 1/16].
For \textsc{INLP}\xspace we use 10 iterations and 10 classifiers to learn the null space, while for \textsc{INLP-NonLin}\xspace we use the same setting (300 iterations and 300 classifiers) as in \cite{ravfogel2020null}.
\subsection{Training procedure}
\label{sec:appendix:setting:training}
We randomly split the dataset into train, validation, and test sets with the proportions 63:12:15 for BIOS, 63:12:15 for FDCL18, and 80:5:15 for PAN16. We use the validation set for hyperparameter tuning, and the best configuration on the validation set is evaluated on the test set for the final results. The validation and test sets in all datasets follow the same distribution as the whole dataset. To address the imbalance of the datasets and the potential problems in adversarial learning, we apply upsampling only on the \emph{training sets} of the BIOS and FDCL18 datasets, to balance the protected attribute labels within each task label. For instance, genders are balanced in the dentist class by repeating the data items of the minority subgroup.
\section{Fusion Attention Analysis}
\label{sec:appendix:fusion}
We investigate the attention distribution of the fusion network and observe the weights it gives to the adapters. Figures~\ref{fig:appendix:bio}, \ref{fig:appendix:race}, and \ref{fig:appendix:mention} depict the attention scores of each adapter, averaged over all fusion layers in \textsc{DAM}\xspace, for the BIOS, FDCL18, and PAN16 datasets, respectively. To avoid clutter in the visualization, we only used 4\% of the data points, randomly sampled from the test set. As shown, the task adapter has weights close to 1, while the debiasing adapters are assigned attention scores only slightly higher than 0. The three outliers in BIOS with the highest attention scores on the debiasing adapter are reported in Table~\ref{tbl:appendix:datapoints}.
\begin{table}[h]
\centering
\begin{tabular}{ l L{6cm} }
\toprule
\multirow{1}{*}{} & \multicolumn{1}{c}{\textbf{Text}} \\\midrule
1 & passionately promotes healthy dietary and lifestyle choices to prevent disease and achieve optimal health \\%she passionately promotes healthy dietary and lifestyle choices to prevent disease and achieve optimal health \\
2 & practices include clayton heights family dental street panorama family dental avenue and surrey family dental avenue \\% surrey family dental avenue \\
3 & primary research interests include collective security and global health in an international relations and international political economy perspective \\
\bottomrule
\end{tabular}
\caption{The three BIOS data points with the highest attention scores on the debiasing adapter.}
\label{tbl:appendix:datapoints}
\end{table}
\begin{figure*}[t]
\centering
\subfloat[BIOS]{\includegraphics[width=0.45\textwidth]{figures/bio.pdf}\label{fig:appendix:bio}}
\hspace{10mm}
\subfloat[FDCL18]{\includegraphics[width=0.44\textwidth]{figures/race.pdf}\label{fig:appendix:race}}
\centering
\caption{Attention distribution of a set of sampled data points in the fusion module of \textsc{DAM}\xspace.}
\label{fig:appendix:bio_race}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=0.4\textwidth]{figures/mention.pdf}
\caption{Attention distribution of a set of sampled data points in the fusion module of \textsc{DAM}\xspace in PAN16 dataset.}
\label{fig:appendix:mention}
\end{figure*}
\section{Introduction}
\label{sec:introduction}
\input{1-introduction}
\section{Related Work}
\label{sec:related}
\input{2-relatedwork}
\section{Bias Mitigation with \textsc{DAM}\xspace}
\label{sec:method}
\input{3-method}
\section{Experiment Setup}
\label{sec:setup}
\input{4-setup}
\section{Results and Discussion}
\label{sec:results}
\input{5-results}
\section{Conclusion}
\label{sec:conclusion}
We propose a novel bias mitigation approach which enables flexible switching between the original and debiased states of a model. Our proposed \textsc{DAM}\xspace method extends the idea of multi-task learning with AdapterFusion to bias mitigation, by first learning the main task and the debiasing objectives as separate adapters, and then leveraging the attention-based fusion approach to merge these adapters and deliver debiased results. Our experiments on three classification tasks show that, besides flexible switching, \textsc{DAM}\xspace improves bias mitigation, provides classification performance on par with vanilla adversarial debiasing, and addresses the issue of catastrophic forgetting in multi-attribute bias mitigation.
\section{Limitations}
\label{sec:limitations}
In the experiments, gender is considered as a binary construct due to practical constraints. In particular, in BIOS and PAN16 the gender is provided only in the form of male and female. We are however fully aware that a gender binary model is not representative of all individuals; yet working with in-the-wild data (as in our case) entails an unavoidable caveat, derived from the still predominant belief that human beings can be sorted into two discrete categories~\cite{hyde2019future}. However, the proposed method can still be defined for generic non-binary settings and can be applied to any sensitive attribute with more than two categories, as exemplified by our consideration of age classes.
We should also highlight two limitations of this study with respect to the model and optimization. First, adversarial approaches in general (including our proposed approach) aim to reduce the model's correlations with the protected attribute based on the observed data. This approach, like other data-oriented bias mitigation methods, might lack effective generalization, particularly when the model is evaluated on other domains or out-of-distribution data. The second limitation of the discussed bias mitigation methods based on adversarial training (again including our proposed approach) is that, while they aim to make the model prediction agnostic to protected attributes, they do not directly account for a balanced treatment of subpopulations with regard to utility metrics and the quality of the received service.
\section{Acknowledgment}
This work received financial support by the Austrian Science Fund (FWF): P33526 and DFH-23; and by the State of Upper Austria and the Federal Ministry of Education, Science, and Research, through grants LIT-2020-9-SEE-113 and LIT-2021-YOU-215.
\label{sec:intro}
With the advances in both stable interest region detectors~\cite{mik_ijcv05} and robust and distinctive descriptors~\cite{mik_pami05}, local feature-based image or object retrieval has become a popular research topic.
In particular, binary local features such as Oriented FAST and Rotated BRIEF (ORB)~\cite{rub_iccv11} have attracted much attention due to their efficiency.
Binary features are one or two orders of magnitude faster than Scale Invariant Feature Transform (SIFT)~\cite{low04} or Speeded Up Robust Features (SURF)~\cite{bay_cviu08} features in detection and description, while providing comparable performance~\cite{rub_iccv11, hei_eccv12}.
These binary features are especially suitable for mobile visual search or augmented reality on mobile devices\cite{yan_ismar12}.
With the increasingly widespread use of mobile devices such as Android phones or iPhones, mobile visual search (MVS) has become one of the major applications of image retrieval and recognition technology.
While some research focuses on server-client systems in the context of MVS, the purpose of our research is to achieve fast and accurate recognition with lower memory requirements on mobile devices~\cite{pan_iccv13, dav_mm14}; in this paper, we call the latter type of MVS ``local MVS''.
Local MVS does not require any server and it works without a network, achieving faster recognition.
Thus, it is suitable for recognizing medium-sized databases,
e.g., catalogs, paintings in a museum, or cards in a collectible card game.
As these kinds of objects can be considered as explicitly or approximately planar~\cite{phi07}, we focus on recognizing planar objects in this study.
We also assume a real-time local MVS system or application, where images captured by a mobile device's camera are continuously used as an input to the local MVS system.
If the camera captures an object registered in the database, our system is expected to show information or content related to the object immediately.
This use case further requires local MVS systems to suppress undesirable false positives because objects in the database do not always appear in captured images.
The difficulty in local MVS lies in the indexing of local features because it is necessary to fit the database into the memory of a mobile device or into the application size while maintaining retrieval accuracy.
In other words, managing the trade-off between the memory size of the database and the accuracy of image retrieval is very important.
We assume that the proposed local MVS system is implemented as a mobile application, and the application is provided via digital distribution services such as Play Store or App Store.
In such a situation, the application size is very important because users tend not to download large-size applications.
In this study, we aim at maximizing the accuracy under the constraint of a reasonably small application size (e.g. no larger than 10 MB).
When indexing binary features, Locality Sensitive Hashing (LSH)~\cite{gio_vldb99} is often used~\cite{rub_iccv11, yan_ismar12}.
However, this does not satisfy our constraint because it requires a large amount of memory;
original feature vectors and many hash tables must be stored in LSH~\cite{jeg10, muj_crv12}.
A Bag-of-Visual Words (BoVW) framework~\cite{siv03} is the most widely-used approach for local feature-based image or object retrieval that achieves fast retrieval with lower memory requirements.
As there is room for improvement in the accuracy of the standard BoVW framework, many methods have been proposed to improve this framework for continuous features such as SIFT~\cite{phi07, phi_cvpr08, jeg_ijcv10, mik_eccv10}.
However, there are not many studies on indexing binary features for the purpose of image retrieval.
In \cite{gal_iros11}, the BoVW framework has been adopted to index recent binary features, referred to as Bag-of-Binary Words (BoBW).
In \cite{zho_mm12}, a variant of the Hamming embedding method~\cite{jeg_ijcv10} is proposed for binarized features in order to improve the trade-off between memory requirements and accuracy.
In this method, the quantization process is performed by treating the first $a$ bits of a binary feature as an integer ranging from 0 to $2^a-1$.
Such VWs are not robust because two binary features that are different only in one bit out of the first $a$ bits are quantized into different VWs.
In this study, in order to achieve a real-time local MVS system, we propose a variant of the BoBW framework, which adaptively extracts and stores VW-dependent information of each binary feature to refine VW-based matching.
As the scoring method for matched feature pairs has not been considered in depth and only the standard tf-idf scoring is used in \cite{zho_mm12}, we also propose a modified version of the local Naive Bayes Nearest Neighbor (local NBNN) scoring method for image retrieval, which was originally proposed for image classification~\cite{mcc_cvpr12}.
It provides a theoretical basis for scoring feature matching in voting and the proposed modification improves performance by using adaptive density estimation without any additional overhead.
Finally, we introduce a geometric verification method in order to suppress false positives.
This paper is an extended version of the paper~\cite{uch_gcce14} that appeared in the Global Conference on Consumer Electronics (GCCE) 2014.
In particular, we introduce a new geometric verification method and show that our system can achieve a zero false positive rate.
In this study, we did not consider the Fisher vector approach~\cite{per_eccv10, jeg_pami12, uch_acpr13} or the Vector of Locally Aggregated Descriptors (VLAD) approach~\cite{jeg_cvpr10, ara_cvpr13, dav_mm14, spy_tmm14}, which achieves reasonable retrieval accuracy with very compact image representation.
However, geometric verification becomes unavailable in these approaches because matching pairs of features are not obtained in the search process.
Geometric verification is essential to ensure a low false positive rate as also shown in our experiment later.
Although it is possible to perform feature-level matching after an image-level search, this makes these approaches inefficient and reduces their advantages.
In summary, the contributions of this research are three-fold, as follows:
\begin{enumerate}
\item We propose an adaptive substring extraction method that adaptively extracts informative bits from the original binary vector and stores them in an inverted index. These substrings are used to refine visual word-based matching.
\item A modified local NBNN scoring method is proposed in the context of image retrieval, which considers the density of binary features in scoring each feature matching.
\item In order to suppress false positives, we introduce a convexity check step that imposes a convexity constraint on the configuration of a transformed reference image.
\end{enumerate}
The rest of this paper is organized as follows.
In Section 2, the binary features we are going to use in our system are briefly introduced.
In Section 3, we describe the BoVW framework and its extensions.
In Section 4, our proposed framework is introduced.
In Section 5, the effectiveness of the proposed framework is confirmed.
Our conclusions are presented in Section 6.
\section{Local binary features}
To date, many binary features have been proposed, such as ORB~\cite{rub_iccv11}, Fast Retina Keypoint (FREAK)~\cite{ala_cvpr12}, Binary Robust Invariant Scalable Keypoints (BRISK)~\cite{leu_iccv11}, KAZE features~\cite{alc_eccv12}, Accelerated-KAZE (A-KAZE)~\cite{alc_bmvc13}, Local Difference Binary (LDB)~\cite{yan_pami14}, and Learned Arrangements of Three patCH codes (LATCH)~\cite{lev_wacv16}.
In this section, binary features are briefly introduced focusing on the ORB feature, which is one of the most frequently used binary features.
The algorithm of a local (binary) feature is divided into two major parts: detection and description.
In detection, local patches are detected in an image, and then binary feature vectors are extracted from these patches in the description.
\subsection{Detection}
Most of the local binary features employ fast feature detectors.
The ORB feature utilizes the Features from the Accelerated Segment Test (FAST)~\cite{ros_iccv05} detector, which detects pixels that are brighter or darker than neighboring pixels based on the accelerated segment test.
The test is optimized to reject candidate pixels very quickly, achieving extremely fast feature detection.
In order to ensure approximate scale invariance, feature points are detected from an image pyramid.
\subsection{Description}
Local binary features extract binary strings from patches of interest regions instead of extracting gradient-based high-dimensional feature vectors like SIFT.
The BRIEF descriptor~\cite{cal_eccv10}, a pioneering work in the area of binary descriptors, is a bit string description of an image patch constructed from a set of binary intensity tests.
Consider the $t$-th smoothed image patch $p_t$; a binary test $\tau$ for the $d$-th bit is defined by:
\begin{equation}
x_{td} = \tau(p_t; a_d, b_d) =
\begin{cases}
\, 1 & \mathrm{if} \; p_t(a_d) \ge p_t(b_d) \\
\, 0 & \mathrm{else}
\end{cases},
\end{equation}
where $a_d$ and $b_d$ denote relative positions in the patch $p_t$, and $p_t(\cdot)$ denotes the intensity at the point.
Using $D$ independent tests, we obtain the $D$-bit binary string $x_t = (x_{t1}, \cdots, x_{td}, \cdots, x_{tD})$ for the patch $p_t$.
The ORB feature employs a learning method for de-correlating BRIEF features under rotational invariance.
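As an illustration, the following Python sketch computes such a BRIEF-style descriptor from a smoothed patch; the function name and the representation of the test positions are our own assumptions.
\begin{verbatim}
import numpy as np

def brief_descriptor(patch, pairs):
    # patch: 2-D array of smoothed intensities.
    # pairs: list of ((ay, ax), (by, bx)) pixel positions defining the
    #        D binary tests (relative positions resolved to the patch).
    bits = np.empty(len(pairs), dtype=np.uint8)
    for d, (a, b) in enumerate(pairs):
        bits[d] = 1 if patch[a] >= patch[b] else 0
    return bits  # the D-bit string x_t as a 0/1 array
\end{verbatim}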
\section{Bag-of-visual words and its extensions}
Indexing features is an essential component of efficient retrieval or recognition.
The most widely adopted framework is the BoVW framework~\cite{siv03}.
In this section, the BoVW framework and its extensions are described.
\subsection{Bag-of-visual words framework}
In the BoVW framework, extracted local features of an image are quantized into visual words (VWs), resulting in a histogram representation of VWs.
Here, VWs are representative feature vectors, which are created beforehand by applying the $k$-means algorithm to the training feature vectors.
In many cases, image similarity is measured by the $\ell_1$ or $\ell_2$ distance between the normalized histograms of VWs.
As the histograms are generally sparse, an inverted index and a voting function enable an efficient similarity search~\cite{siv03,jeg_ijcv10}.
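A minimal sketch of this pipeline is given below; the data layout and the tf-idf weighting are simplified, and the helper names are ours.
\begin{verbatim}
import numpy as np
from collections import defaultdict

def build_inverted_index(ref_features, visual_words):
    # ref_features: iterable of (image_id, feature_vector);
    # visual_words: (N, dim) array of VW centroids.
    index = defaultdict(list)
    for image_id, f in ref_features:
        w = int(np.argmin(np.linalg.norm(visual_words - f, axis=1)))
        index[w].append(image_id)
    return index

def vote(query_features, visual_words, index, idf):
    scores = defaultdict(float)
    for q in query_features:
        w = int(np.argmin(np.linalg.norm(visual_words - q, axis=1)))
        for image_id in index[w]:
            scores[image_id] += idf[w]  # tf-idf style voting
    return scores
\end{verbatim}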
\subsection{Extended inverted index}
\label{sec:extended}
Though the BoVW framework achieves efficient retrieval with lower memory requirements, degradation of accuracy is caused by quantization.
In the BoVW framework, two features are matched if and only if they are assigned to the same VW~\cite{jeg_ijcv10}.
Therefore, quite different features are sometimes matched in the BoVW framework.
One approach to suppress these unreliable feature matches is to increase the number of VWs (e.g. 1M).
Although this approach is used in local MVS~\cite{pan_iccv13, dav_mm14} as well as large-scale image retrieval~\cite{nis06, phi07, mik_ijcv13}, it is not suitable for our purpose because it increases the size of the application.
For example, the VWs used in \cite{pan_iccv13} or \cite{dav_mm14} requires 17 MB or 72 MB storage respectively.
The other approach to refine feature matching is the post-filtering approach~\cite{jeg_ijcv10, jeg10}, where a relatively small number of VWs can be used.
In this approach, after VW-based matching, the distances between a query feature and the reference features that are assigned to the same VW are calculated and matches with large distances are filtered out.
In order to perform post-filtering, the inverted index is extended to store additional information for reference features.
As exact distance calculation is undesirable in terms of the computational cost and memory requirement to store raw feature vectors, short code-based methods are used for this purpose~\cite{jeg_ijcv10, jeg10};
original feature vectors are encoded into short codes and the distances between feature vectors are approximated by the distances between the short codes.
In \cite{jeg_ijcv10}, a pioneer work of this approach, the Hamming embedding (HE) method is proposed.
In HE, local feature vectors of reference images are encoded into binary codes by random projection and thresholding after quantization, and the resulting binary codes are stored in the inverted index.
In the search step, each query feature vector is also binarized after quantization, and the Hamming distances between the binary code of the query feature and the binary codes in the corresponding list of the inverted index are calculated.
Then, scores are voted for the reference images associated with the binary codes whose Hamming distances to the binary code of the query feature are no more than a threshold.
In \cite{jeg_ijcv10}, weak geometric consistency is also proposed that filters matching pairs that are not consistent in terms of angle and scale.
In \cite{jeg10}, instead of the binarization of vectors, a product quantization (PQ) is used to encode reference feature vectors into short codes, and the resulting short codes are also stored in inverted index.
\subsection{Problems of conventional systems}
Although many methods have been proposed to improve the BoVW framework for continuous features, there are not many studies on indexing binary features for the purpose of image retrieval.
In \cite{gal_iros11}, the BoVW framework has been adopted to index recent binary features, namely BoBW.
In \cite{zho_mm12}, a variant of the Hamming embedding method is proposed for binarized features, where the first $a$ bits are used to form $2^a$ VWs by treating a bit string as an integer, and the next $b$-bit substring is stored in an inverted index.
However, this approach results in non-optimal performance because directly using the binary string as a VW is not robust against disturbance;
if even one of the first $a$ bits is different between the reference and query binary features, they are quantized into different VWs and never match.
The other problem is that non-informative bits are sometimes selected as a substring.
This problem is caused by VW-based clustering.
Figure~\ref{fig:corr} illustrates this problem by considering the statistics of binary features before and after VW-based clustering.
Figure~\ref{fig:corr} (a) and (d) represent mean values and absolute correlation coefficients of the first 16 bits of the ORB feature vectors \textit{before} clustering.
We can see that mean values are close to 0.5 and correlation is weak as designed.
Figure~\ref{fig:corr} (b) and (e) represent mean values and absolute correlation coefficients of the ORB feature vectors assigned to a certain VW among 1024 VWs, and Figure~\ref{fig:corr} (c) and (f) correspond to the other VW.
Figure~\ref{fig:corr} gives us three observations about the statistics \textit{after} VW-based clustering;
(1) the mean values of binary feature vectors are significantly different from 0.5 in some dimensions,
(2) some bits of the binary feature are correlated with each other, and
(3) the characteristics of these correlations and mean values are different from one VW to another.
Thus, using the fixed positions of bits as a substring among VWs is not appropriate.
In the next section, we propose selecting informative bits as a substring and store the substring in an inverted index for post-filtering to solve this problem.
\begin{figure*}[tb]
\centering
\begin{minipage}[c]{0.3\linewidth}
\includegraphics[width=\linewidth]{fig/average0.eps} \\
\centering (a) \\
\end{minipage}
\begin{minipage}[c]{0.3\linewidth}
\includegraphics[width=\linewidth]{fig/average1.eps} \\
\centering (b) \\
\end{minipage}
\begin{minipage}[c]{0.3\linewidth}
\includegraphics[width=\linewidth]{fig/average2.eps} \\
\centering (c) \\
\end{minipage} \\
\begin{minipage}[c]{0.3\linewidth}
\includegraphics[width=\linewidth]{fig/corr_0.eps} \\
\centering (d) \\
\end{minipage}
\begin{minipage}[c]{0.3\linewidth}
\includegraphics[width=\linewidth]{fig/corr_1.eps} \\
\centering (e) \\
\end{minipage}
\begin{minipage}[c]{0.3\linewidth}
\includegraphics[width=\linewidth]{fig/corr_2.eps} \\
\centering (f) \\
\end{minipage} \\
\caption{(a) and (d) represent mean values and absolute correlation coefficients of first 16 bits of the ORB feature vectors \textit{before} clustering.
(b) and (e) represent mean values and absolute correlation coefficients of the ORB feature vectors assigned to a certain VW among 1024 VWs, and (c) and (f) correspond to the other VW.}
\label{fig:corr}
\end{figure*}
\begin{figure}[tb]
\centering
\includegraphics[width=\linewidth]{fig/framework.eps}
\caption{The framework of the proposed method.}
\label{fig:framework}
\end{figure}
\section{Proposed local MVS system}
In this section, we propose a local MVS system using a post-filtering approach, which is suitable for binary features.
Figure~\ref{fig:framework} shows the framework of the proposed method.
In the indexing step (offline), binary reference features are extracted from reference images and quantized into VWs.
The substrings of reference features are generated and stored in an inverted index to facilitate an efficient search.
In the search step (online), each query feature of a query image votes on reference images with scores according to the distances between the query feature substring and reference feature substrings.
Finally, geometric verification is performed to suppress false positives.
As mentioned in Section~\ref{sec:intro}, our novelty lies in adaptive substring extraction (Section~\ref{sec:extract}), modified local NBNN scoring (Section~\ref{sec:scoring}), and geometric verification with a convexity check (Section~\ref{sec:gv}).
\subsection{Constructing VWs and quantization}
As the proposed system is constructed on the BoVW framework, VWs should be constructed before indexing.
For VW construction, the $k$-means algorithm is performed on the training binary features.
Then, the resulting centroid vectors are binarized by thresholding at 0.5 as done in \cite{gal_iros11}, obtaining binary VWs.
Quantization is done by calculating the Hamming distances between the reference feature and the VWs, and assigning the reference feature to the nearest VW.
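A sketch of this construction and quantization step in Python might look as follows; the use of scikit-learn's $k$-means and the default vocabulary size are illustrative assumptions.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def train_binary_vws(train_bits, n_words=1024):
    # train_bits: (M, D) array of 0/1 training binary features.
    km = KMeans(n_clusters=n_words).fit(train_bits)
    # Binarize the real-valued centroids by thresholding at 0.5.
    return (km.cluster_centers_ >= 0.5).astype(np.uint8)

def quantize(x, binary_vws):
    # Assign the binary feature x (0/1 array of length D) to the VW
    # with the smallest Hamming distance.
    return int(np.argmin(np.count_nonzero(binary_vws != x, axis=1)))
\end{verbatim}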
\subsection{Adaptive substring extraction}
\label{sec:extract}
While the method proposed in \cite{zho_mm12} extracts the substring using fixed positions of bits, we extract the substring by adaptively changing the bit positions according to the assigned VW.
To this end, we utilize a \textit{substring dictionary} $\{ \mathcal{D}_w \}_{w = 1}^{N}$, where $\mathcal{D}_w$ defines the positions of $T$ useful bits for the $w$-th VW.
Letting $w$ denote the identifier of the nearest VW of the input binary vector $x$, we extract the substring $(x_{\mathcal{D}_{w1}}, \cdots, x_{\mathcal{D}_{wT}})$ from the original binary vector $x = (x_1, \cdots, x_D)$ using $\mathcal{D}_w$.
For example, in the case that $T = 4$ and $\mathcal{D}_w = (4, 25, 70, 87)$, the resulting 4-bit substring becomes $(x_4, x_{25}, x_{70}, x_{87})$.
The substring dictionary $\{ \mathcal{D}_w \}_{w = 1}^{N}$ is constructed with the algorithm used in ORB~\cite{rub_iccv11}, where informative (mean value is close to 0.5) and non-correlated bits are selected.
To do this, training vectors are first clustered into $N$ sets $\{ \mathcal{X}_w \}_{w = 1}^{N}$ using VWs, where $N$ denotes the number of VWs.
Then, for each $w$-th VW, the substring dictionary $\mathcal{D}_w$ is constructed using the training vectors $\mathcal{X}_w$.
Algorithm~\ref{alg:construct} describes the algorithm for the construction of the substring dictionary $\mathcal{D}_w$ for the $w$-th VW.
In this algorithm, bit positions are first sorted according to their entropy.
Then, each bit position is added to the dictionary $\mathcal{D}_w$ if the bit is not correlated with all of the bit positions already in $\mathcal{D}_w$.
If the algorithm finishes before $T$ bits are selected, the threshold $th$ is increased (i.e., the correlation constraint is relaxed) and Algorithm~\ref{alg:construct} is performed again.
The resulting dictionary $\{ \mathcal{D}_w \}_{w = 1}^{N}$ is used in the following indexing and search step.
Storing the dictionary requires $N{\times}T$ bytes because one byte can represent one bit position of a 256-bit string.
\begin{algorithm}[tb]
\caption{Substring dictionary construction}
\label{alg:construct}
\begin{algorithmic}[1]
\REQUIRE Training vectors $\mathcal{X}_w$ assigned to the $w$-th VW, threshold for correlation $th$
\ENSURE Substring dictionary $\mathcal{D}_w$ with size $T$
\STATE $C \leftarrow$ correlation matrix of $\mathcal{X}_w$
\STATE $A \leftarrow$ bit identifiers sorted in ascending order of the absolute difference between the mean value of the bit and 0.5
\STATE $\mathcal{D}_w \leftarrow \{ A_1 \}$
\FOR{$i = 2$ \TO $D$}
\STATE $j \leftarrow A_i$
\IF{$\max_{k \in \mathcal{D}_w} |C_{jk}| < th$}
\STATE $\mathcal{D}_w \leftarrow \mathcal{D}_w \cup j$
\IF{$|\mathcal{D}_w| = T$}
\STATE \textbf{break}
\ENDIF
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
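For concreteness, a direct Python transcription of Algorithm~\ref{alg:construct}, together with the substring extraction it enables, might look as follows; the helper names and default values are ours.
\begin{verbatim}
import numpy as np

def build_substring_dictionary(X_w, T=64, th=0.2):
    # X_w: (n, D) array of 0/1 training vectors assigned to VW w.
    C = np.corrcoef(X_w, rowvar=False)                  # D x D correlations
    order = np.argsort(np.abs(X_w.mean(axis=0) - 0.5))  # informative bits first
    D_w = [int(order[0])]
    for j in order[1:]:
        if np.max(np.abs(C[j, D_w])) < th:
            D_w.append(int(j))
            if len(D_w) == T:
                break
    return D_w  # if fewer than T bits are found, retry with a larger th

def extract_substring(x, D_w):
    # Keep only the T informative bits selected for the assigned VW.
    return x[np.asarray(D_w)]
\end{verbatim}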
\subsection{Indexing reference images}
In the indexing step, binary features are extracted from reference images and these features are stored in the inverted index as follows.
First, each reference binary feature is quantized into VW.
Letting $w$ denote the identifier of the nearest VW, the substring is extracted using $\mathcal{D}_w$.
Then, the following information is stored in the $w$-th list of the inverted index as shown in Figure~\ref{fig:framework}:
image identifier (2 bytes), the position $(x, y)$ (2+2 bytes), and the substring ($T/8$ bytes).
In total, $6+T/8$ bytes per feature are required.
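The per-feature entry can be packed compactly, e.g., as in the following sketch (the little-endian layout and field order are our assumptions):
\begin{verbatim}
import struct

def pack_entry(image_id, x, y, substring_bytes):
    # 2-byte image id, 2+2-byte (x, y) position, T/8-byte substring:
    # 6 + T/8 bytes per feature in total.
    return struct.pack("<HHH", image_id, x, y) + bytes(substring_bytes)
\end{verbatim}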
\subsection{Searching inverted index}
In the search step, binary features are extracted from a query image.
Each query feature votes on reference images with scores using the following procedure.
First, the binary query feature is quantized into VW $w$.
The substring of the binary query feature is generated in the same manner as in the indexing step using $\mathcal{D}_w$.
Then, the distances between the query substring and reference substrings in the corresponding $w$-th list of the inverted index are calculated.
Finally, scores are assigned to the $K$-nearest neighbor reference features.
\subsection{Modified local NBNN scoring}
\label{sec:scoring}
It is known that weighting scores according to their distances improves performance.
The most common way of doing this weighting is to use the Gaussian function $\exp(-d^2/\sigma^2)$~\cite{phi_cvpr08, jeg_cvpr09, jeg_ijcv10}, where $d$ is the Euclidean or Hamming distance between the query feature and reference feature and $\sigma$ is an adjustable parameter.
However, this approach has little theoretical basis and is not optimal.
In this paper, we propose a modified version of the local NBNN (LN) method~\cite{mcc_cvpr12}, which has a theoretical background in the derivation of its score.
Although LN was originally proposed for image classification~\cite{mcc_cvpr12}, we show that this method also works well in image retrieval.
In LN, for each query $q$, its $K$ nearest neighbor features are searched for, where $K$ is an adjustable parameter that specifies the number of samples used in kernel density estimation.
Then, a score of $d^2_K - d^2_k$ is assigned to the corresponding image of the $k$-th nearest neighbor feature, where $d_x$ denotes the distance between $q$ and its $x$-th nearest neighbor feature.
We refer to this original LN scoring as \textsf{LNo}.
In this study, we modify \textsf{LNo} to $(d_K / d_k)^2 - 1$.
This modification has the effect of adaptively changing the kernel radius in kernel density estimation similar to local scaling~\cite{zel_nips04}, resulting in more appropriate scoring.
We refer to this modified LN scoring method as \textsf{LNm}.
While the parameter $K$ is set to 10 in \cite{mcc_cvpr12}, we empirically use $K = 2$ in this study.
This is because, in the classification task \cite{mcc_cvpr12}, many training images (e.g. 15 or 30) are available for each class, while only a single reference image can be used to represent a reference object in many retrieval tasks.
In the classification task, using a relatively large $K$ contributes to accuracy because it utilizes multiple training images in density estimation.
In a small-scale retrieval task, using the first and second nearest neighbors is enough, similar to the ratio test done in feature matching~\cite{low04}.
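For reference, the three scoring rules discussed here can be written compactly as follows (a sketch; the guard against a zero distance in \textsf{LNm} is our own addition):
\begin{verbatim}
import math

def score_gw(d, sigma):
    # Gaussian weighting exp(-d^2 / sigma^2) of a single match
    return math.exp(-d * d / (sigma * sigma))

def score_lno(d_k, d_K):
    # original local NBNN score of the k-th nearest neighbor (k < K)
    return d_K ** 2 - d_k ** 2

def score_lnm(d_k, d_K):
    # modified local NBNN score (d_K / d_k)^2 - 1
    return (d_K / d_k) ** 2 - 1.0 if d_k > 0 else float(d_K ** 2)
\end{verbatim}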
\subsection{Geometric verification with convexity check}
\label{sec:gv}
Geometric Verification (GV) or spatial re-ranking is an important step to improve the results obtained by the voting function~\cite{phi07, chu_iccv07}.
In this step, transformations between the query image and the top-$R$ reference images in the list of voting results are estimated, eliminating matching pairs that are not consistent with the estimated transformation.
In the estimation, the RANdom SAmple Consensus (RANSAC) algorithm or its variants~\cite{chum_accv04, chu_cvpr05} are used.
Then, the score is updated by counting only inlier pairs.
As a transformation model, an affine or homography matrix is usually used.
In our case, we estimate the homography matrix using the PROgressive SAmpling and Consensus (PROSAC) algorithm~\cite{chu_cvpr05} for efficiency.
As matching pairs between the query image and the top-$R$ reference images have already been obtained in the voting step, these matching pairs are used as an input to RANSAC.
In many studies, GV is used only for re-ranking~\cite{phi07, chu_iccv07} to improve retrieval accuracy.
In our case, the purpose of GV is to suppress false positives by thresholding the number of inliers.
For this purpose, standard GV is not sufficient;
a considerable number of inliers is found even for non-relevant image pairs, as will be shown in Section~\ref{sec:gvresult}.
In this study, after standard GV, we check the estimated model under the assumption that reference images represent planar objects.
In particular, we check the \textit{convexity} of the reference image projected to the query image using the Homography matrix estimated in geometric verification.
Let $a$, $b$, $c$, and $d$ denote the four corners of a reference image in clockwise order.
These points are transformed by the estimated homography $H$:
$a' = H a$, $b'= H b$, $c' = H c$, and $d' = H d$.
Here, $a'$, $b'$, $c'$, and $d'$ represent the corners of the reference image captured in the query image.
If the estimated homography is correct, the transformed reference image should be convex, and thus the angle of each of its corners does not become larger than 180 degrees.
We can verify this constraint using the following inequalities:
$\overrightarrow{a'd'} \times \overrightarrow{a'b'} > 0$,
$\overrightarrow{b'a'} \times \overrightarrow{b'c'} > 0$,
$\overrightarrow{c'b'} \times \overrightarrow{c'd'} > 0$,
$\overrightarrow{d'c'} \times \overrightarrow{d'a'} > 0$,
where $\times$ represents the cross product.
If one of the inequalities is not satisfied, the estimated homography is discarded, suppressing false positives in our framework.
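A minimal sketch of this check (the sign convention follows the clockwise corner order assumed above; numpy is used only for the projection):
\begin{verbatim}
import numpy as np

def project(H, p):
    # apply the 3x3 homography H to the 2D point p
    q = H @ np.array([p[0], p[1], 1.0])
    return q[:2] / q[2]

def is_convex(H, corners):
    # corners = (a, b, c, d): reference-image corners in clockwise order
    a, b, c, d = [project(H, p) for p in corners]
    def cross(o, p, q):   # z-component of (p - o) x (q - o)
        return (p[0]-o[0])*(q[1]-o[1]) - (p[1]-o[1])*(q[0]-o[0])
    return (cross(a, d, b) > 0 and cross(b, a, c) > 0 and
            cross(c, b, d) > 0 and cross(d, c, a) > 0)
\end{verbatim}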
Even after the above convexity check, we empirically found some false positives with moderate scores.
These were caused by a characteristic of the ORB feature;
the multi-scale FAST detector used in ORB tends to detect features at almost the same position (often at corners) with different scales.
These features are frequently considered as inliers at the same time, increasing scores between unrelated image pairs.
In order to reduce this phenomenon, we discard an inlier pair if both its reference and query feature positions lie within five pixels of those of another inlier pair, reducing the scores of such non-informative matches.
\section{Experimental evaluation}
In the experiments, the Stanford mobile visual search dataset\footnote{\url{https://sites.google.com/site/chenmodavid/mobile-visual-search}} is used.
It contains eight classes of images:
camera-phone images of books, business cards, CDs, DVDs, outdoor landmarks, museum paintings, text documents, and video clips.
Each class consists of 100 reference images and 400 query images.
As an indicator of retrieval performance, mean average precision (MAP; higher is better)~\cite{jeg_ijcv10} is used.
We adopt the ORB feature~\cite{rub_iccv11} implemented in the OpenCV library\footnote{\url{http://opencv.org/}}, where at most 900 features are extracted from four scales on average.
The number of VWs is fixed at 1024 in all methods and experiments.
The VWs and substring dictionary are trained using the MIR Flickr collection\footnote{\url{http://press.liacs.nl/mirflickr/}}.
\begin{table*}[tb]
\centering
\caption{Comparison of the proposed method with conventional methods in terms of substring extraction.
For each method, MAP scores of eight classes are shown.}
\label{tab:result1}
\begin{tabular}{c|>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash}p{1.2cm}|>{\centering\arraybackslash}p{2cm}} \hline
&book &cards &cd &dvd &landmarks &paintings &text &video &average \\ \hline
BoBW~\cite{gal_iros11} &0.610 &0.173 &0.427 &0.465 &0.080 &0.486 &0.125 &0.584 &0.369${\pm}$0.012 \\
\cite{zho_mm12} &0.874 &0.463 &0.752 &0.811 &0.197 &0.671 &0.423 &0.824 &0.627${\pm}$0.020 \\
PROP &\textbf{0.916} &\textbf{0.535} &\textbf{0.807} &\textbf{0.897} &\textbf{0.253} &\textbf{0.718} &\textbf{0.542} &\textbf{0.853} &\textbf{0.690}${\pm}$0.015 \\ \hline
\end{tabular} \\
\end{table*}
\begin{figure}[tb]
\centering
\includegraphics[width=0.8\linewidth]{fig/bits.eps}
\caption{Average MAP scores of eight classes as a function of the length of substrings $T$.}
\label{fig:bits}
\end{figure}
\subsection{Effect of substring generation}
First, the effectiveness of the proposed substring generation method was evaluated.
Table~\ref{tab:result1} summarizes the experimental results.
The MAP scores of eight classes are shown for three methods:
(1) BoBW~\cite{gal_iros11},
(2) Zhou's method~\cite{zho_mm12},
where we used the first 10 bits to define 1024 ($=2^{10}$) VWs and used the next 64 bits as the substring, and (3) the proposed method with 64 bits as the substring ($T = 64$).
In order to focus on the evaluation of the substring generation methods, conventional tf-idf scoring~\cite{low04} is used for all methods.
Comparing BoBW with Zhou's method, we can see that the use of the substring improves accuracy dramatically.
The proposed method further improves accuracy by adaptively generating the substring.
For a statistical test, we split queries into five disjoint sets and calculated the average MAP for each set.
In Table~\ref{tab:result1}, the standard deviation of the average MAP is shown.
Furthermore, we conducted a paired-samples t-test where the null hypothesis is that there is no difference between the average MAPs of \cite{zho_mm12} and PROP. The two-tailed $P$-value for this hypothesis was 0.0001 and the null hypothesis was rejected.
Therefore, we can say that the improvement is statistically significant.
Next, the effect of the number of selected bits and the selection of methods are evaluated.
Here, in addition to the proposed method, we evaluate two selection methods: \textsf{Fixed} and \textsf{Random}. \textsf{Fixed} is a modified version of the proposed method,
where a substring is created using the first fixed $T$ bits in all visual words.
\textsf{Random} uses $T$ bits randomly selected for each visual word.
These three selection methods become identical when $T = 256$, where full binary strings of ORB features are always used.
Figure~\ref{fig:bits} shows the average MAP scores of eight classes as a function of the length of substrings $T$, comparing these three selection methods.
We can see that the proposed method achieves the best scores among these variants for all $T$.
\textsf{Fixed} is slightly better than \textsf{Random}.
This is reasonable because, in the ORB algorithm, binary tests are sorted according to their entropies;
the leading bits are more informative.
An interesting observation is that the proposed method at $T = 128$ outperforms the proposed method at $T = 256$, whereas with the \textsf{Fixed} and \textsf{Random} methods a longer bit string always achieves a better result.
This implies that using non-informative or correlated bits degrades search accuracy.
\begin{table*}[tb]
\centering
\caption{Comparison of the proposed method with conventional methods in terms of scoring.}
\label{tab:result2}
\begin{tabular}{c|>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash}p{1.2cm}|>{\centering\arraybackslash}p{2cm}} \hline
&book &cards &cd &dvd &landmarks &paintings &text &video &average \\ \hline
PROP+GW &0.943 &0.602 &0.849 &0.930 &0.278 &0.740 &0.568 &0.900 &0.726${\pm}$0.017 \\
PROP+LNo &0.927 &0.515 &0.830 &0.924 &0.282 &0.758 &0.501 &0.909 &0.706${\pm}$0.008 \\
PROP+LNm &\textbf{0.955} &\textbf{0.609} &\textbf{0.873} &\textbf{0.944} &\textbf{0.289} &\textbf{0.773} &\textbf{0.570} &\textbf{0.914} &\textbf{0.741}${\pm}$0.010 \\ \hline
\end{tabular} \\
\end{table*}
\subsection{Comparison in scoring function}
Second, we evaluated the proposed scoring method.
The proposed adaptive substring extraction method ($T = 64$) with Gaussian weighting (\textsf{GW})
\footnote{
For the Gaussian weighting,
we set $\sigma = 9$,
which achieved the best performance in our preliminary experiments,
while $\sigma = 16$ in \cite{jeg_cvpr09}.
}
is used as a conventional method.
From Table~\ref{tab:result2}, it is shown that, while the original LN scoring method (\textsf{LNo}) is inferior to \textsf{GW}, the proposed modified LN scoring method (\textsf{LNm}) outperforms GW by 1.5\% and the method in \cite{zho_mm12} by 11\% in MAP.
This is because scores of original LN are directly affected by the density of feature vectors;
the score of a query feature in dense space tends to be low, while the score of a query feature in sparse space tends to be high.
The proposed scoring method normalizes the score using the distance between the query feature and its $K$-th nearest neighbor feature in the database.
Thus, all query features can equally contribute to the similarity score, improving the final result.
As the overhead of the proposed method is negligible, the proposed system can improve retrieval accuracy with the same memory requirements and almost the same computational cost as conventional methods.
We also conducted the paired-samples t-test as done in Section 5.1.
The two-tailed p-value was 0.003 when the average MAPs of \textsf{GW} and \textsf{LNm} are assumed to be the same.
Thus, it can be said that the proposed scoring method achieves a statistically significant improvement in terms of MAP.
\begin{figure}[tb]
\centering
\includegraphics[width=0.8\linewidth]{fig/gv1.eps} \\
(a) GV \\
\includegraphics[width=0.8\linewidth]{fig/gv2.eps} \\
(b) GV+CC \\
\caption{Distributions of the correct and incorrect image pairs with and without the convexity check.
Scores larger than 20 are merged into the bin at a score of 20.}
\label{fig:gv}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=\linewidth]{fig/sample2.jpg} \\
\includegraphics[width=\linewidth]{fig/sample1.jpg} \\
\includegraphics[width=\linewidth]{fig/sample3.jpg} \\
\caption{False positive examples after geometric verification.
For each row, the left (resp. right) image represents the query (resp. reference) image.
Green lines represent false inliers, and the red quadrilateral represents the four sides of the reference image projected onto the query image by the estimated homography matrix.}
\label{fig:gv_fp}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=0.8\linewidth]{fig/roc.eps}
\caption{Receiver Operating Characteristic (ROC) curves for the methods without geometric verification, with geometric verification, and with geometric verification and convexity check.}
\label{fig:roc}
\end{figure}
\subsection{Results with geometric verification}
\label{sec:gvresult}
The accuracy of the proposed framework with geometric verification is evaluated.
In this study, we perform geometric verification against the top three results of \textsf{LNm}.
First, we evaluate the effectiveness of the convexity check method described in Section~\ref{sec:gv} against false positive suppression.
For this purpose, we conducted an experiment where 10,000 images distinct from the reference images of the eight classes are used as queries and the databases of the eight classes are searched, obtaining 80,000 search results in total.
By taking the top-1 scores from these results, we can estimate the distribution of the scores between incorrect image pairs.
We also estimate the distribution of the scores between correct image pairs using the queries and references of eight classes.
The distributions of the correct and incorrect image pairs obtained by standard geometric verification (\textsf{GV}) and by the convexity check described in Section~\ref{sec:gv} (\textsf{GV+CC}) are shown in Figure~\ref{fig:gv} (a) and (b) respectively.
We can see that standard geometric verification returns relatively high scores even for incorrect image pairs.
In contrast, geometric verification with the convexity check can suppress most of the false positives.
Figure~\ref{fig:gv_fp} shows false positive examples after geometric verification, where the red quadrilaterals represent the four sides of the reference image projected onto the query image by the estimated homography matrix.
We can see that the red quadrilaterals are complex (self-intersecting) and they can be filtered out by the proposed convexity check process.
Figure~\ref{fig:roc} shows the Receiver Operating Characteristic (ROC) curves for the methods without geometric verification, with geometric verification, and with geometric verification and the convexity check.
It can be said that both geometric verification and convexity check are vital to improving the accuracy under the constraint of a low false positive rate.
Table~\ref{tab:result3} shows the detection accuracy under the constraint that the false positive rate becomes zero.
Without geometric verification, it is impossible to achieve both reasonable detection accuracy and zero false positive rate.
Comparing \textsf{GV}/\textsf{GV+CC} with \textsf{LNm} in Table~\ref{tab:result2}, accuracy declines because geometric verification sometimes discards true positives in order to achieve a zero false positive rate.
The degradation of accuracy caused by GV is particularly prominent in the landmark and painting classes.
The landmark class includes many non-planar objects, and GV based on homography sometimes fails.
The painting class includes weakly textured images, which do not yield enough inliers in RANSAC.
\begin{table*}[tb]
\centering
\caption{Detection accuracy under the constraint that false positive rate becomes zero.}
\label{tab:result3}
\begin{tabular}{c|>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash}p{1.2cm}>{\centering\arraybackslash}p{1.2cm}|>{\centering\arraybackslash}p{1.2cm}} \hline
&book &cards &cd &dvd &landmarks &paintings &text &video &average \\ \hline
Without GV & 0.062 & 0.010 & 0.088 & 0.120 & 0.000 & 0.214 & 0.000 & 0.300 & 0.099 \\
With GV & 0.826 & 0.380 & 0.690 & 0.813 & 0.018 & 0.607 & 0.370 & 0.765 & 0.559 \\
With GV+CC & 0.868 & 0.505 & 0.790 & 0.893 & 0.058 & 0.654 & 0.473 & 0.818 & 0.632 \\ \hline
\end{tabular} \\
\end{table*}
\subsection{Computational cost and memory requirement}
Table~\ref{tab:timing} shows processing times for feature extraction (\textsf{Feature}), quantization (\textsf{Quantize}), Hamming distance calculation and voting (\textsf{Hamming}), and geometric verification (\textsf{GV}).
These durations are measured on a standard PC with a Core i7 2600 CPU and 32 GB of main memory (\textsf{PC}), and on an IS11S smartphone (released in 2011) with Qualcomm Snapdragon S2 MSM8655 1 GHz (\textsf{SP}).
Extracting substrings is quite simple and thus fast, so the computational cost is negligible.
The times required for \textsf{Feature} and \textsf{GV} are significantly longer on the smartphone, while those for \textsf{Quantize} and \textsf{Hamming} are not.
This is because, in our implementation, the \textsf{Quantize} and \textsf{Hamming} processes are sped up using ARM NEON SIMD operations on the smartphone.
We can see that the recognition rate is about 14 fps on the PC and about 2 fps on the smartphone, achieving real-time local MVS with no false positives.
Comparing our method with Zhou's method~\cite{zho_mm12}, the latter is a little faster because it does not require the quantization process in Table~\ref{tab:timing}.
However, the above acceleration results in non-optimal performance in terms of search precision as discussed in Section~\ref{sec:extended} and shown in Table~\ref{tab:result1}.
In terms of memory requirements, we assume that 100 images are indexed, 900 features are extracted per image, the number of VWs is 1024, and the length of the substring is 64.
Under these settings, $100{\times}900{\times}14$ bytes = 1.26 MB is required to store features in the inverted index, $1024{\times}32$ bytes {$\simeq$} 0.03MB for VWs,
and $1024{\times}64$ bytes {$\simeq$} 0.07 MB for the substring dictionary.
This amount of data can reasonably be included in application binary files such as .apk for Android or .ipa for iPhone.
For instance, the size of our test application apk, which includes all of the above data, was 3.66 MB.
Comparing our method and Zhou's method~\cite{zho_mm12}, Zhou's method does not need to store VWs (0.03 MB) and the substring dictionary (0.07 MB).
This memory footprint is relatively small compared with that of the inverted index, and it does not increase even if the number of images is increased.
\begin{table}[tb]
\centering
\caption{Processing times [sec] of the proposed MVS system.}
\label{tab:timing}
\begin{tabular}{c|>{\centering\arraybackslash}p{1cm}>{\centering\arraybackslash}p{1cm}>{\centering\arraybackslash}p{1cm}>{\centering\arraybackslash}p{1cm}|>{\centering\arraybackslash}p{1cm}} \hline
&Feature &Quantize &Hamming &GV &Total \\ \hline
PC &0.009 &0.015 &0.013 &0.035 &0.072 \\
SP &0.184 &0.076 &0.053 &0.211 &0.524 \\ \hline
\end{tabular} \\
\end{table}
\section{Conclusion}
In this paper, we proposed a stand-alone mobile visual search system based on binary features.
In our system, a VW-dependent substring extraction method and a new scoring method are used.
It is shown that the proposed system can improve retrieval accuracy with the same memory requirements as conventional methods.
Geometric verification using a constraint on the configuration of a transformed reference image achieved no false positive results.
In future work, we would like to confirm the scalability of our framework and apply it to a server-client system, where our substring method would be useful in reducing communication traffic between a mobile device and a search server.
{\small
\bibliographystyle{ieee}
\section{Introduction}
The magnetic behavior of type-II superconductors depends strongly on the
sample shape \cite
{brandt96,gurevich94,brandt93,jooss96a,vlasko,jooss96,zeldov,norris}.
Significant progress has recently been made in understanding the effects of
a sample aspect ratio on its magnetic behavior\cite{brandt96}, in
particular, in the case of thin films with the magnetic field normal to the
film plane (``perpendicular geometry'') \cite{gurevich94,brandt93}. Theory
\cite{brandt96,gurevich94,brandt93,jooss96a} and experiment \cite
{vlasko,jooss96} show that the magnetic behavior in the perpendicular
geometry has many distinctive features, essentially different from the
parallel geometry, e.g. a more complicated structure of the critical state
and the presence of geometrical barriers \cite{zeldov}.
A number of elegant analytical solutions for the perpendicular geometry (for
strips and disks) describe the Meissner state \cite{norris}, the mixed state \cite
{brandt96,brandt93}, and magnetic flux creep \cite{gurevich94}. These
solutions are based on the important ansatz that one can treat the film as an infinitesimally thin plane. Then, the current distribution related to vortex
bending does not influence the results of the analysis which deals only with
the current density and the vortex displacements averaged over the film
thickness. This approach was very successful in explaining the peculiarities
of the current density and the magnetic induction distribution across the
film plane. However, this approach cannot account for any {\em thickness
dependence} of both persistent current density $j$ \cite
{jooss96,mcelfresh,oneoverd,prozorov97,sheriff97} and magnetic relaxation
rate \cite{prozorov97,sheriff97} in thin films.
Explanation of the observed decrease of $j$ with the increase of the film
thickness $d$ is usually based on the idea that pinning on surfaces {\em %
perpendicular} to the direction of vortices is strong enough and must be
taken into account \cite{jooss96a,jooss96,mcelfresh}. However, as we
demonstrate below, this is not sufficient for understanding the thickness
dependence of the magnetic relaxation rate, which was found to decrease with
the increase of the film thickness \cite{prozorov97,sheriff97}.
Another explanation of the observed thickness dependence of the current
density may be based on collective pinning in a 2D regime, i. e., for
longitudinal correlation length $L$\ larger than the film thickness. This
case is carefully considered in \cite{brandt86}. In a 2D collective pinning
regime, the pinning is stronger for thinner samples. As a result, in this
model both the critical-current density and the creep barrier are larger in
thinner samples, contrary to the experimental results. Also, this scenario
is probably not relevant for the explanation of the experimental data
discussed below, because the thickness of our films $d\geq 800$\AA\ is
larger than $L\approx 40-100$\AA .
In order to understand the experimental results we calculate the current
density and magnetic induction distribution by using the 'two-mode
electrodynamics' theory suggested earlier to explain the AC response in bulk
materials \cite{ST}. The essence of this theory is that two length scales
govern the penetration of fields and currents into type-II superconductors.
The longer scale is of electrodynamic origin and, therefore, is more
universal: it exists, for example, in a superconductor in the Meissner state
(the London penetration depth) or, in a normal conductor (the skin depth).
The shorter scale is related to the vortex-line tension, so it is unique for
a type-II superconductor in the mixed state. This scale was introduced into
the continuous theory of type-II superconductors by Mathieu and Simon \cite
{MS} (see also \cite{brandt,MSS}). When applying the two-mode
electrodynamics to the critical state one may ignore the time variation,
i.e., the two-mode electrodynamics becomes the {\em two-mode electrostatics }%
theory.
Our analysis of a type-II thin superconducting film within the two-mode
electrostatics theory leads to the conclusion that for strong enough bulk
pinning, inhomogeneity of the current density becomes important, even in the
absence of surface pinning, if the film thickness exceeds the Campbell
penetration depth $\lambda _{C}$. Thus, inhomogeneity of the current
distribution throughout the film thickness is a {\em distinctive} and
inevitable feature of the perpendicular film geometry like, for example, the
geometrical barrier \cite{zeldov}. Inhomogeneity of the current distribution
is significantly enhanced if the critical state is supported by the surface
pinning. In this case, most of the current is confined to a layer of a depth
of the order of the intervortex distance, which is usually much smaller than
the London penetration depth $\lambda $ and film thickness. As a result of
this inhomogeneity, the {\it measured} average critical current density
becomes thickness dependent. This current inhomogeneity also causes a
thickness dependence of the magnetic relaxation rate. In the following we
present detailed calculations of the distribution of the current density $j$
and induction field $B$ in thin type-II superconducting film, resulting from
surface and/or bulk pinning. We then introduce the first critical state
model which takes into account the variation in $j$ throughout the film
thickness. Calculations based on this critical state model lead to a
thickness dependence in $j$ and magnetic relaxation rate. These predictions
are compared with the experimental data.
\section{Theory}
\subsection{Equations of electrodynamics for the mixed state in
perpendicular geometry}
Let us consider a thin superconducting strip, infinitely long in the $y$
-direction, with width $2w$ $\left( - w<x<w\right) $ and thickness $2d$ $%
\left( -d<z<d\right) $. External magnetic field $H$ is applied along the $z$
-axis, perpendicular to the film plane. The vortex density $n$ is determined
by the $z$-component $B_z$ of the average magnetic field (magnetic
induction) $\vec B$ in the film: $n=B_z/\Phi _0$. Supercurrent of density $%
I_y\left( x,z\right) $ flows along the $y$-axis resulting in a Lorenz force
in the $x$-direction, and a vortex displacement $u$ along the $x$-axis.
We begin with the electrodynamic equations describing the mixed state of
type-II superconductors in such a geometry. They include the London equation
for the $x$-component of the magnetic field:
\begin{equation}
B_{x}-\lambda ^{2}\frac{\partial ^{2}B_{x}}{\partial z^{2}}=B_{z}\frac{
\partial u}{\partial z}~, \label{Lon}
\end{equation}
the Maxwell equation:
\begin{equation}
{\frac{4\pi }{c}}j_{y}=\frac{\partial B_{x}}{\partial z}-\frac{\partial
B_{z} }{\partial x}~, \label{Max}
\end{equation}
and the equation of vortex motion:
\begin{equation}
\eta \frac{\partial u}{\partial t}+ku={\frac{\Phi _{0}}{c}}j_{y}+{\frac{\Phi
_{0}}{4\pi }}H^{*}\frac{\partial ^{2}u}{\partial z^{2}}~, \label{Vor}
\end{equation}
where
\begin{equation}
H^{*}=\frac{\Phi _{0}}{4\pi \lambda ^{2}} \ln \frac{a_{0}}{r_{c}}
\label{H-c}
\end{equation}
is a field of order of the first critical field $H_{c1}$, $a_{0}\simeq \sqrt{
\Phi _{0}/B_{z}}$ is the inter-vortex distance, and $r_{c}\sim \xi $ is an
effective vortex core radius. The equation of the vortex motion arises from
the balance among four terms: (i) the friction force proportional to the
friction coefficient $\eta $; (ii) the homogeneous, linear elastic pinning
force $\propto k$ (i. e. assuming small displacements $u$); (iii) the
Lorentz force proportional to the current density $j$; and (iv) the
vortex-line tension force (the last term on the right-hand side of Eq. (\ref
{Vor})), taken from Ref. \cite{ST}.
In the parallel geometry, ($d\rightarrow \infty $), vortices move without
bending so that the $x$-component $B_{x}$ is absent, and the Maxwell
equation becomes: $4\pi j_{y}/c=-\partial B_{z}/\partial x$. Since $B_{z}$
is proportional to the vortex density, this current may be called a {\em %
diffusion current}. The case of the perpendicular geometry, ($d\ll w$), is
essentially different: the diffusion current is small compared to the {\em %
bending current} $\partial B_{x}/\partial z$ (see the estimation below) and
may be neglected for calculation of the distribution throughout the film
thickness (along the $z$-axis). As a result, Eq. (\ref{Vor}) becomes
\begin{equation}
\eta \frac{\partial u}{\partial t}+ku=\frac{\Phi _{0}}{4\pi }\frac{\partial
B_{x}}{\partial z}+{\frac{\Phi _{0}}{4\pi }}H^{\ast }\frac{\partial ^{2}u}{%
\partial z^{2}}~. \label{Vor-tr}
\end{equation}
Equations (\ref{Lon}) and (\ref{Vor-tr}) determine the distribution of the
displacement $u(z)$ and of the in-plane magnetic induction $B_{x}(z)$. This
also yields a distribution of the current density $(4\pi
/c)j_{y}(z)=\partial B_{x}(z)/\partial z$. But these equations are still not
closed, since the two components of the magnetic induction, $B_{x}$ and $%
B_{z}$, and current density $j_{y}(z)$ are connected by the Biot-Savart law.
However, neglecting the diffusion current in the Maxwell equation we
separate the problem into two parts: (1) determination of the distribution
of fields and currents along the $z$- axis, taking the total current $%
I_{y}=cB_{x}^{s}/2\pi $ (here $B_{x}^{s}\equiv B_{x}\left( z=d\right) $) and
the perpendicular magnetic-induction component $B_{z}$ as free parameters;
(2) determination of the parameters $I_{y}$ and $B_{z}$ using the Biot-
Savart law. The latter part of the problem (solution of the integral
equation given by the Biot-Savart law) has already been studied carefully in
previous works \cite{brandt96,brandt93}. In the present work we concentrate
on the analysis of the distribution of fields and currents throughout the
film thickness ($z$-dependence).
The accuracy of our approach is determined by the ratio of the diffusion
current $\partial B_{z}/\partial x$ to the bending current $\partial
B_{x}/\partial z$, since we neglect the diffusion current contribution to
the total current. Suppose, as a rough estimation, that $B_{z}\sim B_{x}$%
\cite{vlasko}. Then, the diffusion current density is roughly $\sim I_{y}/w$%
, whereas the bending current density is $\sim I_{y}/d$ \cite
{brandt96,brandt93,vlasko,zeldov}. Thus, the ratio between the diffusion and
the bending current is approximately $d/w\sim 10^{-3}\div 10^{-4}$ for
typical thin films. Note that this condition does not depend on the
magnitude of the critical current and is well satisfied also in typical
single crystals, where $d/w\sim 0.01\div 0.1$. Therefore, the results we
obtain below hold for a wide range of typical samples used in the experiment.
\subsection{Two-mode electrostatics: Two length scales}
Let us consider the static case when vortices do not move, hence there is no
friction. Then, Eq. (\ref{Vor-tr}) becomes
\begin{equation}
ku={\frac{\Phi _{0}}{4\pi }}\frac{\partial B_{x}}{\partial z}+{\frac{\Phi
_{0}}{4\pi }}H^{\ast }\frac{\partial ^{2}u}{\partial z^{2}}~. \label{Vor-st}
\end{equation}
Excluding the $B_{x}$ component of the magnetic induction from Eqs. (\ref
{Lon}) and (\ref{Vor-st}) we obtain an equation for the vortex displacement:
\begin{equation}
-{\frac{4\pi k}{\Phi _{0}}}\left( u-\lambda ^{2}\frac{\partial ^{2}u}{%
\partial z^{2}}\right) +(H^{\ast }+B_{z})\frac{\partial ^{2}u}{\partial z^{2}%
}-\lambda ^{2}H^{\ast }\frac{\partial ^{4}u}{\partial z^{4}}=0~.
\label{vort-gen}
\end{equation}
The two length scales which govern distributions along the $z$-axis become evident if one tries to find a general solution of Eq. (\ref{vort-gen}) in the form $B_{x}\sim u\sim \exp (ipz)$. Then, the dispersion equation for $p$ is bi-quadratic and yields two negative values for $p^{2}$. In the limit $k\ll \Phi _{0}(H^{\ast }+B_{z})/4\pi \lambda ^{2}$ (weak bulk pinning):
\begin{equation}
p_{1}^{2}=-{\frac{1}{\widetilde{\lambda }^{2}}}=-{\frac{1}{\lambda ^{2}}}%
\frac{H^{\ast }+B_{z}}{H^{\ast }}~, \label{k-1}
\end{equation}
\begin{equation}
p_{2}^{2}=-{\frac{1}{\lambda _{C}^{2}}}=-\frac{4\pi k}{\Phi _{0}(H^{\ast
}+B_{z})}~, \label{k-2}
\end{equation}
Thus, the distribution along the $z-$axis is characterized by the two length
scales: the Campbell length $\lambda _C$, which is the electrodynamic
length, and length $\widetilde{\lambda }$, given by Eq. (\ref{k-1}), which
is related to $\lambda $ and the vortex-line tension.
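Substituting $B_{x}\sim u\sim \exp (ipz)$ into Eq. (\ref{vort-gen}) gives the bi-quadratic dispersion relation
\[
\lambda ^{2}H^{\ast }p^{4}+\left( H^{\ast }+B_{z}+\frac{4\pi k\lambda ^{2}}{\Phi _{0}}\right) p^{2}+\frac{4\pi k}{\Phi _{0}}=0~,
\]
whose two negative roots for $p^{2}$ reduce to Eqs. (\ref{k-1}) and (\ref{k-2}) in the weak-pinning limit quoted above.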
\subsection{Current density and field distribution}
In order to determine the distribution of currents and fields throughout the
film thickness, one must add the proper boundary conditions to the general
solution of Eq. (\ref{vort-gen}). We look for a solution which is a
superposition of two modes. In particular, for the vortex displacement we
can write:
\begin{equation}
u(z)=u_0\cosh {\frac z{\lambda _C}}+u_1\cosh {\frac z{\widetilde{\lambda }}}%
~. \label{u-pin}
\end{equation}
Using Eq. (\ref{Vor-st}) one has for the current density:
\begin{equation}
{\frac{4\pi }c}j_y=\frac{\partial B_x}{\partial z}\approx B_z\frac{u_0}{%
\lambda _C^2}\cosh {\ \frac z{\lambda _C}}-H^{*}\frac{u_1}{\widetilde{%
\lambda }^2}\cosh {\frac z{\widetilde{\lambda }}}~. \label{curr}
\end{equation}
The total current is
\begin{equation}
{\frac{4\pi }c}I_y=2B_x(d)=2B_z{\frac{u_0}{\lambda _C}}\sinh {\frac d{%
\lambda _C}}-2H^{*}{\frac{u_1}{\widetilde{\lambda }}}\sinh {\frac d{%
\widetilde{\lambda }}}~. \label{curr-tot}
\end{equation}
Equation (\ref{curr-tot}) is in fact a boundary condition imposed on the
amplitudes of two modes, $u_0$ and $u_1$. The second boundary condition is
determined by the strength of the surface pinning. If displacements are
small, the general form of this boundary condition is
\begin{equation}
\alpha u(\pm d)\pm \left. \frac{\partial u}{\partial z}\right| _{\pm d}=0~,
\label{BC}
\end{equation}
where $\alpha =0$ in the absence of surface pinning and $\alpha \rightarrow
\infty $ in the limit of strong surface pinning. In the following parts of
the section we consider these two limits.
\subsubsection{Surface pinning}
\label{SP}
Let us consider the case of surface pinning in the absence of bulk pinning ($%
k=0$), when the Campbell length $\lambda _{C}\rightarrow \infty $ (see Eq. (%
\ref{k-2})). By ``surface pinning'' we understand pinning due to surface
roughness on the surfaces {\it perpendicular} to the vortex direction. The
surface roughness is assumed to be much smaller than the film thickness $d$.
By substituting $\lambda _{C}\rightarrow \infty $ in the general solution
Eq. (\ref{u-pin}), we derive the displacement for surface pinning:
\begin{equation}
u(z)=u_{0}+u_{1}\cosh {\frac{z}{\widetilde{\lambda }}}~, \label{u-z}
\end{equation}
where $u_{0}$ and $u_{1}$ are constants, which can be determined from the
boundary conditions Eqs. (\ref{curr-tot}) and (\ref{BC}). Note, however,
that $u_{0}$ is not important in the case of surface pinning, because the
constant $u_{0}$ does not affect distributions of currents and fields.
The magnetic field $B_{x}$ is obtained from Eq. (\ref{Vor-st}):
\begin{equation}
B_{x}(z)=-H^{\ast }{\frac{u_{1}}{\widetilde{\lambda }}}\sinh {\frac{z}{%
\widetilde{\lambda }}}, \label{B-x}
\end{equation}
and the current is determined from the Maxwell equation (\ref{Max})
neglecting the diffusion current:
\begin{equation}
j_{y}=-{\frac{c}{4\pi }}H^{\ast }{\frac{u_{1}}{\widetilde{\lambda }^{2}}}%
\cosh {\frac{z}{\widetilde{\lambda }}}. \label{j-y}
\end{equation}
It is important to note that the characteristic length $\widetilde{\lambda }$
, which varies between the London penetration length $\lambda $ and the
inter-vortex distance $a_0\sim \sqrt{\Phi _0/B_z}$, is much smaller than $%
\lambda $ for a dense vortex array, $B_z\gg H^{*}$. Taking into account that
usually thin films have thickness less or equal to $2\lambda $, the effect
of the vortex bending due to surface pinning may be very important: most of
the current is confined to a thin surface layer of width $\widetilde{\lambda
}$.
The current density on the surface is $j_{s}\equiv j_{y}\left( z=d\right) =$
$-{\ \ }\left( c/4\pi \right) H^{\ast }\left( u_{1}/\tilde{\lambda}%
^{2}\right) \cosh \left( d/\tilde{\lambda}\right) $. Thus,
\begin{equation}
u_{1}=-\frac{4\pi }{c}\frac{\widetilde{\lambda }^{2}j_{s}}{H^{\ast }\cosh {%
\frac{d}{\tilde{\lambda}}}}. \label{u1}
\end{equation}
\noindent The total current integrated over the film thickness $2d$ is:
\begin{equation}
I_{y}=\int_{-d}^{d}j_{y}(z)dz=-{\frac{c}{2\pi }}H^{\ast }{\frac{u_{1}}{%
\tilde{\lambda}}}\sinh {\frac{d}{\tilde{\lambda}}=2}\widetilde{{\lambda }}%
j_{s}\tanh \frac{d}{\widetilde{\lambda }}. \label{J-y}
\end{equation}
Thus, the {\em average} current density $j_{a}\equiv I_{y}/2d$ - the
quantity derived in the experiment - decreases with thickness as
\begin{equation}
j_{a}=j_{s}\frac{\widetilde{\lambda }}{d}\tanh \frac{d}{\widetilde{\lambda }}%
, \label{ja-surf}
\end{equation}
yielding $j_{a}=j_{s}\widetilde{\lambda }/d$ for $\widetilde{\lambda }/d\ll 1$
as found experimentally \cite{prozorov97}.
The field and the current distribution over the film thickness are:
\begin{equation}
j_{y}(z)=\frac{I_{y}}{2\widetilde{\lambda }}\frac{\cosh {\frac{z}{\tilde{%
\lambda}}}}{\sinh {\frac{d}{\tilde{\lambda}}}}=j_{s}\frac{\cosh {\frac{z}{%
\tilde{\lambda}}}}{\cosh {\frac{d}{\tilde{\lambda}}}}, \label{j-y-J}
\end{equation}
\begin{equation}
B_{x}(z)=\frac{2\pi }{c}I_{y}\frac{\sinh {\frac{z}{\widetilde{\lambda }}}}{%
\sinh {\ \frac{d}{\widetilde{\lambda }}}}=\frac{4\pi }{c}j_{s}\widetilde{%
\lambda }\frac{\sinh {\frac{z}{\widetilde{\lambda }}}}{\cosh {\frac{d}{%
\widetilde{\lambda }}}}. \label{B-x-J}
\end{equation}
Thus, the current penetrates into a small depth $\tilde{\lambda}$ and is
exponentially small in the bulk beyond this length.
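As a numerical illustration of Eq. (\ref{ja-surf}), the following short sketch (Python, with purely illustrative numbers) shows how the measured average current density falls off with the film half-thickness:
\begin{verbatim}
import numpy as np

def j_avg(d, lam, j_s=1.0):
    # average current density j_a = j_s * (lam/d) * tanh(d/lam)
    return j_s * (lam / d) * np.tanh(d / lam)

lam_tilde = 450.0                          # Angstrom, illustrative value only
for d in (400.0, 500.0, 1000.0, 1500.0):   # half-thicknesses, Angstrom
    print(d, j_avg(d, lam_tilde))          # tends to j_s*lam/d for d >> lam
\end{verbatim}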
\subsubsection{Bulk pinning}
A remarkable feature of the perpendicular geometry is that, even in the
absence of surface pinning, vortices are bent. This is in striking contrast
with the parallel geometry where the diffusion current distribution is
homogeneous along the direction of vortices and, therefore, does not bend
them. Absence of surface pinning means that at the surface $\partial
u/\partial z=0$ (a vortex is perpendicular to an ideal surface). This yields
the relation between $u_0$ and $u_1$ [see Eq. (\ref{u-pin})]:
\[
u_1=-u_0\frac{\widetilde{\lambda }}{\lambda _C}\frac{\sinh \frac d{\lambda _C}}{\sinh \frac d{\widetilde{\lambda }}}
\]
Then, Eq. (\ref{curr-tot}) becomes
\begin{equation}
{\frac{4\pi }c}I_y=2(B_z+H^{*}){\frac{u_0}{\lambda _C}}\sinh {\frac d{
\lambda _C}}~. \label{curr-tot1}
\end{equation}
\noindent The current distribution is
\begin{equation}
j_y(z)=I_y\left( \frac 1{2\lambda _C}\frac{B_z}{H^{*}+B_z}\frac{\cosh {\frac %
z{\lambda _C}}}{\sinh {\frac d{\lambda _C}}}+{\frac 1{2\widetilde{\lambda }}}
\frac{H^{*}}{H^{*}+B_z}\frac{\cosh {\frac z{\widetilde{\lambda }}}}{\sinh {\
\frac d{\widetilde{\lambda }}}}\right) ~. \label{cur-dist}
\end{equation}
In the limit $d\ll \lambda _C$ Eq. (\ref{cur-dist}) yields
\begin{equation}
j_y(z)=I_y\left( \frac 1{2d}\frac{B_z}{H^{*}+B_z}+{\frac 1{2\tilde \lambda }}
\frac{H^{*}}{H^{*}+B_z}\frac{\cosh {\frac z{\widetilde{\lambda }}}}{\sinh {\
\frac d{\tilde \lambda }}}\right) ~. \label{d-sm}
\end{equation}
Another interesting case is that of the dense vortex array, $B_{z}\gg
H^{\ast }$:
\begin{equation}
j_{y}(z)=\frac{I_{y}}{2\lambda _{C}}\frac{\cosh {\frac{z}{\lambda _{C}}}}{%
\sinh {\ \frac{d}{\lambda _{C}}}}=j_{s}\frac{\cosh {\frac{z}{\lambda _{C}}}}{%
\cosh {\frac{d}{\lambda _{C}}}}~, \label{cur-Camp}
\end{equation}
where again $j_{s}$ is the current density on the film surface. Remarkably,
current density is inhomogeneous even in the absence of surface pinning. We
illustrate this in Fig.1, where we plot $j_{y}\left( z\right) /j_{b}$ vs. $%
z/d$ at different ratios $d/\lambda _{C}$. ``Uniform'' bulk current density $%
j_{b}=I_{y}/2d$ corresponds to the limit $d/\lambda _{C}=0$. Physically,
such current profiles reflect Meissner screening of the in-plane component $%
B_{x}$ of the self-field.
For the average current density we have
\begin{equation}
j_a=j_s\frac{\lambda _C}d\tanh \frac d{\lambda _C}~, \label{ja-bulk}
\end{equation}
which is similar to the case of the surface pinning, Eq. (\ref{ja-surf}),
with $\widetilde{\lambda }$ replaced by $\lambda _C$.
Thus, in the perpendicular geometry, the current distribution is strongly
inhomogeneous: the whole current is confined to a narrow surface layer of
width $\widetilde{\lambda }$ (surface pinning), or $\lambda _C$ (bulk
pinning).
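A short numerical sketch of the normalized profile of Eq. (\ref{cur-Camp}), analogous to Fig.1 (the values of $d/\lambda _{C}$ are illustrative only):
\begin{verbatim}
import numpy as np

def profile_over_jb(z_over_d, d_over_lamC):
    # j_y(z)/j_b for bulk pinning, with j_b = I_y/2d
    x = d_over_lamC
    return x * np.cosh(x * z_over_d) / np.sinh(x)

z = np.linspace(-1.0, 1.0, 5)      # positions z/d across the film
for ratio in (0.1, 1.0, 3.0):      # values of d/lambda_C
    print(ratio, np.round(profile_over_jb(z, ratio), 2))
# ratio -> 0 recovers the uniform profile j_y = j_b
\end{verbatim}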
\subsection{Critical state}
In the theory given in the previous sections we have assumed that currents
and vortex displacements are small. In this section we deal with the
critical state when the current density equals its critical value $j_c$. Let
us consider how it can affect our picture, derived in the previous sections
for small currents.
\subsubsection{Surface pinning}
If vortices are pinned only at the surface, the value of the critical
current depends on the profile of the surface, and one may not use the
linear boundary condition imposed on the vortex displacement, Eq. (\ref{BC}
). However, the $z$-independent vortex displacement $u_0$ does not influence
the current density and field distribution in the bulk as shown in Sec. \ref
{SP} (see Eqs. (\ref{B-x}) and (\ref{j-y})). Therefore the bulk current
density and field distribution derived from our linear analysis can be used
even for the critical state.
\subsubsection{Bulk pinning}
\label{BP}
In this case our theory must be modified for the critical state. In
particular, for large currents the bulk pinning force becomes nonlinear and,
as a result, the current and field penetration is not described by simple
exponential modes. Formally, this nonlinearity may be incorporated into our
theory assuming a $u$-dependent pinning constant $k$, thus allowing $k$ to
vary along the vortex line. As an example, let us consider the case of
strongly localized pinning force when the vortex is pinned by a potential
well of a small radius $r_{d}$ like that sketched in Fig.2: the vortex
energy per unit length (vortex-line tension) is given by $\varepsilon $ for
vortex line segments outside the potential well and by $\varepsilon _{0}$
for segments inside the well. Thus, the pinning energy per unit length is $%
\varepsilon -\varepsilon _{0}$. In fact, such a potential well model may
describe pinning of vortices by, for example, one-dimensional columnar
defects or planar defects, such as twin or grain boundaries\cite
{nucl,Sonin95}. The latter is relevant in thin films obtained by the usual method of laser ablation. Therefore, we can also use such a
pinning potential as a rough qualitative model for typical types of pinning
sites, in order to illustrate the effect of bulk pinning on the current
density distribution and the rate of magnetic relaxation in thin films.
If the current distribution were uniform, such a potential well would keep
the vortex pinned until the current density $j_{y}$ exceeds the critical
value $c(\varepsilon -\varepsilon _{0})/\Phi _{0}r_{d}$. The escape of the
trapped vortex line from the potential well occurs via formation of the
un-trapped circular segment of the vortex line (see Fig.3(a)). In this case,
both the critical-current density and the energy barrier for vortex
depinning do not depend on film thickness \cite{nucl}.
But, in perpendicular geometry the current distribution is not homogeneous.
In order to find it for the critical state, we may use the following
approach. The vortex line consists of the trapped and untrapped segments as
shown in Fig.3(b). The untrapped segment is beyond the potential well,
therefore there is no bulk pinning force acting on it. This means that the
shape of this segment is described by Eq. (\ref{Vor-st}) with $k=0$.
Applying the theory of Sec. \ref{SP}, one obtains that the total current $%
I_{y}=\int_{-d}^{d}j_{y}(z)dz$ is concentrated near the film surfaces within
a narrow surface layer of width $\widetilde{\lambda }$. Inside the surface
layer the vortex line is curved, but has a straight segment of length $L$
outside the layer, as illustrated in Fig.3(b). As for the vortex-line
segment trapped by the potential well, we assume that it is straight and
vertical, neglecting its possible displacements inside the potential well.
Formally speaking, our approach introduces a non-homogeneous bulk-pinning
constant $k$ assuming that $k=0$ for the untrapped segment and $k=\infty $
for the trapped one. The energy of the vortex line in this state is
determined by the line tensions ($\varepsilon $ and $\varepsilon _{0}$) and
is given by
\begin{equation}
E=2\varepsilon {\frac{L}{\cos \alpha }}-2\varepsilon _{0}L-2{\frac{\Phi _{0}%
}{c}}I_{y}L\tan \alpha =2L\tan \alpha \left( \varepsilon \sin \alpha -{\frac{%
\Phi _{0}}{c}}I_{y}\right) ~, \label{E-J}
\end{equation}
where the contact angle $\alpha $ is determined by the balance of the
line-tension forces at the point where the vortex line meets the line
defect:
\begin{equation}
\cos \alpha =\frac{\varepsilon _{0}}{\varepsilon }~. \label{alpha}
\end{equation}
\section{Magnetic relaxation}
\label{relaxation}
We now discuss the effect of current density distribution on the thickness
dependence of magnetic relaxation. We first show below that a uniform current density cannot explain the experimentally observed thickness dependence. We also show that an inhomogeneous current density distribution resulting from surface pinning only cannot explain the experimental data either. We demonstrate that only the presence of bulk pinning and the resulting current inhomogeneity may lead to accelerated relaxation in thinner films. We
finally discuss the general case when both bulk and surface pinning are
present.
As pointed out above, if the current distribution is uniform throughout the
film thickness, a trapped vortex may escape from the potential well (Fig.2)
via formation of a circular segment of the vortex line (Fig.3(a)), with the
energy
\begin{equation}
E=\varepsilon L-\varepsilon _{0}L_{0}-{\frac{\Phi _{0}}{c}}j_{y}S,
\label{E-par}
\end{equation}
where $L$ and $L_{0}$ are the lengths of the vortex line segment after and before formation of the loop, respectively, and $S$ is the area of the loop \cite{Sonin95,nucl}%
. If the loop is a circular arc of the radius $R$ and the angle $2\alpha $
(Fig.3(a)), then $L_{0}=2R\sin \alpha $, $L=2R\alpha $, and $S={\frac{1}{2}}%
R^{2}(2\alpha -\sin 2\alpha )$, where the contact angle $\alpha $ is given
by Eq. (\ref{alpha}). Then,
\begin{equation}
E=2R(\varepsilon \alpha -\varepsilon _{0}\sin \alpha )-\frac{\Phi _{0}}{2c}%
j_{y}R^{2}(2\alpha -\sin 2\alpha )=(2\alpha -\sin 2\alpha )\left(
\varepsilon R-{\ \frac{\Phi _{0}}{2c}}j_{y}R^{2}\right) . \label{E-par-fin}
\end{equation}
The height of the barrier is determined by the maximum energy at $%
R_{c}=\varepsilon c/\Phi _{0}j_{y}$:
\begin{equation}
E_{b}=(2\alpha -\sin 2\alpha )\frac{\varepsilon ^{2}c}{2\Phi _{0}j_{y}}.
\label{bar}
\end{equation}
As one might expect, this barrier and consequently the relaxation rate do
not depend on the film thickness. We stress that this estimation is valid
only for $d>R_{c}$. If $d<R_{c}$\ the energy barrier is obtained from Eq. (%
\ref{E-par-fin}) by substituting $R=d.$ This case of uniform current,
however, leads to a thickness independent current density, and therefore
cannot describe the experimental data.
\subsection{Surface pinning}
In this case, the whole current is confined to the surface layer of width $%
\widetilde{\lambda }$. It is apparent from Eq.\ (\ref{k-1}) that for typical
experimental fields ($\sim 1T$) $\widetilde{\lambda }$ is smaller than the
film thickness. This means that current flows mostly in a thin surface
layer. Thus, all creep parameters, including the creep barrier, are governed
by the total current $I_y$, and not by the average current density $I_y/2d$.
Then, apparently, the critical current density and the creep barrier are
larger for thinner films, similar to the case of the collective- pinning
effect mentioned above. Thus, also this scenario cannot explain the observed
accelerated relaxations in the thinner films.
\subsection{Short-range bulk pinning}
\label{creep-bulk}
Let us consider the relaxation process for a critical state supported by the
short-range pinning force discussed in Sec. \ref{BP}. The energy $E$ of the
vortex line is given by Eq. (\ref{E-J}). The average critical current
density corresponds to $E=0$ and is inversely proportional to the film
thickness [see also Eq. (\ref{ja-bulk})]:
\begin{equation}
j_{c}=\frac{I_{c}}{2d}=\frac{c\varepsilon }{2d\Phi _{0}}\sin \alpha ~.
\label{j-c}
\end{equation}
The energy barrier is given by the maximum energy at $d=L+\tilde{\lambda}%
\approx L$ when the whole vortex line has left the potential well
(Fig.4(a)):
\begin{equation}
E_{b}=\tan \alpha \left( 2d\varepsilon \sin \alpha -4d^{2}{\frac{\Phi _{0}}{c%
}}j_{a}\right) ~, \label{bar-J}
\end{equation}
where $j_{a}=I_{y}/2d$ is the average current density. If $%
j_{c}>j_{a}>j_{c}/2$, then $\partial E_{b}/\partial d<0$, i.e., the barrier
is larger for thinner films. But, for $j_{a}<j_{c}/2$ the derivative $%
\partial E_{b}/\partial d>0$ , and the barrier {\em increases} with the
increase of the film thickness. Thus, under this condition ($j_{a}<j_{c}/2$)
the magnetic relaxation rate is larger in the thinner samples.
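Indeed, differentiating Eq. (\ref{bar-J}) at fixed average current density $j_{a}$ and using Eq. (\ref{j-c}) to express $\varepsilon \sin \alpha $ through $j_{c}$, one finds
\[
\frac{\partial E_{b}}{\partial d}=\tan \alpha \left( 2\varepsilon \sin \alpha -8d{\frac{\Phi _{0}}{c}}j_{a}\right) =\frac{4d\Phi _{0}}{c}\tan \alpha \left( j_{c}-2j_{a}\right) ~,
\]
which changes sign at $j_{a}=j_{c}/2$.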
The above analysis did not take into account the possibility of dense defects. By ``dense'' we mean that the distance $r_{i}$ to the neighboring potential well is less than $d\tan \alpha $ (see Fig.4(b)). In this case the
maximal energy (the barrier peak) is smaller than the barrier calculated in
Eq. (\ref{bar-J}). Then the barrier energy is given by
\begin{equation}
E_{b}=r_{i}\left( 2\varepsilon \sin \alpha -4d{\frac{\Phi _{0}}{c}}%
j_{a}\right) \label{bar-def}
\end{equation}
In this case $\partial E_{b}/\partial d<0$ and the energy barrier for
thinner films is always larger. Therefore one can see faster relaxation in
thinner films only if the films are so thin that $d<r_{i}/\tan \alpha $ and
the energy barrier is given by Eq. (\ref{bar-J}). From the experimental
results shown below we infer that the average distance between effective
defects is $r_{i}\geq 1000\ \AA $, in agreement with direct measurements using
atomic force microscopy.
To conclude, if the average current density in thin films becomes small
enough compared to the original critical current density and if the films
are thin enough, the relaxation {\em at the same average persistent current}
is predicted to be faster for the thinner films.
\subsection{General case}
\label{creep-general}
In the simplified picture of the critical-state relaxation outlined in the
previous subsection, the total current was concentrated within a very thin
layer of the width $\widetilde{\lambda }$. It was based on the assumption
that the pinning force disappears when the vortex line leaves the small-size
potential well, whereas inside the potential well the pinning force is very
strong. As a result, outside the thin surface layers of the width $%
\widetilde{\lambda }$ the vortex line consists of two straight segments
(Figs.3(b) and 4). In the general case, the distribution of the pinning
force may be smoother and the shape of a vortex line is more complicated. In
addition, interactions between the vortices may modify the barrier for flux
creep as well. However, the tendency must be the same: the current confined
in a narrow surface layer drives the end of a vortex line away from the
potential well to regions where the pinning force is weaker; the vortex line is then quite straight, with a length proportional to the thickness of the film if the latter is thin enough. Therefore, the barrier height for the
vortex jump is smaller for smaller $d$.
We also note that we do not consider an anisotropic case and limit our
discussion to isotropic samples. The effect of anisotropy on the barrier
height was considered in detail in Ref.~\cite{nucl}. In the presence of
anisotropy the circular loop becomes elliptic and the vortex-line tension $%
\varepsilon $\ must be replaced by some combination of vortex-line tensions
for different crystal directions. These quantitative modifications are not
essential for our qualitative analysis.
Our scenario assumes that the current is concentrated near the film
surfaces. In general, the width of the current layer may vary from $\widetilde{\lambda }$ to the effective Campbell length $\lambda _{C}$. One may then expect
a {\em non-monotonous} thickness dependence when $\lambda _{C}$ is
comparable with $d$. As we see, the Campbell length is an important quantity
in determining whether current density inhomogeneity must be taken into
account or not (in the absence of the surface pinning). The length $\lambda
_{C}$ can be estimated from the micro-wave experiments: according to
Golosovskii {\it et al.}\cite{golosovskii} $\lambda _{C}\simeq 1000\sqrt{H}\
\AA $, where the field $H$ is measured in $Tesla$. For $H\simeq 0.2$ $T$
this results in $\lambda _{C}\approx 450\ \AA $\ or $2\lambda _{C}\approx
900\ \AA $, which has to be compared with the film thickness.
\section{Comparison with the experiment}
A decrease of the measured current density with an increase of the film
thickness is reported in numerous experimental works \cite
{jooss96,oneoverd,prozorov97,sheriff97}. This is consistent with the
predictions given above for either surface or/and bulk pinning. Both pinning
mechanisms predict similar $1/d$ dependence of $j$ and it is, therefore,
impossible to distinguish between surface and bulk pinning in this type of
measurements. Only the additional information from the thickness dependence
of the relaxation rate allows the drawing of some conclusions about the
pinning mechanisms.
Magnetic relaxation measurements in films of different thickness are
discussed in detail in \cite{prozorov97,sheriff97}. Using excerpts from the
data reported there we demonstrate an agreement of these data with our
theory.
Measurements were conducted on four $5\times 5$ $mm^{2}$ $%
YBa_{2}Cu_{3}O_{7-\delta }$ films of thickness $2d=800,\ 1000,\ 2000$ and $%
3000\ \AA $, prepared by the laser ablation technique on $SrTiO_{3}$
substrates \cite{koren}. All samples had $T_{c}\approx 89\ K$. The
morphology of the samples was examined by the atomic-force microscopy (AFM) technique and was found to be similar: the average grain size is $(1-50)\times 10^{2}\ \AA $\ and the intergrain distance is $50\ \AA $ (for a typical AFM picture of our samples, see Fig. 1(c) in \cite{sheriff97}). The magnetic
moment was measured as a function of field, temperature and time, using a
{\it Quantum Design} SQUID magnetometer.
The {\em average} persistent current density was extracted from the magnetic
hysteresis loops using the Bean model adapted for our case: $j_{a}\left[
A/cm^{2}\right] =30M/da^{3}$, where $M\left[ emu\right] $ is the
irreversible magnetic moment, $d\left[ cm\right] $ is a half of the film
thickness and $a=0.5\ cm$ is the lateral dimension. Fig.5 shows the
persistent current density $j$ at $T=5\ K$ as a function of the applied
magnetic field $H$. Apparently, $j$ is larger in thinner films. The same
trend is found at all temperatures. These observations are in good agreement
with Eqs. (\ref{ja-surf}) and (\ref{ja-bulk}). We note, however, that since
the value of $j_{s}$ is not known, we cannot determine whether pure surface, pure bulk, or a mixed type of pinning dominates. On the other hand, it is
unlikely that the observed thickness dependence is due to changes in the
density of pinning centers with thickness, since the films' morphology is
similar for all of our samples. This is further indirectly confirmed by the
relaxation measurements. A decrease of the current density due to an increase of
the mean grain size in thicker films would simultaneously result in faster
relaxation, contrary to our observations.
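For reference, the Bean-model conversion $j_{a}=30M/(da^{3})$ used above to extract the current density from the hysteresis loops can be sketched as follows; the moment value in the example is a placeholder.
\begin{verbatim}
# Bean-model conversion of the irreversible moment M [emu] into the average
# persistent current density j_a [A/cm^2] = 30*M/(d*a^3), with d the half
# thickness [cm] and a the lateral dimension [cm].
def bean_current_density(M_emu, thickness_2d_angstrom, a_cm=0.5):
    d_cm = 0.5 * thickness_2d_angstrom * 1e-8   # half thickness in cm
    return 30.0 * M_emu / (d_cm * a_cm**3)

# Example with a placeholder moment of 1e-3 emu for a 1000 A thick film.
print(bean_current_density(1e-3, 1000))
\end{verbatim}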
Fig.6 shows typical relaxation curves at $H=0.2\ T$ (ramped down from $1\
Tesla$) measured in films of different thickness. The interesting and
unexpected feature is that the curves cross, i.e., the relaxation is faster in
thinner films. This is further illustrated in Fig.7 where $j$ vs. $d$ is
plotted at different times. At the beginning of the relaxation process, the
average current density in the thinner films is larger. However, in the
thinner films, the current density decreases much faster than in the thicker
ones; as a result, $j_{a}$ exhibits a non-monotonic dependence on thickness
at later times, as shown in Fig.7. The faster relaxation in thinner films is
in qualitative agreement with our results, discussed in Sec. \ref{relaxation}%
, in particular in subsections \ref{creep-bulk} and \ref{creep-general}.
There, we find that such acceleration of the relaxation in thinner films may
be understood only if we consider inhomogeneous bulk current density. In
reality, it is very probable that {\em both} surface and bulk pinning
mechanisms lead to inhomogeneous current density with a characteristic
length scale in between the short (surface pinning) length $\widetilde{%
\lambda }$ and the larger Campbell length.
\section{Summary and Conclusions}
Based on the two-mode electrostatics approach, we have built a consistent theory
of the critical state in thin type-II superconducting films {\em throughout
the film thickness}. We show that, irrespective of the pinning mechanism, the
current density is always larger near the surface and decays over a
characteristic length scale, which lies between $\widetilde{\lambda}$ (of
the order of the intervortex distance) and the Campbell length $\lambda_C$. The
length scale $\widetilde{\lambda}$ is determined by the (finite) vortex
tension and by the boundary conditions, which force vortices to be
perpendicular to the surface of the superconductor, whereas the Campbell length
$\lambda_C$ is determined by the bulk pinning potential.
Following this novel physical picture we conclude that:
\begin{itemize}
\item Current density and magnetic induction in thin films in perpendicular
field are highly inhomogeneous throughout the film thickness. Surface
pinning significantly enhances these inhomogeneities.
\item The average current density decreases with increasing film thickness
approximately as $1/d$.
\item Magnetic relaxation is {\em slower} in thinner films in the following
cases: (1) In the absence of bulk pinning, i.e., only surface pinning is
effective. (2) In the presence of bulk pinning, provided that the ratio
between thickness and distance between neighboring defects is above a
certain threshold ($d/a\sim 1)$.
\item Magnetic relaxation is {\em faster} in thinner films only if bulk
pinning is effective and the ratio $d/a$ is below this threshold.
\end{itemize}
In the experimental data presented here the measured average current $j_{a}$
decreases with increasing film thickness as predicted, and the
relaxation rate is larger for the thinner films, suggesting that $d/a\sim 1$
and that the effective distance between defects is $\geq 1000\ \AA$.
{\sl Acknowledgments:} We thank V. Vinokur, E. H. Brandt, L. Burlachkov, E.
Zeldov and M. Golosovskii for useful discussions. This work was supported in
part by a grant from the Israel Science Foundation and the Heinrich Hertz
Minerva Center for High Temperature Superconductivity, and in part by the
German--Israeli Foundation under Grant $128-3-3/95$. R. P. acknowledges
support from the Clore Foundations. E. B. S. acknowledges support from the
Lady Davis Grant and thanks the Racah Institute of the Hebrew University for
hospitality.
\subsection*{Tight-binding Hamiltonian}
We obtain the tight-binding (TB) Hamiltonian $\mathcal{H}_{\mathrm{TB}} = \sum_{i,j} \sum_{\mu,\nu} t_{\mu\nu}^{ij} \cre{i\mu} \can{j\nu} $ on the maximally localized Wannier functions (MLWF)
from the density functional theory (DFT) calculation with the PBEsol functional. The indices $\mu$ and $\nu$ denote the $t_{2g}$ orbitals.
Some parameters between nearest neighbors and next-nearest neighbors are obtained as
$ t^{\pm1,0,0}_{xy,xy} = -0.388 $,
$ t^{\pm1,0,0}_{yz,yz} = -0.325 $,
$ t^{0,\pm1,0}_{yz,yz} = -0.048 $,
$ t^{\pm1,\pm1,0}_{xy,xy} = -0.130 $,
$ t^{\pm1,\pm1,0}_{yz,yz} = -0.017 $, and
$ t^{\pm1,\pm1,0}_{xy,yz} = 0.010 $.
We introduce the spin-orbit coupling (SOC), $\lambda^{}_\mathrm{SOC} \sum_i \bm{l}_i \cdot \bm{s}_i$ (where $i$ is the index for electrons on a site), in the TB Hamiltonian.
By fitting the Wannier band structure to the DFT bands including the SOC, the strength of the SOC, $\lambda^{}_\mathrm{SOC}$, is set to 0.1 eV (as shown in Fig.~\ref{fig:dft}).
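For illustration, a minimal NumPy sketch of the spinless, in-plane part of $\mathcal{H}_{\mathrm{TB}}$ assembled from the hoppings quoted above is shown below; on-site energies, the small $t_{xy,yz}$ inter-orbital term, and the SOC matrix $\lambda^{}_\mathrm{SOC}\,\bm{l}\cdot\bm{s}$ are omitted for brevity, and the in-plane lattice constant is set to unity.
\begin{verbatim}
# Minimal sketch: spinless in-plane t2g Bloch Hamiltonian H(k) built from
# the Wannier hoppings quoted above (energies in eV).  On-site energies,
# the xy-yz inter-orbital term and the SOC matrix are omitted.
import numpy as np

t_xy_100, t_xy_110 = -0.388, -0.130   # t^{(1,0,0)}_{xy,xy}, t^{(1,1,0)}_{xy,xy}
t_yz_100, t_yz_010, t_yz_110 = -0.325, -0.048, -0.017

def h_t2g(kx, ky):
    e_xy = 2*t_xy_100*(np.cos(kx) + np.cos(ky)) + 4*t_xy_110*np.cos(kx)*np.cos(ky)
    e_yz = 2*t_yz_100*np.cos(kx) + 2*t_yz_010*np.cos(ky) \
           + 4*t_yz_110*np.cos(kx)*np.cos(ky)
    # zx follows from yz by interchanging the two in-plane directions
    e_zx = 2*t_yz_100*np.cos(ky) + 2*t_yz_010*np.cos(kx) \
           + 4*t_yz_110*np.cos(kx)*np.cos(ky)
    return np.diag([e_xy, e_yz, e_zx])

print(np.round(np.diag(h_t2g(0.0, 0.0)), 3))   # band energies at Gamma
\end{verbatim}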
We perform the dynamical mean-field theory (DMFT) calculation on top of this TB Hamiltonian with $j_{\textrm{eff}}$-basis states.
We also perform LDA calculations and obtain the corresponding MLWF and TB Hamiltonian.
The LDA-derived Hamiltonian has slightly reduced bandwidths compared to the PBEsol one.
This gives only a minor correction to the magnitude of the self-energies for the fixed $U$ and $J_H$ of interest in the DMFT calculation.
We confirm that this correction does not affect our main result significantly;
it is only expected to produce a small shift of the critical Hund's coupling strength associated with the resulting correlation effects.
\begin{figure}[hp]
\includegraphics[width=0.5\columnwidth]{supp_fig1}
\caption{
(a) DFT and MLWF bands of Sr$_2$RuO$_4$ along the high-symmetry lines.
(b) Density of states of MLWF bands on $t_{2g}$\xspace.
}
\label{fig:dft}
\end{figure}
\subsection*{Orbital-dependent correlations}
\begin{figure}[!tbh]
\includegraphics[width=\columnwidth]{supp_fig2}
\caption{
(a) Imaginary part of the Matsubara self-energy projected onto the $t_{2g}$\xspace basis in Sr$_2$RuO$_4$ at $T\simeq10$~K.
The inset shows the log-log plot with fitting lines indicating different power exponents: 0.75 for $d_{xy}$\xspace and 0.91 for $d_{yz}$\xspace/$d_{zx}$\xspace.
Power exponent $\alpha$ of the $d_{yz}$\xspace component (b) in the $T$-$J_H$ plane and (c) in the $T$-$n$ plane.
}
\label{fig:self}
\end{figure}
From the DFT band structure and density of states, the effective model of Sr$_2$RuO$_4$ can be described by two degenerate $d_{yz}$\xspace and $d_{zx}$\xspace bands and the remaining $d_{xy}$\xspace band.
This difference between the bands enables Sr$_2$RuO$_4$ to exhibit orbital selectivity, which is one of the remarkable aspects of correlation effects driven by the Hund's coupling.
Figure~\ref{fig:self}(a) shows the orbital-dependent correlations in the self-energy on $t_{2g}$\xspace.
The imaginary part of the self-energy of the $d_{xy}$\xspace orbital is more enhanced than that of the $d_{yz}$\xspace/$d_{zx}$\xspace orbitals near $\omega=0$.
This difference is most noticeable in the low-energy excitations and persists up to 0.5 eV.
The Matsubara-frequency self-energy near $\omega=0$ is associated with the renormalized mass and the renormalization factor when its imaginary part follows a linear (Fermi-liquid) behavior in $\omega_n$.
From the $T$=0 data, we compute and compare the renormalized masses $m^*$ in the $t_{2g}$\xspace and $j_{\textrm{eff}}$ bases, defined as $m^*= 1-\partial_\omega \mathrm{Im}\Sigma(i\omega) |_{\omega \rightarrow 0}$.
We obtain $(m^*_{(\frac{3}{2},\frac{1}{2})} , m^*_{(\frac{1}{2},\frac{1}{2})} , m^*_{(\frac{3}{2},\frac{3}{2})})\simeq(6.03,5.58,4.11)$, and $(m^*_{xy} , m^*_{yz} , m^*_{zx})\simeq(7.37,4.21,4.21)$ in Sr$_2$RuO$_4$.
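As an aside, a minimal sketch of how such mass enhancements can be extracted from $\mathrm{Im}\Sigma(i\omega_n)$ is given below; the low-frequency slope is estimated from a polynomial fit to the lowest Matsubara points, and the data in the example are synthetic placeholders.
\begin{verbatim}
# Sketch: m* = 1 - d Im Sigma(i w_n)/d w_n |_{w_n -> 0}, with the slope
# estimated from a low-order polynomial fit to the lowest Matsubara points.
import numpy as np

def mass_enhancement(wn, im_sigma, n_fit=4, order=2):
    coeffs = np.polyfit(wn[:n_fit], im_sigma[:n_fit], order)
    return 1.0 - np.polyval(np.polyder(coeffs), 0.0)

beta = 1160.0                                 # ~10 K in 1/eV
wn = (2*np.arange(6) + 1) * np.pi / beta      # fermionic Matsubara frequencies
print(mass_enhancement(wn, -4.0*wn + 0.5*wn**2))   # synthetic data with m* = 5
\end{verbatim}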
\subsection*{Comparison to the case in the absence of SOC}
\begin{figure}[!tbh]
\includegraphics[width=\columnwidth]{supp_fig3}
\caption{
Power exponent $\alpha$ of the imaginary part of the Matsubara self-energy
(a) of the $d_{xy}$\xspace-component and (b) of the $d_{yz}$\xspace-component.
(c) Long-time correlator $\chi^{S}_{zz}(\tau=\beta/2)$ in the $T$-$n$ plane.
}
\label{fig:zerosoc}
\end{figure}
\section{Introduction}
\label{sec:intro}
Users increasingly rely on cloud-based machine learning models to process their personal data~\cite{chen2020fedhealth,geyer2017differentially,liang2020think,xu2019federated, mcgraw2016personalized, gong2020cloud, anggraini2018speech}.
For example, in cloud-based automatic speech recognition (ASR) systems, audio data recorded on client-side mobile devices are typically uploaded to centralized servers for server-side processing~\cite{leroy2019federated, mcgraw2016personalized, gong2020cloud, anggraini2018speech}, which enables general server-side ASR models to improve over time and enjoy economies of scale. However, there is growing concern that sending raw speech data to the cloud leaves users vulnerable to giving away sensitive personal biometric information such as gender, age, race, and other social constructs~\cite{singh2019profiling}.
In order to have full control over their privacy, users should be able to encrypt their data themselves before uploading to downstream applications. We refer to this type of privacy as \textbf{client-side privacy}, which requires removing sensitive information on the client device while keeping the resulting data compatible with cloud-based services. For example, for a cloud-based ASR service, a client-side privacy algorithm needs to remove biometric information from raw speech data while keeping the resulting audio signal useful for training the ASR model on the cloud.
As a step towards achieving client-side privacy for speech data, we contribute the following in this work:
\begin{enumerate}
\item We formally define client-side privacy and describe its three unique technical challenges: (1) direct manipulation and regeneration of raw data on client devices, (2) adaptability with a wide range of server-side processing methods, and (3) low time and space complexity for compatibility with limited-bandwidth client devices.
\item We study three different client-side privacy approaches for speech: signal processing, disentangled representation learning, and adversarial training.
\item We conduct experiments on protecting gender and accent information for downstream ASR systems and provide an empirical comparison of current approaches.
\end{enumerate}
We find that each of our three approaches performs well on a subset of metrics, and quantify remaining areas for improvement using multiple privacy metrics. Based on these insights, we propose several extensions for future work and call for more research in client-side privacy to ensure safe cloud-based speech processing.
We proceed by discussing related privacy algorithms in Section~\ref{sec:privacy}. In Section~\ref{sec:client_side_privacy}, we formalize client-side privacy and describe its unique technical challenges. Then, we describe our client-side privacy approaches for downstream ASR in Section \ref{sec:asr_privacy} and detail the experiments in Section \ref{sec:experiments}. Finally, we summarize our results and propose future directions in Section~\ref{sec:conclusion}. All our code and other supplementary material can be found at \url{https://github.com/peter-yh-wu/speech-privacy}.
\section{Related Work}
\label{sec:privacy}
\subsection{Client- and Server-side Privacy}
\label{subsec:client_server_privacy}
One way to view privacy algorithms is by whether they preserve privacy on the client-side, the server-side, or both. Client-side algorithms execute operations on the user's local device, and server-side algorithms run on a remote server \cite{jing1999client, li2020federated, caldas2018leaf}. For example, one way for cloud services to strengthen data privacy is by encrypting on the client-side and decrypting on the server-side \cite{hwang2011serverclient}. For a privacy algorithm to be client-side only, all operations must run on-device without any additional work needed on the server.
Based on these definitions, we can categorize existing ways to preserve the privacy of data processed by machine learning models. Currently, utilizing cryptography algorithms like secure multi-party computation (SMC) and fully homomorphic encryption (FHE) requires both client and server-side components \cite{zhao2019secure, sun2018private}. Other popular approaches like federated learning and Private Aggregation of Teacher Ensembles (PATE) require both client and server-side modifications as well \cite{li2020federated, caldas2018leaf, papernot2018scalable}. Server-side only approaches also exist, including differentially private SGD (DP-SGD) and global differential privacy \cite{abadi2016deep, de2020overview}. We identify two types of approaches that are client-side only: (1) local differential privacy and (2) methods we refer to as client-side transforms. We define \textbf{client-side transforms} as algorithms that anonymize sensitive information on-device while preserving content needed for downstream tasks. In other words, client-side privacy can be obtained using performant client-side transforms. We note that by being entirely on-device, client-side transforms can be used in conjunction with the aforementioned privacy algorithms by simply applying the client-side transforms first. All three approaches that we study in this paper are client-side transforms, as detailed in Section \ref{subsec:task_formal}. We observe that client-side transforms are not constrained by trade-offs inherent in local differential privacy.
Figure \ref{fig:privacy} summarizes the aforementioned categorization of privacy algorithms.
We note that many client-side transforms exist for downstream tasks simpler than ASR. For example, for the downstream task of storage, client-side encryption is sufficient \cite{wilson2014share}. In contrast, complex downstream tasks like ASR require that the output of the client-side transforms preserve complex content like transcribable audio. Thus, a challenge arises from ensuring that this resulting content also lacks sensitive information like biometrics. Other vulnerabilities like lexical-based ones and sensitive contextual information can also be protected using client-side transforms \cite{silessi2016lexical, reza2017sensitive, caliskan2014privacy}. Since methods like changing language usage can mitigate these vulnerabilities much more easily than biometrics in speech, we focus on client-side privacy for downstream speech tasks here \cite{sun2019mitigating, singh2019profiling}.
\begin{figure}[th]
\centering
\includegraphics[width=50mm]{Latex/images/client_server_venn_diagram.png}
\caption{Privacy approaches. Unlike other algorithms, our client-side transforms are housed entirely on the client side and circumvent trade-offs inherent in differential privacy.}
\label{fig:privacy}
\end{figure}
\subsection{Privacy-preserving Speech Processing}
ASR models are getting larger and more powerful, making the case for putting them on the server side stronger~\cite{baevski2020wav2vec}. Since the best ASR models are currently on the server side, measures must be taken to ensure user data sent to the server remains private.
Early research on privacy for speech data has focused on voice encryption, which aims to make the original audio hard to recover from the encrypted data \cite{kak1977speech, sridharan1991fast, smaragdis2007smc_asr}. We note that these methods cannot be only on the client side since they require the receiver to decrypt the signal. Another more recent direction focuses on hiding speaker identity~\cite{jin2009speaker, tomashenko2020introducing, aloufi2020privacy, ahmed_preech_privacy_server, ma_fei_privacy_server, nautsh2019speaker, srivastava2020design_vc}. A range of methods relying on server-side operations have been proposed, including rearranging audio segments on the server \cite{ahmed_preech_privacy_server} or leveraging server communication protocols~\cite{ma_fei_privacy_server}. Similar to research in voice encryption, these studies, to our knowledge, do not address the privacy of sensitive information beyond speaker identity like race, gender, or accent.
\textbf{Speaker anonymization} generally refers to approaches that hide speaker identity on the client side \cite{tomashenko2020introducing, srivastava2020design_vc, huang2020sequence_vc_shinji}. Namely, these works leverage voice conversion techniques to transform raw speech into that of another speaker \cite{jin2009speaker, tomashenko2020introducing, nautsh2019speaker,srivastava2020design_vc, huang2020sequence_vc_shinji}. Current approaches are predominantly neural, utilizing adversarial, disentanglement, or other encoder-decoder-related architectures
\cite{sisman2021vc, zheng2020automatic, tomashenko2020introducing, nautsh2019speaker,srivastava2020design_vc, noe2020adversarial, ericsson2020adversarial_amnist, aloufi2020privacy, huang2020sequence_vc_shinji}. Since voice conversion has already shown success in anonymizing speaker identity, our client side transforms in this work extend these ideas. Among related work, several address biometric information like gender \cite{noe2020adversarial, ericsson2020adversarial_amnist, chen2007using, kondo2014genderbabble, aloufi2020privacy, stoidis2021protecting}, but to our knowledge only two evaluate on complex downstream tasks like ASR \cite{aloufi2020privacy, stoidis2021protecting}. Since both of these works report high ASR error rates, namely word error rates (WER) above 60\% on LibriSpeech \cite{panayotov2015librispeech}, they are unable to maintain downstream performance while preserving privacy. In our paper, we study three distinct approaches that achieve lower WER while preserving privacy.
We focus on complex downstream tasks like ASR in this work since differential privacy or on-device approaches may be preferable for simpler tasks like classification \cite{ji2014differential, wu2019ordinal, wu2020automatically, liang2020cross}. In Section \ref{sec:experiments}, we also show that differential privacy is not suitable for ensuring privacy in downstream ASR tasks. Additionally, it is much easier to anonymize speaker identity than biometrics like gender, since the former generally requires a much smaller user data distribution shift than the latter \cite{wang2017gender}. Thus, in this paper, we study how well our client-side transforms can anonymize gender and accent, being the first to our knowledge to explore the latter. We note that speaker anonymization is a subtask of our client-side privacy task defined in Section \ref{sec:intro} and detailed below.
\section{Client-side Privacy}
\label{sec:client_side_privacy}
As defined in Section \ref{sec:intro}, client-side privacy refers to privacy obtained only via on-device operations, which remove sensitive information while preserving content needed for downstream tasks. We proceed to formalize the problem statement in Section \ref{subsec:task_formal} and discuss the technical challenges in Section \ref{subsec:technical_challenges}.
\subsection{Problem Statement}
\label{subsec:task_formal}
We start with a set of users $\mathcal{U}$ each of which has access to a client-side device $m_u, u \in \mathcal{U}$. On each client-side device, data $x_u$ is collected which potentially contains information about their private attributes $y_u$ such as gender, age, race, or accent. While it is ideal to leverage shared data collected at a large scale across users, it is also imperative to prevent leakage of private attributes $y_u$ outside of the client's device. Therefore, the goal in client-side privacy is to learn an \textit{encrypted} signal $x_u'$ from $x_u$ using a privacy-preserving function $f_\theta: x_u \rightarrow x_u'$ with parameters $\theta$. $f_\theta$ should perform transformations efficiently and learn an encrypted signal $x_u'$ that balances both fidelity and privacy:
1. \textit{Efficiency}: $|\theta|$ should be small and applying the encoding function $f_\theta: x_u \rightarrow x_u'$ should be fast for cheap inference and storage on resource-constrained mobile devices.
2. \textit{Fidelity}: Given a downstream model trained for a certain task (e.g., ASR) defined on the server, the performance of model on the encrypted signal $x_u'$ should be as close as possible to that of the original signal $x_u$.
3. \textit{Privacy}: One should not be able to decode the private attributes $y_u$ from an encrypted signal $x_u'$ regardless of the function used to predict private attributes.
To avoid confounding factors, both evaluation models (ASR and private attribute classifier) are trained on data completely separate from those used to train encryption approaches. In this paper, we measure fidelity using ASR performance, namely character error rate (CER) and word error rate (WER), and privacy using gender and accent classification accuracy. In other words, low ASR error rate and low classification accuracy would indicate high fidelity and high privacy, respectively. Section~\ref{sec:asr_privacy} contains the efficiency of each of our three approaches, and further details are described in Section~\ref{sec:experiments}.
\subsection{Technical Challenges}
\label{subsec:technical_challenges}
Client-side privacy for downstream speech tasks essentially requires one to \textit{re-generate} raw audio with data-level private attributes masked out. This poses three compelling challenges. First, it requires directly manipulating the user's audio~\cite{srivastava2020design_vc,huang2020sequence_vc_shinji} and re-generating high-dimensional raw speech. Second, the encrypted audio must still be compatible with downstream server tasks without any modification on the server. For downstream tasks like ASR, this means that the encrypted data should still be comprehensible for downstream tasks. This is challenging as it requires preserving information at the high-dimensional data level rather than the feature level. Third, methods for client-side privacy must be efficient and have low time and space complexity to be compatible with limited-bandwidth client devices.
As a result, client-side privacy presents novel challenges over commonly studied server-side methods, particularly on fidelity and efficiency perspectives.
Furthermore, it is much more challenging to preserve privacy for cluster-level attributes such as race, gender, and accent as compared to individual-level attributes such as speaker identity. This is because transforming data across clusters requires a larger distribution shift than transformations to a new speaker, who could be in the same cluster \cite{wang2017gender}.
\section{Client-side Transforms for Downstream ASR}
\label{sec:asr_privacy}
Given our definition in Section \ref{sec:privacy}, client-side transforms are one approach to obtaining client-side privacy as defined in Section \ref{sec:intro}. Namely, client-side transforms anonymize sensitive information on-device while preserving content needed for downstream tasks. In this paper, we study three client-side transforms adapted from existing voice conversion literature and analyze their pros and cons.
\subsection{Pitch Standardization}
For our first client-side transform approach, we perform pitch standardization using signal processing \cite{mousa2010voice, laskar2012pitch}. Specifically, we shift the average pitch of each utterance to a predefined value
while preserving formants. For each utterance, we calculate its fundamental frequency ($F_0$) and then perform a pitch shift from that value to a reference $F_0$. We calculate the sequence of $F_0$'s for each utterance using REAPER,\footnote{\href{https://github.com/google/REAPER}{https://github.com/google/REAPER}} and define the utterance $F_0$ as the average of the non-negative $F_0$'s. We then use the Rubber Band Library to perform pitch shifting with formant preservation using a phase vocoder.\footnote{\href{https://github.com/breakfastquay/rubberband}{https://github.com/breakfastquay/rubberband}} For utterance $u$ with $F_0$ value of $f_u$, we shift its pitch by $12 \log_2 (f_r/f_u)$ semitones, where $f_r$ is the reference $F_0$. This approach easily has the highest efficiency out of our three since it does not depend on a neural model.
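A minimal Python sketch of this procedure is given below; it substitutes librosa's pYIN tracker for REAPER and the pyrubberband wrapper for the Rubber Band command-line tool, and the reference $F_0$ of 200 Hz is an arbitrary illustrative choice rather than the value used in our experiments.
\begin{verbatim}
# Sketch of per-utterance pitch standardization: estimate the mean F0 and
# shift by 12*log2(f_ref / f_utt) semitones.  librosa's pYIN stands in for
# REAPER and pyrubberband for the Rubber Band CLI used in the paper.
import numpy as np
import librosa
import pyrubberband

def standardize_pitch(wav_path, f_ref=200.0):
    y, sr = librosa.load(wav_path, sr=None)
    f0, voiced, _ = librosa.pyin(y, fmin=50.0, fmax=500.0, sr=sr)
    f_utt = np.nanmean(f0)                     # mean F0 over voiced frames
    n_steps = 12.0 * np.log2(f_ref / f_utt)
    return pyrubberband.pitch_shift(y, sr, n_steps), sr
\end{verbatim}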
\subsection{Disentangled Representation Learning}
\label{disentanglement}
Our second proposed approach uses variational autoencoders (VAE) to disentangle private attributes from non-private speech features~\cite{aloufi2020privacy, stoidis2021protecting, disentangle_vae_avc}. VAEs allow us to learn a set of latent representations that best reconstruct a given input audio signal, while enforcing disentanglement into a set of speaker-dependent and speaker-independent factors \cite{burgess2018understanding}. Here, we define a private attribute encoder $e(x;\theta_p)$ that encodes the input audio signal $x$ into speaker-dependent private factors $z_p$. We also define a content encoder $e(x;\theta_c)$ that encodes $x$ into speaker-independent content factors $z_c$. Since the private factors should capture the private attributes $y$ using a classifier, our goal is to exclude the private factor when decoding the encrypted signal. We optimize the following loss function:
\begin{align}
\mathcal{L}_{\textrm{dis}} &= \| d(e(x; \theta_c); \theta_d) - x \|_1 - \lambda_p \log P(y|e(x; \theta_p)) \\
&+ \lambda_{\textrm{dis}} \mathrm{KL} \left( [e(x; \theta_c), e(x; \theta_p)] \ \| \ \mathcal{N}(0, I_d) \right),
\end{align}
where $d(z; \theta_d)$ is a decoder from latent space $z$ back into the audio space. The first term measures the reconstruction of the input signal, the second term measures how well $z_p$ captures the private attributes $y$, and the third term measures disentanglement of $z_c$ and $z_p$ by ensuring minimal correlated entries. $\lambda_p$ and $\lambda_{\textrm{dis}}$ are tunable hyperparameters controlling the tradeoff between privacy disentanglement and performance.
We train a convolutional VAE using the hyperparameters described in Chou et al.~\cite{disentangle_vae_avc}. The model has log-magnitude spectrograms as its input and output acoustic features, and we use the Griffin-Lim algorithm to convert the model output into waveforms~\cite{griffin_lim}. Instance normalization is added to the content encoder $e(x;\theta_c)$ in order to remove speaker information. Also, an adaptive instance normalization layer is added to the decoder in order to add the desired speaker information~\cite{ulyanov2016instance, huang_adaIN}. This allows for the reconstruction of content while transforming speaker information.
As far as we are aware, this architecture is considered fairly recent among the voice conversion literature \cite{disentangle_vae_avc, huang2020sequence_vc_shinji, sisman2021vc}. Given the success of related architectures in anonymizing speaker identity \cite{yoo2020cyclevaegan, stoidis2021protecting}, we study this model's efficacy in anonymizing biometrics like gender and accent.
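A minimal PyTorch sketch of this objective is given below; it assumes that both encoders output Gaussian means and log-variances, that a small auxiliary classifier applied to the private code produces attribute logits, and that all tensor names are placeholders.
\begin{verbatim}
# Sketch of the disentanglement loss: L1 reconstruction
# + lambda_p * (-log P(y | z_p)) + lambda_dis * KL([z_c, z_p] || N(0, I)).
import torch
import torch.nn.functional as F

def disentanglement_loss(x, x_hat, mu_c, logvar_c, mu_p, logvar_p,
                         attr_logits, attr_labels,
                         lambda_p=1.0, lambda_dis=1.0):
    recon = F.l1_loss(x_hat, x)                            # ||d(e(x)) - x||_1
    attr_nll = F.cross_entropy(attr_logits, attr_labels)   # -log P(y | z_p)
    mu = torch.cat([mu_c, mu_p], dim=-1)
    logvar = torch.cat([logvar_c, logvar_p], dim=-1)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + lambda_p * attr_nll + lambda_dis * kl
\end{verbatim}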
\subsection{Adversarial Training}
Reconstruction of high-dimensional signals is difficult and has been shown to cause poor generation quality \cite{dhariwal2020jukebox, rallabandi2019submission, ranzato2014video}. Our final approach attempts to fix this by using adversarial training to ensure high-fidelity generation of encrypted audio \cite{chou_gan, yoo2020cyclevaegan, zhou2021gan}. In the first stage, we disentangle the input audio $x$ into speaker-dependent private factors $z_p$ and speaker-independent content factors $z_c$ learned using an auto-encoder, similarly to Section~\ref{disentanglement}. The second stage trains a generative adversarial network (GAN) \cite{goodfellow2014generative}
to generate realistic audio. The generator is conditioned on the content factor $z_c$ and a new speaker label.
The discriminator predicts whether an audio sample was from the true dataset or from the generator, and also classifies the speaker.
The loss function for the generator is
\begin{equation}
\log c_2(x) + \log (1-c_2(g(x,y))) - \log P_{c_2'}(y|g(x,y)),
\end{equation}
where $g$ is the generator, and the loss function for the discriminator is
\begin{equation}
-\log c_2(x) - \log (1-c_2(g(x,y))) - \log P_{c_2'}(y|x).
\end{equation}
This model is suitable for our work because it explicitly separates speaker identity from content twice. We additionally modify this architecture by substituting the speaker label with either the gender label or the accent label. We refer to these three approaches as the speaker, gender, and accent adversarial approaches. While this architecture predates our disentangled one, we observe that our modified methods outperform the disentangled approach on multiple metrics, as detailed in Section \ref{sec:experiments}. We train a convolutional autoencoder using the hyperparameters described in Chou et al.~\cite{chou_gan}. The model has log-magnitude spectrograms as its input and outputs acoustic features, and we use the Griffin-Lim algorithm to convert the model output into waveforms~\cite{griffin_lim}. Thus, we note that our reported results in Section \ref{sec:experiments} can be further improved using more complex vocoders \cite{oord2016wavenet}.
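For reference, the Griffin-Lim inversion step used by both neural approaches can be sketched with librosa; the STFT parameters below are placeholders rather than the exact values used in our experiments.
\begin{verbatim}
# Sketch: convert a predicted log-magnitude spectrogram back to a waveform
# with the Griffin-Lim algorithm (librosa implementation).
import numpy as np
import librosa

def logmag_to_wav(log_mag, n_iter=60, hop_length=256, win_length=1024):
    mag = np.exp(log_mag)                      # undo the log compression
    return librosa.griffinlim(mag, n_iter=n_iter,
                              hop_length=hop_length, win_length=win_length)
\end{verbatim}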
\section{Experiments}
\label{sec:experiments}
Our experiments test whether our proposed approaches are able to balance the trade-offs in fidelity, privacy, and efficiency required for client-side privacy. We test these approaches on masking gender and accents in speech recognition.\footnote{Our code and models are publicly available at \href{https://github.com/peter-yh-wu/speech-privacy}{https://github.com/peter-yh-wu/speech-privacy}.}
\subsection{Setup}
\textbf{Datasets:} We train all of our encryption models on the VCTK corpus, and the ASR model on LibriSpeech~\cite{veaux2016vctk, panayotov2015librispeech}. For both our VCTK and LibriSpeech experiments, we test on speakers unseen during training. We train our privacy attribute classifier on the respective dataset used during testing. In our LibriSpeech experiments, the ASR model and the gender classifier are both evaluated on the test-clean subset. In our VCTK experiments, we evaluate on a hold-out set of 20 speakers consisting of 10 males and 10 females.
\textbf{Classifier:} We use the VGGVox model, a modified version of the VGG-16 CNN, as our privacy attribute classifier~\cite{voxceleb_vggvox, simonyan2014vgg}, slightly modifying the network by adding a ReLU activation followed by a fully-connected layer with size-2 output. We approximate the data available to an adversary by training in two stages: 1. we pre-train the classifier on 100 hours of labeled, unmodified speech from the train data, and 2. we fine-tune the classifier on the encrypted speech of a handful of speakers from the same subset. For each privacy attribute, we measure the masking ability of each encryption approach by calculating the classifier's accuracy on an encrypted version of the test subset after being finetuned on data encrypted using the respective approach.
\textbf{ASR model:} Unless mentioned otherwise, we use a pretrained ESPNet Transformer ASR model to evaluate downstream ASR performance~\cite{watanabe2018espnet}. This model was trained on 960 hours of LibriSpeech data.
\subsection{Privacy-Fidelity Tradeoff}
Table \ref{tab:tradeoff} compares gender classification accuracy with CER and WER for different levels of Gaussian noise added to the VCTK test data at the waveform level. We increment the standard deviation of the noise by 0.01 for each subsequent experiment. As expected, we observe a negative correlation between classification accuracy and ASR performance. In other words, these results reveal a tradeoff between privacy and fidelity.
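This noising baseline amounts to the following waveform-level operation (sketch; the random seed is arbitrary).
\begin{verbatim}
# Additive waveform-level Gaussian noise used for the privacy-fidelity baseline.
import numpy as np

def add_noise(y, sigma, seed=0):
    rng = np.random.default_rng(seed)
    return y + rng.normal(0.0, sigma, size=y.shape)
\end{verbatim}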
\begin{table}[th]
\caption{Tradeoff between privacy and fidelity on the VCTK test set for different levels of added Gaussian noise. We observe a negative correlation between classification accuracy and ASR performance, as expected.}
\label{tab:tradeoff}
\centering
\begin{tabular}{ l | c c c}
\toprule
\multicolumn{1}{c}{\textbf{Noise}} & \multicolumn{1}{c}{\textbf{Classification}} & \multicolumn{1}{c}{\textbf{CER}} & \multicolumn{1}{c}{\textbf{WER}} \\
\midrule
0.00 & $0.99$ & $4.5$ & $9.5$~~ \\
0.01 & $0.91$ & $11.2$ & $26.7$~~ \\
0.02 & $0.79$ & $19.3$ & $31.9$~~ \\
0.03 & $0.68$ & $28.5$ & $41.0$~~ \\
0.04 & $0.61$ & $33.9$ & $48.1$~~ \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Gender Classification}
Table \ref{tab:gender_clf} describes the gender classification accuracy on VCTK and LibriSpeech using gender-masked audio. All proposed approaches can successfully mask gender when no encrypted training examples are available. However, given encrypted training data from a male and female speaker, the signal processing samples become much easier to classify than those generated from the other approaches. This makes sense as pitch shifting would retain some underlying speaker qualities that could be readily identified by a neural classifier. Also, the adversarial approach using the gender-based loss outperforms the speaker-based loss approach here, which reflects how the former explicitly learns to hide gender information.
\begin{table}[th]
\caption{Gender classification accuracy on two datasets using gender-masked audio. The integer $n$ in each column denotes the number of speakers whose encrypted audio was used to finetune the classifier, where $n/2$ are male and $n/2$ are female. All our voice conversion approaches perform better than the baseline without any masking for $n=0$. For higher $n$ values, the disentanglement approach performs the best, followed by the adversarial approach with the modified gender loss.}
\label{tab:gender_clf}
\centering
\begin{tabular}{ l r r r r}
\toprule
\multicolumn{1}{c}{\textbf{VCTK}} & \multicolumn{1}{c}{\textbf{0}} & \multicolumn{1}{c}{\textbf{2}} & \multicolumn{1}{c}{\textbf{4}} & \multicolumn{1}{c}{\textbf{20}} \\
\midrule
No Masking & $0.991$ & $0.991$ & $0.991$ & $0.991$~~ \\
Signal Processing & $0.590$ & $0.987$ & $0.990$ & $0.997$~~ \\
Disentanglement & $0.590$ & $0.590$ & $0.598$ & $0.757$~~ \\
Adversarial (Speaker) & $0.591$ & $0.824$ & $0.840$ & $0.935$~~ \\
Adversarial (Gender) & $0.590$ & $0.707$ & $0.740$ & $0.877$~~ \\
\midrule
\multicolumn{1}{c}{\textbf{LibriSpeech}} &
\multicolumn{1}{c}{\textbf{0}} & \multicolumn{1}{c}{\textbf{2}} & \multicolumn{1}{c}{\textbf{4}} & \multicolumn{1}{c}{\textbf{20}} \\
\midrule
No Masking & $0.972$ & $0.972$ & $0.972$ & $0.972$~~ \\
Signal Processing & $0.422$ & $0.869$ & $0.887$ & $0.933$~~ \\
Disentanglement & $0.590$ & $0.627$ & $0.702$ & $0.781$~~ \\
Adversarial (Speaker) & $0.580$ & $0.712$ & $0.714$ & $0.838$~~ \\
Adversarial (Gender) & $0.570$ & $0.621$ & $0.628$ & $0.833$~~ \\
\bottomrule
\end{tabular}
\end{table}
\subsection{ASR after Gender Encryption}
\label{sec:gender_asr}
Table \ref{tab:gender_asr} describes the ASR results when transcribing gender-encrypted data, measured using mean character and word error rates. For each approach, we provide results on both VCTK and LibriSpeech \cite{veaux2016vctk, panayotov2015librispeech}. All ASR models are pretrained on LibriSpeech as aforementioned. The VCTK model is further finetuned on the data from speakers outside our test set. All finetuned ASR models are tuned on the respective converted data of 20 train speakers. The signal processing approach performs the best, potentially since Griffin-Lim and output distribution priors inherent in neural network architectures introduce artifacts. The adversarial approach using the gender-based loss again outperforms the speaker-based loss approach. This reflects how the former can model less style information than the latter and thus can model more content. For the LibriSpeech dataset, our adversarial method with the modified gender loss performs similarly to the disentangled method in the finetuned scenario. Moreover, our disentanglement and adversarial approaches do not improve when finetuned for the VCTK experiments. This suggests that the ASR model may not be robust enough or our neural converted samples may be acting like adversarial samples during the training procedure \cite{goodfellow2014explaining}. We also note that, compared to Table \ref{tab:tradeoff}, our voice conversion approaches generally achieve lower WER for fixed privacy performances. This suggests that our client-side transforms described in Section \ref{sec:asr_privacy} are more suitable for client-side privacy than other approaches like differential privacy.
\begin{table}[th]
\caption{ASR performance on two datasets using gender-masked audio. Among our voice conversion approaches, the signal processing method performs the best. For the LibriSpeech dataset, our adversarial method with the modified gender loss performs similarly to the disentangled method in the finetuned scenario. Moreover, our disentanglement and adversarial approaches do not improve when finetuned for the VCTK experiments. This suggests that the ASR model may not be robust enough or our neural converted samples may be acting like adversarial samples during the training procedure \cite{goodfellow2014explaining}.}
\label{tab:gender_asr}
\centering
\begin{tabular}{ l | c c c c}
\toprule
\multicolumn{1}{c}{\textbf{}} & \multicolumn{4}{c}{\textbf{VCTK}} \\
\midrule
\multicolumn{1}{c}{\textbf{}} & \multicolumn{2}{c}{\textbf{CER}} & \multicolumn{2}{c}{\textbf{WER}} \\
\multicolumn{1}{c}{\textbf{Method}} & \multicolumn{1}{c}{\textbf{0-Shot}} & \multicolumn{1}{c}{\textbf{Finetune}} &
\multicolumn{1}{c}{\textbf{0-Shot}} & \multicolumn{1}{c}{\textbf{Finetune}} \\
\midrule
No Masking & $4.5$ & $3.4$ & $9.5$ & $4.8$ \\
Signal Processing & $15.0$ & $7.7$ & $24.7$ & $9.8$ \\
Disentanglement & $21.1$ & $21.1$ & $35.0$ & $35.0$ \\
Adversarial (Speaker) & $31.1$ & $31.1$ & $48.5$ & $48.5$ \\
Adversarial (Gender) & $25.0$ & $25.0$ & $40.1$ & $40.1$ \\
\bottomrule
\end{tabular}
\vspace{2mm}
\begin{tabular}{ l | c c c c}
\toprule
\multicolumn{1}{c}{\textbf{}} & \multicolumn{4}{c}{\textbf{LibriSpeech}} \\
\midrule
\multicolumn{1}{c}{\textbf{}} & \multicolumn{2}{c}{\textbf{CER}} & \multicolumn{2}{c}{\textbf{WER}} \\
\multicolumn{1}{c}{\textbf{Method}} & \multicolumn{1}{c}{\textbf{0-Shot}} & \multicolumn{1}{c}{\textbf{Finetune}} &
\multicolumn{1}{c}{\textbf{0-Shot}} & \multicolumn{1}{c}{\textbf{Finetune}} \\
\midrule
No Masking & $2.4$ & $2.4$ & $4.6$ & $4.6$~~ \\
Signal Processing & $5.0$ & $5.0$ & $8.8$ & $8.8$~~ \\
Disentanglement & $15.5$ & $15.5$ & $25.0$ & $25.0$~~ \\
Adversarial (Speaker) & $29.7$ & $17.5$ & $47.5$ & $28.0$~~ \\
Adversarial (Gender) & $22.0$ & $15.8$ & $36.1$ & $25.3$~~ \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Gender Listening Tests}
In addition to our automatic metrics, we perform mean opinion score (MOS) preference tests using eight human listeners.
Namely, we compare the signal processing, the disentanglement, and the adversarial gender-based loss approaches for the VCTK gender encryption task. For the MOS test, we ask listeners to rate audio samples on a naturalness scale of 1 to 5.
We use 40 utterances for each test, where 2 are randomly chosen from each test speaker. In other words, each listener listens to 120 unique audio clips. Table \ref{tab:listen} summarizes these results. The signal processing approach performs the best for female speakers and the worst for male speakers. This is likely due to the reference F0 being from a female speaker and the relative absence of artifacts. Also, while the disentanglement approach outperforms the adversarial one in both the classification and ASR metrics, listeners consistently rated the latter higher. This suggests that the disentanglement approach may be standardizing the audio to waveforms that are unnatural to people but suitable for downstream ASR systems. We perceive such utterances as robotic but understandable.
\begin{table}[th]
\caption{Listening test results on VCTK gender encryption approaches. Our signal processing and adversarial approaches perform the best. While our signal processing approach performs the best for female speakers, our adversarial approach does the best for males, suggesting that the latter is more robust to different speaker attributes.}
\label{tab:listen}
\centering
\begin{tabular}{ l r r r}
\toprule
\multicolumn{1}{c}{\textbf{MOS}} & \multicolumn{1}{c}{\textbf{M}} & \multicolumn{1}{c}{\textbf{F}} & \multicolumn{1}{c}{\textbf{Both}} \\
\midrule
Signal Processing & $1.8 \pm 0.2$ & $4.3 \pm 0.2$ & $3.3 \pm 0.2$~~ \\
Disentanglement & $2.4 \pm 0.5$ & $2.4 \pm 0.5$ & $2.4 \pm 0.5$~~ \\
Adversarial (Gender) & $2.7 \pm 0.6$ & $3.7 \pm 0.3$ & $3.3 \pm 0.4$~~ \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Accent Classification}
Table \ref{tab:accent_clf} describes the accent classification accuracy on VCTK using accent-masked audio. Given that the largest class in the test set contains 31\% of the samples, we observe that both the disentanglement and adversarial approaches are able to successfully fool the accent classifier. Moreover, our adversarial approach using the modified accent loss performs the best, indicating the usefulness of our modified loss function.
\begin{table}[th]
\caption{Accent classification accuracy using accent-masked audio. Given that the largest class in the test set has 31\% of the samples, our results indicate that both our disentanglement and adversarial approaches successfully fooled the accent classifier. Moreover, our adversarial approach using the modified accent loss performs the best.}
\label{tab:accent_clf}
\centering
\begin{tabular}{ l r r}
\toprule
\multicolumn{1}{c}{\textbf{Classification}} & \multicolumn{1}{c}{\textbf{0}} & \multicolumn{1}{c}{\textbf{20}} \\
\midrule
Largest Class & $0.31$ & $0.31$~~ \\
No Masking & $0.36$ & $0.36$~~ \\
Disentanglement & $0.29$ & $0.29$~~ \\
Adversarial (Speaker) & $0.25$ & $0.25$~~ \\
Adversarial (Gender) & $0.25$ & $0.25$~~ \\
Adversarial (Accent) & $0.23$ & $0.23$~~ \\
\bottomrule
\end{tabular}
\end{table}
\subsection{ASR after Accent Encryption}
Table \ref{tab:accent_asr} describes the ASR results when transcribing accent-encrypted data, measured using mean character and word error rates. Our experimental setup here follows that of the gender ASR experiment. We observe trends similar to those in Section \ref{sec:gender_asr}. Additionally, ASR results here are consistently better than those in the gender encryption task. This reflects how transforming gender requires a larger data distribution shift than transforming accent.
\begin{table}[th]
\caption{ASR performance using accent-masked audio. As with our other ASR experiments, we observe that our disentanglement approach outperforms our adversarial one. We also note that these ASR results are consistently better than those for the gender experiment, reflecting the larger data distribution shift required for transforming gender.}
\label{tab:accent_asr}
\centering
\begin{tabular}{ l r r}
\toprule
\multicolumn{1}{c}{\textbf{Speech Recognition}} &
\multicolumn{1}{c}{\textbf{CER}} & \multicolumn{1}{c}{\textbf{WER}} \\
\midrule
Disentanglement & $17.5$ & $29.9$~~ \\
Adversarial (Speaker) & $26.5$ & $42.4$~~ \\
Adversarial (Gender) & $19.5$ & $32.4$~~ \\
Adversarial (Accent) & $23.1$ & $37.4$~~ \\
\bottomrule
\end{tabular}
\end{table}
\section{Key Takeaways}
In this section, we outline several key takeaways from our experimental results which we hope will help practitioners working on client-side privacy for complex downstream tasks.
1. \textit{Pitch standardization} approaches unfortunately are not very effective in keeping gender private, as the gender classification accuracy on data encrypted this way is much higher than that of the neural methods. In other words, this approach yielded artifacts that were readily recognizable by the adversary gender classifier, which implies poorer performance in maintaining privacy. When observing the attention map of the classifier, we noticed that the classifier learned to identify specific patterns that resulted from the pitch shift. Thus, the gender classification accuracy of the neural methods was much lower than that of the signal processing methods.
2. \textit{VAEs and GANs}: VAEs, through the use of an encoder, are well suited to learning latent disentangled representations~\cite{higgins2016beta,locatello2019challenging}, which are useful in our task of disentangling content from private attributes.
GANs are also suitable for learning latent representations. While they have been used less in the disentanglement literature, adding attribute-specific loss functions can disentangle sensitive information from content well. Although we found GANs to be harder to train than VAEs, GANs that converge appear to perform better.
3. \textit{Memory}: The large differences in memory consumption are a consequence of the large memory costs of using neural models compared to signal processing approaches. Overall, our conclusions point out a ripe opportunity for future work to reconcile the privacy benefits of neural methods with the performance and memory advantages of signal processing approaches.
\section{Conclusion and Future Directions}
\label{sec:conclusion}
In this work, we set up the problem of ensuring the privacy of speech data sent to downstream services without relying on any server-side privacy guarantees. We formalized several desirable properties regarding performance, privacy, and computation and performed a large-scale empirical study of existing approaches.
We find that while GAN-based approaches currently have the best tradeoff between gender masking, downstream performance, and memory usage, all existing approaches still fall short of ideal performance.
Our initial empirical analysis opens the door towards more reliable evaluations of the tradeoffs underlying privacy-preserving approaches on the client side, a property crucial for safe real-world deployment of speech systems at scale across mobile devices. In addition to developing privacy-preserving algorithms that satisfy the various desiderata as outlined in this paper, future work should also analyze other downstream speech tasks, including speech translation and other speech recognition settings.
\bibliographystyle{IEEEtran}
\section{Introduction}
Let $M = \Gamma \backslash \mathbb{H}^2$ be a hyperbolic surface,
with $\Gamma$ a cofinite Fuchsian group, and denote by $\pi_{\Gamma}$ the counting
function of the primitive length spectrum of $M$, i.e.~$\pi_{\Gamma}(X)$ is
the number of primitive closed geodesics on $M$ of length at most $\log X$.
The study of $\pi_{\Gamma}$ has a long history dating back to works of
Huber~\cite{huber_zur_1961,huber_zur_1961-1},
Selberg~\cite[Chapter~10]{iwaniec_spectral_2002}
and others.
In particular, for surfaces of arithmetic type, much progress has been made
in estimating the asymptotic error term related to $\pi_{\Gamma}$, see
e.g.~\cite{iwaniec_prime_1984},
\cite{luo_quantum_1995}, \cite{soundararajan_prime_2013}.
In three dimensions, that is $M=\hmodgs$ where
$\Gamma\subset\pslc$ is a cofinite Kleinian group, we know that~\cite{gangolli_1980}
\begin{equation}\label{eq:pibound}
\pi_{\Gamma}(X)\sim \li(X^{2}).
\end{equation}
In analogy with the classical prime number theory, it is more convenient to
work with the hyperbolic
analogue of the Chebyshev function, which is defined as
\[
\psi_{\Gamma}(X)=\sum_{N(P)\leq X}\Lambda_{\Gamma}(N(P)).
\]
Here the sum runs over hyperbolic and loxodromic conjugacy classes of $\Gamma$
of norm at most $X$ and $\Lambda_{\Gamma}$ denotes the hyperbolic von Mangoldt
function. That is,
$\Lambda_{\Gamma} (N(P)) = \log N(P_0)$, if $P$ is a power of a primitive
hyperbolic (or loxodromic) conjugacy class $P_0$, and zero otherwise.
The classical bound for the remainder term in~\eqref{eq:pibound} was given by
Sarnak~\cite{sarnak_arithmetic_1983} in 1983. In the arithmetic case,
$\Gamma=\pslzi$, his result says that
\begin{equation}\label{eq:sarnakbound}
\psi_{\Gamma}(X) = \frac{X^{2}}{2} + E_{\Gamma}(X),\quad
E_{\Gamma}(X)\ll_{\epsilon}X^{5/3+\epsilon}.
\end{equation}
The estimate~\eqref{eq:sarnakbound} for the error term is actually valid for all
cofinite Kleinian groups, provided that the contribution from possible
small eigenvalues is included in the main term.
Sarnak's pointwise bound~\eqref{eq:sarnakbound} has been improved for the Picard group
in~\cite{koyama_prime_2001,balkanova_prime_2017,balkanova_2018}.
The current best unconditional bound is due to Balkanova
and Frolenkov~\cite{balkanova_2018}, who showed that
\begin{equation}\label{eq:balkanovabound}
E_{\Gamma}(X)\ll_{\epsilon}X^{\eta+\epsilon},\quad
\eta=\frac{3}{2}+\frac{103}{1024}.
\end{equation}
By assuming the Lindel\"of hypothesis for quadratic Dirichlet $L$-functions
over Gaussian integers, they obtain $\eta=3/2$.
It is not clear how far this is from the truth (see the discussion in Remarks 1.5 and 3.1 in \cite{balkanova_prime_2017}).
The main result of this paper is that the exponent $3/2+\epsilon$ holds on
average. This is achieved by studying the second moment of the error term.
Namely, we prove the following theorem.
\begin{theorem}\label{intro:thm1}
Let $V\geq Y \gg 1$ and $\eps>0$. Then
\begin{equation}\label{eq:thm1}
\frac{1}{Y} \int_V^{V+Y} |E_{\Gamma}(X)|^2dX \ll V^{3+\eps} \left(\frac{V}{Y}\right)^{2/3}.
\end{equation}
\end{theorem}
Theorem~\ref{intro:thm1} follows from a short interval second moment estimate for
the spectral exponential sum $S(T, X)$, which is defined as
\[
S(T, X) = \sum_{0<r_{j}\leq T}X^{ir_{j}},
\]
where $\lambda_{j}=1+r_{j}^{2}$ are the (embedded) eigenvalues of the
Laplace--Beltrami operator $\Delta$ on $M$.
\begin{theorem}\label{intro:thm2}
Let $V\geq Y\gg 1$ and $T\ll V^{1/2-\eps}$. Then
\[
\frac{1}{Y} \int_V^{V+Y} |S(T,X)|^2 dX \ll T^{3+\eps} V^{3/2+\eps} Y^{-1}.
\]
\end{theorem}
The connection between the Prime Geodesic Theorem and $S(T,X)$ is given by
the explicit formula of Nakasuji, see~\eqref{eq:explicit}.
\begin{remark}\label{intro:rmk1}
Note that the bound for arbitrary $Y$ in Theorem~\ref{intro:thm2}
follows by positivity from the estimate over the dyadic interval $[V,2V]$.
Nevertheless, Theorem~\ref{intro:thm2} allows us to prove a nontrivial
result in short intervals in Theorem~\ref{intro:thm1}
since the parameter $T$ can depend on $Y$.
Despite this, we will carry out the proof of Theorem~\ref{intro:thm2} in the
interval $[V,V+Y]$
in order to highlight how the dependence in $Y$ gets absorbed into the
final bound.
\end{remark}
As a corollary of Theorem~\ref{intro:thm1} we recover the pointwise bound
$E_{\Gamma}(X)\ll_{\epsilon}X^{13/8+\epsilon}$ of~\cite[Theorem~1.1]{balkanova_prime_2017}.
Furthermore, our second moment bound~\eqref{eq:thm1} has immediate consequences
analogous to Corollary~1.3 and Equation~(1.3) in~\cite{balkanova_prime_2017},
but we will not write them here explicitly.
Finally, we observe that Theorem~\ref{intro:thm1} implies that the short interval estimate
\[
\frac{1}{V} \int_{V}^{2V} \abs*{\psi_{\Gamma}(X)-\psi_{\Gamma}(X-\eta X)-\eta(1-\eta/2) X^{2}}^{2} dX
\ll_{\epsilon} V^{3+\epsilon},
\]
is valid for all $0\leq\eta\leq 1$.
In other words, the approximation $\psi_\Gamma(X)-\psi_\Gamma(X-\eta X)=\eta(1-\eta/2)X^2$
holds with the error term $O(X^{3/2+\eps})$ in a square mean sense.
A weaker second moment estimate, which is valid for \emph{all}
cofinite $\Gamma$, was obtained in~\cite[Theorem~1.2]{balkanova_prime_2017},
where the authors showed for $V\geq Y \gg 1$ that
\[
\frac{1}{Y}\int_{V}^{V+Y}\abs{E_{\Gamma}(X)}^{2}dX
\ll
V^{16/5}\left(\frac{V}{Y}\right)^{2/5}(\log V)^{2/5}.
\]
This was proved by using the Selberg trace formula and it is of analogous
strength to Sarnak's bound (see Remark~1.5 in~\cite{balkanova_prime_2017}).
In our proof we will instead use the Kuznetsov trace formula (see~\cite{motohashi_trace_1996,motohashi_trace_1997})
for $\pslzi$, which allows us to get stronger estimates.
A key component of our proof is a careful analysis of integrals involving
multiple Bessel functions. In particular, by relying on exact formulas, we
avoid having to deal with the oscillatory integrals that appear in the proof of
Koyama~\cite{koyama_prime_2001} for the pointwise bound.
We also incorporate some ideas of~\cite{balog_2018}
and~\cite{cherubini_mean_2017} from two dimensions.
The paper is organized as follows. We begin by stating our main tool,
the Kuznetsov formula, in section~\ref{SK}.
Then, in section \ref{S2}, we give a detailed outline
of the proof of Theorem~\ref{intro:thm2} under the assumption of two key estimates,
which are stated as Propositions~\ref{proposition-kloosterman-sums}
and~\ref{proposition-average-rankin-selberg}. In sections~\ref{S3}
and~\ref{S4} we prove these two estimates. Finally, in section~\ref{S5}
we show how to recover Theorem~\ref{intro:thm1} from Theorem~\ref{intro:thm2}.
\section{Kuznetsov formula}\label{SK}
The Kuznetsov trace formula relates the Fourier
coefficients of cusp forms to Kloosterman sums. For Gaussian integers,
Kloosterman sums are defined as
\[
S_{\Q(i)}(n,m,c) = \sum_{a\in (\Z[i]/(c))^\times} e\big(\langle
m,a/c\rangle\big) e\big(\langle n,a^{*}/c\rangle\big),
\]
where $m, n,c\in\mathbb{Z}[i]$, $c\neq 0$; $a^{*}$ denotes the inverse of
$a$ modulo the ideal $(c)$; and $\langle x,y\rangle$ denotes the
standard inner product on $\R^2\cong \C$.
The Kloosterman sums obey Weil's bound~\cite[(3.5)]{motohashi_trace_1997}
\begin{equation}\label{weil}
|S_{\rationals(i)}(n,m,c)| \leq |(n,m,c)| d(c) N(c)^{1/2},
\end{equation}
which we will use repeatedly.
Here $d(c)$ is the number of divisors of $c$.
\begin{theorem}[Kuznetsov formula
\cite{motohashi_trace_1996,motohashi_trace_1997}]\label{Kuznetsovformula}
Let $h$ be an even function,
holomorphic in $|\Im r|<1/2+\epsilon$, for some $\epsilon>0$,
and assume that $h(r)=O((1+|r|)^{-3-\epsilon})$ in the strip.
Then, for any non-zero $m,n \in {\mathbb Z}[i]$:
\[
D + C = U + S,
\]
with
\[
\begin{split}
D &= \sum_{j=1}^{\infty} \frac{ r_j \rho_j(n) \overline{\rho_j(m)} }{\sinh \pi r_j} h(r_j), \\
C &= 2 \pi \int_{-\infty}^{\infty} \frac{\sigma_{ir}(n)
\sigma_{ir}(m)}{ |mn|^{ir} |\zeta_{K}(1+ir)|^2}\, h(r)\, dr, \\
U &= \frac{\delta_{m,n} + \delta_{m,-n}}{\pi^2} \int_{-\infty}^{\infty} r^2 h(r) dr, \\
S &= \sum_{c \in\zin} \frac{S_{\rationals(i)}(n,m,c)}{|c|^2}
\int_{-\infty}^{\infty} \frac{ir^2}{\sinh \pi r} h(r) H_{ir} \left(\frac{2 \pi \sqrt{\overline{mn}}}{c}\right) dr,
\end{split}
\]
where $\sigma_s(n) = \sum_{d|n} N(d)^s$ is the divisor function,
\[
H_{\nu} (z) = 2^{-2\nu} |z|^{2 \nu} J_{\nu}^*(z) J_{\nu}^{*}(\overline{z}),
\]
$J_{\nu}$ is the $J$-Bessel function of order $\nu$,
and $ J_{\nu}^*(z) = J_{\nu}(z) (z/2)^{-\nu}$.
\end{theorem}
For the definition of the $\rho_{j}$, see the explanation
after~\eqref{0612:eq001}.
We will also need the power series expansion~\cite[8.402]{gradshteyn2007}
for the $J$-Bessel function:
\begin{equation}\label{eq:jseries}
J_{ir}(z)(z/2)^{-ir} = \sum_{k=0}^\infty \frac{(-1)^k}{k!\Gamma(k+1+ir)}\left(\frac{z}{2}\right)^{2k}.
\end{equation}
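As a quick numerical sanity check (not needed in the sequel), the truncated series can be compared against mpmath's Bessel function of imaginary order; the truncation length is an arbitrary choice.
\begin{verbatim}
# Numerical check of the expansion of J_{ir}(z) (z/2)^{-ir} against mpmath.
import mpmath as mp

def j_star_series(r, z, K=40):
    return sum((-1)**k / (mp.factorial(k) * mp.gamma(k + 1 + 1j*r))
               * (z/2)**(2*k) for k in range(K))

r, z = 2.0, 1.5
print(j_star_series(r, z), mp.besselj(1j*r, z) * (z/2)**(-1j*r))
\end{verbatim}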
\section{Outline of proof of Theorem~\ref{intro:thm2}}\label{S2}
In this section we outline the proof of Theorem~\ref{intro:thm2}.
The result for the sharp sum $S(T,X)$ can be deduced from the corresponding result
for the smooth sum
\begin{equation}\label{def:smoothSTX}
\sum_{r_j} X^{ir_j}e^{-r_j/T}.
\end{equation}
Indeed, if we assume the inequality
\[
\int_V^{V+Y} \Big|\sum_{r_j} X^{ir_j}e^{-r_j/T}\Big|^2 dX \ll T^3 V^{3/2+\eps},
\]
then using a standard Fourier analysis method (see~\cite{iwaniec_prime_1984,luo_quantum_1995})
and the Cauchy--Schwarz inequality we can estimate, for $T<V^{1/2}$ and $Y\leq V$,
\[
\begin{split}
\int_V^{V+Y} \!\!\!
&
|S(T,X)|^2 dX
\\
&\ll
T^\eps \!\! \int_{|\xi|\leq 1} \int_V^{V+Y} \Big|\sum_{r_j} (Xe^{-2\pi\xi})^{ir_j}e^{-r_j/T} \Big|^2 dX \min(T,|\xi|^{-1}) d\xi
+T^{4+\eps}Y
\\
&\ll
T^{3+\eps}V^{3/2+\eps}.
\end{split}
\]
To study the sum in~\eqref{def:smoothSTX}, we approximate $X^{ir_j}e^{-r_j/T}$ by a more regular function
that we borrow from~\cite{deshouillers_iwaniec_1982}, namely
\begin{equation}\label{def:h}
h(r) = \frac{\sinh((\pi + 2i\alpha)r)}{\sinh \pi r},\quad 2\alpha = \log X + i/T,
\end{equation}
which satisfies $h(r) = X^{ir}e^{-r/T} + O(e^{-\pi r})$
\cite{motohashi_trace_1996,motohashi_trace_1997}.
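Indeed, expanding both hyperbolic sines gives the exact identity
\[
h(r) = X^{ir}e^{-r/T}
+ \bigl(X^{ir}e^{-r/T}-X^{-ir}e^{r/T}\bigr)\frac{e^{-2\pi r}}{1-e^{-2\pi r}},
\]
so that for $r\geq 1$ and $T\geq 1$ the second term is $\ll e^{r}e^{-2\pi r}\ll e^{-\pi r}$.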
Before applying the Kuznetsov formula, we need to insert the Fourier
coefficients into our spectral sum.
We do this by means of an extra average
and by using the fact that the (normalized) Rankin--Selberg $L$-function has a simple pole at $s=1$
with the residue being an absolute constant.
Consider a smooth function $f$, compactly supported on
$[\sqrt{N},\sqrt{2N}]$,
satisfying $|f^{(p)}(\xi)|\ll N^{-p/2}$ for every $p\geq 0$ and with mass
$\int_0^\infty f(x)dx = N$.
Let $\tilde{f}$ be the Mellin transform of $f$. Then
\begin{equation}\label{0612:eq001}
\frac{1}{N} \sum_{n,r_j} f(\abs{n}) h(r_j) |v_j(n)|^2
=
\frac{1}{\pi iN} \sum_{r_j} h(r_j) \int_{(3)} \tilde{f}(2s) L(s,u_j\otimes u_j) ds,
\end{equation}
where $L(s,u_j\otimes u_j)$ is the Rankin--Selberg $L$-function
\[
L(s,u_j\otimes u_j) = \sum_{n\in\zin} \frac{|v_j(n)|^2}{N(n)^s},
\]
where $v_j(n)$ are Fourier coefficients of cusp forms, normalized by the relation
$v_j(n)\sqrt{\sinh(\pi r_j)}=\rho_j(n)\sqrt{r_j}$.
We apply the Kuznetsov formula on the left-hand side in~\eqref{0612:eq001},
while on the right-hand side we move the line of integration to $\Re(s)=1/2$,
picking up the residue at $s=1$. We obtain, for absolute constants $c_1,c_2$,
\begin{equation}\label{eq003}
\begin{split}
\sum_{r_j} X^{ir_j} e^{-r_j/T}
&=
\frac{c_1}{N} \!\! \sum_{n\in\zin} \!\!\!\! f(|n|)
\mathcal{S}_n(\omega)
\\
&+
\frac{c_2}{N}\int_{(1/2)}
\tilde{f}(2s) M_1(s) ds
+
O(T^2).
\end{split}
\end{equation}
The quantities appearing in~\eqref{eq003} are described as follows.
The term $\mathcal{S}_n(\omega)$ is a weighted sum of Gaussian Kloosterman
sums,
\begin{equation}\label{def:Snomega}
\mathcal{S}_n(\omega)
=
\sum_{c\in\mathbb{Z}[i]\backslash\{0\}} \frac{S_{\mathbb{Q}(i)}(n,n;c)}{N(c)}\omega\left(\frac{2\pi\bar{n}}{c}\right),
\end{equation}
where $\omega$ is the integral transform of $h$ that appears in
Kuznetsov's formula, that is,
\begin{equation}\label{def:omega}
\omega(z) = \int_{-\infty}^{+\infty}\frac{ir^2}{\sinh(\pi
r)}H_{ir}(z)h(r)dr.
\end{equation}
The kernel $H_{ir}(z)$ is given by
\[
H_{\nu} (z) = 2^{-2\nu} |z|^{2 \nu} J_{\nu}^*(z) J_{\nu}^{*}(\overline{z}),
\]
with $J_{\nu}^*(z) = J_{\nu}(z) (z/2)^{-\nu}$, where
$J_{\nu}$ denotes the Bessel function of the first kind.
The term $M_1(s)$ in~\eqref{eq003} is a weighted first moment of Rankin--Selberg
$L$-functions:
\[
M_1(s) = \sum_{r_j} h(r_j) L(s,u_j\otimes u_j).
\]
Note that the integral over the line $\Re(s)=1/2$ in~\eqref{eq003} is absolutely convergent since
$\tilde{f}(2s)\ll N^{1/2} |s|^{-M}$, for arbitrarily large $M$ (when
$\Re(s)=1/2$),
and $L(s,u_j\otimes u_j)$ is polynomially bounded in $|s|$.
Finally, the term $O(T^2)$ in~\eqref{eq003} comes from the identity element and the continuous
spectrum in the Kuznetsov formula.
In sections~\ref{S3} and~\ref{S4} we will prove the following two estimates
that we state as separate propositions.
In order to simplify the exposition,
we assume that $N$ is bounded polynomially
in $X$ and $T$, i.e.
\begin{equation}\label{NTX}
N \ll (TX)^A,
\end{equation}
for some arbitrary $A>0$.
Our final choice of $N$ satisfies this condition and thus~\eqref{NTX} is
not restrictive.
\begin{proposition}\label{proposition-kloosterman-sums}
Let $V\geq Y\gg 1$, and $V^\eps \ll T\ll V^{1/2-\eps}$.
Let $N\geq 1$ be chosen so that \eqref{NTX} holds,
and suppose $n\in\Z[i]$ with $N(n)\asymp N$.
Then
\begin{equation}\label{prop:eq001}
\int_V^{V+Y} |\mathcal{S}_n(\omega)|^2 dX
\ll
(NV^2+T^3Y) (NV)^\eps.
\end{equation}
\end{proposition}
In our proof, the first term in~\eqref{prop:eq001} will be the dominant
one.
Since this term does not depend on $Y$,
the most interesting result is obtained on the full dyadic interval $[V,2V]$.
The same observation was made in Remark~\ref{intro:rmk1} and it applies to the next proposition as well.
\begin{proposition}\label{proposition-average-rankin-selberg}
Let $V\geq Y\gg 1$, and $T\gg 1$. Then
\[
\int_V^{V+Y} |M_1(s)|^2 dX \ll |s|^A T^{6+\eps} V,
\]
for some absolute constant $A$.
\end{proposition}
Let us show that Theorem~\ref{intro:thm2} follows from the above two propositions.
By using the Cauchy--Schwarz inequality and integrating in $X$
in~\eqref{eq003}, we get
\[
\begin{split}
\int_V^{V+Y} \Big|\sum_{r_j} X^{ir_j}e^{-r_j/T}\Big|^2 dX
&\ll
\frac{1}{N} \sum_{N(n)\sim N} \int_V^{V+Y} |\mathcal{S}_n(\omega)|^2 dX
\\
&+
\frac{1}{N} \int_{(1/2)} |s|^{-M} \int_V^{V+Y} |M_1(s)|^2 dX ds
+
O(T^4).
\end{split}
\]
Applying Proposition~\ref{proposition-kloosterman-sums} and
Proposition~\ref{proposition-average-rankin-selberg} yields
\[
\int_V^{V+Y} \Big|\sum_{r_j} X^{ir_j}e^{-r_j/T}\Big|^2 dX
\ll
N^{1+\eps} V^{2+\eps} + T^3 Y (NV)^\eps + \frac{T^{6+\eps}V}{N} + T^4.
\]
We pick $N=T^3V^{-1/2}$
and thus arrive at the inequality
\begin{equation}\label{0512:eq001}
\int_V^{V+Y} \Big|\sum_{r_j} X^{ir_j}e^{-r_j/T}\Big|^2 dX \ll T^{3}
V^{3/2+\eps},
\end{equation}
since $T\leq V^{1/2-\eps}$.
Note that $N\geq 1$ only if $T\geq V^{1/6}$.
For $T<V^{1/6}$, the bound~\eqref{0512:eq001}
follows from the trivial estimate $S(T,X)\ll T^3$. This proves~\eqref{0512:eq001} for every value
of $T\leq V^{1/2-\eps}$, which concludes the proof of
Theorem~\ref{intro:thm2}.
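The bookkeeping behind the choice $N=T^3V^{-1/2}$ can also be verified symbolically. The following Python sketch (assuming \texttt{sympy}; the variable names are ours) checks that the two main terms balance and that $N\geq 1$ forces $T\geq V^{1/6}$.
\begin{verbatim}
# Illustrative check of the balancing N = T^3 V^{-1/2}.
from sympy import symbols, Rational, simplify, solve, Eq

T, V = symbols('T V', positive=True)
N = T**3 / V**Rational(1, 2)

term_kloosterman = N * V**2        # Y-independent term from Proposition 1
term_rankin      = T**6 * V / N    # term from Proposition 2

print(simplify(term_kloosterman - term_rankin))                 # 0
print(simplify(term_kloosterman - T**3 * V**Rational(3, 2)))    # 0
print(solve(Eq(N, 1), T))     # [V**(1/6)], i.e. N >= 1 iff T >= V^(1/6)
\end{verbatim}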
It remains to prove Propositions~\ref{proposition-kloosterman-sums}
and~\ref{proposition-average-rankin-selberg}.
\section{Second moment of sums of Kloosterman sums}\label{S3}
Next we want to prove Proposition~\ref{proposition-kloosterman-sums}.
In order to do this, we will need to simplify expressions involving
$\omega(z)$ according to the size of $\abs{z}$. We will first prove a
number of auxiliary lemmas, which are then used in different ranges of
the summation in $\mathcal{S}_{n}(\omega)$.
Throughout this section we shall assume that
$N\geq 1$, $n\in\Z[i]$ satisfies $N(n)\asymp N$,
and that $T$, $X$, $V$ and $Y$ are real numbers satisfying the inequalities
\begin{equation}\label{0412:eq001}
1\ll Y\leq V\leq X\leq V+Y
\quad\text{and}\quad
V^\eps \ll T\leq V^{1/2-\eps}.
\end{equation}
Moreover, we recall the mild assumption~\eqref{NTX} on $N$.
We begin the proof by simplifying the
expression defining $\mathcal{S}_n(\omega)$.
After removing the initial
part of the sum,
we can replace the weight function $\omega$ by
a simpler function $\omega_0$ given by
\begin{equation}\label{def:omega0}
\omega_0(z) = \int_{-\infty}^{\infty} \frac{1}{\Gamma(1+ir)^2} \left|\frac{z}{2}\right|^{2ir} \frac{ir^2h(r)}{\sinh(\pi r)} dr.
\end{equation}
These two simplifications come at the cost of an admissible error term,
as demonstrated in the following lemma.
\begin{lemma}\label{lemma:omega0}
Let $\mathcal{S}_n(\omega)$ be as in~\eqref{def:Snomega},
and let $\omega_0$ be as in~\eqref{def:omega0}. Then
\begin{equation}\label{2211:lemma:eq}
\begin{split}
\mathcal{S}_n(\omega)
=
\mathcal{S}_n^\dagger(\omega_0)
+
O( N^{1/2+\epsilon}T^{1+\eps} ),
\end{split}
\end{equation}
where
\[
\mathcal{S}_n^\dagger(\omega_0)
=
\sum_{N(c)>4\pi^2 N(n)}
\frac{S_{\Q(i)}(n,n,c)}{N(c)} \omega_0\left(\frac{2\pi\bar{n}}{c}\right).
\]
\end{lemma}
\begin{proof}
Let us focus first on the portion of the sum where $N(c)\leq 4\pi^2N(n)$,
i.e.~when the complex number
$z=2\pi\bar{n}/c$ satisfies $|z|\geq 1$.
We start from the definition of $\omega(z)$, see~\eqref{def:omega}, and
apply an integral representation
for the kernel $H_{ir}(z)$ (see~\cite[Equation (2.10)]{motohashi_trace_1997}).
Writing $z=|z|e^{i\theta}$, we have
\[
\omega(z)
=
\frac{8}{\pi^2}
\int_0^{\pi/2} \cos(2|z|\cos\theta\sin\phi)
\int_{-\infty}^{+\infty} r^2 h(r) \cosh(\pi r) K_{2ir}(2|z|\cos\phi) dr \, d\phi.
\]
When $\abs{r}$ is bounded, we estimate $h(r)$ trivially and use the fact
that
$K_{2ir}(x)\ll x^{-1/2}$ for all $r\in\mathbb{R}$ and $x>0$ real.
Thus the integral over $r$
contributes $O(|z|^{-1/2})$ in this range.
Now, for $r$ bounded away from zero, we approximate $h(r)$ by
\begin{equation}\label{1511:eq001}
h(r) = \frac{\cosh((\pi+2i\alpha)r)}{\cosh(\pi r)} + O(e^{-\frac{1}{2}\pi r}),
\end{equation}
and the error contributes again $O(|z|^{-1/2})$.
Note also that, after integrating over $\abs{r}=O(1)$, the fraction
in~\eqref{1511:eq001} is bounded by $O(|z|^{-1/2})$.
The remaining integral reads
\[
I
=
\frac{8}{\pi^2}\int_0^{\pi/2} \cos(2|z|\cos\theta\sin\phi)
\int_{-\infty}^{+\infty} r^2 \cosh((\pi+2i\alpha)r) K_{2ir}(2|z|\cos\phi)dr \, d\phi.
\]
The integral over $r$ can be evaluated exactly by
using the formula~\cite[6.795.1]{gradshteyn2007}. This gives
\[
\int_{-\infty}^{+\infty} r^2 \cosh((\pi+2i\alpha)r) K_{2ir}(2|z|\cos\phi)dr
=
-\frac{\pi}{8} \cdot \frac{\partial^2}{\partial b^2} \exp(-a\cosh b),
\]
where $a:=2|z|\cos\phi$ and $b:=\alpha-i\pi/2$ (in particular, $|\Im(b)|<\pi/2$).
Hence, we arrive at the expression
\[
\begin{split}
I
&=
-\frac{1}{\pi}
\int_0^{\pi/2} \cos(2|z|\cos\theta\sin\phi)
\frac{\partial^2}{\partial b^2} \exp(-a\cosh b) d\phi
\\
&=
-\frac{1}{\pi}
\int_0^{\pi/2} \cos(2|z|\cos\theta\sin\phi)
e^{-2|z|\cos\phi \cosh b}
\\
&\phantom{xxxxxxxxxxx}\times
\left(4|z|^2\cos^2\phi \sinh^{2}b + 2|z|\cos\phi \cosh b\right) d\phi.\rule{0pt}{14pt}
\end{split}
\]
Observe that $\Re(\cosh b)\asymp X^{1/2}T^{-1}$ and $|\cosh b|\asymp |\sinh b|\asymp X^{1/2}$.
In the range $\cos\phi>\log^2T/(|z|\Re(\cosh b))$ we bound the integrand in
absolute value.
Since the exponential is $O(T^{-p})$ for arbitrarily large $p$,
the integral contributes $O(|z|^2X T^{-p})$.
On the other hand, when $\cos\phi\leq\log^2T/(|z|\Re(\cosh b))$,
we integrate by parts in $\phi$ once.
This gives a factor $1/(|z|\cosh b)$ from the
exponential and thus the contribution from the integral
is $O(T^{2+\eps}/(|z|X^{1/2}))$.
All in all, we have proved that, for $|z|\geq 1$, we can estimate
\[
\omega(z) \ll \frac{1}{|z|^{1/2}} + \frac{|z|^2X}{T^p} + \frac{T^{2+\epsilon}}{|z|X^{1/2}}.
\]
Summing this for $z=2\pi\bar{n}/c$ and $N(c)\leq 4\pi^2 N(n)$,
and using Weil's bound to estimate the Kloosterman sums,
we get a quantity not bigger than
\[
O( N^{1/2+\epsilon} + N^{1+\eps}X T^{-p} + N^{1/2+\epsilon}T^{2+\epsilon}
X^{-1/2} ).
\]
This is absorbed in the error in~\eqref{2211:lemma:eq}
since $V^\eps\ll T\leq V^{1/2-\eps}$ and $N$ satisfies~\eqref{NTX}
(in fact, this is the only place where we use this assumption).
It remains to estimate the portion
of $\mathcal{S}_n(\omega)$ where $N(c)>4\pi^2N(n)$,
i.e.~when $|z|<1$.
In this range we expand the $J$-Bessel functions in the
definition of $\omega$, \eqref{def:omega},
into power series (see~\eqref{eq:jseries}). We get
\begin{equation}\label{1511:eq003}
\begin{split}
\omega(z)
=
\sum_{k_1,k_2\geq 0}
&\frac{(-1)^{k_1+k_2}}{k_1!k_2!2^{2k_1+2k_2}}z^{2k_1}\bar{z}^{2k_2}
\\
&\times
\int_{-\infty}^{\infty} \frac{1}{\Gamma(k_1+1+ir)\Gamma(k_2+1+ir)}\left|\frac{z}{2}\right|^{2ir}\frac{ir^2h(r)}{\sinh(\pi r)}dr.
\end{split}
\end{equation}
By Stirling's formula it follows that, for any $k\geq 0$, we have
\begin{equation}\label{eq:gammaest}
\frac{1}{\Gamma(k+1+ir)}
\ll
e^{k-(k+1/2)\log(1+|r|) + \pi|r|/2}
\ll
\frac{e^{k+\pi|r|/2}}{(1+|r|)^{k+1/2}}
\end{equation}
Using~\eqref{eq:gammaest} and the fact that
$h(r)\ll e^{-|r|/T}$, we can bound all but the initial part of the double
sum in~\eqref{1511:eq003}
(that is, when $k_1+k_2\geq 1$) by
\[
\ll
|z|^2\sum_{k_1+k_2\geq 1} \frac{e^{k_1+k_2}}{k_1!k_2!2^{2k_1+2k_2}}
\int_{-\infty}^{+\infty} e^{-|r|/T}dr
\ll
T|z|^2.
\]
By Weil's bound, summing this for $N(c)>4\pi^2N(n)$ gives a contribution
of at most
$O(N^{1/2+\eps}T)$, which is absorbed in the error term in~\eqref{2211:lemma:eq}.
We are thus left with the term associated to $k_1=k_2=0$, which is precisely
$\omega_0(z)$.
\end{proof}
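As a quick numerical illustration of the Stirling-type estimate \eqref{eq:gammaest} used above (it plays no role in the proofs), one can inspect the ratio of $1/|\Gamma(k+1+ir)|$ to the right-hand side; since \eqref{eq:gammaest} is an inequality up to an absolute constant, the ratio should merely stay bounded. The sketch below assumes \texttt{mpmath}.
\begin{verbatim}
# Ratio of 1/|Gamma(k+1+ir)| to e^{k + pi|r|/2} / (1+|r|)^{k+1/2}.
import mpmath as mp

def ratio(k, r):
    lhs = 1 / abs(mp.gamma(k + 1 + 1j * r))
    rhs = mp.e**(k + mp.pi * abs(r) / 2) / (1 + abs(r))**(k + 0.5)
    return float(lhs / rhs)

for k in (0, 1, 5, 20):
    print(k, [ratio(k, r) for r in (0.5, 2, 10, 50)])   # stays bounded
\end{verbatim}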
Next we evaluate $\omega_0$ with an explicit error term.
It turns out that a simple closed formula for $\omega_0$
can be given in terms of the $K$-Bessel function of order zero.
Estimates for $K_0$ and its derivative are collected in the following lemma.
\begin{lemma}\label{lemma:K0}
Let $K_0(w)$ be the $K$-Bessel function of order zero and let $w\in\C\setminus\{0\}$ with $\Re(w)\geq 0$.
Then
\begin{equation}\label{K0:eq1}
|K_0(w)| \leq \frac{2}{|w|^{1/2}}\exp(-\Re(w)).
\end{equation}
Moreover, setting $f(w)=e^wK_0(w)$, we have
\begin{equation}\label{K0:eq2}
|f'(w)| \leq \frac{1}{|w|^{3/2}}\left(1+\frac{1}{|w|}\right).
\end{equation}
\end{lemma}
\begin{proof}
The integral representation~\cite[8.432.8]{gradshteyn2007}
\[
K_\nu(w)
=
\sqrt{\frac{\pi}{2w}} \frac{e^{-w}}{\Gamma(\nu+1/2)}
\int_0^\infty e^{-t} t^{\nu-1/2} \left(1+\frac{t}{2w}\right)^{\nu-1/2} dt
\]
holds for $|\arg(w)|<\pi$ and $\Re(\nu)>-1/2$.
Since $\Re(w)\geq 0$, the estimate~\eqref{K0:eq1}
follows after bounding the integrand in absolute value.
Multiplying both sides by~$e^w$, differentiating in $w$ and bounding the result
yields~\eqref{K0:eq2}.
\end{proof}
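As a numerical illustration of \eqref{K0:eq1} (not needed in the sequel), the bound can be checked directly with \texttt{mpmath}; the sample points are arbitrary complex numbers with non-negative real part.
\begin{verbatim}
# Check |K_0(w)| <= 2 |w|^{-1/2} exp(-Re(w)) at points with Re(w) >= 0.
import mpmath as mp

def check(w):
    lhs = abs(mp.besselk(0, w))
    rhs = 2 / mp.sqrt(abs(w)) * mp.exp(-mp.re(w))
    return float(lhs / rhs)            # should be <= 1

for w in (0.1, 1, 10, mp.mpc(0, 5), mp.mpc(2, 30), mp.mpc(0.01, 0.5)):
    print(w, check(w))
\end{verbatim}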
The most important consequence of the simple closed
formula for $\omega_{0}$
is being able to see that $\omega_0(z)\ll|z|^{1/2+\eps}$ as $|z|\to 0$.
Such a decay is guaranteed \emph{a priori} by Kuznetsov's formula,
but is not directly visible in the definition of $\omega_0$
in~\eqref{def:omega0}.
The behaviour of $\omega_0(z)$ for small $z$ is made explicit in the following lemma,
which in turn allows us to replace the infinite sum
over $c$ by a finite sum.
\begin{lemma}\label{lemma:MellinK0}
Let $N\geq 1$ and $n\in\Z[i]$ with $N(n)\asymp N$,
and let $\omega_0$ be as in~\eqref{def:omega0}.
Then
\begin{equation}\label{2311:lemma:eq1}
\omega_0(z) = \frac{iM^2|z|^2X}{\pi} K_0(M|z|X^{1/2}) + O\left(\frac{|z|^{3/2}}{X^{1/4}}\right),
\end{equation}
where $M=e^{-i(\pi/2-1/(2T))}$. In particular, we have
\begin{equation}\label{2311:eq003}
\mathcal{S}_n^\dagger(\omega_0)
=
\mathcal{S}_n^{\ddagger}(K_0)
+
O\left(N^{1/2+\eps}X^{1/2+\eps}\right),
\end{equation}
where $\mathcal{S}_n^{\ddagger}$ is a finite weighted sum
of Kloosterman sums given by
\begin{equation}\label{def:ddagger}
\mathcal{S}_n^{\ddagger}(K_0)
=
2iM^2N(n)X
\!\!\!\!\!\!
\sum_{C_1<N(c)\leq C_2}
\!\!\!\!\!\!
\frac{S_{\Q(i)}(n,n,c)}{N(c)^2} K_0\left(\frac{2\pi
M|n|X^{1/2}}{|c|}\right)
\end{equation}
with $C_1=N(n)V/(T^2\log^2T)$ and $C_2=N(n)V$.
\end{lemma}
\begin{proof}
Let $2A=M|z|X^{1/2}$ and $2B=M|z|X^{-1/2}$.
Then, from
the definition of $h(r)$, together with the relation
\[
\frac{\pi r}{\sinh(\pi r)} = \Gamma(1+ir)\Gamma(1-ir),
\]
we deduce the identity
\begin{align*}
\omega_0(z)
&=
\frac{4A^2}{\pi^2} \int_{(1)} \Gamma(s)^2 A^{-2s} ds
+
\frac{4B^2}{\pi^2} \int_{(1)} \Gamma(s)^2 B^{-2s} ds
\\
&=
\frac{i(2A)^2}{\pi} K_0(2A)
+
\frac{i(2B)^2}{\pi} K_0(2B).
\end{align*}
The second equality follows from~\cite[\S7.3 (17)]{erdelyi_tables_1954}
(see also~\cite[17.43.32]{gradshteyn2007}).
Note that $\Re(2A),\Re(2B)\geq 0$.
Using~\eqref{K0:eq1}, and bounding the exponential
crudely by one, we see that
\begin{equation}\label{2311:eq002}
\left|B^2K_0(2B)\right| \ll |z|^{3/2}X^{-1/4}.
\end{equation}
This proves~\eqref{2311:lemma:eq1}. Also, summing~\eqref{2311:eq002}
over $N(c)>4\pi^2N(n)$ gives a quantity bounded by
\(
O(N^{1/2+\eps}X^{-1/4}),
\)
which is absorbed in the error term in~\eqref{2311:eq003}.
Similarly, from~\eqref{K0:eq1} we see that
\[
\left|A^2K_0(2A)\right| \ll |z|^{3/2}X^{3/4}
\exp\left(-\frac{|z|X^{1/2}}{100T}\right).
\]
Therefore, on summing over $4\pi^2N(n)<N(c)\leq C_1$ and $N(c)> C_2$ we obtain a quantity bounded by
\[
O(N^{1/2+\eps}X^{1/2+\eps}).\qedhere
\]
\end{proof}
We have now reduced the problem of estimating
$\mathcal{S}_{n}(\omega)$ to a matter of understanding a finite sum
of Kloosterman sums,
$\mathcal{S}_{n}^{\ddagger}(K_{0})$,
weighted by a $K$-Bessel function of order zero.
\begin{remark}
Notice that if we use the estimate~\eqref{K0:eq1} also in the remaining
range,
$C_1<N(c)\leq C_2$, we obtain $\mathcal{S}_n^\ddagger(K_0)\ll (NTX)^{1/2+\eps}$.
Collecting the errors from Lemma~\ref{lemma:omega0} and Lemma~\ref{lemma:MellinK0},
we see that this contribution
dominates. Therefore we would have
\begin{equation}\label{pointiwse-bound}
\mathcal{S}_n(\omega)\ll (NTX)^{1/2+\eps},
\end{equation}
which recovers the pointwise bound that appears
in~\cite[p.792]{koyama_prime_2001}.
Our proof differs slightly in places from that of~\cite{koyama_prime_2001}
and provides additional details.
Moreover, Lemma~\ref{lemma:MellinK0} bypasses the use of the method of stationary phase,
giving instead a closed formula for the weight function.
\end{remark}
We will now study the second moment of
$\mathcal{S}_n^\ddagger(K_0)$.
By exploiting the oscillation in the weight function
$K_0(M|z|X^{1/2})$, we
can obtain additional decay when integrating in~$X$.
\begin{lemma}\label{lemma:productK0}
Let $M$ be as in Lemma~\ref{lemma:MellinK0}, and let $x_1,x_2$
be positive real numbers satisfying $1\gg x_1,x_2\gg V^{-1/2}$.
Then
\begin{equation}\label{lemma:product:eq}
\int_V^{V+Y} K_0(Mx_1X^{1/2})
\overline{K_0(Mx_2X^{1/2})} dX
\ll
\frac{\min(YV^{-1/2},L^{-1})}{(x_1x_2)^{1/2}},
\end{equation}
where $L=|x_1-x_2|$.
\end{lemma}
\begin{proof}
Consider the function $f(w)=e^wK_0(w)$. From Lemma~\ref{lemma:K0} we have
\begin{equation}\label{f:bound}
|f(w)|\ll \frac{1}{|w|^{1/2}},\quad |f'(w)|\ll \frac{1}{|w|^{3/2}},
\end{equation}
for $\Re(w)\geq 0$ and $|w|$ bounded away from zero.
The integral in~\eqref{lemma:product:eq} can be written as
\begin{equation}\label{2611:eq001}
\int_V^{V+Y} e^{-X^{1/2}(Mx_1+\overline{M}x_2)} f(Mx_1X^{1/2})\overline{f(Mx_2X^{1/2})}dX.
\end{equation}
Bounding the integrand in absolute value and applying~\eqref{f:bound}
leads to the first term in the minimum in~\eqref{lemma:product:eq}.
The second term in~\eqref{lemma:product:eq} follows from integration by parts
and~\eqref{f:bound}.
\end{proof}
We are now ready to prove Proposition~\ref{proposition-kloosterman-sums}.
\subsection{Proof of Proposition~\ref{proposition-kloosterman-sums}}
By Lemma~\ref{lemma:omega0} and Lemma~\ref{lemma:MellinK0}, we have
\begin{equation}\label{2711:eq004}
\int_V^{V+Y} |\mathcal{S}_n(\omega)|^2 dX
\ll
\int_V^{V+Y} |\mathcal{S}_n^\ddagger(K_0)|^2 dX
+
O(YN^{1+\eps}V^{1+\eps}),
\end{equation}
where $\mathcal{S}_n^\ddagger(K_0)$ is as given in~\eqref{def:ddagger}.
From a dyadic decomposition
and the Cauchy--Schwarz inequality it follows that
we can bound the integral on the right-hand side by
\begin{equation}\label{prop21:eq001}
\begin{split}
&\ll N^{2+\eps} V^{2+\eps} \!\!\!\! \max_{C_1<R\leq C_2}
\int_V^{V+Y} \bigg| \sum_{N(c)\sim R} \!\!\!\!
\frac{S_{\Q(i)}(n,n,c)}{N(c)^2}K_0\left(\frac{2\pi M|n|X^{1/2}}{|c|}\right)\bigg|^2 dX
\\
&=
N^{2+\eps} V^{2+\eps} \!\!\!\! \max_{C_1<R\leq C_2} \sum_{c_1,c_2} \frac{S_{\Q(i)}(n,n,c_1)S_{\Q(i)}(n,n,c_2)}{N(c_1c_2)^2}
\\
&\phantom{xxxxxxxx}\times
\int_V^{V+Y} K_0\left(\frac{2\pi M|n|X^{1/2}}{|c_1|}\right)\overline{K_0\left(\frac{2\pi M|n|X^{1/2}}{|c_2|}\right)} dX,
\end{split}
\end{equation}
where the sum over $c_1,c_2$ is restricted to $R<N(c_1),N(c_2)\leq 2R$.
Note that in this range the numbers $x_j=2\pi|n|/|c_j|$ satisfy the inequality $1\gg x_j\gg V^{-1/2}$,
for $j=1,2$, and we can therefore bound the integral in~\eqref{prop21:eq001}
by using Lemma~\ref{lemma:productK0}.
For $x_1=x_2$, i.e.~$|c_1|=|c_2|$, we use the factor $YV^{-1/2}$ in the minimum
in~\eqref{lemma:product:eq}.
This, coupled with Weil's bound for $S_{\Q(i)}(n,n,c)$, leads to the
following estimate for the diagonal part of the sum:
\begin{equation}\label{2711:eq002}
\ll YN^{3/2+\eps}V^{3/2} \!\!\!\!\!\! \sum_{C_1<N(c)\leq C_2} \frac{|(n,c)|^2r_2(N(c))}{N(c)^{5/2-\eps}} \ll T^3Y (NV)^\eps.
\end{equation}
Here we use $r_2(n)$ to denote the
number of ways of writing $n$ as a sum of two squares,
along with the standard estimate $r_2(n)\ll n^\eps$.
For the off-diagonal terms
in~\eqref{prop21:eq001} (when $|c_1|\neq|c_2|$)
we use again Lemma~\ref{lemma:productK0}.
For technical convenience we interpolate
the two bounds in the minimum with the exponents $(\eps,1-\eps)$,
which gives
\[
\min(YV^{-1/2},L^{-1}) \ll V^\eps L^{-1+\eps}.
\]
Inserting this into~\eqref{prop21:eq001}, we can estimate
the double sum over $|c_1|\neq |c_2|$ by
\begin{equation}\label{2711:eq001}
\ll N^{1+\eps} V^{2+\eps} \!\!\!\!\! \sum_{R\leq
\ell_1\neq \ell_2\leq 2R}
\frac{a_{\ell_{1}}a_{\ell_{2}}}{|\ell_{1}-\ell_{2}|^{1-\eps}},
\end{equation}
where $\ell_{j}=N(c_j)$ and the coefficients $a_{\ell}$ are given by
\[
a_{\ell}: = \sum_{N(c)=\ell} \frac{|S_{\Q(i)}(n,n,c)|}{N(c)^{1+\eps}}.
\]
Using the Hardy--Littlewood--P\'olya inequality~\cite[Th.~381, p.~288]{hardy_inequalities_1934}
and Weil's bound~\eqref{weil} for the Kloosterman sums, we can bound~\eqref{2711:eq001} by
\begin{equation}\label{2711:eq003}
\ll N^{1+\eps} V^{2+\eps} \sum_{\ell\sim R}
a_{\ell}^{2}
\ll N^{1+\eps} V^{2+\eps} \sum_{c\neq 0} \frac{|(n,c)|^2r_2(N(c))}{N(c)^{1+\eps}} \ll N^{1+\eps}V^{2+\eps},
\end{equation}
uniformly in $R$.
Since the above bound dominates the last term in~\eqref{2711:eq004},
combining~\eqref{2711:eq003} with~\eqref{2711:eq002} we deduce that
\[
\int_V^{V+Y} |\mathcal{S}_n(\omega)|^2 dX \ll N^{1+\eps} V^{2+\eps} + T^3Y
(NV)^\eps,
\]
which is what we wanted to prove.
\qed
\section{Average of Rankin--Selberg $L$-functions}\label{S4}
In this section we prove Proposition~\ref{proposition-average-rankin-selberg}.
As in section~\ref{S3}, we take real numbers $T$, $X$, $V$ and $Y$
satisfying the inequalities in~\eqref{0412:eq001}.
Moreover, we assume that $s$
is a complex number with $\Re(s)=1/2$.
First, we note that $h(r)=X^{ir}e^{-r/T} +O(e^{-\pi r})$.
Therefore,
using the fact that the Rankin--Selberg $L$-function
is bounded polynomially in both $r_j$ and $s$,
we can write
\[
M_1(s) = \sum_{r_j} X^{ir_j} e^{-r_j/T} L(s,u_j\otimes u_j) + O(|s|^A).
\]
We decompose the sum on the right-hand side
into intervals of length $T$ and use the Cauchy--Schwarz
inequality to get
\begin{equation}\label{0412:eq003}
|M_1(s)|^2
\ll
\sum_{m=1}^\infty m^2 \Big|\sum_{(m-1)T<r_j\leq mT} \!\!\!\! X^{ir_j} L_j \Big|^2 + O(|s|^{2A}),
\end{equation}
where we
use the shorthand $L_j=L(s,u_j\otimes u_j)$.
We want to integrate over $X\in [V,V+Y]$
in~\eqref{0412:eq003}. Thus we need to
understand the integral
\[
I = \int_V^{V+Y} \Big|\sum_{(m-1)T<r_j\leq mT} \!\!\!\! X^{ir_j} L_j \Big|^2 dX.
\]
Opening up the square and integrating directly
yields
\begin{align}\label{eq:Iest}
I
&\ll
V T^\eps e^{-2m} \!\!\!\! \sum_{(m-1)T<r_j,r_k\leq mT} \frac{|L_jL_k|}{1+|r_j-r_k|}
\notag\\
&\ll
V T^\eps e^{-2m} \!\!\!\! \sum_{(m-1)T<r_j\leq mT} \!\!\!\! |L_j|^2 \!\!\!\! \sum_{(m-1)T<r_k\leq mT} \frac{1}{1+|r_j-r_k|}.
\end{align}
The Weyl law (with remainder) on
$\hmodgs$~\cite[Theorem~2]{bonthonneau_weyl_2015} implies
the estimate $\#\{T<r_j\leq T+1\}\ll T^2$ in unit intervals
(see~\cite[(2.1)]{balkanova_prime_2017}).
This gives, for $r_j\leq mT$,
\begin{equation}\label{eq:sumbound1}
\sum_{(m-1)T<r_k\leq mT} \frac{1}{1+|r_j-r_k|} \ll (mT)^{2+\eps}.
\end{equation}
Now, in order to estimate the remaining sum over
$r_{j}$ in~\eqref{eq:Iest}, we use the relation between the Rankin--Selberg $L$-function
and the symmetric square $L$-function, i.e.
\[
L(s,u_j\otimes u_j) = |v_j(1)|^2 \frac{\zeta_{\Q(i)}(s)}{\zeta_{\Q(i)}(2s)}
L(s,\mathrm{sym}^2 u_j),
\]
where $\zeta_{\Q(i)}$ is the Dedekind zeta function of $\Q(i)$.
The bound $v_j(1)\ll r_j^\eps$
(see~\cite[Proposition~3.1]{koyama_prime_2001})
together with the second moment estimate~\cite[Theorem~3.3]{balkanova_prime_2017}
give
\begin{equation}\label{eq:sumbound2}
\sum_{(m-1)T<r_j\leq mT} |L_j|^2 \ll |s|^A (mT)^{4+\eps}.
\end{equation}
After substituting the estimates~\eqref{eq:sumbound1} and~\eqref{eq:sumbound2}
into~\eqref{eq:Iest}, we finally obtain
\[
I \ll V T^{6+\eps} m^{8+\eps} e^{-2m}.
\]
Using this with~\eqref{0412:eq003} and summing over $m$
leads us to the desired bound.
\qed
\section{Recovering Theorem~\ref{intro:thm1}}\label{S5}
Finally, we show how to recover Theorem~\ref{intro:thm1} from Theorem~\ref{intro:thm2}.
The error term $E_{\Gamma}(X)$ is related to the spectral exponential
sum $S(T,X)$ via the explicit formula. For $\pslzi$, this was proved by
Nakasuji~\cite[Thm. 4.1]{nakasuji_prime_2001}, who showed that
\begin{equation}\label{eq:explicit}
E_\Gamma(X)
=
2\Re\left(\sum_{0<r_j\leq T}\frac{X^{1+ir_j}}{1+ir_j}\right) + O\left(\frac{X^2}{T}\log X\right),
\end{equation}
for $1 \leq T < X^{1/2}$.
Thus, we can estimate the second moment of $E_\Gamma(X)$ as
\[
\frac{1}{Y} \int_V^{V+Y} |E_\Gamma(X)|^2dX
\ll
\frac{1}{Y} \int_V^{V+Y} \bigg|\sum_{0<r_j\leq T}\frac{X^{1+ir_j}}{1+ir_j}\bigg|^2 dX
+
O\left( \frac{V^4 }{T^2} \log^2 V \right).
\]
Using partial summation, we write the exponential sum as
\[
\sum_{0<r_j\leq T} \frac{X^{1+ir_j}}{1+ir_j}
=
\frac{X}{1+iT} S(T,X) + iX \int_{1}^{T} \frac{S(U,X)}{(1+iU)^2} dU,
\]
and by a repeated use of the Cauchy--Schwarz inequality we obtain
\[
\begin{split}
\int_V^{V+Y} \bigg| \sum_{0<r_j\leq T}\frac{X^{1+ir_j}}{1+ir_j}\bigg|^2 dX
&\ll
\frac{V^2}{T^2} \int_{V}^{V+Y} |S(T,X)|^2 dX
\\
&+
V^2 \log T \int_1^T \frac{1}{|1+iU|^3} \int_{V}^{V+Y} |S(U,X)|^2 dX \; dU.
\end{split}
\]
We apply Theorem~\ref{intro:thm2} and bound the right-hand side by
\[
\ll V^{7/2+\epsilon} T^{1+\epsilon} + V^{7/2+\eps} T^\eps \int_1^T dU
\ll V^{7/2+\epsilon} T^{1+\epsilon}.
\]
Thus
\[
\frac{1}{Y} \int_V^{V+Y} |E_\Gamma(X)|^2dX \ll V^{7/2+\epsilon} T^{1+\epsilon}
Y^{-1}+ \frac{V^4 }{ T^2} \log^2 V.
\]
Balancing with $T = V^{1/6} Y^{1/3}$ completes
the proof of Theorem~\ref{intro:thm1}.\qed
BkiUdRI5qoYAzjhRyakv | \section{Conclusion}
In this paper we demonstrated an application of the new Multi-region Bilinear CNN architecture to the problem of person re-identification.
Having tried different variants of the bilinear architecture, we showed that such architectures give state-of-the-art performance on larger datasets.
In particular, the Multi-region Bilinear CNN retains some spatial information and extracts more complex features, increasing the number of parameters over the baseline CNN without overfitting. We have demonstrated a notable gap between the performance of the Multi-region Bilinear CNN and the performance of the standard CNN \cite{yi2014deep}.
\section*{Acknowledgements}
\FloatBarrier
{\small
\bibliographystyle{ieee}
\section{Experiments}
\indent\textbf{Datasets and evaluation protocols}. We investigate the performance of the CNN method and its Bilinear variant (figure \ref{fig:architecture}) for three re-identification datasets: CUHK01 \cite{LiZW12}, CUHK03 \cite{li2014deepreid} and Market-1501 \cite{zheng2015scalable}. The CUHK01 dataset contains images of 971 identities from two disjoint camera views. Each identity has two samples per camera view. We used 485 randomly chosen identities for training and the other 486 for testing.
The CUHK03 dataset includes 13,164 images of 1,360 pedestrians captured from 3 pairs of cameras. Two versions of the dataset are provided: \textit{CUHK03-labeled} and \textit{CUHK03-detected}, with manually labeled bounding boxes and automatically detected ones, respectively. We provide results for both versions.
Following \cite{li2014deepreid}, we use Recall@K metric to report our results.
In more detail, the evaluation protocol accepted for the CUHK03 is the following: 1,360 identities are split into 1,160 identities for training, 100 for validation and 100 for testing. At test time single-shot Recall@K curves are calculated. Five random splits are used for both CUHK01 and CUHK03 to calculate the resulting average Recall@K. Some sample images of CUHK03 dataset are shown in figure \ref{fig:teaser}.
\begin{table
\caption{Recall@K for the CUHK03-labeled dataset.}
\begin{tabular}{c|ccccc}
\hline
Method & r = 1 & r = 5 & r = 10 & r = 20 \\
\hline
FPNN \cite{li2014deepreid} & 20.65 & 51.50 & 66.50 & 80.00 \\
LOMO+XQDA \cite{liao2015person} & 52.20 & 82.23 & 92.14 & 96.25 \\
ImprovedDeep \cite{ahmed2015improved}& 54.74 & 86.50 & 93.88 & 98.10 \\
ME \cite{paisitkriangkrai2015learning}
&62.10 & 89.10 & 94.30 & 97.80 \\
DiscrNullSpace \cite{zhang2016learning}&62.55 & 90.05 & 94.80 & 98.10 \\
\hline
CNN &64.15 & 91.66 & 96.97 &99.26 \\
MR B-CNN & \bf{69.7} & \bf{93.37} &\bf{98.91} &\bf{99.39} \\
\hline
\end{tabular}
\label{tab:cuhk03_labeled}
\end{table}
\begin{table
\caption{Recall@K for the CUHK03-detected dataset. The new architecture (MR B-CNN) outperforms other methods.}
\begin{tabular}{c|ccccc}
\hline
Method & r = 1 & r = 5 & r = 10 & r = 20 \\
\hline
FPNN \cite{li2014deepreid} & 19.89 & 50.00 & 64.00 & 78.50 \\
ImprovedDeep \cite{ahmed2015improved}& 44.96 & 76.01 & 83.47 & 93.15 \\
LOMO+XQDA \cite{liao2015person} & 46.25 & 78.90 & 88.55 & 94.25 \\
DiscrNullSpace \cite{zhang2016learning}& 54.70 & 84.75 & \bf{94.80} & 95.20 \\
SiamLSTM \cite{VariorSLXW16} & 57.3 &80.1 & 88.3 &- \\
GatedSiamCNN \cite{VariorHW16} & 61.8 & 80.9 & 88.3 & - \\
\hline
CNN &58.09 & 87.06 & 93.38 & 97.17 \\
MR B-CNN &\bf{63.67} &\bf{89.15} & 94.66 & \bf{97.5} \\
\hline
\end{tabular}
\label{tab:cuhk03_detected}
\end{table}
We also report our results on the Market-1501 dataset, introduced in \cite{zheng2015scalable}.
This dataset contains 32,643 images of 1,501 identities; each identity is captured by two to six cameras. The dataset is randomly divided into a test set of 750 identities and a training set of 751 identities.
\indent\textbf{Architectures.} In the experiments, we use the CNN architecture of \cite{yi2014deep} as one of the baselines.
We also evaluate the baseline Bilinear CNN (\textit{``B-CNN''}) architecture, where bilinear features are pooled over all locations for each of the three image parts. This corresponds to the formula (\ref{eq:desc}), where the whole image is used for pooling.
Finally, we present the results for the Multi-region Bilinear CNN (\textit{``MR B-CNN''}) introduced in this paper (figure \ref{fig:architecture}).
\indent\textbf{Implementation details.} As in \cite{yi2014deep}, we form training pairs inside each batch consisting of 128 randomly chosen training images (from all cameras). The training set is shuffled after each epoch, so the network can see many different image pairs while training. All images are resized to height 160 and width 60 pixels. Cosine similarity is used to compute the distance between a pair of image descriptors. As discussed above, the Histogram loss \cite{UstinovaNIPS16} is used to learn the models.
\begin{table}
\caption{Recall@K for the Market-1501 dataset. The proposed architecture (MR B-CNN) outperforms other methods.}
\begin{tabular}{c|cccc}
\hline
Method & r = 1 & r = 5 & r = 10 & mAP \\
\hline
DeepAttrDriven \cite{SuZX0T16} & 39.4 & - & - & 19.6 \\
DiscrNullSpace \cite{zhang2016learning}& 61.02 & - & - & 35.68 \\
SiamLSTM \cite{VariorSLXW16} & 61.60 & - & - & 35.31 \\
GatedSiamCNN \cite{VariorHW16} & 65.88 & - & - & 39.55 \\
\hline
CNN & 56.62 & 78.92 & 85.15& 32.97 \\
MR B-CNN & \bf{66.36} & \bf{85.01} & \bf{90.17} & \bf{41.17} \\%& 85.01 & 90.17 & 92.76 & 94.09 \\
\hline
\end{tabular}
\label{tab:market}
\end{table}
\begin{table
\caption{Recall@K for the CUHK01 dataset. For CNN and MR B-CNN, single-shot protocol with 486 queries was used. We include some results for this dataset, although we are not sure which protocol is used in \cite{zhang2016learning}. Other works use the same protocol as ours.}
\begin{tabular}{c|ccccc}
\hline
Method & r = 1 & r = 5 & r = 10 & r = 20 \\
\hline
ImprovedDeep \cite{ahmed2015improved} &47.53& 71.60& 80.25& 87.45\\
ME \cite{paisitkriangkrai2015learning} & 53.40 & 76.40 & 84.40 & 90.50\\
DiscrNullSpace \cite{zhang2016learning} & 69.09 &86.87 &91.77& 95.39\\
\hline
CNN & 48.04 &74.34 &83.33& 90.48 \\
MR B-CNN & 52.88 &78.08& 86.3& 92.63 \\
\hline
\end{tabular}
\label{tab:cuhk01}
\end{table}
We train the networks with a weight decay rate of $0.0005$. The learning rate follows the ``step'' policy: the initial learning rate is set to $10^{-4}$ and it is divided by ten when the performance on the validation set stops improving (which happens roughly every $100,000$ iterations).
A dropout layer with probability 0.5 is inserted before the fully connected layer. The best iteration is chosen using the validation set. Following \cite{ahmed2015improved}, for CUHK01 we finetune the net pretrained on CUHK03.
\indent\textbf{Variations of the Bilinear CNN architecture.}
We have conducted a number of experiments with varying pooling areas for bilinear features (MR B-CNN), including full-area pooling (B-CNN), on CUHK03-labeled. Here we demonstrate results for our current MR B-CNN architecture with a $5\times5$ pooling area, as this architecture has been found to be the most beneficial for the CUHK03 dataset.
We also compare results for the B-CNN architecture, where no spatial information is preserved. In \fig{recall}a and \fig{recall}b, B-CNN is shown to be outperformed by the other two architectures by a large margin. This result is not specific to a particular loss, as we observed the same in our preliminary experiments with the Binomial Deviance loss~\cite{yi2014deep}.
The MR B-CNN architecture shows uniform improvement over the baseline CNN architecture on all three datasets (\fig{recall}a,b,c,d).
\indent\textbf{Comparison with the state-of-the-art methods.}
To our knowledge, the Multi-region Bilinear CNN networks introduced in this paper outperform previously published methods on the CUHK03 (both 'detected' and 'labeled' versions) and Market-1501 datasets. Recall@K values for several ranks are shown in \tab{cuhk03_labeled}, \tab{cuhk03_detected} and \tab{market} (the single-query setting was used). For the Market-1501 dataset, the mean average precision value is additionally shown. The results for CUHK01 are shown in \tab{cuhk01}.
\section{Introduction}
\begin{figure*}[t
\centering
\begin{tabular}{c c}
\begin{tabular}{c c c c}
\includegraphics[ height=2cm, width=1.2cm]{figures/pedestrians_pairs/false_pos1.png}&
\includegraphics[ height=2cm, width=1.2cm]{figures/pedestrians_pairs/false_pos2.png}&
\includegraphics[ height=2cm, width=1.2cm]{figures/pedestrians_pairs/false_pos3.png}&
\includegraphics[ height=2cm, width=1.2cm]{figures/pedestrians_pairs/false_pos4.png}
\\
\includegraphics[ height=2cm, width=1.2cm]{figures/pedestrians_pairs/false_neg1.png}&
\includegraphics[ height=2cm, width=1.2cm]{figures/pedestrians_pairs/false_neg2.png}&
\includegraphics[ height=2cm, width=1.2cm]{figures/pedestrians_pairs/false_neg3.png}&
\includegraphics[ height=2cm, width=1.2cm]{figures/pedestrians_pairs/false_neg4.png}
\end{tabular} &
\begin{tabular}{c c}
\includegraphics[ height=2cm, width=4cm]{figures/birds/birds_false_pos1.png}&
\includegraphics[ height=2cm, width=4cm]{figures/birds/birds_false_pos2.png}
\\
\includegraphics[ height=2cm, width=4cm]{figures/birds/birds_false_neg1.png}&
\includegraphics[ height=2cm, width=4cm]{figures/birds/birds_false_neg3.png}
\end{tabular}
\end{tabular}
\caption{Left -- difficult re-identification cases in the CUHK03 dataset \cite{li2014deepreid}. The pairs in the upper row show very similar images depicting different persons; the pairs in the lower row show dissimilar images depicting the same person. Right -- analogous cases from the CUB-Birds dataset for fine-grained classification~\cite{Wah11}. There is a clear similarity between the challenges posed by the two tasks. Both re-identification and fine-grained classification deal with strong viewpoint variations, and often need to focus on small-scale fragments in order to distinguish subjects/classes. At the same time the re-identification task has a greater degree of alignment, and we therefore suggest a modification of the bilinear CNN exploiting the presence of such weak alignment in the re-identification case.}
\label{fig:teaser}
\end{figure*}
The task of person re-identification is drawing ever-increasing attention from the computer vision and visual surveillance communities. This is because of the inherent difficulty of the task, paired with the fact that medium-sized training datasets have become available only recently. The task also has clear practical value for automated surveillance systems.
Despite a long history of research on re-identification \cite{yi2014deep, ma2012bicov, DBLP:journals/cviu/BazzaniCM13, li2015cross,prosser2010person, kuo2013person,roth2014mahalanobis,hirzer2012person,paisitkriangkrai2015learning,ma2012local,liao2015person,li2014deepreid,ahmed2015improved,chen2015deep}, the accuracy of the existing systems is often insufficient for the full automation of such application scenarios, which stimulates further research activity. The main confounding factor is the notoriously high variation of the appearance of the same person (even at short time spans) due to pose variations, illumination variation, background clutter, complemented by the high number of individuals wearing similar clothes that typically occur in the same dataset.
In this work, we follow the line of work that applies deep convolutional neural networks (CNNs) and embedding learning to the person re-identification task. Our aim is an architecture that can map (embed) an image of a detected person to a high-dimensional vector (descriptor) such that a simple metric such as Euclidean or cosine distance can be applied to compare pairs of vectors and reason about the probability of two vectors to describe the same person. Here, we avoid the approach taken in several recent works \cite{ahmed2015improved} that train a separate multi-layer network to compute the distance between a pair of descriptors, since such methods do not scale well to large datasets, where the ability to perform fast search requires the use of a simple metric.
The choice of the convolutional architecture for embedding in the case of person re-identification is far from obvious. In particular, ``standard'' architectures that combine convolutional layers followed by fully-connected layers such as those used for image classification or face embedding can fail to achieve sufficient invariance to strong 3D viewpoint changes as well as to non-rigid articulations of pedestrians, given the limited amount of training data typical for re-identification tasks and datasets.
Here, we propose a person re-identification architecture that is based on the idea of bilinear convolutional networks (bilinear CNNs) \cite{lin2015bilinear} that was originally presented for fine-grained classification tasks and later evaluated for face recognition \cite{roychowdhury2015face}. We note that the task of person re-identification shares considerable similarity with fine-grained categorization (\fig{teaser}), as the matching process in both cases often needs to resort to the analysis of fine texture details and parts that are hard to localize. Bilinear CNNs, however, rather radically discard spatial information in the process of the bilinear pooling. While this may be justified for fine-grained classification problems such as bird classification, the variability of geometric pose and viewpoints in re-identification problems is more restricted. Overall, the multi-region bilinear CNNs can be regarded as a middle ground between the traditional CNNs and the bilinear CNNs. In the experiments, we show that such a compromise achieves an optimal performance across a range of person re-identification benchmarks, while also performing favorably compared to previous state-of-the-art. The success of our architecture confirms the promise hold by deep architectures with multiplicative interactions such as bilinear CNNs and our multi-region bilinear CNNs for hard pattern recognition tasks.
\begin{figure*}
\begin{center}
\begin{tabular}{ c c }
\cincludegraphics[width=0.4\textwidth]{figures/architecture/multiregion_bilinear.eps}
&
\cincludegraphics[width=0.4\textwidth]{figures/architecture/architecture.pdf}
\\
(a)&(b)
\end{tabular}
\caption{The proposed architecture for person re-identification: (a) - multi-region bilinear sub-network used for each of the three parts of the input image, (b) - the whole multi-region Bilinear CNN architecture that uses bilinear pooling over regions rather than the entire image. The new architecture achieves state-of-the-art performance over a range of benchmark datasets.}
\label{fig:architecture}
\end{center}
\end{figure*}
\section{The architecture}
Our solution combines the state-of-the-art method for person re-identification (Deep Metric Learning \cite{yi2014deep} ) and the state-of-the-art fine-grained recognition method (bilinear CNN \cite{lin2015bilinear}). Modifying the bilinear CNNs by performing multi-region pooling boosts the performance of this combination significantly. Below, we introduce the notations and discuss the components of the system in detail.
\indent\textbf{Convolutional architecture.}
We use the architecture proposed by \cite{yi2014deep} as the baseline. The network incorporates three independent streams, in which three overlapping parts of the person image are processed separately (top, middle and bottom parts), and produces a 500-dimensional descriptor as an output.
Each of the three streams incorporates two convolutional layers of size $7\times7$ and $5\times5$, followed by the rectified linear (ReLU) non-linearity and max pooling with the kernel size of two pixels and the stride of two pixels.
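For concreteness, a minimal sketch of one such per-part stream is given below. This is our own illustration rather than the released implementation: the channel widths and the exact placement of the non-linearities are assumptions, since the description above fixes only the kernel and pooling sizes.
\begin{verbatim}
# Hedged sketch of a single per-part convolutional stream (channel counts
# are assumed for the example; the original architecture fixes them).
import torch.nn as nn

def make_stream(in_channels=3, mid_channels=32, out_channels=32):
    return nn.Sequential(
        nn.Conv2d(in_channels, mid_channels, kernel_size=7),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(kernel_size=2, stride=2),
        nn.Conv2d(mid_channels, out_channels, kernel_size=5),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(kernel_size=2, stride=2),
    )
\end{verbatim}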
\begin{figure*}
\begin{tabular}{cccc}
\includegraphics[width=0.22\textwidth]{figures/plots/cuhk03_labeled_different_arch_hist.pdf}&
\includegraphics[width=0.22\textwidth]{figures/plots/cuhk03_detected_different_arch_hist.pdf}&
\includegraphics[width=0.22\textwidth]{figures/plots/market_different_arch_hist.pdf}&
\includegraphics[width=0.22\textwidth]{figures/plots/cuhk01_different_arch_hist.pdf}
\\
(a)&(b)&(c)&(d)
\end{tabular}
\caption{Recall@K results for the (a) CUHK03-labeled, (b) CUHK03-detected (c) Market-1501 (d) CUHK01, datasets. MR B-CNN uniformly outperforms other architectures.}
\label{fig:recall}
\end{figure*}
\indent\textbf{Multi-region Bilinear Model.} Bilinear CNNs are motivated by the specialized pooling operation that aggregates the correlations across maps coming from different feature extractors. The aggregation, however, discards all spatial information that remains in the network prior to the application of the operation. This is justified when the images lack even loose alignment (as e.g.\ in the case of some fine-grained classification datasets), but it is sub-optimal in our case, where relatively tight bounding boxes are either manually selected or obtained using a good person detector. Thus some loose geometric alignment between images is always present. Therefore we modify the bilinear layer and replace it with the \textit{multi-region bilinear layer}, which allows us to retain some of the geometric information. Our modification is, of course, similar to many other approaches in computer vision, notably to the classical spatial pyramids of \cite{Lazebnik06}.
In more detail, similarly to \cite{lin2015bilinear}, we introduce the bilinear model for image similarity as follows:
\\ $\mathcal{B} = ({f_{A}^{}}, {f_{B}^{}}, \mathcal{P}, \mathcal{S}) $, where ${f_{A}^{}}$ and ${f_{B}^{}}$ are feature extractor functions (implemented as CNNs), $\mathcal{P}$ is the pooling function, $\mathcal{S}$ is the similarity function. The feature function takes an image ${\mathcal{I}}$ at location $\mathcal{L}$ and outputs a feature of fixed dimension $\mathcal{D}$ (unlike \cite{lin2015bilinear}, we use vector notation for features for simplicity): $f : \mathcal{I} \times \mathcal{L} \rightarrow \mathcal{R}_{}^{1 \times \mathcal{D}}$. In this work, two convolutional CNNs (without fully-connected layers) serve as the two feature extractors $f_{A}^{}$ and $f_{B}^{}$. For each of the two images in the pair at each spatial location, the outputs of the two feature extractors $f_{A}^{}$ and $f_{B}^{}$
are combined using the bilinear operation \cite{lin2015bilinear}:
\begin{equation}
\label{eq:bilinear}
\text{bilinear}(l, im, {f_{A}^{}}, {f_{B}^{}}) = {{f_{A}^{}}(l, im)_{}^{T}{f_{B}^{}}(l, im)},
\end{equation}
where $l \in \mathcal{L}, im \in \mathcal{I}$.
Using the operation \eq{bilinear}, we compute the bilinear feature vector for each spatial location $l$ of the image $im$. If the feature extractors $f_{A}^{}$ and $f_{B}^{}$ output local feature vectors of size $M$ and $N$ respectively, their bilinear combination will have size $M \times N$, or $MN \times 1$ if reshaped to a column vector.
We then suggest to aggregate the obtained bilinear features by pooling across locations that belong to a predefined set of image regions: ${r_{1}}, ..., {r_{R}}$, where $R$ is number of chosen regions.
After such pooling, we get the pooled feature vector for each image region $i$ (as opposed to the feature vector that is obtained in \cite{lin2015bilinear} for the whole image):
\begin{equation} \label{eq:bilinear_pooling}
\phi_{r_{i}}(im) = \phi(im_{r_{i}}) = \sum_{l \in r_{i}} \text{bilinear}(l, im, {f_{A}^{}}, {f_{B}^{}})
\end{equation}
Finally, in order to get a descriptor for image $im$, we combine all region descriptors into a matrix of size $R \times MN$:
\begin{equation} \label{eq:desc}
\phi(im) = [{\phi_{r_{1}}(im)}_{}^{T}; {\phi_{r_{2}}(im)}_{}^{T}; ... ; {\phi_{r_{R}}(im)}_{}^{T}].
\end{equation}
To pick the set of regions, in our experiments, we simply used the grid of equally-sized non-overlapping patches (note that the receptive fields of the units from different regions are still overlapping rather strongly). The scheme of Multi-region Bilinear CNN architecture is shown in figure~\ref{fig:architecture}a. \\
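A minimal sketch of the multi-region bilinear operation (\ref{eq:bilinear})--(\ref{eq:desc}) is given below. It is our own illustration, not the released implementation; the $2\times2$ region grid and the tensor sizes are chosen only for the example.
\begin{verbatim}
# Multi-region bilinear pooling of two conv feature maps (illustrative).
import torch

def multi_region_bilinear(feat_a, feat_b, grid=(2, 2)):
    """feat_a: (B, M, H, W), feat_b: (B, N, H, W) from the two streams.
    Returns a (B, R, M*N) descriptor with R = grid[0]*grid[1] regions."""
    B, M, H, W = feat_a.shape
    N = feat_b.shape[1]
    gh, gw = grid
    regions = []
    for i in range(gh):
        for j in range(gw):
            ys, ye = i * H // gh, (i + 1) * H // gh
            xs, xe = j * W // gw, (j + 1) * W // gw
            a = feat_a[:, :, ys:ye, xs:xe].reshape(B, M, -1)  # (B, M, L)
            b = feat_b[:, :, ys:ye, xs:xe].reshape(B, N, -1)  # (B, N, L)
            # sum over locations l of f_A(l)^T f_B(l): the bilinear pooling
            regions.append(torch.bmm(a, b.transpose(1, 2)).reshape(B, M * N))
    # stack the region descriptors into the final per-image descriptor
    return torch.stack(regions, dim=1)

phi = multi_region_bilinear(torch.randn(4, 32, 20, 8),
                            torch.randn(4, 32, 20, 8))
print(phi.shape)   # torch.Size([4, 4, 1024])
\end{verbatim}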
We incorporate the multi-region bilinear operation (\ref{eq:desc}) into the convolutional architecture in the following way: instead of using one sub-network for each image part, we use two feature extractors with the same convolutional architecture described above. The outputs are combined by the multi-region bilinear operation (\ref{eq:desc}) after the second convolution. The three bilinear outputs for the image parts are then concatenated and turned into a 500-dimensional image descriptor by an extra fully connected layer. The overall scheme of the Multi-region Bilinear CNN for each of the two siamese sub-networks used in this work is shown in figure \ref{fig:architecture}b.
\\\indent\textbf{Learning the model.}
As in \cite{yi2014deep}, we use deep embedding learning \cite{chopra2005learning}, where multiple pedestrian images are fed into identical neural networks and the loss then attempts to pull descriptors corresponding to the same person (\textit{matching pairs}) closer together and push descriptors corresponding to different people (\textit{non-matching pairs}) apart. To learn the embeddings, we use the recently proposed Histogram loss \cite{UstinovaNIPS16}, which has been shown to be effective for the person re-identification task.
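A sketch of the pair construction used during training is shown below (our own illustration). It forms all within-batch pairs, computes cosine similarities between L2-normalized descriptors, and derives matching/non-matching labels from the person identities; the Histogram loss of \cite{UstinovaNIPS16} is then evaluated on these similarity values and is omitted here.
\begin{verbatim}
# Within-batch pair construction and cosine similarities (illustrative).
import torch
import torch.nn.functional as F

def batch_similarities(descriptors, identities):
    """descriptors: (B, 500) embeddings; identities: (B,) person ids."""
    d = F.normalize(descriptors, dim=1)        # unit-norm descriptors
    sims = d @ d.t()                           # (B, B) cosine similarities
    match = identities.unsqueeze(0) == identities.unsqueeze(1)
    off_diag = ~torch.eye(len(identities), dtype=torch.bool)
    return sims[off_diag], match[off_diag]     # flattened pair lists
\end{verbatim}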
\section{Related work}
\textbf{Deep CNNs for Re-Identification.} Several CNN-based methods for person re-identification have been proposed recently~\cite{li2014deepreid, yi2014deep, ahmed2015improved, chen2015deep, VariorHW16, VariorSLXW16,SuZX0T16, LiuFQJY17, XiaoLOW16}. Yi~\etal~\cite{yi2014deep} were among the first to evaluate ``siamese'' architectures that accomplish embedding of pedestrian images into the descriptor space, where they can be further compared using cosine distance. In \cite{yi2014deep}, a peculiar architecture specific to pedestrian images is proposed that includes three independent sub-networks corresponding to three regions (legs, torso, head-and-shoulders). This is done in order to take into account the variability of the statistics of textures, shapes, and articulations between the three regions. Our architecture includes the network of Yi~\etal~\cite{yi2014deep} as a component.
Apart from \cite{yi2014deep}, \cite{li2014deepreid} and \cite{ahmed2015improved} learn classification networks that can categorize a pair of images as either depicting the same subjects or different subjects.
The proposed deep learning approaches \cite{ahmed2015improved, yi2014deep, li2014deepreid}, while competitive, do not clearly outperform more traditional approaches based on ``hand-engineered'' features \cite{paisitkriangkrai2015learning, zhao2014person}.
Unfortunately, when searching for matches in a dataset, the methods proposed in \cite{li2014deepreid}, \cite{ahmed2015improved} and \cite{chen2015deep} need to process pairs that include the query and every image in the dataset, and hence cannot directly utilize fast retrieval methods based on Euclidean and other simple distances. Here we aim at the approach that can learn per-image descriptors and then compare them with cosine similarity measure. This justifies starting with the architecture proposed in \cite{yi2014deep} and then modifying it by inserting new layers.
There are several new works reporting results that are better than ours \cite{LiuFQJY17, XiaoLOW16} where additional data and/or sophisticated pre-training schemes were used, whereas we train our model from scratch on each dataset (except for CUHK01, where CUHK03 was used for pre-training).
\textbf{Bilinear CNNs.} Bilinear convolutional networks (Bilinear CNNs), introduced in \cite{lin2015bilinear}, achieved state-of-the-art results for a number of fine-grained recognition tasks, and have also shown potential for face verification \cite{roychowdhury2015face}. Bilinear CNNs consist of two CNNs (where the input of these two CNNs is the same image) without fully-connected layers. The outputs of these two streams are combined in a special way via bilinear pooling. In more detail, the outer product of deep features is calculated for each spatial location, resulting in a quadratic number of feature maps, over which sum pooling across all locations is then performed. The resulting orderless image descriptor is then used in subsequent processing steps. For example, in \cite{lin2015bilinear} and \cite{roychowdhury2015face} it is normalized and fed into the softmax layer for classification. An intuition given in \cite{lin2015bilinear} is that the two CNN streams combined by the bilinear operation may correspond to part and texture detectors respectively. This separation may facilitate localization when significant pose variation is present, without the need for any part labeling of the training images. Our approach evaluates bilinear CNNs for the person re-identification task and improves this architecture by suggesting its multi-region variant.
\section{Additional Diagnostic Experiments}
\label{app:diagnostics}
\subsection{The Role of Explicit Backward Label Mapping}
\label{sec:no-back-map}
Related work either focus on tasks with labels invariant to warping like image classification or gaze estimation \cite{jaderberg2015spatial,recasens2018learning} (discussed in
\ifstandalonesupplement
Sec~3.1),
\else
Sec~\ref{sec:background}),
\fi
or expect an implicit backward mapping to be learned through black-box end-to-end training \cite{marin2019efficient}
(discussed in
\ifstandalonesupplement
Sec~2).
\else
Sec~\ref{sec:related}).
\fi
In this section, we suggest that the implicit backward label mapping approach is not feasible for object detection. To this end, we train and test our KDE methods minus any bounding box unwarping. Specifically, we no longer unwarp bounding boxes when computing loss during training and when outputting final detections during testing. Instead, we expect the model to output detections in the original image space.
Due to instability, additional measures are taken to make it end-to-end trainable. First, we train with a decreased learning rate of 1e-4. Second, we train with and without adding ground truth bounding boxes to RoI proposals. The main KDE experiments do not add ground truth to RoI proposals, because there is no way of warping bounding boxes into the warped image space (the implementation of $\mathcal{T}$ does not exist). We additionally try setting this option here, because it would help the RoI head converge quicker, under the expectation that the RPN should output proposals in the original space. All other training settings are identical to the baseline setup (\ifstandalonesupplement Sec~4.1.1)\else Sec~\ref{Baseline and Setup})\fi.
Results are shown in Tab~\ref{tab:additional-diagnostics}. The overall AP is single-digit under all of these configurations, demonstrating the difficulty of implicitly learning the backward label mapping. This is likely due to the fact that our model is pretrained on COCO \cite{lin2014microsoft}, so it has learned to localize objects based on their exact locations in the image, and finetuning on Argoverse-HD is not enough to ``unlearn'' this behavior and learn the backward label mapping. Another factor is that in the $S_I$ and $S_C$ cases, each image is warped differently, making the task of learning the backward label mapping even more challenging. We suspect that training from scratch with a larger dataset like COCO and using the warp parameters (e.g. the saliency map) as input may produce better results. However, this only reinforces the appeal of our method due to ease of implementation and cross-warp generalizability (we can avoid having to train a new model for each warping mechanism).
\subsection{Sensitivity to Quality of Previous-Frame Detections}
\label{sec:perfect-saliency}
Two of our methods, $S_I$ and $S_C$ are dependent on the accuracy of the previous-frame detections. In this section, we analyze the sensitivity of such a dependency through a soft upper bound on $S_I$ and $S_C$, which is generated using the current frame's ground truth annotations in place of detections from the previous frame. This soft upper bound is a perfect saliency map, up to the amplitude and bandwidth hyperparameters. Note that this is only a change in the testing configuration.
We report results in Tab \ref{tab:additional-diagnostics}. We see a significant boost in accuracy in all cases. Notably, the finetuned KDE $S_I$ model at $0.5$x scale achieves an AP of $29.6$, outperforming the baseline's accuracy of $29.2$ at $0.75$x scale.
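For reference, the construction of this upper-bound saliency map can be sketched as follows (our own illustration; the amplitude and bandwidth parameterizations that distinguish $S_I$ and $S_C$ follow the main text and are left as free parameters here).
\begin{verbatim}
# Hedged sketch: KDE-style saliency map from (ground truth) boxes.
import numpy as np

def kde_saliency(boxes, amplitudes, bw_scale, out_hw=(75, 120)):
    """boxes: (K, 4) [x1, y1, x2, y2] in a normalized [0,1]x[0,1] frame."""
    H, W = out_hw
    ys, xs = np.mgrid[0:H, 0:W]
    ys, xs = (ys + 0.5) / H, (xs + 0.5) / W
    sal = np.zeros((H, W))
    for (x1, y1, x2, y2), a in zip(boxes, amplitudes):
        cx, cy = (x1 + x2) / 2, (y1 + y2) / 2
        sx, sy = bw_scale * (x2 - x1), bw_scale * (y2 - y1)  # per-box bandwidth
        sal += a * np.exp(-((xs - cx)**2 / (2 * sx**2)
                            + (ys - cy)**2 / (2 * sy**2)))
    return sal
\end{verbatim}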
\subsection{Sensitivity to Inter-Frame Motion}
\label{sec:motion-sensitivity}
\begin{table*}[hbt!]
\centering
Argoverse-HD before finetuning\\
\begin{adjustbox}{width=1.0\linewidth,center}
\begin{tabular}{@{}lcccccccccccccc@{}}
\toprule
Method & AP & AP$_{50}$ & AP$_{75}$ & AP$_{S}$ & AP$_{M}$ & AP$_{L}$ & person & mbike & tffclight & bike & bus & stop & car & truck \\
\midrule
\multicolumn{10}{@{}l}{\textbf{Main Results} (copied from the main text for comparison)} \\
\ \ \ Baseline & 21.5 &35.8 &22.3 &2.8 &22.4 &\textbf{50.6} &20.8 &9.1 &13.9 &7.1 &48.0 &16.1 &37.2 &20.2 \\
\ \ \ KDE ($S_D$) & 23.3 &40.0 &22.9 &5.4 &25.5 &48.9 &20.9 &13.7& 12.2 &9.3& \textbf{50.6}& 20.1& 40.0& 19.5 \\
\ \ \ KDE ($S_I$) &\textbf{24.1} &\textbf{40.7} &\textbf{24.3} &\textbf{8.5} &24.5 &48.3 &\textbf{23.0} &\textbf{17.7} &\textbf{15.1} &\textbf{10.0} &49.5 &17.5 &\textbf{41.0} &19.4 \\
\ \ \ KDE ($S_C$) & 24.0 &40.5& \textbf{24.3}& 7.4& \textbf{26.0}& 48.2 &22.5 &14.9 &14.0 &9.5 &49.7 &\textbf{20.6} &\textbf{41.0} &\textbf{19.9}\\
\ \ \ Upp. Bound ($0.75$x) & 27.6 &45.1 &28.2 &7.9 &30.8 &51.9 &29.7 &14.3 &21.5 &6.6 &54.4 &25.6 &44.7 &23.7 \\
\ \ \ Upp. Bound (1x) &32.7 &51.9 &34.3 &14.4 &35.6 &51.8 &33.7 &21.1 &33.1 &5.7 &57.2 &36.7 &49.5 &24.6 \\
\midrule
\multicolumn{10}{@{}l}{\textbf{Without an Explicit Backward Label Mapping (Sec \ref{sec:no-back-map})}} \\
\ \ \ KDE ($S_D$)& 5.4 &14.2& 3.7& 0.0& 0.9 &20.7 &3.2 &0.4 &1.2 &0.8 &27.9 &0.0 &5.3 &4.2 \\
\ \ \ KDE ($S_I$)&6.1& 15.6& 4.0& 0.2 &0.8 &20.3 &2.3& 0.6& 0.7 &1.8 &30.8& 0.0 &7.0& 5.4 \\
\ \ \ KDE ($S_C$)&6.0 &15.9& 3.8 &0.1& 0.9 &21.9 &3.0& 0.6 &0.9& 1.5& 30.2& 0.0 &6.7 &5.2 \\
\midrule
\multicolumn{10}{@{}l}{\textbf{Upper Bound with Ground Truth Saliency (Sec \ref{sec:perfect-saliency})}} \\
\ \ \ KDE ($S_I$) &25.4 &42.6 &25.6& 9.1 &26.2 &49.5 &25.3 &17.4 &16.8 &10.1 &49.4 &23.4 &41.7 &19.4 \\
\ \ \ KDE ($S_C$) &24.5& 41.7& 24.6& 7.5 &26.8& 48.8& 23.6& 14.5& 15.2& 9.7& 49.7 &22.6 &41.3& 19.8 \\
\midrule
\multicolumn{10}{@{}l}{\textbf{Sensitivity to Inter-Frame Motion (Sec \ref{sec:motion-sensitivity})}} \\
\ \ \ KDE ($S_I$), $j=10$ &25.3& 42.9& 25.3 &8.4 &26.7& 49.1& 25.0 &16.4& 16.2 &10.1 &48.8 &25.0& 41.8& 19.5 \\
\ \ \ KDE ($S_I$), $j=25$ &24.1 &41.0& 24.5 &6.4 &26.1 &49.0& 24.0& 12.6 &15.2 &9.0& 48.5& 22.9& 41.1& 19.6\\
\ \ \ KDE ($S_I$), $j=50$ &22.5 &38.3& 22.9 &4.2& 24.1& 49.1& 21.9& 9.9 &14.4& 8.2 &48.4 &18.5& 39.0 &19.7\\
\ \ \ KDE ($S_I$), $j=100$ &20.9& 35.1 &21.6& 2.8 &21.9& 48.0& 20.1& 7.1& 14.0 &6.8& 47.8& 15.3& 36.7& 19.1\\
\ \ \ KDE ($S_I$), $j=200$ &20.0 &33.5& 20.6 &2.5& 20.5& 46.7& 19.2& 6.0 &13.4& 6.2 &46.7& 14.3& 35.5& 18.5\\
\bottomrule
\end{tabular}
\end{adjustbox}
Argoverse-HD after finetuning\\
\begin{adjustbox}{width=1.0\linewidth,center}
\begin{tabular}{@{}lcccccccccccccc@{}}
\toprule
Method & AP & AP$_{50}$ & AP$_{75}$ & AP$_{S}$ & AP$_{M}$ & AP$_{L}$ & person & mbike & tffclight & bike & bus & stop & car & truck \\
\midrule
\multicolumn{10}{@{}l}{\textbf{Main Results} (copied from the main text for comparison)} \\
\ \ \ Baseline &24.2 &38.9 &26.1 &4.9 &29.0 &50.9 &22.8 &7.5 &23.3 &5.9 &44.6 &19.3 &43.7 &26.6 \\
\ \ \ Learned Sep. & 27.2&44.8&28.3&\textbf{12.2}&29.1&46.6&24.2& 14.0 &22.6& 7.7 &39.5& \textbf{31.8}& 50.0& 27.8\\
\ \ \ Learned Nonsep. & 25.9&42.9&26.5&10.0&28.4&48.5&25.2 &11.9 &20.9& 7.1& 39.5& 25.1& 49.4& 28.1\\
\ \ \ KDE ($S_D$) & 26.7 &43.3& 27.8& 8.2 &29.7& 54.1 &25.4& 13.5& 22.0 &8.0 &\textbf{45.9}& 21.3& 48.1 &29.3\\
\ \ \ KDE ($S_I$) & 28.0 & 45.5 &\textbf{29.2} &10.4 &\textbf{31.0} &\textbf{54.5} &27.3 &16.9 &\textbf{24.3} &\textbf{9.0} &44.5 &23.2 &\textbf{50.5} &28.4 \\
\ \ \ KDE ($S_C$) &27.2& 44.7& 28.4 &9.1 &30.9& 53.6 &27.4 &14.5& 23.0& 7.0 &44.8 &21.9& 49.9& \textbf{29.5} \\
\ \ \ LKDE ($S_I$) & \textbf{28.1} &\textbf{45.9} &28.9 &10.3& 30.9 &54.1 &\textbf{27.5}& \textbf{17.9}& 23.6& 8.1 &45.4 &23.1& 50.2& 28.7 \\
\ \ \ Upp. Bound ($0.75$x) & 29.2 &47.6 &31.1 &11.6 &32.1 &53.3 &29.6 &12.7 &30.8 &7.9 &44.1 &29.8 &48.8 &30.1\\
\ \ \ Upp. Bound (1x) & 31.6& 51.4 &33.5& 14.5& 33.5 &54.1& 31.8& 15.2 &37.4& 9.0& 43.9& 35.3& 50.2& 30.2\\
\midrule
\multicolumn{10}{@{}l}{\textbf{Without an Explicit Backward Label Mapping (Sec \ref{sec:no-back-map})}} \\
\ \ \ KDE ($S_D$), no RoI GT &2.1 &2.6& 2.5 &0.0& 0.0& 4.0 &0.6& 0.0& 0.0& 0.6& 14.8& 0.0 &0.0& 0.9\\
\ \ \ KDE ($S_D$) &1.8 &2.7 &1.9& 0.0& 0.0 &3.2& 0.6 &0.0& 0.0& 0.0 &13.3 &0.0 &0.1 &0.6\\
\ \ \ KDE ($S_I$), no RoI GT &2.5& 3.0 &2.9 &0.0& 0.1 &4.3& 0.7 &0.0& 0.0& 0.6 &17.0 &0.9& 0.0 &0.9\\
\ \ \ KDE ($S_I$) &2.0& 2.8& 2.4 &0.0& 0.0 &3.7& 0.6& 0.0 &0.0& 0.0 &14.8 &0.0 &0.3& 0.5\\
\midrule
\multicolumn{10}{@{}l}{\textbf{Upper Bound with Ground Truth Saliency (Sec \ref{sec:perfect-saliency})}} \\
\ \ \ KDE ($S_I$) &29.6 &48.7 &30.7& 12.0 &32.8 &54.4 &28.3& 16.3& 27.7& 9.9 &43.9 &30.6 &50.9& 28.8 \\
\ \ \ KDE ($S_C$) &27.8 &45.5& 28.8& 9.6 &31.7 &53.4 &27.5& 13.9& 24.7& 6.5 &44.5& 25.1 &50.2 &29.6 \\
\midrule
\multicolumn{10}{@{}l}{\textbf{Sensitivity to Inter-Frame Motion (Sec \ref{sec:motion-sensitivity})}} \\
\ \ \ KDE ($S_I$), $j=10$ &29.4& 48.3& 30.7 &11.5 &32.8& 54.6& 27.9& 15.9& 27.2 &9.7& 43.7& 31.1& 50.6& 28.7 \\
\ \ \ KDE ($S_I$), $j=25$ &28.0& 46.1& 29.2 &9.2& 32.1& 55.3& 26.4& 13.9& 25.9 &9.3& 43.9& 26.8& 49.2& 28.7\\
\ \ \ KDE ($S_I$), $j=50$ &26.2 &42.9& 27.7 &6.6& 30.5& 54.9& 24.1& 12.1& 24.9 &8.6& 44.1& 21.8& 46.2& 27.9\\
\ \ \ KDE ($S_I$), $j=100$ &24.5 &39.9& 25.8 &4.8 &28.6& 53.5& 22.3& 10.2& 23.5 &7.6 &43.5& 17.7 &43.9& 27.1\\
\ \ \ KDE ($S_I$), $j=200$ &23.6 &38.3& 25.2 &4.2& 27.8& 53.0 &21.4& 8.6 &22.8 &7.4 &42.9& 16.6& 42.7& 26.6\\
\bottomrule
\end{tabular}
\end{adjustbox}
\vspace{-0.5em}
\caption{Additional diagnostics experiments on Argoverse-HD. Please refer to Sec~\ref{app:diagnostics} for a detailed discussion.}
\label{tab:additional-diagnostics}
\end{table*}
Having noted that the $S_I$ and $S_C$ formulations are sensitive to the accuracy of the previous-frame detections, in this section, we further test their robustness to motion between frames. We use ground truth bounding boxes (rather than detections) from the previous frame in order to isolate the effect of motion on accuracy. We introduce a jitter parameter $j$ and
translate each of the ground truth bounding boxes in the x and y directions by values sampled from $\mathcal{U}(-j,j)$. The translation values are in pixels in reference to the original image size of $1920\times 1200$. As in Sec~\ref{sec:perfect-saliency}, this is a purely testing-time change. Also note that the upper bound experiments in Sec~\ref{sec:perfect-saliency} correspond to setting $j=0$. We test only on $S_I$ and report the full results in Tab~\ref{tab:additional-diagnostics}. We also plot summarized results and discuss observations in Fig~\ref{fig:motion-sensitivity}.
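A minimal sketch of this testing-time perturbation is given below (our own illustration rather than the released code; the corner-format boxes, per-box rigid translation, and clipping to the canvas are assumptions):
\begin{verbatim}
# Sketch: jitter ground-truth boxes to probe sensitivity to inter-frame
# motion. Boxes are (x1, y1, x2, y2) in pixels on the 1920x1200 canvas;
# each box is shifted by an offset drawn from U(-j, j) in x and in y.
import numpy as np

def jitter_boxes(boxes, j, width=1920, height=1200, seed=0):
    rng = np.random.default_rng(seed)
    boxes = np.asarray(boxes, dtype=np.float64).copy()
    offsets = rng.uniform(-j, j, size=(len(boxes), 2))
    boxes[:, [0, 2]] += offsets[:, [0]]   # translate rigidly in x
    boxes[:, [1, 3]] += offsets[:, [1]]   # translate rigidly in y
    boxes[:, [0, 2]] = boxes[:, [0, 2]].clip(0, width)
    boxes[:, [1, 3]] = boxes[:, [1, 3]].clip(0, height)
    return boxes
\end{verbatim}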
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{Fig/motion-sensitivity.png}
\vspace{-1.5em}
\caption{
Plots showing the effect of motion (jitter) on AP using the KDE $S_I$ formulation. Results have been normalized according to the AP at 0 jitter. As is intuitive, motion affects AP$_S$ the most and AP$_L$ the least. After finetuning (with an artificial jitter of 50), we see that the model reacts less adversely to jitter, indicating that our regularization has helped.}
\label{fig:motion-sensitivity}
\end{figure}
\section{FOVEA Beyond Faster R-CNN}
\label{app:other-det}
In the main text and other sections of the appendix, we conduct our experiments based on Faster R-CNN. However, our proposed warping-for-detection framework is agnostic to specific detectors. To show this, we test our methods on RetinaNet \cite{lin2017focal}, a popular single-stage object detector, and on YOLOF \cite{chen2021you}, a recent YOLO variant that avoids bells and whistles and long training schedules (up to 8x for ImageNet and 11x for COCO compared to standard schedules for YOLOv4~\cite{bochkovskiy2020yolov4}).
For both these detectors, we test baselines at $0.5$x and $0.75$x scales both before and after finetuning. We then compare these results against our KDE $S_I$ method at $0.5$x scale.
We use a learning rate of 0.01 for the RetinaNet KDE $S_I$ model and 0.005 for the RetinaNet baselines. All other training settings for RetinaNet are identical to those of the Faster R-CNN baseline. For YOLOF, we use a learning rate of 0.012 and keep all other settings true to the original paper. Results are presented in Tab~\ref{tab:another-det}.
\begin{table}[t]
\centering
\begin{adjustbox}{width=1.0\linewidth,center}
\begin{tabular}{@{}lcccccc@{}}
\toprule
Method & AP & AP$_{50}$ & AP$_{75}$ & AP$_{S}$ & AP$_{M}$ & AP$_{L}$ \\
\midrule
\multicolumn{6}{@{}l}{\textbf{RetinaNet, Before Finetuning on Argoverse-HD}}\\
Baseline ($0.5$x) &18.5& 29.7& 18.6& 1.3 &17.2 &48.8 \\
KDE ($S_I$) & 18.5& 31.2& 17.9& 4.5 &16.8 &44.9\\
Upp. Bound ($0.75$x) & 24.8& 38.8& 25.5& 4.5 &28.7 &52.0\\
\midrule
\multicolumn{6}{@{}l}{\textbf{RetinaNet, After Finetuning on Argoverse-HD}}\\
Baseline ($0.5$x) & 22.6 &38.9& 21.4& 4.0 &22.0 &53.1\\
KDE ($S_I$) & 24.9 &40.3 &25.3& 7.1& 27.7 &50.6\\
Upp. Bound ($0.75$x) & 29.9 &48.6& 30.1& 9.7 &32.5 &54.2\\
\midrule
\multicolumn{6}{@{}l}{\textbf{YOLOF, Before Finetuning on Argoverse-HD}}\\
Baseline ($0.5$x) & 15.0 & 25.4 & 14.3 & 0.6 & 11.0 & 46.0 \\
KDE ($S_I$) & 16.8 & 29.0 & 16.0 & 0.9 & 14.0 & 46.4 \\
Upp. Bound ($0.75$x) & 21.6 &35.5& 22.3& 2.3& 22.2& 52.7\\
\midrule
\multicolumn{6}{@{}l}{\textbf{YOLOF, After Finetuning on Argoverse-HD}}\\
Baseline ($0.5$x) & 18.4& 30.5& 18.3& 1.4& 16.5& 47.9\\
KDE ($S_I$) & 21.3& 36.7& 20.2& 3.5& 21.8& 49.7\\
Upp. Bound ($0.75$x) & 25.1& 41.3& 25.3& 4.7& 27.6& 54.1\\
\bottomrule
\end{tabular}
\end{adjustbox}
\caption{Experiments with RetinaNet \cite{lin2017focal} and YOLOF \cite{chen2021you}. We follow the same setup as the experiments with Faster R-CNN. The top quarter suggests that unlike Faster R-CNN, RetinaNet does not work off-the-shelf with our KDE warping. However, the second quarter suggests that performance boosts similar to those with Faster R-CNN can be gained after finetuning on Argoverse-HD. Interestingly, for YOLOF, our method boosts AP in all categories -- small, medium, and large -- even with off-the-shelf weights.}
\label{tab:another-det}
\end{table}
\section{Comparison Against Additional Baselines}
\label{app:other-baselines}
There are other approaches that make use of image warping or patch-wise zoom for visual understanding. The first notable work \cite{recasens2018learning}, explained extensively in the main text, warps the input image for tasks that have labels invariant to warping. The second notable work \cite{gao2018dynamic} employs reinforcement learning (RL) to decide which patches to zoom in on for high-resolution processing. In this section, we attempt to compare FOVEA with these two approaches.
Our method builds upon spatial transformer networks~\cite{jaderberg2015spatial,recasens2018learning} and we have already compared against \cite{recasens2018learning} sporadically in the main text. Here we summarize all the differences (see Tab~\ref{tab:techcontrib}). A naive approach might
directly penalize the discrepancy between the output of the (warped) network and the unwarped ground-truth
in an attempt to implicitly learn the inverse mapping,
but this results in abysmal performance
(dropping from 28.1 to 2.5 AP; discussed in Sec~\ref{sec:no-back-map}). To solve this issue, in Sec~\ref{sec:background}, we note that \cite{jaderberg2015spatial,recasens2018learning} actually learn a backward map $\mathcal{T}^{-1}$ instead of a forward one $\mathcal{T}$. This allows us to add a backward-map layer that transforms bounding box coordinates back to the original space via $\mathcal{T}^{-1}$, dramatically improving accuracy.
A second significant difference with \cite{jaderberg2015spatial,recasens2018learning} is our focus on attention-for-efficiency. If the effort required to determine where to attend is more than the effort to run the raw detector, attentional processing can be inefficient (see the next paragraph). \cite{recasens2018learning} introduces a lightweight saliency network to produce a heatmap for where to attend; however, this model does not extend to object detection, perhaps because it requires the larger capacity of a detection network (see Sec~\ref{Baseline and Setup}). Instead, we replace this feedforward network with an essentially {\em zero}-cost saliency map constructed via a simple but effective global spatial prior (computed offline) or temporal prior (computed from previous frame's detections).
Next, we propose a technique to prevent cropping during warping (via reflection padding, as shown in Fig~\ref{fig:anticropping}), which also boosts performance by a noticeable amount. Finally, as stated in the training formulation in Sec~\ref{sec:warping4det}, it {\em doesn't even make sense} to train a standard RPN-based detector with warped input due to the choice of delta encoding (which normally helps stabilize training). We must remove this standard encoding and use a GIoU loss to compensate for the lost stability during training.
\begin{table}[!h]
\small
\centering
\adjustbox{width=1\linewidth}{
\begin{tabular}{lc}
\toprule
Method & AP \\ \midrule
FOVEA (Ours full) & 28.1 \\
w/o Explicit backward mapping & 2.5 \\
w/o KDE saliency (using saliency net as in \cite{recasens2018learning}) & Doesn't train \\
w/o Anti-crop regularization & 26.9 \\
w/o direct RPN box encoding & N/A \\
\bottomrule
\end{tabular}
}
\caption{Summary of key modifications in FOVEA.}
\label{tab:techcontrib}
\end{table}
Next, we attempt to compare against this RL-based zoom method \cite{gao2018dynamic} using our baseline detector (public implementation from mmdetection~\cite{mmdetection}) on their Caltech Pedestrian Dataset~\cite{Dollar2012PAMI}. However, while their full-scale $800 \times 600$ Faster R-CNN detector reportedly takes 304ms, our implementation is {\em dramatically} faster (44ms), consistent with the literature for modern implementations and GPUs.
This changes the conclusions of that work because full-scale processing is now faster than coarse plus zoomed-in processing (taking 28ms and 25ms respectively), even assuming a zero-runtime RL module (44ms $<$ 28ms + 25ms).
\section{Additional Visualizations}
Please refer to Figs~\ref{fig:kde-supp-visuals} and \ref{fig:kde-visuals} for additional qualitative results of our method.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{Fig/kde-supp2.pdf}
\vspace{-1.5em}
\caption{Additional examples of the $S_I$ KDE warping method. Bounding boxes on the saliency map denote previous frame detections, and bounding boxes on the warped image denote current frame detections. The magnification heatmap depicts the amount of magnification at different regions of the warped image.
(a) is an example of $S_I$ correctly adapting to an off-center horizon.
(b) shows a multimodal saliency distribution, leading to a multimodal magnification in the $x$ direction.
(c) is another example of $S_I$ correctly magnifying small objects in the horizon.
(d) is a failure case in which duplicate detections of the traffic lights in the previous frame lead to more magnification than desired along that horizontal strip. One solution to this could be to weight our KDE kernels by the confidence of the detection.
(e) is another failure case of $S_I$, in which a small clipped detection along the right edge leads to extreme magnification in that region.
One general issue we observe is that the regions immediately adjacent to magnified regions are often contracted. This is visible in the magnification heatmaps as the blue shadows around magnified regions. This is a byproduct of the dropoff in the attraction effect of the local attraction kernel. Perhaps using non-Gaussian kernels could mitigate this issue.
}
\label{fig:kde-supp-visuals}
\end{figure*}
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{Fig/main_kde_vis_small.pdf}
\vspace{-1em}
\caption{Examples of KDE warp computed from bounding boxes, extracted from a training dataset ($S_D$) or the previous frame's detections ($S_I, S_C$). We visualize predicted bounding boxes in the warped image. Recall that large objects won't be visible in the saliency due to their large variance from Eq~\ref{eq:kde}. (a) $S_D$ magnifies the horizon (b) $S_I$ magnifies the center of the image, similar to $S_D$ (c) $S_I$ adapts to magnify the mid-right region (d) $S_C$'s saliency combines the temporal and spatial biases.}
\label{fig:kde-visuals}
\end{figure*}
\section{Detection-Only Streaming Evaluation}
\label{app:det-streaming}
In
\ifstandalonesupplement
Sec~4.2
\else
Sec~\ref{sec:streaming}
\fi
of the main text, we provide the full-stack evaluation for streaming detection. Here we provide the detection-only evaluation for completeness in Tab~\ref{tab:streaming-det}. This setting only allows detection and scheduling, thus isolating the contribution of tracking and forecasting. We observe a similar trend as in the full-stack setting in
\ifstandalonesupplement
Tab~2.
\else
Tab~\ref{tab:streaming-full}.
\fi
\section{Additional Implementation Details}
\label{app:impl-details}
In this section, we provide additional details necessary to reproduce the results in the main text.
For the learned separable model from Sec~\ifstandalonesupplement 4.1.2\else\ref{direct saliency experiments}\fi, we use two arrays of length $31$ to model saliency along the $x$ and $y$ dimensions, and during training, we blur the image with a $47\times47$ Gaussian filter in the first epoch, a trick introduced in \cite{recasens2018learning} to force the model to zoom.
For the learned nonseparable model, we use an $11\times11$ saliency grid, and we blur the image with a $31\times31$ filter in the first epoch.
We use an attraction kernel $k$ with a standard deviation of $5.5$ for both versions. Additionally, we multiply the learning rate and weight decay of saliency parameters by 0.5 in the first epoch and 0.2 in the last two epochs, for stability. We find that we don't need anti-crop regularization here, because learning a fixed warp tends to behave nicely.
For each of our KDE methods, we use arrays of length $31$ and $51$ to model saliency in the vertical and horizontal directions, respectively. This is chosen to match the aspect ratio of the original input image and thereby preserve the vertical and horizontal ``forces'' exerted by the attraction kernel.
For the baseline detector, we adopt the Faster R-CNN implementation of mmdetection 2.7 \cite{mmdetection}. All our experiments are conducted in an environment with PyTorch 1.6, CUDA 10.2 and cuDNN 7.6.5. For streaming evaluation, we mention a performance boost due to better implementation in Tab~\ref{tab:streaming-det}
\&
\ifstandalonesupplement
Tab~2,
\else
Tab~\ref{tab:streaming-full},
\fi
and the changes mainly consist of adopting newer versions of mmdetection and cuDNN compared to the solution in \cite{Li2020StreamingP} (switching from a smooth L1 loss to an L1 loss for the regression part, along with code optimization).
\begin{table}[t]
\small
\centering
\begin{tabular}{clcccc}
\toprule
ID & Method & AP & AP$_S$ & AP$_M$ & AP$_L$ \\
\midrule
1 & Prior art \cite{Li2020StreamingP} & 13.0 & 1.1 & 9.2 & 26.6 \\
\midrule
2 & + Better implementation & 14.4 & 1.9 & 11.5 & \textbf{27.9} \\
3 & + Train with pseudo GT & 15.7 & 3.0 & 14.8 & 27.1 \\
\midrule
4 & 2 + Ours ($S_I$) & 15.7 & 4.7 & 12.8 & 26.8 \\
5 & 3 + Ours ($S_I$) & \textbf{17.1} & \textbf{5.5} & \textbf{15.1} & 27.6 \\
\bottomrule
\end{tabular}
\caption{Streaming evaluation in the detection-only setting. First, we are able to improve over the previous state of the art through a better implementation (row 2) and training with pseudo ground truth (row 3). Second, our proposed KDE warping further boosts the streaming accuracy (rows 4 \& 5).}
\label{tab:streaming-det}
\end{table}
\section{Introduction}
\begin{figure}[t]
\centering
\includegraphics[width=1\linewidth]{Fig/teaser2.pdf}
\caption{Standard image downsampling (top right) limits the capability of the object detector to find small objects. In this paper, we propose an attentional warping method (bottom right) that enlarges salient objects in the image while maintaining a small input resolution. Challenges arise when warping also alters the output labels (\eg, bounding boxes).}
\label{fig:teaser-teaser}
\end{figure}
Safety-critical robotic agents such as self-driving cars make use of an enormous suite of high-resolution perceptual sensors, with the goal of minimizing blind spots, maximizing perception range, and ensuring redundancy~\cite{Argoverse,nuscenes2019,Waymo}.
We argue that ``over-sensed'' perception platforms provide unique challenges for vision algorithms since those visual sensors must rapidly consume sensor streams while continuously reporting back the state of the world.
While numerous techniques exist to make a particular model run fast, such as quantization~\cite{Vanhoucke2011ImprovingTS}, model compression~\cite{Cheng2017ASO}, and inference optimization~\cite{RaganKelley2013HalideAL}, at the end of the day, simple approaches that subsample sensor data (both spatially by frame downsampling and temporally by frame dropping) are still most effective for meeting latency constraints~\cite{Li2020StreamingP}.
However, subsampling clearly throws away information, negating the goals of high-resolution sensing in the first place!
This status quo calls for novel vision algorithms.
To address this challenge, we take inspiration from the human visual system; biological vision makes fundamental use of {\em attentional} processing.
While current sensing stacks make use of regular grid sampling,
the human vision system in the periphery has a much lower resolution than in the center (fovea), due to the pooling of information from retinal receptors by retinal ganglion cells. Such variable resolution is commonly known as foveal vision \cite{Larson2009TheCO}.
In this paper, we propose FOVEAted image magnification (FOVEA) for object detection, which retains high resolution for objects of interest while maintaining a small canvas size. We exploit the sparsity in detection datasets -- objects of interest usually only cover a portion of the image. {\em The key idea is to resample such that background pixels can make room for objects of interest.
} The input images are downsampled and warped such that
salient
areas in the warped image have higher resolutions. While image warping has been explored for image classification \cite{jaderberg2015spatial,recasens2018learning} and regression \cite{recasens2018learning}, major challenges remain for object detection. First, processing the images in the warped space will produce bounding box outputs in the warped space. We make use of
differentiable backward maps
to unwarp the bounding box coordinates. Second, it's much more challenging to identify regions for magnification. Empirically, we find that end-to-end trained saliency networks that work well for image classification fail for object detection. However, unlike gaze estimation and fine-grained image classification, the tasks evaluated on by \cite{recasens2018learning}, we have an explicit signal for saliency with object detection -- bounding box annotations or outputs. Specifically, we use dataset-wide priors and object locations from the previous frame (for video streams). We train a function that maps bounding box locations to warping parameters. Third, object detection has a much lower tolerance to cropping than image classification, since objects appear not only in the center but also near the edges of the image. We find that previous image warping methods are very susceptible to this issue, so we introduce an anti-cropping modification to the warping formulation.
We validate our approach on two self-driving datasets for 2D object detection: Argoverse-HD \cite{Li2020StreamingP} and BDD100K \cite{bdd100k}. First, we show that even without learning, our hand-coded bounding-box-guided magnification improves the average precision (AP) for off-the-shelf Faster R-CNN \cite{ren2015faster}, suggesting that considerable sparsity exists in the input space for those datasets. Next, we finetune the detector with differentiable image warping and a backward label mapping, which further boosts AP. In both cases, the improvement for small objects is most significant. Finally, to show that such accuracy improvement is worth the latency cost, we evaluate our algorithm under the streaming perception framework \cite{Li2020StreamingP}, and we achieve state-of-the-art performance in terms of streaming AP.
\section{Related Work}
\label{sec:related}
\paragraph{Object detection}
Object detection is one of the most fundamental problems in computer vision. Many methods have pushed the state-of-the-art in detection accuracy~\cite{girshick2014rich,ren2015faster,lin2017feature,chen2019hybrid,qiao2020detectors}, and many others aim at improving the efficiency of the detectors~\cite{Liu2016SSDSS,redmon2018yolov3,tan2020efficientdet,bochkovskiy2020yolov4}. The introduction of fully convolutional processing~\cite{sermanet2013overfeat} and spatial pyramid pooling~\cite{He2015SpatialPP} has allowed us to process the input image in its original size and shape. However, it is still a common practice to downsample the input image for efficiency purposes. Efficiency becomes a more prominent issue when people move to the video domain. In video object detection, the focus has been on how to make use of temporal information to reduce the number of detectors invoked~\cite{zhu2017flow,zhu2018towards,luo2019detect}. These methods work well on simple datasets like ImageNet~VID~\cite{ILSVRC15}, but might be unsuitable for self-driving car scenarios, where multiple new objects appear in almost every frame. Furthermore, those methods are usually designed to work in an offline fashion, \ie, allowing access to future frames. Detection methods are the building blocks of our framework, and our proposed approach is largely agnostic to any particular detector.
\paragraph{Online/streaming perception}
In the online setting, the algorithm must work without future knowledge. \cite{lin2019tsm} proposes the Temporal Shift Module that enables video understanding through channel shifting; in the online setting, the shifting is restricted to be uni-directional. \cite{Bergmann2019TrackingWB} proposes a multi-object tracking method that takes previous-frame detections as input and uses them as additional proposals for the current frame. Our method also takes previous-frame detections as input, but we use them to guide image warping. Streaming accuracy~\cite{Li2020StreamingP} is a recently proposed metric that evaluates the output of a perception algorithm at all time instants, forcing the algorithm to consider the amount of streaming data that must be ignored while computation is occurring.
\cite{Li2020StreamingP} demonstrates that streaming object detection accuracy can be significantly improved by tuning the input frame resolution and framerate. In this work, we demonstrate that adaptive attentional processing is an orthogonal dimension for improving streaming performance.
\begin{figure*}[t]
\centering
\includegraphics[width=0.8\linewidth]{Fig/approach2.pdf}
\caption{Our proposed method for object detection. Given bounding box predictions from the previous frame (if the inputs are videos) or a collection of all the ground truth bounding boxes in the training set, the saliency generator creates a saliency map, which is fed into the spatial transformer (adapted from \cite{recasens2018learning,jaderberg2015spatial}) to downsample the high-resolution input frame while magnifying salient regions. Then we feed the downsampled input into a regular object detector, and it produces bounding-box output in the warped space, which is then converted back to the original image space as the final output.
}
\label{fig:teaser}
\end{figure*}
\paragraph{Adaptive visual attention} Attentional processing has been well studied in the vision community, and it appears in different forms \cite{dai2017deformable,huang2017learning,kirillov2020pointrend,liu2018dynamic,li2017not,verelst2020dynamic}. Specifically, in this paper, we focus on dynamic resolution. For image classification, \cite{uzkent2020learning} designs an algorithm to select high-resolution patches, assuming each patch is associated with a data acquisition cost. \cite{marin2019efficient} applies non-uniform downsampling to semantic segmentation and relies on the network to learn both the forward and backward mapping, whose consistency is not guaranteed. For object detection, a dynamic zoom-in algorithm is proposed that processes high-resolution patches sequentially \cite{gao2018dynamic}. However, sequential execution might not meet latency requirements for real-time applications. Most similar to our work, \cite{recasens2018learning} proposes an adaptive image sampling strategy that allocates more pixels for salient areas, allowing better downstream task performance. But the method only works for image classification and regression, where the output is agnostic to the input transformation.
\section{Approach}
Assume we are given a training set of image-label pairs $(I,L)$. We wish to learn a nonlinear deep predictor $f$ that produces a low loss $\mathcal L(f(I),L)$. Inspired by past work~\cite{recasens2018learning,jaderberg2015spatial}, we observe that certain labeling tasks can be performed more effectively by warping/resampling the input image. However, when the label $L$ itself is spatially defined (e.g., bounding box coordinates or semantic pixel labels), the label itself may need to be warped, or alternatively, the output of the deep predictor may need to be inverse-warped.
In this section, we first introduce the saliency-guided spatial transform from related work as the foundation of our method. Next, we introduce our solutions to address the challenges in image warping for object detection. An overview of FOVEA, our method, is shown in Fig~\ref{fig:teaser}.
\subsection{Background: Saliency-Guided Spatial Transform}
\label{sec:background}
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{Fig/warping2.pdf}
\caption{Image warps $\mathcal{W}_{\mathcal{T}}$ are commonly implemented via a backward map $\mathcal{T}^{-1}$ followed by (bilinear) interpolation of nearby source pixel grid values, since forward mapping $\mathcal{T}$ can result in target pixel positions that do not lie on the pixel grid (not shown). Though image warping is an extensively studied topic (notably by ~\cite{jaderberg2015spatial,recasens2018learning} in the context of differentiable neural warps), its effect on labels is less explored because much prior art focuses on global labels invariant to warps (e.g. an image class label). We explore warping for spatial prediction tasks whose output must be transformed back into the original image space to generate consistent output. Interestingly, transforming pixel-level labels with warp $\mathcal{W}_{\mathcal{T}^{-1}}$ requires inverting $\mathcal{T}^{-1}$, which can be difficult depending on its parameterization~\cite{beier1992feature}. In this paper, we focus on transforming pixel {\em coordinates} of bounding boxes, which requires only the already-computed backward map $\mathcal{T}^{-1}$ (the red arrow).
}
\label{fig:warping}
\end{figure}
The seminal work of spatial transformer networks (STN) introduces a differentiable warping layer for input images and feature maps \cite{jaderberg2015spatial}. It was later extended to incorporate a saliency map to guide the warping \cite{recasens2018learning}. Here we provide implementation details that are crucial to our method. Please refer to the original papers \cite{jaderberg2015spatial,recasens2018learning} for more details.
A 2D transformation can be written as:
\begin{align}
\mathcal{T}: (x, y) \to (x', y'),
\end{align}
where $(x, y)$ and $(x', y')$ are the input and output coordinates. Since image pixels are usually discrete, interpolation is required to sample values at non-integral coordinates. An image warp $\mathcal{W}_{\mathcal{T}}$ takes as input an image $I$, samples the pixel values according to the given transformation $\mathcal{T}$, and outputs the warped image $I'$:
\begin{align}
I'(\mathcal{T}(x, y)) = I(x,y)
\end{align}
Naive forward warping of discrete pixel locations from input $I$ can result in non-integral target pixel positions that need to be ``splatted" onto the pixel grid of $I$, which can produce artifacts such as holes. Instead, image warps are routinely implemented via a {\em backward map}~\cite{beier1992feature}: iterate over each output pixel grid location, compute its {\em inverse mapping} $\mathcal{T}^{-1}$ to find its corresponding input coordinates (which may be non-integral), and bilinearly interpolate its color from neighboring input pixel grid points:
\begin{align}
I'(x, y) = I(\mathcal{T}^{-1}(x, y))
\end{align}
{\em In other words, the implementation of $\mathcal{W}_{\mathcal{T}}$ only requires the knowledge of the inverse transformation $\mathcal{T}^{-1}$}. The pixel iteration can be replaced with a batch operation by using a grid generator and applying the transformation $\mathcal{T}^{-1}$ over the entire grid.
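For concreteness, the sketch below (our own illustration, not the implementation of \cite{jaderberg2015spatial,recasens2018learning}) applies a backward map given in input-image pixel coordinates with an off-the-shelf bilinear sampler; the tensor layout and the pixel-coordinate convention are assumptions:
\begin{verbatim}
# Sketch: backward warping with PyTorch. inv_map holds T^{-1}: for every
# output pixel, the (x, y) source location in the input image, in pixels.
# grid_sample expects coordinates normalized to [-1, 1].
import torch
import torch.nn.functional as F

def backward_warp(image, inv_map):
    # image: (B, C, H_in, W_in); inv_map: (B, H_out, W_out, 2) in pixels.
    _, _, h_in, w_in = image.shape
    grid = inv_map.clone()
    grid[..., 0] = 2.0 * grid[..., 0] / (w_in - 1) - 1.0
    grid[..., 1] = 2.0 * grid[..., 1] / (h_in - 1) - 1.0
    # Bilinear interpolation at the (generally non-integral) source points.
    return F.grid_sample(image, grid, mode='bilinear', align_corners=True)
\end{verbatim}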
STN uses a differentiable formulation of $\mathcal{T}^{-1}_{\theta}$ (parameterized by $\theta$) and an ensuing bilinear grid sampler, which is differentiable and parameter-free. \cite{recasens2018learning} proposes a special form of $\mathcal{T}^{-1}$ parameterized by a saliency map $S$: $\mathcal{T}^{-1}_{\theta} = \mathcal{T}^{-1}_{S}$. This transform has a convolution form (therefore fast) using the intuition that each pixel in the input space $(x,y)$ is attracting samples taken of the original image with a force $S(x,y)$, leading to more sampling at salient regions during the warp. {\em We point out that both \cite{jaderberg2015spatial} and \cite{recasens2018learning} ignore the effect of warping on the corresponding label space and they skip the modeling of the forward transform $\mathcal{T}$, which is required for converting certain label types.}
\subsection{Image Warping for Object Detection}
\label{sec:warping4det}
In this section, we first explain our high-level inference formulation, then our specific form of the warping, and in the end some adjustments for training the task network.
\paragraph{Inference formulation}
We visually lay out the space of image and label warps in
\ifunlimited
Fig~\ref{fig:warping}.
Recent methods for differentiable image warping assume labels are invariant under the warping (the first pathway in Fig~\ref{fig:warping}). For object detection, however, image warping clearly warps bounding box outputs. To produce consistent outputs (e.g., for computing bounding-box losses during learning), these warped outputs need to be transformed back into the original space (the second pathway in Fig~\ref{fig:warping}).
\else
Fig~\ref{fig:warping}, which shows that we have to transform the bounding box outputs accordingly after the warping.
\fi
Quite conveniently, because standard image warping is implemented via the backward map $\mathcal{T}^{-1}$, the backward map is already computed in-network and so can be directly applied to the pixel coordinates of the predicted bounding box.
The complete procedure for our approach $\hat{f}$ can be written as $\hat{f}(I, \mathcal{T})=\mathcal{T}^{-1}(f(\mathcal{W}_\mathcal{T}(I)))$, where $f(\cdot)$ is the nonlinear function that returns bounding box coordinates of predicted detections. Importantly, this convenience doesn't exist when warping pixel-level {\em values}; \eg, when warping a segmentation mask back to the original image input space (the third pathway in Fig~\ref{fig:warping}). Here, one needs to invert $\mathcal{T}^{-1}$ to explicitly compute the forward warp $\mathcal{T}$.
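A minimal sketch of this backward label mapping (our own illustration; the nearest-pixel lookup and the corner-format boxes are simplifying assumptions) simply reads the already-computed backward map at each predicted corner:
\begin{verbatim}
# Sketch: map warped-space detections back to the original image using the
# backward map. inv_map[r, c] gives T^{-1}(c, r) in original-image pixels;
# boxes are (x1, y1, x2, y2) in warped-image pixels.
import numpy as np

def unwarp_boxes(boxes_warped, inv_map):
    h, w, _ = inv_map.shape
    cols = np.clip(np.round(boxes_warped[:, [0, 2]]).astype(int), 0, w - 1)
    rows = np.clip(np.round(boxes_warped[:, [1, 3]]).astype(int), 0, h - 1)
    # Read off T^{-1} at each corner (nearest pixel for brevity; bilinear
    # interpolation of inv_map is a straightforward refinement).
    x1 = inv_map[rows[:, 0], cols[:, 0], 0]
    y1 = inv_map[rows[:, 0], cols[:, 0], 1]
    x2 = inv_map[rows[:, 1], cols[:, 1], 0]
    y2 = inv_map[rows[:, 1], cols[:, 1], 1]
    return np.stack([x1, y1, x2, y2], axis=1)
\end{verbatim}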
\paragraph{Warping formulation}
We adopt the saliency-guided warping formulation from \cite{recasens2018learning}:
\begin{align}
\mathcal{T}^{-1}_x(x,y) = \frac{\int_{x', y'} S(x', y') k((x,y), (x', y')) x'}{\int_{x', y'} S(x',y')k((x,y), (x',y'))}, \\
\mathcal{T}^{-1}_y(x,y) = \frac{\int_{x', y'} S(x', y') k((x,y), (x', y')) y'}{\int_{x', y'} S(x',y')k((x,y), (x',y'))},
\end{align}
where $k$ is a distance kernel (we use a Gaussian kernel in our experiments).
However, in this general form, axis-aligned bounding boxes might have different connotations in the original and warped space. To ensure axis-alignment is preserved during the mapping, we restrict the warping to be separable along the two dimensions, \ie, $\mathcal{T}^{-1}(x, y) = (\mathcal{T}^{-1}_x(x), \mathcal{T}^{-1}_y(y))$.
For each dimension, we adapt the previous formulation to 1D:
\begin{align}
\label{torralba warp}
\mathcal{T}^{-1}_x(x) = \frac{\int_{x'} S_x(x')k(x', x) x'}{\int_{x'} S_x(x')k(x, x')}, \\
\mathcal{T}^{-1}_y(y) = \frac{\int_{y'} S_y(y')k(y', y)y'}{\int_{y'} S_y(y')k(y, y')}.
\end{align}
We call this formulation \emph{separable} and the general form \emph{nonseparable}. Note that the nonseparable formulation has a 2D saliency map parameter, whereas the separable formulation has two 1D saliency maps, one for each axis.
Fig~\ref{fig:separable} shows an example of each type of warp.
One nice property of $\mathcal{T}^{-1}$ is that it is differentiable and thus can be trained with backpropagation. One limitation though is that its inverse $\mathcal{T}$ doesn't have a closed-form solution, nor does its derivative. The absence of $\mathcal{T}$ is not ideal, and we propose a workaround in the training formulation below.
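For reference, a discrete sketch of the 1D map in Eq~\ref{torralba warp} (our own illustration; the Gaussian kernel width \texttt{sigma} and the uniform coordinate grid are assumptions):
\begin{verbatim}
# Sketch: 1D saliency-guided backward map. For each output coordinate x,
# the source coordinate is a saliency-and-kernel-weighted average of all
# input coordinates x'.
import numpy as np

def separable_inverse_map(saliency_1d, sigma):
    n = len(saliency_1d)
    coords = np.arange(n, dtype=np.float64)
    # Gaussian attraction kernel k(x, x') on the 1D axis.
    k = np.exp(-0.5 * ((coords[:, None] - coords[None, :]) / sigma) ** 2)
    weights = k * saliency_1d[None, :]        # rows: output x, cols: x'
    return (weights * coords[None, :]).sum(1) / weights.sum(1)
\end{verbatim}
Running this routine on $S_x$ and $S_y$ and pairing the two resulting 1D maps on a meshgrid yields the full separable sampling grid.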
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{Fig/separable.png}
\caption{By restricting the general class of warps ({\bf left}, figure adapted from~\cite{recasens2018learning}) to be separable ({\bf right}), we ensure that bounding boxes in the warped image (examples outlined in red) remain axis-aligned. We demonstrate that such regularization (surprisingly) improves performance, even though doing so theoretically restricts the range of expressible warps (details in Sec~\ref{direct saliency experiments}).
}
\label{fig:separable}
\end{figure}
\paragraph{Anti-Cropping Regularization}
We find the convolution form of saliency-guided spatial transform tends to crop the images, which might be acceptable for image classification where a large margin exists around the border. However, any cropping in object detection creates a chance to miss objects. We solve this by using reflect padding on the saliency map while applying the attraction kernel in Eq~\ref{torralba warp}. This introduces symmetries about each of the edges of the saliency map, eliminating all horizontal offsets along vertical image edges and vice versa. Thus cropping is impossible under this formulation.
A 1D illustration is shown in Fig~\ref{fig:anticropping} to explain the problem and the solution.
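A sketch of the anti-cropping variant (our own illustration; the padding width is an assumption and should cover the kernel support):
\begin{verbatim}
# Sketch: reflect-pad the 1D saliency and extend the source axis the same
# way. Near an edge the padded mass is symmetric about that edge, so the
# weighted average there stays on the edge and nothing gets cropped.
import numpy as np

def separable_inverse_map_anticrop(saliency_1d, sigma, pad):
    s = np.pad(saliency_1d, pad, mode='reflect')
    n = len(saliency_1d)
    src = np.arange(-pad, n + pad, dtype=np.float64)  # extended source axis
    out = np.arange(n, dtype=np.float64)              # output (warped) axis
    k = np.exp(-0.5 * ((out[:, None] - src[None, :]) / sigma) ** 2)
    w = k * s[None, :]
    return (w * src[None, :]).sum(1) / w.sum(1)
\end{verbatim}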
\paragraph{Training formulation} Once we have the inference formulation, training is also straightforward as we require the loss $\loss$ to be computed in the original space: $\loss(\mathcal{Q}(f(\mathcal{W}_\mathcal{T}(I))), L)$, where $\mathcal{Q}$ is the label-type-specific backward mapping as shown in Fig~\ref{fig:warping}, and in our case, $\mathcal{Q} = \mathcal{T}^{-1}$. Note that $\mathcal{W}_\mathcal{T}$, $f$ and $\mathcal{T}^{-1}$ are all differentiable. While inference itself does not require the knowledge of $\mathcal{T}$, this is not the case for training detectors with region proposal networks (RPN) \cite{ren2015faster}.
When training RPNs \cite{ren2015faster}, the regression targets are the deltas between the anchors and the ground truth, and the deltas are later used in RoI Pooling/Align \cite{He2015SpatialPP,He2017MaskR}. The former should be computed in the original space (the ground truth is in the original space), while the latter is in the warped space (RoI Pooling/Align is over the warped image). This implies that the deltas need first to be learned in the original space, applied to the bounding box, and then mapped to the warped space using $\mathcal{T}$ for RoI Pooling/Align. But as discussed before, $\mathcal{T}$ cannot be easily computed. As a workaround,
we omit the delta encoding and adopt the Generalized IoU (GIoU) loss \cite{rezatofighi2019generalized} to account for the lost stability.
The main idea of GIoU is to
better reflect the similarity of predicted and ground truth bounding boxes in cases of zero intersection; this has been shown to improve results.
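For reference, a minimal sketch of the GIoU loss we substitute for the delta-encoded regression target (our own illustration of \cite{rezatofighi2019generalized}, not their reference code):
\begin{verbatim}
# Sketch: GIoU loss on corner-format boxes (x1, y1, x2, y2). The loss
# 1 - GIoU stays informative even when the boxes do not overlap.
import torch

def giou_loss(pred, target, eps=1e-7):
    ix1 = torch.max(pred[:, 0], target[:, 0])
    iy1 = torch.max(pred[:, 1], target[:, 1])
    ix2 = torch.min(pred[:, 2], target[:, 2])
    iy2 = torch.min(pred[:, 3], target[:, 3])
    inter = (ix2 - ix1).clamp(min=0) * (iy2 - iy1).clamp(min=0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    union = area_p + area_t - inter
    iou = inter / (union + eps)
    # Smallest axis-aligned box enclosing both prediction and target.
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    giou = iou - (cw * ch - union) / (cw * ch + eps)
    return (1.0 - giou).mean()
\end{verbatim}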
\subsection{KDE Saliency Generator}
\label{Saliency Generator}
In prior work \cite{jaderberg2015spatial,recasens2018learning}, saliency is supervised by the final task loss without intermediate supervision. We explore this for object detection in Sec~\ref{direct saliency experiments}, but find that
saliency maps for object detection are multi-modal and hard to learn.
In the streaming setting, predictions from the previous frame can serve as a strong intermediate signal for contextual priming. To learn a single attentional system that can generalize to frame-level or dataset-level priors, we describe an algorithmic approach for converting bounding boxes (be it from a dataset or the previous frame) to a saliency map.
\begin{figure}
\centering
\begin{subfigure}[h]{0.32\linewidth}
\centering
\includegraphics[width=1\linewidth]{Fig/warp1d_global_noanticrop.png}
\subcaption{\scriptsize Default, $\sigma\approx 5.5$}
\end{subfigure}
\begin{subfigure}[h]{0.32\linewidth}
\centering
\includegraphics[width=1\linewidth]{Fig/warp1d_global_anticrop.png}
\scriptsize
\subcaption{\scriptsize Anti-crop, $\sigma\approx 5.5$}
\end{subfigure}
\begin{subfigure}[h]{0.32\linewidth}
\centering
\includegraphics[width=1\linewidth]{Fig/warp1d_local_anticrop.png}
\scriptsize
\subcaption{\scriptsize Anti-crop, $\sigma\approx 1.7$}
\end{subfigure}
\caption{Saliency-guided transform illustrated in 1D. The red curve is a saliency map $S$. The bottom row of dots are the output points (at uniform intervals), and the top row of dots are the locations where we've sampled each output point from the original ``image", as computed by applying $\mathcal{T}^{-1}_S$ to the output points. (a) The default transform can be understood as a weighted average over the output points and thus ignores points with near zero weights such as those at the boundaries. (b) Note the effects of introducing anti-crop reflect padding, and (c) how decreasing the std dev $\sigma$ of the attraction kernel $k$ results in more local warping around each peak (better for multimodal saliency distributions).}
\label{fig:anticropping}
\end{figure}
To this end, we use kernel density estimation (KDE) with the bounding boxes as the data points. More precisely, given a set of bounding boxes $B$ with centers $c_i$, heights $h_i$ and widths $w_i$, we model the saliency map $S_B$ as a sum of normal distributions:
\begin{align}
S_B^{a, b} =
\frac{1}{K^2} + a\sum_{(c_i, w_i, h_i) \in B} \mathcal N\left(c_i, b
\begin{bmatrix}
w_i & 0 \\
0 & h_i
\end{bmatrix}
\right) \label{eq:kde}
\end{align}
where $a$ and $b$ are hyperparameters for amplitude and bandwidth, respectively, and $K$ is the size of the attraction kernel $k$ in Eq~\ref{torralba warp}. Adding the small constant is done to prevent extreme warps. We then normalize the 2D saliency map such that it sums to 1 and marginalize along the two axes if using the separable formulation\footnote{When using the separable formulation, we could instead skip the intermediate 2D saliency map representation. However, we opt not to, because the intermediate 2D saliency map produces more interpretable visualizations, and the difference in runtime is negligible.}.
As laid out in the previous section, this is then used to generate the image transformation $\mathcal T^{-1}_S$ according to Eq~\ref{torralba warp}.
Once we have the saliency generator defined, one can either apply it to the previous frame's predictions to obtain a frame-specific temporal prior (denoted as $S_I$), or to the set of all bounding boxes in the training set to obtain a dataset-wide prior (denoted as $S_D$). In the former case, we note that the KDE formulation effectively foveates the image at each of the previous frame's detections. For the first frame in each video sequence, this trivially defaults to the uniform saliency map. In the latter case, we note that for datasets like Argoverse-HD, the horizon tends to be in the center of the image, and thus objects are more likely to appear there.
We also try combining these signals to capture both biases. The resulting saliency map is $S_C = \alpha\cdot S_I + (1-\alpha)\cdot S_D$, where $\alpha$ is a hyperparameter for how much we trust the temporal bias.
All of the above saliency generators are differentiable, so we can use the final task loss to learn our hyperparameters $a$ and $b$.
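A minimal sketch of this generator (our own illustration; the grid size, the box format, and the Gaussian normalization are assumptions) evaluates Eq~\ref{eq:kde} on a coarse grid, normalizes it, and marginalizes it for the separable warp:
\begin{verbatim}
# Sketch: KDE saliency from boxes. Boxes are (cx, cy, w, h) already scaled
# to grid units; kernel_size is K, the size of the attraction kernel k.
import numpy as np

def kde_saliency(boxes, grid_h, grid_w, a, b, kernel_size):
    ys, xs = np.mgrid[0:grid_h, 0:grid_w].astype(np.float64)
    s = np.full((grid_h, grid_w), 1.0 / kernel_size ** 2)
    for cx, cy, w, h in boxes:
        var_x, var_y = b * w, b * h       # covariance b * diag(w, h)
        g = np.exp(-0.5 * ((xs - cx) ** 2 / var_x + (ys - cy) ** 2 / var_y))
        s += a * g / (2.0 * np.pi * np.sqrt(var_x * var_y))
    s /= s.sum()                          # normalize to a distribution
    return s, s.sum(axis=0), s.sum(axis=1)   # 2D map and its two marginals
\end{verbatim}
The combined prior $S_C$ is then simply an $\alpha$-weighted average of the $S_I$ and $S_D$ maps produced this way.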
\section{Experiments}
In this section, we first show on the autonomous driving dataset Argoverse-HD that FOVEA greatly improves accuracy over naive downsampling. Next, we conduct cost-performance analysis under streaming perception showing that accuracy gain is worth the additional latency cost. Finally, we present results on BDD100K showing the generalization capability of our method. We include
additional results, diagnostic experiments, and implementation details
in the
appendix.
\subsection{Object Detection for Autonomous Navigation}
Argoverse-HD \cite{Li2020StreamingP}, the primary dataset we evaluate our methods on, is an object detection dataset captured in an embodied autonomous vehicle setting. The data contain 30 FPS video sequences and dense 2D bounding box annotations.
As is common practice for detection, we adopt AP as our primary evaluation metric.
We also report {\em end-to-end} latency (including image preprocessing, network inference, and bounding box postprocessing) measured on a single GTX 1080 Ti GPU. The image resolution for this dataset is $1920 \times 1200$, much larger than COCO's, which is capped at $640$. Since all models used in this paper are fully convolutional, we run them with different input scales, denoted by ratios to the native resolution, \eg, $0.5$x means an input resolution of $960 \times 600$.
\subsubsection{Baseline and Setup}
\label{Baseline and Setup}
The baseline we compare to throughout our experiments is Faster R-CNN \cite{ren2015faster} with a ResNet-50 backbone \cite{he2016deep} plus FPN \cite{lin2017feature}. The default input scale for both the baseline and our method is $0.5$x.
For the baseline, however, we additionally train and test at $0.75$x and $1$x scales, to derive a sense of the latency-accuracy tradeoff using this model. Our contribution is orthogonal to the choice of the baseline detector and we obtain similar results with other detectors including RetinaNet \cite{lin2017focal} and YOLOF \cite{chen2021you} (shown in Appendix~\ref{app:other-det}). Additionally, we compare against other zoom-based approaches \cite{recasens2018learning,gao2018dynamic} in Appendix~\ref{app:other-baselines}.
Notably, Argoverse-HD's training set only contains {\em pseudo ground truth} (at the time of paper submission) generated by running high-performing detector HTC \cite{chen2019hybrid} in the offline setting. For all experiments, unless otherwise stated, we train on the train split with pseudo ground truth annotations, and evaluate on the val split with real annotations.
Additional measures are taken to prevent overfitting to biased annotations. We finetune COCO pretrained models
on Argoverse-HD for only $3$ epochs (\ie, early stopping). We use momentum SGD with a batch size of $8$, a learning rate of $0.02$, $0.9$ momentum, $10^{-4}$ weight decay, and a step-wise linear learning rate decay for this short schedule \cite{Li2020BudgetTrain}. Also, when training detectors with warped input, we apply our modifications to RPN and the loss function as discussed in Sec~\ref{sec:warping4det}.
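For clarity, this recipe corresponds roughly to the following optimizer setup (a plain-PyTorch sketch; the actual experiments use the equivalent mmdetection configuration, and the per-iteration linear decay is our reading of the step-wise schedule):
\begin{verbatim}
# Sketch: SGD with momentum and a linearly decaying learning rate over the
# short 3-epoch finetuning schedule.
import torch

def build_optimizer(model, iters_per_epoch, epochs=3):
    opt = torch.optim.SGD(model.parameters(), lr=0.02, momentum=0.9,
                          weight_decay=1e-4)
    total = epochs * iters_per_epoch
    sched = torch.optim.lr_scheduler.LambdaLR(
        opt, lambda it: max(0.0, 1.0 - it / float(total)))
    return opt, sched   # call sched.step() once per training iteration
\end{verbatim}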
\subsubsection{Learned Saliency}
\label{direct saliency experiments}
Our first experiments directly try to learn saliency without using bounding box priors and under supervision from just the final task loss. This serves as a control for our later experiments using the KDE saliency generator formulation introduced in Sec~\ref{Saliency Generator}.
In our first formulation, the saliency map is a parameter of our network and learned through backpropagation, yielding a learned dataset-wide fixed warp. We test both the separable and nonseparable versions of this and report results in Tab~\ref{final-table}. Training configuration and implementation details are given in Appendix~\ref{app:impl-details}.
\begin{figure}[t]
\centering
\includegraphics[width=0.49\linewidth]{Fig/directsep.jpg}
\includegraphics[width=0.49\linewidth]{Fig/directnonsep.jpg}
\caption{The learned direct separable ({\bf left}) and nonseparable ({\bf right}) dataset-wide warps. Despite the vastly greater flexibility of nonseparable warps, the learned warp is almost separable anyway.}
\label{fig:direct-visuals}
\end{figure}
We find that both separable and nonseparable methods significantly improve overall AP over the baseline, owing to the boosted performance on small objects. However, there is also a small decrease in AP on large objects. Interestingly, even though the nonseparable formulation is more flexible than the separable formulation, it performs worse, showing that the model struggles to learn in this parameter space. Also, as shown in Fig~\ref{fig:direct-visuals}, the final learned nonseparable warp is actually uncannily close to being separable, suggesting that the separable class of warps might be preferred anyway. Therefore, going forward, we choose the separable warp formulation in our later experiments.
Following the lead of \cite{recasens2018learning}, we also try learning a ``saliency network" that maps each input image to its saliency map via a ResNet-18 backbone \cite{he2016deep}. In this sense, the learned saliency map would adapt to each image. However, we find that this approach doesn't translate well to object detection in that it makes training very unstable. From our experiments, even with a learning rate of $10^{-5}$ on the saliency network, the model learns a degeneracy in which an extreme warp leads to no proposals being matched with ground truth bounding boxes in the RoI bounding box head, leading to a regression loss of $0$.
\subsubsection{KDE Saliency Generator}
In this section, we attempt to use bounding box detections to guide our saliency using the KDE formulation introduced in Sec~\ref{Saliency Generator}.
We first try manually tuning the amplitude $a$ and bandwidth $b$ to obtain the desired magnifications.
We find that an amplitude $a=1$ and a bandwidth $b=64$ work best, paired with an attraction kernel with a standard deviation of about $17.8\%$ of the image height, which allows for more local warps as illustrated in Fig \ref{fig:anticropping}. We finetune our models using the same configuration as the baseline, the only difference being the added bounding box and saliency-guided spatial transformation layer. To simplify training, we use jittered ground truth bounding boxes from the current frame rather than detections from the previous frame. We implement the $S_I$ formulation, which uses just the bounding box detections from the previous frame, the $S_D$ formulation, which uses all bounding boxes in the training set to generate a fixed dataset-wide prior, and the $S_C$ formulation with $\alpha=0.5$.
We then attempt to learn hyperparameters $a$ and $b$ through backpropagation, since our KDE formulation is differentiable. We initialize parameters $a'$ and $b'$ to 0, under the construction that $a=|1+a'|+0.1, b=64\cdot |1+b'|+0.1$.
The learning rate of $a'$ and $b'$ is set to $10^{-4}$ with zero weight decay. Other than this, we train the learned KDE (LKDE) model with the same configuration as the baseline. We implement the $S_I$ formulation.
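A minimal sketch of this reparameterization (our own illustration; the module and attribute names are hypothetical):
\begin{verbatim}
# Sketch: learnable KDE amplitude and bandwidth. Raw parameters start at
# zero; the mapping keeps a and b positive and initializes them near the
# hand-tuned values of 1 and 64.
import torch

class KDEParams(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.a_raw = torch.nn.Parameter(torch.zeros(()))
        self.b_raw = torch.nn.Parameter(torch.zeros(()))

    def forward(self):
        a = torch.abs(1.0 + self.a_raw) + 0.1
        b = 64.0 * torch.abs(1.0 + self.b_raw) + 0.1
        return a, b
\end{verbatim}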
All results are shown in Table~\ref{final-table}. Even without finetuning our detector, using a simple fixed dataset-wide warp $S_D$, we find significant improvements in AP. As we switch to temporal bias and finetune, we see even more improvement. As in the learned saliency case, these improvements in overall AP are due to large boosts in AP$_S$, outweighing the small decreases in AP$_L$. Combining our saliency signals ($S_C$) doesn't help, because in our case, it seems that the temporal signal is strictly stronger than the dataset-wide signal. Perhaps if we had an alternate source of saliency like a map overlay, combining saliencies could help. Our best method overall is LKDE, which learned optimal values $a=1.07, b=71.6$.
Learning a nonseparable saliency performs better than our hand-constructed dataset-wide warp $S_D$; however, they're both outperformed by $S_I$. Finally, we note that our increased performance comes at the cost of only about $2$ ms in latency.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\linewidth]{Fig/main_kde_vis_small2.pdf}
\vspace{-0.5em}
\caption{Qualitative results for our methods after finetuning on Argoverse-HD. The cars in the distance (in the dotted boxes), undetected at $0.5$x scale, are detected at $1$x scale, and partially detected by our methods. Different rows show the variations within our method based on the source of attention.
}
\label{fig:kde-visuals2}
\end{figure}
\begin{table*}[ht]
\centering
Argoverse-HD before finetuning\\
\begin{adjustbox}{width=1.0\linewidth,center}
\begin{tabular}{@{}lccccccccccccccc@{}}
\toprule
Method & AP & AP$_{50}$ & AP$_{75}$ & AP$_{S}$ & AP$_{M}$ & AP$_{L}$ & person & mbike & tffclight & bike & bus & stop & car & truck & Latency (ms) \\
\midrule
Baseline & 21.5 &35.8 &22.3 &2.8 &22.4 &\textbf{50.6} &20.8 &9.1 &13.9 &7.1 &48.0 &16.1 &37.2 &20.2& 49.4 $\pm$ 1.0 \\
\midrule
KDE ($S_D$) & 23.3 &40.0 &22.9 &5.4 &25.5 &48.9 &20.9 &13.7& 12.2 &9.3& \textbf{50.6}& 20.1& 40.0& 19.5& 52.0 $\pm$ 1.0 \\
KDE ($S_I$) &\textbf{24.1} &\textbf{40.7} &\textbf{24.3} &\textbf{8.5} &24.5 &48.3 &\textbf{23.0} &\textbf{17.7} &\textbf{15.1} &\textbf{10.0} &49.5 &17.5 &\textbf{41.0} &19.4& 51.2 $\pm$ 0.7\\
KDE ($S_C$) & 24.0 &40.5& \textbf{24.3}& 7.4& \textbf{26.0}& 48.2 &22.5 &14.9 &14.0 &9.5 &49.7 &\textbf{20.6} &\textbf{41.0} &\textbf{19.9} &52.0 $\pm$ 1.2 \\
\midrule
Upp. Bound ($0.75$x) & 27.6 &45.1 &28.2 &7.9 &30.8 &51.9 &29.7 &14.3 &21.5 &6.6 &54.4 &25.6 &44.7 &23.7& 86.9 $\pm$ 1.6\\
Upp. Bound ($1.0$x) &32.7 &51.9 &34.3 &14.4 &35.6 &51.8 &33.7 &21.1 &33.1 &5.7 &57.2 &36.7 &49.5 &24.6& 133.9 $\pm$ 2.2 \\
\bottomrule
\end{tabular}
\end{adjustbox}
Argoverse-HD after finetuning\\
\begin{adjustbox}{width=1.0\linewidth,center}
\begin{tabular}{@{}lccccccccccccccc@{}}
\toprule
Method & AP & AP$_{50}$ & AP$_{75}$ & AP$_{S}$ & AP$_{M}$ & AP$_{L}$ & person & mbike & tffclight & bike & bus & stop & car & truck & Latency (ms) \\
\midrule
Baseline &24.2 &38.9 &26.1 &4.9 &29.0 &50.9 &22.8 &7.5 &23.3 &5.9 &44.6 &19.3 &43.7 &26.6 & 50.9 $\pm$ 0.9\\
\midrule
Learned Sep. & 27.2&44.8&28.3&\textbf{12.2}&29.1&46.6&24.2& 14.0 &22.6& 7.7 &39.5& \textbf{31.8}& 50.0& 27.8&51.5 $\pm$ 1.0\\
Learned Nonsep. & 25.9&42.9&26.5&10.0&28.4&48.5&25.2 &11.9 &20.9& 7.1& 39.5& 25.1& 49.4& 28.1&50.0 $\pm$ 0.8\\
\midrule
KDE ($S_D$) & 26.7 &43.3& 27.8& 8.2 &29.7& 54.1 &25.4& 13.5& 22.0 &8.0 &\textbf{45.9}& 21.3& 48.1 &29.3& 50.8 $\pm$ 1.2\\
KDE ($S_I$) & 28.0 & 45.5 &\textbf{29.2} &10.4 &\textbf{31.0} &\textbf{54.5} &27.3 &16.9 &\textbf{24.3} &\textbf{9.0} &44.5 &23.2 &\textbf{50.5} &28.4 & 52.2 $\pm$ 0.9 \\
KDE ($S_C$) &27.2& 44.7& 28.4 &9.1 &30.9& 53.6 &27.4 &14.5& 23.0& 7.0 &44.8 &21.9& 49.9& \textbf{29.5} &52.1 $\pm$ 0.9 \\
\midrule
LKDE ($S_I$) & \textbf{28.1} &\textbf{45.9} &28.9 &10.3& 30.9 &54.1 &\textbf{27.5}& \textbf{17.9}& 23.6& 8.1 &45.4 &23.1& 50.2& 28.7 &50.5 $\pm$ 0.8 \\
\midrule
Upp. Bound ($0.75$x) & 29.2 &47.6 &31.1 &11.6 &32.1 &53.3 &29.6 &12.7 &30.8 &7.9 &44.1 &29.8 &48.8 &30.1& 87.0 $\pm$ 1.4 \\
Upper Bound ($1.0$x) & 33.3 & 53.9 & 35.0 & 16.8 & 34.8 & 53.6 & 33.1 & 20.9 & 38.7 & 6.7 & 44.7 & 36.7 & 52.7 & 32.7 & 135.0 $\pm$ 1.6 \\
\bottomrule
\end{tabular}
\end{adjustbox}
\vspace{-0.5em}
\caption{Results before and after finetuning on Argoverse-HD. Without retraining, processing warped images (KDE $S_I$, top table) improves overall AP by 2.6 points and triples AP$_S$.
Even larger gains can be observed after finetuning, making our final solution (LKDE $S_I$) perform close to the $0.75$x upper bound.
Please refer to the text for a more detailed discussion.}
\label{final-table}
\end{table*}
\subsection{Streaming Accuracy for Cost-Performance Evaluation}
\label{sec:streaming}
{\em Streaming accuracy} is a metric that coherently integrates latency into standard accuracy evaluation and therefore is able to {\em quantitatively measure the accuracy-latency tradeoff} for embodied perception \cite{Li2020StreamingP}.
\ifunlimited
Such a setup is achieved by having the benchmark stream the data to the algorithm in real time and query for the state of the world at all time instants. One of their key observations is that by the time the algorithm finishes processing, the world around it has already changed, and therefore proper temporal scheduling and forecasting methods should be used to compensate for this latency.
\else
\fi
Here we adopt their evaluation protocol for our cost-performance analysis. In our case of streaming object detection, the streaming accuracy refers to {\em streaming AP}. We use the same GPU (GTX 1080 Ti) and their publicly available codebase for a fair comparison with their proposed solution. Their proposed solution includes a scale-tuned detector (Faster R-CNN), a dynamic scheduler (shrinking-tail), and a Kalman filter forecaster. Our experiments focus on improving the detector and we keep the scheduler and forecaster fixed.
Tab~\ref{tab:streaming-full} presents our evaluation under the full-stack setting (a table for the detection-only setting is included in Appendix~\ref{app:det-streaming}).
We see that FOVEA greatly improves the previous state-of-the-art. The improvement first comes from a faster and slightly more accurate implementation of the baseline (please refer to Appendix~\ref{app:impl-details} for the implementation details). Note that under streaming perception, a faster algorithm while maintaining the same offline accuracy translates to an algorithm with higher streaming accuracy. The second improvement is due to training on pseudo ground truth (discussed in Sec~\ref{Baseline and Setup}). Importantly, our KDE image warping further boosts the streaming accuracy significantly {\em on top of} these improvements. Overall, these results suggest that {\em image warping is a cost-efficient way to improve accuracy}.
\begin{table}[t]
\small
\centering
\begin{tabular}{clcccc}
\toprule
ID & Method & AP & AP$_S$ & AP$_M$ & AP$_L$ \\
\midrule
1 & Prior art \cite{Li2020StreamingP} & 17.8 & 3.2 & 16.3 & 33.3 \\
\midrule
2 & + Better implementation & 19.3 & 4.1 & 18.3 & 34.9 \\
3 & + Train with pseudo GT & 21.2 & 3.7 & 23.9 & 43.8 \\
\midrule
4 & 2 + Ours ($S_I$) & 19.3 & 5.2 & 18.5 & 39.0 \\
5 & 3 + Ours ($S_I$) & \textbf{23.0} & \textbf{7.0} & \textbf{23.7} & \textbf{44.9} \\
\bottomrule
\end{tabular}
\vspace{-0.5em}
\caption{Streaming evaluation in the full-stack (with forecasting) setting on Argoverse-HD. We show that our proposed method significantly improves the previous state of the art by 5.2 points, of which 1.5 comes from a better implementation, 1.9 from making use of pseudo ground truth, and 1.8 from our proposed KDE warping.}
\label{tab:streaming-full}
\end{table}
\subsection{Cross-Dataset Generalization}
\begin{table}[]
\small
\centering
\begin{tabular}{clcccccc}
\toprule
ID & Method & AP & AP$_S$ & AP$_M$ & AP$_L$ \\
\midrule
1 & Baseline ($0.5$x) & 15.1 & 1.0 & 10.6 & \textbf{39.0} \\
2 & Ours $S_D$ ($0.5$x) & 13.7 & 1.3 & 10.0 & 34.7 \\
3 & Ours $S_I$ ($0.5$x) & \textbf{16.4} & \textbf{2.1} & \textbf{12.8} & 38.6 \\
\midrule
4 & Baseline ($0.75$x) & 19.7 & 3.0 & 16.1 & \textbf{44.2} \\
5 & Ours $S_D$ ($0.75$x) & 18.2 & 3.4 & 15.4 & 40.0 \\
6 & Ours $S_I$ ($0.75$x) & \textbf{20.1} & \textbf{5.2} & \textbf{17.0} & 42.5 \\
\midrule
7 & Upper bound ($1.0$x) & 22.6 & 5.7 & 20.1 & 45.7 \\
\bottomrule
\end{tabular}
\vspace{-0.5em}
\caption{Cross-dataset generalization to BDD100K \cite{bdd100k}. Rows 2 \& 5 use saliency computed on the Argoverse-HD training set; as expected, it fails to generalize to a novel dataset. Despite operating at a larger temporal stride (5 FPS vs 30 FPS), our proposed image-adaptive KDE warping generalizes well (rows 3 \& 6). Note that here the native image resolution is smaller at $1280 \times 720$.}
\label{tab:bdd}
\end{table}
Our experiments so far are all conducted on the Argoverse-HD dataset. In this section, we cross-validate our proposed method on another autonomous driving dataset, BDD100K \cite{bdd100k}. Note that BDD100K and Argoverse-HD are collected in different cities. For simplicity, we only test {\em off-the-shelf} generalization without any finetuning. We experiment on the validation split of the MOT2020 subset, which contains 200 videos with 2D bounding boxes annotated at 5 FPS (40K frames in total). Also, we only evaluate on the classes common to BDD100K and Argoverse-HD: person, bicycle, car, motorcycle, bus, and truck. The results are summarized in Tab~\ref{tab:bdd} and demonstrate the generalization capability of our proposed method.
\section{Conclusion}
We propose FOVEA, a highly efficient attentional model for object detection. Our model magnifies regions likely to contain objects, making use of top-down saliency priors learned from a dataset or from temporal context.
To do so, we make use of differentiable image warping that ensures bounding box predictions can be mapped back to the original image space.
The proposed approach significantly improves over the baselines on Argoverse-HD and BDD100K.
For future work, it would be natural to make use of trajectory forecasting models to provide even more accurate saliency maps for online processing.
In recent years, drug discovery has drawn increasing interest in the machine learning community. Among many challenges therein, how to discriminatively represent a molecule with a vectorized embedding remains a fundamental yet open challenge. The underlying problem can be decomposed into two components: how to design a common latent space for molecule graphs ({\em{i.e.}}, designing a suitable encoder) and how to construct an objective function to supervise the training ({\em{i.e.}}, defining a learning target). Falling broadly into the second category, our paper studies self-supervised molecular representation learning by leveraging the consistency between 3D geometry and 2D topology.
Motivated by the prominent success of the pretraining-finetuning pipeline~\citep{devlin2018bert}, unsupervised pre-training of graph neural networks for molecules yields promising performance on downstream tasks and has become increasingly popular~\citep{velivckovic2018deep,sun2019infograph,liu2018n,hu2019strategies,You2020GraphCL,you2021graph}. The key to pre-training lies in finding an effective proxy task ({\em{i.e.}}, training objective) to leverage the power of large unlabeled datasets. Inspired by the observation~\citep{schutt2017schnet,liu2018n,liu2021spherical} that molecular properties~\citep{gilmer2017neural,liu2018n} can be better predicted from 3D geometry due to its encoded energy knowledge, we aim to make use of the 3D geometry of molecules in pre-training. However, the stereochemical structures are often very expensive to obtain, making such 3D geometric information scarce in downstream tasks. To address this problem, we propose the Graph Multi-View Pre-training (GraphMVP) framework, where a 2D molecule encoder is pre-trained with the knowledge of 3D geometry and then fine-tuned on downstream tasks without 3D information. Our learning paradigm, during pre-training, injects the knowledge of 3D molecular geometry into a 2D molecular graph encoder such that the downstream tasks can benefit from the implicit 3D geometric prior even if no 3D information is available.
We attain the aforementioned goal by leveraging two pretext tasks on the 2D and 3D molecular graphs: one contrastive and one generative SSL. Contrastive SSL creates the supervised signal at an \textbf{inter-molecule} level: the 2D and 3D graph pairs are positive if they are from the same molecule, and negative otherwise; contrastive SSL~\citep{wang2020understanding} then aligns the positive pairs and contrasts the negative pairs simultaneously. Generative SSL~\citep{vincent2008extracting,kingma2013auto,higgins2016beta}, on the other hand, obtains the supervised signal in an \textbf{intra-molecule} way: it learns a 2D/3D representation that can reconstruct its 3D/2D counterpart view for each molecule itself.
To cope with the challenge of measuring the quality of reconstruction in the molecular 2D and 3D spaces, we further propose a novel surrogate objective function called variational representation reconstruction (VRR) for the generative SSL task, which can effectively measure such quality in the continuous representation space. The knowledge acquired by these two SSL tasks is complementary, so our GraphMVP\ framework integrates them to form a more discriminative 2D molecular graph representation.
Consistent and significant performance improvements empirically validate the effectiveness of GraphMVP.
We give additional insights to justify the effectiveness of GraphMVP. First, GraphMVP\, is a self-supervised learning approach based on maximizing mutual information (MI) between 2D and 3D views, enabling the learnt representation to capture high-level factors~\citep{belghazi2018mutual,tschannen2019mutual,bachman2019learning} in molecule data. Second, we find that 3D molecular geometry is a form of privileged information~\cite{vapnik2009new,vapnik2015learning}. It has been proven that using privileged information in training can accelerate learning. We note that privileged information is only used in training and is not available in testing. This perfectly matches our intuition of pre-training molecular representations with 3D geometry.
Our contributions include
(1) To our best knowledge, we are the first to incorporate the 3D geometric information into graph SSL;
(2) We propose one contrastive and one generative SSL tasks for pre-training. Then we elaborate their difference and empirically validate that combining both can lead to a better representation;
(3) We provide theoretical insights and case studies to justify why adding 3D geometry is beneficial;
(4) We achieve the SOTA performance among all the SSL methods.
\textbf{Related work.}
We briefly review the most related works here and include a more detailed summarization in~\Cref{appendix:related_work}. Self-supervised learning (SSL) methods have attracted massive attention to graph applications~\citep{liu2021graph,xie2021self,wu2021self,liu2021self}. In general, there are roughly two categories of graph SSL: contrastive and generative, where they differ on the design of the supervised signals. Contrastive graph SSL~\citep{velivckovic2018deep,sun2019infograph,hu2019strategies,You2020GraphCL,you2021graph} constructs the supervised signals at the \textbf{inter-graph} level and learns the representation by contrasting with other graphs, while generative graph SSL~\citep{hamilton2017inductive,liu2018n,hu2019strategies,hu2020gpt} focuses on reconstructing the original graph at the \textbf{intra-graph} level. One of the most significant differences that separate our work from existing methods is that all previous methods \textbf{merely} focus on 2D molecular topology. However, for scientific tasks such as molecular property prediction, 3D geometry should be incorporated as it provides complementary and comprehensive information~\citep{schutt2017schnet,liu2021spherical}. To fill this gap, we propose GraphMVP to leverage the 3D geometry in graph self-supervised pre-training.
\section{Preliminaries} \label{sec:preliminary}
We first outline the key concepts and notations used in this work. Self-supervised learning (SSL) is based on the \textit{view} design, where each view provides a specific aspect and modality of the data. Each molecule has two natural views: the 2D graph incorporates the topological structure defined by the adjacency, while the 3D graph can better reflect the geometry and spatial relation. From a chemical perspective, 3D geometric graphs focus on the \textit{energy} while 2D graphs emphasize the \textit{topological} information; thus they can be composed for learning more informative representation in GraphMVP. \textit{Transformation} is an atomic operation in SSL that can extract specific information from each view. Next, we will briefly introduce how to represent these two views.
\textbf{2D Molecular Graph} represents molecules as 2D graphs, with atoms as nodes and bonds as edges respectively. We denote each 2D graph as $g_{\text{2D}} = (X, E)$, where $X$ is the atom attribute matrix and $E$ is the bond attribute matrix. Notice that here $E$ also includes the bond connectivity. Then we will apply a transformation function $T_{\text{2D}}$ on the topological graph. Given a 2D molecular graph $g_{\text{2D}}$, its representation $h_{2D}$ can be obtained from a \textit{2D graph neural network (GNN)} model:
\begin{equation} \label{eq:2d_gnn}
h_{\text{2D}} = \text{GNN-2D}(T_{\text{2D}}(g_{\text{2D}})) = \text{GNN-2D}(T_{\text{2D}}(X, E)).
\end{equation}
\textbf{3D Molecular Graph} additionally includes the spatial positions of the atoms, which need not be static since atoms are in continual motion on \textit{a potential energy surface}~\citep{axelrod2020geom}.\footnote{A more rigorous way of defining a conformer is in~\citep{moss1996basic}: a conformer is an isomer of a molecule that differs from another isomer by the rotation of a single bond in the molecule.} The 3D structures at the local minima on this surface are named \textit{conformers}. As molecular properties depend on conformer ensembles~\citep{hawkins2017conformation}, GraphMVP\, provides a novel perspective on adopting 3D conformers for learning better representations. Given a conformer $g_{\text{3D}} = (X, R)$, its representation via a \textit{3D GNN} model is:
\begin{equation} \label{eq:3d_gnn}
h_{\text{3D}} = \text{GNN-3D}(T_{\text{3D}}(g_{\text{3D}})) = \text{GNN-3D}(T_{\text{3D}}(X, R)),
\end{equation}
where $R$ is the 3D-coordinate matrix and $T_{\text{3D}}$ is the 3D transformation.
In what follows, for notation simplicity, we use ${\bm{x}}$ and ${\bm{y}}$ for the 2D and 3D graphs, {\em{i.e.}}, ${\bm{x}} \triangleq g_{\text{2D}}$ and ${\bm{y}} \triangleq g_{\text{3D}}$. Then the latent representations are denoted as $h_{\bm{x}}$ and $h_{\bm{y}}$.
\section{GraphMVP: Graph Multi-View Pre-training} \label{sec:method}
Our model, termed as Graph Multi-View Pre-training (GraphMVP), conducts self-supervised learning (SSL) pre-training with 3D information. The 3D conformers encode rich information about the molecule energy and spatial structure, which are complementary to the 2D topology. Thus, applying SSL between the 2D and 3D views will provide a better 2D representation, which implicitly embeds the ensembles of energies and geometric information for molecules.
In the following, we first present an overview of GraphMVP, and then introduce two pretext tasks specialized concerning 3D conformation structures. Finally, we summarize a broader graph SSL family that prevails the 2D molecular graph representation learning with 3D geometry.
\subsection{Overview of GraphMVP} \label{sec:overview}
\begin{figure}[ht]
\centering
\vspace{-2ex}
\includegraphics[width=\textwidth]{figures/Diagram_Bundle_4.pdf}
\vspace{-2ex}
\caption{
Overview of the pre-training stage in GraphMVP. The black dashed circles denote subgraph masking, and we mask the same region in the 2D and 3D graphs. Multiple views of the molecules (herein: Halicin) are mapped to the representation space via 2D and 3D GNN models, where we conduct GraphMVP\ for SSL pre-training, using both contrastive and generative pretext tasks.
}
\label{fig:both_ssl}
\end{figure}
In general, GraphMVP\, exerts 2D topology and 3D geometry as two complementary views for each molecule. By performing SSL between these views, it is expected to learn a 2D representation enhanced with 3D conformation, which can better reflect certain molecular properties.
As generic SSL pre-training pipelines, GraphMVP\, has two stages: pre-training then fine-tuning. In the pre-training stage, we conduct SSL via auxiliary tasks on data collections that provide both 2D and 3D molecular structures. During fine-tuning, the pre-trained 2D GNN models are subsequently fine-tuned on specific downstream tasks, where only 2D molecular graphs are available.
At the SSL pre-training stage, we design two pretext tasks: one contrastive and one generative. We conjecture and then empirically prove that these two tasks focus on different learning aspects, which are summarized into the following two points. (1) From the perspective of representation learning, contrastive SSL utilizes \textbf{inter-data} knowledge and generative SSL utilizes \textbf{intra-data} knowledge. For contrastive SSL, one key step is to obtain the negative view pairs for inter-data contrasting; while generative SSL focuses on each data point itself, by reconstructing the key features at an intra-data level. (2) From the perspective of distribution learning, contrastive SSL and generative SSL learn the data distribution in a \textbf{local} and \textbf{global} manner, respectively. Contrastive SSL learns the distribution locally by contrasting the pairwise distances at an inter-data level. Thus, with a sufficient number of data points, the local contrastive operation can iteratively recover the data distribution. Generative SSL, on the other hand, learns the global data density function directly.
Therefore, contrastive and generative SSL are essentially conducting representation and distribution learning with different intuitions and disciplines, and we expect that combining both can lead to a better representation. We later carry out an ablation study (\Cref{sec:experiment_each_loss_component}) to verify this empirically.
In addition, to make the pretext tasks more challenging, we take views for each molecule by randomly masking a ratio $M$ of the nodes (and the corresponding edges) as the transformation function, {\em{i.e.}}, $T_{\text{2D}} = T_{\text{3D}} =\text{mask}$. This trick has been widely used in graph SSL~\citep{hu2019strategies,You2020GraphCL,you2021graph} and has shown robust improvements.
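For concreteness, a minimal PyTorch-style sketch of this shared masking transformation is given below; the tensor layouts and the mask token are illustrative assumptions rather than the exact implementation:
\begin{verbatim}
import torch

def mask_views(x_2d, edge_index, pos_3d, mask_ratio=0.15, mask_token=0):
    """Apply the same random node mask to the 2D and 3D views of a molecule.

    x_2d:       [n, d] atom attribute matrix of the 2D graph
    edge_index: [2, m] bond connectivity (kept intact here for simplicity)
    pos_3d:     [n, 3] atom coordinates of one conformer
    """
    n = x_2d.size(0)
    num_mask = max(1, int(mask_ratio * n))
    masked = torch.randperm(n)[:num_mask]   # shared mask indices for both views

    x_2d_masked = x_2d.clone()
    x_2d_masked[masked] = mask_token        # hide atom attributes in the 2D view

    pos_3d_masked = pos_3d.clone()
    pos_3d_masked[masked] = 0.0             # hide coordinates of the same atoms in 3D

    return x_2d_masked, edge_index, pos_3d_masked, masked
\end{verbatim}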
\subsection{Contrastive Self-Supervised Learning between 2D and 3D Views} \label{sec:contrastive_SSL}
The main idea of contrastive self-supervised learning (SSL)~\citep{oord2018representation,chen2020simple} is first to define positive and negative pairs of views from an inter-data level, and then to align the positive pairs and contrast the negative pairs simultaneously~\citep{wang2020understanding}. For each molecule, we first extract representations from 2D and 3D views, {\em{i.e.}}, $h_{\bm{x}}$ and $h_{\bm{y}}$. Then we create positive and negative pairs for contrastive learning: the 2D-3D pairs $({\bm{x}},{\bm{y}})$ for the same molecule are treated as positive, and negative otherwise. Finally, we align the positive pairs and contrast the negative ones. The pipeline is shown in~\Cref{fig:both_ssl}. In the following, we discuss two common objective functions on contrastive graph SSL.
\textbf{InfoNCE} is first proposed in~\citep{oord2018representation}, and its effectiveness has been validated both empirically~\citep{chen2020simple,he2020momentum} and theoretically~\citep{arora2019theoretical}. Its formulation is given as follows:
\begin{equation}\label{eq:objective_infonce}
\small{
\fontsize{7.8}{1}\selectfont
\begin{aligned}
\mathcal{L}_{\text{InfoNCE}}=-\frac{1}{2} \mathbb{E}_{p({\bm{x}},{\bm{y}})}\Big[\log \frac{\exp(f_{{\bm{x}}}({\bm{x}}, {\bm{y}}))}{\exp(f_{{\bm{x}}}({\bm{x}}, {\bm{y}})) + \sum\limits_{j} \exp(f_{{\bm{x}}}({\bm{x}}^{j},{\bm{y}}))}+\log\frac{\exp(f_{{\bm{y}}}({\bm{y}},{\bm{x}}))}{\exp(f_{{\bm{y}}}({\bm{y}},{\bm{x}})) + \sum\limits_{j} \exp(f_{{\bm{y}}}({\bm{y}}^{j},{\bm{x}}))} \Big],
\end{aligned}
}
\end{equation}
where ${\bm{x}}^{j}, {\bm{y}}^{j}$ are randomly sampled 2D and 3D views with respect to the anchored pair $({\bm{x}},{\bm{y}})$. $f_{{\bm{x}}}({\bm{x}},{\bm{y}})$ and $f_{{\bm{y}}}({\bm{y}},{\bm{x}})$ are scoring functions for the two corresponding views, with flexible formulations. Here we adopt $f_{\bm{x}}({\bm{x}},{\bm{y}}) = f_{\bm{y}}({\bm{y}},{\bm{x}}) = \langle h_{\bm{x}}, h_{\bm{y}} \rangle$. More details are in~\Cref{sec:app:contrastive_ssl}.
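A minimal PyTorch-style sketch of this symmetric InfoNCE objective with in-batch negatives is shown below; the temperature is an optional assumption on top of the plain inner-product score in~\Cref{eq:objective_infonce}:
\begin{verbatim}
import torch
import torch.nn.functional as F

def info_nce(h_x, h_y, temperature=1.0):
    """Symmetric InfoNCE over a batch of 2D/3D representation pairs.

    h_x: [B, d] 2D representations; h_y: [B, d] 3D representations.
    Positive pairs lie on the diagonal; other in-batch pairs act as negatives.
    """
    logits = h_x @ h_y.t() / temperature          # [B, B] scores <h_x_i, h_y_j>
    labels = torch.arange(h_x.size(0), device=h_x.device)
    loss_x = F.cross_entropy(logits, labels)      # contrast 2D against 3D negatives
    loss_y = F.cross_entropy(logits.t(), labels)  # and vice versa
    return 0.5 * (loss_x + loss_y)
\end{verbatim}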
\textbf{Energy-Based Model with Noise Contrastive Estimation (EBM-NCE)} is an alternative that has been widely used in the line of graph contrastive SSL~\citep{sun2019infograph,hu2019strategies,You2020GraphCL,you2021graph}. Its intention is essentially the same as InfoNCE, to align positive pairs and contrast negative pairs, while the main difference is the usage of binary cross-entropy and extra noise distribution for negative sampling:
\begin{equation} \label{eq:objective_ebm_nce}
\small{
\begin{aligned}
\mathcal{L}_{\text{EBM-NCE}}
& = -\frac{1}{2} \mathbb{E}_{p({\bm{y}})} \Big[\mathbb{E}_{p_{n}({\bm{x}}|{\bm{y}})} \log \big(1-\sigma( f_x({\bm{x}}, {\bm{y}}))\big) + \mathbb{E}_{p({\bm{x}}|{\bm{y}} )} \log \sigma( f_x({\bm{x}}, {\bm{y}})) \Big]\\
& ~~~~~ -\frac{1}{2} \mathbb{E}_{p({\bm{x}})} \Big[\mathbb{E}_{p_{n}({\bm{y}}|{\bm{x}})} \log \big(1-\sigma( f_y({\bm{y}},{\bm{x}}))\big) + \mathbb{E}_{p({\bm{y}}|{\bm{x}})} \log \sigma( f_y({\bm{y}},{\bm{x}})) \Big],
\end{aligned}
}
\end{equation}
where $p_n$ is the noise distribution and $\sigma$ is the sigmoid function. We also notice that the final formulation of EBM-NCE shares certain similarities with Jensen-Shannon estimation (JSE)~\citep{nowozin2016f}. However, the derivation process and underlying intuition are different: EBM-NCE models the conditional distributions in the MI lower bound (\Cref{eq:MI_objective}) with an EBM, while JSE is a special case of variational estimation of the f-divergence. Since this is not the main focus of GraphMVP, we provide a more comprehensive comparison, along with the potential benefits of EBM-NCE, in~\Cref{sec:app:contrastive_ssl}.
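Analogously, a PyTorch-style sketch of EBM-NCE is given below, where the noise distribution is instantiated as the empirical distribution via in-batch shuffling (one possible choice, stated here as an assumption):
\begin{verbatim}
import torch
import torch.nn.functional as F

def ebm_nce(h_x, h_y):
    """EBM-NCE between 2D (h_x) and 3D (h_y) views, with inner-product scores."""
    B = h_x.size(0)
    perm = torch.randperm(B, device=h_x.device)
    ones = torch.ones(B, device=h_x.device)
    zeros = torch.zeros(B, device=h_x.device)

    pos = (h_x * h_y).sum(dim=-1)            # f(x, y) for matched pairs
    neg_x = (h_x[perm] * h_y).sum(dim=-1)    # noise 2D paired with real 3D
    neg_y = (h_x * h_y[perm]).sum(dim=-1)    # real 2D paired with noise 3D

    loss_x = F.binary_cross_entropy_with_logits(pos, ones) + \
             F.binary_cross_entropy_with_logits(neg_x, zeros)
    loss_y = F.binary_cross_entropy_with_logits(pos, ones) + \
             F.binary_cross_entropy_with_logits(neg_y, zeros)
    return 0.5 * (loss_x + loss_y)
\end{verbatim}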
Few works~\citep{hassani2020contrastive} have examined the effect of the choice of objective in graph contrastive SSL. In GraphMVP, we treat it as a hyper-parameter and run ablation studies on it, {\em{i.e.}}, solely using either InfoNCE ($\mathcal{L}_{\text{C}} = \mathcal{L}_{\text{InfoNCE}}$) or EBM-NCE ($\mathcal{L}_{\text{C}} = \mathcal{L}_{\text{EBM-NCE}}$).
\subsection{Generative Self-Supervised Learning between 2D and 3D Views} \label{sec:generative_SSL}
Generative SSL is another classic track for unsupervised pre-training~\citep{kingma2013auto,chen2016infogan,larsson2016learning,kingma2018glow}. It aims at learning an effective representation by self-reconstructing each data point. Specifically to drug discovery, we have one 2D graph and a certain number of 3D conformers for each molecule, and our goal is to learn a robust 2D/3D representation that can, to the most extent, recover its 3D/2D counterparts. By doing so, generative SSL can enforce 2D/3D GNN to encode the most crucial geometry/topology information, which can improve the downstream performance.
There are many options for generative models, including variational auto-encoders (VAE)~\citep{kingma2013auto}, generative adversarial networks (GAN)~\citep{goodfellow2014generative}, flow-based models~\citep{dinh2016density}, etc. In GraphMVP, we prefer a VAE-like method for the following reasons: (1) the mapping between two molecular views is stochastic: multiple 3D conformers correspond to the same 2D topology; (2) an explicit 2D graph representation ({\em{i.e.}}, feature encoder) is required for downstream tasks; (3) decoders for structured data such as graphs are often highly nontrivial to design, which makes them a suboptimal choice.
\textbf{Variational Molecule Reconstruction.}
Therefore we propose a \textit{light} VAE-like generative SSL, equipped with a \textit{crafty} surrogate loss, which we describe in the following. We start with an example for illustration. When generating 3D conformers from their corresponding 2D topology, we want to model the conditional likelihood $p({\bm{y}}|{\bm{x}})$. By introducing a reparameterized variable ${\bm{z}}_{\bm{x}} = \mu_{{\bm{x}}} + \sigma_{{\bm{x}}} \odot \epsilon$, where $\mu_{\bm{x}}$ and $\sigma_{\bm{x}}$ are two flexible functions on $h_{\bm{x}}$, $\epsilon \sim \mathcal{N}(0,I)$ and $\odot$ is the element-wise product,
we have the following lower bound:
\begin{equation} \label{eq:variational_lower_bound}
\small
\begin{aligned}
\log p({\bm{y}}|{\bm{x}})
\ge \mathbb{E}_{q({\bm{z}}_{\bm{x}}|{\bm{x}})} \big[ \log p({\bm{y}}|{\bm{z}}_{\bm{x}}) \big] - KL(q({\bm{z}}_{\bm{x}}|{\bm{x}}) || p({\bm{z}}_{\bm{x}})).
\end{aligned}
\end{equation}
The expression for $\log p({\bm{x}}|{\bm{y}})$ can be derived similarly. \Cref{eq:variational_lower_bound} includes a conditional log-likelihood and a KL-divergence term, where the bottleneck is calculating the first term for structured data. This term is also known as the \textit{reconstruction term}: it essentially reconstructs the 3D conformers (${\bm{y}}$) from the sampled 2D molecular graph representation (${\bm{z}}_{{\bm{x}}}$). However, performing graph reconstruction in the data space is not trivial: since molecules ({\em{e.g.}}, atoms and bonds) are discrete, modeling and measuring in the molecule space brings extra obstacles.
\textbf{Variational Representation Reconstruction (VRR).}
To cope with this challenge, we propose a novel surrogate loss by switching the reconstruction from the data space to the representation space. Instead of decoding the latent code $z_{{\bm{x}}}$ to the data space, we directly project it to the 3D representation space, denoted as $q_{\bm{x}}(z_{\bm{x}})$. Since the representation space is continuous, we can model the conditional log-likelihood with a Gaussian distribution, resulting in an L2 distance for reconstruction, {\em{i.e.}}, $\|q_{{\bm{x}}}(z_{{\bm{x}}}) - \text{SG}(h_{\bm{y}}({\bm{y}}))\|^2$. Here SG is the stop-gradient operation, assuming that $h_{{\bm{y}}}$ is a fixed learnt representation function. SG has been widely adopted in the SSL literature to avoid model collapse~\citep{grill2020bootstrap,chen2021exploring}. We call this surrogate loss variational representation reconstruction (VRR):
\begin{equation} \label{eq:variational_lower_bound_approximation}
\small
\begin{aligned}
\mathcal{L}_{\text{G}} = \mathcal{L}_{\text{VRR}}
= & \frac{1}{2} \Big[ \mathbb{E}_{q({\bm{z}}_{\bm{x}}|{\bm{x}})} \big[ \| q_{{\bm{x}}}({\bm{z}}_{\bm{x}}) - \text{SG}(h_{{\bm{y}}}) \|^2 \big] + \mathbb{E}_{q({\bm{z}}_{\bm{y}}|{\bm{y}})} \big[ \| q_{{\bm{y}}}({\bm{z}}_{\bm{y}}) - \text{SG}(h_{{\bm{x}}}) \|_2^2 \big] \Big]\\
& + \frac{\beta}{2} \cdot \Big[ KL(q({\bm{z}}_{\bm{x}}|{\bm{x}}) || p({\bm{z}}_{\bm{x}})) + KL(q({\bm{z}}_{\bm{y}}|{\bm{y}}) || p({\bm{z}}_{\bm{y}})) \Big].
\end{aligned}
\end{equation}
We give a simplified illustration of the generative SSL pipeline in~\Cref{fig:both_ssl} and the complete derivations in~\Cref{sec:app:generative_ssl}. As will be discussed in~\Cref{sec:mutual_information}, VRR is actually maximizing MI, and MI is invariant to continuous bijective functions~\citep{belghazi2018mutual}. Thus, this surrogate loss would be exact if the encoding function $h$ satisfied this condition. However, we find that GNNs, though they do not meet the condition, provide quite robust performance, which empirically justifies the effectiveness of VRR.
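A compact PyTorch-style sketch of one direction of VRR is shown below; the two-layer projection head and the Gaussian prior are illustrative design assumptions, and the symmetric 3D-to-2D direction is obtained by swapping the inputs:
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class VRRHead(nn.Module):
    """Encode h_x into a stochastic latent z_x, project it to the 3D
    representation space, and reconstruct the stop-gradient target SG(h_y)."""

    def __init__(self, dim):
        super().__init__()
        self.mu = nn.Linear(dim, dim)
        self.log_var = nn.Linear(dim, dim)
        self.proj = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h_x, h_y, beta=1.0):
        mu, log_var = self.mu(h_x), self.log_var(h_x)
        z_x = mu + torch.exp(0.5 * log_var) * torch.randn_like(mu)  # reparameterization
        recon = F.mse_loss(self.proj(z_x), h_y.detach())            # SG via detach
        kl = -0.5 * torch.mean(1 + log_var - mu.pow(2) - log_var.exp())
        return recon + beta * kl
\end{verbatim}
The full generative term averages this head with its 3D-to-2D counterpart, matching the structure of~\Cref{eq:variational_lower_bound_approximation}.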
\subsection{Multi-task Objective Function}
As discussed before, contrastive SSL and generative SSL essentially learn the representation from distinct viewpoints. A reasonable conjecture is that combining both SSL methods can lead to overall better performance, thus we arrive at minimizing the following complete objective for GraphMVP:
\begin{equation} \label{eq:graphmvp}
\small
\begin{aligned}
\mathcal{L}_{\text{GraphMVP}}
& = \alpha_1 \cdot \mathcal{L}_{\text{C}} + \alpha_2 \cdot \mathcal{L}_{\text{G}},\\
\end{aligned}
\end{equation}
where $\alpha_1, \alpha_2$ are weighting coefficients. A subsequent ablation study (\Cref{sec:experiment_each_loss_component}) delivers two important messages: (1) both the individual contrastive and generative SSL on 3D conformers consistently help improve the 2D representation learning; (2) combining the two SSL strategies yields further improvements. Thus, we conclude that GraphMVP\, (\Cref{eq:graphmvp}) is able to obtain an augmented 2D representation by fully utilizing the 3D information.
As discussed in~\Cref{sec:intro}, existing graph SSL methods only focus on the 2D topology, which is in parallel to GraphMVP: 2D graph SSL focuses on exploiting the 2D structure topology, and GraphMVP\, takes advantage of the 3D geometry information. Thus, we propose to merge the 2D SSL into GraphMVP. Since there are two main categories in 2D graph SSL: generative and contrastive, we propose two variants GraphMVP-G\, and GraphMVP-C\, accordingly. Their objectives are as follows:
\begin{equation} \label{eq:graphmvp_variants}
\small
\begin{aligned}
\mathcal{L}_{\text{GraphMVP-G}} = \mathcal{L}_{\text{GraphMVP}} + \alpha_3 \cdot \mathcal{L}_{\text{Generative 2D-SSL}},~~~~~
\mathcal{L}_{\text{GraphMVP-C}} = \mathcal{L}_{\text{GraphMVP}} + \alpha_3 \cdot \mathcal{L}_{\text{Contrastive 2D-SSL}}.
\end{aligned}
\end{equation}
Later, the empirical results also help support the effectiveness of GraphMVP-G\, and GraphMVP-C, and thus, we can conclude that existing 2D SSL is complementary to GraphMVP.
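Putting the pieces together, one pre-training step of GraphMVP\, can be sketched as follows, reusing the \texttt{ebm\_nce} and \texttt{VRRHead} sketches above; the encoder, optimizer, and loss-weight arguments are placeholders rather than the exact training configuration:
\begin{verbatim}
def pretrain_step(batch_2d, batch_3d, gnn_2d, gnn_3d, vrr_x2y, vrr_y2x,
                  optimizer, alpha_1=1.0, alpha_2=1.0):
    """One GraphMVP pre-training step on a batch of masked 2D/3D view pairs."""
    h_x = gnn_2d(batch_2d)                    # 2D topology representations, [B, d]
    h_y = gnn_3d(batch_3d)                    # 3D conformer representations, [B, d]

    loss_c = ebm_nce(h_x, h_y)                # contrastive term (or info_nce)
    loss_g = 0.5 * (vrr_x2y(h_x, h_y) + vrr_y2x(h_y, h_x))  # generative (VRR) term
    loss = alpha_1 * loss_c + alpha_2 * loss_g

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}
For the GraphMVP-G\, and GraphMVP-C\, variants, a 2D SSL loss weighted by $\alpha_3$ would simply be added to \texttt{loss} before the backward pass.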
\section{Experiments and Results} \label{sec:experiments}
\subsection{Experimental Settings} \label{sec:experiments_setting}
\textbf{Datasets.} We pre-train models on the same dataset and then fine-tune on a wide range of downstream tasks. We randomly select 50k qualified molecules from GEOM~\citep{axelrod2020geom} with both 2D and 3D structures for pre-training. As clarified in~\Cref{sec:overview}, conformer ensembles can better reflect molecular properties, thus we take $C$ conformers of each molecule. For downstream tasks, we first stick to the same setting as the main graph SSL works~\citep{hu2019strategies,You2020GraphCL,you2021graph}, exploring 8 binary molecular property prediction tasks, which are all in the low-data regime. Then we explore 6 regression tasks from various low-data domains to be more comprehensive. We describe all the datasets in~\Cref{app:sec:dataset}.
\textbf{2D GNN.} We follow the research line of SSL on molecule graph~\citep{hu2019strategies,you2021graph,You2020GraphCL}, using the same Graph Isomorphism Network (GIN)~\citep{xu2018powerful} as the backbone model, with the same feature sets.
\textbf{3D GNN.} We choose SchNet~\citep{schutt2017schnet} for geometric modeling, since SchNet (1) is found to be a strong geometric representation learning method under fair benchmarking, and (2) can be trained more efficiently compared to the other recent 3D models. More detailed explanations are in~\Cref{appendix:3d_gnn}.
\subsection{Main Results on Molecular Property Prediction.} \label{sec:main_results}
We carry out comprehensive comparisons with 10 SSL baselines and random initialization. For pre-training, we apply all SSL methods on the same dataset based on GEOM~\citep{axelrod2020geom}. For fine-tuning, we follow the same setting~\citep{hu2019strategies,You2020GraphCL,you2021graph} with 8 low-data molecular property prediction tasks.
\textbf{Baselines.} Due to the rapid growth of graph SSL~\citep{liu2021graph,xie2021self,wu2021self}, we are only able to benchmark the most well-acknowledged baselines: EdgePred~\citep{hamilton2017inductive}, InfoGraph~\citep{sun2019infograph}, GPT-GNN\citep{hu2020gpt}, AttrMask \& ContextPred\citep{hu2019strategies}, GraphLoG\citep{xu2021self}, G-\{Contextual, Motif\}\citep{rong2020self}, GraphCL\citep{You2020GraphCL}, JOAO\citep{you2021graph}.
\textbf{Our method.} GraphMVP\, has two key factors: i) masking ratio ($M$) and ii) number of conformers for each molecule ($C$). We set $M=0.15$ and $C=5$ by default, and will explore their effects in the following ablation studies in~\Cref{sec:effect_of_M_and_C}. For EBM-NCE loss, we adopt the empirical distribution for noise distribution. For~\Cref{eq:graphmvp_variants}, we pick the empirically optimal generative and contrastive 2D SSL method: that is AttrMask for GraphMVP-G\, and ContextPred for GraphMVP-C.
\begin{table}[t!]
\caption{
Results for molecular property prediction tasks.
For each downstream task, we report the mean (and standard deviation) ROC-AUC of 3 seeds with scaffold splitting.
For GraphMVP\,, we set $M=0.15$ and $C=5$.
The best and second best results are marked \textbf{\underline{bold}} and \textbf{bold}, respectively.
\vspace{-0.24cm}
}
\label{tab:main_results}
\small
\setlength{\tabcolsep}{3pt}
\centering
\begin{tabular}{l c c c c c c c c c}
\toprule
Pre-training & BBBP & Tox21 & ToxCast & Sider & ClinTox & MUV & HIV & Bace & Avg\\
\midrule
-- & 65.4(2.4) & 74.9(0.8) & 61.6(1.2) & 58.0(2.4) & 58.8(5.5) & 71.0(2.5) & 75.3(0.5) & 72.6(4.9) & 67.21 \\
\midrule
EdgePred & 64.5(3.1) & 74.5(0.4) & 60.8(0.5) & 56.7(0.1) & 55.8(6.2) & 73.3(1.6) & 75.1(0.8) & 64.6(4.7) & 65.64 \\
AttrMask & 70.2(0.5) & 74.2(0.8) & 62.5(0.4) & 60.4(0.6) & 68.6(9.6) & 73.9(1.3) & 74.3(1.3) & 77.2(1.4) & 70.16 \\
GPT-GNN & 64.5(1.1) & \textbf{75.3(0.5)} & 62.2(0.1) & 57.5(4.2) & 57.8(3.1) & 76.1(2.3) & 75.1(0.2) & 77.6(0.5) & 68.27 \\
InfoGraph & 69.2(0.8) & 73.0(0.7) & 62.0(0.3) & 59.2(0.2) & 75.1(5.0) & 74.0(1.5) & 74.5(1.8) & 73.9(2.5) & 70.10 \\
ContextPred & 71.2(0.9) & 73.3(0.5) & 62.8(0.3) & 59.3(1.4) & 73.7(4.0) & 72.5(2.2) & 75.8(1.1) & 78.6(1.4) & 70.89 \\
GraphLoG & 67.8(1.7) & 73.0(0.3) & 62.2(0.4) & 57.4(2.3) & 62.0(1.8) & 73.1(1.7) & 73.4(0.6) & 78.8(0.7) & 68.47 \\
G-Contextual & 70.3(1.6) & 75.2(0.3) & 62.6(0.3) & 58.4(0.6) & 59.9(8.2) & 72.3(0.9) & 75.9(0.9) & 79.2(0.3) & 69.21 \\
G-Motif & 66.4(3.4) & 73.2(0.8) & 62.6(0.5) & 60.6(1.1) & 77.8(2.0) & 73.3(2.0) & 73.8(1.4) & 73.4(4.0) & 70.14 \\
GraphCL & 67.5(3.3) & 75.0(0.3) & 62.8(0.2) & 60.1(1.3) & 78.9(4.2) & \textbf{77.1(1.0)} & 75.0(0.4) & 68.7(7.8) & 70.64 \\
JOAO & 66.0(0.6) & 74.4(0.7) & 62.7(0.6) & 60.7(1.0) & 66.3(3.9) & 77.0(2.2) & \textbf{76.6(0.5)} & 72.9(2.0) & 69.57 \\
\midrule
GraphMVP\, & 68.5(0.2) & 74.5(0.4) & 62.7(0.1) & \textbf{62.3(1.6)} & \textbf{79.0(2.5)} & 75.0(1.4) & 74.8(1.4) & 76.8(1.1) & 71.69 \\
GraphMVP-G & \textbf{70.8(0.5)} & \textbf{\underline{75.9(0.5)}} & \textbf{\underline{63.1(0.2)}} & 60.2(1.1) & \textbf{\underline{79.1(2.8)}} & \textbf{\underline{77.7(0.6)}} & 76.0(0.1) & \textbf{79.3(1.5)} & \textbf{72.76} \\
GraphMVP-C & \textbf{\underline{72.4(1.6)}} & 74.4(0.2) & \textbf{\underline{63.1(0.4)}} & \textbf{\underline{63.9(1.2)}} & 77.5(4.2) & 75.0(1.0) & \textbf{\underline{77.0(1.2)}} & \textbf{\underline{81.2(0.9)}} & \textbf{\underline{73.07}} \\
\bottomrule
\end{tabular}
\vspace{-2ex}
\end{table}
The main results on 8 molecular property prediction tasks are listed in~\Cref{tab:main_results}. We observe that the performance of GraphMVP\, is significantly better than that of the randomly initialized model, and the average performance outperforms the existing SSL methods by a large margin. In addition, GraphMVP-G\, and GraphMVP-C\, consistently improve the performance, supporting the claim: \textbf{3D geometry is complementary to the 2D topology}. GraphMVP\, leverages the information between 3D geometry and 2D topology, while 2D SSL plays the role of a regularizer to extract more 2D topological information; they extract different perspectives of information and are indeed complementary to each other.
\subsection{Ablation Study: The Effect of Masking Ratio and Number of Conformers} \label{sec:effect_of_M_and_C}
We analyze the effects of the masking ratio $M$ and the number of conformers $C$ in GraphMVP. In~\Cref{tab:main_results}, we set $M$ to 0.15 since this value has been widely used in existing SSL methods~\citep{hu2019strategies,You2020GraphCL,you2021graph}, and $C$ to 5, which we will explain below. We explore the range of $M \in \{0, 0.15, 0.3\}$ and $C \in \{1, 5, 10, 20\}$, and report the average performance. The complete results are in~\Cref{app:sec:effect_of_M_and_C}.
\begin{table}[t!]
\fontsize{8.5}{4}\selectfont
\centering
\vspace{-2ex}
\begin{minipage}[t]{.48\linewidth}
\setlength{\tabcolsep}{5.3pt}
\renewcommand\arraystretch{2.2}
\caption{Ablation of masking ratio $M$, $C\equiv5$.\vspace{-0.3cm}}
\label{tab:ablation_masking}
\centering
\begin{tabular}{c c c c}
\toprule
$M$ & GraphMVP & GraphMVP-G & GraphMVP-C\\
\midrule
0 & 71.12 & 72.15 & 72.66\\
0.15 & 71.60 & 72.76 & 73.08\\
0.30 & 71.79 & 72.91 & 73.17\\
\bottomrule
\end{tabular}
\end{minipage}
\hfill
\begin{minipage}[t]{.48\linewidth}
\setlength{\tabcolsep}{5.5pt}
\caption{Ablation of \# conformer $C$, $M\equiv0.15$.\vspace{-0.3cm}}
\label{tab:ablation_n_conformers}
\centering
\begin{tabular}{c c c c}
\toprule
$C$ & GraphMVP & GraphMVP-G & GraphMVP-C\\
\midrule
1 & 71.61 & 72.80 & 72.46\\
5 & 71.60 & 72.76 & 73.08\\
10 & 72.20 & 72.59 & 73.09\\
20 & 72.39 & 73.00 & 73.02\\
\bottomrule
\end{tabular}
\end{minipage}
\vspace{-1ex}
\end{table}
As seen in~\Cref{tab:ablation_masking}, the improvement is more obvious from $M=0$ (raw graph) to $M=0.15$ than from $M=0.15$ to $M=0.3$. This can be explained by the fact that subgraph masking with a larger ratio makes the SSL tasks more challenging, especially compared to the raw graph ($M=0$).
\Cref{tab:ablation_n_conformers} shows the effect of $C$. We observe that the performance generally improves when adding more conformers, but reaches a plateau above a certain threshold. This observation matches previous findings~\citep{axelrod2020molecular}: adding more conformers to augment representation learning is not as helpful as expected; we conclude that adding more conformers can be beneficial, but with only modest improvement. One possible reason is that, when generating the dataset, we sample the top-$C$ conformers with the highest probability and lowest energy. In other words, the top-5 conformers are already sufficient to cover most conformers in equilibrium states (over 80\%), and the effect of a larger $C$ is thus modest.
To sum up, adding more conformers might be helpful, but the computation cost grows linearly with the dataset size. On the other hand, enlarging the masking ratio induces no extra cost, yet the performance is slightly better. Therefore, from the perspective of efficiency and effectiveness, we would encourage tuning the masking ratio before trying a larger number of conformers.
\subsection{Ablation Study: The Effect of Objective Function} \label{sec:experiment_each_loss_component}
\begin{wraptable}[13]{r}{0.5\textwidth}
\setlength{\tabcolsep}{3.53pt}
\vspace{-0.4cm}
\small
\caption{
Ablation on the objective function.
\vspace{-0.35cm}
}
\label{tab:ablation_objective_function}
\small
\centering
\begin{tabular}{l c c c}
\toprule
GraphMVP\, Loss & Contrastive & Generative & Avg\\
\midrule
Random & & & 67.21\\
\midrule
InfoNCE only & \checkmark & & 68.85\\
EBM-NCE only & \checkmark & & 70.15\\
VRR only & & \checkmark & 69.29\\
RR only & & \checkmark & 68.89\\
\midrule
InfoNCE + VRR & \checkmark & \checkmark & 70.67\\
EBM-NCE + VRR & \checkmark & \checkmark & 71.69\\
InfoNCE + RR & \checkmark & \checkmark & 70.60\\
EBM-NCE + RR & \checkmark & \checkmark & 70.94\\
\bottomrule
\end{tabular}
\end{wraptable}
In~\Cref{sec:method}, we introduce a new contrastive learning objective family called EBM-NCE, and we take either InfoNCE or EBM-NCE as the contrastive SSL objective. For the generative SSL task, we propose a novel objective function called variational representation reconstruction (VRR) in~\Cref{eq:variational_lower_bound_approximation}. As discussed in~\Cref{sec:generative_SSL}, stochasticity is important for GraphMVP\, since it can capture the conformer distribution for each 2D molecular graph. To verify this, we add an ablation study on \textit{representation reconstruction (RR)} by removing stochasticity from VRR. Thus, we here deploy a comprehensive ablation study to explore the effect of each individual objective function (InfoNCE, EBM-NCE, VRR and RR), followed by the pairwise combinations between them.
The results in \Cref{tab:ablation_objective_function} give certain constructive insights as follows:
(1) Each individual SSL objective function (middle block) can lead to better performance. This strengthens the claim that adding 3D information is helpful for 2D representation learning.
(2) According to the combination of those SSL objective functions (bottom block), adding both contrastive and generative SSL can consistently improve the performance. This verifies our claim that conducting SSL at both the inter-data and intra-data level is beneficial.
(3) We can see VRR is consistently better than RR on all settings, which verifies that stochasticity is an important factor in modeling 3D conformers for molecules.
\subsection{Broader Range of Downstream Tasks}
The 8 binary downstream tasks discussed so far have been widely applied in the graph SSL research line on molecules~\cite{hu2019strategies,You2020GraphCL,you2021graph}, but there are more tasks where the 3D conformers can be helpful. Here we test 4 extra regression property prediction tasks and 2 drug-target affinity tasks.
Detailed dataset statistics can be found in~\Cref{app:sec:dataset}; here we briefly describe the affinity tasks. Drug-target affinity (DTA) is a crucial task~\citep{pahikkala2015toward,wen2017deep,ozturk2018deepdta} in drug discovery, which models both the molecular drugs and the target proteins, with the goal of predicting their affinity scores. One recent work~\citep{nguyen2019graphdta} models the molecular drugs with a 2D GNN and the target protein (as an amino-acid sequence) with a convolutional neural network (CNN). We adopt this setting by pre-training the 2D GNN using GraphMVP.
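As a schematic of how the pre-trained 2D GNN plugs into the DTA setting, consider the following PyTorch-style sketch; the protein tokenizer, layer sizes, and prediction head are illustrative assumptions, not the exact architecture of~\citep{nguyen2019graphdta}:
\begin{verbatim}
import torch
import torch.nn as nn

class DTAModel(nn.Module):
    """Drug-target affinity: a (pre-trained) 2D GNN encodes the drug, a 1D CNN
    encodes the amino-acid sequence, and an MLP head predicts the affinity."""

    def __init__(self, gnn_2d, emb_dim=300, vocab_size=26, seq_channels=32):
        super().__init__()
        self.gnn_2d = gnn_2d                      # GraphMVP pre-trained encoder
        self.prot_emb = nn.Embedding(vocab_size, seq_channels)
        self.prot_cnn = nn.Sequential(
            nn.Conv1d(seq_channels, 64, kernel_size=8), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1))
        self.head = nn.Sequential(
            nn.Linear(emb_dim + 64, 256), nn.ReLU(), nn.Linear(256, 1))

    def forward(self, drug_graph, protein_tokens):
        h_drug = self.gnn_2d(drug_graph)          # [B, emb_dim]
        h_prot = self.prot_cnn(
            self.prot_emb(protein_tokens).transpose(1, 2)).squeeze(-1)  # [B, 64]
        return self.head(torch.cat([h_drug, h_prot], dim=-1)).squeeze(-1)
\end{verbatim}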
As illustrated in~\Cref{tab:main_regression}, the consistent performance gain verifies the effectiveness of our proposed GraphMVP.
\begin{table}[t!]
\setlength{\tabcolsep}{7pt}
\centering
\caption{
Results for four molecular property prediction tasks (regression) and two DTA tasks (regression).
We report the mean RMSE of 3 seeds with scaffold splitting for molecular property downstream tasks, and mean MSE for 3 seeds with random splitting on DTA tasks.
For GraphMVP\,, we set $M=0.15$ and $C=5$.
The best performance for each task is marked in \textbf{\underline{bold}}.
We omit the std here since they are very small and indistinguishable. For complete results, please check~\Cref{sec:app:regression_results}.
\vspace{-4ex}
}
\label{tab:main_regression}
\begin{tabular}{l c c c c c c c c}
\toprule
& \multicolumn{5}{c}{Molecular Property Prediction} & \multicolumn{3}{c}{Drug-Target Affinity}\\
\cmidrule(lr){2-6} \cmidrule(lr){7-9} Pre-training & ESOL & Lipo & Malaria & CEP & Avg & Davis & KIBA & Avg\\
\midrule
--
& 1.178 & 0.744 & 1.127 & 1.254 & 1.0756
& 0.286 & 0.206 & 0.2459\\
\midrule
AM
& 1.112 & 0.730 & 1.119 & 1.256 & 1.0542
& 0.291 & 0.203 & 0.2476\\
CP
& 1.196 & 0.702 & 1.101 & 1.243 & 1.0606
& 0.279 & 0.198 & 0.2382 \\
JOAO
& 1.120 & 0.708 & 1.145 & 1.293 & 1.0663
& 0.281 & 0.196 & 0.2387\\
\midrule
GraphMVP
& 1.091 & 0.718 & 1.114 & 1.236 & 1.0397
& 0.280 & 0.178 & 0.2286\\
GraphMVP-G\,\,\,
& 1.064 & 0.691 & 1.106 & \textbf{\underline{1.228}} & 1.0221
& \textbf{\underline{0.274}} & 0.175 & 0.2248\\
GraphMVP-C\,\,\,
& \textbf{\underline{1.029}} & \textbf{\underline{0.681}} & \textbf{\underline{1.097}} & 1.244 & \textbf{\underline{1.0128}}
& 0.276 & \textbf{\underline{0.168}} & \textbf{\underline{0.2223}}\\
\bottomrule
\end{tabular}
\vspace{-1ex}
\end{table}
\subsection{Case Study}
We investigate how GraphMVP\, helps when the task objectives are challenging with respect to the 2D topology but straightforward using 3D geometry (as shown in~\Cref{fig:case}). We therefore design two case studies to test how GraphMVP\, transfers knowledge from 3D geometry into the 2D representation.
The first case study is \textit{3D Diameter Prediction}. For molecules, usually, the longer the 2D diameter is, the larger the 3D diameter (the largest atomic pairwise L2 distance). However, this does not always hold, and we are interested in using the 2D graph to predict the 3D diameter. The second case study is \textit{Long-Range Donor-Acceptor Detection}. Molecules possess a special geometric structure called the donor-acceptor bond, and we want to use the 2D molecular graph to detect this special structure. We validate that GraphMVP\, consistently brings improvements on these two case studies, and provide more detailed discussions and interpretations in~\Cref{appendix:case}.
\begin{figure}[htbp!]
\centering
\vspace{-2ex}
\includegraphics[width=\textwidth]{figures/Case_Study_2.pdf}
\vspace{-5ex}
\caption{We select the molecules whose properties can be easily resolved via 3D but not 2D. The randomly initialized 2D GNN achieves accuracies of $38.9\pm0.8$ and $77.9\pm1.1$, respectively. The GraphMVP\, pre-trained ones obtain scores of $42.3\pm1.3$ and $81.5\pm0.4$, outperforming all the methods in~\Cref{sec:main_results}. We plot cases where random initialization fails but GraphMVP\, is correct.}
\label{fig:case}
\vspace{-4ex}
\end{figure}
\section{Theoretical Insights} \label{sec:theoretical_insight}
In this section, we provide the mathematical insights behind GraphMVP. We will first discuss both contrastive and generative SSL methods (\Cref{sec:contrastive_SSL,sec:generative_SSL}) are maximizing the mutual information (MI) and then how the 3D geometry, as privileged information, can help 2D representation learning.
\vspace{-1ex}
\subsection{Maximizing Mutual Information} \label{sec:mutual_information}
\vspace{-1ex}
Mutual information (MI) measures the non-linear dependence~\citep{belghazi2018mutual} between two random variables: the larger MI, the stronger dependence between the variables. Therefore for GraphMVP, we can interpret it as maximizing MI between 2D and 3D views: to obtain a more robust 2D/3D representation by sharing more information with its 3D/2D counterparts. This is also consistent with the sample complexity theory~\citep{erhan2010does,arora2019theoretical,garg2020functional} where SSL as functional regularizer can reduce the uncertainty in representation learning. We first derive a lower bound for MI (see derivations in~\Cref{sec:app:MI}), and the corresponding objective function $\mathcal{L}_{\text{MI}}$ is
\begin{equation} \label{eq:MI_objective}
\small
I(X;Y) \ge \mathcal{L}_{\text{MI}} = \frac{1}{2} \mathbb{E}_{p({\bm{x}},{\bm{y}})} \big[ \log p({\bm{y}}|{\bm{x}}) + \log p({\bm{x}}|{\bm{y}}) \big].
\end{equation}
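For intuition, one way to arrive at this bound is to symmetrize the identity $I(X;Y) = H(Y) - H(Y|X)$ over the two views (this is a condensed sketch; the full derivation is in~\Cref{sec:app:MI}):
\begin{equation*}
\small
I(X;Y) = \frac{1}{2}\big[H(X) + H(Y)\big] + \frac{1}{2}\mathbb{E}_{p({\bm{x}},{\bm{y}})}\big[\log p({\bm{y}}|{\bm{x}}) + \log p({\bm{x}}|{\bm{y}})\big] \ge \mathcal{L}_{\text{MI}},
\end{equation*}
where $H(\cdot)$ denotes entropy and the inequality drops the entropy terms, which are non-negative for the discrete molecular views.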
\textbf{Contrastive Self-Supervised Learning.}
InfoNCE was initially proposed to maximize the MI directly~\citep{oord2018representation}.
Here in GraphMVP, EBM-NCE estimates the conditional likelihood in~\Cref{eq:MI_objective} using EBM, and solves it with NCE~\cite{gutmann2010noise}. As a result, EBM-NCE can also be seen as maximizing MI between 2D and 3D views. The detailed derivations can be found in~\Cref{app:ebm_nce}.
\textbf{Generative Self-Supervised Learning.}
One alternative solution is to use a variational lower bound to approximate the conditional log-likelihood terms in~\Cref{eq:MI_objective}. Then we can follow the same pipeline in~\Cref{sec:generative_SSL}, ending up with the surrogate objective, {\em{i.e.}}, VRR in~\Cref{eq:variational_lower_bound_approximation}.
\subsection{3D Geometry as Privileged Information} \label{sec:privileged_info}
We show the theoretical insights from privileged information that motivate GraphMVP. We start by considering a supervised learning setting where $(\bm{u}_i,\bm{l}_i)$ is a feature-label pair and $\bm{u}_i^*$ is the privileged information~\cite{vapnik2009new,vapnik2015learning}. Privileged information is defined as additional information about the input $(\bm{u}_i,\bm{l}_i)$ that supports the prediction. For example, $\bm{u}_i$ could be some CT images of a particular disease, $\bm{l}_i$ could be the label of the disease, and $\bm{u}_i^*$ could be the medical report from a doctor. VC theory~\cite{vapnik2013nature,vapnik2015learning} characterizes the learning speed of an algorithm in terms of its capacity and the amount of training data. Consider a binary classifier $f$ from a function class $\mathcal{F}$ with finite VC-dimension $\text{VCD}(\mathcal{F})$. With probability $1-\delta$, the expected error is upper bounded by
\begin{equation}
\small
R(f)\leq R_n(f) +\mathcal{O}\bigg( \big( \frac{\text{VCD}(\mathcal{F})-\log\delta}{n} \big)^\beta \bigg)
\end{equation}
where $R_n(f)$ denotes the training error and $n$ is the number of training samples. When the training data is separable, $R_n(f)$ will diminish to zero and $\beta$ is equal to $1$. When the training data is non-separable, $\beta$ is $\frac{1}{2}$. Therefore, the rate of convergence for the separable case is of order $1/n$. In contrast, the rate for the non-separable case is of order $1/\sqrt{n}$. We note that such a difference is huge: reaching a bound of the same order may require on the order of 100 training samples in one case versus 10,000 in the other. Privileged information makes the training data separable such that the learning can be more efficient. Connecting the results to GraphMVP, we notice that the 3D geometric information of molecules can be viewed as a form of privileged information, since 3D information can effectively make molecules more separable for some properties~\cite{schutt2017schnet,liu2018n,liu2021spherical}. Besides, privileged information is only used in training, which matches well with our usage of 3D geometry for pre-training. In fact, using 3D structures as privileged information has already been shown to be quite useful in protein classification~\cite{vapnik2009new}, which serves as strong evidence to justify the effectiveness of 3D information in graph SSL pre-training.
\section{Conclusion and Future Work}
\vspace{-1ex}
In this work, we provide a very general framework, coined GraphMVP. From the domain perspective, GraphMVP\, (1) is the first to incorporate 3D information for augmenting 2D graph representation learning and (2) is able to take advantage of 3D conformers by considering stochasticity in modeling. From the aspect of technical novelty, GraphMVP\, brings the following insights when introducing its two SSL tasks:
(1) Following~\Cref{eq:MI_objective}, GraphMVP\, proposes EBM-NCE and VRR, which model the conditional distributions using an EBM and a variational distribution, respectively.
(2) EBM-NCE is similar to JSE, though we arrive at it from a different theoretical direction, and EBM opens another promising avenue in this area.
(3) VRR, as a generative SSL method, is able to alleviate the potential issues in molecule generation~\citep{zhavoronkov2019deep,gao2020synthesizability}.
(4) Ultimately, GraphMVP\, combines both contrastive SSL (InfoNCE or EBM-NCE) and generative SSL (VRR) for objective function.
Both empirical results (solid performance improvements on 14 downstream datasets) and theoretical analysis can strongly support the above domain and technical contributions.
We want to emphasize that GraphMVP\, is model-agnostic and has the potential to be expanded to many other low-data applications. This motivates broad directions for future exploration, including but not limited to: (1) more powerful 2D and 3D molecule representation methods, and (2) application domains other than small molecules, {\em{e.g.}}, large molecules like proteins.
\part{\Large\centering{Appendix}}
\parttoc
\clearpage
\input{09_related}
\clearpage
\input{10_MI}
\clearpage
\input{11_contrastive_ssl}
\clearpage
\input{12_generative_ssl}
\clearpage
\input{13_datasets}
\input{14_experiments}
\section{Self-Supervised Learning on Molecular Graph} \label{appendix:related_work}
Self-supervised learning (SSL) methods have attracted massive attention recently, trending from vision~\citep{chen2020simple,he2020momentum,caron2020unsupervised,chen2021exploring,OcCo}, language~\citep{oord2018representation,devlin2018bert,brown2020language} to graph~\citep{velivckovic2018deep,sun2019infograph,liu2018n,hu2019strategies,You2020GraphCL,you2021graph}. In general, there are two categories of SSL: contrastive and generative, where they differ on the design of the supervised signals. Contrastive SSL realizes the supervised signals at the \textbf{inter-data} level, learning the representation by contrasting with other data points; while generative SSL focuses on reconstructing the original data at the \textbf{intra-data} level. Both venues have been widely explored~\citep{liu2021graph,xie2021self,wu2021self,liu2021self}.
\subsection{Contrastive graph SSL}
Contrastive graph SSL first applies transformations to construct different \textit{views} for each graph. Each view incorporates different granularities of information, like node-, subgraph-, and graph-level. It then solves two sub-tasks simultaneously: (1) aligning the representations of views from the same data; (2) contrasting the representations of views from different data, leading to a uniformly distributed latent space~\citep{wang2020understanding}. The key difference among existing methods is thus the design of view constructions. InfoGraph~\citep{velivckovic2018deep,sun2019infograph} contrasted the node (local) and graph (global) views. ContextPred~\citep{hu2019strategies} and G-Contextual~\cite{rong2020self} contrasted between node and context views. GraphCL and JOAO~\citep{You2020GraphCL,you2021graph} made comprehensive comparisons among four graph-level transformations and further learned to select the most effective combinations.
\subsection{Generative graph SSL}
Generative graph SSL aims at reconstructing important structures for each graph. By so doing, it consequently learns a representation capable of encoding key ingredients of the data. EdgePred~\citep{hamilton2017inductive} and AttrMask~\citep{hu2019strategies} predicted the adjacency matrix and masked tokens (nodes and edges) respectively. GPT-GNN~\citep{hu2020gpt} reconstructed the whole graph in an auto-regressive approach.
\subsection{Predictive graph SSL}
There are certain SSL methods specific to molecular graphs. For example, one central task in drug discovery is to find the important substructures or motifs in molecules that can activate the target interactions. G-Motif~\citep{rong2020self} adopts domain knowledge to heuristically extract motifs for each molecule, and the SSL task is to predict the existence of each motif. Different from contrastive and generative SSL, recent literature~\citep{wu2021self} treats this as predictive graph SSL, where the supervised signals are self-generated labels.
\begin{table}[b]
\small
\caption{
Comparison between GraphMVP\, and existing graph SSL methods.
\vspace{-0.6ex}
}
\label{app:tab:compare}
\center
\setlength{\tabcolsep}{10pt}
\begin{tabular}{l c c c c c}
\toprule
\multirow{2}{*}{SSL Pre-training} & \multicolumn{2}{c}{Graph View} & \multicolumn{3}{c}{SSL Category}\\
\cmidrule(lr){2-3}
\cmidrule(lr){4-6}
& 2D Topology & 3D Geometry & Generative & Contrastive & Predictive\\
\midrule
EdgePred~\citep{hamilton2017inductive} & \checkmark & - & \checkmark & - & -\\
AttrMask~\citep{hu2019strategies} & \checkmark & - & \checkmark & - & -\\
GPT-GNN~\citep{hu2020gpt} & \checkmark & - & \checkmark & - & -\\
InfoGraph~\citep{velivckovic2018deep,sun2019infograph} & \checkmark & - & - & \checkmark& -\\
ContexPred~\citep{hu2019strategies} & \checkmark & - & - & \checkmark& -\\
GraphLoG~\citep{xu2021self} & \checkmark & - & - & \checkmark& -\\
G-Contextual~\citep{rong2020self} & \checkmark & - & - & \checkmark& -\\
GraphCL~\citep{You2020GraphCL} & \checkmark & - & - & \checkmark& -\\
JOAO~\citep{you2021graph} & \checkmark & - & - & \checkmark& -\\
G-Motif~\citep{rong2020self} & \checkmark & - & - & - & \checkmark\\
\midrule
GraphMVP\,(Ours) & \checkmark & \checkmark & \checkmark & \checkmark& -\\
\bottomrule
\end{tabular}
\end{table}
\textbf{SSL for Molecular Graphs.} Recall that all previous methods in~\Cref{app:tab:compare} \textbf{merely} focus on the 2D topology. However, for science-centric tasks such as molecular property prediction, 3D geometry should be incorporated as it provides complementary and comprehensive information~\citep{schutt2017schnet,liu2021spherical}. To mitigate this gap, we propose GraphMVP to leverage the 3D geometry with unsupervised graph pre-training.
\section{Molecular Graph Representation} \label{app:sec:molecule_representation}
There are two main methods for molecular graph representation learning. The first one is molecular fingerprints: a hashed bit vector that describes the molecular graph. There have been re-discoveries of fingerprint-based methods~\citep{ramsundar2015massively,liu2018practical,liu2019loss,meyer2019learning,alnammi2021evaluating,jiang2021could}, yet they have one main drawback: random forest and XGBoost are very strong learning models on fingerprints, but they fail to benefit from the pre-training strategy.
Graph neural networks (GNNs) have become another mainstream modeling method for molecular graph representation. Existing methods can be generally split into two venues: 2D GNNs and 3D GNNs, depending on what level of information is considered. 2D GNNs focus on the topological structure of the graph, like the adjacency among nodes, while 3D GNNs are able to model the ``energy'' of molecules by taking into account the spatial positions of atoms.
First, we want to highlight that GraphMVP\, is model-agnostic, {\em{i.e.}}, it can be applied to any 2D and 3D GNN representation function, yet the specific 2D and 3D representations are not the main focus of this work.
Second, we acknowledge there are a lot of advanced 3D~\citep{fuchs2020se,satorras2021n,liu2021spherical,jumper2021highly} and 2D~\citep{gilmer2017neural,yang2019analyzing,liu2018n,xu2018powerful,corso2020principal,demirel2021analysis} representation methods.
However, considering the \textit{graph SSL literature} and \textit{graph representation literature} (illustrated below), we adopt GIN~\citep{xu2018powerful} and SchNet~\citep{schutt2017schnet} in the current version of GraphMVP.
\subsection{2D Molecular Graph Neural Network}
The 2D representation treats each molecule as a 2D graph, with atoms as nodes and bonds as edges, {\em{i.e.}}, $g_{\text{2D}} = (X, E)$. $X\in\mathbb{R}^{n\times d_n}$ is the atom attribute matrix, where $n$ is the number of atoms (nodes) and $d_n$ is the atom attribute dimension. $E\in\mathbb{R}^{m\times d_e}$ is the bond attribute matrix, where $m$ is the number of bonds (edges) and $d_e$ is the bond attribute dimension. Notice that here $E$ also includes the connectivity. We then apply a transformation function $T_{\text{2D}}$ on the topological graph. Given a 2D graph $g_{\text{2D}}$, its 2D molecular representation is:
\begin{equation}
h_{\text{2D}} = \text{GNN-2D}(T_{\text{2D}}(g_{\text{2D}})) = \text{GNN-2D}(T_{\text{2D}}(X, E)).
\end{equation}
The core operation of 2D GNN is the message passing function~\citep{gilmer2017neural}, which updates the node representation based on adjacency information. We have variants depending on the design of message and aggregation functions, and we pick GIN~\citep{xu2018powerful} in this work.
\paragraph{GIN} There has been a long research line on 2D graph representation learning~\citep{gilmer2017neural,yang2019analyzing,liu2018n,xu2018powerful,corso2020principal,demirel2021analysis}. Among these, graph isomorphism network (GIN) model~\citep{xu2018powerful} has been widely used as the backbone model in recent graph self-supervised learning work~\citep{hu2019strategies,You2020GraphCL,you2021graph}. Thus, we as well adopt GIN as the base model for 2D representation.
Recall each molecule is represented as a molecular graph, {\em{i.e.}}, $g_{\text{2D}} = (X, E)$, where $X$ and $E$ are feature matrices for atoms and bonds respectively. Then the message passing function is defined as:
\begin{equation}
z_i^{(k+1)} = \text{MLP}_{\text{atom}}^{(k+1)} \Big(z_i^{(k)} + \sum_{j \in \mathcal{N}(i)} \big( z_j^{(k)} + \text{MLP}_{\text{bond}}^{(k+1)}(E_{ij}) \big) \Big),
\end{equation}
where $z^{(0)}=X$ and $\text{MLP}_{\text{atom}}^{(k+1)}$ and $\text{MLP}_{\text{bond}}^{(k+1)}$ are the $(k+1)$-th MLP layers on the atom and bond levels, respectively. Repeating this for $K$ times, we can encode $K$-hop neighborhood information for each center atom in the molecular data, and we take the last layer for each node/atom representation. The graph-level molecular representation is the mean of the node representations:
\begin{equation}
z({\bm{x}}) = \frac{1}{N} \sum_{i} z_i^{(K)}
\end{equation}
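A minimal PyTorch-style sketch of this message passing layer is given below, using dense index operations for clarity; the bond attribute dimension and MLP depths are illustrative assumptions:
\begin{verbatim}
import torch
import torch.nn as nn

class GINLayer(nn.Module):
    """One GIN-style layer following the update rule above."""

    def __init__(self, dim, bond_dim):
        super().__init__()
        self.mlp_atom = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.mlp_bond = nn.Linear(bond_dim, dim)

    def forward(self, z, edge_index, edge_attr):
        # z: [n, dim] node states; edge_index: [2, m]; edge_attr: [m, bond_dim]
        src, dst = edge_index
        messages = z[src] + self.mlp_bond(edge_attr)            # z_j + MLP_bond(E_ij)
        agg = torch.zeros_like(z).index_add_(0, dst, messages)  # sum over neighbors j
        return self.mlp_atom(z + agg)                           # MLP_atom(z_i + aggregation)

def readout(z):
    """Graph-level representation as the mean of the node representations."""
    return z.mean(dim=0)
\end{verbatim}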
\subsection{3D Molecular Graph Neural Network} \label{appendix:3d_gnn}
Recently, 3D geometric representation learning has brought breakthrough progress in molecule modeling~\citep{schutt2017schnet,fuchs2020se,satorras2021n,liu2021spherical,jumper2021highly}. A 3D molecular graph additionally includes the spatial locations of the atoms, which need not be static since, in real scenarios, atoms are in continual motion on \textit{a potential energy surface}~\citep{axelrod2020geom}. The 3D structures at the local minima on this surface are named \textit{molecular conformation} or \textit{conformer}. As the molecular properties are a function of the conformer ensembles~\citep{hawkins2017conformation}, this reveals another limitation of existing mainstream methods: predicting properties from a single 2D or 3D graph cannot account for this fact~\citep{axelrod2020geom}, while our proposed method can alleviate this issue to a certain extent.
More specifically, the 3D molecular graph additionally includes the spatial positions of the atoms. We represent each conformer as $g_{\text{3D}} = (X, R)$, where $R \in \mathbb{R}^{n \times 3}$ is the 3D-coordinate matrix, and the corresponding representation is:
\begin{equation}
h_{\text{3D}} = \text{GNN-3D}(T_{\text{3D}}(g_{\text{3D}})) = \text{GNN-3D}(T_{\text{3D}}(X, R)),
\end{equation}
where $R$ is the 3D-coordinate matrix and $T_{\text{3D}}$ is the 3D transformation. Note that further information, such as plane and torsion angles, can be derived from the positions.
\paragraph{SchNet}
SchNet~\citep{schutt2017schnet} is composed of the following key steps:
\begin{equation}
\begin{aligned}
& z_i^{(0)} = \text{embedding} (x_i)\\
& z_i^{(t+1)} = \text{MLP} \Big( \sum_{j=1}^{n} f(z_j^{(t)}, r_i, r_j) \Big)\\
& h_i = \text{MLP} (z_i^{(K)}),
\end{aligned}
\end{equation}
where $K$ is the number of hidden layers, and
\begin{equation}
f(z_j, r_i, r_j) = z_j \cdot e_k(r_i - r_j) = z_j \cdot \exp(- \gamma \| \|r_i - r_j\|_2 - \mu \|_2^2)
\end{equation}
is the continuous-filter convolution layer, enabling the modeling of continuous positions of atoms.
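As an illustration of the continuous-filter convolution, the sketch below expands pairwise distances in radial basis functions and modulates neighbor features with the resulting filters. The helper names, the RBF-center grid, and the value of $\gamma$ are assumptions made for this example only, not the exact SchNet or GraphMVP code.
\begin{verbatim}
import torch
import torch.nn as nn

def rbf_expand(dist, centers, gamma):
    # dist: [n, n] pairwise distances; centers: [K] RBF centers mu_k
    return torch.exp(-gamma * (dist.unsqueeze(-1) - centers) ** 2)    # [n, n, K]

def cfconv(z, coords, centers, gamma, filter_net):
    # z: [n, d] atom features; coords: [n, 3]; filter_net maps [..., K] -> [..., d]
    dist = torch.cdist(coords, coords)                                # ||r_i - r_j||_2
    w = filter_net(rbf_expand(dist, centers, gamma))                  # [n, n, d] filters
    return (z.unsqueeze(0) * w).sum(dim=1)                            # sum_j z_j * filter(r_i, r_j)

# illustrative wiring
n, d, K = 8, 16, 32
filter_net = nn.Sequential(nn.Linear(K, d), nn.Softplus(), nn.Linear(d, d))
out = cfconv(torch.randn(n, d), torch.randn(n, 3),
             torch.linspace(0.0, 5.0, K), 10.0, filter_net)
\end{verbatim}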
We adopt SchNet for the following reasons: (1) SchNet is a very strong geometric representation method under \textit{fair} benchmarking; (2) SchNet can be trained more efficiently compared to other recent 3D models. To support these two points, we make a comparison among the most recent 3D geometric models~\cite{fuchs2020se,satorras2021n,liu2021spherical} on the QM9 dataset. QM9~\citep{wu2018moleculenet} is a molecule dataset approximating 12 thermodynamic properties calculated with density functional theory (DFT). Notice: UNiTE~\citep{qiao2021unite} is the state-of-the-art 3D GNN, but it requires commercial software for feature extraction, thus we exclude it for now.
\begin{table}[H]
\centering
\scriptsize
\caption{
Reproduced MAE on QM9. 100k for training, 17,748 for val, 13,083 for test. The last column is the approximated running time.
}
\label{tab:app:qm9}
\begin{tabular}{l c c c c c c c c c c c c r}
\toprule
& alpha & gap & homo & lumo & mu & cv & g298 & h298 & r2 & u298 & u0 & zpve & time\\
\midrule
SchNet~\citep{schutt2017schnet} & 0.077 & 50 & 32 & 26 & 0.030 & 0.032 & 15 & 14 & 0.122 & 14 & 14 & 1.751 & 3h\\
SE(3)-Trans~\citep{fuchs2020se} & 0.143 & 59 & 36 & 36 & 0.052 & 0.068 & 68 & 72 & 1.969 & 68 & 74 & 5.517 & 50h\\
EGNN~\citep{satorras2021n} & 0.075 & 49 & 29 & 26 & 0.030 & 0.032 & 11 & 10 & 0.076 & 10 & 10 & 1.562 & 24h\\
SphereNet~\citep{liu2021spherical} & 0.054 & 41 & 22 & 19 & 0.028 & 0.027 & 10 & 8 & 0.295 & 8 & 8 & 1.401 & 50h\\
\bottomrule
\end{tabular}
\end{table}
\Cref{tab:app:qm9} shows that, under a fair comparison (w.r.t. data splitting, seed, CUDA version, etc.), SchNet reaches comparable performance, while its efficiency is much better. Combining these two points, we adopt SchNet in the current version of GraphMVP.
\subsection{Summary}
To sum up, in GraphMVP, the most important message we want to deliver is how to design a well-motivated SSL algorithm to extract useful 3D geometry information to augment the 2D representation for downstream fine-tuning. GraphMVP\, is model-agnostic, and we leave the more advanced 3D~\citep{fuchs2020se,satorras2021n,liu2021spherical,jumper2021highly} and 2D~\citep{yang2019analyzing,liu2018n,corso2020principal} GNNs for future exploration.
In addition, molecular property prediction tasks have rich alternative representation methods, including SMILES~\cite{weininger1988smiles,hirohara2018convolutional} and biological knowledge graphs~\cite{wang2021multi,liu2022structured}. There is another line of SSL research on these representations~\cite{liustructured,zhu2021dual,fang2021molecular}, yet it is beyond the scope of this paper.
\section{Maximize Mutual Information} \label{sec:app:MI}
In what follows, we use $X$ and $Y$ to denote the data spaces for the 2D graph and the 3D graph, respectively. The latent representations are denoted as $h_{\bm{x}}$ and $h_{\bm{y}}$.
\subsection{Formulation}
The standard formulation for mutual information (MI) is
\begin{equation} \label{eq:app:MI}
\begin{aligned}
I(X;Y)
& = \mathbb{E}_{p({\bm{x}},{\bm{y}})} \big[ \log \frac{p({\bm{x}},{\bm{y}})}{p({\bm{x}}) p({\bm{y}})} \big].
\end{aligned}
\end{equation}
An intuitive illustration of MI, adapted from Wikipedia, is given in~\Cref{fig:app:MI}.
\begin{figure}[htb!]
\centering
\includegraphics[width=0.5\textwidth]{figures/MutualInfo_Final.pdf}
\caption{Venn diagram of mutual information. Adapted from Wikipedia.}
\label{fig:app:MI}
\end{figure}
Mutual information (MI) between random variables measures their non-linear dependence. As can be seen in~\Cref{eq:app:MI}, the larger the divergence between the joint $p({\bm{x}}, {\bm{y}})$ and the product of the marginals $p({\bm{x}}) p({\bm{y}})$, the stronger the dependence between $X$ and $Y$.
Thus, following this logic, maximizing MI between 2D and 3D views can force the 3D/2D representation to capture higher-level factors, {\em{e.g.}}, the occurrence of important substructure that is semantically vital for downstream tasks. Or equivalently, maximizing MI can decrease the uncertainty in 2D representation given 3D geometric information.
\subsection{A Lower Bound to MI}
To estimate MI, we first derive a lower bound:
\begin{equation}
\begin{aligned}
I(X;Y)
& = \mathbb{E}_{p({\bm{x}},{\bm{y}})} \Big[ \log \frac{p({\bm{x}},{\bm{y}})}{p({\bm{x}}) p({\bm{y}})} \Big]\\
& \ge \mathbb{E}_{p({\bm{x}},{\bm{y}})} \Big[ \log \frac{p({\bm{x}},{\bm{y}})}{\sqrt{p({\bm{x}}) p({\bm{y}})}} \Big]\\
& = \frac{1}{2} \mathbb{E}_{p({\bm{x}},{\bm{y}})} \Big[ \log \frac{(p({\bm{x}},{\bm{y}}))^2}{p({\bm{x}}) p({\bm{y}})} \Big]\\
& = \frac{1}{2} \mathbb{E}_{p({\bm{x}},{\bm{y}})} \Big[ \log p({\bm{x}}|{\bm{y}}) \Big] + \frac{1}{2} \mathbb{E}_{p({\bm{x}},{\bm{y}})} \Big[ \log p({\bm{y}}|{\bm{x}}) \Big]\\
& = -\frac{1}{2} [H(Y|X) + H(X|Y)].
\end{aligned}
\end{equation}
Thus, we transform the MI maximization problem into minimizing the following objective:
\begin{equation} \label{eq:app:MI_objective}
\begin{aligned}
\mathcal{L}_{\text{MI}} & = \frac{1}{2} [H(Y|X) + H(X|Y)].
\end{aligned}
\end{equation}
In the following sections, we describe two self-supervised learning methods for maximizing this MI objective. Notice that the methods are very general and can be applied to various applications; here we apply them mainly to make 3D geometry useful for 2D representation learning on molecules.
\section{Contrastive Self-Supervised Learning} \label{sec:app:contrastive_ssl}
The essence of contrastive self-supervised learning is to align positive view pairs and contrast negative view pairs, such that the obtained representation space is well distributed~\citep{wang2020understanding}. We display the pipeline in~\Cref{fig:app:contrastive_ssl}. Along the research line in graph SSL~\citep{liu2021graph,xie2021self,wu2021self,liu2021self}, InfoNCE and EBM-NCE are the two most-widely used, as discussed below.
\begin{figure}[htb!]
\centering
\includegraphics[width=\textwidth]{figures/Diagram_Contrastive_final.pdf}
\vspace{-4ex}
\caption{
Contrastive SSL in GraphMVP. The black dashed circles represent subgraph masking.
}
\label{fig:app:contrastive_ssl}
\vspace{-2ex}
\end{figure}
\subsection{InfoNCE} \label{sec:app:infonce}
InfoNCE~\citep{oord2018representation} was first proposed to approximate the MI in~\Cref{eq:app:MI}:
\begin{equation}\label{eq:app:InfoNCE}
\footnotesize{
\begin{aligned}
\mathcal{L}_{\text{InfoNCE}} =
-\frac{1}{2} \mathbb{E} &\left[
\log \frac{\exp(f_{{\bm{x}}}({\bm{x}}, {\bm{y}}))}{\exp(f_{{\bm{x}}}({\bm{x}}, {\bm{y}})) + \sum_j \exp(f_{{\bm{x}}}({\bm{x}}^{j},{\bm{y}}))} + \log \frac{\exp(f_{{\bm{y}}}({\bm{y}},{\bm{x}}))}{\exp(f_{{\bm{y}}}({\bm{y}},{\bm{x}})) + \sum_j \exp(f_{{\bm{y}}}({\bm{y}}^{j},{\bm{x}}))} \right],
\end{aligned}
}
\end{equation}
where ${\bm{x}}^{j}, {\bm{y}}^{j}$ are randomly sampled 2D and 3D views with respect to the anchor pair $({\bm{x}},{\bm{y}})$. $f_{{\bm{x}}}({\bm{x}},{\bm{y}})$ and $f_{{\bm{y}}}({\bm{y}},{\bm{x}})$ are scoring functions for the two corresponding views, whose formulation can be quite flexible. Here we use $f_{\bm{x}}({\bm{x}},{\bm{y}}) = f_{\bm{y}}({\bm{y}},{\bm{x}}) = \exp(\langle h_{\bm{x}}, h_{\bm{y}} \rangle)$.
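For concreteness, below is a minimal sketch of a symmetric InfoNCE objective with in-batch negatives, assuming an inner-product scoring function; the batch construction and function names are illustrative assumptions rather than the exact GraphMVP code.
\begin{verbatim}
import torch
import torch.nn.functional as F

def info_nce(h_x, h_y):
    # h_x: [B, d] 2D representations; h_y: [B, d] 3D representations of the same B molecules
    logits = h_x @ h_y.t()                        # score of every (2D, 3D) pair in the batch
    labels = torch.arange(h_x.size(0), device=h_x.device)
    loss_x = F.cross_entropy(logits, labels)      # anchor 2D view, contrast against other 3D views
    loss_y = F.cross_entropy(logits.t(), labels)  # anchor 3D view, contrast against other 2D views
    return 0.5 * (loss_x + loss_y)
\end{verbatim}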
\paragraph{Derivation of InfoNCE}
\begin{equation}
\begin{aligned}
I(X;Y) - \log (K)
& = \mathbb{E}_{p({\bm{x}}, {\bm{y}})} \big[\log \frac{1}{K} \frac{p({\bm{x}}, {\bm{y}})}{p({\bm{x}}) p({\bm{y}})} \big]\\
& = \sum_{{\bm{x}}^{i},{\bm{y}}^{i}} \big[\log \frac{1}{K} \frac{p({\bm{x}}^{i},{\bm{y}}^{i})}{p({\bm{x}}^{i}) p({\bm{y}}^{i})} \big]\\
& \ge -\sum_{{\bm{x}}^{i},{\bm{y}}^{i}} \big[\log \big( 1 + (K-1) \frac{p({\bm{x}}^{i}) p({\bm{y}}^{i})}{p({\bm{x}}^{i},{\bm{y}}^{i})} \big)\big]\\
& = -\sum_{{\bm{x}}^{i},{\bm{y}}^{i}} \big[\log \frac{\frac{p({\bm{x}}^{i},{\bm{y}}^{i})}{p({\bm{x}}^{i}) p({\bm{y}}^{i})} + (K-1)}{\frac{p({\bm{x}}^{i},{\bm{y}}^{i})}{p({\bm{x}}^{i}) p({\bm{y}}^{i})}} \big]\\
& \approx -\sum_{{\bm{x}}^{i},{\bm{y}}^{i}} \big[ \log \frac{\frac{p({\bm{x}}^{i},{\bm{y}}^{i})}{p({\bm{x}}^{i}) p({\bm{y}}^{i})} + (K-1)\mathbb{E}_{{\bm{x}}^{j} \ne {\bm{x}}^{i}}\frac{p(x^{j},{\bm{y}}^{i})}{p({\bm{x}}^{j}) p({\bm{y}}^{i})} }{\frac{p({\bm{x}}^{i},{\bm{y}}^{i})}{p({\bm{x}}^{i}) p({\bm{y}}^{i})}} \big] \quad \text{// \textcircled{1}}\\
& = \sum_{{\bm{x}}^{i},{\bm{y}}^{i}} \big[ \log \frac{\exp(f_{\bm{x}}({\bm{x}}^{i},{\bm{y}}^{i}))}{\exp(f_{\bm{x}}({\bm{x}}^{i},{\bm{y}}^{i})) + \sum_{j=1}^K \exp(f_{\bm{x}}({\bm{x}}^{j},{\bm{y}}^{i}))} \big],
\end{aligned}
\end{equation}
where we set $ f_{\bm{x}}({\bm{x}}^{i},{\bm{y}}^{i}) = \log \frac{p({\bm{x}}^{i},{\bm{y}}^{i})}{p({\bm{x}}^{i})p({\bm{y}}^{i})}$.
Notice that in \textcircled{1}, we use the data ${\bm{x}} \in X$ as the anchor points. If we use ${\bm{y}} \in Y$ as the anchor points and follow similar steps, we can obtain
\begin{equation}
\begin{aligned}
I(X;Y) - \log(K) \ge \sum_{{\bm{y}}^{i},{\bm{x}}^{i}} \big[ \log \frac{\exp(f_{\bm{y}}({\bm{y}}^{i},{\bm{x}}^{i}))}{\exp(f_{\bm{y}}({\bm{y}}^{i},{\bm{x}}^{i})) + \sum_{j=1}^K \exp (f_{\bm{y}}({\bm{y}}^{j},{\bm{x}}^{i}))} \big].
\end{aligned}
\end{equation}
Thus, by adding the two bounds together, we obtain the objective function in~\Cref{eq:app:InfoNCE}.
\subsection{EBM-NCE} \label{app:ebm_nce}
We here provide an alternative approach to maximizing MI using an energy-based model (EBM). To the best of our knowledge, we are the \textbf{first} to give a rigorous proof of using EBM to maximize MI.
\subsubsection{Energy-Based Model (EBM)}
An energy-based model (EBM) is a powerful tool for modeling the data distribution. The classic formulation is:
\begin{equation} \label{eq:app:EBM_original}
p({\bm{x}}) = \frac{\exp(-E({\bm{x}}))}{A},
\end{equation}
where the bottleneck is the intractable partition function $A = \int_{\bm{x}} \exp(-E({\bm{x}})) d{\bm{x}}$. Recently, there has been considerable progress along this direction~\citep{gutmann2010noise,du2020improved,song2020score,song2021train}. Noise Contrastive Estimation (NCE)~\citep{gutmann2010noise} is one of the powerful tools here, as we will introduce later.
\subsubsection{EBM for MI}
Recall that our objective function is~\Cref{eq:app:MI_objective}: $\mathcal{L}_{\text{MI}} = \frac{1}{2} [H(Y|X) + H(X|Y)]$. Then we model the conditional likelihood with energy-based model (EBM). This gives us
\begin{equation} \label{eq:app:EBM}
\begin{aligned}
\mathcal{L}_{\text{EBM}} = -\frac{1}{2} \mathbb{E}_{p({\bm{x}},{\bm{y}})} \Big[ \log \frac{\exp(f_{\bm{x}}({\bm{x}}, {\bm{y}}))}{A_{{\bm{x}}|{\bm{y}}}} + \log \frac{\exp(f_{\bm{y}}({\bm{y}}, {\bm{x}}))}{A_{{\bm{y}}|{\bm{x}}}} \Big],
\end{aligned}
\end{equation}
where $f_{\bm{x}}({\bm{x}}, {\bm{y}}) = -E({\bm{x}}|{\bm{y}})$ and $f_{\bm{y}}({\bm{y}}, {\bm{x}}) = -E({\bm{y}}|{\bm{x}})$ are the negative energy functions, and $A_{{\bm{x}}|{\bm{y}}}$ and $A_{{\bm{y}}|{\bm{x}}}$ are the corresponding partition functions.
Under the EBM framework, if we solve~\Cref{eq:app:EBM} with Noise Contrastive Estimation (NCE)~\citep{gutmann2010noise}, the final EBM-NCE objective is
\begin{equation} \label{eq:app:EBM_NCE}
\begin{aligned}
\mathcal{L}_{\text{EBM-NCE}}
= & -\frac{1}{2} \mathbb{E}_{p_{\text{data}}(y)} \Big[ \mathbb{E}_{p_n({\bm{x}}|{\bm{y}})} [\log \big(1-\sigma(f_{\bm{x}}({\bm{x}}, {\bm{y}}))\big)] + \mathbb{E}_{p_{\text{data}}({\bm{x}}|{\bm{y}})} [\log \sigma(f_{\bm{x}}({\bm{x}}, {\bm{y}}))] \Big] \\
& ~~ - \frac{1}{2} \mathbb{E}_{p_{\text{data}}(x)} \Big[ \mathbb{E}_{p_n({\bm{y}}|{\bm{x}})} [\log \big(1-\sigma(f_{\bm{y}}({\bm{y}}, {\bm{x}}))\big)] + \mathbb{E}_{p_{\text{data}}({\bm{y}}|{\bm{x}})} [\log \sigma(f_{\bm{y}}({\bm{y}}, {\bm{x}}))] \Big].
\end{aligned}
\end{equation}
Next we will give the detailed derivations.
\subsubsection{Derivation of conditional EBM with NCE}
WLOG, let us consider $p_\theta({\bm{x}}|{\bm{y}})$ first; under the EBM framework it is as follows:
\begin{equation} \label{eq:app:EBM_SSL}
p_\theta({\bm{x}}|{\bm{y}}) = \frac{\exp(-E({\bm{x}}|{\bm{y}}))}{ \int \exp(-E({\tilde {\bm{x}}}|{\bm{y}})) d{\tilde {\bm{x}}}} = \frac{\exp(f_{\bm{x}}({\bm{x}}, {\bm{y}}))}{\int \exp(f_{\bm{x}}({\tilde {\bm{x}}}, {\bm{y}})) d{\tilde {\bm{x}}}} = \frac{\exp(f_{\bm{x}}({\bm{x}}, {\bm{y}}))}{A_{{\bm{x}}|{\bm{y}}}}.
\end{equation}
Then we solve this using NCE. NCE handles the intractability issue by transforming it into a binary classification task. We take the partition function $A_{{\bm{x}}|{\bm{y}}}$ as a parameter and introduce a noise distribution $p_n$. Based on this, we introduce a mixture model: ${\bm{z}}=0$ if the conditional ${\bm{x}}|{\bm{y}}$ is from $p_n({\bm{x}}|{\bm{y}})$, and ${\bm{z}}=1$ if ${\bm{x}}|{\bm{y}}$ is from $p_{\text{data}}({\bm{x}}|{\bm{y}})$. So the joint distribution is:
\begin{equation*}
p_{n,\text{\text{data}}}({\bm{x}}|{\bm{y}}) = p({\bm{z}}=1) p_{\text{data}}({\bm{x}}|{\bm{y}}) + p({\bm{z}}=0) p_n({\bm{x}}|{\bm{y}})
\end{equation*}
The posterior of $p({\bm{z}}=0|{\bm{x}},{\bm{y}})$ is
\begin{equation*}
p_{n,\text{\text{data}}}({\bm{z}}=0|{\bm{x}},{\bm{y}}) = \frac{p({\bm{z}}=0) p_n({\bm{x}}|{\bm{y}})}{p(z=0) p_n({\bm{x}}|{\bm{y}}) + p({\bm{z}}=1) p_{\text{data}}({\bm{x}}|{\bm{y}})} = \frac{\nu \cdot p_n({\bm{x}}|{\bm{y}})}{\nu \cdot p_n({\bm{x}}|{\bm{y}}) + p_{\text{data}}({\bm{x}}|{\bm{y}})},
\end{equation*}
where $\nu = \frac{p({\bm{z}}=0)}{p({\bm{z}}=1)}$.
Similarly, we can have the joint distribution under EBM framework as:
\begin{equation*}
p_{n, \theta}({\bm{x}}|{\bm{y}}) = p({\bm{z}}=0) p_n({\bm{x}}|{\bm{y}}) + p({\bm{z}}=1) p_{\theta}({\bm{x}}|{\bm{y}})
\end{equation*}
And the corresponding posterior is:
\begin{equation*}
p_{n,\theta}({\bm{z}}=0|{\bm{x}},{\bm{y}}) = \frac{p({\bm{z}}=0) p_n({\bm{x}}|{\bm{y}})}{p({\bm{z}}=0) p_n({\bm{x}}|{\bm{y}}) + p({\bm{z}}=1) p_{\theta}({\bm{x}}|{\bm{y}})} = \frac{\nu \cdot p_n({\bm{x}}|{\bm{y}})}{\nu \cdot p_n({\bm{x}}|{\bm{y}}) + p_{\theta}({\bm{x}}|{\bm{y}})}
\end{equation*}
We indirectly match $p_\theta({\bm{x}}|{\bm{y}})$ to $p_{\text{data}}({\bm{x}}|{\bm{y}})$ by fitting $p_{n,\theta}({\bm{z}}|{\bm{x}},{\bm{y}})$ to $p_{n,\text{\text{data}}}({\bm{z}}|{\bm{x}},{\bm{y}})$ by minimizing their KL-divergence:
\begin{equation} \label{eq:app:EBM_01}
\begin{aligned}
& \min_\theta D_{\text{KL}}(p_{n,\text{\text{data}}}({\bm{z}}|{\bm{x}},{\bm{y}}) || p_{n,\theta}({\bm{z}}|{\bm{x}},{\bm{y}})) \\
& = \mathbb{E}_{p_{n,\text{\text{data}}}({\bm{x}},{\bm{z}}|{\bm{y}})} [\log p_{n,\theta}({\bm{z}}|{\bm{x}},{\bm{y}})] \\
& = \int \sum_{\bm{z}} p_{n,\text{\text{data}}}({\bm{x}},{\bm{z}}|{\bm{y}}) \cdot \log p_{n,\theta}({\bm{z}}|{\bm{x}},{\bm{y}}) d {\bm{x}}\\
& = \int \Big\{ p({\bm{z}}=0) p_{n,\text{\text{data}}}({\bm{x}}|{\bm{y}},{\bm{z}}=0) \log p_{n,\theta}({\bm{z}}=0|{\bm{x}},{\bm{y}}) \\
& \quad\quad\quad\quad + p({\bm{z}}=1) p_{n,\text{\text{data}}}({\bm{x}}|{\bm{z}}=1,{\bm{y}}) \log p_{n,\theta}({\bm{z}}=1|{\bm{x}},{\bm{y}}) \Big\} d{\bm{x}} \\
& = \nu \cdot \mathbb{E}_{p_{n}({\bm{x}}|{\bm{y}})} \Big[\log p_{n,\theta}({\bm{z}}=0|{\bm{x}},{\bm{y}}) \Big] + \mathbb{E}_{p_{\text{data}}({\bm{x}}|{\bm{y}} )} \Big[\log p_{n,\theta}({\bm{z}}=1|{\bm{x}},{\bm{y}}) \Big] \\
& = \nu \cdot \mathbb{E}_{p_{n}({\bm{x}}|{\bm{y}})} \Big[ \log \frac{\nu \cdot p_n({\bm{x}}|{\bm{y}})}{\nu \cdot p_n({\bm{x}}|{\bm{y}}) + p_{\theta}({\bm{x}}|{\bm{y}})} \Big] + \mathbb{E}_{p_{\text{data}}({\bm{x}}|{\bm{y}} )} \Big[ \log \frac{p_\theta({\bm{x}}|{\bm{y}})}{\nu \cdot p_n({\bm{x}}|{\bm{y}}) + p_{\theta}({\bm{x}}|{\bm{y}})} \Big].\\
\end{aligned}
\end{equation}
The optimal $p_\theta$ is an estimate of the data distribution, {\em{i.e.}}, $p_\theta({\bm{x}}|{\bm{y}}) \approx p_{\text{data}}({\bm{x}}|{\bm{y}})$. We can follow similar steps for $p_\theta({\bm{y}}|{\bm{x}}) \approx p_{\text{data}}({\bm{y}}|{\bm{x}})$. Thus, following~\Cref{eq:app:EBM_01}, the objective function is to maximize
\begin{equation} \label{eq:app:EBM_02}
\begin{aligned}
\nu \cdot \mathbb{E}_{p_{\text{data}}({\bm{y}})} \mathbb{E}_{p_{n}({\bm{x}}|{\bm{y}})} \Big[ \log \frac{\nu \cdot p_n({\bm{x}}|{\bm{y}})}{\nu \cdot p_n({\bm{x}}|{\bm{y}}) + p_{\theta}({\bm{x}}|{\bm{y}})} \Big] + \mathbb{E}_{p_{\text{data}}({\bm{y}})} \mathbb{E}_{p_{\text{data}}({\bm{x}}|{\bm{y}} )} \Big[ \log \frac{p_\theta({\bm{x}}|{\bm{y}})}{\nu \cdot p_n({\bm{x}}|{\bm{y}}) + p_{\theta}({\bm{x}}|{\bm{y}})} \Big].
\end{aligned}
\end{equation}
We then adopt three strategies to approximate~\Cref{eq:app:EBM_02}:
\begin{enumerate}
\item \textbf{Self-normalization.} When the EBM is very expressive, {\em{i.e.}}, using a deep neural network for modeling, we can assume it is able to approximate the normalized density directly~\cite{mnih2012fast,song2021train}. In other words, we can set the partition function $A=1$. This is a self-normalized EBM-NCE, with a normalizing constant close to 1, {\em{i.e.}}, $p({\bm{x}}) = \exp(-E({\bm{x}})) = \exp(f({\bm{x}}))$ in~\Cref{eq:app:EBM_original}.
\item \textbf{Exponential tilting term.} The exponential tilting term~\cite{arbel2020generalized} is another useful trick. It models the distribution as $\tilde p_\theta({\bm{x}}) = q({\bm{x}}) \exp(-E_\theta({\bm{x}}))$, where $q({\bm{x}})$ is the reference distribution. If we use the noise distribution as the reference distribution, the tilted probability is $\tilde p_\theta({\bm{x}}) = p_n({\bm{x}}) \exp(-E_\theta({\bm{x}}))$ in~\Cref{eq:app:EBM_original}.
\item \textbf{Sampling.} For many cases, we only need to sample one negative point for each data point, {\em{i.e.}}, $\nu=1$.
\end{enumerate}
Following these three strategies, the objective function for optimizing $p_{\theta}({\bm{x}}|{\bm{y}})$ becomes
\begin{equation}
\begin{aligned}
& \mathbb{E}_{p_{n}({\bm{x}}|{\bm{y}})} \Big[ \log \frac{ p_n({\bm{x}}|{\bm{y}})}{ p_n({\bm{x}}|{\bm{y}}) + \tilde p_{\theta}({\bm{x}}|{\bm{y}})} \Big] + \mathbb{E}_{p_{\text{data}}({\bm{x}}|{\bm{y}} )} \Big[ \log \frac{\tilde p_\theta({\bm{x}}|{\bm{y}})}{ p_n({\bm{x}}|{\bm{y}}) + \tilde p_{\theta}({\bm{x}}|{\bm{y}})} \Big]\\
= & \mathbb{E}_{p_{n}({\bm{x}}|{\bm{y}})} \Big[ \log \frac{1}{1 + p_{\theta}({\bm{x}}|{\bm{y}})} \Big] + \mathbb{E}_{p_{\text{data}}({\bm{x}}|{\bm{y}} )} \Big[ \log \frac{p_\theta({\bm{x}}|{\bm{y}})}{1 + p_{\theta}({\bm{x}}|{\bm{y}})} \Big]\\
= & \mathbb{E}_{p_{n}({\bm{x}}|{\bm{y}})} \Big[ \log \frac{\exp (- f_{\bm{x}}({\bm{x}}, {\bm{y}}))}{\exp (- f_{\bm{x}}({\bm{x}}, {\bm{y}})) + 1} \Big] + \mathbb{E}_{p_{\text{data}}({\bm{x}}|{\bm{y}} )} \Big[ \log \frac{1}{\exp (- f_{\bm{x}}({\bm{x}}, {\bm{y}})) + 1} \Big]\\
= & \mathbb{E}_{p_{n}({\bm{x}}|{\bm{y}})} \Big[ \log \big(1-\sigma( f_{\bm{x}}({\bm{x}}, {\bm{y}}))\big) \Big] + \mathbb{E}_{p_{\text{data}}({\bm{x}}|{\bm{y}} )} \Big[ \log \sigma( f_{\bm{x}}({\bm{x}}, {\bm{y}})) \Big].
\end{aligned}
\end{equation}
Thus, the final EBM-NCE contrastive SSL objective is
\begin{equation}
\begin{aligned}
\mathcal{L}_{\text{EBM-NCE}}
& = -\frac{1}{2} \mathbb{E}_{p_{\text{data}}({\bm{y}})} \Big[\mathbb{E}_{p_{n}({\bm{x}}|{\bm{y}})} \log \big(1-\sigma( f_{\bm{x}}({\bm{x}}, {\bm{y}}))\big) + \mathbb{E}_{p_{\text{data}}({\bm{x}}|{\bm{y}} )} \log \sigma( f_{\bm{x}}({\bm{x}}, {\bm{y}})) \Big]\\
& ~~~~ - \frac{1}{2} \mathbb{E}_{p_{\text{data}}({\bm{x}})} \Big[\mathbb{E}_{p_{n}({\bm{y}}|{\bm{x}})} \log \big(1-\sigma( f_{\bm{y}}({\bm{y}},{\bm{x}}))\big) + \mathbb{E}_{p_{\text{data}}({\bm{y}}|{\bm{x}})} \log \sigma( f_{\bm{y}}({\bm{y}},{\bm{x}})) \Big].
\end{aligned}
\end{equation}
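A minimal sketch of this objective is given below: each half is a binary cross-entropy that classifies positive (data) pairs against negative (noise) pairs, with $\sigma$ the logistic sigmoid and the scoring function taken as an inner product. Drawing noise pairs by shuffling the batch is an assumption made for illustration; it corresponds to using the empirical marginal as the noise distribution.
\begin{verbatim}
import torch
import torch.nn.functional as F

def ebm_nce(h_x, h_y):
    # h_x, h_y: [B, d] paired 2D / 3D representations; scoring f(x, y) = <h_x, h_y>
    perm = torch.randperm(h_x.size(0))
    pos_x = (h_x * h_y).sum(-1)          # f_x on positive (data) pairs
    neg_x = (h_x[perm] * h_y).sum(-1)    # f_x on noise pairs (shuffled 2D views)
    pos_y = (h_y * h_x).sum(-1)          # f_y on positive pairs
    neg_y = (h_y[perm] * h_x).sum(-1)    # f_y on noise pairs (shuffled 3D views)
    bce = F.binary_cross_entropy_with_logits
    loss_x = bce(pos_x, torch.ones_like(pos_x)) + bce(neg_x, torch.zeros_like(neg_x))
    loss_y = bce(pos_y, torch.ones_like(pos_y)) + bce(neg_y, torch.zeros_like(neg_y))
    return 0.5 * (loss_x + loss_y)
\end{verbatim}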
\subsection{EBM-NCE vs. JSE and InfoNCE}
We acknowledge that there are many other contrastive objectives~\citep{poole2019variational} that can be used to maximize MI. However, in the research line of graph SSL, as summarized in several recent survey papers~\citep{xie2021self,liu2021graph,wu2021self}, the two most used ones are InfoNCE and Jensen-Shannon Estimator (JSE)~\citep{nowozin2016f,hjelm2018learning}.
We conclude that JSE is very similar to EBM-NCE, although the underlying perspectives are quite different, as explained below.
\begin{enumerate}
\item \textbf{Derivation and Intuition.} The derivation process and underlying intuition are different. JSE~\citep{nowozin2016f} starts from the f-divergence, followed by variational estimation and Fenchel duality on the function $f$. Our proposed EBM-NCE is more straightforward: it models the conditional distributions in the MI lower bound in~\Cref{eq:app:MI_objective} with EBM, and solves them using NCE.
\item \textbf{Flexibility.} Modeling the conditional distribution with EBM provides a broader family of algorithms. NCE is just one solution to it, and recent progress on score matching~\citep{song2020score,song2021train} and contrastive divergence~\citep{du2020improved}, though no longer contrastive SSL, adds on more promising directions. Thus, EBM can provide a potential unified framework for structuring our understanding of self-supervised learning.
\item \textbf{Noise distribution.} Starting from~\citep{hjelm2018learning}, the follow-up works on graph SSL~\cite{sun2019infograph,xie2021self,liu2021graph,wu2021self} have adopted the empirical data distribution as the noise distribution. However, this is not the case in EBM-NCE. Classic EBM-NCE uses a fixed noise distribution, while more recent work~\citep{arbel2020generalized} extends it with an adaptively learnable noise distribution. With this discipline, more advanced sampling strategies (w.r.t. the noise distribution) can be proposed, {\em{e.g.}}, adversarial negative sampling in~\citep{hu2021adco}.
\end{enumerate}
In the above, we summarize three key differences between EBM-NCE and JSE, together with a solid and straightforward derivation of EBM-NCE. We believe this can provide an insightful perspective on SSL to the community.
According to the empirical results in~\Cref{sec:experiment_each_loss_component}, we observe that EBM-NCE is better than InfoNCE. This can be explained using the claim from~\citep{khosla2020supervised}, whose main technical contribution is to construct many positives and many negatives per anchor point. The binary cross-entropy in EBM-NCE is able to realize this to some extent: it pushes all positive pairs to be positive and all negative pairs to be negative, whereas the softmax-based cross-entropy in InfoNCE fails to capture this.
To conclude, we introduce EBM for modeling MI, which opens many potential avenues. As for contrastive SSL, EBM-NCE provides a better perspective than JSE and performs better than InfoNCE on graph-level self-supervised learning.
\section{Generative Self-Supervised Learning} \label{sec:app:generative_ssl}
Generative SSL is another classic track for unsupervised pre-training~\citep{kingma2013auto,larsson2016learning,kingma2018glow}, though the main focus is on distribution learning. In GraphMVP, we start with VAE for the following reasons:
\begin{enumerate}
\item One of the biggest attributes of our problem is that the mapping between the two views is stochastic: multiple 3D conformers can correspond to the same 2D topology. Thus, we expect a stochastic model~\citep{nielsen2020survae} like the VAE, instead of a deterministic one.
\item For pre-training and fine-tuning, we need to learn an explicit and powerful representation function that can be used for downstream tasks.
\item Decoders for structured data like graphs are often complicated, {\em{e.g.}}, auto-regressive generation, which makes them suboptimal.
\end{enumerate}
To cope with these challenges, in GraphMVP, we start with a VAE-like generative model, and later propose a \textit{lightweight} and \textit{smart} surrogate loss as the objective function.
Notice that for notational simplicity, in this section we use $h_{{\bm{x}}}$ and $h_{{\bm{y}}}$ to denote the representations from the 2D and 3D GNN, respectively.
\subsection{Variational Molecule Reconstruction}
As shown in~\Cref{eq:app:MI_objective}, our main motivation is to model the conditional likelihood:
\begin{equation*}
\begin{aligned}
\mathcal{L}_{\text{MI}} & = -\frac{1}{2} \mathbb{E}_{p({\bm{x}},{\bm{y}})} [\log p({\bm{x}}|{\bm{y}}) + \log p({\bm{y}}|{\bm{x}})]
\end{aligned}
\end{equation*}
By introducing a reparameterized variable ${\bm{z}}_{\bm{x}} = \mu_{{\bm{x}}} + \sigma_{{\bm{x}}} \odot \epsilon$, where $\mu_{\bm{x}}$ and $\sigma_{\bm{x}}$ are two flexible functions on $h_{\bm{x}}$, $\epsilon \sim \mathcal{N}(0,I)$ and $\odot$ is the element-wise product,
we have a lower bound on the conditional likelihood:
\begin{equation} \label{eq:app:log_likelihood_01}
\begin{aligned}
\log p({\bm{y}}|{\bm{x}})
\ge \mathbb{E}_{q({\bm{z}}_{\bm{x}}|{\bm{x}})} \big[ \log p({\bm{y}}|{\bm{z}}_{\bm{x}}) \big] - KL(q({\bm{z}}_{\bm{x}}|{\bm{x}}) || p({\bm{z}}_{\bm{x}})).
\end{aligned}
\end{equation}
Similarly, we have
\begin{equation} \label{eq:app:log_likelihood_02}
\log p({\bm{x}}|{\bm{y}}) \ge \mathbb{E}_{q({\bm{z}}_{\bm{y}}|{\bm{y}})} \big[ \log p({\bm{x}}|{\bm{z}}_{\bm{y}}) \big] - KL(q({\bm{z}}_{\bm{y}}|{\bm{y}}) || p({\bm{z}}_{\bm{y}})),
\end{equation}
where ${\bm{z}}_{\bm{y}} = \mu_{\bm{y}} + \sigma_{\bm{y}} \odot \epsilon$. Here $\mu_{\bm{y}}$ and $\sigma_{\bm{y}}$ are flexible functions on $h_{\bm{y}}$, and $\epsilon \sim \mathcal{N}(0, I)$.
For implementation, we take multi-layer perceptrons (MLPs) for $\mu_{\bm{x}}, \mu_{\bm{y}}, \sigma_{\bm{x}}, \sigma_{\bm{y}}$.
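A minimal sketch of this reparameterization step is shown below; the MLP sizes and the choice of parameterizing $\sigma$ through its logarithm are assumptions for illustration, not the exact GraphMVP implementation.
\begin{verbatim}
import torch
import torch.nn as nn

class ReparamHead(nn.Module):
    """Maps a representation h to z = mu(h) + sigma(h) * eps, with eps ~ N(0, I)."""
    def __init__(self, dim):
        super().__init__()
        self.mu = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.log_sigma = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))

    def forward(self, h):
        mu = self.mu(h)
        sigma = self.log_sigma(h).exp()          # keep sigma positive via its log
        z = mu + sigma * torch.randn_like(mu)    # reparameterization trick
        return z, mu, sigma
\end{verbatim}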
Both of the above objectives are composed of a conditional log-likelihood and a KL-divergence. The conditional log-likelihood is also recognized as the \textit{reconstruction term}: it essentially reconstructs the 3D conformers (${\bm{y}}$) from the sampled 2D molecular graph representation (${\bm{z}}_{{\bm{x}}}$). However, performing the graph reconstruction in the data space is not easy: since molecules are discrete, modeling and measuring the reconstruction are not trivial.
\subsection{Variational Representation Reconstruction}
To cope with the data reconstruction issue, we propose a novel generative loss termed variational representation reconstruction (VRR). The pipeline is shown in~\Cref{fig:app:generative_ssl}.
\begin{figure}[H]
\centering
\includegraphics[width=\textwidth]{figures/Diagram_Generative_final.pdf}
\vspace{-4ex}
\caption{VRR SSL in GraphMVP. The black dashed circles represent subgraph masking.
}
\label{fig:app:generative_ssl}
\hfill
\end{figure}
Our proposed solution is very straightforward. Recall that MI is invariant under continuous bijective functions~\citep{belghazi2018mutual}. So suppose we have a representation function $h_{{\bm{y}}}$ satisfying this condition; this guides us to a surrogate loss by transferring the reconstruction from the data space to the continuous representation space:
\begin{equation*}
\begin{aligned}
\mathbb{E}_{q({\bm{z}}_{\bm{x}}|{\bm{x}})}[\log p({\bm{y}}|{\bm{z}}_{\bm{x}})]
& = - \mathbb{E}_{q({\bm{z}}_{\bm{x}}|{\bm{x}})}[ \| h_{{\bm{y}}}(g_x({\bm{z}}_{\bm{x}})) - h_{{\bm{y}}}({\bm{y}}) \|_2^2 ] + C,
\end{aligned}
\end{equation*}
where $g_x$ is the decoder and $C$ is a constant, and this amounts to using the mean-squared error (MSE) for \textbf{reconstruction in the representation space}.
For the reconstruction, the current formulation has two steps: i) the latent code ${\bm{z}}_{{\bm{x}}}$ is first mapped to the molecule space, and ii) it is then mapped to the representation space. We can approximate these two mappings with one projection step, by directly projecting the latent code ${\bm{z}}_{{\bm{x}}}$ to the 3D representation space, {\em{i.e.}}, $q_{\bm{x}}({\bm{z}}_{\bm{x}}) \approx h_{{\bm{y}}}( g_{\bm{x}} ( {\bm{z}}_{\bm{x}} ))$. This gives us the variational representation reconstruction (VRR) SSL objective below:
\begin{equation*}
\begin{aligned}
\mathbb{E}_{q({\bm{z}}_{\bm{x}}|{\bm{x}})}[\log p({\bm{y}}|{\bm{z}}_{\bm{x}})]
& = - \mathbb{E}_{q({\bm{z}}_{\bm{x}}|{\bm{x}})}[ \| q_x({\bm{z}}_{\bm{x}}) - h_{{\bm{y}}}({\bm{y}}) \|_2^2 ] + C.
\end{aligned}
\end{equation*}
\paragraph{$\beta$-VAE}
We consider introducing a $\beta$ variable~\citep{higgins2016beta} to control the disentanglement of the latent representation. To be more specific, we would have
\begin{equation}
\begin{aligned}
\log p({\bm{y}}|{\bm{x}})
& \ge \mathbb{E}_{q({\bm{z}}_{\bm{x}}|{\bm{x}})} \big[ \log p({\bm{y}}|{\bm{z}}_{\bm{x}}) \big] - \beta \cdot KL(q({\bm{z}}_{\bm{x}}|{\bm{x}}) || p({\bm{z}}_{\bm{x}})).
\end{aligned}
\end{equation}
\paragraph{Stop-gradient}
For the optimization of variational representation reconstruction, related work has found that adding the stop-gradient (SG) operator as a regularizer can make the training more stable without collapse, both empirically~\citep{grill2020bootstrap,chen2021exploring} and theoretically~\citep{tian2021understanding}. Here, we also utilize this SG operation in the objective function:
\begin{equation}
\mathbb{E}_{q({\bm{z}}_{\bm{x}}|{\bm{x}})}[\log p({\bm{y}}|{\bm{z}}_{\bm{x}})] = - \mathbb{E}_{q({\bm{z}}_{\bm{x}}|{\bm{x}})}[ \| q_x({\bm{z}}_{\bm{x}}) - \text{SG}(h_{{\bm{y}}}({\bm{y}})) \|_2^2 ] + C.
\end{equation}
\paragraph{Objective function for VRR}
Thus, combining the two techniques mentioned above, the final objective function for VRR is:
\begin{equation} \label{eq:app:final_vrr}
\begin{aligned}
\mathcal{L}_{\text{VRR}}
& = \frac{1}{2} \Big[ \mathbb{E}_{q({\bm{z}}_{\bm{x}}|{\bm{x}})} \big[ \| q_x({\bm{z}}_{\bm{x}}) - \text{SG}(h_{{\bm{y}}}({\bm{y}})) \|_2^2 \big] + \mathbb{E}_{q({\bm{z}}_{\bm{y}}|{\bm{y}})} \big[ \| q_y({\bm{z}}_{\bm{y}}) - \text{SG}(h_{{\bm{x}}}({\bm{x}})) \|_2^2 \big] \Big]\\
& \quad\quad + \frac{\beta}{2} \cdot \Big[ KL(q({\bm{z}}_{\bm{x}}|{\bm{x}}) || p({\bm{z}}_{\bm{x}})) + KL(q({\bm{z}}_{\bm{y}}|{\bm{y}}) || p({\bm{z}}_{\bm{y}})) \Big].
\end{aligned}
\end{equation}
Note that MI is invariant under continuous bijective functions~\citep{belghazi2018mutual}; thus this surrogate loss would be exact if the encoding functions $h_{{\bm{y}}}$ and $h_{{\bm{x}}}$ satisfied this condition. However, we find that GNNs (both GIN and SchNet), though not meeting this condition, provide quite robust performance empirically, which justifies the effectiveness of VRR.
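A minimal sketch of the final VRR objective is given below: a representation-space MSE with the stop-gradient realized through \texttt{detach}, plus a $\beta$-weighted KL term toward a standard normal prior. The projector outputs \texttt{qx\_zx} and \texttt{qy\_zy} (latent codes projected into the other view's representation space) are assumed inputs for this example.
\begin{verbatim}
import torch
import torch.nn.functional as F

def kl_to_std_normal(mu, sigma):
    # KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over dimensions, averaged over the batch
    return 0.5 * (sigma.pow(2) + mu.pow(2) - 1.0 - 2.0 * sigma.log()).sum(-1).mean()

def vrr_loss(qx_zx, h_y, mu_x, sigma_x, qy_zy, h_x, mu_y, sigma_y, beta=1.0):
    # qx_zx: projection of z_x into the 3D representation space; qy_zy: the reverse direction
    rec = 0.5 * (F.mse_loss(qx_zx, h_y.detach())      # stop-gradient on the target via detach
                 + F.mse_loss(qy_zy, h_x.detach()))
    kl = 0.5 * (kl_to_std_normal(mu_x, sigma_x) + kl_to_std_normal(mu_y, sigma_y))
    return rec + beta * kl
\end{verbatim}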
\subsection{Variational Representation Reconstruction and Non-Contrastive SSL}
By introducing VRR, we provide another perspective to understand the generative SSL, including the recently-proposed non-contrastive SSL~\citep{grill2020bootstrap,chen2021exploring}.
We provide a unified structure on the intra-data generative SSL:
\begin{itemize}
\item Reconstruction to the data space, like~\Cref{eq:variational_lower_bound,eq:app:log_likelihood_01,eq:app:log_likelihood_02}.
\item Reconstruction to the representation space, {\em{i.e.}}, VRR in~\Cref{eq:app:final_vrr}.
\begin{itemize}
\item If we \textbf{remove the stochasticity}, then it is simply the representation reconstruction (RR), as we tested in the ablation study~\Cref{sec:experiment_each_loss_component}.
\item If we \textbf{remove the stochasticity} and assume the two views are \textbf{sharing the same representation function}, like a shared CNN for multi-view learning on images, then it reduces to BYOL~\citep{grill2020bootstrap} and SimSiam~\citep{chen2021exploring}. In other words, these recently-proposed non-contrastive SSL methods are indeed special cases of VRR.
\end{itemize}
\end{itemize}
\section{Dataset Overview} \label{app:sec:dataset}
\subsection{Pre-Training Dataset Overview}
In this section, we provide the basic statistics of the pre-training dataset (GEOM).
In~\Cref{fig:hist}, we plot the histogram (logarithmic scale on the y-axis) and the cumulative distribution of the number of conformers per molecule. As shown by the histogram and the curve, a certain number of molecules have over 1,000 possible 3D conformer structures, while over 80\% of molecules have fewer than 100 conformers.
\begin{figure}[htb!]
\centering
\includegraphics[width=\textwidth]{figures/hist.pdf}
\caption{Statistics on the conformers of each molecule\label{fig:hist}}
\end{figure}
In~\Cref{fig:cumulative}, we plot the histogram of the summation of the top (descending, sorted by weight) \{1,5,10,20\} conformer weights. The physical meaning of the weight is the proportion with which each conformer occurs in nature. We observe that the top 5 or 10 conformers are sufficient, as they dominate nearly all the natural observations. Such a long-tailed distribution is also in alignment with our findings in the ablation studies: utilizing the top five conformers in GraphMVP\, strikes a good balance between effectiveness and efficiency.
\begin{figure}[htb!]
\centering
\includegraphics[width=\textwidth]{figures/weight_conf.pdf}
\caption{Sum of occurrence weights for the top major conformers\label{fig:cumulative}}
\end{figure}
\subsection{Downstream Dataset Overview} \label{app:sec:downstream_datasets}
In this section, we review the four main categories of datasets used for downstream tasks.
\paragraph{Molecular Property: Pharmacology}
The Blood-Brain Barrier Penetration (BBBP)~\cite{martins2012bayesian} dataset measures whether a molecule will penetrate the central nervous system.
All three datasets, Tox21~\cite{tox21}, ToxCast~\cite{wu2018moleculenet}, and ClinTox~\cite{gayvert2016data} are related to the toxicity of molecular compounds.
The Side Effect Resource (SIDER)~\cite{kuhn2015sider} dataset stores the adverse drug reactions on a marketed drug database.
\paragraph{Molecular Property: Physical Chemistry}
The ESOL dataset~\cite{delaney2004esol} measures the aqueous solubility of molecular compounds. The Lipophilicity (Lipo) dataset is a subset of ChEMBL~\cite{gaulton2012chembl} measuring the molecule octanol/water distribution coefficient. The CEP dataset is a subset of the Harvard Clean Energy Project (CEP)~\cite{hachmann2011harvard}, which estimates the organic photovoltaic efficiency.
\paragraph{Molecular Property: Biophysics}
Maximum Unbiased Validation (MUV)~\cite{rohrer2009maximum} is another sub-database from PCBA, obtained by applying a refined nearest-neighbor analysis. HIV is from the Drug Therapeutics Program (DTP) AIDS Antiviral Screen~\cite{zaharevitz2015aids}, and it aims at predicting the ability to inhibit HIV replication. BACE measures the binding results for a set of inhibitors of $\beta$-secretase 1 (BACE-1), and is gathered in MoleculeNet~\cite{wu2018moleculenet}. Malaria~\cite{gamo2010thousands} measures the drug efficacy against the parasite that causes malaria.
\paragraph{Drug-Target Affinity}
Davis~\citep{davis2011comprehensive} measures the binding affinities between kinase inhibitors and kinases, scored by the $K_d$ value (kinase dissociation constant).
KIBA~\citep{tang2014making} contains binding affinities for kinase inhibitors from different sources, including $K_i$, $K_d$ and $\text{IC}_{50}$. KIBA scores~\citep{ozturk2018deepdta} are constructed to optimize the consistency among these values.
\begin{table}[H]
\centering
\small
\caption{Summary for the molecule chemical datasets.}
\begin{tabular}{l l r r r r}
\toprule
Dataset & Task & \# Tasks & \# Molecules & \# Proteins & \# Molecule-Protein pairs\\
\midrule
BBBP & Classification & 1 & 2,039 & -& -\\
Tox21 & Classification & 12 & 7,831 & -& -\\
ToxCast & Classification & 617 & 8,576 & -& -\\
Sider & Classification & 27 & 1,427 & -& -\\
ClinTox & Classification & 2 & 1,478 & -& -\\
MUV & Classification & 17 & 93,087 & -& -\\
HIV & Classification & 1 & 41,127 & -& -\\
Bace & Classification & 1 & 1,513 & -& -\\
\midrule
Delaney & Regression & 1 & 1,128 & -& -\\
Lipo & Regression & 1 & 4,200 & -& -\\
Malaria & Regression & 1 & 9,999 & -& -\\
CEP & Regression & 1 & 29,978 & -& -\\
\midrule
Davis & Regression & 1 & 68 & 379 & 30,056 \\
KIBA & Regression & 1 & 2,068 & 229 & 118,254 \\
\bottomrule
\end{tabular}
\label{tab:mol_dataset_summary}
\end{table}
\section{Experiments Details}
\subsection{Self-supervised Learning Baselines} \label{appendix:hyper}
For the SSL baselines in the main results (\Cref{tab:main_results}), we can generally match the numbers reported in the original papers, even though most of them use larger pre-training datasets, like ZINC-2m. Yet, we would like to add some specifications.
\begin{itemize}
\item G-\{Contextual, Motif\}~\citep{rong2020self} proposes a new GNN as the backbone model, and pre-trains on a larger dataset. Both settings are different from ours.
\item JOAO~\citep{you2021graph} has two versions in the original paper. In this paper, we run both versions and report the optimal one.
\item Almost all the graph SSL baselines report the test performance at the optimal validation error, while GraphLoG~\citep{xu2021self} reports 73.2 in its paper using the last-epoch performance. This can be over-optimistic due to overfitting, so here we rerun it with the same downstream evaluation strategy for a fair comparison.
\end{itemize}
\clearpage
\subsection{Ablation Study: The Effect of Masking Ratio and Number of Conformers} \label{app:sec:effect_of_M_and_C}
\begin{table}[H]
\caption{
Full results for the ablation of masking ratio $M$ ($C=5$); MVP is short for GraphMVP.
\vspace{-0.3cm}
}
\fontsize{9pt}{2pt}
\selectfont
\setlength{\tabcolsep}{2pt}
\centering
\begin{tabular}{l l c c c c c c c c c}
\toprule
& $M$\,\,\,\,\,\, & BBBP & Tox21 & ToxCast & Sider & ClinTox & MUV & HIV & Bace & Avg\\
\midrule
-- & -- & 65.4(2.4) & 74.9(0.8) & 61.6(1.2) & 58.0(2.4) & 58.8(5.5) & 71.0(2.5) & 75.3(0.5) & 72.6(4.9) & 67.21 \\
\midrule
\multirow{3}{*}{MVP}
& 0 & 69.4 (1.0)& 75.3 (0.5)& 62.8 (0.2)& 61.9 (0.5)& 74.4 (1.3)& 74.6 (1.4)& 74.6 (1.0)& 76.0 (2.0) & 71.12\\
& 0.15 & 68.5 (0.2)& 74.5 (0.4)& 62.7 (0.1)& 62.3 (1.6)& 79.0 (2.5)& 75.0 (1.4)& 74.8 (1.4)& 76.8 (1.1) & 71.69\\
& 0.3 & 68.6 (0.3)& 74.9 (0.6)& 62.8 (0.4)& 60.0 (0.6)& 74.8 (7.8)& 74.7 (0.8)& 75.5 (1.1)& 82.9 (1.7) & 71.79\\
\midrule
\multirow{3}{*}{MVP-G}
& 0 & 72.4 (1.3)& 74.7 (0.6)& 62.4 (0.2)& 60.3 (0.7)& 76.2 (5.7)& 76.6 (1.7)& 76.4 (1.7)& 78.0 (1.1) & 72.15\\
& 0.15 & 70.8 (0.5)& 75.9 (0.5)& 63.1 (0.2)& 60.2 (1.1)& 79.1 (2.8)& 77.7 (0.6)& 76.0 (0.1)& 79.3 (1.5) & 72.76\\
& 0.3 & 69.5 (0.5)& 74.6 (0.6)& 62.7 (0.3)& 60.8 (1.2)& 80.7 (2.0)& 77.8 (2.5)& 76.2 (0.5)& 81.0 (1.0) & 72.91\\
\midrule
\multirow{3}{*}{MVP-C}
& 0 & 71.5 (0.9)& 75.4 (0.3)& 63.6 (0.5)& 61.8 (0.6)& 77.3 (1.2)& 75.8 (0.6)& 76.1 (0.9)& 79.8 (0.4) & 72.66\\
& 0.15 & 72.4 (1.6)& 74.4 (0.2)& 63.1 (0.4)& 63.9 (1.2)& 77.5 (4.2)& 75.0 (1.0)& 77.0 (1.2)& 81.2 (0.9) & 73.07\\
& 0.3 & 70.7 (0.8)& 74.6 (0.3)& 63.8 (0.7)& 60.4 (0.6)& 83.5 (3.2)& 74.2 (1.6)& 76.0 (1.0)& 82.2 (2.2) & 73.17\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[H]
\caption{
Full results for the ablation of the number of conformers $C$ ($M=0.15$); MVP is short for GraphMVP.
\vspace{-0.3cm}
}
\fontsize{9pt}{2pt}
\selectfont
\setlength{\tabcolsep}{2pt}
\centering
\begin{tabular}{l r c c c c c c c c c}
\toprule
& \,\,\,\,$C$ & BBBP & Tox21 & ToxCast & Sider & ClinTox & MUV & HIV & Bace & Avg\\
\midrule
-- & -- & 65.4(2.4) & 74.9(0.8) & 61.6(1.2) & 58.0(2.4) & 58.8(5.5) & 71.0(2.5) & 75.3(0.5) & 72.6(4.9) & 67.21 \\
\midrule
\multirow{4}{*}{MVP}
& 1 & 69.2 (1.0)& 74.7 (0.4)& 62.5 (0.2)& 63.0 (0.4)& 73.9 (7.2)& 76.2 (0.4)& 75.3 (1.1)& 78.0 (0.5) & 71.61\\
& 5 & 68.5 (0.2)& 74.5 (0.4)& 62.7 (0.1)& 62.3 (1.6)& 79.0 (2.5)& 75.0 (1.4)& 74.8 (1.4)& 76.8 (1.1) & 71.69\\
& 10 & 68.3 (0.5)& 74.2 (0.6)& 63.2 (0.5)& 61.4 (1.0)& 80.6 (0.8)& 75.4 (2.4)& 75.5 (0.6)& 79.1 (2.3) & 72.20\\
& 20 & 68.7 (0.5)& 74.9 (0.3)& 62.7 (0.3)& 60.8 (0.7)& 75.8 (0.5)& 76.3 (1.5)& 77.4 (0.3)& 82.3 (0.8) & 72.39\\
\midrule
\multirow{4}{*}{MVP-G}
& 1 & 70.9 (0.4)& 75.3 (0.7)& 62.8 (0.5)& 61.2 (0.6)& 81.4 (3.7)& 74.2 (2.1)& 76.4 (0.6)& 80.2 (0.7) & 72.80\\
& 5 & 70.8 (0.5)& 75.9 (0.5)& 63.1 (0.2)& 60.2 (1.1)& 79.1 (2.8)& 77.7 (0.6)& 76.0 (0.1)& 79.3 (1.5) & 72.76\\
& 10 & 70.2 (0.9)& 74.9 (0.4)& 63.4 (0.4)& 60.8 (1.0)& 80.6 (0.4)& 76.4 (2.0)& 77.0 (0.3)& 77.4 (1.3) & 72.59\\
& 20 & 69.5 (0.4)& 74.9 (0.4)& 63.3 (0.1)& 60.8 (0.3)& 81.2 (0.5)& 77.3 (2.7)& 76.9 (0.3)& 80.1 (0.5) & 73.00\\
\midrule
\multirow{4}{*}{MVP-C}
& 1 & 69.7 (0.9)& 74.9 (0.5)& 64.1 (0.5)& 61.0 (1.4)& 78.3 (2.7)& 75.7 (1.5)& 74.7 (0.8)& 81.3 (0.7) & 72.46\\
& 5 & 72.4 (1.6)& 74.4 (0.2)& 63.1 (0.4)& 63.9 (1.2)& 77.5 (4.2)& 75.0 (1.0)& 77.0 (1.2)& 81.2 (0.9) & 73.07\\
& 10 & 69.5 (1.5)& 74.5 (0.5)& 63.9 (0.9)& 60.9 (0.4)& 81.1 (1.8)& 76.8 (1.5)& 76.0 (0.8)& 82.0 (1.0) & 73.09\\
& 20 & 72.1 (0.4)& 73.4 (0.7)& 63.9 (0.3)& 63.0 (0.7)& 78.8 (2.4)& 74.1 (1.0)& 74.8 (0.9)& 84.1 (0.6) & 73.02\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Ablation Study: Effect of Each Loss Component}
\begin{table}[H]
\caption{Molecular graph property prediction, we set $C$=5 and $M$=0.15 for GraphMVP methods.}
\small
\setlength{\tabcolsep}{2pt}
\centering
\begin{tabular}{l c c c c c c c c c}
\toprule
& BBBP & Tox21 & ToxCast & Sider & ClinTox & MUV & HIV & Bace & Avg\\
\midrule
\# Molecules & 2,039 & 7,831 & 8,575 & 1,427 & 1,478 & 93,087 & 41,127 & 1,513 & - \\
\# Tasks & 1 & 12 & 617 & 27 & 2 & 17 & 1 & 1 & - \\
\midrule
- & 65.4(2.4) & 74.9(0.8) & 61.6(1.2) & 58.0(2.4) & 58.8(5.5) & 71.0(2.5) & 75.3(0.5) & 72.6(4.9) & 67.21 \\
\midrule
InfoNCE only & 68.9(1.2) & 74.2(0.3) & 62.8(0.2) & 59.7(0.7) & 57.8(11.5) & 73.6(1.8) & 76.1(0.6) & 77.6(0.3) & 68.85 \\
EBM-NCE only & 68.0(0.3) & 74.3(0.4) & 62.6(0.3) & 61.3(0.4) & 66.0(6.0) & 73.1(1.6) & 76.4(1.0) & 79.6(1.7) & 70.15 \\
VAE only & 67.6(1.8) & 73.2(0.5) & 61.9(0.4) & 60.5(0.2) & 59.7(1.6) & 78.6(0.7) & 77.4(0.6) & 75.4(2.1) & 69.29 \\
AE only & 70.5(0.4) & 75.0(0.4) & 62.4(0.4) & 61.0(1.4) & 53.8(1.0) & 74.1(2.9) & 76.3(0.5) & 77.9(0.9) & 68.89 \\
\midrule
InfoNCE + VAE & 69.6(1.1) & 75.4(0.6) & 63.2(0.3) & 59.9(0.4) & 69.3(14.0) & 76.5(1.3) & 76.3(0.2) & 75.2(2.7) & 70.67 \\
EBM-NCE + VAE & 68.5(0.2) & 74.5(0.4) & 62.7(0.1) & 62.3(1.6) & 79.0(2.5) & 75.0(1.4) & 74.8(1.4) & 76.8(1.1) & 71.69 \\
InfoNCE + AE & 65.1(3.1) & 75.4(0.7) & 62.5(0.5) & 59.2(0.6) & 77.2(1.8) & 72.4(1.4) & 75.8(0.6) & 77.1(0.8) & 70.60 \\
EBM-NCE + AE & 69.4(1.0) & 75.2(0.1) & 62.4(0.4) & 61.5(0.9) & 71.1(6.0) & 73.3(0.3) & 75.2(0.6) & 79.3(1.1) & 70.94 \\
\bottomrule
\end{tabular}
\end{table}
\clearpage
\subsection{Broader Range of Downstream Tasks: Molecular Property Prediction} \label{sec:app:regression_results}
\begin{table}[H]
\small
\setlength{\tabcolsep}{10pt}
\centering
\caption{
Results for four molecular property prediction tasks (regression).
For each downstream task, we report the mean (and standard deviation) RMSE of 3 seeds with scaffold splitting.
For GraphMVP\,, we set $M=0.15$ and $C=5$.
The best performance for each task is marked in \textbf{bold}.
}
\begin{tabular}{l c c c c c}
\toprule
& ESOL & Lipo & Malaria & CEP & Avg\\
\midrule
-- & 1.178 (0.044) & 0.744 (0.007) & 1.127 (0.003) & 1.254 (0.030) & 1.07559 \\
\midrule
AM & 1.112 (0.048) & 0.730 (0.004) & 1.119 (0.014) & 1.256 (0.000) & 1.05419 \\
CP & 1.196 (0.037) & 0.702 (0.020) & 1.101 (0.015) & 1.243 (0.025) & 1.06059 \\
JOAO & 1.120 (0.019) & 0.708 (0.007) & 1.145 (0.010) & 1.293 (0.003) & 1.06631 \\
\midrule
GraphMVP\, & 1.091 (0.021) & 0.718 (0.016) & 1.114 (0.013) & 1.236 (0.023) & 1.03968 \\
GraphMVP-G & 1.064 (0.045) & 0.691 (0.013) & 1.106 (0.013) & \textbf{1.228 (0.001)} & 1.02214 \\
GraphMVP-C & \textbf{1.029 (0.033)} & \textbf{0.681} (0.010) & \textbf{1.097 (0.017)} & 1.244 (0.009) & \textbf{1.01283} \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Broader Range of Downstream Tasks: Drug-Target Affinity Prediction}
\begin{table}[H]
\small
\setlength{\tabcolsep}{15pt}
\centering
\caption{
Results for two drug-target affinity prediction tasks (regression).
For each downstream task, we report the mean (and standard deviation) MSE of 3 seeds with random splitting.
For GraphMVP\,, we set $M=0.15$ and $C=5$.
The best performance for each task is marked in \textbf{bold}.
}
\centering
\begin{tabular}{l c c c}
\toprule
& Davis & KIBA & Avg\\
\midrule
-- & 0.286 (0.006) & 0.206 (0.004) & 0.24585 \\
\midrule
AM & 0.291 (0.007) & 0.203 (0.003) & 0.24730 \\
CP & 0.279 (0.002) & 0.198 (0.004) & 0.23823 \\
JOAO & 0.281 (0.004) & 0.196 (0.005) & 0.23871 \\
\midrule
GraphMVP & 0.280 (0.005) & 0.178 (0.005) & 0.22860 \\
GraphMVP-G & \textbf{0.274 (0.002)} & 0.175 (0.001) & 0.22476 \\
GraphMVP-C & 0.276 (0.004) & \textbf{0.168 (0.001)} & \textbf{0.22231} \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Case Studies~\label{appendix:case}}
\textbf{Shape Analysis (3D Diameter Prediction).} Diameter is an important measure in molecule~\cite{liu2010using,melnik2003distance} and genome~\cite{finn2017comparative} modelling. Usually, the longer the 2D diameter (longest adjacency path), the larger the 3D diameter (largest atomic pairwise $\ell_2$ distance). However, this is not always true. Therefore, we are particularly interested in using the 2D graph to predict the 3D diameter when the 2D and 3D molecular landscapes differ substantially (as in~\Cref{fig:case} and~\Cref{fig:case_appendix}). We formulate it as an $n$-class recognition problem, where $n$ is the number of classes after removing the consecutive intervals. We provide numerical results in~\Cref{table:diam} and more visualisation examples in~\Cref{fig:case_vis}.
\begin{figure}[ht]
\centering
\vspace{-2ex}
\includegraphics[width=\textwidth]{figures/Case_Study_Appendix.pdf}
\caption{Molecule selection: we select the molecules that lie in the black dashed box.}
\label{fig:case_appendix}
\end{figure}
\begin{table}[htb!]
\caption{Accuracy on Recognizing Molecular Spatial Diameters~\label{table:diam}}
\small
\setlength{\tabcolsep}{3pt}
\centering
\begin{tabular}{l c c c c c c c c c c}
\toprule
Random & \,AttrMask\, & \,ContextPred\, & GPT-GNN & GraphCL & JOAOv2 & MVP & MVP-G & MVP-C \\
\midrule
38.9 (0.8) & 37.6 (0.6) & 41.2 (0.7) & 39.2 (1.1) & 38.7 (2.0) & 41.3 (1.2) & 42.3 (1.9) & 41.9 (0.7) & 42.3 (1.3)\\
\bottomrule
\end{tabular}
\end{table}
\textbf{Long-Range Donor-Acceptor Detection.} Donor-acceptor structures such as hydrogen bonds have key impacts on the molecular geometrical structures (collinearity and coplanarity) and physical properties (melting point, water affinity, viscosity, etc.). Usually, atom pairs such as ``O...H'' that are close in Euclidean space are considered as donor-acceptor structures~\cite{kaur2020understanding}.
On this basis, we are particularly interested in using the 2D graph to recognize ({\em{i.e.}}, binary classification) donor-acceptor structures that span larger ranges in the 2D adjacency (as shown in~\Cref{fig:case}). Specifically, we select the molecules whose donor-acceptor pairs are close in 3D Euclidean distance but far apart in the 2D adjacency. We provide numerical results in~\Cref{table:donor}. Both tables show that MVP is the MVP :)
\begin{table}[htb!]
\caption{Accuracy on Recognizing Long-Range Donor-Acceptor Structures~\label{table:donor}}
\small
\setlength{\tabcolsep}{3pt}
\centering
\begin{tabular}{l c c c c c c c c c c}
\toprule
Random & \,AttrMask\, & \,ContextPred\, & GPT-GNN & GraphCL & JOAOv2 & MVP & MVP-G & MVP-C \\
\midrule
77.9 (1.1) & 78.6 (0.3) & 80.0 (0.5) & 77.5 (0.9) & 79.9 (0.7) & 79.2 (1.0) & 80.0 (0.4) & 81.5 (0.4) & 80.7 (0.2) \\
\bottomrule
\end{tabular}
\end{table}
\paragraph{Chirality.} We have also explored other tasks, such as predicting molecular chirality, which is a challenging setting when only 2D molecular graphs are provided~\cite{pattanaik2020message}. We found that GraphMVP\, brings negligible improvements, which we attribute to the model capacity of SchNet. We leave this for ongoing work.
\begin{figure}[ht]
\centering
\vspace{-2ex}
\includegraphics[width=\textwidth]{figures/case_study_SSL_better.png}
\vspace{-2ex}
\caption{Molecule examples where GraphMVP successfully recognizes the 3D diameters while random initialisation fails, legends are in a format of ``molecule id''-``2d diameter''-``3d diameter''.}
\label{fig:case_vis}
\end{figure}
\section{Conclusion and Future Work}
\vspace{-1ex}
In this work, we provide a very general framework, coined GraphMVP. From the domain perspective, GraphMVP\, (1) is the first to incorporate 3D information for augmenting 2D graph representation learning and (2) is able to take advantage of 3D conformers by considering stochasticity in modeling. From the aspect of technical novelties, GraphMVP\, brings the following insights when introducing its two SSL tasks:
(1) Following~\Cref{eq:MI_objective}, GraphMVP\, proposes EBM-NCE and VRR, where they are modeling the conditional distributions using EBM and variational distribution respectively.
(2) EBM-NCE is similar to JSE, though we start from a different direction for theoretical intuition; moreover, EBM opens another promising avenue in this area.
(3) VRR, as a generative SSL method, is able to alleviate the potential issues in molecule generation~\citep{zhavoronkov2019deep,gao2020synthesizability}.
(4) Ultimately, GraphMVP\, combines both contrastive SSL (InfoNCE or EBM-NCE) and generative SSL (VRR) into its objective function.
Both empirical results (solid performance improvements on 14 downstream datasets) and theoretical analysis can strongly support the above domain and technical contributions.
We want to emphasize that GraphMVP\, is model-agnostic and has the potential to be expanded to many other low-data applications. This motivates broad directions for future exploration, including but not limited to: (1) more powerful 2D and 3D molecule representation methods; (2) application domains other than small molecules, {\em{e.g.}}, large molecules like proteins.
\section{Self-Supervised Learning on Molecular Graph} \label{appendix:related_work}
Self-supervised learning (SSL) methods have attracted massive attention recently, trending from vision~\citep{chen2020simple,he2020momentum,caron2020unsupervised,chen2021exploring,OcCo}, language~\citep{oord2018representation,devlin2018bert,brown2020language} to graph~\citep{velivckovic2018deep,sun2019infograph,liu2018n,hu2019strategies,You2020GraphCL,you2021graph}. In general, there are two categories of SSL: contrastive and generative, where they differ on the design of the supervised signals. Contrastive SSL realizes the supervised signals at the \textbf{inter-data} level, learning the representation by contrasting with other data points; while generative SSL focuses on reconstructing the original data at the \textbf{intra-data} level. Both venues have been widely explored~\citep{liu2021graph,xie2021self,wu2021self,liu2021self}.
\subsection{Contrastive graph SSL}
Contrastive graph SSL first applies transformations to construct different \textit{views} for each graph. Each view incorporates different granularities of information, like node-, subgraph-, and graph-level. It then solves two sub-tasks simultaneously: (1) aligning the representations of views from the same data; (2) contrasting the representations of views from different data, leading to a uniformly distributed latent space~\citep{wang2020understanding}. The key difference among existing methods is thus the design of view constructions. InfoGraph~\citep{velivckovic2018deep,sun2019infograph} contrasted the node (local) and graph (global) views. ContextPred~\citep{hu2019strategies} and G-Contextual~\cite{rong2020self} contrasted between node and context views. GraphCL and JOAO~\citep{You2020GraphCL,you2021graph} made comprehensive comparisons among four graph-level transformations and further learned to select the most effective combinations.
\subsection{Generative graph SSL}
Generative graph SSL aims at reconstructing important structures for each graph. By so doing, it consequently learns a representation capable of encoding key ingredients of the data. EdgePred~\citep{hamilton2017inductive} and AttrMask~\citep{hu2019strategies} predicted the adjacency matrix and masked tokens (nodes and edges) respectively. GPT-GNN~\citep{hu2020gpt} reconstructed the whole graph in an auto-regressive approach.
\subsection{Predictive graph SSL}
There are certain SSL methods specific to molecular graphs. For example, one central task in drug discovery is to find the important substructures or motifs in molecules that can activate the target interactions. G-Motif~\citep{rong2020self} adopts domain knowledge to heuristically extract motifs for each molecule, and the SSL task is to predict the existence of each motif. Different from contrastive and generative SSL, recent literature~\citep{wu2021self} takes this as predictive graph SSL, where the supervised signals are self-generated labels.
\begin{table}[b]
\small
\caption{
Comparison between GraphMVP\, and existing graph SSL methods.
\vspace{-0.6ex}
}
\label{app:tab:compare}
\center
\setlength{\tabcolsep}{10pt}
\begin{tabular}{l c c c c c}
\toprule
\multirow{2}{*}{SSL Pre-training} & \multicolumn{2}{c}{Graph View} & \multicolumn{3}{c}{SSL Category}\\
\cmidrule(lr){2-3}
\cmidrule(lr){4-6}
& 2D Topology & 3D Geometry & Generative & Contrastive & Predictive\\
\midrule
EdgePred~\citep{hamilton2017inductive} & \checkmark & - & \checkmark & - & -\\
AttrMask~\citep{hu2019strategies} & \checkmark & - & \checkmark & - & -\\
GPT-GNN~\citep{hu2020gpt} & \checkmark & - & \checkmark & - & -\\
InfoGraph~\citep{velivckovic2018deep,sun2019infograph} & \checkmark & - & - & \checkmark& -\\
ContextPred~\citep{hu2019strategies} & \checkmark & - & - & \checkmark& -\\
GraphLoG~\citep{xu2021self} & \checkmark & - & - & \checkmark& -\\
G-Contextual~\citep{rong2020self} & \checkmark & - & - & \checkmark& -\\
GraphCL~\citep{You2020GraphCL} & \checkmark & - & - & \checkmark& -\\
JOAO~\citep{you2021graph} & \checkmark & - & - & \checkmark& -\\
G-Motif~\citep{rong2020self} & \checkmark & - & - & - & \checkmark\\
\midrule
GraphMVP\,(Ours) & \checkmark & \checkmark & \checkmark & \checkmark& -\\
\bottomrule
\end{tabular}
\end{table}
\textbf{SSL for Molecular Graphs.} Recall that all previous methods in~\Cref{app:tab:compare} \textbf{merely} focus on the 2D topology. However, for science-centric tasks such as molecular property prediction, 3D geometry should be incorporated as it provides complementary and comprehensive information~\citep{schutt2017schnet,liu2021spherical}. To mitigate this gap, we propose GraphMVP to leverage the 3D geometry with unsupervised graph pre-training.
\section{Molecular Graph Representation} \label{app:sec:molecule_representation}
There are two main methods for molecular graph representation learning. The first one is molecular fingerprints, hashed bit vectors that describe the molecular graph. There have been re-discoveries of fingerprint-based methods~\citep{ramsundar2015massively,liu2018practical,liu2019loss,meyer2019learning,alnammi2021evaluating,jiang2021could}, but they have one main drawback: random forest and XGBoost are very strong learning models on fingerprints, yet they fail to benefit from the pre-training strategy.
Graph neural networks (GNNs) have become another mainstream modeling method for molecular graph representation. Existing methods can generally be split into two venues: 2D GNNs and 3D GNNs, depending on the level of information considered. 2D GNNs focus on the topological structure of the graph, like the adjacency among nodes, while 3D GNNs are able to model the ``energy'' of molecules by taking into account the spatial positions of atoms.
First, we want to highlight that GraphMVP\, is model-agnostic, {\em{i.e.}}, it can be applied to any 2D and 3D GNN representation function, yet the specific 2D and 3D representations are not the main focus of this work.
Second, we acknowledge there are a lot of advanced 3D~\citep{fuchs2020se,satorras2021n,liu2021spherical,jumper2021highly} and 2D~\citep{gilmer2017neural,yang2019analyzing,liu2018n,xu2018powerful,corso2020principal,demirel2021analysis} representation methods.
However, considering the \textit{graph SSL literature} and the \textit{graph representation literature} (illustrated below), we adopt GIN~\citep{xu2018powerful} and SchNet~\citep{schutt2017schnet} in the current GraphMVP.
\subsection{2D Molecular Graph Neural Network}
The 2D representation treats each molecule as a 2D graph, with atoms as nodes and bonds as edges, {\em{i.e.}}, $g_{\text{2D}} = (X, E)$. $X\in\mathbb{R}^{n\times d_n}$ is the atom attribute matrix, where $n$ is the number of atoms (nodes) and $d_n$ is the atom attribute dimension. $E\in\mathbb{R}^{m\times d_e}$ is the bond attribute matrix, where $m$ is the number of bonds (edges) and $d_e$ is the bond attribute dimension. Notice that here $E$ also includes the connectivity. Then we apply a transformation function $T_{\text{2D}}$ on the topological graph. Given a 2D graph $g_{\text{2D}}$, its 2D molecular representation is:
\begin{equation}
h_{\text{2D}} = \text{GNN-2D}(T_{\text{2D}}(g_{\text{2D}})) = \text{GNN-2D}(T_{\text{2D}}(X, E)).
\end{equation}
The core operation of 2D GNN is the message passing function~\citep{gilmer2017neural}, which updates the node representation based on adjacency information. We have variants depending on the design of message and aggregation functions, and we pick GIN~\citep{xu2018powerful} in this work.
\paragraph{GIN} There has been a long research line on 2D graph representation learning~\citep{gilmer2017neural,yang2019analyzing,liu2018n,xu2018powerful,corso2020principal,demirel2021analysis}. Among these, graph isomorphism network (GIN) model~\citep{xu2018powerful} has been widely used as the backbone model in recent graph self-supervised learning work~\citep{hu2019strategies,You2020GraphCL,you2021graph}. Thus, we as well adopt GIN as the base model for 2D representation.
Recall each molecule is represented as a molecular graph, {\em{i.e.}}, $g_{\text{2D}} = (X, E)$, where $X$ and $E$ are feature matrices for atoms and bonds respectively. Then the message passing function is defined as:
\begin{equation}
z_i^{(k+1)} = \text{MLP}_{\text{atom}}^{(k+1)} \Big(z_i^{(k)} + \sum_{j \in \mathcal{N}(i)} \big( z_j^{(k)} + \text{MLP}_{\text{bond}}^{(k+1)}(E_{ij}) \big) \Big),
\end{equation}
where $z^{(0)}=X$, and $\text{MLP}_{\text{atom}}^{(k+1)}$ and $\text{MLP}_{\text{bond}}^{(k+1)}$ are the $(k+1)$-th MLP layers on the atom- and bond-level respectively. Repeating this $K$ times encodes the $K$-hop neighborhood information for each center atom in the molecular data, and we take the last layer for each node/atom representation. The graph-level molecular representation is the mean of the node representations:
\begin{equation}
z({\bm{x}}) = \frac{1}{n} \sum_{i} z_i^{(K)}.
\end{equation}
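To make the message passing above concrete, below is a minimal PyTorch sketch of one such layer together with the mean-pooling readout. The tensor layout (\texttt{edge\_index}, \texttt{edge\_attr}) and module names are illustrative assumptions rather than the exact implementation used in GraphMVP.
\begin{verbatim}
import torch
import torch.nn as nn

class GINLayerWithBonds(nn.Module):
    # one step: z_i <- MLP_atom( z_i + sum_{j in N(i)} ( z_j + MLP_bond(E_ij) ) )
    def __init__(self, hidden_dim, bond_dim):
        super().__init__()
        self.mlp_atom = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
                                      nn.Linear(hidden_dim, hidden_dim))
        self.mlp_bond = nn.Linear(bond_dim, hidden_dim)

    def forward(self, z, edge_index, edge_attr):
        # z: [n, hidden_dim]; edge_index: [2, m], each bond listed in both directions;
        # edge_attr: [m, bond_dim]
        src, dst = edge_index
        messages = z[src] + self.mlp_bond(edge_attr)             # [m, hidden_dim]
        agg = torch.zeros_like(z).index_add_(0, dst, messages)   # sum over neighbors
        return self.mlp_atom(z + agg)

def readout(z_last):
    # graph-level representation: mean over the atom representations of the last layer
    return z_last.mean(dim=0)
\end{verbatim}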
\subsection{3D Molecular Graph Neural Network} \label{appendix:3d_gnn}
Recently, 3D geometric representation learning has brought breakthrough progress in molecule modeling~\citep{schutt2017schnet,fuchs2020se,satorras2021n,liu2021spherical,jumper2021highly}. A 3D molecular graph additionally includes the spatial locations of the atoms, which need not be static since, in real scenarios, atoms are in continual motion on \textit{a potential energy surface}~\citep{axelrod2020geom}. The 3D structures at the local minima on this surface are named \textit{molecular conformations} or \textit{conformers}. As molecular properties are a function of the conformer ensembles~\citep{hawkins2017conformation}, this reveals another limitation of existing mainstream methods: predicting properties from a single 2D or 3D graph cannot account for this fact~\citep{axelrod2020geom}, while our proposed method can alleviate this issue to a certain extent.
Specifically, a 3D molecular graph additionally includes the spatial positions of the atoms. We represent each conformer as $g_{\text{3D}} = (X, R)$, where $R \in \mathbb{R}^{n \times 3}$ is the 3D-coordinate matrix, and the corresponding representation is:
\begin{equation}
h_{\text{3D}} = \text{GNN-3D}(T_{\text{3D}}(g_{\text{3D}})) = \text{GNN-3D}(T_{\text{3D}}(X, R)),
\end{equation}
where $R$ is the 3D-coordinate matrix and $T_{\text{3D}}$ is the 3D transformation. Note that further information, such as plane and torsion angles, can be derived from the atomic positions.
\paragraph{SchNet}
SchNet~\citep{schutt2017schnet} is composed of the following key steps:
\begin{equation}
\begin{aligned}
& z_i^{(0)} = \text{embedding} (x_i)\\
& z_i^{(t+1)} = \text{MLP} \Big( \sum_{j=1}^{n} f(z_j^{(t)}, r_i, r_j) \Big)\\
& h_i = \text{MLP} (z_i^{(K)}),
\end{aligned}
\end{equation}
where $K$ is the number of hidden layers, and
\begin{equation}
f(z_j, r_i, r_j) = z_j \cdot e_k(r_i - r_j) = z_j \cdot \exp\big(- \gamma \, ( \|r_i - r_j\|_2 - \mu )^2 \big)
\end{equation}
is the continuous-filter convolution layer, enabling the modeling of continuous positions of atoms.
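A minimal, self-contained sketch of such a continuous-filter convolution is shown below, assuming a Gaussian radial-basis expansion of the interatomic distances; the number of basis functions, the cutoff, and $\gamma$ are illustrative placeholders rather than the actual SchNet hyperparameters.
\begin{verbatim}
import torch
import torch.nn as nn

class ContinuousFilterConv(nn.Module):
    # z_i <- MLP( sum_j z_j * W(||r_i - r_j||) ), with W built from RBF-expanded distances
    def __init__(self, hidden_dim, num_rbf=64, cutoff=10.0, gamma=10.0):
        super().__init__()
        self.register_buffer("mu", torch.linspace(0.0, cutoff, num_rbf))  # RBF centers
        self.gamma = gamma
        self.filter_net = nn.Sequential(nn.Linear(num_rbf, hidden_dim), nn.Softplus(),
                                        nn.Linear(hidden_dim, hidden_dim))
        self.update_net = nn.Sequential(nn.Linear(hidden_dim, hidden_dim), nn.Softplus(),
                                        nn.Linear(hidden_dim, hidden_dim))

    def forward(self, z, pos):
        # z: [n, hidden_dim] atom features; pos: [n, 3] atom coordinates
        dist = torch.cdist(pos, pos)                                        # [n, n]
        rbf = torch.exp(-self.gamma * (dist.unsqueeze(-1) - self.mu) ** 2)  # [n, n, num_rbf]
        W = self.filter_net(rbf)                                            # [n, n, hidden_dim]
        messages = (z.unsqueeze(0) * W).sum(dim=1)                          # sum_j z_j * W_ij
        return self.update_net(messages)
\end{verbatim}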
We adopt SchNet for the following reasons. (1) SchNet is a very strong geometric representation method under \textit{fair} benchmarking. (2) SchNet can be trained more efficiently, compared to other recent 3D models. To support these two points, we make a comparison among the most recent 3D geometric models~\cite{fuchs2020se,satorras2021n,liu2021spherical} on the QM9 dataset. QM9~\citep{wu2018moleculenet} is a molecule dataset with 12 thermodynamic properties calculated by the density functional theory (DFT) algorithm. Notice: UNiTE~\citep{qiao2021unite} is the state-of-the-art 3D GNN, but it requires commercial software for feature extraction, thus we exclude it for now.
\begin{table}[H]
\centering
\scriptsize
\caption{
Reproduced MAE on QM9. 100k molecules for training, 17,748 for validation, 13,083 for test. The last column is the approximate running time.
}
\label{tab:app:qm9}
\begin{tabular}{l c c c c c c c c c c c c r}
\toprule
& alpha & gap & homo & lumo & mu & cv & g298 & h298 & r2 & u298 & u0 & zpve & time\\
\midrule
SchNet~\citep{schutt2017schnet} & 0.077 & 50 & 32 & 26 & 0.030 & 0.032 & 15 & 14 & 0.122 & 14 & 14 & 1.751 & 3h\\
SE(3)-Trans~\citep{fuchs2020se} & 0.143 & 59 & 36 & 36 & 0.052 & 0.068 & 68 & 72 & 1.969 & 68 & 74 & 5.517 & 50h\\
EGNN~\citep{satorras2021n} & 0.075 & 49 & 29 & 26 & 0.030 & 0.032 & 11 & 10 & 0.076 & 10 & 10 & 1.562 & 24h\\
SphereNet~\citep{liu2021spherical} & 0.054 & 41 & 22 & 19 & 0.028 & 0.027 & 10 & 8 & 0.295 & 8 & 8 & 1.401 & 50h\\
\bottomrule
\end{tabular}
\end{table}
\Cref{tab:app:qm9} shows that, under a fair comparison (w.r.t.\ data splitting, seed, CUDA version, etc.), SchNet reaches quite comparable performance with much better efficiency. Combining these two points, we adopt SchNet in the current version of GraphMVP.
\subsection{Summary}
To sum up, in GraphMVP, the most important message we want to deliver is how to design a well-motivated SSL algorithm to extract useful 3D geometry information to augment the 2D representation for downstream fine-tuning. GraphMVP\, is model-agnostic, and we leave the more advanced 3D~\citep{fuchs2020se,satorras2021n,liu2021spherical,jumper2021highly} and 2D~\citep{yang2019analyzing,liu2018n,corso2020principal} GNNs for future exploration.
In addition, molecular property prediction tasks have rich alternative representation methods, including SMILES~\cite{weininger1988smiles,hirohara2018convolutional} and biological knowledge graphs~\cite{wang2021multi,liu2022structured}. There is another SSL research line on them~\cite{liustructured,zhu2021dual,fang2021molecular}, yet it is beyond the scope of this paper.
\section{Experiments Details}
\subsection{Self-supervised Learning Baselines} \label{appendix:hyper}
For the SSL baselines in the main results (\Cref{tab:main_results}), we can generally match the original papers, even though most of them use larger pre-training datasets, like ZINC-2m. Still, we would like to add some specifications.
\begin{itemize}
\item G-\{Contextual, Motif\}~\citep{rong2020self} proposes a new GNN backbone model and pre-trains on a larger dataset. Both settings are different from ours.
\item JOAO~\citep{you2021graph} has two versions in the original paper. In this paper, we run both versions and report the optimal one.
\item Almost all the graph SSL baselines report the test performance at the optimal validation error, while GraphLoG~\citep{xu2021self} reports 73.2 in the paper using the last-epoch performance. This can be over-optimistic due to overfitting, so we rerun it with the same downstream evaluation strategy for a fair comparison.
\end{itemize}
\clearpage
\subsection{Ablation Study: The Effect of Masking Ratio and Number of Conformers} \label{app:sec:effect_of_M_and_C}
\begin{table}[H]
\caption{
Full results for ablation of masking ratio $M$ ($C=5$), MVP is short for GraphMVP.
\vspace{-0.3cm}
}
\fontsize{9pt}{2pt}
\selectfont
\setlength{\tabcolsep}{2pt}
\centering
\begin{tabular}{l l c c c c c c c c c}
\toprule
& $M$\,\,\,\,\,\, & BBBP & Tox21 & ToxCast & Sider & ClinTox & MUV & HIV & Bace & Avg\\
\midrule
-- & -- & 65.4(2.4) & 74.9(0.8) & 61.6(1.2) & 58.0(2.4) & 58.8(5.5) & 71.0(2.5) & 75.3(0.5) & 72.6(4.9) & 67.21 \\
\midrule
\multirow{3}{*}{MVP}
& 0 & 69.4 (1.0)& 75.3 (0.5)& 62.8 (0.2)& 61.9 (0.5)& 74.4 (1.3)& 74.6 (1.4)& 74.6 (1.0)& 76.0 (2.0) & 71.12\\
& 0.15 & 68.5 (0.2)& 74.5 (0.4)& 62.7 (0.1)& 62.3 (1.6)& 79.0 (2.5)& 75.0 (1.4)& 74.8 (1.4)& 76.8 (1.1) & 71.69\\
& 0.3 & 68.6 (0.3)& 74.9 (0.6)& 62.8 (0.4)& 60.0 (0.6)& 74.8 (7.8)& 74.7 (0.8)& 75.5 (1.1)& 82.9 (1.7) & 71.79\\
\midrule
\multirow{3}{*}{MVP-G}
& 0 & 72.4 (1.3)& 74.7 (0.6)& 62.4 (0.2)& 60.3 (0.7)& 76.2 (5.7)& 76.6 (1.7)& 76.4 (1.7)& 78.0 (1.1) & 72.15\\
& 0.15 & 70.8 (0.5)& 75.9 (0.5)& 63.1 (0.2)& 60.2 (1.1)& 79.1 (2.8)& 77.7 (0.6)& 76.0 (0.1)& 79.3 (1.5) & 72.76\\
& 0.3 & 69.5 (0.5)& 74.6 (0.6)& 62.7 (0.3)& 60.8 (1.2)& 80.7 (2.0)& 77.8 (2.5)& 76.2 (0.5)& 81.0 (1.0) & 72.91\\
\midrule
\multirow{3}{*}{MVP-C}
& 0 & 71.5 (0.9)& 75.4 (0.3)& 63.6 (0.5)& 61.8 (0.6)& 77.3 (1.2)& 75.8 (0.6)& 76.1 (0.9)& 79.8 (0.4) & 72.66\\
& 0.15 & 72.4 (1.6)& 74.4 (0.2)& 63.1 (0.4)& 63.9 (1.2)& 77.5 (4.2)& 75.0 (1.0)& 77.0 (1.2)& 81.2 (0.9) & 73.07\\
& 0.3 & 70.7 (0.8)& 74.6 (0.3)& 63.8 (0.7)& 60.4 (0.6)& 83.5 (3.2)& 74.2 (1.6)& 76.0 (1.0)& 82.2 (2.2) & 73.17\\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[H]
\caption{
Full results for ablation of \# conformers $C$ ($M=0.15$), MVP is short for GraphMVP.
\vspace{-0.3cm}
}
\fontsize{9pt}{2pt}
\selectfont
\setlength{\tabcolsep}{2pt}
\centering
\begin{tabular}{l r c c c c c c c c c}
\toprule
& \,\,\,\,$C$ & BBBP & Tox21 & ToxCast & Sider & ClinTox & MUV & HIV & Bace & Avg\\
\midrule
-- & -- & 65.4(2.4) & 74.9(0.8) & 61.6(1.2) & 58.0(2.4) & 58.8(5.5) & 71.0(2.5) & 75.3(0.5) & 72.6(4.9) & 67.21 \\
\midrule
\multirow{4}{*}{MVP}
& 1 & 69.2 (1.0)& 74.7 (0.4)& 62.5 (0.2)& 63.0 (0.4)& 73.9 (7.2)& 76.2 (0.4)& 75.3 (1.1)& 78.0 (0.5) & 71.61\\
& 5 & 68.5 (0.2)& 74.5 (0.4)& 62.7 (0.1)& 62.3 (1.6)& 79.0 (2.5)& 75.0 (1.4)& 74.8 (1.4)& 76.8 (1.1) & 71.69\\
& 10 & 68.3 (0.5)& 74.2 (0.6)& 63.2 (0.5)& 61.4 (1.0)& 80.6 (0.8)& 75.4 (2.4)& 75.5 (0.6)& 79.1 (2.3) & 72.20\\
& 20 & 68.7 (0.5)& 74.9 (0.3)& 62.7 (0.3)& 60.8 (0.7)& 75.8 (0.5)& 76.3 (1.5)& 77.4 (0.3)& 82.3 (0.8) & 72.39\\
\midrule
\multirow{4}{*}{MVP-G}
& 1 & 70.9 (0.4)& 75.3 (0.7)& 62.8 (0.5)& 61.2 (0.6)& 81.4 (3.7)& 74.2 (2.1)& 76.4 (0.6)& 80.2 (0.7) & 72.80\\
& 5 & 70.8 (0.5)& 75.9 (0.5)& 63.1 (0.2)& 60.2 (1.1)& 79.1 (2.8)& 77.7 (0.6)& 76.0 (0.1)& 79.3 (1.5) & 72.76\\
& 10 & 70.2 (0.9)& 74.9 (0.4)& 63.4 (0.4)& 60.8 (1.0)& 80.6 (0.4)& 76.4 (2.0)& 77.0 (0.3)& 77.4 (1.3) & 72.59\\
& 20 & 69.5 (0.4)& 74.9 (0.4)& 63.3 (0.1)& 60.8 (0.3)& 81.2 (0.5)& 77.3 (2.7)& 76.9 (0.3)& 80.1 (0.5) & 73.00\\
\midrule
\multirow{4}{*}{MVP-C}
& 1 & 69.7 (0.9)& 74.9 (0.5)& 64.1 (0.5)& 61.0 (1.4)& 78.3 (2.7)& 75.7 (1.5)& 74.7 (0.8)& 81.3 (0.7) & 72.46\\
& 5 & 72.4 (1.6)& 74.4 (0.2)& 63.1 (0.4)& 63.9 (1.2)& 77.5 (4.2)& 75.0 (1.0)& 77.0 (1.2)& 81.2 (0.9) & 73.07\\
& 10 & 69.5 (1.5)& 74.5 (0.5)& 63.9 (0.9)& 60.9 (0.4)& 81.1 (1.8)& 76.8 (1.5)& 76.0 (0.8)& 82.0 (1.0) & 73.09\\
& 20 & 72.1 (0.4)& 73.4 (0.7)& 63.9 (0.3)& 63.0 (0.7)& 78.8 (2.4)& 74.1 (1.0)& 74.8 (0.9)& 84.1 (0.6) & 73.02\\
\bottomrule
\end{tabular}
\end{table}
\subsection{Ablation Study: Effect of Each Loss Component}
\begin{table}[H]
\caption{Molecular graph property prediction for the ablation of each loss component. We set $C=5$ and $M=0.15$ for GraphMVP methods.}
\small
\setlength{\tabcolsep}{2pt}
\centering
\begin{tabular}{l c c c c c c c c c}
\toprule
& BBBP & Tox21 & ToxCast & Sider & ClinTox & MUV & HIV & Bace & Avg\\
\midrule
\# Molecules & 2,039 & 7,831 & 8,575 & 1,427 & 1,478 & 93,087 & 41,127 & 1,513 & - \\
\# Tasks & 1 & 12 & 617 & 27 & 2 & 17 & 1 & 1 & - \\
\midrule
- & 65.4(2.4) & 74.9(0.8) & 61.6(1.2) & 58.0(2.4) & 58.8(5.5) & 71.0(2.5) & 75.3(0.5) & 72.6(4.9) & 67.21 \\
\midrule
InfoNCE only & 68.9(1.2) & 74.2(0.3) & 62.8(0.2) & 59.7(0.7) & 57.8(11.5) & 73.6(1.8) & 76.1(0.6) & 77.6(0.3) & 68.85 \\
EBM-NCE only & 68.0(0.3) & 74.3(0.4) & 62.6(0.3) & 61.3(0.4) & 66.0(6.0) & 73.1(1.6) & 76.4(1.0) & 79.6(1.7) & 70.15 \\
VAE only & 67.6(1.8) & 73.2(0.5) & 61.9(0.4) & 60.5(0.2) & 59.7(1.6) & 78.6(0.7) & 77.4(0.6) & 75.4(2.1) & 69.29 \\
AE only & 70.5(0.4) & 75.0(0.4) & 62.4(0.4) & 61.0(1.4) & 53.8(1.0) & 74.1(2.9) & 76.3(0.5) & 77.9(0.9) & 68.89 \\
\midrule
InfoNCE + VAE & 69.6(1.1) & 75.4(0.6) & 63.2(0.3) & 59.9(0.4) & 69.3(14.0) & 76.5(1.3) & 76.3(0.2) & 75.2(2.7) & 70.67 \\
EBM-NCE + VAE & 68.5(0.2) & 74.5(0.4) & 62.7(0.1) & 62.3(1.6) & 79.0(2.5) & 75.0(1.4) & 74.8(1.4) & 76.8(1.1) & 71.69 \\
InfoNCE + AE & 65.1(3.1) & 75.4(0.7) & 62.5(0.5) & 59.2(0.6) & 77.2(1.8) & 72.4(1.4) & 75.8(0.6) & 77.1(0.8) & 70.60 \\
EBM-NCE + AE & 69.4(1.0) & 75.2(0.1) & 62.4(0.4) & 61.5(0.9) & 71.1(6.0) & 73.3(0.3) & 75.2(0.6) & 79.3(1.1) & 70.94 \\
\bottomrule
\end{tabular}
\end{table}
\clearpage
\subsection{Broader Range of Downstream Tasks: Molecular Property Prediction} \label{sec:app:regression_results}
\begin{table}[H]
\small
\setlength{\tabcolsep}{10pt}
\centering
\caption{
Results for four molecular property prediction tasks (regression).
For each downstream task, we report the mean (and standard deviation) RMSE of 3 seeds with scaffold splitting.
For GraphMVP\,, we set $M=0.15$ and $C=5$.
The best performance for each task is marked in \textbf{bold}.
}
\begin{tabular}{l c c c c c}
\toprule
& ESOL & Lipo & Malaria & CEP & Avg\\
\midrule
-- & 1.178 (0.044) & 0.744 (0.007) & 1.127 (0.003) & 1.254 (0.030) & 1.07559 \\
\midrule
AM & 1.112 (0.048) & 0.730 (0.004) & 1.119 (0.014) & 1.256 (0.000) & 1.05419 \\
CP & 1.196 (0.037) & 0.702 (0.020) & 1.101 (0.015) & 1.243 (0.025) & 1.06059 \\
JOAO & 1.120 (0.019) & 0.708 (0.007) & 1.145 (0.010) & 1.293 (0.003) & 1.06631 \\
\midrule
GraphMVP\, & 1.091 (0.021) & 0.718 (0.016) & 1.114 (0.013) & 1.236 (0.023) & 1.03968 \\
GraphMVP-G & 1.064 (0.045) & 0.691 (0.013) & 1.106 (0.013) & \textbf{1.228 (0.001)} & 1.02214 \\
GraphMVP-C & \textbf{1.029 (0.033)} & \textbf{0.681} (0.010) & \textbf{1.097 (0.017)} & 1.244 (0.009) & \textbf{1.01283} \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Broader Range of Downstream Tasks: Drug-Target Affinity Prediction}
\begin{table}[H]
\small
\setlength{\tabcolsep}{15pt}
\centering
\caption{
Results for two drug-target affinity prediction tasks (regression).
For each downstream task, we report the mean (and standard deviation) MSE of 3 seeds with random splitting.
For GraphMVP\,, we set $M=0.15$ and $C=5$.
The best performance for each task is marked in \textbf{bold}.
}
\centering
\begin{tabular}{l c c c}
\toprule
& Davis & KIBA & Avg\\
\midrule
-- & 0.286 (0.006) & 0.206 (0.004) & 0.24585 \\
\midrule
AM & 0.291 (0.007) & 0.203 (0.003) & 0.24730 \\
CP & 0.279 (0.002) & 0.198 (0.004) & 0.23823 \\
JOAO & 0.281 (0.004) & 0.196 (0.005) & 0.23871 \\
\midrule
GraphMVP & 0.280 (0.005) & 0.178 (0.005) & 0.22860 \\
GraphMVP-G & \textbf{0.274 (0.002)} & 0.175 (0.001) & 0.22476 \\
GraphMVP-C & 0.276 (0.004) & \textbf{0.168 (0.001)} & \textbf{0.22231} \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Case Studies~\label{appendix:case}}
\textbf{Shape Analysis (3D Diameter Prediction).} Diameter is an important measure in molecule~\cite{liu2010using,melnik2003distance} and genome~\cite{finn2017comparative} modelling. Usually, the longer the 2D diameter (the longest shortest-path distance in the 2D graph), the larger the 3D diameter (the largest atomic pairwise $\ell_2$ distance). However, this is not always true. Therefore, we are particularly interested in using the 2D graph to predict the 3D diameter when the 2D and 3D molecular landscapes differ substantially (as in~\Cref{fig:case} and~\Cref{fig:case_appendix}). We formulate it as an $n$-class recognition problem, where $n$ is the number of classes after removing the consecutive intervals. We provide numerical results in~\Cref{table:diam} and more visualisation examples in~\Cref{fig:case_vis}.
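As an illustration of how the two diameters could be computed, below is a minimal RDKit-based sketch; it assumes the standard \texttt{GetDistanceMatrix} / \texttt{Get3DDistanceMatrix} utilities and a single embedded conformer, and omits the class construction and molecule selection protocol described above.
\begin{verbatim}
from rdkit import Chem
from rdkit.Chem import AllChem

def diameters(smiles):
    # return (2D diameter, 3D diameter) for one molecule
    mol = Chem.AddHs(Chem.MolFromSmiles(smiles))
    AllChem.EmbedMolecule(mol, randomSeed=0)   # generate a single 3D conformer
    topo = Chem.GetDistanceMatrix(mol)         # shortest-path (bond-count) distances
    eucl = Chem.Get3DDistanceMatrix(mol)       # pairwise Euclidean distances
    return int(topo.max()), float(eucl.max())

# molecules where the two measures disagree strongly are the candidates for this case study
print(diameters("CCCCCCCCCC"))
\end{verbatim}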
\begin{figure}[ht]
\centering
\vspace{-2ex}
\includegraphics[width=\textwidth]{figures/Case_Study_Appendix.pdf}
\caption{Molecule selection: we select the molecules that lie in the black dashed box.}
\label{fig:case_appendix}
\end{figure}
\begin{table}[htb!]
\caption{Accuracy on Recognizing Molecular Spatial Diameters~\label{table:diam}}
\small
\setlength{\tabcolsep}{3pt}
\centering
\begin{tabular}{l c c c c c c c c c c}
\toprule
Random & \,AttrMask\, & \,ContextPred\, & GPT-GNN & GraphCL & JOAOv2 & MVP & MVP-G & MVP-C \\
\midrule
38.9 (0.8) & 37.6 (0.6) & 41.2 (0.7) & 39.2 (1.1) & 38.7 (2.0) & 41.3 (1.2) & 42.3 (1.9) & 41.9 (0.7) & 42.3 (1.3)\\
\bottomrule
\end{tabular}
\end{table}
\textbf{Long-Range Donor-Acceptor Detection.} Donor-acceptor structures such as hydrogen bonds have key impacts on the molecular geometric structures (collinearity and coplanarity) and physical properties (melting point, water affinity, viscosity, etc.). Usually, atom pairs such as ``O...H'' that are close in Euclidean space are considered donor-acceptor structures~\cite{kaur2020understanding}.
On this basis, we are particularly interested in using the 2D graph to recognize ({\em{i.e.}}, binary classification) donor-acceptor structures that span larger ranges in the 2D adjacency (as shown in~\Cref{fig:case}). Similarly, we select the molecules whose donor-acceptor pairs are close in 3D Euclidean distance but far in the 2D adjacency. We provide numerical results in~\Cref{table:donor}. Both tables show that MVP is the MVP :)
\begin{table}[htb!]
\caption{Accuracy on Recognizing Long-Range Donor-Acceptor Structures~\label{table:donor}}
\small
\setlength{\tabcolsep}{3pt}
\centering
\begin{tabular}{l c c c c c c c c c c}
\toprule
Random & \,AttrMask\, & \,ContextPred\, & GPT-GNN & GraphCL & JOAOv2 & MVP & MVP-G & MVP-C \\
\midrule
77.9 (1.1) & 78.6 (0.3) & 80.0 (0.5) & 77.5 (0.9) & 79.9 (0.7) & 79.2 (1.0) & 80.0 (0.4) & 81.5 (0.4) & 80.7 (0.2) \\
\bottomrule
\end{tabular}
\end{table}
\paragraph{Chirality.} We have also explored other tasks such as predicting molecular chirality, which is a challenging setting if only 2D molecular graphs are provided~\cite{pattanaik2020message}. We found that GraphMVP\,brings negligible improvements due to the model capacity of SchNet. We leave this for ongoing work.
\begin{figure}[ht]
\centering
\vspace{-2ex}
\includegraphics[width=\textwidth]{figures/case_study_SSL_better.png}
\vspace{-2ex}
\caption{Molecule examples where GraphMVP successfully recognizes the 3D diameters while random initialisation fails; legends follow the format ``molecule id''-``2D diameter''-``3D diameter''.}
\label{fig:case_vis}
\end{figure}
\section*{Reproducibility Statement}
To ensure the reproducibility of the empirical results, we include our code base in the supplementary material, which contains: instructions for installing conda virtual environment, data preprocessing scripts, and training scripts. Our code is available on \href{https://github.com/chao1224/GraphMVP}{GitHub} for reproducibility. Complete derivations of equations and clear explanations are given in~\Cref{sec:app:MI,sec:app:contrastive_ssl,sec:app:generative_ssl}.
\section*{Acknowledgement}
This project is supported by the Natural Sciences and Engineering Research Council (NSERC) Discovery Grant, the Canada CIFAR AI Chair Program, collaboration grants between Microsoft Research and Mila, Samsung Electronics Co., Ltd., Amazon Faculty Research Award, Tencent AI Lab Rhino-Bird Gift Fund and a NRC Collaborative R\&D Project (AI4D-CORE-06). This project was also partially funded by IVADO Fundamental Research Project grant PRF-2019-3583139727.
\bibliographystyle{plain}
\section{Dataset Overview} \label{app:sec:dataset}
\subsection{Pre-Training Dataset Overview}
In this section, we provide the basic statistics of the pre-training dataset (GEOM).
In~\Cref{fig:hist}, we plot the histogram (logarithmic scale on the y-axis) and the cumulative distribution of the number of conformers per molecule. As shown by the histogram and curves, a certain number of molecules have over 1,000 possible 3D conformer structures, while over 80\% of molecules have fewer than 100 conformers.
\begin{figure}[htb!]
\centering
\includegraphics[width=\textwidth]{figures/hist.pdf}
\caption{Statistics on the conformers of each molecule\label{fig:hist}}
\end{figure}
In~\Cref{fig:cumulative}, we plot the histogram of the sum of the top (sorted by descending weight) \{1, 5, 10, 20\} conformer weights. The physical meaning of the weight is the proportion with which each conformer occurs in nature. We observe that the top 5 or 10 conformers are sufficient, as they dominate nearly all natural observations. Such a long-tailed distribution is also in alignment with our findings in the ablation studies: utilizing the top five conformers in GraphMVP\, reaches a good balance between effectiveness and efficiency.
\begin{figure}[htb!]
\centering
\includegraphics[width=\textwidth]{figures/weight_conf.pdf}
\caption{Sum of occurrence weights for the top major conformers\label{fig:cumulative}}
\end{figure}
\subsection{Downstream Dataset Overview} \label{app:sec:downstream_datasets}
In this section, we review the four main categories of datasets used for downstream tasks.
\paragraph{Molecular Property: Pharmacology}
The Blood-Brain Barrier Penetration (BBBP)~\cite{martins2012bayesian} dataset measures whether a molecule will penetrate the central nervous system.
All three datasets, Tox21~\cite{tox21}, ToxCast~\cite{wu2018moleculenet}, and ClinTox~\cite{gayvert2016data} are related to the toxicity of molecular compounds.
The Side Effect Resource (SIDER)~\cite{kuhn2015sider} dataset stores the adverse drug reactions on a marketed drug database.
\paragraph{Molecular Property: Physical Chemistry}
The ESOL dataset proposed in~\cite{delaney2004esol} measures the aqueous solubility of molecular compounds. The Lipophilicity (Lipo) dataset is a subset of ChEMBL~\cite{gaulton2012chembl} measuring the molecule octanol/water distribution coefficient. The CEP dataset is a subset of the Harvard Clean Energy Project (CEP)~\cite{hachmann2011harvard}, which estimates the organic photovoltaic efficiency.
\paragraph{Molecular Property: Biophysics}
Maximum Unbiased Validation (MUV)~\cite{rohrer2009maximum} is another sub-database from PCBA, obtained by applying a refined nearest-neighbor analysis. HIV is from the Drug Therapeutics Program (DTP) AIDS Antiviral Screen~\cite{zaharevitz2015aids}, and it aims at predicting the ability to inhibit HIV replication. BACE measures the binding results for a set of inhibitors of $\beta$-secretase 1 (BACE-1), and is gathered in MoleculeNet~\cite{wu2018moleculenet}. Malaria~\cite{gamo2010thousands} measures the drug efficacy against the parasite that causes malaria.
\paragraph{Drug-Target Affinity}
Davis~\citep{davis2011comprehensive} measures the binding affinities between kinase inhibitors and kinases, scored by the $K_d$ value (kinase dissociation constant).
KIBA~\citep{tang2014making} contains binding affinities for kinase inhibitors from different sources, including $K_i$, $K_d$ and $\text{IC}_{50}$. KIBA scores~\citep{ozturk2018deepdta} are constructed to optimize the consistency among these values.
\begin{table}[H]
\centering
\small
\caption{Summary for the molecule chemical datasets.}
\begin{tabular}{l l r r r r}
\toprule
Dataset & Task & \# Tasks & \# Molecules & \# Proteins & \# Molecule-Protein pairs\\
\midrule
BBBP & Classification & 1 & 2,039 & -& -\\
Tox21 & Classification & 12 & 7,831 & -& -\\
ToxCast & Classification & 617 & 8,576 & -& -\\
Sider & Classification & 27 & 1,427 & -& -\\
ClinTox & Classification & 2 & 1,478 & -& -\\
MUV & Classification & 17 & 93,087 & -& -\\
HIV & Classification & 1 & 41,127 & -& -\\
Bace & Classification & 1 & 1,513 & -& -\\
\midrule
Delaney & Regression & 1 & 1,128 & -& -\\
Lipo & Regression & 1 & 4,200 & -& -\\
Malaria & Regression & 1 & 9,999 & -& -\\
CEP & Regression & 1 & 29,978 & -& -\\
\midrule
Davis & Regression & 1 & 68 & 379 & 30,056 \\
KIBA & Regression & 1 & 2,068 & 229 & 118,254 \\
\bottomrule
\end{tabular}
\label{tab:mol_dataset_summary}
\end{table}
\section{Introduction} \label{sec:intro}
In recent years, drug discovery has drawn increasing interest in the machine learning community. Among many challenges therein, how to discriminatively represent a molecule with a vectorized embedding remains a fundamental yet open challenge. The underlying problem can be decomposed into two components: how to design a common latent space for molecule graphs ({\em{i.e.}}, designing a suitable encoder) and how to construct an objective function to supervise the training ({\em{i.e.}}, defining a learning target). Falling broadly into the second category, our paper studies self-supervised molecular representation learning by leveraging the consistency between 3D geometry and 2D topology.
Motivated by the prominent success of the pretraining-finetuning pipeline~\citep{devlin2018bert}, unsupervisedly pre-trained graph neural networks for molecules yield promising performance on downstream tasks and become increasingly popular~\citep{velivckovic2018deep,sun2019infograph,liu2018n,hu2019strategies,You2020GraphCL,you2021graph}. The key to pre-training lies in finding an effective proxy task ({\em{i.e.}}, training objective) to leverage the power of large unlabeled datasets. Inspired by \citep{schutt2017schnet,liu2018n,liu2021spherical} that molecular properties~\citep{gilmer2017neural,liu2018n} can be better predicted by 3D geometry due to its encoded energy knowledge, we aim to make use of the 3D geometry of molecules in pre-training. However, the stereochemical structures are often very expensive to obtain, making such 3D geometric information scarce in downstream tasks. To address this problem, we propose the Graph Multi-View Pre-training (GraphMVP) framework, where a 2D molecule encoder is pre-trained with the knowledge of 3D geometry and then fine-tuned on downstream tasks without 3D information. Our learning paradigm, during pre-training, injects the knowledge of 3D molecular geometry into a 2D molecular graph encoder such that the downstream tasks can benefit from the implicit 3D geometric prior even if there is no 3D information available.
We attain the aforementioned goal by leveraging two pretext tasks on the 2D and 3D molecular graphs: one contrastive and one generative SSL. Contrastive SSL creates the supervised signal at an \textbf{inter-molecule} level: the 2D and 3D graph pairs are positive if they are from the same molecule, and negative otherwise; Then contrastive SSL~\citep{wang2020understanding} will align the positive pairs and contrast the negative pairs simultaneously. Generative SSL~\citep{vincent2008extracting,kingma2013auto,higgins2016beta}, on the other hand, obtains the supervised signal in an \textbf{intra-molecule} way: it learns a 2D/3D representation that can reconstruct its 3D/2D counterpart view for each molecule itself.
To cope with the challenge of measuring the quality of reconstruction on the molecule 2D and 3D space, we further propose a novel surrogate objective function called variational representation reconstruction (VRR) for the generative SSL task, which can effectively measure such quality in the continuous representation space. The knowledge acquired by these two SSL tasks is complementary, so our GraphMVP\ framework integrates them to form a more discriminative 2D molecular graph representation.
Consistent and significant performance improvements empirically validate the effectiveness of GraphMVP.
We give additional insights to justify the effectiveness of GraphMVP. First, GraphMVP\, is a self-supervised learning approach based on maximizing mutual information (MI) between 2D and 3D views, enabling the learnt representation to capture high-level factors~\citep{belghazi2018mutual,tschannen2019mutual,bachman2019learning} in molecule data. Second, we find that 3D molecular geometry is a form of privileged information~\cite{vapnik2009new,vapnik2015learning}. It has been proven that using privileged information in training can accelerate the speed of learning. We note that privileged information is only used in training, while it is not available in testing. This perfectly matches our intuition of pre-training molecular representation with 3D geometry.
Our contributions include
(1) To our best knowledge, we are the first to incorporate the 3D geometric information into graph SSL;
(2) We propose one contrastive and one generative SSL task for pre-training. We then elaborate on their differences and empirically validate that combining both can lead to a better representation;
(3) We provide theoretical insights and case studies to justify why adding 3D geometry is beneficial;
(4) We achieve the SOTA performance among all the SSL methods.
\textbf{Related work.}
We briefly review the most related works here and include a more detailed summarization in~\Cref{appendix:related_work}. Self-supervised learning (SSL) methods have attracted massive attention in graph applications~\citep{liu2021graph,xie2021self,wu2021self,liu2021self}. In general, there are roughly two categories of graph SSL: contrastive and generative, which differ in the design of the supervised signals. Contrastive graph SSL~\citep{velivckovic2018deep,sun2019infograph,hu2019strategies,You2020GraphCL,you2021graph} constructs the supervised signals at the \textbf{inter-graph} level and learns the representation by contrasting with other graphs, while generative graph SSL~\citep{hamilton2017inductive,liu2018n,hu2019strategies,hu2020gpt} focuses on reconstructing the original graph at the \textbf{intra-graph} level. One of the most significant differences that separates our work from existing methods is that all previous methods \textbf{merely} focus on 2D molecular topology. However, for scientific tasks such as molecular property prediction, 3D geometry should be incorporated as it provides complementary and comprehensive information~\citep{schutt2017schnet,liu2021spherical}. To fill this gap, we propose GraphMVP to leverage the 3D geometry in graph self-supervised pre-training.
\section{Theoretical Insights} \label{sec:theoretical_insight}
In this section, we provide the mathematical insights behind GraphMVP. We first show that both the contrastive and generative SSL methods (\Cref{sec:contrastive_SSL,sec:generative_SSL}) maximize mutual information (MI), and then discuss how 3D geometry, as privileged information, can help 2D representation learning.
\vspace{-1ex}
\subsection{Maximizing Mutual Information} \label{sec:mutual_information}
\vspace{-1ex}
Mutual information (MI) measures the non-linear dependence~\citep{belghazi2018mutual} between two random variables: the larger the MI, the stronger the dependence between the variables. Therefore for GraphMVP, we can interpret it as maximizing the MI between the 2D and 3D views: obtaining a more robust 2D/3D representation by sharing more information with its 3D/2D counterpart. This is also consistent with sample complexity theory~\citep{erhan2010does,arora2019theoretical,garg2020functional}, where SSL as a functional regularizer can reduce the uncertainty in representation learning. We first derive a lower bound for the MI (see derivations in~\Cref{sec:app:MI}), and the corresponding objective function $\mathcal{L}_{\text{MI}}$ is
\begin{equation} \label{eq:MI_objective}
\small
I(X;Y) \ge \mathcal{L}_{\text{MI}} = \frac{1}{2} \mathbb{E}_{p({\bm{x}},{\bm{y}})} \big[ \log p({\bm{y}}|{\bm{x}}) + \log p({\bm{x}}|{\bm{y}}) \big].
\end{equation}
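For intuition, a compact sketch of this bound (the full derivation is in~\Cref{sec:app:MI}) follows from writing the MI symmetrically; the last step assumes the entropies $H(X)$ and $H(Y)$ are non-negative:
\begin{equation*}
\small
I(X;Y) = \frac{1}{2} \mathbb{E}_{p({\bm{x}},{\bm{y}})} \Big[ \log \frac{p({\bm{y}}|{\bm{x}})}{p({\bm{y}})} + \log \frac{p({\bm{x}}|{\bm{y}})}{p({\bm{x}})} \Big]
= \mathcal{L}_{\text{MI}} + \frac{1}{2} \big( H(X) + H(Y) \big) \ge \mathcal{L}_{\text{MI}}.
\end{equation*}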
\textbf{Contrastive Self-Supervised Learning.}
InfoNCE was initially proposed to maximize the MI directly~\citep{oord2018representation}.
Here in GraphMVP, EBM-NCE estimates the conditional likelihood in~\Cref{eq:MI_objective} using EBM, and solves it with NCE~\cite{gutmann2010noise}. As a result, EBM-NCE can also be seen as maximizing MI between 2D and 3D views. The detailed derivations can be found in~\Cref{app:ebm_nce}.
\textbf{Generative Self-Supervised Learning.}
One alternative solution is to use a variational lower bound to approximate the conditional log-likelihood terms in~\Cref{eq:MI_objective}. Then we can follow the same pipeline in~\Cref{sec:generative_SSL}, ending up with the surrogate objective, {\em{i.e.}}, VRR in~\Cref{eq:variational_lower_bound_approximation}.
\subsection{3D Geometry as Privileged Information} \label{sec:privileged_info}
We show the theoretical insights from privileged information that motivate GraphMVP. We start by considering a supervised learning setting where $(\bm{u}_i,\bm{l}_i)$ is a feature-label pair and $\bm{u}_i^*$ is the privileged information~\cite{vapnik2009new,vapnik2015learning}. The privileged information is defined to be additional information about the input $(\bm{u}_i,\bm{l}_i)$ in order to support the prediction. For example, $\bm{u}_i$ could be some CT images of a particular disease, $\bm{l}_i$ could be the label of the disease and $\bm{u}_i^*$ is the medical report from a doctor. VC theory~\cite{vapnik2013nature,vapnik2015learning} characterizes the learning speed of an algorithm from the capacity of the algorithm and the amount of training data. Consider a binary classifier $f$ from a function class $\mathcal{F}$ with finite VC-dimension $\text{VCD}(\mathcal{F})$. With probability $1-\delta$, the expected error is upper bounded by
\begin{equation}
\small
R(f)\leq R_n(f) +\mathcal{O}\bigg( \big( \frac{\text{VCD}(\mathcal{F})-\log\delta}{n} \big)^\beta \bigg),
\end{equation}
where $R_n(f)$ denotes the training error and $n$ is the number of training samples. When the training data is separable, $R_n(f)$ will diminish to zero and $\beta$ is equal to $1$. When the training data is non-separable, $\beta$ is $\frac{1}{2}$. Therefore, the rate of convergence for the separable case is of order $1/n$, while the rate for the non-separable case is of order $1/\sqrt{n}$. We note that such a difference is huge, since reaching the same bound requires on the order of 100 training samples versus 10,000 samples. Privileged information makes the training data separable such that learning can be more efficient. Connecting these results to GraphMVP, we notice that the 3D geometric information of molecules can be viewed as a form of privileged information, since 3D information can effectively make molecules more separable for some properties~\cite{schutt2017schnet,liu2018n,liu2021spherical}. Besides, privileged information is only used in training, which matches well with our usage of 3D geometry for pre-training. In fact, using 3D structures as privileged information has already been shown to be quite useful in protein classification~\cite{vapnik2009new}, which serves as strong evidence to justify the effectiveness of 3D information in graph SSL pre-training.
\section{Experiments and Results} \label{sec:experiments}
\subsection{Experimental Settings} \label{sec:experiments_setting}
\textbf{Datasets.} We pre-train models on the same dataset and then fine-tune on a wide range of downstream tasks. We randomly select 50k qualified molecules from GEOM~\citep{axelrod2020geom} with both 2D and 3D structures for the pre-training. As clarified in~\Cref{sec:overview}, conformer ensembles can better reflect molecular properties, thus we take $C$ conformers of each molecule. For downstream tasks, we first stick to the same setting as the main graph SSL works~\citep{hu2019strategies,You2020GraphCL,you2021graph}, exploring 8 binary molecular property prediction tasks, which are all in the low-data regime. Then we explore 6 regression tasks from various low-data domains to be more comprehensive. We describe all the datasets in~\Cref{app:sec:dataset}.
\textbf{2D GNN.} We follow the research line of SSL on molecule graph~\citep{hu2019strategies,you2021graph,You2020GraphCL}, using the same Graph Isomorphism Network (GIN)~\citep{xu2018powerful} as the backbone model, with the same feature sets.
\textbf{3D GNN.} We choose SchNet~\citep{schutt2017schnet} for geometric modeling, since SchNet: (1) is found to be a strong geometric representation learning method under fair benchmarking; (2) can be trained more efficiently, compared to other recent 3D models. More detailed explanations are in~\Cref{appendix:3d_gnn}.
\subsection{Main Results on Molecular Property Prediction.} \label{sec:main_results}
We carry out comprehensive comparisons with 10 SSL baselines and random initialization. For pre-training, we apply all SSL methods on the same dataset based on GEOM~\citep{axelrod2020geom}. For fine-tuning, we follow the same setting~\citep{hu2019strategies,You2020GraphCL,you2021graph} with 8 low-data molecular property prediction tasks.
\textbf{Baselines.} Due to the rapid growth of graph SSL~\citep{liu2021graph,xie2021self,wu2021self}, we are only able to benchmark the most well-acknowledged baselines: EdgePred~\citep{hamilton2017inductive}, InfoGraph~\citep{sun2019infograph}, GPT-GNN~\citep{hu2020gpt}, AttrMask \& ContextPred~\citep{hu2019strategies}, GraphLoG~\citep{xu2021self}, G-\{Contextual, Motif\}~\citep{rong2020self}, GraphCL~\citep{You2020GraphCL}, and JOAO~\citep{you2021graph}.
\textbf{Our method.} GraphMVP\, has two key factors: (i) the masking ratio ($M$) and (ii) the number of conformers for each molecule ($C$). We set $M=0.15$ and $C=5$ by default, and explore their effects in the ablation studies in~\Cref{sec:effect_of_M_and_C}. For the EBM-NCE loss, we adopt the empirical distribution as the noise distribution. For~\Cref{eq:graphmvp_variants}, we pick the empirically optimal generative and contrastive 2D SSL methods, {\em{i.e.}}, AttrMask for GraphMVP-G\, and ContextPred for GraphMVP-C.
\begin{table}[t!]
\caption{
Results for molecular property prediction tasks.
For each downstream task, we report the mean (and standard deviation) ROC-AUC of 3 seeds with scaffold splitting.
For GraphMVP\,, we set $M=0.15$ and $C=5$.
The best and second best results are marked \textbf{\underline{bold}} and \textbf{bold}, respectively.
\vspace{-0.24cm}
}
\label{tab:main_results}
\small
\setlength{\tabcolsep}{3pt}
\centering
\begin{tabular}{l c c c c c c c c c}
\toprule
Pre-training & BBBP & Tox21 & ToxCast & Sider & ClinTox & MUV & HIV & Bace & Avg\\
\midrule
-- & 65.4(2.4) & 74.9(0.8) & 61.6(1.2) & 58.0(2.4) & 58.8(5.5) & 71.0(2.5) & 75.3(0.5) & 72.6(4.9) & 67.21 \\
\midrule
EdgePred & 64.5(3.1) & 74.5(0.4) & 60.8(0.5) & 56.7(0.1) & 55.8(6.2) & 73.3(1.6) & 75.1(0.8) & 64.6(4.7) & 65.64 \\
AttrMask & 70.2(0.5) & 74.2(0.8) & 62.5(0.4) & 60.4(0.6) & 68.6(9.6) & 73.9(1.3) & 74.3(1.3) & 77.2(1.4) & 70.16 \\
GPT-GNN & 64.5(1.1) & \textbf{75.3(0.5)} & 62.2(0.1) & 57.5(4.2) & 57.8(3.1) & 76.1(2.3) & 75.1(0.2) & 77.6(0.5) & 68.27 \\
InfoGraph & 69.2(0.8) & 73.0(0.7) & 62.0(0.3) & 59.2(0.2) & 75.1(5.0) & 74.0(1.5) & 74.5(1.8) & 73.9(2.5) & 70.10 \\
ContextPred & 71.2(0.9) & 73.3(0.5) & 62.8(0.3) & 59.3(1.4) & 73.7(4.0) & 72.5(2.2) & 75.8(1.1) & 78.6(1.4) & 70.89 \\
GraphLoG & 67.8(1.7) & 73.0(0.3) & 62.2(0.4) & 57.4(2.3) & 62.0(1.8) & 73.1(1.7) & 73.4(0.6) & 78.8(0.7) & 68.47 \\
G-Contextual & 70.3(1.6) & 75.2(0.3) & 62.6(0.3) & 58.4(0.6) & 59.9(8.2) & 72.3(0.9) & 75.9(0.9) & 79.2(0.3) & 69.21 \\
G-Motif & 66.4(3.4) & 73.2(0.8) & 62.6(0.5) & 60.6(1.1) & 77.8(2.0) & 73.3(2.0) & 73.8(1.4) & 73.4(4.0) & 70.14 \\
GraphCL & 67.5(3.3) & 75.0(0.3) & 62.8(0.2) & 60.1(1.3) & 78.9(4.2) & \textbf{77.1(1.0)} & 75.0(0.4) & 68.7(7.8) & 70.64 \\
JOAO & 66.0(0.6) & 74.4(0.7) & 62.7(0.6) & 60.7(1.0) & 66.3(3.9) & 77.0(2.2) & \textbf{76.6(0.5)} & 72.9(2.0) & 69.57 \\
\midrule
GraphMVP\, & 68.5(0.2) & 74.5(0.4) & 62.7(0.1) & \textbf{62.3(1.6)} & \textbf{79.0(2.5)} & 75.0(1.4) & 74.8(1.4) & 76.8(1.1) & 71.69 \\
GraphMVP-G & \textbf{70.8(0.5)} & \textbf{\underline{75.9(0.5)}} & \textbf{\underline{63.1(0.2)}} & 60.2(1.1) & \textbf{\underline{79.1(2.8)}} & \textbf{\underline{77.7(0.6)}} & 76.0(0.1) & \textbf{79.3(1.5)} & \textbf{72.76} \\
GraphMVP-C & \textbf{\underline{72.4(1.6)}} & 74.4(0.2) & \textbf{\underline{63.1(0.4)}} & \textbf{\underline{63.9(1.2)}} & 77.5(4.2) & 75.0(1.0) & \textbf{\underline{77.0(1.2)}} & \textbf{\underline{81.2(0.9)}} & \textbf{\underline{73.07}} \\
\bottomrule
\end{tabular}
\vspace{-2ex}
\end{table}
The main results on 8 molecular property prediction tasks are listed in~\Cref{tab:main_results}. We observe that the performance of GraphMVP\, is significantly better than the randomly initialized one, and the average performance outperforms the existing SSL methods by a large margin. In addition, GraphMVP-G\, and GraphMVP-C\, consistently improve the performance, supporting the claim: \textbf{3D geometry is complementary to the 2D topology}. GraphMVP\, leverages the information between 3D geometry and 2D topology, and 2D SSL plays the role of a regularizer to extract more 2D topological information; they extract different perspectives of information and are indeed complementary to each other.
\subsection{Ablation Study: The Effect of Masking Ratio and Number of Conformers} \label{sec:effect_of_M_and_C}
We analyze the effects of the masking ratio $M$ and the number of conformers $C$ in GraphMVP. In~\Cref{tab:main_results}, we set $M$ to 0.15 since it has been widely used in existing SSL methods~\citep{hu2019strategies,You2020GraphCL,you2021graph}, and $C$ to 5, which we will explain below. We explore the ranges $M \in \{0, 0.15, 0.3\}$ and $C \in \{1, 5, 10, 20\}$, and report the average performance. The complete results are in~\Cref{app:sec:effect_of_M_and_C}.
\begin{table}[t!]
\fontsize{8.5}{4}\selectfont
\centering
\vspace{-2ex}
\begin{minipage}[t]{.48\linewidth}
\setlength{\tabcolsep}{5.3pt}
\renewcommand\arraystretch{2.2}
\caption{Ablation of masking ratio $M$, $C\equiv5$.\vspace{-0.3cm}}
\label{tab:ablation_masking}
\centering
\begin{tabular}{c c c c}
\toprule
$M$ & GraphMVP & GraphMVP-G & GraphMVP-C\\
\midrule
0 & 71.12 & 72.15 & 72.66\\
0.15 & 71.60 & 72.76 & 73.08\\
0.30 & 71.79 & 72.91 & 73.17\\
\bottomrule
\end{tabular}
\end{minipage}
\hfill
\begin{minipage}[t]{.48\linewidth}
\setlength{\tabcolsep}{5.5pt}
\caption{Ablation of \# conformer $C$, $M\equiv0.15$.\vspace{-0.3cm}}
\label{tab:ablation_n_conformers}
\centering
\begin{tabular}{c c c c}
\toprule
$C$ & GraphMVP & GraphMVP-G & GraphMVP-C\\
\midrule
1 & 71.61 & 72.80 & 72.46\\
5 & 71.60 & 72.76 & 73.08\\
10 & 72.20 & 72.59 & 73.09\\
20 & 72.39 & 73.00 & 73.02\\
\bottomrule
\end{tabular}
\end{minipage}
\vspace{-1ex}
\end{table}
As seen in~\Cref{tab:ablation_masking}, the improvement is more obvious from $M=0$ (raw graph) to $M=0.15$ than from $M=0.15$ to $M=0.3$. This can be explained by the fact that subgraph masking with a larger ratio makes the SSL task more challenging, especially compared to the raw graph ($M=0$).
\Cref{tab:ablation_n_conformers} shows the effect of $C$. We observe that the performance is generally better when adding more conformers, but it reaches a plateau above certain thresholds. This observation matches previous findings~\citep{axelrod2020molecular}: adding more conformers to augment the representation learning is not as helpful as expected; we conclude that adding more conformers can be beneficial, yet with only little improvement. One possible reason is that, when generating the dataset, we sample the top-$C$ conformers with the highest probability and lowest energy. In other words, the top-5 conformers already cover most of the equilibrium-state conformers (over 80\%), and the effect of a larger $C$ is thus modest.
To sum up, adding more conformers might be helpful, but the computation cost can grow linearly with the increase in dataset size. On the other hand, enlarging the masking ratio will not induce extra cost, yet the performance is slightly better. Therefore, we would encourage tuning masking ratios prior to trying a larger number of conformers from the perspective of efficiency and effectiveness.
\subsection{Ablation Study: The Effect of Objective Function} \label{sec:experiment_each_loss_component}
\begin{wraptable}[13]{r}{0.5\textwidth}
\setlength{\tabcolsep}{3.53pt}
\vspace{-0.4cm}
\small
\caption{
Ablation on the objective function.
\vspace{-0.35cm}
}
\label{tab:ablation_objective_function}
\small
\centering
\begin{tabular}{l c c c}
\toprule
GraphMVP\, Loss & Contrastive & Generative & Avg\\
\midrule
Random & & & 67.21\\
\midrule
InfoNCE only & \checkmark & & 68.85\\
EBM-NCE only & \checkmark & & 70.15\\
VRR only & & \checkmark & 69.29\\
RR only & & \checkmark & 68.89\\
\midrule
InfoNCE + VRR & \checkmark & \checkmark & 70.67\\
EBM-NCE + VRR & \checkmark & \checkmark & 71.69\\
InfoNCE + RR & \checkmark & \checkmark & 70.60\\
EBM-NCE + RR & \checkmark & \checkmark & 70.94\\
\bottomrule
\end{tabular}
\end{wraptable}
In~\Cref{sec:method}, we introduce a new contrastive learning objective family called EBM-NCE, and we take either InfoNCE or EBM-NCE as the contrastive SSL. For the generative SSL task, we propose a novel objective function called variational representation reconstruction (VRR) in~\Cref{eq:variational_lower_bound_approximation}. As discussed in~\Cref{sec:generative_SSL}, stochasticity is important for GraphMVP\, since it can capture the conformer distribution for each 2D molecular graph. To verify this, we add an ablation study on \textit{representation reconstruction (RR)} by removing stochasticity in VRR. Thus, we deploy a comprehensive ablation study here to explore the effect of each individual objective function (InfoNCE, EBM-NCE, VRR and RR), followed by the pairwise combinations between them.
The results in \Cref{tab:ablation_objective_function} give certain constructive insights as follows:
(1) Each individual SSL objective function (middle block) can lead to better performance. This strengthens the claim that adding 3D information is helpful for 2D representation learning.
(2) According to the combination of those SSL objective functions (bottom block), adding both contrastive and generative SSL can consistently improve the performance. This verifies our claim that conducting SSL at both the inter-data and intra-data level is beneficial.
(3) We can see VRR is consistently better than RR on all settings, which verifies that stochasticity is an important factor in modeling 3D conformers for molecules.
\subsection{Broader Range of Downstream Tasks}
The 8 binary downstream tasks discussed so far have been widely applied in the graph SSL research line on molecules~\cite{hu2019strategies,You2020GraphCL,you2021graph}, but there are more tasks where the 3D conformers can be helpful. Here we test 4 extra regression property prediction tasks and 2 drug-target affinity tasks.
Detailed dataset statistics can be found in~\Cref{app:sec:dataset}; here we briefly describe the affinity task. Drug-target affinity (DTA) is a crucial task~\citep{pahikkala2015toward,wen2017deep,ozturk2018deepdta} in drug discovery, where one models both the molecular drugs and target proteins, with the goal of predicting their affinity scores. One recent work~\citep{nguyen2019graphdta} models the molecular drugs with a 2D GNN and the target protein (as an amino-acid sequence) with a convolutional neural network (CNN). We adopt this setting by pre-training the 2D GNN using GraphMVP.
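To illustrate this setting, below is a hypothetical sketch of an affinity prediction head that combines a (pre-trained) molecule embedding with a CNN-encoded protein sequence; the dimensions, vocabulary size, and CNN architecture are illustrative assumptions, not the exact configuration of~\citep{nguyen2019graphdta}.
\begin{verbatim}
import torch
import torch.nn as nn

class AffinityHead(nn.Module):
    # combine a molecule embedding (from the pre-trained 2D GNN) with a protein sequence encoding
    def __init__(self, mol_dim, vocab_size=26, prot_dim=128, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, prot_dim)
        self.cnn = nn.Sequential(nn.Conv1d(prot_dim, prot_dim, kernel_size=8), nn.ReLU(),
                                 nn.AdaptiveMaxPool1d(1))
        self.mlp = nn.Sequential(nn.Linear(mol_dim + prot_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, mol_emb, prot_tokens):
        # mol_emb: [B, mol_dim]; prot_tokens: [B, L] integer-encoded amino acids
        p = self.embed(prot_tokens).transpose(1, 2)   # [B, prot_dim, L]
        p = self.cnn(p).squeeze(-1)                   # [B, prot_dim]
        return self.mlp(torch.cat([mol_emb, p], dim=-1)).squeeze(-1)
\end{verbatim}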
As illustrated in~\Cref{tab:main_regression}, the consistent performance gain verifies the effectiveness of our proposed GraphMVP.
\begin{table}[t!]
\setlength{\tabcolsep}{7pt}
\centering
\caption{
Results for four molecular property prediction tasks (regression) and two DTA tasks (regression).
We report the mean RMSE of 3 seeds with scaffold splitting for molecular property downstream tasks, and mean MSE for 3 seeds with random splitting on DTA tasks.
For GraphMVP\,, we set $M=0.15$ and $C=5$.
The best performance for each task is marked in \textbf{\underline{bold}}.
We omit the std here since they are very small and indistinguishable. For complete results, please check~\Cref{sec:app:regression_results}.
\vspace{-4ex}
}
\label{tab:main_regression}
\begin{tabular}{l c c c c c c c c}
\toprule
& \multicolumn{5}{c}{Molecular Property Prediction} & \multicolumn{3}{c}{Drug-Target Affinity}\\
\cmidrule(lr){2-6} \cmidrule(lr){7-9} Pre-training & ESOL & Lipo & Malaria & CEP & Avg & Davis & KIBA & Avg\\
\midrule
--
& 1.178 & 0.744 & 1.127 & 1.254 & 1.0756
& 0.286 & 0.206 & 0.2459\\
\midrule
AM
& 1.112 & 0.730 & 1.119 & 1.256 & 1.0542
& 0.291 & 0.203 & 0.2476\\
CP
& 1.196 & 0.702 & 1.101 & 1.243 & 1.0606
& 0.279 & 0.198 & 0.2382 \\
JOAO
& 1.120 & 0.708 & 1.145 & 1.293 & 1.0663
& 0.281 & 0.196 & 0.2387\\
\midrule
GraphMVP
& 1.091 & 0.718 & 1.114 & 1.236 & 1.0397
& 0.280 & 0.178 & 0.2286\\
GraphMVP-G\,\,\,
& 1.064 & 0.691 & 1.106 & \textbf{\underline{1.228}} & 1.0221
& \textbf{\underline{0.274}} & 0.175 & 0.2248\\
GraphMVP-C\,\,\,
& \textbf{\underline{1.029}} & \textbf{\underline{0.681}} & \textbf{\underline{1.097}} & 1.244 & \textbf{\underline{1.0128}}
& 0.276 & \textbf{\underline{0.168}} & \textbf{\underline{0.2223}}\\
\bottomrule
\end{tabular}
\vspace{-1ex}
\end{table}
\subsection{Case Study}
We investigate how GraphMVP\, helps when the task objectives are challenging with respect to the 2D topology but straightforward using 3D geometry (as shown in~\Cref{fig:case}). We therefore design two case studies to test how GraphMVP\, transfers knowledge from 3D geometry into the 2D representation.
The first case study is \textit{3D Diameter Prediction}. For molecules, usually, the longer the 2D diameter is, the larger the 3D diameter (largest atomic pairwise $\ell_2$ distance). However, this does not always hold, and we are interested in using the 2D graph to predict the 3D diameter. The second case study is \textit{Long-Range Donor-Acceptor Detection}. Molecules possess a special geometric structure called the donor-acceptor bond, and we want to use the 2D molecular graph to detect this special structure. We validate that GraphMVP\, consistently brings improvements on these 2 case studies, and provide more detailed discussions and interpretations in~\Cref{appendix:case}.
\begin{figure}[htbp!]
\centering
\vspace{-2ex}
\includegraphics[width=\textwidth]{figures/Case_Study_2.pdf}
\vspace{-5ex}
\caption{We select the molecules whose properties can be easily resolved via 3D but not 2D. The randomly initialized 2D GNN achieves accuracies of $38.9\pm0.8$ and $77.9\pm1.1$, respectively. The GraphMVP\, pre-trained ones obtain scores of $42.3\pm1.3$ and $81.5\pm0.4$, outperforming all the precedents in~\Cref{sec:main_results}. We plot cases where random initialization fails but GraphMVP\, is correct.}
\label{fig:case}
\vspace{-4ex}
\end{figure}
\section{Contrastive Self-Supervised Learning} \label{sec:app:contrastive_ssl}
The essence of contrastive self-supervised learning is to align positive view pairs and contrast negative view pairs, such that the obtained representation space is well distributed~\citep{wang2020understanding}. We display the pipeline in~\Cref{fig:app:contrastive_ssl}. Along the research line in graph SSL~\citep{liu2021graph,xie2021self,wu2021self,liu2021self}, InfoNCE and EBM-NCE are the two most-widely used, as discussed below.
\begin{figure}[htb!]
\centering
\includegraphics[width=\textwidth]{figures/Diagram_Contrastive_final.pdf}
\vspace{-4ex}
\caption{
Contrastive SSL in GraphMVP. The black dashed circles represent subgraph masking.
}
\label{fig:app:contrastive_ssl}
\vspace{-2ex}
\end{figure}
\subsection{InfoNCE} \label{sec:app:infonce}
InfoNCE~\citep{oord2018representation} was first proposed to approximate the MI in~\Cref{eq:app:MI}:
\begin{equation}\label{eq:app:InfoNCE}
\footnotesize{
\begin{aligned}
\mathcal{L}_{\text{InfoNCE}} =
-\frac{1}{2} \mathbb{E} &\left[
\log \frac{\exp(f_{{\bm{x}}}({\bm{x}}, {\bm{y}}))}{\exp(f_{{\bm{x}}}({\bm{x}}, {\bm{y}})) + \sum_j \exp(f_{{\bm{x}}}({\bm{x}}^{j},{\bm{y}}))} + \log \frac{\exp(f_{{\bm{y}}}({\bm{y}},{\bm{x}}))}{\exp(f_{{\bm{y}}}({\bm{y}},{\bm{x}})) + \sum_j \exp(f_{{\bm{y}}}({\bm{y}}^{j},{\bm{x}}))} \right],
\end{aligned}
}
\end{equation}
where ${\bm{x}}^{j}, {\bm{y}}^{j}$ are randomly sampled 2D and 3D views with respect to the anchored pair $({\bm{x}},{\bm{y}})$. $f_{{\bm{x}}}({\bm{x}},{\bm{y}}), f_{{\bm{y}}}({\bm{y}},{\bm{x}})$ are scoring functions for the two corresponding views, whose formulation can be quite flexible. Here we use $f_{\bm{x}}({\bm{x}},{\bm{y}}) = f_{\bm{y}}({\bm{y}},{\bm{x}}) = \langle h_{\bm{x}}, h_{\bm{y}} \rangle$.
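As a concrete illustration, the following is a minimal PyTorch sketch of~\Cref{eq:app:InfoNCE} (a simplified sketch rather than our exact implementation: the negative views ${\bm{x}}^{j}, {\bm{y}}^{j}$ are taken to be the other molecules in the mini-batch, and the positive term is kept inside the normalizing sum, as is common practice):
\begin{verbatim}
import torch
import torch.nn.functional as F

def info_nce_loss(h_x, h_y):
    # h_x: (B, d) 2D representations, h_y: (B, d) 3D representations of the same molecules.
    scores_x = h_y @ h_x.t()   # scores_x[i, j] = <h_y^i, h_x^j>; row i contrasts over x^j
    scores_y = h_x @ h_y.t()   # scores_y[i, j] = <h_x^i, h_y^j>; row i contrasts over y^j
    labels = torch.arange(h_x.size(0), device=h_x.device)  # positive pair on the diagonal
    # cross_entropy returns -log softmax at the positive column, one log-ratio per anchor
    return 0.5 * (F.cross_entropy(scores_x, labels) + F.cross_entropy(scores_y, labels))
\end{verbatim}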
\paragraph{Derivation of InfoNCE}
\begin{equation}
\begin{aligned}
I(X;Y) - \log (K)
& = \mathbb{E}_{p({\bm{x}}, {\bm{y}})} \big[\log \frac{1}{K} \frac{p({\bm{x}}, {\bm{y}})}{p({\bm{x}}) p({\bm{y}})} \big]\\
& = \sum_{{\bm{x}}^{i},{\bm{y}}^{i}} \big[\log \frac{1}{K} \frac{p({\bm{x}}^{i},{\bm{y}}^{i})}{p({\bm{x}}^{i}) p({\bm{y}}^{i})} \big]\\
& \ge -\sum_{{\bm{x}}^{i},{\bm{y}}^{i}} \big[\log \big( 1 + (K-1) \frac{p({\bm{x}}^{i}) p({\bm{y}}^{i})}{p({\bm{x}}^{i},{\bm{y}}^{i})} \big)\big]\\
& = -\sum_{{\bm{x}}^{i},{\bm{y}}^{i}} \big[\log \frac{\frac{p({\bm{x}}^{i},{\bm{y}}^{i})}{p({\bm{x}}^{i}) p({\bm{y}}^{i})} + (K-1)}{\frac{p({\bm{x}}^{i},{\bm{y}}^{i})}{p({\bm{x}}^{i}) p({\bm{y}}^{i})}} \big]\\
& \approx -\sum_{{\bm{x}}^{i},{\bm{y}}^{i}} \big[ \log \frac{\frac{p({\bm{x}}^{i},{\bm{y}}^{i})}{p({\bm{x}}^{i}) p({\bm{y}}^{i})} + (K-1)\mathbb{E}_{{\bm{x}}^{j} \ne {\bm{x}}^{i}}\frac{p(x^{j},{\bm{y}}^{i})}{p({\bm{x}}^{j}) p({\bm{y}}^{i})} }{\frac{p({\bm{x}}^{i},{\bm{y}}^{i})}{p({\bm{x}}^{i}) p({\bm{y}}^{i})}} \big] \quad \text{// \textcircled{1}}\\
& = \sum_{{\bm{x}}^{i},{\bm{y}}^{i}} \big[ \log \frac{\exp(f_{\bm{x}}({\bm{x}}^{i},{\bm{y}}^{i}))}{\exp(f_{\bm{x}}({\bm{x}}^{i},{\bm{y}}^{i})) + \sum_{j=1}^K \exp(f_{\bm{x}}({\bm{x}}^{j},{\bm{y}}^{i}))} \big],
\end{aligned}
\end{equation}
where we set $ f_{\bm{x}}({\bm{x}}^{i},{\bm{y}}^{i}) = \log \frac{p({\bm{x}}^{i},{\bm{y}}^{i})}{p({\bm{x}}^{i})p({\bm{y}}^{i})}$.
Notice that in \textcircled{1}, we are using data $x \in X$ as the anchor points. If we use the $y \in Y$ as the anchor points and follow the similar steps, we can obtain
\begin{equation}
\begin{aligned}
I(X;Y) - \log(K) \ge \sum_{{\bm{y}}^{i},{\bm{x}}^{i}} \big[ \log \frac{\exp(f_{\bm{y}}({\bm{y}}^{i},{\bm{x}}^{i}))}{\exp(f_{\bm{y}}({\bm{y}}^{i},{\bm{x}}^{i})) + \sum_{j=1}^K \exp (f_{\bm{y}}({\bm{y}}^{j},{\bm{x}}^{i}))} \big].
\end{aligned}
\end{equation}
Thus, by adding both together, we obtain the objective function in~\Cref{eq:app:InfoNCE}.
\subsection{EBM-NCE} \label{app:ebm_nce}
Here we provide an alternative approach to maximizing MI using an energy-based model (EBM). To the best of our knowledge, we are the \textbf{first} to give a rigorous derivation of using an EBM to maximize the MI.
\subsubsection{Energy-Based Model (EBM)}
The energy-based model (EBM) is a powerful tool for modeling the data distribution. The classic formulation is:
\begin{equation} \label{eq:app:EBM_original}
p({\bm{x}}) = \frac{\exp(-E({\bm{x}}))}{A},
\end{equation}
where the bottleneck is the intractable partition function $A = \int_{\bm{x}} \exp(-E({\bm{x}})) d{\bm{x}}$. Recently, there has been a lot of progress in this direction~\citep{gutmann2010noise,du2020improved,song2020score,song2021train}. Noise Contrastive Estimation (NCE)~\citep{gutmann2010noise} is one of the powerful tools here, as we will introduce later.
\subsubsection{EBM for MI}
Recall that our objective function is~\Cref{eq:app:MI_objective}: $\mathcal{L}_{\text{MI}} = \frac{1}{2} [H(Y|X) + H(X|Y)]$. We then model the conditional likelihoods with an energy-based model (EBM). This gives us
\begin{equation} \label{eq:app:EBM}
\begin{aligned}
\mathcal{L}_{\text{EBM}} = -\frac{1}{2} \mathbb{E}_{p({\bm{x}},{\bm{y}})} \Big[ \log \frac{\exp(f_{\bm{x}}({\bm{x}}, {\bm{y}}))}{A_{{\bm{x}}|{\bm{y}}}} + \log \frac{\exp(f_{\bm{y}}({\bm{y}}, {\bm{x}}))}{A_{{\bm{y}}|{\bm{x}}}} \Big],
\end{aligned}
\end{equation}
where $f_{\bm{x}}({\bm{x}}, {\bm{y}}) = -E({\bm{x}}|{\bm{y}})$ and $f_{\bm{y}}({\bm{y}}, {\bm{x}}) = -E({\bm{y}}|{\bm{x}})$ are the negative energy functions, and $A_{{\bm{x}}|{\bm{y}}}$ and $A_{{\bm{y}}|{\bm{x}}}$ are the corresponding partition functions.
Under the EBM framework, if we solve~\Cref{eq:app:EBM} with Noise Contrastive Estimation (NCE)~\citep{gutmann2010noise}, the final EBM-NCE objective is
\begin{equation} \label{eq:app:EBM_NCE}
\begin{aligned}
\mathcal{L}_{\text{EBM-NCE}}
= & -\frac{1}{2} \mathbb{E}_{p_{\text{data}}(y)} \Big[ \mathbb{E}_{p_n({\bm{x}}|{\bm{y}})} [\log \big(1-\sigma(f_{\bm{x}}({\bm{x}}, {\bm{y}}))\big)] + \mathbb{E}_{p_{\text{data}}({\bm{x}}|{\bm{y}})} [\log \sigma(f_{\bm{x}}({\bm{x}}, {\bm{y}}))] \Big] \\
& ~~ - \frac{1}{2} \mathbb{E}_{p_{\text{data}}(x)} \Big[ \mathbb{E}_{p_n({\bm{y}}|{\bm{x}})} [\log \big(1-\sigma(f_{\bm{y}}({\bm{y}}, {\bm{x}}))\big)] + \mathbb{E}_{p_{\text{data}}({\bm{y}}|{\bm{x}})} [\log \sigma(f_{\bm{y}}({\bm{y}}, {\bm{x}}))] \Big].
\end{aligned}
\end{equation}
Next we will give the detailed derivations.
\subsubsection{Derivation of conditional EBM with NCE}
WLOG, let us consider $p_\theta({\bm{x}}|{\bm{y}})$ first; under the EBM it reads:
\begin{equation} \label{eq:app:EBM_SSL}
p_\theta({\bm{x}}|{\bm{y}}) = \frac{\exp(-E({\bm{x}}|{\bm{y}}))}{ \int \exp(-E({\tilde {\bm{x}}}|{\bm{y}})) d{\tilde {\bm{x}}}} = \frac{\exp(f_{\bm{x}}({\bm{x}}, {\bm{y}}))}{\int \exp(f_{\bm{x}}({\tilde {\bm{x}}}, {\bm{y}})) d{\tilde {\bm{x}}}} = \frac{\exp(f_{\bm{x}}({\bm{x}}, {\bm{y}}))}{A_{{\bm{x}}|{\bm{y}}}}.
\end{equation}
Then we solve this using NCE. NCE handles the intractability issue by transforming it into a binary classification task. We take the partition function $A_{{\bm{x}}|{\bm{y}}}$ as a parameter, and introduce a noise distribution $p_n$. Based on this, we introduce a mixture model: ${\bm{z}}=0$ if the conditional ${\bm{x}}|{\bm{y}}$ is from $p_n({\bm{x}}|{\bm{y}})$, and ${\bm{z}}=1$ if ${\bm{x}}|{\bm{y}}$ is from $p_{\text{data}}({\bm{x}}|{\bm{y}})$. So the joint distribution is:
\begin{equation*}
p_{n,\text{\text{data}}}({\bm{x}}|{\bm{y}}) = p({\bm{z}}=1) p_{\text{data}}({\bm{x}}|{\bm{y}}) + p({\bm{z}}=0) p_n({\bm{x}}|{\bm{y}})
\end{equation*}
The posterior of $p({\bm{z}}=0|{\bm{x}},{\bm{y}})$ is
\begin{equation*}
p_{n,\text{\text{data}}}({\bm{z}}=0|{\bm{x}},{\bm{y}}) = \frac{p({\bm{z}}=0) p_n({\bm{x}}|{\bm{y}})}{p(z=0) p_n({\bm{x}}|{\bm{y}}) + p({\bm{z}}=1) p_{\text{data}}({\bm{x}}|{\bm{y}})} = \frac{\nu \cdot p_n({\bm{x}}|{\bm{y}})}{\nu \cdot p_n({\bm{x}}|{\bm{y}}) + p_{\text{data}}({\bm{x}}|{\bm{y}})},
\end{equation*}
where $\nu = \frac{p({\bm{z}}=0)}{p({\bm{z}}=1)}$.
Similarly, we can have the joint distribution under EBM framework as:
\begin{equation*}
p_{n, \theta}({\bm{x}}|{\bm{y}}) = p({\bm{z}}=0) p_n({\bm{x}}|{\bm{y}}) + p({\bm{z}}=1) p_{\theta}({\bm{x}}|{\bm{y}})
\end{equation*}
And the corresponding posterior is:
\begin{equation*}
p_{n,\theta}({\bm{z}}=0|{\bm{x}},{\bm{y}}) = \frac{p({\bm{z}}=0) p_n({\bm{x}}|{\bm{y}})}{p({\bm{z}}=0) p_n({\bm{x}}|{\bm{y}}) + p({\bm{z}}=1) p_{\theta}({\bm{x}}|{\bm{y}})} = \frac{\nu \cdot p_n({\bm{x}}|{\bm{y}})}{\nu \cdot p_n({\bm{x}}|{\bm{y}}) + p_{\theta}({\bm{x}}|{\bm{y}})}
\end{equation*}
We indirectly match $p_\theta({\bm{x}}|{\bm{y}})$ to $p_{\text{data}}({\bm{x}}|{\bm{y}})$ by fitting $p_{n,\theta}({\bm{z}}|{\bm{x}},{\bm{y}})$ to $p_{n,\text{\text{data}}}({\bm{z}}|{\bm{x}},{\bm{y}})$ by minimizing their KL-divergence:
\begin{equation} \label{eq:app:EBM_01}
\begin{aligned}
& \min_\theta D_{\text{KL}}(p_{n,\text{\text{data}}}({\bm{z}}|{\bm{x}},{\bm{y}}) || p_{n,\theta}({\bm{z}}|{\bm{x}},{\bm{y}}))\;, \quad \text{which (up to a $\theta$-independent constant) amounts to maximizing} \\
& \quad \mathbb{E}_{p_{n,\text{\text{data}}}({\bm{x}},{\bm{z}}|{\bm{y}})} [\log p_{n,\theta}({\bm{z}}|{\bm{x}},{\bm{y}})] \\
& = \int \sum_{\bm{z}} p_{n,\text{\text{data}}}({\bm{x}},{\bm{z}}|{\bm{y}}) \cdot \log p_{n,\theta}({\bm{z}}|{\bm{x}},{\bm{y}}) d {\bm{x}}\\
& = \int \Big\{ p({\bm{z}}=0) p_{n,\text{\text{data}}}({\bm{x}}|{\bm{y}},{\bm{z}}=0) \log p_{n,\theta}({\bm{z}}=0|{\bm{x}},{\bm{y}}) \\
& \quad\quad\quad\quad + p({\bm{z}}=1) p_{n,\text{\text{data}}}({\bm{x}}|{\bm{z}}=1,{\bm{y}}) \log p_{n,\theta}({\bm{z}}=1|{\bm{x}},{\bm{y}}) \Big\} d{\bm{x}} \\
& = \nu \cdot \mathbb{E}_{p_{n}({\bm{x}}|{\bm{y}})} \Big[\log p_{n,\theta}({\bm{z}}=0|{\bm{x}},{\bm{y}}) \Big] + \mathbb{E}_{p_{\text{data}}({\bm{x}}|{\bm{y}} )} \Big[\log p_{n,\theta}({\bm{z}}=1|{\bm{x}},{\bm{y}}) \Big] \\
& = \nu \cdot \mathbb{E}_{p_{n}({\bm{x}}|{\bm{y}})} \Big[ \log \frac{\nu \cdot p_n({\bm{x}}|{\bm{y}})}{\nu \cdot p_n({\bm{x}}|{\bm{y}}) + p_{\theta}({\bm{x}}|{\bm{y}})} \Big] + \mathbb{E}_{p_{\text{data}}({\bm{x}}|{\bm{y}} )} \Big[ \log \frac{p_\theta({\bm{x}}|{\bm{y}})}{\nu \cdot p_n({\bm{x}}|{\bm{y}}) + p_{\theta}({\bm{x}}|{\bm{y}})} \Big].\\
\end{aligned}
\end{equation}
The optimal distribution is an estimate of the actual (data) distribution, {\em{i.e.}}, $p_\theta({\bm{x}}|{\bm{y}}) \approx p_{\text{data}}({\bm{x}}|{\bm{y}})$. We can follow similar steps for $p_\theta({\bm{y}}|{\bm{x}}) \approx p_{\text{data}}({\bm{y}}|{\bm{x}})$. Thus, following~\Cref{eq:app:EBM_01}, the objective function is to maximize
\begin{equation} \label{eq:app:EBM_02}
\begin{aligned}
\nu \cdot \mathbb{E}_{p_{\text{data}}({\bm{y}})} \mathbb{E}_{p_{n}({\bm{x}}|{\bm{y}})} \Big[ \log \frac{\nu \cdot p_n({\bm{x}}|{\bm{y}})}{\nu \cdot p_n({\bm{x}}|{\bm{y}}) + p_{\theta}({\bm{x}}|{\bm{y}})} \Big] + \mathbb{E}_{p_{\text{data}}({\bm{y}})} \mathbb{E}_{p_{\text{data}}({\bm{x}}|{\bm{y}} )} \Big[ \log \frac{p_\theta({\bm{x}}|{\bm{y}})}{\nu \cdot p_n({\bm{x}}|{\bm{y}}) + p_{\theta}({\bm{x}}|{\bm{y}})} \Big].
\end{aligned}
\end{equation}
We then adopt three strategies to approximate~\Cref{eq:app:EBM_02}:
\begin{enumerate}
\item \textbf{Self-normalization.} When the EBM is very expressive, {\em{i.e.}}, using a deep neural network for modeling, we can assume it is able to approximate the normalized density directly~\cite{mnih2012fast,song2021train}. In other words, we can set the partition function $A=1$. This is a self-normalized EBM-NCE, with normalizing constant close to 1, {\em{i.e.}}, $p({\bm{x}}) = \exp(-E({\bm{x}})) = \exp(f({\bm{x}}))$ in~\Cref{eq:app:EBM_original}.
\item \textbf{Exponential tilting term.} Exponential tilting term~\cite{arbel2020generalized} is another useful trick. It models the distribution as $\tilde p_\theta({\bm{x}}) = q({\bm{x}}) \exp(-E_\theta({\bm{x}})) $, where $q({\bm{x}})$ is the reference distribution. If we use the same reference distribution as the noise distribution, the tilted probability is $\tilde p_\theta({\bm{x}}) = p_n({\bm{x}}) \exp(-E_\theta({\bm{x}}))$ in~\Cref{eq:app:EBM_original}.
\item \textbf{Sampling.} In many cases, we only need to sample one negative point for each data point, {\em{i.e.}}, $\nu=1$.
\end{enumerate}
Following these three strategies, the objective function to optimize $p_{\theta}({\bm{x}}|{\bm{y}})$ becomes
\begin{equation}
\begin{aligned}
& \mathbb{E}_{p_{n}({\bm{x}}|{\bm{y}})} \Big[ \log \frac{ p_n({\bm{x}}|{\bm{y}})}{ p_n({\bm{x}}|{\bm{y}}) + \tilde p_{\theta}({\bm{x}}|{\bm{y}})} \Big] + \mathbb{E}_{p_{\text{data}}({\bm{x}}|{\bm{y}} )} \Big[ \log \frac{\tilde p_\theta({\bm{x}}|{\bm{y}})}{ p_n({\bm{x}}|{\bm{y}}) + \tilde p_{\theta}({\bm{x}}|{\bm{y}})} \Big]\\
= & \mathbb{E}_{p_{n}({\bm{x}}|{\bm{y}})} \Big[ \log \frac{1}{1 + p_{\theta}({\bm{x}}|{\bm{y}})} \Big] + \mathbb{E}_{p_{\text{data}}({\bm{x}}|{\bm{y}} )} \Big[ \log \frac{p_\theta({\bm{x}}|{\bm{y}})}{1 + p_{\theta}({\bm{x}}|{\bm{y}})} \Big]\\
= & \mathbb{E}_{p_{n}({\bm{x}}|{\bm{y}})} \Big[ \log \frac{\exp (- f_{\bm{x}}({\bm{x}}, {\bm{y}}))}{\exp (- f_{\bm{x}}({\bm{x}}, {\bm{y}})) + 1} \Big] + \mathbb{E}_{p_{\text{data}}({\bm{x}}|{\bm{y}} )} \Big[ \log \frac{1}{\exp (- f_{\bm{x}}({\bm{x}}, {\bm{y}})) + 1} \Big]\\
= & \mathbb{E}_{p_{n}({\bm{x}}|{\bm{y}})} \Big[ \log \big(1-\sigma( f_{\bm{x}}({\bm{x}}, {\bm{y}}))\big) \Big] + \mathbb{E}_{p_{\text{data}}({\bm{x}}|{\bm{y}} )} \Big[ \log \sigma( f_{\bm{x}}({\bm{x}}, {\bm{y}})) \Big].
\end{aligned}
\end{equation}
Thus, the final EBM-NCE contrastive SSL objective is
\begin{equation}
\begin{aligned}
\mathcal{L}_{\text{EBM-NCE}}
& = -\frac{1}{2} \mathbb{E}_{p_{\text{data}}({\bm{y}})} \Big[\mathbb{E}_{p_{n}({\bm{x}}|{\bm{y}})} \log \big(1-\sigma( f_{\bm{x}}({\bm{x}}, {\bm{y}}))\big) + \mathbb{E}_{p_{\text{data}}({\bm{x}}|{\bm{y}} )} \log \sigma( f_{\bm{x}}({\bm{x}}, {\bm{y}})) \Big]\\
& ~~~~ - \frac{1}{2} \mathbb{E}_{p_{\text{data}}({\bm{x}})} \Big[\mathbb{E}_{p_{n}({\bm{y}}|{\bm{x}})} \log \big(1-\sigma( f_{\bm{y}}({\bm{y}},{\bm{x}}))\big) + \mathbb{E}_{p_{\text{data}}({\bm{y}},{\bm{x}})} \log \sigma( f_{\bm{y}}({\bm{y}},{\bm{x}})) \Big].
\end{aligned}
\end{equation}
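For completeness, the following is a minimal PyTorch sketch of this objective (a simplified sketch under the three strategies above with $\nu=1$; taking the noise samples to be representations of randomly re-paired molecules from the batch is an illustrative choice, not a prescription of the noise distribution):
\begin{verbatim}
import torch
import torch.nn.functional as F

def ebm_nce_loss(h_x, h_y, h_x_neg, h_y_neg):
    # h_x, h_y: (B, d) positive 2D/3D pairs; h_x_neg, h_y_neg: (B, d) noise samples.
    pos   = (h_x * h_y).sum(dim=-1)        # f(x, y) on data pairs
    neg_x = (h_x_neg * h_y).sum(dim=-1)    # f(x', y) with x' ~ p_n(x|y)
    neg_y = (h_y_neg * h_x).sum(dim=-1)    # f(y', x) with y' ~ p_n(y|x)
    # log sigma(f) on data pairs and log(1 - sigma(f)) = logsigmoid(-f) on noise pairs
    loss_x = -(F.logsigmoid(pos).mean() + F.logsigmoid(-neg_x).mean())
    loss_y = -(F.logsigmoid(pos).mean() + F.logsigmoid(-neg_y).mean())
    return 0.5 * (loss_x + loss_y)
\end{verbatim}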
\subsection{EBM-NCE v.s. JSE and InfoNCE}
We acknowledge that there are many other contrastive objectives~\citep{poole2019variational} that can be used to maximize MI. However, in the research line of graph SSL, as summarized in several recent survey papers~\citep{xie2021self,liu2021graph,wu2021self}, the two most used ones are InfoNCE and Jensen-Shannon Estimator (JSE)~\citep{nowozin2016f,hjelm2018learning}.
We find that JSE is very similar to EBM-NCE, although the underlying perspectives are quite different, as explained below.
\begin{enumerate}
\item \textbf{Derivation and Intuition.} The derivation process and underlying intuition are different. JSE~\citep{nowozin2016f} starts from the f-divergence, and then applies variational estimation and Fenchel duality on the function $f$. Our proposed EBM-NCE is more straightforward: it models the conditional distribution in the MI lower bound~\Cref{eq:app:MI_objective} with an EBM, and solves it using NCE.
\item \textbf{Flexibility.} Modeling the conditional distribution with an EBM provides a broader family of algorithms. NCE is just one solution to it, and recent progress on score matching~\citep{song2020score,song2021train} and contrastive divergence~\citep{du2020improved}, though no longer contrastive SSL, adds further promising directions. Thus, EBM can provide a potential unified framework for structuring our understanding of self-supervised learning.
\item \textbf{Noise distribution.} Starting from~\citep{hjelm2018learning}, all the following works on graph SSL~\cite{sun2019infograph,xie2021self,liu2021graph,wu2021self} have been adopting the empirical distribution as the noise distribution. However, this is not the case in EBM-NCE. Classic EBM-NCE uses a fixed noise distribution, while more recent work~\citep{arbel2020generalized} extends it with an adaptively learnable noise distribution. With this flexibility, more advanced sampling strategies (w.r.t.\ the noise distribution) can be proposed, {\em{e.g.}}, adversarial negative sampling in \citep{hu2021adco}.
\end{enumerate}
Above, we summarize three key differences between EBM-NCE and JSE, together with a solid and straightforward derivation of EBM-NCE. We believe this can provide an insightful perspective on SSL to the community.
According to the empirical results in~\Cref{sec:experiment_each_loss_component}, we observe that EBM-NCE is better than InfoNCE. This can be explained using the claim from~\citep{khosla2020supervised}, whose main technical contribution is to construct many positives and many negatives per anchor point. The binary cross-entropy in EBM-NCE realizes this to some extent, by scoring all positive pairs as positive and all negative pairs as negative, whereas the softmax-based cross-entropy in InfoNCE fails to capture this.
To conclude, we introduce the EBM for modeling MI, which opens many potential avenues. As for contrastive SSL, EBM-NCE provides a better perspective than JSE, and performs better than InfoNCE on graph-level self-supervised learning.
\section{Preliminaries} \label{sec:preliminary}
We first outline the key concepts and notations used in this work. Self-supervised learning (SSL) is based on the \textit{view} design, where each view provides a specific aspect and modality of the data. Each molecule has two natural views: the 2D graph incorporates the topological structure defined by the adjacency, while the 3D graph can better reflect the geometry and spatial relation. From a chemical perspective, 3D geometric graphs focus on the \textit{energy} while 2D graphs emphasize the \textit{topological} information; thus they can be composed for learning more informative representation in GraphMVP. \textit{Transformation} is an atomic operation in SSL that can extract specific information from each view. Next, we will briefly introduce how to represent these two views.
\textbf{2D Molecular Graph} represents molecules as 2D graphs, with atoms as nodes and bonds as edges, respectively. We denote each 2D graph as $g_{\text{2D}} = (X, E)$, where $X$ is the atom attribute matrix and $E$ is the bond attribute matrix. Notice that here $E$ also includes the bond connectivity. Then we will apply a transformation function $T_{\text{2D}}$ on the topological graph. Given a 2D molecular graph $g_{\text{2D}}$, its representation $h_{\text{2D}}$ can be obtained from a \textit{2D graph neural network (GNN)} model:
\begin{equation} \label{eq:2d_gnn}
h_{\text{2D}} = \text{GNN-2D}(T_{\text{2D}}(g_{\text{2D}})) = \text{GNN-2D}(T_{\text{2D}}(X, E)).
\end{equation}
\textbf{3D Molecular Graph} additionally includes the spatial positions of the atoms, which need not be static since atoms are in continual motion on \textit{a potential energy surface}~\citep{axelrod2020geom}.\footnote{A more rigorous way of defining a conformer is in~\citep{moss1996basic}: a conformer is an isomer of a molecule that differs from another isomer by the rotation of a single bond in the molecule.} The 3D structures at the local minima on this surface are called \textit{conformers}. As molecular properties are determined by the ensemble of conformers~\citep{hawkins2017conformation}, GraphMVP\, provides a novel perspective on adopting 3D conformers for learning better representations. Given a conformer $g_{\text{3D}} = (X, R)$, its representation via a \textit{3D GNN} model is:
\begin{equation} \label{eq:3d_gnn}
h_{\text{3D}} = \text{GNN-3D}(T_{\text{3D}}(g_{\text{3D}})) = \text{GNN-3D}(T_{\text{3D}}(X, R)),
\end{equation}
where $R$ is the 3D-coordinate matrix and $T_{\text{3D}}$ is the 3D transformation.
In what follows, for notation simplicity, we use ${\bm{x}}$ and ${\bm{y}}$ for the 2D and 3D graphs, {\em{i.e.}}, ${\bm{x}} \triangleq g_{\text{2D}}$ and ${\bm{y}} \triangleq g_{\text{3D}}$. Then the latent representations are denoted as $h_{\bm{x}}$ and $h_{\bm{y}}$.
\part{\Large\centering{Appendix}}
\parttoc
\clearpage
\input{09_related}
\clearpage
\input{10_MI}
\clearpage
\input{11_contrastive_ssl}
\clearpage
\input{12_generative_ssl}
\clearpage
\input{13_datasets}
\input{14_experiments}
\section{Maximize Mutual Information} \label{sec:app:MI}
In what follows, we will use $X$ and $Y$ to denote the data spaces for the 2D graph and the 3D graph, respectively. Then the latent representations are denoted as $h_{\bm{x}}$ and $h_{\bm{y}}$.
\subsection{Formulation}
The standard formulation for mutual information (MI) is
\begin{equation} \label{eq:app:MI}
\begin{aligned}
I(X;Y)
& = \mathbb{E}_{p({\bm{x}},{\bm{y}})} \big[ \log \frac{p({\bm{x}},{\bm{y}})}{p({\bm{x}}) p({\bm{y}})} \big].
\end{aligned}
\end{equation}
An intuitive illustration of MI, inspired by Wikipedia, is given in~\Cref{fig:app:MI}.
\begin{figure}[htb!]
\centering
\includegraphics[width=0.5\textwidth]{figures/MutualInfo_Final.pdf}
\caption{Venn diagram of mutual information, inspired by Wikipedia.}
\label{fig:app:MI}
\end{figure}
Mutual information (MI) between random variables measures the corresponding non-linear dependence. As can be seen in~\Cref{eq:app:MI}, the larger the divergence between the joint $p({\bm{x}}, {\bm{y}})$ and the product of the marginals $p({\bm{x}}) p({\bm{y}})$, the stronger the dependence between $X$ and $Y$.
Thus, following this logic, maximizing MI between 2D and 3D views can force the 3D/2D representation to capture higher-level factors, {\em{e.g.}}, the occurrence of important substructure that is semantically vital for downstream tasks. Or equivalently, maximizing MI can decrease the uncertainty in 2D representation given 3D geometric information.
\subsection{A Lower Bound to MI}
To estimate MI, we first derive a lower bound:
\begin{equation}
\begin{aligned}
I(X;Y)
& = \mathbb{E}_{p({\bm{x}},{\bm{y}})} \Big[ \log \frac{p({\bm{x}},{\bm{y}})}{p({\bm{x}}) p({\bm{y}})} \Big]\\
& \ge \mathbb{E}_{p({\bm{x}},{\bm{y}})} \Big[ \log \frac{p({\bm{x}},{\bm{y}})}{\sqrt{p({\bm{x}}) p({\bm{y}})}} \Big]\\
& = \frac{1}{2} \mathbb{E}_{p({\bm{x}},{\bm{y}})} \Big[ \log \frac{(p({\bm{x}},{\bm{y}}))^2}{p({\bm{x}}) p({\bm{y}})} \Big]\\
& = \frac{1}{2} \mathbb{E}_{p({\bm{x}},{\bm{y}})} \Big[ \log p({\bm{x}}|{\bm{y}}) \Big] + \frac{1}{2} \mathbb{E}_{p({\bm{x}},{\bm{y}})} \Big[ \log p({\bm{y}}|{\bm{x}}) \Big]\\
& = -\frac{1}{2} [H(Y|X) + H(X|Y)].
\end{aligned}
\end{equation}
Thus, we transform the MI maximization problem into minimizing the following objective:
\begin{equation} \label{eq:app:MI_objective}
\begin{aligned}
\mathcal{L}_{\text{MI}} & = \frac{1}{2} [H(Y|X) + H(X|Y)].
\end{aligned}
\end{equation}
In the following sections, we will describe two self-supervised learning methods for maximizing MI. Notice that these methods are very general and can be applied to various applications. Here we apply them mainly to make 3D geometry useful for 2D representation learning on molecules.
\section{GraphMVP: Graph Multi-View Pre-training} \label{sec:method}
Our model, termed as Graph Multi-View Pre-training (GraphMVP), conducts self-supervised learning (SSL) pre-training with 3D information. The 3D conformers encode rich information about the molecule energy and spatial structure, which are complementary to the 2D topology. Thus, applying SSL between the 2D and 3D views will provide a better 2D representation, which implicitly embeds the ensembles of energies and geometric information for molecules.
In the following, we first present an overview of GraphMVP, and then introduce two pretext tasks specialized for 3D conformation structures. Finally, we summarize a broader graph SSL family that augments the prevailing 2D molecular graph representation learning with 3D geometry.
\subsection{Overview of GraphMVP} \label{sec:overview}
\begin{figure}[ht]
\centering
\vspace{-2ex}
\includegraphics[width=\textwidth]{figures/Diagram_Bundle_4.pdf}
\vspace{-2ex}
\caption{
Overview of the pre-training stage in GraphMVP. The black dashed circles denote subgraph masking, and we mask the same region in the 2D and 3D graphs. Multiple views of the molecules (herein: Halicin) are mapped to the representation space via 2D and 3D GNN models, where we conduct GraphMVP\ for SSL pre-training, using both contrastive and generative pretext tasks.
}
\label{fig:both_ssl}
\end{figure}
In general, GraphMVP\, leverages 2D topology and 3D geometry as two complementary views for each molecule. By performing SSL between these views, it is expected to learn a 2D representation enhanced with 3D conformation, which can better reflect certain molecular properties.
As in generic SSL pre-training pipelines, GraphMVP\, has two stages: pre-training, then fine-tuning. In the pre-training stage, we conduct SSL via auxiliary tasks on data collections that provide both 2D and 3D molecular structures. During fine-tuning, the pre-trained 2D GNN models are subsequently fine-tuned on specific downstream tasks, where only 2D molecular graphs are available.
At the SSL pre-training stage, we design two pretext tasks: one contrastive and one generative. We conjecture and then empirically prove that these two tasks focus on different learning aspects, which are summarized in the following two points. (1) From the perspective of representation learning, contrastive SSL utilizes \textbf{inter-data} knowledge and generative SSL utilizes \textbf{intra-data} knowledge. For contrastive SSL, one key step is to obtain the negative view pairs for inter-data contrasting; while generative SSL focuses on each data point itself, by reconstructing the key features at an intra-data level. (2) From the perspective of distribution learning, contrastive SSL and generative SSL learn the data distribution in a \textbf{local} and a \textbf{global} manner, respectively. Contrastive SSL learns the distribution locally by contrasting the pairwise distance at an inter-data level. Thus, with a sufficient number of data points, the local contrastive operation can iteratively recover the data distribution. Generative SSL, on the other hand, learns the global data density function directly.
Therefore, contrastive and generative SSL are essentially conducting representation and distribution learning with different intuitions and disciplines, and we expect that combining both can lead to a better representation. We later carry out an ablation study (\Cref{sec:experiment_each_loss_component}) to verify this empirically.
In addition, to make the pretext tasks more challenging, we take views for each molecule by randomly masking $M$ nodes (and corresponding edges) as the transformation function, {\em{i.e.}}, $T_{\text{2D}} = T_{\text{3D}} =\text{mask}$. This trick has been widely used in graph SSL~\citep{hu2019strategies,You2020GraphCL,you2021graph} and has shown robust improvements.
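For illustration, the following is a minimal sketch of the masking transformation on the 2D view (PyTorch, assuming a recent version for \texttt{torch.isin}; the mask-token convention and the choice to drop bonds incident to masked atoms are our own simplifying assumptions):
\begin{verbatim}
import torch

def mask_nodes(x, edge_index, mask_ratio=0.15, mask_token=0):
    # x: (num_atoms, num_feat) atom features; edge_index: (2, num_bonds) bond connectivity.
    # Sketch of T_2D = T_3D = mask: hide a random subset of atoms and drop their bonds.
    num_atoms = x.size(0)
    num_mask = max(1, int(mask_ratio * num_atoms))
    masked = torch.randperm(num_atoms)[:num_mask]
    x = x.clone()
    x[masked] = mask_token
    keep = ~(torch.isin(edge_index[0], masked) | torch.isin(edge_index[1], masked))
    return x, edge_index[:, keep]
\end{verbatim}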
\subsection{Contrastive Self-Supervised Learning between 2D and 3D Views} \label{sec:contrastive_SSL}
The main idea of contrastive self-supervised learning (SSL)~\citep{oord2018representation,chen2020simple} is first to define positive and negative pairs of views from an inter-data level, and then to align the positive pairs and contrast the negative pairs simultaneously~\citep{wang2020understanding}. For each molecule, we first extract representations from 2D and 3D views, {\em{i.e.}}, $h_{\bm{x}}$ and $h_{\bm{y}}$. Then we create positive and negative pairs for contrastive learning: the 2D-3D pairs $({\bm{x}},{\bm{y}})$ for the same molecule are treated as positive, and negative otherwise. Finally, we align the positive pairs and contrast the negative ones. The pipeline is shown in~\Cref{fig:both_ssl}. In the following, we discuss two common objective functions on contrastive graph SSL.
\textbf{InfoNCE} was first proposed in~\citep{oord2018representation}, and its effectiveness has been validated both empirically~\citep{chen2020simple,he2020momentum} and theoretically~\citep{arora2019theoretical}. Its formulation is given as follows:
\begin{equation}\label{eq:objective_infonce}
\small{
\fontsize{7.8}{1}\selectfont
\begin{aligned}
\mathcal{L}_{\text{InfoNCE}}=-\frac{1}{2} \mathbb{E}_{p({\bm{x}},{\bm{y}})}\Big[\log \frac{\exp(f_{{\bm{x}}}({\bm{x}}, {\bm{y}}))}{\exp(f_{{\bm{x}}}({\bm{x}}, {\bm{y}})) + \sum\limits_{j} \exp(f_{{\bm{x}}}({\bm{x}}^{j},{\bm{y}}))}+\log\frac{\exp(f_{{\bm{y}}}({\bm{y}},{\bm{x}}))}{\exp(f_{{\bm{y}}}({\bm{y}},{\bm{x}})) + \sum\limits_{j} \exp(f_{{\bm{y}}}({\bm{y}}^{j},{\bm{x}}))} \Big],
\end{aligned}
}
\end{equation}
where ${\bm{x}}^{j}, {\bm{y}}^{j}$ are randomly sampled 2D and 3D views with respect to the anchored pair $({\bm{x}},{\bm{y}})$. $f_{{\bm{x}}}({\bm{x}},{\bm{y}})$ and $f_{{\bm{y}}}({\bm{y}},{\bm{x}})$ are scoring functions for the two corresponding views, with flexible formulations. Here we adopt $f_{\bm{x}}({\bm{x}},{\bm{y}}) = f_{\bm{y}}({\bm{y}},{\bm{x}}) = \langle h_{\bm{x}}, h_{\bm{y}} \rangle$. More details are in~\Cref{sec:app:contrastive_ssl}.
\textbf{Energy-Based Model with Noise Contrastive Estimation (EBM-NCE)} is an alternative that has been widely used in the line of graph contrastive SSL~\citep{sun2019infograph,hu2019strategies,You2020GraphCL,you2021graph}. Its intention is essentially the same as InfoNCE, to align positive pairs and contrast negative pairs, while the main difference is the usage of binary cross-entropy and extra noise distribution for negative sampling:
\begin{equation} \label{eq:objective_ebm_nce}
\small{
\begin{aligned}
\mathcal{L}_{\text{EBM-NCE}}
& = -\frac{1}{2} \mathbb{E}_{p({\bm{y}})} \Big[\mathbb{E}_{p_{n}({\bm{x}}|{\bm{y}})} \log \big(1-\sigma( f_x({\bm{x}}, {\bm{y}}))\big) + \mathbb{E}_{p({\bm{x}}|{\bm{y}} )} \log \sigma( f_x({\bm{x}}, {\bm{y}})) \Big]\\
& ~~~~~ -\frac{1}{2} \mathbb{E}_{p({\bm{x}})} \Big[\mathbb{E}_{p_{n}({\bm{y}}|{\bm{x}})} \log \big(1-\sigma( f_y({\bm{y}},{\bm{x}}))\big) + \mathbb{E}_{p({\bm{y}},{\bm{x}})} \log \sigma( f_y({\bm{y}},{\bm{x}})) \Big],
\end{aligned}
}
\end{equation}
where $p_n$ is the noise distribution and $\sigma$ is the sigmoid function. We also notice that the final formulation of EBM-NCE shares certain similarities with the Jensen-Shannon estimator (JSE)~\citep{nowozin2016f}. However, the derivation process and underlying intuition are different: EBM-NCE models the conditional distributions in the MI lower bound (\Cref{eq:MI_objective}) with an EBM, while JSE is a special case of variational estimation of the f-divergence. Since this is not the main focus of GraphMVP, we provide a more comprehensive comparison in~\Cref{sec:app:contrastive_ssl}, together with the potential benefits of EBM-NCE.
Few works~\citep{hassani2020contrastive} have examined the effect of the choice of objective in graph contrastive SSL. In GraphMVP, we treat it as a hyper-parameter and further run ablation studies on it, {\em{i.e.}}, to solely use either InfoNCE ($\mathcal{L}_{\text{C}} = \mathcal{L}_{\text{InfoNCE}}$) or EBM-NCE ($\mathcal{L}_{\text{C}} = \mathcal{L}_{\text{EBM-NCE}}$).
\subsection{Generative Self-Supervised Learning between 2D and 3D Views} \label{sec:generative_SSL}
Generative SSL is another classic track for unsupervised pre-training~\citep{kingma2013auto,chen2016infogan,larsson2016learning,kingma2018glow}. It aims at learning an effective representation by self-reconstructing each data point. Specifically for drug discovery, we have one 2D graph and a certain number of 3D conformers for each molecule, and our goal is to learn a robust 2D/3D representation that can, to the greatest extent, recover its 3D/2D counterparts. By doing so, generative SSL can force the 2D/3D GNN to encode the most crucial geometry/topology information, which can improve the downstream performance.
There are many options for generative models, including the variational auto-encoder (VAE)~\citep{kingma2013auto}, generative adversarial networks (GAN)~\citep{goodfellow2014generative}, flow-based models~\citep{dinh2016density}, etc. In GraphMVP, we prefer a VAE-like method for the following reasons: (1) The mapping between the two molecular views is stochastic: multiple 3D conformers correspond to the same 2D topology; (2) An explicit 2D graph representation ({\em{i.e.}}, feature encoder) is required for downstream tasks; (3) Decoders for structured data such as graphs are often highly nontrivial to design, which makes them a suboptimal choice.
\textbf{Variational Molecule Reconstruction.}
Therefore we propose a \textit{light} VAE-like generative SSL, equipped with a \textit{crafty} surrogate loss, which we describe in the following. We start with an example for illustration. When generating 3D conformers from their corresponding 2D topology, we want to model the conditional likelihood $p({\bm{y}}|{\bm{x}})$. By introducing a reparameterized variable ${\bm{z}}_{\bm{x}} = \mu_{{\bm{x}}} + \sigma_{{\bm{x}}} \odot \epsilon$, where $\mu_{\bm{x}}$ and $\sigma_{\bm{x}}$ are two flexible functions on $h_{\bm{x}}$, $\epsilon \sim \mathcal{N}(0,I)$ and $\odot$ is the element-wise product,
we have the following lower bound:
\begin{equation} \label{eq:variational_lower_bound}
\small
\begin{aligned}
\log p({\bm{y}}|{\bm{x}})
\ge \mathbb{E}_{q({\bm{z}}_{\bm{x}}|{\bm{x}})} \big[ \log p({\bm{y}}|{\bm{z}}_{\bm{x}}) \big] - KL(q({\bm{z}}_{\bm{x}}|{\bm{x}}) || p({\bm{z}}_{\bm{x}})).
\end{aligned}
\end{equation}
The expression for $\log p({\bm{x}}|{\bm{y}})$ can be similarly derived. \Cref{eq:variational_lower_bound} includes a conditional log-likelihood and a KL-divergence term, where the bottleneck is to calculate the first term for structured data. This term has also been recognized as the \textit{reconstruction term}: it essentially amounts to reconstructing the 3D conformers (${\bm{y}}$) from the sampled 2D molecular graph representation (${\bm{z}}_{{\bm{x}}}$). However, performing the graph reconstruction in the data space is not trivial: since molecules ({\em{e.g.}}, atoms and bonds) are discrete, modeling and measuring in the molecule space brings extra obstacles.
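For completeness, if one assumes, as is standard for VAE-like models (an illustrative assumption here, not a constraint of GraphMVP), a diagonal Gaussian posterior $q({\bm{z}}_{\bm{x}}|{\bm{x}}) = \mathcal{N}(\mu_{\bm{x}}, \operatorname{diag}(\sigma_{\bm{x}}^2))$ and a standard normal prior $p({\bm{z}}_{\bm{x}}) = \mathcal{N}(0, I)$, then the KL term in~\Cref{eq:variational_lower_bound} is available in closed form,
\begin{equation}
KL(q({\bm{z}}_{\bm{x}}|{\bm{x}}) || p({\bm{z}}_{\bm{x}})) = \frac{1}{2} \sum_{d} \big( \mu_{{\bm{x}},d}^2 + \sigma_{{\bm{x}},d}^2 - \log \sigma_{{\bm{x}},d}^2 - 1 \big),
\end{equation}
where $d$ runs over the latent dimensions; it is only the reconstruction term that requires a surrogate.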
\textbf{Variational Representation Reconstruction (VRR).}
To cope with this challenge, we propose a novel surrogate loss by switching the reconstruction from the data space to the representation space. Instead of decoding the latent code ${\bm{z}}_{{\bm{x}}}$ to the data space, we can directly project it to the 3D representation space, denoted as $q_{\bm{x}}({\bm{z}}_{\bm{x}})$. Since the representation space is continuous, we may as well model the conditional log-likelihood with a Gaussian distribution, resulting in an L2 distance for reconstruction, {\em{i.e.}}, $\|q_{{\bm{x}}}({\bm{z}}_{{\bm{x}}}) - \text{SG}(h_{\bm{y}}({\bm{y}}))\|^2$. Here SG is the stop-gradient operation, assuming that $h_{{\bm{y}}}$ is a fixed learnt representation function. SG has been widely adopted in the SSL literature to avoid model collapse~\citep{grill2020bootstrap,chen2021exploring}. We call this surrogate loss variational representation reconstruction (VRR):
\begin{equation} \label{eq:variational_lower_bound_approximation}
\small
\begin{aligned}
\mathcal{L}_{\text{G}} = \mathcal{L}_{\text{VRR}}
= & \frac{1}{2} \Big[ \mathbb{E}_{q({\bm{z}}_{\bm{x}}|{\bm{x}})} \big[ \| q_{{\bm{x}}}({\bm{z}}_{\bm{x}}) - \text{SG}(h_{{\bm{y}}}) \|^2 \big] + \mathbb{E}_{q({\bm{z}}_{\bm{y}}|{\bm{y}})} \big[ \| q_{{\bm{y}}}({\bm{z}}_{\bm{y}}) - \text{SG}(h_{{\bm{x}}}) \|_2^2 \big] \Big]\\
& + \frac{\beta}{2} \cdot \Big[ KL(q({\bm{z}}_{\bm{x}}|{\bm{x}}) || p({\bm{z}}_{\bm{x}})) + KL(q({\bm{z}}_{\bm{y}}|{\bm{y}}) || p({\bm{z}}_{\bm{y}})) \Big].
\end{aligned}
\end{equation}
We give a simplified illustration of the generative SSL pipeline in~\Cref{fig:both_ssl} and the complete derivations in~\Cref{sec:app:generative_ssl}. As will be discussed in~\Cref{sec:mutual_information}, VRR is actually maximizing MI, and MI is invariant under continuous bijective functions~\citep{belghazi2018mutual}. Thus, this surrogate loss would be exact if the encoding function $h$ satisfied this condition. However, we find that the GNN, though it does not meet the condition, can provide quite robust performance, which empirically justifies the effectiveness of VRR.
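A minimal PyTorch sketch of~\Cref{eq:variational_lower_bound_approximation} is given below (an illustrative simplification assuming the diagonal Gaussian posterior mentioned above; the function and tensor names are ours):
\begin{verbatim}
import torch

def vrr_one_direction(mu_x, log_var_x, proj_x, h_y, beta=1.0):
    # mu_x, log_var_x: (B, d) Gaussian parameters of q(z_x | x) from the 2D encoder.
    # proj_x: projector q_x(.) mapping z_x to the 3D representation space.
    # h_y: (B, d) 3D representation, treated as a constant target via stop-gradient.
    eps = torch.randn_like(mu_x)
    z_x = mu_x + torch.exp(0.5 * log_var_x) * eps                    # reparameterization
    recon = ((proj_x(z_x) - h_y.detach()) ** 2).sum(dim=-1).mean()   # L2 term with SG target
    kl = 0.5 * (mu_x ** 2 + log_var_x.exp() - log_var_x - 1).sum(dim=-1).mean()
    return recon + beta * kl
# The full VRR loss averages this term with the symmetric 3D-to-2D direction.
\end{verbatim}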
\subsection{Multi-task Objective Function}
As discussed before, contrastive SSL and generative SSL essentially learn the representation from distinct viewpoints. A reasonable conjecture is that combining both SSL methods can lead to overall better performance, thus we arrive at minimizing the following complete objective for GraphMVP:
\begin{equation} \label{eq:graphmvp}
\small
\begin{aligned}
\mathcal{L}_{\text{GraphMVP}}
& = \alpha_1 \cdot \mathcal{L}_{\text{C}} + \alpha_2 \cdot \mathcal{L}_{\text{G}},\\
\end{aligned}
\end{equation}
where $\alpha_1, \alpha_2$ are weighting coefficients. The ablation study performed later (\Cref{sec:experiment_each_loss_component}) delivers two important messages: (1) Both individual contrastive and generative SSL on 3D conformers can consistently help improve the 2D representation learning; (2) Combining the two SSL strategies can yield further improvements. Thus, we draw the conclusion that GraphMVP\, (\Cref{eq:graphmvp}) is able to obtain an augmented 2D representation by fully utilizing the 3D information.
As discussed in~\Cref{sec:intro}, existing graph SSL methods only focus on the 2D topology, which is in parallel to GraphMVP: 2D graph SSL focuses on exploiting the 2D structure topology, and GraphMVP\, takes advantage of the 3D geometry information. Thus, we propose to merge the 2D SSL into GraphMVP. Since there are two main categories in 2D graph SSL: generative and contrastive, we propose two variants GraphMVP-G\, and GraphMVP-C\, accordingly. Their objectives are as follows:
\begin{equation} \label{eq:graphmvp_variants}
\small
\begin{aligned}
\mathcal{L}_{\text{GraphMVP-G}} = \mathcal{L}_{\text{GraphMVP}} + \alpha_3 \cdot \mathcal{L}_{\text{Generative 2D-SSL}},~~~~~
\mathcal{L}_{\text{GraphMVP-C}} = \mathcal{L}_{\text{GraphMVP}} + \alpha_3 \cdot \mathcal{L}_{\text{Contrastive 2D-SSL}}.
\end{aligned}
\end{equation}
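As an illustrative note (the helper below is ours; the weighting follows~\Cref{eq:graphmvp,eq:graphmvp_variants}), the variants differ only in which additional 2D SSL term, if any, enters the weighted sum:
\begin{verbatim}
def graphmvp_loss(loss_c, loss_g, loss_2d_ssl=None, alpha1=1.0, alpha2=1.0, alpha3=1.0):
    # loss_c: contrastive SSL term (InfoNCE or EBM-NCE); loss_g: generative term (VRR).
    # loss_2d_ssl: optional 2D SSL term (generative for GraphMVP-G, contrastive for GraphMVP-C).
    loss = alpha1 * loss_c + alpha2 * loss_g
    if loss_2d_ssl is not None:
        loss = loss + alpha3 * loss_2d_ssl
    return loss
\end{verbatim}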
Later, the empirical results also help support the effectiveness of GraphMVP-G\, and GraphMVP-C, and thus, we can conclude that existing 2D SSL is complementary to GraphMVP.
\section*{Reproducibility Statement}
To ensure the reproducibility of the empirical results, we include our code base in the supplementary material, which contains: instructions for installing conda virtual environment, data preprocessing scripts, and training scripts. Our code is available on \href{https://github.com/chao1224/GraphMVP}{GitHub} for reproducibility. Complete derivations of equations and clear explanations are given in~\Cref{sec:app:MI,sec:app:contrastive_ssl,sec:app:generative_ssl}.
\section*{Acknowledgement}
This project is supported by the Natural Sciences and Engineering Research Council (NSERC) Discovery Grant, the Canada CIFAR AI Chair Program, collaboration grants between Microsoft Research and Mila, Samsung Electronics Co., Ltd., Amazon Faculty Research Award, Tencent AI Lab Rhino-Bird Gift Fund and a NRC Collaborative R\&D Project (AI4D-CORE-06). This project was also partially funded by IVADO Fundamental Research Project grant PRF-2019-3583139727.
\bibliographystyle{plain}
\section{Introduction}
Hartree--Fock (HF) theory is a widely used approximation for fermionic many--body systems such as metals or atomic nuclei. It is particularly successful in mean--field parameter regimes, i.\,e., at high density and weak interaction. The HF approximation has been rigorously established for the time evolution of reduced density matrices \cite{BPS14b,BPS14a,BPS16,BSS18} and the ground state energy \cite{GS94}. However, HF theory suffers from defects such as predicting a vanishing density of states at the Fermi momentum, in contradiction to measurements of the specific heat. Recently, rigorous results going beyond HF theory have been obtained, estimating the correlation energy: as upper and lower bound to second order in perturbation theory \cite{HPR20} and as an upper bound including all orders of perturbation theory \cite{BNP+19} reproducing the predictions of \cite{Mac50,GB57}. The latter upper bound is based on bosonization of collective excitations, clarifying the nature of bosonic quasiparticles predicted by \cite{BP53,SBFB57}. Unlike, e.\,g., the Holstein--Primakoff bosonization \cite{HP40,CG12,CGS15,Ben17} or one--dimensional bosonization \cite{ML65}, collective bosonization is not an exact mapping but an approximation that requires estimates of its validity. In this paper we review three--dimensional bosonization and heuristically explore the predictions it makes about the excitation spectrum, in particular the emergence of plasma oscillations if a Coulomb interaction is present.
\medskip
We consider a system of a large number $N$ of fermions on the torus $\mathbb{T}^3 = \Rbb^3/(2\pi\mathbb{Z}^3)$, interacting via a two--body potential $V$, described by the Hamiltonian
\begin{equation}\label{eq:firstquantop}H_N = -\hbar^2 \sum_{i=1}^N \Delta_{x_i} + \frac{1}{N}\sum_{1\leq i<j \leq N} V(x_i - x_j), \qquad \hbar := N^{-1/3}\;,\end{equation}
on the Hilbert space $L^2_a\left((\mathbb{T}^3)^N\right)$ of functions that are antisymmetric under permutation of the $N$ arguments. The effective Planck constant $\hbar = N^{-1/3}$ and the coupling constant $N^{-1}$ model a high density regime with weak interactions (see \cite{BPS16} for an introduction).
The ground state energy is defined as
\begin{equation}\label{eq:gse-HN}E_N := \inf_{\substack{\psi \in L^2_a((\mathbb{T}^{3})^N)\\\norm{\psi} = 1}} \langle \psi, H_N \psi \rangle = \inf \operatorname{spec}\left( H_N \right)\;.\end{equation}
In Hartree-Fock theory, we restrict attention to Slater determinants
\begin{equation} \psi_\text{Slater} (x_1, \dots , x_N) = \frac{1}{\sqrt{N!}} \sum_{\sigma \in S_N} \operatorname{sgn}(\sigma) f_1 (x_{\sigma(1)}) f_2 (x_{\sigma(2)}) \dots f_N (x_{\sigma(N)}) \end{equation} with $\{f_j\}_{j=1}^N$ a collection of orthonormal functions in $L^2 (\mathbb{T}^3)$. The corresponding one--particle reduced density matrix is the projection operator $\omega = \sum_{j=1}^N \lvert f_j \rangle \langle f_j\rvert$. The infimum (over all rank-$N$ orthogonal projections $\omega$) of the HF functional
\begin{equation}
\begin{split}& \mathcal{E}_\text{HF}(\omega) := \langle \psi_\text{Slater}, H_N \psi_\text{Slater} \rangle = \mbox{Tr} \left(\!- \hbar^2 \Delta \omega\right) \\
& \hspace{2.5em} +\! \frac{1}{N}\! \int\! \di x \di y V(x-y) \omega (x,x) \omega (y,y)-\! \frac{1}{N}\! \int\! \di x \di y V(x-y) \lvert \omega (x,y) \rvert^2\end{split}
\end{equation}
provides a good approximation \cite{GS94} to $E_N$. The minimizer of $\mathcal{E}_\textnormal{HF}$ is hard to characterize, but according to \cite{GHL19}, the infimum only differs by an exponentially small amount from the energy we find using the $N$ plane waves
\begin{equation}\label{eq:plane-wave} f_{k_j}(x) = (2\pi)^{-3/2} e^{i k_j\cdot x}, \quad k_j \in \mathbb{Z}^3, \end{equation}
which minimize the kinetic energy $\mbox{Tr} (- \hbar^2 \Delta \omega) = \hbar^2\sum_{j=1}^N \lvert k_j\rvert^2$. In momentum space, the minimizing selection of momenta $k_j$ can be visualized as the Fermi ball
\begin{equation}
B_\textnormal{F} := \{k \in \mathbb{Z}^3 : \lvert k \rvert \leq k_\textnormal{F}\}\;,
\end{equation}
a ball around the origin of radius $k_\textnormal{F}$ chosen such that it contains $N$ points of $\mathbb{Z}^3$; i.\,e., $k_\textnormal{F} = (\frac{3}{4\pi})^{1/3}N^{1/3}$ up to lower order corrections. We write $\kappa := (\frac{3}{4\pi})^{1/3}$.
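As a purely numerical illustration of this counting (it plays no role in the analysis; Python with NumPy is assumed), one can enumerate the lattice points and compare the resulting radius with the leading-order value $\kappa N^{1/3}$:
\begin{verbatim}
import numpy as np

def fermi_radius(N, kmax=30):
    # Radius of the N-th closest point of Z^3 to the origin; choose kmax at least as
    # large as the expected radius ~ 0.62 * N^(1/3) so that no relevant point is missed.
    ks = np.arange(-kmax, kmax + 1)
    norms = np.sqrt(ks[:, None, None]**2 + ks[None, :, None]**2 + ks[None, None, :]**2)
    kF = np.sort(norms.ravel())[N - 1]
    return kF, (3.0 / (4.0 * np.pi))**(1.0 / 3.0) * N**(1.0 / 3.0)

# fermi_radius(10**4) returns a radius close to the leading-order value 0.62 * 10^(4/3).
\end{verbatim}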
\medskip
Obviously the infimum of $\mathcal{E}_\textnormal{HF}$ is an upper bound to $E_N$. The difference between $E_N$ and $E_\textnormal{HF} := \inf_{\omega} \mathcal{E}_\textnormal{HF}(\omega)$ is called correlation energy. In physics it is commonly calculated by partial resummation of diagrammatic perturbation theory \cite{Mac50,GB57}, going by the name of \emph{random phase approximation}. The perturbative prediction has recently been proven \cite{BNP+19} to be an upper bound for $E_N$ in systems with regular interaction potential; see the following theorem for the precise statement.
\begin{thm}[Random Phase Approximation as Upper Bound \cite{BNP+19}]\label{thm:main}
Let $\hat{V}: \mathbb{Z}^3 \to \Rbb$ be non-negative and compactly supported. Let the number of particles be $N := \lvert \{ k \in \mathbb{Z}^3 : \lvert k\rvert \leq k_\textnormal{F} \}\rvert$. Then for $k_\textnormal{F} \to \infty$ we have
\begin{align*}
& \frac{E_N - E_\textnormal{HF}}{\hbar} \leq \tagg{upper} \\
& \kappa \sum_{k \in \mathbb{Z}^3} \lvert k\rvert \Bigg[ \frac{1}{\pi}\! \int_0^\infty \!\!\!\!\!\log\Big( 1 + \hat{V}(k) \kappa 2\pi \Big(1 - \lambda \arctan \frac{1}{\lambda} \Big)\Big)\di\lambda - \hat{V}(k) \frac{\kappa \pi}{2} \Bigg] + \Ocal(N^{-1/27})\;.
\end{align*}
\end{thm}
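As a numerical aside (an illustrative sketch assuming Python with SciPy; it plays no role in the proof), the $k$--dependent bracket in the bound can be evaluated mode by mode:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

kappa = (3.0 / (4.0 * np.pi)) ** (1.0 / 3.0)

def rpa_bracket(V_hat_k):
    # (1/pi) * int_0^infty log(1 + V_hat(k) kappa 2 pi (1 - lam*arctan(1/lam))) dlam
    #   - V_hat(k) * kappa * pi / 2
    integrand = lambda lam: np.log(1.0 + V_hat_k * kappa * 2.0 * np.pi
                                   * (1.0 - lam * np.arctan(1.0 / lam)))
    integral, _ = quad(integrand, 0.0, np.inf)
    return integral / np.pi - V_hat_k * kappa * np.pi / 2.0
\end{verbatim}
Since $\log(1+x)\leq x$ and $\int_0^\infty (1-\lambda\arctan\frac{1}{\lambda})\di\lambda = \frac{\pi}{4}$, the bracket is non-positive; for small $\hat{V}(k)$ the terms linear in $\hat{V}(k)$ cancel and the bracket is of order $\hat{V}(k)^2$, consistent with the correlation energy lowering the Hartree--Fock energy.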
A matching lower bound was proven recently \cite{BNP+20}. The proof of \cref{thm:main} is based on an effective bosonic theory which we review in \cref{sec:derivation}. In \cref{sec:energy} we discuss the predictions the effective theory makes about the excitation spectrum.
\section{Derivation of the Bosonic Effective Hamiltonian}
\label{sec:derivation}
It is convenient to embed the system in fermionic Fock space and use creation and annihilation operators, $a^*_k$ and $a_k$, where the momenta $k$ refer to the plane wave basis, defined as in \cref{eq:plane-wave}. They satisfy the canonical anticommutator relations
\begin{equation}
\{a_k, a_l\} = 0 = \{a^*_k, a^*_l\}\;, \qquad \{a_k,a^*_l\} = \delta_{k,l}\;.
\end{equation}
The fermionic number operator is $\Ncal := \sum_{k \in \mathbb{Z}^3} a^*_k a_k$.
The Hamiltonian becomes
\begin{equation}
\Hcal_N = \sum_{k \in \mathbb{Z}^3} \hbar^2 \lvert k\rvert^2 a^*_k a_k + \frac{1}{2N}\sum_{k_1,k_2,k \in \mathbb{Z}^3} \hat{V}(k) a^*_{k_1} a^*_{k_2} a_{k_2+k} a_{k_1 -k}\;;
\end{equation}
more precisely, $\Hcal_N$ restricted to the eigenspace $\{\psi: \Ncal \psi = N\psi\}$ agrees with $H_N$.
\subsection{Particle--Hole Transformation}
We start with a particle--hole transformation which extracts the HF energy and leaves us with a remainder Hamiltonian describing quantum correlations. This transformation is a unitary $R$ on fermionic Fock space defined by its action
\begin{equation}\label{eq:phtrafo}
R^* a^*_k R = \left\{ \begin{array}{cl}
a^*_k & \textnormal{for } k \in B_\textnormal{F}^c := \mathbb{Z}^3 \setminus B_\textnormal{F} \\ a_{-k} & \textnormal{for } k\in B_\textnormal{F} \end{array} \right.
\end{equation}
and by transforming the vacuum $\Omega := (1,0,0,0,\ldots)$ into the Slater determinant corresponding to the Fermi ball, $R\Omega = \bigwedge_{k \in B_\textnormal{F}} f_k$.
Applying \cref{eq:phtrafo} and then rearranging in normal order at the cost of anticommutators appearing, we find
\begin{equation}\label{eq:Htrafo}
R^* \Hcal_N R = \mathcal{E}_\textnormal{HF}(\omega) + \sum_{p \in B_\textnormal{F}^c} \hbar^2\lvert p\rvert^2 a^*_p a_p - \sum_{h \in B_\textnormal{F}} \hbar^2 \lvert h\rvert^2 a^*_h a_h + Q_N + \mathcal{O}\left( \frac{\Ncal^2+1}{\sqrt{N}} \right)\;,
\end{equation}
where now $\omega= \sum_{k\in B_\textnormal{F}} \lvert f_k\rangle\langle f_k \rvert$.
The operator $Q_N$ is quartic in fermionic operators; it describes repulsion of particles with particles (momenta ``$p$'') and holes with holes (momenta ``$h$'') as well as attraction between particles and holes.
The last term is negligible in the approximate ground state constructed by \cite{BNP+19}
and in states with a small number of excitations.
The negative sign of the term $- \sum_{h \in B_\textnormal{F}} \hbar^2 \lvert h\rvert^2 a^*_h a_h$ can be understood as saying that creation of a hole in the Fermi ball lowers the energy.
\subsection{Introduction of Almost--Bosonic Quasiparticles}
We introduce pair excitation operators that lift a fermion from inside the Fermi ball to outside the Fermi ball by a relative momentum $k \in \mathbb{Z}^3$, delocalized over all the Fermi surface, by
\begin{equation}
\tilde{b}^*_k := \sum_{\substack{p \in B_\textnormal{F}^c\\h \in B_\textnormal{F}}} \delta_{p-h,k} a^*_p a^*_h\;.
\end{equation}
In terms of these operators the interaction $Q_N$ appearing in \cref{eq:Htrafo} can be written
\begin{equation}
Q_N = \frac{1}{2N}\sum_{k \in \mathbb{Z}^3} \hat{V}(k) \left( 2 \tilde{b}^*_k \tilde{b}_k + \tilde{b}^*_k \tilde{b}^*_{-k} + \tilde{b}_{-k} \tilde{b}_k \right)\;.
\end{equation}
Just as for bosons $[\tilde{b}^*_k,\tilde{b}^*_l]=0$. Unfortunately this is not sufficient for the pair operators to satisfy canonical commutator relations: consider a quasiparticle created by $b^*_{p,h} := a^*_p a^*_h$ as proposed by \cite{SBFB57}, then obviously $(b^*_{p,h})^2 =0$: we cannot create more than one such quasiparticle, so it cannot be bosonic. However, $\tilde{b}^*_{k}$ creates a superposition delocalized over many fermionic modes; thus $\big(\tilde{b}^*_k\big)^r$ only vanishes once $r \in \Nbb$ becomes larger than the number of available fermionic modes. So we expect the operators $\tilde{b}^*_k$ to be approximately bosonic on states where the number of occupied fermionic modes is much smaller than the number of modes constituting the superposition. In fact
\begin{equation}\label{eq:noccr}
[\tilde{b}_k,\tilde{b}^*_l] = \textnormal{const.}\times \left(\delta_{k,l} + \mathcal{E}(k,l) \right)
\end{equation}
where the operator $\mathcal{E}(k,l)$ is to be thought of as a small error. Controlling this type of error term is a central task achieved in \cite{BNP+19}.
\begin{figure}\centering
\begin{minipage}[t]{0.4\textwidth}\centering
\hspace{-1em}\begin{tikzpicture}[scale=0.6]
\def\RadiusSphere{4}
\def\angEl{20}
\def\angAz{-20}
\filldraw[ball color = white] (0,0) circle (\RadiusSphere);
\DrawLatitudeCircle[\RadiusSphere]{75+2}
\foreach \t in {0,-50,...,-250} {
\DrawLatitudeArc{75}{(\t+50-4)*sin(62)}{\t*sin(62)}
\DrawLongitudeArc{\t*sin(62)}{50+2}{75}
\DrawLongitudeArc{(\t-4)*sin(62)}{50+2}{75}
\DrawLatitudeArc{50+2}{(\t+50-4)*sin(62)}{\t*sin(62)}
}
\foreach \t in {0,-50,...,-300} {
\DrawLatitudeArc{50}{(\t+50-4)*sin(37)}{\t*sin(37)}
\DrawLongitudeArc{\t*sin(37)}{25+2}{50}
\DrawLongitudeArc{(\t-4)*sin(37)}{25+2}{50}
\DrawLatitudeArc{25+2}{(\t+50-4)*sin(37)}{\t*sin(37)}
}
\DrawLatitudeArc{50}{(-300-4)*sin(37)}{-330*sin(37)}
\foreach \t in {0,-50,...,-450} {
\DrawLatitudeArc{25}{(\t+50-4)*sin(23)}{\t*sin(23)}
\DrawLongitudeArc{\t*sin(23)}{00+2}{25}
\DrawLongitudeArc{(\t-4)*sin(23)}{00+2}{25}
\DrawLatitudeArc{00+2}{(\t+50-4)*sin(23)}{\t*sin(23)}
}
\DrawLatitudeArc{25}{(-450-4)*sin(23)}{-500*sin(23)}
\fill[black] (0,3.75) circle (.075cm);
\fill[black] (1.72,3.08) circle (.075cm);
\fill[black] (.76,2.73) circle (.075cm);
\fill[black] (-.66,2.73) circle (.075cm);
\fill[black] (-1.73,3.04) circle (.075cm);
\fill[black] (2.25,1.5) circle (.075cm);
\fill[black] (.8,1.2) circle (.075cm);
\fill[black] (-.85,1.22) circle (.075cm);
\fill[black] (-2.27,1.5) circle (.075cm);
\fill[black] (-3.09,1.97) circle (.075cm);
\fill[black] (3.09,1.97) circle (.075cm);
\fill[black] (2.57,-.15) circle (.075cm);
\fill[black] (1.43,-.37) circle (.075cm);
\fill[black] (.155,-.48) circle (.075cm);
\fill[black] (-1.17,-.41) circle (.075cm);
\fill[black] (-2.35,-.2) circle (.075cm);
\fill[black] (-3.26,0.1) circle (.075cm);
\fill[black] (-3.79,.55) circle (.075cm);
\fill[black] (3.37,.18) circle (.075cm);
\fill[black] (3.85,.57) circle (.075cm);
\end{tikzpicture}
\end{minipage}
\begin{minipage}[t]{0.51\textwidth}\centering
\includegraphics[scale=0.4]{rank-one_mod}
\end{minipage}
\caption{\textbf{Left:} Fermi surface decomposed into patches separated by corridors of width $2R$, $R := \operatorname{diam}\operatorname{supp}\hat{V}$. The area covered by a patch is approximately $4\pi N^{2/3}/M$. Extending the $\alpha$--th patch radially in- and outward by $R$ defines $B_\alpha \subset \mathbb{Z}^3$. The vectors ${\omega}_\alpha$ are the centers of the patches, marked by dots. Patches are reflected across the center to the southern half sphere. \textbf{Right:}
Left hand side of \cref{eq:tosolve} as function of $\lambda$ as solid line (poles at unperturbed eigenvalues $\lvert \hat{k}\cdot \hat{\omega}_\alpha \rvert^2$), right hand side dashed. Perturbed eigenvalues are the $\lambda$ where curves intersect. The perturbed eigenvalues interlace the unperturbed eigenvalues. As $\lvert k\rvert \to 0$, the dashed line moves toward the horizontal axis and the largest eigenvalue toward $+ \infty$, giving rise to the plasmon.}
\label{fig:blub}
\end{figure}
However, another problem appears: the kinetic energy cannot be expressed by such operators. To solve this problem we cut the Fermi surface into patches $B_\alpha$ as sketched in \cref{fig:blub} on the left and localize the pair operators accordingly. It turns out that the number of fermionic modes per patch is still large enough to justify the bosonic approximation if the number of patches is $M \ll N^{2/3}$. So let
\begin{equation} \label{eq:locbos}
b^*_{\alpha,k} := \frac{1}{n_{\alpha,k}} \sum_{\substack{p \in B_\textnormal{F}^c \cap B_\alpha\\h \in B_\textnormal{F} \cap B_\alpha}} \delta_{p-h,k} a^*_p a^*_h\;;
\end{equation}
the definition of the normalization constant $n_{\alpha,k}$ is discussed below. We write $\omega_\alpha$ for the vector pointing to the center of the patch $B_\alpha$. The operators $b^*_{\alpha,k}$ create approximate eigenmodes of the kinetic energy; more precisely, by using $\lvert p\rvert^2 - \lvert h\rvert^2 = \left(p-h\right)\cdot\left(p+h\right) \simeq k\cdot (2\omega_\alpha)$ we obtain the commutator
\begin{align}
\hbar^2\Bigg[\sum_{p \in B_\textnormal{F}^c} \lvert p\rvert^2 a^*_p a_p - \sum_{h \in B_\textnormal{F}} \lvert h\rvert^2 a^*_h a_h, b^*_{\alpha,k}\Bigg] & = \frac{\hbar^2}{n_{\alpha,k}} \sum_{\substack{p \in B_\textnormal{F}^c \cap B_\alpha\\h \in B_\textnormal{F} \cap B_\alpha}} \delta_{p-h,k} \left( \lvert p\rvert^2 - \lvert h\rvert^2 \right) a^*_p a^*_h \nonumber \\
& = \hbar 2\kappa \,k \cdot\hat{\omega}_\alpha + \Ocal\left(\frac{\Ncal}{M}\right)\;.
\end{align}
If $M \gg N^{1/3}$, the error term is smaller than $\hbar$ and thus negligible. Notice that there is a subtlety: if $k\cdot \hat{\omega}_\alpha < 0$, the relative momentum $k$ points from outside the Fermi ball to inside, incompatible with the summation in \cref{eq:locbos} being such that $k$ points from a hole momentum $h\in B_\textnormal{F}$ to a particle momentum $p \in B_\textnormal{F}^c$. So in this case the sum in \cref{eq:locbos} is empty. Similarly, there may still be very few summands if $k\cdot\hat{\omega}_\alpha \simeq 0$. We thus impose a cutoff $\hat{k}\cdot\hat{\omega}_\alpha \geq N^{-\delta}$ (with a small $\delta >0$ to be optimized), and call the set of patch indices $\alpha$ satisfying this condition $\Ical_{k}^{+}$. Since there are going to be interaction terms coupling $k$ to $-k$, it is convenient to also introduce the set $\Gamma^{\text{nor}} \subset \mathbb{Z}^3$, denoting an arbitrarily chosen half space. To conclude, as in the classical one-dimensional bosonization of the Luttinger model \cite{ML65}, we now propose to approximate the kinetic energy by
\begin{equation}\begin{split}
& \hbar^2 \sum_{p \in B_\textnormal{F}^c} \lvert p\rvert^2 a^*_p a_p - \hbar^2\sum_{h \in B_\textnormal{F}} \lvert h\rvert^2 a^*_h a_h \\
& \simeq \hbar 2\kappa\sum_{k \in \Gamma^{\text{nor}}} \lvert k\rvert\Bigg( \sum_{\alpha \in\Ical_{k}^{+}} \lvert \hat{k}\cdot \hat{\omega}_\alpha\rvert b^*_{\alpha,k} b_{\alpha,k} + \sum_{\beta \in\mathcal{I}_{-k}^+} \lvert \hat{k}\cdot \hat{\omega}_\beta\rvert b^*_{\beta,-k} b_{\beta,-k} \Bigg)\;.
\end{split}
\end{equation}
Decomposing $\tilde{b}^*_k$ into the $b^*_{\alpha,k}$ we arrive at a quadratic, approximately bosonic, Hamiltonian comprising both kinetic and interaction energy. It only remains to determine the normalization constant $n_{\alpha,k}$; we fix it by imposing that the leading order of $[b_{\alpha,k},b^*_{\beta,l}]$ is given by $\delta_{k,l}\delta_{\alpha,\beta}$ as in the canonical commutator relations; this is achieved by $n_{\alpha,k}^2 := \sum_{\substack{p \in B_\textnormal{F}^c \cap B_\alpha\\h \in B_\textnormal{F} \cap B_\alpha}} \delta_{p-h,k}$, the number of pairs of relative momentum $k$ in patch $B_\alpha$. If $\hat{k}\cdot\hat{\omega}_{\alpha} \simeq 1$ this can be identified with the volume of a flat box over the Fermi surface having base area $4\pi N^{2/3} M^{-1}$ and height $\lvert k\rvert$. As mentioned before, the number of pairs $(p,h)$, $p \in B_\textnormal{F}^c \cap B_\alpha$ and $h \in B_\textnormal{F} \cap B_\alpha$ with $p-h=k$, is small when $k \cdot \omega_\alpha \simeq 0$ (only very few ``tangential'' particle--hole excitations are possible). The correct interpolation between these extreme cases is found to be
$n_{\alpha,k}^2 \simeq {4\pi N^{2/3}} M^{-1} \lvert k\cdot\hat{\omega}_\alpha \rvert$.
We thus find the effective Hamiltonian
\begin{equation}\label{eq:directsum}H_\text{eff} = \hbar 2\kappa \sum_{k \in \Gamma^{\text{nor}}} \lvert k\rvert h_\text{eff}(k)\end{equation}
where, with ${u_\alpha}(k) := \sqrt{\lvert \hat{k}\cdot\hat{\omega}_{\alpha}\rvert}$, we have
\begin{align}h_\text{eff}(k) & = \sum_{\alpha \in\Ical_{k}^{+}} u_\alpha(k)^2 b^*_{\alpha,k} b_{\alpha,k} + \sum_{\alpha \in\mathcal{I}_{-k}^{+}} u_\alpha(k)^2 b^*_{\alpha,-k} b_{\alpha,-k} \nonumber\\
& \quad + \frac{\kappa \hat{V}(k) 2\pi}{M} \Big( \sum_{\alpha,\beta \in \Ical_{k}^{+}}\! u_\alpha(k) u_\beta(k) b^*_{\alpha,k} {b}_{\beta,k} +\! \sum_{\alpha,\beta \in\mathcal{I}_{-k}^{+}} \! u_\alpha(k) u_\beta(k) b^*_{\alpha,-k} b_{\beta,-k} \nonumber \\
&\hspace{2.8cm} + \bigg[ \sum_{\alpha \in \Ical_{k}^{+}}\sum_{\beta \in \mathcal{I}_{-k}^{+}} u_\alpha(k) u_\beta(k) b^*_{\alpha,k} b^*_{\beta,-k} + \hc\bigg]\Big).\end{align}
We think of $H_\text{eff}$ as a refinement of the Hamiltonian proposed by Sawada et al.~\cite{SBFB57}.
\section{Excitation Spectrum of the Bosonic Effective Hamiltonian}\label{sec:energy}
\emph{For the following, we use the approximation (compare to \cref{eq:noccr}) that the $b$-- and $b^*$--operators exactly satisfy canonical commutator relations, i.\,e.,}
\begin{equation}\label{eq:ccr}
[b_{\alpha,k}, b_{\beta,l}] = 0 = [b^*_{\alpha,k}, b^*_{\beta,l}]\;, \qquad [b_{\alpha,k}, b^*_{\beta,l}] = \delta_{\alpha,\beta} \delta_{k,l}\;.
\end{equation}
The approximation only lies in the last relation; we gave a rigorous estimate for the error in \cite[Lemma 4.1]{BNP+19}, showing that it is small if there are only few fermionic excitations over the Fermi ball. In \cite[Proposition 4.6]{BNP+19} we proved that the bosonic approximation to the ground state of \cref{eq:directsum} contains only few fermionic excitations, and so do also states obtained by adding a fixed number of bosonic excitations to the approximate ground state, i.\,e., by applying $b^*_{\alpha,k}$--operators.
\medskip
With this approximation, operators belonging to different $k$ commute, so we can diagonalize every $h_\text{eff}(k)$ independently and in the end sum over $k \in \Gamma^{\text{nor}}$. In the following we mostly drop the $k$--dependence in the notation.
Keeping only the non--vanishing operators, we set
\begin{equation}\label{eq:coperators}c^*_\alpha := \left\{ \begin{array}{lr} b^*_{\alpha,k} & \text{ for } \alpha \in \Ical_{k}^{+}\,, \\ b^*_{\alpha,-k} & \text{ for } \alpha \in
\mathcal{I}_{-k}^{+}\;. \end{array}\right.\end{equation}
These operators again satisfy canonical commutator relations, $[c_\alpha,c_\beta] = 0 = [c^*_\alpha,c^*_\beta]$ and $[c_\alpha,c^*_\beta] =\delta_{\alpha,\beta}$. We also introduce the index set $\Ical_{k} := \Ical_{k}^{+} \cup \mathcal{I}_{-k}^{+}$ and let $I_k := \lvert \mathcal{I}_k^{+}\rvert = \lvert \mathcal{I}_{-k}^{+}\rvert$. As in \cite{GS13} we write
\begin{equation}\label{eq:hameff}
h_\textnormal{eff} = \mathbb{H} - \frac{1}{2} \mbox{Tr} (D+W)\,,
\end{equation}
\begin{equation}\mathbb{H} := \frac{1}{2} \begin{pmatrix} (c^*)^T & c^T \end{pmatrix} \begin{pmatrix} D+W & \tilde W \\ \tilde W & D+W\end{pmatrix} \begin{pmatrix}c\\c^* \end{pmatrix},\quad c = \begin{pmatrix} \vdots \\ c_\alpha \\ \vdots\end{pmatrix},\quad c^* = \begin{pmatrix} \vdots \\ c^*_\alpha \\ \vdots \end {pmatrix},\end{equation}
where $c^T = \begin{pmatrix} \cdots & c_\alpha & \cdots \end{pmatrix}$.
Setting $g := \kappa \hat{V}2 \pi/M$, the matrices $D$, $W$, and $\tilde W$ are
\begin{equation}\label{eq:blocks}\begin{split}
D & := \operatorname{diag}(u_\alpha^2: \alpha \in \Ical_{k})\;, \\
W_{\alpha,\beta} & := \left\{ \begin{array}{cl} g u_\alpha u_\beta \quad & \text{for } \alpha,\beta \in \Ical_{k}^{+} \textnormal{ or } \alpha,\beta \in \mathcal{I}_{-k}^{+} \\ 0 & \text{for } \alpha \in \Ical_{k}^{+}, \beta \in \mathcal{I}_{-k}^{+} \text{ or } \alpha \in \mathcal{I}_{-k}^{+}, \beta \in \Ical_{k}^{+}\;,\end{array} \right. \\
\tilde W_{\alpha,\beta} & := \left\{ \begin{array}{cl} 0 & \text{for } \alpha,\beta \in \Ical_{k}^{+} \text{ or } \alpha,\beta \in \mathcal{I}_{-k}^{+} \\
g u_\alpha u_\beta \quad & \text{for } \alpha \in \Ical_{k}^{+}, \beta \in \mathcal{I}_{-k}^{+} \textnormal{ or } \alpha \in \mathcal{I}_{-k}^{+},\beta \in \Ical_{k}^{+}\;.\end{array} \right.
\end{split}\end{equation}
By reordering the indices we can write
\begin{equation}D = \begin{pmatrix} d & 0 \\ 0 & d \end{pmatrix}, \quad W = \begin{pmatrix} b & 0 \\ 0 & b \end{pmatrix}, \quad \tilde{W} = \begin{pmatrix} 0 & b \\ b& 0\end{pmatrix}\end{equation}
where $d = \operatorname{diag}(u_\alpha^2: \alpha=1,\ldots I_k)$, $b = g\lvert u\rangle \langle u\rvert$, and $u = ( u_1 ,\ldots, u_{I_k} )^T$.
The Segal field operators $\phi = \begin{pmatrix} \cdots & \phi_\alpha & \cdots \end{pmatrix}^T$ and $\pi = \begin{pmatrix} \cdots & \pi_\alpha & \cdots \end{pmatrix}^T$ are defined by
\begin{equation}\label{eq:defsegalfields}\begin{pmatrix}
c \\ c^*
\end{pmatrix} = \Theta \begin{pmatrix} \phi \\ \pi\end{pmatrix}, \quad \textnormal{where } \Theta := \frac{1}{\sqrt{2}} \begin{pmatrix} 1 & i\\ 1& -i \end{pmatrix}.
\end{equation}
Note that $\phi = \frac{1}{\sqrt{2}}(c+c^*) = \phi^*$ and $\pi = \frac{i}{\sqrt{2}}(c^*-c) = \pi^*$. Then
\begin{equation}\label{eq:quadratichamiltonian}\begin{split}\mathbb{H} & = \begin{pmatrix}\phi^T & \pi^T \end{pmatrix} \mathfrak{M} \begin{pmatrix} \phi \\ \pi\end{pmatrix}, \textnormal{ with } \mathfrak{M} = \frac{1}{2} \begin{pmatrix} D+W+\tilde W & 0\\ 0 & D+W-\tilde W \end{pmatrix}\;.\end{split}
\end{equation}
The commutator relations of the Segal field operators are invariant under symplectic transformations, which corresponds to Bogoliubov transformations of the bosonic creation and annihilation operators. We introduce
\begin{equation}E := \left( (D+W-\tilde W)^{1/2} (D+W+\tilde W) (D+W-\tilde W)^{1/2} \right)^{1/2} \in \Cbb^{2I_k\times 2I_k}\end{equation}
and, with $\mathbb{I}$ denoting the $I_k\times I_k$--identity matrix, the symplectic matrix $S$,
\begin{equation}\label{eq:defS}S := \begin{pmatrix} (D+W-\tilde W)^{1/2} E^{-1/2} U & 0 \\ 0 & (D+W-\tilde W)^{-1/2} E^{1/2} U \end{pmatrix}, \quad
U := \frac{1}{\sqrt{2}}\begin{pmatrix} \mathbb{I} & \mathbb{I} \\ \mathbb{I} & -\mathbb{I} \end{pmatrix}.\end{equation}
The matrix $U$ block--diagonalizes $D+W-\tilde{W}$ and $D+W+\tilde{W}$.
We then find
\begin{equation}\label{eq:completediag}\mathbb{H} = \begin{pmatrix}\tilde\phi^T & \tilde\pi^T \end{pmatrix} \frac{1}{2}\begin{pmatrix} \tilde{E} & 0\\ 0 & \tilde{E}\end{pmatrix} \begin{pmatrix} \tilde\phi \\ \tilde\pi\end{pmatrix}\;,\quad \textnormal{having set } \begin{pmatrix} \tilde{\phi} \\ \tilde{\pi} \end{pmatrix} := S^{-1} \begin{pmatrix}\phi \\ \pi \end{pmatrix}\end{equation}
and having introduced the matrix
\begin{equation}\tilde{E} = \begin{pmatrix} \left[ d^{1/2} (d+2b) d^{1/2}\right]^{1/2} & 0 \\ 0 & \left[ (d+2b)^{1/2}d(d+2b)^{1/2} \right]^{1/2} \end{pmatrix} =: \begin{pmatrix} A^{1/2} & 0 \\ 0& B^{1/2} \end{pmatrix}\;.\label{eq:tildeE}\end{equation}
Since $\tilde{E}$ is symmetric we can find an orthogonal matrix $O$ such that $O^T \tilde{E} O = \operatorname{diag}(e_\gamma: \gamma \in \Ical_{k})$.
After the further symplectic transformation $\tilde{S} := \begin{pmatrix} O & 0 \\ 0 & O \end{pmatrix}$ then
\begin{equation}\label{eq:diagonal}
\begin{split}\mathbb{H} & = \begin{pmatrix}\dbtilde\phi^T & \dbtilde\pi^T \end{pmatrix} \frac{1}{2}\begin{pmatrix} \operatorname{diag}(e_\gamma) & 0\\ 0 & \operatorname{diag}(e_\gamma)\end{pmatrix} \begin{pmatrix} \dbtilde\phi \\ \dbtilde\pi\end{pmatrix} \\
& = \sum_{\gamma \in \Ical_{k}} \frac{e_\gamma}{2} \left( \dbtilde{\phi}_\gamma^2 + \dbtilde{\pi}_\gamma^2 \right) = \sum_{\gamma \in \Ical_{k}} e_\gamma \left( \hat{n}_\gamma + \frac{1}{2} \right)\;,\end{split}\end{equation}
where we recognized harmonic oscillators and introduced the corresponding number operators $\hat{n}_\gamma$. In particular, \cref{eq:diagonal} implies $h_\textnormal{eff} \geq \frac{1}{2} \mbox{Tr} \left(E- (D+W)\right)$, which becomes \cref{eq:upper} as $M\to\infty$ \cite{BNP+19}. The total excitation spectrum is
\begin{equation}\sigma(H_\textnormal{eff}) = \Big\{ \hbar\kappa \sum_{k \in \Gamma^{\text{nor}}} \lvert k\rvert\Big[ \sum_{\gamma \in \Ical_{k}} 2 e_\gamma(k) n_\gamma(k) + \mbox{Tr}\big(E(k)-D(k)-W(k)\big)\Big]: n_\gamma(k) \in \mathbb{N} \Big\}\;.\end{equation}
In the following we approximately compute the excitation energies $e_\gamma(k)$.
\subsection{The Plasmon Dispersion Relation}
Recall the definition of the matrices $A$ and $B$ from \cref{eq:tildeE}. We observe that $A = d^{1/2}(d+2b)d^{1/2} = d^2 + 2g \lvert \tilde u\rangle \langle \tilde u\rvert$ where $\tilde u := d^{1/2} u$ is the vector with components $u_\alpha^2$; i.\,e., $A$ is diagonal plus a rank--one perturbation. Define $X := (d+2b)^{-1/2} d^{-1/2}$; then $A = X^{-1}BX$ and thus $A$ and $B$ have the same spectrum. Thus, we only need to find the eigenvalues of $A$ (or more precisely, of $A^{1/2}$).
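As an illustrative sanity check (a toy numerical sketch we add for the reader, with arbitrary values of $I_k$, $g$, and $u_\alpha$; it is not part of the analysis), one can verify both the isospectrality of $A$ and $B$ and the interlacing of the perturbed eigenvalues:
\begin{verbatim}
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
I_k, g = 8, 0.7                            # toy values (g = 2*pi*kappa*V(k)/M in the text)
u = np.sort(rng.uniform(0.2, 1.0, I_k))    # u_alpha, kept away from 0 as by the cutoff

d = np.diag(u**2)
b = g * np.outer(u, u)                     # rank-one interaction block
S = np.real(sqrtm(d + 2 * b))

A = d @ d + 2 * g * np.outer(u**2, u**2)   # = d^{1/2} (d + 2b) d^{1/2}
B = S @ d @ S                              # = (d + 2b)^{1/2} d (d + 2b)^{1/2}

eigA = np.sort(np.linalg.eigvalsh(A))
eigB = np.sort(np.linalg.eigvalsh(B))
print(np.allclose(eigA, eigB))             # True: A and B are isospectral

unpert = np.sort(u**4)                     # unperturbed eigenvalues |k.omega|^2
print(all(unpert[i] <= eigA[i] <= unpert[i + 1] for i in range(I_k - 1)))
print(eigA[-1] > unpert[-1])               # one eigenvalue escapes above: the plasmon
\end{verbatim}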
The spectrum of $A$ can be obtained by the following standard method: by the matrix determinant lemma the characteristic polynomial of $A$ can be written as
\begin{equation}\begin{split}& \det(d^2 + 2g \lvert \tilde{u}\rangle\langle \tilde{u}\rvert - \lambda) = \det(d^2-\lambda)\det\left(1 + 2g(d^2-\lambda)^{-1}\lvert \tilde{u}\rangle\langle \tilde{u}\rvert\right)\\
& = \prod_{\beta=1}^{I_k} (u_\beta^4-\lambda) \left( 1 + 2g \sum_{\alpha=1}^{I_k} \frac{u_\alpha^4}{u_\alpha^4 - \lambda} \right) =: \prod_{\beta=1}^{I_k} (u_\beta^4-\lambda) w(\lambda)\;.\end{split}\end{equation}
For $\lambda \to u_\alpha^4$ the zero of the product is compensated by the corresponding pole of $w(\lambda)$, and the limit is non--zero thanks to the previously introduced cutoff $u_\alpha^2 \geq N^{-\delta}$. So the characteristic polynomial vanishes if and only if $w(\lambda)$ has a zero.
Considering the Coulomb potential $\hat{V}(k) = \lvert k\rvert^{-2}$, the condition $w(\lambda) =0$ means
\begin{equation}
\label{eq:tosolve}\frac{1}{M}\sum_{\alpha=1}^{2I_k} \frac{\lvert \hat{k}\cdot \hat{\omega}_\alpha \rvert^2}{\lambda - \lvert \hat{k}\cdot \hat{\omega}_\alpha \rvert^2} = \frac{\lvert k\rvert^2}{4\pi\kappa}\;.\end{equation}
This has solutions that interlace the unperturbed eigenvalues $\lvert \hat{k}\cdot\hat{\omega}_\alpha \rvert^2$ and another solution at $\lambda > \max_\alpha \lvert \hat{k}\cdot\hat{\omega}_\alpha \rvert^2$ (see the right of \cref{fig:blub}); the latter solution will be identified as the plasmon. To calculate its energy, note that here the summand is free of singularities, and so, approximating the Riemann sum $\frac{4\pi}{M}\sum_{\alpha}$ by the surface integral over the unit 2--sphere, the left hand side becomes approximately
\begin{equation}\frac{1}{4\pi} \int_0^\pi \di\theta \sin \theta \int_0^{2\pi} \di\varphi \frac{\cos^2 \theta}{\lambda - \cos^2 \theta} = -1 + \sqrt{\lambda} \operatorname{arcoth}(\sqrt{\lambda})\;.\end{equation}
(We chose spherical coordinates such that $\hat{k}\cdot\hat{\omega}_\alpha = \cos\theta$.)
Solving \cref{eq:tosolve} for $\sqrt{\lambda}$ yields (approximately) the eigenvalues of $A^{1/2}$.
As we are interested in large $\sqrt{\lambda}$, we use the series expansion $\operatorname{arcoth}(\sqrt{\lambda}) = \sqrt{\lambda}^{-1} + \frac{1}{3}\sqrt{\lambda}^{-3} + \frac{1}{5}\sqrt{\lambda}^{-5} + \Ocal(\sqrt{\lambda}^{-7})$ and then solve the power series for $\sqrt{\lambda}$, yielding
\begin{equation}\sqrt{\lambda} = \frac{2\sqrt{\pi\kappa}}{\lvert k\rvert \sqrt{3}} + \frac{3^{3/2}}{20} \frac{\lvert k\rvert}{\sqrt{\pi \kappa}} + \Ocal(\lvert k\rvert^3) \quad \textnormal{for small } k\;.\end{equation}
(For a precise notion of ``small $k$'' one may consider a large--volume limit or compare to a coupling strength parameter.)
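The following small numerical sketch (ours, using standard \texttt{numpy}/\texttt{scipy}; the values of $\lvert k\rvert$ are arbitrary) checks the closed form of the angular integral and compares the numerically computed largest root of \cref{eq:tosolve} with the two-term expansion:
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

kappa = (3 / (4 * np.pi)) ** (1 / 3)

def lhs(s):                        # -1 + sqrt(lam)*arcoth(sqrt(lam)),  s = sqrt(lam) > 1
    return -1.0 + s * np.arctanh(1.0 / s)

# check of the angular integral: (1/4pi) * spherical integral reduces to lhs
lam = 4.0
print(quad(lambda t: t * t / (lam - t * t), 0.0, 1.0)[0], lhs(np.sqrt(lam)))

# largest root of the eigenvalue equation versus the two-term small-|k| expansion
for k in [0.3, 0.1, 0.03]:
    rhs = k**2 / (4 * np.pi * kappa)
    root = brentq(lambda s: lhs(s) - rhs, 1.0 + 1e-9, 1e6)
    series = (2 * np.sqrt(np.pi * kappa) / (k * np.sqrt(3))
              + 3**1.5 / 20 * k / np.sqrt(np.pi * kappa))
    print(k, root, series)
\end{verbatim}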
Multiplying by the prefactor $\hbar\kappa 2\lvert k\rvert$ from \eqref{eq:directsum}, and recalling $\kappa = \left(\frac{3}{4\pi}\right)^{1/3}$, the \emph{excitation energy (dispersion relation)} becomes
\begin{equation}\label{eq:plasmondisprel}
\lambda_\textnormal{pl}(k) = \hbar 2 + \hbar \frac{3}{10} \sqrt{\frac{3\kappa}{\pi}} \lvert k\rvert^2 + \Ocal(\lvert k\rvert^4)\;.
\end{equation}
The first summand, $\lambda_\textnormal{pl}(0) = \hbar 2$, is the frequency of classical plasma oscillations. But we also reproduce the quantum correction: in the physics literature \cite{vSF89} the plasmon is described as an excitation with dispersion relation
\begin{equation}\label{eq:experi}
\lambda_\textnormal{pl}(k) = \lambda_\textnormal{pl}(0) + \frac{\hbar^2}{m} \alpha_\textnormal{RPA} \lvert k\rvert^2 + \Ocal(\lvert k\rvert^4)\;, \qquad \textnormal{with } \alpha_\textnormal{RPA} = \frac{3}{5} \frac{E_F}{\lambda_\textnormal{pl}(0)}\;.
\end{equation}
Since the Fermi energy is given by $E_F = \hbar^2 k_F^2$, the particle mass $m=1/2$, and $k_F = \kappa N^{1/3}$, we see that \cref{eq:plasmondisprel} and \cref{eq:experi} agree.
In \cref{fig:monspectrum} on the left we plotted all solutions of \cref{eq:tosolve} as a function of $\lvert k\rvert$; on the right we show experimental data for the excitation spectrum of sodium as plotted in \cite{All96}.
\begin{figure}\centering
\begin{minipage}[b]{0.48\textwidth}\centering
\includegraphics[scale=0.55]{spectrum_bosonization_mod}
\\
\vspace{-1em}
{\footnotesize \hspace{3.5em}Wavevector $k$ of particle--hole pair}
\end{minipage}\hspace{1.0em}
\begin{minipage}[b]{0.48\textwidth}
\centering
\includegraphics[scale=0.62]{sodium_simpl}
\end{minipage}
\caption[Comparison of Excitation Spectra]{
\textbf{Left:} Numerically determined excitation spectrum of $\hbar 2 \kappa \lvert k\rvert h_\textnormal{eff}(k)$ (ground state energy has been subtracted). Plasmon in black, grey points approximate continuum as $M \to \infty$.
\textbf{Right:} Excitation spectrum of sodium, plot from \cite{All96} based on data from \cite{vSF89}. The area below the solid line represents the continuum of excitations; black marks indicate the plasmon.
}
\label{fig:monspectrum}
\end{figure}
\section*{Acknowledgements}
NB has been supported by ERC grant 694227 and by Swedish Research Council grant 2016-06596 and Verg Foundation while in residence at Institut Mittag--Leffler.
\bibliographystyle{alpha}
\section{Introduction}
\label{sec:intro}
The discovery of very small neutrino masses combined with the
theoretical possibility of the see-saw mechanism \cite{ms80, ms81, moh04}
present interesting challenges and prospects for unification.
In particular baryon asymmetry of the Universe via leptogenesis
\cite{fy86} becomes a natural outcome.
However, the high scale suggested by the see-saw mechanism
raises the hierarchy problem which can be avoided if the model is
supersymmetric. The cosmology of supersymmetric models has a variety of
issues that need to be addressed, the
most obvious ones being the potential over-abundance of the
gravitino and likewise the moduli fields.
Here we report on a
preliminary investigation of a specific supersymmetric model
which is Left-Right symmetric and can be embedded in a
renormalizable $SO(10)$ model \cite{cve85, km93, km95, abs97, amrs98, ams98}.
An appealing aspect of any model with gauged $B-L$ is the absence
of any pre-existing GUT or Planck scale $B-L$ asymmetry, which
combined with the anomalous nature of $B+L$ makes all of the
baryon asymmetry computable.
\section{Overview of the model}
\label{sec:model}
We consider the Minimal Supersymmetric Left Right Model (MSLRM) as discussed in
\cite{cve85, km93, km95, abs97, amrs98, ams98},
$SU(3)_c \otimes SU(2)_L \otimes SU(2)_R \otimes U(1)_{B-L}$
which can potentially be embedded in $SO(10)$.
The minimal set of Higgs multiplets required to implement the
symmetry breaking, along with their charges is given by
\begin{eqnarray*}
\Phi_i = (1,2,2,0), & \quad & i = 1,2 ~, \\
\Delta = (1,3,1,2) , & \quad & \bar{\Delta} = (1,3,1,-2) ~, \\
\Delta_c = (1,1,3,-2) , & \quad & \bar{\Delta}_c = (1,1,3,2) ~.
\end{eqnarray*}
In this scheme the Higgs bidoublet is doubled relative to the
non-supersymmetric case to obtain the Cabibbo-Kobayashi-Maskawa
quark mixing matrix, while the number of
triplets is doubled to ensure anomaly cancellation \cite{abs97}.
In order to avoid charge breaking vacua while obtaining spontaneous
breaking of parity, two extra Higgs superfields
($\Omega$ \& $\Omega_c$) are introduced \cite{abs97}
\[ \Omega = (1,3,1,0), \qquad \Omega_c = (1,1,3,0) ~.\]
A consequence of this scheme is to break $SU(2)_R$ at a scale $M_R$
to $U(1)_R$ without breaking $U(1)_{B-L}$ or $SU(2)_L$.
This subsequently breaks to the SM at the scale $M_{B-L}$.
It is shown in \cite{amrs98} that these energy scales
obey the relation $M_R M_{W}\approx M_{B-L}^2$. For definiteness
we assume $M_R \sim 10^6$GeV and $M_{B-L}\sim 10^4$GeV, making the model
potentially testable at collider energies.
Due to the parity invariance of the original theory, the phase $SU(2)_L$
$\otimes U(1)_{R}$$\otimes U(1)_{B-L}$ is degenerate in energy with the
phenomenologically unacceptable $SU(2)_R$$\otimes U(1)_{L}$
$\otimes U(1)_{B-L}$. Thus in the early Universe,
Domain Walls (DW) occur at the scale $M_R$, causing a contradiction with present
cosmological observations \cite{koz74, zko75}. In this paper we assume that
a small explicit breaking of parity results from soft terms induced by
supersymmetry (SUSY)
breaking in the hidden sector. In turn, smallness of this breaking permits a
certain period of DW domination, and the associated rapid expansion in
fact dilutes gravitino and other unwanted relics. This proposal is
similar in spirit to the idea of weak scale inflation \cite{ls96,mat00}.
In our model, this ``secondary inflation'' is an automatic consequence of
the phenomenological requirements of the model.
\section{Evolution of Domain Walls}
\label{sec:dynDW}
The best constraint that can be imposed on the gravitinos, produced after
primordial inflation, comes from the fact that the decay of the gravitino should not
disturb the delicate balance of light-nuclei abundances \cite{lin80, ekn84}.
This is ensured if the DW created in this model can cause the scale factor
to be enhanced by $\sim 10^9$. This agrees with the
observation by \cite{mat00,ls96} that a secondary inflation can dilute the
moduli and gravitino sufficiently to evade problems to cosmology.
Here we recapitulate the model independent considerations concerning Domain
Walls \cite{kib80,mat00,kt05} and check that the MSLRM DW indeed satisfy
them. In our model the DW form at the parity breaking phase transition
at the scale $M_R\sim 10^6$GeV. The value of the Hubble parameter at
this scale is $H_i = 10^{-7}$ GeV. It is assumed that the Universe is
dominated by gravitinos or moduli which makes it matter dominated,
and that the DW obey the scaling solution appropriate to the matter
dominated evolution \cite{kt05}. With these assumptions,
the Hubble parameter at the epoch of equality of DW contribution
with contribution of the rest of the matter is given by
\begin{equation}
H_{eq} \sim \sigma^{\frac{3}{4}} H_i^{\frac{1}{4}} M_{Pl}^{-\frac{3}{2}} ~,
\end{equation}
where $\sigma$ is the wall tension. For our model this gives
$H_{eq}\sim 10^{-17}$ GeV, corresponding to a temperature $T_{eq}$
of $1$GeV reasonably higher than the Big Bang Nucleosynthesis (BBN) scale.
Let us assume that DW dynamics ensures the temperature scale of decay and
disappearance ($T_d$) of the DW to remain larger than the BBN scale.
In order that $T_{eq}$ remains bigger than $T_d$, the requirement on
the wall tension $\sigma$ is
\begin{equation}
\sigma\ >\ \left(\frac{T_d^8 M^2_{Pl}}{H_i} \right)^{1/3} ~.
\end{equation}
As an example, with $T_d\sim 10$MeV, we get $\sigma > 10^{10}(\textrm{GeV})^3$
easily satisfied for our scenario with $\sigma^{1/3}\sim M_R \sim 10^6$GeV.
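For the reader's convenience, a rough numerical check of these estimates (ours; it assumes $M_{Pl} \approx 10^{19}$ GeV and is meant only at the level of orders of magnitude):
\begin{verbatim}
M_Pl  = 1e19            # Planck scale in GeV (assumed order of magnitude)
H_i   = 1e-7            # Hubble rate at wall formation, M_R ~ 10^6 GeV
sigma = (1e6) ** 3      # wall tension, sigma^(1/3) ~ M_R

H_eq = sigma**0.75 * H_i**0.25 * M_Pl**-1.5
print("H_eq  ~ %.1e GeV" % H_eq)                   # ~ 10^-17 GeV

T_d = 10e-3                                        # 10 MeV
sigma_min = (T_d**8 * M_Pl**2 / H_i) ** (1 / 3)
print("sigma > %.1e GeV^3" % sigma_min)            # ~ 10^10 GeV^3, far below sigma
\end{verbatim}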
Finally, a handle on the discrete symmetry breaking parameters
of the MSLRM can be obtained by noting that there should exist
sufficient wall tension for the walls to disappear before
a desirable temperature scale $T_d$. It has been observed by
\cite{ptww91} that energy density difference $\delta \rho$
between the almost degenerate vacua giving rise to the DW should be
of the order
\begin{equation}
\delta \rho \sim T_d^4
\label{eq:dr_Td_rel}
\end{equation}
for the DW to disappear at the scale $T_d$.
\section{Constraint on the soft terms of the model}
\label{sec:soft-terms}
The soft terms for the given model are:
{\setlength\arraycolsep{2pt}
\begin{eqnarray}
\mathcal{L}_{soft} &=&
\alpha_1 \textrm{Tr} (\Delta \Omega \Delta^{\dagger}) +
\alpha_2 \textrm{Tr} (\bar{\Delta} \Omega \bar{\Delta}^{\dagger}) +
\alpha_3 \textrm{Tr} (\Delta_c \Omega_c \Delta^{\dagger}_c) +
\alpha_4 \textrm{Tr} (\bar{\Delta}_c \Omega_c \bar{\Delta}^{\dagger}_c) ~~~~~
\label{eq:sigNdel} \\
&& + ~m_1 \textrm{Tr} (\Delta \Delta^{\dagger}) +
m_2 \textrm{Tr} (\bar{\Delta} \bar{\Delta}^{\dagger}) +
m_3 \textrm{Tr} (\Delta_c \Delta^{\dagger}_c) +
m_4 \textrm{Tr} (\bar{\Delta}_c \bar{\Delta}^{\dagger}_c)
\label{eq:delta} \\
&& + ~\beta_1 \textrm{Tr} (\Omega \Omega^{\dagger}) +
\beta_2 \textrm{Tr} (\Omega_c \Omega^{\dagger}_c) ~.
\label{eq:omega}
\end{eqnarray} }
The contributions to $\delta \rho$ can now be estimated from
the above lagrangian.
Use of eq. (\ref{eq:dr_Td_rel}) does not place a severe constraint
on the $\alpha_i$'s
if we consider $\alpha_1 \backsimeq \alpha_2$ and
$\alpha_3 \backsimeq \alpha_4$.
For the rest of the soft terms [(\ref{eq:delta}) and (\ref{eq:omega})]
we have respectively, in obvious notation
\begin{equation}
{\delta \rho}_\Delta =
\left[ m_1 \textrm{Tr} (\Delta \Delta^{\dagger}) +
m_2 \textrm{Tr} (\bar{\Delta} \bar{\Delta}^{\dagger}) \right]
- \left[ m_3 \textrm{Tr} (\Delta_c \Delta^{\dagger}_c) +
m_4 \textrm{Tr} (\bar{\Delta}_c \bar{\Delta}^{\dagger}_c) \right]
= 2(m - m^{\prime}) d^2 ~,
\label{eq:ep_delta}
\end{equation}
\begin{equation}
{\delta \rho}_\Omega = \beta_1 \textrm{Tr} (\Omega \Omega^{\dagger}) -
\beta_2 \textrm{Tr} (\Omega_c \Omega^{\dagger}_c)
= 2(\beta_1 - \beta_2) ~\omega^2 ~,
\label{eq:ep_omega}
\end{equation}
where we have considered
$m_1 \backsimeq m_2 \equiv m$, $m_3 \backsimeq m_4 \equiv m^{\prime}$.
The vev's of neutral component of $\Delta (\Delta_c)$ and $\Omega (\Omega_c)$
are $d (d_c)$ and $\omega (\omega_c)$. Here we have assumed that
$d_c \sim d$ and $\omega_c \sim \omega$.
Using the constraint (\ref{eq:dr_Td_rel}) in the eqns.
(\ref{eq:ep_delta}), (\ref{eq:ep_omega}),
the differences between the relevant soft parameters for a range of
permissible values of $T_d$ \cite{kt05} are
\centerline{
\begin{tabular}{c|c|c|c}
\hline
& $T_d = 100$ MeV & $T_d = 1$ GeV & $T_d = 10$ GeV \\ \hline
$(m - m^{\prime}) \sim$ &
$10^{-12}\textrm{ GeV}^2$ & $10^{-8} \textrm{ GeV}^2$ & $10^{-4}
\textrm{ GeV}^2$\\ [1mm]
$(\beta_1 - \beta_2) \sim$ &
$10^{-16}\textrm{ GeV}^2$ & $10^{-12}\textrm{ GeV}^2$ &
$10^{-8}\textrm{ GeV}^2$ \\ [1mm] \hline
\end{tabular} }
Here we have taken $d \sim 10^4 $ GeV,
$\omega \sim 10^6 $ GeV.
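These entries follow from eq.\ (\ref{eq:dr_Td_rel}) together with eqns.\ (\ref{eq:ep_delta}) and (\ref{eq:ep_omega}); the short numerical sketch below (ours, order-of-magnitude only) reproduces them:
\begin{verbatim}
d, omega = 1e4, 1e6                       # vevs in GeV, as above
for T_d in [0.1, 1.0, 10.0]:              # GeV
    delta_rho = T_d**4                            # delta rho ~ T_d^4
    dm    = delta_rho / (2 * d**2)                # from delta rho = 2 (m - m') d^2
    dbeta = delta_rho / (2 * omega**2)            # from delta rho = 2 (beta1 - beta2) omega^2
    print("T_d = %5.1f GeV:  m-m' ~ %.0e GeV^2,  beta1-beta2 ~ %.0e GeV^2"
          % (T_d, dm, dbeta))
\end{verbatim}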
The differences between the values in the left and right sectors quoted above are
lower bounds on the soft-parameter splittings and are very small. Larger values would
be acceptable to low energy phenomenology. However if we wish to retain the
connection to the hidden sector, and have the advantage of secondary
inflation we would want the differences to be close to this bound.
As pointed out in \cite{ptww91, dn93} an asymmetry $\sim 10^{-12}$
is sufficient to ensure the persistence of the favoured vacuum.
\section{Conclusions}
\label{sec:conclusions}
We have considered a supersymmetric Left-Right model which can be embedded in a
renormalizable $ SO(10)$ model. A motivation is to understand the parity
breaking indispensable to such models.
Here we have checked the plausibility of relating this breaking to
the SUSY breaking in the hidden sector. Domain walls which result from
spontaneous breaking of L-R symmetry at the scale $10^6$GeV cause a secondary
inflation, sufficient to dilute gravitinos and other unwanted relics. SUSY
breaking soft terms come into play at the $B-L$ breaking scale $\sim 10^4$GeV
inducing explicit parity breaking terms and ensuring the disappearance of
Domain Walls before BBN. The entropy production and reheating following the
secondary inflation do not regenerate gravitinos to any significant extent
due to the low scale.
\begin{theacknowledgments}
This work is a part of a project funded by the Department of Science and
Technology, India. UAY acknowledges the hospitality of Abdus Salam ICTP
and Fermilab where parts of the work were carried out. The work of
AS is supported by Council of Scientific and Industrial Research grant.
\end{theacknowledgments}
\section{Introduction}
\label{introduction}
State-of-the-art neural machine translation systems use \textit{autoregressive} decoding where words are predicted one-by-one conditioned on all previous words \cite{Bahdanau2014NeuralMT,Vaswani2017AttentionIA}.
\textit{Non-autoregressive} machine translation (NAT, \citealp{Gu2017NonAutoregressiveNM}), on the other hand, generates all words in one shot and speeds up decoding at the expense of performance drop.
Parallel decoding results in conditional independence and prevents the model from properly capturing the highly multimodal distribution of target translations \cite{Gu2017NonAutoregressiveNM}.
One way to remedy this fundamental problem is to refine model output iteratively \cite{Lee2018DeterministicNN, Ghazvininejad2019MaskPredictPD}. This work pursues this iterative approach to non-autoregressive translation.\footnote{Refinement requires several sequential steps, but we abuse the term \textit{non-autoregressive} generation to mean a broad family of methods that generate the target in parallel for simplicity.}
In this work, we propose a transformer-based architecture with attention masking, which we call \textit{\textbf{Dis}entangled \textbf{Co}ntext} (DisCo) transformer, and use it for non-autoregressive decoding.
Specifically, our DisCo transformer predicts every word in a sentence conditioned on an arbitrary subset of the rest of the words.
Unlike the masked language models \cite{devlins2019bert,Ghazvininejad2019MaskPredictPD} where the model only predicts the masked words, the DisCo transformer can predict all words simultaneously, leading to faster inference as well as a substantial performance gain when training data are relatively large.
We also introduce a new inference algorithm for iterative parallel decoding, \textit{parallel easy-first}, where each word is predicted by attending to the words that the model is more confident about.
This decoding algorithm allows for predicting all tokens with different contexts in each iteration and terminates when the output prediction converges, contrasting with the constant number of iterations \cite{Ghazvininejad2019MaskPredictPD}.
Indeed, we will show in a later section that this method substantially reduces the number of required iterations without loss in performance.
Our extensive empirical evaluations on 7 translation directions from standard WMT benchmarks show that our approach achieves competitive performance to state-of-the-art non-autoregressive and autoregressive machine translation while significantly reducing decoding time on average.
\section{DisCo Transformer}
\begin{figure}[h]
\centering
\includegraphics[width=0.46\textwidth]{disco_transformer.pdf}
\caption{DisCo Transformer. $W$ and $p$ denote word and positional embeddings respectively. We simulate three \textit{disentangled} contexts to predict \textcolor{red}{A}, \textcolor{red}{B}, and \textcolor{red}{C} ($Y_n$) given \{\textcolor{blue}{C}\}, \{\textcolor{blue}{A}, \textcolor{blue}{C}\}, and \{\textcolor{blue}{B}\} ($Y_{{\tt obs}}^n$) respectively. \textcolor{lblue}{\bf Dashed lines} indicate masked-out attention connections to $Y_{{\tt mask}}^n$ and $Y_n$ itself. $K$ and $V$ are direct projections of $w_n+p_n$ (thus contextless) for all layers to avoid leakage.}
\label{fig:disco}
\end{figure}
We introduce our DisCo transformer for non-autoregressive translation (Fig.\ \ref{fig:disco}).
We propose a DisCo objective as an efficient alternative to masked language modeling and design an architecture that computes the objective in a single pass.
\subsection{DisCo Objective}
\label{subsec:generalization}
Similar to masked language models \cite{devlins2019bert}, a conditional masked language model (CMLM, \citealp{Ghazvininejad2019MaskPredictPD})
predicts randomly masked target tokens $Y_{{\tt mask}}$ given a source text $X$ and the rest of the target tokens $Y_{{\tt obs}}$. Namely, for every sentence pair in bitext $X$ and $Y$,
\begin{align*}
P(Y_{{\tt mask}}| X, Y_{{\tt obs}}) = {\tt Transformer}(X, Y_{{\tt obs}})\\
Y_{{\tt mask}} \sim {\tt RS}(Y) \quad Y_{{\tt obs}} = Y \setminus Y_{\tt mask}
\end{align*}
where ${\tt RS}$ denotes random sampling of masked tokens.\footnote{BERT \cite{devlins2019bert} masks a token with probability 0.15 while CMLMs \cite{Ghazvininejad2019MaskPredictPD} sample the number of masked tokens uniformly from $[1, N]$.}
CMLMs have proven successful in parallel decoding for machine translation \cite{Ghazvininejad2019MaskPredictPD}, video captioning \cite{Video2019}, and speech recognition \cite{Speech2019}.
However, the fundamental inefficiency with this masked language modeling objective is that the model can only be trained to predict a subset of the reference tokens ($Y_{{\tt mask}}$) for each network pass unlike a normal autoregressive model where we predict all $Y$ from left to right.
To address this limitation, we propose a \textit{\textbf{Dis}entangled \textbf{Co}ntext} (DisCo) objective.\footnote{We distinguish this from disentangled representation.}
The objective involves prediction of every token given an arbitrary (thus \textit{disentangled}) subset of the other tokens. For every $1\leq n \leq N$ where $|Y| = N$, we predict:
\begin{align*}
P(Y_n | X, Y_{{\tt obs}}^n) = {\tt Transformer}(X, Y_{{\tt obs}}^n)\\
Y_{{\tt obs}}^n \sim {\tt RS}(Y\setminus Y_n)
\end{align*}
\subsection{DisCo Transformer Architecture}
Simply computing conditional probabilities $P(Y_n|X, Y_{{\tt obs}}^n)$ with a vanilla transformer decoder will necessitate $N$ separate transformer passes for each $Y_{{\tt obs}}^n$.
We introduce the DisCo transformer to compute these $N$ \textit{contexts} in one shot:
\begin{align*}
P(Y_1| X, Y_{{\tt obs}}^1),\cdots, P(Y_N| X, Y_{{\tt obs}}^N) = {\tt DisCo}(X, Y)
\end{align*}
In particular, our DisCo transformer makes crucial use of attention masking to achieve this computational efficiency.
Denote input word and positional embeddings at position $n$ by $w_n$ and $p_n$.
For each position $n$ in $Y$, the vanilla transformer computes self-attention:\footnote{For simplicity, here we omit fully-connected layers, layer-norm, residual connections, and cross attention to the encoder.}
\begin{align*}
&k_n, v_n, q_n = {\tt Proj}(w_n + p_n)\\
&h_n = {\tt Attention}(K, V, q_{n})\\
&K, V = {\tt Concat}\left(\{k_m\}_{m=1}^N\right), {\tt Concat}\left(\{v_m\}_{m=1}^N\right)
\end{align*}
We modify this attention computation in two aspects.
First, we separate query input from key and value input to avoid feeding the token we predict.
Then we only attend to keys and values that correspond to observed tokens ($K_{{\tt obs}}^n$, $V_{{\tt obs}}^n$) and \textit{mask out} the connection to the other tokens ($Y_{{\tt mask}}^n$ and $Y_n$ itself, \textcolor{lblue}{\bf dashed lines} in Fig.\ \ref{fig:disco}).
\begin{align*}
&k_n, v_n = {\tt Proj}(w_n + p_n) \quad q_n = {\tt Proj}(p_n)\\
&h_n = {\tt Attention}(K_{{\tt obs}}^n, V_{{\tt obs}}^n, q_{n})\\
&K_{{\tt obs}}^n ={\tt Concat} \left( \left\{ k_m | Y_m \in Y_{{\tt obs}}^n \right\} \right)\\
&V_{{\tt obs}}^n ={\tt Concat} \left( \left\{ v_m | Y_m \in Y_{{\tt obs}}^n \right\} \right)
\end{align*}
\subsection{Stacked DisCo Transformer}
Unfortunately stacking DisCo transformer layers is not straightforward.
Suppose that we compute the $n$th position in the $j$th layer from the previous layer's output as follows:
\begin{align*}
&k_n^j, v_n^j = {\tt Proj}(w_n + h_n^{j-1}) \quad q_n^{j} = {\tt Proj}(h_n^{j-1})\\
&h_n^j = {\tt Attention}(K_{{\tt obs}}^{n, j}, V_{{\tt obs}}^{n, j}, q_{n}^j)
\end{align*}
In this case, however, any cyclic relation between positions will cause information leakage.
Concretely, assume that $Y = [A, B]$ and $N=2$.
Suppose also that $Y_{obs}^1 = B$ and $Y_{obs}^2=A$, and thus there is a cycle that position 1 can see $B$ and position 2 can see $A$.
Then the output state at position 1 in the first layer $h_1^1$ becomes a function of $B$:
\begin{align*}
h_1^1 (B) = {\tt Attention}(k_2^1(B), v_2^1(B), q_1^1)
\end{align*}
Since position 2 can see position 1, the output state at position 2 in the second layer $h_2^2$ is computed by
\begin{align*}
h_2^2 = {\tt Attention}\left(k_1^2(h_1^1(B)), v_1^2(h_1^1(B)), q_2^2\right)
\end{align*}
But $h_2^2$ will be used to predict the token at position 2 i.e., $B$, and this will clearly make the prediction problem degenerate.
To avoid this cyclic leakage, we make keys and values independent of the previous layer's output $h_n^{j-1}$:
\begin{align*}
&k_n^j, v_n^j = {\tt Proj}(w_n + p_n) \quad q_n^j = {\tt Proj}(h_n^{j-1})\\
&h_n^j = {\tt Attention}(K_{{\tt obs}}^{n,j}, V_{{\tt obs}}^{n,j}, q_{n}^j)
\end{align*}
In other words, we \textit{decontextualize} keys and values in stacked DisCo layers.
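To make the construction concrete, here is a minimal self-contained sketch of the DisCo self-attention (our illustration in PyTorch, not the released implementation; single head, omitting cross-attention to the encoder, layer normalization, and the feed-forward sublayer):
\begin{verbatim}
import torch
import torch.nn as nn

class DisCoSelfAttention(nn.Module):
    """Queries come from the previous layer's states; keys and values are
    projections of the contextless (word + positional) embeddings only."""
    def __init__(self, d_model):
        super().__init__()
        self.q_proj = nn.Linear(d_model, d_model)
        self.k_proj = nn.Linear(d_model, d_model)
        self.v_proj = nn.Linear(d_model, d_model)
        self.scale = d_model ** -0.5

    def forward(self, h_prev, emb, obs_mask):
        # h_prev:   (B, N, d)  previous layer output (positional embeddings at layer 0)
        # emb:      (B, N, d)  word + positional embeddings (contextless)
        # obs_mask: (B, N, N)  obs_mask[b, n, m] = True iff Y_m is observed when
        #                      predicting position n (never True on the diagonal)
        q, k, v = self.q_proj(h_prev), self.k_proj(emb), self.v_proj(emb)
        scores = torch.einsum("bnd,bmd->bnm", q, k) * self.scale
        scores = scores.masked_fill(~obs_mask, float("-inf"))
        attn = torch.nan_to_num(torch.softmax(scores, dim=-1))  # empty observed sets -> 0
        return torch.einsum("bnm,bmd->bnd", attn, v)
\end{verbatim}
Stacking such layers only re-projects the queries from the previous layer's output, while keys and values are always recomputed from the same contextless embeddings, which is exactly the decontextualization described above.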
\subsection{Training Loss}
We use a standard transformer as an encoder and stacked DisCo layers as a decoder.
For each $Y_n$ in $Y$ where $|Y|=N$, we uniformly sample the number of visible tokens from $[0, N-1]$, and then we randomly choose that number of tokens from $Y\setminus Y_n$ as $Y_{{\tt obs}}^n$, similarly to CMLMs \cite{Ghazvininejad2019MaskPredictPD}.
We optimize the negative log likelihood loss from $P(Y_n | X, Y_{{\tt obs}}^n)$ $(1 \leq n \leq N)$.
Again following CMLMs, we append a special token to the encoder and project the vector to predict the target length for parallel decoding.
We add the negative log likelihood loss from this length prediction to the loss from word predictions.
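For concreteness, one possible way to sample the per-position observed sets described above (our sketch; the batched implementation may differ in detail):
\begin{verbatim}
import torch

def sample_disco_obs_mask(N):
    # For each position n, draw |Y_obs^n| uniformly from [0, N-1] and pick that
    # many positions at random from Y \ {Y_n}.
    counts = torch.randint(0, N, (N,))              # |Y_obs^n| for every n
    scores = torch.rand(N, N)
    scores.fill_diagonal_(float("inf"))             # Y_n is never observed for itself
    ranks = scores.argsort(dim=-1).argsort(dim=-1)  # random ranks, diagonal ranked last
    return ranks < counts.unsqueeze(-1)             # (N, N) boolean observation mask

mask = sample_disco_obs_mask(6)
assert not mask.diagonal().any()
\end{verbatim}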
\subsection{DisCo Objective as Generalization}
We designed the DisCo transformer to compute conditional probabilities at every position efficiently, but here we note that the DisCo transformer can be readily used with other training schemes in the literature.
We can train an autoregressive DisCo transformer by always setting $Y_{{\tt obs}}^n = Y_{<n}$.
XLNet \cite{Yang2019XLNetGA} is also a related variant of a transformer that was introduced to produce general-purpose contextual word representations.
The DisCo transformer differs from XLNet in two critical ways. First, XLNet consists of separate context stream and query stream attention. This means that we need to double the amount of expensive attention and fully connected layer computation in the transformer.
Another difference is that XLNet is only trained to predict tokens in permuted order.
The DisCo transformer can be trained for the permutation objective by setting $Y_{{\tt obs}}^n = \{ Y_i | z(i) <z(n)\}$ where $z(i)$ indicates the rank of the $i$th element in the new order.
\citet{Baevski2019ClozedrivenPO} train their two tower model with the \textit{cloze objective} again for general-purpose pretraining.
We can train our DisCo transformer with this objective by setting $Y_{{\tt obs}}^n = \{ Y_{\neq n}\}$.
The proposed DisCo objective provides generalization that encompasses all of these special cases.
\section{Inference Algorithms}
In this section, we discuss inference algorithms for our DisCo transformer.
We first review \textit{mask-predict} from prior work as a baseline and introduce a new parallelizable inference algorithm, \textit{parallel easy-first} (Alg.\ \ref{alg:parallel_eas_first}).
\subsection{Mask-Predict}
\textit{Mask-predict} is an iterative inference algorithm introduced in \citet{Ghazvininejad2019MaskPredictPD} to decode a conditional masked language model (CMLM).
The target length $N$ is first predicted, and then the algorithm iterates over two steps: \textit{mask} where $i_t$ tokens with lowest probability are masked and \textit{predict} where those masked tokens are updated given the other $N-i_t$ tokens.
The number of masked tokens $i_t$ decays from $N$ with a constant rate over a fixed number of iterations $T$. Specifically, at iteration $t$,
\begin{align*}
&\quad i_t = \floor*{N \cdot \frac{T-t+1}{T}}\\
&Y_{{\tt obs}}^t = \{Y_{j}^{t-1} | j \in \topk_{n} (p_n^{t-1}, k=N-i_t) \}\\
&Y_n^t,p_n^t =
\begin{cases}
\argandmax_{w} P(Y_n=w | X, Y_{{\tt obs}}^t) \ \text{if } Y_n^t \not\in Y_{{\tt obs}}^t\\
Y_n^{t-1}, p_n^{t-1} \quad \quad \text{otherwise}
\end{cases}
\end{align*}
This method is directly applicable to our DisCo transformer by fixing $Y_{obs}^{n,t}$ regardless of the position $n$.
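A compact sketch of this constant-step schedule (ours; \texttt{model} denotes an assumed interface that returns, for every position, the highest-scoring token and its log-probability given the source and the currently unmasked target tokens):
\begin{verbatim}
import torch

def mask_predict(model, src, N, T, mask_id):
    y = torch.full((N,), mask_id, dtype=torch.long)
    logp, y = model(src, y)                        # t = 1: every position is masked
    for t in range(2, T + 1):
        i_t = N * (T - t + 1) // T                 # number of tokens to re-mask
        worst = logp.topk(i_t, largest=False).indices
        y_masked = y.clone()
        y_masked[worst] = mask_id
        new_logp, new_y = model(src, y_masked)
        y[worst], logp[worst] = new_y[worst], new_logp[worst]
    return y
\end{verbatim}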
\subsection{Parallel Easy-First}
\label{sec:easy-first}
An advantage of the DisCo transformer over a CMLM is that we can predict tokens in all positions conditioned on different context simultaneously.
The mask-predict inference can only update masked tokens given the fixed observed tokens $Y_{{\tt obs}}^t$, meaning that we are wasting the opportunity to improve upon $Y_{{\tt obs}}^t$ and to take advantage of broader context present in $Y_{{\tt mask}}^t$.
We develop an algorithm, \textit{parallel easy-first} (Alg.\ \ref{alg:parallel_eas_first}), which makes predictions in all positions, thereby benefiting from this property.
Concretely, in the first iteration, we predict all tokens in parallel given source text:
\begin{align*}
Y_n^1, p_n = \argandmax_w P(Y_n=w | X)
\end{align*}
Then, we get the \textit{easy-first} order $z$ where $z(i)$ denotes the rank of $p_i$ in descending order.
At iteration $t>1$, we update predictions for all positions by
\begin{align*}
&Y_{{\tt obs}}^{n, t} = \left\{ Y_i^{t-1} | z(i) < z(n) \right\}\\
&Y_n^t, p_n^t = \argandmax_{w} P\left(Y_n=w | X, Y_{{\tt obs}}^{n,t}\right)
\end{align*}
Namely, we update each position given previous predictions on the \textit{easier} positions.
In a later section, we will explore several variants of choosing $Y_{{\tt obs}}^{n,t}$ and show that this easy-first strategy performs best despite its simplicity.
\subsection{Length Beam}
Following \citet{Ghazvininejad2019MaskPredictPD}, we apply length beam.
In particular, we predict top K lengths from the distribution in length prediction and run parallel easy-first simultaneously.
In order to speed up decoding, we terminate if the one with the highest average log score $\sum_{n=1}^N \log(p_n^t)/N$ converges.
It should be noted that for parallel easy-first, $Y^t = Y^{t-1}$ means convergence because $Y^{n,t}_{obs}=Y^{n,t+1}_{obs}$ for all positions $n$ while mask-predict may keep updating tokens even after because $Y_{{\tt obs}}^t$ changes over iterations.
See Alg.\ \ref{alg:parallel_eas_first} for full pseudo-code.
Notice that all for-loops are parallelizable except the one over iterations $t$.
In the subsequent experiments, we use length beam size of 5 \cite{Ghazvininejad2019MaskPredictPD} unless otherwise noted.
In Sec.\ \ref{sec:analysis_inference}, we will illustrate that length beam facilitates decoding both the CMLM and DisCo transformer.
\begin{algorithm}[tb]
\caption{Parallel Easy-First with Length Beam}
\label{alg:parallel_eas_first}
\begin{algorithmic}
\STATE Source sentence: $X$\\
\STATE Predicted lengths: $N_1, \cdots , N_K$\\
\STATE Max number of iterations: $T$
\FOR{ { $k \in \left\{1,2,...,K\right\}$}}
\FOR{ { $n \in \left\{1,2,...,N_k\right\}$}}
\STATE {$Y^{1,k}_n, p_n^k = \argandmax_{w} P (y_n=w| X)$}
\ENDFOR
\STATE{Get the easy-first order $z_k$ by sorting $p^k$ and let $z_k(i)$ be the rank of the $i$th position.}
\ENDFOR
\FOR{ { $t \in \left\{2,...,T\right\}$}}
\FOR{ { $k \in \left\{1,...,K\right\}$}}
\FOR{{$n \in \left\{1,2,...,N_{k}\right\}$}}
\STATE{
\vspace{-0.3cm}
\begin{align*}
Y_{{\tt obs}}^{n, t} = \{ Y_i^{t-1, k} | z_k(i) < z_k(n) \}\\
Y_n^{t,k}, p_n^{t,k} = \argandmax_{w} P\left(Y_n=w | X, Y_{{\tt obs}}^{n,t}\right)
\end{align*}
\vspace{-0.5cm}
}
\ENDFOR
\ENDFOR
\STATE{
\vspace{-0.8cm}
\begin{align*}
k^* = \argmax_k \sum_{n=1}^{N_k} \log\left(p_n^{t,k} \right)/N_k
\end{align*}
}
\STATE {\textbf{if} $Y^{t-1,k^*} = Y^{t,k^*}$ \textbf{then} \textbf{return} $Y^{t,k^*}$}
\ENDFOR
\STATE {\textbf{return} $Y^{T, k^*}$}
\end{algorithmic}
\end{algorithm}
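For readers who prefer code to pseudo-code, the core loop of Alg.\ \ref{alg:parallel_eas_first} for a single length candidate reads roughly as follows (our sketch; \texttt{model.first\_pass} and \texttt{model.masked\_pass} are assumed interfaces to the DisCo transformer, and the length beam is omitted):
\begin{verbatim}
import torch

def parallel_easy_first(model, src, N, T):
    logp, y = model.first_pass(src, N)              # predict all N tokens from src alone
    rank = logp.argsort(descending=True).argsort()  # rank[n] = z(n), the easy-first order
    obs_mask = rank.unsqueeze(0) < rank.unsqueeze(1)  # position n attends to easier positions
    for _ in range(2, T + 1):
        new_logp, new_y = model.masked_pass(src, y, obs_mask)
        if torch.equal(new_y, y):                   # Y^t = Y^{t-1}: converged
            return y
        y, logp = new_y, new_logp
    return y
\end{verbatim}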
\section{Experiments}
We conduct extensive experiments on standard machine translation benchmarks.
We demonstrate that our DisCo transformer with the parallel easy-first inference achieves comparable performance to, if not better than, prior work on non-autoregressive machine translation with substantial reduction in the number of sequential steps of transformer computation.
We also find that our DisCo transformer achieves more pronounced improvement when bitext training data are large, getting close to the performance of autoregressive models.
\subsection{Experimental Setup}
\label{sec:setup}
\paragraph{Benchmark datasets}
We evaluate on 7 directions from four standard datasets with various training data sizes: WMT14 EN-DE (4.5M pairs), WMT16 EN-RO (610K pairs), WMT17 EN-ZH (20M pairs), and WMT14 EN-FR (36M pairs, en$\rightarrow$fr only).
These datasets are all encoded into subword units by BPE \cite{sennrich-etal-2016-neural}.\footnote{We run joint BPE on all language pairs except EN-ZH.}
We use the same preprocessed data and train/dev/test splits as prior work for fair comparisons (EN-DE: \citealp{Vaswani2017AttentionIA}; EN-RO: \citealp{Lee2018DeterministicNN}; EN-ZH: \citealp{Hassan2018AchievingHP, wu2018pay}; EN-FR: \citealp{Gehring2017ConvolutionalST, Ott2018ScalingNM}).
We evaluate performance with BLEU scores \cite{Papineni2001BleuAM} for all directions except that we use SacreBLEU \cite{post-2018-call}\footnote{SacreBLEU hash: BLEU+case.mixed+lang.en-zh+numrefs.1+smooth.exp+test.wmt17+tok.zh+version.1.3.7.} in en$\rightarrow$zh again for fair comparison with prior work \cite{Ghazvininejad2019MaskPredictPD}.
For all autoregressive models, we apply beam search with $b=5$ \cite{Vaswani2017AttentionIA, Ott2018ScalingNM} and tune length penalty of $\alpha \in [0.0, 0.2, \cdots, 2.0]$ in validation.
For parallel easy-first, we set the max number of iterations $T=10$ and use $T=4,10$ for constant-time mask-predict.
\begin{table*}[t]
\centering
\caption{The performance of non-autoregressive machine translation methods on the WMT14 EN-DE and WMT16 EN-RO test data. The Step columns indicate the average number of sequential transformer passes.
Shaded results use a small transformer ($d_{model}=d_{hidden}=512$).
Our EN-DE results show the scores after conventional compound splitting \cite{luong-etal-2015-effective, Vaswani2017AttentionIA}.}
\vskip 0.1in
\begin{tabular}{l|cc|cc|cc|cc}
\toprule
\textbf{Model}&
\multicolumn{2}{c}{\textbf{en$\rightarrow$de}} &
\multicolumn{2}{c|}{\textbf{de$\rightarrow$en}} &
\multicolumn{2}{c}{\textbf{en$\rightarrow$ro}} &
\multicolumn{2}{c}{\textbf{ro$\rightarrow$en}} \\
$n$: \# rescored candidates & Step\ & BLEU& Step & BLEU & Step & BLEU & Step & BLEU \\
\hline
\rowcolor[gray]{.90} \citet{Gu2017NonAutoregressiveNM} ($n=100$)& 1 & 19.17 & 1 & 23.20 & 1 &29.79 &1 & 31.44 \\
\citet{Wang2019NonAutoregressiveMT} ($n=9$) & 1 & 24.61 & 1 & 28.90 & --& --& --&-- \\
\citet{Li2019HintBasedTF} ($n=9$)& 1 & 25.20 & 1 & 28.80 & -- & -- & -- & --\\
\citet{Ma2019FlowSeqNC} ($n=30$) & 1 & 25.31 & 1 & 30.68 & 1 & 32.35 & 1 & 32.91\\
\citet{Sun2019Fast} ($n=19$) & 1 & 26.80 & 1 & 30.04 & -- & -- & -- & -- \\
\rowcolor[gray]{.90} \citet{ReorderNAT} & 1 & 26.51& 1 & 31.13 & 1 & 31.70 & 1 & 31.99\\
\citet{Shu2019LatentVariableNN} ($n=50$) & 1 & 25.1 & -- & -- & -- & -- & -- & --\\ \hline
\textbf{Iterative NAT Models} & & & & & & & &\\
\rowcolor[gray]{.90} \citet{Lee2018DeterministicNN} & 10 & 21.61 & 10 & 25.48 & 10 & 29.32 & 10 & 30.19 \\
\citet{Ghazvininejad2019MaskPredictPD} (CMLM) & 4& 25.94 & 4 & 29.90 & 4 & 32.53 & 4 & 33.23 \\
& 10& 27.03 & 10 & 30.53 &10 & 33.08&10 & 33.31\\
\citet{Gu2019LevenshteinT} (LevT) & 7+ & 27.27 &-- & -- & -- & -- & 7+ & 33.26 \\
\hdashline
\textbf{Our Implementations} & & & & & & & &\\
CMLM + Mask-Predict & 4 & 26.73&4& 30.75 & 4 & 33.02 & 4 & 33.27\\
CMLM + Mask-Predict & 10 & \textbf{27.39} & 10 & 31.24 & 10 & \textbf{33.33} & 10 & \textbf{33.67} \\
DisCo + Mask-Predict & 4& 25.83 & 4& 30.15 & 4 &32.22 & 4& 32.92\\
DisCo + Mask-Predict & 10& 27.06 &10&30.89 & 10 & 32.92 & 10 & 33.12\\
DisCo + Easy-First & 4.82 & 27.34& 4.23 & \textbf{31.31} &3.29 &33.22 &3.10 & 33.25 \\
\hline
\textbf{AT Models} & & & & & & & &\\
\citet{Vaswani2017AttentionIA} (base)& N&27.3 &--& -- & -- & -- & -- & --\\
\citet{Vaswani2017AttentionIA} (large) & N &28.4 &--& -- & -- & -- & -- & --\\
\hdashline
\textbf{Our Implementations} & & & & & & & &\\
AT Transformer Base (EN-RO teacher) & N & 27.38 & N & 31.78& N& 34.16 & N & 34.46\\
AT Transformer Base + Distillation\ & N & 28.24 & N & 31.54 & --& -- & -- & --\\
AT Transformer Large (EN-DE teacher) & N & 28.60 & N & 31.71 & -- & -- & -- & --\\
\bottomrule
\end{tabular}
\label{en-de-ro_result}
\vskip -0.1in
\end{table*}
\subsection{Baselines and Comparison}
There has been a flurry of recent work on non-autoregressive machine translation (NAT) that finds a balance between parallelism and performance.
Performance can be measured using automatic evaluation such as BLEU scores \cite{Papineni2001BleuAM}.
Latency is, however, challenging to compare across different methods.
For models that have an autoregressive component (e.g.,\ \citealp{Kaiser2018FastDI,ReorderNAT}), we can speed up sequential computation by caching states.
Further, many prior NAT approaches generate varying numbers of translation candidates and rescore them using an autoregressive model.
The rescoring process typically costs overhead of one parallel pass of a transformer encoder followed by a decoder.
Given this complexity in latency comparison, we highlight two state-of-the-art iteration-based NAT models whose latency is comparable to our DisCo transformer due to the similar model structure.
See Sec.\ \ref{sec:related_work} for descriptions of more work on NAT.
\paragraph{CMLM}
As discussed earlier, we can generate a translation with mask-predict from a CMLM \cite{Ghazvininejad2019MaskPredictPD}.
We can directly compare our DisCo transformer with this method by the number of iterations required.\footnote{Caching \textit{contextless} key and value computation in the DisCo transformer gives us a slight speedup, but it is relatively minor as compared to expensive attention and fully connected computation.}
We provide results obtained by running their code.\footnote{\url{https://github.com/facebookresearch/Mask-Predict}}
\paragraph{Levenshtein Transformer}
Levenshtein transformer (LevT) is a transformer-based iterative model for parallel sequence generation \cite{Gu2019LevenshteinT}.
Its iteration consists of three sequential steps: \textit{deletion}, \textit{placeholder prediction}, and \textit{token prediction}.
Unlike the CMLM with the constant-time mask-predict inference, decoding in LevT terminates adaptively under certain condition.
Its latency can be roughly compared via the average number of sequential transformer runs.
Each iteration consists of three transformer runs except that the first iteration skips the \textit{deletion} step.
See \citet{Gu2019LevenshteinT} for detail.
\paragraph{Hyperparameters}
We generally follow the hyperparameters for a transformer base \cite{Vaswani2017AttentionIA, Ghazvininejad2019MaskPredictPD}: 6 layers for both the encoder and decoder, 8 attention heads, 512 model dimensions, and 2048 hidden dimensions.
We sample weights from $\mathcal{N}(0, 0.02)$, initialize biases to zero, and set layer normalization parameters to $\beta=0,\gamma=1$ \cite{devlins2019bert}.
For regularization, we tune the dropout rate from $[0.1, 0.2, 0.3]$ based on dev performance in each direction, and apply weight decay with $0.01$ and label smoothing with $\varepsilon=0.1$.
We train batches of approximately 128K tokens using Adam \cite{Kingma2014AdamAM} with $\beta=(0.9, 0.999)$ and $\varepsilon=10^{-6}$.
The learning rate warms up to $5\cdot10^{-4}$ in the first 10K steps, and then decays with the inverse square-root schedule.
We train all models for 300K steps apart from en$\rightarrow$fr where we make 500K steps to account for the data size.
We measure the dev BLEU score at the end of each epoch to avoid stochasticity, and average the 5 best checkpoints to obtain the final model.
We use 16 Telsa V100 GPUs and accelerate training by mixed precision floating point \cite{micikevicius2018mixed}, and implement all models with \texttt{fairseq} \cite{ott-etal-2019-fairseq}.
\paragraph{Distillation}
Similar to previous work on non-autoregressive translation (e.g.,\ \citealp{Gu2017NonAutoregressiveNM, Lee2018DeterministicNN}), we apply sequence-level knowledge distillation \cite{Kim2016SequenceLevelKD} by training every model in all directions on translations produced by a standard left-to-right transformer model (transformer large for EN-DE, EN-ZH, and EN-FR and base for EN-RO).
We also present results obtained from training a standard autoregressive base transformer on the same distillation data for comparison.
We assess the impact of distillation in Sec.\ \ref{sec:analysis_training} and demonstrate that distillation is still a key component in our non-autoregressive models.
\subsection{Results and Discussion}
Seen in Table \ref{en-de-ro_result} are the results in the four directions from the WMT14 EN-DE and WMT16 EN-RO datasets.
First, our re-implementations of CMLM + Mask-Predict outperform \citet{Ghazvininejad2019MaskPredictPD} (e.g.,\ 31.24 vs.\ 30.53 in de$\rightarrow$en with 10 steps). This is probably due to our tuning on the dropout rate and weight averaging of the 5 best epochs based on the validation BLEU performance (Sec.\ \ref{sec:setup}).
Our DisCo transformer with the parallel easy-first inference achieves at least comparable performance to the CMLM with 10 steps despite the significantly fewer steps on average (e.g.,\ 4.82 steps in en$\rightarrow$de).
The one exception is ro$\rightarrow$en (33.25 vs. 33.67), but DisCo + Easy-First requires only 3.10 steps, and CMLM + Mask-Predict with 4 steps achieves similar performance of 33.27.
The limited advantage of our DisCo transformer on the EN-RO dataset suggests that we benefit less from the training efficiency of the DisCo transformer on the small dataset (610K sentence pairs).
DisCo + Mask-Predict generally underperforms DisCo + Easy-First, implying that the mask-predict inference, which fixes $Y_{{\tt obs}}^n$ across all positions $n$, fails to utilize the flexibility of the DisCo transformer.
DisCo + Easy-First also accomplishes significant reduction in the average number of steps as compared to the adaptive decoding in LevT \cite{Gu2019LevenshteinT} while performing competitively.
As discussed earlier, each iteration in inference on LevT involves three sequential transformer runs, which undermine the latency improvement.
Overall, our implementations compare well with other NAT models from prior work.
We achieve competitive performance to the standard autoregressive models with the same transformer base configuration on the EN-DE dataset except that the autoregressive model with distillation performs comparably to the transformer large teacher in en$\rightarrow$de (28.24 vs. 28.60).
Nonetheless, we still see a large gap between the autoregressive teachers and our NAT results in both directions from EN-RO, illustrating a limitation of our remedy for the trade-off between decoding parallelism and performance.
Tables \ref{en-zh_result} and \ref{en-fr_result} show results from the EN-ZH and EN-FR datasets where the bitext data are larger (20M and 36M sentence pairs).
In both cases we see similar (yet more pronounced) patterns to the EN-DE and EN-RO experiments. Particularly noteworthy is that DisCo with the parallel easy-first inference and dropout tuning yields 34.63 points, a gain of 1.4 BLEU over \citet{Ghazvininejad2019MaskPredictPD} in en$\rightarrow$zh despite the average of 5.44 steps.
\begin{table}[h]
\centering
\caption{WMT17 EN-ZH test results.}
\vskip 0.1in
\addtolength{\tabcolsep}{-1.5pt}
\begin{tabular}{l|cc|cc}
\toprule
\textbf{Model}& \multicolumn{2}{c|}{\textbf{en$\rightarrow$zh}} & \multicolumn{2}{c}{\textbf{zh$\rightarrow$en}} \\
& Step\ & BLEU& Step\ & BLEU\\
\hline
\citeauthor{Ghazvininejad2019MaskPredictPD} & 4& 32.63 & 4 & 21.90 \\
(\citeyear{Ghazvininejad2019MaskPredictPD}) & 10& 33.19 & 10 & 23.21 \\
\hdashline
\textbf{Our Implementations} & & & & \\
CMLM + Mask-Predict & 4 & 33.58 & 4 & 22.57\\
CMLM + Mask-Predict & 10 & 34.24 & 10 & 23.76\\
DisCo + Mask-Predict & 4 & 33.61 &4 & 22.42\\
DisCo + Mask-Predict & 10 & 34.51 &10 & 23.68\\
DisCo + Easy-First & 5.44 &\textbf{34.63} &5.90 & \textbf{23.83}\\
\hline
AT Transformer Base & N & 34.74 & N & 23.77\\
+ Distillation& N & 35.09 & N & 24.53\\
AT Trans.\ Large (teacher)& N & 35.01 & N & 24.65\\
\bottomrule
\end{tabular}
\label{en-zh_result}
\end{table}
\begin{table}[h]
\centering
\caption{WMT14 EN-FR test results.}
\vskip 0.1in
\begin{tabular}{l|ccc}
\toprule
\textbf{Model} & \multicolumn{2}{c}{\textbf{en$\rightarrow$fr}} & Train \\
& Step & BLEU & Time \\
\hline
CMLM + Mask-Predict & 4 & 40.21 & \multirow{2}{*}{53 h}\\
CMLM + Mask-Predict & 10& 40.55 & \\
\hdashline
DisCo + Mask-Predict & 4 & 39.59 & \multirow{3}{*}{37 h}\\
DisCo + Mask-Predict & 10 & 40.27& \\
DisCo + Easy-First & 4.29 & 40.66 & \\
\hline
\citet{Vaswani2017AttentionIA} (base) & N & 38.1 & --\\
\citet{Vaswani2017AttentionIA} (large) & N & 41.8 & --\\
\citet{Ott2018ScalingNM} (teacher) & N & 43.2 & --\\\hdashline
AT Transformer Base & N & 41.27 & 28 h\\
+ Distillation & N &42.03 & 28 h\\
\bottomrule
\end{tabular}
\label{en-fr_result}
\end{table}
\subsection{Decoding Speed}
We saw that the DisCo transformer with the parallel easy-first inference achieves competitive performance to the CMLM while reducing the number of iterations.
Here we compare them in terms of the wall-time speedup with respect to the standard autoregressive model of the same base configuration (Fig.\ \ref{fig:en-zh_speedup}).
For each decoding run, we feed one sentence at a time and measure the wall time from when the model is loaded until the last sentence is translated, following the setting in \citet{Gu2019LevenshteinT}.
All models are implemented in \texttt{fairseq} \cite{ott-etal-2019-fairseq} and run on a single Nvidia V100 GPU.
We can confirm that the average number of iterations directly translates to decoding time; the average number of iterations of the DisCo transformer with $T=10$ was 5.44 and the measured speedup lies between $T=5,6$ of the CMLM.
Note that \texttt{fairseq} implements efficient decoding of autoregressive models by caching hidden states.
The average length of generated sentences in the autoregressive model was 25.16 (4.6x steps compared to 5.44 steps), but we only gained a threefold speedup from DisCo.
\begin{figure}[t]
\centering
\includegraphics[width=0.46\textwidth]{decoding_speed_en-zh.pdf}
\caption{Relative decoding speedup on the en$\rightarrow$zh test data with respect to the standard autoregressive model (indicated as \textcolor{cyellow}{$\blacktriangle$}). $T$ and $b$ denote the (max) number of iterations and beam size respectively. The length beam size is set to 5 in all cases.}
\label{fig:en-zh_speedup}
\vspace{-0.5cm}
\end{figure}
\section{Analysis and Ablations}
In this section, we give an extensive analysis of our approach along the training and inference dimensions.
\subsection{Training}
\label{sec:analysis_training}
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{training_efficiency.pdf}
\caption{EN$\rightarrow$DE test results with varying batch size.}
\label{fig:training_efficiency}
\end{figure}
\paragraph{Training Efficiency}
In Sec.\ \ref{subsec:generalization}, we discussed the fundamental inefficiency of CMLM training---a CMLM is trained to predict only a subset of the target words. DisCo addresses this problem with an architecture that allows for predicting every word given a randomly chosen subset of the other target words. Seen in Fig.\ \ref{fig:training_efficiency} are results on the en$\rightarrow$de test data with varying batch sizes. We can see that DisCo is more robust to smaller batch sizes, supporting our claim that it provides more efficient training.
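To make the contrast concrete, the following is a minimal sketch (not our released implementation; tensor names and shapes are illustrative assumptions) of the two objectives: the CMLM loss supervises only the masked positions, whereas a DisCo-style loss supervises every position.
\begin{verbatim}
import torch
import torch.nn.functional as F

def cmlm_loss(logits, target, mask):
    # loss only over the randomly masked positions
    loss = F.cross_entropy(logits.transpose(1, 2), target,
                           reduction="none")
    return (loss * mask).sum() / mask.sum().clamp(min=1.0)

def disco_loss(logits, target):
    # every position is predicted (each from its own observed
    # subset), so the whole sentence contributes to the gradient
    loss = F.cross_entropy(logits.transpose(1, 2), target,
                           reduction="none")
    return loss.mean()

# toy usage: batch of 2 sentences, length 5, vocabulary 11
logits = torch.randn(2, 5, 11)
target = torch.randint(0, 11, (2, 5))
mask = (torch.rand(2, 5) < 0.5).float()
print(cmlm_loss(logits, target, mask), disco_loss(logits, target))
\end{verbatim}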
\paragraph{Distillation}
We assess the effects of knowledge distillation across different models and inference configurations (Table \ref{tab:distill}).
Consistent with prior work \cite{Gu2017NonAutoregressiveNM, UnderstandingKD}, we find that distillation benefits all of the non-autoregressive models.
Moreover, the DisCo transformer benefits more from distillation compared to the CMLM under the same mask-predict inference.
This is in line with \citet{UnderstandingKD}, who showed that there is a correlation between model capacity and distillation data complexity.
The DisCo transformer uses contextless keys and values, resulting in reduced capacity.
Autoregressive translation also improves with distillation from a large transformer, but the difference is relatively small.
Finally, we can observe that the gain from distillation decreases as we incorporate more global information in inference (more iterations in NAT cases and larger beam size in AT cases).
\begin{table}[h]
\centering
\small
\caption{Effects of distillation across different models and inference. All results are BLEU scores from the dev data. $T$ and $b$ denote the max number of iterations and beam size respectively.}
\vskip 0.1in
\addtolength{\tabcolsep}{-1.0pt}
\begin{tabular}{l|c|ccc|ccc}
\toprule
& & \multicolumn{3}{c|}{\textbf{en$\rightarrow$de}} & \multicolumn{3}{c}{\textbf{ro$\rightarrow$en}} \\
Model & $T$ & raw\ & dist. &$\Delta$ & raw\ & dist. & $\Delta $\\
\hline
CMLM + MaskP& 4 &22.7 & 25.5& 2.8 & 33.2 & 34.8 & 1.6\\
CMLM + MaskP & 10 & 24.5& 25.9 & 1.4 & 34.5 & 34.9 & 0.4 \\
\hdashline
DisCo + MaskP & 4& 21.4 &24.6 & 3.2 & 32.3 & 34.1 & 1.8\\
DisCo + MaskP & 10 & 23.6 & 25.3 & 1.7 & 33.4 & 34.3 & 0.9 \\
\hdashline
DisCo + EasyF& 10 & 23.9 & 25.6 & 1.7 & 34.0&35.0 & 1.0 \\ \hline
AT Base ($b=1$)& N & 25.5 & 26.4 & 0.9 & -- & --& -- \\
AT Base ($b=5$)& N & 26.1 &26.8 & 0.7 & -- & -- & -- \\
\bottomrule
\end{tabular}
\label{tab:distill}
\end{table}
\paragraph{AT with Contextless KVs}
We saw that a decoder with contextless keys and values retains performance in non-autoregressive models. Here we use a decoder with contextless keys and values in autoregressive models. The results (Table \ref{tab:contextless_kv}) show that performance is retained even in autoregressive models, regardless of distillation, suggesting further potential of our approach.
\begin{table}[h]
\centering
\caption{Test results (BLEU) from AT with contextless keys and values.}
\vskip 0.1in
\begin{tabular}{l|ccccc}
\hline
AT & \multicolumn{2}{c}{\textbf{en$\rightarrow$de}} & \multicolumn{2}{c}{\textbf{de$\rightarrow$en}}& \multicolumn{1}{c}{\textbf{ro$\rightarrow$en}} \\
Decoder & raw & dist.\ & raw & dist.\ & raw\\
\hline
Contextless & 27.09& 27.86 & 30.91 & 31.46 & 34.25\\
Original & 26.85 & 27.69 & 31.33 & 31.09 & 34.46\\
\hline
\end{tabular}
\label{tab:contextless_kv}
\vskip -0.05in
\end{table}
\paragraph{Easy-First Training}
So far we have trained our models to predict every word given a random subset of the other words.
But this training scheme yields a gap between training and inference, which might harm the model.
We attempt to make training closer to inference by training the DisCo transformer in the easy-first order.
Similarly to the inference, we first predict the easy-first order by estimating $P(Y_n|X)$ for all $n$, and then use that order to determine $Y_{{\tt obs}}^{n}$.\footnote{This training process can be seen as the hard EM algorithm where the easy-first order is a latent variable.}
The overall loss is the sum of the negative log-likelihoods of these two steps.
Seen in Table \ref{tab:training_strategies} are the results on the dev sets of en$\rightarrow$de and ro$\rightarrow$en.
In both directions, this easy-first training does not improve performance, suggesting that randomness helps the model.
Notice also that the average number of iterations in inference decreases (4.03 vs.\ 4.29, 2.94 vs.\ 3.17).
The model appears to get trapped in a sub-optimal solution after fewer iterations due to a lack of exploration.
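For clarity, the following is a minimal, illustrative sketch of the two-step easy-first training loss described above; \texttt{model} is a hypothetical DisCo-like module returning per-position log-probabilities given an observed set, and the per-position loop stands in for what the actual model computes in a single parallel forward pass.
\begin{verbatim}
import torch

def easy_first_training_loss(model, src, tgt):
    # Step 1: predict every word from the source alone; the resulting
    # confidences define the (latent) easy-first order.
    logp0 = model(src, observed=None)                    # (len, vocab)
    tok_logp0 = logp0.gather(-1, tgt.unsqueeze(-1)).squeeze(-1)
    order = torch.argsort(tok_logp0, descending=True)    # easiest first
    loss_step1 = -tok_logp0.mean()
    # Step 2: predict each position conditioned on the easier positions
    # given by that order (this determines Y_obs^n).
    loss_step2 = 0.0
    for rank, n in enumerate(order.tolist()):
        observed = order[:rank]
        logp_n = model(src, observed=observed)[n]
        loss_step2 = loss_step2 - logp_n[tgt[n]]
    # overall loss: sum of the two (per-position averaged) NLL terms
    return loss_step1 + loss_step2 / len(order)
\end{verbatim}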
\begin{table}[h]
\centering
\caption{Dev results from bringing training closer to inference.}
\vskip 0.1in
\begin{tabular}{l|cccc}
\hline
& \multicolumn{2}{c}{\textbf{en$\rightarrow$de}} & \multicolumn{2}{c}{\textbf{ro$\rightarrow$en}} \\
Training Variant & Step\ & BLEU & Step\ & BLEU\\
\hline
Random Sampling & 4.29 & \textbf{25.60} & 3.17 & \textbf{34.97}\\
Easy-First Training & 4.03 & 24.76 & 2.94 & 34.96 \\
\hline
\end{tabular}
\label{tab:training_strategies}
\vskip -0.05in
\end{table}
\begin{table}
\centering
\vskip -0.15in
\caption{Dev results with different decoding strategies.}
\vskip 0.1in
\begin{tabular}{l|cccc}
\toprule
& \multicolumn{2}{c}{\textbf{en$\rightarrow$de}} & \multicolumn{2}{c}{\textbf{ro$\rightarrow$en}} \\
\textbf{Inference Strategy}& Step\ & BLEU& Step\ & BLEU \\
\hline
Left-to-Right Order & 6.80 &21.25 & 4.86 & 33.87\\
Right-to-Left Order & 6.79 &20.75 & 4.67 & 34.38\\
All-But-Itself & 6.90 & 20.72 & 4.80 & 33.35 \\ \hline
Parallel Easy-First & 4.29 & \textbf{25.60}& 3.17 & \textbf{34.97} \\
Mask-Predict &10 & 25.34 & 10 & 34.54 \\
\bottomrule
\end{tabular}
\label{tab:decoding_strategies}
\end{table}
\subsection{Inference}
\label{sec:analysis_inference}
\begin{figure*}[t]
\centering
\includegraphics[width=0.998\textwidth]{example_de-en_t.pdf}
\caption{An example of inference iterations in de$\rightarrow$en from the dev set when max iteration $T$ is 5. (\textit{Pres.}\ stands for \textit{President}). We show how each of the \tyellow{underscored words} were generated in the bottom section. Prediction is conditioned on \hly{highlighted tokens}.}
\label{fig:example_deen}
\end{figure*}
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{length_beam_en-fr.pdf}
\caption{EN$\rightarrow$FR dev results with varying length beam size.}
\label{fig:length_beam}
\end{figure}
\paragraph{Alternative Inference Algorithms}
Here we compare various decoding strategies on the DisCo transformer (Table \ref{tab:decoding_strategies}).
Recall that in the parallel easy-first inference (Sec.\ \ref{sec:easy-first}), we determine the easy-first order by sorting the probabilities from the first iteration and then compute each position's probability conditioned on the \textit{easier} positions from the previous iteration.
We evaluate two alternative orderings: left-to-right and right-to-left.
We see that both of them yield substantially degraded performance.
We also attempt to use even broader context than parallel easy-first by computing the probability at each position based on all other positions (\textit{all-but-itself}, $Y_{obs}^{n,t} = Y_{\neq n}^{t-1}$).
We again see degraded performance, suggesting that cyclic dependency (e.g.,\ $Y_{m}^{t-1} \in Y_{{\tt obs}}^{n,t}$ and $Y_{n}^{t-1} \in Y_{{\tt obs}}^{m, t}$) breaks consistency.
For example, a model can have two output candidates: ``Hong Kong" and ``New York" \cite{Zhang2020POINTERCT}. In this case, we might end up producing ``Hong York" due to this cyclic dependency.
These results suggest that the easy-first ordering we introduced is a simple yet effective approach.
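To make the comparison concrete, the following is a small sketch (a hypothetical helper, not the released code) of how the observed set for position $n$ is chosen under the strategies compared above, given the easy-first order obtained from the first iteration.
\begin{verbatim}
def observed_positions(strategy, n, order, length):
    if strategy == "easy-first":
        # condition on the positions that are easier than n
        return order[:order.index(n)]
    if strategy == "left-to-right":
        return list(range(n))
    if strategy == "right-to-left":
        return list(range(n + 1, length))
    if strategy == "all-but-itself":
        # cyclic dependencies: m conditions on n while n conditions on m
        return [m for m in range(length) if m != n]
    raise ValueError(strategy)

# toy usage: order obtained by sorting first-iteration probabilities
order = [2, 0, 3, 1]                                   # easiest first
print(observed_positions("easy-first", 3, order, 4))   # -> [2, 0]
\end{verbatim}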
\paragraph{Example Translation}
Seen in Fig.\ \ref{fig:example_deen} is a translation example in de$\rightarrow$en when decoding the same DisCo transformer with the mask-predict or parallel easy-first inference.
In both algorithms, iterative refinement resolves structural inconsistency, such as repetition.
Parallel easy-first succeeds in incorporating more context in early stages whereas mask-predict continues to produce inconsistent predictions (``my my activities") until more context is available later, resulting in one additional iteration to land on a consistent output.
\paragraph{Length Beam}
Fig.\ \ref{fig:length_beam} shows performance of the CMLM and DisCo transformer with varying size of length beam.
All cases benefit from multiple candidates with different lengths up to a certain point, but DisCo + Easy-First improves the most.
This may be because parallel easy-first relies on the easy-first order as well as the length, and the length beam provides an opportunity to try multiple orderings.
\begin{figure}[t]
\centering
\includegraphics[width=0.47\textwidth]{length_iter.pdf}
\caption{\# Refinement steps vs.\ target length on the WMT14 en$\rightarrow$de test data.}
\label{fig:length_iter}
\end{figure}
\paragraph{Iterations vs.\ Length}
We saw that parallel easy-first inference substantially reduced the number of required iterations.
We hypothesize that the algorithm effectively adapts the number of iterations based on the difficulty, which is reminiscent of a dynamic halting mechanism \cite{Graves2016AdaptiveCT, Dehghani2019UniversalT}.
To see this, we compare the number of required iterations and the generated sentence length as a proxy for difficulty (Fig.\ \ref{fig:length_iter}).
Similarly to the experiments above, we set the max iteration and length beam to be 10 and 5 respectively.
While the number of required iterations varies to a certain degree, we see that long sentences tend to require more iterations to converge.
\section{Related and Future Work}
\label{sec:related_work}
In addition to the work discussed above, prior and concurrent work on non-autoregressive translation developed ways to mitigate the trade-off between decoding parallelism and performance.
As in this work, several prior and concurrent studies proposed methods to iteratively refine (or insert) output predictions \cite{Mansimov2019AGF, Stern2019InsertionTF, Gu2019InsertionbasedDW, Chan2019KERMITGI, Chan2019AnES, Ghazvininejad2020SemiAutoregressiveTI, Li2020LAVANA, Saharia2020}.
Other approaches include adding a lightweight autoregressive module to parallel decoding \cite{Kaiser2018FastDI,Sun2019Fast,ReorderNAT}, partially decoding autoregressively \cite{Stern2018BlockwisePD,Stern2019InsertionTF}, rescoring output candidates autoregressively (e.g.,\ \citealp{Gu2017NonAutoregressiveNM}), mimicking hidden states of an autoregressive teacher \cite{Li2019HintBasedTF}, training with objectives other than the vanilla negative log-likelihood \cite{Libovick2018EndtoEndNN,Wang2019NonAutoregressiveMT,minBOW, Ghazvininejad2020AlignedCE, Li2020LAVANA}, reordering input sentences \cite{ReorderNAT}, generating with an energy-based inference network \cite{Tu2020ENGINEEI}, training on additional data from an autoregressive model \cite{Zhou2020ImprovingNN}, and modeling with latent variables \cite{Ma2019FlowSeqNC, Shu2019LatentVariableNN}.
While this work adopted iterative decoding methods, our DisCo transformer can be combined with other approaches for efficient training.
For example, \citet{Li2019HintBasedTF} trained two separate non-autoregressive and autoregressive models, but it is possible to train a single DisCo transformer with both autoregressive and random masking and use hidden states from autoregressive masking as a teacher.
We leave integration of the DisCo transformer with more approaches to non-autoregressive translation for future work.
We also note that our DisCo transformer can be used for general-purpose representation learning.
In particular, \citet{Liu2019RoBERTaAR} found that masking different tokens in every epoch outperforms static masking in BERT \cite{devlins2019bert}.
Our DisCo transformer would allow for making a prediction at every position given arbitrary context, providing even more flexibility for large-scale pretraining.
\section{Conclusion}
We presented the DisCo transformer that predicts every word in a sentence conditioned on an arbitrary subset of the other words.
We developed an inference algorithm that takes advantage of this efficiency and further speeds up generation without loss in translation quality.
Our results provide further support for the claim that non-autoregressive translation is a fast, viable alternative to autoregressive translation.
Nonetheless, a discrepancy still remains between autoregressive and non-autoregressive performance when knowledge distillation from a large transformer is applied to both.
We will explore ways to narrow this gap in the future.
\section*{Acknowledgements}
We thank Tim Dettmers, Hao Peng, Mohammad Rasooli, William Chan, and Qinghong Han as well as the anonymous reviewers for their helpful feedback on this work.
\nocite{langley00}
\section{Introduction}
Since the introduction of measure-theoretical entropy for an invariant measure \cite{Kol} and topological entropy \cite{AKM}, the relationship between these two kinds of entropy has gained a lot of attention. By the work of Goodwyn \cite{Gwyn} and Goodman \cite{Gman}, the classical variational principle was completed, namely,
\begin{equation*}
\sup_{\mu}h_{\mu}(\varphi)=h_{\text{top}}(\varphi),
\end{equation*}
where $\varphi$ is a homeomorphism from a compact metric space $X$ to itself, and the supremum is taken over all invariant measures.
A short proof for it was given by Misiurewicz \cite{Misiurewicz}.
For a factor map between two dynamical systems $(X,\varphi)$ and $(Y,\phi)$ the notions of relative topological entropy $h_{\text{top}}(\varphi,X\mid Y)$ and relative measure-theoretical entropy $h_{\mu}(\varphi,X\mid Y)$ for an invariant measure were also introduced \cite{LedWal}. Ledrappier {\it et al.} \cite{LedWal} and Downarowicz {\it et al.} \cite{DS} obtained the relative variational principle
\begin{equation*}
\sup_{\mu}h_{\mu}(\varphi,X\mid Y)=h_{\text{top}}(\varphi,X\mid Y).
\end{equation*}
Random topological entropy was studied by Kifer \cite{Kifer} for independent identically distributed random transformations. For random transformations $T$ and the corresponding skew product transformation $\Theta:\Omega\times X\rightarrow \Omega\times X$, he suggested the following relation between random measure-theoretical entropy and random topological entropy:
\begin{equation*}
\sup\{ h_{\mu}^{(r)}(T): \mu \,\,\text{is}\,\, \Theta \text{-invariant} \}=h_{\text{top}}(T),
\end{equation*}
where $h_{\mu}^{(r)}(T)$ and $h_{\text{top}}(T)$ are the random measure-theoretical entropy and random topological entropy, respectively.
This result was extended by Bogensch{\"u}tz \cite{Bogen} to random transformations acting on one space.
Kifer \cite{Kifer2001} formulated this variational principle in full generality for random bundle transformations.
The entropy concept can be localized by defining entropy pairs or tuples both in measure-theoretical and topological situations \cite{GY}. To study the relationship between the two kinds of entropy pairs or tuples, one needs a local version of the variational principle. Blanchard {\it et al.} \cite{Blanchard1997} showed that for a given topological dynamical system $(X,\varphi)$ and an open cover $\mathscr{U}$ of $X$ there exists an invariant measure $\mu$ with $\inf_{\alpha}h_{\mu}(\varphi,\alpha)\geq h_{\text{top}}(\varphi,\mathscr{U})$, where the infimum is taken over all partitions of $X$ which are finer than $\mathscr{U}$. Huang {\it et al.} \cite{huangye2006} provided a kind of converse statement of this result. To study the question of whether the two quantities are equal or not, Romagnoli \cite{Rom2003} introduced two kinds of measure-theoretical entropy $h_{\mu}^{-}$ and $h_{\mu}^{+}$ for covers. He showed that for a topological dynamical system $(X,\varphi)$ there is an invariant measure $\mu$ with $h_{\mu}^{-}(\varphi, \mathscr{U})=h_{\text{top}}(\varphi,\mathscr{U})$, and consequently obtained the local variational principle for a given open cover, i.e.
\begin{equation*}
\max_{\mu}h_{\mu}^{-}(\varphi, \mathscr{U})=h_{\text{top}}(\varphi,\mathscr{U}).
\end{equation*}
For a factor map between two topological dynamical systems $(X,\varphi)$ and $(Y,\phi)$, Huang {\it et al.} \cite{Huang2006} introduced two notions of measure-theoretical conditional entropy for covers, namely $h_{\mu}^-(\varphi,\mathscr{U}\mid Y)$ and $h_{\mu}^+(\varphi,\mathscr{U}\mid Y)$. They showed that for the factor map and a given open cover $\mathscr{U}$ of $X$, the local relative variational principle holds, i.e.
\begin{equation*}
\max_{\mu}h_{\mu}^{-}(\varphi, \mathscr{U}\mid Y)=h_{\text{top}}(\varphi,\mathscr{U}\mid Y).
\end{equation*}
We remark that the classical variational principle could follow from the local ones or the relative ones by some simple arguments.
Now it is natural to ask whether there exists a local variational principle for random bundle transformations. We will address this question in the current paper.
To study the question we introduce two notions of random measure-theoretical entropy for covers in random dynamical systems, namely $h_{\mu}^{(r)-}(T,\mathcal{U})$ and $h_{\mu}^{(r)+}(T,\mathcal{U})$.
We derive a variational inequality of random entropy for $h_{\mu}^{(r)+}(T,\mathcal{U})$, i.e.
for a given open cover $\mathcal{U}$ of the measurable subset $\mathcal{E}\subset \Omega \times X$, there always exists a $\Theta$-invariant measure $\mu$ such that $h_{\mu}^{(r)+}(T,\mathcal{U})\geq h_{\text{top}}(T,\mathcal{U})$. Moreover, using this variational inequality, we show that for a given open cover $\mathcal{U}$,
\begin{equation*}
\max\{ h_{\mu}^{(r)-}(T,\mathcal{U}): \mu \,\,\text{is}\,\, \Theta \text{-invariant} \}=h_{\text{top}}(T,\mathcal{U}).
\end{equation*}
The classical variational principle for random bundle transformations follows from the local one.
This paper is organized as follows.
In Section \ref{section2}, we recall the notion of random measure-theoretical entropy, introduce the notions of random measure-theoretical entropy and topological entropy for covers, and give the relationship between them.
In Section \ref{section3}, we introduce another notion of random measure-theoretical entropy for covers and prove a variational inequality of random entropy.
In Section \ref{section4}, we give some relations between the two notions of random measure-theoretical entropy for covers and prove the local variational principle for random bundle transformations.
\section{Preliminaries}\label{section2}
The setup consists of a probability space $(\Omega, \mathcal{F},P)$, together with a $P$-preserving transformation $\vartheta$, of a compact metric space $X$ together with the distance function $d$ and the Borel $\sigma$-algebra $\mathcal{B}$, and of a set $\mathcal{E}\subset \Omega\times X$ measurable with respect to the product $\sigma$-algebra $\mathcal{F}\times \mathcal{B}$ and such that the fibers $\mathcal{E}_{\omega}=\{x\in X: (\omega,x)\in \mathcal{E}\}$, $\omega\in \Omega$, are compact. The latter (see \cite{Castaing}) means that the mapping $\omega\rightarrow \mathcal{E}_{\omega}$ is measurable with respect to the Borel $\sigma$-algebra induced by the Hausdorff topology on the space $\mathcal{K}(X)$ of compact subsets of $X$ and that the distance function $d(x,\mathcal{E}_{\omega})$ is measurable in $\omega\in \Omega$ for each $x\in X$. We assume that $\mathcal{F}$ is complete, countably generated, and separates points, and so $(\Omega, \mathcal{F},P)$ is a Lebesgue space.
A continuous (or homeomorphic) bundle
random dynamical system (RDS) $T$ over $(\Omega,\mathcal {F},
P, \vartheta)$ is generated by map $T_{\omega}:
\mathcal{E_{\omega}}\rightarrow \mathcal{E_{\vartheta \omega}}$ with
iterates $T_{\omega}^{n}=T_{\vartheta^{n-1}\omega} \cdots
T_{\vartheta\omega}T_{\omega}$, $ n \geq 1$, and
$T_{\omega}^{0}=id$, so that the map $(\omega, x) \rightarrow
T_{\omega}x$ is measurable and the map $x \rightarrow T_{\omega}x$
is a continuous map (or a homeomorphism, respectively) for $P$-almost all (a.s.) $\omega$. The map
$\Theta : \mathcal{E} \rightarrow \mathcal{E}$ defined by
$\Theta(\omega,x)=$ $(\vartheta\omega, T_{\omega}x)$ is called the skew
product transformation.
A {\it cover} is a finite family of measurable subsets $\{U_i\}_{i=1}^k$ of $\Omega\times X$ such that $\bigcup_{i=1}^kU_i=\Omega\times X$ and the $\omega$-section $U_i(\omega)=\{x\in X:(\omega,x)\in U_i\}$ is Borel for each $i\in \{1,\dots, k\}$. Obviously, $\mathcal{U}(\omega)=\{U_i(\omega)\}_{i=1}^k$ is a Borel cover of $X$ in the usual sense.
A {\it partition} of $\Omega\times X$ is a cover of $\Omega\times X$ whose elements are pairwise disjoint. An {\it open cover} of $\Omega\times X$ is a cover of $\Omega\times X$ whose elements have open $\omega$-sections. Denote the set of partitions by $\mathcal{P}_{\Omega\times X}$, the set of covers by $\mathcal{C}_{\Omega\times X}$, and the set of open covers by $\mathcal{C}_{\Omega\times X}^o$.
Denote by $\mathcal{C}_{\Omega\times X}^{o'}$ the family of $\mathcal{U}\in \mathcal{C}_{\Omega\times X}^{o}$ of the form $\mathcal{U}=\{\Omega\times U_i\}$, where $\{U_i\}$ is an open cover of $X$. Clearly $\mathcal{C}_{\Omega\times X}^{o'}\subset \mathcal{C}_{\Omega\times X}^{o}$.
For the measurable subset $\mathcal{E}\subset \Omega\times X$,
we denote the restriction of $\mathcal{P}_{\Omega\times X}$, $\mathcal{C}_{\Omega\times X}$, $\mathcal{C}_{\Omega\times X}^o$ and $\mathcal{C}_{\Omega\times X}^{o'}$ on $\mathcal{E}$ by $\mathcal{P}_{\mathcal{E}}$, $\mathcal{C}_{\mathcal{E}}$, $\mathcal{C}_{\mathcal{E}}^o$ and $\mathcal{C}_{\mathcal{E}}^{o'}$, respectively. Given two covers $\mathcal{U}$, $\mathcal{V}\in \mathcal{C}_{\Omega\times X}$, $\mathcal{U}$ is said to be {\it finer} than $\mathcal{V}$ (write $\mathcal{U}\succeq \mathcal{V}$) if each element of $\mathcal{U}$ is contained in some element of $\mathcal{V}$. Let $\mathcal{U}\vee\mathcal{V}=\{U\cap V: U\in \mathcal{U}, V\in \mathcal{V}\}$. Given integers $M,$ $N\in \mathbb{N}$ with $M\leq N$ and $\mathcal{U}\in \mathcal{C}_{\Omega\times X}$ or $\mathcal{P}_{\Omega\times X}$, we use the notation $\mathcal{U}_M^N=\bigvee_{n=M}^N\Theta^{-n}\mathcal{U}$.
Given $\mathcal{U}\in \mathcal{C}_{\mathcal{E}}$, Let
\begin{equation*}
N(T,\omega,\mathcal{U},n)=\min\{\# F: F\,\, \text{is a finite subfamily of}\,\, \bigvee_{i=0}^{n-1}(T_{\omega}^i)^{-1}\mathcal{U}(\vartheta^i\omega) \,\, \text{covering}\,\, \mathcal{E}_{\omega} \},
\end{equation*}
where $\# F$ denotes the cardinality of $F$.
Note that $N(T,\omega,\mathcal{U},n)$ is measurable in $\omega$.
Then we can let
\begin{equation}
H(T,\mathcal{U},n)=\int \log N(T,\omega,\mathcal{U},n)dP(\omega).
\end{equation}
Clearly, if there is another cover $\mathcal{V}\succeq \mathcal{U}$ then $H(T,\mathcal{V},n)\geq H(T,\mathcal{U},n)$. In fact, for two covers $\mathcal{U}, \mathcal{V}$ we have $H(T,\mathcal{U}\vee\mathcal{V},n)\leq H(T,\mathcal{U},n)+H(T,\mathcal{V},n)$.
We now proceed by defining the random topological entropy of a cover $\mathcal{U}\in \mathcal{C}_{\mathcal{E}}$ on $\mathcal{E}$. Let $a_n=\log N(T,\omega,\mathcal{U},n)$. It is easy to see that $\{a_n\}$ is a non-negative subadditive sequence, i.e. $a_{n+m}\leq a_n+a_m\circ\vartheta^n$. Then by Kingman's subadditive ergodic theorem one can define the {\it random topological entropy of $\mathcal{U}$ on $\mathcal{E}$} as
\begin{equation*}
h_{\text{top}}(T,\mathcal{U})=\lim_{n\rightarrow\infty}\frac{1}{n}H(T,\mathcal{U},n)=\inf_{n\geq 1}\frac{1}{n}H(T,\mathcal{U},n).
\end{equation*}
Moreover, if $P$ is ergodic then $P$-$a.s.$
\begin{equation}
h_{\text{top}}(T,\mathcal{U})=\lim_{n\rightarrow\infty}\frac{1}{n}\log N(T,\omega,\mathcal{U},n).
\end{equation}
The {\it random topological entropy of $T$ on $\mathcal{E}$} is defined by
\begin{equation}
h_{\text{top}}(T)=\sup_{\mathcal{U}\in \mathcal{C}_{\mathcal{E}}^{o'}}h_{\text{top}}(T,\mathcal{U}).
\end{equation}
Note that the definition of $N(T,\omega,\mathcal{U},n)$
(hence $h_{\text{top}}(T,\mathcal{U})$) above is slightly different from the one of $\pi_T(f)(\omega,\epsilon,n)$ given in \cite{Kifer2001}, which is defined by separated sets. However,
it is easy to see that the random topological entropy $h_{\text{top}}(T)$ defined above is the
same as the random topological pressure for the null function (i.e. the random topological entropy) defined in \cite{Kifer2001}.
If $(\Omega,\mathcal{F},P,\vartheta)$ is a trivial system, the above definition is the standard topological entropy in the deterministic case.
Let $\mathcal {P}_{P}(\mathcal{E}) = \{\mu \in \mathcal
{P}_P(\Omega \times X) : \mu(\mathcal{E})=1\}$, where
$\mathcal {P}_P(\Omega \times X)$ is the space of
probability measures on $\Omega \times X$ with the marginal
$P$ on $\Omega$. Any $\mu \in \mathcal
{P}_P(\mathcal{E})$ on $\mathcal{E}$ can be disintegrated
as $d \mu(\omega, x)=d \mu_{\omega}(x)d P(\omega)$ (See
\cite{Dudley}), where $\mu_{\omega}$ are regular conditional
probabilities with respect to the $\sigma$-algebra $\mathcal
{F}_{\mathcal{E}}$ formed by all sets $(A\times X)\cap \mathcal{E}$
with $A \in \mathcal{F}$. Let $\mathcal {R}=
\{\mathcal{R}_{i}\}$ be a finite measurable partition of
$\mathcal{E}$, and denote $\mathcal {R}(\omega)=
\{\mathcal{R}_{i}(\omega)\}$, where $\mathcal{R}_{i}(\omega)=\{x\in
\mathcal{E}_{\omega}: (\omega,x)\in \mathcal{R}_{i}\}$ is a
partition of $\mathcal{E}_{\omega}$. The conditional entropy of $\mathcal{R}$ given the $\sigma$-algebra $\mathcal{F}_{\mathcal{E}}$ is defined by
\begin{equation}\label{kifer2.1}
H_{\mu}(\mathcal{R}\mid \mathcal{F}_{\mathcal{E}})=-\int\sum_{i}\mu(\mathcal{R}_i\mid \mathcal{F}_{\mathcal{E}})\log \mu(\mathcal{R}_i\mid \mathcal{F}_{\mathcal{E}})d\mu=\int H_{\mu_{\omega}}(\mathcal{R}(\omega))dP(\omega),
\end{equation}
where $H_{\mu_{\omega}}(\mathcal{A})$ denotes the usual entropy of a partition $\mathcal{A}$.
Let $\mathcal
{I}_P (\mathcal{E} )$ be the set of $\Theta$-invariant
measures $\mu \in\mathcal {P}_P(\mathcal{E})$. If $\vartheta$ is invertible, then $\mu$ is
$\Theta$-invariant if and only if the disintegrations $\mu_{\omega}$
of $\mu$ satisfy $T_{\omega}\mu_{\omega}=\mu_{\vartheta\omega}\,
P$-a.s. \cite{Arnold}. The {\it random measure-theoretical entropy $h_{\mu}^{(r)}(T)$ with respect to $\mu\in \mathcal{I}_P(\mathcal{E})$ } is defined by the formula
\begin{equation}\label{kifer2.2}
h_{\mu}^{(r)}(T)=\sup_{\mathcal{Q}}h_{\mu}^{(r)}(T,\mathcal{Q}), \,\,\text{where}\,\,
h_{\mu}^{(r)}(T,\mathcal{Q})=\lim_{n\rightarrow\infty}\frac{1}{n}
H_{\mu}(\bigvee_{i=0}^{n-1}(\Theta^i)^{-1}\mathcal{Q}\mid \mathcal{F}_{\mathcal{E}}),
\end{equation}
where the supremum is taken over all finite or countable measurable partitions $\mathcal{Q}=\{\mathcal{Q}_i\}$ of $\mathcal{E}$ with the finite conditional entropy $H_{\mu}(\mathcal{Q}\mid \mathcal{F}_{\mathcal{E}})<\infty$, and the above limit exists in view of subadditivity of conditional entropy (cf. Kifer \cite[Theorem
II.1.1]{Kifer}). From equality \eqref{kifer2.1}, it is not hard to see that $h_{\mu}^{(r)}(T,\mathcal{Q})$ has the following fiber expression:
\begin{equation}
h_{\mu}^{(r)}(T,\mathcal{Q})=\lim_{n\rightarrow\infty}\frac{1}{n}\int H_{\mu_{\omega}}(\bigvee_{i=0}^{n-1}(T^i_{\omega})^{-1}\mathcal{Q}(\vartheta^i\omega))dP(\omega).
\end{equation}
Moreover, the resulting entropy remains the same by taking
the supremum only over partitions $\mathcal{Q}$ of $\mathcal{E}$
into sets $Q_i$ of the form $Q_i=(\Omega\times P_i)\cap\mathcal{E}$,
where $\mathcal{P}=\{P_i\}$ is a partition of $X$ into measurable
sets, so that $Q_i(\omega)=P_i\cap\mathcal{E}_{\omega}$ (See
\cite{Bogen,Bogenthesis,Kifer} for details).
As in the topological case, when $(\Omega,\mathcal{F},P,\vartheta)$ is a trivial system, the above notion is the standard metric entropy of $T$ with respect to $\mu$ in the deterministic setting. For the classical theory of measure-theoretical entropy see \cite{Par,Walters}, for the classical theory of topological entropy see \cite{DGS, Walters}, and for the entropy theory of random dynamical systems we refer to \cite{Arnold,Bogen,Kifer,LQ} and the references given there. The relation between the random measure-theoretical and topological entropy is stated as follows.
\begin{theorem}[Bogensch{\"u}tz \cite{Bogen} and Kifer \cite{Kifer2001}]\label{theorem2.1}
Let $T$ be a continuous bundle RDS on $\mathcal{E}$ over $\vartheta$, where $\vartheta $ is invertible. Then $h_{\text{top}}(T)=\sup_{\mu\in \mathcal{I}_P(\mathcal{E})} h_{\mu}^{(r)}(T)$.
\end{theorem}
\begin{remark}
The classical variational principle follows from Theorem \ref{theorem2.1} by taking $(\Omega,\mathcal{F},P,\vartheta)$ to be the trivial system.
\end{remark}
Following the ideas of Romagnoli \cite{Rom2003} and Huang {\it et al.} \cite{Huang2006}, we define a new notion of random measure-theoretical entropy for covers, which extends definition \eqref{kifer2.2} to covers. It allows us to give a local version (for a given open cover $\mathcal{U}$) of Theorem \ref{theorem2.1}.
Let $T$ be a homeomorphic bundle RDS on $\mathcal{E}$ over $\vartheta$ and $\mu\in \mathcal{P}_P(\mathcal{E})$.
For $\mathcal{U}\in \mathcal{C}_{\mathcal{E}}$, let
\begin{equation}
H_{\mu}(\mathcal{U}\mid \mathcal{F}_{\mathcal{E}})=\inf_{\mathcal{R}\succeq \mathcal{U}, \mathcal{R}\in \mathcal{P}_{\mathcal{E}}}H_{\mu}(\mathcal{R}\mid \mathcal{F}_{\mathcal{E}}).
\end{equation}
It is not hard to prove that many properties of the conditional entropy $H_{\mu}(\mathcal{R}\mid \mathcal{F}_{\mathcal{E}})$ extend from partitions to covers, i.e.\ to $H_{\mu}(\mathcal{U}\mid \mathcal{F}_{\mathcal{E}})$; for details see \cite{Rom2003}.
We need the following basic result to prove Lemma \ref{lemma2.2}. For $\{p_k\}_{k=1}^K\subset(0,1)$ with $\sum_{k=1}^Kp_k\leq 1$, as usual we define $H(p_1,\dots,p_K)=-\sum_{k=1}^Kp_k\log p_k$.
\begin{lemma}
\label{lemma7}
Fix $K\geq 2$. Suppose that $p_1,\dots,p_K\in (0,1)$ with $p_1\leq\dots\leq p_K$ and $\sum_{k=1}^Kp_k\leq 1$. Let $0< \delta_1<p_1$ and suppose that for each $k=2,\dots,K$, $\delta_k\in[0,1-p_k)$ and $\sum_{k=2}^K\delta_k=\delta_1$. Then
$$
H(p_1,\dots,p_K)>H(p_1-\delta_1,p_2+\delta_2,\dots,p_K+\delta_K).
$$
\end{lemma}
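As an illustrative numerical check (not part of the proof), take $K=2$, $p_1=0.3$, $p_2=0.5$ and $\delta_1=\delta_2=0.1$; then, with natural logarithms, $H(0.3,0.5)\approx 0.708>0.628\approx H(0.2,0.6)$, in accordance with the lemma.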
\begin{lemma}
\label{lemma2.2}
Let $T$ be a homeomorphic bundle RDS on $\mathcal{E}$ over $\vartheta$ and $\mu\in \mathcal{P}_P(\mathcal{E})$. If $\mathcal{U}$, $\mathcal{V}\in \mathcal{C}_{\mathcal{E}}$, then
\renewcommand{\theenumi}{(\arabic{enumi})}
\begin{enumerate}
\item $0\leq H_{\mu}(\mathcal{U}\mid \mathcal{F}_{\mathcal{E}})\leq \log N(\mathcal{U})$, where $N(\mathcal{U})=\min\{\#\mathcal{F}: \mathcal{F}\subset\mathcal{U}, \,\bigcup_{F\in \mathcal{F}}F\supset \mathcal{E}\}$;
\item If $\mathcal{U}\succeq \mathcal{V}$, then $H_{\mu}(\mathcal{U}\mid \mathcal{F}_{\mathcal{E}})\geq H_{\mu}(\mathcal{V}\mid \mathcal{F}_{\mathcal{E}})$;
\item $H_{\mu}(\mathcal{U}\vee \mathcal{V}\mid \mathcal{F}_{\mathcal{E}})\leq H_{\mu}(\mathcal{U}\mid \mathcal{F}_{\mathcal{E}})+H_{\mu}(\mathcal{V}\mid \mathcal{F}_{\mathcal{E}})$;\label{lemma2.2p3}
\item If $\vartheta$ is invertible, then $H_{\mu}(\Theta^{-1}\mathcal{U}\mid \mathcal{F}_{\mathcal{E}} )=H_{\Theta\mu}(\mathcal{U}\mid \mathcal{F}_{\mathcal{E}} )$.\label{lemma2.2p4}
\end{enumerate}
\end{lemma}
\begin{proof}
Parts (1), (2) and (3) are obvious.
We now prove part (4). We follow the arguments applied in \cite{Rom2003}.
Since
\begin{align*}
H_{\mu}(\Theta^{-1}\mathcal{U}\mid \mathcal{F}_{\mathcal{E}})
&
\leq \inf_{\mathcal{R}\in \mathcal{P}_{\mathcal{E}}, \mathcal{R}\succeq \mathcal{U}\cap \mathcal{E}}H_{\mu}(\Theta^{-1}\mathcal{R}\mid \mathcal{F}_{\mathcal{E}})=\inf_{\mathcal{R}\in \mathcal{P}_{\mathcal{E}}, \mathcal{R}\succeq \mathcal{U}\cap \mathcal{E}}H_{\Theta\mu}(\mathcal{R}\mid \mathcal{F}_{\mathcal{E}}),
\end{align*}
then $H_{\mu}(\Theta^{-1}\mathcal{U}\mid \mathcal{F}_{\mathcal{E}})\leq H_{\Theta\mu}(\mathcal{U}\mid \mathcal{F}_{\mathcal{E}} )$.
We now prove the opposite inequality.
Let $\mathcal{U}\in \mathcal{C}_{\mathcal{E}}$. For each $\mathcal{R}\in \mathcal{P}_{\mathcal{E}}$ with $\mathcal{R}\succeq \Theta^{-1}\mathcal{U}$, we recursively construct a $\mathcal{Q}\succeq \mathcal{U}$ such that $H_{\mu}(\mathcal{R}\mid \mathcal{F}_{\mathcal{E}})\geq H_{\Theta\mu}(\mathcal{Q}\mid \mathcal{F}_{\mathcal{E}} )$.
Let $\mathcal{U}=\{U_1,\dots,U_M\}$ and $\mathcal{R}=\{R_1,\dots,R_K\}\succeq \Theta^{-1}\mathcal{U}$
with $\emptyset \neq R_k\subseteq \Theta^{-1}U_{j_k}$, $j_k\in\{1,\dots,M\}$, for each $k\in\{1,\dots,K\}$. We may assume that $j_k\neq j_l$ if $k\neq l$, since the partition obtained by replacing $R_k$ and $R_l$ with $R_k\cup R_l$ is coarser than $\mathcal{R}$ and still finer than $\Theta^{-1}\mathcal{U}$. Notice that $\Theta^{-1}U_{j_1}\backslash \bigcup_{l=2}^K\Theta^{-1}U_{j_l}\subseteq R_1$. For each $k=1,\dots,K$ we define $p_k=\mu(R_k\mid \mathcal{F}_{\mathcal{E}})$ and $p_k(\omega)=\mu_{\omega}(R_k(\omega))$. Then
$$\sum_{k=1}^Kp_k=\sum_{k=1}^K \mu(R_k\mid \mathcal{F}_{\mathcal{E}})=1 \quad\text{and}\quad \sum_{k=1}^Kp_k(\omega)=\sum_{k=1}^K\mu_{\omega}(R_k(\omega))=1,\,\, P\text{-a.s.}\,\, \omega,$$
and
$
H_{\mu_{\omega}}(\mathcal{R}(\omega))= H(p_1(\omega ),\dots, p_K(\omega)) \,\, P\text{-a.s.}\,\, \omega.
$
By exchanging indices if necessary we assume $p_1\leq p_2\leq \dots\leq p_K$. Let us define
\begin{align*}
\delta_1&=\mu(R_1\mid\mathcal{F}_{\mathcal{E}})-\Theta\mu(U_{j_1}\backslash\bigcup_{l=2}^K U_{j_l}\mid \mathcal{F}_{\mathcal{E}})\\
&=\mu(R_1\mid\mathcal{F}_{\mathcal{E}})-\mu(\Theta^{-1}(U_{j_1}\backslash\bigcup_{l=2}^K U_{j_l})\mid \Theta^{-1}(\mathcal{F}_{\mathcal{E}}))
\end{align*}
Then for $P$-a.s. $\omega$,
\begin{equation*}
\delta_1(\omega)=\mu_{\omega}(R_1(\omega))-T_{\omega}\mu_{\omega}(U_{j_1}(\vartheta\omega)\backslash \bigcup_{l=2}^KU_{j_l}(\vartheta\omega))\geq 0.
\end{equation*}
Define $B^1_1=U_{j_1}\backslash \bigcup_{l=2}^KU_{j_l}$ and $R_2^1=R_2\cup (\Theta^{-1}U_{j_2}\cap (R_1\backslash \Theta^{-1}B_1^1))$. For $k=3,\dots,K$, let
\begin{equation*}
R_k^1=R_k\cup\big( \Theta^{-1}U_{j_k}\cap (R_1\backslash \Theta^{-1}B_1^1)\cap (\bigcup_{l=2}^{k-1}R_l^1)^c \big).
\end{equation*}
Then for each $\omega\in\Omega$,
$B^1_1(\vartheta\omega)=U_{j_1}(\vartheta\omega)\backslash \bigcup_{l=2}^KU_{j_l}(\vartheta\omega)$ and $R_2^1(\omega)=R_2(\omega)\cup (T_{\omega}^{-1}U_{j_2}(\vartheta\omega)\cap (R_1(\omega)\backslash T_{\omega}^{-1}B_1^1(\vartheta\omega)))$. For $k=3,\dots,K$,
\begin{equation*}
R_k^1(\omega)=R_k(\omega)\cup\big( T_{\omega}^{-1}U_{j_k}(\vartheta\omega)\cap (R_1(\omega)\backslash T_{\omega}^{-1}B_1^1(\vartheta\omega))\cap (\bigcup_{l=2}^{k-1}R_l^1(\omega))^c \big),
\end{equation*}
where $(\bigcup_{l=2}^{k-1}R_l^1(\omega))^c=\mathcal{E}_{\omega}\backslash \bigcup_{l=2}^{k-1}R_l^1(\omega)$.
Define $\mathcal{R}_1=\{\Theta^{-1}B_1^1,R_2^1,\dots,R_K^1\}$. It is clear that $\mathcal{R}_1\succeq \Theta^{-1}\mathcal{U}$. If $\delta_1=0$ then for $P$-a.s. $\omega\in\Omega$, $H_{\mu_{\omega}}(\mathcal{R}(\omega))=H_{\mu_{\omega}}(\mathcal{R}_1(\omega))$, and it follows that $H_{\mu}(\mathcal{R}\mid \mathcal{F}_{\mathcal{E}})=H_{\mu}(\mathcal{R}_1\mid \mathcal{F}_{\mathcal{E}})$. If $\delta_1>0$, then for $P$-a.s. $\omega\in\Omega$, $\delta_1(\omega)>0$. Using Lemma \ref{lemma7} with $(p_1(\omega),\dots,p_K(\omega))$ and $\delta_k(\omega)=\mu_{\omega}(R_k^1(\omega))-\mu_{\omega}(R_k(\omega))$ for every $k\in\{2,\dots,K\}$, we have $H_{\mu_{\omega}}(\mathcal{R}(\omega))\geq H_{\mu_{\omega}}(\mathcal{R}_1(\omega))$, $P$-a.s. $\omega\in \Omega$. Then $H_{\mu}(\mathcal{R}\mid \mathcal{F}_{\mathcal{E}})\geq H_{\mu}(\mathcal{R}_1\mid \mathcal{F}_{\mathcal{E}})$.
Inductively, for each $n=2,\dots,K$, we construct
$$
\mathcal{R}_n=\{\Theta^{-1}B_1^n,\dots, \Theta^{-1}B_n^n,R_{n+1}^n,\dots,R_K^n\}\succeq \Theta^{-1}\mathcal{U},
$$
which satisfies $H_{\mu}(\mathcal{R}_{n}\mid \mathcal{F}_{\mathcal{E}})\leq H_{\mu}(\mathcal{R}_{n-1}\mid \mathcal{F}_{\mathcal{E}})$. Let us give the construction of $\mathcal{R}_{n+1}$ given $\mathcal{R}_n$.
For each $k\in \{1,\dots,n\}$ define $B_k^{n+1}=B_k^n$. Let $B_{n+1}^{n+1}=U_{j_{n+1}}\backslash \bigcup_{l=n+2}^KU_{j_l}$ and $R_{n+2}^{n+1}=R_{n+2}^n\cup (\Theta^{-1}U_{j_{n+2}}\cap (R_{n+1}^n\backslash \Theta^{-1}B_{n+1}^{n+1}))$. For each $k\in \{n+3,\dots,K\}$, define
$$
R_k^{n+1}=R_k^n\cup \big(\Theta^{-1}U_{j_k}\cap (R_{n+1}^n\backslash \Theta^{-1}B_{n+1}^{n+1})\cap (\bigcup_{l=2}^{k-1}R_l^{n+1})^c \big).
$$
As before, for each $k\in \{1,\dots,K-n\}$ define $p_k=\mu(R_{n+k}^n\mid\mathcal{F}_{\mathcal{E}})$ and $\delta_k=\mu(R_{n+k}^{n+1}\mid\mathcal{F}_{\mathcal{E}})-\Theta\mu(R_{n+k}^n\mid\mathcal{F}_{\mathcal{E}})$. Let $\delta_1=\mu(R_{n+1}^n\mid\mathcal{F}_{\mathcal{E}})
-\Theta\mu(B_{n+1}^{n+1}\mid\mathcal{F}_{\mathcal{E}})$. By exchanging indices if necessary we assume that $p_1\leq \dots\leq p_{K-n}$. Using Lemma \ref{lemma7}, we obtain that $H_{\mu}(\mathcal{R}_{n+1}\mid\mathcal{F}_{\mathcal{E}})\leq H_{\mu}(\mathcal{R}_n\mid\mathcal{F}_{\mathcal{E}})$.
We have that $H_{\mu}(\mathcal{R}\mid\mathcal{F}_{\mathcal{E}})\geq H_{\mu}(\mathcal{R}_K\mid\mathcal{F}_{\mathcal{E}})$ and $\mathcal{R}_K=\Theta^{-1}\mathcal{Q}$ with $\mathcal{Q}\succeq\mathcal{U}$. Then
\begin{align*}
H_{\mu}(\Theta^{-1}\mathcal{U}\mid\mathcal{F}_{\mathcal{E}})&=\inf_{\mathcal{R}\succeq \Theta^{-1}\mathcal{U}}H_{\mu}(\mathcal{R}\mid\mathcal{F}_{\mathcal{E}})\\
&\geq \inf_{\mathcal{Q}\succeq \mathcal{U}}H_{\mu}(\Theta^{-1}\mathcal{Q}\mid\mathcal{F}_{\mathcal{E}})\\
&=\inf_{\mathcal{Q}\succeq \mathcal{U}}H_{\Theta\mu}(\mathcal{Q}\mid\mathcal{F}_{\mathcal{E}}) \quad (\text{since }\,\, \vartheta\,\, \text{is invertible})\\
&=H_{\Theta\mu}(\mathcal{U}\mid\mathcal{F}_{\mathcal{E}}).
\end{align*}
This completes the proof of Lemma \ref{lemma2.2}.
\end{proof}
Let $\mu\in \mathcal{I}_P(\mathcal{E})$. It follows from parts \ref{lemma2.2p3} and \ref{lemma2.2p4} of Lemma \ref{lemma2.2} that $H_{\mu}(\mathcal{U}_0^{n-1}\mid \mathcal{F}_{\mathcal{E}})$ is a sub-additive function of $n\in \mathbb{N}$.
Let $\vartheta$ be invertible. We may define the $\mu^{-}$ random measure-theoretical entropy of $\mathcal{U}$ with respect to $\mathcal{F}_{\mathcal{E}}$ as
\begin{equation*}
h_{\mu}^{(r)-}(T,\mathcal{U})=\lim_{n\rightarrow \infty}\frac{1}{n}H_{\mu}(\mathcal{U}_0^{n-1}\mid \mathcal{F}_{\mathcal{E}})=\inf_{n\geq 1}\frac{1}{n}H_{\mu}(\mathcal{U}_0^{n-1}\mid \mathcal{F}_{\mathcal{E}}).
\end{equation*}
Since $h_{\mu}^{(r)-}(T,\mathcal{U})=h_{\mu}^{(r)}(T,\mathcal{U})$ for each partition $\mathcal{U}\in \mathcal{P}_{\mathcal{E}}$, we have $h_{\mu}^{(r)}(T)=\sup_{\mathcal{U}\in \mathcal{C}_{\mathcal{E}}}h_{\mu}^{(r)-}(T,\mathcal{U})$.
The following lemma gives some stronger results.
\begin{lemma}
\label{lemma2.3}
Let $T$ be a homeomorphic bundle RDS on $\mathcal{E}$ over $\vartheta$ and $\mu \in \mathcal{I}_P(\mathcal{E})$, where $\vartheta $ is invertible. Then
\renewcommand{\theenumi}{(\arabic{enumi})}
\begin{enumerate}
\item $h_{\mu}^{(r)}(T)=\sup_{\mathcal{U}\in \mathcal{C}_{\mathcal{E}}^o}h_{\mu}^{(r)-}(T,\mathcal{U})$;
\item $h_{\mu}^{(r)-}(T,\mathcal{U})\leq h_{\text{top}}(T,\mathcal{U})$ for each $\mathcal{U}\in \mathcal{C}_{\mathcal{E}}$.
\end{enumerate}
\end{lemma}
\begin{proof}
We follow the ideas of Huang {\it et al.} \cite{Huang2006}.
(1) For $\mathcal{U}\in \mathcal{C}_{\mathcal{E}}^o$, let $\mathcal{R}$ be the partition generated by $\mathcal{U}$. Then $\mathcal{R}\succeq \mathcal{U}$, and hence $h_{\mu}^{(r)}(T)\geq h_{\mu}^{(r)}(T,\mathcal{R})\geq h_{\mu}^{(r)-}(T,\mathcal{U})$. This implies that $h_{\mu}^{(r)}(T)\geq \sup_{\mathcal{U}\in \mathcal{C}_{\mathcal{E}}^o}h_{\mu}^{(r)-}(T,\mathcal{U})$.
Conversely, for a partition $\mathcal{R}=\{R_i\}_{i=1}^k\in \mathcal{P}_{\mathcal{E}}$
with $R_i=(\Omega \times P_i)\cap \mathcal{E}$, where $\mathcal{P}=\{P_i\}$ is a partition of $X$ into measurable sets so that $R_i(\omega)=P_i\cap \mathcal{E}_{\omega}$, and for $\epsilon >0$, we have the following claim.
\begin{claim}
There exists an open cover $\mathcal{U}\in \mathcal{C}_{\mathcal{E}}^o$ with $k$ elements such that for any finite measurable partition $\mathcal{Q}=\{Q_i\}\in \mathcal{P}_{\mathcal{E}}$ with $Q_i=(\Omega \times C_i)\cap \mathcal{E}$ and $Q_i(\omega)=C_i\cap \mathcal{E}_{\omega}$, where $\{C_i\}$ is a measurable partition of $X$, which is finer than $\mathcal{U}$ as a cover, we have $H_{\mu}(\mathcal{R}\mid \mathcal{Q}\vee \mathcal{F}_{\mathcal{E}})<\epsilon$.
\end{claim}
\begin{proof}[Proof the Claim]
Let $\mathcal{P}=\{P_i\}_{i=1}^k$ be a finite partition of $ X$. Denote by $\mathcal{P}(\omega)=\{P_i(\omega)\}_{i=1}^k$, $P_i(\omega)=P_i\cap \mathcal{E}_{\omega}$, $1\leq i\leq k$, the corresponding partition of $\mathcal{E}_{\omega}$. It is well known that there exists $\delta(\omega)>0$ such that if $\beta=\{B_i\}_{i=1}^k$ is a measurable partition of $X$ and $\sum_{i=1}^k\mu_{\omega}(P_i\bigtriangleup B_i)<\delta(\omega)$ then $H_{\mu_{\omega}}(\mathcal{P}\mid \beta )\leq \epsilon$ (See \cite{Walters}). Since $\mu_{\omega}$ is regular, we can find compact subsets $Q_i\subset P_i$ with
$$
\mu(\Omega\times P_i\backslash \Omega \times Q_i)=\int \mu_{\omega}(P_i(\omega)\backslash Q_i(\omega))dP(\omega)<\delta/2k^2, i=1, \dots, k,
$$
where $\delta=\int\delta(\omega)dP(\omega)$.
Let $Q_0=X\backslash \bigcup_{i=1}^k Q_i$. Then $\mu(\Omega\times Q_0)<\delta/2k$. Let $U_i=\Omega \times B_i$, where $B_i=Q_0\cup Q_i$, $i=1,\dots, k$, is open in $X$. Then $\mathcal{U}=\{U_i\}_{i=1}^k\cap \mathcal{E}\in \mathcal{C}_{\mathcal{E}}^o$.
For any partition $\mathcal{S}\succeq \mathcal{U}$, $\mathcal{S}=\{S_i\}\in \mathcal{P}_{\mathcal{E}}$ with $S_i=(\Omega\times C_i)\cap \mathcal{E}$, where $\mathcal{C}=\{C_i\}$ is a partition of $X$, we can find a partition $\mathcal{S}'=\{S'_i\}_{i=1}^k$ satisfying that $C'_i\subset B_i$, $S'_i\subset U_i$, $i=1,\dots, k$ and $\mathcal{S}\succeq \mathcal{S}'$, where $S_i'=(\Omega\times C_i')\cap\mathcal{E}$. Hence $H_{\mu}(\mathcal{R}\mid \mathcal{S}\vee \mathcal{F}_{\mathcal{E}})\leq H_{\mu}(\mathcal{R}\mid \mathcal{S}'\vee \mathcal{F}_{\mathcal{E}})$. Note that $B_i\supset C'_i\supset X\backslash \bigcup_{j\neq i}B_j=Q_i$, and thus $U_i\supset S'_i\supset (\Omega\times X)\backslash \bigcup_{j\neq i}(\Omega \times B_i)=\Omega\times Q_i$. One has
$$
\mu_{\omega}(C'_i\bigtriangleup P_i)\leq \mu_{\omega}(P_i\backslash Q_i)+\mu_{\omega}(Q_0)\leq \delta(\omega)/2k+\delta(\omega)/2k=\delta(\omega)/k.
$$
Hence $\sum_{i=1}^k\mu_{\omega}(C'_i\bigtriangleup P_i)<\delta(\omega)$ and $H_{\mu_{\omega}}(\mathcal{R}(\omega)\mid \mathcal{S}'(\omega))\leq \epsilon$. Then
\begin{align*}
H_{\mu}(\mathcal{R}\mid \mathcal{S}'\vee \mathcal{F}_{\mathcal{E}})
&=H_{\mu}(\mathcal{R}\vee \mathcal{S}'\mid \mathcal{F}_{\mathcal{E}})-H_{\mu}(\mathcal{S}'\mid \mathcal{F}_{\mathcal{E}})\\
&=\int H_{\mu_{\omega}}(\mathcal{R}\vee\mathcal{S}')(\omega)dP(\omega)-\int H_{\mu_{\omega}}(\mathcal{S}'(\omega))dP(\omega)\\
&=\int (H_{\mu_{\omega}}(\mathcal{R}\vee\mathcal{S}')(\omega)- H_{\mu_{\omega}}(\mathcal{S}'(\omega)))dP(\omega)\\
&=\int H_{\mu_{\omega}}(\mathcal{R}(\omega)\mid \mathcal{S}'(\omega))dP(\omega)\leq \epsilon.
\end{align*}
Thus $H_{\mu}(\mathcal{R}\mid \mathcal{S}\vee \mathcal{F}_{\mathcal{E}})<\epsilon$. This ends the proof of the claim.
\end{proof}
Now for $n\in \mathbb{N}$ and a finite measurable partition $\mathcal{Q}_n\succeq \mathcal{U}_{0}^{n-1}$, since $\Theta^i\mathcal{Q}_n\succeq \mathcal{U}$, $0\leq i\leq n-1$, one has
\begin{align*}
H_{\mu}(\mathcal{R}_0^{n-1}\mid \mathcal{F}_{\mathcal{E}})
&\leq H_{\mu}(\mathcal{Q}_n\mid \mathcal{F}_{\mathcal{E}})+H_{\mu}(\mathcal{R}_0^{n-1}\mid \mathcal{Q}_n\vee \mathcal{F}_{\mathcal{E}})\\
& \leq H_{\mu}(\mathcal{Q}_n\mid \mathcal{F}_{\mathcal{E}})+\sum_{i=0}^{n-1}H_{\mu}(\mathcal{R}\mid \Theta^i\mathcal{Q}_n\vee \mathcal{F}_{\mathcal{E}})\\
& \leq H_{\mu}(\mathcal{Q}_n\mid \mathcal{F}_{\mathcal{E}})+n\epsilon.
\end{align*}
Hence
\begin{align*}
h_{\mu}^{(r)}(T,\mathcal{R})
&=\lim_{n\rightarrow \infty}\frac{1}{n}H_{\mu}(\mathcal{R}_0^{n-1}\mid \mathcal{F}_{\mathcal{E}})\\
&\leq \lim_{n\rightarrow \infty}\frac{1}{n}H_{\mu}(\mathcal{U}_0^{n-1}\mid \mathcal{F}_{\mathcal{E}})+\epsilon\\
&=h_{\mu}^{(r)-}(T,\mathcal{U})+\epsilon
\leq \sup_{\mathcal{U}\in \mathcal{C}^o_{\mathcal{E}}}h_{\mu}^{(r)-}(T,\mathcal{U})+\epsilon.
\end{align*}
Since $\mathcal{R}$ and $\epsilon$ are arbitrary, one has $h_{\mu}^{(r)}(T)\leq \sup_{\mathcal{U}\in \mathcal{C}^o_{\mathcal{E}}}h_{\mu}^{(r)-}(T,\mathcal{U})$. This ends the proof part (1).
(2) It is enough to show that for each $\mathcal{U}=\{U_i\}_{i=1}^k\in \mathcal{C}_{\mathcal{E}}$ there exists $\mathcal{R}\in \mathcal{P}_{\mathcal{E}}$ finer than $\mathcal{U}$ such that $H_{\mu}(\mathcal{R}\mid \mathcal{F}_{\mathcal{E}})\leq H(T,\mathcal{U},1)$, which implies $H_{\mu}(\mathcal{U}\mid \mathcal{F}_{\mathcal{E}})\leq H(T,\mathcal{U},1)$; applying this with $\mathcal{U}$ replaced by $\mathcal{U}_0^{n-1}$, noting that $H(T,\mathcal{U}_0^{n-1},1)=H(T,\mathcal{U},n)$, dividing by $n$ and letting $n\rightarrow\infty$ then yields $h_{\mu}^{(r)-}(T,\mathcal{U})\leq h_{\text{top}}(T,\mathcal{U})$.
Let $d\mu(\omega,x)=d\mu_{\omega}(x)dP(\omega)$ $P$-a.s. be the disintegration of $\mu$. For each $\omega\in \Omega$, there exists $I_{\omega}\subset \{1,\dots,k\}$ with cardinality $N(T,\omega,\mathcal{U},1)$ such that $\bigcup_{i\in I_{\omega}}U_i(\omega)\supset \mathcal{E}_{\omega}$. Since $\mathcal{U}$ is finite, we can find $\omega_1,\dots,\omega_s\in \Omega$ such that for each $\omega\in\Omega$, $I_{\omega}=I_{\omega_i}$ for some $i\in \{1,\dots,s\}$. For $i=1,\dots,s$, define $D_i=\{\omega\in \Omega: \mu_{\omega}(\bigcup_{j\in I_{\omega_i}}U_j(\omega))=1\}$. Then $D_i$ is measurable for $i=1,\dots,s$ and $P(\bigcup_{i=1}^sD_i)=1$.
Fix $i\in\{1,\dots,s\}$. Assume that $I_{\omega_i}=\{k_1<\cdots<k_{t_i}\}$, where $t_i=N(T,\omega_i,\mathcal{U},1)$. Take $\mathcal{R}^i=\{W^i_1,\dots, W_{t_i}^i\}$, where $W_1^i=U_{k_1}$, $W^i_2=U_{k_2}\backslash U_{k_1}$, $\cdots$, $W_{t_i}^i=U_{k_{t_i}}\backslash \bigcup_{j=1}^{t_i-1}U_{k_j}$. For any $\omega\in D_i$, since $\mu_{\omega}(\bigcup_{j=1}^{t_i}W_j^i(\omega))=\mu_{\omega}(\bigcup_{j\in I_{\omega_i}}U_j(\omega))=1$, $\mathcal{R}^i(\omega)$ can be considered as a finite partition of $\mathcal{E}_{\omega}$ (mod $\mu_{\omega}$) and
$$
H_{\mu_{\omega}}(\mathcal{R}^i(\omega))\leq \log N(T,\omega_i,\mathcal{U},1).
$$
Let $C_1=D_1$, $C_i=D_i\backslash \bigcup_{j=1}^{i-1}D_j$, $i=1,\dots,s$ and $A=\mathcal{E}\backslash (\bigcup_{i=1}^s(E_i\cap \bigcup_{j=1}^{t_i}W_j^i))$, where $E_i=\{(\omega,x)\in \mathcal{E}: \omega\in C_i, x\in \mathcal{E}_{\omega}\}$. Set $A_l=A\cap (U_l\backslash \bigcup_{j=1}^{l-1}U_j)$, $l=1,\dots, k$. Then put
$$
\mathcal{R}=\{E_1\cap W_1^1, \dots, E_1\cap W_{t_1}^1, \dots, E_s\cap W_1^s, E_s\cap W_{t_s}^s, A_1\cap\mathcal{E}, \dots, A_k\cap \mathcal{E}\}.
$$
Clearly, $\mathcal{R}$ is a finite measurable partition of $\mathcal{E}$ finer than $\mathcal{U}$. Now for $\omega\in C_i$, $i=1, \dots, s$, we have
$H_{\mu_{\omega}}(\mathcal{R}(\omega))=H_{\mu_{\omega}}(\mathcal{R}^i(\omega))$. Hence
\begin{align*}
H_{\mu}(&\mathcal{R}\mid \mathcal{F}_{\mathcal{E}})=\int H_{\mu_{\omega}}(\mathcal{R}(\omega))dP(\omega)\\
&=\int_{\bigcup_{i=1}^sC_i}H_{\mu_{\omega}}(\mathcal{R}(\omega))dP(\omega)\leq \int \log N(T,\omega,\mathcal{U},1)dP(\omega)=H(T,\mathcal{U},1).
\end{align*}
This ends the proof of Lemma \ref{lemma2.3}.
\end{proof}
\begin{remark}
The cover $\mathcal{U}$ constructed in the proof of Lemma \ref{lemma2.3} part (1) in fact belongs to $\mathcal{C}_{\mathcal{E}}^{o'}$. Then, more precisely, $h_{\mu}^{(r)}(T)=\sup_{\mathcal{U}\in\mathcal{C}_{\mathcal{E}}^{o'}}h_{\mu}^{(r)-}(T,\mathcal{U})=\sup_{\mathcal{U}\in\mathcal{C}_{\mathcal{E}}^{o}}h_{\mu}^{(r)-}(T,\mathcal{U})$.
When $(\Omega,\mathcal{F},P,\vartheta)$ is a trivial system, the inequality $h_{\mu}^{(r)-}(T,\mathcal{U})\leq h_{\text{top}}(T,\mathcal{U})$ can be easily obtained by the fact $H_{\mu}(\alpha)\leq \log \# \alpha$ for $\alpha\in \mathcal{P}_X$, where $\mathcal{P}_X$ is the set of partitions of $X$.
\end{remark}
Let $\mathcal{P}_X$ be the set of partitions of $X$ and $\mathcal{M}(X)$ be the set of all Borel probability measures on $X$.
For any $\alpha\in \mathcal{P}_X$ and $\theta\in \mathcal{M}(X)$,
we define $|\alpha|_{\theta}=\# \{A\in \alpha: \theta(A)>0\}$. Then, from the proof of Lemma \ref{lemma2.3} part (2), we have in fact obtained the following.
\begin{corollary}
Let $\mu\in \mathcal{P}_P(\mathcal{E})$ and $d\mu(\omega,x)=d\mu_{\omega}(x)dP(\omega)$ P-a.s. be the disintegration of $\mu$. Then for any $\mathcal{U}\in \mathcal{C}_{\mathcal{E}}$, there exists $\mathcal{R}\in \mathcal{P}_{\mathcal{E}}$ such that $\mathcal{R}\succeq \mathcal{U}$ and $\mid \mathcal{R}(\omega)\mid_{\mu_{\omega}}\leq \sup_{\omega\in\Omega}N(T,\omega,\mathcal{U},1)$ for $P$-a.s. $\omega\in \Omega$.
\end{corollary}
It follows from Lemma \ref{lemma2.3} that one inequality in Theorem \ref{theorem2.1} holds, namely $\sup_{\mu\in\mathcal{I}_P(\mathcal{E})}h_{\mu}^{(r)}(T)\leq h_{\text{top}}(T)$. In fact, we have the following Theorem \ref{theorem2.5}, whose proof will be completed in Section \ref{section4}. In the next section, we introduce another new notion of random measure-theoretical entropy for covers and prove a variational inequality for this new entropy. Using the inequality we can prove Theorem \ref{theorem2.5} in Section \ref{section4}.
\begin{theorem}
\label{theorem2.5}
Let $T$ be a homeomorphic bundle RDS on $\mathcal{E}$ over $\vartheta$. Then for each $\mathcal{U}\in \mathcal{C}_{\mathcal{E}}^{o'}$ there exists $\mu\in \mathcal{I}_P(\mathcal{E})$ such that $h_{\text{top}}(T,\mathcal{U})=h_{\mu}^{(r)-}(T,\mathcal{U})$.
\end{theorem}
Theorem \ref{theorem2.5} together with Lemma \ref{lemma2.3} implies that $h_{\text{top}}(T,\mathcal{U})\leq \sup_{\nu}h_{\nu}^{(r)}(T)$. By taking the supremum over all covers $\mathcal{U}\in \mathcal{C}_{\mathcal{E}}^{o'}$ in Theorem \ref{theorem2.5}, we can easily get Theorem \ref{theorem2.1} in the homeomorphic case. Moreover, Theorem \ref{theorem2.5} also shows that if there exists $\mathcal{U}\in \mathcal{C}_{\mathcal{E}}^{o'}$ such that $h_{\text{top}}(T,\mathcal{U})=h_{\text{top}}(T)$, then there exists $\mu\in \mathcal{I}_P(\mathcal{E})$ such that $h_{\mu}^{(r)}(T)=h_{\text{top}}(T)$.
\section{A variational inequality of random entropy for $h_{\mu}^{(r)+}$}\label{section3}
Let $T$ be a homeomorphic bundle RDS on $\mathcal{E}$ over $\vartheta$. Given $\mu\in \mathcal{I}_P(\mathcal{E})$ and $\mathcal{U}\in \mathcal{C}_{\mathcal{E}}$ we define
\begin{equation*}
h_{\mu}^{(r)+}(T,\mathcal{U})=\inf_{\mathcal{Q}\in \mathcal{P}_{\mathcal{E}},\mathcal{Q}\succeq \mathcal{U}}h_{\mu}^{(r)}(T,\mathcal{Q}).
\end{equation*}
Obviously, $h_{\mu}^{(r)+}(T,\mathcal{U})\geq h_{\mu}^{(r)-}(T,\mathcal{U})$. By Lemma \ref{lemma2.3} part (1), $h_{\mu}^{(r)}(T)=\sup_{\mathcal{U}\in \mathcal{C}_{\mathcal{E}}^o}h_{\mu}^{(r)+}(T,\mathcal{U})$ also holds.
For $\mathcal{U}\in \mathcal{C}_{\mathcal{E}}^{o'} $,
it is not difficult to verify (see e.g. \cite{Bogen,Kifer}) that the infimum above can be taken only over partitions $\mathcal{Q}$ of $\mathcal{E}$ into sets $Q_i$ of the form $Q_i=(\Omega\times P_i)\cap \mathcal{E}$, where $\mathcal{P}=\{P_i\}$ is a partition of $X$ into measurable subsets, so that $Q_i(\omega)=P_i\cap \mathcal{E}_{\omega}$.
In this section, we will show that, for given $\mathcal{U}\in \mathcal{C}_{\mathcal{E}}^{o'} $, there always exists $\mu \in \mathcal{I}_P(\mathcal{E})$ such that $h_{\mu}^{(r)+}(T,\mathcal{U})\geq h_{\text{top}}(T,\mathcal{U})$.
First we recall the definition of a factor between two RDS (see \cite{Liupeidong2005}).
\begin{definition}
Given two continuous bundle RDS $T_{i}$ on $\mathcal{E}_i\subset \Omega \times X_i$ over $\vartheta$, $i=1,2$,
$T_2$ is called a factor of $T_1$ if there exists a family of surjective continuous maps $\{\pi_{\omega}:(\mathcal{E}_{1})_{\omega}\rightarrow (\mathcal{E}_{2})_{\omega}\}$ such that for $P$-a.s. $\omega$, $ (T_{2})_{\omega}\circ \pi_{\omega}=\pi_{\vartheta\omega}\circ (T_{1})_{\omega}$ and $\pi:(\omega,x)\rightarrow (\omega, \pi_{\omega}x)$ constitutes a measurable map from $\mathcal{E}_1$ to $\mathcal{E}_2$.
\end{definition}
The following lemma is an obvious fact.
\begin{lemma}
\label{lemma3.1}
Let $\psi: \mathcal{G}\rightarrow \mathcal{E}$ be a factor map between continuous bundle RDS $T_1$ on $\mathcal{E}$ and $T_2$ on $\mathcal{G}$ over $\vartheta$, where $\mathcal{E}\subset \Omega \times X$, $\mathcal{G}\subset \Omega \times Z$. If $\mu \in \mathcal{I}_P(\mathcal{G})$, $\nu=\psi \mu$, $\mathcal{R}\in \mathcal{P}_{\mathcal{E}}$ and $\mathcal{U}\in \mathcal{C}^o_{\mathcal{E}}$, then
\renewcommand{\theenumi}{(\arabic{enumi})}
\begin{enumerate}
\item $h_{\text{top}}(T_2,\psi^{-1}(\mathcal{U}))=h_{\text{top}}(T_1,\mathcal{U})$;
\item $h^{(r)}_{\mu}(T_2,\psi^{-1}(\mathcal{R}))=h^{(r)}_{\nu}(T_1,\mathcal{R})$.
\end{enumerate}
\end{lemma}
We also need the following lemma.
\begin{lemma}
\label{lemma3.3}
Let $T$ be a continuous bundle RDS on $\mathcal{E}$ over $\vartheta$ and $\mathcal{R}\in \mathcal{P}_{\mathcal{E}}$. Then the following hold:
\renewcommand{\theenumi}{(\arabic{enumi})}
\begin{enumerate}
\item the function $\mu\rightarrow H_{\mu}(\mathcal{R}\mid \mathcal{F}_{\mathcal{E}})$ is concave on $\mathcal{P}_P(\mathcal{E})$;
\item the functions $\mu\rightarrow h^{(r)}_{\mu}(T,\mathcal{R})$ and $\mu\rightarrow h^{(r)}_{\mu}(T)$ are affine on $\mathcal{I}_P(\mathcal{E})$.
\end{enumerate}
\end{lemma}
\begin{proof}
(1) Let $\mu=a\nu+(1-a)\eta$, where $\nu$, $\eta\in \mathcal{P}_P(\mathcal{E})$ and $0<a<1$. Since $H_{\mu}(\mathcal{R}\mid \mathcal{F}_{\mathcal{E}})=\int H_{\mu_{\omega}}(\mathcal{R}(\omega))dP(\omega)$ for $\mu \in \mathcal{P}_P(\mathcal{E})$, and since $\mu_{\omega}\rightarrow H_{\mu_{\omega}}(\mathcal{R}(\omega))$ is concave on $\mathcal{M}(X)$, where $\mathcal{M}(X)$ is the set of Borel probability measures on $X$, it is easy to see that
\begin{equation}\label{ineq3.1}
H_{\mu}(\mathcal{R}\mid \mathcal{F}_{\mathcal{E}})\geq a H_{\nu}(\mathcal{R}\mid \mathcal{F}_{\mathcal{E}})+(1-a)H_{\eta}(\mathcal{R}\mid \mathcal{F}_{\mathcal{E}}).
\end{equation}
Then $\mu\rightarrow H_{\mu}(\mathcal{R}\mid \mathcal{F}_{\mathcal{E}})$ is concave on $\mathcal{P}_P(\mathcal{E})$.
(2) Let $\mu=a\nu+(1-a)\eta$, where $\nu$, $\eta\in \mathcal{I}_P(\mathcal{E})$ and $0<a<1$. Using inequality \eqref{ineq3.1} we have
\begin{align*}
0&\leq H_{\mu}(\mathcal{R}\mid\mathcal{F}_{\mathcal{E}})
-aH_{\nu}(\mathcal{R}\mid\mathcal{F}_{\mathcal{E}})-(1-a)H_{\eta}(\mathcal{R}\mid\mathcal{F}_{\mathcal{E}})\\
&=\int (H_{\mu_{\omega}}(\mathcal{R}(\omega))
-aH_{\nu_{\omega}}(\mathcal{R}(\omega))-(1-a)H_{\eta_{\omega}}(\mathcal{R}(\omega)))dP(\omega)\\
&\leq \int (-a\log a - (1-a)\log (1-a))dP(\omega)\\
&=-a\log a - (1-a)\log (1-a).
\end{align*}
Applying this inequality to the partitions $\bigvee_{i=0}^{n-1}(\Theta^i)^{-1}\mathcal{R}$, dividing by $n$ and letting $n\rightarrow\infty$, we obtain
\begin{equation}\label{eq3.1}
h_{\mu}^{(r)}(T,\mathcal{R})=ah_{\nu}^{(r)}(T,\mathcal{R})+(1-a)h_{\eta}^{(r)}(T,\mathcal{R}),
\end{equation}
so that $\mu\rightarrow h_{\mu}^{(r)}(T,\mathcal{R})$ is affine.
Note that the supremum in the definition of $h_{\mu}^{(r)}(T)$ can be taken over partitions $\mathcal{Q}$ of $\mathcal{E}$ into sets $Q_i$ with the form $Q_i=(\Omega\times P_i)\cap\mathcal{E}$, where $\mathcal{P}=\{P_i\}$ is a partition of $X$. We can take an increasing sequence of finite Borel partitions $\mathcal{P}_j$ of $X$ with $\text{diam}(\mathcal{P}_j)\rightarrow 0$. Then
$$
(\Omega \times \bigvee_{j=1}^{\infty}\mathcal{P}_j)\vee (\mathcal{F}\times X)=\mathcal{F}\otimes\mathcal{B}.
$$
It follows from Lemma 1.6 in \cite{Kifer} that
$$
h^{(r)}_{\mu}(T)=\lim_{j\rightarrow \infty}h^{(r)}_{\mu}(T,\mathcal{Q}_j),
$$
where $\mathcal{Q}_j=\{Q_{j_i}\}$, $Q_{j_i}=(\Omega\times P_{j_i})\cap\mathcal{E}$, and $\mathcal{P}_j=\{P_{j_i}\}$ is a finite Borel partition of $X$. Replacing $\mathcal{R}$ by $\mathcal{Q}_j$ in the equality \eqref{eq3.1} and letting $j\rightarrow \infty$, one has
$$
h_{\mu}^{(r)}(T)=ah_{\nu}^{(r)}(T)+(1-a)h_{\eta}^{(r)}(T),
$$
and we complete the proof of Lemma \ref{lemma3.3}.
\end{proof}
A real-valued function $f$ defined on a compact metric space $Z$ is
called {\it upper semi-continuous }(for short u.s.c.) if one of the
following equivalent conditions holds:
\begin{enumerate}
\item $\limsup_{z'\rightarrow z}f(z')\leq f(z)$ for each $z\in Z$;
\item for each $r\in \mathbb{R}$, the set $\{z\in Z:f(z)\geq r\}$ is
closed.\label{usc2}
\end{enumerate}
By \ref{usc2}, the infimum of any family of u.s.c. functions is
again a u.s.c. one; both the sum and supremum of finitely many
u.s.c. functions are u.s.c. ones.
For each function $f$ on $\mathcal{E}$, which is measurable in
$(\omega, x)$ and continuous in $x \in \mathcal{E}_{\omega}$, let
\begin{equation*}
\| f \| =\int \| f(\omega) \|_{\infty} \, dP, \quad
\text{where} \quad \| f(\omega) \|_{\infty} = \sup_{x\in
\mathcal{E}_{\omega}}\mid f(\omega,x) \mid.
\end{equation*}
Let $\mathbf{L}_{\mathcal{E}}^1 (\Omega, \mathcal{C}(X))$ be the
space of such functions $f$ with $\|f \| < \infty$. If we identify
$f$ and $g$ for $f, g \in
\mathbf{L}_{\mathcal{E}}^1 (\Omega, \mathcal{C}(X))$ with $\| f-g\| =0$, then
$\mathbf{L}_{\mathcal{E}}^1 (\Omega, \mathcal{C}(X))$ is a Banach
space with the norm $\| \cdot \|$.
For $\mu$, $\mu_n\in \mathcal{P}_P(\mathcal{E})$, $n=1,2,\dots$,
we say that $\mu_n$ converges to $\mu$ if $\int f d\mu_n \rightarrow \int f d\mu $
as $n\rightarrow \infty$ for any $f\in \mathbf{L}_{\mathcal{E}}^1 (\Omega, \mathcal{C}(X))$.
This introduces a weak* topology in $\mathcal{P}_P(\mathcal{E})$.
It is well known that $\mathcal{P}_P(\mathcal{E})$ is compact in this weak* topology.
Moreover, the following lemma holds \cite{Kifer2001}.
\begin{lemma}\label{lem3}
For any $\nu_k\in \mathcal{P}_P(\mathcal{E})$, $k\in \mathbb{N}$, the set of limit points in the above weak* topology of the sequence
$$\mu_n=\frac{1}{n}\sum_{k=0}^{n-1}\Theta^k\nu_n \quad \text{as}\,\,\, n\rightarrow \infty$$
is not empty and is contained in $\mathcal{I}_P(\mathcal{E})$.
\end{lemma}
The following lemma shows that, in the above weak* topology, the random measure-theoretical entropy map conditioned on the $\sigma$-algebra $\mathcal{F}_{\mathcal{E}}$ is u.s.c. The first part was already given in \cite{Kifer2001}.
\begin{lemma}
\label{lemma3.4}
Let $T$ be a continuous RDS on $\mathcal{E}$ over $\vartheta$. Let $\mathcal{P}=\{P_1,\dots,P_k\}$ be a finite partition of $X$ satisfying $\int\mu_{\omega}(\partial\mathcal{P}_{\omega})dP(\omega)=0$, where $\mu_{\omega}$ are the disintegrations of $\mu$ and $\partial\mathcal{P}_{\omega}=\bigcup_{i=1}^k\partial (P_i\cap \mathcal{E}_{\omega})$ is the boundary of $\mathcal{P}_{\omega}=\{P_1\cap\mathcal{E}_{\omega},\dots,P_k\cap\mathcal{E}_{\omega}\}$; denote by $\mathcal{R}$ the partition of $\mathcal{E}$ into sets $(\Omega\times P_i)\cap \mathcal{E}$; then
\renewcommand{\theenumi}{(\alph{enumi})}
\begin{enumerate}
\item $\mu\rightarrow H_{\mu}(\mathcal{R}\mid \mathcal{F}_{\mathcal{E}})$ is a u.s.c. function on $\mathcal{P}_P(\mathcal{E})$.\label{a}
\item $\mu\rightarrow h_{\mu}^{(r)}(T,\mathcal{R})$ is a u.s.c. function on $\mathcal{I}_P(\mathcal{E})$.\label{b}
\end{enumerate}
\end{lemma}
\begin{proof}
We only prove the second part. By \ref{a}, $\mu\rightarrow H_{\mu}(\bigvee_{i=0}^{n-1}(\Theta^i)^{-1}\mathcal{R}\mid \mathcal{F}_{\mathcal{E}})$ is also a u.s.c. function on $\mathcal{I}_P(\mathcal{E})$. Note that for $\mu \in \mathcal{I}_P(\mathcal{E})$, $h_{\mu}^{(r)}(T,\mathcal{R})=\inf_{n\geq 1}\frac{1}{n}H_{\mu}(\bigvee_{i=0}^{n-1}(\Theta^i)^{-1}\mathcal{R}\mid \mathcal{F}_{\mathcal{E}})$, i.e. the function $\mu\rightarrow h_{\mu}^{(r)}(T,\mathcal{R})$ is the infimum of the family of u.s.c. functions $\frac{1}{n}H_{\mu}(\bigvee_{i=0}^{n-1}(\Theta^i)^{-1}\mathcal{R}\mid \mathcal{F}_{\mathcal{E}})$ on $\mathcal{I}_P(\mathcal{E})$. By \ref{usc2} in the definition of the u.s.c. function, $\mu\rightarrow h_{\mu}^{(r)}(T,\mathcal{R})$ is a u.s.c. function on $\mathcal{I}_P(\mathcal{E})$.
\end{proof}
The following Lemma \ref{lemma3.5} is important in the proof of the variational inequality of random entropy for $h_{\mu}^{(r)+}$. We follow the idea of Kifer \cite{Kifer2001} for constructing a measurable family of maximal separated sets on the fibers of a bundle RDS, and that of Huang {\it et al.} \cite{Huang2006} for handling the local variational inequality in deterministic dynamical systems. Following Misiurewicz's method, we can then avoid the combinatorial lemma of \cite{Blanchard1997}, as in the deterministic case, and obtain the variational inequality of random entropy stated at the beginning of this section.
If $\mathcal{R}=\{R_i\}$ is a partition of $\mathcal{E}$,
then $\mathcal{Q}=\bigvee_{i=0}^{n-1}(\Theta^i)^{-1}\mathcal{R}$ (denoted by $(\mathcal{R})_0^{n-1}$) is a partition consisting of sets $\{Q_j\}$ such that the corresponding partition $\mathcal{Q}(\omega)=\{Q_j(\omega)\}$, $Q_j(\omega)=\{x: (\omega,x)\in Q_j\}$ of $\mathcal{E}_{\omega}$ has the form $\mathcal{Q}(\omega)=\bigvee_{i=0}^{n-1}(T_{\omega}^i)^{-1}\mathcal{R}(\vartheta^i\omega)$, where $\mathcal{R}(\omega)=\{R_i(\omega)\}$, $R_i(\omega)=\{x\in \mathcal{E}_{\omega}:(\omega,x)\in R_i \}$ partitions $\mathcal{E}_{\omega}$.
Let $A_1, A_2\in \mathcal{F}\times \mathcal{B}$ and let each $A_i(\omega)$ be a closed subset of $\mathcal{E}_{\omega}$. It follows from \cite[Proposition \Rmnum{3}.13]{Castaing} that $\{(\omega,x_1,x_2): x_i\in A_i(\omega), \forall i\}$ belongs to the product $\sigma$-algebra $\mathcal{F}\times \mathcal{B}^2$ (as the graph of a measurable multifunction).
\begin{lemma}
\label{lemma3.5}
Let $X$ be zero-dimensional and $T$ be a continuous RDS on $\mathcal{E}$ over $\vartheta$, $\mathcal{U}\in \mathcal{C}_{\mathcal{E}}^o$.
Assume $K\in \mathbb{N}$ and $\{\mathcal{R}_l\}_{l=1}^K$ is a finite sequence of partitions of $\mathcal{E}$ finer than $\mathcal{U}$, where $\mathcal{R}_l=\{R_{l,i}\}$, $1\leq l\leq K$, and each $R_{l,i}(\omega)$ is a clopen subset of $X$. Then for each $n\in \mathbb{N}$ there exists a family of maximal subsets $B_n(\omega)\subset \mathcal{E}_{\omega}$, with cardinality at least $[N(T,\omega,\mathcal{U},n)/K]$, such that each atom of $(\mathcal{R}_l)_0^{n-1}(\omega)$ contains at most one point of $B_n(\omega)$ for $1\leq l\leq K$, and which depends measurably on $\omega$ in the sense that $B_n=\{(\omega,x):x\in B_n(\omega)\}\in \mathcal{F}\times \mathcal{B}$,
where $[N(T,\omega,\mathcal{U},n)/K]$ is the integer part of $N(T,\omega,\mathcal{U},n)/K$.
\end{lemma}
\begin{proof}
Let $n\in \mathbb{N}$. For any $x\in \mathcal{E}_{\omega}$ and $1\leq l\leq K$, let $A_{l,n}^{\omega}(x)$ be the atom of $(\mathcal{R}_l)_0^{n-1}(\omega)$ containing the point $x$. Then for any $x_1$ and $x_2$ in $\mathcal{E}_{\omega}$ and $1\leq l\leq K$, $ x_1 $ and $ x_2 $ are contained in the same atom of $(\mathcal{R}_l)_0^{n-1}(\omega)$ if and only if $A_{l,n}^{\omega}(x_1)=A_{l,n}^{\omega}(x_2)$. For convenience, we write $\mathcal{Q}_l=(\mathcal{R}_l)_0^{n-1}$ and $\mathcal{Q}_l(\omega)=\{Q_{l,j}(\omega)\}=(\mathcal{R}_l)_0^{n-1}(\omega)$.
For $q\in \mathbb{Z}^+$, set
\begin{align*}
D_q&=\{(\omega,x_1,\ldots,x_q):\omega\in \Omega, \,\, x_i\in \mathcal{E}_{\omega}, \,\,\forall i\},\\
E_q^n&=\{(\omega,x_1,\ldots,x_q)\in D_q: \,\, A_{l,n}^{\omega}(x_i)\neq A_{l,n}^{\omega}(x_j), \,\, \forall i\neq j, \,\, \forall l \},\\
E_{q,l}^n&=\{(\omega,x_1,\ldots,x_q)\in D_q: \,\, A_{l,n}^{\omega}(x_i)\neq A_{l,n}^{\omega}(x_j), \,\, \forall i\neq j \},\\
E_q^n(\omega)&=\{(x_1,\ldots,x_q): (\omega,x_1,\ldots,x_q)\in
E_q^n\}\\
E_{q,l}^n(\omega)&=\{(x_1,\ldots,x_q): (\omega,x_1,\ldots,x_q)\in
E_{q,l}^n\}.
\end{align*}
Observe that $D_q\in \mathcal{F}\times \mathcal{B}^q$ (\cite{Kifer2001}), where $\mathcal{B}^q$ is the product $\sigma$-algebra on the product $X^q$ of $q$ copies of $X$. $E_{q,l}^n$ can also be expressed as
\begin{align*}
E_{q,l}^n &=\{(\omega,x_1,\ldots,x_q)\in D_q: x_i\in Q_{l,r}(\omega),\,\, x_j\in Q_{l,s}(\omega),\,\, \forall i\neq j,\,\, \forall r\neq s \}\\
&=\bigcup_{(r_1,\cdots,r_q)}\{(\omega,x_1,\ldots,x_q)\in D_q: x_i\in Q_{l,r_i}(\omega),\,\, \forall i \},
\end{align*}
where the union is taken over all elements of the set $\{(r_1,\cdots,r_q)\in \mathbb{N}^q: 1\leq r_1< r_2<\cdots< r_q \leq \#\mathcal{Q}_l\}$.
Put
\begin{align*}
E_{q,l}^{n,r}&=\{(\omega,x_1,\ldots,x_q)\in D_q: x_i\in Q_{l,r_i}(\omega),\,\, \forall i \},\\
E_{q,l}^{n,r}(\omega)&=\{(x_1,\ldots,x_q): (\omega,x_1,\ldots,x_q)\in E_{q,l}^{n,r}\}.
\end{align*}
The set $E_{q,l}^{n,r}$ may be empty. Note that each $Q_{l,r_i}\in \mathcal{F}\times \mathcal{B}$ and $Q_{l,r_i}(\omega)$ is a closed subset of $\mathcal{E}_{\omega}$ by the continuity of the RDS $T$.
If $E_{q,l}^{n,r}$ is not an empty subset of $\Omega\times X^q$, then $E_{q,l}^{n,r}\in \mathcal{F}\times \mathcal{B}^q$ and $E_{q,l}^{n,r}(\omega)$ is a closed subset of $\mathcal{E}_{\omega}^q$. In particular, $E_q^n\in \mathcal{F}\times \mathcal{B}^q$ and $E_q^n(\omega)$ is also a closed subset of $\mathcal{E}_{\omega}^q$.
Let $s_n(\omega)$ be the largest cardinality of a subset $B_n(\omega)\subset \mathcal{E}_{\omega}$ such that any element of $\mathcal{Q}_l(\omega)$ contains at most one point of $B_n(\omega)$ for $1\leq l\leq K$. By Theorem \Rmnum{3}.23 in \cite{Castaing} it follows that
$$
\{\omega: s_n(\omega)\geq q\}=\{\omega: E_q^n(\omega)\neq \emptyset\}=\Pr\nolimits_{\Omega}E_q^n\in \mathcal{F},
$$
where $\Pr\nolimits_{\Omega}$ is the projection of $\Omega \times X^q$ to $\Omega$, and so $s_n(\omega)$ is measurable in $\omega$.
Observe that the sets $\Omega_q=\{\omega: s_n(\omega)=q\}$ are measurable, disjoint and $\bigcup_{q\geq 1}\Omega_q=\Omega$. It follows from Theorem \Rmnum{3}.30 in \cite{Castaing} that the multifunction $\Psi_q$ defined by $\Psi_q(\omega)=E_q^n(\omega)$ for $\omega\in \Omega_q$ is measurable, and it admits a measurable selection $\sigma_q$ which is a measurable map $\sigma_q: \Omega_q\rightarrow X^q$ such that $\sigma_q(\omega )\in E_q^n(\omega)$ for all $\omega\in \Omega_q$. Let $\zeta_q$ be the multifunction from $X^q$ to $q$-point subsets of $X$ defined by $\zeta_q(x_1,\cdots,x_q)=\{x_1,\cdots,x_q\}\subset X$. Then $\zeta_q\circ\sigma_q$ is a multifunction assigning to each $\omega\in \Omega_q$ a maximal subset $B_n(\omega)$ in $\mathcal{E}_{\omega}$.
Let $B'_n(\omega)$ be a maximal subset of $\mathcal{E}_{\omega}$ such that any atom of $\mathcal{Q}_l(\omega)$ contains at most one point of $B'_n(\omega)$ for $1\leq l \leq K$. We claim that the cardinality of $B'_n(\omega)$ is no less than $[N(T,\omega,\mathcal{U},n)/K]$. Assume the contrary, i.e., $B'_n(\omega)=\{x_1,\dots, x_d\}$ with $d< [N(T,\omega,\mathcal{U},n)/K]$.
Set $B_{\omega}=\big(\bigcup_{i=1}^d\bigcup_{l=1}^K A_{l,n}^{\omega}(x_i)\big)\cap \mathcal{E}_{\omega}$. For any $1\leq i \leq d$ and $1\leq l\leq K$, $A_{l,n}^{\omega}(x_i)$ is an atom of $\mathcal{Q}_l(\omega)$, and thus is contained in an element of
$\mathcal{U}_0^{n-1}(\omega) =\bigvee_{i=0}^{n-1}(T^i_{\omega})^{-1}\mathcal{U}(\vartheta^i\omega)$.
In particular, $B_{\omega}$ is covered by at most $dK<K[N(T,\omega,\mathcal{U},n)/K]\leq N(T,\omega, \mathcal{U}, n)$ elements of $\mathcal{U}_0^{n-1}(\omega)$. Since any subcover of $\mathcal{U}_0^{n-1}(\omega)$ which covers $\mathcal{E}_{\omega}$ has at least $N(T,\omega,\mathcal{U},n)$ elements, we have $\mathcal{E}_{\omega}\backslash B_{\omega}\neq \emptyset$.
Choosing an arbitrary point $x\in \mathcal{E}_{\omega}\backslash B_{\omega}$, we have $x \not\in \bigcup_{i=1}^d\bigcup_{l=1}^K A_{l,n}^{\omega}(x_i)$. Since for any $1\leq i \leq d$ and $1\leq l\leq K$ we then have $A_{l,n}^{\omega}(x)\neq A_{l,n}^{\omega}(x_i)$, we conclude that $B'_n(\omega)\cup \{x\}$ is also a subset of $\mathcal{E}_{\omega}$ such that any atom of $\mathcal{Q}_l(\omega)$ contains at most one point of $B'_n(\omega)\cup \{x\}$ for $1\leq l\leq K$. This is a contradiction, as $B'_n(\omega)$ is maximal. Choosing $B_n(\omega)\subset B'_n(\omega)$ with cardinality $[N(T,\omega,\mathcal{U},n)/K]$, we obtain the subset we need.
For any open subset $U\subset X$, set $V_U^q(i)=\{(x_1,\cdots,x_q)\in X^q: x_i\in U\}$ which is an open subset of $X^q$. Then
$$
\{\omega\in \Omega_q: \zeta_q\circ\sigma_q(\omega)\cap U\neq \emptyset\}=\bigcup_{i=1}^q\sigma_q^{-1}V_U^q(i)\in \mathcal{F}.
$$
Let $\Phi(\omega)=\zeta_{s_n(\omega)}\circ\sigma_{s_n(\omega)}(\omega)$, then
$$
\{\omega: \Phi(\omega)\cap U\neq \emptyset\}=\bigcup_{q=1}^{\infty}\{\omega\in \Omega_q: \zeta_q\circ\sigma_q(\omega)\cap U\neq \emptyset\}\in \mathcal{F}.
$$
Hence $\Phi$ is a measurable multifunction which assigns to each $\omega\in \Omega$ a maximal finite subset $B_n(\omega)$ with cardinality at least $[N(T,\omega,\mathcal{U},n)/K] $ such that each atom of $(\mathcal{R}_l)_0^{n-1}(\omega)$ contains at most one point of $B_n(\omega)$ for $1\leq l\leq K$, and we complete the proof of Lemma \ref{lemma3.5}.
\end{proof}
\begin{lemma}
\label{lem2}
Let $T$ be a continuous bundle RDS on $\mathcal{E}$ over $\vartheta$. There exists a continuous bundle RDS $S$ on $Y\subset \Omega\times K^{\mathbb{N}}$ over $\vartheta$, where $K$ is a Cantor space, and a family of surjective continuous maps $\{\pi_{\omega}: Y_{\omega}\rightarrow \mathcal{E}_{\omega}\}_{\omega\in \Omega}$ such that $\pi_{\vartheta\omega}\circ S_{\omega}=T_{\omega}\circ \pi_{\omega}$ for $P$-a.s. $\omega$ and $\pi:(\omega,y)\rightarrow (\omega,\pi_{\omega}y)$ constitutes a measurable map from $Y$ to $\mathcal{E}$.
\end{lemma}
\begin{proof}
Since $X$ is a compact metric space, there exist a Cantor space $K$ and a surjective continuous map $f: K\rightarrow X$.
For each $\omega\in \Omega$, $f^{-1}(\mathcal{E}_{\omega})$ is a closed subset of $K$. Let $K_{\omega}=f^{-1}(\mathcal{E}_{\omega})$ and let $f_{\omega}$ be the restriction of $f$ to $K_{\omega}$. Write $\Pi_{i=0}^{\infty}K_{\vartheta^i\omega}=K_{\omega}\times K_{\vartheta\omega}\times\cdots \times K_{\vartheta^n\omega}\times \cdots$. Since $K_{\omega}$ is a Cantor subset of $K$, $\Pi_{i=0}^{\infty}K_{\vartheta^i\omega}$ is also a Cantor subspace of $K^{\mathbb{N}}$, where the latter is equipped with the product topology.
For each $\omega$, put
\begin{equation*}
Y_{\omega}=\{y\in \Pi_{i=0}^{\infty}K_{\vartheta^i\omega}: T_{\vartheta^n\omega}f_{\vartheta^n\omega}(y_n)=f_{\vartheta^{n+1}\omega}(y_{n+1}) \,\, \text{for every}\,\, n\in\mathbb{N} \}.
\end{equation*}
Then $Y_{\omega}$ is a closed subset of $K^{\mathbb{N}}$ for each $\omega\in \Omega$. Let $Y=\{(\omega,y): \omega\in \Omega, y\in Y_{\omega}\}$. Then $Y$ is measurable in $\Omega \times K^{\mathbb{N}}$, i.e., $\{\omega: Y_{\omega}\cap U\neq \emptyset\}\in \mathcal{F}$ for each open subset $U\subset K^{\mathbb{N}}$. In fact, let $U=\Pi_{i=0}^{\infty}U_i$, where each $U_i$ is an element of a basis of $K$; note that each $U_i$ is then a clopen set. If $y\in Y_{\omega}\cap U$, then for each $n\in \mathbb{N}$ we have $y_n\in U_n$, $y_{n+1}\in U_{n+1}$ and $T_{\vartheta^n\omega}f_{\vartheta^n\omega}(y_n)=f_{\vartheta^{n+1}\omega}(y_{n+1})$.
Let $V_n=f_{\vartheta^n\omega}(U_n)=f(K_{\vartheta^n\omega}\cap U_n)$, $n\in \mathbb{N}$. Then $V_n$ is a closed subset of $\mathcal{E}_{\vartheta^n\omega}$ for each $n\in \mathbb{N}$. It follows that
\begin{align*}
\{\omega: Y_{\omega}\cap U\neq \emptyset\}
=&\bigcap_{n=0}^{\infty}
\{\omega: T_{\vartheta^n\omega}f_{\vartheta^n\omega}(y_n)=f_{\vartheta^{n+1}\omega}(y_{n+1}), y_n\in U_n, y_{n+1}\in U_{n+1}
\}\\
=&\bigcap_{n=0}^{\infty}\{\omega: T_{\vartheta^n\omega}x_n=x_{n+1}, x_n\in V_n, x_{n+1}\in V_{n+1} \}\\
=&\bigcap_{n=0}^{\infty}\{\omega: T_{\vartheta^n\omega}^{-1}V_{n+1}\cap V_n\neq \emptyset \}
\end{align*}
Since the map $(\omega,x)\rightarrow T_{\omega}x$ is measurable and $V_n$ is closed in $X$ for each $n\in \mathbb{N}$,
\begin{align*}
&\{(\omega,x)\in \Omega\times V_n: T_{\vartheta^n\omega}x\in V_{n+1}\}\\
= &\{(\omega,x): T_{\vartheta^n\omega}x\in V_{n+1}\}\cap \{(\omega,x): \omega\in \Omega, x\in V_n\} \end{align*}
is a measurable subset of $\Omega\times X$ for each $n \in \mathbb{N}$. By the projection theorem in \cite{Castaing} (Theorem \Rmnum{3}.23), one has
$$\{\omega: T_{\vartheta^n\omega}^{-1}V_{n+1}\cap V_n\neq \emptyset \}\in \mathcal{F},$$
for each $n \in \mathbb{N}$. Then $\{\omega: Y_{\omega}\cap U\neq \emptyset\}\in \mathcal{F}$.
For each $\omega$, let $\pi_{\omega}: Y_{\omega}\rightarrow \mathcal{E}_{\omega}$ be defined by $\pi_{\omega}(y)=f_{\omega}(y_0)$. Then $\pi_{\omega}$ is a surjective continuous map.
Let $\pi: (\omega,y)\rightarrow (\omega,\pi_{\omega}y)$ be the map from $Y$ to $\mathcal{E}$.
Note that for fixed $y$, $\{\omega: \pi_{\omega}y\in A\}=Y_y$ or the null set for any open subset $A$ of $X$, where $Y_y=\{\omega: (\omega,y)\in Y \}$ is the $y$-section of $Y$.
By the measurability of $Y$,
one knows that the map $(\omega,y)\rightarrow \pi_{\omega}y$ is measurable in $\omega$. Since $(\omega,y)\rightarrow \pi_{\omega}y$ is continuous in $y$, the map $(\omega,y)\rightarrow \pi_{\omega}y$ is jointly measurable and $\pi$ constitutes a measurable map from $Y$ to $\mathcal{E}$.
For each $\omega$, let $S_{\omega}: Y_{\omega}\rightarrow Y_{\vartheta\omega}$ be the left shift, $(S_{\omega}y)_i=y_{i+1}$, $i\in \mathbb{N}$. Obviously, $(\omega,y)\rightarrow S_{\omega}y$ is continuous in $y$. With a similar argument as in the above proof of the measurability of $Y$, one can show that $(\omega,y)\rightarrow S_{\omega}y$ is measurable in $\omega$, hence jointly measurable. Therefore the map $S:Y\rightarrow Y$ defined by $(\omega,y)\rightarrow (\vartheta\omega, S_{\omega}y)$ constitutes a skew product transformation. It is immediate to check that $\pi_{\vartheta\omega}\circ S_{\omega}=T_{\omega}\circ \pi_{\omega}$.
This completes the proof of the lemma.
\end{proof}
\begin{remark}
In the deterministic setting, it is well known that for any dynamical system $(X,\varphi)$, where $X$ is a compact metric space and $\varphi:X\rightarrow X$ is a surjective continuous map, there exists a zero-dimensional dynamical system $(Z,\psi)$ and a surjective continuous map $\tau:Z\rightarrow X$ with $\tau\circ \psi=\varphi\circ \tau$ (see, e.g., \cite{Blanchard1997}). The above lemma gives a random version of this result. When $(\Omega, \mathcal{F}, P, \vartheta)$ is a trivial system, Lemma \ref{lem2} reduces to the deterministic result. For a homeomorphic bundle RDS, replacing the left shift by the two-sided shift and $ K^{\mathbb{N}}$ by $K^{\mathbb{Z}} $, one finds that a similar result also holds.
\end{remark}
The sequel of this section is devoted to the proof of the following variational inequality of random entropy for $h_{\mu}^{(r)+}$.
\begin{theorem}
\label{theorem3.6}
Let $T$ be a homeomorphic bundle RDS on $\mathcal{E}$ over $\vartheta$ and $\mathcal{U}\in \mathcal{C}_{\mathcal{E}}^{o'}$. Then there exists a $\mu\in \mathcal{I}_P(\mathcal{E})$ such that $h_{\mu}^{(r)+}(T,\mathcal{U})\geq h_{\text{top}}(T,\mathcal{U})$.
\end{theorem}
\begin{proof}
We adopt the argument in \cite{Huang2006} for the deterministic dynamical system.
Let $\mathcal{U}=\{\Omega\times U_i\}_{i=1}^d\cap \mathcal{E}\in \mathcal{C}_{\mathcal{E}}^{o'}$. Then $A=\{U_i\}_{i=1}^d$ is an open cover of $X$. Define
\begin{equation*}
\mathcal{U}^*=\{\mathcal{R}\in \mathcal{P}_{\mathcal{E}}: \mathcal{R}=\{\Omega\times R_i\}_{i=1}^d\cap\mathcal{E}, R_m\subset U_m \,\, \text{for all}\,\, 1\leq m\leq d \}.
\end{equation*}
Case 1. Assume that $X$ is zero-dimensional. Then the family of partitions finer than $\mathcal{U}$, consisting of sets $\{\Omega\times R_i\}\cap\mathcal{E}$ with each $R_i$ being clopen sets, is countable. Let $\{\mathcal{R}_l:l\geq 1\}$ be the enumeration of this family. It is clear that $\{\mathcal{R}_l\}\subset \mathcal{U}^*$.
Fix $n\in \mathbb{N}$. By Lemma \ref{lemma3.5}, there exists a family of maximal subsets $C_n(\omega)$ of $\mathcal{E}_{\omega}$, measurable in $\omega$,
$$
C_n(\omega)=B_{n^2}\big( (\Theta^n)^{-1}\mathcal{U},
\{(\Theta^n)^{-1}\mathcal{R}_l\}_{l=1}^n,\omega \big)
\subset \mathcal{E}_{\omega}
$$
with cardinality at least $\big[N(T,\omega,(\Theta^n)^{-1}\mathcal{U},n^2)/n\big]$ such that any atom of $(\mathcal{R}_l)_n^{n^2+n-1}(\omega)$ contains at most one point of $C_n(\omega)$ for all $1\leq l\leq n$.
Next, define probability measures $\nu_n$ on $\mathcal{E}$ via their measurable disintegrations $\nu_{n,\omega}$, where $\nu_{n,\omega}$ is the equidistributed probability measure on $C_n(\omega)$,
so that $d\nu_n(\omega,x)=d\nu_{n,\omega}(x)dP(\omega)$. Then for each $0\leq i, l\leq n$, every element of
$(\Theta^i)^{-1}(\mathcal{R}_l)_0^{n^2+n-1}(\omega)
=(T_{\omega}^i)^{-1}\bigvee_{j=0}^{n^2+n-1}(T_{\omega}^j)^{-1}\mathcal{R}_l(\vartheta^{j+i}\omega)$, which is finer than
$(\mathcal{R}_l)_n^{n^2+n-1}(\omega)
=\bigvee_{j=n}^{n^2+n-1}(T_{\omega}^j)^{-1}\mathcal{R}_l(\vartheta^j\omega)$, also contains at most one atom of the discrete measure $\nu_{n,\omega}$.
Since for each $\omega$,
\begin{align*}
N(T,\omega,\mathcal{U},n^2+n)
&\leq N(T, \omega,\mathcal{U},n)\cdot N(T,\omega,(\Theta^n)^{-1}\mathcal{U},n^2)\\
&\leq d^n N(T,\omega,(\Theta^n)^{-1}\mathcal{U},n^2),
\end{align*}
we have for any $1\leq i, l\leq n$,
\begin{align}
\label{eq3.2}
H_{T_{\omega}^i\nu_{n,\omega}}&(\bigvee_{j=0}^{n^2+n-1}(T^j_{\omega})^{-1}\mathcal{R}_l(\vartheta^{j+i}\omega)
)
=H_{\nu_{n,\omega}}((T_{\omega}^i)^{-1}\bigvee_{j=0}^{n^2+n-1}(T^j_{\omega})^{-1}\mathcal{R}_l(\vartheta ^{j+i}\omega))\notag\\
&\geq\log [\frac{1}{n}N(T,\omega,(\Theta^n)^{-1}\mathcal{U},n^2)]\geq \log[\frac{1}{nd^n}N(T,\omega,\mathcal{U},n^2+n)].
\end{align}
Since $\nu_{n,\omega}$ is supported by $\mathcal{E}_{\omega}$, $T_{\omega}^i\nu_{n,\omega}$ is supported by $\mathcal{E}_{\vartheta^i\omega}$,
for all $1\leq i,l\leq n$, integrating in \eqref{eq3.2} against $P$, we have by \eqref{kifer2.1} the inequality
\begin{align*}
H_{\Theta^i\nu_n}((\mathcal{R}_l)_0^{n^2+n-1}\mid \mathcal{F}_{\mathcal{E}})
&=H_{\nu_n}\big((\Theta^i)^{-1}(\mathcal{R}_l)_0^{n^2+n-1}\mid \mathcal{F}_{\mathcal{E}}\big)\\
&\geq \int \log[\frac{1}{nd^n}N(T,\omega,\mathcal{U},n^2+n)]dP(\omega).
\end{align*}
Fix $m\in \mathbb{N}$ with $m\leq n$, and let $n^2+n=km+b$, where $0\leq b\leq m-1$. Then for $1\leq i,l\leq n$, we have
\begin{align*}
&H_{\Theta^i\nu_n}((\mathcal{R}_l)_0^{n^2+n-1}\mid \mathcal{F}_{\mathcal{E}})\\
&=H_{\Theta^i\nu_n}\big((\mathcal{R}_l)_{km}^{n^2+n-1}\vee \bigvee_{j=0}^{k-1}(\Theta^{mj})^{-1}(\mathcal{R}_l)_0^{m-1}\mid \mathcal{F}_{\mathcal{E}}\big)\\
&\leq H_{\Theta^i\nu_n}((\mathcal{R}_l)_{km}^{n^2+n-1}\mid \mathcal{F}_{\mathcal{E}})+\sum_{j=0}^{k-1}H_{\Theta^i\nu_n}\big( (\Theta^{mj})^{-1} (\mathcal{R}_l)_0^{m-1} \mid \mathcal{F}_{\mathcal{E}}\big)\\
&\leq \sum_{j=0}^{k-1}H_{\Theta^{i+mj}\nu_n}((\mathcal{R}_l)_0^{m-1} \mid \mathcal{F}_{\mathcal{E}})+m\log d \quad(\text{by Lemma \ref{lemma2.2} (1) and (4)})
\end{align*}
Summing over $0\leq i\leq m-1$ for each $1\leq l\leq n$, we get
\begin{align*}
m\int\log[\frac{1}{nd^n} & N(T,\omega,\mathcal{U},n^2+n)]dP(\omega)
\leq \sum_{i=0}^{m-1}H_{\Theta^i\nu_n}((\mathcal{R}_l)_0^{n^2+n-1}\mid \mathcal{F}_{\mathcal{E}})\\
&\leq \sum_{i=0}^{m-1}\sum_{j=0}^{k-1}H_{\Theta^{i+jm}\nu_n}((\mathcal{R}_l)_0^{m-1}\mid \mathcal{F}_{\mathcal{E}} )+m^2\log d\\
&\leq \sum_{i=0}^{n^2+n-1}H_{\Theta^i\nu_n}((\mathcal{R}_l)_0^{m-1}\mid \mathcal{F}_{\mathcal{E}} )+m^2\log d.
\end{align*}
Denote
$$\mu_n=\frac{1}{n^2+n}\sum_{i=0}^{n^2+n-1}\Theta^i\nu_n.$$
Since the function $\mu\rightarrow H_{\mu} ((\mathcal{R}_l)_0^{m-1}\mid \mathcal{F}_{\mathcal{E}} )$ is concave on $\mathcal{P}_P(\mathcal{E})$, by Lemma \ref{lemma3.3} part (1), we get for each $1\leq l\leq n$,
\begin{align}\label{ineq3.9}
H_{\mu_n} &((\mathcal{R}_l)_0^{m-1}\mid \mathcal{F}_{\mathcal{E}} )\geq \frac{1}{n^2+n}\sum_{i=0}^{n^2+n-1} H_{\Theta^i\nu_n}((\mathcal{R}_l)_0^{m-1}\mid \mathcal{F}_{\mathcal{E}} )\notag\\
&\geq \frac{m}{n^2+n}\big( \int\log[\frac{1}{nd^n} N(T,\omega,\mathcal{U},n^2+n)]dP(\omega) -m\log d \big).
\end{align}
Suppose that $\mu_{n_k}\rightarrow \mu$ as $k\rightarrow \infty$ with $\mu \in \mathcal{P}_P(\mathcal{E})$ in the weak* topology. Then by Lemma \ref{lem3}, $\mu \in \mathcal{I}_P(\mathcal{E})$. Fixing $m\in \mathbb{N}$, we have
\begin{equation}\label{eq3.10}
\lim_{k\rightarrow \infty}\frac{1}{n_k^2+n_k}\big( \int\log[\frac{1}{n_kd^{n_k}} N(T,\omega,\mathcal{U},n_k^2+n_k)]dP(\omega) -m\log d \big)= h_{\text{top}}(T,\mathcal{U}).
\end{equation}
Since any element of $(\mathcal{R}_l)_0^{m-1}(\omega)$ is clopen for each $\omega\in \Omega$, by Lemma \ref{lemma3.4}, equation \eqref{eq3.10}, and replacing $n$ by $n_k$ in the inequality \eqref{ineq3.9} and letting $k\rightarrow +\infty$, we get
$$
\frac{1}{m}H_{\mu}((\mathcal{R}_l)_0^{m-1}\mid \mathcal{F}_{\mathcal{E}})\geq h_{\text{top}}(T,\mathcal{U}),
$$
for any $l, m\in \mathbb{N}$.
Fixing $l\in \mathbb{N}$ and letting $m\rightarrow \infty$, we get $h_{\mu}^{(r)}(T,\mathcal{R}_l)\geq h_{\text{top}}(T,\mathcal{U})$. Since $X$ is zero-dimensional, $\{\mathcal{R}_l\}_{l\geq 1}$ is dense in $\mathcal{U}^*$ with respect to the distance associated with $L^1(\mu)$. Hence
\begin{equation*}
h_{\mu}^{(r)+}(T,\mathcal{U})=\inf_{\mathcal{Q}\in \mathcal{P}_{\mathcal{E}}, \mathcal{Q}\succeq \mathcal{U}}h_{\mu}^{(r)}(T,\mathcal{Q})=\inf_{l\in \mathbb{N}}h_{\mu}^{(r)}(T,\mathcal{R}_l)\geq h_{\text{top}}(T,\mathcal{U}).
\end{equation*}
This completes the proof of Case 1.
Case 2. This is the general case.
By Lemma \ref{lem2}, there exists a homeomorphic bundle RDS $S$ on $\mathcal{G}\subset \Omega\times Z$ over $\vartheta$, where $Z$ is zero-dimensional, and a factor map $\psi:\mathcal{G}\rightarrow \mathcal{E}$ such that $T\circ \psi=\psi\circ S$.
By Case 1 and Lemma \ref{lemma3.1}, there exists $\nu\in \mathcal{I}_P(\mathcal{G})$ such that $h_{\nu}^{(r)+}(S,\psi^{-1}(\mathcal{U}))\geq h_{\text{top}}(S,\psi^{-1}(\mathcal{U}))=h_{\text{top}}(T,\mathcal{U})$. This means that for each partition $\mathcal{R}$ of $\mathcal{G}$ finer than $\psi^{-1}(\mathcal{U})$, $h_{\nu}^{(r)}(S,\mathcal{R})\geq h_{\text{top}}(T,\mathcal{U})$. Let $\mu=\psi \nu$; then $\mu \in \mathcal{I}_P(\mathcal{E})$ \cite{Liupeidong2005}. Note that for any partition $\mathcal{Q}$ of $\mathcal{E}$ finer than $\mathcal{U}$, $\psi^{-1}(\mathcal{Q})$ is also a partition of $\mathcal{G}$ finer than $\psi^{-1}(\mathcal{U})$. By Lemma \ref{lemma3.1} we have $h_{\mu}^{(r)}(T,\mathcal{Q})=h_{\nu}^{(r)}(S,\psi^{-1}(\mathcal{Q}))\geq h_{\text{top}}(T,\mathcal{U})$. Hence $h_{\mu}^{(r)+}(T,\mathcal{U})\geq h_{\text{top}}(T,\mathcal{U})$ and we complete the proof.
\end{proof}
\section{A local variational principle for $h_{\mu}^{(r)-}$}\label{section4}
In this section, we will prove a local variational principle for a certain class of covers of $\Omega\times X$, i.e. for $\mathcal{U}\in \mathcal{C}_{\Omega\times X}^{o'}$. We first give some relations between the two kinds of random measure-theoretical entropy for covers, then use these results and Theorem \ref{theorem3.6} to prove this local variational principle.
\begin{lemma}
\label{lemma4.1}
Let $T$ be a homeomorphic bundle RDS on $\mathcal{E}$ over $\vartheta$ and $\mu\in\mathcal{I}_P(\mathcal{E}) $. Then for $\mathcal{U}\in \mathcal{C}_{\mathcal{E}}$, we have
\renewcommand{\theenumi}{(\arabic{enumi})}
\begin{enumerate}
\item $h_{\mu}^{(r)-}(T,\mathcal{U})\leq h_{\mu}^{(r)+}(T,\mathcal{U})$;\label{411}
\item $h_{\mu}^{(r)-}(T,\mathcal{U})=(1/M)h_{\mu}^{(r)-}(T^M,\mathcal{U}_0^{M-1})$ for each $M\in \mathbb{N}$;\label{412}
\item $h_{\mu}^{(r)-}(T,\mathcal{U})=\lim_{n\rightarrow \infty}(1/n)h_{\mu}^{(r)+}(T^n,\mathcal{U}_0^{n-1})$.\label{413}
\end{enumerate}
\end{lemma}
\begin{proof}
Parts \ref{411} and \ref{412} are obvious. We only prove part \ref{413}. By parts \ref{411} and \ref{412}, for any $M\in \mathbb{N}$, we have
\begin{align*}
h_{\mu}^{(r)-}(T,\mathcal{U})&=\frac{1}{M}h_{\mu}^{(r)-}(T^M,\mathcal{U}_0^{M-1})\\
&\leq \frac{1}{M}h_{\mu}^{(r)+}(T^M,\mathcal{U}_0^{M-1})\leq \frac{1}{M}H_{\mu}(\mathcal{U}_0^{M-1}\mid \mathcal{F}_{\mathcal{E}}).
\end{align*}
Taking the limit as $M\rightarrow +\infty$ and noting that $\frac{1}{M}H_{\mu}(\mathcal{U}_0^{M-1}\mid \mathcal{F}_{\mathcal{E}})$ converges to $h_{\mu}^{(r)-}(T,\mathcal{U})$, we complete the argument.
\end{proof}
For each $k\in \mathbb{N}$, let $\mathcal{I}_P(T^k,\mathcal{E})=\{\mu\in \mathcal{P}_P(\mathcal{E}):\mu \,\,\text{is}\,\, \Theta^k\text{-invariant}\}$. $\mathcal{I}_P(T,\mathcal{E})$ is simply written as the usual $\mathcal{I}_P(\mathcal{E})$.
Now we prove Theorem \ref{theorem2.5}.
\begin{proof}[Proof of Theorem \ref{theorem2.5}]
We follow the method applied in \cite{Huang2006} and \cite{Huang2004} for the deterministic system.
Let $\mathcal{U}=\{\Omega\times U_i\}_{i=1}^d\cap\mathcal{E}\in \mathcal{C}_{\mathcal{E}}^{o'}$. Then $A=\{U_i\}_{i=1}^d$ is an open cover of $X$. Define
\begin{equation*}
\mathcal{U}^*=\{\mathcal{R}\in \mathcal{P}_{\mathcal{E}}: \mathcal{R}=\{\Omega\times R_i\}_{i=1}^d\cap\mathcal{E}, R_m\subset U_m \,\, \text{for all}\,\, 1\leq m\leq d \}.
\end{equation*}
Case 1. Assume that $X$ is zero-dimensional. Then the family of partitions finer than $\mathcal{U}$, consisting of sets $\{\Omega\times R_i\}\cap\mathcal{E}$ with each $R_i$ being clopen sets, is countable. Let $\{\mathcal{R}_l:l\geq 1\}$ be the enumeration of this family. It is clear that $\{\mathcal{R}_l\}\subset \mathcal{U}^*$. Moreover, for any $k\in \mathbb{N}$ and $\mu\in\mathcal{I}_P(\mathcal{E})$,
\begin{equation}\label{eq4.1}
h_{\mu}^{(r)+}(T^k,\mathcal{U}_0^{k-1})=\inf_{s_k\in \mathbb{N}^k}h_{\mu}^{(r)}(T^k,\bigvee_{i=0}^{k-1}(\Theta^i)^{-1}\mathcal{R}_{s_k(i)}).
\end{equation}
For any $k\in \mathbb{N}$ and $s_k\in \mathbb{N}^k$, denote
\begin{align*}
M(k,s_k)=\big\{\mu \in \mathcal{I}_P(\mathcal{E}): &\frac{1}{k}h_{\mu}^{(r)} (T^k,\bigvee_{i=0}^{k-1}(\Theta^i)^{-1}\mathcal{R}_{s_k(i)})\\
&\geq \frac{1}{k}h_{\text{top}}(T^k,\mathcal{U}_0^{k-1})=h_{\text{top}}(T,\mathcal{U}) \big\}.
\end{align*}
By Theorem \ref{theorem3.6} we know that there exists $\mu_k\in \mathcal{I}_P(T^k,\mathcal{E})$ such that
\begin{equation*}
h_{\mu_k}^{(r)+}(T^k,\mathcal{U}_0^{k-1})\geq h_{\text{top}}(T^k,\mathcal{U}_0^{k-1}).
\end{equation*}
Since $\bigvee_{i=0}^{k-1}(\Theta^i)^{-1}\mathcal{R}_{s_k(i)}$ is finer than $\mathcal{U}_0^{k-1}$ for each $s_k\in \mathbb{N}^k$, one has
\begin{equation}\label{ineq4.2}
h_{\mu_k}^{(r)}(T^k,\bigvee_{i=0}^{k-1}(\Theta^i)^{-1}\mathcal{R}_{s_k(i)})\geq h_{\text{top}}(T^k,\mathcal{U}_0^{k-1}).
\end{equation}
Let
$$\nu_k=\frac{1}{k}\sum_{i=0}^{k-1}\Theta^i\mu_k.$$
As for each $0\leq i\leq k-1$, $\Theta^i\mu_k\in \mathcal{I}_P(T^k,\mathcal{E})$, one has $\nu_k\in \mathcal{I}_P(\mathcal{E})$. For $s_k\in \mathbb{N}^k$ and $1\leq j\leq k-1$, denote
\begin{equation*}
P^js_k=\underbrace{s_k(k-j)s_k(k-j+1)\dots s_k(k-1)}_j\underbrace{s_k(0)s_k(1)\dots s_k(k-j-1)}_{k-j}\in \mathbb{N}^k
\end{equation*}
and $P^0s_k=s_k$. It is easy to see that for $0\leq j\leq k-1$,
\begin{align*}
h_{\Theta^j\mu_k}^{(r)}(T^k, \bigvee_{i=0}^{k-1}(\Theta^i)^{-1}\mathcal{R}_{s_k(i)})
&=h_{\mu_k}^{(r)}(T^k,\bigvee_{i=0}^{k-1}(\Theta^i)^{-1}\mathcal{R}_{P^js_k(i)})\\
&\geq h_{\text{top}}(T^k,\mathcal{U}_0^{k-1}) \quad(\text{by inequality \eqref{ineq4.2}}).
\end{align*}
Moreover by Lemma \ref{lemma3.3} part (2) for each $s_k\in \mathbb{N}^k$,
\begin{align*}
h_{\nu_k}^{(r)}(T^k, \bigvee_{i=0}^{k-1}(\Theta^i)^{-1}\mathcal{R}_{s_k(i)})
&=\frac{1}{k}\sum_{j=0}^{k-1}h_{\Theta^j\mu_k}^{(r)}(T^k, \bigvee_{i=0}^{k-1}(\Theta^i)^{-1}\mathcal{R}_{s_k(i)})\\
&\geq h_{\text{top}}(T^k,\mathcal{U}_0^{k-1}).
\end{align*}
This means that $\nu_k\in \bigcap_{s_k\in \mathbb{N}^k}M(k,s_k)$. Let $M(k)=\bigcap_{s_k\in \mathbb{N}^k}M(k,s_k)$. Then $M(k)$ is a non-empty subset of $\mathcal{I}_P(\mathcal{E})$.
By Lemma \ref{lemma3.4} part (b), for each $s_k\in \mathbb{N}^k$ the map $\mu\rightarrow h_{\mu}^{(r)}(T^k, \bigvee_{i=0}^{k-1}(\Theta^i)^{-1}\mathcal{R}_{s_k(i)})$ is a u.s.c. function from $\mathcal{I}_P(T^k,\mathcal{E})$ to $\mathbb{R}$. Since $\mathcal{I}_P(\mathcal{E})\subset \mathcal{I}_P(T^k,\mathcal{E})$, the map $\mu\rightarrow h_{\mu}^{(r)}(T^k, \bigvee_{i=0}^{k-1}(\Theta^i)^{-1}\mathcal{R}_{s_k(i)})$ is also u.s.c. on $\mathcal{I}_P(\mathcal{E})$. Therefore, $M(k,s_k)$ is closed in $\mathcal{I}_P(\mathcal{E})$ for each $s_k\in \mathbb{N}^k$. Thus $M(k)$ is a non-empty closed subset of $\mathcal{I}_P(\mathcal{E})$.
Now we show that if $k_1, k_2\in \mathbb{N}$ with $k_1\mid k_2$ then $M(k_2)\subset M(k_1)$. Let $\mu\in M(k_2)$ and $k=k_2/k_1$. For any $s_{k_1}\in \mathbb{N}^{k_1}$, let $s_{k_2}=s_{k_1}\dots s_{k_1}\in \mathbb{N}^{k_2}$. Then
\begin{align*}
h_{\text{top}}(T,\mathcal{U})
&\leq \frac{1}{k_2}h_{\mu}^{(r)}(T^{k_2},\bigvee_{i=0}^{k_2-1}(\Theta^i)^{-1}\mathcal{R}_{s_{k_2}(i)})\\
&=\frac{1}{k_1}\big[\frac{1}{k}h_{\mu}^{(r)}( T^{kk_1}, \bigvee_{j=0}^{k-1}(\Theta^{jk_1})^{-1}\bigvee_{i=0}^{k_1-1}(\Theta^i)^{-1} \mathcal{R}_{s_{k_1}(i)} ) \big]\\
&=\frac{1}{k_1}h_{\mu}^{(r)}(T^{k_1},\bigvee_{i=0}^{k_1-1}(\Theta^i)^{-1}\mathcal{R}_{s_{k_1}(i)}).
\end{align*}
Hence $\mu \in M(k_1,s_{k_1})$. Then $\mu \in M(k_1)$.
Since $M(k_1)\cap M(k_2)\supset M(k_1k_2)\neq \emptyset$ for each $k_1, k_2\in \mathbb{N}$, the family $\{M(k)\}_{k\in \mathbb{N}}$ of closed subsets of the compact space $\mathcal{I}_P(\mathcal{E})$ has the finite intersection property, and hence $\bigcap_{k\in\mathbb{N}}M(k)\neq \emptyset$.
Take $\mu \in \bigcap_{k\in\mathbb{N}}M(k) $. For any $k\in \mathbb{N}$, by equation \eqref{eq4.1} one has
\begin{equation*}
\frac{1}{k}h_{\mu}^{(r)+}(T^k,\mathcal{U}_0^{k-1})=\inf_{s_k\in \mathbb{N}^k}\frac{1}{k}h_{\mu}^{(r)}(T^k, \bigvee_{i=0}^{k-1}(\Theta^i)^{-1}\mathcal{R}_{s_k(i)})\geq h_{\text{top}}(T,\mathcal{U}).
\end{equation*}
Moreover, by Lemma \ref{lemma4.1} part \ref{413}, one gets that
\begin{equation*}
h_{\mu}^{(r)-}(T,\mathcal{U})=\lim_{k\rightarrow +\infty}\frac{1}{k}h_{\mu}^{(r)+}(T^k, \mathcal{U}_0^{k-1})\geq h_{\text{top}}(T,\mathcal{U}).
\end{equation*}
Following Lemma \ref{lemma3.3} part (2), we have $h_{\mu}^{(r)-}(T,\mathcal{U})=h_{\text{top}}(T,\mathcal{U})$. This ends the proof of Case 1.
Case 2. This is the general case.
By Lemma \ref{lem2}, there exists a homeomorphic bundle RDS $S$ on $\mathcal{G}\subset \Omega\times Z$ over $\vartheta$, where $Z$ is zero-dimensional, and a factor map $\psi: \mathcal{G}\rightarrow \mathcal{E}$. Let $\mathcal{V}=\psi^{-1}\mathcal{U}$. By Lemma \ref{lemma3.1} part (1), one has $h_{\text{top}}(S,\mathcal{V})=h_{\text{top}}(T,\mathcal{U})$. By Case 1, there exists $\nu\in \mathcal{I}_P(\mathcal{G})$ such that $h_{\nu}^{(r)-}(S,\mathcal{V})=h_{\text{top}}(S,\mathcal{V})$. Let $\mu=\psi\nu$; then $\mu\in \mathcal{I}_P(\mathcal{E})$. Note that if $N\in\mathbb{N}$ and $\mathcal{R}\in \mathcal{P}_{\mathcal{E}}$ is finer than $\mathcal{U}_0^{N-1}$, then $\psi^{-1}(\mathcal{R})\in \mathcal{P}_{\mathcal{G}}$ is finer than $\mathcal{V}_0^{N-1}$. Thus
$H_{\mu}(\mathcal{R}\mid \mathcal{F}_{\mathcal{E}})=H_{\nu}(\psi^{-1}\mathcal{R}\mid\psi^{-1}\mathcal{F}_{\mathcal{E}})
=H_{\nu}(\psi^{-1}\mathcal{R}\mid \mathcal{F}_{\mathcal{G}} )\geq H_{\nu}(\mathcal{V}_0^{N-1}\mid \mathcal{F}_{\mathcal{G}})$. Hence $ H_{\mu}(\mathcal{U}_0^{N-1}\mid \mathcal{F}_{\mathcal{E}})\geq H_{\nu}(\mathcal{V}_0^{N-1}\mid \mathcal{F}_{\mathcal{G}})$ for each $N\in \mathbb{N}$. Thus we get
\begin{align*}
h_{\mu}^{(r)-}(T,\mathcal{U})&=\lim_{N\rightarrow +\infty}\frac{1}{N}H_{\mu}(\mathcal{U}_0^{N-1}\mid \mathcal{F}_{\mathcal{E}})\geq \lim_{N\rightarrow +\infty}\frac{1}{N}H_{\nu}(\mathcal{V}_0^{N-1}\mid \mathcal{F}_{\mathcal{G}})\\
&=h_{\nu}^{(r)-}(S,\mathcal{V})=h_{\text{top}}(S,\mathcal{V})=h_{\text{top}}(T,\mathcal{U})\geq h_{\mu}^{(r)-}(T,\mathcal{U}).
\end{align*}
This completes the proof of the theorem.
\end{proof}
\section{Introduction}
The post-Newtonian (PN) approximation
(see, e.g.,~\cite{Blanchet:2013haa,Sasaki:2003xr} for a review)
is widely used not only to derive gravitational waveforms
for binary inspirals, but also to prepare initial parameters
for numerical relativity (NR) simulations
(see, e.g.,~\cite{Baker:2002qf,Husa:2007rh,Campanelli:2008nk}
for quasicircular initial parameters of binary black hole
(BBH) simulations).
In the PN approximation, we assume the slow-motion/weak-field.
To treat the PN approximation,
it is important to know where the above assumptions are appropriate.
In the case of quasicircular binaries,
the region of validity of the PN approximation is expressed,
for example, by the orbital radius.
Using the PN energy flux for extreme mass ratio inspirals (EMRIs)
in the black hole (BH) perturbation approach,
the region of validity of the PN approximation has been discussed
in~\cite{Poisson:1995vs,Yunes:2008tw,Zhang:2011vha}
(see also~\cite{Mino:1997bx}).
In this approach, an EMRI is described by a point particle with mass $\mu$
orbiting around a BH with mass $M$ under the assumption $\mu \ll M$,
and it is possible to calculate the energy flux numerically
and analytically up to very high PN orders.
In our previous paper~\cite{Sago:2016xsp},
we treated the numerical energy flux derived
in~\cite{Fujita:2004rb,Fujita:2009uz}
and the PN flux in~\cite{Fujita:2014eta}
(see also~\cite{Fujita:2011zk,Fujita:2012cm,Sago:2015rpa})
for prograde circular orbits in the Kerr spacetime,
and analyzed the region of validity.
Although there are some large region of validity for some PN order
(see, e.g., the top panel of Figure 1 in~\cite{Sago:2016xsp}
where the 3PN order ($N=6$) has a larger region of validity
than that of the 3.5PN order ($N=7$)),
we have found that higher PN orders give a larger region of validity
as expected.
In our previous paper and in this note, the region of validity is discussed only for the gravitational energy flux, i.e., the time-averaged dissipative piece of the first-order gravitational self-force (GSF); we do not consider the conservative piece of the first-order GSF in this note.
Here, an important circular orbit in the equatorial plane
is the innermost stable circular orbit (ISCO).
The ISCO radius in Boyer-Lindquist coordinates is given as~\cite{Bardeen:1972fi}
\begin{eqnarray}
\frac{r_{\rm ISCO}}{M} &=& 3 + Z_2 \mp \sqrt{(3 - Z_1) (3 + Z_1 + 2 Z_2)} \,,
\label{eq:ISCO_radius}
\end{eqnarray}
where
\begin{eqnarray}
Z_1 &=& 1 + (1 - (a/M)^2)^{1/3} [(1 + a/M)^{1/3} + (1 - a/M)^{1/3} ] \,,
\cr
Z_2 &=& \sqrt{3 (a/M)^2 + Z_1^2} \,,
\end{eqnarray}
and the upper and lower signs refer to the prograde (direct)
and retrograde orbits (with the Kerr parameter $0 \leq a \leq M$), respectively.
Given the orbital radius $r$, the orbital velocity is calculated by
\begin{eqnarray}
v = (M\Omega)^{1/3} \,; \quad
M \Omega = \frac{M^{3/2}}{r^{3/2}+aM^{1/2}} \,.
\label{eq:v_r}
\end{eqnarray}
In Table~\ref{tab:ISCO}, the ISCO radius, frequency and velocity
are summarized for various $a/M$.
\begin{table}[!ht]
\caption{The ISCO radius ($r_{\rm ISCO}$),
frequency ($\Omega_{\rm ISCO}$) and velocity ($v_{\rm ISCO}$)
for various Kerr spin parameters ($a$).}
\label{tab:ISCO}
\begin{center}
\begin{tabular}{|cccc|}
\hline
$a/M$ & $r_{\rm ISCO}/M$ & $M\Omega_{\rm ISCO}$ & $v_{\rm ISCO}$ \\
\hline
1.0 (retrograde) & 9.000000000 & 0.03571428571 & 0.3293168780
\\
0.9 (retrograde) & 8.717352279 & 0.03754018063 & 0.3348359801
\\
0.5 (retrograde) & 7.554584713 & 0.04702732522 & 0.3609525320
\\
0.0 & 6.000000000 & 0.06804138173 & 0.4082482904
\\
0.5 (prograde) & 4.233002531 & 0.1085883589 & 0.4770835292
\\
0.9 (prograde) & 2.320883043 & 0.2254417086 & 0.6086179484
\\
1.0 (prograde) & 1.000000000 & 0.5000000000 & 0.7937005260
\\
\hline
\end{tabular}
\end{center}
\end{table}
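As a practical cross-check, the entries of Table~\ref{tab:ISCO} can be reproduced by evaluating \eqref{eq:ISCO_radius} and \eqref{eq:v_r} directly. The short Python sketch below (in units $G=c=M=1$) is only an illustration and is not part of the original computation; note that the frequency and the velocity are evaluated from \eqref{eq:v_r} as written, with the positive spin parameter $a$, at both the prograde and retrograde ISCO radii.
\begin{verbatim}
# Illustrative sketch: reproduce Table 1 (units G = c = M = 1).
def isco_radius(a, prograde=True):
    # ISCO radius in Boyer-Lindquist coordinates.
    z1 = 1.0 + (1.0 - a**2)**(1.0/3.0) * ((1.0 + a)**(1.0/3.0)
                                          + (1.0 - a)**(1.0/3.0))
    z2 = (3.0*a**2 + z1**2)**0.5
    sign = -1.0 if prograde else 1.0
    return 3.0 + z2 + sign*((3.0 - z1)*(3.0 + z1 + 2.0*z2))**0.5

def frequency_and_velocity(a, r):
    # Orbital frequency M*Omega = 1/(r^{3/2} + a) and velocity v = (M*Omega)^{1/3}.
    omega = 1.0/(r**1.5 + a)
    return omega, omega**(1.0/3.0)

for a, prograde in [(1.0, False), (0.9, False), (0.5, False), (0.0, True),
                    (0.5, True), (0.9, True), (1.0, True)]:
    r = isco_radius(a, prograde)
    om, v = frequency_and_velocity(a, r)
    label = "prograde" if prograde else "retrograde"
    print(f"a = {a:3.1f} ({label:10s})  r = {r:.9f}  "
          f"M*Omega = {om:.10f}  v = {v:.10f}")
\end{verbatim}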
For the retrograde ISCOs in Table~\ref{tab:ISCO},
we see larger ISCO radii, smaller ISCO frequencies and velocities
than those for the prograde ISCOs.
As a naive expectation,
the region of validity of the PN approximation for the retrograde case
will be smaller than that for the prograde case.
Despite this spin dependence, we usually require
the same frequency for various configurations
to produce hybrid PN-NR waveforms~\cite{Ajith:2012az,Aasi:2014tra}.
Also, we need to search for gravitational waves (GWs) from binary systems
with various spin directions,
since the preferred spin direction is not known in advance.
Interestingly, a recent GW event, GW170104
suggests that both spins aligned with the orbital angular momentum
of the BBH system are disfavored~\cite{Abbott:2017vtc}.
In our previous paper~\cite{Sago:2016xsp}, we only discussed several
cases of prograde orbits and did not discuss retrograde orbits since
we expected that the region of validity in the PN approximation
for retrograde orbits would behave similarly to that for prograde ones.
Motivated by the observation of GW170104,
however, we reanalyze the region of validity in the PN approximation
both for prograde and retrograde orbits more carefully
and find complicated behaviors in the region of validity which have not been
noticed in our previous paper (see, e.g., Fig.~\ref{fig:N_vs_AR_summary}).
We would expect that additional results might be of interest to
people working on numerical relativity simulations,
and would like to summarize these results in this note.
The note is organized as follows. In Section~\ref{sec:EAR},
we review the definition of the edge of allowable region (AR) briefly
which was introduced in our previous paper~\cite{Sago:2016xsp}.
In Section~\ref{sec:results},
we show the edge of the allowable region from the eccentricity estimation
for retrograde circular orbits in the equatorial plane.
Finally, we summarize and discuss
the results presented in our previous paper and this note in Section~\ref{sec:dis}.
In this note, we use the geometric unit system, where $G=c=1$.
\section{Edges of the allowable region}\label{sec:EAR}
To discuss the region of validity of the PN approximation,
we use the gravitational energy flux $F_{g}$
for a point particle orbiting in circular orbits around a BH.
The basic equations in the BH perturbation approach
are the Regge-Wheeler-Zerilli~\cite{Regge:1957td,Zerilli:1971wd}
and Teukolsky~\cite{Teukolsky:1973ha} equations.
The energy flux is computed in two ways:
one is the numerical approach which gives the ``exact'' value
in the numerical accuracy, and the other is the analytical approach
where we employ the PN approximation.
In the analytical approach,
the gravitational energy flux is written as
\begin{eqnarray}
\frac{dE^{(N)}}{dt} = - F_{\rm Newt} F^{(N)} \,,
\label{eq:EF}
\end{eqnarray}
where $F_{\rm Newt}$ is the Newtonian flux,
\begin{eqnarray}
F_{\rm Newt} = \frac{32}{5} \left(\frac{\mu}{M}\right)^2 v^{10} \,,
\end{eqnarray}
and $F^{(N)}$ is the ($N/2$)PN result
normalized by the Newtonian flux.
$v$ is the circular orbital velocity.
We denote the exact value obtained in the numerical approach
as $F$.
In our previous paper~\cite{Sago:2016xsp},
we followed the analyses presented in~\cite{Yunes:2008tw}
(based on~\cite{BenderOrszag})
and obtained consistent results with~\cite{Yunes:2008tw,Zhang:2011vha}.
As an alternative method, which is the simplest and a practical
way to assess the region of validity,
we have considered an allowance $\delta$ for the difference
between the numerical and analytical results as
\begin{eqnarray}
|F-F^{(N)}| < \delta \,.
\end{eqnarray}
Here, when we discuss the quasicircular evolution,
the deviation from the exact energy flux induces an ``artificial'' orbital eccentricity $e$, which quantifies the deviation from the ``exact'' quasicircular evolution.
Assuming an eccentric orbit with semimajor axis $r_0$,
\begin{eqnarray}
r(t) \sim r_{0}\left[ 1-e \cos \left(\frac{v t}{r_{0}}\right) \right] \,.
\end{eqnarray}
where, as shown in~\cite{Sago:2016xsp}, the Newtonian orbit is sufficient
to estimate the eccentricity,
the orbital eccentricity is expressed as
\begin{eqnarray}
e &\equiv& \left(\frac{dE}{dr}\right)^{-1}
\frac{1}{v}\, F_{\rm Newt}\, |F-F^{(N)}|
\,,
\label{eq:ee}
\end{eqnarray}
where $E$ is the orbital energy of the particle,
\begin{eqnarray}
\frac{E}{\mu} &=&
{\frac {{r}^{3/2}-2\,M\sqrt {r}+a\sqrt {M}}{{r}^{3/4} \left( {r}^{3/2}
-3\,M\sqrt {r}+2\,a\sqrt {M} \right)^{1/2} }} \,.
\end{eqnarray}
The relation between the orbital radius and velocity is given
by~\eqref{eq:v_r}.
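For a given flux error $|F-F^{(N)}|$, the eccentricity estimate \eqref{eq:ee} is straightforward to evaluate. The following sketch (in units $G=c=M=1$) illustrates the evaluation; the mass ratio and the flux error used there are placeholder values, not the actual differences between the numerical and PN fluxes discussed below, and $dE/dr$ is obtained from the expression for $E/\mu$ above by a simple finite difference.
\begin{verbatim}
# Illustrative evaluation of the eccentricity estimate (units G = c = M = 1).
# The mass ratio mu/M and the flux error |F - F^(N)| below are placeholder
# values, not results of the actual flux computations.
def energy_per_mu(r, a):
    # E/mu for a circular equatorial orbit.
    return (r**1.5 - 2.0*r**0.5 + a) / (r**0.75*(r**1.5 - 3.0*r**0.5 + 2.0*a)**0.5)

def eccentricity(r, a, mass_ratio, flux_error, dr=1.0e-6):
    v = (1.0/(r**1.5 + a))**(1.0/3.0)            # orbital velocity
    f_newt = (32.0/5.0)*mass_ratio**2*v**10      # Newtonian flux
    # dE/dr, with E = mu * (E/mu), by a central finite difference.
    dEdr = mass_ratio*(energy_per_mu(r + dr, a) - energy_per_mu(r - dr, a))/(2.0*dr)
    return f_newt*flux_error/(dEdr*v)

# Example: r = 10 M, a = 0.9 M, mu/M = 1e-3, |F - F^(N)| = 1e-4 (placeholders).
print(eccentricity(10.0, 0.9, 1.0e-3, 1.0e-4))
\end{verbatim}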
In NR simulations of BBH~\cite{Pretorius:2005gq, Campanelli:2005dd, Baker:2005vv},
an iterative procedure
to obtain low-eccentricity initial orbital parameters
is used~\cite{Boyle:2007ft,Pfeiffer:2007yz,Buonanno:2010yk,
Purrer:2012wy,Buchman:2012dw}.
According to~\cite{Buchman:2012dw},
we need a $t = O(1000M)$ numerical evolution
(where $M$ denotes the total mass for comparable mass binaries)
to reduce the eccentricity by about a factor of 10.
For example,
we find the eccentricity in the initial parameters
for NR simulations of GW150914~\cite{Abbott:2016blz,TheLIGOScientific:2016qqj,
TheLIGOScientific:2016wfe,TheLIGOScientific:2016uux} is
$\sim 0.0012$ (RIT) and $0.0008$ (SXS)~\cite{Lovelace:2016uwp}
(see also BBH simulation
catalogs~\cite{Mroue:2013xna,Jani:2016wkt,Abbott:2016apu,Healy:2017psd}
and references therein).
Since quasicircular initial parameters are an important input
for NR simulations~\cite{Healy:2017zqj},
we treat the edge of the allowable region by using the eccentricity
to discuss the region of validity of the PN approximation.
\section{Results}\label{sec:results}
We set a restriction on $e$ from the error in the energy flux as
\begin{eqnarray}
e \leq 1 \times 10^{-5} \,,
\label{eq:eerest}
\end{eqnarray}
where this reference value is adopted from
the lowest eccentricity in NR simulations presented in~\cite{Ajith:2012az}.
Combining~\eqref{eq:ee} with~\eqref{eq:eerest},
the edge of the allowable region is presented
in terms of the orbital velocity (top), $v^{(N)}$, and radius (bottom)
for retrograde circular orbits
in Kerr ($q=a/M=0,\,-0.1,\,-0.3,\,-0.5,\,-0.9$) cases
in Figure~\ref{fig:N_vs_AR_r}.
Here, the (normalized) numerical energy flux $F$
and the (normalized) PN flux $F^{(N)}$ in~\eqref{eq:ee}
are calculated by~\cite{Fujita:2004rb,Fujita:2009uz}
and~\cite{Fujita:2014eta}, respectively.
\begin{figure}[!ht]
\center
\includegraphics[width=0.8\textwidth]{N_vs_v_AR_retro.pdf}
\\
\includegraphics[width=0.8\textwidth]{N_vs_r_AR_retro.pdf}
\caption{
The edge of the allowable region from the eccentricity estimation
in the case of $e \leq 1 \times 10^{-5}$
in terms of the orbital velocity (top) and radius (bottom).}
\label{fig:N_vs_AR_r}
\end{figure}
At 3.5PN order ($N=7$), we see a clear tendency for
the allowable region
to decrease when $q$ becomes more negative.
Also, at large $N$,
the allowable region
for large negative $q$ values
has a similar behavior to the ISCO given in~\eqref{eq:ISCO_radius}
(see also Table~\ref{tab:ISCO} and Figure~\ref{fig:N_vs_AR_summary}),
but it is not so clear.
\begin{figure}[!t]
\center
\includegraphics[width=0.8\textwidth,clip=true]{v_AR_vs_q.pdf}
\\
\includegraphics[width=0.8\textwidth,clip=true]{r_AR_vs_q.pdf}
\caption{
Edges of the allowable region for the orbital velocity $v_{\rm AR}$
(top) and radius $r_{\rm AR}$ (bottom)
for all range of the non-dimensional
Kerr parameter, $q=a/M$.
Here, $q>0$ and $q<0$ are for prograde and retrograde orbits, respectively.}
\label{fig:N_vs_AR_summary}
\end{figure}
\section{Summary and Discussions}\label{sec:dis}
In this note, we have used EMRIs for the analysis.
For the current ground-based GW detectors,
Advanced LIGO (aLIGO)~\cite{TheLIGOScientific:2014jea},
Advanced Virgo (AdV)~\cite{TheVirgo:2014hva},
KAGRA~\cite{Somiya:2011np,Aso:2013eba},
comparable mass ratio binaries are the target.
To extrapolate the result in this note to these binaries
where individual objects have mass $m_1$ and $m_2$,
and nondimensional spin $\chi_1$ and $\chi_2$,
it may be possible to consider $M$, $\mu$ and $q=a/M$ as
the total mass ($m_1+m_2$), the reduced mass ($m_1m_2/(m_1+m_2)$) of the system,
and an effective spin
$\chi_{\rm eff}=(m_1\chi_1+m_2\chi_2)/(m_1+m_2)$~\cite{Ajith:2009bn}
(see also~\cite{Damour:2001tu}), respectively.
Figure~\ref{fig:N_vs_AR_summary}
shows the edges of the allowable region for the orbital velocity $v_{\rm AR}$
and radius $r_{\rm AR}$
from the eccentricity estimation (\eqref{eq:ee} with~\eqref{eq:eerest})
for all range of Kerr parameter.
The ISCO values given in Table~\ref{tab:ISCO} are also shown.
For example, it is found at 2.5PN and 4PN order ($N=5$ and $8$, respectively)
that the orbital radius at the edge of the allowable region
for large negative values of $q$ tends to be larger and
has a similar $q$ dependence to the ISCO radius.
This suggests that when we start numerical relativity simulations
of binary systems with the 2.5PN and 4PN initial orbital parameters,
we need to consider a larger initial orbital separation
for large negative $q$ cases.
On the other hand, at 3.5PN order ($N=7$)
the allowable region basically decreases for large $q$.
This may be a possible reason why
the initial parameters given by Ref.~\cite{Healy:2017zqj}
for nearly maximally spinning BBH simulations~\cite{Zlochower:2017bbg}
do not work well.
It is noted in Figure~\ref{fig:N_vs_AR_summary} that
the discontinuous changes with respect to $q$
arise from different sequences of solutions
of the equality \eqref{eq:ee} with $e=1 \times 10^{-5}$.
The top panel of Figure~\ref{fig:fe_r_N4} shows complicated behaviors
of the right hand side of \eqref{eq:ee} for $N=7$ and various $q$
as a function of the orbital radius.
From this figure, we can find that
there are several solutions in the orbital radius for $q=0.20$ and $0.14$
when we set $e=1 \times 10^{-5}$,
while there is a single solution for $q=0.40$, $0.10$ and $0.05$.
In the case that \eqref{eq:ee} has several solutions, we choose the largest
solution in the orbital radius as the edge of the allowable region,
which is larger than the one
that gives the local maximum of the right hand side of \eqref{eq:ee}.
Then the solutions vary smoothly from $q=0.40$ to $q=0.14$.
The discontinuous change of solutions in the orbital radius
with respect to $q$ might appear between $q=0.14$ and $0.10$
when we set $e=1 \times 10^{-5}$.
This is because the local maximum of the right hand side of \eqref{eq:ee}
for $q=0.10$ becomes smaller than $1 \times 10^{-5}$
and the solution in the orbital radius for $q=0.10$ becomes smaller than
the one that gives the local minimum of the right hand side of \eqref{eq:ee}.
The locations of discontinuities depend on the restriction on $e$.
Contrarily, we see simple behaviors for $N=8$ in the bottom panel
of Figure~\ref{fig:fe_r_N4}, and this gives the continuous variation
in Figure~\ref{fig:N_vs_AR_summary}.
Even if we treat higher PN order, say, $N=20$,
it is difficult to estimate a simple $q$-dependence of the edge
of the allowable region.
Therefore, the above conclusion derived from the 2.5PN and 4PN analyses
does not hold in general. We should note that
any local peaks in Figures~\ref{fig:N_vs_AR_r} and~\ref{fig:N_vs_AR_summary}
do not indicate the best PN approximation in a given order
as mentioned in~\cite{Sago:2016xsp}.
We always welcome higher PN results
in order to investigate whether the allowable region becomes larger.
\begin{figure}[!t]
\center
\includegraphics[width=0.8\textwidth,clip=true]{fe_r_N7.pdf}
\\
\includegraphics[width=0.8\textwidth,clip=true]{fe_r_N8.pdf}
\caption{
The right hand side of \eqref{eq:ee}
as a function of the orbital radius
for various non-dimensional Kerr parameters, $q=a/M$
in the case of $N=7$ (top) and $8$ (bottom).}
\label{fig:fe_r_N4}
\end{figure}
Although the allowable region has been discussed
for the gravitational energy flux, i.e.,
the first order dissipative self-force
in our previous paper~\cite{Sago:2016xsp} and this note,
the post-adiabatic effects of the self-force,
i.e., the first order conservative
and the second order dissipative self-forces should be studied.
As for the Schwarzschild background,
the radius of convergence of the PN series
for the first order conservative self-force
has been shown in~\cite{Johnson-McDaniel:2015vva} (see also~\cite{Kavanagh:2015lva}).
For the Kerr case,
it is possible to discuss the radius of convergence
by using an analytic result presented in~\cite{Kavanagh:2016idg}.
Furthermore, the results of~\cite{Bini:2016dvs} can be used
in the case of eccentric orbits around a Kerr BH.
For future work,
we will analyze the allowable region by using the above results
to connect our analysis with comparable-mass-ratio binaries.
\ack
RF's work was funded through H2020 ERC Consolidator Grant
``Matter and strong-field gravity: New frontiers in Einstein's theory''
(MaGRaTh-646597).
This work was also supported by
JSPS Grant-in-Aid for Scientific Research (C), No.~JP16K05356 (NS)
and No.~JP16K05347 (HN),
and MEXT Grant-in-Aid for Scientific Research
on Innovative Areas, ``New developments in astrophysics
through multi-messenger observations of gravitational wave sources'',
No.~JP24103006 (HN).
Some numerical computations were carried out at the Yukawa Institute Computer Facility.
\section*{References}
\section{Introduction}
\label{sec:intro}
The development of potential models to describe the energy spectra of
mesonic and baryonic systems has proved extremely successful.
Phenomenological models that
use a simple relativistic kinetic energy term and a scalar
potential that incorporates the linear confinement and the short-distance
color-Coulomb interaction suggested by QCD
give good descriptions of the observed spectra of both heavy-
and light-quark mesons and baryons \cite{REL,MOI,LDIV,LFI,Isgur}. Moreover,
Duncan, Eichten, and Thacker \cite{TAI} have demonstrated a
nontrivial connection between the relativistic potential models and
rigorous numerical results from lattice
QCD, showing that both the spectrum and the lattice wave functions for
light-quark mesons are reproduced very well
when the lattice potential
is used in the relativistic wave equation
\begin{equation}
\label{eq:salpeter}
\left[\sqrt{p^2+m_1^2} + \sqrt{p^2+m_2^2} + V(r)\right]\psi({\bf r})
=E\psi({\bf r}).
\end{equation}
This equation, the spinless Salpeter equation,
can be derived as a limit of the full
Salpeter equation in which the ``small-small'' components of the Salpeter wave
function are neglected and spin effects are averaged out as discussed, for
example, in \cite{LDIV}. In the expression above
$p$ is the momentum of either quark in the center-of-momentum frame,
$m_1$ and $m_2$ are the quark masses, and $V(r)$ is the effective
potential between the quarks.
We will be concerned here with the description of mesonic systems
described as quark-antiquark bound states $q\bar{Q}$, where the quarks
$q$ and $Q$ may be the same or different.
We assume that these systems can be described by the spinless
Salpeter equation as demonstrated in \cite{TAI}, and will take
$V(r)$ as the linear-plus-Coulomb potential used in much phenomenological
work. This also gives a good approximation to the lattice potential.
For heavy quarks, the kinetic terms in Eq.~(\ref{eq:salpeter}) can be
expanded in inverse powers of the quark mass to obtain
the usual nonrelativistic Schr\"{o}dinger Hamiltonian. This gives successful
descriptions of the $b\bar{b}$ and $c\bar{c}$ states \cite{LSI},
even though the latter
are close to being relativistic. More surprisingly, Martin \cite{AMII,AMI}
showed that a nonrelativistic model based on a power-law potential could be extended to include the
clearly relativistic $s\bar{s}$ states, and was able using that model to
predict successfully the masses of a number of then unmeasured
light-heavy states \cite{AMIII}.
Although numerical methods have been
developed which allow one to treat a relativistic kinetic term as
easily as a nonrelativistic term \cite{LDIV,LDIII,LDI,LFII},
it is important to understand why an ostensibly nonrelativistic
treatment works and allows useful predictions to be made for relativistic
systems as in the work of Martin and others.
In this paper, we explore this problem theoretically, and develop a
nonrelativistic approximation to the Hamiltonian
in Eq.~(\ref{eq:salpeter}) based on an effective-mass expansion
of the kinetic energy terms. We then study
the accuracy of this approximation in reproducing the
energy spectra and wave functions of relativistic $q\bar{Q}$ bound
states by using the corresponding Schr\"{o}dinger equation to fit ``data''
obtained by solving the spinless Salpeter equation.
We find that it is possible to fit the energy spectra such that the low-lying energy
levels agree to within a few MeV for both heavy-heavy ``$c\bar{c}$'' and
light-light ``$s\bar{s}$'' states.
We are then able, using the nonrelativistic
description, to predict the energies of the low-lying $c\bar{s}$
states to within 11 MeV. However, the effective quark mass $M$ found in the
fits is considerably larger than either the input quark mass $m$ or the
natural effective mass $\sqrt{\langle p^2\rangle+m^2}$ expected from
various arguments \cite{JBI,AMIII}. We also obtain quite good fits to the
relativistic $c\bar{c}$ and $s\bar{s}$ spectra, and good absolute predictions
for the $c\bar{s}$ energies, using $M=\sqrt{\langle p^2\rangle+m^2}$.
We also study the wave functions in detail,
and find qualitative agreement between the relativistic and nonrelativistic
functions in the regions in which both are large provided the effective
quark mass is used as a parameter in the fitting procedure. However,
systematic differences are evident, and the
nonrelativistic wave functions can be seriously in error locally, a problem
that can limit the usefulness of the approximate wave functions in
calculations of such quantities as transition matrix elements.
In the next section, we develop the theory of the
nonrelativistic approximation.
We then outline the numerical techniques used to determine the energy
spectra and wave functions, and discuss the
results of the heavy and light fits and the light-heavy predictions in
Sec.\ \ref{sec:numerical}, and
summarize our conclusions in Sec.\ \ref{sec:conclusions}.
\section{Theoretical background}
\label{sec:approx}
Our objective is to approximate the relativistic potential model
defined by the spinless Salpeter equation using a nonrelativistic
Schr\"{o}dinger description. Since only a small number of low-lying
heavy- and light-quark bound states are actually
known experimentally, an approximation will be successful
for practical purposes if it reproduces the wave functions and the energy
spectra for those limited sets of states. We will suppose that the
potential $V(r)$ is known and is kept
fixed.\footnote{Possible differences between the effective potentials
$V(r)$ for the heavy- and light-quark systems are outside our concern.
However, we note that the potential is often varied in making phenomenological
fits to data on different systems, for example, in \cite{LFI}.
This eases the problem
of fitting the heavy- and light-quark systems together.}
This requires
that our approximation for the kinetic energy terms in Eq.~(\ref{eq:salpeter})
be accurate in some average sense for the low-lying states in both the
heavy- and light-quark systems.
Our nonrelativistic approximation to the kinetic terms is suggested by
Martin's \cite{AMI} operator bound
\begin{equation}
\label{eq:opbnd} \sqrt{p^2+m^2} \leq \frac{M}{2} + \frac{p^2}{2M} +
\frac{m^2}{2M},
\end{equation}
valid for an arbitrary mass $M$. The right hand side of this
equation has the form of a nonrelativistic kinetic energy
operator with an effective mass $M$, plus an additive constant that
shifts the total energy.
The equality in Eq.~(\ref{eq:opbnd}) holds in momentum space
at the momentum $p_0=\sqrt{M^2-m^2}$. Alternatively,
the effective mass $M$ is given in terms of the quark mass $m$ and the point
of tangency $p_0$ of the curves defined by the two sides of the inequality by
\begin{equation}
\label{eq:massrel} M^2=m^2+p_0^2
\end{equation}
Because the Martin bound is an operator
relation, the inequality in Eq.~(\ref{eq:opbnd}) holds for expectation values
in single states, and for averages of expectation values over sets of
states. The choice $p_0^2=\langle p^2\rangle$
would put the point of equality in Eq.~(\ref{eq:opbnd}) at the
average value of $p^2$ for the state or set of
states under consideration. We would expect this choice for $p_0^2$ to
yield a reasonably accurate nonrelativistic approximation for the relativistic
kinetic energy, a point noted in different contexts by other authors
\cite{LSI,JBI,AMIII}. More important theoretically, the effective mass
$M=\sqrt{\langle p^2\rangle+m^2}$ minimizes the average value of the right hand
side of Eq.~(\ref{eq:opbnd}), so gives a least upper bound for the average
of the relativistic kinetic energies when the average is calculated using
the actual eigenfunctions for the relativistic problem. Using this value for
$M$ we obtain the relation
\begin{equation}
\label{eq:nrappp2} \sqrt{p^2+m^2} \leq M + \frac{p^2}{2M} -
\frac{\left<p^2\right>}{2M}.
\end{equation}
The physical content of this result can be illustrated through
a direct expansion of the square root operator. The standard expansion
\begin{equation}
\label{eq:standexp} \sqrt{p^2+m^2} = m+\frac{p^2}{2m} -\frac{p^4}{8m^3}
+ \cdots
\end{equation}
in powers of $p^2/m^2$
may be reliable for heavy-quark systems, but fails for light-quark systems.
A possible solution to this problem is to consider an expansion about a fixed
momentum $p_0^2$,
\begin{eqnarray}
\label{eq:expans}
\sqrt{p^2+m^2} &=& \sqrt{p^2-p_0^2+M^2} \nonumber \\
&=& M+\frac{p^2-p_0^2}{2M}-\frac{(p^2-p_0^2)^2}{8M^3}+ \cdots
\end{eqnarray}
where $M=\sqrt{m^2+p_0^2}$. The expansion will give a good
average approximation
to the relativistic kinetic energy provided the relevant values of $p^2$
are concentrated near $p_0^2$ with $\langle (p^2-p_0^2)^2\rangle
\ll M^4$.\footnote{Basdevant and Boukraa \cite{JBI} consider the approximation
obtained by including only the linear term in $p^2-\left<p^2\right>$
in the expansion. A very different approximation
which leads to a smaller effective mass $M'=M/2$ was proposed
in \cite{LSI} and \cite{LSII} and studied in more detail by
Lucha, Sch\"{o}berl, and Moser \cite{LSM}. This approximation was
obtained by manipulating an inequality for matrix elements,
$\langle\sqrt{p^2+m^2}\rangle\leq\sqrt{\langle p^2\rangle+m^2}$, and leads
to an ambiguous result, in contrast to the operator inequality in
Eq.~(\ref{eq:opbnd}). For example, the expression in Eq.~(\ref{eq:nrappp2})
holds as an operator inequality, but amounts to the addition of zero to the
right hand side of the inequality for matrix elements when viewed at
that level. The effective mass $M'$ obtained in \cite{LSI,LSII,LSM} is
substantially too small, and the bound too weak, as will be seen in Sec.\
\ref{subsec:results}.}
The numerator in this ratio has its minimum value for
$p_0^2=\langle p^2\rangle$.
A comparison of Eqs.\ (\ref{eq:opbnd}) and (\ref{eq:expans}) shows that
the net effect of all the terms in Eq.~(\ref{eq:expans}) beyond the simple
nonrelativistic result $M+p^2/2M$ is to decrease the kinetic energy.
Note that the ``relativistic correction'' $-(p^2-p_0^2)^2/
8M^3$ to the kinetic energy operator in Eq.~(\ref{eq:expans})
does not have the standard form
$-p^4/8M^3$, and would be expected to be much smaller in magnitude
for $p_0^2$ close to $\langle p^2\rangle$.
To remove the strict inequality in Eq.~(\ref{eq:opbnd})
in the following discussion, we will allow for an
energy shift $\epsilon'$ that includes the average contribution
of the ``relativistic corrections'' in Eq.~(\ref{eq:expans}), taken as
constant, and will use a nonrelativistic approximation to the relativistic
kinetic energy operator of the form
\begin{equation}
\label{eq:nrapprox}
\sqrt{p^2+m^2}\approx M+\frac{p^2}{2M}+\frac{1}{2}\epsilon
,
\end{equation}
where
\begin{equation}
\label{eq:epsilon}
\epsilon=-\frac{\langle p^2\rangle}{M}+\epsilon'\approx
-\frac{\langle p^2\rangle}{M}-\frac{\langle(p^2-\langle
p^2\rangle)^2\rangle}{4M^3}+\cdots.
\end{equation}
The content of this approximation is best illustrated in momentum space.
In Figure \ref{fig:loceq}, we compare a model relativistic operator with
$m=0.5$ GeV, $\langle p^2\rangle=3.75$ GeV$^2$, and $M=2$ GeV with the nonrelativistic
approximation in Eq.~(\ref{eq:nrappp2}). The curves corresponding to the
relativistic and nonrelativistic expressions are tangent at $p^2=\langle p^2\rangle$.
For all other momenta, the nonrelativistic approximation
lies above the actual relativistic kinetic energy, as expected from the
Martin bound. To improve the agreement between the operators for momenta
away from the point of tangency, we can add
a negative shift $\epsilon'$ to the nonrelativistic
approximation as suggested above and shown in Figure \ref{fig:loceq}.
Because of the negative curvature of the relativistic kinetic energy,
it is also advantageous to increase the value of $M$
relative to $\sqrt{\langle p^2\rangle+m^2}$ to move the point of tangency outward and
reduce the slope of the nonrelativistic curve. This will be seen in our
numerical results. The quality of the resulting approximation is evident
in Fig.~\ref{fig:ccmbnde}, in which we compare the exact and approximate
kinetic energies for the $c\bar{c}$ system
over the region in which the wave function for the
second excited $c\bar{c}$ state is large. The products of the
kinetic energy operators with the squares of the momentum-space
wave functions for the Salpeter and Schr\"{o}dinger equations are
compared in Fig.~\ref{fig:ccpop}. The details and interpretation of
the fit are discussed in Sec.\ \ref{subsec:results}.
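The tangency construction can be checked numerically; the short sketch below (an illustration, not part of the fitting procedure itself) evaluates both sides of Eq.~(\ref{eq:opbnd}) for the model values quoted above, $m=0.5$ GeV and $\langle p^2\rangle=3.75$ GeV$^2$, and confirms that the bound holds for all momenta with equality at $p_0=\sqrt{M^2-m^2}$.
\begin{verbatim}
import numpy as np

m = 0.5                       # quark mass in GeV (model value used above)
p2_mean = 3.75                # <p^2> in GeV^2
M = np.sqrt(p2_mean + m**2)   # effective mass, M = 2 GeV here

p = np.linspace(0.0, 4.0, 401)                    # momentum grid in GeV
relativistic = np.sqrt(p**2 + m**2)
bound = M / 2 + p**2 / (2 * M) + m**2 / (2 * M)   # right side of the Martin bound

# the bound holds for every p, and the curves touch at p0 = sqrt(M^2 - m^2)
assert np.all(bound >= relativistic - 1e-12)
p0 = np.sqrt(M**2 - m**2)
print(p0, np.sqrt(p2_mean))   # identical by construction
\end{verbatim}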
\section{Numerical investigation of the nonrelativistic approximation}
\label{sec:numerical}
In this section, we will explore the accuracy of the nonrelativistic
approximations derived above in the case of the $c\bar{c}$, $s\bar{s}$, and
$c\bar{s}$ systems by comparing the results for the energy spectra and wave
functions obtained by solving the corresponding
Salpeter and Schr\"{o}dinger equations. The relativistic kinetic energy
operators will be approximated as in Eq.~(\ref{eq:nrapprox}), so that,
for example,
\begin{equation}
\label{eq:approxH}
H_c=2\sqrt{p^2+m_c^2}+V(r)\approx 2M_c+\epsilon_c+\frac{p^2}{M_c}+V(r)
\end{equation}
for charmonium.
We will take a standard linear-plus-Coulomb form for $V(r)$,
\begin{equation}
\label{eq:cornpot}
V(r)=Ar-\frac{B}{r},
\end{equation}
with $A=0.203$ GeV$^2$ and $B=0.437$. These values
correspond to the potential parameters used by Fulcher
for fits to the charmonium system \cite{LFI}.
We will concentrate on the $L=0$ states, and will consider
the possibility of varying $M$ as well as that of keeping $M$ fixed
at the value $M=\sqrt{\langle p^2\rangle+m^2}$ determined by a relativistic
calculation. The best values of $M$ and $\epsilon$ in the Schr\"{o}dinger
equation, or of $\epsilon$ alone,
will be determined by making a least squares fit to the relativistic
``data'' calculated using the Salpeter equation.
\subsection{Numerical methods}
\label{sec:numspec}
We have calculated the
relativistic energy spectra and wave functions using
now-standard numerical methods developed elsewhere \cite{LDIII,LDI,LFII}.
We first construct matrix representations for the potential $V(r)$ and
the positive operators $E_i^2=p^2+m_i^2=-\nabla^2+m_i^2$
in a suitable orthonormal basis of angular momentum eigenstates.
The matrix $E_i^2$ can be diagonalized by an orthogonal transformation
$U$, $E_i^2=U\Lambda_i U^{-1}$. The eigenvalues are necessarily positive.
The square-root operator $E_i=\sqrt{p^2+m_i^2}$ is
then defined as $U\Lambda_i^{1/2}U^{-1}$, where $\Lambda_i^{1/2}$ is the
diagonal matrix of the square roots of the eigenvalues \cite{LDIII,LDI}.
With a finite basis,
this construction reduces the solution of the Salpeter equation
to the matrix eigenvalue problem
\begin{equation}
\label{eq:matrix}
\left(E_1+E_2+V-E\right)R_l=0,
\end{equation}
where $R_l$ is the column-vector representation of the radial wave functions
in the given basis for orbital angular momentum $l$.
This equation can be solved by standard methods.
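A minimal sketch of this construction is given below, assuming that the matrices \texttt{p2} and \texttt{V}, representing $p^2$ and $V(r)$ in the chosen orthonormal basis, are already available; the actual calculations reported here follow the same steps in a finite basis.
\begin{verbatim}
import numpy as np

def operator_sqrt(E2):
    """Square root of a positive operator, E = U diag(sqrt(lam)) U^T."""
    lam, U = np.linalg.eigh(E2)      # E2 is real symmetric in this basis
    return U @ np.diag(np.sqrt(lam)) @ U.T

def salpeter_levels(p2, V, m1, m2, n_levels=4):
    """Lowest eigenvalues of E_1 + E_2 + V in a finite orthonormal basis."""
    dim = p2.shape[0]
    E1 = operator_sqrt(p2 + m1**2 * np.eye(dim))
    E2 = operator_sqrt(p2 + m2**2 * np.eye(dim))
    return np.linalg.eigvalsh(E1 + E2 + V)[:n_levels]
\end{verbatim}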
As shown by Fulcher \cite{LFII}, the matrix elements needed in this
construction can be
calculated analytically using basis wave functions
\begin{equation}
\psi_{l,m}^n(\vec{r})=R_l^n(r) Y_{l,m} {\left( \hat{r} \right)}
\label{eq:basis}
\end{equation}
with the angular dependence given by the spherical harmonics $Y_{l,m}$
and the radial wave function $R_l^n(r)$ given by
\begin{equation}
R_l^n(r)=\beta^{3/2} {\left(2 \beta r\right)}^l e^{-2 \beta r}
L^{2l+2}_n\left(2 \beta r\right). \label{eq:basisrad}
\end{equation}
Here $\beta$ is a length scale parameter and $L^{2l+2}_n$ is the
associated Laguerre polynomial \cite{ASI}. This set has been
investigated by several authors \cite{MOI,LFI,LDIII,LDI,LFII,EWI}. We
find that a matrix size of $20\times 20$ is sufficient to produce stable
eigenvalues and wave functions. The same basis functions can be used to
solve the Schr\"{o}dinger equation as a matrix problem.
In various figures which appear later, we will use the function
$u_{n,l}(r)=rR_{n,l}(r)$. The radial probability density for the quarks
is just $|u_{n,l}(r)|^2$.
We will also use the momentum-space wave functions $\phi_{n,l}(p)$
when analyzing our results. These are defined by the Fourier transform
\begin{equation}
\phi_{n,l}(p) = \frac{1}{2 \pi^2} \int_{0}^{\infty}\, dr\,
p\, j_l(pr)\, u_{n,l}(r),
\end{equation}
where $j_l$ is the standard spherical Bessel function.
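As a sketch, this transform can be evaluated by straightforward quadrature on the radial grid; the helper below simply implements the formula as written, with \texttt{u} the tabulated radial function $u_{n,l}(r)$, and is an illustration rather than the routine actually used.
\begin{verbatim}
import numpy as np
from scipy.special import spherical_jn

def momentum_wavefunction(r, u, p_grid, l=0):
    """phi_{n,l}(p) = (1/(2 pi^2)) * int dr p j_l(pr) u_{n,l}(r),
    evaluated by trapezoidal quadrature on the radial grid r."""
    phi = np.zeros(len(p_grid))
    for k, p in enumerate(p_grid):
        integrand = p * spherical_jn(l, p * r) * u
        phi[k] = np.trapz(integrand, r) / (2.0 * np.pi**2)
    return phi
\end{verbatim}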
Finally, to determine the best values of $M$ and $\epsilon$, we minimize the
function
\begin{equation}
\label{eq:fiteq} \sum_{k=1}^{N} {(E_{R, k} - E_{NR, k})}^2
\end{equation}
for the $N$ lowest energy levels, varying $M$ and $\epsilon$ in the
nonrelativistic Schr\"{o}dinger equation with the calculated relativistic
energies $E_{R,k}$ held fixed.
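A sketch of this fit is shown below; \texttt{schrodinger\_levels(M)} stands for a routine (assumed available) that returns the lowest eigenvalues of $2M+p^2/M+V(r)$, and the optimizer choice is an assumption of this illustration.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def fit_effective_mass(E_rel, schrodinger_levels, M0, eps0):
    """Least-squares fit of (M, epsilon) to the Salpeter energies E_rel.

    schrodinger_levels(M) is assumed to return the lowest eigenvalues of
    2M + p^2/M + V(r); the shift epsilon simply adds to every level.
    """
    E_rel = np.asarray(E_rel)

    def chi2(params):
        M, eps = params
        return np.sum((E_rel - (schrodinger_levels(M) + eps)) ** 2)

    result = minimize(chi2, x0=[M0, eps0], method="Nelder-Mead")
    return result.x   # best-fit (M, epsilon)
\end{verbatim}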
\subsection{Results for heavy-quark systems}
\label{subsec:results}
We will use the $c\bar{c}$ system for our study of bound states of two heavy
quarks. We use
the quark mass $m_c=1.320$ GeV and the linear-plus-Coulomb potential
determined by Fulcher \cite{LFII} in his Salpeter-equation fit to the
charmonium spectrum. After calculating the exact Salpeter energy spectrum
for those parameters to obtain our ``data'', we fit the four lowest energy
levels using a sequence of nonrelativistic approximations.
Since it is frequently argued that the
$c\bar{c}$ is almost nonrelativistic, we consider the standard Schr\"{o}dinger
kinetic energy $2m_c+p^2/m_c$ as well as the effective-mass approximation
discussed above. In the latter case, we take $M_c$ either as fixed at the
value $\sqrt{\langle p^2\rangle+m_c^2}$ obtained using the Salpeter
value of $\langle p^2\rangle$ averaged over the states in question, or
allow $M_c$ to vary along with the energy shift $\epsilon_c$. Our results are
given in Table~\ref{tbl:ccespect}.
We see from Table~\ref{tbl:ccespect} that the Schr\"{o}dinger approximation is
rather poor, with deviations of the fitted energies from the exact values
ranging from 57 MeV in the ground state to 173 MeV in the third excited
state. The Schr\"{o}dinger energies are all too high, and increase much
too rapidly for the excited states, with a total change in the deviation
of +116 MeV over the states considered. The failure of the Schr\"{o}dinger
approximation is not surprising given the rather large mean momentum
in the Salpeter $c\bar{c}$ states, $\langle p^2\rangle=1.021$ GeV$^2$, where
\begin{equation}
\label{eq:p2}
\langle p^2\rangle=\frac{1}{4}\sum_{n=1}^4\langle n|p^2|n\rangle.
\end{equation}
This corresponds to a root-mean-square velocity $\langle v^2\rangle^{1/2}
=0.61$ for the quarks, and the system is semirelativistic.
The energies obtained using the
approximation in Eq.~(\ref{eq:nrapprox}) with $M_c=\sqrt{\langle
p^2\rangle+m_c^2}$ are substantially better, with deviations ranging from
-18 MeV for the ground state to +21 MeV in the third excited state. Moreover,
the approximate energies increase less rapidly than those for the
Schr\"{o}dinger approximation, with an excess increase of only 39 MeV
relative to the Salpeter energies over the four states shown.
The overall fit is good. The improvement in the mean
energy is the result of including the energy shift $\epsilon_c$. The flattening
of the deviations is the result of the larger value of the effective mass,
with $M_c=1.662$ GeV rather than the input mass $m_c=1.320$ GeV.
The fitted value of the energy shift, $\epsilon=-669$ MeV,
is close to, and slightly larger in magnitude than, the average kinetic term $-\langle p^2\rangle/M_c=-614$ MeV, as expected from Eq.~(\ref{eq:nrappp2}). The extra shift is associated with the terms omitted in
Eq.~(\ref{eq:epsilon}).
Finally, if we allow $M_c$ to vary along with $\epsilon$ in the fitting
procedure, we obtain an excellent fit to the relativistic spectrum, with
errors less than 3 MeV and a root-mean-squared (rms)
deviation of 2.12 MeV as shown in Table~\ref{tbl:ccespect}. However, $M_c$ is
now quite large, $M_c=1.861$ GeV, while $\epsilon_c=-1.009$ GeV. The large
value of $M_c$ is needed to slow the growth of the nonrelativistic
kinetic energy with increasing $p$, and improve its agreement with the
relativistic kinetic energy as remarked earlier. However, the resulting
effective mass is not directly related to the charm-quark mass $m_c$.
As shown in Fig.~\ref{fig:ccpop}, the variable-$M$ nonrelativistic
approximation leads to a seemingly excellent result for the kinetic-energy
density. However, the relativistic and nonrelativistic wave functions
do not agree precisely even for this fit
as seen either in momentum space in Fig.~\ref{fig:pwf},
or in position space in Fig.~\ref{fig:spacewf}. Some
quantities of interest such as leptonic \cite{vanRoyen} and electromagnetic
transition rates are sensitive to these differences, and the nonrelativistic
model must therefore be used with care.
The increase in the heights of successive peaks in the
nonrelativistic position-space wave function relative to the relativistic
wave function can be understood on the basis of the relativistic WKB
approximation \cite{Cea}.\footnote{Numerical calculations show that the
approximation is rather good in this case.}
In particular, the velocity of a nonrelativistic
particle is larger semiclassically than that of a relativistic particle
in the region near the origin where the color-Coulomb potential is large,
so the particle spends less time in that region and its wave function is
consequently smaller. Correspondingly, its wave function is larger near the
outer turning point.
In Fig.~\ref{fig:varyingM} we show the effect of varying the mass $M_c$ on the
wave function for the second excited state of the $c\bar{c}$ system. The
lower masses shown bracket the input value of $m_c$, while the highest
mass is close to that obtained in the variable-mass fit, $M_c=1.86$ GeV.
It is clear from the figure that the wave functions are quite inaccurate
for the lower masses, and are not especially good even for the large
effective mass $\sqrt{\langle p^2\rangle+m_c^2}=1.66$
GeV, or the mass 1.86 GeV obtained in the variable-mass fit.
The trends in the wave functions discussed above are
also clearly evident.
Finally, in Figs. \ref{fig:uHu} and \ref{fig:uVu} we compare the total energy
densities $u^*Hu$ and the potential energy densities $u^*Vu$ for the second
excited states for the Salpeter equation and the optimal nonrelativistic
approximation with $M_c=1.861$ GeV. The difference between the potential energy
densities results entirely from the difference in the wave functions.
The systematic difference between the wave functions shows up clearly in
Fig.~\ref{fig:uHu}.
\subsection{Results for light-quark systems}
\label{subsec:ssres}
Our results for the $s\bar{s}$ system of two light quarks are given in
Table~\ref{tbl:ssespect}. We have used a strange-quark mass $m_s=364$ MeV
in these calculations following Fulcher \cite{LFI}, but have not changed
the potential as he did, preferring to keep the same
potential as for the heavy-quark system so as to be able to treat both
systems simultaneously and predict the $c\bar{s}$ spectrum. The results are
actually rather insensitive to $m_s$ because $\langle p^2\rangle^{1/2}\approx
744\ {\rm MeV} \gg m_s$. The system is clearly relativistic, with an rms
velocity $\langle v^2\rangle^{1/2}=0.90$ for the quarks.
The energies obtained with the effective mass $M_s=\sqrt{\langle
p^2\rangle+m_s^2}$ are reasonably good on the average, but the
approximate energies again increase too fast relative to the Salpeter
spectrum. The fit obtained when $M_s$ is allowed to vary
is excellent, with the energies differing from the Salpeter energies
by less than 4 MeV for the three lowest states considered. The fitted value
of $M_s$ has essentially no relation to the input mass $m_s$.
Unfortunately, the wave functions obtained in this case are poor even
for the best fit to the spectrum. We compare the Salpeter and approximate
energy densities in Fig.~\ref{fig:uHu_ssbar}. The differences are due
mainly to differences in the wave functions.
Even the kinetic energy densities show significant
pointwise disagreement in this case.
\subsection{Predictions for the light-heavy system}
\label{sec:hlpred}
We consider finally the light-heavy system corresponding to the relativistic
Hamiltonian of Eq.~(\ref{eq:salpeter}) with $m_1=m_c$ and
$m_2=m_s$ corresponding to the masses used in the discussion above.
We use the nonrelativistic Hamiltonian
\begin{equation}
\label{eq:hlmass} H_{c\bar{s}} = M_c + M_s +
\frac{p^2}{2M_c} + \frac{p^2}{2 M_s} + \frac{1}{2}\left(\epsilon_c +
\epsilon_s\right)+V(r)
\end{equation}
obtained by replacing the square-root operators in Eq.~(\ref{eq:salpeter})
by the approximation in Eq.~(\ref{eq:nrapprox}). The kinetic term is
of the standard Schr\"{o}dinger form with a reduced mass $M=M_sM_c/(M_s+M_c)$
given in terms of the effective masses rather than the quark masses.
For the purpose of making predictions, we will keep the energy
shifts $\epsilon_i$ and the effective masses $M_i$
fixed at the values determined separately for the
heavy- and light-quark systems. These quantities would all
be expected to change somewhat in the light-heavy system. For example, the
masses $M_i=\sqrt{\langle p^2\rangle+m_i^2}$ that minimize the
Martin bound on the total kinetic energy change because of the different
value of $\langle p^2\rangle$ in the light-heavy system. The value of this
quantity averaged over the three lowest states is
$\langle p^2\rangle_{c\bar{s}}=0.835$ GeV$^2$, a value intermediate between the values $\langle p^2\rangle_{c\bar{c}}
= 1.021$ GeV$^2$ and $\langle p^2\rangle_{s\bar{s}}=0.744$ GeV$^2$ obtained for the
heavy- and light-quark systems. The energy shifts are given to leading
approximation by $\epsilon_i\approx-\langle p^2\rangle/M_i$, so also
change. However, the conditions for minimizing the bound make the kinetic energy stationary with respect to the masses $M_c$ and $M_s$. As a result,
by the Feynman-Hellman theorem \cite{feynman},
there is no first-order change in the
energies for small changes in $\langle p^2\rangle$. More physically,
the original nonrelativistic approximations for the kinetic
energy operators are already good over a wide range of momenta as
shown in Fig.~\ref{fig:pwf}, so the effect of the changes
on the spectrum is not expected to be large.
Our predictions for the Salpeter energy spectrum for the light-heavy
system are shown in Table~\ref{tbl:hlespect}. If we use the fixed values
of the masses, the energies of the four lowest $c\bar{s}$ states are
predicted to within 36 MeV as shown in the table. We note that
the ground state is predicted to lie at too low an energy
as a result of the large negative value of the energy shift
defined above. However, an examination of
Tables \ref{tbl:ccespect} and \ref{tbl:ssespect} shows that the predicted
ground-state energies of the $c\bar{c}$ and $s\bar{s}$ systems are also
too small. The usual fitting procedure adjusts the energy shift to
minimize the deviations between the theory and the input data over the set of
states considered. If we consider instead adjusting the energy shifts
$\epsilon_c$ and $\epsilon_s$ to fit the $c\bar{c}$ and $s\bar{s}$
ground-state energies exactly, a reasonable procedure
phenomenologically, we predict the normalized energies given in the
third row in Table~\ref{tbl:hlespect}. The ground state is now predicted
correctly. However, the energies of the excited states $c\bar{s}$ increase too
rapidly. This too rapid increase was also present for the
$c\bar{c}$ and $s\bar{s}$ states. We note in this connection that the energies
of the $c\bar{s}$ states are
very close to the average of the energies of the corresponding
$c\bar{c}$ and $s\bar{s}$ states.
If we use instead of the fixed masses the fitted values of the masses and
energy shifts for the heavy- and light-quark systems, we predict the energies
of the lowest three $c\bar{s}$ states to within 11 MeV as shown in
Table~\ref{tbl:hlespect}.
The largest difference occurs for the second excited state. The fits to the
$c\bar{c}$ and $s\bar{s}$ energies are already excellent, and there is no
reason in this case to renormalize the energy shifts.
The closeness of the predictions to the actual energies would be expected
given the results obtained for the $c\bar{c}$ and $s\bar{s}$ systems.
In particular, the nonrelativistic approximations to the kinetic energy
operators are good in the regions in which the momentum-space wave functions
are large. However, the final position-space $c\bar{s}$ wave functions are
again not accurate.
\section{Conclusions}
\label{sec:conclusions}
We find that the apparent success of nonrelativistic models for relativistic
systems can be understood in terms of an approximation to the relativistic
kinetic energy operator motivated by the Martin bound \cite{AMI} in
Eq.~(\ref{eq:opbnd}).
Although the physical content of the approximation can be understood in
terms of an expansion of the relativistic operator about a mean momentum
squared $p_0^2$, given optimally from the bound as $p_0^2=\langle p^2\rangle$,
the series expansion is not necessary. What is important is to obtain
a good average representation of the kinetic energy operator of
Schr\"{o}dinger form. We observe in this connection
that the approximation can be improved significantly by allowing
an extra energy shift to eliminate the inequality, and, if desired, also
allowing the effective mass $M$ to vary.
We have investigated the effectiveness of this procedure in detail by
using the nonrelativistic approximation to fit ``data'' obtained by solving
the relativistic Salpeter equation for the linear-plus-Coulomb
potential used by Fulcher \cite{LFII} in fits to the charmonium
spectrum. We find that the nonrelativistic approximation for the kinetic
energy operator in Eq.~(\ref{eq:nrapprox}) gives generally good descriptions
of the Salpeter energy spectra for the $c\bar{c}$ and $s\bar{s}$ systems,
taken as examples of bound states of heavy and light quark pairs. The
results obtained with the effective masses fixed at the values $\sqrt{\langle
p^2\rangle+m^2}$ suggested by minimizing the Martin bound over a set of states
are good, but the excited state energies generally increase too rapidly
if the potential is kept fixed. The results obtained when $M$ is allowed to
vary in the fitting procedure are accurate to a few MeV in all cases,
a striking result.
We believe that the theoretical understanding of the success of the
nonrelativistic effective-mass approximation developed here
provides a justification for Martin's
nonrelativistic treatment of heavy- and light-quark systems,
and explains the unexpected success of his predictions for the
masses of light-heavy systems \cite{AMI,AMII,AMIII}.
\acknowledgments
This work was supported in part by the U.S. Department of Energy under
Grant No.\ DE-FG02-95ER40896.
One of the authors (LD) would like to thank the Aspen Center for Physics
for its hospitality while parts of this work were done.
\section*{Introduction}
\section*{Main Text}
As a part of the largest international effort underway to explore alternative computing methods called \emph{unconventional computing}\cite{bib8,adamatzky2021handbook},
there is a consolidated trend in the research on devices, materials and in natural processes, to find an implicit exhibition of computing features, even beyond solid aggregation state.
The idea of computing with liquids attracted engineers and mathematicians since the early 1900s~\cite{emch1901two}, but later prototypes of liquid computers were mostly based on hydraulic, reaction-diffusion, and fluidic principles \cite{adamatzky2019brief}.
Only recently, liquid and colloidal systems have been subject to attention for mimicking the ions moving in the human brain through embedding aqueous solutions in gel or solid-state scaffolds \cite{noushin}.
Being applicable to soft robotics \cite{bib76}, energy harvesting \cite{bib77}, and computation in general \cite{bib78},
magnetic fluids are always of great research interest (Sec.~S1). Particularly, ferrofluids (FFs) are mixtures in which nanometric-size dispersed insoluble particles are suspended throughout a solvent, the particles being typically superparamagnetic, giving rise to interesting collective behaviour.
The potential of FFs in computing, massively parallel information processing, sensing, and energy harvesting regardless of their shape has not been addressed before, notwithstanding that first reports on their shape reconfiguration are already available \cite{shape1}. Here, we demonstrate that a volume of a superparamagnetic FF can be interchangeably assigned to memory and computing roles. This property is consistent with the rising paradigm of in-memory computing, which aims at mitigating the processor-memory data transfer bottleneck by embedding computation in memory \cite{ielmini2018memory,verma2019memory,le2018mixed,sebastian2020memory}. Furthermore, we demonstrate FF computation using the concept of Reservoir Computing (RC), a paradigm that takes advantage of system dynamics (spontaneous or excited from external sources) for advanced information processing. Reversibility, fading memory, nonlinearity of the electrical response and structural stochasticity are usually considered prerequisites for any physical implementation of RC concepts \cite{reservoir}, and most solid-state memristors fulfil these requirements \cite{reservoir2}.
\begin{figure}[h]%
\centering
\includegraphics[width=0.85\textwidth]{./figures/fig1-rasterized.eps}
\caption{\textbf{A} Experimental set-up. \textbf{B} Measurement concept and relevant parameters. The colloid is programmed using a quasi-DC voltage and its internal status is read through its distributed impedance values $Z_{11}^C$, $Z_{12}^C$, $Z_{21}^C$ and $Z_{22}^C$, that correspond to the sum of the samples obtained throughout the measurement bandwidth of the VNA (10\,MHz--6\,GHz). \textbf{C} Hysteresis loops as a function of experiment elapsed time obtained from an initial impedance multi-point (comprising $Z_{11}$, $Z_{12}$, $Z_{21}$ and $Z_{22}$) obtained by applying a -3.8\,V--3.8\,V voltage sweep.}\label{fig1}
\end{figure}
Our experimental setup shown in Fig.~\ref{fig1}A comprises a FF sample (the reservoir) connected to a two-port Vector Network Analyzer (VNA), a DC bias generator (where the negative terminal is internally connected to ground) and two bias tee circuits to decouple RF and DC signals (details in Materials and Methods).
Both the DC bias generator and the VNA are connected to a Personal Computer to implement measurement scripts (Sec.~S2), that is, applying a DC voltage across the reservoir and reading its state through the S-parameters, which can always be converted to impedances as a function of frequency.
As shown in Fig.~\ref{fig1}B, the FF is stimulated as follows: a quasi-DC voltage $V_P$ (both positive and negative) is applied to the system to program/write it, and its internal status is read by acquiring the impedance magnitude parameters in RF mode. Since the observed S-parameter and, consequently, impedance-magnitude variations are small (Sec.~S3), the sum of the impedance magnitudes over all scanned frequencies, $Z_{xy}^C$, can profitably be used as an indicator (see the formulas in the figure). In this way the S-parameters are collapsed to a single number for each measurement point, reducing the data volume. Consequently, each reading measurement is an ensemble of four real numbers $Z_{11}^C$, $Z_{12}^C$, $Z_{21}^C$ and $Z_{22}^C$, all functions of time $t$; e.g., for port one, $Z_{11}^C\equiv Z_{11}^C(t)$.
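As an illustration of this read-out step (a sketch rather than the actual measurement script of Sec.~S2), the four indicators can be obtained from a measured S-parameter sweep as follows, assuming an equal reference impedance of 50\,$\Omega$ at both ports.
\begin{verbatim}
import numpy as np

def collapsed_impedances(S, z0=50.0):
    """Collapse a two-port sweep into Z11^C, Z12^C, Z21^C, Z22^C.

    S is assumed to be an array of shape (n_freq, 2, 2) of complex
    S-parameters sampled over the 10 MHz--6 GHz bandwidth.
    """
    eye = np.eye(2)
    z_sum = np.zeros((2, 2))
    for s in S:
        # standard two-port conversion, Z = z0 (I + S)(I - S)^(-1)
        Z = z0 * (eye + s) @ np.linalg.inv(eye - s)
        z_sum += np.abs(Z)          # accumulate |Z_xy(f_k)| over frequency
    return z_sum   # z_sum[0, 0] = Z11^C, z_sum[0, 1] = Z12^C, ...
\end{verbatim}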
Hysteresis, a fingerprint of memristance \cite{chua1}, is a necessary condition for neuromorphic computation \cite{bib72}.
Fig.~\ref{fig1}C shows hysteresis loops obtained with a voltage sweep from -3.8\,V to 3.8\,V with steps of 0.1\,V each lasting 1\,s, repeated for 50 times. At the beginning of the test ($t$\,=\,0\,s) we started with $V_P$\,=\,-0.85\,V and impedance values were $Z_{11}^C(0)$\,=\,14312\,$\Omega$, $Z_{12}^C(0)$\,=\,2320\,$\Omega$, $Z_{21}^C(0)$\,=\,2060\,$\Omega$ and $Z_{22}^C(0)$\,=\,11795\,$\Omega$.
The pinched hysteresis of $Z_{11}^C$ shrinks for positive $V_P$ as the number of iterations increases, even with such zero average excitation. The hysteresis of $Z_{22}^C$ shrinks throughout the whole $V_P$ range while for $Z_{12}^C$ and $Z_{21}^C$ we observe the opposite phenomenon. On the one hand, the results indicate that assuming a given DC stimulus, its effect on the impedance variation is not constant and varies over time. On the other hand, this feature indicates a long-term adjustment of the material towards an \emph{equilibrium} condition, that can be interpreted as the feature of memorizing the previous DC bias history. Furthermore, due to fluidity of the material, this memory will be naturally fading, which is another important prerequisite for an efficient and universal RC system \cite{memory1}.
\begin{figure}[h]%
\centering
\includegraphics[width=0.9\textwidth]{./figures/fig2-rasterized.eps}
\caption{\textbf{A} Long-term memory stimulus scheme and \textbf{B} results obtained by applying a positive pulse to the material of value 3.3\,V with different duration $T_P(i)$, and by restoring the initial impedance value $\mathrm{Z}_{11}^{C*}$\,=\,14338\,$\Omega$ for each test using a closed control loop. \textbf{C} Variation of $\mathrm{Z}_{22}^{C}$ and corresponding mean and variance during the {\tt Hold} phase for all information values. }\label{fig3}
\end{figure}
The liquid can be used to store information in the form of a particular impedance evolution at a given port. To draw a parallel with biological neurons, we can refer to this as a \emph{long-term plasticity} feature.
Fig.~\ref{fig3}A shows the stimulus scheme used to demonstrate storage capacity for $N$ information values -- in this specific test $N$\,=\,16. The test comprises repeated {\tt Reset}, {\tt Write} and {\tt Hold} phases, where {\tt Reset} implements a control loop on $\mathrm{Z}_{11}^C$ to reset its value to $\mathrm{Z}_{11}^{C*}$.
The results in Fig.~\ref{fig3}B show that $\mathrm{Z}_{11}^{C*}$ is correctly reached at each iteration and that, notwithstanding the impedance control being implemented at port one, $\mathrm{Z}_{22}^C$ evolves towards well defined impedance values that are a function of the applied pulse duration $T_P(i)$. Interestingly, the $\mathrm{Z}_{22}^C$ values do not reset at the beginning of each {\tt Write} phase. The small variation of the $\Delta Z_{22}^C$ values and the associated distribution parameters during {\tt Hold} (Fig.~\ref{fig3}C) suggest that the colloid can be used as a high-resolution short-term memory. In general, since $T_P(i)$ can be controlled with the power of the continuum, analog information storage can be implemented and information can be stored even for a longer duration (Sec.~S4).
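A minimal sketch of the {\tt Reset}/{\tt Write} control logic is given below; \texttt{apply\_dc} and \texttt{read\_z11c} stand for the DC-generator and VNA calls of our measurement scripts (hypothetical interface), and the tolerance, polarity choice and timings are illustrative assumptions.
\begin{verbatim}
import time

def reset_phase(apply_dc, read_z11c, z_target=14338.0,
                tol=5.0, v_reset=3.3, timeout=600.0):
    """Closed-loop Reset: drive Z11^C back to the set point Z11^C*."""
    t0 = time.time()
    while time.time() - t0 < timeout:
        error = read_z11c() - z_target
        if abs(error) < tol:
            apply_dc(0.0)
            return True
        # the polarity that raises or lowers Z11^C depends on the bias history
        apply_dc(v_reset if error < 0 else -v_reset)
        time.sleep(1.0)
    apply_dc(0.0)
    return False

def write_phase(apply_dc, v_write=3.3, duration=4.0):
    """Write: a single pulse of amplitude v_write and duration T_P(i)."""
    apply_dc(v_write)
    time.sleep(duration)
    apply_dc(0.0)
\end{verbatim}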
\begin{figure}[h]%
\centering
\includegraphics[width=0.9\textwidth]{./figures/fig3-rasterized.eps}
\caption{\textbf{A} Weighting used for in-memory digit filtering, based on a simple elongation of the expected black pixels pulses. Each digit is serialized from bottom-left to top-right, sequentially line by line.
\textbf{B} Stimulus scheme for classification, with reset control on $Z_{22}^C$ after each serialized digit (exemplified here for {\tt 1} weighting). \textbf{C, D, E} Measured impedance variation for all digits for three examples weighted sequences, {\tt 1}, {\tt 4} and {\tt 7}. Weighting a particular digit leads to a lowering of its impedance $Z_{22}^C$ compared to the others (orange paths).
\textbf{F} Final $Z_{22}^C$ for all weighting sequences. Classification for digit {\tt 3} fails due to overlapping with {\tt 8}.}\label{fig5}
\end{figure}
By extending the memory stimulus scheme of Fig.~\ref{fig3}A, we could demonstrate in-memory digit classification. To this end,
we prepared a dataset consisting of the ten digits {\tt 0}--{\tt 9}, in 8\,$\times$\,8 matrices of pixels (Sec.~S5) that we have serialized as shown in the scheme of Fig.~\ref{fig5}A. Data serialization has been demonstrated to be an efficient approach towards neuromorphic data processing with very minimal computational resources [e.g. single artificial neurons \cite{molecules, serial2}].
Each pixel, besides its value 0--1 that can be mapped to a voltage level $V_{P}$, can be assigned a weight in terms of pulse duration $w_i$ and, in general, an offset (in terms of an additive voltage).
Here, we do not bias the colloid in the condition of having a pinched hysteresis, so to obtain a monotonic decrease of impedance during the tests, and therefore enable a direct comparison of the final value for all digits after the application of the sequences.
By assuming instead that the system is in the conditions of a pinched hysteresis, the FF can progressively adapt to implement a learning mechanism by providing a particular non-zero offset weighting (Sec.~S6).
Fig.~\ref{fig5}B shows the stimulus scheme of the pattern classification test, which exploits in-memory computing features. Similarly to the memorization experiments here we apply a reset control on $Z_{22}^C$, towards the impedance set-point $Z_{22}^C$\,=\,14338 \,$\Omega$, using an initial {\tt Charge} phase at 10\,V. Each pixel is associated to a -3.3\,V or 0\,V voltage (black or white) for a given weight duration, with zero offset. After verifying differentiation (Sec.~S5), we provide the weighted sequences so that the pulse durations of the expected black pixels are longer compared to the others. We apply sequentially all serialized pixel matrices from {\tt 0} to {\tt 9}, using all the weighted sequences for each digit. Fig.~\ref{fig5}C--E show the measurement results assuming 4.5 and 0.25\,s weights (black and white, respectively), for three sample digits, {\tt 1}, {\tt 4} and {\tt 7}. Results show that with this in-memory computing scheme $Z_{22}^C$ decreases more considerably in case the weighted sequence matches the digit.
The final decision can then be achieved by using a simple threshold on the $Z_{22}^C$ value, which depends on the digit to be detected, or alternatively, by indexing the digit that leads to the lowest impedance after all of them are serialized.
This particular scheme works except for {\tt 3} which is a subset of {\tt 8} (Fig.~\ref{fig5}F), thus leading to 90\% accuracy.
As an effect of long-term plasticity, we have observed also that if the above test is repeated for days without interruption, the impedance dynamics shrinks, irrespective of the digit (see Sec.~S7). The behaviour of the liquid, however, is reversible and impedance dynamics can be restored.
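A sketch of the serialization and of the decision rule is given below; the pixel ordering, weights and voltages follow the description above, while the functions themselves are an illustration and not the measurement script actually used.
\begin{verbatim}
import numpy as np

def weighted_sequence(input_pixels, template_pixels,
                      w_long=4.5, w_short=0.25,
                      v_black=-3.3, v_white=0.0):
    """Pulse train for one input digit under the weighting of one template.

    Voltages follow the input pixels, while durations are longer where the
    template expects a black pixel (4.5 s vs 0.25 s, zero offset).  Pixels
    are streamed from the bottom line to the top line.
    """
    seq = []
    for pix_in, pix_tpl in zip(np.ravel(np.asarray(input_pixels)[::-1]),
                               np.ravel(np.asarray(template_pixels)[::-1])):
        voltage = v_black if pix_in else v_white
        weight = w_long if pix_tpl else w_short
        seq.append((voltage, weight))
    return seq

def classify(final_z22c_per_template):
    """Decision rule: the template whose weighted sequence produced the
    lowest final Z22^C is taken as the detected digit."""
    return int(np.argmin(final_z22c_per_template))
\end{verbatim}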
\begin{figure}[h]%
\centering
\includegraphics[width=0.9\textwidth]{./figures/fig4-rasterized.eps}
\caption{\textbf{A} Stimulus scheme used for both training and inference during PRC tests with parallelized outputs for the readout NN layer (each one identifying the effect of a pixel value), detail on the NN layer and conceptual liquid reservoir. \textbf{B} Confusion map of a real-time classification test of the digits {\tt 0} to {\tt 3} using the trained NN.}\label{fig6}
\end{figure}
Similarly to memristors (differences outlined in Sec.~S8), to further demonstrate the computation capability of the FF we have implemented physical reservoir computing (PRC) using an ad-hoc readout layer.
The FF exhibits chaotic nature (resulting, \emph{inter alia} from Brownian motions as well as from the surfactant molecules featuring electrical polarizability), and within its deterministic features, it presents a strong sensitivity to initial electrical conditions (Sec.~S9), while it can provide both fading memory and long-term plasticity (for instance see the plots in Fig.~S3).
RC is typically implemented taking advantage of a physical reservoir short-term memory \cite{bib67}. However, within the time frame of a digit classification, our findings show that the dynamics of the FF tend in the long-term to reduce if a trivial reset condition is used (Sec.~S7). Moreover, besides sensitivity to initial conditions, the FF exhibits chaotic non-equilibrium at repeated impedance sets (Sec.~S10).
Such features can make PRC unfeasible and it is thus necessary to
avoid changes in dynamical regime during both training and inference. To mitigate these variations, we have designed a particular 'reset' sequence that keeps the dynamical features of the material consistent. It consists of the application of a high voltage towards two impedance points that are higher and lower than the initial impedance $Z_{22}^{C*}$ used to run active computation, respectively.
Fig.~\ref{fig6}A shows the measurement scheme and the reset sequence. In our tests $Z_{22}^C\rvert_\mathrm{LOW}$\,=\,16350\,$\Omega$, $Z_{22}^C\rvert_\mathrm{HIGH}$\,=\,16450\,$\Omega$ and $Z_{22}^{C*}$\,=\,16400\,$\Omega$.
We used constant weighting to serialize the 64 pixels of the digit matrices (each pixel lasts 2\,s, with -3.3\,V for {\tt 1} and 0\,V for {\tt 0}) and we have trained a Neural Network (NN) layer to classify four digits {\tt 0}--{\tt 3} (rationale and dataset in Sec.~S11). The NN comprises a first block which normalizes in parallel the 64 impedance values in the range 0--1 to get rid of residual dynamical variations due to the FF chaotic nature. The input layer is made of a 64-element {\tt Dense} model, followed by a {\tt Batch Normalization} block (that helps back-propagation convergence), another 14-element {\tt Dense} model, and finally a single {\tt Dense} neuron. Inference is achieved in real time using the trained NN on new measurement data from the FF consisting of 64 impedance values $Z_{22}^C$. Fig.~\ref{fig6}B shows the confusion map, after detecting the four digits with the pre-trained NN, achieving an accuracy of 90.6\%.
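A sketch of the readout layer in Keras is shown below; the layer sizes follow the description above, while the activations, optimizer, loss and the rounding of the single regressed output to a digit index are assumptions of this illustration.
\begin{verbatim}
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def build_readout():
    """64 normalized impedance samples -> Dense(64) -> BatchNorm ->
    Dense(14) -> Dense(1), as described in the text."""
    model = keras.Sequential([
        layers.Input(shape=(64,)),
        layers.Dense(64, activation="relu"),
        layers.BatchNormalization(),
        layers.Dense(14, activation="relu"),
        layers.Dense(1),                    # single output neuron
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

def normalize_trace(z_trace):
    """Min-max normalization of the 64 Z22^C values to the range 0--1."""
    z = np.asarray(z_trace, dtype=float)
    return (z - z.min()) / (z.max() - z.min())

# inference on a new measured trace: round the regressed value to a digit 0-3
# digit = int(np.rint(model.predict(normalize_trace(trace)[None, :])[0, 0]))
\end{verbatim}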
In conclusion, we have demonstrated the first ever evidence of a FF in-memory computing device. A FF can implement complex calculations both with custom in-memory computing schemes, and PRC, thus widening its spectrum of features. Besides extending the possibilities of already existing applications, these findings make solutions featuring unprecedented plasticity, fault-tolerance and resilience towards extreme environments a plausible reality, thanks to FFs amorphous nature.
\section{Extended Introduction}
\begin{large}
\noindent \textbf{Supplementary Text}
\end{large}
\bigskip
\section{Typical Applications of Magnetic Fluids and Ferrofluids}
\label{Sec1}
Magnetic fluids in general are of great interest in biomedical applications e.g., cell separation \cite{bib53}, tumor hyperthermia \cite{bib54}, cancer diagnosis \cite{bib55}, magnetorelaxometry \cite{bib56}, drug delivery \cite{bib57} and thermal energy conversion \cite{bib79}.
Some noteworthy applications of ferrofluids are magnetic seals for pumps and mixers \cite{bib59}, inertial and viscous damping for loudspeakers and stepper motors \cite{bib60}, bearings \cite{bib61}, lubricants \cite{bib62}, heat transfer media \cite{bib63}, and soft robots \cite{bib64,bib65,bib66,bib64c}.
\newpage
\section{Measurement Software}
\label{Sec3}
We have designed a specific scripting language based on Python to support the experiments through script files and to interface the low-level drivers of the VNA and the DC generator. It comprises a specific software interpreter, capable of compiling the scripts and partially executing them in advance to check their correctness. The possibility of controlling both VNA and DC generator through a PC opens the way to the implementation of programmable experiments, automatic result storage, real-time computation of quantities of interest from the S-parameters, and implementation of control loops to investigate algorithms, for instance, to bring impedance to known set points.
The experiment files include specific commands to set the DC bias, read the data from the VNA and give the possibility of declaring variables, calculating quantities on the fly based on the measurement output, and thus implementing control loops thanks to the possibility of including selections. The measurement program also takes care of saving the measurement data on the PC and permits customization in the writing of Comma-Separated Values (CSV) files by including not only directly measured S-parameters but also any of the variables used during the execution of the experiment. This approach has the power of enabling the scheduling of specific scripts and deep customizations to enable batch measurements and implementing complex calculations. For instance, the herein referenced impedance values $Z_{11}^C$--$Z_{22}^C$ are calculated by the measurement program immediately after the acquisition of the S-parameters from the VNA.
\newpage
\section{RF Impedance Controllability}
\begin{figure}[hbt!]%
\centering
\includegraphics[width=0.9\textwidth]{./figures/figs1-rasterized.eps}
\caption{Measured S-parameters of the FF with a -3.3\,V bias applied to the material, from 0 to 64\,s, one measurement per second. This measurement demonstrates the DC controllability of the internal particle collective status of the FF, and the possibility of reading it out in RF mode.}\label{figsc}
\end{figure}
\FloatBarrier
Fig.~\ref{figsc} demonstrates the controllability of the liquid RF impedance through the application of a DC bias $V_P$ and shows its consequent S-parameters evolution.
Here, we have applied a constant -3.3\,V DC bias to the material and we have run measurements using the VNA at every second, for 60\,s overall. The graphs show the evolution of the parameters over time, while the negative DC bias is maintained. In general, although $\rvert S_{12}\rvert$ and $\rvert S_{21}\rvert$ slightly differ
(non-ideal reciprocal behaviour),
different trends occur in the curves. For this experiment, $\rvert S_{22}\rvert$ decreases in the full 10\,MHz--6\,GHz bandwidth, while $\rvert S_{11}\rvert$ for a particular range (see inset) has a different trend. Similar results can be obtained by applying a positive DC voltage. The material behaviour strongly depends on its impedance value at the beginning of the experiment, and the status of the liquid is inherently encoded in the \emph{unbalancing} of all the four parameters. Moreover, the evolution of its internal status is in general reversible by applying a signal with opposite sign. Higher variations in the S-parameters are achievable by applying a larger voltage (up to $\pm$\,10\,V for our set-up), however, we have decided to keep limits at a maximum of $\pm$\,3.3\,V during computation (except for hard resets), for compatibility with the standard voltage levels used in consumer electronic components.
\newpage
\section{Enhanced Information Storage}
\label{Sec4}
\begin{figure}[hbt!]%
\centering
\includegraphics[width=0.9\textwidth]{./figures/figs2-rasterized.eps}
\caption{\textbf{A} Measured storage capacity obtained by starting from an initial condition $Z_{11}^{C*}$\,=\,15300\,$\Omega$ for the same pulse duration given in Fig.~2A (main manuscript), with a total test time of 2500\,s. \textbf{B} Memorization capability obtained using repeated short duration pulses and resetting the impedance at every iteration to $Z_{22}^{C*}$\,=\,14338\,$\Omega$.}\label{figs6}
\end{figure}
\FloatBarrier
The plots of Fig.~\ref{figs6}A show information storage results for more than 2500\,s obtained by applying the same signals given in Fig.~2A of the main manuscript, starting from an impedance set point $Z_{11}^{C*}$\,=\,15300\,$\Omega$ and by resetting it at every iteration (blue $T_P$\,=\,4\,s, towards red $T_P$\,=\,64\,s).
The graphs of Fig.~\ref{figs6}B show the possibility of carefully mapping pulse duration $T_P$, thus increasing resolution towards a full analog memory. In this test, we have applied the same scheme given in Fig.~2A (main manuscript), but with pulses instead of fixed 3.3\,V signals ($T_\mathrm{high}$\,=\,0.25\,s, $T_\mathrm{low}$\,=\,0.75\,s, for a duty cycle of 25\%, see detail in the plots). The use of these pulses permits the observation of fine variations of storage capacity as information is encoded in the number of pulses where a higher number indicates a higher information value. In this case, programming occurs with a lower \emph{energy} compared to the previous case. Here, however, the obtained impedance values decrease with time but the curves are still separated during their evolution.
As reported in Fig.~2 (main manuscript) and Fig.~\ref{figs6}, depending on the bias setting of the colloid different numerical values and trends can be achieved. This implicitly demonstrates that the system provides a long term memorization feature that stores the previous stimulation history. Analog memories in general are not a new concept [see \cite{bib71}], but can open the way, similarly to memristors, to unexplored computation capabilities if paired with modern neural networks and machine learning.
\newpage
\section{Dataset and Differentiation}
\label{Sec9}
\begin{figure}[hbt!]%
\centering
\includegraphics[width=0.75\textwidth]{./figures/figs3-rasterized.eps}
\caption{\textbf{A} Dataset of ten different 8\,$\times$\,8 pixel images (black and white), each one identifying a digit from {\tt 0} to {\tt 9}. \textbf{B} Measured $Z_{22}^C$ during serialization of digit {\tt 8} using constant weighting, with detail on short and long-term information storage features. \textbf{C} Measured pixel array differentiation assuming constant time-weighting $w_i$\,=\,$w$\,=\,4\,s for the digit dataset (single measurement run). The values obtained at the end of the serialization are all different and depend on both the input sequence and the previous history. Within all the serialization curves, we can appreciate local variations attributable to a short-term memory effect, while final values include both long and short-term memory contributions.} \label{figs5}
\end{figure}
\FloatBarrier
We have investigated the possibility of classifying the different digits assuming constant delay weights $w$, to verify that the colloid operates similarly to a solid-state memristive device featuring short-term plasticity and can therefore, thanks to its non-linear lossy integration, differentiate spike sequences \cite{bib67}. To verify differentiation, we have applied a constant interval of 4\,s for all pixels (that can be 1, i.e., -3.3\,V, or 0, i.e., 0\,V) of all digits (see dataset in Fig.~\ref{figs5}A). We have run measurements by serializing all digits from 0 to 9 and applying them to the liquid in sequence.
Fig.~\ref{figs5}B shows an example measurement for digit {\tt 8} during serialization with detail on the pixel values. Our measurement script saves $V_P$ at the end of each stimulation and thus we provided both sampled data and its corresponding zero-order hold interpolation, that identifies the actual voltage across the colloid. From the measurements of $Z_{22}^C$ we can clearly identify two memory contributions, one long-term and another short-term. The short-term contribution fades out rapidly as stimulation is interrupted (for black pixels, -3.3\,V).
Fig.~\ref{figs5}C shows the measurement results for all the digits. The values obtained at the end of each sequence are all different, demonstrating differentiation, similarly to the results obtained on memristor matrices \cite{bib68}. Observe how the slope of the decreasing curves (that is, while a black pixel is applied to the liquid) varies across the serialized digits, showing, similarly to Fig.~\ref{figs5}B, an impact of both short- and long-term plasticity.
\newpage
\section{Progressive Adaptation}
\label{Sec5}
\begin{figure}[hbt!]%
\centering
\includegraphics[width=0.9\textwidth]{./figures/figs4-rasterized.eps}
\caption{\textbf{A} Measured progressive adaptation obtained by running across the hysteresis curve given in Fig.~1C and by repeating the digits 20 times each. The initial impedance set is $Z_{11}^C$\,=\,15100\,$\Omega$. When the correct digit matches weighting (which is constant across all digits, here for digit {\tt 1}), impedance decreases significantly. \textbf{B} Applied voltage across the colloid with progressively increasing bias to run across the hysteresis curve. \textbf{C} Corresponding measurement scheme for the test.}\label{figs2}
\end{figure}
\FloatBarrier
Among the extensive tests we performed on the colloid, we also investigated the possibility of exploiting the pinched hysteresis to demonstrate progressive adaptation to a training sequence, in a way similar to a `learning' mechanism.
In our in-memory computing scheme, weighting can also be applied with a non-zero offset. Assuming a pinched hysteresis, an offset stimulation is useful to run across it, and, depending on the overall duration of the applied voltage, we demonstrate that the liquid can dynamically latch towards low or high impedance values. Fig.~\ref{figs2}A shows measurement results in which we have repeatedly and sequentially applied the digits (20 times each) given a weighted sequence for {\tt 1}. The stimulation runs deeper across the pinched hysteresis loop when {\tt 1} is applied than for all the other digits, so that the impedance dynamically decreases by a larger magnitude. We have reset the impedance to $Z_{11}^{C*}$\,=\,15100\,$\Omega$, exploiting the equilibrium conditions of the material immediately before our hysteresis tests given in Fig.~1C (main manuscript). Compared to the measurements given in Fig.~3 (main manuscript), the FF is here in a condition in which the voltage roles are exchanged, that is, to obtain a $Z_{11}^{C}$ impedance decrease a positive $V_P$ is required, and consequently a negative one is required to reset it.
To run across the pinched hysteresis, as shown in Fig.~\ref{figs2}B, we have applied a progressive offset to the same weight sequence used in the measurements of Fig.~3.
The offset function is identical for all digits and progressively increases as the serial pixels are streamed, according to the sequence $K\left(1-\frac{i}{32}\right)$, where $K$ is negative and $i$ is the pixel number in the range 0--63.
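As a qualitative illustration, the offset sequence can be generated as follows (a minimal Python sketch; the value of $K$ is an arbitrary example and not the one used in the experiments):
\begin{verbatim}
# Illustrative sketch of the progressive offset K*(1 - i/32) added to the
# serialized sequence; K = -3.3 is an arbitrary example value.
import numpy as np

K = -3.3
i = np.arange(64)
offset = K * (1.0 - i / 32.0)  # starts at K (negative), crosses zero at i = 32
\end{verbatim}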
As shown in Fig.~\ref{figs2}C, to permit the establishment of an internal equilibrium and not perturb the colloid status, we have not performed a hard reset with a high 10\,V voltage; rather, we have reset it with the same 3.3\,V magnitude used for serialization. As exemplified qualitatively in the hysteresis diagram details, this measurement encompasses most of the effects we have observed during our study. In particular, the hysteresis progressively evolves depending on the previous history, increasing the impedance dynamics for digit {\tt 1} and terminating with an almost full zeroing for digit {\tt 9}, where the pinch disappears.
The impedance dynamics of the colloid thus progressively latches towards large variations for {\tt 1} and progressively reduces for the remaining digits.
These results demonstrate the possibility of dynamical latching using a FF, in a similar way as can be done with memristors. Moreover, they demonstrate that a `learning' mechanism is possible.
\newpage
\section{Dynamics Reduction}
\label{Sec6}
\label{sec:limitcycle}
\begin{figure}[hbt!]%
\centering
\includegraphics[width=0.8\textwidth]{./figures/figs5-rasterized.eps}
\caption{\textbf{A} Measured reduction of the FF dynamics after repeating 100 times the classification measurement with weighting for digit {\tt 4}, and initial set point $Z_{22}^C$\,=\,14338\,$\Omega$. The impedance of the system evolves to shrink around a fixed point, retaining only minimal dynamical features (limit cycle). \textbf{B} Final value of $Z_{22}^C$ after the application of all pixels versus iteration. \textbf{C} Detected digits versus number of iterations. \textbf{D} Corresponding threshold for the detection of {\tt 4}. }\label{figs1}
\end{figure}
\FloatBarrier
In the in-memory classification experiments of Fig.~3 (main manuscript) we have correctly assumed that, during the tests, the status of the liquid was not changing significantly. This hypothesis was verified because the tests lasted less than one hour each (see the experimental time in the measurement results), for an overall duration of approximately 5\,h. However, in general,
we have observed that, after applying the serial sequences for days, resetting the material to the same condition at every test leads to a reduction of the dynamics towards the impedance set point itself. This result is consistent with the hysteresis plots of Fig.~1 (main manuscript): assuming a periodical stimulation, its effect on the impedance varies over time, leading, in the ultra-long term, to a shrinking of the hysteresis curve of $Z_{22}^C$.
Fig.~\ref{figs1}A shows superposed $Z_{22}^C$ curves
for each digit, when the pattern classification test is repeated in sequence 100 times with weighting for digit {\tt 4}, for an overall measurement time of 3 days. The impedance patterns obtained during the measurements tend to collapse to a straight line superposed on the initial impedance set point, with reduced dynamical behaviour.
Fig.~\ref{figs1}B shows the final $Z_{22}^C$ values after the streaming of all pixels as a function of the iteration. We define the detection threshold as the average between the lowest and the second-lowest value of $Z_{22}^C$ across all digits at a given iteration. Due to the dynamics reduction, the in-memory classification scheme fails, as the lowest impedance values do not always correspond to {\tt 4}, notwithstanding the weighting. The detected digits shown in Fig.~\ref{figs1}C confirm that digit {\tt 4} leads to the lowest impedance value 29 times out of 100, for an accuracy of 29\%. The expected detection threshold for {\tt 4}, shown in Fig.~\ref{figs1}D, follows the general compression trend of the liquid as the classification is progressively repeated.
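For clarity, the threshold and the detected digit at each iteration can be computed as in the following sketch (Python), which follows our reading of the threshold definition above; \texttt{z\_final} is a hypothetical array holding, for each iteration, the ten final $Z_{22}^C$ values:
\begin{verbatim}
# Sketch of the per-iteration detection; z_final is a hypothetical array of
# shape (iterations, 10) with the final Z_22^C value of each digit.
import numpy as np

def detect(z_final):
    ordered = np.sort(z_final, axis=1)
    threshold = 0.5 * (ordered[:, 0] + ordered[:, 1])  # mean of two lowest
    detected = np.argmin(z_final, axis=1)               # digit with lowest Z
    return detected, threshold

# accuracy for the weighted digit 4: np.mean(detect(z_final)[0] == 4)
\end{verbatim}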
Consequently, within the given experiment time, we can conclude that the system evolves towards a limit cycle, making in-memory computing with the previously defined scheme ineffective. We observed that this behaviour is induced by the reset phase of the impedance, which is performed in the same increasing direction for all iterations and, \emph{inter alia}, at a higher voltage (i.e., 10\,V) compared to those used during the serialization (i.e., -3.3/3.3\,V). The behaviour of the liquid, however, is reversible. After applying a fixed stimulation of -10\,V for a sufficiently long time (on the order of minutes), the dynamical behaviour of the liquid is enhanced and restored. Moreover, the liquid dynamics can also be restored by repeatedly applying fast, zero-mean hysteresis sweeps at higher voltage (e.g., a factor of two faster compared to the reported results of Fig.~1C, main manuscript). Another possibility is to set $V_P$\,=\,0\,V and wait for a sufficiently long time to drive the system towards an internally defined equilibrium, thus leaving the liquid to self-adjust. We have observed that after such a restoring procedure, although the initial impedance values can vary significantly compared to the preceding epoch, the dynamical behaviour is indeed restored. After this restoring process, the impedance values can be successfully brought back to the original values by applying a DC voltage.
Given that such a locking phenomenon occurs only after many repeated classification cycles and, in general, after a long time, the liquid can still be effectively used to run computations, provided that the computation time is reasonably small, even in the presence of the above dynamics reduction. The onset of this limit cycle (which can be considered a long-term effect) is slow compared to the computation time of the above experiments, thanks to the relatively low voltages used in the serialization process. We have observed faster dynamics reduction with higher stimulation voltages.
\newpage
\section{Solid-state Memristors vs. Ferrofluids}
\label{Sec2}
Two features of the FF that differ from typical solid-state memristors are: i) the need to apply a DC bias multiple times to change the material state, and ii) the speed of the inference operation.
Most memristor-based neural networks perform inference at biological speed, i.e., with millisecond-range time constants. With oscillatory neurons, however, one can vary this time constant by several orders of magnitude, for instance between 1\,$\mu$s and 100\,s or even 1\,ks. On the other hand, the FF needs to be read out at RF (using an AC signal), unlike memristors, to the best of our knowledge.
In contrast to memristors, where the observed impedance is mono-dimensional (applied voltage, read current), here the status of the liquid is observable across all four impedance parameters, thus potentially allowing for higher-dimensional interpretations. Another interesting point regards the repeated application of programming biases. This feature is valid for both FF and some memristors. HfO$_{2}$-based memristors, for instance, need this kind of multiple biasing approach to witness a change of state, mainly because of their conductive filament-based switching mechanism \cite{bib75}.
\newpage
\section{Sensitivity to Initial Conditions}
\label{Sec7}
\begin{figure}[hbt!]%
\centering
\includegraphics[width=0.9\textwidth]{./figures/figs6-rasterized.eps}
\caption{\textbf{A} Computed series $\ln \left[ d(k) \right]$ associated to the $Z_{22}^C$ time-series of our hysteresis test in Fig.~1C ($\epsilon$\,=\,0.1). The slope at the beginning of the test is positive, and therefore a linear fit with positive slope exists, which provides a reasonable estimate of the Lyapunov exponent. \textbf{B} The same computed series as in A, but for a uniformly distributed function. The trend is flat, therefore $d(k)$ does not depend on $k$, confirming stochasticity.}\label{figs3}
\end{figure}
\FloatBarrier
A defining feature of a chaotic deterministic system is its high sensitivity to initial conditions. To check whether our system provides this feature, we have extracted the $Z_{22}^C$ time-series given in Fig.~1C (for the complete experiment) and applied the algorithm of \cite{bib74}, which provides a means to estimate the Lyapunov exponent based on the quantity $d(k)=\lvert T_{i+k}-T_{j+k}\rvert$, where $T$ is the time-series. Fig.~\ref{figs3}A shows the results of the application of the algorithm with an initial diameter bound of $\epsilon$\,=\,0.1. The curve exhibits periodicity because the data comprise all the loops depicted in the hysteresis. In contrast to the uniform distribution case (which provides a flat behaviour), the term $\ln \left[d(k)\right]$, at the beginning of the experiment, shows an increasing trend with $k$, suggesting that a linear fit with positive slope exists, which provides a good estimate of the Lyapunov exponent. A positive Lyapunov exponent indicates the chaotic character of the time-series and hence a high sensitivity to initial conditions, as well as a system with a positive entropy creation rate \cite{chaos}.
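For reference, one possible implementation of such a divergence-based estimate is sketched below (Python). This is our reading of the procedure and not a verbatim reimplementation of the cited algorithm: pairs of samples whose initial distance is below $\epsilon$ are tracked for $k$ steps, and the initial slope of the averaged $\ln \left[d(k)\right]$ is then fitted linearly.
\begin{verbatim}
# One possible reading of the divergence-based estimate (illustrative only).
import numpy as np

def divergence_curve(T, eps=0.1, k_max=50):
    T = np.asarray(T, dtype=float)
    n = len(T) - k_max
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)
             if abs(T[i] - T[j]) < eps]            # initial diameter bound
    log_d = []
    for k in range(1, k_max + 1):
        d = np.array([abs(T[i + k] - T[j + k]) for i, j in pairs])
        log_d.append(np.log(d[d > 0]).mean())
    return np.array(log_d)                          # fit its initial slope
\end{verbatim}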
\newpage
\section{Chaotic Non-Equilibrium}
\label{Sec8}
\begin{figure}[hbt!]%
\centering
\includegraphics[width=0.9\textwidth]{./figures/figs7-rasterized.eps}
\caption{\textbf{A} Measured $Z_{22}^C$ impedance during a repeated test in which the liquid, thanks to the depicted control algorithm ({\tt Tol}\,=\,1, {\tt Tick}\,$\sim$\,0.7\,s) is brought to two particular impedance values, $Z_{22}^C\rvert_\mathrm{LOW}$\,=\,16250\,$\Omega$ and $Z_{22}^C\rvert_\mathrm{HIGH}$\,=\,16300\,$\Omega$. The plot shows chaos in the oscillatory regimes. \textbf{B} Applied voltage $V_P$ during the experiment. \textbf{C} Summary of the measurement scheme depicting where the control algorithm intervenes to bias the material to the two particular impedance points.}\label{figs4}
\end{figure}
\FloatBarrier
In view of the reported properties of the FF, and to confirm the obtained results, we have run an experiment to further demonstrate that the system is chaotic and features a chaotic non-equilibrium, which could also explain the evolution of the hysteresis curves over time. Fig.~\ref{figs4} shows the results of this experiment. The test consists of repeatedly setting two impedance values, with a control algorithm that enforces $V_P$\,=\,3.3\,V or -3.3\,V across the liquid to set the impedance state to $Z_{22}^C\rvert_\mathrm{HIGH}$ and $Z_{22}^C\rvert_\mathrm{LOW}$, respectively. In between stimuli, we applied pauses of 10\,s in which the voltage $V_P$ is zero (see the measurement scheme of Fig.~\ref{figs4}C). The algorithm applies a fixed voltage until the impedance reaches a predefined set point, but the reached impedance value is not exactly the same as in the previous iteration, because the measurement accuracy of the set-up is finite. This way, the initial conditions of the next iteration are slightly perturbed, and we can then determine whether the response of the liquid varies over time. The results in Fig.~\ref{figs4}A depict the switching between various oscillatory states with different rates across $Z_{22}^C$. In particular, see Fig.~\ref{figs4}B, when the voltage $V_P$ applied across the material is fixed (in this example 3.3\,V, but our findings show that the same occurs for -3.3\,V biases), the impedance variation that is expected to be monotonically increasing instead varies with a more complex trend, separating the oscillatory states into subgroups. The presence of multiple oscillatory regimes can be attributed to metastable attractors, while the overall resulting complex oscillation can be regarded as chaotic.
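The set-point control loop can be summarized by the following sketch (Python), in which the instrument interface is replaced by a toy impedance model; the tolerance and tick values follow Fig.~\ref{figs4}, while the toy response coefficient is only an assumption used to make the sketch self-contained.
\begin{verbatim}
# Sketch of the set-point control loop (Tol = 1 Ohm, Tick ~ 0.7 s); the real
# set-up would interface the voltage source and the VNA readout instead.
class ToyColloid:
    def __init__(self, z0=16275.0):
        self.z = z0
    def apply_voltage(self, v, dt):
        self.z += 0.3 * v * dt      # assumed response: dZ proportional to V_P
    def measure_z22(self):
        return self.z

def drive_to(dev, target, tol=1.0, tick=0.7):
    while abs(dev.measure_z22() - target) > tol:
        v = 3.3 if target > dev.measure_z22() else -3.3  # 3.3 V raises Z_22^C
        dev.apply_voltage(v, tick)
    dev.apply_voltage(0.0, 10.0)    # 10 s pause with V_P = 0 V between stimuli

dev = ToyColloid()
for _ in range(3):                  # alternate between the two set points
    drive_to(dev, 16300.0)          # Z_22^C |HIGH
    drive_to(dev, 16250.0)          # Z_22^C |LOW
\end{verbatim}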
Since the FF is a chaotic dynamical system from a DC/RF viewpoint, it can always be seen as an Analog Recurrent Neural Network (ARNN) \cite{bibmst}, and therefore as an ensemble of nodes of a reservoir. Given the high degree of complexity the system exhibits, we conclude that the colloid can be considered not only a liquid synapse but an ensemble of more complex, inter-related nodes: a liquid network. Considering the measurements of Fig.~\ref{figs4}, a possible explanation of the phenomena can be traced back to a network of oscillators. Indeed, each particle of the FF can be seen as a single superparamagnetic oscillator coupled via dipolar magnetic fields to those surrounding it. Considering the size of the particles in the FF (nanometric) and the material itself, magnetite, the spin resonance frequency is on the order of 1--3\,GHz, within the range of the conducted experiments. Hence, the RF field applied to read out the status of the material certainly includes the effects of the ferromagnetic resonance of these oscillators. The effect of the DC stimulus could be explained by the application of a torque on the oscillators with respect to their natural position, generated on each local spin by the magnetic field.
\newpage
\section{Physical Reservoir Computing -- Readout Neural Network Training and Notes}
\label{Sec10}
\begin{figure}[hbt!]%
\centering
\includegraphics[width=0.9\textwidth]{./figures/figs8-rasterized.eps}
\caption{\textbf{A} Complete $Z_{22}^C$ values corresponding to 50 serializations of the digits {\tt 0}--{\tt 3} (overall 4\,$\times$\,50), used for the training of the readout neural network of the PRC. Although the 'reset' condition keeps the dynamical properties of the material alive, the outputs show variability. \textbf{B} Superposed $Z_{22}^C$ values for each digit for all the 50 iterations. \textbf{C} and \textbf{D} Histograms of the values of $Z_{22}^C$ for the four digits at the first iteration (left) and at the last iteration (right).} \label{figs7}
\end{figure}
\FloatBarrier
Fig.~\ref{figs7}A shows the complete waveform of $Z_{22}^C$ used to train the neural network depicted in Fig.~4 (main manuscript), later used to perform real-time detection. As shown in Fig.~\ref{figs7}A, the periodical streaming of the digits (repeated {\tt 0}--{\tt 3} sequence) leads to a high variability in the impedance values, even with a 'reset' sequence that maintains the liquid in a dynamical condition. Notwithstanding this, the detail of each digit given in Fig.~\ref{figs7}B shows that the initial impedance value is successfully set thanks to the control loop, which is here set to a unilateral tolerance of 2.5\,$\Omega$.
A detail of the distribution of the impedance values is given in Fig.~\ref{figs7}C--D, which show histograms of both the first and the last $Z_{22}^C$ values for all digits, with normal distribution fits (mean and standard deviation, $\mu$ and $\sigma$).
The 'reset' sequence, used to counterbalance the chaotic features of the FF, enables the use of a normalization layer as the first element of the neural network.
Observe that the 'reset' sequence can be considered as a means to enforce the Echo State Property (ESP) \cite{esp}, a key requirement to achieve training using only the output of the reservoir. The FF provides both long-term and short-term information storage, and a required condition for the ESP is that the effect of the initial conditions should vanish as time passes. Without a 'reset' sequence, this property is definitely not verified for our reservoir on the time scale required for digit classification. Waiting for the effect of the inputs to fade out completely would make RC unreasonably slow.
In memristor PRCs, depending on the specific computing application, the readout NN can be, for instance, a Convolutional Neural Network (CNN) to account for the time dependency of subsequent samples in real-time detection of firing patterns, or a fully connected NN for pattern recognition \cite{bib67}. Here, the monotonically decreasing impedance curve ensured by the 'reset' sequence permits a readout network implementation based on fully connected {\tt Dense} layers. The readout layer given in Fig.~4 (main manuscript) provides excellent training results, i.e., a 3\,$\times 10^{-4}$ loss for normalized outputs ({\tt 0}--{\tt 3}\,$\rightarrow$\,0--1) over 2000 epochs, resulting in a 0.001 Root-Mean-Square Error (RMSE). Hence, this NN permits inference on new real-time data by easily counterbalancing the chaotic variations of our reservoir. However, the definition of RC typically refers to a single output layer that maps the higher-dimensional space projected by the reservoir [which internally implements a Recurrent Neural Network (RNN)] to the desired features. This corresponds, in our specific case, to a single fully connected neuron that implements our one-dimensional digit mapping {\tt 0}--{\tt 3} (normalized between 0 and 1).
Although with lower training accuracy compared to the previous case, by using a single {\tt Dense} layer (sigmoid activation function), input normalization and the dataset given in Fig.~\ref{figs7}, we obtained a loss of 0.044 over 25000 epochs (0.062 RMSE). By using two {\tt Dense} layers, the first with four neurons and the second with one neuron, with input normalization we obtained a 0.015 loss over 25000 epochs, for a 0.026 RMSE. These values are still reasonable for implementing PRC with a lower-complexity readout NN.
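As a reference, the single-neuron readout variant can be sketched as follows (Keras/Python). The layer structure, the normalization and the 25000 epochs follow the text; the optimizer, the MSE loss (implied by the quoted RMSE figures) and the placeholder training arrays standing in for the measured $Z_{22}^C$ curves are assumptions.
\begin{verbatim}
# Keras sketch of the single-neuron readout (illustrative only).
import numpy as np
import tensorflow as tf

n_features = 64                               # one Z_22^C sample per pixel
x_train = np.random.rand(200, n_features)     # placeholder for measured curves
y_train = np.random.randint(0, 4, 200) / 3.0  # digits 0-3 normalized to 0-1

norm = tf.keras.layers.Normalization()
norm.adapt(x_train)
model = tf.keras.Sequential([
    norm,                                     # input normalization layer
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x_train, y_train, epochs=25000, verbose=0)
\end{verbatim}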
\section{Introduction}
\label{sec:intro}
There are several examples of face recognition systems using a single sample per person (SSPP) in daily life, such as applications based on an ID card or e-passport \cite{lu2013discriminative}. Despite its importance in the real world, there are several unresolved issues associated with implementing SSPP-based systems. In this paper, we address two such difficulties and propose deep domain adaptation with image synthesis to resolve them.
The first issue encountered when using SSPP is the heterogeneity of the shooting environment between the gallery and probe sets \cite{xie2015blurred}. In real-world scenarios, the photo used in an ID card or e-passport is captured in a very stable environment and is often used as a gallery image. On the other hand, probe images are captured in a highly unstable environment using equipment such as surveillance cameras. The resulting images include noise, blur, arbitrary poses, and illumination variations, which makes recognition difficult.
\begin{figure}[t]
\vspace*{-0.07in}
\centering
\subfloat[]{\includegraphics[height=6.6\baselineskip]{real1}\label{fig:real1}}
\hfill
\subfloat[]{\includegraphics[height=6.6\baselineskip]{real2}\label{fig:real2}}
\hfill
\subfloat[]{\includegraphics[height=6.6\baselineskip]{real3}\label{fig:real3}}
\vspace*{-0.1in}
\caption{Examples of (a) a stable gallery image (source domain) (b) synthetic images generated to overcome the lack of gallery samples (source domain) (c) unstable probe images that include blur, noise, and pose variation (target domain) }
\label{fig:real_scenario}
\vspace*{-0.2in}
\end{figure}
\begin{figure*}
\centering
\includegraphics[width=0.89\linewidth]{overallflow}
\vspace*{-0.08in}
\caption{
{
Outline of the SSPP-DAN. Image synthesis is used to increase the number of samples in the source domain. The feature extractor and two classifiers are used to bridge the gap between source domain (i.e., stable images) and target domain (i.e., unstable images) by adversarial training with gradient reversal layer (GRL). }}
\label{fig:overallflow}
\vspace*{-0.2in}
\end{figure*}
To address this issue, we approach SSPP face recognition from the perspective of domain adaptation (DA). Generally, in DA, a mapping between the source domain and the target domain is constructed, such that the classifier learned for the source domain can also be applied to the target domain. Inspired by this, we regard the stable shooting condition of the gallery set as the source domain and the unstable shooting condition of the probe set as the target domain, as shown in Fig.~\ref{fig:real_scenario}. To apply DA in a unified deep architecture, we use a deep neural network with domain-adversarial training, in the manner proposed in \cite{icml2015_ganin15}. The benefit of this approach is that labels in the target domain are not required for training, i.e., the approach accommodates unsupervised learning.
The second challenge in using SSPP is the shortage of training samples \cite{wright2009robust}. In general, the lack of training samples affects any learning system adversely, but it is more severe for deep learning approaches. To overcome this, we generate synthetic images with varying poses using a 3D face model \cite{zhang2008spacetime}, as shown in Fig.~\ref{fig:real_scenario} (center). Unlike SSPP methods that are based on external datasets \cite{wright2009robust, su2010adaptive, deng2012extended}, we generate virtual samples from the SSPP gallery set itself. The proposed method also differs from conventional data augmentation methods that use crops, flips, and rotations \cite{parkhi2015deep, zhang2016localize} in that it relies on well-established techniques such as facial landmark detection and alignment, which take realistic facial geometric information into account. We propose SSPP-DAN, a method that combines face image synthesis and a deep DA network to enable realistic SSPP face recognition.
To validate the effectiveness of SSPP-DAN, we constructed a new SSPP dataset called ETRI-KAIST Labeled Faces in the Heterogeneous environment (EK-LFH). In this dataset, the gallery set was captured using a webcam in a stable environment, and the probe set was captured using surveillance cameras in an unconstrained environment. The experimental results validate that DA and image synthesis complement each other, eventually yielding a drastic improvement of 19.31 percentage points over the baseline that uses neither DA nor image synthesis. Additionally, we performed experiments on the SSPP protocol of the Labeled Faces in the Wild (LFW) benchmark \cite{wolf2011effective} to demonstrate the generalization ability of the proposed approach and confirmed state-of-the-art performance.
The main contributions of this study are as follows:
({\romannumeral 1}) We propose SSPP-DAN, a method that combines face synthesis and deep architecture with domain-adversarial training.
({\romannumeral 2}) To address the lack of realistic SSPP datasets, we construct a dataset whose gallery and probe sets are obtained from very different environments.
({\romannumeral 3}) We present a comparative analysis of the influence of DA with the face benchmark as well as with the EK-LFH dataset.
\vspace*{-0.05in}
\section{Related Works}
\label{sec:related}
A number of methods based on techniques such as image partitioning and generic learning have been proposed to address the shortage of training samples in SSPP face recognition. Image partitioning based methods augment samples by partitioning a face image into local patches \cite{lu2013discriminative, yan2014multi}. Although these techniques efficiently obtain many samples from a single subject, the geometric information of the local patch is usually ignored. There have been attempts to use external generic sets \cite{wright2009robust, su2010adaptive, deng2012extended} by assuming that the generic set and the SSPP gallery set share some intra-class and inter-class information \cite{pei2017decision}. In this study, we augment virtual samples from the SSPP gallery set instead of using an external set.
Several studies have proposed the application of DA to face recognition. Xie et al. \cite{xie2015blurred} used DA and several descriptors such as LBP, LPQ, and HOG to handle the scenario in which the gallery set consists of clear images and the probe set contains blurred images. Banerjee et al. \cite{banerjee2016domain} proposed a technique for surveillance face recognition using DA and a bank of eight descriptors such as Eigenfaces, Fisherfaces, Gaborfaces, FV-SIFT, and so on. Unlike the above approaches, which apply DA after extracting handcrafted features from the image, we jointly perform feature learning, DA, and classification in an integrated deep architecture. Moreover, we solve the SSPP problem and consider pose variations, unlike the abovementioned approaches that only use frontal images.
A face database of surveillance camera images, called SCface, was proposed in \cite{grgic2011scface}. In SCface, only one person appears in each image and the subjects are photographed at a fixed location. In contrast, the images in our dataset were captured in an unconstrained scenario in which 30 people were walking in the room, which induced more noise, blur, and partial occlusions.
\vspace*{-0.05in}
\section{Proposed Method}
\label{sec:method}
SSPP-DAN consists of two main components: virtual image synthesis and deep domain adaptation network (DAN) that consists of feature extractor and two classifiers. The overall flow of SSPP-DAN is illustrated in Fig.~\ref{fig:overallflow}.
\vspace*{-0.07in}
\subsection{Virtual Image Synthesis}
\label{sec:virtual}
The basic assumption in DA is that samples are abundant in each domain and that the sample distribution of each domain is similar but different (i.e., shifted from the source domain to the target domain \cite{shimodaira2000improving}). However, in the problem under consideration, there are few samples in the source domain (i.e., SSPP). In such an extreme situation, it is difficult to apply DA directly and, eventually, the mechanism will fail. To address this problem, we synthesize images with changes in pose, which improves the feature distribution obtained from the face images.
For image synthesis, we first estimate nine facial landmark points from the source domain. We use the supervised descent method (SDM) \cite{xiong2013supervised} because it is robust to illumination changes and does not require a shape model in advance. We then estimate a transformation matrix between the detected 2D facial points and the landmark points in the 3D model \cite{zhang2008spacetime, zhu2014mirror} using a least-squares fit. Finally, we generate synthetic images in various poses, and these are added to the source domain as shown in Fig.~\ref{fig:concept_space}.
\begin{figure}[t]
\centering
\subfloat[
DA fails to work because of the lack of samples in the source domain]{\includegraphics[width = 1\linewidth]{fig3_a}\label{fig:real1}}
\vspace*{-0.1in}
\hfill
\subfloat[
Virtual samples along the pose axis enable successful DA, resulting in a discriminative embedding space]{\includegraphics[width = 1\linewidth]{fig3_b}\label{fig:real2}}
\caption{
Facial feature space (left) and its embedding space after applying DA (right). The subscript s and t in the legend refer to the source and target domains, respectively.
}
\label{fig:concept_space}
\vspace*{-0.15in}
\end{figure}
\vspace*{-0.07in}
\subsection{Domain Adaptation Network}
\label{ssec:DomAdaptModel}
While the variations in pose between the distributions of the two domains can be made similar by the image synthesis $S$, other variations such as blur, noise, partial occlusion, and facial expression remain. To resolve the remaining differences between the two domains using DA, we use a deep network that consists of a feature extractor $F$, a label classifier $C$, and a domain discriminator $D$. Given an input sample, it is first mapped to a feature vector through $F$. There are two branches from the feature vector: the label (identity) is predicted by $C$ and the domain (source or target) is predicted by $D$, as shown in Fig.~\ref{fig:overallflow}.
Our aim is to learn deep features that are discriminative on the source domain during training. For this, we update the parameters of $F$ and $C$, $\theta_F$ and $\theta_C$, to minimize the label prediction loss. At the same time, we aim to train features from the labeled source domain that are discriminative in the unlabeled target domain as well (recall that we consider unsupervised DA). To obtain domain-invariant features, we attempt to find a $\theta_F$ that maximizes the domain prediction loss, while simultaneously searching for the parameters of $D$ ($\theta_D$) that minimize the domain prediction loss. Taking all these aspects into consideration, we set the network loss as
\vspace*{-0.15in}
\begin{equation}
\begin{split}
L &= \sum _{i\in {S}}{L}_{C}^i + \sum_{i\in {S}\cup{T}}{L}_{D}^i \:\,\,\,\qquad\textrm{when update}\,\theta_D\\
L &= \sum _{i\in {S}}{L}_{C}^i - \lambda\sum_{i\in {S}\cup{T}}{L}_{D}^i \qquad\textrm{when update}\, \theta_F, \theta_C \\
\end{split}
\label{eq:loss}
\end{equation}
where ${L}_{C}^i$ and ${L}_{D}^i$ represent the loss of label prediction and domain prediction evaluated in the $i$-th sample, respectively. Here, $S$ and $T$ denote a finite set of indexes of samples corresponding to the source and target domains.
The parameter $\lambda$ is the most important aspect of this equation. The negative sign in front of $\lambda$ leads to an adversarial relationship between $F$ and $D$ in terms of the loss, and its magnitude adjusts the trade-off between them. As a result, during the minimization of the network loss $L$, the parameters of $F$ converge to a compromise point that is discriminative and satisfies domain invariance.
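For clarity, the adversarial update of Eq.~(\ref{eq:loss}) can be realized with a gradient reversal layer (GRL), sketched below in PyTorch; the commented training-step lines are only indicative of how the two losses are combined, not the exact implementation.
\begin{verbatim}
# PyTorch sketch of a gradient reversal layer realizing the network loss
# above: the domain loss is minimized w.r.t. theta_D, while its gradient is
# scaled by -lambda before reaching F and C.
import torch
from torch.autograd import Function

class GradReverse(Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# feat = F(images)
# loss = ce(C(feat[src]), labels_src) + ce(D(grad_reverse(feat, lam)), domains)
# loss.backward()
\end{verbatim}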
\vspace*{-0.05in}
\section{Experimental Results}
\vspace*{-0.07in}
\subsection{Experimental Setup}
\label{ssec:Network}
In all experiments, the face region was detected using the AdaBoost detector trained on Faces in the Wild \cite{berg2005s}. For feature learning, we fine-tuned a pre-trained CNN model, VGG-Face \cite{parkhi2015deep}, used it as the feature extractor $F$, and attached shallow networks as the label classifier $C$ (1024 - 30) and the domain discriminator $D$ (1024 - 1024 - 1024 - 2).
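For completeness, the shallow heads attached to the VGG-Face features can be written as follows (PyTorch sketch), following our reading of the layer sizes quoted above; the 4096-dimensional VGG-Face feature and the ReLU activations are assumptions.
\begin{verbatim}
# PyTorch sketch of the heads C (1024-30) and D (1024-1024-1024-2).
import torch.nn as nn

feat_dim = 4096
label_classifier = nn.Sequential(
    nn.Linear(feat_dim, 1024), nn.ReLU(inplace=True),
    nn.Linear(1024, 30),                    # 30 subjects in EK-LFH
)
domain_discriminator = nn.Sequential(
    nn.Linear(feat_dim, 1024), nn.ReLU(inplace=True),
    nn.Linear(1024, 1024), nn.ReLU(inplace=True),
    nn.Linear(1024, 1024), nn.ReLU(inplace=True),
    nn.Linear(1024, 2),                     # source vs. target
)
\end{verbatim}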
\vspace*{-0.07in}
\subsection{Evaluation on EK-LFH}
\label{sec:exprOurDB}
Owing to the lack of a dataset suitable for real-world SSPP, we constructed the EK-LFH dataset containing 15,930 images of 30 subjects. Table~\ref{table:dataset_detail} shows the details of the dataset. The webcam set was used as the source domain for training. In the surveillance set, 10,760 samples were used as unlabeled training data in the target domain, and the rest were used for testing. Example images are shown in Fig.~\ref{fig:dataset}.
\begin{table}[b]
\vspace*{-0.12in}
\caption{Dataset specification}
\label{table:dataset_detail}
\vspace*{-0.18in}
\begin{center}
\begin{tabular}{ l|c|c }
\hline
Domain & Source& Target \\ \hline\hline
Set & {webcam} & {surveillance} \\ \hline
Subjects & $30$ & $30$ \\ \hline
Samples & $30$ & $15,900$ \\ \hline
Pose & frontal & various \\\hline
\multirow{2}{*} {Condition} &\multirow{2}{*} {stable} & unstable \\
& & (blur, noise, illumination) \\ \hline
\end{tabular}
\end{center}
\vspace*{-0.05in}
\end{table}
\begin{figure}[h]
\vspace*{-0.05in}
\centering
\subfloat[Shooting condition for the source (left) and target (center and right)]{\includegraphics[width=\linewidth]{db1}\label{fig:db1}}
\vfill
\vspace*{-0.1in}
\subfloat[Face regions from the source (leftmost) and target (the others)]{\includegraphics[width=\linewidth]{db2}\label{fig:db2}}
\vspace*{-0.1in}
\caption{{Sample images in EK-LFH} }
\label{fig:dataset}
\end{figure}
To demonstrate the effectiveness of the proposed method, we performed evaluations using several models as shown in Table~\ref{table:result_mw} using the procedure followed in \cite{icml2015_ganin15}. The source-only model was trained using samples in the source domain, which revealed the theoretical lower bound on performance as 39.22\%. The train-on-target model was trained on the target domain with known class labels. This revealed the upper performance bound as 88.31\%. The unlabeled target domain as well as the labeled source domain were used in DAN and SSPP-DAN for unsupervised DA. Additionally, we evaluated the semi-supervised models using the same setting as DAN and SSPP-DAN, but by revealing only three labels per person in the target domain.
From Table~\ref{table:result_mw}, we clearly observe that SSPP-DAN with unsupervised as well as semi-supervised learning significantly improves accuracy. In particular, even when the labels of the target domain are not given, the accuracy of the proposed SSPP-DAN was 19.31 percentage points higher than that for source-only. The fourth and fifth rows validate the importance of image synthesis when applying unsupervised DA. Adding synthesized virtual images to the training set increased the performance by 27.42 percentage points. Interestingly, as shown in the third row, adding synthetic images to source-only degrades performance. This result indicates that image synthesis alone cannot solve the SSPP problem efficiently, instead DA and image synthesis operate complementarily in addressing the SSPP problem.
\begin{table}
\caption{Recognition rates (\%) for different models and different training sets of the EK-LFH
}
\label{table:result_mw}
\vspace*{-0.18in}
\begin{center}
\begin{tabular}{l|l|c }
\hline
Model & Training set & Accuracy \\\hline
\hline
\multirow{2}{*} {Source only} & $\textrm{S}$ & 39.22 \\
& $\textrm{S} + \textrm{S}_\textrm{v}$ & 37.15 \\
\hline
DAN &$\textrm{S} + \textrm{T}$ & 31.11 \\
\textbf{SSPP-DAN} & $\textrm{S} + \textrm{S}_\textrm{v} + \textrm{T}$ & \textbf{58.53} \\
\hline
{Semi DAN} & $\textrm{S} + \textrm{T} + \textrm{T}_\textrm{l}$ & 67.28 \\
\textbf{Semi SSPP-DAN} &$\textrm{S} + \textrm{S}_\textrm{v} + \textrm{T} + \textrm{T}_\textrm{l}$ & \textbf{72.08} \\
\hline
Train on target & $\textrm{T}_\textrm{l}$ & 88.31 \\
\hline
\multicolumn{3}{c}{$\textrm{S}$: Labeled webcam \quad $\textrm{T}$: Unlabeled surveillance} \\
\multicolumn{3}{c}{$\textrm{S}_\textrm{v}$: Virtual set from $\textrm{S}$ \quad $\textrm{T}_\textrm{l}$: Labeled surveillance} \\
\end{tabular}
\end{center}
\vspace*{-0.3in}
\end{table}
\vspace*{-0.07in}
\subsection{Evaluation on LFW for SSPP}
\label{sec:exprBenchmark}
In order to demonstrate the generalization ability of SSPP-DAN, we performed an additional experiment on the LFW using the proposed SSPP method. For a fair comparison with previous SSPP methods, we used LFW-a \cite{wolf2011effective} and followed the experimental setup described in \cite{yang2016joint}. The LFW for SSPP includes images from 158 subjects, each of whom has more than 10 samples, as well as the labels of all subjects. The first 50 subjects were used as probe and gallery, and the images of the remaining 108 subjects were used as a generic set. For the 50 subjects, the first image was used as the gallery set and the remaining images were used as the probe set.
Since the LFW did not consider DA originally, it made no distinction between source and target domain. Hence, we used the original generic set as the source domain and the synthetic images from the generic set as the target domain. We applied DA in a supervised manner to generate a discriminative embedding space. After training, we used the output of the last FC layer as the feature, and implemented prediction using the linear SVM. We also evaluated fine-tuned VGG-Face without image synthesis and DA.
Experiments on the benchmark confirm that VGG-Face-based methods, including ours, have superior discriminative power over the other approaches, as shown in Table~\ref{table:lfwResult}.
This indicates the generality of the deep features from VGG-Face trained on a large-scale dataset.
Comparing VGG-Face with the proposed method, it is apparent from this table that the combination of image synthesis and DA yields promising results on this `wild' dataset.
\begin{table}
\caption{Recognition rates (\%) on LFW dataset for SSPP}
\label{table:lfwResult}
\vspace*{-0.18in}
\begin{center}
\begin{tabular}{l|c|l|c}
\hline
Method & Accuracy & Method & Accuracy \\\hline
\hline
DMMA \cite{lu2013discriminative}& 17.8 & RPR \cite{gao2015neither} & 33.1 \\
AGL \cite{su2010adaptive} & 19.2 & DeepID \cite{sun2014deep} & 70.7 \\
SRC \cite{wright2009robust} & 20.4 & JCR-ACF \cite{yang2016joint} & 86.0 \\
ESRC \cite{deng2012extended} & 27.3 & VGG-Face \cite{parkhi2015deep}& 96.43 \\
LGR \cite{zhu2014local} & 30.4 & \textbf{Ours} & \textbf{97.91} \\
\hline
\end{tabular}
\end{center}
\vspace*{-0.18in}
\end{table}
\vspace*{-0.05in}
\section{CONCLUSION}
\label{sec:con}
This paper proposed a method based on integrated domain adaptation and image synthesis for SSPP face recognition, especially for cases in which the shooting conditions for the gallery image and the probe set are completely different. Synthetic images generated in various poses were used to deal with the lack of samples in the SSPP. In addition, a deep architecture with domain-adversarial training was used to perform domain adaptation, feature extraction, and classification jointly. Experimental evaluations showed that the proposed SSPP-DAN had an accuracy 19.31 percentage points higher than that of the source-only baseline even when the labels of the target domain were not given. Our method also achieved state-of-the-art results on the challenging LFW for SSPP. In future work, we plan to expand our approach to a fully trainable architecture including image synthesis as well as domain adaptation using standard back-propagation.
\bibliographystyle{IEEEbib}
\section{Introduction}
Video-based face hallucination (VFH) targets at recovering a high-resolution (HR) human face from low-resolution (LR) video frames. This task has gained long-standing interest in the research community~\cite{liu2007face,wang2014comprehensive,liu2021point,yu2016ultra,zhang2021recursive,liu2014feature} due to its wide applications such as emotion recognition~\cite{hassan2019human,liu2014facial} and video surveillance~\cite{zhou2019anomalynet}. As a domain-specific task in video super-resolution, VFH is far less explored due to its high complexity and requirement in jointly modeling both spatio-temporal and facial prior information.
Recent studies~\cite{chan2020basicvsr} have pinpointed two critical steps of VFH approaches, \emph{i.e.}, \emph{alignment} and \emph{aggregation}.
Previous methods typically conduct pairwise alignment and aggregate frames with estimated optical flow or deformations. Due to the large pose or expression changes in face video snapshots, these approaches may fail to establish frame-wise correspondences accurately, especially when the input videos are of very low resolution. Furthermore, the pairwise aggregation manner neglects the long-range temporal information offered by an integral sequence and risks accumulating alignment errors.
Last but not least, to the best of our knowledge, previous VFH methods often rely heavily on faces aligned beforehand in terms of facial landmarks. Considering that raw faces in LR videos are naturally tiny and blurry~\cite{yu2019semantic}, this drastically hinders the accuracy of face alignment. To date, how to effectively align and aggregate both spatio-temporal and facial prior information for VFH remains challenging.
In this paper, we propose a new video face hallucination method, dubbed VidFace, to super-resolve unaligned tiny face snapshots. VidFace is a full-transformer solver addressing alignment and aggregation in a unified scheme. It capitalizes on the contextual modeling ability of the attention mechanism to harness \emph{integral} information of \emph{all} the given snapshots, both spatially and temporally, for face hallucination.
Specifically, VidFace starts with splitting snapshots into overlapped patches and flattening them into spatio-temporal tokens.
Inspired by the structural design of the Token-to-Token network, we aggregate neighboring tokens iteratively to enrich the feature representation of facial details.
To further utilize the facial structural information, we propose novel recurrent landmark positional encodings (RLPE) to equip our transformer with facial priors. RLPE recurrently estimates the landmarks and the attentions together in a mutual-promotion manner, which regularises the feature alignment and aggregation without the need to pretrain the transformer.
We also present a new upsampling module, dubbed Detoken Upsampling (DeU). Unlike previous upsampling operators such as deconvolution and pixel shuffle, which operate on each local patch, we design a novel ``Detoken Transformer'' that decodes tokens into dilated ones to enlarge the latent feature resolution and then reconstructs these decoded tokens into HR images. In this dilation process, facial structure information is naturally incorporated with the help of global attention and the landmark positional encoding, leading to superior face hallucination performance.
Overall, our main contributions are summarized as follows:
\begin{itemize}
\vspace{-1em}
\item We present a unified pure transformer-based solver, namely VidFace, for video-based face hallucination, which integrally utilizes the long-range spatial and temporal information from the inputs. To the best of our knowledge, ours is the first attempt to super-resolve face videos with a pure transformer-based solver.
\item We propose a novel recurrent landmark positional encoding (RLPE) to equip our transformer with facial priors. RLPE is designed to recurrently estimate the landmarks and attentions together in a mutual promotion manner, significantly facilitating facial feature alignment and aggregation.
\item We curate a new large-scale benchmark dubbed TUFS-145K, in which the LR faces are unaligned, posing a more realistic and challenging scenario for existing baselines.
\item Extensive experiments demonstrate the new state-of-the-art face hallucination performance of the proposed VidFace.
\end{itemize}
\section{Related Work}
\textbf{Video Super-resolution.}
Dong \textit{et al.} proposed a seminal idea to tackle image super-resolution (ISR) with a Convolutional Neural Network~\cite{dong2014learning}. Based on this pioneering work, many deep learning-based approaches have been proposed for the video super-resolution (VSR) task~\cite{wang2019edvr,caballero2017real,xue2019video,Tao_2017_ICCV,kappeler2016video,jo2018deep,tian2020tdan}. Compared to ISR, VSR lays more emphasis on exploiting temporal alignment across frames. Some methods~\cite{kappeler2016video,caballero2017real,Tao_2017_ICCV} utilize optical flow to estimate the motions between frames. Xue \textit{et al.} proposed TOFlow~\cite{xue2019video}, an end-to-end trainable network that predicts task-oriented motion and fuses all input frames according to the estimated motion fields. Besides, several methods~\cite{jo2018deep,wang2019edvr,tian2020tdan} supplant flow estimation with implicit alignment mechanisms.
TDAN~\cite{tian2020tdan} employs deformable convolution to align neighbouring frames at the feature level. Based on~\cite{tian2020tdan}, EDVR~\cite{wang2019edvr} improves previous works with a Pyramid, Cascading and Deformable (PCD) alignment module to align multi-frames.
\textbf{Face Hallucination.}
Face hallucination, a domain-specific super-resolution task, aims at generating high-resolution facial images from low-resolution inputs. Based on deep learning, face super-resolution methods have been actively researched and have achieved impressive progress. Yu \textit{et al.} \cite{yu2016ultra} super-resolve aligned tiny face images with GAN-based models. In order to process unaligned face images, Yu \textit{et al.} \cite{yu2017face} insert multiple spatial transformer networks (STNs) into a generator to hallucinate LR images. Cao \textit{et al.} \cite{cao2017attention} learn a recurrent policy network and a local enhancement network through deep reinforcement learning. Zhang \textit{et al.} \cite{zhang2020copy} propose a two-branch super-resolution network (CPGAN) to upsample face images while compensating for low and nonuniform illumination.
\textbf{Vision Transformer.}
Transformer models have been heralded as a breakthrough in the Natural Language Processing (NLP) domain and are gradually being applied to a wide range of computer vision tasks, such as semantic segmentation~\cite{luo2018macro,luo2019significance,luo2021category,luo2019taking}, owing to their feature extraction ability and impressive performance. Yang \textit{et al.} \cite{yang2020learning} formulated LR and Ref images as queries and keys in a transformer, which contains a hard-attention module and a soft-attention module to discover deep feature correspondences and transfer rich textures to help super-resolve the input LR image. Different from Yang \textit{et al.} \cite{yang2020learning}, we formulate the central frame and the other frames as queries and keys, in order to search for relevant regions between them and fuse the corresponding features in a pyramid transformer block.
Our work does not follow the strategy of using local-perceptual operators such as optical flow or deformable convolution. Instead, we propose a pure transformer-based method that harnesses global information spatially and temporally to achieve better alignment. To our knowledge, ours is the first work to propose a full-transformer solution for VFH.
\section{Methodology}
\subsection{Problem Setting and Overall Idea}\label{sec_Problem setting}
Given $N$ unaligned tiny low-resolution face snapshots captured from a human video, denoted as $\{\bm{LR}_{n}\}$ where $n \in \{1, ..., N\}$, we aim at super-resolving the reference frame $\bm{LR}_{ref} \in \{\bm{LR}_{n}\}$ to restore the facial details. Our task can be clearly distinguished from traditional facial video super-resolution in two respects. First, our method operates on tiny, inconsecutive data, \emph{e.g.}, each snapshot is sized $16 \times 12$ with large pose or expression divergence. Second, the faces are not aligned to their landmarks across snapshots by any pre-processing. To our knowledge, such tiny, inconsecutive and unaligned facial snapshots pose a practical scenario that challenges all existing efforts. For convenience, we denote the other snapshots as auxiliary snapshots $\bm{LR}_{aux}$, which are utilized to supplement the reference one $\bm{LR}_{ref}$ in hallucination.
Our overall idea is to capitalize on the contextual modeling ability of the attention mechanism to harness all the spatial, temporal and facial prior information of the given face video snapshots. We hence propose VidFace, a unified transformer-based solver for VFH, which improves on existing methods in all of the \emph{alignment}, \emph{aggregation} and \emph{upsampling} steps. We detail VidFace in what follows.
\subsection{Network Architecture}
The network architecture of VidFace is illustrated in Fig.~\ref{fig_VidFace}. It is composed of an encoder $\mathbf{E}$, a refiner $\mathbf{R}$ and a decoder $\mathbf{D}$, wherein $\mathbf{E}$ is responsible for an initiatory spatio-temporal feature alignment and aggregation, $\mathbf{R}$ recurrently refines the latent features with facial structure prior and $\mathbf{D}$ generates the final HR face with a facial structure-aware upsampling.
\textbf{Encoder as Token-to-token Transformer.}
The design spirit of the encoder $\mathbf{E}$ is inspired by the Token-to-Token (T2T) ViT but introduces several improvements tailored for VFH. Given $N$ face video snapshots $\{\bm{LR}_{n}\}$ where $n \in \{1, ..., N\}$, we first $\texttt{Unfold}$ them to softly split the snapshots into overlapped patches. These patches are then flattened into tokens $\bm{T} = \{\bm{T}_n\}$ (where $\bm{T}_n$ denotes the tokens of $\bm{LR}_{n}$) and forwarded to a transformer layer for feature extraction. Finally, the output tokens from the transformer are restructured into an image again to bootstrap the next unfold-reconstruction iteration. For a (restructured) LR snapshot $\bm{LR}_n\in \mathbb{R}^{h\times w\times c}$, the number of output tokens can be calculated as:
\begin{eqnarray}
l_n=\left \lfloor \frac{h+2p-k}{s} + 1\right \rfloor \times \left \lfloor \frac{w+2p-k}{s} + 1\right \rfloor,
\end{eqnarray}
where $k$ is the patch size, $s$ is the stride and $p$ is the padding applied to $\bm{LR}_n$. Then the total number of tokens over the $N$ snapshots is $L = \sum_{n=1}^{N} l_{n}$.
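As a quick sanity check of this count, the following PyTorch snippet unfolds a $16\times 12$ snapshot with illustrative kernel, stride and padding values (not necessarily those used in our implementation):
\begin{verbatim}
# Numerical check of the token-count formula for a 16x12 LR snapshot.
import torch

h, w, k, s, p = 16, 12, 3, 1, 1
l_n = ((h + 2 * p - k) // s + 1) * ((w + 2 * p - k) // s + 1)   # 16*12 = 192
x = torch.randn(1, 3, h, w)
tokens = torch.nn.Unfold(kernel_size=k, stride=s, padding=p)(x)  # (1, 27, 192)
assert tokens.shape[-1] == l_n
\end{verbatim}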
Different from the vanilla T2T process, which reduces the number of tokens progressively, VidFace keeps the same number of tokens to maintain the resolution during encoding. VidFace also stacks the tokens from different layers to further enrich the features with multi-scale representations. As the neighboring (either intra- or inter-snapshot) tokens are iteratively aggregated into single tokens, the spatio-temporal information is naturally embedded into them. Moreover, the interaction between overlapped tokens also helps to extract more structural information such as edges and textures, which is pivotal for facial detail hallucination.
In order to maintain the spatial structure of the snapshots when splitting tokens, we add a spatial position encoding to the tokens, as in the vanilla transformer. It is noteworthy that we do not consider a temporal position encoding, since we assume that the ``snapshots'' are unordered, as opposed to ``frames''.
\begin{figure}
\centering
\includegraphics[scale=0.43]{figure/fig1.pdf}
\caption{The network architecture of VidFace.}
\label{fig_VidFace}
\end{figure}
\textbf{Refiner as Recurrent Landmark Transformer.}
We further refine the initiatory spatio-temporal tokens by introducing facial structure information, which has been proven to be a robust prior for regularising feature alignment \& aggregation~\cite{yu2017face,zhang2018super,grm2019face}. However, utilizing such a prior effectively in a transformer is non-trivial. Here we propose to additionally estimate the facial landmarks during training and to employ the estimated facial landmarks in the positional encoding. For the auxiliary landmark estimation task, we first use a linear projection to decrease the dimension of every token. All tokens from the same snapshot are then flattened into a one-dimensional feature. Followed by a linear layer and a landmark head, the $5$ landmark positions of the face are predicted as:
\begin{eqnarray}
\bm{LM}_{n} = \texttt{LMHead}(\texttt{MLP}(\texttt{Flatten}(\bm{T}_{n,0},...,\bm{T}_{n,l_n}))),
\end{eqnarray}
where $l_n=h\times w$ denotes the number of tokens of snapshot $LR_n$.
To represent the position encoding of each token in terms of landmarks, we employ the estimated facial landmarks as anchors. The landmark position encoding of token $\bm{T}_{n,i}$ is denoted by $\bm{RLPE}_{n, i} \in \mathbb{R}^{5}$, which reflects the distances between token $\bm{T}_{n,i}$ and the $5$ facial landmarks. Concretely, $\bm{RLPE}_{n, i}$ can be calculated as:
\begin{eqnarray}
\bm{RLPE}_{n, i} = [d(\bm{P}_{n, i}, lm_{n, 1}), ..., d(\bm{P}_{n, i}, lm_{n, 5})],
\end{eqnarray}
where $d(.)$ indicates the \emph{Euclidean} distance between two points in a snapshot, $\bm{P}_{n, i}$ is the position of token $\bm{T}_{n, i}$, and $\bm{lm}_{n, j}$ indicates the \emph{j}-th facial landmark of the \emph{n}-th snapshot obtained by the recurrent transformer, which serves as the anchor point when calculating $\bm{RLPE}_{n,i}$. In order to handle faces that occupy different sizes in images, we normalize $\bm{RLPE}$ (\emph{i.e.}, $\frac{\bm{RLPE}}{\left \| \bm{RLPE} \right \|_{1}}$) before concatenating it to the tokens. Since the auxiliary head is trained from scratch and the estimated landmarks are inaccurate in early iterations, we propose to recurrently learn the facial landmarks and the attention maps together in a mutual-promotion manner. Specifically, the yielded landmarks are recurrently embedded into the position encoding and, together with the refined tokens, bootstrap the next refinement iteration, as shown in Fig.~\ref{fig_VidFace}. Afterwards, the final tokens are forwarded to the decoder.
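The computation of $\bm{RLPE}$ for one snapshot can be sketched as follows (PyTorch); the snapshot size and the random landmark positions are placeholders used only for illustration:
\begin{verbatim}
# PyTorch sketch of RLPE: every token position is encoded by its
# L1-normalized Euclidean distances to the five estimated landmarks.
import torch

def rlpe(token_pos, landmarks):
    # token_pos: (l_n, 2) token positions; landmarks: (5, 2)
    d = torch.cdist(token_pos, landmarks)         # (l_n, 5) distances
    return d / d.sum(dim=1, keepdim=True)         # per-token L1 normalization

h, w = 16, 12
ys = torch.arange(h).repeat_interleave(w)
xs = torch.arange(w).repeat(h)
token_pos = torch.stack([xs, ys], dim=1).float()  # one token per pixel
landmarks = torch.rand(5, 2) * 12                 # stand-in for estimates
enc = rlpe(token_pos, landmarks)                  # concatenated to the tokens
\end{verbatim}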
\textbf{Decoder as Detoken Upsampling.}
We propose Detoken Upsampling (DeU) to conduct the facial structure-aware upsampling, which reconstructs the latent tokens into $8\times$ facial images, expressed as:
\begin{eqnarray}
\bm{T}^{l\times d_{model}} \mapsto \bm{HR}^{8h \times 8w \times 3},
\end{eqnarray}
where $d_{model}$ represent the input channels, respectively. DeU is composed of two key modules, \emph{i.e.}, \emph{Detoken Transformer (DeT)} and \emph{Tokens to Image (T2I)}.
\emph{Detoken Transformer (DeT)} is designed to overcome the limitation of deconvolution~\cite{noh2015learning} and sub-pixel convolution~\cite{shi2016real}, namely their inherent local perceptivity.
DeT decodes tokens into dilated ones to enlarge the latent feature resolution. The feature of every token has the dimension $d_{out}*k*k$, where $k$ is the kernel size used in T2I and $d_{out}$ is the output dimension of T2I. In this dilation process, facial structure information is naturally employed with the help of global attention and the landmark positional encoding.
\begin{wrapfigure}{r}{8.5cm}
\centering
\includegraphics[width=0.5\textwidth]{figure/fig2.pdf}
\caption{\footnotesize The architectural overview of DeU.}
\label{fig_DeU}
\end{wrapfigure}
As depicted in Fig.~\ref{fig_DeU}, for video tokens $\bm{T}$ whose feature dimension is $d_{model}$, we utilize multi-head attention~\cite{vaswani2017attention} to extract the $d_{q}$-dimensional queries, $d_{k}$-dimensional keys and $d_{v}$-dimensional values $h$ times with different, learnable linear projections. We remove the vanilla feed-forward layer after the multi-head attention to maintain the global perceptual field. Formally, the self-attention process can be expressed as:
\begin{equation}
\begin{aligned}
DeT(\bm{T}) &= \texttt{Concat}(\texttt{head}_{1}, \dots,\texttt{head}_{h}) + \texttt{Concat}(\bm{TW}_{1}^{V},\dots,\bm{TW}_{h}^{V}),\\
where \ \texttt{head}_{i} &= \texttt{Attention}(\bm{TW}_{i}^{Q}, \bm{TW}_{i}^{K}, \bm{TW}_{i}^{V}),
\end{aligned}
\end{equation}
where the projection matrices are $\bm{W}_{i}^{Q}\in \mathbb{R}^{d_{model}\times d_{q}}$, $\bm{W}_{i}^{K}\in \mathbb{R}^{d_{model}\times d_{k}}$ and $\bm{W}_{i}^{V}\in \mathbb{R}^{d_{model}\times d_{v}}$. For instance, when using $d_{q} = d_{k} = 32$, $d_{v}=1024$ and $h = 8$, the token feature is expanded and can then be folded into an $8\times$ feature map using T2I.
\emph{Tokens to Image (T2I)} is the final step of face hallucination, which reconstructs tokens into images. We take the tokens $\bm{T}'_{n} \in \bm{T}'$ of snapshot $n$ as an example to illustrate T2I, as shown in Fig.~\ref{fig_DeU}, where $\bm{T}'$ denotes the overall video tokens generated by DeT. The reconstruction process can be formulated as:
\begin{eqnarray}
\bm{HR}_{n} = \texttt{Reshape}(\texttt{MLP}(\texttt{T2I}(\bm{T}_{n}'))).
\end{eqnarray}
Inspired by the implementation of the deconvolutional layer, we implement \texttt{T2I} with \texttt{fold} (a.k.a.\ \emph{col2im}) as follows:
\begin{eqnarray}
\texttt{T2I}(\bm{T}_{n}') = \texttt{Fold}(\bm{T}_{n}', k, s, p),
\end{eqnarray}
where $k, s, p$ denote the kernel size, stride and padding used in \texttt{Fold}, respectively. For instance, when setting $k=16$, $s=8$ and $p=4$ as in our work, we can upsample LR snapshots into $8\times$ HR faces, which can be regarded as the inverse operation of T2T.
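A minimal sketch of this fold-based reconstruction, using \texttt{torch.nn.functional.fold}, is given below; the per-token dimension, $d_{out}$, and the $128\times 96$ output size (i.e.\ $8\times$ a $16\times 12$ snapshot) are illustrative assumptions consistent with the settings quoted above.
\begin{verbatim}
import torch
import torch.nn.functional as F

def t2i(tokens, d_out=4, k=16, s=8, p=4, out_hw=(128, 96)):
    # tokens: (N, L, d_out*k*k); each token holds one k x k patch of d_out channels
    patches = tokens.transpose(1, 2)       # (N, d_out*k*k, L), as expected by fold
    return F.fold(patches, output_size=out_hw, kernel_size=k, stride=s, padding=p)

# 16*12 = 192 tokens of dimension 4*16*16 = 1024 fold into a (N, 4, 128, 96) map
feat = t2i(torch.randn(2, 192, 1024))
print(feat.shape)                          # torch.Size([2, 4, 128, 96])
\end{verbatim}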
\subsection{Training Objective}
The training objective of VidFace is twofold. For the main task of face hallucination, we adopt the Charbonnier loss \cite{li2020mucan} to produce better refined edges during training. Specifically, the Charbonnier loss is defined as:
\begin{equation}
\begin{aligned}
\mathcal{L}_{pix} = \sqrt{{\left \| \hat{\bm{HR}}_{n} - \bm{HR}_{n}\right \|} ^{2} + \epsilon^{2}} \,,
\end{aligned}
\end{equation}
where $\hat{\bm{HR}}_{n}$ is the predicted high-resolution result, and $\epsilon$ is a small constant.
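For concreteness, a minimal sketch of this pixel-reconstruction loss is shown below; averaging over pixels and the value of $\epsilon$ are common choices that we assume here.
\begin{verbatim}
import torch

def charbonnier_loss(pred, target, eps=1e-6):
    # differentiable L1-like penalty: sqrt(diff^2 + eps^2), averaged over pixels
    return torch.sqrt((pred - target) ** 2 + eps ** 2).mean()
\end{verbatim}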
For the auxiliary task of landmark estimation, we use the robust smooth-L$_1$ loss defined in~\cite{girshick2015fast} to optimize the five facial landmarks. Specifically, smooth-L$_1$ is defined as $\mathcal{L}_{pts} (\bm{lm}_n, \bm{lm}^{*}_n)$, where $\bm{lm}_n=\{lm_{x_1}, lm_{y_1}, \dots , lm_{x_5}, lm_{y_5}\}_n$ and $\bm{lm}^{*}_n=\{lm^{*}_{x_1}, lm^{*}_{y_1}, \dots , lm^{*}_{x_5}, lm^{*}_{y_5}\}_n$ represent the five predicted facial landmarks and the ground truth, respectively.
\begin{equation}
\mathcal{L}_{pts} (\bm{lm}_n, \bm{lm}^{*}_n)=
\begin{cases}
0.5 \left ( \bm{lm}_{n}-\bm{lm}_{n}^{*} \right )^{2},
& \text{if } \left | \bm{lm}_{n}-\bm{lm}_{n}^{*} \right | <1, \\
\left | \bm{lm}_{n}-\bm{lm}_{n}^{*} \right | -0.5,
& \text{otherwise}.
\end{cases}
\end{equation}
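We note that this piecewise definition coincides with the standard smooth-L$_1$ (Huber-type) loss with threshold $1$; a minimal sketch using the corresponding PyTorch primitive is shown below (the $(x,y)$ coordinate layout of the landmark vectors is assumed for illustration).
\begin{verbatim}
import torch
import torch.nn.functional as F

lm_pred = torch.randn(16, 10)   # 5 predicted (x, y) landmark pairs per sample
lm_gt   = torch.randn(16, 10)   # ground-truth landmarks
loss_pts = F.smooth_l1_loss(lm_pred, lm_gt, beta=1.0)  # 0.5*d^2 if |d|<1, else |d|-0.5
\end{verbatim}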
Therefore, the final loss is formulated as:
\begin{equation}
\begin{aligned}
\mathcal{L}_{total} = \sum_{n}\left (\lambda_{1} \mathcal{L}_{pix}+\lambda_{2} \mathcal{L}_{pts} \right),
\end{aligned}
\end{equation}
where $\lambda_{1}$ and $\lambda_{2}$ denote hyper-parameters controlling the relative importance of the two losses.
\section{Experiments}
\label{sec_exp}
To demonstrate the advantages of VidFace, we have conducted extensive experiments on both public and our newly curated datasets. We first briefly introduce the evaluation datasets, the corresponding evaluation protocols, and the implementation details. Then, we perform comprehensive comparisons to verify the superiority of VidFace over all the compared state-of-the-art approaches. Finally, we perform an in-depth ablation study on each module to pinpoint its contribution to VFH.
\subsection{Datasets and Evaluation Protocols}
\begin{figure}
\centering
\includegraphics[width=0.99\textwidth]{figure/fig5.pdf}
\caption{\footnotesize Some facial video snapshot examples of TUFS-145K.}
\label{fig_tufs}
\end{figure}
We employ two datasets for a comprehensive evaluation of VidFace. \textbf{IJB-C} is a video-based face database which contains $3,531$ subjects with $31.3K$ still images and $117.5K$ frames from $11,779$ videos. We filter out overly short videos with fewer than $4$ frames, which results in a subset of $6.4K$ video clips. \textbf{TUFS-145K} originates from the public \emph{voxceleb2}~\cite{chung2018voxceleb2} dataset. We first detect and crop the face bounding boxes from the videos every few frames with the PyramidBox tool. We then score the candidate clips with (1) FaceQnet, to judge the quality of the cropped faces, and (2) 3DDFA, to estimate the pose variance. By jointly considering these two scores, we select 145K 7-frame clips from the candidates to constitute TUFS-145K, which has favourable face quality and large pose variance. TUFS-145K is divided into a training set of $116K$ video clips, a validation set of $14.5K$ clips and a testing set of $14.5K$ clips. During network training, we employ Bicubic downsampling to generate the LR face video snapshots used as input data.
The estimated high-resolution images are evaluated with the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM), computed both on the $Y$ channel (\emph{i.e.}, luminance) of the transformed YCbCr space and in RGB space.
\subsection{Implementation Details}
We use PyTorch~\cite{paszke2017automatic} for our implementation. We employ T2T-ViT~\cite{yuan2021tokens} as the backbone. In order to stabilize training, we choose AdamW~\cite{loshchilov2018fixing} and Lookahead~\cite{zhang2019lookahead} as the optimizer. In addition, we set 0.05 as the maximum $l_{2}$ norm for gradient clipping. We train the network for a total of $600k$ iterations, and the learning rate is decayed with a cosine annealing scheduler~\cite{loshchilov2016sgdr} without restart. We resize the snapshots to $16 \times 12$ with the Bicubic operation as the LR input, and the length of the snapshots is set to $7$. These frames are shuffled during training to improve generalization. The batch size is set to $16$, the same as in MUCAN~\cite{li2020mucan} and EDVR~\cite{wang2019edvr}, in all experiments for fair comparison.
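A minimal sketch of this optimisation setup is given below; the learning rate, weight decay, and the stand-in model and loss are assumptions for illustration only, and the Lookahead wrapper (taken from its reference implementation) is only indicated in a comment.
\begin{verbatim}
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import CosineAnnealingLR

model = torch.nn.Linear(16 * 12 * 3, 10)        # stand-in for the VidFace network
optimizer = AdamW(model.parameters(), lr=2e-4, weight_decay=1e-2)
# optimizer = Lookahead(optimizer)              # wrapper from its reference implementation
scheduler = CosineAnnealingLR(optimizer, T_max=600_000)   # cosine decay, no restart

for step in range(3):                           # stand-in for the 600k-iteration loop
    x = torch.randn(16, 16 * 12 * 3)            # batch size 16
    loss = model(x).pow(2).mean()               # placeholder loss
    optimizer.zero_grad()
    loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.05)
    optimizer.step()
    scheduler.step()
\end{verbatim}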
\begin{table}[h]
\begin{center}
\caption{Quantitative evaluation on the TUFS-145K and IJB-C datasets.}
\label{tab_main}
\vspace{-2mm}
\scalebox{0.9}{
\begin{tabular}{lcccccccccccc}
\toprule
\multirow{4}{*}{\vspace{-2mm}Model} &
\multicolumn{5}{c}{Y-channel} & & \multicolumn{5}{c}{RGB} & \\
\cmidrule{2-6} \cmidrule{8-12}
& \multicolumn{2}{c}{TUFS-145K} & & \multicolumn{2}{c}{IJB-C} & & \multicolumn{2}{c}{TUFS-145K} & & \multicolumn{2}{c}{IJB-C} \\
\cmidrule{2-3} \cmidrule{5-6} \cmidrule{8-9} \cmidrule{11-12}
& PSNR & SSIM & & PSNR & SSIM & & PSNR & SSIM & & PSNR & SSIM \\
\hline
Bicubic & 25.72 & 0.7914 & & 23.86 & 0.7029 & & 24.31 & 0.7631 & & 22.44 & 0.6647 \\
EDVR~\cite{wang2019edvr} & 30.55 & 0.9119 & & 26.05 & 0.7968 & & 29.08 & 0.8957 & & 24.59 & 0.7675\\
MUCAN~\cite{li2020mucan} & 30.49 & 0.9113 & & 26.11 & 0.7988 & & 29.00 & 0.8949 & & 24.64 & 0.7693\\
VidFace (Ours) & \textbf{30.94} &\textbf{ 0.9181} & & \textbf{26.35} & \textbf{0.8076} & & \textbf{29.46} & \textbf{0.9026} & & \textbf{24.89} & \textbf{0.7795}\\
\bottomrule
\end{tabular}
}
\end{center}
\end{table}
\begin{figure}[t]
\centering
\includegraphics[width=0.99\textwidth]{figure/fig3.pdf}
\caption{\footnotesize Qualitative comparison on the TUFS-145K dataset for $8\times$ video face snapshot SR. The proposed VidFace
hallucinates more facial details, especially in the ``eyes'', ``mouth'' and so on. Zoom in for best view.}
\label{fig_res1}
\end{figure}
\subsection{Comparison with State-of-the-Art Methods}
We compare our method with several state-of-the-art methods, including EDVR~\cite{wang2019edvr} and MUCAN~\cite{li2020mucan}. EDVR is a typical deformable convolution-based method. MUCAN depends on self-attention modules for alignment and aggregation, but is limited to pairwise frames. It is noteworthy that we do not report results for optical flow-based methods, since these methods are not applicable to such small unaligned snapshots (the spatial pyramid used in optical flow calculation requires the image to be larger than $16 \times 16$). The hallucination results of Bicubic upsampling are also reported. The PSNR and SSIM comparisons between VidFace and the above-mentioned methods on TUFS-145K as well as IJB-C for $8\times$ upscaling are shown in Table~\ref{tab_main}. The average Y-channel and RGB-space PSNR and SSIM over all the evaluation snapshots are reported.
Firstly, our proposed VidFace achieves the highest PSNR, yielding $30.94$\,dB on TUFS-145K and $26.35$\,dB on IJB-C, significantly outperforming EDVR by over $0.4$\,dB and $0.2$\,dB, respectively, which demonstrates the superiority of a full-transformer solver over the conventional deformable convolution-based one. A similar result can also be observed for the SSIM scores.
Secondly, VidFace significantly outperforms MUCAN by $0.45$\,dB and $0.24$\,dB. Since the alignment mechanism of MUCAN also relies on spatial self-attention, these gains draw our attention to the temporal dimension. Compared to MUCAN, VidFace handles multiple snapshots all at once and harnesses the spatial and temporal information integrally, thus boosting alignment and aggregation. We also give visualization results on the above two datasets in Fig.~\ref{fig_res1} and Fig.~\ref{fig_res2}. These qualitative results further verify that VidFace indeed boosts the final face hallucination.
\begin{figure}[t]
\centering
\includegraphics[width=0.99\textwidth]{figure/fig6.pdf}
\caption{\footnotesize The visualization of RLPE. We show both the estimated landmarks and the position encoding in terms of a distance field. The landmarks are estimated from scratch and become increasingly accurate over iterations, thus recurrently yielding finer facial prior information to regularize the position encoding for the transformer.}
\label{fig_rl}
\end{figure}
\begin{figure}[t]
\centering
\includegraphics[width=0.99\textwidth]{figure/fig4.pdf}
\caption{\footnotesize Qualitative comparison on the IJB-C dataset for $8\times$ video face snapshot SR. The proposed VidFace hallucinates more facial details, especially in the ``eyes'', ``mouth'' and so on. Zoom in for best view.}
\label{fig_res2}
\end{figure}
\subsection{Ablation Study}
To assess the importance of the various components of VidFace, we run experiments on TUFS-145K with the deformable convolution-based encoder, the T2T transformer-based encoder, the RLPE refiner and the DeU decoder, deactivating one or a few modules at a time while keeping the others activated. Table~\ref{tab_ablation} reports the face hallucination performance in terms of PSNR under different ablations. To begin with, we compare the deformable and transformer-based backbones. As observed in the table, the use of global spatio-temporal information in the transformer brings a significant improvement ($0.15\uparrow$) in PSNR. In addition, the use of RLPE and DeU brings another $0.23\uparrow$ gain in PSNR. Together with the visualization of RLPE depicted in Fig.~\ref{fig_rl}, we conclude that the designed modules are effective for face hallucination.
More experimental results can be found in the Appendix.
\begin{table}[t]
\caption{\textbf{Ablation studies of the components.} Each component brings significant improvements in PSNR, verifying their effectiveness.}
\label{tab_ablation}
\begin{center}
\tabcolsep=0.13cm
\scalebox{0.85}{
\begin{tabular}{l|c|c|c}
\hline
& (A) & (B) & \textbf{VidFace} \\\hline
Deformable Conv & \checkmark & & \\
T2T Transformer & & \checkmark & \checkmark \\
DeU & & & \checkmark \\
RLPE & & & \checkmark \\\hline\hline
PSNR (dB) & 29.08 & 29.23 (0.15$\uparrow$) & 29.46 (0.38$\uparrow$) \\\hline
\end{tabular}}
\end{center}
\end{table}
\section{Conclusion}
This work is devoted to jointly modeling spatio-temporal and facial prior information for better video face hallucination with unaligned tiny snapshots. We pinpoint and analyze the shortcomings of previous work and accordingly propose VidFace, a pure-transformer method that addresses alignment, aggregation and upsampling in a unified scheme. We further curate a large-scale benchmark of tiny unaligned facial video snapshots, dubbed TUFS-145K, which poses a more realistic and challenging scenario for existing baselines and will hopefully advance future face hallucination research. To our knowledge, this is the first attempt to propose a unified transformer-based solver tailored for the VFH task. Extensive experiments on video face benchmarks show that the proposed method significantly outperforms the state of the art.
\section{Introduction}
The field of quantum information theory (QIT) was born out of the union of the theory of quantum mechanics and the classical theory of information \cite{NC}. This union also happened to kickstart what it is nowadays known as the (ongoing) second quantum revolution which, roughly speaking, aims at the development of quantum technologies \cite{SQR1, SQR2}. Compared with its direct predecessors however, QIT is still a relatively young field and therefore, it is important to keep unveiling, exploiting, and strengthening the links between quantum theory and classical information theory.
In this direction, the framework of quantum resource theories (QRTs) has emerged as a fruitful approach to quantum theory \cite{QRT_QP, RT_review}. A central subject of study within QRTs is that of \emph{resource quantifiers} \cite{QRT_QP, RT_review}. Two well-known families of these measures are the so-called \emph{robustness-based} \cite{RoE, GRoE, RoNL_RoS_RoI, RoS, RoA, RoC, SL, RoT, RoT2, RT_magic, citeme1, citeme2, LBDPS} and \emph{weight-based} \cite{EPR2, WoE, WoS, WoAC} resource quantifiers. Importantly, these quantities have been shown to be linked to operational tasks and therefore, this establishes a type of quantifier-task correspondence. Explicitly, robustness-based quantifiers are linked to discrimination-based operational tasks \cite{RoS, RoI_task, RoI_Channels, RoA, SL, TR2}, whilst weight-based resource quantifiers are linked to exclusion-based operational tasks \cite{DS, Uola}. A resource quantifier is a particular case of a more general quantity known as a \emph{resource monotone} \cite{TG} and therefore, this correspondence can alternatively be addressed as a \emph{monotone-task} correspondence.
From a different direction, in classical information theory, the \emph{Kullback-Leibler (KL) divergence} (also known as the Kullback-Leibler relative entropy) emerges as a central object of study \cite{KL1951}. The importance of this quantity is in part due to the fact that it acts as a parent quantity for many other quantities, such as the Shannon entropy, conditional entropy, conditional divergence, mutual information, and the channel capacity \cite{CT}. Within this classical framework, it has also proven fruitful to consider \emph{R\'enyi-extensions} of these quantities \cite{renyi}. In particular, there is a clear procedure for how to define the R\'enyi-extensions of both Shannon entropy and KL-divergence, which are known as the R\'enyi entropy and the R\'enyi divergence, respectively \cite{renyi, RD}. Interestingly however, there is yet no consensus within the community as to what is the ``proper" way to R\'enyi-extend other quantities. As a consequence of this, there are several different candidates for R\'enyi conditional entropies \cite{review_RCE}, R\'enyi conditional divergences \cite{BLP1}, and R\'enyi mutual information measures \cite{review_RMI}. The latter quantities are also known as measures of dependence \cite{BLP1} or $\alpha$-mutual information measures \cite{review_RMI}, and we address them here as (R\'enyi) \emph{dependence measures}. In particular, we highlight the dependence measures proposed by Sibson \cite{sibson}, Arimoto \cite{arimoto}, Csiszár \cite{csiszar}, as well as a recent proposal independently derived by Lapidoth-Pfister \cite{LP}, and Tomamichel-Hayashi \cite{TH}. It is known that these dependence measures (with the exception of Arimoto's) can be derived from their respective conditional R\'enyi divergence \cite{BLP1} and therefore, we address this relationship as a \emph{dependence-divergence} correspondence.
The links between the two worlds of QRTs and classical information theory are now beginning to be understood to run much deeper than just the \emph{monotone-task} and \emph{dependence-divergence} correspondences from above. In fact, they are intimately connected via a more general \emph{four-way} monotone-task-dependence-divergence correspondence. Explicitly, the robustness-discrimination correspondence \cite{SL, TR2} is furthermore connected to the information-theoretic quantity known as the \emph{accessible information} \cite{Wilde_book} which can, in turn, be written in terms of dependence measures. In a similar manner, the weight-exclusion correspondence \cite{DS, Uola} is linked to the \emph{excludible information} \cite{DS, MO}, which can also be written in terms of dependence measures. Even though the fourth corner, in terms of ``R\'enyi divergences'', was not explicitly stated in any of the above references, it is nowadays a well-known fact within the community, first noted by Datta, that the generalised robustness is related to the R\'enyi divergence of order $\infty$ (also called the max quantum divergence) \cite{datta}, with a similar case happening for the weight and the divergence of order $-\infty$ \cite{DS}. These two apparently ``minor'' remarks raise the following fascinating question: Could there exist a whole spectrum of connections between dependence measures, R\'enyi divergences, resource monotones, and operational tasks, with only the two extreme ends at $\pm \infty$ currently being uncovered? \cite{DS}.
In this work we provide a positive answer to this question, by implementing insights from the theory of games and economic behaviour \cite{risk_vNM}. This latter theory, in short, encompasses many of the theoretical tools currently used in the economic sciences. In particular, we invoke here the so-called \emph{expected utility theory} \cite{risk_vNM} and, more specifically, we borrow the concept of \emph{risk-aversion}: the behavioural tendency of rational agents to have a preference one way or another for guaranteed outcomes versus uncertain outcomes. This concept remains of great research interest in the economic sciences, with various Nobel prizes having been awarded to its understanding \cite{nobel}.
In general, the concept of \emph{risk aversion} is a ubiquitous characteristic of rational agents and, as such, it naturally emerges as a subject of study in various different areas of knowledge such as: the economic sciences \cite{risk_EGS}, biology and behavioral ecology \cite{risk_biology1, risk_biology2}, and neuroscience \cite{NS1, NS2, NS3}. In short, it addresses the behavioural tendencies of rational agents when faced with uncertain events. Intuitively, a gambler spending money on bets with the hope of winning big, can be seen as an individual taking (potentially unnecessary) risks, in the eyes of a more conservative gambler. One of the challenges that economists have tackled, since roughly the second half of the previous century, is the incorporation of the concept of risk aversion into theoretical models describing the behaviour of rational agents, as well as its quantification, and exploitation of its descriptive power \cite{risk_EGS}.
The concept of risk was first addressed within theoretical models by Bernoulli in 1738 (translated into English by Sommer in 1954) \cite{risk_bernoulli}. Later on, the theory of expected utility, formalised by von Neumann and Morgenstern in 1944 \cite{risk_vNM}, provided a framework within which to address and incorporate behavioural tendencies like risk aversion. It was then further formalised, independently and within the theory of expected utility, by Arrow, Pratt, and Finetti in the 1950s and 60s \cite{risk_arrow, risk_pratt, risk_finetti} who, in particular, introduced measures for its quantification. The quest for further understanding and exploiting this concept has since remained of active research interest in the economic sciences \cite{risk_EGS}. Recently, an important step was taken in the work of Bleuler, Lapidoth and Pfister (BLP) in 2020 \cite{BLP1}, where the concept of risk aversion was implemented within the realm of classical information theory.
In this work, we invoke the concept of \emph{risk aversion} as well as the operational tasks with risk addressed by BLP \cite{BLP1} and adapt them to a quantum setting, in what we address here as quantum state betting (QSB) games. Surprisingly, we find that when the operational tasks introduced by BLP are appropriately modified and extended to the quantum regime (as QSB games), they turn out to provide the correct approach for solving the conundrum regarding the four-way correspondence for QRTs described above. Specifically, we find that the concept of risk aversion allows us to define operational tasks (QSB games) which can be viewed as a generalisation of state discrimination and state exclusion, and consequently allows for the derivation of the desired four-way correspondence between operational tasks, dependence measures, R\'enyi divergences, and resource monotones. We now briefly describe our findings in more detail.
\section{Summary of main results}
In this work we report on the existence of a continuous spectrum of connections between: operational tasks, dependence measures, R\'enyi divergences, and resource monotones. We summarise these findings in a \emph{quantitative} manner as a list of four chains of equalities below, and succinctly describe them in a \emph{qualitative} manner following Fig.~\ref{fig:fig}, as follows:
\begin{figure}[h!]
\includegraphics[scale=0.5]{fig1.pdf}
\caption{
A four-way correspondence for the QRT of measurement informativeness. The correspondence is parametrised by the R\'enyi parameter $\alpha \in \mathds{R}\cup\{\infty, -\infty\}$. The outer rectangle represents $\alpha=\infty$, the inner rectangle represents $\alpha=-\infty$, and the shaded region in-between represents the values $\alpha \in \mathds{R}$. This four-way correspondence links: operational tasks, dependence measures, R\'enyi divergences, and resource monotones. The operational task is quantum state betting (QSB) played by a Gambler with risk aversion $R(\alpha)=1/\alpha$. This task generalises quantum state discrimination (QSD) (recovered when $\alpha\rightarrow \infty$), and quantum state exclusion (QSE) (recovered when $\alpha \rightarrow -\infty$). $I_\alpha(p_{GX}^{(\mathbb{M},\mathcal{E})})$ is Arimoto's dependence measure, from which we recover the accessible information $I_{\infty}^{\rm acc}(\Lambda_\mathds{M})$ when $\alpha\rightarrow \infty$, and the excludible information $I_{-\infty}^{\rm exc}(\Lambda_\mathds{M})$ when $\alpha\rightarrow -\infty$, with $\Lambda_{\mathds{M}}$ the measure-prepare channel of the measurement $\mathds{M}$.
We introduce $D_{\alpha}^{\mathcal{S}} ( \mathds{M} || \mathds{N})$, the quantum R\'enyi divergence of two measurements $\mathbb{M}$ and $\mathbb{N}$ for a given set of states $\mathcal{S}=\{\rho_x\}$. We also introduce ${\rm M}_\alpha(\mathds{M})$, a new family of resource monotones, which generalise the robustness of informativeness $\rm R(\mathds{M})$ (when $\alpha\rightarrow \infty)$ and the weight of informativeness $\rm W(\mathds{M})$ (when $\alpha\rightarrow -\infty$). The outer rectangle was uncovered in \cite{SL}, whilst the inner rectangle was first uncovered in \cite{DS}. The main set of results of this paper is to fill the shaded region, and connect these two correspondences for all $\alpha\in \mathds{R}$.
}
\label{fig:fig}
\end{figure}
\begin{itemize}
\item (Bottom left): We introduce the operational tasks of quantum state betting (QSB) games, and consider them being played by gamblers with a risk tendency parametrised by a coefficient $\alpha$. We do this by modifying and extending, to the quantum regime, the operational tasks of horse betting (HB) games with risk introduced by BLP \cite{kelly, BLP1, thesis_CP}. HB games were first introduced by Kelly in 1956 \cite{kelly}, and were recently generalised by Bleuler, Lapidoth, and Pfister (BLP) in 2020 \cite{BLP1}, in order to address gamblers with different risk tendencies. We show that when properly modified and extended to the quantum regime, into what we address here as quantum state betting (QSB) games, these tasks can be seen to generalise quantum state discrimination (QSD) and quantum state exclusion (QSE), which are recovered in the limits $\alpha \to \infty$ and $\alpha \to -\infty$, respectively.
\item (Top left): Within the quantum resource theory of measurement informativeness \cite{SL}, we prove that Arimoto's dependence measure (in a quantum setting) quantifies the advantage provided by an informative measurement versus any possible uninformative measurement, when used to play QSB games. In a purely classical setting (i.e.~without invoking quantum theory), this result also provides an operational interpretation for Arimoto's dependence measure of order $\alpha$ \cite{arimoto}; it quantifies the advantage that side information provides in horse betting (HB) games. This finding complements and extends the results found by BLP \cite{BLP1}, which characterise HB games in terms of a R\'enyi conditional divergence. Specifically, their work addresses HB games with and without side information in an \emph{independent} manner, whilst in this work, on the other hand, we compare side information against no side information, and quantify the usefulness of such side information.
\item (Top and bottom right): Building on the above-mentioned results, we introduce new quantum R\'enyi divergences for measurements which allow us to identify a family of resource monotones for the order induced by the simulability of measurements. This family of resource monotones recovers, as limiting cases, the robustness of informativeness \cite{SL, RT_measurements1} when $\alpha \rightarrow \infty$, and the weight of informativeness \cite{DS, Uola} when $\alpha \rightarrow -\infty$.
\end{itemize}
Altogether, these results therefore establish a four-way task-dependence-divergence-monotone correspondence for the QRT of measurement informativeness, by means of a risk aversion factor parametrised by the R\'enyi parameter $\alpha$ as $R(\alpha)=1/\alpha$, as qualitatively depicted in Fig.~\ref{fig:fig}. In a more quantitative manner, a summary of the results is given in terms of the four chains of equalities below. This four-way correspondence generalises the two sets of correspondences already uncovered \cite{SL, DS}, which it recovers as limiting cases. It shows, however, as conjectured previously, that this correspondence is in fact much more general, and exists for the full range of R\'enyi parameters $\alpha$.
This work is organised as follows. We start by describing the concept of risk aversion in the theory of games and economic behaviour in Sec.~\ref{s:risk}. We then introduce the operational tasks of quantum state betting in Sec.~\ref{s:HB}. In Secs.~\ref{s:arimoto} and \ref{s:arimoto2} we address Arimoto's dependence measure and the R\'enyi capacity in both the classical and quantum domains. In Sec.~\ref{s:RoMI} we describe the quantum resource theory of measurement informativeness. Our results start in Sec.~\ref{e:result1}, where we relate Arimoto's dependence measure (in the quantum domain) to the operational task of quantum state betting. In Secs.~\ref{e:divergences} and \ref{e:monotones} we address quantum R\'enyi divergences and resource monotones. In Sec.~\ref{e:general} we address a slightly more general version of result 1, by providing an operational interpretation of Arimoto's dependence measure without invoking quantum theory. We finish in Sec.~\ref{s:conclusions} with conclusions, open questions, perspectives, and avenues for future research.
\newpage
\onecolumngrid
\begin{summary*}
The main result of this work is a four-way quantum correspondence between operational tasks, dependence measures, quantum R\'enyi divergences, and resource monotones. This is qualitatively summarised in Fig.~\ref{fig:fig}, and quantitatively summarised in the following four chains of equalities, going from $\alpha \rightarrow \infty$ passing through $\alpha=0$ until $\alpha \rightarrow -\infty$ as follows (definitions and proofs in the main text and appendices):
\begin{align*}
\max_\mathcal{E}
I_{\infty}(X_\mathcal{E};G_\mathbb{M})
&=
\log
\left[
\max_\mathcal{E}
\frac{
P^{\rm QSD}_{\rm succ}
\left(
\mathcal{E}, \mathbb{M}
\right)
}{
\max_{\mathbb{N}\in {\rm UI}}
P^{\rm QSD}_{\rm succ}(\mathcal{E},\mathbb{N})
}
\right]
=
\log
\left[1+{\rm R}(\mathbb{M})\right]
=
\max_{\mathcal{S}}
\min_{\mathbb{N}\in {\rm UI}}
D_{\infty}^{\mathcal{S}}
(\mathbb{M}||\mathbb{N}),
\\
\max_\mathcal{E}
I_{\alpha \geq 0}(X_\mathcal{E};G_\mathbb{M})
&=
\log
\left[
\max_{\mathcal{E}}
\frac{
\displaystyle
\max_{b_{X|G}}
\,
w^{CE}_{1/\alpha}
\left(
b_{X|G},
\mathbb{M}
,
o^{c}_X
,
\mathcal{E}
\right)
}{
\displaystyle
\max_{\mathbb{N}\in {\rm UI}}
\max_{
b_{X|G}
}
\,
w^{CE}_{1/\alpha}
\left(
b_{X|G}
,
\mathbb{N}
,
o^{c}_X
,
\mathcal{E}
\right)
}
\right]
=
\log
\left[1+{\rm M}_{\alpha}
(\mathbb{M})\right]
=
\max_{\mathcal{S}}
\min_{\mathbb{N}\in {\rm UI}}
D_{\alpha}^{\mathcal{S}}
(\mathbb{M}||\mathbb{N}),
\end{align*}
\begin{align*}
\max_\mathcal{E}
I_{\alpha<0}(X_\mathcal{E};G_\mathbb{M})
&=
-
\log
\left[
\min_{\mathcal{E}}
\frac{
\displaystyle
\max_{b_{X|G}}
\,
w^{CE}_{1/\alpha}
\left(
b_{X|G}
,
\mathbb{M}
,
o_X^{-c}
,
\mathcal{E}
\right)
}{
\displaystyle
\max_{\mathbb{N}\in {\rm UI}}
\max_{
b_{X|G}
}
\,
w^{CE}_{1/\alpha}
\left(
b_{X|G}
,
\mathbb{N}
,
o_X^{-c}
,
\mathcal{E}
\right)
}
\right]
=
-
\log
\left[1-{\rm M}_{\alpha}
(\mathbb{M})\right]
=
\max_{\mathcal{S}}
\min_{\mathbb{N}\in {\rm UI}}
D_{\alpha}^{\mathcal{S}}
(\mathbb{M}||\mathbb{N}),
\\
\max_\mathcal{E}
I_{-\infty}(X_\mathcal{E};G_\mathbb{M})
&=
-
\log
\left[
\min_\mathcal{E}
\frac{
P^{\rm QSE}_{\rm err}(\mathcal{E},\mathbb{M})
}
{
\min_{\mathbb{N}\in {\rm UI}}
P^{\rm QSE}_{\rm err}(\mathcal{E},\mathbb{N})
}
\right]
=
-
\log
\left[1-{\rm W}(\mathbb{M})\right]
=
\max_{\mathcal{S}}
\min_{\mathbb{N}\in {\rm UI}}
D_{-\infty}^{\mathcal{S}}
(\mathbb{M}||\mathbb{N})
.
\end{align*}
\end{summary*}
\twocolumngrid
\section{Background theory}
In this section we start by addressing the preliminary theoretical tools necessary to establish our main results. We start with the concept of risk in the theory of games and economic behaviour, a pair of games involving risk, followed by the operational tasks of quantum state betting, then Arimoto's dependence measure and the R\'enyi capacity in both classical and quantum information theory and, finally, the quantum resource theory of measurement informativeness.
\subsection{
The concept of risk in the theory of games and economic behaviour
}
\label{s:risk}
In expected utility theory \cite{risk_vNM}, the level of `satisfaction' of a rational agent, when \emph{receiving} (obtaining, being awarded) a certain amount of wealth, or goods or services, is described by a utility function \cite{risk_vNM}. The utility function of a rational agent is a function $u:A\rightarrow \mathds{R}$, with $A=\{a_i\}$ the set of \emph{alternatives} from which the rational agent can choose. The set $A$ is endowed with a binary relation $\preceq$. The utility function is asked to be a monotone for such a binary relation; if $a_1 \preceq a_2$ then $u(a_1) \leq u(a_2)$. In this work we address the set of alternatives as representing \emph{wealth} and therefore, it is enough to consider an interval of the real numbers.
We are going to consider two different types of situation in this work. In the first case, the wealth will always be non-negative, and so we consider the interval $A=\mathcal{I}=[0,w^M]\subseteq \mathds{R}$, with $w^M>0$ a maximal amount of wealth, and the standard binary relation $\leq$. Similarly, we will also consider a situation where the wealth is \emph{non-positive}, meaning we address a utility function taking negative arguments $w<0$, with $\mathcal{I}=[-w^M,0]\subseteq \mathds{R}$, representing the level of (dis)satisfaction when the rational agent has to \emph{pay} an amount of money $|w|$ (or when the amount $|w|$ is taken away from him).
We note here that the utility function does not necessarily need to be positive (or negative), because it is only used to \emph{compare} alternatives. The condition that the utility function is monotonic is equivalent to it being an increasing function for both positive and negative wealth. Intuitively, this represents that the rational agent is interested in acquiring as much wealth as possible (for positive wealth), and losing as little wealth as possible (for negative wealth). Additionally, the utility function is asked to be twice-differentiable, both for mathematical convenience and because it is natural to assume that smooth changes in wealth imply smooth changes in the rational agent's satisfaction.
In order to address the concept of risk, we first need to introduce two games (or operational tasks), which involve a player, Bob (the \emph{Better} or \emph{Gambler}, whom we take to be a rational agent with a utility function $u$), and a referee, Alice, who is in charge of the game. We are going to address two different games which we call here: i) \emph{gain games} and ii) \emph{loss games}.
\subsubsection{A gain game and utility theory}
In a \emph{gain game}, Alice (Referee) offers Bob (Gambler) the choice between two options: i) a fixed \emph{guaranteed} amount of wealth $w^G \in [0,w^M]$ or ii) a \emph{bet}. The bet consists of the following: Alice uses a random event distributed according to a probability mass function (PMF) $p_W$, (i.e.~$\sum_{w\in \mathcal{I}} p_W(w)=1$, $p_W(w)\geq 0$, $\forall w\in \mathcal{I}$, with $W$ a random variable in the alphabet $\mathcal{I}$), in order to give Bob a reward. Specifically, Alice will reward Bob with an amount of wealth $w^B=w$, whenever the random event happens to be $w$, which happens with probability $p(w)$ (we drop the label $W$ on $p_W(w)$ from now on). The choice facing Bob is therefore between a fixed guaranteed amount of wealth $w^G \in [0,w^M]$, or taking the bet and potentially earning more $w^B>w^G$, at the risk of earning less $w^B<w^G$.
Since the utility function $u(w)$ determines Bob's satisfaction when acquiring the amount of wealth $w$, we will see below that it can be used to model his behaviour in this game, i.e.~whether he chooses the first or the second option. First, considering the bet (option ii), we can compute Bob's \emph{expected gain} from the bet,
\begin{align}
\mathbb{E}[W]
=
\sum_{w\in \mathcal{I}}
p(w)
w.
\end{align}
How satisfied Bob is with this expected amount of wealth is given by the utility of this value, i.e.
\begin{align}
u
\left(
\mathbb{E}[W]
\right)
=
u
\left(
\sum_{w\in \mathcal{I}}
p(w)
w
\right)
.
\end{align}
Now, Bob's wealth at the end of the bet is a random variable, which means that his satisfaction will also be a random variable, with some uncertainty. We can also ask what Bob's expected satisfaction, i.e.~his \emph{expected utility}, will be at the end of the bet,
\begin{align}
\mathbb{E}[u(W)]
=
\sum_{w\in\mathcal{I}}
p(w)
u(w)
.
\end{align}
This represents how satisfied Bob will be with the bet on average.
We can now introduce the first key concept, that of the \emph{Certainty Equivalent (CE)}: it is the amount of (certain) wealth $w^{CE}$ with which Bob is as satisfied as with the (uncertain) wealth he would gain from the bet. In other words, it is the amount of wealth which is as desirable as the bet itself. That is, it is the amount of wealth $w^{CE}$ that satisfies
\begin{align}
\mathbb{E}[u(W)] =u(w^{CE}). \label{e:wCE}
\end{align}
It is crucial to note that the certainty equivalent wealth depends upon the utility function $u$ and the PMF $p_W$, and therefore we interchangeably write it as $w^{CE}(u,p_W)$. We can now return to the original game, i.e.~the choice between a fixed return $w^G$, or the average return $\mathbb{E}[W]$. The rational decision for Bob is to pick which of the two he is most satisfied with. We now see that if we set $w^G > w^{CE}$ then he will choose to take the guaranteed amount, if $w^G < w^{CE}$ he will choose the bet, and if $w^G = w^{CE}$ then in fact the two options are equivalent to Bob, and he can rationally pick either. That is, we see that the certainty equivalent $w^{CE}$ sets the boundary between which option Bob will pick.
Introducing the certainty equivalent moreover allows us to introduce the concept of Bob's \emph{risk-aversion}. To do so, we will compare Bob's expected wealth, in relation to the certainty equivalent of the bet. There are only three possible scenarios,
\begin{align}
w^{CE} &< \mathbb{E}[W], \label{eq:R1} \\
w^{CE} &> \mathbb{E}[W], \label{eq:R2} \\
w^{CE} &= \mathbb{E}[W]. \label{eq:R3}
\end{align}
In the first case \eqref{eq:R1}, Alice can offer Bob an amount of wealth $w^G$ that is larger than $w^{CE}$ but less than $\mathbb{E}[W]$, $w^{CE} < w^G < \mathbb{E}[W]$ and Bob will rationally take this amount over accepting the bet, even though he will walk away with less wealth on average than if he took the bet. In other words, Bob is \emph{reluctant} to take the bet, and so we say that he is \emph{risk-averse}.
In the second case \eqref{eq:R2}, on the other hand, if Alice wants to make Bob walk away from the bet, and accept a fixed amount of wealth instead, she will have to offer him \emph{more} than the expected gain. That is, Bob will only choose an amount $w^G$ if $w^G > w^{CE} > \mathbb{E}[W]$. Here Bob is \emph{risk-seeking}.
Finally, in the third case \eqref{eq:R3}, Bob will take the bet if Alice offers him any $w^G$ less than the expected gains from the bet, and will take the guaranteed amount $w^G$ if it is larger. In this case, we say that Bob is \emph{risk-neutral}, as Bob is essentially indifferent between the uncertain gains of the bet and the certain gains of the guaranteed return.
If we recall that by definition the utility function $u$ is strictly increasing in the interval $\mathcal{I}$ (more wealth is also more satisfactory to Bob), then by applying $u$ to the previous three equations, and using the definition of $w^{CE}$ \eqref{e:wCE}, we get
\begin{align}
\mathbb{E}[u(W)] &< u(\mathbb{E}[W]), \label{eq:Rp1} \\
\mathbb{E}[u(W)] &> u(\mathbb{E}[W]), \label{eq:Rp2} \\
\mathbb{E}[u(W)] &= u(\mathbb{E}[W]). \label{eq:Rp3}
\end{align}
This is an important result, which shows that Bob's risk-aversion is characterised by the curvature of his utility function: Bob is risk-averse when his utility function is concave \eqref{eq:Rp1}, risk-seeking when his utility function is convex \eqref{eq:Rp2}, and risk-neutral when it is linear \eqref{eq:Rp3}. This intuitively makes sense, since roughly speaking this corresponds to his satisfaction growing more slowly than wealth when he is risk-averse and his satisfaction growing faster than wealth when he is risk-seeking. We now move on to analyse the concept of risk in our second game.
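As a minimal numerical illustration of this characterisation (the utility function and the bet below are arbitrary choices of ours), consider the following:
\begin{verbatim}
import math

p = {0.0: 0.5, 100.0: 0.5}        # a 50/50 bet paying either 0 or 100
u = math.sqrt                     # a concave utility: Bob is risk-averse
expected_gain    = sum(prob * w for w, prob in p.items())      # E[W]     = 50.0
expected_utility = sum(prob * u(w) for w, prob in p.items())   # E[u(W)]  = 5.0
certainty_equiv  = expected_utility ** 2                       # u^{-1}(E[u(W)]) = 25.0
print(expected_gain, certainty_equiv)    # 50.0 25.0, i.e. w_CE < E[W]
\end{verbatim}
Any guaranteed amount $w^G$ between $25$ and $50$ would thus be rationally preferred over the bet, even though it lies below the expected gain.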
\subsubsection{A loss game and utility theory}
Let us now analyse a game which we call here a \emph{loss game}. Similarly to the gain game from the previous section, in a loss game we have two agents, a Referee (Alice) and a Gambler (Bob), who has to make a payment to the Referee. In a loss game Bob is now asked to choose between two options: i) paying a fixed amount of wealth $|w^{F}|$, $w^F\in[-w^M,0]$ or ii) a bet. Choosing the bet means Bob has to pay an amount of wealth according to the outcome of a PMF $p_W$. Similarly to the gain game, we address some quantities of interest: the \emph{expected debt} ($\mathbb{E}[W]$), the \emph{expected utility} ($\mathbb{E}[u(W)]$), and the \emph{certainty equivalent (CE)} $w^{CE}(u,p_W)$, defined as the amount of wealth $w^{CE}$ such that $u(w^{CE}) = \mathbb{E}[u(W)]$. We note the CE depends on the utility function $u$ representing the Player, and the PMF $p_W$ representing the bet. The CE is the amount of wealth that, when paid by Bob to Alice, generates the same level of (dis)satisfaction as had Bob opted for the bet instead. We also note here that both the expected debt and the certainty equivalent are now negative quantities.
We now analyse the meaning of the certainty equivalent in loss games, i.e., where Bob (the Gambler) has to choose between having to pay a certain fixed amount of wealth (fixed debt) $|w^F|$, or paying an average amount (average debt) $|\mathbb{E}[W]|$. The rational decision for Bob is to pick which of the two options he is \emph{more satisfied} (equivalently, we could say least dissatisfied) with. We then see that if we set $w^F < w^{CE}$ he will then choose to take the bet, if $w^F > w^{CE}$ he will choose to pay the fixed amount, and if $w^F = w^{CE}$ he can rationally pick either. That is, we see that the certainty equivalent $w^{CE}$ again sets here the boundary between which option Bob will pick in a loss game.
We now compare Bob's expected debt $\mathbb{E}[W]$ and the certainty equivalent of the bet $w^{CE}$. We have the three possible scenarios,
\begin{align}
w^{CE} & < \mathbb{E}[W],
\hspace{0.3cm}
\longleftrightarrow
\hspace{0.3cm}
|w^{CE}| > |\mathbb{E}[W]|,
\label{eq:D1} \\
w^{CE} & > \mathbb{E}[W],
\hspace{0.3cm}
\longleftrightarrow
\hspace{0.3cm}
|w^{CE}| < |\mathbb{E}[W]|,
\label{eq:D2} \\
w^{CE} & = \mathbb{E}[W],
\hspace{0.3cm}
\longleftrightarrow
\hspace{0.3cm}
|w^{CE}| = |\mathbb{E}[W]|.
\label{eq:D3}
\end{align}
In the first case \eqref{eq:D1}, Alice can \emph{request} from Bob a fixed amount of wealth $|w^F|$
with $w^{CE} < w^{F} < \mathbb{E}[W]$, which is equivalent to $|w^{CE}| > |w^{F}| > |\mathbb{E}[W]|$, and Bob will still \emph{prefer} to pay this amount over opting for the bet, even though he would, \emph{on average, have paid less} ($|\mathbb{E}[W]|$) had he opted for the bet. In other words, Bob is \emph{reluctant} to take the bet, and so we see that he is \emph{risk-averse}.
In the second case \eqref{eq:D2}, if Alice wants to make Bob walk away from choosing the bet, and accept \emph{paying} a fixed amount of wealth instead, she will have to offer him a deal where he has to pay \emph{less} than the CE (and in turn less than the expected debt). In other words, in this case Bob is confident that the bet will allow him to pay less than the expected debt. That is, Bob will choose paying a fixed amount $|w^F|$ only if $w^{F} > w^{CE} > \mathbb{E}[W]$, which is equivalent to $|w^{F}| < |w^{CE}| < |\mathbb{E}[W]|$. Here Bob can then be considered as \emph{risk-seeking}, because he is hopeful/optimistic about having the chance of paying less than the expected debt.
Taking into account that the utility function is still a strictly increasing function for negative wealth, together with the definition of the certainty equivalent, we get:
\begin{align}
\mathbb{E}[u(W)] &< u(\mathbb{E}[W]), \label{eq:Dp1} \\
\mathbb{E}[u(W)] &> u(\mathbb{E}[W]), \label{eq:Dp2} \\
\mathbb{E}[u(W)] &= u(\mathbb{E}[W]). \label{eq:Dp3}
\end{align}
This means that in a loss game we can also characterise the risk tendencies of a Gambler in terms of the concavity/convexity/linearity of his utility function as: risk-averse (concavity \eqref{eq:Dp1}), risk-seeking (convexity \eqref{eq:Dp2}), risk-neutral (linear \eqref{eq:Dp3}). This characterisation of risk tendencies and the types of games are going to be useful later on when introducing more elaborate games in the form of operational tasks involving the discrimination or exclusion of quantum states. We now move on to the quantification of risk.
\subsubsection{Quantifying risk tendencies}
We can go one step further, and not only classify whether Bob (the Gambler) is risk-averse, risk-seeking, or risk-neutral, but moreover \emph{quantify} how risk-averse he is. Let us start by addressing a \emph{gain game}, which means we are interested in analysing Bob being represented by a utility function on positive wealth. Since Bob's attitude toward risk relates to the concavity/convexity/linearity of the utility function $u$, it is natural that the second derivative of the function is going to play a role. This is because $u$ is concave on an interval if and only if its second derivative is non-positive on that interval. However, it is also desirable for measures representing risk to be invariant under \emph{affine transformations} of the utility function, which in this context means that they are invariant under transformations of the form $u \rightarrow a+b u$, with $a \in \mathds{R}$ and $b > 0$. This is because the actual values of utility aren't themselves physical, but only the \emph{comparison} between values, and therefore rescaling or displacing the utility should not alter how risk-averse we quantify Bob to be. Given these requirements, a natural measure that emerges is the so-called Relative Risk Aversion (RRA) measure\footnote{An additional benefit of this quantifier is that it is dimensionless, which is not satisfied by all quantifiers of risk-aversion.}:
\begin{align}
RRA(
w
)
\coloneqq
-
w
\frac{
u^{''}
(w)
}{
u'(w)
}.
\label{e:RRA}
\end{align}
This measure assigns \emph{positive} values for risk-averse players in a gain game (\emph{concave} utility functions of positive wealth) because we have: i) $w>0$, because we are considering the player \emph{receiving} money ii) $u^{''}(w)<0$, $\forall w$, because a risk-averse player in a gain game is represented by a concave function, and iii) $u^{'}(w)>0$, because the utility function is a strictly increasing function. An analysis of signs then yields $RRA(w)>0$.
Similarly, we now also analyse this measure of risk-aversion when Bob plays a \emph{loss game}. A loss game is characterised by \emph{negative wealth}, and we have already derived the fact that a risk-averse Gambler is also characterised by a \emph{concave} utility function. We now want to quantify the degree of risk-aversion of a Gambler playing the loss game, and therefore we can proceed in a similar fashion to before, and define the risk-aversion measure RRA.
We now check that this measure assigns \emph{negative} values for risk-averse players in a loss game (\emph{concave} utility functions of negative wealth) because we have: i) $w<0$ because we are considering the player \emph{paying} money ii) $u^{''}(w)<0$, $\forall w$, because a risk-averse player in a loss game is represented by a concave function, and iii) $u^{'}(w)>0$, because the utility function is a strictly increasing function. An analysis of signs yields $RRA(w)<0$. We can see that this is the opposite to what happens in gain games, where $RRA(w)>0$ represents risk-averse players. We highlight this fact in Table~\ref{tab:tab1}, and present an analysis of the sign of the RRA measure for the two types of players (risk-averse or risk-seeking) and the two types of games (gain game or loss game).
\begin{table}[h!]
\centering
\begin{tabular}{|c||c|c|}
\hline
&
Risk-averse player
&
Risk-seeking player
\\
& $u^{''}(w)<0$ & $u^{''}(w)>0$
\\
\hline \hline
$w>0$ & $RRA(w)>0$ & $RRA(w)<0$
\\
\hline
$w<0$ & $RRA(w)<0$ & $RRA(w)>0$
\\
\hline
\end{tabular}
\caption{Analysis of the sign of the quantity $RRA(w)$ for the different regimes being considered. We have that the utility function is always strictly increasing, meaning that $u^{'}(w)>0$, and therefore we then only need to analyse the signs of $w$ and $u^{''}(w)$. In particular, we have that risk-averse players are represented by positive RRA when dealing with positive wealth, and by negative RRA when dealing with negative wealth.}
\label{tab:tab1}
\end{table}
\subsubsection{The isoelastic certainty equivalent}
We now note that the RRA measure does not assign a global value for how risk averse Bob is, but allows this to depend upon the wealth $w$, i.e. Bob may be more or less risk averse depending on the wealth that is at stake. In order to remove this, it is usual to consider those utility functions where Bob's relative risk aversion is \emph{constant}, independent of wealth. In this case, \eqref{e:RRA} can be solved assuming $RRA(w) = R$, which leads to the so-called \emph{isoelastic utility function} for positive and negative wealth as:
\begin{align}
u_R(w)
\coloneqq
\begin{cases}
\sgn(w)
\frac{|w|^{1-R}-1}{1-R},
& \text{if}\ R \neq 1 \\
\sgn(w)
\ln(|w|),
& \text{if}\ R = 1
\end{cases}
,
\label{eq:isoelastic}
\end{align}
with the auxiliary ``sign" function:
\begin{align}
\sgn(
w
)
\coloneqq
\begin{cases}
1, &
w
\geq 0;
\\
-1,&
w
< 0.
\end{cases}
\label{eq:sgn}
\end{align}
The parameter $R$ varies from minus to plus infinity, describing all possible risk tendencies of Bob, for either positive or negative wealth. For positive wealth for instance, $R$ goes from maximally risk-seeking at $R=-\infty$, passing through risk-neutral at $R=0$, to maximally risk-averse at $R=\infty$. In Fig.~\ref{fig:fig0} we can see the behaviour of the isoelastic function for positive wealth and different values of $R$.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.57]{fig2.pdf}
\vspace{-0.5cm}
\caption{
Isoelastic utility function $u_R(w)$ \eqref{eq:isoelastic} as a function of positive wealth ($1\leq w \leq 3$) for players with different risk tendencies (different values of $R$). The risk parameter $R$ quantifies different types of risk tendencies: i) $R<0$ risk-seeking players (convex) ii) $R=0$ risk-neutral players (linear), and iii) $R>0$ risk-averse players (concave). Risk-aversion for positive wealth then increases from $-\infty$ to $\infty$.
}
\label{fig:fig0}
\end{figure}
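As a consistency check, for positive wealth and $R\neq 1$ one can verify directly that the isoelastic utility function \eqref{eq:isoelastic} has constant relative risk aversion:
\begin{align}
u_R'(w) = w^{-R},
\qquad
u_R''(w) = -R\, w^{-R-1},
\qquad
\Longrightarrow
\qquad
RRA(w)
=
-w\,
\frac{u_R''(w)}{u_R'(w)}
=
R,
\end{align}
and similarly $RRA(w)=1$ for the logarithmic case $R=1$.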
The certainty equivalent \eqref{e:wCE} for this setup can be calculated for either positive or negative wealth as:
\begin{align}
w^{CE}_R
=
u^{-1}_R
\left(
\mathbb{E}[u_R(W)]
\right)
=
\left(
\sum_{w\in\mathcal{I}}
w^{1-R}
\,
p(w)
\right)^{\frac{1}{1-R}}
.
\label{eq:wCEF}
\end{align}
The certainty equivalent (CE) of the isoelastic function, or \emph{isoelastic certainty equivalent} (ICE), is going to be the figure of merit in the next section, and it is going to play an important role in this paper. As we have already seen, the CE stands out as an important quantity because: i) it determines the choice of a Gambler when playing either a gain or a loss game, helping to establish the characterisation of the risk tendencies of said Gambler, and ii) optimising the CE is equivalent to optimising the \emph{expected utility}, given that the utility function is strictly increasing and that $u(w^{CE}) = \mathbb{E}[u(W)]$. One may be tempted here to propose the \emph{expected utility} $\mathbb{E}[u(W)]$ as the figure of merit instead of the CE, but the expected utility unfortunately suffers from having the rather awkward units $[w]^{1-R}$, whilst the certainty equivalent, on the other hand, simply has units of wealth $[w]$ (\$, \pounds, ...).
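As a minimal numerical illustration (with an arbitrarily chosen bet), the following sketch evaluates \eqref{eq:wCEF} for several values of $R$, showing that the ICE decreases as the Gambler becomes more risk-averse:
\begin{verbatim}
import numpy as np

w = np.array([10.0, 90.0])       # possible (positive) payoffs of the bet
p = np.array([0.5, 0.5])         # their probabilities

def ice(R):
    if R == 1.0:                 # logarithmic-utility limit
        return np.exp(np.sum(p * np.log(w)))
    return np.sum(p * w ** (1.0 - R)) ** (1.0 / (1.0 - R))

for R in [-2.0, 0.0, 1.0, 2.0, 5.0]:
    print(R, round(ice(R), 2))   # approx. 71.5, 50.0, 30.0, 18.0, 11.9
\end{verbatim}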
\subsection{
Quantum state betting games played by Gamblers with different risk tendencies
}\label{s:HB}
We now introduce the main operational tasks that we consider in this work. We start by describing quantum state betting (QSB) games being played by Gamblers with different risk tendencies, as a generalisation (to the quantum domain) of the operational tasks known as horse betting games.
Horse betting (HB) games were first introduced by Kelly in 1956 \cite{kelly}; a modern introduction can be found, for instance, in Cover \& Thomas \cite{CT}, as well as in the lecture notes by Moser \cite{LN_moser}. Recently, Bleuler, Lapidoth, and Pfister generalised HB games in order to include a factor $\beta = 1-R$ \cite{BLP1}, representing the risk-aversion of the Gambler (Bob) playing these games, with standard HB games being recovered by setting $\beta=0$, corresponding to $R = 1$, i.e. a risk-averse Bob. In this subsection we address this generalisation, including risk aversion and side information, and further generalise it to the quantum domain, in what we address as quantum state betting (QSB) games. Specifically, we will address two variants of QSB games in the form of quantum state discrimination (QSD) with risk, and quantum state exclusion (QSE) with risk. We will then address the isoelastic certainty equivalent (ICE) as the figure of merit for QSB games, and show how it generalises the quantification of standard quantum state discrimination and exclusion.
\subsubsection{Quantum state betting games}
Consider two rational agents, a Referee (Alice) and a Gambler (Bob). Alice is in possession of an ensemble of quantum states $\mathcal{E} = \{\rho_x, p(x)\}$, $x\in\{1,...,K\}$, and is going to send one of these states to Bob, say $\rho_x$. We address here a \emph{quantum state}, or \emph{state} for short, as a positive semidefinite ($\rho_x \geq 0$) and trace one ($\tr(\rho_x)=1$) operator on a finite-dimensional Hilbert space.
As above, we will consider two different classes of state betting games, gain games, and loss games. In a gain game, Alice offers Bob odds $o(x)$, which is a positive function ($o(x)>0$, $\forall x$) but not necessarily a PMF, such that if Bob places a unit bet on the state being $\rho_x$, and this is the correct state, then Alice will pay out $o(x)$ to Bob. In a loss game, on the contrary, we take the `odds' to be \emph{negative}, $o(x) < 0$, for all $x$, such that if Bob places a unit bet on $\rho_x$, then he will have to pay out to Alice an amount $|o(x)|$.\footnote{That is, similarly to in thermodynamics, we take the sign of the odds to signify whether this is a gain or a loss for Bob.}
In order to decide how to place his bets, Bob is allowed to first perform a \emph{quantum measurement} on the state given to him by Alice. In general, this will be a positive operator-valued measure (POVM), $\mathbb{M}=\{M_g\}$, $M_g\geq 0$ $\forall g$, $\sum_g M_g=\mathds{1}$, which will allow him to (hopefully) extract some useful information from the state.
Let us assume that Bob measures the state he receives from Alice using a measurement $\mathbb{M}=\{M_g\}$, producing a measurement result $g$, with probability given by the Born rule, $p(g|x)=\tr[M_g\rho_x]$. Bob will then use this result to decide on his \emph{betting strategy}. We assume that he bets all of his wealth, and divides this in some way amongst all the possible options $x\in \{1,...,K\}$. That is, Bob's strategy is a PMF $b_{X|G}$, such that Bob bets the proportion $b(x|g)$ of his wealth on state $x$ being the sent state, when his measurement outcome was $g$.\footnote{Note that for loss games, Bob can end up having to pay out \emph{more} than the wealth he bet (similarly to how in a gain game Bob can walk away with more wealth than he started with).} We note that Bob's overall strategy is then defined by the pair $(b_{X|G}, \mathbb{M})$. We also note that the PMF $p_X$ from the ensemble of states together with the conditional PMF $p_{G|X}$ from the measurement implemented by Bob, defines the joint PMF $p_{XG} \coloneqq p_{G|X}p_X$.
Therefore, when the quantum state was $\rho_x$, and Bob obtained the measurement outcome $g$, he bet the proportion of his wealth $b(x|g)$ on the actual state, and hence Alice either pays out $w(x,g) = o(x)b(x|g)$ in the case of a gain game, or Bob has to pay Alice the amount $|w(x,g)|$ (i.e. he loses $|w(x,g)|$) in a loss game. We can view gain games as a generalisation of \emph{state discrimination}. Here, since Bob is winning money, it is advantageous, in general, for him to correctly \emph{identify} the state that was sent. On the other hand, we see that loss games can be viewed as a generalisation of \emph{state exclusion}, since now in order to minimise his losses, it is useful for Bob to be able to \emph{avoid} or \emph{exclude} the state that was sent.
Finally, we note that the settings of the game are specified by the pair $(o_X,\mathcal{E})$. It is important to stress that by assumption Bob is fully aware of the settings of the game, meaning that the pair $(o_X,\mathcal{E})$ is known to him prior to playing the game, and therefore he can use this knowledge in order to select an optimal betting strategy $b_{X|G}$.
\subsubsection{Figure of merit for quantum state betting games}
Given these two variants of QSB games, we now want to analyse the behaviour of different types of Gamblers (represented by different utility functions), according to their risk tendencies. We will consider quantities of interest analogous to those in the previous sections, such as the expected wealth and the expected utility. In particular, we model Gamblers with utility functions displaying constant relative risk aversion (CRRA) and therefore, the utility functions we consider are isoelastic functions $u_R(w)$ \eqref{eq:isoelastic}. The figure of merit we are interested in is then the \emph{isoelastic certainty equivalent (ICE)} $w^{CE}_R$ with $R \in \mathds{\overline R}$. For risk $R\in (-\infty,1) \cup (1,\infty)$, this quantity is given by:
\begin{align}
&w^{CE}_R
(b_{X|G}, \mathbb{M}, o_X,\mathcal{E})
\nonumber
\\
&=
u_R^{-1}
\left(
\mathbb{E}_{p_{XG}}
\left[
u_R
(
w_{XG}
)
\right]
\right)
,
\nonumber
\\
&=
\left[
\sum_{g,x}
\big[
b(x|g)
o(x)\big]^{1-R}
p(g|x)
p(x)
\right]^\frac{1}{1-R}
.
\label{eq:wCE_QSB}
\end{align}
The cases $R \in\{1,\infty,-\infty\}$ are defined by continuous extension of \eqref{eq:wCE_QSB}.
In summary, the game is specified by the pair $(o_{X},\mathcal{E})$, the behavioural tendency of Bob is represented by the utility function $u_R(w_{XG})$ with a fixed $R \in \mathds{\overline R}$, the overall strategy of Bob is specified by the pair $(b_{X|G}, \mathbb{M})$, and the figure of merit here considered is the isoelastic certainty equivalent (ICE) \eqref{eq:wCEF}. We can alternatively address these operational tasks as horse betting games with risk and quantum side information, or quantum horse betting (QHB) games for short, and we describe this in more detail later on.
Bob is in charge of the measurement and the betting strategy ($b_{X|G}, \mathds{M}$), so in particular, for a fixed measurement $\mathds{M}$, Bob is interested in maximising the ICE (maximising gains in a gain game, and minimising losses in a loss game), and so we are going to be interested in the following quantity:
\begin{align*}
\max_{b_{X|G}}
\,
w^{CE}_R
\left(
b_{X|G}, \mathbb{M}, o_X,\mathcal{E}
\right)
,
\end{align*}
for a fixed QSB game $(o_X,\mathcal{E})$ with either positive or negative odds, and Bob's risk tendencies being fixed, and specified by an isoelastic utility function $u_R$.
\subsubsection{Quantum state betting games generalise discrimination and exclusion games}
We will now show that quantum state betting games with risk can indeed be seen as generalisations of standard quantum state discrimination and exclusion games. We can see this by considering a risk-neutral ($R=0$) Bob playing a gain game with constant positive odds $o^{c}(x) \coloneqq C$, $C>0$, $\forall x$, in which case we find that the quantity of interest becomes:
\begin{align}
\max_{
b_{X|G}
}
\,
w^{CE}_{0}
(b_{X|G}, \mathbb{M}, o^{c}_X,\mathcal{E})
&
=
C
\max_{
b_{X|G}
}
\,
\sum_{g,x}
b(x|g)
p(g|x)
p(x)
,\nonumber
\\
&=
C \,
P^{\rm QSD}_{\rm succ}(\mathcal{E},\mathbb{M})
.
\end{align}
For more details on standard quantum state discrimination games we refer to \cite{SL, TR2}. Therefore, standard quantum state discrimination can be seen as a special instance of quantum state betting games with constant odds, played by a risk-neutral player. Similarly, for a loss game, with negative constant odds $o^{-c}(x) \coloneqq -C$, $C>0$, $\forall x$:
\begin{align}
\max_{
b_{X|G}
}
\,
w^{CE}_{0}
(b_{X|G}, \mathbb{M}, &o^{-c}_X,\mathcal{E})
\nonumber
\\
&
=
C
\max_{
b_{X|G}
}
\,
-
\sum_{g,x}
b(x|g)
p(g|x)
p(x)
,\nonumber
\\
&=
-
C \,
P^{\rm QSE}_{\rm err}(\mathcal{E},\mathbb{M})
.
\end{align}
For more details on standard quantum state exclusion games we refer to \cite{DS, uola2020}. Therefore, standard quantum state exclusion can be seen as a quantum state betting game with constant negative odds, again played by a risk-neutral player. We now move on to introduce some information-theoretic quantities.
\subsection{Arimoto's dependence measure and R\'enyi channel capacity in the theory of information}\label{s:arimoto}
We start this subsection by introducing the \emph{dependence measures} of interest (also known as measures of dependence \cite{BLP1} or $\alpha$-mutual information measures \cite{review_RMI}) and, in particular, Arimoto's dependence measure \cite{arimoto}. A reminder note on notation before we start: we consider random variables (RVs) ($X, G,...$) on a finite alphabet $\mathcal{X}$, with the probability mass function (PMF) of $X$ represented as $p_X$, satisfying $p_X(x)\geq 0$, $\forall x\in \mathcal{X}$, and $\sum_{x\in \mathcal{X}}p_X(x)=1$. For simplicity, we omit the alphabet when summing, and write $p_X(x)$ as $p(x)$ when evaluating. We denote the support of $p_X$ as ${\rm supp} (p_X) \coloneqq \{x\,|\,p(x)>0\}$, the cardinality of the support as $|{\rm supp}(p_X)|$, and the extended line of real numbers as $ \mathds{ \overline R}\coloneqq \mathds{R} \cup \{\infty,-\infty\}$. We now start by considering the R\'enyi entropy.
\begin{definition} (R\'enyi entropy \cite{renyi})
The R\'enyi entropy of order $\alpha \in \mathds{\overline R}$ of a PMF $p_X$ is denoted as $H_{\alpha}(X)$. The orders $\alpha \in(-\infty,0)\cup(0,1)\cup(1,\infty)$ are defined as:
\begin{align}
H_{\alpha}(X)
& \coloneqq
\frac{1}{1-\alpha}
\log
\left(
\sum_x
p(x)^{\alpha}
\right)
.
\label{eq:RE}
\end{align}
The orders $\alpha \in\{0,1,\infty,-\infty\}$ are defined by continuous extension of \eqref{eq:RE} as: $H_0(X)\coloneqq \log |{\rm supp}(p_X)|$, $H_1(X)\coloneqq H(X)$, with $H(X) \coloneqq -\sum_x p(x)\log p(x)$ the Shannon entropy \cite{CT}, $H_{\infty}(X)\coloneqq-\log \max_x p(x)=-\log p_{\rm max}$, and $H_{-\infty}(X)\coloneqq -\log \min_x p(x) = - \log p_{\rm min}$. The R\'enyi entropy is a function of the PMF $p_X$ and therefore, one can alternatively write $H_\alpha(p_X)$. However, we keep the convention of writing $H_{\alpha}(X)$.
\end{definition}
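For concreteness, a minimal numerical sketch of \eqref{eq:RE} and its limiting orders is given below; logarithms are taken in base 2 purely as an illustrative convention.
\begin{verbatim}
import numpy as np

def renyi_entropy(p, alpha):
    # H_alpha(X) of a PMF p, Eq. (RE), with the limiting orders handled explicitly
    p = np.asarray(p, dtype=float)
    if alpha == 1:                         # Shannon entropy
        q = p[p > 0]
        return -np.sum(q * np.log2(q))
    if alpha == 0:                         # log of the support size
        return np.log2(np.count_nonzero(p))
    if alpha == np.inf:                    # min-entropy, -log p_max
        return -np.log2(np.max(p))
    if alpha == -np.inf:                   # -log p_min (diverges if p has zeros)
        return -np.log2(np.min(p))
    return np.log2(np.sum(p ** alpha)) / (1 - alpha)

print([renyi_entropy([0.5, 0.3, 0.2], a) for a in (-2, 0, 0.5, 1, 2, np.inf)])
\end{verbatim}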
The R\'enyi entropy is mostly considered for positive orders, but it is also sometimes explored for negative values \cite{NV1, NV2, SR1, SR2}. In this work we use the whole spectrum $\alpha \in \mathds{\overline R}$. We now consider the Arimoto-R\'enyi extension of the conditional entropy.
\begin{definition} (Arimoto-R\'enyi conditional entropy \cite{arimoto}) The Arimoto-R\'enyi conditional entropy of order $\alpha \in \mathds{\overline R}$ of a joint PMF $p_{XG}$ is denoted as $H_{\alpha}(X|G)$. The orders $\alpha \in (-\infty,0) \cup (0,1) \cup (1,\infty)$ are defined as:
\begin{align}
H_{\alpha}(X|G)
& \coloneqq
\frac{\alpha}{(1-\alpha)}
\log
\left[
\sum_g
\left(
\sum_x
p(x,g)^\alpha
\right)^\frac{1}{\alpha}
\right]
.
\label{eq:ARCE}
\end{align}
The orders $\alpha \in\{0,1,\infty,-\infty\}$ are defined by continuous extension of \eqref{eq:ARCE} as: $H_{0}(X|G) \coloneqq \log \max_g |{\rm supp}(p_{X|G=g})|$, $H_{1}(X|G) \coloneqq H(X|G)$, with $H(X|G) \coloneqq -\sum_{x,g}p(x,g)\log p(x|g)$ the conditional entropy \cite{CT}, $H_{\infty}(X|G)
\coloneqq
-\log
\sum_g
\max_x
p(x,g)$, and $H_{-\infty}
(X|G)
\coloneqq
-
\log
\sum_g
\min_x
p(x,g)$. Arimoto-R\'enyi conditional entropy is a function of the joint PMF $p_{XG}$ and therefore, one can alternatively write $H_\alpha(p_{XG})$. However, we keep the convention of writing $H_{\alpha}(X|G)$.
\end{definition}
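A corresponding sketch of \eqref{eq:ARCE}, again with base-2 logarithms and with the four limiting orders implemented as stated above, could look as follows; the joint PMF is stored as an array with rows indexed by $x$ and columns by $g$.
\begin{verbatim}
import numpy as np

def arimoto_conditional_entropy(p_xg, alpha):
    # H_alpha(X|G) of a joint PMF p_xg[x, g], Eq. (ARCE)
    p_xg = np.asarray(p_xg, dtype=float)
    if alpha == 1:                                    # Shannon H(X|G)
        p_g = p_xg.sum(axis=0)
        with np.errstate(divide="ignore", invalid="ignore"):
            terms = np.where(p_xg > 0, p_xg * np.log2(p_xg / p_g), 0.0)
        return -np.sum(terms)
    if alpha == 0:                                    # log max_g |supp p_{X|G=g}|
        return np.log2(np.max(np.count_nonzero(p_xg > 0, axis=0)))
    if alpha == np.inf:
        return -np.log2(np.sum(np.max(p_xg, axis=0)))
    if alpha == -np.inf:
        return -np.log2(np.sum(np.min(p_xg, axis=0)))
    inner = np.sum(p_xg ** alpha, axis=0) ** (1.0 / alpha)
    return (alpha / (1 - alpha)) * np.log2(np.sum(inner))
\end{verbatim}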
We remark that there are alternative ways to R\'enyi-extend the conditional entropy \cite{review_RCE}. The Arimoto-R\'enyi conditional entropy is, however, the only one (amongst five alternatives \cite{review_RCE}) that simultaneously satisfies the following desirable properties for a conditional entropy \cite{review_RCE}: i) monotonicity, ii) chain rule, iii) consistency with the Shannon entropy, and iv) consistency with the $\infty$ conditional entropy (also known as min-entropy). Consistency with the Shannon conditional entropy means that $\lim_{\alpha \rightarrow 1}H_{\alpha}(X|G) = H(X|G)$, and similarly for property iv). In this sense, one can think about the Arimoto-R\'enyi conditional entropy as the ``most appropriate" R\'enyi-extension (if not the outright ``proper" R\'enyi extension) of the conditional entropy. We now consider Arimoto's dependence measure, and its associated R\'enyi channel capacity.
\begin{definition} (Arimoto's dependence measure \cite{arimoto})
Arimoto's dependence measure of order $\alpha \in\mathds{\overline R}$ of a joint PMF $p_{XG}$ is given by:
\begin{align}
I
_\alpha(X;G)
&\coloneqq
\sgn(\alpha)
\left[
H_{\alpha}(X)
-
H
_{\alpha}(X|G)
\right]
,
\label{eq:DMA}
\end{align}
with the R\'enyi entropy \eqref{eq:RE} and the Arimoto-R\'enyi conditional entropy \eqref{eq:ARCE}. The case $\alpha=1$ reduces to the standard mutual information \cite{CT} $I_1(X;G) = I(X;G)$, with $I(X;G) \coloneqq H(X)-H(X|G)$. Arimoto's dependence measure is a function of the joint PMF $p_{XG}$ and therefore, one can alternatively write $I_\alpha(p_{XG})$ or $I_\alpha(p_{G|X}p_X)$, the latter taking into account that $p_{XG} = p_{G|X}p_X$. We use these three different notations interchangeably.
\end{definition}
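Combining (and reusing) the two previous sketches, Arimoto's dependence measure \eqref{eq:DMA} can be evaluated numerically as follows; the joint PMF below is an arbitrary illustrative choice.
\begin{verbatim}
import numpy as np

def arimoto_dependence(p_xg, alpha):
    # I_alpha(X;G) = sgn(alpha) [ H_alpha(X) - H_alpha(X|G) ], Eq. (DMA)
    p_x = np.asarray(p_xg, dtype=float).sum(axis=1)    # marginal of X
    return np.sign(alpha) * (renyi_entropy(p_x, alpha)
                             - arimoto_conditional_entropy(p_xg, alpha))

p_xg = np.array([[0.45, 0.05],      # illustrative joint PMF p(x, g)
                 [0.10, 0.40]])
for alpha in (-np.inf, -2, 0.5, 1, 2, np.inf):
    print(alpha, arimoto_dependence(p_xg, alpha))
\end{verbatim}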
\begin{definition} (R\'enyi channel capacity \cite{arimoto, csiszar, remarks, Nakiboglu})
The R\'enyi channel capacity of order $\alpha \in \mathds{\overline R}$, of a conditional PMF $p_{G|X}$ is given by:
\begin{align}
C_{\alpha}
(p_{G|X})
\coloneqq
\max_{p_X}
I_{\alpha}
(p_{G|X}p_X)
\label{eq:iso}
\end{align}
with the maximisation over all PMFs $p_X$, and Arimoto's dependence measure \eqref{eq:DMA}. The case $\alpha=1$ reduces to the standard channel capacity \cite{CT} $C_1(p_{G|X})=C(p_{G|X})=\max_{p_X}I(X;G)$.
\end{definition}
We remark that there are alternative candidates as R\'enyi-extensions of the mutual information \cite{review_RCE, review_RMI}. In particular, we highlight the dependence measures of: Sibson \cite{sibson}, Csisz\'ar \cite{csiszar}, and Bleuler-Lapidoth-Pfister \cite{BLP1}, which we address in the appendices as $I^{\rm V}_\alpha(X;G)$ with the label $\rm V\in\{S,C,BLP\}$ representing each case. These dependence measures are going to be useful, in particular, due to their connection to conditional R\'enyi divergences. We address these information-theoretic quantities in Appendix A. We now extend these information-theoretic quantities to the quantum domain.
\subsection{Arimoto's dependence measure and R\'enyi channel capacity in a quantum setting}
\label{s:arimoto2}
We now move on to describe Arimoto's measure of dependence in this quantum setting, as well as the
R\'enyi channel capacity.
\begin{remark}(Arimoto's dependence measure in a quantum setting)
We address Arimoto's dependence between two classical random variables encoded into quantum objects. Explicitly, the random variable $X$ is encoded in an ensemble of states $\mathcal{E}=\{\rho_x,p(x)\}$ and therefore, we address it as $X_\mathcal{E}$. On the other hand, $G$ is considered as the random variable obtained from a decoding measurement $\mathds{D}=\{D_g=\ketbra{g}{g}\}$ and therefore, we address it as $G_\mathbb{D}$. We consider a conditional PMF as $p_{G|X}^{(\mathbb{M},\mathcal{S})}$, given by $p(g|x)\coloneqq \tr [D_g \Lambda_\mathbb{M}(\rho_x)]$, $\mathcal{S} \coloneqq \{\rho_x\}$ a set of states, and the quantum-to-classical (measure-prepare) channel associated to the measurement $\mathbb{M}$ given by:
\begin{align}
\Lambda_\mathbb{M}(\sigma)
\coloneqq
\sum_a
\tr
[M_a\sigma]
\ketbra{a}{a},
\label{eq:qc}
\end{align}
with $\{\ket{a}\}$ an orthonormal basis. We effectively have $p(g|x)\coloneqq \tr [M_g \rho_x]$ and therefore we can think about the decoding variable $G_\mathbb{D}$ as $G_\mathbb{M}$. We are now interested in dependence measures quantifying the dependence between the variables $X_\mathcal{E}$ and $G_\mathbb{M}$, when encoded and decoded in the quantum setting described previously. We then consider Arimoto's dependence measure:
\begin{align}
I_\alpha
(X_\mathcal{E};G_\mathbb{M})
&\coloneqq
\sgn(\alpha)
\left[
H_{\alpha}
(X_\mathcal{E})
-
H_{\alpha}
(X_\mathcal{E}|G_\mathbb{M})
\right]
,
\end{align}
with the standard R\'enyi entropy \eqref{eq:RE} and the Arimoto-R\'enyi conditional entropy \eqref{eq:ARCE} for the quantum conditional PMF described above.
\end{remark}
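As an illustration of how the above quantities can be evaluated in practice, the following sketch builds the quantum conditional PMF $p(g|x)=\tr[M_g\rho_x]$ for a simple qubit example (the ensemble and POVM are arbitrary illustrative choices) and feeds the resulting joint PMF into the classical routine sketched earlier.
\begin{verbatim}
import numpy as np

def measurement_statistics(povm, states):
    # p(g|x) = tr[M_g rho_x]; rows indexed by g, columns by x
    return np.array([[np.real(np.trace(M @ rho)) for rho in states]
                     for M in povm])

ket0 = np.array([[1.0], [0.0]])
ketp = np.array([[1.0], [1.0]]) / np.sqrt(2.0)
states = [ket0 @ ket0.T, ketp @ ketp.T]            # {|0><0|, |+><+|}
p_x = np.array([0.5, 0.5])                         # ensemble prior
povm = [np.diag([1.0, 0.0]), np.diag([0.0, 1.0])]  # projective Z measurement

p_gx = measurement_statistics(povm, states)        # p(g|x)
p_xg = (p_gx * p_x).T                              # joint PMF p(x, g)
print(arimoto_dependence(p_xg, alpha=2.0))         # I_alpha(X_E; G_M)
\end{verbatim}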
\begin{remark} (R\'enyi capacity of a quantum conditional PMF)
The R\'enyi capacity of order $\alpha \in \mathds{\overline R}$ of a quantum conditional PMF $p_{G|X}^{(\mathbb{M},\mathcal{S})}$ is given by:
\begin{align}
C_{\alpha}
\left(
p_{G|X}^{(\mathbb{M},\mathcal{S})}
\right)
\coloneqq
\max_{p_X}
I_{\alpha }
\left(
p_{G|X}^{(\mathbb{M},\mathcal{S})}
p_X
\right),
\label{eq:iso_q}
\end{align}
with the maximisation over all PMFs $p_X$.
\end{remark}
The quantity we are interested in, in the quantum domain, is the R\'enyi capacity of order $\alpha$ of a quantum-classical channel.
\begin{definition}(R\'enyi capacity of a quantum-classical channel) The R\'enyi capacity of order $\alpha \in \mathds{\overline R}$ of a quantum-classical channel $\Lambda_\mathbb{M}$ associated to the measurement $\mathbb{M}$ is given by:
\begin{align}
C_{\alpha}
(\Lambda_\mathbb{M})
\coloneqq
\max_{\mathcal{S}}
C_{\alpha}
\left(
p_{G|X}^{(\mathbb{M},\mathcal{S})}
\right)
=
\max_\mathcal{E}
I_{\alpha}
\left(
p_{G|X}^{(\mathbb{M},\mathcal{S})}
p_X
\right)
,
\end{align}
with the maximisation over all sets of states $\mathcal{S}=\{\rho_x\}$ or over all ensembles $\mathcal{E}=\{\rho_x, p(x)\}$.
\end{definition}
We now address a resource-theoretic approach for the property of measurement informativeness.
\subsection{The quantum resource theory of measurement informativeness}\label{s:RoMI}
The framework of quantum resource theories (QRTs) has proven a fruitful approach towards quantum theory \cite{QRT_QP, RT_review}. In this work we focus on convex QRTs of measurements, with the resource of informativeness \cite{SL}.
\begin{definition} (Convex QRT of measurement informativeness \cite{SL}) Consider the set of Positive-Operator Valued Measures (POVMs) acting on a Hilbert space of dimension $d$. A POVM $\mathbb{M}$ is a collection of POVM elements $\mathbb{M}=\{M_a\}$ with $a\in \{1,...,o\}$ satisfying $M_a\geq 0$ $\forall a$ and $\sum_a M_a=\mathds{1}$. We now consider the resource of informativeness \cite{SL}. We say a measurement $\mathbb{N}$ is uninformative when there exists a PMF $q_A$ such that $N_a=q(a)\mathds{1}$, $\forall a$. We say that the measurement is informative otherwise, and denote the set of all uninformative measurements as ${\rm UI}$.
\end{definition}
The set of uninformative measurements forms a convex set and therefore, defines a convex QRT of measurements. We now introduce the notion of simulability of measurements, which is also called classical post-processing (CPP).
\begin{definition}(Simulability of measurements \cite{simulability, SL})
A measurement $\mathbb{N}=\{N_x\}$, $x\in \{1,...,k\}$ is simulable by the measurement $\mathbb{M}=\{M_a\}$, $a\in \{1,...,o\}$ when there exists a conditional PMF $q_{X|A}$ such that:
$N_x=\sum_a q(x|a)M_a$, $\forall x$. The simulability of measurements defines a partial order for the set of measurements which we denote as $\mathbb{N} \preceq \mathbb{M}$, meaning that $\mathbb{N}$ is simulable by $\mathbb{M}$. Simulability of the measurement $\mathbb{N}$ can alternatively be understood as a classical post-processing of the measurement $\mathbb{M}$.
\end{definition}
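The post-processing relation above is straightforward to implement; the sketch below coarse-grains an (illustrative) three-outcome qubit ``trine'' POVM into a two-outcome measurement and checks that the result is again a valid POVM.
\begin{verbatim}
import numpy as np

def simulate_measurement(povm_M, q_xa):
    # N_x = sum_a q(x|a) M_a, with q_xa[x, a] = q(x|a)
    return [sum(q_xa[x, a] * povm_M[a] for a in range(len(povm_M)))
            for x in range(q_xa.shape[0])]

# illustrative trine POVM: M_a = (2/3)|psi_a><psi_a|, Bloch angles 0, 120, 240 deg
povm_M = []
for t in (0.0, 2 * np.pi / 3, 4 * np.pi / 3):
    psi = np.array([[np.cos(t / 2)], [np.sin(t / 2)]])
    povm_M.append((2.0 / 3.0) * psi @ psi.T)

q_xa = np.array([[1.0, 0.0, 0.5],    # columns are the PMFs q(.|a)
                 [0.0, 1.0, 0.5]])
povm_N = simulate_measurement(povm_M, q_xa)
print(np.allclose(sum(povm_N), np.eye(2)))         # True: N is again a POVM
\end{verbatim}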
Two quantifiers for informativeness are the following.
\begin{definition} (Generalised robustness and weight of informativeness)
The generalised robustness \cite{GRoE, SL} and the weight \cite{EPR2, DS} of informativeness of a measurement $\mathbb{M}$ are given by:
\begin{align}
{\rm R}\left(\mathbb{M}\right)
&\coloneqq
{\scriptsize
\begin{matrix}
\text{\small \rm min}\\
r \geq 0\\
\mathbb{N} \in {\rm UI} \\
\mathbb{M}^G \\
\end{matrix}
}
\left\{
\rule{0cm}{0.6cm} r\,\bigg| \, M_a+rM^G_a=(1+r)N_a
\right\},
\label{eq:RoI}\\
{\rm W}\left(\mathbb{M}\right)
&\coloneqq
{\scriptsize
\begin{matrix}
\text{\small \rm min}\\
w \geq 0\\
\mathbb{N} \in {\rm UI} \\
\mathbb{M}^G \\
\end{matrix}
}
\left\{
\rule{0cm}{0.6cm} w\,\bigg| \, M_a=wM^G_a+(1-w)N_a
\right\}.
\label{eq:WoI}
\end{align}
The generalised robustness quantifies the minimum amount of a general measurement $\mathbb{M}^G$ that has to be added to $\mathbb{M}$ such that we get an uninformative measurement $\mathbb{N}$. The weight on the other hand, quantifies the minimum amount of a general measurement $\mathbb{M}^G$ that has to be used for recovering the measurement $\mathbb{M}$.
\end{definition}
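Both optimisations are semidefinite programs; for informativeness, a short calculation from the definitions (eliminating the general measurement $\mathbb{M}^G$, which is fixed once $r$ or $w$ and the PMF $q_A$ are chosen) suggests the simple reductions ${\rm R}(\mathbb{M})=\sum_a\lambda_{\max}(M_a)-1$ and ${\rm W}(\mathbb{M})=1-\sum_a\lambda_{\min}(M_a)$; see also \cite{SL, DS}. The snippet below uses these reductions and should be read as an illustrative sketch (with an arbitrary noisy qubit measurement as example) rather than a general SDP solver.
\begin{verbatim}
import numpy as np

def robustness_of_informativeness(povm):
    # R(M) = sum_a lambda_max(M_a) - 1 (reduction of Eq. (RoI))
    return sum(np.linalg.eigvalsh(M)[-1] for M in povm) - 1.0

def weight_of_informativeness(povm):
    # W(M) = 1 - sum_a lambda_min(M_a) (reduction of Eq. (WoI))
    return 1.0 - sum(np.linalg.eigvalsh(M)[0] for M in povm)

eta = 0.7                     # visibility of a noisy qubit Z measurement
povm = [eta * np.diag([1.0, 0.0]) + (1 - eta) * np.eye(2) / 2,
        eta * np.diag([0.0, 1.0]) + (1 - eta) * np.eye(2) / 2]
print(robustness_of_informativeness(povm),
      weight_of_informativeness(povm))
\end{verbatim}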
These resource quantifiers are going to be useful later on. Here we finish with the preliminary concepts and theoretical tools needed to describe our main results which we do next.
\section{Main Results}
We now start with the presentation of the main results of this work. First, we relate Arimoto's dependence measure (in the quantum domain) to the operational tasks of quantum betting games with risk. As corollaries of this result, we obtain the two previously known relationships relating: i) the accessible information to quantum state discrimination, and ii) the excludible information to quantum state exclusion. Second, using the insights from the previous result, we derive new quantum measured R\'enyi divergences for measurements. Third, we introduce resource monotones for the order generated by the simulability of measurements, which additionally recover the resource monotones of \emph{generalised robustness of resource}, as well as the \emph{weight of resource}. Taking all of these results into consideration, one arrives at the equations described in the summary of results, as well as at the diagram depicted in Fig.~\ref{fig:fig}. Finally, we provide an alternative interpretation of quantum state betting in terms of quantum horse betting (QHB), and address a slightly generalised version of Result 1, where one does not need to invoke quantum theory, and show that Arimoto's dependence measure now quantifies the advantage provided by \emph{side information} when comparing Gamblers playing the operational tasks of horse betting games.
\subsection{Arimoto's dependence measure and quantum state betting games played by Gamblers with different risk tendencies} \label{e:result1}
The main motivation now is to compare the performance of two gamblers via the maximised isoelastic certainty equivalent (ICE)
$\max_{
b_{X|G}
}
w^{CE}_R
\left(
b_{X|G}, \mathbb{M}, o_X,\mathcal{E}
\right)$. Specifically, we want to compare: i) a general gambler using a fixed measurement $\mathbb{M}$ against ii) the best uninformative gambler, meaning a gambler who can implement any uninformative measurement $\mathbb{N}\in{\rm UI}$, or equivalently, a gambler described by the quantity
$
\max_{\mathds{N} \in {\rm UI}}
\max_{
b_{X|G}
}
w^{CE}_R
\left(
b_{X|G}, \mathbb{N}, o_X,\mathcal{E}
\right)
$.
We have the following main result.
\begin{result}
Consider a QSB game defined by the pair $(o^{\sgn(\alpha)c}_X, \mathcal{E})$ with constant odds as $o^{\sgn(\alpha)c}(x) \coloneqq \sgn(\alpha)C$, $C>0$, $\forall x$, and an ensemble of states $\mathcal{E}=\{\rho_x,p(x)\}$. Consider a Gambler playing this game using a fixed measurement $\mathbb{M}$ in comparison to a Gambler being allowed to implement any uninformative measurement $\mathbb{N}\in{\rm UI}$. Consider both Gamblers with the same attitude to risk, meaning that they are represented by isoelastic functions $u_R(w)$ with the risk parametrised as $R(\alpha) \coloneqq 1/\alpha$. Each Gambler is allowed to play the game with the optimal betting strategies, meaning they can each propose a betting strategy independently from each other. Remembering that the Gamblers are interested in maximising the isoelastic certainty equivalent (ICE), we have the following relationship:
\begin{align}
&I_{\alpha}
(X_\mathcal{E};G_\mathbb{M})
\label{eq:result1}
\\
\nonumber
&=
\sgn(\alpha)
\log
\left[
\frac{
\displaystyle
\max_{
b_{X|G}
}
\,
w^{CE}_{1/\alpha}
\left(
b_{X|G}
,
\mathbb{M}
,
o_X^{\sgn(\alpha)c}
,
\mathcal{E}
\right)
}{
\displaystyle
\max_{\mathbb{N}\in {\rm UI}}
\max_{
b_{X|G}
}
\,
w^{CE}_{1/\alpha}
\left(
b_{X|G}
,
\mathbb{N}
,
o_X^{\sgn(\alpha)c}
,
\mathcal{E}
\right)
}
\right]
.
\end{align}
This means that Arimoto's dependence measure quantifies the ratio of the isoelastic certainty equivalent with risk $R(\alpha) \coloneqq 1/\alpha$ of the game defined by $(o^{\sgn(\alpha)c}_X, \mathcal{E})$, when the QSB game is being played with the best betting strategy, and when we compare a Gambler implementing a fixed measurement $\mathbb{M}$ against a Gambler using any uninformative measurement $\mathbb{N}\in {\rm UI}$.
\end{result}
The full proof of this result is in Appendix B. We now analyse two cases of particular interest ($\alpha \in\{\infty, -\infty\}$), as the following corollaries.
\begin{corollary}
In the case $\alpha\rightarrow\infty$ we recover the result found in \cite{SL}. Explicitly, we have:
\begin{align}
C_{\infty}(\Lambda_\mathbb{M})
&= \max_\mathcal{E} I_{\infty} (X_\mathcal{E};G_\mathbb{M}), \nonumber \\ &
=
\log
\left[
\max_\mathcal{E}
\frac{
P^{\rm QSD}_{\rm succ}(\mathcal{E},\mathbb{M})
}
{
\max_{\mathbb{N}\in {\rm UI}}
P^{\rm QSD}_{\rm succ}(\mathcal{E},\mathbb{N})
}
\right],
\end{align}
with the denominator being maximised over all uninformative measurements, and $P^{\rm QSD}_{\rm succ}(\mathcal{E},\mathbb{M})$ the probability of success in the quantum state discrimination (QSD) game defined by $\mathcal{E}$, with the Gambler using the measurement $\mathbb{M}$, given explicitly by:
\begin{align}
P^{\rm QSD}_{\rm succ}(\mathcal{E},\mathbb{M})
\coloneqq
\max_{q_{G|A}}
\sum_{g,a,x}
\delta^g_x\,
q(g|a)\,
p(a|x)\,
p(x),
\end{align}
with $p(a|x)\coloneqq \tr [M_a\rho_x]$, and the maximisation over all classical post-processing $q_{G|A}$. We remark that the R\'enyi capacity of order $\infty$ has also been called the accessible min-information of a channel, and denoted as $I^{\rm acc}_{\infty}(\Lambda_\mathbb{M})$ \cite{SL, Wilde_book}. This means that quantum state betting with risk (QSB$_{R(\alpha)}$) becomes quantum state discrimination (QSD) when $\alpha \rightarrow \infty$.
\end{corollary}
\begin{corollary}
In the case $\alpha\rightarrow -\infty$ we recover the result found in \cite{DS}. Explicitly, we have:
\begin{align}
C_{-\infty}(\Lambda_\mathbb{M})
&= \max_\mathcal{E} I_{-\infty} (X_\mathcal{E};G_\mathbb{M}), \nonumber \\ &
=
-
\log
\left[
\min_\mathcal{E}
\frac{
P^{\rm QSE}_{\rm err}(\mathcal{E},\mathbb{M})
}
{
\min_{\mathbb{N}\in {\rm UI}}
P^{\rm QSE}_{\rm err}(\mathcal{E},\mathbb{N})
}
\right],
\end{align}
with the denominator being minimised over all uninformative measurements, and $P^{\rm QSE}_{\rm err}(\mathcal{E},\mathbb{M})$ the probability of error in the quantum state exclusion (QSE) game defined by $\mathcal{E}$, with the Gambler using the measurement $\mathbb{M}$ explicitly given by:
\begin{align}
P^{\rm QSE}_{\rm err}(\mathcal{E},\mathbb{M})
\coloneqq
\min_{q_{G|A}}
\sum_{g,a,x}
\delta^g_x\,
q(g|a)\,
p(a|x)\,
p(x),
\end{align}
with $p(a|x)\coloneqq \tr [M_a\rho_x]$, and the minimisation being performed over all classical post-processing $q_{G|A}$. We remark that the R\'enyi capacity of order $-\infty$ has also been called the excludible information of a channel, and denoted as $I^{\rm exc}_{-\infty}(\Lambda_\mathbb{M})$ \cite{DS, MO}. This means that quantum state betting with risk (QSB$_{R(\alpha)}$) becomes quantum state exclusion (QSE) when $\alpha \rightarrow -\infty$.
\end{corollary}
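Numerically, the optimisations over post-processings $q_{G|A}$ appearing in the two corollaries can be carried out outcome by outcome: for each outcome $a$ one simply guesses the label $x$ maximising $p(a|x)p(x)$ (or, for exclusion, minimising it). The following sketch illustrates this with arbitrary statistics.
\begin{verbatim}
import numpy as np

def qsd_success_probability(p_ax, p_x):
    # max over q_{G|A}: for each a, bet on the x maximising p(a|x) p(x)
    return np.sum(np.max(p_ax * p_x, axis=1))

def qse_error_probability(p_ax, p_x):
    # min over q_{G|A}: for each a, output the x minimising p(a|x) p(x)
    return np.sum(np.min(p_ax * p_x, axis=1))

p_x  = np.array([0.5, 0.5])
p_ax = np.array([[0.9, 0.2],          # p(a|x) = tr[M_a rho_x], rows a, cols x
                 [0.1, 0.8]])
print(qsd_success_probability(p_ax, p_x),   # 0.85
      qse_error_probability(p_ax, p_x))     # 0.15
\end{verbatim}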
In Appendix C we provide further details on these two corollaries.
Result 1 establishes a connection between Arimoto's dependence measure and the operational tasks of quantum state betting (QSB) games, which recovers two known cases at $\alpha \in \{\infty, -\infty\}$ \cite{SL, DS}. We start by noting that the right hand side of \eqref{eq:result1} is a completely \emph{operational} quantity, which represents the advantage that an informative measurement provides when being used as a resource for QSB games, whilst the left hand side is the raw \emph{information-theoretic} dependence measure proposed by Arimoto and consequently, this result provides an operational interpretation of Arimoto's measure in the quantum domain.
Furthermore, it interprets the R\'enyi parameter as characterising the risk tendency of the Gamblers as $R = 1/\alpha$. It is also interesting to note that this works for all ensembles $\mathcal{E} = \{\rho_x,p(x)\}$, all measurements $\mathds{M} = \{M_g\}$, as well as for the whole range of the R\'enyi parameter $\alpha \in \mathds{\overline R}$, including negative values. We summarise the interpretation of this result in Fig.~\ref{fig:grid}.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.47]{fig3.pdf}
\vspace{-0.2cm}
\caption{
Possible scenarios for quantum state discrimination (QSD) and quantum state exclusion (QSE) games being played by Gamblers with different risk tendencies: risk-averse, risk-seeking, or risk-neutral, with the risk being parametrised as $R(\alpha)=1/\alpha$. Result 1 establishes that Arimoto's dependence measure quantifies the shaded region for $\alpha \in \mathds{\overline R}$, meaning that it characterises risk-averse Gamblers playing either QSD ($\alpha \geq 0$) or QSE games ($\alpha<0$). The bottom-left corner $(\alpha \rightarrow -\infty)$ and the top-right corner $(\alpha \rightarrow \infty)$ represent a risk-neutral Gambler ($R=0$) playing either standard exclusion or discrimination games, respectively. This means that standard QSD games can be understood as a risk-neutral Gambler playing QSD games with risk. Similarly, standard QSE games can be understood as a risk-neutral Gambler playing QSE games with risk. The middle point at $\alpha \rightarrow 0$ represents the transition between a maximally risk-averse Gambler playing QSD games and a maximally risk-averse Gambler playing QSE games.
}
\label{fig:grid}
\end{figure}
We also highlight here that Result 1 lies at the intersection of three major fields: quantum theory, information theory, and the theory of games and economic behaviour. We can see here how the concept of risk-aversion, from the economic sciences, has helped us to derive and address the operational tasks of quantum state betting. We believe that this result has the potential to spark further cross-fertilisation of ideas between these three major areas of knowledge, of which the present work unfolds only one particular example. In particular, and building on these results, we now propose new quantum R\'enyi divergences for measurements.
\subsection{Quantum R\'enyi divergences}
\label{e:divergences}
Considering that the KL-divergence is of central importance in classical information theory, it is natural to consider quantum extensions of this quantity. There are many ways to define quantum R\'enyi divergences \cite{review_qrd, petz-renyi, sandwiched1, sandwiched2, geometric, sharp, measured1}, with most of the effort being concentrated on divergences as functions of quantum states. Recently however, divergences and entropies for additional objects like channels and measurements have started to be explored \cite{qrd_channels1, qrd_channels2, channel_entropy}. We are now interested in addressing quantum R\'enyi divergences for measurements. The approach we take here takes inspiration both from measured R\'enyi divergences for states \cite{measured3, measured2, measured1} and from R\'enyi conditional divergences in the classical domain \cite{sibson, csiszar, BLP1}. Explicitly, we invoke the classical conditional R\'enyi divergences, and use them to define measured R\'enyi divergences for measurements.
\begin{definition} (Measured quantum R\'enyi divergence of Sibson)
The measured R\'enyi divergence of Sibson of order $\alpha \in \mathds{\overline R}$ and a set of states $\mathcal{S}=\{\rho_x\}$ of two measurements $\mathds{M}=\{M_g\}$ and $\mathds{N}=\{N_g\}$ is given by:
\begin{align}
D_{\alpha}^{\mathcal{S}}
(\mathds{M}||\mathds{N})
\coloneqq
\max_{p_X}
D_{\alpha}
\left(
p_{G|X}^{({\mathbb{M}},\mathcal{S})}
\Big|\Big|
q_{G|X}^{({\mathbb{N}},\mathcal{S})}
\Big|
\,
p_X
\right),
\end{align}
with the maximisation over all PMFs $p_X$, and the conditional PMFs $p_{G|X}^{({\mathbb{M}},\mathcal{S})}$ and $q_{G|X}^{({\mathbb{N}},\mathcal{S})}$ given by $p(g|x)\coloneqq \tr (M_g \rho_x)$, $q(g|x)\coloneqq \tr (N_g \rho_x)$, respectively, and $D(\cdot||\cdot|\cdot)$ the conditional R\'enyi divergence of Sibson \cite{sibson} which is defined in Appendix A.
\end{definition}
We now use this measured R\'enyi divergence in order to define a distance measure with respect to a free set of interest, the set of uninformative measurements in this case.
\begin{definition} (Measurement informativeness measure of Sibson)
The measurement informativeness measure of Sibson of order $\alpha \in \mathds{\overline R}$ and set of states $\mathcal{S}$ of a measurement $\mathbb{M}$ is given by:
\begin{align}
E_{\alpha}^{\mathcal{S}}
(\mathds{M})
\coloneqq
\min_{\mathds{N}\in {\rm UI}}
D_{\alpha}^{\mathcal{S}}
(\mathds{M}||\mathds{N}),
\label{eq:MIM}
\end{align}
with the minimisation over all uninformative measurements.
\end{definition}
Interestingly, it turns out that this quantity becomes equal to a quantity which we have already introduced.
\begin{result}
The measurement informativeness measure of Sibson is equal to the R\'enyi capacity of order $\alpha \in \mathds{\overline R}$ of the quantum conditional PMF induced by the measurement $\mathbb{M}$ on the set of states $\mathcal{S}$:
\begin{align}
E_{\alpha}^{\mathcal{S}}
(\mathds{M})
=
C_\alpha
\left(
p_{G|X}^{(\mathds{M}, \mathcal{S})}
\right),
\end{align}
with the quantum conditional PMF arising from the quantum-classical channel \eqref{eq:qc} associated to the measurement $\mathbb{M}$.
\end{result}
The proof of this result is in Appendix D. This result establishes a connection between R\'enyi dependence measures (which are used to define the R\'enyi channel capacity) and quantum R\'enyi divergences of measurements (which are used to define the measurement informativeness measure). We now consider the quantity $E_{\alpha}(\mathds{M}) \coloneqq \max_{\mathcal{S}} E_{\alpha}^{\mathcal{S}} (\mathds{M})=C_\alpha(\Lambda_\mathds{M})$ and analyse the particular cases of $\alpha \in\{\infty,-\infty\}$.
\begin{corollary}
The measurement informativeness measure of Sibson recovers the generalised robustness and the weight of resource at the extremes $\alpha \in\{\infty,-\infty\}$ as:
\begin{align}
E_{\infty}(\mathds{M})
&=
\log
\left[
1+{\rm R}(\mathbb{M})
\right],\\
E_{-\infty}(\mathds{M})
&=
-
\log
\left[
1-{\rm W}(\mathbb{M})
\right],
\end{align}
with the generalised robustness of informativeness \eqref{eq:RoI} \cite{SL}, and the weight of informativeness \eqref{eq:WoI} \cite{DS}.
\end{corollary}
This result follows from the fact that the R\'enyi channel capacity becomes the accessible min-information and the excludible information at the extremes $\alpha \in\{\infty,-\infty\}$, together with the results from \cite{SL} and \cite{DS}. Result 2 therefore establishes a connection between R\'enyi dependence measures and quantum R\'enyi divergences of measurements. Inspired by these results, we now proceed to propose a family of resource monotones.
\subsection{Resource monotones}
\label{e:monotones}
Resource quantifiers are special cases of resource monotones, which are central objects of study within QRTs \cite{RT_review, TG}. Two common families of resource monotones are the so-called \emph{robustness-based} \cite{RoE, RoNL_RoS_RoI, RoS, RoA, RoC, SL, RoT, RoT2, RT_magic, citeme1, citeme2} and \emph{weight-based} \cite{WoE, EPR2, WoS, RoNL_RoS_RoI, WoAC} resource monotones. Inspired by the previous results, we now define measures which turn out to be monotones for the order induced by the simulability of measurements and which, furthermore, recover at their extremes the generalised robustness and the weight of informativeness.
\begin{definition} ($\alpha$-measure of informativeness)
The $\alpha$-measure of informativeness of order $\alpha \in \mathds{\overline R}$ of a measurement $\mathbb{M}$ is given by:
\begin{align}
{\rm M}_{\alpha}
(\mathds{M})
\coloneqq
\sgn(\alpha)
2^{
\sgn(\alpha)
E_{\alpha}
(\mathds{M})
}
-
\sgn(\alpha)
,
\label{eq:aR}
\end{align}
with $E_{\alpha}(\mathds{M}) \coloneqq \max_{\mathcal{S}} E_{\alpha}^{\mathcal{S}} (\mathds{M})$ and the measurement informativeness measure defined in \eqref{eq:MIM}.
\end{definition}
The motivation behind the proposal of this resource measure is twofold: i) it recovers the generalised robustness and the weight of resource as ${\rm M}_{\infty}(\mathds{M})={\rm R}(\mathds{M})$ and ${\rm M}_{-\infty}(\mathds{M})={\rm W}(\mathds{M})$, and ii) it allows the following operational characterisation.
\begin{remark}
The $\alpha$-measure of informativeness of order $\alpha \in \mathds{\overline R}$ of a measurement $\mathbb{M}$ characterises the performance of the measurement $\mathbb{M}$, relative to the performance of the best uninformative measurement, when playing the same QSB game:
\begin{align}
\max_{\mathcal{E}}
\frac{
\displaystyle
\max_{b_{X|G}}
\,
w^{CE}_{1/\alpha}
\left(
b_{X|G}
,
\mathbb{M}
,
o^{c}_X
,
\mathcal{E}
\right)
}{
\displaystyle
\max_{\mathbb{N}\in {\rm UI}}
\max_{
b_{X|G}
}
\,
w^{CE}_{1/\alpha}
\left(
b_{X|G}
,
\mathbb{N}
,
o^{c}_X
,
\mathcal{E}
\right)
}
&=
1+{\rm M}_{\alpha}(\mathbb{M}),
\\
\min_{\mathcal{E}}
\frac{
\displaystyle
\max_{b_{X|G}}
\,
w^{CE}_{1/\alpha}
\left(
b_{X|G}
,
\mathbb{M}
,
o^{-c}_X
,
\mathcal{E}
\right)
}{
\displaystyle
\max_{\mathbb{N}\in {\rm UI}}
\max_{
b_{X|G}
}
\,
w^{CE}_{1/\alpha}
\left(
b_{X|G}
,
\mathbb{N}
,
o^{-c}_X
,
\mathcal{E}
\right)
}
&=
1-{\rm M}_{\alpha}(\mathbb{M})
,
\end{align}
for $\alpha\geq 0$ and $\alpha<0$, respectively. These two equalities follow directly from the definitions and the previous results.
\end{remark}
This result is akin to the connections between generalised robustness characterising discrimination games, and the weight of resource characterising exclusion games. We now also have that the $\alpha$-measure of informativeness defines a resource monotone for the simulability of measurements.
\begin{result} (The $\alpha$-measure of informativeness is a resource monotone)
The $\alpha$-measure of informativeness \eqref{eq:aR} defines a resource monotone for the simulability of measurements, meaning that it satisfies the following properties. (i) Faithfulness: ${\rm M_\alpha} (\mathbb{M}) = 0
\leftrightarrow \mathbb{M} = \{M_a=q(a)\mathds{1}\}$ and (ii) Monotonicity under measurement simulation: $\mathbb{N} \preceq \mathbb{M} \rightarrow {\rm M_\alpha}(\mathbb{N})\leq {\rm M_\alpha}(\mathbb{M})$.
\end{result}
The proof of this result is in Appendix E. It would be interesting to find a geometric interpretation of this measure, in a similar manner to how its two extremes admit the geometric interpretations in \eqref{eq:RoI} and \eqref{eq:WoI}, as well as to explore additional properties, like convexity, in order to talk about it being a resource quantifier. It would also be interesting to explore additional monotones, in particular, whether the isoelastic certainty equivalent forms a complete set of monotones for the simulability of measurements, given that this holds for the two extremes at plus and minus infinity.
We now describe a slightly generalised version of Result 1, relating Arimoto's dependence measure to horse betting games with side information.
\subsection{
Arimoto's dependence measure and horse betting games with risk and general side information
}
\label{e:general}
We now consider horse betting (HB) games with risk and side information without making reference to quantum mechanics, and present a slightly more general version of Result 1, which interprets Arimoto's dependence measure as quantifying the advantage provided by side information when playing horse betting games.
We consider here the Gambler now having access to a random variable $G$, which is potentially correlated with the outcome of the `horse race' $X$ and therefore, the Gambler can try to use this to her/his advantage. This means that these horse betting games are defined by the pair $(o_X,p_{GX})$, and the Gambler is in charge of proposing the betting strategy $b_{X|G}$. We highlight here that this contrasts with the case of QSB games, because there the Gambler could in principle be in charge of intervening in the conditional PMF $p_{G|X}$, as the Gambler had access to a measurement and $p(g|x)=\tr(M_g\rho_x)$, whilst here on the other hand, $p_{GX}=p_{G|X}p_X$ is given, and the Gambler cannot influence $p_{G|X}$. However, the figure of merit is still the isoelastic certainty equivalent, which for risk $R \in (-\infty,1) \cup (1,\infty)$ we now write as:
\begin{multline}
w^{CE}_R
(b_{X|G},o_X,p_{XG})\\
\coloneqq
\left[
\sum_{g,x}
\big[
b(x|g)
o(x)\big]^{1-R}
p(x,g)
\right]^\frac{1}{1-R}.
\label{eq:HB_SI_W}
\end{multline}
The cases $R \in\{1,\infty,-\infty\}$ are defined again by continuous extension of \eqref{eq:HB_SI_W}. A HB game is then specified by the pair $(o_X,p_{GX})$, and the Gambler plays this game with a betting strategy $b_{X|G}$.
The operational tasks of HB games were characterised by Bleuler, Lapidoth, and Pfister (BLP), in terms of the BLP-CR divergence \cite{BLP1} (see Appendix A for more details on this). We now modify these tasks in order to consider both gain games (when the odds are positive) and loss games (when the odds are negative), and relate Arimoto's dependence measure to HB games with the following result, which can be derived in a similar manner as result 1.
\begin{result}
Consider a horse betting game defined by the pair $(o^{\sgn(\alpha)c}_X, p_{XG})$ with constant odds as $o^{\sgn(\alpha)c}(x) \coloneqq \sgn(\alpha)C$, $C>0$, $\forall x$, and a joint PMF $p_{XG}$. Consider a Gambler playing this game with access to the side information $G$, against a Gambler \emph{without} access to any side information. Consider both Gamblers with the same attitude to risk, meaning they are represented by isoelastic functions $u_R(w)$ with the risk parametrised as $R(\alpha) \coloneqq 1/\alpha$. The Gamblers are allowed to play these games with the optimal betting strategies, which they can each choose independently from each other. Remembering that the Gamblers are interested in maximising the isoelastic certainty equivalent, we have the following relationship:
\begin{multline}
I_{\alpha}(X;G)
\\
=
\sgn(\alpha)
\log
\left[
\frac{
\displaystyle
\max_{b_{X|G}}
\,
w^{CE}_{1/\alpha}
(
b_{X|G}
,
o^{\sgn(\alpha)c}_X
,
p_{XG}
)
}{
\displaystyle
\max_{b_{X}}
\,
w^{CE}_{1/\alpha}
(
b_X
,
o^{\sgn(\alpha)c}_X
,
p_X
)
}
\right]
.
\end{multline}
This means that Arimoto's dependence measure quantifies the ratio of the isoelastic certainty equivalent with risk $R(\alpha) \coloneqq 1/\alpha$ of the games defined by $(o^{\sgn(\alpha)c}_X, p_{XG})$, when each HB game is played with the best betting strategy, and we compare the performance of a first Gambler who makes use of the side information $G$, against a second gambler which has no access to side information.
\end{result}
We emphasise that this result is purely ``classical'', as it does not invoke any elements from quantum theory. It also complements a previous relationship between HB games and the BLP-CR divergence \cite{BLP1}. Here, on the other hand, we characterise instead \emph{the ratio} between the two HB scenarios, where we compare two Gamblers: a first Gambler with access to side information against a second Gambler having no access to side information. We now address a particular known case as the following corollary.
\begin{corollary}
In the case $\alpha=1$, which means HB games with risk aversion given by $R=1$, we get:
\begin{multline}
I(X;G)
=
\max_{b_{X|G}}
U_1
(
b_{X|G},o_X^{c},p_{XG}
)\\
-
\max_{b_X}
U_1
(
b_{X},o_X^{c},p_{X}
),
\end{multline}
with $I_{1}(X;G)= I(X;G)$ the standard mutual information, and $U_1 \coloneqq \log w^{CE}_1$ the logarithm of the isoelastic certainty equivalent at $R=1$ (the expected logarithm of the wealth). This is a particular case of a relationship known to hold for all odds $o(x)$ \cite{CT, LN_moser}.
\end{corollary}
\section{Conclusions}
\label{s:conclusions}
In this work, we have proposed that using ideas of risk-aversion and utility theory is a powerful way of extending the well-studied tasks of quantum state discrimination and quantum state exclusion. We have shown that this places two recently discovered four-way correspondences \cite{SL, DS} into a much broader continuous family of correspondences. For the first time, this shows that there exist deep connections between operational state identification tasks, dependence measures, R\'enyi divergences, and resource monotones.
In more detail, first, we introduced the operational tasks of quantum state betting games, as a generalisation of horse betting games \cite{kelly, BLP1} to the quantum domain, where the side information is now encoded as an ensemble of states, which Bob needs to decode by means of a measurement in order to try to use it to his advantage and, consequently, propose an optimal betting strategy. We prove that when we consider a gambler who is risk-averse, Arimoto's dependence measure \cite{arimoto}, in the quantum domain, quantifies the multiplicative increase in their isoelastic certainty equivalent when using a given (potentially informative) measurement, compared to using the best uninformative measurement (i.e. having to disregard the quantum side information).
Second, related to the above, we also proved that in a purely classical setting (meaning without invoking quantum theory), Arimoto's dependence measure more generally quantifies how useful side-information is. In particular, it quantifies the multiplicative increase in the isoelastic certainty equivalent of the gambler when given the side information, over not having access to it. This result complements the results of Bleuler, Lapidoth, and Pfister, where HB games were characterised in terms of the R\'enyi divergence and the BLP-CR-divergence \cite{BLP1}, without explicitly characterising this ratio. Our result can be seen as giving a very clean operational interpretation of Arimoto's dependence measure, and as showing that the R\'enyi parameter can be understood operationally as quantifying the risk aversion of a gambler.
Third, we proposed a set of new \emph{measured} quantum R\'enyi divergences for measurements, in terms of the conditional R\'enyi divergences of Sibson \cite{sibson}. Finally, we introduced a new family of resource monotones for the QRT of measurement informativeness. These resource monotones recover, as limiting cases, the generalised robustness of informativeness \cite{SL, GRoE} and the weight of informativeness \cite{DS, EPR2}.
All of the above elements are elegantly connected via a four-way correspondence, which substantially extended the two correspondences previously uncovered \cite{SL, DS}, which we now understand to be the two extremes of a continuous spectrum.
We believe our results are the start of a much broader and deeper investigation into the use of risk-aversion, utility theory, and other ideas from economics, to obtain a broader unified understanding of many topics in quantum information theory. Our results raise many questions and open up various avenues for future research, a number of which we briefly describe below.
\subsection{Open problems, perspectives, and avenues for future research}\label{s:perspective}
\begin{enumerate}
\item In this first investigation, we have restricted our attention exclusively to a very simple resource theory -- that of measurement informativeness. There is no reason to believe that the results found here are not much more general, and it will be important to show that they extend in a straightforward way when considering other resources for measurements. Moreover, it will also be interesting to develop the ideas presented here to other QRTs, with different resourceful objects besides measurements, such as states and channels.
\item An exciting broad possibility, is to explore more generally the concept of risk aversion in quantum information theory. This is a concept which we are just starting to understand and incorporate into the theory of information and therefore, we believe this is an exciting avenue of research which could have far-reaching implications when considered for additional operational tasks, like Bell-nonlocal games, and interactive proof systems.
\item Similarly, the scenario here considered represents the convergence of three major research fields: i) quantum theory, ii) information theory and iii) the theory of games and economic behaviour. Specifically, we borrowed the concept of risk aversion from the economic sciences in order to solve an open problem in quantum information theory. We believe that this is just an example of the benefits that can be obtained from considering the cross-fertilisation of ideas between these three major current research fields. Consequently, it would be interesting to keep importing further concepts (in addition to risk aversion), as well as to explore the other direction, i.e., whether quantum information theory can provide insights into the theory of games and economic behaviour. We believe this can be a fruitful approach for future research. In particular, horse betting games are one member of a larger family of tasks related to the investment in portfolios \cite{CT}, and it would therefore be interesting to explore quantum versions of the operational tasks that emerge in these scenarios.
\item The set of connections we have established here are by means of the R\'enyi entropies, and we have seen that the parameter $\alpha$ is intimately linked to the risk aversion of a gambler. It is interesting to speculate whether other types of connections might be possible. For example, Brandao \cite{WoE_Brandao} previously found a family of entanglement witnesses that encompassed both the generalised robustness and the weight of entanglement. We do not know if this is intimately related with our findings here, or whether our insights might shed further light, e.g.~operational significance, on these entanglement witnesses and their generalisations.
\item We were led to introduce new measured quantum R\'enyi divergences for measurements in this work. We believe that they should find relevance and application in settings far removed from the specific setting we considered here. It would also be interesting to further explore their relevance in other areas within quantum information theory.
\item We have also introduced new resource monotones, for which we do not yet have a full understanding. In particular, unlike numerous other monotones, these do not yet have an obvious geometric interpretation. It would be interesting to develop such ideas further.
\item It would be interesting to explore additional monotones, in particular, whether the isoelastic certainty equivalent $w^{CE}_{R(\alpha)}$ forms (for all $\alpha$) a complete set of monotones for the order induced by the simulability of measurements, given that this is the case for the two extremes at $\alpha \in \{\infty,-\infty\}$ \cite{SL, DS}.
\item We point out that we have used information-theoretic quantities with the R\'enyi parameter $\alpha$ taking both positive and negative values. Whilst negative values have been explored in the literature, it is fair to say that they have not been the main focus of attention. Here we have proven that information-theoretic quantities with negative orders possess a descriptive power \emph{different} from their positive counterparts and therefore, it would be interesting to explore their usefulness in other information-theoretic scenarios.
\end{enumerate}
\section*{Acknowledgements}
We thank Patryk Lipka-Bartosik, Tom Purves, Noah Linden, and Roope Uola for insightful discussions. A.F.D. acknowledges support from COLCIENCIAS 756-2016 and the UK EPSRC (EP/L015730/1). P.S. acknowledges support from a Royal Society URF (UHQT).
\onecolumngrid
\section{Introduction}
For years, neural networks have been applied to medical diagnostic tasks and have achieved levels of performance on par with highly trained pathologists, radiologists, and other diagnosticians \cite{medical-cnn-survey}. However, these systems are often relegated to lab settings and clinical decision support systems, as their black-box nature does not inspire the confidence necessary for life-critical environments \cite{NIH-AI}\cite{AI-CDSS}.
The decisions of such models are near impossible for the researchers that created them to understand, let alone the proposed clinical end-user \cite{Med-XAI}. Additionally, neural networks are incredibly vulnerable to adversarial examples. Adversarial examples are defined as instances, $x$, whose true label is certifiably $y$, but the mechanics of the estimator (either inherent to the architecture or specific to the weights) result in a confidently incorrect prediction \cite{Intro-Adv}. These examples can be carefully contrived instances, $x^\prime = x + p$, which are extremely similar to a natural instance, $x$, but are classified differently, $h(x) \not = h(x^\prime)$. However, adversarial examples aren't just the product of bad-actors. Recent research has demonstrated the existence of many naturally occurring instances, which behave adversarially when used as input to popular models \cite{Natural-Adv}. Prior to this discovery, researchers and engineers building models for environments where security and bad-actors were not a concern could somewhat understandably deprioritize adversarial robustness, but with the combination of increased reliance on AI-driven systems, the existence of natural adversarial examples, and the proliferation of civilian-targeted cyber-warfare, secure and reliable neural networks are more important than ever.
For medical applications, clean- and robust-accuracy must be optimized simultaneously and with near-equal priority. If the goal of medical AI is to increase accessibility to high-quality diagnostics and treatments, humans will need to abdicate their roles in some procedural medical processes where AI succeeds, such as image classification and segmentation. Unfortunately, these are also the applications where a well-placed orthopedic pin or speck of dust could make all the difference between a properly and improperly diagnosed patient.
This paper proposes a simple architectural suggestion for medical image classification models that greatly increases robust-accuracy without sacrificing clean-accuracy (i.e. accuracy on non-adversarial inputs), unlike previous techniques \cite{RobustVsAccuracy}. Additionally, this method increases the parameter count of ResNet-50-based \cite{ResNet} architectures by less than 1.3\%; therefore, training time and data requirements are not significantly affected. Later sections will carefully compare the behavioral differences between baseline and adjusted architectures to investigate the root-cause of this increased performance.
\section{Related Work}
\subsection{Vulnerability of Medical Image Models}
Paschali et al. (2018) was among the first to examine the susceptibility of medical imaging models to adversarial examples, discovering that current attacks were able to decrease accuracy on popular medical classification models by up to 25\% and medical segmentation models by up to 41\% \cite{Paschali}. Building on this work, as well as that of Finlayson et al. (2019) \cite{Finlayson}, Ma and Niu et al. (2021) found that models designed for medical images were, in fact, more vulnerable to adversarial image attacks than those designed for natural images. Key to this conclusion was their finding that the medical image models they tested had significantly sharper loss landscapes than their natural image counterparts \cite{MaNiu}. This correlation between sharp loss landscapes and more vulnerable models is outlined by Madry et al. (2017), which attributes this sharpness to overparametrization \cite{Madry}. Ma and Niu et al. (2021) echoes this concern of overcomplexity and also suspects that the salience of intricate textures in medical images may also contribute to their compatibility with adversarial attacks. Additionally, they find that while medical image models are easily fooled by adversarial images, adversarially perturbed medical images are more easily detected than their natural image counterparts; they attribute this property to the tendency of popular attacks to place perturbations outside of the image's salient region \cite{MaNiu}.
\subsection{Finding Adversarially Robust Architectures}
Training adversarially robust neural networks is an extremely active area of research. Current best-practices involve adversarial training, in which adversarial examples are included in the training set \cite{Madry}. However, this method serves neither to remedy nor understand the underlying vulnerabilities of neural networks and often comes at the cost of training time and clean accuracy \cite{RobustVsAccuracy}. For this reason, it is imperative that the research community continues to investigate architectural modifications that yield greater adversarial robustness.
Though many recent works have developed neural architecture search algorithms for identifying robust architectures \cite{AdvRush}\cite{MORAS}\cite{DSRNA}, these methods are merely choosing the most robust architecture from a finite, user-defined search space. This is a computationally expensive process that requires a large amount of labeled data. Additionally, the architectures found are specific to a single dataset, and the robustness does not generalize to other datasets.
\subsection{CNNs with Attention}
Recent advances in natural language processing have offered the computer vision research community a new option for image classification, the vision transformer (ViT) \cite{VisionTransformerSurvey}. Models utilizing this family of architectures commonly match or exceed the performance of convolutional neural networks \cite{ViT}\cite{DeiT}\cite{PVT}\cite{Swin}. Shao et al. (2021) found that ViTs tend to learn more robust, high-level features, allowing them to ignore the high frequency perturbations of many attack methods. When compared to various versions of ResNet, ShuffleNet, MobileNet, and VGG16, ViT-S/16 was up to 46\% more robust to PGD attacks and 44\% more robust to AutoAttack \cite{RobustTransformers}.
Since their inception, it has been understood that the superior clean-accuracy of vision transformers is related to their reliance on attention \cite{VisionTransformerSurvey}, and recently it has been confirmed that this property is also responsible for the architecture's superior robustness \cite{UnderstandingTransformerRobustness}. Zhou et al. (2022) found that the self-attention of vision transformers tends to promote the saliency of more meaningful (non-spurious) clusters of image regions.
Working from this knowledge, it is natural to wonder whether attention mechanisms can offer additional adversarial robustness to CNNs. Agrawal et al. (2022) concluded that this was not necessarily the case, finding that while their attentive CNNs had slightly superior robustness to PGD attacks on the CIFAR-100 dataset, it fell behind ResNet-50 on CIFAR-10 and Fashion MNIST. Based on these results, they suspected that the adversarial robustness of attentive models on a given dataset may correlate to the number of classes \cite{AttentiveCNNRobustness}.
Additionally, early research on CNNs with attention by Xiao et al. (2015) found that attentive CNNs had significantly higher clean-accuracy for fine-grained classification than vanilla-CNNs. As small details are incredibly salient in medical image datasets, research on fine-grained classification is especially relevant. Based on these findings and those concerning the role of attention in robust-accuracy, it appears possible that the use of attention is a minimal architectural feature that challenges the findings of Tsipras et al. (2018) \cite{RobustVsAccuracy} by improving clean- and robust-accuracy simultaneously.
\section{Method}
\subsection{Datasets}
Benchmark medical image classification tasks were chosen based on their frequent use in similar studies, such as Ma and Niu et al. (2021) \cite{MaNiu} and Finlayson et al. (2019) \cite{Finlayson}. These tasks are diabetic retinopathy detection from fundoscopy images, pneumothorax detection from chest x-rays, and skin lesion classification from dermatoscopy images. Fundoscopy images were sourced from the popular Kaggle diabetic retinopathy detection competition \cite{KaggleDR}. Chest x-rays used were from the ChestX-Ray14 dataset \cite{ChestX-Ray14}.
\subsection{Network Architecture}
Four models were trained for each dataset, a ResNet-50 with and without a soft attention block and an InceptionResNetV2 with and without soft attention. The architecture of each model was based on the TensorFlow \cite{TensorFlow} implementations of ResNet-50 \cite{ResNetImplementation} and InceptionResNetV2 \cite{InceptionImplementation}.
ResNet-50 was instantiated with the top block included, initial weights based on ImageNet \cite{ImageNet} pretraining, and class count set to 1000. The final three layers of the network were removed. In the models without attention, these final layers were replaced with a 2D global average pooling layer followed by a fully-connected layer with softmax activation. In the models with attention, the last three layers were replaced with a soft attention block implemented according to the specifications of Datta et al. (2021). This block produces a $7 \times 7$ feature map which is $2 \times 2$ max pooled and concatenated with a $2 \times 2$ max pooled version of the input to the attention block. This concatenated output is put through ReLU activation, 50\% dropout, and global average pooling before being handed off to the fully connected prediction head with softmax activation \cite{AttentionSkinCancerClassification}.
InceptionResNetV2 \cite{InceptionResNet} was instantiated with the top block included, initial weights based on ImageNet, and classifier activation as softmax. In the models without attention, the final 28 layers were replaced with a ReLU activation and 50\% dropout whose output was flattened and sent to the fully connected prediction head with softmax activation. In the models with attention, these layers were replaced by the soft attention block described above. This output is $2 \times 2$ max pooled and concatenated with a max pooled version of the input to the attention block before being sent through ReLU activation and the softmax prediction head.
All models used the Adam optimizer with learning rate $\eta=0.01$ and $\epsilon=0.1$ and minimized categorical cross-entropy during training. During training, classes were weighted by their inverse frequency relative to the other classes. Each model was trained for a maximum of 300 epochs with early stopping (patience $=$ 40, minimum delta $=$ 0.001), which caused most to stop after 60--90 epochs. In the dermatoscopy task, models only saw 10\% of the dataset during each epoch; this was done to increase the odds of reproducing the results of Datta et al. (2021) \cite{AttentionSkinCancerClassification}.
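A rough sketch of this training configuration, written against the Keras API, is given below; \texttt{model}, \texttt{train\_labels}, \texttt{train\_ds} and \texttt{val\_ds} are placeholders, and the exact form of the inverse-frequency class weights is an assumption.
\begin{verbatim}
import numpy as np
import tensorflow as tf

model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.01, epsilon=0.1),
              loss="categorical_crossentropy", metrics=["accuracy"])

# Inverse-frequency class weights (train_labels: integer class indices).
counts = np.bincount(train_labels)
class_weight = {c: counts.sum() / (len(counts) * n) for c, n in enumerate(counts)}

early_stop = tf.keras.callbacks.EarlyStopping(patience=40, min_delta=0.001)

model.fit(train_ds, validation_data=val_ds, epochs=300,
          class_weight=class_weight, callbacks=[early_stop])
\end{verbatim}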
\subsection{Evaluation and Analysis of Robustness}
After training, the models' clean- and robust-accuracy were evaluated using the FoolBox \cite{FoolBoxPaper}\cite{FoolBoxLibrary} library's implementation of $l_\infty$ projected gradient descent (PGD) attacks. Each set of models was tested at increasing epsilons (perturbation radii). Attacks at each of these perturbation radii were created for every image in the test sets, and unweighted accuracy was calculated for each perturbation radius.
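The evaluation loop might look roughly as follows with version 3 of the FoolBox API; the $[0,1]$ pixel range, the doubling grid of perturbation radii, and the tensor names are assumptions made for illustration.
\begin{verbatim}
import foolbox as fb

fmodel = fb.TensorFlowModel(model, bounds=(0.0, 1.0))
attack = fb.attacks.LinfPGD()
epsilons = [0.00125, 0.0025, 0.005, 0.01, 0.02, 0.04, 0.08, 0.16, 0.32]

# images: batch of test images, labels: integer class labels (TensorFlow tensors).
raw, clipped, success = attack(fmodel, images, labels, epsilons=epsilons)

# Unweighted robust accuracy at each perturbation radius.
robust_accuracy = 1.0 - success.numpy().mean(axis=-1)
for eps, acc in zip(epsilons, robust_accuracy):
    print(f"eps = {eps}: accuracy = {acc:.3f}")
\end{verbatim}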
\input figures/ExamplePerturbations
\section{Results}
\subsection{ResNet-50 Models}
\input figures/ResNetDermPlot
\input figures/ResNetDRPlot
\input figures/ResNetPneumoPlot
For the task of skin lesion classification, the models with attention are clearly superior in terms of robustness (Fig. \ref{DermResNet50Robustness}). Although the model without attention has slightly higher clean-accuracy (0.905 vs. 0.88), even the slightest perturbation ($\epsilon=0.00125$) is able to reduce its accuracy by 24\% and put the model with attention in the lead. By $\epsilon=0.005$ the model without attention is worse than random, and by $\epsilon=0.01$ it is 0\% accurate. It is worth noting that a perturbation of this size (Fig. \ref{example_derm_perturbation}) is far from perceptible to a human observer. Meanwhile, at this perturbation radius, the model with attention remains better than random selection.
The models for diabetic retinopathy detection tell a slightly different story (Fig. \ref{DRResNet50Robustness}). These models were, overall, much more robust, requiring a perturbation of at least $\epsilon=0.01$ to reduce accuracy by 3\%. As in the skin lesion classification models, the model without attention has a slightly higher clean accuracy (0.832 vs. 0.818). It retains this very small lead until $\epsilon=0.02$, at which point the accuracy of both models begins to fall, with the model with attention falling somewhat less dramatically. At the maximum perturbation radius tested ($\epsilon=0.32$) (Fig. \ref{example_dr_perturbation}), the accuracy of the model with attention is over 20 times higher than that of the model without.
The pneumothorax detection models, collectively, were even more robust than the diabetic retinopathy detection models (Fig. \ref{PneumoResNet50Robustness}). After perturbations of $\epsilon=0.32$ (Fig. \ref{example_pneumo_perturbation}), the accuracies of both models met at 0.805. Prior to this, the accuracy of the model without attention hovered 1--2\% above that of the model with attention. However, based on the trajectory of the models' accuracy with respect to $\epsilon$, perturbation radii higher than 0.32 would likely result in the model with attention overtaking the model without.
\subsection{InceptionResNetV2 Models}
\input figures/IRV2DermPlot
\input figures/IRV2DRPlot
\input figures/IRV2PneumoPlot
Experiments with the InceptionResNetV2 architecture paint a slightly different picture. In all tasks, clean-accuracy for the baseline and attentive models was within 1\%. This result is somewhat at odds with those of Datta et al. (2021) \cite{AttentionSkinCancerClassification}, which found soft attention to improve the accuracy of InceptionResNetV2 (on HAM10000 \cite{HAM10000}) by approximately 3\%.
For the skin lesion classification and diabetic retinopathy detection tasks, the baseline and attentive architectures performed near-identically in adversarial scenarios. The diabetic retinopathy task slightly favored the baseline, whereas skin lesion classification slightly favored the attentive model; however, at most, there was a 6\% discrepancy between their accuracies ($\epsilon = 0.00125$).
In the pneumothorax detection task, the model with attention performed significantly worse under adversarial scenarios. The model reached a level of accuracy worse than random selection after the perturbation radius was pushed beyond 0.00125, while the baseline remained usable with a perturbation radius of 0.005.
\section{Discussion}
\subsection{Analysis of Perturbations and Activation Maps}
In an attempt to better understand the principles and behaviors that led the CNNs with attention to perform better in both clean and adversarial scenarios, perturbation difference maps and Grad-CAM activation maps were generated for a select sample of images on each model. Individually, these difference maps and activation maps were unremarkable. However, a number of dataset-specific patterns were noticed throughout the images used for this analysis.
\subsubsection{ResNet-50 Models}
While imperceptible in the final images, the perturbations for dermatoscopic and fundoscopic images were found to carry easily perceptible information about the source image, when the changes were scaled to a range of $[0,255]$. In these cases, the adversarial ``noise'' contained the shape of the lesion or eye. For dermatoscopic images, this shape was in the form of a densely perturbed ring enclosing a sparsely perturbed center (Fig. \ref{LesionShape}). This phenomenon was most visible when $\epsilon=0.00125$. For the fundoscopic images, this shape was represented by a sparsely perturbed ellipse. In the attacks generated for the model with attention, the shape of this ellipse occasionally diverged from the true shape of the eye (Fig. \ref{EyeShapeDivergence}).
\input figures/DRPerturbationDivergence
\input figures/ResNetPneumoWithAttentionActivation
\input figures/ResNetPneumoWithoutAttentionActivation
In the attacks tailored to the fundoscopic image models, perturbations introduced dark splotchy patterns to the eye area (Fig. \ref{CottonWool}). Based on diagnostic literature, these spots could be ``attempts'' by the attack to emulate cotton-wool spots or hemorrhages, key indicators of diabetic retinopathy \cite{WillsEye}. Similar patterns appear in the perturbations for the chest x-ray model with attention (Fig. \ref{PneumoWithAttention}), although these spots are larger and less speckled. While they do not appear to resemble the key indicators of pneumothoraces (air in the pleural space, a misplaced lung edge, less distinct lung markings, etc.) \cite{UnofficialGuide}, it remains unclear whether the attack could be emulating such features indirectly. If so, this behavior could be responsible for medical adversarial images being so easily detected, as observed by Ma and Niu et al. (2021) \cite{MaNiu}.
\input figures/ResNetCottonWool
While the activation maps for the dermatoscopic and fundoscopic image models were invariant to perturbation, the chest x-ray model without attention occasionally produced very different activation maps under adversarial attacks (Fig. \ref{DifferentMaps}). However, this was only true under successful attacks.
For all three datasets, models with attention produced activation maps that were very different from their baseline counterparts; in most cases, the regions of highest activation did not match across models. Additionally, the activation maps for the chest x-ray model with attention were highly focused, typically having a single region of significance (Fig. \ref{DifferentMaps}).
\subsubsection{InceptionResNetV2 Models}
Similar to the ResNet-50-based models, perturbations of dermatoscopic and fundoscopic images carried perceptible information about the source image. Across all three datasets, the perturbations for attentive models were generally higher contrast; in other words, the difference in the level of perturbation between the most- and least-perturbed pixels was greater. Like the ResNet-50 models, the perturbations targeting fundoscopic and chest x-ray images produced intensely perturbed splotches. However, unlike the ResNet-50 models, the splotches on the x-rays appear in both the baseline and attentive models and are less widespread (Fig. \ref{IRV2Comparison}).
The activation maps for all skin lesion classification and diabetic retinopathy detection models were greatly impacted by perturbation, often being nearly inverted, indicating that the model is focusing on the incorrect regions of the image. Perturbations appeared to have less of an effect on the activation maps of the pneumothorax detection models. Like the ResNet-50 models, the activation maps of the attentive models shared no similarity with those of the baselines. Also, similar to its ResNet-50 counterpart, the attention maps of the chest x-ray models were significantly more focused than those of the other models (Fig. \ref{IRV2Comparison}).
\subsection{Conclusion}
The inclusion of a soft-attention block was able to improve the robustness of ResNet-50 on two of the three medical classification tasks tested, while only slightly decreasing clean-accuracy. Additionally, previous experiments have shown the ability of this architectural modification to increase clean accuracy \cite{AttentionSkinCancerClassification}. Though soft-attention was unable to significantly improve the robust accuracy of the InceptionResNetV2 models, its inclusion was not detrimental to clean- or robust-accuracy in two of three tasks.
These results suggest that, in most cases, the inclusion of a soft-attention block is beneficial (or at least not harmful) to the overall performance of medical image classification architectures. This modification has potential to improve accuracy and even greater potential to improve model robustness.
Additionally, the observations above regarding perturbation behavior suggest that PGD may be inadvertently emulating cotton-wool spots, hemorrhages, and lung boundaries. The fact that these patterns are so easily visible in the perturbations may lend further insight into why adversarial images targeting medical classifiers are so easily detected.
Future research into the susceptibility of medical image classifiers to adversarial images may wish to investigate this hypothesis by creating a modified PGD attack algorithm which maximizes the perceived randomness of the perturbation, as this strategy could lead to less detectable attacks.
{\small
\bibliographystyle{ieee_fullname}
When a system is driven adiabatically close to a critical point or through a gapless phase, the divergence of its intrinsic relaxation time leads to a complete breakdown of the adiabatic condition, no matter how slow the driving is. In the vicinity of the gapless point the dynamics switches from an adiabatic to a sudden quench regime. Consequently, as we drive the system from its initial ground state closer and closer to the gapless point, the departure from the adiabatic evolution induces transitions towards excited states which are seen as a proliferation of topological defects. This proliferation ultimately generates a final state which differs significantly from the naively expected ground state associated to the final Hamiltonian.
For a slow driving rate, generalizing to the quantum situation the classical Kibble-Zurek Mechanism (KZM) \cite{KZMclassical}, the density of such defects is expected to be given by a power-law function of the driving rate with an exponent related to the quantum critical point exponents \cite{ZuDoZo05,Da05,Po05,Dz05,BaPo08}.
The scaling argument goes as follows. For a linear ramping through an isolated quantum critical point controlled by the parameter $\epsilon(t)\sim t/\tau_Q$, the system, after following adiabatically the ramping far away from the critical point, will suddenly freeze out when it gets close enough to the critical locus. This happens at a typical time scale $\tau_{KZ}$ which is deduced self-consistently by equating the intrinsic relaxation time with the inverse of the instantaneous energy gap $\Delta(t)$ at $t=\tau_{KZ}$, where the typical scaling of the inverse gap, $\Delta^{-1}(\tau_{KZ})\sim |\epsilon(\tau_{KZ})|^{-z\nu}$, is set by the deviation from the critical point $\epsilon(\tau_{KZ})$ at that time. One obtains $\tau_{KZ}\sim \tau_Q^{z\nu/(1+z\nu)}$ and accordingly the associated typical length scale
$\xi_{KZ}\sim \tau_{KZ}^{1/z}\sim \tau_Q^{\nu/(1+z\nu)}$ which gives an estimate of the defect density as $n_{exc}\sim \xi_{KZ}^{-d}\sim \tau_Q^{-d\nu/(1+z\nu)}$.
This scaling prediction, based on an adiabatic-sudden-adiabatic evolution scenario \cite{Da05}, has been tested numerically and by analytical means in a great variety of models; see \cite{Dziarmaga2010,Polkovnikov2011} for extensive reviews.
In particular, in a number of integrable models, the KZM can be derived exactly from the mapping to a set of independent two-level systems, each of them undergoing Landau-Zener (LZ) anti-crossings \cite{LZ}. Integrating the LZ transition probabilities over all modes leads finally to the KZM prediction for the density of excitations \cite{Dz05,Dziarmaga2010,LZKZM}. Initially introduced for homogeneous systems, the KZM has since been generalized to inhomogeneous situations such as those generated by the release of a power-law confining potential \cite{CoKa10} or by the propagation of a domain wall or critical front \cite{CritFront}.
The KZM has also been used in spatially inhomogeneous situations to describe symmetry-breaking phase transitions in space \cite{KZspace}.
In all these cases the non-equilibrium situation generated by the temporal variation of a Hamiltonian parameter is obtained from an initial equilibrium state (generally the ground state since the system is supposed to be at zero temperature). However, many situations of physical and technological interest have to do with starting states that are intrinsically out-of-equilibrium, for example, situations where there is a macroscopic current flowing through the system as a result of a coupling with two different baths or reservoirs. Starting the driving process from such an excited state will strongly affect the way the defects are generated, especially when the dynamical system comes close to a gapless point. If the initial current density is small enough, one expects to recover a density of defects which is governed by (almost) the usual KZM prediction. However, it is possible that this prediction breaks down completely at large initial current values. It is the aim of this paper to clarify this issue.
In a one-dimensional system, a current-carrying state can be prepared by the contact of the system at its boundaries with reservoirs (or heat baths) at different chemical potentials (or temperatures). In such a case, the system steady state will in general be a statistical mixture, well described close to equilibrium (small gradient of chemical potential or temperature) by a MacLennan-Zubarev density matrix \cite{MacLennan}. Far away from equilibrium, there is no general prediction for the density matrix of such a steady state although some exact results in terms of Matrix Product States have recently been obtained for integrable models \cite{KarevskiNEQ,XXZNEQ}.
Another generic situation where such a current-carrying state emerges asymptotically in time is the case of the relaxation of a one-dimensional system in which initially the left and right halves have been set to different temperatures, or chemical potentials, and then glued together by a local coupling. It was shown on quasi-free systems (such as the quantum $XY$ spin chain) that the steady state reached from this type of initial state by a unitary evolution is effectively described by a generalized Gibbs distribution of the form
$e^{-\bar{\beta} (H - \lambda Y)}$ where $H$ is the Hamiltonian of the chain, $\bar{\beta}=(\beta_L+\beta_R)/2$ the average inverse temperature, $\lambda=(\beta_L-\beta_R)/\bar{\beta}$ the non-equilibrium driving force, and $Y$ a long-ranged conjugated current operator (which describes an infinite set of conserved quantities) \cite{Ogata}. Moreover, in one-dimensional critical systems it has been proven by conformal field theory techniques that the steady-state currents are universal, depending only on the central charges of the theories and on the external temperature or chemical potential bias \cite{DoyonBernard}.
In the zero temperature limit the generalized Gibbs or MacLennan-Zubarev density matrix
$\rho\sim e^{-\beta(H-\lambda Y)}$ reduces to the ground state projector $|GS_Y\rangle\langle GS_Y|$ associated to the effective Hamiltonian $H-\lambda Y$. This is basically the physical justification for the use of the Lagrange multiplier method developed in \cite{Antal97,Antal}.
Specifically, to the Hamiltonian $H$ of the considered system one adds a term $-\lambda \hat{J}$ proportional to a given current operator $\hat{J}$ (associated to a given conserved quantity). The ground state of the effective Hamiltonian $H-\lambda \hat{J}$ will then
carry a non-vanishing mean current $\langle \hat{J}\rangle\neq 0$ at sufficiently large values of $\lambda$.
This effective ground state is then interpreted as a non-equilibrium, current-carrying, state of the original model described by $H$. Such an effective approach is believed to capture, at least locally, the essential features of a system coupled to two different quantum reservoirs at sufficiently low temperature.
In fact, it is known to yield an \emph{exact} description of steady states for energy transport in critical systems, where $\hat{J}$ becomes the total momentum operator \cite{DoyonBernard,Bhaseen15}. This latter observation suggests that analysis of $H-\lambda \hat{J}$ is particularly relevant for studying scaling exponents.
In the following we will apply this strategy to the Ising quantum chain to select a state carrying an energy current. Given that state, we will drive the Ising chain through its quantum critical point and focus on the asymptotic defect generation. Finally, after the presentation of the model and its explicit solution, we will give an LZ calculation leading to the density of defects and extract from it a general argument for the generation of defects in such current-carrying situations.
\section{Model}
We start by considering the transverse Ising Hamiltonian on a one-dimensional lattice of $L$ sites with periodic boundary conditions:
\begin{equation}
H=-\sum_l \sigma_l^x \sigma_{l+1}^x - \frac{h}{2} \sum_l \sigma_l^z.
\end{equation}
Here the $\sigma_l$'s are the usual Pauli spin matrices and $h$ is a field favouring alignment in the $z$ direction. For definiteness we take $h>0$. It is straightforward to show that an energy current in this model can be defined as
\begin{equation}
\hat{J}=\sum_l \hat{J}_l = \frac{h}{4} \sum_l \left( \sigma_l^x \sigma_{l+1}^y - \sigma_l^y \sigma_{l+1}^x \right)
\end{equation}
with current conservation reflected by $[H,\hat{J}]=0$. Following the approach outlined in the introduction, we argue that a current-carrying excited state of $H$ can be generated as the ground state of the effective Hamiltonian
\begin{equation}
H_J= H - \lambda \hat{J} \label{e:master}
\end{equation}
where the $\lambda \hat{J}$ term introduces an interaction of Dzyaloshinskii-Moriya form and, without loss of generality, we assume $\lambda \geq 0$. Such an effective Hamiltonian was previously studied in, e.g.,~\cite{Antal97, Eisler03,Das11b} and, as we recap below, can be brought into diagonal free-fermion form by a series of exact transformations.\footnote{Subtleties relating to the boundary conditions under these transformations are not expected to be relevant in the thermodynamic limit; see, e.g.,~\cite{Lieb61} for more detailed analysis.}
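Since everything below relies on the conservation of $\hat{J}$ and on the ground state of $H_J$ carrying a finite current, both statements are easily checked by brute-force exact diagonalization on a short periodic chain. The following sketch (plain NumPy, $L=8$ sites, $h=3$, $\lambda=2$) is purely illustrative.
\begin{verbatim}
import numpy as np
from functools import reduce

L, h, lam = 8, 3.0, 2.0
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def site_op(op, l):
    # Embed a single-site operator at site l (0-based) in the 2^L-dimensional space.
    ops = [I2] * L
    ops[l] = op
    return reduce(np.kron, ops)

def bond(opA, opB, l):
    return site_op(opA, l) @ site_op(opB, (l + 1) % L)   # periodic boundaries

H = -sum(bond(sx, sx, l) for l in range(L)) \
    - 0.5 * h * sum(site_op(sz, l) for l in range(L))
J = 0.25 * h * sum(bond(sx, sy, l) - bond(sy, sx, l) for l in range(L))

print(np.linalg.norm(H @ J - J @ H))   # numerically zero: the current is conserved

HJ = H - lam * J
vals, vecs = np.linalg.eigh(HJ)
gs = vecs[:, 0]
print((gs.conj() @ J @ gs).real)       # non-zero mean current for lam > lam_c
\end{verbatim}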
Firstly, using the standard Jordan-Wigner transformation~\cite{Jordan28}
\begin{eqnarray}
\sigma_l^+ &= c_l^\dagger \exp\left(i\pi \sum_{j<l} c_j^\dagger c_j\right) \\
\sigma_l^- &= \exp\left(- i\pi \sum_{j<l} c_j^\dagger c_j\right) c_l
\end{eqnarray}
the complete Hamiltonian can be written as
\begin{eqnarray}
H_J =&-\sum_l \left[ \left(c_l^\dagger c_{l+1}^\dagger + c_l^\dagger c_{l+1} - c_l c_{l+1}^\dagger -c_l c_{l+1}\right) \right. \nonumber \\
&+ \left. \frac{h}{2} \left(2 c_l^\dagger c_l -1\right) + \frac{\lambda hi}{2} \left( c_l^\dagger c_{l+1} + c_l c_{l+1}^\dagger \right)\right]
\end{eqnarray}
where the $c_l$'s are fermion operators. The next step is a Fourier transform to wave fermions $\alpha_k$,
\begin{eqnarray}
c_l^\dagger &= \frac{1}{\sqrt{L}} \sum_k \alpha_k e^{ikl} \\
c_l &= \frac{1}{\sqrt{L}} \sum_k \alpha_k^\dagger e^{-ikl},
\end{eqnarray}
which, after some manipulation using fermion anti-commutation rules, yields
\begin{equation}
\fl H_J=\sum_k \left\{ \left[h \lambda \sin k + (h + 2\cos k) \right] \alpha_k^\dagger \alpha_k + i \sin k (\alpha_k^\dagger \alpha_{-k}^\dagger + \alpha_k \alpha_{-k} )+ \frac{h}{2} \right\}. \label{e:Fourier}
\end{equation}
Finally, we perform a Bogoliubov-type similarity transform~\cite{Bogoliubov58}
\begin{eqnarray}
\alpha_k &= \eta_k \cos \omega_k - i \eta_{-k}^\dagger \sin \omega_k \\
\alpha_{-k}^\dagger &= -i \eta_{k} \sin \omega_k + \eta_{-k}^\dagger \cos \omega_k.
\end{eqnarray}
Here $\omega_k$ is assumed odd in $k$, i.e., $\omega_{-k}=-\omega_k$ which implies that terms of the form $\eta_{-k} \eta_{k} h \lambda \sin k \times i \cos \omega_k \sin \omega_k$ (and conjugate) are also odd and cancel in the sum. The off-diagonal terms in the remaining part are cancelled by the choice
\begin{equation}
\tan2\omega_k= \frac{2\sin k}{h+2 \cos k}. \label{e:bog}
\end{equation}
Note that this is exactly the same Bogoliubov angle as in the $\lambda=0$ case which is, in fact, to be expected since the current operator commutes with the Hamiltonian. With the natural choice that $2\omega_k$ is in the same quadrant as the point $(h+2 \cos k, 2\sin k)$ so that $\omega_k$ takes the sign of $k$, the Hamiltonian $H_J$ can then be written in diagonal form as
\begin{equation}
H_J= \sum_k \eta_k^\dagger \eta_k \left( h \lambda \sin k + \sqrt{(h+2\cos k)^2 + 4 \sin^2 k} \right) + \mathrm{const.}
\end{equation}
We observe immediately that $\lambda\neq0$ breaks the $k \leftrightarrow - k$ symmetry of the spectrum; see, e.g.,~\cite{Antal97}.
Following~\cite{Eisler03}, we show in figure~\ref{f:phase}
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{QuenchRev_fig1-crop}
\caption{Phase diagram corresponding to Hamiltonian $H_J$~\eref{e:master}. The mean current is non-zero for $\lambda>\lambda_c(h)=\max(1,2/h)$.
In later sections we use this construction to consider a quench starting from a current-carrying initial state with $h>2$.}
\label{f:phase}
\end{figure}
the resulting phase-diagram in $h-\lambda$ space noting the slightly different parameterization of our Hamiltonian to that in the literature.
As indicated by the spectrum in figure~\ref{f:spec},
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{QuenchRev_fig2-crop}
\caption{Spectrum $\varepsilon_k=h \lambda \sin k + \sqrt{(h+2\cos k)^2 + 4 \sin^2 k}$ for $h=3$, $\lambda=2$ (solid red line) with zero energy line (dashed green) shown for comparison. States between $k_-$ and $k_+$ are filled current-carrying modes.}
\label{f:spec}
\end{figure}
for $\lambda>\lambda_c(h)=\max(1,2/h)$ the ground state of $H_J$ consists of a band of filled current-carrying modes between $k_-$ and $k_+$ where
\begin{equation}
\cos k_\pm = \frac{-2 \pm \sqrt{(\lambda^2 h^2 -4)(\lambda^2-1)}}{h \lambda^2}. \label{e:qpm}
\end{equation}
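As a quick numerical check of this expression, one can scan the single-particle spectrum for its negative-energy window and compare the edges with \eref{e:qpm}; a short sketch for the parameters of figure~\ref{f:spec} follows.
\begin{verbatim}
import numpy as np

h, lam = 3.0, 2.0
k = np.linspace(-np.pi, np.pi, 200001)
eps_k = h * lam * np.sin(k) + np.sqrt((h + 2 * np.cos(k))**2 + 4 * np.sin(k)**2)

# Edges of the negative-energy (filled, current-carrying) band, found numerically
neg = k[eps_k < 0]
print("numerical :", neg.min(), neg.max())

# Analytic k_- and k_+ (both negative here) from the expression above
root = np.sqrt((lam**2 * h**2 - 4) * (lam**2 - 1))
k_minus = -np.arccos((-2 - root) / (h * lam**2))
k_plus = -np.arccos((-2 + root) / (h * lam**2))
print("analytic  :", k_minus, k_plus)
\end{verbatim}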
In concluding this section, we emphasize that the dynamics is unaltered by the addition of the Dzyaloshinskii-Moriya term (since $\lambda \hat{J}$ commutes with $H$) but the effective Hamiltonian $H_J$ (with its $\lambda$-dependent ground state) can be used as a tool to generate a current-carrying initial state. Our programme in the following section is to analyse the creation of defects when the system starts from such a current-carrying initial state
and is then quenched across the critical line which, with our parameterization, is at $h=2$. A related discussion of the properties of the entanglement entropy under a similar quench can be found in~\cite{Das11b}. In fact, there is another critical line at $h=-2$, but as we restrict ourselves throughout to $h>0$, our quench protocol never crosses it.
\section{Calculation of defect production}
\subsection{Details of dynamics in Heisenberg picture}
In order to analyse the defect production it is helpful to follow the seminal paper of Barouch, McCoy and Dresden~\cite{BMD} and consider the dynamics from the Heisenberg viewpoint. We start by writing~\eref{e:Fourier} as a sum over positive modes $p$,
\begin{equation}
H=\sum_{p=1}^{L/2} \tilde{H}_p,
\end{equation}
where
\begin{eqnarray}
\tilde{H}_p =& (h+2\cos k)[ \alpha_k^\dagger \alpha_k + \alpha_{-k}^\dagger \alpha_{-k} ]
+ h\lambda\sin k [ \alpha_k^\dagger \alpha_k - \alpha_{-k}^\dagger \alpha_{-k} ] \nonumber \\
&+ 2i\sin k [\alpha_k^\dagger \alpha_{-k}^\dagger + \alpha_{k} \alpha_{-k} ] + h
\end{eqnarray}
with $k=({2\pi}/{L})p$. With the obvious choice of basis $\{|0\rangle,\alpha_k^\dagger \alpha_{-k}^\dagger |0\rangle, \alpha_k^\dagger |0\rangle, \alpha_{-k}^\dagger |0\rangle\}$ in which $|0\rangle$ is the vacuum state of the $\alpha$ fermions, we then have the $4\times4$ matrix representation
\begin{equation}
\fl
\tilde{H}_p =
\left(
\begin{array}{cccc}
h & 2i \sin k & 0 & 0 \\
- 2i \sin k & 4\cos k +3h & 0 & 0 \\
0 & 0 & h\lambda \sin k+2\cos k + 2h & 0 \\
0 & 0 & 0 & -h\lambda \sin k+2\cos k + 2h \end{array}
\right). \label{e:Hmatrix}
\end{equation}
We note that the current-carrying states $\alpha_k^\dagger |0\rangle$ and $\alpha_{-k}^\dagger |0\rangle$ are completely decoupled from the other states and the diagonal structure of their submatrix indicates the conservation of current for constant field.
The time evolution matrix, $U_p(t)$, in the Heisenberg picture obeys the ($\hbar=1$) equation
\begin{equation}
i \frac{d}{dt} U_p(t) = U_p(t) \tilde{H}_p(t) \label{e:Hberg}
\end{equation}
with initial condition $U_p(t_0)={\mathbb I}$. Here ${\mathbb I}$ is the $4\times 4$ identity matrix and we have now explicitly included the time-dependence in $\tilde{H}_p(t)$ to allow for the time-dependent $h(t)$ which will be of interest in the following. After the system has reached a current-carrying steady state corresponding to a non-zero $\lambda$, we consider a quench in $h(t)$ with the usual $\lambda=0$ dynamics, so that the non-trivial part of~\eref{e:Hberg} reduces to an equation in the $2\times2$ basis $\{|0\rangle,|2\rangle=\alpha_k^\dagger \alpha_{-k}^\dagger |0\rangle\}$:
\begin{equation}
\fl
i \frac{d}{dt}
\left(
\begin{array}{cc}
U_{11}(t) & U_{12}(t) \\
U_{21}(t) & U_{22}(t)
\end{array}
\right)
=
\left(
\begin{array}{cc}
U_{11}(t) & U_{12}(t) \\
U_{21}(t) & U_{22}(t)
\end{array}
\right)
\times
\left(
\begin{array}{cc}
h(t) & 2i\sin k \\
-2i \sin k & 4\cos k + 3h(t)
\end{array}
\right)
\end{equation}
where we have suppressed the $p$ subscript in the matrix elements for notational brevity. This system of coupled first-order differential equations easily yields decoupled second-order ones. For example, we have
\begin{eqnarray}
\fl
i U_{11}'' &= h' U_{11} + h U_{11}' - (2i\sin k) U_{12}' \\
\fl &= h' U_{11} + h U_{11}' + i(2i\sin k)[(2i\sin k) U_{11} + (4\cos k+3h) U_{12}] \\
\fl &= h' U_{11} + h U_{11}' - (4 i\sin^2 k) U_{11} - 2\sin k (4\cos k+3h)\left[ \frac{iU_{11}'-hU_{11}}{-2i\sin k} \right] \\
\fl &= (4\cos k + 4h) U_{11}'+[h'-4i\sin^2 k + i(4 \cos k+3h) h]U_{11}
\end{eqnarray}
with initial condition $U_{11}(t_0)=1$ and $U'_{11}(t_0)=
-ih(t_0)$.
Similarly, one finds
\begin{equation}
iU_{12}'' = (4\cos k + 4h) U_{12}'+[3h'-4i\sin^2 k + i(4 \cos k+3h)h]U_{12}
\end{equation}
with initial condition $U_{12}(t_0)=0$, $U'_{12}(t_0)=2\sin k$. The differential equations for $U_{21}$ and $U_{22}$ are identical to those for $U_{11}$ and $U_{12}$ respectively but with different initial conditions: $U_{21}(t_0)=0$, $U'_{21}(t_0)=-2\sin k$, $U_{22}(t_0)=1$, $U'_{22}(t_0)=-i[4 \cos k + 3 h(t_0)]$.
For certain choices of $h(t)$ the corresponding differential equations can be solved analytically (at least with the aid of a suitable computer algebra package) in terms of extremely tortuous combinations of hypergeometric/special functions. However, our focus here is rather on using properties of the solutions to extract the scaling of the defect production when the system is quenched across the critical line. Specifically, we first calculate the defect density starting from an initial state with $\lambda$ below the critical value (i.e., a steady state with zero current) and then demonstrate how the apparently simple change in the analysis required for a current-carrying initial state can lead to a dramatic change in the results.
We consider quenching the system from above the critical line ($h=2$) at some initial time $t_0$ to below the critical line at some final time $t_f$, according to a given smooth protocol \textcolor{black}{(with the shorthand definitions $h_0:=h(t_0)$ and $h_f:=h(t_f)$ now introduced)}. \textcolor{black}{ For $\lambda<1$, since there is no current, the system starts in the ground state of the $\eta$ fermions which we denote as
\begin{equation}
| \psi(t_0) \rangle = \prod_{k} |\tilde{0}_k(t_0)\rangle,
\end{equation}
where the $|\tilde{0}_k(t_0)\rangle$ are the vacuum states
associated to the diagonal fermions at time $t_0$ such that $\eta_k(\textcolor{black}{h_0})|\tilde{0}_k(t_0)\rangle=0$. }
To calculate the production of defects we need to consider the expectation of $\eta_k^\dagger(\textcolor{black}{h_f})\eta_k(\textcolor{black}{h_f})$ with respect to the time-evolved ground-state $|\psi(t_f)\rangle = U^\dagger(t_f) |\psi(t_0)\rangle$. \textcolor{black}{To understand this, recall that at any time $t$ the system has a field value $h(t)$ and is diagonalized in terms of the operators $\eta_k^\dagger (h(t))\eta_k(h(t))$. The associated (adiabatically expected) ground state is the vacuum state with respect to these fermions (since the single particle spectrum is positive). As a consequence, the number of defects is just the number of fermions on top of the instantaneous vacuum. At very low fields, $h\simeq 0$, the total number of defects $\sum_k \eta_k^\dagger(h)\eta_k(h)$ reduces to the kink number operator $\frac{1}{2}\sum_l (1-\sigma^x_l\sigma^x_{l+1})$, see \cite{Dziarmaga2010,CoKa10} for more explanation.} Since the dynamics is most easily expressed in terms of the $\alpha$ fermions we use the time-dependent version of the inverse Bogoliubov transformation to write
\begin{equation}
\fl
\eta_k^\dagger(\textcolor{black}{h_f})\eta_k(\textcolor{black}{h_f})= -is(\textcolor{black}{h_f})c(\textcolor{black}{h_f}) \alpha_{-k} \alpha_{k} + s(\textcolor{black}{h_f})^2 \alpha_{-k}\alpha_{-k}^\dagger + c(\textcolor{black}{h_f})^2 \alpha_{k}^\dagger \alpha_{k} + is(\textcolor{black}{h_f})c(\textcolor{black}{h_f}) \alpha_{k}^\dagger \alpha_{-k}^\dagger
\end{equation}
where $s(\textcolor{black}{h_f})$ and $c(\textcolor{black}{h_f})$ denote trigonometric functions (sine and cosine respectively) of the Bogoliubov angle evaluated at field $\textcolor{black}{h_f}$. \textcolor{black}{In the remainder of this subsection we suppress all $h_f$ and $t_f$ arguments and indicate explicitly only the initial-time quantities.} Now, working in the $2\times2$ basis introduced above, we have
\begin{equation}
U \eta^\dagger_k \eta_k U^\dagger =
\left(
\begin{array}{cc}
U_{11} & U_{12} \\
U_{21} & U_{22}
\end{array}
\right)
\left(
\begin{array}{cc}
s^2 & isc \\
-isc & c^2
\end{array}
\right)
\left(
\begin{array}{cc}
U_{11}^* & U^*_{21} \\
U^*_{12} & U^*_{22}
\end{array}
\right). \label{e:Umateq}
\end{equation}
Using the Bogoliubov transformation at $t_0$ the expression in~\eref{e:Umateq} can then be re-written in terms of the operators $\eta_k(\textcolor{black}{h_0})$ and $\eta_k^\dagger(\textcolor{black}{h_0})$. It turns out that the only terms coupling $|\tilde{0}(t_0) \rangle$ and $\langle \tilde{0}(t_0)|$ are those proportional to $\eta_ {-k}(\textcolor{black}{h_0}) \eta_ {-k}^\dagger(\textcolor{black}{h_0})$ and the result is
\begin{eqnarray}
\fl \langle \tilde{0}(t_0)| \textcolor{black}{U} \eta_k^\dagger \eta_k \textcolor{black}{U^\dagger}|\tilde{0}(t_0) \rangle& = & c(\textcolor{black}{h_0})^2[s^2|U_{11}|^2+isc U_{11}U_{12}^* - isc U_{11}^* U_{12} + c^2 |U_{12}|^2] \nonumber \\
&\phantom{=}& +is(\textcolor{black}{h_0})c(\textcolor{black}{h_0})[s^2 U_{21}^*U_{11}+isc U_{11}U_{22}^* - isc U_{12} U_{21}^* + c^2 U_{12}U_{22}^*] \nonumber \\
&\phantom{=}& -is(\textcolor{black}{h_0})c(\textcolor{black}{h_0})[s^2 U_{11}^*U_{21}+isc U_{12}^*U_{21} - isc U_{11}^* U_{22} + c^2 U_{12}^*U_{22}] \nonumber \\
&\phantom{=}& +s(\textcolor{black}{h_0})^2[s^2|U_{21}|^2+isc U_{21}U_{22}^* - isc U_{21}^* U_{22} + c^2 |U_{22}|^2] \\
& =& |c(\textcolor{black}{h_0})(s U_{11} - ic U_{12})-is(\textcolor{black}{h_0})(sU_{21}-icU_{22})|^2.
\end{eqnarray}
If $\mathcal{N}$ defects are generated during the quench then, in the thermodynamic limit where the sum over modes becomes an integral, the defect density is finally given by
\begin{equation}
n_{exc} = \lim_{L\to\infty}\frac{\mathcal{N}}{L} = \frac{1}{\pi} \int_0^\pi |c(\textcolor{black}{h_0})(s U_{11} - ic U_{12})-is(\textcolor{black}{h_0})(sU_{21}-icU_{22})|^2 \, dk. \label{e:int}
\end{equation}
Crucially, for $\lambda > 1$ the \emph{only} difference is the initial state -- recall that the dynamics is unchanged. If we start in the current-carrying regime, then states between $k_-$ and $k_+$ are occupied and since the dynamics of these current-carrying modes is decoupled, we argue that defect production (corresponding to the production of fermion pairs with equal and opposite momenta) can only occur for momenta \emph{outside} this range. \textcolor{black}{Repeating the calculations leading to~\eref{e:int}, the only difference is a change in the limits of integration so that the result is replaced by}
\begin{eqnarray}
n_{exc}=&\frac{1}{\pi}\int_0^{-k_+} |c(\textcolor{black}{h_0})(s U_{11} - ic U_{12})-is(\textcolor{black}{h_0})(sU_{21}-icU_{22})|^2 \, dk \nonumber \\
&+
\frac{1}{\pi}\int_{-k_-}^\pi |c(\textcolor{black}{h_0})(s U_{11} - ic U_{12})-is(\textcolor{black}{h_0})(sU_{21}-icU_{22})|^2 \, dk. \label{e:ndefect}
\end{eqnarray}
This integral can be evaluated numerically (see figure~\ref{f:defect})
\begin{figure}
\centering
\includegraphics[width=0.8\textwidth]{QuenchRev_fig3-crop}
\caption{Numerical evaluation of~\eref{e:ndefect} for quench protocol $h(t)=2-t/\tau_Q$ from $t_0=-\tau_Q$ to $t_f=\tau_Q$ with $\tau_Q=10$ (red $+$ symbols) and $\tau_Q=100$ (green $\times$ symbols). The horizontal dashed lines show the small-$\lambda$ scaling prediction of~\eref{e:lamsmall}, the diagonal dashed line is the large-$\lambda$ prediction of~\eref{e:lambig}; the crossover between the regimes is well-described by the error function (solid lines).}
\label{f:defect}
\end{figure}
using the solutions of the differential equations for the matrix elements of $U$ and, significantly, the scaling form can also be predicted by an LZ argument as shown in the next subsection.
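To give an idea of how figure~\ref{f:defect} can be produced, a bare-bones sketch of the numerical procedure is shown below: for each mode the $2\times2$ reduction of \eref{e:Hberg} is integrated with a standard ODE solver and the integrand of \eref{e:ndefect} is accumulated over the unprotected modes. The linear protocol $h(t)=2-t/\tau_Q$ between $t_0=-\tau_Q$ and $t_f=\tau_Q$ matches the figure; the grid sizes and tolerances are arbitrary illustrative choices.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

tau_Q, lam = 10.0, 2.0
t0, tf = -tau_Q, tau_Q
h = lambda t: 2.0 - t / tau_Q                      # h0 = 3, hf = 1
h0, hf = h(t0), h(tf)

def omega(k, field):
    # Bogoliubov angle: tan(2w) = 2 sin k / (field + 2 cos k), quadrant convention of the text
    return 0.5 * np.arctan2(2 * np.sin(k), field + 2 * np.cos(k))

def integrand(k):
    # Integrate i dU/dt = U H_k(t) for one mode (real and imaginary parts stacked)
    def rhs(t, y):
        U = (y[:4] + 1j * y[4:]).reshape(2, 2)
        Hk = np.array([[h(t), 2j * np.sin(k)],
                       [-2j * np.sin(k), 4 * np.cos(k) + 3 * h(t)]])
        dU = -1j * (U @ Hk)
        return np.concatenate([dU.real.ravel(), dU.imag.ravel()])
    y0 = np.concatenate([np.eye(2).ravel(), np.zeros(4)])
    sol = solve_ivp(rhs, (t0, tf), y0, rtol=1e-9, atol=1e-11)
    U = (sol.y[:4, -1] + 1j * sol.y[4:, -1]).reshape(2, 2)
    s0, c0 = np.sin(omega(k, h0)), np.cos(omega(k, h0))
    s, c = np.sin(omega(k, hf)), np.cos(omega(k, hf))
    amp = c0 * (s * U[0, 0] - 1j * c * U[0, 1]) \
          - 1j * s0 * (s * U[1, 0] - 1j * c * U[1, 1])
    return abs(amp) ** 2

# Protected band [k_-, k_+] of the current-carrying initial state (lambda > lambda_c)
root = np.sqrt((lam**2 * h0**2 - 4) * (lam**2 - 1))
k_minus = -np.arccos((-2 - root) / (h0 * lam**2))
k_plus = -np.arccos((-2 + root) / (h0 * lam**2))

k1 = np.linspace(0.0, -k_plus, 40)
k2 = np.linspace(-k_minus, np.pi, 40)
n_exc = (np.trapz([integrand(k) for k in k1], k1)
         + np.trapz([integrand(k) for k in k2], k2)) / np.pi
print(n_exc)
\end{verbatim}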
\subsection{Mapping to a set of Landau-Zener transitions}
If one restricts the matrices $\tilde{H}_p$ (\ref{e:Hmatrix}) associated to the Ising Hamiltonian $H$ to the non-trivial sector $\{|0\rangle,|2\rangle\}$
the system maps to a set of independent two-level systems each described by the Hamiltonian
\begin{equation}
H(k,t)=[2\cos k + 2h(t)] \; {\mathbb I} - [\epsilon(t) +b(k)]\; \sigma^z + \Delta(k)\; \sigma^y
\end{equation}
where the $\sigma$'s are Pauli matrices as before, ${\mathbb I}$ is here the $2\times 2$ identity matrix, and the coefficients are given by $\epsilon(t)=h(t)-2$, $b(k)=4\cos^2(k/2)$, and $\Delta(k)=-2\sin k$.
The instantaneous eigenvalues are
$[2\cos k + 2h(t)]\pm \sqrt{[\epsilon(t) +b(k)]^2+ \Delta^2(k)}$ associated to the instantaneous eigenvectors $|\pm(t)\rangle$.
The dephasing factor $b(k)$ can be absorbed by a redefinition of a local time, $t\rightarrow t_k$, for each mode $k$ \cite{Dziarmaga2010}.
For a driving $\epsilon(t)=-t/\tau_Q$ starting deep in the disordered phase ($h \gg 2$) and ending deep in the ordered phase ($h \simeq 0$),
the main contribution to the excitation density comes from the modes close to the Fermi point $k_F=\pi$ with an excitation probability
\cite{Dziarmaga2010,LZ}
\begin{equation}
p_k=e^{-\pi \tau_Q \Delta^2(k)}= e^{-4\pi \tau_Q \sin^2 k}\simeq e^{-4\pi \tau_Q |k-\pi|^2} \; .
\end{equation}
This excitation probability is substantial only for those modes where $\tau_Q \Delta^2(k)\ll 1$, that is, in a region around the Fermi point of size $|k-k_F|\sim \tau_Q^{-1/2}$ which shrinks towards $k_F$ as the ramping gets slower.
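As an illustrative cross-check of this Landau-Zener estimate, the two-level problem defined above can also be integrated numerically, mode by mode. The short Python sketch below (the time window, the value of $\tau_Q$ and the tolerances are arbitrary choices made only for illustration) propagates the Schr\"odinger equation for $H(k,t)$, dropping the identity part which only contributes a global phase, and compares the final excitation probability with $e^{-4\pi \tau_Q \sin^2 k}$; small deviations are expected from the finite time window.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

sz = np.array([[1, 0], [0, -1]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)

def H(k, t, tau_Q):
    # two-level Hamiltonian above, without the identity part (global phase)
    eps = -t / tau_Q                      # epsilon(t) = h(t) - 2 = -t/tau_Q
    b = 4.0 * np.cos(k / 2.0) ** 2
    Delta = -2.0 * np.sin(k)
    return -(eps + b) * sz + Delta * sy

def p_exc(k, tau_Q, t0, tf):
    # start in the instantaneous ground state at t0, propagate (hbar = 1),
    # and project on the instantaneous excited state at tf
    _, v0 = np.linalg.eigh(H(k, t0, tau_Q))
    _, vf = np.linalg.eigh(H(k, tf, tau_Q))
    psi0 = v0[:, 0].astype(complex)       # eigh sorts eigenvalues in ascending order
    sol = solve_ivp(lambda t, psi: -1j * (H(k, t, tau_Q) @ psi),
                    (t0, tf), psi0, rtol=1e-8, atol=1e-10)
    return abs(np.vdot(vf[:, 1], sol.y[:, -1])) ** 2

tau_Q = 10.0
t0, tf = -10.0 * tau_Q, 2.0 * tau_Q       # h(t) goes from 12 (disordered) down to 0
for k in [np.pi - 0.05, np.pi - 0.1, np.pi - 0.2]:
    print(k, p_exc(k, tau_Q, t0, tf),
          np.exp(-4.0 * np.pi * tau_Q * np.sin(k) ** 2))
\end{verbatim}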
Since the initial state from which we start the ramping carries a non-vanishing current generated by the population of the negative modes within a region $ [k_-,k_+]$ of the first Brillouin zone, the modes $k\in [k_-,k_+]$ are dynamically protected thanks to the diagonal dynamics (\ref{e:Hmatrix}). No excitation pairs with momenta $\{+k,-k\}$, where $k\in [k _-,k_+]$, can be generated in the course of time. Consequently, the defect density from the current-carrying initial state is given by
\begin{equation}
n_{exc}=\frac{1}{\pi}\int_0^{-k_+} p_k\; dk + \frac{1}{\pi} \int_{-k_-}^\pi p_k \; dk \; .
\end{equation}
Since our initial state has a very large value of the transverse field $h$, the boundary mode $-k_+$ is always far away from the Fermi point $k_F=\pi$ and, since the main contribution to $p_k$ comes from the region $|k-k_F|\sim \tau_Q^{-1/2}$, one can simply omit the first integral and write
\begin{equation}
n_{exc}\simeq\frac{1}{\pi} \int_{-k_-}^\pi p_k \; dk \; .
\end{equation}
Plugging $p_k\simeq e^{-4\pi \tau_Q |k-\pi|^2}$ in the previous equation, one finally obtains the excitation density
\begin{equation}
n_{exc}\simeq \frac{1}{4\pi \tau_Q^{1/2}} \mathrm{erf}[\sqrt{4\pi \tau_Q} (\pi+k_-)]\; ,
\end{equation}
where $\mathrm{erf}(z)$ is the error function.
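For completeness, the Gaussian integral leading to this expression is carried out with the substitution $u=\pi-k$ and the identity $\int_0^a e^{-\alpha u^2}\,du=\frac{1}{2}\sqrt{\pi/\alpha}\;\mathrm{erf}(\sqrt{\alpha}\,a)$, here with $\alpha=4\pi\tau_Q$:
\[
\frac{1}{\pi}\int_{-k_-}^{\pi} e^{-4\pi \tau_Q |k-\pi|^2}\, dk
=\frac{1}{\pi}\int_{0}^{\pi+k_-} e^{-4\pi \tau_Q u^2}\, du
=\frac{1}{4\pi \tau_Q^{1/2}}\,\mathrm{erf}\!\left[\sqrt{4\pi \tau_Q}\,(\pi+k_-)\right].
\]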
Analysis of the error function presents two limiting cases:
\begin{itemize}
\item If $(8 \pi \tau_Q)^{-1/2} \ll \pi + k_-$, then we have
\begin{equation}
n_{exc} \simeq \frac{1}{4\pi} \tau_Q^{-1/2}, \label{e:lamsmall}
\end{equation}
in agreement with the standard Kibble-Zurek result (remembering here that $\nu=z=d=1$). Note that, in this limit of slow quenching (large $\tau_Q$, small $\lambda$), there is no dependence on $\lambda$.
\item If $(8 \pi \tau_Q)^{-1/2} \gg \pi + k_-$, then
\begin{equation}
n_{exc} \simeq \frac{1}{4\pi \tau_Q^{1/2}} \frac{2}{\sqrt{\pi}}\sqrt{4\pi\tau_Q}(\pi+k_-)
\simeq \frac{1}{\pi} (\pi+k_-).
\end{equation}
Furthermore, small $\pi + k_-$ corresponds to $\lambda$ large and expanding (\ref{e:qpm}) in this limit gives
\begin{equation}
\pi+k_- \simeq \frac{1}{\lambda}\left(1-\frac{2}{\textcolor{black}{h_0}}\right)
\end{equation}
where $\textcolor{black}{h_0}$ is the value of the field at the start of the quench. (Note that, since we start deep in the disordered phase, $\textcolor{black}{h_0}>2$ and the term in the bracket is guaranteed to be positive.)
Hence, we finally get
\begin{equation}
n_{exc} \simeq \frac{1}{\pi\lambda}\left(1-\frac{2}{\textcolor{black}{h_0}}\right). \label{e:lambig}
\end{equation}
We see that, in this large $\lambda$ limit, there is no dependence on $\tau_Q$ but the initial field $\textcolor{black}{h_0}$ does play a role since it controls the initial current. Of course, for a quench starting far away from the critical line we have $\textcolor{black}{h_0} \to \infty$ and this term drops out.
\end{itemize}
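The crossover between these two regimes can be visualised directly from the error-function expression. The Python sketch below evaluates it using, for all values of $\lambda$, the large-$\lambda$ expansion of $\pi+k_-$ derived above (so the small-$\lambda$ end is only indicative) and $h_0=3$, which corresponds to the protocol $h(t)=2-t/\tau_Q$ started at $t_0=-\tau_Q$ of figure~\ref{f:defect}; the output interpolates between the Kibble-Zurek plateau of~\eref{e:lamsmall} and the $1/\lambda$ behaviour of~\eref{e:lambig}.
\begin{verbatim}
import numpy as np
from scipy.special import erf

def n_exc(lam, tau_Q, h0):
    # erf expression above, with pi + k_- ~ (1 - 2/h0)/lambda
    width = (1.0 - 2.0 / h0) / lam
    return erf(np.sqrt(4.0 * np.pi * tau_Q) * width) / (4.0 * np.pi * np.sqrt(tau_Q))

tau_Q, h0 = 10.0, 3.0
kz_plateau = 1.0 / (4.0 * np.pi * np.sqrt(tau_Q))
for lam in [1.0, 3.0, 10.0, 30.0, 100.0]:
    print(lam, n_exc(lam, tau_Q, h0),
          kz_plateau, (1.0 - 2.0 / h0) / (np.pi * lam))
\end{verbatim}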
\section{Summary and outlook}
To impose a finite current on the system, we have to populate the vacuum state with modes (current carriers) within a finite set $I_J$ such that the new state is given by $\prod_{k\in I_J} \eta^\dagger_k |0\rangle$. Now, if the set $I_J$ has no significant overlap with the critical domain $|k-k_F|\sim \tau_Q^{-1/2} $, the excitation density $n_{exc}$ will be given by the KZM prediction $n_{exc}\sim \tau_Q^{-1/2}$ (with $\nu=z=d=1$ for our model). On the contrary, when there is a significant overlap between the two domains, which is exactly what happens at high currents, the excitation density will be lowered.
At very high current values, the scaling of the defect production is given by $n_{exc}\sim \lambda^{-1}$. The reason is that, in this case, the Lagrange multiplier has to be very high, $\lambda \gg \|H\|$, and the dominant contribution to the effective Hamiltonian $H-\lambda \textcolor{black}{\hat J}$ comes from the current term, the Hamiltonian $H$ itself being a small perturbation.\textcolor{black}{\footnote{\textcolor{black}{The work of~\cite{DoyonBernard} suggests that such large $\lambda$ values can indeed be physically relevant, even for low temperatures.}}} Consequently, the effective spectrum has a dominant contribution of the form $h \lambda \sin k$ and, in the absence of $H$, the ground state is given by occupying all negative modes $k\in [-\pi, 0]$ so there is no possibility of exciting the system: all modes are protected. Now, when we add the Hamiltonian $H$ itself, it will slightly shift the single-particle spectrum by a term $h$ (for $h\gg h_c=2$) resulting in $h \lambda \sin k +h$. Modes close to the Fermi point $k_F=-\pi$ will be unoccupied up to the point $k_-$ where $h \lambda \sin k_-+h=0$, that is, to leading order up to $k_-=-\pi+1/\lambda$. We are then left close to $k_F=-\pi$ with an unoccupied domain of size $|k-k_F|=1/\lambda$ that can be excited during the quench, leading finally to $n_{exc}=\frac{1}{\pi} \int_{\pi -1/\lambda}^\pi p_k dk= \frac{1}{\pi} \int_{\pi -1/\lambda}^\pi dk= \frac{1}{\pi\lambda}$. At lower fields, the same argument (linearizing the dispersion relation close to $k_F=-\pi$) will lead to $k_-\simeq -\pi+ \frac{1}{\lambda}\left(1-\frac{2}{\textcolor{black}{h_0}}\right)$ and consequently to
$n_{exc}\simeq \frac{1}{\pi\lambda}\left(1-\frac{2}{\textcolor{black}{h_0}}\right)$.
In summary, we have studied the defect generation when driving an Ising quantum chain through its critical point from an initial state that carries a net energy current. The current value is imposed via a Lagrange multiplier field.
We have shown that low values of the energy current do not affect the density of defects that are generated during the crossing of the critical point. In contrast, at large enough currents the current carriers are dynamically protected leading to a significant suppression of the defect generation.
It would be very interesting to test the influence of an initial current of particles on the generation of topological defects in models which do not reduce to a set of free particles and where, consequently, the generation of defects does not reduce to a Landau-Zener problem. A potential candidate to probe that influence is the non-integrable Bose-Hubbard model with a slow driving from the Mott insulator phase to the superfluid phase. In the zero-current situation there are already some analytical KZM predictions based on the fact that the $d$-dimensional Mott insulator to superfluid transition belongs to the universality class of the $(d+1)$-dimensional $XY$ spin model \cite{Dziarmaga2010}. The possibility of extending these KZM predictions to the case of a current-carrying initial state, with $(d+1)$-dimensional classical analogue, is currently under consideration.
\ack
RJH is grateful for the hospitality of the Groupe de Physique Statistique in the Institut Jean Lamour, Nancy where this work was carried out.
The authors would like to thank Mario Collura for useful discussions.
\section*{References}
\section{Introduction and results}
Let $\D$ stand for the open unit disk (centered at $0$) in the complex plane, $\T$ for its boundary, and $m$ for the normalized arclength measure on $\T$. Recall, further, that the {\it Hardy space} $H^p=H^p(\D)$ with $0<p<\infty$ is defined as the set of all holomorphic functions $f$ on $\D$ that satisfy
$$\sup_{0<r<1}\int_\T|f(r\ze)|^pdm(\ze)<\infty,$$
while $H^\infty=H^\infty(\D)$ denotes the space of bounded holomorphic functions. As usual, we identify elements of $H^p$ with their boundary functions (living almost everywhere on $\T$) and thus embed $H^p$ isometrically into $L^p=L^p(\T,m)$. The underlying theory can be found in \cite[Chapter II]{G}.
\par Now suppose $\th$ is an {\it inner function} on $\D$; this means, by definition, that $\th\in H^\infty$ and $|\th|=1$ a.e. on $\T$. We then introduce the associated {\it star-invariant} (or {\it model}) {\it subspace} $K^p_\th$, this time with $1\le p\le\infty$, by putting
\begin{equation}\label{eqn:defnkpth}
K^p_\th:=H^p\cap\th\,\ov{H^p_0},
\end{equation}
where $H^p_0:=\{f\in H^p:f(0)=0\}$ and the bar denotes complex conjugation. The term \lq\lq star-invariant" means invariant under the backward shift operator $$f\mapsto\f{f-f(0)}z,$$
and it is known (cf. \cite{DSS}) that the general form of a proper closed star-invariant subspace in $H^p$, with $1\le p<\infty$, is indeed given by \eqref{eqn:defnkpth} for some inner function $\th$. The alternative term \lq\lq model subspace" is due to the appearance of these subspaces in the Sz.-Nagy--Foia\c{s} operator model; see \cite{N}. It follows from the definition that each $K^p_\th$ carries a natural antilinear isometry (or involution) given by $f\mapsto\widetilde f$, where
\begin{equation}\label{eqn:ftilde}
\widetilde f:=\ov z\ov f\th.
\end{equation}
\par When $p=2$, we can equivalently define $K^2_\th$ as the orthogonal complement of the shift-invariant subspace $\th H^2$ in $H^2$. Moreover, letting $P_\th$ denote the orthogonal projection from $H^2$ onto $K^2_\th$, one easily verifies that
$$P_\th f=f-\th P_+(\ov\th f)=\th P_-(\ov\th f)$$
for $f\in H^2$, where $P_+$ and $P_-$ are the orthogonal projections from $L^2$ onto $H^2$ and $\ov{H^2_0}$, respectively. Now, we know from the M. Riesz theorem (see \cite[Chapter III]{G}) that $P_+$ and $P_-$ extend -- or restrict -- to every $L^p$ space with $1<p<\infty$ as bounded operators (called the {\it Riesz projections}), their respective ranges being $H^p$ and $\ov{H^p_0}$. It follows then that $P_\th$ admits a bounded extension -- or restriction -- to every $H^p$ with $1<p<\infty$, and projects the latter space onto $K^p_\th$ parallel to $\th H^p$. Accordingly, we arrive at the direct sum decomposition
\begin{equation}\label{eqn:dirsumth}
H^p=K^p_\th\oplus\th H^p,\qquad1<p<\infty,
\end{equation}
with orthogonality for $p=2$.
\par We shall make use of \eqref{eqn:dirsumth} in a special situation, which we now describe. Recall that, given a sequence $\{a_j\}=\{a_j\}_{j\in\N}$ of points in $\D$ with $\sum_j(1-|a_j|)<\infty$, the associated {\it Blaschke product} $B$ is defined by
\begin{equation}\label{eqn:blapro}
B(z)=B_{\{a_j\}}(z):=\prod_jb_j(z),\quad\text{\rm where}\quad b_j(z):=\f{|a_j|}{a_j}\f{a_j-z}{1-\ov a_jz},
\end{equation}
with the convention that $|a_j|/a_j=-1$ if $a_j=0$. Then $B$ is an inner function that vanishes precisely at the $a_j$'s; see \cite[Chapter II]{G}. Recall also that a sequence $\{a_j\}$ in $\D$ is called an {\it interpolating sequence} if
\begin{equation}\label{eqn:trcarl}
H^\infty\big|_{\{a_j\}}=\ell^\infty.
\end{equation}
(Here and below, given a function space $\mathcal X$ on $\D$, we denote by $\mathcal X\big|_{\{a_j\}}$ the set of those sequences $\{w_j\}$ in $\C$ for which the interpolation problem $f(a_j)=w_j$, $j\in\N$, has a solution $f\in\mathcal X$.) Carleson's celebrated theorem (see \cite{Carl} or \cite[Chapter VII]{G}) characterizes the interpolating sequences $\{a_j\}$ in terms of the corresponding Blaschke product \eqref{eqn:blapro} or rather its subproducts $B_j:=B/b_j$. Namely, it asserts that $\{a_j\}$ is interpolating if and only if
\begin{equation}\label{eqn:carlcond}
\inf_j|B_j(a_j)|>0,
\end{equation}
a condition that can be further rephrased as
\begin{equation}\label{eqn:intbla}
\inf_j|B'(a_j)|\,(1-|a_j|)>0.
\end{equation}
A Blaschke product $B=B_{\{a_j\}}$ satisfying \eqref{eqn:intbla} is said to be an {\it interpolating Blaschke product}.
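\par For instance, it is classical that any increasing sequence $0\le a_1<a_2<\dots<1$ satisfying $1-a_{j+1}\le c\,(1-a_j)$ for some fixed $c<1$ obeys \eqref{eqn:carlcond}, and is therefore interpolating (cf. \cite[Chapter VII]{G}); in particular, this applies to the sequence $a_j=1-2^{-j}$, which we shall use again later on.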
\par When $0<p<\infty$, we have a similar \lq\lq free interpolation" phenomenon in $H^p$. This time, \eqref{eqn:trcarl} gets replaced by
\begin{equation}\label{eqn:trshsh}
H^p\big|_{\{a_j\}}=\left\{\{w_j\}:\,\sum_j|w_j|^p(1-|a_j|)<\infty\right\},
\end{equation}
and results of \cite{Kab, SS} tell us that this happens, for some (equivalently, for every) $p\in(0,\infty)$, if and only if $\{a_j\}$ obeys the Carleson condition \eqref{eqn:carlcond}. Now, in the case $1<p<\infty$, we may apply \eqref{eqn:dirsumth} with $\th=B\left(=B_{\{a_j\}}\right)$, and restricting both sides to $\{a_j\}$ yields
\begin{equation}\label{eqn:equaltraces}
H^p\big|_{\{a_j\}}=K^p_B\big|_{\{a_j\}}.
\end{equation}
Finally, we combine \eqref{eqn:trshsh} and \eqref{eqn:equaltraces} to conclude that
\begin{equation}\label{eqn:tracekpb}
K^p_B\big|_{\{a_j\}}=\left\{\{w_j\}:\,\sum_j|w_j|^p(1-|a_j|)<\infty\right\},\qquad1<p<\infty,
\end{equation}
whenever $B$ is an interpolating Blaschke product with zeros $\{a_j\}$.
\par For $p=\infty$, however, no such thing is true, since the endpoint version of \eqref{eqn:equaltraces} breaks down in general. A natural problem that arises is, therefore, to find out what happens to \eqref{eqn:trcarl} if we replace $H^\infty$ by its star-invariant subspace
$$K^\infty_B:=H^\infty\cap B\ov{H^\infty_0},$$
always assuming that $B=B_{\{a_j\}}$ is an interpolating Blaschke product.
\par It does happen sometimes that
\begin{equation}\label{eqn:maxtrace}
K^\infty_B\big|_{\{a_j\}}=\ell^\infty,
\end{equation}
but typically, and in \lq\lq most" cases, our trace space will be essentially smaller. As a matter of fact, \eqref{eqn:maxtrace} holds if and only if $\{a_j\}$ is an interpolating sequence that satisfies the so-called {\it (uniform) Frostman condition}:
\begin{equation}\label{eqn:ufc}
\sup\left\{\sum_j\f{1-|a_j|}{|\ze-a_j|}:\,\ze\in\T\right\}<\infty.
\end{equation}
(This result is a fairly straightforward consequence of Hru\v s\v cev and Vinogradov's work in \cite{HV}; see also \cite[Section 3]{C2} for details.) In particular, \eqref{eqn:ufc} implies that the $a_j$'s may only approach the unit circle in a suitably tangential manner.
\par On the other hand, it was shown by Vinogradov in \cite{V} that whenever $\{a_j\}$ is an interpolating sequence, one has
$$K^\infty_{B^2}\big|_{\{a_j\}}=\ell^\infty,$$
where again $B=B_{\{a_j\}}$. Note, however, that $K^\infty_{B^2}$ is substantially larger than $K^\infty_B$.
\par We now describe the trace space $K^\infty_B\big|_{\{a_j\}}$ in the general case.
\begin{thm}\label{thm:interpolthm} Suppose that $\{w_j\}\in\ell^\infty$ and $B$ is an interpolating Blaschke product with zeros $\{a_j\}$. In order that
\begin{equation}\label{eqn:wbeltrace}
\{w_j\}\in K^\infty_B\big|_{\{a_j\}},
\end{equation}
it is necessary and sufficient that
\begin{equation}\label{eqn:crucond}
\sup_k\left|\sum_j\f{w_j}{B'(a_j)\cdot(1-a_j\ov a_k)}\right|<\infty.
\end{equation}
\end{thm}
An equivalent formulation is as follows.
\begin{thm}\label{thm:equivform} Suppose $h\in H^\infty$ and $B$ is an interpolating Blaschke product with zeros $\{a_j\}$. Then $h\in K^\infty_B+BH^\infty$ if and only if
$$\sup_k\left|\sum_j\f{h(a_j)}{B'(a_j)\cdot(1-a_j\ov a_k)}\right|<\infty.$$
\end{thm}
To deduce Theorem \ref{thm:equivform} from Theorem \ref{thm:interpolthm} and vice versa, it suffices to observe that a given $H^\infty$ function will be in $K^\infty_B+BH^\infty$ if and only if its values at the $a_j$'s can be interpolated by those of a function in $K^\infty_B$.
\par Before stating our next result, we need to recall the notion of an ideal sequence space. A vector space $S$ consisting of sequences (of complex numbers) is said to be {\it ideal} if, whenever $\{x_j\}\in S$ and $\{y_j\}$ is a sequence with $|y_j|\le|x_j|$ for all $j$, we necessarily have $\{y_j\}\in S$. Roughly speaking, this property tells us that the elements $\{x_j\}$ of $S$ are somehow describable by means of a \lq\lq size condition" on $|x_j|$.
\par The trace space in \eqref{eqn:tracekpb} is obviously ideal, but its endpoint version $K^\infty_B\big|_{\{a_j\}}$ can no longer be expected to have this nice feature. Of course, the latter space {\it will} be ideal for the \lq\lq few" interpolating sequences $\{a_j\}$ that obey the Frostman condition \eqref{eqn:ufc}, in which case we have \eqref{eqn:maxtrace}, but other choices of $\{a_j\}$ make things different. The difference becomes especially dramatic in the \lq\lq anti-Frostman" situation where the $a_j$'s are taken to lie on a radius. In our next theorem, we furnish a {\it universal} ideal sequence space, namely $\ell^1$, that is contained in every trace space $K^\infty_B\big|_{\{a_j\}}$, and we show (by examining the radial case) that no larger ideal space would do in general.
\begin{thm}\label{thm:elloneideal} Suppose $B$ is an interpolating Blaschke product with zeros $\{a_j\}$.
\par{\rm (a)} We have
\begin{equation}\label{eqn:ellonetrace}
\ell^1\subset K^\infty_B\big|_{\{a_j\}}.
\end{equation}
\par{\rm (b)} If $0\le a_1<a_2<\dots<1$, and if $X$ is an ideal sequence space with
\begin{equation}\label{eqn:xtrace}
X\subset K^\infty_B\big|_{\{a_j\}},
\end{equation}
then $X\subset\ell^1$.
\end{thm}
\par At the same time, it is worth mentioning that the trace space $K^\infty_B\big|_{\{a_j\}}$ always contains non-$\ell^1$ sequences (assuming, as we do, that $B=B_{\{a_j\}}$ is an {\it infinite} Blaschke product). An example is provided by the constant sequence consisting of $1$'s, since these are the values of the function $1-\ov{B(0)}B\,\left(\in K^\infty_B\right)$ at the $a_j$'s.
\par We shall be also concerned with uniform smallness conditions on the values $w_j$ that guarantee \eqref{eqn:wbeltrace}, once the $a_j$'s are given. To be more precise, we fix a (reasonable) positive function $\ph$ on the interval $(0,1]$ and write $X_\ph(\{a_j\})$ for the set of all sequences $\{w_j\}\in\ell^\infty$ that satisfy
\begin{equation}\label{eqn:sizewj}
\sup_j\f{|w_j|}{\ph(1-|a_j|)}<\infty.
\end{equation}
Our aim is then to determine whether
\begin{equation}\label{eqn:xphintrace}
X_\ph(\{a_j\})\subset K^\infty_B\big|_{\{a_j\}}
\end{equation}
with $B=B_{\{a_j\}}$, for every interpolating sequence ${\{a_j\}}$. This will be settled by Theorem \ref{thm:phiideal} below, but first we have to describe the class of $\ph$'s we have in mind.
\par It will be assumed that $\ph:(0,1]\to(0,\infty)$ is a nondecreasing continuous function for which $t\mapsto\ph(t)/t$ is nonincreasing; a function $\ph$ with these properties will be called a {\it majorant}. Also, following the terminology of \cite{J}, we say that $\ph$ is {\it of upper type less than $1$} if there are constants $C>0$ and $\ga\in[0,1)$ such that
\begin{equation}\label{eqn:uppertype}
\ph(st)\le Cs^\ga\ph(t)
\end{equation}
whenever $s\ge1$ and $0<t\le1/s$. It should be noted that for $\ga=1$, \eqref{eqn:uppertype} is automatic (with $C=1$) for every majorant $\ph$.
\begin{thm}\label{thm:phiideal} {\rm (i)} If $\ph$ is a majorant of upper type less than $1$ satisfying
\begin{equation}\label{eqn:intconv}
\int_0^1\f{\ph(t)}t\,dt<\infty,
\end{equation}
then, for every interpolating Blaschke product $B=B_{\{a_j\}}$, we have \eqref{eqn:xphintrace}.
\par{\rm (ii)} Conversely, if $\ph$ is a majorant with the property that \eqref{eqn:xphintrace} is valid for each interpolating Blaschke product $B=B_{\{a_j\}}$, then \eqref{eqn:intconv} holds true.
\end{thm}
In fact, a glance at the proof of part (ii) will reveal that it is enough to assume \eqref{eqn:xphintrace} for a {\it single} interpolating sequence $\{a_j\}$, namely, for $a_j=1-2^{-j}$; this alone will imply \eqref{eqn:intconv}.
\par As examples of majorants $\ph$ that are of upper type less than $1$ and satisfy \eqref{eqn:intconv}, one may consider
$$\ph_1(t)=t^\al,\quad\ph_2(t)=\left(\log\f2t\right)^{-1-\eps},\quad \ph_3(t)=\left(\log\f3t\right)^{-1}\left(\log\log\f3t\right)^{-1-\eps}$$
(with $0<\al<1$ and $\eps>0$), and so on.
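\par For instance, for $\ph_2$ condition \eqref{eqn:intconv} is verified at once by the substitution $u=\log(2/t)$:
$$\int_0^1\f{\ph_2(t)}t\,dt=\int_{\log 2}^{\infty}u^{-1-\eps}\,du=\f{(\log 2)^{-\eps}}{\eps}<\infty,$$
and similar changes of variable dispose of $\ph_1$ and $\ph_3$.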
\par While we are only concerned with the traces of functions from $K^\infty_B$ on $\{a_j\}$, which is the zero sequence of $B$, an obvious generalization would consist in restricting our functions (or those from $K^\infty_\th$, with $\th$ inner) to an arbitrary interpolating sequence in $\D$. In this generality, however, even the case of $K^p_\th$ with $1<p<\infty$ (or with $p=2$) cannot be viewed as completely understood. At least, the existing results (see \cite{AH, DPAMS, HNP} for some of these) appear to be far less clear-cut than in the current setting. By contrast, the difficulty we have to face here is entirely due to the endpoint position of $K^\infty_B$ within the $K^p_B$ scale, the only enemy being the failure of the M. Riesz projection theorem.
\par The other endpoint case, where $p=1$, is no less wicked and we briefly discuss it here. For an interpolating sequence $\{a_k\}$, the values $w_k=f(a_k)$ of a function $f\in H^1$ satisfy
\begin{equation}\label{eqn:carlhone}
\sum_k|w_k|\,(1-|a_k|)<\infty
\end{equation}
and, by virtue of the Shapiro--Shields theorem \cite{SS}, this property characterizes the sequences $\{w_k\}$ in $H^1\big|_{\{a_k\}}$. The latter set will, however, be strictly larger than $K^1_B\big|_{\{a_k\}}$ with $B=B_{\{a_k\}}$, unless we are dealing with finitely many $a_k$'s. (Equivalently, we never have $K^1_B\oplus BH^1=H^1$ except when $B$ is a finite Blaschke product. This can be deduced from \cite[Theorem 3.8]{Steg}.) Thus, in the nontrivial cases, \eqref{eqn:carlhone} is necessary but not sufficient for $\{w_k\}$ to be the trace of a function from $K^1_B$ on $\{a_k\}$. Now, an adaptation of our current method leads to another necessary condition involving the numbers
$$\widetilde w_k:=\sum_j\f{w_j}{B'(a_j)\cdot(1-a_j\ov a_k)},\qquad k\in\N.$$
Namely, \eqref{eqn:carlhone} must also hold with $\widetilde w_k$ in place of $w_k$. It would be interesting to determine whether the two conditions together are actually sufficient for $\{w_k\}$ to be in $K^1_B\big|_{\{a_k\}}$. A more detailed discussion and further questions can be found in \cite{DIEOT}.
\par In the next section, we collect a few auxiliary facts to lean upon. The remaining sections contain the proofs of our results.
\section{Preliminaries}
Given an inner function $\th$, we write
$$K_{*\th}:=K^2_\th\cap\bmo,$$
where $\bmo=\bmo(\T)$ is the space of functions of bounded mean oscillation on $\T$; see \cite[Chapter VI]{G}. The following representation formula is borrowed from \cite[Lemma 3.1]{C1}.
\begin{lem}\label{lem:genformkbmo} Given an interpolating Blaschke product $B$ with zeros $\{a_j\}$, the general form of a function $g\in K_{*B}$ is
\begin{equation}\label{eqn:kbmoseries}
g(z)=\sum_jc_j\f{1-|a_j|}{1-\ov a_jz}
\end{equation}
with $\{c_j\}\in\ell^\infty$.
\end{lem}
The series in \eqref{eqn:kbmoseries} is understood to converge in the weak-star topology of $\bmoa:=\bmo\cap H^2$, viewed as the dual of $H^1$. It also converges in $H^2$ (cf. \cite{HNP}), and hence on compact subsets of $\D$.
\par We now cite another result of Cohn, namely \cite[Corollary 3.2]{C2}, as our next lemma.
\begin{lem}\label{lem:kbmobdd} If $B$ is an interpolating Blaschke product with zeros $\{a_j\}$, and if $g\in K_{*B}$ is a function satisfying $\{g(a_j)\}\in\ell^\infty$, then $g\in H^\infty$.
\end{lem}
Further, we need to recall the definition of the space $\bmo_\ph=\bmo_\ph(\T)$ associated with a majorant $\ph$. A function $f\in L^1(\T,m)$ is said to be in $\bmo_\ph$ if
$$\sup_I\f1{m(I)\cdot\ph(m(I))}\int_I|f-f_I|\,dm<\infty,$$
where $f_I:=m(I)^{-1}\int_If\,dm$, the supremum being taken over the open arcs $I\subset\T$. The classical $\bmo$ corresponds to the constant majorant $\ph\equiv1$.
\par The following fact (and its converse) can be found in \cite{Sp}.
\begin{lem}\label{lem:bmocont} If $\ph$ is a majorant satisfying \eqref{eqn:intconv}, then $\bmo_\ph\subset C(\T)$.
\end{lem}
Our last lemma is essentially contained in \cite{DJAM98}.
\begin{lem}\label{lem:hankelbmo} Suppose that $B$ is an interpolating Blaschke product with zeros $\{a_j\}$, $\ph$ is a majorant of upper type less than $1$, and $f\in H^2$. If
\begin{equation}\label{eqn:sizefataj}
\sup_j\f{|f(a_j)|}{\ph(1-|a_j|)}<\infty,
\end{equation}
then $P_-(f\ov B)\in\bmo_\ph$.
\end{lem}
Precisely speaking, this result was incorporated into the proof of Theorem 2.3 in \cite{DJAM98}. The theorem asserted (among other things) that \eqref{eqn:sizefataj} is necessary and sufficient in order that $f\ov B\in\bmo_\ph$, provided that $f\in\bmoa_\ph:=\bmo_\ph\cap H^2$. The sufficiency was then established by splitting $f\ov B$ as
$$P_+(f\ov B)+P_-(f\ov B)$$
and verifying that both terms are in $\bmo_\ph$. In particular, the second term, $P_-(f\ov B)$, was shown to be in $\bmo_\ph$ by means of a duality argument (based on a result from \cite{J}) which actually works for any $f\in H^2$, not just for $f\in\bmoa_\ph$; see \cite[p.\,97]{DJAM98} for details.
\par When $\ph(t)=t^\al$ with some $\al\in(0,1)$, $\bmo_\ph$ reduces to the usual Lipschitz space of order $\al$, and in this special case Lemma \ref{lem:hankelbmo} was previously established in \cite[Section 4]{DSpb93}; the case of higher order Lipschitz--Zygmund spaces was treated there as well. We also refer to \cite{DIUMJ94, DAiM17} for related results.
\section{Proof of Theorem \ref{thm:interpolthm}}
Suppose \eqref{eqn:wbeltrace} holds, so that there exists a function $f\in K^\infty_B$ satisfying
\begin{equation}\label{eqn:valuefaj}
f(a_j)=w_j,\qquad j\in\N.
\end{equation}
To deduce \eqref{eqn:crucond}, we first define
\begin{equation}\label{eqn:defgammaj}
\ga_j:=-\f{a_j}{|a_j|}\f{w_j}{B_j(a_j)},\qquad j\in\N
\end{equation}
(where $B_j$ is the Blaschke product with zeros $\{a_k:k\ne j\}$), and consider the function
\begin{equation}\label{eqn:functg}
g(z):=\sum_j\ov\ga_j\f{1-|a_j|^2}{1-\ov a_jz}.
\end{equation}
Observe that $\{\ga_j\}\in\ell^\infty$, because $\inf_j|B_j(a_j)|>0$, and so $g\in K_{*B}$ by virtue of Lemma \ref{lem:genformkbmo}; the latter is applied with $c_j=\ov\ga_j(1+|a_j|)$.
\par Recalling the notation \eqref{eqn:ftilde}, which is henceforth used with $B$ in place of $\th$, we have then (a.e. on $\T$)
\begin{equation*}
\begin{aligned}
\widetilde g(z)&:=\ov z\ov{g(z)}B(z)=\ov z\sum_jB_j(z)b_j(z)\ga_j\f{1-|a_j|^2}{1-a_j\ov z}\\
&=\ov z\sum_jB_j(z)\f{|a_j|}{a_j}\f{a_j-z}{1-\ov a_jz}
\ga_j\f{1-|a_j|^2}{1-a_j\ov z}\\
&=-\sum_jB_j(z)\f{|a_j|}{a_j}\ga_j\f{1-|a_j|^2}{1-\ov a_jz}\\
&=\sum_jB_j(z)\f{w_j}{B_j(a_j)}\f{1-|a_j|^2}{1-\ov a_jz}.
\end{aligned}
\end{equation*}
The resulting identity
$$\widetilde g(z)=\sum_jB_j(z)\f{w_j}{B_j(a_j)}\f{1-|a_j|^2}{1-\ov a_jz}$$
actually holds for all $z\in\D$, and computing both sides at $a_k$ gives
\begin{equation}\label{eqn:gtildeatak}
\widetilde g(a_k)=w_k,\qquad k\in\N
\end{equation}
(just note that $B_j(a_k)=0$ whenever $j\ne k$).
\par Comparing \eqref{eqn:gtildeatak} and \eqref{eqn:valuefaj}, we deduce that $\widetilde g=f$; indeed, the difference $\widetilde g-f$ belongs to both $K^2_B$ and $BH^2$, and is therefore null. Because $f$ is actually in $K^\infty_B$, so is $g(=\widetilde f)$, and this obviously implies that
\begin{equation}\label{eqn:gatakbdd}
\{g(a_k)\}\in\ell^\infty.
\end{equation}
Equivalently, the values $\ov{g(a_k)}$ ($k\in\N$) form a bounded sequence, i.e.,
\begin{equation}\label{eqn:bargatak}
\sup_k\left|\sum_j\ga_j\f{1-|a_j|^2}{1-a_j\ov a_k}\right|<\infty.
\end{equation}
Finally, we combine \eqref{eqn:defgammaj} with the elementary formula
$$B'(a_j)=B_j(a_j)\cdot b'_j(a_j)=-\f{|a_j|}{a_j}\f{B_j(a_j)}{1-|a_j|^2}$$
to get
\begin{equation}\label{eqn:uside}
\ga_j\cdot(1-|a_j|^2)=\f{w_j}{B'(a_j)},
\end{equation}
and substituting this into \eqref{eqn:bargatak} yields \eqref{eqn:crucond}.
\par Conversely, assume that \eqref{eqn:crucond} holds. Further, let $f\in K^2_B$ be a function satisfying \eqref{eqn:valuefaj}. (To find such an $f$, it suffices to solve the interpolation problem with an $H^2$ function and then project it orthogonally onto $K^2_B$.) Defining the numbers $\ga_j$ and the function $g$ by \eqref{eqn:defgammaj} and \eqref{eqn:functg}, respectively, we then infer -- exactly as before -- that $g$ lies in $K_{*B}$ and obeys \eqref{eqn:gtildeatak}. The latter in turn implies that $\widetilde g=f$, as above.
\par On the other hand, we may again use the identity \eqref{eqn:uside} to rewrite \eqref{eqn:crucond} as \eqref{eqn:bargatak}, or equivalently, as \eqref{eqn:gatakbdd}. This done, we invoke Lemma \ref{lem:kbmobdd} to conclude that $g$ is in $H^\infty$, and hence in $K^\infty_B$. The function $f(=\widetilde g)$ therefore belongs to $K^\infty_B$ as well, and recalling \eqref{eqn:valuefaj} we finally arrive at \eqref{eqn:wbeltrace}.
\section{Proofs of Theorems \ref{thm:elloneideal} and \ref{thm:phiideal}}
\noindent{\it Proof of Theorem \ref{thm:elloneideal}.} (a) For all $j$ and $k$ in $\N$, we have
$$|B'(a_j)|\cdot|1-a_j\ov a_k|\ge|B'(a_j)|\cdot(1-|a_j|)\ge\de>0,$$
where $\de$ is the infimum in \eqref{eqn:intbla}. Therefore, whenever $\{w_j\}\in\ell^1$,
$$\sup_k\left|\sum_j\f{w_j}{B'(a_j)\cdot(1-a_j\ov a_k)}\right|
\le\f1\de\sum_j|w_j|<\infty,$$
and \eqref{eqn:crucond} holds true. In view of Theorem \ref{thm:interpolthm}, the inclusion \eqref{eqn:ellonetrace} is thereby verified.
\smallskip (b) Assume, under the current hypotheses on $\{a_j\}$ and $X$, that we can find a sequence $\{\be_j\}\in X\setminus\ell^1$. Put
$$w_j:=B'(a_j)\cdot(1-a^2_j)\cdot|\be_j|,\qquad j\in\N.$$
Because $B$ is a unit-norm $H^\infty$ function, we have
$$|B'(a_j)|\cdot(1-a^2_j)\le1$$
(we are also using the fact that $0\le a_j<1$ for all $j$), and so
$$|w_j|\le|\be_j|,\qquad j\in\N.$$
Since $X$ is an ideal sequence space containing $\{\be_j\}$, it follows that $\{w_j\}\in X$. Recalling \eqref{eqn:xtrace}, we readily arrive at \eqref{eqn:wbeltrace}, which we further rephrase (using Theorem \ref{thm:interpolthm}) as \eqref{eqn:crucond}. Thus, the sums
$$S_k:=\sum_j\f{w_j}{B'(a_j)\cdot(1-a_ja_k)}$$
(whose terms are all real and nonnegative) must satisfy
\begin{equation}\label{eqn:crucondspcase}
\sup_kS_k<\infty.
\end{equation}
\par On the other hand, for any fixed $k\in\N$ and any $j\le k$, we have $a_k\ge a_j$, whence
$$S_k\ge\sum_{j=1}^k\f{w_j}{B'(a_j)\cdot(1-a_ja_k)}
\ge\sum_{j=1}^k\f{w_j}{B'(a_j)\cdot(1-a^2_j)}=\sum_{j=1}^k|\be_j|.$$
Now, since $\sum_{j=1}^k|\be_j|\to\infty$ as $k\to\infty$, we conclude that $\sup_kS_k=\infty$, which contradicts \eqref{eqn:crucondspcase}. The contradiction means that the difference $X\setminus\ell^1$ is actually empty, so $X\subset\ell^1$ as required. \qed
\medskip\noindent{\it Proof of Theorem \ref{thm:phiideal}.} (i) Let $\{w_j\}$ be a sequence in $X_\ph(\{a_j\})$, so that \eqref{eqn:sizewj} holds, and let $f\in K^2_B$ be a solution to the interpolation problem \eqref{eqn:valuefaj}. Rewriting \eqref{eqn:sizewj} as \eqref{eqn:sizefataj}, while noting that
$$f\ov B=P_-(f\ov B),$$
we deduce from Lemma \ref{lem:hankelbmo} that $f\ov B\in\bmo_\ph$. On the other hand, condition \eqref{eqn:intconv} guarantees, by virtue of Lemma \ref{lem:bmocont}, that every function in $\bmo_\ph$ is continuous (and hence bounded) on $\T$. It follows that $f\ov B$ is in $L^\infty$, and so is $f$. Consequently, $f$ actually belongs to $K^2_B\cap L^\infty=K^\infty_B$, and we arrive at \eqref{eqn:wbeltrace}. The inclusion \eqref{eqn:xphintrace} is thus established.
\smallskip (ii) We put $a_j=1-2^{-j}$ ($j=1,2,\dots$) and exploit \eqref{eqn:xphintrace} in this special case only. The sequence space $X_\ph(\{a_j\})$ is obviously ideal, so we infer from Theorem \ref{thm:elloneideal}, part (b), that $X_\ph(\{a_j\})\subset\ell^1$; and since the sequence $\{\ph(1-a_j)\}=\{\ph(2^{-j})\}$ belongs to $X_\ph(\{a_j\})$, it follows in particular that
\begin{equation}\label{eqn:phiellone}
\sum_{j=1}^\infty\ph(2^{-j})<\infty.
\end{equation}
It remains to observe that \eqref{eqn:phiellone} is equivalent to \eqref{eqn:intconv}. To see why, one only needs to break up the integration interval $(0,1]$ as $\bigcup_{j=1}^\infty I_j$, where $I_j:=(2^{-j},2^{-j+1}]$, and notice that
$$\ph(2^{-j})\le\ph(t)\le2\ph(2^{-j})$$
for $t\in I_j$. The proof is complete. \qed
\medskip
\section{Experiments and Results}
\begin{figure}[h]
\centering
\includegraphics[width=0.8\linewidth]{timit_learning.pdf}
\caption{Example training run on TIMIT.}\label{fig:timit}
\end{figure}
We conducted experiments on two different speech corpora using this model. Initial experiments were
conducted on TIMIT to assess hyperparameters that could lead to stable behavior of the model. The
second set of experiments were conducted on the Wall Street Journal corpus to assess if the method worked
on a large vocabulary speech recognition task that is much more realistic and complicated than the
TIMIT phoneme recognition task. While our experiments on TIMIT produced numbers close to the state of
the art, our results on WSJ are only a preliminary demonstration that this method indeed works on such
a task. Further hyperparameter tuning and method development should improve results on this task
significantly.
\subsection{TIMIT}
The TIMIT data set is a phoneme recognition task in which phoneme sequences have to be inferred from
input audio utterances. The training dataset contains 3696 different audio clips and the target is
one of 60 phonemes. Before scoring, these are collapsed to a standard 39 phoneme set, and then the
Levenshtein edit distance is computed to get the phoneme error rate (PER).
We trained models with various number of layers on TIMIT, starting with a small number of layers.
Initially, we achieved 28\% phoneme error rate (PER) using a three layer LSTM model with 300
units in each layer. During these experiments we found that using a weight of $\lambda=1$ for
entropy regularization seemed to produce best results. Further it was crucial to decay this parameter
as learning proceeded to allow the model to sharpen its predictions, once enough learning
signal was available. To do this, the entropy penalty was initialized to 1, and decayed as
$\exp(0.97, \textrm{step}/10000) + 0.1$. Results were further improved to 23\% with the use of dropout with
15\% of the units being dropped out. Results were improved further when we used five layers of units.
Best results were achieved through the use of Grid LSTMs \cite{kalchbrenner2015grid},
rather than stacked LSTMs.
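For concreteness, one possible reading of the decay schedule above (assuming that $\exp(a,b)$ denotes $a^b$; this interpretation is ours) is the following small snippet:
\begin{verbatim}
def entropy_weight(step):
    # assumed decay schedule: 0.97 ** (step / 10000) + 0.1
    return 0.97 ** (step / 10000.0) + 0.1

for step in [0, 100000, 200000, 500000]:
    print(step, round(entropy_weight(step), 3))
\end{verbatim}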
See figure~\ref{fig:timit} for an example of a training curve. It can be seen that the model
requires a large number of updates ($>$100K) before meaningful models are learnt. However, once
learning starts, steady progress is achieved, even though the model is trained by policy gradient.
Training of the models was done using Asynchronous Gradient Descent with 20 replicas in
Tensorflow \cite{abadi2016tensorflow}. Training was much more stable when Adam was used,
compared to SGD, although results were more or less the same when both models were run to
convergence. We used a learning rate of 1e-4 with Adam. In order to speed up RNN training
we also bucketed examples by length -- each replica used only examples whose length lay within
specific ranges. During training, dropout rate was increased from 0 as the training proceeded.
This is because using dropout early in the training prevented the model from latching on to
any training signal.
Lastly, we note that the input filterbanks were processed such that three continuous frames of
filterbanks, representing a total of 30ms of speech, were concatenated and input to the model.
This results in a smaller number of input steps and allows the model to learn hard alignments
much faster than it would otherwise.
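A minimal sketch of this frame-stacking step is shown below; the filterbank dimensionality and the handling of leftover frames are illustrative assumptions rather than a description of the exact pipeline used here.
\begin{verbatim}
import numpy as np

def stack_frames(filterbanks, k=3):
    # Concatenate every k consecutive 10 ms frames into a single input vector.
    # filterbanks: (T, n_mel) -> returns (T // k, k * n_mel); trailing frames
    # that do not fill a complete group are simply dropped in this sketch.
    T, n_mel = filterbanks.shape
    T = (T // k) * k
    return filterbanks[:T].reshape(T // k, k * n_mel)

x = stack_frames(np.random.randn(497, 40))   # e.g. 40 mel filterbank channels
print(x.shape)                               # (165, 120): a third as many steps
\end{verbatim}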
Table~\ref{tab:timit} shows a summary of the results achieved on TIMIT by our method and
other, more mature models.
\begin{table}[h]
\caption{Results on TIMIT using Unidirectional LSTMs for various models.}
\label{tab:timit}
\centering
\begin{tabular}{lll}
\toprule
Method & PER \\
\midrule
Connectionist Temporal Classification (CTC)\cite{graves2013speech} & 19.6\% \\
Deep Neural Network - Hidden Markov Model (DNN-HMM)\cite{mohamed2012acoustic} & 20.7\% \\
Sequence to Sequence Model With Attention (our implementation) & 24.5\% \\
Online Sequence to Sequence Model\cite{jaitly2015online} & 19.8\% \\
\midrule
Our Model (Stacked LSTM) & 21.5\% \\
Our Model (Grid LSTM) & 20.5\% \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Wall Street Journal}
\begin{table}[h]
\caption{Results on WSJ}
\label{tab:wsj}
\centering
\begin{tabular}{lll}
\toprule
Method & PER \\
\midrule
Connectionist Temporal Classification (CTC)(4 layer bidirectional LSTM)\cite{graves-icml-2014} & 27.3 \% \\
Sequence to Sequence Model With Attention (4 layer bidirectional GRU)\cite{BahdanauCSBB15} & 18.6\% \\
\midrule
Our Model (4 layer unidirectional LSTM) & 27.0\% \\
\bottomrule
\end{tabular}
\end{table}
We used the {\it train\_si284} dataset of the Wall Street Journal (WSJ) corpus for the second
experiment. This dataset consists of more than thirty seven thousand utterances, corresponding
to around 81 hours of audio signals. We trained our model to predict the character sequences
directly, without the use of pronunciation dictionaries or language models, from the audio
signal. Since WSJ is a larger corpus we used 50 replicas for the AsyncSGD training. Each
utterance was speaker mean centered, as is standard for this dataset.
Similar to the TIMIT setup above, we concatenated three continuous frames of filterbanks,
representing a total of 30ms of speech as input to the model at each time step. This is
especially useful for WSJ dataset because its audio clips are typically much longer than
those of TIMIT.
\begin{figure}[h]
\centering
\includegraphics[width=0.9\linewidth, trim={2cm 0 2cm 0}]{wsj.png}
\caption{Example training run on WSJ.}\label{fig:wsj}
\end{figure}
A constant entropy penalty of one was used for the first 200,000 steps, and then it was
decayed as $0.8 \times \exp\left(0.97, \textrm{step}/10000 - 20\right) + 0.2$. Stacked LSTMs with 2
layers of 300 hidden units were used for this experiment\footnote{Admittedly this is a small number
of units and results should improve with the use of a larger model. However, as a proof of
concept it shows that the model can be trained to give reasonable accuracy.}. Gradients
were clipped to a maximum norm of 30 for these experiments.
It was seen that if dropout was used early in the training, the model was unable to learn.
Thus dropout was used only much later in the training. Another difference from the TIMIT
experiments was that the stacked model outperformed the grid LSTM model.
\begin{figure}[h]
\centering
\includegraphics[width=1\linewidth,trim=2cm 2cm 2cm 3cm, clip]{wsj_emissions.png}
\caption{Example output for an utterance in WSJ. The blue line shows the emission
probability $b_i$, while the red line shows the
discrete emission decisions, $\tilde{b}_i$, over the time steps corresponding to an
input utterance. The bottom panel shows the corresponding filterbanks. It
can be seen that the model often decides to emit symbols only when new audio
comes in. This possibly reflects the realization of the network that it needs to
output the symbols it has heard in order to effectively process the new audio.}\label{fig:wsj_emi}
\end{figure}
\subsubsection{Example transcripts}
We show three example transcripts to give a flavour for the kinds of outputs this model
is able to produce. The first one is an example of a transcript that made several errors.
It can be seen however that the outputs have close phonetic similarity to the actual transcript.
The second example is of an utterance that was transcribed almost entirely correctly, other than
the word {\it AND} being substituted by {\it END}. Occasionally the model is even able to transcribe
a full utterance entirely correctly, as in the third example below.
{\bf REF}: {\it \color{blue} ONE LONGTIME EASTERN PILOT INSISTED THAT THE SAFETY CAMPAIGN INVOLVED NUMEROUS SERIOUS PROBLEMS BUT AFFIRMED THAT
THE CARDS OFTEN CONTAINED INSUFFICIENT INFORMATION FOR REGULATORS TO ACT ON} \\
{\bf HYP}: {\it \color{red} ONE LONGTIME EASTERN PILOT INSISTED THAT THE SAFETY CAMPAIGN INVOLVED NEW MERCE SERIOUS PROBLEMS BUT AT FIRM THAT
THE CARDS OFTEN CONTAINED IN SECURITION INFORMATION FOR REGULATORS TO ACT} \\
{\bf REF}: {\it \color{blue} THE COMPANY IS OPENING SEVEN FACTORIES IN ASIA THIS YEAR AND NEXT}\\
{\bf HYP}: {\it \color{red} THE COMPANY IS OPENING SEVEN FACTORIES IN ASIA THIS YEAR END NEXT}\\
{\bf REF}: {\it \color{blue} HE SAID HE AND HIS FATHER J WADE KINCAID WHO IS CHAIRMAN OWN A TOTAL OF ABOUT SIX POINT FOUR PERCENT OF THE COMPANYS COMMON}\\
{\bf HST}: {\it \color{red} HE SAID HE AND HIS FATHER J WADE KINCAID WHO IS CHAIRMAN OWN A TOTAL OF ABOUT SIX POINT FOUR PERCENT OF THE COMPANYS COMMON}\\
\subsubsection{Example Emissions}
Figure~\ref{fig:wsj_emi} shows a plot of the emission probabilities produced as the input
audio is processed. Interestingly, the model produces the final words, only at the end of
the input sequence. Presumably this happens because no new audio comes in after half way
through the utterance, and the model has no need to clear its internal memory to process
new information.
\section{Introduction}
Sequence-to-sequence models \cite{sutskever-nips-2014,cho-emnlp-2014}
are a general model family for solving supervised learning problems
where both the inputs and the outputs are sequences. The performance
of the original sequence-to-sequence model has been greatly improved
by the invention of \emph{soft attention} \cite{bahdanau-iclr-2015},
which made it possible for sequence-to-sequence models to generalize
better and achieve excellent results using much smaller networks on
long sequences. The sequence-to-sequence model with attention had
considerable empirical success on machine translation
\cite{bahdanau-iclr-2015}, speech recognition
\cite{chorowski-nips-2014,chan2015listen}, image caption generation
\cite{xu-icml-2015,vinyals-arvix-2014}, and question answering
\cite{weston2014memory}.
Although remarkably successful, the sequence-to-sequence model with
attention must process the entire input sequence before producing an
output. However, there are tasks where it is useful to start
producing outputs before the entire input is processed. These tasks
include both speech recognition and machine translation, especially
because a good online speech recognition system and a good online
translation system can be combined to produce a voice-based
instantaneous translator (also known as a Babel Fish
\cite{adams1995hitch}), which is an important application.
In this work, we present a simple online sequence-to-sequence model
that uses binary stochastic variables to select the timesteps at which
to produce outputs. The stochastic variables are trained with a policy
gradient method (similarly to Mnih et al.~\cite{mnih2014recurrent} and
Zaremba and Sutskever \cite{zaremba2015reinforcement}). Despite its
simplicity, this method achieves encouraging results on the TIMIT and
the Wall Street Journal speech recognition datasets. Our results
suggest that a larger scale version of the model will likely achieve
state-of-the-art results on many sequence-to-sequence problems.
\subsection{Relation To Prior Work}
While the idea of soft attention as it is currently understood was
first introduced by Graves \cite{graves2013generating}, the first
truly successful formulation of soft attention is due to Bahdanau et
al.~\cite{bahdanau-iclr-2015}. It used a neural architecture that
implements a ``search query'' that finds the most relevant element in
the input, which it then picks out. Soft attention has quickly become
the method of choice in various settings because it is easy to implement
and it has led to state of the art results on various tasks. For example,
the Neural Turing Machine \cite{graves2014neural} and the Memory Network
\cite{sukhbaatar2015end} both use an attention mechanism similar to
that of Bahdanau et al.~\cite{bahdanau-iclr-2015} to implement models
for learning algorithms and for question answering.
While soft attention is immensely flexible and easy to use, it assumes
that the input sequence is provided in its entirety at test time. It
is an inconvenient assumption whenever we wish to produce the relevant
output as soon as possible, without processing the input sequence in
its entirety first. Doing so is useful in the context of a speech
recognition system that runs on a smartphone, and it is especially
useful in a combined speech recognition and a machine translation
system.
There exists prior work that investigated methods for producing an
output without consuming the input in its entirety. These include the
work by Mnih \cite{mnih2014recurrent} and Zaremba and Sutskever
\cite{zaremba2015reinforcement} who used the Reinforce algorithm to
learn the location in which to consume the input and when to emit an
output. Finally, Jaitly et al.~\cite{jaitly2015online} used an
online sequence-to-sequence method with conditioning on partial
inputs, which yielded encouraging results on the TIMIT dataset.
Our work is most similar to Zaremba and Sutskever \cite{zaremba2015reinforcement}.
However, we are able to simplify the learning problem for
the policy gradient component of the algorithm by using only
one stochastic decision per time step, which makes the model much more
effective in practice.
\section{Methods}
\begin{figure}[t]
\centering
\includegraphics[width=0.8\linewidth]{model.pdf}
\caption{Overall Architecture of the model used in this paper.}\label{fig:model}
\end{figure}
In this section we describe the details of our recurrent neural
network architecture, the reward function, and the training and
inference procedure. We refer the reader to figure~\ref{fig:model} for
the details of the model.
We begin by describing the probabilistic model we used in this
work.
At each time step, $i$, a recurrent neural network (represented in figure~\ref{fig:model}) decides whether to emit an output token. The decision is made by a stochastic binary logistic unit with emission probability $b_i$. Let $\tilde{b}_i \sim \text{Bernoulli}(b_i)$ be the corresponding Bernoulli random variable, such that if $\tilde{b}_i$ is 1, then the model outputs the vector $d_i$, a softmax distribution over the set of possible tokens. The current position in the output sequence $y$ can be written $\tilde{p}_i = \sum_{j=1}^i \tilde{b}_j$, which is incremented by 1 every time the model chooses to emit. Then the model's goal is to predict the desired output $y_{\tilde{p}_i}$; thus whenever $\tilde{b}_i = 1$, the model experiences a loss given by
\[\text{softmax\_logprob}(d_i;y_{\tilde{p}_i}) = - \sum_{k}\log(d_{ik})y_{\tilde{p}_i k}\]
where $k$ ranges over the number of possible output tokens.
At each step of the RNN, the binary decision of the previous timestep,
$\tilde{b}_{i-1}$, and the previous target $t_{i-1}$ are fed into the
model as input. This feedback ensures that the model's outputs are conditioned
on all of its previous predictions, and thus that the model belongs to the sequence-to-sequence family.
We train this model by estimating the gradient of the log probability
of the target sequence with respect to the parameters of the
model. While this model is not fully differentiable because it uses
non-differentiable binary stochastic units, we can estimate the
gradients with respect to model parameters by using a policy gradient
method, which has been discussed in detail by Schulman et
al.~\cite{schulman2015gradient} and used by Zaremba and Sutskever
\cite{zaremba2015reinforcement}.
In more detail, we use supervised learning to train the network to
make the correct output predictions, and reinforcement learning to
train the network to decide on when to emit the various
outputs. Let us assume that the input sequence is given by $(x_1,\ldots,x_{T_1})$
and let the desired sequence be $(y_1,\ldots,y_{T_2})$, where $y_{T_2}$ is a special
end-of-sequence token, and where we assume that $T_2 \leq T_1$. Then the log
probability of the model is given by the following equations:
\begin{eqnarray}
h_i &=& \textrm{LSTM}(h_{i-1}, \mathrm{concat}(x_i, \tilde{b}_{i-1}, \tilde{y}_{i-1})) \\
b_i &=& \mathrm{sigmoid}(W_\mathrm{b} \cdot h_i) \\
\tilde{b}_i &\sim & \mathrm{Bernoulli}(b_i) \\
\tilde{p}_i &=& \sum_{j=1}^i \tilde{b}_j \\
\tilde{y}_i &=& y_{\tilde{p}_i} \\
d_i &=& \mathrm{softmax}(W_o h_i) \\
\mathcal{R} &=& \mathcal{R} + \tilde{b}_i \cdot \text{softmax\_logprob}(d_i; \tilde{y}_i) \label{eqn:reward}
\end{eqnarray}
In the above equations, $\tilde{p}_i$ is the ``position'' of the model
in the output, which is always equal to $\sum_{k=1}^i \tilde{b}_k$:
the position advances if and only if the model makes a prediction.
Note that we define $y_0$ to be a special beginning-of-sequence
symbol. The above equations also suggest that our model can easily be implemented
within a static graph in a neural net library such as TensorFlow, even though
the model has, conceptually, a dynamic neural network architecture.
Following Zaremba and Sutskever~\cite{zaremba2015reinforcement}, we modify the model from
the above equations by forcing $\tilde{b}_i$ to be equal to 1
whenever $T_1 - i \leq T_2 -\tilde{p}_i$. Doing so ensures that
the model will be forced to predict the entire target sequence $(y_1,\ldots,y_{T_2})$,
and that it will not be able to learn the degenerate solution where
it chooses to never make any prediction and therefore never experience
any prediction error.
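To make the bookkeeping above concrete, the following Python sketch runs a single sampling pass of the emission mechanism, including the forcing rule just described. It is purely illustrative: the LSTM cell is replaced by a plain tanh recurrent cell, all weights are random rather than trained, the dimensions are arbitrary, and processing simply stops once the last target has been emitted.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
T1, T2 = 12, 5                 # input and target lengths (T2 <= T1)
n_in, n_h, n_out = 8, 16, 4
x = rng.normal(size=(T1, n_in))          # input features
y = rng.integers(n_out, size=T2)         # targets; y[T2-1] acts as end-of-sequence

W_h = rng.normal(scale=0.1, size=(n_h, n_h + n_in + 1 + n_out))  # tanh cell stands in for the LSTM
W_b = rng.normal(scale=0.1, size=n_h)                            # emission logit
W_o = rng.normal(scale=0.1, size=(n_out, n_h))                   # token softmax

def one_hot(j, n):
    v = np.zeros(n); v[j] = 1.0; return v

h = np.zeros(n_h)
b_prev, y_prev = 0.0, one_hot(0, n_out)  # token 0 stands in for the start symbol y_0
p = 0                                    # position \tilde p_i in the output
log_p_tokens, log_rho = 0.0, 0.0

for i in range(T1):
    h = np.tanh(W_h @ np.concatenate([h, x[i], [b_prev], y_prev]))   # recurrence for h_i
    b = 1.0 / (1.0 + np.exp(-W_b @ h))                               # emission probability b_i
    b_tilde = float(rng.random() < b)                                # Bernoulli sample
    if T1 - i <= T2 - p:
        b_tilde = 1.0            # force emission so the whole target gets produced
    if b_tilde == 1.0:
        d = np.exp(W_o @ h); d /= d.sum()                            # softmax over tokens d_i
        log_p_tokens += np.log(d[y[p]])       # token log-probability (supervised term)
        y_prev = one_hot(y[p], n_out)
        p += 1
    log_rho += b_tilde * np.log(b) + (1.0 - b_tilde) * np.log(1.0 - b)  # log rho of the decisions
    b_prev = b_tilde
    if p == T2:                  # simplification: stop once the target is exhausted
        break

print(p, log_p_tokens, log_rho)
\end{verbatim}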
\begin{figure*}[h]
\centering
\includegraphics[width=\linewidth]{entropy_penalty.pdf}
\caption{The impact of entropy regularization on emission locations. Each
line shows the emission predictions made for an example input utterance,
with each symbol representing 3 input time steps. 'x' indicates that the
model chooses to emit output at that time step, whereas '-' indicates
otherwise. Top line - without entropy penalty the model emits symbols
either at the start or at the end of the input, and is unable to get
meaningful gradients to learn a model. Middle line - with entropy
regularization, the model avoids clustering emission predictions in
time and learns to spread the emissions meaningfully and learn a model.
Bottom line - using KL divergence regularization of emission probability
also mitigates the clustering problem, albeit not as effectively as with
entropy regularization.}\label{fig:entropy_pen}
\end{figure*}
We now elaborate on the manner in which the gradient is computed. It
is clear that for a given value of the binary decisions $\tilde{b}_i$,
we can compute $\partial \mathcal{R} /\partial \theta$ using the
backpropagation algorithm. Figuring out how to learn $\tilde{b}_i$ is
slightly more challenging. To understand it, we will factor the
reward $\mathcal{R}$ into an expression $\mathcal
{R}(\tilde{\vect{b}})$ and a distribution $\rho(\tilde{\vect{b}})$
over the binary vectors, and derive a gradient estimate with respect
to the parameters of the model:
\begin{equation}
\label{eqn:expectedR}
\mathcal{R} = \E_{\vect{\tilde{b}}} \left[R(\tilde{\vect{b}}) \right]
\end{equation}
Differentiating, we get
\begin{equation}
\label{eqn:grad_reward}
\nabla \mathcal{R} = \E_{\vect{\tilde{b}}} \left[ \nabla R(\tilde{\vect{b}})
+ R(\tilde{\vect{b}}) \nabla \log \rho(\tilde{\vect{b}})
\right]
\end{equation}
where $\rho(\tilde{\vect{b}})$ is the probability of a binary sequence of the $\tilde{b}_i$
decision variables. In our
model, $\rho(\tilde{\vect{b}})$ is computed using the chain rule over the $b_i$ probabilities:
\begin{equation}
\label{eqn:emit_prob}
\log \rho (\tilde{\vect{b}}) = \sum_{i=1}^T \tilde b_i \log b_i + (1-\tilde b_i) \log (1-b_i)
\end{equation}
Since the gradient in equation~\ref{eqn:grad_reward} is a policy
gradient, it has very high variance, and variance reduction techniques
must be applied. As is common in such problems we use {\it centering}
(also known as baselines) and Rao-Blackwellization to reduce the
variance of such models. See Mnih and Gregor \cite{anvil} for an example of the use of
such techniques in training generative models with stochastic units.
Baselines are commonly used in the reinforcement learning literature to
reduce the variance of estimators, by relying on the identity
$\E_{\vect{\tilde{b}}} \left[ \nabla \log \rho(\tilde{\vect{b}}) \right] = 0$. Thus
the gradient in~\ref{eqn:grad_reward} can be better estimated by the following, through the
use of a well chosen {\it baseline} function, $\Omega({\bf x})$, where ${\bf x}$ is a vector
of side information which happens to be the input and all the outputs up to timestep $\tilde{p}_i$:
\begin{equation}
\label{eqn:baselined_reward}
\nabla \mathcal{R} = \E_{\vect{\tilde{b}}} \left[ \nabla R(\tilde{\vect{b}})
+ \left(R(\tilde{\vect{b}}) - \Omega(\bf{x})\right) \nabla \log \rho (\tilde{\vect{b}})
\right]
\end{equation}
The variance of this estimator itself can be further reduced by Rao-Blackwellization,
giving:
\begin{equation}
\label{eqn:blackwellized_reward}
\E_{\vect{\tilde{b}}} \left[\left(R(\tilde{\vect{b}}) - \Omega(\bf{x})\right)
\nabla \log \rho (\tilde{\vect{b}}) \right] =
\sum_{j=1}^T \E_{\vect{\tilde{b}}} \left[ \left( \sum_{i=j}^T R_i - \Omega_j \right)
\nabla \log p (b_j | b_{<j}, \vect{x}_{\leq j}, \vect{y}_{\leq \tilde{p}_j}) \right]
\end{equation}
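As a toy illustration of the variance reduction provided by such a baseline (entirely separate from the speech model: a single Bernoulli unit with an arbitrary fixed reward, so that the $\nabla R(\tilde{\vect{b}})$ term vanishes), both estimators below are unbiased but the centered one has a much smaller variance:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
theta = 0.3
p = 1.0 / (1.0 + np.exp(-theta))        # emission probability sigma(theta)
R1, R0 = 2.0, 0.5                       # arbitrary rewards for b = 1 and b = 0

# exact gradient: d/dtheta [p*R1 + (1-p)*R0] = p*(1-p)*(R1 - R0)
exact = p * (1.0 - p) * (R1 - R0)

n = 200000
b = (rng.random(n) < p).astype(float)
rewards = np.where(b == 1.0, R1, R0)
score = b - p                           # d/dtheta log rho(b) for a Bernoulli(sigmoid(theta)) unit

g_plain    = rewards * score                       # score-function (REINFORCE) estimator
baseline   = p * R1 + (1.0 - p) * R0               # a well-chosen constant baseline
g_centered = (rewards - baseline) * score          # centered estimator

print(exact, g_plain.mean(), g_plain.var(), g_centered.mean(), g_centered.var())
\end{verbatim}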
Finally, we note that reinforcement learning models are often trained with augmented
objectives that add an entropy penalty for actions that are too
confident~\cite{levine2014motor,williams1992simple}. We found this to be crucial for our models to train
successfully. In light of the regularization term, the augmented reward at any time
step, $i$, is:
\begin{align*}
\label{eqn:entropy_regularization}
R_i = & \tilde{b}_i \log p(d_i = t_i | \vect{x}_{\leq i},\tilde{\vect{b}}_{<i},
\vect{t}_{<i}) \\ & - \lambda \left(\tilde{b}_i \log p (b_i=1 | b_{<i}, \vect{x}_{\leq i})
+ (1-\tilde{b}_i) \log (p (b_i=0 | b_{<i}, \vect{x}_{\leq i}))\right)
\end{align*}
Without the use of this regularization in the model, the RNN emits all the symbols
clustered in time, either at very start of the input sequence, or at the end. The
model has a difficult time recovering from this configuration, since the gradients
are too noisy and biased. However, with the use of this penalty, the model successfully
navigates away from parameters that lead to very clustered predictions and eventually
learns sensible parameters. An alternative we explored was to use the the KL divergence
of the predictions from a target Bernouilli rate of emission at every step. However,
while this helped the model, it was not as successful as entropy regularization. See
figure~\ref{fig:entropy_pen} for an example of this clustering problem and how
regularization ameliorates it.
\section{Conclusions}
In this work, we presented a simple model that can solve
sequence-to-sequence problems without the need to process the entire
input sequence first. Our model directly maximizes the log
probability of the correct answer by combining standard supervised
backpropagation and a policy gradient method.
Despite its simplicity, our model achieved encouraging results on a
small scale and a medium scale speech recognition task. We hope that
by scaling up the model, it will achieve near state-of-the-art results
on speech recognition and on machine translation, which will in turn
enable the construction of the universal instantaneous
translator.
Our results also suggest that policy gradient methods are reasonably powerful,
and that they can train highly complex neural networks that learn to make
nontrivial stochastic decisions.
\section{Acknowledgements}
We would like to thank Dieterich Lawson for his helpful comments on the manuscript.
\section{Introduction} \label{sec:intro}
Despite the work of many authors, minimal surfaces $S$ of general type with the lowest possible value of the holomorphic Euler characteristic, namely such that $\chi(\mathcal{O}_S)=1$, are far from being classified,
see e.g. the survey papers \cite{BaCaPi06}, \cite{BaCaPi11} and \cite{MP12} for a detailed bibliography on the subject.
These surfaces satisfy the Bogomolov-Miyaoka-Yau inequality $K^2\leq 9.$
The ones with $K^2=9$ are rigid, their universal cover is the unit ball in $\mathbb{C}^2$ and $p_g=q\leq 2$.
The fake planes, i.e. surfaces with $p_g=q=0$, have been classified in \cite{PrYe} and \cite{CaSt}, the two Cartwright-Steger surfaces \cite{CaSt} satisfy $q=1$, whereas no example is known for $p_g=q=2$.
The next case is
$K^2=8$. In this situation, Debarre's inequality for irregular surfaces \cite{Deb81} implies
\begin{equation*}
0\leq p_g=q\leq 4.
\end{equation*}
The cases $p_g=q=3$ and $p_g=q=4$ are nowadays classified (\cite{HacPar}, \cite{Piro}, \cite[Beauville's appendix]{Deb81}), whereas for $p_g=q\leq 2$ some families are known (\cite{Sh78}, \cite{ BCG05}, \cite{ Pol06}, \cite{ Pol08}, \cite{ CP09}, \cite{ Pe11}) but there is no complete description yet.
All the examples of minimal surfaces with $\chi=1$ and $K^2=8$ known so far are uniformized by the bidisk $\mathbb H\times\mathbb H$, where $\mathbb H = \{z \in \mathbb{C} \; | \; \mathrm{Im}\, z >0 \}$ is the Poincar\'e upper half-plane; so the following question naturally arose:
\smallskip
\emph{Is there a smooth minimal surface of general type with invariants $\chi=1$ and $K^2=8$ and whose universal cover is not biholomorphic to $\mathbb H\times\mathbb H$?}
\smallskip
For general facts about surfaces uniformized by the bidisk, we refer the reader to \cite{CaFra}. One of the aims of this paper is to give an affirmative answer to the question above. In fact we construct two rigid, minimal surfaces with $p_g=q=2$ and $K^2=8$ whose universal cover is not the bidisk.
Moreover, we show that these surfaces are complex-conjugated and that they are the unique minimal surfaces with these invariants and having Albanese map of degree $2$, apart from the family of product-quotient surfaces constructed in \cite{Pe11}.
This completes the classification of minimal surfaces with $p_g=q=2$, $K^2=8$ and Albanese map of degree $2$.
Our results can be summarized as follows, see Proposition \ref{prop:branch-locus}, Theorem \ref{thm:class-type I}, Theorem \ref{thm:class-type II},
Theorem \ref{thm:The-number-of-Surfaces} and Proposition \ref{prop:univ-II}.
\begin{main}
Let $S$ be a smooth, minimal surface of general type with $p_g=q=2$, $K^2=8$ and such that its Albanese map $\alpha \colon S \to A:=\mathrm{Alb}(S)$ is a generically
finite double cover. Writing $D_A$ for the branch locus of $\alpha$, there are exactly two possibilities, both of which occur$:$
\begin{itemize}
\item[$\boldsymbol{(I)}$] $D_A^2=32$ and $D_A$ is an irreducible curve with one ordinary point of multiplicity $6$ and no other singularities. These are the product-quotient surfaces constructed in \emph{\cite{Pe11}}$;$
\item[$\boldsymbol{(II)}$] $D_A^2=24$ and $D_A$ has two ordinary points $p_1$, $p_2$ of multiplicity $4$ and no other singularities.
More precisely, in this case we can write
\begin{equation*}
D_A = E_1+E_2+E_3+E_4,
\end{equation*}
where the $E_i$ are elliptic curves intersecting pairwise transversally at $p_1$, $p_2$ and not elsewhere. Moreover, $A$ is an \'etale double cover of the abelian surface $A':=E' \times E'$, where $E'$ denotes the equianharmonic elliptic curve.
Up to isomorphism, there are exactly two such surfaces, which are complex-conjugate. Finally, the universal cover of these surfaces is not biholomorphic to the bidisk $\mathbb H\times\mathbb H.$
\end{itemize}
\end{main}
According to the dichotomy in the Main Theorem, we will use the terminology \emph{surfaces of type I} and \emph{surfaces of type II}, respectively. Besides answering the question above about the universal cover, the Main Theorem is also significant because
\begin{itemize}
\item it contains a new geometric construction of rigid surfaces, which is usually something hard to do;
\item it provides a substantially new piece in the fine classification of minimal surfaces of general type with $p_g=q=2$;
\item it shows that surfaces of type $II$ present the so-called \verb|Diff|$\nRightarrow$\verb|Def| phenomenon, meaning that their diffeomorphism type does not determine their deformation class, see Remark \ref{rem:def-diff}.
\end{itemize}
Actually, the fact that there is exactly one surface of type $II$ up to complex conjugation is a remarkable feature. The well-known Cartwright-Steger surfaces \cite{CaSt} share the same property, however our construction is of a different nature, more geometric and explicit.
The paper is organized as follows.
In Section \ref{sec:Albanese} we provide a general result for minimal surfaces $S$ with $p_g=q=2$, $K^2=8$ and Albanese map $\alpha \colon S \to A$ of degree $2$, and we classify all the possible branch loci $D_A$ for $\alpha$ (Proposition \ref{prop:branch-locus}).
In Section \ref{sec:type I} we consider surfaces of type $I$, showing that they coincide with the family of
product-quotient surfaces constructed in \cite{Pe11} (Theorem \ref{thm:class-type I}).
In Section \ref{sec:type II} we start the investigation of surfaces of type $II$. The technical core of this part is Proposition \ref{prop:2-divisibility}, showing that, in this situation, the pair $(A, \, D_A)$ can be realized as an \'etale double cover of the pair $(A', \, D_A')$, where $D_{A'}$ is a configuration of four elliptic curves in $A'=E' \times E'$ intersecting pairwise and transversally only at the origin $o' \in A'$ (as far as we know, the existence of such a configuration was first remarked in \cite{Hir84}). The most difficult part is to prove that we can choose the double cover $A \to A'$ in such a way that the curve $D_A$ becomes $2$-divisible in the Picard group of $A$ (Proposition \ref{prop:2-divisibility}). The rigidity of $S$ then follows from a characterization of $A'$ proven in \cite{KH04} (cf. also \cite{Aide}).
In Section \ref{sec:classification-II} we show that there are precisely two surfaces of type $II$ up to isomorphism, and that they are complex-conjugated (Theorem \ref{thm:The-number-of-Surfaces}). In order to do this, we have to study the groups of automorphisms and anti-biholomorphisms of $A$ that preserve the branch locus $D_A$, and their permutation action on the set of the sixteen square roots of $\mathcal{O}_A(D_A)$ in the Picard group of $A$ (Proposition \ref{prop:permutation-action}).
Finally, we show that the universal cover of the surfaces of type $II$
is not biholomorphic to $\mathbb{H} \times \mathbb{H}$ (Proposition \ref{prop:univ-II} and Remark \ref{rem:Shimura}), we note that they can be given the structure of an open ball quotient in at least two different ways (Remark \ref{openball}) and we sketch an alternative geometric construction for their Albanese variety $A$ (Remark \ref{remGeomConstr}).
\bigskip
\noindent\textbf{Acknowledgments.}
F. Polizzi was partially supported by GNSAGA-INdAM.
C. Rito was supported by FCT (Portugal) under the project PTDC/MAT-GEO/2823/2014,
the fellowship SFRH/ BPD/111131/2015 and by CMUP (UID/MAT/00144/2013),
which is funded by FCT with national (MEC) and European structural funds through the programs FEDER, under the partnership agreement PT2020.
We thank M. Bolognesi, F. Catanese, F. Lepr\'evost, B. Poonen, R. Pardini, C. Ritzenthaler and M. Stoll for useful conversations and suggestions, and the MathOverflow user Ulrich for his answer in the thread:
\verb|http://mathoverflow.net/questions/242406|\\
This work started during the workshop \emph{Birational Geometry of Surfaces}, held at the Department of Mathematics of the University of Roma Tor Vergata from January 11 to January 15, 2016. We warmly thank the organizers for the invitation and the hospitality.
Finally, we are indebted to the anonymous referees for their helpful comments and remarks, which considerably improved the presentation of these results.
\bigskip
\noindent\textbf{Notation and conventions.} We work over the field of complex numbers. All varieties are assumed to be projective.
For a smooth surface $S$, $K_S$ denotes the \emph{canonical class}, $p_g(S)=h^0(S, \, K_S)$ is the \emph{geometric genus},
$q(S)=h^1(S, \, K_S)$ is the \emph{irregularity} and $\chi(\mathcal{O}_S)=1-q(S)+p_g(S)$ is the \emph{Euler-Poincar\'e characteristic}.
Linear equivalence of divisors is denoted by $\simeq$.
If $D_1$ is an effective divisor on $S_1$ and $D_2$ is an effective divisor on $S_2$, we say that the pair $(S_1, \, D_1)$ is an \emph{\'etale double cover} of the pair $(S_2, \, D_2)$ if there exists an \'etale double cover $f \colon S_1 \to S_2$ such that $D_1 = f^* D_2$.
If $A$ is an abelian surface, we denote by $(-1)_A \colon A \to A$ the involution $x \to -x$. If $a \in A$, we write $t_a \colon A \to A$ for the translation by $a$, namely $t_a(x)=x+a$. We say that a divisor $D \subset A$ (respectively, a line bundle $\mathcal{L}$ on $A$) is \emph{symmetric} if $(-1)_A^*D = D$ (respectively, if $(-1)_A^* \mathcal{L} \simeq \mathcal{L}$).
\section{The structure of the Albanese map} \label{sec:Albanese}
Let us denote by $S$ a minimal surface of general type with $p_g=q=2$ and maximal Albanese dimension, and by $$\alpha \colon S \to A=\textrm{Alb}(S)$$ its Albanese map. It follows from \cite[Section 5]{Ca13} that $\deg \alpha$ is equal to the index of the image of
$\wedge^4 H^1(S, \, \mathbb{Z})$ inside $H^4(S, \, \mathbb{Z}) = \mathbb{Z}[S]$, hence it is a topological invariant of $S$.
So, one can try to classify these surfaces by looking at the pair of invariants $\left(K_S^2, \, \deg \alpha\right)$.
\begin{lemma} \label{lem:alb-isom}
Let $S$ be as above and assume that there is a generically finite double cover $\tilde{\alpha} \colon S \to \widetilde{A}$, where $\widetilde{A}$ is an abelian surface. Then
$\widetilde{A}$ can be identified with $A=\mathrm{Alb}(S)$ and there exists an automorphism $\psi \colon A \to A$ such that $\tilde{\alpha} = \psi \circ \alpha$.
\end{lemma}
\begin{proof}
The universal property of the Albanese map (\cite[Chapter V]{Be96}) implies that the morphism $\widetilde{\alpha} \colon S \to \widetilde{A}$ factors through a morphism $\psi \colon A \to \widetilde{A}$. But $\widetilde{\alpha}$ and $\alpha$ are both generically of degree $2$, so $\psi$ must be a birational map between abelian varieties, hence an isomorphism. Thus we can identify $\widetilde{A}$ with $A$ and with this identification $\psi$ is an automorphism of $A$.
\end{proof}
Throughout the paper, we will assume $\deg \alpha=2$, namely that $\alpha \colon S \to A$ is a generically finite double cover. Let us denote by $D_A \subset A$ the branch locus of $\alpha$ and let
\begin{equation} \label{dia.alpha}
\xymatrix{
S \ar[r]^{c} \ar[dr]_{\alpha} & X \ar[d]^{\alpha_X} \\
& A}
\end{equation}
be its Stein factorization. The map $\alpha_X \colon X \to A$ is a finite double cover and the fact that $S$ is smooth implies that $X$ is normal, see \cite[Chapter I, Theorem 8.2]{BHPV03}.
In particular $X$ has at most isolated singularities, hence $D_A$ is reduced. Moreover, $D_A$ is $2$-divisible in $\mathrm{Pic}(A)$, in other words there exists a divisor $L_A$ on $A$ such that $D_A \simeq 2L_A$.
We have a \emph{canonical resolution diagram}
\begin{equation} \label{dia.resolution}
\begin{CD}
\bar{S} @> >> X\\
@V{\beta}VV @VV \alpha_X V\\
B @> {\varphi}>> A,\\
\end{CD}
\end{equation}
see \cite[Chapter III, Section 7]{BHPV03}, \cite[Section 2]{PePol13b} and \cite{Ri10}. Here $\beta \colon \bar{S} \to B$ is a finite double cover, $\bar{S}$ is smooth, but not necessarily minimal, $S$ is the minimal model of $\bar{S}$ and $\varphi \colon B \to A$ is composed of a series of blow-ups. Let
$x_1, \, x_2, \ldots, x_r$ be the centers of these blow-ups and $\mathsf{E}_1, \ldots, \mathsf{E}_r$
the reduced strict transforms in $B$ of the corresponding exceptional divisors. Then the scheme-theoretic inverse image $\mathcal{E}_j$ of $x_j$ in $B$ is a linear combination with non-negative, integer coefficients of the curves in the set $\{\mathsf{E}_i\}$, and we have
\begin{equation*}
\mathcal{E}_i \mathcal{E}_j=- \delta_{ij}, \quad K_B=\varphi^*
K_A+\sum_{i=1}^r \mathcal{E}_i.
\end{equation*}
Moreover the branch locus $D_B$ of $\beta \colon \bar{S} \to B$ is smooth and can be written as
\begin{equation} \label{hat B}
D_B=\varphi^* D_A- \sum_{i=1}^r d_i \mathcal{E}_i,
\end{equation}
where the $d_i$ are even positive integers, say $d_i=2m_i$. Motivated by \cite[p. 724-725]{Xi90} we introduce the following definitions:
\begin{itemize}
\item a \emph{negligible singularity} of $D_A$ is a point $x_j$ such that $d_j=2$, and $d_i \leq 2$ for any point $x_i$ infinitely near to $x_j;$
\item a $[2d+1, \, 2d+1]$-\emph{singularity} of $D_A$ is a pair $(x_i, \, x_j)$ such that $x_i$ belongs to the first infinitesimal neighbourhood of $x_j$ and $d_i=2d + 2$, $d_j=2d$.
\item a $[2d, \, 2d]$-\emph{singularity} of $D_A$ is a pair $(x_i, \, x_j)$ such that $x_i$ belongs to the first infinitesimal neighbourhood of $x_j$ and $d_i=d_j=2d$;
\item a \emph{minimal singularity} of $D_A$ is a point $x_j$ such that its inverse image in $\bar{S}$ via the canonical resolution contains no $(-1)$-curves.
\end{itemize}
Let us give some examples:
\begin{itemize}
\item an ordinary double point and an ordinary triple point are both minimal, negligible singularities. More generally, an ordinary $d$-ple point is always a minimal singularity, and it is non-negligible for $d \geq 4$;
\item a $[3, \, 3]$-point (triple point $x_j$ with a triple point $x_i$ in its first infinitesimal neighbourhood) is neither minimal nor negligible. Indeed, in this case we have
\begin{equation*}
D_B=\varphi^*D_A-2 \mathsf{E}_j - 6 \mathsf{E}_i = \varphi^*D_A - 2 \mathcal{E}_j - 4 \mathcal{E}_i,
\end{equation*}
with $\mathcal{E}_j=\mathsf{E}_j+\mathsf{E}_i$ and $\mathcal{E}_i=\mathsf{E}_i$. The divisor $\mathsf{E}_j$ is a $(-2)$-curve contained in the branch locus of $\beta \colon \bar{S} \to B$, so its pull-back in $\bar{S}$ is a $(-1)$-curve;
\item a $[4, \, 4]$-point (quadruple point $x_j$ with a quadruple point $x_i$ in its first infinitesimal neighbourhood) is minimal and non-negligible. Indeed, in this case we have
\begin{equation*}
D_B=\varphi^*D_A- 4 \mathsf{E}_j - 8 \mathsf{E}_i = \varphi^*D_A - 4 \mathcal{E}_j - 4 \mathcal{E}_i,
\end{equation*}
with $\mathcal{E}_j=\mathsf{E}_j+\mathsf{E}_i$ and $\mathcal{E}_i=\mathsf{E}_i$. The divisor $\mathsf{E}_j$ is a $(-2)$-curve that does not intersect the branch locus of $\beta \colon \bar{S} \to B$, so its pull-back in $\bar{S}$ consists of the disjoint union of two $(-2)$-curves, that are the unique rational curves coming from the canonical resolution of the singularity.
\end{itemize}
Let us come back now to our original problem.
\begin{lemma} \label{lem:minimal}
In our situation, the following holds$:$
\begin{itemize}
\item[$\boldsymbol{(a)}$] we have $S= \bar{S}$ in \eqref{dia.resolution} if and only if all singularities of $D_A$ are minimal$;$
\item[$\boldsymbol{(b)}$] if $S$ contains no rational curves, then $D_A$ contains no negligible singularities.
\end{itemize}
\end{lemma}
\begin{proof}
\begin{itemize}
\item[$\boldsymbol{(a)}$] If $D_A$ contains a non-minimal singularity then, by definition, $\bar{S}$ is not a minimal surface, hence $\bar{S} \neq S$.
Conversely, if all singularities of $D_A$ are minimal then there are no $(-1)$-curves on $\bar{S}$ coming from the resolution of the singularities of $D_A$. Since the abelian surface $A$ contains no rational curves, this implies that $\bar{S}$ contains no $(-1)$-curves at all, so $\bar{S}=S$.
\item[$\boldsymbol{(b)}$] Any negligible singularity of $D_A$ is minimal and gives rise to some rational double point in $X$, and hence to some $(-2)$-curve in $\bar{S}$ that cannot be contracted by the blow-down morphism $\bar{S} \to S$ (since $A$ contains no rational curves, it follows as before that all $(-1)$-curves in $\bar{S}$ come from the resolution of singularities of $D_A$). This is impossible because we are assuming that $S$ contains no rational curves.
\end{itemize}
\end{proof}
By using the formulae in \cite[p. 237]{BHPV03}, we obtain
\begin{equation} \label{eq:sum-sing}
2=2 \chi(\mathcal{O}_{\bar{S}})=L_A^2- \sum m_i(m_i-1), \quad
K_{\bar{S}}^2=2L_A^2-2 \sum (m_i-1)^2.
\end{equation}
Notice that the sums only involve the non-negligible singularities of $D_A\simeq 2L_A$. The two equalities in \eqref{eq:sum-sing} together imply
\begin{equation} \label{eq:res.can}
K_S^2 \geq K_{\bar{S}}^2 = 4 + 2 \sum (m_i-1).
\end{equation}
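Indeed, the first equality in \eqref{eq:sum-sing} gives $L_A^2 = 2 + \sum m_i(m_i-1)$ and, substituting into the second one, we obtain
\begin{equation*}
K_{\bar{S}}^2 = 4 + 2 \sum m_i(m_i-1) - 2 \sum (m_i-1)^2 = 4 + 2 \sum (m_i-1),
\end{equation*}
whereas $K_S^2 \geq K_{\bar{S}}^2$ because $S$ is the minimal model of $\bar{S}$.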
We are now ready to analyse in detail the case $K_S^2=8$.
\begin{proposition} \label{prop:branch-locus}
Let $S$ be a minimal surface with $p_g=q=2$ and $K_S^2=8$. Then $S$ contains no rational curves, in particular $K_S$ is ample. Using the previous notation, if the Albanese map $\alpha \colon S \to A$ is a generically finite double cover then we are in one of the following cases$:$
\begin{itemize}
\item[$\boldsymbol{(I)}$] $D_A^2=32$ and $D_A$ has one ordinary singular point of multiplicity $6$ and no other singularities$;$
\item[$\boldsymbol{(II)}$] $D_A^2=24$ and $D_A$ has two ordinary singular points of multiplicity $4$ and no other singularities.
\end{itemize}
\end{proposition}
\begin{proof}
The non-existence of rational curves on $S$ is a consequence of a general bound for the number of rational curves on a surface of general type, see \cite[Proposition 2.1.1]{Miy}.
Since $K_S^2=8$, inequality \eqref{eq:res.can} becomes
\begin{equation} \label{eq:res.can-1}
\sum(m_i - 1) \leq 2.
\end{equation}
By Lemma \ref{lem:minimal} there are no negligible singularities in $D_A$, so \eqref{eq:res.can-1} implies that we have three possibilities:
\begin{itemize}
\item $D_A$ contains precisely one singularity (which is necessarily ordinary) and $m_1=3$, that is $d_1=6$; this is case $\boldsymbol{(I)}.$
\item $D_A$ contains precisely two singularities and $m_1=m_2=2$, that is $d_1=d_2=4$. We claim that these two quadruple points cannot be infinitely near. In fact, the canonical resolution of a $[4, \, 4]$-point implies that $\bar S$ contains (two) rational curves and, since a $[4, \, 4]$-point is a minimal singularity, this would imply the existence of rational curves on $S=\bar{S}$, a contradiction.
So we have two ordinary points of multiplicity $4$, and we obtain case $\boldsymbol{(II)}.$
\item $D_A$ contains precisely one singularity (which is necessarily ordinary) and $m_1=2$, that is $d_1=4$. An ordinary singularity is minimal, hence we get equality in \eqref{eq:res.can}, obtaining $K_S^2=6$ (this situation is considered in \cite{PePol13b}), that is a contradiction.
\end{itemize}
\end{proof}
\begin{remark} \label{rem:sing-Stein}
Lemma \ref{lem:minimal} and Proposition \ref{prop:branch-locus} imply that for any surface $S$ with $p_g=q=2$, $K_S^2=8$ and Albanese map of degree $2$, we have $\bar{S}=S$. Furthermore, referring to diagram \eqref{dia.alpha}, the following holds:
\begin{itemize}
\item in case $\boldsymbol{(I)}$, the birational morphism $c \colon S \to X$ contracts precisely one smooth curve $Z$, such that $g(Z)=2$ and $Z^2=-2$. This means that the singular locus of $X$ consists of one isolated singularity $x$, whose geometric genus is $p_g(X, \, x)= \dim_{\mathbb{C}}R^1c_* \mathcal{O}_S = 2$;
\item in case $\boldsymbol{(II)}$, the birational morphism $c \colon S \to X$ contracts precisely two disjoint elliptic curves $Z_1, \, Z_2$ such that $(Z_1)^2 = (Z_2)^2=-2$. This means that the singular locus of $X$ consists of two isolated elliptic singularities $x_1, \, x_2$ of type $\widetilde{E}_7$, see \cite[Theorem 7.6.4]{Is14}.
\end{itemize}
\end{remark}
\begin{definition} \label{def:typeI-II}
According to the dichotomy in Proposition \ref{prop:branch-locus}, we will use the terminology \emph{surfaces of type I} and \emph{surfaces of type II}, respectively.
\end{definition}
\begin{proposition} \label{prop:branch-type-II}
Let us denote as above by $D_A$ the branch locus of the Albanese map $\alpha \colon S \to A$. Then$:$
\begin{itemize}
\item if $S$ is of type $I,$ the curve $D_A$ is irreducible$;$
\item if $S$ is of type $II,$ the curve $D_A$ is of the form
$D_A = E_1+E_2+E_3+E_4,$
where the $E_i$ are elliptic curves meeting pairwise transversally at two points $p_1$, $p_2$ and not elsewhere. In particular, we have $E_iE_j=2$ for $i \neq j.$
\end{itemize}
\end{proposition}
\begin{proof}
Suppose first that $S$ is of type $I$ and consider the blow-up $\varphi \colon B \to A$ at the singular point $p \in D_A$. Let $C_1,\ldots,C_r$ be the irreducible components of the strict transform of $D_A$ and $\mathcal{E} \subset B$ the exceptional divisor.
The curve $D_A$ only contains the ordinary singularity $p$, so the $C_i$ are pairwise disjoint; moreover, the fact that
\begin{equation*}
\sum_{i=1}^r C_i =\varphi^*D_A- 6 \mathcal{E}
\end{equation*}
is $2$-divisible in $\mathrm{Pic}(B)$
implies that $C_i^2=C_i \big( \sum_{i=1}^r C_i \big) $ is an even integer.
Let us recall now that the abelian surface $A$ contains no rational curves, so $g(C_i) >0$ for all $i \in \{1, \ldots, r\}$. On the other hand, if $g(C_i)=1$ then its image $D_i:=\varphi(C_i)$ is a smooth elliptic curve, because it is a curve of geometric genus $1$ on the abelian surface $A$. Thus $D_i^2=0$ and, since $p \in D_i$, it follows $C_i^2=-1$, a contradiction because $-1$ is an odd integer. So we infer $g(C_i)\geq 2$ for all $i \in \{1, \ldots, r\}$ and we can write
\begin{equation*}
\begin{split}
6-4 & = \mathcal{E}(\varphi^* D_A - 6 \mathcal{E}) + (\varphi^*D_A - 6 \mathcal{E})^2 \\ & = K_B \bigg(\sum_{i=1}^r C_i \bigg) + \bigg( \sum_{i=1}^r C_i \bigg)^2 = \sum_{i=1}^r (2 g(C_i)- 2) \geq 2r,
\end{split}
\end{equation*}
that is $r=1$ and $D_A$ is irreducible.
Assume now that $S$ is of type $II$, and write $D_A = E_1 + \cdots + E_r,$ where each $E_i$ is an irreducible curve. Denote by $m_i$ and $n_i$ the multiplicities of $E_i$ at the two ordinary singular points $p_1$ and $p_2$ of $D_A$, and let $p_a(E_i)$ and $g_i$ be the arithmetic and the geometric genus of $E_i$, respectively.
We have
$\sum_{i=1}^r m_i = \sum_{i=1}^r n_i =4$
and
\begin{equation*} \label{eq:Ci-2}
E_i^2 = 2p_a(E_i)-2 = 2g_i -2 + m_i(m_i-1)+n_i(n_i-1).
\end{equation*}
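Here the first equality is adjunction on the abelian surface $A$ (whose canonical class is trivial), while the second one is the genus formula for a curve whose only singularities are ordinary points, namely
\begin{equation*}
p_a(E_i) = g_i + \tfrac{1}{2}\, m_i(m_i-1) + \tfrac{1}{2}\, n_i(n_i-1).
\end{equation*}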
Using this, we can write
\begin{equation*}
\begin{split}
24 & = D_A^2 = \sum_{i=1}^r E_i^2 + 2 \sum_{j<k} E_j E_k \\
& = 2 \sum_{i=1}^r g_i - 2r + \sum_{i=1}^r m_i(m_i-1) + \sum_{i=1}^r n_i(n_i-1) + 2 \sum_{j<k} (m_j m_k + n_j n_k) \\
& = 2 \sum_{i=1}^r g_i - 2r + \bigg(\sum_{i=1}^r m_i \bigg)^2 + \bigg(\sum_{i=1}^r n_i \bigg)^2 - \sum_{i=1}^r m_i - \sum_{i=1}^r n_i \\
& = 2 \sum_{i=1}^r g_i - 2r + 24,
\end{split}
\end{equation*}
that is
$\sum_{i=1}^r g_i = r.$
Since $A$ contains no rational curves we have $g_i \geq 1$, and we conclude that
\begin{equation} \label{eq:Ci-4}
g_1 = \cdots = g_r=1.
\end{equation}
But every curve of geometric genus $1$ on $A$ is smooth, so \eqref{eq:Ci-4} implies that $D_A$ is the sum of $r$ elliptic curves $E_i$ passing through the singular points $p_1$ and $p_2$. Therefore $r=4$, because these points have multiplicity $4$ in the branch locus $D_A$.
\end{proof}
\section{Surfaces of type $I$} \label{sec:type I}
\subsection{The product-quotient examples} \label{2sec:type I}
The following family of examples, whose construction can be found in \cite{Pe11}, shows that surfaces of type $I$ do actually exist. Let $C'$ be a curve of genus $g(C') \geq 2$ and let $G$ be a finite group that acts freely on $C'\times C'$. We assume moreover that the action is \emph{mixed}, namely that there exists an element in $G$ exchanging the two factors; this means that
\begin{equation*}
G\subset {\rm Aut}(C' \times C') \simeq {\rm Aut}(C')^2 \rtimes\mathbb{Z}/2 \mathbb{Z}
\end{equation*}
is not contained in ${\rm Aut}(C')^2$. Then the quotient $S:=(C'\times C')/G$ is a smooth surface with
\begin{equation} \label{eq:inv-prod-quot}
\chi(\mathcal{O}_S)=(g-1)^2/|G|, \quad K_S^2=8\chi (\mathcal{O}_S).
\end{equation}
The intersection $G^0:=G\,\cap\,{\rm Aut}(C')^2$ is an index $2$ subgroup of $G$ (whose action on $C'$ is independent on the factor).
From \cite[Theorem 3.6, Theorem 3.7 and Lemma 3.9]{Fr13}, the exact sequence
\begin{equation*}
1 \to G^0 \to G \to \mathbb{Z}/ 2 \mathbb{Z} \to 1
\end{equation*}
is non-split and the genus of the curve $C := C' /G^0$ equals $q(S)$.
We have a commutative diagram
\begin{equation*}
\begin{CD}
C' \times C' @> t >> C\times C\\ @V VV @VV u V\\ S@> \beta >> \textrm{Sym}^2(C),
\end{CD}
\end{equation*}
where $t \colon C' \times C' \to C \times C $ is a $(G^0\times G^0)$-cover, $u \colon C \times C \to \textrm{Sym}^2(C) $ is the natural projection onto the second symmetric product and $\beta \colon S \to \textrm{Sym}^2(C)$ is a finite cover of degree $|G^0|$.
Assume now that $C'$ has genus $3$ and that $G^0 \simeq \mathbb{Z}/2 \mathbb{Z}$ (hence $G \simeq \mathbb{Z}/4 \mathbb{Z}$).
Since $G$ acts freely on $C'\times C'$, the subgroup $G^0$ acts freely on $C'$, and thus $C$ has genus $2$.
Denoting by $\Delta \subset C \times C$ the diagonal and by $\Gamma \subset C \times C$ the graph of the hyperelliptic involution $\iota \colon C \to C$, we see that $\Delta$ and $\Gamma$ are smooth curves isomorphic to $C$ and satisfying
\begin{equation*}
\Delta \Gamma =6, \quad \Delta^2 = \Gamma^2 = -2.
\end{equation*}
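Indeed, both $\Delta$ and $\Gamma$ are isomorphic to $C$ and $K_{C \times C} \cdot \Delta = K_{C \times C} \cdot \Gamma = 4$, so adjunction on $C \times C$ yields
\begin{equation*}
\Delta^2 = 2g(C)-2 - K_{C \times C} \cdot \Delta = 2 - 4 = -2
\end{equation*}
and similarly for $\Gamma$, whereas $\Delta \Gamma$ equals the number of fixed points of $\iota$, namely the six Weierstrass points of $C$.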
The ramification divisor of $u$ is precisely $\Delta$, so $u(\Delta)^2=-4$, whereas $u(\Gamma)$ is a
$(-1)$-curve. The corresponding blow-down morphism $\varphi \colon \textrm{Sym}^2(C) \to A$ is the Abel-Jacobi map, and $A$ is an abelian surface isomorphic to the Jacobian variety $J(C)$. The composed map
\begin{equation*}
\alpha= \varphi \circ \beta \colon S \to A
\end{equation*}
is a generically finite double cover, that by the universal property coincides, up to automorphisms of $A$, with the Albanese morphism of $S$. Such a morphism is branched over $D_A := (\varphi \circ u)(\Delta)$, which is a curve with $D_A^2=32$ and containing an ordinary sextuple point and no other singularities: in fact, the curves $u(\Delta)$ and $u(\Gamma)$ intersect transversally at precisely six points, corresponding to the six Weierstrass points of $C$.
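In particular, since $u(\Delta)$ is the strict transform of $D_A$ under the blow-up $\varphi$ and $D_A$ has multiplicity $6$ at the blown-up point, we recover
\begin{equation*}
D_A^2 = u(\Delta)^2 + 6^2 = -4 + 36 = 32.
\end{equation*}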
From this and \eqref{eq:inv-prod-quot}, it follows that $S$ is a surface with $p_g=q=2$, $K_S^2=8$ and of type $I$. Note that, with the notation of Section \ref{sec:Albanese}, we have $B= \textrm{Sym}^2(C)$ and $D_B = u(\Delta)$.
\begin{remark} \label{rem:curve-6-ple}
Here is a different construction of the singular curve $D_A$ considered in the previous example. Let $A:=J(C)$ be the Jacobian of a smooth genus $2$ curve and let us consider a symmetric theta divisor $\Theta \subset A$. Then the Weierstrass points of $\Theta$ are six $2$-torsion points of $A$, say $p_0, \ldots, p_{5}$, and $D_A$ arises as the image of $ \Theta $ via the multiplication map $2_A \colon A \to A$ given by $x \mapsto 2x$. Note that $D_A$ is numerically equivalent to $4 \Theta$.
\end{remark}
\begin{remark} \label{rem:pign-pol}
Recently, R. Pignatelli and the first author studied some surfaces with $p_g=q=2$ and $K_S^2=7$, originally constructed in \cite{CanFr15} and arising as \emph{triple} covers $S \to A$ branched over $D_A$, where $(A, \, D_A)$ is as in the previous example. We refer the reader to \cite{PiPol16} for more details.
\end{remark}
\subsection{The classification} \label{subsec:classification I}
The aim of this subsection is to show that every surface of type $I$ is a product-quotient surface of the type described in Subsection \ref{2sec:type I}.
\begin{lemma}\label{symmetric}
Let $D$ be an irreducible curve contained in an abelian surface $A$, with $D^{2}=32$ and having an ordinary point $p$ of multiplicity $6$ and no other singularities.
Then, up to translations, we can suppose $p=0$ and $D$ symmetric, namely $(-1)_A^* D=D$.
\end{lemma}
\begin{proof}
Up to a translation, we may assume $p=0$. Using the results of Subsection \ref{subsec:AbelVar} and \cite[Corollary 2.3.7]{BL04}, it follows that $(-1)_A^*$ acts trivially on $\mathrm{NS}(A)$, hence
$D$ and $D':=(-1)_A^*D$ are two algebraically equivalent, irreducible divisors, both having a sextuple point at $0$. If $D$ and $D'$ were distinct, we would have
$D D' \geq 36$, a contradiction because $D^2=32$; thus $D=D'$.
\end{proof}
\begin{proposition} \label{prop:curve-mult-2}
If $D \subset A$ is as in $\mathrm{Lemma}$ $\mathrm{\ref{symmetric}}$, then there exists a smooth genus $2$ curve $C$ such that $A=J(C)$. Furthermore, up to translations, the curve $D$ can be obtained as in $\mathrm{Remark}$ $\mathrm{\ref{rem:curve-6-ple}}$, namely as the image of a symmetric theta divisor $\Theta \subset A$ via the multiplication map $2_A \colon A \to A$.
\end{proposition}
\begin{proof}
By Lemma \ref{symmetric}, we can assume that $D$ is a symmetric divisor and that its sextuple point is the origin $0 \in A$. The geometric genus of $D$ is $2$, hence its normalization $C \to D$ is a smooth genus $2$ curve.
By the universal property of the Jacobian, the composed map $C \to D \hookrightarrow A$ factors through an isogeny
\begin{equation*}
\eta \colon J(C)\to A,
\end{equation*}
where we can assume, up to translations, that the image $\Theta$ of the embedding $C \hookrightarrow J(C)$ is a theta divisor containing the origin $0 \in J(C)$. Thus, the abelian surface $A$ is isomorphic to $J(C)/T$, where $T:= \ker \eta$ is a torsion subgroup whose order $|T|$ equals the degree $d$ of $\eta$.
The group $T$ contains the group generated by the six points
\begin{equation*}
0=p_0, \, p_1, \ldots, p_5
\end{equation*}
corresponding to the six distinct points of $C$ over $0 \in D$.
The restriction of $\eta$ to $\Theta$ is birational onto $D$,
so we have
\begin{equation*}
\eta^{*}D=\Theta_{0}+\dots+\Theta_{d-1},
\end{equation*}
where $\Theta_{0}= \Theta $ and the $\Theta_{j}$ are translates of $\Theta_0$ by the elements of $T$.
Since $D^{2}=32$, we obtain $(\eta^{*}D)^{2}=32d$. On the other hand, all the curves $\Theta_j$ are algebraically equivalent, hence $\Theta_i \Theta_j=2$ for all pairs $(i, \, j)$ and we infer $(\eta^{*}D)^{2} = (\Theta_0 + \cdots + \Theta_{d-1})^2=2d^2$. So $32d=2d^2$, that is $d=16$.
This shows that the reducible curve $\eta^*D$ has sixteen sextuple points $p_0, \ldots, p_{15}$, such that every curve $\Theta_j$ contains six of them; conversely, since all the $\Theta_j$ are smooth, through any of the $p_k$ pass exactly six curves. We express these facts by saying that the sixteen curves $\Theta_j$ and the sixteen points $p_k$ form a \emph{symmetric} $(16_6)$-\emph{configuration}.
The involution $(-1)_A$ acts on $D$, so the involution $(-1)_{J(C)}$ acts on $\Theta$, that is $\Theta$ is a symmetric divisor on $J(C)$. Furthermore, the action of $(-1)_A$ induces the multiplication by $-1$ on the tangent space $T_{A, 0}$, hence it preserves the six tangent directions of $D$ at $0$; this means that $p_0, \ldots, p_{5}$ are fixed points for the restriction of $(-1)_{J(C)}$ to $\Theta$. But a non-trivial involution with six fixed points on a smooth curve of genus $2$ must be the hyperelliptic involution, so $p_0, \ldots, p_{5}$ are the Weierstrass points of $\Theta$. By \cite[Chapter 3.2, pp. 28--39]{Mu84}, these six points generate the (order $16$) subgroup $J(C)[2]$ of points of order $2$ in $J(C)$, thus $T=J(C)[2]$.
Summing up, our symmetric $(16_6)$-configuration coincides with the so-called \emph{Kummer configuration}, see
\cite[Chapter 10]{BL04};
moreover, $A$ is isomorphic to $J(C)$ and the map $\eta$ coincides with the multiplication map $2_A \colon A \to A$.
\end{proof}
\begin{theorem} \label{thm:class-type I}
Surfaces of type $I$ are precisely the product-quotient surfaces described in Section $\ref{2sec:type I}$, in particular they form a family of dimension $3$. More precisely, denoting by $\mathcal{M}_I$ their Gieseker moduli space and by $\mathcal{M}_2$ the moduli space of curves of genus $2$, there exists a surjective, quasi-finite morphism $\mathcal{M}_I \to \mathcal{M}_2$ of degree $15$.
\end{theorem}
\begin{proof}
Given any surface $S$ of type $I$, by Proposition \ref{prop:curve-mult-2} there exists a smooth curve $C$ of genus $2$ such that $S$ is the canonical desingularization of the double cover $\alpha \colon X \to A$, where $A=J(C)$, branched over the singular curve $D_A$ described in the example of Section $\ref{2sec:type I}$ and in Remark \ref{rem:curve-6-ple}. Equivalently, $S$ arises as a double cover $\beta \colon S \to B$, where $B= \mathrm{Sym}^2(C)$, branched over the smooth diagonal divisor $D_B$. There are sixteen distinct covers, corresponding to the sixteen square roots of $D_B$ in $\mathrm{Pic}(B)$. One of them is the double cover $u \colon C \times C \to B$, whereas the others are fifteen surfaces $S$ with $p_g(S)=q(S)=2$ and Albanese variety isomorphic to $J(C)$. We claim that, for a general choice of $C$, such surfaces are pairwise non-isomorphic. In fact, let us consider two of them, say $S_i$ and $S_j$; then, if $S_i \stackrel{\simeq}{\longrightarrow} S_j$ is an isomorphism, by the universal property of the Albanese map there exists an automorphism of abelian varieties $J(C) \stackrel{\simeq}{\longrightarrow} J(C)$ that makes the following diagram commutative:
\begin{equation*}
\begin{CD}
S_i @>{\simeq} >> S_j\\
@V{}VV @VV { } V\\
J(C) @> {\simeq}>> J(C).\\
\end{CD}
\end{equation*}
If $C$ is general, the only non-trivial automorphism of $C$ is the hyperelliptic involution, so the only non-trivial automorphism of $J(C)$ is the multiplication by $(-1)$, which acts trivially on the $2$-torsion divisors of $J(C)$. Consequently, the induced involution on $B$ acts trivially on the sixteen square roots of $D_B$, that is $S_i=S_j$, as claimed.
On the other hand, once fixed a curve $C$ of genus $2$, the product-quotient construction uniquely depends on the choice of the \'etale double cover $C' \to C$, that is on the choice of a non-trivial $2$-torsion element of $J(C)$. There are precisely fifteen such elements, that necessarily correspond to the fifteen surfaces with $p_g(S)=q(S)=2$ and $\textrm{Alb}(S) \simeq J(C)$ found above.
Therefore every surface of type $I$ is a product-quotient example, and the map $\mathcal{M}_I \to \mathcal{M}_2$ defined by $[S] \mapsto [C]$ is a quasi-finite morphism of degree $15$.
\end{proof}
\begin{remark}
The moduli space of genus $2$ curves $C$ with a non-trivial $2$-torsion point in $J(C)$ is rational (see \cite{Do08}). According to the description of $\mathcal{M}_I$ in the proof of Theorem \ref{thm:class-type I}, we see that $\mathcal{M}_I$ is rational.
\end{remark}
Theorem \ref{thm:class-type I} in particular implies that the universal cover of $S$ coincides with the universal cover of $C' \times C'$, so we obtain
\begin{corollary} \label{cor:univ-I}
Let $S$ be a surface of type $I$ and $\widetilde{S} \to S$ its universal cover. Then $\widetilde{S}$ is biholomorphic to the bidisk $\mathbb{H} \times \mathbb{H}$, where $\mathbb{H} = \{z \in \mathbb{C} \; | \; \mathrm{Im} \, z >0 \}$ is the Poincar\'e upper half-plane.
\end{corollary}
\section{Surfaces of type $II$: construction} \label{sec:type II}
\subsection{Line bundles on abelian varieties and the Appell-Humbert theorem} \label{subsec:AbelVar}
In this subsection we briefly collect some results on abelian varieties that will be used in the sequel, referring the reader to \cite[Chapters 1-4]{BL04} for more details.
Let $A=V/\Lambda$ be an abelian variety, where $V$ is a finite-dimensional $\mathbb{C}$-vector space and $\Lambda \subset V$ a lattice. Then the \emph{Appell-Humbert Theorem}, see \cite[Theorem 2.2.3]{BL04}, implies that
\begin{itemize}
\item the N\'eron-Severi group $\mathrm{NS}(A)$ can be identified with the group of hermitian forms $h \colon V \times V \to \mathbb{C}$ whose imaginary part $\mathrm{Im}\, h$ takes integral values on $\Lambda$;
\item the Picard group $\mathrm{Pic}(A)$ can be identified with the group of pairs $(h, \, \chi)$, where $h \in \mathrm{NS}(A)$ and $\chi$ is a \emph{semicharacter}, namely a map
\begin{equation*}
\chi \colon \Lambda \to U(1), \quad \textrm{where } U(1)= \{z \in \mathbb{C} \; | \; |z|=1 \},
\end{equation*}
such that
\begin{equation}\label{eq:semichar formula}
\chi(\lambda+\mu)=\chi(\lambda)\chi(\mu)e^{ \pi i \, \mathrm{Im} \, h(\lambda, \, \mu)} \quad \textrm{for all } \lambda, \, \mu \in \Lambda.
\end{equation}
\item with these identifications, the first Chern class map $c_1 \colon \mathrm{Pic}(A)\to \mathrm{NS}(A)$ is nothing but the projection to the first component, i.e. $(h, \, \chi) \mapsto h$.
\end{itemize}
We will write $\mathcal{L}= \mathcal{L}(h, \, \chi)$, so that we have $\mathcal{L}(h, \, \chi)\otimes \mathcal{L}(h', \, \chi')= \mathcal{L}(h+h', \, \chi\chi')$. The line bundle $\mathcal{L}(h, \, \chi)$ is symmetric if and only if the semicharacter $\chi$ has values in $\{\pm1\}$, see \cite[Corollary 2.3.7]{BL04}.
Furthermore, for any $\bar{v}\in A$ with
representative $v\in V$, we have
\begin{equation} \label{eq:translation-A}
t_{\bar{v}}^{*}\mathcal{L}(h,\, \chi)=\mathcal{L}(h, \, \chi \, e^{2\pi i\,\mathrm{Im}\,h(v, \, \cdot)}),
\end{equation}
see \cite[Lemma 2.3.2]{BL04}.
\begin{remark} \label{rem:twice}
Assume that the class of $\mathcal{L}=\mathcal{L}(h, \, \chi)$ is $2$-divisible in
$\mathrm{NS}(A)$, that is $h=2 h'$. Then $\mathrm{Im}\, h( \Lambda,\, \Lambda) \subseteq 2 \mathbb{Z}$
and moreover formula (\ref{eq:semichar formula}) implies that $\chi \colon \Lambda \to U(1)$ is a character, namely $\chi(\lambda + \mu) = \chi(\lambda) \chi(\mu)$. In particular, $\mathcal{L}$ belongs to $\mathrm{Pic}^0(A)$ if and only if there exists a character $\chi$ such that $\mathcal{L} = \mathcal{L}(0, \, \chi)$.
\end{remark}
\begin{proposition}[\cite{BL04}, Lemma 2.3.4] \label{Prop:BirLangeEssential}
Let $A_1 = V_1/ \Lambda_1$ and $A_2 = V_2/ \Lambda_2$ be two abelian varieties, and let $f \colon A_2 \to A_1$ be a homomorphism with
analytic representation $F \colon V_2 \to V_1$ and rational representation $F_{\Lambda} \colon \Lambda_2 \to \Lambda_1$. Then for any $\mathcal{L}(h, \, \chi) \in \mathrm{Pic}(A_1)$ we have
\begin{equation} \label{eq:L-pullback}
f^{*}\mathcal{L}(h, \, \chi)=\mathcal{L}(F^{*}h, \, F_{\Lambda}^{*}\chi).
\end{equation}
\end{proposition}
Given a point $x \in A$ and a divisor $D \subset A$, let us denote by $m(D, \, x)$ the multiplicity
of $D$ at $x$.
\begin{lemma}[\cite{BL04}, Proposition 4.7.2] \label{lem:SymDiv}
Let $\mathcal{L}=\mathcal{L}(h, \, \chi)$ be a symmetric line bundle on $A$ and $D$ a symmetric effective
divisor such that $\mathcal{L} = \mathcal{O}_A(D)$. For every $2$-torsion point $x \in A[2]$ with representative $\frac{1}{2} \lambda$, where $\lambda \in \Lambda$, we have
\begin{equation*}
\chi(\lambda)=(-1)^{m(D, \, 0)+m(D, \, x)}.
\end{equation*}
\end{lemma}
\subsection{The equianharmonic product} \label{subsec:Hirzebruch}
Let $\zeta:=e^{2 \pi i/6}= \frac{1}{2} + \frac{\sqrt{3}}{2} i$, so that $\zeta^2-\zeta+1=0$, and consider the \emph{equianharmonic elliptic curve}
\begin{equation} \label{eq:curve-E'}
E':=\mathbb{C}/ \Gamma_{\zeta}, \quad \Gamma_{\zeta}:= \mathbb{Z} \zeta \oplus \mathbb{Z}.
\end{equation}
Setting $V:= \mathbb{C}^2$, we can define
\begin{equation*}
A' := E' \times E' = V/\Lambda_{A'}, \quad \Lambda_{A'}: = \Gamma_{\zeta} \times \Gamma_{\zeta}.
\end{equation*}
Then $A'$ is a principally polarized abelian surface, that we will call the \emph{equianharmonic product}.
Denoting by $(z_1, \, z_2)$ the coordinates of $V$
and by $e_1=(1, \, 0)$, $e_2 = (0, \, 1)$ its standard basis, the four vectors
\begin{equation} \label{eq:basis-l-m}
\lambda_1 := \zeta e_1,\ \ \lambda_2 := \zeta e_2,\ \ e_1,\ \ e_2
\end{equation}
form a basis for the lattice $\Lambda_{A'}$.
We now consider the four $1$-dimensional complex subspaces of $V$ defined as
\begin{equation} \label{eq:four-complex-lines}
\begin{aligned}
V_1 & := \textrm{span}(e_1) = \{z_2=0\}, \quad \quad \quad \quad \, V_2 := \textrm{span}(e_2) = \{z_1=0\}, \\
V_3 & := \textrm{span}(e_1+e_2) = \{z_1-z_2=0\}, \quad V_4 := \textrm{span}(e_1 +\zeta e_2) =\{\zeta z_1 - z_2 =0 \}.
\end{aligned}
\end{equation}
For each $k \in \{1, \, 2, \, 3, \, 4\}$, the subspace $V_k$ contains a rank $2$ sublattice $\Lambda_k \subset \Lambda_{A'}$ isomorphic to $\Gamma_{\zeta}$, where
\begin{equation} \label{eq:four-sublattices}
\begin{aligned}
\Lambda_1 & := \mathbb{Z} \lambda_1 \oplus \mathbb{Z} e_1, \quad \quad \quad \quad \quad \quad
\Lambda_2 := \mathbb{Z} \lambda_2 \oplus \mathbb{Z} e_2, \\
\Lambda_3 & := \mathbb{Z}(\lambda_1 + \lambda_2) \oplus \mathbb{Z}(e_1+ e_2),
\quad \Lambda_4 :=\mathbb{Z}(\lambda_1 + \lambda_2 - e_2) \oplus \mathbb{Z}(\lambda_2 + e_1).
\end{aligned}
\end{equation}
Consequently, in $A'$ there are four elliptic curves isomorphic to $E'$, namely
\begin{equation}
E'_k := V_k/ \Lambda_k, \quad k \in \{1, \, 2, \, 3, \, 4 \}.
\end{equation}
\begin{proposition}[\cite{Hir84}, Section 1] \label{prop:Hir-quadruple-point}
The four curves $E'_k$ only intersect $($pairwise transversally$)$ at the origin $o' \in A'$. Consequently, the reducible divisor
\begin{equation*}
D_{A'} := E'_1+E'_2+E'_3+E'_4
\end{equation*}
has an ordinary quadruple point at $o'$ and no other singularities.
\end{proposition}
By the Appell-Humbert Theorem, the N\'eron-Severi group $\textrm{NS}(A')$ of $A'$
can be identified with the group of hermitian forms $h$ on $V$ whose imaginary part takes integral values on $\Lambda_{A'}$. We will use the symbol $H$ for the $2 \times 2$ hermitian matrix associated to $h$ with respect to the standard basis of $V$ so that, thinking of $v, \, w \in V$ as column vectors, we can write
$h(v, \, w) = {}^tv H \bar{w}.$
We want now to identify those hermitian matrices $H_1, \ldots, H_4$
that correspond to the classes of the curves $E_1', \ldots, E_4'$, respectively.
\begin{proposition} \label{pro:class_DA'}
We have
\begin{equation*}
\begin{split}H_{1} & =\frac{2}{\sqrt{3}}\left(\begin{array}{cc}
0 & 0\\
0 & 1
\end{array}\right),\quad\quad\;\,H_{2}=\frac{2}{\sqrt{3}}\left(\begin{array}{cc}
1 & 0\\
0 & 0
\end{array}\right),\\
H_{3} & =\frac{2}{\sqrt{3}}\left(\begin{array}{cc}
\;\;1 & -1\\
-1 & \;\;1
\end{array}\right),\quad H_{4}=\frac{2}{\sqrt{3}}\left(\begin{array}{cc}
\;1 & -\zeta\\
-\bar{\zeta} & \;1
\end{array}\right),
\end{split}
\end{equation*}
so that the hermitian matrix representing in $\mathrm{NS}(A')$ the class of the divisor $D_{A'}$ is
\begin{equation*}
H:=H_1+H_2+H_3+H_4 = \frac{2}{\sqrt{3}} \left(\begin{array}{cc}
3 & -1-\zeta\\
-1- \bar{\zeta} & 3
\end{array}\right).
\end{equation*}
Moreover, setting $\lambda = (a_1+\zeta a_2, \, a_3+\zeta a_4) \in \Lambda_{A'}$, the semicharacter $\chi_{D_{A'}}$ corresponding to the line bundle $\mathcal{O}_{A'}(D_{A'})$ can be written as
\begin{equation*}
\chi_{D_{A'}}(\lambda)=(-1)^{a_{1}+a_{2}+a_{3}+a_{4}+a_1 (a_2+ a_3+a_4)+(a_2 +a_3)a_4}.
\end{equation*}
\end{proposition}
\begin{proof}
The hermitian form $\tilde{h}$ on $\mathbb{C}$ given by $\tilde{h}(z_1, \, z_2)=\frac{2}{\sqrt{3}}z_1\bar{z_2}$ is positive
definite and its imaginary part is integer-valued on $\Gamma_{\zeta}$, so it defines a positive class in $\mathrm{NS}(E')$. Moreover, in the ordered basis $\{\zeta, \, 1\}$ of $\Gamma_{\zeta}$, the alternating form $\mathrm{Im}\, \tilde{h}$ is represented by the skew-symmetric matrix $\begin{psmallmatrix}\;\;0 & 1 \\ -1 & 0\end{psmallmatrix}$,
whose Pfaffian equals $1$, so $\tilde{h}$ corresponds to the ample generator
of the N\'eron-Severi group of $E'$, see \cite[Corollary 3.2.8]{BL04}. In other words, $\tilde{h}$ is the Chern class of $\mathcal{O}_{E'}(0)$,
where $0$ is the origin of $E'$. Write $\mathcal{O}_{E'}(0)=\mathcal{L}(\tilde{h}, \, \nu)$
for a suitable semicharacter $\nu \colon \Gamma_{\zeta} \to \mathbb{C}$;
since $\mathcal{O}_{E'}(0)$
is a symmetric line bundle, the values of $\nu$ at
the generators of $\Gamma_{\zeta}$ can be computed by using Lemma \ref{lem:SymDiv}, obtaining
$\nu(1)=-1, \quad \nu(\zeta)=-1.$
Consequently, for all $a, \, b \in \mathbb{Z}$ we get
\begin{equation} \label{eq:char-psi}
\begin{split}
\nu(a+b\zeta)&=\nu(a) \nu(b \zeta) e^{\pi i \, \mathrm{Im} \, \tilde{h}(a, \, b \zeta)}=(-1)^a (-1)^b (-1)^{ab} = (-1)^{a+b+ab}.
\end{split}
\end{equation}
For any $k \in \{1,\dots, 4\}$ let us define a group homomorphism $F_{k} \colon A'\to E'$ as follows:
\begin{equation*}
F_{1}(z_{1}, \, z_{2})=z_{2}, \quad F_{2}(z_{1}, \, z_{2})=z_{1},\quad F_{3}(z_{1}, \, z_{2})=z_{1}-z_{2},\quad F_{4}(z_{1}, \, z_{2})=\zeta z_{1}-z_{2}.
\end{equation*}
By \eqref{eq:four-complex-lines} we have $E_{k}'=F_k^{*}(0)$ and so, setting $\mathcal{O}_{A'}(E_k')=\mathcal{L}(h_k, \, \chi_k')$, by \eqref{eq:L-pullback} we deduce
\begin{equation} \label{eq:Fk*}
h_k=F_k^* \tilde{h}, \quad \chi_k'=F_k^* \nu.
\end{equation}
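For instance, for $k=4$ we have
\begin{equation*}
(F_4^* \tilde{h})(v, \, w) = \tilde{h}(\zeta v_1 - v_2, \, \zeta w_1 - w_2) = \tfrac{2}{\sqrt{3}} \left( v_1 \bar{w}_1 - \zeta v_1 \bar{w}_2 - \bar{\zeta} v_2 \bar{w}_1 + v_2 \bar{w}_2 \right),
\end{equation*}
whose associated hermitian matrix is precisely $H_4$.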
This gives immediately the four matrices $H_{1},\dots,H_{4}$. Moreover, by using \eqref{eq:char-psi} and \eqref{eq:Fk*}, we can write down the semicharacters $\chi'_{1},\dots,\chi'_{4}$; in fact, for any $\lambda = (a_1+\zeta a_2, \, a_3+\zeta a_4) \in \Lambda_{A'}$, we obtain
\begin{equation*}
\begin{split}
\chi'_{1}(\lambda)&=(-1)^{a_{3}+a_{4}+a_{3}a_{4}}\\
\chi'_{2}(\lambda)&=(-1)^{a_{1}+a_{2}+a_{1}a_{2}}\\
\chi'_{3}(\lambda)&=(-1)^{a_{1}+a_{2}+a_{3}+a_{4}+(a_{1}+a_{3})(a_{2}+a_{4})}\\
\chi'_{4}(\lambda)&=(-1)^{a_{1}+a_{2}+a_{3}+a_{4}+(a_{1}+a_{4})(a_{2}+a_{3})+a_{2}a_{3}}.
\end{split}
\end{equation*}
The semicharacter $\chi_{D_{A'}}$ can be now computed by using the formula
$\chi_{D_{A'}}=\chi'_{1}\chi'_{2} \chi'_{3}\chi'_{4}.$
\end{proof}
\begin{remark} \label{rem:princ-pol}
The hermitian matrix
\begin{equation*}
H_1+H_2= \frac{2}{\sqrt{3}} \left(\begin{array}{cc}
1 & 0\\
0 & 1
\end{array}\right)
\end{equation*}
represents in $\mathrm{NS}(A')$ the class of the principal polarization of product type
\begin{equation*}
\Theta := E' \times \{0\} + \{0\} \times E'.
\end{equation*}
\end{remark}
\begin{remark} \label{prop:neron-Severi-via-E}
The free abelian group $\mathrm{NS}(A')$ is generated by the classes of the elliptic curves $E_1'$, $E_2'$, $E_3'$, $E_4'$. In fact, since $A' = E' \times E'$ and $E'$ has complex multiplication, it is well-known that $\textrm{NS}(A')$ has rank $4$, see \cite[Exercise 5.6 (10) p.142]{BL04}, hence we only need to show that the classes of the curves $E_k'$ generate a primitive sublattice of maximal rank in the N\'eron-Severi group. By Proposition \ref{prop:Hir-quadruple-point}, the corresponding Gram matrix has determinant
\begin{equation*}
\det \, (E_i' \cdot E_j') = \det(1-\delta_{ij})= -3
\end{equation*}
so the claim follows because $-3$ is a non-zero, square-free integer.
\end{remark}
\subsection{Double covers of the equianharmonic product} \label{subsec:example-II}
In order to construct a surface of type $II$, we must find an abelian surface $A$ and a divisor $D_A$ on it such that
\begin{itemize}
\item $D_A$ is $2$-divisible in $\textrm{Pic}(A)$;
\item $D_A^2=24$ and $D_A$ has precisely two ordinary quadruple points as singularities.
\end{itemize}
We will construct the pair $(A, \, D_A)$ as an \'etale double cover of the pair $(A', \, D_{A'})$, where $A'=V/\Lambda_{A'}$ is the equianharmonic product and $D_{A'}=E_1'+E_2'+E_3'+E_4'$ is the sum of four elliptic curves considered in Proposition \ref{prop:Hir-quadruple-point}.
By the Appell-Humbert theorem, the sixteen $2$-torsion divisors on $A'$, i.e. the elements of order $2$ in $\textrm{Pic}^0(A')$, correspond to the sixteen characters
\begin{equation} \label{eq:char-chi}
\chi \colon \Lambda_{A'} \to \{ \pm 1 \}.
\end{equation}
Any such character is specified
by its values at the elements of the ordered basis $\{\lambda_1, \, \lambda_2, \, e_1, \, e_2 \}$ of $\Lambda_{A'}$ given in \eqref{eq:basis-l-m}, so it can be denoted by
\begin{equation*}
\chi = (\chi(\lambda_1), \, \chi(\lambda_2), \, \chi(e_1), \, \chi(e_2)).
\end{equation*}
For instance, $\chi_0:=(1, \, 1,\, 1,\, 1)$ is the trivial character, corresponding to the trivial divisor $\mathcal{O}_{A'}$.
We will write
\begin{equation} \label{eq:characters}
\begin{array}{lll}
\chi_1\ :=(-1, \, -1, \, 1, -1), & \chi_2\ := (1, \, -1, \, -1, \, 1), & \chi_3\ :=(-1, \, 1, \, -1, \, -1),\\
\chi_4\ :=(1, \, 1, \, -1, \, 1), & \chi_5\ :=(-1, \, 1, \, 1, \,1), & \chi_6\ :=(-1, \, 1,\, -1, \, 1),\\
\chi_7\ :=(1, \, 1, \, 1, \, -1), & \chi_8\ :=(1, \, -1, \, 1, \, -1), & \chi_9\ :=(-1, \, 1, \, 1, \,-1),\\
\chi_{10} :=(1, \, 1,\, -1, \, -1), & \chi_{11}:=(-1, \, -1, \, -1, \, 1), & \chi_{12} :=(1, \, -1, \, 1, \, 1),\\
\chi_{13}:=(1, \, -1, \, -1, \,-1), & \chi_{14}:=(-1, \, -1,\, 1, \, 1), & \chi_{15} :=(-1, \, -1, \, -1, \, -1)
\end{array}
\end{equation}
for the fifteen non-trivial characters. To any non-trivial $2$-torsion divisor on $A'$, and so to any non-trivial character $\chi$ as in \eqref{eq:char-chi}, it corresponds an isogeny of degree two
$f_{\chi} \colon A_{\chi} \to A';$
in fact, $\ker \chi \subset \Lambda_{A'}$ is a sublattice of index $2$ and $A_{\chi}$ is the abelian surface
\begin{equation} \label{eq:A-chi}
A_{\chi} = V/\ker \chi.
\end{equation}
Let us set
\begin{equation*}
E_i := f_{\chi}^*(E_i'), \quad
D_{A_{\chi}} := f_{\chi}^*(D_{A'})=E_1+E_2+E_3+E_4
\end{equation*}
and write $\varSigma$ for the subgroup of $\textrm{Pic}^0(A')$ generated by $\chi_1$ and $\chi_2$, namely
\begin{equation} \label{Characters}
\varSigma:= \{\chi_0, \, \chi_1, \, \chi_2, \, \chi_3 \}.
\end{equation}
We are now ready to prove the key result of this subsection.
\begin{proposition} \label{prop:2-divisibility}
The following are equivalent$:$
\begin{itemize}
\item[$\boldsymbol{(a)}$] the divisor $D_{A_{\chi}}$ is $2$-divisible in $\mathrm{Pic}(A_{\chi});$
\item[$\boldsymbol{(a')}$] the divisor $D_{A_{\chi}}$ is $2$-divisible in $\mathrm{NS}(A_{\chi});$
\item[$\boldsymbol{(b)}$] every $E_i$ is an irreducible elliptic curve in $A_{\chi};$
\item[$\boldsymbol{(c)}$] the character $\chi$ is a non-trivial element of $\varSigma$.
\end{itemize}
\end{proposition}
\begin{proof}
We first observe that $\mathrm{NS}(A_{\chi})=\mathrm{Pic}(A_{\chi})/\mathrm{Pic}^0(A_{\chi})$ and $\mathrm{Pic}^0(A_{\chi})$ is a divisible group, so $\boldsymbol{(a)}$ is equivalent to $\boldsymbol{(a')}$.
Next, the curve $E_i \subset A_{\chi}$ is irreducible if and only if the $2$-torsion divisor corresponding to the character $\chi \colon \Lambda_{A'} \to \{\pm1\}$ restricts non-trivially to $E_i'$. This in turn means that $\chi$ restricts non-trivially to the sublattice $\Lambda_i$, and so $\boldsymbol{(b)}$ occurs if and only if
$\chi$ restricts non-trivially to all $\Lambda_1$, $\Lambda_2$,
$\Lambda_3$, $\Lambda_4$. By using the generators given in \eqref{eq:four-sublattices}, a long but elementary computation (or a quick computer calculation) shows that this happens if and only if $\boldsymbol{(c)}$ holds.
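Such a computer check is elementary: each character is determined by its values on the ordered basis $\{\lambda_1, \, \lambda_2, \, e_1, \, e_2 \}$, and it restricts non-trivially to $\Lambda_k$ exactly when it takes the value $-1$ on at least one of the two generators of $\Lambda_k$ listed in \eqref{eq:four-sublattices}. The following short script, given here only as an illustration of the verification and not as part of the argument, enumerates the fifteen non-trivial characters and confirms that precisely $\chi_1$, $\chi_2$, $\chi_3$ survive:
\begin{verbatim}
from itertools import product

# generators of Lambda_1, ..., Lambda_4 in coordinates with respect to
# the ordered basis (lambda_1, lambda_2, e_1, e_2) of the lattice
sublattices = [
    [(1, 0, 0, 0), (0, 0, 1, 0)],   # Lambda_1
    [(0, 1, 0, 0), (0, 0, 0, 1)],   # Lambda_2
    [(1, 1, 0, 0), (0, 0, 1, 1)],   # Lambda_3
    [(1, 1, 0, -1), (0, 1, 1, 0)],  # Lambda_4
]

def value(chi, vec):
    # chi is a homomorphism to {+1, -1}, so only parities matter
    v = 1
    for c, a in zip(chi, vec):
        v *= c ** (a % 2)
    return v

good = [chi for chi in product([1, -1], repeat=4)
        if chi != (1, 1, 1, 1)
        and all(any(value(chi, g) == -1 for g in gens)
                for gens in sublattices)]
print(good)  # the three non-trivial characters of Sigma
\end{verbatim}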
It remains to prove that $\boldsymbol{(a')}$ and $\boldsymbol{(c)}$ are equivalent. The isogeny $f_{\chi} \colon A_{\chi} \to A$ lifts to the identity $1_{V} \colon V \to V$ so, if $h \colon V \times V \to \mathbb{C}$ is the hermitian form that represents the class of $D_{A'}$ in $\mathrm{NS}(A')$, then the same form also represents the class of $D_{A_{\chi}}$ in $\mathrm{NS}(A_{\chi})$.
By the Appell-Humbert theorem the group ${\rm NS}(A_{\chi})$ can be identified with the group of hermitian forms on $V$ whose imaginary part takes integral values on the lattice $\ker \chi$, so \eqref{eq:A-chi} implies that condition $\boldsymbol{(a')}$ is equivalent to
\begin{equation} \label{eq:(a')}
\mathrm{Im}\; h(\ker \chi, \, \ker \chi) \subseteq 2 \mathbb{Z}.
\end{equation}
The non-zero values assumed by the alternating form $\textrm{Im} \, h$ on the generators $\lambda_1, \, \lambda_2, \, e_1, \, e_2$ of $\Lambda_{A'}$ can be computed by using the hermitian matrix $H$ given in Proposition \ref{pro:class_DA'}, obtaining Table \ref{table:imh} below:
\begin{table}[H]
\begin{center}
\begin{tabular}{c|c|c|c|c|c|c}
$(\cdot, \, \cdot)$ & $(\lambda_1, \, \lambda_2)$ & $(\lambda_1, \, e_1)$ & $(\lambda_1, \, e_2)$ & $(\lambda_2, \, e_1)$ & $(\lambda_2, \, e_2) $ & $(e_1, \, e_2)$ \\
\hline
$\mathrm{Im} \; h(\cdot, \, \cdot)$ & $-1$ & $3$ & $-2$ & $-1$ & $3$ & $-1$ \\
\end{tabular} \caption{Non-zero values of $\textrm{Im}\; h$ at the generators of $\Lambda_{A'}$} \label{table:imh}
\end{center}
\end{table}
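For instance, using the relation $\zeta^2 = \zeta - 1$ we get
\begin{equation*}
h(\lambda_1, \, e_2) = \tfrac{2}{\sqrt{3}} (-1-\zeta) \zeta = \tfrac{2}{\sqrt{3}} (1 - 2 \zeta) = -2i,
\end{equation*}
so that $\mathrm{Im} \, h(\lambda_1, \, e_2) = -2$, in accordance with Table \ref{table:imh}; the remaining entries are obtained in the same way.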
Now we show that (\ref{eq:(a')}) holds if and only if $\chi$ is a non-trivial element of $\varSigma.$ In fact we have seen that, if $\chi \notin \{\chi_1, \, \chi_2, \, \chi_3\}$, then
one of the effective divisors $E_i=f_{\chi}^*(E_i')$ is a disjoint union of two elliptic curves, say $E_i=E_{i1}+E_{i2}.$ But then, using the projection formula, we find
\begin{equation*}
D_{A_{\chi}} \cdot E_{i1} = f_{\chi}^*(D_{A'}) \cdot E_{i1} = D_{A'} \cdot f_{\chi \,*}(E_{i1}) = D_{A'} \cdot E_i' =3
\end{equation*}
which is not an even integer, so $D_{A_{\chi}}$ is not $2$-divisible in this situation.
Let us consider now the case $\chi \in \{\chi_1, \, \chi_2, \, \chi_3 \}$. We can easily see that the integral bases of $\ker \chi_1$, $\ker \chi_2$, $\ker \chi_3$ are given by
\begin{equation} \label{eq:bases-ker-chi}
\begin{split}
\mathscr{B}_1&:=\{e_1, \, \lambda_1+e_2, \, \lambda_2+e_2, \, 2e_2\}, \\
\mathscr{B}_2&:=\{\lambda_2+e_1, \, \lambda_1, \, e_2, \, 2e_1 \},\\
\mathscr{B}_3&:=\{\lambda_1+e_2, \, \lambda_2, \, 2e_2, \, e_1+e_2 \},\\
\end{split}
\end{equation}
respectively. Then, by using Table \ref{table:imh}, it is straightforward to check that
$\mathrm{Im}\, h(b_1, \, b_2) \in 2 \mathbb{Z}$ for all $b_1, \, b_2\in \mathscr{B}_1$;
for instance, we have
\begin{equation*} \label{eq1}
\begin{split}
\mathrm{Im}\; h(\lambda_1+e_2, \, \lambda_2+e_2) & = \mathrm{Im}\; h(\lambda_1,\lambda_2) + \mathrm{Im}\; h(\lambda_1,e_2) + \mathrm{Im}\; h(e_2,\lambda_2) + \mathrm{Im}\; h(e_2,e_2) \\
& = -1 -2 -3 +0 = -6 \in 2 \mathbb{Z}.
\end{split}
\end{equation*}
This shows that the inclusion (\ref{eq:(a')}) holds for $\chi_1.$ The proof that it also holds for $\chi_2$ and $\chi_3$ is analogous.
\end{proof}
\begin{remark} \label{rem:duality-for-2-torsion}
Working out the details in the proof of Proposition \ref{prop:2-divisibility}, one sees that every non-trivial character $\chi$ in \eqref{eq:characters} restricts trivially to \emph{at most one} curve $E_i'$. Identifying $A'$ with $\textrm{Pic}^0(A')$ via the principal polarization $\Theta$ described in Remark \ref{rem:princ-pol}, this corresponds to the fact that every non-zero $2$-torsion point of $A'$ is contained in at most one of the $E_i'$. More precisely, every $E_i'$ contains exactly three non-zero $2$-torsion points of $A'$, so there remain $16-(4\times 3 + 1)=3$ of them that are not contained in any of the $E_i'$. Via the identification above, they clearly correspond to the three $2$-torsion divisors restricting non-trivially to all the $E_i'$, namely to the three non-trivial characters in the group $\varSigma$.
\end{remark}
Summing up, we have the following existence result for surfaces of type $II$.
\begin{proposition} \label{prop:type-II}
Let $\chi$ be any non-trivial element in the group $\varSigma$, and write $f \colon A \to A'$ instead of $f_{\chi} \colon A_{\chi} \to A'$. Then there exists a double cover
$\alpha_X \colon X \to A $
branched precisely over the $2$-divisible effective divisor $D_A \subset A$. The minimal resolution $S$ of $X$ is a smooth surface with $p_g=q=2$, $K^2=8$ and Albanese map of degree $2$, belonging to type $II$.
\end{proposition}
\begin{proof}
It only remains to compute the invariants of $S$. From the double cover formulas (see \cite[Chapter V.22]{BHPV03}) we see that, if we impose an ordinary quadruple point on the branch locus, then $\chi$ decreases by $1$ and $K^2$ decreases by $2$, hence we get
\begin{equation*}
\chi(\mathcal O_S)=\frac{1}{8}D_A^2-2=1 {\rm\ \ \ and\ \ \ } K_S^2=\frac{1}{2}D_A^2-4=8.
\end{equation*}
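In fact, for a smooth branch divisor the double cover formulas would give $\chi(\mathcal{O}_S) = \tfrac{1}{8} D_A^2 = 3$ and $K_S^2 = \tfrac{1}{2} D_A^2 = 12$, since $\chi(\mathcal{O}_A)=0$ and $K_A$ is trivial, and each of the two ordinary quadruple points lowers $\chi$ by $1$ and $K^2$ by $2$.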
Since $q(S)\geq q(A)=2,$ we have $p_g(S)=q(S)\geq 2.$
Assume that $p_g(S)=q(S)\geq 3.$ By \cite{HacPar}, \cite{Piro} and \cite[Beauville appendix]{Deb81}, we have two possibilities:
\begin{itemize}
\item $p_g(S)=q(S)=4$ and $S$ is the product of two curves of genus $2$;
\item $p_g(S)=q(S)=3$ and $S=(C_2 \times C_3)/\mathbb{Z}_2$, where $C_2$ is a smooth curve of genus $2$ with an elliptic involution $\tau_2$, $C_3$ is a smooth curve of genus $3$ with a free involution $\tau_3$, and the cyclic group $\mathbb{Z}_2$ acts freely on the product $C_2 \times C_3$ via the involution $\tau_2 \times \tau_3$.
\end{itemize}
In both cases above, $S$ contains no elliptic curves. On the other hand, all our surfaces of type $II$ contain four elliptic curves, coming from the strict transform of $D_A$. Therefore the only possibility is $p_g(S)=q(S)=2$.
\end{proof}
\section{Surfaces of type $II$: classification}
\label{sec:classification-II}
\subsection{Holomorphic and anti-holomorphic diffeomorphisms of cyclic covers}
In this section we discuss lifts to cyclic covers of holomorphic and anti-holomorphic automorphisms. We use methods and results of Pardini, see \cite{Pa91}.
Let $n \geq 2$ be an integer and let $D$ be an effective divisor
on a smooth projective variety $Y$,
such that
\begin{equation*}
\mathcal{O}_{Y}(D) \simeq \mathcal{L}_{1}^{\otimes n} \simeq \mathcal{L}_{2}^{\otimes n}
\end{equation*}
for some line bundles $\mathcal{L}_{1},\,\mathcal{L}_{2} \in \mathrm{Pic}(Y)$.
Canonically associated with such data, there exist two simple $n$-cyclic covers
\begin{equation*}
\pi_{1} \colon X_{1}\to Y \quad \text{and} \quad \pi_{2} \colon X_{2}\to Y,
\end{equation*}
both branched over $D$. We want to provide conditions ensuring that the two compact complex manifolds
underlying $X_{1}$ and $X_{2}$ are biholomorphic or anti-biholomorphic.
Following \cite[Section 3]{KK}, let us denote
by ${\rm Kl}(Y)$ the group of holomorphic and anti-holomorphic diffeomorphisms of $Y$.
There is a short exact sequence
\begin{equation*}
1 \longrightarrow \mathrm{Aut}(Y) \longrightarrow {\rm Kl}(Y) \longrightarrow H \to 1,
\end{equation*}
where $H=\mathbb{Z}/2 \mathbb{Z}$ or $H=0$.
To any anti-holomorphic element $\sigma \in{\rm Kl}(Y)$ we can associate a $\mathbb{C}$-antilinear map
\begin{equation*}
\sigma^* \colon \mathbb{C}(Y) \longrightarrow \mathbb{C}(Y)
\end{equation*}
on the function field $\mathbb{C}(Y)$ by defining
\begin{equation*}
(\sigma^*f)(x):= \overline{f(\sigma(x))}
\end{equation*}
for all $f \in \mathbb{C}(Y)$. That action extends the usual action of ${\rm Aut}(Y)$ on $\mathbb{C}(Y)$ in a natural way (note that in \cite{KK} the notation $\sigma^*$ is used only for holomorphic maps, whereas for anti-holomorphic maps the corresponding notation is $\sigma^{!}$).
We have $
\sigma^{-1}( \mathrm{div \,}(f)) = \mathrm{div \,} (\sigma^*f),$ hence the action $\sigma^* \colon \mathrm{Div}(Y) \to \mathrm{Div}(Y)$ induces an action $\sigma^* \colon \mathrm{Pic}(Y) \to \mathrm{Pic}(Y)$, such that $\sigma^* K_Y = K_Y$, in the usual way. Namely, if the line bundle $\mathcal{L} \in \mathrm{Pic}(Y)$ is defined by the transition functions $\{g_{ij} \}$ with respect to the open cover $\{U_i\}$, then $\sigma^* \mathcal{L}$ is determined by the transition functions $\{\sigma^* g_{ij}\}$ with respect to the cover $\{\sigma^{-1}(U_i) \}$; furthermore, given an open set $U \subseteq Y$ and a holomorphic $r$-form $\omega = \sum g_{i_1 \ldots i_r} dx_{i_1} \wedge \ldots \wedge dx_{i_r} \in \Gamma(U, \, K_Y)$, its pullback via $\sigma$ is the holomorphic $r$-form
$\sigma^* \omega = \sum \sigma^*g_{i_1\ldots i_r} \, d(\sigma^*x_{i_1}) \wedge \ldots \wedge d(\sigma^*x_{i_r}) \in \Gamma(\sigma^{-1}(U), \, K_Y)$.
Moreover, the intersection numbers are also preserved by the action of any $\sigma \in {\rm Kl}(Y)$.
\begin{example} \label{ex:anti-holo-A}
Let $A_1 = V_1/ \Lambda_1$ and $A_2 = V_2/ \Lambda_2$ be two abelian varieties, and let $\sigma \colon A_2 \to A_1$ be an anti-holomorphic homomorphism with analytic representation $\mathfrak{S} \colon V_2 \to V_1$ and rational representation $\mathfrak{S} _{\Lambda} \colon \Lambda_2 \to \Lambda_1$ (note that $\mathfrak{S}$ is a $\mathbb{C}$-antilinear map). Then, for any $\mathcal{L}(h, \, \chi) \in \mathrm{Pic}(A_1)$, we have the following analogue of \eqref{eq:L-pullback}:
\begin{equation} \label{eq:L-anti-holo-pullback}
\sigma^{*}\mathcal{L}(h, \, \chi) = \mathcal{L}(\overline{\mathfrak{S}^{*}h},\, \overline{\mathfrak{S}_{\Lambda}^{*}\chi}),
\end{equation}
In fact, looking at the transition function of the anti-holomorphic line bundle $\mathcal{L}(\mathfrak{S}^{*}h, \, \mathfrak{S}_{\Lambda}^{*}\chi)$ we see that, in order to obtain a holomorphic one, we must take the conjugated hermitian form $\overline{\mathfrak{S}^{*}h}$
and the conjugated semicharacter $\overline{\mathfrak{S}_{\Lambda}^{*}\chi}$.
\end{example}
Let us now denote by ${\rm Kl}(Y, \, D)$ and ${\rm Aut}(Y, \, D)$ the subgroups of
${\rm Kl}(Y)$ and ${\rm Aut}(Y)$ given by diffeomorphisms such that $\sigma^*D=D$. Again, ${\rm Aut}(Y, \, D)$
is a normal subgroup of ${\rm Kl}(Y, \, D)$, of index $1$ or $2$.
\begin{proposition} \label{prop:antiHolo and Holo} ${}$
\begin{itemize}
\item[$\boldsymbol{(i)}$] Let $\sigma\in{\rm Kl}(Y, \, D)$ be such that
$\sigma^{*}\mathcal{L}_{2} \simeq \mathcal{L}_{1}$. Then there exists a diffeomorphism
$\tilde{\sigma} \colon X_{1}\to X_{2}$
such that $\sigma \circ \pi_{1}=\pi_{2}\circ\tilde{\sigma}$. Moreover, $\tilde{\sigma}$ is holomorphic $($respectively, anti-holomorphic$)$ if and only if $\sigma$ is so.
\item[$\boldsymbol{(ii)}$]
Let $\sigma \in {\rm Kl}(Y, \, D)$. If $\sigma^* \mathcal{L}_{i}\simeq \mathcal{L}_{i}$, then $\sigma$ lifts to $X_i$ and there are $n$ different such lifts.
\item[$\boldsymbol{(iii)}$] Let $\tilde{\sigma}\in{\rm Kl}(X_i)$ be such that $\tilde{\sigma}$ maps fibres of $\pi_i$ to fibres of $\pi_i$. Then $\tilde{\sigma}$ induces an element $\sigma \in {\rm Kl}(Y, \, D)$. Moreover, if either $D>0$ or $n=2$, one has $\sigma^* \mathcal{L}_{i}\simeq \mathcal{L}_{i}$.
\end{itemize}
\end{proposition}
\begin{proof}
Let us prove $\boldsymbol{(i)}$. Let $\mathbb{L}_1$, $\mathbb{L}_2$ be the total spaces of $\mathcal{L}_1$, $\mathcal{L}_2$ and let $p_1 \colon \mathbb{L}_1 \to Y$, $p_2\colon \mathbb{L}_2 \to Y$ be the corresponding projections. Let $s \in H^0(Y, \, \mathcal{O}_Y(D))$ be a section vanishing exactly along $D$ (if $D=0$, we take for $s$ the constant function $1$). If $t_i \in H^0(\mathbb{L}_i, \, p_i^* \mathcal{L}_i)$ denotes the tautological section, by \cite[I.17]{BHPV03} it follows that global equations for $X_1$ and $X_2$, as analytic subvarieties of $\mathbb{L}_1$ and $\mathbb{L}_2$, are provided by
\begin{equation*}
t_1^n-p_1^*s=0 \quad \mathrm{and} \quad t_2^n-p_2^*s=0,
\end{equation*}
the covering maps $\pi_1$ and $\pi_2$ being induced by the restrictions of $p_1$ and $p_2$, respectively. Since $\sigma \in {\rm Kl}(Y, \, D)$, we have $\sigma^*s = \lambda s$ with $\lambda \in \mathbb{C}^*$. Moreover, $\sigma^{*}\mathcal{L}_{2} \simeq \mathcal{L}_{1}$ implies that there exists a diffeomorphism $\tilde{\sigma} \colon \mathbb{L}_1 \to \mathbb{L}_2$ such that $p_2 \circ \tilde{\sigma}= \sigma \circ p_1$, hence
\begin{equation*}
\tilde{\sigma}^*(p_2^* s)=p_1^* (\sigma^* s)= \lambda \, p_1^* s.
\end{equation*}
Moreover, we have $\tilde{\sigma}^* t_2 = \mu t_1$, with $\mu \in \mathbb{C}^*$. Up to rescaling $t_2$ by a constant factor we can assume $\mu = \sqrt[n]{\lambda}$, so that
\begin{equation*}
\tilde{\sigma}^*(t_2^n-p_2^*s)=\lambda(t_1^n-p_1^*s).
\end{equation*}
This means that $\tilde{\sigma}\colon \mathbb{L}_1 \to \mathbb{L}_2$ restricts to a diffeomorphism $\tilde{\sigma} \colon X_1 \to X_2$, which is compatible with the two covering maps $\pi_1$ and $\pi_2$. By construction, such a diffeomorphism is holomorphic (respectively, anti-holomorphic) if and only if $\sigma$ is so. \\
The part $\boldsymbol{(ii)}$ follows from part $\boldsymbol{(i)}$, setting $\mathcal{L}_1=\mathcal{L}_2$, so that $X_1=X_2$. Any two lifts differ by an automorphism of the cover that induces the identity on Y, thus there are $n$ different lifts of $\sigma$.\\% is a consequence of the fact that there are $n$ different choices for $\sqrt[n]{\lambda}$.
Let us prove part $\boldsymbol{(iii)}$. Since $\tilde{\sigma}$ preserves the fibres of $\pi_i$, it induces an element $\sigma$ of ${\rm Kl}(Y)$, which must preserve $D$.
Let $g$ be a generator of the Galois group $G$ of $\pi_i$.
There exists an integer $a$ coprime to $n$ such that $\tilde{\sigma}$ satisfies $\tilde{\sigma} (gx)=g^a(\tilde{\sigma} x)$ for all $x \in X_i$.
Thus the induced map $\sigma^*$ on $\pi_{i*} \mathcal{O}_{X_i}$ permutes the summands of the decomposition under the action of $G$.
Since the cyclic cover is simple, if $D > 0$ then, by looking at the Chern classes of the summands, one sees that we must have $\sigma^*\mathcal{L}_i=\mathcal{L}_i$.
In the case of a double cover, the statement is true also in the \'etale case, since there is only one non-trivial summand in the decomposition of $\pi_{i*}\mathcal{O}_{X_i}$.
\end{proof}
In the case of double covers induced by the Albanese map, we have the following converse of Proposition \ref{prop:antiHolo and Holo}, $\boldsymbol{(i)}$.
\begin{proposition} \label{prop:converse-holo}
Set $n=2$, let $Y=A$ be an abelian variety and assume that the double cover $\pi_i \colon X_i \to A$ is the Albanese map of $X_i$, for $i=1,\, 2$. If there is a holomorphic $($respectively, anti-holomorphic$)$ diffeomorphism $\tilde{\sigma} \colon X_1 \to X_2$, then there exists a holomorphic $($respectively, anti-holomorphic$)$ diffeomorphism $\sigma \in {\rm Kl}(A, \, D)$ such that $\sigma^{*}\mathcal{L}_{2}\simeq \mathcal{L}_{1}$.
\end{proposition}
\begin{proof}
We first assume that $\tilde{\sigma}$ is holomorphic.
By the universal property of the Albanese map the morphism $\pi_2 \circ \tilde{\sigma} \colon X_1 \to A$ factors through $\pi_1$, in other words there exists $\sigma \colon A \to A$ such that $\sigma \circ \pi_1 = \pi_2 \circ \tilde{\sigma}$. The map $\sigma$ is an isomorphism because $\tilde{\sigma}$ is an isomorphism; hence it sends the branch locus of $\pi_1$ to the branch locus of $\pi_2$, or equivalently $\sigma^*D=D$. Finally, looking at the direct image of the structure sheaf $\mathcal{O}_{X_1}$ we get
\begin{equation*}
\begin{split}
(\sigma \circ \pi_1)_* \mathcal{O}_{X_1} & = (\pi_2 \circ \tilde{\sigma})_* \mathcal{O}_{X_1}, \quad \textrm{that is} \\
\sigma_* (\mathcal{O}_A \oplus \mathcal{L}_1^{-1}) & = \pi_{2*} (\tilde{\sigma}_* \mathcal{O}_{X_1}), \quad \; \; \textrm{that is}\\
\mathcal{O}_A \oplus (\sigma_* \mathcal{L}_1^{-1}) & = \mathcal{O}_A \oplus \mathcal{L}_2^{-1}.
\end{split}
\end{equation*}
The decomposition is preserved because the map $X_1 \to X_2$ is compatible with the involutions induced by the Albanese map, so we obtain $\sigma_* \mathcal{L}_1^{-1} \simeq \mathcal{L}_2^{-1}$ as desired.
If $\tilde{\sigma}$ is anti-holomorphic, it suffices to apply the same proof to the holomorphic diffeomorphism which is complex-conjugated to it.
\end{proof}
Summing up, Propositions \ref{prop:antiHolo and Holo} and \ref{prop:converse-holo} together imply
\begin{corollary} \label{cor:holo-antiholo}
With the same assumptions as in Proposition \emph{\ref{prop:converse-holo}}, there exists a holomorphic $($respectively, anti-holomorphic$)$ diffeomorphism $\tilde{\sigma} \colon X_1 \to X_2$ if and only if there exists a holomorphic $($respectively, anti-holomorphic$)$ element $\sigma \in {\rm Kl}(A, \, D)$ such that $\sigma^* \mathcal{L}_2 \simeq \mathcal{L}_1$.
\end{corollary}
\subsection{The uniqueness of the abelian surface $A$}
We follow the notation of Section \ref{subsec:example-II}. If $\chi_i$ is any non-trivial element of the group $\varSigma=\{\chi_0, \, \chi_1,$ $\chi_2, \, \chi_3\}$ then, for the sake of brevity, we will write $A_i$, $D_{A_i}$ and $f_i \colon A_i \to A'$ instead of $A_{\chi_i}$, $D_{A_{\chi_i}}$ and $f_{\chi_i} \colon A_{\chi_i} \to A'$, respectively. We will also denote by $\mathcal{L}_i \in \textrm{Pic}^0(A')$ the $2$-torsion line bundle corresponding to $\chi_i$, so that
$f_{i*} \mathcal{O}_{A_i} = \mathcal{O}_{A'} \oplus \mathcal{L}_i^{-1}$.
\begin{proposition} \label{prop:isom-chi}
The abelian surfaces $A_1$, $A_2$, $A_3$ are pairwise isomorphic. More precisely, for all $i, \, j \in \{1, \, 2, \, 3\}$ there exists an isomorphism $\tilde{\gamma}_{ij} \colon A_j \to A_i$ such that $\tilde{\gamma}_{ij}^*D_{A_i}= D_{A_j}$.
\end{proposition}
\begin{proof}
By Proposition \ref{prop:antiHolo and Holo} it suffices to prove that there exists an automorphism $\gamma_{ij} \in {\rm Aut}(A', \, D_{A'})$ such that $\gamma_{ij}^* \, \mathcal{L}_i = \mathcal{L}_j$.
Consider the linear automorphism $\gamma \colon V \to V$ whose action on the standard basis is
$\gamma(e_1)= - \zeta e_1, \quad \gamma(e_2)= e_1 + e_2.$
It preserves the lattice $\Lambda_{A'}$, in fact we have
\begin{equation} \label{eq:action-varphi-lattice}
\gamma(\lambda_1)=e_1-\lambda_1, \quad \gamma
(\lambda_2)= \lambda_1+\lambda_2, \quad \gamma(e_1)=- \lambda_1, \quad \gamma(e_2)=e_1+e_2,
\end{equation}
so it descends to an automorphism of $A'$ that we still denote by $\gamma \colon A' \to A'$. An easy calculation shows that $\gamma$ is an element of order $3$ in $\mathrm{Aut}(A', \, D_{A'})$, so it induces by pull-back an action of $\langle \gamma \rangle \simeq \mathbb{Z}/ 3 \mathbb{Z}$ on $\mathrm{NS}(A')$. Such an action is obtained by composing a character $\chi \colon \Lambda_{A'} \to \{\pm1\}$ with \eqref{eq:action-varphi-lattice}, and it is straightforward to check that it restricts to an action on the subgroup $\varSigma$ (defined in (\ref{Characters})), namely the one generated by the cyclic permutation $(\chi_1 \; \chi_3 \; \chi_2).$
This shows that $\langle \gamma \rangle$ acts transitively on the non-trivial characters of $\varSigma$. Since the action of $\gamma$ on the characters $\chi_i$ corresponds to the pullback action on the corresponding $2$-torsion divisors $\mathcal{L}_i$, by setting $\gamma_{13}=\gamma_{21}=\gamma_{32}=\gamma$ and $\gamma_{31}=\gamma_{12}=\gamma_{23}=\gamma^2$ we obtain $\gamma_{ij}^* \, \mathcal{L}_i = \mathcal{L}_j$, as desired.
\end{proof}
\subsection{A rigidity result for surfaces of type $II$}
Let us first recall the notions of deformation equivalence and global rigidity, see \cite[Section 1]{Ca13}.
\begin{definition} \label{def:def-and-rigidity} ${}$
\begin{itemize}
\item Two complex surfaces $S_1$, $S_2$ are said to be \emph{direct deformation equivalent} if there is a proper holomorphic submersion with connected fibres $f \colon \mathcal{Y} \to \mathbb{D}$, where $\mathcal{Y}$ is a complex manifold and $\mathbb{D} \subset \mathbb{C}$ is the unit disk, and moreover there are two fibres of $f$ biholomorphic to $S_1$ and $S_2$, respectively;
\item two complex surfaces $S_1$, $S_2$ are said to be \emph{deformation equivalent} if they belong to the same deformation equivalence class, where by deformation equivalence we mean the equivalence relation generated by direct deformation equivalence;
\item a complex surface $S$ is called \emph{globally rigid} if its deformation equivalence class consists of $S$ only, i.e. if every surface which is deformation equivalent to $S$ is actually isomorphic to $S$.
\end{itemize}
\end{definition}
The following result is a characterization of the equianharmonic product, that can be found in \cite[Proposition 5]{KH04}.
\begin{proposition} \label{prop:char-Hirzebruch}
Let $A'$ be an abelian surface containing four elliptic curves
that intersect pairwise at the origin $o'$ and not elsewhere. Then $A'$ is isomorphic to the equianharmonic product $E' \times E'$ and, up to the action of $\mathrm{Aut}(A')$, the four curves are $E_1'$, $E_2'$, $E_3'$, $E_4'$.
\end{proposition}
A more conceptual proof of Proposition \ref{prop:char-Hirzebruch}, exploiting some results of Shioda and Mitani on abelian surfaces with maximal Picard number, can be found in \cite{Aide}.
Using Proposition \ref{prop:char-Hirzebruch} we obtain:
\begin{theorem} \label{thm:class-type II}
Let $S$ be a surface with $p_g(S)=q(S)=2$, $K_S^2=8$ and Albanese map $\alpha \colon S \to A$ of degree $2$. If $S$ belongs to type $II$, then the pair $(A, \, D_A)$ is isomorphic to an \'etale double cover of the pair $(A', \, D_{A'})$, where $A'$ is the equianharmonic product and $D_{A'}=E_1'+E_2'+E_3'+E_4'$. In particular, all surfaces of type $II$ arise as in Proposition \emph{\ref{prop:type-II}}. Finally, all surfaces of type $II$ are globally rigid.
\end{theorem}
\begin{proof}
Let us consider the Stein factorization $\alpha_X \colon X \to A$ of the Albanese map $\alpha \colon S \to A$; then $\alpha_X$ is a finite double cover branched over $D_A$.
By Proposition \ref{prop:branch-type-II} we have $D_A=E_1+E_2+E_3+E_4$, where the $E_i$ are four elliptic curves intersecting pairwise transversally at two points $p_1$, $p_2$ and not elsewhere. Up to a translation, we may assume that $p_1$ coincides with the origin $o \in A$. Then $p_2= a$, where $a$ is a non-zero $2$-torsion point of $A$ (in fact the $E_i$ are subgroups of $A$, so the same is true for their intersection $\{o, \, a\}$).
If we consider the abelian surface $A':= A/ \langle a \rangle$, then the projection $f \colon A \to A'$ is an isogeny of degree $2$. Moreover, setting $E_i' :=f(E_i)$, we see that $E_1', \ldots, E_4'$ are four elliptic curves intersecting pairwise transversally at the origin $o' \in A'$ and not elsewhere. Then the claim about $A'$ and $D_{A'}$ follows from Proposition \ref{prop:char-Hirzebruch}.
Let us finally provide our rigidity argument. First of all, we observe that surfaces of type $II$ cannot be specializations of surfaces of type $I$, for instance because every flat deformation of the Albanese map must preserve the arithmetic genus of the branch divisor, and so must preserve $D_A^2$ (in fact, we can state the stronger result that every flat limit of surfaces of type $I$ is still a surface of type $I$, because being isogenous to a higher product is a topological condition by \cite[Theorem 3.4]{Cat00}). From this, since there are finitely many possibilities for both the double covers $f \colon A \to A'$ and $\alpha_X \colon X \to A$, it follows that $S$, being the minimal desingularization of $X$, belongs to only finitely many isomorphism classes. Therefore $S$ is globally rigid, because by \cite{G77} the moduli space of surfaces of general type is separated.
\end{proof}
\subsection{The groups $\protect{\rm Aut}(A,\, D_A)$ and $\protect{\rm Kl}(A, \, D_A)$}
In the sequel we will write
$A:=V/\Lambda_A $
in order to denote any of the pairwise isomorphic abelian surfaces $A_1$, $A_2$, $A_3$, see Proposition \ref{prop:isom-chi}. We choose for instance $\Lambda_A=\ker \chi_1$. By \eqref{eq:bases-ker-chi} we have
\begin{equation}
\Lambda_A = \mathbb{Z} \mathbf{e}_1 \oplus \mathbb{Z} \mathbf{e}_2 \oplus \mathbb{Z} \mathbf{e}_3 \oplus \mathbb{Z} \mathbf{e}_4,
\end{equation}
where
\begin{equation} \label{eq:e1-e2}
\mathbf{e}_1:=e_1, \quad \mathbf{e}_2:=\lambda_1+e_2, \quad \mathbf{e}_3:=\lambda_2+e_2, \quad \mathbf{e}_4:=2e_2.
\end{equation}
Note that $\mathbf{e}_1=(1, \, 0)$ and $\mathbf{e}_2 = (\zeta, \, 1)$ form a basis for $V$.
\begin{remark} \label{rem:a}
It is straightforward to check that the class of the point $\zeta e_1 = (\zeta, \, 0)$ in $A_1$ is contained in all the curves $E_1, \ldots, E_4$, so we obtain $a=\zeta e_1 + \Lambda_A$, where $a$ is the $2$-torsion point defined in the proof of Theorem \ref{thm:class-type II}.
\end{remark}
Let $\Gamma_{\zeta}$ and $E'$ be as in \eqref{eq:curve-E'} and set
\begin{equation*}
E'':=\mathbb{C}/\Gamma_{2 \zeta}, \quad \Gamma_{2 \zeta} := \mathbb{Z}[2 \zeta]
\end{equation*}
The next result implies that $A$ is actually isomorphic to the product $E'' \times E'$.
\begin{lemma} \label{lem:Lambda-A}
We have $\Lambda_A = \Gamma_{2 \zeta} \, \mathbf{e}_1 \oplus \Gamma_{\zeta} \, \mathbf{e}_2.$
\end{lemma}
\begin{proof}
We check that the base-change matrix between the $\mathbb{Q}$-bases $\mathbf{e}_1, \mathbf{e}_2, \mathbf{e}_3, \mathbf{e}_4$ and $\mathbf{e}_1, \mathbf{e}_2, 2\zeta \mathbf{e}_1, \zeta \mathbf{e}_2$ of $H_1 (A,\mathbb{Q})$ is in $\mathsf{GL}(4,\mathbb{Z})$.
\end{proof}
We will use Lemma \ref{lem:Lambda-A} in order to describe the groups ${\rm Aut}(A, \, D_A)$ and ${\rm Kl} (A, \, D_A)$.
In what follows,
we will identify an automorphism $A \to A$ with the matrix of its analytic
representation $V \to V$ with respect to the standard basis $\{e_{1},\, e_{2} \}$. Moreover, we will write $\tau=\tau_a \colon A \to A$ for the translation by the $2$-torsion
point $a= \zeta e_1 + \Lambda_A$.
\begin{proposition} \label{prop:The-group-Aut(A,D)}
The following holds.
\begin{itemize}
\item[$\boldsymbol{(a)}$]
We have
\begin{equation} \label{eq:aut(A,D)}
\mathrm{Aut}(A, \, D_A) = \mathrm{Aut}_0(A, \, D_A) \times \mathbb{Z}/2 \mathbb{Z},
\end{equation}
where $\mathbb{Z}/2 \mathbb{Z}$ is generated by the translation $\tau$, whereas $\mathrm{Aut}_0(A, \, D_A)$ is the subgroup of group automorphisms of $A$ generated by the elements
\begin{equation} \label{eq:g2-g3}
g_{2}=\left(\begin{array}{cc}
\zeta & -1\\
\zeta & - \zeta
\end{array}\right),\quad g_{3}=\left(\begin{array}{cc}
0 & \zeta-1\\
1-\zeta & \zeta-1
\end{array}\right).
\end{equation}
As an abstract group, $\mathrm{Aut}_0(A, \, D_A)$ is isomorphic to $\mathsf{SL}(2, \, \mathbb{F}_3);$ in particular, its order is $24$.
\item[$\boldsymbol{(b)}$]
The group ${\rm Kl}(A, \, D_A)$ is generated by ${\rm Aut}(A, \, D_A)$ together with the anti-holomorphic involution $\sigma \colon A \to A$ induced by the $\mathbb{C}$-antilinear involution of $V$ given by
\begin{equation} \label{eq:sigma}
(z_{1}, \, z_{2}) \mapsto ((\zeta-1)\bar{z}_{2}, \, (\zeta-1)\bar{z}_{1}).
\end{equation}
Furthermore, the two involutions $\tau$ and $\sigma$ commute, so that we can write
\begin{equation} \label{eq:Kl(A,D)}
{\rm Kl}(A, \, D_A) = \mathrm{Kl}_0(A, \, D_A) \times\mathbb{Z}/2\mathbb{Z},
\end{equation}
where $\mathrm{Kl}_0(A, \, D_A) $ contains $\mathrm{Aut}_0(A, \, D_A)$ as a subgroup of index $2$.
\end{itemize}
\end{proposition}
\begin{proof}
$\boldsymbol{(a)}$ Let us work using the basis $\{\mathbf{e}_1, \, \mathbf{e_2}\}$ of $V$ defined in $\eqref{eq:e1-e2}$. With respect to this basis, using \eqref{eq:four-complex-lines} we see that the four elliptic curves $E_{1},\dots,E_{4}$ have tangent spaces
\begin{equation*}
\begin{aligned}
V_1 & = \mathrm{span}(\mathbf{e}_1), & V_2 = \mathrm{span}(-\zeta \mathbf{e}_1 + \mathbf{e}_2), \\
V_3 & = \mathrm{span}((1-\zeta) \mathbf{e}_1 + \mathbf{e}_2), & V_4 = \mathrm{span}((1- 2\zeta) \mathbf{e}_1 + \mathbf{e}_2). \\
\end{aligned}
\end{equation*}
Then, up to the translation $\tau$, we are looking at the subgroup $\mathrm{Aut}_0(A, \, D_A)$ of the group automorphisms of $A$ whose elements have matrix representation preserving the set of four points $ \mathscr{P}=\{P_1, \, P_2, \, P_3, \, P_4 \} \subset \mathbb{P}^1$, where
\begin{equation*}
P_1=[1:\, 0], \quad P_2=[-\zeta: \, 1], \quad P_3=[1-\zeta: \, 1],\quad P_4=[1-2\zeta: \, 1].
\end{equation*}
The cross ratio $(P_1, \, P_2 ,\, P_3, \, P_4)$ equals $\zeta^{-1}$, hence $\mathscr{P}$ is an equianharmonic quadruple and so the subgroup of $\mathrm{PGL}(2, \, \mathbb{C})$ preserving $\mathscr{P}$ acts on it as the alternating group $\mathsf{A}_4$, see
\cite[p. 817]{Maier07}. Such a group can be presented as
\begin{equation} \label{eq:presentation-A4}
\mathsf{A}_4 = \langle \alpha, \, \beta \; | \; \alpha^2 = \beta^3 = (\alpha \beta)^3 =1 \rangle,
\end{equation}
where $\alpha = (13)(24)$ and $\beta=(123)$, so we need to find matrices $\tilde{g}_2, \, \tilde{g}_3 \in \mathrm{GL}(2, \, \mathbb{C})$, acting as an isomorphism on the lattice $\Lambda_A$ and inducing the permutations $(P_1 \, P_3)(P_2 \, P_4)$ and $(P_1 \, P_2 \, P_3)$ on $\mathscr{P}$, respectively.
Using Lemma \ref{lem:Lambda-A}, we see that $\tilde{g} \in \mathrm{GL}(2, \, \mathbb{C})$ preserves $\Lambda_A$ if and only if it has the form
\begin{equation*}
\tilde{g}=\left(\begin{array}{cc}
a_{11} & a_{12}\\
a_{21} & a_{22}
\end{array}\right), \quad \mathrm{with}\; a_{11} \in \Gamma_{2 \zeta}, \; \; a_{12} \in 2 \Gamma_{\zeta}, \; \; a_{21}, \, a_{22} \in \Gamma_{\zeta},
\end{equation*}
and its determinant belongs to the group of units of $\Gamma_{\zeta}$, namely $\{\pm 1, \, \pm \zeta, \, \pm \zeta^2 \}$.
Now an elementary computation yields the matrices $\pm \tilde{g}_{2}, \, \pm \tilde{g}_{3}$, where
\begin{equation*}
\tilde{g}_{2}=\left(\begin{array}{cc}
1 & 2 \zeta -2\\
\zeta & -1
\end{array}\right), \quad \tilde{g}_{3}=\left(\begin{array}{cc}
-1 & 0\\
1- \zeta & \zeta
\end{array}\right),
\end{equation*}
and from this we can obtain the matrix representations $g_2$, $g_3$ of our automorphisms in the basis $\{e_1, \, e_2 \}$ of $V$ by taking
\begin{equation*}
g_i = N \tilde{g}_i N^{-1}, \quad \mathrm{with} \; N =\left(\begin{array}{cc}
1 & \zeta\\
0 & 1
\end{array}\right).
\end{equation*}
This gives \eqref{eq:g2-g3}. Setting $h=-I_2$ and lifting the presentation \eqref{eq:presentation-A4} we get the presentation
\begin{equation}
\mathrm{Aut}_0(A, \, D_A)=\langle g_2, \, g_3, \, h \; | \; h^2=1, \, \, g_2^2 = g_3^3 = (g_2g_3)^3= h \rangle,
\end{equation}
showing the isomorphism $\mathrm{Aut}_0(A, \, D_A) \simeq \mathsf{SL}(2, \, \mathbb{F}_3)$.
Finally, a standard computation shows that $\tau$ commutes with both $g_2$ and $g_3$. Since $\mathrm{Aut}(A)$ is the semidirect product of the translation group of $A$ by the group automorphisms, it follows that $\mathrm{Aut}(A, \, D_A)$ is the direct product of $\langle \tau \rangle \simeq \mathbb{Z}/2 \mathbb{Z}$ by $\mathrm{Aut}_0(A, \, D_A)$, hence we obtain \eqref{eq:aut(A,D)}.
\\ \\
$\boldsymbol{(b)}$ Let us consider the $\mathbb{C}$-antilinear map $V \to V$
\begin{equation*}
(z_{1}, \, z_{2})\mapsto(\bar{z}_{1}, \, (\zeta-1)\bar{z}_{1} + \zeta \bar{z}_{2}),
\end{equation*}
expressed with respect to the basis $\{\mathbf{e}_1, \, \mathbf{e}_2 \}$.
It preserves the lattice $\Lambda_A$ and so it defines an anti-holomorphic involution $\sigma\colon A \to A$,
inducing the transposition $(P_1 \, P_2)$ on the set $\mathscr{P}=\{P_1, \, P_2, \, P_3, \, P_4\}$. Since ${\rm Aut}(A,D)$
has index at most $2$ in ${\rm Kl}(A, \, D_A)$, it follows that ${\rm Aut}(A, \, D_A)$ and
$\sigma$ generate ${\rm Kl}(A, \, D_A)$. Moreover, a change of coordinates allows us to come back to the basis $\{e_1, \, e_2\}$ and to obtain the expression of $\sigma$ given in \eqref{eq:sigma}.
The subgroup $\mathrm{Kl}_0(A, \, D_A)$ generated by $g_2, \, g_3$ and the involution $\sigma$ contains $\mathrm{Aut}_0(A, \, D_A)$ as a subgroup of index $2$; a straightforward computation now shows that $[\tau, \, \sigma]=0$, so \eqref{eq:Kl(A,D)} follows from \eqref{eq:aut(A,D)} and the proof is complete.
\end{proof}
\begin{remark} \label{rmk:central-extensions}
The proof of Proposition \ref{prop:The-group-Aut(A,D)} also shows that there are central extensions
\begin{equation*} \label{eq:central-extensions}
\begin{split}
&1 \to \langle -I_2 \rangle \longrightarrow \mathrm{Aut}_0(A, \, D_A) \longrightarrow \mathsf{A}_4 \longrightarrow 0, \\
&1 \to \langle -I_2 \rangle \longrightarrow \mathrm{Kl}_0(A, \, D_A) \longrightarrow \mathsf{S}_4 \longrightarrow 0
\end{split}
\end{equation*}
such that $-I_2=[g_2, \, g_3]^2$, where the commutator is defined as $[x, \, y] := xyx^{-1}y^{-1}$. In fact, $\mathrm{Kl}_0(A, \, D_A)$ is isomorphic as an abstract group to $\mathsf{GL}(2, \, \mathbb{F}_3)$, see the proof of Theorem \ref{thm:The-number-of-Surfaces}.
\end{remark}
\subsection{The action of $\protect{\rm Kl}(A,\, D_A)$ on the square roots of $\mathcal{O}_A(D_A)$}
The main result of this subsection is the following
\begin{theorem}
\label{thm:The-number-of-Surfaces} Up to isomorphism, there exist exactly two surfaces of type II. These surfaces $S_{1}, \, S_{2}$ have conjugated complex structures, in other words there exists an anti-holomorphic diffeomorphism $S_1 \to S_2$.
\end{theorem}
In order to prove this result, we must study the action of the groups $\mathrm{Aut}(A, \, D_A)$ and $\mathrm{Kl}(A, \, D_A)$ on the sixteen square roots $\mathcal{L}_1, \ldots, \mathcal{L}_{16}$ of the line bundle $\mathcal{O}_A(D_A) \in \mathrm{Pic}(A)$. The Appell-Humbert data of such square roots are described in the following
\begin{proposition} \label{prop:square-roots-DA}
For $k \in \{1, \ldots, 16\},$ we have
$\mathcal{L}_k = \mathcal{L} \left(\frac{1}{2}\,h_A, \, \psi_k \right)$
where
\begin{itemize}
\item $h_A \colon V \times V \to \mathbb{C}$ is the hermitian form on $V$ whose associated alternating form $\mathrm{Im}\, h_A$ assumes the following values at the generators $\mathbf{e}_1, \ldots, \mathbf{e}_4$ of $\Lambda_A:$
\begin{table}[H]
\begin{center}
\begin{tabular}{c|c|c|c|c|c|c}
$(\cdot, \, \cdot)$ & $(\mathbf{e}_1, \, \mathbf{e}_2)$ & $(\mathbf{e}_1, \, \mathbf{e}_3)$ & $(\mathbf{e}_1, \, \mathbf{e}_4)$ & $(\mathbf{e}_2, \, \mathbf{e}_3)$ & $(\mathbf{e}_2, \, \mathbf{e}_4) $ & $(\mathbf{e}_3, \, \mathbf{e}_4)$ \\
\hline
$\mathrm{Im} \; h_A(\cdot, \, \cdot)$ & $-4$ & $0$ & $-2$ & $-6$ & $-4$ & $6$ \\
\end{tabular} \caption{The values of $\mathrm{Im}\, h_A$ at the generators of $\Lambda_{A}$} \label{table:imh_A}
\end{center}
\end{table}
\item Using the notation $\psi_k := (\psi_k(\mathbf{e}_1), \, \psi_k(\mathbf{e}_2), \, \psi_k(\mathbf{e}_3), \, \psi_k(\mathbf{e}_4)),$
the semicharacters $\psi_k \colon \Lambda_A \to \mathbb{C}^*$ are as follows$:$
\begin{equation} \label{eq:semi-characters}
\begin{array}{lll}
\psi_1 \ := (i, \, 1, \, i, \, 1), & & \\
\psi_2\ :=(-i, \, -1, \, i, -1), & \psi_3\ := (i, \, -1, \, -i, \, 1), & \psi_4\ :=(-i, \, 1, \, -i, \, -1),\\
\psi_5\ :=(i, \, 1, \, -i, \, 1), & \psi_6\ :=(-i, \, 1, \, i, \,1), & \psi_7\ :=(-i, \, 1,\, -i, \, 1),\\
\psi_8\ :=(i, \, 1, \, i, \, -1), & \psi_9\ :=(i, \, -1, \, i, \, -1), & \psi_{10}\ :=(-i, \, 1, \, i, \,-1),\\
\psi_{11} :=(i, \, 1,\, -i, \, -1), & \psi_{12}:=(-i, \, -1, \, -i, \, 1), & \psi_{13} :=(i, \, -1, \, i, \, 1),\\
\psi_{14}:=(i, \, -1, \, -i, \,-1), & \psi_{15}:=(-i, \, -1,\, i, \, 1), & \psi_{16} :=(-i, \, -1, \, -i, \, -1).
\end{array}
\end{equation}
\end{itemize}
\end{proposition}
\begin{proof}
Let us consider the double cover $f \colon A \to A'$.
If the hermitian form $h \colon V \times V \to \mathbb{C}$ and the semicharacter $\chi_{D_{A'}} \colon \Lambda_{A'} \to \{ \pm1 \}$ are as in Proposition \ref{pro:class_DA'} and Table \ref{table:imh}, then we have $\mathcal{O}_A(D_A) = \mathcal{L}(h_A, \, \chi_{D_A})$, where
$h_A=f^*h, \quad \chi_{D_A} = f^* \chi_{D_{A'}}.$
From this, using \eqref{eq:e1-e2} we can compute the values of the alternating form $\mathrm{Im}\, h_A$ and of the semicharacter $\chi_{D_A}$ at $\mathbf{e_1}, \ldots, \, \mathbf{e}_4$, obtaining Table \ref{table:imh_A} and
\begin{equation*}
\chi_{D_A} = (\chi_{D_A}(\mathbf{e}_1), \, \chi_{D_A}(\mathbf{e}_2), \, \chi_{D_A}(\mathbf{e}_3), \, \chi_{D_A}(\mathbf{e}_4)) = (-1, \, 1, \, -1, \, 1).
\end{equation*}
Then, setting $\mathcal{L}_k = \mathcal{L}(h_k, \, \psi_k)$, the equality $\mathcal{L}_k^{\otimes 2} = \mathcal{O}_A(D_A)$ implies
\begin{equation*}
2h_k = h_A, \quad \psi_k^2 = \chi_{D_A},
\end{equation*}
hence $h_k = \frac{1}{2} h_A$ for all $k$. Moreover we can set $\psi_1 = (i, \, 1, \, i, \, 1)$, whereas the remaining $15$ semicharacters $\psi_k$ are obtained by multiplying $\psi_1$ by the $15$ non-trivial characters $\Lambda_A \to \{ \pm 1 \}$.
\end{proof}
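On the generators, this last step can be checked mechanically: multiplying $\psi_1=(i, \, 1, \, i, \, 1)$ by the sixteen characters $\Lambda_A \to \{\pm 1\}$ must produce sixteen distinct tuples, each squaring componentwise to $\chi_{D_A}=(-1, \, 1, \, -1, \, 1)$, in agreement with the list \eqref{eq:semi-characters}. A short Python verification (an illustration only; the variable names are ours) reads:
\begin{verbatim}
from itertools import product

# psi_1 on the generators (e_1,...,e_4) and chi_{D_A} = psi_k^2
psi1   = (1j, 1, 1j, 1)
chi_DA = (-1, 1, -1, 1)

# multiply psi_1 by the 16 characters Lambda_A -> {+1,-1} (on the generators)
psis = {tuple(p * s for p, s in zip(psi1, signs))
        for signs in product((1, -1), repeat=4)}

assert len(psis) == 16                               # sixteen distinct tuples
assert all(tuple(v * v for v in psi) == chi_DA for psi in psis)
\end{verbatim}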
The hermitian form $h_A$ is $\mathrm{Kl}(A, \, D_A)$-invariant (accordingly with the fact that the divisor $D_A$ is so), hence the action of $\mathrm{Kl}(A, \, D_A)$ on the set $\{\mathcal{L}_1, \ldots, \mathcal{L}_{16} \}$ is completely determined by its permutation action on the set $\{\psi_1, \ldots, \psi_{16} \}$, namely
\begin{equation*}
\varrho \colon \mathrm{Kl}(A, D_A) \longrightarrow \mathsf{Perm}(\psi_1, \ldots, \psi_{16}), \quad
\varrho(g) (\psi_k) := g^* \psi_k.
\end{equation*}
Let us now identify the group $ \mathsf{Perm}(\psi_1, \ldots, \psi_{16})$ with the symmetric group $\mathsf{S}_{16}$ on the symbols $\{1, \ldots, 16\}$, where we multiply permutations from the left to the right, for instance $(1 \, 2)(1\, 3)=(1\,2\,3)$. Then we get the following
\begin{proposition} \label{prop:permutation-action}
With the notation of \emph{Proposition \ref{prop:The-group-Aut(A,D)}}, we have
\begin{equation*}
\begin{split}
\varrho(g_2)& =(1 \; \, 13 \; \, 7\;\, 12)(2 \;\, 9 \;\, 14 \;\, 16)(3 \;\, 5 \;\, 15\;\, 6)(4 \;\, 11 \;\, 8 \;\, 10), \\
\varrho(g_3)& = (1 \; \, 13\; \, 5 \; \, 7 \; \, 12 \; \, 6)(2 \; \, 4 \; \, 11 \; \, 14 \; \, 8 \; \, 10) (3 \; \, 15) (9 \; \, 16), \\
\varrho(-I_2)=\varrho(g_2^2) = \varrho(g_3^3)= \varrho(\tau)& = (1 \; \, 7)(2 \; \, 14) (3 \; \, 15)(4 \; \, 8)(5 \; \, 6)(9 \; \, 16)(10 \; \, 11)(12 \; \, 13), \\
\varrho(\sigma) & = (1 \; \, 14)(2 \; \, 7) (3 \; \, 16)(4 \; \, 5)(6 \; \, 8)(9 \; \, 15)(10 \; \, 12)(11 \; \, 13).
\end{split}
\end{equation*}
\end{proposition}
\begin{proof}
Using the explicit expressions given in Proposition \ref{prop:The-group-Aut(A,D)}, by a standard computation we can check that $g_2$, $g_3$, $\tau$ and $\sigma$ send the ordered basis $\{\mathbf{e}_1, \, \mathbf{e}_2, \, \mathbf{e}_3, \, \mathbf{e}_4 \}$ of $\Lambda_A$ to the bases
\begin{equation} \label{eq:mathbf-e}
\begin{split}
& \{\mathbf{e}_2 + \mathbf{e}_3 - \mathbf{e}_4, \; -2 \mathbf{e}_1+ \mathbf{e}_2 - \mathbf{e}_4, \; -\mathbf{e}_1 - \mathbf{e}_2 - 2 \mathbf{e}_3+ 2\mathbf{e}_4, \; -2\mathbf{e}_1 - 2 \mathbf{e}_3+ \mathbf{e}_4 \}, \\
& \{-\mathbf{e}_3 + \mathbf{e}_4, \; -\mathbf{e}_1 +\mathbf{e}_2 + \mathbf{e}_3- \mathbf{e}_4, \; - 2\mathbf{e}_1 + \mathbf{e}_2 + \mathbf{e}_3 -2 \mathbf{e}_4, \; - 2\mathbf{e}_1 + 2 \mathbf{e}_2 + 2 \mathbf{e}_3 -3 \mathbf{e}_4 \}, \\
& \{-\mathbf{e}_1, \; - \mathbf{e}_2, \; - \mathbf{e}_3, \; - \mathbf{e}_4 \}, \\
& \{\mathbf{e}_3 - \mathbf{e}_4, \; -\mathbf{e}_1 +\mathbf{e}_2 +\mathbf{e}_3- \mathbf{e}_4, \; -\mathbf{e}_1 +2 \mathbf{e}_2 - \mathbf{e}_4, \; -2 \mathbf{e}_1 +2 \mathbf{e}_2 - \mathbf{e}_4 \},
\end{split}
\end{equation}
respectively. For any $g \in \mathrm{Aut}(A, \, D_A)$, calling $G_{\Lambda} \colon \Lambda_A \to \Lambda_A$ the corresponding rational representation we have $\varrho(g)(\psi_k) = \psi_k \circ G_{\Lambda}$, whereas $\varrho(\sigma)(\psi_k) = \overline{\psi_k \circ \mathfrak{S}_{\Lambda}}$ (see \eqref{eq:L-anti-holo-pullback}). Then another long but straightforward calculation using \eqref{eq:semi-characters} and \eqref{eq:mathbf-e} concludes the proof.
\end{proof}
We are now ready to give the \\
\emph{Proof of Theorem $\mathrm{\ref{thm:The-number-of-Surfaces}}$}. Since any surface $S$ of type $II$ is the minimal desingularization of a double cover $\alpha_X \colon X \to A$, branched over $D_A$, by Proposition \ref{prop:converse-holo} it follows that the number of surfaces of type $II$ up to isomorphisms (respectively, up to holomorphic and anti-holomorphic diffeomorphisms) equals the number of orbits for the permutation action of $\mathrm{Aut}(A, \, D_A)$ (respectively, of $\mathrm{Kl}(A, \, D_A)$) on the set $\{\mathcal{L}_1, \ldots, \mathcal{L}_{16}\}$ of the sixteen square roots of $\mathcal{O}_A(D_A)$. We have seen that such an action is determined by the permutation action on the set of sixteen semicharacters $\{\psi_1, \ldots, \psi_{16}\}$, so we only have to compute the number of orbits for the subgroup of $\mathsf{S}_{16}$ whose generators are described in Proposition \ref{prop:permutation-action}.
This can be done by hand, but it is easier to write a short script using the Computer Algebra System \verb|GAP4| (\cite{GAP4}):
\begin{Verbatim}[commandchars=\\\{\}]
g2:=(1, 13, 7, 12)(2, 9, 14, 16)(3, 5, 15, 6)(4, 11, 8, 10);;
g3:=(1, 13, 5, 7, 12, 6)(2, 4, 11, 14, 8, 10)(3, 15)(9, 16);;
sigma:=(1, 14)(2, 7)(3, 16)(4, 5)(6, 8)(9, 15)(10, 12)(11, 13);;
Aut:=Group(g2, g3);;
Kl:=Group(g2, g3, sigma);;
StructureDescription(Aut);
\textcolor{red}{"SL(2,3)"}
OrbitsPerms(Aut, [ 1 .. 16 ]);
\textcolor{red}{[ [ 1, 7, 12, 13, 3, 15, 5, 6 ], [ 2, 14, 16, 9, 10, 11, 4, 8 ] ]}
StructureDescription(Kl);
\textcolor{red}{"GL(2,3)"}
OrbitsPerms(Kl, [1..16]);
\textcolor{red}{[ [ 1, 7, 12, 13, 15, 3, 6, 5, 14, 2, 10, 11, 9, 16, 8, 4 ] ]}
\end{Verbatim}
The first two output lines (in red) show that $\varrho$ induces an embedding of $\mathrm{Aut}_0(A, \, D_A)$ in $\mathsf{S}_{16}$ (note that $\rho$ is a group homomorphism because of our convention on the multiplication on $\mathsf{S}_{16}$), and that the corresponding permutation subgroup has precisely two orbits. Therefore there are exactly two surfaces $S_1$, $S_2$ of type $II$, up to isomorphisms. Analogously, the last two output lines show that $\varrho$ induces an embedding of $\mathrm{Kl}_0(A, \, D_A)$ in $\mathsf{S}_{16}$,
and the corresponding permutation subgroup has only one orbit. This means that there exists an anti-holomorphic diffeomorphism $S_1 \to S_2$, hence these surfaces are not isomorphic, but they have conjugated complex structures. \hfill $\Box$
\bigskip
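The same computation can also be reproduced without \verb|GAP4|. The following Python script (an illustration only: the helper names are ours, while the permutations are copied from Proposition \ref{prop:permutation-action} and composed left-to-right as above) generates the two permutation groups by brute force and lists their orbits; it should reproduce the orders $24$ and $48$ and the orbit structure displayed in the \verb|GAP4| session.
\begin{verbatim}
def perm_from_cycles(cycles, n=16):
    # build the permutation (as a 0-based tuple) from 1-based cycles
    p = list(range(n))
    for cyc in cycles:
        for i, a in enumerate(cyc):
            p[a - 1] = cyc[(i + 1) % len(cyc)] - 1
    return tuple(p)

g2    = perm_from_cycles([(1,13,7,12), (2,9,14,16), (3,5,15,6), (4,11,8,10)])
g3    = perm_from_cycles([(1,13,5,7,12,6), (2,4,11,14,8,10), (3,15), (9,16)])
sigma = perm_from_cycles([(1,14), (2,7), (3,16), (4,5), (6,8),
                          (9,15), (10,12), (11,13)])

def compose(p, q):
    # apply p first, then q (the left-to-right convention used in the text)
    return tuple(q[p[i]] for i in range(len(p)))

def group_generated_by(gens):
    group, frontier = set(gens), set(gens)
    while frontier:
        new = {compose(a, b) for a in frontier for b in gens} - group
        group |= new
        frontier = new
    return group

def orbits(group, n=16):
    seen, orbs = set(), []
    for start in range(n):
        if start not in seen:
            orb = {p[start] for p in group} | {start}
            seen |= orb
            orbs.append(sorted(i + 1 for i in orb))   # back to 1-based labels
    return orbs

Aut = group_generated_by([g2, g3])
Kl  = group_generated_by([g2, g3, sigma])
print(len(Aut), orbits(Aut))   # expected: 24 and two orbits of length 8
print(len(Kl),  orbits(Kl))    # expected: 48 and a single orbit of length 16
\end{verbatim}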
Let us finally show that surfaces of type $II$ are not uniformized by the bidisk (unlike surfaces of type $I$, see Corollary \ref{cor:univ-I}).
\begin{proposition} \label{prop:univ-II}
Let $S$ be a surface of type $II$ and $\widetilde{S} \to S$ its universal cover. Then $\widetilde{S}$ is \emph{not} biholomorphic to $\mathbb{H} \times \mathbb{H}$.
\end{proposition}
\begin{proof}
Looking at diagram \eqref{dia.resolution} in Section 1, we see that, in case $II$, the map $\varphi \colon B \to A$ is the blow-up of $A$ at the two quadruple points $p_1, \, p_2$ of the curve $D_A$ and that $\bar{S}=S$. Moreover, considering $\beta \colon S \to B$ we have
\begin{equation*}
\beta ^* D_B = C_1 + C_2 + C_3 + C_4,
\end{equation*}
where the $C_i$ are (pairwise disjoint) elliptic curves with $C_i^2=-1$.
The embedding $C_{i} \to S$, composed with the universal cover $\mathbb{C} \to C_i$, gives a
non-constant holomorphic map $\mathbb{C} \to S$, that in turn lifts to a non-constant holomorphic map $\mathbb{C} \to \widetilde{S}$. If $\widetilde{S}$ were isomorphic to $\mathbb{H} \times \mathbb{H}$, projecting onto one of the two factors we would obtain a non-constant holomorphic map $\mathbb{C} \to \mathbb{H}$, whose existence would contradict Liouville's theorem because $\mathbb{H}$ is biholomorphic to the bounded domain $\mathbb{D} = \{z \in \mathbb{C} \; : \, |z| < 1 \}$.
\end{proof}
\subsection{Concluding remarks} \label{subsec:final}
\begin{remark}
In the proof of Proposition \ref{prop:univ-II}, we could have used one of the elliptic curves $Z_1, \, Z_2$ instead of the $C_i$ (see Remark \ref{rem:sing-Stein}).
\end{remark}
\begin{remark}\label{openball}
Denoting by $\chi_{\textrm{top}}$ the topological Euler number, we have
\begin{equation*}
\bigg(K_{S}+\sum_{i=1}^{4}C_{i} \bigg)^{2}=12=3 \, \chi_{\textrm{top}} \bigg(S - \sum_{i=1}^{4}C_{i} \bigg)
\end{equation*}
and
\begin{equation*}
\bigg(K_{S}+\sum_{i=1}^{2}Z_{i} \bigg)^{2}=12=3 \, \chi_{\textrm{top}} \bigg(S - \sum_{i=1}^{2}Z_{i} \bigg).
\end{equation*}
This implies that the open surfaces $S - \sum_{i=1}^{4}C_{i}$ and $S - \sum_{i=1}^{2}Z_{i}$ both have the structure of a complex ball-quotient, see \cite{Ro14} for references and further details.
\end{remark}
\begin{remark} \label{rem:def-diff}
The two non-isomorphic surfaces of type $II$ exhibit a new occurrence of the so-called \verb|Diff|$\nRightarrow$\verb|Def| phenomenon, meaning that their diffeomorphism type does not determine their deformation class. In fact, they are (anti-holomorphically) diffeomorphic, but not deformation equivalent since they are rigid. See \cite{Man01}, \cite{ KK}, \cite{Ca03}, \cite{ CaWa07} for further examples of this situation.
\end{remark}
\begin{remark} \label{rem:Shimura}
We can also give the following different proof of Proposition \ref{prop:univ-II}, based on the same argument used in \cite[proof of Proposition 10]{RRS18}. We are indebted to F. Catanese for several comments and suggestions on this topic. If a surface $S$ is uniformized by $\mathbb{H} \times \mathbb{H}$, then it is the quotient of $\mathbb{H} \times \mathbb{H}$ by a cocompact subgroup $\Gamma$ of $\mathrm{Aut}(\mathbb{H} \times \mathbb{H})=(\mathrm{Aut}(\mathbb{H}) \times \mathrm{Aut}(\mathbb{H})) \rtimes (\mathbb{Z}/2 \mathbb{Z})$ acting freely. If $\Gamma$ is reducible in the sense of \cite[Theorem 1]{Shi63} it follows that $S$ is isogenous to a product; in particular, if $p_g=q=2$, $K_S^2=8$ then $S$ is of type $I$ by the classification given in \cite{Pe11}. Thus, if $S$ is of type $II$ then $\Gamma$ must be irreducible and so, by using \cite[Theorem 7.2]{MS63}, we infer
\begin{equation*}
q(S)=\frac{1}{2}b_1(S)= \frac{1}{2}b_1(\mathbb{P}^1 \times \mathbb{P}^1)=0,
\end{equation*}
a contradiction.
\end{remark}
\begin{remark} \label{remGeomConstr}
It is possible to give a different geometric construction of the abelian surfaces $A'$, $A$ and of the divisor $D_A$ as follows. Unfortunately, at present we do not know how to recover the $2$-divisibility of the curve $D_A$ in $\mathrm{Pic}(A)$ by using this alternative approach.
Let $F_1, \, F_2, \, F_3$ and $G_1, \,G_2, \, G_3$ be general fibres of the two rulings $f, \, g \colon\mathbb{P}^1\times\mathbb{P}^1 \to \mathbb{P}^1$, respectively; then
the two reducible divisors $F_1+F_2+F_3$ and $G_1+G_2+G_3$ meet at nine distinct points. Consider three of these points, say $p_1, \, p_2, \, p_3$, with the property that
each $F_i$ and each $G_i$ contain exactly one of them. Then there exists precisely one (smooth) curve $C_1$ of bidegree $(1, \, 1)$ passing through $p_1, \, p_2, \, p_3$.
Similarly, if we choose three other points $q_1, \, q_2, \, q_3\notin\{p_1, \, p_2, \, p_3\}$ with the same property, there exists a unique curve $C_2$ of bidegree $(1, \, 1)$ passing through $q_1, \, q_2, \, q_3.$
The curves $C_1$ and $C_2$ meet at two points, say $r_1, \, r_2$, different from the points $p_i$ and $q_i$.
Let us call $F_4$ and $G_4$ the fibres of $f$ and $g$ passing through one of these two points, say $r_1$. Then the reducible curve $B$ of bidegree $(4, \, 4)$ defined as
\begin{equation*}
B=F_1+ \cdots +F_4+G_1+ \cdots +G_4
\end{equation*}
has sixteen ordinary double points as only singularities, and
the double cover $\phi \colon Q'\longrightarrow \mathbb{P}^1\times\mathbb{P}^1$ branched over
$B$ gives a singular Kummer surface $Q'$; let us write $A'$ for the associated abelian surface.
We can easily show that
\begin{equation*}
\phi^*C_1 = C_{11}+C_{12}, \quad \phi^*C_2 = C_{21}+C_{22},
\end{equation*}
where all the $C_{ij}$ are smooth and irreducible. Moreover we see that $C_{11}$ and $C_{22}$ intersect at exactly one point, which is a node of $Q'.$
Writing
\begin{equation*}
\phi^*F_4 = 2 \widehat F_4, \, \quad \phi^*G_4 = 2 \widehat G_4,
\end{equation*}
we see that the rational curves $C_{11},$ $C_{22},$ $\widehat F_4, \, \widehat G_4$ meet at one node of $Q'$ and that each of them contains precisely four nodes of $Q'.$ Hence the pullback of these curves via the double cover $A' \to Q'$ yields four elliptic curves in $A'$ intersecting pairwise and transversally at a single point.
Let us choose now $i, \, j, \, h, \, k \in \{1, \, 2, \, 3, \, 4 \}$, with $i\not=j$ and $h\not=k$, and consider the eight nodes of $B$ lying in the smooth locus of the curve $H=F_i+F_j+G_h+G_k$.
The $2$-divisibility of $H$ in $\mathrm{Pic}(\mathbb{P}^1 \times \mathbb{P}^1)$ implies that the corresponding set $\Xi$ of eight nodes in the Kummer surface $Q'$ is $2$-divisible, so we can consider the double cover $Q\longrightarrow Q'$ branched over $\Xi$.
The surface $Q$ is again a singular Kummer surface and, calling $A$ the abelian surface associated with $Q$, we obtain a degree $2$ isogeny $A\longrightarrow A'$.
We can choose (in three different ways) $i, \, j, \, h, \, k$ so that each of the four curves $C_{11},$ $C_{22},$ $\widehat F_4$ and $\widehat G_4$ contains exactly two points of $\Xi$.
Therefore the pullbacks of these curves to $Q$ are rational curves, all passing through two of the nodes of $Q$ and containing four nodes each. This in turn gives four elliptic curves in $A$ meeting at two common points and not elsewhere, and the union of these curves is the desired divisor $D_A$.
\end{remark}
\section{}
In the last decade, Dynamical Mean Field Theory (DMFT)
\cite{Georges96, Held07} has become a widely used method to
study strongly correlated electrons systems. It can be formulated
by an action formalism with a self-consistent mapping onto a single impurity
Anderson model (SIAM). Because the
critical step in this method is the quality of the impurity solvers,
there has recently been an increased interest
in new and more accurate SIAM solvers.
Currently available solvers include: iterated perturbation theory (IPT) \cite{IPT}, which
works well for small $U$ and a single band; the non-crossing
approximation (NCA) \cite{NCA}, which is able to give the coherent peak but
fails to reproduce Fermi liquid behavior; and the equation of motion
(EOM) method \cite{EOM}, which requires a decoupling scheme
and misses the Kondo peak at finite $U$ for
particle-hole symmetric systems. IPT also presents pathologies away from
half-filling. The numerical renormalization group (NRG) \cite{NRG} has
also been employed, but it has the limitation that it captures the low
energy physics correctly, yet with a logarithmic divergence instead of the
expected Lorentzian shape of a Fermi liquid. It is also inaccurate
for the high energy physics and is limited to $T=0$, although finite-$T$
extensions have been attempted \cite{NRG-T}. Quantum Monte Carlo (QMC)
\cite{QMC}, as a non-perturbative approach, is in principle
rigorous and provides good results at high temperature for static
properties, but it introduces uncertainty for dynamical properties
(like the spectral density) because of the ill-posed inverse
problem of analytic continuation, and it cannot reach zero temperature.
The purpose of an Extended Recursion method in Operator Space
(EROS) is to provide an efficient (rapid) and accurate method to
solve the quantum impurity problem. Efficiency is especially
important for LDA+DMFT calculations, since most of
the computational time is spent on the quantum impurity problem.
An accurate solution must include a reliable calculation for both low and
high temperatures, low and high-energy physics, as well as for any
filling factor for the correlated
bands. We also intend to extend this method to quantum
transport problems for low-dimensional correlated systems (e.g., quantum
dots and molecular electronics). As shown below, EROS is able to retrieve well-known
approximations such as Hartree-Fock, Hubbard I, and Hubbard III at the lowest levels
of recursion, and the
Fermi liquid regime and the metal-insulator transition at higher levels.
The solution involves calculating a retarded Green function
$G(\omega)\equiv <<A;B>>_{\omega}$, which is the Fourier
transform of $G(t)=-i\theta(t)<\{A(t),B^{\dag}(0)\}>$ for the operators
A and B. This can be done through a recursion process (RP)
(see \cite{MoriZwanzig,FuldeBook}),
which can be directly seen by examining the coupled equations of motion
for the retarded Green function for a Hamiltonian $H$:
\begin{eqnarray}
<<A;B>>_{\omega}=\frac{<\{A,B^{\dag}\}>}{\omega} \nonumber\\
+\frac{<\{[A,H],B^{\dag}\}>}{\omega^2}+
\frac{<\{[[A,H],H],B^{\dag}\}>}{\omega^3}...
\label{momentexp}\end{eqnarray} This can be identified with a
moment expansion through
the definition \begin{equation}
\mu_n= <\{[\ldots[[A,H],H],\ldots,H],B^{\dag}\}>, \end{equation}
where $H$ appearing $n$ times introduces a \textit{super}-operator
$\mathcal{H}$, the 'Liouvillian', acting in
operator space: \begin{equation} A\mathcal{H}=[A,H] \label{liouville}\end{equation}
such that $\mu_n= <\{A \mathcal{H}^n,B^{\dag}\}> $.
From its moment expansion it is possible to reconstruct the Green's function
as a continued fraction (CF) \cite{Cyrot-Lackmann73}, but
a better conditioned approach is to directly obtain the
CF coefficients from the recursion method of Haydock
\cite{Haydock84}, which can be directly applied in the space
of operators, with the Liouvillian playing the role of the
Hamiltonian in the standard RP. The more conventional wave-function RP
consists in starting from an initial chosen state
$\psi_0$ and then generating a set of orthonormal states $\psi_n$ ($n>0$)
by the recurrence relation ($b_0\equiv0$):
\begin{equation}
H \psi_n= a_n \psi_n+b_n \psi_{n-1}+b_{n+1} \psi_{n+1}
\end{equation}
The coefficients $a_n$ and $b_{n+1}$ are obtained as
the scalar product of $\psi_n$ with $H \psi_n$ and as the norm
$\parallel H \psi_n - a_n \psi_n-b_n \psi_{n-1}\parallel$, respectively.
In the $\psi_n$ basis, the Hamiltonian has a tridiagonal form,
which can be represented as a semi-infinite tight-binding (TB) chain.
Once a sufficient number of coefficient pairs has been calculated, the
diagonal element of the resolvent $(\omega - H)^{-1}$ can be expressed
as a CF. Standard techniques exist to terminate this expansion \cite{TurchiDucastelle}.
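As a concrete illustration of these two steps, the following Python sketch (the function names, the simple truncation of the CF, and the tight-binding test case are ours and are not part of the formalism above) tridiagonalizes a Hermitian matrix by the recurrence and evaluates the resulting CF on a frequency grid with a small imaginary broadening:
\begin{verbatim}
import numpy as np

def recursion_coefficients(H, psi0, nsteps):
    # Haydock recursion for a Hermitian matrix H and a starting vector psi0
    a, b = [], [0.0]
    prev = np.zeros(len(psi0), dtype=complex)
    cur = psi0.astype(complex) / np.linalg.norm(psi0)
    for _ in range(nsteps):
        w = H @ cur
        a.append(np.vdot(cur, w).real)          # a_n = (psi_n | H psi_n)
        w = w - a[-1] * cur - b[-1] * prev
        b.append(np.linalg.norm(w))             # b_{n+1} = || ... ||
        prev, cur = cur, w / b[-1]
    return np.array(a), np.array(b[1:-1])       # a_0..a_{N-1}, b_1..b_{N-1}

def green_cf(z, a, b):
    # G(z) = 1/(z - a_0 - b_1^2/(z - a_1 - ...)), evaluated bottom-up
    g = 0.0
    for n in reversed(range(len(a))):
        tail = b[n] ** 2 * g if n < len(b) else 0.0
        g = 1.0 / (z - a[n] - tail)
    return g

# test: end site of a long chain with hopping 0.5 (site energies 0); the
# coefficients settle to a_n = 0, b_n = 0.5 and the spectral density
# -Im G / pi approaches the semicircle of half-width 1
N = 400
H = np.diag(0.5 * np.ones(N - 1), 1) + np.diag(0.5 * np.ones(N - 1), -1)
psi0 = np.zeros(N)
psi0[0] = 1.0
a, b = recursion_coefficients(H, psi0, 60)
omega = np.linspace(-1.5, 1.5, 601) + 0.01j
rho = -green_cf(omega, a, b).imag / np.pi
\end{verbatim}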
The application of this approach to operator space is
straightforward, each vector now corresponding to an operator.
For our purposes we will be mainly concerned with the
local Green function, where $A=B=c_0$, the destruction operator of
an electron of given spin at site $0$. The action of the
Liouvillian on any creation or destruction operator (or any
combination of them) is then easily computable from its definition,
Eq.~(\ref{liouville}). A natural choice for the scalar product, which is
necessary to compute the CF coefficients, is $(A|B)=
<\{A,B^{\dag}\}>$ as suggested by Eq.~(\ref{momentexp}), where
thermal averages reduce to their ground-state expectation value at zero
temperature. Details for computing them will be given in
a forthcoming paper.
In this formalism, the Green's function appears as the diagonal element of
the resolvent of the Liouvillian:
\begin{equation}
G(\omega)\equiv <<c_0;c_0>>_{\omega}=(c_0(\omega-\mathcal{H})^{-1}|c_0)
\end{equation}
This expression is the origin of the name we have proposed for this approach:
the Extended Recursion in Operator Space
(EROS); it reduces to the usual RP when only one-body operators appear in the Hamiltonian.
We now apply it to the SIAM Hamiltonian:
\begin{eqnarray} H_{SIAM}&=&\epsilon_0 \sum_{\sigma}n_{0 \sigma}+U
n_{0 \uparrow}n_{0 \downarrow} \nonumber\\
+ \sum_{k \sigma} \varepsilon_k c_{k
\sigma}^\dagger c_{k \sigma}
&+& \sum_{k \sigma}(V_{k \sigma}c_{0 \sigma
}^\dagger c_{k \sigma} + V_{k \sigma}^{*}c_{k \sigma}^\dagger c_{0
\sigma}),
\label{SIAM}\end{eqnarray} which describes a localized orbital $0$ that has
electronic correlations because of the Coulomb
interaction $U$ and is coupled to itinerant non-interacting
electrons of dispersion $\varepsilon_k$; the latter is reflected by an
hybridization function $\Delta(\omega)= \sum_k
\frac{|V_k|^2}{\omega-\varepsilon_k}$, when spin indices are ignored.
\begin{figure}[h!]
\centerline{\psfig{figure=fig1.eps,width=8cm,angle=0,scale=1}}
\vspace{-1cm}
\caption{\label{fig0}Schematic representation of
the Liouvillian}
\end{figure}
If we use a conventional RP to tridiagonalize the last 2 terms, we
can rewrite Eq.~(\ref{SIAM}) as:
\begin{eqnarray} H^{imp}= U
n_{0\uparrow}n_{0\downarrow}+ \sum_{\sigma } \epsilon_0
n_{0\sigma}\nonumber\\+\sum_{p>0 \sigma } ( \alpha_p c_{p
\sigma}^\dagger c_{p \sigma}+ \beta_p c_{p-1 \sigma}^\dagger c_{p
\sigma}+ \beta_{p+1} c_{p+1 \sigma}^\dagger c_{p \sigma}
)\label{Himp1}\end{eqnarray}
These coefficients $\{\alpha_p,\beta_{p+1}\}$ are the
tight-binding parameters of a semi-infinite chain and also
those of the CF expansion considered as a parametrization of the
hybridization function $\Delta(\omega)$, as already often noticed
in the NRG context
\cite{Uhrig04}, whereas a RP applied to Hamiltonian (\ref{Himp1})
provides the CF coefficients $\{a_n,b_{n+1}\}$ of the impurity
Green function $G(\omega)$. Eqs.~(\ref{liouville}) and (\ref{Himp1})
give the action of $\mathcal{H}$
on the basis operators that is necessary to perform an operator RP (where the up-spin
is assumed if a spin-index is omitted):
\begin{eqnarray} c_0 \mathcal{H}=\varepsilon_0 c_0+\beta_1 c_1+
U c^{}_0 c^{\dagger}_{0\downarrow} c^{}_{0\downarrow}\\
c_p \mathcal{H}=\alpha_p c_p+\beta_p c_{p-1}+\beta_{p+1} c_{p+1}
\;\; p>0. \end{eqnarray}
Compared to the ``natural'' operators $c_p$, related
to a given state $p$, the local 2-body interaction
$\mathcal{H}_{int}$ due to $U n_{0\uparrow}n_{0\downarrow}$ has
generated a new operator $c^{}_0 c^{\dagger}_{0\downarrow}
c^{}_{0\downarrow}$, which is the origin of a site representation
for 1/8 of a simple
three-dimensional cubic lattice, where each site represents an
operator $c^{}_p c^{\dagger}_{q \downarrow} c^{}_{r \downarrow}$ indexed
by three non-negative integers $p, q, r$ (see Fig. \ref{fig0}). This
construction is supported by the following observation, which can be
systematically extended:
\begin{eqnarray} c_0 n_{0\downarrow}\mathcal{H}=Uc_0
n_{0\downarrow}+
\beta_1 c^{}_1 c^{\dagger}_{0 \downarrow} c^{}_{0 \downarrow}
-\beta_1 c^{}_0
c^{\dagger}_{1 \downarrow} c^{}_{0 \downarrow}\nonumber\\
+\beta_1 c^{}_0 c^{\dagger}_{0 \downarrow} c^{}_{1 \downarrow} \end{eqnarray}
The RP can be performed in this landscape as easily as the regular
recursion on an ordinary lattice, through knowing the action of $\mathcal{H}$
on operators like
$c^{}_p c^{\dagger}_{q \downarrow} c^{}_{r \downarrow}$.
\begin{figure}[h!]
\vspace{-2cm}
\centerline{\psfig{figure=fig2.eps,width=8cm,angle=0,scale=1.2}}
\vspace{-2cm} \caption{\label{fig2} Two different recursion processes for $G(\omega)$ }
\end{figure}
Special attention should be paid to the case where one or
two of the indices $p,q,r$ are zero, viz., the 3 faces and the 3
edges of the 1/8 of the cubic lattice, since in these cases
$\mathcal{H}_{int}$ has a non-zero effect. For the edges, this operation gives
either $0$ or $U$ for the on-site term, but for the faces a new kind of
operator is generated, the product of 5 ``natural''
operators (3 destruction and 2 creation operators); the action
of the Liouvillian on these operators (which are not representable in Fig.~\ref{fig0})
can be represented on the
sites of a 5-dimensional hypercube.
These operators correspond to the motion of a hole
with 2 electron-hole pairs. In our calculations we did not take them exactly into
account, but instead projected them onto existing products of
3 operators (which corresponds to a certain EOM decoupling) or
simply neglected them; in terms of moments, their influence only starts with the 7th moment.
The self-energy, $\Sigma(\omega)$, which is related to the Green's function through
\begin{equation} G(\omega)=(\omega-\Sigma(\omega)-\Delta(\omega))^{-1},
\label{GSigDelta}\end{equation}
can be expanded as a CF:
\begin{eqnarray} \Sigma(\omega)&=&A_0+{B_1^2\over\displaystyle \omega-A_1--\cdots}.
\label{SigCF}
\end{eqnarray}
To determine the coefficients of this CF, we note that
the right hand side of Eq.~(\ref{GSigDelta}), which includes the sum of
$\Delta$ and $\Sigma$ in the denominator, can be expanded in a
CF by a RP on their 2 representative chains coupled by their
common first site as shown in Fig.~\ref{fig2}(a) \cite{JTM01};
at the same time this also has to give
the CF expansion for $G$ in Fig.~\ref{fig2}(b), which is calculated
from the RP in operator space. This enables
the coefficients of Eq.~(\ref{SigCF})
for the self-energy $\Sigma$ to be determined.
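In practice the same information can also be extracted pointwise on a frequency grid: once the CF coefficients of $G$ (from the operator-space RP) and the chain parameters of $\Delta$ are known, Eq.~(\ref{GSigDelta}) is simply inverted. The short Python sketch below illustrates this alternative route (the function names are ours, and the coefficient arrays are assumed to come from the recursions described above); the coefficient-matching construction of Fig.~\ref{fig2} yields the same $\Sigma(\omega)$ directly in CF form.
\begin{verbatim}
import numpy as np

def cf(z, a, b):
    # bottom-up evaluation of 1/(z - a_0 - b_1^2/(z - a_1 - ...))
    g = 0.0
    for n in reversed(range(len(a))):
        g = 1.0 / (z - a[n] - (b[n] ** 2 * g if n < len(b) else 0.0))
    return g

def self_energy(omega, aG, bG, alpha, beta):
    # aG, bG      : CF coefficients of the impurity Green function G
    # alpha, beta : chain parameters (alpha_1, alpha_2, ...), (beta_1, beta_2, ...)
    #               of the hybridization, Delta = beta_1^2/(w - alpha_1 - ...)
    G = cf(omega, aG, bG)
    Delta = beta[0] ** 2 * cf(omega, alpha, beta[1:])
    return omega - Delta - 1.0 / G     # invert G = 1/(omega - Sigma - Delta)
\end{verbatim}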
We now describe an application of our EROS methodology to a DMFT
solution of the Hubbard model.
An excellent description of the DMFT method is given in \cite{Georges96}
and a recent review \cite{Held07} describes recent developments and extensions.
The Hubbard model on a lattice (for site indices $i$ and $j$) can be written as
\begin{equation} H= \sum_{i\neq
j,\sigma} t_{ij} c_{i\sigma}^\dagger c_{j\sigma}+U\sum_i
n_{i\uparrow}n_{i\downarrow} \label{HHub}\end{equation} The motivation for
DMFT arises from the observation based on a diagrammatic analysis that in infinite dimensions
\cite{MetznerVollhardt89} the self-energy becomes local,
\textit{i.e.}, $k$-independent or simply $\Sigma(\omega)$.
In lower dimensions, this is often only approximately true. In the local limit for $\Sigma$ the
Hubbard model reduces to the problem of a correlated impurity at site
``$0$" embedded in an effective medium where all the effects of
correlation are represented by the self-energy $\Sigma(\omega)$.
This can be considered as a complex and energy-dependent
on-site energy in terms of a strong analogy with the TB language
used in the Coherent Potential Approximation (CPA; for a detailed discussion see \cite{CPAGonis}), which was developed for studying
alloys and has now been further extended for strongly correlated electron systems \cite{Kake02}:
\begin{equation} H_{DMFT}= U
n_{0\uparrow}n_{0\downarrow}+\sum_{i\neq j,\sigma} t_{ij}
c_{i\sigma}^\dagger c_{j\sigma}+\sum_{i\neq 0 \sigma }
\Sigma(\omega)n_{i\sigma} \label{Himp2}\end{equation}
It can be shown that this problem
maps onto a SIAM model, Eq.~(\ref{SIAM}). To understand this
within our approach, we start by noticing that the second term
of Eq.~(\ref{Himp2}), the ``hopping" term, can be tridiagonalized
by a RP to provide the CF expansion of the ``bare" hybridization function
$\Delta(\omega)$. The last term then functions like a constant onsite
energy in a TB Hamiltonian that simply shifts the frequency by $\Sigma(\omega)$.
Hence the last two terms of Eq.~(\ref{Himp2}) can be represented
by a Green's function for an effective TB model with the bare
hybridization function $\Delta(\omega)$ replaced with an effective
$\bar{\Delta}(\omega) = \Delta(\omega-\Sigma(\omega))$.
To complete the DMFT method, this self-consistency condition has to be
supplemented by the requirement that the
local impurity Green's function, computed from the Hamiltonian of
Eq.~(\ref{Himp2}), be equal to the effective lattice Green's
function, which has $\Sigma(\omega)$ on all sites, including
site ``$0$''. In RP language, this causes the local $\Sigma(\omega)$
to be added to the on-site terms $\alpha_{p}$ in the
chain representation for $\Delta(\omega)$. Including this
self-energy at site ``$p$" is equivalent to attaching to the TB
semi-infinite chain the CF expansion of $\Sigma$ as parameters
(see Appendix of \cite{JM93}), and leads to a comb-shape topology for
computing $\bar{\Delta}(\omega)$. Performing
again a RP on this object provides a CF expansion for
$\bar{\Delta}(\omega)$. The DMFT self-consistency is
achieved by simultaneously satisfying all of these CF equations as
one steps through the recursion procedure. The first few
recursion steps generate the CF coefficients $A_0\equiv
U\langle n_{0-\sigma}\rangle$ of $\Sigma$, corresponding to the Hartree-Fock (HF)
energy-independent self-energy, and then $B_1=Un(n-1)$ and
$A_1=U(1-n)$ give Hubbard I, which is exact in the atomic limit.
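Written out, these lowest truncations of Eq.~(\ref{SigCF}) are simply
(we display them only to make the correspondence explicit)
\begin{equation}
\Sigma^{\rm HF}(\omega)\approx A_0 = U\langle n_{0-\sigma}\rangle\,,\qquad
\Sigma^{\rm Hubbard\ I}(\omega)\approx A_0+\frac{B_1^2}{\omega-A_1}\,,
\end{equation}
i.e.\ the CF truncated after its first and second level, respectively.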
If only operators $c_p c^{\dagger}_{0 \downarrow}
c_{0 \downarrow}$ on one edge of the cubic lattice are retained,
Hubbard III is recovered. Further recursions generate approximations
that go well beyond these (see below). It is worth mentioning that our approach has
some similarities with the approach developed in
\cite{KakeFulde04}, but our direct use of a RP enables us to go
beyond this work by avoiding their requirement to orthogonalize
each new operator, which becomes difficult after the first few
steps.
\begin{figure}[h!]
\centerline{\psfig{figure=fig3.eps,width=8cm,angle=270,scale=0.7}}
\caption{\label{spectDEns.5} RP DMFT calculation of the Hubbard model: spectral density vs.\ energy for different values of $U$ (color online).}
\vspace{-.5cm}
\end{figure}
We have benchmarked our method on the well-studied half-filled case.
For simplicity, all calculations were performed at zero temperature
and used a HF solution of Eq.~(\ref{Himp2}) as an approximate
ground state for computing the scalar products of the operators.
In the future we will try to use a
better ground state such as a Gutzwiller approximation or an exact
diagonalization for a given number of sites. By including approximate
excited states in a thermal average, finite-temperature calculations are
also possible. For the lattice
model that describes the itinerant electrons, we used
a semi-elliptic non-interacting density of states, with the energy
scale set by the bandwidth. Other lattices would
only change the input CF coefficients for the hybridization
function. For several different values of $U$, the spectral
density is displayed in Fig.~\ref{spectDEns.5}. One clearly
observes the characteristic three-peak features in the Fermi liquid
regime: the coherent contribution of the central peak, and the lower
and upper Hubbard subbands with a splitting of the order of $U$.
One also observes a redistribution of the spectral weight of the coherent central peak
as the interaction $U$ increases.
The Fermi liquid behavior and the
intermediate regime can be better monitored by considering
the real and imaginary parts of the self-energy.
In the Fermi liquid regime, we have checked that the imaginary part of the
self-energy vanishes at the Fermi level $E_F$, as it should, and that
it has the usual $(\omega-E_F)^2$ behavior around $E_F$.
We have presented a new self-consistent impurity solver for the
SIAM model and a DMFT solution of the Hubbard model
using a RP procedure in an operator space. The lowest order
approximations to this method generate the Hartree-Fock, Hubbard I, and Hubbard III
approximations. Increasing the depth of the continued fractions goes well beyond these solutions,
generating, for example, the coherent Kondo peak, and can provide increasingly
accurate results.
The way that self-consistency is
achieved, step by step, makes the method efficient and
very promising for its use in DMFT approaches and other more
complex correlation problems.
JPJ is grateful to J.X. Zhu (Los Alamos) for helpful discussions, and F. Harris
(Gainesville) for sharing with us his code to compute scalar
products of operators. This work was carried out under the auspices
of the National Nuclear Security Administration of the U.S.
Department of Energy at Los Alamos National Laboratory under
Contract No. DE-AC52-06NA25396. Financial support from
CNLS, Los Alamos and DGA (Delegation Generale pour l'Armement, Contract No.
07.60.028.00.470.75.01) is gratefully acknowledged.
\section{Muon $g-2$ and physics beyond the SM}
\label{sec:BSMoverview}
In quantum field theory, the anomalous magnetic moment of the muon is
given by
\begin{align}
\amu &= -2m_\mu F_M(0)\,,
\label{amuFM}
\end{align}
where $m_\mu$ is the muon pole mass and $F_M(0)$ is the zero-momentum
limit of the magnetic moment form factor. The latter is defined via
the covariant decomposition of the 1-particle irreducible
muon--muon--photon vertex function $\Gamma^\mu(p,-p',q)$,
\begin{align}
\label{covdecomp}
\bar u(p')\Gamma^{\mu}(p,-p',q) u(p) & =
-eQ\,
\bar u(p')\left[\gamma^\mu F_V(q^2) + (p+p')^\mu F_M(q^2) +
\ldots\right] u(p)
\end{align}
with the on-shell renormalized electric charge $e$, $Q=-1$, on-shell momenta $p^2=p'{}^2=m_\mu^2$, on-shell spinors $u(p)$,
$u(p')$ and $q=p'-p$. The quantum field theory operator corresponding to $\amu$ connects
left- and right-handed muons, i.e.\ it involves a chirality flip.
The observable $\amu$ is CP-conserving, flavour conserving, loop induced, and chirality flipping.
These properties make it complementary to many other precision and collider observables.
In particular the need for a muon chirality flip has a pivotal influence on the BSM phenomenology of $\amu$. It requires
two ingredients.
\begin{itemize}
\item
Breaking of chiral symmetry. There must be a theory parameter
breaking the chiral symmetry under which the left- and right-handed
muon fields transform with opposite phases. In the SM and the MSSM
and many other models this chiral symmetry is broken only by the
non-vanishing muon Yukawa coupling $y_\mu$.\footnote{%
For the MSSM this statement is true if one follows the customary
approach to parametrize the trilinear scalar soft SUSY-breaking
parameters as $T_f\equiv A_f y_f$ by explicitly factoring out the
respective Yukawa couplings.}
In all these cases contributions to $F_M(0)$ are proportional to at
least one power of the muon Yukawa coupling, where e.g.\ the MSSM Yukawa
coupling is enhanced compared to the SM one.
In some models, there are additional sources of breaking of the muon
chiral symmetry. Examples are provided by the leptoquark model
discussed below in Sec.\ \ref{Sec:Leptoquarks}, where the
simultaneous presence of left- and right-handed couplings
$\lambda_{L,R}$ and the charm- or top-Yukawa coupling breaks the
muon chiral symmetry and leads to contributions governed by
$\lambda_L\lambda_R m_{c,t}$. Similar mechanisms can also exist in
the three-field models discussed below.
\item
Spontaneous breaking of electroweak gauge invariance.
Since the $\amu$ operator connects a left-handed lepton doublet and
right-handed lepton singlet it is not invariant under electroweak
(EW) gauge transformations. Hence any contribution to $F_M(0)$ also must
be proportional to at least one power of some vacuum expectation
value (VEV) breaking EW gauge invariance. In the SM, there is only
a single VEV $v$, so together with the required chirality flip, each
SM-contribution to $F_M(0)$ must be proportional to
$y_\mu v$ and thus to the tree-level muon mass. However, e.g.\ in the
MSSM, there are two VEVs $v_{u,d}$; hence there are contributions to
$\amu$ governed by $y_\mu v_u$, while the tree-level muon mass is
given via $y_\mu v_d$. This leads to the well-known enhancement by
$\tan\beta=v_u/v_d$.
\end{itemize}
In addition, the gauge invariant operators contributing to $\amu$ are (at least) of dimension six; hence any BSM contribution to $\amu$ is suppressed by (at least) two powers of a typical BSM mass scale.
In conclusion, BSM contributions to $\amu$ can generically be
parametrized as
\begin{align} \label{eqn:GeneralGM2Contribution}
\Delta a_\mu^{\text{BSM}} &=
C_{\text{BSM}}\frac{m_\mu^2}{M_{\text{BSM}}^2} ,
\end{align}
where $M_{\text{BSM}}$ is the relevant mass scale and where
the coefficient $C_{\text{BSM}}$ depends on all model details
like origins of chirality flips and electroweak VEVs as well as further
BSM coupling strengths and loop factors.\footnote{We note that
Eq.\ (\ref{eqn:GeneralGM2Contribution}) does not imply the naive
scaling
$\Delta a_e^{\text{BSM}}:\Delta a_\mu^{\text{BSM}}:\Delta a_\tau^{\text{BSM}}\approx
m_e^2:m_\mu^2:m_\tau^2$ with the lepton generation since the
coefficient $C_{\text{BSM}}$ does not have to be
generation-independent. Still, the prefactor
$m_l^2/M_{\text{BSM}}^2$ in $a_l$ implies that the muon magnetic
moment is more sensitive to BSM physics than the electron magnetic
moment and that typical models which explain e.g.\ the BNL deviation
for $\amu$ give negligible contributions to $a_e$.
For detailed discussions and examples for
deviations from naive scaling in models with leptoquarks, two Higgs doublets or supersymmetry we refer to Refs.\ \cite{Giudice:2012ms,Crivellin:2018qmi}.}
An interesting side comment is that
BSM particles will typically not only contribute to $\amu$ but also to the muon mass
in similar loops, and
those contributions depend on the same model details
and scale as $\Delta m_\mu^{\text{BSM}} /m_\mu \sim {\cal O}(C_{\text{BSM}})$.
The estimate $\Delta a_\mu^{\text{BSM}}\sim {\cal O}(\Delta m_\mu^{\text{BSM}}/m_\mu)\times
\frac{m_\mu^2}{M_{\text{BSM}}^2}$ is therefore valid in many models
\cite{Czarnecki:2001pv,Stockinger:1900zz}.
One may impose a criterion that these BSM corrections to the muon mass
do not introduce fine-tuning, i.e.\
do not exceed the actual muon mass.
In models where this criterion
is satisfied, $C_{\text{BSM}}$ can be at most of order unity and
a generic upper limit,
\begin{align}
\label{eq:damulimit}
\Delta a_\mu^{\text{BSM}} \lesssim {\cal O}(1)\frac{m_\mu^2}{M_{\text{BSM}}^2} ,
\end{align}
is obtained \cite{Czarnecki:2001pv,Stockinger:1900zz}.
In this wide class of models, imposing this criterion then \update{implies an
order-of-magnitude upper limit} on the mass
scale for which the value $\Delta a_\mu$ can be
accommodated:\footnote{%
The case of vector-like leptons provides an interesting exception with
a slightly more complicated behaviour, see the discussions in
Refs.\ \cite{Kannike2012,Dermisek:2013gta,Dermisek:2020cod} and below
in Sec.\ \ref{sec:two_field_overview}. There, also tree-level BSM
contributions to the muon mass exist, and the ratio between $\Delta a_\mu$ and $\Delta m_\mu ^{\text{tree}}$ does not
scale as $1/M_{\text{BSM}}^2$ as above but as $1/(16\pi^2v^2)$. This
might seem to allow arbitrarily high masses, circumventing the bounds
(\ref{eq:damulimit},\ref{BNLuppermassbound},\ref{FNALuppermassbound}). However, even using only tree-level effects in
the muon mass, these references also find upper mass
limits from perturbativity and constraints on the Higgs--muon coupling.
}
\begin{align}\label{BNLuppermassbound}
\text{BNL:}&&
\Delta a_\mu^{\text{BSM}}&=\update{27.9 \times 10^{-10}}
&
\Rightarrow
\
M_{\text{BSM}}&\lesssim
{\cal O}(2)\text{TeV}\\
\text{Including \update{FNAL}:}&&
\Delta a_\mu^{\text{BSM}}&=\update{25.1 \times 10^{-10}}
&
\Rightarrow
\
M_{\text{BSM}}&\lesssim
\update{ {\cal O}(2.1)}\text{TeV}\label{FNALuppermassbound}
\end{align}
In Appendix \ref{app:MuonGm2Contributions} we collect the generic
one-loop Feynman diagrams which can contribute to $\amu$ in a general
renormalizable quantum field theory. The results are expressed in
terms of generic masses and couplings and reflect the above
discussion. Contributions containing the factor $m_\mu^2$ correspond
to chirality flips on the external muon line governed by the SM Yukawa
coupling and VEV; the other contributions correspond to chirality
flips via BSM couplings or fermion masses and require the simultaneous
presence of BSM couplings to left- and right-handed muons, which in
turn requires that some virtual states in the loop are not pure gauge eigenstates but mix
via electroweak VEVs.
\section{Supersymmetry and the Minimal Supersymmetric Standard Model}
\label{sec:MSSM}
In the present section we consider the Minimal Supersymmetric Standard
Model (MSSM). It is one of
the most promising extensions of the SM and offers potential
explanations of EW naturalness and of dark matter as the lightest
supersymmetric particle (LSP), it is compatible
with unification of gauge couplings --- and it could easily explain
the deviation $\Delta a_\mu$ \update{based on the initial BNL
measurement. However a tension has developed because of LHC
and dark matter searches, which severely constrain the MSSM parameter
space. } \update{The tension remains in place
in view of the new FNAL result (\ref{Eq:FNALresult}) if the MSSM is
required to accommodate the deviation $\damu^{\text{2021}}$.}
Here we present an update and investigate the parameter space of the MSSM which can
accommodate \update{this deviation, given the new FNAL measurement
Eq.\ (\ref{Eq:FNALresult}) and the SM theory update
(\ref{amuSMWP}).} We focus particularly on the
combined constraints from LHC
and dark matter direct detection (DMDD) searches.
The MSSM contributions have been extensively studied in the
past. For analyses and reviews of the basic behaviour we refer to
Refs.\ \cite{Moroi:1995yh,Martin:2001st,Stockinger:2006zn,Cho:2011rk};
higher-precision calculations were done in Refs.\
\cite{Arhrib:2001xx,Chen:2001kn,Heinemeyer:2003dq,Heinemeyer:2004yq,Feng:2006ei,Marchetti:2008hw,vonWeitershausen:2010zr,Fargnoli:2013zda,Fargnoli:2013zia}.
For recent studies
in the light of LHC run-II data we refer to
Refs.\ \cite{Hagiwara:2017lse,Choudhury:2017acn,Endo:2020mqz,Chakraborti:2020vjp,Horigome:2021qof}
(which involve a detailed recasting of LHC data) and
\cite{Yin:2016shg,Zhu:2016ncq,Hussain:2017fbp,Altin:2017sxx,Choudhury:2017fuu,Endo:2017zrj,Yanagida:2017dao,Frank:2017ohg,Bagnaschi:2017tru,Wang:2018vxp,Chakraborti:2017dpu,Ning:2017dng,Li:2017fbg,Pozzo:2018anw,Tran:2018kxv,Yang:2018guw,Wang:2018vrr,Cox:2018qyi,Cox:2018vsv,Bhattacharyya:2018inr,Ibe:2019jbx,Abdughani:2019wai,Dong:2019iaf,Kotlarski:2019muo,Yanagida:2020jzy,Yang:2020bmh,Han:2020exx,Cao:2019evo,Yamaguchi:2016oqz,Yanagida:2018eho,Shimizu:2015ara,Yin:2016pkz}
(which involve model building, the construction of specific scenarios
and/or constraints from dark matter or electroweak precision observables). Earlier
studies of the SUSY phenomenology of $\amu$ of the LHC run 1 era
can be found in
Refs.\ \cite{Cho:2011rk,Endo:2011gy,Endo:2011xq,Endo:2011mc,Kim:2012fc,Ibe:2012qu,Zhang:2013hva,Bhattacharyya:2013xma,Akula:2013ioa,Evans:2013uza,Ibe:2013oha,Endo:2013lva,Endo:2013bba,Iwamoto:2014ywa,Kersten:2014xaa,Badziak:2014kea,Gogoladze:2014cha,Bach:2015doa,Ajaib:2015yma,Kowalska:2015zja,Khalil:2015wua,Gogoladze:2015jua,Baez:2015sqj,Harigaya:2015jba,Harigaya:2015kfa,Chakrabortty:2015ika,Padley:2015uma,Chowdhury:2015rja,Li:2016ucz,Okada:2016wlm,Belyaev:2016oxy,Kobakhidze:2016mdx,Belanger:2017vpq,Fukuyama:2016mqb}.
Refs.\ \cite{Badziak:2019gaf,Endo:2019bcj} study simultaneous SUSY
explanations of $(g-2)$ of both the muon and the electron.\footnote{%
\label{aefootnote}
There used to be a slight disagreement between the experimental number for
the electron $g-2$ \cite{Hanneke:2008tm} and the SM theory evaluation
\cite{Aoyama:2012wj,Aoyama:2017uqe} based on the
measurement of the fine-structure constant of
Ref.\ \cite{Parker:2018vye}.
However, a more
recent measurement of the
fine-structure constant of Ref.\ \cite{Morel:2020dww} leads to good agreement
between theory and experiment for the electron $g-2$. Here we do
not consider the electron $g-2$ further.} We will provide more detailed comparisons to the
literature in the following subsections.
The MSSM
contributions to $\amu$ are mainly generated
by left- and right-handed smuons ${\tilde{\mu}}_{\text{L},\text{R}}$,
sneutrino ${\tilde{\nu}}_{\mu\text{L}}$,
Bino ${\tilde{B}}$, Winos ${\tilde{W}}_{1,2,3}$, and
Higgsinos ${\tilde{H}}_{u,d}$
which are the SUSY partners of the muon, muon-neutrino, and the
electroweak SM gauge and Higgs bosons.
${\tilde{B}}$, ${\tilde{W}}_{1,2,3}$ and ${\tilde{H}}_{u,d}$
form the neutralino $\chi ^0 _{1,2,3,4}$ and chargino $\chi ^\pm _{1,2}$ mass eigenstates.
They appear in the one-loop
diagrams of Fig.\ \ref{fig:FFSDiagram} (with charginos and
sneutrinos) and of Fig.\ \ref{fig:SSFDiagram} (with neutralinos and
smuons).
To provide a brief overview and set the stage for the following
discussion we present Fig.~\ref{fig:briefsurveyplot} and the following
approximation of
the leading MSSM contributions to $\amu$,
\begin{subequations}
\label{eq:SUSY1Lnumerical}
\begin{align}
\amuWHL
&\approx21\times10^{-10}\text{sign}(\mu M_2) \left(\frac{500\text{ GeV}}{M_{\text{SUSY}}}\right)^2\frac{\tan\beta}{40},
\label{eq:SUSYWHL}\\
\Delta a_{\mu}^{1{\rm L}}(\text{BHL}) &\approx1.2\times10^{-10}\text{sign}(\mu M_1)\left(\frac{500\text{ GeV}}{M_{\text{SUSY}}}\right)^2\frac{\tan\beta}{40},\\
\Delta a_{\mu}^{1{\rm L}}(\text{BHR}) &\approx-2.4\times10^{-10}\text{sign}(\mu M_1)\left(\frac{500\text{ GeV}}{M_{\text{SUSY}}}\right)^2\frac{\tan\beta}{40},\\
\amuBmuLmuR
&\approx2.4\times10^{-10}\text{sign}(\mu M_1)\left(\frac{500\text{ GeV}}{M_{\text{SUSY}}}\right)^2\frac{\tan\beta}{40}\frac{\mu}{500\text{ GeV}}. \label{eq:SUSYBLR}
\end{align}
\end{subequations}
The letters B, W, H, L, and R are abbreviations for Bino, Wino, Higgsino, and left- and right-handed
smuons, respectively.
As indicated by the letters in brackets, each contribution depends on three of these states and their
respective masses.
In each formula the three appropriate masses have been set to a common
scale $M_{\text{SUSY}}$ and $M_{\text{SUSY}}\gg M_Z$ is assumed. The approximations
highlight the linearity in $\tan\beta=v_u/v_d$, the ratio of the two
MSSM Higgs VEVs.
The physics of the individual contributions is as follows. In the
first three lines the muon chirality is flipped at the Yukawa coupling
to a Higgsino; the Higgsino is then converted to a gaugino via a Higgs
vacuum expectation value. The appearance of the enhanced VEV is always
accompanied by a factor of the Higgsino mass $\mu$. This explains the
appearance of one Higgsino, one gaugino (Bino or Wino) and of the factors $\tan\beta$
and $\mu$ in the numerators.
The BLR
contribution in the fourth line is special.
Here the muon chirality is flipped at the smuon line via the insertion
of a smuon-left-right flip, which is governed by a product of the
Higgsino mass $\mu$ and the enhanced VEV in the smuon mixing
matrix. This explains the appearance of left- and right-handed smuons
and of the factors $\tan\beta$ and $\mu$ in the numerator.
In contrast to the other contributions, the BLR contribution is
approximately linearly enhanced by the Higgsino mass $\mu$ since this
parameter does not appear as the mass of a virtual Higgsino. The signs
of all contributions are
determined by the signs of $\mu$ and the gaugino masses
$M_{1,2}$. In the following we will only consider positive signs of
all these parameters, leading to positive MSSM contributions to $\amu$.
Figure
\ref{fig:briefsurveyplot} shows the theoretical maximum
of the SUSY contributions (i.e.\ MSSM minus SM contributions) $\amu^{\text{SUSY,Max}}$
in the plane of $m _{\chi ^\pm _2}$ and $m_{{\tilde{\mu}} _1}$, where $\chi ^\pm _2$ is the heavier
chargino and $\tilde{\mu} _1$ the lighter smuon (for similar plots in
different planes see
Refs.\ \cite{Byrne:2002cw,Stockinger:2006zn,Badziak:2014kea}).
It fixes $\tan\beta=40$ and allows all SUSY masses to vary
independently between $100$\,GeV and
4\,TeV, for further details see the caption.\footnote{%
Widening the range of masses would not change the plot since the
maximum $\amu^{\text{SUSY}}$ is obtained in the bulk of the mass range, not at
its boundary. This is a reflection of the fact that the leading
contributions approximated in Eq.\ (\ref{eq:SUSY1Lnumerical}) are
suppressed if Bino, Wino or Higgsino masses are too small.}
The red dashed lines and the yellow/green coloured regions correspond
to the indicated values of $\amu^{\text{SUSY,Max}}$; specifically the
yellow and green colours correspond to the $1\sigma$ regions for the
BNL deviation and the new deviation including FNAL in
Eqs.\ (\ref{eqn:BNLDiscrepancy},\ref{eqn:avgDiscrepancy}),
respectively; the bright green colour corresponds to the overlap
region. As indicated on the right of the legend plot, an alternative
interpretation of the contour lines and regions is possible. Thanks to
the approximate linearity in $\tan\beta$, each value for $\amu^{\text{SUSY}}$ with
$\tan\beta=40$ can be translated into approximate $\tan\beta$-values
for which the other values of $\amu^{\text{SUSY}}$ can be obtained. In the legend
plot this translation is done for $\damu^{\text{2021}}=\update{25.1\times10^{-10}}$,
the new experimental average deviation from the SM. E.g.\ on the contour
labeled \update{``20''} we have $\amu^{\text{SUSY,Max}}=20\times10^{-10}$
for $\tan\beta=40$, and equivalently we would get
$\amu^{\text{SUSY,Max}}=\damu^{\text{2021}}$
for $\tan\beta\approx\update{50}$. On the contour
labeled \update{``50''}, we have $\amu^{\text{SUSY,Max}}=\update{50}\times10^{-10}$
for $\tan\beta=40$, and the world average deviation can be
explained for $\tan\beta\approx\update{20}$.
The figure and the formulas show that large contributions in the
ballpark of the BNL deviation are of
course possible but require upper limits on the SUSY masses. E.g.\
$\amu^{\text{SUSY}}>20\times10^{-10}$ \update{(which is just at the
lower $1\sigma$ level of the deviation)} can
only be explained (for $\tan\beta=40$ and $\mu\le4$\,TeV) if
\begin{itemize}
\item either both chargino masses are lighter than around 1.1\,TeV
(vertical black line in Fig.~\ref{fig:briefsurveyplot})
\item
or one smuon is lighter than around 700\,GeV (horizontal black line in Fig.~\ref{fig:briefsurveyplot}).
\end{itemize}
This \update{already illustrates the potential
tension} between $\amu$ and LHC data since the
LHC-searches are sensitive to chargino and smuon masses above these
values. \update{In addition to LHC, constraints from dark matter searches enforce
lower limits on combinations of the chargino and neutralino masses.}
In more detail, the WHL contributions (\ref{eq:SUSYWHL}) are by far
the most important in the largest part of the parameter space.
However for large $\mu$, the BLR contribution can become sizable as
well. In Fig.\ \ref{fig:briefsurveyplot} the slight rise of
$\amu^{\text{SUSY,Max}}$ with increasing $m_{\chi^\pm_2}$ is due to the
BLR contribution. In contrast, the BHR and
particularly the BHL
contributions are not important for the maximum value
$\amu^{\text{SUSY,Max}}$, and in general they are practically always strongly suppressed, unless there
are extremely large mass splittings between left- and right-handed
smuons. We do not consider such cases in the present paper; hence in
the following only the WHL and BLR contributions are important.\footnote{%
Specifically the BHR contribution can be decisive if
$\tan\beta\gg50$ and if $m_\text{L}\gg m_\text{R}$ \cite{Bach:2015doa}. For further
dedicated investigations focusing on parameter situations in which
the BHL or BHR contributions dominate we refer to
Ref.\ \cite{Endo:2017zrj}.\label{footnoteBHR}}
In the following subsections we will first introduce the SUSY
contributions to $\amu$ in more detail (Sec.~\ref{sec:SUSYcontributions}), survey relevant
LHC and dark matter constraints (Sec.~\ref{sec:SUSYconstraints}) and present a detailed
phenomenological analysis and brief summary
(Secs.~\ref{sec:SUSYphenosetup},~\ref{sec:SUSYpheno} and Sec.~\ref{sec:SUSYSummary}).
\begin{figure}[t]
\includegraphics[scale=1.]{SUSYplots/OnlyAmuGeneralMSSM.pdf}
\includegraphics[scale=1.]{SUSYplots/PlotAmuLegend.pdf}
\caption{\label{fig:briefsurveyplot}
The theoretical maximum MSSM contribution $\amu^{\text{SUSY,Max}}$ for
$\tan\beta=40$ in
the plane of the heaviest chargino and the lightest smuon
mass. (For each point in the plane, the actual value of the MSSM
contribution can take any value between $0$ and $\pm
\amu^{\text{SUSY,Max}}$, depending on the signs of parameters and
details such as other masses and mixings.) The yellow/green
coloured regions show where
$\amu^{\text{SUSY,Max}}$ (for $\tan\beta=40$) is within the $1\sigma$ bands corresponding
to the BNL and new deviations $\Delta a_\mu^{\textrm{BNL}}$ and
$\damu^{\text{2021}}$, see Eqs.\
(\ref{eqn:BNLDiscrepancy},\ref{eqn:avgDiscrepancy}), and their
overlap.
The red dashed contour lines can be interpreted in two
ways. Firstly, they directly correspond to certain values of
$\amu^{\text{SUSY,Max}}$ for $\tan\beta=40$, as indicated in the left
axis of the legend plot. Secondly, thanks to the approximate
linearity in $\tan\beta$, each contour can be used to estimate the
required $\tan\beta$ value for which $\amu^{\text{SUSY,Max}}$ agrees
with the deviation $\damu^{\text{2021}}$ (keeping other input
parameters fixed). These $\tan\beta$ values can be
read off from the right axis of the legend plot (the values are
approximate since the linearity is not exact).
As an example of the reinterpretation we take the point $m_{\chi^\pm_2} = 1750$ GeV and
$m_{\tilde{\mu}_1}= 700$ GeV. For $\tan\beta =40$ we get
$\amu^{\text{SUSY,Max}} =10 \times 10^{-10}$. The required
$\tan\beta$ value to get $\damu^{\text{2021}}$ \update{would be around
100}, as read off from the right axis.
The
results for $\amu^{\text{SUSY,Max}}$ were obtained from a scan using
GM2Calc \cite{Athron:2015rva} in which all relevant SUSY masses
are varied independently between $100$\,GeV and 4\,TeV.
The black lines
indicate the maximum LHC reach for charginos and sleptons of
$1100$ and $700$ GeV reported in
Refs.\ \cite{Aaboud:2018jiw,Aad:2019vnb}, respectively.
}
\end{figure}
\subsection{SUSY parameters and contributions to $\amu$}
\label{sec:SUSYcontributions}
In the following we define our notation and provide details on the
MSSM contributions to $\amu$. Our notation is essentially the same as
e.g.\ in Refs.\ \cite{Martin:2001st,Stockinger:2006zn,Cho:2011rk}. To
fix it we provide the mass matrices of the charginos,
neutralinos, and smuons:
\begin{align}
X &=
\left(\begin{array}{cc}
M_2 & M_W\sqrt2 \sin\beta\\
M_W \sqrt2 \cos\beta & \mu
\end{array}
\right)
,
\\
Y &=
\left(\begin{array}{cccc}
M_1 & 0 & -M_Z s_\text{W} \cos\beta & M_Z s_\text{W} \sin\beta \\
0 & M_2 & M_Z c_\text{W} \cos\beta & -M_Z c_\text{W} \sin\beta \\
-M_Z s_\text{W} \cos\beta &M_Z c_\text{W} \cos\beta & 0 & -\mu \\
M_Z s_\text{W} \sin\beta & -M_Z c_\text{W} \sin\beta& -\mu & 0
\end{array}
\right),
\\
M_{\tilde{\mu}}^2 &= \left(\begin{array}{lr}
m_\mu^2 + m_\text{L} ^2 + M_Z ^2 \cos2\beta\left(-\frac12+s_\text{W} ^2\right)
&
m_\mu (-\mu\tan\beta+A_\mu^*)
\\
m_\mu (-\mu^*\tan\beta+A_\mu)
&
m_\mu^2 + m_{\text{R}}^2 + M_Z ^2\cos2\beta\ (-s_\text{W} ^2)
\end{array}
\right),
\label{smuonmassmatrix}
\end{align}
where the fundamental SUSY parameters appearing in these expressions are $\tan\beta$ and
the two gaugino (Bino and Wino) mass parameters $M_{1,2}$, the
Higgsino mass $\mu$ and the
left-/right-handed smuon masses $m_{\text{L}, \text{R}}$. The smuon mass matrix also involves the trilinear
soft SUSY-breaking
parameter $A_\mu$, which however will not play any role in the present
paper. The other appearing parameters are
the SM parameters $m_\mu,M_{W,Z}$ and $s_\text{W} =\sqrt{1-c_\text{W} ^2}$.
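The physical chargino (neutralino) masses follow from a bi-unitary, i.e.\
singular-value, decomposition of $X$ (a Takagi diagonalization of $Y$). A
minimal numerical sketch for the chargino sector, with purely illustrative
parameter values, is:
\begin{verbatim}
import numpy as np

M_W = 80.4  # GeV

def chargino_masses(M2, mu, tan_beta):
    """Tree-level chargino masses (GeV) as singular values of the mass matrix X."""
    beta = np.arctan(tan_beta)
    X = np.array([[M2, np.sqrt(2.0) * M_W * np.sin(beta)],
                  [np.sqrt(2.0) * M_W * np.cos(beta), mu]])
    return np.sort(np.linalg.svd(X, compute_uv=False))

print(chargino_masses(M2=600.0, mu=500.0, tan_beta=40.0))
# -> approx [476, 630] GeV: close to (mu, M2), shifted by gaugino-Higgsino mixing
\end{verbatim}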
In our numerical treatment the fundamental SUSY parameters are defined
as running $\overline{\text{DR}}$-parameters at the scale 1 TeV, and
the
\code{MSSMEFTHiggs_mAmu} spectrum generator, created with\footnote{%
\texttt{FlexibleSUSY}\@\xspace is a generic spectrum generator generator, and the
FlexibleEFTHiggs extension \cite{Athron:2016fuq,Athron:2017fvs,Kwasnitza:2020wli} improves the Higgs mass
calculation by resummation of large logarithms. The version used
within \GAMBIT\ is the one of Ref.\ \cite{Athron:2017fvs}. \texttt{FlexibleSUSY}\@\xspace also uses some numerical routines originally from
\cite{Allanach:2001kg,Allanach:2013kza} and uses \texttt{SARAH}\@\xspace 4.14.1
\cite{Staub:2009bi,Staub:2010jh,Staub:2012pb,Staub:2013tta}.}
{\texttt{FlexibleSUSY}\@\xspace} \cite{Athron:2014yba,Athron:2017fvs} and incorporated in \GAMBIT-1.3,
is used for
the precise evaluation of the spectrum of mass eigenvalues including
higher-order corrections.
The 1-loop contributions of the MSSM to $\amu$ have been
systematically and comprehensively studied in Ref.\ \cite{Moroi:1995yh}, for
reviews see
Refs.\ \cite{Martin:2001st,Stockinger:2006zn,Cho:2011rk}. A wide range
of higher-precision
calculations of 2-loop contributions is
available. Including higher-order corrections, the full known
SUSY contributions (i.e.\ the difference between MSSM
and SM contributions) to $\amu$ can
be written as
\newcommand{a_\mu^{\rm 1L\, SUSY}}{a_\mu^{\rm 1L\, SUSY}}
\newcommand{a_\mu^{{\rm 2L,} f\tilde{f}}}{a_\mu^{{\rm 2L,} f\tilde{f}}}
\newcommand{a_\mu^{\rm 2L(a)}}{a_\mu^{\rm 2L(a)}}
\newcommand{a_\mu^{\rm 2L,\ photonic}}{a_\mu^{\rm 2L,\ photonic}}
\begin{align}
\amu^{\text{SUSY}} &= \left[a_\mu^{\rm 1L\, SUSY} +a_\mu^{\rm 2L(a)}
+
a_\mu^{\rm 2L,\ photonic}
+a_\mu^{{\rm 2L,} f\tilde{f}}\right]_{t_\beta\text{-resummed}}
\ .
\label{amuSUSYdecomposition}
\end{align}
Refs.\ \cite{Arhrib:2001xx,Chen:2001kn,Heinemeyer:2003dq,Heinemeyer:2004yq,Cheung:2009fc}
evaluated all 2-loop diagrams $a_\mu^{\rm 2L(a)}$ in which a SUSY loop is inserted into a
SM-like loop, including so-called Barr-Zee
diagrams. Given current experimental constraints these diagrams are
very small. Refs.\ \cite{Degrassi:1998es,vonWeitershausen:2010zr}
computed leading QED-logarithms and the full 2-loop QED corrections $a_\mu^{\rm 2L,\ photonic}$;
Refs.\ \cite{Marchetti:2008hw,Bach:2015doa} showed how to take into
account $n$-loop higher-order terms enhanced by $(\tan\beta)^n$, i.e.\
carry out a $\tan\beta$-resummation.
Finally Refs.\ \cite{Fargnoli:2013zda,Fargnoli:2013zia} computed
genuine SUSY 2-loop corrections $a_\mu^{{\rm 2L,} f\tilde{f}}$ to the SUSY 1-loop diagrams which include
non-decoupling effects from e.g.\ heavy squarks. Each of
these three kinds of corrections can shift the 1-loop contributions by
around 10\%. All mentioned 1-loop and 2-loop contributions in Eq.\ (\ref{amuSUSYdecomposition})
are
implemented in the code GM2Calc \cite{Athron:2015rva}, which is used
in
our later phenomenological evaluations.\footnote{%
For higher-order calculations in extensions of the MSSM see
Refs.\ \cite{Su:2020lrv,Liu:2020nsm,Dong:2019iaf,Zhao:2014dx
}.}
The exact one-loop expression for $\amu^{\text{1L SUSY}}$ can be
found in most mentioned references; a full overview of all
contributions including higher orders is given in
Ref.\ \cite{Athron:2015rva}.
Here we provide the 1-loop
contributions in the mass-insertion
approximation, which allows one to directly read off the main parameter
dependences. Following the form given e.g.\ in
Refs.\ \cite{Cho:2011rk,Fargnoli:2013zia,Bach:2015doa} they
read
\begin{align}
\amu^{\text{1L SUSY}} &\approx \amuWHL + \Delta a_{\mu}^{1{\rm L}}(\text{BHL}) \nonumber + \Delta a_{\mu}^{1{\rm L}}(\text{BHR}) +
\amuBmuLmuR,
\end{align}
with
\begin{subequations}
\label{eq:SUSYMIapprox}
\begin{align}
\amuWHL &= \frac{g_{2}^{2}}{8 \pi ^{2}} \frac{m_{\mu} ^{2} M_{2}}{m_\text{L} ^{4}}\,\mu \tan\beta\,
F_{a}\left(\frac{M_{2}^{2}}{m_\text{L} ^{2}},\frac{\mu^{2}}{m_\text{L} ^{2}}\right) \nonumber\\*
&- \frac{g_{2}^{2}}{16 \pi ^{2}} \frac{m_{\mu}^{2} M_{2}}{m_\text{L} ^{4}}\,\mu \tan\beta\,
F_{b}\left(\frac{M_{2}^{2}}{m_\text{L} ^{2}},\frac{\mu^{2}}{m_\text{L} ^{2}}\right),\\*
\Delta a_{\mu}^{1{\rm L}}(\text{BHL}) &= \frac{g_{1}^{2}}{16 \pi ^{2}}\frac{m_{\mu}^{2} M_{1}}{m_\text{L} ^{4}}\,\mu \tan\beta\,
F_{b}\left(\frac{M_{1}^{2}}{m_\text{L} ^{2}},\frac{\mu^{2}}{m_\text{L} ^{2}}\right), \\*
\Delta a_{\mu}^{1{\rm L}}(\text{BHR}) &= - \frac{g_{1}^{2}}{8 \pi ^{2}} \frac{m_{\mu}^{2} M_{1}}{m_\text{R} ^{4}}\,\mu \tan\beta\,
F_{b}\left(\frac{M_{1}^{2}}{m_\text{R} ^{2}},\frac{\mu^{2}}{m_\text{R} ^{2}}\right), \\*
\amuBmuLmuR &= \frac{g_{1}^{2}}{8 \pi ^{2}} \frac{m_{\mu}^{2}}{M_{1}^{3}}\,\mu \tan\beta\,
F_{b}\left(\frac{m_\text{L} ^{2}}{M_{1}^{2}},\frac{m_\text{R} ^{2}}{M_{1}^{2}}\right).
\end{align}
\end{subequations}
For SUSY masses significantly above $M_Z$ this is a very good
approximation (of the 1-loop contributions). Physics explanations and
numerical examples have
already been given around Eq.\ (\ref{eq:SUSY1Lnumerical}). Next to
$\tan\beta$ and the $SU(2)\times U(1)$ gauge couplings $g_{1,2}$,
these contributions depend on the five independent SUSY mass
parameters $M_{1,2}$, $\mu$ and $m_{\text{L}, \text{R}}$ introduced above.
The appearing loop functions are normalized as $F_a(1,1)=1/4$,
$F_b(1,1)=1/12$ and can be found in the mentioned references as well
as in Appendix \ref{app:MuonGm2Contributions}.
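As a quick numerical cross-check of Eqs.\ (\ref{eq:SUSY1Lnumerical}) one can
evaluate Eqs.\ (\ref{eq:SUSYMIapprox}) at a common mass scale, using the quoted
normalizations $F_a(1,1)=1/4$ and $F_b(1,1)=1/12$. The snippet below does this
with approximate values for the gauge couplings and without the higher-order
(QED and $\tan\beta$-resummation) factors, so only rough agreement is expected:
\begin{verbatim}
import numpy as np

m_mu = 0.1056584        # GeV
g1, g2 = 0.36, 0.65     # approximate U(1)_Y (non-GUT-normalized) and SU(2)_L couplings

def amu_1L_degenerate(M, tb):
    """MIA contributions with all SUSY mass parameters (including mu) set to M."""
    pref = m_mu**2 * tb / M**2
    whl = pref * g2**2 * (0.25 / (8 * np.pi**2) - (1.0 / 12.0) / (16 * np.pi**2))
    bhl = pref * g1**2 * (1.0 / 12.0) / (16 * np.pi**2)
    bhr = -pref * g1**2 * (1.0 / 12.0) / (8 * np.pi**2)
    blr = pref * g1**2 * (1.0 / 12.0) / (8 * np.pi**2)
    return whl, bhl, bhr, blr

print([f"{c * 1e10:+.1f}" for c in amu_1L_degenerate(M=500.0, tb=40.0)])
# -> roughly (+20, +1.2, -2.4, +2.4) x 10^-10, in line with Eq. (eq:SUSY1Lnumerical)
\end{verbatim}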
The linear enhancement in $\tan\beta$ already explained in
Sec.\ \ref{sec:BSMoverview} is apparent.\footnote{%
The leading higher-order effects from QED-logarithms and from $n$-loop
$(\tan\beta)^n$-effects can be approximately taken into account by
multiplying the formulas by
\cite{Degrassi:1998es,vonWeitershausen:2010zr,Marchetti:2008hw,Bach:2015doa}
$\left(1-\frac{4\alpha}{\pi}\log\frac{M_{\text{SUSY}}}{m_\mu}\right)/(1+\Delta_\mu)$,
where $\Delta_\mu$ is a correction to the muon Yukawa coupling and
$M_{\text{SUSY}}$ the appropriate SUSY mass scale.} The $\tan\beta$
enhancement is accompanied by explicit factors of the Majorana gaugino
masses $M_{1,2}$ and the MSSM Higgsino mass parameter $\mu$.\footnote{As a side
remark, in SUSY models with continuous R-symmetry such as the MRSSM,
such Majorana gaugino masses and the $\mu$-parameter are zero. Hence $\amu^{\text{SUSY}}$ is not $\tan\beta$-enhanced,
leading to distinctly different $\amu$ phenomenology
\cite{Kotlarski:2019muo}.
\label{footnoteMRSSM}}
As indicated, all contributions in Eq.~(\ref{eq:SUSYMIapprox}) involve
three different SUSY masses; the generic behaviour is
$\propto1/M_{\text{SUSY}}^2$. The BLR-contribution in
Eq.\ (\ref{eq:SUSYMIapprox}) is
special because it is linearly enhanced by large $\mu$;
this enhancement arises via the smuon mixing off-diagonal element in
Eq.\ (\ref{smuonmassmatrix}).
Specific constraints on this BLR-contribution have
been very thoroughly investigated in Ref.\ \cite{Endo:2013lva}.
Most importantly, vacuum stability requires that staus, the
superpartners of $\tau$-leptons, do not receive
a charge-breaking vacuum expectation value, and this provides a
constraint on the relation between the off-diagonal and diagonal
elements of the stau mass matrix similar to
Eq.\ (\ref{smuonmassmatrix}). As a quantitative example,
Ref.\ \cite{Endo:2013lva} finds that in case of universal left- and
right-handed stau masses, the Higgsino mass has an upper limit,
specifically
\begin{align}\label{VacStabEndo}
m_{\tilde{\tau}_\text{L}}=m_{\tilde{\tau}_\text{R}}&=300\text{
GeV (}600\text{ GeV)}
&\Rightarrow&&
\text{ }\mu&\lesssim1\text{
TeV (}2\text{ TeV)}\,.
\end{align}
In our later plots we will show the appropriate
constraint in the approximate form of Eq.\ (14) of
Ref.\ \cite{Endo:2013lva}.
\subsection{LHC and dark matter constraints on explanations of $\damu^{\text{2021}}$}
\label{sec:SUSYconstraints}
Our aim is to analyze MSSM contributions to $\amu$ in the context of
constraints from LHC and dark matter searches. Here we list the
relevant constraints.
The relevant LHC constraints can be grouped into ``standard'' searches
for electroweak particles (charginos/neutralinos and sleptons) and
searches optimized for compressed spectra. The relevant searches are
the following:
\begin{itemize}
\item
Chargino/neutralino searches with decay into sleptons:
The strongest chargino/neutralino mass limits are obtained from the
pair production channel $pp\to{\chi} ^\pm _1 {\chi} ^0
_2$ with subsequent decay via on-shell sleptons into three charged
leptons and two LSPs. In simplified-model
interpretations, in which 100\%\ decay branching ratios are assumed
and the slepton mass is halfway between the LSP- and the chargino
mass, the limits extend up to \cite{Aaboud:2018jiw,Sirunyan:2017lae}
\begin{subequations}\label{Charginosleptonlimits}
\begin{align}
\text{Chargino/slepton channel:}\quad
\parbox{0cm}{\mbox{$ \chi_1^\pm\chi_2^0\to\tilde{l}_\text{L} \nu\tilde{l}_\text{L} l(\tilde{\nu}\nu),l\tilde{\nu}\tilde{l}_\text{L}
l(\tilde{\nu}\nu) \to l\nu\chi_1^0,
ll(\nu\nu)\chi_1^0$}}
&&
&&
\\\label{maximumChamasslimit}
&& m_{\chi^\pm_1}&\approx1100\text{ GeV}
&&(\text{for
$m_{\text{LSP}}\approx0\ldots500\text{ GeV}$}),\\
&& m_{\text{LSP}} &\approx700\text{ GeV}
&& (\text{for
$m_{\chi^\pm_1}\approx900\ldots1000 \text{ GeV}$}).
\end{align}
\end{subequations}
Further important, slightly weaker limits are
obtained from the pair production channel $pp \rightarrow {\chi}
^\pm _1 {\chi} ^\mp _1$ with subsequent decay via on-shell sleptons into
two charged leptons and two LSPs. The limits in simplified-model
interpretations reach up to $m_{\chi^\pm_1}\approx 1000$ GeV
\cite{Aad:2019vnb}.
All limits of this kind depend on a significant mass splitting
$\gtrsim100$ GeV between
the chargino and LSP masses; for smaller mass splittings the limits
become much weaker.
\item
Chargino/neutralino searches with decay into other particles:
The above limits are absent if the charginos cannot decay into
sleptons e.g.\ because sleptons are too heavy.
In such cases further limits
are applicable.
One such limit is obtained from the channel
$pp\to{\chi}^\pm _1 {\chi} ^0 _2$ assuming subsequent decays into on-shell $W$ and Higgs
bosons plus LSP. This limit extends up to \cite{Aad:2019vvf}
\begin{subequations}\label{eq:ChaWhlimits}
\begin{align}
\text{Chargino/$Wh$-channel:}\quad
\parbox{0cm}{\mbox{$ \chi_1^\pm\chi_2^0\to Wh\chi_1^0\chi_1^0,
W\to l\nu, h\to b\bar b$}}
&&
&&
\\&& m_{\chi^\pm_1}&\approx 750\text{ GeV}
&& (\text{for
$m_{\text{LSP}}\approx0\ldots100\text{ GeV}$}),\\
&& m_{\text{LSP}} &\approx250\text{ GeV}
&&(\text{for
$m_{\chi^\pm_1}\approx600 \text{ GeV}$}),
\end{align}
\end{subequations}
where Wino-like charginos and Bino-like LSP are assumed. If this
assumption is not met, the production cross section is lower and/or
the decay branching ratios are reduced \cite{Canepa:2020ntc}.
Similar but
slightly weaker limits are obtained in Ref.\ \cite{Sirunyan:2018ubx}.
A complementary limit is obtained in Ref.\ \cite{Aaboud:2017nhr} which
searched for
$pp\to{\chi}^\pm _1{\chi}^\mp _1,{\chi}^\pm _1 {\chi} ^0 _2$ with
subsequent decays via $\tilde{\tau}$-sleptons (staus) into $\tau$-leptons. The limit
in a simplified-model interpretation assuming Wino-like chargino
reaches up to
\begin{subequations}\label{eq:Chastaulimits}
\begin{align}
\text{Chargino/stau-channel:}\quad
\parbox{0cm}{\mbox{$\chi_1^\pm\chi_2^0\to\tilde{\tau}\nu\tilde{\tau}\tau(\tilde{\nu}\nu),
\tau\tilde{\nu}\tilde{\tau}\tau(\tilde{\nu}\nu)
\to\tau\nu\chi_1^0, \tau\tau(\nu\nu)\chi_1^0,$}} &&
&&
\\
\parbox{0cm}{\mbox{$
\chi_1^\pm\chi_1^\mp\to2\times \tilde{\tau}\nu(\tilde{\nu}\tau)\to 2\times\tau\nu\chi_1^0$}}
&&
&&
\\
&& m_{\chi^\pm_1}&\approx 760\text{ GeV}
&& (\text{for
$m_{\text{LSP}}\approx0\ldots200\text{ GeV}$}),\\
&& m_{\text{LSP}} &\approx300\text{ GeV}
&&(\text{for
$m_{\chi^\pm_1}\approx600\ldots700 \text{ GeV}$}).
\end{align}
\end{subequations}
Figure 8 of Ref.\ \cite{Aaboud:2017nhr} and the recasting study of
Ref.\ \cite{Hagiwara:2017lse} show that the limit is rather robust against
changes of the stau masses and mixings and against the Higgsino content
of the chargino.
There exist further chargino searches with decays into $W$ and $Z$
bosons
\cite{Aaboud:2018jiw,Sirunyan:2017qaj,Aad:2019vnb,Aaboud:2018sua,Aad:2019vvi},
however the
resulting limits are weaker and do not lead to excluded regions of the
parameter spaces we will consider.
\item
Slepton searches: Searches for the direct production of
slepton pairs
$\tilde{l}\tilde{l},(\tilde{l}=\tilde{e},\,\tilde{\mu})$ with
subsequent decay into leptons plus LSP have been analyzed in
Ref.\ \cite{Aad:2019vnb} (based on $139\text{\,fb}^{-1}$ data) and
\cite{Aaboud:2018jiw,Sirunyan:2018nwe} (based on
$36\text{\,fb}^{-1}$ data). The limits extend up to
\begin{subequations}\label{eq:sleptonsearch}
\begin{align}
\label{eq:sleptonsearch2019}
\text{Slepton \cite{Aad:2019vnb}}:\quad
\parbox{0cm}{\mbox{$\tilde{l}_{\text{L}, \text{R}}\tilde{l}_{\text{L}, \text{R}}\to l^+ l^-\chi_1^0\chi_1^0,$}} &&
&&
\\&& m_{\tilde{l}}&\approx 700\text{ GeV}
&& (\text{for
$m_{\text{LSP}}\approx0\ldots300\text{ GeV}$}),\\
&& m_{\text{LSP}} &\approx400\text{ GeV}
&&(\text{for
$m_{\tilde{l}}\approx550\ldots650 \text{ GeV}$}),\\
\text{Slepton \cite{Aaboud:2018jiw,Sirunyan:2018nwe}}:&& m_{\tilde{l}}&\approx 500\text{ GeV}
&& (\text{for
$m_{\text{LSP}}\approx0\ldots300\text{ GeV}$}),\\
&& m_{\text{LSP}} &\approx300\text{ GeV}
&&(\text{for
$m_{\tilde{l}}\approx500 \text{ GeV}$}).
\end{align}
\end{subequations}
\item
Searches for SUSY particles with compressed-mass spectra:
The compressed mass spectrum scenarios are investigated through the
chargino-neutralino pair production modes ${\chi} ^\pm _1
{\chi} ^0 _2/{\chi}^\pm_1{\chi}^\mp_1$ with decays via virtual
$W$/$Z$ bosons and slepton pair production
$\tilde{l}\tilde{l}$ with decays into leptons. In simplified-model
analyses the limits depend on the nature of the
charginos/neutralinos (Higgsino- or Wino-like) and reach up to
masses of around 250 GeV and mass splittings to the LSP between
$1\ldots50$ GeV~\cite{Aad:2019qnd,Sirunyan:2019zfq,Sirunyan:2018iwl}.
\end{itemize}
Our technical setup for checking against these constraints is as
follows. The LHC searches with the highest mass reach, i.e.\ the
chargino/neutralino searches
of Refs.\ \cite{Aaboud:2018jiw,Sirunyan:2017lae,Sirunyan:2017qaj}, and the slepton
searches of Ref.\ \cite{Aaboud:2018jiw}, and the compressed-mass
searches of Ref.\ \cite{Sirunyan:2018iwl}
are checked using \texttt{ColliderBit}\@\xspace \cite{Balazs:2017moi}, a recasting tool
within the \GAMBIT-1.3
software framework
\cite{Athron:2017ard,Workgroup:2017bkh,Balazs:2017moi,Workgroup:2017lvb,Workgroup:2017htr,Workgroup:2017myk}.
This framework was extended and applied to the chargino/neutralino sector already in
Ref.\ \cite{Athron:2018vxy}, where also a full description
of all included analyses can be found. For
each signal region (SR) of each analysis, \GAMBIT/\texttt{ColliderBit}\@\xspace
evaluates the theory prediction of the signal yield
$S_{\text{SR}}$. It then constructs the log-likelihood differences
$\ln{\cal L}_{\text{SR}}\equiv\ln{\cal
L}(n|s=S_{\text{SR}},b)-\ln{\cal L}(n|s=0,b) $ for
the computed signal yield and the observed
event number $n$ and background expectation $b$ reported by the
experiments. It also determines for each analysis which signal
region SR$^{\text{max}}$ has the highest expected sensitivity. For each given analysis
\GAMBIT/\texttt{ColliderBit}\@\xspace outputs a single ``effective'' log-likelihood
difference
$\ln{\cal L}_{\text{eff}}\equiv\ln{\cal L}_{\text{SR$^{\text{max}}$}}$.
We refer to
Refs.\ \cite{Athron:2017ard,Workgroup:2017bkh,Balazs:2017moi,Workgroup:2017lvb,Workgroup:2017htr,Workgroup:2017myk}
for a detailed description of the procedure and cross-checks with
original LHC analyses.
To obtain a conservative LHC exclusion contour in the following
plots of this section we proceed as follows. For each parameter
point we take the largest effective $(-2\ln{\cal L}_{\text{eff}})$ of
any implemented analysis and employ $(-2\ln{\cal L}_{\text{eff}})^{\text{Max}}\ge6$ as the criterion for
exclusion by the LHC recasting analysis.
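For orientation, the structure of this criterion can be illustrated with a
plain Poisson counting likelihood for a single signal region; the real
\texttt{ColliderBit}\@\xspace evaluation additionally treats background
uncertainties and detector effects, so the snippet below is only schematic:
\begin{verbatim}
import math

def delta_lnL(n_obs, s, b):
    """ln L(n|s,b) - ln L(n|s=0,b) for a simple Poisson counting experiment."""
    return n_obs * math.log((s + b) / b) - s

# schematic: 2 observed events on an expected background of 3.0
for s in (1.0, 5.0, 10.0):
    print(s, -2.0 * delta_lnL(2, s, 3.0))
# with these toy inputs the exclusion criterion (-2 Delta lnL >= 6) is reached
# for a predicted signal yield of roughly s ~ 5
\end{verbatim}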
In App.\ \ref{app:SUSYLHC}
we provide further extensive details on the behaviour of the recasting
for the parameter regions and the LHC analyses most relevant
for our discussion. We discuss both cases with high sensitivity and
low sensitivity.
In particular we validate the recasting procedure
by reproducing the exclusion contour of
the ATLAS chargino/neutralino search of Ref.\ \cite{Aaboud:2018jiw},
Fig.\ 8c. We further verify that the exclusion contour obtained as
described above via $(-2\ln{\cal L}_{\text{eff}})^{\text{Max}}$ is also fully consistent with the contour where
the predicted signal yield for at least one signal region is equal
to the respective ATLAS 95\% C.L.\ upper limit. We checked that
the same would be true for all
following plots of the paper.\footnote{%
In exceptional cases the comparison is not possible. In
particular, the CMS analysis of Ref.\ \cite{Sirunyan:2018iwl} for
compressed spectra gives the maximum contribution $(-2\ln{\cal
L}_{\text{eff}})^{\text{Max}}$ in some parameter regions; for this analysis no individual
95\% C.L.\ upper signal limits are available.
}
As we will see the remaining mentioned LHC searches affect only a
minor portion of the relevant SUSY parameter space. Hence we do not
implement them in
\GAMBIT/\texttt{ColliderBit}\@\xspace but take
them into account by directly
using the
simplified-model interpretations of the original ATLAS/CMS
references. This is a very conservative approach which likely slightly
overestimates the true LHC constraints on the MSSM,
but it is motivated by the desire to find parameter regions which are
definitely viable.
Specifically, we apply the following constraints.
Figs.\ 14, 16 of Ref.\ \cite{Aad:2019qnd} are applied for
Wino- or Higgsino-like charginos or sleptons as appropriate.
Fig.\ 6 of Ref.\ \cite{Aad:2019vvf} based on the $Wh$-channel
chargino/neutralino search is applied to the lightest chargino in case
of the hierarchy $M_1<M_2<\mu$ and if the chargino is lighter than
sleptons and staus. Fig.\ 7b of Ref.\ \cite{Aaboud:2017nhr} based
on the stau-channel search is applied to the lightest chargino if its
decay into stau is dominant and to the heavier chargino in case
$\mu>M_2$ if the decay into stau is kinematically possible.
As a further check we also directly apply the strong constraints of Figs.\ 7b, 7c of
Ref.\ \cite{Aad:2019vnb} based on direct slepton searches (see
Eq.\ (\ref{eq:sleptonsearch2019})) and chargino searches with decays
to sleptons; however as we will see they have no impact on our
parameter spaces.
The following constraints from dark matter physics are relevant:
\begin{itemize}
\item
Dark Matter Relic Density (DMRD): We assume the LSP to be the
lightest neutralino, and unless noted otherwise we assume the LSP to
be stable. In this case the LSP contributes to the dark matter relic
density, and we require that the LSP relic density is in agreement with
or smaller than the observed dark matter relic density. We have to
distinguish the cases of dominantly Bino- or Wino- or Higgsino-like
LSP. In the case of Wino- or Higgsino-like LSP and LSP-masses below 1
TeV, the relic density is always smaller than the observed value,
thus leading to no constraints for our analysis (but to the
requirement of additional, non-MSSM components of dark matter, see
e.g.\ Ref.\ \cite{Bramante:2015una,Roszkowski:2017nbc} for recent
reviews). Ref.\ \cite{Chakraborti:2017dpu} has shown that
coannihilation effects can increase the Higgsino- or Wino-like relic density
if the mass splitting between sleptons and the LSP is significantly
below $10\%$. However the effect is not large enough to matter
in the parameter regions relevant for $\amu$, where we consider
LSP masses below around 500 GeV.
In the case of a Bino-like LSP in the considered mass range, the
relic density is typically too large unless a specific mechanism
acts to enhance the dark matter annihilation and to suppress the
relic density. In the mass range of Bino masses of around
200\ldots600 GeV there are three possibilities: stau-coannihilation,
other slepton-coannihilation, and Wino-coannihilation
\cite{Roszkowski:2017nbc,Ellis:1998kh,Ellis:1999mm,Nihei:2002sc,Harigaya:2014dwa}.
As we will see, in each of our scenarios with Bino-like LSP, one of
these possibilities can be realized without further impact on
LHC-exclusion of parameters or on values of $\amu$. Hence in
summary we do not need to explicitly apply DMRD-constraints on our analysis of $\amu$ in
the MSSM.
\item
Dark Matter Direct Detection (DMDD): If the LSP is stable there are
constraints from the non-observation of dark matter in direct
detection experiments. The most
stringent constraints are obtained from the XENON1T experiment
\cite{1705.06655}; similar but weaker limits are obtained from
XENON100, PandaX-II and
LUX \cite{1207.5988,Tan:2016zwf,1608.07648}. We evaluate these
constraints using \texttt{DarkBit}\@\xspace and \ddcalc
\cite{Workgroup:2017lvb,Athron:2018hpc}\footnote{%
The actual calculations can be done using internal code as well as
interfaces to the public codes DarkSUSY
\cite{Bringmann:2018lay} and \texttt{micrOMEGAs}\@\xspace \cite{Belanger2018}. For
the calculations presented here we choose internal code and the interface to
DarkSUSY.}
within the \GAMBIT
software framework \cite{Athron:2017ard,Workgroup:2017bkh,Balazs:2017moi,Workgroup:2017lvb,Workgroup:2017htr,Workgroup:2017myk}, using the provided log-likelihood
functions as described in Ref.\ \cite{Workgroup:2017lvb}. Since the
XENON1T limits are the strongest, we will only use those in our
phenomenological analysis and consider a parameter point excluded at
the $90\%$ confidence level if $2\ln{\cal L}(\sigma=0)-2\ln{\cal L}(\sigma,m_{\text{LSP}})>1.64$ for the XENON1T analysis.
The required
calculations depend on the dark matter relic density. In the case of a
Bino-like LSP, which allows an explanation of the observed value, we
set the relic density to the observed value. In the case of Higgsino- or
Wino-like LSP, in which case the relic density is smaller than the
observed one, we use the relic density computed by \texttt{DarkBit}\@\xspace.
It is well known that the phenomenological impact of these
constraints is that strong gaugino--Higgsino mixing of the LSP is
not viable, except in ``blind spots'' which are characterized by
particular ratios $\mu/m_{\text{LSP}}$, require negative $\mu$ and depend on $\tan\beta$ and
the CP-odd Higgs boson mass $M_A$\ \cite{Huang:2014xua}. As we will see, in case of a
Bino-like LSP, lower limits on the Higgsino mass, and in case of a Higgsino-like LSP, lower limits on the
Wino mass $M_2$ are implied. In contrast, the limits obtained in
case of a Wino-like LSP turn out to be weaker.\footnote{%
Technically, we determined these limits from dark matter direct
detection in a separate computation using the \GAMBIT/\texttt{DarkBit}\@\xspace
framework. The limits may be of interest in their own right.
In the parameter space relevant for our later plots with
$\tan\beta=40$
we
obtained the following simple functional forms of the limits, correct
to within $2\%$:
\begin{align*}
\text{Bino-like LSP:}&&\mu&> 467\text{ GeV}+0.157M_1(1+M_1/159\text{ GeV})\\
\text{Wino-like LSP:}&&\mu&> 34.2\text{ GeV} +1.46 M_2\\
\text{Higgsino-like LSP:}&&M_2&>207\text{ GeV} +1.83\mu
\end{align*}
\label{footnotedarkmatterapprox} }
\end{itemize}
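For later convenience, the approximate DMDD lower limits quoted in footnote
\ref{footnotedarkmatterapprox} can be encoded directly as a simple test
function; this is a transcription of those fit formulas (valid for
$\tan\beta=40$ and the parameter region of our plots), not an independent
calculation:
\begin{verbatim}
def passes_xenon1t_approx(lsp_type, M1, M2, mu):
    """Approximate XENON1T lower limits (masses in GeV), transcribed from the text."""
    if lsp_type == "bino":
        return mu > 467.0 + 0.157 * M1 * (1.0 + M1 / 159.0)
    if lsp_type == "wino":
        return mu > 34.2 + 1.46 * M2
    if lsp_type == "higgsino":
        return M2 > 207.0 + 1.83 * mu
    raise ValueError("lsp_type must be 'bino', 'wino' or 'higgsino'")

# example: a 300 GeV Bino-like LSP needs mu above roughly 600 GeV
print(passes_xenon1t_approx("bino", M1=300.0, M2=2000.0, mu=700.0))  # True
\end{verbatim}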
\subsection{Setup of the phenomenological analysis}
\label{sec:SUSYphenosetup}
Our phenomenological analysis focuses on a wide parameter space of the
MSSM (without flavour mixing in the sfermion sector). Our setup is as
follows (all input parameters are defined as
$\overline{\text{DR}}$-parameters at the scale $1$\,TeV):
\begin{itemize}
\item
The SUSY 1-loop contributions to $\amu$ depend on five mass parameters
$M_1$, $M_2$, $\mu$, $m_\text{L}$, $m_\text{R}$. We treat them as independent,
except for setting the two slepton masses equal, $m_\text{L}=m_\text{R}$. Allowing
different smuon masses would neither lead to enhanced $\amu^{\text{SUSY}}$ nor
significantly alter LHC and dark matter limits unless the mass
splittings are extreme (see footnote \ref{footnoteBHR}).
Since the influence of the trilinear scalar $A$-parameters is very
small, we set $A_f=0$ for all sfermion flavours.
\item
In all plots we fix
\begin{align}
\tan\beta&=40
\end{align}
as a reference value. Since $\amu^{\text{SUSY}}$ is essentially linear in
$\tan\beta$ it is easily possible to re-interpret plots for
other values of $\tan\beta$, as already explained in the caption of
Fig.\ \ref{fig:briefsurveyplot}.
\item
We assume R-parity conservation, such that the lightest SUSY
particle (LSP) is stable and forms (a component of) dark matter. We
assume the LSP to be the lightest
neutralino.
\item
The selectron masses are set equal to the smuon masses, and
generally we assume absence of flavour mixing in the slepton
sector.\footnote{%
Possible slepton flavour mixing between staus and smuons can lead
to additional contributions to $\amu$ via enhanced chirality
flips of the kind
$\tilde{\mu}_\text{L}\to\tilde{\tau}_\text{L}\to\tilde{\tau}_\text{R}\to\tilde{\mu}_\text{R}$. Such
effects have been studied in
Refs.\ \cite{Moroi:1995yh,Baez:2015sqj}; they enhance in
particular the BLR-contributions and thus allow explanations of
large $\amu$ with higher values of $M_1$, without conflict to
bounds from the non-observation of $\tau\to\mu\gamma$.
In contrast, slepton flavour mixing between selectrons and smuons
does not enhance $\amu$, but it leads to specific
correlations to flavour-violating decays like $\mu\to e\gamma$ and
$\mu\to e$ conversion, which in turn show interesting differences
between the MSSM
\cite{Chacko:2001xd,Kersten:2014xaa,Calibbi:2015kja} and other
models such as the MRSSM which does not allow a $\tan\beta$
enhanced dipole contributions
\cite{Kotlarski:2019muo}. \label{footnoteLFV}
}
This is
taken into account in evaluating
the LHC search limits.
The precise values of stau masses and mixings
are not important for any of the
considered observables except for the dark matter relic density in
case of a Bino-like LSP. In the case of a
Bino-like LSP, the LSP-relic density is generally too high, and we
assume coannihilation with either staus, other sleptons or Winos
to suppress the relic density to an
acceptable value. Stau-coannihilation is generally possible as
long as the Bino-mass is
below around 600\,GeV \cite{Ellis:1998kh,Ellis:1999mm}. It
requires that one stau is sufficiently
light but it does not fix the stau-masses and
mixings uniquely. Since no other considered observables depend on
their precise values, we fix the stau mass parameters
to
$2M_1$ in cases where stau-coannihilation is relevant. Due to stau
mixing one mass eigenvalue is then close to the LSP
mass.
In cases where stau-coannihilation is not relevant we fix the stau
mass parameters to $2$ TeV, representing heavy staus. In the first
case
the chargino/neutralino
limits based on Ref.\ \cite{Aaboud:2017nhr}, see
Eq.\ (\ref{eq:Chastaulimits}), are relevant and taken into
account.
\item
Squark and gluino masses are set to $6$ TeV and the CP-odd
Higgs mass is set to $M_A=2$ TeV. In this way, all
respective LHC limits are evaded, and the mass of the SM-like Higgs
boson is in the right ball-park. Such heavy squarks imply small,
positive 2-loop corrections to $\amu^{\text{SUSY}}$ \cite{Fargnoli:2013zda,Fargnoli:2013zia}; we do not
finetune the squark masses and mixings to fit the SM-like Higgs mass
to agree with the measured values exactly, since there is no unique
way to do it and since the impact on $\amu^{\text{SUSY}}$ and all other
observables considered here is negligible.
\end{itemize}
Thus the essential parameters for our discussion are the four mass
parameters
\begin{align}
M_1, M_2,\mu,m_{\text{L}, \text{R}},
\end{align}
where $M_1$ will be similar to the LSP-mass $m_{\text{LSP}}$ in case
the LSP is Bino-like; the two chargino masses are essentially given by
$M_2$ and $\mu$, respectively.
Despite having only four parameters, the parameter space is complex
and there are many possible organizing principles. We may distinguish
24 different mass orderings and compressed or non-compressed
spectra. We may classify according to the nature of the LSP and how
dark matter is generated and according to which contributions to
$\amu^{\text{SUSY}}$ (WHL and/or BLR in
Eqs.\ (\ref{eq:SUSY1Lnumerical},\ref{eq:SUSYMIapprox})) are dominant.
We find that a very effective way to organize the discussion is to
divide the parameter space in three distinct kinds of scenarios,
denoting $m_{\tilde{l}}\equiv m_{\text{L}, \text{R}}$ as the generic 1st and 2nd
generation slepton mass, see also Tab.\ \ref{tab:SUSYscenarios}:
\begin{itemize}
\item \underline{Scenario with heavy charginos and smuons:}
\begin{align}
m_{\tilde{l}}&>700 \text{\, GeV},&m_{\chi^\pm_{1,2}}>1100\text{\,GeV}\,.
\end{align}
These mass ranges correspond to the maximum reach of the LHC searches for sleptons and charginos found in Refs.\ \cite{Aaboud:2018jiw,Aad:2019vnb} and are indicated by black lines in Fig.\ \ref{fig:briefsurveyplot}.
The scenario thus evades all LHC constraints in the simplest
possible way and is denoted as ``LHC-unconstrained'' in
Tab.\ \ref{tab:SUSYscenarios}. The following scenarios are
defined to allow lighter SUSY particles.
\item
\underline{Scenarios with lighter sleptons:} First we consider
scenarios with slepton masses significantly below $700$ GeV, which
is possible if the sleptons are close in mass with the LSP, i.e.\
\begin{align}
m_{\text{LSP}}\lessapprox m_{\tilde{l}}<700\text{\,GeV}\,,
\end{align}
where the $\lessapprox$ symbol here denotes ``lighter but not much
lighter'' to evade LHC limits, i.e.\ within around $100$ GeV.\footnote{%
More explicitly, the notation $ m_{\text{LSP}}\lessapprox
m_{\tilde{l}}$ may be written as $ m_{\text{LSP}}<
m_{\tilde{l}}< m_{\text{LSP}}+\Delta m_{\text{LHC}}$, where
$\Delta m_{\text{LHC}}$ is the mass gap allowed by LHC
searches. The value of this allowed mass gap depends on the
details of the considered spectrum but is typically of order $100$ GeV.}
There are
three sub-scenarios depending on the nature of the LSP, which we denote as
\begin{align}
(\tilde{B}\tilde{l}),\ (\tilde{W}\tilde{l}),\ (\tilde{H}\tilde{l})
\end{align}
scenarios, respectively. In all these scenarios the chargino
masses are either heavier than the slepton masses or only slightly
lighter.
\item
\underline{Scenarios with charginos lighter than sleptons:}
Next we consider scenarios where both charginos are lighter than
the sleptons:
\begin{align}
m_{\text{LSP}}<m_{\chi^\pm_{1,2}}<m_{\tilde{l}}\,.
\end{align}
Here weaker LHC limits on charginos apply, such that chargino masses below
$700$ GeV are viable. We assume slepton masses to be
non-compressed with the LSP, such that the LHC
constraints on sleptons of
Refs.\ \cite{Aad:2019vnb,Aaboud:2018jiw,Sirunyan:2018nwe} are
relevant (although the
simplified-model interpretations illustrated in
Eqs.\ (\ref{eq:sleptonsearch}) do not necessarily apply). Again we
can distinguish several
sub-scenarios, depending on the nature of the LSP. We denote them
as
\begin{align}
(\tilde{B}\tilde{W}\tilde{H}),\ (\tilde{H}\tilde{W}/\tilde{W}\tilde{H})
\end{align}
scenarios. In the $(\tilde{B}\tilde{W}\tilde{H})$ scenario, no
particular mass-ordering between the two \update{chargino mass parameters} $M_2$ and
$\mu$ is implied. The $(\tilde{H}\tilde{W}/\tilde{W}\tilde{H})$
scenarios will be discussed together: the LSP is either Wino- or
Higgsino-like, and no particular value of $M_1$ is implied except
that the Bino is not the LSP.
\end{itemize}
\begin{table}
\begin{center}
\begin{tabular}{|l|l|l|l|}
\hline
Scenario & Hierarchy & LSP/DM & Dominant
$\amu$
\\\hline
``LHC-unconstrained'' & $m_{\tilde{l}}>700\text{\,GeV,
}m_{\chi^\pm_{1,2}}>1100\text{\,GeV}$&Bino, Wino, Higgsino&WHL+BLR\\\hline
$(\tilde{B}\tilde{l})$ & $M_1\lessapprox m_{\text{L}, \text{R}}\lesssim\mu,M_2$ &
Bino/$\tilde{\tau}$- or $\tilde{l}$- or $\chi^\pm$-coann. &
WHL+BLR\\\hline
$(\tilde{W}\tilde{l})$ & $M_2\lessapprox m_{\text{L}, \text{R}}\lesssim\mu,M_1$ &
Wino & WHL+BLR\\\hline
$(\tilde{H}\tilde{l})$ & $\mu\lessapprox m_{\text{L}, \text{R}}\lesssim M_1,M_2$ &
Higgsino & WHL\\\hline
$(\tilde{B}\tilde{W}\tilde{H})$ & $M_1<M_2,\mu<m_{\text{L}, \text{R}}$
& Bino/$\tilde{\tau}$- or $\chi^\pm$-coann.
& WHL \\\hline
$(\tilde{W}\tilde{H})$ & $M_2<\mu,M_1<m_{\text{L}, \text{R}}$ &
Wino& WHL\\\hline
$(\tilde{H}\tilde{W})$ & $\mu<M_2,M_1<m_{\text{L}, \text{R}}$ &
Higgsino& WHL\\\hline
\end{tabular}
\end{center}
\caption{\label{tab:SUSYscenarios}
Overview of the scenarios defined in section \ref{sec:SUSYphenosetup}
and analysed in Sec.\ \ref{sec:SUSYpheno}. Here the $\lessapprox$
symbol denotes ``lighter but not much
lighter'', i.e.\ sufficiently close to evade LHC
limits, see main text; the $\lesssim$ symbol denotes ``lighter or slightly
heavier''.
In each case, the nature of the dark matter candidate is
indicated. In cases of Bino-like LSP we assume the dark matter
density to agree with observation, which in turn requires one of the
indicated coannihilation mechanisms to be present. In the other
cases we only assume that the predicted dark matter density does not surpass
the observed one.}
\end{table}
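
For orientation, the scenario assignment of Tab.\ \ref{tab:SUSYscenarios}
can be summarized schematically as in the following sketch. It only encodes
the mass hierarchies of the table; the numerical boundaries and the
compressed-mass window \texttt{delta\_lhc} are simplified and purely
illustrative, and the short labels are shorthand for the scenarios defined
above.
\begin{verbatim}
def classify(M1, M2, mu, m_slep, delta_lhc=100.0):
    """Schematic scenario assignment (masses in GeV); illustrative only."""
    lsp, m_lsp = min(("Bino", M1), ("Wino", M2), ("Higgsino", mu),
                     key=lambda t: t[1])
    if m_slep > 700.0 and min(M2, mu) > 1100.0:
        return "LHC-unconstrained"
    if m_lsp <= m_slep < m_lsp + delta_lhc and m_slep < 700.0:
        return {"Bino": "(Bl)", "Wino": "(Wl)", "Higgsino": "(Hl)"}[lsp]
    if max(M2, mu) < m_slep:
        return "(BWH)" if lsp == "Bino" else "(HW/WH)"
    return "other/mixed"

# Example: Bino-like LSP with compressed sleptons -> "(Bl)"
print(classify(M1=250.0, M2=1200.0, mu=800.0, m_slep=275.0))
\end{verbatim}
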
\subsection{Phenomenological analysis}
\label{sec:SUSYpheno}
We will now discuss each scenario of Table
\ref{tab:SUSYscenarios} in turn. In each case we will
evaluate the possible values for $\amu^{\text{SUSY}}$ and the constraints from LHC and
dark matter, and we will determine resulting viable parameter regions.
Before explaining our results we briefly discuss the status of related
studies in the
literature. The general, phenomenological MSSM was analyzed in view of the BNL result
$\Delta a_\mu^{\text{BNL}}$ versus LHC run-II in
Refs.\ \cite{Hagiwara:2017lse,Cox:2018qyi,Abdughani:2019wai,Endo:2020mqz,Chakraborti:2020vjp},
in Refs.\ \cite{Cox:2018qyi,Abdughani:2019wai,Chakraborti:2020vjp}
including dark matter.
All
these references require a Bino-like LSP and consider parameter regions similar to our
$(\tilde{B}\tilde{l})$
scenario.\footnote{%
While finalizing this paper,
Ref.\ \cite{Chakraborti:2021kkr} appeared, which also contains
results on the general MSSM, comparing $\amu$ to results from dark
matter and LHC experiments in cases with Wino-like and Higgsino-like
LSP. It uses the same approach as
Ref.\ \cite{Chakraborti:2020vjp} and shows similar complementarities
to our study.
}
Refs.\ \cite{Hagiwara:2017lse,Endo:2020mqz} also consider
the general $(\tilde{B}\tilde{W}\tilde{H})$ scenario but do not consider dark matter. The LHC run-II data is
treated at different levels of detail in these references, and slightly different
restrictions on the allowed masses are employed.
Ref.\ \cite{Chakraborti:2020vjp} uses LHC recasting
of a similar set of constraints as discussed in
Sec.\ \ref{sec:SUSYconstraints}, but with different recasting
tools. It assumes stau-masses equal to the other
slepton masses, i.e.\ generation universality, but allows differences
between left- and right-handed sfermions $m_\text{L}\ne m_\text{R}$. This leads to slightly weaker
LHC limits on $M_2$. Ref.\ \cite{Hagiwara:2017lse} carries out a
recasting of chargino search channels via staus, as mentioned in
Sec.\ \ref{sec:SUSYconstraints}. Refs.\ \cite{Cox:2018qyi,Abdughani:2019wai,Endo:2020mqz} treat LHC-data
in different simplified ways without recasting. Ref.\ \cite{Abdughani:2019wai}
focuses on scans of two parameter regions (both within our
$(\tilde{B}\tilde{l})$ scenario), in which different states are decoupled.
Ref.\ \cite{Endo:2020mqz} focuses on the two cases $\mu= M_2$ and $\mu=2M_2$
but allows for arbitrary slepton masses.
The results of these references are that Bino-like LSP with
either chargino-coannihilation or slepton/stau-coannihilation can provide
viable explanations of dark matter and $\Delta a_\mu^{\text{BNL}}$, and
Refs.\ \cite{Abdughani:2019wai,Chakraborti:2020vjp} specify upper
limits on the LSP mass.
Our study aims to provide an up-to-date and coherent analysis of the
general MSSM in view of the
Fermilab result for $\amu$, dark matter data and our LHC recasting. We will
treat all scenarios of Table
\ref{tab:SUSYscenarios} including Bino-, Wino- or Higgsino-like LSP
and provide details on allowed and preferred patterns of SUSY masses.
\subsubsection{Scenario with heavy charginos and smuons, above all LHC
limits}
As a first basic
scenario we consider sleptons and charginos heavier than $700$ GeV and
$1100$ GeV, respectively,
denoted as ``LHC-unconstrained'' in Tab.\ \ref{tab:SUSYscenarios}.
In Fig.~\ref{fig:briefsurveyplot} this region corresponds to the upper
right quadrant of the plot, delineated by the black lines.
The choice of this region is motivated by
the maximum LHC reach for charginos and sleptons
found in Refs.\ \cite{Aaboud:2018jiw,Aad:2019vnb}. In other words, in the considered region the LHC limits are
trivially fulfilled. Of course, the maximum LHC constraints were obtained for the simple special case of a massless
Bino-like LSP $\chi^0_1$, intermediate sleptons, and heavier
charginos. The scenarios discussed later on will involve smaller masses and evade the LHC limits through
choices of specific mass patterns, hierarchies and mass splittings.
Here we will first discuss the behaviour of $\amu^{\text{SUSY}}$ in the upper right quadrant of
Fig.~\ref{fig:briefsurveyplot}, i.e.\ for $m_{\tilde{\mu}_1}\ge700$ GeV,
$m_{\chi^\pm_{2}}\ge1100$ GeV. The figure already shows that $\amu^{\text{SUSY}}$ is severely
limited for such high masses. Overall we obtain
$\amu^{\text{SUSY}}\le13\times10^{-10}$ (for $\tan\beta=40$ and
$\mu\le4$ TeV), and the maximum $\amu^{\text{SUSY}}$ quickly drops
for even heavier slepton masses.
\update{Hence the deviation observed at BNL
(\ref{eqn:BNLDiscrepancy}) could at most be
explained at the $2\sigma$ level
for these values of $\tan\beta$ and $\mu$.
This remains the case also with the
slightly smaller value and uncertainty of $\damu^{\text{2021}}$.}\footnote{%
Allowing for $\tan\beta\to\infty$ \cite{Bach:2015doa} or ultra-high values of
$\mu$ \cite{Endo:2013lva} changes the picture. In both cases the linearity in
$\tan\beta$ and $\mu$ visible in Eqs.\ (\ref{eq:SUSYMIapprox}) is replaced by a
saturation resulting from resummed higher-order effects. In such
extreme parameter regions it is possible to obtain
$\amu^{\text{SUSY}}\approx20\times10^{-10}$ for LSP masses above 1 TeV with
$\mu=100$ TeV and $\tan\beta=40$ \cite{Endo:2013lva} or even
$\amu^{\text{SUSY}}>30\times10^{-10}$ for LSP masses above 1 TeV with
$\tan\beta\to\infty$ \cite{Bach:2015doa}. Similarly, the scenario
of Ref.\ \cite{Crivellin:2010ty} realizes radiative muon mass
generation in the MSSM with non-holomorphic soft SUSY breaking
parameters and allows LSP masses above 1 TeV while explaining
$\Delta a_\mu^{\text{BNL}}$.
\label{footnoteTBMUlarge}}
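
To make the strong mass suppression more explicit one can use the
well-known rule of thumb for a degenerate SUSY spectrum,
$\amu^{\text{SUSY}}\approx13\times10^{-10}\,(100\,\text{GeV}/M_{\text{SUSY}})^2\,\tan\beta\,\text{sgn}(\mu)$;
this is only a rough estimate and not the expressions of
Eqs.\ (\ref{eq:SUSY1Lnumerical},\ref{eq:SUSYMIapprox}) used in our
analysis, but it reproduces the order of magnitude of the maximal values
quoted above. A minimal numerical sketch:
\begin{verbatim}
def amu_susy_estimate(m_susy_gev, tan_beta):
    """Rule-of-thumb a_mu^SUSY for a degenerate spectrum, sign(mu) = +1.
    Rough estimate only, not the expressions used in this paper."""
    return 13e-10 * (100.0 / m_susy_gev) ** 2 * tan_beta

for m in (700.0, 1100.0, 2000.0):
    print(m, amu_susy_estimate(m, tan_beta=40.0))
# ~10.6e-10 at 700 GeV, ~4.3e-10 at 1100 GeV, ~1.3e-10 at 2 TeV,
# illustrating the rapid decrease of the maximal amu for heavy spectra.
\end{verbatim}
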
Despite the small contributions to $\amu$, scenarios with heavy
sparticles can be well motivated. E.g.\ the original focus point
scenario \cite{Feng:1999zg,Feng:1999mn} naturally involves sleptons
above the TeV scale, and in models with universality boundary
conditions below the GUT scale, squarks and sleptons are rather close
in mass \cite{Ellis:2007ac} and LHC-constraints on squarks imply
sleptons above the TeV scale \cite{Costa:2017gup}.
Another attractive scenario where all sparticle masses are above the
TeV scale is given by a Higgsino-like LSP with Higgsino mass around $1$
TeV. This case leads to an explanation of the observed dark matter
relic density without further tuning of masses or mass splittings and
can be realized in the constrained MSSM or in more general variants of the MSSM (see
e.g.\ Ref.\ \cite{Roszkowski:2017nbc}). However, all such scenarios
restrict $\amu^{\text{SUSY}}$ to values below around $10\times10^{-10}$ for
$\tan\beta\lesssim50$.
In the following we will focus on the alternative, ``non-standard''
scenarios which allow lighter sparticle masses.
\subsubsection{$(\tilde{B}\tilde{l})$-scenario with light sleptons and
Bino }
\begin{figure}[t]
\begin{subfigure}[t]{0.405\textwidth}
\centering \includegraphics[width=\textwidth]{SUSYplots/BinoSlepPlot.pdf}
\caption{}
\label{fig:Binosleptonscompresseda}
\end{subfigure}
\begin{subfigure}[t]{0.405\textwidth}
\centering\includegraphics[width=\textwidth]{SUSYplots/BinoSlepMUEM2Plot.pdf}
\caption{}
\label{fig:Binosleptonscompressedb}
\end{subfigure}
\begin{subfigure}[t]{0.18\textwidth}
\centering\includegraphics[scale=1.]{SUSYplots/PlotAmuLegend.pdf}
\end{subfigure}
\caption{\label{fig:Binosleptonscompressed}
$(\tilde{B}\tilde{l})$-scenario with either $M_2=1200$ GeV fixed
or $M_1=250$ GeV fixed. For the remaining parameter values see the
plots.
The red dashed contours correspond to values of
$\amu$ as indicated in the legend on the right; the yellow/green coloured
regions are the $1\sigma$ bands of the BNL deviation
(\ref{eqn:BNLDiscrepancy}) and of the new
deviation including FNAL (\ref{eqn:avgDiscrepancy}), and their overlap. For the
$\tan\beta$-reinterpretation see caption of Fig.\ \ref{fig:briefsurveyplot}.
The (light) red shaded regions are excluded by dark matter direct detection
if the LSP is assumed stable; the blue shaded region corresponds
to the limits from the LHC recasting, see
Fig.\ \ref{fig:ReproduceATLASplot} and text for details. These
plots do not contain regions excluded by
the additional LHC limits implemented in a
simplified way.
The red thick
solid line in the right plot corresponds to the parameter strip
where chargino-neutralino coannihilation is possible; directly
below this strip a tiny region is excluded by the LHC-constraint
from compressed masses, Ref.\ \cite{Sirunyan:2018iwl}, but we verified that this
does not exclude the chargino-neutralino coannihilation region.
The gray thin line corresponds to the vacuum
stability constraint of Ref.\ \cite{Endo:2013lva}; it applies in case the
left- and right-handed stau-masses are set equal to the
smuon/selectron masses and excludes the points to the right,
i.e.\ with larger $\mu$.
}
\end{figure}
The $(\tilde{B}\tilde{l})$-scenario is characterized by a Bino-like LSP and
sleptons significantly lighter than $700$ GeV. Given current LHC
constraints it is viable if the mass splitting between sleptons and
the LSP is sufficiently small, $m_{\text{L}, \text{R}}-M_1\lesssim100$\,GeV (for
masses below around 200 GeV and very small splittings additional
constraints from compressed-mass searches become relevant).
The observed dark matter relic density can be achieved by either ${\tilde{l}}$/${\tilde{\tau}}$- or
$\chi^{\pm}$-coannihilation, with appropriate fine-tuning of parameters as discussed below.
The scenario is illustrated in
Fig.\ \ref{fig:Binosleptonscompressed}. The left plot
Fig.\ \ref{fig:Binosleptonscompresseda} shows results in the
$\mu$--$M_1$-plane. The Wino mass is fixed to the
rather high value $M_2=1200$ GeV, safely but not too far above the
chargino mass limit (\ref{maximumChamasslimit}), and the
mass splitting is set to $m_{\text{L}, \text{R}}-M_1=50$ GeV. The right plot
Fig.\ \ref{fig:Binosleptonscompressedb} shows results in the
$\mu$--$M_2$-plane, while the Bino and slepton masses are fixed to the
rather light values $M_1=250$ GeV and
$m_{\text{L}, \text{R}}=275$ GeV. In both plots the shown quantities are not very
sensitive to the choice of the mass splitting, so the plots are
representative for a wider range of values for $m_{\text{L}, \text{R}}-M_1$.
The red dashed lines and yellow/green coloured regions show the contours of
$\amu$ and $1\sigma$ regions corresponding to the measurements from
BNL only and the average including \update{FNAL}. The behaviour of $\amu$ in this
$(\tilde{B}\tilde{l})$-scenario is dominated by the WHL and BLR
contributions of
Eqs.\ (\ref{eq:SUSY1Lnumerical},\ref{eq:SUSYMIapprox}) and can be well
understood via these approximations. The
WHL contributions dominate in the left plot at large $M_1$ and
very small $\mu$ and in the right plot at $\mu\lesssim1 $ TeV; in these
regions $\amu^{\text{SUSY}}$ decreases with increasing $\mu$. The BLR contributions
are linearly enhanced by $\mu$ and dominate at large $\mu$ in both
plots. As the plots show, very large $\amu^{\text{SUSY}}$ can be obtained both for large $\mu$,
where the BLR-contribution dominates, and for small $\mu$ with
WHL-dominance.
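
Schematically, suppressing numerical coefficients and writing
$f_{\text{WHL}}$, $f_{\text{BLR}}$ for dimensionless loop functions which
we do not define here (see
Eqs.\ (\ref{eq:SUSY1Lnumerical},\ref{eq:SUSYMIapprox}) for the precise
expressions used in our analysis), the two contributions behave as
\begin{align}
\amu^{\text{WHL}} &\propto \tan\beta\, m_\mu^2\,
\frac{M_2\,\mu}{m_{\text{L}}^4}\,
f_{\text{WHL}}\!\left(\frac{M_2^2}{m_{\text{L}}^2},\frac{\mu^2}{m_{\text{L}}^2}\right),&
\amu^{\text{BLR}} &\propto \tan\beta\, m_\mu^2\,
\frac{M_1\,\mu}{m_{\text{L}}^2\, m_{\text{R}}^2}\,
f_{\text{BLR}}\!\left(\frac{m_{\text{L}}^2}{M_1^2},\frac{m_{\text{R}}^2}{M_1^2}\right).
\end{align}
In the WHL term $\mu$ is also an internal loop mass, so the loop function
suppresses this contribution once $\mu$ is much heavier than the other
masses in the loop, whereas in the BLR term $\mu$ enters only through the
smuon left--right mixing and the contribution keeps growing linearly in
$\mu$ as long as the Bino and smuon masses set the loop scale.
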
LHC-constraints obtained by the recasting described in Sec.\ \ref{sec:SUSYconstraints} are
displayed by the blue shaded region in the plots.
The parameter space of the left plot
Fig.\ \ref{fig:Binosleptonscompresseda} is entirely allowed (more
details on the recasting are exhibited in the Appendix), even where
the Higgsino-like chargino is light. The right plot
Fig.\ \ref{fig:Binosleptonscompressedb} shows the expected large excluded
region approximately for $300\text{ GeV}<M_2<900\text{ GeV}$. It is
excluded by the chargino/slepton channel search of
Refs.\ \cite{Aaboud:2018jiw,Sirunyan:2017lae}, see
Eq.\ (\ref{Charginosleptonlimits}). The recasting shows again
that the Wino-like charginos are significantly more constrained than
Higgsino-like charginos and that the exclusion region is smaller than
in the simplified-model interpretation of
Eq.\ (\ref{maximumChamasslimit}). An additional strip of parameter
space at around $M_2\approx M_1-5$ GeV (in which case the mass
eigenvalues satisfy $m_{\chi^\pm_1}-m_{\chi^0_1}\approx15$ GeV) is
excluded by the recasting of the CMS compressed-mass search of
Ref.\ \cite{Sirunyan:2018iwl}.
Regarding dark matter it is well known that in case of a Bino-like LSP
in the considered mass range the relic density is too high unless some
coannihilation mechanism is active. In our case there are three
options: \mbox{chargino-,} stau- or slepton-coannihilation (see also the review
\cite{Roszkowski:2017nbc}).
Chargino-coannihilation is possible in the
parameter space where
$m_{\chi^\pm_1}-m_{\chi^0_1}\approx25$ GeV in the right plot, shown as
the thick red line around $M_2=255$ GeV. Here the relic density
takes the measured value (\ref{eqn:DMRD}) without further tuning of the slepton
masses. Everywhere else in the two plots the relic density can be
correctly explained via slepton- or stau-coannihilation
by slightly finetuning the slepton and/or stau
masses.
Since there is no unique way to achieve the required coannihilation
we do not carry out this finetuning but fix the parameters as
described above for the evaluation of all other observables.
In this way the plot is representative for all these cases.\footnote{%
The other observables
have been evaluated by setting the stau masses to $2M_1$ in Fig.~\ref{fig:Binosleptonscompresseda}
and to $2000$ GeV in Fig.~\ref{fig:Binosleptonscompressedb}.
In order to achieve
stau-coannihilation at least one stau mass has to be small and close
to the LSP-mass. None of the plotted observables would change
significantly, except that the LHC-constraints in the right plot
would become slightly weaker since a larger variety of decay modes
would exist for the charginos \cite{Chakraborti:2020vjp}. In this
sense both plots are representative for a variety of cases and
conservative in case of stau-coannihilation.}
Assuming now that the relic density is correctly explained,
constraints from dark matter direct detection become relevant.
The constraints from direct detection experiments are shown as the
(light) red shaded bands; they exclude a large portion of the parameter space with
small $\mu$,
implying $\mu\gtrsim600\ldots 800$ GeV in the plot. This reflects the
well-known need for small gaugino--Higgsino mixing and a significant mass gap between the LSP and the
Higgsino mass $\mu$, see also Sec.\ \ref{sec:SUSYconstraints} and
footnote \ref{footnotedarkmatterapprox}.
The thin solid gray line in the plots corresponds to the vacuum
stability constraint of Ref.\ \cite{Endo:2013lva} on stau-mixing already explained
around Eq.\ (\ref{VacStabEndo}). It excludes the large-$\mu$ region to its right
under the condition that both left- and
right-handed stau masses are as light as the
smuon/selectron masses. This upper limit on $\mu$ thus applies in particular if
stau-coannihilation and $m_{\tilde{\tau}_\text{L}}\approx m_{\tilde{\tau}_\text{R}}$
are assumed. Then the limit on $\mu$
significantly reduces the region in which the BLR-contributions to
$\amu^{\text{SUSY}}$ dominate. The vacuum stability constraint can be evaded, and larger
$\mu$ and large $\amu^{\text{SUSY}}$ (also for heavier $M_1$) remain possible, under the
assumption that one or both staus are heavier. This can be compatible with a dark matter explanation
either in the case of selectron/smuon--coannihilation, or in the case
of stau--coannihilation with strongly non-universal
left-/right-handed staus.
In summary, the $(\tilde{B}\tilde{l})$-scenario allows the following three
parameter regions with large $\amu^{\text{SUSY}}$.
\update{
The first region is in the lower left of Fig.\ \ref{fig:Binosleptonscompresseda} and the
upper left of Fig.\ \ref{fig:Binosleptonscompressedb} between the dark matter and
vacuum stability constraints on $\mu$. It involves $\mu$ around the 1
TeV scale and is allowed by all constraints
even if we assume completely universal sleptons
$m_{\text{L}, \text{R}}\approx m_{\tilde{\tau}_{\text{L}, \text{R}}}$.
Here the dark
matter relic density can be explained by stau and/or slepton
coannihilation and the $\amu$ result can be accommodated easily via
the large BLR contributions. The new world average result
for $\Delta a_\mu$ including FNAL can
be explained well for $M_1\lesssim300$ GeV and $\tan\beta=40$.
As shown by Fig.\ \ref{fig:Binosleptonscompressedb},
$M_2$ can be as low as around $900$ GeV for our choice of $M_1=250$ GeV, while for
larger $M_1$ the LHC-limit on $M_2$ would relax slightly, and
WHL-contributions could further increase $\amu^{\text{SUSY}}$. }
\update{
The second region is to the right of the vacuum stability lines where
$\mu$ is in the multi-TeV region and $\amu^{\text{SUSY}}$ is further
increased by the BLR contributions.
The region is viable if at least one stau is sufficiently heavier than the smuons and selectrons. The
dark matter relic density
can then be generated via slepton
or $\tilde{\tau}_1$ coannihilation.
Here an explanation of $\Delta a_\mu$ is possible in a wide
parameter region. The LSP mass can be heavier than $300$ GeV, and the large white regions in
Fig.\ \ref{fig:Binosleptonscompressedb} and lower right corner of Fig.\ \ref{fig:Binosleptonscompresseda} mean that $\damu^{\text{2021}}$ can
be explained for $\tan\beta<40$ according to the right axis
of the legend plot.
The third region is the
patch of parameter space close to the lower border of
Fig.\ \ref{fig:Binosleptonscompressedb}. Here the
Bino-like LSP, the Wino and the sleptons are all light and
close in mass. This patch of parameter space in particular allows
dark matter to be generated via chargino-coannihilation, and it leads to very
large $\amu^{\text{SUSY}}$ for any value of $\mu$ above the dark matter
limit. Here again the updated deviation $\damu^{\text{2021}}$ can be
explained for $\tan\beta<40$.}
The recent model-building literature has put forward a variety of
constructions leading to
our second parameter region with light sleptons and very heavy $\mu$,
where $\amu^{\text{SUSY}}$ is dominated by the
BLR-contributions. These constructions
are particularly motivated in view of
the LHC constraints on the coloured SUSY particles and the Higgs mass.
Clearly, a straightforward conclusion from such constraints is that
gluino and top-squark masses are in the (multi-)TeV region. Via
renormalization effects these masses can enter the electroweak symmetry
breaking relations, and in many models very large $\mu$ in the multi-TeV
region is then necessary in order to cancel such effects and allow a
Higgs-VEV compatible with observations
\cite{Yanagida:2020jzy,Ibe:2019jbx,Yanagida:2017dao}.
Many concrete constructions are inspired by universality ideas but
involve some degree of non-universality to accommodate all existing
constraints. One class of such models with multi-TeV-scale $\mu$
involves non-universal sfermion masses.
General models with non-universal
sfermion masses (but universal gaugino masses) have been constructed and investigated in
Refs.\ \cite{Okada:2016wlm,Tran:2018kxv}, where
$(\tilde{B}\tilde{l})$-like scenarios with $\mu\sim10$ TeV were
identified as promising. A specific kind of sfermion non-universality
was considered in Refs.\
\cite{Ibe:2019jbx,Ibe:2013oha,Hussain:2017fbp}, where
the third generation of sfermions is assumed heavier than the
first two, but universality between squarks and sleptons at some high
scale is retained. Up-to-date LHC
constraints on gluino and Wino masses then imply that non-universal
gaugino masses are almost unavoidable unless one allows an unstable
charged slepton LSP \cite{Ibe:2019jbx}.
Another class of models retains universality of all scalar
soft SUSY-breaking masses but allows non-universal gaugino masses. In
this case again, the scenario of Fig.\ \ref{fig:Binosleptonscompressed}
with very large $\mu$ is the only option to obtain significant $\amu^{\text{SUSY}}$
\cite{Akula:2013ioa,Gogoladze:2014cha,Chakrabortty:2015ika,Wang:2018vrr}.
We mention that the scenario with $\mu$ in the multi-TeV region is
also important to obtain large $\amu^{\text{SUSY}}$ in the context of various
specific model constructions, such as models based on Pati-Salam
symmetry \cite{Belyaev:2016oxy} (here, at the same time light Winos
are preferred), models with usual GUT constraints but extra vectorlike
matter fields \cite{Choudhury:2017acn,Choudhury:2017fuu},
and in a hybrid gauge-gravity mediation model with only four free
parameters \cite{Zhu:2016ncq}.
\subsubsection{$(\tilde{W}\tilde{l})$-scenario with light sleptons
and Wino}
\label{sec:WinoSlepscenario}
\begin{figure}[t]
\begin{subfigure}[t]{0.405\textwidth}
\centering\includegraphics[width=\textwidth]{SUSYplots/WinoSlepPlot.pdf}
\caption{}
\label{fig:WHsleptonscompresseda}
\end{subfigure}
\begin{subfigure}[t]{0.405\textwidth}
\centering\includegraphics[width=\textwidth]{SUSYplots/HiggsinoSlepPlot.pdf}
\caption{}
\label{fig:WHsleptonscompressedb}
\end{subfigure}
\begin{subfigure}[t]{0.18\textwidth}
\centering\includegraphics[scale=1.]{SUSYplots/PlotAmuLegend.pdf}
\end{subfigure}
\caption{\label{fig:WHsleptonscompressed} (a)
$(\tilde{W}\tilde{l})$-scenario. (b)
$(\tilde{H}\tilde{l})$-scenario. For parameter values see the
plots and the text.
The red dashed contours correspond to values of
$\amu^{\text{SUSY}}$ as indicated in the legend on the right; the yellow/green coloured
regions are the $1\sigma$ bands of the BNL deviation
(\ref{eqn:BNLDiscrepancy}) and of the new
deviation including FNAL (\ref{eqn:avgDiscrepancy}), and their overlap. For the
$\tan\beta$-reinterpretation see caption of Fig.\ \ref{fig:briefsurveyplot}.
The red shaded region is excluded by dark matter direct detection
if the LSP is assumed stable; the blue shaded regions correspond
to the limits from the LHC recasting, see
Fig.\ \ref{fig:ReproduceATLASplot} for details. The cyan shaded
region corresponds to the additional LHC limits implemented in a
simplified way; in both plots the slepton search
(\ref{eq:sleptonsearch}), Ref.\ \cite{Aad:2019vnb} excludes a
narrow strip at small $\mu$ and $M_2$, where the slepton--LSP mass splitting
is largest. In the right plot the compressed-mass searches of
Ref.\ \cite{Aad:2019qnd} exclude another small region at large
$M_2$, which enters the LSP mass via mixing. The thin solid gray line corresponds to the vacuum
stability constraint of Ref.\ \cite{Endo:2013lva}; it applies in case the
left- and right-handed stau-masses are set equal to the
smuon/selectron masses and excludes the points to its right,
i.e.\ with larger $\mu$.
}
\end{figure}
The $(\tilde{W}\tilde{l})$-scenario involves a Wino-like LSP and
sleptons significantly lighter than 700 GeV. Current LHC-constraints
on sleptons allow this scenario provided the slepton--LSP mass
splitting is sufficiently small. The scenario is illustrated in
Fig.~\ref{fig:WHsleptonscompresseda}
in the $\mu$--$M_2$-plane.
The mass splitting is chosen as $m_{\text{L},\text{R}}=M_2+50$ GeV, but the
plotted quantities are not very sensitive to this choice.
By definition of the scenario, the Bino mass is
assumed to be heavier than the Wino mass. Since the Bino mass is also
not strongly constrained by LHC data we fix it to $M_1=600$ GeV, an
intermediate value which
is always heavier than $M_2$ in the plot
but still allows significant BLR-contributions to $\amu^{\text{SUSY}}$.
The behaviour of $\amu^{\text{SUSY}}$ (red dashed lines and yellow/green coloured
regions) in this scenario with a Wino-like LSP is
similar to the previous one in the case of a Bino-like LSP. The
$\amu^{\text{SUSY}}$-contours in
Figs.~\ref{fig:Binosleptonscompresseda},~\ref{fig:Binosleptonscompressedb}
and~\ref{fig:WHsleptonscompresseda}
have a similar shape. For small $\mu$
the WHL-contributions of
Eqs.\ (\ref{eq:SUSY1Lnumerical},\ref{eq:SUSYMIapprox}) dominate, and
for $\mu\gtrsim1500$\,GeV the
BLR-contributions to $\amu^{\text{SUSY}}$ dominate. An important difference is that the Wino-like
LSP scenario allows very low Wino masses without the need for
finetuning $M_1\approx M_2$ as e.g.\ in
Fig.~\ref{fig:Binosleptonscompressedb}.
Hence the WHL-dominance region is wider in the
$(\tilde{W}\tilde{l})$-scenario. Throughout this WHL-dominance region
the actual choice of $M_1=600$ GeV is inconsequential.
This choice is important for $\mu\gtrsim1500$\,GeV, where
the BLR-contributions dominate and $\amu^{\text{SUSY}}$ rises with $\mu$. Higher
choices of $M_1$ would reduce $\amu^{\text{SUSY}}$ in this region.
The recasting of the ATLAS chargino search
\cite{Aaboud:2018jiw} excludes only a tiny blue shaded region in the
plot at $M_2,\mu\lesssim220$ GeV, where both charginos and the
sleptons are similar in mass.
In addition the cyan shaded narrow strip at small $\mu$ corresponds to
the additional LHC limits implemented in a simplified way as discussed
in Sec.\ \ref{sec:SUSYconstraints}. The specific analysis relevant
here is
the slepton search (\ref{eq:sleptonsearch}),
Ref.\ \cite{Aad:2019vnb}. It excludes this cyan parameter strip,
where the splitting between the slepton and LSP mass eigenvalues is
largest.
The plot shows the dark matter direct detection limit as a red shaded band.
It is well-known that a Wino-like LSP cannot
produce the observed relic density unless the Wino mass is in the
multi-TeV region, for a recent account see
Ref.\ \cite{Beneke:2020vff}. In the mass region of interest for us, we
obtained a
Wino-LSP relic density which is typically a factor $10\ldots100$ smaller than
the observed relic density.
Nevertheless,
the LSP--nucleon cross sections depend on the Wino--Higgsino mixing
and are rather high. Hence the dark matter direct searches imply
significant lower limits on the Higgsino mass $\mu$ of around
$300\ldots800$ GeV,
see also footnote
\ref{footnotedarkmatterapprox}.
This limit could only be circumvented by dropping the assumption of a
stable LSP, e.g.\ by assuming R-parity violation or LSP-decays into
light gravitinos.
The thin solid gray line in the plot corresponds to the vacuum
stability constraint of Ref.\ \cite{Endo:2013lva}. It applies if
the left- and right-handed stau-masses are both set equal to the
smuon/selectron masses. In such a case of slepton universality
an upper limit on $\mu$ exists which
essentially eliminates the region in which the BLR-contribution to
$\amu^{\text{SUSY}}$ dominates.
In summary, the $(\tilde{W}\tilde{l})$-scenario can easily accommodate
$\Delta a_\mu$ \update{as large as the deviation $\damu^{\text{2021}}$ or even larger}.
\update{Specifically e.g.\ the new world
average (\ref{eqn:avgDiscrepancy}) can be explained for $\tan\beta=40$ with universal slepton
masses and an LSP mass around $M_2=350$ GeV and $\mu=800$ GeV.
Higher masses, in particular higher $\mu$ are also possible. For lower
masses much smaller values of $\tan\beta$ can be sufficient.}
There are essentially no LHC
constraints on this scenario as long as the mass splitting between the sleptons and the LSP
is sufficiently small. Dark matter direct detection enforces lower
limits on $\mu$, still leaving a wide parameter space in which the
WHL-contributions to $\amu^{\text{SUSY}}$ are dominant and large.
If slepton universality is assumed including staus, vacuum stability
imposes an upper limit on $\mu$; larger $\mu$ is possible if at least
one heavy stau is assumed, which provides
further parameter space with large $\amu^{\text{SUSY}}$.
Again we provide a brief survey of model building efforts which lead
to constructions like the $(\tilde{W}\tilde{l})$-scenario
with a Wino-like LSP, light
sleptons and very large $\mu$.
Ref.\ \cite{Yanagida:2020jzy} has
constructed an extreme variant of such a model
with Wino-like LSP and slepton masses
around 500 GeV based on
Higgs-anomaly mediated SUSY-breaking
\cite{Yanagida:2020jzy,Yin:2016shg,Evans:2013uza}; that construction
produces $\mu\gtrsim25$ TeV.
Ref.\ \cite{Cox:2018vsv} shows
that a similar scenario which involves both light Wino and Bino can follow from
gaugino+Higgs-mediated SUSY breaking.
Ref.\ \cite{Endo:2019bcj} has also embedded a
$(\tilde{W}\tilde{l})$-like scenario in a UV-model
based on Higgs-mediated SUSY breaking. Such scenarios also had the
potential to explain not only $\Delta a_\mu$ but also the smaller deviation in the
electron magnetic moment $a_e$ \cite{Badziak:2019gaf,Endo:2019bcj}
(see however footnote \ref{aefootnote}).
\subsubsection{$(\tilde{H}\tilde{l})$-scenario with light sleptons and
Higgsino }
\label{sec:HiggsinoSlepscenario}
The $(\tilde{H}\tilde{l})$-scenario is characterized by a Higgsino-like LSP and
sleptons significantly lighter than 700 GeV. Again, in view of current
LHC-constraints the slepton--LSP mass splitting cannot be much larger
than 100 GeV. The scenario is illustrated in
Fig.\ \ref{fig:WHsleptonscompressedb} in the $\mu$--$M_2$-plane. We again fix
the mass splitting $m_{\text{L}, \text{R}}=\mu+50$ GeV and we set the Bino
mass to $M_1=2000$ GeV as reference values, although the considered observables are not very
sensitive to this choice (except that significantly lower $M_1$ can
lead to conflict with dark matter direct detection limits).
The behaviour of $\amu^{\text{SUSY}}$ shown by the red dashed lines and
yellow/green coloured regions is quite different from the one in the previous
two scenarios. Since the Higgsino mass $\mu$ is small, only the
WHL-contributions of
Eqs.\ (\ref{eq:SUSY1Lnumerical},\ref{eq:SUSYMIapprox}) are
important. For this reason the result shows the generic
$1/M_{\text{BSM}}^2$-behaviour explained in
Sec.\ \ref{sec:BSMoverview} and
drops quickly with increasing $\mu$ or increasing
$M_2$, while the choice of $M_1$ has little influence.
Still, e.g.\ if $\mu=300$ GeV the BNL deviation can be
explained at the 1$\sigma$ level even for $M_2=1$ TeV.
The constraints from LHC-recasting, shown in blue, are rather
strong. They originate from the chargino/slepton channel searches of
Eq.\ (\ref{Charginosleptonlimits}),
Refs.\ \cite{Aaboud:2018jiw,Sirunyan:2017lae}.
Compared to e.g.\ Fig.\ \ref{fig:ReproduceATLASplot}, now $\mu$ instead of $M_1$
takes the role of the LSP mass, and the recasting shows that the
resulting limits on the Wino-like chargino mass and
thus on $M_2$ are weaker than in
Fig.\ \ref{fig:ReproduceATLASplot}, extending only up to $M_2=700$
GeV in Fig.\ \ref{fig:WHsleptonscompressedb}.
The plot also shows additional cyan shaded parameter regions, which
are subject to
additional LHC-constraints implemented in a simplified way. Here,
both the slepton search of Ref.\ \cite{Aad:2019vnb} and the compressed-mass searches of
Ref.\ \cite{Aad:2019qnd} exclude small regions at small $M_2$ close to the $M_2 = \mu$ boundary and at
large $M_2\approx 2$ TeV, close to the Bino mass. In both regions,
neutralino mixing happens to lead to
LSP--next-to-LSP mass splittings which are excluded.
Though not visible in the plot we
mention that the same compressed-mass searches also impose limits on
the Higgsino-like chargino/neutralino system and thereby exclude
parameter space with $\mu<200$ GeV.
It is well known that a
Higgsino-like LSP cannot
produce the observed relic density for such light Higgsinos as
considered here. Still there are relevant limits from dark matter
direct detection experiments \cite{Baer:2018rhs}. In the mass region
of interest for us, we find that the
Higgsino-LSP relic density is typically a factor 10 smaller than
the observed relic density.
This value is
higher than the relic density in the previous case of a Wino-like LSP. As a
result, stronger dark matter direct detection limits are obtained on
$M_2$, shown as the red shaded band. In the plot they
require $M_2\gtrsim500\ldots1500$
GeV, depending on $\mu$.
As before, the dark matter direct detection limits apply only under
the assumption that the Higgsino-like LSP is stable.
In summary, the $(\tilde{H}\tilde{l})$-scenario is strongly
constrained by LHC chargino searches and by dark matter direct
detection constraints
(if the LSP is assumed to be stable). Still it allows values of $M_2$
as small as around $700$ GeV and $\mu$ around $200$ GeV which lead to
$\amu^{\text{SUSY}}$ as large as $30\times10^{-10}$, but outside this small corner
of parameter space the values of $\amu^{\text{SUSY}}$ quickly drop.
\update{The average deviation can be explained at the $1\sigma$ level for LSP masses up
to $350$ GeV for $\tan\beta=40$. At the $2\sigma$ level, much higher
LSP masses are possible.}
In the model building literature, Ref.\ \cite{Han:2020exx} has considered a scenario
of the $(\tilde{H}\tilde{l})$-type with significant mass gap between
the Higgsino-LSP and the two gauginos, motivated within the context of a
model with seesaw mechanism and SO(10) GUT constraints on the gaugino masses but
non-universal scalar masses. Although electroweak LHC and DMDD
constraints are not
considered, this reference also finds only small viable contributions
to $\amu^{\text{SUSY}}$ as a result of model-specific
correlations to flavour-violating observables.
\subsubsection{$(\tilde{B}\tilde{W}\tilde{H})$-scenario with light charginos }
\begin{figure}[t]
\begin{subfigure}[t]{0.405\textwidth}
\centering\includegraphics[width=\textwidth]{SUSYplots/BinoLSPPlot.pdf}
\caption{}
\label{fig:chalightersleptonsa}
\end{subfigure}
\begin{subfigure}[t]{0.405\textwidth}
\centering\includegraphics[width=\textwidth]{SUSYplots/WHLSPPlot.pdf}
\caption{}
\label{fig:chalightersleptonsb}
\end{subfigure}
\begin{subfigure}[t]{0.18\textwidth}
\centering\includegraphics[scale=1.]{SUSYplots/PlotAmuLegendhalf.pdf}
\end{subfigure}
\caption{\label{fig:chalightersleptons} (a)
$(\tilde{B}\tilde{W}\tilde{H})$-scenario. (b)
$(\tilde{H}\tilde{W}/\tilde{W}\tilde{H})$-scenario. For parameter values see the
plots and the text.
The red dashed contours correspond to values of
$\amu^{\text{SUSY}}$ as indicated in the legend on the right; the yellow/green coloured
regions are the $1\sigma$ bands of the BNL deviation
(\ref{eqn:BNLDiscrepancy}) and of the new
deviation including FNAL (\ref{eqn:avgDiscrepancy}), and their overlap. For the
$\tan\beta$-reinterpretation see caption of Fig.\ \ref{fig:briefsurveyplot}.
The red shaded region is excluded by dark matter direct detection
if the LSP is assumed stable; in the left plot this red region is
the rectangle extending up to $\mu\approx540$ GeV. Both plots do not contain regions
excluded by the LHC recasting. The cyan shaded
regions correspond to the additional LHC limits implemented in a
simplified way. In the left plot the cyan region is the large
region extending from $\mu\gtrsim300\ldots400$ GeV to the right
(it is partially overlaid with dark matter and $\amu^{\text{SUSY}}$-regions);
it mainly arises
from the stau-channel chargino search (\ref{eq:Chastaulimits}),
Ref.\ \cite{Aaboud:2017nhr}; it is valid if light staus are assumed for
stau-coannihilation. The red thick line in the left plot
corresponds to the parameter strip
where chargino-neutralino coannihilation is possible; in this
region the dark matter relic density can be correctly described
without light staus and the LHC-constraint does not apply.
}
\end{figure}
The remaining scenarios differ from the previous ones in that they
involve two light charginos, while the
sleptons are not assumed particularly light.
The $(\tilde{B}\tilde{W}\tilde{H})$-scenario discussed here
assumes both charginos to be lighter than the sleptons, but the
Bino-like neutralino to be the LSP. Hence $M_1<M_2,\mu$ with no
particular order between $M_2$
and $\mu$. The
scenario is illustrated in Fig.\ \ref{fig:chalightersleptonsa}.
In the figure we set $m_{\text{L}, \text{R}}=700$ GeV, safely but not too far above
the maximum LHC-limit of Ref.\ \cite{Aad:2019vnb}.
We also fix $M_1=200$ GeV
and show $\amu^{\text{SUSY}}$ versus LHC and dark matter
constraints as a function of $M_2$ and $\mu$.
The behaviour of $\amu^{\text{SUSY}}$ is dominated by the WHL-contributions of
Eqs.\ (\ref{eq:SUSY1Lnumerical},\ref{eq:SUSYMIapprox}) which are
suppressed by the heavy slepton masses and further suppressed if $\mu$
and/or $M_2$ become heavy. The maximum contribution to $\amu^{\text{SUSY}}$ in Fig.\ \ref{fig:chalightersleptonsa} is around
$30\times10^{-10}$; if $\mu$ and $M_2$ are heavier than
$500$ GeV, $\amu^{\text{SUSY}}$ reaches at most $15\times10^{-10}$
for $\tan\beta=40$ \update{(which
is just within the $2\sigma$ region of the updated deviation
$\damu^{\text{2021}})$}.
The chosen value of
$M_1=200$ GeV has almost no influence on $\amu^{\text{SUSY}}$. However the parameter
space of this $(\tilde{B}\tilde{W}\tilde{H})$-scenario is subject to
an interesting interplay of LHC and dark matter constraints.
First, since we assume the dark matter relic density to be correctly
explained, the limits from dark matter direct detection are
applicable, similarly to the plots in
Fig.\ \ref{fig:Binosleptonscompressed}. In the present case with
$M_1=200$ GeV, the region with $\mu\lesssim540$ GeV is excluded by
this constraint. The constraint is shown by the red shaded band (partially
overlaid with other coloured regions). As a result the entire region with
$\amu^{\text{SUSY}}>20\times10^{-10}$ is excluded, independently of other
details.
Second, in order to achieve the correct relic density for this case of
a Bino-like LSP in the given mass range, some coannihilation
mechanism must act. Since sleptons are heavy, the two options are either
chargino-coannihilation or
stau-coannihilation. Chargino-coannihilation requires a specific
chargino--LSP mass splitting. In Fig.\ \ref{fig:chalightersleptonsa}
the thick red line denotes the one-dimensional contour along which
chargino-coannihilation is possible due to appropriately small mass
splittings between the lightest chargino and the LSP. Along this red
contour we may assume
staus to be heavy (e.g.\ degenerate with 1st and 2nd generation
sleptons or even heavier).
Anywhere outside the red line we must assume at least one or both staus
to be light and close to the LSP in mass for stau-coannihilation.
Finally, we can apply LHC-constraints. The constraints depend on the
coannihilation mechanism and the stau masses.
Along the red contour for chargino-coannihilation we assume the
staus to be heavy. Then the
LHC-limit from chargino searches
with $Wh$-channel \cite{Aad:2019vvf}, see
Eq.\ (\ref{eq:ChaWhlimits}), is relevant.
It turns out, however, that these limits
do not exclude the chargino-coannihilation contour due to the small mass splittings.
The scenario with
Bino-like LSP and light charginos but heavier sleptons and staus has
also been investigated thoroughly in Ref.\ \cite{Athron:2018vxy}, where
such scenarios were found to be unconstrained and in some cases
to fit excesses in the data.
Everywhere outside the red contour, we need to assume
stau-coannihilation and light stau(s).
In this case, the constraint from chargino searches
with stau-channel \cite{Aaboud:2017nhr}, see
Eq.\ (\ref{eq:Chastaulimits}), applies.
If
we apply this LHC-constraint as described in
Sec.\ \ref{sec:SUSYconstraints}, the cyan shaded region in the plot is
excluded, which is essentially the entire region with $\mu\gtrsim350$ GeV.\footnote{As mentioned in the context of
Eq.\ (\ref{eq:Chastaulimits}) the LHC-constraint of
Ref.\ \cite{Aaboud:2017nhr} is rather robust against
changes of the stau masses and mixings and against the Higgsino content
of the chargino. Hence we apply literally the constraints obtained in
the simplified-model interpretation of Ref.\ \cite{Aaboud:2017nhr} to
charginos with dominant decay into staus.}
As a result, the combination of dark matter and LHC-constraints
excludes the entire scenario with stau-coannihilation for $M_1=200$
GeV. Larger values of $M_1$ \update{above around $300$ GeV} would relax
LHC limits but lead to
stronger dark matter direct detection limits (see footnote
\ref{footnotedarkmatterapprox}), thus
leaving little room for large contributions to
$\amu$.
\update{ If in the future the deviation decreases, this parameter region with significantly
higher LSP masses $M_1$ and stau-coannihilation may become more promising.}
Ref.\ \cite{Hagiwara:2017lse} has considered $\amu$ versus LHC in the same
scenario as well, however evaluated for $M_1\le50$ GeV. This smaller
value of $M_1$ leads to larger $\amu^{\text{SUSY}}$ and weaker dark matter constraints,
but also to
stronger LHC exclusion limits (assuming stau masses in between $M_1$
and the chargino masses) essentially excluding the entire
$(\mu,M_2)$-region of interest for $\amu$. Our larger value $M_1=200$
GeV reduces
$\amu^{\text{SUSY}}$ but also leads to a parameter region with low chargino mass
allowed by LHC; however our parameter region is challenged by DMDD
constraints. This comparison highlights the complementarity between
$\amu$, LHC and dark matter constraints.
In summary, the entire
$(\tilde{B}\tilde{W}\tilde{H})$-scenario with light staus turns out to be strongly
under pressure. \update{A remaining possibility in this
$(\tilde{B}\tilde{W}\tilde{H})$-scenario is chargino-coannihilation
and heavier staus:
the small part of the red thick line at $\mu>540$ GeV is viable. Here
$\Delta a_\mu$ reaches up to $20\times10^{-10}$ for
$\tan\beta=40$, }\update{which is just sufficient to accommodate the
deviation (\ref{eqn:avgDiscrepancy}) at the $1\sigma$
level.}
The $(\tilde{B}\tilde{W}\tilde{H})$-scenario appears in an elaborate
model building construction of Ref.\ \cite{Altin:2017sxx}
based on the Pati-Salam model with inverse seesaw mechanism for neutrino
masses. The benchmark points of that reference are very similar to the
region of Fig.\ \ref{fig:chalightersleptonsa} with $\mu\sim450$
GeV, $M_2\sim300$ GeV; however that reference does not consider DMDD
constraints, which exclude such masses in our
Fig.\ \ref{fig:chalightersleptonsa}. In that model the right-handed
sneutrinos provide significant additional contributions to $\amu$,
enlarging the parameter space with large $\amu^{\text{SUSY}}$.
A variant of the scenario has been considered in
Ref.\ \cite{Pozzo:2018anw}. This reference investigates dark matter
generation via resonances, the so-called ``$Z$/$h$-funnel'' regions. It
shows that LHC-constraints allow the $h$-funnel region, i.e.\ an
LSP-mass around 60 GeV, together with large $\tan\beta\gtrsim20$ and
$\mu\gtrsim390$ GeV, which opens up additional parameter space similar
to the one of Fig.\ \ref{fig:chalightersleptonsa}.
\subsubsection{$(\tilde{H}\tilde{W}/\tilde{W}\tilde{H})$-scenarios
with light charginos }
In the $(\tilde{H}\tilde{W}/\tilde{W}\tilde{H})$-scenarios again both charginos are lighter than the
sleptons, but now the Bino mass is also heavier, so the LSP is either Wino- or
Higgsino-like. Both scenarios are illustrated in
Fig.\ \ref{fig:chalightersleptonsb} in the $\mu$--$M_2$-plane. Like
in the previous case we set $m_{\text{L}, \text{R}}=700$ GeV, safely but not too far above
the LHC limit. We also set $M_1=2000$ GeV. This choice is not critical
--- it neither influences LHC-limits nor $\amu$, but it avoids limits
from dark matter direct detection (which would be similar to the
limits on $M_2$ discussed below).
The Figure shows that the values of $\amu^{\text{SUSY}}$ are very similar to the
previous $(\tilde{B}\tilde{W}\tilde{H})$-case. This is no surprise since $\amu^{\text{SUSY}}$ is dominated by the
WHL-contributions and the higher value of $M_1$ is inconsequential.
\update{Hence the new $\amu$ deviation can be generally
explained if $\tan\beta$ is around $40$ or higher.}
In fact, the main difference
between the present scenarios and the previous
$(\tilde{B}\tilde{W}\tilde{H})$-scenario is the nature of the LSP and
the resulting very different dark matter and LHC constraints.
The LHC-constraints on the previous scenario were strong in case of
light staus. Since now in the $(\tilde{H}\tilde{W}/\tilde{W}\tilde{H})$-scenarios there is no need to assume light
staus we assume the staus to be at least as heavy as the other
sleptons. As a result there are essentially no LHC constraints on the
scenarios, in line with the previous case and the findings of
Ref.\ \cite{Athron:2018vxy}. Only the regions with
very small $\mu$ or $M_2\lesssim200$
GeV are subject to constraints from compressed-spectrum searches for
the Higgsino- or Wino-like chargino/neutralino system. In the plot
only a tiny cyan region at $\mu\approx200$ GeV is excluded in this
way, but the largest part of parameter space is allowed by LHC.
The dark matter direct detection limits apply under the assumption
that the LSP is stable. As also mentioned in the context of
Fig.\ \ref{fig:WHsleptonscompressed} in Secs.\
\ref{sec:WinoSlepscenario} and \ref{sec:HiggsinoSlepscenario},
the LSP relic density is smaller than the
observed value in the displayed parameter region, hence only a
fraction of the observed dark matter can be explained. Nevertheless,
the LSP--nucleon cross sections are sufficiently high to imply
significant lower limits on the chargino masses. The limits are
stronger in case of a Higgsino-like LSP and exclude the largest part
of parameter space in the upper left part of the plot. The fact that
current direct detection constraints exclude a
large fraction of the parameter space with Higgsino-like dark
matter has already been observed in Ref.\ \cite{Baer:2018rhs},
where also scenarios with non-thermal dark matter production and
additional dark matter candidates such as axions were considered.
In the case of a
Wino-like LSP the dark matter relic density is smaller and the
constraints from direct detection are weaker, leaving open a larger
triangular region in the lower right part of the plot.
In summary, therefore, almost the entire Higgsino-like LSP region of the $(\tilde{H}\tilde{W}/\tilde{W}\tilde{H})$-scenario
is excluded by the combination of LHC
compressed-mass searches and dark matter direct detection.
Only a small region in the upper left part of the plot around
$\mu\sim250$ GeV and
$M_2\sim600$ GeV remains viable. In this region, $\amu^{\text{SUSY}}$ is always
smaller than $20\times10^{-10}$ for $\tan\beta=40$ and \update{outside the
$1\sigma$ region of the new deviation $\damu^{\text{2021}}$.}
The Wino-like LSP region of the $(\tilde{H}\tilde{W}/\tilde{W}\tilde{H})$-scenario
in the triangular bottom
right region of the plot
\update{allows a larger viable parameter space, in which
a
$1\sigma$ explanation of $\damu^{\text{2021}}$ is possible for $\tan\beta=40$ and
even a full explanation $\amu^{\text{SUSY}}=25\times10^{-10}$ can be reached.
However for smaller $\tan\beta$, the current deviation is harder to explain.} In case
the dark
matter constraints are
not applied (by assuming unstable LSP) both scenarios can accommodate
$\Delta a_\mu$ higher than $30\times10^{-10}$.
In the literature, several model-building efforts have led to
specific constructions with mass patterns of the
$(\tilde{H}\tilde{W}/\tilde{W}\tilde{H})$ kind.
The scenario with either Higgsino- or Wino-like LSP can be motivated in
gauge-mediated SUSY breaking \cite{Bhattacharyya:2018inr}, extended by
non-minimal contributions to the soft-breaking parameters which allow
the Higgs, squark and slepton scalar mass parameters to be
non-universal.
In addition, in the context of gauge-mediated SUSY breaking
the lightest neutralino can decay into gravitinos; hence the DMDD
constraints of Fig.\ \ref{fig:chalightersleptonsb} do not apply and
the scenario of
Ref.\ \cite{Bhattacharyya:2018inr} provides a
viable SUSY model explaining $\Delta a_\mu$.
The Higgsino-like LSP scenario can be motivated within
anomaly-mediated SUSY breaking \cite{Chowdhury:2015rja},
gaugino-mediated SUSY breaking \cite{Harigaya:2015kfa} or in the
context of electroweak finetuning considerations
\cite{Li:2016ucz,Padley:2015uma}. The scenario has also been
constructed in
Ref.\ \cite{Harigaya:2015jba} as a
focus-point scenario (called FPNUS or FPHGM) based on gravity
mediation with
non-universal scalar masses or on Higgs-gaugino mediation.
All these references do
not apply DMDD constraints, which may be justified by assuming
R-parity violation \cite{Harigaya:2015jba}. Similar scenarios were
considered in a model with pseudo-Dirac gluino \cite{Li:2017fbg};
there, DMDD constraints were also investigated, based only on the
weaker limits from LUX and PandaX, which nevertheless lead to significant
constraints on the parameter space.
Ref.\ \cite{Hussain:2017fbp} also considers the
$(\tilde{H}\tilde{W})$-scenario with Higgsino-like LSP, but here
universality between squarks and
sleptons and quark flavour constraints imply only small contributions
to $\amu$.
\subsection{Summary}
\label{sec:SUSYSummary}
Here we briefly summarize our main results on SUSY explanations of
$\damu^{\text{2021}}$. Like for other BSM scenarios, negative results from LHC and
dark matter searches have significantly reduced the viable SUSY
parameter space. Simple traditionally considered cases such as the
Constrained MSSM are already excluded as explanations of
$\Delta a_\mu^{\text{BNL}}$ and now \update{also of $\damu^{\text{2021}}$}
\cite{Buchmueller:2013rsa,Bechtle:2015nua,Han:2016gvr,Athron:2017qdc}.
In our detailed phenomenological analysis we focused on the general
MSSM without restrictions from GUT scale assumptions or specific SUSY
breaking mechanisms. The only
restrictions imposed by our analysis are a stable neutralino-like LSP
which constitutes (part or all of) dark matter and the
absence of flavour-violating soft SUSY-breaking parameters. For
simplicity we also consider equal masses of left- and right-handed
selectrons and smuons (called sleptons for short), while the
stau-masses are left arbitrary. In a series of footnotes
\ref{footnoteBHR}, \ref{footnoteMRSSM}, \ref{footnoteLFV}, \ref{footnoteTBMUlarge} we
commented on alternative cases with ultra-high $\tan\beta$ or $\mu$,
enhancements via lepton-flavour violation, and the MRSSM without
$\tan\beta$ enhancement. The results of our analysis are as follows.
\begin{itemize}
\item
\update{
\underline{Scenario with heavy charginos and smuons}: the MSSM scenario with generally heavy masses,
corresponding to the upper right quadrant of
Fig.\ \ref{fig:briefsurveyplot} where LHC limits are trivially
avoided, is disfavoured
as an explanation of $\damu^{\text{2021}}$. For such heavy SUSY masses, the
current $\amu$ deviation can at most be
explained if $\tan\beta\gg40$ and/or $\mu\gg4$~TeV.}
\item
\underline{$(\tilde{B}\tilde{l})$-scenario}: \update{a
promising} MSSM scenario is the
$(\tilde{B}\tilde{l})$-scenario with Bino-like LSP and close-by
sleptons to evade LHC limits. We identified three allowed parameter
regions particularly
promising in view of $\damu^{\text{2021}}$: (1) Wino mass above LHC limits of
around $900$ GeV (for LSP mass of $250$ GeV)
and Higgsino mass $\mu$ of order
$1$ TeV. Here all slepton and stau masses may be universal. (2)
Wino mass as before but $\mu$ in the multi-TeV region. Here at least one stau must be
heavier to avoid vacuum stability constraints. (3) Light Wino with
mass similar to the Bino and slepton masses. In all regions dark
matter data implies a lower mass limit on $\mu$. The relic density
can be generated via stau/slepton coannihilation; in region
(3) also Wino coannihilation is possible.
\update{In all these cases
the result for $\damu^{\text{2021}}$ after the FNAL measurement can be easily explained in a wide range
of masses and $\tan\beta$ values.}
\item
\underline{ $(\tilde{W}\tilde{l})$- and $(\tilde{H}\tilde{l})$-scenarios}:
the $(\tilde{W}\tilde{l})$- and $(\tilde{H}\tilde{l})$-scenarios are
characterized by Wino- or Higgsino-like LSP; the sleptons are
sufficiently close to the LSP to evade LHC limits. Specifically the
$(\tilde{W}\tilde{l})$-scenario can lead to the largest $\amu^{\text{SUSY}}$
of any MSSM scenario in a wide parameter space via the WHL and
BLR contributions: the DMDD constraints are weak
and there are essentially no additional
LHC constraints. \update{The new updated $\amu$ deviation can be accommodated
in a wide range of masses and $\tan\beta$ values, see
Fig.\ \ref{fig:WHsleptonscompresseda}. At the $1\sigma$ level and
for $\tan\beta=40$ LSP masses above $400$ GeV are possible.}
In the $(\tilde{H}\tilde{l})$-scenario $\amu^{\text{SUSY}}$ is
dominated only by WHL contributions, and DMDD and
LHC constrain the Wino mass to be rather heavy. Hence there is an
upper limit to the possible values of $\amu^{\text{SUSY}}$,
\update{but a $1\sigma$ explanation of the
current $\amu$ deviation is possible for
$\tan\beta=40$ and for LSP masses below $350$ GeV.}
However, in both cases of Wino- or Higgsino-like LSP, the dark matter relic density cannot be
fully accommodated simultaneously with $\damu^{\text{2021}}$, necessitating
additional non-MSSM components
of dark matter such as gravitinos.
\item
\underline{$(\tilde{B}\tilde{W}\tilde{H})$- and
$(\tilde{H}\tilde{W}/\tilde{W}\tilde{H})$-scenarios}:
in the $(\tilde{B}\tilde{W}\tilde{H})$- and
$(\tilde{H}\tilde{W}/\tilde{W}\tilde{H})$-scenarios both
charginos are assumed to be lighter than the sleptons, which in turn are
constrained by LHC data.
$\amu$ is rather limited in both scenarios. In the
$(\tilde{B}\tilde{W}\tilde{H})$-scenario the Bino-like neutralino is
the LSP and even lighter than both charginos. Outside the Bino-Wino
coannihilation region this scenario is very
strongly constrained by the combination of dark matter and LHC
constraints. In the Bino-Wino coannihilation region, all
constraints
can be fulfilled for sufficiently large $\mu$, however in this
region and for
$\tan\beta=40$
\update{$\amu^{\text{SUSY}}$ is almost always at least
$1\sigma$ lower than the observed deviation.}
In the $(\tilde{H}\tilde{W}/\tilde{W}\tilde{H})$-scenarios either
the Wino- or Higgsino-like neutralino is the LSP. Assuming staus as
heavy as the other sleptons, there are no relevant LHC constraints.
Although the full dark matter relic density is below the observed one
(similarly to the $(\tilde{W}\tilde{l})$- and
$(\tilde{H}\tilde{l})$-cases), direct detection
constraints exist and require a mass splitting between the Higgsinos and
Winos. The resulting $\amu^{\text{SUSY}}$ can be larger in case of a Wino-like
LSP. \update{Such a scenario is well able to explain the
observed deviation for $\tan\beta=40$ or slightly smaller
$\tan\beta$ values.}
\end{itemize}
\section{Details on LHC-constraints on SUSY parameter regions}
\label{app:SUSYLHC}
In section \ref{sec:SUSYconstraints}
we described our procedure for recasting LHC-constraints on the SUSY
parameter space; the procedure was then applied in section
\ref{sec:SUSYpheno} to investigate the impact of
LHC-constraints on the SUSY parameter space. Here we provide further
details on the recasting.
In Fig.\ \ref{fig:ReproduceATLASplot} we
illustrate the recasting by reproducing the exclusion contour of
the ATLAS chargino/neutralino search of Ref.\ \cite{Aaboud:2018jiw},
Fig.\ 8c. Like in that reference, we have generated MSSM parameter
points with mass hierarchy $M_1<m_{\text{L}, \text{R}}<M_2<\mu$, i.e.\ Bino-like
LSP, intermediate sleptons of the 1st and 2nd generation, and a
Wino-like pair of $\chi_2^0/\chi_1^\pm$. We allowed the slepton
mass ratio parameter
$x=(m_{\text{L}, \text{R}}-m_{\chi_1^0})/(m_{\chi_1^\pm}-m_{\chi_1^0})$ to be in
the range $1/3<x<2/3$, while Ref.\ \cite{Aaboud:2018jiw} fixed
$x=1/2$. The thick solid blue contour in our
Fig.\ \ref{fig:ReproduceATLASplot} corresponds to points where the
predicted signal yield for at least one signal region is equal to
the respective ATLAS 95\% C.L.\ upper limit. It can be seen that
this contour tracks the corresponding 95\% C.L.\ contour of
Ref.\ \cite{Aaboud:2018jiw}, Fig.\ 8c, very well, i.e.\ to
within 50 GeV. To illustrate the gradient we also show the
thick dashed blue contour, corresponding to points where the
predicted signal yield is three times higher than the respective
ATLAS upper limit. Finally we also show blue coloured regions corresponding
to different values of the largest effective $(-2\ln{\cal
L}_{\text{eff}})$ of
any implemented analysis (in this figure, this is always the 3-lepton channel
analysis of Ref.\ \cite{Aaboud:2018jiw}).
We find that numerically the contour with $(-2\ln{\cal
L}_{\text{eff}})^{\text{Max}}=6$ tracks very well the 95\%
C.L.\ contour. We checked that the same would be true for most
plots of our section \ref{sec:SUSYpheno} (the exceptions are very
few cases where such a comparison is not possible because the
relevant contribution to $(-2\ln{\cal
L}_{\text{eff}})^{\text{Max}}$ comes from the CMS analysis of
Ref.\ \cite{Sirunyan:2018iwl} for compressed spectra, which does
not provide individual 95\% C.L.\ upper signal limits).
Fig.\ \ref{fig:ReproduceATLASplot} shows that the recasting
reproduces the corresponding original ATLAS exclusion contour very
well. On the other hand, several of the scenarios
presented in Sec.\ \ref{sec:SUSYpheno}, particularly
Figs.\ \ref{fig:Binosleptonscompresseda},
and \ref{fig:chalightersleptons}
turned out to be entirely unconstrained by the LHC recasting.
Hence we present here quantitative results of our
recasting analysis, to confirm these statements and to expose further
details.\footnote{%
Our results are compatible with related results of
Ref.\ \cite{Athron:2018vxy} applying the {\tt GAMBIT/ColliderBit}
framework on chargino and neutralino searches with
heavy sleptons, and we refer to that reference for further
explanations of the weak LHC sensitivity to many realistic SUSY scenarios.}
We confirmed that out of all ATLAS and CMS analyses implemented in
\GAMBIT/\texttt{ColliderBit}\@\xspace, the ATLAS electroweakino search of
Ref.\ \cite{Aaboud:2018jiw} is most sensitive in the parameter regions
in
Figs.\ \ref{fig:Binosleptonscompressed} and \ref{fig:chalightersleptons}. Within
this ATLAS
search, the 3-lepton channel is most sensitive. The 3-lepton channel
in turn is divided into 11 signal regions. Hence we will mainly focus on the
results for these 11 signal regions in the following.
\begin{figure}
\null\hfill
\includegraphics[scale=1]{SUSYplots/ReproduceATLASMultiLEPFig8cPlot.pdf}
\hspace{0em}\hfill\null
\caption{\label{fig:ReproduceATLASplot}
Recasting of the exclusion limits corresponding to Ref.\ \cite{Aaboud:2018jiw},
Fig.\ 8c. The generated MSSM parameter
points have the mass hierarchy $M_1<m_{\text{L}, \text{R}}<M_2<\mu$, i.e.\ Bino-like
LSP, intermediate sleptons of the 1st and 2nd generation, and a
Wino-like pair of $\chi_2^0/\chi_1^\pm$. The slepton
masses satisfy
$x=(m_{\text{L}, \text{R}}-m_{\chi_1^0})/(m_{\chi_1^\pm}-m_{\chi_1^0})$ with
$1/3<x<2/3$. The thick solid blue contour can be directly compared to
the 95\% C.L.\ exclusion contour of Ref.\ \cite{Aaboud:2018jiw},
Fig.\ 8c; the thick dashed blue contour corresponds to points
where the predicted signal yield is three times higher than the
respective ATLAS upper limit in at least one signal region. The
blue coloured regions correspond to various values of the maximum
effective $(-2\ln{\cal
L}_{\text{eff}})$ of any analysis implemented in
\GAMBIT/\texttt{ColliderBit}\@\xspace. The plots in section \ref{sec:SUSYpheno}
show only the region
corresponding to $(-2\ln{\cal
L}_{\text{eff}})=6$, which is very close to the line where
$\left(S_\text{theo}/S_{\text{obs}}^{95\%}\right)\ge1$ for at
least one signal region; see text for more details.
}
\end{figure}
Tables \ref{tab:sampleLHCpointsunconstrainedscenarios} and \ref{tab:sampleLHCpointsunconstrainedscenariosresults} define
example SUSY parameter points and present detailed results. The first
four parameter
points represent the
$(\tilde{B}\tilde{l})$-, $(\tilde{B}\tilde{W}\tilde{H})$-, and
$(\tilde{H}\tilde{W})$-scenarios. They are examples obtained in a parameter scan
with particularly bad fit to experiment, i.e.\ particularly small
likelihood ratio (but they are still not excluded).
The other two points represent the excluded region in
Fig.\ \ref{fig:ReproduceATLASplot} with high or low chargino mass.
The columns $\ln{\cal
L}_{\text{eff}}^{\text{analysis}}$ of
Tab.\ \ref{tab:sampleLHCpointsunconstrainedscenarios}
display effective analysis-specific log-likelihood
differences $\ln{\cal
L}_{\text{eff}}^{\text{analysis}}\equiv\ln{\cal
L}_{\text{SR$^{\text{max}}$}}^{\text{analysis}}$, where
$\text{SR}^{\text{max}}$ denotes the signal region of the respective
analysis with the highest expected sensitivity, see
Sec.\ \ref{sec:SUSYconstraints} and
Ref.\ \cite{Balazs:2017moi}.
The analyses are the ATLAS analyses
of Ref.\ \cite{Aaboud:2018jiw} for the 3-lepton, 2-lepton
0-jet and 2-lepton+jets channels, and to the CMS analyses of
Ref.\ \cite{Sirunyan:2017lae} for the 2-same sign lepton and
3-lepton channels.
The table shows that the ATLAS and CMS 3-lepton channels are most
sensitive for all parameter points.
Tab.\ \ref{tab:sampleLHCpointsunconstrainedscenariosresults} shows the
results obtained by \texttt{ColliderBit}\@\xspace for the 11 invidual signal regions of
the ATLAS 3-lepton analysis and
compares to the ATLAS results. The most important observation is that
all signal yield predictions of the first four parameter points are far smaller than the ATLAS signal
$95\%$ C.L.\ upper limits. The entries which come relatively closest,
i.e.\ which lead to the smallest negative log-likelihood difference,
are highlighted in boldface. A second observation is that these
entries are not always identical to the ones selected by \texttt{ColliderBit}\@\xspace
for evaluating the effective log-likelihood difference for the
next-to-last column of
Tab.\ \ref{tab:sampleLHCpointsunconstrainedscenarios}. The reason
\cite{Balazs:2017moi} is that the selection is done assuming that the
observed counts match the background expectation; however for the
WZ-1Jc and slep-a signal regions this assumption is not true
($N_{\text{obs}}:N_{\text{exp}}=4:1.3\pm0.3$ and
$N_{\text{obs}}:N_{\text{exp}}=4:2.2\pm0.8$ respectively).
On the other hand, for the two parameter points of the excluded region
in Fig.\ \ref{fig:ReproduceATLASplot} we observe that the high-mass
point is indeed excluded by the slep-e signal region, and the low-mass
point is excluded by a variety of signal regions by a large margin.
\begin{table}
\centerline{
\scalebox{.8}{$\begin{array}{|c|c|c|c|c|c|c|c|c|c|c|c|}\hline
\text{Point} & M_1 & M_2 & \mu & m_{L,R}
& \ln{\cal
L}_{\text{eff}}^{\text{ATLAS\_3Lep}}& \ln{\cal
L}_{\text{eff}}^{\text{ATLAS\_2Lep0Jets}}&
\ln{\cal
L}_{\text{eff}}^{\text{ATLAS\_2LepPlusJets}}
&
\ln{\cal
L}_{\text{eff}}^{\text{CMS\_2SSLep}}&
\ln{\cal
L}_{\text{eff}}^{\text{CMS\_3Lep}}
\\\hline
\text{$(\tilde{B}\tilde{l})_1$}
& 200 & 1200 & 630 & 250 & -0.45 & 0.33 & 0.003&
0.002 & 0.25
\\\hline
\text{$(\tilde{B}\tilde{W}\tilde{H})_1$}
& 200 & 296 & 346 & 700 & 0.50 & 0.07 &0.04& 0.05& -0.24
\\\hline
\text{$(\tilde{B}\tilde{W}\tilde{H})_2$}
& 200 & 329 & 388 & 700 &1.05 &0.16& -0.37 & 0.05& -0.35
\\\hline
\text{$(\tilde{H}\tilde{W})_1$}
& 2000 & 239 & 162 & 700 &-0.83 &0.17 & -0.04 & -0.04&
-0.1
\\\hline
\text{(Fig.\ \ref{fig:ReproduceATLASplot})$_1$}
& 500 & 1039 & 2000 & 800 &-3.4 & -0.2& 0 &0 & -0.13
\\\hline
\text{(Fig.\ \ref{fig:ReproduceATLASplot})$_2$}
& 204 & 350 & 2000 & 300 &-55.8 & -15.9& 0 & -6.3 &
-56.9
\\\hline
\end{array}
$ }}
\caption{\label{tab:sampleLHCpointsunconstrainedscenarios} Definitions
and basic properties of sample parameter points. The first four
points represent the $(\tilde{B}\tilde{l})$-,
$(\tilde{B}\tilde{W}\tilde{H})$-, and
$(\tilde{H}\tilde{W})$-scenarios and correspond to
points in
Figs.\ \ref{fig:Binosleptonscompressed} and \ref{fig:chalightersleptons}
with particularly bad fit to experiment (though not excluded).
The last two points represent the excluded region in
Fig.\ \ref{fig:ReproduceATLASplot}. The columns $\ln{\cal
L}_{\text{eff}}^{\text{analysis}}$
display effective analysis-specific log-likelihood
differences obtained by \GAMBIT/\texttt{ColliderBit}\@\xspace as
described in Sec.\ \ref{sec:SUSYconstraints} and in
Ref.\ \cite{Balazs:2017moi}. They correspond to the ATLAS analyses
of Ref.\ \cite{Aaboud:2018jiw} for the 3-lepton, 2-lepton
0-jet and 2-lepton+jets channels, and to the CMS analyses of
Ref.\ \cite{Sirunyan:2017lae} for the 2-same sign lepton and 3-lepton
channels.}
\end{table}
\begin{table}
\centerline{
\scalebox{.82}{$
\begin{array}{|cccc|cccccc|}
\hline
\text{Region} & N_{\text{obs}} & N_{\text{exp}} & S^{95}_{\text{obs}} &
\begin{array}{c}\text{$(\tilde{B}\tilde{l})_1$} \\ S\\
\ln{\cal L}_{\text{SR}}
\end{array}
& \begin{array}{c}\text{$(\tilde{B}\tilde{W}\tilde{H})_1$} \\ S\\
\ln{\cal L}_{\text{SR}}
\end{array} &
\begin{array}{c}\text{$(\tilde{B}\tilde{W}\tilde{H})_2$} \\ S\\
\ln{\cal L}_{\text{SR}}
\end{array} &
\begin{array}{c}\text{$(\tilde{H}\tilde{W})_1$} \\ S\\
\ln{\cal L}_{\text{SR}}
\end{array}
&
\begin{array}{c}\text{(Fig.\ \ref{fig:ReproduceATLASplot})$_1$} \\ S\\
\ln{\cal L}_{\text{SR}}
\end{array} &
\begin{array}{c}\text{(Fig.\ \ref{fig:ReproduceATLASplot})$_2$} \\ S\\
\ln{\cal L}_{\text{SR}}
\end{array} \\\hline
\left.
\begin{array}{c}
\text{WZ-0Ja} \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
21 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
21.70\pm 2.90 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
12.80 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
0.02 \\
0. \\
\end{array}
\right.
& \left.
\begin{array}{c}
0.41 \\
-0. \\
\end{array}
\right. & \left.
\begin{array}{c}
2.11 \\
-0.11 \\
\end{array}
\right. & \left.
\begin{array}{c}
1.31 \\
-0.05 \\
\end{array}
\right.
&\left.\begin{array}{c}
0\\0\\\end{array}\right.
&\left.\begin{array}{c}8.98\\-1.33\\\end{array}\right.
\\
\hline
\left.
\begin{array}{c}
\text{WZ-0Jb} \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
1 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
2.70\pm 0.50 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
3.70 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
0.03 \\
-0.02 \\
\end{array}
\right.
& \left.
\begin{array}{c}
0.37 \\
-0.23 \\
\end{array}
\right. & \left.
\begin{array}{c}
\mathbf{ 1.47} \\
\mathbf{-1.0} \\
\end{array}
\right. & \left.
\begin{array}{c}
\mathbf{ 1.24} \\
\mathbf{ -0.83} \\
\end{array}
\right.
&\left.\begin{array}{c}
0\\0\\\end{array}\right.
&\left.\begin{array}{c}12.0\\-10.2\\\end{array}\right.
\\
\hline
\left.
\begin{array}{c}
\text{WZ-0Jc} \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
2 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
1.60\pm 0.30 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
4.80 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
0.12 \\
0.03 \\
\end{array}
\right.
& \left.
\begin{array}{c}
0.26 \\
0.04 \\
\end{array}
\right. & \left.
\begin{array}{c}
0.78 \\
0.02 \\
\end{array}
\right. & \left.
\begin{array}{c}
0.53 \\
0.05 \\
\end{array}
\right.
&\left.\begin{array}{c}0.02\\ 0\\\end{array}\right.
&\left.\begin{array}{c}10.9\\-6.69\\\end{array}\right.
\\
\hline
\left.
\begin{array}{c}
\text{WZ-1Ja} \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
1 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
2.20\pm 0.50 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
3.20 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
0. \\
0. \\
\end{array}
\right.
& \left.
\begin{array}{c}
0.15 \\
-0.08 \\
\end{array}
\right. & \left.
\begin{array}{c}
0.33 \\
-0.18 \\
\end{array}
\right. & \left.
\begin{array}{c}
0.06 \\
-0.03 \\
\end{array}
\right.
&\left.\begin{array}{c} 0\\0\\\end{array}\right. &\left.\begin{array}{c}0.19\\-0.1\\\end{array}\right.\\
\hline
\left.
\begin{array}{c}
\text{WZ-1Jb} \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
3 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
1.80\pm 0.30 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
5.60 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
0.11 \\
0.07 \\
\end{array}
\right.
& \left.
\begin{array}{c}
0.46 \\
0.22 \\
\end{array}
\right. & \left.
\begin{array}{c}
1.17 \\
0.32 \\
\end{array}
\right. & \left.
\begin{array}{c}
0.54 \\
0.24 \\
\end{array}
\right.
&\left.\begin{array}{c}0\\0\\\end{array}\right.
&\left.\begin{array}{c}3.55\\-0.28\\\end{array}\right.\\
\hline
\left.
\begin{array}{c}
\text{WZ-1Jc} \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
4 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
1.30\pm 0.30 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
7.20 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
0.13 \\
0.21 \\
\end{array}
\right.
& \left.
\begin{array}{c}
0.33 \\
0.49 \\
\end{array}
\right. & \left.
\begin{array}{c}
\mathbf{0.87} \\
\mathbf{ 1.05} \\
\end{array}
\right. & \left.
\begin{array}{c}
0.24 \\
0.37 \\
\end{array}
\right.
&\left.\begin{array}{c}0.03\\0.05\\\end{array}\right.
&\left.\begin{array}{c}5.59\\0.94\\\end{array}\right.
\\
\hline
\left.
\begin{array}{c}
\text{slep-a} \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
4 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
2.20\pm 0.80 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
6.80 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
0.12 \\
0.07 \\
\end{array}
\right.
& \left.
\begin{array}{c}
\mathbf{ 1.95} \\
\mathbf{ 0.50} \\
\end{array}
\right. & \left.
\begin{array}{c}
0.60 \\
0.29 \\
\end{array}
\right. & \left.
\begin{array}{c}
0.44 \\
0.22 \\
\end{array}
\right.
&\left.\begin{array}{c} 0\\0\\\end{array}\right.
&\left.\begin{array}{c}13.00\\-5.13\\\end{array}\right.
\\
\hline
\left.
\begin{array}{c}
\text{slep-b} \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
3 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
2.80\pm 0.40 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
5.20 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
0.11 \\
0. \\
\end{array}
\right.
& \left.
\begin{array}{c}
\mathbf{ 2.30} \\
\mathbf{ -0.48} \\
\end{array}
\right. & \left.
\begin{array}{c}
0.94 \\
-0.06 \\
\end{array}
\right. & \left.
\begin{array}{c}
0.55 \\
-0. \\
\end{array}
\right.
&\left.\begin{array}{c}0.10\\ 0\\\end{array}\right.
&\left.\begin{array}{c}\mathbf{76.6}\\\mathbf{-66.5}\\\end{array}\right.\\
\hline
\left.
\begin{array}{c}
\text{slep-c} \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
9 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
5.40\pm 0.90 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
10.50 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
0.52 \\
0.26 \\
\end{array}
\right.
& \left.
\begin{array}{c}
0.94 \\
0.43 \\
\end{array}
\right. & \left.
\begin{array}{c}
1.30 \\
0.54 \\
\end{array}
\right. & \left.
\begin{array}{c}
0.68 \\
0.33 \\
\end{array}
\right.
&\left.\begin{array}{c}0.14\\0.07\\\end{array}\right.
&\left.\begin{array}{c}\mathbf{81.0}\\\mathbf{-55.8}\\\end{array}\right. \\
\hline
\left.
\begin{array}{c}
\text{slep-d} \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
0 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
1.40\pm 0.40 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
3.00 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
0.30 \\
-0.29 \\
\end{array}
\right.
& \left.
\begin{array}{c}
{ 0.39} \\
{ -0.38} \\
\end{array}
\right. & \left.
\begin{array}{c}
{ 0.50} \\
{ -0.48} \\
\end{array}
\right. & \left.
\begin{array}{c}
0.24 \\
-0.23 \\
\end{array}
\right.
&\left.\begin{array}{c}0.35\\-0.34\\\end{array}\right.
&\left.\begin{array}{c}42.2\\-42.1\\\end{array}\right.
\\
\hline
\left.
\begin{array}{c}
\text{slep-e} \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
0 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
1.10\pm 0.20 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
3.30 \\
\text{} \\
\end{array}
\right. & \left.
\begin{array}{c}
\mathbf{0.45} \\
\mathbf{ -0.45} \\
\end{array}
\right.
& \left.
\begin{array}{c}
0.27 \\
-0.27 \\
\end{array}
\right. & \left.
\begin{array}{c}
0.34 \\
-0.33 \\
\end{array}
\right. & \left.
\begin{array}{c}
0.24 \\
-0.23 \\
\end{array}
\right.
&\left.\begin{array}{c}\mathbf{3.41}\\\mathbf{-3.40}\\\end{array}\right.
&\left.\begin{array}{c}12.5\\-12.5\\\end{array}\right.\\
\hline
\end{array}
$}
}
\caption{\label{tab:sampleLHCpointsunconstrainedscenariosresults} Detailed
results for the sample parameter points defined in
Tab.\ \ref{tab:sampleLHCpointsunconstrainedscenarios} versus the 3-lepton channel of
the ATLAS analysis \cite{Aaboud:2018jiw}.
The first four columns are taken from Ref.\
\cite{Aaboud:2018jiw}, Table 15, and show the names of the signal
regions, the observed and expected background yields
$N_{\text{obs,exp}}$ as well as the model-independent signal upper
limits $S^{95}_{\text{obs}}$. The other columns show for each
parameter point the predicted signal yield $S$ and the resulting
log-likelihood difference as defined in Ref.\ \cite{Balazs:2017moi} (the signal
uncertainties estimated by \texttt{ColliderBit}\@\xspace are at the level of $10\%$
of the signal or less). Numbers highlighted in boldface
correspond to the entries with
the highest significance and to the entries selected by
\texttt{ColliderBit}\@\xspace for the overall $\ln{\cal
L}_{\text{eff}}^{\text{ATLAS\_13TeV\_MultiLEP\_3Lep}}$ of
Tab.\ \ref{tab:sampleLHCpointsunconstrainedscenarios} (selected
``on the basis of the signal region {\em expected} to give the
strongest limit'' \cite{Balazs:2017moi}).}
\end{table}
\section{Single Field Extensions} \label{sec:SingleField}
In this section we discuss the impact of $\amu$ on simple single field
extensions of the SM. Such extensions can be interesting in their own
right or representative for more elaborate models with many new fields
and particles and illustrate the impact of the $\amu$ measurement. We
begin in Sec.\ \ref{sec:one_field_overview} with a general overview of
the status of one-field extensions, covering renormalizable models
with new spin $0$, spin $1/2$ or spin $1$ fields. In Secs.~\ref{Sec:THDM}-\ref{Sec:Leptoquarks} we
then show the impact of the latest data on the two most interesting
cases --- the two-Higgs doublet model and leptoquark models.
\subsection{Overview}\label{sec:one_field_overview}
\definecolor{viable}{rgb}{0., 1., 0.5}
\definecolor{ruledout}{rgb}{0.97, 0.51, 0.47}
\definecolor{UVCruledout}{rgb}{0.6, 0.4, 0.8}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|} \hline
Model & Spin & $SU(3)_C \times SU(2)_L \times U(1)_Y$ & \update{Result for $\Delta a_\mu^\text{BNL}$, $\damu^{\text{2021}}$} \\ \hline
1 & 0 & $({\bf 1},{\bf 1},1)$ & \cellcolor{ruledout} Excluded: $\Delta \amu < 0$ \\
2 & 0 & $({\bf 1},{\bf 1},2)$ & \cellcolor{ruledout} Excluded: $\Delta \amu < 0$\\
3 & 0 & $({\bf 1},{\bf 2},-1/2)$ & \cellcolor{viable} Updated in Sec.\ \ref{Sec:THDM} \\
4 & 0 & $({\bf 1},{\bf 3},-1)$ & \cellcolor{ruledout} Excluded: $\Delta \amu < 0$ \\
5 & 0 & $({\bf \overline{3}},{\bf 1}, 1/3)$ & \cellcolor{viable} Updated Sec.\ \ref{Sec:Leptoquarks}. \\
6 & 0 & $({\bf \overline{3}},{\bf 1}, 4/3)$ & \cellcolor{ruledout} Excluded: LHC searches \\
7 & 0 & $({\bf \overline{3}},{\bf 3}, 1/3)$ & \cellcolor{ruledout} Excluded: LHC searches \\
8 & 0 & $({\bf 3},{\bf 2}, 7/6)$ & \cellcolor{viable} Updated Sec.\ \ref{Sec:Leptoquarks}. \\
9 & 0 & $({\bf 3},{\bf 2}, 1/6)$ & \cellcolor{ruledout} Excluded: LHC searches \\
10 & $1/2$ & $({\bf 1},{\bf 1},0)$ & \cellcolor{ruledout} Excluded: $\Delta \amu < 0$ \\
11 & $1/2$ & $({\bf 1},{\bf 1},-1)$ & \cellcolor{ruledout} Excluded: $\Delta \amu$ too small \\
12 & $1/2$ & $({\bf 1},{\bf 2},-1/2)$ & \cellcolor{ruledout} Excluded: LEP lepton mixing \\
13 & $1/2$ & $({\bf 1},{\bf 2},-3/2)$ & \cellcolor{ruledout} Excluded: $\Delta \amu < 0$ \\
14 & $1/2$ & $({\bf 1},{\bf 3},0)$ & \cellcolor{ruledout} Excluded: $\Delta \amu < 0$ \\
15 & $1/2$ & $({\bf 1},{\bf 3},-1)$ & \cellcolor{ruledout} Excluded: $\Delta \amu < 0$ \\
16 & $1$ & $({\bf 1},{\bf 1},0)$ & \cellcolor{viable} Special cases viable \\
17 & $1$ & $({\bf 1},{\bf 2},-3/2)$ & \cellcolor{UVCruledout} UV completion problems \\
18 & $1$ & $({\bf 1},{\bf 3}, 0)$ & \cellcolor{ruledout} Excluded: LHC searches \\
19 & $1$ & $({\bf \overline{3}}, {\bf 1}, -2/3)$ & \cellcolor{UVCruledout} UV completion problems \\
20 & $1$ & $({\bf \overline{3}}, {\bf 1}, -5/3)$ & \cellcolor{ruledout} Excluded: LHC searches \\
21 & $1$ & $({\bf \overline{3}}, {\bf 2}, -5/6)$ & \cellcolor{UVCruledout} UV completion problems \\
22 & $1$ & $({\bf \overline{3}}, {\bf 2}, 1/6)$ & \cellcolor{ruledout} Excluded: $\Delta \amu < 0$ \\
23 & $1$ & $({\bf \overline{3}}, {\bf 3}, -2/3)$ & \cellcolor{ruledout} Excluded: proton decay\\
\hline
\end{tabular}
\caption{Summary of known results for gauge invariant single field extensions with one-loop contributions to the anomalous magnetic moment of the muon. These results are rather exhaustive due to systematic investigations and classifications in Ref.\ \cite{Freitas:2014pua,Queiroz:2014zfa,Biggio:2014ela,Biggio:2016wyy}. Note however that while we present the results based on representations of SM gauge and Lorentz symmetries, the references make assumptions that can be important to the conclusions and are different in each paper. Thus the conclusions summarised in this table should be interpreted with care. For more information on models 1-2, 3-4, 5-9, 10-12, 13, 14-18 and 19-23 see references \cite{CoarasaPerez:1995wa, Biggio:2014ela, Gunion:1989in, Chiu:2014oma}, \cite{Freitas:2014pua, Biggio:2014ela}, \cite{Chakraverty:2001yg,Biggio:2014ela}, \cite{Biggio:2008in, Freitas:2014pua, Biggio:2014ela}, \cite{Biggio:2014ela}, \cite{Biggio:2008in, Freitas:2014pua, Biggio:2014ela, Biggio:2016wyy, Kelso:2014qka} and \cite{Biggio:2016wyy}, respectively. We use color highlighting to give a visual indication of the status of the model, namely green for viable explanations, red for excluded and purple for vector extensions excluded on the basis of their UV completions. \label{tab:one-field-summary} }
\end{center}
\end{table}
Before presenting our updated results for those cases in
Secs.\ \ref{Sec:THDM}-\ref{Sec:Leptoquarks}, we first classify the
single field extensions according to their spin and their SM
representations and charges, and discuss the known results to provide
a very important overview of what is possible and put our new results
in the appropriate context. Single field models have been classified
or reviewed in a systematic manner in
Refs.\ \cite{Freitas:2014pua,Queiroz:2014zfa,Chiu:2014oma,Biggio:2014ela,Biggio:2016wyy},
with the results summarized in Table \ref{tab:one-field-summary}.
\update{The confirmation of a large positive deviation} from the SM
prediction in the anomalous magnetic moment of the muon rules out most
one-field extensions of the SM. The reasons for this are
simple. First to explain the \update{anomaly} these models must provide
a \update{positive contribution} to $\amu$, and this constraint alone rules
out a large number of the possible extensions. Secondly even if the
sign of the \update{contribution is positive, the models must have a chirality
flip in order for the contribution to be large enough with
perturbative couplings}. Without a chirality flipping enhancement,
contributions that explain $\amu$ \update{require the masses of the new
particles to be so light that they would already have been observed
in collider experiments}.
Ref.\ \cite{Freitas:2014pua} considers scalars, fermions and vectors.
For fermions and scalars they considered gauge invariant extensions
with $SU(3)$ singlets, which may be $SU(2)$ singlets, doublets,
triplets ($Y=-1$) and adjoint triplets ($Y=0$) for fermions, and
doublets and triplets for scalars. They do not consider scalars
obtaining a VEV. They treated vector states as simplified models of
neutral and charged vector states without specifying any gauge
extension. They assume minimal flavour violating interactions with
leptons (see 2.2 of Ref.\ \cite{Freitas:2014pua} for details) for LEP
contact interaction limits and LHC searches, and perform the
calculation of $\Delta a_\mu$ at the one-loop level. They obtained a negative
contribution to $\amu$ from the scalar triplet, the neutral fermion
singlet, and fermion triplets with hypercharge $0$ or $-1$, and found
that while a charged fermion singlet can give a positive contribution
it is always too small to explain $\Delta a_\mu^{\textrm{BNL}}$. They found
scalar and fermion doublet scenarios that could accommodate
$\Delta a_\mu^{\textrm{BNL}}$ at the $1\sigma$ level were ruled out by LEP
searches for neutral scalars and LEP limits on mixing with SM leptons
respectively. For a single neutral vector boson, they find that the
region where $\Delta a_\mu^{\textrm{BNL}}$ can be explained within $1\sigma$
is entirely ruled out by LEP constraints from 4-fermion contact
interactions and resonance searches. They also consider a single
charged vector boson coupling to a right handed charged lepton and a
right handed neutrino,\footnote{Technically this is a two-field
extension of the SM though they do not classify it as such.} and
find that in this case the region where $\Delta a_\mu^{\textrm{BNL}}$ can be
explained within $1 \sigma$ is ruled out by the combination of LEP
limits on contact interactions and LHC direct searches. In summary
they find that all gauge invariant one-field extensions they
considered failed to explain the anomaly. This paper's findings are
reflected in Table \ref{tab:one-field-summary}, except for the cases
of the scalar doublet (see Sec.\ref{Sec:THDM}) and the neutral vector
(see the discussion in Sec.\ \ref{sec:darkphoton} at the end of this
overview), where there is a lot of dedicated literature and it is
known that breaking the assumptions of Ref.\ \cite{Freitas:2014pua} can
change the result.
Refs.\ \cite{Chiu:2014oma, Biggio:2014ela, Queiroz:2014zfa} also take
a systematic approach. Ref.\ \cite{Chiu:2014oma} considers scalar
bosons\footnote{They also consider vector states but assume additional
fermions in that case.}. Compared to results in the other papers
this adds the singly charged $SU(2)$ singlets to Table
\ref{tab:one-field-summary}. This result was also used in the
classification in Ref.\ \cite{Biggio:2014ela} (drawing also from
Ref.\ \cite{CoarasaPerez:1995wa}), along with doubly charged $SU(2)$
scalar singlets using results from Ref.\ \cite{Gunion:1989in} and
scalar leptoquarks (see Sec.\ \ref{Sec:Leptoquarks} for our update)
originally proposed in Ref.\ \cite{Chakraverty:2001yg}. They also add
a new result for a fermion $SU(2)$ singlet with hypercharge $-3/2$,
i.e.\ the fermion state $({\bf 1},{\bf 2},-3/2)$, showing that the
contribution is always negative above the LEP limit. Otherwise their
classification overlaps with Ref.\ \cite{Freitas:2014pua} and the
conclusions are effectively consistent\footnote{They do however
comment that they obtained minor differences to those from
Ref.\ \cite{Freitas:2014pua} in the $\amu$ calculation for the
$({\bf 1},{\bf 1},-1)$ and $({\bf 1},{\bf 2},-1/2)$ fermions, which
alter the reason why they are excluded. We checked these results
and agree with the results of Ref.\ \cite{Freitas:2014pua}.}.
Ref.\ \cite{Queiroz:2014zfa} does not require $SU(2)_L$ invariance and
instead considers simplified models of Lorentz scalar, fermion and
vector states with results presented in terms of axial and vector
couplings, $g_a$ and $g_v$ and classify states according to
electromagnetic charges and $SU(3)$ representations. The reference
presents plots of $\Delta a_\mu$ predictions against the mass of the new state for
specific cases of the couplings. We checked that the results are
consistent with what we present in Table \ref{tab:one-field-summary},
but Ref.\ \cite{Queiroz:2014zfa} does not contain additional general
conclusions on the viability of each case.
While Refs.\ \cite{Freitas:2014pua} and \cite{Queiroz:2014zfa} used a
simplified models treatment of vector states,
Ref.\ \cite{Biggio:2016wyy} systematically classified vector
extensions according to SM gauge representations and considered the
implications of embedding these into a UV complete gauge extension of
the SM. They found that only $({\bf 1},{\bf 1},0)$ may provide a
viable UV complete explanation of $\Delta a_\mu^{\textrm{BNL}}$, depending on
specific model dependent details (see Sec.\ \ref{sec:darkphoton} below
for more details on such explanations). Although the $({\bf 1},{\bf
2},-3/2)$ vector state gives large contributions to $\amu$, they
rejected this since the UV completion into 331 models cannot provide a
$\Delta a_\mu^{\textrm{BNL}}$ explanation consistent with experimental limits
\cite{Kelso:2014qka}. A $({\bf 1},{\bf 3},0)$ vector state has no
chirality flip, so explanations are ruled out by LHC limits
\cite{Khachatryan:2014qwa}.
$({\bf\overline{3}}, {\bf 1}, -2/3)$ and $({\bf \overline{3}}, {\bf
2}, -5/6)$ have chirality flipping enhancements, but they reject
$({\bf \overline{3}}, {\bf 1}, -2/3)$ based on an $SU(4)_C\times
SU(2)_L \times U(1)_R$ UV completion and limits on the masses from
rare decays \cite{Kuznetsov:2012ai}, while the $({\bf \overline{3}},
{\bf 2}, -5/6)$ state is rejected based on an $SU(5)$ UV completion
and proton decay limits. Models without chirality flip enhancements
($({\bf \overline{3}}, {\bf 1}, -5/3)$, $({\bf \overline{3}}, {\bf 2},
1/6)$ and $({\bf \overline{3}}, {\bf 3}, -2/3)$) \update{can all be
ruled out by collider constraints or because they give the wrong
sign}. A summary of the constraints excluding each of the vector
leptoquarks are included in Table \ref{tab:one-field-summary}.
\subsubsection{Dark photon and dark $Z$ explanations}
\label{sec:darkphoton}
Before concluding this overview we now briefly discuss the particularly interesting case of an additional gauge field $Z_d$ with $({\bf 1}, {\bf 1}, {\bf 0})$ quantum numbers that arises from some additional $U(1)_d$ gauge symmetry.
The dark photon scenario assumes that the known quarks and leptons
have no $U(1)_d$ charge. The potential impact of dark photons on
$\amu$ has been extensively studied, after the first proposal in
Ref.\ \cite{Pospelov:2008zw}.
Models with a general Higgs sector contain both kinetic mixing of the SM $B$-field and $Z_d$ and the mass mixing of the SM $Z$-field and $Z_d$.
As the mass mixing parameter is typically far smaller than the kinetic mixing one, the leading contribution to $\amu$ is proportional to the kinetic mixing parameter $\epsilon$.
The kinetic mixing term induces an interaction between the SM fermions and the dark photon,
and the region relevant for significant $\Delta a_\mu$ has first been found to be $10 ^{-6} < \epsilon ^2 < 10 ^{-4}$, with dark photon masses in the range between $1 \text{ MeV} \cdots 500 \text{ MeV}$~\cite{Pospelov:2008zw}. However the electron anomalous magnetic moment result~\cite{Hanneke:2008tm} reduces the mass range to $20 \text{ MeV} \cdots 500 \text{ MeV}$~\cite{Davoudiasl:2012ig}, and the remaining range is excluded by the following experimental results obtained from various dark photon production channels from A1 in Mainz~\cite{Merkel:2014avp} (radiative dark photon production in fixed-target electron scattering with decays into $e^+e^-$ pairs), BaBar~\cite{Lees:2014xha} (pair production in $e^+e^-$ collision with subsequent decay into $e ^+ e^-$ or $\mu ^+ \mu ^-$ pairs), NA48/2 at CERN~\cite{Batley:2015lha} ($\pi ^0$ decay modes via dark photon and subsequent decay into $e ^+ e^-$-pair) and from dark matter production via dark photon from NA46 at the CERN~\cite{Gninenko:2019qiv}.
As a result, pure dark photon models cannot accommodate significant
contributions to $\amu$. Extensions, e.g.\ so-called ``dark $Z$'' models, open
up new possibilities but are also strongly constrained \cite{Davoudiasl:2012qa,Davoudiasl:2012ig,Davoudiasl:2014kua,Chen:2015vqy,Mohlabeng:2019vrz}.
Similarly, neutral $Z^\prime$ vector bosons with direct gauge couplings to
leptons are also strongly constrained (as indicated in
Tab.~\ref{tab:one-field-summary}); for examples of remaining viable
possibilities with significant contributions to $\amu$ we mention the
model with gauged $L_\mu - L_\tau$ quantum number and generalizations
thereof, see Refs.\ \cite{Heeck:2011wj,Altmannshofer:2014pba,Altmannshofer:2016brv,Gninenko:2018tlp,Escudero:2019gzq,Amaral:2020tga,Huang:2021nkl}.
\subsection{Two-Higgs Doublet Model} \label{Sec:THDM}
As can be seen in Table \ref{tab:one-field-summary}, the two-Higgs
doublet model is one of the very few viable one-field explanations of
$\Delta a_\mu^{\textrm{BNL}}$. It is in fact the only possibility without
introducing new vector bosons or leptoquarks.
The two-Higgs doublet model (2HDM) contains a charged Higgs $H^\pm$, a
CP-odd Higgs $A$, and two CP-even Higgs bosons $H,h$, where $h$ is
assumed to be SM-like (we assume here a CP-conserving Higgs potential, which
is sufficient to maximize contributions to $\amu$). To be specific we
list here the Yukawa Lagrangian for the neutral Higgs bosons in a form
appropriate for the 2HDM of type I, II, X, Y and the
flavour-aligned 2HDM, in the form of Ref.\ \cite{Cherchiglia:2016eui},
\begin{align}\label{yukawaalign}
{\cal{L}} _Y =& - \sum _{{\cal S}=h,H,A}\sum_{f}\,\frac{Y_{f}^{\cal S} m_f}{v}\,
{\cal S}{\bar{f}} P _{\text{R}} f + h.c.,
\\%\end{align}
Y^{h}_{f} =& \sin(\beta-\alpha)+\cos(\beta-\alpha)\zeta_{f}, &
Y^{A}_{d,l} =& -\zeta_{d,l}\,,\\
Y^{H}_{f} =& \cos(\beta-\alpha)-\sin(\beta-\alpha)\zeta_{f}, &
Y^{A}_{u} =&+ \zeta_{u}\,.
\end{align}
where the Dirac fermions $f$ run over all quarks and leptons,
$(\beta-\alpha)$ is a mixing angle and $\sin(\beta-\alpha)=1$
corresponds to $h$ being SM-like. The
dimensionless Yukawa prefactors $\zeta_{f}$ depend on the 2HDM version
and will be specialized later.
The 2HDM has a rich
phenomenology with a plethora of
new contributions to the Higgs potential and the Yukawa sector.
It differs from the
previously mentioned models in that two-loop contributions to
$\amu$ are known to be crucial. Typically the
dominant contributions arise via so-called Barr-Zee two-loop
diagrams. In these diagrams an inner fermion loop generates an effective
Higgs--$\gamma$--$\gamma$ interaction which then couples to the muon
via a second loop. If the new Higgs has a large Yukawa coupling to the
muon and if the couplings in the inner loop are large and the new
Higgs is light, the contributions to $\amu$ can be sizeable.
The Higgs mediated flavour changing neutral currents in the 2HDM can be avoided
by imposing either $\mathbb{Z}_2$ symmetry or flavour-alignment.
\begin{figure}
\begin{subfigure}[]{0.5\textwidth}
\begin{center}
\includegraphics[scale=1.]{THDMTypeXplot.pdf}\caption{}\label{fig:THDMupdatea}
\end{center}
\end{subfigure}
\begin{subfigure}[]{0.5\textwidth}
\begin{center}
\includegraphics[scale=1.]{THDMAligned.pdf}\caption{}\label{fig:THDMupdateb}
\end{center}
\end{subfigure}
\caption{\label{fig:THDMupdate}
The maximum results for $\Delta a_\mu$ in the two versions of the
two-Higgs Doublet Model with minimal flavour violation, compared
with the $1\sigma$ regions around $\Delta a_\mu^{\textrm{BNL}}$ (yellow) and
new world average $\damu^{\text{2021}}$ (green); light green shows the overlap
between the two regions. The maximum results are shown as functions of $M_A$, for
three different values of $M_{H,H^\pm}$, as
indicated: (a) lepton-specific/type X model (b) flavour-aligned two-Higgs Doublet Model. The results are based on
Ref.\ \cite{Cherchiglia:2017uwv}. The left plot is technically
obtained in the framework of the flavour-aligned model but taking
only $\tau$-loop contributions, which coincides with the type X
model.
}
\end{figure}
Fig.\ \ref{fig:THDMupdate} presents up-to-date results of the possible
contributions $\Delta a_\mu$ in both of these versions of the 2HDM. The
figure is based on results of Ref.\ \cite{Cherchiglia:2017uwv} and
compares them to the new world average \update{$\damu^{\text{2021}}$ obtained from including}
the FNAL value. It
arises from scans of the model parameter space and shows the maximum
possible $\Delta a_\mu$ as a function of the most important parameters, the
two new Higgs masses $M_A$
and $M_H$, where the choice $M_H=M_{H^\pm}$ maximises $\Delta a_\mu$. The
reason why there are absolute upper limits on $\Delta a_\mu$ is a combination
of theoretical and experimental constraints, as discussed in the
following.
Fig.\ \ref{fig:THDMupdatea} shows the results for the 2HDM type
X, the so-called lepton-specific version of the 2HDM with
$\mathbb{Z}_2$ symmetry.
A general analysis of all types of the 2HDM with discrete
$\mathbb{Z}_2$ symmetries and minimal flavour violation has been done in
Ref.\ \cite{Broggio:2014mna}, where only this lepton-specific
type X model survived as a possible source of significant
$\Delta a_\mu$. In this model, the parameters of Eq.\ (\ref{yukawaalign}) are
$\zeta_l=-\tan\beta$ for all charged leptons, while
$\zeta_{u,d}=\cot\beta$ for all quarks. The $\tan\beta$-suppression of
quark Yukawa couplings helps evading experimental constraints from
LEP, LHC
and flavour physics. In the type X 2HDM the main contributions arise from Barr-Zee
diagrams with an inner $\tau$-loop, which are
$(\tan\beta)^2$-enhanced. Hence important constraints arise
from e.g.\ precision data on $Z\to\tau\tau$ and
$\tau$-decay \cite{Wang:2014sda,Abe:2015oca,Chun:2016hzs,Wang:2018hnw}
as well as from LEP data on the mass range $M_A\lesssim20$ GeV
\cite{Cherchiglia:2017uwv}. As
Fig.\ \ref{fig:THDMupdatea} shows,
\update{only a tiny parameter
space in the 2HDM type X remains a viable explanation of the
observed $\damu^{\text{2021}}$.
For a $1\sigma$ explanation, $M_A$ must be in the small interval
$20\ldots40$ GeV; the corresponding maximum values of the
$\tan\beta$ parameter, which governs the lepton Yukawa couplings in
this model, are in the range $50\ldots100$.}
The masses of the new heavier Higgs bosons $M_{H,H^\pm}$ vary between
$150$ and $250$ GeV in the figure. Smaller values of these masses lead
to stronger constraints (since loop contributions to $\tau$-physics
are less suppressed), while larger values lead to a larger hierarchy
$M_A\ll M_{H,H^\pm}$ which leads to stronger constraints from
electroweak precision physics and theoretical constraints such as
perturbativity \cite{Chun:2016hzs,Cherchiglia:2017uwv}.
We mention also that the 2HDM type X parameter space with particularly large
contributions to $\amu$ can lead to peculiar $\tau$-rich final states
at LHC \cite{Chun:2015hsa,Iguro:2019sly} but can be tested
particularly well at a future lepton collider
\cite{Chun:2019sjo} and is also compatible with CP violation and testable
contributions to the electron electric dipole moment \cite{Chun:2019oix}.
Fig.\ \ref{fig:THDMupdateb} shows results for the so-called
flavour-aligned two-Higgs doublet model, which is a more general but
still minimal flavour violating scenario. Here the parameters
$\zeta_l$, $\zeta_u$, $\zeta_d$ are independent, however assumed to be
generation-universal. The contributions to $\amu$
were first discussed in Ref.\ \cite{Ilisie:2015tra} and then
scrutinized in
Refs.\ \cite{Han:2015yys,Cherchiglia:2016eui,Cherchiglia:2017uwv}. Here
not only the $\tau$-lepton loop contributes in essentially the same
way as before, but also Barr-Zee diagrams with the top-quark in the
inner loop may contribute. To a smaller extent, also purely bosonic
two-loop diagrams can increase $\amu$. The plot takes into account all
contributions, based on
Refs.\ \cite{Cherchiglia:2016eui,Cherchiglia:2017uwv}. In particular
the top-quark loop leads to a larger possible value of $\amu$. Its
contributions are bounded by constraints from LHC data and
$B$-physics. The LHC
constraints are weaker for $M_A>62$ GeV, where the decay $h\to AA$ is
kinematically impossible \cite{Cherchiglia:2017uwv}. This is reflected
in the behaviour of the maximum $\Delta a_\mu$ as a function of $M_A$ in the
plot. In case of the flavour-aligned 2HDM
\update{the world average deviation
$\damu^{\text{2021}}$ can be accommodated for $M_A$ up to $100$ GeV,
if the
heavy and charged Higgs masses are in the region $M_H = M_{H ^\pm} =200 \ldots 250$ GeV.}
In the parameter space which
maximises $\Delta a_\mu$ in the flavour-aligned 2HDM the light $A$-boson has
simultaneously significant Yukawa couplings to the top quark and to
$\tau$-leptons, leading to a significant rate for the process $gg\to
A\to \tau\tau$, which might be tested at future LHC runs.
Hence among the 2HDM versions without tree-level
FCNC, the well-known type I and type II versions are excluded as
explanations of the deviation $\damu^{\text{2021}}$. In contrast, the
lepton-specific type X model and the more general
flavour-aligned 2HDM can give significant contributions to
$\Delta a_\mu$. In both cases, two-loop Barr-Zee diagrams with $A$-boson
exchange and $\tau$-loop are important; in the flavour-aligned model
also top-loops are important. The mass $M_A$ is severely constrained
and the new Yukawa couplings must be at their upper experimental
limits.
Further, we mention that more exotic variants of the 2HDM which
involve neither
$\mathbb{Z}_2$-symmetric nor general flavour-aligned Yukawa couplings
can open up additional possibilities.
E.g.\ large non-flavour aligned Yukawa couplings to $\tau$-leptons or
top quarks can allow large contributions to $\amu$ even for masses of
$M_A$ above $100$ GeV \cite{Iguro:2019sly,Li:2018aov}. In these cases, important
constraints arise from lepton flavour violating processes \cite{Iguro:2019sly} and
B-physics \cite{Li:2018aov}. Large, non-flavour aligned $\tau$-Yukawa
couplings also allow another window of significant contributions with
a very light CP-even Higgs with $M_H\lesssim1$ GeV \cite{Jana:2020pxx}. And a
muon-specific 2HDM can accommodate large $\Delta a_\mu$ with $\tan\beta$ of
order $1000$ \cite{Abe:2017jqo}.
\subsection{Scalar Leptoquarks} \label{Sec:Leptoquarks}
In this subsection we update the results for the other single
field models which could explain $\Delta a_\mu^{\textrm{BNL}}$,
i.e.\ the scalar leptoquarks. Scalar and vector leptoquarks
that interact with SM leptons and quarks can appear as the
only BSM particle in one-loop contributions to the anomalous
magnetic moment of the muon. Scalar leptoquarks have been
considered as a solution for the anomalous magnetic moment
anomaly in Refs.\ \cite{Chakraverty:2001yg, Queiroz:2014zfa,
Biggio:2014ela, Bauer:2015knc, Popov:2016fzr}, while vector
leptoquarks have also been considered in
Refs.\ \cite{Queiroz:2014zfa, Biggio:2016wyy}. Here we focus
on studying scalar leptoquarks in detail, since one would
expect vector leptoquarks to be associated with an extension
of the gauge symmetries, which complicates the construction of
these models, and taking the simplified model approach
they
may yield results which are rather misleading compared to what
can be achieved in a realistic model.
Requiring gauge invariant couplings to SM leptons and quarks restricts us to the five scalar leptoquarks \cite{Buchmuller:1986zs} shown in Table \ref{tab:one-field-summary} (Models 5--9). Only two of these models\footnote{We follow the notation in Ref.\ \cite{Buchmuller:1986zs}.}, $S_{1}$ $({\bf \overline{3}},{\bf 1},1/3)$ and $R_{2}$ $({\bf 3},{\bf 2},7/6)$, have both left- and right-handed couplings to the SM fermions and can therefore have a chirality flip enhancing their one-loop contributions \cite{Chakraverty:2001yg,Queiroz:2014zfa,Biggio:2014ela}.
Leptoquarks can in general have complicated flavour structure in their
couplings. Since our focus is on demonstrating the impact of the
anomalous magnetic moment experiment and demonstrating the various
ways to explain it, we prefer to simplify the flavour structure and
focus on the couplings that lead to an enhanced $\Delta a_\mu$ contribution.
We therefore restrict ourselves to muon-philic
leptoquarks that couple only to the second-generation of SM leptons,
evading constraints on flavour violating processes such as $\mu\rightarrow e\gamma$. Leptoquarks that induce flavour violation in the quark sector have
been widely considered in the literature as possible solutions to
flavour anomalies, and sometimes simultaneous explanations of $\amu$
and these anomalies (see e.g.\ Refs.\ \update{\cite{Bauer:2015knc, Popov:2016fzr}}).
However we also do not consider these here for
the same reasons we choose to avoid lepton flavour violating couplings
and the same reasoning applies to simultaneous explanations of the
more recent $a_e$ anomaly \cite{Bigaran:2020jil}.
We found that it is possible to explain the $\Delta a_\mu^{\textrm{BNL}}$
\update{and $\damu^{\text{2021}}$ results with moderately sized}
perturbative couplings using leptoquarks that are both muon-philic
{\it and} charm-philic, i.e.\ leptoquarks that only couple to second
generation up-type quarks as well as only second generation charged
leptons. Specifically we found $\Delta a_\mu^{\textrm{BNL}}$ could be explained while satisfying LHC limits from direct searches as long as $\sqrt{|\lambda_L \lambda_R|}\gtrsim 0.4$, where $\lambda_L$ and $\lambda_R$ are the leptoquark couplings to the muons and the quarks. However careful consideration of CKM mixing and
flavour changing neutral currents (FCNC) reveals stringent
constraints. While one may require that the new states couple only to
the charm and not the up-quark or top-quark, CKM effects will then
still generate couplings to the bottom and down-quark. This effect is
very important and the impact of these for ``charm-philic'' leptoquark
explanations of $\Delta a_\mu^\textrm{BNL}$ has been considered in
Ref.\ \cite{Kowalska:2018ulj}. There they find that constraints from
BR($K^+\rightarrow\pi^+\nu\overline{\nu}$) for the $S_1$ leptoquark,
or BR($K_L\rightarrow\mu^+\mu^-$) for the $R_2$ leptoquark, heavily
restrict one of the couplings that enter the $\amu$ calculation. They
find this excludes fitting $\Delta a_\mu^{\textrm{BNL}}$ within $1\sigma$, but
in the case of the first model an explanation within $2\sigma$
remained possible, while for the second model explanations well beyond
$2\sigma$ were excluded. They also consider the possibility that it
is the down-type couplings that are second generation only, and find
even more severe constraints in that case. Finally for a limited case,
they explore including a direct coupling to the top-quark and find
that quite large couplings to the top quark are needed to explain
$\Delta a_\mu^{\textrm{BNL}}$ within $1\sigma$. Due to the strong flavour
constraints from coupling the leptoquark to the second generation of
SM quarks, we instead present results for top-philic leptoquarks,
i.e.\ using scalar leptoquarks which couple to the second generation
SM leptons, and the third generation of SM quarks.
Below is written the Lagrangian for both scalar leptoquarks, where here all fermions are written as 2-component left-handed Weyl spinors, for example $Q_3 = (t_{L},b_L)^T$ and $\mu_R^\dagger$, which follows the notation of Ref.\ \cite{Martin1997}. For simplicity we also define $\mu, t, b := \mu_R^\dagger, t_R^\dagger, b_R^\dagger$ below.
\begin{equation} \label{eqn:ScalarLeptoquarkSinglet}
{\cal L}_{S_1} = -\begin{pmatrix}\lambda_{QL} Q_3\cdot L_2 S_{1} + \lambda_{t\mu} t \mu S_{1}^* + h.c.\end{pmatrix} - M_{S_1}^2 |S_{1}|^2 - g_{H S_1} |H|^2 |S_{1}|^2 - \frac{\lambda_{S_1}}{2} \begin{pmatrix}|S_{1}|^2\end{pmatrix}^2 ,
\end{equation}
\begin{equation} \label{eqn:ScalarLeptoquarkDoublet}
{\cal L}_{R_2} = -\begin{pmatrix}\lambda_{Q\mu} R_{2}^\dagger Q_3 \mu + \lambda_{tL} L_2\cdot R_{2} t + h.c.\end{pmatrix} - M_{R_2}^2 |R_{2}|^2 - g_{H R_2} |H|^2 |R_{2}|^2 - \frac{\lambda_{R_2}}{2} \begin{pmatrix}|R_{2}|^2\end{pmatrix}^2.
\end{equation}
where the dot product above denotes the $SU(2)_L$ product, so e.g.\ $Q_3 \cdot L_2 = t_L \mu_L - b_L \nu_{\mu L}$. For the $S_{1}$ leptoquark one could also include $SU(3)_C \times SU(2)_L \times U(1)_Y$ gauge invariant renormalizable operators, $S_{1} Q_3 L_2$ and $S_{1} tb$ but unless these diquark couplings are severely suppressed or forbidden, they will give rise to rapid proton decay when combined with the leptoquark operators we consider here \cite{Arnold:2013cva,Queiroz:2014pra}. $R_{2}$ does not admit such renormalizable operators \cite{Arnold:2013cva} though there remain dangerous dimension 5 operators that would need to be forbidden or suppressed \cite{Queiroz:2014pra}. Since we are focused on $\amu$ we again simplify things by assuming all parameters are real, but note that if we were to consider complex phases then electric dipole moments would also be of interest, see e.g.\ Ref.\ \cite{Dekens:2018bci}.
Constraints on the masses of scalar leptoquarks with second and third
generation couplings to the SM leptons and quarks respectively can be
directly applied from $13$ TeV CMS
\cite{Sirunyan:2018ruf,Sirunyan:2018kzh} results, dependent on how
strong they couple to those fermions. Given the above Lagrangians,
one can see that the scalar leptoquark singlet $S_1$ can decay to
either a top quark and muon or bottom quark and neutrino, while the
upper and lower components of the scalar leptoquark doublet decay as
$R^u_2$ to a top quark and muon and $R^d_2$ to either a top quark and
neutrino or a bottom quark and muon. Thus for the leptoquark $S_1$
given in Eqs.\ (\ref{eqn:ScalarLeptoquarkSinglet}), the branching
fraction $\beta_{S_1} = Br(S_1 \rightarrow t\mu)$, is given by:
\begin{align} \label{eqn:ScalarLeptoquarkSingletBR}
\beta_{S_1} &= \frac{\lambda_{QL}^2 + \lambda_{t\mu}^2}{2\lambda_{QL}^2 + \lambda_{t\mu}^2}.
\end{align}
For scalar leptoquark singlet $S_1$ the most stringent LHC limits when
coupling to third generation quarks and second generation leptons are
dependent on $\beta_{S_1}$ \cite{Sirunyan:2018ruf}. Thus we can
calculate $\beta_{S_1}$ using selected values of the couplings between
$S_1$ and the fermions, and interpolate between them to find the
limits on the mass given in Ref.\ \cite{Sirunyan:2018ruf}. Now for
$R_{2}$ in Eq.\ (\ref{eqn:ScalarLeptoquarkDoublet}), limits can be
placed on the upper component of the doublet, $R^u_2$, which decays
solely to $t\mu$. In this case the mass limits from
Ref.\ \cite{Sirunyan:2018ruf} are applied where the branching ratio
for $R^u_2$ to decay to $t\mu$ is taken to be $\beta_{R ^u _2}=1$.
Further constraints can be placed on leptoquarks from the effective
coupling of a $Z$ boson to leptons. The experimentally measured
effective couplings of the $Z$ boson to a pair of muons are given as
$g^{\mu\mu}_L = -0.2689\pm0.0011$, $g^{\mu\mu}_R = 0.2323\pm0.0013$
\cite{ALEPH:2005ab,Tanabashi2018} in the case of left- and
right-handed couplings. The contribution from a scalar leptoquark
with couplings to any flavour of the SM fermions to the effective
couplings between $Z$ and muon, $\delta g^{\mu\mu}_{L,R}$, is given by
Eqs.\ (22,23) in Ref.\ \cite{Arnan:2019olv} for the leptoquarks $S_1$
and $R_2$ respectively. Points with left-right effective couplings
more than $2\sigma$ away from the measured values are treated as
constrained.
Likewise, the effective coupling of the $Z$ boson to any two neutrinos
has been measured as the observed number of light neutrino species
$N_\nu = 2.9840\pm0.0082$ \cite{ALEPH:2005ab}. The BSM contributions
from a scalar leptoquark to this are given by \cite{Arnan:2019olv}:
\begin{equation} \label{eqn:ZnuEffectiveCouplingLQ}
N_\nu = \sum_{i,j=e,\mu,\tau} \begin{pmatrix} |\delta^{ij} + \frac{\delta g^{ij}_{\nu L}}{g^{\textrm{SM}}_{\nu L}}|^2 + |\frac{\delta g^{ij}_{\nu R}}{g^{\textrm{SM}}_{\nu L}}|^2 \end{pmatrix},
\end{equation} where $g^{\textrm{SM}}_{\nu L}$ are the SM couplings, and $\delta g^{ij}_{\nu L,R}$ are the BSM couplings between the $Z$ boson and the neutrinos given again in Eqs.\ (22,23) from Ref.\ \cite{Arnan:2019olv}.
Due to the large masses of the leptoquarks considered for this model,
it is reasonable to consider fine-tuning in the mass of the muon.
With large BSM masses and sizeable couplings to the SM, contributions
to the muon can be generated as detailed in
Sec.\ \ref{sec:BSMoverview}. The specific constraint considered in
this paper for when the contribution to the muon mass is considered
not ``fine-tuned'' is
\begin{align} \label{eqn:FineTuningLimits}
\frac{1}{2} &< \frac{m_\mu^{\overline{\text{MS}}}}{m_\mu} <2
\end{align} i.e.\ the relative difference between the $\overline{\text{MS}}$ and pole masses $m_\mu^{\overline{\text{MS}}}$ and $m_\mu $ should not exceed $100\%$. While not forbidden, one may consider explanations of large $\amu$ via ${\cal O}(>100\%)$ corrections to the pole mass $m_\mu$ as unattractive.
Since the chirality flip provides an enhancement to $\Delta a_\mu$
proportional to the squared mass of the heaviest quark that the
leptoquark couples to, in the case of a leptoquark coupling to the
SM's third quark generation the enhancement is $m_t/ m_\mu$. Note that
we actually found that the $m_c/m_\mu$ enhancement for charm-philic
leptoquarks was sufficient to allow explanations of
$\Delta a_\mu^{\textrm{BNL}}$ consistent with LHC limits (but not flavour
constraints) with perturbative couplings, therefore this much larger
enhancement should be more than sufficient, even with rather small
couplings or quite heavy masses. In both
cases of the leptoquark models, the leptoquark contributions to $\amu$
are given by two kinds of diagrams of the FFS and SSF type,
\begin{align}
\Delta a_\mu^{LQ}&=\Delta a_\mu^{\text{FFS}}+\Delta a_\mu^{\text{SSF}}\,.
\end{align}
For the scalar leptoquark singlet $S_{1}$, the contributions to $\amu$
specifically come from diagrams with top--leptoquark loops and are
given by:
\begin{equation} \label{eqn:ScalarLeptuquarkSingletContribFFS}
\Delta a_\mu^{\text{FFS}} = \frac{3 m_\mu^2 Q_{t}}{32\pi^2 M_{S_1}^2}
\left(
\frac{\lambda_{QL}^2 + \lambda_{t\mu}^2}{6} \,E\!\left(\frac{m_t^2}{M_{S_1}^2}\right)
+ \frac{4\lambda_{QL} \lambda_{t\mu}}{3} \frac{m_t}{m_\mu} \,F\!\left(\frac{m_t^2}{M_{S_1}^2}\right)
\right),
\end{equation}
\begin{equation} \label{eqn:ScalarLeptuquarkSingletContribSSF}
\Delta a_\mu^{\text{SSF}} = \frac{3 m_\mu^2 Q_{S_1}}{32\pi^2 M_{S_1} ^2}
\left(
- \frac{\lambda_{QL}^2 + \lambda_{t\mu}^2}{6} \,B\!\left(\frac{m_t^2}{M_{S_1}^2}\right)
- \frac{2\lambda_{QL} \lambda_{t\mu}}{3} \frac{m_t}{m_\mu} \,C\!\left(\frac{m_t^2}{M_{S_1}^2}\right)
\right),
\end{equation} where the charges $Q_t=2/3$, $Q_{S_1}=1/3$ and the one-loop functions $B(x)$, $C(x)$, $E(x)$, and $F(x)$ are defined in Appendix \ref{app:MuonGm2Contributions}. These contributions are described by the fermion-fermion-scalar (FFS) diagram \ref{fig:FFSDiagram} and the scalar-scalar-fermion (SSF) diagram \ref{fig:SSFDiagram} of Fig.\ \ref{fig:GeneralDiagrams} with the generic $F$ fermion lines replaced by a top quark and the scalar $S$ ones by $S_1$. Similarly, the contributions from the scalar leptoquark doublet $R_{2}$ involving top/bottom and leptoquark loops are given by:
\begin{align}
\label{eqn:ScalarLeptuquarkDoubletContribFFS}
\Delta a_\mu^{\text{FFS}} &= \frac{3 m_\mu^2}{32\pi^2 M_{R_2} ^2}
\left(
Q_{t} \frac{4\lambda_{Q\mu} \lambda_{tL}}{3} \frac{m_t}{m_\mu} \,F\!\left(\frac{m_t^2}{M_{R_2}^2}\right)
- Q_{t} \frac{\lambda_{Q\mu}^2+\lambda_{tL}^2}{6} \,E\!\left(\frac{m_t^2}{M_{R_2}^2}\right)
- Q_{b} \frac{\lambda_{Q\mu}^2}{6} \,E\!\left(\frac{m_b^2}{M_{R_2}^2}\right)
\right),\\
\label{eqn:ScalarLeptuquarkDoubletContribSSF}
\Delta a_\mu^{\text{SSF}} &= \frac{3 m_\mu ^2}{32\pi^2 M_{R_2} ^2}
\left(
Q_{R_2}^u \frac{2\lambda_{Q\mu}\lambda_{tL}}{3} \frac{m_t}{m_\mu} \,C\!\left(\frac{m_t^2}{M_{R_2}^2}\right)
- Q_{R_2}^u \frac{\lambda_{Q\mu}^2+\lambda_{tL}^2}{6} \,B\!\left(\frac{m_t^2}{M_{R_2}^2}\right)
- Q_{R_2}^d \frac{\lambda_{Q\mu}^2}{6} \,B\!\left(\frac{m_b^2}{M_{R_2}^2}\right)
\right),
\end{align}
where $Q_{R_2}^d = 2/3$ is the charge of the lower component of the
leptoquark doublet, and $Q_{R_2}^u = 5/3$ is the charge of the upper
component, and $Q_b=-1/3$. Note the colour factor $3$ in front of
each of the leptoquark contributions. Each of the above contributions
is produced by a pair of diagrams which are of the FFS diagram type
\ref{fig:FFSDiagram} or the SSF diagram type \ref{fig:SSFDiagram} of
Fig.\ \ref{fig:GeneralDiagrams} with the generic $F$ fermion lines
replaced by a top or bottom quark and the scalar $S$ ones by $R_2^d$
and $R_2^u$, respectively.
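As a cross-check of these expressions (a sketch only, not the \texttt{FlexibleSUSY}\@\xspace calculation used for the numerical results below), Eqs.\ (\ref{eqn:ScalarLeptuquarkSingletContribFFS},\ref{eqn:ScalarLeptuquarkSingletContribSSF}) can be transcribed directly into Python; the one-loop functions $B$, $C$, $E$, $F$ of Appendix~\ref{app:MuonGm2Contributions} are passed in as callables since their explicit forms are not reproduced here, and Eqs.\ (\ref{eqn:ScalarLeptuquarkDoubletContribFFS},\ref{eqn:ScalarLeptuquarkDoubletContribSSF}) for $R_2$ can be coded analogously:
\begin{verbatim}
# Sketch of the S_1 contributions: Delta a_mu = FFS + SSF from the two
# equations above.  The loop functions B, C, E, F must be supplied by
# the user (they are defined in the appendix); input masses are in GeV.
import math

M_MU, M_TOP = 0.1057, 172.8
Q_T, Q_S1 = 2.0 / 3.0, 1.0 / 3.0

def damu_S1(lam_QL, lam_tmu, M_S1, B, C, E, F):
    x = M_TOP**2 / M_S1**2
    pref = 3.0 * M_MU**2 / (32.0 * math.pi**2 * M_S1**2)
    ffs = pref * Q_T * ((lam_QL**2 + lam_tmu**2) / 6.0 * E(x)
                        + 4.0 * lam_QL * lam_tmu / 3.0 * (M_TOP / M_MU) * F(x))
    ssf = pref * Q_S1 * (-(lam_QL**2 + lam_tmu**2) / 6.0 * B(x)
                         - 2.0 * lam_QL * lam_tmu / 3.0 * (M_TOP / M_MU) * C(x))
    return ffs + ssf
\end{verbatim}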
The important parameters for determining the above contributions to $\amu$ from scalar leptoquarks are the couplings $\lambda_L$, $\lambda_R$ between the leptoquarks and either the left- or right-handed top quarks, and the mass of the leptoquark $M_{LQ}$. For either of these leptoquarks the dominant contribution to $\amu$ arises from the internal chirality flip enhancement and has the following approximate form,
\begin{align} \label{eqn:ScalarLeptoquarkLOContrib}
\Delta a_\mu^{LQ} &\approx \frac{\lambda_L \lambda_R m_\mu m_t}{8\pi^2 M_{LQ}^2} \bigg(Q_{t} \,F\!\left(\frac{m_t^2}{M_{LQ}^2}\right) \pm \frac{Q_{LQ}}{2} \,C\!\left(\frac{m_t^2}{M_{LQ}^2}\right)\bigg),
\end{align}
where the $-/+$ is for the singlet/doublet. Comparing this to Eq.\
(\ref{eqn:GeneralGM2Contribution}), we can see that $C_{BSM} = Q_t \lambda_L
\lambda_R m_t/ (8 \pi^2 m_\mu)$, with the ratio $m_t / m_\mu \approx 1600$, gives
the parametric enhancement to $\Delta a_\mu$ from the chirality flip. We can also see how
allowing the leptoquark to couple to the top quark versus the charm quark leads to
a larger value of $\Delta a_\mu$. While this can be used to qualitatively
understand our results, the numerical calculations were performed with \texttt{FlexibleSUSY}\@\xspace 2.5.0
\cite{Athron:2014yba,Athron:2017fvs} \footnote{where \texttt{SARAH}\@\xspace 4.14.1
\cite{Staub:2009bi,Staub:2010jh,Staub:2012pb,Staub:2013tta} was used to get
expressions for masses and vertices, and \texttt{FlexibleSUSY}\@\xspace also uses some numerical routines
originally from \texttt{SOFTSUSY}\@\xspace \cite{Allanach:2001kg,Allanach:2013kza}.}. The \texttt{FlexibleSUSY}\@\xspace
calculation of $\Delta a_\mu$ includes the full one-loop contribution and the universal
leading logarithmic two-loop QED contribution \cite{Degrassi:1998es}, which tends
to reduce the result by $\approx 6$--$10\%$.
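To get a feel for the size of couplings required, Eq.\ (\ref{eqn:ScalarLeptoquarkLOContrib}) can be inverted for a target $\Delta a_\mu$. In the rough sketch below the bracket of loop functions is collapsed into a single placeholder factor of order one (its actual value follows from $F$ and $C$ in the appendix), so the numbers are indicative only:
\begin{verbatim}
# Rough inversion of the approximate chirality-flip formula: the coupling
# product lambda_L*lambda_R needed to generate a target Delta a_mu for a
# leptoquark of mass m_lq (GeV).  `loop_factor` stands in for the bracket
# Q_t*F(x) +/- Q_LQ*C(x)/2 and is set to 1 purely for illustration.
import math

M_MU, M_TOP = 0.1057, 172.8   # GeV
DAMU_TARGET = 251e-11         # approximate 2021 world-average deviation

def coupling_product_needed(m_lq, damu=DAMU_TARGET, loop_factor=1.0):
    prefactor = M_MU * M_TOP / (8.0 * math.pi**2 * m_lq**2)
    return damu / (prefactor * loop_factor)

for m in (1500.0, 3000.0, 4500.0):
    print(m, "GeV:", round(coupling_product_needed(m), 4))
\end{verbatim}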
The contributions to $\amu$ from the introduction of the scalar leptoquarks
$S_{1}$ and $R_{2}$ with Lagrangians given by Eqs.\
(\ref{eqn:ScalarLeptoquarkSinglet},\ref{eqn:ScalarLeptoquarkDoublet}) are shown
in Figs.\ \ref{fig:ScalarLeptoquarkSinglet} and
\ref{fig:ScalarLeptoquarkDoublet}, respectively. Results are shown in the
$M_{LQ}$-$\lambda_{QL}$ and $M_{LQ}$-$\lambda_{Q\mu}$ planes, scanning up to a
leptoquark mass of $M_{LQ}=4500$ GeV. We find that for both $S_{1}$ and $R_{2}$
the observed $\amu$ discrepancy can be explained within $1\sigma$ using similar
ranges of couplings \update{and a leptoquark mass $M_{LQ} \gtrsim 1.1$--$1.5$ TeV}.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.32\textwidth]{ScalarLeptoquarkSinglettopscan1}
\includegraphics[width=0.32\textwidth]{ScalarLeptoquarkSinglettopscan2}
\includegraphics[width=0.32\textwidth]{ScalarLeptoquarkSinglettopprofilescan2}
\caption{Scenarios which can explain $\Delta a_\mu^\text{BNL}$ and/or
$\damu^{\text{2021}}$ in the model with an $SU(2)_L$ singlet
scalar leptoquark $S_{1}$ defined in
Eq.\ (\ref{eqn:ScalarLeptoquarkSinglet}). Results are shown in
the $M_{S_1}$-$\lambda_{QL}$ plane for fixed $\lambda_{t\mu}=0.1$
(left panel), $\lambda_{t\mu}=0.2$ (middle panel) and by varying
$\lambda_{t\mu}\in\update{[0,0.5]}$ (right panel). In
the latter case we {\it profile} to find the regions in the
$M_{S_1}$-$\lambda_{QL}$ plane that for some value of
$\lambda_{t\mu}$ in the scan, satisfy all experimental
constraints and can explain the $\Delta a_\mu$ measurements within
1$\sigma$. Regions which can explain $\Delta a_\mu^\textrm{BNL}$ and
$\damu^{\text{2021}}$ to within $1\sigma$ are yellow and green
respectively, with the overlap between these two regions
coloured lime. The black line indicates points which produce a
$\Delta a_\mu$ contribution matching \update{Eq.\ (\ref{eqn:avgDiscrepancy})}. The
grey regions are excluded by scalar leptoquark searches at the
Large Hadron Collider \cite{Sirunyan:2018ryt}. Regions shaded
cyan are disfavoured as they produce a contribution
which shifts the muon mass up by more than $100\%$ or down by
more than $50\%$, as in Eq.\ (\ref{eqn:FineTuningLimits}). In
the right panel only solutions which have not been excluded by
leptoquark searches or $Z\rightarrow\mu\mu$ or
$Z\rightarrow\nu\nu$ constraints are displayed, i.e.\ any points
ruled out by any of these experimental constraints are
discarded. \label{fig:ScalarLeptoquarkSinglet} }
\end{figure}
The left and middle panels of
Fig.\ \ref{fig:ScalarLeptoquarkSinglet} show where
$\Delta a_\mu^\text{BNL}$ and $\damu^{\text{2021}}$ can be explained
when the coupling to the right-handed top quark is fixed to
$\lambda_{t\mu}=0.1,0.2$ respectively. For $\lambda_{t\mu} =
0.1$, the black line showing points which exactly explain the
$\damu^{\text{2021}}$ discrepancy is a parabolic curve,
following the quadratic relationship between leptoquark mass
and coupling in Eq.\ (\ref{eqn:ScalarLeptoquarkLOContrib}).
By increasing the coupling to $\lambda_{t\mu}=0.2$, a lower
value of the coupling $\lambda_{QL}$ to the left-handed top
quark is required to get the same contributions to $\Delta a_\mu$,
and the region which \update{can explain $\damu^{\text{2021}}$}
narrows and flattens as shown in the middle panel. In both
cases the new $\damu^{\text{2021}}$ value can be explained
with a \update{marginally smaller} coupling than the
previous BNL value due to the small \update{decrease} in the discrepancy.
CMS searches for scalar leptoquarks \cite{Sirunyan:2018ryt},
shown by grey shading, exclude regions with masses
\update{$M_{S_1} \lesssim 1.1$--$1.5$ TeV}, dependent on the
branching ratio $\beta_{S_1}$ in
Eq.\ (\ref{eqn:ScalarLeptoquarkSingletBR}). For lower
$\lambda_{QL}$ couplings the strongest mass constraint of
$M_{S_1} > 1420$ GeV arises, since $\beta_{S_1}\rightarrow1$
as $\lambda_{QL}\rightarrow0$. Additionally, the cyan region
indicates points affected by the ``fine-tuning'' in the muon
mass discussed in Sec.\ \ref{sec:BSMoverview} and defined in
Eq.\ (\ref{eqn:FineTuningLimits}): here the loop contributions
to the muon mass calculated in \texttt{FlexibleSUSY}\@\xspace push the muon mass
above twice the pole mass or below half of it. Satisfying this
fine-tuning criterion places a rough upper limit of
\update{$\lambda_{QL} \lesssim 0.06$} for $\lambda_{t\mu}=0.1$
and \update{$\lambda_{QL} \lesssim 0.04$} for
$\lambda_{t\mu}=0.2$. Note that these fine-tuning
conditions allow only a \update{small strip} of the parameter space to remain.
In the right panel of Fig.\ \ref{fig:ScalarLeptoquarkSinglet}, we
profile over $\lambda_{t\mu}$ to find the best fits to $\amu$: at
each point of the 2D plane we vary the coupling
$\lambda_{t\mu}\in[0,0.5]$ and show the smallest number of standard
deviations within which $\amu$ can be explained. This panel has the same constraints as before, except for
the muon mass fine-tuning. The LHC leptoquark searches impose a limit
of \update{$M_{S_1} \gtrsim 1.1$ TeV}, with slightly higher mass limits
implied by this for the lowest $\lambda_{QL}$ values since the
branching ratios depend on the leptoquark couplings. The exclusion in the bottom right of the plot is formed by our choice of restricting $\lambda_{t\mu}$ to moderate values $\leq 0.5$. If we allow $\lambda_{t\mu}$ to vary up to $\sqrt{4\pi}$ then no exclusion in the bottom right would be visible due to the large chirality flip enhancement.\footnote{In contrast, if we plotted the charm-philic case, the combination of LHC and large $\Delta a_\mu$ constraints would lead to a much larger exclusion in the bottom right of the plot than can be seen here, even when the equivalent coupling is varied up to the perturbativity limit.} As the mass $M_{S_1}$ is increased, the minimum coupling $\lambda_{QL}$ which produces a contribution that can explain $\amu$ within $1\sigma$ increases, in line with
Eq.\ (\ref{eqn:ScalarLeptoquarkLOContrib}), with the \update{new world
average} result able to be explained in \update{a narrower region} of the
parameter space due to \update{the reduction in the uncertainty}.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.32\textwidth]{ScalarLeptoquarkDoublettopscan1}
\includegraphics[width=0.32\textwidth]{ScalarLeptoquarkDoublettopscan2}
\includegraphics[width=0.32\textwidth]{ScalarLeptoquarkDoublettopprofilescan2}
\caption{ Scenarios which can explain
$\Delta a_\mu^\text{BNL}$ and/or $\damu^{\text{2021}}$ in the
model with a scalar leptoquark $SU(2)_L$ doublet
$R_{2}$ defined in
Eq.\ (\ref{eqn:ScalarLeptoquarkDoublet}). Results
are shown in the $M_{R_2}$-$\lambda_{Q\mu}$ plane for
fixed $\lambda_{tL}=0.1$ (left panel),
$\lambda_{tL}=0.2$ (middle panel) and by varying
$\lambda_{tL}\in[0,0.5]$ (right
panel). In the latter case we {\it profile} to find
the regions in the $M_{R_2}$-$\lambda_{Q\mu}$ plane
where these measurements can be explained for some
value of $\lambda_{tL}$ in the scan and satisfy
experimental constraints. Colours are the same as in
Fig.\ \ref{fig:ScalarLeptoquarkSinglet}. Additional
relevant exclusions in the parameter space from the
constraint $Z\rightarrow\nu\nu$ are shaded in pink
and in the right panel this is included in the list
of experimental constraints that points must
satisfy, in addition to those listed in
Fig.\ \ref{fig:ScalarLeptoquarkSinglet}. \label{fig:ScalarLeptoquarkDoublet}}
\end{figure}
Similarly, for the scalar leptoquark doublet $R_{2}$, $\Delta a_\mu$ contours
for fixed values of the coupling to right-handed top quarks,
$\lambda_{tL}=0.1,0.2$, are shown in left and middle panels of
Fig.\ \ref{fig:ScalarLeptoquarkDoublet}, with a profile over
$\lambda_{tL}$ shown in the right panel. Again, the $\Delta a_\mu$ contours
for fixed $\lambda_{tL}=0.1, 0.2$ follow the quadratic ratio given in
Eq.\ (\ref{eqn:ScalarLeptoquarkLOContrib}), with larger $\Delta a_\mu$ values
from a given leptoquark mass being obtained with the larger coupling
$\lambda_{tL}=0.2$. This time, the mass constraints placed on the
leptoquark $R_{2}^u$ from the LHC are independent of the couplings, as
$R_2^u$ always decays to a top quark and muon. Again imposing the
fine-tuning criteria has a huge impact, with the contributions from
the leptoquark reducing the muon mass $m_\mu$.
\update{For $\lambda_{tL}=0.1$ only a small corner of
parameter space can explain $\damu^{\text{2021}}$ without fine tuning, and for
$\lambda_{tL}=0.2$ this shrinks to a tiny region.} From
profiling over $\lambda_{tL}$, we find that again $\damu^{\text{2021}}$ can be
explained within $1\sigma$ for \update{masses $M_{R_2} \ge 1420$ GeV}.
For higher couplings $\lambda_{Q\mu} \gtrsim 0.47$, constraints from
$Z\rightarrow\nu\nu$ raise the minimum mass which can explain
$\damu^{\text{2021}}$. As with the previous model our choice to
restrict ourselves to moderate $\lambda_{tL} \leq 0.5$ leads to a lower
bound on the $\lambda_{Q\mu}$ coupling that increases with mass as a
smaller coupling $\lambda_{Q\mu}$ \update{cannot produce a large enough
$\damu^{\text{2021}}$}. As before there would be \update{no exclusion} in the bottom
right region if we varied $\lambda_{tL}$ up to $\sqrt{4\pi}$.
\begin{figure}[tb]
\centering
\includegraphics[width=0.32\textwidth]{ScalarLeptoquarkSinglettopprofilescan1quad}
\includegraphics[width=0.32\textwidth]{ScalarLeptoquarkDoublettopprofilescan1quad}
\caption{ Results for $\damu^{\text{2021}}$ and $\damu^{\text{BNL}}$ in the planes of the leptoquark couplings where we profile over all possible scalar leptoquark masses up to $4.5$ TeV. More specifically, in the left panel we show results for the $SU(2)_L$ singlet $S_{1}$ defined in Eq.\ (\ref{eqn:ScalarLeptoquarkSinglet}) in the $\lambda_{QL}$--$\lambda_{t\mu}$ plane and in the right panel we show the scalar leptoquark $SU(2)_L$ doublet $R_{2}$ defined in Eq.\ (\ref{eqn:ScalarLeptoquarkDoublet}) in the $\lambda_{Q\mu}$--$\lambda_{tL}$ plane. We show couplings between the leptoquark and the SM fermions that can satisfy direct LHC searches and $Z\rightarrow\mu\mu$ and $Z\rightarrow\nu\nu$ constraints and can simultaneously explain $\damu^{\text{2021}}$ (green) or both $\Delta a_\mu^{\textrm{BNL}}$ {\it and} $\damu^{\text{2021}}$ (lime green) within $1\sigma$. Additionally points which can also avoid the fine-tuning constraints in Eq.\ (\ref{eqn:FineTuningLimits}) on the muon mass are shaded in dark blue if they can explain $\damu^{\text{2021}}$, aqua if they can explain $\Delta a_\mu^{\textrm{BNL}}$ within $1\sigma$, and blue if they can explain {\it both}. \label{fig:ScalarLeptoquarkProfiles}}
\end{figure}
In Fig.\ \ref{fig:ScalarLeptoquarkProfiles} the $\Delta a_\mu$
predictions for both leptoquarks were profiled over the mass
of the leptoquark with couplings ranging up to a cutoff of
$0.5$, where we have again excluded points ruled out by the
LHC searches and the decays $Z\rightarrow\mu\mu$ or
$\nu\nu$. The regions which can explain \update{$\damu^{\text{2021}}$} follow the
relationship given in
Eq.\ (\ref{eqn:ScalarLeptoquarkLOContrib}), where the
contribution has identical dependence on the couplings to the
left- and right-handed top quarks. As expected LHC limits on
the leptoquark mass mean that there is a lower limit for the
values of the couplings that can \update{explain the observed
$\amu$ disagreement within $1\sigma$}. However, this limit
is extremely small so that only leptoquarks with couplings
\update{$\lambda_{QL}\times\lambda_{t\mu} \lesssim 0.003$} are unable to produce large enough contributions to
$\amu$ with a mass that has not been excluded by LHC searches in
the $S_1$ model. As can be seen in the plot we obtain slightly lower limits on
the couplings in the $R_2$ model compared to the $S_1$ model. This is due to
the higher limits from the LHC on the leptoquark singlet at low
couplings, as seen in Figs.\ \ref{fig:ScalarLeptoquarkSinglet},
\ref{fig:ScalarLeptoquarkDoublet}. Note that if we try to explain
$\Delta a_\mu^{\textrm{BNL}}$ or $\damu^{\text{2021}}$ while avoiding fine-tuning of the muon mass,
then we get an upper limit on the couplings of
\update{$\lambda_{QL}\times\lambda_{t\mu} \lesssim 0.006$} for the $S_1$ model and
\update{$\lambda_{tL}\times\lambda_{Q\mu} \lesssim 0.004$} for the $R_2$ model.
However, putting aside fine-tuning constraints, the deviation between the SM
prediction and the observed value of the anomalous magnetic moment of the muon can
be explained within $1\sigma$, needing couplings of \update{reasonable size no
smaller than $\approx 0.05$ each}.
In summary, leptoquarks are an exciting and well-motivated
possibility for physics beyond the SM, supported not only
by $\amu$ but also by flavour anomalies. However, among all possible
leptoquark quantum numbers, the $S_1$ and $R_2$ models are
\update{the only two viable explanations of $\Delta a_\mu^{\text{BNL}}$ and
$\damu^{\text{2021}}$}. Furthermore, leptoquark couplings of the left- and
the right-handed muon to top quarks are required. In this way,
chirality flip enhancements by $m_t/m_\mu$ are possible. Under
these conditions, leptoquark masses above the LHC-limit of around
\update{$1.4$~TeV} can accommodate $\damu^{\text{2021}}$ without violating
flavour constraints. \update{The large $\damu^{\text{2021}}$ can even be
explained with leptoquark couplings to the left- and right-handed
muon that are as small as $\approx 0.05$, which is essentially
unchanged from the BNL value, despite the small reduction in the
central value.} In principle, masses in the multi-TeV region can
accommodate the measured $\amu$, if the couplings are sufficiently
increased. However, if one takes the fine tuning criteria on the
muon mass seriously then only very narrow regions of parameter space
remain for natural leptoquark solutions of the observed $\amu$, just
above the LHC limits.
\section{Two-Field Extensions} \label{sec:TwoFields} \label{sec:TwoFieldsDifferentSpin}
In this section we consider simple extensions of the SM by two
fields. We focus on models where the new particles can appear together
in pure BSM loops that contribute to $\amu$, in which case the new
fields should have different spins. It is not possible to get an enhancement through internal chirality flips in these models, restricting explanations of $\amu$ to low mass regions, but compressed spectra in these regions may evade the LHC limits. In addition one of the new particles could be a
stable dark matter candidate, and then simultaneous explanation of
$\amu$ and dark matter is possible. For these reasons the case of two
fields with different spins behaves very differently from the one-field
extensions, and is representative of a range of more
elaborate models.
We begin in
Sec.\ \ref{sec:two_field_overview} with an overview
of the status of two-field extensions. There we also comment on the
case with two fields of the same spin, including the interesting case
of vector-like leptons. From the overview we identify two specific
models that are promising in view of $\amu$,
dark matter and LHC constraints. These models will be analysed in detail in
Sec.\ \ref{sec:Z2symmetric} and compared against the latest $\amu^{\text{2021}}$ result and results from LHC and dark matter experiments.
\subsection{Overview}\label{sec:two_field_overview}
\definecolor{DM}{rgb}{0.6, 0.4, 0.8}
\begin{table}[t]
\begin{center}
\setlength\tabcolsep{1.5pt}
\begin{tabular}{|rl|c|c|} \hline
\multicolumn{2}{|c|}{{\scriptsize $(SU(3)_C \times SU(2)_L \times U(1)_Y)_\textrm{spin}$}} & $+\mathbb{Z}_2$ & \update{Result for $\Delta a_\mu^\text{BNL}$, $\damu^{\text{2021}}$}\\ \hline
\multirow{2}{*}{\FieldN$_0$} &\multirow{2}{*}{-- \FieldC$_{1/2}$}\@\xspace
& No &\cellcolor{viable} Projected LHC 14 TeV exclusion, not confirmed \\
&& Yes &\cellcolor{viable} Updated Sec.\ \ref{sec:Z2symmetric} \\ \hline
\FieldC$_0$ &-- \FieldN$_{1/2}$\@\xspace & Both &\cellcolor{ruledout} Excluded: $\Delta \amu < 0$ \\ \hline
\FieldD$_0$ &-- \FieldN$_{1/2}$\@\xspace & Both &\cellcolor{ruledout} Excluded: $\Delta \amu < 0$ \\ \hline
\multirow{2}{*}{\FieldN$_{0}$} &\multirow{2}{*}{-- \FieldD$_{1/2}$}\@\xspace
& No &\cellcolor{ruledout} Excluded: LHC searches\\
&& Yes &\cellcolor{viable} Updated Sec.\ \ref{sec:Z2symmetric} \\ \hline
\multirow{2}{*}{\FieldD$_{0}$} &\multirow{2}{*}{-- \FieldC$_{1/2}$}\@\xspace
& No &\cellcolor{ruledout} Excluded: LEP contact interactions \\
&& Yes &\cellcolor{DM} Viable with under abundant DM \\ \hline
\FieldC$_0$ &-- \FieldD$_{1/2}$\@\xspace & Both &\cellcolor{ruledout} Excluded: $\Delta \amu < 0$ \\ \hline
\FieldD$_0$ &-- \FieldD$_{1/2}$\@\xspace & Both &\cellcolor{ruledout} Excluded: LEP search \\ \hline
\multirow{2}{*}{\FieldD$_{0}$} &\multirow{2}{*}{-- \FieldA$_{1/2}$}\@\xspace
& No &\cellcolor{ruledout} Excluded: LHC searches \\
&& Yes &\cellcolor{DM} Viable with under abundant DM \\ \hline
\multirow{2}{*}{\FieldD$_{0}$} &\multirow{2}{*}{-- \FieldT$_{1/2}$}\@\xspace
& No &\cellcolor{ruledout} Excluded: LHC searches + LEP contact interactions \\
&& Yes &\cellcolor{DM} Viable with under abundant DM \\ \hline
\FieldA$_{0}$ &-- \FieldD$_{1/2}$\@\xspace & Both &\cellcolor{ruledout} Excluded: $\Delta \amu < 0$ \\ \hline
\multirow{2}{*}{\FieldA$_{0}$} &\multirow{2}{*}{-- \FieldT$_{1/2}$}\@\xspace
& No &\cellcolor{ruledout} Excluded: LHC searches \\
&& Yes &\cellcolor{DM} Viable with under abundant DM \\ \hline
\FieldT$_{0}$ &-- \FieldD$_{1/2}$\@\xspace & Both &\cellcolor{ruledout} Excluded: $\Delta \amu < 0$ \\ \hline
\FieldT$_{0}$ &-- \FieldA$_{1/2}$\@\xspace & Both &\cellcolor{ruledout} Excluded: $\Delta \amu < 0$ \\ \hline
\FieldC$_{1/2}$ &-- \FieldN$_{1}$\@\xspace & No &\cellcolor{ruledout} Excluded: $\Delta \amu < 0$ \\ \hline
\FieldD$_{1/2}$ &-- \FieldN$_{1}$\@\xspace & No &\cellcolor{ruledout} Excluded: $\Delta \amu < 0$ \\ \hline
\FieldD$_{1/2}$ &-- \FieldA$_{1}$\@\xspace & No &\cellcolor{ruledout} Excluded: LHC searches + LEP contact interactions \\ \hline
\FieldN$_{1/2}$ &-- \FieldP$_{1}$\@\xspace & No &\cellcolor{ruledout} Excluded: LHC searches + LEP contact interactions \\ \hline
\FieldD$_{1/2}$ &-- \FieldC$_{1}$\@\xspace & No &\cellcolor{ruledout} Excluded: LHC searches + LEP contact interactions \\ \hline
\FieldT$_{1/2}$ &-- \FieldA$_{1}$\@\xspace & No &\cellcolor{ruledout} Excluded: $\Delta \amu < 0$ \\ \hline
\end{tabular}
\caption{Summary of known results for gauge invariant
extensions of two fields with different spin with one-loop
contributions to the anomalous magnetic moment of the muon.
These results are rather exhaustive due to systematic
investigations and classifications in
Refs.\ \cite{Freitas:2014pua,Kowalska:2017iqv,Calibbi:2018rzv}.
Note that this summarises results in the literature where
different assumptions have been made, see the text and the
original references for details. When there are multiple
reasons for ruling a scenario out, we mention the most
model-independent constraint. We use color highlighting to
give a visual indication of the status of the model, namely
green for viable explanations, red for excluded and purple
for extensions that are only viable with under abundant dark
matter.
\label{tab:TwoFieldsDifferentSpin}}
\end{center}
\end{table}
With the possibilities for one-field extensions essentially exhausted,
it is natural to then consider including two new fields entering the
one-loop diagrams for $\amu$ together. The one-loop diagrams shown in
Fig.\ \ref{fig:GeneralDiagrams} of Appendix
\ref{app:MuonGm2Contributions}, have topologies FFS, SSF, VVF, FFV,
VSF and SVF (where F $=$ fermion, S $=$ scalar and V $=$ vector
boson). Clearly, having loops with only BSM particles requires that
the two new states have different spin. One of the new states must be
a fermion (allowed to be Dirac, Majorana or Weyl fermion), the other
either a scalar or a vector boson. This means the new states can't mix
and new fermions can't couple to both the left- and right-handed muon,
so no new chirality flips may be introduced. Therefore
$\Delta a_\mu^\textrm{BNL}$ explanations like this are heavily constrained
\update{and this has now been strongly confirmed by the $\damu^{\text{2021}}$
result}. Nonetheless one important new feature is that both of these
states couple to the muon together in trilinear vertices. If this is
the only way they couple to SM states, and the BSM states have similar
masses, then LHC limits could be evaded due to compressed spectra.
Furthermore if this is due to a $\mathbb{Z}_2$ symmetry where the BSM
states transform as odd, then it predicts a stable particle that could
be a dark matter candidate.
The status of $\Delta a_\mu^\textrm{BNL}$ from a pair of BSM fields with
different spins is summarised in Table
\ref{tab:TwoFieldsDifferentSpin}\update{, and these results have
simply been confirmed by the new world average $\damu^{\text{2021}}$}. This
table draws extensively from
Refs.\ \cite{Freitas:2014pua,Kowalska:2017iqv,Calibbi:2018rzv}.
Ref.~\cite{Freitas:2014pua} considers models without dark matter
candidates and requires minimal flavour violation, while
Refs.~\cite{Kowalska:2017iqv} and~\cite{Calibbi:2018rzv} consider
models with ${\mathbb{Z}}_2$ symmetry and dark matter candidates. Many
models can be immediately excluded because their contributions to
$\amu$ are always negative\footnote{The sign of $\Delta a_\mu$ from the
one-loop diagrams can be understood analytically and
Ref.\ \cite{Calibbi:2018rzv} also presents general conditions for a
positive contribution, based on the hypercharge and the $SU(2)$
representations.
}, with results in the literature in reasonable agreement on this point.
Models with positive contributions to $\amu$ predict a sufficiently
large contribution only for very light states, and as a result
collider searches place severe constraints on these models. However the collider search limits depend on the model
assumptions and need to be understood in this context. The ${\mathbb{Z}}_2$ symmetry is particularly important as it regulates the interaction between the SM and BSM particles. Therefore, where relevant, the models in Table~\ref{tab:TwoFieldsDifferentSpin} are considered in two cases: with and without ${\mathbb{Z}}_2$ symmetry. Collider
constraints effectively eliminate almost all the models without
$\mathbb{Z}_2$ symmetry, while requiring that the $\mathbb{Z}_2$
symmetric models simultaneously explain dark matter and
$\Delta a_\mu^\textrm{BNL}$ effectively restricts us to just the two models that we
update in Sec.\ \ref{sec:Z2symmetric}. However first, in the
following, we explain the assumptions used in our main sources to
eliminate the remaining models that give a positive contribution to $\amu$ and the important
caveats to these findings.
The introduction of a vector-like fermion and a scalar or a vector
without any additional symmetries was dealt with by
Ref.\ \cite{Freitas:2014pua}, considering different $SU(2)_L$
representations, namely singlets, doublets, triplets or adjoint
triplets. They quickly eliminate a scalar doublet and fermion doublet
combination, i.e.\ \FieldD$_{0}$ -- \FieldD$_{1/2}$\@\xspace, without considering LHC constraints
because cancellations amongst the contributions mean $\Delta a_\mu$ is too
small for a $1\sigma$ explanation of $\Delta a_\mu^\textrm{BNL}$ while
satisfying basic assumptions like perturbativity and the $\approx 100$
GeV LEP limit \cite{Heister:2002mn,Abdallah:2003xe}. For LHC searches
they look at Drell-Yan production, which depends only on the gauge
structure, but for decays they rely on $\mathbb{Z}_2$ violating
leptonic interactions where they assume a minimal flavour violating
structure. They also apply LEP constraints on contact interactions,
using the same assumptions for the lepton interactions. However one
should again note the caveats described at the beginning of
Sec.~\ref{sec:one_field_overview} on minimal flavour violation and
absence of scalar VEV. Adding only 8 TeV LHC searches effectively
eliminates three more $\Delta a_\mu^\textrm{BNL}$ explanations:
\FieldN$_{0}$ -- \FieldD$_{1/2}$\@\xspace, \FieldD$_{0}$ -- \FieldA$_{1/2}$\@\xspace and \FieldA$_{0}$ -- \FieldT$_{1/2}$\@\xspace.\footnote{There is a
very small $5$ GeV gap in the exclusion for \FieldN$_{0}$ -- \FieldD$_{1/2}$\@\xspace and a
small corner of parameter space of \FieldA$_{0}$ -- \FieldT$_{1/2}$\@\xspace escaped 8 TeV
searches, but given the LHC run-II projection in
Ref.~\cite{Freitas:2014pua}, this should now be excluded unless there is a large excess in the data. For this
reason we describe these models as excluded in
Table~\ref{tab:TwoFieldsDifferentSpin}.}
Constraints on $ee\ell\ell$ contact interactions derived from LEP
observables alone further exclude \FieldD$_{0}$ -- \FieldC$_{1/2}$\@\xspace, and in combination with LHC searches exclude \FieldD$_{0}$ -- \FieldT$_{1/2}$\@\xspace and fermion-vector
extensions \FieldD$_{1/2}$ -- \FieldA$_{1}$\@\xspace, \FieldN$_{1/2}$ -- \FieldC$_{1}$\@\xspace and \FieldD$_{1/2}$ -- \FieldC$_{1}$\@\xspace. It is
important to note that the limits from contact interactions can
be avoided by cancellations from the contributions of heavy states that may appear as part of a more elaborate model, and are therefore quite model dependent.
Ref.\ \cite{Kowalska:2017iqv} considered scenarios with a new fermion
and scalar, where there is a $\mathbb{Z}_2$ symmetry under which the
BSM fields are odd, so that the models have a stable dark matter
candidate. In all cases the dark matter candidate was the scalar.
The new couplings they introduce are renormalizable, perturbative, CP
conserving, gauge invariant and muon-philic (meaning that muons are
the only SM fermions the BSM states couple to). For the scalar-fermion
\FieldN$_0$ -- \FieldC$_{1/2}$\@\xspace pair with charged fermion singlet, they find a region
where $\Delta a_\mu^{\textrm{BNL}}$ and the relic density can both be
explained within $2\sigma$. LHC constraints exclude much of the
parameter space, but significant regions with compressed spectra
survive. The results for the pair \FieldN$_{0}$ -- \FieldD$_{1/2}$\@\xspace are essentially the
same. We update both models in Sec.~\ref{sec:Z2symmetric}. For an
inert scalar doublet, coupled with fermions that are $SU(2)_L$
singlets, doublets, triplets, i.e.\ \FieldD$_{0}$ -- \FieldC$_{1/2}$\@\xspace, \FieldD$_{0}$ -- \FieldD$_{1/2}$\@\xspace and
\FieldD$_{0}$ -- \FieldT$_{1/2}$\@\xspace, they find a narrow region at, or just below, the Higgs
resonance region ($m_s\approx m_h/2$), that is consistent with both
the relic density and $\amu^{\textrm{BNL}}$ within
$2\sigma$\footnote{Explaining $\amu^\textrm{BNL}$ within $2\sigma$
requires only adding a small part of the deviation. As the authors
comment in their text, for the doublet case the BSM $\Delta a_\mu$
contribution is very small. A $1\sigma$ explanation should not be
possible, so we regard this explanation as excluded by LEP.}. LHC
searches almost exclude these scenarios, but in all cases a tiny
region where the fermion is about $100$ GeV survives. Since these
regions are so small, in Table \ref{tab:TwoFieldsDifferentSpin} we
only report that these models are viable with under abundant dark
matter. For an inert scalar doublet coupled with an adjoint triplet
fermion (\FieldD$_{0}$ -- \FieldA$_{1/2}$\@\xspace) they find no region that can be consistent
with both $\amu^{\textrm{BNL}}$ and the observed DM relic density.
Ref.\ \cite{Calibbi:2018rzv} considered the same type
of models, but substantially expanded the number by including a
wider range of dark matter candidates and systematically classifying
the representations in a manner similar to that shown in our Table
\ref{tab:TwoFieldsDifferentSpin}. For a given dark matter candidate
mass sufficiently above the $W$-boson mass, higher $SU(2)_L$
representations have a comparatively large dark matter annihilation
cross-section; they use this to place further analytically derived
constraints on the models. Beyond the
very fine tuned regions near the Higgs resonance\footnote{As an
example of this they also present results with explanations near
the Higgs resonance for the scalar-fermion \FieldD$_{0}$ -- \FieldC$_{1/2}$\@\xspace
case.
} the
only models that could explain $\Delta a_\mu^{\textrm{BNL}}$ and the relic abundance of
dark matter are scalar singlet dark matter with either a fermion singlet or fermion doublet, i.e.\ \FieldN$_0$ -- \FieldC$_{1/2}$\@\xspace and
\FieldN$_{0}$ -- \FieldD$_{1/2}$\@\xspace. For these two surviving explanations of dark matter
and $\Delta a_\mu^\textrm{BNL}$ they further apply constraints from the $8$
TeV and $13$ TeV LHC searches and LEP limit on electroweak
states. While in the latter model one can also get a neutral dark
matter candidate if the fermion is lighter than the scalar, direct
detection rules this out. They find that LHC searches heavily constrain the regions where relic density and $\Delta a_\mu^{\textrm{BNL}}$
may be simultaneously explained, but cannot entirely exclude the possibility.
\subsubsection{Vector-like leptons}
Before ending this overview we briefly also discuss the case of models
with two fields of the same spin. In this case mixing of fields and enhanced chirality flips are allowed, but a dark matter candidate is precluded.
Particularly interesting examples are extensions with vector-like leptons (VLLs), i.e.\ extension
by two new vector-like fermions with the same quantum numbers as the
left- and right-handed SM leptons. The muon-philic VLLs can couple
both to left- and right-handed muons and to the Higgs
boson; the contributions to $\amu$ behave similarly to the ones of
leptoquarks in Eq.\ (\ref{eqn:ScalarLeptoquarkLOContrib}), but the
chirality flip at the top quark is replaced by the new Yukawa coupling
of the VLL to the Higgs boson. Such models have been
discussed in detail in Refs.~\cite{Kannike2012,Dermisek:2013gta,Poh:2017tfo}.
A distinguishing property of such models is that significant new
contributions to the muon mass arise at tree level via the mixing with
the new VLL. The relation between loop contributions to
$\amu$ and to the tree-level muon mass
is therefore more complicated than in the discussion of
Sec.\ \ref{sec:BSMoverview} leading to
Eq.\ (\ref{eq:damulimit}): for Higgs-loop contributions to $\amu$,
the ratio between $\Delta a_\mu$ and $\Delta m _\mu ^{\text{tree}}$ does not
behave as $1/M_{\text{BSM}}^2$ but rather as $1/(16\pi^2 v^2)$
\cite{Kannike2012,Dermisek:2013gta}. Nevertheless, for
couplings in line with electroweak precision constraints and
perturbativity, only masses up to the TeV scale are able to provide
significant contributions to $\amu$, similar to the bound of
Eq.\ (\ref{BNLuppermassbound}).
Currently these models with a single VLL can accommodate
all existing limits, but there are two noteworthy constraints. On the
one hand, bounds from $\mu\to e\gamma$ require the VLL couplings to be
non-universal, i.e.\ very much weaker to the electron than to the muon
\update{(to allow large enough contributions for $\amu^{\text{2021}}$)}
\cite{Poh:2017tfo}. On the other hand, the muon--Higgs coupling is
modified by the large tree-level contributions to the muon
mass. Already the current LHC constraints on the
Higgs decay rate $h\to\mu^+\mu^-$ imply upper
limits on the possible contributions to $\amu$, and if future
experiments measure the $h\to\mu^+\mu^-$ rate to be in agreement with the SM prediction,
a deviation as large as $\Delta a_\mu^{\text{BNL}}$ cannot be explained
\cite{Poh:2017tfo}. However extensions with more fields may relax the
mass limits and constraints from $h \to \mu ^+ \mu
^-$~\cite{Dermisek:2020cod,Dermisek:2021ajd}, given the lowest-order
calculations of these references.
Slightly generalized models, where vector-like fermions may also
carry quantum numbers different from ordinary leptons, have
been examined in Refs.\ \cite{Freitas:2014pua,Kannike2012}. Ref.\
\cite{Freitas:2014pua} examined the introduction of a vector-like
fermion $SU(2)_L$ doublet
{\FieldD\@\xspace$\!\!_{1/2}$} with either an $SU(2)_L$
singlet {(\FieldN\@\xspace$\!\!_{1/2}$ or \FieldC\@\xspace$\!\!_{1/2}$)} or a $SU(2)_L$ triplet
{(\FieldA\@\xspace$\!\!_{1/2}$ or \FieldT\@\xspace$\!\!_{1/2}$)}. Each BSM fermion state is coupled to the Higgs boson
and either the SM lepton $SU(2)_L$ doublet $L_i$, the SM lepton
$SU(2)_L$ singlet $e_i$ with a flavour-universal coupling, or another
BSM vector-like fermion. BSM and SM fermions of the same charge then
mix together, with the amount of mixing between the SM and BSM fermion
states constrained by LEP limits \cite{delAguila:2008pw}. Thus they
allow minimal flavour violation between the SM lepton generations,
and some of the main constraints on the masses of the vector-like
fermions come from minimal flavour violation. Ref.\
\cite{Kannike2012} additionally considered the introduction of a
vector-like fermion $SU(2)_L$ doublet with hypercharge $Y=-3/2$ with
either a charged fermion singlet or triplet (either \FieldY$_{1/2}$ -- \FieldC$_{1/2}$\@\xspace or
\FieldY$_{1/2}$ -- \FieldT$_{1/2}$\@\xspace). Ultimately both references find that these mixing vector-like
fermions can produce positive contributions to $\amu$ and cannot be
ruled out even by $14$-TeV LHC projections.
\subsection{Scalar singlet plus fermion explanations} \label{sec:Z2symmetric}
Our overview has resulted in two kinds of two-field models of particular interest,
shown in Table \ref{tab:TwoFieldsDifferentSpin}. Both models are heavily constrained
by LHC, but have not been excluded yet in the literature. We denote the two models as
Model L and Model R, which extend the SM by adding the following fields:
\begin{align}
\text{Model L:} \quad \phi, \ \psi _d =
\begin{pmatrix}
\psi_d^+\\ \psi_d^0
\end{pmatrix},\, & &
\text{Model R:} \quad \phi, \ \psi _s \equiv \psi_s^-,
\end{align}
where $\phi=\phi^0$ is an $SU(2)_L$ singlet scalar field with representation
$({\mathbf{1}},{\mathbf{1}},0)_0$, $\psi_d$ a doublet fermion field with representation
$({\mathbf{1}},{\mathbf{2}},1/2)_{1/2}$, and $\psi_s$ a singlet fermion field with
representation $({\mathbf{1}},{\mathbf{1}},-1)_{1/2}$. These new fermions are
vector-like, or Dirac fermions, thus not only are the Weyl spinors $\psi_d$ and
$\psi_s$ introduced but also their Dirac partners $\psi^c_d =
(\psi_d^{+c\dagger},\psi_d^{0c\dagger})$ and $\psi_s^c \equiv \psi_s^{-c\dagger}$.
In Model L the new fields couple only to the left-handed muon, and in Model R to the
right-handed muon.
The two models add the following terms to the SM Lagrangian, using
Weyl notation as in Eq.\ (\ref{eqn:ScalarLeptoquarkDoublet}):
\begin{align} \label{eqn:Min2FieldsLL}
{\cal L}_{\text{L}} &= \left(\lambda_L L\cdot \psi_d \phi - M_{\psi} \psi_d^c \psi_d + h.c.\right) - \frac{M_{\phi}^2}{2} |\phi|^2, \\
\label{eqn:Min2FieldsRR}
{\cal L}_{\text{R}} &= \left(\lambda_R \mu \phi \psi_s - M_{\psi} \psi_s^c \psi_s + h.c.\right) - \frac{M_{\phi}^2}{2} |\phi|^2,
\end{align}
which can be written out into their $SU(2)_L$ components:
\begin{align} \label{eqn:Min2FieldsLLComponents}
{\cal L}_{\text{L}} &= \left( \lambda_L \nu_{\mu L} \psi_d^0 \phi^0 - \lambda_L \mu_L \psi_d^+ \phi^0 - M_{\psi} \psi_d^{0c\dagger}
\psi_d^0 - M_{\psi} \psi_d^{+c\dagger} \psi_d^+ + h.c.\right) - \frac{M_{\phi}^2}{2} |\phi^0|^2, \\
\label{eqn:Min2FieldsRRComponents}
{\cal L}_{\text{R}} &= \left(\lambda_R \mu_R^\dagger \psi_s^- \phi^0 - M_{\psi} \psi_s^{-c\dagger} \psi_s^- + h.c.\right) - \frac{M_{\phi}^2}{2} |\phi^0|^2.
\end{align}
We have not included additional renormalisable terms involving the
scalar singlet in the Higgs potential, which are not relevant for
$\amu$, but we do briefly comment on the impact they would have later
on, as they can affect the dark matter phenomenology. This leaves
both models with just three parameters.
In the following we therefore present a detailed update of the
phenomenology of Model L and Model R, scanning over all three
parameters in each case to test whether or not dark matter and large
$\Delta a_\mu$ can be simultaneously explained. We include the latest data
from Fermilab, and the most recent LHC collider searches included in
\texttt{SModelS}\@\xspace $1.2.3$ in
Refs.\ \cite{Kraml:2013mwa,Ambrogi:2017neo,Dutta:2018ioj,Ambrogi:2018ujg,Sjostrand:2006za,Sjostrand:2014zea,Buckley:2013jua}
and \texttt{CheckMate}\@\xspace $2.0.26$
\cite{Drees2015,Dercks2017,Read:2002hq,Cacciari:2005hq,Cacciari:2008gp,Cacciari:2011ma,deFavereau:2013fsa}. It
should be noted that we deal only with BSM fields that couple to
second generation SM fermions. Thus, flavour violation constraints on
our models can safely be ignored, including limits from contact
interactions. This is in line with the methodology in
Ref.\ \cite{Calibbi:2018rzv}, but not \cite{Freitas:2014pua}.
The BSM contributions from both of the two field dark matter models come from
the FFS diagram \ref{fig:FFSDiagram} of Fig.\ \ref{fig:GeneralDiagrams} with the
generic $F$ fermion lines replaced by $\psi^-$ and the scalar $S$ one by $\phi^0$.
The contributions to $\amu$ from both of these models are
given by identical expressions, with $\lambda_{L}$ and $\lambda_{R}$
generically denoted by $\lambda$:
\begin{equation} \label{eqn:Min2FieldsLLContributions}
\Delta a_\mu^{\text{Model L}} = \Delta a_\mu^{\text{Model R}} = -Q_{\psi} \frac{\lambda^2}{32\pi^2} \frac{m_\mu^2}{6 M_{\phi}^2} \,E\!\left(\frac{M_\psi^2}{M_\phi^2}\right),
\end{equation}
where $Q_\psi=-1$. It does not involve a chirality flip enhancement, which has a large impact on
the ability of these models to explain $\amu$ whilst avoiding collider constraints.
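A minimal sketch of this expression in Python (with the loop function $E$ of Appendix~\ref{app:MuonGm2Contributions} supplied as a callable; the placeholder $E(x)=1$ below is used only for illustration) is:
\begin{verbatim}
# Sketch of the Model L / Model R contribution given above, with
# Q_psi = -1.  The loop function E(x) is defined in the appendix and is
# passed in here; the default E(x)=1 is only a placeholder.
import math

M_MU = 0.1057  # GeV

def delta_amu_two_field(lam, m_phi, m_psi, loop_E=lambda x: 1.0):
    """Delta a_mu for Model L or Model R, coupling lam and masses in GeV."""
    q_psi = -1.0
    x = (m_psi / m_phi) ** 2
    return (-q_psi * lam**2 / (32.0 * math.pi**2)
            * M_MU**2 / (6.0 * m_phi**2) * loop_E(x))

# Illustrative call: with E ~ O(1), large couplings and light masses are
# needed to reach Delta a_mu ~ 2.5e-9, as found in the scans below.
print(delta_amu_two_field(lam=2.5, m_phi=150.0, m_psi=200.0))
\end{verbatim}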
\begin{figure}[tb]
\centering
\includegraphics[width=0.32\textwidth]{Min2FieldsLL-overlap4}
\includegraphics[width=0.32\textwidth]{Min2FieldsLL-overlap5}
\includegraphics[width=0.32\textwidth]{Min2FieldsLL-overlap7}
\caption{Results from Model L, scanning over the masses of the new fermion and scalar which couple to the left-handed muon. Regions which can explain $\Delta a_\mu$ in \update{Eq.\ (\ref{eqn:avgDiscrepancy})} or Eq.\ (\ref{eqn:BNLDiscrepancy}) to within $1\sigma$ are coloured green and yellow respectively, with the overlap between the two regions coloured lime. The black line indicates masses which produce a $\Delta a_\mu$ contribution matching \update{Eq.\ (\ref{eqn:avgDiscrepancy})}. The points where a dark matter candidate particle produces the observed relic abundance of $0.1200$, Eq.\ (\ref{eqn:DMRD}), are shown in red, with the region below being excluded due to having an over abundance of dark matter. The region where $M_\psi < M_\phi$ is shaded cyan. Regions excluded by $13$-TeV results at the LHC are shaded grey, with the exclusions from the soft leptons search \cite{Sirunyan:2018iwl} obtained using \texttt{CheckMate}\@\xspace shaded in orange. Thus $\amu$ can only be explained in small slices of parameter space between the grey and orange regions. As the coupling between the left-handed muon and the BSM particles is increased, higher masses are required to explain $\amu$. \label{fig:Min2FieldsLLSlice} }
\end{figure}
The results for the first of the two field dark matter models,
Model L, are shown in
Figs.\ \ref{fig:Min2FieldsLLSlice}, \ref{fig:Min2FieldsLLProfile}.
Scans were performed over a grid of scalar masses $M_\phi \in
[0,400]$ GeV and fermion masses $M_\psi \in [100,400]$ GeV,
taking into consideration the LEP limit $M_\psi > 100$ GeV
\cite{Heister:2002mn,Abdallah:2003xe} on charged fermion masses.
Before showing results of the full scan, in
Fig.\ \ref{fig:Min2FieldsLLSlice} we show $\Delta a_\mu$ in three slices
of the parameter space in the $M_\psi$--$M_\phi$ plane ($M_\psi$ over
$100$--$300$ GeV and $M_\phi$ over $0$--$300$ GeV) with fixed
$\lambda_L = 2,2.5,3.5$. Due to the lack of enhanced chirality
flips a sufficiently large $\Delta a_\mu$ is obtained only when the
coupling constant is large and the masses are relatively small.
However due to the \update{new reduced} measurement of $\amu$, the
discrepancy can be explained with \update{heavier} masses than
before as indicated in the green curve. For \update{$\lambda_L
\le 2$}, the model cannot provide large enough $\Delta a_\mu$ to
explain the anomaly within $1\sigma$ while avoiding the LEP
$M_\psi > 100$ GeV limit and LHC limits (discussed below), while for very
\update{large values of $\lambda_L = 2.5$ or $\lambda_L = 3.5$} it
is possible to explain the anomaly but even when $\lambda_L$ is
close to $\sqrt{4\pi}$, $\damu^{\text{2021}}$ can only be explained within
$1\sigma$ \update{for masses below $260$ GeV}. These results can
be approximately reproduced using
Eq.\ (\ref{eqn:Min2FieldsLLContributions}), though we have again
performed the calculation with \texttt{FlexibleSUSY}\@\xspace 2.5.0
\cite{Athron:2014yba,Athron:2017fvs}, which includes the full
one-loop calculation and the universal leading logarithmic
two-loop QED contribution \cite{Degrassi:1998es}.
We do not consider scenarios with $M_\psi < M_\phi$ because such cases are like
Higgsino dark matter or the doublet case of minimal dark matter and as such will
be under abundant when the mass is below about $1$ TeV (see e.g.\ Ref.\
\cite{ArkaniHamed:2006mb}). Without a chirality flipping enhancement it will
not be possible to explain $\Delta a_\mu^{\text{BNL}}$ \update{or $\damu^{\text{2021}}$}
with masses that are heavy enough to explain dark matter. Hence the dark matter candidate is given
by the scalar singlet $\phi$. Direct detection constraints then
depend on the parameters of the Higgs potential
terms involving the singlet. By including such terms, we found that direct detection
constraints rule out significant parts of the parameter space but
can always be evaded by choosing the additional parameters to be small.
Therefore, for simplicity, we neglect these
parameters in our final numerical results and do not show direct
detection constraints.
The collider constraints are shown with overlaid shading. The lower grey
shaded region comes from searches for charginos, neutralinos,
sleptons, and long-lived particles, using leptonic final
states in the $8$-TeV searches \cite{Chatrchyan:2013oca,
Aad:2014vma, Khachatryan:2014qwa, Khachatryan:2015lla} and the
$13$-TeV searches \cite{Aaboud:2018jiw, Sirunyan:2018nwe,
CMS:2016ybj, Aad:2019vnb}, included in \texttt{SModelS}\@\xspace $1.2.3$
\cite{Kraml2014,Khosa2020} and excludes most of the light mass
parameter space where $\amu$ can be explained. Nonetheless there
is still a considerable gap close to the $M_\phi = M_\psi$ line,
which escapes these constraints, but may be closed by searches for
compressed spectra. We therefore also show in shaded orange the
CMS search for the compressed spectra of soft leptons
\cite{Sirunyan:2018iwl}, the exclusion from which was obtained using
\texttt{CheckMate}\@\xspace $2.0.26$ \cite{Drees2015,Dercks2017}, with
\texttt{MadAnalysis}\@\xspace \cite{Alwall:2014hca} and \texttt{PYTHIA}\@\xspace
\cite{Sjostrand:2006za,Sjostrand:2014zea} used to generate event
cross-sections. As a result there is little room for the model to
evade collider constraints and explain the observed value of $\amu$,
\update{though some gaps} do survive the limits we apply.
Finally we also consider the relic density of dark matter in this
model using \texttt{micrOMEGAs}\@\xspace $5.2.1$ \cite{Belanger2018}. The red line
in Fig.\ \ref{fig:Min2FieldsLLSlice} indicates where the model's
dark matter candidate particle produces the Planck-observed relic
density of $\Omega_{h^2}=0.1200$,
Eq.\ (\ref{eqn:DMRD}). Along this red line the relic
abundance of dark matter is depleted to the observed value
through t-channel exchange of the BSM fermions. This mechanism is
less effective below the line, where the mass splitting between the BSM scalar and fermion is larger,
leading to over abundance. All points below
the red line are strongly excluded by this over abundance, though this does not rule out any
scenarios that are not already excluded by collider constraints.
Above the line the relic density is under abundant. Our
results show \update{that it is not possible for any of these
$\lambda_L$ values} to simultaneously explain \update{$\amu^{\text{2021}}$} and the
relic density, while evading collider limits.
\begin{figure}[tb]
\centering
\includegraphics[width=0.32\textwidth]{Min2FieldsLL-profileoverlapcutoff}
\includegraphics[width=0.32\textwidth]{Min2FieldsLL-contourscanomgcutoff}
\includegraphics[width=0.32\textwidth]{Min2FieldsLL-profileoverlapexclude}
\caption{Several profiles of Model L. Colours for the
first and third panels are the same as
Fig.\ \ref{fig:Min2FieldsLLSlice}, and the same LHC
limits (which do not depend on the coupling) are
overlaid. The left panel shows the best
contributions to $\amu$ for each value of the BSM
masses, without having a dark matter candidate
particle with an over abundant relic density.
Likewise, the right panel only shows points which
have a dark matter relic density within $1\sigma$ of
the Planck observation \cite{Aghanim:2018eyx}. The
middle panel shows the smallest value of the
coupling $\lambda_L$ which can \update{explain $\amu^{\text{2021}}$
(Eq.\ (\ref{eqn:avgDiscrepancy}))} within
$1\sigma$ without producing over abundant dark matter. \label{fig:Min2FieldsLLProfile} }
\end{figure}
To give a more global picture of the status of the model in
Fig.\ \ref{fig:Min2FieldsLLProfile} we also vary $\lambda _L
\in [0,3.5]$. In the left panel we show where on the
$M_\phi$--$M_\psi$ plane it is possible to simultaneously fit
the \update{BNL measurement or the new world average} for
$\amu$ and avoid an over abundant relic density, for some
value of $\lambda_L$ within this range. As expected this
shows that $\lambda_L$ may be adjusted so that for very light
masses one may always explain $\amu$ within $1\sigma$, but as
the masses are raised past \update{$200-300$ GeV}, this is no
longer possible. As a result the combination of collider
constraints shown in grey, cyan and orange \update{exclude all
but two narrow regions of parameter space} in the compressed
spectra region between the $M_\phi$ and $M_\psi$ masses. As
shown in the middle panel of
Fig.\ \ref{fig:Min2FieldsLLProfile}, where we plot the minimum
value of $\lambda_L$ consistent with a $1\sigma$ explanation
\update{of $\damu^{\text{2021}}$}, explaining the observed value requires
\update{a very large coupling $\lambda_L \ge 2.5$}. One may
question the precision of our calculation for \update{such
large values} of $\lambda_L$; however, given the mass reach
of the collider experiments, which extends \update{well past}
the $1\sigma$ region, it is unlikely that including higher
orders in the BSM contributions will change anything
significantly. Finally the right panel of
Fig.\ \ref{fig:Min2FieldsLLProfile} shows $\amu$ results that
are compatible with explaining the full observed dark matter
relic density within $1\sigma$, having obtained this data from
a targeted scan using \texttt{MultiNest}\@\xspace 3.10
\cite{Feroz:2007kg,Feroz:2008xx,Feroz:2013hea,Buchner2014}. Our
results show that it is now impossible to simultaneously
explain dark matter and $\Delta a_\mu^\text{BNL}$ and \update{the
same is true with the updated $\damu^{\text{2021}}$ measurement}.
Compared to the results for this model shown in
Ref.\ \cite{Calibbi:2018rzv}, we find that the most recent
collider searches \cite{Aad:2014vma,Sirunyan:2018nwe} are
the most important for \update{narrowing the gaps} in the
exclusion. Currently, there is \update{very little room for
the model to survive}.
\begin{figure}[tb]
\centering
\includegraphics[width=0.32\textwidth]{Min2FieldsRR-overlap4}
\includegraphics[width=0.32\textwidth]{Min2FieldsRR-overlap5}
\includegraphics[width=0.32\textwidth]{Min2FieldsRR-overlap7}
\caption{Results from Model R, scanning over the masses of the new fermion and scalar which couple to the right-handed muon. Colours are the same as Fig.\ \ref{fig:Min2FieldsLLSlice}. \label{fig:Min2FieldsRR}}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width=0.32\textwidth]{Min2FieldsRR-profileoverlapcutoff}
\includegraphics[width=0.32\textwidth]{Min2FieldsRR-contourscanomgcutoff}
\includegraphics[width=0.32\textwidth]{Min2FieldsRR-profileoverlapexclude}
\caption{Several profiles of Model R. Colours for the first and third panels are the same as Fig.\ \ref{fig:Min2FieldsLLSlice}. The middle panel shows the lowest value of $\lambda_R$ which can \update{explain $\amu^{\text{2021}}$ from Eq.\ \update{(\ref{eqn:avgDiscrepancy})}} within $1\sigma$. The panels are constructed in the same way as those in Fig.\ \ref{fig:Min2FieldsLLProfile}. The same LHC limits as Fig.\ \ref{fig:Min2FieldsRR} (which do not depend on the coupling) are overlaid. \label{fig:Min2FieldsRRProfile}}
\end{figure}
Results for Model R are shown in
Figs.\ \ref{fig:Min2FieldsRR}, \ref{fig:Min2FieldsRRProfile}.
Since the contributions to $\amu$ for this model are also
governed by Eq.\ (\ref{eqn:Min2FieldsLLContributions}), the
behaviour of $\Delta a_\mu$ is the same. However as one can see in
Figs.\ \ref{fig:Min2FieldsRR}, \ref{fig:Min2FieldsRRProfile}
the collider constraints for Model R, which has no $SU(2)_L$
interactions, are weaker than those for Model L, with the
exclusions again coming from \texttt{SModelS}\@\xspace 1.2.3 through the
$8$-TeV searches
\cite{Aad:2014vma,CMS:hwa,CMS:2013dea,Khachatryan:2015lla} and
the $13$-TeV searches
\cite{Aaboud:2018jiw,Sirunyan:2018nwe,Aad:2019vnb,CMS:2016ybj,CMS:2017mkt}.
The searches providing most of the constraints on the
parameter space of Model R are the ATLAS searches
\cite{Aad:2014vma,Aad:2019vnb} and the CMS searches
\cite{CMS:2017mkt,Sirunyan:2018nwe}. As a result in the case
of Model R there is still some room to explain the $\amu$
measurement within currently applied collider constraints.
However as can be seen in the left panel of
Fig.\ \ref{fig:Min2FieldsRR}, \update{even with $\lambda_R =
2.0$, there is only a little room} to escape the combination
of the standard electroweakino searches shaded grey and the
soft lepton compressed spectra search shaded orange. In the
middle panel when $\lambda_R = 2.5$ one can see \update{a
significantly larger, but still narrow} region where
\update{$\amu^{\text{2021}}$} can be explained within $1\sigma$ while
evading the collider limits. In the right panel as
$\lambda_R$ approaches the perturbative limit, \update{both the
masses that can be reached and the area of the viable parameter space increase.}
In the left panel of Fig.\ \ref{fig:Min2FieldsRRProfile},
where $\lambda_R$ is allowed to vary, one can see the full
region of the $M_\phi$--$M_\psi$ plane where \update{$\amu^{\text{2021}}$ is
explained} within $1\sigma$, while avoiding an over abundant dark
matter relic density. With the
overlaid collider limits we can see the complete picture of
parameter space where it remains possible to explain this
observed $\amu$, while evading limits from colliders.
\update{Avoiding collider constraints from the LHC requires
compressed spectra, and comparing the green and lime green
regions, one can see that the new world average only marginally
increases the masses that can be
accommodated}. However as shown by the middle panel of
Fig.\ \ref{fig:Min2FieldsRRProfile}, \update{evading these
limits requires a coupling of $\lambda_R \gtrsim 1.8$}. One
can also see that the relic density cannot be explained
simultaneously with \update{$\amu^{\text{2021}}$} in any of the slices shown in
Fig.\ \ref{fig:Min2FieldsRR}, and the right panel of
Fig.\ \ref{fig:Min2FieldsRRProfile} does show that the regions
which can explain the dark matter relic density \update{and $\damu^{\text{2021}}$} to
within $1\sigma$ are \update{fully ruled out} by collider
constraints.
In summary, two-field models with different spin do not contain new
sources of chirality flipping enhancements, and many models of this
class are not able to generate significant positive contributions to
$\amu$, see Table \ref{tab:TwoFieldsDifferentSpin}. However the
Model L and Model R investigated here do give positive
contributions, and our results show they remain viable explanations
of \update{the new world average
deviation $\damu^{\text{2021}}$ after the FNAL measurement}, as can be seen from
Figs.\ \ref{fig:Min2FieldsLLSlice}-\ref{fig:Min2FieldsRRProfile},
with small parts of the parameter space escaping LHC
constraints. Model R is found to be
slightly more viable due to reduced impact of LHC constraints, in
particular those from compressed spectra searches.
In both models, however, it is \update{not possible} to simultaneously
explain $\damu^{\text{2021}}$
whilst producing the observed dark matter relic density
\cite{Aghanim:2018eyx} and satisfying LHC collider constraints. It
is only possible to explain \update{$\damu^{\text{2021}}$} within $1\sigma$ with an under
abundant dark matter candidate particle. Further, the lack of a
chirality flip in either of these models constrains the masses to be
\update{$M_\psi,M_\phi \le 210$ GeV} to both avoid collider constraints and explain
the value of $\damu^{\text{2021}}$, where compressed spectra searches are
important for the viability of these models.
Future data from \update{$\amu$ experiments and LHC} might entirely exclude these models.
\section{Introduction} \label{sec:Introduction}
Precision measurements of the anomalous magnetic moment of the muon,
$\amu$, provide excellent tests of physics beyond the Standard Model
(BSM), and the results can give hints at what form it might
take. Recently the E989 experiment \cite{Grange:2015fou} at the Fermi
National Accelerator Laboratory (FNAL) published the \update{most precise
measurement} of the anomalous magnetic moment of the muon\update{~\cite{PhysRevLett.126.141801}}. This result, and the previous result from
Brookhaven National Laboratory (BNL) \cite{Bennett:2006fi} (adjusted
according to the latest value of $\mu_\mu/\mu_p$ as in
Ref.\ \cite{aoyama2020anomalous}) and \update{the new world average \cite{PhysRevLett.126.141801}}
are
\begin{align}
\amu^{\textrm{FNAL}} &= \update{(116\,592\,040\pm54)\times10^{-11}} ,\label{Eq:FNALresult}\\
\amu^{\textrm{BNL}} &= (116\,592\,089\pm63)\times10^{-11}
,\\
\amu^{\text{2021}} &= \update{(116\,592\,061\pm41)\times10^{-11}}.
\label{Eq:amuNEW}
\end{align}
The FNAL measurement \update{is fully compatible with } the previous
best measurement and \update{has a smaller uncertainty}.
Compared to the BNL result, the new world average $\amu^{\text{2021}}$ has a
\update{slightly decreased} central value
and a \update{30\%} reduced statistics-dominated uncertainty.
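For reference, treating the two measurements as uncorrelated, the quoted world average is reproduced by a standard inverse-variance combination,
\begin{align}
\amu^{\text{2021}} &\simeq \frac{116\,592\,040/54^2 + 116\,592\,089/63^2}{1/54^2+1/63^2}\times 10^{-11} \approx 116\,592\,061\times 10^{-11}, \nonumber
\end{align}
with combined uncertainty $(1/54^2+1/63^2)^{-1/2}\approx 41$ in the same units.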
In parallel to the FNAL measurement, a worldwide theory initiative
provided the White Paper
\cite{aoyama2020anomalous} with the best estimate for the central
theory prediction in the Standard Model (SM). Its value and
uncertainty are
\begin{align}
\amu^{\textrm{SM}} &= (116\,591\,810\pm43)\times10^{-11} .
\label{amuSMWP}
\end{align}
This SM prediction is based
on up-to-date predictions of QED \cite{Aoyama:2012wk,Aoyama:2019ryr},
electroweak \cite{Czarnecki:2002nt,Gnendiger:2013pva}, hadronic vacuum
polarization
\cite{Davier:2017zfy,Keshavarzi:2018mgv,Colangelo:2018mtw,Hoferichter:2019gzf,Davier:2019can,Keshavarzi:2019abf,Kurz:2014wya}
and hadronic light-by-light contributions
\cite{Melnikov:2003xd,Masjuan:2017tvw,Colangelo:2017fiz,Hoferichter:2018kwz,Gerardin:2019vio,Bijnens:2019ghy,Colangelo:2019uex,Pauk:2014rta,Danilkin:2016hnh,Jegerlehner:2017gek,Knecht:2018sci,Eichmann:2019bqf,Roig:2019reh,Blum:2019ugy,Colangelo:2014qya}. For
further discussion of recent progress we refer to
Ref.\ \cite{aoyama2020anomalous}.\footnote{%
The White Paper also contains an extensive discussion of promising
progress of lattice QCD calculations for the hadronic vacuum
polarization. The lattice world average evaluated in
Ref.\ \cite{aoyama2020anomalous}, based on \cite{Chakraborty:2017tqp,Borsanyi:2017zdw,
Blum:2018mom,Giusti:2019xct,Shintani:2019wai,Davies:2019efs,Gerardin:2019rua,Aubin:2019usy,Giusti:2019hkz}, is compatible
with the data-based result
\cite{Davier:2017zfy,Keshavarzi:2018mgv,Colangelo:2018mtw,Hoferichter:2019gzf,Davier:2019can,Keshavarzi:2019abf,Kurz:2014wya}, has a higher
central value and larger uncertainty. More recent lattice results
are obtained in Refs.\ \cite{Borsanyi:2020mff,Lehner:2020crt}. Scrutiny of these results is
ongoing (see e.g.\ Ref.\ \cite{Colangelo:2020lcg}) and further progress can be expected.\label{footnoteLQCD}
}
The experimental measurements show the following deviations from the
updated theoretical SM prediction:
\begin{align} \label{eqn:MuonGm2Discrepancy}
\Delta \amu^{\textrm{FNAL}} &= \update{(23.0 \pm 6.9) \times 10^{-10}} ,\\
\Delta \amu^{\textrm{BNL}} &= (27.9 \pm 7.6) \times 10^{-10} ,\label{eqn:BNLDiscrepancy}\\
\damu^{\text{2021}} &= \update{(25.1 \pm 5.9) \times 10^{-10}} .\label{eqn:avgDiscrepancy}
\end{align}
In each case the uncertainties are combined
by summing them in quadrature. \update{In the last line $\damu^{\text{2021}}$ is the new, updated
deviation based on the experimental world average and the
SM White Paper result.}
The long standing
discrepancy between the BNL measurement and the SM theory prediction
\update{is confirmed and sharpened}. Its significance is increased
from \update{$3.7\sigma$}
to \update{$4.2 \sigma$} by the combination with FNAL data.
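For concreteness, the deviation, its uncertainty, and its significance quoted above follow directly from Eqs.\ (\ref{Eq:amuNEW}) and (\ref{amuSMWP}),
\begin{align}
\damu^{\text{2021}} &= (116\,592\,061 - 116\,591\,810)\times 10^{-11} = 25.1\times 10^{-10}, \nonumber\\
\sqrt{41^2+43^2}\times 10^{-11} &\approx 5.9\times 10^{-10}, \qquad 25.1/5.9 \approx 4.2\,. \nonumber
\end{align}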
This \update{improvement} has a significant impact on our understanding of BSM physics as it \update{strengthens} a major constraint on a variety of otherwise plausible SM extensions.
In this paper we provide a comprehensive overview of the impact the FNAL measurement has on BSM physics.
We examine
this impact in minimal 1-, 2- and 3-field extensions of the SM, and in the well-motivated Minimal Supersymmetric Standard Model
(MSSM). Within these theoretical frameworks we
highlight promising scenarios that can
explain the measurement. In our investigation we use state-of-the-art $\amu$ calculations.
For the simple SM extensions we use FlexibleSUSY \cite{Athron:2014yba,Athron:2017fvs}, which includes the universal leading logarithmic two-loop QED contributions in addition to the full one-loop calculation.
For the MSSM we use \texttt{GM2Calc}\@\xspace \cite{Athron:2015rva}, which
implements a dedicated high-precision MSSM calculation including two-loop and higher-order contributions based
on the on-shell scheme.
Reviews and general discussions of BSM contributions to $\amu$ have been given in
Refs.\ \cite{Melnikov:2006sr,Jegerlehner:2017gek,Jegerlehner:2009ry,Czarnecki2001,Stockinger:2006zn,Stockinger:1900zz,Lindner:2016bgg}.
Previously the deviation from BNL has been studied extensively in the
literature. There was intensive activity proposing BSM
explanations of the BNL result after its first discovery and in
following years
\cite{Arnowitt:2001be,Czarnecki:2001pv,Baltz:2001ts,Everett:2001tq,Feng:2001tr,Chattopadhyay:2001vx,Choudhury:2001ad,Komine:2001fz,Mahanta:2001gg,Das:2001it,Cheung:2001ip,Kephart:2001iu,Ma:2001mr,Hisano:2001qz,Xing:2001qn,Ibrahim:2001ym,Ellis:2001yu,Dedes:2001hh,Einhorn:2001mf,Choi:2001pz,Kang:2001sq,Kim:2001se,Martin:2001st,Rajpoot:2001tz,deS.Pires:2001da,Komine:2001hy,Cheung:2001hz,Baek:2001nz,Raidal:2001pf,Carvalho:2001ex,Baer:2001kn,Chacko:2001xd,Wu:2001vq,Baek:2001kh,Chen:2001kn,Arhrib:2001xx,Enqvist:2001qz,Cerdeno:2001aj,Kim:2001eg,Blazek:2001zm,Cho:2001nfa,Barshay:2001eq,Arnowitt:2001pm,
Belanger:2001am,deBoer:2001in,Roszkowski:2001sb,Daikoku:2001wm,Adhikari:2001sf,Wang:2001ig,deBoer:2001nu,Kersting:2001zz,Zhou:2001ew,Endo:2001ym,Cacciapaglia:2001pa,Ma:2001tb,
Cho:2001hx,
Park:2001uc,Kim:2001rc,Agashe:2001ra,Calmet:2001si,Appelquist:2001jz,
Das:2004ay,
Xiong:2001rt,Calmet:2001dc,Dai:2001vv,Yue:2001db,
Tabbakh:2006zy,Blanke:2007db,
Iltan:2001nk,Krawczyk:2001pe,Larios:2001ma,Krawczyk:2002df,
Chakraverty:2001yg,Mahanta:2001yc,
Huang:2001zx,Gninenko:2001hx,Lynch:2001zr,Baek:2001kca,Ma:2001md,Murakami:2001cs,
Pospelov:2008zw,Heo:2008dq,
Huang:2002dj,Kiritsis:2002aj,Das:2002en,Baek:2002cc,Chattopadhyay:2002jx,Byrne:2002cw,Kim:2002cy,Baek:2002wm,Martin:2002eu,Boyarkina:2003mr,Chavez:2004nr,Sawa:2005py,Cembranos:2005sr,Hambye:2006zn,Boyarkin:2008zz,Aguilar:2008qj,Hektor:2008xu,Domingo:2008bb,Biggio:2008in
,Adachi:2009fw, Cheung:2009fc, Hofer:2009xb,Fukuyama:2009xk,Ho:2010yp,Crivellin:2010ty,
Heinemeyer:2003dq,Heinemeyer:2004yq,Feng:2006ei,Marchetti:2008hw,vonWeitershausen:2010zr
}.
Many ideas came under pressure from results at the LHC, and scenarios
were proposed which could resolve tensions between $\amu$, LHC results
and other constraints
\cite{
Fargnoli:2013zda,Fargnoli:2013zia,
Cho:2011rk,Endo:2011gy,Endo:2011mc,Endo:2011xq,Ibe:2012qu,Kim:2012fc,Endo:2013bba,Ibe:2013oha,Akula:2013ioa,Zhang:2013hva,Endo:2013lva,Bhattacharyya:2013xma,Evans:2013uza,Iwamoto:2014ywa,Kersten:2014xaa,Gogoladze:2014cha,Badziak:2014kea,Kowalska:2015zja,Chakrabortty:2015ika,
Padley:2015uma,
Bach:2015doa,Harigaya:2015jba,Chowdhury:2015rja,Khalil:2015wua,Ajaib:2015yma,Harigaya:2015kfa,Gogoladze:2015jua,Baez:2015sqj,Belyaev:2016oxy,Li:2016ucz,Okada:2016wlm,Kobakhidze:2016mdx,Belanger:2017vpq,Fukuyama:2016mq
,Choudhury:2017acn,Hagiwara:2017lse,Endo:2020mqz,Chakraborti:2020vjp,Horigome:2021qo
,Yin:2016shg,Zhu:2016ncq,Hussain:2017fbp,Ning:2017dng,Frank:2017ohg,Bagnaschi:2017tru,Li:2017fbg,Pozzo:2018anw,Altin:2017sxx,Chakraborti:2017dpu,Yanagida:2017dao,Choudhury:2017fuu,Endo:2017zrj,Wang:2018vxp,Bhattacharyya:2018inr,Cox:2018qyi,Cox:2018vsv,Yang:2018guw,Tran:2018kxv,Wang:2018vrr, Abdughani:2019wai,Kotlarski:2019muo,Dong:2019iaf,Ibe:2019jbx,Yang:2020bmh,Yanagida:2020jzy,Han:2020exx,Cao:2019evo,Yamaguchi:2016oqz,Yanagida:2018eho,Shimizu:2015ara,Yin:2016pkz,Su:2020lr
,Hundi:2012uf,Megias:2017dzd,
Doff:2015nru,Hong:2016uou,
Broggio:2014mna,Han:2015yys,Wang:2014sda,Ilisie:2015tra,Abe:2015oca,Chun:2015hsa,Wang:2016rvz,Chun:2016hzs,Abe:2017jqo,Cherchiglia:2016eui,Cherchiglia:2017uwv,Wang:2018hnw,Han:2018znu,Chun:2019oix,Iguro:2019sly,Wang:2019ngf,Sabatta:2019nfg,Jana:2020pxx,Botella:2020xzf,Rose:2020nxm,Li:2020dbg,Ghosh:2021jeg,Mondal:2021vou,
Das:2016vkr,ColuccioLeskow:2016dox,Kowalska:2018ulj,Dorsner:2019itg,Crivellin:2020tsz,Gherardi:2020qhc,Babu:2020hun,Crivellin:2020mjs,
Heeck:2011wj,Altmannshofer:2014pba,Altmannshofer:2016oaq,Belanger:2015nma,Altmannshofer:2016brv,Huang:2021nkl,Gninenko:2014pea,Araki:2015mya,Biswas:2016yjr,Kamada:2018zxi,Gninenko:2018tlp,Patra:2016shz,Amaral:2020tga,Bodas:2021fsy,Daikoku:2020nhr,
Davoudiasl:2012qa,Davoudiasl:2012ig,Davoudiasl:2014kua,Lee:2014tba,Mohlabeng:2019vrz,
Kannike2012,Dermisek:2013gta,Raby:2017igl,Poh:2017tfo,Kawamura:2020qxo,Frank:2020smf,deJesus:2020upp,Endo:2020tkb,Huang:2020ris,Chun:2020uzw,Dermisek:2020cod,Dermisek:2021ajd,
Chen:2015vqy,Marciano:2016yhf,Bauer:2017nlg,Liu:2018xkx,Bauer:2019gfk,Cornella:2019uxs,Abdallah:2020biq,Escribano:2020wua,
Hundi:2011si,
Huh:2013hga,Okada:2013ija,Kelso:2013zfa,
Babu:2014lwa,Cogollo:2014tra,
Ajaib:2015ika,Binh:2015jfz,Hektor:2015zba,Cogollo:2015fpa,Allanach:2015gkd,Chakrabortty:2015zpm,
JinChun:2016onh,
Belanger:2016ywb, Nishida:2016lyk,
Hong:2017tel,Crivellin:2018qmi,Li:2018aov,
Liu:2018pdq,
Chen:2019nud,
Badziak:2019gaf,Datta:2019bzu,
Endo:2019bcj,CarcamoHernandez:2019ydc,
Liu:2020nsm,
Calibbi:2020emz,Chen:2020jvl,Chua:2020dya,
Saad:2020ihm,Acuna:2020ccz,Frank:2020kvp,Dutta:2020scq,
Chen:2020tfr,
Liu:2020ser,Dorsner:2020aaz,
Nagai:2020xbq,Arbelaez:2020rbq,Abdullahi:2020nyr,
Jana:2020joi,Chakrabarty:2020jro,Banerjee:2020zvi,Dinh:2020inx,Baryshevsky:2020aez,Ramsey-Musolf:2020ndm,
Chen:2021rnl,
Liu:2020sim,Aebischer:2021uvt,Cao:2021lmj,
Fajfer:2021cxa,Yin:2021yqy,Greljo:2021xmg,
Abdullah:2019ofw,Baker:2021yli,
Freitas:2014pua,Calibbi:2014yha,Queiroz:2014zfa,
Biggio:2016wyy,
Biggio:2014ela,Kowalska:2017iqv,Athron:2017drj,
Banerjee:2018eaf,Chiang:2018bnu,
Calibbi:2018rzv
}.
Many of
these constructions use supersymmetry in some way and will be
discussed in our Section \ref{sec:MSSM}, but this list also includes
solutions motivated by extra dimensions
\cite{Park:2001uc,Kim:2001rc,Agashe:2001ra,Calmet:2001si,Das:2004ay,Hundi:2012uf,Megias:2017dzd,Appelquist:2001jz}
and technicolor or compositeness
\cite{Xiong:2001rt,Calmet:2001dc,Dai:2001vv,Yue:2001db,Doff:2015nru,Tabbakh:2006zy,Hong:2016uou,Blanke:2007db},
or even introducing unparticle physics \cite{Hektor:2008xu}, as well
as just extending models with new states like the two-Higgs doublet
models
\cite{Iltan:2001nk,Krawczyk:2001pe,Krawczyk:2002df,Larios:2001ma,Broggio:2014mna,Wang:2014sda,Han:2015yys,Abe:2015oca,Ilisie:2015tra,Chun:2015hsa,Wang:2016rvz,Chun:2016hzs,Abe:2017jqo,Cherchiglia:2016eui,Cherchiglia:2017uwv,Wang:2018hnw,Han:2018znu,Iguro:2019sly,Wang:2019ngf,Chun:2019oix,Sabatta:2019nfg,Jana:2020pxx,Botella:2020xzf,Rose:2020nxm,Li:2020dbg,Ghosh:2021jeg,Mondal:2021vou}
or adding leptoquarks
\cite{Chakraverty:2001yg,Mahanta:2001yc,Das:2016vkr,ColuccioLeskow:2016dox,Kowalska:2018ulj,Dorsner:2019itg,Crivellin:2020tsz,Gherardi:2020qhc,Babu:2020hun,Crivellin:2020mjs},
new gauge bosons (including sub-GeV gauge bosons, dark photons and generalizations)
\cite{Huang:2001zx,Gninenko:2001hx,Lynch:2001zr,Murakami:2001cs,Pospelov:2008zw,Heo:2008dq,Heeck:2011wj,Davoudiasl:2012qa,Davoudiasl:2012ig,Davoudiasl:2014kua,Lee:2014tba,Gninenko:2014pea,Araki:2015mya,Belanger:2015nma,Altmannshofer:2016brv,Baek:2001kca,Ma:2001md,Altmannshofer:2016oaq,Patra:2016shz,Biswas:2016yjr,Kamada:2018zxi,Gninenko:2018tlp,Altmannshofer:2014pba,Mohlabeng:2019vrz,Amaral:2020tga,Bodas:2021fsy,Daikoku:2020nhr,Huang:2021nkl},
Higgs triplets \cite{Fukuyama:2009xk,Ho:2010yp} and vector-like
leptons
\cite{Davoudiasl:2012ig,Kannike2012,Dermisek:2013gta,Raby:2017igl,Poh:2017tfo,Kawamura:2020qxo,Frank:2020smf,deJesus:2020upp,Endo:2020tkb,Huang:2020ris,Chun:2020uzw,Dermisek:2020cod,Dermisek:2021ajd},
or very light,
neutral and weakly interacting scalar particles \cite{Chen:2015vqy,Marciano:2016yhf,Bauer:2017nlg,Liu:2018xkx,Bauer:2019gfk,Cornella:2019uxs,Abdallah:2020biq,Escribano:2020wua}.
Some works have taken
a systematic approach, classifying states according to
representations and investigating a large set of them
\cite{Freitas:2014pua,Calibbi:2014yha,Queiroz:2014zfa,Biggio:2014ela,Biggio:2016wyy,Kowalska:2017iqv,Athron:2017drj,
Banerjee:2018eaf,Chiang:2018bnu,Calibbi:2018rzv,Biggio:2008in}
\footnote{\update{Finally on the same day as the release of the FNAL result a very large number of papers were already released interpreting it \cite{das2021fimpwimp, buenabad2021challenges,wang2021gutscale,abdughani2021common,chen2021muon,ge2021probing,cadeddu2021muon,brdar2021semisecretly,cao2021imporved,chakraborti2021new,ibe2021muon,cox2021muon, babu2021muon,han2021muon, heinemeyer2021new,calibbi2021implications,amaral2021distinguishing,bai2021muon,baum2021tiny, yin2021muon,anselmi2021fake, nomura2021explanations, vanbeekveld2021dark,wang2021revisiting,gu2021heavy,zhu2021probing,criado2021confronting, arcadi2021muon,han2021leptonspecific,iwamoto2021winohiggsino,endo2021supersymmetric,crivellin2021consequences,hernandez2021fermion,Chakraborti:2021bmv}. This demonstrates what a landmark result this is and the intense interest it is generating within the particle physics community.}}.
The deviation found already by the BNL measurement also gave
rise to the question whether \update{it} could be due to
hypothetical, additional contributions to the hadronic vacuum
polarization.\footnote{%
This question is further motivated by lattice QCD results on the
hadronic vacuum polarization, see footnote \ref{footnoteLQCD}.
}
If such additional effects existed, they could
indeed shift the SM
prediction for $\amu$ towards the \update{experimental value}, but would at the
same time worsen the fit for electroweak precision observables,
disfavouring such an explanation of the deviation \cite{Passera:2008jk,Crivellin:2020zul,Keshavarzi:2020bfy,deRafael:2020uif,Malaescu:2020zuc}.
In spite of the vast number of works and the many varied
ideas, for most models the same general principles apply. Typically
the deviation requires new states with masses below the TeV scale or
not much above $1$ TeV to
explain the \update{experimental value} with perturbative couplings.
The models which allow large $\amu$ with particularly large masses
involve very large couplings and/or
introduce enhancements through new sources of muon chirality
flips (as we will describe in the next
section).
Therefore the absence of BSM signals at the LHC has led to tensions
with large $\amu$ in many models: either very large couplings and
heavy masses are needed or the stringent LHC limits have to be evaded
in other ways.
Not only LHC, but also dark matter searches can lead to tensions in many
models. The Planck experiment
\cite{Aghanim:2018eyx,Tanabashi2018} observed the dark matter abundance
of the universe to be:
\begin{equation} \label{eqn:DMRD}
\Omega_{h^2}=0.1200\pm0.001.
\end{equation}
Since BSM contributions to $\amu$ are often mediated by new weakly
interacting neutral particles, many interesting models also contain
dark matter candidate particles. Any dark matter candidate particle
with a relic density of more than $0.12$ is over
abundant and therefore strongly excluded.
Further, the negative results of
direct dark matter searches can lead to strong additional constraints on the
model parameter spaces.
In the present work we aim to provide a comprehensive picture
of the impact the new FNAL measurement has on BSM physics.
The models we investigate in detail represent a wide range of
possibilities. They cover models with new strongly or weakly
interacting particles, with extra Higgs or SUSY particles,
with or without a dark matter candidate, with or
without new chirality flips and with strong or weak constraints from the
LHC.
In all cases we provide a detailed description of the mechanisms
for the contributions to $\amu$; we then carry out detailed
investigations of the model parameter spaces, including
applicable constraints from the LHC and dark
matter using state-of-the-art tools for evaluating constraints and LHC
recasting.
This allows us to answer which models and which model scenarios can
accommodate \update{the new FNAL measurement and the deviation
$\damu^{\text{2021}}$} while satisfying all other constraints.
The rest of this paper is organized as follows. In Sec.\ \ref{sec:BSMoverview}
we explain how the anomalous magnetic moment appears in quantum field
theories and emphasise the most important aspects which both make it
an excellent probe of BSM physics and make the observed anomaly very
difficult to explain simultaneously with current collider limits on
new physics. In Secs.\ \ref{sec:SingleField}, \ref{sec:TwoFields}, and
\ref{sec:ThreeFields} we present results for minimal 1-, 2- and
3-field extensions of the SM respectively that show the impact the new
FNAL result has on these models. To provide a more global picture for
1- and 2-field extensions, in Sec.\ \ref{sec:one_field_overview} and
Sec.\ \ref{sec:two_field_overview} we also classify models of this
type systematically by quantum numbers and use known results to
summarise their status with respect to \update{explaining the BNL}
result, showing that this measurement severely restricts the set of
possible models. This allows us to select models \update{with the best
prospects} for detailed investigation, presenting results for the
two-Higgs doublet model (Sec.\ \ref{Sec:THDM}),
leptoquark models (Sec.\ \ref{Sec:Leptoquarks}) and two field extensions
with scalar singlet dark matter (Sec.\ \ref{sec:Z2symmetric}). For three field models we perform a
detailed examination of models with mixed scalar singlet and doublet
dark matter (Sec.\ \ref{sec:2S1F}) and fermion singlet and doublet
dark matter (Sec.\ \ref{sec:2F1S}). In Sec.\ \ref{sec:MSSM} we
discuss the impact of \update{the sharpened deviation} on the MSSM, which is
widely considered one of the best motivated extensions of the SM. This
section also contains a brief self-contained discussion of $\amu$ and the possible enhancement mechanisms in
the MSSM and explains in
detail our treatment of dark matter data and
LHC recasting. All constraints are then applied
on the general MSSM, allowing all kinds of neutralino LSPs.
Finally we present our conclusions in Section \ref{sec:Conclusions}.
\input{SectionOverview}
\input{SectionSingleFields}
\input{SectionTwoFields}
\input{three_fields}
\input{three_fields_2F1S}
\input{SectionSUSY}
\section{Conclusions} \label{sec:Conclusions}
Fifteen years after the BNL $\amu$ measurement showed a tantalizing
deviation from the SM theory value and following
tremendous theoretical work on improving and stabilizing the SM
prediction, the Fermilab E989 experiment has published its first
measurement of $\amu$. The result is \update{a strong
confirmation of the BNL result and the existence of a deviation from
the SM. After including the FNAL result, the new world average results in
$\damu^{\text{2021}}=25.1\times10^{-10}$, a $4.2\sigma$ deviation. It strengthens
the indications for the existence of BSM physics in the lepton
sector, possibly related to the muon mass generation mechanism.}
Which BSM scenarios can accommodate the new Fermilab $\amu$
measurement, and what are the required parameter values? The present
paper provides a detailed survey of possible explanations to answer
this question. We focused on renormalizable models which were already
promising in view of the previous BNL result. We asked particularly in
what parameter space the models can accommodate the $\amu$ results,
taking into account up-to-date constraints from LHC and dark matter
searches, as well as other relevant constraints. Our survey covered
simple extensions of the SM by one, two, or three new fields (required
to be full SM gauge multiplets), including e.g.\ leptoquark and
Two-Higgs doublet models, as well as a general version of the
MSSM. Secs.\ \ref{sec:SingleField}, \ref{sec:TwoFields} also contain
detailed overviews of the status of models, and
summaries of our phenomenological results can
be found at the end of each section.
A useful piece of background information is that
the observed deviation $\damu^{\text{2021}}$ is \update{larger than } the electroweak contributions $\amu ^{\text{EW}}=15.36(0.10)\times10^{-10}$. BSM contributions to $\amu$ are typically suppressed as
$1/M_{\text{BSM}}^2$. Hence BSM models explaining
the deviation must have nontrivial properties, and many models are excluded as
explanations.
The common nontrivial feature of most viable explanations is an enhancement of the
muon left--right chirality flip via new couplings and interactions.
As explained around Eq.~(\ref{eqn:GeneralGM2Contribution}) the
chirality flip enhancement is strongly related to the muon mass
generation mechanism and causes related loop contributions to the
muon mass.
As a side note, one obtains
a quite model-independent order-of-magnitude relationship: models in
which the muon mass correction does not exceed $100\%$ can
explain $ \damu^{\text{2021}}$ only for
$M_{\text{BSM}}^{\text{FNAL}}\lesssim\update{2.1\text{\ TeV}}$ according to Eq.\ (\ref{FNALuppermassbound}).
But even in such models with chirality flip
enhancements an \update{explanation of $\damu^{\text{2021}}$} requires specific,
often ``non-traditional''
regions of parameter space.
Examples of excluded models include the models highlighted in red
in Tables
\ref{tab:one-field-summary} and \ref{tab:TwoFieldsDifferentSpin}, many
versions of leptoquark models, and the
familiar type I and type II versions of the 2HDM.
In the context of supersymmetry, familiar scenarios such as the
Constrained MSSM cannot explain the deviation.
Certain leptoquark and vector-like lepton models are examples which
generate very large chiral enhancements, but they need non-flavour universal
couplings, e.g.\ direct leptoquark couplings of the muon to the top quark in the left-
and right-handed sectors.
An important outcome of the present study is that once the $\amu$ result
is combined with current data from LHC and dark matter
experiments, we obtain strong constraints on the detailed ways in which BSM models
can be realized. The only viable models of the kind discussed in Sec.\ \ref{sec:TwoFields}, which do not have chirality enhancements, are
particularly strongly constrained. \update{In these models, LHC data
and $\damu^{\text{2021}}$ can be accommodated simultaneously in a small slice of
parameter space, however it is impossible to also account for the full dark matter relic
density, see Figs.\ \ref{fig:Min2FieldsLLSlice}--\ref{fig:Min2FieldsRRProfile}.} In contrast the simple 3-field models 2F1S and 2S1F
of Sec.\ \ref{sec:ThreeFields} are least constrained and can
accommodate $\amu$ and dark matter in a wide parameter region. The required values of the new coupling constants, however, are large and
\update{it remains to be seen how such scenarios can arise in more complete theories.}
The 2HDM can accommodate \update{the observed} $\damu^{\text{2021}}$
while preserving minimal flavour violation, but only in the
lepton-specific type X version or the generalized flavour-aligned
2HDM. Even in these scenarios only a tiny parameter space remains,
where the new Yukawa couplings are close to
their upper experimental limits and the new Higgs masses in a very
narrow range below $100$ GeV, see Fig.\ \ref{fig:THDMupdate}. Leptoquark masses are strongly
constrained by LHC to be significantly \update{above $1$ TeV}, pushing
the explanation of $\damu^{\text{2021}}$ close to the region violating the fine
tuning criterion on the muon mass, see Figs.\ \ref{fig:ScalarLeptoquarkSinglet}--\ref{fig:ScalarLeptoquarkProfiles}.
For SUSY scenarios, the general tension between LHC mass limits and
explanations of $\damu^{\text{2021}}$ is illustrated in
Fig.\ \ref{fig:briefsurveyplot}.
Still, the MSSM with Bino-like LSP and either
stau/slepton-coannihilation or chargino coannihilation can fully
explain $\damu^{\text{2021}}$ and the dark matter relic density, see
Fig.\ \ref{fig:Binosleptonscompressed}. Apart from the
constraints implied by the coannihilation mechanisms, \update{the parameter
space is wide open in the $M_1$--$\mu$ plane.} LHC and dark matter constraints however imply mass patterns, e.g.\ lower limits on the Higgsino
mass and two windows for the Wino mass. The case where both charginos are lighter than sleptons is very
strongly constrained and may be excluded by future data from $\amu$,
LHC and dark matter experiments (Fig.\ \ref{fig:chalightersleptonsa}). Further, in the largest part of viable parameter
space the simple GUT constraint on gaugino masses $M_1/M_2\approx1/2$
is strongly violated. The scenarios with a Higgsino-like LSP are strongly
constrained and can
accommodate the current $\amu$ \update{only in a limited range of masses}. Scenarios with Wino-like LSP emerge
as particularly interesting (Figs.\ \ref{fig:WHsleptonscompressed},
\ref{fig:chalightersleptonsb}). They can accommodate $\amu$ in a vast
parameter space without significant constraints; however in these
scenarios the dark matter relic density can only be partially
explained without additional contributions to the relic density.
These results and discussions highlight the importance of $\amu$ not
only as a potential proof of BSM physics but also as a crucial constraint on
models.
Particularly in combination with current LHC and dark matter data, it
points to specific parameter regions of models and gives crucial clues
on how BSM physics can (or cannot) be realized.
Since many models involve muon chirality flip enhancements and/or
flavour non-universality, further experiments testing lepton flavour
violation, electric dipole moments, or lepton universality are
promising to uncover further properties of BSM physics. Since the muon
chirality flip enhancements are related to the mass generation
mechanism for the muon,
the measurement of the Higgs--muon coupling at the LHC or future lepton
colliders can also provide \update{a test
of various explanations of $\damu^{\text{2021}}$.} Further \update{ultimate} tests
may be performed at a multi-TeV muon collider
\cite{Capdevilla:2020qel,Buttazzo:2020eyl,Yin:2020afe,Capdevilla:2021rwo,Cheung:2021iev,Han:2021udl}.
\update{The new Fermilab $\amu$ measurement provides the best possible starting point for
future $\amu$ determinations.}
Exciting further progress can be expected from the Run-2--4 results of
the FNAL $g-2$ experiment, the planned JPARC $g-2$ experiment
\cite{Mibe:2010zz,Abe:2019thb}, and from further progress on SM theory including the MUonE
initiative to provide alternative experimental input to the
determination of the hadronic contributions to $\amu$
\cite{Abbiendi:2016xup,Banerjee:2020tdt}.
\section*{Acknowledgements}
We thank the ColliderBit group from GAMBIT (and in particular Anders
Kvellestad) for helpful discussions about available analyses and the
statistical interpretation of their likelihoods and we also thank
Felix Kahlhoefer for helpful comments on the statistical
interpretation of both ColliderBit and DDCalc likelihoods. We also
thank Adriano Cherchiglia for the consent to use internal results and data
of Ref.\ \cite{Cherchiglia:2017uwv}. H.S.-K.\ and D.S.\ thank Hyesung Lee
for the helpful discussion and comments on dark photon models. P.A.\ and D.J.\ thank Innes Bigaran for helpful discussions regarding leptoquark constraints applied in Ref.\ \cite{Bigaran:2020jil}. D.J. thanks Susanne Westhoff for helpful clarifications regarding details about Ref.\ \cite{Freitas:2014pua}, and Ursula Laa for her helpful support with \texttt{SModelS}\@\xspace and \texttt{micrOMEGAs}\@\xspace. The work of P.A.\ is supported by the Australian Research Council Future Fellowship grant FT160100274. P.A. also acknowledges the hospitality of Nanjing Normal University while working on this manuscript. The work of P.A. and C.B. is also supported with the Australian Research Council Discovery Project grant DP180102209. The work of C.B. was supported by the Australian Research Council through the ARC Centre of Excellence for Particle Physics at the Tera-scale CE110001104.
The research placement of D.J. under which this work was done was
supported by the Australian Government Research Training Program
(RTP) Scholarship and the Deutscher Akademischer Austauschdienst (DAAD) One-Year Research Grant. The work
of H.S.\ and D.S.\ is supported by DFG grant STO 876/7-1.
\section{Three-Field Extensions } \label{sec:ThreeFields}
As we saw in Sec.\ \ref{sec:TwoFields}, introducing two fields with different spins is the simplest way to explain dark matter and give new contributions to muon $g-2$, but collider constraints heavily restrict the region \update{where such models can explain the observed muon $g-2$}. This is because in those cases it is not possible to induce a chirality flip in the loop of the muon $g-2$ diagram, as both the new fermion and the new scalar couple only to either the left- or right-handed muon.
We therefore now consider the simplest models that can both have an
enhanced contribution to muon $g-2$ from a chirality flip and explain
dark matter. These are models with two fields of the same spin that
mix together and a third field with a different spin and a $\mathbb{Z}_2$
symmetry under which these new fields are odd, while the SM fields are
even. The leading contributions in supersymmetric
models are in fact of this kind, as will be discussed in
Sec.\ \ref{sec:MSSM}, but here we also consider three-field models
that do not arise as part of the MSSM. As in previous sections we will restrict ourselves to
renormalizable models with new scalars and new fermions. Models
of this nature have previously been classified in
Ref.\ \cite{Calibbi:2018rzv}, and there it is demonstrated that due to
the chirality flip contribution there are many models which can
explain the $\amu$ anomaly in Eq.\ (\ref{eqn:BNLDiscrepancy}), with
models that have a new field with a much higher $SU(2)_L$
representation (up to $20$) being able to produce sizable
contributions to $\amu$.
Likewise, the larger chirality-flip contributions to $\amu$ enable three-field models to simultaneously produce a dark matter candidate with the observed dark matter relic abundance \cite{Planck2015}, even with much higher $SU(2)_L$ representations compared to the two-field models.
This chirality flip enhancement also enables one to examine much higher
mass scales than in two-field models, evading LHC collider constraints.
In this section we present a detailed discussion of the phenomenology
of two kinds of three-field models, which we denote as 2S1F and 2F1S,
with two scalars and one fermion or vice versa. The quantum numbers
will be chosen such that the models represent a variety of possible
dark matter candidates. In model 2S1F, the dark matter is a scalar
singlet or doublet (or mixture) and may be compared to generic scalar
singlet or inert doublet models or to left- or right-handed
sneutrinos. In model 2F1S, the dark matter is a singlet or doublet
fermion (or a mixture) and may be compared to Bino or neutral
Higgsino. Model 2S1F has some similarities to the BLR
contribution in the MSSM, but the spin of the dark matter candidate is
different.
Model 2F1S contains the same states as the BHL contribution in the
MSSM. In both cases, the couplings are treated as arbitrary, not
respecting SUSY-like relations, which leads to a different
phenomenology, and we compare and contrast these two models with their
respective MSSM scenarios. Both of these models were considered in
Ref.\ \cite{Calibbi:2018rzv}, showing particular slices of the
parameter space. Here we update these models with the latest
$\amu^{\text{2021}}$ result and direct detection constraints, and
perform a thorough sampling of the whole parameter space to show how
the data leads to meaningful constraints on the parameters of these
models.
\subsection{Three-field model with two scalars, one charged
fermion and scalar dark matter}
\label{sec:2S1F}
We begin with the Model 2S1F which is the SM extended with three ${\mathbb{Z}}_2$-odd fields: one electrically charged fermion and two scalar fields,
\begin{equation}
\psi _s,\quad \phi_s ^0,\quad \phi_d= \begin{pmatrix} \frac{\phi^0_d + i A_\phi}{\sqrt{2}} \\ \phi_d^- \end{pmatrix}\,,
\end{equation}
where $\psi _s$ is an $SU(2)$ singlet charged Dirac fermion expressed in Weyl spinors with representation $({\bf{1}},{\bf{1}},1)_{1/2}$, $\phi _s ^0$ a scalar singlet with representation $({\bf{1}},{\bf{1}},0)_{0}$, and $\phi _d$ a scalar doublet with representation $({\bf{1}},{\bf{2}},-\frac{1}{2})_{0}$.
The relevant interactions are:
\begin{equation} \label{eqn:Min3FieldsSLRF1}
{\cal L}_{\text{2S1F}} = \begin{pmatrix}a_H H\cdot\phi_d \phi_s^0 + \lambda_L \phi_d \cdot L \psi_s + \lambda_R \phi_s^0 \mu \psi_s^c - M_{\psi} \psi_s^c \psi_s + h.c.\end{pmatrix} - M_{\phi_d}^2 |\phi_d|^2 - \frac{M_{\phi_s}^2}{2} |\phi_s^0|^2.
\end{equation}
Each of the new scalars couples to either the left- or the
right-handed muon, along with the new fermion. Then, through the
trilinear scalar coupling between the two scalars and the
Higgs boson, the VEV of the Higgs boson gives rise to mass
mixing terms between the two scalars and induces a chirality
flip in the loop of muon $g-2$ processes. The relevant
parameter combination for invoking this chirality flip
enhancement is $a_H\lambda_L\lambda_R$.
For Model 2S1F in Eq.\ (\ref{eqn:Min3FieldsSLRF1}), the
neutral scalar singlet $\phi_s^0$ and the CP-even part of the
neutral component of $\phi_d$ mix together into two neutral
mass eigenstates $\phi^0_i$ for $i=1,2$:
\begin{align} \label{eqn:Min3FieldsSLRF1Mixing}
{\cal L}_{\text{2S1F}} \ni -\frac{1}{2}\begin{pmatrix} \phi_s^0 & \phi_d^0 \end{pmatrix} \begin{pmatrix} M_{\phi_s}^2 & a_H v \\ a_H v & M_{\phi_d}^2 \end{pmatrix} \begin{pmatrix} \phi_s^0 \\ \phi_d^0 \end{pmatrix}
&= -\frac{1}{2}\sum_{i=1,2} M^2_{\phi_i^0} |\phi_i^0|^2, \nonumber
\end{align}
where the mass eigenstates relate to the gauge eigenstates through the
mixing matrix $U_S$ as
\begin{align}
\begin{pmatrix}\phi_1^0\\\phi_2^0
\end{pmatrix}
&= U_S^T\begin{pmatrix}\phi_s^0\\\phi_d^0
\end{pmatrix}\,.
\end{align}
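As an illustration of this diagonalisation, the following minimal Python sketch (using \texttt{numpy}, with hypothetical benchmark values $M_{\phi_s}=M_{\phi_d}=1$ TeV and $a_H=v=246$ GeV, not taken from our scans) constructs the mass matrix, checks the tachyon-free condition, and returns the mass eigenvalues and the mixing matrix $U_S$:
\begin{verbatim}
import numpy as np

# Hypothetical benchmark values (for illustration only)
v, a_H = 246.0, 246.0              # GeV
M_phi_s, M_phi_d = 1000.0, 1000.0  # GeV

# CP-even neutral scalar mass matrix in the (phi_s^0, phi_d^0) basis
M2 = np.array([[M_phi_s**2, a_H * v],
               [a_H * v,    M_phi_d**2]])

# Orthogonal U_S with U_S^T M2 U_S diagonal; eigenvalues ascending,
# so the first entry is the lightest eigenstate phi_1^0
m2, U_S = np.linalg.eigh(M2)

print("masses [GeV]:", np.sqrt(m2))                       # ~969 and ~1030 GeV
print("tachyon free:", a_H * v < M_phi_s * M_phi_d, (m2 > 0).all())
# Mixing product entering the chirality-flip term, cf. the limit
# U_S11*U_S21 -> a_H*v/(M_phi1^2 - M_phi2^2) used below
print(U_S[0, 0] * U_S[1, 0], a_H * v / (m2[0] - m2[1]))
\end{verbatim}
For this benchmark one obtains mass eigenstates of roughly $970$ and $1030$ GeV and maximal mixing, $U_{S\,11}U_{S\,21}=-1/2$.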
The dark matter candidate is then the lightest neutral scalar
$\phi^0_1$. For this mixing to avoid producing a tachyon, the limit
$a_H v < M_{\phi_s} M_{\phi_d}$ must be respected. As in the previous
section on two-field extensions, we do not consider additional
renormalisable terms involving the new scalars that could appear in
the Higgs potential, but would not affect the $\amu$ prediction.
These terms can change the dark matter phenomenology though, adding
additional mechanisms to deplete the relic density, and new
interactions that can mediate direct detection. However we do not
expect these to substantially modify our conclusions.
In this model the one-loop BSM contribution to the anomalous magnetic moment is
given by:
\begin{align} \label{eqn:amu_2S1F_model}
\Delta a_\mu^{\text{2S1F}} = & \frac{m_\mu^2}{32 \pi^2} \bigg(
\frac{\lambda_L^2}{12 M_{\phi_d}^2} \,E\!\begin{pmatrix} \frac{M_\psi^2}{M_{\phi_d}^2} \end{pmatrix}
+ \sum_{i=1,2} \frac{\lambda_L^2|U_{S\,2i}|^2 + 2\lambda_R^2|U_{S\,1i}|^2}{12 M_{\phi_i^0}^2} \,E\!\begin{pmatrix} \frac{M_\psi^2}{M_{\phi_i^0}^2} \end{pmatrix}\nonumber \\
&+ \frac{M_\psi}{m_\mu} \sum_{i=1,2} \frac{2\sqrt{2} \lambda_L \lambda_R U_{S\,1i} U_{S\,2i}}{3 M_{\phi_i^0}^2} \,F\!\begin{pmatrix} \frac{M^2_\psi}{ M^2_{\phi_i^0}} \end{pmatrix} \bigg).
\end{align}
These contributions come in the form of the FFS diagram
\ref{fig:FFSDiagram} of Fig.\ \ref{fig:GeneralDiagrams}, with the
generic fermion lines $F$ replaced by $\psi^-$, the conjugate of
$\psi_s$, and the scalar line $S$ replaced by $\phi^0_i$ and $A_\phi$,
respectively.
The second line of Eq.\ (\ref{eqn:amu_2S1F_model}) provides the
contribution with the chirality flip enhancement, and is therefore
typically the dominant contribution. While a cursory glance at
Eq.\ (\ref{eqn:amu_2S1F_model}) might lead one to believe that this
model's chirality flip provides an apparent enhancement according to
the ratio $M_\psi/m_\mu$, the actual behaviour is more complicated.
The actual form of this enhancement can be seen by using the
simplification of the mixing angles
$U_{S\,11}U_{S\,21}=-U_{S\,12}U_{S\,22} \to a_H
v/(M_{\phi_1^0}^2-M_{\phi_2^0}^2)$. Plugging this into
Eq.\ (\ref{eqn:amu_2S1F_model}) generates a difference between loop
functions which, together with the mass difference in the denominator,
can be written in terms of the loop function $\tilde{F}_a$ of the
appendix Eq.\ (\ref{FaDefinition}). In this way, for $M_\psi \gg
m_\mu$, the contribution from this model simplifies to the chirality
flip contribution as:
\begin{align}
\Delta a_\mu^{\text{2S1F}} \approx& \frac{m_\mu^2}{32 \pi^2}\frac{ M_\psi}{m_\mu} \sum_{i=1,2} \frac{2\sqrt{2} \lambda_L \lambda_R U_{S\,1i} U_{S\,2i}}{3 M_{\phi_i^0}^2} \,F\!\begin{pmatrix} \frac{M^2_\psi}{M^2_{\phi_i^0}} \end{pmatrix} \\
\approx&
\frac{m_\mu^2}{32 \pi^2}\frac{ M_\psi}{m_\mu} \frac{2\sqrt{2} \lambda_L \lambda_R a_H v}{3( M_{\phi_1^0}^2-M_{\phi_2^0}^2)} \left(
\frac{1}{M^2_{\phi_1^0}} \,F\!\begin{pmatrix} \frac{M^2_\psi}{M^2_{\phi_1^0}} \end{pmatrix}
-\frac{1}{M^2_{\phi_2^0}}\,F\!\begin{pmatrix} \frac{M^2_\psi}{M^2_{\phi_2^0}} \end{pmatrix}\right)\\
\approx&
-\frac{m_\mu^2}{32 \pi^2}\frac{ M_\psi}{m_\mu} \frac{2\sqrt{2} \lambda_L \lambda_R a_H v}{M_{\psi}^4} \,\tilde{F}_a\!\begin{pmatrix} \frac{M^2_\psi}{M^2_{\phi_1^0}}, \frac{M^2_\psi}{M^2_{\phi_2^0}} \end{pmatrix}.
\end{align}
Thus the final result for this chirality-enhanced contribution can be written in
the form
\begin{align}\label{eqn:2S1FLOContribution}
\Delta a_\mu^{\text{2S1F}} \approx& -\frac{m_\mu^2}{32 \pi^2 M_{\psi}^2}
\frac{2\sqrt{2} \lambda_L \lambda_R a_H v}{m_\mu M_{\psi}} \,\tilde{F}_a\!\begin{pmatrix} \frac{M^2_\psi}{M^2_{\phi_1^0}}, \frac{M^2_\psi}{M^2_{\phi_2^0}} \end{pmatrix},
\end{align}
where e.g.\ $\tilde{F}_a(1,1)=1/12$.
We see that the actual enhancement factor, beyond the typical loop and mass
suppression factor, is given by the ratio $\lambda_L \lambda_R a_H v/(m_\mu
M_{\psi})$. As announced earlier it relies on the factor $\lambda_L \lambda_R
a_H$ and requires all three of these couplings to be non-zero.
We have presented this discussion of how the parametric behaviour of the chirality flip can be extracted from the exact result Eq.\ (\ref{eqn:amu_2S1F_model}) in quite some detail, and we remark that similar discussions can also be applied to other models, e.g.\ to the model 2F1S discussed later, or to MSSM contributions discussed in Sec.\ \ref{sec:MSSM}.\footnote{
The enhancement may be compared to the enhancement factor for the MSSM BLR
contribution shown in Eq.\ (\ref{eq:SUSYMIapprox}) in Sec.\
\ref{sec:MSSM}, where $a_Hv$ corresponds to $m_\mu\mu\tan\beta$. In
the corresponding MSSM contribution, the couplings analogous to
$\lambda_{L,R}$ are gauge couplings, while these couplings are
independent here.}
If either $\lambda_L$ or $\lambda_R$ is zero then clearly the above contribution vanishes, while it will also vanish if there is no mixing between the scalars, i.e.\ $a_H = 0$, so that $U_{S\,1i}U_{S\,2i}$ vanishes. If $\lambda_L = a_H = 0$ or $\lambda_R = a_H = 0$ then Eq.\ (\ref{eqn:amu_2S1F_model}) reduces to the same result as for two fields with different spin, i.e.\ Eq.\ (\ref{eqn:Min2FieldsLLContributions}), while if $a_H$ is non-zero one gets a similar result, but with some dilution from the mixing. Similarly, if both $\lambda_L$ and $\lambda_R$ are non-zero but $a_H=0$, then we simply get two copies of Eq.\ (\ref{eqn:Min2FieldsLLContributions}), with different couplings, which are summed.
Finally, note that the mixing will also be heavily suppressed simply by having a large enough mass splitting between the two scalar states, i.e.\ by making the ratio $ (v a_H)^2 / (M_{\phi_s}^2 M_{\phi_d}^2)$ sufficiently small.
In the following we provide a detailed
phenomenological analysis of the Model 2S1F. The underlying question
is whether the model can accommodate the new $\amu^{\text{2021}}$
value simultaneously with
current constraints from dark matter and LHC.
The model is complicated and depends on three masses and three couplings in a relevant way. We focus on the
parameter space where $\lambda_L$, $\lambda_R$ and $a_H$ are all
non-zero, such that the model behaves differently from the models of
the previous section. Eq.\ (\ref{eqn:2S1FLOContribution}) then provides the dominant contribution and can be used to understand our results. As in previous sections we do however use \texttt{FlexibleSUSY}\@\xspace 2.5.0 \cite{Athron:2014yba,Athron:2017fvs} to obtain the numerical results, where we augment the full one-loop BSM calculation with the universal leading logarithmic photonic $2$-loop contributions \cite{Degrassi1998}.
Before entering into the details, we briefly explain the relevant dark matter
processes. Without the $a_H$, $\lambda_L$ and $\lambda_R$ interactions
that are important for $\amu$, dark matter could behave either as pure
scalar singlet or as pure inert scalar doublet dark matter. In the
inert doublet case, the standard SU(2)$_L$ electroweak gauge
interactions alone are sufficient to deplete the relic density to (below)
the observed value when it has a mass of (less than) about $600$ GeV
\cite{LopezHonorez:2006gr}. The preferred mass of scalar singlet dark
matter depends on the Higgs portal coupling, but outside of fine tuned
parameter regions and for couplings $\lambda_{L,R} < 1$, the best fit for the
data is found when the pure scalar singlet dark matter has a mass of
$\approx 1$--$3$ TeV\footnote{We do not consider the Higgs portal coupling needed for this in our analysis, but the well-studied
phenomenology of scalar singlet dark matter may be useful in showing
the kind of scenarios that we neglect here.}
\cite{Athron:2018ipf}. Beyond these simple special cases, the model
here allows many additional depletion mechanisms: significant
$\lambda_{L,R}$ allow t-channel exchange of the new fermion, through
$\lambda_{L}$ or $\lambda_R$ if the scalar dark matter is doublet or
singlet dominated (or both if there is mixing). If the new fermion
mass is sufficiently close, coannihilations through $\lambda_L$ and
$\lambda_R$ can also be active in depleting the relic density. The
coupling $a_H$ between singlet, doublet, and the SM Higgs can lead to
singlet--doublet coannihilation processes. This same coupling $a_H$ is
also relevant for direct detection constraints on dark matter, which
depend particularly on Higgs-mediated processes. Importantly, the
mentioned additional depletion processes can compensate general
suppressions of cross sections for heavier masses. As a result even
heavier dark matter masses than for the inert doublet can become
possible.
As we will see from detailed scans, the dark matter constraints
indeed only allow rather heavy masses, which are above LHC limits.
For this reason we do not explicitly consider LHC
constraints in the following.
\begin{figure}[t]
\centering
\includegraphics[width=0.31\textwidth]{Min3FieldsSLRFR-equalmasses1}
\hspace{0.01\linewidth}
\includegraphics[width=0.31\textwidth]{Min3FieldsSLRFR-equalmasses2}
\hspace{0.01\linewidth}
\includegraphics[width=0.31\textwidth]{Min3FieldsSLRFR-equalmasses3}
\caption{Results from Model 2S1F, scanning over the couplings of the new scalars and fermion to the left- and right-handed muon. All BSM fields have identical masses as specified in the subfigures. The trilinear coupling which parametrises the mixing of the neutral BSM scalars is fixed to $a_H = 246$ GeV. Regions which can explain $\Delta a_\mu^\textrm{BNL}$ and $\damu^{\text{2021}}$ to within $1\sigma$ are coloured yellow and green respectively, while the overlap between these two regions is coloured lime. The black line marks couplings which produce a contribution which exactly matches Eq.\ \update{(\ref{eqn:avgDiscrepancy})}. The points where a dark matter candidate particle produces the observed relic abundance of $0.1200$, Eq.\ (\ref{eqn:DMRD}), are at the boundary of the hatched region shown in red, with the hatched region below and to the right being excluded due to having a relic abundance greater than $0.12$. Regions excluded by searches for the direct detection of dark matter at the Xenon1T experiment\cite{Aprile:2017iyp,Aprile:2018dbl} are shaded in orange. \label{fig:Min3FieldsSLRFRCouplingsEqualMasses} }
\end{figure}
In Fig.\ \ref{fig:Min3FieldsSLRFRCouplingsEqualMasses} we begin our
numerical analysis of this model. The figure represents a baseline
behaviour. In order to maximize contributions to $\amu$ via a large
chirality flip and mixing (see Eq.\ (\ref{eqn:2S1FLOContribution}) and
following discussion) we choose equal masses $M_{\phi_s}=M_{\phi_d}$
and fix $a_H=v=246$ GeV as a reference value, similar to scenarios
examined by Ref.\ \cite{Calibbi:2018rzv}. We also choose $M_\psi$ to
be equal to the scalar masses. This is a rather special choice for
dark matter, as all mechanisms for depleting the relic density can be
active here, including scalar--fermion coannihilation. As a result,
the relic density will be quite suppressed and the Planck-observed
value can only be explained for small couplings. On the other hand,
the dark matter direct detection cross section will not be suppressed, due to the
large mixing between the singlet and doublet scalars, which allows interactions
with the nucleons through the moderately sized $a_H$ coupling.
The behaviour of $\Delta a_\mu$ in the three panels of
Fig.\ \ref{fig:Min3FieldsSLRFRCouplingsEqualMasses} clearly
reflects the $\lambda_L\lambda_R a_H/M^2$ dependence of the
chirality-flipping contributions shown in
Eq.\ (\ref{eqn:2S1FLOContribution}). In each panel the
coupling dependence is approximately hyperbolic.
Specifically, in the left panel for TeV-scale masses it is
\update{possible} to \update{explain $\damu^{\text{2021}}$} with
couplings \update{$|\lambda_L\lambda_R| \approx 0.8^2$}. As the
masses are increased to $1.5$ TeV and $2$ TeV in the middle
and right panels respectively, the contribution to $\amu$ is
suppressed. Therefore, at higher masses, explaining
$\Delta a_\mu^{\textrm{BNL}}$ \update{and $\damu^{\text{2021}}$} to within
$1\sigma$ requires either larger couplings, as shown in
the plot, or alternatively a larger value of $a_H$ (which
is fixed to $246$ GeV in this plot).
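As a rough cross-check of the left panel, one can evaluate the approximate chirality-flip contribution of Eq.\ (\ref{eqn:2S1FLOContribution}) at that benchmark ($M_\psi\approx M_{\phi_i^0}\approx 1$ TeV, $a_H=v=246$ GeV, $|\lambda_L\lambda_R|\approx 0.8^2$, $\tilde{F}_a(1,1)=1/12$). The minimal sketch below gives $|\Delta a_\mu|\sim 3\times 10^{-9}$, i.e.\ the correct order of magnitude; the figures themselves use the full one-loop result of Eq.\ (\ref{eqn:amu_2S1F_model}) plus the photonic two-loop logarithms, and the overall sign depends on the signs of $\lambda_L\lambda_R$ and $a_H$.
\begin{verbatim}
import math

m_mu, v, a_H = 0.1057, 246.0, 246.0   # GeV (illustrative inputs)
M = 1000.0                            # GeV, common BSM mass scale
lLlR = 0.8**2                         # |lambda_L * lambda_R|
F_a = 1.0 / 12.0                      # tilde{F}_a(1,1)

damu = (m_mu**2 / (32 * math.pi**2 * M**2)) \
       * (2 * math.sqrt(2) * lLlR * a_H * v / (m_mu * M)) * F_a
print(f"|Delta a_mu| ~ {damu:.1e}")   # ~ 3e-9, comparable to 25.1e-10
\end{verbatim}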
Each panel in Fig.\ \ref{fig:Min3FieldsSLRFRCouplingsEqualMasses} also shows a red
hatched region, which is excluded by over abundance of the
dark matter relic density. At the boundary the observed value
\cite{Aghanim:2018eyx} can be explained. In the non-hatched
region the predicted relic density is under abundant. Further,
the orange shaded regions are excluded by direct
detection searches using \ddcalc 2.2.0 \cite{Workgroup:2017lvb,Athron:2018hpc}.\footnote{%
In the calculation we have correctly rescaled the direct detection spin-independent
(SI) cross-sections according to the abundance of dark matter
as
\begin{align} \label{eqn:DDScaling}
\sigma_{\textrm{SI,eff}} &= \sigma_{\textrm{SI}} \times \frac{\Omega_{h^2}}{\Omega_{h^2,\textrm{Planck}}},
\end{align}
where
$\Omega_{h^2,\textrm{Planck}} = 0.1200$ is the dark matter relic abundance as observed
by the Planck experiment, see Eq.\ (\ref{eqn:DMRD}), and $\Omega_{h^2}$ is the
relic abundance produced by the dark matter candidate particle.
}
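A minimal sketch of the rescaling in Eq.\ (\ref{eqn:DDScaling}), with a purely illustrative cross-section value:
\begin{verbatim}
def sigma_SI_eff(sigma_SI, omega_h2, omega_h2_planck=0.1200):
    """Rescale the spin-independent cross section by the predicted relic abundance."""
    return sigma_SI * omega_h2 / omega_h2_planck

# e.g. an under abundant candidate with 10% of the observed relic density
print(sigma_SI_eff(sigma_SI=1.0e-46, omega_h2=0.012))  # -> 1.0e-47 (illustrative)
\end{verbatim}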
As mentioned above, the equal mass scenario is constructed to maximize
the impact of relic density depletion mechanisms. The dark matter
candidate has significant singlet and doublet components, and the
depletion mechanisms are equally sensitive to both $\lambda_L$ and
$\lambda_R$, see the Lagrangian Eq.\ (\ref{eqn:Min3FieldsSLRF1}). If
either coupling is sufficiently large the relic
density becomes under abundant. As a result the red lines are
approximately parabolic in the $\lambda_L$--$\lambda_R$ plane, and for
${\cal O}(1)$ values of the couplings the relic density can be
explained. As the overall mass
scale is increased, the required couplings become slightly
larger.
The dark matter direct detection limits depend on spin-independent
cross sections between dark matter and nucleons in the detector. These
cross sections are mediated particularly by the Higgs
boson, and the relevant coupling of dark matter to the Higgs boson
is particularly sizeable in the present equal mass case
with strong singlet--doublet mixing and a significant value of
$a_H$. It does not
depend on $\lambda_L$ and $\lambda_R$, but via the rescaling
according to relic density,
the resulting direct detection limits do vary across the $\lambda_L$--$\lambda_R$ plane.
These limits are generally strong, because of the unsuppressed coupling to
the Higgs boson.
They become however significantly weaker for increasing masses
because of the suppression of the cross sections, despite the slight
increase of the relic density.
The combination of all these constraints means that for our baseline
case, with equal masses, shown in the three panels of
Fig.\ \ref{fig:Min3FieldsSLRFRCouplingsEqualMasses} it is
\update{impossible to explain \update{$\damu^{\text{2021}}$} simultaneously} with the dark
matter relic density. \update{For masses of $1$ TeV, $\damu^{\text{2021}}$ could
be explained simultaneously with the dark matter relic density,
however direct detection constraints exclude a large part of the
$\Delta a_\mu$ band and the entire region where the relic density is
explained.} For masses of $1.5$ TeV, the direct detection limits
\update{constrain only a small part of the region which can explain
$\damu^{\text{2021}}$; however, they
still rule out an explanation of the relic density.}
For masses of $2$ TeV \update{the full relic density can be
explained, however not simultaneously with $\damu^{\text{2021}}$.} One may ask
whether just changing the value of the trilinear coupling $a_H$
between the Higgs boson and the two BSM scalar bosons could allow the
simultaneous explanation of large $\Delta a_\mu$ and dark matter. As
Eq.\ (\ref{eqn:2S1FLOContribution}) shows, increasing the value of
$a_H$ increases the size of the BSM contributions to $\amu$, while the
dark matter relic density is essentially independent of $a_H$ in these
scenarios. Therefore the $\amu$-bands in
Fig.\ \ref{fig:Min3FieldsSLRFRCouplingsEqualMasses} would move down
and to the right, to smaller couplings, while the red hatched regions
would remain largely unchanged from adjusting $a_H$. However the
direct detection constraints become stronger if $a_H$ is increased,
because it increases Higgs-nucleon interactions via the SM Higgs. As
a result the strengthened direct detection limits would rule out
points with the observed dark matter relic density. Decreasing $a_H$ has the
opposite effect: the band of parameter space where
$\Delta a_\mu$ is explained moves up and to the left, further
away from the observed relic density, and weakening the direct
detection constraint will not help. So in summary it is \update{not
possible} in these equal mass scenarios to explain both the measured
value $\Delta a_\mu$ and the observed value of the relic density of dark
matter at the same time, while avoiding constraints from direct
detection. In the following we therefore consider further parameter
slices to obtain additional insight into the phenomenology of the
model.
\begin{figure}[t]
\centering
\includegraphics[width=0.31\textwidth]{Min3FieldsSLRFR-splitmasses1}
\hspace{0.01\linewidth}
\includegraphics[width=0.31\textwidth]{Min3FieldsSLRFR-splitmasses2}
\hspace{0.01\linewidth}
\includegraphics[width=0.31\textwidth]{Min3FieldsSLRFR-splitmasses3}
\caption{Results from Model 2S1F, scanning over the couplings of the new scalars and fermion to the left- and right-handed muon. The trilinear coupling which parametrises the mixing of the neutral BSM scalars is fixed to $a_H = 246$ GeV. Colours are the same as in Fig.\ \ref{fig:Min3FieldsSLRFRCouplingsEqualMasses}. In these scenarios the mass of either the scalar singlet $\phi_s$, scalar doublet $\phi_d$, or fermion $\psi_s$ is displaced by $200$ GeV from the other BSM masses at $1500$ GeV. \label{fig:Min3FieldsSLRFRCouplingsSplitMasses} }
\end{figure}
The situation can be changed substantially if we split the masses of
the new states. The relic density of each point will in general be
larger, since mass splittings will suppress several relic density
depletion mechanisms such as coannihilation channels. Fig.\
\ref{fig:Min3FieldsSLRFRCouplingsSplitMasses} investigates this for
three example cases. In each of the three panels the mass of the fermion singlet
is $200$ GeV away from the dark matter scalar, which in turn
is either singlet- or doublet-dominated or a mixture.
All panels of Fig.\
\ref{fig:Min3FieldsSLRFRCouplingsSplitMasses} can be compared to the
middle panel of Fig.\
\ref{fig:Min3FieldsSLRFRCouplingsEqualMasses}, where all masses were
equal to $1500$ GeV. The contribution to $\amu$ is similar in all
cases, with slight changes of the required couplings depending on the
increased/decreased masses.
The dark matter phenomenology changes
more dramatically.
Both the relic density and the direct detection constraints differ
strongly between the panels of Fig.\
\ref{fig:Min3FieldsSLRFRCouplingsSplitMasses} and compared to the
middle panel of Fig.\ \ref{fig:Min3FieldsSLRFRCouplingsEqualMasses}.
Specifically the direct detection constraint ruled out a full
explanation of the relic density in the equal mass case of
the middle panel of Fig.\ \ref{fig:Min3FieldsSLRFRCouplingsEqualMasses}. The same still happens
in the third panel
of Fig.\ \ref{fig:Min3FieldsSLRFRCouplingsSplitMasses}, where the scalar
doublet and singlet masses are equal. The reason in all these cases is
the doublet--singlet mixing and the unsuppressed coupling of dark
matter to the Higgs boson via the
doublet--singlet--Higgs coupling $a_H$. In contrast, as soon as the
scalar doublet and singlet masses are different (left and middle
panels of Fig.\ \ref{fig:Min3FieldsSLRFRCouplingsSplitMasses}), the
direct detection constraints are weakened, and as a result a full
explanation of the observed relic density is viable.
In detail the left panel of
Fig.\ \ref{fig:Min3FieldsSLRFRCouplingsSplitMasses} illustrates the
case where the dark matter candidate is dominantly a scalar singlet
with only about a $1\%$ admixture of doublet. The singlet mass is set
$200$ GeV lighter than the scalar doublet and the fermion masses. The
relic density is driven up compared to the middle panel of
Fig.\ \ref{fig:Min3FieldsSLRFRCouplingsEqualMasses} since several dark
matter depletion mechanisms are suppressed. Generally, the scalar
singlet participates in only a few interactions and mostly annihilates
through t-channel exchange of the BSM fermion\footnote{Other processes also play a significant role, including e.g.\ t-channel exchange of the scalar, which occurs through the small doublet admixture.}. Although the singlet couples to
BSM fermions only through $\lambda_R$, in the cross-section for t-channel
exchange of the muon, contributions that depend only on $\lambda_L$
(or only on $\lambda_R$) are suppressed by either $m_\mu^2 / M_F^2$ or
by the relative velocity in comparison to the contribution
proportional to $\lambda_L^2 \lambda_R^2$. This leads to the shape of the
red line that can be seen in the plot, where it follows a similar
trajectory to the $\amu$ contours, which is approximately a hyperbola.
\update{Strikingly in the left panel of
Fig.\ \ref{fig:Min3FieldsSLRFRCouplingsSplitMasses} we find that the
red curve, where the observed relic density can be explained, and
the black curve, where $\damu^{\text{2021}}$ can be explained, are very close to
each other and follow the same trajectory. However, while a
1$\sigma$ explanation of $\damu^{\text{BNL}}$ could have been accommodated,
the slightly higher couplings required to explain the relic density
measurement mean that the over abundance of dark matter now rules
out most of the parameter space where $\damu^{\text{2021}}$
(Eq.\ (\ref{eqn:avgDiscrepancy})) can be explained, except for a
small slice of parameter space with $\lambda_R > 2.5$. Nonetheless
this could be changed with small adjustments to the mass splitting
and in general it is possible to explain $\damu^{\text{2021}}$ and relic
density at the same time. Explanations of $\damu^{\text{2021}}$ are possible for
masses above $1$ TeV, as long as the product $\lambda_L\lambda_R
a_H$ is sufficiently large.}
In the middle panel of
Fig.\ \ref{fig:Min3FieldsSLRFRCouplingsSplitMasses} we instead
focus on doublet-dominated dark matter, by setting the scalar
doublet mass $200$ GeV lighter than both the scalar singlet
and fermion (and again compared to the middle panel of
Fig.\ \ref{fig:Min3FieldsSLRFRCouplingsEqualMasses}). The
impact on $\Delta a_\mu$ is \update{very similar} to what happened in
the left plot as expected from
Eq.\ (\ref{eqn:2S1FLOContribution}). However, the change in
the behaviour of dark matter is very different. Now the dark
matter candidate has generally many more interaction channels
via the SU(2)$_L$ gauge interactions of the doublet
component. Hence the relative importance of the
$\lambda_{L,R}$ parameters is reduced compared to the
singlet-dominated case. Although not visible in the plot, the
overall variation of the relic density is much reduced
compared to the left panel or the middle panel of
Fig.\ \ref{fig:Min3FieldsSLRFRCouplingsEqualMasses}. The
residual dependence on $\lambda_L$ is still important to
deplete the relic density sufficiently, but the residual
dependence on $\lambda_R$ via the singlet-admixture is of
minor importance. In this case the constraints from the relic
density and from $\Delta a_\mu$ are essentially orthogonal in the
parameter space and thus complementary. Both constraints can
be fulfilled, but agreement can only be achieved for very
large values of $|\lambda_L|$ (\update{approximately between $2.4$
and $2.5$ when $\damu^{\text{2021}}$ is explained to within $1\sigma$}).
In the right panel of Fig.\ \ref{fig:Min3FieldsSLRFRCouplingsSplitMasses} we raise
the fermion mass by $200$ GeV, compared to the middle panel of Fig.\
\ref{fig:Min3FieldsSLRFRCouplingsEqualMasses}. Here we can see that this slightly
increases the size of the couplings required to explain the measured
$\Delta a_\mu$ values. As mentioned before, however, in this case with equal
scalar singlet and doublet masses the relic density must be under abundant
because of direct detection constraints.
Raising the fermion mass suppresses the coannihilation mechanisms involving the
fermion, which are strongly dependent on the couplings $\lambda_L$ and
$\lambda_R$, but t-channel fermion exchange can still be active as
suggested by the fact that the relic density curve is again a
hyperbola similar to the left panel. However, since the singlet and doublet masses are equal in this case, the usual $SU(2)$ scalar doublet (co)annihilations have a greater impact, reducing the required size of $\lambda_L\lambda_R$. Due to the rescaling of the direct detection cross-sections by the predicted relic abundance, the direct detection
constraints reflect this behaviour.
With \update{the chosen value of $a_H=246$ GeV}
\update{$\damu^{\text{2021}}$ can be explained} with an under abundant relic
density.
Just as for Fig.\ \ref{fig:Min3FieldsSLRFRCouplingsEqualMasses}, we can change $a_H$
to see if we can simultaneously explain $\amu$ and dark matter. Currently in the
\update{middle panel of Fig.\ \ref{fig:Min3FieldsSLRFRCouplingsSplitMasses}, as well as
the left panel with very large $\lambda_R$}, we can already explain $\amu^{\text{2021}}$ and
dark matter simultaneously. As we saw previously increasing $a_H$ increases the
constraints from direct detection. \update{For the left panel, as nearly all}
of the parameter space that can explain $\damu^{\text{2021}}$ to within $1\sigma$
is ruled out by over abundant dark matter, we can decrease $a_H$ to push the
parameter space up into the under abundant regions. However, by increasing $a_H$
for the middle panel, the observed dark matter relic density line would become more
dependent on $\lambda_R$, as the two scalars mix more. For the right panel, again
increasing $a_H$ would cause the regions ruled out by direct detection to become
more prevalent, while pulling the space which can explain $\damu^{\text{2021}}$ \update{closer to
it}. However, while decreasing $a_H$ would shrink the regions constrained by direct
detection, \update{the red line does not shift close enough to the band which can
explain $\damu^{\text{2021}}$}, which itself would move away up and to the left as $a_H$ is
decreased. So in the scenario with $M_\psi$ increased slightly from
$M_{\phi_s}=M_{\phi_d}=1.5$ TeV, \update{it is never possible to explain $\damu^{\text{2021}}$
and the dark matter relic density simultaneously, even if we adjust $a_H$.}
\begin{figure}[t]
\centering
\includegraphics[width=0.32\textwidth]{Min3FieldsSLRFR-multinestnewall}
\includegraphics[width=0.32\textwidth]{Min3FieldsSLRFR-multinestnewallsingletDM}
\includegraphics[width=0.32\textwidth]{Min3FieldsSLRFR-multinestnewallaH}
\caption{Profile over the $\lambda_L-\lambda_R$ plane or the
$\lambda_L-a_H$ plane for Model 2S1F, scanning over the
masses of the new scalars and fermion between $0$ and
$5000$ GeV, as well as the mixing $a_H \le 5000$ GeV and
coupling $\lambda_R \in [0,1.5]$. Colours are the same as
in Fig.\ \ref{fig:Min3FieldsSLRFRCouplingsEqualMasses}. The
scan targeted the observed value of the dark matter relic
density $0.1200$, Eq.\ (\ref{eqn:DMRD}), the value of $\Delta a_\mu$ in Eq.\ (\ref{eqn:BNLDiscrepancy}), and the direct detection log likelihood provided by \ddcalc \cite{Workgroup:2017lvb,Athron:2018hpc}, where points which are excluded by direct detection or are more than $3\sigma$ away from the Planck observation are thrown away. \label{fig:Min3FieldsSLRFRprofile}
}
\end{figure}
In all the two-dimensional slices of parameter space we presented,
\update{explaining $\damu^{\text{2021}}$} and dark matter is only
possible with sizable interactions between the BSM states and muons,
$\lambda_L$ and $\lambda_R$. Indeed the relic density is over abundant
when both $\lambda_L$ and $\lambda_R$ are small, because both the
t-channel exchange of the BSM fermion and coannihilations with the BSM
fermion are driven by these $\lambda_L$ and $\lambda_R$ couplings. At
lighter masses both these processes and (co)annihilation processes
involving $SU(2)$ interactions of the scalar doublet will become more
efficient as the mass suppression is reduced, with the $SU(2)$
interactions becoming more significant relative to $\lambda_L$ and
$\lambda_R$. These effects imply that at much lighter masses it
should be possible to also deplete the relic density with much smaller
$\lambda_L$ and $\lambda_R$. At the same time, explaining
$\Delta a_\mu^{\text{BNL}}$ \update{or $\damu^{\text{2021}}$} with smaller
couplings is also possible when all the masses are small. However
because it is possible for the observed relic density to be obtained
through $SU(2)$ interactions of a $600$ GeV scalar doublet, it is not
clear that with small couplings large $\Delta a_\mu$ and
dark matter can be explained simultaneously as the $\amu$ explanation may
then require larger values of $\lambda_L$ and $\lambda_R$ (or smaller
masses) than what is required for fitting the relic density.
Whether dark matter and the $\amu$
\update{anomalies} can be simultaneously explained for all
$\lambda_L$, $\lambda_R$ or if there is a lower bound on these
couplings just from these requirements remains an open question.
Therefore we now turn to a complete exploration of the parameter space
of this model, varying $M_{\phi_s}$, $M_{\phi_d}$, $M_\psi$, $a_H$,
$\lambda_L$, and $\lambda_R$ to see the impact of the new
$\damu^{\text{2021}}$ measurements and address the question posed
above. We vary $M_i \in [0,5000]$, $a_H \in [0, 5000]$,
$\lambda_{L,R} \in [0, 1.5]$ in a number of \texttt{MultiNest}\@\xspace
\cite{Feroz:2007kg,Feroz:2008xx,Feroz:2013hea,Buchner2014} scans with a likelihood constructed to target
$\Omega_{h^2}=0.1200$, different possible\footnote{Due to the time
required for such large dimensional scans, these were performed
before the release of the FNAL Muon g-2 results \update{\cite{PhysRevLett.126.141801}} and without
knowledge of the result. Therefore we performed scans for a
range of results that we considered plausible given the previous
$\amu^{\textrm{BNL}}$ measurement.} values of $\amu$ and solutions
which evade direct detection limits. In total we collected about \update{15
million} samples targeting several different values in the range $\Delta a_\mu \in [0,42]\times10^{-10}$.
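To illustrate how such a targeted scan can be steered, the sketch below shows a composite log-likelihood of the kind described above. It is a minimal sketch only: the observable calculators (\texttt{relic\_density}, \texttt{delta\_amu}, \texttt{dd\_loglike}) stand in for the external spectrum, relic-density and DDCalc interfaces, and the Gaussian widths and the particular $\Delta a_\mu$ target are illustrative assumptions rather than the values used in our scans.
\begin{verbatim}
import numpy as np

# Illustrative targets and widths (assumptions for this sketch).
OMEGA_OBS, OMEGA_SIG = 0.1200, 0.0012     # Planck relic density, assumed width
DAMU_TARGET, DAMU_SIG = 28e-10, 8e-10     # one example Delta a_mu target, assumed width

def log_likelihood(params, relic_density, delta_amu, dd_loglike):
    """Composite log-likelihood; vetoed points return -inf."""
    omega = relic_density(params)
    # Throw away points more than 3 sigma from the Planck observation.
    if abs(omega - OMEGA_OBS) > 3.0 * OMEGA_SIG:
        return -np.inf
    logl = -0.5 * ((omega - OMEGA_OBS) / OMEGA_SIG) ** 2
    logl += -0.5 * ((delta_amu(params) - DAMU_TARGET) / DAMU_SIG) ** 2
    # Direct-detection log-likelihood (e.g. from DDCalc); excluded points
    # are removed when post-processing the samples.
    logl += dd_loglike(params)
    return logl
\end{verbatim}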
In the left panel of Fig.\ \ref{fig:Min3FieldsSLRFRprofile} we present
results in the $\lambda_L$--$\lambda_R$ plane. This shows that it is
only possible to explain the $\amu^{\text{BNL}}$ measurement and dark
matter simultaneously when \update{$|\lambda_L \lambda_R| \ge 0.22$}.
With the new \update{$\amu^{\text{FNAL}}$ measurement} this limit
\update{changes very little}. Couplings smaller than this cannot
simultaneously explain \update{$\amu^{\text{2021}}$} and dark matter. For the case
of scalar doublet dominated dark matter the $SU(2)$ interactions will
deplete the relic density to below the observed value whenever
$m_{\phi_d} \lesssim 600$ GeV and this limit on the lightest mass
makes explaining large $\Delta a_\mu$ impossible for such small
couplings. For pure scalar singlet dark matter, note that if the
scalar doublet is so heavy it completely decouples, then the model
effectively reduces to that of Model R from the two field section.
There we already found that a simultaneous explanation of dark matter
and the $\amu$ measurements is only possible for very large
$\lambda_R$ (higher than the values shown in
Fig.\ \ref{fig:Min3FieldsSLRFRprofile}) and not possible at all when
collider limits are also taken into account.
If instead the dark matter has a significant admixture of both the
scalar singlet and the scalar doublet, then the situation is a bit
more complicated. With both singlet and doublet mixing, the relic
density can deplete through processes involving both $\lambda_L$ and
$\lambda_R$ couplings and $a_H$ in addition to the $SU(2)$
interactions of the doublet component. One can compensate increased
splitting in the masses by raising $a_H$ to keep $\Delta a_\mu$ fixed to the
measured value, or similarly one can reduce the mass splitting to
compensate for reducing $a_H$, but in either case the relic density is
still depleted too efficiently since both raising $a_H$ and reducing
the mass splitting can enhance annihilation cross-sections.
Increasing $a_H$ and reducing the mass splitting also increases direct
detection cross-sections, increasing the tension with large $\Delta a_\mu$
further. Therefore when there is significant mixing between the scalar
singlet and doublet, requiring that the measured relic density is
obtained implies masses that are too large (for any given $a_H$ value)
for large $\Delta a_\mu$ to be explained with small couplings. We also looked
at scenarios where the dark matter is more singlet or more doublet in
nature separately. We found quite similar limits on
$\lambda_L\lambda_R$ emerge in both cases, though the more singlet DM
case had a \update{marginally higher limit}, as can be seen in the
\update{middle} panel of Fig.\ \ref{fig:Min3FieldsSLRFRprofile}.
\update{The tension with $a_H$ is also shown in the plot on the right
panel of Fig.\ \ref{fig:Min3FieldsSLRFRprofile}. There one can see
that there is an upper limit on $a_H$, coming from the dark matter
constraints discussed above. The value of $a_H$ can also be
restricted by the need to avoid tachyonic scalars, which would appear
when $a_H v > M_{\phi_s} M_{\phi_d}$. However we find it is the
limits from dark matter that lead to the constraint shown
in the right panel of
Fig.\ \ref{fig:Min3FieldsSLRFRprofile}.}
As a result we find a lower limit on $|\lambda_L\lambda_R|$ that
increases with the value for $\Delta a_\mu$ used as a constraint. In this
way we directly see the impact of the new $\damu^{\text{2021}}$ result on this
model. We do not expect collider limits to affect our results here,
since we do not find any solutions explaining both dark matter and
$\Delta a_\mu^{\text{BNL}}$ \update{or $\damu^{\text{2021}}$} where the lightest of the
two scalar masses $M_{\phi^0_1}$ is below around $500$ GeV for our
choice of coupling range ($\lambda_{L,R} \le 1.5$). Our scan results
also indicate interesting structure in the mass planes. We reserve
the detailed discussion and presentation of such mass plane results
for a future dedicated global fit of the model. Nonetheless it is
clear from our results that the combination of parameters and masses
that are allowed are significantly impacted by the interplay between
dark matter and $\amu$ constraints. The $\amu^{\textrm{BNL}}$ and
$\amu^{\textrm{FNAL}}$ measurements are therefore very important for
the phenomenology of models like this, and should be included in all
such studies and in any future global fits of such models.
\subsection{Three-field model with two fermions, one charged scalar
and fermionic dark matter}
\label{sec:2F1S}
Having looked at scalar dark matter in the previous section we now
consider a model with a fermionic dark matter candidate. For this we
choose a minimal three-field model with two new fermions and one new
scalar field, which we call Model 2F1S. Specifically this model extends the SM by adding three $\mathbb{Z}_2$-odd fields
\begin{align}
\phi_d = \begin{pmatrix} \phi^+_d \\ \phi_d^0 \end{pmatrix}, ~ & \psi_d = \begin{pmatrix} \psi^0_d \\ \psi_d^- \end{pmatrix}, ~ \psi _s ^0\,,
\end{align}
where $\phi_d$ is a scalar doublet with representation
$({\bf{1}},{\bf{2}},\frac{1}{2})_0$, $\psi_d$ a Dirac fermion doublet with
representation $({\bf{1}},{\bf{2}},-\frac{1}{2})_{1/2}$ expressed in
Weyl spinors $\psi_d$ and its Dirac partner
$\psi_d^c=(\psi_d^{0c\dagger},\psi_d^{-c\dagger})$,
and $\psi_s^0$ a neutral singlet Weyl fermion with representation
$({\bf{1}},{\bf{1}},0)_{1/2}$.
In principle the model allows scenarios with
scalar dark matter; however, we do not consider such scenarios here. We
use the model as an illustration of fermionic dark matter, in which
case the dark matter candidate is predicted to be a mixture of the
fermion singlet and the neutral doublet
component. Ref.\ \cite{Calibbi:2018rzv} also studied the model in
certain parameter slices. Here we intend to determine the full status
of the model, studying the detailed parameter dependence and
performing general scans of the parameter space.
The relevant interactions are:
\begin{align} \label{eqn:Min3FieldsFLRS1}
{\cal L}_{\text{2F1S}} = \bigg(& \lambda_{1} H \cdot
\psi_d \psi_s^0 - \lambda_{2} \psi_d^c \psi_s^0 H + \lambda_L L_2\cdot \phi_d \psi_s^0 + \lambda_R \psi_{d} \mu \phi_d^\dagger \nonumber \\
&- M_{\psi_d} \psi_d^c \psi_d - \frac{M_{\psi_s}}{2} \psi_s^0 \psi_s^0 + h.c.\bigg) - M_{\phi}^2 |\phi_d|^2,
\end{align}
where the sign in front of the $\lambda_2$ term reflects our
definition of $\psi_d^c$ as an $SU(2)$ anti-doublet.
The model has four new coupling parameters. $\lambda_{L,R}$ are the
couplings to the left- and right-handed muon, similarly to the case of
the previous Model 2S1F. The couplings $\lambda_{1,2}$ govern the
mixing of the BSM fermions via the Higgs VEV, similarly to the $a_H$
parameter in the previous case. The products
$\lambda_i\lambda_L\lambda_R$ are responsible for the new source of
muon chirality flips (for $i=1,2$). To reiterate, this model can also
be compared to the Bino-Higgsino-left smuon (BHL) system for the MSSM,
see Eq.\ (\ref{eq:SUSYMIapprox}) below. There $\psi_d$ corresponds to
the Higgsino, $\psi_s$ to the Bino, and $\phi_d$ to the left smuon
doublet, and the couplings $\lambda_{1},\lambda_{2}$ and $\lambda_L$
would all correspond to the Bino coupling $g_1$, while $\lambda_R$
would correspond to the muon Yukawa coupling. A difference is that
the MSSM counterparts of the couplings $\lambda_{1,2}$ would couple
the Higgsinos to the up-type and down-type Higgs doublets, and the
resulting contributions to the mass mixings would involve the
$\tan\beta$ parameter, while here there is just one single Higgs
doublet and the mass mixing is simpler.
The mixing of the BSM fermions affects the singlet $\psi^0_s$ and the
neutral components of the fermion doublet $\psi_d^0$ and its Dirac
partner $\psi_d^{0c\dagger}$. They mix into three different flavours
of neutral fermions $\psi^0_i$ with Majorana mass terms via the mixing
matrix $V_F$:
\begin{equation} \label{eqn:Min3FieldsFLRS1Mixing}
\begin{split}
{\cal L}_{\text{2F1S}} & \ni -\frac{1}{2}\begin{pmatrix} \psi_s^{0} & \psi_d^{0} & \psi_d^{0c\dagger} \end{pmatrix} \begin{pmatrix} M_{\psi_s} & \frac{\lambda_{1} v}{\sqrt{2}} & \frac{\lambda_{2} v}{\sqrt{2}} \\ \frac{\lambda_{1} v}{\sqrt{2}} & 0 & M_{\psi_d} \\ \frac{\lambda_{2} v}{\sqrt{2}} & M_{\psi_d} & 0 \end{pmatrix} \begin{pmatrix} \psi_s^0 \\ \psi_d^{0} \\ \psi_d^{0c\dagger} \end{pmatrix}
= -\frac{1}{2}\sum_{i=1,2,3} M_{\psi_i^0} \psi_i^0\psi_i^0 \,,
\end{split}
\end{equation}
where
\begin{equation}
\begin{pmatrix} \psi_1^0 \\ \psi_2^{0} \\ \psi_3^{0} \end{pmatrix} = V_F^T\begin{pmatrix} \psi_s^0 \\ \psi_d^{0} \\ \psi_d^{0c\dagger} \end{pmatrix}\,.
\end{equation}
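The neutral fermion masses and the mixing matrix $V_F$ entering Eq.\ (\ref{eqn:Min3FieldsFLRS1Mixing}) follow from a standard diagonalization of the $3\times3$ mass matrix. The snippet below is a minimal numerical sketch of this step; the input masses and couplings are illustrative values only, and a full treatment of Majorana fermions would use a Takagi factorization so that all mass eigenvalues are non-negative.
\begin{verbatim}
import numpy as np

v = 246.0                              # SM Higgs VEV in GeV
M_psi_s, M_psi_d = 500.0, 3000.0       # illustrative masses in GeV
lam1, lam2 = -0.1, -0.1                # illustrative mixing couplings

off1, off2 = lam1 * v / np.sqrt(2.0), lam2 * v / np.sqrt(2.0)
M = np.array([[M_psi_s, off1,    off2],
              [off1,    0.0,     M_psi_d],
              [off2,    M_psi_d, 0.0]])

# Real symmetric matrix: M = V_F diag(m) V_F^T with orthogonal V_F.
# Negative eigenvalues correspond to a field rephasing of the Majorana states.
masses, V_F = np.linalg.eigh(M)
lightest = np.argmin(np.abs(masses))
print("mass eigenvalues (GeV):", masses)
print("composition of the lightest state (psi_s, psi_d, psi_d^c):", V_F[:, lightest])
\end{verbatim}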
The contribution to $\amu$ from this model comes from two diagrams,
the FFS diagram of Fig.\ \ref{fig:FFSDiagram} and the SSF diagram of
Fig.\ \ref{fig:SSFDiagram}, with the generic $F$ fermion lines
replaced by $\psi^-$ and $\psi_i^0$ and the scalar $S$ ones by
$\phi^0_d $ and $\phi^{+*}_d$, respectively. Among these diagrams,
only the SSF diagram with neutral fermion exchange can lead to an
enhanced chirality flip. The full contributions are given by:
\begin{align}\label{eqn:amu_2F1S_model}
\Delta a_\mu^{\text{2F1S}} =&
\frac{m_\mu^2}{32 \pi^2 M_\phi^2} \bigg(\frac{\lambda_R^2}{6} \,E\!\left(\frac{M^2_{\psi_d}}{M^2_\phi}\right)
- \sum_{i=1}^3
\frac{\lambda_R^2 |V_{F\,2i}|^2 + \lambda_L^2 |V_{F\,1i}|^2}{6}\,B\!\left(\frac{M^2_{\psi_i^0}}{M^2_\phi}\right) \nonumber\\
& +\sum_{i=1}^3 \frac{M_{\psi_i^0}}{m_\mu}\frac{2 \lambda_L \lambda_R V_{F\,1i} V_{F\,2i}}{3} \,C\!\left(\frac{M^2_{\psi_i^0}}{M^2_\phi}\right)\bigg).
\end{align}
The term in the second line corresponds to the additional source of
chirality flip in the model. It can be analyzed similarly to the term
in Eq.\ (\ref{eqn:2S1FLOContribution}). The actual enhancement
factor, beyond the loop and mass suppression factor, is given by the
coupling combination $\lambda_{1,2}\lambda_L\lambda_R/y_\mu$ where the
$\lambda_{1,2}$ are contained in the product of the mixing matrix
elements $V_{F\,1j} V_{F\,2j}$ of the neutral fermions. If this
coupling combination vanishes, the new chirality flip contribution and
the third term disappear, and we end up with the contribution of Model
L from Sec.\ \ref{sec:TwoFields},
Eq.\ (\ref{eqn:Min2FieldsLLContributions}).
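For orientation, the structure of Eq.\ (\ref{eqn:amu_2F1S_model}) can be coded directly once the loop functions are available. In the sketch below the loop functions \texttt{E}, \texttt{B}, \texttt{C} are supplied by the caller (their closed forms are not reproduced in this section), and the masses and mixing matrix can be taken from a diagonalization such as the one sketched above; this illustrates the structure of the formula, not our actual implementation.
\begin{verbatim}
import numpy as np

m_mu = 0.10566  # muon mass in GeV

def delta_amu_2F1S(lam_L, lam_R, M_phi, M_psi_d, masses, V_F, E, B, C):
    """Evaluate Eq. (amu_2F1S_model); E, B, C are user-supplied loop functions.
    V_F is assumed real, so |V_F|^2 is written as a plain square."""
    pref = m_mu**2 / (32.0 * np.pi**2 * M_phi**2)
    total = lam_R**2 / 6.0 * E(M_psi_d**2 / M_phi**2)
    for i in range(3):
        x = masses[i]**2 / M_phi**2
        total -= (lam_R**2 * V_F[1, i]**2 + lam_L**2 * V_F[0, i]**2) / 6.0 * B(x)
        # Chirality-flip term, enhanced by M_{psi_i^0}/m_mu.
        total += (masses[i] / m_mu) * 2.0 * lam_L * lam_R \
                 * V_F[0, i] * V_F[1, i] / 3.0 * C(x)
    return pref * total
\end{verbatim}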
\begin{figure}[t]
\centering
\includegraphics[width=0.31\textwidth]{Min3FieldsFLRSR-splitmasses1}
\hspace{0.01\linewidth}
\includegraphics[width=0.31\textwidth]{Min3FieldsFLRSR-splitmasses2}
\hspace{0.01\linewidth}
\includegraphics[width=0.31\textwidth]{Min3FieldsFLRSR-splitmasses3}
\caption{Results from Model 2F1S, scanning over the couplings of the
new fermions and scalar to the left- and right-handed muon. Colours
are the same as in
Fig.\ \ref{fig:Min3FieldsSLRFRCouplingsEqualMasses}. The mixing
couplings are fixed to $-\lambda_{1}=\lambda_{2}=0.1$. In this
scenario the masses of the scalar doublet $\phi_d$ and fermion
doublet $\psi_d$ are raised by $200$ GeV above the mass of the
fermion singlet $\psi_s$. Points to the right of the red line
produce an over abundance of dark matter and are strongly
excluded. \label{fig:Min3FieldsFLRSRCouplingsSplitMasses}}
\end{figure}
The case of a pure fermion $SU(2)$ doublet dark matter candidate is
very well known as this corresponds to Higgsino dark matter in the
MSSM, or the lowest representation of minimal dark matter
\cite{Cirelli:2005uq}. In that case it is possible to deplete the
relic density to the measured value, using only $SU(2)$ gauge
interactions, when the fermion doublet has a mass of $1$ TeV
\cite{ArkaniHamed:2006mb} while above (below) it will be over (under)
abundant. This can be adjusted by mixing with the singlet, to obtain
well-tempered dark matter \cite{ArkaniHamed:2006mb}, where the rapid
depletion of dark matter from $SU(2)$ interactions is diluted through
mixing with the singlet, though well-tempered dark matter is heavily
constrained by direct detection (see e.g. Fig.\ 8 of
Ref.\ \cite{Athron:2016gor}). As in the previous example, explaining
$\Delta a_\mu^{\text{BNL}}$ \update{and $\damu^{\text{2021}}$ after including the FNAL result} will
require significant values of the $\lambda_L$, $\lambda_R$ and $\lambda_i$
couplings, leading to additional depletion mechanisms. Therefore
scenarios where all the input masses are set to be equal will only explain dark
matter at very heavy masses.
Instead we first focus on scenarios where the dark matter is dominantly
singlet in nature in
Fig.\ \ref{fig:Min3FieldsFLRSRCouplingsSplitMasses}, showing two
dimensional parameter slices in the $\lambda_L$--$\lambda_R$ plane,
with the overall mass scale increasing between the panels from left
to right. Specifically each of the three panels shows scenarios where
the fermion doublet $\psi_d$ and scalar doublet $\phi_d$ are always
$200$ GeV higher than the mass of the fermion singlet
$\psi_s^0$. To simplify things further, we choose fairly small values
for the mixing parameters $-\lambda_{1}=\lambda_{2}=0.1$, since large
singlet--doublet mixing increases dark matter direct detection
constraints and depletes the relic density. The other couplings are
varied in the range $-\lambda_L,\lambda_R \in [0,3.5]$.
As a function of $\lambda_L,\lambda_R $, we get a \update{similar}
curve for \update{$\amu^{\text{2021}}$} to the ones shown in
Figs.\ \ref{fig:Min3FieldsSLRFRCouplingsEqualMasses}, \ref{fig:Min3FieldsSLRFRCouplingsSplitMasses},
due to the dominant chirality flip contribution of
Eq.\ (\ref{eqn:amu_2F1S_model}) having the same dependence on
$\lambda_L \lambda_R$ as Eq.\ (\ref{eqn:2S1FLOContribution}). The
nearly vertical red lines indicate points in agreement with the Planck
observed \cite{Aghanim:2018eyx} dark matter relic abundance; points to
the right are over abundant and excluded. As can be seen in all
panels of Fig.\ \ref{fig:Min3FieldsFLRSRCouplingsSplitMasses}, all
points allowed by the dark matter relic density are also in agreement
with direct detection constraints due to the small values of the
$\lambda_i$ couplings\footnote{This can be compared to MSSM scenarios, where such a small mass splitting between the Bino and Higgsino would face more severe constraints from direct detection, since the gauge interactions that control the Bino-Higgsino-Higgs vertices are larger than the $\lambda_i$ couplings chosen for this example.}.
Going from left to right in the three panels of
Fig.\ \ref{fig:Min3FieldsFLRSRCouplingsSplitMasses}, the singlet mass
is increased from $500$ GeV to $1000$ GeV and $1200$ GeV, while the
mass splitting remains $200$ GeV. In the left panel the dark matter
relic abundance depends only on $\lambda_L$, the coupling of the
singlet to the muon and the BSM scalar, due to the large mixing
suppression of any $\lambda_R$ contribution through the fermion
doublet component of the dark matter.\footnote{Note that in contrast
to the scalar dark matter case discussed in the previous section, in
the annihilation cross-section for the t-channel of the BSM scalar,
terms that depend only on $\lambda_L$ are not suppressed compared to
terms that depend on both $\lambda_L$ and $\lambda_R$.} In the
middle and right panels the relative doublet content of the dark
matter candidate rises, opening up additional annihilation mechanisms
through the $SU(2)$ interactions and the $\lambda_R$ coupling. Thus when
$\lambda_R$ is large t-channel exchange of the BSM scalars may play a role
through $\lambda_R$, leading to the curvature of
the red line, which is most pronounced in the right panel. Therefore,
while in the left panel avoiding an over abundance of dark matter gives
a simple bound of \update{$\lambda_L < -1.5$}, in the middle and right
panel larger values of $|\lambda_L|$ are required to deplete the relic
density to the observed value or below, and the precise limit also
depends on $\lambda_R$. In all panels \update{$\damu^{\text{2021}}$}
can be explained simultaneously with the relic density
if $\lambda_R$ has the rather small values \update{$\lambda_R\in[0.1,0.7]$}.
\begin{figure}[t]
\centering
\includegraphics[width=0.31\textwidth]{Min3FieldsFLRSR-largerdoubletsinglet1}
\hspace{0.01\linewidth}
\includegraphics[width=0.31\textwidth]{Min3FieldsFLRSR-largerdoubletsinglet2}
\hspace{0.01\linewidth}
\includegraphics[width=0.31\textwidth]{Min3FieldsFLRSR-largerdoubletsinglet3}
\caption{Results from Model 2F1S, scanning over the couplings of the
new scalar and fermions to the left- and right-handed muon. Colours
are the same as in
Fig.\ \ref{fig:Min3FieldsSLRFRCouplingsEqualMasses}. The mixing
couplings are fixed to $\lambda_{1}=\lambda_{2}=-0.1$. In this
scenario the fermion doublet has its mass fixed at $M_{\psi_d}=3000$
GeV, the fermion singlet is fixed at $M_{\psi_s}=500$ GeV, and the
scalar doublet has its mass increased from $M_{\psi_s}$ across the panels.
Points below or to the left of the red line produce an over abundance
of dark matter and are strongly
excluded. \label{fig:Min3FieldsFLRSRCouplingsLargerDoubletSinglet}}
\end{figure}
The mass splitting between the singlet fermion $\psi_s$ and the scalar
doublet $\phi$ has a large impact on the relic density. In
Fig.\ \ref{fig:Min3FieldsFLRSRCouplingsLargerDoubletSinglet} we show
this impact by varying this mass splitting over three
$\lambda_L$--$\lambda_R$ planes; specifically, we fix $M_{\psi_s} =
500$ GeV and set $M_\phi = 500$ GeV, $550$ GeV and $600$ GeV in the
left, middle and right panels respectively. At the same time we keep
the fermion doublet mass fixed at a heavy scale $M_{\psi_d} = 3000$
GeV, well above that of the dark matter candidate, so that in this
case it will not play any role in the depletion of dark matter relic
density\footnote{The exact choice for the mass is not important and,
for this reason, the plots of
Fig.\ \ref{fig:Min3FieldsFLRSRCouplingsLargerDoubletSinglet}
actually allow understanding of the behaviour of the model in a
wider range of doublet masses.}. This choice also suppresses the $\Delta a_\mu$
prediction and means that very large couplings are required to explain
$\Delta a_\mu^\text{BNL}$ \update{and $\damu^{\text{2021}}$}. We also
choose $\lambda_{1,2}=-0.1$ and $\lambda_L>0$. The relative
sign change between $\lambda_1$ and $\lambda_2$ leads to destructive
interference between different terms and thus a further slight suppression of
the overall contributions to $\Delta a_\mu$.
In all three panels of
Fig.\ \ref{fig:Min3FieldsFLRSRCouplingsLargerDoubletSinglet} the relic
density depends only on $\lambda_L$ due to the decoupling of the
doublet state, while $\Delta a_\mu$ depends on both $\lambda_L$ and
$\lambda_R$ since the chirality flip enhancement is proportional to
$\lambda_L\lambda_R$. Just as before, \update{in all cases we can explain
the discrepancy in $\amu$ and provide a dark matter candidate
particle simultaneously}. Between the three panels the $\Delta a_\mu$ result only
changes a little, as expected due to the small mass increases. However
the dark matter relic density depends strongly on the value of the
mass splitting. In the left panel where $M_\phi = M_{\psi_s}$, most of
the parameter space has an under abundance of dark matter. The very
small mass splitting between the dark matter fermion $\psi_1^0$ and
the BSM scalar opens up coannihilation channels which were suppressed
in Fig.\ \ref{fig:Min3FieldsFLRSRCouplingsSplitMasses}. Due to these
highly efficient coannihilations the relic density is depleted for
much smaller values of $\lambda_L$ than in the previous figures. When
$M_\phi$ is increased to $550$ GeV in the middle panel, the
(co)annihilations through $\lambda_L$ become less efficient. As a
result a much larger $\lambda_L\approx 1.1$ is required to deplete the
relic density to the observed value via t-channel processes as in the
case of Fig.\ \ref{fig:Min3FieldsFLRSRCouplingsSplitMasses}.
When the scalar mass is increased further to $600$ GeV (right panel), an even
larger $\lambda_L\approx 1.4$ is required to produce a relic abundance
that matches the Planck observation. However while the bound from an
over abundant relic density changed a lot between $500$ GeV and $550$
GeV, as we move away from the optimal window for coannihilations,
subsequently increasing $M_\phi$ above $600$ GeV has a much weaker
impact. If we increase the scalar mass further the relic density limit gradually
increases until somewhere between $2.3$--$2.4$ TeV it is no longer
possible to deplete the relic density to the observed value or below
with perturbative couplings.
\begin{figure}[t]
\centering
\includegraphics[width=0.32\textwidth]{Min3FieldsFLRSR-multinestfixallcoupling}
\includegraphics[width=0.32\textwidth]{Min3FieldsFLRSR-multinestfixallcouplingdoubletDM}
\caption{Viable points in the $\lambda_L-\lambda_R$ plane obtained from scanning over the masses of the new scalar and fermions between $0$ and $5000$ GeV, as well as the Yukawa couplings $\lambda_{L,R} \in [0,1.5]$ and $\lambda_{1,2}\in [0,3.5]$ for Model 2F1S. Colours are the same as in Fig.\ \ref{fig:Min3FieldsSLRFRCouplingsEqualMasses}. The scan targeted the observed value of the dark matter relic density $0.1200$, Eq.\ (\ref{eqn:DMRD}), the $\amu$ value \update{in Eq. (\ref{eqn:BNLDiscrepancy})}, and the direct detection log likelihood provided by DDCalc \cite{Workgroup:2017lvb}, where points which are excluded by direct detection limits or are more than $3\sigma$ away from the Planck observation are thrown away.
\label{fig:Min3FieldsFLRSRprofile}}
\end{figure}
In Sec.\ \ref{sec:2S1F} the fact that a $600$ GeV scalar dark matter
candidate naturally depletes the relic density to the observed value
played a critical role in leading to a lower bound on
$|\lambda_L\lambda_R|$ from the combination of $\Delta a_\mu$ and dark matter
constraints. Since a pure fermion doublet naturally depletes the relic
density to the observed value when it has a mass of about $1.1$ TeV, we
may anticipate a similar result in this model. In addition we have also
seen that when the doublet fermion is very heavy, large
couplings are required, while mixed singlet-doublet fermion dark
matter scenarios should be strongly constrained by direct detection.
Therefore we now perform a scan over the full parameter space of the
Model 2F1S to see if this also leads to a lower bound on $|\lambda_L \lambda_R|$.
We sample all free parameters in this model ($\lambda_{1},
\lambda_{2}, \lambda_{L}$, $\lambda_{R}$, $M_\phi$, $M_{\psi_d}$, and
$M_{\psi_s}$) using \texttt{MultiNest}\@\xspace \cite{Feroz:2007kg,Feroz:2008xx,Feroz:2013hea,Buchner2014}, with mass range $M_i
\in [0,5000]$ GeV, and the BSM Yukawa couplings having the values
$\lambda_{L,R} \in [0,1.5]$ and $\lambda_{1,2} \in [0,3.5]$, with the
results shown in Fig.\ \ref{fig:Min3FieldsFLRSRprofile}. Again, several values of
$\Delta a_\mu$ were targeted in the range $[0,42]\times10^{-10}$, with a total of \update{62
million} points scanned. In the left panel of the figure, we can see that there is a
lower bound on the couplings \update{$|\lambda_L \lambda_R| \gtrsim 0.036$} below which
we cannot simultaneously explain the measured value of $\amu$ whilst
producing a dark matter candidate particle with the observed relic
density that escapes direct detection limits. As we anticipated in the
discussion above, pure singlet fermion dark matter requires larger
couplings to explain large $\Delta a_\mu$, while fermion doublet dark
matter annihilates too effectively for light masses.
Significant singlet-doublet fermion mixing can dilute the $SU(2)$
gauge interactions of the doublet, but does not avoid this problem as
it means that t-channel exchange of the BSM scalars will be active
through both $\lambda_L$ and $\lambda_R$, and increasing the mixing
also increases the direct detection cross-section. Reducing $\lambda_L \lambda_R$ while
increasing $\lambda_i$ to keep the $\Delta a_\mu$ prediction fixed makes
direct detection limits more relevant. As a result the combination of
these constraints leads to the lower limit on $|\lambda_L\lambda_R|$
shown in the plot. Interestingly, we also find a more \update{severe}
limit of \update{$|\lambda_L \lambda_R|\approx0.14$} for dark matter
that is mostly doublet in nature, as shown in the right panel of
Fig.\ \ref{fig:Min3FieldsFLRSRprofile}.
As in the previous case this shows how the $\amu$ measurements
have a significant impact on the model, implying a significant
constraint on the couplings. \update{However since the new world
average is quite close to the BNL value there is little difference
between the limits implied by the $\damu^{\text{BNL}}$ and $\damu^{\text{2021}}$
results}. The masses in
our samples are mostly heavy enough to easily evade collider
limits\footnote{The lightest mass scenarios in our samples {\it are}
relevant for the lower bound on the couplings we obtained and our
samples include some points where the lightest BSM fermion is
between $300$--$500$ GeV. In principle therefore LHC data could
imply further constraints on our surviving samples. However we
tested all such points from the scan targeting the
$\amu^{\text{BNL}}$ with \texttt{SModelS}\@\xspace 1.2.3 and did not find any further
exclusion. Therefore we expect our limit from just dark matter and
Muon g-2 to be robust. }. Collider limits should, however, be
included in a full global fit of the model, but we leave that and a
presentation of the full impact of all constraints on all parameters
and masses to future work. Again we stress that our results
demonstrate that the \update{FNAL measurement plays a
critical role} in the phenomenology of these models and should be
included in all phenomenological studies of models of this type and
future global fits.
As discussed earlier this model can be compared to the BHL scenario of
the MSSM. There since $\lambda_L\lambda_R\lambda_{i}$ corresponds to
$g_1^2 y_\mu$, $\amu$ can only be explained with masses less than
$200$ GeV, due to the weaker enhancement \cite{Endo:2017zrj}. Similarly the
$|\lambda_L\lambda_R|$ bound corresponds to $g_1 y_\mu$, which should
be well below the bound for this model and would suggest that the BHL
scenario alone cannot explain both the measured $\amu$ and dark
matter. However it is important to note that the comparison is not
perfect due to the fact that the MSSM contains two Higgs doublets,
which gives an additional enhancement from the ratio of the two Higgs
VEVs, $\tan \beta$. As a result in the BHL scenario, a dominantly
bino dark matter candidate can simultaneously explain dark matter and
account for $\Delta a_\mu^{\text{BNL}}$ and $\damu^{\text{2021}}$ despite having
couplings below the lower bound found here for Model 2F1S. On the
other hand, the Model 2F1S has much greater freedom to explain dark
matter and large $\Delta a_\mu$ at much higher masses than is possible in the
BHL scenario. The fact that $\lambda_R\neq y_\mu$ in this model is of great
significance, but on top of that the freedom to push all couplings up
to values much larger than that of $g_1$ and to vary the interactions
independently, gives rise to a rich and distinct phenomenology.
Therefore our results show that this class of simple models provides an
interesting way to explain both dark matter and large $\Delta a_\mu$. They
are complementary to the explanations that come from the MSSM and
appear to be significantly less constrained. However these models
have not been constructed from any principle other than to explain
the phenomenology, and as a result the deeper motivation is unclear. It
would be very interesting to consider whether models that are
motivated from more fundamental principles can have such scenarios as a
low energy effective field theory.
\section{\label{sec:intro} Introduction}
The importance of efficient computational modeling in chemistry and materials science cannot be overstated, and for many applications, Kohn--Sham density functional theory presents the most appealing compromise between accuracy and efficiency. The favorable position of this compromise has been enabled by the steady progression of ever more accurate density functionals produced over the last 60 years. These functionals are commonly characterized by the Perdew--Schmidt hierarchy \cite{Perdew2001}, a progression of increasing non-locality where successive levels can be expected to give greater accuracy at the cost of increased computational complexity.
The meta-generalized gradient approximations (meta-GGAs) stand as an appealing level at which the highest accuracy can be expected from semi-local ingredients, including the electron density, its gradient, and the kinetic energy density. While hybrid functionals incorporating admixtures of non-local exact exchange have become most prominent for molecular applications, the prohibitive cost scaling of exact-exchange with number of electrons has limited their utility for extended systems.
A meta-GGA is commonly designed by either enforcing constraints on the exchange-correlation (XC) functional, or by fitting to reference data sets. Those that take the latter route, called empirical functionals, can be inaccurate for systems outside their respective fitting set, or can suffer from difficulties due to over-fitting. General purpose functionals that are accurate for diverse systems have tended to be of the former, so-called non-empirical, variety in which transferable accuracy is promoted by adherence to physical constraints that are necessarily true for all systems of electrons.
The first generation of meta-GGAs were non-empirical, and predate most GGAs.
Becke and Roussel \cite{Becke1983,Becke1989} derived generalized Taylor series of the exact-exchange hole by enforcing sum rule and non-negativity constraints on a hole model.
Perdew \cite{Perdew1985} derived a Laplacian-level meta-GGA for the exchange energy by enforcing the same set of constraints.
At the meta-GGA level, the strongly-constrained and appropriately-normed (SCAN) functional has incorporated all 17 known constraints on the exact XC energy appropriate to the semi-local level \cite{Sun2015}.
(These constraints are listed together in the Supplementary Material of Ref. \cite{Sun2015}, and the references for them are presented in the main text of Ref. \cite{Sun2015}.)
From this foundation, further works have proposed modifications to the SCAN energy densities to improve its accuracy in certain domains. revSCAN is a simple modification to the slowly-varying limit of SCAN's correlation energy that modifies the fourth-order term in its density-gradient expansion \cite{Mezei2018}. The TASK functional is a complete revision of SCAN, retaining only its fulfillment of exact constraints for the exchange energy \cite{Aschebrock2019}. TASK is designed to accurately predict band gaps, and has recently been itself extended for accuracy in 2D systems, in a modification termed ``mTASK'' \cite{Neupane2021}. Note that TASK and mTASK use a correlation density functional at the level of the local density approximation.
The SCAN functional has proved broadly transferable and has shown good accuracy for many systems normally challenging for DFT methods \cite{Kitchaev2016, Sun2016, Peng2017, Zhang2017, Chen2017, Furness2018, Lane2018, SaiGautam2018, Zhang2019, Zhang2020b, Pulkkinen2020, Zhang2020a, Ning2021}, though its numerical difficulties have hindered some applications such as pseudo-potential generation \cite{Bartok2019, Furness2019}. To address this, Bart\'ok and Yates proposed a regularized SCAN, termed ``rSCAN'', that aims to control numerical challenges while remaining as close to the original SCAN functional as possible. While the regularizations are effective in improving numerical performance, removing the grid sensitivity of SCAN that is problematic in some electronic structure codes, they break five of the exact conditions SCAN was designed to obey. Recent work by Mej\'ia-Rodr\'iguez and Trickey shows that some transferability is lost in rSCAN, with atomization energies particularly degraded \cite{Mejia-Rodriguez2019, Bartok2019a}. We have recently proposed a restored-regularized-SCAN, called r\tss{2}SCAN\xspace, which maintains the regularizations of rSCAN while restoring exact constraint adherence \cite{Furness2020c}. Compared to SCAN, the resulting r\tss{2}SCAN\xspace functional has shown pronounced improvements in numerical efficiency, alongside small systematic improvements in accuracy \cite{Furness2020c, Mejia-Rodriguez2020f, Mejia-Rodriguez2020g, Ehlert2021b, Grimme2021,Ning2021}. On the extensive GMTKN55 test set \cite{Goerigk2017} for main-group chemistry, the overall error measure WTMAD-2 was \cite{Ehlert2021b} 8.6 kcal/mol for SCAN+D4 and 7.5 kcal/mol for r\tss{2}SCAN\xspace+D4, where D4 is a dispersion correction. Note that SCAN and r\tss{2}SCAN\xspace are not fitted to any bonded system, but are genuinely predictive for bonded systems. Applying SCAN+D4 to the unrestricted Hartree-Fock density \cite{Santra2021} instead of its own self-consistent density leads to a remarkably small WTMAD-2 of 5.079 kcal/mol, better than nearly all the (necessarily empirical) hybrid functionals tested thus far. This ``density correction'' \cite{Song2021} to SCAN also leads to a nearly-perfect many-body expansion and molecular dynamics for water \cite{Dasgupta2021}.
The present publication completes Ref.\ \cite{Furness2020c}, providing the necessary detail of how each exact constraint was restored in r\tss{2}SCAN\xspace. We show how these restorations affect the numerical performance of the functional and how smoothness can be maintained for all constraints except the fourth-order term of the slowly-varying density gradient expansion for exchange, which is less easy to enforce following the present interpolation-based model.
This work also provides context for the exact constraints enforced by SCAN, and demonstrates how a meta-GGA can be constructed to enforce those constraints.
This work builds upon Ref. \cite{Furness2020c} by expanding it to a progression of functionals (rSCAN, r++SCAN\xspace, r\tss{2}SCAN\xspace, and r\tss{4}SCAN\xspace) that ultimately restore all the exact constraints obeyed by SCAN to a regularized form. Thus our presentation expands upon and completes the letter version of Ref. \cite{Furness2020c}, by filling in the details in the constructions of the last three functionals, and by presenting numerical results for r++SCAN\xspace and r\tss{4}SCAN\xspace. We also present individual errors of these functionals on their appropriate norms (used to determine their parameters) in Table \ref{tab:norms}, and on the lattice constants of solids in Table \ref{tab:LC20_table}. Figures \ref{fig:fx_osc}, \ref{fig:setting_r4_params}, \ref{fig:Xe_derivatives}, and \ref{fig:G3_progression} will be familiar to readers of Ref \cite{Furness2020c}, but are expanded to incorporate results for the new meta-GGAs. We also include a more detailed analysis of the construction of r\tss{2}SCAN\xspace than was presented in Ref. \cite{Furness2020c}: in addition to a derivation of the r\tss{2}SCAN\xspace gradient expansion
in supplemental material A,
Fig. \ref{fig:d2_Set} shows how the damping factor $d_{p2}$ of r\tss{2}SCAN\xspace was determined. Variations of Figs. \ref{fig:alpha_comp} and \ref{fig:iefcomp} were presented in Ref. \cite{Furness2020c}; they are included here for completeness. The Tables in Appendices C--F report results for the same test sets considered in Ref. \cite{Furness2020c}, but including the novel meta-GGAs.
\section{\label{sec:theory} Constraint restoration}
\subsection{Coordinate scaling and uniform density limit \label{sec:theory_scaling}}
The SCAN functional is comprised of independent exchange and correlation functionals each constructed as an interpolation and extrapolation of two semi-local energy densities: one for single-orbital densities $\epsilon_\mr{x/c}^0$, and one for slowly-varying densities $\epsilon_\mr{x/c}^1$, where ``x/c'' is either exchange or correlation, respectively. Here, we will use $\epsilon$ to refer to the energy density, and $\varepsilon=\epsilon/n$ to refer to the energy per electron. The single-orbital and slowly-varying energy densities are joined by way of an interpolation function,
\begin{equation}
\epsilon_\mr{x/c}^{\mr{SCAN}}(\v{r}) = \epsilon_\mr{x/c}^1(\v{r}) + f_\mr{x/c}(\alpha(\v{r}))\left[\epsilon_\mr{x/c}^0(\v{r}) - \epsilon_\mr{x/c}^1(\v{r})\right], \label{eq:scan_framework}
\end{equation}
which is controlled by the iso-orbital indicator variable,
\begin{equation}
\alpha = \frac{\tau - \tau_W}{\tau_U}. \label{eq:alpha}
\end{equation}
$\alpha$ is built from three kinetic energy densities: the positive-definite conventional $\tau = 1/2\sum_i^{\mr{occ.}}|\nabla \phi_i|^2$ defined with the occupied Kohn-Sham orbitals \{$\phi_i$\}, von Weizs\"acker $\tau_W = |\nabla n|^2/(8n)$ that is the single-orbital limit of $\tau$ as a function of the electron density $n=\sum_i^{\mr{occ.}}|\phi_i|^2$, and $\tau_U = \frac{3}{10}k_{\mr{F}}^2\, n\, d_s(\zeta)$ the uniform electron gas limit of $\tau$. Here, the Fermi wavevector $k_{\mr{F}}=[3\pi^2 n]^{1/3}$, and $d_s(\zeta) = [(1 + \zeta)^{5/3} + (1 - \zeta)^{5/3}]/2$ is a function of the spin polarization $\zeta = (n_\uparrow - n_\downarrow)/(n_\uparrow + n_\downarrow)$. We refer to Refs. \cite{Becke1990, Ruzsinszky2012, Sun2012, Sun2013a, Sun2015, Furness2019, Furness2020a} for a detailed discussion of the properties of $\alpha$ and related quantities.
The first change made in rSCAN \cite{Bartok2019} is to regularize the iso-orbital indicator $\alpha$ to prevent divergence of the XC potential in the asymptotic regions of single orbital systems \cite{Furness2019}. In this region, the derivative $\partial\alpha/\partial\tau$ diverges faster than the decay of $\epsilon_{\mr{x}}^{\mr{LDA}}=-3k_{\mr{F}} n/(4\pi)$ (the local density approximation for exchange), resulting in a diverging exchange-correlation potential when $\partial\epsilon_\mr{xc}/\partial\alpha \neq 0$ \cite{Furness2019}, as is the case for SCAN. This diverging potential is problematic for pseudo-potential generation \cite{Bartok2019, Furness2019} and is avoided in rSCAN using a regularized $\alpha^\prime$,
\begin{align}
\tilde\tau_U &= \left (\frac{3}{10}(3\pi^2)^{2/3}n^{5/3} + \tau_r \right)d_s(\zeta), \label{eq:regUEG}\\
\tilde\alpha &= \frac{\tau - \tau_W}{\tilde\tau_U}, \label{eq:tildealpha}\\
\alpha^\prime &= \frac{\tilde\alpha^3}{\tilde\alpha^2 + \alpha_r}.
\end{align}
The regularizing constants are $\tau_r = 10^{-4}$ and $\alpha_r = 10^{-3}$. Whilst successful in preventing the rSCAN potential from diverging, the $\alpha^\prime$ regularization breaks two exact constraints: 1) the uniform density limit, and 2) the uniform coordinate scaling of the exchange energy \cite{Levy1985}.
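To make the regularization concrete, the following minimal sketch evaluates $\alpha$, $\tilde\alpha$, and $\alpha^\prime$ at a single spin-unpolarized density point ($\zeta=0$, so $d_s=1$), using the regularization constants quoted above; Hartree atomic units are assumed.
\begin{verbatim}
import numpy as np

TAU_R, ALPHA_R = 1.0e-4, 1.0e-3   # rSCAN regularization constants

def iso_orbital_indicators(n, grad_n, tau):
    """Return (alpha, alpha_prime) for a zeta = 0 density point."""
    tau_W = grad_n**2 / (8.0 * n)                                 # von Weizsaecker
    tau_U = 0.3 * (3.0 * np.pi**2)**(2.0 / 3.0) * n**(5.0 / 3.0)  # uniform-gas limit
    alpha = (tau - tau_W) / tau_U                                 # SCAN, Eq. (alpha)
    alpha_tilde = (tau - tau_W) / (tau_U + TAU_R)                 # Eq. (tildealpha)
    alpha_prime = alpha_tilde**3 / (alpha_tilde**2 + ALPHA_R)     # rSCAN
    return alpha, alpha_prime
\end{verbatim}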
The exact uniform density limit is recovered in SCAN by recognizing,
\begin{equation}
\lim_{|\nabla n|\to0} \tau = \lim_{|\nabla n|\to0}\tau_U,
\end{equation}
and,
\begin{equation}
\lim_{|\nabla n|\to0} \tau_W = 0, \label{eq:tauwlim}
\end{equation}
hence,
\begin{equation}
\lim_{|\nabla n|\to0} \alpha = 1.
\end{equation}
Then by construction,
\begin{equation}
\lim_{|\nabla n|\to0} f_{\mr{x/c}}^{\mr{SCAN}}(\alpha) = 0,
\end{equation}
and Eq. \ref{eq:scan_framework} exclusively selects $\epsilon_{\mr{x}}^1$ and $\epsilon_{\mr{c}}^1$ in the uniform density limit. These energy densities satisfy the uniform density limit by design.
The uniform density limit is broken by the regularization parameters in $\alpha^\prime$ as,
\begin{align}
&\lim_{|\nabla n|\to0}\tilde\tau_U \neq \lim_{|\nabla n|\to0} \tau , \\
&\lim_{|\nabla n|\to0} \tilde\alpha = \frac{\lim_{|\nabla n|\to0}\tau_U}{\lim_{|\nabla n|\to0}\tilde\tau_U} \neq 1, \\
&\lim_{|\nabla n|\to0} \alpha^\prime \neq 1,
\end{align}
hence,
\begin{equation}
\lim_{|\nabla n|\to0} f_{\mr{x/c}}^{\mr{rSCAN}}(\alpha^\prime) \neq 0.
\end{equation}
The final inequality results in a slight scaling of $\epsilon_{\mr{x/c}}^1$ and a small inclusion of $\epsilon_\mr{x/c}^0$, which does not recover the uniform density limit. Hence the constraint is broken. The uniform density limit is important for metallic elements. For a uniform electron gas of density parameter $r_{\mathrm{s}}=4$ (roughly characteristic of the valence electron density in solid sodium) $\alpha'\approx 0.719$, for example.
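This value is easy to verify: for a uniform density $\tau=\tau_U$ and $\tau_W=0$, so only the $\tau_r$ and $\alpha_r$ regularizations move $\alpha^\prime$ away from the exact value $\alpha=1$. A short check (in Hartree atomic units, $\zeta=0$):
\begin{verbatim}
import numpy as np

r_s = 4.0
n = 3.0 / (4.0 * np.pi * r_s**3)
tau_U = 0.3 * (3.0 * np.pi**2)**(2.0 / 3.0) * n**(5.0 / 3.0)
# Uniform density: tau = tau_U, tau_W = 0.
alpha_tilde = tau_U / (tau_U + 1.0e-4)
alpha_prime = alpha_tilde**3 / (alpha_tilde**2 + 1.0e-3)
print(round(alpha_prime, 3))   # 0.719, compared with the exact alpha = 1
\end{verbatim}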
The regularized uniform electron gas kinetic energy density, $\tilde\tau_U$, also causes the exchange energy density to scale incorrectly under the uniform coordinate scaling transformations. To see this, we define a uniform coordinate scaling of the density $n$ and Kohn-Sham orbital $\phi_i$ as
\begin{align}
n_\lambda(\bm{r}) &= \lambda^3 n(\lambda \bm{r}), \\
\phi_{i,\lambda}(\bm{r}) &= \lambda^{3/2}\phi_i(\lambda \bm{r}),
\end{align}
with $\lambda \geq 0$, such that the standard meta-GGA variables scale as,
\begin{align}
\tau_\lambda (\bm{r}) &= \lambda^5 \frac{1}{2} \sum_i^{\mr{occ.}}\left| \frac{\partial \phi_i(\lambda \bm{r})}{\partial (\lambda \bm{r})} \right|^2 = \lambda^5 \tau (\lambda \bm{r}) \\
\tau_{W,\lambda}(\bm{r}) &= \lambda^5 \tau_W(\lambda \bm{r}) \\
\tau_{U,\lambda}(\bm{r}) &= \lambda^{5}\tau_U(\lambda \bm{r}).
\end{align}
Thus, while $\alpha(\bm{r}) \to \alpha(\lambda \bm{r})$ under uniform coordinate scaling, $\tilde \alpha$ does not have this correct behavior except in the limit $\lambda \to \infty$, because the regularization $\tau_r$ in $\tilde\tau_U$ does not vary with the coordinate scaling parameter $\lambda$. This clearly violates the uniform coordinate scaling behavior of the exchange energy \cite{Levy1985}
\begin{equation}
E_\mr{x}[n_{\lambda}] = \lambda E_\mr{x}[n],
\end{equation}
as $\tilde \alpha$ does not scale correctly, and the exchange and correlation models here are highly nonlinear in $\tilde \alpha$.
The exact correlation energy evaluated on a uniformly scaled density tends to distinct limits \cite{Levy1991,Levy1993}
\begin{align}
\lim_{\lambda \to \infty} E_\mr{c}[n_\lambda] &= \text{constant} \leq 0 \\
\lim_{\lambda \to 0} E_\mr{c}[n_\lambda] &= \lambda D_\mr{c}[n],
\end{align}
where the constant and functional $D_\mr{c}$ are unknown. It can be seen that SCAN and the functionals presented here satisfy both limits, but rSCAN satisfies only the $\lambda \to \infty$ limit.
It should be noted that under nonuniform coordinate scaling, rSCAN does not violate known exact constraints \cite{Levy1991,Gorling1992,Pollack2000} because of the robustness of the underlying SCAN model. It does, however, tend to distinct limits from SCAN, likely impacting performance for real systems. To see this, we define a non-uniform coordinate scaling of the density $n$ and Kohn--Sham orbital $\phi_i$ in one dimension as,
\begin{align}
n_\lambda^{x}(x, y, z) &= \lambda n(\lambda x, y, z), \label{eq:nus_dens} \\
\phi_{i,\lambda}^x(x, y, z) &= \lambda^{1/2}\phi_i(\lambda x, y, z), \label{eq:nus_ks_orb}
\end{align}
again with $\lambda \geq 0$.
Under this coordinate transformation, the exact exchange energy satisfies \cite{Levy1991,Gorling1992}
\begin{align}
\lim_{\lambda \to 0}\frac{1}{N} E_\mr{x}[n_\lambda^x] &> -\infty\\
\lim_{\lambda \to \infty} \frac{1}{N} E_\mr{x}[n_\lambda^x] &> -\infty, \label{eq:nus_ex_inf}
\end{align}
with $N$ the number of electrons.
Identical inequalities hold for the exact correlation energy \cite{Gorling1992,Pollack2000}.
It should be emphasized that these constraints imply that the exact exchange and correlation energies per electron tend to finite constants under either limit of non-uniform coordinate scaling.
The constant limit for Eq. \ref{eq:nus_ex_inf} is a non-zero negative constant \cite{Pollack2000}, the exchange energy per electron for a two-dimensional system.
To recover these constraints on an approximate exchange energy functional, the exchange enhancement factor $F_\mr{x} = \epsilon_\mr{x}/\epsilon_\mr{x}^\text{LDA}$ must satisfy \cite{Chiodo2012,Perdew2014}
\begin{equation}
\lim_{p\to \infty} F_\mr{x} \propto p^{-1/4}.
\end{equation}
$p = [|\nabla n|/(2k_\mr{F} n)]^2$ is the square of a dimensionless gradient of the density on the appropriate length scale for the exchange energy.
We will discuss $p$ further in the ensuing section on gradient expansions, but it suffices here to note that $p$ scales as $\lambda^{-2/3}$ as $\lambda \to 0$, and as $\lambda^{4/3}$ as $\lambda \to \infty$, as shown in
supplemental material B.
Therefore, $p$ is always divergent under the extreme limits of non-uniform coordinate scaling.
In SCAN, rSCAN, and the functionals developed here, the set of coordinate scaling constraints for exchange is imposed through a function $g_\mr{x}(p)$
\begin{align}
F_\mr{x}(n,|\nabla n|,\tau) &= \left\{h_{\mr{1x}} + f_\mr{x}(\alpha)\left[h_{0\mr{x}} - h_{\mr{1x}}\right ]\right\} g_\mr{x}(p) \\
g_\mr{x}(p) &= 1 - \exp[-a_1p^{-1/4}].
\end{align}
Referring to Eq. (\ref{eq:scan_framework}), it can be seen that $h_\mr{jx}=\epsilon_\mr{x}^j/[\epsilon_\mr{x}^\text{LDA}g_\mr{x}]$, with $j=0,1$.
In the limit $p\to \infty$,
\begin{align}
\lim_{p\to \infty} g_\mr{x}(p) &= a_1 p^{-1/4} + \mathcal{O}(p^{-1/2}) \\
\lim_{p\to \infty} h_\mr{jx} &= \text{constant} > 0, \quad j = 0,1.
\end{align}
Thus $F_\mr{x}\sim p^{-1/4}$.
In SCAN, rSCAN, and this work, $h_\mr{0x}=1.174$ identically, and $h_\mr{1x}(p\to\infty)=1.065$.
The LDA, most GGAs, and most meta-GGAs do not recover the right asymptotic behavior for exchange.
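The approach to this asymptote is slow but straightforward to verify numerically. The small check below uses the SCAN value $a_1 = 4.9479$, quoted here for illustration only (the asymptotic constraint itself does not fix this value):
\begin{verbatim}
import numpy as np

a1 = 4.9479
for p in [1e2, 1e4, 1e6, 1e8]:
    gx = 1.0 - np.exp(-a1 * p**-0.25)
    # The ratio g_x(p) / (a1 p^{-1/4}) tends to 1 as p -> infinity.
    print(p, gx / (a1 * p**-0.25))
\end{verbatim}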
Recovering the analogous set of non-uniform coordinate scaling constraints for correlation is more straightforward, and requires that, for $j=0,1$,
\begin{align}
\lim_{p \to \infty} \epsilon_\mr{c}^j &= \text{constant} \leq 0 \\
\lim_{r_{\mathrm{s}} \to 0} \epsilon_\mr{c}^j &= \text{constant} \leq 0,
\end{align}
where $r_{\mathrm{s}} = [3/(4\pi n)]^{1/3}$ is the Wigner-Seitz radius.
In SCAN, rSCAN, and the functionals developed here, both constants are chosen to be zero.
Many non-empirical GGAs and meta-GGAs for correlation satisfy the non-uniform coordinate scaling constraints.
LDA, which has a logarithmic divergence as $r_{\mathrm{s}} \to 0$, does not.
We can now consider the iso-orbital indicators used in SCAN, rSCAN, and r\tss{2}SCAN\xspace. Under the non-uniform coordinate scaling defined in Eqs. \ref{eq:nus_dens} and \ref{eq:nus_ks_orb}, the standard meta-GGA variables scale as
\begin{align}
\tau_\lambda^x (x, y, z) &= \frac{\lambda}{2} \sum_i^{\mr{occ.}}\left|\hat{\bm{x}}\lambda \frac{\partial \phi_i(\lambda x, y, z)}{\partial(\lambda x)} + \nabla_\perp \phi_i(\lambda x, y, z)\right|^2 \label{eq:nus_tau} \\
\tau_{W,\lambda}^x(x,y,z) &= \lambda \frac{\left|\hat{\bm{x}}\lambda\frac{\partial n(\lambda x, y, z)}{\partial(\lambda x)} + \nabla_\perp n(\lambda x, y, z)\right|^2}{8n(\lambda x, y, z)} \\
\tau_{U,\lambda}^{x}(x,y,z) &= \lambda^{5/3}\tau_U(\lambda x, y, z), \label{eq:nus_tauu}
\end{align}
with
\[
\nabla_\perp = \hat{\bm{y}} \frac{\partial}{\partial y} + \hat{\bm{z}} \frac{\partial}{\partial z}.
\]
From these equations, we see that, when $\lambda \to 0$, $\tau$ and $\tau_W$ scale as $\lambda$, but $\tau_U$ scales as $\lambda^{5/3}$.
Thus, $\alpha$ scales with a leading order of $\lambda^{-2/3}$ in this limit.
When $\lambda \to \infty$, $\alpha$ can either scale as $\lambda^{4/3}$ or $\lambda^{-2/3}$. Examples and analysis of both scaling limits are given in
supplemental material B.
Due to the $\tau_r$ constant in the denominator of $\tilde\alpha$ (Eq. \ref{eq:tildealpha}), the leading-order behavior of the regularized $\tilde \alpha$ under non-uniform scaling with $\lambda \to 0$ is $\lambda$. Thus, whereas $\alpha$ tends to infinity in the $\lambda \to 0$ limit, $\tilde \alpha$ tends to zero. $\tilde \alpha$ has the correct leading-order behavior in the limit $\lambda \to \infty$, which can be a single-orbital limit for which exact constraints, including finite exchange and correlation energies under non-uniform coordinate scaling, were built into SCAN.
In Ref. \cite{Furness2021a}, we proposed an alternative regularization of $\alpha$ to restore these constraints,
\begin{equation}
\bar{\alpha} = \frac{\tau - \tau_W}{\tau_U + \eta\tau_W} = \frac{\alpha}{1 + \eta \frac{5}{3}p}, \label{eq:alphabar}
\end{equation}
where $\eta$ is a regularization parameter to be determined later.
Clearly, $\bar\alpha$ has the correct behavior, $\bar\alpha(\bm{r})\to \bar\alpha(\lambda\bm{r})$ under uniform coordinate scaling.
This regularization eliminates the asymptotic region ($|\bm{r}|\to\infty$) divergence
and, by Eq. (\ref{eq:tauwlim}), does not change the uniform density limit,
\begin{equation}
\lim_{|\nabla n|\to 0} \bar\alpha = 1.
\end{equation}
The non-uniform coordinate scaling of $\bar\alpha$ is also maintained as $\lambda^{-2/3}$ to leading order in the $\lambda \to 0$ limit. For $\lambda \to \infty$, however, the leading-order term of $\bar\alpha$ is independent of $\lambda$ for non-homogeneous densities. This is demonstrated in supplemental material B.
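For concreteness, $\bar\alpha$ can be evaluated directly from the standard meta-GGA ingredients. The following is a minimal sketch (in Python; the function and variable names are illustrative rather than those of any released implementation), assuming a spin-unpolarized density:
\begin{verbatim}
import numpy as np

def bar_alpha(n, grad_n, tau, eta=1.0e-3):
    # n: density, grad_n: |grad n|, tau: kinetic energy density,
    # eta: regularization parameter (0.001 in this work)
    tau_w = grad_n**2 / (8.0 * n)                 # von Weizsaecker KED
    tau_u = 0.3 * (3.0 * np.pi**2)**(2.0/3.0) \
            * n**(5.0/3.0)                        # uniform-gas KED
    return (tau - tau_w) / (tau_u + eta * tau_w)  # Eq. (alphabar)
\end{verbatim}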
Figure \ref{fig:alpha_comp} shows a comparison of $\alpha$, $\alpha^\prime$, and $\bar{\alpha}$ for the krypton atom. The divergence of the conventional $\alpha$ is apparent in the asymptotic region, while $\alpha^\prime$ and $\bar{\alpha}$ decay to 0. Close to the nucleus, the $\alpha_r$ regularization constant causes $\alpha^\prime$ to behave differently to $\alpha$ and $\bar{\alpha}$, but otherwise all three indicators behave similarly.
\begin{figure}[t]
\centering
\includegraphics[width=0.45\textwidth]{Comparing_alpha.pdf}
\caption{\label{fig:alpha_comp} Comparison between the conventional $\alpha$ (e.g. from SCAN), $\alpha^\prime$ (from rSCAN), and the new $\bar{\alpha}$ as a function of distance from the Kr nucleus (in Bohr radii) computed from accurate spherical Hartree--Fock Slater type orbitals \cite{Clementi1974, Furness2021a}. Regularization parameters are $\tau_r = 10^{-4}$ and $\alpha_r = 10^{-3}$ in $\alpha^\prime$, and $\eta = 10^{-3}$ in $\bar{\alpha}$.}
\end{figure}
Substituting $\bar\alpha$ for $\alpha^\prime$ in rSCAN is sufficient to restore the uniform density limit and coordinate scaling behaviors, and we refer to rSCAN with this replacement as ``r++SCAN'' throughout.
\subsection{Gradient expansions\label{sec:gradexp}}
The interpolative design of SCAN allows construction of the single-orbital ($\epsilon^0$) and slowly-varying ($\epsilon^1$) energy densities that consider only the exact constraints relevant to their respective limits. In SCAN, the interpolation function is a piece-wise combination of two exponential terms,
\begin{equation}
f_\mr{x/c}(\alpha) =
\begin{cases}
\exp[\frac{-c_{1\mr{x/c}}\alpha}{1 - \alpha}] & \alpha \leq 1 \\
-d_\mr{x/c}\exp[\frac{c_{2\mr{x/c}}}{1 - \alpha}] & \alpha > 1,
\end{cases}
\label{eq:scanf}
\end{equation}
where $\{d_{\mr{x/c}}, c_\mr{1x/c}, c_\mr{2x/c}\}$ are separate parameters for exchange and correlation determined by fitting to appropriate norms \cite{Sun2015}. This function was chosen such that: 1) $f(\alpha = 0) = 1$ exclusively selects $\epsilon^0$ for single-orbital densities, 2) $f(\alpha = 1) = 0$ exclusively selects $\epsilon^1$ in slowly-varying densities, and 3)
\begin{equation}
\left.\frac{df(\alpha)}{d\alpha}\right |_{\alpha \to 1} = \left.\frac{d^2f(\alpha)}{d\alpha^2}\right |_{\alpha \to 1} = 0,\label{eq:interp_const}
\end{equation}
which prevents $\epsilon^0$ from contributing to the slowly-varying density gradient expansion up to 4th order in $|\nabla n|$. Note that $\left.\frac{d^mf(\alpha)}{d\alpha^m}\right |_{\alpha \to 1} = 0$ for any positive integer $m$ in Eq. \ref{eq:scanf} by design. While theoretically convenient, Eq. \ref{eq:scanf} introduces a twist into the function around $\alpha = 1$, see Figure \ref{fig:iefcomp}. This twist destroys the overall smoothness of the functional, introduces oscillations into the XC potential \cite{Yang2016, Bartok2019}, and harms its performance on numerical integration grids.
The rSCAN functional uses a smooth polynomial interpolation function in place of the SCAN piece-wise exponential for the range $0 \leq \alpha^{\prime} \leq 2.5$,
\begin{equation}
f(\alpha^\prime) = \begin{cases}
\sum_{i=0}^7c_{i}\alpha^{\prime \, i} & 0 \le \alpha^\prime \leq 2.5 \\
-d_\mr{x/c}\exp[\frac{c_{2\mr{x/c}}}{1 - \alpha^\prime}] & \alpha^\prime > 2.5
\end{cases},\label{eq:rscan_f}
\end{equation}
where $\{c_i\}$ are polynomial coefficients determined to smoothly join $f^\mr{SCAN}(\alpha^\prime)$ at $\alpha^\prime = 0$ and $2.5$, see Ref. \citenum{Bartok2019}. A comparison of the two interpolation functions is shown in Figure \ref{fig:iefcomp}. While this replacement smooths the XC energy density and potential, it breaks the third constraint on the interpolation function, and hence $\epsilon^0$ makes spurious contributions to the slowly-varying density gradient expansion of $\epsilon^1$. Here, we restore the correct expansions by directly subtracting the extra terms that result from breaking the third condition (Eq. (\ref{eq:interp_const})) around $p \to 0$ and $\alpha \to 1$, where the expansion is relevant.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Interpolation_Comparison.pdf}
\caption{The exchange (solid) and correlation (dashed) interpolation functions for the SCAN (blue, Eq. \ref{eq:scanf}) and rSCAN (orange, Eq. \ref{eq:rscan_f}) functionals.}
\label{fig:iefcomp}
\end{figure}
The exact gradient expansion for exchange around the slowly-varying density limit to the 2nd order (GE2X) and to the 4th order (GE4X) was derived in terms of the exchange enhancement factor $F_\mr{x}$ in Refs. \cite{Kirzhnits1957, Svendsen1996} as,
\begin{equation}
\lim_{|\nabla n|\to 0} F_{\mr{x}}^{\mr{GE}} = 1 + \mu p + \frac{146}{2025}\tilde q^2 - \frac{73}{405}p\tilde q + \mo{6}, \label{eq:GE_x}
\end{equation}
where $\mu = 10/81$ and $\tilde q = (9/20)(\bar\alpha - 1) + (3\eta/4 + 2/3)p$ recovers the reduced density Laplacian $q$ in the $|\nabla n| \to 0$ limit. The gradient expansion for correlation was derived to the 2nd order (GE2C) in Refs. \cite{Ma1968,Wang1991,Perdew1996,Sun2015} as,
\begin{equation}
\varepsilon_{\mr{c}} = \varepsilon_{\mr{c}}^{\mr{LSDA}} + \beta(r_{\mathrm{s}})\phi(\zeta)^3 t(r_{\mathrm{s}}, \zeta, p)^2 + \mo{4},
\end{equation}
where $\phi(\zeta) = [(1 + \zeta)^{2/3} + (1 - \zeta)^{2/3}]/2$ and $t(r_{\mathrm{s}}, \zeta, p) = (3\pi^2/16)^{1/3} \sqrt{p/r_{\mathrm{s}}}/\phi(\zeta)$. We will restore each expansion to r++SCAN in turn.
\subsubsection{Exchange}
The SCAN exchange enhancement factor for a spin-unpolarized system is,
\begin{align}
F_\mr{x}^{\mr{SCAN}}&(p,\alpha) \\
&= \left\{h_{\mr{1x}}(p, \alpha) + f_\mr{x}(\alpha)\left[h_{0\mr{x}} - h_{\mr{1x}}(p,\alpha)\right ]\right\} g_\mr{x}(p),\nonumber \\
g_\mr{x}(p) &= 1 - \exp[-a_1p^{-1/4}], \\
h_{\mr{0x}} &= 1 + \kappa_0 = 1.174, \\
h_\mathrm{1x}(p,\alpha) &= 1 + \kappa_1 - \frac{\kappa_1}{1 + \frac{x(p,\alpha)}{\kappa_1}},\\
x(p, \alpha) &= \mu p\left[1 + \left(\frac{b_4p}{\mu}\right)\exp\left(\frac{-|b_4|p}{\mu}\right)\right] \nonumber\\
&+ \left\{b_1 p + b_2(1 - \alpha)\exp\left[-b_3(1 - \alpha)^2\right]\right\}^2, \label{eq:scan_x(pa)}
\end{align}
where $\mu = 10/81$ and $\{b_1, b_2, b_3, b_4\}$ are chosen such that SCAN yields GE2X and GE4X, noting that the expansion of $g_\mr{x}(p)$ around $p=0$ has only zeroth-order contributions (see supplemental material A 3).
The rSCAN interpolation function has non-zero derivatives at $\alpha^\prime = 1$ (and likewise at $\bar\alpha = 1$ in r++SCAN), so $h_\mr{0x}$ also contributes to the gradient expansion, spoiling the correct gradient expansion built into the $x(p,\alpha)$ inherited from SCAN. Thus both rSCAN and r++SCAN fail to recover GE2X and GE4X.
We restore GE2X to give the r\tss{2}SCAN\xspace exchange functional by redesigning $x(p,\alpha)$ as,
\begin{equation}
x(p) = \left(C_\eta C_{2\mr{x}} \exp[-p^2/d_{p2}^4] + \mu\right)p, \label{eq:x_change}
\end{equation}
where $C_\eta$ and $C_{2\mr{x}}$ are constants set to cancel the spurious contributions from $h_{\mr{0x}}$ to 2nd order in $\nabla n$. The restoring constants are multiplied by the damping function $\exp[-p^2/d_{p2}^4]$ to prevent them from dominating as $p$ becomes large. The damping parameter $d_{p2}$ derives from scaling the reduced density gradient as $s \to s/d_{p2}$, with $d_{p2}$ fit to recover the appropriate norms in Section \ref{sec:settingParams}.
To find $C_\eta$ and $C_{2\mr{x}}$ we take the Taylor expansion of the rSCAN interpolation function (Eq. \ref{eq:rscan_f}) around $\bar\alpha = 1$, noting that $1-\bar\alpha$ is $\mo{2}$,
\begin{align}
\lim_{|\nabla n|\to0} &f^{\mr{rSCAN}}(\bar\alpha)\label{eq:lim_f} \\
&= -(1 - \bar{\alpha})\Delta f_2 + \frac{(1 - \bar{\alpha})^2}{2}\Delta f_4 + \mo{6}, \nonumber
\end{align}
where,
\begin{align}
\Delta f_\mathrm{2} &= \sum_{i=1}^7 ic_{i}, \label{eq:del_f2} \\
\Delta f_\mathrm{4} &= \sum_{i=2}^7i(i - 1)c_{i},
\end{align}
are the first and second derivatives of the interpolation function with respect to $\bar{\alpha}$, evaluated at $\bar\alpha = 1$, respectively.
The $(1 - \bar\alpha)$ term of Eq. \ref{eq:lim_f} indicates a fixed slope for the $\bar\alpha$ dependence of the enhancement factor across the slowly-varying limit, which is found to be numerically problematic and is analyzed further in Section \ref{sec:numerics}. This can be avoided to second order by expressing $(1 - \bar\alpha)$ in terms of $p$ through an integration by parts on the exchange energy density \cite{Sun2015a},
\begin{equation}
\lim_{|\nabla n|\to 0}(1 - \bar\alpha) = \left(\frac{20}{27} + \eta\frac{5}{3}\right)p + \mo{4}. \label{eq:oma_to_p}
\end{equation}
This substitution, derived and discussed in supplemental material A 2,
is used in r$^2$SCAN and identifies,
\begin{equation}
C_\eta = \left(20/27 + \eta5/3\right). \label{eq:c_eta_expr}
\end{equation}
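For the value $\eta = 0.001$ adopted in Section \ref{sec:settingParams}, this gives $C_\eta = 20/27 + 1/600 \approx 0.742$.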
To second order the slowly-varying gradient expansion of r$^2$SCAN is then,
\begin{equation}
\lim_{|\nabla n|\to0} F_\mr{x}^{\mr{r^2SCAN}} = \lim_{|\nabla n|\to0} h_\mr{1x} - C_\eta p \Delta f_2\left (h_\mr{0x} - \lim_{|\nabla n|\to0}h_\mr{1x}\right).
\end{equation}
Finding,
\begin{equation}
\lim_{|\nabla n|\to0}h_\mr{1x} = 1 + \left(\mu + C_\eta C_{2\mr{x}}\right)p + \mo{4},
\end{equation}
and collecting terms gives,
\begin{align}
\lim_{|\nabla n|\to0} &F_\mr{x}^{\mr{r^2SCAN}}\\
&= 1 + \mu p + C_\eta \left[ C_{2\mr{x}} - \Delta f_2 h_{\mr{0x}} + \Delta f_2 \right]p + \mo{4}, \nonumber
\end{align}
equating this to GE2X (second order and below terms of Eq. \ref{eq:GE_x}) and solving for $C_{2\mr{x}}$ gives,
\begin{equation}
C_{2\mr{x}} = -\Delta f_2(1 - h_\mr{0x}) \approx -0.162742, \label{eq:c_2x_expr}
\end{equation}
as shown in supplemental material A 1.
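With $\mu$, $C_\eta$, $C_{2\mr{x}}$, and $d_{p2}$ fixed, the modified $x(p)$ of Eq. \ref{eq:x_change} is straightforward to evaluate. A minimal numerical sketch follows (Python; names are illustrative and not the released subroutines), using $\eta = 0.001$ and the $d_{p2} = 0.361$ determined in Section \ref{sec:settingParams}:
\begin{verbatim}
import numpy as np

ETA   = 1.0e-3
MU    = 10.0 / 81.0
C_2X  = -0.162742                      # Eq. (c_2x_expr)
C_ETA = 20.0 / 27.0 + 5.0 * ETA / 3.0  # Eq. (c_eta_expr)
D_P2  = 0.361

def x_r2scan(p):
    # Replacement x(p) entering h_1x, Eq. (x_change)
    return (C_ETA * C_2X * np.exp(-p**2 / D_P2**4) + MU) * p
\end{verbatim}
For small $p$ this reduces to $(\mu + C_\eta C_{2\mr{x}})p$, while for large $p$ the damping leaves only the $\mu p$ term.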
GE4X can be restored to give the ``r$^4$SCAN'' functional by including a further correcting term in the exchange enhancement factor outside the interpolation. This introduces three more constants, derived in supplemental material A 3,
for all terms in Eq. \ref{eq:GE_x}:
\begin{align}
F_\mr{x}^\mr{r^4SCAN}(p, \bar{\alpha}) &= \left\{h_\mr{1x}(p) + f_\mr{x}(\bar{\alpha})\left [h_\mr{0x} - h_\mr{1x}(p)\right ] \right. \nonumber \\
& \left. +\Delta F_4(p,\bar{\alpha}) \right\} g_\mr{x}(p) \label{eq:full_delfx} \\
\Delta F_4(p,\bar{\alpha}) &= \left\{C_{2\mr{x}}\left[(1-\bar{\alpha}) - C_\eta p\right] + C_{\bar{\alpha}\ba}(1-\bar{\alpha})^2 \right. \nonumber \\
& \left. +C_{p\bar{\alpha}}p(1-\bar{\alpha}) + C_{pp}p^2 \right\}\Delta F_4^{\mr{damp}}(p,\bar{\alpha}) \\
\Delta F_4^{\mr{damp}}(p,\bar{\alpha}) &= \frac{2\bar{\alpha}^2}{1+\bar{\alpha}^4} \exp\left[-\frac{(1-\bar{\alpha})^2 }{d_{\bar{\alpha} 4}^2} - \frac{p^2}{d_{p4}^4}\right] \label{eq:delf4}\\
C_{\bar{\alpha}\ba} &= \frac{73}{5000} - \frac{\Delta f_4}{2}[h_\mr{0x}-1] \approx -0.0593531 \label{eq:c_ba_ba}\\
C_{p\bar{\alpha}} &= \frac{511}{13500} - \frac{73}{1500}\eta - \Delta f_2[C_\eta C_{2\mr{x}} + \mu] \nonumber \\
& \approx 0.0402684 \label{eq:c_p_ba}\\
C_{pp} &= \frac{146}{2025}\left\{\eta\frac{3}{4} + \frac{2}{3}\right\}^2 - \frac{73}{405}\left\{\eta\frac{3}{4} + \frac{2}{3}\right\} \nonumber \\
& + \frac{\left(C_\eta C_{2\mr{x}} + \mu\right)^2}{\kappa_1} \approx -0.0880769. \label{eq:c_p_p}
\end{align}
A further damping function, $\Delta F_4^{\mr{damp}}$, is included to prevent the correction terms from dominating as $(1 - \bar\alpha)$ and $p$ become large, introducing $d_{\bar\alpha 4}$ and $d_{p4}$ as additional parameters. These are again set to recover the appropriate norms in Section \ref{sec:settingParams}. For the fourth-order expansion, the integration by parts substitution of Eq. \ref{eq:oma_to_p} cannot be applied, and hence $(1 - \bar\alpha)$ cannot be removed.
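For illustration, the damping factor of Eq. \ref{eq:delf4} can be sketched as follows (Python; names illustrative), using the $d_{\bar\alpha 4} = 0.178$ and $d_{p4} = 0.802$ determined in Section \ref{sec:settingParams}:
\begin{verbatim}
import numpy as np

D_BA4 = 0.178
D_P4  = 0.802

def delta_f4_damp(p, ba):
    # Damping factor multiplying the GE4X correction terms, Eq. (delf4)
    return (2.0 * ba**2 / (1.0 + ba**4)) * np.exp(
        -(1.0 - ba)**2 / D_BA4**2 - p**2 / D_P4**4)
\end{verbatim}
The prefactor $2\bar\alpha^2/(1+\bar\alpha^4)$ suppresses the correction as $\bar\alpha \to 0$ and as $\bar\alpha \to \infty$, while the Gaussian factors switch it off away from the slowly-varying regime $\bar\alpha \approx 1$, $p \approx 0$.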
\subsubsection{Correlation}
The SCAN model of the correlation energy per electron $\varepsilon_{\mr{c}} = \epsilon_{\mr{c}}/n$ is
\begin{equation}
\varepsilon_\mr{c}^{\mr{SCAN}} = \varepsilon_\mr{c}^1 + f_\mr{c}^{\mr{SCAN}}(\alpha)\left[\varepsilon_\mr{c}^0 - \varepsilon_\mr{c}^1\right],
\end{equation}
with the $\alpha=0$ correlation energy per electron given by
\begin{align}
\varepsilon_\mr{c}^0 &= \left(\varepsilon_\mr{c}^\mr{LDA0} + H_\mr{c}^0\right)g_\mr{c}(\zeta), \\
\varepsilon_\mr{c}^\mr{LDA0} &= -\frac{b_{1\mr{c}}}{1 + b_{2\mr{c}}\sqrt{r_{\mathrm{s}}} + b_{3\mr{c}}r_{\mathrm{s}}}, \\
H_\mr{c}^0 &= b_{1\mr{c}}\ln\{1 + w_0[ 1 - g_\infty(p)]\}, \\
w_0 &= \exp[-\varepsilon_\mr{c}^{\mr{LDA0}}/b_{1\mr{c}}] - 1, \\
g_\infty(p) &= (1 + 4\chi_\infty p)^{-1/4},\\
g_\mr{c}(\zeta) &= \{ 1 - 2.3631[d_\mr{x}(\zeta) - 1]\}(1 - \zeta^{12}),
\end{align}
with $d_\mr{x}(\zeta) = [(1 + \zeta)^{4/3} + (1 - \zeta)^{4/3}]/2$.
Similarly, the $\alpha=1$ limit is given by
\begin{align}
\varepsilon_\mr{c}^{1} &= \varepsilon_\mr{c}^{\mr{LSDA}} + H_\mr{c}^1, \\
H_\mr{c}^1 &= \gamma\phi^3\ln\{ 1 + w_1[1 - g(y)]\}, \\
w_1 &= \exp\left[-\frac{\varepsilon_\mr{c}^{\mr{LSDA}}}{\gamma\phi^3}\right] - 1, \\
g(y) &= (1 + 4y)^{-1/4}, \\
y &= \frac{\beta(r_{\mathrm{s}})}{\gamma w_1}t^2, \\
\beta(r_{\mathrm{s}}) &= \beta_\mr{MB}\frac{1 + A r_{\mathrm{s}}}{1 + B r_{\mathrm{s}}},
\end{align}
where $b_\mr{1c} = 0.0285764$, $b_\mr{2c} = 0.0889$, $b_\mr{3c} = 0.125541$, $\chi_\infty = 0.128026$, $\gamma = 0.0310907$, $\beta_{\mr{MB}} = 0.066725$, $A = 0.1$, and $B = 0.1778$. $\varepsilon_\mr{c}^\mr{LSDA}$ is the local spin-density approximation for correlation from Ref. \cite{Perdew1992a}.
As r++SCAN takes the same correlation model, the violation of Eq. \ref{eq:interp_const} by the rSCAN interpolation function breaks GE2C. The GE2C correction terms are restored to r++SCAN by replacing $g(y)$ in $\epsilon_{\mr{c}}^1$ with,
\begin{align}
g(y, \Delta y) =& \left [1 + 4(y - \Delta y)\right ]^{-1/4}, \label{eq:g(y_dy)}\\
\Delta y =& \frac{C_{2\mr{c}}}{27 \gamma d_s(\zeta) \phi^3 w_1} \left\{20r_{\mathrm{s}}\left[g_\mr{c}(\zeta)\frac{\partial \varepsilon_c^{\text{LDA0}}}{\partial r_{\mathrm{s}}} - \frac{\partial \varepsilon_c^{\text{LSDA}}}{\partial r_{\mathrm{s}}} \right] \right. \nonumber \\
& \left. - 45\eta\left[\varepsilon_c^{\text{LDA0}}g_\mr{c}(\zeta) - \varepsilon_c^{\text{LSDA}}\right] \vphantom{\frac{\partial \varepsilon_c^{\text{LSDA}}}{\partial r_{\mathrm{s}}}} \right\} p \exp[-p^2/d_{p2}^4] \label{eq:del_y}
\end{align}
where the damping function $\exp[-p^2/d_{p2}^4]$ is the same as in Eq. \ref{eq:x_change}. Similarly to exchange, we restore the second order slowly-varying density gradient expansion when,
\begin{equation}
C_{2\mr{c}} = \Delta f_2 \approx -0.711402.
\end{equation}
The derivation of this expression is shown in supplemental material A 4.
Making these replacements to r++SCAN gives the ``r$^{2}$SCAN'' functional, which breaks only GE4X. Including the full correction of Eq. \ref{eq:full_delfx} gives the ``r$^4$SCAN'' functional, which obeys all the exact constraints SCAN does. For convenience, a collected definition of the working equations for all new functionals is given in supplemental material C.
\subsubsection{Summary of Changes}
Here, we summarize the changes made from SCAN for each of the functionals.
\begin{enumerate}
\item{rSCAN replaces $\alpha$ with $\alpha^\prime$, which contains two regularization parameters, $\tau_r=10^{-4}$ and $\alpha_r = 10^{-3}$. It also replaces the SCAN interpolation function with a polynomial between $0 \leq \alpha^\prime \leq 2.5$.}
\item{r++SCAN evolves from rSCAN by replacing $\alpha^\prime$ with $\bar\alpha$, which uses only a single regularization parameter, $\eta = 0.001$.}
\item{r$^2$SCAN inherits all the changes of r++SCAN. Additionally, for exchange, it replaces $x(p, \alpha)$ in $h_{1\mr{x}}$ with Eq. \ref{eq:x_change}. For correlation, it replaces $g(y)$ with Eqs. \ref{eq:g(y_dy)} and \ref{eq:del_y} in $\epsilon_\mr{c}^1$.}
\item{r$^4$SCAN inherits all the changes from r$^2$SCAN. Additionally, for exchange, it replaces $F_\mr{x}$ with Eq. \ref{eq:full_delfx}, which introduces $\Delta F_4$ of Eq. \ref{eq:delf4}.}
\end{enumerate}
\section{\label{sec:numerics}Numerical Challenges}
The corrections that restore the slowly-varying density gradient expansions for exchange and correlation contain terms linear in $(1 - \bar\alpha)$. These terms are necessary to restore the 2nd or 4th order gradient expansion, for example of Eq. \ref{eq:GE_x} for exchange, if the integration by parts is not used as it was for r\tss{2}SCAN\xspace in Eq. \ref{eq:oma_to_p}. These corrections inevitably twist the slope of the interpolation function $f(\bar\alpha)$ toward that of $F_{\mr{x}}^{\mr{GE}}$ with respect to $\bar\alpha$ around $\bar\alpha = 1$ as $|\nabla n|\to 0$, as illustrated in Fig. \ref{fig:fx_osc}. This introduces oscillations into the derivatives of the enhancement factor with respect to $\bar\alpha$, and hence into the overall exchange-correlation potential. Such oscillations are undesirable and reintroduce the numerical problems rSCAN regularizes away. As the gradient expansion constraint requires \begin{equation}
\left.\frac{\partial F_{\mr{x}}}{\partial \bar\alpha}\right|_{\bar\alpha = 1, p = 0} \propto \left.\frac{df(\bar\alpha)}{d\bar\alpha}\right|_{\bar\alpha = 1},
\end{equation}
this oscillation in the derivatives must be present, at least in the interpolation scheme discussed here, and cannot be removed by damping. Figure \ref{fig:fx_osc} compares the uncorrected exchange enhancement factor ($d_{\bar\alpha 4} \to 0$), the exchange enhancement factor with undamped correction terms ($d_{\bar\alpha 4} \to \infty$), and that with the $d_{\bar\alpha 4} = 0.178$ determined in Section \ref{sec:settingParams}, showing this effect.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{fx_kink_annotated.pdf}
\caption{a) Exchange enhancement factor for r$^4$SCAN as a function of $\bar\alpha$ at $p = 0$. The uncorrected enhancement with $d_{\bar\alpha 4} \to 0$ (red) is contrasted against the un-damped corrections with $d_{\bar\alpha 4} \to \infty$ (black), the proposed r$^4$SCAN damping of $d_{\bar\alpha 4} = 0.178$ (dashed, dark red). b) Derivative of the exchange enhancement with respect to $\bar\alpha$ at $p = 0$ for the same conditions.}
\label{fig:fx_osc}
\end{figure}
These numerical problems are not present in r$^2$SCAN, as the corrections do not depend upon $(1-\bar{\alpha})$. Thus, the corresponding oscillation in derivative is avoided, allowing r$^2$SCAN to recover GE2X and GE2C whilst maintaining a smooth potential. As the integration by parts is not possible to fourth order, r$^4$SCAN necessarily suffers an oscillatory XC potential in order to recover GE4X.
\section{\label{sec:settingParams} Determining parameters}
The regularization of the $\bar\alpha$ indicator in r++SCAN, r$^2$SCAN, and r$^4$SCAN is controlled by the parameter $\eta$, with larger values increasing regularization strength. We find performance is largely insensitive to $\eta$ within the range of $0 \le \eta \le 0.001$ and take the upper value of $\eta = 0.001$.
We introduce a single damping parameter, $d_{p2}$, in r$^2$SCAN through Eqs. \ref{eq:x_change} and \ref{eq:del_y}, and set it using the appropriate norms philosophy of the SCAN functional. The parameter was chosen to minimize the sum of the mean absolute percentage errors in XC energy for four rare gas atoms, Ne, Ar, Kr, and Xe (evaluated for spherical Hartree--Fock orbitals \cite{Clementi1974} relative to reference energies from Refs. \cite{Becke1988, Chakravorty1993, McCarthy2011}), and four jellium surface formation energies with $r_{\mathrm{s}} =$ 2, 3, 4, and 6 bohr (relative to reference energies from Ref. \cite{Wood2007}). As the parameters are not fit to any bound systems, we regard the resulting functionals as non-empirical.
The objective error as a function of the damping parameter $d_{p2}$ is shown in Figure \ref{fig:d2_Set}. Setting the damping parameter too high degrades accuracy for the rare gas atoms (while mildly improving the jellium surface formation energies), as the gradient expansion terms dominate too far from $|\nabla n| \to 0$. Conversely, setting $d_{p2}$ too small degrades accuracy for the jellium surfaces, as the second order gradient expansion is not sufficiently corrected. As a sharper damping function causes sharper features in the XC potential, we take the largest value of $d_{p2}$ that meets the accuracy threshold defined by SCAN: a mean absolute percentage error (MAPE) of $0.1\%$ for rare gas XC energies and $5\%$ for jellium surfaces. The resulting value is $d_{p2} = 0.361$.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{d2_paper_plot_fix.pdf}
\caption{The mean absolute percentage error for (left axis, blue) the exchange-correlation energies of Ne, Ar, Kr, Xe \cite{Becke1988, Chakravorty1993, McCarthy2011}, and (right axis, red) the exchange-correlation jellium surface energy for $r_{\mathrm{s}} = \{2,3,4,6\}$ \cite{Wood2007} as a function of second order gradient expansion damping parameter $d_{p2}$ for r\tss{2}SCAN\xspace. The optimal value is chosen as the largest for which the rare gas error is $< 0.1\%$ and the jellium error is $< 5\%$.}
\label{fig:d2_Set}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=0.36\textwidth]{fourth_order_fitting.pdf}
\caption{Mean absolute percentage error in rare gas XC energy (upper) and jellium surface exchange-correlation energy (lower) as a function of damping parameters $d_{\bar\alpha 4}$ and $d_{p4}$. Optimal parameters are identified by the white circled cross at $d_{p4} = 0.802, d_{\bar\alpha 4} = 0.178$.}
\label{fig:setting_r4_params}
\end{figure}
Two additional parameters are introduced in r$^4$SCAN that control damping of the fourth-order gradient expansion terms. These were determined similarly, as the values that minimize a normalized sum of the rare gas and jellium surface mean absolute percentage errors. The minimizing parameters were found to be $d_{\bar\alpha 4} = 0.178$ and $d_{p4} = 0.802$, as shown in Figure \ref{fig:setting_r4_params}.
\section{\label{sec:results} Results}
\subsection{Enhancement Factors and Derivatives\label{sec:Xe_osc}}
An important principle in functional design is to take an ``Occam's razor'' approach and determine a functional that is free of twists and kinks. In this way the functional avoids over-fitting to data and ensures smooth functional derivatives that are easy to render on numerical grids.
Figure \ref{fig:Xe_derivatives} compares the XC enhancement factors of SCAN, r$^2$SCAN, and r$^4$SCAN for the xenon atom. The SCAN enhancement factor shows sharp plateau-like regions arising from the twists in its interpolation function around $\alpha = 1$. The smooth polynomial interpolation function removes these plateaus from r$^2$SCAN, though some twists are re-introduced in r$^4$SCAN by the $(1 - \bar\alpha)$ terms in the GE4X restoration.
The effect of twists in $F_\mr{xc}$ can be seen in the semi-local and non-local components of the XC potential, shown in Figure \ref{fig:Xe_derivatives}. The SCAN functional shows sharp oscillations around $\alpha = 1$ points and sharp drops in its non-local component. In contrast, the r\tss{2}SCAN\xspace functional is a smooth function of its ingredients, and hence has smooth semi-local and non-local components in its XC potential. While the potential components of r\tss{2}SCAN\xspace and r$^4$SCAN coincide for much of space, they differ significantly around $\bar{\alpha} = 1$ points. Here the $(1-\bar\alpha)$ correction terms in r$^4$SCAN required for GE4X cause sharp oscillations that reintroduce the oscillatory behavior we aim to remove. As these terms cannot be removed by partial integration to 4th order in $\nabla n$, we conclude that GE4X is incompatible with functional smoothness under the present SCAN interpolation-based model: one must either twist the interpolation function to enforce Eq. \ref{eq:interp_const}, or include correcting terms that re-introduce oscillatory factors.
\begin{figure}[h]
\centering
\includegraphics[width=0.45\textwidth]{Paper_potentials_Xe.pdf}
\caption{The XC enhancement factor (top), the multiplicative component of the XC potential (middle), and the non-local component, i.e., the derivative of the XC energy density with respect to the orbital dependent kinetic energy density, $\tau$ (bottom) in the Xe atom. Shown for the SCAN, r\tss{2}SCAN\xspace, and r$^4$SCAN functionals, calculated from reference Hartree--Fock Slater orbitals \cite{Clementi1974, Furness2021a}. Points where $\alpha = 1$ are shown by black vertical lines.}
\label{fig:Xe_derivatives}
\end{figure}
\subsection{Appropriate Norms}
An appropriate norm is defined in Ref. \cite{Sun2015} as ``systems for which semilocal functionals can be exact or extremely accurate''. Here, like SCAN, we take these as the exchange and correlation energies of four rare gas atoms (Ne, Ar, Kr, and Xe), the exchange and correlation surface energies of four jellium slabs ($\overline{r_{\mathrm{s}}}=$ 2, 3, 4, and 6), and the interaction energy of Ar\mol{2} at repulsive inter-atomic distances ($R_{\mr{Ar-Ar}} =$ 1.6, 1.8, and 2.0 \AA). Table \ref{tab:norms} compares the accuracy of SCAN, rSCAN, r++SCAN, r$^2$SCAN, and r$^4$SCAN for the appropriate norms.
Care must be taken when computing the jellium surface energy for rSCAN as the uniform bulk density energy is changed by the $\alpha^\prime$ regularization. To evaluate the exchange-correlation contribution to the jellium surface formation energy, we compute \cite{Lang1970}
\begin{equation}
\sigma_\mr{xc}(\overline{r_{\mathrm{s}}}) = \int_{-\infty}^{\infty} [\varepsilon_\mr{xc}(n,\nabla n, \tau) - \varepsilon_\mr{xc}(\overline{n},0,\overline{\tau_U}) ]n(x) dx,
\end{equation}
where $\overline{r_{\mathrm{s}}} = (4\pi \overline{n}/3)^{-1/3}$ is the density
parameter of the corresponding bulk jellium, and $\overline{\tau_U} =
3(3\pi^2)^{2/3}\overline{n}^{5/3}/10$ is its kinetic energy density. Here, the surface normal is taken along the $x$ direction.
This ensures that the uniform density limit of a given functional is used, regardless of whether that uniform limit is the LSDA.
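In practice this integral can be evaluated by simple quadrature over the surface density profile. A minimal sketch follows (Python; the argument names and the callable $\varepsilon_\mr{xc}$ are illustrative assumptions, not part of any released code):
\begin{verbatim}
import numpy as np

def sigma_xc(x, n, grad_n, tau, eps_xc, n_bulk):
    # x, n, grad_n, tau: profiles across the surface (1D arrays)
    # eps_xc(n, grad_n, tau): XC energy per electron of the functional
    # n_bulk: bulk jellium density
    tau_u_bulk = 0.3 * (3.0 * np.pi**2)**(2.0/3.0) * n_bulk**(5.0/3.0)
    integrand = (eps_xc(n, grad_n, tau)
                 - eps_xc(n_bulk, 0.0, tau_u_bulk)) * n
    return np.trapz(integrand, x)
\end{verbatim}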
Building upon our previous example of the valence density in solid sodium: when $r_{\mathrm{s}}=4$, the rSCAN exchange uniform density limit is
\[
\epsilon_\mr{x}^\mr{rSCAN}(\overline{n},0,\overline{\tau_U})\bigg|_{\overline{r_{\mathrm{s}}}=4} \approx 1.051 \epsilon_\mr{x}^\mr{LDA}(\overline{r_{\mathrm{s}}}),
\]
making a substantial error over LDA exchange. For $\overline{r_{\mathrm{s}}}=6$, this error is increased to roughly 14\%.
The importance of recovering the second order gradient expansion is seen in the relative accuracy of the functionals for the rare gas atoms. The two functionals which do not recover the gradient expansions (rSCAN and r++SCAN) have MAPEs of $\approx 0.25\%$, whereas the functionals that do have MAPEs of $\approx 0.1\%$. Restoring the fourth order gradient expansion for exchange improves accuracy further, though r$^2$SCAN already has similar accuracy to SCAN for all the appropriate norms, suggesting GE4X is less important for these systems. Outside of these differences, all functionals performed similarly for the appropriate norm systems.
\begin{table*}
\centering
\caption{Accuracy for appropriate norms. Rare gas and jellium surface exchange-correlation energies given in Hartrees ($E_h$), Ar\mol{2} interaction energies in kcal/mol. Benchmark data for rare gas atom exchange-correlation, jellium surface exchange-correlation, and Ar\mol{2} interaction energy are from Refs. \cite{Becke1988, Chakravorty1993, McCarthy2011}, \cite{Wood2007}, and \cite{Patkowski2005} respectively. Error summaries are given as mean absolute percentage errors (MAPE). Full calculation details in main text.\label{tab:norms}}
\input{./norms_table}
\end{table*}
\subsection{Atomization Energies}
The work of Ref. \cite{Mejia-Rodriguez2019} shows the performance of rSCAN is relatively poor for atomization energy prediction, as measured by its increased error for the G3 test set of molecules \cite{Curtiss2000}. Table \ref{tab:G3_stats} compares the errors for this test set for all the functionals derived above. Consistent with Ref. \cite{Mejia-Rodriguez2019} we find a large error from rSCAN, with this error only slightly corrected in r++SCAN. As in Ref. \citenum{Furness2020c}, restoration of GE2X and GE2C in r$^2$SCAN restores the good accuracy of SCAN, showing the importance of these constraints for atomization energies. The good accuracy of r$^2$SCAN suggests that recovering GE4X is not essential for accurately predicting atomization energies.
\begin{table}
\centering
\caption{Summary of atomization energy errors (in kcal/mol) for the G3 test set \cite{Curtiss2000} using the most dense numerical integration grid (\textsc{Turbomole} level 7).}
\begin{tabular}{l|rrrrr}
& SCAN & rSCAN & r++SCAN & r$^2$SCAN & r$^4$SCAN \\
\hline
ME & -5.036 & -14.010 & -12.912 & -5.042 & -6.939 \\
MAE & 6.121 & 14.258 & 13.239 & 5.866 & 7.716
\end{tabular}
\label{tab:G3_stats}
\end{table}
The improved numerical performance of the functionals is illustrated by examining the convergence of atomization energy predictions as a function of numerical grid density, as shown in Figure \ref{fig:G3_progression}. The original SCAN functional shows wild variation with changing grid density, with a range of over 6 kcal/mol in mean absolute error. While there is some indication that convergence is approached for the most dense grids, the results from more computationally efficient grids are problematic and clearly untrustworthy.
All four regularized functionals show very fast grid convergence, with even the sparse grids agreeing closely with the dense grids. Given the sharp oscillations in Figure \ref{fig:Xe_derivatives} and the analysis of Section \ref{sec:numerics}, it is somewhat unexpected that r$^4$SCAN shows good convergence with grid density. We attribute the improved performance over SCAN to the reduction in plateau-like behavior of $F_\mr{xc}$, but caution that grid convergence will likely be more system dependent for r$^4$SCAN than for r++SCAN and r$^2$SCAN as a result of the oscillations.
It is similarly unexpected to find that the accuracy of r\tss{2}SCAN\xspace is degraded by the inclusion of GE4X in r$^4$SCAN. This supports our previous conclusion that GE4X is not important for the properties tested here and elsewhere \cite{Ehlert2021b, Grimme2021}. We take the mild degradation of G3 accuracy upon extending to r$^4$SCAN as further evidence that this method of including GE4X in interpolation-based meta-GGA functionals is problematic. The inclusion of two additional fitting parameters beyond those in r\tss{2}SCAN\xspace may also contribute.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{G3_Updated.pdf}
\caption{Mean absolute error of atomization energies (kcal/mol) for the G3 set of 226 molecules \cite{Curtiss2000} as a function of increasing numerical integration grid density expressed relative to the smallest grid. The grids chosen were defined by the default \textsc{Turbomole} grid levels. The mean number of grid points per atom over the G3 set is about 1,326 for the smallest (gridsize=1) and 53,590 for the largest (gridsize=7) grid shown here.
}
\label{fig:G3_progression}
\end{figure}
\subsection{Further Testing}
Beyond atomization energies, we have tested the accuracy of the progression of SCAN-like functionals for the interaction energies of closed-shell complexes (S22)\cite{Jurecka2006} and reaction barrier heights (BH76)\cite{Zhao2005}, summarized in Table \ref{tab:S22_BH76}. Additionally, Table \ref{tab:LC20_table} summarizes accuracy for the LC20 set of lattice constants for solids \cite{Sun2011}, obtained by fitting the stabilized jellium equation of state \cite{Staroverov2004} to single-point energy calculations at a range of lattice volumes. All five functionals gave comparably good accuracy across all three test sets, showing that SCAN's good performance is not significantly changed by the regularizations or exact constraint restoration for these properties.
\begin{table}
\caption{Mean error (ME) and Mean absolute error (MAE) of SCAN\cite{Sun2015}, rSCAN\cite{Bartok2019}, r++SCAN, r\tss{2}SCAN\xspace\cite{Furness2020c}, and r$^4$SCAN for the S22 set of 22 interaction energies between closed shell complexes\cite{Jurecka2006}, and the BH76 set of 76 chemical barrier heights\cite{Zhao2005}. Full data is presented in supplemental material Tables
VI and VII.
All calculations use the most dense integration grid (\textsc{Turbomole} level 7).}
\label{tab:S22_BH76}
\centering
\begin{tabular}{cc|rrrrr}
& & SCAN & rSCAN & r++SCAN & r$^2$SCAN & r$^4$SCAN \\
\hline
\multirow{2}{*}{S22} & ME & -0.524 & -1.153 & -0.554 & -0.937 & -0.874 \\
& MAE & 0.786 & 1.273 & 0.846 & 1.057 & 1.015 \\
\hline
\multirow{2}{*}{BH76} & ME & -7.653 & -7.365 & -7.488 & -7.125 & -7.463 \\
& MAE & 7.724 & 7.434 & 7.556 & 7.182 & 7.527 \\
\end{tabular}
\end{table}
\begin{table}
\caption{Error in lattice constant prediction for the LC20 lattice constant test set \cite{Sun2011}. Errors are in $\text{\AA}$ relative to zero-point anharmonic expansion corrected experimental data from Ref. \cite{Hao2012}. Lattice constants were obtained by fitting the stabilized jellium equation of state \cite{Staroverov2004} to single point energy calculations at a range of lattice volumes.}
\label{tab:LC20_table}
\centering
\begin{tabular}{c|rrrrr}
\hline
\hline
& SCAN & rSCAN & r++SCAN & r$^2$SCAN & r$^4$SCAN \\ \hline
Ag & 0.012 & 0.028 & 0.026 & 0.034 & 0.017 \\
Al & -0.012 & -0.027 & -0.031 & -0.032 & -0.014 \\
Ba & 0.049 & 0.100 & 0.046 & 0.076 & 0.062 \\
C & -0.004 & -0.001 & -0.001 & 0.005 & 0.002 \\
Ca & -0.009 & 0.017 & -0.011 & 0.018 & 0.016 \\
Cu & -0.030 & -0.023 & -0.025 & -0.020 & -0.027 \\
GaAs & 0.020 & 0.031 & 0.026 & 0.029 & 0.017 \\
Ge & 0.029 & 0.043 & 0.038 & 0.039 & 0.028 \\
Li & 0.011 & 0.021 & 0.006 & 0.016 & 0.010 \\
LiCl & 0.016 & 0.021 & 0.017 & 0.034 & 0.032 \\
LiF & 0.004 & 0.008 & 0.008 & 0.021 & 0.020 \\
MgO & 0.018 & 0.019 & 0.018 & 0.027 & 0.024 \\
NaCl & 0.010 & 0.025 & 0.015 & 0.036 & 0.036 \\
NaF & 0.006 & 0.015 & 0.012 & 0.028 & 0.028 \\
Na & -0.007 & 0.024 & -0.012 & 0.004 & 0.031 \\
Pd & 0.016 & 0.026 & 0.026 & 0.032 & 0.019 \\
Rh & -0.005 & 0.006 & 0.005 & 0.008 & -0.005 \\
Si & 0.005 & 0.012 & 0.009 & 0.018 & 0.016 \\
SiC & 0.002 & -0.001 & -0.002 & 0.006 & 0.007 \\
Sr & 0.039 & 0.064 & 0.019 & 0.056 & 0.095 \\
\hline
ME & 0.009 & 0.020 & 0.009 & 0.022 & 0.021 \\
MAE & 0.015 & 0.025 & 0.017 & 0.027 & 0.025 \\
\hline
\hline
\end{tabular}
\end{table}
\section{\label{sec:conclusions} Conclusions}
To summarize, we have shown how exact constraints obeyed by SCAN are broken by the regularizations in rSCAN. Through this analysis we have shown how exact constraint adherence can be restored, and how this can be achieved without sacrificing the good numerical performance of rSCAN. This results in three new functionals with increasing exact constraint adherence: r++SCAN, r$^2$SCAN, and r$^4$SCAN. Additional parameters introduced in the new functionals are set without reference to any real bonded systems; thus we can still regard the resulting functionals as non-empirical and expect them to be applicable to a wide range of systems. Figure \ref{fig:G3_progression} suggests that restoring GE4X in r$^4$SCAN gives similar accuracy to SCAN with some improvement in grid efficiency. We therefore expect the new r$^2$SCAN functional to remain the preferred choice for situations where the accuracy of SCAN is desired but its use is prohibited by poor numerical performance \cite{Kingsbury2021,Ning2021}.
Further improvement over r\tss{2}SCAN\xspace might be achieved by a smoother and fuller incorporation of the fourth-order density-gradient terms for the exchange energy in a SCAN-like functional. Work on this is underway.
Figure \ref{fig:G3_progression} shows that rSCAN and r++SCAN, which lose the correct second-order gradient expansions for densities that vary slowly over space, also lose accuracy for atomization energies of molecules, and that the restoration of this limit in SCAN, r\tss{2}SCAN\xspace, and r\tss{4}SCAN\xspace also restores accurate atomization energies. This result is in line with arguments made in Ref. \cite{Kaplan2020}.
Experience with SCAN and r\tss{2}SCAN\xspace (and with atomic densities \cite{Medvedev2017}) suggests that smoothness at fixed electron number could be elevated to the status of an 18$^{\text{th}}$ exact constraint that a meta-GGA can satisfy, or at least to the status of a construction principle: By Occam's Razor, the simplest assumption, consistent with what is known, should be preferred.
\begin{acknowledgments}
J.F., J.N., and J.S. acknowledge the support of the U.S. DOE, Office of Science, Basic Energy Sciences Grant No. DE-SC0019350. J.S. also acknowledges the support of the US National Science Foundation under Grant No. DMR-2042618.
A.D.K. acknowledges the support of the U.S. Department of Energy, Office of Science, Basic Energy Sciences, through Grant No. DE-SC0012575 to the Energy Frontier Research Center: Center for Complex Materials from First Principles, and also support from Temple University.
JPP acknowledges the support of the US National Science Foundation under Grant No. DMR-1939528.
We thank Albert Bart\'ok-Partay and Daniel Mej\'ia-Rodr\'iguez for their invaluable discussions around the ideas presented here. J.P.P. and J.S. thank Natalie Holzwarth for pointing out that the SCAN exchange-correlation potential for an atom diverges in the tail of the density, making pseudo-potential construction difficult.
\end{acknowledgments}
\section*{Materials availability}
r\tss{2}SCAN\xspace and r\tss{4}SCAN\xspace subroutines are made freely available at \url{https://gitlab.com/dhamil/r2scan-subroutines}. The data that support the findings of this study are available from the corresponding author upon reasonable request.
\input{Restoring_rSCAN_bib}
\input{SupMat}
\end{document}
\section*{Supplemental Material: Construction of meta-GGA functionals through restoration of exact constraint adherence to regularized SCAN functionals.}
\end{widetext}
\section{The slowly-varying limit of r\tss{2}SCAN\xspace and r\tss{4}SCAN\xspace \label{AP:exc_deriv}}
This Appendix sketches the derivation of r\tss{2}SCAN\xspace and r\tss{4}SCAN\xspace. We will presume that both functionals have the
structure of Eq. 1,
where the interpolation functions $f_\mr{x/c}(\bar{\alpha})$ are taken from rSCAN. As discussed in the main text, this functional is termed r++SCAN\xspace.
From the starting point of r++SCAN\xspace, we will derive the corrections needed to restore the second order gradient expansions for exchange and correlation (r\tss{2}SCAN\xspace), and those for the fourth-order gradient expansion for exchange (r\tss{4}SCAN\xspace). Thus r\tss{4}SCAN\xspace can be viewed as a correction to r\tss{2}SCAN\xspace exchange, and we begin with r\tss{2}SCAN\xspace.
\subsection{The gradient expansion of $\bar{\alpha}$}
The gradient expansion of $\tau[n]$, the one-body, spin-unpolarized kinetic energy density, was derived in Ref. \cite{Brack1976}
\begin{equation}
\tau[n] = \tau_U(n) + \frac{1}{6} \nabla^2 n + \frac{1}{72}\frac{|\nabla n|^2}{n} + \mo{4}
\end{equation}
(in Hartree atomic units). Here, $\mo{4}$ indicates that the next term in the series of higher order is of the form $|\nabla n|^4$, $\nabla^2 n\, |\nabla n|^2$, $(\nabla^2 n)^2$, etc. The gradient expansion is more useful in terms of dimensionless (length-scale invariant) variables
\begin{align}
p(n) &= \left[ \frac{|\nabla n|}{2 k_{\mr{F}}n}\right]^2 = \frac{3}{40}\frac{|\nabla n|^2}{\tau_U(n)n} \\
q(n) &= \frac{\nabla^2 n}{4 k^2_{\mr{F}}n} = \frac{3}{40}\frac{\nabla^2 n}{\tau_U(n)},
\end{align}
where we have used
\begin{align}
\tau_U(n) &= \frac{3}{10}k_{\mr{F}}^2 n \\
k_{\mr{F}} &= (3 \pi^2 n)^{1/3}.
\end{align}
Then the gradient expansion of $\tau[n]$ can be cast as
\begin{equation}
\tau[n] = \tau_U(n)\left[1 + \frac{20}{9}q(n) + \frac{5}{27}p(n) \right] + \mo{4}.
\end{equation}
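Here the coefficients follow directly from the definitions of $p$ and $q$, since
\[
\frac{1}{6}\nabla^2 n = \frac{20}{9}\,\tau_U(n)\, q(n), \qquad
\frac{1}{72}\frac{|\nabla n|^2}{n} = \frac{5}{27}\,\tau_U(n)\, p(n).
\]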
The integrated kinetic energy scales with the spin-densities in the same manner as the exchange energy \cite{Oliver1979}
\begin{equation}
T[n_{\uparrow},n_{\downarrow}] = \frac{1}{2}\{ T[2n_{\uparrow}]+T[2n_{\downarrow}]\}.
\end{equation}
This implies a local spin-scaling relation
\begin{equation}
\tau[n_{\uparrow},n_{\downarrow}] = \frac{1}{2}\{ \tau[2n_{\uparrow}]+\tau[2n_{\downarrow}] \}.
\end{equation}
We will seek a gradient expansion in terms of $n$ and $\zeta$, where
\begin{equation}
\zeta = \frac{n_{\uparrow} - n_{\downarrow}}{n_{\uparrow} + n_{\downarrow}},
\end{equation}
rather than the individual spin-densities. After simplification, one finds that
\begin{align}
\tau(n_{\uparrow},n_{\downarrow}) &= \tau_U(n)d_s(\zeta) \left[ 1 + \frac{20}{9d_s(\zeta)}q + \frac{5}{27d_s(\zeta)} p \right. \nonumber \\
& \left. + \frac{5}{27} \frac{\xi^2}{d_s(\zeta)(1-\zeta^2)} \right] + \mo{4}, \label{eq:tau_spin_res}
\end{align}
where
\begin{equation}
d_s(\zeta) = [(1 + \zeta)^{5/3} + (1 - \zeta)^{5/3} ]/2
\end{equation}
describes the spin-scaling of the uniform electron gas kinetic energy density, and
\begin{equation}
\xi = \frac{|\nabla \zeta|}{2 k_{\mr{F}}},
\end{equation}
which also appeared in TPSS \cite{Tao2003}.
The spin-resolved $\bar{\alpha}$ is then
\begin{align}
\bar{\alpha}(n_{\uparrow},n_{\downarrow}) &= \frac{\tau(n_{\uparrow},n_{\downarrow}) - \tau_W}{\tau_U(n)d_s(\zeta) + \eta \tau_W} \\
& = \left[\frac{\tau}{\tau_U(n)d_s(\zeta)} - \frac{5}{3d_s(\zeta)}p \right]\left[1 + \frac{5\eta}{3d_s(\zeta)}p \right]^{-1}.
\end{align}
After performing a Taylor expansion in $p$, and inserting Eq. \ref{eq:tau_spin_res} for $\tau(n_{\uparrow},n_{\downarrow})$, we find
\begin{align}
\bar{\alpha}(n_{\uparrow},n_{\downarrow}) &= 1 + \frac{20}{9d_s(\zeta)}q - \frac{5(8 +9 \eta )}{27 d_s(\zeta)}p \nonumber \\
& + \frac{5}{27d_s(\zeta)(1-\zeta^2)}\xi^2 + \mo{4}. \label{eq:ba_ge_pol}
\end{align}
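The coefficient of $p$ collects three contributions: $5/[27 d_s(\zeta)]$ from the gradient expansion of $\tau$, $-5/[3 d_s(\zeta)]$ from subtracting $\tau_W/[\tau_U(n)d_s(\zeta)]$, and $-5\eta/[3 d_s(\zeta)]$ from expanding the regularized denominator to first order in $p$:
\[
\frac{5}{27\, d_s(\zeta)} - \frac{5}{3\, d_s(\zeta)} - \frac{5\eta}{3\, d_s(\zeta)} = -\frac{5(8+9\eta)}{27\, d_s(\zeta)}.
\]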
In the special case where $\zeta=0$ (needed for the exchange energy),
\begin{equation}
\bar{\alpha}(n) = 1 + \frac{20}{9}q - \frac{5(8 +9 \eta )}{27 }p + \mo{4}. \label{eq:ba_ge_unp}
\end{equation}
\subsection{Exchange, second order gradient expansion \label{AP:r2_x_deriv}}
For any spin-unpolarized exchange energy density $\epsilon_{\mr{x}}(n)$, the spin-scaled exchange energy density is \cite{Oliver1979}
\begin{equation}
\epsilon_{\mr{x}}(n_{\uparrow},n_{\downarrow}) = \frac{1}{2}[\epsilon_{\mr{x}}(2n_{\uparrow}) + \epsilon_{\mr{x}}(2n_{\downarrow})].
\end{equation}
Therefore we need only consider the spin-unpolarized exchange energy density, and apply the spin-scaling relationship as needed.
We start with an explicit expression for the r\tss{2}SCAN\xspace exchange enhancement factor
\begin{equation}
F^{\text{r\tss{2}SCAN\xspace}}_{\mr{x}}(p,\bar{\alpha}) = \{h_{\mr{x}}^1(p) + f_{\mr{x}}(\bar{\alpha})[h_{\mr{x}}^0 - h_{\mr{x}}^1(p)] \}g_{\mr{x}}(p),
\end{equation}
with $h_{\mr{x}}^0=1.174$. The function
\begin{equation}
g_{\mr{x}}(p) = 1-\exp(-a_{\mr{x}}/p^{1/4}) = 1 + \mo{\infty} \label{eq:gx_taylor}
\end{equation}
in the slowly-varying limit (its derivatives of all orders vanish in the limit $p\to 0$). In r\tss{2}SCAN\xspace, we take
\begin{equation}
h_{\mr{x}}^1(p) = 1 + k_1 - k_1\{1 + [D_{\mr{x}} \exp(-p^2/d_{p2}^4) + \muak]p/k_1 \}^{-1},
\end{equation}
which has the following Taylor series:
\begin{equation}
h_{\mr{x}}^1(p) = 1 + (D_{\mr{x}} + \muak) p - \frac{(D_{\mr{x}} + \muak)^2}{k_1}p^2 + \mo{6}. \label{eq:hx_taylor}
\end{equation}
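This follows from the geometric-series expansion
\[
k_1 - k_1\left[1 + y\,p/k_1\right]^{-1} = y\,p - \frac{y^2 p^2}{k_1} + \ldots, \qquad y \equiv D_{\mr{x}} \exp(-p^2/d_{p2}^4) + \muak,
\]
together with the fact that the Gaussian factor in $y$ first deviates from unity at $\mo{4}$, so that replacing $y$ by $D_{\mr{x}} + \muak$ changes the result only at $\mo{6}$.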
The r\tss{2}SCAN\xspace interpolation function is taken from rSCAN, and has the structure
\begin{equation}
f_{\mr{x}}(\bar{\alpha}) = \left\{ \begin{array}{ll} \sum_{i=0}^7c_{\mr{x},i}\bar{\alpha}^i, & \bar{\alpha} \le 2.5 \\
-c_{\mr{dx}}^{\mr{SCAN}}\exp\left[\frac{c_{\mr{2x}}^{\mr{SCAN}}}{1-\bar{\alpha}}\right], & \bar{\alpha} > 2.5
\end{array} \right.,
\end{equation}
with Taylor series
\begin{align}
f_{\mr{x}}(\bar{\alpha}) &= (\bar{\alpha} -1)\sum_{i=1}^7 i c_{\mr{x},i} + \frac{1}{2}(\bar{\alpha} - 1)^2\sum_{i=2}^7 i (i-1) c_{\mr{x},i} \nonumber \\
& + \mathcal{O}[(\bar{\alpha}-1)^3] \label{eq:fx_taylor}
\end{align}
in the slowly-varying limit. It is important to note here that $\bar{\alpha} - 1$ is itself of second order in the density gradient, as derived in Eqs. \ref{eq:ba_ge_pol} and \ref{eq:ba_ge_unp}. Therefore, the term of lowest order in $(1-\bar{\alpha})$ is $\mo{2}$, and the term of lowest order in $(1-\bar{\alpha})^2$ is $\mo{4}$.
Inserting the Taylor series of Eqs. \ref{eq:gx_taylor}, \ref{eq:hx_taylor}, and \ref{eq:fx_taylor} into $F_{\mr{x}}^{\text{r\tss{2}SCAN\xspace}}$, we find, to $\mo{4}$,
\begin{align}
& F_{\mr{x}}^{\text{r\tss{2}SCAN\xspace}}(p,\bar{\alpha}) = 1 + (D_{\mr{x}} + \muak) p \nonumber \\
&+ \left[(h_{\mr{x}}^0-1)\sum_{i=1}^7 i c_{\mr{x},i}\right](\bar{\alpha} -1) - \frac{(D_{\mr{x}} + \muak)^2}{k_1}p^2 \nonumber \\
& -\left[(D_{\mr{x}} + \muak) \sum_{i=1}^7 i c_{\mr{x},i}\right](\bar{\alpha} -1)p \nonumber \\
& +\left[\frac{h_{\mr{x}}^0-1}{2}\sum_{i=2}^7 i(i-1) c_{\mr{x},i}\right](\bar{\alpha} -1)^2 + \mo{6}, \label{eq:fx_rr_taylor}
\end{align}
with the terms written in (generally) increasing order. For r\tss{2}SCAN\xspace, we demand that the exchange enhancement factor recover the exchange gradient expansion \cite{Kirzhnits1957, Svendsen1996}
\begin{equation}
F^{\text{GE}}_{\mr{x}}(p,q) = 1 + \muak p + \frac{146}{2025}q^2 - \frac{73}{405}pq + \mo{6} \label{eq:ge4x_pq}
\end{equation}
only to second order, i.e., $1 + \muak p$. Therefore, we can drop all terms of order higher than $\mo{2}$, both those written explicitly in Eq. \ref{eq:fx_rr_taylor} and those included implicitly in $(\bar{\alpha}-1)$,
\begin{widetext}
\begin{equation}
F_{\mr{x}}^{\text{r\tss{2}SCAN\xspace}}(p,\bar{\alpha}) = 1 + \left[D_{\mr{x}} - \frac{5(8 +9 \eta )}{27 }(h_{\mr{x}}^0-1)\sum_{i=1}^7 i c_{\mr{x},i} +\muak\right] p + \frac{20(h_{\mr{x}}^0-1)}{9}\left(\sum_{i=1}^7 i c_{\mr{x},i}\right)q + \mo{4},
\end{equation}
where we have evaluated $(\bar{\alpha}-1)$ using Eq. \ref{eq:ba_ge_unp}.
To eliminate the term linear in $q$, we perform an integration by parts. The ``gauge variance'' of the exchange energy density implies that two exchange enhancement factors can yield the same integrated exchange energy, but different exchange energy densities,
\begin{equation}
\int_{\Omega} F_{\mr{x}} \epsilon_{\mr{x}}^{\text{LSDA}}d^3 r = \int_{\Omega} [\widetilde{F}_{\mr{x}} \epsilon_{\mr{x}}^{\text{LSDA}} + n^{-4/3}\nabla \cdot \bm{G}_{\mr{x}}]d^3 r = \int_{\Omega}\widetilde{F}_{\mr{x}} d^3r -\frac{3}{4\pi}(3\pi^2)^{1/3} \int_{\text{bdy}\,\Omega} \bm{G}\cdot d\bm{S}.
\end{equation}
The second equality is a straightforward application of the divergence theorem. Provided that the gauge function $\bm{G}_{\mr{x}}$ vanishes sufficiently rapidly over the bounding surface $\text{bdy}\,\Omega$ of the integration volume $\Omega$, the surface integral vanishes. Note also that the LSDA exchange energy density is
\begin{equation}
\epsilon_{\mr{x}}^{\text{LSDA}} = -\frac{3}{4\pi}(3\pi^2)^{1/3} n^{4/3}.
\end{equation}
Consider then
\begin{equation}
q \,\epsilon_{\mr{x}}^{\text{LSDA}} = \left\{ \frac{p}{3} + n^{-4/3}\nabla \cdot \left[ \frac{\nabla n}{4(3\pi^2)^{2/3}n^{1/3} } \right] \right\}\epsilon_{\mr{x}}^{\text{LSDA}},
\end{equation}
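The identity in curly braces can be checked directly from the definitions of $p$ and $q$, since $k_{\mr{F}}^2 = (3\pi^2 n)^{2/3}$ and
\[
n^{-4/3}\nabla \cdot \left[ \frac{\nabla n}{4(3\pi^2)^{2/3}n^{1/3}} \right]
= \frac{\nabla^2 n}{4(3\pi^2)^{2/3}n^{5/3}} - \frac{1}{3}\,\frac{|\nabla n|^2}{4(3\pi^2)^{2/3}n^{8/3}}
= q - \frac{p}{3}.
\]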
thus
\begin{equation}
F_{\mr{x}}^{\text{r\tss{2}SCAN\xspace}}(p,\bar{\alpha}) = 1 + \left[D_{\mr{x}} - \frac{5(4 +9 \eta )}{27 }(h_{\mr{x}}^0-1)\sum_{i=1}^7 i c_{\mr{x},i} +\muak\right] p + n^{-4/3}\nabla \cdot \left\{ \left[ \frac{5(h_{\mr{x}}^0-1)}{9(3\pi^2)^{2/3}} \left(\sum_{i=1}^7 i c_{\mr{x},i}\right) \right]\frac{\nabla n}{n^{1/3}}\right\} + \mo{4}.
\end{equation}
\end{widetext}
The rightmost term in curly braces is the gauge function $\bm{G}_{\mr{x}}$. Except in certain situations, like the density tail of an atom (generally, outside the Kohn-Sham turning surface if one exists), where the gradient expansion does not apply, $n^{-1/3} \nabla n $ vanishes sufficiently rapidly at infinity. Moreover, we generalize $F_{\mr{x}}^{\text{r\tss{2}SCAN\xspace}}$ by cutting off the divergent gradient expansion terms and keeping only the integrated-by-parts expression
\begin{align}
& \widetilde{F}_{\mr{x}}^{\text{r\tss{2}SCAN\xspace}}(p,\bar{\alpha}) = 1 + \left[D_{\mr{x}} - \frac{5(4 +9 \eta )}{27 }(h_{\mr{x}}^0-1)\sum_{i=1}^7 i c_{\mr{x},i} \right. \nonumber \\
& +\muak \bigg] p + \mo{4},
\end{align}
allowing for validity even outside the Kohn-Sham turning surface.
Thus, we recover the correct second-order gradient expansion for exchange by demanding
\begin{equation}
D_{\mr{x}} = \frac{5(4 +9 \eta )}{27 } (h_{\mr{x}}^0-1)\sum_{i=1}^7 i c_{\mr{x},i}
\end{equation}
which is the product of $C_{\eta}$ and $C_{2\mr{x}}$ defined
in Eqs. 53 and 57,
respectively,
\begin{align}
C_{\eta} &= \frac{5(4 +9 \eta )}{27 } \nonumber \\
C_{2\mr{x}} &= (h_{\mr{x}}^0-1)\sum_{i=1}^7 i c_{\mr{x},i}. \nonumber
\end{align}
\subsection{Exchange, fourth-order gradient expansion \label{AP:r4_x_deriv}}
r\tss{4}SCAN\xspace prescribes an explicit correction to the r\tss{2}SCAN\xspace enhancement factor,
\begin{equation}
F_{\mr{x}}^{\text{r\tss{4}SCAN\xspace}} = F_{\mr{x}}^{\text{r\tss{2}SCAN\xspace}} + \Delta F_4(p,\bar{\alpha})g_{\mr{x}}(p)
\end{equation}
where $g_{\mr{x}}(p)$ is unchanged from r\tss{2}SCAN\xspace, and
\begin{align}
\Delta F_4(p,\bar{\alpha}) &= \left\{ -C_{2\mr{x}}[ (\bar{\alpha}-1) + C_{\eta}p ] + C_{\bar{\alpha} \bar{\alpha}}(1-\bar{\alpha})^2 \right. \nonumber \\
& \left. + C_{p \bar{\alpha}}p(1-\bar{\alpha}) + C_{pp} p^2 \right\}\frac{2\bar{\alpha}^2}{1+\bar{\alpha}^4}\nonumber \\
& \times \exp\left[-\frac{(1-\bar{\alpha})^2}{d_{\ba4}^2} - \frac{p^2}{d_{p4}^4} \right].
\end{align}
Despite the complexity of the damping function used to modulate these corrections, $\Delta F_4$ has a simple Taylor series
\begin{align}
\Delta F_4(p,\bar{\alpha}) &= -C_{2\mr{x}}[ (\bar{\alpha}-1) + C_{\eta}p ] + C_{\bar{\alpha} \bar{\alpha}}(1-\bar{\alpha})^2 \nonumber \\
& + C_{p \bar{\alpha}}p(1-\bar{\alpha}) + C_{pp} p^2 + \mo{6}.
\end{align}
We will now take $D_{\mr{x}} = C_{\eta}C_{2\mr{x}}$ in the r\tss{2}SCAN\xspace exchange enhancement factor. Returning to Eq. \ref{eq:fx_rr_taylor} for the r\tss{2}SCAN\xspace exchange enhancement factor, and adding in the r\tss{4}SCAN\xspace corrections,
\begin{align}
& F_{\mr{x}}^{\text{r\tss{4}SCAN\xspace}}(p,\bar{\alpha}) = 1 + \muak p +\left[C_{pp} - \frac{(C_{\eta}C_{2\mr{x}} + \muak)^2}{k_1} \right]p^2 \nonumber \\
& + \left[ C_{p \bar{\alpha}} + (C_{\eta}C_{2\mr{x}} + \muak) \sum_{i=1}^7 i c_{\mr{x},i}\right] (1 - \bar{\alpha})p \nonumber \\
& +\left[C_{\bar{\alpha} \bar{\alpha}} + \frac{h_{\mr{x}}^0-1}{2}\sum_{i=2}^7 i(i-1) c_{\mr{x},i}\right](\bar{\alpha} -1)^2 + \mo{6}. \label{eq:fx_rf_taylor}
\end{align}
Now, again using Eq. \ref{eq:ba_ge_unp} for the gradient expansion of $\bar{\alpha}$ to second-order,
\begin{align}
(1 - \bar{\alpha})p &= -\frac{20}{9}p q + \frac{5(8 +9 \eta )}{27 }p^2 + \mo{6} \label{eq:p_ba_ge}\\
(1 - \bar{\alpha})^2 &= \frac{400}{81}q^2 + \left[\frac{5(8 +9 \eta )}{27 } \right]^2 p^2 \nonumber \\
& - \frac{200(8 +9 \eta )}{243}p q + \mo{6}. \label{eq:ba_ba_ge}
\end{align}
The second-order gradient expansion of $\bar{\alpha}$ is valid here, because any higher-order terms in $1-\bar{\alpha}$ will yield, to lowest order, sixth-order terms in these products. Using Eqs. \ref{eq:p_ba_ge} and \ref{eq:ba_ba_ge}, one can show that
\begin{align}
& \frac{73}{5000}(\bar{\alpha}-1)^2 + \left[\frac{511}{13500} -\frac{73}{1500}\eta \right] (1 - \bar{\alpha} )p \nonumber \\
& + \left[\frac{146}{2025} \left(\frac{2}{3} + \frac{3\eta}{4} \right)^2 - \frac{73}{405}\left(\frac{2}{3} + \frac{3\eta}{4} \right) \right] p^2 \nonumber \\
& = \frac{146}{2025} q^2 - \frac{73}{405} p q + \mo{6}, \label{eq:ge4_p_ba}
\end{align}
the same fourth-order terms as in Eq. \ref{eq:ge4x_pq}.
Then, to recover the fourth-order gradient expansion for exchange in r\tss{4}SCAN\xspace, we equate Eqs. \ref{eq:fx_rf_taylor} and \ref{eq:ge4_p_ba}, and find
\begin{align}
C_{pp} &= \frac{(C_{\eta}C_{2\mr{x}} + \muak)^2}{k_1} + \frac{146}{2025} \left(\frac{2}{3} + \frac{3\eta}{4} \right)^2 \nonumber \\
& - \frac{73}{405}\left(\frac{2}{3} + \frac{3\eta}{4} \right) \\
C_{p \bar{\alpha}} &= \frac{511}{13500} -\frac{73}{1500}\eta - (C_{\eta}C_{2\mr{x}} + \muak) \sum_{i=1}^7 i c_{\mr{x},i} \\
C_{\bar{\alpha} \bar{\alpha}} &= \frac{73}{5000} - \frac{h_{\mr{x}}^0-1}{2}\sum_{i=2}^7 i(i-1) c_{\mr{x},i},
\end{align}
as presented in Eqs. 61--63.
\subsection{Correlation, second-order gradient expansion \label{AP:r2_c_deriv}}
The gradient expansion for the correlation energy per electron is known only to second order \cite{Ma1968,Wang1991,Perdew1996,Sun2015}
\begin{equation}
\varepsilon_{\mr{c}}(r_{\mathrm{s}},\zeta,t) = \varepsilon_{\mr{c}}^{\text{LSDA}}(r_{\mathrm{s}},\zeta) + \beta(r_{\mathrm{s}})\phi^3(\zeta) t^2.
\end{equation}
The density-dependent function $\beta(r_{\mathrm{s}})$ is known only for small values of $r_{\mathrm{s}}$, and we take the parameterization used in Ref. \cite{Sun2015},
\begin{equation}
\beta(r_{\mathrm{s}}) = \beta_{\mr{MB}}\frac{1 + 0.1 r_{\mathrm{s}}}{1 + 0.1778 r_{\mathrm{s}}},
\end{equation}
constructed to cancel with the second-order gradient expansion term for exchange in the limit $r_{\mathrm{s}} \to \infty$.
Two other quantities enter this expansion: the spin-scaling function
\begin{equation}
\phi(\zeta) = [(1 + \zeta)^{2/3} + (1 - \zeta)^{2/3}]/2,
\end{equation}
and dimensionless density gradient on the length scale of the Thomas-Fermi wavevector
\begin{equation}
t^2 = \left( \frac{3\pi^2}{16} \right)^{2/3}\frac{p}{\phi^2(\zeta)r_{\mathrm{s}}}.
\end{equation}
In both r\tss{2}SCAN\xspace and r\tss{4}SCAN\xspace, we propose that the correlation energy per electron is
\begin{align}
\varepsilon^{\text{r\tss{2}SCAN\xspace}}_{\mr{c}}(r_{\mathrm{s}},\zeta,p,\bar{\alpha}) &= \varepsilon_{\mr{c}}^1(r_{\mathrm{s}},\zeta,p) + f_{\mr{c}}(\bar{\alpha})[ \varepsilon_{\mr{c}}^0(r_{\mathrm{s}},\zeta,p) \nonumber \\
& - \varepsilon_{\mr{c}}^1(r_{\mathrm{s}},\zeta,p)]
\end{align}
with $f_{\mr{c}}(\bar{\alpha})$ taken from rSCAN. It has a Taylor series about $\bar{\alpha} = 1$ that is identical in structure (but not value) to the Taylor series for $f_{\mr{x}}(\bar{\alpha})$. The individual energies per electron are
\begin{align}
\varepsilon_{\mr{c}}^0(r_{\mathrm{s}},\zeta,p) &= [\varepsilon_{\mr{c}}^{\text{LDA0}}(r_{\mathrm{s}},\zeta) + H_0(r_{\mathrm{s}},\zeta,p)]g_{\mr{c}}(\zeta) \\
\varepsilon_{\mr{c}}^1(r_{\mathrm{s}},\zeta,p) &= \varepsilon_{\mr{c}}^{\text{LSDA1}}(r_{\mathrm{s}},\zeta) + H_1(r_{\mathrm{s}},\zeta,p),
\end{align}
with $\varepsilon_{\mr{c}}^0$ unchanged from SCAN (see also Eqs. \ref{eq:ec0_scan}--\ref{eq:gc_zeta}). In r\tss{2}SCAN\xspace, we posit that
\begin{align}
H_1(r_{\mathrm{s}},\zeta,p) &= \gamma \phi^3(\zeta)\ln \left\{ 1 + w_1\left[ 1 - g(y,\Delta y)\right]\right\} \\
y &= \frac{\beta(r_{\mathrm{s}})}{\gamma w_1} t^2 \\
g(y,\Delta y)&= [1 + 4(y - \Delta y)]^{-1/4} \\
\Delta y &= D_{\mr{c}} p\exp[-p^2/d_{p2}^4],
\end{align}
with $d_{p2}$ unchanged from the exchange component of r\tss{2}SCAN\xspace. It can readily be seen that these have the following Taylor series:
\begin{align}
\varepsilon_{\mr{c}}^0(r_{\mathrm{s}},\zeta,p) &= \varepsilon_{\mr{c}}^{\text{LDA0}}(r_{\mathrm{s}},\zeta)g_{\mr{c}}(\zeta) + \chi_{\infty}g_{\mr{c}}(\zeta) p + \mo{4} \\
\varepsilon_{\mr{c}}^1(r_{\mathrm{s}},\zeta,p) &= \varepsilon_{\mr{c}}^{\text{LSDA1}}(r_{\mathrm{s}},\zeta) + \beta(r_{\mathrm{s}})\phi^3(\zeta)t^2 \nonumber \\
& - \gamma \phi^3(\zeta)w_1 D_{\mr{c}} p + \mo{4}.
\end{align}
Then the full gradient expansion of the r\tss{2}SCAN\xspace correlation energy per electron is, after simplification,
\begin{align}
& \varepsilon^{\text{r\tss{2}SCAN\xspace}}_{\mr{c}}(r_{\mathrm{s}},\zeta,p,\bar{\alpha}) = \varepsilon_{\mr{c}}^{\text{LSDA1}} + \beta(r_{\mathrm{s}})\phi^3(\zeta)t^2 \nonumber \\
& - \gamma \phi^3(\zeta)w_1 D_{\mr{c}} p + \left(\sum_{i=1}^7 i c_{\mr{c},i}\right)(\bar{\alpha} - 1) + \mo{4},
\end{align}
where $\varepsilon_{\mr{c}}^{\text{LSDA0}} = \varepsilon_{\mr{c}}^{\text{LDA0}}g_{\mr{c}}(\zeta)$. Let
\begin{equation}
\Delta f_{\mr{c}2} \equiv \sum_{i=1}^7 i c_{\mr{c},i}.
\end{equation}
We can now use Eq. \ref{eq:ba_ge_pol} for the gradient expansion of $\bar{\alpha}$ at arbitrary spin polarization to find that
\begin{widetext}
\begin{align}
&\varepsilon^{\text{r\tss{2}SCAN\xspace}}_{\mr{c}}(r_{\mathrm{s}},\zeta,p,\bar{\alpha}) = \varepsilon_{\mr{c}}^{\text{LSDA1}}(r_{\mathrm{s}},\zeta) + \beta(r_{\mathrm{s}})\phi^3(\zeta)t^2 + \frac{20\Delta f_{\mr{c}2}}{9d_s(\zeta)}[\varepsilon_{\mr{c}}^{\text{LSDA0}}(r_{\mathrm{s}},\zeta)-\varepsilon_{\mr{c}}^{\text{LSDA1}}(r_{\mathrm{s}},\zeta)]q \nonumber \\
& - \left\{\gamma \phi^3(\zeta)w_1 D_{\mr{c}} + \frac{5(8 +9 \eta )\Delta f_{\mr{c}2}}{27 d_s(\zeta)}[\varepsilon_{\mr{c}}^{\text{LSDA0}}(r_{\mathrm{s}},\zeta)-\varepsilon_{\mr{c}}^{\text{LSDA1}}(r_{\mathrm{s}},\zeta)] \right\} p \nonumber \\
& + \frac{5\Delta f_{\mr{c}2}}{27d_s(\zeta)(1-\zeta^2)}[\varepsilon_{\mr{c}}^{\text{LSDA0}}(r_{\mathrm{s}},\zeta)-\varepsilon_{\mr{c}}^{\text{LSDA1}}(r_{\mathrm{s}},\zeta)]\xi^2 + \mo{4}.
\end{align}
We can eliminate the term linear in $q$ using a similar gauge variance principle for the correlation energy:
\begin{equation}
\int_{\Omega} \varepsilon_{\mr{c}} n \, d^3 r = \int_{\Omega} [\widetilde{\varepsilon}_{\mr{c}} + n^{-1} \nabla \cdot \bm{G}_{\mr{c}} ]n \, d^3 r = \int_{\Omega} \widetilde{\varepsilon}_{\mr{c}} n \, d^3 r + \int_{\text{bdy}\,\Omega}\bm{G}_{\mr{c}} \cdot d\bm{S}.
\end{equation}
Again, provided that the gauge function $\bm{G}_{\mr{c}}$ vanishes sufficiently rapidly at the bounding surface $\text{bdy}\,\Omega$, we may replace the r\tss{2}SCAN\xspace correlation energy per electron with the integrated-by-parts expression $\widetilde{\varepsilon}_{\mr{c}}$. To do this, consider that for a general function $f(r_{\mathrm{s}},\zeta)$,
\begin{equation}
\nabla f(r_{\mathrm{s}},\zeta) = -\frac{r_{\mathrm{s}}}{3n}\frac{\partial f}{\partial r_{\mathrm{s}}}\nabla n + \frac{\partial f}{\partial \zeta} \nabla \zeta,
\end{equation}
where we have used $r_{\mathrm{s}}=[4\pi n/3]^{-1/3}$. Then
\begin{align}
&\frac{\varepsilon_{\mr{c}}^{\text{LSDA0}}(r_{\mathrm{s}},\zeta)-\varepsilon_{\mr{c}}^{\text{LSDA1}}(r_{\mathrm{s}},\zeta)}{d_s(\zeta)}q \, n = \left\{ \frac{2}{3} \frac{\varepsilon_{\mr{c}}^{\text{LSDA0}}(r_{\mathrm{s}},\zeta)-\varepsilon_{\mr{c}}^{\text{LSDA1}}(r_{\mathrm{s}},\zeta)}{d_s(\zeta)} + \frac{r_{\mathrm{s}}}{3d_s(\zeta)} \left[ \frac{\partial\varepsilon_{\mr{c}}^{\text{LSDA0}}}{\partial r_{\mathrm{s}}}- \frac{\partial\varepsilon_{\mr{c}}^{\text{LSDA1}}}{\partial r_{\mathrm{s}}} \right] \right\} p \, n \nonumber \\
& - \frac{\nabla n \cdot \nabla \zeta}{4(3\pi^2)n^{5/3}}\frac{\partial}{\partial \zeta}\left[\frac{\varepsilon_{\mr{c}}^{\text{LSDA0}}(r_{\mathrm{s}},\zeta)-\varepsilon_{\mr{c}}^{\text{LSDA1}}(r_{\mathrm{s}},\zeta)}{d_s(\zeta)} \right] n
+ n^{-1} \nabla \cdot \left[\frac{\varepsilon_{\mr{c}}^{\text{LSDA0}}(r_{\mathrm{s}},\zeta)-\varepsilon_{\mr{c}}^{\text{LSDA1}}(r_{\mathrm{s}},\zeta)}{d_s(\zeta)} \frac{\nabla n}{4(3\pi^2n)^{2/3}} \right].
\end{align}
The rightmost term in square brackets is the gauge function $\bm{G}_{\mr{c}}$. Then the integrated-by-parts r\tss{2}SCAN\xspace correlation energy per electron is
\begin{align}
&\widetilde{\varepsilon}^{\text{r\tss{2}SCAN\xspace}}_{\mr{c}}(r_{\mathrm{s}},\zeta,p,\bar{\alpha}) = \varepsilon_{\mr{c}}^{\text{LSDA1}}(r_{\mathrm{s}},\zeta) + \beta(r_{\mathrm{s}})\phi^3(\zeta)t^2 \nonumber \\
& - \left\{\gamma \phi^3(\zeta)w_1 D_{\mr{c}} + \frac{45\Delta f_{\mr{c}2} \eta}{27 d_s(\zeta)}[\varepsilon_{\mr{c}}^{\text{LSDA0}}(r_{\mathrm{s}},\zeta)-\varepsilon_{\mr{c}}^{\text{LSDA1}}(r_{\mathrm{s}},\zeta)] - \frac{20\Delta f_{\mr{c}2}r_{\mathrm{s}}}{27d_s(\zeta)} \left[ \frac{\partial\varepsilon_{\mr{c}}^{\text{LSDA0}}}{\partial r_{\mathrm{s}}} - \frac{\partial\varepsilon_{\mr{c}}^{\text{LSDA1}}}{\partial r_{\mathrm{s}}} \right] \right\} p \nonumber \\
& - \frac{5\Delta f_{\mr{c}2}\nabla n \cdot \nabla \zeta}{9(3\pi^2)n^{5/3}}\frac{\partial}{\partial \zeta}\left[\frac{\varepsilon_{\mr{c}}^{\text{LSDA0}}(r_{\mathrm{s}},\zeta)-\varepsilon_{\mr{c}}^{\text{LSDA1}}(r_{\mathrm{s}},\zeta)}{d_s(\zeta)} \right] + \frac{5\Delta f_{\mr{c}2}}{27d_s(\zeta)(1-\zeta^2)}[\varepsilon_{\mr{c}}^{\text{LSDA0}}(r_{\mathrm{s}},\zeta)-\varepsilon_{\mr{c}}^{\text{LSDA1}}(r_{\mathrm{s}},\zeta)]\xi^2 + \mo{4}.
\end{align}
In r\tss{2}SCAN\xspace, we make the simplification that $\nabla \zeta\approx 0$. Thus $\xi\approx 0$, and
\begin{align}
&\widetilde{\varepsilon}^{\text{r\tss{2}SCAN\xspace}}_{\mr{c}}(r_{\mathrm{s}},\zeta,p,\bar{\alpha}) = \varepsilon_{\mr{c}}^{\text{LSDA1}}(r_{\mathrm{s}},\zeta) + \beta(r_{\mathrm{s}})\phi^3(\zeta)t^2 \nonumber \\
& - \left\{\gamma \phi^3(\zeta)w_1 D_{\mr{c}} + \frac{45\Delta f_{\mr{c}2} \eta}{27 d_s(\zeta)}[\varepsilon_{\mr{c}}^{\text{LSDA0}}(r_{\mathrm{s}},\zeta)-\varepsilon_{\mr{c}}^{\text{LSDA1}}(r_{\mathrm{s}},\zeta)] - \frac{20\Delta f_{\mr{c}2}r_{\mathrm{s}}}{27d_s(\zeta)}\left[ \frac{\partial\varepsilon_{\mr{c}}^{\text{LSDA0}}}{\partial r_{\mathrm{s}}} - \frac{\partial\varepsilon_{\mr{c}}^{\text{LSDA1}}}{\partial r_{\mathrm{s}}} \right] \right\} p + \mo{4}.
\end{align}
To recover the second-order gradient expansion for correlation, we take
\begin{equation}
D_{\mr{c}} = \frac{\Delta f_{\mr{c}2}}{27 \gamma \phi^3(\zeta)d_s(\zeta) w_1(r_{\mathrm{s}},\zeta)}\left\{20r_{\mathrm{s}} \left[ \frac{\partial\varepsilon_{\mr{c}}^{\text{LSDA0}}}{\partial r_{\mathrm{s}}} - \frac{\partial\varepsilon_{\mr{c}}^{\text{LSDA1}}}{\partial r_{\mathrm{s}}} \right] - 45\eta [\varepsilon_{\mr{c}}^{\text{LSDA0}}(r_{\mathrm{s}},\zeta)-\varepsilon_{\mr{c}}^{\text{LSDA1}}(r_{\mathrm{s}},\zeta)]\right\},
\end{equation}
which is the factor appearing in Eq. 78.
\end{widetext}
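As an illustration of how these pieces fit together numerically, the following Python sketch assembles the gradient correction $H_1$ from $y$, $\Delta y$, and $g(y,\Delta y)$, taking $w_1$, $D_{\mr{c}}$, and $d_{p2}$ as precomputed inputs; the default $\gamma$ and the $\beta_{\mr{MB}}$ value are assumptions of the sketch rather than definitions from the text.
\begin{verbatim}
import numpy as np

def H1_r2scan(rs, zeta, p, t2, w1, Dc, dp2, gamma=0.031091):
    """Gradient correction H_1 of r2SCAN correlation (sketch).

    w1, Dc and dp2 are supplied by the caller; gamma defaults to the
    PBE value and beta_MB to 0.066725 (assumptions of this sketch).
    """
    phi = 0.5 * ((1.0 + zeta) ** (2.0 / 3.0) + (1.0 - zeta) ** (2.0 / 3.0))
    beta = 0.066725 * (1.0 + 0.1 * rs) / (1.0 + 0.1778 * rs)
    y = beta * t2 / (gamma * w1)
    dy = Dc * p * np.exp(-p ** 2 / dp2 ** 4)
    g = (1.0 + 4.0 * (y - dy)) ** (-0.25)
    return gamma * phi ** 3 * np.log(1.0 + w1 * (1.0 - g))
\end{verbatim}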
\section{Training and Evaluation}
\subsection{Dataset}
We train on a combination of several datasets: AudioSet, BBC sound effects, Audiostock, AudioCaps-train, ESC-50, FSD50K, Free To Use Sounds, Sonniss Game Effects, WeSoundEffects, MACS, Epidemic Sound, UrbanSound8K, WavText5Ks, LibriSpeech, and Medley-solos-DB. For audios without natural language annotation, we apply the pseudo prompt enhancement to construct captions aligned well with the audio. Overall we have $\sim$3k hours with 1M audio-text pairs for training data. For evaluating text-to-audio models~\citep{yang2022diffsound,kreuk2022audiogen}, the AudioCaption validation set is adopted as the standard benchmark, which contains 494 samples with five human-annotated captions for each audio clip. For a more challenging zero-shot scenario, we also provide results on the Clotho~\citep{drossos2020clotho} validation set, whose clips contain multiple audio events. A more detailed data setup is attached in Appendix~\ref{app:data}.
We conduct preprocessing on the text and audio data: 1) convert the sampling rate of audios to 16kHz and pad short clips to a 10-second length; 2) extract the spectrogram with an FFT size of 1024 and a hop size of 256, and crop it to a mel-spectrogram of size $80 \times 624$; 3) normalize non-standard words (e.g., abbreviations, numbers, and currency expressions) and semiotic classes~\citep{taylor2009text} (text tokens that represent particular entities that are semantically constrained, such as measure phrases, addresses, and dates).
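As a reference for this pipeline, a minimal sketch of the audio preprocessing (resampling, padding, and mel-spectrogram extraction) using torchaudio is given below; text normalization is omitted, and the exact cropping convention is an assumption of the sketch.
\begin{verbatim}
import torch
import torchaudio

def preprocess(path, sr=16000, clip_seconds=10, n_mels=80, n_frames=624):
    """Load audio, resample to 16 kHz, pad/crop to 10 s, return a mel-spectrogram."""
    wav, orig_sr = torchaudio.load(path)
    wav = torchaudio.functional.resample(wav.mean(0, keepdim=True), orig_sr, sr)
    target_len = sr * clip_seconds
    if wav.shape[-1] < target_len:                     # pad short clips
        wav = torch.nn.functional.pad(wav, (0, target_len - wav.shape[-1]))
    wav = wav[..., :target_len]
    mel = torchaudio.transforms.MelSpectrogram(
        sample_rate=sr, n_fft=1024, hop_length=256, n_mels=n_mels)(wav)
    return mel[..., :n_frames]                         # crop to 80 x 624
\end{verbatim}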
\subsection{Model Configurations}
We train a continuous autoencoder to compress the perceptual space with downsampling to a 4-channel latent representation, which balances efficiency and perceptually faithful results. For our main experiments, we train a U-Net~\citep{ronneberger2015u} based text-conditional diffusion model, which is optimized on 18 NVIDIA V100 GPUs for 2M optimization steps. The base learning rate is set to 0.005, and we scale it by the number of GPUs and the batch size following LDM. We utilize HiFi-GAN~\cite{kong2020hifi} (V1) trained on the VGGSound dataset~\citep{chen2020vggsound} as the vocoder to synthesize waveforms from the generated mel-spectrograms in all our experiments. Hyperparameters are included in Appendix~\ref{appendix:model}.
\subsection{Evaluation Metrics}
We evaluate models using objective and subjective metrics over audio quality and text-audio alignment faithfulness. Following common practice~\citep{yang2022diffsound,iashin2021taming}, the key automated performance metrics used are melception-based~\citep{koutini2021efficient} FID~\citep{heusel2017gans} and KL divergence to measure audio fidelity. Additionally, we introduce the CLAP score to measure audio-text alignment for this work. CLAP score is adapted from the CLIP score~\citep{hessel2021clipscore,radford2021learning} to the audio domain and is a reference-free evaluation metric that closely correlates with human perception.
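Conceptually, the CLAP score we report is the cosine similarity between paired audio and text embeddings from the pretrained CLAP encoders, averaged over the evaluation set; the sketch below illustrates this computation and leaves out any rescaling conventions, with the embedding extraction assumed to be provided by a CLAP implementation.
\begin{verbatim}
import torch.nn.functional as F

def clap_score(audio_emb, text_emb):
    """audio_emb, text_emb: (N, D) tensors of paired CLAP embeddings."""
    a = F.normalize(audio_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    return (a * t).sum(-1).mean()      # averaged cosine similarity
\end{verbatim}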
For subjective metrics, we use crowd-sourced human evaluation via Amazon Mechanical Turk, where raters are asked to rate MOS (mean opinion score) on a 20-100 Likert scale. We assess the audio quality and text-audio alignment faithfulness by respectively scoring MOS-Q and MOS-F, which are reported with 95\% confidence intervals (CI). More information on evaluation is attached in Appendix~\ref{appendix:eval}.
\begin{table*}[ht]
\centering
\small
\begin{tabular}{c|cc|ccc|cc|cc}
\toprule
Model & Text-cond & Params & FID & KL & CLAP & MOS-Q & MOS-F & FID-Z & KL-Z \\
\midrule
Reference & / & / & / & / & 0.526 & 74.7$\pm$0.94 & 80.5$\pm$1.84 & / & / \\
\midrule
Diffsound & CLIP & 520M & 7.17 & 3.57 & 0.420 & 67.1$\pm$1.03 & 70.9$\pm$1.05 & 24.97 & 6.53 \\
\midrule
\multirow{4}{*}{Make-An-Audio} & \bf CLAP & \bf 332M &\textbf{4.61} & \textbf{2.79} &0.482 & \textbf{72.5$\pm$0.90} & \textbf{78.6$\pm$1.01} & 17.38 & \textbf{6.98} \\
\cline{2-10}
\rule{0pt}{10pt} & BERT & 809M &5.15 & 2.89 & 0.480 & 70.5$\pm$0.87 & 77.2$\pm$0.98 & 18.75 & 7.01 \\
& T5-Large & 563M &4.83 & 2.81 &\textbf{0.486} & 71.8$\pm$0.91 & 77.2$\pm$0.93 & \textbf{17.23} & 7.02 \\
& CLIP & 576M &6.45 & 2.91 &0.444 & 72.1$\pm$0.92 & 75.4$\pm$0.96 & 17.55 & 7.09 \\
\bottomrule
\end{tabular}
\vspace{-2mm}
\caption{Text-to-audio evaluation. We report the evaluation metrics including MOS($\uparrow$), FID($\downarrow$), KL($\downarrow$), and CLAP($\uparrow$). FID-Z and KL-Z denote the zero-shot results in the Clotho dataset.}
\label{table:text-to-audio}
\vspace{-6mm}
\end{table*}
\begin{table*}[ht]
\centering
\small
\hspace{-0.5cm}
\begin{minipage}{0.60\textwidth}
\vspace{6mm}
\scalebox{0.9}{
\begin{tabular}{c|ccc|ccc}
\toprule
\multirow{2}{*}{Training Masks} & \multicolumn{3}{c|}{Narrow Masks} & \multicolumn{3}{c}{Wide Masks} \\
& FID & KL & MOS-Q & FID & KL & MOS-Q \\
\midrule
Irregular (Thin) & 1.83 &0.46 & 68.3$\pm$1.38 & 4.01 & 0.86 & 66.2$\pm$1.20 \\
Irregular (Medium) & 1.76 &0.31 & 67.8$\pm$1.41 & 3.93 & 0.65 & 66.9$\pm$1.22 \\
Irregular (Thick) & 1.73 &0.32 & 69.6$\pm$1.36 & 3.83 & 0.67 & 69.3$\pm$1.05 \\
\midrule
Frame (p=30\%) & 1.64 &0.29 & 66.9$\pm$1.60 & 3.68 & 0.62 & 66.1$\pm$1.29 \\
Frame (p=50\%) & 1.77 &0.32 & 68.6$\pm$1.42 & 3.66 & 0.63 & 67.4$\pm$1.27 \\
Frame (p=70\%) & 1.59 &0.32 & 71.0$\pm$1.12 & 3.49 & 0.65 & 70.8$\pm$1.50 \\
\bottomrule
\end{tabular}}
\vspace{-2mm}
\caption{Audio inpainting evaluation with various masking strategies.}
\label{table:audio-inpainting}
\end{minipage}
\hspace{-0.5cm}
\vspace{-2cm}
\begin{minipage}[t]{0.32\linewidth}
\centering
\small
\scalebox{0.9}{
\begin{tabular}{c|cc}
\toprule
Method & MOS-Q & MOS-F \\
\midrule
\multicolumn{3}{l}{\textbf{Image-to-Audio Generation}} \\
\midrule
Reference & 72.0$\pm$1.54 & 76.4$\pm$1.83 \\
Make-An-Audio & 68.4$\pm$1.09 & 78.0$\pm$1.20 \\
\midrule
\multicolumn{3}{l}{\textbf{Video-to-Audio Generation}} \\
\midrule
Reference & 69.5$\pm$1.22 & 81.0$\pm$1.43 \\
Make-An-Audio & 60.0$\pm$1.31 & 69.0$\pm$1.08 \\
\bottomrule
\end{tabular}}
\vspace{-3mm}
\caption{Image/Video-to-audio evaluation.}
\label{table:visual2audio}
\end{minipage}
\vspace{1.9cm}
\end{table*}
\section{X-To-Audio: No Modality Left Behind}
In this section, we generalize our powerful conditional diffusion model for X-To-Audio generation. For the first time, we contextualize the need for audio generation with different conditional modalities, including: 1) text, 2) audio (inpainting), and 3) visual. Make-An-Audio empowers humans to create rich and diverse audio content with unprecedented ease, unlocking the ability to generate high-definition, high-fidelity audio given a user-defined modality input.
\subsection{Personalized Text-To-Audio Generation}
Adapting models~\citep{chen2021adaspeech,huang2022generspeech} to a specific individual or object is a long-standing goal in machine learning research. More recently, personalization~\citep{gal2022image,benhamdi2017personalized} efforts can be found in vision and graphics, which allow one to inject unique objects into new scenes, transform them across different styles, and even produce new products. For instance, when asked to generate ``baby crying'' given the initial sound of ``thunder'', our model produces realistic and faithful audio describing ``a baby cries in the thunder day''. Notably, it has a wide range of uses for audio mixing and tuning, e.g., adding background sound to an existing clip or editing audio by inserting a speaking object.
We investigate the personalized text-to-audio generation by stochastic differential editing~\citep{meng2021sdedit}, which has been demonstrated to produce realistic samples with high-fidelity manipulation. Given input audio with a user guide (prompt), we select a particular time $t_0$ with total denoising steps $N$, and add noise to the raw data ${\mathbf{z}}_0$ to obtain ${\mathbf{z}}_T$ ($T = t_0 \times N$) according to Equation~\ref{diffusion_forward}. It is then denoised through a reverse process parameterized by shared $\theta$ to increase its realism according to Equation~\ref{denoising_backward}.
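A minimal sketch of this editing procedure is given below; the latent ${\mathbf{z}}_0$, the noise schedule, and the per-step reverse update (\texttt{denoise\_step}) are placeholders standing in for the components described above, and the specific sampler is left unspecified.
\begin{verbatim}
import torch

@torch.no_grad()
def sdedit_personalize(z0, text_cond, model, alphas, t0=0.4, N=100):
    """SDEdit-style personalization (sketch): noise z0 up to step T = t0 * N,
    then denoise back to step 0 under the text condition."""
    T = int(t0 * N)
    a = alphas[T]
    z = a * z0 + (1 - a ** 2) ** 0.5 * torch.randn_like(z0)  # forward noising
    for t in reversed(range(T)):
        z = model.denoise_step(z, t, text_cond)  # placeholder reverse update
    return z
\end{verbatim}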
A trade-off between faithfulness (text-caption alignment) and realism (audio quality) could be witnessed: As $T$ increases, a large amount of noise would be added to the initial audio, and the generated samples become more realistic while less faithful. We refer the reader to Figure~\ref{fig:personalized} for a summary of our findings.
\subsection{Audio Inpainting}
Inpainting~\citep{liu2020rethinking,nazeri2019edgeconnect} is the task of filling masked regions of an audio with new content since parts of the audio are corrupted or undesired. Though diffusion model inpainting can be performed by adding noise to initial audio and sampling with SDEdit, it may result in undesired edge artifacts since there could be an information loss during the sampling process (the model can only see a noised version of the context). To achieve better results, we explicitly fine-tune Make-An-Audio for audio inpainting.
During training, the way masks are generated greatly influences the final performance of the system. As such, we adopt irregular masks (thick, medium, and thin masks) suggested by LaMa~\citep{suvorov2022resolution}, which uniformly uses polygonal chains dilated by a high random width (wide masks) and rectangles of arbitrary aspect ratios (box masks). In addition, we investigate the frame-based masking strategy commonly adopted in the speech literature~\citep{baevski2020wav2vec,hsu2021hubert}. It is implemented using the algorithm from wav2vec 2.0~\citep{baevski2020wav2vec}, where spans of a fixed length are masked with probability $p$.
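The sketch below illustrates a simplified frame-based span masking of this kind; the exact relation between $p$ and the masked fraction follows wav2vec 2.0 and is only approximated here.
\begin{verbatim}
import numpy as np

def frame_mask(n_frames, p=0.5, span=10):
    """Each frame starts a masked span of fixed length with probability p/span,
    so that roughly a fraction p of all frames ends up masked."""
    mask = np.zeros(n_frames, dtype=bool)
    for s in np.flatnonzero(np.random.rand(n_frames) < p / span):
        mask[s:s + span] = True
    return mask
\end{verbatim}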
\subsection{Visual-To-Audio Generation}
Recent advances in deep generative models have shown impressive results in the visually-induced audio generation~\citep{su2020audeo,gan2020foley}, towards generating realistic audio that describes the content of images or videos: ~\citet{hsu2020text} show that spoken language could be learned by a visually-grounded generative model of speech. ~\citet{iashin2021taming} propose a multi-class visual guided sound synthesis that relies on a codebook prior-based transformer.
To pursue this research further, we extend Make-An-Audio for visual-to-audio generation. Given the lack of large-scale visual-audio datasets in image-to-audio (I2A) research, our main idea is to utilize contrastive language-image pretraining (CLIP) with a CLIP-guided T2A model and leverage textual representations to bridge the modality gap between the visual and audio worlds. As CLIP encoders embed images and text into a joint latent space, our T2A model provides a unique opportunity to visualize what the CLIP image encoder is seeing. Considering the complexity of V2A generation, it is natural to leverage image priors for videos to simplify the learning process. On this account, we uniformly pick 4 frames from the video and pool their CLIP image features to formulate the ``averaged'' video representation, which reduces the task to I2A generation.
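A minimal sketch of this frame pooling is shown below; \texttt{image\_encoder} is a placeholder for a pretrained CLIP image encoder.
\begin{verbatim}
import torch

@torch.no_grad()
def video_condition(frames, image_encoder, n_keyframes=4):
    """frames: (T, C, H, W) tensor; returns the averaged CLIP image feature."""
    idx = torch.linspace(0, frames.shape[0] - 1, n_keyframes).long()
    feats = image_encoder(frames[idx])                 # (n_keyframes, D)
    feats = feats / feats.norm(dim=-1, keepdim=True)
    return feats.mean(0)                               # "averaged" video representation
\end{verbatim}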
To conclude, the visual-to-audio inference scheme can be formulated as in Figure~\ref{fig:visual_to_audio}. It significantly reduces the requirement for paired visual-audio datasets, and the plug-and-play module with the pre-trained Make-An-Audio empowers humans to create rich and diverse audio content from the visual world.
\section{Introduction}
Deep generative models~\citep{goodfellow2020generative,kingma2018glow,ho2020denoising} have recently exhibited high-quality samples in various data modalities. With large-scale training data and powerful models, a variety of text-to-image~\citep{saharia2022photorealistic,ramesh2021zero,nichol2021glide} and text-to-video~\citep{singer2022make,hong2022cogvideo} models are now able to vividly depict the visual scene described by a text prompt, and empower humans to create rich and diverse visual content with unprecedented ease. However, replicating this success for audio is hindered by the lack of large-scale datasets with high-quality text-audio pairs, and by the extreme complexity of modeling long continuous signal data.
In this work, we propose Make-An-Audio, with a prompt-enhanced diffusion model for text-to-audio (T2A) generation. To alleviate the issue of data scarcity, we introduce a pseudo prompt enhancement approach to construct natural languages that align well with audio, opening up the usage of orders of magnitude unsupervised language-free data. To tackle the challenge of modeling complex audio signals in T2A generation, we introduce a spectrogram autoencoder to predict the self-supervised representations instead of waveforms, which guarantees efficient compression and high-level semantic understanding. Together with the power of contrastive language-audio pretraining (CLAP)~\citep{radford2021learning,elizalde2022clap} and high-fidelity diffusion models~\citep{ho2020denoising,song2020denoising,rombach2022high}, it achieves a deep level of language understanding with high-fidelity generation.
While conceptually simple and easy to train, Make-An-Audio yields surprisingly strong results. Both subjective and objective evaluations demonstrate that Make-An-Audio achieves new state-of-the-art in text-to-audio with natural and controllable synthesis. Make-An-Audio exhibits superior audio quality and text-audio alignment faithfulness on the benchmark AudioCaption dataset and even generalizes well to the unsupervised Clotho dataset in a zero-shot fashion.
For the first time, we contextualize the need for audio generation with different input modalities. Besides natural language, Make-An-Audio generalizes well to multiple user-defined input modalities (audio, image, and video), which empowers humans to create rich and diverse audio content and opens up a host of applications for personalized transfer and fine-grained control.
Key contributions of the paper include:
\begin{itemize}
\item We present Make-An-Audio – an effective method that leverages latent diffusion with a spectrogram autoencoder to model the long continuous waveforms.
\item We introduce a pseudo prompt enhancement with the distill-then-reprogram approach, it includes a large number of concept compositions by opening up the usage of language-free audios to alleviate data scarcity.
\item We investigate textual representation and emphasize the advantages of contrastive language-audio pretraining for a deep understanding of natural languages with computational efficiency.
\item We evaluate Make-An-Audio and present state-of-the-art quantitative results and thorough evaluation with qualitative findings.
\item We generalize the powerful model to X-to-Audio generation, for the first time unlocking the ability to generate high-definition, high-fidelity audios given a user-defined modality input.
\end{itemize}
\section{Related Works}
\subsection{Text-Guided Image/Video Synthesis}
With the rapid development of deep generative models, text-guided synthesis has been widely studied in images and videos. The pioneering work of DALL-E~\citep{ramesh2021zero} encodes images into discrete latent tokens using VQ-VAE~\citep{van2017neural} and considers T2I generation as a sequence-to-sequence translation problem. More recently, impressive visual results have been achieved by leveraging large-scale diffusion models. GLIDE~\citep{nichol2021glide} trains a T2I upsampling model for a cascaded generation. Imagen~\citep{saharia2022photorealistic} presents T2I with an unprecedented degree of photorealism and a deep level of language understanding. Stable diffusion~\citep{rombach2022high} utilizes latent space diffusion instead of pixel space to improve computational efficiency. A large body of work also explores the usage of T2I models for video generation. CogVideo~\citep{hong2022cogvideo} is built on top of a CogView2~\citep{ding2022cogview2} T2I model with a multi-frame-rate hierarchical training strategy. Make-A-Video~\citep{singer2022make} extends a diffusion-based T2I model to T2V through a spatiotemporally factorized diffusion model.
Moving beyond visual generation, our approach aims to generate high-fidelity audio from arbitrary natural language, which has been relatively overlooked.
\subsection{Text-Guided Audio Synthesis}
While there is remarkable progress in text-guided visual generation, the progress of text-to-audio (T2A) generation lags behind mainly due to two main reasons: the lack of large-scale datasets with high-quality text-audio pairs, and the complexity of modeling long continuous waveforms data. DiffSound~\citep{yang2022diffsound} is the first to explore text-to-audio generation with a discrete diffusion process that operates on audio codes obtained from a VQ-VAE, leveraging masked text generation with CLIP representations. AudioLM~\citep{borsos2022audiolm} introduces the discretized activations of a masked language model pre-trained on audio and generates syntactically plausible speech or music.
Very recently, the concurrent work AudioGen~\citep{kreuk2022audiogen} proposes to generate audio samples autoregressively conditioned on text inputs, while our proposed method differs from it in the following: 1) we introduce pseudo prompt enhancement and leverage the power of contrastive language-audio pre-training and diffusion models for high-fidelity generation. 2) We predict continuous spectrogram representations, significantly improving computational efficiency and reducing training costs.
\begin{figure*}[h]
\vspace{-2mm}
\centering
\includegraphics[width=0.9\textwidth]{Figures/arch.pdf}
\vspace{-4mm}
\caption{A high-level overview of Make-An-Audio. Note that some modules (printed with a \textit{lock}) are frozen for training the T2A model.}
\vspace{-2mm}
\label{fig:arch}
\end{figure*}
\begin{figure*}[h]
\centering
\includegraphics[width=0.9\textwidth]{Figures/pseudo_prompt.pdf}
\vspace{-4mm}
\caption{The process of pseudo prompt enhancement. Our semi-parametric diffusion model consists of a fixed expert distillation and a dynamic reprogramming stage. The database $D$ contains audio examples with a sampling strategy $\xi$ to create unseen object compositions. We use CLAPS to denote the CLAP selection.}
\vspace{-2mm}
\label{fig:pseudo_prompt}
\end{figure*}
\subsection{Audio Representation Learning}
Different from modeling fine-grain details of the signal, the usage of high-level self-supervised learning (SSL)~\citep{baevski2020wav2vec,hsu2021hubert,he2022masked} has been shown to effectively reduce the sampling space of generative algorithms. Inspired by vector quantization (VQ) techniques, SoundStream~\citep{zeghidour2021soundstream} presents a hierarchical architecture for high-level representations that carry semantic information. Data2vec~\citep{baevski2022data2vec} uses a fast convolutional decoder and explores the contextualized target representations in a self-supervised manner.
Recently, spectrogram autoencoders~\citep{gong2022ssast,he2022masked} (spectrograms are akin to 1-channel 2D images) with a reconstruction objective as self-supervision have demonstrated the effectiveness of heterogeneous image-to-audio transfer, advancing the field of speech and audio processing on a variety of downstream tasks. Among these approaches, \citet{xu2022masked} apply Masked Autoencoders (MAE)~\citep{he2022masked} to self-supervised representation learning from audio spectrograms. \citet{gong2022ssast} adopt an audio spectrogram transformer with joint discriminative and generative masked spectrogram modeling. Inspired by these, we inherit the recent success of spectrogram SSL in the frequency domain, which guarantees efficient compression and high-level semantic understanding.
\section{Conclusion}
In this work, we presented Make-An-Audio with a prompt-enhanced diffusion model for text-to-audio generation. Leveraging the prompt enhancement with the distill-then-reprogram approach, Make-An-Audio was endowed with various concept compositions with orders of magnitude unsupervised data. We investigated textual representation and emphasized the advantages of contrastive pre-training for a deep understanding of natural languages with computational efficiency. Both objective and subjective evaluation demonstrated that Make-An-Audio achieved new state-of-the-art results in text-to-audio with realistic and faithful synthesis. Make-An-Audio was the first attempt to generate high-definition, high-fidelity audio given a user-defined modality input, opening up a host of applications for personalized transfer and fine-grained control. We envisage that our work will serve as a basis for future audio synthesis studies.
\section{Detailed Experimental Setup} \label{app:data}
\begin{table}[ht]
\centering
\small
\begin{tabular}{llll}
\toprule
Dataset & Hours & Type & Source \\
\midrule
Clotho & 152 & Caption & \citet{drossos2020clotho} \\
AudioCaps & 109 & Caption & \citet{kim2019audiocaps} \\
MACS & 100 & Caption & \citet{martin2021ground} \\
WavText5Ks & 25 & Caption & \citet{deshmukh2022audio} \\
BBC sound effects & 481 & Caption & \url{https://sound-effects.bbcrewind.co.uk/} \\
Audiostock & 43 & Caption & \url{https://audiostock.net/se} \\
\midrule
Filter AudioSet & 2084 & Label & \citet{gemmeke2017audio} \\
ESC-50 & 3 & Label & \citet{piczak2015esc} \\
FSD50K & 108 & Label & \url{https://annotator.freesound.org/fsd/} \\
Sonniss Game Effects & 20 & Label & \url{https://sonniss.com/gameaudiogdc/} \\
WeSoundEffects & 11 & Label & \url{https://wesoundeffects.com/} \\
Epidemic Sound & 220 & Label & \url{https://www.epidemicsound.com/} \\
UrbanSound8K & 8 & Label & \citet{Salamon:UrbanSound:ACMMM:14} \\
\midrule
LibriTTS & 300 & Language-free & \citet{zen2019libritts} \\
Medley-solos-DB & 7 & Language-free & \citet{bittner2014medleydb} \\
\bottomrule
\end{tabular}
\caption{Statistics for the combination of several datasets.}
\label{dataset}
\end{table}
As shown in Table~\ref{dataset}, we collect a large-scale audio-text dataset consisting of 1M audio samples with a total duration of $\sim$3k hours. It contains audio of human activities, natural sounds, and audio effects, consisting of several data sources from publicly available websites. For audio with text descriptions, we download the parallel audio-text data. For audios without natural language annotation (or with labels), we discard the corresponding class label (if any) and apply the pseudo prompt enhancement to construct natural language descriptions aligned well with the audio.
As speech and music are the dominant classes in AudioSet, we filter these samples to construct a more balanced dataset. Overall we are left with $\sim$3k hours with 1M audio-text pairs for training data. For evaluating text-to-audio models~\citep{yang2022diffsound,kreuk2022audiogen}, the AudioCaption validation set is the standard benchmark, which contains 494 samples with five human-annotated captions for each audio clip. In both training and inference, we pad short clips to a 10-second length and randomly crop a $624 \times 80$ mel-spectrogram from the 10-second 16 kHz audio.
\begin{table}[ht]
\centering
\small
\vspace{2mm}
\begin{tabular}{c|ccc}
\toprule
Method & FSD50K & ESC-50 & Urbansound8k \\
\midrule
Original & 0.40 & 0.43 & 0.33 \\
\midrule
Captioning & 0.35 & 0.46 & 0.37 \\
Retrieval & 0.31 & 0.44 & 0.38 \\
\midrule
Both + CLAP Select & 0.54 & 0.62 & 0.55 \\
\bottomrule
\end{tabular}
\caption{Text-audio alignment CLAP score averaged over each single-label dataset.}
\label{table:dataset}
\end{table}
\section{Model Configurations} \label{appendix:model}
We list the model hyper-parameters of Make-An-Audio in Table~\ref{tab:hyperparameters_ps}.
\begin{table}[h]
\small
\centering
\begin{tabular}{l|c|c}
\toprule
\multicolumn{2}{c|}{Hyperparameter} & Make-An-Audio \\
\midrule
\multirow{5}{*}{Spectrogram Autoencoders}
&Input/Output Channels & 1 \\
&Hidden Channels & 4 \\
&Residual Blocks & 2 \\
&Spectrogram Size & $80 \times 624$ \\
&Channel Mult & $[1, 2, 2, 4]$ \\
\midrule
\multirow{6}{*}{Denoising Unet}
&Input/Output Channels & 4 \\
&Model Channels & 320 \\
&Attention Heads & 8 \\
&Condition Channels & 1024 \\
&Latent Size & $10 \times 78$\\
&Channel Mult & $[1, 2]$ \\
\midrule
\multirow{3}{*}{CLAP Text Encoder}
&Transformer Embed Channels & 768 \\
&Output Project Channels & 1024 \\
&Token Length & 77 \\
\midrule
\multicolumn{2}{c|}{Total Number of Parameters} & 332M \\
\bottomrule
\end{tabular}
\caption{Hyperparameters of Make-An-Audio models.}
\label{tab:hyperparameters_ps}
\end{table}
\section{Evaluation} \label{appendix:eval}
To probe audio quality, we conduct MOS (mean opinion score) tests and explicitly instruct the raters to \textit{``focus on examining the audio quality and naturalness.''} The samples are presented to the testers, and each tester is asked to evaluate the subjective naturalness on a 20-100 Likert scale.
To probe text-audio alignment, human raters are shown an audio and a prompt and asked \textit{``Does the natural language description align with audio faithfully?''}. They must respond with ``completely'', ``mostly'', or ``somewhat'' on a 20-100 Likert scale.
Our subjective evaluation tests are crowd-sourced and conducted via Amazon Mechanical Turk. These ratings are obtained independently for model samples and reference audio, and both are reported. The screenshots of instructions for testers are shown in Figure~\ref{fig:screenshot_eval}. We paid participants \$8 hourly and spent about \$750 in total on participant compensation. A small subset of audio samples used in the test is available at \url{https://Text-to-Audio.github.io/}.
\begin{figure*}[!h]
\centering
\subfigure[Screenshot of MOS-F testing.]
{
\includegraphics[width=0.95\textwidth]{Figures/amt/mosf.png}
}
\subfigure[Screenshot of MOS-Q testing.]
{
\includegraphics[width=0.95\textwidth]{Figures/amt/mosq.png}
}
\caption{Screenshots of subjective evaluations.}
\label{fig:screenshot_eval}
\end{figure*}
\section{Detailed Formulation of DDPM} \label{DDPM}
We define the data distribution as $q({\mathbf{x}}_{0})$. The diffusion process is defined by a fixed Markov chain from data ${\mathbf{x}}_0$ to the latent variable ${\mathbf{x}}_T$:
\begin{equation} \label{q_func}
q({\mathbf{x}}_{1},\cdots,{\mathbf{x}}_T|{\mathbf{x}}_0) = \prod_{t=1}^T q({\mathbf{x}}_t|{\mathbf{x}}_{t-1}).
\end{equation}
For a small positive constant $\beta_t$, a small Gaussian noise is added to ${\mathbf{x}}_{t-1}$ to obtain ${\mathbf{x}}_{t}$ according to the transition probability $q({\mathbf{x}}_t|{\mathbf{x}}_{t-1})$.
The whole process gradually converts data ${\mathbf{x}}_0$ to whitened latents ${\mathbf{x}}_T$ according to the fixed noise schedule $\beta_1,\cdots,\beta_T$, where $\boldsymbol\epsilon\sim{\mathcal{N}}({\bm{0}}, {\bm{I}})$:
\begin{equation} \label{diffusion_forward}
q({\mathbf{x}}_t|{\mathbf{x}}_{t-1}) := {\mathcal{N}}({\mathbf{x}}_t;\sqrt{1-\beta_t}{\mathbf{x}}_{t-1},\beta_t {\bm{I}})
\end{equation}
Efficient training optimizes a randomly sampled term $t$ of the objective with stochastic gradient descent:
\begin{equation}
\label{eq: score_loss}
{\mathcal{L}}_{\theta} = \left\lVert \boldsymbol\epsilon_\theta\left(\alpha_t{\mathbf{x}}_{0}+\sqrt{1-\alpha_t^2}\boldsymbol\epsilon\right)-\boldsymbol\epsilon\right\rVert_2^2
\end{equation}
Unlike the diffusion process, the reverse process recovers samples from Gaussian noise. It is a Markov chain from ${\mathbf{x}}_T$ to ${\mathbf{x}}_0$ parameterized by shared $\theta$:
\begin{equation} \label{denoising_backward}
p_{\theta}({\mathbf{x}}_0,\cdots,{\mathbf{x}}_{T-1}|{\mathbf{x}}_T)=\prod_{t=1}^T p_{\theta}({\mathbf{x}}_{t-1}|{\mathbf{x}}_t),
\end{equation}
where each iteration eliminates the Gaussian noise added in the diffusion process:
\begin{equation}
p_{\theta}({\mathbf{x}}_{t-1}|{\mathbf{x}}_{t}) := \mathcal{N}({\mathbf{x}}_{t-1};\mu_{\theta}({\mathbf{x}}_t,t), \sigma_{\theta}({\mathbf{x}}_t,t)^2{\bm{I}})
\end{equation}
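For completeness, a minimal sketch of one training step implied by the equations above is given below; \texttt{model} is a placeholder for the denoising network $\boldsymbol\epsilon_\theta$ and \texttt{alphas} for the noise schedule.
\begin{verbatim}
import torch

def diffusion_training_loss(model, x0, cond, alphas):
    """Noise x0 to a random step t and regress the injected noise.

    alphas: 1-D tensor holding alpha_t for every step."""
    t = torch.randint(0, len(alphas), (x0.shape[0],), device=x0.device)
    a = alphas[t].view(-1, *([1] * (x0.dim() - 1)))
    eps = torch.randn_like(x0)
    xt = a * x0 + (1 - a ** 2).sqrt() * eps
    return ((model(xt, t, cond) - eps) ** 2).mean()
\end{verbatim}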
\section{Implementation Details} \label{app:implementation}
\subsection{Spectrogram Autoencoders} \label{autoencoder}
We also investigate the effectiveness of several audio autoencoder variants in Table~\ref{table:ablation}, and find that deeper representations (i.e., 32 or 128 channels) bring relatively more compression, while the resulting information deterioration could burden the U-Net model in generative modeling.
\begin{table}[ht]
\centering
\small
\vspace{2mm}
\begin{tabular}{c|cccc}
\toprule
Method & Channel & FID & KL \\
\midrule
\multicolumn{4}{c}{\bf Supervised Evaluation in AudioCaps dataset} \\ \midrule
\multirow{3}{*}{Base} & 4 & 5.15 & 2.89 \\
& 32 & 9.22 & 3.54 \\
& 128 & 10.92 & 3.68 \\ \midrule
w/o PPE & 4 & 5.37 & 3.05 \\ \midrule
\multicolumn{4}{c}{\bf Zero-Shot Evaluation in Clotho dataset} \\ \midrule
Base & 4& 18.75 & 7.01 \\
w/o PPE & 4 & 22.31 & 7.19 \\
\bottomrule
\end{tabular}
\caption{Audio quality comparisons for ablation study with Make-An-Audio BERT. We use PPE to denote pseudo prompt enhancement.}
\label{table:ablation}
\end{table}
\subsection{Text-to-audio} \label{detail_t2a}
We first encode the text into a sequence of K tokens, and utilize the cross-attention mechanism to learn a mapping between language and mel-spectrogram representations in a powerful model. After the initial training run, we fine-tuned our base model to support unconditional generation, with 20\% of text token sequences being replaced with the empty sequence. This way, the model retains its ability to generate text-conditional outputs, but can also generate spectrogram representations unconditionally.
We consider the pre-trained automatic audio captioning~\citep{xu2020crnn} and audio-text retrieval~\citep{deshmukh2022audio,koepke2022audio} systems as our experts for prompt generation. Regarding automatic audio captioning, the model consists of a 10-layer convolution neural network (CNN) encoder and a temporal attentional single-layer gated recurrent unit (GRU) decoder. The CNN encoder is pre-trained on a large-scale Audioset dataset. As for audio-text retrieval, the model leverages BERT with a multi-modal transformer encoder for representation learning. It is trained on AudioCaps and Clotho datasets.
\subsection{Visual-to-audio} \label{detail_v2a}
For visual-to-audio (image/video) synthesis, we utilize the CLIP-guided T2A model and leverage global textual representations to bridge the modality gap between the visual and audio worlds. However, we empirically find that global CLIP conditions have a limited ability to control faithful synthesis with high text-audio similarity. On that account, we use the 110h FSD50K audios annotated with a class label for training, and this simplification avoids multimodal prediction (a conditional vector may refer to different concepts) with complex distribution.
We conduct ablation studies to compare various training settings, including datasets and global conditions. The results are presented in Table~\ref{table:ablation_v2a}, and we have the following observations: 1) Replacing the FSD50K dataset with AudioCaps~\citep{kim2019audiocaps} leads to a significant decrease in faithfulness. The dynamic concept compositions confuse the global-condition models, and the multimodal distribution hinders their capacity for controllable synthesis; 2) Removing the normalization of the condition vector degrades realism as measured by FID, demonstrating the effectiveness of normalization in reducing variance in the latent space.
\begin{table}[H]
\centering
\small
\begin{tabular}{cc|ccc}
\toprule
Training/Testing Dataset& Condition &FID & KL & CLAP\\
\midrule
AudioCaption & Global & / & / & 0.12 \\
FSD50k & Global & 40.7 & 8.2 & 0.40 \\
FSD50k & NormGlobal & 31.1 & 8.0 & 0.42 \\
\bottomrule
\end{tabular}
\caption{Ablation studies for training Make-An-Audio with global conditions.}
\label{table:ablation_v2a}
\end{table}
\section{Dynamic Reprogramming Templates} \label{templates}
Below we provide the list of text templates used for dynamic reprogramming:
\begin{itemize}
\item before $v$ $q$ $a$ $n$ of \&, X
\item X before $v$ $q$ $a$ $n$ of \&,
\item in front of $v$ $q$ $a$ $n$ of \&, X
\item first is X second is $q$ $a$ $n$ of \&
\item after X, $v$ $q$ $a$ $n$ of \&
\item after $v$ $q$ $a$ $n$ of \&, X
\item behind $v$ $q$ $a$ $n$ of \&, X
\item $v$ $q$ $a$ $n$ of \&, then X
\item $v$ $q$ $a$ $n$ of \&, following X
\item $v$ $q$ $a$ $n$ of \&, later X
\item X after $v$ $q$ $a$ $n$ of \&
\item before X, $v$ $q$ $a$ $n$ of \&
\end{itemize}
Specifically, we replace X and \&, respectively, with the natural language of sampled data and the class label of sampled events from the database.
For verb (denoted as v), we have \{`hearing', `noticing', `listening to', `appearing'\};
for adjective (denoted as a), we have \{`clear', `noisy', `close-up', `weird', `clean'\};
for noun (denoted as n), we have \{`audio', `sound', `voice'\};
for numeral/quantifier (denoted as q), we have \{`a', `the', `some'\}.
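A minimal sketch of filling one such template is given below; the caption, event label, and template are passed in by the caller.
\begin{verbatim}
import random

VERBS = ['hearing', 'noticing', 'listening to', 'appearing']
ADJS = ['clear', 'noisy', 'close-up', 'weird', 'clean']
NOUNS = ['audio', 'sound', 'voice']
QUANTS = ['a', 'the', 'some']

def reprogram(caption, event_label, template='X after $v$ $q$ $a$ $n$ of &'):
    """X <- original caption, & <- sampled event label,
    $v$/$q$/$a$/$n$ <- random verb/quantifier/adjective/noun."""
    out = (template.replace('$v$', random.choice(VERBS))
                   .replace('$q$', random.choice(QUANTS))
                   .replace('$a$', random.choice(ADJS))
                   .replace('$n$', random.choice(NOUNS)))
    return out.replace('X', caption).replace('&', event_label)
\end{verbatim}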
\section{Potential Negative Societal Impacts}
This paper aims to advance open-domain text-to-audio generation, which will ease the effort of short video and digital art creation. The efficient training method also transfers knowledge from text-to-audio models to X-to-audio generation, which helps avoid training from scratch, and thus reduces the issue of data scarcity. A negative impact is the risk of misinformation. To alleviate it, we can train an additional classifier to discriminate the fakes. We believe the benefits outweigh the downsides.
Make-An-Audio lowers the requirements for high-quality text-to-audio synthesis, which may cause unemployment for people with related occupations, such as sound engineers and radio hosts. In addition, there is the potential for harm from non-consensual voice cloning or the generation of fake media, and the voices in the recordings might be overused than they expect.
\section{Limitations}
Make-An-Audio adopts generative diffusion models for high-quality synthesis, and thus it inherently requires multiple iterative refinements for better results. Besides, latent diffusion models typically require more computational resources, and degradation could be observed with decreased training data. One of our future directions is to develop lightweight and fast diffusion models for accelerated sampling.
\section{Make-An-Audio}
In this section, we first overview the Make-An-Audio framework and illustrate pseudo prompt enhancement to better align text and audio semantics, following which we introduce textual and audio representations for multimodal learning. Together with the power of diffusion models with classifier-free guidance, Make-An-Audio explicits high-fidelity synthesis with superior generalization.
\subsection{Overview}
Deep generative models have achieved leading performances in text-guided visual synthesis. However, the current development of text-to-audio (T2A) generation is hampered by two major challenges: 1) Model training is faced with data scarcity, as human-labeled audios are expensive to create, and few audio resources provide natural language descriptions. 2) Modeling long continuous waveforms (e.g., typically 16,000 data points for 1s 16 kHz waveforms) poses a challenge for all high-quality neural synthesizers.
As illustrated in Figure~\ref{fig:arch}, Make-An-Audio consists of the following main components: 1) the pseudo prompt enhancement to alleviate the issue of data scarcity, opening up the usage of orders of magnitude language-free audios; 2) a spectrogram autoencoder for predicting self-supervised representation instead of long continuous waveforms; 3) a diffusion model that maps natural language to latent representations with the power of contrastive language-audio pretraining (CLAP) and 4) a separately-trained neural vocoder to convert mel-spectrograms to raw waveforms. In the following sections, we describe these components in detail.
\subsection{Pseudo Prompt Enhancement: Distill-then-Reprogram}
To mitigate the data scarcity, we propose to construct prompts aligned well with audios, enabling a better understanding of the text-audio dynamics from orders of magnitude unsupervised data. As illustrated in Figure~\ref{fig:pseudo_prompt}, it consists of two stages: an expert distillation approach to produce prompts aligned with audio, and a dynamic reprogramming procedure to construct a variety of concept compositions.
\subsubsection{Expert Distillation}
We consider the pre-trained automatic audio captioning~\citep{xu2020crnn} and audio-text retrieval~\citep{deshmukh2022audio,koepke2022audio} systems as our experts for prompt generation. Captioning models aim to generate diverse natural language sentences to describe the content of audio clips. Audio-text retrieval takes a natural language query to retrieve relevant audio files in a database. To this end, the experts jointly distill knowledge to construct a caption aligned with the audio, following which we select the candidate with the highest CLAP~\citep{elizalde2022clap} score as the final caption (we include a threshold so that only faithful results are considered). This simple yet effective procedure largely alleviates data scarcity issues and enables explicit generalization to different audio domains, and we refer the reader to Section~\ref{ablation:pseudo} for a summary of our findings. Details are attached in Appendix~\ref{detail_t2a}.
\subsubsection{Dynamic Reprogramming}
To prevent overfitting and enable a better understanding of concept compositions, we introduce a dynamic reprogramming technique that constructs a variety of concept compositions. It proceeds in three steps as illustrated in Figure~\ref{fig:pseudo_prompt}, where we elaborate the process as follows: 1) We first prepare our sound event database $D$ annotated with a single label. 2) Each time, $N$ concepts are sampled from the database $D$, where $N \in \{0, 1, 2\}$. 3) The original text-audio pair is randomly concatenated with the sampled events according to a template, constructing a new training example with varied concept compositions. This procedure can be conducted online, significantly reducing the time consumed for data preparation. The reprogramming templates are attached in Appendix~\ref{templates}.
\subsection{Textual Representation}
Text-guided synthesis models need powerful semantic text encoders to capture the meaning of arbitrary natural language inputs, which could be grouped into two major categories: 1) Contrastive pretraining. Similar to CLIP~\citep{radford2021learning} pre-trained on image-text data, recent progress on contrastive language-audio pretraining (CLAP)~\citep{elizalde2022clap} brings audio and text descriptions into a joint space and demonstrates outstanding zero-shot generalization to multiple downstream domains. 2) Large-scale language modeling (LLM). \citet{saharia2022photorealistic} and \citet{kreuk2022audiogen} utilize language models (e.g., BERT~\citep{devlin2018bert}, T5~\citep{raffel2020exploring}) for text-guided generation. Language models are trained on text-only corpora significantly larger than paired multimodal data, thus being exposed to a rich distribution of text.
Following the common practice~\citep{saharia2022photorealistic,ramesh2022hierarchical}, we freeze the weights of these text encoders. We find that both CLAP and T5-Large achieve similar results on benchmark evaluation, while CLAP could be more efficient without offline computation of embeddings required by LLM. We refer the reader to Section~\ref{ablation:textual} for a summary of our findings.
\subsection{Audio Representation}
Recently, spectrogram autoencoders~\citep{gong2022ssast,he2022masked} (spectrograms are akin to 1-channel 2D images) with a reconstruction objective as self-supervision have demonstrated the effectiveness of heterogeneous image-to-audio transfer, advancing the field of speech and audio processing on a variety of downstream tasks. The audio signal is represented as a mel-spectrogram sample $\boldsymbol{x} \in[0,1]^{C_{\mathrm{a}} \times T}$, where $C_{\mathrm{a}}, T$ respectively denote the mel channels and the number of frames. Our spectrogram autoencoder is composed of 1) an encoder network $E$ which takes samples $\boldsymbol{x}$ as input and outputs latent representations $z$; 2) a decoder network $G$ which reconstructs the mel-spectrogram signals $\boldsymbol{x'}$ from the compressed representation $z$; and 3) a multi-window discriminator $Dis$ which learns to distinguish the generated samples $G(z)$ from real ones over different receptive fields of mel-spectrograms.
The whole system is trained end-to-end to minimize 1) Reconstruction loss ${\mathcal{L}}_{re}$, which improves the training efficiency and the fidelity of the generated spectrograms; 2) GAN losses ${\mathcal{L}}_{GAN}$, where the discriminator and generator play an adversarial game; and 3) KL-penalty loss ${\mathcal{L}}_{KL}$, which restricts spectrogram encoders to learn standard $z$ and avoid arbitrarily high-variance latent spaces.
To this end, Make-An-Audio takes advantage of the spectrogram autoencoder to predict the self-supervised representations instead of waveforms. It largely alleviates the challenges of modeling long continuous data and guarantees high-level semantic understanding.
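A minimal sketch of the combined training objective is shown below; the inputs are assumed to be PyTorch tensors, and the loss weights are placeholders rather than the values used in training.
\begin{verbatim}
def autoencoder_loss(x, x_rec, mu, logvar, disc_logits_fake,
                     lambda_gan=0.5, lambda_kl=1e-6):
    """Reconstruction + adversarial (generator side) + KL penalty."""
    l_re = (x - x_rec).abs().mean()
    l_gan = -disc_logits_fake.mean()
    l_kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).mean()
    return l_re + lambda_gan * l_gan + lambda_kl * l_kl
\end{verbatim}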
\begin{figure*}[h]
\centering
\includegraphics[width=0.7\textwidth]{Figures/all-to-audio.pdf}
\vspace{-4mm}
\caption{A high-level overview of visual-to-audio generation (I2A/V2A) pipeline using Make-An-Audio.}
\vspace{-2mm}
\label{fig:visual_to_audio}
\end{figure*}
\subsection{Generative Latent Diffusion}
We implement our method over Latent Diffusion Models (LDMs)~\citep{rombach2022high}, a recently introduced class of Denoising Diffusion Probabilistic Models (DDPMs)~\citep{ho2020denoising} that operate in the latent space. It is conditioned on textual representation, breaking the generation process into several conditional diffusion steps. The training loss is defined as the mean squared error in the noise $\boldsymbol\epsilon \sim \mathcal{N}({\bm{0}},{\mathbf{I}})$ space, and efficient training optimizes a randomly sampled term $t$ with stochastic gradient descent:
\begin{equation}
\label{score_loss}
{\mathcal{L}}_{\theta} = \left\lVert \boldsymbol\epsilon_\theta({\mathbf{z}}_t, t, c)-\boldsymbol\epsilon\right\rVert_2^2,
\end{equation}
where ${\mathbf{z}}_t$ denotes the noised latent at step $t$, $c$ the textual condition, and $\boldsymbol\epsilon_\theta$ the denoising network. To conclude, the diffusion model can be efficiently trained by optimizing the ELBO without adversarial feedback, ensuring extremely faithful reconstructions that match the ground-truth distribution. The detailed formulation of the DDPM is attached in Appendix~\ref{DDPM}.
\subsection{Classifier-Free Guidance}
In classifier-free guidance~\citep{dhariwal2021diffusion,ho2022classifier}, by jointly training a conditional and an unconditional diffusion model, it is possible to combine the conditional and unconditional scores to attain a trade-off between sample quality and diversity. The textual condition in the latent diffusion model $\boldsymbol\epsilon_\theta({\mathbf{z}}_t, t, c)$ is replaced by an empty prompt $c_{\emptyset}$ with a fixed probability during training. During sampling, the output of the model is extrapolated further in the direction of $\boldsymbol\epsilon_\theta({\mathbf{z}}_t, t, c)$ and away from $\boldsymbol\epsilon_\theta({\mathbf{z}}_t, t, c_{\emptyset})$ with the guidance scale $s \geq 1$:
\begin{shrinkeq}{-1ex}
\begin{equation}
\small
\label{guidance}
\tilde{\boldsymbol\epsilon}_\theta({\mathbf{z}}_t, t, c) = \boldsymbol\epsilon_\theta({\mathbf{z}}_t, t, c_{\emptyset}) + s \cdot (\boldsymbol\epsilon_\theta({\mathbf{z}}_t, t, c) - \boldsymbol\epsilon_\theta({\mathbf{z}}_t, t, c_{\emptyset}))
\end{equation}
\end{shrinkeq}
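In implementation terms, the guided prediction amounts to the following sketch; \texttt{model} stands for the denoising network, and the scale \texttt{s} is a placeholder to be swept at sampling time.
\begin{verbatim}
def guided_eps(model, z_t, t, cond, empty_cond, s=3.0):
    """Classifier-free guidance: extrapolate away from the unconditional score."""
    eps_c = model(z_t, t, cond)
    eps_u = model(z_t, t, empty_cond)
    return eps_u + s * (eps_c - eps_u)
\end{verbatim}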
\section{Results}
\subsection{Quantitative Results}
\textbf{Automatic Objective Evaluation}
The objective evaluation comparison with the baseline Diffsound (the only publicly-available T2A generation model) is presented in Table~\ref{table:text-to-audio}, and we have the following observations: 1) In terms of audio quality, Make-An-Audio achieves the highest perceptual quality in AudioCaption with an FID of 4.61 and a KL of 2.79. For zero-shot generation, it also demonstrates results superior to the baseline model; 2) On text-audio similarity, Make-An-Audio scores the highest CLAP with a gap of 0.037 compared to the ground truth audio, suggesting Make-An-Audio's ability to generate faithful audio that aligns well with descriptions.
\textbf{Subjective Human Evaluation} The evaluation of T2A models is very challenging due to the subjective nature of perceptual quality, and thus we include a human evaluation in Table~\ref{table:text-to-audio}: Make-An-Audio (CLAP) achieves the highest perceptual quality with MOS-Q of 72.5 and MOS-F of 78.6. This indicates that raters prefer our model's synthesis over the baselines in terms of audio naturalness and faithfulness.
For audio inpainting, we compare different masking designs, including the irregular (thick, medium, and thin) strategy from the visual world~\citep{suvorov2022resolution}, as well as the frame-based (with varying $p$) strategy commonly used in speech~\citep{baevski2020wav2vec,hsu2021hubert}. During evaluation, we randomly mask \textit{wide} or \textit{narrow} regions and utilize FID and KL metrics to measure performance. The results are presented in Table~\ref{table:audio-inpainting}, and we have the following observations: 1) In both frame-based and irregular strategies, larger masked regions in training lead to improved perceptual quality, as they force the network to fully exploit the large receptive field over continuous spectrograms. 2) For a similar size of masked region, the frame-based strategy consistently outperforms the irregular one, suggesting that it could be better to mask audio spectrograms along the time axis.
We also present our visual-to-audio generation results in Table~\ref{table:visual2audio}. As can be seen, Make-An-Audio can generalize to a wide variety of images and videos. Leveraging contrastive pre-training, the model provides a high-level understanding of visual input, which generates high-fidelity audio spectrograms well-aligned with their semantic meanings.
\begin{figure*}[h]
\centering
\includegraphics[width=1.0\textwidth]{Figures/personalize_tta.pdf}
\vspace{-6mm}
\caption{We illustrate personalized text-to-audio results with various $t_0$ initializations. $t_0 = 0$ indicates the initial audio itself, whereas $t_0 = 1$ indicates a text-to-audio synthesis from scratch. For comparison, realism is measured by the 1-MSE distance between generated and initial audio, and faithfulness is measured by the CLAP score between the generated sample and the text prompt. Prompt: A clock ticktocks.}
\vspace{-2mm}
\label{fig:personalized}
\end{figure*}
\subsection{Qualitative Findings}
\begin{figure}[h]
\centering
\includegraphics[width=0.5\textwidth]{Figures/inpaint.pdf}
\vspace{-7mm}
\caption{Qualitative results with our inpainting model.}
\label{fig:inpaint}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=0.35\textwidth]{Figures/CLAP-FID.pdf}
\vspace{-2mm}
\caption{Classifier-free guidance trade-off curves.}
\vspace{-3mm}
\label{fig:CLAP-FID}
\end{figure}
Firstly, we explore the classifier-free guidance in text-to-audio synthesis. We sweep over guidance values and present trade-off curves between CLAP and FID scores in Figure~\ref{fig:CLAP-FID}. Consistent with the observations in ~\citet{ho2022classifier}, the choice of the classifier guidance weight could scale conditional and unconditional synthesis, offering a trade-off between sample faithfulness and realism with respect to the conditioning text.
For better comparison in audio inpainting, we visualize different masking strategies and synthesis results in Figure~\ref{fig:inpaint}. As can be seen, given initial audio with undesired content, our model correctly fills in and reconstructs the audio, robust to different shapes of masked regions, suggesting that it is capable of a high-level understanding of audio content.
For personalized text-to-audio generation, we explore different $t_0 \in(0,1)$ to add Gaussian noise and conduct reverse sampling. As shown in Figure~\ref{fig:personalized}, a trade-off between faithfulness (measured by the CLAP score) and realism (measured by the 1-MSE distance) can be observed. We find that $t_0 \in[0.2, 0.5]$ works well for faithful guidance with realistic generation, suggesting that audio attributes (e.g., speed, timbre, and energy) are easily destroyed as $t_0$ increases.
\subsection{Analysis and Ablation Studies}
To verify the effectiveness of several designs in Make-An-Audio, including pseudo prompt enhancement, textual and audio representation, we conduct ablation studies and discuss the key findings as follows. More analysis on audio representation has been attached in Appendix~\ref{autoencoder}.
\subsubsection{Textual Representation} \label{ablation:textual}
We explore several pretrained text encoders, including the language models BERT~\citep{devlin2018bert} and T5-Large~\citep{raffel2020exploring}, as well as the multimodal contrastive pre-trained encoders CLIP~\citep{radford2021learning} and CLAP~\citep{elizalde2022clap}. We freeze the weights of the text encoders for T2A generation. For easy comparison, we present the results in Table~\ref{table:text-to-audio} and have the following observations: 1) Since CLIP is introduced as a scalable approach for learning joint representations between text and images, it could be less useful in deriving semantic representations for T2A, in contrast to~\citet{yang2022diffsound}. 2) CLAP and T5-Large achieve similar performances on the benchmark dataset, while CLAP could be more computationally efficient (with only 59\% of the parameters), without the need for offline computation of embeddings in large-scale language models.
\subsubsection{Pseudo Prompt Enhancement} \label{ablation:pseudo}
Our prompt enhancement approach, which consists of two stages in a distill-then-reprogram fashion, alleviates the issue of data scarcity. As shown in Table~\ref{table:dataset} in Appendix~\ref{app:data}, we calculate and compare the prompt-audio faithfulness averaged across datasets: the joint expert distillation produces high-quality captions aligned well with the audio, and suggests strong generalization to diverse audio domains.
To highlight the effectiveness of the proposed dynamic reprogramming strategy for creating unseen object compositions, we additionally train Make-An-Audio on the static training dataset and attach the results in Table~\ref{table:ablation} in Appendix~\ref{app:implementation}: 1) Removing the dynamic reprogramming approach results in a slight drop in evaluation; 2) When migrating to the more challenging Clotho benchmark in a zero-shot fashion, a significant degradation can be observed, demonstrating its effectiveness in constructing diverse object compositions for better generalization.
\section{Conclusion}\label{sec:conclusion}
In this paper, we have proposed a framework that integrates low-light enhancement and fuses the RGB and event modalities for effective monocular depth estimation under adverse night conditions. A synthetic night-time driving dataset that contains paired RGB, event and depth images has also been constructed, covering scenes with adverse weather, lighting and road conditions. The experimental results have shown that our proposed framework achieves satisfactory depth estimation in various adverse night scenarios.
\section{Dataset}\label{sec:dataset}
To the best of our knowledge, there is currently no dataset for monocular depth estimation under adverse night conditions that contains paired RGB, event and depth images. In order to validate the effectiveness of our proposed framework, and to advance future research in this direction, we construct the first adverse night-time driving dataset that includes the aforementioned data modalities and the ground truth depth maps. The dataset was constructed using CARLA \cite{dosovitskiy2017carla}, a popular simulator in the field of autonomous driving, and the event camera plugin \cite{hidalgo2020learning}.
\subsection{Data Collection and Statistics}
We collect the data through a sensor suite that contains an RGB camera, an event-based camera, and a depth camera. The positive and negative thresholds for triggering an event of the event camera were set to 0.4. All sensors were set with a FOV of 90 degrees, a resolution of $640 \times 480$ pixels, and a data generation rate of 8 Hz. All sensors had an egocentric view, mounted on a vehicle while it was driving. The start and end points of the vehicle were manually selected and the routes of the vehicle were accurately planned. The speed and following distance of the vehicles, as well as the lights at night, were based on road traffic guidelines to achieve maximum realism of night-time driving. The driving scenes are diverse, including typical city and rural roads, as well as tunnels and highways. The statistics of the driving scenarios are shown in Fig.~\ref{fig:scene_distribution}. We also apply adverse weather conditions such as rain, fog, and the combination of rain and fog to the scenes, and Fig.~\ref{fig:weather_distribution} shows the distribution of the adverse weather in our dataset. The dataset contains 11,191 samples. Each sample has paired RGB, event and ground truth depth images. We split the entire dataset into 70\% for training, 15\% for validation and the remaining 15\% for testing. We name our dataset \textbf{MonoANC} (\textbf{Mono}cular depth estimation at \textbf{A}dverse \textbf{N}ight \textbf{C}onditions).
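For reproducibility, a sketch of the sensor setup in CARLA is given below; the blueprint and attribute names follow the CARLA 0.9.x documentation and may need adjustment for other versions, and the mounting transform is illustrative.
\begin{verbatim}
import carla

def attach_sensors(world, vehicle):
    bp_lib = world.get_blueprint_library()
    mount = carla.Transform(carla.Location(x=1.5, z=1.8))  # egocentric view
    sensors = {}
    for key, bp_id in [('rgb', 'sensor.camera.rgb'),
                       ('event', 'sensor.camera.dvs'),
                       ('depth', 'sensor.camera.depth')]:
        bp = bp_lib.find(bp_id)
        bp.set_attribute('image_size_x', '640')
        bp.set_attribute('image_size_y', '480')
        bp.set_attribute('fov', '90')
        bp.set_attribute('sensor_tick', str(1.0 / 8))  # 8 Hz
        if key == 'event':
            bp.set_attribute('positive_threshold', '0.4')
            bp.set_attribute('negative_threshold', '0.4')
        sensors[key] = world.spawn_actor(bp, mount, attach_to=vehicle)
    return sensors
\end{verbatim}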
\begin{figure}
\centering
\begin{subfigure}{.5\columnwidth}
\centering
\includegraphics[width=\columnwidth]{figures/pie-city4.0.pdf}
\caption{}
\label{fig:scene_distribution}
\end{subfigure}%
\begin{subfigure}{.5\columnwidth}
\centering
\includegraphics[width=\columnwidth]{figures/pie-scenraio6.0.pdf}
\caption{}
\label{fig:weather_distribution}
\end{subfigure}%
\caption{Distribution of different night-time driving environments (a) and different adverse weather conditions (b).}
\label{fig:dataset_statistics}
\end{figure}
\section{Discussion}\label{sec:discussion}
\subsection{Performance on MonoANC}
We designed extensive experiments to demonstrate that our framework effectively fuses information from the event camera into depth estimation. From the Depthformer results in Table~\ref{tab:overall_results}, it can be seen that pure event input cannot predict satisfactory depth values due to its limited information, while pure RGB input performs reasonably. On top of the RGB input, either low-light enhancement, adding the Sobel operator, or fusing events improves the accuracy of depth estimation, and the gain varies with the enhancement method. After adding the edge information extracted by the Sobel operator to the low-light-enhanced image, there is a significant decrease in Abs. Rel.: the edge information effectively compensates for possible errors caused by overexposure or incorrect enhancement during the low-light enhancement process. A similar conclusion can be drawn from the SimIPU results, and the final predictions of the two networks regress to a similar interval.
Figure~\ref{fig:qualitative_results} shows qualitative results comparing the initial RGB input, the prediction from the fusion map, and the depth ground truth. The depth maps of the EVEN fusion network show a very noticeable improvement in detail at the edges as well as for objects in the distance. This visually evidences that the edge information and HDR features of the events are effectively fused into our network.
However, the edge information extracted by the Sobel operator is computed from the rendered image rather than captured by a real sensor. In the end, fusing events with low-light enhancement achieves targeted enhancement of the edge features and adds more high dynamic range information, yielding the best predictions.
\iffalse
\subsection{Performance on MVSEC}
\begin{table}
\centering
\begin{tabular}{l|l|r|r|r}
\hline
\multirow{2}{*}{\begin{tabular}[c]{@{}l@{}}Input Sequences \\(Trained on day2)\end{tabular}} & Error Metric ↓~ & \multicolumn{3}{l}{~ ~ ~Accuracy Metric ↑} \\
\cline{2-5}
& ~ ~ ~ ~Abs & \multicolumn{1}{l|}{~ ~α1} & \multicolumn{1}{l|}{~ ~α2} & \multicolumn{1}{l}{~ ~α3} \\
\hline
RGB\_Night1 & ~ ~ 4.7747 & 0.6724 & 0.8819 & 0.9530 \\
\hline
EVEN\_Night1 & \textbf{~ ~ 4.3375} & \textbf{0.6729} & \textbf{0.8858} & \textbf{0.9606} \\
\hline
RGB\_Night2 & ~ ~ 5.1952 & 0.6643 & 0.8734 & 0.9414 \\
\hline
EVEN\_Night2 & \textbf{~ ~ 4.4671} & \textbf{0.6788} & \textbf{0.8898} & \textbf{0.9600} \\
\hline
RGB\_Night3 & ~ ~ 4.7022 & 0.6719 & 0.8988 & 0.9633 \\
\hline
EVEN\_Night3 & \textbf{~ ~ 4.1098} & \textbf{0.6883} & 0.8939 & \textbf{0.9716} \\
\hline
\end{tabular}
\end{table}
\fi
\section{Experiment}\label{sec:experiment}
In this section, we first describe the implementation details of our framework - EVEN, and then the evaluation metrics, followed by the baseline methods that are used to compare against our framework. We then show overall results of all methods on MonoANC, and present the results of cross validation of the performance of EVEN on different adverse weather combinations at the end.
\subsection{Implementation Details}
We implement our EVEN framework using PyTorch. The learning rate for training the multi-modal fusion network was 1e-3. AdamW \cite{loshchilov2017decoupled} was used as the optimizer, with the weight decay set to 1e-3. The step size of the scheduler was 5, and the fusion network was trained for 100 epochs. We set $\beta$ to 0.8 in Equation~\ref{eq:joint_loss}. After the fusion network was properly trained, we pre-generated the fusion images and trained the depth estimation networks (i.e., Depthformer and SimIPU) using their default settings \cite{lidepthtoolbox2022}.
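A minimal sketch of this training configuration is given below; \texttt{FusionNet}, \texttt{train\_loader} and \texttt{joint\_loss} are placeholders for the fusion network of Section~\ref{sec:method}, the MonoANC training split, and the loss of Equation~\eqref{eq:joint_loss}, respectively.
\begin{verbatim}
import torch

model = FusionNet().cuda()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-3)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5)

for epoch in range(100):
    for enhanced_rgb, event_img in train_loader:
        enhanced_rgb, event_img = enhanced_rgb.cuda(), event_img.cuda()
        fusion = model(event_img, enhanced_rgb)
        loss = joint_loss(fusion, enhanced_rgb, event_img, beta=0.8)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()
\end{verbatim}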
\subsection{Evaluation Metrics}
We use standard evaluation protocols following~\cite{eigen2014depth} to evaluate our framework and all baseline methods. Specifically, we measure the mean absolute relative error (Abs. Rel.), mean squared relative error (Sq. Rel.), root mean squared error (RMSE), and mean log10 error (Log10). Apart from the error metrics, we also adopt three thresholds as accuracy metrics, as is common practice in the literature, i.e., $\alpha = 1.25^i, i = 1, 2, 3$.
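For completeness, a sketch of how these metrics are computed from a predicted and a ground-truth depth map (both assumed positive and already masked to valid pixels) is:
\begin{verbatim}
import torch

def depth_metrics(pred, gt):
    ratio = torch.max(gt / pred, pred / gt)
    a1 = (ratio < 1.25).float().mean()
    a2 = (ratio < 1.25 ** 2).float().mean()
    a3 = (ratio < 1.25 ** 3).float().mean()
    abs_rel = ((pred - gt).abs() / gt).mean()
    sq_rel = ((pred - gt) ** 2 / gt).mean()
    rmse = ((pred - gt) ** 2).mean().sqrt()
    log10 = (torch.log10(pred) - torch.log10(gt)).abs().mean()
    return dict(abs_rel=abs_rel, sq_rel=sq_rel, rmse=rmse,
                log10=log10, a1=a1, a2=a2, a3=a3)
\end{verbatim}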
\subsection{Baseline Methods}
We implement six baselines to compare against and to examine the effectiveness of our framework in boosting depth estimation, i.e., the use of low-light enhancement and the fusion of the event and RGB modalities. As mentioned earlier, salient edges and texture are core features for depth estimation. We therefore adopt the Sobel operator \cite{kanopoulos1988design}, an edge detector, to process the RGB modality, and use the resulting image as an alternative to the event image in our framework, since it also retains salient edge information; this helps justify the use of event data (a sketch of this edge-image computation is given after the list below).
\begin{enumerate}
\item RGB: the raw RGB image is fed directly into the depth estimation network as the only input for depth estimation.
\item Event: the event image is fed directly into the depth estimation network as the only input.
\item RGB + Sobel: the paired raw RGB and Sobel operator processed images are used as the inputs to the phase-2 of EVEN, followed by depth estimation of phase-3.
\item RGB + Event: the paired raw RGB and event images are used as the inputs to the phase-2 of EVEN, followed by depth estimation of phase-3.
\item RGB$_{Enhanced}$: the enhanced RGB image after phase-1 is fed directly into the depth estimation network as the only input for depth estimation.
\item RGB$_{Enhanced}$ + Sobel: the paired enhanced RGB image after phase-1 and Sobel operator processed image are used as the inputs to the phase-2 of EVEN, followed by depth estimation of phase-3.
\end{enumerate}
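The edge image used by the ``+ Sobel'' baselines can be produced as follows; the kernel size and the normalization are our choices and are not specified in the baselines above.
\begin{verbatim}
import cv2
import numpy as np

def sobel_edge_image(rgb):
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
    mag = np.sqrt(gx ** 2 + gy ** 2)          # gradient magnitude
    return (255 * mag / (mag.max() + 1e-8)).astype(np.uint8)
\end{verbatim}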
\begin{table*}[]
\caption{Results on MonoANC Dataset When the Depth Estimation Network in EVEN is Instantiated as Depthformer and SimIPU Respectively}
\label{tab:overall_results}
\resizebox{\textwidth}{!}{
\begin{tabular}{@{}ccccccccccccccc@{}}
\toprule
\multicolumn{1}{l}{} & \multicolumn{7}{c}{Depthformer} & \multicolumn{7}{c}{SimIPU} \\ \cmidrule(l){2-15}
Input Sequence &
\multicolumn{4}{c}{Error Metric ↓} &
\multicolumn{3}{c}{Accuracy Metric ↑} &
\multicolumn{4}{c}{Error Metric ↓} &
\multicolumn{3}{c}{Accuracy Metric ↑} \\ \cmidrule(l){2-15}
& Abs. Rel. & Sq. Rel. & RMSE & Log10 & $\alpha 1$ & $\alpha 2$ & $\alpha 3$ & Abs. Rel. & Sq. Rel. & RMSE & Log10 & $\alpha 1$ & $\alpha 2$ & $\alpha 3$ \\ \midrule
RGB & 0.192 & 0.310 & 4.973 & 0.069 & 0.810 & 0.911 & 0.985 & 0.293 & 0.370 & 5.177 & 0.079 & 0.710 & 0.921 & 0.972 \\ \midrule
Event & 0.452 & 0.220 & 7.775 & 0.172 & 0.390 & 0.622 & 0.795 & 0.594 & 1.240 & 9.180 & 0.116 & 0.552 & 0.828 & 0.932 \\ \midrule
RGB + Sobel & 0.180 & 0.340 &5.304 & 0.064 & 0.808 & 0.908 & 0.956 & 0.266 & 0.310 & 4.947 & 0.067 & 0.773 & 0.930 & 0.976 \\ \midrule
RGB + Event & 0.179 & 0.340 & 5.992 & 0.067 & 0.795 & 0.920 & 0.956 & 0.229 & 0.280 & 5.151 & 0.057 & 0.837 & 0.953 & 0.984 \\ \midrule
RGB$_{Enhanced}$ & 0.181 & 0.390 & 5.737 & 0.074 & 0.765 & 0.924 & 0.971 & 0.263 & 0.300 & 4.998 & 0.058 & 0.824 & 0.948 & 0.984 \\ \midrule
RGB$_{Enhanced}$ + Sobel & 0.139 & \textbf{0.280} & 5.023 & 0.063 & 0.806 & 0.970 & 0.988 & 0.216 & \textbf{0.240} & \textbf{4.080} & 0.063 & 0.846 & 0.954 & 0.986 \\ \midrule
EVEN (Ours) &
\textbf{0.112} &
\textbf{0.280} &
\textbf{4.335} &
\textbf{0.049} &
\textbf{0.903} &
\textbf{0.976} &
\textbf{0.993} &
\textbf{0.125} &
0.280 &
4.845 &
\textbf{0.049} &
\textbf{0.857} &
\textbf{0.959} &
\textbf{0.988} \\ \bottomrule
\end{tabular}
}
\end{table*}
\begin{table*}[]
\caption{Cross Validation Results of EVEN on Different Adverse Weather Conditions}
\label{tab:cross_validation}
\resizebox{\textwidth}{!}{
\begin{tabular}{@{}cccccccccccccccc@{}}
\toprule
\multicolumn{2}{c}{\multirow{2}{*}{Input Sequence}} & \multicolumn{7}{c}{Depthformer} & \multicolumn{7}{c}{SimIPU} \\ \cmidrule(l){3-16}
\multicolumn{2}{c}{} & \multicolumn{4}{c}{Error Metric ↓} & \multicolumn{3}{c}{Accuracy Metric ↑} & \multicolumn{4}{c}{Error Metric ↓} & \multicolumn{3}{c}{Accuracy Metric ↑} \\ \midrule
Train Set & Test Set & Abs. Rel. & Sq. Rel. & RMSE & Log10 & $\alpha 1$ & $\alpha 2$ & $\alpha 3$ & Abs. Rel. & Sq. Rel. & RMSE & Log10 & $\alpha 1$ & $\alpha 2$ & $\alpha 3$ \\ \midrule
rain and fog at the same time & rain only and fog only & 0.325 & 1.987 & 8.475 & 0.187 & 0.471 & 0.645 & 0.797 & 0.330 & 1.865 & 8.710 & 0.187 & 0.420 & 0.655 & 0.786 \\ \midrule
rain only and fog only & rain and fog at the same time & \textbf{0.267} & \textbf{0.315} & \textbf{4.934} & \textbf{0.031} & \textbf{0.646} & \textbf{0.833} & \textbf{0.937} & \textbf{0.260} & \textbf{0.307} & \textbf{4.933} & \textbf{0.031} & \textbf{0.680} & \textbf{0.844} & \textbf{0.939} \\ \bottomrule
\end{tabular}}
\end{table*}
\subsection{Overall Results}
As we instantiate the depth estimation network separately as either Depthformer or SimIPU, we run the six baselines with each instantiation. Table~\ref{tab:overall_results} summarizes the overall results. Our complete EVEN framework outperforms the baseline methods, and its performance improvement is consistent across Depthformer and SimIPU. The absolute relative error (Abs. Rel.) is reduced by 41.7\% and 57.3\% respectively compared to a single RGB input to Depthformer and SimIPU. An 11.5\% relative improvement on the $\alpha 1$ accuracy metric can also be observed for EVEN using Depthformer, and a 20.7\% increase for EVEN using SimIPU, compared to a single RGB image as the input to these two depth estimation networks.
Fig.~\ref{fig:qualitative_results} shows four qualitative results of depth estimation on MonoANC. It can be observed that the depth maps estimated by EVEN show a noticeable improvement in detail at the edges as well as for objects in the far distance compared to those of the baselines. As indicated by the red boxes in Fig.~\ref{fig:qualitative_results}, our complete EVEN framework produces depth maps with few artifacts that are closer to the ground truth. These results visually demonstrate that the fusion of the edge information and HDR features of event data in EVEN is effective. When we replace the event image with the Sobel operator processed image, i.e., RGB$_{Enhanced}$ + Sobel, the quality of the estimated depth map degrades slightly, but is still better than that of the remaining baseline methods.
\begin{figure*}[]
\centering
\centerline{\includegraphics[width=\textwidth,scale=1.0]{figures/typical4-7-9.0.pdf}}
\caption{Qualitative results of depth estimation on MonoANC dataset. Top two examples are the results when Depthformer is adopted as the depth estimation network, and the bottom two examples are the results when SimIPU is adopted as the depth estimation network. Areas indicated by the red boxes show that our EVEN framework can better estimate monocular depth than other baseline methods.}
\label{fig:qualitative_results}
\end{figure*}
\subsection{Cross Validation on Adverse Weather}
We further split MonoANC based on different weather conditions. Specifically, there are three adverse weather conditions as shown in Fig.~\ref{fig:dataset_statistics}(b): 1) rain only; 2) fog only; 3) rain and fog occurring together. We split the dataset into two sets: one contains samples of rain only and fog only, and the other contains samples where rain and fog occur simultaneously in the scene. A two-fold cross-validation is then conducted to evaluate the performance of EVEN. Table~\ref{tab:cross_validation} shows the results. When the framework has seen each individual weather condition during training, it can estimate depth well for scenes with mixed adverse weather, i.e., rain and fog occurring at the same time. Conversely, it becomes difficult for the framework to estimate depth for scenes with only a single adverse weather condition if the training data contains only scenes of mixed adverse weather. Hence, cost functions that decompose adverse weather combinations are worth investigating for better depth estimation in future work.
\section{Introduction}\label{sec:intro}
\begin{figure}[h]
\centerline{\includegraphics[width=\linewidth,scale=1.0]{figures/typical33-2.0.pdf}}
\caption{Data samples from our MonoANC dataset. We show paired RGB and event images, and the ground truth depth map for each sample. The adverse night scenarios from top to bottom are: 1) driving in the heavy rain on a city road; 2) driving under a bridge at a foggy night; 3) driving at the countryside at a rainy and foggy night.}
\label{fig:framework}
\end{figure}
Depth estimation with monocular cameras has been actively studied over the past decades~\cite{Godard_2017_CVPR, godard2019digging, casser2019depth}, as it offers an efficient and economic way of obtaining depth. Compared to LiDAR, a monocular camera can be deployed pervasively, and due to its small scale, it can also be installed on an agent, e.g., an autonomous car, unobtrusively.
Albeit convenient and flexible, accurately estimating depth from a monocular camera is non-trivial, especially at night, when the visual perception of conventional RGB cameras degrades. The low dynamic range and sensitivity to motion blur of conventional cameras can lead to defective imaging at night, and the captured images/videos often exhibit underexposure due to low-lighting or back-lighting~\cite{wang2019underexposed}. For an autonomous car driving at night accompanied by adverse weather (e.g., rain and fog), the dual occurrence of adverse light and weather poses a challenge to its RGB-based vision system.
Recently, the event camera has gained popularity in visual perception and robotics. An event camera is a bio-inspired vision sensor that works differently from conventional cameras \cite{gallego2020event, lichtsteiner2008c}. Rather than capturing intensity images at a fixed rate, event cameras measure intensity changes asynchronously in the form of an event stream. Event cameras have distinct advantages over conventional RGB cameras, including very high dynamic range (HDR), high temporal resolution, less motion blur, and low power consumption. These features complement those of the RGB counterpart, providing extra visibility and leading to an enhanced visual perception system.
On the other hand, in depth estimation, texture and salient edges play more important roles than color as recognized by research in the computer vision community~\cite{hu2019visualization}. Texture can be well retained in RGB data whereas salient edges can be better captured by the event camera. Therefore, using both data modalities is a straightforward attempt to boost the overall depth estimation accuracy.
Although a few studies~\cite{gehrig2021combining, Zhu_2018_ECCV, tulyakov2019learning} have proposed to jointly utilize RGB and event data for monocular depth estimation, they mainly focus on day time or normal weather conditions. Thus far, no research has been carried out on event-based monocular depth estimation under adverse night conditions, which is challenging as the RGB source does not contain as much effective visual information as it does at day time, and how to effectively fuse RGB data with the event stream at night has yet to be addressed.
Despite practical applications such as more intelligent and lightweight night-time autonomous driving and rescue robots, there is currently no dataset that contains paired RGB, event and ground truth depth data captured under adverse night conditions to validate and benchmark research in this direction. Hence, in this work, we make the following two contributions:
\begin{enumerate}
\item We propose the first adverse night-time driving dataset that contains paired RGB images, event streams, and ground truth depth maps. The adverse night conditions in our dataset are diverse in a variety of aspects including adverse weather such as rain and fog, and different scenes such as driving on dim countryside roads.
\item We propose a novel three-phase framework, which employs low-light enhancement and multi-modal fusion to tackle the problem of monocular depth estimation at adverse night conditions with event-based vision. The entire framework has been thoroughly evaluated, with the results showing that it outperforms six baselines.
\end{enumerate}
\section{Method}\label{sec:method}
Our framework decomposes monocular depth estimation at adverse night conditions into three phases as shown in Fig.~\ref{fig:framework_overview}. In phase one, the raw RGB image is first enlightened using low-light image enhancement; In phase two, the enhanced RGB image and the event image are fused to generate a fusion image; In phase three, depth estimation is carried out based on the fusion image. We denote our framework as \textbf{EVEN} as it is based on \textbf{EV}ent vision and low-light \textbf{EN}hancement. We elaborate our framework in the following.
\begin{figure*}
\centerline{\includegraphics[width=\textwidth,scale=1.0]{figures/Pipeline_11.0.pdf}}
\caption{An overview of the proposed framework for monocular depth estimation at adverse night conditions (e.g., at foggy night). Our framework, named EVEN, leverages a three-phase process to estimate depth: 1) phase-1: enlightening the low-light RGB image; 2) phase-2: fusing visual information from enhanced RGB and event images; 3) phase-3: estimating depth based on reconstructed fusion image.}
\label{fig:framework_overview}
\end{figure*}
\begin{figure}
\centerline{\includegraphics[width=\columnwidth,scale=1.0]{figures/FusionNet12.0.pdf}}
\caption{The multi-modal fusion network of EVEN. }
\label{fig:fusion_network}
\end{figure}
\subsection{Event Stream}\label{subsec: event_stream}
Asynchronous event streams reflect changes in light intensity. In order to make full use of the information in the event-based data, we convert event streams from the voxel grid format to image format. Specifically, spatial points (indexed by the $x$ and $y$ positions in image coordinates, with the value being the polarity $p$) are stacked along the time axis $t$ using a fixed time period $\Delta$$t$ = 0.125 s. This produces a compact event image.
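A minimal sketch of this conversion is given below; the two-channel, per-polarity accumulation is our reading of the stacking described above, and only the $\Delta t = 0.125$\,s window is specified in the text.
\begin{verbatim}
import numpy as np

def events_to_image(events, H=480, W=640, dt=0.125):
    # events: arrays of pixel coordinates x, y, timestamps t (seconds)
    # and polarities p in {-1, +1}
    img = np.zeros((2, H, W), dtype=np.float32)
    x, y, t, p = events['x'], events['y'], events['t'], events['p']
    keep = t < t[0] + dt
    x, y, p = x[keep], y[keep], p[keep]
    np.add.at(img[0], (y[p > 0], x[p > 0]), 1.0)  # positive events
    np.add.at(img[1], (y[p < 0], x[p < 0]), 1.0)  # negative events
    return img
\end{verbatim}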
\subsection{Phase-1: Low-light Enhancement}
The visual perception of conventional RGB cameras degrades at night due to the limited amount of light. To recover the necessary scene color and texture information captured by the RGB camera, we utilize EnlightenGAN \cite{jiang2021enlightengan} to enhance the raw night-time RGB image. EnlightenGAN is of an attention-based U-Net structure. The input RGB image is normalized by using the illumination channel as the attention map for the ultimate low-light enhancement.
\subsection{Phase-2: Multi-modal Fusion}
Event data can capture much more HDR and temporal details of night-time scenes, whereas RGB data can provide necessary texture and color information. As these two modalities complement each other, and in order to leverage the merits of both, a novel fusion network (refer to Fig.~\ref{fig:fusion_network}), which is built on top of selective kernel network~\cite{li2019selective}, is designed to integrate event data with RGB modality.
\subsubsection{Fusion Network}
Given an event image $\mathbf{X}_{Event}$ and an enhanced RGB image $\mathbf{X}_{Enhanced}$, we use two convolutional kernels with different kernel sizes to transform the input images into feature maps. After transformation, two feature maps $\mathbf{F}_{Event}$ and $\mathbf{F}_{Enhanced}$ are obtained:
\begin{equation}
\mathbf{F}_{Event} = g(\mathbf{X}_{Event}), \mathbf{F}_{Event} \in \mathbb{R}^{H \times W \times C}
\end{equation}
\begin{equation}
\mathbf{F}_{Enhanced} = h(\mathbf{X}_{Enhanced}), \mathbf{F}_{Enhanced} \in \mathbb{R}^{H \times W \times C}
\end{equation}
where $g(\cdot)$ and $h(\cdot)$ are separate convolutional neural network layers that conduct the transformation. For the event image, we use a kernel size of $5 \times 5$, since the information carried in the event modality is relatively sparse and therefore benefits from a larger kernel. For the enhanced RGB image, we use a kernel size of $3 \times 3$. Following the convolutional transformation, the feature maps of the two modalities are merged using an element-wise summation:
\begin{equation}
\mathbf{F}_{sum} = \mathbf{F}_{Event} + \mathbf{F}_{Enhanced}, \mathbf{F}_{sum} \in \mathbb{R}^{H \times W \times C}
\end{equation}
We then apply global average pooling to conduct dimension reduction (along the $H$ and $W$ dimensions) for the merged feature map $\mathbf{F}_{sum}$, which produces a vector $\mathbf{V} \in \mathbb{R}^{1 \times C}$. Similar to~\cite{li2019selective}, we then use a simple fully connected layer $f(\cdot)$ to create a compact vector $\mathbf{k}$ on the basis of $\mathbf{V}$:
\begin{equation}
\mathbf{k} = f(\mathbf{V}), \mathbf{k} \in \mathbb{R}^{d \times 1}
\end{equation}
$\mathbf{k}$ is then used to guide the adaptive fusion of the two modalities. Specifically, we create soft attention across the channel dimension $C$. For the $c$-th element along the channel dimension, the soft attention for fusing the event and enhanced RGB feature maps can be formulated as follows:
\begin{equation}
a_c=\frac{e^{\mathbf{A}_c \mathbf{k}}}{e^{\mathbf{A}_c \mathbf{k}}+e^{\mathbf{B}_c \mathbf{k}}}, b_c=\frac{e^{\mathbf{B}_c \mathbf{k}}}{e^{\mathbf{A}_c \mathbf{k}}+e^{\mathbf{B}_c \mathbf{k}}}
\end{equation}
\begin{equation}
\mathbf{F}_{{fused}_c} = a_c \cdot \mathbf{F}_{{Event}_c}+b_c \cdot \mathbf{F}_{{Enhanced}_c}, a_c + b_c = 1
\end{equation}
where $\mathbf{A}_c \in \mathbb{R}^{1 \times d}$ and $\mathbf{B}_c \in \mathbb{R}^{1 \times d}$ are learnable vectors.
The fused feature map $\mathbf{F}_{fused}$ is then fed into a U-Net~\cite{ronneberger2015u} followed by a group of convolution and ReLU operations to 1) further fuse the features of the event and RGB modalities, and 2) reconstruct a fusion image $\mathbf{Y}$ of the same resolution as the input event and enhanced RGB images:
\begin{equation}
\mathbf{Y} = \text{Conv}(\text{ReLU}(\text{Conv}(\text{U-Net}(\mathbf{F}_{fused}))))
\end{equation}
The resulting fusion image, which has the HDR property and better edge salience, also suppresses areas of overexposure caused by low-light enhancement, as shown in Fig.~\ref{fig:fusion_network}.
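A compact PyTorch sketch of this selective-kernel style fusion is given below; the channel counts and the compact dimension $d$ are illustrative, and the U-Net decoder and reconstruction head are omitted.
\begin{verbatim}
import torch
import torch.nn as nn

class SKFusion(nn.Module):
    def __init__(self, in_event=1, in_rgb=3, C=64, d=32):
        super().__init__()
        self.g = nn.Conv2d(in_event, C, kernel_size=5, padding=2)  # event branch
        self.h = nn.Conv2d(in_rgb, C, kernel_size=3, padding=1)    # RGB branch
        self.fc = nn.Linear(C, d)             # V -> compact vector k
        self.A = nn.Linear(d, C, bias=False)  # rows play the role of A_c
        self.B = nn.Linear(d, C, bias=False)  # rows play the role of B_c

    def forward(self, x_event, x_enhanced):
        F_e, F_r = self.g(x_event), self.h(x_enhanced)
        k = self.fc((F_e + F_r).mean(dim=(2, 3)))   # global average pooling
        attn = torch.softmax(torch.stack([self.A(k), self.B(k)]), dim=0)
        a = attn[0][..., None, None]
        b = attn[1][..., None, None]
        return a * F_e + b * F_r  # fed to the U-Net and Conv/ReLU/Conv head
\end{verbatim}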
\subsubsection{Fusion Loss}\label{subsec:fusion_loss}
In order to allow the entire fusion network to effectively merge visual information from the two modalities, a joint loss $\mathcal{L}_{joint}$ is designed as shown in Equation~\ref{eq:joint_loss}. We use the reconstruction loss between the fusion image and the enhanced RGB image as the primary loss (i.e., $\mathcal{L}_{Enhanced}$), and that between the fusion image and the event image as the auxiliary loss (i.e., $\mathcal{L}_{Event}$). Both reconstruction losses are implemented as $\mathcal{L}_2$ losses that measure the mean squared error between the fusion image and the respective event or enhanced RGB image. During training, the fusion network is trained to minimize $\mathcal{L}_{joint}$.
\begin{equation}\label{eq:joint_loss}
\mathcal{L}_{joint} = \beta \times \mathcal{L}_{Enhanced} + (1 - \beta) \times \mathcal{L}_{Event}
\end{equation}
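A direct sketch of this joint loss, assuming the fusion image and the two targets share the same shape, reads as follows (the argument layout mirrors the training sketch in the experiment section):
\begin{verbatim}
import torch.nn.functional as F

def joint_loss(fusion, enhanced_rgb, event_img, beta=0.8):
    loss_enhanced = F.mse_loss(fusion, enhanced_rgb)  # primary term
    loss_event = F.mse_loss(fusion, event_img)        # auxiliary term
    return beta * loss_enhanced + (1 - beta) * loss_event
\end{verbatim}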
\subsection{Phase-3: Depth Estimation}\label{subsec:depth_estimation}
The fusion image, which contains visual information from both event and RGB modalities, is then used as the source for depth estimation. We separately adopt two state-of-the-art depth estimation networks, i.e., Depthformer~\cite{li2022depthformer} and SimIPU~\cite{li2021simipu} in our EVEN framework to carry out the depth estimation with the fusion image as their input.
\section{Related Work}\label{sec:related_work}
\subsection{Monocular Depth Estimation with Multi-Modal Fusion}
Monocular depth estimation can be achieved using RGB modality alone~\cite{Godard_2017_CVPR, godard2019digging, casser2019depth}. Recent advances in multiple data modalities have further improved the depth estimation accuracy. For instance, some research works proposed to use RGB and optical flow~\cite{ranftl2016dense,tateno2017cnn,chen2020denao,shimada2022pix2pix}, RGB combined with segmentation maps~\cite{zama2018geometry, zhu2020edge, he2021sosd}, or RGB with extra saliency features~\cite{zhang2021deep, abdulwahab2022monocular} as the inputs, and use multi-modal fusion to enhance depth estimation.
LiDAR has been explored for enhancing monocular depth estimation recently. \cite{jaritz2018sparse} and \cite{fu2019lidar} proposed using late fusion methods to fuse depth data from LiDAR and monocular RGB inputs. Apart from pure visual signals, radar has also been used with RGB modality for monocular depth estimation \cite{lin2020depth, lo2021depth}. Recently, an attention-based method has been proposed for fusing radar signals with monocular RGB images \cite{long2022radar}.
\subsection{Event-Based Depth Estimation}
Gehrig et al.~\cite{gehrig2021combining} combined event-based data and monocular RGB frames with a recurrent asynchronous network for depth estimation, which is also the first work to fuse event data with monocular RGB frames. Zhou et al.~\cite{zhou2018semi} investigated the use of stereo event cameras for semi-dense depth estimation by maximizing the temporal consistency between the corresponding event streams. Another event vision-based method was proposed by Zhu et al.~\cite{Zhu_2018_ECCV}, which eliminates disparity for depth estimation. The method proposed in~\cite{tulyakov2019learning} is the first learning-based stereo depth estimation approach for event cameras and also the first to produce dense results. \cite{zhu2019unsupervised} is an unsupervised framework that learns motion information only from event streams, achieving multi-task objectives including optical flow, egomotion and depth estimation. Cui et al. \cite{cui2022dense} proposed a dense depth estimation method based on the fusion of a dense event stream and a sparse point cloud.
Despite the efforts made in event-based depth estimation, existing works are not engineered to specifically tackle monocular depth estimation under adverse night conditions, but instead mainly target day time and normal weather conditions. In this work, we target monocular depth estimation under adverse night conditions. In order to improve the illumination in the field of view (FOV) and to take advantage of the HDR property of the event-based camera, we propose to combine low-light enhancement and multi-modal fusion of event and RGB data for better depth estimation. To the best of our knowledge, ours is the first work that uses event-based vision along with low-light image enhancement to estimate monocular depth under adverse night conditions.
\section{Introduction}
We consider the Fisher-KPP equation:
\begin{equation}
\label{e2.1}
u_t-u_{xx}=u-u^2,~~t>0,~x\in{\mathbb R},
\end{equation}
with an initial condition $u(0,x)=u_0(x)$ which is a compact
perturbation of a step function, in the sense
that there exist $x_1$ and $x_2$ so that
$u_0(x)=1$ for all $x\le x_1$, and $u_0(x)=0$ for all $x\ge x_2$.
This equation has a traveling wave solution $u(t,x)=\phi(x-2t)$, moving with the minimal speed~$c_*=2$, connecting the stable
equilibrium $u\equiv 1$ to the unstable equilibrium~$u\equiv 0$:
\begin{equation}
\label{e2.4}
\begin{array}{rll}
&-\phi''-2\phi'=\phi-\phi^2, \\
&\phi(-\infty)=1, \quad \phi(+\infty)=0.
\end{array}
\end{equation}
Each solution $\phi(\xi)$ of
\eqref{e2.4} is a shift of a fixed profile $\phi_*(\xi)$:
$
\phi(\xi)=\phi_*(\xi+s)
$
with some fixed $s\in{\mathbb R}$. The profile $\phi_*(\xi)$
satisfies the asymptotics
\begin{equation}
\label{e2.20}
\phi_*(\xi)=(\xi+k)e^{-\xi}+O(e^{-(1+\omega_0 )\xi}),~~~\xi\to+\infty,
\end{equation}
with two universal constants $\omega_0 >0$, $k\in\mathbb{R}$.
The large time behaviour of the solutions of this problem has a long history, starting with a striking paper of Fisher \cite{Fisher}, which
identifies the spreading velocity $c_*=2$ via numerical computations and other arguments.
In the same year, the pioneering KPP paper \cite{KPP} proved
that the solution of~\eqref{e2.1}, starting from a step function:
$u_0(x)=1$ for $x\le 0$, $u_0(x)=0$
for $x > 0$, converges to $\phi_*$ in the following sense: there is a function
\begin{equation}\label{mar2202}
\sigma_\infty(t)=2t+o(t),
\end{equation}
such that
\[
\displaystyle
\lim_{t\to+\infty}u(t,x + \sigma_\infty(t))=\phi_*(x).
\]
Fisher has already made an informal
argument that the $o(t)$ in (\ref{mar2202}) is of the
order $O(\log t)$.
An important series of papers by Bramson
proves the following
\begin{thm}
[\cite{Bramson1}, \cite{Bramson2}]\label{t2.10}
There is a constant $x_\infty$, depending on $u_0$, such that
\[
\sigma_\infty(t)=2t-\displaystyle
\frac32\log t-x_\infty+o(1),\hbox{ as {$t\to+\infty$}}.
\]
\end{thm}
Theorem \ref{t2.10} was proved through elaborate probabilistic arguments. Bramson also gave necessary and sufficient conditions on the decay of the initial data to zero (as $x \to +\infty$) in order that the solution converges to $\phi_*(x)$ in some moving frame. Lau~\cite{Lau} also proved those necessary and sufficient conditions (for a more general nonlinear term) using a PDE approach based on the decrease in the number of the intersection points for a pair of solutions of the parabolic Cauchy problem. The asymptotics of $\sigma_\infty(t)$ were not identified by that approach.
A natural question is to prove Theorem \ref{t2.10} with purely PDE arguments.
In that spirit, a weaker version,
precise up to the~$O(1)$ term, (but valid also for a much more
difficult case of the periodic in space
coefficients), is the main result of~\cite{HNRR1,HNRR2}:
\begin{equation}\label{mar2204}
\hbox{ $\sigma(t)=2t-\displaystyle\frac32\log t+O(1)$ as $t\to+\infty$.}
\end{equation}
Here, we will give a simple and robust proof of Theorem~\ref{t2.10}.
These ideas are further developed
to study the refined asymptotics of the solutions in~\cite{NRR2}.
The paper is organized as follows. In Section \ref{sec:2},
we briefly describe some connections between the Fisher-KPP
equation \eqref{e2.1} and the branching Brownian motion.
In Section~\ref{sec:3}, we explain, in an informal way, the strategy of
the proof of the theorem: in a nutshell, the solution
is slaved to the dynamics at $x=O(\sqrt{t})$. In Sections~\ref{sec:4} and \ref{sec:5}, we make the
arguments of Section~\ref{sec:3} rigorous.
\noindent{\bf Acknowledgment.} JN was supported by NSF grant DMS-1351653,
and LR by NSF grant DMS-1311903. JMR was supported by the European Union's Seventh
Framework Programme (FP/2007-2013) / ERC Grant
Agreement n. 321186 - ReaDi - ``Reaction-Diffusion Equations, Propagation and Modelling'',
as well as the ANR project NONLOCAL ANR-14-CE25-0013. LR and JMR thank the Labex CIMI for a
PDE-probability quarter in Toulouse, in Winter 2014, out of which the idea of this
paper grew and which provided a stimulating scientific environment for this project.
\section{Probabilistic links and some related models}\label{sec:2}
The time delay in models of the Fisher-KPP type has been the subject of various
recent investigations,
both from the PDE and probabilistic points of view. The Fisher-KPP equation appears
in the theory of the
branching Brownian motion (BBM)~\cite{McK} as follows. Consider a BBM
starting at~$x=0$ at time $t=0$, with binary branching at rate $1$. Let $X_1(t),\dots,X_{N_t}(t)$ be
the descendants of the original particle at time $t$, arranged in the
increasing order:
$
X_1(t)\le X_2(t)\le\dots\le X_{N_t}(t).
$
Then, the probability distribution function of the maximum:
$$
v(t,x)={\mathbb P}(X_{N_t}(t)>x),
$$
satisfies the Fisher-KPP equation
\[
v_t=\displaystyle\frac 12 v_{xx}+v-v^2,
\]
with the initial data $v_0(x)=\mathbbm{1}_{x\le 0}$. Therefore,
Theorem \ref{t2.10} is about the median location of the maximal particle $X_{N_t}$.
Building on the work of Lalley and Sellke \cite{LS}, recent
probabilistic analyses \cite{ABBS,ABK1,ABK2,BD1,BD2} of this particle
system have identified a decorated Poisson-type point process which is the limit of
the particle distribution ``seen from the tip": there is a random variable $Z > 0$
such that the point process defined by the shifted particles $\{ X_1(t) - c(t) \, ,\,
\dots \, , \,X_{N_t}(t)- c(t)\}$, with
\[
c(t) = 2t - \frac{3}{2} \log t + \log Z,
\]
has a well-defined limit process as $t \to \infty$. Furthermore,
$Z$ is the limit of the martingale
\[
Z_t = \sum_{k} (2t - X_k(t))e^{X_k(t) - 2t},
\]
and
\[
\hbox{$\phi_*(x) = 1-
{\mathbb E}\big[e^{- Ze^{- x}}\big]$ for all $x \in {\mathbb R}$.}
\]
As we have mentioned, the logarithmic term in Theorem \ref{t2.10} arises also
in inhomogeneous variants of this model. For example, consider the Fisher-KPP
equation in a periodic medium:
\begin{equation}
\label{e1.300}
u_t-u_{xx}=\mu(x)u-u^2
\end{equation}
where $\mu(x)$ is continuous and 1-periodic in $\mathbb{R}$, such that the
principal periodic eigenvalue of the operator
$
-\partial_{xx}-\mu(x)
$
is negative. Then there is a minimal speed $c_* > 0$ such that
for each $c \geq c_*$, there is a unique pulsating front $U_c(t,x)$, up to a time
shift~\cite{BH,HR}. It was shown in~\cite{HNRR2} that there is~$s_0>0$ such that, if
$u(t,x)$ solves (\ref{e1.300}) with a nonnegative, nonzero, compactly supported
initial condition~$u_0(x)$, and~$0<s\leq s_0$, then the $s$-level set $\sigma_s(t)$ of $u(t,x)$ (here, the largest $\sigma>0$ such that $u(t,\sigma) = s$) must satisfy
$$
\sigma_s(t)=c_*t-\frac{3}{2\lambda_*}\log t +O(1),
$$
where $\lambda_* > 0$ is the rate of exponential decay (as $x \to \infty$) of the minimal front $U_{c_*}$, which depends on $\mu(x)$ but not on $s$ or on $u_0$. This implies the convergence of $u(t,x-\sigma_s(t))$ to a closed subset of the family of minimal fronts. It is an open problem to determine whether convergence to a single front holds, not to mention the rate of this convergence. When $\mu(x) > 0$ everywhere, the solution $u$ of the related model
$$
u_t-u_{xx}=\mu(x)(u-u^2)
$$
may be interpreted in terms of the extremal particle in a BBM with a spatially-varying branching rate \cite{HNRR2}.
Models with temporal variation in the branching process have also been considered. In \cite{FZ}, Fang and Zeitouni studied the extremal particle of such a spatially homogeneous BBM where the branching particles satisfy
\[
d X(t) = \sqrt{2} \kappa(t/T) \,dB(t)
\]
between branching events, rather than following a standard Brownian motion.
In terms of PDE, their study corresponds to the model
\begin{equation}\label{june1620}
u_t=\kappa^2(t/T)u_{xx}+f(u), \ \ \ 0<t<T,\ x\in\mathbb{R}.
\end{equation}
They proved that if $\kappa$ is increasing, and $f$ is of the Fisher-KPP type,
the shift is algebraic and not logarithmic in time: there exists $C>0$ such that
$$
\frac{T^{1/3}}C\leq X(T)-c_{eff}T\leq CT^{1/3},~~c_{eff}=2\displaystyle\int_0^1\kappa(s)ds.
$$
In \cite{NRR1}, we proved the asymptotics
\begin{equation}\label{june1622}
X(T)=c_{eff}T-\bar\nu T^{1/3}+O({\log}T), \hbox{ with } \bar\nu=\beta\int_0^1\kappa(\tau)^{1/3}\dot\kappa(\tau)^{2/3}d\tau.
\end{equation}
Here, $\beta < 0$ is the first zero of the Airy function.
Maillard and Zeitouni~\cite{MZ}
refined the asymptotics further, proving a logarithmic correction to \eqref{june1622}, and convergence of $u(T)$ to a traveling wave.
\section{Strategy of the proof of Theorem \ref{t2.10}}\label{sec:3}
\subsubsection*{Why converge to a traveling wave?}
We first provide an informal argument for the convergence of the solution of the initial
value problem to a traveling wave. Consider the Cauchy problem (\ref{e2.1}),
starting at $t=1$ for the convenience of the notation:
\begin{eqnarray}\label{sep2704}
u_t-u_{xx}=u-u^2,~~x\in{\mathbb R},~~t>1,
\end{eqnarray}
and
proceed with a standard sequence of changes of variables.
We first go into the moving frame:
\[
x \mapsto x - 2t + ({3}/{2}) \log t,
\]
leading to
\begin{equation}
\label{e2.0}
u_t-u_{xx}-(2-\frac3{2t})u_x=u-u^2.
\end{equation}
Next, we take out the exponential factor: set
$$
u(t,x)=e^{-x}v(t,x)
$$
so that $v$ satisfies
\begin{equation}
\label{e2.6}
v_t-v_{xx}-\frac3{2t}(v-v_x)+e^{-x}v^2=0, \quad \quad x \in {\mathbb R}, \quad t > 1.
\end{equation}
Observe that for any shift $x_\infty \in {\mathbb R}$,
the function $V(x) = e^{x} \phi(x - x_\infty)$ is a steady solution of
\[
V_t - V_{xx} + e^{-x} V^2 = 0.
\]
We regard (\ref{e2.6}) as a perturbation of this equation,
and expect that $v(t,x) \to e^{x} \phi(x - x_{\infty})$ as $t \to \infty$,
for some $x_\infty\in{\mathbb R}$.
\subsubsection*{The self-similar variables}
We note that for $x \to+\infty$, the term $e^{-x}v^2$ in (\ref{e2.6})
is negligible, while for $x\to-\infty$ the same term
will create a large absorption and
force the solution to be close to
zero. For this reason,
the linear Dirichlet problem
\begin{eqnarray}\label{sep1810}
&&z_t-z_{xx}-\frac3{2t}(z-z_x)=0, \quad \quad x > 0\\
&&z(t,0)=0\nonumber
\end{eqnarray}
is a reasonable proxy for (\ref{e2.6}) for $x \gg 1$, and, as shown in
\cite{HNRR1,HNRR2}, it provides good sub- and super-solutions for $v(t,x)$.
The main lesson of \cite{HNRR1,HNRR2} is that everything relevant to
the solutions of~(\ref{sep1810})
happens at the spatial scale~$x\sim\sqrt t$, and their asymptotics
may be unraveled by a self-similar change of variables. Here,
we will accept the
full nonlinear equation \eqref{e2.6} and perform directly the
self-similar
change of variables
\begin{equation}
\label{e2.500}
\tau={\log} t,\ \ \ \eta=\frac{x}{\sqrt{t}}
\end{equation}
followed by a change of the unknown
$$
v(\tau,\eta)=e^{\tau/2}w(\tau,\eta).
$$
This transforms \eqref{e2.6} into
\begin{equation}
\label{e2.7}
w_\tau-\displaystyle\frac\eta2w_\eta-w_{\eta\eta}-w
+\displaystyle\frac32e^{-\tau/2}w_\eta+e^{3\tau/2-\eta{\mathrm{exp}}(\tau/2)}w^2=0,\ \ \ \eta\in\mathbb{R},~~\tau>0.
\end{equation}
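For the reader's convenience, here are the chain rule computations behind \eqref{e2.7}: writing $v(t,x)=\sqrt{t}\,w(\log t,x/\sqrt{t})$, we have
\[
v_t=\frac{1}{\sqrt{t}}\Big(w_\tau-\frac{\eta}{2}w_\eta+\frac{w}{2}\Big),\qquad
v_x=w_\eta,\qquad v_{xx}=\frac{1}{\sqrt{t}}\,w_{\eta\eta},
\]
so that multiplying \eqref{e2.6} by $\sqrt{t}=e^{\tau/2}$, and using $\tfrac{w}{2}-\tfrac32 w=-w$ as well as $e^{-x}t^{3/2}=e^{3\tau/2-\eta\,\mathrm{exp}(\tau/2)}$, gives
\[
w_\tau-\frac{\eta}{2}w_\eta-w_{\eta\eta}-w+\frac32 e^{-\tau/2}w_\eta
+e^{3\tau/2-\eta\,{\mathrm{exp}}(\tau/2)}w^2=0,
\]
which is exactly \eqref{e2.7}.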
This transformation strengthens the reason why
the Dirichlet problem \eqref{sep1810} appears naturally:
for
\[
\eta\ll -\tau e^{-\tau/2},
\]
the last term in the left side of (\ref{e2.7}) becomes
exponentially large, which forces $w$ to be almost 0 in this region.
On the other hand, for
\[
\eta\gg \tau e^{-\tau/2},
\]
this term is very small, so it
should not play any role in the dynamics of $w$ in that region. The transition
region has width of the order $\tau e^{-\tau/2}$.
\subsubsection*{The choice of the shift}
Also, through this change of variables, we can see how a particular
translation of the wave will be chosen. Considering (\ref{sep1810})
in the self-similar variables, one can show -- see \cite{HNRR1,Henderson}
-- that, as~$\tau\to+\infty$, we have
\begin{equation}\label{sep2702}
e^{-\tau/2} z(\tau,\eta)\sim \alpha_\infty \eta e^{-\eta^2/4},~~\eta>0,
\end{equation}
with some $\alpha_\infty>0$.
Therefore, taking (\ref{sep1810}) as an approximation to (\ref{e2.6}), we should expect that
\begin{equation}\label{nov2302}
u(t,x)=e^{-x}v(t,x)\sim e^{-x}z(t,x)\sim e^{-x}e^{\tau/2}
\alpha_\infty \eta e^{-\eta^2/4}=\alpha_\infty xe^{-x}e^{-x^2/(4t)},
\end{equation}
at least for $x$ of the
order $O(\sqrt{t})$. This determines the unique translation:
if we accept that $u$ converges to a translate~$x_\infty$ of $\phi_*$, then
for large $x$ (in the moving frame) we have
\begin{equation}\label{sep1812}
u(t,x)\sim\phi_*(x - x_\infty)\sim xe^{-x + x_\infty}.
\end{equation}
Comparing this with (\ref{nov2302}), we infer that
$$
x_\infty= {\log}\alpha_\infty.
$$
The difficulty with this argument, apart from the justification
of the approximation
\[
u(t,x)\sim e^{-x}z(t,x),
\]
is that each of the asymptotics (\ref{nov2302}) and (\ref{sep1812})
uses different ranges of $x$:
(\ref{nov2302}) comes from the self-similar variables in the region
$x\sim O(\sqrt t)$, while (\ref{sep1812}) assumes $x$ to be large but finite.
However, the self-similar analysis does not tell us at
this stage what happens on the scale~$x\sim O(1)$.
Indeed, it is clear from (\ref{e2.7}) that the error in the
approximation (\ref{sep2702}) is at least of the
order~$O(e^{-\tau/2})$ -- note that the right side in (\ref{sep2702})
is a solution of (\ref{e2.7})
without the last two terms in the left side.
On the other hand, the scale~$x\sim O(1)$ corresponds to $\eta\sim e^{-\tau/2}$.
Thus, the leading order term and the error in~(\ref{sep2702}) are of the same size
for~$x\sim O(1)$, which means that
we can not extract information directly from~(\ref{sep2702}) on that scale.
To overcome this issue, we proceed in two steps:
first we use the self-similar variables to
prove stabilization (that is, (\ref{nov2302}) holds) at the spatial scales $x\sim O(t^\gamma)$ with
a small $\gamma>0$,
and not just at the diffusive scale $O(\sqrt{t})$. This boils down
to showing that
\[
w(\tau,\eta) \sim \alpha_\infty \eta e^{-\eta^2/4}
\]
for the solution to (\ref{e2.7}), even for $\eta \sim e^{-(1/2 - \gamma) \tau}$.
Next, we show that this stabilization
is sufficient to ensure the stabilization on the scale $x\sim O(1)$ and convergence to a unique wave.
This is the core of the argument:
everything happening at $x\sim O(1)$ should be governed by the tail of the solution -- the fronts are pulled.
We conclude this section with some remarks about the generality of the argument. Although we assume, for simplicity, that the reaction term in (\ref{e2.1}) is quadratic, our proof also works for a more general reaction term. Specifically, the function $u - u^2$ in (\ref{e2.1}) may be replaced by a $C^2$ function $f:[0,1] \to {\mathbb R}$ satisfying $f(0) = 0 = f(1)$, $f'(0) > 0$, $f'(1) < 0$, and $f'(s) \leq f'(0)$ for all $s \in [0,1]$. In particular, these assumptions imply that there is $C > 0$ such that $0 \leq f'(0) s - f(s) \leq C s^2$ for all $s \in [0,1]$. Without loss of generality, we may suppose that $f'(0) = 1$. Then, if $g(u) = u - f(u)$, the equation (\ref{e2.6}) for $v$ becomes
\[
v_t-v_{xx}-\frac3{2t}(v-v_x)+e^{x}g(e^{-x} v)=0, \quad \quad x \in {\mathbb R}, \quad t > 1,
\]
and the equation (\ref{e2.7}) for $w$ becomes
\[
w_\tau-\displaystyle\frac\eta2w_\eta-w_{\eta\eta}-w +\displaystyle\frac32e^{-\tau/2}w_\eta+e^{\tau/2+\eta{\mathrm{exp}}(\tau/2)}g( e^{\tau/2-\eta{\mathrm{exp}}(\tau/2)} w) =0,\ \ \ \eta\in\mathbb{R},~~\tau>0.
\]
where $0 \leq g(s) \leq C s^2$ and $g'(s) \geq 0$. Then all of the arguments below (and in \cite{HNRR1}) work in this more general setting. Finally, the arguments also apply to fronts arising from compactly supported initial data $u_0 \geq 0$ (not just perturbations of the step-function). In that case, one obtains two fronts propagating in opposite directions. Combined with \cite{HNRR1}, our arguments here imply that Theorem \ref{t2.10} holds for both fronts. That is, the fronts moving to $\pm \infty$ are at positions $\sigma_\infty^\pm(t)$ with
\[
\sigma^\pm_\infty(t) = \pm 2t \mp \frac{3}{2} \log t + x_\infty^{\pm} + o(1)
\]
where the shifts $x_\infty^+$ and $x_\infty^-$ may differ and depend on the initial data.
\section{Convergence to a single wave as a consequence of the diffusive scale convergence}\label{sec:4}
The proof of Theorem~\ref{t2.10} relies on the following two lemmas.
The first is a consequence of~\cite{HNRR1}.
\begin{lem}\label{lem:mar2302}
The solution of (\ref{e2.0}) with $u(1,x) = u_0(x)$ satisfies
\begin{equation}\label{mar2306}
\lim_{x\to-\infty}u(t,x)=1,~~\lim_{x\to+\infty}u(t,x)=0,
\end{equation}
both uniformly in $t>1$.
\end{lem}
The main new step is to establish
the following.
\begin{lem}\label{sep29-lem02}
There exists a constant $\alpha_\infty>0$ with the following property.
For any $\gamma>0$ and all $\varepsilon>0$ we can find $T_\varepsilon$
so that for all $t>T_\varepsilon$ we have
\begin{equation}\label{sep2902}
|u(t,x_\gamma)-\alpha_\infty x_\gamma e^{-x_\gamma}e^{-x_\gamma^2/(4t)}|\le
\varepsilon x_\gamma e^{-x_\gamma}e^{-x_\gamma^2/(6t)},
\end{equation}
with $x_\gamma=t^\gamma$.
\end{lem}
We postpone the proof of this lemma for the moment, and show how it is used.
A consequence of Lemma~\ref{sep29-lem02} is that the problem for the moment
is to understand, for a given $\alpha>0$,
the behavior of the solutions of
\begin{eqnarray}\label{e4.4}
&&\pdr{u_\alpha}{t}-\frac{\partial^2u_\alpha}{\partial x^2}
-\displaystyle(2-\displaystyle\frac3{2t})\pdr{u_\alpha}{x}-u_\alpha+u_\alpha^2=0,
\ \ x\leq x_{\gamma}(t)\\
&&u_\alpha(t,t^\gamma)=\alpha t^\gamma e^{-t^\gamma-t^{2\gamma-1}/4},\nonumber
\end{eqnarray}
for $t>T_\varepsilon$, with the initial condition $u_\alpha(T_\varepsilon,x)=u(T_\varepsilon,x)$.
In particular, we will show that $u_{\alpha_\infty\pm\varepsilon}(t,x)$
converge, as $t\to+\infty$, to a pair of steady solutions, separated only
by an order~$O(\varepsilon)$-translation.
Note that the function $v(t,x)=e^{x}u_\alpha(t,x)$ solves
\begin{eqnarray}
\label{e4.1}
&&v_t-v_{xx}+\displaystyle\frac3{2t}(v_x-v)+e^{-x}v^2=0,\ \ x\leq t^{\gamma}\\
&&v(t,t^\gamma)=\alpha t^\gamma e^{-t^{2\gamma-1}/4}.\nonumber
\end{eqnarray}
Since we anticipate that the tail is going to dictate the behavior of $u_\alpha$,
we choose the translate of the wave
that matches exactly the behavior of $u_\alpha(t,x)$
at the boundary $x=t^\gamma$: set
\begin{equation}
\label{e4.3}
\psi(t,x)=e^{x}\phi_*(x+\zeta(t)).
\end{equation}
Recall that $\phi_*(x)$ is the traveling wave profile.
We look for a function $\zeta(t)$ in (\ref{e4.3}) such that
\begin{equation}
\label{e4.2}
\psi(t,t^\gamma)=v(t,t^\gamma).
\end{equation}
In view of the expansion \eqref{e2.20}, we should have, with some $\omega_0>0$:
$$
e^{-\zeta(t)}(t^\gamma+\zeta(t)+k)+O(e^{-\omega_0 t^\gamma})=
\alpha t^\gamma e^{-1/(4t^{1-2\gamma})},
$$
which implies, for $\gamma \in (0,1/2)$,
\[
\zeta(t)=-{\log} \alpha-({\log}\alpha-k)t^{-\gamma}+O(t^{-2\gamma}),
\]
and thus
\[
|\dot \zeta(t)|\le\displaystyle\frac{C}{t^{1+\gamma}}.
\]
The equation for the function $\psi$ is
$$
\psi_t-\psi_{xx}+\displaystyle\frac3{2t}(\psi_x-\psi)+e^{-x}\psi^2=
- \dot \zeta \psi+ \dot\zeta \psi_{x}+\displaystyle\frac3{2t}(\psi_x-\psi)=
O(\frac{x}t)=O(t^{-1+\gamma}),~~|x|<t^\gamma.
$$
In addition, the left side above is exponentially small for $x<-t^{\gamma}$
because of the exponential factor in (\ref{e4.3}).
Hence, the difference $
s(t,x)=v(t,x)-\psi(t,x)$
satisfies
\begin{eqnarray}
\label{e4.5}
&&s_t-s_{xx}+\displaystyle\frac3{2t}(s_x-s)+e^{-x}(v+\psi)s=O(t^{-1+\gamma}),\ \ \vert x\vert\leq t^{\gamma}\\
&&s(t,-t^{\gamma})=O(e^{-t^\gamma}),\ \ s(t,t^\gamma)=0.\nonumber
\end{eqnarray}
\begin{prop}
\label{p4.1}
For $\gamma \in (0,1/3)$, we have
\begin{equation}
\label{mar2210}
\displaystyle\lim_{t\to+\infty}\sup_{\vert x\vert \leq t^\gamma}\vert s(t,x)\vert=0.
\end{equation}
\end{prop}
\noindent {\bf Proof.}
The issue is whether the Dirichlet boundary conditions would be stronger
than the force in the right side of (\ref{e4.5}). Since
the principal Dirichlet eigenvalue for the Laplacian
in $(-t^{\gamma},t^{\gamma})$ is~$\frac{\pi^2}{4t^{2\gamma}}$,
investigating \eqref{e4.5} is, heuristically, equivalent to solving the ODE
\begin{equation}\label{mar2208}
f'(t)+(1-2\gamma){t^{-2\gamma}} f=\frac1{t^{1-\gamma}}.
\end{equation}
The coefficient $(1-2\gamma)$ is chosen simply for convenience and
can be replaced by another constant. The solution of (\ref{mar2208}) is
\[
f(t)=f(1)e^{(-t^{-2\gamma+1}+1) }+\int_1^t
s^{\gamma-1}e^{(-t^{-2\gamma+1}+s^{-2\gamma+1}) }ds.
\]
Note that $f(t)$ tends to 0 as $t\to+\infty$
a little faster than $t^{3\gamma-1}$ as soon as $\gamma<1/3$, so the analog
of~(\ref{mar2210}) holds for the solutions of (\ref{mar2208}).
With this idea in mind, we are going to look for a super-solution
to \eqref{e4.5}, in the form
\begin{equation}
\label{e4.7}
\overline s(t,x)=t^{-\lambda}\cos \left(\frac{x}{t^{\gamma+\varepsilon}} \right),
\end{equation}
where $\lambda$, $\gamma$ and $\varepsilon$ will be chosen to be small enough.
We now set $T_\varepsilon=1$ for convenience.
We have, for~$\vert x\vert\leq t^{\gamma}$:
\begin{eqnarray}
\label{e4.8}
&&\overline s(t,x)\sim t^{-\lambda},\
-\overline s_{xx}= t^{-(2\gamma+2\varepsilon)}\overline s(t,x),\\
&&\overline s_t=-\frac{\lambda}{t}
\bar s+g(t,x),~~|g(t,x)|\le\frac{C|x|}{t^{\lambda+\gamma+\varepsilon+1}}\le
\frac{C}{t^{1+\varepsilon}}\overline s(t,x),\nonumber
\end{eqnarray}
and
\begin{equation}
\label{e4.9}
\left \lvert \displaystyle\frac3{2t} (\overline s_x-\overline s)(t,x) \right \rvert \leq Ct^{-1}\overline s(t,x).
\end{equation}
Gathering \eqref{e4.8} and \eqref{e4.9} we infer the
existence of $q>0$ such that, for $t$ large enough:
$$
\biggl(\partial_t-\partial_{xx}+\displaystyle\frac3{2t}(\partial_x-1)\biggl)\overline s(t,x)
\geq qt^{-(2\gamma+2\varepsilon)}\overline s(t,x)\geq
\frac{q}2t^{-(2\gamma+2\varepsilon+\lambda)}\geq O(\frac1{t^{1-\gamma}}),
$$
as soon as $\varepsilon$ and $\lambda$ are small enough, since $\gamma \in (0,1/3)$. Because
the right side of \eqref{e4.5} does not depend on $\overline s$,
the inequality extends to all $t\ge 1$
by replacing $\overline s$ by $A\overline s$, with $A$ large enough,
and (\ref{mar2210}) follows.
Let us note that the term $e^{-x}(v + \psi)$ in (\ref{e4.5}), which results from the quadratic structure of the nonlinearity, is positive. For a more general nonlinearity $f(u)$ replacing $u - u^2$, the monotonicity of $g(u) = uf'(0) - f(u)$ may be used in an analogous way. ~$\Box$
\subsubsection*{Proof of Theorem \ref{t2.10}}
We are now ready to prove the theorem. Fix $\gamma \in (0,1/3)$, as required by Proposition \ref{p4.1}. Given $\varepsilon>0$, take
$T_\varepsilon$ as in Lemma~\ref{sep29-lem02}. Let $u_\alpha(t,x)$ be the solution of~\eqref{e4.4} for $t>T_\varepsilon$,
and the initial condition $u_\alpha(T_\varepsilon,x)=u(T_\varepsilon,x)$. Here,
$u(t,x)$ is the solution of the original problem \eqref{e2.0}. Taking $T_\varepsilon$ larger, if necessary, we may assume that $e^{-x_\gamma^2/(4t)} \geq 1/2$ for $t \geq T_\varepsilon$. It follows from Lemma~\ref{sep29-lem02} that
for any $t\geq T_\varepsilon$, we have
\[
u_{\alpha_\infty-2\varepsilon}(t,x)\leq u(t,x)\leq u_{\alpha_\infty+2\varepsilon}(t,x),
\]
for all $x\le t^\gamma$.
From Proposition \ref{p4.1}, we have
\begin{equation}\label{mar2302}
e^x\big[u_{\alpha_\infty\pm 2\varepsilon}(t,x)-\phi_*(x+\zeta_\pm(t))\big]=o(1),
\hbox{ as $t\to+\infty$},
\end{equation}
uniformly in $x\in(-t^\gamma,t^\gamma)$, with
$$
\zeta_\pm(t)=-(1-t^{-\gamma}){\log}(\alpha_\infty\pm 2\varepsilon)+O(t^{-2\gamma}).
$$
Because $\varepsilon>0$ is arbitrary, we have
$$
\lim_{t\to+\infty}\big(u(t,x)-\phi_*(x+x_\infty)\big)=0,
$$
with $x_\infty=-{\log}\alpha_\infty$, uniformly on compact sets.
Together with Lemma~\ref{lem:mar2302},
this concludes the proof of Theorem \ref{t2.10}.~$\Box$
\section{The diffusive scale $x\sim O(\sqrt{t})$
and the proof of Lemma~\ref{sep29-lem02}}\label{sec:5}
Our analysis starts with (\ref{e2.7}),
which we write as
\begin{equation}
\label{sep2102}
w_\tau+Lw
+\displaystyle\frac32e^{-\tau/2}w_\eta+e^{3\tau/2-\eta{\mathrm{exp}}(\tau/2)}w^2=0,\ \ \ \eta\in\mathbb{R},~~\tau>0.
\end{equation}
Here, the operator $L$ is defined as
\begin{equation}\label{sep2104}
Lv=-v_{\eta\eta}-\displaystyle\frac{\eta}{2}v_\eta-v.
\end{equation}
Its principal eigenfunction
on the half-line $\eta>0$ with the Dirichlet boundary condition at~$\eta=0$~is
\[
\phi_0(\eta)=\frac{\eta}{2} e^{-\eta^2/4},
\]
as $L\phi_0=0$. The operator $L$ has a discrete spectrum in $L^2(\mathbb{R}_+)$, weighted by
$e^{-\eta^2/8}$, its non-zero eigenvalues are $\lambda_k = k \ge 1$, and the
corresponding eigenfunctions are related via
\[
\phi_{k+1}=\phi_k''.
\]
The principal eigenfunction of the adjoint operator
\[
L^*\psi=-\psi_{\eta\eta}+\frac 12\partial_\eta(\eta \psi)-\psi
\]
is~$\psi_0(\eta)=\eta$. Thus, the solution of the unperturbed version of (\ref{sep2102})
on a half-line
\begin{equation}
\label{sep2106}
p_\tau+Lp=0,~~~\eta>0,~p(\tau,0)=0,
\end{equation}
satisfies
\begin{equation}\label{sep2108}
p(\tau,\eta)=\eta\ \frac{e^{-\eta^2/4}}{2\sqrt{\pi}}
\int_0^{+\infty}\xi v_0(\xi)d\xi+O(e^{-\tau})e^{-\eta^2/6},
\hbox{ as $\tau\to +\infty$},
\end{equation}
and our task is to generalize this asymptotics to the full problem
(\ref{sep2102}) on the whole line. The weight~$e^{-\eta^2/6}$ in (\ref{sep2108})
is, of course, by no means optimal. We will prove the following:
\begin{lem}
\label{l3.1bis}
Let $w(\tau,\eta)$ be the solution of \eqref{e2.7} on ${\mathbb R}$, with
the initial condition $w(0,\eta)=w_0(\eta)$
such that~$w_0(\eta)=0$ for all $\eta>M$, with some $M>0$, and $w_0(\eta)=O(e^{\eta})$
for $\eta<0$.
There exists~$\alpha_\infty>0$ and a function $h(\tau)$ such that
$
\displaystyle\lim_{\tau\to+\infty}h(\tau)=0,$
and such that we have, for any $\gamma'\in(0,1/2)$:
\begin{equation}\label{mar2312}
w(\tau,\eta)=(\alpha_\infty+h(\tau))\eta_+
e^{-\eta^2/4}+ R(\tau,\eta)e^{-\eta^2/6}, \quad \quad \eta \in {\mathbb R},
\end{equation}
with
$$|R(\tau,\eta)|\le C_{\gamma'}e^{-(1/2-\gamma')\tau},
$$
and where $\eta_+ = \max(0,\eta)$.
\end{lem}
Once again, the weight $e^{-\eta^2/6}$ is not optimal.
Lemma~\ref{sep29-lem02} is an immediate consequence of this result.
Indeed,
\[
u(t,x)=e^{-x}\sqrt{t}w(\log t,\frac{x}{\sqrt{t}}),
\]
hence Lemma~\ref{l3.1bis} implies, with $x_\gamma=t^\gamma$,
\begin{eqnarray}\label{mar2314}
&&e^{x_\gamma}u(t,x_\gamma)-\alpha_\infty x_\gamma e^{-x_\gamma^2/(4t)}
= \sqrt{t}w\Big(\log t,\frac{x_\gamma}{\sqrt{t}}\Big)
-\alpha_\infty x_\gamma e^{-x_\gamma^2/(4t)}\\
&&~~~~~~~~~~~~~~=
h(\log t)x_\gamma e^{-x_\gamma^2/(4t)}+\sqrt{t}
R\Big(\log t,\frac{x_\gamma}{\sqrt{t}}\Big)e^{-x_\gamma^2/(6t)}.\nonumber
\end{eqnarray}
We now take $T_\varepsilon$ so that $|h(\log t)|<\varepsilon/3$
for all $t>T_\varepsilon$. For the second term in the right side of (\ref{mar2314})
we write
\begin{eqnarray}\label{mar2316}
\big|R\big(\log t,\frac{x_\gamma}{\sqrt{t}}\big)\big|\sqrt{t}
e^{-x_\gamma^2/(6t)}\le C t^{\gamma'}e^{-x_\gamma^2/(6t)}\le
\varepsilon x_\gamma e^{-x_\gamma^2/(6t)}
\end{eqnarray}
for $t>T_\varepsilon$ sufficiently large, as soon as $\gamma'<\gamma$.
This proves (\ref{sep2902}). Thus, the proof of Lemma~\ref{sep29-lem02}
reduces to proving Lemma~\ref{l3.1bis}. We will prove the latter by constructing upper and lower barriers for
$w$ with the correct behavior.
\subsubsection*{The approximate Dirichlet boundary condition}
Let us come back to
why the solution of (\ref{sep2102}) must approximately satisfy the Dirichlet boundary condition
at $\eta=0$.
Recall that $w$ is related to the solution of the original KPP problem via
\[
w(\tau,\eta)=u(e^\tau,\eta e^{\tau/2})e^{-\tau/2+\eta e^{\tau/2}}.
\]
The trivial a priori bound $0<u(t,x)< 1$ implies that we have
\begin{equation}\label{sep2120}
0<w(\tau,\eta)<e^{-\tau/2+\eta e^{\tau/2}},~~\eta<0,
\end{equation}
and, in particular, we have
\begin{equation}\label{sep2110}
0<w(\tau,-e^{-(1/2-\gamma)\tau})\le e^{-e^{\gamma\tau}}.
\end{equation}
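Indeed, at $\eta=-e^{-(1/2-\gamma)\tau}$ we have $\eta e^{\tau/2}=-e^{\gamma\tau}$, so the exponent in
the right side of (\ref{sep2120}) equals $-\tau/2-e^{\gamma\tau}\le -e^{\gamma\tau}$.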
We also have
\[
w_\tau(\tau,\eta)= u_t(e^\tau,\eta e^{\tau/2})e^{\tau/2+\eta e^{\tau/2}}+\frac\eta2 u_x(e^\tau,\eta e^{\tau/2})e^{\eta e^{\tau/2}}
+(\frac\eta2 e^{\tau/2}-\frac{1}{2})u(e^\tau,\eta e^{\tau/2})e^{-\tau/2+\eta e^{\tau/2}},
\]
so that
\begin{eqnarray}\label{sep2114}
&&w_\tau(\tau,-e^{-(1/2-\gamma)\tau})=u_t(e^\tau,-e^{\gamma \tau})
e^{\tau/2- e^{\gamma\tau}}-
\frac 12
e^{-(1/2-\gamma)\tau}u_x(e^\tau,-e^{\gamma \tau})e^{-e^{\gamma\tau}}
\nonumber\\
&&~~~~~~~~~~~~~
~~~~~~~~~~~~-\frac12( e^{\gamma\tau}+1)u(e^\tau,- e^{\gamma\tau})e^{-\tau/2- e^{\gamma\tau}}\nonumber\\
&&~~~~~~~~~~~~~
~~~~~~~~~~~~=O(e^{-\gamma e^{\gamma\tau}}),
\end{eqnarray}
for $\gamma>0$ sufficiently small.
Thus, the solution of (\ref{sep2102}) satisfies
\begin{equation}\label{sep2116}
\begin{array}{rll}
0<w(\tau,-e^{-(1/2-\gamma)\tau})\le &e^{-e^{\gamma\tau}},\\
|w_\tau(\tau,-e^{-(1/2-\gamma)\tau})|\le &Ce^{-\gamma e^{\gamma\tau}},
\end{array}
\end{equation}
which we will use as an approximate Dirichlet boundary condition at $\eta=0$.
\subsubsection*{An upper barrier}
Consider the solution of
\begin{eqnarray}\label{sep2122}
&&\overline w_\tau+L\overline w+\displaystyle\frac32
e^{-\tau/2}\overline w_\eta=0,~~\tau>0,~~\eta>-e^{-(1/2-\gamma)\tau},\\
&& \overline w(\tau,-e^{-(1/2-\gamma)\tau})= e^{-e^{\gamma\tau}},\nonumber
\end{eqnarray}
with a compactly supported initial condition
$\bar w_0(\eta)=\bar w(0,\eta)$ chosen so that
$\bar w_0(\eta)\ge u(1,\eta)e^{\eta}.$ Here,~$\gamma\in(0,1/2)$ should be thought of as a small parameter.
It follows from (\ref{sep2116}) that $\overline w(\tau,\eta)$
is an upper barrier for $w(\tau,\eta)$.
That is, we have
\[
w(\tau,\eta)\le \bar w(\tau,\eta),
\hbox{ for all $\tau>0$ and $\eta>-e^{-(1/2-\gamma)\tau}$}.
\]
It is convenient to make a change of variables
\begin{equation}
\bar w(\tau,\eta)=\bar p(\tau,\eta+e^{-(1/2-\gamma)\tau})+e^{-e^{\gamma\tau}}g(\eta+e^{-(1/2-\gamma)\tau}),
\end{equation}
where $g(\eta)$ is a smooth monotonic function such that $g(\eta)=1$ for $0\le\eta<1$ and $g(\eta)=0$ for $\eta>2$.
The function $\bar p$ satisfies
\begin{equation}\label{sep2124}
\bar p_\tau+L\bar p+(\gamma e^{-(1/2-\gamma)\tau}+\frac32
e^{-\tau/2})\bar p_\eta=G(\tau,\eta)e^{- e^{\gamma\tau}},~~\eta>0,~~\bar p(\tau,0)=0,
\end{equation}
for $\tau > 0$, with a smooth function $G(\tau,\eta)$ supported in $0\le\eta\le 2$, and the initial condition
\[
\bar p_0(\eta)=\bar w_0(\eta-1)-e^{-1}g(\eta),
\]
which also is compactly supported.
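Let us record where the drift coefficient in (\ref{sep2124}) comes from: writing $a(\tau)=e^{-(1/2-\gamma)\tau}$
for the size of the shift, the time derivative of the shifted argument contributes
$\dot a(\tau)\bar p_\eta=-\bigl(\tfrac12-\gamma\bigr)a(\tau)\bar p_\eta$, while rewriting the transport term
$-\frac{\eta}{2}\partial_\eta$ in $L$ in the shifted variable contributes $+\frac{a(\tau)}{2}\bar p_\eta$;
the two contributions add up to $\gamma a(\tau)\bar p_\eta=\gamma e^{-(1/2-\gamma)\tau}\bar p_\eta$.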
We will allow (\ref{sep2124}) to run for a large time $T$, after which time
we can treat the right side and the last term in the left side of (\ref{sep2124})
as a small perturbation. A variant of Lemma 2.2 from~\cite{HNRR1} implies that $\bar p(T,\eta) e^{\eta^2/6} \in L^2({\mathbb R}_+)$ for all $T > 0$, as well as the following estimate:
\begin{lem}\label{l3.2bis}
Consider $\omega\in(0,1/2)$ and $G(\tau,\eta)$ smooth, bounded, and compactly supported in $\mathbb{R}_+$. Let $p(\tau,\eta)$ solve
\begin{equation}
\label{e3.2}
\vert p_\tau+Lp\vert\leq \varepsilon
e^{-\omega \tau}(\vert p_\eta\vert+\vert p\vert+G(\tau,\eta)),\ \
\tau>0,\,\eta>0,\ \ \ \ \ \ \ \ p(\tau,0)=0,
\end{equation}
with the initial condition $p_0(\eta)$ such that
$p_0(\eta) e^{\eta^2/6} \in L^2({\mathbb R}_+)$. There exists $\varepsilon_0>0$ and $C>0$ (depending on $p_0$) such that, for all~$0<\varepsilon<\varepsilon_0$, we have
\begin{equation}\label{sep2131}
p(\tau,\eta)=\eta\biggl(\frac{e^{-\eta^2/4}}{2\sqrt{\pi}}
\Big(\int_0^{+\infty}\xi p_0(\xi)d\xi+\varepsilon R_1(\tau,\eta)\Big)+
\varepsilon e^{-\omega\tau}R_2(\tau,\eta)e^{-\eta^2/6}+e^{-\tau}R_3(\tau,\eta)e^{-\eta^2/6}\biggr),
\end{equation}
where $\|R_{1,2,3}(\tau,\cdot)\|_{C^3}\leq C $ for all $\tau>0$.
\end{lem}
For any $\varepsilon > 0$, we may choose $T$ sufficiently large, and $\omega \in (0, 1/2-\gamma)$ so that
\begin{equation}
\label{peqnabs}
\vert \bar p_\tau+L \bar p\vert\leq \varepsilon
e^{-\omega (\tau - T)}(\vert \bar p_\eta\vert+|G(\tau,\eta)|),\ \
\tau>T,\,\eta>0,\ \ \ \ \ \ \ \ \bar p(\tau,0)=0.
\end{equation}
This follows from (\ref{sep2124}). Then, applying Lemma \ref{l3.2bis} for $\tau > T$, we have
\begin{equation}\label{sep2502}
\bar p(\tau,\eta)=\eta\biggl(\frac{e^{-\eta^2/4}}{2\sqrt{\pi}}
\Big(\int_0^{+\infty}\!\!\xi \bar p(T,\xi)d\xi+\varepsilon R_1(\tau,\eta)\Big)+
\varepsilon e^{-\omega(\tau-T)}R_2(\tau,\eta)e^{-\eta^2/6}
+e^{-(\tau-T)}R_3(\tau,\eta)e^{-\eta^2/6}\biggr).
\end{equation}
We claim that with a suitable choice of $\bar w_0$, the integral term in (\ref{sep2502}) is bounded from below:
\begin{equation}\label{sep2130}
\int_0^\infty \eta \bar p(\tau,\eta)d\eta\ge 1, \hbox{ for all $\tau>0$.}
\end{equation}
Indeed, multiplying (\ref{sep2124}) by $\eta$ and integrating gives
\begin{eqnarray}\label{sep2126}
\frac{d}{d\tau}\int_0^\infty \eta\bar p(\tau,\eta)d\eta=(\gamma e^{-(1/2-\gamma)\tau}+\frac 32 e^{-\tau/2})\int_0^\infty \bar p(\tau,\eta)d\eta+
e^{-e^{\gamma\tau}}\int G(\tau,\eta)\eta d\eta.
\end{eqnarray}
The function $G(\tau,\eta)$ need not have a sign, hence a priori we do not know that $\bar p(\tau,\eta)$
is positive everywhere. However,
it follows from (\ref{sep2124}) that the negative part of $\bar p$ is bounded
as
\[
\displaystyle\int_0^\infty \bar p(\tau,\eta)d\eta \geq - C_0,
\]
for all $\tau>0$, with a constant $C_0$ that does not depend on the values of $\bar w_0(\eta)$ on the interval $[2,\infty)$.
Thus, we deduce from (\ref{sep2126}) that for all $\tau>0$ we have
\begin{equation}\label{sep2128}
\int_0^\infty \eta \bar p(\tau,\eta)d\eta\ge \int_0^\infty \eta\bar w_0(\eta)d\eta-C_0',
\end{equation}
with, once again, $C_0'$ independent of $\bar w_0$. Therefore, after possibly increasing $\bar w_0$ we may ensure that~(\ref{sep2130}) holds.
It follows from (\ref{sep2130}) and (\ref{sep2502}) that there exists a
sequence $\tau_n\to+\infty$, $C>0$ and a function~$\overline W_\infty(\eta)$
such that
\begin{equation}\label{sep2504}
C^{-1} \eta e^{-\eta^2/4}\le \overline W_\infty(\eta)\le C\eta e^{-\eta^2/4},
\end{equation}
and
\begin{equation}
\lim_{n\to+\infty} e^{\eta^2/8}\vert\bar p(\tau_n,\eta)-\overline W_\infty(\eta)\vert=0,
\end{equation}
uniformly in $\eta$ on the half-line $\eta\ge 0$. The same bound for the function
$\bar w(\tau,\eta)$ itself follows:
\begin{equation}\label{sep2506}
\lim_{n\to+\infty} e^{\eta^2/8}\vert\bar w(\tau_n,\eta)-\overline W_\infty(\eta)\vert=0,
\end{equation}
also uniformly in $\eta$ on the half-line $\eta\ge 0$.
\subsubsection*{A lower barrier}
A lower barrier for $w(\tau,\eta)$ is devised as follows. First, note that the
upper barrier for $w(\tau,\eta)$ we have constructed above implies that
\[
e^{3\tau/2-\eta{\mathrm{exp}}(\tau/2)}w(\tau,\eta)\leq
C_\gamma e^{- {\mathrm{exp}(\gamma\tau/2)}},
\]
as soon as
\[
\eta\geq e^{-(1/2-\gamma)\tau},
\]
with $\gamma\in(0,1/2)$, provided that $C_\gamma >0$ is chosen sufficiently large.
Thus, a lower barrier $\underline w(\tau,\eta)$ can be defined as the solution of
\begin{equation}\label{sep2508}
\underline w_\tau+L\underline w+\displaystyle\frac32
e^{-\tau/2}\underline w_\eta+C_\gamma e^{-{\mathrm{exp}(\gamma\tau/2)}}\underline w=0,\ \ \ \underline
w(\tau,e^{-(1/2-\gamma)\tau})=0,~~~\eta> e^{-(1/2-\gamma)\tau},
\end{equation}
and with an initial condition $\underline w_0(\eta)\le w_0(\eta)$. This time it is convenient to make the change of variables
$$
\underline w(\tau,\eta)={\underline z}(\tau,\eta-e^{-(1/2-\gamma)\tau})
$$
so that
\begin{equation}
\label{april2116}
\underline z_\tau+L\underline z+(-\gamma e^{-(1/2-\gamma)\tau}+\frac32
e^{-\tau/2})\underline z_\eta + C_\gamma e^{-{\mathrm{exp}(\gamma\tau/2)}} \underline z =0,~~\eta>0,~~\underline z(\tau,0)=0.
\end{equation}
We could now try to use an abstract stable manifold theorem to prove that
\begin{equation}\label{may202}
\underline I(\tau):=\displaystyle\int_{0}^\infty\eta\underline z(\tau,\eta)d\eta\ge c_0 > 0,~~\hbox{ for all $\tau>0$}.
\end{equation}
That is, $\underline I(\tau)$
remains uniformly bounded away from 0. However, to keep this paper self-contained, we give a direct proof of (\ref{may202}). We look for a sub-solution to (\ref{april2116}) in the form
\begin{equation}
\label{e5.3000}
\underline p(\tau,\eta)= \left( \zeta(\tau)\phi_0(\eta)-q(\tau)\eta e^{-\eta^2/8} \right) e^{-F(\tau)},
\end{equation}
where
\[
F(\tau) = \int_0^\tau C_\gamma e^{- \exp(\gamma s/2)} \,ds,
\]
and with the functions $\zeta(\tau)$ and $q(\tau)$ satisfying
\begin{equation}\label{may204}
\zeta(\tau)\geq\zeta_0>0,\quad \dot\zeta(\tau)<0,\quad q(\tau)>0,\quad q(\tau)=O(e^{-\tau/4}).
\end{equation}
In other words, we wish to devise $\underline p(\tau,\eta)$ as in (\ref{e5.3000})-(\ref{may204})
such that
\begin{equation}\label{may208}
\underline p(0,\eta)\le \underline z(0,\eta)=w_0(\eta+1),
\end{equation}
and
\begin{equation}\label{may206}
{\cal L}(\tau)\underline p\leq0,
\end{equation}
with
$$
{\cal L}(\tau)\underline p= \underline p_\tau+L\underline p+(-\gamma e^{-(1/2-\gamma)\tau}+\frac32
e^{-\tau/2})\underline p_\eta.
$$
Notice that the choice of $F(\tau)$ in (\ref{e5.3000}) has eliminated a low order term involving $C_\gamma e^{- \exp(\gamma \tau/2)}$. For convenience, let us define
\[
h(\tau) = -\gamma e^{-(1/2-\gamma)\tau}+\frac{3}{2}
e^{-\tau/2},
\]
which appears in (\ref{april2116}). Because $L\phi_0=0$ and because
$$
L(\eta e^{-\eta^2/8}) = \eta L e^{-\eta^2/8} =(\frac{\eta^2}{16}-\frac{3}{4})\eta e^{-\eta^2/8},
$$
we find that
\begin{eqnarray}\label{may210}
{\cal L}(\tau)\underline p=\dot\zeta\phi_0+\zeta h(\tau) \phi_0' -\biggl(\dot q+(\frac{\eta^2}{16}-\frac{3}{4})q\biggr)\eta e^{-\eta^2/8} + q\frac{\eta^2}{4} e^{-\eta^2/8} h(\tau) - q e^{-\eta^2/8} h(\tau). \nonumber
\end{eqnarray}
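The identity for $L(\eta e^{-\eta^2/8})$ can be verified directly: with $g(\eta)=e^{-\eta^2/8}$ we have
$g'=-\frac{\eta}{4}g$ and $g''=\bigl(\frac{\eta^2}{16}-\frac14\bigr)g$, so that
\[
L(\eta g)=-\bigl(2g'+\eta g''\bigr)-\frac{\eta}{2}\bigl(g+\eta g'\bigr)-\eta g
=\Bigl(\frac{\eta^2}{16}-\frac{3}{4}\Bigr)\eta g.
\]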
Let us write this as
\begin{eqnarray}\label{Lequivalent}
\eta^{-1} e^{\eta^2/8} {\cal L}(\tau)\underline p=\dot \zeta \eta^{-1} \phi_0e^{\eta^2/8} + \eta^{-1} h(\tau)\left( \zeta e^{\eta^2/8} \phi_0' + q \left(\frac{\eta^2}{4} - 1 \right)\right) -\biggl(\dot q+(\frac{\eta^2}{16}-\frac{3}{4})q\biggr) .
\end{eqnarray}
Our goal is to choose $\zeta(\tau)$ and $q(\tau)$ such that (\ref{may204}) holds and the right side of (\ref{Lequivalent}) is non-positive after a certain time $\tau_0$, possibly quite large. However, and this is an important point, this time $\tau_0$ will not depend on the initial condition
$w_0(\eta)$.
Let us restrict the small parameter $\gamma$ to the interval $(0,1/4)$. Observe that if $\tau_0 > 0$ is sufficiently large,
then $h(\tau) < 0$ and $|h(\tau)| \leq e^{-\tau/4}$ for all $\tau \geq \tau_0$. As $\phi_0(\eta)$ is a constant multiple of $\eta e^{-\eta^2/4}$, note that
in (\ref{Lequivalent}) both~$\phi_0'(\eta)e^{\eta^2/8}$ and~$\phi_0(\eta)e^{\eta^2/8}$
are bounded functions. In particular, if $\tau_0$ is large enough then
\[
|\phi_0' e^{\eta^2/8} h(\tau)| \leq e^{-\tau/4}
\]
for all $\tau \geq \tau_0$, $ \eta \geq 0$.
Note also that for all $\eta \geq \eta_1 = \sqrt{28}$ we have
\begin{equation}
\label{e5.3001}
\frac{\eta^2}{16}- \frac{3}{4} \geq 1 \quad \quad \text{and} \quad \quad \frac{\eta^2}{4} - 1 \geq 0.
\end{equation}
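Both inequalities in (\ref{e5.3001}) follow from $\eta^2\ge 28$: then $\eta^2/16\ge 7/4$ and $\eta^2/4\ge 7$.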
Therefore, on the interval $\eta \in [\eta_1,\infty)$ and for $\tau \geq \tau_0$, (\ref{Lequivalent}) is bounded by
\begin{eqnarray}
\eta^{-1} e^{\eta^2/8} {\cal L}(\tau)\underline p & \leq & \eta^{-1} h(\tau) \zeta e^{\eta^2/8} \phi_0' -\left(\dot q+q\right) \leq \zeta(\tau) e^{-\tau/4} -\left(\dot q+q\right), \nonumber
\end{eqnarray}
assuming $q(\tau) > 0$ and $\dot \zeta < 0$. Hence, if $q(\tau)$ and $\zeta(\tau)$ are chosen to satisfy
the differential inequality
\begin{equation}
\label{e5.3006}
\dot q+q- e^{-\tau/4}\zeta\geq0,\quad \tau\geq\tau_0,
\end{equation}
then we will have
\begin{equation}\label{may212}
\hbox{${\cal L}(\tau)\underline p\leq 0$ for $\tau\geq\tau_0$ and $\eta\geq\eta_1$,}
\end{equation}
provided that $\dot\zeta\leq0$, as presumed in (\ref{may204}). Still
assuming $\dot\zeta\leq 0$ on $(\tau_0,+\infty)$, a sufficient condition for (\ref{e5.3006}) to be satisfied is:
$$
\dot q+q\geq e^{-\tau/4}\zeta(\tau_0),\quad \tau\geq\tau_0.
$$
Hence, we choose
\begin{equation}
\label{e5.3007}
q(\tau)= e^{-(\tau-\tau_0)}+\frac{4 }3e^{-\tau /4}\zeta(\tau_0).
\end{equation}
Note that $q(\tau)$ satisfies the assumptions on $q$ in (\ref{may204}).
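Moreover, differentiating (\ref{e5.3007}) gives $\dot q(\tau)=-e^{-(\tau-\tau_0)}-\frac13 e^{-\tau /4}\zeta(\tau_0)$,
so that $\dot q+q=e^{-\tau/4}\zeta(\tau_0)$ and the preceding differential inequality is satisfied (with equality).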
Let us now deal with the range $\eta\in[0,\eta_1]$.
The function $\eta^{-1}\phi_0(\eta)$ is bounded on ${\mathbb R}$ and it is bounded away from $0$ on $[0,\eta_1]$. Define
\[
\varepsilon_1 = \min_{\eta \in [0,\eta_1]} \eta^{-1} \phi_0(\eta) e^{\eta^2/8} > 0.
\]
As $h(\tau)<0$ for $\tau\ge\tau_0$,
on the interval $[0,\eta_1]$, we can bound (\ref{Lequivalent}) by
\begin{eqnarray}\label{Lequivalent2}
\eta^{-1} e^{\eta^2/8} {\cal L}(\tau)\underline p & \leq & \varepsilon_1 \dot \zeta(\tau) + \eta^{-1} h(\tau)\left( \zeta e^{\eta^2/8} \phi_0' - q \right) -\biggl(\dot q -\frac{3}{4} q\biggr) .
\end{eqnarray}
For $\eta \in [1,\eta_1]$, where $\eta^{-1} < 1$, we have
\begin{eqnarray}\label{Lequivalent3}
\eta^{-1} e^{\eta^2/8} {\cal L}(\tau)\underline p & \leq & \varepsilon_1 \dot \zeta(\tau) + e^{-\tau/4} ( \zeta + q) -\biggl(\dot q -\frac{3}{4} q\biggr) .
\end{eqnarray}
To make this non-positive, we choose $\zeta$ to satisfy
\begin{eqnarray}
\varepsilon_1 \dot \zeta(\tau) \leq \dot q -\frac{3}{4} q - e^{-\tau/4} ( \zeta + q) \label{zetareq1}
= e^{-\tau/4} \zeta(\tau_0) -\frac{7}{4} q(\tau) - e^{-\tau/4} ( \zeta(\tau) + q(\tau)),
\end{eqnarray}
where the last equality comes from (\ref{e5.3007}). Assuming $\dot \zeta < 0$, we have $\zeta(\tau) < \zeta(\tau_0)$, so a sufficient condition for (\ref{zetareq1}) to hold when $\tau \geq \tau_0$ is simply
\begin{eqnarray} \label{zetareq2}
\varepsilon_1 \dot \zeta(\tau) & \leq & -3 q(\tau).
\end{eqnarray}
For $\eta$ near $0$, the dominant term in (\ref{Lequivalent2}) is $\eta^{-1} h(\tau)\left( \zeta e^{\eta^2/8} \phi_0' - q \right)$. Define
\[
\varepsilon_2 = \min_{\eta \in [0,1]} \phi_0'(\eta)e^{\eta^2/8} > 0.
\]
Therefore, if we can arrange that $\zeta(\tau) > q(\tau)/\varepsilon_2$, then for $\eta \in [0,1]$, we have $\zeta e^{\eta^2/8} \phi_0' - q \geq 0$, so
\[
\eta^{-1} h(\tau)\left( \zeta e^{\eta^2/8} \phi_0' - q \right) \leq 0.
\]
In this case,
\begin{eqnarray}\label{Lequivalent4}
\eta^{-1} e^{\eta^2/8} {\cal L}(\tau)\underline p & \leq & \varepsilon_1 \dot \zeta(\tau) -\biggl(\dot q -\frac{3}{4} q\biggr),
\end{eqnarray}
which is non-positive for $\tau \geq \tau_0$, due to (\ref{zetareq1}). In summary, we will have ${\cal L}(\tau)\underline p \leq 0$
in the interval~$\eta \in [0,\eta_1]$ and $\tau \geq \tau_0$ if $\zeta$ satisfies (\ref{zetareq2}) and $\zeta(\tau) > q(\tau)/\varepsilon_2$ for $\tau \geq \tau_0$. In view of this, we let $\zeta(\tau)$ have the form
\[
\zeta(\tau) = a_2 + a_3 e^{-(\tau - \tau_0)/4}.
\]
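In particular, for $a_2,a_3>0$ we have $\dot\zeta(\tau)=-\frac{a_3}{4}e^{-(\tau-\tau_0)/4}<0$ and
$\zeta(\tau)\ge a_2>0$, so the requirements on $\zeta$ in (\ref{may204}) are satisfied with $\zeta_0=a_2$.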
Thus, (\ref{zetareq2}) holds if
\[
- \frac{\varepsilon_1 a_3}{4} e^{-(\tau - \tau_0)/4} \leq - 3q = - 3e^{-(\tau - \tau_0)} - 4 e^{- \tau/4} (a_2 + a_3), \quad \tau \geq \tau_0.
\]
Hence it suffices that
\[
\frac{\varepsilon_1 a_3}{4} \geq 3 + 4e^{-\tau_0/4} (a_2 + a_3)
\]
holds; this may be achieved with $a_2, a_3 > 0$ if $\tau_0$ is large enough. Then we may take $a_2$ large enough so that $\zeta(\tau) > q(\tau)/\varepsilon_2$ also holds for $\tau \geq \tau_0$; this condition translates to:
\[
a_2 + a_3 e^{-(\tau - \tau_0)/4} \geq \frac{1}{\varepsilon_2} \left(e^{-(\tau - \tau_0)} + \frac{4}{3} e^{-\tau/4} (a_2 + a_3)\right), \quad \tau \geq \tau_0.
\]
This also is attainable with $a_2 > \frac{1}{\varepsilon_2}$ and $a_3 > 0$ if $\tau_0$ is chosen large enough. This completes the construction of the subsolution $\underline p(\tau,\eta)$ in (\ref{e5.3000}).
\commentout{
So, even if it means enlarging $C_\gamma$ again, independently of $\tau_0$, we will satisfy ${\cal L}(\tau)\underline p\leq0$ on that interval as
soon as
$$
\dot\zeta+C_\gamma e^{-\tau/4}\zeta+C_\gamma\vert\dot q\vert+C_\gamma q\leq0.
$$
Given the choice of $q(\tau)$ in (\ref{e5.3007}), another enlargement of $C_\gamma$ yields the following sufficient condition:
\begin{equation}\label{may216}
\dot\zeta+C_\gamma e^{-\tau/4}\zeta\leq -C_\gamma\biggl(q(\tau_0)e^{-(\tau-\tau_0)/4}+ e^{-\tau/4}\zeta(\tau_0)\biggl),
\end{equation}
or, equivalently:
$$
\frac{d}{d\tau}\biggl[{\mathrm{exp}}\biggl(-C_\gamma\int_{\tau_0}^\tau e^{-\sigma/4}d\sigma\biggl)\zeta\biggl]
\leq-C_\gamma {\mathrm{exp}}\biggl(-C_\gamma\int_{\tau_0}^\tau e^{-\sigma/4}d\sigma\biggl)
\biggl(q(\tau_0)e^{-(\tau-\tau_0)/4}+e^{-\tau/4} \zeta(\tau_0)\biggl).
$$
We infer that the sub-solution condition is satisfied as soon as
$$
\frac{d}{d\tau}\biggl[{\mathrm{exp}}\biggl(-C_\gamma\int_{\tau_0}^\tau e^{-\sigma/4}d\sigma\biggl)\zeta\biggl]
=-C_\gamma
\biggl(q(\tau_0)e^{-(\tau-\tau_0)/4}+e^{-\tau/4} \zeta(\tau_0)\biggl),
$$
or, in other words:
$$
{\mathrm{exp}}\biggl(-4C_\gamma(e^{-\tau_0/4}-e^{-\tau/4})\biggl)\zeta(\tau)=\biggl(1-4e^{-\tau_0/4}C_\gamma (1-e^{-(\tau-\tau_0)/4})\biggl)\zeta(\tau_0)-
4C_\gamma(1-e^{-(\tau-\tau_0)/4})q(\tau_0).
$$
Note that $\dot\zeta(\tau)\le 0$ for all $\tau\ge\tau_0$ because of (\ref{may216}).
In order to complete the argument, we must ensure that $\zeta(\tau)$ remains positive for all $\tau\geq\tau_0$, and a set of sufficient conditions is
\begin{equation}
\label{e5.3009}
1-4C_\gamma e^{-4C_\gamma}e^{-\tau_0/4}\geq\frac12,\quad \quad \zeta(\tau_0)>8C_\gamma q(\tau_0).
\end{equation}
The first condition in (\ref{e5.3009}) determines the time $\tau_0$ -- note that it does not depend on the initial condition $w_0(\eta)$ in any way.
Under \eqref{e5.3009}, the function $\zeta(\tau)$ remains bounded from above and bounded away from zero:
\begin{equation}
\label{e5.3010}
\frac12(\zeta(\tau_0)-8C_\gamma q(\tau_0))\leq\zeta(\tau)\leq\zeta(\tau_0).
\end{equation}
}
Let us come back to our subsolution $\underline z(\tau,\eta)$. From the strong maximum principle,
we know that~$\underline z(\tau_0,\eta)>0$ and $\partial_\eta\underline z(\tau_0,0)>0$. Hence, there is $\lambda_0>0$ such that
\[
\underline z(\tau_0,\eta)\geq\lambda_0\underline p(\tau_0,\eta),
\]
where $\underline p$ is given by (\ref{e5.3000}) with $\zeta$ and $q$ defined above, and we have for $\tau\geq\tau_0$:
$$
\underline z(\tau,\eta)\geq\lambda_0\underline p(\tau,\eta).
$$
This, by (\ref{may204}), bounds the quantity $\underline I(\tau)$ uniformly from below, so that (\ref{may202}) holds with a constant~$c_0>0$
that depends on the initial condition $w_0$.
Therefore, just as in the study of the upper barrier, we obtain the uniform convergence of (possibly a subsequence of)
$\underline w(\tau_n,\cdot)$ on the half-line $\eta\ge e^{-(1/2-\gamma)\tau}$
to a function~$\underline W_\infty(\eta)$ which satisfies
\begin{equation}\label{sep2512}
C^{-1}\eta e^{-\eta^2/4}\le \underline W_\infty(\eta)\le C \eta e^{-\eta^2/4},
\end{equation}
and such that
\begin{equation}\label{sep2510}
\lim_{n\to+\infty} e^{\eta^2/8}\vert\underline w(\tau_n,\eta)-\underline W_\infty(\eta)\vert=0, \quad \quad \eta > 0.
\end{equation}
\subsubsection*{Convergence of $w(\tau,\eta)$: proof of Lemma \ref{l3.1bis}}
Let $X$ be the space of bounded uniformly continuous functions $u(\eta)$ such that
$e^{\eta^2/8}u(\eta)$ is bounded and uniformly continuous on $\mathbb{R}_+$.
We deduce from the convergence of the upper and
lower barriers for~$w(\tau,\eta)$ (and ensuing uniform bounds for $w$) that there
exists a sequence $\tau_n\to+\infty$ such that~$w(\tau_n,\cdot)$ itself
converges to a
limit $W_\infty\in X$, such that $W_\infty\equiv0$ on $\mathbb{R}_-$, and $W_\infty(\eta)>0$
for all~$\eta>0$. Our next step is to
bootstrap the convergence along a sub-sequence, and show that
the limit of $w(\tau,\eta)$ as $\tau\to+\infty$ exists in the space $X$. First, observe that the
above convergence implies that the shifted functions $
w_n(\tau,\eta)=w(\tau+\tau_n,\eta)$
converge in $X$, uniformly on compact time intervals,
as $n\to+\infty$ to the solution $w_\infty(\tau,\eta)$ of the linear problem
\begin{eqnarray}\label{sep2520}
&&(\partial_\tau+L)w_\infty=0,~~\eta>0,\\
&&w_\infty(\tau,0)=0,\nonumber\\
&&w_\infty(0,\eta)=W_\infty(\eta).\nonumber
\end{eqnarray}
In addition, there exists $\alpha_\infty>0$ such that $w_\infty(\tau,\eta)$
converges to
$\bar\psi(\eta)=\alpha_\infty\eta e^{-\eta^2/4}$,
in the topology of $X$ as $\tau\to+\infty$. Thus, for any $\varepsilon>0$ we may
choose~$T_\varepsilon$ large enough so that
\begin{equation}\label{sep2524}
|w_\infty(\tau,\eta)-\alpha_\infty\eta e^{-\eta^2/4}|\le \varepsilon \eta e^{-\eta^2/8}
\hbox{ for all $\tau>T_\varepsilon$, and $\eta>0$}.
\end{equation}
Given $T_\varepsilon$ we can find $N_\varepsilon$ sufficiently large so that
\begin{equation}\label{sep2526}
|w(T_\varepsilon+\tau_n,\eta+e^{-(1/2-\gamma)T_\varepsilon})-w_\infty(T_\varepsilon,\eta)|\le \varepsilon \eta e^{-\eta^2/8},
\hbox{ for all $n>N_\varepsilon$.}
\end{equation}
In particular, we have
\begin{equation}\label{sep2528}
\alpha_\infty\eta e^{-\eta^2/4} -2\varepsilon\eta e^{-\eta^2/8}
\leq w(\tau_{N_\varepsilon}+T_\varepsilon,\eta+e^{-(1/2-\gamma)T_\varepsilon})
\leq \alpha_\infty \eta e^{-\eta^2/4}+2\varepsilon\eta e^{-\eta^2/8}.
\end{equation}
We may now construct the upper and lower barriers for the function $w(\tau+\tau_{N_\varepsilon}+T_\varepsilon,\eta+e^{-(1/2-\gamma)T_\varepsilon})$, exactly as we have done before.
It follows, once again, from Lemma~\ref{l3.2bis} applied to these barriers, that any limit point $\phi_\infty$
of $w(\tau,\cdot)$ in $X$ as $\tau\to+\infty$ satisfies
\begin{equation}\label{sep2529}
(\alpha_\infty-C\varepsilon)\eta e^{-\eta^2/4}
\leq \phi_\infty(\eta)\leq (\alpha_\infty+C\varepsilon)\eta e^{-\eta^2/4}.
\end{equation}
As $\varepsilon>0$ is arbitrary, we conclude that $w(\tau,\eta)$ converges in $X$ as $\tau\to+\infty$
to $\bar\psi(\eta) = \alpha_\infty \eta e^{-\eta^2/4}$. Taking into account Lemma~\ref{l3.2bis} once again, applied to the upper and lower barriers for $w(\tau,\eta)$
constructed starting from any time $\tau>0$, we have proved Lemma \ref{l3.1bis}, which implies Lemma \ref{sep29-lem02}.
\section{Introduction}
Spectroscopic diagnostics of cosmic sources rely on accurate charge state distribution (CSD) calculations \citep{Brickhouse:AIP:1996, Landi:AA:1999}.
In stellar coronae, supernova remnants, galaxies, the intracluster medium of galaxy clusters, and other collisionally ionized plasmas, the balance between electron impact ionization (EII) and electron-ion recombination determines the CSD \citep{Bryans:ApJ:2009}. Thus, accurate EII data are needed to derive the CSD. For most objects, the temperature varies slowly enough that only electron impact single ionization (EISI) matters \citep{Tendler:PhysLett:1984}; but when there is rapid heating, electron impact double ionization (EIDI) can be important \citep{Muller:PhysLett:1986}.
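In collisional ionization equilibrium this balance can be written schematically, for adjacent charge states $q$ and $q+1$ of a given element and neglecting multiple-ionization channels, as
\[
n_{q}\,\alpha_{\mathrm{I},q}(T_{\mathrm{e}}) = n_{q+1}\,\alpha_{\mathrm{R},q+1}(T_{\mathrm{e}}),
\]
where $n_q$ denotes the density of charge state $q$ and $\alpha_{\mathrm{I}}$ and $\alpha_{\mathrm{R}}$ are the total ionization and recombination rate coefficients.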
Most EII data come from theoretical calculations, as it is not possible to measure ionization for every ion. Experiments serve to benchmark these theoretical calculations. However, a major limitation of most existing EII measurements is that the ion beams used contained an unknown population of metastable ions. As the cross section for ionization from a metastable level generally differs from that for the ground level, the results of such experiments can be ambiguous and do not provide a clear test for theory.
Here we report EII measurements for Al-like $\fethirteen$, Ne-like $\fesixteen$, and F-like $\feseventeen$. Of these three ion species, previous measurements exist only for $\fethirteen$ \citep{Gregory:PRA:1987}. However, those measurements were performed using a crossed beams experiment and suffer from an unknown metastable fraction. We used an ion storage ring to avoid this problem. The ions were recirculated in the ring for several seconds before collecting data. This allowed any metastable levels to relax radiatively to the ground state. Our data thereby provide an unambiguous test for theoretical models.
The EISI cross section for $\fethirteen$ was measured from about 300~eV to 3100~eV, where the direct ionization channels are
\begin{equation}
\mathrm{e^{-} + Fe^{13+}} (2s^2\, 2p^6\, 3s^2\, 3p) \rightarrow \left\{ \begin{array}{l}
\mathrm{Fe^{14+}} (2s^2\, 2p^6\, 3s^2) + 2\mathrm{e^{-}} \\
\mathrm{Fe^{14+}} (2s^2\, 2p^6\, 3s\, 3p) + 2\mathrm{e^{-}} \\
\mathrm{Fe^{14+}} (2s^2\, 2p^5\, 3s^2\, 3p) + 2\mathrm{e^{-}} \\
\mathrm{Fe^{14+}} (2s\, 2p^6\, 3s^2\, 3p) + 2\mathrm{e^{-}}
\end{array} \right. .
\label{eq:transitions13}
\end{equation}
The corresponding thresholds for $3p$ and $3s$ ionization are 392.2~eV and 421.2~eV, respectively \citep{NIST:2012}. Direct ionization of a $2p$ electron is possible above $1127.3$~eV \citep{NIST:2012} and ionization of the $2s$ is possible above about $1270$~eV \citep{Kaastra:AAS:1993}. However, following direct ionization of a principal quantum number $n=2$ electron, the resulting excited state is expected to stabilize via autoionization with a probability greater than 90\%. This produces double ionization rather than single ionization \citep{Kaastra:AAS:1993}. Ionization can also occur through excitation-autoionization (EA). For example, excitation of a $3s$ electron to a doubly excited level lying in the continuum is possible starting from the 392.16~eV ionization threshold. The system can then autoionize, resulting in EISI. EA via $n=2$ to $n=3$ excitations is predicted to occur starting at $\sim 700$~eV, and via $2\rightarrow4$ excitations at $\sim 900$~eV \citep{Pindzola:PRA:1986,Arnaud:ApJ:1992,Dere:AA:2007}.
The EIDI cross section for $\fethirteen$ forming Fe$^{15+}$ was investigated from just below the threshold for direct double ionization at 848.4~eV up to 3100~eV. For highly charged ions the dominant double ionization process is expected to be single ionization of an electron in the $n=2$ level, forming a state that stabilizes by emission of a second electron \citep{Muller:PRL:1980}. The threshold for this ionization-autoionization process is $1127.3$~eV \citep{NIST:2012}.
The EISI cross sections for $\fesixteen$ and $\feseventeen$ were measured from $1100$~eV to $3200$~eV, where the direct ionization channels are
\begin{equation}
\mathrm{e^{-} + Fe^{16+}} (2s^2\, 2p^6) \rightarrow \left\{ \begin{array}{l}
\mathrm{Fe^{17+}} (2s^2\, 2p^5) + 2\mathrm{e^{-}} \\
\mathrm{Fe^{17+}} (2s\, 2p^6) + 2\mathrm{e^{-}}
\end{array} \right. .
\label{eq:transitions16}
\end{equation}
The ionization thresholds for these channels are 1262.7~eV for ionization of a $2p$ electron and 1394.7~eV for ionization of a $2s$ electron \citep{NIST:2012}. EA may occur through excitation of the $2s$ electron beginning from the ionization threshold of 1262.7~eV.
Similarly, for $\feseventeen$ the direct ionization channels are
\begin{equation}
\mathrm{e^{-} + Fe^{17+}} (2s^2\, 2p^5) \rightarrow \left\{ \begin{array}{l}
\mathrm{Fe^{18+}} (2s^2\, 2p^4) + 2\mathrm{e^{-}} \\
\mathrm{Fe^{18+}} (2s\, 2p^5) + 2\mathrm{e^{-}}
\end{array} \right. .
\label{eq:transitions17}
\end{equation}
These have ionization thresholds of 1357.8~eV for the $2p$ electron and 1472.2~eV for the $2s$ electron. Here again, EA from excitation of a $2s$ electron is possible beginning at the ionization threshold of 1357.8~eV.
\section{Experimental Method and Analysis}\label{sec:exan}
Cross section measurements were performed using the TSR heavy ion storage ring. This facility is located at the Max-Planck-Institut f\"{u}r Kernphysik in Heidelberg, Germany. The procedures used here are basically the same as those described by \citet{Linkemann:PRL:1995} and \citet{Hahn:ApJ:2010, Hahn:ApJ:2011, Hahn:ApJ:2011a, Hahn:PRA:2012, Hahn:ApJ:2012}. Below we outline the method and provide some details pertinent to the present measurements.
First, a beam of iron ions was introduced into TSR. The ion beam energies were $141.0$~MeV for $\fethirteen$, $176.9$~MeV for $\fesixteen$, and $183.8$~MeV for $\feseventeen$. In each case the isotope $^{56}$Fe was used for the experiment. Two electron beams in the ring, dubbed the Cooler and the Target, were merged with the ions. Each electron beam is located in a different section of the ring. Initially, the electron beams were used to cool the ion beam. That is, the energy of both electron beams was fixed to one where the electron velocity closely matched the average ion velocity, allowing elastic electron-ion collisions to reduce the energy spread of the ion beam \citep{Poth:PhysRep:1990}. This initial cooling period lasted three seconds.
During cooling, metastable states in the ion beam radiatively decayed. For $\fethirteen$ there are two metastable levels with relatively long lifetimes. These are the $3s^2\,3p\,^{2}P_{3/2}$ level, whose decay to the ground state forms the well-known coronal green line \citep{Edlen:ZA:1943, Esser:JGR:1995}, and the $3s\,3p\,3d\,^{4}F_{9/2}$ level. The $3s^2\,3p\,^{2}P_{3/2}$ lifetime has been measured experimentally to be about 16.73~ms \citep{Beiersdorfer:ApJ:2003, Brenner:PRA:2007}. The lifetime of the $3s\,3p\,3d\,^{4}F_{9/2}$ level has been calculated theoretically to be 17.7~ms \citep{Trabert:PhysScr:1993}. The similar lifetimes of these levels have been a source of systematic uncertainty in lifetime measurements at TSR \citep{Trabert:JPhysB:2002, Trabert:JPhysB:2009, Trabert:JPhysB:2010}.
For $\fesixteen$ the longest lived metastable levels are the $2s^2\,2p^5\,3s\,^{3}P_{0,2}$, which have predicted lifetimes of $63.3$~$\mathrm{\mu s}$ and $4.5$~$\mathrm{\mu s}$, respectively \citep{Liang:AA:2010, Landi:ApJ:2012}. The longest lived $\feseventeen$ metastable level is the $2s^2\,2p^5\,^{2}P_{1/2}$ level, which has a predicted lifetime of 51.5~$\mathrm{\mu s}$ \citep{DelZanna:AA:2006, Landi:ApJ:2012}. Since all these lifetimes are much shorter than the $3$~s cooling time, the metastable population for each ion is expected to be negligible during measurement.
After cooling, the Target was maintained at the cooling energy while the Cooler electron beam energy was varied so as to enable electron-ion collision studies at different energies. Ionized products from collisions in the Cooler section were diverted by a downstream dipole magnet onto a particle detector. The measurement energy was stepped through a range of energies. In between each measurement step the ionization count rate was recorded for a fixed reference energy. This allowed us to assess the rate for background stripping off the residual gas. Ideally, this reference rate should be measured below the EII threshold, but at high measurement energies we were limited by the dynamic range of the electron beam power supply. Hence, for these energies the reference energy was set to a point where the ionization cross section was already measured in lower energy scans, allowing the background rate to be derived. The energy range for which the reference point was set below threshold was $E \leq 1050$~eV for $\fethirteen$ EISI and $E \leq 1500$~eV for EIDI, $E \leq 1980$~eV for $\fesixteen$ EISI, and $E \leq 1950$~eV for $\feseventeen$ EISI.
The EII cross section $\sigma_{\mathrm{I}}$ was obtained from the difference between the measured count rate and the background signal, normalized by the stored ion number and the electron density \citep{Hahn:ApJ:2011}. The uncertainty introduced by the detector efficiency is about 3\% \citep{Rinn:RSI:1982}. Here and throughout all uncertainties are given at an estimated $1\sigma$ level. The electron density has an uncertainty of about 3\% \citep{Lestinsky:ApJ:2009}. The stored ion number was derived from the ion current measured with a beam profile monitor \citep[BPM;][]{Hochadel:NIMA:1994}. We calibrated the BPM several times during the measurement by comparing with the ion current measurement from a DC transformer \citep{Unser:IEEE:1981}. The calibration was performed using currents of up to 21~$\mathrm{\mu A}$ \citep{Hahn:ApJ:2011a}. However, the DC transformer is not sensitive to the $0.5$ - $5$~$\mathrm{\mu A}$ currents present during measurement and could not be used directly for the analysis. We estimate that the uncertainty of the BPM contributes 15\% to the experimental systematic uncertainty.
Energy dependent pressure fluctuations change the background rate and can systematically distort the measured cross section. We corrected for these following \citet{Hahn:ApJ:2010}. The magnitude of the correction was $(0.8 \pm 0.2)\%$ for $\fethirteen$ EISI, $(4\pm2)\%$ for $\fesixteen$ and $(5\pm2)\%$ for $\feseventeen$. For the $\fethirteen$ EIDI measurements we did not find any systematic pressure fluctuations and so no correction was necessary. Because this correction could only be used when the reference point was below the threshold for ionization, there are other uncertainties on the cross sections in the higher energy ranges. The experimental uncertainties are given in Table~\ref{table:err}.
\section{Results and Discussion}\label{sec:res}
\subsection{Single Ionization} \label{subsec:single}
\subsubsection{Cross Sections}\label{subsubsec:cross}
Figure~\ref{fig:fe13cross} shows the EISI cross section for $\fethirteen$ forming Fe$^{14+}$. These data are also available in the electronic edition of this journal as a table following the format of Table~\ref{table:fe13cross}. In Figure~\ref{fig:fe13cross} the filled circles show the measured cross section and the dotted curves illustrate the $1\sigma$ systematic uncertainty. Error bars on selected points represent the $1\sigma$ errors due to counting statistics. In some cases the error bars are smaller than the symbol size because the magnitude of the statistical uncertainty varies from about 1\% to 4\%, being smaller in places where more data were collected.
The diamonds in Figure~\ref{fig:fe13cross} show crossed beams EII measurements of \citet{Gregory:PRA:1987}. These results agree with our measurements from threshold to about 700~eV, but they are about $40\%$ larger above 700~eV. This discrepancy is well outside the uncertainties of their and our measurements and is likely due to metastable ions in the crossed beams experiment. Although \citet{Gregory:PRA:1987} do not discuss the possible influence of metastables for this particular ion, the lifetimes of the $\fethirteen$ metastable levels suggest that they could have been present. Their experiment used a 10~kV ion beam. Given that their device had a length scale of several meters, this implies that metastables with lifetimes $\gtrsim10$~$\mathrm{\mu s}$ could remain in the beam. As discussed above, the $\fethirteen$ metastable levels $3s^2\,3p\,^{2}P_{3/2}$ and $3s\,3p\,3d\,^{4}F_{9/2}$ have lifetimes of $\approx16.7$~ms and $\approx 17.7$~ms, respectively, and could therefore both be present in the crossed beams experiment. The ionization threshold for the $3s\,3p\,3d\,^{4}F_{9/2}$ level lies 81.86~eV below the ground state ionization threshold \citep{NIST:2012}. However, the \citet{Gregory:PRA:1987} cross section does not show any contribution below the ground state threshold. This apparently low abundance of the $3s\,3p\,3d\,^{4}F_{9/2}$ level may be due to its relatively high excitation energy and to not being strongly populated by cascades.
It appears that it is primarily the $3s^2\,3p\,^{2}P_{3/2}$ level that is present in their beam, in addition to the ground level. This is also consistent with the observation that our results and those of \citet{Gregory:PRA:1987} agree at energies where direct ionization dominates, but disagree above the $2\rightarrow3$~EA threshold. The $3s^2\,3p\,^{2}P_{3/2}$ is part of the ground term and so it is expected to have a direct ionization cross section similar to that of the ground state, while the autoionization probabilities of levels above the $2\rightarrow3$ EA threshold could be different for the $^{2}P_{3/2}$ state.
For comparison, Figure~\ref{fig:fe13cross} illustrates the recommended cross section of \citet{Arnaud:ApJ:1992}, which is based on the theoretical work of \citet{Younger:JQSRT:1983} and \citet{Pindzola:PRA:1986}. This cross section is up to 35\% larger than our results. The reason for this is not clear. Also shown are the distorted wave results of \citet{Dere:AA:2007}, which agree with the measurement to within the level of the experimental uncertainties, though significant structural differences remain between the results of \citet{Dere:AA:2007} and ours, as discussed below.
Excitation of a $3s$ electron to an autoionizing level is not included in these theoretical cross sections. Using the LANL Atomic Code \citep{Magee:ASP:1995}, we have estimated that EA is possible for $3\rightarrow n \gtrsim 10$ excitations and could contribute to the ionization cross section near threshold (Figure~\ref{fig:fe13thresh}). Previous measurements for other $3s^2\,3p^q$ ions have found a significant EA contribution via this channel ($q=2$, \citealt{Hahn:ApJ:2011a}; $q=3$, \citealt{Hahn:ApJ:2011}; $q=4$~and~5, \citealt[][]{Hahn:ApJ:2012}) which has been confirmed by theoretical calculations ($q=3$, \citealt{Kwon:PRA:2012}). However, here we find little discrepancy between theory and experiment near threshold, despite the omission of $3\rightarrow n$ EA in the calculations. The $n$ required for the excitation to autoionize is about the same for $\fethirteen$ as for the ions of previous measurements, for example for Fe$^{11+}$ $3\rightarrow n > 8$ excitations were autoionizing. The reason this EA channel is relatively smaller here could be due to there being only one $3p$ electron. The effect could also be masked by the 16\% systematic uncertainty.
At higher energies there are some discrepancies in the shape of the cross section. The \citet{Dere:AA:2007} calculation overestimates the magnitude of $2\rightarrow4$~EA at about 1020~eV. This discrepancy has been found previously for similar ions \citep{Hahn:ApJ:2011, Hahn:ApJ:2011a, Hahn:ApJ:2012}. A possible explanation for the discrepancy is that the calculations underestimate the branching ratios for radiative stabilization. Another possibility is that the calculations underestimate the branching ratio for auto-double ionization \citep{Kwon:PRA:2012}, but this explanation is less likely since we do not observe any corresponding increase in the EIDI cross section at 1020~eV (see Section~\ref{subsec:double}).
Figures~\ref{fig:fe16cross} and \ref{fig:fe17cross} show the measured EISI cross sections for $\fesixteen$ forming Fe$^{17+}$ and $\feseventeen$ forming Fe$^{18+}$, respectively. These data are available in the electronic edition of this journal as tables following the format of Tables~\ref{table:fe16cross} and \ref{table:fe17cross}. The figures also illustrate the cross sections recommended by \citet{Arnaud:ApJ:1992}, which is based on the calculations of \citet{Younger:JQSRT:1982}, and the theoretical cross section of \citet{Dere:AA:2007}. For each ion there is generally good agreement with the measurement, to within the experimental uncertainties.
One discrepancy between experiment and theory is that the measured cross sections for both ions increase faster close to threshold than predicted by \citet{Dere:AA:2007}. This may be due to EA from excitation of a $2s$ electron to an energy above the threshold for ionization of the $2p$ electron. It should be noted, though, that the cross section of \citet{Arnaud:ApJ:1992} agrees very well with our results near threshold despite being based on calculations that included only direct ionization.
\subsubsection{Rate Coefficients} \label{subsubsec:rate}
Using the measured cross sections, we have derived EISI plasma ionization rate coefficients $\alpha_{\mathrm{I}}$ as a function of electron temperature $T_{\mathrm{e}}$ \citep[cf.,][]{Hahn:ApJ:2011}. Figures~\ref{fig:fe13rate}, \ref{fig:fe16rate}, and \ref{fig:fe17rate} show the results for $\fethirteen$, $\fesixteen$, and $\feseventeen$, respectively. In each case we compare these results to the rate coefficients of \citet{Arnaud:ApJ:1992} and \citet{Dere:AA:2007}. The vertical dotted lines in the figures indicate for ionization equilibrium the temperature ranges over which each ion is more than 1\% abundant relative to the total Fe abundance as well as the temperature of peak abundance \citep{Bryans:ApJ:2009}.
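Here $\alpha_{\mathrm{I}}(T_{\mathrm{e}})$ denotes the usual Maxwellian-averaged rate coefficient; schematically,
\[
\alpha_{\mathrm{I}}(T_{\mathrm{e}})=\langle \sigma_{\mathrm{I}} v\rangle
=\sqrt{\frac{8}{\pi m_{\mathrm{e}}\,(k_{\mathrm{B}}T_{\mathrm{e}})^{3}}}
\int_0^\infty \sigma_{\mathrm{I}}(E)\,E\,e^{-E/(k_{\mathrm{B}}T_{\mathrm{e}})}\,dE,
\]
where $m_{\mathrm{e}}$ is the electron mass; the details of the convolution and of the high-energy extrapolation are described below and in the cited reference.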
$\fethirteen$ is abundant from $1.3\times10^{6}$~K to $3.7\times10^6$~K with a peak abundance at $2.0\times10^{6}$~K. In this range the $\fethirteen$ rate coefficients of \citet{Arnaud:ApJ:1992} differ from our experiment by up to $32\%$, while those of \citet{Dere:AA:2007} agree with our results to within $10\%$. $\fesixteen$ is abundant from $1.6\times10^6$~K to $1.3\times10^7$~K peaking at $4.1\times10^6$~K. Surprisingly, the older rate coefficients of \citet{Arnaud:ApJ:1992} agree with our results to within 4\% within this range. The more recent data of \citet{Dere:AA:2007} differ by 19\% at the low end of the temperature range due to the discrepancy near the ionization threshold. At the temperature of peak abundance they differ by $9\%$ and at the high $T_{\mathrm{e}}$ end they agree with our measurement to within 1\%. Finally, $\feseventeen$ is abundant from $2.6\times10^6$~K to $1.5\times10^7$~K with peak abundance at $7.2\times10^6$~K. The rate coefficients of \citet{Arnaud:ApJ:1992} agree with our measurements in this range to within 13\%. Those of \citet{Dere:AA:2007} differ from the experimentally derived result by 17\% at the low end of the $T_{\mathrm{e}}$ range (for the same reasons as for Fe$^{16+}$), by $9\%$ at peak abundance, and by only 4\% at the high $T_{\mathrm{e}}$ end.
The energy range covered by the experiment does not include excitation or ionization of a $1s$ electron. These channels are also neglected by \citet{Arnaud:ApJ:1992} and \citet{Dere:AA:2007}. This neglect may introduce a small error in the calculation of the ionization rate coefficient for $\fesixteen$ and $\feseventeen$. The reason is that in deriving $\alpha_{\mathrm{I}}(T_{\mathrm{e}})$, the integration over the cross section is performed up to $E_{0} + 6k_{\mathrm{B}}T_{\mathrm{e}}$, where $E_0$ is the ionization threshold and $k_{\mathrm{B}}$ is the Boltzmann constant \citep{Fogle:ApJS:2008}. Thus, to calculate the rate coefficients over the full range where these ions are abundant, the integration should be performed up to 7984~eV for $\fesixteen$ and up to 9113~eV for $\feseventeen$. These are greater than the measured energy range, and we have extrapolated the measurements to higher energies by scaling the \citet{Dere:AA:2007} cross section to our measurements. This leaves an uncertainty as the integration limits exceed the $1s$ excitation and ionization thresholds. For $\fesixteen$ the lowest EA channel is the $1\rightarrow3$~EA, which opens at $\approx 7150$~eV. For $\feseventeen$ the lowest channel is $1\rightarrow2$~EA, which opens at $\approx6450$~eV \citep{Hou:ADNDT:2009,NIST:2012}. The threshold for $1s$ ionization is 7714.7~eV for $\fesixteen$ and 7823.2~eV for $\feseventeen$ \citep{NIST:2012}.
In order to assess the possible error from neglecting direct ionization and EA, we have estimated the $1s$ ionization and excitation cross sections using the LANL Atomic Physics Code \citep{Magee:ASP:1995}. These calculations show that the $1s$ direct ionization cross section is $\sim 0.5\%$ of the $n=2$ direct ionization cross section for these ions in the relevant energy ranges. The maximum $1s$ EA cross section is the total excitation cross section. At the relevant energies, compared to the included EISI from $n=2$, the $\fesixteen$ $1\rightarrow3$ excitation cross section is $\sim 0.05\%$ and the $\feseventeen$ $1\rightarrow2$ excitation cross section is about $0.5\%$. The contribution to single ionization from $n=1$ excitation and ionization is actually smaller than implied by these cross sections. In the case of $1\rightarrow n$ EA, the continuum state is estimated to radiatively stabilize 40\% of the time and lead to EISI only for the other 60\% \citep{Kaastra:AAS:1993}. For $1s$ ionization, radiative relaxation of the intermediate state completes the EISI process 40\% of the time, but the system autoionizes the other 60\% of the time, leading to EIDI. Given the small cross sections for $1\rightarrow n$ EA and $1s$ ionization, the branching ratios of the intermediate states, and the small fraction of the integrated energy range where they contribute at all, we expect that the omission of these processes has negligible effect on the calculated rate coefficients over the temperature ranges where $\fesixteen$ and $\feseventeen$ are abundant.
Table~\ref{table:coeff} presents coefficients for a polynomial fit to the scaled rate coefficient $\rho(x)=10^{-6}\sum_{i}{a_i x^i}$, which can be used to reproduce the plasma rate coefficients. The rate coefficient $\alpha_{\mathrm{I}}(T_{\mathrm{e}})$ is related to the scaled rate coefficient $\rho$ by \citep{Dere:AA:2007}:
\begin{equation}
\alpha_{\mathrm{I}}(T_{\mathrm{e}}) = t^{-1/2}E_0^{-3/2}E_{1}(1/t)\rho(x),
\label{eq:invscalerate}
\end{equation}
where $E_{1}(1/t)$ is the first exponential integral and $t=k_{\mathrm{B}}T_{\mathrm{e}}/E_0$, with $E_0$ the ionization threshold (392.2~eV for $\fethirteen$, 1262.7~eV for $\fesixteen$, and 1357.8~eV for $\feseventeen$). The scaled temperature $x$ is given by
\begin{equation}
x = 1 - \frac{\ln 2}{\ln(t+2)}
\label{eq:invx}
\end{equation}
and, by inverting, $T_{\mathrm{e}}$ can be obtained from $x$:
\begin{equation}
T_{\mathrm{e}} = \frac{E_0}{k_{\mathrm{B}}}\left[\exp\left(\frac{\ln 2}{1-x} \right) - 2 \right].
\label{eq:invscaletemp}
\end{equation}
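Conversely, given a rate coefficient $\alpha_{\mathrm{I}}(T_{\mathrm{e}})$, the corresponding scaled coefficient follows by inverting equation~(\ref{eq:invscalerate}),
\[
\rho(x)=t^{1/2}E_0^{3/2}\,\frac{\alpha_{\mathrm{I}}(T_{\mathrm{e}})}{E_{1}(1/t)},
\]
with $x$ computed from $T_{\mathrm{e}}$ through equation~(\ref{eq:invx}).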
The experimental rate coefficients are reproduced to 1\% accuracy or better for $T_{\mathrm{e}} = 4\times10^{5}$ -- $1\times10^{8}$~K for $\fethirteen$, $T_{\mathrm{e}} = 6\times10^{5}$ -- $1\times10^{8}$~K for $\fesixteen$, and $T_{\mathrm{e}} = 1.5\times10^{6}$ -- $1\times10^{8}$~K for $\feseventeen$.
\subsection{Double Ionization}\label{subsec:double}
Figure~\ref{fig:fe13di} shows the measured $\fethirteen$ double ionization cross section. The dotted curves give the systematic uncertainty and the error bars illustrate the statistical uncertainty for select points. These data are available in the electronic edition of this journal as a table following the format of Table~\ref{table:fe13di}. Although the threshold for double ionization is 848.4~eV, the cross section is consistent with zero until about 1100~eV when ionization of an $n=2$ electron becomes possible. The solid line illustrates the expected double ionization cross section due to ionization of an $n=2$ electron forming a state that relaxes through autoionization. This cross section was calculated using the LANL Atomic Physics Code \citep{Magee:ASP:1995} to determine the cross section for single ionization of an $n=2$ electron and then scaling the result by the Auger yields of about 93\% for $2p$ ionization and 95\% for $2s$ ionization given by \citet{Kaastra:AAS:1993}. In the measured energy range, essentially all of the double ionization of $\fethirteen$ is due to this process. This is consistent with results from other highly charged ions \citep{Muller:PRL:1980,Muller:JPhysB:1985,Stenke:JPhysB:1999d,Hahn:ApJ:2011, Hahn:ApJ:2011a}.
\section{Summary}\label{sec:sum}
We have measured cross sections for EISI from the ground states of $\fethirteen$, $\fesixteen$, and $\feseventeen$. For $\fethirteen$ we find discrepancies of about 40\% when compared to an earlier crossed beams experiment. This is likely due to metastable ions in that work and their absence in ours. The theoretical cross section recommended by \citet{Arnaud:ApJ:1992} is more than 30\% larger than our result. The recent calculation of \citet{Dere:AA:2007} is also $20\%$ larger than our result. The discrepancy with these theoretical calculations appears to be due to their treatment of EA. In particular, we do not observe the contribution from $2\rightarrow4$ EA predicted by \citet{Dere:AA:2007}. For $\fesixteen$ and $\feseventeen$ our results generally agree with theory to within the experimental uncertainties. There is a small discrepancy in that the experimental cross section rises faster near threshold than predicted by \citet{Dere:AA:2007}. One possibility is that this is due to neglecting EA from $n=2$ excitations in the calculations. The measured EIDI cross section for $\fethirteen$ is dominated by $n=2$ ionization followed by autoionization of the excited state leading to a net double ionization. This result is consistent with measurements for other ions.
\begin{acknowledgments}
We appreciate the efficient support by the MPIK accelerator and TSR groups during the beamtime. This work was supported in part by the NASA Astronomy and Physics Research and Analysis program and the NASA Solar Heliospheric Physics program. We also acknowledge financial support by the Max Planck Society, Germany and from Deutsche Forschungsgemeinschaft (contract no. Schi 378/8-1).
\end{acknowledgments}
\begin{deluxetable}{lccccc}
\tablewidth{0pt}
\tablecaption{Sources of Uncertainty.
\label{table:err}}
\tablehead{
\colhead{} &
\colhead{Source} &
\multicolumn{4}{c}{Estimated $1\sigma$ Uncertainty} \\
\colhead{} &
\colhead{} &
\multicolumn{2}{c}{\hspace{30pt}$\fethirteen$} &
\colhead{$\fesixteen$} &
\colhead{$\feseventeen$} \\
\colhead{} &
\colhead{} &
\colhead{EISI} &
\colhead{EIDI} &
\colhead{EISI} &
\colhead{EISI}
}
\startdata
&Counting statistics & 2\% & 3\% & 3\% & 6\% \\
&Detector efficiency & 3\% & 3\% & 3\% & 3\% \\
&Ion current measurement & 15\% & 15\% & 15\% & 15\% \\
&Electron density & 3\% & 3\% & 3\% & 3\% \\
&Pressure fluctuations\tablenotemark{1} & 0.2\% (0.8\%) & -- & 2\% (4\%) & 2\% (5\%) \\
\hline
&Quadrature sum & 16\% (16\%) & 16\% & 16\% (16\%)& 17\% (17\%) \\
\enddata
\tablenotetext{1}{The uncertainties in parentheses refer to the energy range where the reference point was above the ionization threshold. This is $>1050$~eV for $\fethirteen$ EISI, $>1500$~eV for $\fethirteen$ EIDI, $>1980$~eV for $\fesixteen$ EISI, and $>1950$~eV for $\feseventeen$ EISI. For $\fethirteen$ EIDI no pressure fluctuations were observed.}
\end{deluxetable}
\clearpage
\begin{deluxetable}{lll}
\tablewidth{0pc}
\tablecaption{$\fethirteen$ Single Ionization Cross Section.
\label{table:fe13cross}}
\tablehead{
\colhead{$E$~(eV)} &
\colhead{$\sigma_{\mathrm{I}}$~(cm$^2$)} &
\colhead{Statistical Error}
}
\startdata
400 & 1.7490E-20 & 5.4474E-21 \\
550 & 1.5980E-19 & 4.2844E-21 \\
700 & 2.1955E-19 & 1.9516E-21 \\
850 & 4.6733E-19 & 2.7333E-21 \\
1000 & 4.8221E-19 & 2.4091E-21 \\
1500 & 4.1659E-19 & 2.7346E-21 \\
2000 & 3.6449E-19 & 1.6042E-20
\enddata
\tablecomments{There is a systematic uncertainty of 16\% in the cross section (see the text). Table~\ref{table:fe13cross} is published in its entirety in the electronic edition of this journal.}
\end{deluxetable}
\begin{deluxetable}{lll}
\tablewidth{0pc}
\tablecaption{$\fesixteen$ Single Ionization Cross Section.
\label{table:fe16cross}}
\tablehead{
\colhead{$E$~(eV)} &
\colhead{$\sigma_{\mathrm{I}}$~(cm$^2$)} &
\colhead{Statistical Error}
}
\startdata
1300.5 & 7.2389E-21 & 8.8629E-22 \\
1499.3 & 3.4611E-20 & 6.0735E-22 \\
1998.4 & 6.5945E-20 & 1.2343E-21 \\
2502.2 & 7.7283E-20 & 3.4703E-21 \\
3007.8 & 8.2407E-20 & 2.4911E-21
\enddata
\tablecomments{There is a systematic uncertainty of 16\% in the cross section (see the text). Table~\ref{table:fe16cross} is published in its entirety in the electronic edition of this journal.}
\end{deluxetable}
\begin{deluxetable}{lll}
\tablewidth{0pc}
\tablecaption{$\feseventeen$ Single Ionization Cross Section.
\label{table:fe17cross}}
\tablehead{
\colhead{$E$~(eV)} &
\colhead{$\sigma_{\mathrm{I}}$~(cm$^2$)} &
\colhead{Statistical Error}
}
\startdata
1407.7 & 5.0680E-21 & 1.6703E-21 \\
1601.0 & 2.3464E-20 & 1.5598E-21 \\
1995.0 & 4.1807E-20 & 3.7005E-21 \\
2498.4 & 5.2723E-20 & 2.5672E-21 \\
3003.6 & 5.5388E-20 & 2.5631E-21
\enddata
\tablecomments{There is a systematic uncertainty of 16\% in the cross section (see the text). Table~\ref{table:fe17cross} is published in its entirety in the electronic edition of this journal.}
\end{deluxetable}
\begin{deluxetable}{llll}
\tablewidth{0pc}
\tablecaption{Fifth-order Polynomial Fitting Parameters to Reproduce the Scaled Single Ionization Rate Coefficient $\rho = 10^{-6}\sum_{i=0}^{i=5}{a_{i}x^{i}}$~$\mathrm{cm^{3}\,s^{-1}\,eV^{3/2}}$ (see equations \ref{eq:invscalerate} and \ref{eq:invscaletemp}).
\label{table:coeff}}
\tablehead{
\colhead{$i$} &
\multicolumn{3}{c}{$a_{i}$} \\
\colhead{} &
\colhead{$\fethirteen$} &
\colhead{$\fesixteen$} &
\colhead{$\feseventeen$}
}
\startdata
0 & \phs9.59988 & \phs27.1411 & \phs17.4957 \\
1 & \phn-53.2715 & \phn-24.4446 & \phs63.0820 \\
2 & \phs401.820 & \phs43.0958 & \phn-485.718 \\
3 & \phn-1018.29 & \phs74.8937 & \phs1517.84 \\
4 & \phs1115.84 & \phn-252.597 & \phn-2127.02 \\
5 & \phn-455.515 & \phs150.090 & \phs1094.75
\enddata
\end{deluxetable}
\begin{deluxetable}{lll}
\tablewidth{0pc}
\tablecaption{$\fethirteen$ Double Ionization Cross Section.
\label{table:fe13di}}
\tablehead{
\colhead{$E$~(eV)} &
\colhead{$\sigma_{\mathrm{I}}$~(cm$^2$)} &
\colhead{Statistical Error}
}
\startdata
900 & 5.5118E-24 & 1.3882E-21 \\
1100 & -7.6013E-23 & 1.2907E-21 \\
1300 & 2.9696E-20 & 1.2073E-21 \\
1500 & 4.9954E-20 & 1.2722E-21 \\
2000 & 7.6579E-20 & 5.1443E-21 \\
2500 & 8.2175E-20 & 3.5210E-21 \\
3000 & 8.4374E-20 & 4.9785E-21
\enddata
\tablecomments{There is a systematic uncertainty of 16\% in the cross section (see the text). Table~\ref{table:fe13di} is published in its entirety in the electronic edition of this journal.}
\end{deluxetable}
\begin{figure}
\centering \includegraphics[width=0.9\textwidth]{Fe13+_EII.eps}
\caption{\label{fig:fe13cross} EISI cross section for $\fethirteen$ forming Fe$^{14+}$ (circles). The dotted curves illustrate the $1\sigma$ systematic uncertainties. Statistical uncertainties are indicated by the error bars on selected points, but in many cases they are smaller than the symbol size. The experimental results of \citet{Gregory:PRA:1987} are shown by the diamonds. The dashed and solid curves show the theoretical cross sections given by \citet{Arnaud:ApJ:1992} and \citet{Dere:AA:2007}, respectively.
}
\end{figure}
\begin{figure}
\centering \includegraphics[width=0.9\textwidth]{Fe13+_EII_threshold.eps}
\caption{\label{fig:fe13thresh} A portion of Figure~\ref{fig:fe13cross} focusing on the threshold energy range.
}
\end{figure}
\begin{figure}
\centering \includegraphics[width=0.9\textwidth]{Fe16+_EII.eps}
\caption{\label{fig:fe16cross} Same as Figure~\ref{fig:fe13cross}, but for $\fesixteen$ forming Fe$^{17+}$.
}
\end{figure}
\begin{figure}
\centering \includegraphics[width=0.9\textwidth]{Fe17+_EII.eps}
\caption{\label{fig:fe17cross} Same as Figure~\ref{fig:fe16cross}, but for $\feseventeen$ forming Fe$^{18+}$.
}
\end{figure}
\begin{figure}
\centering \includegraphics[width=0.9\textwidth]{fe13_RateCoeffs_dualplot.eps}
\caption{\label{fig:fe13rate} Thick lines show various plasma rate coefficients for $\fethirteen$ forming Fe$^{14+}$. The solid curve indicates the experimental results, which can be read off the left axis. They are compared to the theoretical results of \citet[][dashed curve]{Arnaud:ApJ:1992} and \citet[][dash-dotted curve]{Dere:AA:2007}. The relative difference between these and the present results, (theory-experiment)/experiment, are shown by thin lines, with values read off the right axis. The dotted vertical lines denote the temperature range where $\fethirteen$ is $>1\%$ abundant in collisional ionization equilibrium with the center line at the temperature of peak $\fethirteen$ abundance \citep{Bryans:ApJ:2009}.
}
\end{figure}
\begin{figure}
\centering \includegraphics[width=0.9\textwidth]{fe16_RateCoeffs_dualplot.eps}
\caption{\label{fig:fe16rate} Same as Figure~\ref{fig:fe13rate}, but for $\fesixteen$ forming Fe$^{17+}$.
}
\end{figure}
\begin{figure}
\centering \includegraphics[width=0.9\textwidth]{fe17_RateCoeffs_dualplot.eps}
\caption{\label{fig:fe17rate} Same as Figure~\ref{fig:fe13rate}, but for $\feseventeen$ forming Fe$^{18+}$.
}
\end{figure}
\begin{figure}
\centering \includegraphics[width=0.9\textwidth]{Fe13+_EIDI.eps}
\caption{\label{fig:fe13di} EIDI cross section for $\fethirteen$ forming Fe$^{15+}$. The statistical uncertainty is indicated by error bars on selected points and the systematic uncertainties are illustrated by the dotted curves. The solid curve shows an estimate for the EIDI cross section due to direct ionization of an $n=2$ electron forming a state that relaxes by emission of a second electron producing, in total, double ionization (see the text).
}
\end{figure}
\section{Introduction}
Sparse representation of 2D images
has been a subject of extensive research in the
last fifteen years \cite{WMM10, Ela10, ZXY15}.
Applications which benefit from sparsity
range from image restoration \cite{MES08,ZSL13}
and classification \cite{GHO18,GYZ18,GWY18}
to feature extraction \cite{WYG09,YLY12}
and super resolution reconstructions \cite{YWH10,ZLY15}.
While sparse representation of
3D arrays has received
less attention, the advantage of modeling these arrays
as a superposition of 3D elementary
components is recognized in previous publications
\cite{DFK07,CC13,CML15,DYK17}.
At present, the most widely
used multichannel images in every day life are
true color images.
The simplest way of sparsely representing these images is
channel by channel, or adding constraints of
correlation across colors \cite{MES08,MM17}.
However, as demonstrated in this work,
sparsity in the representation of
true color images can increase substantially
if the approximation is realized by means of
3D elements taken from a highly redundant dictionary.
The effect is of course more pronounced
for arrays involving more channels, such as
hyper-spectral images.
From a practical view point,
the current drawbacks of 3D sparse modeling using
a large dictionary are (i) storage
requirements and (ii) the complexity of the concomitant
calculations. In this paper we propose a method
which, by addressing (i), leaves room for
possible high performance implementations using
Graphics Processing Unit (GPU) programming.
While the approach is illustrated using
Central Processing Unit (CPU) programming,
the storage requirements
are shown to fit within the 48\,KB of fast-access shared memory
of a GPU when the approximation of a 3D image is realized
with a partition block size of $8 \times 8 \times 8$ and
with a separable dictionary of redundancy 125.
The main contributions of the paper are listed below.
\begin{itemize}
\item
The low memory implementation of the
Orthogonal Matching Pursuit (OMP)
strategy, called Self Projected Matching Pursuit (SPMP)
\cite{RNB13}
is dedicated to operating in 3D (SPMP3D) with
separable dictionaries.
This technique delivers an iterative solution to the
3D least squares problem which requires much less storage
than direct linear algebra methods.
It could therefore also be applied with any other
of the pursuit strategies that include a least
squares step \cite{DTD06,NT09, EKB10, CC13, RNMB13}.
\item
The C++ MEX file for the SPMP3D method has
been made available on a dedicated website \cite{webpage}.
All the scripts for reproducing the results of the
paper in the MATLAB environment
have also been placed on that website.
\item
Remarkable reduction in the dimensionality of
the representation of true color images and
hyper-spectral images, with
high quality reconstruction, is demonstrated using
highly redundant and highly coherent
separable dictionaries.
\end{itemize}
The results suggest that the method may be of
assistance to
image processing applications which rely
on a transformation for data reduction as a first step
of further processing. For examples of relevant
applications we refer to
\cite{TWH15,NLS16,GLZ17,ZCL18,LPN18}.
\section{Notational Convention}
\label{notation}
$\mathbb{R}$ represents the set of real numbers.
Boldface letters are used to indicate Euclidean vectors,
2D and 3D arrays.
Standard mathematical fonts indicate components,
e.g., $\mathbf{d} \in \mathbb{R}^N$ is a vector of components
$d(i)\in \mathbb{R},\, i=1,\ldots,N$. The elements of a
3D array $\mathbf{I} \in \mathbb{R}^{\Nx \times \Ny \times \Nz}$ are indicated as
$I(i,j,m),\,i=1,\ldots, N_x, \, j=1,\ldots,N_y , \, m=1,\ldots,N_z$.
Moreover, for each $m$-value $\mathbf{I}_m \in \mathbb{R}^{\Nx \times N_y}$ stands for the
2D array of elements $I_m(i,j)=I(i,j,m),\,i=1,\ldots,N_x,\, j=1,\ldots,N_y$, which, when there is no risk of
ambiguity, will also be represented as
$I(:,:,m)$. The transpose of a matrix, $\mathbf{G}$ say, is
indicated as $\mathbf{G}^\top$.
The inner product between 3D arrays, say $\mathbf{I} \in \mathbb{R}^{\Nx \times \Ny \times \Nz}$ and
$\mathbf{G} \in \mathbb{R}^{\Nx \times \Ny \times \Nz}$, is given as:
$$\langle \mathbf{G}, \mathbf{I} \rangle_{\mathrm{3D}}= \sum_{i=1}^{N_x} \sum_{j=1}^{N_y} \sum_{m=1}^{N_z} G(i,j,m) I(i,j,m).$$
For $\mathbf{G} \in \mathbb{R}^{\Nx \times \Ny \times \Nz}$ with tensor product structure, i.e.
for $\mathbf{G} = \mathbf{g}^x \otimes \mathbf{g}^y \otimes \mathbf{g}^z$,
with $\mathbf{g}^x \in \mathbb{R}^{N_x}, \mathbf{g}^y \in \mathbb{R}^{N_y}$ and $\mathbf{g}^z \in \mathbb{R}^{N_z}$, we further have
\begin{equation}
\label{inp3d}
\langle \mathbf{G}, \mathbf{I} \rangle_{\mathrm{3D}}= \sum_{m=1}^{N_z} \langle \mathbf{g}^x, \mathbf{I}_m \mathbf{g}^y \rangle g^z(m)
= \langle \mathbf{p}, \mathbf{g}^z \rangle,
\end{equation}
where for each value of $m$ the vector $\mathbf{I}_m \mathbf{g}^y$ in $\mathbb{R}^{N_x}$ arises by the standard matrix-vector multiplication rule and $\mathbf{p} \in \mathbb{R}^{N_z}$ is given by its components
$p(m)=\langle \mathbf{g}^x, \mathbf{I}_m \mathbf{g}^y \rangle,\, m=1,\ldots,N_z$. Note that
$\langle \mathbf{p}, \mathbf{g}^z \rangle$ indicates the Euclidean inner product in 1D, i.e.
$$\langle \mathbf{p}, \mathbf{g}^z \rangle= \sum_{m=1}^{N_z} p(m) g^z(m).$$
The definition $\eqref{inp3d}$ induces the norm
$\|\mathbf{I}\|_{\mathrm{3D}}= \sqrt{\langle \mathbf{I}, \mathbf{I} \rangle_{\mathrm{3D}}}$.
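For readers who find a computational statement helpful, the following sketch (Python/NumPy, used here purely for illustration) evaluates the inner product \eqref{inp3d} for a separable array without ever forming the full $N_x \times N_y \times N_z$ tensor product, and checks the result against the direct definition.
\begin{verbatim}
import numpy as np

def inner3d_separable(gx, gy, gz, I):
    # p(m) = <gx, I_m gy> for every slice I_m = I[:, :, m], cf. eq. (inp3d)
    p = np.einsum('i,ijm,j->m', gx, I, gy)
    return np.dot(p, gz)

# Small consistency check against the non-separable definition.
rng = np.random.default_rng(0)
Nx, Ny, Nz = 4, 5, 3
gx, gy, gz = rng.standard_normal(Nx), rng.standard_normal(Ny), rng.standard_normal(Nz)
I = rng.standard_normal((Nx, Ny, Nz))
G = np.einsum('i,j,m->ijm', gx, gy, gz)     # full tensor product, formed here only for checking
assert np.isclose(inner3d_separable(gx, gy, gz, I), np.sum(G * I))
\end{verbatim}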
\section{Sparse Representation of Multi-channel Images}
Suppose that a 3D image, given as an array
$\mathbf{I} \in \mathbb{R}^{\Nx \times \Ny \times \Nz}$ of intensity
pixels, is to be approximated by the linear
decomposition
\begin{equation}
\mathbf{I}^k= \sum_{n=1}^k c(n) \mathbf{D}_{\ell_n},
\label{atom}
\end{equation}
where each $c(n)$ is a scalar and each $\mathbf{D}_{\ell_n}$
is an element of $\mathbb{R}^{\Nx \times \Ny \times \Nz}$ to be selected from a
set, $\mathcal{D}=\{\mathbf{D}_n\}_{n=1}^M$,
called a `dictionary'.
A sparse approximation of $\mathbf{I}\in \mathbb{R}^{\Nx \times \Ny \times \Nz}$ is
an approximation of the form \eqref{atom} such that the
number $k$
of elements in the decomposition is significantly smaller
than ${N=N_x N_y N_z}$.
The terms in the decomposition \eqref{atom}
are taken from a large
redundant dictionary, from where the
elements $\mathbf{D}_{\ell_n}$ in \eqref{atom}, called `atoms',
are chosen according to an optimality criterion.
Within the redundant dictionary framework for
approximation, the problem of finding the sparsest decomposition of a given multi-channel image
can be formulated as follows:
{\em{Given an image and a dictionary,
approximate the image by the `atomic decomposition'
\eqref{atom}
such that the number $k$ of atoms is minimum.}}
Unfortunately, the numerical
minimization of the number of terms needed to produce an
approximation up to a desired error
involves a combinatorial problem requiring an
exhaustive search. Hence, the solution
is intractable.
Consequently, instead of looking for the
sparsest solution, one looks for a
`satisfactory solution', i.e.,
a solution such that the number $k$ of terms
in \eqref{atom}
is considerably smaller than the image dimension.
For 2D images this can be effectively achieved by
greedy pursuit strategies in the line of the
Matching Pursuit (MP) \cite{MZ93} and
OMP \cite{PRK93}
methods, if dedicated to 2D separable dictionaries
\cite{RNBCP12,RNB13,CC13,LRN17}.
Within a tensor product framework the consideration
of OMP in 3D is natural.
Let us assume that a 3D dictionary is obtained as the
tensor product
$\mathcal{D}=\mathcal{D}^x \otimes \mathcal{D}^y \otimes \mathcal{D}^z$ of three
1D dictionaries
$\mathcal{D}^x =\{\mathbf{d}^x_m \in \mathbb{R}^{N_x}\}_{m=1}^{M_x}$,
$\mathcal{D}^y =\{\mathbf{d}^y_m \in \mathbb{R}^{N_y}\}_{m=1}^{M_y}$, and
$\mathcal{D}^z =\{\mathbf{d}^z_m \in \mathbb{R}^{N_z}\}_{m=1}^{M_z}$, with
$M_x M_y M_z= M$. For computational purposes the 1D
dictionaries are stored as three matrices
$\mathbf{D}^x \in \mathbb{R}^{N_x \times M_x}$,
$\mathbf{D}^y \in \mathbb{R}^{N_y \times M_y}$ and
$\mathbf{D}^z \in \mathbb{R}^{N_z \times M_z}$.
Suppose now that a 3D array $\mathbf{I} \in
\mathbb{R}^{N_x \times N_y \times N_z}$ is to be
approximated by an {{atomic decomposition}} of the
form
\begin{equation}
\label{atoq}
\mathbf{I}^{k}= \sum_{n=1}^{k}
c(n) \mathbf{d}^x_{\ell^{x}_n} \otimes \mathbf{d}^y_{\ell^{y}_n} \otimes \mathbf{d}^z_{\ell^{z}_n},
\end{equation}
where for $n=1,\ldots,k$ the atoms $\mathbf{d}^x_{\ell^{x}_n}$,
$\mathbf{d}^y_{\ell^{y}_n}$ and $\mathbf{d}^z_{\ell^{z}_n}$ are
selected from the given 1D dictionaries.
The common step of the techniques we
consider for constructing
approximations of the form \eqref{atoq} is the
stepwise selection of the atoms in the
atomic decomposition.
On setting $k=1$ and $\mathbf{I}^0=0$ at
iteration $k$ the algorithm selects the indices
$\ell^{x}_{k}$, $\ell^{y}_{k}$ and
$\ell^{z}_{k}$ as follows
\begin{equation}
\label{select}
\ell^{x}_{k},\ell^{y}_{k},
\ell^{z}_{k}= \operatorname*{arg\,max}_{\substack{n=1,\ldots,M_x\\
i=1,\ldots,M_y\\
s=1,\ldots,M_z}} \left |\langle \mathbf{d}^{x}_n \otimes \mathbf{d}^{y}_i \otimes \mathbf{d}^{z}_s ,\mathbf{R}^{k-1} \rangle_{\mathrm{3D}} \right|,
\end{equation}
with $\mathbf{R}^{k-1}= \mathbf{I} - \mathbf{I}^{k-1}$.
It is the determination
of the coefficients $c(n),\,n=1,\ldots,k$ in
\eqref{atoq} that
gives rise to pursuit strategies which go with
different names.
\subsection{Matching Pursuit in 3D (MP3D)}
The MP approach in 3D would simply calculate
the coefficients in
\eqref{atoq} as
\begin{equation}
\label{cmp}
c(n)= \langle \mathbf{d}^x_{\ell^{x}_n} \otimes \mathbf{d}^y_{\ell^{y}_n} \otimes \mathbf{d}^z_{\ell^{z}_n} ,\mathbf{R}^{n-1} \rangle_{\mathrm{3D}}, \quad
n=1,\ldots,k.
\end{equation}
The main drawback of the MP method is that it may select linearly
dependent atoms. Moreover, the approximation is not
stepwise optimal because at iteration $k$ the
coefficients \eqref{cmp}
do not minimize the norm of the residual error.
The pursuit strategy that overcomes
these limitations is the so-called
OMP \cite{PRK93}.
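A minimal sketch of one MP3D iteration is given below (Python/NumPy, illustrative only). The helper \texttt{select\_atom}, which is assumed here and is not part of the original formulation, stands for any realization of the selection \eqref{select} that returns the maximizing triple together with the corresponding inner product; the atoms are assumed to be normalized.
\begin{verbatim}
import numpy as np

def mp3d_step(R, Dx, Dy, Dz, select_atom):
    # select_atom is assumed to return (lx, ly, lz, c) with
    # c = <dx_lx (x) dy_ly (x) dz_lz, R>_3D maximal in absolute value
    lx, ly, lz, c = select_atom(R, Dx, Dy, Dz)
    atom2d = np.outer(Dx[:, lx], Dy[:, ly])            # Nx x Ny
    dz = Dz[:, lz]                                     # length Nz
    # slice-wise rank-one update  R <- R - c * dx (x) dy (x) dz
    R = R - c * atom2d[:, :, None] * dz[None, None, :]
    return R, (lx, ly, lz), c
\end{verbatim}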
\subsection{Orthogonal Matching Pursuit in 3D}
The implementation of OMP in 3D (OMP3D) we
describe here is the 3D
extension of the implementation of OMP in 2D
given in \cite{RNBCP12}. An alternative
algorithm called Kronecker-OMP, which is based on
the Tucker representation of a tensor, is discussed in
\cite{CC13}.
Our algorithm is based on adaptive
biorthogonalization and
Gram-Schmidt orthogonalization procedures, as
proposed in \cite{RNL02} for the one dimensional case.
In order to ensure the coefficients
$c(n),\,n=1,\ldots,k$ involved in
\eqref{atoq}
are such that
$\|\mathbf{R}^{k}\|_{\mathrm{3D}}^2= \langle\mathbf{R}^{k}, \mathbf{R}^{k} \rangle_{\mathrm{3D}} $ is minimum, the decomposition \eqref{atoq} should
fulfill that
\begin{equation}
\label{atop}
\mathbf{I}^{k}= \sum_{n=1}^{k}
c(n) \mathbf{d}^x_{\ell^{x}_n} \otimes \mathbf{d}^y_{\ell^{y}_n} \otimes \mathbf{d}^z_{\ell^{z}_n}= \hat{\mathrm{P}}_{\mathbb{V}_k} \mathbf{I},
\end{equation}
where $\hat{\mathrm{P}}_{\mathbb{V}_k}$ is the orthogonal
projection operator
onto $\mathbb{V}_k={\mbox{\rm{span}}}\{{\vd}^x_{\ell^x_n} \otimes {\vd}^y_{\ell^y_n} \otimes {\vd}^z_{\ell^z_n} \}_{n=1}^k$.
This is ensured by requiring that
$\mathbf{R}^{k}= \mathbf{I}- \hat{\mathrm{P}}_{\mathbb{V}_k} \mathbf{I}$.
The required representation of $\hat{\mathrm{P}}_{\mathbb{V}_k}$ is of
the form
$\hat{\mathrm{P}}_{\mathbb{V}_k} \mathbf{I} = \sum_{n=1}^k \mathbf{A}_n \langle \mathbf{B}_n^k, \mathbf{I} \rangle_{\mathrm{3D}} $,
where each $\mathbf{A}_n \in \mathbb{R}^{\Nx \times \Ny \times \Nz}$ is an array with the
selected atoms $\mathbf{A}_n= {\vd}^x_{\ell^x_n} \otimes {\vd}^y_{\ell^y_n} \otimes {\vd}^z_{\ell^z_n}$. The
concomitant biorthogonal reciprocal
set $\mathbf{B}_n^k,\,n=1,\ldots,k$ comprises the unique elements
of $\mathbb{R}^{\Nx \times \Ny \times \Nz}$ satisfying the conditions:
\begin{itemize}
\item
[i)]$\langle \mathbf{A}_n, \mathbf{B}_m^k \rangle_{\mathrm{3D}}=\delta_{n,m}= \begin{cases}
1 & \mbox{if}\, n=m\\
0 & \mbox{if}\, n\neq m.
\end{cases}$
\item
[ii)]${\mathbb{V}_k}= {\mbox{\rm{span}}}\{\mathbf{B}_n^k\}_{n=1}^k.$
\end{itemize}
Thus, the coefficients $c(n),\,n=1,\ldots,k$ in \eqref{atop}
which guarantee minimum norm of the residual error
are calculated as
$$c(n)=\langle \mathbf{B}_n^k, \mathbf{I} \rangle_{\mathrm{3D}},\quad n=1,\ldots,k.$$
The required arrays $\mathbf{B}_n^k,\,n=1,\ldots,k$ should be
upgraded and updated to account for each newly
selected atom.
Starting from
$k=1,\mathbf{R}^{0}=\mathbf{I}$, $\mathbf{B}_1^1=\mathbf{W}_1=
\mathbf{A}_1= {\vd}^x_{\ell^x_1} \otimes {\vd}^y_{\ell^y_1} \otimes {\vd}^z_{\ell^z_1}$,
where
$$\ell^{x}_{1},\ell^{y}_{1},
\ell^{z}_{1}= \operatorname*{arg\,max}_{\substack{n=1,\ldots,M_x\\
i=1,\ldots,M_y\\
s=1,\ldots,M_z}} \left |\langle \mathbf{d}^{x}_n \otimes \mathbf{d}^{y}_i \otimes \mathbf{d}^{z}_s ,\mathbf{R}^{k-1} \rangle_{\mathrm{3D}} \right|,
$$
at iteration $k+1$ the indices
$\ell^{x}_{k+1},\ell^{y}_{k+1},
\ell^{z}_{k+1}$ corresponding to the new
atom $\mathbf{A}_{k+1}= {\vd}^x_{\ell^x_{k+1}} \otimes {\vd}^y_{\ell^y_{k+1}} \otimes {\vd}^z_{\ell^z_{k+1}}$ are selected as in
\eqref{select}. The required
reciprocal set $\mathbf{B}_n^{k+1},\,n=1,\ldots,k+1$ is
adapted and upgraded by extending the recursion formula given in \cite{RNL02} as follows.
\begin{equation}
\begin{split}
\mathbf{B}_n^{k+1}&= \mathbf{B}_n^k - \mathbf{B}_{k+1}^{k+1}\langle \mathbf{A}_{k+1}, \mathbf{B}_n^k \rangle_{\mathrm{3D}},\quad n=1,\ldots,k,\\
\text{where}\\
\mathbf{B}_{k+1}^{k+1}&= \mathbf{W}_{k+1}/\|\mathbf{W}_{k+1}\|_{\mathrm{3D}}^2,\\
\text{with} \\
\mathbf{W}_{k+1}&= \mathbf{A}_{k+1} - \sum_{n=1}^k \frac{\mathbf{W}_n}
{\|\mathbf{W}_n\|_{\mathrm{3D}}^2} \langle \mathbf{W}_n, \mathbf{A}_{k+1}\rangle_{\mathrm{3D}}, \nonumber
\end{split}
\end{equation}
including, for numerical accuracy, the re-orthogonalization
step:
\begin{equation}
\label{RGS}
\mathbf{W}_{k+1} \leftarrow \mathbf{W}_{k+1} - \sum_{n=1}^k \frac{\mathbf{W}_n}
{\|\mathbf{W}_n\|_{\mathrm{3D}}^2} \langle \mathbf{W}_n, \mathbf{W}_{k+1}\rangle_{\mathrm{3D}}. \nonumber
\end{equation}
Although the image approximation is carried out
by partitioning the images into
relatively small 3D blocks, the memory
requirements of the OMP3D method are high.
Indeed, the above procedure involves $2(k+1)$ nonseparable arrays, each of
dimension $N=N_x N_y N_z$, which need to be stored in
double precision.
Hence, we consider next a low memory implementation of the
orthogonal projection step, which avoids having to
store the arrays $\mathbf{W}_n,\,n=1,\ldots,k$ and $\mathbf{B}_n^k,\,n=1,\ldots,k$ and fully exploits the separability of the
dictionary.
\subsection{Self Projected Matching Pursuit in 3D (SPMP3D)}
The Self Projected Matching Pursuit (SPMP) methodology
was introduced in \cite{RNB13} and conceived to be
used with separable dictionaries in 2D (SPMP2D).
Because the technique is
based on calculations of inner products,
it can be easily extended to operate in 3D (SPMP3D).
Suppose that at iteration $k$ the selection process has
chosen the atoms labeled by the triple of indices
$\{\ell^{x}_n ,\ell^{y}_n,\ell^{z}_n\}_{n=1}^{k}$
and let $\tilde{\mathbf{I}}^{k}$ be the atomic decomposition
\begin{equation}
\label{atoq2}
\tilde{\mathbf{I}}^{k}= \sum_{n=1}^{k}
a(n)\mathbf{d}^x_{\ell^{x}_n} \otimes \mathbf{d}^y_{\ell^{y}_n}
\otimes \mathbf{d}^z_{\ell^{z}_n},
\end{equation}
where the coefficients $a(n), \, n=1,\ldots,k$
are arbitrary numbers.
Every array ${\mathbf{I}} \in
\mathbb{R}^{\Nx \times \Ny \times \Nz}$ can be expressed as
\begin{equation}
{\mathbf{I}}= \tilde{\mathbf{I}}^{k} + \tilde{\mathbf{R}}.
\end{equation}
For $\tilde{\mathbf{I}}^{k}$ to be the optimal representation
of ${\mathbf{I}}$ in
$\mathbb{V}_{k}= {\mbox{\rm{span}}}\{\mathbf{d}^x_{\ell^{x}_n} \otimes
\mathbf{d}^y_{\ell^{y}_n} \otimes \mathbf{d}^z_{\ell^{z}_n}\}_{n=1}^{k}$, in the sense of minimizing the
norm of the residual $\til{\mathbf{R}}$, it should be true that
$\hat{\mathrm{P}}_{\mathbb{V}_{k}} \til{\mathbf{R}}=0$. The SPMP3D method fulfills this
property by approximating $\til{\mathbf{R}}$ in $\mathbb{V}_{k}$, via the
MP method, and subtracting that component from $\til{\mathbf{R}}$.
The following algorithm describes the whole
procedure.
Starting from $k=0$ and $\mathbf{R}^0=\mathbf{I}$,
at each iteration, implement the steps below.
\begin{itemize}
\item[i)]
Increase $ k \leftarrow k+1$ and apply the criterion \eqref{select} for selecting
the triple of indices
$(\ell^{x}_{k},\ell^{y}_{k},\ell^{z}_{k})$.
Save this triple in the array
$L(k,1:3)=(\ell^{x}_{k},\ell^{y}_{k},\ell^{z}_{k}),$
and set $$c(k)=\langle \mathbf{d}^x_{\ell^{x}_k} \otimes
\mathbf{d}^y_{\ell^{y}_k} \otimes \mathbf{d}^z_{\ell^{z}_k}, \mathbf{R}^{k-1}\rangle_{\mathrm{3D}}$$
and implement the update of the residue
$\mathbf{R}^k=\mathbf{R}^{k-1} - c(k) \mathbf{d}^x_{\ell^{x}_{k}} \otimes
\mathbf{d}^y_{\ell^{y}_{k}} \otimes \mathbf{d}^z_{\ell^{z}_{k}}$
as follows:\\
For $s=1,\ldots,N_z$ calculate
$$\Delta \mathbf{R}^k(:,:,s) =
c(k) \mathbf{d}^x_{\ell^{x}_{k}}
(\mathbf{d}^y_{\ell^{y}_{k}})^\top d^z_{\ell^{z}_{k}}(s),$$
to
update $\mathbf{R}^k$ as
$$\mathbf{R}^k= \mathbf{R}^{k-1} - \Delta \mathbf{R}^k.$$
\item[ii)]
Given the indices
$L(n,1:3)=(\ell^{x}_{n},\ell^{y}_{n},\ell^{z}_{n}),\, n=1,\ldots,k$ of the previously selected atoms, and a
tolerance $\epsilon$ for the projection
error, realize the orthogonal projection
up to that error as follows.
Set $j=1$, $\til{\mathbf{R}}^{0}=\mathbf{R}^{k}$ and
at iteration $j$ apply the steps a) - c) below.
\begin{itemize}
\item [a)]
For $n=1,\ldots,k$ evaluate
\begin{equation}
\label{alpha}
\alpha(n)=
\langle \mathbf{d}^{x}_{\ell^{x}_{n}}
\otimes \mathbf{d}^{y}_{\ell^{y}_{n}} \otimes \mathbf{d}^{z}_{\ell^{z}_{n}}, \til{\mathbf{R}}^{j-1}\rangle_{\mathrm{3D}},
\end{equation}
and single out the value $k^\ast$ such that
\begin{equation}
\label{ast}
k^\ast =\!\operatorname*{arg\,max}_{\substack{n=1,\ldots,k}} \! | \alpha(n)|.
\end{equation}
The value $k^\ast$ signalizes the indices
$\ell^{x}_{k^\ast},\ell^{y}_{k^\ast},\ell^{z}_{k^\ast}$
corresponding to the already selected atoms with maximum
correlation with the residual $\til{\mathbf{R}}^{j-1}$.
\item [b)] If
$|\alpha(k^\ast)| < \epsilon $
stop. Otherwise
update the coefficient $$c(k^\ast) \leftarrow
c(k^\ast) + \alpha(k^\ast) $$ and
for $s=1,\ldots,N_z$ evaluate
$$\Delta \til{\mathbf{R}}^j(:,:,s)=
\alpha(k^\ast)\, \mathbf{d}^x_{\ell^{x}_{k^\ast}} (\mathbf{d}^y_{\ell^{y}_{k^\ast}})^\top d^z_{\ell^{z}_{k^\ast}}(s)$$
to update the residual $\til{\mathbf{R}}^j$ as
$$\til{\mathbf{R}}^j= \til{\mathbf{R}}^{j-1} - \Delta \til{\mathbf{R}}^j.$$
This step subtracts from the residual a
component in $\mathbb{V}_{k}$ and adds that component to the
approximation $\tilde{\mathbf{I}}^{k}$.
\item [c)] Increase $j \leftarrow j+1$
and repeat the steps a) - c) to keep subtracting
components of $\til{\mathbf{R}}^j$ in $\mathbb{V}_{k}$ until
iteration, $J$ say, for which the stopping criterion b)
is met. This criterion indicates that,
up to tolerance $\epsilon$, the
residual has no component in $\mathbb{V}_{k}$ so that one can set
$\mathbf{R}^k= \tilde{\mathbf{R}}^{J-1}$.
\end{itemize}
Continue with
steps i) and ii) to keep enlarging
$\mathbb{V}_{k}$ until, for a required
tolerance error $\rho$, the condition
$\| \mathbf{R}^k\|_{\mathrm{3D}} < \rho$ is reached.
\end{itemize}
{\em{Remark 1:}}
For each fixed value of $k$, the rate of the convergence
$$\lim_{j \to \infty} \left(\mathbf{I} - \tilde{\mathbf{R}}^j\right)= \hat{\mathrm{P}}_{\mathbb{V}_{k}} \mathbf{I},$$
realized through the steps
a) - c) above, is given
in \cite{RNRS17} for the one dimensional case. The proof
for 3D is identical to that proof, because a
3D array can be represented as a long 1D vector.
What varies is the implementation. A vectorized
version of the algorithm would not be applicable
in this context.
\subsubsection*{Implementation details}
The bulk of the computational burden in the SPMP3D method
lies in the realization of the selection
of atoms \eqref{select}. Algorithm~\ref{IP3D} outlines a
procedure implementing the process. It should be
stressed once
again that
the algorithm is designed to use as little memory
as possible, rather than to reduce complexity.
\newcounter{myalg}
\begin{algorithm}[!htp]
\refstepcounter{myalg}
\begin{algorithmic}
\caption{Implementation of the selection of atoms
(c.f. \eqref{select})\newline
Procedure $[ {\alpha},{\ell^{x}}, {\ell^{y}}, {\ell^{z}}]=\text{Sel3DAtom}(\mathbf{R},\mathbf{D}_x,\mathbf{D}_y,\mathbf{D}_z)$}
\label{IP3D}
\STATE{{\bf{Input:}}\, 3D array $\mathbf{R}$,
matrices $\mathbf{D}_x$, $\mathbf{D}_y$ $\mathbf{D}_z$ the columns
of which are the
atoms in the corresponding dictionaries.}
\STATE{{\bf{Output:}}\, selected indices ${\ell^{x}}, {\ell^{y}}, {\ell^{z}}$, and ${\alpha}= \langle \mathbf{d}^x_{\ell^{x}} \otimes \mathbf{d}^y_{\ell^{y}} \otimes \mathbf{d}^z_{\ell^{z}}, \mathbf{R} \rangle_{\mathrm{3D}}$}
\STATE{\COMMENT{Initiate the algorithm}}
\STATE{$(N_z,M_z)=\text{size}(\mathbf{D}_z);\, M_x=\text{size}(\mathbf{D}_x,2);\, M_y=\text{size}(\mathbf{D}_y,2)$}
\STATE{$q=\text{zeros}(M_x,M_y)$}
\STATE{${\alpha}=0$}
\FOR {$m=1:M_z$}
\STATE{$q(:,:)=0$}
\FOR {$s=1:N_z$}
\STATE{$q(:,:)=q(:,:)+\mathbf{D}_x^\top R(:,:,s)\mathbf{D}_y d^z_m(s)$}
\ENDFOR
\COMMENT{Realize \eqref{select} by finding the partial
maximum, and its argument, for each $m$-plane}
\STATE{$[l_1,l_2,\tilde{q}]=\text{max}(|q(:,:)|)$}
\IF {$\tilde{q} > |{\alpha}|$}
\STATE{${\alpha} = q(l_1,l_2);\, {\ell^{x}}=l_1;\,{\ell^{y}}=l_2;\, {\ell^{z}}=m$}
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
At iteration $k$ the outputs of Algorithm~\ref{IP3D} are
saved as $c(k)={\alpha}$ and $L(k,1:3)=({\ell^{x}}, {\ell^{y}}, {\ell^{z}})$.
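A compact NumPy transcription of Algorithm~\ref{IP3D} is sketched below for illustration; it mirrors the same loop structure (one $M_x \times M_y$ score array per value of $m$) but is not the optimized C++ MEX implementation available at \cite{webpage}.
\begin{verbatim}
import numpy as np

def sel3d_atom(R, Dx, Dy, Dz):
    """Select the separable atom maximizing |<dx (x) dy (x) dz, R>_3D|."""
    Mz = Dz.shape[1]
    alpha, lx, ly, lz = 0.0, 0, 0, 0
    for m in range(Mz):
        # q(n, i) = sum_s  dx_n^T R(:, :, s) dy_i  dz_m(s), cf. Algorithm 1
        q = np.einsum('xn,xys,yi,s->ni', Dx, R, Dy, Dz[:, m])
        n, i = np.unravel_index(np.argmax(np.abs(q)), q.shape)
        if abs(q[n, i]) > abs(alpha):
            alpha, lx, ly, lz = q[n, i], n, i, m
    return alpha, lx, ly, lz
\end{verbatim}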
The implementation details for selecting the
triple of indices at the projection step are given in
Algorithm~\ref{SelTrip}. This is used in
Algorithm~\ref{Proj} for the realization of the actual
projection to recalculate the coefficients in the atomic
decomposition.
\begin{algorithm}[!htp]
\refstepcounter{myalg}
\begin{algorithmic}
\caption{Selection of the triple of indices from the reduced dictionary
(c.f. \eqref{ast})\newline
Procedure
$[\alpha^{\ast},k^{\ast}]$=\text{SelTrip}$(\mathbf{R}, \mathbf{D}_x,\mathbf{D}_y,\mathbf{D}_z,\mathbf{L})$}
\label{SelTrip}
\STATE{{\bf{Input:}}\, As in Algorithm~\ref{IP3D} plus
the array $\mathbf{L}$,
with the triple of indices
$L(n,1:3)=(\ell^{x}_{n},\ell^{y}_{n},\ell^{z}_{n}),\, n=1,\ldots,k$}
\STATE{{\bf{Output:}}\,$k^{\ast}$
and the corresponding values of $\alpha$
(c.f. \eqref{ast}) to update
the coefficients and residual}
\STATE{\COMMENT{Initiate the algorithm}}
\STATE{$\alpha^{\ast}=0$}
\FOR {$n=1:k$}
\STATE{$p=0$}
\FOR {$s=1:N_z$}
\STATE{$p=p+(\mathbf{d}^x_{\ell^{x}_{n}})^\top R(:,:,s) \mathbf{d}^y_{\ell^{y}_{n}} d^z_{\ell^{z}_{n}}(s)$}
\ENDFOR
\IF{$|p| > |\alpha^{\ast}|$}
\STATE{$k^{\ast}=n$
and $\alpha^{\ast}=p$}
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[!ht]
\refstepcounter{myalg}
\begin{algorithmic}
\caption{Implementation of the self projection steps
a) - c).\newline
Procedure
$[\tilde{\mathbf{R}}, \tilde{\mathbf{c}}]$=\text{Proj3D}($\mathbf{R},\mathbf{D}_x,\mathbf{D}_y,\mathbf{D}_z, \mathbf{L}, \mathbf{c}, \epsilon, \text{MaxJ}$).}
\label{Proj}
\STATE{{\bf{Input:}}\, As in Algorithm \ref{SelTrip}, plus
the coefficients of the atomic decomposition $\mathbf{c}$,
a tolerance parameter $\epsilon$
for the numerical error of the projection,
and a maximum number of permitted iterations, MaxJ.}
\STATE{{\bf{Output:}}\, Orthogonal residual $\tilde{\mathbf{R}}$.
Coefficients $\tilde{\mathbf{c}}$ of the optimized
atomic decomposition.}
\FOR {$j=1:\text{MaxJ}$}
\STATE{\COMMENT{Selection of atoms using Algorithm~\ref{SelTrip}}}
\STATE{$[\alpha^\ast, k^\ast]$=\text{SelTrip}($\mathbf{R},\mathbf{D}_x,\mathbf{D}_y,\mathbf{D}_z, \mathbf{L}$)}
\STATE{\COMMENT{Check stopping criterion}}
\IF{$|\alpha^\ast| < \epsilon$}
\STATE{stop}
\ENDIF
\STATE{\COMMENT{Update the coefficients}}
\STATE{$c(k^\ast) \leftarrow c(k^\ast)+ \alpha^\ast$}
\STATE{\COMMENT{Update the residual}}
\FOR {$s=1: N_z$}
\STATE{$\mathbf{R}(:,:,s) \leftarrow \mathbf{R}(:,:,s)- \alpha^\ast\, \mathbf{d}^x_{\ell^{x}_{k^\ast}}(\mathbf{d}^y_{\ell^{y}_{k^\ast}})^\top d^z_{\ell^{z}_{k^\ast}}(s)$}
\ENDFOR
\ENDFOR
\STATE{\COMMENT{For clarity in the description only, we re-name here the residual and coefficients}}
\STATE{$\tilde{\mathbf{R}}= \mathbf{R};\; \tilde{\mathbf{c}}= \mathbf{c}$}
\end{algorithmic}
\end{algorithm}
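For completeness, the self-projection realized by Algorithms~\ref{SelTrip} and \ref{Proj} translates almost line by line into the following NumPy sketch (illustrative only; the released MEX code of \cite{webpage} should be consulted for the actual implementation).
\begin{verbatim}
import numpy as np

def proj3d(R, Dx, Dy, Dz, L, c, eps=1e-9, max_j=1000):
    """Remove from R its component in the span of the already selected atoms.

    L : list of selected triples (lx, ly, lz);  c : current coefficients.
    """
    c = np.array(c, dtype=float)
    for _ in range(max_j):
        # SelTrip: inner products with the k atoms already selected
        alphas = [np.einsum('x,xys,y,s->', Dx[:, lx], R, Dy[:, ly], Dz[:, lz])
                  for (lx, ly, lz) in L]
        k_ast = int(np.argmax(np.abs(alphas)))
        a = alphas[k_ast]
        if abs(a) < eps:               # residual numerically orthogonal to V_k
            break
        lx, ly, lz = L[k_ast]
        c[k_ast] += a                  # update the coefficient
        # subtract the corresponding separable component from the residual
        R = R - a * np.einsum('x,y,s->xys', Dx[:, lx], Dy[:, ly], Dz[:, lz])
    return R, c
\end{verbatim}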
Due to computational complexity and memory requirements,
pursuit strategies using general dictionaries
can only be implemented on an image partitioned into small
blocks.
We consider nonoverlapping blocks. The approximation of
each block is carried out
independently of the others. When the approximation of
all the blocks is concluded, these are assembled
together to produce the approximation of the whole image.
While the sparsity results yielded by the OMP3D and
the SPMP3D methods are theoretically equivalent,
we have seen that the latter implementation
is much more economic in terms of storage demands.
As discussed in Remark 2 below, this feature makes the SPMP3D
algorithm suitable for
possible GPU implementations using only the
fast access shared memory.
Assuming for simplicity in the notation
that a 3D image is partitioned into cubes of size $N_b^3$
and the dictionaries $\mathcal{D}_x$, $\mathcal{D}_y$ and $\mathcal{D}_z$ are
all of the same size $N_b \times r N_b$, where $r>1$ is
the redundancy of the 1D dictionary, the
SPMP3D algorithm storage needs are as follows.
\begin{itemize}
\item[1.]
Two $N_b^3$ arrays for the intensity block
in the image partition and the
residual of the corresponding approximation.
\item[2.]
Three matrices of size $N_b \times r N_b$ for each dictionary, in
case they are different.
\item[3.] A $r^2 \times N_b^2$ array for the selection of
indices in Algorithm~\ref{IP3D}.
\item[4.] A vector of $k$ real numbers to store the
coefficients of the atomic decomposition and
$k$ vectors of size 3 to store the indices of the atoms in the atomic decomposition.
The value of $k$
is the total number of atoms in the approximation
of the block.
\end{itemize}
Since the stepwise complexity is dominated
by the selection of indices (c.f. \eqref{select}),
within this setup it is
O($r^3 N_b^5$) and for
true color images O($r^3 N_b^3$).
{\em{Remark 2:}}
By considering blocks of size $8 \times 8 \times 8$
and dictionaries of redundancy $r=5$ in each dimension,
the above listed storage needs of the SPMP3D algorithm
comfortably fit the fast access shared memory of
a GPU in CUDA, which currently is
48\,KB. Indeed, in the worst-case scenario (corresponding to
an approximation of zero error using $k=8^3$ atoms
for the approximation of an $8 \times 8 \times 8$ block)
SPMP3D would require 38\,KB to store most of
the arrays in double precision, except for those
with the selected indices which contain
integer numbers. This still leaves 10\,KB for temporary
variables to be used within calculations.
\subsection{Mixed Dictionaries}
\label{MD}
A key factor for the success in the
construction of sparse representations is
to have a good dictionary.
While a number of techniques for learning
dictionaries from training data have been proposed
in the literature
\cite{KMR03, AEB06, RZE10, TF11, ZGK11, HSK13, SNS15,
WEB15},
they are not designed
for learning large and highly coherent
separable dictionaries.
Nevertheless,
previous works \cite{RNBCP12, RNB13, LRN17,LRN18} have
demonstrated that
highly redundant and highly coherent
separable dictionaries, which are easy to
construct, achieve remarkable levels of sparsity
in the representation of 2D images.
Such dictionaries are not specific to a particular class of
images. A discrimination is only made to take into
account whether the approximation is
carried out in the pixel intensity or in the wavelet domain.
As will be illustrated by the numerical examples in the
next section, the approximations of the images we are
considering are sparser when realized in the wavelet domain
(wd).
This entails the following steps:
\begin{itemize}
\item
Apply a wavelet transform to each channel $\mathbf{I}_m, m=1,\ldots, N_z$ to obtain the arrays $\mathbf{U}_m, m=1,\ldots,N_z$.
For the numerical examples we have used the
9/7 Cohen-Daubechies-Feauveau biorthogonal wavelet
transform \cite{CDF92}.
\item
Approximate the array $\mathbf{U} \in \mathbb{R}^{\Nx \times \Ny \times \Nz}$ exactly as
it is done in the pixel domain (pd).
\item
Apply the inverse wavelet transform to the
approximated planes to recover the approximated intensity channels.
\end{itemize}
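For concreteness, the three steps above could be realized along the following lines. The sketch assumes the PyWavelets package, whose \texttt{bior4.4} filters are commonly identified with the CDF 9/7 wavelet used here (the results of this paper were obtained with the {\tt{waveletcdf97}} MATLAB function instead), channel dimensions divisible by $2^{L}$ with $L$ the number of decomposition levels, and an externally supplied routine \texttt{approximate\_3d} standing for any of the block-wise 3D strategies of the previous section.
\begin{verbatim}
import numpy as np
import pywt

def approximate_in_wd(I, approximate_3d, levels=3, wavelet='bior4.4'):
    Nz = I.shape[2]
    U = np.empty(I.shape, dtype=float)
    slices = []
    for m in range(Nz):                          # step 1: transform each channel
        coeffs = pywt.wavedec2(I[:, :, m], wavelet,
                               mode='periodization', level=levels)
        U[:, :, m], sl = pywt.coeffs_to_array(coeffs)
        slices.append(sl)
    U_approx = approximate_3d(U)                 # step 2: approximate U as in the pd
    I_approx = np.empty_like(U_approx)
    for m in range(Nz):                          # step 3: invert the transform
        coeffs = pywt.array_to_coeffs(U_approx[:, :, m], slices[m],
                                      output_format='wavedec2')
        rec = pywt.waverec2(coeffs, wavelet, mode='periodization')
        I_approx[:, :, m] = rec[:I.shape[0], :I.shape[1]]
    return I_approx
\end{verbatim}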
The mixed dictionary we consider for the 2D approximation
consists of two sub-dictionaries:
A trigonometric dictionary, $\mathcal{D}_T^x$, which is the common sub-dictionary for the approximation in both domains, and a dictionary of
localized atoms, which contains atoms of different
shapes when used in each domain.
The trigonometric dictionary is the union of the
dictionaries $\mathcal{D}_{\rm{C}}^x$ and $\mathcal{D}_{\rm{S}}^x$ defined below:
\begin{equation}
\begin{split}
\mathcal{D}_{\rm{C}}^x&=\!\{w_c(n)
\cos{\frac{{\pi(2i-1)(n-1)}}{2M_x}},i=1,\ldots,N_x\}_{n=1}^{M_x}\\
\mathcal{D}_{\rm{S}}^x&=\!\{w_s(n)\sin{\frac{{\pi(2i-1)(n)}}{2M_x}},i=1,\ldots,N_x\}_{n=1}^{M_x},\nonumber
\end{split}
\end{equation}
where $w_c(n)$ and $w_s(n),\, n=1,\ldots,M_x$
are normalization factors, and usually $M_x=2N_x$.
Thus, the trigonometric dictionary is
constructed as $\mathcal{D}_T^x= \mathcal{D}_{\rm{C}}^x \cup \mathcal{D}_{\rm{S}}^x$.
For approximations in the pd
we add the dictionary,
$\mathcal{D}_{\rm{Lp}}^x$, which is built by
translation of the prototype atoms in the left graph of
Fig.~\ref{proto}. This type of dictionary is
inspired by a general result holding for continuous spline
spaces. Namely, that spline spaces on a compact interval
can be spanned by dictionaries of B-splines
of broader support than the corresponding B-spline basis functions \cite{ARN05,RNX10}. Thus, the
first 4 prototype atoms $\mathbf{h}_i,\,i=1,\ldots,4$ in the left graph of
Fig.~\ref{proto} are generated by discretization of
linear B-spline functions
of different support. For
$m=1,2,3,4$ those functions are defined as follows:
\begin{equation}
\label{B2}
h_m(x)=
\begin{cases}
\frac{x}{m} & \mbox{if}\quad 0 \leq x < m\\
2-\frac{x}{m} & \mbox{if}\quad m\leq x <2m \\
0 & \mbox{otherwise.}
\end{cases}
\end{equation}
The remaining prototypes, $\mathbf{h}_5,\mathbf{h}_6$ and $\mathbf{h}_7$, in the left graph of Fig.~\ref{proto}
are generated by taking the derivatives of the
previous functions: $h_5(x)= (h_2(x))'$,
$h_6(x)=(h_3(x))'$ and $h_7=(h_4(x))'$.
The corresponding dictionaries $\mathcal{D}_{H_m}, \,
m=1,\ldots, 7$ are built
by discretization of the variable
$x$ in \eqref{B2} and
sequential translation of one sampling point, i.e.,
$$\mathcal{D}_{H_m}=\{w_{h_m}(n) h_m(i-n)|N_x; i=1,\ldots,N_x\}_{n=1}^{M},\quad
m=1,\ldots,7,$$
where the notation $h_m(i-n)|N_x$ indicates the restriction to be
an array of size $N_x$. The numbers $w_{h_m}(n), \,n=1,\ldots,M, \,m=1,\ldots,7$
are normalization factors. The dictionary $\mathcal{D}_{\rm{Lp}}^x$
arises by the union of the dictionaries
$\mathcal{D}_{H_m},\, m=1,\ldots,7$ i.e.,
$\mathcal{D}_{\rm{Lp}}^x= \cup_{m=1}^7 \mathcal{D}_{H_m}$. The whole mixed dictionary
$\mathcal{D}_{\rm{pd}}^x$ is finally formed as $\mathcal{D}_{\rm{pd}}^x= \mathcal{D}_{\rm{C}}^x
\cup \mathcal{D}_{\rm{S}}^x \cup \mathcal{D}_{\rm{Lp}}^x$. For the other
dimension we take $\mathcal{D}_{\rm{pd}}^y=\mathcal{D}_{\rm{pd}}^x$.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=8cm]{prot_pd-eps-converted-to.pdf}
\includegraphics[width=8cm]{prot_wd-eps-converted-to.pdf}
\end{center}
\caption{{\small{The left graph illustrates the
prototype atoms which generate
by translation the dictionaries $\mathcal{D}_{H_m}, \,
m=1,\ldots,7.$ The prototypes in the
right graph generate by translation the dictionaries
$\mathcal{D}_{P_m},\, m=1,\ldots,7.$}}}
\label{proto}
\end{figure}
For approximations in the wd we use the
dictionary of localized atoms $\mathcal{D}_{\rm{Lw}}^x$
as proposed in \cite{LRN17},
which is built by translation of
the prototype atoms $\mathbf{p}_i, \, i=1,\ldots,7$
in the right graph of
Fig.~\ref{proto}. Notice that
$\mathbf{p}_1=\mathbf{h}_1$ and $\mathbf{p}_3=\mathbf{h}_5$.
The remaining prototypes are given by the vectors:\\\\
$\mathbf{p}_2=(1,1,0,0,\ldots,0)^\top \in \mathbb{R}^{N_x},
\mathbf{p}_4=(1,1,1,0,\ldots,0)^\top \in \mathbb{R}^{N_x},
\mathbf{p}_5=(-1,1,1,0,\ldots,0)^\top \in \mathbb{R}^{N_x},\\
\mathbf{p}_6=(1,-1,1,0,\ldots,0)^\top \in \mathbb{R}^{N_x},
\mathbf{p}_7=(-1,-1,1,0,\ldots,0)^\top \in \mathbb{R}^{N_x}.$\\\\
The corresponding dictionaries $\mathcal{D}_{P_m},
\, m=1,\ldots,7$ are built as in the previous case
by sequential translation of one sampling point,
$$\mathcal{D}_{P_m}=\{w_{p_m}(n) p_m(i-n)|N_x; i=1,\ldots,N_x\}_{n=1}^{M},\quad m=1,\ldots,7,$$
where the numbers $w_{p_m}(n), \,n=1,\ldots,M, \,m=1,\ldots,7$
are normalization factors. The dictionaries $\mathcal{D}_{P_m},
m=1,\ldots,7$ give rise to $\mathcal{D}_{\rm{Lw}}^x= \cup_{i=1}^7 \mathcal{D}_{P_m}$.
The latter generates the mixed dictionary
$\mathcal{D}_{\rm{wd}}^x= \mathcal{D}_{\rm{C}}^x
\cup \mathcal{D}_{\rm{S}}^x \cup \mathcal{D}_{\rm{Lw}}^x$ and
$\mathcal{D}_{\rm{wd}}^y=\mathcal{D}_{\rm{wd}}^x$.
The corresponding 2D dictionaries
$\mathcal{D}_{\rm{pd}}= \mathcal{D}_{\rm{pd}}^x \otimes \mathcal{D}_{\rm{pd}}^y$ and
$\mathcal{D}_{\rm{wd}}= \mathcal{D}_{\rm{wd}}^x \otimes \mathcal{D}_{\rm{wd}}^y$
are very large, but never used as such. All the
calculations are carried out using the 1D dictionaries.
In order to demonstrate the
gain in sparsity attained by the
approximation of 3D images by partitioning into 3D blocks,
we use dictionaries $\mathcal{D}_{\rm{wd}}$ and $\mathcal{D}_{\rm{pd}}$ only for the approximation of the
single channel 2D images.
For the 3D case we maintain the redundancy of the 3D
dictionary equivalent to that of the 2D dictionary,
by considering the 1D dictionary
$\til{\mathcal{D}}_{\rm{pd}}^x=\mathcal{D}_{\rm{C}}^x
\cup \mathcal{D}_{\rm{S}}^x \cup \mathcal{D}_{P_1}.$
Notice that $\mathcal{D}_{P_1}$ is the standard Euclidean basis for $\mathbb{R}^{N_x}$,
also called the Dirac's basis,
i.e., the basis arising by translation of the first atom
in Fig.~\ref{proto}.
Notice that $\til{\mathcal{D}}_{\rm{pd}}^x \subset \mathcal{D}_{\rm{pd}}^x$ and $\til{\mathcal{D}}_{\rm{pd}}^x \subset
\mathcal{D}_{\rm{wd}}^x$. We also consider $\til{\mathcal{D}}_{\rm{pd}}^y=\til{\mathcal{D}}_{\rm{pd}}^x$ and
$\til{\mathcal{D}}_{\rm{pd}}^z=\til{\mathcal{D}}_{\rm{pd}}^x$, but taking $N_x=N_z$.
The redundancy of the resulting dictionary
$\til{\mathcal{D}}_{\rm{pd}}=\til{\mathcal{D}}_{\rm{pd}}^x \otimes \til{\mathcal{D}}_{\rm{pd}}^y \otimes \til{\mathcal{D}}_{\rm{pd}}^z$
is equivalent to the redundancy of the 2D dictionary
$\mathcal{D}_{\rm{pd}}$. In 3D we use the same dictionary in both
domains
$\til{\mathcal{D}}_{\rm{wd}}= \til{\mathcal{D}}_{\rm{pd}}$.
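As an illustration, the 1D dictionary $\til{\mathcal{D}}_{\rm{pd}}^x= \mathcal{D}_{\rm{C}}^x \cup \mathcal{D}_{\rm{S}}^x \cup \mathcal{D}_{P_1}$, with $M_x=2N_x$, can be assembled as in the short sketch below (Python/NumPy, purely illustrative; the factors $w_c$ and $w_s$ are realized by normalizing the columns). For $N_x=8$ it produces an $8 \times 40$ matrix, i.e. redundancy 5 per dimension and $5^3=125$ for the separable 3D dictionary.
\begin{verbatim}
import numpy as np

def dict_1d(Nx, Mx=None):
    """Columns of D_C^x and D_S^x (Mx atoms each) followed by the Dirac basis."""
    Mx = 2 * Nx if Mx is None else Mx
    i = np.arange(1, Nx + 1)[:, None]
    n = np.arange(1, Mx + 1)[None, :]
    Dc = np.cos(np.pi * (2 * i - 1) * (n - 1) / (2 * Mx))
    Ds = np.sin(np.pi * (2 * i - 1) * n / (2 * Mx))
    D = np.hstack([Dc, Ds, np.eye(Nx)])
    return D / np.linalg.norm(D, axis=0)   # w_c, w_s absorbed as column normalization

Dx = dict_1d(8)
print(Dx.shape)                            # (8, 40): redundancy 5 in each dimension
\end{verbatim}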
\section{Numerical Results}
The merit of the simultaneous approximation of multiple
channel images is illustrated in this section by recourse to
two numerical examples. Firstly we
make the comparison between the sparsity
produced by the joint approximation of the Red-Green-Blue
(RGB) channel images partitioned into blocks of size
$N_b \times N_b \times 3$ and
the sparsity obtained by the independent approximation of each channel partitioned into blocks of size $N_b \times N_b$.
Secondly, the full power of the approach is
illustrated through the gain in sparsity attained by
approximating hyper-spectral images partitioned into
3D blocks, vs the plane by plane approximation.
In both cases, once the approximation
of each 3D block $\mathbf{I}_q$ in the
image partition is completed, for $q=1,\ldots,Q$ the
$k_q$-term
atomic decomposition of the corresponding block is
expressed in the form
\begin{equation}
\label{atoq2r}
\mathbf{I}_q^{k_q}= \sum_{n=1}^{k_q}
c_q(n) \mathbf{d}^x_{\ell^{x,q}_n} \otimes \mathbf{d}^y_{\ell^{y,q}_n} \otimes \mathbf{d}^z_{\ell^{z,q}_n}.
\end{equation}
The sparsity of the representation of an image of
dimension $N=N_x \cdot N_y \cdot N_z$ is measured by the
Sparsity Ratio ($\mathrm{SR}$), which is defined as:
\begin{equation}
\label{SR}
\text{SR}=\frac{N}{K},
\end{equation}
where for the 3D representation $K=\sum_{q=1}^Q k_q,$
with $k_q$ the number of atoms in the
atomic decomposition \eqref{atoq2r}. For the
channel by channel decomposition of a $N_z$-channel image,
each channel is partitioned into
$P=(N_x \cdot N_y)/N_b^2$ blocks
$\mathbf{I}_{p,l},\,p=1,\ldots,P,\; l=1,\ldots,N_z$, which are approximated by
the 2D atomic decompositions
\begin{equation}
\label{atoq2c}
\mathbf{I}_{p,l}^{k_{p,l}}= \sum_{n=1}^{k_{p,l}}
c^{p,l}(n) \mathbf{d}^x_{\ell^{x,p,l}_n} \otimes \mathbf{d}^y_{\ell^{y,p,l}_n} ,\quad l=1,\ldots,N_z,
\end{equation}
where the indices $\ell^{x,p,l}_n, \ell^{y,p,l}_n$ are
selected for each channel $l$ by the OMP2D algorithm.
Accordingly, the number $K$ in \eqref{SR} is given as
$K=\sum_{l=1}^{N_z}\sum_{p=1}^{P}{k_{p,l}}$, with
$k_{p,l}$ the number of atoms in the atomic decomposition
\eqref{atoq2c}.
Notice that the $\mathrm{SR}$ is a measure of the
reduction of dimensionality for representing an image.
The larger the value of the SR the smaller the
dimensionality of the atomic decomposition representing
the whole image.
The required quality of the
approximation is ensured with respect to the
Mean Structural SIMilarity (MSSIM) index~\cite{ssim,ssimex}
and the classical Peak Signal-to-Noise Ratio ($\mathrm{PSNR}$),
which for a 3D image is defined as
\begin{equation}
\label{psnr}
\mathrm{PSNR}=10 \log_{10}\left(\frac{{(\mathrm{Imax})}^2}{\mathrm{MSE}}\right),
\quad
\mathrm{MSE}=\frac{\|\mathbf{I} - \mathbf{I}^K\|^2_{\mathrm{3D}}}{N_x \cdot N_y \cdot N_z},\nonumber
\end{equation}
where $\mathrm{Imax}$ is the maximum intensity range and
$\mathbf{I}^K$ the image approximation.
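For reference, the two figures of merit just defined can be computed as in the following short sketch (Python/NumPy, illustrative only; the MSSIM index of \cite{ssim} is not reproduced here).
\begin{verbatim}
import numpy as np

def sparsity_ratio(image_shape, atoms_per_block):
    # SR = N / K, with K the total number of atoms over all blocks, cf. eq. (SR)
    return np.prod(image_shape) / np.sum(atoms_per_block)

def psnr_3d(I, I_approx, i_max=255.0):
    # MSE is the mean of the squared intensity errors over the whole 3D array
    diff = np.asarray(I, float) - np.asarray(I_approx, float)
    mse = np.sum(diff ** 2) / np.size(I)
    return 10.0 * np.log10(i_max ** 2 / mse)
\end{verbatim}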
\subsection{Example I}
In this example we use the Kodak data set consisting
of 24 true color images shown in Fig.~\ref{Kodak}.
\begin{figure}[H]
\begin{center}
\includegraphics[width=14cm]{KodakCollage-eps-converted-to.pdf}
\end{center}
\caption{{\small{Illustration of the
Kodak data set consisting of 24 true color images,
credit Rich Franzen \cite{kodak}.
Most of the images are of size $768 \times 512 \times 3$, except for numbers
4, 9, 10, 17, 18 and 19, which are of size
$512 \times 768 \times 3$.}}}
\label{Kodak}
\end{figure}
The approximations are realized in both domains
by maintaining the same redundancy in the
2D and 3D dictionaries.
For the independent approximation of the 2D channels
the partitions are realized with blocks of size
$8 \times 8$ and
$16 \times 16$ (a partition of block size $24 \times 24$
does not improve results for this data set).
Accordingly, the simultaneous
approximation of the 3 color channels involves partitions of
block size $8 \times 8 \times 3$ and
$16 \times 16 \times 3$ respectively.
As already discussed, for the independent approximation
of the 2D channels
we consider the dictionaries
$\mathcal{D}_{\rm{pd}}$ (in the pd) and $\mathcal{D}_{\rm{wd}}$ (in the wd)
as given in Sec.~\ref{MD}. For the simultaneous
approximation of the 3 channels we
consider the dictionaries $\til{\mathcal{D}}_{\rm{pd}}$
given in the same section. Both dictionaries have
redundancy of 125.
The average values of SR ($\overline{\mathrm{SR}}$),
with respect to the 24 images in the set,
are given in Table~\ref{TABLE1} for the
approaches and partitions indicated by the first column.
\begin{table}[H]
\begin{center}
\begin{tabular}{|l||r|r|| r| r||}
\hline
$\mathrm{PSNR}$ & \multicolumn{2}{|c||} {45 dB} & \multicolumn{2}{|c||}{41 dB}\\ \hline \hline
& $\overline{\mathrm{SR}}$ & std& $\overline{\mathrm{SR}}$& std\\ \hline \hline
pd 2D $8\times 8$& 6.2 & 2.0& 9.1 & 3.5 \\ \hline
pd 3D $8\times 8 \times 3$& 10.3& 2.9& 16.1& 5.5 \\ \hline
wd 2D $8\times 8$& 7.1& 2.6& 11.8& 5.8 \\ \hline
wd 3D $8\times 8 \times 3$& 11.6& 3.8& 20.9& 9.2 \\ \hline
\hline
pd 2D $16\times 16$& 7.1 & 2.5 &11.1 & 5.0 \\ \hline
pd 3D $16\times 16\times 3$& 11.6& 3.6 & 18.8& 7.5 \\ \hline
wd 2D $16\times 16$& 7.5 & 2.7 & 12.0& 6.2 \\ \hline
wd 3D $16\times 16\times 3$& 12.4& 3.9 & 20.4 & 8.9 \\ \hline
Thresholding in the wd &3.2& 1.1 & 4.9 & 2.6 \\ \hline \hline
\end{tabular}
\caption{{\small{Mean value of the SR, with respect to the 24 images in the set, obtained with the 2D and 3D approximations in
both the $\mathrm{pd}$ and $\mathrm{wd}$
for two different sizes of the image partition.
The last row in the table gives the results
corresponding to standard nonlinear thresholding of
wavelet coefficients, to achieve the same quality of the
approximation as with
the dictionaries: $\mathrm{PSNR}=45{\mathrm{dB}}$ (left half)
and $\mathrm{PSNR}=41{\mathrm{dB}}$ (right half).}}}
\label{TABLE1}
\end{center}
\end{table}
All the results in the left half of the table
correspond to $\mathrm{PSNR} =45$ dB and all the results in the
right half correspond to $\mathrm{PSNR} =41$ dB.
The third and fifth columns give the
standard deviations (std).
For completeness we have also produced the
$\overline{\mathrm{SR}}$ rendered by nonlinear thresholding of the wavelets
coefficients (last row in the table).
Notice that the
resulting sparsity is
poor in comparison with the other 2D results.
All the results were obtained in the MATLAB environment
on a notebook 2.9GHz dual core i7 3520M CPU and 4GB
of memory.
For the channel by channel approximation
a C++ MEX file implementing OMP2D was used.
For the 3D approximation SPMP3D was implemented by a
C++ MEX file.
As observed in Table~\ref{TABLE1} the largest
$\overline{\mathrm{SR}}$ is achieved
in the wd and partition $16 \times 16 \times 3$
(c.f. last but one row of Table~\ref{TABLE1}). However,
the results obtained by
partition $8 \times 8 \times 3$ are very close
(c.f. last row of the upper half of
Table~\ref{TABLE1}) and
constitute a better tradeoff between SR and approximation
time.
Fig.~\ref{SRswd} shows the actual values of
SRs for this partition in the wd for each of the 24
images in the data set (c.f. Fig.~\ref{Kodak}).
The average time for the
3D approximation was 53 s per image.
\begin{figure}[htp!]
\begin{center}
\includegraphics[width=12cm]{SRwd845n-eps-converted-to.pdf}
\end{center}
\caption{{\small{SR for the 45dB
approximation, in the wd, of each of the 24 images
in the Kodak data set (c.f. Fig.~\ref{Kodak} enumerated from top left to bottom right).
The results for the independent approximation of
each 2D color channel are represented by
the filled circles
and those corresponding to the
simultaneous
approximation of the 3 channels are represented by the
filled squares. The corresponding
partitions are of size $8\times8$ and $8 \times 8 \times 3$.}}}
\label{SRswd}
\end{figure}
\begin{figure}[htp!]
\begin{center}
{\bf{3D\,\, SR=63.5\,\, PSNR=38.4\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;2D\,\,SR=63.5\,\, PSNR=24.8}}
\includegraphics[width=8cm]{kod03n_3D-eps-converted-to.pdf}
\includegraphics[width=8cm]{kod03n_2D-eps-converted-to.pdf}\\
{\bf{3D\,\,SR=63.5; PSNR=35.9\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;2D\,\,SR=63.5; PSNR=22.1}}
\includegraphics[width=8cm]{kod07n2_3D-eps-converted-to.pdf}
\includegraphics[width=8cm]{kod07n2_2D-eps-converted-to.pdf}\\
{\bf{3D\,\,SR=63.5; PSNR=37.6\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;2D\,\,SR=63.5; PSNR=24.9}}
\includegraphics[width=8cm]{kod12n_3D-eps-converted-to.pdf}
\includegraphics[width=8cm]{kod12n_2D-eps-converted-to.pdf}\\
\end{center}
\caption{{\small{Approximations of Images 3, 7 and 12 in the
Kodak data set, for SR=63.5. The images on the left
are the 3D approximations. The images on the right
are the 2D channel by channel approximations.}}}
\label{appro}
\end{figure}
Fig.~\ref{appro} demonstrates the gain in visual
quality obtained when the approximation of Images
3, 7 and 12 are
realized simultaneously in 3D, instead of
independently for each 2D channel. In both cases the SR is
fixed at a high value SR=63.5. While the
3D approximation is still of good quality
(c.f. images on the left in Fig.~\ref{appro}) the
distortion of the channel by channel approximation
is very noticeable even at the scale of the figure
(c.f. images on the right in Fig.~\ref{appro}).
As a final remark
it is worth noting that the number $k_q$ of atoms in the
approximation of each block $q$ of an image partition
produces a meaningful summary of local sparsity.
\begin{figure}[H]
\begin{center}
\includegraphics[width=7cm]{dig2_22-eps-converted-to.pdf}
\includegraphics[width=7cm]{dig3_22-eps-converted-to.pdf}
\includegraphics[width=7cm]{kod_22-eps-converted-to.pdf}
\end{center}
\caption{{\small{The upper graphs are a
representation of the piecewise sparsity corresponding
to Image 22 in the Kodak data set.
Both graphs are arrays of $64 \times 96$ points.
Each point corresponds to the number $k_q$ of
atoms in the approximation
of a block $q$.
The left graph corresponds
to the 2D approximation and the right graph to the
3D approximation. The lower graph is the
image given as 3 channels of $512 \times 768$
pixels each.}}}
\label{summary}
\end{figure}
The upper graphs of Fig.~\ref{summary}
are a representation of the piecewise sparsity
corresponding to Image 22 in the Kodak data set.
Both graphs are arrays of $64 \times 96$ points.
Each point corresponds to the number $k_q$ of
atoms in the approximation of a block $q$.
The left graph corresponds
to block size $8 \times 8$ in the 2D approximation,
by taking the average
$k_q$ over the three channels in the block,
which is roughly the
$k_q$-value corresponding to the equivalent block in the
gray scale image.
The right graph corresponds to $k_q$ for each block
of size $8 \times 8 \times 3$ in the 3D approximation. Both approximations
are realized in the pd.
The lower graph is the
image given as 3 channels of $512 \times 768$
pixels each. It follows from the figure that the
points corresponding to the 3D approximation
give more details about the image.
\subsection{Example II}
We consider now the approximation of the
hyper-spectral images illustrated in Fig.~\ref{hsi}.
Details on the
images acquisition and processing
are described in \cite{FNA04, FAN06, NAF16}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.5]{Ribeira2-eps-converted-to.pdf} \hspace{-0.2cm}
\includegraphics[scale=0.5]{Grafitti2-eps-converted-to.pdf}\hspace{-0.2cm}\\
\includegraphics[scale=0.5]{Rose2-eps-converted-to.pdf} \hspace{-0.2cm}
\includegraphics[scale=0.5]{Collums2-eps-converted-to.pdf}
\caption{{\small{Illustration of the hyper-spectral images
available on \cite{HSI04} and \cite{HSI}. From top left to bottom right
the images are labelled in Table~\ref{TABLE3} as Ribei., Graff., Rose, and Col. The size of all four images is
$1016 \times 1336\times 32$ pixels.}}}
\label{hsi}
\end{figure}
All four images are of size
$1016 \times 1336\times 32$,
and have been approximated
in partitions of block size
$N_b \times N_b$, with
$N_b=8,16$, and 24
for the 2D approximation, and
$8 \times 8 \times 8$ for the 3D approximation.
For the 2D channel by channel approximation we use the
dictionaries $\mathcal{D}_{\rm{pd}}$ and $\mathcal{D}_{\rm{wd}}$ as defined
in Sec.~\ref{MD}.
For the 3D approximation we maintain the
same redundancy as in 2D by using the dictionary
$\til{\mathcal{D}}_{\rm{pd}}$ introduced in
Sec.~\ref{MD} and $\til{\mathcal{D}}_{\rm{wd}}=\til{\mathcal{D}}_{\rm{pd}}$.
Because the range of intensity varies across the images,
in order to compare SRs with different approaches we fix the
Signal to Noise Ratio ($\mathrm{SNR}$)
\begin{equation}
\label{snr}
\mathrm{SNR}=10\log_{10}\left(\frac{\| \mathbf{I}\|^2_{\mathrm{3D}}}{\|\mathbf{I} - \mathbf{I}^K\|^2_{\mathrm{3D}}}\right).
\end{equation}
\begin{table}[H]
\begin{center}
\begin{tabular}{|l||c|c|c| c||}
\hline
Image & Ribei. & Graff.& Rose & Col. \\ \hline \hline
\multicolumn{5}{c}{SNR= 31 dB} \\ \hline
$\mathrm{PSNR}$& 46.8 & 48.2 & 47.8 & 46.7\\ \hline
$\mathrm{SR}_{\mathrm{2D}}\,N_b=8$& 19.2 & 19.2 & 24.1 & 47.7 \\ \hline
Time (min) &1.6&1.6 &1.3 &0.9 \\ \hline
$\mathrm{SR}_{\mathrm{2D}}\,N_b=16$&27.3&25.5& 38.7&110.6\\ \hline
Time (min) &3.4 &3.8 &2.1 &1.1 \\ \hline
$\mathrm{SR}_{\mathrm{2D}}\,N_b=24$&29.6&26.8& 44.2& 147.5\\ \hline
Time (min) &7.6 &9.2 &4.5 &1.5\\ \hline
$\mathrm{SR}_{\mathrm{3D}}\, N_b=8$&49.1&59.7&74.6&137.2\\ \hline
Time (min) & 18 &15 &10 &6 \\ \hline \hline
\multicolumn{5}{c}{SNR= 33 dB} \\ \hline
$\mathrm{PSNR}$& 48.8&50.2&49.8& 48.7\\ \hline
$\mathrm{SR}_{\mathrm{2D}}\,N_b=8$&15.2&15.4&19.3 &41.5\\ \hline
Time (min) &2.3 &2.1 &1.7 &1.1 \\ \hline
$\mathrm{SR}_{\mathrm{2D}}\,N_b=16$&20.4&19.5&29.1&86.4\\ \hline
Time (min) &5.4 &5.6 &2.9 &1.2 \\ \hline
$\mathrm{SR}_{\mathrm{2D}}\,N_b=24$& 21.9& 20.5& 32.7 & 106.3\\ \hline
Time (min) & 12 & 14 & 6.8 &1.9\\ \hline
$\mathrm{SR}_{\mathrm{3D}}\,N_b=8$& 33.5 &41.6& 53.2& 106.5\\ \hline
Time (min) &25&21 &16 &8 \\ \hline \hline
\end{tabular}
\caption{{\small{Values of SR for the approximation in the
pixel-intensity domain of the images listed in the
first row. $\mathrm{SR}_{\mathrm{2D}}$ indicates the SR
for the plane by plane approximation in partition
of block side $N_b=8,16$, and 24. $\mathrm{SR}_{\mathrm{3D}}$ corresponds to a
partition in ${\mathrm{3D}}$ blocks of size $8\times 8 \times 8$.
The times for completing the approximations are given
immediately below the sparsity results in minutes.}}}
\label{TABLE3}
\end{center}
\end{table}
Every block in the partition is approximated up to the
same error. With all the approaches,
two global values of $\mathrm{SNR}$ (31 dB and 33 dB)
were considered.
These values of $\mathrm{SNR}$ correspond to
the values of $\mathrm{PSNR}$ shown in
Tables~\ref{TABLE3} and \ref{TABLE4}.
In all of the cases the approximations are of excellent
visual quality.
The SRs produced by the 3D approximation
are indicated
by $\mathrm{SR}_{\mathrm{3D}}$ and those produced by the 2D plane
by plane approximation by $\mathrm{SR}_{\mathrm{2D}}$.
The times
for completing the approximations are given
in the row right after the corresponding sparsity result.
\begin{table}[!h]
\begin{center}
\begin{tabular}{|l||c|c|c|c||}
\hline
Image & Ribei. & Graff.& Rose & Col.\\ \hline \hline
\multicolumn{5}{c}{SNR= 31 dB} \\ \hline
$\mathrm{PSNR}$&46.8& 48.2 & 47.8 & 46.7 \\ \hline
$\mathrm{SR}_{\mathrm{2D}}\,N_b=8$ & 28.6& 26.8 & 38.6 & 56.5 \\ \hline
Time (min) & 1.4& 1.5 & 1.2 & 0.8 \\ \hline
$\mathrm{SR}_{\mathrm{2D}}\,N_b=16$& 36.5& 34.1 & 63.4 & 144.8 \\ \hline
Time (min) & 2.7& 3.5 & 2.3 & 0.9\\ \hline
$\mathrm{SR}_{\mathrm{2D}}\,N_b=24$& 37.2 & 35.7& 71.1 & 193 \\ \hline
Time (min) & 9.2 & 12 & 4.8 & 1.8 \\ \hline
$\mathrm{SR}_{\mathrm{3D}}\,N_b=8$ &86.5&108.0& 182.2& 371.7\\ \hline
Time (min) & 13 & 10 & 6 & 3 \\ \hline \hline
\multicolumn{5}{c} {SNR= 33 dB} \\ \hline
$\mathrm{PSNR}$& 48.8& 50.2 & 49.9 & 48.7 \\ \hline
$\mathrm{SR}_{\mathrm{2D}}\,N_b=8$& 22.6 & 21.8 & 33.0 & 56.1 \\ \hline
Time (min) &1.7 &1.8& 1.5 & 1.0 \\ \hline
$\mathrm{SR}_{\mathrm{2D}}\,N_b=16$& 26.6& 25.8 & 48.2 & 118.3 \\ \hline
Time (min) &3.5&5.0& 2.3&1.1 \\ \hline
$\mathrm{SR}_{\mathrm{2D}}\,N_b=24$&21.9&26.8&52.0&144.0\\ \hline
Time (min) &12&15&8.5&1.9 \\ \hline
$\mathrm{SR}_{\mathrm{3D}}\,N_b=8$& 55.1&70.5&129.5& 313.3\\ \hline
Time (min) &23&18&10&1.8\\ \hline \hline
\end{tabular}
\caption{{\small{Same description as in Table ~\ref{TABLE3},
but the approximations are realized by applying first
a wavelet transform to each of the 32 channels.}}}
\label{TABLE4}
\end{center}
\end{table}
{\em{Remark 3:}}
In both Table~\ref{TABLE3} and
Table~\ref{TABLE4} the values of $\mathrm{SR}_{\mathrm{3D}}$
are significantly larger than the values of $\mathrm{SR}_{\mathrm{2D}}$,
except for the Col. image
and $24 \times 24$ blocks. For this image we were
able to increase the 3D block size up to $16 \times 16 \times 16$ and the results for $\mathrm{SNR}=31$dB
are $\mathrm{SR}_{\mathrm{3D}}=357$ in the pd and $\mathrm{SR}_{\mathrm{3D}}= 892$ in the wd
(35 min and 10 min respectively).
For $\mathrm{SNR}=33$ dB $\mathrm{SR}_{\mathrm{3D}}=247$ in the pd and $\mathrm{SR}_{\mathrm{3D}}=590$
in the wd (55 min and 20 min respectively).
On comparing the two tables a drastic improvement
in the values of $\mathrm{SR}_{\mathrm{3D}}$ is observed when the approximation
is realized in the wavelet domain. This feature is
a consequence of the fact that the planes of
the natural images are very sparse in the wavelet domain.
In order to highlight differences we
produce next the $\mathrm{SR}_{\mathrm{3D}}$ corresponding
to the two remote sensing images in Fig.~\ref{RS}.
The graph on the left represents the
Urban remote sensing hyper-spectral
image taken from \cite{HSRS}. The graph on the right
is a portion of the University of Pavia
image also taken from \cite{HSRS}.
\begin{figure}[ht]
\centering
\includegraphics[scale=0.666]{urban-eps-converted-to.pdf}\hspace{0.4cm}
\includegraphics[scale=0.774]{Parvia-eps-converted-to.pdf}
\caption{{\small{Illustration of two remote sensing
hyper-spectral images taken from \cite{HSRS}.
The graph on the left is the Urban
image (size $320 \times 320 \times 128$ pixels).
The graph on the right is a portion of the
University of Pavia image
($256 \times 256 \times 96$ pixels).
}}}
\label{RS}
\end{figure}
Fig.~\ref{SRRS} plots the SR
vs four values of $\mathrm{SNR}$,
corresponding to the 3D approximations of
the Urban and University of Pavia images
in both the pd and wd.
\begin{figure}[ht!]
\centering
\includegraphics[width=9cm]{SRRS-eps-converted-to.pdf}
\caption{{\small{SR vs SNR values for the
3D approximation in
both the pd and wd for the Urban and University of Pavia
remote sensing images.}}}
\label{SRRS}
\end{figure}
Notice that the results in the pd are much closer to the results in the wd
than they are in the case of the natural images
in Fig.~\ref{hsi}. This is because, as
illustrated in Fig.~\ref{WT}, the planes of
the remote sensing images are not as
sparse in the wd as the planes of the natural images are.
\begin{figure}[ht]
\centering
\includegraphics[width=8cm]{WTCol2-eps-converted-to.pdf}
\includegraphics[width=8cm]{WTPav-eps-converted-to.pdf}
\caption{\small{Absolute value of the wavelet transform
of a plane in the Col. image (left graph) and
in the University of Pavia image (right graph).}}
\label{WT}
\end{figure}
\newpage
\section{Conclusions}
High-quality approximation of 3D images has been considered within the context of data reduction. A remarkable
improvement in sparsity achieved by the simultaneous
approximation of multiple channels has been illustrated
through numerical experiments of different kinds.
Firstly, it was demonstrated that
a standard data set of RGB images can be approximated at
high quality using far fewer elementary components
if each image is treated as a very thin 3D array
instead of as three independent 2D arrays.
Secondly, the
full power of the approach was demonstrated through the
approximation of hyper-spectral images. For the
hyper-spectral natural images the sparsity is remarkably
higher if the approximation is realized in the
wavelet domain.
For the remote sensing images
the domain of approximation has less influence, because
these images are not
as sparse in the wavelet domain as the natural images are.
Taking into account the major reduction of dimensionality
demonstrated by the numerical examples in this work,
we feel confident that the proposed approach will be
of assistance to the broad
range of image processing applications which rely
on a transformation for data reduction as a first step
towards further processing.
\subsection*{Acknowledgments}
We are indebted to P. Getreuer
for making available the {\tt{waveletcdf97}}
MATLAB function, which we have used for the
transformation of each single-channel image to the wavelet
domain.
\section{Introduction}
A powerful method to construct solutions of $D=10$ or $D=11$
supergravity is to uplift solutions of simpler theories in
lower-dimensions. For this to work it is necessary that there is an
appropriate {\it consistent} Kaluza-Klein (KK) reduction on some
internal manifold $M$ from $D=10$ or $D=11$ down to the
lower-dimensional theory. In general, a KK expansion on $M$ leads to
a lower dimensional theory involving an infinite tower of fields.
Splitting these fields into a finite number of ``light'' fields and
an infinite tower of ``heavy'' fields\footnote{In general there is
not a sharp separation of energy scales, and hence the quotation
marks.}, the KK reduction is called consistent if it is in fact
consistent to set all of the heavy fields to zero in the equations
of motion, leaving equations of motion for the light fields only.
Clearly this is only possible if the on-shell light fields do not
source the heavy fields.
KK reductions on a circle or more generally on an $n$-dimensional torus are always consistent.
The heavy fields, which arise from modes with non-trivial dependence on
the coordinates of the torus, are all charged under the $U(1)^n$ gauge symmetry,
while the light fields, which in this case are actually massless fields, are independent of
these coordinates and hence uncharged under the gauge symmetry.
As a consequence, the heavy fields can never be sourced by the light fields alone and so the
truncation to the light fields is consistent.
Since this argument also extends to fermions, one concludes that a KK reduction of a higher-dimensional
supergravity theory on a torus can always
be consistently truncated to a lower-dimensional supergravity theory. Moreover, solutions of the lower
dimensional supergravity theory that preserve supersymmetry will uplift to supersymmetric solutions of
the higher dimensional supergravity theory.
More generally, however, consistent KK reductions are very much the
exception rather than the rule. For example, it is only in very
special circumstances that there is a consistent KK reduction on a
sphere (for further discussion see \cite{Cvetic:2000dm}). An
interesting class of examples are those associated with the
maximally supersymmetric solutions of $D=10$ and $D=11$ supergravity
that consist of products of $AdS$ spaces and spheres. Corresponding
to the $AdS_4\times S^7$ and $AdS_7\times S^4$ solutions of $D=11$
supergravity, there are consistent KK reductions on $S^7$
\cite{deWit:1986iy} and $S^4$ \cite{Nastase:1999cb,Nastase:1999kf}
to $D=4$ $SO(8)$ gauged supergravity and $D=7$ $SO(5)$ gauged
supergravity, respectively. Similarly, starting with the
$AdS_5\times S^5$ solution of type IIB supergravity there is
expected to be a consistent KK reduction to $SO(6)$ gauged
supergravity: various additional truncations were shown to be
consistent in \cite{Cvetic:1999xp,Lu:1999bw,Cvetic:2000nc} and an
ansatz for the full metric was constructed in \cite{Khavaev:1998fb}.
We would like to view these examples as special cases of the following conjecture:
\begin{quote}
\textsl{For any supersymmetric solution of $D=10$ or $D=11$ supergravity
that consists of a warped product of $d+1$ dimensional
anti-de-Sitter space with a Riemannian manifold $M$,
$AdS_{d+1}\times_w M$, there is a consistent Kaluza-Klein truncation
on $M$ to a gauged supergravity theory in $d+1$-dimensions for which
the fields are dual to those in the superconformal current multiplet
of the $d$-dimensional dual SCFT.}
\end{quote}
Equivalently, one can characterise the fields of the gauged
supergravity as those that contain the $d+1$-dimensional graviton
and fill out an irreducible representation of the superisometry
algebra of the $D=10$ or $D=11$ supergravity solution.
This conjecture is essentially a restricted version of one that appeared
long ago in \cite{Duff:1985jd}, for which general arguments supporting it
were put forward in \cite{sp}.
For example the $AdS_5\times S^5$ solution of type IIB, which has
superisometry algebra $SU(2,2|4)$, is dual to $N=4$ superYang-Mills
theory in $d=4$. The superconformal current multiplet of the latter
theory includes the energy momentum tensor, $SO(6)$ R-symmetry
currents, along with scalars and fermions. These are dual to the
metric, $SO(6)$ gauge fields along with scalar and fermion fields,
and are precisely the fields of the maximally supersymmetric $SO(6)$
gauged supergravity in five-dimensions.
As we have phrased the conjecture above, it is natural to try and
prove the conjecture directly from the SCFT point of view. For the
case of $AdS_3$ solutions, an argument has been made by
\cite{David:2007ak,sen}, but this needs to be modified for higher
dimension $AdS$ solutions. While we think that this is an
interesting avenue to pursue, in this paper we will verify the
conjecture for a number of cases by constructing an explicit
consistent KK reduction ansatz. By this we mean an explicit ansatz
for the higher-dimensional fields that is built from the fields of
the lower-dimensional theory with the property that it solves the
equations of motion of the higher-dimensional theory provided that
the equations of the lower-dimensional theory are satisfied. This
approach has the advantage that it allows one to uplift an explicit
solution of the lower-dimensional gauged supergravity to obtain an
explicit solution\footnote{Note that since the uplifting formulae
are local, in general, even if the lower-dimensional solution is
free from singularities one still needs to check that the higher
dimensional solution is also.} of $D=10$ or $D=11$ supergravity.
Often, for simplicity, such explicit KK reduction ans\"atze are
constructed for the bosonic fields only. This is thought to provide
very strong evidence that the ansatz can be extended to the
fermionic fields also. In fact an argument was constructed in
\cite{Cvetic:2000dm}, based on \cite{sp}, which shows that if a
consistent KK reduction has been constructed for the bosonic fields,
then the supersymmetry of the higher dimensional theory will
guarantee that the reduction can be consistently extended to the
fermionic sector. In any event, a bosonic KK ansatz certainly allows
one to uplift bosonic solutions which is the most interesting class
of solutions. One can go further and construct an ansatz for the
fermion fields and demand that the supersymmetry variation of a
bosonic configuration in higher-dimensions leads to the correct
supersymmetry variation of the bosonic configuration in
lower-dimensions. This explicitly demonstrates that a supersymmetric
bosonic solution of the lower-dimensional theory will uplift to a
supersymmetric solution of $D=10$ or $D=11$ supergravity which will
preserve at least the same amount of supersymmetry as in the
lower-dimensional theory.
In this paper we will verify the conjecture for a general class of
$AdS_5$ solutions which are dual to $N=1$ SCFTs in $d=4$ dimensions.
For this case, the bosonic fields in the superconformal current
multiplet are the energy momentum tensor and the abelian R-symmetry
current. Thus we seek a consistent truncation to minimal $D=5$
gauged supergravity whose bosonic fields are the metric (dual to the
energy momentum tensor of the SCFT) and an abelian gauge field (dual
to the R-symmetry current). For the special class of solutions of
type IIB of the form $AdS_5\times SE_5$, where $SE_5$ is a
five-dimensional Sasaki-Einstein manifold, and only the self-dual
five-form is non-vanishing, a consistent KK reduction was
constructed in \cite{Buchel:2006gb} (see also \cite{Tsikas:1986rx}).
Here we will extend this result by showing that for the most general
$AdS_{5}\times_w M_5$ supersymmetric solution of type IIB
supergravity with all of the fluxes active, that were analysed in
\cite{Gauntlett:2005ww}, the KK reduction is also consistent. We
will construct a KK ansatz for the bosonic fields and we will also
verify the consistency of the supersymmetry variations. The
analogous result for the most general supersymmetric solutions of
$D=11$ supergravity of the form $AdS_5\times_w M_6$ with
non-vanishing four-form flux \cite{GMSW}, was shown in
\cite{Gauntlett:2006ai}. Given that any $AdS_5$ solution of type IIA
supergravity can be considered to be a special case of one in
$D=11$, if we assume that there are no $AdS_5$ solutions in type
I supergravity, the results here combined with
\cite{Buchel:2006gb,Gauntlett:2006ai} cover all $AdS_5$ solutions
in $D=10$ and $D=11$ dimensions.
We will also prove similar results for two classes of $AdS_4$
solutions of $D=11$ supergravity, both of which are dual to $N=2$
SCFTs in $d=3$. The first, and the simplest, is the Freund-Rubin
class of solutions which take the form $AdS_4\times SE_7$ where
$SE_7$ is a seven-dimensional Sasaki-Einstein manifold and the
four-form flux is proportional to the volume form of the $AdS_4$
factor. A discussion of this case appears in \cite{Duff:1984hn}.
Furthermore, our analysis is a simple extension of the analysis in
\cite{Pope:1985jg} which considered the seven-sphere viewed as a
$U(1)$ fibration over $CP^3$. The second is the class of
$AdS_4\times_w N_7$ solutions, corresponding to M5-branes wrapping
SLAG 3-cycles, that were classified in \cite{GMMW1}. It is very
plausible that this class of solutions are the most general class of
solutions with this amount of supersymmetry and with purely magnetic
four-form flux. In both cases we show that there is a consistent KK
reduction on the $SE_7$ or the $N_7$ to minimal gauged supergravity
in four spacetime dimensions. The bosonic fields of the latter
theory again consist of a metric and a $U(1)$ gauge field which are
dual to the bosonic fields in the superconformal current multiplet.
For these examples, we will be content to present the KK ansatz for
the bosonic fields only.
The general classes of supersymmetric solutions that we consider have been
analysed using $G$-structure techniques \cite{Gauntlett:2002sc,Gauntlett:2002fz}. In particular, the $G$-structure
can be characterised in terms of bi-linears constructed from the Killing
spinors. Since the results we obtain only assume supersymmetry and
$AdS$ factors one might expect that the explicit KK reduction ansatz
involves these bi-linears, and this is indeed the case.
In fact it might be illuminating to recast the known consistent KK truncations on spheres
in terms of this language, but we shall not investigate that here.
The plan of the rest of the paper is as follows. We begin in
sections 2 and 3 by considering the $AdS_4$ solutions of $D=11$
supergravity. In section 4 we consider the general class of $AdS_5$
solutions of type IIB supergravity. Only for the latter class we
will present details of our calculations and these can be found in
the appendices. In section 5 we briefly conclude.
\section{Reduction of $D=11$ supergravity on $SE_7$}
Our starting point in this section is the class of supersymmetric
solutions of $D=11$ supergravity of the form $AdS_4\times SE_7$
where $SE_7$ is a Sasaki-Einstein 7-manifold: \begin{eqnarray}
ds^2_{11}&=&\ft{1}{4}ds^2(AdS_4)+ds^2(SE_7)\nonumber \\ G&=&\ft{3}{8}
\textrm{vol}(AdS_4) . \end{eqnarray} Here $\textrm{vol}(AdS_4)$ is the volume
4-form of the unit radius $AdS_4$ metric $ds^2(AdS_4)$ and we have
normalised the Sasaki-Einstein metric $ds^2(SE_7)$ so that
$Ric(SE_7)=6g(SE_7)$ (the same as for the unit radius metric on the
round seven-sphere). The Sasaki-Einstein metric has a Killing vector
which is dual to the R-symmetry of the dual $N=2$ SCFT in $d=3$.
Introducing coordinates so that this Killing vector is
$\partial_\psi$, locally, the Sasaki-Einstein metric can be written
\begin{equation} ds^2(SE_7)=(d\psi+\sigma)^2+ds^2(M_6) \end{equation} where $ds^2(M_6)$ is
locally K\"ahler-Einstein with K\"ahler form $J$, normalised so that
$Ric(M_6)=8 g(M_6)$ and $d\sigma=2J$.
We now construct an ansatz which leads to a consistent truncation,
at the level of bosonic fields, to gauged supergravity in $D=4$.
Specifically, we consider \begin{eqnarray}
ds^2_{11}&=&\ft{1}{4}ds^2_4+(d\psi+\sigma+
\ft{1}{4}A)^2+ds^2(M_6)\nonumber \\ G&=&\ft{3}{8} \textrm{vol}_4- \ft{1}{4}
*_4 F_2 \wedge J \end{eqnarray} where $ds^2_4$ is an arbitrary metric on
a four-dimensional spacetime, $\textrm{vol}_4$ is its associated
volume form, and $A$ and $F_2=dA$ are one- and two-forms on this
spacetime with a normalisation chosen for convenience. Substituting
this into the $D=11$ equations of motion \cite{Cremmer:1978km} (we
use the conventions of \cite{Gauntlett:2002fz}), \begin{eqnarray}\label{d=11eq}
R_{AB}-\frac{1}{12}(G_{A C_1C_2C_3}G{_{B}}{^{C_1C_2C_3}}-
\frac{1}{12}g_{AB}G^2)&=&0\nonumber \\ d*_{11}G+\frac{1}{2}G\wedge G&=&0\nonumber \\
dG&=&0 \end{eqnarray} we find that the metric $g_{\mu\nu}$, corresponding to
$ds^2_4$, and $F_2$ must satisfy \begin{eqnarray} \label{eomD=4}
R_{\mu\nu}=-3g_{\mu\nu}+\ft12F_{\mu\rho}F_{\nu}{}^\rho
-\ft{1}{8}g_{\mu\nu}F_{\rho\sigma}F^{\rho\sigma} \nonumber \\ d*_4F_2=0 .
\end{eqnarray} These are precisely the equations of motion of minimal gauged
supergravity in $D=4$ \cite{Fradkin:1976xz,Freedman:1976aw}.
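As a simple illustration of how these equations arise (this is a sketch of only the easiest of the required checks, not the full verification), consider the Bianchi identity $dG=0$. Since $\textrm{vol}_4$ is closed and $dJ=\ft12 d(d\sigma)=0$, the ansatz gives
\begin{equation}
dG=-\ft{1}{4}\, d(*_4F_2)\wedge J ,
\end{equation}
so that $dG=0$ is equivalent to the Maxwell equation $d*_4F_2=0$ appearing in \reef{eomD=4}, with all dependence on the internal manifold dropping out. The $G$ equation of motion and the Einstein equations can be treated along the same lines, but require a longer computation.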
Thus we have shown the consistency of the KK reduction at the level
of the bosonic fields. In particular, any solution of minimal
gauged supergravity (such solutions were systematically studied in
\cite{Caldarelli:2003pb}) can be uplifted on an arbitrary
seven-dimensional Sasaki-Einstein manifold to a solution of $D=11$
supergravity.
\section{Reduction of $D=11$ supergravity on a SLAG-3 Flux Geometry}
Let us now consider the general class of supersymmetric warped
product solutions of the form $AdS_4 \times_w {\cal N}_7$ with
purely magnetic four-form flux which are dual to $N=2$ SCFTs in
$d=3$ \cite{GMMW1}. We call these geometries SLAG-3 flux geometries,
since they can be derived from a class of geometries that correspond
to M5-branes wrapping special Lagrangian (SLAG) three-cycles in an
$SU(3)$ holonomy manifold; for further details see \cite{GMMW1}. It
is quite possible that this class of geometries is the most general
class of $AdS_4$ geometries with this amount of supersymmetry and
with purely magnetic four-form flux, but this has not been proven.
The $D=11$ metric of the SLAG-3 flux geometry is given by \begin{equation}
ds^2_{11} =\lambda^{-1} ds^2(AdS_4) + d s^2(\mathcal{N}_7) \end{equation}
where $ds^2(AdS_4)$ has unit radius and the warp factor $\lambda$ is
independent of the coordinates of $AdS_4$. $\mathcal{N}_7$ has a
local $SU(2)$ structure which is specified by three one-forms and
three self-dual two-forms $J^1,J^2,J^3$. One of the one-forms is
dual to a Killing vector that also preserves the flux: this is dual
to the R-symmetry of the corresponding $N=2$ SCFT. Introducing local
coordinates so that this Killing vector is given by $\partial_\phi$
we have
\begin{equation}
d s^2(\mathcal{N}_7) =
d s^2(\mathcal{M}_{SU(2)})
+ w \otimes w
+ \frac{\lambda^2d\rho^2}{4(1-\lambda^3\rho^2)}
+ \frac{\lambda^2\rho^2}{4}d\phi^2 ,
\end{equation}
where $\mathcal{M}_{SU(2)}$ is a four-dimensional space on which the $J^a$
live. The three one-forms mentioned above are $w$,
$(\lambda/2{\sqrt{1-\lambda^3\rho^2} })d\rho$ and
$(\lambda\rho/2)d\phi$. In addition we must have
\begin{eqnarray} \label{101} d[\lambda^{-1}\sqrt{1-\lambda^3\rho^2} w]&=&
\lambda^{-1/2}J^1+ \frac{\lambda^2 \rho}{2\sqrt{1-\lambda^3\rho^2} } w \wedge d\rho ,\nonumber \\
d\left(\lambda^{-3/2}J^3\wedge w - \frac{\lambda
\rho}{2\sqrt{1-\lambda^3\rho^2} } J^2 \wedge d\rho \right)
&=&0,\nonumber \\
d\left(J^2\wedge w + \frac{1}{2 \lambda^{1/2} \rho
\sqrt{1-\lambda^3\rho^2}} J^3\wedge d\rho \right)
&=&0 .
\end{eqnarray}
Finally the 4-form flux is given by
\begin{eqnarray}\label{515}
G=d\phi\wedge
d\left(\frac{1}{2}\lambda^{-1/2}\sqrt{1-\lambda^3\rho^2}J^3\right) .
\end{eqnarray}
An explicit example of a solution to these equations was given in~\cite{gkw}
as discussed in \cite{GMMW1}.
We now consider the KK reduction ansatz:
\begin{eqnarray} ds^2_{11}&=& \lambda^{-1} ds^2_4+ ds^2(\hat {\cal N}_7)
\nonumber \\ G &=& \hat G + F_2 \wedge Y +*_4F_2 \wedge X \label{KKansatz}
\end{eqnarray}
where $ds^2_4$ is a line element and $F_2=dA$ is a two-form on a
four-dimensional spacetime. In addition $ds^2(\hat {\cal N}_7)$ is
the expected deformation of $ds^2({\cal N}_7)$, given by
\begin{equation}
d s^2(\hat {\mathcal{N}}_7) =
d s^2(\mathcal{M}_{SU(2)})
+ {w}\otimes {w}
+ \frac{\lambda^2d\rho^2}{4(1-\lambda^3\rho^2)}
+ \frac{\lambda^2\rho^2}{4}(d\phi+A)^2 ,
\end{equation}
$\hat G$ is the expected deformation of the four-form flux appearing in \reef{515}
\begin{eqnarray}
\hat G=(d\phi+A)\wedge
d\left(\frac{1}{2}\lambda^{-1/2}\sqrt{1-\lambda^3\rho^2}J^3\right)
\end{eqnarray}
and the two-forms $X$ and $Y$ are given by \begin{eqnarray} && \label{alpha2} X
= - \frac{1}{2} (\lambda^{-1/2} J^1 + \frac{\lambda^2 \rho}{2\sqrt{1-\lambda^3\rho^2}} \ {\omega}
\wedge d{\rho} )\nonumber \\ && \label{beta2} Y = - \frac{1}{2}
\lambda^{-1/2} \sqrt{1-\lambda^3 \rho^2} J^3 .
\end{eqnarray}
Substituting this ansatz into the equations of motion of $D=11$
supergravity \reef{d=11eq} and using \reef{101} we find that all
equations are satisfied provided that the equations of motion
(\ref{eomD=4}) of minimal gauged supergravity in $D=4$ are
satisfied. This again shows the consistency of the truncation, at
the level of the bosonic fields.
\section{Reduction of IIB on general $M_5$}
We now turn to the general class of supersymmetric $AdS_5 \times_w
M_5$ solutions of IIB supergravity with all fluxes active that were
analysed in \cite{Gauntlett:2005ww}. Such solutions are dual to
$N=1$ SCFTs in $d=4$ which all have a $U(1)$ $R$-symmetry. We will
show that there is a consistent KK reduction on $M_5$ to minimal gauged
supergravity in $D=5$. This case is more involved than the previous
two and so we have included some details of the calculation in the appendices.
\subsection{Internal geometry and fluxes}
We begin by summarising the results of \cite{Gauntlett:2005ww}. The
ten-dimensional metric is a warped product of $AdS_5$ with a
five-dimensional Riemannian manifold $M_5$,
\begin{equation} \label{metansatz}
ds^2_{10} = e^{2\Delta} \left[ ds^2(AdS_5)+ ds^2(M_5) \right] \; ,
\end{equation}
where the warp factor $\Delta$ is a real function on $M_5$.
All fluxes are active:
in order to preserve the spatial $SO(4,2)$ isometry, the one-forms $P$, $Q$
and the complex three-form $G$ lie entirely on
the internal $M_5$, and the five-form is taken to be
\begin{equation} \label{genansatz}
F= f \left( \textrm{vol}_{AdS_5} + \textrm{vol}_{M_5} \right) \; ,
\end{equation}
where $f$ is a constant and $\textrm{vol}$ is the volume form
corresponding to each of the metrics in the r.h.s.~of
(\ref{metansatz}). We use the same conventions as in \cite{Gauntlett:2005ww} and
some of this is recorded in appendix A.
The manifold $M_5$ is equipped with two spinors $\xi_1$,
$\xi_2$ of Spin(5) subject to a set of differential and algebraic
constraints
arising from the IIB Killing spinor equations. The spinors $\xi_1$,
$\xi_2$ define a local identity structure on $M_5$, which can be
conveniently characterised in terms of a set of forms, bi-linear in
$\xi_1$, $\xi_2$, consisting of a real scalar $\sin \zeta$, a
complex scalar $S$, a real one-form $K_5$, and two complex one-forms
$K, K_3$. These satisfy the following differential conditions
\begin{align}
e^{-4\Delta} d (e^{4\Delta} S) &= 3 i K \nonumber \\
e^{-6\Delta} D(e^{6\Delta} K_3)
&= P\wedge K_3^* - 4 i W - e^{-2\Delta}*G \nonumber \\
e^{-8\Delta} d (e^{8\Delta} K_5)
&= 4 \sin\zeta V - 6 U \label{K5-summ}
\end{align}
where $D(e^{6\Delta} K_3) \equiv d(e^{6\Delta} K_3 ) -i Q \wedge
e^{6\Delta} K_3$. In (\ref{K5-summ}), $U, V$ are real two-forms and
$W$ is a complex two-form that can be constructed as bi-linears in
$\xi$ and moreover can be expressed in terms of the identity
structure:
\begin{align}
\label{UVWexp}
U &= \frac{1}{2(\cos^2\zeta-|S|^2)}\big(
\mathrm{i} \sin\zeta K_3 \wedge K_3^*
+ \mathrm{i} K \wedge K^* - 2\im S^*K\wedge K_5\big) ,\nonumber \\
V &= \frac{1}{2\sin\zeta(\cos^2\zeta-|S|^2)}\big(
\mathrm{i} \sin\zeta K_3\wedge K_3^*
\nonumber \\ & \qquad\qquad\qquad {}
+ \mathrm{i} [\sin^2\zeta+|S|^2] K\wedge K^*
- 2\im S^*K\wedge K_5\big) ,\nonumber \\
W & = \frac{1}{\sin\zeta(\cos^2\zeta-|S|^2)}\big(
\cos^2\zeta K_5
+ \re S^*K
+ \mathrm{i}\sin\zeta\im S^*K\big)\wedge K_3 .
\end{align}
In addition, one also has the algebraic constraint
\begin{equation}
\label{holo-sum}
\mathrm{i}_{K_3^*} P = 2\,\mathrm{i}_{K_3} d\Delta \ ,
\end{equation}
the five-form flux is given by (\ref{genansatz}) with
\begin{equation}
\label{f-summ}
f = 4 e^{4\Delta}\sin\zeta ,
\end{equation}
the three-form flux is given by
\begin{equation}
\label{flux-summ}
\begin{aligned}
\left(\cos^2\zeta-|S|^2\right) &\, e^{-2\Delta} *G
\\ &
= 2P \wedge K_3^*
- \left(4d\Delta + 4i K_4
- 4 i \sin\zeta K_5\right) \wedge K_3
\\ &
+ 2 * \left( P\wedge K_3^*\wedge K_5
- 2d\Delta\wedge K_3\wedge K_5 \right)~,
\end{aligned}
\end{equation}
where $\sin \zeta K_4 = K_5 + \re ( S^*K )$, and the metric can be
written
\begin{equation}
\begin{aligned}
d s^2(M_5) & = \frac{(K_5)^{2}}{\sin^2\zeta+|S|^2}
+ \frac{K_3\otimes K_3^*}{\cos^2\zeta-|S|^2}
+ \frac{|S|^2}{\cos^2\zeta-|S|^2}
\left(\im{S^{-1}K}\right)^{2}
\\ & \qquad
+ \frac{|S|^2}{\sin^2\zeta}\;
\frac{\sin^2\zeta+|S|^2}{\cos^2\zeta-|S|^2}
\left(\re{S^{-1}K}
+ \frac{1}{\sin^2\zeta+|S|^2}K_5\right)^{2}~.
\label{undefmetM5}
\end{aligned}
\end{equation}
Finally, the vector dual to $K_5$ is a Killing vector of the metric
(\ref{undefmetM5}) that also generates a symmetry of the full
solution: $\mathcal{L}_{K_5}\Delta=i_{K_5}P=\mathcal{L}_{K_5}G=0$.
The above constraints arising from supersymmetry ensure that all
equations of motion and Bianchi identities are satisfied.
\subsection{KK reduction}
We now construct the ansatz for a KK reduction from type IIB on the
general $M_5$ that we discussed in the last subsection. We shall
show that there is a consistent reduction to minimal $D=5$ gauged
supergravity.
On $M_5$ the vector field dual to the one-form $K_5$ is Killing and
corresponds to the R-symmetry in the $d=4$ dual SCFT. If one
introduces coordinates such that this dual vector field is
$3\partial_\psi$, we would like to shift $d\psi$ by the gauge field
$A$: noting that $||K_5||^2=(\sin^2\zeta+|S|^2)$ this means that we
should make the shift
\begin{equation} \label{shiftU1}
K_5 \quad \longrightarrow \quad \hat K_5 = K_5 + (\sin^2\zeta+|S|^2)\frac{A}{3} \; .
\end{equation}
In particular, given (\ref{metansatz}), our ansatz for the $D=10$ type IIB metric is then
\begin{equation} \label{KKmetric}
ds^2_{10} = e^{2\Delta} \left[ ds^2_5 + ds^2(\hat M_5) \right] \;
\end{equation}
where $ds^2_5$ is an arbitrary
metric on five-dimensional spacetime, and $ds^2(\hat M_5)$ is the metric $ds^2(M_5)$
in (\ref{undefmetM5}) after the shift \reef{shiftU1}.
The KK ansatz for the five-form and the complex three-form of type
IIB reads:
\begin{eqnarray}
&& F_5 = \hat F_5 + F_2 \wedge \ft{1}{3} e^{4\Delta} \hat *_5 V +
*_5 F_2 \wedge \ft{1}{3} e^{4\Delta} V \nonumber \\
&& \label{KKansatzG} G = \hat G+ F_2 \wedge \ft{1}{3} e^{2\Delta} K_3
\end{eqnarray}
where $F_2=dA$, $\hat F_5$ and $\hat G$ are the five-form and
three-form flux of the undeformed solution on $M_5$ after we make
the shift (\ref{shiftU1}), $V$, $K_3$ are the bi-linears on $M_5$
introduced in the previous subsection\footnote{The bi-linear $V$ is
not affected by the shift (\ref{shiftU1}): choosing the convenient
frame of Appendix B of \cite{Gauntlett:2005ww} one can check that
all $K_5$ dependence of $V$ in equation (\ref{UVWexp}) drops out.},
and $\hat *_5$ and $*_5$ are, respectively, the Hodge duals with
respect to the metrics $ds^2(\hat M_5)$ and $ds^2_5$ in
(\ref{KKmetric}). Notice that since the one-forms $P$ and $Q$ of the
undeformed solution on $M_5$ are independent of $K_5$, they remain
unchanged.
In appendix \ref{AppKKansbos} we provide some details of how we constructed this particular ansatz.
In particular, a long calculation shows that the ansatz (\ref{KKmetric}), (\ref{KKansatzG}) with $P,Q$ unchanged
satisfies all of the IIB equations of motion and Bianchi identities, provided that $ds^2_5$ and $F_2$
satisfy
\begin{eqnarray}
&& \label{D=5Einstein} R_{\mu \nu} = -4 g_{\mu \nu} +\ft{1}{6}
F_{\mu \lambda} F_\nu{}^\lambda -\ft{1}{36} g_{\mu \nu} F_{\lambda
\rho} F^{\lambda \rho} \\ && \label{D=5eomF} d*_5 F_2 - \ft{1}{3}
F_2 \wedge F_2 =0 .
\end{eqnarray}
These are precisely the equations of motion of minimal $D=5$ gauged
supergravity \cite{Gunaydin:1983bi}. This shows the consistency of
the truncation of the bosonic sector.
The truncation is, moreover, consistent at the level of the
variations of the IIB fermion fields (see Appendix \ref{AppKKansfer}
for the details). On the one hand we find that the supersymmetry
variations of the dilatino $\lambda$ and of the internal
components of the gravitino $\Psi_M$ identically vanish.
On the other hand, the external components of the IIB gravitino
variation reduce to
\begin{equation}
\delta \psi_\alpha = D_\alpha \varepsilon -\ft12 \rho_\alpha
\varepsilon + \ft{i}{2} A_\alpha \varepsilon + \ft{i}{24} F_{\beta
\gamma} (\rho_\alpha{}^{\beta \gamma} -4 \delta_\alpha^\beta
\rho^\gamma ) \varepsilon ,
\end{equation}
where $\psi_\alpha$ is the $D=5$ gravitino and $\varepsilon$ a $D=5$
spinor. This is the gravitino variation corresponding to minimal
$D=5$ gauged supergravity.
To summarise, we have shown that any bosonic solution of $D=5$ supergravity can be
uplifted to $D=10$ using a general supersymmetric solution by means of the KK ansatz (\ref{KKmetric}),
(\ref{KKansatzG}). Moreover, if the five-dimensional bosonic solution is supersymmetric\footnote{Such solutions
were classified in \cite{Gauntlett:2003fk}.} then so will be the uplifted ten-dimensional solution.
\section{Conclusion}
In this paper we have constructed explicit consistent KK reduction
ans\"atze for general classes of $AdS_5$ solutions in type IIB
supergravity and $AdS_4$ solutions in $D=11$ supergravity. Our
results can be extended to other classes of supersymmetric solutions
that have been classified. It would be nice to show for the
$AdS_5\times_w M_6$ solutions of $D=11$ supergravity, classified in
\cite{Lin:2004nb}, which are dual to $N=2$ SCFTs in $d=4$, that
there is a consistent KK reduction to the $SU(2)\times U(1)$ gauged
supergravity of \cite{Romans:1985ps}. A similar result in type IIB
requires an analogous classification of $AdS_5\times_w M_5$
solutions that are dual to $N=2$ SCFTs in $d=4$, which has not yet
been carried out.
There are several classes of $AdS_4$ solutions of $D=11$
supergravity that can be considered. For example, one can consider
$AdS_4\times N_7$ solutions of $D=11$ where $N_7$ has weak $G_2$
holonomy \cite{Acharya:1998db,Morrison:1998cs} or the $AdS_4\times_w
N_7$ solutions that arise from $M5$-branes wrapping associative
3-cycles that were analysed in \cite{GMMW1}. These solutions are
dual to $N=1$ SCFTs in $d=3$, which have no $R$-symmetry, and so one
expects a consistent KK reduction on $N_7$ to a $N=1$ supergravity
whose field content is just the metric and fermions. In fact it is
easy to show that there is a consistent reduction to the $N=1$
supergravity of \cite{Townsend:1977qa}. Similarly, the $AdS_4\times
N_7$ solutions of $D=11$ where $N_7$ is tri-Sasaki
\cite{Acharya:1998db,Morrison:1998cs}, are dual to $N=3$ SCFTs in
$d=3$ and there should be a consistent KK reduction to an $SO(3)$
gauged supergravity in $D=4$. Additional $AdS_3$ and $AdS_2$
solutions of $D=11$ supergravity studied in
\cite{GMMW1,MacConamhna:2006nb,Figueras:2007cn} can also be considered.
The consistency of the KK truncation makes it manifest from the
gravity side that SCFTs with a type IIB or $D=11$ dual share common
sectors. For example, if we consider such SCFTs in $d=4$, the black
hole solutions of minimal gauged supergravity constructed in
\cite{Gutowski:2004ez} should be relevant for any of the SCFTs. It
would be interesting to pursue this further.
\section*{Acknowledgements}
We would like to thank Marco Caldarelli, Mike Duff, Oisin Mac
Conamhna, Daniel Freedman, Jaume Gomis, Chris Pope, Leonardo
Rastelli, Ashoke Sen, Kelly Stelle, Marika Taylor, Arkady Tseytlin,
Daniel Waldram and Toby Wiseman for helpful discussions. JPG is
supported by an EPSRC Senior Fellowship and a Royal Society Wolfson
Award. JPG would like to thank the Galileo Galilei Institute for
Theoretical Physics for hospitality. OV is supported by the Spanish
Ministry of Science and Education through a postdoctoral fellowship
and partially through the research grant FIS2005-02761.
\section*{\refname}}
\makeatletter
\providecommand{\tabularnewline}{\\}
\usepackage{amsfonts}
\makeatother
\usepackage{babel}
\begin{document}
\title{\textcolor{black}{ Computer-predicted ionization energy of carbon within $1$\,cm$^{-1}$ of the best experiment}}
\author{Nike Dattani}
\email{nike@hpqc.org}
\affiliation{Harvard-Smithsonian Center for Astrophysics, Atomic and
Molecular Physics Division, 02138, Cambridge, MA, USA,}
\affiliation{McMaster University, 606-8103, Hamilton, ON, Canada,}
\author{Giovanni LiManni}
\email{g.limanni@fkf.mpg.de}
\selectlanguage{british}%
\affiliation{Max Planck Institute for Solid State Systems, Department of Electronic Structure Theory, Suttgart, Germany.}
\author{David Feller}
\email{dfeller@owt.com }
\selectlanguage{british}%
\affiliation{Washington State University, Pullman, Washington
99164-4630, USA,}
\affiliation{University of Alabama, Tuscaloosa, Alabama
35487-0336, USA,}
\author{Jacek Koput}
\email{koput@amu.edu.pl}
\selectlanguage{british}%
\affiliation{Adam Mickiewicz University, 61\textendash 614
Poznan, Poland.}
\selectlanguage{english}%
\date{\today}
\begin{abstract}
We show that we can predict the first ionization energy of the carbon atom to within 0.872~cm$^{-1}$ of the experimental value. This is an improvement of more than a factor of 6.5 over the preceding best prediction in [Phys. Rev. A \textbf{81}, 022503], and opens the door to achieving sub-cm$^{-1}$ accuracy for \textit{ab initio} predictions for larger elements of the periodic table.
\end{abstract}
\selectlanguage{british}%
\maketitle
\selectlanguage{english}%
\vspace{-20mm}
In the last seven years, ionization energies (IEs) have been calculated
with unprecedented precision for the Li atom \cite{*Wang2017,*Drake2018,Puchalski2010},
Be atom \cite{Puchalski2013b} and B atom \cite{Puchalski2015}. Tight
variational bounds for non-relativistic ground state energies assuming
a clamped, point-sized nucleus have reached 49 digits in units of
Hartree for He \cite{2006Schwartz}, 16 digits for Li \cite{Wang2017}, 12 digits for Be \cite{Puchalski2013}, and 11 digits for
B \cite{Puchalski2015} (see Table \ref{tab:variational bounds}). Calculated
IEs have been made in agreement with experiment to
within $10^{-3}$~cm$^{-1}$ for $^{7}$Li, $10^{-1}$~cm$^{-1}$
for $^{9}$Be and $1$~cm$^{-1}$ for $^{11}$B (see Table \ref{tab:excitationEnergies}).
For the C atom, before this present study, no high-precision calculation had been reported to predict an IE to within $\sim1$~cm$^{-1}$
agreement with experiment, but two new experimental papers have been published on this IE very recently \cite{Glab2018,Haris2017} as well as a new measurement of the electron affinity with more than an order of magnitude better precision than the best previous experiment \cite{2016Brestau}. Table \ref{tab:variational bounds} shows that the method used for the smaller atoms up to boron has not had success for carbon. With twice as many variationally optimizable parameters, one fewer digit was obtained for the B atom than for the Be atom, which also suggests that it would be very difficult to variationally optimize a fully explicitly correlated wavefunction ansatz for atoms and molecules coming from the rest of the periodic table.
The best known variational bound for the non-relativistic, clamped, point-sized nucleus (NR-CPN) ground state energy for
C was calculated in 2015 using fixed-node diffusion Monte Carlo (FN-DMC)
with the nodes of the electronic wavefunction fixed at the locations
of a CISD/cc-pV5Z wavefunction, and the statistical uncertainty
based on the stochastic fluctuations was $\pm20$~$\mu E_{\textrm{Hartree}}$ \cite{FN-DMC}. However, the IE for C predicted by FN-DMC was in discrepancy with experiment by more than 40\,cm$^{-1}$. In this
paper, the approach we use to calculate the NR-CPN energy of the ground
state of C is FCIQMC (full configuration interaction quantum Monte Carlo)
with basis sets as large as aug-cc-pCV8Z. Table \ref{tab:variational bounds} shows
that our NR-CPN energy is at least 76~$\mu E_{\rm{Hartree}}$ higher than the
variational upper bound obtained from FN-DMC; but since in our approach, imperfections in the description of the wavefunction for the neutral atom are almost the same as in the cation, the individual errors almost completely cancel when
calculating the energy difference. Therefore, with our approach we achieve agreement with experiment that is comparable within an order of magnitude to what has been seen with the explicitly correlated approach for atoms as big as (but not exceeding) boron.
After adding relativistic and quantum electrodynamics (QED) corrections, and corrections to the clamped nucleus approximation,
we obtained an IE for the ground state of C
which is in only 0.872~cm$^{-1}$ disagreement with the best known experimental
estimate. While this is not as impressive as the
method of variationally optimizing parameters in an explicitly correlated
wavefunction ansatz has proven to be for Li and Be, the disagreement with experiment
has the same order of magnitude as the latter approach for B (see
Table \ref{tab:excitationEnergies}). We finally note that the approach
used in this paper, of calculating FCIQMC on a basis set of non-explicitly
correlated orbitals has successfully treated systems with far more
electrons (transition metal atoms \cite{Thomas2015a}, diatomics \cite{Cleland2012}, multi-reference polyatomics such as ozone \cite{Powell2017},
larger molecules such as butadiene \cite{Daday2012}, and even solid
state systems \cite{Booth2012a}), so it is conceivable that the approach used in this paper may in the near future be able to determine (with
fair accuracy) the IEs which at present remain experimentally elusive or poorly known according to NIST's atomic spectra database. These include arsenic (whose experimental IE has an uncertainty of $\pm2$\,\textrm{cm}$^{-1}$), Pm, Pa, Fm, Md, No, Sg, Bh and Hs (whose IEs are only known based on extrapolations of other experimental data and have uncertainties between $\pm$140\,cm$^{-1}$ and $\pm$4000\,cm$^{-1}$), Rf and Db (whose IEs are only known from theoretical calculations), and Mt, Ds, Rg, Cn, Nh, Fl, Mc, Lv, Ts, and Og (for which no IE is given in NIST's most recent databases).
\begin{center}
\begin{table*}
\caption{\label{tab:variational bounds}Upper bounds for
total non-relativistic electronic energies. VO stands for variational optimization (parameters in a wavefunction ansatz are optimized to yield the lowest
energy). Hylleraas-Log indicates the use of Hylleraas functions
supplemented with auxiliary log functions, and ECG($M$) stands for explicitly correlated Gaussian ansatz with $M$ optimizable parameters. Numbers in parentheses are estimated uncertainties \emph{within} the method used, so for FN-DMC does not include fixed-node error and for FCIQMC does not include basis set error. No numbers were obtained with basis set extrapolations.}
\begin{tabular*}{1\textwidth}{@{\extracolsep{\fill}}ccr@{\extracolsep{0pt}.}lcccccc}
\hline
\noalign{\vskip2mm}
& & \multicolumn{2}{c}{Total non-relativistic clamped point-nucleus (NR-CPN) energy [Hartree]} & Method/Ansatz type & Reference & & & & \tabularnewline[2mm]
\hline
\hline
\noalign{\vskip2mm}
H & 1 & -0&5 & {Analytic \& Exact} & 1926 Schr\"{o}dinger & & & & \tabularnewline
He & 2 & -2&903 724 377 034 119 598 311 159 245 194 404 446 696 925 309 838 & VO/Hylleraas-Log & 2006 Schwartz & \cite{2006Schwartz} & & & \tabularnewline
Li & 3 & -7&478 060 323 910 134 843 & VO/Hylleraas & 2017 Wang & \cite{Wang2017} & & & \tabularnewline
Be & 4 & -14&667 356 494 9 & VO/ECG(4096) & 2013 Puchalski & \cite{Puchalski2013b} & & & \tabularnewline
B & 5 & -24&653 867 537 & VO/ECG(8192) & 2015 Puchalski & \cite{Puchalski2015} & & & \tabularnewline
C & 6 & -37&844 48(2) & {\footnotesize FN-DMC/CISD/cc-pV5Z} & 2015 Yang & \cite{FN-DMC} & & & \tabularnewline
C & 6 & \textcolor{blue}{-37}&\textcolor{blue}{844~355~5(8)} & {\footnotesize FCIQMC/aug-cc-pCV8Z} & Present Work & - & & & \tabularnewline
C & 6 & \textminus 37&843 333 & VO/ECG(1000) & 2013 Bubin & \cite{Bubin2013} & & & \tabularnewline[2mm]
\hline
\end{tabular*}
\end{table*}
\par\end{center}
\vspace{-6mm}
\begin{center}
\begin{table*}
{\small \caption{\label{tab:excitationEnergies}The most precisely calculated ionization energies for the first six atoms, compared to the best known
experimental measurements to date. The last column indicates that
if aiming for the best precision, an experimental measurement is still
the best way to obtain the energy for most atoms, but for Be, the
energy has been obtained more precisely $\textit{in silico}$ than
in any experiment to date. The value for carbon of \textcolor{red}{90~832.299}\,cm$^{-1}$ was calculated in the present work.}}
\begin{tabular*}{1\textwidth}{@{\extracolsep{\fill}}lcr@{\extracolsep{0pt}.}lr@{\extracolsep{0pt}.}lr@{\extracolsep{0pt}.}lcr@{\extracolsep{0pt}.}lr@{\extracolsep{0pt}.}lc}
\hline
\noalign{\vskip2mm}
& \multirow{2}{*}{Transition} & \multicolumn{2}{c}{Experiment } & \multicolumn{2}{c}{} & \multicolumn{2}{c}{Theory} & & \multicolumn{2}{c}{Calc - Obs} & \multicolumn{2}{c}{$\lvert\frac{\text{Calc - Obs}}{\text{Uncertainty in obs}}\rvert$} & \multirow{2}{*}{More precise}\tabularnewline
& & \multicolumn{2}{c}{[cm$^{-1}$]} & \multicolumn{2}{c}{} & \multicolumn{2}{c}{[cm$^{-1}$]} & & \multicolumn{2}{c}{[cm$^{-1}$]} & \multicolumn{2}{c}{} & \tabularnewline[2mm]
\hline
\hline
\noalign{\vskip2mm}
$^{1}$H & H$^{+}\left(1^{1}S\right)\leftarrow\text{H}\left(1^{2}S\right)$ & 109~678&771~732(23) & \multicolumn{2}{c}{\cite{Kramida2010}} & 109~678&771~743~07(10) & \footnote{This number is based on the data in \cite{PhysRevLett.95.163003} although it is not explicitly written anywhere there. Two of the authors of \cite{PhysRevLett.95.163003} have presented the number in Table III of \cite{RevModPhys.88.035009}. $^\textrm{b}$After the completion of this work two new values in disagreement with each other, $90\ 833.021(9)$ and $90\ 832.98(3)$, have been suggested and these are in even closer agreement with our result \cite{Haris2017,Glab2018}.} & 0&000~011 & 0&48 & Theory\tabularnewline
$^{4}$He & He$^{+}\left(1^{2}S\right)\leftarrow\text{He}\left(1^{1}S\right)$ & 198~310&666~37(2) & \multicolumn{2}{c}{\cite{Kandula2011}} & 198~310&665~07(1) & \cite{Pachucki2017} & ~~~-0&001~3 & ~~~~~~65&00 & Theory\tabularnewline
$^{3}$Li & Li$^{+}\left(1^{1}S\right)\leftarrow\text{Li}\left(2^{2}S\right)$ & 43~487&159~40(18) & \multicolumn{2}{c}{\cite{Bushaw2007}} & 43~487&159~7(7) & \cite{*Wang2017,*Drake2018} & ~~~-0&000~3 & ~~~~~~1&66 & Experiment\tabularnewline
$^{9}$Be & Be$^{+}\left(2^{2}S\right)\leftarrow\text{Be}\left(2^{1}S\right)$ & 75~192&64(6) & \multicolumn{2}{c}{\cite{Beigang1983}} & 75~192&699(7) & \cite{Puchalski2013b} & ~~~0&059 & ~~~~~~0&98 & Theory\tabularnewline
$^{11}$B & B$^{+}\left(2^{1}S\right)\leftarrow\text{B}\left(2^{2}P\right)$ & 66~928&036(22) & \multicolumn{2}{c}{\cite{Kramida2007}} & 66~927&91(21) & \cite{Puchalski2015} & ~~~-0&126 & ~~~~~~5&73 & Experiment\tabularnewline
$^{12}$C & C$^{+}\left(2^{2}P\right)\leftarrow\text{C}\left(2^{3}P\right)$ & 90~833&171(15)$^\textrm{b}$ & \multicolumn{2}{c}{\cite{Chang1998}} & \textcolor{red}{90~832}&\textcolor{red}{299} & - & ~~~-0&872 & ~~~~~~~58&13 & Experiment\tabularnewline[2mm]
\hline
\end{tabular*}{\small \par}
\end{table*}
\par\end{center}
\vspace{-20mm}
\vspace{-1.5mm}
\section{Methodology}
\vspace{-3mm}
We begin with our main result in Table \ref{tab:Summary}, which shows that our computer-predicted ionization energy comes mainly from the NR-CPN Hamiltonian. This energy was calculated in four stages which we describe in the sub-sections below: (A) We developed larger core-valence (CV) basis sets than previously available for carbon, (B) we calculated the 1- and 2-electron integrals in these basis sets, (C) we solved the NR-CPN Schr\"{o}dinger equation at the FCI level in our finite-sized basis sets of two different sizes, and (D) we extrapolated the finite basis set results to estimate the energies at the \textbf{c}omplete \textbf{b}asis \textbf{s}et (CBS) limits. Finally, sub-section (E) describes how we added the corrections due to special relativity, QED, and due to the atom having an unclamped, zero-radius, nucleus.
\vspace{-7mm}
\begin{center}
\begin{table}[H]
\caption{\label{tab:Summary}{\small Summary of our main result. All energies are between the fine centre of gravity (fcog) of C$\left(^3P\right)$ and the fcog of C$^+\!\!\left(^2P\right)$ so the experimental spin-orbit lowering of 12.6725\,cm$^{-1}$ (calculated in our Supplemental Material, and based on measurements reported in \cite{Haris2017}) needs to be subtracted from all numbers to obtain the C$\left(^3P_0\right)\leftarrow ~ \textrm{C}^{+} \!\!\left( ^2 P_{\nicefrac{1}{2}} \right) $ energy. The experimental uncertainty is a 68\% confidence interval, meaning that there is a 32\% chance that the true energy is outside the range spanned by the uncertainty.}}
\begin{tabular*}{1\columnwidth}{@{\extracolsep{\fill}}llr@{\extracolsep{0pt}.}lr@{\extracolsep{0pt}.}l}
\hline
\noalign{\vskip2mm}
{Hamiltonian} & & \multicolumn{2}{c}{{{}Ionization Energy}} & \multicolumn{2}{c}{\textcolor{black}{{}(Calc - Obs) }}\tabularnewline
& & \multicolumn{2}{c}{[\,cm$^{-1}$\,]} & \multicolumn{2}{c}{\textcolor{black}{{}[\,cm$^{-1}$\,]}}
\tabularnewline[2mm]
\hline
\hline
\noalign{\vskip2mm}
NR-CPN & & 90~863&037 & \multicolumn{2}{c}{}\tabularnewline
X2C & & -30&023 & \multicolumn{2}{c}{}\tabularnewline
Breit \& QED & & -0&48 & \multicolumn{2}{c}{} \tabularnewline
DBOC & & -0&235 & \multicolumn{2}{c}{}\tabularnewline
\noalign{\vskip2mm}
\hline
\noalign{\vskip2mm}
Total (theory) & Present & \textcolor{red} {90~832}&\textcolor{red}{299} & \multicolumn{2}{c}{}\tabularnewline[2mm]
\hline
\noalign{\vskip2mm}
Experiment & 2017 \cite{Haris2017}& 90~833&021(9) & \qquad -0&722\tabularnewline
Experiment & 1998 \cite{Chang1998}& 90~833&171(15) & \qquad -0&872\tabularnewline
Experiment & 1966 \cite{Johansson1966} & 90~833&122(100) & \qquad -0&823\tabularnewline[2mm]
\hline
\hline
\noalign{\vskip2mm}
Theory & 2010 \cite{Klopper2010}& 90~838&75 & \qquad 5&74\tabularnewline
Theory & 2017 \cite{2017Feller}& 90~840&16 & \qquad 7&15\tabularnewline
Theory & 2015 \cite{FN-DMC}& 90~786&66 & \qquad -46&35\tabularnewline[2mm]
\hline
\end{tabular*}
\end{table}
\par\end{center}
\vspace{-16mm}
\subsection{{\scriptsize Optimization of `tight function' exponents for the aug-cc-pCV7Z and
aug-cc-pCV8Z basis sets}}
The largest orbital basis sets known for C prior to this work were the (aug-cc-pV$X$Z,
$X=$7,8,9) sets used by Feller in 2016 \cite{Feller2016}. These basis sets
did not contain `tight' exponent functions for capturing the effects
of the correlation between the core $(1s^{2},2s^{2})$ electrons and
the valence electrons $(2p^{2})$. The largest known basis set for carbon prior
to this work including the CV (core-valence) correction was the aug-cc-pCV6Z \cite{Wilson1996} set. In this work we start by optimizing the `tight' exponents for
the CV correction to Feller's 2016 aug-cc-pV7Z and aug-cc-pV8Z basis
sets, yielding the first aug-cc-pCV7Z basis set for carbon, and the
first aug-cc-pCV8Z basis set known for any element.
The final aug-cc-pCV$X$Z basis sets have $X$ new tight functions of $s$-type, $X-1$ of $p$-type, $X-2$ of $d$-type, and so forth, up to the final $i$-type function for $X=7$ and the final $k$-type function for $X=8$. The $j^\textrm{th}$ exponent corresponding to a function of type $L$ is named $\gamma_{X,L,j}$, and is assumed to follow an `even-tempered' model: $\gamma_{X,L,j}=\alpha_{X,L,j}\beta_{X,L,j}^{j-1}$.
In the non-linear optimization procedure to obtain $\alpha_{7,L,j}$ and $\beta_{7,L,j}$, the starting values were chosen to be the $\alpha_{6,L,j}$ and $\beta_{6,L,j}$ values that were already optimized in \cite{Wilson1996}. These were then treated as free parameters to minimize the difference between the frozen core and all-electron CISD energies of the carbon atom with all other exponent functions fixed. The $\tt{MOLPRO}$ program \cite{MOLPRO} was used to calculate the CISD energies, and the $\tt{L}$-$\tt{BFGS}$-$\tt{B}$ program of \cite{Zhu:1997:ALF:279232.279236} was used to optimize the free parameters. For $X=7$, the $s$-type functions were added first, then once they were optimized they were held fixed while the $p$-type functions were added and optimized. Then both the $s$- and $p$-type functions were held fixed while the $d$-type functions were added, and so on up to the single $i$-type function. The procedure for $X=8$ was the same, except the procedure continued to $k$-type functions, and the starting values came from the newly optimized $X=7$ case rather than the $X=6$ case from \cite{Wilson1996}. $\tt{MOLPRO}$ does not support $k$-functions, so to optimize the $k$-function we calculated the CISD energy at three points using $\tt{GAUSSIAN}$ \cite{g16} and estimated the value of $\alpha_{8,8,1}$ yielding the lowest energy by using a quadratic fit.
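For illustration, a minimal Python sketch of this exponent-optimization loop is given below. The wrapper {\tt{cisd\_cv\_gap}} (which would build the trial basis set, run the external CISD calculations, and return the energy difference being minimized) and the variable names are placeholders introduced here for clarity; this is not the script actually used in this work.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def tight_exponents(alpha, beta, n):
    # even-tempered model: gamma_j = alpha * beta**(j-1), j = 1..n
    return alpha * beta ** np.arange(n)

def objective(params, n, cisd_cv_gap):
    # cisd_cv_gap: hypothetical wrapper around the external CISD program;
    # it returns the frozen-core vs. all-electron CISD energy difference
    # that the tight exponents are optimized to minimize
    alpha, beta = params
    return cisd_cv_gap(tight_exponents(alpha, beta, n))

# Hypothetical usage, starting from the previously optimized
# aug-cc-pCV6Z values of alpha and beta for a given angular momentum:
# result = minimize(objective, x0=[alpha_6Z, beta_6Z],
#                   args=(7, cisd_cv_gap), method="L-BFGS-B")
# gammas_7Z = tight_exponents(*result.x, 7)
\end{verbatim}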
The tight exponents optimized in this work for aug-cc-pCV7Z and aug-cc-pCV8Z are presented in the Supplemental Material.
\vspace{-6mm}
\subsection{{\scriptsize Calculation of 1- and 2-electron integrals including $k$- and $l$-
functions}}
The calculation of the 1- and 2-electron integrals for (aug)-cc-p(C)V$X$Z
basis sets with $X\ge7$ is not possible with most quantum chemistry
packages, since very few software packages support $k$-
and $l$-functions; for first-row elements, $k$-functions appear in $X=7$ basis sets
and $l$-functions appear when $X=8$. To calculate these integrals, we have used a locally modified version of {\tt MOLCAS} \cite{MOLCAS} in order to support these larger basis sets. The 1- and 2-electron integrals for C and C$^+$ were evaluated in the basis of the optimized CASSCF(6,5) and CASSCF(5,5) orbitals respectively, with the five active orbitals being the 1$s$, 2$s$, 2$p_x$, 2$p_y$ and 2$p_z$ of the C atom/ion. This active space is the minimal active space that includes all electrons and is able to provide balanced orbitals for the three degenerate components of the $^3P$ state of the C atom, or the $^2P$ state of the C$^+$ ion.
\vspace{-5mm}
\subsection{{\scriptsize Calculation of NR-CPN energies in finite basis sets without truncating the possible excitation levels (FCIQMC)}}
\vspace{-3mm}
A deterministic FCI (full configuration interaction) calculation for the 5e$^-$ C$^+$ ion in the aug-cc-pCV7Z basis set would require almost 55\,TB of RAM, and for the neutral atom would require more. Therefore we use FCIQMC for all NR-CPN calculations.
The method was introduced in \cite{Booth2009a}, and
we use the initiator method first described in \cite{Cleland2010},
and the semi-stochastic method as described in \cite{Blunt2015}.
The calculations are performed using the developer version of the software ${\tt NECI}$ \cite{NECI}.
Within a given Hamiltonian (in this case the NR-CPN Hamiltonian) and
basis set, there are three sources of error in the FCIQMC energy calculations:
\begin{enumerate}
\item Trial wavefunction error ($\Delta E_{{\rm trial}}$), which approaches
zero in the limit where the number of determinants used in the trial
wavefunction approaches the number of determinants in the FCIQMC wavefunction;
\item Initiator error ($\Delta E_{{\rm initiator}}$), which approaches
zero in the limit where the number of walkers $N_{{\rm walkers}}$
gets sufficiently large; and
\item Stochastic error ($\Delta E_{{\rm stoch}}$), which for a given number
of walkers and trial wavefunction determinants is estimated as the square root of the unbiased variance
among different estimates $E_{i}$ of the energy from their mean $\bar{E}$
after different numbers $N$ of Monte Carlo macro-iterations (determined using the Flyvbjerg-Petersen blocking analysis \cite{Flyvbjerg1989}) after the walkers
have reached equilibrium: $\Delta E_{{\rm stochastic}}\approx\sqrt{\frac{\sum_{i=1}^{N}\left(E_{i}-\bar{E}\right)^{2}}{N-1}}=\mathcal{O}\left(\nicefrac{1}{\sqrt{N}}\right)$. A minimal sketch of this blocking estimate is given immediately after this list.
\end{enumerate}
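A minimal Python sketch of the Flyvbjerg-Petersen blocking estimate mentioned in point 3 above is the following; it illustrates the idea only (the function name and the conservative choice of reading off the largest blocked error, rather than locating the plateau, are ours) and is not the analysis code used for the production runs.
\begin{verbatim}
import numpy as np

def blocking_error(energies):
    # Flyvbjerg-Petersen blocking estimate of the standard error of the
    # mean of a correlated series of post-equilibration energy estimates
    x = np.asarray(energies, dtype=float)
    errors = []
    while x.size >= 2:
        n = x.size
        errors.append(np.sqrt(np.var(x, ddof=1) / n))
        m = 2 * (n // 2)
        x = 0.5 * (x[0:m:2] + x[1:m:2])  # average neighbouring pairs
    # the proper estimate is the plateau value over blocking levels;
    # the maximum is used here as a simple conservative stand-in
    return max(errors)
\end{verbatim}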
Our goal was to obtain all energies to a precision of $\pm\epsilon$
where $\epsilon\le1\mu E_{{\rm Hartree}}\approx0.2$~cm$^{-1}$ (within
the basis sets used). To ensure that $\Delta E_{{\rm initiator}}$
can be neglected, we used a sufficiently large value of $N_{{\rm walkers}}$
for every energy calculation, so that the energy difference between
using $N_{{\rm walkers}}$ and $\frac{1}{2}N_{{\rm walkers}}$ was
smaller than 1 $\mu E_{\rm{Hartree}}$. Likewise, to ensure that $\Delta E_{{\rm trial}}$ can be neglected, we used a sufficiently large number of determinants in the trial wavefunction for every energy calculation, such that $\Delta E_{{\rm trial}}$ would also be smaller than 1 $\mu E_{\rm{Hartree}}$. We then ran every calculation for enough macro-iterations $N$ such that $\Delta E_{{\rm stoch}}$ was smaller than $\Delta E_{{\rm trial}}$ and $\Delta E_{{\rm initiator}}$. Further details are presented in the Supplemental Material, including tables which show that all three sources of error in our final numbers are not larger than we claim.
\vspace{-6mm}
\begin{center}
\begin{table}[H]
\caption{\label{tab:finalFCIQMC}{\small Final NR-CPN energies. The break-down of how
these energies were obtained, and the Hartree-Fock energies that were
used for the extrapolations are available in the Supplemental Material.
Numbers within parentheses indicate uncertainties in the last digit(s)
shown, and their determination is described in the Supplemental Material.}}
\scriptsize
\begin{tabular*}{1\columnwidth}{@{\extracolsep{\fill}}llr@{\extracolsep{0pt}.}lr@{\extracolsep{0pt}.}lr@{\extracolsep{0pt}.}l}
\hline
\noalign{\vskip2mm}
& & \multicolumn{2}{c}{C$\left(^{3}P\right)$ } & \multicolumn{2}{c}{C$^{+}$$\left(^{2}P\right)$} & \multicolumn{2}{c}{$2^{3}P\rightarrow2^{2}P$~{}}\tabularnewline
& & \multicolumn{2}{c}{ $\left[E_{{\rm Hartree}}\right]$} & \multicolumn{2}{c}{ $\left[E_{{\rm Hartree}}\right]$} & \multicolumn{2}{c}{~[cm$^{-1}$]}
\tabularnewline[2mm]
\hline
\hline
\noalign{\vskip2mm}
aug-cc-pCV7Z & & -37&844~251~5(05) & -37&430~345~1(01) & 90~841&955(028)\tabularnewline
aug-cc-pCV8Z & & -37&844~355~5(08) & -37&430~412~5(05) & 90~849&987(054)\tabularnewline[2mm]
\hline
\noalign{\vskip2mm}
Eq.\eqref{KutzelniggSolved}, $n=3.5$ & & -37&844~528~6 & -37&430~523~6 & 90~863&604\tabularnewline
Eq.\eqref{MartinSolved}, $n=4$ & & -37&844~514~2 & -37&430~514~3 & 90~862&471\tabularnewline[2mm]
\hline
\noalign{\vskip2mm}
Mean & & -37&844~521~4 & -37&430~519~0 & 90~863&037\tabularnewline[2mm]
\hline
\end{tabular*}
\end{table}
\par\end{center}
\vspace{-14mm}
\subsection{{\scriptsize Extrapolations to the CBS (complete basis set) limit}}
We use two different families of formulas to extrapolate the correlation energies (FCIQMC energies with the Hartree-Fock energies subtracted out) from $E_{X-1}$ and $E_{X}$ to $E_{\textrm{CBS}}$:
\vspace{-5mm}
\begin{align}
\label{Kutzelnigg}E_{\textrm{CBS}} & =E_{X}-\frac{A}{X^{n}}~,\\
E_{\textrm{CBS}} & =E_{X}-\frac{A}{\left(X+\nicefrac{1}{2}\right)^n}~.\label{Martin}
\end{align}
\noindent If we set $n=3$ in Eq.\,\eqref{Kutzelnigg}, we recover the formula originally proposed in \cite{Kutzelnigg1992}. If we set $n=4$ in Eq.\,\eqref{Martin}, we recover the formula originally proposed in \cite{Martin1996}. If we have values for $E_{X}$ at two different $X$ values, we can eliminate $A$ in both cases, so Eq.\eqref{Kutzelnigg} leads to Eq.\eqref{KutzelniggSolved} and Eq.\eqref{Martin} leads to Eq.\eqref{MartinSolved}:
\vspace{-2mm}
\begin{align}
\label{KutzelniggSolved}E_{\textrm{CBS}} & =\frac{X^nE_{X}-\left(X-1\right)^nE_{X-1}}{X^n-\left(X-1\right)^n}~,\\
\label{MartinSolved}E_{\textrm{CBS}} & =E_{X}+\frac{\left(2X-1\right)^n\left(E_X-E_{X-1}\right)}{\left(1+2X\right)^n-\left(2X-1\right)^n}~.
\end{align}
As explained on page 5 of \cite{Feller2016}, extrapolations to the CBS limit using $n=3$ in Eq.\eqref{Kutzelnigg} tend to over-shoot the CBS limit. The value of $n=3.5$ was therefore used in \cite{Feller2016}, and we have used it in the present study. The values of $E_{\textrm{CBS}}$ obtained using $n=3.5$ in Eq.\,\eqref{KutzelniggSolved} and $n=4$ in Eq.\eqref{MartinSolved} for $X=8$ were added to the Hartree-Fock energies for $X=8$ and are presented in Table \ref{tab:finalFCIQMC}. The final NR-CPN energy was taken as the mean of the two values obtained by extrapolating the correlation energy and adding it to the Hartree-Fock energy with $X=8$.
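The two-point extrapolations \eqref{KutzelniggSolved} and \eqref{MartinSolved}, and the averaging just described, are simple enough to state as a short Python sketch; the function names and the way the inputs are packaged are ours, and the actual correlation energies are the FCIQMC energies of Table \ref{tab:finalFCIQMC} minus the Hartree-Fock energies listed in the Supplemental Material.
\begin{verbatim}
def cbs_power_law(e_prev, e_last, X, n=3.5):
    # two-point form of E_CBS = E_X - A/X**n
    return (X**n * e_last - (X - 1)**n * e_prev) / (X**n - (X - 1)**n)

def cbs_shifted_power_law(e_prev, e_last, X, n=4):
    # two-point form of E_CBS = E_X - A/(X + 1/2)**n
    return e_last + (2*X - 1)**n * (e_last - e_prev) \
                    / ((2*X + 1)**n - (2*X - 1)**n)

def nr_cpn_energy(ecorr_7, ecorr_8, e_hf_8):
    # mean of the two extrapolated correlation energies, added to the
    # X = 8 Hartree-Fock energy (all quantities in Hartree)
    e1 = cbs_power_law(ecorr_7, ecorr_8, X=8, n=3.5)
    e2 = cbs_shifted_power_law(ecorr_7, ecorr_8, X=8, n=4)
    return e_hf_8 + 0.5 * (e1 + e2)
\end{verbatim}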
\vspace{-7mm}
\subsection{{\scriptsize Estimation of relativistic, QED, finite nuclear mass, and finite nuclear size corrections}}
\vspace{-3mm}
Scalar relativistic corrections were calculated by comparing the energies obtained with the spin-free version of the 1e$^-$ X2C (exact 2-component) Hamiltonian \cite{Dyall1997,*Cheng2011} to the energies of the NR-CPN Hamiltonian.
The integrals of our X2C Hamiltonian with ROHF orbitals were computed with the $\tt{CFOUR}$ program, and the corresponding energies were calculated at various levels of coupled cluster theory with the $\tt{MRCC}$ program \cite{MRCC,*Kallay2001}.
Further scalar relativistic effects were included by adding the Breit and QED corrections
(including the vacuum polarization and the self-energy terms that
together comprise a Lamb-like shift) from the state-averaged Dirac-Fock calculations done in \cite{Klopper2010}. The overall contribution from the Breit and QED corrections (combined) to the IE for C was -0.48\,cm$^{-1}$.
Diagonal Born-Oppenheimer breakdown corrections (DBOC) to the clamped nucleus approximation \cite{KUTZELNIGG1997,*Gauss2006} were calculated using $\tt{CFOUR}$, with $\tt{MRCC}$ used for the coupled-cluster part. Our value of -0.235\,cm$^{-1}$ is about triple the value of -0.08\,cm$^{-1}$ estimated in \cite{Klopper2010}, due to including higher levels of correlation.
The basis set and correlation convergence of the X2C and DBOC corrections is shown in the Supplemental Material, along with our calculation of the finite nuclear size correction of 0.00543 cm$^{-1}$. The final corrections that contributed to our final computer-predicted ionization energy are presented in Table \ref{tab:Summary}.
\vspace{-6mm}
\section{Conclusion}
\vspace{-4mm}
Table \ref{tab:Summary} summarizes the various contributions to our value of the IE, and compares our final value to experiment and to three recent theoretical estimates. Our value is 0.872\,cm$^{-1}$ smaller than the best experimental value.
The best theoretical estimate of the IE before this work was that of \cite{Klopper2010}, whose disagreement with experiment was more than 6.5 times larger than that of our present result. We believe that this could have been due to any or all of three factors: (1) approximations inherent to the F12 approach used for their NR-CPN energy, (2) the perturbative nature of their scalar relativistic corrections (i.e.\ the use of the mass-velocity and Darwin terms, rather than the X2C Hamiltonian used in the present work), and (3) the CCSD approximation made in their DBOC correction (as opposed to the CCSDTQ treatment used in the present work, which we have shown appears to be converged to the FCI limit).
\section{Acknowledgments}
\vspace{-1mm}
We wish to thank {\color{blue} Mariusz Puchalski, Krzysztof Pachucki, Robert Moszynski, Jacek Komasa, Gordon Drake, Michal Lesiuk, Michal Przybytek}, and {\color{blue} Wim Klopper} for helpful discussions, comments and suggestions.
\vspace{2mm}
We also thank {\color{blue}Alexander Kramida} and {\color{blue}Kunari Haris} of NIST for information about their recent pre-print on a newer experimental ionization energy for carbon (mentioned in the footnote to Table \ref{tab:excitationEnergies}), {\color{blue} Alexander Kramida} for information about the experimental ionization of hydrogen used in Table \ref{tab:excitationEnergies}, and {\color{blue} Barry Taylor} and {\color{blue} Peter Mohr} of NIST for information about the theoretical ionization energy of hydrogen used in Table \ref{tab:excitationEnergies}.
\selectlanguage{british}
\selectlanguage{english}
\input{main.bbl}
\clearpage
\newpage
\end{document}
\begin{center}
\begin{table}
\caption*{Supplemental Material: Tight exponents for aug-cc-pCV7Z and aug-cc-pCV8Z.}
\begin{tabular*}{0.4\textwidth}{@{\extracolsep{\fill}}cr@{\extracolsep{0pt}.}lr@{\extracolsep{0pt}.}l}
\hline
\noalign{\vskip2mm}
& \multicolumn{2}{c}{aug-cc-pCV7Z} & \multicolumn{2}{c}{aug-cc-pCV8Z}\tabularnewline[2mm]
\hline
\hline
\noalign{\vskip2mm}
$s$-type & 276&12 & 365&52\tabularnewline
& 158&30 & 232&20\tabularnewline
& 90&75 & 147&51\tabularnewline
& 52&026 & 93&707\tabularnewline
& 29&826 & 59&529\tabularnewline
& 17&099 & 37&817\tabularnewline
& \multicolumn{2}{c}{} & 24&024\tabularnewline[2mm]
$p$-type & 299&20 & 372&10\tabularnewline
& 149&46 & 198&95\tabularnewline
& 74&657 & 106&37\tabularnewline
& 37&293 & 56&868\tabularnewline
& 18&629 & 30&405\tabularnewline
& 9&3054 & 16&256\tabularnewline
& \multicolumn{2}{c}{} & 8&6911\tabularnewline[2mm]
$d$-type & 255&63 & 337&98\tabularnewline
& 111&57 & 191&20\tabularnewline
& 55&17 & 108&17\tabularnewline
& 27&282 & 61&193\tabularnewline
& 13&491 & 34&618\tabularnewline
& \multicolumn{2}{c}{} & 19&584\tabularnewline[2mm]
$f$-type & 132&26 & 207&91\tabularnewline
& 56&375 & 103&16\tabularnewline
& 24&030 & 51&185\tabularnewline
& 10&243 & 25&397\tabularnewline
& \multicolumn{2}{c}{} & 12&601\tabularnewline[2mm]
$g$-type & 94&488 & 155&52\tabularnewline
& 36&762 & 72&601\tabularnewline
& 14&302 & 33&891\tabularnewline
& \multicolumn{2}{c}{} & 15&821\tabularnewline[2mm]
$h$-type & 66&227 & 100&23\tabularnewline
& 22&68 & 39&552\tabularnewline
& \multicolumn{2}{c}{} & 15&608\tabularnewline[2mm]
$i$-type & 48&32 & 78&691\tabularnewline
& \multicolumn{2}{c}{} & 28&941\tabularnewline[2mm]
$k$-type & \multicolumn{2}{c}{} & \multicolumn{2}{c}{}\tabularnewline[2mm]
\hline
\end{tabular*}
\end{table}
\par\end{center}
\begin{center}
\begin{table}
\caption{\label{tab:x2cAndDBOC}Basis set and correlation convergence of the
X2C and DBOC corrections to the C$\left(^{3}P\right)\rightarrow$$\text{C}^{+}\left(^{2}P\right)$
ionization energy in cm$^{-1}$. The difference between CCSDT and
FCI for the smallest two basis sets is denoted by $\Delta_{{\rm correlation}}$
and the values at aCV3Z are added to the CCSDT energies for all larger
basis sets (resulting in FCI estimates presented in \emph{italic}
font). The numbers in parentheses for the CBS (complete basis set)
estimates denote our assigned uncertainties in the last digits presented,
which we believe to be very conservative. aCV$X$Z is short for aug-cc-pCV$X$Z-uncontracted.}
\begin{tabular*}{0.5\textwidth}{@{\extracolsep{\fill}}ccccccc}
\hline
\noalign{\vskip2mm}
& {\footnotesize{}aCV2Z} & {\footnotesize{}aCV3Z} & {\footnotesize{}aCV4Z} & {\footnotesize{}aCV5Z} & {\footnotesize{}aCV6Z} & {\footnotesize{}CBS}\tabularnewline[2mm]
\hline
\hline
\noalign{\vskip2mm}
\multicolumn{7}{c}{{\footnotesize{}X2C}}\tabularnewline[2mm]
\hline
\noalign{\vskip2mm}
{\footnotesize{}CCSDT} & {\footnotesize{}-31.055} & {\footnotesize{}-30.144} & {\footnotesize{}-30.037} & {\footnotesize{}-30.004} & {\footnotesize{}-30.001} & {\footnotesize{}-29.999(003)}\tabularnewline
{\footnotesize{}FCI} & {\footnotesize{}-31.074} & {\footnotesize{}-30.168} & \emph{\footnotesize{}-30.060}{\footnotesize{}} & \emph{\footnotesize{}-30.028}{\footnotesize{}} & \emph{\footnotesize{}-30.025} & \textcolor{red}{\emph{\footnotesize{}-30.023(050)}}\tabularnewline
{\footnotesize{}$\Delta_{{\rm correlation}}$} & {\footnotesize{}-0.019} & {\footnotesize{}-0.024} & \emph{\footnotesize{}-0.024} & \emph{\footnotesize{}-0.024} & \emph{\footnotesize{}-0.024} & \emph{\footnotesize{}-0.024(050)}\tabularnewline[2mm]
\hline
\noalign{\vskip2mm}
\multicolumn{7}{c}{{\footnotesize{}DBOC}}\tabularnewline[2mm]
\hline
\noalign{\vskip2mm}
{\footnotesize{}CCSDT} & {\footnotesize{}-0.5} & {\footnotesize{}-0.5} & {\footnotesize{}-0.5} & {\footnotesize{}-0.5} & {\footnotesize{}-0.5} & {\footnotesize{}-0.5}\tabularnewline
{\footnotesize{}FCI} & {\footnotesize{}-0.5} & {\footnotesize{}-0.5} & \emph{\footnotesize{}-0.5} & \emph{\footnotesize{}-0.5} & \emph{\footnotesize{}-0.5} & \textcolor{red}{\emph{\footnotesize{}-0.5}}\tabularnewline
{\footnotesize{}$\Delta_{{\rm correlation}}$} & {\footnotesize{}0.0} & {\footnotesize{}0.0} & \emph{\footnotesize{}0.0} & \emph{\footnotesize{}0.0} & \emph{\footnotesize{}0.0} & \emph{\footnotesize{}0.0}\tabularnewline[2mm]
\hline
\end{tabular*}
\end{table}
\par\end{center}
\begin{center}
\begin{table*}
\caption{\label{tab:Summary}Summary of the FCIQMC energies at the aug-cc-pCV7Z and aug-cc-pCV8Z basis sets, the CBS extrapolations, the relativistic, QED and DBOC corrections, and the final ionization energy compared to experiment and to previous theoretical estimates.}
\begin{tabular*}{1\textwidth}{@{\extracolsep{\fill}}lclccr@{\extracolsep{0pt}.}l}
\hline
\noalign{\vskip2mm}
 & & & \multicolumn{2}{c}{$E=\frac{\left\langle \psi_{N_{{\rm trial}}}\left|H_{{\rm NR-CPN}}\right|\psi_{{\rm FCIQMC}}\right\rangle }{\left\langle \psi_{N_{{\rm trial}}}\left.\right|\psi_{{\rm FCIQMC}}\right\rangle }$} & \multicolumn{2}{c}{$E_{{\rm excitation}}$}\tabularnewline[2mm]
\noalign{\vskip2mm}
$N_{{\rm walkers}}$ & $N_{{\rm trial}}$ & Uncertainty & $2^{3}P$~{[}$E_{{\rm Hartree}}]$ & $2^{2}P$~{[}$E_{{\rm Hartree}}]$ & \multicolumn{2}{c}{$2^{3}P\rightarrow2^{2}P$~{[}cm$^{-1}${]}}\tabularnewline[2mm]
\hline
\hline
\noalign{\vskip2mm}
\multicolumn{7}{c}{aug-cc-pCV7Z}\tabularnewline[2mm]
\hline
\noalign{\vskip2mm}
$64\times10^{6}$ & 1 & & -37.844~251~5(15) ~~~~~~~ & -37.430~345~0(03) & 90~841&955\tabularnewline
& 1000 & & -37.844~251~5(08) ~~~~~~~ & -37.430~345~0(01) & 90~841&977\tabularnewline
& & $\Delta E_{{\rm trial}}$ & $<\,$0.000~000~1~~~~~~~ & $<\,$0.000~000~1~~~~~~~ & $<\,$0&02\tabularnewline[2mm]
\hline
\noalign{\vskip2mm}
$128\times10^{6}$ & 1 & & -37.844~251~5(0?) & -37.430~345~0(02) & 90~841&977\tabularnewline
 & 1000 & & -37.844~251~5(05) & -37.430~345~1(01) & 90~841&955\tabularnewline
& & $\Delta E_{{\rm trial}}$ & $<\,$0.000~000~1~~~~~~~ & $<\,$0.000~000~1~~~~~~~ & $<\,$0&02\tabularnewline
& & $\Delta E_{{\rm initiator}}$ & $<\,$0.000~000~1~~~~~~~ & $<\,$0.000~000~1~~~~~~~ & $<\,$0&02\tabularnewline[2mm]
\hline
\hline
\noalign{\vskip2mm}
\multicolumn{7}{c}{aug-cc-pCV8Z}\tabularnewline[2mm]
\hline
\noalign{\vskip2mm}
$64\times10^{6}$ & 1 & & -37.844~3??~?(??)~~~~~~~ & -37.430~412~3(5) & \multicolumn{2}{c}{}\tabularnewline
 & 1000 & & -37.844~355~5(10)~~~~~~~ & -37.430~412~3(5) & \multicolumn{2}{c}{}\tabularnewline
 & & $\Delta E_{{\rm trial}}$ & $<\,$0.000~001~1~~~~~~~ & 0.430~412~3(5) & \multicolumn{2}{c}{}\tabularnewline
$128\times10^{6}$ & 1 & & -37.844~35?~?(08)~~~~~~~ & & \multicolumn{2}{c}{}\tabularnewline
& 1000 & & -37.844~355~5(08)~~~~~~~ & & \multicolumn{2}{c}{}\tabularnewline
& & $\Delta E_{{\rm trial}}$ & $<\,$0.000~001~3~~~~~~~ & & \multicolumn{2}{c}{}\tabularnewline
& & $\Delta E_{{\rm initiator}}$ & $<\,$0.000~000~4~~~~~~~ & & \multicolumn{2}{c}{}\tabularnewline[2mm]
\hline
\hline
\noalign{\vskip2mm}
\multicolumn{7}{c}{CBS Extrapolation }\tabularnewline[2mm]
\hline
\noalign{\vskip2mm}
$1/X^{3}$ & & & -37.844~697~1 & -37.430~741~9 & 90~852&665\tabularnewline
$1/X^{3.5}$ & & & -37.844~645~6 & -37.430~696~0 & 90~851&416\tabularnewline
$1/\left(X+\nicefrac{1}{2}\right)^{4}$ & & & -37.844~625~1 & -37.430~677~8 & 90~850&925\tabularnewline
 & & $\Delta_{\text{basis set}}$ & $\approx$0.000~037~0 & $\approx$0.000~033~0 & 0&90\tabularnewline[2mm]
\hline
\noalign{\vskip2mm}
\multicolumn{4}{l}{Non-relativistic, clamped nucleus (FCIQMC)} & & \textcolor{red}{90~851}&\textcolor{red}{69(90)}\tabularnewline
\multicolumn{4}{l}{X2C Relativistic correction} & & -30&02(05)\tabularnewline
\multicolumn{4}{l}{Breit + QED (from \cite{Klopper2010})} & & -0&48\tabularnewline
\multicolumn{4}{l}{DBOC unclamped nucleus correction} & & -0&50(01)\tabularnewline[2mm]
\hline
\noalign{\vskip2mm}
Theory & (Present Work) & \multicolumn{3}{l}{FCIQMC + X2C + Breit + QED + DBOC} & \textcolor{blue}{90~820}&\textcolor{blue}{46(95)}\tabularnewline
Experiment & (1990) & & & & 90~820&45(10)\tabularnewline
Theory & (2010) & \multicolumn{3}{l}{CCSD-F12 + HLC + MV + Darwin + Breit + QED + DBOC} & 90~826&09\tabularnewline
Theory & (2017) & \multicolumn{3}{l}{R/UCCSD(T) + HLC + DKH + Breit + QED + DBOC} & 90~827&50\tabularnewline
Theory & (2015) & \multicolumn{2}{l}{FN-DMC} & & 90~774&\tabularnewline[2mm]
\hline
\end{tabular*}
\end{table*}
\par\end{center}
\section{Introduction}
The study of the modular (or community) structure of complex networks has become a challenging subject \cite{firstnewman} with potential applications in many disciplines, ranging from sociology to computer science, see reviews \cite{jstat, schaeffer,lanfortunato}. Understanding the modular units of graphs of interactions (links) between nodes, representing people and their acquaintances, documents and their citation relations, computers and their physical or logical connections, etc., is of utmost importance for grasping the functionality and performance of such systems. One of the most successful approaches to identifying the underlying modular structure of complex networks has been the introduction of the quality function called {\em modularity} \cite{newgirvan,newanaly}. Modularity encompasses two goals: (i) it implicitly defines modules as those subgraphs that optimize this quantity, and (ii) it provides a quantitative measure to find them via optimization algorithms. It is based on the intuitive idea that random networks are not expected to exhibit modular structure (communities) beyond fluctuations \cite{rogerfluc}.
A lot of effort has been put into proposing reliable techniques to maximize modularity~\cite{newfast,clauset,donetti,santo,duch,amaral,latapy,pujol,newspect}, see review~\cite{fortunato1}. To a large extent, the success of modularity as a quality function to analyze the modular structure of complex networks relies on its intrinsic simplicity. The researcher interested in this analysis is endowed with a non-parametric function to be optimized: modularity. The result of the analysis will provide a partition of the network into communities such that the number of edges within each community is larger than the number of edges one would expect to find by random chance. As a consequence, each community is a subset of nodes more densely connected among themselves than with the rest of the nodes in the network. The user has to be aware of resolution limitations that prevent modularity from capturing the modular structure of networks at small scales \cite{fortunato}. The problem can be solved using multiresolution methods \cite{bornholdt,nostre}.
The mathematical formulation of modularity was proposed for unweighted and undirected networks \cite{newgirvan} and generalized later to weighted \cite{newanaly} and directed networks \cite{sizered}. The generalized definition is as follows
\begin{equation}
Q\left(C\right) = \frac{1}{2w}\sum^{N}_{i=1}\sum^{N}_{j=1}\left(w_{ij} - \frac{w^{out}_{i}w^{in}_{j}}{2w}\right)\delta\left(C_{i},C_{j}\right)
\label{modularity}
\end{equation}
where $w_{ij}$ is the strength of the link between the nodes $i$ and $j$ of the network, $w^{out}_{i} = \sum_{j} w_{ij}$ is the strength of links going out from $i$, $w^{in}_{j} = \sum_{i} w_{ij}$ is the strength of links coming into $j$, and the total strength of the network is $2w = \sum_{ij} w_{ij}$. Finally, $C_{i}$ is the index of the community to which node $i$ belongs, and $\delta(x,y)$ is the Kronecker function assigning 1 only if $x=y$, and 0 otherwise.
A close look at Eq.~(\ref{modularity}) reveals that the building block of the community structure we are looking for, within this formulation, is the link between two nodes. Every term in Eq.~(\ref{modularity}) accounts for the difference, within a module, between the actual existence of a link with weight $w_{ij}$ and the probability of existence of such a link just by chance, preserving the strength distribution.
However, in many cases the minimal functional structural entity of a graph is not a simple link but a small structure (motif) of several nodes \cite{motif-detec1}. Motifs are small subgraphs that can be found in a network and that correspond to a specific functional pattern of that network. Statistical over-representation of motifs (compared with the random occurrence of these sub-structures) has been a useful technique to determine minimum building blocks of functionality in complex networks, and several works exploit their identification \cite{motif-detec1,motif-detec3,motif-detec2}. Among the possible motifs, the simplest one is the triangle, which represents the basic unit of transitivity and redundancy in a graph, see Figure~\ref{motif}. This motif is over-represented in many real networks: for example, motifs~12 and~13 in Figure~\ref{motif}, the feedback loop with two mutual dyads and the fully connected triad respectively, are characteristic motifs of the WWW. Motif 7 (feed-forward loop) is over-represented in electronic circuits, neuronal connectivity and gene regulatory transcription networks. The reason for this over-representation lies in the functionality of such small subgraphs in the evolution and performance of the specific network. In the WWW as well as in social networks, the fully connected triad is probably the result of the transitivity of contents or human relations, respectively. The feed-forward loop is related to the reliability or fault tolerance of the connections between important elements involved in communication chains. The idea we propose here is that finding modules containing such motifs as building blocks could improve our information about the modular structure of complex networks. The importance of transitivity can be traced back to the seminal paper \cite{watts},
where the clustering coefficient is proposed, a scalar measure quantifying the total number
of triangles in a network through the average likelihood that
two neighbors of a vertex are neighbors themselves.
The main goal of our work is to determine communities using triangular motifs as building blocks. We propose an approach for triangle community detection based on modularity optimization via spectral decomposition. The resulting algorithm is able to identify efficiently the best partition of any given network into communities of triangles, optimizing the corresponding modularity function.
\begin{figure}[t]
\centering
\includegraphics[width=.90\textwidth]{fig1.eps}
\caption{List of all possible three-nodes motifs.}
\label{motif}
\end{figure}
\section{Spectral decomposition for triangle community detection}
Let $G=(V,A)$ be a weighted undirected graph representing a complex network, where $V$ represents the vertices set and $A$ the edges set. The objective is to identify communities of triangles, i.e.\ a partition with the requirement that the density of triangles formed by any three nodes $i$, $j$ and $k$ inside the same module is larger than the density of triangles formed across modules. We will define this objective using a proper adaptation of modularity.
\subsection{Triangle modularity tensor}
In \cite{motifs} some of us introduced a mathematical formalism to cope with the modularity of motifs of any size. Capitalizing on this work, here we study the specific case of the triangle modularity $Q_{\triangle}(C)$ of a certain partition $C$ of an undirected graph (the extension to directed graphs is straightforward, although somewhat more intricate; we present this extension in the Appendix). The mathematical definition is
\begin{equation}
Q_{\triangle}(C) =
\sum_i \sum_j \sum_k B_{ijk} \delta(C_i,C_j) \delta(C_j,C_k) \delta(C_k,C_i)\,,
\label{Qtriangles}
\end{equation}
where $C_i$ is the index of the community to which node $i$ belongs, and $B_{ijk}$
\begin{equation}
B_{ijk} = \frac{1}{T_G} w_{ij} w_{jk} w_{ki} -
\frac{1}{T_N} (w_i w_j)
(w_j w_k )
(w_k w_i )
\end{equation}
is a three-index mathematical object ({\em triangle modularity tensor}, from now on) that evaluates, for each triad $i$, $j$, $k$, the difference between the actual density of strength of the triangle in the graph and the expected density of this triangle in a random configuration with the same strength distribution (null case). The normalization constant $T_G$ is the total strength of triads of nodes forming triangles in the network,
\begin{equation}
T_G = \sum_i \sum_j \sum_k w_{ij} w_{jk} w_{ki}\,,
\end{equation}
and its counterpart $T_N$ for the null case term is
\begin{equation}
T_N = \sum_i \sum_j \sum_k (w_i w_j )
(w_j w_k )
(w_k w_i )\,.
\end{equation}
It is straightforward to check that the triangle modularity tensor satisfies:
\begin{equation}
B_{ijk} = B_{jki} = B_{kij}\,,
\label{bijkcycle}
\end{equation}
\begin{equation}
\sum_i \sum_j \sum_k B_{ijk} = 0\,.
\label{bijk0}
\end{equation}
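For illustration, the triangle modularity of Eq.~(\ref{Qtriangles}) can be evaluated by direct summation with the following minimal Python sketch (illustrative only, not the implementation used for the results reported below; it is practical only for small networks):
\begin{verbatim}
import numpy as np

def triangle_modularity(W, C):
    # W: symmetric weight matrix (N x N), C: community labels (length N)
    w = W.sum(axis=1)                  # node strengths w_i
    T_G = np.trace(W @ W @ W)          # sum_ijk w_ij w_jk w_ki
    T_N = (w**2).sum()**3              # sum_ijk (w_i w_j)(w_j w_k)(w_k w_i)
    Q = 0.0
    for c in np.unique(C):
        idx = np.where(C == c)[0]
        Wc = W[np.ix_(idx, idx)]
        Q += np.trace(Wc @ Wc @ Wc) / T_G - (w[idx]**2).sum()**3 / T_N
    return Q
\end{verbatim}
The closed forms used for the normalization constants follow directly from the definitions above: $T_G=\mathrm{Tr}(W^3)$ and $T_N=\left(\sum_i w_i^2\right)^3$.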
\subsection{Spectral optimization of triangle modularity}
The computation of the triangle modularity is demanding due to the combinatorial number of triads that can be formed. Any proposed optimization algorithm for this function must be aware of this cost. Among the possibilities already stated in the literature, we find that the spectral optimization scheme, first proposed in \cite{newspect}, is a candidate to perform this task efficiently. The idea behind this algorithm is to use the eigenspectrum of the modularity matrix, which plays a role in community detection similar to that played by the graph Laplacian, and to use a recursive splitting reminiscent of graph partitioning calculations. The problem we have is that a direct mapping to the usual spectral modularity optimization is not straightforward given the structure of Eq.~(\ref{Qtriangles}). Basically, we need to transform Eq.~(\ref{Qtriangles}) into a function with the following structure:
\begin{equation}
Q(C) \propto \sum_i \sum_j s_i M_{ij} s_j\,,
\end{equation}
\noindent where the leading eigenvector of $M_{ij}$, the modularity matrix, will induce the first recursion step, splitting the network into two parts.
We propose the following transformation: let us assume a partition of the network into two communities, introducing the variables $s_i$, which are $+1$ or $-1$ depending on the community to which node $i$ belongs, and taking into account that
\begin{equation}
\delta(C_i,C_j) = \frac{1}{2}(1 + s_i s_j)\,,
\end{equation}
then
\begin{eqnarray}
\delta(C_i,C_j) \delta(C_j,C_k) \delta(C_k,C_i) & = &
\frac{1}{8}(1 + s_i s_j) (1 + s_j s_k) (1 + s_k s_i) \nonumber \\
& = & \frac{1}{4}(1 + s_i s_j + s_j s_k + s_k s_i)\,,
\label{deltas}
\end{eqnarray}
where we have made use of $s_i^2 = +1$. Therefore, using Eqs.~(\ref{bijkcycle}) and~(\ref{bijk0}),
\begin{eqnarray}
Q_{\triangle}(S) & = &
\frac{1}{4} \sum_i \sum_j \sum_k B_{ijk} (1 + s_i s_j + s_j s_k + s_k s_i) \nonumber\\
& = & \frac{3}{4} \sum_i \sum_j \sum_k B_{ijk} s_i s_j\,.
\end{eqnarray}
Defining the {\em triangle modularity matrix}
\begin{eqnarray}
M_{ij} & = & \sum_k B_{ijk} \nonumber\\
& = & \frac{1}{T_G} w_{ij} \sum_k w_{jk} w_{ki} -
\frac{1}{T_N} (w_i w_i)
(w_j w_j )
\sum_k (w_k w_k )\,.
\end{eqnarray}
then
\begin{equation}
Q_{\triangle}(S) = \frac{3}{4} \sum_i \sum_j s_i M_{ij} s_j\,.
\end{equation}
Thus, we have been able to reduce the optimization of the triangle modularity to the standard spectral algorithm given in~\cite{newspect}.
For the case of undirected networks, this matrix is symmetric and the computation of its eigenspectrum gives real values. However, if the network is directed, this property is not necessarily true, and then a symmetrization of the matrix is needed before computing its spectrum (see Appendix).
Once a first division of the network into two parts has been obtained, it is possible to iterate the process, while modularity improves, by a recursive application of the spectral splitting to each subgraph. To this end, we need the value of the triangle modularity matrix for any subgraph. Supposing we have a subgraph $g$ to be divided into $g_1$ and $g_2$, the change in triangle modularity is given by
\begin{eqnarray}
\Delta Q_{\triangle}(g\rightarrow g_1,g_2) & = &
\sum_{i,j,k\in g_1} B_{ijk} + \sum_{i,j,k\in g_2} B_{ijk} - \sum_{i,j,k\in g} B_{ijk} \nonumber \\
& = &
\frac{3}{4} \sum_{k\in g} \left(
\sum_{i,j\in g} B_{ijk} s_i s_j - \sum_{i,j\in g} B_{ijk}
\right) \nonumber \\
& = &
\frac{3}{4}\sum_{i,j\in g} s_i M_{ij}(g) s_j\,,
\end{eqnarray}
where
\begin{equation}
M_{ij}(g) = \sum_{k\in g} \left( B_{ijk} - \delta_{ij}\sum_{\ell\in g} B_{i\ell k} \right)\,,
\end{equation}
and $s_i$ is $+1$ for nodes in $g_1$ and $-1$ for nodes in $g_2$. Therefore, the new triangle modularity matrix is not just a submatrix of the original one, but additional terms appear to take into account the connectivity with the rest of the network.
\subsection{Algorithm}
Once the triangle modularity has been transformed to the proper form to be optimized by spectral decomposition, we can proceed to formulate a complete decomposition-optimization algorithm. After the first analysis of the eigenspectrum, the eigenvector associated with the largest eigenvalue is used to determine the elements that will be assigned to each of the two communities according to the sign of their eigenvector component. This process is recursively executed until no new splits are obtained. The decomposition given by the spectral partitioning can be improved by a fine-tuning of the node assignments after the process ends.
We use the Kernighan-Lin (KL) optimization method to improve the modularity, as explained in \cite{newspect}. The main idea is to move vertices from one group to the other, increasing the modularity. We move all vertices exactly once. At each step, we choose to move the vertex giving the best improvement (largest increase in the modularity). When all vertices have been moved, we repeat the process until no improvement is possible. Some computational issues should be considered here: the largest eigenvalue and its corresponding eigenvector can be determined efficiently using the iterative Lanczos method \cite{parlett}; the computation of $Q_{\triangle}(S)$ is, in principle, of order $O(N^3)$, however it can be done very efficiently by pre-computing and storing the values of $T_N$ and $T_G$, and the lists of triangles to which each node belongs; finally, the KL post-processing stage, which eventually becomes the computational bottleneck of the process, must be parameterized according to the number of nodes we intend to move and the relative improvement of modularity observed.
\begin{algorithm}[t]
\algsetup{linenosize=\small}
\begin{algorithmic}[1]
\caption{Triangle community detection}
\label{code}
\REQUIRE \textit{Connected network G(V,E)}
\ENSURE \textit{Triangle communities $C$, Triangle modularity of the partition $Q_{\triangle}(C)$}
\STATE Read network
\STATE Current subgraph $g$ $\leftarrow$ $G$
\STATE Build modularity matrix $M(g)$
\STATE Compute $Q_{\triangle}(g)$
\STATE Compute leading eigenvalue and eigenvector of $M(g)$
\STATE Decomposition of group $g$ in two groups: $g1$ and $g2$, using the signs of eigenvector components
\STATE Compute the modularity $Q_{\triangle}(g1,g2)$ of the initial split of group $g$
\STATE Improve $Q_{\triangle}(g1,g2)$ using KL optimization between $g1$ and $g2$
\STATE Compute the modularity $Q_{\triangle}(g1,g2)$ of the split of group $g$
\IF {$Q_{\triangle}(g1,g2)>Q_{\triangle}(g)$}
\STATE {\bf goto} {\small 3} with $g$ $\leftarrow$ $g1$
\STATE {\bf goto} {\small 3} with $g$ $\leftarrow$ $g2$
\ENDIF
\end{algorithmic}
\end{algorithm}
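As an illustration of steps 3--6 of Algorithm~\ref{code}, the following Python sketch (dense linear algebra, illustrative only; a Lanczos-based implementation is preferable for large networks) builds the triangle modularity matrix of an undirected network and performs a single spectral split:
\begin{verbatim}
import numpy as np

def triangle_modularity_matrix(W):
    # M_ij = w_ij (W^2)_ji / T_G - w_i^2 w_j^2 (sum_k w_k^2) / T_N
    w = W.sum(axis=1)
    T_G = np.trace(W @ W @ W)
    T_N = (w**2).sum()**3
    W2 = W @ W
    return W * W2.T / T_G - np.outer(w**2, w**2) * (w**2).sum() / T_N

def spectral_split(M):
    # signs of the leading eigenvector of the (symmetrized) matrix
    vals, vecs = np.linalg.eigh((M + M.T) / 2)
    leading = vecs[:, np.argmax(vals)]
    return np.where(leading >= 0, 1, -1)      # s_i = +1 or -1
\end{verbatim}
The two groups $g_1=\{i: s_i=+1\}$ and $g_2=\{i: s_i=-1\}$ are then refined with the Kernighan-Lin moves and split again recursively, using the subgraph matrix $M_{ij}(g)$ defined above, which includes the additional terms accounting for the connectivity with the rest of the network.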
\section{Results}
In this section we show the results of the algorithm, applied to several real networks. We have used the following networks:
\begin{itemize}
\item{Football \cite{firstnewman}, a network of American football games between Division IA colleges during regular season Fall 2000.}
\item{Zachary \cite{zachary}, a social network of friendships between 34 members of a karate club at a US university in the 1970s.}
\item{Dolphins \cite{lusseau}, an undirected social network of frequent associations between 62 dolphins in a community living off Doubtful Sound, New Zealand.} \item{Adjnoun \cite{newman06}, adjacency network of common adjectives and nouns in the novel David Copperfield by Charles Dickens.}
\item{Elec s208 \cite{motif-detec1}, benchmark of sequential logic electronic circuit.}
\item{Neurons \cite{worm}, network of neural connectivity of the nematode {\em C.elegans}.}
\item{Cortex \cite{scanell}, network of connections between cortical areas in the cat brain}.
\end{itemize}
To evaluate the information provided by the new triangle modularity, we perform a comparison with the standard modularity of Eq.~(\ref{modularity}). We compare both the values of the optimal modularity and the partitions obtained.
\subsection{Modularities comparison}
Table~\ref{trgComm} shows the best standard and triangle modularities found using spectral optimization. We define a new parameter $\Delta(Q,Q_{\triangle})= (Q_{\triangle}-Q)/Q$ that measures the relative difference between both. Positive values of $\Delta(Q,Q_{\triangle})$ indicate that the contribution of triangles to the community structure is larger than that of individual links (as measured by the standard modularity), and the contrary for negative values.
\begin{table}[tbp]
\centering
\begin{tabular}{lrrccc}
\hline
Network & Nodes & Links & $Q$ & $Q_{\triangle}$ & $\Delta(Q,Q_{\triangle})$ \\
\hline
Football & 115 & 613 & 0.604 & 0.924 & 0.529 \\
Zachary & 34 & 78 & 0.419 & 0.706 & 0.685 \\
Dolphins & 62 & 159 & 0.528 & 0.817 & 0.547 \\
Adjnoun & 112 & 425 & 0.308 & 0.299 & -0.029 \\
Elec s208 & 122 & 189 & 0.686 & 0.998 & 0.454 \\
Neurons & 279 & 2287 & 0.405 & 0.433 & 0.069 \\
Cortex & 55 & 564 & 0.372 & 0.708 & 0.903 \\
\hline
\end{tabular}
\caption{Comparison of standard and triangle modularities.}
\label{trgComm}
\normalsize
\end{table}
From Table~\ref{trgComm} we observe that in Adjnoun, which is almost a bipartite network, the standard modularity is larger than the triangle modularity, in accordance with the absence of these motifs. On the other hand, for the Zachary network, a human social network where transitivity is implicit in many acquaintances, the triangle modularity becomes more informative than the standard modularity. Indeed, the optimal standard modularity proposes a decomposition of this network into four groups, while the optimal triangle modularity is achieved for a partition into two groups plus two isolated nodes (nodes 10 and 12) that do not participate in any triangle. Moreover, the partition into two groups is in accordance with the observed split of this network after a conflict between the administrator and the instructor of the club, see Figure \ref{zacharyfig}.
\begin{figure}[tpb]
\begin{center}
\begin{tabular}[t]{cc}
\multicolumn{1}{l}{(a) Triangle modularity}
&
\multicolumn{1}{l}{(b) Standard modularity}
\\ \\
\mbox{\includegraphics*[width=.45\textwidth]{fig2a.eps}}
&
\mbox{\includegraphics*[width=.45\textwidth]{fig2b.eps}}
\end{tabular}
\end{center}
\caption{Zachary network partitions. Best partitions found by optimization
of (a) triangle modularity and (b) standard modularity. The real splitting
of the network is represented by the shape of the symbols (squares and
circles). Colors indicate the assignment of nodes to the modules found.}
\label{zacharyfig}
\end{figure}
\subsection{Communities comparison}
A deeper comparison consists in analyzing the different modules obtained using the standard and triangle modularities. To this end, we need some measures to analyze the difference in the assignments of nodes to modules, taking into account that we will also have different modular partitions. Here, we use two measures, the Normalized Mutual Information (NMI) and the Asymmetric Wallace Index (AW).
In \cite{nmi} the authors define the NMI to compare two clusterings. The idea is the following: let $A$ be a clustering with $c_{A}$ communities and $B$ a clustering with $c_{B}$ communities, and let us define the confusion matrix $N$ whose rows correspond to the communities of the first clustering ($A$) and whose columns correspond to the communities of the second clustering ($B$). The elements of the confusion matrix, $N_{\alpha\beta}$, represent the number of common nodes between community $\alpha$ of the clustering $A$ and community $\beta$ of the clustering $B$, the partial sums $N_{\alpha.}=\sum_{\beta}N_{\alpha\beta}$ and $N_{.\beta}=\sum_{\alpha}N_{\alpha\beta}$ are the sizes of these communities, and $N_{..}=\sum_{\alpha}\sum_{\beta}N_{\alpha\beta}$ is the total number of nodes. The measure NMI between two clusterings $A$ and $B$ is
\begin{equation}
\mbox{NMI}(A,B)=\frac{\displaystyle
-2\sum^{c_{A}}_{\alpha=1}\sum^{c_{B}}_{\beta=1}N_{\alpha\beta}
\log\left(\frac{N_{\alpha\beta}N_{..}}{N_{\alpha.}N_{.\beta}}\right)
}{\displaystyle
\sum^{c_{A}}_{\alpha=1}N_{\alpha.}\log\left(\frac{N_{\alpha.}}{N_{..}}\right)
+ \sum^{c_{B}}_{\beta=1}N_{.\beta}\log\left(\frac{N_{.\beta}}{N_{..}}\right)
}
\label{eq3}
\end{equation}
If the partitions are identical, then NMI takes its maximum value of 1. If the partitions are totally independent, $\mbox{NMI} = 0$. It measures the amount of information that both partitions have in common.
The Asymmetric Wallace Index~\cite{wallace} is the probability that a pair of elements in one cluster of partition $A$ (resp.\ $B)$ is also in the same cluster of partition $B$ (resp.\ $A$). Using the same definitions as for the NMI, the two possible Asymmetric Wallace Indices are:
\begin{equation}
\mbox{AW}_{1}(A,B) =
\frac{\displaystyle
\sum^{c_{A}}_{\alpha=1}\sum^{c_{B}}_{\beta=1} N_{\alpha\beta} (N_{\alpha\beta} - 1)
}{\displaystyle
\sum^{c_{A}}_{\alpha=1} N_{\alpha.} (N_{\alpha.} - 1)
},
\end{equation}
\begin{equation}
\mbox{AW}_{2}(A,B) =
\frac{\displaystyle
\sum^{c_{A}}_{\alpha=1}\sum^{c_{B}}_{\beta=1} N_{\alpha\beta} (N_{\alpha\beta} - 1)
}{\displaystyle
\sum^{c_{B}}_{\beta=1} N_{.\beta} (N_{.\beta} - 1)
}.
\end{equation}
The Asymmetric Wallace Index thus quantifies the degree of inclusion of one partition in the other.
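Both measures are straightforward to compute from the confusion matrix; an illustrative Python sketch (not the code used for the values reported below) is:
\begin{verbatim}
import numpy as np

def confusion_matrix(A, B):
    # N[a, b] = number of nodes in community a of A and b of B
    a = np.unique(A, return_inverse=True)[1]
    b = np.unique(B, return_inverse=True)[1]
    N = np.zeros((a.max() + 1, b.max() + 1))
    for i, j in zip(a, b):
        N[i, j] += 1
    return N

def nmi(A, B):
    N = confusion_matrix(A, B)
    n, Na, Nb = N.sum(), N.sum(axis=1), N.sum(axis=0)
    m = N > 0                      # skip empty cells (0 log 0 = 0)
    num = -2.0 * (N[m] * np.log(N[m] * n / np.outer(Na, Nb)[m])).sum()
    den = (Na * np.log(Na / n)).sum() + (Nb * np.log(Nb / n)).sum()
    return num / den

def asymmetric_wallace(A, B):
    N = confusion_matrix(A, B)
    Na, Nb = N.sum(axis=1), N.sum(axis=0)
    pairs = (N * (N - 1)).sum()
    return pairs / (Na * (Na - 1)).sum(), pairs / (Nb * (Nb - 1)).sum()
\end{verbatim}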
\begin{table}[tbp]
\centering
\begin{tabular}{lccc}
\hline
Networks & NMI & AW$_1$ & AW$_2$ \\
\hline
Football & 0.8903 & 0.8488 & 0.6901 \\
Zachary & 0.6380 & 0.7945 & 0.5524 \\
Dolphins & 0.6663 & 0.4810 & 0.7838 \\
Adjnoun & 0.4888 & 0.3136 & 0.3845 \\
Elec s208 & 0.6098 & 0.0307 & 0.9091 \\
Neurons & 0.6045 & 0.7276 & 0.6954 \\
Cortex & 0.8361 & 0.6841 & 1.0000 \\
\hline
\end{tabular}
\caption{Comparison of the partitions obtained using the standard and triangle modularities. The different measures are explained in the text.}
\label{metrics}
\normalsize
\end{table}
In Table~\ref{metrics}, we observe that the largest NMI is obtained for the communities of the football network. That means that the standard and triangle communities found in that network are very similar. Indeed, the structure of the football network is very dense and almost all nodes participate in triangles. The AW$_2$ of the cortex network is equal to 1, which means that all the triangle communities are included in the standard ones.
\section{Conclusions}
We have designed an algorithm to compute the communities of triangular motifs using a spectral decomposition of the triangle modularity matrix.
The algorithm provides partitions where transitive relations are the building blocks of their internal structure. The results of these partitions are complementary to those obtained maximizing the classical modularity, that accounts only for individual links, and can be used to improve our knowledge of the mesoscopic structure of complex networks.
\section{Appendix}
Here we show the computation of the triangle modularity matrix for a directed motif, in particular motif 7 in Figure~1, although as will be shown the process is equivalent for any other motif configuration. In this case, we have
\begin{equation}
Q_{\triangle}(C) =
\sum_i \sum_j \sum_k B_{ijk} \delta(C_i,C_j) \delta(C_j,C_k) \delta(C_k,C_i)\,,
\end{equation}
where $B_{ijk}$ is
\begin{equation}
B_{ijk} = \frac{1}{T_G} w_{ij} w_{jk} w_{ki} -
\frac{1}{T_N} (w_i^{\mbox{\scriptsize out}} w_j^{\mbox{\scriptsize in}})
(w_j^{\mbox{\scriptsize out}} w_k^{\mbox{\scriptsize in}})
(w_k^{\mbox{\scriptsize out}} w_i^{\mbox{\scriptsize in}})\,.
\end{equation}
The normalization constant $T_G$ is now
\begin{equation}
T_G = \sum_i \sum_j \sum_k w_{ij} w_{jk} w_{ki}\,,
\end{equation}
and
\begin{equation}
T_N = \sum_i \sum_j \sum_k (w_i^{\mbox{\scriptsize out}} w_j^{\mbox{\scriptsize in}})
(w_j^{\mbox{\scriptsize out}} w_k^{\mbox{\scriptsize in}})
(w_k^{\mbox{\scriptsize out}} w_i^{\mbox{\scriptsize in}})\,.
\end{equation}
Using the transformation proposed in Eq.~(\ref{deltas})
\begin{eqnarray}
M_{ij} & = & \sum_k B_{ijk} \nonumber\\
& = & \frac{1}{T_G} w_{ij} \sum_k w_{jk} w_{ki} -
\frac{1}{T_N} (w_i^{\mbox{\scriptsize out}} w_i^{\mbox{\scriptsize in}})
(w_j^{\mbox{\scriptsize out}} w_j^{\mbox{\scriptsize in}})
\sum_k (w_k^{\mbox{\scriptsize out}} w_k^{\mbox{\scriptsize in}})\,.
\end{eqnarray}
then
\begin{equation}
Q_{\triangle}(S) = \frac{3}{4} \sum_i \sum_j s_i M_{ij} s_j\,.
\end{equation}
Owing to the fact that the graph is directed, the modularity matrix $M_{ij}$ may not be symmetric, which causes technical problems. However, it is possible to restore the symmetry thanks to the scalar nature of $Q_{\triangle}(S)$~\cite{newmandirected}. A symmetrization of the triangle modularity matrix $M$,
\begin{equation}
M^{'} = \frac{1}{2} (M + M^{T})\,,
\end{equation}
yields
\begin{eqnarray}
Q_{\triangle}(S) & = & \frac{1}{2} (Q_{\triangle}(S) + Q_{\triangle}(S)^{T}) \nonumber \\
& = & \frac{3}{4} \sum_i \sum_j s_i M^{'}_{ij} s_j\,,
\end{eqnarray}
recovering the necessary symmetry to apply the standard spectral optimization.
In the same manner, we can define the modularity matrix for all possible motifs of Figure~\ref{motif} just by modifying $B_{ijk}$. For example, for motif~13 in Figure~\ref{motif} we have:
\begin{eqnarray}
B_{ijk} &=& \frac{1}{T_G} w_{ij} w_{ji} w_{jk} w_{kj} w_{ki} w_{ik} \nonumber \\
& &\mbox{}-
\frac{1}{T_N} (w_i^{\mbox{\scriptsize out}})^{2} (w_j^{\mbox{\scriptsize in}})^{2}
(w_j^{\mbox{\scriptsize out}})^{2} (w_k^{\mbox{\scriptsize in}})^{2}
(w_k^{\mbox{\scriptsize out}})^{2} (w_i^{\mbox{\scriptsize in}})^{2}\,,\\
T_G &=& \sum_i \sum_j \sum_k w_{ij} w_{ji} w_{jk} w_{kj} w_{ki} w_{ik}\,, \\
T_N &=& \sum_i \sum_j \sum_k (w_i^{\mbox{\scriptsize out}})^{2} (w_j^{\mbox{\scriptsize in}})^{2}
(w_j^{\mbox{\scriptsize out}})^{2} (w_k^{\mbox{\scriptsize in}})^{2}
(w_k^{\mbox{\scriptsize out}})^{2} (w_i^{\mbox{\scriptsize in}})^{2}\,.
\end{eqnarray}
\section*{Acknowledgments}
We acknowledge J.~Borge-Holthoefer and A.~Fern\'andez for useful discussions. This work was supported by the Spanish Ministry of Science and Technology FIS2009-13730-C02-02 and the Generalitat de Catalunya SGR-00838-2009. B.S. acknowledges support from the Rhone-Alpes region for the financing of training through an Explora'doc exchange scholarship.
\section*{References}
\section{Introduction}
\label{sec:intro}
\vskip 0.2cm
In this paper, we consider an $M^X/G/1$ queue, that is, a single server
queue with Poisson batch arrivals and a general service
distribution. We assume that the service distribution can be any
probability distribution with finite support. Therefore, the queueing
model considered in the paper is fairly general and flexible. Our goal is to
obtain an analytic characterization of its stationary performance
metrics, such as the stationary system size, under a class of general processor-sharing scheduling
policies.
There are many reasons for studying a queueing model with
a processor-sharing scheduling policy, which has been
proved to be a powerful and flexible model in performance
analysis. First, it has been demonstrated through extensive
research from different aspects that a processor-sharing queue is an accurate mathematical model for the performance of a real-life system that operates under scheduling policies such as round-robin or time-sharing,
and these policies are commonly adopted in job scheduling for computer
or communications networks, see, e.g. \cite{Kleinrock}. Secondly, in
some recent studies of fairness of scheduling policies, the
processor-sharing policy often serves as a reference or a base on which other
more complicated policies are developed and analyzed, see,
e.g. \cite{Wierman,FriedmanHenderson}. Fairness is a critical issue
for many new applications, especially those in web services. These
facts all contribute to the importance of the performance analysis of
processor-sharing policies.
One important feature of processor-sharing is its flexibility. The basic
principle of processor-sharing is to allocate the service capacity
proportionally among the jobs that need to be processed. However, there can be
many different ways to decide the proportion for each job, besides the equal proportion corresponding to the
egalitarian processor-sharing policy. Different regimes for
determining the proportion can produce very different performances,
which have to be considered very carefully when a scheduler is designed
for any type of system. A very natural scheme is to let the
proportions be determined by the service already attained by the
jobs, because this is usually a quantity that can be obtained with low
overhead in many systems.
The success of the deployment of these policies and ideas in the
design and performance analysis of such systems depends on the efficient
evaluation of a wide class of processor-sharing policies. Meanwhile,
the theoretical development for understanding a processor-sharing
queue still has plenty of room for
improvement. It started with the case of systems
with memory-less assumptions on distributions (Poisson arrival or
exponential service time), see, for example, Yashkov \cite{yashkov87}
for a survey on results of this nature, as well as other developments
prior to $1987$. For a general $GI/GI/1$ queue, Jean-Marie and Robert
\cite{jean-marie} study the transient behavior to obtain the rate of
growth of the number of jobs in the system; Baccelli and Towsley
\cite{baccelli} investigate stochastic ordering properties of the
response time; Grishechkin \cite{grishechkin91a, grishechkin}
establishes a strong approximation limit for the number of jobs in the
system under heavy traffic; more recently, Gromoll {\it et al.}
\cite{gromoll2002, gromoll2004} establish fluid and diffusion limits
through measure-valued processes.
In his effort to obtain the heavy traffic limit of the steady state
distribution of the performance metrics, Grishechkin establishes a
connection between the state process of an $M^X/G/1$ queue under a very general class of processor-sharing policies and a Crump-Mode-Jagers branching process, then derives an
integral equation for a wide class of performance metrics from the
dynamics of the branching process. Grishechkin names this class of policies generalized processor-sharing policies. To avoid any
confusion with the generalized processor sharing policy for multiclass
networks, we will call this class of policies Grishechkin
processor-sharing policies hereafter. Under a Grishechkin
processor-sharing policy, each job in the
system receives a portion of the processing capacity, and the
proportion is determined by a random function whose variables are the
amount of service each job has already received up to the moment under
consideration. By choosing different random functions, Grishechkin
processor-sharing can cover a
wide range of known scheduling policies. For example, the egalitarian processor-sharing policy is a special case of Grishechkin
processor-sharing in which the random function assigns the same weight to each job in the system, and the shortest-remaining-processing-time-first rule can be treated as the limit of a sequence of Grishechkin processor-sharing policies with different parameters.
Due to the complicated nature of this integral equation, only the
simplest case of egalitarian processor-sharing with unit demand has been solved in
\cite{grishechkin}. In this paper, our main contribution is to discover that several transforms can be applied to the integral equation so that it can be reduced to a
more tractable form. To be more precise, we reduce the integral
equation to a special form of combined first and second kind, and then
we observe some special structures of the classical solutions to the first
and second kind integral equations with the kernel that appears in our integral equation. These observations enable us to solve the
transformed integral equation and obtain the Laplace transform of the
stationary performance metrics. In the current paper, we will focus on
one performance metric, the system size, that is, the number of jobs
in the system. Similar approaches can be applied to other metrics that
are also studied in \cite{grishechkin91a, grishechkin}.
We then devote our efforts to inverting the Laplace transform obtained through the integral equation. Because, in the course of solving the integral equation, we need to make use of the Laplace transform solution to the classical integral equation of the first kind, the final Laplace transform is essentially a two-dimensional Laplace transform. We adapt sophisticated Laplace transform inversion methods to our problem, and obtain approximations to the stationary distribution of the performance metrics with accuracy guarantees.
The rest of the paper will be organized as follows. In
Sec. \ref{sec:prelim}, we will give a detailed description of the Grishechkin
processor-sharing policies, as well as the integral equation for the stationary
system size. In Sec. \ref{sec:CMJ}, we list some basic facts on
the Crump-Mode-Jagers branching process and describe Grishechkin's
results for connecting the queueing process and the branching process.
In Sec. \ref{sec:inteqn}, we will derive the solution for
the integral equation. In Sec. \ref{sec:performance} we will summarize
our results with a final expression of the stationary system size, and
derive numerical procedures for several special cases. In
Sec. \ref{sec:conclusions}, we will conclude the paper with a summary of
our results and contributions, as well as a description of some
ongoing research.
\section{Models and Preliminaries}
\label{sec:prelim}
\vskip 0.2cm
Let us describe the queueing system in detail, and the Grishechkin
processor-sharing scheduling
policies in particular. Jobs arrive at the queueing system following a
compound Poisson process with rate $\Lambda$ and batch size
distribution $B$ with finite support. For each job, the amount of service required is independent and
follows an identical general distribution, which we denote by $\ell$; there exists $M>0$ such that $\mbox{\sf P}[\ell>M]=0$. We assume that the
arrival and service processes are independent, and that the inter-arrival times and
the batch sizes for different jobs are also independent. The
server serves at rate one, and can serve multiple jobs
simultaneously. At time $T=0$, we assume that there are $N$ jobs in
the system, and that they have attained no service before time $T=0$.
Each job $n$, $n=1, 2,\ldots,$ is associated with a pair of
independent stochastic processes, $(A_n(\cdot), D_n(\cdot))$, indexed
by the amount of service it has received. These pairs are i.i.d. across
jobs, and a generic member $(A(\cdot),D(\cdot))$ satisfies,
\begin{itemize}
\item
$A(\cdot)$ and $D(\cdot)$ have absolutely continuous sample path almost surely;
\item
The random variable $\ell = \min\{ T>0, A(T) =0\}$ is almost surely
finite;
\item
The integral $\int_0^\ell (A(y))^{-1}dy $ converges almost surely.
\end{itemize}
Intuitively, the process $A(\cdot)$ reflects the weight of the job,
and $D(\cdot)$ the quantity of interest. Define $Q(t)$ as the number
of jobs in the system at any time $t$, and $V_i(t)$ as the amount of
service attained by each job $i, i=1,2,\ldots, Q(t)$, up to time $t$.
The policy then allocates the service capacity among the jobs in the
system such that the service rate of each job $i$ satisfies,
\begin{eqnarray}
\label{eqn:service_rate}
\frac{d V_i(t) }{dt } = \frac{A_i(V_i)}{\sum_{j=1}^{Q(t)}A_j(V_j)}.
\end{eqnarray}
That is, the sharing of the service capacity is determined by the
distribution of $A$ and the amount of service attained. Define
\begin{eqnarray*}
Q^D(T) =\sum_{i, T_i\le T} D_i (V_i(T)),
\end{eqnarray*}
where $T_i$ denotes the arrival time of job $i$. Then the main
performance metric of interest is the stationary distribution of the
above random variable. Depending upon the functional form of $D(\cdot)$,
$Q^D(T)$ corresponds to different types of performance metrics at
time $T$,
\begin{itemize}
\item
$D(y) = {\bf 1}\{y \in[0, \ell]\}$, $Q^D(T)$ represents the system size;
\item
$D(y) = {\bf 1}\{ y \in[0, \min[a, \ell]]\}$, $Q^D(T)$ represents
number of jobs with attained service less than $a$;
\end{itemize}
where ${\bf 1}A$ denotes the indicator function of set $A$, and $Q^D$
denotes its stationary distribution. While the methods developed in this paper are certainly not restricted to this case, for ease of exposition we will only present the case $D(y) = {\bf 1}\{y \in[0, \ell]\}$, i.e. the case in which $Q^D(T)$ represents the system size, or equivalently, the number of jobs that are in the system; so, unless otherwise noted, we will omit the superscript $D$.
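For intuition, the dynamics in (\ref{eqn:service_rate}) can be simulated directly. The following Python sketch (illustrative only, with arbitrary parameters chosen so that the system is stable) uses an Euler discretization for the egalitarian case $A(y)\equiv 1$ with $D(y) = {\bf 1}\{y \in[0, \ell]\}$, and estimates the time-average system size:
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

Lam, T_end, dt = 0.3, 5000.0, 0.01      # batch arrival rate, horizon, step
t, jobs, sizes = 0.0, [], []            # each job = [attained, required]
next_arrival = rng.exponential(1 / Lam)
while t < T_end:
    while t >= next_arrival:            # compound Poisson arrivals
        for _ in range(rng.integers(1, 4)):            # batch size in {1,2,3}
            jobs.append([0.0, rng.uniform(0.2, 1.8)])  # bounded demand ell
        next_arrival += rng.exponential(1 / Lam)
    if jobs:
        rate = 1.0 / len(jobs)          # A_i(V_i) = 1: egalitarian sharing
        for job in jobs:
            job[0] += rate * dt         # dV_i/dt = A_i / sum_j A_j
        jobs = [job for job in jobs if job[0] < job[1]]
    sizes.append(len(jobs))
    t += dt
print("time-average system size:", np.mean(sizes))
\end{verbatim}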
\section{Crump-Mode-Jagers branching process and the derivation of the main integral equation}
\label{sec:CMJ}
\vskip 0.2cm
It is shown in \cite{grishechkin} (see Theorem 2.1 therein) that the system size process has the same
probability law as that of the population of a Crump-Mode-Jagers
branching process. In this section, for the completeness of the arguments, we will
introduce some basic facts about the Crump-Mode-Jagers branching
process and sketch the main arguments in \cite{grishechkin91a} for the
derivation of the integral equations that the stationary performance
metric will satisfy.
The following theorem of Grishechkin provides a description of the
stationary distribution of the system size.
\begin{thm}
The function $\phi(t;u, v)$ satisfies the following integral equation,
\begin{eqnarray}
\label{eqn1_branch_2} &&\phi(t;u, v) = \ex\left[\exp\left\{h(t;u,v)+ \G
\int_0^t \left[f(\phi(t-y;v,u))-1\right] C(y)dy \right\}\right].
\end{eqnarray}
where $$h(t;u,v) := u \chi(t) + vC(t).$$
\end{thm}
Our performance analysis is reduced to solving this integral
equation.
\section{Solving the Integral Equation}
\label{sec:inteqn}
\vskip 0.2cm
It is well-known that although very common in various fields of
applied mathematics and engineering, an integral equation is rather
difficult to solve, unless it is in the standard first or second kind
integral equation forms, see, e.g. \cite{PorterStirling}. Our integral
equation (\ref{eqn1_branch_2}) does not appear to fall into those
categories. In the following, we take two steps to obtain its
solution. First, a sequence of transformations is applied to
(\ref{eqn1_branch_2}) aimed at simplifying it to a more tractable
form. As a result, we find that (\ref{eqn1_branch_2}) can be reduced
to the form of a combined first and second kind integral equation.
Then, in the second step, we derive the solution of the combined
first and second kind integral equation, making use of the
special functional form of the solution. There is no systematic approach
to solving a combined first and second kind integral equation;
to the best of our knowledge, our solution is among the very few
cases where an explicit solution has been derived.
First, the assumption of the absolute continuity of the processes $(A(\cdot),
D(\cdot))$ enables us to write the integral equation as,
\begin{eqnarray*}
\phi(t;u,v) &= &\int_{{\mathbb R}^3\times [0,t]}
\exp[h_1(t;u,v;z_1,z_2,z_3, z_4)]
\times \nonumber \\ && \exp \left\{
\int_0^t [f(\phi(t-y;u,v))-1] h_2(y, z_1,z_2,z_3, z_4)dy\right\}\\
&&g(z_1,z_2,z_3,z_4)dz_1dz_2dz_3dz_4.
\end{eqnarray*}
where $g(z_1,z_2,z_3,z_4)$ is the joint density function of $(A(\cdot),
D(\cdot), \ell)$ at time $z_4$, and
\begin{eqnarray}
\label{eqn:H1}
h_1(t;u,v;z_1,z_2,z_3,z_4)&=&[u\chi(z_1,z_2,z_3)(t) +
vC(z_1,z_2,z_3)(t)]{\bf 1}\{t=z_4\},
\end{eqnarray}
and
\begin{eqnarray}
\label{eqn:H2}
h_2(t,z_1,z_2,z_3,z_4)=C(z_1,z_2,z_3)(t){\bf 1}\{t=z_4\}.
\end{eqnarray}
In the rest of the section, when transforms are applied to the
integral equations, all the actions on $z_1, z_2, z_3$ and $z_4$ will,
in fact, be identical; meanwhile, our results are not restricted by
any specific functional form that $g(\cdot)$ takes. Therefore, for ease
of exposition, we will just use one variable $z$ and a univariate
function $g(z)$, with finite support $[0,M]$, to represent them, and restore their original form at
the end of the derivation. Thus, the integral equation under
consideration will now bear the following form,
\begin{eqnarray}
\label{eqn:inteqn_1}
&&\phi(t;u,v) = \int_{{\mathbb R}} \exp[h_1(t;u,v;z)] \times
\nonumber \\ && \exp \left\{
\int_0^t [f(\phi(t-y;u,v))-1] h_2(y,z)dy\right\}g(z)dz.
\end{eqnarray}
with $h_1(t;u,v;z)$ and $h_2(t,z)$ in place for functions defined in
(\ref{eqn:H1}) and (\ref{eqn:H2}).
Define an operator $T:C^1({\mathbb R}_+^4) \rightarrow C^1({\mathbb
R}_+^3) $, where $C^1(\Omega)$ denotes the family of
functions on domain $\Omega$ with continuous derivatives, as follows,
\begin{eqnarray*}
T (\psi(t;u,v;z)) = \int_{\mathbb R} \exp[h_1(t;u,v;z)] \psi(t;u,v;z)
g(z) dz.
\end{eqnarray*}
Let us denote, $\psi(t;u,v;w)$ as the solution of the following
integral equation,
\begin{eqnarray}
\label{eqn:inteqn_2}
&& \psi(t;u,v;w) = \exp\left[\int_0^t -h_2(y,w)dy\right]\nonumber \\ && \exp \left\{
\int_0^t h_2(y,w) f(T (\psi(t-y;u,v;z)))dy \right\}.
\end{eqnarray}
Then, it can be verified through direct calculation that,
\begin{lem}
\label{lem:transform}
\begin{eqnarray*}
\phi(t;u,v)=T (\psi(t,u,v,z))
\end{eqnarray*}
is a solution to the original integral equation (\ref{eqn:inteqn_1}).
\end{lem}
\vskip 1cm
Taking logarithms, the integral equation (\ref{eqn:inteqn_2}) can be
written in the following equivalent form,
\begin{eqnarray*}
&& \log(\psi(t;u,v;w))+ \int_0^t h_2(y,w)dy =\int_0^t
h_2(y,w) f(T (\psi(t-y;u,v;z)))dy.
\end{eqnarray*}
Thus, we can further reduce the problem through the following lemma.
\begin{lem}
\label{lem:log_trans}
Suppose that $\Psi(t,u,v,w_1, w_2, \ldots, w_n)$ is the solution to
\begin{eqnarray}
\label{eqn:inteqn_n}
&&\log(\Psi(t,u,v,w_1, w_2, \ldots, w_n))+ \int_0^t
\sum_{i=1}^nh_2(y,w_i)dy \\ &=&\int_0^t
\sum_{i=1}^nh_2(y,w_i)\left[\int_{{\mathbb R}^n}
\sum_{\ell=0}^n f_\ell \exp\left(\sum_{k=1}^\ell h_1(t;u,v,z_k)\right)
\right. \nonumber \\ && \left.\prod_{k=\ell+1}^n \nu(z_k)\prod_{k=1}^n
g(z_k)\Psi(t,u,v,z_1, z_2, \ldots, z_n)\right. dz_1dz_2\ldots dz_n\Big]dy ,
\nonumber
\end{eqnarray}
where $\nu$ is any probability measure that has support on $[M,
\infty)$, then,
\begin{eqnarray*}
\phi(t;v,u)=T \left((\Psi(t,u,v, z, z, \ldots, z)^{\frac{1}{n}}\right)
\end{eqnarray*}
is the solution to the original integral equation (\ref{eqn:inteqn_1}).
\end{lem}
{\bf Proof }
First, let us consider the special case of $f(x)= x^n$ for some $n$,
we can rewrite the equation as,
\begin{eqnarray*}
&&\log(\psi(t,u,v,w))+ \int_0^t h_2(y,w)dy \\ &=&\int_0^t h_2(y,w)\left[\int_{{\mathbb R}^n}
\exp\left(\sum_{k=1}^nh_1(t;u,v;z_k)\right) \right. \\&& \left. \times
\prod_{k=1}^n g(z_k)\prod_{i=1}^n\psi(t-y,u,v,z_i)dz_1dz_2\ldots
dz_n\right]dy.
\end{eqnarray*}
For each $i=1,2,\ldots, n$, define,
\begin{eqnarray*}
&&\log(\psi(t,u,v,w_i)) + \int_0^t h_2(y,w_i)dy \\ &=&\int_0^t h_2(y,w_i)\left[\int_{{\mathbb R}^n}
\exp\left(\sum_{k=1}^nh_1(t;u,v;z_k)\right) \right. \\&&
\left. \times \prod_{k=1}^n
g(z_k)\prod_{i=1}^n\psi(t-y,u,v,z_i)dz_1dz_2\ldots dz_n\right]dy.
\end{eqnarray*}
Next, define
\begin{eqnarray*}
\Psi(t,u,v, w_1, w_2, \ldots, w_n)= \prod_{i=1}^n &\psi(t,u,v,w_i).
\end{eqnarray*}
Therefore, $\Psi(t,u,v,w_1, w_2, \ldots, w_n)$ satisfies,
\begin{eqnarray}
\label{eqn:expan11}
&&\log(\Psi(t,u,v,w_1, w_2, \ldots, w_n)) + \int_0^t \sum_{i=1}^nh_2(y,w_i)dy\\
&= & \nonumber
\int_0^t \sum_{i=1}^nh_2(y,w_i) \int_{{\mathbb R}^n}
\exp\left[\sum_{k=1}^nh_1(t;u,v;z_k)\right] \\ \nonumber &&\times\prod_{k=1}^n
g(z_k)\Psi(t-y,u,v,z_1,z_2, \ldots, z_n)dz_1dz_2\ldots dz_ndy.
\end{eqnarray}
Define,
\begin{eqnarray*}
h_4(t;u,v; z_1,z_2, \ldots, z_n) =
\exp\left[\sum_{k=1}^nh_1(t;u,v;z_k)\right] \prod_{k=1}^n g(z_k).
\end{eqnarray*}
Then the above expression can be written as,
\begin{eqnarray*}
&&\log(\Psi(t,u,v,w_1, w_2, \ldots, w_n)) + \int_0^t \sum_{i=1}^nh_2(y,w_i)dy\\
&= &
\int_0^t \sum_{i=1}^nh_2(y,w_i) \int_{{\mathbb R}^n}
h_4(t;u,v; z_1,z_2, \ldots, z_n)\\ &&\Psi(t-y,u,v,z_1,z_2, \ldots, z_n)dz_1dz_2\ldots dz_ndy.
\end{eqnarray*}
In general, we have $f(x)$ in the following form $f(x)
=\sum_{\ell=0}^n f_{\ell}x^{\ell}$. Notice that if we replace
\begin{eqnarray*}
\exp\left[\sum_{k=1}^nh_1(t;u,v;z_k)\right],
\end{eqnarray*}
in (\ref{eqn:expan11}) by
\begin{eqnarray*}
\exp\left[\sum_{k=1}^\ell h_1(t;u,v;z_k)\right] \prod_{k=\ell+1}^n \nu(z_k).
\end{eqnarray*}
then the above logic applies to $x^\ell$, $\ell < n$, since $\nu$ has no
mass in $[0,M]$. Overall, we can apply the same procedure for general
function form $f(x)$, in which $h_4(t;u,v;
z_1,z_2, \ldots, z_n)$ takes the following form,
\begin{eqnarray*}
&&h_4(t;u,v; z_1,z_2, \ldots, z_n) \\ &=& \sum_{\ell=0}^n
f_\ell \exp\left[\sum_{k=1}^\ell h_1(t;u,v;z_k)\right]
\prod_{k=\ell+1}^n \nu(z_k)\prod_{k=1}^n g(z_k).
\end{eqnarray*}
Therefore, we can rewrite the equation as,
\begin{eqnarray*}
&&\log(\psi(t,u,v,w))+ \int_0^t \sum_{i=1}^nh_2(y,w_i)dy \\
&=&\int_0^t h_2(y,w)\left[\int_{{\mathbb R}^n}
\sum_{\ell=0}^n f_\ell \exp\left(\sum_{k=1}^\ell h_1(t;u,v;z_k)\right)
\right. \\ && \left.\prod_{k=\ell+1}^n \nu(z_k)\prod_{k=1}^n
g(z_k)\prod_{i=1}^n\psi(t-y,u,v,z_i)dz_1dz_2\ldots dz_n\right]dy ,
\end{eqnarray*}
with the convention that $\prod_{k=n+1}^n \nu(z_k) =1$. The lemma can
then be concluded in conjunction with Lemma \ref{lem:transform}.
$\Box$
\begin{thm}
\label{thm:solution}
{\rm The solution to the integral equation (\ref{eqn1_branch_2}) bears the following form,
\begin{eqnarray*}
\phi(t;u,v)& =& T \left(\exp[ h_4(t,u,v, z,z, \ldots,
z) - n\int_0^t h_2(y,z)dy\right.\\ & & \left. -
\int_0^t R(t-y, z,z,\ldots,z) nh_2(y,z)dy]^\frac{1}{n}\right)
\end{eqnarray*}}
where $R(t, w_1, w_2, \ldots, w_n)$ is the inverse Laplace transform, with respect to $p$, of
\begin{eqnarray*}
\frac{\sum_{i=1}^n {\hat h}_2(p, w_i)}{1+\sum_{i=1}^n {\hat h}_2(p, w_i)},
\end{eqnarray*}
and ${\hat h}_2(p, w_i)$ denotes the Laplace transform of $h_2(t,w_i)$,
\begin{eqnarray*}
{\hat h}_2(p, w_i)= \int_0^\infty h_2(t,w_i) e^{-pt}dt.
\end{eqnarray*}
\end{thm}
{\bf Proof }
The equation (\ref{eqn:inteqn_n}) is a combined first and second kind integral
equation. The techniques for solving this type of equation are
of independent interest. For the one considered in this paper, the
solution to its first-kind component has a
special functional structure which enables us to solve the two components
separately.
According to \cite{polyaninmanzhirov}, the solution to the integral equation,
\begin{eqnarray*}
&& \log(\Psi(t,u,v,w_1, w_2, \ldots, w_n)) \\
& = & \int_{{\mathbb R}^n}
h_4(t,u,v, z_1,z_2, \ldots, z_n)\\&& \times\Psi(t,u, v, z_1, z_2, \ldots,
z_n)dz_1dz_2\ldots dz_n+\eta(t,u,v,w_1, w_2, \ldots, w_n)
\end{eqnarray*}
is in the following form,
\begin{eqnarray}
\label{eqn:additive}
&& \log(\Psi(t, u, v, w_1, w_2, \ldots, w_n)) \\& =& h_4(t,u,v,
w_1,w_2, \ldots, w_n) +\eta(t, u, v, w_1, w_2, \ldots, w_n). \nonumber
\end{eqnarray}
So it can be verified that if we can find a function $$\eta(t,u,v,w_1, w_2, \ldots,
w_n)$$ that satisfies,
\begin{eqnarray}
\label{eqn:linear}
&&\eta(t,u,v,w_1, w_2, \ldots, w_n) + \sum_{i=1}^n\int_0^t h_2(y, w_i)\\
&=&\int_0^t \sum_{i=1}^n h_2(y, w_i)\eta(t-y,u,v,w_1, w_2, \ldots, w_n)dy, \nonumber
\end{eqnarray}
and plug it into (\ref{eqn:additive}), $\Psi(t, u, v, w_1, w_2,
\ldots, w_n)$ will be the solution to (\ref{eqn:inteqn_n}). Meanwhile,
the integral equation (\ref{eqn:linear}) is a typical integral
equation of the second kind, therefore, can be solved routinely
by Laplace transform method. More specifically, we have, see e.g. \cite{BellmanCooke},
\begin{eqnarray*}
&&\eta(t,u,v,w_1, w_2, \ldots, w_n)= - \sum_{i=1}^n\int_0^t h_2(y, w_i)\\&& -
\int_0^t R(t-y, w_1, w_2, \ldots, w_n) \sum_{i=1}^n h_2(y, w_i) dy
\end{eqnarray*}
In conjunction with lemmas \ref{lem:transform} and \ref{lem:log_trans}, the result follows.
$\Box$
\section{Transform Expression of the Performance Measures and Inversion}
\label{sec:performance}
\vskip 0.2cm
In this section, we will first derive the final expression for the
Laplace transform of our stationary performance metric, based upon the
results in the previous sections. Then, we will apply sophisticated
Laplace inversion methods to several well-known scheduling policies that
are either special cases of the Grishechkin processor-sharing policy or
can be treated as its limiting cases. In particular, following the approach in \cite{grishechkin}, we demonstrate that some popular scheduling policies, such as foreground-background, SRPT and time-function scheduling, can be expressed as limits of sequences of Grishechkin processor-sharing policies; then, by carefully selecting the parameters, we are able to obtain the expressions for the stationary performance metrics and their Laplace transforms in fairly tractable forms.
Given the solution of the integral equation, we can calculate the
Laplace transform of the performance metric $Q$.
\begin{thm}
{\rm The Laplace transform of the stationary system size is given by
\begin{eqnarray}
\label{eqn:laplace_final}
\ex\exp(-u Q) & = & (1-\rho) +
\int_0^\infty f'(\kappa_1(t,u,0)) \kappa_2(t,u)dt,
\end{eqnarray}
where
\begin{eqnarray*}
\kappa_1(t, u,v)&=& \int_{\mathbb R} \exp[h_1(t;u,v;z)]\times
\\&& \left(\exp[ h_4(t,u,v, z,z, \ldots,
z) - n\int_0^t h_2(y,z)\right. \\ && \left. -
\int_0^t R(t-y, z,z, \ldots,
z) n h_2(y,z)]^\frac{1}{n}\right)
g(z) dz
\end{eqnarray*}
\begin{eqnarray*}
\kappa_2(t,u) &= & \int_{\mathbb R} \exp[h_1(t;u,0;z)] \times
\\&& \left(\exp[ h_4(t,u,0, z,z, \ldots,
z) - n\int_0^t h_2(y,z) \right. \\ && \left.
- \int_0^t R(t-y, z,z, \ldots,
z) n h_2(y,z)]^\frac{1}{n}\right)
\times \\&& \frac{\partial}{\partial v} |_{v=0} h_1(t,u,v,z) g(z) dz
\\ &&+ \int_{\mathbb R} \exp[h_1(t;u,0;z)]\times \\&& \left(\exp[
h_4(t,u,0, z,z, \ldots,z) - n\int_0^t h_2(y,z) \right. \\ &&
\left. - \int_0^t R(t-y, z,z, \ldots,
z) n h_2(y,z)]^\frac{1}{n}\right)\times \\&&
\frac{\partial}{\partial v} |_{v=0} \kappa_3(t,u,v,z) g(z)dz,
\end{eqnarray*}
and
\begin{eqnarray*}
\kappa_3(t,u,v,z)&=& [ h_4(t,u,0, z,z, \ldots,z) - n\int_0^t
h_2(y,z) \\ &&-\int_0^t R(t-y, z,z, \ldots,
z) n h_2(y,z)]^\frac{1}{n}.
\end{eqnarray*}}
\end{thm}
This Laplace transform has a very complicated functional
and integral form, so it is unrealistic to seek its inversion in closed form. In particular, the function $R(t,w_1,w_2,\ldots, w_n)$
is itself an inverse Laplace transform, which makes our task essentially that of inverting a two-dimensional Laplace transform. This encourages us to use
numerical procedures for inverting Laplace transforms to approximate
these functions. Extensive studies on a unified approach for inverting a Laplace transform are carried out in \cite{abatewhitt}. Among the methods discussed in \cite{abatewhitt}, we select the Talbot method for its concise expression and high accuracy. In the following we summarize the main idea of this method; for details, see, e.g., \cite{AbateValko} and \cite{abatewhitt}.
For any function $f$, the Laplace transform is defined by,
\begin{eqnarray*}
{\hat f}(s) = \int_0^\infty e^{-st} f(t) dt.
\end{eqnarray*}
Its inversion is given by the following Bromwich inversion integral,
\begin{eqnarray}
\label{eqn:Bromwich}
f(t) = \frac{1}{2\pi \sqrt{-1}} \int_C {\hat f}(s) e^{st} ds, \quad t> 0,
\end{eqnarray}
where the contour $C$ goes from $c-\infty \sqrt{-1}$ to $c+\infty
\sqrt{-1}$, with $c>0$ chosen to the right of all singularities of ${\hat f}$. A unified Laplace inversion approach is to use
rational functions to approximate the exponential function in the
integrand. More specifically, use,
\begin{eqnarray*}
e^z \approx \sum_{k=0}^n \frac{w_k}{\al_k -z},
\end{eqnarray*}
for some carefully selected complex numbers $w_k$ and $\al_k$.
Then the Residue theorem will give a finite summation in
terms of the evaluation of Laplace transform for the Bromwich
integral (\ref{eqn:Bromwich}), more specifically,
\begin{eqnarray*}
f(t)\approx \frac{1}{t}\sum_{k=0}^n w_k {\hat f}\left(\frac{\al_k}{t}\right), \quad t>0.
\end{eqnarray*}
Different algorithms for Laplace inversion, such as the
Gaver-Stehfest, Euler and Talbot algorithms,
differ in the selection of the rational functions, i.e. of the $w_k$ and
$\al_k$. Here, we will use the Talbot algorithm.
Detailed analysis of the algorithm can be found in
\cite{abatewhitt, AbateValko}. For any large
integer $I>0$, the Talbot method uses the following expression as an
inversion of the Laplace transform ${\hat f}$.
\begin{eqnarray*}
f(t, I) = \frac{2}{5t} \sum_{k=0}^{I-1} {\rm Re} \left(\gamma_k {\hat
f}\left(\frac{\delta_k}{t}\right)\right),
\end{eqnarray*}
with
\begin{eqnarray*}
\delta_0& = & \frac{2I}{5}, \\ \delta_k&=&
\frac{2k\pi}{5}\left[\cot\left(\frac{k\pi}{I}\right)+\sqrt{-1}\right], \quad k>0,\\
\gamma_0&=&\frac{1}{2}e^{\delta_0},\\ \gamma_k&= & \left\{1+
\sqrt{-1}\left(\frac{k\pi}{I}\right)\left[1+\cot^2\left(\frac{k\pi}{I}\right)\right]\right.
\\ && \left.-\sqrt{-1}\cot
\left(\frac{k\pi}{I}\right)\right\}e^{\delta_k}, \quad k>0.
\end{eqnarray*}
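To make the procedure concrete, the following {\tt R} sketch implements the Talbot formula above and checks it on a transform with a known inverse. The helper names are illustrative; in double precision the attainable accuracy is limited by round-off, whereas the $0.6I$-digit guarantee discussed below assumes a sufficiently high working precision.
\begin{verbatim}
# Sketch of the (fixed) Talbot inversion in the form given above.
# talbot_nodes() returns the delta_k and gamma_k for a given I.
talbot_nodes <- function(I) {
  k <- 1:(I - 1)
  theta <- k * pi / I
  delta <- c(2 * I / 5, (2 * k * pi / 5) * (1 / tan(theta) + 1i))
  gamma <- c(0.5 * exp(delta[1]),
             (1 + 1i * theta * (1 + 1 / tan(theta)^2)
                - 1i / tan(theta)) * exp(delta[-1]))
  list(delta = delta, gamma = gamma)
}

talbot_invert <- function(fhat, t, I = 32) {
  nd <- talbot_nodes(I)
  (2 / (5 * t)) * sum(Re(nd$gamma * sapply(nd$delta / t, fhat)))
}

# Check on a transform with known inverse: fhat(s) = 1/(s+1), f(t) = exp(-t).
talbot_invert(function(s) 1 / (s + 1), t = 1)   # approx. exp(-1) = 0.3678794
\end{verbatim}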
\subsection{Accuracy of the Laplace inversion}
\vskip 0.2cm
Here we briefly discuss the accuracy of the Laplace
inversion, so as to give a full picture of the approach we are
taking.
\begin{defn}
For a large integer $M'>0$, and $\al >0$, we say that the inversion $f_q$ of
the Laplace transform ${\hat f}$ produces $\al M'$ significant digits, if
\begin{eqnarray*}
\left| \frac{f(t)-f_q}{f(t)}\right| \approx 10^{-\al M'}
\end{eqnarray*}
\end{defn}
It is known from \cite{AbateValko} that $f(t, I)$, the output of the Talbot inversion, produces
$0.6I$ significant digits while requiring $I$ evaluations of the Laplace
transform. In the case of inverting a two-dimensional Laplace transform,
such as in the problem we are studying, it is demonstrated in
\cite{abatewhitt} that we can apply the Talbot algorithm in both
dimensions, and the overall algorithm still produces $0.6I$ significant digits while
requiring $I^2$ evaluations of the Laplace transform.
\subsection{Egalitarian Processor-sharing Queues}
\label{sec:PS}
\vskip 0.2cm
Let us consider the simplest egalitarian processor-sharing queue $M^X/G/1/PS$. In
this system, $C(t) =A(t) = {\bf 1}\{t\in[0, \ell)\}$. So $h_2(y, z) =
{\bf 1}\{ z \in [0, y]\}$, and
\begin{eqnarray*}
{\hat h}_2(p, w ) = \int_0^\infty {\bf 1}\{ w \in [0, t]\}
e^{-pt}dt =\int_w^\infty e^{-pt} dt = e^{-pw}.
\end{eqnarray*}
Applying the Talbot method for integer $I_1>0$, we obtain
\begin{eqnarray*}
&& R_1(t, w_1, w_2, \ldots, w_n, I_1)\\ &=& \frac{2}{5t}
\sum_{k=0}^{I_1-1} {\rm Re} \left(\gamma_k
\sum_{i=1}^n\frac{\exp(-w_i\delta_k/t)}{1+\sum_{i=1}^n\exp(-w_i\delta_k/t)}\right).
\end{eqnarray*}
Then the density function is given as, for integer $I_2>0$,
\begin{eqnarray}
\label{eqn:inverse_general}
&& \theta_{Q}(s) = \frac{2}{5s} \sum_{k=0}^{I_2-1} {\rm Re} L_k,
\end{eqnarray}
where
\begin{eqnarray*}
L_k=
\gamma_k
\left[(1-\rho) +
\int_0^\infty f'(\kappa_1(t,\delta_k/t,0)) \kappa_2(t,\delta_k/t)dt\right] ,
\end{eqnarray*}
where $$R_1(t,w_1, w_2, \ldots, w_n,I_1)$$ will replace $$R(t,w_1, w_2,
\ldots, w_n)$$ in the definition of $\kappa_i, i=1,2,3$. If we let both
$I_1$ and $I_2$ be of the order of some large integer $I>0$, then the
above expression produces $0.6I$ significant digits.
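As an illustration of how this expression can be evaluated, the following {\tt R} sketch computes $R_1(t, w_1, \ldots, w_n, I_1)$ as displayed above, reusing the {\tt talbot\_nodes} helper from the previous sketch (all names are illustrative).
\begin{verbatim}
# Sketch: evaluating R_1(t, w_1, ..., w_n, I_1) for the egalitarian PS queue,
# using the Talbot nodes delta_k and weights gamma_k defined earlier.
R1 <- function(t, w, I1 = 32) {          # w = c(w_1, ..., w_n)
  nd <- talbot_nodes(I1)
  terms <- sapply(seq_along(nd$delta), function(k) {
    s <- sum(exp(-w * nd$delta[k] / t))  # sum_i exp(-w_i delta_k / t)
    nd$gamma[k] * s / (1 + s)
  })
  (2 / (5 * t)) * sum(Re(terms))
}

R1(t = 2, w = c(0.5, 1, 1.5))
\end{verbatim}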
In \cite{grishechkin}, for the case of egalitarian processor-sharing,
an integral equation is developed for another performance metric, the
sojourn time. More specifically, let $W(T)$ be the sojourn time for
a tagged job with processing time $\ell$; then Theorem 6.2 in
\cite{grishechkin} gives the following.
\begin{thm} If $\rho< 1$, then as $T\rightarrow \infty$, $W(T)
\rightarrow W$, and the Laplace transform of the random variable $W$
is given by
\begin{eqnarray}
\label{eqn1_branch_4} &&\ex[\exp(-u W)] = K(u, \ell)
\exp(-u\ell) \exp\left[ \G \int_0^\ell (f(S(y,u))-1)dy
\right],
\end{eqnarray}
where $f(\cdot)\in C^1$ is the Z-transform for the arrival batch
size. Here, $S(t,u)$ satisfies the following equation,
\begin{eqnarray}
\label{eqn:S_inteqn}
&&S(t,u)= \ex\exp\left[-u\min(t,\ell)-\G\int_0^\ell (1-f(S(t-y,u)))
dy\right],
\end{eqnarray}
and
\begin{eqnarray*}
K(u,b) =\G(1-\rho)\left[\G^{-1} + \frac{\partial}{\partial v }\Big|_{v=0}
\int_0^\infty f(\phi_b(t;u,v))dt\right]
\end{eqnarray*}
where $\phi_b$ satisfies,
\begin{eqnarray}
\label{eqn:pb_inteqn}
&&\phi_b(t;v,u) = \ex \exp\left[ - u
\int_t^{t+b} {\bf 1}\{ y\in [0,\ell]\} dy \right.
\\&& \left. -v{\bf 1}\{ t\in [0,\ell]\} - \G\int_0^\ell
1-f(\phi_b(t-y;v, u)) dy\right] \nonumber
\end{eqnarray}
\end{thm}
It should be clear that both $S(t,u)$ and $\phi_b(t;u,v)$ satisfy
integral equations of the same form as (\ref{eqn1_branch_2}); hence,
similar techniques can be applied to them. More precisely, Theorem
\ref{thm:solution} can be applied to these two integral equations with the
functions $h_1(\cdot)$ and $h_2(\cdot)$ defined as follows.
For $S(t,u)$
\begin{eqnarray}
\label{eqn:H1_sojourn_S}
&&h_1(t;u,v;z_1,z_2,z_3,z_4)=[-u\min(t,\ell)]{\bf 1}\{t=z_4\},
\end{eqnarray}
and
\begin{eqnarray}
\label{eqn:H2_sourjour_S}
h_2(t,z_1,z_2,z_3,z_4)={\bf 1}\{t=z_4\}.
\end{eqnarray}
and for $\phi_b$,
\begin{eqnarray}
\label{eqn:H1_sojourn_Pb}
&&h_1(t;u,v;z_1,z_2,z_3,z_4)\\&=&[ - u
\int_t^{t+b} {\bf 1}\{ y\in [0,\ell]\} dy -v{\bf 1}\{ t\in [0,\ell]\}]{\bf 1}\{t=z_4\}, \nonumber
\end{eqnarray}
and
\begin{eqnarray}
\label{eqn:H2_sourjour_Pb}
h_2(t,z_1,z_2,z_3,z_4)=C(z_1,z_2,z_3)(t){\bf 1}\{t=z_4\}.
\end{eqnarray}
After obtaining the solution of the integral equations, the numerical
procedure described above can be applied to obtain approximations with
the same performance guarantee.
\subsection{Discriminatory Processor-sharing Queues with random class assignment}
\label{sec:DPS}
\vskip 0.2cm
The discriminatory processor-sharing queue was first studied by Kleinrock
\cite{Kleinrock67}. Jobs are grouped into $C$ classes, indexed by
$c=1,2,\ldots, C$, and each class carries a fixed weight $\nu_c$. In a
system with $N_c$ jobs of each class $c$, the share of the service capacity
each job $x$ receives is determined by
\begin{eqnarray*}
\frac{ \nu_{c(x)}}{\sum_{c=1}^C N_c \nu_c} .
\end{eqnarray*}
where $c(x)$ denotes the job class $x$ belongs to. Of course, it is
easy to see that when all the $\nu_c$ are equal, the policy is just
the ordinary processor-sharing. For an updated survey on the analysis of
discriminatory processor-sharing policy, see,
e.g. \cite{AltmanAvrachenkovAyesta}.
Here, we consider a queue under a policy that is a variation of the
discriminatory processor-sharing policy. We allow the class
membership to be randomly determined at the time of
arrival. More specifically, a job belongs to class $c$,
$c=1,2,\ldots, C$, with probability
\begin{eqnarray*}
\frac{ \nu_c}{\sum_{c=1}^C\nu_c}.
\end{eqnarray*}
Therefore,
\begin{eqnarray*}
A(t) = \nu_c {\bf 1}\{ w \in [0, t]\}
\end{eqnarray*}
and $C(t) =A(t) = {\bf 1} (t\in[0, \ell)]$. So $h_2(y, z) =
{\bf 1}\{ z \in [0, y]\}$, and
\begin{eqnarray*}
{\hat h}_2(p, w ) = \int_0^\infty \nu_c {\bf 1}\{ w \in [0, t]\}
e^{-pt}dt =\int_w^\infty \nu_c e^{-pt} dt =\nu_c e^{-pw}.
\end{eqnarray*}
Following our approach for the egalitarian processor-sharing queue and applying
the Talbot method for integer $I_1>0$, we obtain,
\begin{eqnarray*}
&& R_2(t, w_1, w_2, \ldots, w_n, I_1)\\ &=& \frac{2}{5t}
\sum_{k=0}^{I_1-1} {\rm Re} \left(\gamma_k
\sum_{i=1}^n\frac{\mu_c\exp(-w_i\delta_k/t)}{1+\sum_{i=1}^n\mu_c\exp(-w_i\delta_k/t)}\right).
\end{eqnarray*}
Plugging the above $$R_2(t,w_1, w_2, \ldots, w_n,I_1)$$ in place of the function $$R(t,w_1, w_2,
\ldots, w_n)$$ in the definition of $\kappa_i, i=1,2,3$, in equation (\ref{eqn:inverse_general}), we obtain a guaranteed approximation for the density function of the performance metric, with accuracy depending on the selection of $I_1$ and $I_2$.
\subsection{Shortest Residual Processing Time First}
\label{sec:SRPT}
\vskip 0.2cm
In \cite{grishechkin}, it is shown that another popular queue
scheduling policy, the shortest residual processing time (SRPT) rule,
can be treated as the limit of a sequence of Grishechkin
processor-sharing policies with
carefully chosen parameters. In particular, for any positive integer
$N=1,2,\ldots$, let $c_N$ be a sequence of functions satisfying
$c_N(T_1) /c_N(T_2)\rightarrow \infty$, as $N\rightarrow \infty$ for
any fixed $T_2>T_1>0$. Then define,
\begin{eqnarray*}
A_N(T) = c_N(T) {\bf 1}\{T \le \ell\}.
\end{eqnarray*}
Denote by $Q^\phi_N$ the state process of the
$N$-th system, and by $Q^\phi$ the state process of a system that
follows the SRPT rule; then Lemma 7.1 in \cite{grishechkin} indicates
that $Q^\phi_N$ converges to $Q^\phi$ in distribution, as $N\rightarrow
\infty$.
To facilitate the calculation, we select $c_N(y) = (\ell-y)^N$; it is easy
to see that this sequence satisfies the assumptions. Now,
\begin{eqnarray*}
A_N(T) = (\ell - T)^{N+1}{\bf 1}\{T \le \ell\}.
\end{eqnarray*}
\begin{eqnarray*}
R_N(u) = -\frac{1}{N} \left( \ell^{-N} - (\ell-u)^{-N}\right).
\end{eqnarray*}
\begin{eqnarray*}
C^N(u) = -\frac{1}{N}(\ell^N-Ny)^{-\frac{1}{N}-1}.
\end{eqnarray*}
Hence,
\begin{eqnarray*}
{\hat h}_2(p,w) = -\frac{1}{N} p^{-\frac{1}{N+1}}\left[ \gamma\left(\frac{1}{N+1},
\frac{\ell^N-w}{p}\right)-\gamma\left(\frac{1}{N+1},
\frac{\ell^N}{p}\right)\right].
\end{eqnarray*}
where $\gamma(s,x)$ denotes the lower incomplete Gamma function, which
is defined as $\gamma(s,x)= \int_0^xt^{s-1}e^{-t}dt$.
For any $I_1>0$, we have,
\begin{eqnarray*}
&& R_3^N(t, w_1, w_2, \ldots, w_n, I_1)= \frac{2}{5t} \sum_{k=0}^{I_1-1} {\rm Re} (Q_k),
\end{eqnarray*}
where $Q_k$ is defined in (\ref{eqn:Q}).
\begin{figure*}
\begin{eqnarray}
\label{eqn:Q}
Q_k=-N\frac{1-\frac{1}{N}\sum_{i=1}^n\gamma_k
\frac{\delta_k}{t}^{-\frac{1}{N+1}}
\left(\gamma\left(\frac{1}{N+1},
\frac{t(\ell^N-w_i)}{\delta_k}\right)-\gamma\left(\frac{1}{N+1},
\frac{t(\ell^N)}{\delta_k}\right)\right)}{\sum_{i=1}^n\gamma_k
\frac{\delta_k}{t}^{-\frac{1}{N+1}}
\left(\gamma\left(\frac{1}{N+1},
\frac{t(\ell^N-w_i)}{\delta_k}\right)-\gamma\left(\frac{1}{N+1},
\frac{t(\ell^N)}{\delta_k}\right)\right)}.
\end{eqnarray}
\end{figure*}
Again, plugging the above $$R_3^N(t,w_1, w_2, \ldots, w_n,I_1)$$ in place of the function $$R(t,w_1, w_2, \ldots, w_n)$$ in the definition of $\kappa_i, i=1,2,3$, in equation (\ref{eqn:inverse_general}), for properly chosen $I_1$ and $I_2$ the density function can be computed to the desired accuracy.
\subsection{The Foreground-Background Queue}
\label{sec:FB}
\vskip 0.2cm
Foreground-background policy is another policy that can be analyzed
using the techniques discussed in this paper, because it is a policy
that allocates the service capacity according to the service attained
for each individual job. To be more specific, the foreground-background
policy gives priority to the job that has attained the least amount of service,
and if there is more than one such job, then the server is
shared equally among these jobs. It is known that under heavy-tailed
distributions there is a strong positive correlation between large
attained service and large remaining service; therefore, under this
circumstance, the foreground-background policy is considered a
surrogate for SRPT when the total service requirement of each job
cannot be determined by the scheduler.
As pointed out in \cite{grishechkin}, similar logic to the above in
the case of the shortest remaining service rule applies here.
So, for any positive integer
$N=1,2,\ldots$, let $c_N$ be a sequence of functions satisfying
$c_N(T_1) /c_N(T_2)\rightarrow \infty$, as $N\rightarrow \infty$ for
any fixed $T_2>T_1>0$.
Then define,
\begin{eqnarray*}
A_N(T) = c_N(T) {\bf 1}\{T \le \ell\}.
\end{eqnarray*}
Then, $Q^\phi_N$, defined as the state process of the $N$-th system,
and $Q^\phi$, the state process of a system that
follows the FB rule, satisfy that $Q^\phi_N$ converge to $Q^\phi$ in
distribution, as $N\rightarrow \infty$.
To facilitate the calculation, we select $c_N(y) = y^{-N}$; it is easy
to see that this sequence satisfies the assumptions. From the
definition, we know that,
\begin{eqnarray*}
R_N(u) = \frac{1}{N+1}u^{N+1}, \qquad R^{-1}_N(u) = (N+1)^{\frac{1}{N+1}}u^{\frac{1}{N+1}}.
\end{eqnarray*}
and
\begin{eqnarray*}
C^N(u) = (N+1)^{\frac{-N}{N+1}}u^{\frac{-N}{N+1}}.
\end{eqnarray*}
Therefore,
\begin{eqnarray*}
{\hat h}_2(p,w)& = &(N+1)^{\frac{-N}{N+1}} \int_0^w
y^{\frac{-N}{N+1}}e^{-yp}dy\\
&=&(N+1)^{\frac{-N}{N+1}}p^{-\frac{1}{N+1}} \gamma\left(\frac{1}{N+1},
\frac{w}{p}\right).
\end{eqnarray*}
where $\gamma(s,x)$ denotes the lower incomplete Gamma function, which
is defined as $\gamma(s,x)= \int_0^xt^{s-1}e^{-t}dt$.
For any $I_1>0$, we have,
\begin{eqnarray*}
&& R_4^N(t, w_1, w_2, \ldots, w_n, I_1)= \frac{2}{5t} \sum_{k=0}^{I_1-1} {\rm Re} (Q_k),
\end{eqnarray*}
where
\begin{eqnarray*}
Q_k=\frac{1+\sum_{i=1}^n\gamma_k
(N+1)^{\frac{-N}{N+1}}\frac{\delta_k}{t}^{-\frac{1}{N+1}}
\gamma\left(\frac{1}{N+1},
\frac{w_it}{\delta_k}\right)}{\sum_{i=1}^n\gamma_k
(N+1)^{\frac{-N}{N+1}}\frac{\delta_k}{t}^{-\frac{1}{N+1}}
\gamma\left(\frac{1}{N+1}, \frac{w_it}{\delta_k}\right)}.
\end{eqnarray*}
Plugging the above $$R_4^N(t,w_1, w_2, \ldots, w_n,I_1)$$ in place of the function $$R(t,w_1, w_2,
\ldots, w_n)$$ in the definition of $\kappa_i, i=1,2,3$, in equation (\ref{eqn:inverse_general}), we obtain a guaranteed approximation for the density function of the performance metric.
\subsection{Time Function Scheduling}
\label{sec:TFS}
\vskip 0.2cm
Time function scheduling is another scheduling policy that was first
studied by Kleinrock, see, e.g. \cite{Kleinrock}. Further studies can
be found in \cite{FongSquillante} and \cite{LuSquillante}. Under this policy,
jobs are grouped in $C$ classes, and a weight $\nu_c>0$ is
assigned to each class $c=1,2,\ldots, C$. The server serves only one
job at a time. For any class $c$ job in the system, a time function is
calculated in terms of its cumulative waiting time and its class
weight $\nu_c$; in the literature, such as \cite{Kleinrock},
\cite{FongSquillante} and \cite{LuSquillante}, this function
typically takes the form of a linear function of the waiting time with
$\nu_c$ as the slope. The scheduling policy assigns the server to the
job that has the highest value of its time function.
Again, as in Sec. \ref{sec:DPS}, we assume that a job is
randomly assigned to a class, and the probability distribution is denoted by
$\mu_c$.
Following similar derivations as those for the SRPT rule in
\cite{grishechkin}, we can show that the above defined time function
scheduling policy can also be treated as a limit of Grishechkin
processor-sharing policies. For any positive integer
$N=1,2,\ldots$, let $c_N$ be a sequence of functions satisfying
$c_N(T_1) /c_N(T_2)\rightarrow \infty$, as $N\rightarrow \infty$ for
any fixed $T_2>T_1>0$. Then define,
\begin{eqnarray*}
A_N(T) = c_N(T) \mu_c {\bf 1}\{T \le \ell\}.
\end{eqnarray*}
Now, denote $Q^\phi_N$ as the state process of the
$N$-th system following a Grishechkin processor-sharing policy with
$A$ defined as above, and $Q^\phi$ as the state process of a system
that follows the time function rule. We can demonstrate that
$Q^\phi_N$ converge to $Q^\phi$ in distribution, as $N\rightarrow
\infty$.
Now, let us select $c_N(y) = y^{-N}$. Following the same calculation as for the foreground-background queue, we have,
\begin{eqnarray*}
{\hat h}_2(p,w)=\nu_c(N+1)^{\frac{-N}{N+1}}p^{-\frac{1}{N+1}} \gamma\left(\frac{1}{N+1},
\frac{w}{p}\right).
\end{eqnarray*}
Similarly, for any $I_1>0$, we have,
\begin{eqnarray*}
&& R_5^N(t, w_1, w_2, \ldots, w_n, I_1)= \frac{2}{5t} \sum_{k=0}^{I_1-1} {\rm Re} (Q_k),
\end{eqnarray*}
where
\begin{eqnarray*}
Q_k=\frac{1+\sum_{i=1}^n\mu_c\gamma_k
(N+1)^{\frac{-N}{N+1}}\frac{\delta_k}{t}^{-\frac{1}{N+1}}
\gamma\left(\frac{1}{N+1},
\frac{w_it}{\delta_k}\right)}{\sum_{i=1}^n\mu_c\gamma_k
(N+1)^{\frac{-N}{N+1}}\frac{\delta_k}{t}^{-\frac{1}{N+1}}
\gamma\left(\frac{1}{N+1}, \frac{w_it}{\delta_k}\right)}.
\end{eqnarray*}
The rest of the calculation can follow the same approach as for the foreground-background policy.
\section{Conclusions}
\label{sec:conclusions}
\vskip 0.2cm
In this paper, we analyze the stationary performance of a queueing
system under a general class of processor sharing scheduling
policies. Our main contribution is obtaining a solution to a complicated integral equation that plays a critical role in queueing analysis. The methods we derived for solving the integral equation appear to be of independent interest for many other problems in mathematics and engineering. Meanwhile, we adopted a sophisticated numerical Laplace inversion scheme, so that the relative error of the numerical inversion can be easily controlled. These results have important implications for the development of numerical software packages for performance analysis and optimal control.
As we have demonstrated in the paper, because they allow service capacities to be dynamically determined by the attained service of each job, this general class of scheduling policies, the Grishechkin processor-sharing policies, provides very powerful and flexible mathematical models. Their performance analysis leads to a deeper understanding and analysis of many popular scheduling policies. It also provides a building block for the potential development of adaptive scheduling policies that can be tailored to different performance requirements. A natural extension is to eliminate the independence assumption on the stochastic processes $A_i$ for different $i$. More precisely, let $(A_1, A_2, \dots, A_n)$ be dependent stochastic processes indexed by an infinite-dimensional sequence of the service attained by each job; then,
\begin{eqnarray*}
\frac{dV_i(t)}{dt} = \frac{ A_i(V_1, V_2,\ldots, ) }{\sum_{j=1}^{Q(t)} A_j(V_1, V_2,\ldots, )}
\end{eqnarray*}
We aim to use a more general branching process to characterize this type of scheduling policy. Once this is achieved, policies such as SRPT, FB and time function scheduling can be incorporated directly instead of resorting to limiting arguments.
The integral equation studied in the paper has a very complicated form; however, it is not extremely unusual. Quite often, applications in mathematics, engineering and economics also produce integral equations of a similar form. In fact, some stochastic control and optimal stopping problems
appear to be very closely related to integral equations of this type;
see, e.g., a recent result on the evaluation of the finite horizon
Russian option in \cite{Peskir}. Therefore, our other line of
research is to apply some of the techniques we developed here to
integral equations arising in other areas.
\section{Introduction}
Observed counts as time series have been attracting considerable
attention both in terms of data analysis and development of
methodological approaches. This type of data can appear in contexts as
diverse as Epidemiology (see for example \citeNP{Zeger} and
\shortciteNP{Davis2}) and Finance (\shortciteNP{Liesenfeld} and
\citeNP{Rydberg}). In this paper, the motivating datasets that will be
analyzed are the number of automobile production in Brazil, the number
of hospitalizations caused by Dengue Fever and the number of
deaths in Brazil.
Parameter and observation driven models provide a flexible framework
for modelling time series of counts. So far, a wide variety of models
for count time
series have been discussed in the literature usually embedded in the
framework of integer valued
ARMA type models (see for example \citeNP{biswas}). An overview of
these kind of models can be found in \citeN{Davis2} while
\citeN{Zeger} and \citeN{Chan} explicitly discuss and develop
estimation techniques for Poisson generalized linear models with an
autoregressive latent process in the mean.
\shortciteN{Davis1} proposed a flexible framework for modelling a wide
range of dependence structures using models for Poisson
counts. \shortciteN{jung} compares various models for time series of counts
which can account for discreteness, over dispersion and serial
correlation. \citeN{Zhu} proposed a negative binomial INGARCH model
applied to the Polio data discussed in \citeN{Zeger}.
This article extends the work of \shortciteN{Benjamin} by developing a
Bayesian approach to the generalized autoregressive moving average
(GARMA) model. This approach presents some gains in terms of estimation,
which can be made more adequate by using different loss functions. The use
of Bayesian selection criteria is also an important contribution of this
article. Last but not least, the application of discrete
models to important Brazilian real data provides a new perspective on
this field.
The remainder of this paper is organized as follows. Section \ref{sec:garma}
defines the GARMA model
with discrete distributions. The Bayesian approach and Bayesian
prediction are presented in Section \ref{sec:bayes}. Section \ref{simulations} describes the
simulation study where the performance of the Bayesian approach for
estimation and selection was investigated.
Real data applications are illustrated on Section 5. Finally, Section
\ref{conclusions} gives some concluding remarks.
\section{Generalized Autoregressive Moving Average Model}\label{sec:garma}
The GARMA model, introduced by \shortciteN{Benjamin}, assumes that the
conditional distribution of each observation $y_t$, for $t=1,\dots,n$
given the previous information set $\textit{F}_{t-1} =
(x_{1},\ldots,x_{t-1},y_{1},\ldots,y_{t-1},\mu_{1},\ldots,\mu_{t-1})$
belongs to the exponential family. The conditional density is given by,
\begin{equation}\label{defgarma}
f(y_t|\textit{F}_{t-1})=
\exp\left(\frac{y_t\alpha_t - b(\alpha_t)}{\varphi} + d(y_t,\varphi)\right),
\end{equation}
where $\alpha_t$ and $\varphi$ are the canonical and scale parameters
respectively, with $b(\cdot)$ and $d(\cdot)$ being specific functions
that define the particular exponential family. The conditional mean
and conditional variance of $y_t$ given $\textit{F}_{t-1}$ is
represented by the terms $\mu_t = E(y_t|\textit{F}_{t-1}) =
b'(\alpha_t)$ and $Var(y_t|\textit{F}_{t-1}) =
\varphi{b''}(\alpha_t)$, with $t=1,\ldots,n$.
Just as in Generalized Linear Models (GLM, \citeNP{McCullagh}), $\mu_t$,
is related to the linear predictor, $\eta_t$, by a twice-differentiable
one-to-one monotonic link function $g(\cdot)$. The linear predictor for
the GARMA model is given by,
\begin{equation}\label{predii33}
g(\mu_t) = \eta_t = x'_t\beta + \sum_{j=1}^p\phi_j\{g(y_{t-j}) -
x'_{t-j}\beta\} + \sum_{j=1}^q\theta_j\{g(y_{t-j}) - \eta_{t-j}\}.
\end{equation}
The GARMA($p$,$q$) model is defined by equations (\ref{defgarma}) and
(\ref{predii33}). For certain functions $g$, it may be necessary to
replace $y_t$ with $y_t^{*}$ in (\ref{predii33}) to avoid the
non-existence of $g(y_t)$ for certain values of $y_t$. The form
$y_t^{*}$ depends on the particular function $g(.)$ and is defined for
specific cases later.
The definition of the GARMA model allows for the inclusion of
exogenous variables $x'_t$; however, in this work the term
$x'_t\beta$ will be treated as a constant $\beta_0$. For count data
time series we will consider the following distributions.
\subsection{Poisson GARMA model}
Suppose that $y_t|\textit{F}_{t-1}$ follows a Poisson distribution
with mean $\mu_t$. Then,
\begin{equation}\label{eqi}
f(y_t|\textit{F}_{t-1}) = \exp\left\{y_t\log(\mu_t) - \mu_t - \log(y_{t}!) \right\}.
\end{equation}
and $Y_t|\textit{F}_{t-1}$ has distribution in the exponential family
with $\varphi = 1$, $\alpha_t=\log(\mu_t)$,
$b(\alpha_t)=\exp(\alpha_t)$, $d(y_t,\varphi)=-\log(y_{t}!)$ and
$\nu(\mu_t)=\mu_t$.
The canonical link function for this model is the logarithmic
function, so that the linear predictor is given by,
\begin{equation}\label{mediagarma1}
\log(\mu_t) = \beta_0 + \sum_{j=1}^{p}\phi_j\{\log{y_{t-j}^*}\} +
\sum_{j=1}^{q}\theta_j\{\log(y^*_{t-j}) - \log(\mu_{t-j})\},
\end{equation}
where $y_{t-j}^* = \max(y_{t-j},c)$, $0<c<1$. The Poisson GARMA model is
defined by equations (\ref{eqi}) and (\ref{mediagarma1}).
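For illustration, the following is a minimal {\tt R} sketch of how a Poisson GARMA(1,1) series can be simulated from equations (\ref{eqi}) and (\ref{mediagarma1}); the function name, the threshold $c=0.1$, the burn-in and the parameter values are illustrative choices, not part of the model definition.
\begin{verbatim}
# Minimal sketch: simulating a Poisson GARMA(1,1) series with the log link,
# where c0 plays the role of the constant c in y* = max(y, c).
sim_garma11_pois <- function(n, beta0, phi1, theta1, c0 = 0.1, burn = 100) {
  N <- n + burn
  y <- mu <- eta <- numeric(N)
  eta[1] <- beta0; mu[1] <- exp(beta0); y[1] <- rpois(1, mu[1])
  for (t in 2:N) {
    ystar  <- max(y[t - 1], c0)
    eta[t] <- beta0 + phi1 * log(ystar) + theta1 * (log(ystar) - eta[t - 1])
    mu[t]  <- exp(eta[t])
    y[t]   <- rpois(1, mu[t])
  }
  y[(burn + 1):N]
}

set.seed(1)
y <- sim_garma11_pois(500, beta0 = 0.8, phi1 = 0.5, theta1 = 0.3)
\end{verbatim}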
\subsection{Binomial GARMA model}
Suppose that $y_t|\textit{F}_{t-1}$ follows a binomial distribution
with mean $\mu_t$. Then,
\begin{footnotesize}
\begin{equation*}\label{eqiqq}
f(y_t|\textit{F}_{t-1}) =
\exp\left\{ {y_t}\log\left( \frac{\mu_t}{m-\mu_t} \right) +
m\log\left( \frac{m-\mu_t}{m} \right) + \log\left(
{\frac{\Gamma(m+1)}{\Gamma(y_t + 1)\Gamma(m - y_t + 1)}} \right)
\right\}.
\end{equation*}
\end{footnotesize}
The canonical link function for this model is the logit
function. The linear predictor is given by,
\begin{equation}\label{mediagarma12}
\log\left( \frac{\mu_t}{m-\mu_t} \right) = \beta_0 +
\sum_{j=1}^{p}\phi_j\{\log{y_{t-j}^*}\} +
\sum_{j=1}^{q}\theta_j\{\log(y^*_{t-j}) - \log(\mu_{t-j})\},
\end{equation}
where $y_{t-j}^* = \max(y_{t-j},c)$, $0<c<1$, and $m$ is known.
\subsection{Negative Binomial}
Let $y_t$ a time series such that $y_t|\textit{F}_{t-1}\sim NB(k,\mu_t)$. Then,
\begin{equation*}\label{bngarma}
f(y_t|\textit{F}_{t-1}) =
\exp\left(k\log\left\{\frac{k}{\mu_t+k}\right\} +
y_t\log\left\{\frac{\mu_t}{\mu_t+k}\right\} + \log\left\{
\frac{\Gamma(k+y_t)}{\Gamma(y_{t}+1)\Gamma(k)} \right\}\right),
\end{equation*}
which belongs to the exponential family with $k$ known.
The link function for this model is the logarithmic function
\begin{equation*}\label{mediagarma}
\log\left(\frac{k}{\mu_t+k}\right) = \beta_0 +
\sum_{j=1}^{p}\phi_j\{\log{y_{t-j}^*}\} +
\sum_{j=1}^{q}\theta_j\{\log(y^*_{t-j}) - \log(\mu_{t-j})\},
\end{equation*}
with $y_{t-j}^* = \max(y_{t-j},c), 0<c<1$.
\section{Bayesian Approach on GARMA Models}\label{sec:bayes}
\subsection{Defining the Prior Densities}
The logarithmic link
function guarantees a positive mean for any values of the vectors
$\boldsymbol{\beta} = (\beta_1,\ldots,\beta_m)$,
$\Phi=(\phi_1,\ldots,\phi_p)$ and
$\Theta=(\theta_1,\ldots,\theta_q)$. Thus, a multivariate
Gaussian prior will be proposed for each parameter vector.
\begin{eqnarray*}
\boldsymbol{\beta} &\sim & N(\boldsymbol{\mu_0},\sigma_0^{2}\boldsymbol{I_0}),\\
\Phi &\sim & N(\boldsymbol{\mu_1},\sigma_1^{2}\boldsymbol{I_1}) \\
\Theta &\sim & N(\boldsymbol{\mu_2},\sigma_2^{2}\boldsymbol{I_2})\\
\end{eqnarray*}
where $\boldsymbol{\mu_0},\boldsymbol{\mu_1},\boldsymbol{\mu_2}$ are
vectors of length $m$, $p$ and $q$ respectively,
$\sigma_0^{2}$, $\sigma_1^{2}$ and $\sigma_2^{2}$ represent the prior
variances, and $\boldsymbol{I_0}$, $\boldsymbol{I_1}$ and
$\boldsymbol{I_2}$ are $m\times m$, $p\times p$
and $q\times q$ identity matrices,
respectively. The construction of the multivariate Gaussian priors depends on
hyperparameters; when there is no prior knowledge on these parameters,
a very large variance can be adopted, making the prior densities
flat. The partial likelihood function for GARMA models can be
constructed as follows,
\begin{eqnarray*} \label{q1}
L(\boldsymbol{\beta},\Phi,\Theta|Y) &\propto& \prod_{t=r+1}^{n}f(y_t|F_{t-1}) \nonumber\\
&\propto&
\prod_{t=r+1}^{n}\exp\left(\frac{y_{t}\alpha_{t} -
b(\alpha_{t})}{\varphi} + d(y_{t},\varphi)\right),
\end{eqnarray*}
where $\alpha_t = g(\mu_t)$, which represents the link function given by
\begin{equation*}\label{q2}
g(\mu_t) = x'_t\boldsymbol{\beta} + \sum_{j=1}^p\phi_j\{g(y_{t-j}^*) -
x'_{t-j}\boldsymbol{\beta}\} + \sum_{j=1}^q\theta_j\{g(y_{t-j}^*) - g(\mu_{t-j})\},
\end{equation*}
for all $t = r+1, \ldots, n$.
The posterior density is obtained combining the likelihood
function with the prior densities. Let the vector $\boldsymbol{Y}=
(y_t,y_{t-1},\dots,y_1, x_t, x_{t-1},\dots,x_1,\dots)$
represent the necessary information to construct the likelihood
function. The posterior density is then given by,
\begin{equation}\label{jjj1}
\pi(\boldsymbol{\beta},{\Phi},{\Theta}|\textit{Y}) \propto
L(\boldsymbol{\beta},{\Phi},{\Theta}|\textit{Y})\pi_{0}(\boldsymbol{\beta},{\Phi},{\Theta}).
\end{equation}
\noindent However, the joint posterior density of parameters in the
GARMA models cannot
be obtained in closed form. Therefore, Markov chain Monte Carlo (MCMC)
sampling strategies will be employed for obtaining samples from this
joint posterior distribution. In particular, we use a
Metropolis-Hastings algorithm to yield the required realisations. We
adopt a sampling scheme where the parameters are
updated as a single block and at each iteration we generate new values
from a multivariate normal distribution centred around the maximum
likelihood estimates
with a variance-covariance proposal matrix given by the inverse
Hessian evaluated at the posterior mode.
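A minimal {\tt R} sketch of this block Metropolis--Hastings scheme is given below; here {\tt log.post} is assumed to return the logarithm of the posterior kernel in (\ref{jjj1}), {\tt m.hat} the maximum likelihood estimate and {\tt V.hat} the inverse Hessian, and all names are illustrative.
\begin{verbatim}
# Sketch: block Metropolis-Hastings with an independence proposal,
# multivariate normal centred at m.hat with covariance V.hat.
mh_garma <- function(log.post, m.hat, V.hat, n.iter = 20000) {
  U <- chol(V.hat)                                  # V.hat = U'U
  d <- length(m.hat)
  rprop <- function() m.hat + drop(t(U) %*% rnorm(d))
  logq  <- function(x)                              # proposal log-density (up to a constant)
    -0.5 * drop(crossprod(forwardsolve(t(U), x - m.hat)))
  out <- matrix(NA_real_, n.iter, d)
  cur <- m.hat
  for (i in 1:n.iter) {
    cand <- rprop()
    log.alpha <- log.post(cand) - log.post(cur) + logq(cur) - logq(cand)
    if (log(runif(1)) < log.alpha) cur <- cand      # accept / reject
    out[i, ] <- cur
  }
  out                                               # burn-in and thinning applied afterwards
}
\end{verbatim}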
\subsection{Bayesian prediction on GARMA models}
An important aspect of our Bayesian approach to GARMA
models is the ability to forecast future values of the time
series, $y_{t+h}$, $h\geq 1$, given all the information available until
time $t$.
To evaluate this forecasting it is necessary to find the predictive
density function $p(y_{t+h}|Y)$.
Denoting the information set
$\widehat{\textit{F}}_{t+h} = (\widehat{x}_{t+h},\dots
,x_{t},x_{t-1},\dots,$
$\widehat{y}_{t+h-1},\dots,y_{t},$ $y_{t-1},\dots$
$\widehat{\mu}_{t+h-1}$,$\dots,$ $\mu_{t},\mu_{t-1},\dots)$, where
$\widehat{y}_{t+h-i}=y_{t+h-i}$, if $h\leq i$, else
$\widehat{y}_{t+h-i}=E\{y_{t+h-i}|\widehat{\textit{F}}_{t+h-i} \}$,
$i=1,2,\ldots, h+1$. The general idea is that
$\widehat{\textit{F}}_{t+h}$ contains all the data observed until
time $t$; for the future time $t+h$, $h\geq 1$, the set
$\widehat{\textit{F}}_{t+h}$ is completed with forecasts of the
information necessary to estimate $y_{t+h}$. Starting with,
\begin{equation}\label{model}
f(y_{t+h}|\boldsymbol{\beta},{\Phi},{\Theta},\widehat{\textit{F}}_{t+h})
= \exp\left(\frac{y_{t+h}\alpha_{t+h} - b(\alpha_{t+h})}{\varphi} +
d(y_{t+h},\varphi)\right),
\end{equation}
The conditional mean and variance of $y_{t+h}$ given
$\widehat{\textit{F}}_{t+h}$ is represented by the terms
$\widehat{\mu}_{t+h} = $ $E(y_{t+h}|\widehat{\textit{F}}_{t+h})$
$=b'(\alpha_{t+h})$ and $Var(y_{t+h}|\textit{F}_{t+h})=$
$\varphi{b''}(\alpha_{t+h})$.
The $\mu_{t+h}$, is related to the predictor, $\eta_{t+h}$, by a
twice-differentiable one-to-one monotonic link function
$g(\cdot)$. The linear predictor for the GARMA model is given by,
\begin{equation}\label{predii33}
g(\mu_{t+h}) = \eta_{t+h} = \widehat{x'}_{t+h}\beta +
\sum_{j=1}^p\phi_j\{g(\widehat{y}_{t+h-j}) -
\widehat{x'}_{t+h-j}\beta\} +
\sum_{j=1}^q\theta_j\{g(\widehat{y}_{t+h-j}) -
\widehat{\eta}_{t+h-j}\}.
\end{equation}
With the equation \eqref{model} and posterior density \eqref{jjj1},
the predictive density for $y_{t+h}$ can be written as,
\begin{equation*}\label{jjhhh3}
p(y_{t+h}|\widehat{F}_{t+h}) =
\int_{\{\boldsymbol{\beta},{\Phi},{\Theta}\} \in
\Omega}f(y_{t+h}|\boldsymbol{\beta},{\Phi},{\Theta},\widehat{F}_{t+h})\pi(\boldsymbol{\beta},{\Phi},{\Theta}|
Y) d\boldsymbol{\beta} d\Phi d\Theta.
\end{equation*}
The aim is to determine the predictive density using the MCMC algorithm, thus
\begin{equation}\label{jjhhh5}
\widehat{p}(y_{t+h}|\widehat{\textit{F}}_{t+h}) =
\frac{1}{Q}\sum_{j=1}^{Q}f(y_{t+h}|\boldsymbol{\beta}^{(j)},{\Phi^{(j)}},{\Theta^{(j)}},\widehat{\textit{F}}_{t+h}).
\end{equation}
Given the predictive density, the next step is to evaluate the prediction,
$E(y_{t+h}|\widehat{\textit{F}}_{t+h}) = \hat{y}_{t+h}$.
\begin{equation}\label{jjhhh7}
E(y_{t+h}|\widehat{\textit{F}}_{t+h}) =
\int_{y_{t+h} \in R}y_{t+h}p(y_{t+h}|\widehat{\textit{F}}_{t+h})dy_{t+h}.
\end{equation}
\noindent Substituting equation \eqref{jjhhh5}, equation (\ref{jjhhh7}) can be rewritten as,
\begin{eqnarray}\label{jjhhh8}
&&
E(y_{t+h}|\widehat{\textit{F}}_{t+h}) = \nonumber\\
&&
\int_{y_{t+h} \in R}y_{t+h}
\left[ \int_{\{\boldsymbol{\beta},{\Phi},{\Theta}\} \in
\Omega}f(y_{t+h}|\boldsymbol{\beta},{\Phi},{\Theta},\widehat{\textit{F}}_{t+h})\pi(\boldsymbol{\beta},{\Phi},{\Theta}|
Y)d\boldsymbol{\beta}d\Phi d\Theta\right] dy_{t+h}.\nonumber\\
\end{eqnarray}
\noindent Using properties of integrals (interchanging the order of integration), we can rewrite \eqref{jjhhh8} as,
\begin{eqnarray*}\label{jjhhh9}
&&
E(y_{t+h}|\widehat{\textit{F}}_{t+h}) = \\
&&
\int_{\{\boldsymbol{\beta},{\Phi},{\Theta}\} \in \Omega} \left[
\int_{y_{t+h} \in R}y_{t+h}
f(y_{t+h}|\boldsymbol{\beta},{\Phi},{\Theta},\widehat{\textit{F}}_{t+h})dy_{t+h}\right]
\pi(\boldsymbol{\beta},{\Phi},{\Theta}| Y)d\boldsymbol{\beta} d\Phi
d\Theta,
\end{eqnarray*}
which can in turn be rewritten as
\begin{equation*}\label{jjhhh10}
E(y_{t+h}|\widehat{\textit{F}}_{t+h}) =
\int_{\{\boldsymbol{\beta},{\Phi},{\Theta}\} \in \Omega} \left[
E(y_{t+h}|\boldsymbol{\beta}
,{\Phi},{\Theta},\widehat{\textit{F}}_{t+h}) \right]
\pi(\boldsymbol{\beta} ,{\Phi},{\Theta}| Y)d\boldsymbol{\beta} d\Phi
d\Theta.
\end{equation*}
\noindent Now, denoting
$\mu_{t+h}(\boldsymbol{\beta},\Phi,\Theta,\widehat{\textit{F}}_{t+h})=
E(y_{t+h}|\boldsymbol{\beta},\Phi,\Theta,\widehat{\textit{F}}_{t+h})$ and
using the MCMC output vector
$(\boldsymbol{\beta}^{(j)},\Phi^{(j)},\Theta^{(j)})$,
$j=1,2,\ldots,Q$, it follows that\linebreak $E(y_{t+h}|\widehat{\textit{F}}_{t+h})$ can be
approximated by,
\begin{equation*}\label{jjhhh11}
\widehat{y}_{t+h} =
\frac{1}{Q}\sum_{k=1}^Q\mu_{t+h}(\boldsymbol{\beta}^{(k)},\Phi^{(k)},\Theta^{(k)},\widehat{\textit{F}}_{t+h}),
\end{equation*}
where
\begin{eqnarray}\label{predii33}
&& g(\mu_{t+h}^{(k)})= \nonumber\\
&& \widehat{x'}_{t+h}\boldsymbol{\beta}^{(k)} +
\sum_{j=1}^p\phi_j^{(k)}\{g(\widehat{y}_{t+h-j}) -
\widehat{x'}_{t+h-j}\boldsymbol{\beta}^{(k)}\} +
\sum_{j=1}^q\theta_j^{(k)}\{g(\widehat{y}_{t+h-j}) -
\widehat{\eta}_{t+h-j}^{(k)}\}.\nonumber\\
\end{eqnarray}
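As a concrete illustration for $h=1$ and the Poisson model with the linear predictor (\ref{mediagarma1}), the posterior mean forecast can be approximated from the retained draws as in the following {\tt R} sketch (the column names are illustrative).
\begin{verbatim}
# Sketch: one-step-ahead forecast for a Poisson GARMA(1,1) with the log link,
# averaging mu_{t+1}^{(k)} over the Q retained draws; 'draws' is the Q x 3
# matrix of MCMC samples and 'eta.last' the linear predictor at time t.
one_step_forecast <- function(draws, y.last, eta.last, c0 = 0.1) {
  ystar <- max(y.last, c0)
  eta   <- draws[, "beta0"] + draws[, "phi1"] * log(ystar) +
           draws[, "theta1"] * (log(ystar) - eta.last)
  mean(exp(eta))                       # (1/Q) * sum_k mu_{t+1}^{(k)}
}
\end{verbatim}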
Credible intervals for $\widehat{y}_{t+h}$ can be calculated using
the $100\alpha\%$, and $100(1-\alpha)\%$ quantiles of the MCMC
sample $\mu^{(k)}_{t+h}$, with $k=1,\ldots,Q$. An approach to estimate
the credible interval of $\widehat{y}_{t+h}$ is the Highest
Posterior Density (HPD), see \citeN{hpd}.
A $100(1-\alpha)\%$ HPD region for $\widehat{y}_{t+h}$ is a subset $C
\subset {\mathbb R}$ defined by $C = \{y_{t+h} :
p(y_{t+h}|\widehat{\textit{F}}_{t+h})\geq \kappa\}$, where $\kappa$ is
the largest number such that
\begin{equation}
\int_{C}p(y_{t+h}|\widehat{\textit{F}}_{t+h})dy_{t+h}=1-\alpha.
\end{equation}
We can use the $\widehat{p}(y_{t+h}|\widehat{\textit{F}}_{t+h})$ MCMC
estimates, given by the equation \eqref{jjhhh5}, to estimate the
$100(1-\alpha)\%$ HPD region.
We used the following algorithm to calculate the credible intervals
for the predictions.
\begin{itemize}
\item[1.] Consider a sequence of forecast values $\widehat{y}_{t+h}$ for $h=1,\ldots,H$.
\item[2.] Take $h=1$, $k=0$, $y_{t+h}^{(0)}=0$, $S_{t+h}^{(0)}=0$ and also initiate $LB=0$, $UB$=0.
\item[3.] Using the initial values evaluate the equation:
\begin{equation*}
f(y_{t+h}^{(k)}|\beta^{(j)},\Phi^{(j)},\Theta^{(j)},\widehat{\textit{F}}_{t+h})
= \exp\left(\frac{y_{t+h}^{(k)}\alpha_{t+h}^{(j)} -
b(\alpha_{t+h}^{(j)})}{\varphi} + d(y_{t+h}^{(k)},\varphi)\right),
\end{equation*}
and also,
\begin{equation*}
\widehat{p}(y_{t+h}^{(k)}|\widehat{\textit{F}}_{t+h}) =
\frac{1}{Q}\sum_{j=1}^{Q}f(y_{t+h}^{(k)}|\beta^{(j)},{\Phi^{(j)}},{\Theta^{(j)}},\widehat{\textit{F}}_{t+h}).
\end{equation*}
\item[4.] Using $\widehat{p}(y_{t+h}^{(k)}|\widehat{\textit{F}}_{t+h})$ compute $S_{t+h}^{(k+1)}$ with
\begin{equation*}
S_{t+h}^{(k+1)}=S_{t+h}^{(k)}+\widehat{p}(y_{t+h}^{(k)}|\widehat{\textit{F}}_{t+h})
\end{equation*}
\item[5.] If $LB=0$ and $S_{t+h}^{(k+1)}\geq\delta$, $\rightarrow$ $y_{t+h,\delta}=y_{t+h}^{(k)}$ and $LB=1$.
\item[6.] If $UB=0$ and $S_{t+h}^{(k+1)}\leq(1-\delta)$, $\rightarrow$ $y_{t+h,(1-\delta)}=y_{t+h}^{(k)}$ and $UB=1$.
\item[7.] If $LB=0$ or $UB=0$, take $k=k+1$ and
$y_{t+h}^{(k)}=y_{t+h}^{(k-1)}+1$, and repeat steps 3 to 6 until $LB=1$
and $UB=1$.
\end{itemize}
The percentiles $100\delta\%$ and $100(1-\delta)\%$ are represented by
$y_{t+h,\delta}$ and $y_{t+h,(1-\delta)}$ respectively, and given by,
\begin{equation*}
y_{t+h,\delta}=\max\left\{y_{t+h}^{(r)} \big| \sum_{k=1}^{r}
\widehat{p}(y_{t+h}^{(k)}\big|\widehat{\textit{F}}_{t+h})\leq \delta
\right\}.
\end{equation*}
\begin{equation*}
y_{t+h,(1-\delta)}=\min\left\{y_{t+h}^{(r)}\big| \sum_{k=1}^{r}
\widehat{p}(y_{t+h}^{(k)}\big|\widehat{\textit{F}}_{t+h})\geq
(1-\delta) \right\}.
\end{equation*}
and the $100(1-\delta)\%$ credible interval for the predictions is
denoted by $CI_{(1-\delta)}=\left[y_{t+h,\delta};y_{t+h,(1-\delta)}\right]$.
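A compact {\tt R} sketch of these percentile computations is given below, assuming the estimated predictive probabilities $\widehat{p}(y_{t+h}^{(k)}|\widehat{\textit{F}}_{t+h})$ have already been evaluated on the support $0,1,2,\ldots$ and stored in the (illustrative) vector {\tt p.hat}.
\begin{verbatim}
# Sketch: credible limits from the estimated predictive pmf 'p.hat' over
# y = 0, 1, 2, ... following the percentile definitions above
# (p.hat is assumed to cover enough of the probability mass).
pred_interval <- function(p.hat, delta = 0.025) {
  y   <- seq_along(p.hat) - 1
  cum <- cumsum(p.hat)
  lower <- max(y[cum <= delta], 0)        # y_{t+h, delta}
  upper <- min(y[cum >= 1 - delta])       # y_{t+h, 1-delta}
  c(lower = lower, upper = upper)
}
\end{verbatim}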
\section{Simulation Study}\label{simulations}
In this section we conduct a simulation study for negative binomial\linebreak
GARMA($p,q$) models with different orders $p$ and $q$.
The actual parameter values used to simulate the artificial series are shown in
Table \ref{tab1} and the parameter $k$ of the negative binomial was
fixed at $k=15$. These values were chosen taking into account that a
GARMA model can be nonstationary, since these models are in the exponential
family and the variance function depends on the mean. So, we opted to
choose parameter values that would generate moderate values for the time series.
The experiment was replicated $m=1000$ times for each model. For each
dataset we used the prior distributions as described in Section
\ref{sec:bayes} with mean zero and variance 200. We then drew samples
from the posterior distribution
discarding the first 1000 draws as burn-in and keeping every 3rd
sampled value resulting in a final sample of 5000 values.
All the computations were implemented
using the open-source statistical software language and environment {\tt R}
\cite{r10}.
\begin{table}[h]
\caption{Parameters values to simulate from Negative Binomial GARMA($p$,$q$).}
\label{tab1}
\begin{center}
\renewcommand{\arraystretch}{1.2}
\begin{scriptsize}
\begin{tabular}{|c|c|c|c|c|c|c}
\hline
Order & $\beta_0$ & $\phi_1$ & $\phi_2$ & $\theta_1$ & $\theta_2$ \\
\hline
(1,1) & 0.80 & 0.50 & - & 0.30 & - \\
(1,2) & 1.00 & 0.30 & - & 0.40 & 0.25 \\
(2,1) & 0.55 & 0.30 & 0.40 & 0.20 & - \\
(2,2) & 0.65 & 0.30 & 0.40 & 0.25 & 0.35 \\
\hline
\end{tabular}
\end{scriptsize}
\end{center}
\end{table}
The performance of the Bayesian estimation was evaluated using three
metrics: the corrected bias (CB), the corrected error (CE) and the
mean acceptance rates in the MCMC algorithm called Acceptance
Probabilities (AP). These metrics are defined as,
\begin{eqnarray*}
CB &=& \frac{1}{m} \sum_{i=1}^m
\left|\frac{\theta-\hat{\theta}^{(i)}}{\theta}\right|,\label{bias}\\
CE^2 &=&
\frac{1}{Var}\frac{1}{m}\sum_{i=1}^m(\hat{\theta}^{(i)}-\theta)^2\label{mse}\\
AP &=& \frac{1}{m} \sum_{i=1}^m \hat{r}^{(i)}\label{ap},
\end{eqnarray*}
where $\hat{\theta}^{(i)}$ and $\hat{r}^{(i)}$ are the estimate of
parameter $\theta$ and the computed acceptance rate respectively for
the $i$-th replication, $i=1,\ldots,m$. In this paper we take the
posterior means of $\theta$ as point estimates. Also, the variance
term ($Var$) that appears in the definition of CE is the sample
variance of $\hat{\theta}^{(1)},\dots,\hat{\theta}^{(m)}$.
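For reference, these metrics can be computed along the lines of the following {\tt R} sketch, where {\tt theta.hat} holds the $m$ replicated posterior means, {\tt r.hat} the corresponding acceptance rates and {\tt theta} the true value (names are illustrative).
\begin{verbatim}
# Sketch: corrected bias, corrected error and mean acceptance probability
# for one parameter, given the m replicated estimates.
cb <- mean(abs((theta - theta.hat) / theta))
ce <- sqrt(mean((theta.hat - theta)^2) / var(theta.hat))
ap <- mean(r.hat)
\end{verbatim}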
The estimation results appear in Table \ref{tab2} where the posterior
mean and variance (in brackets) as well as the aforementioned metrics
are shown for each model and parameter. These results indicate good
properties with relatively small values of the corrected bias (CB),
values of the corrected error (CE) around 1 and acceptance
probabilities between 0.20 and 0.70.
We also include Table \ref{ga3} with the
proportions of correct model choice using three popular Bayesian model selection
criteria. Specifically, we adopt the expected Bayesian information criterion
(EBIC, \shortciteNP{carl00}), the Deviance information criterion (DIC,
\shortciteNP{Spiegelhalter}) and the conditional predictive ordinate
(CPO, \shortciteNP{gelfand}) to
select the order of the GARMA models. Each column in this table
contains the model order and the associated proportions of correct
model choice according to EBIC, DIC and CPO criteria. Higher
proportions of correct model choices are observed as the sample sizes
increase for all models and criteria. Also, EBIC and CPO tend to
perform better for GARMA(1,1) and GARMA(1,2) models but none performed
particularly well with GARMA(2,2) models.
Finally, this simulation study was carried out also for the Poisson and
binomial distributions with results similar to the ones shown. These
results are not included to save space.
\setlength{\tabcolsep}{1mm}
\begin{table}[h]
\caption{Monte Carlo experiments. Corrected bias, corrected errors and
mean acceptance rates for the Bayesian estimation of Negative
Binomial GARMA($p$,$q$) model.}
\label{tab2}
\begin{center}
\renewcommand{\arraystretch}{1.2}
\begin{scriptsize}
\begin{tabular}{c|cccc|cccc}
\hline
Parameter & Mean(Var)(1,1)& CB(1,1)& CE(1,1)& AP(1,1)& Mean(Var)(1,2)& CB(1,2)& CE(1,2)& AP(1,2)\\
\hline
$\beta_0$ & 0.8571(0.0065)& 0.0984 & 1.2247 & 0.3746 & 1.0823(0.0196)& 0.1276 & 1.1592 & 0.3182 \\
$\phi_1$ & 0.4695(0.0026)& 0.0947 & 1.1637 & 0.3511 & 0.2554(0.0097)& 0.2820 & 1.0965 & 0.2702 \\
$\phi_2$ & - & - & - & - & - & - & - & - \\
$\theta_1$& 0.2927(0.0033)& 0.1531 & 1.0071 & 0.6480 & 0.4099(0.0091)& 0.1900 & 1.0048 & 0.4327 \\
$\theta_2$& - & - & - & - & 0.2478(0.0037)& 0.1929 & 1.0001 & 0.5882 \\
\hline
\end{tabular}
\begin{tabular}{c|cccc|cccc}
\hline
Parameter & Mean(Var)(2,1) & CB(2,1)& CE(2,1)& AP(2,1)& Mean(Var)(2,2) & CB(2,2)& CE(2,2)& AP(2,2)\\
\hline
$\beta_0$ & 0.6198(0.0097) & 0.1740 & 1.2240 & 0.2786 & 0.7344(0.0079) & 0.1497 & 1.3171 & 0.3397\\
$\phi_1$ & 0.2798(0.0152) & 0.3295 & 1.0127 & 0.1422 & 0.2887(0.0054) & 0.1959 & 1.0111 & 0.2282\\
$\phi_2 $ & 0.3794(0.0066) & 0.1661 & 1.0307 & 0.2091 & 0.3414(0.0049) & 0.1485 & 1.0787 & 0.2348\\
$\theta_1$& 0.2012(0.0182) & 0.5334 & 0.9995 & 0.3214 & 0.2430(0.0052) & 0.2307 & 1.0040 & 0.5237\\
$\theta_2$& - & - & - & - & 0.3464(0.0027) & 0.1193 & 1.0017 & 0.6614\\
\hline
\end{tabular}
\end{scriptsize}
\end{center}
\end{table}
\begin{table}[ht!]
\caption{Proportions of correct model chosen via Bayesian criteria with
Negative Binomial GARMA($p$,$q$) models.}
\label{ga3}
\begin{scriptsize}
\begin{center}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{ccccc}
\hline
\multicolumn{5}{c}{\textbf{EBIC}}\\
\hline
Size & GARMA(1,1) & GARMA(1,2) & GARMA(2,1) & GARMA(2,2)\\
\hline
200 & 0.9379 & 0.3042 & 0.5626 & 0.4450 \\
500 & 0.9799 & 0.6156 & 0.8048 & 0.5825 \\
1000 & 0.9852 & 0.9039 & 0.8471 & 0.6772 \\
\hline
\end{tabular}
\begin{tabular}{ccccc}
\hline
\multicolumn{5}{c}{\textbf{DIC}}\\
\hline
Size & GARMA(1,1) & GARMA(1,2) & GARMA(2,1) & GARMA(2,2)\\
\hline
200 & 0.6316 & 0.4804 & 0.5445 & 0.4437 \\
500 & 0.6876 & 0.6476 & 0.6221 & 0.4925 \\
1000 & 0.7155 & 0.7364 & 0.6469 & 0.7154 \\
\hline
\end{tabular}
\begin{tabular}{ccccc}
\hline
\multicolumn{5}{c}{\textbf{CPO}}\\
\hline
Size & GARMA(1,1) & GARMA(1,2) & GARMA(2,1) & GARMA(2,2) \\
\hline
200 & 0.8078 & 0.3493 & 0.5575 & 0.4112 \\
500 & 0.8188 & 0.5925 & 0.5993 & 0.4625 \\
1000 & 0.8325 & 0.7266 & 0.6152 & 0.7317 \\
\hline
\end{tabular}
\end{center}
\end{scriptsize}
\end{table}
\section{Bayesian Real Data Analysis}
In this section, we apply the methodology described so far to three
real time series of count data. For each series we estimated
GARMA($p,q$) models with varying orders and computed the Bayesian
selection criteria EBIC, DIC and CPO for model comparison. In all
cases we used the diagnostic proposed by \citeN{geweke} to assess
convergence of the chains. This is based on a test for equality
of the means of the first and last part of the chain (by default
the first 10$\%$ and the last 50$\%$). If the samples are drawn from
the stationary distribution, the two means are equal and
the statistic has an asymptotically standard normal
distribution. The calculated values of the Geweke statistics were all
between -2 and 2, which is an indication of convergence of
the Markov chains.
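In practice this diagnostic can be computed, for instance, with the {\tt geweke.diag} function of the {\tt coda} package in {\tt R}, as in the sketch below, where {\tt draws} is an illustrative name for the matrix of posterior samples.
\begin{verbatim}
# Sketch: Geweke diagnostic (first 10% vs last 50% of the chain) per parameter.
library(coda)
geweke.diag(mcmc(draws), frac1 = 0.1, frac2 = 0.5)
\end{verbatim}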
\subsection{Automobile data set}
The first real data set analysed is the number of automobile
production in Brazil between January 1993 and December 2013. The data
is available from {\tt http://www.anfavea.com.br/tabelas.html}.
The original observations were divided by 1000 to reduce the magnitude
of the data.
\begin{figure}[h]\centering
\includegraphics[angle=270,width=0.8\linewidth]{plot_auto}
\caption{Graph of number of automobile production in Brazil.}
\label{auto1}
\end{figure}
The automotive industry is extremely important as it can influence
other industries' activities. For example, 50\% of the world rubber
production, 25\% of the world glass production and 15\% of the world iron
production are destined for the automotive industry.
The behaviour of the data along time depicted in Figure \ref{auto1}
seems to indicate that an extra term should be included to take into
account a (possibly nonlinear) trend. The term $\beta_{{\exp}}=\log(t)$ was
then included in the model equation
to account for this long-term increase.
\setlength{\tabcolsep}{0.5mm}
\begin{table}[h]
\caption{Bayesian selection criteria for the number of automobile production in Brazil.}
\label{tab4}
\begin{center}
\renewcommand{\arraystretch}{1.2}
\begin{scriptsize}
\begin{tabular}{ccccccc}
\hline
Poisson & GARMA(1,0)& GARMA(2,0) & GARMA(1,1) & GARMA(1,2) & GARMA(2,1) & GARMA(2,2)\\
\hline
EBIC & 3046.61 & 3074.24 & 3045.70 & 3074.45 & 3071.21 & 3067.97 \\
DIC & 3032.06 & 3064.97 & 3030.38 & 3064.55 & 3046.02 & 3065.89 \\
CPO &-1519.88 &-1536.12 &-1519.65 & -1535.15 & -1536.76 & -1540.29 \\
\hline
Binomial & GARMA(1,0)& GARMA(2,0) & GARMA(1,1) & GARMA(1,2) & GARMA(2,1) & GARMA(2,2)\\
\hline
EBIC & 3559.79 & 3814.33 & 3559.11 & 3813.91 & 3759.32 & 3738.38 \\
DIC & 3545.12 & 3736.57 & 3544.19 & 3794.01 & 3738.90 & 3713.19 \\
CPO &-1782.36 &-1930.30 & -1780.67 & -1949.77 & -1929.13 & -1909.67 \\
\hline
Negative Binomial& GARMA(1,0) & GARMA(2,0) &\textbf{GARMA(1,1)} & GARMA(1,2) & GARMA(2,1) & GARMA(2,2)\\
\hline
EBIC & 2547.67 & 2792.38 &\textbf{2546.76} & 2799.21 & 2787.56 & 2785.10 \\
DIC & 2537.71 & 2777.16 &\textbf{2531.85} & 2779.28 & 2767.32 & 2760.13 \\
CPO & -1269.48 & -1427.72 &\textbf{-1267.34} & -1430.66 &-1426.47 &-1423.09 \\
\hline
\end{tabular}
\end{scriptsize}
\end{center}
\end{table}
The results regarding selection criteria are summarized in Table
\ref{tab4}. We note that the three criteria indicate that the most appropriate
model was the GARMA(1,1) Negative Binomial.
Also, Table \ref{tab5}
presents the estimation results for the selected GARMA(1,1) Negative
Binomial model with the extra parameter fixed at $k=150$.
\begin{table}[h]
\caption{Estimation results. GARMA(1,1) Negative
Binomial model for number of automobile production in Brazil.}
\label{tab5}
\begin{center}
\begin{footnotesize}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{ccccc}
\hline
Parameter & Mean & Variance & HPD Credible Interval & AP \\\hline
$\beta_0$ & 0.3834 & 0.0006 & (0.3543; 0.4159) & 0.3710 \\
$\beta_{{\exp}}$& 0.0850 & 0.0002 & (0.0814; 0.0884) & 0.3163 \\
$\phi_1$ & 0.8447 & 0.0005 & (0.8379; 0.8521) & 0.3038 \\
$\theta_1$ & 0.1149 & 0.0005 & (0.1064; 0.1244) & 0.6323 \\
\hline
\end{tabular}
\end{footnotesize}
\end{center}
\end{table}
We also performed a residual analysis based on the so-called quantile
residuals, which are the common choice for generalized linear models.
In fact, quantile residuals are the only
useful residuals for binomial, negative binomial or Poisson data when
the response takes on only a small number of distinct values (\citeNP{dunn}).
These are given by
$r_t=\Phi^{-1}(\textbf{F}_{y_t}(y_t|F_{t-1}))$ where
$\textbf{F}_{y_t}$ represent the cumulative distribution function of
the associated discrete distribution. In practice, when dealing with
discrete distributions we need to introduce some randomization to
produce continuous normal residuals.
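A minimal {\tt R} sketch of such randomised quantile residuals for a fitted negative binomial model, with estimated conditional means {\tt mu.hat} and dispersion $k$ (illustrative names), is the following.
\begin{verbatim}
# Sketch: randomised quantile residuals r_t = qnorm(u_t), with u_t drawn
# uniformly between F(y_t - 1) and F(y_t) to handle the discreteness.
quantile_residuals_nb <- function(y, mu.hat, k) {
  a <- pnbinom(y - 1, size = k, mu = mu.hat)
  b <- pnbinom(y,     size = k, mu = mu.hat)
  qnorm(runif(length(y), min = a, max = b))
}
\end{verbatim}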
The residual analysis is summarized in Figure \ref{fig:res1}, which
indicates that the residuals are uncorrelated
and Gaussian distributed with mean 0.0767 and standard deviation 1.2295.
The Kolmogorov-Smirnov and Lilliefors normality tests returned $p$-values
of 0.4502 and 0.0743 respectively, which provides evidence for
the Gaussian assumption (\citeNP{kolmo1}).
\begin{figure}[h]\centering
\includegraphics[angle=270,width=0.9\linewidth]{res_auto}
\caption{Residual Analysis for the number of automobile production in
Brazil under a GARMA(1,1) negative binomial model.}
\label{fig:res1}
\end{figure}
Finally, we performed a prediction exercise using the last 9
observations of the original series as follows. For each $k=1,\dots,9$ the
GARMA(1,1) negative binomial model was fitted to the series
$y_1,\dots,y_{n-k}$ and an out-of-sample one-step ahead prediction
$\hat{y}_{n-k+1}$ was produced. These predictions can then be compared
with the true values.
The results are illustrated in Figure \ref{fig:pred1} from which we
can see that the prediction errors are overall small.
A formal comparison was made by calculating the mean absolute percentage
error (MAPE, \citeNP{hyndman}) and we obtained the value $6.07\%$.
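For reference, the MAPE over the held-out points is computed as in the one-line {\tt R} sketch below, where {\tt y.obs} and {\tt y.pred} are illustrative names for the observed values and the one-step-ahead forecasts.
\begin{verbatim}
mape <- 100 * mean(abs((y.obs - y.pred) / y.obs))   # in percent
\end{verbatim}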
\begin{figure}[h]\centering
\includegraphics[angle=270,width=0.7\linewidth]{predauto}
\caption{Predictions for the number of automobile production in Brazil
with a GARMA(1,1) Negative Binomial model.}
\label{fig:pred1}
\end{figure}
\subsection{Epidemiology data set}
This real data set comprises the number of hospitalizations caused by Dengue Fever
in Campina Grande city (Brazil) between January 1998 and October 2003.
Dengue Fever is transmitted by several species of mosquito within the genus
{\it Aedes}, principally {\it A. aegypti}. The {\it Aedes} mosquito is easily
identifiable by the distinctive black and white stripes on its
body. It prefers to lay eggs in clean, stagnant water. Analysing the
autocorrelation function of this data, a seasonal behaviour can be identified.
This is because the summer months in this region have a higher volume
of rain, thus leading to more clean and
stagnant water. Therefore we included two seasonal components in the model,
$\beta_{{S}_1}$ and $\beta_{{S}_2}$, using {\tt cosine} and
{\tt sine} functions
respectively, with a period of 12 months. These
components are expected to improve model estimation.
\begin{figure}[h!]
\centering
\includegraphics[angle=270,width=0.8\linewidth]{graph_dengue}
\caption{Number of hospitalizations caused by Dengue Fever.}
\end{figure}
\setlength{\tabcolsep}{0.5mm}
\begin{table}[h]
\caption{Bayesian selection criteria for the number of hospitalizations
caused by Dengue Fever.}
\label{tab6}
\begin{center}
\renewcommand{\arraystretch}{1.2}
\begin{scriptsize}
\begin{tabular}{ccccccc}
\hline
Poisson & GARMA(1,0)& GARMA(2,0)& GARMA(1,1)& GARMA(1,2) & GARMA(2,1) & GARMA(2,2) \\
\hline
EBIC & 632.82 & 633.13 & 633.73 & 632.48 & 632.65 & 628.20 \\
DIC & 580.66 & 581.04 & 581.86 & 581.25 & 580.32 & 578.31 \\
CPO &-794.03 &-794.87 &-794.69 &-794.11 &-793.83 &-792.34 \\
\hline
Binomial& GARMA(1,0)& GARMA(2,0)& GARMA(1,1)& GARMA(1,2) & GARMA(2,1) & GARMA(2,2) \\
\hline
EBIC & 690.62 & 689.28 & 690.34 & 656.56 & 688.82 & 655.30 \\
DIC & 679.14 & 679.92 & 679.42 & 642.12 & 674.83 & 637.19 \\
CPO &-345.89 &-346.16 &-345.76 &-327.13 &-348.04 &-324.83 \\
\hline
Negative Binomial & GARMA(1,0)& GARMA(2,0)& GARMA(1,1) &\textbf{GARMA(1,2)} & GARMA(2,1) & GARMA(2,2)\\
\hline
EBIC & 507.89 & 508.97 & 509.36 &\textbf{504.12} & 509.09 & 505.89\\
DIC & 519.66 & 520.93 & 520.22 &\textbf{518.30} & 523.11 & 523.19\\
CPO &-256.35 &-255.88 &-256.10 &\textbf{-254.24}&-257.64 &-256.26\\
\hline
\end{tabular}
\end{scriptsize}
\end{center}
\end{table}
The results regarding the selection criteria are summarized in Table
\ref{tab6}, from which we can conclude that the most appropriate
model was the GARMA(1,2) Negative Binomial. Note that all three
criteria gave the same indication.
Table \ref{tab7} shows the estimation results for the selected GARMA(1,2) Negative
Binomial model with the extra parameter fixed at $k=30$.
\begin{table}[h]
\caption{Estimation results. GARMA(1,2) negative binomial model for
the number of hospitalizations caused by Dengue Fever.}
\label{tab7}
\begin{center}
\begin{footnotesize}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{cccll}
\hline
Parameter & Mean & Variance &HPD Credible Interval & AP \\
\hline
$\beta_0$ & 1.1916 & 0.0566 & ( 0.7443; 1.6068) & 0.1090\\
$\beta_{S_1}$&-0.2571 & 0.0035 & (-0.3753;-0.1407) & 0.6196\\
$\beta_{S_2}$& 0.1424 & 0.0040 & ( 0.0156; 0.2649) & 0.5858\\
$\phi_1$ & 0.5796 & 0.0078 & ( 0.4230; 0.7456) & 0.0968\\
$\theta_1$ & 0.1214 & 0.0112 & (-0.0853; 0.3273) & 0.3391\\
$\theta_2$ & 0.0987 & 0.0053 & (-0.0470; 0.2358) & 0.3978\\
\hline
\end{tabular}
\end{footnotesize}
\end{center}
\end{table}
\begin{figure}[h!]\centering
\includegraphics[angle=270,width=0.7\linewidth]{resdeng}
\caption{Residual Analysis of Hospitalizations caused by Dengue.}
\label{fig:res2}
\end{figure}
Again we performed a residual analysis based on quantile
residuals. This is summarized in Figure \ref{fig:res2}, which indicates
that the residuals are uncorrelated
and approximately Gaussian with mean 0.0258 and standard deviation 1.5571.
The Kolmogorov-Smirnov and Shapiro-Wilk normality tests returned
$p$-values of 0.4856 and 0.1176 respectively, thus giving evidence for
the Gaussian assumption.
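A minimal sketch of these residual diagnostics (sample moments of the quantile residuals plus normality tests via {\tt scipy.stats}) is given below; it is only an illustration of the checks reported here and in the other examples, not the code used for the analysis.
\begin{verbatim}
# Sketch of the residual diagnostics reported in the text: sample mean and
# standard deviation of the quantile residuals, plus Kolmogorov-Smirnov and
# Shapiro-Wilk normality tests (scipy.stats).  Note that plugging estimated
# parameters into the KS test makes its p-value approximate.
import numpy as np
from scipy import stats

def residual_diagnostics(residuals):
    r = np.asarray(residuals, dtype=float)
    mean, sd = r.mean(), r.std(ddof=1)
    ks_stat, ks_p = stats.kstest(r, "norm", args=(mean, sd))
    sw_stat, sw_p = stats.shapiro(r)
    return {"mean": mean, "sd": sd, "ks_p": ks_p, "shapiro_p": sw_p}
\end{verbatim}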
A similar prediction exercise was performed for this data set: for each
$k=1,\dots,9$ we fitted a GARMA(1,2) negative binomial model to
$y_1,\dots,y_{n-k}$ and computed an out-of-sample one-step ahead prediction
$\hat{y}_{n-k+1}$.
Figure \ref{fig:pred2} shows the predictions,
prediction intervals and the real observations for comparison. It can
be seen that, although relatively close to the actual values,
predictions for May, June, July and August 2003
are consistently below the
observations. The MAPE criterion was calculated as $47.81\%$.
\begin{figure}[h!]
\centering
\includegraphics[angle=270,width=0.7\linewidth]{preddengue}
\caption{Predictions for the number of hospitalizations caused by Dengue Fever with a GARMA(1,2) Negative Binomial model.}
\label{fig:pred2}
\end{figure}
\subsection{Mortality data set}
Our last real data set is the number of deaths in Brazil between January
1984 and December 2007. This data is available from the Brazilian Health
Ministry at {\tt http://www2.datasus.gov.br/DATASUS}
and is depicted in Figure \ref{fig:deaths}. As in the
first example, the original series was divided by 1000 to reduce the
magnitude of the data.
There is again a case for including an extra term here, since the series
exhibits a long-term (possibly nonlinear) increase. So, a new covariate
$\log(t)$, with coefficient $\beta_{\exp}$, was added to
the model equation as this is expected to improve model estimation.
\begin{figure}[h!]
\centering
\includegraphics[angle=270,width=0.8\linewidth]{plotmort}
\caption{Number of deaths in Brazil.}
\label{fig:deaths}
\end{figure}
\begin{table}[h]
\caption{Bayesian selection criteria using the number of deaths in Brazil.}
\label{tab8}
\begin{center}
\begin{scriptsize}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{ccccccc}
\hline
Poisson & GARMA(1,0)& GARMA(2,0) & GARMA(1,1) & GARMA(1,2) & GARMA(2,1)& GARMA(2,2) \\
\hline
EBIC & 1549.55 & 1560.41 & 1546.60 & 1566.37 & 1565.79 & 1566.80 \\
DIC & 1531.53 & 1566.68 & 1531.10 & 1570.34 & 1571.11 & 1573.77 \\
CPO & -766.42 & -773.35 & -765.49 & -784.20 & -784.99 &-785.48 \\
\hline
Binomial& \textbf{GARMA(1,0)}& GARMA(2,0) & GARMA(1,1) & GARMA(1,2) & GARMA(2,1) & GARMA(2,2) \\
\hline
EBIC & \textbf{1351.42} & 1412.95 & 1357.52 & 1391.79 & 1399.13 & 1404.81 \\
DIC & \textbf{1341.42} & 1391.56 & 1342.28 & 1371.54 & 1378.10 & 1379.88 \\
CPO & \textbf{-670.73} & -705.64 & -671.10 & -695.29 & -716.72 & -708.52 \\
\hline
Negative Binomial & GARMA(1,0) & GARMA(2,0) & GARMA(1,1) & GARMA(1,2) & GARMA(2,1) & GARMA(2,2) \\
\hline
EBIC & 1705.33 & 1709.12 & 1700.61 & 1735.23 & 1734.39 & 1738.30 \\
DIC & 1693.59 & 1696.61 & 1685.18 & 1714.76 & 1713.51 & 1712.47 \\
CPO &-851.35 &-855.13 &-842.01 & -866.51 &-866.45 &-866.47 \\
\hline
\end{tabular}
\end{scriptsize}
\end{center}
\end{table}
Looking at the Bayesian selection criteria given in Table \ref{tab8} we
can conclude that the best model for this particular data is the
GARMA(1,0) Binomial model. There are only three parameters in this model and
the estimation results are shown in Table \ref{tab9}. Here the extra
parameter was fixed at $m=45$.
\begin{table}[h]
\caption{Estimation results. GARMA(1,0) binomial model for the number of deaths in Brazil.}
\label{tab9}
\begin{center}
\begin{footnotesize}
\renewcommand{\arraystretch}{1.2}
\begin{tabular}{ccccc}
\hline
Parameter & Mean & Variance &HPD Credible Interval& AP \\
\hline
$\beta_0$ & 0.4154 & 0.0006 & (0.3739; 0.4724) & 0.2272\\
$\beta_{\exp}$ & 0.0713 & 0.0004 & (0.0651; 0.0774) & 0.3503\\
$\phi_1$ & 0.7637 & 0.0007 & (0.7462; 0.7788) & 0.1885\\
\hline
\end{tabular}
\end{footnotesize}
\end{center}
\end{table}
\begin{figure}[h!]\centering
\includegraphics[angle=270,width=0.7\linewidth]{res_mort}
\caption{Residual analysis of the number of deaths in Brazil.}
\label{fig:res3}
\end{figure}
The residual analysis summarized in Figure \ref{fig:res3} indicates
that the residuals are uncorrelated
and approximately Gaussian with mean 0.1850 and standard deviation 0.4894. The
Kolmogorov-Smirnov and Anderson-Darling normality tests returned
$p$-values of 0.6736 and 0.1304 respectively, thus indicating evidence
for the Gaussian assumption.
As in the previous examples, we repeated the prediction exercise
here, this time using the last 10 observations since the series is
longer. The GARMA(1,0) binomial model was fitted to the series
$y_1,\dots,y_{n-k}$ and a one-step ahead prediction $\hat{y}_{n-k+1}$
was produced for $k=1,\dots,10$.
The results are illustrated in Figure \ref{fig:pred3} from which we
can see that the prediction errors are again overall small.
Using these prediction errors the calculated value for the MAPE
criterion was $3.63\%$.
\begin{figure}[h!]\centering
\includegraphics[angle=270,width=0.7\linewidth]{preddeath}
\caption{Predictions for the number of deaths in Brazil with a GARMA(1,0) Binomial model.}
\label{fig:pred3}
\end{figure}
\section{Discussion}\label{conclusions}
In this paper we discuss a Bayesian approach for estimation,
comparison and prediction of GARMA time series
models. We analysed three different discrete models:
Poisson, binomial and negative binomial. We implemented MCMC
algorithms to carry out the simulation study, and the methodology was
also applied to three real discrete time series.
Properties of the Bayesian estimation and the performance of Bayesian
selection criteria were assessed with our simulation study. The
analysis with real data also provided good estimates and predictions
via parsimonious models. All in all, our results suggest that, as
indicated in the original GARMA paper,
this class of models has potential uses for modelling overdispersed
time series count data.
We consider the one-phase Stefan problem in periodic and random media in dimension $n \geq 2$. The
aim of this paper is to understand the behavior of the solutions and their free boundaries as $t \rightarrow \infty$.
Let $K \subset \mathbb{R}^n$ be a compact set with sufficiently regular boundary, for instance
$\partial K \in C^{1,1}$, and assume that $0 \in \operatorname{int} K$. The one-phase
Stefan problem (on an exterior domain) with inhomogeneous latent heat of phase transition is to find a
function $v(x,t): \mathbb{R}^n \times [0,\infty) \rightarrow [0,\infty)$ that satisfies the free
boundary problem
\begin{equation}
\label{Stefan}
\left\{\begin{aligned}
v_t- \Delta v &=0 && \text{ in } \{v>0\} \backslash K,\\
v &=1 && \text{ on } K,\\
V_\nu &=g(x)|Dv|&& \text{ on }\partial \{v>0\},\\
v(x,0)&=v_0 &&\text{ on } \mathbb{R}^n,
\end{aligned}\right.
\end{equation}
where $D$ and $\Delta$ are respectively the spatial gradient and Laplacian, $v_t$ is the partial
derivative of $v$ with respect to the time variable $t$, and $V_\nu$ is the normal velocity of the
\textit{free boundary} $\partial \{v>0\}$. The functions $v_0$ and $g$ are given, see below. Note that
the results in this paper can be trivially extended
to general time-independent positive continuous boundary data; the value $1$ is taken only to simplify the exposition.
The one-phase Stefan problem is a mathematical model of phase transitions between a solid and a liquid.
A typical example is the melting of a body of ice maintained at temperature $0$, in contact with a
region of water. The unknowns are the temperature distribution $v$ and its free boundary
$\partial \{v(\cdot, t)>0\}$, which models the ice-water interface. Given an initial temperature
distribution of the water, the diffusion of heat in a medium by conduction and the exchange of
latent heat will govern the system. In this paper, we consider an inhomogeneous medium
where the latent heat of phase transition, $L(x)= 1/g(x)$, and hence the velocity law depend on
position. The related Hele-Shaw
problem is usually referred to in the literature as the quasi-stationary limit of the one-phase
Stefan problem when the heat operator is replaced by the Laplace operator. This problem typically
describes the flow of an injected viscous fluid between two parallel plates which form the
so-called Hele-Shaw cell, or the flow in porous media.
In this paper, we assume that the function $g$ satisfies the following two conditions, which guarantee respectively the
well-posedness of \eqref{Stefan} and averaging behavior as $t \to \infty$:
\begin{enumerate}
\item \label {condition in g} $g$ is a Lipschitz function in $\mathbb{R}^n$, $m\leq g \leq M$ for some positive constants $m$ and $M$.
\item \label {condition in g 2}$g(x)$ has some averaging properties so that Lemma~\ref{media}
applies, for instance, one of the following holds:
\begin{enumerate}
\item $g$ is a $\mathbb{Z}^n$-periodic function,
\item $g(x, \omega): \mathbb{R}^n \times A \to [m, M]$ is a stationary ergodic random variable over a probability space $(A, \mathcal{F},P)$.
\end{enumerate}
\end{enumerate}
For a detailed definition and overview of stationary ergodic media, we refer to \cite{P1, K3} and
the references therein.
Throughout most of the paper we will assume that the initial data $v_0$ satisfies
\begin{equation}
\label{initial data}
\begin{aligned}
&v_0 \in C^2(\overline{\Omega_0 \backslash K}), v_0 > 0 \text{ in } \Omega_0, v_0=0 \mbox{ on } \Omega_0^c := \mathbb{R}^n \setminus
\Omega_0,
\mbox{ and } v_0=1 \mbox{ on } K,\\& |Dv_0| \neq 0 \text{ on } \partial \Omega_0,
\text{ for some bounded domain $\Omega_0 \supset K$.}
\end{aligned}
\end{equation}
This will guarantee the existence of both the weak and viscosity solutions below and their coincidence,
as well as the weak monotonicity \eqref{monotonicity condition}. However, the asymptotic
limit, Theorem~\ref{th:main-convergence}, is independent of the initial data, and therefore the result applies to arbitrary initial
data as long as the (weak) solution exists, satisfies the comparison principle, and the initial
data can be approximated from below and from above by data satisfying \eqref{initial data}. For
instance, $v_0 \in C(\mathbb{R}^n)$, $v_0 = 1$ on $K$, $v_0 \geq 0$, $\operatorname{supp} v_0$ compact is sufficient.
The Stefan problem \eqref{Stefan} does not necessarily have a global classical solution in dimensions $n
\geq 2$ as singularities of the free boundary might develop in finite time. The classical approach to
define a generalized solution is to integrate $v$ in time and introduce $u(x,t) := \int_{0}^{t}v(x,s)ds$
\cite{Baiocchi,Duvaut,FK,EJ,R1,R2,RReview}. If $v$ is sufficiently regular, then $u$ solves the
variational inequality
\begin{equation}
\label{obstacle problem}
\begin{cases}
u(\cdot,t) \in \mathcal{K}(t),\\
(u_t - \Delta u)(\varphi - u) \geq f(\varphi -u) \mbox{ a.e } (x,t) \mbox{ for any } \varphi \in
\mathcal{K}(t),
\end{cases}
\end{equation}
where $\mathcal{K}(t)$ is a suitable functional space specified later in Section~\ref{sec:weak-sol} and $f$ is
\begin{equation}
\label{f}
f(x)= \begin{cases}
v_0(x), & v_0(x) > 0,\\
-\displaystyle \frac{1}{g(x)}, & v_0(x) = 0.
\end{cases}
\end{equation}
This parabolic inequality always has a global unique solution $u(x,t)$ for initial data satisfying
(\ref{initial data}) \cite{FK,R1,R2,RReview}. The corresponding time derivative $v=u_t$, if it exists, is then called a \emph{weak solution} of the Stefan problem (\ref{Stefan}). The main advantage of this
definition is that the powerful theory of variational inequalities can be applied for the study of the
Stefan problem, and as was observed in \cite{R3,K2,K3} yields homogenization of \eqref{obstacle
problem}.
More recently, the notion of viscosity solutions of the Stefan problem was introduced and
well-posedness was established by Kim
\cite{K1}. Since this notion relies on the
comparison principle instead of the variational structure, it allows for more general, fully
nonlinear parabolic operators and boundary
velocity laws. Moreover, the pointwise viscosity methods seem more appropriate for studying the
behavior of the free boundaries. The natural question whether the weak and viscosity solutions
coincide was answered positively by Kim and Mellet \cite{K3} whenever the weak solution
exists. In this paper we will use the strengths of both the weak and viscosity
solutions to study the behavior of the solution and its free boundary for large times.
The homogeneous version of this problem, i.e., when $g\equiv const$, was studied by Quir\'os and
V\'azquez in \cite{QV}. They obtained a result on the long-time convergence of the weak solution of
the one-phase Stefan problem to the self-similar solution of the Hele-Shaw problem. The
homogenization of this type of problem was considered by Rodrigues in \cite{R3} and by Kim-Mellet
in \cite{K2,K3}. The long-time behavior of solutions of the Hele-Shaw problem was studied in detail
by the first author in \cite{P1}. In particular, the rescaled solution of the inhomogeneous
Hele-Shaw problem converges to the self-similar solution of the Hele-Shaw problem with a point-source, formally
\begin{equation}
\label{hs-point-source}
\left\{\begin{aligned}
-\Delta v &=C\delta &&\mbox{ in } \{v>0\},\\
v_t&=\frac{1}{\left< 1/g \right>} |Dv|^2 &&\mbox{ on } \partial \{v>0\},\\
v(\cdot, 0)&=0,
\end{aligned}\right.
\end{equation}
where $\delta$ is the Dirac $\delta$-function, $C$ is a constant depending on $K$ and $n$, and the constant $\left<1/g\right>$ will be properly defined later. Moreover, the rescaled free boundary uniformly approaches a sphere.
Here we extend the convergence result to the Stefan problem in the inhomogeneous medium. Since the
asymptotic behavior of radially symmetric solutions of the Hele-Shaw and the Stefan problem are
similar and the solutions are bounded, we can take the limit $t \rightarrow \infty$ and
obtain the convergence for rescaled solutions and their free boundaries. However, solutions of the
Hele-Shaw problem have a very useful monotonicity in time which is missing in the Stefan problem.
We instead take advantage of \eqref{monotonicity condition} for regular initial data satisfying \eqref{initial data}.
This makes some steps more difficult. Moreover, the heat operator is not invariant under the
rescaling, unlike the Laplace operator. The rescaled parabolic equation becomes elliptic when $\lambda
\rightarrow \infty$, which causes some issues when applying parabolic Harnack's inequality, for
instance.
Following \cite{QV,P1} we use the natural rescaling of solutions of the form
\begin{align*}
v^\lambda(x,t) &:=\lambda^{(n-2)/n}v(\lambda^{1/n}x, \lambda t) &&\mbox{ if } n \geq 3,\\
\intertext{and the corresponding rescaling for variational solutions}
u^\lambda(x,t) &:=\lambda^{-2/n}u(\lambda^{1/n}x,\lambda t) &&\mbox{ if } n\geq 3
\end{align*}
(see Section~\ref{sec:rescaling} for $n=2$). Then the rescaled viscosity solution satisfies the free boundary
velocity law
\begin{equation*}
V^\lambda_\nu=g(\lambda^{1/n}x)|Dv^\lambda|.
\end{equation*}
Heuristically, if $g$ has some averaging properties, such as in condition (\ref{condition in g 2}),
the free boundary velocity law should homogenize as $\lambda \to \infty$. Since the latent heat of
phase transition $1 /g$ should average out, the homogenized velocity law will be
\begin{equation*}
V_\nu=\frac{1}{\left< 1/g\right>}|Dv|,
\end{equation*}
where $\left<1/g\right>$ represents the ``average" of $1/g$. More precisely, the quantity $\left <1/g\right >$ is the constant in the subadditive ergodic theorem such that
\begin{equation*}
\int\limits_{\mathbb{R}^n}\frac{1}{g(\lambda^{1/n}x, \omega)}u(x)dx
\rightarrow \int\limits_{\mathbb{R}^n}\left<\frac{1}{g}\right>u(x)dx \text{ for all $u \in
L^2(\mathbb{R}^n)$}, \mbox{ for a.e. } \omega \in A. \end{equation*}
In the periodic case, it is just the average of $1/g$ over one period.
Since we always work with $\omega \in A$ for which the convergence above holds, we omit it from the
notation in the rest of the paper.
This yields the first main result of this paper, Theorem \ref{convergence of variational
solutions}, on the homogenization of the obstacle problem (\ref{obstacle problem}) for the rescaled
solutions, with the correct singularity of the limit function at the origin, and therefore the
locally uniform convergence of variational solutions. To prove the second main result in Theorem
\ref{convergence of rescaled viscosity solution} on the locally uniform convergence of viscosity
solutions and their free boundaries, we use pointwise viscosity solution arguments. In summary,
we will show the following theorem.
\begin{theorem}
\label{th:main-convergence}
For almost every $\omega \in A$,
the rescaled viscosity solution $v^\lambda$ of the Stefan problem \eqref{Stefan} converges
locally uniformly to the unique self-similar solution $V$ of the Hele-Shaw problem
\eqref{hs-point-source} in $(\mathbb{R}^n \backslash \{0\}) \times [0,\infty)$ as $\lambda
\rightarrow \infty$, where $C$ depends only on $n$, the set $K$ and the boundary data $1$.
Moreover, the rescaled free boundary $\partial \{(x,t): v^\lambda(x,t) > 0\}$ converges to
$\partial \{(x,t): V(x,t) > 0\}$ locally uniformly with respect to the Hausdorff distance.
\end{theorem}
It is a natural question to consider more general linear divergence form operators $\sum_{i,j}\partial_{x_i}
(a_{ij}(x) \partial_{x_j} \cdot )$ instead of the Laplacian in
\eqref{Stefan} so that the variational structure is preserved. This was indeed the setting
considered in \cite{K3}, with $g \equiv 1$ and appropriate free boundary velocity law adjusted for
the operator above. In the limit $\lambda \to \infty$, we expect the rescaled solutions
$v^\lambda$ to converge to the unique solution of the Hele-Shaw type problem with a point source
with the homogenized non-isotropic operator with coefficients $\bar a_{ij}$. This question is a topic of
ongoing work.
\subsection*{Context and open problems}
In recent years, there have been significant developments in the homogenization theory of partial
differential equations like Hamilton-Jacobi and second order fully nonlinear elliptic and parabolic
equations that have been made possible by the improvements of the viscosity solutions techniques, see
for instance the classical \cite{Evans,Souganidis,CSW,CS} to name a few.
A common theme of these results is finding (approximate) correctors and using the perturbed test
function method to establish the homogenization result in the periodic case, or using deeper
properties in the random case, such as the variational structure of the Hamilton-Jacobi equations
or the strong regularity results for elliptic and parabolic equations, including the ABP inequality.
One of the goals of this paper is to illustrate the powerful combination of variational and viscosity
solution techniques for some free boundary problems that have a variational structure. By viscosity
solution techniques we mean specifically pointwise arguments using the comparison principle.
Unfortunately, when the variational structure is lost, for instance, when the free boundary
velocity law is more general as in the problem with contact angle dynamics $V_\nu = |Dv| - g(x)$ so
that the motion is non-monotone
\cite{KimContact,KimContactRates}, or even
simple time-dependence $V_\nu = g(x,t) |Dv|$ \cite{Pozar15}, the comparison principle is all that is
left. Even in the periodic case, the classical correctors as solutions of a cell problem are not
available. This is in part the consequence of the presence of the free boundary on which the
operator is strongly discontinuous. \cite{KimHomog,KimContact,Pozar15} use a variant of the idea
that appeared in
\cite{CSW} to replace the correctors by solutions of certain obstacle problems. However, the
analysis of these solutions requires rather technical pointwise arguments since there are almost
no equivalents of the regularity estimates for elliptic equations.
An important tool in \cite{Pozar15} to overcome this was the large scale Lipschitz regularity of
the free boundaries of the obstacle problem solutions (called cone flatness there) that allows for
the control of the oscillations of the free boundary in the homogenization limit.
For the reasons above, the homogenization of free boundary problems is rather challenging and there are still many
open problems. Probably the most important one is the homogenization of free boundary
problems of the Stefan and Hele-Shaw type that do not admit a variational structure, such as those
mentioned above, in random
environments. Currently there is no known appropriate stationary subadditive quantity to which we could apply
the subadditive ergodic theorem to recover the homogenized free boundary velocity law, for
instance. Other tools like concentration inequalities have so far not yielded an alternative.
Another important problem concerns the optimal convergence rates of the free boundaries in the Hausdorff
distance. The techniques used in this paper do
not provide this information, however viscosity techniques were used to obtain non-optimal algebraic
convergence rates in \cite{KimContactRates}. It is an interesting question what the optimal rate in
the periodic case is, even for problems like \eqref{Stefan}. The large scale Lipschitz estimate
from \cite{Pozar15} could possibly directly give only $\varepsilon |\log \varepsilon|^{1/2}$-rate for
velocity law with $g(x/\varepsilon)$, but there are some indications that a rate $\varepsilon$ might be possible.
\subsection*{Outline}
The paper is organized as follows: In Section~\ref{sec:preliminaries}, we recall the definitions
and well-known results for weak and viscosity solutions. We also introduce the rescaling and state
some results for radially symmetric solutions. In Section~\ref{sec:var-conv}, we recall the limit
obstacle problem and prove the locally uniform convergence of rescaled variational solutions. In
Section~\ref{sec:visc-conv}, we focus on treating the locally uniform convergence of viscosity solutions and their free boundaries.
\section{Preliminaries}
\label{sec:preliminaries}
\subsection{Notation}
For a set $A$, $A^c$ is its complement. Given a nonnegative function $v$, we will use the notation
for the positive set and free boundary of $v$,
$$\Omega(v):=\{(x,t): v(x,t)>0\}, \hspace{2cm} \Gamma(v):=\partial\Omega(v),$$
and for fixed time $t$,
$\hspace{1cm} \Omega_t(v):=\{x: v(x,t)>0\}, \hspace{1cm} \Gamma_t(v):=\partial\Omega_t(v) .$
$(f)_+$ is the positive part of $f$:
$(f)_+= \max(f,0)$.
\subsection{Weak solutions}
\label{sec:weak-sol}
Let $v(x,t)$ be a classical solution of the Stefan problem (\ref{Stefan}). Fix $R, T > 0$ and set $B=B_R$
, $D=B \backslash K$. Following \cite{FK} it can be shown that, if $R$ is
large enough (depending on $T$), then the function $u(x,t):=\int_{0}^{t}v(x,s)ds$ solves the following variational problem: Find $u \in L^2(0,T; H^2(D))$ such that $u_t \in L^2(0,T;L^2(D))$ and
\begin{equation}
\label{Variational problem}
\left\{\begin{aligned}
u(\cdot,t) &\in \mathcal{K}(t), && 0 < t < T,\\
(u_t-\Delta u)(\varphi -u) &\geq f(\varphi -u), &&\text{ a.e } (x,t) \in B \times (0,T) \text{ for any } \varphi \in \mathcal{K}(t),\\
u(x,0)&=0 \text{ in } D.\\
\end{aligned}\right.
\end{equation}
Here we set
$\mathcal{K}(t)=\{\varphi \in H^1(D),\varphi \geq 0, \varphi =0 \text{ on } \partial B, \varphi =t \text{ on } K \}$
and $f$ was defined in \eqref{f}. We use the standard notation for Sobolev spaces $H^k$, $W^{k,p}$.
If $v$ is a classical solution of (\ref{Stefan}) then $u$ is a solution of (\ref{Variational problem}),
but the converse statement is not valid in general. However, we have the following result
\cite{FK,R1}.
\begin{theorem}[Existence and uniqueness of variational problem]
If $v_0$ satisfies (\ref{initial data}), then the problem (\ref{Variational problem}) has a unique solution satisfying
$$
\begin{aligned}
u &\in L^\infty(0,T; W^{2,p}(D)), \qquad 1\leq p \leq \infty,\\
u_t &\in L^\infty(D \times (0,T)),\\
\end{aligned}
$$
and
$$
\left\{\begin{aligned}
u_t-\Delta u &\geq f && \mbox{ for a.e. } (x,t) \in \{u \geq 0 \},\\
u(u_t- \Delta u-f)&=0 && \mbox{ a.e in } D\times (0, \infty).
\end{aligned}\right.
$$
\end{theorem}
We will thus say that if $u$ is a solution of (\ref{Variational problem}), then $u_t$ is a
\emph{weak solution} of the corresponding Stefan problem (\ref{Stefan}). The theory of variational
inequalities for an obstacle problem is well developed, for more details, we refer to \cite{FK,R1,K2}.
We now collect some useful results on the weak solutions from \cite{FK,R1}.
\begin{prop}
The unique solution $u$ of (\ref{Variational problem}) satisfies
$$0 \leq u_t \leq C \mbox{ a.e. in } D \times (0,T),$$
where $C$ is a constant depending on $f$. In particular, $u$ is Lipschitz with respect to $t$ and
$u$ is $C^{\alpha}(D)$ with respect to $x$ for all $\alpha \in (0,1)$. Furthermore, if $0 \leq t
<s \leq T$, then $u(\cdot,t) < u(\cdot,s)$ in $\Omega_s(u)$ and also
$\Omega_0 \subset \Omega_t(u) \subset \Omega_s(u)$.
\end{prop}
\begin{lemma}[Comparison principle for weak solutions]
Suppose that $f \leq \hat{f}$. Let $u, \hat{u}$ be solutions of (\ref{Variational problem}) for respective $f, \hat{f}$. Then $u \leq \hat{u},$
moreover,
$$\theta \equiv \frac{\partial u}{\partial t} \leq \frac{ \partial \hat{u}}{\partial t} \equiv \hat{\theta}.$$
\end{lemma}
\begin{remark}
Regularity of $\theta$ and its free boundary has been studied quite extensively, notably by Caffarelli and Friedman (see \cite{C,CF,FN}).
It is known that a weak solution is classical as long as $\Gamma_t(u)$ has no singularity. The smoothness criterion (see \cite{C,FN}, \cite[Proposition 2.4]{QV}) immediately leads to the following corollary.
\end{remark}
\begin{cor}
Radial weak solutions of the Stefan problem (\ref{Stefan}) are smooth classical solutions.
\end{cor}
\subsection{Viscosity solutions}
The second notion of solutions we will use are the viscosity solutions introduced in \cite{K1}.
First, for any nonnegative function $w(x,t)$ we define the semicontinuous envelopes
\begin{align*}
&w_\star(x,t) := \liminf_{(y,s) \rightarrow (x,t)}w(y,s), &w^\star(x,t) := \limsup_{(y,s) \rightarrow (x,t)}w(y,s).
\end{align*}
We will consider solutions in the space-time cylinder $Q:=(\mathbb{R}^n \backslash K) \times [0,\infty)$.
\begin{definition}
\label{def of viscos subsol}
A nonnegative upper semicontinuous function $v(x,t)$ defined in $Q$ is a viscosity subsolution of (\ref{Stefan}) if the following hold:
\begin{enumerate}[label=\alph*)]
\item For all $T \in (0,\infty)$, the set $\overline{\Omega (v)} \cap \{t \leq T\} \cap Q$ is bounded.
\item For every $\phi \in C^{2,1}_{x,t}(Q)$ such that $v-\phi$ has a local maximum in $\overline{\Omega(v)} \cap \{t\leq t_0\} \cap Q$ at $(x_0,t_0)$, the following holds:
\begin{enumerate}[label=\roman*)]
\item If $v(x_0,t_0)>0$, then $(\phi_t- \Delta \phi )(x_0,t_0) \leq 0$.
\item If $(x_0,t_0) \in \Gamma(v), |D\phi (x_0,t_0)| \neq 0$ and $(\phi_t-\Delta \phi )(x_0,t_0)>0$, then
\begin{equation}
(\phi_t-g(x_0)|D\phi|^2)(x_0,t_0) \leq 0.
\end{equation}
\end{enumerate}
\end{enumerate}
Analogously, a nonnegative lower semicontinuous function $v(x,t)$ defined in $Q$ is a viscosity
supersolution if (b) holds with maximum replaced by minimum, and with inequalities reversed in
the tests for $\phi$ in (i--ii). We do not need to require (a).
\end{definition}
Now let $v_0$ be a given initial condition with positive set $\Omega_0$ and free boundary $\Gamma_0=\partial \Omega_0$. We can then define viscosity subsolutions and supersolutions of (\ref{Stefan}) with the corresponding initial and boundary data.
\begin{definition}
A viscosity subsolution of (\ref{Stefan}) in $Q$ is a viscosity subsolution of (\ref{Stefan}) in $Q$ with initial data $v_0$ and boundary data $1$ if:
\begin{enumerate}[label=\alph*)]
\item $v$ is upper semicontinuous in $\bar Q, v=v_0$ at $t=0$ and $v \leq 1$ on $\Gamma$,
\item $\overline{\Omega(v)}\cap \{t=0\}=\overline{\{x: v_0(x)>0\}} \times \{0\}$.
\end{enumerate}
A viscosity supersolution is defined analogously by requiring (a) with $v$ lower semicontinuous
and $v \geq 1$ on $\Gamma$. We do not need to require (b).
\end{definition}
Finally, we can define viscosity solutions.
\begin{definition}
The function $v(x,t)$ is a viscosity solution of (\ref{Stefan}) in $Q$ (with initial data $v_0$ and boundary data $1$) if $v$ is a viscosity supersolution and $v^\star$ is a viscosity subsolution of (\ref{Stefan}) in $Q$ (with initial data $v_0$ and boundary data $1$).
\end{definition}
\begin{remark}
By a standard argument, if $v$ is a classical solution of (\ref{Stefan}) then it is a viscosity solution of that problem in $Q$ with initial data $v_0$ and boundary data $1$.
\end{remark}
The existence and uniqueness of a viscosity solution as well as its properties have been studied in
great detail in \cite{K1}. One important feature of viscosity solutions is that they satisfy a
comparison principle for ``strictly separated'' initial data.
One of the main tools we will use in this paper is the following coincidence of weak
and viscosity solutions from \cite{K3}.
\begin{theorem}[cf. {\cite[Theorem 3.1]{K3}}]
Assume that $v_0$ satisfies (\ref{initial data}). Let $u(x,t)$ be the unique solution of (\ref{Variational problem}) in $B \times [0,T]$ and let $v(x,t)$ be the solution of
\begin{equation}
\label{coincidence eq}
\left\{
\begin{aligned}
v_t-\Delta v &=0 && \mbox{in } \Omega(u) \backslash K,\\
v &=0 && \mbox{on } \Gamma(u),\\
v &=1 &&\mbox{in } K,\\
v(x,0) &=v_0(x).
\end{aligned}
\right.
\end{equation}
Then $v(x,t)$ is a viscosity solution of (\ref{Stefan}) in $B \times [0,T]$ with initial data
$v(x,0)=v_0(x)$, and
$u(x,t)= \int_{0}^{t}v(x,s)ds$.
\end{theorem}
\begin{remark}
The definition of the solution $v$ of \eqref{coincidence eq} must be clarified
when $\Omega(u)$ is not smooth. Since $u$ is continuous and $\Omega(u)$ is bounded at all times (\cite[Lemma 3.6]{K3}), the existence of a solution of (\ref{coincidence eq}) is provided by Perron's method as
\begin{equation*}
v=\sup \{w| w_t- \Delta w \leq 0 \mbox{ in } \Omega(u), w\leq 0 \mbox{ on } \Gamma(u), w\leq 1 \mbox{ in } K, w(x,0) \leq v_0(x)\}.
\end{equation*}
Note that $v$ might be discontinuous on $\Gamma(u)$.
\end{remark}
The coincidence of weak and viscosity solutions gives us a more general comparison principle.
\begin{lemma}[cf. {\cite[Corollary 3.12]{K3}}]
Let $v^1$ and $v^2$ be, respectively, a viscosity subsolution and supersolution of the Stefan problem (\ref{Stefan}) with continuous initial data $v_0^1 \leq v_0^2$ and boundary data $1$. In addition, suppose that $v_0^1$(or $v_0^2$) satisfies condition (\ref{initial data}). Then
$v^1_\star \leq v^2 \mbox{ and } v^1 \leq (v^2)^\star \mbox{ in } (\mathbb{R}^n \backslash K)
\times [0,\infty).$
\end{lemma}
\subsection{Rescaling}
\label{sec:rescaling}
We will use the following rescaling of solutions as in \cite{P1}.
\subsubsection{For $n \geq 3$}
For $\lambda >0$ we use the rescaling
\begin{align*}
&v^\lambda(x,t)=\lambda^{\frac{n-2}{n}}v(\lambda^{\frac{1}{n}}x,\lambda t), & u^\lambda(x,t)= \lambda^{-\frac{2}{n}} u(\lambda^{\frac{1}{n}}x, \lambda t).
\end{align*}
If we define $K^\lambda := K / \lambda^{\frac{1}{n}}$ and $\Omega_0^\lambda:=
\Omega_0/\lambda^{\frac{1}{n}}$ then $v^\lambda$ satisfies the problem
\begin{equation}
\label{rescaled equation}
\left\{
\begin{aligned}
\lambda^{\frac{2-n}{n}}v_t^\lambda- \Delta v^\lambda &=0 && \mbox{ in } \Omega(v^\lambda) \backslash K^\lambda,\\
v^\lambda &= \lambda^{\frac{n-2}{n}} && \mbox{ on } K^\lambda,\\
v_t^\lambda&=g^\lambda(x) |Dv^\lambda|^2 && \mbox{ on } \Gamma(v^\lambda),\\
v^\lambda(\cdot,0)&=v_0^\lambda,
\end{aligned}
\right.
\end{equation}
where $g^\lambda(x)=g(\lambda^{\frac{1}{n}}x)$.
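As a quick consistency check (a routine computation, recorded here only for the reader's convenience), write $\xi=\lambda^{1/n}x$ and $\tau=\lambda t$; then
\begin{equation*}
v^\lambda_t = \lambda^{\frac{2n-2}{n}}\, v_\tau(\xi,\tau), \qquad
Dv^\lambda = \lambda^{\frac{n-1}{n}}\, Dv(\xi,\tau), \qquad
\Delta v^\lambda = \lambda\, \Delta v(\xi,\tau),
\end{equation*}
so that $\lambda^{\frac{2-n}{n}}v^\lambda_t- \Delta v^\lambda = \lambda\left(v_\tau - \Delta v\right)=0$ in the rescaled positive set, and on the free boundary
$v^\lambda_t = \lambda^{\frac{2n-2}{n}} g(\xi)|Dv(\xi,\tau)|^2 = g^\lambda(x)|Dv^\lambda|^2$, which is exactly \eqref{rescaled equation}.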
The rescaled function $u^\lambda$ satisfies the obstacle problem
\begin{equation}
\label{rescaled Variational problem}
\left\{
\begin{aligned}
u^\lambda(\cdot,t) &\in \mathcal{K}^\lambda(t),\\
(\lambda^{\frac{2-n}{n}}u^\lambda_t-\Delta u^\lambda)(\varphi -u^\lambda) &\geq
f(\lambda^{\frac{1}{n}}x)(\varphi -u^\lambda) &&\text{ a.e } (x,t) \in B_R \times (0,T)\\
&&&\text{ for any } \varphi \in \mathcal{K}^\lambda(t),\\
u^\lambda(x,0)&=0,
\end{aligned}
\right.
\end{equation}
where
$\mathcal{K}^\lambda(t)=\{\varphi \in H^1(\mathbb{R}^n),\varphi \geq 0, \varphi =0 \text{ on } \partial B^\lambda, \varphi =\lambda^{\frac{n-2}{n}}t \text{ on } K^\lambda \}.$
\subsubsection{For n=2}
For dimension $n=2$, we use a different rescaling that preserves the singularity of logarithm,
namely
\begin{align}
\label{rescaling n=2}
&v^\lambda(x,t)=\log \mathcal{R}(\lambda)v(\mathcal{R}(\lambda)x, \lambda t),&
u^\lambda(x,t)=\displaystyle \frac{\log \mathcal{R}(\lambda)}{\lambda}u(\mathcal{R}(\lambda)x,
\lambda t),
\end{align}
where $\mathcal{R}(\lambda)$ is the unique solution of
$\mathcal{R}^2 \log \mathcal{R}= \lambda$, so that $\mathcal{R}(\lambda) \to \infty$ as $\lambda \to \infty$ (see \cite{P1} for more details). $v^\lambda$ and $u^\lambda$ satisfy rescaled problems
analogous to \eqref{rescaled equation} and \eqref{rescaled Variational problem}. In particular,
analogous to \eqref{rescaled equation} and \eqref{rescaled Variational problem}. In particular,
the term $\lambda^{(2-n)/n}$ in front of the time derivatives is replaced by
$1/\log(\mathcal{R}(\lambda)) \to 0$ as $\lambda \to \infty$.
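The implicit definition of $\mathcal{R}(\lambda)$ is easy to evaluate numerically; the following minimal sketch (using a bracketing root finder from {\tt scipy}) is included only as an illustration of how slowly the prefactor $1/\log \mathcal{R}(\lambda)$ decays.
\begin{verbatim}
# Sketch: R(lambda) is defined implicitly by R^2 log R = lambda; it can be
# computed with a bracketing root finder.  The prefactor 1/log R(lambda) in
# the rescaled equations is seen to vanish (slowly) as lambda grows.
import numpy as np
from scipy.optimize import brentq

def scaling_radius(lam):
    # R^2 log R is increasing for R > 1, so a simple bracket suffices.
    return brentq(lambda R: R**2 * np.log(R) - lam,
                  1.0 + 1e-12, 10.0 * np.sqrt(lam) + 2.0)

for lam in [1e1, 1e3, 1e6]:
    R = scaling_radius(lam)
    print(lam, R, 1.0 / np.log(R))
\end{verbatim}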
\subsection{Convergence of radially symmetric solutions} We will recall the results on the
convergence of radially symmetric solutions of (\ref{Stefan}) as derived in \cite{QV}. First, we
collect some useful facts about the radial solution of the Hele-Shaw problem and then use comparison to
obtain information on the radial solution of the Stefan problem. The radially symmetric solution of
the Hele-Shaw problem in the domain $|x| \geq a, t \geq 0$ is a pair of functions $p(x,t)$ and
$R(t)$, where $p$ is of the form
\begin{equation}
\label{radial Hele-Shaw}
p(x,t)= \begin{cases}
\displaystyle \frac{Aa^{2-n}\left(|x|^{2-n}-R^{2-n}(t)\right)_{+}}{a^{2-n}-R^{2-n}(t)}, &n \geq 3,\\
\displaystyle \frac{A\left( \log\frac{R(t)}{|x|}\right)_{+}}{\log \frac{R(t)}{a}}, & n=2,
\end{cases}
\end{equation}
and $R(t)$ satisfies a certain algebraic equation (see \cite{QV} for details).
This solution satisfies the boundary conditions and initial conditions
\begin{equation}
\label{boundary con1}
\begin{aligned}
p(x,t)&=Aa^{2-n} &&\mbox{ for } |x|=a>0,\\
p(x,t)&=0 &&\mbox{ for } |x| =R(t),\\
R'(t)&=\frac{1}{L} |Dp| &&\mbox{ for } |x| =R(t),\\
R(0)&=b >a.
\end{aligned}
\end{equation}
Furthermore,
\begin{align*}
\lim\limits_{t \rightarrow \infty}\frac{R(t)}{c_\infty t^{1/n}}&=1, & c_\infty &=
\left(\frac{An(n-2)}{L}\right)^{1/n} &\mbox{ if } n \geq3,\\
\lim\limits_{t \rightarrow \infty} \frac{R(t)}{c_\infty\left(t/\log t\right)^{1/2}} &=1, &
c_\infty &= 2\sqrt{A/L} &\mbox{ if } n=2.
\end{align*}
In dimension $n=2$, we will also use $ \lim\limits_{t \rightarrow \infty} \frac{\log R(t)}{\log t}=\frac{1}{2}$.
The radial solution of the Stefan problem satisfies the corresponding conditions similar to
(\ref{boundary con1}) with the initial data
\begin{equation}
\label{initial cond radial}
\theta(x,0) =\theta_0(|x|) \mbox{ if } |x| \geq a.
\end{equation}
The following results were shown in \cite{QV}.
\begin{lemma}[cf. {\cite[Proposition~6.1]{QV}}]
Let $p$ and $\theta$ be radially symmetric solutions to the Hele-Shaw problem and to the Stefan
problem respectively, and let $\{|x|=R_p(t)\}, \{|x|=R_\theta(t)\}$ be the corresponding interfaces. If $R_p(0)> R_\theta(0), p(x,0)\geq \theta(x,0)$ and, moreover, $p(x,t) \geq \theta(x,t)$ on the fixed boundary, that is, for $|x| =a, t>0$, then $p(x,t) \geq \theta(x,t)$ for all $|x| \geq a$ and $t \geq 0$.
\end{lemma}
This immediately leads to an upper bound for the free boundary of radial solutions of the Stefan problem, see Corollary 6.2, Theorem 6.4 and Theorem 7.1 in \cite{QV}.
\begin{lemma}
\label{free boundary bound}
Let $\{|x|=R(t)\}$ be the free boundary of a radial solution to the Stefan problem satisfying the corresponding conditions (\ref{boundary con1}) and (\ref{initial cond radial}). There are constants $C,T>0$, such that, for all $t \geq T$,
\begin{equation*}
\begin{matrix}
& R(t) \leq Ct^{1/n}, n \geq 3,& \mbox{ or }
& R(t) \leq C(t/ {\log t})^{1/2}, n=2.
\end{matrix}
\end{equation*}
Moreover, we have
\begin{equation*}
\begin{matrix}
&\lim\limits_{t \rightarrow \infty} \displaystyle \frac{R(t)}{t^{1/n}}=
\left(An(n-2)/L\right)^{1/n}, n \geq 3,& \mbox{ or }
&\lim\limits_{t \rightarrow \infty} \displaystyle \frac{R(t)}{(t/ \log t)^{1/2}}= 2 \sqrt{A/L}, n=2.
\end{matrix}
\end{equation*}
\end{lemma}
The solution of the Stefan problem is bounded for all time.
\begin{lemma}[cf.{\cite[Lemma 6.3]{QV}}]
\label{Boundedness of weak solution}
Let $\theta$ be a weak solution of the Stefan problem for $n \geq 2$. There is a constant $C>0$
such that, for all $t > 0$,
$0 \leq \theta(x,t) \leq C |x|^{2-n}.$
\end{lemma}
Next, we define the solution of the Hele-Shaw problem with a point source, which will appear as
the limit function in our convergence results,
\begin{equation}
\label{Hele-Shaw with point sorce}
V(x,t)=V_{A,L}(x,t)=
\begin{cases}
A\left(|x|^{2-n}- \rho^{2-n}(t)\right)_{+}, & n \geq 3,\\
A\left(\log \frac{\rho(t)}{|x|}\right)_{+}, & n=2,
\end{cases}
\end{equation}
where
\begin{equation*}
\rho(t)= \rho_L(t)=R_\infty=
\begin{cases}
\left(An(n-2)t/L\right)^{1/n}, & n \geq 3,\\
\left(2At/L\right)^{1/2}, & n = 2.
\end{cases}
\end{equation*}
It is the unique solution of the Hele-Shaw problem with a point source,
\begin{equation}
\label{Hele-Shaw point source problem}
\left\{
\begin{aligned}
\Delta v &= 0 && \mbox{in } \Omega(v) \backslash \{0\},\\
\lim\limits_{|x| \rightarrow 0} \displaystyle \frac{v(x,t)}{|x|^{2-n}}&= C_*, && n\geq 3, \quad
\mbox{or} &\lim\limits_{|x| \rightarrow 0} \displaystyle -\frac{v(x,t)}{\log(|x|)}&= C_*, && n=2,\\
\displaystyle v_t&=\frac{1}{L}|Dv|^2 && \mbox{on } \partial \Omega(v),\\
v(x,0)&=0 && \mbox{in } \mathbb{R}^n \backslash \{0\}.
\end{aligned}
\right.
\end{equation}
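For the reader's convenience we record the standard computation behind the formula for $\rho$: on the free boundary $\{|x|=\rho(t)\}$ one has $|DV| = A(n-2)\rho^{1-n}$ for $n \geq 3$ and $|DV| = A/\rho$ for $n=2$, so the velocity law in \eqref{Hele-Shaw point source problem} (equivalently, $\dot\rho = \frac{1}{L}|DV|$) gives
\begin{equation*}
\rho^{n-1}\dot\rho = \frac{A(n-2)}{L} \quad (n\geq3), \qquad \rho\,\dot\rho = \frac{A}{L} \quad (n=2),
\end{equation*}
and integrating with $\rho(0)=0$ yields the expressions above.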
The asymptotic result for radial solutions of the Stefan problem follows from Theorem 6.5 and Theorem 7.2 in \cite{QV}.
\begin{theorem}[Far field limit]
\label{far field radial}
Let $\theta$ be the radial solution of the Stefan problem satisfying boundary conditions (\ref{boundary con1}) and initial condition (\ref{initial cond radial}). Then
\begin{equation}
\lim\limits_{t \rightarrow \infty} t^{(n-2)/n} |\theta(x,t) - V(x,t)|=0
\end{equation}
uniformly on sets of form $\{x \in \mathbb{R}^n: |x| \geq \delta t^{1/n}\}, \delta >0$ if $n \geq 3$, and
\begin{equation}
\displaystyle \lim\limits_{t \rightarrow \infty} \log \sqrt{\frac{2A}{L}}\mathcal{R}(t) \left |\theta(x,t) - \displaystyle \frac{A}{\log \sqrt{\frac{2A}{L}}\mathcal{R}(t)}\left(\log \sqrt{\frac{2A}{L}}\mathcal{R}(t)- \log|x|\right)_+\right |=0
\end{equation}
uniformly on sets of form $\{x \in \mathbb{R}^n: |x| \geq \delta \mathcal{R}(t)\}, \delta >0$ if $n =2 $.
\end{theorem}
\begin{proof}
Following the proof of Theorem 6.5 in \cite{QV} and recalling that we assume $\theta = Aa^{2-n}$ for $|x|=a$, we immediately get the result for $n \geq 3$.
For $n=2$, let $\mathcal{R}_1(t)$ be the solution of
$\frac{\mathcal{R}_1^2}{2}\left(\log \mathcal{R}_1- \frac{1}{2}\right)=\frac{At}{L}$; then
$\lim\limits_{t \rightarrow \infty} \frac{\mathcal{R}_1(t)}{\mathcal{R}(t)}=\sqrt{\frac{2A}{L}}.$
Thus, we can replace $\mathcal{R}_1(t)$ in Theorem 7.2 in \cite{QV} by $\sqrt{\frac{2A}{L}}\mathcal{R}(t)$.
\end{proof}
Finally, we can improve Theorem \ref{far field radial} to have the following convergence result
for rescaled radial solutions of the Stefan problem which holds up to $t=0$.
\begin{lemma}[Convergence for radial case]
\label{convergence radial lemma}
Let $\theta(x,t)$ be a radial solution of the Stefan problem satisfying the corresponding boundary and initial conditions. Then $\theta^\lambda$ converges locally uniformly to $V_{A,L}$ in the set $(\mathbb{R}^n \backslash \{0\}) \times [0,\infty)$.
\end{lemma}
\begin{proof}
We will prove the uniform convergence in the sets $Q=\{(x,t): |x| \geq \varepsilon, 0 \leq t \leq T\}$ for some $\varepsilon, T >0$ and use notation $V=V_{A,L}$.
We consider the case $ n \geq 3$ first.
Set $\xi =\lambda^{1/n}x, \tau = \lambda t$; then a direct computation using the explicit form of $\rho$ gives $V(\xi, \tau)=A\left(|\xi|^{2-n}-\rho^{2-n}(\tau)\right)_+=\lambda^{(2-n)/n}V(x,t)$, that is, $V(x,t)=\lambda^{(n-2)/n}V(\xi, \tau)$. Let $t_0 = \rho ^{-1}(\varepsilon /2)$. We split the proof into two cases:
\begin{enumerate}[label=(\alph*)]
\item When $0\leq t \leq t_0$: Clearly from the formula, we have $V(x,t)=0 \mbox{ in } \{(x,t): |x| \geq \varepsilon, 0 \leq t \leq t_0\}.$
Besides, for $\lambda$ large enough,
\begin{align*}
R_\lambda(t)= \frac{R(\lambda t)}{\lambda^{1/n}}\leq \frac{R(\lambda t_0)}{\lambda^{1/n}}<
\rho(t_0)+\frac{\varepsilon}{2} = \varepsilon \mbox{ (due to Lemma \ref{free boundary
bound}).}
\end{align*}
Thus, $\theta^\lambda =0=V$ in $\{(x,t): |x| \geq \varepsilon, 0 \leq t \leq t_0\}$ for $\lambda$ large enough.
\item When $t_0 \leq t \leq T$, we have:
\begin{align}
\label{second case}
|\theta^\lambda(x,t) -V(x,t)| = t^{(2-n)/n} \tau^{(n-2)/n}|\theta(\xi, \tau) - V(\xi, \tau)|
\end{align}
Since $t_0 \leq t \leq T$, $t^{(2-n)/n} $ is bounded. From Theorem \ref{far field radial}, the right hand side of (\ref{second case}) converges to $0$ uniformly in the sets $\{\xi \in \mathbb{R}^n: |\xi| \geq \delta \tau^{1/n}\}=\{x \in \mathbb{R}^n: |x| \geq \delta t^{1/n}\} \supset \{(x,t): |x| \geq \varepsilon, t_0 \leq t \leq T\}$ for fixed $\varepsilon$ and $\delta>0$ small enough and thus we obtain the convergence for $n \geq 3$.
\end{enumerate}
For $n=2$, we argue similarly to the case $n \geq 3$, using
$\lim\limits_{\lambda \rightarrow \infty} \frac{\mathcal{R}(\tau)}{\mathcal{R}(\lambda)}= t^{1/2}$
together with Theorem \ref{far field radial}.
\end{proof}
\subsection{Some more results for viscosity solutions}
Following \cite{P1,QV}, we can also state some results for viscosity solutions.
\begin{lemma}
For $L=1/m$ (resp. $L=1/M$), with $m,M$ as in (\ref{condition in g}), let $\theta(x,t)$ be the radial solution of Stefan problem (\ref{Stefan}) satisfying boundary conditions (\ref{boundary con1}) and initial condition (\ref{initial cond radial}) with $g(x)=1/L$ and $a$ such that $B(0,a) \subset K$ (resp. $K \subset B(0,a)$). Then the function $\theta(x,t)$ is a viscosity subsolution (resp. supersolution) of the Stefan problem (\ref{Stefan}) in $Q$.
\end{lemma}
\begin{proof}
The statement follows directly from properties of radial solutions and the fact that a classical solution is also a viscosity solution.
\end{proof}
Using the viscosity comparison principle, we can also get the same estimates for the free boundary as in
Lemma \ref{free boundary bound} and boundedness for a \textbf{general viscosity solution}.
\begin{lemma}
\label{viscos free boundary bound}
Let $v$ be a viscosity solution of (\ref{Stefan}). There exist $t_0>0$ and constants $C, C_1, C_2>0$ such that for $t\geq t_0$,
\begin{align*}
C_1 t^{1/n} &< \min_{\Gamma_t(v)}|x|\leq \max_{\Gamma_t(v)}|x|< C_2t^{1/n} &&\mbox{if } n\geq 3,\\
C_1 \mathcal{R}(t)&<\min_{\Gamma_t(v)}|x|\leq \max_{\Gamma_t(v)}|x|< C_2 \mathcal{R}(t) &&\mbox{if } n= 2,
\end{align*}
and for $0 \leq t\leq t_0$,
$\displaystyle\max_{\Gamma_t(v)}|x|< C_2.$ Moreover,
$0 \leq v(x,t) \leq C|x|^{2-n} \mbox{ for all } n \geq 2.$
\end{lemma}
\begin{proof}
Argue as in \cite{P1}, using Lemma \ref{free boundary bound} and Lemma \ref{Boundedness of weak solution} above.
\end{proof}
We also have the near field limit and the asymptotic behavior result as in \cite{QV}.
\begin{theorem}[Near-field limit]
\label{Near field limit Theorem} The viscosity solution $v(x,t)$ of the Stefan problem
(\ref{Stefan}) converges to the unique solution $P(x)$ of the exterior Dirichlet problem
\begin{equation}
\label{near filed limit}
\left\{
\begin{aligned}
\Delta P&=0, && x \in \mathbb{R}^n \backslash K,\\
P&=1,&&x \in \Gamma,\\
\lim\limits_{|x| \rightarrow \infty}P(x) &=0&& \mbox{if } n \geq 3, &&
\mbox{ or } & P \mbox{ is bounded }& \mbox{if } n=2,
\end{aligned}
\right.
\end{equation}
as $t \rightarrow \infty$ uniformly on compact subsets of $\overline{K^c}$.
\end{theorem}
\begin{proof}
See proof of Theorem 8.1 in \cite{QV}.
\end{proof}
\begin{lemma}[cf. {\cite[Lemma 4.5]{QV}}]
\label{C*}
There exists a constant $C_*=C_*(K,n)$ such that the solution $P$ of problem (\ref{near filed limit}) satisfies
$\lim\limits_{|x| \rightarrow \infty}|x|^{n-2}P(x)=C_*.$
\end{lemma}
\section{Uniform convergence of variational solutions}
\label{sec:var-conv}
\subsection{Limit problem and the averaging properties of media}
We first recall the limit variational problem as introduced in \cite{P1} (see \cite[section 5]{P1} for
derivation and properties). Let $U_{A,L}(x,t) := \int_0^t V_{A, L}(x, s) \;ds$. For given
$A,L>0$, \cite[Theorem~5.1]{P1} yields that $U_{A,L}(x,t)$ is the unique solution of the limit obstacle problem
\begin{equation}
\label{limit problem}
\left\{\begin{aligned}
w &\in \mathcal{K}_t,\\
a(w,\phi) &\geq \langle-L,\phi\rangle, && \text{for all } \phi \in V,\\
a(w,\psi w)&= \langle -L, \psi w\rangle && \text{for all } \psi \in W,
\end{aligned}\right.
\end{equation}
where
$\mathcal{K}_t= \Big\{ \varphi \in \bigcap_{\varepsilon>0}H^1(\mathbb{R}^n \backslash
B_\varepsilon) \cap C(\mathbb{R}^n \backslash B_\varepsilon): \varphi \geq 0, \lim\limits_{|x|
\rightarrow 0}\frac{\varphi(x)}{U_{A,L}(x,t)}=1\Big \},$
\begin{align}
\label{set V}
V&=\left\{ \phi \in H^1(\mathbb{R}^n):\phi \geq 0, \phi = 0 \mbox{ on } B_\varepsilon \mbox{ for some } \varepsilon >0\right\},\\
\label{set W}
W&=V \cap C^1(\mathbb{R}^n),
\end{align}
and
$$ a_{\Omega} (u,v):=\int_{\Omega}{Du \cdot Dv dx}, \quad \left<u,v\right>_\Omega :=
\int_{\Omega}uvdx. $$
We omit the set $\Omega$ in the notation if $\Omega = \mathbb{R}^n$.
We also recall the following application of the subadditive ergodic theorem.
\begin{lemma}[cf. {\cite[Section 4, Lemma 7]{K2}, see also \cite{P1}}]
\label{media}
For given $g$ satisfying (\ref{condition in g 2}), there exists a constant, denoted by
$\left<1/g\right>$, such that if $\Omega \subset \mathbb{R}^n$ is a bounded measurable set and if
$\{u^\varepsilon\}_{\varepsilon>0} \subset L^2(\Omega)$ is a family of functions such that $u^\varepsilon \rightarrow u$ strongly in $L^2(\Omega)$ as $\varepsilon \rightarrow 0$, then
\begin{equation*}
\lim\limits_{\varepsilon\rightarrow 0} \int\limits_{\Omega}\frac{1}{g(x/\varepsilon,\omega)}u^\varepsilon(x)dx
=\int\limits_{\Omega}\left<\frac{1}{g}\right>u(x)dx \mbox{ a.e. $\omega \in A$}.
\end{equation*}
\end{lemma}
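In the periodic setting the statement of Lemma \ref{media} can be checked numerically; the following one-dimensional sketch, with an arbitrary periodic $g$ and test function $u$ chosen only for illustration, shows the weighted integrals approaching $\left<1/g\right>\int u$ as the scaling factor grows.
\begin{verbatim}
# One-dimensional numerical illustration of the averaging property in the
# periodic case: the integral of u(x)/g(s x) approaches <1/g> * integral of
# u as the scaling factor s grows.  The choices of g and u are arbitrary and
# serve only as an example.
import numpy as np

g = lambda x: 2.0 + np.sin(2.0 * np.pi * x)   # 1-periodic, between 1 and 3
u = lambda x: np.exp(-x**2)                   # a fixed test function

x, dx = np.linspace(-5.0, 5.0, 1000001, retstep=True)
xp = np.linspace(0.0, 1.0, 100001)
mean_inv_g = np.mean(1.0 / g(xp))             # <1/g>, average over one period

for s in [1.0, 10.0, 100.0]:                  # s plays the role of lambda^{1/n}
    lhs = np.sum(u(x) / g(s * x)) * dx
    rhs = mean_inv_g * np.sum(u(x)) * dx
    print(s, lhs, rhs)
\end{verbatim}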
\subsection{Uniform convergence of rescaled variational solutions}
Now we are ready to prove the first main result, similar to Theorem 6.2 in \cite{P1}.
\begin{theorem}
\label{convergence of variational solutions}
Let $u$ be the unique solution of variational problem (\ref{Variational problem}) and
$u^\lambda$ be its rescaling. Let $U_{A,L}$ be the unique solution of limit problem (\ref{limit problem}) where $A=C_*$ as in Lemma \ref{C*}, and $L=\left<1/g\right>$ as in Lemma \ref{media}. Then the functions $u^\lambda$ converges locally uniformly to $U_{A,L}$ as $\lambda \rightarrow \infty$ on $\left(\mathbb{R}^n \backslash \{0\}\right) \times \left[0,\infty\right)$.
\end{theorem}
\begin{proof}
We argue as in \cite{P1}. Fix $T > 0$. By Lemma~\ref{viscos free boundary bound}, we can bound
$\Omega_t(u^\lambda)$ by $\Omega := B_{\delta}(0)$ for some $\delta >0$, for all $0 \leq t \leq
T$ and $\lambda >0$. For some $\varepsilon >0$, define $\Omega_\varepsilon:= \Omega \backslash
\overline{B(0,\varepsilon)}$, $Q_{\varepsilon}:= \Omega_\varepsilon \times [0,T]$ . We will prove the convergence in $Q_\varepsilon$.
Let $v$ be the viscosity solution of the Stefan problem (\ref{Stefan}). We can find constants
$0 < a < b$ such that $K \subset B_a(0)$ and $\overline{\Omega_0} \subset B_b(0)$. Set $L=1/M$ and $A
= \max v_0$. Choose radially symmetric smooth $\theta_0 \geq 0$ such that $\theta_0 \geq v_0$ on
$\Omega_0 \setminus B_a(0)$ and $\theta_0 = 0$ on $\mathbb{R}^n \setminus B_b(0)$. The radial solution
$\theta$ of the
Stefan problem on $\mathbb{R}^n \setminus \overline{B_a(0)}$ with such parameters will be above $v$ by the comparison
principle.
Thus, for $\lambda$ large enough, the rescaled solutions satisfy
$$0 \leq v^\lambda \leq \theta^\lambda \mbox{ in } Q_{\varepsilon / 2}.$$
On the other hand, by Lemma \ref{convergence radial lemma},
$\theta^\lambda$ converges to $V_{A,L}$ as $\lambda \rightarrow
\infty$ uniformly on $Q_{\varepsilon/2}$ and $V_{A,L}$ is bounded in $Q_{\varepsilon/2}$ and therefore for $\lambda$ large enough so
that $(B_a(0))^\lambda \subset B_{\varepsilon/2}(0)$,
\begin{equation}
\label{u^lamda_t bounded}
\|u^\lambda_t\|_{L^\infty(Q_{\varepsilon/2})}= \|v^\lambda\|_{L^\infty(Q_{\varepsilon/2})}\leq C(\varepsilon).
\end{equation}
Since $u^\lambda$ satisfies (\ref{rescaled Variational problem}), we have
$$\Delta u^\lambda (\varphi -u^\lambda) \leq
\left(\lambda^{(2-n)/n}u^\lambda_t-f(\lambda^{1/n}x)\right)(\varphi - u^\lambda ) \mbox{ a.e for
any } \varphi \in \mathcal{K}^\lambda(t).$$
As $u^\lambda_t$ is bounded, $u^\lambda$ satisfies the elliptic obstacle problem
$$\Delta u^\lambda (\varphi -u^\lambda) \leq \left(C\lambda^{(2-n)/n}-f(\lambda^{1/n}x)\right)(\varphi - u^\lambda )$$
a.e for any $\varphi \in \mathcal{K}^\lambda(t)$ such that $\varphi - u^\lambda \geq 0$.
Now we can use the standard regularity estimates for the obstacle problem (see
\cite [Proposition 2.2, chapter 5]{R1} for instance),
$$\displaystyle \|\Delta u^\lambda (\cdot,t)\|_{L^p(\Omega_{\varepsilon / 2})} \leq \left
\|C\lambda^{(2-n)/n}-\frac{1}{g^\lambda} \right\|_{L^p(\Omega_{\varepsilon /2})} \leq C_0 \mbox{
for all } 1 \leq p \leq \infty,$$
for all $\lambda$ large so that also $\Omega_0^\lambda \subset B_{\varepsilon/2}(0)$.
Using \eqref{u^lamda_t bounded} and $u^\lambda(x,t)= \int_{0}^{t}v^\lambda(x,s)ds$, we conclude
$\|u^\lambda(\cdot,t)\|_{L^p(\Omega_{\varepsilon /2})}$ is bounded uniformly in $t \in [0,T]$
and $\lambda$ large.
Using elliptic interior estimate results for obstacle problem again (for example,
\cite[Theorem 2.5]{R1}), we can find constants $0<\alpha < 1$ and $C_2$, independent of $t \in [0,T]$ and
$\lambda \gg 1$, such that
\begin{equation*}
\begin{aligned}
\|u^\lambda(\cdot,t)\|_{W^{2,p}(\Omega_\varepsilon)} & \leq C_2,\\
\|u^\lambda(\cdot,t)\|_{C^{0,\alpha}(\Omega_\varepsilon)}& \leq C_2,
\end{aligned}
\quad\mbox{for all } 0 \leq t \leq T, \lambda \gg 1.
\end{equation*}
Moreover, using (\ref{u^lamda_t bounded}) again, we have $|u^\lambda(x,t)-u^\lambda(x,s)| \leq
C_3|t-s|$. Thus $u^\lambda$ is H\"older continuous in $x$ with $0<\alpha<1$ and Lipschitz
continuous in $t$. In particular, $u^\lambda$ satisfies
$$\|u^\lambda\|_{C^{0,\alpha}(Q_\varepsilon)} \leq C_4(C_2,C_3) \mbox{ for all } \lambda \geq \lambda_0.$$
The argument for case $n=2$ is similar.
By the Arzel\`{a}-Ascoli theorem, we can find a function $\bar u \in C((\mathbb{R}^n \setminus \{0\})
\times [0, \infty))$ and a subsequence $\{u^{\lambda_k}\} \subset
\{u^\lambda\}$ such that
$$u^{\lambda_k} \rightarrow \bar{u} \mbox{ locally uniformly on $(\mathbb{R}^n \setminus \{0\})
\times [0, \infty)$} \mbox{ as } k
\rightarrow \infty, $$
Due to the compact embedding of $H^2$ in $H^1$, we have,
$u^{\lambda_k}(\cdot,t) \rightarrow \bar{u}(\cdot,t)$ strongly in $H^1(\Omega_\varepsilon)$ for all $t
\geq 0$, $\varepsilon > 0$.
To finish the proof, we need to show that the function $\bar{u}$ is the solution of the limit
problem (\ref{limit problem}); then, by the uniqueness of the limit problem, we deduce that the convergence is not restricted to a subsequence.
\begin{lemma}[cf. {\cite[Lemma 6.3]{P1}}]
\label{U satisfies obstacle inequality}
For each $t \geq 0$, $\bar{w}:=\bar{u}(\cdot,t)$ satisfies
\begin{align}
\label{1}
a(\bar{w}, \phi) \geq \left<-L,\phi\right> & \mbox{ for all } \phi \in V,\\
\label{2}
a(\bar{w}, \psi \bar{w})=\left<-L, \psi \bar{w}\right>& \mbox{ for all } \psi \in W,
\end{align}
where $L= \left<1/g \right>$ as in Lemma \ref{media} and $V, W$ as in (\ref{set V}) and (\ref{set W}).
\end{lemma}
\begin{proof}Consider $n \geq 3$.
Following the techniques in \cite{P1}, fix $t \in [0,T]$ and denote $w^k:=
u^{\lambda_k}(\cdot,t)$. Take $\phi \in V$ first; by the definition of $V$ there is $\varepsilon>0$
such that $\phi =0$ on $B_\varepsilon(0)$, and there exists $k_0>0$ such that
$\Omega_0^{\lambda_k} \subset B_\varepsilon(0)$ for all $k \geq k_0$. Set $\varphi^k=\phi + w^k \in \mathcal{K}^{\lambda_k}(t)$. Substituting the
function $\varphi^k$ into the rescaled problem (\ref{rescaled Variational problem}),
integrating both sides and integrating by parts yields
$$a(w^k,\phi) \geq -\lambda^{(2-n)/n}_k \left<u^{\lambda_k}_t, \phi\right>+ \left<-\frac{1}{g^{\lambda_k}}, \phi\right>.$$
Recalling Lemma \ref{media} and that $u^{\lambda_k}_t$ is bounded, and noting that $w \mapsto a(w,\phi)$ is
a bounded linear functional on $H^1$ and that $w^k \rightarrow \bar{w}$ strongly in $H^1$ as $k
\rightarrow \infty$, we can send $\lambda_k \to \infty$ and obtain (\ref{1}).
Now take $\psi \in W$ such that $0 \leq \psi \leq 1, \psi =0 $ on $B_\varepsilon(0)$ and take
$k_0$ such that $\Omega^{\lambda_k}_0 \subset B_\varepsilon(0)$ for all $k \geq k_0$. Since
$\psi \in W$, we have $\psi \bar{w} \in V$. As above we have $a(\bar{w},\psi \bar{w}) \geq
\left<-L, \psi \bar{w}\right>$. Moreover, consider $\varphi^k=(1-\psi)w^k \in
\mathcal{K}^{\lambda_k}(t)$, $k \geq k_0$. Then,
\begin{align*}
a(w^k, \psi w^k)= -a(w^k, \varphi^k-w^k)&\leq\left<-\frac{1}{g^{\lambda_k}}, \psi w^k\right>+\lambda_k^{(2-n)/n}\left<w^k, \psi w^k\right>.
\end{align*}
Again using Lemma \ref{media}, the boundedness in $L^\infty(\mathbb{R}^n)$ of $w^k$, the lower
semi-continuity in $H^1$ of the map $w \mapsto a(w, \psi w)$, and the fact that $w^k \rightarrow \bar{w}$ strongly in $H^1$ as $k \rightarrow \infty$,
we can conclude the equality (\ref{2}).
Again, $n = 2$ is similar.
\end{proof}
Finally, the next lemma establishes that the singularity of $\bar u$ as $|x| \rightarrow 0$ is correct.
\begin{lemma}[cf. {\cite[Lemma 6.4]{P1}}]
\label{singularity U}
We have
$$\lim\limits_{|x| \rightarrow 0} \frac{\bar{u}(x,t)}{U_{C_*, L}(x,t)}=1$$
for every $t \geq 0$, where $C_*$ is as in Lemma \ref{C*}.
\end{lemma}
\begin{proof}
The proof follows the proof of \cite[Lemma 6.4]{P1} since the solutions of the Stefan problem have the same near-field limit (Theorem \ref{Near field limit Theorem}) as the Hele-Shaw solutions.
\end{proof}
This finishes the proof of Theorem \ref{convergence of variational solutions}.
\end{proof}
\section{Uniform convergence of rescaled viscosity solutions and free boundaries}
\label{sec:visc-conv}
In this section, we will deal with the convergence of $v^\lambda$ and their free boundaries. Let $v$ be a viscosity solution of the Stefan problem (\ref{Stefan}) and $v^\lambda$ be its rescaling. Let $V=V_{C_*,L}$ be the solution of Hele-Shaw problem with a point source as in (\ref{Hele-Shaw with point sorce}), where $C_*$ is the constant of Lemma \ref{C*} and $L=\left<1/g\right>$ as in Lemma \ref{media}.
We define the half-relaxed limits in $\{|x| \neq 0, t\geq 0\}$:
\begin{align*}
& v^*(x,t)= \limsup_{(y,s),\lambda \rightarrow (x,t), \infty} v^\lambda(y,s), & v_*(x,t)= \liminf_{(y,s),\lambda \rightarrow (x,t), \infty} v^\lambda(y,s),
\end{align*}
\begin{remark}
$V$ is continuous in $\{|x| \neq 0, t\geq 0\}$, therefore $V_*=V=V^*$.
\end{remark}
To complete the proof of Theorem~\ref{th:main-convergence}, we prove a result similar to \cite[Theorem 7.1]{P1}.
\begin{theorem}
\label{convergence of rescaled viscosity solution}
The rescaled viscosity solution $v^\lambda$ of the Stefan problem (\ref{Stefan}) converges
locally uniformly to $V = V_{C_*, \ang{1/g}}$ in $(\mathbb{R}^n \backslash \{0\}) \times [0,\infty)$ as $\lambda \rightarrow \infty$ and
$$v_*=v^*=V.$$
Moreover, the rescaled free boundary $\{\Gamma(v^\lambda)\}_{\lambda}$ converges to $\Gamma(V)$ locally uniformly with respect to the Hausdorff distance.
\end{theorem}
To prepare for the proof of Theorem \ref{convergence of rescaled viscosity solution}, we need to
collect some results which are similar to the ones in \cite{K3} and \cite{P1} with some adaptations to
our case. All the results of this section for $n\geq 3$ can be obtained for $n=2$ by
using the limit $\frac{1}{\log \mathcal{R}(\lambda)} \rightarrow 0$ as $\lambda \rightarrow \infty$.
Thus, from here on we only consider the case $n \geq 3$; the results for $n=2$ are omitted.
\subsection{Some necessary technical results}
\begin{lemma}[cf. {\cite[Lemma 3.9]{K3}}]
\label{domain of u coincide with domain of v}
The viscosity solution $v$ of the Stefan problem (\ref{Stefan}) is strictly positive in $\Omega(u)$ and satisfies $\Omega(v)= \Omega(u) \mbox{ and } \Gamma(v) = \Gamma(u)$.
\end{lemma}
\begin{lemma}
\label{subharmonic v^* Lemma}
Let $v^\lambda$ be a viscosity solution of the rescaled problem \eqref{rescaled equation}.
Then $v^*(\cdot,t)$ is subharmonic in $\mathbb{R}^n \setminus \{0\}$ and $v_{*}(\cdot,t)$ is superharmonic in $\Omega_t(v_*) \backslash \{0\}$ in the viscosity sense.
\end{lemma}
\begin{proof}
This follows from a standard viscosity solution argument using test functions, see for instance \cite{K1}.
\end{proof}
The behavior of the functions $v^*,v_*$ at the origin and near their boundaries can be established by
following the arguments in \cite{P1} and \cite{K3}.
\begin{lemma}[$v^*$ and $v_*$ behave as $V$ at the origin]
\label{boundary condition for limit problem}
The functions $ v^*, v_*$ have a singularity at $0$ with:
\begin{align}
\label{singularity of V}
&\lim\limits_{|x| \rightarrow 0+}\frac{v_*(x,t)}{V(x,t)}=1, & \lim\limits_{|x| \rightarrow 0+}\frac{v^*(x,t)}{V(x,t)}=1, \mbox{ for each } t>0.
\end{align}
\end{lemma}
\begin{proof}
See \cite[Lemma 7.4]{P1}.
\end{proof}
\begin{lemma}[cf. {\cite[Lemma 5.4]{K3}}]
\label{limit of sequence in 0-level set of ulambda k}
Suppose that $(x_k,t_k) \in \{u^{\lambda_k}=0\}$ and $(x_k,t_k,\lambda_k) \rightarrow (x_0,t_0,\infty)$. Then:\begin{enumerate}[label=\alph*)]
\item $U(x_0,t_0)=0$,
\item If $x_k \in \Gamma_{t_k}(u^{\lambda_k})$ then $x_0 \in \Gamma
_{t_0}(U)$.
\end{enumerate}
\end{lemma}
\begin{proof}
See proof of \cite[Lemma 5.4]{K3}.
\end{proof}
The rest of the convergence proof in \cite{P1} relies on the monotonicity of the solutions of the
Hele-Shaw problem in time. Since the Stefan problem lacks this monotonicity, we will show that
sufficiently regular initial data satisfy a weak monotonicity below. The convergence result for general initial data will then follow by uniqueness of the limit and the comparison principle.
\begin{lemma}
\label{weak monotonicity}
Suppose that $v_0$ satisfies \eqref{initial data}. Then there exists $C \geq 1$, independent of $x$ and $t$, such that
\begin{equation}
\label{monotonicity condition}
v_0(x) \leq C v(x, t) \mbox{ in } (\mathbb{R}^n \backslash K) \times [0,\infty).
\end{equation}
\end{lemma}
\begin{proof}
Let $\gamma_1:=\min_{\partial \Omega_0}|Dv_0|, \gamma_2 :=\max_{\partial \Omega_0}|Dv_0|$. Note
that $0<\gamma_1\leq \gamma_2 <\infty$. For a given $\varepsilon > 0$, let $w$ be the solution of
the boundary value problem
\begin{equation*}
\left \{
\begin{aligned}
\Delta w &= 0 &&\mbox{in } \Omega_0 \backslash K,\\
w &= \varepsilon &&\mbox{on } K,\\
w &=0 && \mbox{on } \Omega_0^c.
\end{aligned}
\right.
\end{equation*}
For $x$ close to $\partial \Omega_0$ we have $v_0(x) \geq \frac{\gamma_1}{2} \text{dist}(x,\partial
\Omega_0)$. Since $\gamma_1>0$, $v_0 >0$ in $\Omega_0$ and $\partial \Omega_0$ has a uniform ball
condition, we can choose $\varepsilon >0$ small enough such that $w \leq v_0$ in $\mathbb{R}^n
\backslash K$. By Hopf's Lemma, $\gamma_w:=\min_{\partial\Omega_0}|Dw|>0$. It is clear that $w$ is
a classical subsolution of the Stefan problem (\ref{Stefan}) and the comparison principle yields
\begin{equation}
\label{w<=v}
w \leq v \mbox{ in } (\mathbb{R}^n \backslash K) \times [0,\infty).
\end{equation}
Now assume that (\ref{monotonicity condition}) does not hold, that is, for every $k \in \mathbb N$, there exists $(x_k,t_k) \in (\mathbb{R}^n \backslash K) \times [0,\infty)$ such that
\begin{equation}
\label{contradiction}
\frac{1}{k}v_0(x_k)> v(x_k,t_k).
\end{equation}
Clearly $x_k \in \Omega_0$. The sequence $\{t_k\}$ is bounded by Theorem~\ref{Near field limit Theorem} since
$v_0$ is bounded.
Therefore, there exists a subsequence $(x_{k_l},t_{k_l})$ and a point $(x_0,t_0)$ such that
$(x_{k_l},t_{k_l}) \rightarrow (x_0,t_0)$. Since $v_0$ is bounded, we get $v(x_0,t_0)\leq 0$ and
thus $x_0 \in \partial \Omega_0$ by \eqref{w<=v}. Consequently, for $k_l$ large enough,
\begin{equation*}
w(x_{k_l}) \geq \frac{1}{2} \gamma_w \text{dist}(x_{k_l}, \partial
\Omega_0)=\left(\frac{\gamma_w}{4\gamma_2}\right)2\gamma_2 \text{dist}(x_{k_l}, \partial \Omega_0)
\geq \frac{\gamma_w}{4 \gamma_2} v_0(x_{k_l}).
\end{equation*}
Combine this with (\ref{w<=v}) and (\ref{contradiction}) to obtain
\begin{equation*}
\frac{1}{k_l}v_0(x_{k_l})> \frac{\gamma_w}{4\gamma_2} v_0(x_{k_l})
\end{equation*}
for every $k_l$ large enough, which yields a contradiction since $v_0(x_{k_l})>0$.
\end{proof}
Some of the following lemmas will hold under the condition (\ref{monotonicity condition}).
\begin{lemma}
\label{monotonicity 2}
Let $u$ be the solution of the variational problem \eqref{Variational problem}, let $v$ be the
associated viscosity solution of the Stefan
problem, and suppose that \eqref{monotonicity condition} holds. Then
\begin{equation}
u(x,t)\leq C t v(x,t).
\end{equation}
\end{lemma}
\begin{proof}
The statement follows from checking that $\tilde{u}:= C tv$ is a supersolution of the heat
equation in $\Omega(u)$ and the classical comparison principle. Indeed,
$\tilde u_t - \Delta \tilde u = Cv + Ct(v_t - \Delta v) \geq v_0 \geq f = u_t - \Delta u$ in $\Omega(u)$ by
\eqref{monotonicity condition}.
\end{proof}
\begin{lemma}[cf. {\cite[Lemma 5.5]{K3}}]
\label{Inclusion of positive domain V, v*}
The function $v_*$ satisfies
$\Omega(V) \subset \Omega(v_*).$
In particular $v_* \geq V$.
\end{lemma}
\begin{proof}
Assume that the inclusion does not hold; then there exists $(x_0,t_0) \in \Omega(V)$ with $v_*(x_0,t_0)=0$.
By (\ref{monotonicity condition}) and Lemma \ref{monotonicity 2}, there exists $C>1$ such that $
u(x,t)\leq C t v(x,t) .$ This inequality is preserved under the rescaling, $u^\lambda(x,t)\leq C
t v^\lambda(x,t)$ in $(\mathbb{R}^n\backslash K^\lambda)\times [0,\infty)$. Taking $\liminf^*$
of both sides gives the contradiction $0<U(x_0,t_0) \leq C t_0v_*(x_0,t_0)=0$.
The inequality $v_* \geq V$ follows from the elliptic comparison principle as $v_*$ is superharmonic in
$\Omega(v_*) \setminus \{0\}$ by Lemma~\ref{subharmonic v^* Lemma} and behaves as $V$ at the origin by Lemma \ref{boundary condition for limit
problem}.
\end{proof}
\begin{lemma}
\label{positive sup rescaling}
There exists a constant $C > 0$, independent of $\lambda$, such that if $x_0 \in \overline{\Omega_{t_0}(u^\lambda)}$ and $B_r(x_0) \cap \Omega_0^{\lambda} = \emptyset$ for some $r$, then for every $\lambda$ large enough we have
\begin{equation*}
\sup_{x \in \overline{B_r(x_0)}}u^\lambda(x,t_0)>C r^2.
\end{equation*}
\end{lemma}
\begin{proof}
Follow the arguments in \cite[Lemma 3.1]{K2}, noting that, since $u^\lambda_t$ is bounded,
for $\lambda$ large enough $u^\lambda$ is a strictly subharmonic function in
$\Omega_{t_0}(u^\lambda) \setminus \overline{\Omega_0^\lambda}$.
\end{proof}
\begin{cor}
\label{monotoniciy for v}
There exists a constant $C_1=C_1(n,M,\lambda_0)$ such that if $(x_0,t_0) \in \Omega(v^\lambda)$ and $B_r(x_0) \cap \Omega_0^\lambda = \emptyset$ and $\lambda\geq \lambda_0$, we have
\begin{equation*}
\sup_{x \in B_r(x_0)}v^\lambda(x,t_0) \geq \frac{C_1r^2}{t_0}.
\end{equation*}
\end{cor}
\begin{proof}
The inequality follows directly from Lemma \ref{monotonicity 2} and Lemma \ref{positive sup rescaling}.
\end{proof}
\begin{lemma}[cf. {\cite[Lemma 5.6 ii]{K3}}]
\label{Inclusion of boundaries}
We have the following inclusion:
$$\Gamma(v^*) \subset \Gamma(V).$$
\end{lemma}
\begin{proof}
Argue as in \cite[Lemma 5.6 ii]{K3} together with using Lemma \ref{limit of sequence in 0-level set of ulambda k} and Lemma \ref{positive sup rescaling} above.
\end{proof}
Now we are ready to prove Theorem \ref{convergence of rescaled viscosity solution}.
\subsection{Proof of Theorem \ref{convergence of rescaled viscosity solution}}
\begin{proof}
\textit{\textbf{Step 1}}. We prove the convergence of viscosity solutions and the free boundaries under the conditions (\ref{initial data}) and (\ref{monotonicity condition}) first.
Lemma~\ref{viscos free boundary bound} yields that $\Omega_t(v^*)$ is bounded for every time $t>0$. Since $\Omega(V)$ is a simply connected set, Lemma \ref{Inclusion of boundaries} implies that
\begin{equation*}
\overline{\Omega(v^*)} \subset \overline{\Omega(V)} \subset \Omega(V_{C_*+\varepsilon,L}) \mbox{ for all } \varepsilon>0.
\end{equation*}
We see from Lemma \ref{subharmonic v^* Lemma} that $v^*(\cdot, t)$ is a subharmonic function in $\mathbb{R}^n \setminus \{0\}$ for every $t>0$, and $\lim_{|x|\rightarrow 0}\frac{v^*(x,t)}{V(x,t)}=1$ for all $t \geq 0$ by Lemma \ref{boundary condition for limit problem}; the comparison principle therefore yields $v^*(x,t) \leq V_{C_*+\varepsilon,L}(x,t)$ for every $\varepsilon>0$.
By Lemma \ref{Inclusion of positive domain V, v*}, $V(x,t) \leq v_*(x,t)$, and letting $\varepsilon
\rightarrow 0^+$ we obtain by continuity
$$V(x,t) \leq v_*(x,t) \leq v^*(x,t) \leq V(x,t).$$
Therefore, $v_*=v^*= V$ and in particular, $\Gamma(v_*)= \Gamma(v^*)= \Gamma(V)$.
Now we need to show the uniform convergence of the free boundaries with respect to the Hausdorff distance. Fix $0 < t_1< t_2$ and denote:
\begin{align*}
& \Gamma^\lambda:= \Gamma(v^\lambda) \cap \{t_1 \leq t \leq t_2\}, & \Gamma^\infty:= \Gamma(V) \cap \{t_1 \leq t \leq t_2\},
\end{align*}
and the $\delta$-neighborhood of a set $A$ in $\mathbb{R}^n \times \mathbb{R}$ is $$U_\delta(A):= \{(x,t): \mbox{dist}((x,t), A) < \delta\}.$$
We need to prove that for all $\delta >0$, there exists $\lambda_0>0$ such that:
\begin{equation}
\label{1st Theorem 4.2}
\begin{matrix}
\Gamma^\lambda \subset U_\delta (\Gamma^\infty) & \mbox{ and } &
\Gamma^\infty \subset U_\delta(\Gamma^\lambda), \hspace{1cm} \forall \lambda \geq \lambda_0.
\end{matrix}
\end{equation}
We prove the first inclusion in (\ref{1st Theorem 4.2}) by contradiction. Suppose
therefore that we can find a subsequence $\{\lambda_k\}$ and a sequence of points $(x_k,t_k) \in \Gamma^{\lambda_k}$ such that
$\mbox{dist}((x_k,t_k), \Gamma^\infty) \geq \delta.$
Since $\Gamma^\lambda$ is uniformly bounded in $\lambda$ by Lemma \ref{viscos free boundary
bound}, there exists a subsequence $\{(x_{k_j}, t_{k_j})\}$ which converges to a point
$(x_0,t_0)$. By Lemma \ref{limit of sequence in 0-level set of ulambda k}, $(x_0,t_0) \in
\Gamma(U)= \Gamma(V)$. Moreover, since $t_1 \leq t_{k_j} \leq t_2$ then $t_1 \leq t_0 \leq t_2$
and therefore, $(x_0,t_0) \in \Gamma^\infty$, a contradiction.
The proof of the second inclusion in (\ref{1st Theorem 4.2}) is more technical. We prove a
pointwise result first. Suppose that there exists $\delta >0, (x_0,t_0) \in \Gamma^\infty$ and
$\{\lambda_k\}, \lambda_k \rightarrow \infty$, such that $\displaystyle \mbox{dist}((x_0,t_0),
\Gamma^{\lambda_k}) \geq \frac{\delta}{2}$ for all $k$. Then there exists $r >0$ such that $D_r(x_0,t_0):= B(x_0, r) \times [t_0-r,t_0+r]$ satisfies either:
\begin{equation}
\label{4nd Theorem 4.2}
D_r(x_0,t_0) \subset \{v^{\lambda_k}=0\} \mbox{ for all } k,
\end{equation}
or after passing to a subsequence,
\begin{equation}
\label{3rd Theorem 4.2}
D_r(x_0,t_0) \subset \{v^{\lambda_{k}}>0\} \mbox{ for all } k.
\end{equation}
If (\ref{4nd Theorem 4.2}) holds, then clearly $V=v_* =0$ in $D_r(x_0,t_0)$, which contradicts the assumption that $(x_0,t_0) \in \Gamma^\infty$.
Thus we assume that (\ref{3rd Theorem 4.2}) holds. In $D_r(x_0,t_0)$, $v^{\lambda_k}$ solves the
heat equation $\lambda_k^{(2-n)/n}v^{\lambda_k}_t- \Delta v^{\lambda_k} =0$. Set
\begin{equation*}
w^k(x,t):=v^{\lambda_k}(x,\lambda_k^{(2-n)/n} t)
\end{equation*} then $w^k>0$ in $D_r^w(x_0,t_0):=B(x_0,r) \times
[\lambda_k^{(n-2)/n}(t_0-r),\lambda_k^{(n-2)/n}(t_0+r)]$ and $w^k$ satisfies $w^k_t -\Delta w^k=0$
in $D_r^w(x_0,t_0)$. Since $\lambda_k^{(n-2)/n} \frac r2 \to \infty$ as $k \to \infty$, by Harnack's inequality for the heat equation, for fixed $\tau > 0$ there exists a constant $C_1 >
0$ such that for each $t \in [t_0-\frac r2, t_0+\frac r2]$ and $\lambda_k$ such that $\tau <
\lambda_k^{(n-2)/n} \tfrac r4$ we have
\begin{equation*}
\sup_{B(x_0,r/2)} w^k(\cdot, \lambda_k^{(n-2)/n}t- \tau) \leq C_1 \inf_{B(x_0,r/2)}w^k(\cdot, \lambda_k^{(n-2)/n}t).
\end{equation*}
This inequality together with Corollary~\ref{monotoniciy for v} yields:
\begin{equation*}
\frac{C_2r^2}{t- \lambda_k^{(2-n)/n}\tau} \leq \sup_{B(x_0,r/2)} v^{\lambda_k}(\cdot, t- \lambda_k^{(2-n)/n}\tau) \leq C_1 \inf_{B(x_0,r/2)}v^{\lambda_k}(\cdot, t)
\end{equation*}
for all $t \in [t_0 - \frac r2, t_0 + \frac r2]$, $\lambda_k \geq \lambda_0$ large enough, where
$C_2$ only depends on $n, M, \lambda_0$. Taking the limit when $\lambda_k \rightarrow \infty$, the
uniform convergence of $\{v^{\lambda_k}\}$ to $V$ gives $V>0$ in $B(x_0,\frac r2) \times [t_0-\frac
r2,t_0+\frac r2]$, which contradicts $(x_0,t_0) \in \Gamma^\infty \subset \Gamma(V)$.
We have proved that every point of $\Gamma^\infty$ belongs to all
$U_{\delta/2}(\Gamma^\lambda)$ for sufficiently large $\lambda$. Therefore the second
inclusion in \eqref{1st Theorem 4.2} follows from the compactness of $\Gamma^\infty$.
This concludes the proof of Theorem~\ref{convergence of rescaled viscosity solution} when
(\ref{monotonicity condition}) holds.
\textit{\textbf{Step 2}}. For general initial data, we will find upper and lower bounds for the initial
data for which (\ref{monotonicity condition}) holds, and use the comparison principle.
For instance, assume that $v_0 \in C(\mathbb{R}^n)$, $v_0 \geq 0$, such that $\operatorname{supp} v_0$ is bounded, $v_0 = 1$ on $K$.
Choose smooth bounded domains $\Omega_0^1, \Omega_0^2$ such that $K \subset \Omega_0^1
\subset \overline{\Omega_0^1} \subset \operatorname{supp} v_0 \subset \Omega_0^2$. Let $v_0^1, v_0^2$ be two
functions satisfying \eqref{initial data} with positive domains $\Omega_0^1,\Omega_0^2$,
respectively, and $v_0^1 \leq v_0 \leq v_0^2$. If necessary, that is, when $v_0$ is not
sufficiently regular at $\partial K$, we may perturb the boundary data for
$v_0^1$, $v_0^2$ on $K$ as $1 - \varepsilon$ and $1 + \varepsilon$, respectively, for some
$\varepsilon \in (0, 1)$.
Let $v_1, v_2$ be respectively the viscosity solutions of the Stefan problem (\ref{Stefan}) with
initial data $v_0^1,v_0^2$. By the comparison principle, we have $v_1 \leq
v \leq v_2$ and after rescaling $v_1^\lambda \leq v^\lambda \leq v_2^\lambda$. By Step 1, we see
that $v_1^\lambda \to V_{C_{*,1-\varepsilon}, L}$ and $v_2^\lambda \to V_{C_{*,1+\varepsilon}, L}$.
Since $C_{*,1\pm\varepsilon} \to C_*$ as $\varepsilon \to 0$ by \cite[Lemma~4.5]{QV}, we deduce the local uniform convergence of
$v^\lambda \to V = V_{C_*, L}$.
The convergence of the free boundaries follows from the ordering $\Omega(v_1) \subset \Omega(v) \subset
\Omega(v_2)$ and the convergence of free boundaries of $V_{C_{*,1\pm\varepsilon}, L}$ to
the free boundary of $V_{C_*,L}$ locally uniformly with respect to the Hausdorff distance.
\end{proof}
\subsection*{Acknowledgments}
The first author was partially supported by JSPS KAKENHI Grant No. 26800068 (Wakate B). This work
is a part of doctoral research of the second author. The second author would like to thank her
Ph.D.
supervisor Professor Seiro Omata for his valuable support and advice.
\begin{bibdiv}
\begin{biblist}
\bib{Baiocchi}{article}{
author={Baiocchi, Claudio},
title={Sur un probl\`eme \`a fronti\`ere libre traduisant le filtrage de
liquides \`a travers des milieux poreux},
language={French},
journal={C. R. Acad. Sci. Paris S\'er. A-B},
volume={273},
date={1971},
pages={A1215--A1217},
review={\MR{0297207}},
}
\bib{C}{article}{
author={Caffarelli, Luis A.},
title={The regularity of free boundaries in higher dimensions},
date={1977},
journal={Acta Math.},
pages={155\ndash 184},
number={139},
review={MR 56:12601}
}
\bib{CF}{article}{
author={Caffarelli, Luis A.},
author={Friedman, Avner},
title={Continuity of the temperature in the Stefan problem},
date={1979},
journal={Indiana Univ. Math. J.},
pages={53\ndash 70},
number={28},
review={MR 80i:35104},
}
\bib{CS}{article}{
author={Caffarelli, Luis A.},
author={Souganidis, Panagiotis E.},
title={Rates of convergence for the homogenization of fully nonlinear
uniformly elliptic pde in random media},
journal={Invent. Math.},
volume={180},
date={2010},
number={2},
pages={301--360},
issn={0020-9910},
review={\MR{2609244}},
doi={10.1007/s00222-009-0230-6},
}
\bib{CSW}{article}{
author={Caffarelli, Luis A.},
author={Souganidis, Panagiotis E.},
author={Wang, L.},
title={Homogenization of fully nonlinear, uniformly elliptic and
parabolic partial differential equations in stationary ergodic media},
journal={Comm. Pure Appl. Math.},
volume={58},
date={2005},
number={3},
pages={319--361},
issn={0010-3640},
review={\MR{2116617}},
doi={10.1002/cpa.20069},
}
\bib{Duvaut}{article}{
author={Duvaut, Georges},
title={R\'esolution d'un probl\`eme de Stefan (fusion d'un bloc de glace \`a
z\'ero degr\'e)},
language={French},
journal={C. R. Acad. Sci. Paris S\'er. A-B},
volume={276},
date={1973},
pages={A1461--A1463},
review={\MR{0328346}},
}
\bib{EJ}{article}{
author={Elliott, C. M.},
author={Janovsk\'y, V.},
title={A variational inequality approach to Hele-Shaw flow with a moving
boundary},
journal={Proc. Roy. Soc. Edinburgh Sect. A},
volume={88},
date={1981},
number={1-2},
pages={93--107},
issn={0308-2105},
review={\MR{611303}},
doi={10.1017/S0308210500017315},
}
\bib{Evans}{article}{
author={Evans, Lawrence C.},
title={The perturbed test function method for viscosity solutions of
nonlinear PDE},
journal={Proc. Roy. Soc. Edinburgh Sect. A},
volume={111},
date={1989},
number={3-4},
pages={359--375},
issn={0308-2105},
review={\MR{1007533}},
doi={10.1017/S0308210500018631},
}
\bib{FK}{article}{
author={Friedman, Avner},
author={Kinderlehrer, David},
title={A one phase Stefan problem},
journal={Indiana Univ. Math. J.},
volume={24},
date={1974/75},
number={11},
pages={1005--1035},
issn={0022-2518},
review={\MR{0385326}},
doi={10.1512/iumj.1975.24.24086},
}
\bib{FN}{article}{
author={Kinderlehrer, David},
author={Nirenberg, Louis},
title={The smoothness of the free boundary in the one phase Stefan
problem},
journal={Comm. Pure Appl. Math.},
volume={31},
date={1978},
number={3},
pages={257--282},
issn={0010-3640},
review={\MR{480348}},
doi={10.1002/cpa.3160310302},
}
\bib{K1}{article}{
author={Kim, Inwon C.},
title={Uniqueness and existence results on the Hele-Shaw and the Stefan
problems},
journal={Arch. Ration. Mech. Anal.},
volume={168},
date={2003},
number={4},
pages={299--328},
issn={0003-9527},
review={\MR{1994745}},
doi={10.1007/s00205-003-0251-z},
}
\bib{KimHomog}{article}{
author={Kim, Inwon C.},
title={Homogenization of the free boundary velocity},
journal={Arch. Ration. Mech. Anal.},
volume={185},
date={2007},
number={1},
pages={69--103},
issn={0003-9527},
review={\MR{2308859 (2008f:35019)}},
doi={10.1007/s00205-006-0035-3},
}
\bib{KimContact}{article}{
author={Kim, Inwon C.},
title={Homogenization of a model problem on contact angle dynamics},
journal={Comm. Partial Differential Equations},
volume={33},
date={2008},
number={7--9},
pages={1235--1271},
issn={0360-5302},
review={\MR{2450158 (2010e:35031)}},
doi={10.1080/03605300701518273},
}
\bib{KimContactRates}{article}{
author={Kim, Inwon C.},
title={Error estimates on homogenization of free boundary velocities in
periodic media},
journal={Ann. Inst. H. Poincar\'e Anal. Non Lin\'eaire},
volume={26},
date={2009},
number={3},
pages={999--1019},
issn={0294-1449},
review={\MR{2526413 (2010f:35020)}},
doi={10.1016/j.anihpc.2008.10.004},
}
\bib{K2}{article}{
author={Kim, Inwon C.},
author={Mellet, Antoine},
title={Homogenization of a Hele-Shaw problem in periodic and random
media},
journal={Arch. Ration. Mech. Anal.},
volume={194},
date={2009},
number={2},
pages={507--530},
issn={0003-9527},
review={\MR{2563637}},
doi={10.1007/s00205-008-0161-1},
}
\bib{K3}{article}{
author={Kim, Inwon C.},
author={Mellet, Antoine},
title={Homogenization of one-phase Stefan-type problems in periodic and
random media},
journal={Trans. Amer. Math. Soc.},
volume={362},
date={2010},
number={8},
pages={4161--4190},
issn={0002-9947},
review={\MR{2608400}},
doi={10.1090/S0002-9947-10-04945-7},
}
\bib{P1}{article}{
author={Po\v z\'ar, Norbert},
title={Long-time behavior of a Hele-Shaw type problem in random media},
journal={Interfaces Free Bound.},
volume={13},
date={2011},
number={3},
pages={373--395},
issn={1463-9963},
review={\MR{2846016}},
doi={10.4171/IFB/263},
}
\bib{Pozar15}{article}{
author={Po\v z\'ar, Norbert},
title={Homogenization of the Hele-Shaw problem in periodic spatiotemporal
media},
journal={Arch. Ration. Mech. Anal.},
volume={217},
date={2015},
number={1},
pages={155--230},
issn={0003-9527},
review={\MR{3338444}},
doi={10.1007/s00205-014-0831-0},
}
\bib{QV}{article}{
author={Quir\'os, Fernando},
author={V\'azquez, Juan Luis},
title={Asymptotic convergence of the Stefan problem to Hele-Shaw},
journal={Trans. Amer. Math. Soc.},
volume={353},
date={2001},
number={2},
pages={609--634},
issn={0002-9947},
review={\MR{1804510}},
doi={10.1090/S0002-9947-00-02739-2},
}
\bib{R3}{article}{
author={Rodrigues, Jos\'e-Francisco},
title={Free boundary convergence in the homogenization of the one-phase
Stefan problem},
journal={Trans. Amer. Math. Soc.},
volume={274},
date={1982},
number={1},
pages={297--305},
issn={0002-9947},
review={\MR{670933}},
doi={10.2307/1999510},
}
\bib{R1}{book}{
author={Rodrigues, Jos\'e-Francisco},
title={Obstacle problems in mathematical physics},
series={North-Holland Mathematics Studies},
volume={134},
note={Notas de Matem\'atica [Mathematical Notes], 114},
publisher={North-Holland Publishing Co., Amsterdam},
date={1987},
pages={xvi+352},
isbn={0-444-70187-7},
review={\MR{880369}},
}
\bib{RReview}{article}{
author={Rodrigues, Jos\'e-Francisco},
title={The Stefan problem revisited},
conference={
title={Mathematical models for phase change problems},
address={\'Obidos},
date={1988},
},
book={
series={Internat. Ser. Numer. Math.},
volume={88},
publisher={Birkh\"auser, Basel},
},
date={1989},
pages={129--190},
review={\MR{1038069}},
}
\bib{R2}{article}{
author={Rodrigues, Jos\'e-Francisco},
title={Variational methods in the Stefan problem},
conference={
title={Phase transitions and hysteresis},
address={Montecatini Terme},
date={1993},
},
book={
series={Lecture Notes in Math.},
volume={1584},
publisher={Springer, Berlin},
},
date={1994},
pages={147--212},
review={\MR{1321833}},
doi={10.1007/BFb0073397},
}
\bib{Souganidis}{article}{
author={Souganidis, Panagiotis E.},
title={Stochastic homogenization of Hamilton-Jacobi equations and some
applications},
journal={Asymptot. Anal.},
volume={20},
date={1999},
number={1},
pages={1--11},
issn={0921-7134},
review={\MR{1697831}},
}
\end{biblist}
\end{bibdiv}
\end{document}
\section{Introduction}
Although the Transformer architecture for deep learning was only recently introduced \cite{NIPS2017_3f5ee243}, it has had a profound impact on the development in Natural Language Processing (NLP) during the last couple of years. Starting with the seminal BERT model \cite{devlin-etal-2019-bert}, we have witnessed an unprecedented development of new model variations \cite{NEURIPS2019_dc6a7e65,Clark2020ELECTRA:,2020t5,radford2019language,brown2020language} with new State Of The Art (SOTA) results being produced in all types of NLP benchmarks \cite{wang-etal-2018-glue,NEURIPS2019_superglue,nie-etal-2020-adversarial}.
The leading models are large both with respect to the number of parameters and the size of the training data used to build the model; this correlation between size and performance has been demonstrated by \citet{kaplan2020scaling}. The ongoing scale race has culminated in the 175-billion parameter model GPT-3, which was trained on some 45TB of data summing to around 500 billion tokens \cite{brown2020language}.\footnote{The currently largest English model contains $1.6$ trillion parameters \citep{SwitchTransformer}.} Turning to the Scandinavian languages, there are no such truly large-scale models available. At the time of writing, there are around 300 Scandinavian models available in the Hugging Face Transformers model repository.\footnote{\url{huggingface.co/models}} Most of these are translation models, but there is already a significant number of monolingual models available in the Scandinavian languages.\footnote{At the time of submission, there are 17 monolingual Swedish models available.}
However, none of these Scandinavian language models are even close to the currently leading English models in parameter size or training data used. As such, we can expect that their relative performance in comparison with the leading English models is significantly worse. Furthermore, we can expect that the number of monolingual Scandinavian models will continue to grow at an exponential pace during the near future. The question is: do we need all these models? Or even: do we need {\em any} of these models? Can't we simply translate our data and tasks to English and use some suitable English SOTA model to solve the problem? This paper provides an empirical study of this idea.
\begin{table*}[t]
\begin{center}
\begin{tabular}{|l|rrrr|}
\hline
{\bf Language} & {\bf Vocab size} & {\bf Lexical richness} & {\bf Avg. word length} & {\bf Avg. sentence length} \\
\hline
Swedish & 31,478 & 0.07 & 4.39 & 14.75 \\
Norwegian & 26,168 & 0.06 & 4.21 & 14.10 \\
Danish & 42,358 & 0.06 & 4.17 & 19.55 \\
Finnish & 34,729 & 0.14 & 5.84 & 10.69 \\
\hline
English & 27,610 & 0.04 & 3.99 & 16.87 \\
\hline
\end{tabular}
\caption{The vocabulary size, Lexical richness, average word length and average sentence length for the Trustpilot sentiment data of each language.}
\label{tab:dataset_1}
\end{center}
\end{table*}
\section{Related work}
There is already a large, and rapidly growing, literature on the use of multilingual models \cite{conneau-etal-2020-unsupervised,xue2020mt5}, and on the possibility to achieve cross-lingual transfer in multilingual language models \cite{ruder2019survey,artetxe-etal-2020-cross,lauscher-etal-2020-zero,conneau-etal-2020-emerging,K2020Cross-Lingual,nooralahzadeh2020meta}. From this literature, we know among other things that multilingual models tend to be competitive in comparison with monolingual ones, and that especially languages with smaller amounts of training data available can benefit significantly from transfer effects from related languages with more training data available. This line of study focuses on the possibility to transfer {\em models} to a new language, and thereby facilitating the application of the model to data in the original language.
By contrast, our interest is to transfer the {\em data} to another language, thereby enabling the use of SOTA models to solve whatever task we are interested in. We are only aware of one previous study in this direction: \citet{IsMTRipe} performs cross-lingual machine translation using outdated methods, resulting in the claim that even if perfect translation were possible, we would still see degradation of performance. In this paper, we use modern machine translation methods, and demonstrate empirically that no degradation of performance is observable when using large SOTA models.
\begin{table*}[!ht]
\centering
\begin{tabular}{|l|r|r|}
\hline
{\bf Model name in Hugging Face} & {\bf Language} & {\bf Data size} \\
\hline
\texttt{KB/bert-base-swedish-cased} & sv & 3B tokens \\
\texttt{TurkuNLP/bert-base-finnish-cased-v1} & fi & 3B tokens \\
\texttt{ltgoslo/norbert} & no & 2B tokens \\
\texttt{DJSammy/bert-base-danish-uncased\_BotXO,ai} & da & 1.6B tokens \\
\hline
\texttt{bert-base-cased} & en & 3.3B tokens \\
\texttt{bert-base-cased-large} & en & 3.3B tokens \\
\hline
\texttt{xlm-roberta-large} & multi & 295B tokens \\
\hline
\end{tabular}
\caption{Models used in the experiments and the size of their corresponding training data. 'B' is short for billion.}
\label{tab:models}
\end{table*}
\begin{table*}[!htbp]
\begin{center}
\begin{tabular}{|l|r|r|r|r|r|r|}
\hline
\textbf{Model} & {\bf sv} & {\bf no} & {\bf da} & {\bf fi} & {\bf en} \\
\hline
BERT-sv & \underline{96.76} & 89.32 & 90.68 & 83.40 & 86.76 \\
BERT-no & 90.40 & \underline{95.00} & 92.52 & 83.16 & 78.52 \\
BERT-da & 86.24 & 89.16 & \underline{94.72} & 80.16 & 85.28 \\
BERT-fi & 90.24 & 86.36 & 87.72 & \underline{\textbf{95.72}} & 84.32 \\
\hline
BERT-en & 85.72 & 87.60 & 87.72 & 84.16 & 96.08 \\
BERT-en-Large & 91.16 & 91.88 & 92.40 & 89.56 & \underline{{\bf 97.00}} \\
\hline
\hline
\multicolumn{6}{|c|}{\textbf{Translated Into English}} \\
\hline
BERT-sv & 88.24 & 87.80 & 89.68 & 83.60 & - \\
BERT-no & 88.40 & 86.80 & 88.44 & 80.72 & - \\
BERT-da & 88.24 & 84.20 & 89.12 & 83.32 & - \\
BERT-fi & 90.04 & 90.08 & 89.36 & 86.04 & - \\
\hline
BERT-en & 95.76 & 95.48 & 95.96 & 92.96 & - \\
BERT-en-Large & {\bf 97.16} & {\bf 96.56} & {\bf 97.48} & 94.84 & - \\
\hline
\end{tabular}
\caption{Accuracy for monolingual models for the native sentiment data (upper part) and machine translated data (lower part). Underlined results are the best results per language in using the native data, while boldface marks the best results considering both native and machine translated data.}
\label{tab:original}
\end{center}
\end{table*}
\begin{table*}[!htbp]
\begin{center}
\begin{tabular}{|l|r|r|r|r|r|r|}
\hline
\textbf{Model} & {\bf sv} & {\bf no} & {\bf da} & {\bf fi} & {\bf en} \\
\hline
XLM-R-large & \textbf{97.48} & \textbf{97.16} & 97.68 & 95.60 & \textbf{97.76} \\
\hline
\multicolumn{6}{|c|}{\textbf{Translated Into English}} \\
\hline
XLM-R-large & 97.04 & 96.84 & \textbf{98.24} & 95.48 & - \\
\hline
\end{tabular}
\caption{Accuracy on the various sentiment datasets using XLM-R-Large}
\label{tab:xlm-r}
\end{center}
\end{table*}
\section{Data}
In order to be able to use comparable data in the languages under consideration (Swedish, Danish, Norwegian, and Finnish), we contribute a Scandinavian sentiment corpus (ScandiSent),\footnote{https://github.com/timpal0l/ScandiSent}
consisting of data downloaded from \url{trustpilot.com}. For each language, the corresponding subdomain was used to gather reviews with an associated text. This data covers a wide range of topics and is divided into 22 different categories, such as electronics, sports, travel, food, and health. The reviews are evenly distributed among all categories for each language.
All reviews have a corresponding rating in the range $1-5$.
The review ratings were polarised into binary labels, and the reviews which received a neutral rating were discarded. Ratings of 4 or 5 thus correspond to a positive label, and ratings of 1 or 2 correspond to a negative label.
To further improve the quality of the data, we apply fastText's language identification model \cite{joulin2016bag} to filter out any reviews containing incorrect language. This results in a balanced set of 10,000 texts for each language, with 7,500 samples for training and 2,500 for testing. Table \ref{tab:dataset_1} summarizes statistics for the various datasets of each respective language.
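For concreteness, the preprocessing just described can be sketched as follows (the model file \texttt{lid.176.bin} is the publicly distributed fastText language-identification model; the review container and the language code are illustrative assumptions, not part of the released corpus tooling):
\begin{verbatim}
# Sketch of the ScandiSent preprocessing: binary labels + language filtering.
import fasttext

LANG = "sv"                                # one of the Trustpilot subdomains
lid = fasttext.load_model("lid.176.bin")   # fastText language-ID model

def polarise(rating):
    # 4-5 stars -> positive (1), 1-2 stars -> negative (0), neutral -> dropped
    if rating in (4, 5):
        return 1
    if rating in (1, 2):
        return 0
    return None

def is_correct_language(text):
    labels, _ = lid.predict(text.replace("\n", " "))
    return labels[0] == "__label__" + LANG

def build_dataset(reviews):                # reviews: iterable of (text, rating)
    data = []
    for text, rating in reviews:
        label = polarise(rating)
        if label is not None and is_correct_language(text):
            data.append((text, label))
    return data
\end{verbatim}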
\subsection{Translation}
For all the Nordic languages we generate a corresponding English dataset by direct Machine Translation, using the Neural Machine Translation (NMT) model provided by Google.\footnote{https://cloud.google.com/translate/docs/advanced/translating-text-v3}
To isolate the effects of modern-day machine translation, the translation is performed once, prior to all experiments. This means that all translation is executed before any fine-tuning, and that the translation model is not updated during training.
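Purely as an illustration (the experiments use the v3 endpoint referenced in the footnote above; the snippet below uses the simpler v2 client and omits authentication and batching), a single review can be translated as:
\begin{verbatim}
# Hedged sketch: translating one review to English with the Google Cloud API.
from google.cloud import translate_v2 as translate

client = translate.Client()            # assumes credentials are configured
result = client.translate("En riktigt bra produkt!",
                          source_language="sv", target_language="en")
english_text = result["translatedText"]
\end{verbatim}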
\section{Models}
In order to fairly select a representative pre-trained model for each considered Scandinavian language, we opt for the most popular native model according to Hugging Face. For each considered language, this corresponds to a BERT-Base model, hence each language is represented by a Language Model of identical architecture. The difference between these models is therefore mainly in the quantity and type of texts used during training, in addition to potential differences in training hyperparameters.
We compare these Scandinavian models against the English BERT-Base and BERT-Large models by Google. English BERT-Base is thus identical in architecture to the Scandinavian models, while BERT-Large is twice as deep and contains more than three times the amount of parameters as BERT-Base. Finally, we include XLM-R-Large, in order to compare with a model trained on significantly larger (and multilingual) training corpora.
Table \ref{tab:models} lists both the Scandinavian and English models, together with the size of each model's corresponding training corpus.
\section{Experiments}
\subsection{Setup}
We fine-tune and evaluate each model on each of the different sentiment datasets, using the hyperparameters listed in Appendix \ref{tab:parameters}. From this we report the binary accuracy, with the results for the BERT models available in Table \ref{tab:original}, and the XLM-R results in Table \ref{tab:xlm-r}.
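As an illustration, a single fine-tuning run can be set up with the Hugging Face \texttt{transformers} library roughly as follows; the data files, sequence length and hyperparameter values shown here are placeholders, the values actually used being those of Appendix \ref{tab:parameters}:
\begin{verbatim}
# Sketch of one fine-tuning run (here: the Swedish BERT on ScandiSent-sv).
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)
from datasets import load_dataset

model_name = "KB/bert-base-swedish-cased"          # any model from Table 2
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name,
                                                           num_labels=2)

ds = load_dataset("csv", data_files={"train": "sv_train.csv",
                                     "test": "sv_test.csv"})
ds = ds.map(lambda b: tok(b["text"], truncation=True, max_length=128,
                          padding="max_length"), batched=True)

def accuracy(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": float((logits.argmax(-1) == labels).mean())}

args = TrainingArguments(output_dir="out", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
trainer = Trainer(model=model, args=args, train_dataset=ds["train"],
                  eval_dataset=ds["test"], compute_metrics=accuracy)
trainer.train()
print(trainer.evaluate())
\end{verbatim}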
\subsection{Monolingual Results}
The upper part of Table \ref{tab:original} shows the results using the original monolingual data.
From this we note a clear diagonal (marked by underline), where the native models perform best in their own respective language. BERT-Large significantly outperforms BERT-Base for all non-English datasets, and it also performs slightly better on the original English data.
Comparing these results with the amount of training data for each model (Table \ref{tab:models}), we see a correlation between performance and the amount of pre-training data. The Swedish, Finnish and English models have been trained on the most data, leading to slightly higher performance in their native languages. The Danish model, which has been trained on the least amount of data, performs the worst on its own native language.
For the cross-lingual evaluation, BERT-Large clearly outperforms all other non-native models. The Swedish model reaches higher performance on Norwegian and Finnish compared to the other non-native Scandinavian models. However, the Norwegian model performs best of the non-native models on the Danish data. Finally, we observe an interesting anomaly in the results on the English data, where the Norwegian model performs considerably worse than the other Scandinavian models.
\subsection{Translation Results}
The results for the machine translated data, available as the lower part of Table \ref{tab:original}, show that BERT-Large outperforms all native models on their native data, with the exception of Finnish. The English BERT-Base reaches higher performance on the machine translated data than the Norwegian and Danish models on their respective native data. The difference between English BERT-Base using the machine translated data, and the Swedish BERT using native data is about one percentage point.
As expected, all Scandinavian models perform significantly worse on their respective machine translated data. We find no clear trend among the Scandinavian models when evaluated on translated data from other languages. But we note that the Danish model performs better on the machine translated Swedish data than on the original Swedish data, and the Finnish model also improves its performance on the other translated data sets (except for Swedish). All models, with the exception of the Norwegian model (and, of course, the Finnish model itself), perform better on the machine translated Finnish data.
Finally, Table \ref{tab:xlm-r} shows the results from XLM-R-Large, which has been trained on data several orders of magnitude larger than the other models. XLM-R-Large achieves top scores on the sentiment data for all languages except for Finnish. We note that XLM-R produces slightly better results on the native data for Swedish, Norwegian and Finnish, while the best result for Danish is produced on the machine translated data.
\section{Discussion \& Conclusion}
Our experiments demonstrate that it is possible to reach better performance in a sentiment analysis task by translating the data into English and using a large pre-trained English language model, compared to using data in the original language and a smaller native language model. Whether this result holds for other tasks as well remains to be shown, but we see no theoretical reasons for why it would not hold. We also find a strong correlation between the quantity of pre-training data and downstream performance. We note that XLM-R in particular performs well, which may be due to data size, and potentially the ability of the model to take advantage of transfer effects between languages.
An interesting exception in our results is the Finnish data, which is the only task for which the native model performs best, despite XLM-R reportedly having been trained on more Finnish data than the native Finnish BERT model \cite{conneau-etal-2020-unsupervised}. One hypothesis for this behavior can be that the alleged transfer effects in XLM-R hold primarily for typologically similar languages, and that the performance on typologically unique languages, such as Finnish, may actually be negatively affected by the transfer. The relatively bad performance of BERT-Large on the translated Finnish data is likely due to insufficient quality of the machine translation.
The proposed approach is thus obviously dependent on the existence of a high-quality machine translation solution. The Scandinavian languages are typologically very similar both to each other and to English, which probably explains the good performance of the proposed approach even when using a generic translation API. For other languages, such as Finnish in our case, one would probably need to be more careful in selecting a suitable translation model. Whether the suggested methodology will be applicable to other language pairs thus depends on the quality of the translations and on the availability of large-scale language models in the target language.
Our results can be seen as evidence for the maturity of machine translation. Even using a generic translation API, we can leverage the existence of large-scale English language models to improve the performance in comparison with building a solution in the native language. This raises a serious counter-argument for the habitual practice in applied NLP to develop native solutions to practical problems. Hence, we conclude with the somewhat provocative claim that it might be unnecessary from an empirical standpoint to train models in languages where:
\begin{enumerate}
\item there exist high-quality machine translation models to English,
\item there is comparatively little training data available for building a native language model.
\end{enumerate}
In such cases, we may be better off relying on existing large-scale English models. This is a clear case for practical applications, where it would be beneficial to only host one large English model and translate all incoming requests from different languages.
\bibliographystyle{acl_natbib}
\section{Introduction}
AE Phoenicis (\object{AE Phe}, G0V, P$_{orb}$ = 0.362 d) and YY Eridani (\object{YY Eri}, G5V, P$_{orb}$ = 0.312 d) are late
type contact binaries (W~UMa stars) of W-subtype. Their components touch each other
inside a common convective envelope of constant entropy. According to the theory \citep{lucy} the primary (i.e.
the more massive) component should be slightly hotter (by a few hundred degrees) than the companion, but observations
show the opposite.
\citet{mullan} and \citet{rucinski} ascribed the possible origin of this discrepancy to
the presence of cool star spots on the primary.
Some of the evolutionary models predict shallower outer convective zones for the secondary, because of its
physical status (out of thermal equilibrium), in particular the
angular momentum loss via magnetic braking (AML) models, see e.g. \citet{vilhu82} and \citet{vm88}, and the Thermal
Relaxation Oscillation models (TRO), see e.g. \citet{webbink}.
(Note, by the way, that \citet{webbink} does not include the AML-models in his review.)
Shallower convection would mean a less magnetically active secondary, because for a fixed rotation rate the dynamo action
is stronger in a thicker convective zone \citep[see, e.g,][]{vilhu87}.
The observational confirmation of this prediction has remained mostly unexplored although Maceroni et al. (1994),
using both photometry and H$\alpha$ spectroscopy, found a primary slightly cooler than the secondary component and large
photometric dark spots on the primary surface. Dark spots are generally related to the magnetic (dynamo generated) activity.
More recently, \citet{barnes}, using high resolution Doppler imaging techniques, found the primary of AE Phe
spectroscopically cooler, and provided a further indication of unresolved dark spots.
In the present paper we re-analyse the H$\alpha$-observations of AE Phe and YY Eri performed by \citet[][hereafter paper
I]{maceroni}, together with similar unpublished observations collected in 1995. The motivation is a rejuvenated interest in
contact binaries, mostly due to the Doppler imaging techniques \citep{barnes}.
The observations are explained in Section 2 and the H$\alpha$ equivalent widths compared with model predictions and with
the saturation limit (due to the chromospheric emission from plages or from penumbral spot regions filling the photospheric
absorption). The results are discussed in Section 3 and the conclusions given in Section 4.
\begin{figure*}
\centering
\includegraphics[angle=0,width=17cm]{testi1.eps}
\caption{H$\alpha$ 6563$\AA$ dynamic spectra of AE Phe and YY Eri from the 1995 observations. The grey scale is linear,
the white corresponding to the continuum and the darkest colour (at phase 0.5) to 60 per cent of the continuum level.
}
\label{FigGrey}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[angle=0,width=17cm]{testi2.eps}
\caption{Mean H$\alpha$ equivalent widths for the primary and secondary components of AE Phe and YY Eri (from Table 1).
The values from theoretical NextGen-models \citep{hauschildt} are also shown (solid line) together with the saturation
limit found by \citet{herbst} (dotted line). }
\label{FigEW}
\end{figure*}
\section{Observations and H$\alpha$ equivalent widths}
The observations were performed with the CAT-telescope (Coude Auxiliary Telescope) of the European Southern Observatory
(ESO at La Silla, Chile) during November 20-25, 1989, November 17-23, 1990, and October 26-31, 1995. The Coude Echelle
Spectrometer (CES), with the short camera in the red and a resolution of 60 000 (5 km s$^{-1}$) at H$\alpha$ 6563 $\AA$,
was used. The exposure times were 15 and 20 minutes for AE Phe (m$_V$ = 7.9) and YY Eri (m$_V$ = 8.4), respectively.
This guaranteed orbital smearing of less than 0.05 in phase for both stars. Complementary photometric observations in B,
V, and I filters were obtained with the ESO 50 cm telescope. The sky conditions during the 1995 observations, however,
allowed us to obtain complete, but
low-quality, light curves for AE Phe only. The photometric observations were useful, at any rate,
to check the correct orbital phasing of the spectra.
A sample of line profiles for the years 1989 and 1990 were shown in \citet{maceroni} and are not repeated here for the sake
of brevity. In Fig. 1 the grey-scale dynamic light curves (phase vs. $\lambda$) for the year 1995 are shown. The grey-scale
is linear, the white corresponding to the continuum and the darkest colour (at phase 0.5) to 60 per cent of the continuum
level. The radial velocity curves of the broad H$\alpha$-absorption in both components are clearly seen, the curves with
larger amplitudes corresponding to the secondary (less massive) component.
The equivalent widths were measured at elongations, as mean values between phases 0.15 - 0.35 and 0.65 - 0.85. The
resulting equivalent widths are, however, relative to the total continuum. Since we are interested in the {\it intrinsic}
equivalent widths of the components, the measured values were further scaled with the component luminosities. If
$$L_p = L/a \qquad \mathrm{and} \qquad L_s = L(a-1)/a,$$
where $L$, $L_p$ and $L_s$ are the total, primary and secondary luminosities, respectively, then
$$a = 1 + q^{0.92}(T_s/T_p)^4,$$
which follows directly from the contact condition and the definition of effective temperatures T$_p$ and T$_s$
\citep[see e.g.][]{webbink}. Here q is the mass ratio M$_s$/M$_p$.
Using the photometric effective temperatures and mass ratios found by \citet{maceroni}, $(T_s, T_p, q)$ = (6140 K,
6000 K, 0.39) for AE Phe and (5590 K, 5390 K, 0.44) for YY Eri, we find $a$ = 1.46 and $a$ = 1.54 for AE Phe and YY Eri,
respectively. These values are very close to those of paper I from photometric solutions. The intrinsic equivalent widths
can be computed from the measured ones by
$$EW_p = a \times EW_p(\mathrm{measured}) \qquad \mathrm{and} \qquad EW_s = \frac{a}{a-1} \times EW_s(\mathrm{measured}).$$
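As a quick numerical check of these relations (a sketch only; the temperatures and mass ratio are those quoted above, and the measured mean widths are taken from Table \ref{table:1}):
\begin{verbatim}
# Intrinsic EWs of the AE Phe components from the measured values.
q, Ts, Tp = 0.39, 6140.0, 6000.0
a = 1.0 + q**0.92 * (Ts/Tp)**4        # luminosity ratio L/L_p, approx. 1.46

EWp_meas, EWs_meas = 1.15, 0.92       # measured means (total continuum)
EWp_intr = a * EWp_meas               # approx. 1.68 Angstrom
EWs_intr = a/(a - 1.0) * EWs_meas     # approx. 2.92 Angstrom
\end{verbatim}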
The equivalent widths are listed in Table 1 and their mean values plotted in Fig.2. We also used NextGen photospheric
models\footnote{The NextGen uses the model atmosphere PHOENIX code.
The code is available from http://dilbert.physast.uga.edu/yeti},
with solar abundances and gravity $\log(g) = 4.5$ \citep{hauschildt, short} and computed the H$\alpha$ equivalent widths
for several temperatures, as shown in Fig.2.
These theoretical models match quite well with the solar value marked in Fig.2 (EW = 3.2 $\AA$), as observed with the
CAT/CES telescope by exposing the twilight sky before the observations. The (absorption) equivalent widths of AE Phe and
YY Eri are clearly below the theoretical predictions.
An explanation (which we adopt here) for this deficiency is that chromospheric emission fills-in the photospheric
absorption, thus lowering the measured equivalent widths. \citet{herbst} have estimated this emission for a large sample of
K-M stars. They found an upper bound (the saturation limit) to the fraction of a star's bolometric luminosity that can appear
as H$\alpha$ emission: L$_{H\alpha}$/L$_{bol}$ = 10$^{-3.9}$. Using F$_{H\alpha}$/F$_{bol}$-values at different effective
temperatures, as computed from the NextGen models, this relation can be easily converted to the saturation line in
Fig.2.
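One way to make this conversion explicit (with $F_{c}$ denoting the model continuum flux density near 6563 $\AA$) is to express the saturated emission as an equivalent width,
$$EW_{em}(T_{\rm eff}) = 10^{-3.9}\, \frac{F_{bol}(T_{\rm eff})}{F_{c}(T_{\rm eff})},$$
which, subtracted from the model photospheric absorption, yields a saturation line of the kind plotted in Fig.2.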
\begin{table}
\caption{H$\alpha$ equivalent widths (EW, in units of {\AA}ngstr{\" o}ms) of the more massive (p, primary) and the secondary
(s) components of AE Phe and YY Eri. The values are average values from the observations of 1989, 1990 and 1995. The
observed values are direct measurements, with the total luminosity as continuum, over the orbital phases 0.15 - 0.35
(marked as 0.25) and 0.65 - 0.85 (0.75). The intrinsic values are scaled with the components' individual luminosities (see
text). These intrinsic values are shown in Fig.2. The errors include the differences from epoch to epoch.}
\label{table:1}
\centering
\begin{tabular}{c c c c c}
\hline\hline
comp &EW(0.25)& EW(0.75) & EW mean & EW intrinsic \\
\hline
AE Phe p & 1.25 $\pm{0.1}$ &1.05 $\pm{0.05}$ &1.15 $\pm{0.07}$ & 1.68 $\pm{0.10}$ \\
AE Phe s & 0.90 $\pm{0.07}$ &0.95 $\pm{0.07}$ &0.92 $\pm{0.07}$ & 2.92 $\pm{0.20}$\\
YY Eri p & 1.00 $\pm{0.10}$ &0.90 $\pm{0.07}$ &0.95 $\pm{0.10}$ &1.45 $\pm{0.15}$\\
YY Eri s & 0.70 $\pm{0.07}$& 0.75 $\pm{0.07}$ &0.74 $\pm{0.07}$ &2.00 $\pm{0.20}$\\
\hline
\end{tabular}
\end{table}
\section{Discussion}
Both AE Phe and YY Eri clearly have H$\alpha$-absorption equivalent widths smaller than that of the Sun, as well as smaller than
those predicted by NextGen models for normal main sequence stars of similar effective temperatures. This can be
interpreted
as being due to extra chromospheric H$\alpha$ emission that partly fills in the photospheric absorption. The average values
of both stars lie close to the saturation limit. This behaviour is similar to other chromospheric emission diagnostics
\citep[see e.g.][]{vilhu87}, giving additional support for this interpretation.
The components of YY Eri are not very different from each other, but in AE Phe the primary has clearly much more
H$\alpha$-emission than the secondary.
This is presumably due to a weaker dynamo-generated magnetic activity of the secondary. Since both components rotate with
the same rate and have almost the same spectral types (effective temperatures) they probably differ with respect to another
crucial parameter of dynamo theories, the thickness of the convective zone; the shallower this zone, the weaker dynamo
action \citep[see e.g.][]{vilhu87}. Theoretical contact binary models predict shallower convective zones for the
secondaries, as well, due to their thermal non-equilibrium condition (AML- or TRO-models, \citep[see e.g.][]{vilhu82,
webbink}).
The equivalent widths remained practically the same over all our observing runs, from 1989 to 1990 and 1995. In particular,
the 1989 and 1990 observations showed that the larger photometric spots are found on the primary star \citep{maceroni}, as
well as weaker H$\alpha$-absorption, compatible with the present results. \citet{barnes} interpreted their spectroscopic
observations (analysed by Doppler-imaging) by introducing unresolved dark spots on the primary. Since the appearance of
active chromospheres (plages) and cool spots correlate and are the results of the same physical phenomenon, our
interpretation sounds valid.
The phase 0.75 side of the AE Phe primary is chromospherically more active than the 0.25 side (see Table 1). This is
compatible with the larger spots found on this side during the first two observing runs by \citet{maceroni} (paper I).
\section{Conclusions}
We have shown that the contact binaries AE Phe and YY Eri have a weaker photospheric H$\alpha$ 6563 $\AA$ absorption than
normal slowly rotating main sequence stars of the same spectral type have. This can be interpreted as due to the enhanced
chromospheric emission in the rapidly rotating components of these contact binaries. This emission is close to the
saturation limit (see Fig.2) and the behaviour is similar to many other chromospheric diagnostics found earlier.
In AE Phe the primary (more massive component) is clearly more active in this respect (smaller absorption as compared with
the secondary or the saturation limit). This apparently results from the larger depth of its outer convective zone
supporting a stronger dynamo-action, compatible with some structural and evolutionary models of contact binaries:
AML-models, \citep[see e.g.][]{vilhu82, vm88} and TRO-models, \citep[see e.g.][for references]{webbink}.
\begin{acknowledgements}
We are grateful to the 6.3 magnitude earthquake during the 1995 observations,
whose only consequence was that we will remember that night forever.
CM acknowledges funding of this research project by MIUR/Cofin and F-INAF programs.
\end{acknowledgements}
\section{Introduction}
Time integration of ODEs is an inherently sequential process, since
each forward step ought to be based on the most recent information
available. Three conceivable options for achieving some level of \emph{parallel-in-time}
are (i) to have correction calculations follow the explicit forward
steps as closely behind as possible, letting them catch up frequently,
(ii) to carry out `preparatory' calculations that are based on trying
to anticipate later solution states, and (iii) to exploit extrapolation
ideas. While all of these concepts have been pursued for systems of
ODEs, as summarized in \cite{CMO10,KW14}, their performance is unclear
for ODE systems arising from method-of-lines (MOL) discretization of
wave-type PDEs. The additional requirement that arises
then is that the ODE solver's \emph{stability domain} must include
a quite large stretch of the imaginary axis surrounding the origin.
We show here that extrapolation-based ODE solvers of Gragg-Bulirsch-Stoer
(GBS) type can meet this requirement. In particular, one such scheme
that we will focus on steps forward explicitly using six cores as
fast as Forward Euler (FE) does on one core, but combines eighth order
of accuracy with a generously sized stability domain. In contrast
to linear multistep methods, it needs no back levels in time to get
started. The present approach is compared against explicit Runge-Kutta (RK) methods
for a PDE test problem.
Standard Richardson extrapolation schemes utilize a square Vandermonde-type system
to compute the extrapolation weights. This system is constructed
to cancel successive terms in the asymptotic error expansion of the time stepper.
By allowing more columns than rows in the system - that is, more extrapolation
components than order constraints - we create an underdetermined system
that grants degrees of freedom to optimize the extrapolated stability domain.
For wave-type PDEs stepped with method-of-lines we optimize the stability
domain along the imaginary axis. We achieve stability domains far larger
than those of both the square extrapolation systems and other standard ODE integrators,
thereby enabling large time step sizes and thus faster time to solution.
\section{GBS-type ODE solvers}
\subsection{GBS concept}
We consider first the problem of advancing forward in time an ODE of
the form $y'(t)=f(t,y)$, where the unknown function $y(t)$ is either
scalar or vector valued. The complete time interval of interest is
split into $N$ sections. For each of these sections, the
basic (unextrapolated) GBS scheme consists of the steps
\begin{equation}
\left\{
\bgroup
\renewcommand*{\arraystretch}{2.2}
\begin{array}{ll}
{\displaystyle \frac{y_{1}-y_{0}}{h}}=f(t_{0},y_{0}) & \textrm{Forward Euler (FE)}\\
{\displaystyle \frac{y_{n+1}-y_{n-1}}{2h_{\phantom{}}}}=f(t_{n},y_{n}) & \textrm{Leap-frog (LF), }n=1,2,\ldots,N\\
y_{N}^{*}=\dfrac{1}{4}(y_{N-1}+2y_{N}+y_{N+1}) & \textrm{Averaging},
\end{array}
\egroup
\right.
\label{eq:GBS}
\end{equation}
after which $y_{N}^{*}$ is accepted as the new value at time $t_{N}.$
The initial FE step is accurate to first order, while the subsequent
LF steps are second order accurate. One would therefore expect $y_{N}^{*}$
to be accurate to at most second order, and have an error expansion
in which all further powers of $h$ would be present. Remarkably,
for any smooth (linear or nonlinear) function $f(t,y)$, it transpires
that, if $N$ is even, all odd powers in the expansion will vanish
\cite{BS66,G65,HNW87,L91}:
\begin{align}
\text{Error}=y_{N}^{*}-y(t_{N})=a_{2}h^{2}+a_{4}h^{4}+a_{6}h^{6}+\ldots
\end{align}
The form of the expansion makes Richardson extrapolation
particularly efficient since, each time this is applied, the result
will gain two orders of accuracy. For example, the results from four
completely independent calculations over the same section in time,
using different $h$-values, can be combined to give an $\mathcal{O}(h^{8})$-accurate
result. These four calculations require no communications between
each other, and can therefore be run simultaneously on separate cores.
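A minimal serial sketch of the basic step (\ref{eq:GBS}) and this four-component extrapolation is given below (Python; the sub-step counts 2, 4, 6 and 8 are one illustrative choice of four even values, and in an actual implementation the four base solves would run on separate cores):
\begin{verbatim}
import numpy as np

def gbs(f, t0, y0, H, N):
    # One macro-step of length H: FE start, N leap-frog sub-steps of size
    # h = H/N, and the final smoothing average, as in the scheme above.
    h = H / N
    ys = [y0, y0 + h * f(t0, y0)]                  # y_0 and y_1
    for n in range(1, N + 1):                      # leap-frog up to y_{N+1}
        ys.append(ys[-2] + 2.0 * h * f(t0 + n * h, ys[-1]))
    return 0.25 * (ys[N - 1] + 2.0 * ys[N] + ys[N + 1])

def gbs8(f, t0, y0, H, subs=(2, 4, 6, 8)):
    # Combine four independent GBS solves into an O(H^8) result: solve the
    # 4x4 Vandermonde system in h^2 so that the weights sum to one and the
    # h^2, h^4, h^6 error terms cancel.
    h2 = np.array([(H / N) ** 2 for N in subs])
    V = np.vander(h2, increasing=True).T
    rhs = np.zeros(len(subs)); rhs[0] = 1.0
    w = np.linalg.solve(V, rhs)
    return sum(wi * gbs(f, t0, y0, H, N) for wi, N in zip(w, subs))

# Example: y' = y over one macro-step; the result is close to exp(0.5).
print(gbs8(lambda t, y: y, 0.0, 1.0, 0.5))
\end{verbatim}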
Even when not counting the cost of the work on the extra cores, the GBS approach does not
offer any striking benefits for standard ODE systems, unless possibly
if extrapolated to very high orders. However, in the present context
of wave-type PDEs, the situation becomes different, since GBS methods
can be designed to feature particularly favorable stability domains.
\subsection{Stability domains for GBS-type methods}
\ref{sec:appendix_a} briefly summarizes the definition of an ODE solver's
stability domain, explains its significance in the context of
MOL time stepping, and provides stability domain information for some
well-known explicit ODE solvers. These domains should be contrasted
to the corresponding ones for GBS methods described below. The imaginary
stability boundary (ISB) of an ODE solver is defined as the largest
value such that the segment of the imaginary axis from $-i\cdot \text{ISB}$
to $+i\cdot \text{ISB}$ is contained in its stability domain. For solvers that lack any imaginary axis coverage,
we define their ISB to be zero.
In order to provide a fair comparison between different methods, we
will from now on further normalize all ISB values by the number of
function evaluations that each step requires and denote this ISB$_{\text{n}}$.
For example, we divide
$\textrm{R}\textrm{K}_{4}$'s ISB, stated in \ref{sec:appendix_a} as 2.8284,
by four to compensate for its four stages (with one function evaluation
in each), i.e. we list its ISB$_{\text{n}}$
as 0.7071 ($=1/\sqrt{2}$). Similarly, the ISB$_{\text{n}}$
for the 13-stage $\textrm{R}\textrm{K}_{8}$ method becomes 0.2848.
With this normalization, the largest feasible ISB$_{\text{n}}$
for any explicit method becomes one \cite{JN81}, which is realized
by the LF scheme. Since the longest distance a solution can be advanced
per function evaluation is proportional to the time stepping method's
ISB$_{\text{n}}$, a key goal will be to design a method that has both
high order and a large ISB$_{\text{n}}$.
Stability domains and ISBs for GBS-type schemes do not appear to have been
studied before \cite{FZL07}. One key observation made there was that GBS schemes of
orders 4, 8, 12, ... will feature positive ISBs, whereas schemes of order 6,
10, 14, ... will not. Hence, in what follows we study only schemes with order
divisible by four.
\section{Optimizing the Stability Domain}
\subsection{Introduction to ISB Optimization}
Stability domain optimization has been well studied in the literature.
The class of steppers that maximizes ISB given stability polynomial order $N+1$ was found
independently by Kinnmark and Gray \cite{KG84a} and Sonneveld and van Leer \cite{SL85}.
The methods divide a time interval into $N$ evenly spaced steps. A Forward Euler predictor
and Backward Euler corrector pair initiates the time step, then $N-1$ leap frogs
bring us to the end of the time interval. This class of methods has order of accuracy at most
two, and achieves an ISB$_{\text{n}}=N/(N+1)$.
Kinnmark and Gray demonstrate third
and fourth order accurate stability polynomials in \cite{KG84b} that converge
to the optimal ISB as the number of subintervals increases. Interestingly, the first
two methods of this class are the third order and fourth order explicit Runge-Kutta methods
with three and four stages, respectively. Thus RK$_4$ is optimal in the sense that it
fully utilizes its four function evaluations to maximize time step for wave-type problems.
It is therefore an excellent candidate for comparison with the optimized schemes that follow.
\subsection{GBS Stability Domain Optimization}
In Richardson extrapolation schemes one sets up a square Vandermonde
system to compute the weights guaranteeing a specified order of accuracy.
If we allow the number of components in the extrapolation scheme to
increase beyond those necessary for maintaining order of accuracy
we obtain an underdetermined system with degrees of freedom. We
utilize these degrees of freedom to optimize the stability domain
along a contour in the complex plane.
Extrapolation allows us to eliminate successively higher order terms
in the asymptotic error expansion of our solution. To do so, for each of the $m$
integrators we divide the time interval $H$ into $n_{i}$ steps of size $h_i=H/n_i$, $i=1,2,\hdots,m$. We then
construct a linear system to eliminate terms through order $p-1$ in the error expansion,
yielding a $p$th-order accurate solution.
In the case of GBS integrators, the odd coefficients in the asymptotic expansion
are zero. Thus we may drop the constraint equations for odd powers
of $h_i$, obtaining a system of $\frac{p}{2}$ equations:
\begin{align}
\underbrace{\addstackgap[16pt]{
\begin{bmatrix}
1 & 1 & \hdots & 1 \\
h_1^2 & h_2^2 & \hdots & h_m^2 \\
\vdots & \vdots & \ddots & \vdots \\
h_1^{p-2} & h_2^{p-2} & \hdots & h_m^{p-2}
\end{bmatrix}_{\frac{p}{2}\times m}
}}_{V}
\hspace{.5ex}
\underbrace{\addstackgap[5pt]{
\begin{bmatrix}[1]
c_1 \\ c_2 \\ \\ \vdots \\ \\ c_m
\end{bmatrix}_m
}}_{c}
=
\underbrace{\addstackgap[16pt]{
\begin{bmatrix}[1]
1 \\
0 \\
\vdots \\
0
\end{bmatrix}_{\frac{p}{2}}.
}}_{b}
\label{eq:vander}
\end{align}
When $m=\frac{p}{2}$ we have a square matrix which corresponds to
the usual Richardson extrapolation schemes. The matrix is invertible
when $n_{i}\ne n_{j},i\ne j$, and so we solve for the weight vector $c$,
which we apply to the individual integrated solutions to form the combined
solution at the end of the time interval.
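For illustration, the fully determined case can be prototyped in a few lines of Python
(illustrative sketch; the step counts used in the example are an arbitrary choice, not the
optimized sequences of the next subsection):
\begin{verbatim}
import numpy as np

def extrapolation_weights(step_counts, H=1.0):
    """Solve the square Vandermonde system for the extrapolation weights c.

    step_counts : m distinct (even) subinterval counts n_i; h_i = H/n_i.
    The rows enforce sum_i c_i = 1 and sum_i c_i h_i^(2k) = 0, k = 1..m-1,
    so the combined solution is accurate to order p = 2m.
    """
    h = H / np.asarray(step_counts, dtype=float)
    m = len(h)
    V = np.vander(h**2, N=m, increasing=True).T    # rows: (h_i^2)^k, k = 0..m-1
    b = np.zeros(m); b[0] = 1.0
    return np.linalg.solve(V, b)

if __name__ == "__main__":
    c = extrapolation_weights([2, 4, 6, 8])        # an m = 4 (eighth order) example
    print(c, c.sum())                              # the weights sum to one
\end{verbatim}
For the step count sequences listed below, this reproduces, in floating point, the
rational weights quoted in the following subsections.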
By allowing $m>\frac{p}{2}$ the system becomes underdetermined and we may
enforce order constraints while optimizing selected features of the stability
domain. The optimization algorithm was adapted from the polynomial optimization formulation
in \cite{KA12}; details are provided in \ref{sec:appendix_b}.
\subsection{Fully-Determined Optimization Results}
We first investigate optimal step count selection for fully determined
extrapolation schemes. For schemes of order $p$ we test each combination of $\frac{p}{2}$ step counts
up to a set maximum, here chosen to be 24. Each combination yields a set of uniquely determined
extrapolation weights. We then select the combination of step counts
that maximizes ISB$_{\text{n}}$ of the extrapolated stability domain. Tab.~\ref{tab:isb_table_fulldet} contains the tabulated
results for orders eight, twelve and sixteen. The schemes all have generous imaginary
axis coverage and can be implemented efficiently on three, four and five cores respectively.
\begin{table}[H]
\caption{Optimal step count sequences and ISB$_{\text{n}}$ for the fully-determined schemes}
\begin{tabular}{|c|c|c|c|}
\hline
Order & Cores & Step Counts & ISB$_{\text{n}}$ \\
\hline
8 & 3 & 2,16,18,20 & 0.5799 \\
12 & 4 & 2,8,12,14,16,20 & 0.4515 \\
16 & 5 & 2,8,10,12,14,16,18,22 & 0.4162 \\
\hline
\end{tabular}
\label{tab:isb_table_fulldet}
\end{table}
\subsubsection{Eighth Order}\label{sec:discrete_8}
The eighth order, three-core scheme has ISB$_{\text{n}} = 0.5799$ with the following step counts and
uniquely determined weights:
\begin{equation*}
\begin{aligned}
\begin{matrix}[1.5] \text{Step Counts } \\ \text{Weights } \end{matrix} &
\begin{matrix}[1.5] : \\ : \end{matrix}
\hspace{1ex}
\begin{matrix}[1.5]
2, & 16, & 18, & 20 \\
-\frac{1}{498960}, & \frac{65536}{9639}, & -\frac{531441}{25840}, & \frac{250000}{16929}.
\end{matrix}
\end{aligned}
\end{equation*}
\subsubsection{Twelfth Order}
The twelfth order, four-core scheme has ISB$_{\text{n}} = 0.4515$ with the following step counts and weights:
\begin{equation*}
\begin{aligned}
\begin{matrix}[1.5] \text{Step Counts } \\ \text{Weights } \end{matrix} &
\begin{matrix}[1.5] : \\ : \end{matrix}
\hspace{1ex}
\begin{matrix}[1.5]
2, & 8, & 12, & 14, & 16, & 20 \\
-\frac{1}{157172400}, & \frac{4096}{155925}, & -\frac{59049}{15925}, & \frac{282475249}{15752880}, & -\frac{4194304}{178605}, & \frac{9765625}{954261}.
\end{matrix}
\end{aligned}
\end{equation*}
\subsubsection{Sixteenth Order}
The sixteenth order, five-core scheme has ISB$_{\text{n}}=0.4162$ and utilizes the step count
sequence $\{2,8,10,12,14,16,18,22\}$. Extrapolation weights can be computed by solving
the corresponding Vandermonde system (\ref{eq:vander}). We omit them here since the weights are ratios of
large integers in both the numerators and denominators.
\subsection{Underdetermined Optimization Results}\label{sec:optresults}
Using the optimization methodology described in \ref{sec:appendix_b} we optimize the ISB of GBS-type methods up to order
sixteen. Increasing the number of extrapolation components leads to an
increase in ISB$_{\text{n}}$. Since for explicit schemes
the maximum ISB$_{\text{n}}$ is one we expect the relative gains of adding more components to eventually saturate.
By evenly distributing work across CPU cores we can demonstrate the relationship
between available processors and maximal time step size.
This correspondence between core count and ISB$_{\text{n}}$ is shown in Fig.~\ref{fig:isb_vs_cores}.
Here we observe that efficiency saturation occurs around ten cores for all orders of accuracy;
the saturation value itself is strongly dependent on the order of accuracy.
\begin{figure}[H]
\includegraphics[width=\linewidth]{\plotfigurepath/isb_vs_cores}
\caption{Trends for optimized ISB$_{\text{n}}$ versus core count for methods of various order}
\label{fig:isb_vs_cores}
\end{figure}
We achieve the best optimization results by utilizing all (even) step counts up to a maximum
dependent on the number of available CPU cores. The $m$ extrapolation
components then have subinterval counts $\{2, 4, \hdots, 2m\}$, with $m$ set by the available processing resources.
We denote by $N_{\text{opt}}$ the number of subintervals beyond which ISB$_{\text{n}}$ can no longer be increased, and
aggregate the results for each order of accuracy in Tab.~\ref{tab:isb_table}.
The stability domains for each core count are plotted
in Fig.~\ref{fig:collected_stabdomains}.
Results demonstrate a tradeoff between optimal ISB and order of accuracy as is typical
of explicit time integrators.
\begin{table}[H]
\caption{Largest optimized ISB$_{\text{n}}$ obtained for the underdetermined schemes}
\begin{tabular}{|c|c|c|c|}
\hline
Order & Cores & $N_{\text{opt}}$ & ISB$_{\text{n}}$ \\
\hline
4 & 8 & 28 & 0.9477 \\
8 & 11 & 40 & 0.8551 \\
12 & 10 & 36 & 0.7504 \\
16 & 9 & 32 & 0.6075 \\
\hline
\end{tabular}
\label{tab:isb_table}
\end{table}
The capping of $N_{\text{opt}}$ is an artifact of the optimizer. Our convex solver
fails to produce methods with larger ISB if we increase the number of free variables
beyond those presented in Tab.~\ref{tab:isb_table}. We believe that by addressing the conditioning
of the Vandermonde system as in \cite{KA12} one can continue further along the curves presented in
Fig.~\ref{fig:isb_vs_cores}. Extrapolating these curves shows the schemes do not converge
to the optimal ISB$_{\text{n}}=1$; the exact tradeoff between order of accuracy and optimal
ISB is a topic of future research.
\begin{figure}[H]
\parbox{.495\linewidth}{
\includegraphics[width=\linewidth]{\stabfigurepath/stabdomains_p_4}
}
\parbox{.495\linewidth}{
\includegraphics[width=\linewidth]{\stabfigurepath/stabdomains_p_8}
}\\
\parbox{.495\linewidth}{
\includegraphics[width=\linewidth]{\stabfigurepath/stabdomains_p_12}
}
\parbox{.495\linewidth}{
\includegraphics[width=\linewidth]{\stabfigurepath/stabdomains_p_16}
}
\caption{Optimized stability domains labeled with number of cores required to implement the schemes,
with fourth and eighth order Runge-Kutta (RK) stability domains for comparison}
\label{fig:collected_stabdomains}
\end{figure}
\subsection{Leading Order Error}
Let the time integrator's stability polynomial, as defined in \ref{sec:appendix_a}, be denoted $R(\xi)$.
For a method of order $p$, the boundary of the stability domain follows the imaginary axis
near the origin, deviating from it only at order $p+1$. To compute
the leading error coefficient we set the stability polynomial $R(\xi) = e^{i\theta}$. Taking the
complex logarithm of the polynomial and Taylor expanding yields a power series for $\theta(\xi)$.
We then compute the inverse series to find
$\xi(\theta) = i\theta+a_{p+1}(i\theta)^{p+1}+a_{p+2}(i\theta)^{p+2}+\mathcal{O}\left(\theta^{p+3}\right)$.
Since we consider only methods with order divisible by four we simplify as follows:
\begin{align}
\xi(\theta) = i\theta + i a_{p+1} \theta^{p+1} - a_{p+2} \theta^{p+2} + \mathcal{O}\left(\theta^{p+3}\right).
\end{align}
Departure from the imaginary axis is governed by the $a_{p+2}$ coefficient. We then require $a_{p+2}$ to be
negative to ensure the stability domain has a positive ISB. Accuracy is determined by
the $a_{p+1}$ coefficient; Fig.~\ref{fig:leading_order_error} presents this coefficient
for each method as a function of number of cores.
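A simple numerical cross-check of this coefficient is possible: since $R(\xi(\theta))=e^{i\theta}$
with $\xi(\theta)=i\theta+a_{p+1}(i\theta)^{p+1}+\ldots$, one has
$R(i\theta)-e^{i\theta}\approx -a_{p+1}(i\theta)^{p+1}$ for small $\theta$, so $|a_{p+1}|$ can be
estimated from the ratio $|R(i\theta)-e^{i\theta}|/\theta^{p+1}$. A minimal Python sketch
(illustrative, shown here for the classical RK$_4$ polynomial) reads:
\begin{verbatim}
import numpy as np

def leading_error_estimate(coeffs, p, theta=1e-2):
    """Estimate |a_{p+1}| from |R(i*theta) - exp(i*theta)| / theta^(p+1).

    coeffs : stability polynomial coefficients, highest degree first (np.polyval).
    theta must be small, yet large enough that the difference exceeds roundoff.
    """
    xi = 1j * theta
    return abs(np.polyval(coeffs, xi) - np.exp(xi)) / theta ** (p + 1)

rk4 = [1/24, 1/6, 1/2, 1, 1]           # R(xi) = 1 + xi + xi^2/2 + xi^3/6 + xi^4/24
print(leading_error_estimate(rk4, 4))  # close to 1/120 for classical RK4
\end{verbatim}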
\begin{figure}[H]
\includegraphics[width=\linewidth]{\plotfigurepath/leading_error}
\caption{Leading order error term $a_{p+1}$ for the optimized methods}
\label{fig:leading_order_error}
\end{figure}
The leading order error coefficient decays toward zero as the number of cores
increases at fixed order. This implies that the underdetermined extrapolation
schemes do not trade away numerical precision to achieve large ISBs; they instead
gain significantly in accuracy.
\subsection{Core Partitioning}\label{sec:corepartition}
In order to achieve the theoretical efficiencies presented in Sect.~\ref{sec:optresults} we require
a work partitioning scheme that distributes the individual time steppers amongst the cores.
For a specified maximum subinterval count $N_{\text{max}}$ we achieve the largest ISB$_{\text{n}}$ by utilizing all even step counts up
to $N_{\text{max}}$. For a fixed number of cores, here denoted $n_{\text{cores}}$, we can always evenly distribute
the work when we choose $N_{\text{max}}=4\cdot n_{\text{cores}}-2$. This corresponds to folding together onto a single core
pairs of integrators with step counts adding to $N_{\text{max}}$. For example, with $N_{\text{max}}=10$,
we evenly load three cores with step counts \{10, \{8, 2\}, \{6, 4\}\}.
The GBS scheme requires $N+1$ function evaluations for an integrator with $N$ subintervals.
When stacking multiple integrators on a single core we share the first function evaluation
since it takes identical arguments for \emph{all} time steppers. We can in principle share this evaluation
among all cores but communication overhead may make this approach less efficient.
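A sketch of this partitioning (illustrative Python, our own naming) is given below; each core
receives either the single integrator with $N_{\text{max}}$ subintervals or a pair of integrators
whose step counts add up to $N_{\text{max}}$, so that every core performs $N_{\text{max}}+1$
function evaluations per macro-step once the first evaluation is shared.
\begin{verbatim}
def partition_cores(n_cores):
    """Group step counts {2, 4, ..., N_max}, N_max = 4*n_cores - 2, onto the cores."""
    n_max = 4 * n_cores - 2
    groups = [[n_max]]
    lo, hi = 2, n_max - 2
    while lo < hi:
        groups.append([hi, lo])
        lo += 2
        hi -= 2
    return groups

print(partition_cores(3))   # [[10], [8, 2], [6, 4]], as in the example above
print(partition_cores(6))   # six groups covering the step counts 2, 4, ..., 22
\end{verbatim}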
\subsection{Method Specification}\label{sec:notation}
Let the $\frac{p}{2}\times m$-sized Vandermonde matrix in (\ref{eq:vander}) be denoted $V$.
When we have $m>\frac{p}{2}$ the system is underdetermined. We
then choose $\frac{p}{2}$ dependent step counts to meet the order constraints
and collect these into the step count sequence $\{n_{\text{dep}}\}$. The associated extrapolation
weight vector is denoted $c_{\text{dep}}$ and the corresponding columns of $V$ denoted $V_{\text{dep}}$.
Likewise collect the remaining step counts into the $\left(m-\frac{p}{2}\right)$-length step count sequence $\{n_{\text{free}}\}$
and label corresponding extrapolation weights $c_{\text{free}}$ and Vandermonde columns $V_{\text{free}}$.
Then the constraint equations may be written
\begin{align}
V_{\text{dep}}c_{\text{dep}}+V_{\text{free}}c_{\text{free}} = b.
\label{eq:order_constraints}
\end{align}
The Vandermonde system order constraints must be met exactly.
We thus omit floating-point coefficients for $c_{\text{dep}}$ in the text; they are best computed
symbolically then converted to the desired floating point format. We may readily compute $c_{\text{dep}}$
with the relation
\begin{align}
c_{\text{dep}} = V_{\text{dep}}^{-1}\left(b-V_{\text{free}}c_{\text{free}}\right),
\label{eq:cdep_computation}
\end{align}
so one only needs the step count sequences $\{n_{\text{dep}}\}$ and $\{n_{\text{free}}\}$ and weight vector
$c_{\text{free}}$ to fully specify a scheme.
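The relation (\ref{eq:cdep_computation}) is easily evaluated; a minimal Python sketch
(illustrative and in floating point, whereas for production use the computation should be
carried out symbolically as noted above) is:
\begin{verbatim}
import numpy as np

def dependent_weights(n_dep, n_free, c_free, p, H=1.0):
    """c_dep = V_dep^{-1} (b - V_free c_free), cf. (eq:cdep_computation)."""
    def vcols(ns):                                   # columns of V for given step counts
        h2 = (H / np.asarray(ns, dtype=float)) ** 2
        return np.vstack([h2 ** k for k in range(p // 2)])
    b = np.zeros(p // 2); b[0] = 1.0
    V_dep, V_free = vcols(n_dep), vcols(n_free)
    return np.linalg.solve(V_dep, b - V_free @ np.asarray(c_free, dtype=float))
\end{verbatim}
Feeding in the step count sequences and free weights listed for the schemes below returns the
corresponding dependent weights.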
\subsection{Methods of Choice}
In this section we present two eighth order methods and one twelfth order method
with rational coefficients for convenient use. In order to provide
robustness to spurious discretized eigenvalues sitting slightly in
the right-half plane we push the optimization curve into the positive
reals. This disturbs the ISB$_{\text{n}}$ very little and makes the method
suitable for local differentiation stencils generated for example by
RBF-FD (radial basis function-generated finite difference) approximations
\cite{FFBook}.
To generate the following schemes we first optimize the free coefficients using
the optimization methodology described in \ref{sec:appendix_b}. The optimization contour is
chosen to trade off a small amount of imaginary axis coverage for an area containing
the positive reals away from the origin. We then perform a search
over a set of rational numbers that closely approximate the floating point free
coefficients. We select a set with small integers in the numerator and denominator
which disturbs the scheme's stability domain very little. These coefficients are
reported below.
\subsubsection{The Eighth Order, Six Core Method: GBS$_{8,6}$}
The following eighth order method achieves a robust stability domain for wave-type PDEs
on six cores and is therefore dubbed GBS$_{8,6}$.
The scheme's ISB$_{\text{n}}$ is 0.7675, a 0.26\% reduction from the optimal six core value of 0.7695.
The scheme can be implemented using the following step counts and extrapolation weights:
\begin{equation*}
\begin{aligned}
\{n_{\text{dep}}\} &= \hspace{.3ex} \left\{\begin{matrix} 2, & 4, & 6, & 10 \end{matrix}\right\} \\
\begin{matrix}[1.5] \{n_{\text{free}}\} \\ \vphantom{\bigg[}c_{\text{free}} \end{matrix} &
\begin{matrix}[1.5] = \\ \vphantom{\bigg[}= \end{matrix}
\hspace{1ex}
\begin{matrix}[1.5]
\{ & 8, & 12, & 14, & 16, & 18, & 20, & 22 &\}\hphantom{^T.} \\
\bigg[ & \frac{2165}{767488}, & \frac{13805}{611712}, & \frac{4553}{72080}, & \frac{14503}{66520}, & \frac{27058}{7627}, & -\frac{86504}{5761}, & \frac{40916}{3367} &\bigg]^T.
\end{matrix}
\end{aligned}
\end{equation*}
The dependent weights can be computed
exactly by inverting the corresponding Vandermonde system.
The scheme's stability domain is plotted in Fig.~\ref{fig:stabdomain_p8_6core}, with a
zoom-in around the imaginary axis on the right-hand side.
This method can be run efficiently on six cores with time steps 6.25 times larger
than those of RK$_4$. After normalizing by the number of function evaluations, the scheme
is faster than RK$_4$ by a factor $0.7675/0.7071 = 1.085$ (8.5\%), but with eighth order of accuracy.
Compared to RK$_8$, the speed-up is a factor of $0.7675/0.2848 = 2.69$.
As will be seen in Sect.~\ref{sec:waveresults}, speed-up to achieve a specified accuracy
is far improved over RK$_4$ due to the size of the leading order error term combined
with eighth order convergence to the true solution.
\begin{figure}[H]
\includegraphics[width=\linewidth]{\stabfigurepath/stabdomain_p8_6core_rat}
\caption{Stability domain of the eighth order, six core scheme (left), and zoom (right)}
\label{fig:stabdomain_p8_6core}
\end{figure}
\subsubsection{The Eighth Order, Eight Core Method: GBS$_{8,8}$}
As for the six core method, we produce a robust stability domain for an eighth order
scheme that runs efficiently on eight cores, called GBS$_{8,8}$.
The scheme's ISB$_{\text{n}}$ is 0.8176, a 0.24\% reduction from the optimal eight core value, 0.8196.
The scheme utilizes the following step counts and extrapolation weights:
\begin{equation*}
\begin{aligned}
\{n_{\text{dep}}\} &= \hspace{.3ex} \left\{\begin{matrix} 2, & 26, & 28, & 30 \end{matrix}\right\} \\
\{n_{\text{free}}\} &= \hspace{.3ex} \left\{\begin{matrix} 4, & 6, & 8, & 10, & 12, & 14, & 16, & 18, & 20, & 22 \end{matrix}\right\} \\
c_{\text{free}} &=
\begin{matrix}[1.5]
\bigg[ & \frac{6833}{476577792}, & \frac{10847}{91078656}, & \frac{15235}{34643968}, & \frac{383}{321152}, & \frac{543}{198784}, & \hdots
\end{matrix} \\
& \hspace{16ex}
\begin{matrix}[1.5]
\hdots & \frac{9947}{1741056}, & \frac{6243}{543104}, & \frac{6875}{296192}, & \frac{1401}{28496}, & \frac{17713}{152688}, & \frac{6375}{19264} & \bigg]^T.
\end{matrix}
\end{aligned}
\end{equation*}
The scheme's stability domain is plotted in Fig.~\ref{fig:stabdomain_p8_8core}, with a
zoom-in around the imaginary axis on the right-hand side.
This method can be run efficiently on eight cores with time steps 8.96 times larger
than those of RK$_4$. After normalizing by the number of function evaluations, the scheme
is 15.6\% faster than RK$_4$ and faster than RK$_8$ by a factor of 2.87.
The two additional cores grant a 6.5\% increase in efficiency over GBS$_{8,6}$.
\begin{figure}[H]
\includegraphics[width=\linewidth]{\stabfigurepath/stabdomain_p8_8core_rat}
\caption{Stability domain of the eighth order, eight core scheme (left), and zoom (right)}
\label{fig:stabdomain_p8_8core}
\end{figure}
\subsubsection{The Twelfth Order, Eight Core Method: GBS$_{12,8}$}
The following twelfth order method runs on eight cores and is therefore dubbed GBS$_{12,8}$.
The scheme's ISB$_{\text{n}}$ is 0.7116, a 0.17\% reduction from the optimal eight core value of 0.7128.
The scheme can be implemented using the following step counts and extrapolation weights:
\begin{equation*}
\begin{aligned}
\{n_{\text{dep}}\} &= \hspace{.3ex} \left\{\begin{matrix} 2, & 8, & 10, & 16, & 24, & 26 \end{matrix}\right\} \\
\{n_{\text{free}}\} &= \hspace{.3ex} \left\{\begin{matrix} 4, & 6, & 12, & 14, & 18, & 20, & 22, & 28, & 30 \end{matrix}\right\} \\
c_{\text{free}} &=
\begin{matrix}[1.5]
\bigg[ & \frac{235}{21030240256}, & \frac{4147}{1612709888}, & \frac{11521}{39731200}, & \frac{2375}{3528704}, & \frac{6435}{708736}, & \hdots
\end{matrix} \\
& \hspace{27ex}
\begin{matrix}[1.5]
\hdots & \frac{1291}{15780}, & \frac{11311}{4672}, & -\frac{180864}{751}, & \frac{222080}{2079} &\bigg]^T.
\end{matrix}
\end{aligned}
\end{equation*}
Stable time steps with GBS$_{12,8}$ are 7.79 times larger than those of RK$_4$
and, after normalization, time-to-solution is improved by only 0.6\%.
This (very) modest raw-speed improvement is, however, greatly amplified by the twelfth order convergence
of the method: wall time to achieve a desired accuracy is far shorter than that of RK$_4$.
\begin{figure}[H]
\includegraphics[width=\linewidth]{\stabfigurepath/stabdomain_p12_8core_rat}
\caption{Stability domain of the twelfth order, eight core scheme (left), and zoom (right)}
\label{fig:stabdomain_p12_8core}
\end{figure}
\section{Numerical Results}\label{sec:numerical_results}
The following results demonstrate the optimized time
steppers on a test problem with known analytic solutions.
\subsection{One-Way Wave Equation}\label{sec:waveresults}
To demonstrate the performance benefit over standard time steppers from the
literature we run the periodic one-way wave equation,
\begin{equation}
\begin{aligned}
& \tfrac{\partial}{\partial t}u(x,t) + \tfrac{\partial}{\partial x}u(x,t) = 0, \hspace{4ex}&& 0 \le x \le 1, \hspace{2ex}t>0, \\
& u(x,0) = \tfrac{1}{2}\left(1-\cos{2\pi x}\right), && 0 \le x \le 1,
\end{aligned}
\end{equation}
utilizing the rational-coefficient GBS$_{8,6}$ and GBS$_{12,8}$ methods, with RK$_4$ as reference.
Spatial derivatives are spectral to ensure errors are due to the time stepping algorithm alone. We run
all time steppers near their respective limits of stability, at $\lambda=\Delta t/\Delta x=\frac{.99}{\pi}\times \text{ISB}$,
where the factor of $\pi$ arises from the spectral spatial derivatives.
After convecting the wave once around the periodic interval we compute the absolute error with
respect to the analytic solution, then refine in both time and space.
Convergence to the analytic solution for the various methods
is demonstrated in Fig.~\ref{fig:convection_error}. For fair comparison across methods,
the horizontal axis is time step normalized by the number of function evaluations per step.
Thus vertical slices correspond to equal time-to-solution, neglecting the overhead of sharing
data across cores. We use a high precision floating point library \cite{MCT15} for computation since
machine precision is achieved in the high order methods before we can establish a
trendline. Coefficient truncation to double precision causes error to
stagnate at $10^{-14}$ and $10^{-12}$ for the eighth and twelfth order methods, respectively.
To obtain full floating point precision to $10^{-16}$ the extrapolation coefficients
must be precise to twenty significant digits.
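For reference, a minimal Python sketch of the test setup (illustrative only: variable names are
ours, double precision is used, and classical RK$_4$ is shown in place of the extrapolated schemes
for brevity):
\begin{verbatim}
import numpy as np

nx = 64                                           # periodic grid on [0, 1)
x = np.arange(nx) / nx
ik = 2j * np.pi * np.fft.fftfreq(nx, d=1.0 / nx)  # spectral wavenumbers (times i)
u0 = 0.5 * (1 - np.cos(2 * np.pi * x))

def rhs(u):                                       # u_t = -u_x, spectral derivative
    return -np.real(np.fft.ifft(ik * np.fft.fft(u)))

dt = 0.99 / np.pi * 2.8284 / nx                   # dt/dx = 0.99/pi * ISB (RK4)
nsteps = int(round(1.0 / dt)); dt = 1.0 / nsteps  # land exactly on t = 1
u = u0.copy()
for _ in range(nsteps):                           # classical RK4 time stepping
    k1 = rhs(u); k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2); k4 = rhs(u + dt * k3)
    u += dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
print(np.max(np.abs(u - u0)))                     # error after one period
\end{verbatim}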
\begin{figure}[H]
\includegraphics[width=\linewidth]{\plotfigurepath/convect_error}
\caption{Convection error vs. normalized time step for various methods}
\label{fig:convection_error}
\end{figure}
\section{Conclusions}
We have presented a scheme for maximizing the time step size for extrapolation
based ODE solvers. To do so we construct an underdetermined Vandermonde system,
then optimize the weights to maximize the stability domain along a given curve in
the complex plane. For wave-type PDEs we utilize GBS integrators and optimize the
methods for imaginary axis coverage. We achieve large ISB
values for methods through order sixteen which, when implemented on a computer with
several cores, yield faster time to solution than standard Runge-Kutta integrators.
The optimization method leaves both the time integrator and desired contour of stability
as user parameters. Changing the ODE integrator in turn changes the stability polynomial
basis, which immediately affects the resulting extrapolated stability domains. The GBS
integrator maintains large ISB through extrapolation; other integrators may be better suited
for different desired stability domains. Future work therefore involves identifying other
suitable integrators for stability domain optimization in different contexts.
The underdetermined extrapolation scheme saturates in parallelism
around ten cores. We can improve scalability by incorporating the optimized schemes as local
building blocks in time-parallel solvers like Parareal \cite{LMT01}. These solvers are known to be
less efficient with wave-type PDEs \cite{G15}. Stable algorithms may be achieved by optimizing the
global time-parallel integrators rather than optimizing the coarse and fine grid propagators individually.
These degrees of freedom provide more flexibility than optimizing only the local schemes,
and this is a promising research direction for improving the time to solution for wave-type equations.
\begin{appendices}
\renewcommand\thesection{Appendix \Alph{section}}
\section{Stability domains and imaginary axis coverage for some standard classes of ODE solvers}
\label{sec:appendix_a}
\renewcommand\thesection{\Alph{section}}
\subsection{Stability domains and their significance for MOL time stepping}
Each numerical ODE integration technique has an associated \emph{stability
domain}, defined as the region in a complex $\xi$-plane, with $\xi=h \lambda$,
for which the ODE method does not have any growing solutions when
it is applied to the constant coefficient ODE
\begin{equation}
y^{\prime}=\lambda y\:.
\label{eq:SD_ODE}
\end{equation}
For a one-step method the \emph{stability polynomial}, here denoted
$R(\xi)$, is the factor by which the numerical solution of Dahlquist's
test equation (\ref{eq:SD_ODE}) is multiplied in one step, $y_{n+1}=R(h\lambda)\,y_{n}$ \cite{NW91}. The method's stability
domain is then
\begin{align}
S=\{\xi \in \mathbb{C} : |R(\xi)| \le 1\}.
\end{align}
When solving ODEs, the stability domain can provide a guide to the
largest time step $h$ that is possible without a decaying solution
being misrepresented as a growing one. In the context of MOL-based
approximations to a PDE of the form $\frac{\partial u}{\partial t}=L(x,t,u)$,
the role of the stability domain becomes quite different, providing
necessary and sufficient conditions for numerical stability under
spatial and temporal refinement: all eigenvalues to the discretization
of the PDE's spatial operator $L$ must fall within the solver's stability
domain. For wave-type PDEs, the eigenvalues of $L$ will predominantly
fall up and down the imaginary axis. As long as the time step $h$ is
small enough, this condition can be met for solvers that feature a
positive ISB, but never for solvers with ISB = 0.
\subsection{Runge-Kutta methods}
All $p$-stage RK methods of order $p$ feature the same stability
domains when $p=1,2,3,4.$ For higher orders of accuracy, more than $p$ stages
(function evaluations) are required to obtain order $p$. The $\textrm{R}\textrm{K}_{4}$
scheme used here is the classical one, and the $\textrm{R}\textrm{K}_{8}$
scheme is the one with 13 stages, introduced by Prince and Dormand
\cite{PD81}, also given in \cite{HNW87} Table 6.4. Their normalized stability
domains are shown in Fig.~\ref{fig:stabdomain_rk}.
Their (unnormalized) ISBs are 2.8284 and 3.7023, respectively.
\begin{figure}[H]
\includegraphics[width=\linewidth]{\stabfigurepath/stabdomain_rk_4_8}
\caption{Normalized stability domains for RK$_4$ (left) and RK$_8$ (right)}
\label{fig:stabdomain_rk}
\end{figure}
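The quoted ISB values are easily verified numerically by sampling the stability polynomial along the
imaginary axis; a small Python sketch (illustrative) for the classical RK$_4$ polynomial
$R(\xi)=1+\xi+\xi^{2}/2+\xi^{3}/6+\xi^{4}/24$:
\begin{verbatim}
import numpy as np

def isb(coeffs, y_max=10.0, n=200001):
    """Largest y such that |R(i*eta)| <= 1 for all sampled 0 <= eta <= y."""
    y = np.linspace(0.0, y_max, n)
    stable = np.abs(np.polyval(coeffs, 1j * y)) <= 1.0 + 1e-12
    return y[-1] if stable.all() else y[np.argmin(stable) - 1]

rk4 = [1/24, 1/6, 1/2, 1, 1]
print(isb(rk4))              # approximately 2.8284 = 2*sqrt(2)
\end{verbatim}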
\renewcommand\thesection{Appendix \Alph{section}}
\section{ISB Optimization Algorithm}
\label{sec:appendix_b}
\renewcommand\thesection{\Alph{section}}
\subsection{Optimization Formulation}
Let the extrapolated GBS stability polynomial be denoted $R(\xi)$, and the individual
stability polynomials from each of $m$ extrapolation components be denoted $P_i(\xi)$.
Then we have
\begin{align}
R(\xi) = \sum_{i=1}^m{c_iP_i(\xi)}
\end{align}
for extrapolation weights $c_i$, $1 \le i \le m$.
We collect the monomial coefficients of each $P_i(\xi)$ into the rows of a matrix, denoted $P(\xi)$.
Then $R(\xi)$ can be more compactly expressed as $R(\xi)=c^TP(\xi)$. Now let the
left-hand-side Vandermonde matrix from (\ref{eq:vander}) be denoted $V$, and the right-hand-side
constraint vector be denoted $b$. Then our order constraint equation (\ref{eq:vander}) can be
rewritten as $Vc=b$.
Denoting the time step size $h$, and given a curve $\Lambda \subset \mathbb{C}$,
we specify the optimization problem as follows:
\begin{equation}
\begin{aligned}
& \underset{c_1,c_2,\ldots,c_m}{\text{maximize}}
& & h \\
& \text{subject to}
& & |R(h\lambda)|-1 \leq 0, \; \forall \lambda \in \Lambda. \\
& & & Vc = b
\end{aligned}
\label{eq:problem1}
\end{equation}
Following the work of Ketcheson and Ahmadia in \cite{KA12}, we reformulate the
optimization problem in terms of an iteration over a convex subproblem. Minimizing
the maximum value of $|R(h\lambda)|-1$ over the weights $c_i$ is a convex problem
(see \cite{KA12}). We therefore define the subproblem as follows:
\begin{equation}
\begin{aligned}
& \underset{c_1,c_2,\ldots,c_m}{\text{minimize}}
& & \max_{\lambda\in\Lambda}{(|R(h\lambda)|-1)} \\
& \text{subject to}
& & Vc = b
\end{aligned}
\label{eq:cvx_subproblem}
\end{equation}
Calling the minimax solution to (\ref{eq:cvx_subproblem}) $r(h,\Lambda)$, we can now
reformulate the optimization problem as:
\begin{equation}
\begin{aligned}
& \underset{c_1,c_2,\ldots,c_m}{\text{maximize}}
& & h \\
& \text{subject to}
& & r(h,\Lambda) \leq 0 \\
\end{aligned}
\label{eq:reformulated_problem}
\end{equation}
The optimization routine was implemented with the CVX toolbox for MATLAB \cite{CVX14}
using a bisection over time step $h$. Results presented in this paper use the software
\emph{OPTISB} \cite{AE19} to optimize the stability domains.
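For readers who prefer a Python environment, the subproblem (\ref{eq:cvx_subproblem}) can be
prototyped as follows (a minimal sketch using the CVXPY package, not the \emph{OPTISB}
implementation itself; the contour is assumed to be supplied as sampled values $h\lambda_k$ at
which the component polynomials $P_i$ have already been evaluated):
\begin{verbatim}
import numpy as np
import cvxpy as cp

def subproblem(P, V, b):
    """Convex subproblem: minimize max_k |R(h*lambda_k)| - 1 subject to V c = b.

    P : complex array, shape (K, m); P[k, i] = P_i(h*lambda_k) on the sampled contour
    V : order-constraint matrix, shape (p/2, m);  b : right-hand side of the system
    """
    c = cp.Variable(P.shape[1])
    modulus = cp.norm(cp.vstack([P.real @ c, P.imag @ c]), 2, axis=0)  # |R| at samples
    prob = cp.Problem(cp.Minimize(cp.max(modulus) - 1.0), [V @ c == b])
    prob.solve()
    return prob.value, c.value
\end{verbatim}
An outer bisection on $h$, accepting $h$ whenever the returned value $r(h,\Lambda)$ is
non-positive, then realizes (\ref{eq:reformulated_problem}); the conditioning issues mentioned
in Sect.~\ref{sec:optresults} apply here as well.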
\subsection{Comparison to Optimizing Monomial Coefficients}
The main theoretical difference between our current algorithm and the algorithm
presented in \cite{KA12} is the basis over which coefficients are
optimized. In \cite{KA12} the authors optimize directly the coefficients
to the stability polynomial in the monomial basis.
This yields an optimal stability polynomial that must be approximated
with a Runge-Kutta integrator. The polynomial is therefore fed into a second
optimization routine to compute the Runge-Kutta coefficients.
In the extrapolation coefficient optimization we operate directly on linear
combinations of the time stepper stability polynomials. The true optimal stability
polynomial therefore may not be in the space of extrapolated GBS time stepper
stability polynomials. However, the resulting stability polynomial is immediately
realizable and we require no further optimization stage to generate our time
stepping algorithm.
\subsection{Implementing Order Constraints}
To guarantee accuracy we require order constraints to be satisfied to machine
precision. Most optimization routines accept equality constraints that will hold within a
certain tolerance. Due to ill-conditioning of the Vandermonde systems we
prefer to explicitly enforce the order constraints in the convex optimization.
As in Sect.~\ref{sec:notation} we split the stability polynomials into two groups which take on the ``dep" and
``free" subscripts, denoting dependent and optimized quantities, respectively. The dependent
weights guarantee the extrapolation scheme achieves the specified order of accuracy.
The remaining weights are our optimization variables. Thus the stability
polynomial is computed as follows:
\begin{align}
R(\xi) = c_{\text{dep}}^TP_{\text{dep}}(\xi) + c_{\text{free}}^TP_{\text{free}}(\xi).
\end{align}
Order constraints take the form (\ref{eq:order_constraints}) which yields the
dependent weight computation (\ref{eq:cdep_computation}).
Splitting the weights apart reduces the number of design variables and, in practice, leads to
better solutions than when utilizing equality constraints.
\end{appendices}
\bibliographystyle{spmpsci}
\section{Introduction}
The penetration of new renewable energy sources (RES) such as photovoltaic panels and wind turbines is increasing
in most electric power grids worldwide. In their current configuration, these energy sources are essentially inertialess,
and their increased penetration leads to low inertia situations in periods of high RES production~\cite{ulbig2014impact}.
This raises important
issues of power grid stability, which is of higher concern to transmission system operators than the volatility of
the RES productions~\cite{Mil15,Win15}. The substitution of traditional productions based on
synchronous machines with inertialess RES may in particular lead to
geographically inhomogeneous inertia profiles.
It has been suggested to deploy substitution inertia -- synthetic inertia, flywheels or synchronous condensers --
to compensate locally or globally for missing inertia. Two related questions naturally arise: (i) where is it safe to
substitute synchronous machines with inertialess RES, and (ii) where is it optimal to distribute substitution inertia?
Problem (ii) has been investigated in small power grid models with up to a dozen
buses, optimizing the geographical distribution of inertia against cost functions based
on eigenvalue damping ratios~\cite{borsche2015effects}, $\mathcal{H}_p$-norms~\cite{mevsanovic2016comparison,poolla2017optimal} or RoCoF~\cite{paganini2017global,bor18} and frequency
excursions~\cite{bor18}.
Investigations of problem (i) on large power grids
emphasized the importance of the geographical extent of the slow network modes~\cite{pagnier2019inertia}.
Numerical optimization can certainly be performed for any given network on a case-by-case basis, however
it is highly desirable to shed light on the problem with analytical results. So far, such results have been either
restricted to small systems or derived under the assumption that damping and inertia parameters,
or their ratio, are homogeneous. In this manuscript we go beyond these assumptions and construct an
approach applicable to large power grids with inhomogeneous independent damping and inertia parameters.
Inspired by theoretical physics, we introduce {\it matrix perturbation theory}~\cite{stewart1990matrix} as an
analytical tool to tackle this problem. That method is widely used in quantum
physics, where it delivers approximate solutions
to complex, perturbed
problems, extrapolated from known, exact solutions of integrable problems~\cite{macintyre}.
The approximation is valid as long as the difference between the two problems is small and it makes sense to
consider the full, complex problem as a {\it perturbation} of the exactly soluble, simpler problem.
The procedure is spectral in nature. It identifies a small, dimensionless parameter in which
eigenvalues and eigenvectors of the perturbed problem can be systematically expanded in a series not unlike a Taylor-expansion
about the unperturbed, integrable solution.
Depending on the value of that small parameter, the series can be truncated at low orders already and still deliver
rather accurate results. In the context of electric power grids,
the method was applied for instance in Ref.~\cite{coletta2018performance}, where quadratic performance measures similar to those discussed below were calculated following a line fault, starting from
the eigenvalues and eigenvectors of the network Laplacian before the fault.
In this paper, we apply matrix perturbation theory~\cite{stewart1990matrix} to calculate performance measures
following an abrupt power loss in the case of a transmission power grid with geographically
inhomogeneous inertia and damping (primary control) parameters. Our perturbation theory is an expansion in two parameters
which are
the maximal deviations $\delta m$ and $\delta d$ of the rotational inertia and damping parameters from their
average values $m$ and $d$. The approach is valid as long as these local deviations are small,
$|\,\delta m/m\,|<1$, $|\,\delta d/d\,|<1$. These conditions tolerate in principle that inertia and damping parameters
vanish or are twice as large as their average on some buses.
The main step forward brought about by our approach is that we are able
to derive analytical results without relying on the often used homogeneity assumptions that damping, inertia or
their ratio is constant - assumptions which are not satisfied in real electric power grids.
Our main results are given in Theorems~\ref{thm:lin_opt_inertia} and \ref{thm:lin_opt_control} below, which formulate
algorithms for optimal placement of local inertia and damping parameters.
The spectral decomposition approach used here has recently drawn the attention of a number of groups and has been used
to calculate performance measures in power grids and consensus algorithms e.g. in \cite{paganini2017global,Pir17,Guo18,Por18}.
The article is organized as follows. Section~\ref{section:homogeneous} deals with the case where inertia and primary control are
uniformly distributed in the system. The performance measure that quantifies system disturbances is introduced
and we calculate its value for abrupt power losses. In Section~\ref{section:perturbation} we apply matrix perturbation theory to calculate the sensitivities of our measure to local variations of inertia and primary control. Section~\ref{section:optimal} presents the optimal placement of inertia and primary control in the case of weak inhomogeneity. In Section~\ref{section:applications} we apply our optimal placements to the continental European grid. Section~\ref{section:conclusion} concludes our article.
\section{Homogeneous case}\label{section:homogeneous}
We are interested in the dynamical response of an electric power grid to a disturbance such as an abrupt power loss. To that end,
we consider the power system dynamics in the lossless line approximation, which is a standard approximation used for
high voltage transmission grids~\cite{machowski2008power}. That dynamics is governed by the swing equations,
\begin{equation}\label{eq:swing}
m_i\dot\omega_i+d_i\omega_i=P_i-\sum_jB_{ij}\sin(\theta_i-\theta_j)\, ,
\end{equation}
which determine the time-evolution of the voltage angles $\theta_i$ and frequencies $\omega_i=\dot{\theta}_i$
at each of the $N$ buses labelled
$i$ in the power grid in a rotating frame such that $\omega_i$ measures the angle frequency deviation
to the rated grid frequency
of 50 or 60 Hz. Each bus is characterized by inertia, $m_i$, and damping, $d_i$, parameters
and $P_i$ is the active power injected ($P_i>0$) or extracted ($P_i<0$) at bus $i$. We introduce the
damping ratio $\gamma_i \equiv d_i/m_i$.
Buses are connected
to one another via lines with susceptances $B_{ij}$. Stationary solutions $\{\theta_i^{(0)}\}$ are power flow solutions
determined by $P_i=\sum_jB_{ij}\sin(\theta_i^{(0)}-\theta_j^{(0)})$.
Under a change in active power $P_i \rightarrow P_i + \delta P_i$, linearizing the dynamics
about such a solution with $\theta_i(t) = \theta_i^{(0)}+\delta \theta_i(t)$ gives, in matrix form,
\begin{equation}
\bm M\bm{\dot\omega}+\bm D \bm \omega= \bm{\delta P} - \bm{L \delta \theta} \,,\label{eq:swinglin}
\end{equation}
where $\bm M={\rm diag}(\{m_i\})$, $\bm D={\rm diag}(\{d_i\})$ and voltage angles and frequencies are cast into vectors
$\bm{\delta\theta}$ and $\bm \omega\equiv\bm{\delta\dot\theta}$.
The Laplacian matrix $\bm L$ has matrix elements
$L_{ij} = -B_{ij} \cos(\theta_i^{(0)}-\theta_j^{(0)})$, for $i \ne j$ and
$L_{ii} = \sum_{k} B_{ik} \cos(\theta_i^{(0)}-\theta_k^{(0)})$.
\subsection{Exact solution for homogeneous damping ratio}
When the damping ratio is constant, $d_i/m_i=\gamma_i = \gamma$, $\forall i$, \eqref{eq:swinglin} can be integrated
exactly~\cite{paganini2017global,coletta2018performance}. To see this we first
transform angle coordinates as
$\bm{\delta\theta}=\bm{M}^{-1/2}\bm{\delta\theta_M}$ to obtain
\begin{equation}
\bm{\dot{\omega}_M}+\underbrace{\bm M^{-1} \bm D}_{\bm \Gamma}\bm{\omega_M}+\underbrace{\bm M^{-1/2}\bm L \bm M^{-1/2}}_{\bm{L_M}}\bm{\delta\theta_M}=\bm M^{-1/2}\bm{\delta P}\,,\label{eq:omegaM}
\end{equation}
where we introduced the diagonal matrix $\bm \Gamma = {\rm diag}(\{d_i/m_i\}) \equiv {\rm diag}(\{\gamma_i\})$.
The inertia-weighted matrix $\bm{L_M}$ is real and symmetric, therefore it can be diagonalized
\begin{equation}\label{eq:lm}
\bm{L_M} = \bm U^{\top}\bm\Lambda \bm U
\end{equation}
with an orthogonal matrix $\bm U$, the $\alpha^{\rm th}$ row of which gives the components $u_{\alpha,i}$, $i=1, \ldots N$
of the $\alpha^{\rm th}$ eigenvector $\bm u_\alpha$ of $\bm{L_M}$. The diagonal matrix
$\bm \Lambda={\rm diag}(\{\lambda_1=0,\lambda_2,\cdots,\lambda_N\})$ contains the eigenvalues of $\bm L_M$
with $\lambda_\alpha<\lambda_{\alpha+1}$. For connected networks only the smallest eigenvalue $\lambda_1$ vanishes,
which follows from the zero row and column sum property of the Laplacian
matrix $\bm{L}$: the congruence transformation defining $\bm{L_M}$ in \eqref{eq:omegaM} preserves the number of vanishing eigenvalues. Rewriting \eqref{eq:omegaM} in the basis diagonalizing $\bm L_M$ gives
\begin{equation}
\bm{\ddot\xi}+\bm U \bm\Gamma \bm U^\top \bm{\dot\xi} +\bm\Lambda \bm\xi=\bm U\bm M^{-1/2}\bm{\delta P}\, ,\label{eq:xi_ode}
\end{equation}
where $\bm{\delta\theta_M}=\bm U^\top \bm \xi$. This change of coordinates is nothing but a spectral decomposition of angle
deviations $\bm{\delta\theta_M}$ into their components in the basis of eigenvectors of $\bm L_{\bm M}$. These components
are cast in the vector $\bm \xi$.
The formulation \eqref{eq:xi_ode} of the problem makes it clearer that, if $\bm \Gamma$
is a multiple of identity, the problem can be recast as a diagonal ordinary differential equation problem
that can be exactly integrated. This is done below in \eqref{eq:chi_ode}, and
provides an exact solution about which we will construct a matrix perturbation theory in the next sections.
\begin{proposition}[Unperturbed evolution] \label{proposition1}
For an abrupt power loss,
$\bm{\delta P}(t) = \bm{\delta P} \, \Theta(t)$, with the Heaviside step function defined by $\Theta(t>0)=1$, $\Theta(t<0)=0$,
and with homogeneous damping ratio,
$\bm \Gamma=\gamma \, \mathbb{1}$ with the $N \times N$ identity matrix $\mathbb{1}$, the frequency
coordinates $\dot\xi_\alpha$ evolve independently as
\begin{align}
\dot\xi_\alpha(t)&=\frac{2\mathcal{P}_\alpha}{f_\alpha }e^{-\gamma t/2}\sin\Big(\frac{f_\alpha t}{2}\Big)\, \text{, $\forall \alpha >1$,}\label{eq:dchii}
\end{align}
where $f_\alpha =\sqrt{4\lambda_\alpha -\gamma^2}$ and $\mathcal{P}_\alpha=\sum_i u_{\alpha i}\,\delta P_i / m_i^{1/2}$.
\end{proposition}
This result generalizes Theorem III.3 of \cite{Guo18}.
\begin{IEEEproof}
The proof goes along the lines of the diagonalization procedure proposed in
\cite{paganini2017global,coletta2018performance,coletta2018transienta}. Equation
\eqref{eq:xi_ode} can be rewritten as
\begin{equation}
\frac{{\rm d}}{{\rm d}t}\left[\!\begin{array}{c}
\bm\xi\\
\bm{\dot\xi}
\end{array}\!\right]=\underbrace{
\left[\!\begin{array}{cc}
\mathbb{0}_{N\times N}&\!\!\!\!\mathbb{1}\\
\!\!\!\!-\bm\Lambda&\!\!\!\!\!\!-\gamma \, \mathbb{1}
\end{array}\!\right]}_{\bm H_0}\!\left[\!\begin{array}{c}
\bm\xi\\
\bm{\dot\xi}
\end{array}\!\right]+
\bigg[\!\begin{array}{c}
\mathbb{0}_{N\times 1} \\
\bm{\mathcal{P}}
\end{array}\!\bigg]\,,\label{eq:H}
\end{equation}
where $\bm{\mathcal{P}}=\bm U\bm M^{-1/2} \bm{\delta P}$ and $\mathbb{0}_{N\times M}$ is the $N\times M$ matrix of zeroes. The matrix
$\bm H_0$ is block-diagonal up to a permutation of rows and columns \cite{coletta2018performance}, and can easily be diagonalized
block by block, where each $2 \times 2$ block corresponds to one of the eigenvalues $\lambda_\alpha$ of $\bm{L_M}$.
The $\alpha^{\rm th}$ block is diagonalized by the transformation
\begin{align}
\left[\!\begin{array}{c}
\chi^{(0)}_{\alpha+}\\
\chi^{(0)}_{\alpha-}
\end{array}\!\right]
&=\bm{T}_{\alpha}^{L}\left[\!\begin{array}{c}
\xi_\alpha\\
\dot\xi_\alpha
\end{array}\!\right]\,, \;\;\; \bm{T}_{\alpha}^{L}\equiv
\frac{i}{f_\alpha }\left[\!\begin{array}{cc}
\phantom{-}\mu_{\alpha-}^{(0)} & -1\\
-\mu_{\alpha+}^{(0)} &\phantom{-}1
\end{array}\!\right] \, ,\label{eq:TiL}
\\
\left[\!\begin{array}{c}
\xi_\alpha\\
\dot\xi_\alpha
\end{array}\!\right]
&= \bm{T}_{\alpha}^{R}\left[\!\begin{array}{c}
\chi^{(0)}_{\alpha +}\\
\chi^{(0)}_{\alpha -}
\end{array}\!\right]\, , \;\;\;
\bm{T}_{\alpha}^{R} \equiv
\left[\!\begin{array}{cc}
1 & 1\\
\mu_{\alpha+}^{(0)} & \mu_{\alpha-}^{(0)}
\end{array}\!\right]\, ,\label{eq:TiR}
\end{align}
with the eigenvalues $\mu_{\alpha\pm}^{(0)}$ of the $\alpha^{\rm th}$ block,
\begin{equation}
\mu_{\alpha\pm}^{(0)}=-\frac{1}{2}(\gamma \mp if_\alpha)\, .
\end{equation}
The two rows (columns) of $\bm{T}_{\alpha}^{L}$ ($\bm{T}_{\alpha}^{R}$) give the nonzero components of the
two left (right) eigenvectors $\bm t_{\alpha \pm} ^{(0)L}$ ($\bm t_{\alpha \pm}^{(0)R}$) of $\bm H_0$.
Following this transformation, \eqref{eq:H} reads
\begin{equation}
\frac{{\rm d}}{{\rm d}t}\left[\!\!\begin{array}{c}
\chi^{(0)}_{\alpha +}\\
\chi^{(0)}_{\alpha -}
\end{array}\!\!\right]=
\left[\!\!\begin{array}{cc}
\mu_{\alpha +}^{(0)} & 0\\
0 & \mu_{\alpha -}^{(0)}
\end{array}\!\!\right]\!\left[\!\!\begin{array}{c}
\chi^{(0)}_{\alpha +}\\
\chi^{(0)}_{\alpha -}
\end{array}\!\!\right]+
\frac{i}{f_{\alpha}}\left[\!\!\begin{array}{c}
-\mathcal{P}_\alpha \\
\phantom{-}\mathcal{P}_\alpha
\end{array}\!\!\right]\,.\label{eq:chi_ode}
\end{equation}
The solutions of \eqref{eq:chi_ode} are
\begin{align}
\chi^{(0)}_{\alpha\pm}&=\pm\frac{i\,\mathcal{P}_\alpha }{f_\alpha \mu_{\alpha\pm}^{(0)}}\Big(1-e^{\mu_{\alpha\pm}^{(0)} t}\Big)\,,\;\forall \alpha>1\label{eq:chiipm} \, .
\end{align}
Inserting \eqref{eq:chiipm} back into \eqref{eq:TiL}, one finally finds \eqref{eq:dchii}
which proves the proposition.
\end{IEEEproof}
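As a numerical sanity check, \eqref{eq:dchii} can be compared with a direct integration of the
linearized dynamics \eqref{eq:swinglin}. The following Python sketch (illustrative; the ring
network and parameter values are arbitrary) does so for a step power loss at one bus:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

N, m, gam, dP, bus = 5, 1.0, 0.5, 1.0, 2          # homogeneous m_i = m, gamma_i = gam
L = 2 * np.eye(N) - np.roll(np.eye(N), 1, 1) - np.roll(np.eye(N), -1, 1)  # ring
lam, U = np.linalg.eigh(L / m)                    # spectrum of L_M (columns = u_alpha)
dPvec = np.zeros(N); dPvec[bus] = -dP             # abrupt power loss at one bus

# Direct integration of M*domega/dt + D*omega = dP - L*dtheta.
def f(t, z):
    th, w = z[:N], z[N:]
    return np.concatenate([w, (dPvec - L @ th - gam * m * w) / m])
sol = solve_ivp(f, (0, 20), np.zeros(2 * N), rtol=1e-9, atol=1e-11, max_step=1e-2)

# Modal solution, mapped back to bus frequencies omega = M^{-1/2} U^T xi_dot.
t, Pa = sol.t, U.T @ dPvec / np.sqrt(m)
fa = np.sqrt(4 * lam[1:] - gam ** 2)              # underdamped modes assumed
xidot = (2 * Pa[1:, None] / fa[:, None] * np.exp(-gam * t / 2)
         * np.sin(fa[:, None] * t / 2))
xidot0 = Pa[0] / gam * (1 - np.exp(-gam * t))     # zero mode (system frequency)
w_modal = (U[:, :1] * xidot0 + U[:, 1:] @ xidot) / np.sqrt(m)
print(np.max(np.abs(sol.y[N:] - w_modal)))        # agreement up to integration tolerance
\end{verbatim}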
\subsection{Performance measure}
We want to mitigate disturbances following an abrupt power loss. To that end, we use performance measures
which evaluate the overall disturbance magnitude over time and the whole power grid.
Performance measures have been proposed, which
can be formulated as $\mathcal{L}_2$ and squared $\mathcal{H}_2$ norms of
linear
systems~\cite{poolla2017optimal,paganini2017global,coletta2018performance,coletta2018transienta,tegling2015price,Fardad14,gayme16,siami2016fundamental,tyloo2018robustness,guo2019performance} and are time-integrated quadratic forms in the angle,
$\bm{\delta \theta}$, or frequency, $\bm \omega$, deviations. Here we focus on frequency deviations and use the following
performance measure
\begin{equation}
\mathcal{M}=\intop_0^\infty\big(\bm \omega^\top -\bm{\bar\omega}^\top \big)\bm M\big(\bm \omega-\bm{\bar\omega}\big){\rm d}t\,,\label{eq:measure}
\end{equation}
where $\bm{\bar\omega}= (\omega_{\rm sys}, \omega_{\rm sys}, \ldots\omega_{\rm sys})^\top$ is the
instantaneous average frequency vector with components
\begin{equation}
\omega_{\rm sys}(t) = \sum_i m_i \omega_i(t) \Big / \sum_i m_i \, .
\end{equation}
It is straightforward to see that $\mathcal{M}$ reads
\begin{equation}\label{eq:M}
\mathcal{M}=\intop_0^\infty\sum_{\alpha>1}\dot\xi_\alpha^2(t){\rm d}t\, ,
\end{equation}
when rewritten in the eigenbasis of $\bm L_{\bm M}$, once one notices that the first eigenvector of $\bm L_{\bm M}$
(the one with zero eigenvalue) has components $u_{1i}=\sqrt{m_i}/\sqrt{\sum_j m_j}$.
\begin{proposition}
For an abrupt power loss,
$\bm{\delta P}(t) = \bm{\delta P} \, \Theta(t)$ on a single bus labeled $b$,
$\delta P_i=\delta_{ib} \, \delta P$,
and with an homogeneous damping ratio,
$\bm \Gamma=\gamma \, \mathbb{1}$ with the $N \times N$ identity matrix $\mathbb{1}$,
\begin{equation}
\mathcal{M}_b=\frac{\delta P^2}{2\gamma m_b}\sum_{\alpha>1}\frac{u_{\alpha b}^2}{\lambda_\alpha }\,,\label{eq:Mb}
\end{equation}
in terms of the eigenvalues $\lambda_\alpha$ and the components $u_{\alpha b}$ of the eigenvectors
$\bm u_\alpha$ of $\bm L_{\bm M}$.
\end{proposition}
Note that we introduced the subscript $b$ to indicate that the fault is localized on that bus only.
The power loss is modeled as $P_i = P_i^{(0)} -\delta P_i \, \Theta(t)$ with $\delta P_i = \delta_{ib} \, \delta P $ with the Kronecker
symbol $\delta_{ib}=1$ if $i = b$ and 0 otherwise.
\begin{IEEEproof}
Equation \eqref{eq:dchii} straightforwardly gives
\begin{equation}
\intop_0^\infty\dot\xi_\alpha^2(t){\rm d}t = \frac{\delta P^2\,u_{\alpha b}^2}{2\gamma \, m_b \, \lambda_\alpha }\,, \alpha>1\, ,
\end{equation}
which, when summed over $\alpha >1$ gives \eqref{eq:Mb}.
\end{IEEEproof}
\begin{remark}
For homogeneous inertia coefficients, $\bm M=m\mathbb{1}$, the eigenvectors and eigenvalues
of the inertia-weighted Laplacian $\bm L_{\bm M}$
defined in \eqref{eq:omegaM} are given by
$\bm u_{\alpha}=\bm u_{\alpha}^{(0)}$, and
$\lambda_\alpha=m^{-1}\lambda_\alpha^{(0)}$, in terms of the eigenvectors $\bm u_{\alpha}^{(0)}$ and eigenvalues
$\lambda_\alpha^{(0)}$ of the Laplacian $\bm L$. In that case, the performance measure reads
\begin{equation}
\mathcal{M}_b^{(0)}=\frac{\delta P^2}{2\gamma}\sum_{\alpha>1}\frac{u_{\alpha b}^{(0)2}}{\lambda_\alpha ^{(0)}}\,,
\label{eq:crit}
\end{equation}
where the superscript $^{(0)}$ refers to inertia homogeneity. This expression has an interesting graph theoretic
interpretation. We recall the definitions of the resistance distance $\Omega_{ij}$ between two nodes of the network,
the associated centrality $C_j$ and the generalized Kirchhoff indices $K\hspace{-0.8mm}f_p$~\cite{tyloo2018robustness,Kle93},
\begin{eqnarray}
\Omega_{ij} &=& \bm L_{ii}^\dagger + \bm L_{jj}^\dagger -\bm L_{ij}^\dagger -\bm L_{ji}^\dagger \, , \label{eq:rdistance} \\
C_j &=& N \, \Big(\sum_{i}\Omega_{ij}\Big)^{-1} \, , \label{eq:centrality} \\
K\hspace{-0.8mm}f_p &=& N \sum_{\alpha>1} \lambda_\alpha^{-p} \, ,
\label{eq:kfp}
\end{eqnarray}
where $\bm L^\dagger$ is the Moore--Penrose pseudo inverse of $\bm L$.
With these definitions, one can show that \cite{Por18,tyloo2018robustness,tyloo2018key}
\begin{equation}\label{eq:kf}
\sum_{\alpha>1} \frac{u_{\alpha b}^{(0)2}}{\lambda_\alpha ^{(0)}} = C_b^{-1} - N^{-2} K\hspace{-0.8mm}f_1 \, ,
\end{equation}
by using the spectral representation of the resistance distance~\cite{Gut96,Zhu96}
\begin{equation}
\Omega_{ib}=\sum_{\alpha>1}\big(u_{\alpha i}^{(0)}-u_{\alpha b}^{(0)}\big)^2\big/\lambda_{\alpha}^{(0)}\,.\label{eq:res_dist}
\end{equation}
Because $K\hspace{-0.8mm}f_1$ is a global quantity
characterizing the network, it follows from \eqref{eq:crit} with \eqref{eq:kf} that,
when inertia and primary control are homogeneously distributed in the system, the disturbance magnitude as measured
by $\mathcal{M}_b^{(0)}$ is larger for disturbances on peripheral nodes~\cite{pagnier2019inertia,Gam17}.
\end{remark}
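The graph theoretic quantities \eqref{eq:rdistance}--\eqref{eq:kfp} and the identity \eqref{eq:kf}
are easily checked numerically; a short Python sketch (illustrative, on an arbitrary small graph):
\begin{verbatim}
import numpy as np

def graph_quantities(L):
    """Resistance distances, centralities and Kf_1 from the Laplacian pseudoinverse."""
    N = L.shape[0]
    Ld = np.linalg.pinv(L)                        # Moore-Penrose pseudoinverse of L
    Omega = np.add.outer(np.diag(Ld), np.diag(Ld)) - 2 * Ld
    C = N / Omega.sum(axis=0)                     # centralities C_j
    Kf1 = N * np.sum(1.0 / np.linalg.eigvalsh(L)[1:])
    return Omega, C, Kf1

N = 6
L = 2 * np.eye(N) - np.roll(np.eye(N), 1, 1) - np.roll(np.eye(N), -1, 1)  # ring
lam, U = np.linalg.eigh(L)
Omega, C, Kf1 = graph_quantities(L)
b = 0
print(np.sum(U[b, 1:] ** 2 / lam[1:]), 1.0 / C[b] - Kf1 / N ** 2)  # both sides agree
\end{verbatim}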
\section{Matrix perturbation}\label{section:perturbation}
The previous section treats the case where inertia and primary control are uniformly distributed in the system. Our goal is to lift that restriction and to obtain $\mathcal{M}_b$ when some mild inhomogeneities are present. We parametrize these inhomogeneities by writing
\begin{align}
m_i&= m+\delta m\, r_i\,,\label{eq:mi}\\
d_i&= m_i\gamma_i=(m+\delta m\, r_i)(\gamma +\delta\gamma\, a_i)\label{eq:di}\,,
\end{align}
with the average $m$ and $\gamma$ and the maximum deviation amplitudes $\delta m$ and $\delta \gamma$
of inertia and damping ratio. Inhomogeneities are parametrized by the coefficients
$-1 \le a_i, r_i \le 1$ with $\sum_i r_i=\sum_i a_i=0$, which are determined by minimizing the performance
measure $\mathcal{M}_b$ of~\eqref{eq:measure}. In the following two paragraphs we construct a matrix perturbation theory
to linear order in the inhomogeneity parameters $\delta m$, and $\delta\gamma$ to calculate
the performance measure $\mathcal{M}_b= \mathcal{M}_b^{(0)} + \sum_ir_i \rho_i +\sum_ia_i \alpha_i +
\mathcal{O}(\delta m^2, \delta \gamma^2)$. This requires to
calculate the susceptibilities $\rho_i \equiv \partial \mathcal{M}_b/\partial r_i$
and $\alpha_i \equiv \partial \mathcal{M}_b/\partial a_i$.
\subsection{Inhomogeneity in inertia}
When inertia is inhomogeneous, but the damping ratios remain homogeneous, the system dynamics and $\mathcal{M}_b$ are still
given by \eqref{eq:H} and \eqref{eq:Mb}. However, the eigenvectors of the inertia-weighted Laplacian matrix
$\bm{L_M}$ differ from those of $\bm L$ and consequently $\mathcal{M}_b$ is no longer equal to $\mathcal{M}_b^{(0)}$. In general
there is no simple way to diagonalize $\bm{L_M}$, but one expects that if the inhomogeneity is weak, then the eigenvalues and
eigenvectors of $\bm{L_M}$ only slightly differ from those of $m^{-1}\bm L$, which allows to construct a perturbation theory.
\begin{assumption}[Weak inhomogeneity in inertia] The deviations $\delta m \, r_i$ of the local inertias
$m_i$ are all small compared to their average $m$. We write
$\bm M = m\big[\mathbb{1}+\mu\,{\rm diag}\big(\{r_i\}\big)\big]$, where $\mu\equiv\delta m/m\ll 1$
is a small, dimensionless parameter.\label{ass:inertia}
\end{assumption}
To linear order in $\mu$, the series expansion of $\bm{L_M}$ reads
\begin{align}
\bm{L_M}&=\bm M^{-1/2}\bm L\bm M^{-1/2}=m^{-1}\big[\bm L+\mu\bm V_1+\mathcal{O}(\mu^2)\big]\, ,
\end{align}
with $\bm V_1=-\big(\bm R\bm L+\bm L\bm R\big)/2$ and $\bm R={\rm diag}(\{r_i\})$.
In this form, the inertia-weighted Laplacian matrix $\bm{L_{M}}$ is given by the sum of
an easily diagonalizable matrix, $m^{-1}\bm L$, and a small
perturbation matrix, $(\mu/m) \, \bm V_1$. Truncating the expansion of $\bm{L_M}$ at this linear order
gives an error of order $\sim \mu^2$, which is small under Assumption~\ref{ass:inertia}.
Matrix perturbation theory gives approximate expressions for the eigenvectors $\bm u_{\alpha}$
and eigenvalues $\lambda_\alpha$ of $\bm{L_M}$ in terms of
those ($\bm u_{\alpha}^{(0)}$ and $\lambda_\alpha ^{(0)}$)
of $\bm{L}$~\cite{stewart1990matrix}. To leading order in $\mu$ one has
\begin{align}
\lambda_\alpha &= m^{-1}\big[\lambda_\alpha ^{(0)}+\mu\lambda_\alpha ^{(1)}+\mathcal{O}(\mu^2)\big]\,,\label{eq:lambda}\\
\bm u_{\alpha}&= \bm u_{\alpha}^{(0)}+\mu \bm u_{\alpha}^{(1)}+\mathcal{O}(\mu^2)\label{eq:u}\,,
\end{align}
with
\begin{align}
\lambda_\alpha ^{(1)}&=\bm u_{\alpha}^{(0)\top}\bm V_1\bm u_{\alpha}^{(0)}\,,\label{eq:lambda1}\\
\bm u_{\alpha}^{(1)}&=\sum_{\beta\neq \alpha}\frac{\bm u_{\beta}^{(0)\top}\bm V_1 \bm u_{\alpha}^{(0)}}{\lambda_\alpha ^{(0)}-\lambda_\beta^{(0)}}\bm u_{\beta}^{(0)}\,.\label{eq:u1}
\end{align}
From \eqref{eq:Mb}, \eqref{eq:lambda} and \eqref{eq:u}, the first-order approximation of $\mathcal{M}_b$ in $\mu$ reads
\begin{eqnarray}
\mathcal{M}_{b}&=&\mathcal{M}_{b}^{(0)}+\frac{\mu\delta P^2}{2\gamma}\sum_{\alpha>1}\lambda_\alpha ^{(0)-1}\Big(2u_{\alpha b}^{(0)}u_{\alpha b}^{(1)}-r_bu_{\alpha b}^{(0)2}\nonumber\\
&&-u_{\alpha b}^{(0)2}\lambda_\alpha ^{(0)-1}\lambda_\alpha ^{(1)}\Big)+\mathcal{O}(\mu^2)\,.\label{eq:M1}
\end{eqnarray}
\begin{proposition}\label{prop:rho}
For an abrupt power loss,
$\bm{\delta P}(t) = \bm{\delta P} \, \Theta(t)$ on a single bus labeled $b$,
$\delta P_i=\delta_{ib} \, \delta P$, and
under Assumption~\ref{ass:inertia}, the susceptibilites $\rho_i \equiv \partial \mathcal{M}_b/\partial r_i$
are given by
\begin{equation}
\rho_i=-\frac{\mu\delta P^2}{\gamma N}\sum_{\alpha>1}\frac{u_{\alpha b}^{(0)}u_{\alpha i}^{(0)}}{\lambda_\alpha ^{(0)}}\, .\label{eq:rho}
\end{equation}
\end{proposition}
\begin{IEEEproof}
Taking the derivative of \eqref{eq:M1} with respect to $r_i$, with $\lambda_\alpha ^{(1)}$ and $u_{\alpha b}^{(1)}$ given in
\eqref{eq:lambda1} and \eqref{eq:u1}, one gets
\begin{align}
\frac{\partial\mathcal{M}_b}{\partial r_i}&=\frac{\mu\delta P^2}{2\gamma}\bigg[\sum_{\substack{\alpha>1,\\\beta\neq \alpha}}\!u_{\alpha b}^{(0)}u_{\beta b}^{(0)}u_{\alpha i}^{(0)}u_{\beta i}^{(0)}\bigg(\frac{1}{\lambda_\alpha ^{(0)}}-\frac{2}{\lambda_\alpha ^{(0)}-\lambda_\beta^{(0)}}\bigg)\nonumber\\
&-\delta_{ib}\sum_{\alpha>1}\frac{u_{\alpha b}^{(0)2}}{\lambda_\alpha ^{(0)}}+\sum_{\alpha>1}\frac{u_{\alpha b}^{(0)2}u_{\alpha i}^{(0)2}}{\lambda_\alpha ^{(0)}}\bigg]+\mathcal{O}(\mu^2)\,,\label{eq:dmdr}
\end{align}
The first term in the square bracket in \eqref{eq:dmdr} gives
\begin{align}
\!\!\!\sum_{\substack{\alpha>1\\\beta\neq \alpha}}\frac{u_{\alpha b}^{(0)}u_{\beta b}^{(0)}u_{\alpha i}^{(0)}u_{\beta i}^{(0)}}{\lambda_\alpha ^{(0)}} &= \sum_{\substack{\alpha>1,\\\beta}}\frac{u_{\alpha b}^{(0)}u_{\beta b}^{(0)}u_{\alpha i}^{(0)}u_{\beta i}^{(0)}}{\lambda_\alpha ^{(0)}} \nonumber \\
- \sum_{\alpha>1}\frac{u_{\alpha b}^{(0)2} u_{\alpha i}^{(0)2}}{\lambda_\alpha ^{(0)}}&= \delta_{ib} \sum_{\alpha>1}\frac{u_{\alpha b}^{(0)2}}{\lambda_\alpha ^{(0)}} - \sum_{\alpha>1}\frac{u_{\alpha b}^{(0)2} u_{\alpha i}^{(0)2}}{\lambda_\alpha ^{(0)}} \, ,
\end{align}
where we used $\sum_\beta u_{\beta i}^{(0)}u_{\beta b}^{(0)}=\delta_{ib}$. This term therefore
exactly cancels the last two terms in the square bracket in \eqref{eq:dmdr}, and one obtains
\begin{equation}
\rho_i(b) = \frac{\partial \mathcal{M}_b}{\partial r_i}=-\frac{\mu\delta P^2}{\gamma}\sum_{\substack{\alpha>1,\\\beta\neq \alpha}}\frac{u_{\alpha b}^{(0)}u_{\beta b}^{(0)}u_{\alpha i}^{(0)}u_{\beta i}^{(0)}}{\lambda_\alpha ^{(0)}-\lambda_\beta^{(0)}}+\mathcal{O}(\mu^2)\,.\label{eq:wk2}
\end{equation}
The summand of the double sum in \eqref{eq:wk2} is odd under the permutation of $\alpha$ and $\beta$; therefore only the terms with
$\beta=1$ survive. With $u_{1i}^{(0)}=1/\sqrt{N}$, one finally obtains \eqref{eq:rho}.
\end{IEEEproof}
\begin{remark}\label{rmk:rho}
By summing over all fault locations $b$, one gets $\sum_{b}\rho_i(b)=0$. This follows from the orthogonality of the eigenvectors $\bm u_{\alpha}^{(0)}$, $\alpha>1$, to the uniform mode $\bm u_1^{(0)}$.
\end{remark}
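As an illustration of Proposition~\ref{prop:rho} and of Remark~\ref{rmk:rho}, the following Python sketch (NumPy assumed; a small ring network stands in for an actual grid) evaluates Eq.~\eqref{eq:rho} and verifies the sum rule $\sum_b\rho_i(b)=0$ numerically.
\begin{verbatim}
import numpy as np

def rho(L, b, mu, dP, gamma):
    # Susceptibilities rho_i = dM_b/dr_i of Eq. (rho), fault at bus b (0-based).
    N = L.shape[0]
    lam, U = np.linalg.eigh(L)                  # lam[0] = 0 is the constant mode
    acc = sum(U[b, a] * U[:, a] / lam[a] for a in range(1, N))
    return -mu * dP**2 / (gamma * N) * acc

# small illustrative test network: a ring of N buses with unit line weights
N = 6
A = np.zeros((N, N))
for i in range(N):
    A[i, (i + 1) % N] = A[(i + 1) % N, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

mu, dP, gamma = 0.05, 1.0, 0.2                  # illustrative parameter values
total = sum(rho(L, b, mu, dP, gamma) for b in range(N))
print(np.max(np.abs(total)))                    # ~ 0: the sum rule of the Remark
\end{verbatim}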
\subsection{Inhomogeneity in damping ratios}
Equation \eqref{eq:dchii} gives exact solutions to the linearized dynamical problem defined in \eqref{eq:xi_ode}, under the assumption of homogeneous damping ratio, $m_i/d_i \equiv \gamma$. In this section we lift that constraint and write
$\gamma_i = \gamma + \delta \gamma \, a_i$. With inhomogeneous damping ratios, \eqref{eq:H} becomes
\begin{equation}
\frac{{\rm d}}{{\rm d}t}\left[\!\begin{array}{c}
\bm\xi\\
\bm{\dot\xi}
\end{array}\!\right]=\underbrace{
\left[\!\begin{array}{cc}
\mathbb{0}_{N\times N}&\!\!\!\!\mathbb{1}\\
\!\!\!\!-\bm\Lambda&\!\!\!\!\!\!-\gamma \, \mathbb{1} - \delta \gamma \, \bm V_2
\end{array}\!\right]}_{\bm H}\!\left[\!\begin{array}{c}
\bm\xi\\
\bm{\dot\xi}
\end{array}\!\right]+
\bigg[\!\begin{array}{c}
\mathbb{0}_{N\times 1} \\
\bm{\mathcal{P}}
\end{array}\!\bigg]\,,\label{eq:H1}
\end{equation}
which differs from \eqref{eq:H} only through the additional term $-\delta \gamma \bm V_2$ with
$\bm V_2 = \bm U\bm A\,\bm U^\top$, $\bm A={\rm diag}(\{a_i\})$. Under the assumption that
the dimensionless parameter $g\equiv \delta \gamma/\gamma \ll 1$,
this additional term gives only small corrections to the unperturbed problem of \eqref{eq:H}, and we use matrix perturbation
theory to calculate these corrections in a polynomial expansion in $g$.
\begin{assumption}[Weak inhomogeneity in damping ratios] The deviations $\delta \gamma \, a_i$ of the damping ratios
$\gamma_i$ from their average $\gamma$ are all small compared to $\gamma$. We write
$\bm \Gamma = \gamma\big[\mathbb{1}+g\,{\rm diag}\big(\{a_i\}\big)\big]$, where $g\equiv\delta\gamma/\gamma \ll 1$
is a small, dimensionless parameter.\label{ass:gamma}
\end{assumption}
We want to integrate \eqref{eq:H1} using the spectral approach that provided the solutions
\eqref{eq:chiipm}. In principle this requires knowing the eigenvalues and eigenvectors of $\bm H$ in \eqref{eq:H1},
which is not possible in general, because $\bm V_2$ does not commute with $\bm \Lambda$.
When $g$ is small enough, the eigenvalues and eigenvectors are only slightly
altered~\cite{stewart1990matrix} and can be systematically calculated order
by order in a polynomial expansion in $g$. We therefore follow a perturbative approach which expresses solutions
to \eqref{eq:H1} in such a polynomial expansion in $g$. Formally, one has, for the eigenvalues $\mu_{\alpha s}$
and for the left and right eigenvectors $\bm t_{\alpha s}^{L,R}$ of $\bm H$
\begin{align}
\mu_{\alpha s}&=\sum_{m=0}^\infty g^m \, \mu_{\alpha s}^{(m)} \label{eq:mu_exp} \, , \\
\bm{t}_{\alpha s }^{L,R}&= \sum_{m=0}^\infty g^m
\, \bm{t}_{\alpha s}^{(m)L,R} \label{eq:TL_exp}\, ,
\end{align}
where the $m=0$ terms are given by the eigenvalues, $\mu_{\alpha s}^{(0)}$, and the left and right eigenvectors,
$\bm{t}_{\alpha s}^{(0)L,R}$, of the matrix $\bm H_0$ in \eqref{eq:H}, corresponding to a homogeneous damping ratio.
In order for the sums in \eqref{eq:mu_exp} and \eqref{eq:TL_exp} to converge, a necessary condition is that $g <1$.
The task is to calculate the terms $\mu_{\alpha s}^{(m)}$ and $\bm{t}_{\alpha s}^{(m)L,R}$ with $m=1,2,...$.
When $g \ll 1$, one expects that only a few low-order terms already give a good estimate
of the eigenvalues and eigenvectors of $\bm H$. In this manuscript, we calculate the first-order corrections, $m=1$.
They are given by formulas similar to \eqref{eq:lambda1} and \eqref{eq:u1},
\begin{align}
& g \, \mu_{\alpha s}^{(1)}= \bm{t}_{\alpha s}^{(0)L} \left[\!\begin{array}{cc}
\mathbb{0}_{N\times N}&\mathbb{0}_{N\times N}\\
\mathbb{0}_{N\times N}&- \delta \gamma \, \bm V_2
\end{array}\!\right] \bm{t}_{\alpha s}^{(0)R} \, , \\
&g \, \bm{t}_{\alpha s}^{(1)R}=\overline{\sum_{\beta,s'}} \; \frac{\bm{t}_{\beta s'}^{(0)L} \left[\!\begin{array}{cc}
\mathbb{0}_{N\times N}&\mathbb{0}_{N\times N}\\
\mathbb{0}_{N\times N}&- \delta \gamma \, \bm V_2
\end{array}\!\right] \bm{t}_{\alpha s}^{(0) R}}{\mu_{\alpha s}^{(0)}-\mu_{\beta s'}^{(0)}} \bm{t}_{\beta s'}^{(0) R} \, , \\
&g \, \bm{t}_{\alpha s}^{(1)L}=\overline{\sum_{\beta,s'}} \; \frac{\bm{t}_{\alpha s}^{(0)L} \left[\!\begin{array}{cc}
\mathbb{0}_{N\times N}&\mathbb{0}_{N\times N}\\
\mathbb{0}_{N\times N}&- \delta \gamma \, \bm V_2
\end{array}\!\right] \bm{t}_{\beta s'}^{(0) R}}{\mu_{\alpha s}^{(0)}-\mu_{\beta s'}^{(0)}} \bm{t}_{\beta s'}^{(0) L} \, ,
\end{align}
where
${\overline\sum}$ indicates that the sum runs over $(\beta,s') \ne (\alpha, s)$.
One obtains
\begin{align}
&g \, \mu_{\alpha s}^{(1)} = -\delta \gamma\Big(\frac{1}{2}+ i s \frac{\gamma}{2 f_\alpha }\Big)\bm{V}_{2;\alpha\alpha}\,, \!\!\label{eq:pertval} \\
&g \, \bm{t}_{\alpha s}^{(1)R}= 2 \, \delta \gamma \, \overline{\sum_{\beta,s'}} \frac{\bm{V}_{2;\alpha\beta} \, \mu_{\alpha s}^{(0)}}{f_\beta (s s' \, f_\alpha-f_\beta)} \, \bm{t}_{\beta s'}^{(0) R} \, ,
\label{eq:pertvec1}\\
&g \, \bm{t}_{\alpha s}^{(1)L}= 2 \, \delta \gamma \, \overline{\sum_{\beta,s'}} \frac{\bm{V}_{2;\alpha\beta} \, \mu_{\beta s'}^{(0)}}{f_\alpha (f_\alpha-s s' \, f_\beta)} \, \bm{t}_{\beta s'}^{(0) L} \, ,\label{eq:pertvec}
\end{align}
with $\bm{V}_{2;\alpha\beta}=\sum_{i}a_i\,u_{\alpha i} \,u_{\beta i}$.
\begin{remark}
By definition, $-1 \le \bm{V}_{2;\alpha\alpha} \le 1$. Therefore,
\eqref{eq:pertval} indicates, among other things,
that when the parameters $\{a_i\}$ are correlated (anticorrelated) with the square components
$\{u_{\alpha i}^2\}$ for some $\alpha$ then that mode is more strongly (more weakly) damped.
Accordingly, Theorem~\ref{thm:lin_opt_control} will distribute the set $\{a_i\}$ to increase the damping of the
slow modes of $\bm H$.
\end{remark}
\begin{proposition}
For an abrupt power loss,
$\bm{\delta P}(t) = \bm{\delta P} \, \Theta(t)$ on a single bus labeled $b$,
$\delta P_i=\delta_{ib} \, \delta P$,
and under Assumption \ref{ass:gamma}, $\dot\xi_\alpha(t)$ reads, to leading order
in $g$,
\begin{align}
\dot\xi_\alpha(t)=&\frac{\mathcal{P}_\alpha }{f_\alpha }e^{-\gamma t/2}\bigg[2s_\alpha \Big( 1+g \frac{\gamma^2}{f_\alpha^2} \bm{V}_{2; \alpha \alpha} \Big)\nonumber\\
& \;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;\;-g\gamma t\bm{V}_{2;\alpha\alpha}\Big(s_\alpha+\frac{\gamma}{f_\alpha }c_\alpha\Big)\bigg]\nonumber\\
&+g\gamma\sum_{\beta\neq \alpha}\frac{\bm{V}_{2;\alpha\beta}\mathcal{P}_\beta}{\lambda_\alpha -\lambda_\beta}e^{-\gamma t/2}\bigg[\frac{\gamma}{f_\beta}s_\beta-\frac{\gamma}{f_\alpha }s_\alpha+c_\alpha-c_\beta\bigg]\nonumber\\
&+\mathcal{O}(g^2)\,,\label{eq:perturbed_xii}
\end{align}\label{prop:perturbed_xii}
where $s_\alpha=\sin(f_\alpha t/2)$ and $c_\alpha=\cos(f_\alpha t/2)$, and $\mathcal{P}_\alpha$ and $f_\alpha$ are defined below
\eqref{eq:dchii}.
\end{proposition}
The proof is based on \eqref{eq:pertval} to \eqref{eq:pertvec} and is given in Appendix~\ref{sect:continuation}.
\begin{proposition}\label{prop:alpha}
For an abrupt power loss,
$\bm{\delta P}(t) = \bm{\delta P} \, \Theta(t)$ on a single bus labeled $b$,
$\delta P_i=\delta_{ib} \, \delta P$, and
under Assumption~\ref{ass:gamma}, the susceptibilities $\alpha_i \equiv \partial \mathcal{M}_b/\partial a_i$ are given by
\begin{eqnarray}\label{eq:alphai}
\alpha_i &=& - \frac{g\delta P^2}{2\gamma m_b} \Bigg[ \sum_{\alpha>1} \frac{u_{\alpha i}^{2}u_{\alpha b}^{2}}{\lambda_\alpha }
\nonumber \\
&& + \sum_{\substack{\alpha>1,\\\beta \ne \alpha}} \frac{
\, u_{\alpha i}\, u_{\alpha b} u_{\beta i}\, u_{\beta b} }
{(\lambda_\alpha-\lambda_\beta)^2 + 2 \gamma^2 (\lambda_\alpha+\lambda_\beta) }\Bigg] \, \qquad
\end{eqnarray}
\end{proposition}
\begin{IEEEproof}
From \eqref{eq:perturbed_xii}, to first order in $g$, one has
\begin{align}
&\intop_0^{\infty}\dot\xi_\alpha^2(t){\rm d}t=\frac{\mathcal{P}_\alpha ^2}{2\gamma\lambda_\alpha }\left(1-g \bm{V}_{2;\alpha\alpha} \right)\nonumber\\
&-g\gamma \sum_{\beta\neq \alpha}\frac{\bm{V}_{2;\alpha\beta} \, \mathcal{P}_\alpha\mathcal{P}_\beta}{(\lambda_\alpha -\lambda_\beta)^2+2\gamma^2(\lambda_\alpha +\lambda_\beta)}
+\mathcal{O}(g^2)\,.\label{eq:intop_xi}
\end{align}
Taking the derivative of \eqref{eq:intop_xi} with respect to $a_i$ with the definition of $\bm{V}_{2;\alpha\beta}$ given below
\eqref{eq:pertvec}, and summing over $\alpha >1$ one obtains \eqref{eq:alphai}.
\end{IEEEproof}
\begin{remark}
We have found numerically that the second term in \eqref{eq:alphai} is generally much smaller than the first one and gives only marginal corrections
to our optimized solution.
\end{remark}
\begin{remark}\label{rmk:alpha}
Close to the homogeneous case $\bm M=m\mathbb{1}$ and $\bm \Gamma=\gamma\mathbb{1}$, summing over all fault locations $b$ makes the second term in \eqref{eq:alphai} vanish. One gets $\sum_b\alpha_i = -g\delta P^2\sum_{\alpha>1}
u_{\alpha i}^{(0)2}/(2\gamma\lambda_\alpha ^{(0)})$. This follows from the orthogonality of the eigenvectors $\bm u_{\alpha}^{(0)}$.
\end{remark}
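To illustrate Eq.~\eqref{eq:alphai} and Remark~\ref{rmk:alpha}, the sketch below evaluates both contributions to $\alpha_i$ on a small chain network (Python/NumPy; reading the double sum as running over $\alpha,\beta>1$ with $\beta\neq\alpha$ is an assumption of this illustration). It verifies that the double-sum term cancels exactly once Eq.~\eqref{eq:alphai} is summed over all fault locations $b$, since $\sum_b u_{\alpha b}u_{\beta b}=0$ for $\alpha\neq\beta$.
\begin{verbatim}
import numpy as np

def alpha_terms(L, b, g, dP, gamma, m_b):
    # The two contributions to alpha_i = dM_b/da_i in Eq. (alphai), fault at bus b.
    N = L.shape[0]
    lam, U = np.linalg.eigh(L)
    pref = -g * dP**2 / (2 * gamma * m_b)
    t1 = sum(U[:, a]**2 * U[b, a]**2 / lam[a] for a in range(1, N))
    t2 = np.zeros(N)
    for a in range(1, N):
        for be in range(1, N):
            if be != a:
                t2 += (U[:, a] * U[b, a] * U[:, be] * U[b, be]
                       / ((lam[a] - lam[be])**2 + 2 * gamma**2 * (lam[a] + lam[be])))
    return pref * t1, pref * t2

# chain network of N buses with unit line weights (illustrative only)
N = 5
L = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
L[0, 0] = L[-1, -1] = 1.0

g, dP, gamma, m_b = 0.05, 1.0, 0.2, 1.0
sum_t2 = sum(alpha_terms(L, b, g, dP, gamma, m_b)[1] for b in range(N))
print(np.max(np.abs(sum_t2)))   # ~ 0: only the first term survives in the Remark
\end{verbatim}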
\section{Optimal placement of inertia and primary control}\label{section:optimal}
In general it is not possible to obtain closed-form
analytical expressions for the parameters $a_i$ and $r_i$ determining the
optimal placement of inertia and primary control. Simple optimization algorithms can however be constructed that
determine how to distribute these parameters to minimize $\mathcal{M}_b$. Theorems~\ref{thm:lin_opt_inertia} and
\ref{thm:lin_opt_control} give two such algorithms for optimization under Assumptions~\ref{ass:inertia} and \ref{ass:gamma},
respectively. Additionally, Conjecture~\ref{conj:opt} proposes an algorithm for optimization under both
Assumptions~\ref{ass:inertia} and \ref{ass:gamma}.
\begin{theorem}
\label{thm:lin_opt_inertia}
For an abrupt power loss, under Assumption \ref{ass:inertia} and with $\bm \Gamma=\gamma\mathbb{1}$, the optimal distribution
of parameters $\{r_i\}$ that minimizes $\mathcal{M}_b$ is obtained as follows.
\begin{enumerate}
\item{Compute the sensitivities
$\rho_i= \partial \mathcal{M}_b/\partial r_i$ from \eqref{eq:rho}.}
\item{Sort the set $\{\rho_i\}_{i=1, \ldots, N}$ in ascending order.}
\item{Set $r_i=1$ for the first ${\rm Int}[N/2]$ buses in this order, $i=1,\ldots,{\rm Int}[N/2]$, and $r_i=-1$ for the last ${\rm Int}[N/2]$, $i=N-{\rm Int}[N/2]+1,\ldots,N$.}
\end{enumerate}
The optimal placement of inertia and primary control is given by
\begin{align}
m_i=m+\delta m\, r_i\,,\;\;\;\;\;\;\;
d_i=\gamma(m+\delta m\, r_i)\,.
\end{align}
\end{theorem}
The proof is in Appendix~\ref{sect:continuation}.
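A minimal Python sketch of the three steps of Theorem~\ref{thm:lin_opt_inertia} is given below (illustrative only; the sensitivities $\rho_i$ would be computed from Eq.~\eqref{eq:rho}, and NumPy is assumed available).
\begin{verbatim}
import numpy as np

def place_inertia(rho):
    # Steps 1-3 of the Theorem: r_i = +1 on the Int[N/2] buses with the most
    # negative sensitivities, r_i = -1 on the Int[N/2] buses with the largest
    # ones; for odd N the median bus keeps r_i = 0, so that sum_i r_i = 0.
    N = len(rho)
    order = np.argsort(rho)                # ascending order (step 2)
    r = np.zeros(N)
    r[order[:N // 2]] = 1.0                # step 3
    r[order[N - N // 2:]] = -1.0
    return r

# the placement then reads m_i = m + dm*r_i and d_i = gamma*(m + dm*r_i)
\end{verbatim}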
\begin{theorem}
\label{thm:lin_opt_control}
For an abrupt power loss, under Assumption \ref{ass:gamma} and with $\bm M=m\mathbb{1}$, the optimal distribution of parameters
$\{a_i\}$ that minimizes $\mathcal{M}_b$ is obtained as follows.
\begin{enumerate}
\item{Compute the sensitivities}
$\alpha_i=\partial \mathcal{M}_b/\partial a_i$ from \eqref{eq:alphai}.
\item{Sort the set $\{\alpha_i\}$ in ascending order.}
\item{Set $a_i=1$ for $i=1,\ldots,{\rm Int}[N/2]$ and $a_i=-1$ for $i=N-{\rm Int}[N/2]+1,\ldots,N$.}
\end{enumerate}
The optimal placement of primary control is given by
\begin{equation}
d_i=m(\gamma+\delta\gamma\, a_i)\,.
\end{equation}
\end{theorem}
\begin{IEEEproof}
With Proposition~\ref{prop:alpha} and $\bm M=m\mathbb{1}$, we get \eqref{eq:alphai}. The proof is the same as the one for Theorem~\ref{thm:lin_opt_inertia} given in Appendix~\ref{sect:continuation}, but with $\{\alpha_i\}$ instead of $\{\rho_i\}$.
\end{IEEEproof}
We next conjecture an algorithm for a combined linear optimization treating simultaneously Assumptions~\ref{ass:inertia} and
\ref{ass:gamma}. The difficulty is that for fixed total inertia and damping, one must have $\sum_i m_i=N \, m $, $\sum_i d_i=N \,d$.
From \eqref{eq:di}, the second condition requires $\sum_i a_i r_i = 0$. This is a quadratic, nonconvex constraint, which makes the
problem nontrivial to solve. The following conjecture presents an algorithm that starts from the distribution
$\{a_i\}$ and $\{r_i\}$ from Theorems~\ref{thm:lin_opt_inertia} and \ref{thm:lin_opt_control} and orthogonalizes them
while trying to minimize the related increase in $\mathcal{M}_b$.
\begin{conjecture}[Combined linear optimization]
\label{conj:opt} For an abrupt power loss, under Assumptions~\ref{ass:inertia} and \ref{ass:gamma}, the optimal placement of a fixed
total amount of inertia $\sum_i m_i=mN$ and primary control $\sum_i d_i=dN$ that minimize $\mathcal{M}_b$ is obtained as
follows.
\begin{enumerate}
\item{Compute the parameters
$r_i$ and $a_i$ from Theorems \ref{thm:lin_opt_inertia} and \ref{thm:lin_opt_control}.}
\begin{enumerate}
\item{If $N$ is odd, align the zeros of $\{r_i\}$ and $\{a_i\}$.
Let $i_{r0}$ and $i_{a0}$ be the indices of these zeros. Their new common index is}
$$
i_{\rm align} = \underset{i}{\rm argmin}(r_i\rho_{i_{r0}}+a_i\alpha_{i_{a0}}-r_i\rho_{i}-a_i\alpha_{i})\,.
$$
Interchange the parameter values $r_{i_{r0}} \leftrightarrow r_{i_{\rm align}}$ and $a_{i_{a0}} \leftrightarrow a_{i_{\rm align}}$.
\item{If $N$ is even, do nothing}
\end{enumerate}
\item{If $n\equiv\sum_ir_ia_i=0$, the optimization is done.}
\item{Find the set $\mathcal{I}=\{i\, | \, {\rm sgn}(r_ia_i)={\rm sgn}(n) \}$. To reach $\sum_ir_ia_i \rightarrow 0$, our strategy is to
set to zero some of the parameters with index in $\mathcal{I}$. Since, however, $\sum_i a_i = \sum_i r_i =0$ must be conserved, this must
be accompanied by a simultaneous change of some other parameter.}
\item{Find the pair $(a_{i1},a_{i2}=-a_{i1})$ or $(r_{i1},r_{i2}=-r_{i1}) \in \mathcal{I} \times \mathcal{I}$
which, when sent to $(0,0)$, induces the smallest increase of the objective function $\mathcal{M}_b$.
Send it to $(0,0)$. Because the two elements of the pair have opposite signs, this does not affect the conditions $\sum_i a_i = \sum_i r_i =0$.}
\item{Go to step 2.}
\end{enumerate}
\end{conjecture}
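To make the steps of Conjecture~\ref{conj:opt} concrete, the sketch below implements steps 2--5 in Python for even $N$ (step 1(a) is omitted). The first-order change of $\mathcal{M}_b$ is approximated here by $\sum_i\rho_ir_i+\sum_i\alpha_ia_i$, which is an assumption of this illustration rather than part of the Conjecture.
\begin{verbatim}
import numpy as np

def orthogonalize(r, a, rho, alpha):
    # Steps 2-5: zero out opposite-signed pairs inside
    # I = {i : sgn(r_i a_i) = sgn(n)} until n = sum_i r_i a_i = 0, choosing at
    # each step the pair whose removal increases rho.r + alpha.a the least.
    r, a = r.astype(float).copy(), a.astype(float).copy()
    while not np.isclose(n := np.dot(r, a), 0.0):                    # step 2
        I = [i for i in range(len(r))
             if np.sign(r[i] * a[i]) == np.sign(n)]                  # step 3
        best, cost = None, np.inf
        for x, s, tag in ((a, alpha, "a"), (r, rho, "r")):           # step 4
            for i1 in I:
                for i2 in I:
                    if i1 < i2 and x[i1] != 0.0 and np.isclose(x[i1], -x[i2]):
                        c = -(s[i1] * x[i1] + s[i2] * x[i2])  # first-order increase
                        if c < cost:
                            best, cost = (tag, i1, i2), c
        if best is None:
            break                                  # no admissible pair is left
        tag, i1, i2 = best
        (a if tag == "a" else r)[[i1, i2]] = 0.0   # send the pair to (0, 0)
    return r, a                        # sum_i r_i = sum_i a_i = 0 is preserved
\end{verbatim}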
It is not guaranteed that the algorithm presented in Conjecture~\ref{conj:opt} is optimal; however, the numerical results
presented below indicate that it works well.
The optimization considered so far focused on a single fault at a given bus $b$. We are interested, however, in finding the optimal
distribution of inertia and/or primary control for all possible faults. To that end we introduce the following global
vulnerability measure
\begin{equation}
\mathcal{V}=\sum_b\eta_b \, \mathcal{M}_b(\delta P_b)\,,\label{eq:vulnerability}
\end{equation}
where the sum runs over all generator buses. The vulnerability measure $\mathcal{V}$
gives a weighted average over all possible fault positions, with the weight $\eta_b$
accounting for the probability that a fault occurs at $b$ and $\delta P_b$ accounting for its potential intensity as given, e.g. by
the rated power of the generator at bus $b$.
For equiprobable fault locations and the same power loss everywhere, $\eta_b\equiv1$, Remark~\ref{rmk:rho} implies that
$\partial \mathcal{V}/\partial r_i = 0+\mathcal{O}(\mu^2)$.
Therefore, to leading order, there is no benefit in increasing the inertia on any particular bus.
On the other hand, with Remark~\ref{rmk:alpha}, we get $\partial \mathcal{V}/\partial a_i = -g\delta P^2\sum_{\alpha>1}
u_{\alpha i}^{(0)2}/(2\gamma\lambda_\alpha ^{(0)})+\mathcal{O}(g^2)$. The corresponding optimal placement of primary control can
be obtained with Theorem~\ref{thm:lin_opt_control}, from which we observe that the damping ratios are increased for the buses
with large squared components $u_{\alpha i}^{(0)2}$ of the slow modes of $\bm L$ -- those with the smallest $\lambda_\alpha^{(0)}$. These modes are displayed in
Fig.~\ref{fig:slow_modes}. One concludes that, with the unweighted vulnerability measure, $\eta_b\equiv1$
in \eqref{eq:vulnerability}, a homogeneous inertia distribution is a local optimum for $\mathcal{V}$, for which damping parameters
need to be increased primarily on peripheral buses.
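Since $\sum_{\alpha>1}\bm u_\alpha^{(0)}\bm u_\alpha^{(0)\top}/\lambda_\alpha^{(0)}$ is the Moore--Penrose pseudoinverse $\bm L^{+}$ of the Laplacian of a connected grid, the buses on which primary control should be reinforced can be read off directly from its diagonal, as in the short illustrative Python sketch below (it assumes $\bm L$ is available as a NumPy array).
\begin{verbatim}
import numpy as np

def damping_upgrade_ranking(L):
    # dV/da_i is proportional to -sum_{alpha>1} u_{alpha i}^2 / lambda_alpha,
    # i.e. to minus the i-th diagonal entry of the pseudoinverse of L; buses
    # with the largest entries are those that receive a_i = +1.
    w = np.diag(np.linalg.pinv(L))
    return np.argsort(w)[::-1]   # bus indices, largest slow-mode weight first
\end{verbatim}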
\begin{figure}[h!]
\center
\includegraphics[width=0.15\textwidth]{fiedler2.pdf}
\includegraphics[width=0.15\textwidth]{fiedler3.pdf}
\includegraphics[width=0.15\textwidth]{fiedler4.pdf}\\
\includegraphics[width=0.15\textwidth]{fiedler5.pdf}
\includegraphics[width=0.15\textwidth]{fiedler6.pdf}
\includegraphics[width=0.15\textwidth]{fiedler7.pdf}\\
\includegraphics[width=0.27\textwidth]{label2.pdf}\\
\caption{Color-coded components of the $\alpha=2,3,...7$ eigenvectors of $\bm L$. The colors
span the interval $[-u_{\max},u_{\max}]$ where $u_{\max}=\max_{\alpha\in\{2,\cdots,7\}}\big|u_{\alpha i}^{(0)}\big|$.}\label{fig:slow_modes}
\end{figure}
\section{Numerical investigations}\label{section:applications}
We illustrate our main results on a model of the synchronous power grid of continental Europe. The network
has 3809 nodes, among them 618 generators, connected through 4944 lines. For details of the model and its
construction we refer the reader to~\cite{pagnier2019inertia,tyloo2018key}.
To connect to the theory presented above, we remove inertialess buses through
a Kron reduction \cite{dorfler2013kron} and uniformize the distribution of inertia to $m_i=29.22$~MWs$^2$
and of primary control to $d_i=12.25$~MWs. This guarantees that the total amounts of inertia and primary control
are kept at their initial levels.
\begin{figure}[h!]
\center
\includegraphics[width=0.24\textwidth]{vul_r_1.pdf}
\includegraphics[width=0.24\textwidth]{vul_a_1.pdf}\\
\includegraphics[width=0.24\textwidth]{vul_r_quad.pdf}
\includegraphics[width=0.24\textwidth]{vul_a_quad.pdf}\\
\includegraphics[width=0.24\textwidth]{vul_r_thres.pdf}
\includegraphics[width=0.24\textwidth]{vul_a_thres.pdf}\\
\vspace{5pt}
\includegraphics[width=0.47\textwidth]{vulnerability.pdf}
\caption{Deviation from homogeneous inertia and primary control following the minimization of $\mathcal{V}$ in \eqref{eq:vulnerability} for different choices of the weights: (a)-(b) $\eta_b\equiv 1$, (c)-(d) $\eta_b=\mathcal{M}_b^{(0)2}$ and (e)-(f) as defined in \eqref{eq:thres}. $r_i=-1,0,1$ (left) and $a_i=-1,0,1$ (right) are displayed in red, white and blue respectively. (g) Vulnerability $\mathcal{M}_b$ vs. fault location (in increasing order of $\mathcal{M}_b$) for the homogeneous model (black) and the optimized models corresponding to (a)-(b) (green line), (c)-(d) (blue line) and (e)-(f) (red line). The purple line shows the best reduction that can be achieved by optimizing $r_i$ and $a_i$ fault by fault.
The inset highlights the small discrepancies induced by the choice of $\eta_b$ for the faults
with largest impact.}\label{fig:vulnerability}
\end{figure}
At the end of the previous section
we argued that a homogeneous distribution of inertia, together with primary control
increased on the slowest eigenmodes of the network Laplacian, minimizes
the global vulnerability measure $\mathcal{V}$ of \eqref{eq:vulnerability}
for $\eta_b\equiv1$. This conclusion is confirmed numerically in Fig.~\ref{fig:vulnerability} (a) and (b).
The optimal placement of primary control displayed in panel (b) decreases $\mathcal{V}$ by more than 12\% with respect to the homogeneous case.
Setting $\eta_b\equiv 1$ in \eqref{eq:vulnerability} is convenient mathematically; however, it treats
all faults equally, regardless of their impact. One may instead adapt $\eta_b$ to obtain inertia
and primary control distributions that reduce the impact of the strongest faults with largest
$\mathcal{M}_b$. We do this in two different ways, first with
$\eta_b=\mathcal{M}_b^{(0)2}$ and second with
\begin{equation}
\eta_b = \left\{\begin{array}{l}1\,,\text{ if $\mathcal{M}_b^{(0)}>\mathcal{M}_{\rm thres}$,}\\
0\,,\text{ otherwise.}\end{array}\right. \label{eq:thres}
\end{equation}
The corresponding geographical distributions of the inertia and primary control redistribution parameters
$r_i$ and $a_i$ are shown in Fig.~\ref{fig:vulnerability} (c) and (d) and
Fig.~\ref{fig:vulnerability} (e) and (f) respectively. Compared to the choice $\eta_b\equiv1$ [Fig.~\ref{fig:vulnerability} (a) and (b)], we see
rather small differences. More importantly, the impact of various choices of $\eta_b$ on
$\mathcal{M}_b$ is almost negligible, as can be seen in Fig.~\ref{fig:vulnerability} (g). In all cases,
our optimization algorithm reduces first and foremost the impact of the strongest faults, with little
or no influence on the faults that have little impact on grid stability.
It is finally interesting to note that our three choices of $\eta_b$ are close to being optimal, especially
when considering the strongest faults. This can be seen in Fig.~\ref{fig:vulnerability} (g), where the
purple line shows the maximal obtainable reduction, when inertia and primary control distributions
are optimized individually fault by fault, i.e. with a different redistribution for each fault. The inset
of Fig.~\ref{fig:vulnerability} (g) shows in particular that for the strongest fault, the three considered choices of $\eta_b$ lead to reductions in $\mathcal{M}_b$ that are very close to the maximal
one. We conclude that rather generically, inertia is optimally distributed homogeneously, while
primary control should be preferentially located on the slow modes of the grid Laplacian.
\section{Conclusion}\label{section:conclusion}
Finding the optimal placement of inertia and primary control in electric power grids, where these
resources are limited, is a problem of paramount importance. Here, we have made what we think is an
important step forward in constructing a perturbative analytical approach to this problem. In this
approach, both inertia and primary control are limited resources, as they should be. Most importantly,
our method goes beyond the commonly made assumption of a constant inertia-to-damping ratio. In our approach, inertia and primary control can vary spatially independently of one another.
Our results suggest that the optimal inertia distribution is close to
homogeneous over the whole grid, but that primary control should be reinforced on buses
located on the support of the slower modes of the network Laplacian. Further work should
try to extend the approach to the next order in perturbation theory. Work along those lines is in
progress.
\section*{Acknowledgment}
This work was supported by the Swiss National Science Foundation under grants PYAPP2\_154275
and 200020\_182050.
\appendices
\section{}\label{sect:continuation}
\begin{IEEEproof}[Proof of Proposition~\ref{prop:perturbed_xii}]
The proof follows the same steps as for Proposition \ref{proposition1}. The calculations are rather tedious, though algebraically straightforward. In the following, we sketch the
calculational steps. Assuming that $\bm H$ can be diagonalized as $\bm{t}^{R}\,\bm \mu \,\bm{t}^{L}$, where $\bm \mu \equiv {\rm diag}(\{\mu_{\alpha s}\})$ and $\bm{t}^{R,L}$ are matrices containing the right and left eigenvectors of $\bm H$, the problem is solved by:\\
1) changing variables $\bm \chi\equiv \bm t^{L}[\bm \xi^\top \dot{\bm \xi}^\top]^\top$ to diagonalize \eqref{eq:H1}, as
\begin{equation}
\bm{\dot\chi}=
\bm{\mu}\bm\chi+\bm{t}^L
\left[\!\!\begin{array}{c}
\mathbb{0}_{N\times 1}\\
\bm{\mathcal{P}}
\end{array}\!\!\right]\equiv\bm\mu\bm\chi+\bm{\tilde \mathcal{P}}\,;\label{eq:unperturbed_ev}
\end{equation}
2) solving \eqref{eq:unperturbed_ev} as
\begin{equation}
\chi_{\alpha\pm}=-\frac{\tilde{\mathcal{P}}_{\alpha\pm}}{\mu_{\alpha\pm}}\Big(1-e^{\mu_{\alpha\pm}t}\Big)\,,\;\forall\alpha>1\,;\label{eq:chi_pm}
\end{equation}
3) obtaining $\dot\xi_\alpha$ via the inverse transformation $[\bm\xi^\top \bm{\dot\xi}^\top]^\top=\bm t^R\bm\chi$.\\
These three steps are carried out with the approximate expressions $\bm t^{R,L}=\bm t^ {R,L(0)}+g\bm t^{R,L(1)}$ and $\mu_{\alpha\pm}=\mu_{\alpha\pm}^{(0)}+g\mu_{\alpha\pm}^{(1)}$ obtained with the first order in $g$ corrections presented in \eqref{eq:pertval}--\eqref{eq:pertvec}. One gets
\begin{align}
&\!\!\left[\!\!\!\begin{array}{c}\xi_\alpha\\\dot\xi_\alpha\end{array}\!\!\!\right]=\!
\left[\!\begin{array}{cc}1 & 1\\\mu_{\alpha+}^{(0)} & \!\!\!\!\mu_{\alpha-}^{(0)}\end{array}\!\!\right]\!\!\left[\!\!\!\begin{array}{c} \chi_{\alpha+} \\ \chi_{\alpha-}\end{array}\!\!\!\right]
-\frac{g\gamma\bm{V}_{2;\alpha\alpha}}{f_\alpha ^2}\left[\!\begin{array}{cc}\mu_{\alpha+}^{(0)}&\!\!\!\mu_{\alpha-}^{(0)}\\\lambda_\alpha &\!\!\!\lambda_\alpha \end{array}\!\right]\!
\!\!\left[\!\!\!\begin{array}{c} \chi_{\alpha+}^{(0)} \\ \chi_{\alpha-}^{(0)}\end{array}\!\!\!\right]\nonumber\\
&-g\gamma\sum_{\beta\neq \alpha}\frac{\bm{V}_{2;\alpha\beta}}{\lambda_\alpha -\lambda_\beta}
\left[\!\begin{array}{cc} \mu_{\beta+}^{(0)} & \mu_{\beta-}^{(0)} \\ \mu_{\beta+}^{(0)2} & \mu_{\beta-}^{(0)2} \end{array}\!\right]\!\!
\left[\!\!\!\begin{array}{c} \chi_{\beta+}^{(0)} \\ \chi_{\beta-}^{(0)}\end{array}\!\!\!\right]+\mathcal{O}(g^2)\,,\label{eq:xi_exp}
\end{align}
with
\begin{align}
\chi_{\alpha\pm}&=-\frac{1}{\mu_{\alpha\pm}^{(0)}}\bigg[\tilde{\mathcal{P}}_{\alpha\pm}^{(0)}+g\tilde{\mathcal{P}}_{\alpha\pm}^{(1)}-g\frac{\mu_{\alpha\pm}^{(1)}\tilde{\mathcal{P}}_{\alpha\pm}^{(0)}}{\mu_{\alpha\pm}^{(0)} }\bigg]\Big(1-e^{\mu_{\alpha\pm}^{(0)}t}\Big)\nonumber\\
&+gt\frac{\mu_{\alpha\pm}^{(1)}\tilde{\mathcal{P}}_{\alpha\pm}^{(0)}}{\mu_{\alpha\pm}^{(0)}} e^{\mu_{\alpha\pm}^{(0)}t}+\mathcal{O}(g^2)\,,\label{eq:chi_exp}
\end{align}
where
\begin{align}
\left[\!\!\!\begin{array}{c}\tilde{\mathcal{P}}_{\alpha+}^{(0)}\\ \tilde{\mathcal{P}}_{\alpha-}^{(0)}\end{array}\!\!\!\right]&=\frac{i}{f_\alpha }
\!\left[\!\!\!\begin{array}{cc}\mu_{\alpha-}^{(0)}& \!\!\!\!-1\\-\mu_{\alpha+}^{(0)}\!\! & 1\end{array}\!\!\!\right]\!
\left[\!\!\begin{array}{c} 0 \\ \mathcal{P}_\alpha \end{array}\!\!\right]\,,\nonumber\\
\left[\!\!\!\begin{array}{c}\tilde{\mathcal{P}}_{\alpha+}^{(1)}\\ \tilde{\mathcal{P}}_{\alpha-}^{(1)}\end{array}\!\!\!\right]
&=\frac{i\gamma}{f_\alpha}
\bigg(-\frac{\bm{V}_{2;\alpha\alpha}}{f_\alpha^2}\left[\!\begin{array}{cc}\lambda_\alpha &\!\!\!\!\!\!-\mu_{\alpha-}^{(0)}\\\!\!\!-\lambda_\alpha &\mu_{\alpha+}^{(0)}\end{array}\!\!\!\right]\!\!
\left[\!\!\begin{array}{c} 0 \\ \mathcal{P}_\alpha \end{array}\!\!\right]\nonumber\\
&+\sum_{\beta\neq \alpha}\frac{\bm{V}_{2;\alpha\beta}}{(\lambda_\alpha -\lambda_\beta)}\!\!
\left[\!\begin{array}{cc}\lambda_\beta &\!\!\!-\mu_{\alpha+}^{(0)}\\\!-\lambda_\beta &\mu_{\alpha-}^{(0)}\end{array}\!\!\!\right]\!\!\left[\!\begin{array}{c} 0 \\ \mathcal{P}_\beta\end{array}\!\right]\bigg)\,.\label{eq:P}
\end{align}
Equation \eqref{eq:perturbed_xii} is obtained from \eqref{eq:xi_exp} by applying trigonometric identities.
\end{IEEEproof}
\begin{IEEEproof}[Proof of Theorem~\ref{thm:lin_opt_inertia}] To leading order in $\mu=\delta m/m$, this optimization problem is equivalent to the following linear programming problem \cite{bertsimas1997introduction}
\begin{align}
&\min_{\{r_i\}}\sum_i\rho_ir_i\,,\\
\text{s.t.}&\;|r_i|\le 1\,,\\
&\sum_i r_i=0\,.
\end{align}
It is solved by the Lagrange multipliers method, with the Lagrangian function
\begin{equation}
\mathcal{L}=\sum_{i=1}^N\rho_ir_i+\sum_{i=1}^N\varepsilon_i(r_i^2-1)+\varepsilon_0\sum_{i=1}^Nr_i\,,
\end{equation}
where $\varepsilon_i$ and $\varepsilon_0$ are Lagrange multipliers. We get
\begin{equation}
\frac{\partial\mathcal{L}}{\partial r_i}=\rho_i+2\varepsilon_ir_i+\varepsilon_0=0\,,\;\forall i\,.\label{eq:lagrange}
\end{equation}
The solution must satisfy the Karush-Kuhn-Tucker (KKT) conditions \cite{bertsimas1997introduction}, in particular the complementary slackness (CS) condition, which imposes that either $\varepsilon_i=0$ or $r_i=\pm1\,,\;\forall i$. The former choice generally leads to a contradiction. From \eqref{eq:lagrange} and the dual feasibility condition, one gets
\begin{equation}
\varepsilon_i=-\frac{\varepsilon_0+\rho_i}{2r_i}\ge 0\,.\label{eq:dual}
\end{equation}
This imposes that $r_i=-{\rm sgn}(\varepsilon_0+\rho_i)$. To ensure that $\sum_ir_i=0$ is satisfied, $\varepsilon_0$ is set to minus the median value of $\rho_i$. If the number of buses $N$ is odd, the $r_i$ corresponding to the median value of $\rho_i$ is set to zero.
\end{IEEEproof}
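The median rule derived in this proof can be cross-checked against a generic solver. The sketch below (Python; SciPy is assumed available, and random values stand in for the actual sensitivities $\rho_i$) compares the linear-programming solution with the median rule.
\begin{verbatim}
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
rho = rng.normal(size=7)                          # example sensitivities, N odd
N = len(rho)

# LP of the proof:  min rho.r   s.t.  -1 <= r_i <= 1  and  sum_i r_i = 0
res = linprog(c=rho, A_eq=np.ones((1, N)), b_eq=[0.0],
              bounds=[(-1.0, 1.0)] * N, method="highs")

r_median = -np.sign(rho - np.median(rho))         # median rule; median entry -> 0
print(np.allclose(res.x, r_median, atol=1e-6))    # expected True for distinct rho_i
\end{verbatim}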
Solid-state electronic spins coupled to microwave (MW) cavities have been extensively studied for realizing various coherent interfaces for quantum information applications. This is due to the possibility of fast manipulation and their high degree of tunability \cite{morton2018storing}, yet achieving long coherence times with electronic spins remains challenging due to their magnetic sensitivity. Efficient and coherent spin-MW interfaces have been developed for various solid-state electronic spin systems, such as nitrogen-vacancy centers in diamond~\cite{Kubo2010,Zhu2011,Amsuss2011,Ranjan2013,Putz2014}, rare-earth ion doped crystals~\cite{Bushev2011,Probst2013,Chen2016,farina2021coherent,King2021}, phosphorus donors in silicon~\cite{Zollitsch2015,Weichselbaumer2020} and ferri- and ferromagnetic magnons~\cite{Huebl2013,Tabuchi2015,Abdurakhimov2015,everts2020ultrastrong}. Furthermore, electronic spin systems that simultaneously couple coherently to light allow reversible coupling between optical and MW modes, which is a key feature of optical quantum memories \cite{Heshami2016,Rakonjac2021,Ortu2022,Businger2020,Jin2022} and MW-optical quantum transducers~\cite{Williamson2014,Blum2015,Fernandez2015,Hisatomi2016,bartholomew2020chip}.
In this work, we explore coherent optical and MW coupling in the context of optical quantum memories, based on rare-earth (RE) ion doped crystals. An open challenge on this platform is to realize a broadband ($\geq$~100 MHz) memory while storing the optical photon as a long-lived spin excitation. Broadband operation requires large hyperfine splittings, which can be achieved with electronic spin systems \cite{Businger2020}, i.e. with Kramers RE ions. These, however, are prone to magnetically induced dephasing due to their large moments. Yet, long electronic spin coherence times can be reached by combinations of moderate to large magnetic fields, sub-Kelvin temperatures and/or ppb-level RE concentrations \cite{Kindem2018o,Rancic2018,Dantec2021}, reaching up to 23 ms in an Er$^{3+}$:CaWO$_4$ crystal at 0.7 ppb concentration at 10 mK \cite{Dantec2021}. Another approach is to exploit zero first-order Zeeman (ZEFOZ) transitions (i.e. clock transitions) that appear at zero magnetic field in Kramers ions having anisotropic hyperfine interactions \cite{Ortu2018,Rakonjac2020,Kindem2020,Chiossi2022}, which can be reached without superconducting coils and with conventional cryocoolers working at 2.5-4 K. Zero-field ZEFOZ transitions have yielded Hahn-echo spin coherence times of up to 2.5 ms on the 655 MHz transition in \ybiso{} \cite{welinski2020coherence} at 10 ppm doping concentration and 3 K. In the MW regime, a Hahn-echo coherence time of 35~$\mu$s was obtained on the 3.369~GHz transition in the optically excited state of \ybi:YVO$_4$ at zero field \cite{bartholomew2020chip}. An important step is to simultaneously demonstrate long spin coherence times in the MW regime $>$1 GHz at the ZEFOZ condition of the ground state, efficient MW manipulation of a spin ensemble, and optical coupling to the same ensemble. In addition, in order to preserve the optical depth for efficient optical quantum memory operation, the coupling must be homogeneous over the length of the crystal.
\begin{figure*}[t!]
\includegraphics[width=0.95\linewidth]{Fig1}
\centering
\caption{a) Energy level diagram for the optical $^2$F$_{7/2}(0) \leftrightarrow ^2$F$_{5/2}(0)${} transition of site II in \ybiso{}, with relevant optical and MW transition used in the experiments. b) Picture of the loop-gap resonator and the crystal illustrating the common volume excited by the laser $\vec{E}$ and microwave $\vec{B}_{\text{MW}}$ fields, aligned along the D$_2$ and $b$ crystal axes, respectively. c) MW transmission spectra recorded through the resonator at 5~K and 300~K. d) Rabi oscillation time sequence, see text for details, with an example of laser power transmission showing the Rabi oscillations. e) MW Hahn echo time sequence, see text for details. The inset shows an example of the optical RHS echo signal, detected through balanced heterodyne detection at a beat frequency of 3 MHz.}
\label{fig:resonator}
\end{figure*}
Here, we employ a lumped-element resonator in order to achieve simultaneous optical-MW coupling over a 1-cm long Y$_2$SiO$_5${} crystal. The resonator was tuned to a zero-field ZEFOZ ground-state transition of the \ybi{} ion at 2.497 GHz. The enhanced MW field gives a 0.56~MHz Rabi frequency, and Hahn-echo measurements resulted in an electronic spin coherence time of 10 ms, with a \ybi{} concentration of 2 ppm and a temperature of 3.4 K. With small magnetic fields ($<$1~mT), our results provide evidence of strong superhyperfine coupling to neighbouring yttrium ions (mostly $^{89}$Y$^{3+}$ ions, which have a 100\% abundance) \cite{Car2018}, suggesting that superhyperfine-induced collapse of the spin echo signal plays an important role in the dephasing, as recently observed with photon echoes in Er-doped Y$_2$SiO$_5${} \cite{Car2020}. This effect can, however, be suppressed at certain angles of the magnetic field and at the ZEFOZ point. Our results show the possibility of operating an optical quantum memory using any of the MW transitions available in \ybiso{}, over a sample volume compatible with crystals employed for quantum memories.
\section*{Results}
\subsection{Cavity-enhanced spin-microwave interaction}
In this work, we study \ybi{} ions occupying site II of the Y$_2$SiO$_5${} crystal, with the relevant optical transition $^2$F$_{7/2}(0) \leftrightarrow ^2$F$_{5/2}(0)${} at a wavelength of 978.854 nm \cite{Welinski2016}. The \ybi{} doping concentration is 2 ppm, with an isotopic purity of $\approx 95\%$, and the crystal is cut along the D$_1$, D$_2$ and $b$ axes \cite{Li1992}. The two electronic states have four nondegenerate hyperfine states at zero magnetic field, see Fig. \ref{fig:resonator}(a), due to the highly anisotropic hyperfine interaction in Y$_2$SiO$_5${} \cite{Tiranov2018}. Previous zero-field spin coherence measurements focused on the low-frequency transitions at 528 and 655 MHz \cite{Ortu2018, welinski2020coherence}, which can be rather efficiently excited by a solenoid coiled around the crystal \cite{Ortu2018,Businger2020}. It is interesting, however, to be able to address any MW transition, as other so-called $\Lambda$-systems involving the high-frequency MW transitions could have interesting features for optical quantum memories and transducers. For instance, the electronic spin component of the hyperfine states results in a polarization dependence of the relative transition strengths of the optical-hyperfine transitions (see Supplementary Information), which can be exploited to maximize the coupling strengths of both optical transitions in a $\Lambda$-system. This is in contrast to non-Kramers ions possessing only nuclear hyperfine states, such as Eu$^{3+}$ \cite{Lauritzen2012} and Pr$^{3+}$, where the relative strengths are typically polarization-independent, which severely restricts the number of useful $\Lambda$-systems. Here, we specifically focus on the $\ket{2_g}$-$\ket{4_g}$ transition at $\omega_{\mathrm{MW}}=2496.55$ MHz, because calculations show a particularly low magnetic sensitivity for small bias fields (see Supplementary Information).
To drive the high-frequency MW transition, we designed a loop-gap resonator based on the work by Angerer et al.~\cite{Angerer2016}, which enables efficient and homogeneous driving of the spins over the entire 1-cm length of the Y$_2$SiO$_5${} crystal. In short, the resonator consists of bow-tie type elements, see Fig. \ref{fig:resonator}(b), resulting in a lumped element LC-type electrical circuit where the resonance frequency can be tuned by adjusting the distance between the bow ties and the lid. The design of Ref.~\cite{Angerer2016} was modified by adding optical access through two small holes, allowing optical and MW excitation of a common mode, as shown in Fig. \ref{fig:resonator}(b). The crystal was placed in between the bow ties, with the crystal $b$-axis along the optical beam. The crystal downshifted the resonance frequency by about 60-80~MHz, due to the anisotropic dielectric permittivity of the crystal~\cite{Carvalho2015}.
In Fig. \ref{fig:resonator}(c), we show transmission spectra acquired with a vector network analyzer (VNA), at temperatures of 300 K and 5 K. Cooling down the resonator typically increased the frequency by $\approx$10 MHz, which was consistent enough between cool downs such that tuning at room temperature was possible. The transmission linewidth is 4.5~MHz, corresponding to a quality factor of $Q = 555$. The resonator was made out of standard copper, without mirror polishing, lowering the $Q$-factor in comparison to Ref.~\cite{Angerer2016}. However, in this context, the strong coupling regime is not the goal. We use the resonator for better impedance matching at high MW frequencies and for producing a homogeneous oscillating magnetic field $B_{\text{MW}}$ in the crystal volume. For the following measurements, the crystal and resonator were cooled down to about 3.4 K.
The coherent MW driving was measured through optical detection of MW Rabi oscillations. A continuous probe laser tuned to the $\omega_{41}$ transition pumps spins into the $\ket{2_g}$ state, increasing the population difference between the $\ket{4_g}$ and $\ket{2_g}$ states. The transmitted power of the probe laser is recorded on a photodiode, effectively measuring the population in $\ket{4_g}$. The strong $\omega_{41}$ transition allows a high contrast measurement of the population. An amplified MW pulse at 2496.55 MHz was sent to the resonator, reaching a power of 43~dBm at the MW input of the cryostat. As shown in Fig. \ref{fig:resonator}(d), the MW field induces clear Rabi oscillations between the $\ket{2_g}$ and the $\ket{4_g}$ states, reaching a frequency of $2\pi\times 560$ kHz. Note that the Rabi frequency is comparable to the inhomogeneous spin broadening, which is about 680 kHz.
The spin coherence time can be measured using a Hahn echo sequence on the MW transition, which can be optically detected using Raman heterodyne scattering (RHS) \cite{Mlynek1983,Ortu2018,everts2020ultrastrong}, see Fig. \ref{fig:resonator}(e). A pulsed probe laser tuned to the $\omega_{21}$ transition first polarizes spins into the $\ket{4_g}$ state. Then the MW Hahn echo sequence is applied, consisting of a 0.42-$\mu$s long $\pi/2$-pulse and a 0.84-$\mu$s long $\pi$-pulse, separated by a time $\tau$. The MW echo is detected by applying another probe pulse at the moment of the echo, which produces a RHS signal on the strong $\omega_{41}$ transition. For an increased sensitivity, the RHS signal field amplitude is detected through a balanced heterodyne detection with a local oscillator (LO) detuned by 3~MHz from the RHS signal, see Fig. \ref{fig:resonator}(e). The spin echo results will be discussed in the following section.
\subsection{Low-field spin properties of \ybiso{}}
The $C_1$ point symmetry of the doping sites in Y$_2$SiO$_5${} results in an effective electronic spin $S=1/2$ of the Yb$^{3+}$ ion, and the isotope \ybi{} has a nuclear spin $I = 1/2$. The effective spin Hamiltonian can then be decomposed into a hyperfine and an electronic Zeeman part \cite{Tiranov2018} (neglecting the weak nuclear Zeeman effect), for the ground ($g$) and excited ($e$) electronic levels, respectively,
\begin{equation}
\label{eq:Heff}
\boldsymbol{\mathcal{H}} = \mathbf{S} \cdot \mathbf{A}_{g,e} \cdot \mathbf{I} + \mathbf{B} \cdot \boldsymbol{\mu}_{g,e}.
\end{equation}
\noindent The electronic spin $\mathbf{S}$ and the nuclear spin $\mathbf{I}$ components are coupled through the hyperfine tensor $\mathbf{A}_{g,e}$. The magnetic dipole moment is $\boldsymbol{\mu}_{g,e} = \mu_\text{B} \mathbf{g}_{g,e} \cdot \mathbf{S}$, where $\mathbf{g}_{g,e}$ is the Zeeman tensor and $\mu_\text{B}$ the Bohr magneton. In a $C_1$ point symmetry, $A_x \neq A_y \neq A_z$, which completely lifts the degeneracy at $B=0$ and results in four hyperfine eigenstates $\ket{k_i}$ ($k=1$ to $4$) in each electronic level ($i = g,e$), see Fig. \ref{fig:resonator}(a). At $B=0$, the hyperfine states are completely hybridized in their electronic and nuclear components, such that $\expval{\mathbf{S}} =\expval{\mathbf{I}} = 0$. As a consequence, the expectation value of the dipole moment is zero, $\expval{\boldsymbol{\mu}_{g,e}} = 0$, and to first order there is no linear Zeeman effect in either electronic level. The zero first-order Zeeman (ZEFOZ) effect at $B=0$ leads to a strong increase in both optical and spin coherence times \cite{Ortu2018}. The zero effective magnetic dipole moment also quenches superhyperfine interactions to first order (see Supplementary Information), thereby quenching the electron spin echo envelope modulation (ESEEM)
\cite{rowan1965electron,car2018selective,lovric2011hyperfine}, as first observed experimentally in Ref. \cite{Ortu2018}.
\begin{figure*}[t!]
\includegraphics[width=\linewidth]{Fig2.pdf}
\caption{a) Map of the relative Hahn echo amplitude normalized to the maximum amplitude for a range of magnetic fields in the D$_1$-D$_2$ plane, with the pulse separation fixed at $\tau=5$ ms. b) Numerically computed $S_1$ gradient for the 2.497 GHz transition in the D$_1$-D$_2$ plane, as a function of the angle $\varphi$ to the D$_1$ axis. The solid lines in a) and c) show the expected minimum at $\varphi = 55.9^{\circ}$, while the dashed lines show the estimated angle of maximum echo amplitude, at an angle of $\varphi = 48^{\circ}$. They agree within the expected error of the field direction and of the cuts of the crystal surfaces. c) Similar to the plot in a), but showing data taken at a higher resolution in the field step. The color-coded crosses and circles show the field values at which the echo decays shown in e) and Fig. \ref{fig:scanD1_ESEEM}(b)-(c) were recorded, respectively. d) Map of the echo decays as a function of time delay $\tau$, for a range of fields along $B_{\text{D}_1}$ and for fixed $B_{\text{D}_2} = -155~\mu$T. The dashed line shows the yttrium Larmor period. Examples of decay curves from the map are shown in e), from top to bottom $B_{\text{D}_1} = 0, -66, -106, -202, -278 ~\mu$T, indicated with crosses in panel c).}
\label{fig:scanD1D2}
\end{figure*}
When a weak magnetic field is applied, the states $\ket{k}$ are mixed through the Zeeman interaction (note that in the following we have omitted the index $i=g,e$ indicating the electronic level). The weakly perturbed states $\ket{l}$ result in a non-zero expectation value $\bra{l} \mathbf{S} \ket{l} \neq 0$, such that each state acquires a weak magnetic dipole moment $ \bra{l} \boldsymbol{\mu} \ket{l} \neq 0$. If the $\mathbf{A}$ and $\mathbf{g}$ tensors are aligned, a good approximation here \cite{Tiranov2018}, then a first-order perturbation calculation shows (see Supplementary Information) that the energy $E_l$ of the perturbed state $\ket{l}$ has a quadratic Zeeman effect, with an associated field-gradient $dE_l/dB_m = \mu_\text{B} g_m \bra{l} S_m \ket{l} $, where $m=x,y,z$ is the direction of the weak magnetic field. It can further be shown that the effective dipole moment component along $m$, for state $\ket{l}$, is given by $\bra{l} S_m \ket{l} = (1/2) \mu_\text{B} B_m g_m / (E_l - E_{l'})$, where $\ket{l'}$ is the only other hyperfine state such that $\bra{l'} S_m \ket{l} \neq 0$. Note that the zero-field energies $E_l$ can easily be calculated from the $\mathbf{A}$ tensor elements \cite{Tiranov2018}. The direction-averaged, first-order, transition frequency gradient $S_1$ is calculated by taking the difference of the energy gradients of the connected states.
The calculations emphasize the direct connection between the first-order dipole moment and the energy gradient when a weak field is applied, as expected for a classical dipole. Both are zero to first order at the ZEFOZ point, but can also be strongly quenched when applying a weak field, by tuning the field direction to minimize the numerator term $g_m^2$ and to maximize the denominator term $E_l - E_{l'}$, as shown experimentally in Ref. \cite{Ortu2018}. In the following, we will see how both effects change the observed spin dephasing time, and the strength of superhyperfine interaction and the ESEEM effect.
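To make the zero-field level structure and the quadratic Zeeman effect of Eq.~\eqref{eq:Heff} concrete, the Python sketch below builds the $4\times4$ effective Hamiltonian for $S=I=1/2$ and diagonalizes it. The principal values of $\mathbf{A}$ and $\mathbf{g}$ used here are placeholders chosen only for illustration (the measured site-II tensors are given in Ref.~\cite{Tiranov2018}), the two tensors are assumed diagonal in a common frame, and the nuclear Zeeman term is neglected.
\begin{verbatim}
import numpy as np

# spin-1/2 operators (units of hbar) and the 2x2 identity
sx, sy, sz = [0.5 * np.array(p, dtype=complex) for p in
              ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
id2 = np.eye(2)

def H_eff(A, g, B, muB=13.996):
    # Eq. (Heff) for S = I = 1/2, A and g assumed diagonal in a common frame;
    # A in GHz, B in tesla, muB/h = 13.996 GHz/T.
    S = [np.kron(s, id2) for s in (sx, sy, sz)]   # electronic spin x nuclear identity
    I = [np.kron(id2, s) for s in (sx, sy, sz)]
    return (sum(A[k] * S[k] @ I[k] for k in range(3))
            + muB * sum(g[k] * B[k] * S[k] for k in range(3)))

A = np.array([0.5, 1.5, 3.0])      # GHz -- illustrative placeholder values only
g = np.array([0.5, 1.5, 6.0])      # illustrative placeholder values only
E0 = np.linalg.eigvalsh(H_eff(A, g, np.zeros(3)))
print("zero-field splittings (GHz):", np.diff(E0))

# ZEFOZ behaviour: the lowest-to-highest transition shifts only quadratically
for Bz in (1e-4, 2e-4):            # tesla
    E = np.linalg.eigvalsh(H_eff(A, g, np.array([0.0, 0.0, Bz])))
    print(Bz, (E[3] - E[0]) - (E0[3] - E0[0]))   # shift ~ quadruples as Bz doubles
\end{verbatim}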
\subsection{Spin echo measurements}
The Hahn echo signal amplitude was measured as a function of magnetic field in the D$_1$-D$_2$ plane, i.e. with $B_{\text{D}_1}$ and $B_{\text{D}_2}$ components, for a fixed pulse separation of $\tau=5$~ms, see Fig. \ref{fig:scanD1D2}(a). The echo signal is strongest in the vicinity of zero applied field, as expected for the ZEFOZ point, but also remains strong along a particular line. The intense line corresponds well to the minimum $S_1$ gradient in the D$_1$-D$_2$ plane, expected at an angle of $\varphi = 55.9^{\circ}$ to the D$_1$ axis (see Fig. \ref{fig:scanD1D2}(b) and Supplementary Information). In addition, one can observe a distinct ESEEM oscillation pattern, which is a sign of dipole-dipole coupling to neighbouring $^{89}$Y$^{3+}$ ions (i.e. superhyperfine interaction).
\begin{figure*}[t!]
\includegraphics[width=0.95\textwidth, height=3.5cm]{Fig3}
\caption{Map of the echo decays as a function of time delay $\tau$, for a range of fields along $B_{\text{D}_1}$ and for fixed $B_{\text{D}_2} = -220~\mu$T. The white dashed lines show multiples of the yttrium Larmor period, $n T_\text{Y}$, where $n=1,2,3$. b) The echo decay obtained at $B_{\text{D}_1} = -248~\mu$T shows clear ESEEM peaks at the multiples of $T_\text{Y}$ (vertical dashed line). c) The echo decay at $B_{\text{D}_1} = -51~\mu$T shows no discernible modulation. The magnetic fields for the two decays in b) and c) are indicated as horizontal dashed lines in a), and with circles in Fig. \ref{fig:scanD1D2}(c).}
\label{fig:scanD1_ESEEM}
\end{figure*}
A map with finer resolution in the field was also recorded around the point of highest echo amplitude, for $\tau=5$~ms, see Fig. \ref{fig:scanD1D2}(c). A symmetric pattern appears around a field offset of around $B_{\text{D}_2} = -155~\mu$T along the D$_2$ axis. Such offsets have been observed in all our past echo measurements in \ybiso{} \cite{Ortu2018,Businger2020}, and it varies depending on the cryostat and electromagnetic environment, showing that it is produced externally from the crystal. In general, these slowly varying background fields can make it challenging to experimentally find and maintain the ZEFOZ point. In the following, we assume that the true zero field, i.e. the ZEFOZ point, is located close to $B_{\text{D}_1} = 0$ and $B_{\text{D}_2} = -155~\mu$T.
The strong ESEEM pattern seen in Fig. \ref{fig:scanD1D2}(c) indicates that the superhyperfine coupling contributes significantly to the effective spin dephasing mechanism away from the ZEFOZ point. To investigate this further, we recorded Hahn echo decay curves as a function of $\tau$, for a varying field $B_{\text{D}_1}$ along the D$_1$ axis. A constant field of $B_{\text{D}_2} = -155~\mu$T was applied in the D$_2$ axis, to compensate for the observed lab bias field. The data is presented as a 2D map in Fig. \ref{fig:scanD1D2}(d), and a few examples of decay curves are shown in Fig. \ref{fig:scanD1D2}(e). For the highest fields along D$_1$, there is a distinct ESEEM modulation appearing, with an almost complete collapse of the echo signal before the first ESEEM revival. This shows that the superhyperfine coupling plays a major role in the observed spin dephasing, as was also recently observed with photon echoes in Er-doped Y$_2$SiO$_5${} \cite{Car2020}. Our measurements also suggests that coupling to $^{89}$Y$^{3+}$ ions is dominating, as the time of the first ESEEM revival closely follows the Lamor period of the yttrium spin $T_\text{Y} = 1/(B \gamma_\text{Y})$, where $\gamma_\text{Y} = 2.095$~MHz/T, see Fig. \ref{fig:scanD1D2}(d). Note that in the model, the field amplitude $B$ is taken to be the field applied along D$_1$, i.e. $B = B_{\text{D}_1}$, as the D$_2$ field only compensates the lab bias field.
The ESEEM oscillation seen in Figs \ref{fig:scanD1D2}(d)-(e) increases in period and decreases in amplitude, as the $B$ field approaches the ZEFOZ condition at the estimated zero-field point. The increased period is clearly due to the yttrium Larmor frequency going to zero, while we attribute the decrease in amplitude to the quenching of the effective \ybi{} dipole moment in the ZEFOZ point. Note that this is a very effective method for finding the true $B=0$ point experimentally, by pushing the oscillation to longer periods until its amplitude vanishes. However, as both the Larmor frequency and the first-order effective dipole moment goes to zero, it is not possible to discriminate the two effects while approaching $B=0$.
A clearer way of showing how the ESEEM oscillation can be quenched by minimizing the effective dipole moment is to approach the line of minimum $S_1$ gradient, while applying a constant offset field along D$_2$ such that the yttrium Larmor frequency is strictly non-zero for all measurements. In Fig. \ref{fig:scanD1_ESEEM}(a), we show a 2D map of Hahn echo decays, as a function of fields applied along the D$_1$ axis. The D$_2$ field was held constant at $B_{\text{D}_2} = -220~\mu$T, meaning there was a constant offset of $-65~\mu$T along D$_2$ from the estimated true $B=0$ point. The data shows up to three clear ESEEM oscillations for the highest fields along D$_1$, see the example in Fig. \ref{fig:scanD1_ESEEM}(b). However, as the D$_1$ field is tuned towards the line of minimum dipole moment, the oscillations completely vanish, although the Larmor frequency is non-zero at all times. For the field of $B_{\text{D}_1} = -51~\mu$T, which is exactly on the line of minimum gradient (cf. Fig. \ref{fig:scanD1D2}(c)), the echo amplitude displays an exponential decay without oscillations, see Fig. \ref{fig:scanD1_ESEEM}(c). The dependence of the period of oscillation on the field magnitude $B$ follows closely the Larmor frequency, shown by the theoretical lines based on the model field amplitude $B = \sqrt{ (65~\mu \text{T})^2 + B_{\text{D}_1}^2}$, see Fig. \ref{fig:scanD1_ESEEM}(a). The quenching of the ESEEM modulation at a non-zero field magnitude unambiguously demonstrates that the effective dipole moment is field dependent in \ybi{} and that it can be minimized at specific orientations. This could provide an interesting tool for studying superhyperfine interactions in detail.
\begin{figure*}[t!]
\centering
\includegraphics[width=0.9\textwidth]{Fig4}
\caption{a) Hahn echo decays for fields $B_b = 18, 42, 85, 230~\mu$T along the $b$-axis, with a constant offset field $B_{\text{D}_2} = -155~\mu \text{T}$ to compensate for the lab bias field along D$_2$. Solid lines show fits to a stretched exponential. b) Spin coherence times measured along the $b$ axis. The solid line shows a simple empirical model (see text for details). c) The longest Hahn echo decay was obtained by carefully adjusting the bias magnetic field around the estimated ZEFOZ point, while optimizing the echo amplitude at a long delay, resulting in a spin coherence time of $T_2 = (10.0 \pm 0.4)~\text{ms}$.}
\label{fig:scanb}
\end{figure*}
The Hahn echo decay was also measured while applying a varying field in the $b$ axis, starting from the estimated ZEFOZ point (i.e. with $B_{\text{D}_2} = -155~\mu \text{T}$ constant). Along this axis, no oscillations appear for up to $B_b = 300~\mu \text{T}$, as shown in Fig. \ref{fig:scanb}(a). However, the timescale of the decay is similar as measured along the D$_1$ axis, indicating that the effective spin dephasing time is also governed by superhyperfine coupling along the $b$-axis. The spin decay curves were fitted to a stretched exponential $E(\tau) = E(0) \exp(-(2\tau/T_2)^m)$, where $m$ is the Mims factor, see Fig. \ref{fig:scanb}(a).
The spin coherence time decreases as the $B_b$ is increased, as shown in Fig. \ref{fig:scanb}(b), following a $1/B_b$ dependence at higher fields, as already observed in Ref. \cite{Ortu2018}. The coherence time data fits well to a simple empirical formula $T_2(B) = 1/(1/T_2(0)+\pi \kappa \abs{B-B_0})$, where $T_2(0)$ is the estimated zero-field coherence time, $\kappa$ the field dependent homogeneous linewidth, and $B_0$ the offset bias field along the b-axis. The fit to data yields $T_2(0) = (10.3 \pm 0.2)~\text{ms}$, $\kappa = (1.48 \pm 0.04) \text{MHz/T}$ and $B_0 = (14.1 \pm 0.4)~\mu \text{T}$. Similar values were found when fitting only the first collapses before the first ESEEM revival of the D$_1$ scan shown in Fig. \ref{fig:scanD1D2}(d) and (e) (see Supplementary Information). Two aspects stand out in this simple analysis, which is the universal $1/B$ dependence (also observed in Ref. \cite{Ortu2018}) and a field-dependent homogeneous linewidth $\kappa$ that is rather close to the yttrium gyromagnetic ratio $\gamma_\text{Y}$. As it stands, we do not have a detailed model to explain these observations, but we observe that the screened superhyperfine decay model in Ref. \cite{Car2020} cannot be applied here since the dipole moment is proportional to $B$ (see Supplementary Information). In any case, it appears clearly that a model of the spin dephasing times close to zero field must include superhyperfine interactions with yttrium ions.
Finally, we briefly discuss the longest spin coherence times obtained in this study.
As observed in our dataset,
magnetic bias fields of the order of a few $\mu \text{T}$ affect the Hahn echo decay. Exponential functions were fitted to the longest decays obtained for the D$_1$ and $b$ axis scans, see Figs.~\ref{fig:scanD1D2}(e) and \ref{fig:scanb}(a), from which we obtained $T_2 = (9.6 \pm 0.8)~\text{ms}$ and $T_2 = (8.5 \pm 0.6)~\text{ms}$, respectively. In a separate measurement, where we finely adjusted the bias field around the expected ZEFOZ point, we obtained $T_2 = (10.0 \pm 0.4)~\text{ms}$ as shown in Fig. \ref{fig:scanb}(c). This can be compared to the $T_2 = (6\pm 1)~\text{ms}$ obtained in a parallel study by Alexander \textit{et al.} \cite{Alexander2022}
in a Y$_2$SiO$_5${} crystal doped with 5 ppm of \ybi{}. In this context, it is also worth noting that we measured the optical coherence time in both crystals, and found a similar relative difference, with $(0.610\pm 0.050)~\text{ms}$ and $(1.05\pm 0.130)~\text{ms}$ for the 5 and 2 ppm doping, respectively (see Supplementary Information).
\section{Conclusions and Discussion}
The lumped-element resonator provides a method for coherent driving of both optical and MW transitions over a large sample volume, giving us access to all the higher MW spin transitions for quantum memory schemes \cite{Businger2020}. The polarization-dependent optical transition strengths between the hyperfine levels open up many possible $\Lambda$-schemes, particularly involving the high-frequency spin transitions such as the 1.841 GHz and 3.025 GHz transitions (see Supplementary Information), allowing one to optimize the memory scheme in terms of other parameters, such as the maximum memory bandwidth. Integrating the lumped-element resonator in a spin-wave quantum memory experiment, cf. Ref. \cite{Businger2020}, is an important next step.
The experiments presented in this work show that superhyperfine-induced ESEEM oscillations of the Hahn echo signal have a strong influence on the dephasing dynamics at low fields. We have also shown that the effective magnetic dipole moment is field-orientation dependent at low fields and can be quenched at the zero-field ZEFOZ point, or when the $S_1$ gradient is very low. These experimental observations are also well-supported by our calculations. There remain many open and interesting questions, however, such as the influence of a stochastic time-dependent spectral diffusion process \cite{Lange2010}, which has been successfully applied to europium-doped Y$_2$SiO$_5${} \cite{Holzaepfel2020,Ortu2022}. The simple empirical model to explain the $B$-field dependence of the coherence time includes a zero-field $T_2(0)$, which could be a sign that the observed zero-field coherence time is dominated by spectral diffusion. In addition, one would expect to see more distinct ESEEM oscillations at longer delays, as seen in Ref. \cite{car2020superhyperfine}, which would also suggest another mechanism at long time scales. Since the effective magnetic dipole moment can be tuned by the strength and orientation of the magnetic field, we believe that \ybiso{} provides a particularly interesting system for further studies of these dephasing mechanisms.
\section{Methods}
The magnetic fields along the D$_1$ and D$_2$ axes were generated by two pairs of coils placed outside the cryostat. A superconducting coil placed inside the cryostat generated the field along the $b$-axis. The magnetic fields were calibrated by measuring the Zeeman frequency shifts they induced and comparing these to the shifts calculated with the model presented in \cite{Tiranov2018}.
To increase the signal of the RHS echo, the spin echo sequence is preceded by a population preparation step, in order to increase the population difference between states $\ket{4_g}$ and $\ket{2_g}$. To do so, the laser is first tuned to the $\ket{2_g}$-$\ket{1_e}$ transition to decrease the population in the $\ket{2_g}$ state. Then, the frequency of the laser is scanned to address optical transitions involving the $\ket{1_g}$, $\ket{2_g}$ and $\ket{3_g}$ states, thereby pumping ions into the state $\ket{4_g}$.
To drive the $\ket{2_g}$-$\ket{4_g}$ MW-transition with high power, a standard wifi amplifier was used (Sunhans SH24Gi20W).
\section*{DATA AVAILABILITY}
The datasets shown in the figures of this article can be accessed from the Zenodo data repository 10.5281/zenodo.7064041.
\section*{ACKNOWLEDGEMENTS}
We acknowledge funding from the Swiss National Science Foundation (SNSF) through project 197168, the French Agence Nationale de la Recherche through project MIRESPIN (Grant No. ANR-19-CE47- 0011) and support from DGA.
\section*{AUTHOR CONTRIBUTIONS}
A.T. and T.S.M. designed and made initial tests of the loop-gap resonator. The crystal growth and initial optical spectroscopy were done by E.L.H., A.F. and P.G. All the optically detected spin resonance measurements were carried out by L.N., with support from M.B. and T.S.M. L.N. did all the data reduction and analysis, with support from M.A. The theoretical superhyperfine calculations were done by M.A. and T.C. The manuscript was mainly written by L.N. and M.A., with contributions from all the authors. M.A. provided overall oversight of the project.
\section*{COMPETING INTERESTS}
The authors declare no competing interests.
\section{Introduction}
Lattice QCD with exact chiral symmetry is an ideal theoretical framework to study
the nonperturbative physics from the first principles of QCD.
However, it is rather nontrivial to perform Monte Carlo simulations
such that the chiral symmetry is preserved to a very high precision
and all topological sectors are sampled ergodically.
Since 2009, the Taiwan Lattice QCD Collaboration (TWQCD) has been using a
GPU cluster (currently consisting of 250 NVIDIA GPUs)
which attains 40 Teraflops (sustained) to simulate unquenched lattice QCD
with the optimal domain-wall quarks \cite{Chiu:2002ir, Chiu:2009wh}.
We have realized our goal of preserving the chiral symmetry
to a good precision (with $ m_{res} \sim 0.3 $ MeV) and also sampling
all topological sectors ergodically.
In this paper, we present our first results of the mass and the decay constant
of the pseudoscalar meson in two flavors QCD, and compare our results with
the next-to-leading order (NLO) chiral perturbation theory (ChPT).
We find that our data are in good agreement with NLO ChPT for $ M_\pi $ less than
450 MeV, from which we determine
the low-energy constants $ f $, $ \Sigma $, $ \bar{l}_3 $, and $ \bar{l}_4 $, as well as
the average up and down quark mass $m_{ud}^{\overline{\rm MS}}(\mathrm{2~GeV})$.
Our result of the topological susceptibility is presented in Ref. \cite{Hsieh:2010ab},
and our strategy of using GPU to speed up our Hybrid Monte Carlo simulations
is presented in Ref. \cite{Chiu:2010gp}.
\section{Hybrid Monte Carlo Simulation with Optimal Domain-Wall Quarks}
The optimal domain-wall fermion
is the theoretical framework which preserves the
(mathematically) maximal chiral symmetry for any finite $N_s$ (the length of the fifth dimension).
Thus the artifacts due to the chiral symmetry breaking with finite $ N_s $
can be reduced to the minimum.
The action of the optimal domain-wall fermion is defined as \cite{Chiu:2002ir}
\bea
\label{eq:ODWF}
S_\mathrm{odwf}
= \sum_{s,s'=1}^{N_s} \sum_{x,x'}
\bar\psi_{xs} \left[ (\omega_s D_w + 1)_{xx'} \delta_{ss'}
+(\omega_s D_w - 1)_{xx'} L_{ss'} \right] \psi_{x's'}
\equiv \bar\Psi \Dodwf \Psi,
\eea
where the weights $ \{ \omega_s \} $ along the fifth dimension are
fixed according to the formula derived in Ref. \cite{Chiu:2002ir}
such that the maximal chiral symmetry is attained.
Here $D_w$ denotes the standard Wilson-Dirac operator plus a negative parameter $-m_0\; (0 < m_0 < 2)$,
\begin{equation}
(D_w)_{xx'} = -\frac{1}{2} \sum_{\mu} \left[
(1-\gamma_\mu)U_\mu(x)\delta_{x+\hat{\mu},x'}
+(1+\gamma_\mu)U^\dagger_\mu(x')\delta_{x-\hat{\mu},x'} \right]
+ (4 - m_0),
\end{equation}
and
\begin{equation}
L = P_+ L_+ + P_- L_-, \quad P_\pm = (1\pm \gamma_5)/2,
\end{equation}
\begin{equation}
(L_+)_{ss'} = \left\{
\begin{array}{ll} \delta_{s-1,s'}, & 1 < s \leq N_s \\
-(m_q/2m_0) \delta_{N_s,s'}, & s = 1 \end{array}\right.;
\quad\quad L_-=(L_+)^T,
\end{equation}
where $ m_q $ denotes the bare quark mass.
Separating the even and the odd sites on the 4D space-time lattice,
$ \Dodwf $ can be written as
\begin{equation}
\Dodwf(m_q)=
S_1^{-1}
\begin{pmatrix}
1 & M_5 D_w^{\text{EO}} \\
M_5 D_w^{\text{OE}} & 1
\end{pmatrix}
S_2^{-1}
=S_1^{-1}
\begin{pmatrix}
1 & 0 \\
M_5 D_w^{\text{OE}} & 1
\end{pmatrix}
\begin{pmatrix}
1 & 0 \\
0 & C
\end{pmatrix}
\begin{pmatrix}
1 & M_5 D_w^{\text{EO}} \\
0 & 1
\end{pmatrix}
S_2^{-1},
\label{eq:D_odwf_decomp}
\end{equation}
where
\begin{equation}
\label{eq:m5}
M_5\equiv \left[(4-m_0) + \sqrt{\omega}^{-1}(1-L)(1+L)^{-1}\sqrt{\omega}^{-1}\right]^{-1},
\quad (\omega)_{ss'} = \omega_s \delta_{ss'},
\end{equation}
\begin{equation}
S_1\equiv M_5 \sqrt{\omega}^{-1}, \quad S_2\equiv (1 + L)^{-1} \sqrt{\omega}^{-1},
\end{equation}
and the Schur decomposition has been used in the last equality of (\ref{eq:D_odwf_decomp}),
with the Schur complement
\begin{equation}
\label{eq:c_def}
C = 1 - M_5 D_w^{\text{OE}} M_5 D_w^{\text{EO}}.
\end{equation}
Since $ \det\Dodwf = \det S_1^{-1} \cdot \det C \cdot \det S_2^{-1} $, and
$ S_1 $ and $ S_2 $ do not depend on the gauge field, we can just use $ C $
for the HMC simulation. After including the Pauli-Villars fields (with $ m_q = 2 m_0 $), the pseudo-fermion
action for 2-flavor QCD ($ m_u = m_d $) can be written as
\bea
\label{eq:Spf}
S_{pf} = \phi^\dagger C_{PV}^\dagger ( C C^\dagger)^{-1} C_{PV} \phi, \quad C_{PV} \equiv C(2m_0).
\eea
In the HMC simulation, we first generate a random noise vector $ \xi $ with Gaussian distribution,
then we obtain $ \phi = C_{PV}^{-1} C \xi $ using the conjugate gradient (CG) method.
With fixed $ \phi $, the system is evolved with a fictitious Hamiltonian dynamics,
the so-called molecular dynamics (MD). In the MD, we use the Omelyan integrator \cite{Takaishi:2005tz},
and the Sexton-Weingarten multiple-time scale method \cite{Sexton:1992nu}.
The most time-consuming part in the MD is to compute the vector $ \eta = (C C^\dagger)^{-1} C_{PV} \phi $
with CG, which is required for the evaluation of the fermion force in the equation
of motion of the conjugate momentum of the gauge field. Thus we program the GPU
to compute $ \eta $, using CG with mixed precision \cite{Chiu:2010gp}.
Also, we have ported the computation of the gauge force and the update of the gauge field to the GPU.
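A minimal sketch of such a mixed-precision solver (a defect-correction scheme in which the inner CG runs in single precision and the outer residual is accumulated in double precision) is given below in Python, with a small dense positive-definite matrix standing in for $ C C^\dagger $; the production code instead applies the ODWF operator on the GPU \cite{Chiu:2010gp}.
\begin{verbatim}
import numpy as np

def cg(A, b, tol=1e-4, maxiter=500, dtype=np.float32):
    # plain conjugate gradient, run entirely in 'dtype'
    A = A.astype(dtype); b = b.astype(dtype)
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = float(r @ r)
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / float(p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = float(r @ r)
        if np.sqrt(rs_new) < tol * np.linalg.norm(b):
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def mixed_precision_cg(A, b, tol=1e-12, max_outer=50):
    # outer defect-correction loop in double precision
    x = np.zeros_like(b)
    for _ in range(max_outer):
        r = b - A @ x                      # high-precision residual
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        x += cg(A, r).astype(np.float64)   # low-precision correction
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((200, 200))
A = M @ M.T + 200 * np.eye(200)            # stand-in for C C^dagger
b = rng.standard_normal(200)
x = mixed_precision_cg(A, b)
print(np.linalg.norm(A @ x - b) / np.linalg.norm(b))
\end{verbatim}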
Furthermore, we introduce an extra heavy fermion field with mass $ m_H $ ($ m_q \ll m_H < 2 m_0 $),
similar to the case of the Wilson fermion \cite{Hasenbusch:2001ne}.
For two flavors QCD, the pseudofermion action becomes
\bea
S_{pf}^H = \phi^{\dagger} C_H^{\dagger} \frac{1}{CC^{\dagger}} C_H \phi +
\phi_H^{\dagger} C_{PV}^{\dagger} \frac{1}{C_H C_H^{\dagger}} C_{PV} \phi_H, \quad C_H \equiv C(m_H),
\eea
which gives exactly the same fermion determinant of (\ref{eq:Spf}).
However, the presence of the heavy fermion field plays a crucial role in reducing
the light fermion force and its fluctuation, thus diminishing the change of the Hamiltonian
along the MD trajectory and enhancing the acceptance rate.
For a system with CPU and GPU, we can have both of them compute concurrently, e.g.,
while the GPU is working on the CG of the light quark field, the CPU can compute the
fermion force of the heavy fermion field. This asynchronous concurrent execution mode
enhances the overall performance by $\sim 5\% $.
A detailed description of our HMC simulations will be
presented in a forthcoming paper \cite{Chiu:HMC}.
\begin{figure}[htb]
\centerline{\includegraphics[width=90mm,clip=true]{Qt_history.eps}}
\caption{
\label{fig:Qt_history}
The topological charge versus the trajectory in the HMC simulation of two flavors QCD with ODWF.
The lattice is $ 16^3 \times 32 $ with the spatial box $ \sim (\mbox{2 fm})^3 $, and the quark mass
corresponding to $ M_\pi \sim 300 $ MeV. The topological charge is obtained by projecting the zero modes
of the overlap Dirac operator.}
\end{figure}
\section{Lattice setup}
We simulate two flavors ($N_f=2$) QCD on the $16^3 \times 32$
lattice at the lattice spacing $a \sim $ 0.11~fm,
for eight sea quark masses in the range $ m_q a =0.01, 0.02, \cdots, 0.08 $.
For the gluon part, we use the plaquette action at $\beta$ = 5.90.
For the quark part, we use the optimal domain-wall fermion with $ N_s = 16 $.
After discarding 300 trajectories for thermalization, we accumulated about
$ 3000-3200 $ trajectories in total for each sea quark mass.
From the saturation of the error (by binning) of the plaquette, as well as
the evolution of the topological charge (see Fig.~\ref{fig:Qt_history}),
we estimate the autocorrelation time to be $\sim 10 $ trajectories.
Thus we sample one configuration every 10 trajectories.
Then we have $ 270-290 $ configurations for each sea quark mass.
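The binning procedure used for this estimate can be illustrated with a short Python sketch; it is not the analysis code of this work, and a synthetic AR(1) series with a built-in autocorrelation time of 10 trajectories plays the role of the plaquette history.
\begin{verbatim}
import numpy as np

def binned_error(series, bin_size):
    # standard error of the mean from non-overlapping bins
    n_bins = len(series) // bin_size
    bins = series[:n_bins * bin_size].reshape(n_bins, bin_size).mean(axis=1)
    return bins.std(ddof=1) / np.sqrt(n_bins)

rng = np.random.default_rng(0)
tau, n = 10.0, 3000               # autocorrelation time, number of trajectories
rho = np.exp(-1.0 / tau)
x = np.empty(n)
x[0] = rng.standard_normal()
for t in range(1, n):
    x[t] = rho * x[t - 1] + np.sqrt(1.0 - rho**2) * rng.standard_normal()

for b in (1, 2, 5, 10, 20, 50, 100):
    print(b, binned_error(x, b))
# the naive (bin size 1) error is underestimated; the binned error grows
# and saturates once the bin size exceeds a few autocorrelation times
\end{verbatim}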
We determine the lattice spacing from the heavy quark potential with the Sommer parameter $ r_0 = 0.49 $ fm.
The inverse lattice spacing versus the quark mass is plotted in Fig.~\ref{fig:aimq}.
Using the linear fit, we obtain the inverse lattice spacing in the chiral limit,
$ a^{-1} = 1.8153(28)$ GeV.
\begin{figure}[htb]
\centerline{\includegraphics[width=90mm,clip=true]{aimq.eps}}
\caption{
\label{fig:aimq}
The inverse lattice spacing $ a^{-1} $~[GeV] versus $ m_q a $ for two flavors QCD with ODWF.}
\end{figure}
For each configuration, we calculate the exact zero modes plus
80 conjugate pairs of the lowest-lying eigenmodes of the overlap Dirac operator.
We outline our procedures as follows.
First, we project 240 low-lying eigenmodes of $ H_w^2 $ using the $\nu$-TRLan
algorithm \cite{nu-TRLan}, where each eigenmode has a residual less than $ 10^{-12} $.
Then we approximate the sign function of the overlap operator
by the Zolotarev optimal rational approximation with 64 poles,
where the coefficients are fixed with $ \lambda_{max}^2 = (6.4)^2 $,
and $ \lambda_{min}^2 $ equal to the maximum of
the 240 projected eigenvalues of $ H_w^2 $.
Then the sign function error is less than $ 10^{-14} $.
Using the 240 low-modes of $ H_w^2 $ and the Zolotarev approximation
with 64 poles, we project the zero modes plus 80 conjugate pairs of
the lowest-lying eigenmodes of the overlap operator
with the $\nu$-TRLan algorithm,
where each eigenmode has a residual less than $ 10^{-12} $.
We measure the chiral symmetry breaking (due to finite $N_s$) by computing the residual mass
\bea
\label{eq:mres}
m_{res} \equiv \left< \frac{\sum_x \left< J_5(x,N_s) \bar q(0) \gamma_5 q(0) \right>}
{\sum_x \left< \bar q(x) \gamma_5 q(x) \bar q(0) \gamma_5 q(0) \right>} \right>_{\{U\}}
= \left< \frac{ \tr(D_c + m_q)^{-1}_{0,0} }{ \tr[(D_c^\dagger + m_q)(D_c+m_q)]^{-1}_{0,0} } \right>_{\{U\}} - m_q,
\eea
where $ (D_c + m_q)^{-1} $ is the valence quark propagator with $ m_q $ equal to the mass of the sea quark,
tr denotes the trace running over the color and Dirac indices, and the subscript $ \{U\} $ denotes averaging
over an ensemble of gauge configurations. It turns out that, after averaging over an ensemble of a few
hundred independent gauge configurations, $ m_{res} $ is insensitive to the location of
the origin $ x^\mu = (0, 0, 0, 0) $. Thus (\ref{eq:mres}) gives a reliable measure of
chiral symmetry breaking due to finite $ N_s $.
The derivation of (\ref{eq:mres}) will be given in a forthcoming paper \cite{Chen:2011}.
In Fig. \ref{fig:mres}, we plot the residual mass versus the quark mass.
Using the power-law fit, we obtain the residual mass in the chiral limit,
$ m_{res} a = 0.00018(2) $, which amounts to $ m_{res} = 0.32(4) $~MeV.
Note that the value of $ m_{res} $ is less than 1/10 of the statistical
and systematic errors of the inverse lattice spacing,
thus confirming that the chiral symmetry has been preserved
to a good precision in our simulation.
\begin{figure}[htb]
\centerline{\includegraphics[width=100mm,clip=true]{mres.eps}}
\caption{
\label{fig:mres}
The residual mass versus the quark mass for two flavors QCD with ODWF.}
\end{figure}
\section{The Mass and the Decay Constant of the Pseudoscalar Meson}
In this section, we present our first results of the pseudoscalar mass and decay constant,
for 2 flavors QCD with optimal domain-wall quarks and the plaquette gluon action at
$ \beta = 5.90 $, on the $ 16^3 \times 32 \times 16 $ lattice.
In Fig. \ref{fig:mpi2omq_fpi_b590_nf2}, we plot $ M_\pi^2 /m_q $ and $ f_\pi $ versus $ m_q $
respectively. Here we have made the correction for the finite volume effect
using the estimate within ChPT calculated up
to $ {\cal O}(M_\pi^4/(4\pi f_\pi)^2 ) $ \cite{Colangelo:2005gd},
since our simulation is done on a finite volume
lattice with $ M_\pi L \sim 2.0 $ for the lightest sea quark, and
its finite volume effect cannot be neglected.
Taking into account the correlation between $ M_\pi^2/m_q $ and $ f_\pi $ for the same sea quark mass,
we fit our data to the formulas of the next-to-leading order (NLO) chiral perturbation theory (ChPT) \cite{Gasser:1984gg}
\bea
\label{eq:mpi2omq_NLO_Nf2}
\frac{M_\pi^2}{m_q} &=& 2 B \left[ 1
+ \left(\frac{2 B m_q }{16 \pi^2 f^2}\right) \ln\left(\frac{2 B m_q}{\Lambda_3^2} \right) \right], \quad B \equiv \frac{2 \Sigma}{f^2} \\
\label{eq:fpi_NLO_Nf2}
f_\pi &=& f \left[ 1 - \left(\frac{4 B m_q}{16 \pi^2 f^2 } \right) \ln \left( \frac{2 B m_q}{\Lambda_4^2} \right) \right],
\eea
where $ \Lambda_i $ are related to the low energy constants $ \bar l_i $
\bea
\bar l_3 = \ln \left( \frac{\Lambda_3^2}{m_{\pi^{\pm}}^2} \right), \quad
\bar l_4 = \ln \left( \frac{\Lambda_4^2}{m_{\pi^{\pm}}^2} \right), \quad m_\pi^{\pm} = 0.140 \mbox{ GeV}.
\eea
For the six lightest quark masses (corresponding to pion masses in the range $210-445$ MeV),
our fit gives
\bea
\label{eq:Sfl3l4}
\Sigma = 0.2105(30) \mbox{ GeV}, \quad
f = 0.127(2) \mbox{ GeV}, \quad
\bar l_3 = 4.37(18), \quad
\bar l_4 = 5.31(11),
\eea
with $ \chi^2$/dof = 0.4.
At the physical pion mass $ M_\pi \simeq 0.135 $ GeV,
the value of pion decay constant is $ f_\pi = 0.133(1) $ GeV,
and the bare quark mass is $ 0.0069(2) $ GeV.
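For reference, the NLO expressions (\ref{eq:mpi2omq_NLO_Nf2}) and (\ref{eq:fpi_NLO_Nf2}) with the central values of the fit (\ref{eq:Sfl3l4}) can be evaluated with a few lines of Python; the sketch below reads the fitted value of $\Sigma$ as $\Sigma^{1/3} = 0.2105$ GeV, and the quark masses in the loop are merely illustrative.
\begin{verbatim}
import numpy as np

Sigma13, f, l3bar, l4bar = 0.2105, 0.127, 4.37, 5.31  # central fit values, GeV
mpi_pm = 0.140                                        # charged pion mass, GeV
B = 2 * Sigma13**3 / f**2
Lam3_sq = mpi_pm**2 * np.exp(l3bar)
Lam4_sq = mpi_pm**2 * np.exp(l4bar)

def mpi2_over_mq(mq):           # NLO expression for M_pi^2 / m_q
    x = 2 * B * mq
    return 2 * B * (1 + x / (16 * np.pi**2 * f**2) * np.log(x / Lam3_sq))

def fpi(mq):                    # NLO expression for f_pi
    x = 2 * B * mq
    return f * (1 - 2 * x / (16 * np.pi**2 * f**2) * np.log(x / Lam4_sq))

for mq in (0.002, 0.005, 0.010, 0.015):               # quark mass in GeV
    print(mq, mpi2_over_mq(mq), fpi(mq))
\end{verbatim}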
In order to convert the bare quark mass to that in the
$\overline{\mathrm{MS}}$ scheme, we calculate the
renormalization factor $Z_m^{\overline{\mathrm{MS}}}(\mathrm{2~GeV})$
using the non-perturbative renormalization technique
through the RI/MOM scheme \cite{Martinelli:1994ty}, and
obtain $Z_m^{\overline{{\mathrm{MS}}}}(\mbox{2 GeV}) = 0.5934(10)$ \cite{Chiu:NPR}.
Then the value of the average up and down quark mass is transcribed to
\bea
\label{eq:mudMS}
m_{ud}^{\overline{{\mathrm{MS}}}}(\mbox{2 GeV}) = 4.09(7)(11) \mbox{ MeV}.
\eea
Similarly, the value of $ \Sigma $ in (\ref{eq:Sfl3l4}) is transcribed to
\bea
\label{eq:sigmaMS}
\Sigma^{\overline{{\mathrm{MS}}}}(\mbox{2 GeV}) = [250(4)(7) \mbox{ MeV}]^3
\eea
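As a simple cross-check of these conversions (which rely on the standard relations $m^{\overline{\mathrm{MS}}} = Z_m^{\overline{\mathrm{MS}}}\, m_q$ and $\Sigma^{\overline{\mathrm{MS}}} = \Sigma/Z_m^{\overline{\mathrm{MS}}}$, following from the renormalization-group invariance of $m_q\Sigma$, and on reading the fitted value in (\ref{eq:Sfl3l4}) as $\Sigma^{1/3}=0.2105$ GeV), one indeed finds
\bea
0.5934 \times 6.9 \mbox{ MeV} \simeq 4.1 \mbox{ MeV}, \quad
\left[ \frac{(0.2105 \mbox{ GeV})^3}{0.5934} \right]^{1/3} \simeq 0.250 \mbox{ GeV},
\nonumber
\eea
consistent with (\ref{eq:mudMS}) and (\ref{eq:sigmaMS}).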
The systematic error is estimated from the truncation of higher-order
effects and the uncertainty in the determination of the lattice spacing with
$ r_0 = 0.49 $ fm.
Since our calculation is done at a single lattice spacing,
the discretization error cannot be quantified reliably, but
we do not expect it to be large because our lattice
action is free from $O(a)$ discretization effects.
\begin{figure}[tb]
\begin{center}
\begin{tabular}{@{}c@{}c@{}}
\includegraphics*[width=7.55cm,height=5.8cm]{mpi2omq_nf2.eps}
&
\includegraphics*[width=7.55cm,height=5.8cm]{fpi_nf2.eps}
\\ (a) & (b)
\end{tabular}
\caption{ Physical results of 2-flavor QCD with optimal domain-wall quarks:
(a) $ m_\pi^2/m_q $, and (b) $ f_\pi $.
The solid lines are the simultaneous fits to the NLO ChPT, for the six lightest quark masses.}
\label{fig:mpi2omq_fpi_b590_nf2}
\end{center}
\end{figure}
\section{Concluding remarks}
Using a GPU cluster (currently attaining 40 sustained Teraflops with
250 NVIDIA GPUs), we have succeeded in simulating unquenched lattice QCD
with optimal domain-wall quarks, which preserves the chiral symmetry
to a good precision
and samples all topological sectors ergodically.
Our results of the mass and the decay constant
of the pseudoscalar meson (in this paper)
and the topological susceptibility (in Ref. \cite{Hsieh:2010ab})
suggest that the nonperturbative chiral dynamics of the sea quarks are well under
control in our simulations. This provides a new strategy to tackle QCD nonperturbatively
from the first principles.
\begin{acknowledgments}
This work is supported in part by the National Science Council
(Nos.~NSC96-2112-M-002-020-MY3, NSC99-2112-M-002-012-MY3, NSC96-2112-M-001-017-MY3,
NSC99-2112-M-001-014-MY3, NSC99-2119-M-002-001) and NTU-CQSE~(Nos.~99R80869, 99R80873).
We are grateful to NCHC and NTU-CC for providing facilities to perform some of the computations.
We also thank Kenji Ogawa for his contribution in the development of simulation code.
\end{acknowledgments}
\section{Introduction}
The problem of the crossover from a BCS-like super\-conducting state
to a Bose condensate of
local pairs \cite{Legg80,NozSch84,Rand94} has gained
new interest in the context of high-$T_c\/$ super\-conductors. While
there is still no \mbox{quantitative}
microscopic theory of how superconductivity
arises from doping the antiferromagnetic and insulating parent com\-pounds
\cite{Scalap95}, it is clear that the superconducting state can be
described in terms of a generalized pairing picture.
The many body ground state is thus a coherent superposition of two particle
states built from spin singlets in a relative d-wave configuration
\cite{AnGoLe96}. The short coherence length $\xi (0)\approx 10-20\;
\mbox{\AA}\/$ parallel to the basic ${\rm CuO_2}$-planes
which is of the same
order as the average interparticle spacing $k_F^{\,-1}\/$, however,
indicates that neither the BCS picture of highly overlapping pairs nor
a description in terms of composite Bosons is applicable here. It is
therefore of considerable interest to develop a theory, which is able to
cover the whole regime between weak and strong coupling in a unified manner.
On a phenomenological level such a description is provided by the
Ginzburg-Landau (GL) theory. Indeed, it is a nonvanishing
expectation value of the complex order parameter $\psi\/$ which signals
the breaking of gauge invariance as the basic characteristic of the
superconducting state, irrespective of whether the pair size is much larger
or smaller than the interparticle spacing. A GL description of the
BCS to Bose crossover was developed for the s-wave case by Drechsler
and one of the present authors \cite{DreZwe92} in two dimensions and by
S\'{a} de Melo, Randeria and Engelbrecht \cite{SaRaEn93} for the
three-dimensional case.
In our present work the theory for the two-dimensional case is reconsidered,
including a discussion of the Nelson-Kosterlitz jump of the order
parameter and the generalization
to the experimentally relevant situation of d-wave superconductivity.
Moreover we also calculate the characteristic lengths $\xi\/$ and $\lambda\/$
and compare our results with measured properties of
high-$T_c\/$ compounds.
The remarkable success with which the standard BCS model has been applied to
conventional superconductors relies on the fact that in the weak coupling
limit the details of the attractive interaction are irrelevant. By an
appropriate rescaling of the parameters the properties of all weak
coupling superconductors are therefore universal. It is one of our aims here to
investigate to which extent such a simplifying description also exists in
more strongly coupled superconductors.
Starting from a microscopic model with an instantaneous attractive interaction,
we find that the resulting GL functional takes the standard form for
arbitrary strength of the coupling. By adjusting a single dimensionless
parameter to the measured upper critical field near $T_c\/$,
we obtain consistent values for both the dimensionless
ratio $k_F\xi (0)\approx 5-8\/$ between the coherence length and interparticle
spacing as well as the observed value $\kappa\approx 90-100\/$ of the
GL parameter
in optimally doped high-$T_c\/$ compounds. Therefore, in spite of the
rather crude nature of the original microscopic model, our GL theory is
quantitatively applicable to strongly coupled superconductors which are
far from the standard weak coupling limit, although not yet in the
crossover regime to Bose-like behaviour.
The plan of the paper is as follows: in section II we introduce our
microscopic model which has an attractive interaction in the singlet
d-wave channel. From this we derive, via a Hubbard-Stratonovich
transformation, a statistical GL theory which is valid near $T_c\/$.
The relevant coefficients of the GL functional are calculated for
arbitrary interaction strength.
In section III we discuss the appropriate microscopic definition
of the order parameter and the evolution from the BCS to the Bose limit
of the well known Kosterlitz-Thouless jump in the superfluid density of
two-dimensional superconductors.
Ignoring the subtleties of the Kosterlitz-Thouless transition,
in section IV we use a Gaussian
approximation to determine the critical temperature and the
associated value of the chemical potential at the transition. Finally in
section V we determine the
characteristic lengths $\xi\/$ and $\lambda\/$ near the transition
for arbitrary strength of the coupling.
Adjusting the coupling to the experimental values of
the slope of ${H_c}_2\/$ near
$T_c\/$, we then determine
the associated dimensionless ratios $k_F\xi (0)\/$ and $\kappa=\xi/
\lambda\/$. They agree rather well with the observed values in YBCO and
BSCCO. A brief conclusion and a discussion of open problems is given
in section VI.
\section{Microscopic derivation of the GL functional}
As a general model describing Fermions with an instaneous pairwise
interaction $V(\boldsymbol{r})\/$ in a translationally invariant system,
we start from the Hamiltonian ($V\/$ is the volume of the system.)
\begin{multline}
H = \sum\limits_{\boldsymbol{k}}\sum\limits_{\sigma}\epsilon_k^{}\,
c_{\boldsymbol{k}\,\sigma}^{\dagger} c_{\boldsymbol{k}\,\sigma}^{}
\\ +\,\frac{1}{2V}
\sum\limits_{\boldsymbol{k}\,\boldsymbol{k'}\boldsymbol{q}}
\sum\limits_{\sigma\,\sigma'} V_{\boldsymbol{k}-\boldsymbol{k'}}^{}\;
c_{\boldsymbol{k}+\boldsymbol{q}\,\sigma}^{\dagger}
c_{-\boldsymbol{k}\,\sigma'}^{\dagger} c_{-\boldsymbol{k'}\,\sigma'}^{}
c_{\boldsymbol{k'}+\boldsymbol{q}\,\sigma}^{}
\end{multline}
with an arbitrary single particle energy $\epsilon_k^{}\/$ which we will
later replace by an effective mass approximation
$\epsilon_k^{} = \hbar^2 k^2/2m\/$.
In the two-dimensional case, which we consider throughout, the Fourier
transform $V_{\boldsymbol{k}-\boldsymbol{k'}}^{}\/$ of the interaction
potential may be expanded in its relative angular momentum contributions
$l\/$ by
\begin{equation}
V_{\boldsymbol{k}-\boldsymbol{k'}}^{} =
V_0^{}(k,k') + 2 \sum\limits_{l=1}^{\infty}
\cos(l\,\varphi)\,V_l^{}(k,k')
\end{equation}
with $\varphi\/$ the angle between $\boldsymbol{k}\/$ and
$\boldsymbol{k'}\/$. In the following we are only interested
in d-wave pairs with symmetry ${\rm d_{x^2-y^2}}\/$.
We therefore omit all contributions $l\neq 2\/$ and also neglect the
dependence on the absolute values $k\/$ and $k'\/$ of the momenta.
Assuming the interaction is separable, we thus approximate
\begin{equation} \label{ref3}
V_{\boldsymbol{k}-\boldsymbol{k'}}^{} \rightarrow -g\;
v_{\boldsymbol{k}+\boldsymbol{q}/2}^{}\,
v_{\boldsymbol{k'}+\boldsymbol{q}/2}^{}
\end{equation}
with
\begin{equation}
v_{\boldsymbol{k}}^{} = \sqrt{2}
\,\frac{k_y^{\,2}-k_x^{\,2}}{k^2}
\end{equation}
and $g\/$ a negative constant characterizing the strength of the
attractive interaction. Finally the
restriction to singlet pairing is incorporated trivially by considering only
interactions between Fermions with opposite spins $\sigma'=-\sigma\/$.
In this manner we obtain a Gorkov-like reduced interaction Hamiltonian
\begin{multline}
H' = -\frac{g}{2V}
\sum\limits_{\boldsymbol{k}\,\boldsymbol{k'}\,\boldsymbol{q}}
\sum\limits_{\sigma} v_{\boldsymbol{k}+\boldsymbol{q}/2}^{}\,
v_{\boldsymbol{k'}+\boldsymbol{q}/2}^{}\;\\ \cdot\,
c_{\boldsymbol{k}+\boldsymbol{q}\,\sigma}^{\dagger} \,
c_{-\boldsymbol{k}\,-\sigma}^{\dagger}\, c_{-\boldsymbol{k'}\,-\sigma}^{}\,
c_{\boldsymbol{k'}+\boldsymbol{q}\,\sigma}^{}
\end{multline}
(Note that the shift by $\boldsymbol{q}/2\/$ in Eq. (\ref{ref3}) is
necessary to guarantee that the interaction is symmetric with respect to
$\sigma\leftrightarrow-\sigma\/$.).
For the derivation of a GL functional below, it is
convenient to introduce pair operators $b_{\boldsymbol{q}}\/$ via
\begin{equation}
b_{\boldsymbol{q}}^{} = \sum\limits_{\boldsymbol{k}}
v_{\boldsymbol{k}+\boldsymbol{q}/2}^{}\;
c_{-\boldsymbol{k}\;+1}^{}\,
c_{\boldsymbol{k}+\boldsymbol{q}\;-1}^{}\;.
\end{equation}
The contribution $H'\/$ may then be written in the form
\begin{equation}
H' = -\frac{g}{V}\sum\limits_{\boldsymbol{q}}
b_{\boldsymbol{q}}^{\dagger}\,b_{\boldsymbol{q}}^{}
\end{equation}
of an attractive interaction between pairs of Fermions with
opposite spin and total momentum $\boldsymbol{q}\/$.
In the following we want to derive
a functional integral representation of the grand partition function
\begin{equation}
Z = {\rm Tr}\;e^{-\beta(H-\mu N)}
\end{equation}
which gives the standard GL theory as its mean field limit. Since we
are interested in a superconducting state with a nonzero anomalous average
$\langle b_{\boldsymbol{q}}^{}\rangle \neq 0\/$, it is convenient to
formally linearize the interaction term $H'\/$ by a Hubbard-Stratonovich
transformation \cite{Mueh77}. The grand partition function is thus
expressed in terms of a functional integral
\begin{equation} \label{ref9} \hspace*{-0.1in}
Z = {\displaystyle \int} D^2 z\;\exp\bigg(-\frac{1}{V\hbar g}
\int\limits_{0}^{\beta\hbar} d\tau \sum\limits_{\boldsymbol{q}}
|z(\boldsymbol{q},\tau)|^2 \bigg) \;L[z]
\end{equation}
over a complex valued c-number field
$z(\boldsymbol{q},\tau)\/$. Here
\begin{equation} \label{ref10}
L[z] = {\rm Tr}\;\,{\rm T}\exp
\bigg( -\frac{1}{\hbar}\int\limits_0^{\beta\hbar}d\tau\;
H_z(\tau) \bigg)
\end{equation}
is a functional of the auxiliary field $z(\boldsymbol{q},\tau)\/$,
which acts as a space- and 'time'-dependent external potential on a
noninteracting Fermi system with Hamiltonian
\begin{multline}
H_z (\tau) = \sum\limits_{\boldsymbol{k}}\sum\limits_{\sigma}
(\epsilon_{k}^{}-\mu)\,
c_{\boldsymbol{k}\,\sigma}^{\dagger} c_{\boldsymbol{k}\,\sigma}^{} \\
+\,\frac{1}{V}\sum\limits_{\boldsymbol{q}} \big(z(\boldsymbol{q},\tau)\,
b_{\boldsymbol{q}}^{\dagger} +z^{\star}(\boldsymbol{q},\tau)\,
b_{\boldsymbol{q}}^{}\big) \;.
\end{multline}
The physical interpretation of the c-number field
$z(\boldsymbol{q},\tau)\/$ is obtained by noting that its expectation value
\begin{equation} \label{expect}
\langle z(\boldsymbol{q},\tau)\rangle = -g
\langle b_{\boldsymbol{q}}(\tau)\rangle
\end{equation}
is directly proportional to the anomalous average
$\langle b_{\boldsymbol{q}}(\tau)\rangle\/$. Thus
up to some normalization constant, which will be determined below, the field
$z(\boldsymbol{q},\tau)\/$ is just the spatial Fourier transform of
the complex order parameter $\psi(\boldsymbol{r},\tau)\/$ describing the
superconducting state. It depends both on position and imaginary time
$\tau\in [0,\beta\hbar]\/$ which is characteristic for a quantum
GL functional. Since the ${\rm d_{x^2-y^2}}\/$-symmetry
in a rotational invariant system is connected with a one dimensional
irreducible representation \cite{SigUed91}, the order parameter is still
a simple complex scalar, similar to the more familiar isotropic s-wave
case.
Obviously the trace in the time ordered exponential in Eq. (\ref{ref10})
cannot be calculated exactly. However it is straightforward to evaluate
$L[z]\/$ perturbatively in $z\/$. The naive justification for this is that
close to $T_c\/$ the order parameter is small. Strictly speaking however,
the functional integral in Eq. (\ref{ref9}) requires to integrate
over arbitrary
realizations of $z(\boldsymbol{q},\tau)\/$. In order to
obtain the standard form of the statistical GL functional, however, the
expansion is truncated at fourth order in the exponent of $L[z]\/$.
In the language of field theory we
are therefore calculating the bare coupling constants, which serve as the
starting point for treating the behaviour at long wavelengths. By
a straightforward perturbative calculation \cite{stin96} up to fourth order
in $z\/$, the functional $L[z]\/$ turns out to be of the form
\begin{multline}
L[z] = Z_0\;\exp\bigg(\frac{1}{V\beta\hbar^2}\sum\limits_{\boldsymbol{q}}
\sum\limits_{\omega_n} \tilde{a}(\boldsymbol{q},\omega_n)\;
|z(\boldsymbol{q},\omega_n)|^2 \\
-\,\frac{1}{2V^3\beta^3\hbar^4}\sum\limits_{1\,2\,3}
b(1,2,3)\; z(1) z^{\star}(2) z(3) z^{\star}(1-2+3)\bigg) \;.
\end{multline}
Here $Z_0\/$ is the grand partition function of noninteracting
Fermions while
\begin{equation}
z(\boldsymbol{q},\omega_n)=
\int\limits_{0}^{\beta\hbar}d\tau\; e^{i\omega_n\tau}z(\boldsymbol{q},\tau)
\end{equation}
is the Fourier transform of the $\tau\/$-dependence of
$z(\boldsymbol{q},\tau)\/$ with bosonic Matsubara frequencies
$\omega_n = 2\pi n/(\beta\hbar)\/$, $n\/$ integer. In the quartic term
we have used the short hand notation $1 = (\boldsymbol{q_1},\omega_1)\/$,
etc. The functions $\tilde{a}\/$ and $b\/$ can be expressed in terms
of the normal state Green function ($\xi_k^{} = \epsilon_k^{}-\mu\/$,
$\tilde{\omega}_n = 2\pi (2n+1)/(\beta\hbar)\/$)
\begin{equation}
G_0(\boldsymbol{k},\tilde{\omega}_n)=\frac{1}{i\tilde{\omega}_n-
\xi_{k}^{}/\hbar}
\end{equation}
via
\begin{multline}
\tilde{a}(\boldsymbol{q},\omega_n) = \frac{1}{V\beta\hbar^2}
\sum\limits_{\boldsymbol{k}\,\tilde{\omega}_n}
v_{\boldsymbol{k}-\boldsymbol{q}/2}^{\;2}\,\\ \cdot\,
G_0(\boldsymbol{k}+\boldsymbol{q},\tilde{\omega}_n+\omega_n)
G_0(-\boldsymbol{k},-\tilde{\omega}_n)
\end{multline}
and a similar expression with four factors $G_0\/$ for $b\/$.
In order to obtain the standard form of a quantum GL functional, the
coefficients $\tilde{a}(\boldsymbol{q},\omega_n)\/$ and
$b(1,2,3)\/$ have to be expanded for small $\boldsymbol{q}\/$ and
$\omega_n\/$. To lowest order in the spatial and temporal gradients
of the order parameter, it is sufficient to keep only
the leading terms in
\begin{equation} \label{ref17} \hspace*{-0.0in}
a(\boldsymbol{q},\omega_n) = \frac{1}{g} -
\tilde{a}(\boldsymbol{q},\omega_n) =
a + c\,\frac{\hbar^2 q^2}{2m} - i \,d\,\hbar\omega_n + \cdots
\end{equation}
and replace
\begin{equation}
b(1,2,3) \rightarrow b(0,0,0) = b
\end{equation}
by its constant value at zero momentum and frequency.
This expansion is valid, provided
the contributions of order $\boldsymbol{q}^4\/$ and $\omega_n^{\,2}\/$
in Eq. (\ref{ref17}) are negligible. From an explicit calculation of these
higher order terms it may be shown \cite{stin96} that the order parameter
must vary slowly on length scales of order
$\xi_b \approx\hbar/\sqrt{mE_b}\/$ with $E_b\/$ the two particle
binding energy introduced in Eq. (\ref{ref23}) below.
Physically, the length $\xi_b\/$ is just the radius of a bound state
with energy $E_b\/$.
In the weak coupling limit this length coincides with
the standard BCS coherence length $\xi_0\approx\hbar v_F/T_c\/$
which, for the clean limit considered here, is identical with the
GL coherence length $\xi(0)\/$ as defined in (\ref{ref49}).
The standard form of the
GL functional with a gradient term $|\nabla\psi|^2\/$ is therefore valid
provided the order parameter varies on scales larger than $\xi_b \approx
\xi(0)\/$. With increasing strength of the coupling $E_b\/$, the pair radius
$\xi_b\/$ decreases and thus the validity of the expansion (\ref{ref17})
extends to variations on shorter length scales. Regarding the dependence on
$\tau\/$, the requirement is that $\psi(\boldsymbol{r},\tau)\/$ must
vary slowly on time scales $\tau_b\approx\hbar/E_b\/$. For weak coupling
this is a rather large scale of order $\hbar\epsilon_F/T_c^2\/$ (we set
$k_B=1\/$). Similar to the spatial dependence, however, the necessary scale
for the $\tau\/$-dependence of the order parameter for which the leading terms
kept in (17) are sufficient, decreases with increasing coupling. Thus, in the
Bose limit, the description of the time dependence of the order parameter by
a first order derivative like in the well known Gross-Pitaevskii
equation \cite{Gros63} becomes exact (see also section VI).
With these approximations our GL functional in
\begin{equation}
Z = Z_0\;{\displaystyle \int} D^2 z\;\exp
(-\beta F[z])
\end{equation}
finally takes the form
\begin{multline} \label{ref20}
F[z] = \frac{1}{\beta\hbar}\int\limits_{V}d^2 r
\int\limits_{0}^{\beta\hbar}d\tau\bigg(a\,|z(\boldsymbol{r},\tau)|^2 + c\,
\frac{\hbar^2}{2m}\,|\boldsymbol{\nabla}z(\boldsymbol{r},\tau)|^2 \\
+ \, d\,\hbar\,
z^{\star}(\boldsymbol{r},\tau)\,\partial_{\tau}z(\boldsymbol{r},\tau)
+ \frac{b}{2}\, |z(\boldsymbol{r},\tau)|^4\bigg)
\end{multline}
which reduces to the familiar expression if $z\/$ is independent of $\tau\/$.
The coefficients $a\/$ and $b\/$ are given by
\begin{equation} \label{ref21}
a = \frac{1}{g} - \frac{1}{2V}\sum\limits_{\boldsymbol{k}}
v_{\boldsymbol{k}}^{\,2}\;\frac{\tanh(\beta\xi_k^{}/2)}{\xi_k^{}}
\end{equation}
and
\begin{equation}
b = \frac{1}{V\beta}\sum\limits_{\boldsymbol{k}}\sum\limits_{\omega_n}
\frac{v_{\boldsymbol{k}}^{\,4}}{(\xi_{k}^{\;2}+
\hbar^2\omega_n^{\,2})^2}\;.
\end{equation}
Now the sum over wavevectors $\boldsymbol{k}\/$ in Eq. (\ref{ref21}) diverges
at large $k\/$ and thus the bare value of $a\/$ is undefined.
In the weak coupling limit this divergence is usually
eliminated by arguing that the interaction is finite only in a thin
shell around the Fermi surface. In the present case however, where the
condensation in the strong coupling limit really affects the whole
Fermi sphere, such a procedure is no longer possible. Instead, as was
pointed out by Randeria et al.\cite{RaDuSh90}, we have to connect the bare
coupling constant to the low energy limit of the two-body scattering
problem. In two dimensions this relation is of the form
\begin{equation} \label{ref23}
\frac{1}{g} =
\frac{m}{4\pi\hbar^2}\ln\bigg(\frac{2\epsilon_{\Lambda}}{E_b}\bigg),
\end{equation}
where $\epsilon_{\Lambda} \rightarrow \infty\/$ is a high energy cutoff
which precisely cancels the large $k\/$ divergence on the right hand side of
Eq. (\ref{ref21}). The parameter $E_b > 0\/$ is the binding energy of the two
particle bound state in vacuum, which in fact must be finite
in order to obtain a superconducting instability in two dimensions.
In our present model, which neglects the dependence of
$V_{\boldsymbol{k}-\boldsymbol{k'}}^{}\/$ on the absolute values
of $\boldsymbol{k}\/$ and $\boldsymbol{k'}\/$,
the existence of a bound state is indeed a necessary
condition for superconductivity even in the case of d-wave pairing,
although quite generally it only applies in the s-wave case \cite{RaDuSh90}.
Since in the effective mass approximation $\epsilon_k=\hbar^2k^2/2m\/$
which we are using throughout, the free Fermion density of
states in two dimensions is constant, the coefficient $a\/$
can now be calculated analytically in terms of $E_b\/$ as
\begin{multline}
a = -\frac{m}{4\pi\hbar^2}\Bigg( 2\ln (\frac{4e^{\gamma}}{\pi})
\,\theta (\mu) + \ln (\frac{\beta E_b}{4}) +
\ln (\frac{\beta |\mu|}{2})\\ \cdot\,\tanh (\frac{\beta\mu}{2})
+\frac{\beta\, {\rm sgn}(\mu)}{2}
\int\limits_{|\mu|}^{\infty}d\xi\;\frac{\ln (\beta\xi /2)}{\cosh^2
(\beta\xi /2)}\Bigg)
\end{multline}
($\gamma=0.577\ldots\/$ is the Euler constant.).
The coefficient $b\/$ in (22) is finite without a cutoff and given by
\begin{multline} \label{ref25}
b = \frac{3m}{16\pi\hbar^2}\Bigg(
\frac{7\zeta (3)}{2\pi^2}\,\beta_c^{\,2}\,\theta (\mu_c) -
\frac{1}{\mu_c^{\,2}}\,\tanh\big(\frac{\beta_c \mu_c}{2}\big)
\\ +\,{\rm sgn}(\mu_c)
\int\limits_{|\mu_c|}^{\infty}d\xi\;\frac{\tanh (\beta_c \xi/2)}{\xi^3}
\Bigg)\;.
\end{multline}
Here $\theta (x)\/$ and ${\rm sgn}(x)\/$ are the well known Heaviside
and sign functions. Moreover we
we have replaced $\beta\/$ and $\mu\/$ by their values at the
critical point. Similarly, the values of
the two remaining coefficients $c\/$ and $d\/$ at the critical point are
\begin{multline} \label{ref26}
c = \frac{m}{8\pi\hbar^2}\Bigg(
\frac{7\zeta(3)}{2\pi^2}\,\beta_c^{\,2}\mu_c\,\theta(\mu_c) \\ +\,
|\mu_c|\int\limits_{|\mu_c|}^{\infty}d\xi\;\frac{\tanh(\beta_c\xi/2)}{\xi^3}
\Bigg)
\end{multline}
and
\begin{equation} \label{ref27}
d = \frac{m}{8\pi\hbar^2}\,\frac{\tanh(\beta_c\mu_c/2)}{\mu_c}\;.
\end{equation}
All four GL coefficients can thus be expressed essentially in analytical
form for arbitrary strength of the interaction.
Comparing these results with those for the s-wave case \cite{DreZwe92},
it turns out that up to a geometrical factor $3/2\/$ in $b\/$ the
coefficients are identical at given values of $\beta_c^{}\/$ and
$\mu_c^{}\/$, provided the two-particle binding energy $E_b\/$ is simply
identified with the corresponding s-wave value.
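To make this explicit, the ratio $m_{\star}/m = d/c$ that follows from Eqs. (\ref{ref26}) and (\ref{ref27}) can be evaluated numerically with a few lines of Python (units $\hbar=m=k_B=1$; the common prefactor $m/8\pi\hbar^2$ cancels in the ratio, and the values of $\beta_c$ and $\mu_c$ are taken here as given inputs rather than being determined self-consistently):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import zeta

def mass_ratio(beta_c, mu_c):
    # m_star/m = d/c; the prefactor m/(8*pi*hbar^2) drops out of the ratio
    tail, _ = quad(lambda x: np.tanh(beta_c * x / 2) / x**3,
                   abs(mu_c), np.inf)
    c = (7 * zeta(3) / (2 * np.pi**2)) * beta_c**2 * mu_c * (mu_c > 0) \
        + abs(mu_c) * tail
    d = np.tanh(beta_c * mu_c / 2) / mu_c
    return d / c

print(mass_ratio(50.0, 1.0))   # BCS-like: mu_c >> T_c, ratio ~ (T_c/mu_c)^2
print(mass_ratio(1.0, -20.0))  # Bose-like: mu_c large and negative, ratio -> 2
\end{verbatim}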
\section{Order parameter and Nelson-Kosterlitz jump}
In order to relate the formal auxiliary field $z(\boldsymbol{r},\tau)\/$
in the functional (\ref{ref20}) to the usual superconducting order parameter
$\psi(\boldsymbol{r},\tau)\/$, the standard procedure in weak coupling is to
take $\psi_{\rm BCS}^{} = \sqrt{2c}\,z\/$, which gives the conventional
coefficient $\hbar^2/4m\/$ in front of $|\nabla\psi_{\rm BCS}^{}|^2\/$.
The gradient term is thus
identical with the kinetic energy of a Schr\"odinger field for a single
quantum mechanical particle with mass $m_{\star}=2m\/$, describing a pair
built from constituents with mass $m\/$.
As pointed out by
de Gennes \cite{deGe89}, however, the value of $m_{\star}\/$ is arbitrary
in principle, as long as one is considering the classical
GL functional with $\psi\/$ independent of $\tau\/$. Indeed all measurable
quantities obtained from the classical GL functional depend only on
ratios like $|\psi|^2/m_{\star}\/$. An arbitrary choice for $m_{\star}\/$
can therefore always be compensated by an appropriate rescaling of $\psi\/$.
This situation is changed, however, in a quantum mechanical treatment,
where the order parameter also depends on $\tau\/$, i.e. dynamics enters.
In this case there is a different natural normalization of
$\psi = \sqrt{d}\, z\/$ in which the coefficient of the
$\psi^{\star}\,\partial_{\tau}\psi\/$-contribution is just $\hbar\/$
\cite{DreZwe92,Zwer92}. With
this choice of normalization, the order parameter
$\psi(\boldsymbol{r},\tau)\/$ is precisely
the c-number field in a coherent state path integral \cite{Schu81} associated
with a genuine Bose field operator $\hat{\psi}(\boldsymbol{r})\/$
with canonical commutation relations
$[\hat{\psi}(\boldsymbol{r}),\hat{\psi}^{\dagger}(\boldsymbol{r'})]
= \delta(\boldsymbol{r}-\boldsymbol{r'})\/$.
While this normalization is evidently the most appropriate one in the
strong coupling Bose limit - where it agrees with the standard choice as
we will see - it can be used for arbitrary coupling, even in the
BCS-limit. Including the charge $-2e\/$ of a pair by generalizing the
gradient to a covariant derivative in the standard way and adding the
energy associated with the magnetic field $\boldsymbol{h}=\boldsymbol{\nabla}
\times\boldsymbol{A}\/$, the resulting free energy functional reads
($c_0\/$ is the velocity of light)
\begin{multline} \label{ref28}
F[\psi,\boldsymbol{A}] = \\
\frac{1}{\beta\hbar}\int\limits_{V}d^2r
\int\limits_{0}^{\beta\hbar}d\tau\Big(-\mu_{\star}\,
|\psi|^2 +
\frac{1}{2m_{\star}}\,\Big|\Big(\frac{\hbar}{i}\boldsymbol{\nabla}
+\frac{2e}{c_0}\boldsymbol{A}\Big)\psi\Big|^2 \\
+ \, \hbar\, \psi^{\star}\,\partial_{\tau}\psi
+ \frac{g_{\star}}{2}\, |\psi|^4\Big) +
\frac{\boldsymbol{h}^2}{8\pi}\;.
\end{multline}
With this normalization, the three remaining indepen\-dent coefficients now
have a very direct physical inter\-pretation \cite{Zwer92}: the coefficient
$\mu_{\star} = -a/d\/$ is the effective chemical potential of the Bosons,
$m_{\star} = m\;d/c\/$ their effective mass and $g_{\star} = b/d^2\/$ a
measure of the repulsive interaction between the composite Bosons.
In the following we will concentrate on the effective mass $m_{\star}\/$
(at $T_c\/$) which, according to (\ref{ref26},\ref{ref27})
is completely determined by
the ratio $T_c/\mu_c\/$. Now in the weak coupling limit $\mu_c\/$ is equal
to the Fermi energy (see section IV below) and thus $m_{\star}/m\/$
vanishes like $(T_c/\epsilon_F)^2\/$ in the case of a BCS-superconductor.
By contrast, for strong coupling $\mu_c\/$ approaches $-E_b/2\/$, i.e. is
large and negative. In the Bose limit $m_{\star}\/$ is therefore equal
to $2m\/$ as expected. Relating $T_c/\mu_c\/$ to the dimensionless
coupling strength $\ln{E_b/\epsilon_F}\/$ by the Gaussian approximation
discussed in the following section, we obtain a monotonic increase of
$m_{\star}\/$ from exponentially small values to $2m\/$ as a function of
coupling, as is shown in Fig. \ref{fig1}.
\begin{figure}
\epsfig{file=em2d_kap5.ps,scale=0.55}
\caption{\label{fig1}The effective mass $m_{\star}/m\/$ of the
composite Bo\-sons versus the binding energy $E_b/\epsilon_F\/$.}
\end{figure}
As was pointed out above, the mass
$m_{\star}\/$ of a Cooper pair defined in such a way cannot be
observed in any static measurement like the penetration depth.
To discuss this, we consider the two dimensional current density
\begin{equation} \label{ref29} \hspace*{-0.2cm}
\boldsymbol{j}(\boldsymbol{r}) =
-c_0 \frac{\delta F[\psi,\boldsymbol{A}]}
{\delta\boldsymbol{A}(\boldsymbol{r})}=
-2e|\psi|^2\frac{\hbar}{m_{\star}}\Big(\boldsymbol{\nabla}\phi +
\frac{2e}{\hbar c_0}\boldsymbol{A}\Big)
\end{equation}
which follows from (\ref{ref28}) for a $\tau\/$-independent order parameter
$\psi=|\psi|e^{i\phi}\/$. For a spatially constant magnitude $\psi_{\infty}\/$
of the order parameter, this leads immediately to the London equation
\begin{equation} \label{ref30}
\boldsymbol{\nabla}\times\boldsymbol{j}(\boldsymbol{r}) =
-\frac{4e^2|\psi_{\infty}|^2}{m_{\star}c_0}\boldsymbol{h}\;.
\end{equation}
As was noted above, it is only the ratio $|\psi|^2/m_{\star}\/$ which enters
here and thus static magnetic properties are independent of the choice
for $m_{\star}\/$. Specifically we consider a thin
superconducting film with thickness $\delta\/$. The in-plane
penetration depth $\lambda\/$ is then related to $|\psi_{\infty}|^2\/$
via \cite{HN79}
\begin{equation} \label{ref31}
L_s = \frac{2\,\lambda^2}{\delta}=\frac{m_{\star}c_0^{\,2}}
{8\pi|\psi_{\infty}|^2e^2}\;.
\end{equation}
Here we have introduced a further length $L_s\/$ which is the effective
magnetic penetration depth in a thin film. Typically this length is of the
order of one centimeter and thus for sample sizes which are smaller than that,
magnetic screening may be neglected. In such a situation the difference
between a charged and a neutral superfluid becomes irrelevant. A
superconducting film thus exhibits a Kosterlitz-Thouless transition
\cite{Minn87}, in which the renormalized helicity modulus
$\gamma(T)=\hbar^2|\psi|^2(T)/m_{\star}\/$ jumps from $2T_c/\pi\/$ to
zero at $T_c\/$. Using (\ref{ref31}), this jump translates into one for the
two dimensional screening length $L_s\/$ of size \cite{HN79}
\begin{equation} \label{ref32}
\left.(L_s(T)\;T)\right|_{T_c^{\,-}} =
\Big(\frac{\phi_0}{4\pi}\Big)^2=1.96\; \text{K\,cm}
\end{equation}
where $\phi_0=hc_0/2e\/$ is the standard flux quantum. Consistent with our
remarks above, this jump is completely universal and independent of
$m_{\star}\/$, applying both to BCS- or Bose-like superconductors,
provided $L_s\/$ is larger than the sample size and the density of
vortices is low \cite{Minn87}.
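For orientation, with $\phi_0 = hc_0/2e \simeq 2.07\times 10^{-7}\,{\rm G\,cm^2}$ one has $(\phi_0/4\pi)^2 \simeq 2.7\times 10^{-16}\,{\rm G^2\,cm^4}$; restoring $k_B$, i.e. dividing by $k_B = 1.38\times 10^{-16}\,{\rm erg/K}$ and using ${\rm G^2\,cm^4} = {\rm erg\,cm}$, indeed reproduces the quoted value of about $1.96\;{\rm K\,cm}$.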
In order to define a proper superfluid density $n_s\/$, we consider
the relation between the order parameter $\psi\/$ and the microscopic
anomalous average. From Eq. (\ref{expect}) we have
$\psi_{\rm BCS} = -g\sqrt{2c}\, \langle b \rangle\/$. Neglecting the internal
d-wave structure of the order parameter and the logarithmic factor in
(\ref{ref23}), it is straightforward to see that
\begin{equation} \label{ordpar}
\psi_{\rm BCS}(\boldsymbol{r}) \approx \xi_b
\langle\hat\psi_{+1}(\boldsymbol{r})\;
\hat\psi_{-1}(\boldsymbol{r})\rangle
\end{equation}
with $\hat\psi_{\sigma}(\boldsymbol{r})\/$, $\sigma = \pm 1\/$ the
Fermionic field
operators (The factor $\ln (2\epsilon_{\Lambda}/E_b)\/$ which is
omitted in (\ref{ordpar}) diverges as $\epsilon_{\Lambda}
\rightarrow \infty\/$. This is a reflection of the fact that the
product of two field operators at the same point can be properly defined
only with a cutoff.). Since $\xi_b\/$ is the radius of a pair, the
relation (\ref{ordpar}) shows that
$|\psi_{\rm BCS}(\boldsymbol{r})|^2\/$ is just the areal density of
pairs. This remains true even in the Bose limit where
$\xi_b\rightarrow 0\/$ while the product
$\hat\psi_{+1}(\boldsymbol{r})\,\hat\psi_{-1}(\boldsymbol{r})\/$
eventually behaves like a true Bose field operator
$\hat\psi(\boldsymbol{r})\/$. The standard definition
$n_s = 2 |\psi_{\rm BCS}|^2\/$ of the superfluid density can thus
be applied for arbitrary coupling. By contrast, the Bose order
parameter $\psi = \sqrt{m_{\star}/2m}\,\psi_{\rm BCS}\/$ coincides
with $\psi_{\rm BCS}\/$ only in the strong coupling limit. For weak
coupling it is given by an expression like (\ref{ordpar})
but with the interparticle spacing $k_F^{\,-1}\/$ instead of $\xi_b\/$
as the prefactor. Thus $|\psi(\boldsymbol{r})|^2\/$ is essentially
the probability density for two Fermions with opposite spin at the
same point. In the BCS limit this density is exponentially small due
to the large size of a pair. The superfluid density in turn is still
of order one even in weak coupling and indeed at zero temperature
$n_s\/$ must be equal to the full density $n\/$ for any superfluid
ground state in a translational system as considered here \cite{Tony73}.
Using $n_s = 2 |\psi_{\rm BCS}(\boldsymbol{r})|^2 \/$, the relation
(\ref{ref32}) can be rewritten in terms of a jump
\begin{equation} \label{ref34}
n_s(T_c^-) =
\frac{2m}{\pi\hbar^2}\;T_c
\end{equation}
of the renormalized superfluid density. The superfluid fraction $n_s/n\/$
therefore has a jump of order $T_c/T_c^{\,\rm Bose}\/$. Since this ratio
approaches zero in weak coupling, there is a smooth crossover between the
universal jump of the superfluid density in a Bose superfluid \cite{NK77}
and the behaviour in a strict BCS model where $n_s/n = 2(1-T/T_c)\/$
vanishes {\em continuously} near $T_c\/$ even in two dimensions. Indeed
the BCS-Hamiltonian is equivalent to a model with an infinite
range interaction of strength $V^{-1}\/$ for which mean field theory
is exact \cite{MS63}. In the following we will neglect the
subtleties associated with the Kosterlitz-Thouless nature of the
transition, which is anyway masked by the coupling between different
CuO$_2\/$-planes in real high-$T_c\/$ superconductors, giving a
three dimensional critical behaviour near $T_c\/$ \cite{Schn94}.
\section{Gaussian Approximation}
In order to calculate directly observable quantities from our
GL functional (\ref{ref28}), we have to determine both the critical
temperature
$T_c\/$ and the corresponding chemi\-cal potential $\mu_c\/$ in terms
of the binding energy $E_b\/$.
Now it is obvious that an exact evaluation of the functional integral
over $\psi(\boldsymbol{r},\tau)\/$ is impossible. We will therefore use
the Gaussian approximation above $T_c\/$, which is obtained by simply omitting
the $|\psi|^4\/$-term. With this approximation our complete grand
canonical potential $\Omega\/$ per volume takes the form
\begin{multline} \label{ref34b}
\Omega = \Omega_0 + \frac{1}{\beta V}
\sum\limits_{\boldsymbol{q}\,\omega_n}\;\ln\;gd\Big(-\mu_{\star}+
\frac{\hbar^2 q^2}{2m_{\star}} - i\,\hbar\omega_n\Big)\,.
\end{multline}
The critical temperature and chemical potential then follow from
the standard condition
\begin{equation} \label{ref35}
\mu_{\star}(T_c,\mu_c) = 0
\end{equation}
for a bifurcation to a nonzero order parameter, and the particle number
equation
\begin{multline} \label{ref36}
n = - \frac{\partial \Omega}{\partial\mu} = n_0 \\ + \,
\partial_{\mu}\mu_{\star}
\int \frac{d^2q}{(2\pi)^2}\;\frac{1}{\exp
\big[\beta\big(-\mu_{\star}+\frac{\hbar^2 q^2}{2m_{\star}}\big)\big]-1}\;.
\end{multline}
Here
\begin{equation}
n_0 =
\frac{m}{\pi\hbar^2}\,\frac{\ln(1+\exp(\beta\mu))}{\beta}
\end{equation}
is the number density of a free Fermion gas in two dimensions.
Eq. (\ref{ref35}) is identical with the Thouless criterion
\cite{Thoule60} for a superconducting instability, which is
equivalent to the condition that the ladder approximation to the exact
pair field susceptibility
\begin{equation}
\chi_{pair}=\int\limits_{0}^{\beta\hbar}d\tau\;\langle
b_{\boldsymbol{0}}^{}(\tau)\,b_{\boldsymbol{0}}^{}(0)\rangle
\end{equation}
diverges \cite{ToMiYa93}. It is a straightforward generalization of
the usual gap equation to arbitrary coupling. The number equation
(\ref{ref36}) deserves some more comments. Since we have
\begin{equation} \label{ref39}
d = - \frac{1}{2}\,\partial_{\mu}a|_{\beta=\beta_c,\mu=\mu_c}
\end{equation}
quite generally, it is easy to see that
$\partial_{\mu}\mu_{\star} = 2\/$ at $\beta=\beta_c\/$ and
$\mu=\mu_c\/$. Therefore Eq. (\ref{ref36}) has the simple intuitive
interpretation that the total number of particles is split into the
number of free Fermions still present at $(\beta_c,\mu_c)\/$ plus the number
of Fermions already bound together in pairs, whose mean occupation
number is just the Bose distribution. Now formally this distribution
function arises from the summation over the Matsubara frequencies
$\omega_n\/$ in Eq. (\ref{ref34b}) precisely because our coefficient
$a(\boldsymbol{q},\omega_n)\/$ has been expanded only to linear order
in $\omega_n\/$. The omission of the higher order terms in this
expansion is therefore connected with neglecting scattering state
contributions, which would give an additional term in Eq. (\ref{ref36})
beyond the completely free and fully bound number of Fermions. Such a
contribution is important in the three-dimensional case where
true bound states exist
only beyond a critical strength of the coupling \cite{NozSch84,SaRaEn93}.
For our present discussion
of the problem in two dimensions, however, there are only free Fermions or
true bound states. Therefore there is no contribution in Eq. (\ref{ref36})
from scattering states and one expects that the expansion of
$a(\boldsymbol{q},\omega_n)\/$ to linear order in $\omega_n\/$ is reliable
at arbitrary strength of interaction.
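As an illustration of Eq. (\ref{ref35}), the closed-form expression for the coefficient $a$ given in section II can be solved numerically for $T_c$. The Python sketch below does this in the weak-coupling regime, where to a good approximation $\mu_c \simeq \epsilon_F$, so that Eq. (\ref{ref35}) alone fixes $T_c$ (units $\hbar=m=k_B=\epsilon_F=1$); at stronger coupling, Eqs. (\ref{ref35}) and (\ref{ref36}) must of course be solved simultaneously.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

gamma_E = 0.5772156649

def sech2(y):
    # overflow-safe 1/cosh(y)^2
    e = np.exp(-abs(y))
    return (2 * e / (1 + e * e))**2

def bracket(beta, Eb, mu=1.0):
    # dimensionless bracket of the closed form of a; a = 0 is Eq. (35)
    upper = mu + 60.0 / beta        # integrand is negligible beyond this point
    integral, _ = quad(lambda x: np.log(beta * x / 2) * sech2(beta * x / 2),
                       mu, upper)
    return (2 * np.log(4 * np.exp(gamma_E) / np.pi)
            + np.log(beta * Eb / 4)
            + np.log(beta * mu / 2) * np.tanh(beta * mu / 2)
            + 0.5 * beta * integral)

for Eb in (1e-4, 1e-3, 1e-2):
    Tc = 1.0 / brentq(lambda b: bracket(b, Eb), 0.1, 1e5)
    print(Eb, Tc, np.sqrt(2 * Eb) * np.exp(gamma_E) / np.pi)
# the last column is the analytic weak-coupling limit of this condition,
# T_c = (e^gamma/pi) * sqrt(2 E_b eps_F)
\end{verbatim}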
There is however a rather different
problem which appears in the two-dimensional case. As was discussed above,
a superconducting transition exists only in
the Kosterlitz-Thouless sense. This problem shows up
in our Gaussian approximation, since the Bose integral in Eq. (\ref{ref36})
diverges at $\mu_{\star} = 0\/$. Now at this level of approximation
this is just a reflection of the fact that $T_c=0\/$ for an ideal Bose gas
in two dimensions,
because - as pointed out above - the omission of the $|\psi|^4\/$-term
corresponds to neglecting the repulsive interaction between the Bosons.
From our analytical results (\ref{ref25}) and (\ref{ref27}) for $b\/$ and
$d\/$, which do not contain $E_b\/$, it is however straightforward to
calculate the effective interaction $g_{\star}\/$ for arbitrary coupling.
It turns out that $g_{\star}\/$ is a monotonically decreasing function
of the coupling.
In the limits $\beta_c\mu_c \rightarrow \infty\/$ of a BCS-like
or $\beta_c\mu_c \rightarrow -\infty\/$ of a Bose-like system,
we find
\begin{equation} \label{ref40}
g_{\star} = \frac{6\pi\hbar^2}{m} \begin{cases}
\frac{\textstyle 7\zeta (3)}{\textstyle \pi^2}\, (\beta_c\mu_c)^2
& \text{BCS} \\
1 & \text{Bose}
\end{cases}\,\,.
\end{equation}
Thus, in two dimensions, there is always a finite repulsive
interaction between the pairs,
which is of purely statistical origin \cite{DreZwe92,Zwer92}.
In particular $g_{\star}\/$ remains finite in the Bose limit, where it
arises from processes with a virtual exchange of one of the constituent
Fermions in a Bose-Bose scattering process \cite{Haussm93}.
The fact that $g_{\star}\/$ is very large in the weak coupling limit is
simply a consequence of the large pair size
$\xi_0 \sim \beta_c\epsilon_F\/$ (note that
$\mu_c = \epsilon_F\/$ in the weak coupling limit), but does not imply
that the $|\psi|^4\/$-term is particularly relevant in this regime.
On the contrary, using the Gaussian approximation,
it is straightforward to show \cite{stin96} that in this
limit the pro\-duct $g_{\star}\langle |\psi|^2 \rangle\/$ which
effectively renormalizes the Boson chemical potential $\mu_{\star}\/$, is
of order $\sqrt{E_b\epsilon_F}\/$ which is roughly $T_c\/$ in the weak
coupling limit. For BCS-like superconductors the
$|\psi|^4\/$-contribution is therefore irrelevant except very close to
$T_c\/$, a fact which is well known from the
standard theory of conventional superconductors. Now the finite
value of $g_{\star}\/$ guarantees that even in two dimensions
there is a finite critical temperature below which the superfluid density
is nonvanishing.
Unfortunately it is not possible to incorporate the
Kosterlitz-Thouless nature of the transition in an approximate
treatment of the GL functional. However considering the effectively
three-dimensional structure of high-$T_c\/$ superconductors, this problem
may be circumvented by including the motion
in the direction perpendicular to the planes. A very simple method to
incorporate this is provided by adding a transverse contribution
$\epsilon_{\perp}\/$ to the kinetic energy by \cite{GuLoSh95}
\begin{equation} \hspace*{-0.5in}
\frac{\hbar^2 q^2}{2m_{\star}} \rightarrow \frac{\hbar^2 q^2}{2m_{\star}} +
\frac{\hbar^2 q_{\perp}^{\,2}}{2m_{\perp}}\;,
\end{equation}
whose average is equal to the thermal energy $\langle\epsilon_{\perp}
\rangle =T\/$.
Replacing the integral in Eq. (\ref{ref36}) by
\begin{equation}
\int\frac{d^2q}{(2\pi)^2}\;\;\rightarrow L_{\perp}\;\,
\int\frac{d^2q}{(2\pi)^2} \int\frac{dq_{\perp}}{2\pi}
\end{equation}
with $L_{\perp} = 2\pi/ \sqrt{\langle q_{\perp}^{\,2}\rangle}\;\/$
we obtain an effectively
three-dimensional system. This becomes evident by writing
the contribution of the bound pairs in (\ref{ref36}) in the form
\begin{equation} \label{ref43} \hspace*{-1.0cm}
n' = \frac{m_{\star}\,\partial_{\mu}\mu_{\star}\,\sqrt{\beta}}{\pi\hbar^2}\,
\int\limits_0^{\infty}d\epsilon
\;\frac{\sqrt{\epsilon}}{\exp[\beta(-\mu_{\star}+\epsilon)]-1}\;.
\end{equation}
The density of states is thus proportional to $\sqrt\epsilon\/$ as in
three dimensions, making $n'\/$
finite at $\mu_{\star} = 0\/$. Although rather crude, this
approximation gives a value for $T_c\/$ in the Bose limit, which is
very close to the Kosterlitz-Thouless value for the transition
temperature \cite{Minn87}
\begin{equation} \label{ref44}
T_c^{KT} = {0.89} \frac{\pi\hbar^2n}{4m k_B}
\end{equation}
of a dilute hard core Bose gas on a lattice\cite{FisHoh87}
with Boson mass $2m\/$ and number density $n/2\/$.
Using our results for the GL coefficients $a\/$, $c\/$ and $d\/$
and the replacement (\ref{ref43}) in the number equation, we can now
determine both $T_c\/$ and $\mu_c\/$ for arbitrary coupling from
$\mu_{\star}=0\/$ and Eq. (\ref{ref36}). The corresponding results
are shown in Figs. 2 and 3
in units of the characteristic energy scale $\epsilon_F\/$ and
as a function of the dimensionless
effective coupling $E_b/\epsilon_F\/$.
For weak coupling $E_b \ll \epsilon_F\/$, the critical temperature is
monotonic in $E_b\/$, behaving like $T_c \approx
\sqrt{E_b\epsilon_F}\/$. For intermediate coupling, it exhibits a
maximum \cite{DreZwe92}. A similar but less pronounced
behaviour is found in three dimensions, again in the
Gaussian approximation \cite{NozSch84,SaRaEn93}.
In a more refined
self-consistent treatment \cite{Haussm94},
however, the critical temperature is a monotonically increasing function
of the coupling.
It is likely that the same situation
also applies in two dimensions, but unfortunately there is at present no
quantitative theory taking into account the repulsive interaction between
the pairs in this case. In the Bose limit
the transition temperature becomes independent of the original attractive
interaction and is completely determined by the Boson density
$n/2\/$ and the effective mass $m_{\star} \rightarrow 2m\/$.
It approaches a value of about $10\%\/$ of the Fermi
energy, which is likely to be an upper limit for $T_c\/$ in the present
problem.
\begin{figure}
\epsfig{file=kt2d_kap5.ps,scale=0.55}
\caption{\label{fig2}The normalized critical temperature $T_c\/$ versus
the dimensionless coupling strength.}
\end{figure}
The chemical potential $\mu_c\/$ at the transition decreases
monotonically from its weak coupling value $\epsilon_F\/$ to $-E_b/2\/$
in the Bose limit. It changes sign at
$\ln(E_b/\epsilon_F) \approx -1.1\/$, where the behaviour crosses over from
BCS- to Bose-like. (In Fig.~\ref{fig3} we have suppressed a small dip
in $\mu_c\/$ around these couplings, which is an artefact of the
pronounced maximum in $T_c\/$.)
Apart from the chemical potential $\mu_c\/$,
the nature of the transition can also
be inferred from evaluating the number of preformed Bosons at $T_c\/$.
This quantity, which is just half of the contribution (\ref{ref43})
to the number equation, is shown in Fig.~\ref{fig4}.
It is obvious that the nature of the phase transition changes rather
quickly in a range of couplings between $\ln(E_b/\epsilon_F) \approx -2\/$
and $-1\/$. For
smaller couplings, the density of preformed pairs near $T_c\/$ is
essentially negligible and binding occurs simultaneously with
condensation. On the other hand, for
$\ln(E_b/\epsilon_F)\gtrsim -1\/$ basically all pairs are already
present above $T_c\/$ and the transition to superconductivity
is that of a true Bose system.
\begin{figure}
\epsfig{file=lnbecp2d.ps,scale=0.55}
\caption{\label{fig3}The normalized chemical potential $\mu_c\/$ at the
critical point versus coupling.}
\end{figure}
\begin{figure}
\epsfig{file=bos2d_kap5.ps,scale=0.55}
\caption{\label{fig4}Contribution $n'\/$ of bound pairs
at the critical point to the total Fermion number density $n\/$.}
\end{figure}
\section{Characteristic Lengths}
In the following we want to determine both the coherence length $\xi\/$
and the penetration depth $\lambda\/$ as a function of the coupling. The
former is defined both above and below $T_c\/$ and may be obtained from
the GL coefficients simply via
\begin{equation}
\xi^2 = \frac{\hbar^2}{2m}\frac{c}{a}\;.
\end{equation}
The definition of a penetration depth in a two-di\-mensional
super\-conductor has been discussed in section III. Within the Gaussian
approximation we may replace $|\psi_{\infty}|^2\/$ in (31) by
$\mu_{\star}/g_{\star}\/$ which leads to
\begin{equation}
\lambda^2=\lambda_L^2\cdot\frac{bn}{4c|a|}\;.
\end{equation}
Here we have introduced the bare value $\lambda_L\/$ of the London
penetration depth defined by
\begin{equation}
\lambda_L^2=\frac{mc_0^{\,2}}{4\pi n_3e^2}
\end{equation}
with $n_3=n/\delta\/$ the nominal three-dimensional carrier density. Note
that both $\xi\/$ and $\lambda\/$ have been written in terms of the original
static GL-coefficients $a,b\/$ and $c\/$, in order to stress that the
characteristic lengths are independent of the normalization of $\psi\/$,
i.e. the kinetic coefficient $d\/$ does not enter here. Since the
coefficient $a\/$ vanishes at the transition, both $\xi\/$ and
$\lambda\/$ diverge. Now in a full treatment of the GL-functional,
including the $\psi^4\/$-term, the behaviour very close to $T_c\/$
in a single layer would be of the Kosterlitz-Thouless type. The
correlation length would thus diverge like \cite{Minn87}
$\ln{\xi}\sim |T-T_c|^{-1/2}\/$ while $\lambda^{-2}\/$ would jump from zero
to a finite value below $T_c\/$ (see (32)). In the
three-dimensional case, the behaviour very close to $T_c\/$ is that
of a 3d XY-model with nontrivial
but well known critical exponents \cite{Schn94}.
In our Gaussian approximation this complex structure is replaced by a
simple mean field behaviour. However, there is a subtle point even at this
level of approximation. Indeed the coefficient $a\/$ depends on both
temperature and chemical potential, and it is only in the strict
BCS-limit that the latter is a fixed constant equal to $\epsilon_F\/$.
With increasing coupling, however, the chemical potential changes and thus
the relevant limit close to $T_c\/$ is to consider
$a(T,\mu(T))\/$ as $T\to T_c\/$. Now by using the exact relation
(\ref{ref39}), it is straightforward to show \cite{stin96} that
$a(T,\mu(T)) \sim (T-T_c)^2\/$ vanishes quadratically near $T_c\/$. The
resulting critical exponent for the correlation length would thus be
$\nu=1\/$. Indeed this is the exponent expected for an ideal Bose gas in
three dimensions, to which our Gaussian approximation is effectively
equivalent. Now in order to allow a comparison of our results with
measured values of $\xi\/$ and $\lambda\/$, which are found to obey
a mean field behaviour with $\nu=1/2\/$
except very close to $T_c\/$ \cite{kama94},
we neglect the temperature dependence of the chemical
potential near $T_c\/$. As a result
\begin{equation}
a(T,\mu_c)=a'\cdot\frac{T-T_c}{T_c}
\end{equation}
vanishes linearly near $T_c\/$, giving the standard mean field divergence
of $\xi\/$ and $\lambda\/$. Obviously this approximation is only reliable
on the weak coupling side of the transition. As we will see below, however,
this is indeed the relevant regime even in high-$T_c\/$ superconductors.
We thus expect that our results are at least qualitatively reliable for
these systems. For strong coupling, the derivative $a'\/$ of $a(T,\mu_c)\/$
near $T_c\/$ vanishes like $\exp(-\beta_c E_b/2)\/$.
As a result, the
characteristic lengths $\xi\/$ and $\lambda\/$ would increase exponentially
in the Bose limit, which is unphysical. We have therefore restricted
our calculation of the GL coherence length $\xi(0)\/$ defined by
\begin{equation} \label{ref49}
\xi^2(0)=\frac{\hbar^2}{2m}\frac{c}{a'}
\end{equation}
to coupling strengths $\ln (E_b/\epsilon_F)\/$ smaller than about $-1\/$,
where the system starts to cross over to Bose-like behaviour. The corresponding
result for $\xi(0)\/$ in units of $k_F^{-1}\/$ is shown in Fig. \ref{fig5}.
\begin{figure}
\epsfig{file=lnskinull2d_kap6.ps,scale=0.55}
\caption{\label{fig5}The Ginzburg-Landau coherence length $\xi(0)\/$ versus
the coupling strength.}
\end{figure}
It exhibits the expected decrease of the coherence length from
its weak coupling limit
\begin{equation}
k_F\xi(0)|_{\rm BCS}^{} = \sqrt{\frac{7\,\zeta(3)}{8\pi^2}}\;
\frac{\epsilon_F}{T_c}
\end{equation}
to values of order one near the crossover regime, before it starts to rise
again. As was noted above our approximations in determining
$\xi(0)\/$ are no longer reliable in this regime. While in three
dimensions $\xi(0)\/$ is expected to increase like $1/\sqrt{a_B}\;\/$
\cite{Wen90} with $a_B=2a_F\to 0\/$ the relevant Bose-Bose scattering
length \cite{SaRaEn93}, the actual behaviour in two dimensions is
unknown. Fortunately, however, this problem does not arise in determining
the GL-parameter $\kappa=\lambda/\xi\/$ which, in two dimensions,
can be expressed as
\begin{equation} \label{ref52}
\kappa=\left(\frac{m}{8\pi\hbar^2}\cdot
\frac{b\,\delta}{c^2 r_{\rm cl}}\right)^{1/2}\;.
\end{equation}
Here we have introduced the equivalent of the classical electron radius
$r_{\rm cl}=e^2/mc_0^{\,2}\/$ where $m\/$ is the band mass.
Since the problematic coefficient
$a\/$ has dropped out in $\kappa\/$, we can use (\ref{ref52}) to determine the
GL-parameter in the whole regime between weak and strong coupling. In the
BCS-limit $\kappa\/$ is exponentially small, behaving like
$\kappa_{\rm BCS}^{}\approx T_c/\epsilon_F
\cdot(\delta/r_{\rm cl})^{1/2}\/$.
The associated penetration depth $\lambda(0) = \sqrt{3/4}\,\lambda_L\/$
is thus essentially equal to the London value $\lambda_L\/$.
By contrast, for the Bose case $\kappa\/$
approaches a constant value
\begin{equation}
\kappa_{\rm Bose}^{} = \Big( \frac{3\delta}{r_{\rm cl}}\Big)^{1/2}
\end{equation}
which is large compared to one, since $\delta\gg r_{\rm cl}\/$ for realistic
values of the sheet thickness $\delta\/$. The complete dependence
of $\kappa\/$ in the crossover regime
is shown in Fig. \ref{fig6}.
\begin{figure}
\epsfig{file=kappa2d_kap6.ps,scale=0.55}
\caption{\label{fig6}The GL parameter $\kappa\/$ versus
coupling.}
\end{figure}
It is a monotonic function of the binding energy.
Thus with increasing coupling there is always a transition from
type I to type II behaviour in two dimensions, even for a clean
superconductor with no impurities as considered here.
With these results, we are now in a position to compare our simple model
with experimental values for optimally doped high-$T_c\/$ superconductors.
Since the critical temperature is exponentially sensitive to the
coupling strength, and in the crossover regime is also likely to be
considerably reduced by fluctuation effects compared with our result in
Fig.~2, we have refrained from
taking $T_c\/$ as a reliable parameter for adjustment. Instead we have
used the measured values of the slope of the upper critical field
$H_{c_2}\/$ near $T_c\/$, which determines the GL
coherence length $\xi(0)\/$ via
\begin{equation}
\left. \frac{dH_{c_2}}{d\ln T}\right|_{T=T_c} =
\frac{\phi_0}{2\pi\xi^2(0)}\;.
\end{equation}
In order to fix the dimensionless
coupling strength $E_b/\epsilon_F\/$ from the measured values
\cite{WeKwCr89} $\xi(0) = 14\, \mbox{\AA}\/$ for optimally doped YBCO
($T_c = 92\,K\/$) and \cite{MaTaOg92}
$\xi(0) = 24\, \mbox{\AA}\/$ for the corresponding
compound BSCCO ($T_c = 107\,K\/$), we need the in-plane carrier densities
$n\/$, which determine an effective value of $k_F = \sqrt{2\pi n}\/$. (It
should be pointed out that $k_F\/$ is introduced here only as a measure
of the carrier density. In fact, due to the strong attractive interaction,
the actual momentum distribution of the Fermions just above $T_c\/$ will be
far from the standard Fermi distribution, except in the BCS limit.)
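As a rough numerical illustration (our own check, using only the flux quantum
$\phi_0 = h/2e\/$ and the quoted values of $\xi(0)\/$ and $T_c\/$), the slopes
corresponding to these coherence lengths are obtained from the short script
below.
\begin{verbatim}
# Illustrative check (not taken from the cited references): convert xi(0)
# into the slope of H_c2 at T_c via |dH_c2/dlnT| = Phi_0/(2 pi xi(0)^2).
import math
Phi0 = 2.0678e-15                                 # flux quantum h/2e in T m^2
for name, xi0_m, Tc in [("YBCO", 14e-10, 92.0), ("BSCCO", 24e-10, 107.0)]:
    slope_ln = Phi0 / (2 * math.pi * xi0_m**2)    # |dH_c2/dlnT| in tesla
    print(name, round(slope_ln, 1), "T;",
          round(slope_ln / Tc, 2), "T/K near T_c")
\end{verbatim}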
The three-dimensional carrier densities in optimally doped
YBCO and BSCCO are \cite{SeIsMa91} $n_3 = 4\cdot 10^{21}\,cm^{-3}\/$
and \cite{HarMil92} $n_3 = 2\cdot 10^{21}\,cm^{-3}\/$
respectively.
With the corresponding
values \cite{HarMil92}
$\delta = 5.84\,\mbox{\AA}\/$ and $\delta = 9.27\,\mbox{\AA}\/$
of the effective sheet thickness, the resulting Fermi momenta are
$k_F = 0.38\,\mbox{\AA}^{-1}\/$ for YBCO and
$k_F = 0.34\,\mbox{\AA}^{-1}\/$ for BSCCO. The dimensionless ratio
$k_F\xi(0) = 5\/$ and $8\/$ between the coherence length and the average
interparticle spacing then allows us to determine the effective coupling
strength. From Fig. \ref{fig5} we find that $\ln(E_b/\epsilon_F)\/$ is
equal to $-5.1\/$ for YBCO and $-6.0\/$ for BSCCO. As Figs. \ref{fig3}
and \ref{fig4} show, these coupling strengths describe superconductors
which are still on the weak coupling side of the crossover from BCS to
Bose behaviour. For instance the density of preformed Bosons at $T_c\/$
is less than 1\% in both cases (see Fig.~\ref{fig4}).
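The conversion from the quoted three-dimensional densities and sheet
thicknesses to $k_F\/$ and $k_F\xi(0)\/$ can be reproduced with a few lines
(our own elementary consistency check):
\begin{verbatim}
# Consistency check: n = n_3*delta, k_F = sqrt(2 pi n), and k_F*xi(0).
import math
data = {"YBCO": (4e21, 5.84, 14.0), "BSCCO": (2e21, 9.27, 24.0)}
for name, (n3_cm3, delta_A, xi0_A) in data.items():
    n = n3_cm3 * 1e-24 * delta_A           # in-plane density in 1/Angstrom^2
    kF = math.sqrt(2 * math.pi * n)        # in 1/Angstrom
    print(name, "k_F =", round(kF, 2), "1/Angstrom,",
          "k_F*xi(0) =", round(kF * xi0_A, 1))
\end{verbatim}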
In order to check whether our description is consistent, we determine
the associated values of the GL parameter $\kappa\/$ near $T_c\/$.
From the above values of $\delta\/$ and
the band masses \cite{SeIsMa91} $m =3.5\,m_e\/$ for YBCO and
\cite{HarMil92} $m =4.4\,m_e\/$ for BSCCO
($m_e\/$ being the free-electron mass)
we find that $\kappa_{\rm Bose}^{}\/$ is equal to $1.5\cdot 10^3\/$
and $2.1\cdot 10^3\/$ respectively for the two compounds considered here.
From Fig. \ref{fig6} we thus obtain $\kappa = 99\/$ and $\kappa = 87\/$
for optimally doped YBCO and BSCCO. These numbers agree very well
with the experimentally determined values of $\kappa\/$, which are
\cite{KrGrHo89} $\kappa_{\rm exp} = 100\/$ and \cite{MaTaOg92,HarMil92}
$\kappa_{\rm exp} = 86\/$. Our
simple one parameter model therefore gives a consistent quantitative
description of the characteristic lengths $\xi\/$ and $\lambda\/$.
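The quoted values of $\kappa_{\rm Bose}^{}\/$ follow directly from the sheet
thicknesses and band masses; as an elementary check (our own arithmetic,
using the free-electron classical radius $r_e=e^2/m_ec_0^{\,2}\/$):
\begin{verbatim}
# Consistency check: kappa_Bose = sqrt(3*delta/r_cl) with r_cl = r_e/(m/m_e).
r_e = 2.818e-5                        # classical electron radius in Angstrom
for name, delta_A, m_over_me in [("YBCO", 5.84, 3.5), ("BSCCO", 9.27, 4.4)]:
    r_cl = r_e / m_over_me
    print(name, "kappa_Bose =", round((3 * delta_A / r_cl) ** 0.5))
\end{verbatim}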
\section{Conclusion}
To summarize, we have studied the crossover in the superconducting
transition between BCS- and Bose-like behaviour within a GL description.
It has been found that optimally doped high-$T_c\/$ superconductors are still
on the weak coupling side of this crossover, although they are certainly
far away from the BCS-limit. Our microscopic model is characterized by a
single dimensionless parameter, similar to the familiar BCS-Hamiltonian.
While the GL functional has the standard form for arbitrary coupling, its
coefficients depend strongly on the coupling strength. The crossover,
at least in two dimensions, is essentially identical for the s- or
d-wave case. For static properties, only two of the relevant coefficients
$a\/$, $b\/$ and $c\/$ are independent, since the normalization of the order
parameter is arbitrary. Fixing $\xi(0)\/$ from experiment therefore leaves
only one further parameter -- for instance $\kappa\/$ -- as an independent
predicted quantity. The good agreement of $\kappa\/$ with measured values
supports our conclusion that the optimally doped high-$T_c\/$ compounds are
intermediate between BCS and Bose behaviour. Since the crossover regime
is rather narrow, however, weak coupling theories are still a reasonable
approximation for the relevant coupling strengths. This is consistent
with the empirical fact that a weak coupling approach apparently works
well in many cases.
Evidently there are a number of important open questions. They may be
divided into two classes: the first one concerns the problem of a better
and more complete treatment of our model itself. The second class is related
to the question to what extent this model is applicable to high-$T_c\/$
compounds and what are the necessary ingredients for a more realistic
description. Regarding our microscopic Hamiltonian as a given model, it is
obvious that our treatment of the associated GL phenomenology is still
incomplete. In particular the Gaussian approximation is obviously
not sufficient to give a quantitatively reliable result for $T_c\/$ at
intermediate coupling. Moreover, the behaviour of the characteristic
lengths $\xi\/$ and $\lambda\/$ in the strong coupling limit is completely
unknown. In order to go beyond the Gaussian approximation, it is necessary
to include the pair interaction (i.e.\ the $|\psi|^4\/$-term) properly.
Progress in this direction has been made in the three-dimensional
case by Haussmann \cite{Haussm94} and very recently by Pistolesi
and Strinati \cite{PisStr96}. Using a self-consistent and conserving
approximation for the Green- and vertex functions,
Haussmann obtained a smooth increase of $T_c\/$ with coupling, thus
eliminating the unphysical maximum in the crossover regime found in the
Gaussian approximation. This approach is rather different from our present
one and requires extensive numerical work. Pistolesi and Strinati have
performed an essentially analytical calculation of the correlation
length $\xi_0\/$ at zero temperature, using a loop expansion in a functional
approach similar to our present one. They have shown that the pair radius
$\xi_b \approx \hbar/\sqrt{mE_b}\/$ coincides with $\xi_0\/$ not only
in the BCS-limit, but down to values around $k_F\xi_0 \approx 10\/$.
Similar to our results for the GL coherence length $\xi(0)\/$ in Fig.
\ref{fig5}, $k_F\xi_0\/$ reaches a minimum of order one in the crossover
regime before it starts to rise again. Since in three dimensions the
behaviour at strong coupling is that of a weakly interacting Bose gas with
scattering length $a_B = 2a_F \rightarrow 0\/$ \cite{SaRaEn93,Haussm93},
$\xi_0\/$ eventually increases like $k_F^{\,-1}/\sqrt{k_Fa_B}\/$ while
$\xi_b \approx a_B\/$ approaches zero \cite{SaRaEn93}. Unfortunately
for the two-dimensional case, where the Boson interaction $g_{\star}\/$
is finite even at very strong coupling \cite{DreZwe92,Zwer92}, the
Kosterlitz-Thouless nature of the transition makes it very difficult
to improve upon the simple approximations used here. A first step in this
direction was taken by Traven\cite{traven}. He showed that interactions
between the pair fluctuations guarantee a nonvanishing superfluid density
at finite temperature, in agreement with our arguments below
Eq. (\ref{ref40}).
However there seems to be no quantitative calculation of the coherence length
in a two-dimensional Bose-like regime even near zero temperature.
A different problem we want to mention here is that of the proper time
dependent GL theory. Since our quantum GL functional is derived from a
microscopic Hamiltonian, in principle it contains the complete information
about the order parameter dynamics, at least as far as intrinsic effects
are concerned. Neglecting higher order terms in the expansion in
$\omega_n\/$, the resulting equation of motion for the order parameter
in real time $t\/$ is \cite{SaRaEn93}
\begin{equation} \label{time}
d\cdot i\hbar\frac{\partial \psi(\boldsymbol{r},t)}{\partial t} =
\frac{\delta F[\psi]}{\delta\psi^{\star}(\boldsymbol{r},t)}\;.
\end{equation}
Due to the analytic continuation, the coefficient $d = d_1 + i d_2\/$
has now acquired a finite imaginary part $d_2 > 0\/$ which describes
irreversible relaxation. For a better comparison with the standard literature,
it is convenient to choose the conventional order parameter
$\psi_{\rm BCS} = \sqrt{2c}\,z\/$, where the prefactor of the
gradient term is $\hbar^2/4m\/$. With this choice of normalization,
the kinetic coefficient $d_1 = m^{\star}/2m\/$ is identical with the
effective mass discussed in section III. It is then evident from Fig.
\ref{fig1} that a Gross-Pitaevskii-like dynamics where $d_1 =1\/$,
is only valid in the Bose limit, while $d_1 \sim (T_c/\epsilon_F)^2\/$
is exponentially small for weak coupling. Indeed for BCS-like systems it is
$d_2\/$ which is dominant, being of order $T_c/\epsilon_F\/$ in the
three-dimensional case \cite{Haussm94}. This result reflects
the fact that for weak coupling superconductors the order parameter
dynamics is purely relaxing. Going beyond the BCS-limit, the
associated kinetic coefficient $d_2\/$ has only been evaluated in
three dimensions \cite{Haussm94}, where it exhibits a maximum at intermediate
coupling. Its behaviour in the Bose-limit and in two dimensions in general,
however, is completely unknown. Since the incorporation of scattering states
in the three-dimensional case requires going beyond the linear expansion
in $\omega_n\/$, it is likely that a simple first order equation like
Eq. (\ref{time}) is in fact not appropriate for describing the dynamics at
intermediate coupling. It is only at very low temperatures that the situation
becomes simple again. Indeed, from quite general arguments \cite{Stone95},
the dynamics as $T \rightarrow 0\/$ is expected to be of the
Gross-Pitaevskii form, irrespective of the strength of the coupling. Finally
we mention that a proper microscopic
calculation of the complex coefficient $d\/$ is
relevant for understanding the Hall effect in high-$T_c\/$
compounds \cite{Dors92}.
Concerning the question to which extent our model is really applicable to
high-$T_c\/$ superconductors, it is obvious that most of the complexity of
these systems is neglected here. In particular we have assumed that the
normal state is characterized by a given density of effective mass Fermions
with some instantaneous attractive
interaction \cite{pietr}. Such a system will
certainly be very different from a conventional Fermi liquid for
intermediate or strong coupling. Our conclusion that we are still on the weak
coupling side of the crossover, with a negligible density of preformed
pairs, is consistent with the phenomenology of optimally doped (and perhaps
overdoped) high-$T_c\/$ superconductors; it is certainly
inappropriate, however, for the underdoped cuprates. Indeed these systems exhibit
a gap far above $T_c\/$ which may be interpreted in terms of preformed Bosons.
A GL description of underdoped compounds was very recently developed
by Geshkenbein, Ioffe and Larkin \cite{GeIoLa97}. Assuming that Bosons
form far above $T_c\/$ only in parts of the Fermi surface and coexist with
unpaired Fermions through $T_c\/$, they obtain a reasonable description
of the phenomenology of certain underdoped materials. This behaviour is quite
different from that obtained in our model, which completely neglects
any effects of the Coulomb repulsion
and band structure. It is obvious
that for a quantitative description of high-$T_c\/$ superconductors both
Coulomb correlations and band structure effects have to be included, which
requires the use of lattice models. These provide a quantitative description of
these complex systems at least in the normal state and allow the calculation
of microscopic properties like spectral functions, etc \cite{Dopf92}.
Unfortunately with these models it still seems impossible to really explain
how d-wave superconductivity arises from the strongly spin and charge
correlated normal state. As a result, effective models like the
negative-U Hubbard model are often used to discuss microscopic properties
of high-$T_c\/$ compounds \cite{Rand92,Micnas95}.
Our present approach is more phenomenological, starting from a model in which
all microscopic details are neglected except for the fact that we have a
strong pairing interaction in a system of Fermions with given density.
The advantage of such an approach is that it allows a simple calculation
of the relevant lengths $\xi\/$ and $\lambda\/$ and quantities
following from that like the critical fields. The fact that the
resulting GL theory gives a consistent description of optimally doped
cuprates indicates that at least at this level the microscopic details
are not relevant. Certainly our results supporting this view are quite
limited so far and it is necessary to investigate this further. Since the
coefficients of the GL functional near $T_c\/$ are quite generally
determined by the properties in the {\em normal} state, an interesting
future direction would be to see whether superconducting properties
below $T_c\/$ can quantitatively be obtained from the GL functional by
properly incorporating the anomalous behaviour in the normal state.
\acknowledgments
One of the authors (W. Z.) would like to thank A. J. Leggett for his
hospitality at the University of Illinois where this work was completed
and for useful discussions. Part of this work was supported by a grant
from the Deutsche Forschungsgemeinschaft (S. S.).
\section{Introduction}
{\let\thefootnote\relax\footnotetext{The authors are supported by DFG Grant Gr 1569/13-2.}}
In recent years, dissipativity as introduced into systems theory by Willems \cite{Will72a,Will72b} has turned out to be a highly useful concept in order to understand the qualitative behaviour of optimally controlled systems. While related ideas were already present in early works by Willems \cite{Will71} in a linear quadratic setting, the approach has been revived and extended to fully nonlinear problems motivated by the observation of the importance of dissipativity concepts in model predictive control \cite{DiAR10,AnAR12,MuAA15,MuGA15} and for the characterization of the turnpike property \cite{GruM16}. The turnpike property expresses the fact that optimal (and possible also near-optimal) trajectories stay within a vicinity of an optimal equilibrium for most of the time. It can be seen as a way to generalize asymptotic stability properties of optimal equilibria to finite- and infinite-horizon optimal control problems.
While the references just discussed addressed non-discounted optimal control problems, the results from \cite{GGHKW18,GaGT15,GMKW20,Gruene2015} show that central results from this theory can be carried over to discounted optimal control problems and complement detectability-based approaches such as \cite{PBND17,PBND14} for analysing global asymptotic stability of equilibria of discounted optimally controlled systems.
A crucial difference between discounted and non-discounted optimal control problems is that in discounted problems it is much more likely that several asymptotically stable optimal equilibria coexist. Indeed, assuming complete controllability, in non-discounted optimal control two optimal equilibria can only coexist for arbitrary long (or infinite) horizons if they yield exactly the same optimal cost. Otherwise, for sufficiently long time it will always be beneficial to steer the system from the more expensive equilibrium to the cheaper one. In contrast to this, in discounted optimal control, due to the discounting it may not be possible to compensate for the transition cost from one equilibrium to the other with the lower cost of staying in the cheaper equilibrium. Therefore, in the discounted case locally asymptotically stable equilibria with different costs may coexist even for infinite horizon problems. In mathematical economy, where discounted optimal control problems are an important modelling tool, this is a well known fact at least since the pioneering work of Skiba \cite{Skib78} and Dechert and Nishimura \cite{DecN83}, and since then it was observed in many other papers, see, e.g., \cite{HKHF03} and the references therein.
It is the goal of this paper to show that a local version of the strict dissipativity property for discounted optimal control problems can be used for obtaining local convergence results to optimal equilibria. More precisely, we show that in the presence of local strict dissipativity and appropriate growth conditions on the optimal value functions there exist two thresholds for the discount factor $\beta\in(0,1)$, denoted by $\beta_1$ and $\beta_2$, with the following properties: Whenever $\beta\ge \beta_1$, any optimal trajectory that stays near a locally optimal equilibrium converges to this equilibrium. Whenever $\beta\le \beta_2$, any optimal trajectory that starts near this equilibrium will stay near the equilibrium. Together, this yields an interval $[\beta_1,\beta_2]$, which --- provided that $\beta_1\le \beta_2$ holds --- contains the discount factors for which convergence of optimal trajectories to the locally optimal equilibrium holds locally.
We formalize this convergence behaviour using the formalism from turnpike theory (see, e.g., \cite{Gruene2017}), because this provides a convenient way to express these properties in a mathematically precise way also for near-optimal trajectories and to link our results to the recent literature on the relation between dissipativity and turnpike properties. We carry out our analysis in discrete time because this simplifies some of our arguments, yet we think that conceptually similar results can also be achieved for continuous time problems.
The remainder of this paper is organised as follows. In Section \ref{sec:setting} we introduce the precise problem formulation and notation. Section \ref{sec:global} summarises the known results for globally strictly dissipative discounted problems. In Section \ref{sec:local} we show how this result can be reformulated in case that only local strict dissipativity holds, provided the trajectories under consideration satisfy an invariance condition. In Section \ref{sec:stay} we then show that this invariance condition is ``automatically'' satisfied under suitable conditions. Section \ref{sec:main} then contains the main result by bringing together the two results from Sections \ref{sec:local} and \ref{sec:stay}. In Section \ref{sec:ex} we illustrate our results by several examples and the final Section \ref{sec:conclusion} provides a brief concluding discussion.
\section{Setting and preliminaries}\label{sec:setting}
\subsection{System class and notation}
We consider discrete time nonlinear systems of the form
\begin{equation}\label{eq: nsys}
x(k+1)=f(x(k),u(k)),\quad x(0)=x_0
\end{equation}
for a map $f: X\times U\to X$, where $X$ and $U$ are normed spaces. We impose the constraints $(x,u)\in \Y\subset X\times U$ on the state $x$ and the input $u$ and define $\X:=\{x\in X \mid \exists u\in U: (x,u)\in\Y\}$ and $\U:=\{u\in U\mid \exists x\in X: (x,u)\in \Y\}$. A control sequence $u\in \U^N$ is called admissible for $x_0\in\X$ if $\xk\in\Y$ for $k=0,\dots,N-1$ and $x(N)\in\X$. In this case, the corresponding trajectory $x(k)$ is also called admissible. The set of admissible control sequences is denoted by $\U^N(x_0)$. Likewise, we define $\U^\infty(x_0)$ as the set of all control sequences $u\in\U^\infty$ with $\xk\in\Y$ for all $k\in\N_0$. Furthermore, we assume that $\X$ is controlled invariant, i.e.\ that $\U^\infty(x_0)\neq \emptyset$ for all $x_0\in\X$. The trajectories of \eqref{eq: nsys} are denoted by $x_u(k,x_0)$ or simply by $x(k)$ if there is no ambiguity about $x_0$ and $u$.
We will make use of comparison-functions defined by
\begin{align*}
\mathcal{K} :=\{\alpha:\R_0^+\to\R^+_0&| \alpha \text{ is continuous and strictly increasing with }\alpha(0)=0\}\\
\mathcal{K}_\infty :=\{\alpha:\R_0^+\to\R^+_0&|\alpha\in\mathcal{K}, \alpha \text{ is unbounded}\}\\
\mathcal{L}:=\{\delta:\R_0^+\to\R^+_0&|\delta\text{ is continuous and strictly decreasing with} \lim_{t\to\infty}\delta(t)=0\}\\
\mathcal{KL}:=\{\beta:\R_0^+\times\R_0^+\to\R_0^+&|\beta \text{ is continuous, } \beta(\cdot,t)\in\mathcal{K}, \beta(r,\cdot)\in\mathcal{L}\}.
\end{align*}
Moreover, with $\BB_\eps(x_0)$ we denote the open ball with radius $\eps>0$ around $x_0$.
In this paper we consider infinite horizon discounted optimal control problems, i.e. problems of the type
\begin{equation}\label{dis OCP}
\min_{u\in\U^\infty(x_0)} J_\infty(x_0,u) \quad \text{with } J_\infty(x_0,u)=\sum_{k=0}^\infty \beta^k\ell(x(k,x_0),u(k)).
\end{equation}
Herein, the number $\beta \in (0,1)$ is called the discount factor.
For such problems it was shown in \cite{GGHKW18} that if the optimal control problem is strictly dissipative at an optimal equilibrium $x^\beta$, then for sufficiently large $\beta\in(0,1)$ all optimal trajectories converge to a neighbourhood of $x^\beta$. This neighbourhood shrinks down to $x^\beta$ when $\beta\to 1$, cf.\ \cite[Theorem 4.4]{GGHKW18}. Under slightly stronger conditions on the problem data one can even show that the optimal trajectories converge to the optimal equilibrium $x^\beta$ itself and not only to a neighbourhood, cf.\ \cite[Section 6]{GGHKW18}. We will show in Theorem \ref{th: disturninf}, below, that this result can be rewritten in the language of turnpike theory, in which convergence is weakened to the property that the trajectories stay in a neighbourhood of the optimal equilibrium for a (quantifiable) amount of time, but not necessarily forever. While only the optimal trajectories satisfy convergence to the optimal equilibrium, we will show that also near-optimal trajectories satisfy the turnpike property.\footnote{We note that the turnpike property can also be defined for finite horizon optimal control problems. Still, we restrict ourselves to the infinite horizon case, since it was shown in \cite{Gruene2017} that under mild conditions on the problem data the finite horizon turnpike property holds if and only if the infinite horizon turnpike property holds.}
While this global turnpike result follows from a relatively straightforward modification of the arguments in \cite{GGHKW18}, the main question that we want to address in this paper is more difficult: assume that strict dissipativity does not hold globally but only in a neighbourhood of a locally optimal equilibrium $x_l^\beta$. Can we still expect to see a turnpike property of trajectories starting close to $x_l^\beta$?
For the derivation of our technical results, we make frequent use of the dynamic programming principle
\[V_\infty(x_0)= \inf_{u\in\U^1(x_0)}\{\ell(x,u)+\beta V_\infty(f(x_0,u))\},\]
where
\[V_\infty(x_0):=\min_{u\in\U^\infty(x_0)}J_\infty(x_0,u)\]
denotes the optimal value function of \eqref{dis OCP}.
If $u^*\in\U^\infty(x_0)$ is an optimal control sequence for an initial value $x_0\in\X$, i.e.\ if $J_\infty(x_0, u^*)=V_\infty(x_0)$ holds, then the identity
\[V_\infty(x_0)=\ell(x_0,u^*(0))+\beta V_\infty(f(x_0,u^*(0)))\]
holds. Proofs for these statements can be found, e.g., in \cite[Section 4.2]{Gruene2017a}.
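For readers who prefer a computational view, the dynamic programming principle
directly suggests a value iteration. The following sketch (an illustration
with an arbitrarily chosen one-dimensional example, not part of the
subsequent analysis) approximates $V_\infty$ on a state grid.
\begin{verbatim}
# Value iteration sketch for a discounted problem (illustrative example):
# V(x) = min_u { l(x,u) + beta * V(f(x,u)) }
import numpy as np
beta = 0.9
X = np.linspace(-2.0, 2.0, 201)                   # state grid
U = np.linspace(-1.0, 1.0, 41)                    # input grid
f = lambda x, u: np.clip(x + u, X[0], X[-1])      # example dynamics
l = lambda x, u: x**2 + 0.1 * u**2                # example stage cost
V = np.zeros_like(X)
Xg, Ug = np.meshgrid(X, U, indexing="ij")
for _ in range(1000):
    Q = l(Xg, Ug) + beta * np.interp(f(Xg, Ug), X, V)
    V_new = Q.min(axis=1)
    if np.max(np.abs(V_new - V)) < 1e-12:
        break
    V = V_new
print("approximate V_infty(0) =", V[np.argmin(np.abs(X))])
\end{verbatim}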
We denote optimal trajectories by $x^*(k,x_0)$ and we say that a set $\X_{inv}\subset \X$ is forward invariant for the optimally controlled system, if for each $x_0\in\X_{inv}$ it follows that $x^*(k,x_0)\in\X_{inv}$ for all $k\geq 0$ and all optimal trajectories starting in $x_0$.
\section{The global discounted turnpike property}\label{sec:global}
In this section we first consider the optimal control problem \eqref{dis OCP} assuming {\em global} strict dissipativity. We show that under similar technical assumptions and with a similar proof technique as in \cite{GGHKW18} we can obtain a global turnpike result for near-optimal trajectories. To this end, we first introduce discounted strict dissipativity and afterwards we use it to conclude the turnpike property.
\subsection{Global discounted strict dissipativity}
We denote an equilibrium of system \eqref{eq: nsys} in the discounted case by $\equb$ since the equilibria are dependent on the discount factor $\beta\in (0,1)$.
\begin{defn}\label{def:discdiss}
Given a discount factor $\beta\in(0,1)$, we say that the system \eqref{eq: nsys} is discounted strictly dissipative at an equilibrium $\equb$ with supply rate $s:\Y\to\R$ if there exists a storage function $\lambda:\X\to\R$ bounded from below with $\lambda(x^\beta)=0$ and a class $\mathcal{K}_\infty$-function $\alpha$ such that the inequality
\begin{equation}
s(x,u)+\lambda(x)-\beta\lambda(f(x,u))\geq \alpha(\|x-\xb\|)
\label{eq:globdiss}\end{equation}
holds for all $(x,u)\in\Y$ with $f(x,u)\in\X$.
\end{defn}
The following lemma is Proposition 3.2 from \cite{GMKW20}. Since its proof is short and simple, we provide it here for convenience of the readers. It shows that we can replace the stage cost $\ell$ by a modified---usually called \emph{rotated}---stage cost $\tell$ that is positive definite without changing the optimal trajectories.
\begin{lem}\label{prop: traj}
Consider the discounted optimal control problem \eqref{dis OCP} with discount factor $\beta \in(0,1)$ and assume the system \eqref{eq: nsys} is discounted strictly dissipative at an equilibrium $\equb$ with supply rate $s(x,u)=\ell(x,u)-\ell\equb$ and bounded storage function $\lambda$.
Then the optimal trajectories of \eqref{dis OCP} coincide with those of the problem
\begin{equation}\label{mod OCP}
\min_{u\in\U^\infty(x_0)}\widetilde{J}_\infty(x_0,u) \quad\text{with } \widetilde{J}_\infty(x_0,u):=\sum_{k=0}^\infty\beta^k\tell(x(k,x_0),u(k))
\end{equation}
with rotated stage cost
\begin{equation}
\tell(x,u)=\ell(x,u)-\ell\equb+\lambda(x)-\beta\lambda(f(x,u))
\end{equation}
which is positive definite with respect to $\equb$, i.e.\ it satisfies the inequality $\tell(x,u) \ge \alpha(\|x-\xb\|)$ with $\alpha\in\KK_\infty$ from \eqref{eq:globdiss} for all $(x,u)\in\Y$.
\end{lem}
\begin{proof}
We rearrange
\begin{align*}
\widetilde{J}_\infty(x_0,u)&=\sum_{k=0}^\infty\beta^k\tell(x(k,x_0),u(k))\\
&=\sum_{k=0}^\infty \beta^k\left(\ell(x(k,x_0),u(k))-\ell\equb+\lambda(x(k,x_0))-\beta\lambda(x(k+1,x_0))\right)
\end{align*}
and a straightforward calculation shows that
\begin{equation}\label{eq: Jtilde}
\widetilde{J}_\infty(x_0,u)=J_\infty(x_0,u)-\dfrac{\ell\equb}{1-\beta}+\lambda(x_0)-\lim_{k\to\infty}\beta^k\lambda(x_u(k)).
\end{equation}
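For the reader's convenience we spell out this calculation: it is a simple
telescoping argument, since for every $N\in\N$
\begin{align*}
\sum_{k=0}^{N}\beta^k\bigl(\lambda(x(k,x_0))-\beta\lambda(x(k+1,x_0))\bigr)
&=\sum_{k=0}^{N}\Bigl(\beta^k\lambda(x(k,x_0))-\beta^{k+1}\lambda(x(k+1,x_0))\Bigr)\\
&=\lambda(x_0)-\beta^{N+1}\lambda(x(N+1,x_0)),
\end{align*}
and combining this with $\sum_{k=0}^\infty\beta^k\ell\equb=\ell\equb/(1-\beta)$
and letting $N\to\infty$ yields \eqref{eq: Jtilde}.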
Since $\lambda$ is bounded and $\beta\in(0,1)$, the last limit exists and is equal to 0. Hence, the objectives differ only by expressions which are independent of $u$, from which the identity of the optimal trajectories immediately follows. The positive definiteness of $\tell$ follows from its definition, using strict dissipativity and the fact that $\lambda(\xb)=0$ implies $\tell\equb=0$.
\end{proof}
\begin{Bem}
The requirement that $\ell\equb=0$ is the reason for imposing $\lambda(\xb)=0$ as a condition in Definition \ref{def:discdiss}. Readers familiar with dissipativity for undiscounted problems will know that in the undiscounted case $\lambda(\xb)=0$ can be assumed without loss of generality, since if $\lambda$ is a storage function then $\lambda + c$ is a storage function for all $c\in\R$. In the discounted case, this invariance with respect to addition of constants no longer holds.
\end{Bem}
\subsection{The global turnpike property}
In the non-discounted setting it is known that strict dissipativity (together with suitable regularity assumptions on the problem data) implies that optimal as well as near-optimal trajectories exhibit the turnpike property. In the discounted setting, it was observed already in \cite{Gruene2017} that for merely near-optimal trajectories the turnpike property can only be guaranteed on a finite discrete interval $\{0,\ldots,M\}$. Here $M$ depends on the deviation from optimality (denoted by $\delta$ in the following theorem) and tends to infinity as this distance tends to 0. Exactly the same happens here. As the following theorem shows, under the assumption of global discounted dissipativity we obtain precisely the turnpike property from \cite[Definition 4.2]{Gruene2017}.
\begin{thm}\label{th: disturninf}
Consider the infinite horizon optimal control problem \eqref{dis OCP} with discount factor $\beta\in(0,1)$.
Assume that the optimal value function $\widetilde V_\infty$ of the modified problem satisfies $\widetilde V_\infty(x) \le \alpha_V(\|x-x^\beta\|)$ and
\begin{eqnarray} \widetilde V_\infty(x) \le C \inf_{u\in\U}\tilde \ell(x,u)\label{eq:Cdiss}\end{eqnarray}
for all $x\in\X$, a function $\alpha_V\in\mathcal{K}_\infty$, and a constant $C\ge 1$ satisfying
\begin{eqnarray} C < 1/(1-\beta) \label{eq:Cbetadiss}.\end{eqnarray}
Then the optimal control problem has the following turnpike property (cf.\ \cite[Definition 4.2]{Gruene2017}):
For each $\eps>0$ and each bounded set $\X_b\subset \X$ there exists a constant $P>0$ such that for each $M\in\N$ there is a $\delta>0$ such that for all $x_0\in\X_b$ and $u\in\U^\infty(x_0)$ with $J_\infty(x_0,u)\leq V_\infty(x_0)+\delta$, the set $\mathcal{Q}(x_0,u,\eps,M,\beta):=\{k\in\{0,\dots, M\}\mid\|x_u(k,x_0)-x^\beta\|\geq \eps\}$ has at most $P$ elements.
\end{thm}
\begin{proof}
It follows from the proof of Lemma \ref{prop: traj} that the inequality $J_\infty(x_0,u)\leq V_\infty(x_0)+\delta$ implies $\widetilde J_\infty(x_0,u)\leq \widetilde V_\infty(x_0)+\delta$. Together with the dynamic programming principle for $\widetilde V_\infty$ this yields
\begin{eqnarray*} \delta & \geq & \widetilde J_\infty(x_0,u) - \widetilde V_\infty(x_0) \\
& = & \tilde \ell(x_0,u(0)) + \beta \widetilde J_\infty(x_u(1,x_0),u(\cdot+1)) - \inf_{u\in \U}\left\{ \tilde \ell(x_0,u) + \beta \widetilde V_\infty(f(x_0,u))\right\} \\
& \geq & \tilde \ell(x_0,u(0)) + \beta \widetilde J_\infty(x_u(1,x_0),u(\cdot+1))
- \left( \tilde \ell(x_0,u(0)) + \beta \widetilde V_\infty(f(x_0,u(0)))\right)\\
& = & \beta(\widetilde J_\infty(x_u(1,x_0),u(\cdot+1)) - \widetilde V_\infty(f(x_0,u(0)))).
\end{eqnarray*}
This implies $\widetilde J_\infty(x_u(1,x_0),u(\cdot+1))\leq \widetilde V_\infty(x_u(1,x_0))+\delta/\beta$, and proceeding inductively we obtain
\[ \widetilde J_\infty(x_u(k,x_0),u(\cdot+k))\leq \widetilde V_\infty(x_u(k,x_0))+\frac{\delta}{\beta^k} \]
for all $k\in\N$.
This implies
\begin{eqnarray}
\lefteqn{\widetilde V_\infty(x_u(k+1,x_0)) - \widetilde V_\infty(x(k,x_0)) } & & \nonumber \\
& = & \frac{1}{\beta}\Big(\beta \widetilde V_\infty(x_u(k+1,x_0)) - \beta \widetilde V_\infty(x_u(k,x_0))\Big)\nonumber\\
& = & \frac{1}{\beta}\Big(\beta \widetilde V_\infty(x_u(k+1,x_0)) - \widetilde V_\infty(x_u(k,x_0)) + (1-\beta) \widetilde V_\infty(x_u(k,x_0))\Big)\nonumber\\
& \leq & \frac{1}{\beta}\Big(\beta \widetilde J_\infty(x_u(k+1,x_0),u(\cdot+k+1)) - \widetilde J_\infty(x_u(k,x_0),u(\cdot+k)) \nonumber\\
&&+ (1-\beta) \widetilde V_\infty(x_u(k,x_0))\Big) + \frac{\delta}{\beta^{k+1}}\nonumber\\
& = & \frac{1}{\beta}\Big(-\tilde\ell(x_u(k,x_0),u(k)) + (1-\beta) \widetilde V_\infty(x_u(k,x_0))\Big)+\frac{\delta}{\beta^{k+1}}\nonumber\\
& \leq & \frac{1}{\beta}\Big(-\frac{1}{C} \widetilde V_\infty(x_u(k,x_0))+ (1-\beta) \widetilde V_\infty(x_u(k,x_0))\Big) + \frac{\delta}{\beta^{k+1}}\nonumber\\
& = & \frac{\kappa}{\beta} \widetilde V_\infty(x_u(k,x_0)) +\frac{\delta}{\beta^{k+1}} \label{eq:case2}
\end{eqnarray}
where $\kappa = (1-\beta)-1/C < 0$ because of \eqref{eq:Cbetadiss}.
This implies that for fixed $M\in\N$ and $k\in\{0,\ldots,M\}$ the function $\widetilde V_\infty$ is a practical Lyapunov function. Using Theorem 2.4 from \cite{Gruene2014}, restricted to $\{0,\ldots,M\}$, and the fact that $\X_b$ is bounded, we can conclude that there is a sequence $\eta_k\to 0$ (depending on $\X_b$) and a function $\gamma\in\mathcal{K}_\infty$ with
\[ \|x_u(k,x_0)-x^\beta\| \le \eta_k + \gamma(\delta/\beta^{k+1}) \leq \eta_k + \gamma(\delta/\beta^M)\]
for all $k\in\{0,\ldots,M\}$.
This implies the desired claim by choosing $P\in\N$ (depending on $\eps$ and $\eta_k$, hence on $\X_b$) such that $\eta_k < \eps/2$ for all $k\ge P$ and $\delta>0$ (depending on $\beta$, $\eps$ and $M$) such that $\gamma(\delta/\beta^M)<\eps/2$.
\end{proof}
For an illustration of the described turnpike property we refer to Fig.~\ref{figure}. We note again that in the formulation of the discounted turnpike property the level $\delta$, which measures the deviation from optimality of the trajectory $x_u(\cdot,x_0)$, depends on $M$. For guaranteeing the turnpike property on $\{0,\ldots,M\}$, $\delta\to 0$ may be required if $M\to\infty$, cf.\ also Remark \ref{rem:suffcond} (iv).
\begin{figure}[htb]
\begin{center}
\includegraphics[width= 0.5\textwidth]{Fig1.eps}
\caption{Illustration of the set $\mathcal{Q}(x_0,u,\eps,M,\beta)$}\label{figure}
\end{center}
\end{figure}
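As a purely numerical illustration of this behaviour (a scalar discounted
linear--quadratic example chosen for illustration only, with optimal
equilibrium $x^\beta=0$ and no constraints), the following sketch computes
the optimal feedback from the standard discounted Riccati fixed point and
shows the resulting optimal trajectory approaching the turnpike.
\begin{verbatim}
# Discounted LQ illustration: x+ = a*x + b*u, l(x,u) = q*x^2 + r*u^2.
# Riccati fixed point: P = q + beta*a^2*P - (beta*a*b*P)^2/(r + beta*b^2*P).
a, b, q, r, beta = 1.1, 1.0, 1.0, 1.0, 0.9
P = q
for _ in range(10000):
    K = beta * a * b * P / (r + beta * b**2 * P)   # optimal feedback gain
    P_new = q + beta * a**2 * P - beta * a * b * P * K
    if abs(P_new - P) < 1e-13:
        break
    P = P_new
x, traj = 5.0, [5.0]
for k in range(20):
    x = a * x + b * (-K * x)                       # optimal control u = -K*x
    traj.append(x)
print("closed-loop factor a - b*K =", round(a - b * K, 4))
print("x(0..5) =", [round(v, 3) for v in traj[:6]])
\end{verbatim}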
The following remark discusses aspects of the assumptions of Theorem \ref{th: disturninf}. For the turnpike property to hold, it is obviously necessary that the state of the system can be steered to $\xb$, at least asymptotically. This is made precise in part (i) of the remark. Part (ii) shows that if the state can be steered to $\xb$ fast enough, then a constant $C$ satisfying \eqref{eq:Cdiss} for all $\beta\in(0,1)$ holds. Finally, part (iii) of the remark discusses how inequality \eqref{eq:Cdiss} can be relaxed if such a $C$ cannot be found.
\begin{Bem} \begin{itemize}
\item[(i)] A necessary condition for the turnpike property to hold is that for each $\eps>0$, each bounded subset $\X_b\subseteq\X$ and each $x_0\in\X_b$ there exists a control sequence $u\in \U^{P+1}(x_0)$ with $x_u(k,x_0)\in \BB_\eps(\xb)$ for some $k\le P+1$, where $P$ is the constant from the turnpike property in Theorem \ref{th: disturninf}. This is immediately clear, because if such a control does not exist, then the set $\QQ(x_0,u,\eps,M,\beta)$ contains more than $P$ elements for all $u$.
\item[(ii)] If a constant $C$ satisfying \eqref{eq:Cdiss} for all $\beta\in(0,1)$ exists, then \eqref{eq:Cbetadiss} will hold for all sufficiently large $\beta\in(0,1)$. A sufficient condition for the existence of such a $C$ is the following exponential stabilizability assumption of the cost at the equilibrium $\equb$: there are constants $\sigma,\lambda>0$ such that for each $x_0\in\X$ there is $u\in\U^\infty(x_0)$ with
\begin{equation} \tell(x_u(k,x_0),u(k)) \le \sigma e^{-\lambda k}\inf_{\hat u \in \U} \tell(x_0,\hat u). \label{eq:expcost}\end{equation}
Then, since $\tell\ge 0$ we obtain
\begin{align*} \widetilde V_\infty(x_0) & \le \sum_{k=0}^\infty \beta^k\tell(x_u(k,x_0),u(k)) \le \sum_{k=0}^\infty \tell(x_u(k,x_0),u(k))\\
& \le \sum_{k=0}^\infty \sigma e^{-\lambda k}\inf_{\hat u \in \U} \tell(x_0,\hat u) = \frac{\sigma}{1-e^{-\lambda}} \inf_{\hat u \in \U} \tell(x_0,\hat u), \end{align*}
implying \eqref{eq:Cdiss} with $C=\sigma/(1-e^{-\lambda})$. We note that \eqref{eq:expcost} holds in particular if the system itself is exponentially stabilizable to $\xb$ with exponentially bounded controls and $\tell$ is a polynomial\footnote{We could further relax this assumption to $\tell$ being bounded by $C_1 P$ and $C_2 P$ from below and above, respectively, for constants $C_1>C_2>0$ and a polynomial $P$.}. Exponential stabilizability of the system, in turn, follows locally around $\xb$ from stabilizability of its linearization in $\xb$. If, in addition, the necessary condition from part (i) of this remark holds, then local exponential stabilizability implies exponential stabilizability for bounded $\X$. We refer to \cite[Section 6]{GGHKW18} for a more detailed discussion on these conditions.
\item[(iii)] If a $C$ meeting \eqref{eq:Cbetadiss} and \eqref{eq:Cdiss} for all $x\in\X$ does not exist, then we may still be able to find a $C$ satisfying \eqref{eq:Cbetadiss} and \eqref{eq:Cdiss} for all $x\in\X$ with $\vartheta \le \|x-x^\beta\| \le \Theta$, for parameters $0\le\vartheta<\Theta$. In this case we can follow the reasoning in the proof of Corollary 4.3 from \cite{GGHKW18} to conclude that we still obtain a turnpike property for $\eps>\eps_0$ and $\X_b=\BB_\Delta(\xb)\cap \X$, with $\eps_0\to 0$ as $\vartheta\to 0$ and $\Delta\to\infty$ as $\Theta\to\infty$.
\item[(iv)] Optimal trajectories, i.e., trajectories for which $J_\infty(x_0,u)=V_\infty(x_0)$ holds, satisfy the assumptions of Theorem \ref{th: disturninf} for each $\delta>0$. Hence, the assertion of the theorem holds for each $\eps>0$ and each $M\in\N$, implying that $x_u(k,x_0)$ converges to $x^\beta$ as $k\to\infty$.
\end{itemize}
\label{rem:suffcond}
\end{Bem}
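The interplay between \eqref{eq:expcost}, \eqref{eq:Cdiss} and
\eqref{eq:Cbetadiss} in part (ii) of Remark \ref{rem:suffcond} can be made
concrete with a few arbitrarily chosen constants (a purely numerical
illustration):
\begin{verbatim}
# Illustration of Remark (ii): C = sigma/(1-exp(-lambda)) and
# condition (eq:Cbetadiss) then requires beta > 1 - 1/C.
import math
for sigma, lam in [(2.0, 0.5), (5.0, 0.2), (10.0, 1.0)]:
    C = sigma / (1.0 - math.exp(-lam))
    print("sigma=%.1f, lambda=%.1f: C=%.2f, need beta > %.3f"
          % (sigma, lam, C, 1.0 - 1.0 / C))
\end{verbatim}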
\section{The local discounted turnpike property assuming invariance}\label{sec:local}
In the previous section, we have shown that an equilibrium at which the system is globally strictly dissipative has the turnpike property. Now, we consider an equilibrium denoted by $\equbl$ at which discounted strict dissipativity holds only locally, i.e., for all $x$ in a neighbourhood $\X_{\NN}$ of $\xl$, in the following sense.
\begin{defn}
Given a discount factor $\beta\in(0,1)$, we say that the system \eqref{eq: nsys} is locally discounted strictly dissipative at an equilibrium $\equbl$ with supply rate $s:\Y\to\R$ if there exists a storage function $\lambda:\X\to\R$ bounded from below with $\lambda(\xl)=0$ and a class $\mathcal{K}_\infty$-function $\alpha_\beta$ such that the inequality
\begin{equation}\label{ineq: dis.diss}
s(x,u)+\lambda(x)-\beta\lambda(f(x,u))\geq \alpha_\beta(\|x-x_l^\beta\|)
\end{equation}
holds for all $(x,u)\in\X_\mathcal{N}\times \U$.
Further, we say that system \eqref{eq: nsys} is locally discounted strictly $(x,u)$-dissipative at the equilibrium $\equbl$ with supply rate $s:\X\times \U\to\R$ if the same holds with the inequality
\begin{equation}
s(x,u)+\lambda(x)-\beta\lambda(f(x,u))\geq \alpha_\beta(\|x-x_l^\beta\|+\|u-u_l^\beta\|).
\end{equation}
\label{def:ldiss}\end{defn}
As in the global case we define the rotated stage cost by
\begin{equation}\label{eq: rotstcost b}
\tell(x,u) := \ell(x,u)-\ell\equbl +\lambda(x)-\beta\lambda(f(x,u)).
\end{equation}
Obviously, with this definition Lemma \ref{prop: traj} remains valid. Moreover, for $x\in\X_{\NN}$ the function $\tell$ satisfies the same properties as in the globally dissipative case. This will enable us to derive a local turnpike property, provided the neighbourhood $\X_{\NN}$ contains an invariant set $\X_{inv}\subset \X_{\NN}$ for the optimally controlled system. The following lemma gives a consequence of this assumption for the modified optimal value function, which will be important for concluding the local turnpike property.
\begin{lem}\label{lem:lbound} Consider the optimal control problem \eqref{dis OCP} with given discount factor $\beta\in (0,1)$ and assume that the system is locally strictly dissipative in $\equbl\in\X_\mathcal{N}\subset \X$. Consider a subset $\X_{inv}\subset\X_\mathcal{N}$ such that all optimal solutions $x^*(k,x_0)$ with $x_0\in\X_{inv}$ satisfy $x^*(k,x_0) \in \X_{inv}$ for all $k\ge 0$.
Then the modified optimal value function $\widetilde V_\infty$ satisfies
\begin{equation} \widetilde V_\infty(x) \ge \alpha_\beta(\|x-x_l^\beta\|) \label{eq:lbound}
\end{equation}
for all $x\in\X_{inv}$.
\end{lem}
\begin{proof}
For all $x\in\X_\mathcal{N}$ and $u\in\U$ the modified cost satisfies $\tilde\ell(x,u) \ge \alpha_\beta(\|x-x_l^\beta\|)\ge 0$. This implies
\[ \widetilde V_\infty(x_0) = \sum_{k=0}^\infty \beta^k\tilde\ell(x(k,x_0),u(k)) \ge \sum_{k=0}^\infty \beta^k\alpha_\beta(\|x(k,x_0)-x_l^\beta\|) \ge \alpha_\beta(\|x_0-x_l^\beta\|),\]
which shows the claim.
\end{proof}
The following now gives a local version of Theorem \ref{th: disturninf}.
\begin{thm}\label{thm: localturn}
Consider the infinite horizon optimal control problem \eqref{dis OCP} with discount factor $\beta\in(0,1)$ and assume that the system is locally strictly dissipative at $\equbl\in\X_\mathcal{N}\subset \X$. Consider a subset $\X_{inv}\subset\X_\mathcal{N}$ such that all optimal solutions $x^*(k,x_0)$ with $x_0\in\X_{inv}$ satisfy $x^*(k,x_0) \in \X_{inv}$ for all $k\ge 0$ and suppose that the assumptions of Theorem \ref{th: disturninf} hold for all $x \in \X_{inv}$.
Then the optimal control problem has the following turnpike property on $\X_{inv}$:
For each $\eps>0$ and each bounded set $\X_b\subset \X_{inv}$ there exists a constant $P>0$ such that for each $M\in\N$ there is a $\delta>0$ such that for all $x_0\in\X_b$ and $u\in\U^\infty(x_0)$ with $J_\infty(x_0,u)\leq V_\infty(x_0)+\delta$ and $x_u(k,x_0)\in\X_{inv}$ for all $k\in\{0,\ldots,M\}$, the set $\mathcal{Q}(x_0,u,\eps,M,\beta):=\{k\in\{0,\dots, M\}\mid\|x_u(k,x_0)-x_l^\beta\|\geq \eps\}$ has at most $P$ elements.
\end{thm}
\begin{proof}
The proof proceeds completely analogously to the proof of Theorem \ref{th: disturninf}, using the fact that all inequalities used in this proof remain valid as long as the considered solutions stay in $\X_{inv}$, which is guaranteed by the assumptions. We note that Lemma \ref{lem:lbound} is needed for establishing the lower bound on $\widetilde V_\infty$ required from a practical Lyapunov function.
\end{proof}
\begin{rem} Instead of assuming the existence of the invariant set $\X_{inv}$ we could also assume \eqref{eq:lbound} for all $x\in\X_{\mathcal{N}}$. Then by standard Lyapunov function arguments the largest sublevel set of $\widetilde V_\infty$ contained in $\X_{\mathcal{N}}$ is forward invariant for the optimal solutions and can then be used as set $\X_{inv}$. Using \eqref{eq:case2} we can even ensure that this sublevel set is also forward invariant for all solutions satisfying $J_\infty(x_0,u)\leq V_\infty(x_0)+\delta$, provided $\delta>0$ is sufficiently small. Hence, for this choice of $\X_{inv}$ the assumption that $x_u(k,x_0)\in\X_{inv}$ for all $k\in\{0,\ldots,M\}$ in Theorem \ref{thm: localturn} is automatically satisfied if $\delta$ is not too large.
\end{rem}
\section{Optimal trajectories stay near a locally dissipative equilibrium}\label{sec:stay}
Theorem \ref{thm: localturn} shows that the local turnpike property holds if the optimal solutions stay in the neighbourhood of $\xl$ in which the strict dissipativity property holds. In this section we show that this condition is ``automatically'' satisfied for appropriate discount factors. This will enable us to conclude a local turnpike property from local strict dissipativity. To this end, we aim to show that there exists a range of discount factors $\beta$ for which it is more favourable to stay near the locally dissipative equilibrium than to move to other parts of the state space. The first lemma we need to this end shows a property of trajectories that move out of a neighbourhood of $\xl$. In contrast to the previous result, now we need the stronger $(x,u)$-dissipativity.
\begin{lem}\label{lem: ball}
Consider a discounted optimal control problem \eqref{dis OCP} subject to system \eqref{eq: nsys} with continuous $f$. Assume local strict $(x,u)$-dissipativity at an equilibrium $\equbl$ according to Definition \ref{def:ldiss} and let $\rho>0$ be such that $\mathcal{B}_\rho(x_l^\beta)\subset\X_{\NN}$ holds for the neighbourhood $\X_{\NN}$ from Definition \ref{def:ldiss}. Then there exists $\eta >0$ such that for each $K\in\N$ and any trajectory $x(\cdot)$ with $x_0=x(0)\in\BB_\eta(\xl)$ and $x(K)\notin \mathcal{B}_\rho(x_l^\beta)$ there is an $M\in\{0,\dots, K-1\}$ such that $x(0),\ldots,x(M)\in \mathcal{B}_\rho(\xl)$ and either
\[
\text{(i) } x(M)\in\mathcal{B}_\rho(x_l^\beta)\backslash\mathcal{B}_\eta(x_l^\beta)
\quad \text{or} \quad
\text{(ii) } \|u(M)-u_l^\beta\|\geq \eta\]
holds.
\end{lem}
\begin{proof}
The continuity of $f$ implies that there exists $\eps>0$ such that $\|f(x,u)-x_l^\beta\|<\rho$ for all $(x,u)\in\Y$ with $\|x-x_l^\beta\|<\eps$ and $\|u-u_l^\beta\|<\eps$. We let $K_{\min}$ be minimal with $x(K_{\min})\notin\mathcal{B}_\rho(x_l^\beta)$, set $\eta:=\min\{\eps, \rho\}$, and claim that this implies the assertion for $M=K_{\min}-1$. Note that by minimality of $K_{\min}$ we have $x(0),\ldots,x(M)\in\mathcal{B}_\rho(x_l^\beta)$.
We prove this claim by contradiction. To this end, we assume that for $M=K_{\min}-1$ neither assertion (i) nor assertion (ii) holds. This implies on the one hand that $\|x(M)-x_l^\beta\|<\eta$, since $x(M)\in\BB_\rho(\xl)$ by minimality of $K_{\min}$ and (i) is not fulfilled. On the other hand, it implies $\|u(M)-u_l^\beta\|<\eta$, because (ii) does not hold. Then, however, since $\eta \le \eps$, the continuity of $f$ implies
\[\|x(K_{\min})-x_l^\beta\| = \|f(x(M),u(M))-x_l^\beta\|< \rho.\]
This means that $x(K_{\min})\in\BB_\rho(\xl)$, which is a contradiction to the choice of $K_{\min}$. Hence, either assertion (i) or assertion (ii) must hold for $M=K_{\min}-1$.
\end{proof}
The next lemma shows that the behavior characterized in Lemma \ref{lem: ball} induces a lower bound for the rotated discounted functional $\widetilde{J}_\infty$ from \eqref{mod OCP} along trajectories that start in a neighborhood of $\xl$ and leave this neighborhood. To this end, we note that even if merely local strict dissipativity holds, the modified stage cost $\tell$ from \eqref{eq: rotstcost b} is well defined, since $\lambda$ is defined for all $x\in\X$. However, the inequality $\tell(x,u) \ge \alpha_\beta(\|x-\xl\|+\|u-\ul\|)$ and, more generally, positivity of $\tell$ are only guaranteed for $x\in\X_{\NN}$.
\begin{lem}\label{lem: optimal}
Let the assumptions of Lemma \ref{lem: ball} hold. In addition, assume that $\lambda$ from Definition \ref{def:ldiss} is bounded and the stage cost $\ell$ is bounded from below. Then, there exists $\beta^\star\in(0,1)$ with the following property: for any $\beta\in(0,\beta^\star)$ and any $K\in\N$ there is $\sigma(\beta,K)>0$ such that for any trajectory $x(\cdot)$ with $x_0=x(0)\in\BB_\eta(\xl)$ and $x(P)\notin \mathcal{B}_\rho(x_l^\beta)$ for some $P\in\{1,\ldots,K\}$ the inequality
\begin{equation} \widetilde J_\infty(x_0,u) \ge \sigma(\beta,K) \label{ineq: goal}\end{equation}
holds.
\end{lem}
\begin{proof}
First observe that boundedness from below of $\ell$ and boundedness of $\lambda$ imply boundedness from below of $\tell$. Let $\tell_{\min} := \inf_{(x,u)\in\Y} \tell(x,u)$ and note that $\tell_{\min}\le 0$, since $\tell\equbl=0$. Moreover, local dissipativity implies that $\tell(x,u)\ge 0$ for all $x\in\X_{\NN}$ and all $u\in\U$.
Since the trajectory under consideration satisfies the assumptions of Lemma \ref{lem: ball} with $K=P$, there exists $M\in\{0,\dots,P-1\}$ such that either assertion (i) or assertion (ii) of this lemma holds. In case (i), we obtain that
\[ \tell(x(M),u(M)) \ge \alpha_\beta(\|x(M)-\xl\|) \ge \alpha_\beta(\eta) \]
and in case (ii) we obtain
\[ \tell(x(M),u(M)) \ge \alpha_\beta(\|u(M)-\ul\|) \ge \alpha_\beta(\eta). \]
Hence, we get the same inequality in both cases and we abbreviate $\delta:=\alpha_\beta(\eta)>0$.
In addition, Lemma \ref{lem: ball} yields $x(0),\ldots,x(M)\in \mathcal{B}_\rho(\xl)\subset\X_{\NN}$, which implies $\tell(x(k),u(k))\ge 0$ for all $k=0,\ldots,M-1$, and the lower bound on $\tell$ implies $\tell(x(k),u(k))\ge\tell_{\min}$ for all $k\ge M+1$. Together this yields
\begin{align*} \widetilde J_\infty(x_0,u) = & \sum_{k=0}^\infty\beta^k\tell(x(k,x_0),u(k))\\
=&\sum_{k=0}^{M-1}\beta^k\underbrace{\tell(x(k,x_0),u(k))}_{\ge 0}+\beta^M\underbrace{\tell(x(M,x_0),u(M))}_{\ge \delta}+\sum_{k=M+1}^\infty\beta^k\underbrace{\tell(x(k,x_0),u(k))}_{\ge \tell_{\min}}\\
\geq& \beta^M\delta+\dfrac{\beta^{M+1}}{1-\beta}\tell_{\min} = \dfrac{\beta^M}{1-\beta}\left(\left(\tell_{\min}-\delta\right)\beta+\delta\right).
\end{align*}
We now claim that the assertion holds for $\sigma = \frac{\beta^K\delta}{2(1-\beta)} \le \frac{\beta^M\delta}{2(1-\beta)}$. To this end, it is sufficient to show the existence of $\beta^\star$ with
\[ \dfrac{\beta^M}{1-\beta}\left(\left(\tell_{\min}-\delta\right)\beta+\delta\right)\ge \frac{\beta^M\delta}{2(1-\beta)}\]
for all $\beta\in(0,\beta^\star)$.
This is equivalent to
\[ \dfrac{\beta^M}{1-\beta}\left(\left(\tell_{\min}-\delta\right)\beta+\frac{\delta}{2}\right)\ge 0\;\; \Leftrightarrow \;\; \left(\tell_{\min}-\delta\right)\beta+\frac{\delta}{2}\ge 0,\]
where both equivalences hold because $\beta^M/(1-\beta)>0$. Since $\tell_{\min}-\delta<0$, this inequality holds for all $\beta\in(0,\beta^\star)$ with $\beta^\star=\delta/(2(\delta-\tell_{\min}))$.
\end{proof}
\begin{rem}
The choice of the fraction $\frac 1 2$ for $\sigma$ in the proof of Lemma \ref{lem: optimal} is arbitrary. We can also use a more general fraction $\frac{1}{k+1}$ with $k\in\N$. Then, with the same calculation as above we obtain $\beta^\star = \dfrac{k}{k+1}\dfrac{\delta}{\delta-\tell_{\min}}$.
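Spelled out, replacing $\frac{1}{2}$ by $\frac{1}{k+1}$ in the proof requires
\[ \left(\tell_{\min}-\delta\right)\beta+\delta \ge \frac{\delta}{k+1}
\;\;\Leftrightarrow\;\; \beta \le \frac{k}{k+1}\,\frac{\delta}{\delta-\tell_{\min}}, \]
which yields the stated value of $\beta^\star$.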
\end{rem}
Based on the estimate from Lemma \ref{lem: optimal} we can now conclude that near-optimal solutions starting near $\xl$ stay in $\X_{\NN}$ for a certain amount of time.
\begin{lem} Consider a discounted optimal control problem \eqref{dis OCP} subject to system \eqref{eq: nsys} with $f$ continuous and stage cost $\ell$ bounded from below. Assume local strict $(x,u)$-dissipativity at an equilibrium $\equbl$ according to Definition \ref{def:ldiss} with bounded storage function $\lambda$. Assume furthermore that there is $\gamma\in\KK_\infty$ and $\hat\beta\in(0,1]$ such that $|\widetilde V_\infty(x)| \le \gamma(\|x-\xl\|)$ for all $x\in\X_{\NN}$ and all $\beta\in(0,\hat\beta]$. Then there exists $\beta_2\in(0,1)$ with the following property: for any $\beta\in(0,\beta_2)$ and any $K\in\N$ there exists a neighbourhood $\BB_{\eps(\beta,K)}(\xl)$ and a threshold value $\theta(\beta,K)>0$ such that all trajectories with $x_0\in \BB_{\eps(\beta,K)}(\xl)$ and $J_\infty(x_0,u) < V_\infty(x_0) + \theta(\beta,K)$ satisfy $x(k)\in \X_{\NN}$ for all $k\in\{0,\ldots,K\}$.
\label{lem:stay}\end{lem}
\begin{proof} We choose $\beta_2$ as the minimum of $\beta^\star$ from Lemma \ref{lem: optimal} and $\hat\beta$. We further use $\sigma(\beta,K)>0$ from Lemma \ref{lem: optimal} to set $\eps(\beta,K):= \gamma^{-1}(\sigma(\beta,K)/2)$ and $\theta(\beta,K):= \sigma(\beta,K)/2$. Now consider a trajectory meeting the assumptions and observe that since $J_\infty$ and $\widetilde J_\infty$ differ only by a term that is independent of $u(\cdot)$, the assumption $J_\infty(x_0,u) < V_\infty(x_0) + \theta(\beta,K)$ together with the assumption on $x_0$ implies
\[ \widetilde J_\infty(x_0,u) < \widetilde V_\infty(x_0) + \theta(\beta,K) < \gamma(\eps(\beta,K)) + \theta(\beta,K).\]
The definition of $\theta$ and $\eps$ then implies
\[\widetilde J_\infty(x_0,u) < \sigma(\beta,K)/2 + \sigma(\beta,K)/2 = \sigma(\beta,K). \]
Since by Lemma \ref{lem: optimal} any trajectory leaving $\X_{\NN}$ (and thus also $\BB_\rho(\xl)$) up to time $K$ has a rotated value satisfying
\[\widetilde J_\infty(x_0,u) \ge \sigma(\beta,K), \]
the trajectory under consideration cannot leave $\X_{\NN}$ for $k\in\{0,\ldots,K\}$.
\end{proof}
\section{The local discounted turnpike property without assuming invariance}
\label{sec:main}
With the preparations from the previous sections, we are now able to formulate our main theorem on the existence of a local turnpike property.
\begin{thm}\label{th: locturnpike}
Consider a discounted optimal control problem \eqref{dis OCP} subject to system \eqref{eq: nsys} with $f$ continuous and stage cost $\ell$ bounded from below. Assume local strict $(x,u)$-dissipativity at an equilibrium $\equbl$ according to Definition \ref{def:ldiss} with bounded storage function $\lambda$ on $\X_{\NN}$. Assume furthermore that there is $\gamma\in\KK_\infty$ and $\hat\beta\in(0,1]$ such that $|\widetilde V_\infty(x)| \le \gamma(\|x-\xl\|)$ for all $x\in\X_{\NN}$ and all $\beta\in(0,\hat\beta)$, and that there is an interval $[\beta_1,\beta^*]$ of discount rates with $\beta_1<\hat\beta$, such that for each $\beta\in(\beta_1,\beta^*)$ the assumptions of Theorem \ref{th: disturninf} hold for all $x \in \X_{\NN}$.
Then there is $\beta_2\in(0,1)$ such that for all $\beta\in (\beta_1,\beta_2)$ there exists a neighbourhood $\NN$ of $\xl$ on which the system exhibits a local turnpike property in the following sense:
For each $\eps>0$ there exists a constant $P>0$ such that for each $M\in\N$ there is a $\delta>0$, such that for all $x_0\in \NN$ and all $u\in\U^\infty(x_0)$ with $J_\infty(x_0,u)\leq V_\infty(x_0)+\delta$, the set $\mathcal{Q}(x,u,\eps,M,\beta):=\{k\in\{0,\dots, M\}\mid\|x_u(k,x_0)-x^\beta\|\geq \eps\}$ has at most $P$ elements.
In particular, if $J_\infty(x_0,u) = V_\infty(x_0)$, i.e., if the trajectory is optimal, then for each $\eps>0$ the set $\mathcal{Q}(x,u,\eps,\infty,\beta):=\bigcup_{M\in\N}\mathcal{Q}(x,u,\eps,M,\beta)$ has at most $P$ elements, implying the convergence $x_u(k,x_0)\to x^\beta$ as $k\to\infty$.
\end{thm}
\begin{proof} The idea of the proof is to use $\beta_2$ from Lemma \ref{lem:stay} and, for each $\beta\in(\beta_1,\beta_2)$, to construct a neighbourhood $\NN$ of $\xl$ and a $\delta>0$ such that all trajectories starting in $x_0\in\NN$ and satisfying $J_\infty(x_0,u)\leq V_\infty(x_0)+\delta$ stay in $\NN$ for all future times. Then the turnpike property follows from Theorem \ref{thm: localturn} applied with $\X_{inv}=\NN$.
To this end, we take $\beta_2$ from Lemma \ref{lem:stay}, fix $\beta\in(\beta_1,\beta_2)$, and consider the neighbourhood $\BB_{\eps(\beta,1)}(\xl)$ and the threshold value $\theta(\beta,1)$ from Lemma \ref{lem:stay} for $K=1$. We choose $\NN$ as the largest sublevel set of $\widetilde V_\infty$ that is contained in $\BB_{\eps(\beta,1)}(\xl)$ and denote the level by $\lambda>0$, i.e., $\NN=\{x\in\X_{\NN}\,|\, \widetilde V_\infty(x) < \lambda\}$. We abbreviate $\kappa = (1-\beta)-1/C$, observing that $\kappa<0$ because of \eqref{eq:Cdiss} (cf.\ also the proof of Theorem \ref{th: disturninf}), and set
\[ \delta:= \beta^M\min\left\{ \theta(\beta,1), -\frac{\kappa\lambda}{2\beta},\frac \lambda 2\right\}. \]
Now let $x_0$ and $u$ be as in the assertion, i.e., satisfying $J_\infty(x_0,u)\leq V_\infty(x_0)+\delta$, and denote the corresponding trajectory by $x(\cdot)$. Then, just as in the first part of the proof of Theorem \ref{th: disturninf}, we obtain the estimate
\[ \widetilde J_\infty(x(k),u(\cdot+k))\leq \widetilde V_\infty(x(k))+\frac{\delta}{\beta^k} \]
for all $k\in\N$. By definition of $\delta$ this in particular implies
\begin{equation} \widetilde J_\infty(x(k),u(\cdot+k))\leq \widetilde V_\infty(x(k))+ \theta(\beta,1)
\label{eq:Jbound} \end{equation}
for all $k=0,\ldots,M$.
Now we prove by induction that $x(k)\in \NN$ for all $k=0,\ldots,M$. For $k=0$ this follows from the choice of $x_0$. For $k\to k+1$, we make the induction assumption that $x(k)\in \NN$, i.e., $\widetilde V_\infty(x(k))<\lambda$. Then, because of \eqref{eq:Jbound} and $\NN\subseteq \BB_{\eps(\beta,1)}(\xl)$, Lemma \ref{lem:stay} (applied with initial value $x_0=x(k)$ and control $u(\cdot+k)$) implies that $x(k+1)\in \X_{\NN}$. Hence, all the (in)equalities leading to inequality \eqref{eq:case2} in the proof of Theorem \ref{th: disturninf} are valid and, together with the definition of $\delta$, yield
\[ \widetilde V_\infty(x(k+1)) - \widetilde V_\infty(x(k)) \le \frac{\kappa}{\beta} \widetilde V_\infty(x(k)) + \frac{\delta}{\beta^k} \le \frac{\kappa}{\beta} \widetilde V_\infty(x(k)) + \min\left\{\theta(\beta,1), -\frac{\kappa\lambda}{2\beta},\; \frac \lambda 2\right\}. \]
Now if $\widetilde V_\infty(x(k)) \ge \lambda/2$, then the second term in the minimum defining $\delta$ implies
\[ \widetilde V_\infty(x(k+1)) - \widetilde V_\infty(x(k)) \le \frac{\kappa}{\beta} \frac \lambda 2 -\frac{\kappa\lambda}{2\beta}=0,\]
implying $\widetilde V_\infty(x(k+1)) \le \widetilde V_\infty(x(k)) < \lambda$ and thus $x(k+1)\in \NN$.
If $\widetilde V_\infty(x(k)) < \lambda/2$, then the third term in the minimum defining $\delta$ implies
\[ \widetilde V_\infty(x(k+1)) - \widetilde V_\infty(x(k)) \le \underbrace{\frac{\kappa}{\beta} \widetilde V_\infty(x(k))}_{\le 0} + \frac \lambda 2\le \frac \lambda 2,\]
implying $\widetilde V_\infty(x(k+1)) \le \widetilde V_\infty(x(k)) + \frac{\lambda}{2} < \lambda$, i.e., again $x(k+1)\in \NN$. This proves the induction step and hence $x(k)\in \NN$ for all $k=0,\ldots,M$.
Now the turnpike property follows from Theorem \ref{thm: localturn} applied with $\X_{inv}=\NN$.\end{proof}
\begin{rem}
We note that the interval $(\beta_1,\beta_2)$ may be empty. This is because
\begin{enumerate}
\item[(i)] the condition \eqref{eq:Cdiss} needed for proving the turnpike property for trajectories staying near $\xl$ may require sufficiently large $\beta$ to hold;
\item[(ii)] a trajectory starting near $\xl$ will in general only stay near $\xl$ for sufficiently small $\beta$.
\end{enumerate}
More precisely, the upper bound $\beta_2$ resulting from (ii), as identified at the end of the proof of Lemma \ref{lem: optimal}, depends on the cost $\tell$ outside a neighbourhood of $\xl$ and the cost to leave this neighbourhood. The lower bound $\beta_1$ resulting from (i), in turn, depends on the cost to reach the equilibrium $\xl$ from a neighbourhood. If this cost is high and, in addition, the cost to leave the neighbourhood and the cost outside the neighbourhood are low, then the set of discount rates for which a local turnpike behaviour occurs may be empty.
\end{rem}
\begin{rem}
The attentive reader may have noted that we apply Lemma \ref{lem:stay} with $K=1$ in this proof, rather than with $K=M$, which might appear more natural given that we want to make a statement for $\{0,\ldots,M\}$. This is because the size of the neighbourhood $\BB_{\eps(\beta,K)}(\xl)$ delivered by Lemma \ref{lem:stay} depends on $K$. Hence, if we applied Lemma \ref{lem:stay} with $K=M$ in order to construct the neighbourhood $\NN$, this neighbourhood may shrink down to $\{\xl\}$ as $M$ increases. In contrast to this, the fact that $\widetilde V_\infty$ is a (practical) Lyapunov function allows us to construct a neighbourhood $\NN$ that does not depend on $M$.
\end{rem}
\section{Examples}\label{sec:ex}
We end our paper with a couple of examples illustrating our theoretical results. All numerical solutions were obtained using a dynamic programming algorithm as described in \cite{GruS04}. We start with two examples exhibiting a locally and a globally optimal equilibrium.
\begin{bsp}\label{ex:1}
Consider the dynamics $f(x,u) = x+ u$ and the stage cost $\ell(x,u)=x^4-\frac 1 4 x^3 - \frac 7 4 x^2$.
\begin{figure}[htb]
\begin{center}
\includegraphics[width= 0.3\textwidth]{Fig2.eps}
\caption{Stage cost $\ell(x)$}\label{fig: ex sc}
\end{center}
\end{figure}
As visualized in Figure \ref{fig: ex sc}, the stage cost $\ell$ has a local minimum at $x = \frac{3-\sqrt{905}}{32}$, a maximum at $x=0$ and a global minimum at $x = \frac{3+\sqrt{905}}{32}$. Following \cite[Section 4]{GMKW20} we can calculate the storage function $\lambda$ by using the optimality conditions for optimal equilibria. We remark that the procedure for computing global storage functions described in this reference also works for the local dissipativity in the case of local convexity, which is given in this example, cf.\ also the discussion after Example \ref{ex:2}, below. Thus, by a straightforward calculation, we get the local equilibrium $\equbl = (\frac{3-\sqrt{905}}{32},0)$ and the storage function $\lambda\equiv 0$. Inserting this, we get the rotated stage cost $\tell(x,u) = x^4-\frac 1 4 x^3 - \frac 7 4 x^2 - \ell(\xl, 0)$ and local discounted strict $(x,u)$-dissipativity of the system $f(x,u) = x+ u$ at $\xl$ for any $\beta\in(0,1)$. Thus, the assumptions of Lemma \ref{lem: ball} and Lemma \ref{lem: optimal} are fulfilled. Hence, following the proof of Lemma \ref{lem: optimal} we can estimate $\beta_2\approx 0.67$ with $\delta \approx 1$ and $\tell_{\min}\approx -0.42$. Further, since $\norm{\tell(x,u)}$ is bounded for $x$ in a neighbourhood $\BB_\eps(x_0)$, $\eps>0$, Theorem \ref{th: locturnpike} can be applied. For illustrating the theoretical results, we set $\U=[-0.75, 0.75]$.
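For completeness, the critical points follow from the elementary computation
\[ \ell'(x) = 4x^3-\tfrac{3}{4}x^2-\tfrac{7}{2}x = x\left(4x^2-\tfrac{3}{4}x-\tfrac{7}{2}\right) = 0
\;\;\Leftrightarrow\;\; x=0 \;\text{ or }\; x=\frac{3\pm\sqrt{905}}{32}. \]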
\begin{figure}[htb]
\begin{center}
\begin{minipage}[t]{0.47\textwidth}
\includegraphics[width= \textwidth]{Fig3.eps}
\end{minipage}
\begin{minipage}[t]{0.47\textwidth}
\includegraphics[width = \textwidth]{Fig4.eps}
\end{minipage}
\caption{Example \ref{ex:1} with $x_0 = -0.8$}\label{fig: ex1 beta}
\end{center}
\end{figure}
On the left hand side of Figure \ref{fig: ex1 beta} we show the behaviour of the trajectory $x$ and the control $u$ for different discount factors $\beta$. On the right hand side, we can observe the optimal feedback control values $u_x$ and therefore the domain of attraction of the equilibria dependent on $\beta$. After a maximum of three time instants, the trajectory reaches the global equilibrium for $\beta$ large enough. In contrast, for $\beta\leq 0.67$ we can observe that it is more favourable to stay in a neighbourhood of the local equilibrium $\xl$. We remark that it is sufficient to depict $\beta = 0.8$ as a representative for all $\beta \in (0.67, 1)$ since the behaviour of the trajectory, the control and the stage cost do not change significantly.
\begin{figure}[htb]
\begin{center}
\begin{minipage}[t]{0.47\textwidth}
\includegraphics[width=\textwidth]{Fig5.eps}
\end{minipage}
\begin{minipage}[t]{0.47\textwidth}
\includegraphics[width=\textwidth]{Fig6.eps}
\end{minipage}
\caption{Example \ref{ex:1} with $\beta = 0.7$ (left) and $\beta = 0.6$ (right) for different start values $x_0$}\label{fig: ex1 x0}
\end{center}
\end{figure}
In Figure \ref{fig: ex1 x0}, for fixed $\beta = 0.7$ and $\beta = 0.6$ we consider different initial values $x_0$. As we can see, the initial value determines to which equilibrium the trajectory converges. This underpins the theoretical results of Theorem \ref{th: locturnpike} and especially of Lemma \ref{lem: optimal}. We note that for a completely controllable system such a behaviour cannot occur in undiscounted problems.
\end{bsp}
The following modified example illustrates the case that the interval $(\beta_1, \beta_2)$ is empty.
\begin{bsp}\label{ex:2}
Consider again the system $f(x,u)=x+u$, now with stage cost $\ell(x,u)=x^4-\frac 1 4 x^3 - \frac 7 4 x^2+\gamma |u|$ with $\gamma\neq 0$. As the added term has no influence on the conditions of Theorem \ref{th: locturnpike}, we can again estimate $\beta_2\approx 0.67$. Further, for $\gamma=0$ we get the same stage cost as in Example \ref{ex:1} above. In contrast to Example \ref{ex:1}, now for $\gamma$ large enough we can observe that $(\beta_1, \beta_2)$ is empty. This fact is illustrated in Figure \ref{fig: ex2 beta} for $\gamma = 10$. For the numerical results we use the same setting as in Example \ref{ex:1}.
\begin{figure}[htb]
\begin{center}
\includegraphics[width=0.7\textwidth]{Fig7.eps}
\caption{Example \ref{ex:2} with $\gamma = 10$ for different discount factors $\beta$}\label{fig: ex2 beta}
\end{center}
\end{figure}
In contrast, in the graph with $\gamma = 10$ we can clearly observe that, independent of the discount factor $\beta$, we do not get convergence to the local equilibrium anymore. For $\beta$ large enough we even get convergence to the global equilibrium.
\begin{figure}[htb]
\begin{center}
\begin{minipage}[t]{0.47\textwidth}
\includegraphics[width=\textwidth]{Fig8.eps}
\end{minipage}
\begin{minipage}[t]{0.47\textwidth}
\includegraphics[width=\textwidth]{Fig9.eps}
\end{minipage}
\caption{Example \ref{ex:2} with $\beta = 0.7$ (left) and $\beta= 0.95$ (right) for different $\gamma$}\label{fig: ex2 gamma}
\end{center}
\end{figure}
In order to examine this property in more detail, we illustrate the behaviour for different values of $\gamma$ at fixed discount factors $\beta$ in Figure \ref{fig: ex2 gamma}. For $\gamma >1$ and $\beta \lesssim 0.95$ we can observe that the trajectories stay near the start value and do not move away. In contrast, for $\beta \approx 1$ the trajectories converge to the global equilibrium. Thus, we do not get convergence to the local equilibrium anymore.
\end{bsp}
The two examples above have the particular feature that the dynamics is affine and the stage cost $\ell$ is strictly convex in a neighbourhood of the optimal equilibria. In this case, similar arguments as used in the proof of Theorem 4.2 in \cite{GMKW20} show that local strict dissipativity always holds. More precisely, we can restrict the proof of Theorem 4.2 in \cite{GMKW20} to a bounded neighbourhood $\X_{\NN}\subset \X$ of the local equilibrium $\xl$, e.g., $\BB_\eps(\xl)$, $\eps>0$, instead of $\X$, and a locally strictly convex stage cost function $\ell$. Following the proof, $\mathrm{D}\tell\equbl=0$ holds in the neighbourhood $\X_{\NN}$, which by the local strict convexity of $\tell$ implies that $\equbl$ is a strict local minimum. Together with the boundedness of $\X_{\NN}$, this implies the existence of $\alpha_\beta\in\KK_\infty$ and thus local discounted strict dissipativity. We remark that the calculation of $\lambda$ is the same as in the global case and yields a linear storage function. In the special case of Example \ref{ex:1} above, it yields the storage function $\lambda\equiv 0$. In conclusion, local strict dissipativity always holds if the dynamics is affine and the stage cost $\ell$ is strictly convex near the locally optimal equilibrium.
With this observation, our dissipativity based analysis provides a complementary approach to the stable manifold based analysis carried out, e.g., in \cite{HKHF03}. Particularly, we can conclude that the model from this reference exhibits two equilibria at which the local turnpike property holds, which explains why the optimal trajectories are correctly reproduced by nonlinear model predictive control as shown in \cite[Section 5.1]{Gruene2015}.
Our final example demonstrates that strict convexity of $\ell$ is not needed for obtaining strict dissipativity, thus showing that a dissipativity based analysis allows for strictly weaker assumptions than strict convexity of $\ell$.
\begin{bsp}\label{ex:nonconvex}
Consider the 1d control system
\[ x^+ = f(x,u) = 2x+u \]
with state constraints $\X=[-1,1]$, control constraints $\U=[-3,3]$, and stage cost
\[ \ell(x,u) = -x^2/2 + u^2.\]
Obviously, the stage cost is strictly concave in $x$ and strictly convex in $u$. Nevertheless, we can establish discounted strict $(x,u)$-dissipativity at $(x^*,u^*)=(0,0)$ (in this example even globally) for $\beta\ge3/5$ with $\lambda(x) = -x^2$. This follows from the fact that with $a=2\beta/\sqrt{1+\beta}$ and $b=\sqrt{1+\beta}$ we have
\begin{eqnarray*}
\ell(x,u) + \lambda(x) - \beta\lambda(f(x,u)) & = &
-x^2/2 + u^2 - x^2 + \beta (2x+u)^2\\
& = & (4\beta-3/2) x^2 + 4\beta xu + (1+\beta)u^2\\
& = & (a x + b u)^2 + \left(4\beta-\frac{3}{2} - \frac{4\beta^2}{1+\beta}\right)x^2\\
& \ge & (a x + b u)^2,
\end{eqnarray*}
where the last inequality holds since the term in the large brackets is $\ge 0$ for $\beta \ge 3/5$.
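Indeed, a short computation shows that
\[ 4\beta-\frac{3}{2}-\frac{4\beta^2}{1+\beta}\ge 0
\;\;\Leftrightarrow\;\; \Big(4\beta-\frac{3}{2}\Big)(1+\beta)\ge 4\beta^2
\;\;\Leftrightarrow\;\; \frac{5}{2}\,\beta\ge\frac{3}{2}
\;\;\Leftrightarrow\;\; \beta\ge\frac{3}{5}. \]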
Since the system is completely controllable in finite time, hence exponentially stabilizable, Theorem \ref{th: disturninf} in conjunction with Remark \ref{rem:suffcond}(ii) implies that for sufficiently large $\beta$ turnpike behaviour occurs at $x^*=0$. This is confirmed for $\beta=0.7$ in the left graph in Figure \ref{fig: ex nonconvex}. In contrast to this, the right graph in Figure~\ref{fig: ex nonconvex} shows that for $\beta=0.6$ the turnpike behaviour for $x^*=0$ does not occur. Rather, the optimal solution converges to the upper bound $x=1$ of the state constraint set. In this example, the numerical computations indicate that $\beta = 3/5=0.6$ is a relatively precise estimate of the threshold for the occurrence of the turnpike property at $x^*=0$, although for $\beta$ decreasing from $0.7$ to $0.6$ the set of initial values around $x^*=0$ for which the turnpike behaviour can be seen shrinks down rapidly.
\begin{figure}[htb]
\begin{center}
\begin{minipage}[t]{0.47\textwidth}
\includegraphics[width=\textwidth]{Fig10.eps}
\end{minipage}
\begin{minipage}[t]{0.47\textwidth}
\includegraphics[width=\textwidth]{Fig11.eps}
\end{minipage}
\caption{Optimal trajectories for Example \ref{ex:nonconvex} with $\beta = 0.7$ and $x_0=1$ (left) and with $\beta= 0.59$ and $x_0=0.004$ (right).}\label{fig: ex nonconvex}
\end{center}
\end{figure}
\end{bsp}
\section{Conclusion}\label{sec:conclusion}
In this paper we have shown that a local strict dissipativity assumption in conjunction with an appropriate growth condition on the optimal value function can be used in order to conclude a local turnpike property at an optimal equilibrium. The turnpike property holds for discount factors from an interval $[\beta_1,\beta_2]$, where $\beta_1$ is determined by local quantities while $\beta_2$ is also determined by properties of the optimal control problem away from the local equilibrium. Hence, local and global properties together determine whether the interval is not empty. This is in accordance with other approaches for analysing local stability of equilibria in discounted optimal control such as those based on stable and unstable manifolds \cite{HKHF03}. In contrast to other approaches, however, the dissipativity based approach is not limited to (locally) strictly convex problems, as our last example showed.
\bibliographystyle{plain}
\section*{Appendix}
\begin{figure*}[h]
\centering
\includegraphics[width=0.97\textwidth]{fig/System_Diagram.png}
\caption{System diagram.}
\label{fig:sys}
\vspace{-3mm}
\end{figure*}
\input{tables/state_of_art_tools.tex}
\input{appendix/Completed_Vulnerability_Support.tex}
\input{tables/old_detection_evaluation.tex}
\input{appendix/Fig_3_Attack_Code.tex}
\begin{figure}[h]
\centering
\includegraphics[width=1.0\textwidth]{fig/Figure3_a_CFGwithIR}
\caption{The exemplar CFG converted from the AVS in Fig.~\ref{fig:avs}.}
\label{fig:cfg}
\end{figure}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.53\textwidth]{fig/Figure4_MAVS}
\caption{The AVS for matching the three CBS in Fig.~\ref{fig:learningSignature}.}
\label{fig:avs_mot3}
\end{figure*}
\input{appendix/figure_address.tex}
\input{appendix/attack_code.tex}
\newpage
\input{appendix/refined_rules_table.tex}
\section{Background} \label{sec:background}
In this section, we explain the five vulnerability types targeted by our study. We show two real cases, which are not well-handled by the state-of-the-art scanners, to motivate our study.
\subsection{Vulnerability Types} \label{sec:background:type}
Early in 2016, Atzei et al. \cite{AtzeiBC16} have observed 6 types of vulnerabilities that could exist in smart contracts.
Recently, on a public technical blog~\cite{sigmaprime}, a more detailed taxonomy for
vulnerabilities in Solidity code is summarized, accompanied by \emph{preventative techniques}
(similar to DMs in this study) and real-world examples.
We list the five major vulnerability types from the top-10 vulnerability types according to~\cite{consensys,dasp}.
\begin{enumerate}[leftmargin=*, topsep=0pt,itemsep=-1ex]
\item \emph{Reentrancy}. As the most famous Ethereum vulnerability, reentrancy recursively triggers the fall-back function\footnote{The fall-back function is a special function in Solidity, which has no function name, parameters, or return values. It will be triggered when the function signature does not match any of the available functions in a contract.} to steal money from the victim's balance or deplete the gas of the victim. Reentrancy occurs when external callers manage to invoke the callee contract before the execution of the original call is finished, and it is mostly caused by improper usages of the functions \codeff{withdraw()} and \codeff{call.value(amount)()} (see the minimal sketch after this list). It was also reported in \cite{AtzeiBC16}.
\item \emph{The Abuse of \codeff{tx.origin}.} When the visibility is improperly set for some key functions (e.g., some sensitive functions with \codeff{public} modifier), the extra permission control then matters. However, issues can arise when contracts use the deprecated \codeff{tx.origin} (especially, \codeff{tx.origin==owner}) to validate callers for permission control.
It is relevant to the \emph{access control} vulnerability in \cite{dasp}.
\item \emph{Unchecked Low-level-call.} In Solidity, users can use the low-level functions \codeff{call()}, \codeff{callcode()}, \codeff{delegatecall()} and \codeff{send()}. They are different from other Solidity functions, as they will not throw an exception or exit when encountering errors. Instead, they continue to run and return the boolean value \codeff{false}.
\emph{Gasless send}, one of the six vulnerability types listed by~\cite{AtzeiBC16}, is related to this vulnerability.
\item \emph{Unexpected Revert.} In a smart contract, if some operations unfortunately fail, the whole transaction will revert. So the attacker could deliberately make some operations fail for the purpose of denial of service (DoS).
This is also termed \emph{DoS with revert} in \cite{consensys}.
\item \emph{Self-destruct Abusing.}
This vulnerability allows attackers to forcibly send Ether to a contract without triggering its fall-back function. Normally, contracts place important logic in the fall-back function or make calculations based on a contract's balance. However, this can be bypassed via the \codeff{selfdestruct} contract method that allows a user to specify a beneficiary to send any excess ether \cite{consensys}.
\end{enumerate}
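To make the first type (reentrancy) concrete, the following minimal Solidity sketch --- a hypothetical contract written purely for illustration, with made-up identifiers and not taken from our dataset --- shows the typical vulnerable shape: the balance bookkeeping happens only after the external \codeff{call.value()}, so a malicious fall-back function can re-enter \codeff{withdraw()} before the balance is reduced.
\begin{lstlisting}
pragma solidity ^0.4.24;
// Hypothetical victim contract (illustration only): the balance is
// updated only AFTER the external call, so a malicious fall-back
// function can re-enter withdraw() and drain the contract.
contract VulnerableBank {
    mapping(address => uint256) public balances;
    function deposit() public payable {
        balances[msg.sender] += msg.value;
    }
    function withdraw(uint256 amount) public {
        require(balances[msg.sender] >= amount);
        // external call forwards all remaining gas and triggers
        // the caller's fall-back function
        require(msg.sender.call.value(amount)());
        balances[msg.sender] -= amount; // state update comes too late
    }
}
\end{lstlisting}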
\noindent\textbf{Scope of Our Study.} According to~\cite{consensys,dasp}, there are other
vulnerability types: \emph{Time Manipulation} (type 6), \emph{Arithmetic Issues} (type 7),
\emph{Bad Randomness} (type 8), \emph{Front-Running} (type 9), and \emph{Short Address} (type 10).
In this paper, we focus on the types of severe vulnerabilities that are not well-supported by the
existing scanners. After some preliminary study, we find that types 7, 8, and 10 are relatively easy to detect by simple
rules of the existing scanners. For example, as long as unsigned integers are used for arithmetic
operations, there exist \emph{arithmetic issues}. When \codeff{keccak256} or
\codeff{block.blockhash} are used for assignment, \emph{bad randomness} happens. If the input
address is shorter than 20 bytes, it leads to \emph{short address}. Basically, these rules have no FPs
or FNs. Besides, type 9 (\emph{front-running}) is quite difficult to prevent, as it requires an
understanding of the business logic inside a specific contract; type 6 is more of a theoretical threat
due to bad coding style. \emph{Thus, we target the first 5 types in this paper, as they are not
well-supported by existing scanners.}
\subsection{Example One --- The Need for AVS} \label{sec:background:example1}
Similar code blocks (CBs) with small gaps (e.g., CBs in
Fig.~\ref{fig:learningSignature}) are well detected by clone detectors (e.g.,
\textsc{CCFinder}~\cite{ccfinder}, \textsc{Deckard}~\cite{deckard} and \textsc{Nicad}~\cite{nicad},
based on edit distance) in the software engineering community. In reality, there exist many
similar vulnerable CBs that just follow common code patterns but differ greatly in concrete logic
--- we refer to them as similar vulnerable code with big gaps.
At first glance, CB1 and CB2 in Fig.~\ref{fig:motivating2} are quite different, as their gaps
(omitted statements) significantly outnumber their common statements. These two CBs can not usually
be detected by mainstream clone detectors such as \textsc{CCFinder}~\cite{ccfinder} and
\textsc{Deckard}~\cite{deckard} --- their total similarity is lower than the often used similarity
threshold (e.g., 70\%). However, both of them are vulnerable and essentially similar. For CB1 in
Fig.~\ref{fig:motivating2:original}, an attacker can reassign the leader address and make any
refund to the address always fail. By calling the \codeff{bid()} function, the attacker can
exclusively occupy the leader forever and achieve the goal of DoS. Similarly, CB2 in
Fig.~\ref{fig:motivating2:cloned} is also problematic, the attacker could exploit it with the code
shown in Fig.~\ref{fig:motivating2:attack}. When
\codeff{highestBidder.transfer(fundsByBidder[highestBidder])} is executed, the fallback function of
the attacker will be triggered and the revert will happen.
Interestingly, most of the existing scanners have no rules for this. We propose the following rule to detect this vulnerability:
\vspace{-4mm}
\begin{mdframed}[
linewidth = 0pt,
innertopmargin = 0pt,
innerbottommargin = 0pt,
outerlinewidth = 0pt,
rightmargin =0pt
]
\begin{equation} \label{rule:unexpectedRevert}
\small
\begin{aligned}[1]
\small
\big( dcl(adr_{g}) \vee dcl(var) \big) \succ adr_{g}.transfer(var) \succ \\
(adr_{g} = msg.sender) \Rightarrow \text{unexpected~revert}
\end{aligned}
\end{equation}
\end{mdframed}
\vspace{-2mm}
where $dcl(adr_g)$ denotes the declaration of a public address, $\succ$ denotes the execution time
order in the control flow, $ adr_{g}.transfer(var) $ calls function $transfer()$ of that address, and $adr_{g} = msg.sender$ reassigns the address of $msg.sender$ to $adr_{g}$. This rule helps to detect some vulnerable samples but it is incomplete to cover all other similar ones (e.g., changing $transfer()$ to other functions for money transfer).
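For illustration, a hypothetical Solidity fragment matching Rule \ref{rule:unexpectedRevert} (all identifiers are made up for this sketch) could look as follows: a bidder contract whose fall-back function always reverts becomes the leader once and thereby makes every later refund --- and hence every later bid --- fail.
\begin{lstlisting}
pragma solidity ^0.4.24;
// Hypothetical auction fragment matching Rule (1): a public address is
// refunded via transfer() and then overwritten with msg.sender. A bidder
// contract whose fall-back always reverts becomes the leader once and
// blocks every later bid (DoS by unexpected revert).
contract LeaderAuction {
    address public leader;     // dcl(adr_g)
    uint256 public highestBid; // dcl(var)
    function bid() public payable {
        require(msg.value > highestBid);
        if (leader != address(0)) {
            leader.transfer(highestBid); // adr_g.transfer(var)
        }
        leader = msg.sender;             // adr_g = msg.sender
        highestBid = msg.value;
    }
}
\end{lstlisting}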
To sum up, Fig.~\ref{fig:motivating2} motivates the AVS design that detects similar vulnerable CBs with an
excellent recall and acceptable precision, regardless of small or big gaps across them.
\input{fig/eval_fp3.tex}
\subsection{Example Two --- The Need of RDR} \label{sec:background:example2}
In Fig.~\ref{fig:evaluation:fp3}, we can see an FP of reentrancy reported by \slither and \oyento.
\slither adopts Rule \ref{rule:slither:reentrance} to detect reentrancy.
\vspace{-2mm}
\begin{mdframed}[
linewidth = 0pt,
innertopmargin = 0pt,
innerbottommargin = 0pt,
outerlinewidth = 0pt,
rightmargin =0pt
]
\begin{equation} \label{rule:slither:reentrance}
\small
r(var_{g}) \vee w(var_{g}) \succ externCall \succ w(var_{g}) \Rightarrow \text{reentrancy}
\end{equation}
\end{mdframed}
where \ruleff{r()} and \ruleff{w()} denote the read and write operations, respectively;
$var_{g}$ denotes a certain public global variable; $\succ$ denotes the execution time
order in the control flow; \ruleff{externCall} denotes the external call to the
\emph{money-transfer} functions except built-in functions \codeff{send()} and \codeff{transfer()}.
In short, this rule is to check: if there exists an external call to a money-transfer function
(\codeff{***.\textcolor{blue}{\textbf{value}}(\_value)(\_storKey)}) and the call is between the
read and write operations to a state variable (\codeff{registered}), a reentrancy may happen.
Similarly, CB3 is reported by \oyento as vulnerable, according to its run-time detection Rule \ref{rule:oyente: reentrance} below:
\vspace{-2mm}
\begin{mdframed}[
linewidth = 0pt,
innertopmargin = 0pt,
innerbottommargin = 0pt,
outerlinewidth = 0pt,
rightmargin =0pt
]
\begin{equation} \label{rule:oyente: reentrance}
\small
\begin{split}
\big ( r(var_{g}) \wedge (gas_{trans} > 2300) \wedge (amt_{bal} > amt_{trans}) \wedge \\
var_{g} \text{~is changed before the external call} \big ) \Rightarrow \text{reentrancy}
\end{split}
\end{equation}
\end{mdframed}
\vspace{-3mm}
where $gas_{trans} > 2300$ means the gas for transaction must be larger than $2300$, $amt_{bal} > amt_{trans}$ means the balance amount must be larger than transfer amount, and lastly the public variable could be changed before external calls.
However, CB3 actually takes the security issue into account and adds the self-defined modifier
\codeff{onlyAdmin()} before the possibly vulnerable function \codeff{regstDocs}. Since
\codeff{onlyAdmin()} restricts transactions to the \codeff{admin} or
\codeff{owner} roles (otherwise the transaction is reverted),
\codeff{regstDocs}
cannot be recursively called by external attackers.
Clearly, as the defense mechanism (DM) just adds a small delta (e.g., \codeff{onlyAdmin()} at line 10), the AVS-based
code matching will report the CBs with or without the modifier as vulnerability candidates. In
such cases, some proper post-processing is required after the AVS-based matching, which
indicates that the DM-based RDR plays a vital role in achieving high precision when both vulnerable and safe code patterns exist. In reality, various DMs need to be considered to remove FPs from results of the existing scanners (see \S\ref{sec:rdr:dm}).
\section{System Overview} \label{sec:chanllenges}
Basically, our approach consists of three steps:
\begin{enumerate}[leftmargin=*,topsep=0pt,itemsep=-1ex]
\item Selecting the existing scanners: this step is to find some concrete vulnerabilities to serve
as the input for AVS learning, and hence our whole approach starts with the detection results of
the existing scanners (see \S\ref{sec:rdr:existingTool}).
\item Learning and Applying AVS: first, we adopt a clustering algorithm to group the similar
vulnerabilities of the same type and then extract the AVS via code differencing algorithm (see
\S\ref{sec:learnSigs}); then, on the basis of code matching technique (i.e., Algorithm
\ref{algo:matching} in \S\ref{sec:ourApproach:lcs}), we leverage the AVS to search for more
concrete vulnerabilities (see \S\ref{sec:ourApproach}).
\item Summarizing and Applying RDR: during the manual auditing of the detection results (especially
those FPs) of the existing scanners, we inspect their detection rules and observe the DMs in code
that cause FPs. Hence, we can refine the existing good detection rules by considering DMs --- that is,
we derive the RDR based on DMs (see \S\ref{sec:rdr:dm}).
\end{enumerate}
As shown in Fig.~\ref{fig:sys} (see Appendix due to the space limit), the input of our approach to build the
benchmark just includes the set of unlabeled (unknown for vulnerability) smart contracts and several
existing scanners, while the output consists of a set of vulnerable smart contracts, the learned AVS
and the summarized RDR. Notably, the final AVS and RDR can be combined as a framework to detect
unknown vulnerabilities on a new dataset.
\section{Conclusion} \label{sec:conclude}
So far, a few works have exposed various vulnerabilities in smart
contracts~\cite{AtzeiBC16,sigmaprime,dasp} and plenty of scanners have been presented to detect the security issues.
Yet, to the best of our knowledge, there is no study on building a vulnerability benchmark for
smart contracts.
In this paper, we apply three state-of-the-art scanners, learn the AVS from the TPs of their results, and summarize the DM-based RDRs from FPs of their results.
Then, we combine the AVS and the DM-based RDRs in our tool, namely \ourTool, for achieving both precision and recall in building vulnerability benchmark and detecting unknown vulnerabilities.
In the future, we will support more vulnerability types and integrate with verification techniques.
\subsection{Discussion} \label{sec:discuss}
\noindent\textbf{The Necessity of Manual Effort.} The efforts in our approach mainly lie in the investigation of the existing scanners used in our study (\S\ref{sec:rdr:dm}). Towards a high-quality vulnerability benchmark, identifying the TPs from the results of existing tools is an inevitable task. Although two or more tools may agree on some cases, a manual audit is still required to confirm that. AVS could be learned automatically (\S\ref{sec:learnSigs}), but the DM summarization must rely on human expertise. Last, with the final AVS and RDRs, the experiments in \S\ref{sec:expements:otherRst} need human experts to count the TPs and calculate the precision and recall. For ease of public review, we publish the vulnerability dataset and tool comparison results on the website~\cite{mavs_link}.
\begin{table}[t]
\footnotesize
\centering
\caption{The time (\emph{min.}) of vulnerability detection for each scanner on the two datasets (\totalContract and \testContract contracts). $\times$ means timeout after 72 hours.}
\begin{tabular}{|p{2.5em}|p{3.3em}|p{3em}|p{2.5em}|p{4em}|p{5.8em}|}
\hline
Dataset & \slither & \oyento & \textsc{S.C.} & \securify & \ourTool \\
\hline
\totalContract & 156 & 6434 & 641 & $\times$ & 883 \\
\hline
\testContract & 52 & 1352 & 141 & 8859 & 295 \\
\hline
\end{tabular}%
\label{tab:efficiency}%
\end{table}%
\section{Investigating Tools and Summarizing RDR} \label{sec:rdr}
{In \S\ref{sec:rdr}, we explain how to choose the existing scanners and how to observe the DMs from results of these tools.}
\subsection{Choosing Scanners} \label{sec:rdr:existingTool}
\label{sec:existingTool:overview}
\noindent\textbf{Tool Survey.}
We investigated 13 existing tools (see Table~\ref{tab:all_tools} in Appendix).
Some tools were not included in the current study for various reasons.
\zeus is not open-sourced, and hence we could not include it.
\textsc{Echidna} is a fuzzing library and it requires user-defined drivers to enable vulnerability
detection.
Similarly, \textsc{Octopus} is an EVM bytecode decompiling and analysis tool rather than an
automated detection tool.
Besides, we also tried to use \textsc{Mythril} and \textsc{MythX} (the Web version of
\textsc{Mythril}), but \textsc{Mythril} does not compile successfully and the service of
\textsc{MythX} is not under maintenance.\footnote{We tried to contact the developers of
\textsc{Mythril} via emails and inquiring on GitHub issues. However, we have not received any
reply.}
\textsc{ContractFuzzer} needs to run on a private blockchain and makes modifications to the
EVM. Also as a dynamic fuzzing tool, \textsc{ContractFuzzer} cannot scale up to a large dataset.
\noindent\textbf{Choice of Scanners.}
In Table~\ref{tab:tool_full_support} in Appendix, we list the remaining five scanners after the
initial trial and show the detection capabilities of each one.
For \textsc{Manticore}, we find that it has some performance issues when applied to
perform a large-scale detection.\footnote{We seek for help on GitHub issues, and one of the
\textsc{Manticore} developers suggested us to switch to \slither.}
Finally, we choose \slither, \oyento and \smartcheck to collect the concrete vulnerabilities for
AVS extraction and RDR summarization.
\textsc{Securify}, representing the state-of-the-art~\cite{securify}, was not chosen when we
summarized the DMs of the existing scanners, since at that time \securify was not open-sourced yet.
However, we include \securify in the tool comparison at a later time to evaluate our final
tool \ourTool (see \S\ref{sec:expements:otherRst}).
\subsection{Collecting Concrete Vulnerabilities and Summarizing RDR considering DMs} \label{sec:rdr:dm}
We applied the three chosen scanners to \totalContract contracts.
Overall, \slither~reports the most vulnerabilities, in total 1,244 candidates covering five types.
In contrast, \smartcheck~reports 1,035 candidates and \oyento~reports only 108 candidates.
However, due to the internal detection mechanisms, each scanner inevitably yields some FPs.
We show some representative FP patterns below to illustrate how the existing scanners are ignorant of the possible DMs.
\input{fig/eval_fp1.tex}
\noindent\textbf{FPs of Reentrancy.}
As reentrancy caused some significant losses in the past~\cite{daoAttack}, newly deployed
contracts on Ethereum have already adopted some DMs to prevent exploits of reentrancy.
We summarize the five main types of DMs for reentrancy: \textbf{DM1}. \emph{access control by
identity check with \codeff{owner}}; \textbf{DM2}. \emph{payment protection by hard-coding the
address of payee or payer}; \textbf{DM3}. \emph{payment protection by \codeff{private} or the
self-predefined function modifier}; \textbf{DM4}. \emph{payment protection by execution lock(s)};
\textbf{DM5}. \emph{state or balance updating before sensitive payment}. However, these DMs are
seldom considered by the existing scanners, resulting in
a high FP rate of detection.
DM1 adds various forms of checks (i.e., in \codeff{require} or \codeff{assert} or \codeff{if}) for
\codeff{msg.sender}. For example, DM1 checks whether the identity of \codeff{msg.sender}
satisfies certain conditions (e.g., equal to the owner, or with a good reputation, or having the dealing
history) before calling the external payment functions.
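As a minimal illustration (a made-up sketch, not a contract from our dataset), DM1 in its simplest form looks as follows: the external payment is only reachable after an identity check on \codeff{msg.sender}.
\begin{lstlisting}
pragma solidity ^0.4.24;
// Made-up sketch of DM1: the external payment is only reachable after
// an identity check on msg.sender, so arbitrary attacker contracts
// cannot (re-)enter it.
contract OwnerPaid {
    address public owner = msg.sender;
    function payOut(address to, uint256 amount) public {
        require(msg.sender == owner);     // DM1: identity check
        require(to.call.value(amount)()); // external payment after the check
    }
}
\end{lstlisting}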
For DM2, in Fig.~\ref{fig:evaluation:fp1}, according to Rule~\ref{rule:slither:reentrance}, CB4 is reported as a {reentrancy} by \slither --- firstly, it reads the \codeff{public} variable \codeff{agets[\_idx]}; then calls external function \codeff{bancorToken.transfer()}; last, writes to the \codeff{public} variable \codeff{agets[\_idx]}. However, in practice, reentrancy will never be triggered by external attackers due to the hard-coded address constant (\codeff{0x1F...FF1C}) at line 3 in Fig.~\ref{fig:evaluation:fp1}.
For DM3 of adopting user-defined modifiers for protection, we find some interesting cases that are falsely reported by existing scanners. For example, in Fig.~\ref{fig:evaluation:fp3}, CB3 actually takes the security issue into account and adds the self-defined modifier \codeff{onlyAdmin()} before the possibly vulnerable function \codeff{regstDocs}. Since \codeff{onlyAdmin()} restricts transactions to the \codeff{admin} or \codeff{owner} role (otherwise the transaction is reverted), \codeff{regstDocs} cannot be recursively called by external attackers. Notably, for CB3 in Fig.~\ref{fig:evaluation:fp3}, if we changed the modifier of function \codeff{regstDocs} from \codeff{onlyAdmin} to \codeff{internal} at line 10, \slither and \oyento~would still report it as a reentrancy vulnerability --- but it cannot be called by external attackers, as it is not called in any public function.
DM4 prevents the recursive entrance of the function --- eliminating the issue at its root. For instance, in Fig.~\ref{fig:evaluation:fp4}, the internal instance variable \codeff{reEntered} is checked at line 5 before processing the business logic between lines 8 and 10 of CB5. To prevent reentering due to calling \codeff{ZTHTKN.buyAndSetDivPercentage.value()}, \codeff{reEntered} is switched to \codeff{true}; after the transaction is done, it is reset to \codeff{false} to allow other transactions.
According to \cite{consensys}, DM5 is to finish all internal work (i.e., state and balance changes) and then call the external payment function. For example, for CB3 in Fig.~\ref{fig:evaluation:fp3}, if \codeff{registered = true} at line 14 is moved before the external call \codeff{regstUser.value()} at line 13, then the reentrancy attack can be avoided owing to the failure of the check \codeff{require(!registered)} at line 11.
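For concreteness, the following made-up sketch combines DM4 and DM5 in their simplest forms: a lock prevents re-entering the function, and the state is updated before the external payment.
\begin{lstlisting}
pragma solidity ^0.4.24;
// Made-up sketch combining DM4 (execution lock) and DM5 (state update
// before the sensitive payment).
contract GuardedBank {
    mapping(address => uint256) public balances;
    bool private locked; // DM4: re-entrancy lock
    modifier noReenter() {
        require(!locked);
        locked = true;
        _;
        locked = false;
    }
    function withdraw(uint256 amount) public noReenter {
        require(balances[msg.sender] >= amount);
        balances[msg.sender] -= amount;  // DM5: update state first
        require(msg.sender.call.value(amount)());
    }
}
\end{lstlisting}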
\noindent\textbf{FPs of Unexpected Revert.} Though \slither~and \smartcheck~can detect some cases, their rules are currently so general that most reported cases are actually bad coding practices (e.g., warnings hinted by the IDE), not exploitable vulnerabilities. Specifically, \slither~reports 666 cases of \emph{Call in Loop}, as long as an external call (e.g., \codeff{send()} or \codeff{transfer()} of other addresses) is inside a loop, regardless of its actual impact. For instance, in Fig.~\ref{fig:evaluation:fp5}, CB6 reported by \slither~does not cause an unexpected revert, as \codeff{require} is not used to check the return value of function \codeff{send()} at line 7. Similarly, \smartcheck~supports the detection of \emph{Transfer in Loop}. As \smartcheck~checks only \codeff{transfer()} in a loop, it reports a much smaller number (274) than that of \slither (666). However, after manual auditing, we find that most reports of \slither are FPs because the rule ignores the key \codeff{require} check that causes the revert. Hence, we summarize \textbf{DM6} --- \emph{no use of \codeff{require} for transfer in loop to avoid DoS}.
\input{fig/eval_fp4.tex}
\begin{figure}[t]
\vspace{-4mm}
\begin{lstlisting}
function Payout(uint a, uint b) internal onlyowner {
while (a>b) {
uint c;
a-=1;
if(Tx[a].txvalue<1000000000000000000){c=4;}
else if (Tx[a].txvalue>=1000000000000000000){c=6;}
Tx[a].txuser.send((Tx[a].txvalue/100)*c);
}
}
\end{lstlisting}
\vspace{-2mm}
\caption{\textbf{CB6} : A real case reported by \slither~as \emph{call in loop}, which has no DoS owing to DM6 (no \codeff{require} at line 7).}\label{fig:evaluation:fp5}
\vspace{-4mm}
\end{figure}
According to our observation and the recent technical article~\cite{nvestlabs2019}, the rules of \emph{Call/Transaction in Loop} are neither sound nor complete to cover most of the unexpected revert cases. At least, modifier \codeff{require} is ignored in these two rules, which makes \slither~and \smartcheck~incapable to check possible revert operations on multiple account addresses. Here, multiple accounts must be involved for exploiting this attack --- the failure on one account blocks other accounts via reverting the operations for the whole loop. Hence, CB7 reported by \smartcheck~in Fig.~\ref{fig:evaluation:fp6} is an FP --- \textbf{DM7.} \emph{the operations in the loop are all on the same account (i.e., \codeff{sender} at line \textcolor{red}{5}) and potential revert will not affect other accounts}. In addition to reverting inside a loop, as shown in Fig.~\ref{fig:motivating2}, there exist a few other cases where reverting indeed happens in any unsecured external call without loop.
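For contrast, a genuinely vulnerable shape --- sketched below with made-up identifiers --- wraps \codeff{send()} to many externally supplied accounts in \codeff{require} inside one loop, so that a single reverting recipient blocks the refunds of all others.
\begin{lstlisting}
pragma solidity ^0.4.24;
// Made-up sketch of the genuinely vulnerable pattern: require() around
// send() for many externally supplied accounts, so one recipient whose
// fall-back reverts (or rejects Ether) blocks the refunds of all others.
contract MassRefund {
    address[] public investors;
    mapping(address => uint256) public owed;
    function refundAll() public {
        for (uint256 i = 0; i < investors.length; i++) {
            // DM6 and DM7 are both absent here
            require(investors[i].send(owed[investors[i]]));
        }
    }
}
\end{lstlisting}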
\begin{figure}[t]
\vspace{-4mm}
\begin{lstlisting}
function withdraw() private {
for(uint i = 0; i < player_[uid].planCount; i++) {
...
address sender = msg.sender;
sender.transfer(amount);
...
}
}
\end{lstlisting}
\vspace{-2mm}
\caption{\textbf{CB7}: a real FP of DoS reported by \smartcheck, where only one account is involved (DM7).}\label{fig:evaluation:fp6}
\vspace{-4mm}
\end{figure}
\noindent\textbf{FPs of \texttt{\small Tx.Origin} Abusing.}
For this, \slither~reports 34 results, none of which are FPs. \slither's rule is simple but effective (see Rule \ref{rule:slither: tx.origin} below), finding all occurrences of \codeff{Tx.Origin} that appear in control flow conditions. The rationale is that accessing \codeff{Tx.Origin} is just a bad programming practice by itself, not a vulnerability. Only when used in control flow conditions can it be manipulated for control-flow hijacking.
\vspace{0mm}
\begin{mdframed}[
linewidth = 0pt,
innertopmargin = 0pt,
innerbottommargin = 0pt,
outerlinewidth = 0pt,
rightmargin =0pt
]
\begin{equation} \label{rule:slither: tx.origin}
\small
\begin{split}
access(Tx.Origin) \wedge inContrFlowCondi(Tx.Origin) \\
\Rightarrow \text{Tx.Origin abusing}
\end{split}
\end{equation}
\end{mdframed}
\vspace{-2mm}
In contrast, \smartcheck~reports many more cases (210) than \slither (34), as it is more complete in considering control flow conditions inside both functions and the modifiers outside the function (see Rule~\ref{rule:smartcheck: tx.origin}). As shown in Fig.~\ref{fig:evaluation:fp3}, the self-defined modifier \codeff{onlyAdmin()} should be imported before the function. As such a modifier is used for permission control, it also affects the control flow of the program. The 149 FPs of \smartcheck~({70.95\%}) are due to {the reason that \codeff{Tx.Origin} is used as a parameter of a function call, and then the return value of the function call is neither checked nor used}. However, \codeff{Tx.Origin} is sometimes not abused --- \textbf{DM8.} {\emph{using \codeff{Tx.Origin} for the identity check of \codeff{msg.sender} in control flow}}.
\vspace{-3mm}
\begin{mdframed}[
linewidth = 0pt,
innertopmargin = 0pt,
innerbottommargin = 0pt,
outerlinewidth = 0pt,
rightmargin =0pt
]
\begin{equation} \label{rule:smartcheck: tx.origin}
\small
\begin{split}
access(Tx.Origin) \wedge \big (inContrFlowCondi(Tx.Origin) \\ \vee\,inModifierCode(Tx.Origin) \big )
\Rightarrow \text{Tx.Origin abusing}
\end{split}
\end{equation}
\end{mdframed}
\vspace{-2mm}
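As a minimal illustration of DM8 (a made-up sketch, not from our dataset), the following check only rejects calls routed through intermediate contracts and grants no extra permission, so flagging it as \codeff{Tx.Origin} abusing would be an FP.
\begin{lstlisting}
pragma solidity ^0.4.24;
// Made-up sketch of DM8: tx.origin is only compared against msg.sender,
// which merely rejects calls routed through intermediate contracts and
// grants no extra permission.
contract NoProxyCalls {
    function sensitive() public view {
        require(tx.origin == msg.sender); // DM8
        // ... sensitive logic ...
    }
}
\end{lstlisting}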
\noindent\textbf{FPs of Unchecked Low-Level-Call.} \smartcheck reports this vulnerability by verifying whether the following functions are under proper checks: \codeff{callcode()}, \codeff{call()}, \codeff{send()} and \codeff{delegatecall()}. Note that checking can be done in several ways: 1) using the built-in keyword \codeff{require} and \codeff{assert}, 2) using the \codeff{if} condition check, 3) using the self-defined exception-handler class to capture the error-prone low-level-calls. {According to the paper \cite{smartcheck}, the rule in \smartcheck~is called \emph{unchecked external call}, which checks whether calls of the above 4 functions
are inside \codeff{if} conditions.
However, in its implementation~\cite{Git-SmartCheck}, we find that it actually checks not only calls of the 4 low-level functions, but also calls of some user-defined functions.
Hence, checking extra calls of user-defined functions yields 189 FPs out of 551 results}.
We summarize the following DM for this type --- \textbf{DM9.} \emph{using various forms of checking (including \codeff{if}, \codeff{require} and \codeff{assert} checks) strictly for the four restricted low-level-calls}.
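A minimal made-up sketch of DM9 reads as follows: the return value of the low-level \codeff{send()} is strictly checked, so the call is not ``unchecked''.
\begin{lstlisting}
pragma solidity ^0.4.24;
// Made-up sketch of DM9: the return value of the low-level send() is
// strictly checked via require(), so the call is not "unchecked".
contract CheckedSend {
    function refund(address to, uint256 amount) public {
        require(to.send(amount)); // DM9: strict check
        // an unchecked variant would simply be:  to.send(amount);
    }
}
\end{lstlisting}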
\input{fig/eval_fp7.tex}
\noindent\textbf{FPs of Self-destruct Abusing.} In the existing scanners, only \slither~detects the misuse of self-destruct, which is called suicidal detection. In total, \slither~reports 46 suicidal cases via its built-in rule --- as long as the function \codeff{selfdestruct} is used, no matter what the context is, \slither~will report it. Obviously, \slither's rule is too simple and too general. It mainly works for directly calling \codeff{selfdestruct} without any permission control or conditions of business logic --- under such circumstances (11 out of 46), the \slither~rule can help to detect the abuse. In practice, in most cases (35 out of 46) \codeff{selfdestruct} is called with the \codeff{admin} or \codeff{owner} permission control or under some strict conditions in business logic. For example, \codeff{selfdestruct} is indeed required in the business logic of CB8 in Fig.~\ref{fig:evaluation:fp7}, as the owner wants to reset the contract via calling \codeff{selfdestruct} after the transactions in a period are all done and the contract is not active (i.e., the condition at line 2). Note that the parameter \codeff{burn} is just passed to call \codeff{selfdestruct} in a correct way. Hence, we summarize \textbf{DM10}, \emph{adding a strict condition control or a self-defined modifier for identity check when using \codeff{selfdestruct}}.
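A minimal made-up sketch of DM10 reads as follows: \codeff{selfdestruct} is only reachable by the owner and under an explicit business-logic condition.
\begin{lstlisting}
pragma solidity ^0.4.24;
// Made-up sketch of DM10: selfdestruct is only reachable by the owner
// and under an explicit business-logic condition.
contract Resettable {
    address public owner = msg.sender;
    bool public active = true;
    function reset() public {
        require(msg.sender == owner); // DM10: identity check
        require(!active);             // DM10: strict condition control
        selfdestruct(owner);
    }
}
\end{lstlisting}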
The precisions of these existing scanners on the \totalContract contracts are shown in Table \ref{tab:old_rq1} in the Appendix. More details can be found on the website of the benchmark.
As we audit the FPs and understand the rules of the existing scanners, we show the rules of existing scanners in Table~\ref{tab:refined_rules} in Appendix.
In Table \ref{tab:rdr_dm}, we show the summarized RDRs with the corresponding DMs.
{\emph{In brief, for a vulnerability type, we choose the rule of the tool that yields a better recall, and combine this rule with the corresponding DMs.}} For example, for reentrancy, we find that the rule of \slither yields a better recall, so the RDR is based on \slither's rule and then integrated with DM1 to DM5.
\input{fig/table_RDR.tex}
\section{Evaluation} \label{sec:expements}
\noindent\textbf{Experimental Environment.} Throughout the evaluation, all the steps are conducted on a machine running on Ubuntu 18.04, with 8 core 2.10GHz Intel Xeon E5-2620V4 processor, 32 GB RAM, and 4 TB HDD. For the scanners used in evaluation, no multithreading options are available and only the by-default setting is used for them.
\noindent\textbf{Dataset for Benchmark Construction.}
We implement a web crawler to download Solidity files from accounts of Etherscan~\cite{etherscan}, a famous third-party Ethereum block explorer website. When we started crawling, public users were allowed to freely download Solidity files. However, since the beginning of 2019, only 1000 accounts of recently verified contracts can be accessed on Etherscan per day. Finally, we crawled 18,714 accounts and obtained \totalContract contracts. The crawler adopts a random search strategy on the webpages of Etherscan to ensure the randomness of the downloaded contracts.
\noindent\textbf{Dataset for Tool Evaluation.} As we observe the DMs and rules of existing tools on the \totalContract contracts, it would be unfair to evaluate the resulting tool (the finally learned AVS and the summarized RDRs) on that dataset. Hence, we obtain another address list of contracts from the Google BigQuery Open Dataset. After removing the addresses that already appear among the \totalContract contracts, we get another \testContract real-world contracts deployed on Ethereum, on which we fairly compare our resulting tool \ourTool with the latest versions of the scanners: \slither~v0.6.4, \oyento~v0.2.7, \smartcheck~v2.0 and \securify~v1.0, which was open-sourced in Dec 2018.
\noindent\textbf{Research Questions (RQs).} With three state-of-the-art scanners and two collected datasets, we aim to answer these RQs:
\begin{enumerate}[leftmargin=*,label=\textbf{RQ\arabic*.},topsep=0pt,itemsep=-1ex]
\item What is the quality of the constructed benchmark? Are the AVS representative and are the vulnerabilities valid?
\item How accurate is our resulting tool, compared with the existing scanners, in detecting vulnerabilities?
\item How efficient is our resulting tool in building the vulnerability benchmark and in the tool comparison?
\end{enumerate}
\input{sec/experiments_RQ2.tex}
\vspace{1mm}
\subsection{RQ2: Evaluating the Resulting Tool} \label{sec:expements:otherRst}
\input{tables/dm_stat.tex}
As mentioned in \S\ref{sec:rdr:dm} and \S\ref{sec:expements:avs}, we have learned 42 AVS and 10 DMs in total for the five types of vulnerabilities. To evaluate the effectiveness of the resulting AVS and DM-based RDRs, we apply them to the \testContract newly collected contracts and compare with the latest versions of the existing tools.
Details on the accuracy of each tool for every type are shown in Table~\ref{tab:rq1}.
We identify the TPs of each tool via manual auditing, as the detection number for each tool is still acceptable.
\vspace{-3mm}
\subsubsection{Reasons for Low FPs of \ourTool} \label{sec:expements:otherRst:fp}
In Table~\ref{tab:rq1}, we list 313 detection results of \ourTool, with an overall precision of 48.2\%, regardless of vulnerability types. In comparison, \slither has a total precision of 20.1\%; \oyento's total precision is 14.3\%; \smartcheck's total precision is {37.3\%}; and \securify's total precision is surprisingly only 4.3\%.
We analyze the FP rates of these tools from the perspective of supporting DMs mentioned in \S\ref{sec:rdr:dm}.
\noindent\textbf{FPs of Reentrancy.} Among the four tools that support this type (i.e., all except \smartcheck), \ourTool yields the lowest FP rate (72.3\%) owing to the adoption of DM1-5 for reentrancy. The FP rates of the other tools are even higher. For example, the FP rate of \securify is 95.7\%, as its detection pattern is too general and does not consider any possible DM in code. \slither adopts Rule \ref{rule:slither:reentrance} in \S\ref{sec:background:example2} to detect, but it supports no DMs --- its recall is acceptable, but its FP rate is high. \oyento adopts Rule \ref{rule:oyente: reentrance} in \S\ref{sec:rdr:dm} and has no DMs --- its recall is low due to the strict rule, and its FP rate is also high.
\noindent\textbf{FPs of Unexpected Revert.} As summarized in \S\ref{sec:rdr:dm}, \slither reports all calls in loops as vulnerable, which leads to 53 FPs. \smartcheck handles DM6 but not DM7. In comparison, \ourTool handles DM6 well and partially supports DM7, yielding the lowest FP rate.
\noindent\textbf{FPs of \texttt{\small Tx.Origin} Abusing.} \slither has a strict rule for detecting this type, only checking the existence of \codeff{Tx.Origin == msg.sender}. For the case that \codeff{msg.sender} is assigned to the variable \codeff{owner} and then \codeff{Tx.Origin == owner} is checked, \slither does not detect it, causing FNs. \smartcheck and \ourTool manage to include all the identity check cases, but meanwhile also lead to FPs, since accurate symbolic analysis is not adopted in \smartcheck or \ourTool to decide whether \codeff{Tx.Origin} can rightly replace \codeff{msg.sender}. Hence, the FP rate due to DM8 is high for \ourTool.
\noindent\textbf{FPs of Unchecked Low-Level-Call.} \slither has no FPs on this type, as it strictly checks the usage of the four low-level calls. In contrast, \smartcheck does not handle DM9, and causes {96} FPs by checking some high-level calls. \ourTool has 7 FPs, marking 7 cases as unchecked ones, where the return values of low-level calls are stored in local variables and checked in later code. Hence, resolving these 7 FPs requires data dependency analysis to determine whether the return value is checked.
\noindent\textbf{FPs of Self-destruct Abusing.} \ourTool has 38 FPs. After inspecting, we find 30 FPs are due to the unsatisfactory handling of DM10 that hides in self-defined modifiers. \slither has a better FP rate. It shares some FPs due to the self-defined modifiers, and reports safe self-destruct as abusing in 10 FPs.
\noindent\textbf{Summary.} \emph{Our DM-based RDRs can significantly reduce FPs in most cases. Only DM3, DM8 and DM10 are not satisfactorily handled, namely when AVS-based code matching provides many vulnerability candidates with complicated self-defined modifiers or with many local variables used for value passing. To address this, accurate data dependency analysis on self-defined modifiers is required.}
\vspace{-3mm}
\subsubsection{Reasons for High Recall of \ourTool} \label{sec:expements:otherRst:fn}
In Table~\ref{tab:rq1}, in most cases, \ourTool yields the best recall except on unchecked low-level-calls, where R\% for \smartcheck is {52.3}\% and R\% for \ourTool is {89.2}\%.
Based on the AVS learned from the \totalContract contracts, we expect \ourTool to find more similar vulnerable candidates.
\input{tables/tp_distribution.tex}
\noindent\textbf{Unique TPs of Reentrancy.}
\ourTool finds 25 unique TPs of this type that are missed by other evaluated tools.
We find that the other three tools commonly fail to consider the user-defined function \codeff{transfer()}, as opposed to the built-in payment function \codeff{transfer()}.
{For the CB in Fig.~\ref{fig:evaluation:fn1}, \slither and \securify miss it as they mainly check the external calls of low-level functions (e.g., \codeff{send(), value()}) and the built-in \codeff{transfer()}. {\oyento does not report this CB, as it fails the balance check according to Rule~\ref{rule:oyente: reentrance}.} However, \ourTool detects this vulnerability, as we have an AVS with a high code similarity to this CB.} Notably, there are also 34 TPs missed by \ourTool, owing to the fact that reentrancy has many forms and our AVS are not sufficient to cover those TPs.
\noindent\textbf{Unique TPs of Unexpected Revert.} For this type, \ourTool reports 2 unique TPs among all 5 unique TPs that are found by only one tool. Interestingly, most TPs of this type are found by at least two tools --- the detection mechanisms behind them might be similar, and our AVS works. {The reason for the 3 other unique TPs missed by \ourTool is that we fail to consider the \codeff{assert} check for money transfer inside a loop, which causes a DoS in the same way as a failed \codeff{require} check does.}
\noindent\textbf{Unique TPs of \texttt{\small Tx.Origin} Abusing.} For this type, \ourTool reports 10 unique TPs among all 10 unique TPs that are found by only one tool --- all unique ones are found by \ourTool. The reason is that we have the AVS of using \codeff{Tx.Origin} for identity checks in self-defined modifiers. In particular, the check \codeff{origin != owner} in self-defined modifiers is a TP pattern commonly missed by the other evaluated tools.
\noindent\textbf{Unique TPs of Unchecked Low-Level-Call.} Among the {24} unique TPs that are found by only one tool, \ourTool finds about {17} unique ones, where the low-level-calls appearing in \codeff{if} statements are still unchecked. For example, the statement \codeff{if(success) \{send();\}} will be reported as non-vulnerable (i.e., as being checked) according to the rules in \slither and \smartcheck (their rule is to judge whether the low-level-calls are inside an \codeff{if} statement). To address this, \ourTool has AVS that match these cases, which are FNs for the other evaluated tools. However, there are {7} TPs that are all missed by \ourTool. After inspection, we find that the return values of these low-level-calls are not checked immediately after calling, but only after several or even many lines of other operations in the \codeff{if} statement. As accurate data-flow analysis is not adopted for this, \ourTool fails to detect these TPs; \slither and \smartcheck, in contrast, rightly detect them, since the low-level-calls are inside \codeff{if} statements.
\noindent\textbf{Unique TPs of Self-destruct Abusing.} For this type, \ourTool reports 5 unique TPs among all 6 unique TPs that are found by only one tool. As our AVS are somewhat general and consider self-defined modifiers, the recall of \ourTool (87.5\%) is much better than that of \slither (37.5\%). Hence, the high recall is achieved at the cost of a lower precision (15.6\%). Notably, 1 unique TP is missed by \ourTool but detected by \slither. This case is still considered an abuse of self-destruct, as no permission check exists before calling \codeff{selfdestruct} --- statement \{\codeff{require(this.balance == 0); selfdestruct(owner)};\}
\noindent\textbf{Summary.} \emph{Based on the various AVS learned from \totalContract contracts, \ourTool detects more unique TPs than any other evaluated tool, especially for types that have obvious code patterns (e.g., reentrancy, unexpected revert and \codeff{Tx.origin} abusing). On unchecked low-level-calls, \ourTool needs to support accurate data-flow analysis, which is required to judge whether the return value is checked on a long path.}
\input{fig/eval_fn1.tex}
\input{sec/experiments_RQ3.tex}
\subsection{RQ1: Evaluating the AVS and the Resulting Vulnerability Benchmark} \label{sec:expements:avs}
During the process of applying our approach (AVS and RDRs) to detect vulnerabilities, the important intermediate results include all of the learned AVS.
Besides, on the \totalContract contracts, the final results include the union of TPs reported by all tools, which constitutes the vulnerability benchmark.
\begin{figure}
\vspace{-2mm}
\centering
\includegraphics[width=\linewidth]{fig/unique_tp.png}
\caption{\#Vulnerabilities of each type in the benchmark. \lq\lq{Our
Unique.}\rq\rq{} means those found only by \ourTool.}\label{tab:benchmark}
\end{figure}
\noindent\textbf{Collecting Representative AVS.} Based on the TPs reported by the existing scanners, we automatically extract the AVS. In Table~\ref{tab:avs_num}, we show the number of extracted AVS for each vulnerability type. In total, we learn 42 AVS from the existing TPs, 47.6\% of which are of reentrancy --- indicating the various forms of the reentrancy vulnerability. For example, for the 96 TPs of reentrancy found by \slither and \oyento, we apply the AST tree-edit-distance based clustering method and get {24} clusters (when setting the cluster width to {100} edits). After manual inspection, we find 20 clusters are representative and can serve as the AVS for discovering more unknown ones. For example, for some functions (e.g., \codeff{buyFirstTokens}, \codeff{sellOnApprove}, \codeff{sendEthProportion} and so on), we find cloned instances with a certain extent of similarity due to the copy-paste-modify paradigm. For these cloned instances, we apply code differencing to extract the AVS and further refine the AVS to retain the core parts via manual inspection.
For unexpected revert, we also find that some cloned instances exist among the TPs. In particular, for the two typical scenarios of unexpected revert --- the revert on a single account (see Fig.~\ref{fig:motivating2}) and the revert due to failed operations on multiple accounts via a loop --- we get 8 AVS in total via clustering more than 200 TPs that are reported by \slither or \oyento.
For the three other vulnerability types, we cannot get clusters of cloned function instances, because triggering these vulnerabilities does not require much context. As the remaining types are all about improper checks, we design 4 or 5 AVS for each type. For example, regarding \codeff{Tx.origin} abusing, the existing scanners mainly check whether it is inside an \codeff{if} statement. We extend this with more AVS, such as checking \codeff{Tx.origin} inside \codeff{require}, \codeff{assert} and \codeff{if-throw}.
Last, for unchecked low-level-call, 4 AVS are used to catch improper low-level calls in loops without any validation check on the return values, for \codeff{call()}, \codeff{callcode()}, \codeff{delegatecall()} and \codeff{send()}.
\begin{table}[t]
\vspace{-2mm}
\footnotesize
\centering
\caption{The number of AVS for each vulnerability type.}
\vspace{-2mm}
\begin{tabular}{|l|l|l|l|l|l|}
\hline
\multicolumn{1}{|p{2.72em}|} {}&\multicolumn{1}{|p{2.72em}|}{Reen-
trancy} & \multicolumn{1}{p{2.72em}|}{Tx Origin} & \multicolumn{1}{p{3.72em}|}{Unchecked L.-L.-C.} & \multicolumn{1}{p{2.72em}|}{Unex-
pected Revert} & \multicolumn{1}{p{2.72em}|}{Self
Des-
truct} \\
\hline
\multicolumn{1}{|p{2.72em}|}{\#AVS} & 20 & 5 & 4 & 8 & 5 \\
\hline
\end{tabular}%
\label{tab:avs_num}%
\end{table}%
\input{sec/experiments_RQ4.tex}
\subsection{RQ3: Evaluating the Efficiency} \label{sec:expements:comparison}
\noindent\textbf{On Dataset for Benchmark Construction.}
In Table \ref{tab:efficiency}, \slither takes the least time (only {156} \emph{min}) for detection. \smartcheck and \ourTool have comparable detection times (500\textasciitilde1000 \emph{min}). They are essentially the same type of technique --- pattern-based static analysis. In practice, they may differ in performance due to implementation differences, but they are still significantly faster than \oyento, which applies symbolic execution. Compared with other dynamic analysis or verification tools (i.e., \textsc{Mythril} and \textsc{Securify}, which cannot finish within three days on the \totalContract contracts), \oyento is quite efficient. Notably, the AVS learning time of \ourTool is not included in the detection time, as it can be done off-line separately. Since AVS learning is analogous to rule formulation, it is not counted in the detection time.
\noindent\textbf{On Dataset for Tool Evaluation.} On the smaller dataset, we observe a similar pattern of execution time --- \slither is the most efficient, \oyento is the least efficient (apart from \securify), and \smartcheck and \ourTool have comparable efficiency. Notably, \securify can finish the detection on the \testContract contracts, but it takes significantly more time than the other tools. {The performance issue of \securify arises from the conversion of EVM IRs into a datalog representation and the subsequent application of verification techniques.} \oyento is also less efficient, as it relies on symbolic execution for analysis. In theory, \ourTool should be comparable to \smartcheck and \slither, as these three all adopt rule-based matching analysis. The extra overhead of \ourTool, compared with \smartcheck and \slither, comes from AVS-based code matching.
\subsection{RQ4: Evaluating the Resulting Benchmark} \label{sec:expements:benchmark}
\input{tables/detection_evaluation.tex}
\noindent\textbf{Unionizing TPs of Various Tools.} To identify the TPs of the existing tools, some manual effort is inevitable. Still, we adopt a strategy to speed up the manual auditing process while preserving auditing accuracy. For example, if a vulnerability is reported by two or more tools among \slither, \oyento, \smartcheck and \securify, one researcher manually checks whether it is a TP. Otherwise, if it is reported by only one tool, two researchers check it, and a third researcher is involved if they give different auditing results. After gathering and unionizing the TPs of all tools, we build up a benchmark of high confidence. As shown in Fig.~\ref{tab:benchmark}, the vulnerability benchmark consists of \totalVuls vulnerability instances in total. Reentrancy and unchecked low-level-call still constitute a considerable part, while \codeff{Tx.Origin} abusing and self-destruct abusing are much fewer than we expected. Finally, we publish all the \totalVuls TPs of the five vulnerability types on the benchmark website \cite{mavs_link}.
\noindent\textbf{Dynamic Confirmation of TPs.} To confirm the validity of the detected TPs, we further sample some of the instances and manage to trigger the vulnerabilities on our local server. Specifically, for the vulnerabilities found by each AVS in Table~\ref{tab:avs_num}, we randomly pick two samples. For each selected sample, we identify all the contracts it depends on and copy them into the local testing project on our server. The reason that we choose only two samples per AVS for confirmation is twofold: 1) the vulnerabilities matched by the same AVS will be similar and can be triggered in similar ways; 2) triggering a concrete vulnerability requires the customization of attack code, which is currently done manually by a human expert (see the attack code in {\cite{mavs_link}}). Notably, \emph{automated attack-code customization for vulnerability triggering is out of the scope of this study.}
\section{Introduction}
Powered by the blockchain technique~\cite{BeckART17},
smart contracts~\cite{smartcontract} have attracted plenty of attention and have been applied in various industries, e.g., financial services, supply chains, smart traffic, and IoT.
Among the languages for smart contracts, Solidity is now the first choice, owing to its popularity and simplicity.
However, in versions before Solidity {0.5.0}, little security consideration was present in its
language tool chains~\cite{solidity050}.
Besides loose security checks, some special features of Solidity (e.g., fallback
functions) also exacerbate this problem, making smart contracts prone to errors and vulnerabilities.
The public has witnessed several severe security incidents, including the notorious DAO
attack~\cite{daoAttack} and Parity wallet hack~\cite{parityHack}.
According to the previous reports~\cite{AtzeiBC16,sigmaprime}, up to 16 types of security
vulnerabilities were found in Solidity programs.
These security issues undermine the confidence people have in executing transactions via smart
contracts and eventually affect the trust towards the blockchain ecosystem.
Witnessing the severity and urgency of this problem, researchers and security practitioners have
made endeavors to develop automated security scanners.
According to our survey, there exist at least {13} scanners, most of which adopt the rule-based or
verification-based methods for vulnerability detection.
\slither~\cite{Git-Slither} and \oyento~\cite{Git-Oyente} are among the most popular open-source
analysis tools that are publicly available.
Besides, there are also several recently published security tools such as \zeus~\cite{zeus},
\smartcheck~\cite{smartcheck}, \textsc{Securify}~\cite{securify}, etc.
Each of them has its own advantages and limitations, covering various vulnerability types.
Notably, the term \emph{vulnerability} in this study refers to the security issues exploitable by
external attackers, not including unexploitable style issues and code smells (e.g., bad
naming conventions in \slither and unspecified version issue in \smartcheck).
Although various scanners are available, to the best of our knowledge, there does not yet exist a
high-quality benchmark of vulnerable smart contracts.
Building such a benchmark is a non-trivial task,
as the real-world scanners may be neither sound nor complete.
The reasons are twofold.
First, the detection rules may be out-of-date.
As developers become aware of possible vulnerabilities and the corresponding attacks, they often
add some \emph{defense mechanisms} (DMs) in code for the purpose of prevention (see examples in
\S\ref{sec:expements:otherRst}).
\emph{However, most of the existing scanners fail to consider DMs in code, causing FPs}.
Second, programs on Ethereum may be logically similar.
Hence, code cloning~\cite{cloneSurvey} also widely exists across smart contracts \cite{attackClone}
(see examples in Fig.~\ref{fig:codesimilarity} and Fig.~\ref{fig:learningSignature}).
\emph{Still, the existing scanners ignore the cloning phenomenon, causing FNs}.
Hence, purely relying on existing scanners to build a high-quality benchmark is
problematic.
\begin{figure}[t]
\vspace{-3mm}
\centerline{\includegraphics[width=8.4cm, height=2.8cm]{fig/similarity.png}}
\vspace{-2mm}
\caption{File similarity for smart contracts from \totalAccounts Ethereum
accounts.}\label{fig:codesimilarity}
\vspace{-6mm}
\end{figure}
{To further support the argument that code cloning widely exists in smart contracts, we conduct
a textual-similarity analysis on our collected \totalContract contracts from \totalAccounts accounts in
Ethereum. We compile the contracts from the same account by alphabetical order of contract name
into a textual file, and apply the well-known clone detection tool \ccfinder~\cite{ccfinder} to
find their pairwise textual similarity. Surprisingly, as shown in Fig.~\ref{fig:codesimilarity}, we
find that at least 47.9\% of the files hold a $\ge60\%$ similarity with some other file. In particular, 8.6\% of the
account files are exactly the same. The reason is twofold: 1) the logic behind Solidity programs could be
generally similar; 2) new developers may browse and copy existing contracts' code. Hence, as
\vuddy~\cite{vuddy} is effectively applied for identifying similar buggy code via matching code
signatures for C language, \emph{it is desirable to have a tool of using the known bug signatures
to find more similar unknown bugs in smart contracts}. However, we find that none of the existing
tools take into account code similarity matching.}
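As an illustration only, a per-account similarity analysis of this kind can be sketched in Python as follows; a simple token-based ratio stands in here for \ccfinder, and the helper names are hypothetical:
\begin{verbatim}
# Rough sketch of the pairwise file-similarity analysis; a simple
# sequence-matching ratio stands in for CCFinder.
from difflib import SequenceMatcher
from itertools import combinations

def compile_account(contracts):
    # contracts: list of (name, source); concatenate by contract name order
    return "\n".join(src for _, src in sorted(contracts))

def max_pairwise_similarity(account_files):
    """account_files: dict mapping account id -> compiled textual file."""
    best = {name: 0.0 for name in account_files}
    for a, b in combinations(account_files, 2):
        r = SequenceMatcher(None, account_files[a], account_files[b]).ratio()
        best[a] = max(best[a], r)
        best[b] = max(best[b], r)
    # the share of accounts with best >= 0.6 corresponds to the statistic above
    return best
\end{verbatim}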
The code-similarity based buggy code search helps provide more bug candidates and therefore
improves the recall. Meanwhile, the similarity-based code search may bring many false positives
(FPs), {\emph{owing to the observation that the fixed code with DMs is often syntactically similar to the original
vulnerable code}}. Hence, during the process of similarity-based code search, it is inevitable to
retrieve many clones of safe code with DMs, and they should be eliminated from the detection results via some automated
approach.
In this paper, we present a two-phase approach for building the benchmark.
The basic idea is shown in Fig.~\ref{fig:simple_flow}. First, we use true positives (TPs) reported
by the existing scanners as \emph{abstract vulnerability signatures} (AVS), and apply AVS-based
code matching to automatically discover more candidates unknown to the existing scanners --- this
step is to improve the recall for building the vulnerability benchmark. Second, to eliminate the
code clones that are actually non-vulnerable with DMs, manual inspection is conducted to summarize
those DMs in code, based on which the refined detection rules (RDR) are summarized and applied to
improve the precision for the enlarged vulnerability benchmark. By integrating AVS and RDR, we
manage to build the vulnerability benchmark of good precision and recall, with a high extent of
automation and acceptable manual efforts.
\begin{figure}[t]
\vspace{-3mm}
\centerline{\includegraphics[width=9cm, height=3cm]{fig/simple_flow.png}}
\vspace{-2mm}
\caption{The idea of building the vulnerability benchmark.}\label{fig:simple_flow}
\vspace{-6mm}
\end{figure}
Technically, the AVS matching part consists of three steps (\S\ref{sec:chanllenges}). First, we choose three of the existing scanners to find some concrete vulnerable samples and remove the FPs via manual inspection (\S\ref{sec:rdr:dm}). Second, we employ a clustering algorithm to group similar vulnerable samples of the same type and then extract AVS via a code differencing algorithm (\S\ref{sec:learnSigs}). Third, we leverage the AVS to automatically search for more concrete vulnerable samples via a similar-code matching technique (\S\ref{sec:ourApproach}). The \emph{technical novelty} lies in the proposal of AVS and the AVS-based code matching algorithm, which help discover similar vulnerabilities that may contain small or big code gaps (\S\ref{sec:background}). Note that AVS extraction is performed on the clustering and differencing results.
For the RDR part, the major workload is to survey the existing scanners and understand their internal detection rules.
It is inevitable to have some extent of manual inspection. Hence, we build a team of four research
staff, and three of them
have industrial working or internship experiences. After two weeks' training, the team started to
utilize the existing scanners. Starting with all
open-source scanners, we finally chose \slither, \oyento and \smartcheck (see details in
\S\ref{sec:rdr:existingTool}).
The team spent in total \emph{five months} on reading the documentation and source code of the
existing scanners, and auditing the results to summarize the rules of scanners and the DMs in code
that cause FPs --- finally formulating the RDR. {The formulation of the RDR is an iterative
process with the aid of AVS and the existing scanners, as they all provide FPs with DMs to fix the vulnerability in code (see details in \S\ref{sec:discuss}).}
To summarize, we make the following contributions:
\begin{enumerate}[leftmargin=*,topsep=0pt,itemsep=0ex]
\item
To improve the detection recall, we propose a similar-code matching
algorithm on the basis of the AVS, which helps to detect similar vulnerabilities containing big or
small gaps, given some existing known vulnerabilities.
\item
To improve the detection precision, we propose to investigate the three
state-of-the-art scanners and summarize their rules (if not documented). Beyond that, we summarize
the RDR considering the DMs in code that cause FPs.
\item On the \totalContract contracts collected from Etherscan, following the workflow in
Fig.~\ref{fig:simple_flow}, we publish a benchmark of five types of vulnerabilities that consists
of \totalVuls vulnerabilities. Details of the benchmark can be found at \cite{mavs_link}.
\item On the \testContract contracts recently collected from Google, with the final AVS and RDR,
our tool (named \ourTool) is compared with the four others. The results show \ourTool yields the
best accuracy on the five types of vulnerabilities.
\end{enumerate}
{To the best of our knowledge, we make the first attempt to summarize and apply the DMs that are not considered by the existing scanners, and to apply similarity-based code matching for detection. Besides, the vulnerability detection and benchmark construction involve more than 90,000 contracts.}
\input{fig/motivatingExm2.tex}
\section{Learning and Applying AVS} \label{sec:AVS}
In this section, we explain how to learn and apply AVS to discover more similar vulnerabilities with small or big gaps.
\subsection{Learning AVS} \label{sec:learnSigs}
We take three steps to extract the AVS from the vulnerable CBs: 1) preprocessing the ASTs of the input CBs to mitigate the noise due to differences in variable names and constant values; 2) clustering the similar CBs via hierarchical clustering on the basis of tree-edit distance; 3) for each cluster of CBs, applying an efficient code differencing algorithm for the commonality and variability analysis among the CBs, based on which the AVS is extracted. Overall, the input of AVS learning is the set of vulnerable CBs, and the output is the learned AVS in the form of \emph{normalized} ASTs.
\noindent\textbf{Preprocessing.} To consider both semantic and structural information of the vulnerable code, we construct the AST and normalize the concrete data values in the AST nodes.
The AST parsing is conducted via the open-source tool, ANTLR-based Solidity parser~\cite{solParser}.
Proper preprocessing is applied to the AST for retaining core information and abstracting away unimportant differences (e.g., variable names or constant values) for the subsequent clustering step.
We split the whole AST into segments at the unit of function. For each segment of the AST that corresponds to a function, we retain only the information of node type, name, parameters and return value (if present); other information (e.g., \emph{range}, \emph{visibility}, \emph{stateMutability}) is discarded.
For the variable names (e.g., \codeff{\_indexs} in Fig.~\ref{fig:learningSignature:ex2} and \codeff{\_idxs} in Fig.~\ref{fig:learningSignature:ex3}), we normalize them with the asterisk token \lq\lq{$\ast$}\rq\rq{}. Similarly, we apply the same normalization to constant values of the types \codeff{string}, \codeff{int}, \codeff{bytes} and \codeff{uint}. Thus, we retain the core information and abstract away concrete names and values, making the clustering step sensitive to code gaps and patterns of function calls rather than to naming differences.
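A minimal Python sketch of this preprocessing step is given below; it assumes a nested-dictionary AST, and the field and node-type names are illustrative rather than the exact schema of the ANTLR-based parser:
\begin{verbatim}
# Sketch: keep only core fields of a function-level AST segment and
# replace identifiers and literal values with the placeholder "*".
CORE_FIELDS = {"type", "name", "parameters", "returnParameters"}

def normalize(node):
    if isinstance(node, list):
        return [normalize(child) for child in node]
    if not isinstance(node, dict):
        return node
    if node.get("type") == "Identifier":          # variable name -> *
        return {"type": "Identifier", "name": "*"}
    if node.get("type") == "Literal":             # string/int/bytes/uint -> *
        return {"type": "Literal", "value": "*"}
    kept = {}
    for key, value in node.items():
        # keep core fields and structural children; drop range, visibility, ...
        if key in CORE_FIELDS or isinstance(value, (dict, list)):
            kept[key] = normalize(value)
    return kept
\end{verbatim}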
\noindent\textbf{Clustering Vulnerable Samples.} After preprocessing, the set of normalized AST segments corresponding to functions serves as the input for clustering. The basic idea is to perform \emph{hierarchical clustering} according to the tree-edit distance between two ASTs~\cite{Emms06}. In this paper, we apply a robust algorithm for the tree edit distance (ARTED)~\cite{PawlikA15}, which computes the optimal path strategy by performing an exhaustive search in the space of all possible path strategies. Here, a \emph{path strategy} refers to a mapping between two paths of the two input trees (or subtrees), as the distance between two (sub)trees is the minimum distance of four smaller subproblems. Though ARTED runs in \emph{quadratic} time and space complexity, it is guaranteed to perform as well as or better than its competitors~\cite{PawlikA15}.
In Table~\ref{tab:treeDis}, the pair-wise tree edit distances among the four CBs are listed after applying ARTED on their normalized ASTs. As shown, the distances between the CBs in Fig.~\ref{fig:learningSignature} are small --- indicating the three CBs are very similar.
With complete-linkage clustering \cite{completeLink}, the dendrogram of the clustering results is shown in Fig.~\ref{fig:dendrogram}. If we choose the height as $50$ (intuitively, about a gap of $5$ statements), we can rightly group CB9, CB10 and CB11 into one cluster and C3 alone into another.
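A sketch of this clustering step in Python is shown below; it assumes a tree-edit distance function (e.g., an ARTED implementation) is available, and the cut height of 50 mirrors the choice above:
\begin{verbatim}
# Sketch: complete-linkage clustering over pairwise tree-edit distances.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_cbs(asts, tree_edit_distance, height=50):
    n = len(asts)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            d = tree_edit_distance(asts[i], asts[j])   # assumed ARTED impl.
            dist[i, j] = dist[j, i] = d
    Z = linkage(squareform(dist), method="complete")   # complete linkage
    return fcluster(Z, t=height, criterion="distance") # cut dendrogram at 50
\end{verbatim}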
\noindent\textbf{AVS Extraction.} After we cluster the similar vulnerabilities together, we need to summarize the commonality and variability among them for automated extraction of the AVS that represents their generic form. Towards this goal, we apply the open-source code differencing tool \textsc{MCIDiff}~\cite{mcdiff}, which can detect differences across multiple instances of code clones in a linear number of comparisons. The idea is inspired by the concept of progressive alignment \cite{Feng1987}, which proves to be quite effective in multiple DNA sequence alignment. Rather than performing pair-wise comparisons of time complexity $\small O(n^2)$, \textsc{MCIDiff} compares $n$ similar CBs in $n-1$ comparisons.
Details of \textsc{MCIDiff} are omitted due to the page limit; interested readers can refer to \cite{mcdiff}.
For the CBs in Fig.~\ref{fig:learningSignature}, it first compares the two samples with the least tree-edit distance (CB9 and CB11), then gradually compares with the others (CB10), and finally obtains the AVS in Fig.~\ref{fig:avs_mot3} (see Appendix) automatically.
Besides, the AVS to match CB1 and CB2 in Fig.~\ref{fig:motivating2} is shown in Fig.~\ref{fig:avs}.
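A simplified sketch of this progressive extraction over normalized statement sequences is given below; the real \textsc{MCIDiff} works on clone-instance tokens and differences, so this sketch only conveys the $n-1$ comparison scheme:
\begin{verbatim}
# Sketch: progressively intersect n clone instances in n-1 comparisons,
# keeping the common (ordered) statements as a crude AVS candidate.
from difflib import SequenceMatcher

def common_subsequence(a, b):
    out = []
    for block in SequenceMatcher(None, a, b).get_matching_blocks():
        out.extend(a[block.a:block.a + block.size])
    return out

def extract_avs(instances):
    """instances: normalized statement sequences, ordered so that the two
    closest ones (smallest tree-edit distance) come first."""
    avs = common_subsequence(instances[0], instances[1])
    for inst in instances[2:]:               # n-1 comparisons in total
        avs = common_subsequence(avs, inst)
    return avs                               # refined manually afterwards
\end{verbatim}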
\input{tables/tree_edit_distance.tex}
\begin{figure}[t]
\vspace{-2mm}
\centerline{\includegraphics[width=5cm]{fig/clustering.png}}
\caption{Hierarchical clustering via complete linkage for AST-based tree-edit distances among CBs in Fig.~\ref{fig:motivating2} and Fig.~\ref{fig:learningSignature}.}\label{fig:dendrogram}
\vspace{-4mm}
\end{figure}
\subsection{AVS-based Vulnerability Matching} \label{sec:ourApproach}
The matching method is expected to be robust, succeeding in pairing up CB1 with CB2 in Fig.~\ref{fig:motivating2}, and CB9, CB10 and CB11 together in Fig.~\ref{fig:learningSignature}, regardless of the size of the gaps. Besides, the algorithm should also consider the execution order of the statements, since some vulnerabilities (e.g., reentrancy) strongly depend on the statement execution order. Under such circumstances,
our AVS-based code matching approach is based on the CFG rather than the AST, which ignores the control flow of a program.
The matching process includes the following two steps: 1) generating and preprocessing the CFG from the AST of the AVS; 2) applying our code matching algorithm based on the longest common subsequence (LCS)~\cite{lcs} and a sequence inclusion check. The input of AVS-based matching includes the AST of the AVS and the contracts to be scanned; the output is the set of detected vulnerabilities in the contracts.
\begin{figure}[t]
\vspace{-1mm}
\centering
\includegraphics[width=0.5\textwidth]{fig/Figure3_a_ASTwithColor_part}
\caption{The exemplar AVS for CB1 in Fig.~\ref{fig:motivating2:original} in the form of a normalized AST segment.}
\label{fig:avs}
\vspace{0mm}
\end{figure}
\subsubsection{Generating and Preprocessing CFG} \label{sec:ourApproach:cfg}
\vspace{-2mm}
To match CB1 with CB2, we show in Fig.~\ref{fig:avs} the AVS in the form of a normalized AST. As the AVS should be compact and represent the core part of the relevant similar CBs, with the help of human expertise, the final form of the AST contains just three statements (statements 6, 7 and 8 in Fig.~\ref{fig:motivating2:original}) and has variable names and values normalized (shown with a gray background in Fig.~\ref{fig:avs}). Then, the AST can be used for CFG generation. The conversion process from AST to CFG is inspired by \slither, following the implementation routine in \slitherIR~\cite{slitherIR}.
In Fig.~\ref{fig:cfg} (see Appendix), we show the CFG that is converted from the AST form of the AVS in Fig.~\ref{fig:avs}; it also contains the IR inside each node of the CFG. Before matching, the standard CFG needs to be preprocessed in two normalization steps. First, the differences in variable names and constant values need to be abstracted away, as we do for the AST. For the CFG of the AVS, this step is already done in the process of learning the AVS; for the CFGs of the code to be scanned, the step is done while traversing the IR instructions inside the CFG. For example, both \codeff{currentLeader} in Fig.~\ref{fig:motivating2:original} and \codeff{bettor} in Fig.~\ref{fig:motivating2:cloned} will be normalized with the placeholder \codeff{*DEST*}; similarly, \codeff{highestBid} and \codeff{win} will be replaced with the placeholder \codeff{*VALUE*}. Second, the CFG will be normalized and flattened in order to remove all the loops inside, as the subsequent code matching algorithm works on the IR sequence of the CFG nodes. Hence, to remove loops, we remove the transition from the last node back to the beginning node of the loop. Then, to flatten the CFG, we apply the Breadth-First-Search (BFS) algorithm.
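A small Python sketch of this second normalization step is given below, with the CFG represented as an adjacency dictionary; skipping transitions back to already visited nodes plays the role of the loop removal described above:
\begin{verbatim}
# Sketch: drop loop back edges, then flatten the CFG into an IR sequence
# by breadth-first search from the entry node.
from collections import deque

def flatten_cfg(cfg, entry, ir_of):
    """cfg: {node: [successor, ...]}, ir_of: {node: [IR instruction, ...]}."""
    seen, order, queue = set(), [], deque([entry])
    while queue:
        node = queue.popleft()
        if node in seen:
            continue
        seen.add(node)
        order.append(node)
        for succ in cfg.get(node, []):
            if succ not in seen:          # edges back to seen nodes are dropped
                queue.append(succ)
    # concatenate the (normalized) IR instructions in BFS order
    return [ins for node in order for ins in ir_of.get(node, [])]
\end{verbatim}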
\vspace{-6mm}
\subsubsection{Applying the Code Matching Algorithm on CFG} \label{sec:ourApproach:lcs}
\setlength{\textfloatsep}{0.1cm}
\vspace{-2mm}
\begin{algorithm}[t]
\small
\SetKwInOut{Input}{input}
\SetKwInOut{Output}{output}
\Input{$S_1$, the signature sequence of the AVS}
\Input{$S_2$, the target sequence of code to be scanned}
\Input{$\eta$, the threshold of the similarity\% be matched}
\Output{$b_m$, the boolean result of matching}
\uIf{$|S_2| \le \frac{1}{\eta} \times |S_1|$}{ \label{algo:matching:if1}
$\sigma \leftarrow LCS(S_1,S_2)$ \label{algo:matching:if2}
}\ElseIf {$|S_2| > \frac{1}{\eta} \times |S_1|$} { \label{algo:matching:while1}
$pos \leftarrow 0$, $\sigma \leftarrow 0$\\
\While{$pos \le |S_2|$}{
$S_t \leftarrow SubSequence(S_2, pos,\frac{1}{\eta} \times |S_1|)$ \\
$\sigma_t \leftarrow LCS(S_1,S_t)$ \label{algo:matching:lcs} \\
\uIf{ $\sigma_t > \sigma $} {
$\sigma \leftarrow \sigma_t $
}
$pos \leftarrow pos + itv$ \texttt{\scriptsize //set interval for the window} \label{algo:matching:whileN}
}
}
\uIf{$\sigma \ge \eta $}{
$b_m \leftarrow true$, return $b_m$ \texttt{\scriptsize //return using LCS} \label{algo:matching:rslt1}
}
\Else{
$b_m \leftarrow isIncludedByOrder(S_1,S_2)$, return $b_m$ \texttt{\scriptsize //return using subsequence inclusion check} \label{algo:matching:rslt2}
}
\caption{Code Matching based on CFG IR nodes}\label{algo:matching}
\end{algorithm}
Algo.~\ref{algo:matching} shows the workflow of AVS-based matching, which takes as input the two sequences (the signature $S_1$ and the target $S_2$) and the predefined similarity threshold $\eta$, and finally yields a boolean value $b_m$ indicating the result. Basically, Algo.~\ref{algo:matching} matches in two ways: based on LCS at line \ref{algo:matching:rslt1} or on the subsequence inclusion check at line \ref{algo:matching:rslt2}. First, if the two sequences have similar lengths, they can be matched directly by the LCS algorithm at lines \ref{algo:matching:if1}--\ref{algo:matching:if2}; if the target is much longer than the signature, LCS with a sliding window is employed at lines \ref{algo:matching:while1}--\ref{algo:matching:whileN}. Last, if not matched by LCS, the sequence inclusion check is applied --- function \codeff{isIncludedByOrder} checks whether every node in $S_1$ appears and follows the same order as in $S_2$. As the sliding-window loop of Algo.~\ref{algo:matching} performs $O(n)$ iterations and our LCS implementation is $O(n\log_{2}n)$, the overall time complexity is $O(n^2\log_{2}n)$.
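A direct Python transcription of Algo.~\ref{algo:matching} is sketched below; the LCS routine is a plain dynamic-programming reference implementation (not the optimized one used in \ourTool), and the similarity $\sigma$ is taken as the LCS length normalized by the signature length:
\begin{verbatim}
# Sketch of Algorithm 1: LCS over IR node sequences, with a sliding
# window for long targets and an ordered-inclusion fallback.
def lcs_len(s1, s2):
    m, n = len(s1), len(s2)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            dp[i + 1][j + 1] = (dp[i][j] + 1 if s1[i] == s2[j]
                                else max(dp[i][j + 1], dp[i + 1][j]))
    return dp[m][n]

def is_included_by_order(s1, s2):
    it = iter(s2)                         # s1 appears in order within s2?
    return all(any(x == y for y in it) for x in s1)

def match(s1, s2, eta=0.75, itv=1):
    if not s1:
        return True
    limit = int(len(s1) / eta)            # window size (1/eta) * |S1|
    if len(s2) <= limit:
        sigma = lcs_len(s1, s2) / len(s1)
    else:
        sigma, pos = 0.0, 0
        while pos <= len(s2):             # sliding window over the target
            window = s2[pos:pos + limit]
            sigma = max(sigma, lcs_len(s1, window) / len(s1))
            pos += itv
    if sigma >= eta:
        return True                       # matched by LCS similarity
    return is_included_by_order(s1, s2)   # subsequence inclusion check
\end{verbatim}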
This algorithm can effectively match the CB pairs in Fig.~\ref{fig:motivating2} or Fig.~\ref{fig:learningSignature}, using either one in a pair as the signature and the other as the target. For the CBs with a small gap in Fig.~\ref{fig:learningSignature}, as they have similar lengths, the LCS can directly match them with a code similarity of {75\%}.
For the pair with big gaps in Fig.~\ref{fig:motivating2}, the method $LCS()$ at line \ref{algo:matching:lcs} actually fails. In contrast, our algorithm succeeds owing to \codeff{isIncludedByOrder()} at line \ref{algo:matching:rslt2} --- the CFG IR sequence of the AVS for CB1 in Fig.~\ref{fig:avs} appears in exactly the same order in the flattened IR sequence of the CFG of CB2. Thus, with a proper AVS, the code matching approach can also effectively mitigate big gaps.
Besides the boolean result $b_m$, our approach can report the possible position of a vulnerability at a granularity as fine as the statement level, according to the part of $S_2$ that is matched with $S_1$.
\section{Related Work} \label{sec:related}
\noindent\textbf{Vulnerability Detection in Smart Contracts.} As listed in Table~\ref{tab:all_tools} in the Appendix, there are mainly 13 security scanners for smart contracts.
From the perspective of software analysis, these scanners can be categorized as static or dynamic. In the former category, \slither~\cite{Git-Slither} aims to be an analysis framework that runs a suite of vulnerability detectors.
\textsc{Oyente}~\cite{oyento,Git-Oyente} analyzes the bytecode of the contracts and applies Z3-solver~\cite{z3} to conduct symbolic executions.
Recently, \smartcheck~\cite{smartcheck,Git-SmartCheck} translates Solidity source code into an XML-based IR and defines the XPath-based patterns to find code issues.
\textsc{Securify}~\cite{securify,Git-Securify}, a tool that works on EVM code, detects vulnerabilities via compliance (or violation) patterns that guarantee that certain behaviors are safe (or unsafe, respectively). These static tools usually adopt symbolic execution or verification techniques and are relevant to \ourTool. However, none of them applies code-similarity based matching techniques or takes into account the possible DMs in code that prevent attacks.
There are some other tools that enable static analysis for smart contracts. \zeus \cite{zeus} adopts XACML as a language to write safety and fairness properties, converts them into LLVM IR~\cite{llvm_ir} and then feeds them to a verification engine such as \textsc{SeaHorn} \cite{GurfinkelKKN15}. Besides, there is another EVM bytecode decompiling and analysis framework, namely \textsc{Octopus}~\cite{Git-Octopus}, which requires users to define the patterns for vulnerability detection. To prevent attacks like the DAO attack, Grossman et al. propose the notion of effectively Callback Free (ECF) objects in order to allow callbacks without preventing modular reasoning \cite{ecf}. \textsc{Maian} is presented to detect greedy, prodigal, and suicidal contracts \cite{maian}, and hence the vulnerabilities it addresses differ from the types we address in this paper. The above tools are relevant, but due to various reasons (issues in tool availability or supported types), we cannot have a direct comparison between \ourTool and them.
The less relevant category includes dynamic testing or fuzzing tools: \textsc{Manticore}~\cite{Git-Manticore}, \mythril~\cite{Git-Mythril}, \textsc{MythX}~\cite{Git-Mythx}, \textsc{Echidna}~\cite{Git-Echidna} and \textsc{Ethracer}~\cite{ethracer}. \textsc{Mythril} and \textsc{MythX} use advanced techniques (e.g., concolic testing and tainting) for detection. In addition, researchers have also proposed the testing-based tool \textsc{Manticore}, the fuzzing library \textsc{Echidna} and the fuzzing tool \textsc{Ethracer}. Dynamic tools often target certain vulnerability types and produce results with a low FP rate. However, they are unsuitable for large-scale detection due to efficiency issues.
\noindent\textbf{Code-similarity based Vulnerability Detection.} In general, similar-code matching techniques are widely adopted for vulnerability detection. In 2016, \textsc{VulPecker}~\cite{VulPecker} was proposed to apply different code-similarity algorithms, for various purposes, to different vulnerability types. It leverages vulnerability signatures from the National Vulnerability Database (NVD)~\cite{NVD} and applies them to detect 40
vulnerabilities that are not published in the NVD, among which 18 are zero-days. While \textsc{VulPecker} works on C source code, \textsc{Bingo}~\cite{bingo} can operate on binary code and compare assembly code via tracelet (partial trace of CFG) extraction~\cite{tracy} and similarity measuring.
Recently, \vuddy~\cite{vuddy} has represented the state of the art in constructing vulnerability benchmarks for C.
To sum up, these studies usually resort to the vulnerability databases of the C language for discovering similar zero-days. In contrast, much of our effort is spent on gathering vulnerabilities from other tools for smart contracts and auditing them manually. Besides, \vuddy~\cite{vuddy} targets exact clones and parameterized clones, not gapped clones, as it utilizes hashing for matching for the purpose of high efficiency. \ourTool adopts a more robust algorithm, which can tolerate big or small code gaps across the similar candidates of a vulnerability.
The field of temporal and contemporaneous aggregations of independent stationary stochastic processes is an
important and very active research area in empirical and theoretical statistics, as well as in other areas.
The scheme of contemporaneous (also called cross-sectional) aggregation of random-coefficient autoregressive
processes of order 1 was first proposed by Robinson \cite{Rob} and Granger \cite{Gra} in order to obtain the
long memory phenomena in aggregated time series.
For surveys on papers dealing with the aggregation of different kinds of stochastic processes, see, e.g.,
Pilipauskait{\.e} and Surgailis \cite{PilSur}, Jirak \cite[page 512]{Jir} or the arXiv version of Barczy et al.\
\cite{BarNedPap}.
In this paper we study the limit behaviour of temporal (time) and contemporaneous (space) aggregations of
independent copies of a strictly stationary multitype Galton--Watson branching process with immigration
in the so-called iterated and simultaneous cases, respectively.
According to our knowledge, the aggregation of general multitype Galton--Watson branching processes with
immigration has not been considered in the literature so far.
To motivate the fact that the aggregation of branching processes could be an important topic, now we present an
interesting and relevant example, where the phenomena of aggregation of this kind of processes may come into play.
A usual Integer-valued AutoRegressive (INAR) process of order 1, \ $(X_k)_{k\geq 0}$, \ can be used to model
migration, which is quite a big issue nowadays all over the world.
More precisely, given a camp, for all \ $k \geq 0$, \ the random variable \ $X_k$ \ can be interpreted as the
number of migrants to be present in the camp at time \ $k$, \ and every migrant will stay in the camp with
probability \ $\alpha \in (0, 1)$ \ independently of each other (i.e., with probability \ $1 - \alpha$ \ each
migrant leaves the camp) and at any time \ $k \geq 1$ \ new migrants may come to the camp.
Given several camps in a country, we may suppose that the corresponding INAR processes of order 1 share the same
parameter \ $\alpha$ \ and they are independent.
So, the temporal and contemporaneous aggregations of these INAR processes of order 1 is the total usage of the camps
in terms of the number of migrants in the given country in a given time period, and this quantity may be worth
studying.
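For illustration only, the following Python sketch simulates such camps and their time- and space-aggregated usage; the retention probability, the Poisson immigration rate and the sample sizes below are hypothetical choices, not tied to any result of the paper:
\begin{verbatim}
# Sketch: N independent INAR(1) "camps", each with binomial thinning
# (retention probability alpha) and Poisson immigration, and their
# temporal and contemporaneous aggregation.
import numpy as np

rng = np.random.default_rng(0)
alpha, lam = 0.6, 5.0        # hypothetical retention prob. and immigration mean
N, n = 100, 500              # number of camps and of time steps

X = rng.poisson(lam / (1 - alpha), size=N)   # start near the stationary mean
total_usage = 0
for _ in range(n):
    X = rng.binomial(X, alpha) + rng.poisson(lam, size=N)
    total_usage += X.sum()    # time- and space-aggregated number of migrants

print(total_usage)
\end{verbatim}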
The present paper is organized as follows.
In Section \ref{BRANCHING} we formulate our main results, namely the iterated and simultaneous limit behaviour of
time- and space-aggregated independent stationary $p$-type Galton--Watson branching processes with immigration is
described (where \ $p \geq 1$), \ see Theorems \ref{double_aggregation} and \ref{simulataneous_aggregation}.
The limit distributions in these limit theorems coincide, namely, it is a \ $p$-dimensional zero mean Brownian
motion with a covariance function depending on the expectations and covariances of the offspring and immigration
distributions.
In the course of the proofs of our results, in Lemma 2.3, we prove that for a subcritical, positively regular
multitype Galton--Watson branching process with nontrivial immigration, its unique stationary distribution admits
finite \ $\alpha^\mathrm{th}$ \ moments provided that the branching and immigration distributions have finite
\ $\alpha^\mathrm{th}$ \ moments, where \ $\alpha \in \{1, 2, 3\}$.
\ In case of \ $\alpha \in \{1, 2\}$, \ Quine \cite{Quine} contains this result; however, in case of \ $\alpha = 3$,
\ we have not found any precise proof for it in the literature (it seems to be folklore), so we decided to
write down a detailed proof.
As a by-product, we obtain an explicit formula for the third moment in question.
Section \ref{SPECIAL} is devoted to the special case of generalized INAR processes, especially to single-type
Galton--Watson branching processes with immigration.
All of the proofs can be found in Section \ref{Proofs}.
\section{Aggregation of multitype Galton--Watson branching processes with immigration}
\label{BRANCHING}
Let \ $\ZZ_+$, \ $\NN$, \ $\RR$, \ $\RR_+$, \ and \ $\CC$ \ denote the set of non-negative integers, positive
integers, real numbers, non-negative real numbers, and complex numbers, respectively.
For all \ $d \in \NN$, \ the \ $d \times d$ \ identity matrix is denoted by \ $\bI_d$.
\ The standard basis in \ $\RR^d$ \ is denoted by \ $\{\be_1, \ldots, \be_d\}$.
\ For \ $\bv \in \RR^d$, \ the Euclidean norm is denoted by \ $\|\bv\|$, \ and for \ $\bA \in \RR^{d\times d}$,
\ the induced matrix norm is denoted by \ $\|\bA\|$ \ as well (with a little abuse of notation).
All the random variables will be defined on a probability space \ $(\Omega, \cF, \PP)$.
Let \ $(\bX_k = [X_{k,1}, \dots, X_{k,p}]^\top)_{k\in \ZZ_+}$ \ be a \ $p$-type Galton--Watson branching process
with immigration.
For each \ $k, \ell \in \ZZ_+$ \ and \ $i, j \in \{1, \ldots, p\}$, \ the number of \ $j$-type individuals in the
\ $k^\mathrm{th}$ \ generation will be denoted by \ $X_{k,j}$, \ the number of \ $j$-type offsprings produced by
the \ $\ell^\mathrm{th}$ \ individual belonging to type \ $i$ \ of the \ $(k-1)^\mathrm{th}$ \ generation will be
denoted by \ $\xi^{(i, j)}_{k,\ell}$, \ and the number of immigrants of type \ $i$ \ in the \ $k^\mathrm{th}$
\ generation will be denoted by \ $\vare^{(i)}_k$.
\ Then we have
\begin{equation}\label{GWI}
\bX_k = \sum_{\ell=1}^{X_{k-1,1}}
\begin{bmatrix}
\xi^{(1,1)}_{k,\ell} \\
\vdots \\
\xi^{(1,p)}_{k,\ell}
\end{bmatrix}
+ \dots
+ \sum_{\ell=1}^{X_{k-1,p}}
\begin{bmatrix}
\xi^{(p,1)}_{k,\ell} \\
\vdots \\
\xi^{(p,p)}_{k,\ell}
\end{bmatrix}
+ \begin{bmatrix}
\vare^{(1)}_{k} \\
\vdots \\
\vare^{(p)}_{k}
\end{bmatrix} \\
=: \sum_{i=1}^p \sum_{\ell=1}^{X_{k-1,i}} \bxi^{(i)}_{k,\ell} + \bvare_k
\end{equation}
for every \ $k \in \NN$, \ where we define \ $\sum_{\ell=1}^0 := \bzero$.
\ Here \ $\bigl\{\bX_0, \bxi^{(i)}_{k,\ell}, \bvare_k : k, \ell \in \NN, i \in \{1,\ldots,p\}\bigr\}$ \ are
supposed to be independent \ $\ZZ_+^p$-valued random vectors.
Note that we do not assume independence among the components of these vectors.
Moreover, for all \ $i \in \{1, \ldots, p\}$, \ $\{\bxi^{(i)}, \bxi^{(i)}_{k,\ell} : k, \ell \in \NN\}$ \ and
\ $\{\bvare, \bvare_k : k \in \NN\}$ \ are supposed to consist of identically distributed random vectors,
respectively.
Let us introduce the notations \ $\bm_\bvare := \EE(\bvare) \in \RR_+^p$,
\ $\bM_\bxi := \EE\bigl(\bigl[\bxi^{(1)}, \dots, \bxi^{(p)}\bigr]\bigr) \in \RR_+^{p\times p}$ \ and
\[
\bv_{(i,j)}
:= \bigl[\cov(\xi^{(1,i)}, \xi^{(1,j)}), \dots, \cov(\xi^{(p,i)}, \xi^{(p,j)}),
\cov(\vare^{(i)}, \vare^{(j)})\bigr]^\top \in \RR^{(p+1)\times1}
\]
for \ $i, j \in \{1, \dots, p\}$, \ provided that the expectations and covariances in question are finite.
Let \ $\varrho(\bM_\bxi)$ \ denote the spectral radius of \ $\bM_\bxi $, \ i.e., the maximum of the modulus of the
eigenvalues of \ $\bM_\bxi $.
\ The process \ $(\bX_k)_{k\in\ZZ_+}$ \ is called subcritical, critical or supercritical if \ $\varrho(\bM_\bxi)$
\ is smaller than \ $1$, \ equal to \ $1$ \ or larger than \ $1$, \ respectively.
The matrix \ $\bM_\bxi$ \ is called primitive if there is a positive integer \ $n \in \NN$ \ such that all the
entries of \ $\bM_\bxi^n$ \ are positive.
The process \ $(\bX_k)_{k\in\ZZ_+}$ \ is called positively regular if \ $\bM_\bxi$ \ is primitive.
In what follows, we suppose that
\begin{equation}\label{assumption}
\begin{aligned}
&\EE(\bxi^{(i)}) \in \RR_+^p , \quad i \in \{1, \ldots, p\}, \qquad
\bm_{\bvare} \in \RR_+^p \setminus \{\bzero\} , \\
&\hspace*{18mm}\rho(\bM_\bxi ) < 1 , \qquad \text{$\bM_\bxi$ \ is primitive.}
\end{aligned}
\end{equation}
For further application, we define the matrix
\begin{align}\label{V_matrix}
\bV := (V_{i,j})_{i,j=1}^p
:= \left(\bv_{(i,j)}^\top
\begin{bmatrix}
(\bI_p - \bM_\bxi)^{-1} \bm_\bvare \\
1
\end{bmatrix}\right)_{i,j=1}^p \in \RR^{p \times p} ,
\end{align}
provided that the covariances in question are finite.
\begin{Rem}\label{Minv}
Note that the matrix \ $(\bI_p-\bM_\bxi)^{-1}$, \ which appears in \eqref{V_matrix} and throughout the paper,
exists.
Indeed, \ $\lambda \in \CC$ \ is an eigenvalue of \ $\bI_p - \bM_\bxi$ \ if and only if \ $1 - \lambda$ \ is that
of \ $\bM_\bxi $.
\ Therefore, since \ $\rho(\bM_\bxi) < 1$, \ all eigenvalues of \ $\bI_p - \bM_\bxi $ \ are non-zero.
This means that \ $\det(\bI_p - \bM_\bxi) \neq 0$, \ so \ $(\bI_p - \bM_\bxi)^{-1}$ \ does exist.
One could also refer to Corollary 5.6.16 and Lemma 5.6.10 in Horn and Johnson \cite{HornJohnson}.
\proofend
\end{Rem}
\begin{Rem}
Note that \ $\bV$ \ is symmetric and positive semidefinite, since \ $\bv_{(i,j)} = \bv_{(j,i)}$,
\ $i, j \in \{1,\ldots,p\}$, \ and for all \ $\bx\in\RR^p$,
\begin{align*}
\bx^\top \bV \bx
= \sum_{i=1}^p \sum_{j=1}^p V_{i,j} x_i x_j
= \left(\sum_{i=1}^p \sum_{j=1}^p x_i x_j \bv_{(i,j)}^\top \right)
\begin{bmatrix}
(\bI_p - \bM_\bxi)^{-1} \bm_\bvare \\
1
\end{bmatrix} ,
\end{align*}
where
\begin{align*}
\sum_{i=1}^p \sum_{j=1}^p x_i x_j \bv_{(i,j)}^\top
= \bigl[\bx^\top \cov(\bxi^{(1)}, \bxi^{(1)}) \bx , \ldots, \bx^\top \cov(\bxi^{(p)}, \bxi^{(p)}) \bx ,
\bx^\top \cov(\bvare, \bvare) \bx \bigr] .
\end{align*}
Here \ $\bx^\top \cov(\bxi^{(i)}, \bxi^{(i)}) \bx \geq 0$, \ $i \in \{1, \ldots, p\}$,
\ $\bx^\top \cov(\bvare, \bvare) \bx \geq 0$, \ and \ $(\bI_p - \bM_\bxi)^{-1} \bm_\bvare \in \RR_+^p$ \ due to
the fact that \ $(\bI_p - \bM_\bxi)^{-1} \bm_\bvare$ \ is nothing else but the expectation vector of the unique
stationary distribution of \ $(\bX_k)_{k\in\ZZ_+}$, \ see the discussion below and formula \eqref{moment_1_pi}.
\proofend
\end{Rem}
Under \eqref{assumption}, by the Theorem in Quine \cite{Quine}, there is a unique stationary distribution \ $\pi$
\ for \ $(\bX_k)_{k\in\ZZ_+}$.
\ Indeed, under \eqref{assumption}, \ $\bM_\bxi$ \ is irreducible following from the primitivity of \ $\bM_\bxi$,
\ see Definition 8.5.0 and Theorem 8.5.2 in Horn and Johnson \cite{HornJohnson}.
For the definition of irreducibility, see Horn and Johnson \cite[Definitions 6.2.21 and 6.2.22]{HornJohnson}.
Further, \ $\bM_\bxi$ \ is aperiodic, since this is equivalent to the primitivity of \ $\bM_\bxi$, \ see Kesten and Stigum
\cite[page 314]{KesSti3} and Kesten and Stigum \cite[Section 3]{KesSti2}.
For the definition of aperiodicity (also called acyclicity), see, e.g., the Introduction of Danka and Pap
\cite{DanPap}.
Finally, since \ $\bm_\bvare \in \RR_+^p \setminus \{\bzero\}$, \ the probability generator function of
\ $\bvare$ \ at \ $\bzero$ \ is less than \ $1$, \ and
\[
\EE\Biggl(\log\Biggl(\sum_{i=1}^p \vare^{(i)}\Biggr) \bbone_{\{\bvare\ne\bzero\}}\Biggr)
\leq \EE\Biggl(\sum_{i=1}^p \vare^{(i)} \bbone_{\{\bvare\ne\bzero\}}\Biggr)
\leq \EE\left(\sum_{i=1}^p \vare^{(i)}\right)
= \sum_{i=1}^p \EE(\vare^{(i)}) < \infty ,
\]
so one can apply the Theorem in Quine \cite{Quine}.
For each \ $\alpha \in \NN$, \ we say that the \ $\alpha^\mathrm{th}$ \ moment of a random vector is finite if all
of its mixed moments of order \ $\alpha$ \ are finite.
\begin{Lem}\label{stationary_moments}
Let us assume \eqref{assumption}.
For each \ $\alpha \in \{1, 2, 3\}$, \ the unique stationary distribution \ $\pi$ \ has a finite
\ $\alpha^\mathrm{th}$ \ moment, provided that the \ $\alpha^\mathrm{th}$ \ moments of \ $\bxi^{(i)}$,
\ $i \in \{1, \ldots, p\}$, \ and \ $\bvare$ \ are finite.
\end{Lem}
In what follows, we suppose \eqref{assumption} and that the distribution of \ $\bX_0$ \ is the unique
stationary distribution \ $\pi$, \ hence the Markov chain \ $(\bX_k)_{k\in\ZZ_+}$ \ is strictly stationary.
Recall that, by (2.1) in Quine and Durham \cite{QuineDurham}, for any measurable function
\ $f : \RR^p \to \RR$ \ satisfying \ $\EE(|f(\bX_0)|) < \infty$, \ we have
\begin{align}\label{help_ergodic}
\frac{1}{n} \sum_{k=1}^n f(\bX_k) \as \EE( f(\bX_0)) \qquad \text{as \ $n\to\infty$.}
\end{align}
First we consider a simple aggregation procedure.
For each \ $N \in \NN$, \ consider the stochastic process \ $\bS^{(N)} = (\bS^{(N)}_k)_{k\in\ZZ_+}$ \ given by
\[
\bS^{(N)}_k := \sum_{j=1}^N (\bX^{(j)}_k - \EE(\bX^{(j)}_k)) , \qquad k \in \ZZ_+ ,
\]
where \ $\bX^{(j)} = (\bX^{(j)}_k)_{k\in\ZZ_+}$, \ $j \in \NN$, \ is a sequence of independent copies of the
strictly stationary $p$-type Galton--Watson process \ $(\bX_k)_{k\in\ZZ_+}$ \ with immigration.
Here we point out that we consider so-called idiosyncratic immigrations, i.e., the immigrations belonging to
\ $\bX^{(j)}$, \ $j\in\NN$, \ are independent.
We will use \ $\distrf$ \ or \ $\cD_\ff\text{-}\hspace*{-1mm}\lim$ \ for weak convergence of finite dimensional
distributions, and \ $\distr$ \ for weak convergence in \ $D(\RR_+, \RR^p)$ \ of stochastic processes with
c\`adl\`ag sample paths, where \ $D(\RR_+, \RR^p)$ \ denotes the space of $\RR^p$-valued c\`adl\`ag functions
defined on \ $\RR_+$.
\begin{Pro}\label{simple_aggregation}
If all entries of the vectors \ $\bxi^{(i)}$, \ $i \in \{1, \ldots, p\}$, \ and \ $\bvare$ \ have finite second
moments, then
\[
N^{-\frac{1}{2}} \bS^{(N)} \distrf \bcX \qquad \text{as \ $N \to \infty$,}
\]
where \ $\bcX = (\bcX_k)_{k\in\ZZ_+}$ \ is a stationary \ $p$-dimensional zero mean Gaussian process with
covariances
\begin{align}\label{cov}
\EE(\bcX_0 \bcX_k^\top)
= \cov(\bX_0, \bX_k)
= \var(\bX_0) (\bM_\bxi^\top)^k, \qquad k \in \ZZ_+ ,
\end{align}
where
\begin{equation}\label{var0}
\var(\bX_0) = \sum_{k=0}^\infty \bM_\bxi^k \bV (\bM_\bxi^{\top})^k .
\end{equation}
\end{Pro}
We note that using formula \eqref{moment_2_pi} presented later on, one could give an explicit formula for
\ $\var(\bX_0)$ \ (not containing an infinite series).
\begin{Pro}\label{simple_aggregation2}
If all entries of the vectors \ $\bxi^{(i)}$, \ $i \in \{1, \ldots, p\}$, \ and \ $\bvare$ \ have finite third
moments, then
\[
\biggl(n^{-\frac{1}{2}} \sum_{k=1}^\nt \bS^{(1)}_k\biggr)_{t\in\RR_+}
= \biggl(n^{-\frac{1}{2}} \sum_{k=1}^\nt (\bX_k^{(1)} - \EE(\bX_k^{(1)}))\biggr)_{t\in\RR_+}
\distr (\bI_p - \bM_\bxi)^{-1} \bB
\qquad \text{as \ $n \to \infty$,}
\]
where \ $\bB = (\bB_t)_{t\in\RR_+}$ \ is a \ $p$-dimensional zero mean Brownian motion satisfying
\ $\var(\bB_1) = \bV$.
\end{Pro}
Note that Propositions \ref{simple_aggregation} and \ref{simple_aggregation2} are about the scalings of the
space-aggregated process \ $\bS^{(N)}$ \ and the time-aggregated process
\ $\bigl(\sum_{k=1}^\nt \bS^{(1)}_k\bigr)_{t\in\RR_+}$, \ respectively.
For each \ $N, n \in \NN$, \ consider the stochastic process \ $\bS^{(N,n)} = (\bS^{(N,n)}_t)_{t\in\RR_+}$ \ given
by
\[
\bS^{(N,n)}_t := \sum_{j=1}^N \sum_{k=1}^\nt (\bX^{(j)}_k - \EE(\bX^{(j)}_k)) , \qquad t \in \RR_+ .
\]
\begin{Thm}\label{double_aggregation}
If all entries of the vectors \ $\bxi^{(i)}$, \ $i \in \{1,\ldots,p\}$, \ and \ $\bvare$ \ have finite second
moments, then
\begin{align}\label{help1_double_aggregation}
\cD_\ff\text{-}\hspace*{-1mm}\lim_{n\to\infty} \,
\cD_\ff\text{-}\hspace*{-1mm}\lim_{N\to\infty} \,
(nN)^{-\frac{1}{2}} \bS^{(N,n)}
= (\bI_p - \bM_\bxi)^{-1} \bB ,
\end{align}
where \ $\bB = (\bB_t)_{t\in\RR_+}$ \ is a \ $p$-dimensional zero mean Brownian motion satisfying
\ $\var(\bB_1) = \bV$.
\ If all entries of the vectors \ $\bxi^{(i)}$, \ $i \in \{1, \ldots, p\}$, \ and \ $\bvare$ \ have finite
third moments, then
\begin{align}\label{help2_double_aggregation}
\cD_\ff\text{-}\hspace*{-1mm}\lim_{N\to\infty} \,
\cD_\ff\text{-}\hspace*{-1mm}\lim_{n\to\infty} \,
(nN)^{-\frac{1}{2}} \bS^{(N,n)}
= (\bI_p - \bM_\bxi)^{-1} \bB ,
\end{align}
where \ $\bB = (\bB_t)_{t\in\RR_+}$ \ is a \ $p$-dimensional zero mean Brownian motion satisfying
\ $\var(\bB_1) = \bV$.
\end{Thm}
\begin{Thm}\label{simulataneous_aggregation}
If all entries of the vectors \ $\bxi^{(i)}$, \ $i \in \{1,\ldots,p\}$, \ and \ $\bvare$ \ have finite third
moments, then
\begin{align}\label{help3_simulataneous_aggregation}
(nN)^{-\frac{1}{2}} \bS^{(N,n)} \distr (\bI_p - \bM_\bxi)^{-1} \bB ,
\end{align}
if both \ $n$ \ and \ $N$ \ converge to infinity (at any rate), where \ $\bB = (\bB_t)_{t\in\RR_+}$ \ is a
\ $p$-dimensional zero mean Brownian motion satisfying \ $\var(\bB_1) = \bV$.
\end{Thm}
A key ingredient of the proofs is the fact that \ $(\bX_k - \EE(\bX_k))_{k\in\ZZ_+}$ \ can be rewritten as a stable
first order vector autoregressive process with coefficient matrix \ $\bM_\bxi$ \ and with heteroscedastic innovations,
see \eqref{help5}.
\section{A special case: aggregation of GINAR processes}\label{SPECIAL}
We devote this section to the analysis of aggregation of Generalized Integer-Valued Autoregressive processes of
order \ $p \in \NN$ \ (GINAR($p$) processes), which are special cases of \ $p$-type Galton--Watson branching
processes with immigration introduced in \eqref{GWI}.
For historical fidelity, we note that it was Latour \cite{Latour} who introduced GINAR($p$) processes as
generalizations of INAR($p$) processes.
This class of processes became popular in modelling integer-valued time series data such as the daily number of
claims at an insurance company.
In fact, a GINAR(1) process is a (general) single type Galton--Watson branching processes with immigration.
Let \ $(Z_k)_{k\geq -p+1}$ \ be a GINAR($p$) process.
Namely, for each \ $k, \ell \in \ZZ_+$ \ and \ $i \in \{1,\dots,p\}$, \ the number of individuals in the
\ $k^\mathrm{th}$ \ generation will be denoted by \ $Z_k$, \ the number of offsprings produced by the
\ $\ell^\mathrm{th}$ \ individual belonging to the \ $(k-i)^\mathrm{th}$ \ generation will be denoted by
\ $\xi^{(i,1)}_{k,\ell}$, \ and the number of immigrants in the \ $k^\mathrm{th}$ \ generation will be denoted by
\ $\vare^{(1)}_k$.
\ Here the \ $1$-s in the superscripts of \ $\xi^{(i,1)}_{k,\ell}$ \ and \ $\vare^{(1)}_k$ \ are displayed in order
to have a better comparison with \eqref{GWI}.
Then we have
\begin{equation*}
Z_k
= \sum_{\ell=1}^{Z_{k-1}} \xi^{(1,1)}_{k,\ell} + \dots + \sum_{\ell=1}^{Z_{k-p}} \xi^{(p,1)}_{k,\ell}
+ \vare_k^{(1)} , \qquad k \in \NN .
\end{equation*}
Here
\ $\bigl\{Z_0, Z_{-1},\ldots, Z_{-p+1}, \xi^{(i,1)}_{k,\ell}, \vare_k^{(1)}
: k, \ell \in \NN , i \in \{1, \ldots, p\}\bigr\}$
\ are supposed to be independent nonnegative integer-valued random variables.
Moreover, for all \ $i \in \{1, \ldots, p\}$, \ $\{\xi^{(i,1)}, \xi^{(i,1)}_{k,\ell} : k, \ell \in \NN \}$ \ and
\ $\{\vare^{(1)}, \vare_k^{(1)} : k \in \NN \}$ \ are supposed to consist of identically distributed random
variables, respectively.
A GINAR($p$) process can be embedded in a \ $p$-type Galton--Watson branching process with immigration
\ $(\bX_k = [Z_k, \dots, Z_{k-p+1}]^\top)_{k\in\ZZ_+}$ \ with the corresponding \ $p$-dimensional random vectors
\[
\bxi^{(1)}_{k,\ell} = \begin{bmatrix}
\xi^{(1,1)}_{k,\ell} \\
1 \\
0 \\
\vdots \\
0
\end{bmatrix} ,
\quad \cdots, \quad
\bxi^{(p-1)}_{k,\ell} = \begin{bmatrix}
\xi^{(p-1,1)}_{k,\ell} \\
0 \\
\vdots \\
0 \\
1
\end{bmatrix} ,
\quad
\bxi^{(p)}_{k,\ell} = \begin{bmatrix}
\xi^{(p,1)}_{k,\ell} \\
0 \\
0 \\
\vdots \\
0
\end{bmatrix} ,
\quad
\bvare_k = \begin{bmatrix}
\vare^{(1)}_k \\
0 \\
0 \\
\vdots \\
0
\end{bmatrix}
\]
for any \ $k, \ell \in \NN$.
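Note that the \ $i^\mathrm{th}$ \ column of \ $\bM_\bxi$ \ is \ $\EE(\bxi^{(i)})$, \ hence, under this embedding,
the offspring mean matrix takes the companion form
\[
 \bM_\bxi
 = \begin{bmatrix}
 \EE(\xi^{(1,1)}) & \EE(\xi^{(2,1)}) & \cdots & \EE(\xi^{(p-1,1)}) & \EE(\xi^{(p,1)}) \\
 1 & 0 & \cdots & 0 & 0 \\
 0 & 1 & \cdots & 0 & 0 \\
 \vdots & & \ddots & & \vdots \\
 0 & 0 & \cdots & 1 & 0
 \end{bmatrix} ,
\]
which is reflected in the form of its characteristic polynomial in the next remark.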
In what follows, we reformulate the classification of GINAR($p$) processes in terms of the expectations of
the offspring distributions.
\begin{Rem}
In case of a GINAR($p$) process, one can show that \ $\varphi$, \ the characteristic polynomial of the matrix
\ $\bM_\bxi $, \ has the form
\[
\varphi(\lambda)
:= \det(\lambda \bI_p - \bM_\bxi)
= \lambda^p - \EE(\xi^{(1,1)}) \lambda^{p-1} - \cdots - \EE(\xi^{(p-1,1)}) \lambda - \EE(\xi^{(p,1)}) ,
\qquad \lambda \in \CC .
\]
Recall that \ $\varrho(\bM_\bxi)$ \ denotes the spectral radius of \ $\bM_\bxi $, \ i.e., the maximum of the
modulus of the eigenvalues of \ $\bM_\bxi $.
\ If \ $\EE(\xi^{(p,1)}) > 0$, \ then, by the proof of Proposition 2.2 in Barczy et al. \cite{BarIspPap2}, the
characteristic polynomial \ $\varphi$ \ has just one positive root, \ $\varrho(\bM_\bxi) > 0$, \ the nonnegative
matrix \ $\bM_\bxi$ \ is irreducible, \ $\varrho(\bM_\bxi)$ \ is an eigenvalue of \ $\bM_\bxi$, \ and
\ $\sum_{i=1}^p \EE(\xi^{(i,1)}) \varrho(\bM_\bxi)^{-i} = 1$.
\ Further,
\[
\varrho(\bM_\bxi ) \,
\begin{cases}
< & \\
= & \\
>
\end{cases}
1
\qquad \Longleftrightarrow \qquad
\sum_{i=1}^p \EE(\xi^{(i,1)}) \,
\begin{cases}
< & \\
= & \\
>
\end{cases}
1 . \\[-4mm]
\]
\proofend
\end{Rem}
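For illustration, consider the (purely hypothetical) case \ $p = 2$ \ with \ $\EE(\xi^{(1,1)}) = 0.5$ \ and
\ $\EE(\xi^{(2,1)}) = 0.3$. \ Then
\[
 \varphi(\lambda) = \lambda^2 - 0.5 \lambda - 0.3 , \qquad
 \varrho(\bM_\bxi) = \frac{0.5 + \sqrt{0.25 + 1.2}}{2} \approx 0.852 < 1 ,
\]
in accordance with \ $\EE(\xi^{(1,1)}) + \EE(\xi^{(2,1)}) = 0.8 < 1$, \ so the corresponding GINAR($2$) process is
subcritical.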
Next, we specialize the matrix \ $\bV$, \ defined in \eqref{V_matrix}, in case of a subcritical GINAR($p$)
process.
\begin{Rem}
In case of a GINAR($p$) process, the vectors
\[
\bv_{(i,j)}
= \bigl[\cov(\xi^{(1,i)}, \xi^{(1,j)}), \dots, \cov(\xi^{(p,i)}, \xi^{(p,j)}),
\cov(\vare^{(i)}, \vare^{(j)})\bigr]^\top
\in \RR^{(p+1)\times1}
\]
for \ $i, j \in\{1, \dots, p\}$ \ are all zero vectors except for the case \ $i = j = 1$.
\ Therefore, in case of \ $\varrho(\bM_\bxi) < 1$, \ the matrix \ $\bV$, \ defined in \eqref{V_matrix}, reduces to
\begin{equation}\label{V_GINAR}
\bV = \bv_{(1,1)}^\top
\begin{bmatrix}
(\bI_p - \bM_\bxi)^{-1} \EE(\vare^{(1)}) \be_1 \\
1
\end{bmatrix}
(\be_1 \be_1^\top) .
\end{equation}
\proofend
\end{Rem}
Finally, we specialize the limit distribution in Theorems \ref{double_aggregation} and
\ref{simulataneous_aggregation} in case of a subcritical GINAR($p$) process.
\begin{Rem}
Let us note that in case of \ $p = 1$ \ and \ $\EE(\xi^{(1,1)}) < 1$ \ (yielding that the corresponding
GINAR($1$) process is subcritical), the limit process in Theorems \ref{double_aggregation} and
\ref{simulataneous_aggregation} can be written as
\[
\frac{1}{1-\EE(\xi^{(1,1)})}
\sqrt{\frac{\EE(\vare^{(1)}) \var(\xi^{(1,1)}) + (1 - \EE(\xi^{(1,1)})) \var(\vare^{(1)})}
{1-\EE(\xi^{(1,1)})}}
W ,
\]
where \ $W = (W_t)_{t\in\RR_+}$ \ is a standard 1-dimensional Brownian motion.
Indeed, this holds, since in this special case \ $\bM_\bxi = \EE(\xi^{(1,1)})$ \ yielding that
\ $(\bI_p - \bM_\bxi)^{-1} = (1 - \EE(\xi^{(1,1)}))^{-1}$, \ and, by \eqref{V_GINAR},
\begin{align*}
\bV = \begin{bmatrix}
\cov(\xi^{(1,1)}, \xi^{(1,1)}) \\ \cov(\vare^{(1)}, \vare^{(1)})
\end{bmatrix}^\top
\begin{bmatrix}
\frac{\EE(\vare^{(1)})} {1 - \EE(\xi^{(1,1)})} \\
1 \\
\end{bmatrix}
= \frac{\var(\xi^{(1,1)})\EE(\vare^{(1)})}{1-\EE(\xi^{(1,1)})} + \var(\vare^{(1)}) .\\[-17mm]
\end{align*}
\proofend
\end{Rem}
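For a concrete illustration, consider the classical INAR(1) model based on binomial thinning, where
\ $\xi^{(1,1)}$ \ is Bernoulli distributed with parameter \ $a \in (0,1)$ \ and \ $\vare^{(1)}$ \ is Poisson
distributed with parameter \ $\lambda > 0$ \ (an illustrative special case only). Then \ $\EE(\xi^{(1,1)}) = a$,
\ $\var(\xi^{(1,1)}) = a(1-a)$, \ $\EE(\vare^{(1)}) = \var(\vare^{(1)}) = \lambda$, \ and the limit process in the
previous remark simplifies to
\[
 \frac{1}{1-a} \sqrt{\frac{\lambda a (1-a) + (1-a)\lambda}{1-a}} \, W
 = \frac{\sqrt{\lambda(1+a)}}{1-a} \, W .
\]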
\section{Proofs}
\label{Proofs}
\noindent{\bf Proof of Lemma \ref{stationary_moments}.}
Let \ $(\bZ_k)_{k\in\ZZ_+}$ \ be a \ $p$-type Galton--Watson branching process without immigration, with the same
offspring distribution as \ $(\bX_k)_{k\in\ZZ_+}$, \ and with \ $\bZ_0 \distre \bvare$.
\ Then the stationary distribution \ $\pi$ \ of \ $(\bX_k)_{k\in\ZZ_+}$ \ admits the representation
\[
\pi \distre \sum_{r=0}^\infty \bZ_r^{(r)} ,
\]
where \ $(\bZ_k^{(n)})_{k\in\ZZ_+}$, \ $n \in \ZZ_+$, \ are independent copies of \ $(\bZ_k)_{k\in\ZZ_+}$.
\ This is a consequence of formula (16) for the probability generating function of \ $\pi$ \ in Quine \cite{Quine}.
It is convenient to calculate moments of Kronecker powers of random vectors.
We will use the notation \ $\bA \otimes \bB$ \ for the Kronecker product of the matrices \ $\bA$ \ and \ $\bB$,
\ and we put \ $\bA^{\otimes2} := \bA \otimes \bA$ \ and \ $\bA^{\otimes3} := \bA \otimes \bA \otimes \bA$.
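\ For instance, for \ $\bx = [x_1, x_2]^\top \in \RR^2$ \ we have
\[
 \bx^{\otimes2} = [x_1^2, \, x_1 x_2, \, x_2 x_1, \, x_2^2]^\top \in \RR^4 ,
\]
so the coordinates of \ $\bx^{\otimes2}$ \ and \ $\bx^{\otimes3}$ \ collect all second and third order products of
the coordinates of \ $\bx$, \ respectively.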
\ For each \ $\alpha \in \{1, 2, 3\}$, \ by the monotone convergence theorem, we have
\[
\int_{\RR^p} \bx^{\otimes\alpha} \, \pi(\dd\bx)
= \EE\Biggl[\Biggl(\sum_{r=0}^\infty \bZ_r^{(r)}\Biggr)^{\otimes\alpha}\Biggr]
= \lim_{n\to\infty} \EE\Biggl[\Biggl(\sum_{r=0}^{n-1} \bZ_r^{(r)}\Biggr)^{\otimes\alpha}\Biggr] .
\]
For each \ $n \in \ZZ_+$, \ we have
\[
\sum_{r=0}^{n-1} \bZ_r^{(r)} \distre \bY_n ,
\]
where \ $(\bY_k)_{k\in\ZZ_+}$ \ is a Galton--Watson branching process with the same offspring and immigration
distributions as \ $(\bX_k)_{k\in\ZZ_+}$, \ and with \ $\bY_0 = \bzero$.
\ This can be checked comparing their probability generating functions taking into account formula (3) in Quine
\cite{Quine} as well.
Consequently, we conclude
\begin{equation}\label{reprpi}
\int_{\RR^p} \bx^{\otimes\alpha} \, \pi(\dd\bx) = \lim_{n\to\infty} \EE\bigl(\bY_n^{\otimes\alpha}\bigr) .
\end{equation}
For each \ $n \in \NN$, \ using \eqref{GWI}, we obtain
\begin{align}\label{help12}
\begin{split}
\EE(\bY_n \mid \cF_{n-1}^\bY)
&= \sum_{i=1}^p \sum_{j=1}^{Y_{n-1,i}} \EE(\bxi_{n,j}^{(i)} \mid \cF_{n-1}^\bY)
+ \EE(\bvare_n \mid \cF_{n-1}^\bY)
= \sum_{i=1}^p Y_{n-1,i} \EE(\bxi^{(i)}) + \EE(\bvare) \\
&= \sum_{i=1}^p \EE(\bxi^{(i)}) \be_i^\top \bY_{n-1} + \bm_\bvare
= \bM_\bxi \bY_{n-1} + \bm_\bvare ,
\end{split}
\end{align}
where \ $\cF_{n-1}^\bY := \sigma(\bY_0, \ldots, \bY_{n-1})$, \ $n \in \NN$, \ and
\ $Y_{n-1,i} := \be_i^\top \bY_{n-1}$, \ $i \in \{1, \ldots, p\}$.
\ Taking the expectation, we get
\begin{equation}\label{moment_1_Y}
\EE(\bY_n) = \bM_\bxi \EE(\bY_{n-1}) + \bm_\bvare , \qquad n \in \NN.
\end{equation}
Taking into account \ $\bY_0 = \bzero$, \ we obtain
\[
\EE(\bY_n)
= \sum_{k=1}^n \bM_\bxi^{n-k} \bm_\bvare
= \sum_{\ell=0}^{n-1} \bM_\bxi^\ell \bm_\bvare , \qquad n \in \NN .
\]
For each \ $n \in \NN$, \ we have \ $(\bI_p - \bM_\bxi) \sum_{\ell=0}^{n-1} \bM_\bxi^\ell = \bI_p - \bM_\bxi^n$.
\ By the condition \ $\varrho(\bM_\bxi) < 1$, \ the matrix \ $\bI_p - \bM_\bxi$ \ is invertible and
\ $\sum_{\ell=0}^\infty \bM_\bxi^\ell = (\bI_p - \bM_\bxi)^{-1}$, \ see Corollary 5.6.16 and Lemma 5.6.10 in Horn
and Johnson \cite{HornJohnson}.
Consequently, by \eqref{reprpi}, the first moment of \ $\pi$ \ is finite, and
\begin{equation}\label{moment_1_pi}
\int_{\RR^p} \bx \, \pi(\dd\bx) = (\bI_p - \bM_\bxi)^{-1} \bm_\bvare .
\end{equation}
Now we suppose that the second moments of \ $\bxi^{(i)}$, \ $i \in \{1, \ldots, p\}$, \ and \ $\bvare$ \ are
finite.
For each \ $n \in \NN$, \ using again \eqref{GWI}, we obtain
\begin{align*}
\EE(\bY_n^{\otimes2} \mid \cF_{n-1}^\bY)
&= \sum_{i=1}^p \sum_{j=1}^{Y_{n-1,i}} \sum_{i'=1}^p \sum_{j'=1}^{Y_{n-1,i'}}
\EE(\bxi_{n,j}^{(i)} \otimes \bxi_{n,j'}^{(i')} \mid \cF_{n-1}^\bY) \\
&\quad
+ \sum_{i=1}^p \sum_{j=1}^{Y_{n-1,i}}
\EE(\bxi_{n,j}^{(i)} \otimes \bvare_n + \bvare_n \otimes \bxi_{n,j}^{(i)} \mid \cF_{n-1}^\bY)
+ \EE(\bvare_n^{\otimes2} \mid \cF_{n-1}^\bY) \\
&= \sum_{i=1}^p \sum_{\underset{\SC i'\ne i}{i'=1}}^p
Y_{n-1,i} Y_{n-1,i'} \EE(\bxi^{(i)}) \otimes \EE(\bxi^{(i')})
+ \sum_{i=1}^p Y_{n-1,i} (Y_{n-1,i} -1) [\EE(\bxi^{(i)})]^{\otimes2} \\
&\quad
+ \sum_{i=1}^p Y_{n-1,i} \EE[(\bxi^{(i)})^{\otimes2}]
+ \sum_{i=1}^p Y_{n-1,i} \EE(\bxi^{(i)} \otimes \bvare + \bvare \otimes \bxi^{(i)})
+ \EE(\bvare^{\otimes2})
\end{align*}
\begin{align*}
&= \sum_{i=1}^p \sum_{i'=1}^p Y_{n-1,i} Y_{n-1,i'} \EE(\bxi^{(i)}) \otimes \EE(\bxi^{(i')})
+ \sum_{i=1}^p Y_{n-1,i} \bigl\{\EE[(\bxi^{(i)})^{\otimes2}] - [\EE(\bxi^{(i)})]^{\otimes2}\bigr\} \\
&\quad
+ \sum_{i=1}^p Y_{n-1,i} \bigl\{\EE(\bxi^{(i)}) \otimes \bm_\bvare + \bm_\bvare \otimes \EE(\bxi^{(i)})\bigr\}
+ \EE(\bvare^{\otimes2}) \\
&= (\bM_\bxi \bY_{n-1})^{\otimes2} + \bA_{2,1} \bY_{n-1} + \EE(\bvare^{\otimes2})
\end{align*}
with
\[
\bA_{2,1}
:= \sum_{i=1}^p
\bigl\{\EE[(\bxi^{(i)})^{\otimes2}] + \EE(\bxi^{(i)}) \otimes \bm_\bvare
+ \bm_\bvare \otimes \EE(\bxi^{(i)}) - [\EE(\bxi^{(i)})]^{\otimes2}\bigr\}
\be_i^\top
\in \RR^{p^2\times p} .
\]
Indeed, using the mixed-product property \ $(\bA \otimes \bB) (\bC \otimes \bD) = (\bA \bC) \otimes (\bB \bD)$
\ for matrices of such size that one can form the matrix products \ $\bA \bC$ \ and \ $\bB \bD$, \ we have
\[
Y_{n-1,i} Y_{n-1,i'}
= Y_{n-1,i} \otimes Y_{n-1,i'}
= (\be_i^\top \bY_{n-1}) \otimes (\be_{i'}^\top \bY_{n-1})
= (\be_i^\top \otimes \be_{i'}^\top) \bY_{n-1}^{\otimes2} ,
\]
hence
\begin{align*}
&\sum_{i=1}^p \sum_{i'=1}^p Y_{n-1,i} Y_{n-1,i'} \EE(\bxi^{(i)}) \otimes \EE(\bxi^{(i')})
= \sum_{i=1}^p \sum_{i'=1}^p
\bigl[\EE(\bxi^{(i)}) \otimes \EE(\bxi^{(i')})\bigr] (\be_i^\top \otimes \be_{i'}^\top) \bY_{n-1}^{\otimes2} \\
&= \sum_{i=1}^p \sum_{i'=1}^p
\bigl[(\EE(\bxi^{(i)}) \be_i^\top) \otimes (\EE(\bxi^{(i')}) \be_{i'}^\top)\bigr] \bY_{n-1}^{\otimes2}
= \Biggl(\sum_{i=1}^p \EE(\bxi^{(i)}) \be_i^\top\Biggr)^{\otimes2} \bY_{n-1}^{\otimes2} \\
&= (\bM_\bxi)^{\otimes2} \bY_{n-1}^{\otimes2}
= (\bM_\bxi \bY_{n-1})^{\otimes2} .
\end{align*}
Consequently, we obtain
\[
\EE(\bY_n^{\otimes2} \mid \cF_{n-1}^\bY)
= \bM_\bxi^{\otimes2} \bY_{n-1}^{\otimes2} + \bA_{2,1} \bY_{n-1} + \EE(\bvare^{\otimes2}) ,
\qquad n \in \NN .
\]
Taking the expectation, we get
\begin{equation}\label{moment_2_Y}
\EE(\bY_n^{\otimes2})
= \bM_\bxi^{\otimes2} \EE(\bY_{n-1}^{\otimes2}) + \bA_{2,1} \EE(\bY_{n-1}) + \EE(\bvare^{\otimes2}) ,
\qquad n \in \NN .
\end{equation}
Using also \eqref{moment_1_Y}, we obtain
\[
\begin{bmatrix}
\EE(\bY_n) \\
\EE(\bY_n^{\otimes2})
\end{bmatrix}
= \bA_2
\begin{bmatrix}
\EE(\bY_{n-1}) \\
\EE(\bY_{n-1}^{\otimes2})
\end{bmatrix}
+ \begin{bmatrix}
\bm_\bvare \\
\EE(\bvare^{\otimes2})
\end{bmatrix} , \qquad n \in \NN ,
\]
with
\[
\bA_2 := \begin{bmatrix}
\bM_\bxi & \bzero \\
\bA_{2,1} & \bM_\bxi^{\otimes2}
\end{bmatrix}
\in \RR^{(p+p^2)\times(p+p^2)} .
\]
Taking into account \ $\bY_0 = \bzero$, \ we obtain
\[
\begin{bmatrix}
\EE(\bY_n) \\
\EE(\bY_n^{\otimes2})
\end{bmatrix}
= \sum_{k=1}^n
\bA_2^{n-k}
\begin{bmatrix}
\bm_\bvare \\
\EE(\bvare^{\otimes2})
\end{bmatrix}
= \sum_{\ell=0}^{n-1}
\bA_2^\ell
\begin{bmatrix}
\bm_\bvare \\
\EE(\bvare^{\otimes2})
\end{bmatrix} , \qquad n \in \NN .
\]
We have \ $\varrho(\bA_2) = \max\{\varrho(\bM_\bxi), \varrho(\bM_\bxi^{\otimes2})\}$, \ where
\ $\varrho(\bM_\bxi^{\otimes2}) = [\varrho(\bM_\bxi)]^2$.
\ Taking into account \ $\varrho(\bM_\bxi) < 1$, \ we conclude \ $\varrho(\bA_2) = \varrho(\bM_\bxi) < 1$, \ and,
by \eqref{reprpi}, the second moment of \ $\pi$ \ is finite,
and
\begin{equation}\label{moment_2_pi}
\begin{bmatrix}
\int_{\RR^p} \bx \, \pi(\dd\bx) \\
\int_{\RR^p} \bx^{\otimes2} \, \pi(\dd\bx) \\
\end{bmatrix}
= (\bI_{p+p^2} - \bA_2)^{-1}
\begin{bmatrix}
\bm_\bvare \\
\EE(\bvare^{\otimes2})
\end{bmatrix} .
\end{equation}
Since
\[
(\bI_{p+p^2} - \bA_2)^{-1}
= \begin{bmatrix}
(\bI_p - \bM_\bxi)^{-1} & \bzero \\
(\bI_{p^2}-\bM_\bxi^{\otimes2})^{-1} \bA_{2,1} (\bI_p - \bM_\bxi)^{-1}
& (\bI_{p^2}-\bM_\bxi^{\otimes2})^{-1} \\
\end{bmatrix},
\]
we have
\begin{align*}
\int_{\RR^p} \bx^{\otimes2} \, \pi(\dd\bx)
= (\bI_{p^2}-\bM_\bxi^{\otimes2})^{-1} \bA_{2,1} (\bI_p - \bM_\bxi)^{-1} \bm_\bvare
+ (\bI_{p^2}-\bM_\bxi^{\otimes2})^{-1}
\EE(\bvare^{\otimes2}).
\end{align*}
Now we suppose that the third moments of \ $\bxi^{(i)}$, \ $i \in \{1, \ldots, p\}$, \ and \ $\bvare$ \ are finite.
For each \ $n \in \NN$, \ using again \eqref{GWI}, we obtain
\[
\EE(\bY_n^{\otimes3} \mid \cF_{n-1}^\bY)
= S_{n,1} + S_{n,2} + S_{n,3} + \EE(\bvare_n^{\otimes3} \mid \cF_{n-1}^\bY)
\]
with
\begin{align*}
S_{n,1}
&:= \sum_{i=1}^p \sum_{j=1}^{Y_{n-1,i}} \sum_{i'=1}^p \sum_{j'=1}^{Y_{n-1,i'}} \sum_{i''=1}^p
\sum_{j''=1}^{Y_{n-1,i''}}
\EE(\bxi_{n,j}^{(i)} \otimes \bxi_{n,j'}^{(i')} \otimes \bxi_{n,j''}^{(i'')} \mid \cF_{n-1}^\bY) , \\
S_{n,2}
&:= \sum_{i=1}^p \sum_{j=1}^{Y_{n-1,i}} \sum_{i'=1}^p \sum_{j'=1}^{Y_{n-1,i'}}
\EE(\bxi_{n,j}^{(i)} \otimes \bxi_{n,j'}^{(i')} \otimes \bvare_n
+ \bxi_{n,j}^{(i)} \otimes \bvare_n \otimes \bxi_{n,j'}^{(i')}
+ \bvare_n \otimes \bxi_{n,j}^{(i)} \otimes \bxi_{n,j'}^{(i')} \mid \cF_{n-1}^\bY) , \\
S_{n,3}
&:= \sum_{i=1}^p \sum_{j=1}^{Y_{n-1,i}}
\EE(\bxi_{n,j}^{(i)} \otimes \bvare_n^{\otimes2}
+ \bvare_n \otimes \bxi_{n,j}^{(i)} \otimes \bvare_n
+ \bvare_n^{\otimes2} \otimes \bxi_{n,j}^{(i)} \mid \cF_{n-1}^\bY) .
\end{align*}
We have
\begin{align*}
&S_{n,1}
= \sum_{i=1}^p \sum_{\underset{\SC i'\ne i}{i'=1}}^p \sum_{\underset{\SC i''\notin\{i,i'\}}{i''=1}}^p
Y_{n-1,i} Y_{n-1,i'} Y_{n-1,i''} \EE(\bxi^{(i)}) \otimes \EE(\bxi^{(i')}) \otimes \EE(\bxi^{(i'')}) \\
&+ \sum_{i=1}^p \sum_{\underset{\SC i'\ne i}{i'=1}}^p
Y_{n-1,i} (Y_{n-1,i} -1) Y_{n-1,i'} \\
&\phantom{+ \sum_{i=1}^p \sum_{\underset{\SC i'\ne i}{i'=1}}^p}
\times
\bigl\{[\EE(\bxi^{(i)})]^{\otimes2} \otimes \EE(\bxi^{(i')})
+ \EE(\bxi^{(i)}) \otimes \EE(\bxi^{(i')}) \otimes \EE(\bxi^{(i)})
+ \EE(\bxi^{(i')}) \otimes [\EE(\bxi^{(i)})]^{\otimes2}\bigr\} \\
&+ \sum_{i=1}^p \sum_{\underset{\SC i'\ne i}{i'=1}}^p
Y_{n-1,i} Y_{n-1,i'}
\bigl\{\EE[(\bxi^{(i)})^{\otimes2}] \otimes \EE(\bxi^{(i')})
+ \EE(\bxi^{(i)} \otimes \bxi^{(i')} \otimes \bxi^{(i)})
+ \EE(\bxi^{(i')}) \otimes \EE[(\bxi^{(i)})^{\otimes2}]\bigr\}
\end{align*}
\begin{align*}
&+ \sum_{i=1}^p Y_{n-1,i} (Y_{n-1,i} -1) (Y_{n-1,i} -2) [\EE(\bxi^{(i)})]^{\otimes3}
+ \sum_{i=1}^p Y_{n-1,i} \EE[(\bxi^{(i)})^{\otimes3}] \\
&+ \sum_{i=1}^p
Y_{n-1,i} (Y_{n-1,i} -1)
\bigl\{\EE[(\bxi^{(i)})^{\otimes2}] \otimes \EE(\bxi^{(i)})
+ \EE(\bxi_{1,1}^{(i)} \otimes \bxi_{1,2}^{(i)} \otimes \bxi_{1,1}^{(i)})
+ \EE(\bxi^{(i)}) \otimes \EE[(\bxi^{(i)})^{\otimes2}]\bigr\} ,
\end{align*}
which can be written in the form
\begin{align*}
S_{n,1}
&= \sum_{i=1}^p \sum_{i'=1}^p \sum_{i''=1}^p
Y_{n-1,i} Y_{n-1,i'} Y_{n-1,i''} \EE(\bxi^{(i)}) \otimes \EE(\bxi^{(i')}) \otimes \EE(\bxi^{(i'')}) \\
&\quad
+ \sum_{i=1}^p \sum_{i'=1}^p
Y_{n-1,i} Y_{n-1,i'}
\bigl\{\EE[(\bxi^{(i)})^{\otimes2}] \otimes \EE(\bxi^{(i')})
+ \EE(\bxi^{(i)} \otimes \bxi^{(i')} \otimes \bxi^{(i)}) \\
&\phantom{\quad + \sum_{i=1}^p \sum_{i'=1}^p Y_{n-1,i} Y_{n-1,i'} \bigl\{}
+ \EE(\bxi^{(i')}) \otimes \EE[(\bxi^{(i)})^{\otimes2}]
- [\EE(\bxi^{(i)})]^{\otimes2} \otimes \EE(\bxi^{(i')}) \\
&\phantom{\quad + \sum_{i=1}^p \sum_{i'=1}^p Y_{n-1,i} Y_{n-1,i'} \bigl\{}
- \EE(\bxi^{(i)}) \otimes \EE(\bxi^{(i')}) \otimes \EE(\bxi^{(i)})
- \EE(\bxi^{(i')}) \otimes [\EE(\bxi^{(i)})]^{\otimes2}\bigr\} \\
&\quad
+ \sum_{i=1}^p
Y_{n-1,i}
\bigl\{\EE[(\bxi^{(i)})^{\otimes3}]
- \EE[(\bxi^{(i)})^{\otimes2}] \otimes \EE(\bxi^{(i)})
- \EE(\bxi_{1,1}^{(i)} \otimes \bxi_{1,2}^{(i)} \otimes \bxi_{1,1}^{(i)}) \\
&\phantom{\quad + \sum_{i=1}^p Y_{n-1,i} \bigl\{}
- \EE(\bxi^{(i)}) \otimes \EE[(\bxi^{(i)})^{\otimes2}]
+ 2 [\EE(\bxi^{(i)})]^{\otimes3}\bigr\} .
\end{align*}
Hence
\begin{equation}\label{Sn1}
S_{n,1} = \bM_\bxi^{\otimes3} \bY_{n-1}^{\otimes3} + \bA_{3,2}^{(1)} \bY_{n-1}^{\otimes2}
+ \bA_{3,1}^{(1)} \bY_{n-1}
\end{equation}
with
\begin{align*}
\bA_{3,2}^{(1)}
&:= \sum_{i=1}^p \sum_{i'=1}^p
\bigl\{\EE[(\bxi^{(i)})^{\otimes2}] \otimes \EE(\bxi^{(i')})
+ \EE(\bxi^{(i)} \otimes \bxi^{(i')} \otimes \bxi^{(i)})
+ \EE(\bxi^{(i')}) \otimes \EE[(\bxi^{(i)})^{\otimes2}] \\
&\phantom{:= \sum_{i=1}^p \sum_{i'=1}^p \bigl\{}
- [\EE(\bxi^{(i)})]^{\otimes2} \otimes \EE(\bxi^{(i')})
- \EE(\bxi^{(i)}) \otimes \EE(\bxi^{(i')}) \otimes \EE(\bxi^{(i)})
- \EE(\bxi^{(i')}) \otimes [\EE(\bxi^{(i)})]^{\otimes2}\bigr\} \\
&\phantom{:= \sum_{i=1}^p \sum_{i'=1}^p}
\times (\be_i^\top \otimes \be_{i'}^\top)
\in \RR^{p^3\times p^2} , \\
\bA_{3,1}^{(1)}
&:= \sum_{i=1}^p
\bigl\{\EE[(\bxi^{(i)})^{\otimes3}]
- \EE[(\bxi^{(i)})^{\otimes2}] \otimes \EE(\bxi^{(i)})
- \EE(\bxi_{1,1}^{(i)} \otimes \bxi_{1,2}^{(i)} \otimes \bxi_{1,1}^{(i)})
- \EE(\bxi^{(i)}) \otimes \EE[(\bxi^{(i)})^{\otimes2}] \\
&\phantom{:= \sum_{i=1}^p \bigl\{}
+ 2 [\EE(\bxi^{(i)})]^{\otimes3}\bigr\}
\be_i^\top
\in \RR^{p^3\times p} .
\end{align*}
Moreover,
\begin{align*}
S_{n,2}
&= \sum_{i=1}^p \sum_{\underset{\SC i'\ne i}{i'=1}}^p
Y_{n-1,i} Y_{n-1,i'}
\bigl\{\EE(\bxi^{(i)}) \otimes \EE(\bxi^{(i')}) \otimes \bm_\bvare
+ \EE(\bxi^{(i)}) \otimes \bm_\bvare \otimes \EE(\bxi^{(i')}) \\
&\phantom{= \sum_{i=1}^p \sum_{\underset{\SC i'\ne i}{i'=1}}^p Y_{n-1,i} Y_{n-1,i'} \bigl\{}
+ \bm_\bvare \otimes \EE(\bxi^{(i)}) \otimes \EE(\bxi^{(i')})\bigr\} \\
&\quad
+ \sum_{i=1}^p
Y_{n-1,i} (Y_{n-1,i} - 1)
\bigl\{[\EE(\bxi^{(i)})]^{\otimes2} \otimes \bm_\bvare
+ \EE(\bxi^{(i)}) \otimes \bm_\bvare \otimes \EE(\bxi^{(i)}) \\
&\phantom{= + \sum_{i=1}^p Y_{n-1,i} (Y_{n-1,i} - 1) \bigl\{}
+ \bm_\bvare \otimes [\EE(\bxi^{(i)})]^{\otimes2}\bigr\} \\
&\quad
+ \sum_{i=1}^p
Y_{n-1,i}
\bigl\{\EE[(\bxi^{(i)})^{\otimes2}] \otimes \bm_\bvare
+ \EE(\bxi^{(i)} \otimes \bvare \otimes \bxi^{(i)})
+ \bm_\bvare \otimes \EE[(\bxi^{(i)})^{\otimes2}]\bigr\} ,
\end{align*}
where \ $\EE(\bxi^{(i)} \otimes \bvare \otimes \bxi^{(i)})$ \ is finite, since there exists a permutation matrix
\ $\bP \in \RR^{p^2\times p^2}$ \ such that \ $\bu \otimes \bv = \bP (\bv \otimes \bu)$ \ for all
\ $\bu, \bv \in \RR^p$ \ (see, e.g., Henderson and Searle \cite[formula (6)]{HenSea}), hence
\begin{align*}
\EE(\bxi^{(i)} \otimes \bvare \otimes \bxi^{(i)})
&= \EE([\bP (\bvare \otimes \bxi^{(i)})] \otimes \bxi^{(i)})
= \EE\bigl([\bP (\bvare \otimes \bxi^{(i)})] \otimes (\bI_p \bxi^{(i)})\bigr) \\
&= \EE\bigl((\bP \otimes \bI_p) (\bvare \otimes \bxi^{(i)} \otimes \bxi^{(i)})\bigr)
= (\bP \otimes \bI_p) \bigl(\bm_\bvare \otimes \EE[(\bxi^{(i)})^{\otimes2}]\bigr) .
\end{align*}
Thus
\begin{align*}
S_{n,2}
&= \sum_{i=1}^p \sum_{i'=1}^p
Y_{n-1,i} Y_{n-1,i'}
\bigl\{\EE(\bxi^{(i)}) \otimes \EE(\bxi^{(i')}) \otimes \bm_\bvare
+ \EE(\bxi^{(i)}) \otimes \bm_\bvare \otimes \EE(\bxi^{(i')}) \\
&\phantom{= \sum_{i=1}^p \sum_{i'=1}^p Y_{n-1,i} Y_{n-1,i'} \bigl\{}
+ \bm_\bvare \otimes \EE(\bxi^{(i)}) \otimes \EE(\bxi^{(i')})\bigr\} \\
&\quad
+ \sum_{i=1}^p
Y_{n-1,i}
\bigl\{\EE[(\bxi^{(i)})^{\otimes2}] \otimes \bm_\bvare
+ \EE(\bxi^{(i)} \otimes \bvare \otimes \bxi^{(i)})
+ \bm_\bvare \otimes \EE[(\bxi^{(i)})^{\otimes2}] \\
&\phantom{= + \sum_{i=1}^p Y_{n-1,i} \bigl\{}
- [\EE(\bxi^{(i)})]^{\otimes2} \otimes \bm_\bvare
- \EE(\bxi^{(i)}) \otimes \bm_\bvare \otimes \EE(\bxi^{(i)})
- \bm_\bvare \otimes [\EE(\bxi^{(i)})]^{\otimes2}\bigr\} .
\end{align*}
Hence
\begin{equation}\label{Sn2}
S_{n,2} = \bA_{3,2}^{(2)} \bY_{n-1}^{\otimes2} + \bA_{3,1}^{(2)} \bY_{n-1}
\end{equation}
with
\begin{align*}
\bA_{3,2}^{(2)}
&:= \sum_{i=1}^p \sum_{i'=1}^p
\bigl\{\EE(\bxi^{(i)}) \otimes \EE(\bxi^{(i')}) \otimes \bm_\bvare
+ \EE(\bxi^{(i)}) \otimes \bm_\bvare \otimes \EE(\bxi^{(i')}) \\
&\phantom{:= \sum_{i=1}^p \sum_{i'=1}^p \bigl\{}
+ \bm_\bvare \otimes \EE(\bxi^{(i)}) \otimes \EE(\bxi^{(i')})\bigr\}
(\be_i^\top \otimes \be_{i'}^\top)
\in \RR^{p^3\times p^2} , \\
\bA_{3,1}^{(2)}
&:= \sum_{i=1}^p
\bigl\{\EE[(\bxi^{(i)})^{\otimes2}] \otimes \bm_\bvare
+ \EE(\bxi^{(i)} \otimes \bvare \otimes \bxi^{(i)})
+ \bm_\bvare \otimes \EE[(\bxi^{(i)})^{\otimes2}] \\
&\phantom{:= \sum_{i=1}^p \bigl\{}
- [\EE(\bxi^{(i)})]^{\otimes2} \otimes \bm_\bvare
- \EE(\bxi^{(i)}) \otimes \bm_\bvare \otimes \EE(\bxi^{(i)})
- \bm_\bvare \otimes [\EE(\bxi^{(i)})]^{\otimes2}\bigr\}
\be_i^\top
\in \RR^{p^3\times p} .
\end{align*}
Further,
\[
S_{n,3}
= \sum_{i=1}^p Y_{n-1,i}
\bigl\{\EE(\bxi^{(i)}) \otimes \EE(\bvare^{\otimes2})
+ \EE(\bvare \otimes \bxi^{(i)} \otimes \bvare)
+ \EE(\bvare^{\otimes2}) \otimes \EE(\bxi^{(i)})\bigr\}
= \bA_{3,1}^{(3)} \bY_{n-1}
\]
with
\[
\bA_{3,1}^{(3)}
:= \sum_{i=1}^p
\bigl\{\EE(\bxi^{(i)}) \otimes \EE(\bvare^{\otimes2})
+ \EE(\bvare \otimes \bxi^{(i)} \otimes \bvare)
+ \EE(\bvare^{\otimes2}) \otimes \EE(\bxi^{(i)})\bigr\}
\be_i^\top
\in \RR^{p^3\times p} ,
\]
where \ $\EE(\bvare \otimes \bxi^{(i)} \otimes \bvare)$ \ is finite, since
\begin{align*}
\EE(\bvare \otimes \bxi^{(i)} \otimes \bvare)
&= \EE([\bP (\bxi^{(i)} \otimes \bvare)] \otimes \bvare)
= \EE\bigl([\bP (\bxi^{(i)} \otimes \bvare)] \otimes (\bI_p \bvare)\bigr) \\
&= \EE\bigl((\bP \otimes \bI_p) (\bxi^{(i)} \otimes \bvare \otimes \bvare)\bigr)
= (\bP \otimes \bI_p) \bigl(\EE(\bxi^{(i)}) \otimes \EE[\bvare^{\otimes2}]\bigr) .
\end{align*}
Consequently, we have
\[
\EE(\bY_n^{\otimes3} \mid \cF_{n-1}^\bY)
= \bM_\bxi^{\otimes3} \bY_{n-1}^{\otimes3} + \bA_{3,2} \bY_{n-1}^{\otimes2} + \bA_{3,1} \bY_{n-1}
+ \EE(\bvare^{\otimes3})
\]
with \ $\bA_{3,2} := \bA_{3,2}^{(1)} +\bA_{3,2}^{(2)}$ \ and
\ $\bA_{3,1} := \bA_{3,1}^{(1)} + \bA_{3,1}^{(2)} + \bA_{3,1}^{(3)}$.
\ Taking the expectation, we get
\begin{equation}\label{moment_3_Y}
\EE(\bY_n^{\otimes3})
= \bM_\bxi^{\otimes3} \EE(\bY_{n-1}^{\otimes3}) + \bA_{3,2} \EE(\bY_{n-1}^{\otimes2}) + \bA_{3,1} \EE(\bY_{n-1})
+ \EE(\bvare^{\otimes3}) .
\end{equation}
Summarizing, we obtain
\[
\begin{bmatrix}
\EE(\bY_n) \\
\EE(\bY_n^{\otimes2}) \\
\EE(\bY_n^{\otimes3})
\end{bmatrix}
= \bA_3
\begin{bmatrix}
\EE(\bY_{n-1}) \\
\EE(\bY_{n-1}^{\otimes2}) \\
\EE(\bY_{n-1}^{\otimes3})
\end{bmatrix}
+ \begin{bmatrix}
\bm_\bvare \\
\EE(\bvare^{\otimes2}) \\
\EE(\bvare^{\otimes3})
\end{bmatrix} , \qquad n \in \NN ,
\]
with
\[
\bA_3 := \begin{bmatrix}
\bM_\bxi & \bzero & \bzero \\
\bA_{2,1} & \bM_\bxi^{\otimes2} & \bzero \\
\bA_{3,1} & \bA_{3,2} & \bM_\bxi^{\otimes3}
\end{bmatrix}
\in \RR^{(p+p^2+p^3)\times(p+p^2+p^3)} .
\]
Taking into account \ $\bY_0 = \bzero$, \ we obtain
\[
\begin{bmatrix}
\EE(\bY_n) \\
\EE(\bY_n^{\otimes2}) \\
\EE(\bY_n^{\otimes3})
\end{bmatrix}
= \sum_{k=1}^n
\bA_3^{n-k}
\begin{bmatrix}
\bm_\bvare \\
\EE(\bvare^{\otimes2}) \\
\EE(\bvare^{\otimes3})
\end{bmatrix}
= \sum_{\ell=0}^{n-1}
\bA_3^\ell
\begin{bmatrix}
\bm_\bvare \\
\EE(\bvare^{\otimes2}) \\
\EE(\bvare^{\otimes3})
\end{bmatrix} , \qquad n \in \NN .
\]
We have \ $\varrho(\bA_3) = \max\{\varrho(\bM_\bxi), \varrho(\bM_\bxi^{\otimes2}), \varrho(\bM_\bxi^{\otimes3})\}$,
\ where \ $\varrho(\bM_\bxi^{\otimes2}) = [\varrho(\bM_\bxi)]^2$ \ and
\ \ $\varrho(\bM_\bxi^{\otimes3}) = [\varrho(\bM_\bxi)]^3$.
\ Taking into account \ $\varrho(\bM_\bxi) < 1$, \ we conclude \ $\varrho(\bA_3) = \varrho(\bM_\bxi) < 1$, \ and,
by \eqref{reprpi}, the third moment of \ $\pi$ \ is finite,
and
\begin{equation}\label{moment_3_pi}
\begin{bmatrix}
\int_{\RR^p} \bx \, \pi(\dd\bx) \\
\int_{\RR^p} \bx^{\otimes2} \, \pi(\dd\bx) \\
\int_{\RR^p} \bx^{\otimes3} \, \pi(\dd\bx) \\
\end{bmatrix}
= (\bI_{p+p^2+p^3} - \bA_3)^{-1}
\begin{bmatrix}
\bm_\bvare \\
\EE(\bvare^{\otimes2}) \\
\EE(\bvare^{\otimes3})
\end{bmatrix} .
\end{equation}
Since
\[
(\bI_{p+p^2+p^3} - \bA_3)^{-1}
= \begin{bmatrix}
(\bI_p - \bM_\bxi)^{-1} & \bzero & \bzero \\
\bB_{2,1} & (\bI_{p^2} - \bM_\bxi^{\otimes2})^{-1} & \bzero \\
\bB_{3,1} & \bB_{3,2} & (\bI_{p^3} - \bM_\bxi^{\otimes3})^{-1} \\
\end{bmatrix},
\]
where
\begin{align*}
&\bB_{2,1} = (\bI_{p^2} - \bM_\bxi^{\otimes2})^{-1} \bA_{2,1} (\bI_p - \bM_\bxi)^{-1},\\
&\bB_{3,1} = (\bI_{p^3} - \bM_\bxi^{\otimes3})^{-1} (\bA_{3,1}(\bI_p - \bM_\bxi)^{-1} + \bA_{3,2}\bB_{2,1}),\\
&\bB_{3,2} = (\bI_{p^3} - \bM_\bxi^{\otimes3})^{-1} \bA_{3,2} (\bI_{p^2} - \bM_\bxi^{\otimes2})^{-1},
\end{align*}
we have
\[
\int_{\RR^p} \bx^{\otimes3} \, \pi(\dd\bx)
= \bB_{3,1} \bm_\bvare + \bB_{3,2} \EE(\bvare^{\otimes2})
+ (\bI_{p^3} - \bM_\bxi^{\otimes3})^{-1} \EE(\bvare^{\otimes3}). \\[-4mm]
\]
\proofend
\noindent{\bf Proof of Proposition \ref{simple_aggregation}.}
Similarly as \eqref{help12}, we have
\[
\EE(\bX_k \mid \cF_{k-1}^\bX) =\bM_\bxi \bX_{k-1} + \bm_{\bvare}, \qquad k \in \NN ,
\]
where \ $\cF_k^\bX := \sigma(\bX_0, \ldots, \bX_k)$, \ $k \in \ZZ_+$.
\ Consequently,
\begin{align}\label{help3}
\EE(\bX_k) = \bM_\bxi \EE(\bX_{k-1}) + \bm_{\bvare} , \qquad k \in \NN ,
\end{align}
and, by \eqref{moment_1_pi},
\begin{align}\label{help_exp_stat_distr}
\EE(\bX_0) = (\bI_p - \bM_\bxi )^{-1} \bm_{\bvare} .
\end{align}
Put
\begin{align*}
\bU_k :&= \bX_k - \EE(\bX_k \mid \cF_{k-1}^\bX) =\bX_k - (\bM_\bxi \bX_{k-1} + \bm_{\bvare}) \\
&= \sum_{i=1}^p \sum_{\ell=1}^{X_{k-1,i}} (\bxi^{(i)}_{k,\ell} - \EE(\bxi^{(i)}_{k,\ell}))
+ (\bvare_k - \EE(\bvare_k)) ,
\qquad k \in \NN.
\end{align*}
Then \ $\EE(\bU_k \mid \cF_{k-1}^\bX) = \bzero$, \ $k \in \NN$, \ and using the independence of
\ $\bigl\{\bxi^{(i)}_{k,\ell}, \bvare_k : k, \ell \in \NN, i \in \{1,\ldots,p\}\bigr\}$, \ we have
\begin{equation}\label{help4}
\EE(U_{k,i}U_{k,j} \mid \cF_{k-1}^\bX)
= \sum_{q=1}^p X_{k-1,q} \cov(\xi_{k,1}^{(q,i)}, \xi_{k,1}^{(q,j)}) + \cov(\vare_k^{(i)}, \vare_k^{(j)})
= \bv_{(i,j)}^\top
\begin{bmatrix}
\bX_{k-1} \\
1
\end{bmatrix}
\end{equation}
for \ $i, j \in \{1, \dots, p\}$ \ and \ $k \in \NN$, \ where \ $[U_{k,1}, \ldots, U_{k,p}]^\top := \bU_k$,
\ $k \in \NN$.
\ For each \ $k \in \NN$, \ using \ $\bX_k = \bM_\bxi \bX_{k-1} + \bm_{\bvare} + \bU_k$ \ and \eqref{help3}, we
obtain
\begin{equation}\label{help5}
\bX_k - \EE(\bX_k) = \bM_\bxi (\bX_{k-1} - \EE(\bX_{k-1})) + \bU_k , \qquad k \in \NN .
\end{equation}
Consequently,
\begin{align*}
&\EE((\bX_k - \EE(\bX_k)) (\bX_k - \EE(\bX_k))^\top \mid \cF_{k-1}^\bX) \\
&= \EE((\bM_\bxi (\bX_{k-1} - \EE(\bX_{k-1})) + \bU_k) (\bM_\bxi (\bX_{k-1} - \EE(\bX_{k-1})) + \bU_k)^\top
\mid \cF_{k-1}^\bX) \\
&= \EE(\bU_{k} \bU_{k}^\top \mid \cF_{k-1}^\bX)
+ \bM_\bxi (\bX_{k-1} - \EE(\bX_{k-1})) (\bX_{k-1} - \EE(\bX_{k-1}))^\top \bM_\bxi ^\top
\end{align*}
for all \ $k \in \NN$.
\ Taking the expectation, by \eqref{help_exp_stat_distr} and \eqref{help4}, we conclude
\[
\var(\bX_k)
= \EE(\bU_{k} \bU_{k}^\top) + \bM_\bxi \var(\bX_{k-1}) \bM_\bxi ^\top
= \bV + \bM_\bxi \var(\bX_{k-1}) \bM_\bxi ^\top ,
\qquad k \in \NN .
\]
Under the conditions of the proposition, by Lemma \ref{stationary_moments}, the unique stationary distribution
\ $\pi$ \ has a finite second moment, hence, using again the stationarity of \ $(\bX_k)_{k\in\ZZ_+}$, \ for each
\ $N \in \NN$, \ we get
\begin{align}\label{help10}
\var(\bX_0)
= \bV + \bM_\bxi \var(\bX_0)\bM_\bxi^\top
= \sum_{k=0}^{N-1} \bM_\bxi^k \bV (\bM_\bxi^\top)^k + \bM_\bxi^N \var(\bX_0) (\bM_\bxi^\top)^N .
\end{align}
Here \ $\lim_{N\to\infty} \bM_\bxi^N \var(\bX_0) (\bM_\bxi^\top)^N = \bzero \in\RR^{p\times p}$.
\ Indeed, the Gelfand formula gives \ $\varrho(\bM_\bxi) = \lim_{k\to\infty} \|\bM_\bxi^k\|^{1/k}$, \ see, e.g.,
Horn and Johnson \cite[Corollary 5.6.14]{HornJohnson}.
Hence there exists \ $k_0 \in \NN$ \ such that
\begin{equation}\label{Gelfand}
\|\bM_\bxi^k\|^{1/k} \leq \varrho(\bM_\bxi) + \frac{1-\varrho(\bM_\bxi)}{2}
= \frac{1+\varrho(\bM_\bxi)}{2}
< 1
\qquad \text{for all \ $k \geq k_0$,}
\end{equation}
since \ $\varrho(\bM_\bxi) < 1$.
\ Thus, for all \ $N \geq k_0$,
\begin{align*}
\|\bM_\bxi^N \var(\bX_0)(\bM_\bxi^\top)^N\|
&\leq \|\bM_\bxi^N\| \|\var(\bX_0)\| \|(\bM_\bxi^\top)^N\|
= \|\bM_\bxi^N\| \|\var(\bX_0)\| \|\bM_\bxi^N\| \\
&\leq \left(\frac{1+\varrho(\bM_\bxi)}{2}\right)^{2N} \|\var(\bX_0)\| ,
\end{align*}
hence \ $\|\bM_\bxi^N \var(\bX_0) (\bM_\bxi^\top)^N\| \to 0$ \ as \ $N \to \infty$.
\ Consequently, \ $\var(\bX_0) =\sum_{k=0}^\infty \bM_\bxi ^{k} \bV (\bM_\bxi ^{\top})^{k}$, \ yielding
\eqref{var0}.
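Note that in the special case \ $p = 1$, \ writing \ $m := \bM_\bxi \in [0, 1)$, \ formula \eqref{var0} reduces to
the familiar AR(1)-type expression
\[
 \var(\bX_0) = \sum_{k=0}^\infty m^{2k} \bV = \frac{\bV}{1 - m^2} .
\]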
Moreover, by \eqref{help5},
\begin{align*}
&\EE((\bX_0 - \EE(\bX_0)) (\bX_k - \EE(\bX_k))^\top \mid \cF_{k-1}^\bX)
= (\bX_0 - \EE(\bX_0)) \EE((\bX_k - \EE(\bX_k))^\top \mid \cF_{k-1}^\bX) \\
&= (\bX_0 - \EE(\bX_0)) (\bX_{k-1} - \EE(\bX_{k-1}) )^\top \bM_\bxi ^\top,
\qquad k \in \NN .
\end{align*}
Taking the expectation, we conclude
\[
\cov(\bX_0, \bX_k) = \cov(\bX_0, \bX_{k-1}) \bM_\bxi ^\top , \qquad k \in \NN .
\]
Hence, by induction, we obtain the formula for \ $\cov(\bX_0, \bX_k)$.
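\ Explicitly, the induction yields
\[
 \cov(\bX_0, \bX_k) = \var(\bX_0) (\bM_\bxi^\top)^k , \qquad k \in \ZZ_+ .
\]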
\ The statement will follow from the multidimensional central limit theorem.
Due to the continuous mapping theorem, it is sufficient to show the convergence
\ $N^{-1/2} (\bS^{(N)}_0, \bS^{(N)}_1, \ldots, \bS^{(N)}_k) \distr (\bcX_0, \bcX_1, \ldots, \bcX_k)$ \ as
\ $N \to \infty$ \ for all \ $k \in \ZZ_+$.
\ For all \ $k \in \ZZ_+$, \ the random vectors
\ $\bigl((\bX^{(j)}_0 - \EE(\bX^{(j)}_0))^\top, (\bX^{(j)}_1 - \EE(\bX^{(j)}_1))^\top,
\ldots, (\bX^{(j)}_k - \EE(\bX^{(j)}_k))^\top\bigr)^\top$,
\ $j \in \NN$,
are independent, identically distributed having zero mean vector and covariances
\[
\cov(\bX^{(j)}_{\ell_1}, \bX^{(j)}_{\ell_2})
= \cov(\bX^{(j)}_0, \bX^{(j)}_{\ell_2-\ell_1})
= \var(\bX_0) (\bM_\bxi ^\top)^{\ell_2-\ell_1}
\]
for \ $j \in \NN$, \ $\ell_1, \ell_2 \in \{0, 1, \ldots, k\}$, \ $\ell_1\leq \ell_2$, \ following from the
strict stationarity of \ $\bX^{(j)}$ \ and from \eqref{cov}.
\proofend
\noindent{\bf Proof of Proposition \ref{simple_aggregation2}.}
It is known that
\[
\bU_k
= \bX_k - \EE(\bX_k \mid \cF^\bX_{k-1})
= \bX_k - \bM_\bxi \bX_{k-1} - \bm_{\bvare} ,
\qquad k \in \NN ,
\]
are martingale differences with respect to the filtration \ $(\cF_k^\bX)_{k\in\ZZ_+}$.
\ The functional martingale central limit theorem can be applied, see, e.g., Jacod and Shiryaev
\cite[Theorem VIII.3.33]{JacShi}.
Indeed, using \eqref{help4} and the fact that the first moment of \ $\bX_0$ \ exists and is finite, by
\eqref{help_ergodic}, for each \ $t \in \RR_+$ \ and \ $i, j \in \{1, \ldots, p\}$, \ we have
\[
\frac{1}{n} \sum_{k=1}^\nt \EE(U_{k,i}U_{k,j} \mid \cF^\bX_{k-1})
\as \bv_{(i,j)}^\top
\begin{bmatrix}
\EE(\bX_0) \\
1
\end{bmatrix} t
= V_{i,j} t
\qquad \text{as \ $n \to \infty$,}
\]
and hence the convergence holds in probability as well.
Moreover, the conditional Lindeberg condition holds, namely, for all \ $\delta > 0$,
\begin{equation}\label{help9}
\begin{aligned}
\frac{1}{n}
\sum_{k=1}^\nt
\EE\bigl(\|\bU_k\|^2 \bone_{\{\|\bU_k\|>\delta\sqrt{n}\}} \mid \cF^\bX_{k-1}\bigr)
&\leq \frac{1}{\delta n^{3/2}} \sum_{k=1}^{\nt} \EE(\|\bU_k\|^3 \mid \cF^\bX_{k-1}) \\
&\leq \frac{C_3 (p+1)^3}{\delta n^{3/2}}
\sum_{k=1}^\nt \left\|\begin{bmatrix}
\bX_{k-1} \\
1 \\
\end{bmatrix}\right\|^3
\as 0
\end{aligned}
\end{equation}
with \ $C_3 := \max\{\EE(\|\bxi^{(i)} - \EE(\bxi^{(i)})\|^3)$, \ $i \in \{1, \ldots, p\}$,
\ $\EE(\|\bvare - \EE(\bvare)\|^3)\}$, \ where the last inequality follows by Proposition 3.3 of Ned\'enyi
\cite{Ned}, and the almost sure convergence is a consequence of \eqref{help_ergodic}, since, under the
third order moment assumptions in Proposition \ref{simple_aggregation2}, by Lemma \ref{stationary_moments}
and \eqref{help_ergodic},
\[
\frac{1}{n}
\sum_{k=1}^{\nt}
\left\|\begin{bmatrix}
\bX_{k-1} \\
1 \\
\end{bmatrix}\right\|^3
\as t \EE\left(\left\|\begin{bmatrix}
\bX_0 \\
1 \\
\end{bmatrix}\right\|^3 \right)
\qquad \text{as \ $n \to \infty$.}
\]
Hence we obtain
\[
\biggl(\frac{1}{\sqrt{n}} \sum_{k=1}^{\lfloor nt \rfloor} \bU_k\biggr)_{t\in\RR_+}
\distr \bB \qquad
\text{as \ $n \to \infty$,}
\]
where \ $\bB = (\bB_t)_{t\in\RR_+}$ \ is a \ $p$-dimensional zero mean Brownian motion satisfying
\ $\var(\bB_1) = \bV$.
\ Using \eqref{help5}, we have
\[
\bX_k - \EE(\bX_k)
= \bM_\bxi ^k (\bX_0 - \EE(\bX_0)) + \sum_{j=1}^k \bM_\bxi ^{k-j} \bU_j ,
\qquad k \in \NN .
\]
Consequently, for each \ $n \in \NN$ \ and \ $t \in \RR_+$,
\begin{equation}\label{help7}
\begin{aligned}
&\frac{1}{\sqrt{n}} \sum_{k=1}^\nt (\bX_k - \EE(\bX_k)) \\
&= \frac{1}{\sqrt{n}}
\Biggl[\Biggl(\sum_{k=1}^\nt \bM_\bxi^k\Biggr) (\bX_0 - \EE(\bX_0))
+ \sum_{k=1}^\nt \sum_{j=1}^k \bM_\bxi^{k-j} \bU_j\Biggr] \\
&= \frac{1}{\sqrt{n}}
\Biggl[(\bI_p - \bM_\bxi)^{-1} (\bM_\bxi - \bM_\bxi^{\lfloor nt \rfloor+1}) (\bX_0 - \EE(\bX_0))
+ \sum_{j=1}^\nt \Biggl(\sum_{k=j}^\nt \bM_\bxi^{k-j}\Biggr) \bU_j \Biggr] \\
&= \frac{1}{\sqrt{n}}
\Biggl[(\bI_p - \bM_\bxi)^{-1} (\bM_\bxi - \bM_\bxi^{\lfloor nt \rfloor+1}) (\bX_0 - \EE(\bX_0))
+ (\bI_p - \bM_\bxi)^{-1} \sum_{j=1}^\nt (\bI_p - \bM_\bxi^{\lfloor nt \rfloor-j+1}) \bU_j\Biggr] ,
\end{aligned}
\end{equation}
implying the statement using Slutsky's lemma since \ $\varrho(\bM_\bxi) < 1$.
\ Indeed, \ $\lim_{n\to\infty} \bM_\bxi^{\lfloor nt \rfloor+1} = \bzero$ \ by \eqref{Gelfand}, hence
\[
\frac{1}{\sqrt{n}}
(\bI_p - \bM_\bxi)^{-1} (\bM_\bxi - \bM_\bxi^{\lfloor nt \rfloor+1}) (\bX_0 - \EE(\bX_0))
\as \bzero \qquad \text{as \ $n\to\infty$.}
\]
Moreover, \ $n^{-1/2} (\bI_p - \bM_\bxi )^{-1} \sum_{j=1}^\nt \bM_\bxi^{\lfloor nt \rfloor-j+1} \bU_j$
\ converges in \ $L_1$ \ and hence in probability to \ $\bzero$ \ as \ $n \to \infty$, \ since by \eqref{help4},
\begin{align}\label{help8}
\EE(|U_{k,j}|)
\leq \sqrt{\EE(U_{k,j}^2)}
= \sqrt{\bv_{(j,j)}^\top \begin{bmatrix}\EE(\bX_0)\\ 1\end{bmatrix}}
= \sqrt{V_{j,j}} ,
\qquad j \in \{1, \ldots, p\} , \qquad k \in \NN ,
\end{align}
and hence
\begin{align}
&\EE\biggl(\biggl\|\frac{1}{\sqrt{n}} \sum_{k=1}^\nt \bM_\bxi^{\lfloor nt \rfloor-k+1} \bU_k \biggr\|\biggr)
\leq \frac{1}{\sqrt{n}} \sum_{k=1}^\nt \EE(\|\bM_\bxi^{\lfloor nt \rfloor-k+1} \bU_k\|) \nonumber \\
&\leq \frac{1}{\sqrt{n}} \sum_{k=1}^\nt \|\bM_\bxi^{\lfloor nt \rfloor-k+1}\| \EE(\|\bU_k\|)
\leq \frac{1}{\sqrt{n}} \sum_{k=1}^\nt \|\bM_\bxi^{\lfloor nt \rfloor-k+1}\| \sum_{j=1}^p \EE(|U_{k,j}|)
\nonumber \\
&\leq \frac{1}{\sqrt{n}}
\sum_{k=1}^{\lfloor nt \rfloor} \|\bM_\bxi^{\lfloor nt \rfloor-k+1}\|
\sum_{j=1}^p \sqrt{V_{j,j}}
\to 0 \qquad \text{as \ $n \to \infty$,} \label{Gelfand2}
\end{align}
since, applying \eqref{Gelfand} for \ $\nt \geq k_0$, \ we have
\begin{align*}
&\sum_{k=1}^\nt \|\bM_\bxi^{\lfloor nt \rfloor-k+1}\|
= \sum_{k=1}^\nt \|\bM_\bxi^k\|
= \sum_{k=1}^{k_0-1} \|\bM_\bxi^k\| + \sum_{k=k_0}^\nt \|\bM_\bxi^k\| \\
&\leq \sum_{k=1}^{k_0-1} \|\bM_\bxi^k\|
+ \sum_{k=k_0}^{\lfloor nt \rfloor} \biggl(\frac{1+\varrho(\bM_\bxi)}{2}\biggr)^k
\leq \sum_{k=1}^{k_0-1} \|\bM_\bxi^k\|
+ \sum_{k=k_0}^\infty \biggl(\frac{1+\varrho(\bM_\bxi)}{2}\biggr)^k
< \infty .
\end{align*}
Consequently, by Slutsky's lemma,
\[
\biggl(n^{-\frac{1}{2}} \sum_{k=1}^{\lfloor nt \rfloor} (\bX_k - \EE(\bX_k))\biggr)_{t\in\RR_+}
\distr (\bI_p-\bM_\bxi )^{-1} \bB\,
\qquad \text{as \ $n \to \infty$,}
\]
where \ $\bB = (\bB_t)_{t\in\RR_+}$ \ is a \ $p$-dimensional zero mean Brownian motion satisfying
\ $\var(\bB_1) = \bV$, \ as desired.
\proofend
\noindent{\bf Proof of Theorem \ref{double_aggregation}.}
First, we prove \eqref{help2_double_aggregation}.
For all \ $N, m \in \NN$ \ and all \ $t_1, \ldots, t_m \in\RR_+$, \ by Proposition \ref{simple_aggregation2} and
the continuity theorem, we have
\[
\frac{1}{\sqrt{n}} (\bS^{(N,n)}_{t_1}, \ldots, \bS^{(N,n)}_{t_m})
\distr (\bI_p-\bM_\bxi)^{-1} \sum_{\ell=1}^N (\bB^{(\ell)}_{t_1}, \ldots, \bB^{(\ell)}_{t_m})
\]
as \ $n \to \infty$, \ where \ $\bB^{(\ell)} = (\bB^{(\ell)}_t)_{t\in\RR_+}$, \ $\ell \in \{1, \ldots, N\}$, \ are
independent \ $p$-dimensional zero mean Brownian motions satisfying \ $\var(\bB^{(\ell)}_1) = \bV$,
\ $\ell \in \{1, \ldots, N\}$.
\ Since
\[
\frac{1}{\sqrt{N}} \sum_{\ell=1}^N (\bB^{(\ell)}_{t_1}, \ldots, \bB^{(\ell)}_{t_m})
\distre (\bB_{t_1}, \ldots, \bB_{t_m}) , \qquad N \in \NN , \quad m \in \NN ,
\]
we obtain the convergence \eqref{help2_double_aggregation}.
Now, we turn to prove \eqref{help1_double_aggregation}.
For all \ $n \in \NN$ \ and for all \ $t_1, \ldots, t_m \in \RR_+$ \ with \ $t_1 < \ldots < t_m$, \ $m \in \NN$,
\ by Proposition \ref{simple_aggregation} and by the continuous mapping theorem, we have
\begin{align*}
\frac{1}{\sqrt{N}} \bigl((\bS^{(N,n)}_{t_1})^\top , \ldots, (\bS^{(N,n)}_{t_m})^\top\bigr)^ \top
&\distr \Biggl(\sum_{k=1}^{\lfloor nt_1\rfloor} \bcX_k^\top , \ldots,
\sum_{k=1}^{\lfloor nt_m\rfloor} \bcX_k^\top\Biggr)^\top \\
&\distre \cN_ {pm} \Biggl(\bzero,
\var\Biggl(\Biggl(\sum_{k=1}^{\lfloor nt_1\rfloor}
\bcX_k^\top , \ldots,
\sum_{k=1}^{\lfloor nt_m\rfloor}
\bcX_k^\top\Biggr)^\top\Biggr)\Biggr)
\end{align*}
as \ $N \to \infty$, \ where \ $(\bcX_k)_{k\in\ZZ_+}$ \ is the \ $p$-dimensional zero mean stationary
Gaussian process given in Proposition \ref{simple_aggregation} and, by \eqref{cov},
\begin{align*}
&\var\Biggl(\Biggl(\sum_{k=1}^{\lfloor nt_1\rfloor} \bcX_k^\top , \ldots,
\sum_{k=1}^{\lfloor nt_m\rfloor} \bcX_k^\top\Biggr)^\top\Biggr)
= \left(\cov\Biggl(\sum_{k=1}^{\lfloor nt_i\rfloor} \bcX_k,
\sum_{k=1}^{\lfloor nt_j\rfloor} \bcX_k\Biggr) \right)_{i,j=1}^m
\end{align*}
\begin{align*}
&= \left(\sum_{k=1}^{\lfloor nt_i\rfloor} \sum_{\ell=1}^{\lfloor nt_j\rfloor}
\cov(\bcX_k, \bcX_\ell)\right)_{i,j=1}^m \\
&= \Bigg(\sum_{k=1}^{\lfloor nt_i\rfloor} \sum_{\ell=1}^{(k-1)\wedge \lfloor nt_j\rfloor}
\bM_\bxi ^{k-\ell} \var(\bX_0)
+ {(\lfloor nt_i\rfloor\wedge \lfloor nt_j\rfloor)}\var(\bX_0)\\
&\phantom{= \Bigg(\;}
+ \var(\bX_0)
\sum_{k=1}^{\lfloor nt_i\rfloor} \sum_{\ell=k+1}^{\lfloor nt_j\rfloor}
(\bM_\bxi^\top)^{\ell-k}\Bigg)_{i,j=1}^m ,
\end{align*}
where \ $\sum_{\ell=q_1}^{q_2} := 0$ \ for all \ $q_2 < q_1$, \ $q_1, q_2 \in \NN$.
\ By the continuity theorem, for all \ $\btheta_1, \ldots, \btheta_m \in \RR^p$, \ $m \in \NN$, \ we conclude
\begin{align*}
&\lim_{N\to\infty}
\EE\biggl(\exp\biggl\{\ii
\sum_{j=1}^m
\btheta_j^\top n^{-1/2} N^{-1/2}
\bS^{(N,n)}_{t_j}\biggr\}\biggr) \\
&= \exp\left\{-\frac{1}{2n}
\sum_{i=1}^m \sum_{j=1}^m
\btheta_i^\top \left[
\sum_{k=1}^{\lfloor nt_i\rfloor}
\sum_{\ell=1}^{\lfloor nt_j\rfloor}
\cov(\bcX_k,\bcX_\ell)\right]
\btheta_j \right\}\\
&\to \exp\biggl\{-\frac{1}{2}
\sum_{i=1}^m \sum_{j=1}^m
(t_i \land t_j) \btheta_i^\top \Big[
\bM_\bxi (\bI_p-\bM_\bxi )^{-1}\var(\bX_0)+\var(\bX_0) \\
&\phantom{\to \exp\biggl\{-\frac{1}{2} \sum_{i=1}^m \sum_{j=1}^m (t_i \land t_j) \btheta_i^\top \Big[\,}
+\var(\bX_0)
(\bI_p-\bM_\bxi ^\top)^{-1}\bM_\bxi ^\top\Big] \btheta_j \biggr\}
\qquad \text{as \ $n \to \infty$.}
\end{align*}
Indeed, for all \ $s, t \in \RR_+$ \ with \ $s < t$, \ we have
\begin{align*}
&\frac{1}{n} \sum_{k=1}^\ns \sum_{\ell=1}^\nt \cov(\bcX_k, \bcX_\ell) \\
&= \frac{1}{n} \sum_{k=1}^\ns \sum_{\ell=1}^{k-1} \bM_\bxi^{k-\ell} \var(\bX_0)
+ \frac{\ns}{n}\var(\bX_0)
+ \frac{1}{n} \var(\bX_0)\sum_{k=1}^\ns \sum_{\ell=k+1}^\nt (\bM_\bxi^\top)^{\ell-k} \\
&= \frac{1}{n} \sum_{k=1}^\ns (\bM_\bxi - \bM_\bxi^k) (\bI_p - \bM_\bxi)^{-1} \var(\bX_0)
+ \frac{\ns}{n}\var(\bX_0) \\
&\phantom{=\,}
+ \frac{1}{n} \var(\bX_0) (\bI_p - \bM_\bxi^\top)^{-1}
\sum_{k=1}^\ns (\bM_\bxi^\top - (\bM_\bxi^\top)^{\lfloor nt\rfloor-k+1}) \\
&= \frac{1}{n}
\Bigl(\ns \bM_\bxi - \bM_\bxi (\bI_p - \bM_\bxi^\ns) (\bI_p-\bM_\bxi )^{-1}\Bigr)
(\bI_p - \bM_\bxi)^{-1} \var(\bX_0)
+ \frac{\lfloor ns\rfloor}{n} \var(\bX_0) \\
&\quad
+ \frac{1}{n} \var(\bX_0) (\bI_p - \bM_\bxi^\top)^{-1}
\Bigl(\ns \bM_\bxi^\top
- (\bI_p - \bM_\bxi^\top)^{-1}
(\bI_p - (\bM_\bxi^\top)^\ns) (\bM_\bxi ^\top)^{\nt - \ns + 1}\Bigr)
\end{align*}
\begin{align*}
&= \frac{\ns}{n}
\Bigl(\bM_\bxi (\bI_p - \bM_\bxi)^{-1} \var(\bX_0) + \var(\bX_0)
+ \var(\bX_0) (\bI_p - \bM_\bxi^\top)^{-1} \bM_\bxi ^\top\Bigr) \\
&\quad
- \frac{1}{n} \Bigl(\bM_\bxi (\bI_p - \bM_\bxi^\ns) (\bI_p - \bM_\bxi)^{-2} \var(\bX_0) \\
&\phantom{\quad-\frac{1}{n}\Big[\;}
+ \var(\bX_0) (\bI_p - \bM_\bxi ^\top)^{-2}
(\bI_p - (\bM_\bxi^\top)^\ns) (\bM_\bxi^\top)^{\nt-\ns+1}\Bigr) \\
&\to s \Bigl(\bM_\bxi (\bI_p - \bM_\bxi)^{-1} \var(\bX_0) + \var(\bX_0)
+ \var(\bX_0) (\bI_p - \bM_\bxi^\top)^{-1} \bM_\bxi^\top\Bigr) \qquad \text{as \ $n \to \infty$,}
\end{align*}
since \ $\lim_{n\to\infty} \bM_\bxi^\ns = \bzero$, \ $\lim_{n\to\infty} (\bM_\bxi^\top)^\ns = \bzero$ \ and
\ $\lim_{n\to\infty} (\bM_\bxi^\top)^{\nt-\ns+1} = \bzero$ \ by \eqref{Gelfand}.
It remains to show that
\begin{equation}\label{help6}
\begin{aligned}
&\bM_\bxi (\bI_p - \bM_\bxi)^{-1} \var(\bX_0) + \var(\bX_0)
+ \var(\bX_0) (\bI_p - \bM_\bxi^\top)^{-1} \bM_\bxi^\top \\
&= (\bI_p - \bM_\bxi)^{-1} \bV (\bI_p - \bM_\bxi^\top)^{-1} .
\end{aligned}
\end{equation}
We have
\begin{align}\label{help11}
\bM_\bxi (\bI_p - \bM_\bxi)^{-1}
= (\bI_p - (\bI_p - \bM_\bxi)) (\bI_p - \bM_\bxi)^{-1}
= (\bI_p - \bM_\bxi)^{-1} - \bI_p ,
\end{align}
and hence \ $(\bI_p - \bM_\bxi^\top)^{-1} \bM_\bxi^\top = (\bI_p - \bM_\bxi^\top)^{-1} - \bI_p$, \ thus the
left-hand side of equation \eqref{help6} can be written as
\begin{align*}
&((\bI_p - \bM_\bxi)^{-1} - \bI_p) \var(\bX_0) + \var(\bX_0)
+ \var(\bX_0) ((\bI_p - \bM_\bxi^\top)^{-1} - \bI_p) \\
&= (\bI_p - \bM_\bxi)^{-1} \var(\bX_0) - \var(\bX_0) + \var(\bX_0) (\bI_p - \bM_\bxi^\top)^{-1} .
\end{align*}
By \eqref{help10}, we have \ $\bV = \var(\bX_0) - \bM_\bxi \var(\bX_0) \bM_\bxi^\top$, \ hence, by \eqref{help11},
the right-hand side of the equation \eqref{help6} can be written as
\begin{align*}
&(\bI_p - \bM_\bxi)^{-1} (\var(\bX_0) - \bM_\bxi \var(\bX_0) \bM_\bxi^\top) (\bI_p - \bM_\bxi^\top)^{-1} \\
&= (\bI_p - \bM_\bxi)^{-1} \var(\bX_0) (\bI_p - \bM_\bxi^\top)^{-1}
- (\bI_p - \bM_\bxi)^{-1} \bM_\bxi \var(\bX_0) \bM_\bxi^\top (\bI_p - \bM_\bxi^\top)^{-1} \\
&= (\bI_p - \bM_\bxi)^{-1} \var(\bX_0) (\bI_p - \bM_\bxi^\top)^{-1}
- ((\bI_p - \bM_\bxi)^{-1} - \bI_p) \var(\bX_0) ((\bI_p - \bM_\bxi^\top)^{-1} - \bI_p) \\
&= (\bI_p - \bM_\bxi)^{-1} \var(\bX_0) - \var(\bX_0) + \var(\bX_0) (\bI_p - \bM_\bxi^\top)^{-1} ,
\end{align*}
and we conclude \eqref{help6}.
This implies the convergence \eqref{help1_double_aggregation}.
\proofend
\noindent{\bf Proof of Theorem \ref{simulataneous_aggregation}.}
As \ $n$ \ and \ $ N$ \ converge to infinity simultaneously, \eqref{help3_simulataneous_aggregation} is equivalent
to \ $(nN_n)^{-\frac{1}{2}} \bS^{(N_n,n)} \distr (\bI_p-\bM_\bxi )^{-1} \, \bB$ \ as \ $n \to \infty$ \ for any
sequence \ $(N_n)_{n\in\NN}$ \ of positive integers such that \ $\lim_{n\to\infty} N_n = \infty$.
\ As we have seen in the proof of Proposition \ref{simple_aggregation2}, for each \ $j \in \NN$,
\[
\bU_k^{(j)}
:= \bX_k^{(j)} - \EE(\bX_k^{(j)} \mid \cF_{k-1}^{\bX^{(j)}})
= \bX_k^{(j)} - \bM_\bxi \bX_{k-1}^{(j)} - \bm_{\bvare} ,
\qquad k \in \NN ,
\]
are martingale differences with respect to the filtration \ $(\cF_k^{\bX^{(j)}})_{k\in\ZZ_+}$.
\ We are going to apply the functional martingale central limit theorem, see, e.g., Jacod and Shiryaev
\cite[Theorem VIII.3.33]{JacShi}, for the triangular array consisting of the random vectors
\[
(\bV_k^{(n)})_{k\in\NN}
:= (nN_n)^{-\frac{1}{2}}
\bigl(\bU_1^{(1)}, \ldots, \bU_1^{(N_n)}, \bU_2^{(1)}, \ldots, \bU_2^{(N_n)} ,
\bU_3^{(1)}, \ldots, \bU_3^{(N_n)}, \ldots\bigr)
\]
in the \ $n^\mathrm{th}$ \ row for each \ $n \in \NN$ \ with the filtration \ $(\cF_k^{(n)})_{k\in\ZZ_+}$ \ given
by \ $\cF_k^{(n)} := \cF_k^{\bY^{(n)}} = \sigma(\bY_0^{(n)}, \ldots, \bY_k^{(n)})$, \ where
\[
(\bY_k^{(n)})_{k\in\ZZ_+}
:= \bigl((\bX_0^{(1)}, \ldots, \bX_0^{(N_n)}), \bX_1^{(1)}, \ldots, \bX_1^{(N_n)},
\bX_2^{(1)}, \ldots, \bX_2^{(N_n)}, \ldots\bigr) .
\]
Hence \ $\cF_0^{(n)} = \sigma(\bX_0^{(1)}, \ldots, \bX_0^{(N_n)})$, \ and for each \ $k = \ell N_n + r$ \ with
\ $\ell \in \ZZ_+$ \ and \ $r \in \{1, \ldots, N_n\}$, \ we have
\[
\cF_k^{(n)} = \sigma\bigl(\bigl(\cup_{j=1}^r \cF_{\ell+1}^{\bX^{(j)}}\bigr)
\cup \bigl(\cup_{j=r+1}^{N_n} \cF_\ell^{\bX^{(j)}}\bigr)\bigr) ,
\]
where \ $\cup_{j=N_n+1}^{N_n} := \emptyset$.
\ Moreover, \ $\bY_0^{(n)} = (\bX_0^{(1)}, \ldots, \bX_0^{(N_n)})$, \ and for \ $k = \ell N_n + r$ \ with
\ $\ell \in \ZZ_+$ \ and \ $r \in \{1, \ldots, N_n\}$, \ we have \ $\bY_k^{(n)} = \bX_{\ell+1}^{(r)}$ \ and
\ $\bV_k^{(n)} = (nN_n)^{-\frac{1}{2}} \bU_{\ell+1}^{(r)}$.
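For instance, in the (illustrative) case \ $N_n = 2$, \ the \ $n^\mathrm{th}$ \ row of the triangular array
interleaves the copies as
\[
 \bV_1^{(n)} = (nN_n)^{-\frac{1}{2}} \bU_1^{(1)} , \qquad
 \bV_2^{(n)} = (nN_n)^{-\frac{1}{2}} \bU_1^{(2)} , \qquad
 \bV_3^{(n)} = (nN_n)^{-\frac{1}{2}} \bU_2^{(1)} , \qquad
 \bV_4^{(n)} = (nN_n)^{-\frac{1}{2}} \bU_2^{(2)} , \qquad \ldots
\]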
Next we check that for each \ $n \in \NN$, \ $(\bV_k^{(n)})_{k\in\NN}$ \ is a sequence of martingale differences
with respect to \ $(\cF_k^{(n)})_{k\in\ZZ_+}$.
\ We will use that \ $\EE(\bxi \mid \sigma(\cG_1 \cup \cG_2)) = \EE(\bxi \mid \cG_1)$ \ for a random vector
\ $\bxi$ \ and for \ $\sigma$-algebras \ $\cG_1 \subset \cF$ \ and \ $\cG_2 \subset \cF$ \ such that
\ $\sigma(\sigma(\bxi) \cup \cG_1)$ \ and \ $\cG_2$ \ are independent and \ $\EE(\|\bxi\|) < \infty$.
\ For each \ $k = \ell N_n + 1$ \ with \ $\ell \in \ZZ_+$, \ we have
\ $\EE(\bV_k^{(n)} \mid \cF_{k-1}^{(n)}) = (nN_n)^{-\frac{1}{2}} \EE(\bU_{\ell+1}^{(1)} \mid \cF_\ell^{\bX^{(1)}})
= \bzero$, \ since
\[
\EE(\bU_{\ell+1}^{(1)} \mid \cF_{k-1}^{(n)})
= \EE(\bU_{\ell+1}^{(1)} \mid \sigma(\cup_{j=1}^{N_n} \cF_\ell^{\bX^{(j)}}))
= \EE(\bU_{\ell+1}^{(1)} \mid \cF_\ell^{\bX^{(1)}})
= \bzero .
\]
In a similar way, for each \ $k = \ell N_n + r$ \ with \ $\ell \in \ZZ_+$ \ and \ $r \in \{2, \ldots, N_n\}$, \ we
have
\ $\EE(\bV_k^{(n)} \mid \cF_{k-1}^{(n)})
= (nN_n)^{-\frac{1}{2}} \EE(\bU_{\ell+1}^{(r)} \mid \cF_\ell^{\bX^{(r)}}) = \bzero$,
\ since
\[
\EE(\bU_{\ell+1}^{(r)} \mid \cF_{k-1}^{(n)})
= \EE(\bU_{\ell+1}^{(r)}
\mid \sigma((\cup_{j=1}^{r-1} \cF_{\ell+1}^{\bX^{(j)}}) \cup (\cup_{j=r}^{N_n} \cF_\ell^{\bX^{(j)}}))) \\
= \EE(\bU_{\ell+1}^{(r)} \mid \cF_\ell^{\bX^{(r)}})
= \bzero .
\]
We want to obtain a functional central limit theorem for the sequence
\[
\Biggl(\sum_{k=1}^{\lfloor nt \rfloor N_n} \bV_k^{(n)}\Biggr)_{t\in\RR_+}
= \biggl(\frac{1}{\sqrt{nN_n}} \sum_{\ell=1}^\nt \sum_{r=1}^{N_n} \bU_\ell^{(r)}\biggr)_{t\in\RR_+} , \qquad
n \in \NN .
\]
First, we calculate the conditional variance matrix of \ $\bV_k^{(n)}$.
\ If \ $k = \ell N_n + 1$ \ with \ $\ell \in \ZZ_+$, \ then
\begin{align*}
\EE(\bV_k^{(n)} (\bV_k^{(n)})^\top \mid \cF_{k-1}^{(n)})
&= (nN_n)^{-1}
\EE(\bU_{\ell+1}^{(1)} (\bU_{\ell+1}^{(1)})^\top \mid \sigma(\cup_{j=1}^{N_n} \cF_\ell^{\bX^{(j)}})) \\
&= (nN_n)^{-1} \EE(\bU_{\ell+1}^{(1)} (\bU_{\ell+1}^{(1)})^\top \mid \cF_\ell^{\bX^{(1)}}) .
\end{align*}
In a similar way, if \ $k = \ell N_n + r$ \ with \ $\ell \in \ZZ_+$ \ and \ $r \in \{2, \ldots, N_n\}$, \ then
\begin{align*}
\EE(\bV_k^{(n)} (\bV_k^{(n)})^\top \mid \cF_{k-1}^{(n)})
&= (nN_n)^{-1}
\EE(\bU_{\ell+1}^{(r)} (\bU_{\ell+1}^{(r)})^\top
\mid \sigma((\cup_{j=1}^{r-1} \cF_{\ell+1}^{\bX^{(j)}}) \cup (\cup_{j=r}^{N_n} \cF_\ell^{\bX^{(j)}}))) \\
&= (nN_n)^{-1} \EE(\bU_{\ell+1}^{(r)} (\bU_{\ell+1}^{(r)})^\top \mid \cF_\ell^{\bX^{(r)}}) .
\end{align*}
Consequently, for each \ $n \in \NN$ \ and \ $t \in \RR_+$, \ we have
\begin{align*}
\sum_{k=1}^{\nt N_n} \EE(\bV_k^{(n)} (\bV_k^{(n)})^\top \mid \cF_{k-1}^{(n)})
&= \sum_{\ell=1}^\nt \sum_{r=1}^{N_n}
\EE(\bV_{(\ell-1)N_n+r}^{(n)} (\bV_{(\ell-1)N_n+r}^{(n)})^\top \mid \cF_{(\ell-1)N_n+r-1}^{(n)}) \\
&= \frac{1}{nN_n}
\sum_{\ell=1}^\nt \sum_{r=1}^{N_n}
\EE(\bU_\ell^{(r)} (\bU_\ell^{(r)})^\top \mid \cF_{\ell-1}^{\bX^{(r)}}) .
\end{align*}
Next, we show that for each \ $t \in \RR_+$ \ and \ $i, j \in \{1,\ldots,p\}$, \ we have
\[
\frac{1}{nN_n}
\sum_{\ell=1}^\nt \sum_{r=1}^{N_n} \EE(U^{(r)}_{\ell,i}U^{(r)}_{\ell,j} \mid \cF^{\bX^{(r)}}_{\ell-1})
= \frac{1}{nN_n}
\sum_{\ell=1}^\nt \sum_{r=1}^{N_n}
\bv_{(i,j)}^\top
\begin{bmatrix}
\bX_{\ell-1}^{(r)} \\
1
\end{bmatrix}
\stoch
\bv_{(i,j)}^\top
\begin{bmatrix}
\EE(\bX_0) \\
1
\end{bmatrix}
t
= V_{i,j} t
\]
as \ $n \to \infty$.
\ Indeed, the equality follows by \eqref{help4}, and for the convergence in probability, note that
\ $\lim_{n\to\infty} \frac{\nt}{n} = t$, \ $t \in \RR_+$, \ and, by the Cauchy--Schwarz inequality,
\begin{align*}
&\EE\left(\left(\frac{1}{\nt N_n}
\sum_{\ell=1}^{\nt} \sum_{r=1}^{N_n} \bv_{(i,j)}^\top
\begin{bmatrix} \bX_{\ell-1}^{(r)} - \EE(\bX_0) \\ 0 \end{bmatrix}\right)^2\right) \\
&= \frac{1}{\nt^2 N_n^2}
\EE\left(\left(\bv_{(i,j)}^\top
\sum_{\ell_1=1}^\nt \sum_{r_1=1}^{N_n}
\begin{bmatrix} \bX_{\ell_1-1}^{(r_1)} - \EE(\bX_0) \\ 0 \end{bmatrix}\right)
\left(\sum_{\ell_2=1}^\nt \sum_{r_2=1}^{N_n}
\begin{bmatrix} \bX_{\ell_2-1}^{(r_2)} - \EE(\bX_0) \\ 0 \end{bmatrix}^\top
\bv_{(i,j)}\right)\right) \\
&= \frac{1}{\nt^2 N_n^2} \bv_{(i,j)}^\top
\sum_{\ell_1=1}^\nt \sum_{\ell_2=1}^\nt \sum_{r_1=1}^{N_n} \sum_{r_2=1}^{N_n}
\begin{bmatrix}
\EE((\bX_{\ell_1-1}^{(r_1)} - \EE(\bX_0)) (\bX_{\ell_2-1}^{(r_2)} - \EE(\bX_0))^\top) & \bzero \\
\bzero & 0 \\
\end{bmatrix}
\bv_{(i,j)} \\
&= \frac{1}{\nt^2 N_n} \bv_{(i,j)}^\top
\sum_{\ell_1=1}^\nt \sum_{\ell_2=1}^\nt
\begin{bmatrix}
\EE((\bX_{\ell_1-1} - \EE(\bX_0))(\bX_{\ell_2-1} - \EE(\bX_0))^\top) & \bzero \\
\bzero & 0 \\
\end{bmatrix}
\bv_{(i,j)} \\
&\leq \frac{1}{\nt^2 N_n} \|\bv_{(i,j)}\|^2
\sum_{\ell_1=1}^\nt \sum_{\ell_2=1}^\nt
\EE\big(\|(\bX_{\ell_1-1} - \EE(\bX_0)) (\bX_{\ell_2-1} - \EE(\bX_0))^\top\|\big) \\
&\leq \frac{1}{\nt^2 N_n} \|\bv_{(i,j)}\|^2
\sum_{\ell_1=1}^\nt \sum_{\ell_2=1}^\nt \sum_{m_1=1}^p \sum_{m_2=1}^p
\EE(|(X_{\ell_1-1,m_1} - \EE(X_{0,m_1})) (X_{\ell_2-1,m_2} - \EE(X_{0,m_2}))|) \\
&\leq \frac{1}{\nt^2 N_n} \|\bv_{(i,j)}\|^2
\sum_{\ell_1=1}^\nt \sum_{\ell_2=1}^\nt
\sum_{m_1=1}^{p}\sum_{m_2=1}^p
\sqrt{\var(X_{\ell_1-1,m_1}) \var(X_{\ell_2-1,m_2})} \\
&= \frac{1}{N_n} \|\bv_{(i,j)}\|^2
\sum_{m_1=1}^p \sum_{m_2=1}^p \sqrt{\var(X_{0,m_1}) \var(X_{0,m_2})}
\to 0 \qquad \text{as \ $n \to \infty$,}
\end{align*}
where we used that \ $\|\bQ\| \leq \sum_{i=1}^p \sum_{j=1}^p |q_{i,j}|$ \ for every matrix
\ $\bQ = (q_{i,j})_{i,j=1}^p \in \RR^{p\times p}$.
Moreover, in a similar way, the conditional Lindeberg condition holds, namely, for all \ $\delta > 0$,
\begin{align*}
\sum_{k=1}^{\nt N_n} \EE(\|\bV_k^{(n)}\|^2 \bbone_{\{\|\bV_k^{(n)}\|>\delta\}} \mid \cF_{k-1}^{(n)})
&= \frac{1}{nN_n}
\sum_{\ell=1}^\nt \sum_{r=1}^{N_n}
\EE(\|\bU_\ell^{(r)}\|^2 \bbone_{\{\|\bU_\ell^{(r)}\|>\delta\sqrt{nN_n}\}} \mid \cF^{\bX^{(r)}}_{\ell-1}) \\
&\leq \frac{1}{\delta n^{3/2} N_n^{1/2}}
\sum_{\ell=1}^\nt \EE(\|\bU_\ell^{(1)}\|^3 \mid \cF^{\bX^{(1)}}_{\ell-1})
\as 0 \qquad\text{as \ $n \to \infty$,}
\end{align*}
where the almost sure convergence follows by \eqref{help9}.
Hence we obtain
\[
\biggl(\frac{1}{\sqrt{nN_n}} \sum_{\ell=1}^\nt \sum_{r=1}^{N_n} \bU_\ell^{(r)}\biggr)_{t\in\RR_+}
= \Biggl(\sum_{k=1}^{\lfloor nt \rfloor N_n} \bV_k^{(n)}\Biggr)_{t\in\RR_+}
\distr \bB \qquad
\text{as \ $n \to \infty$,}
\]
where \ $\bB = (\bB_t)_{t\in\RR_+}$ \ is a \ $p$-dimensional zero mean Brownian motion satisfying
\ $\var(\bB_1) = \bV$.
\ Using \eqref{help7}, for each \ $n \in \NN$ \ and \ $t \in \RR_+$, \ we have
\begin{align*}
&\frac{1}{\sqrt{nN_n}}
\sum_{\ell=1}^\nt \sum_{r=1}^{N_n} (\bX_\ell^{(r)} - \EE(\bX_\ell^{(r)})) \\
&= \frac{1}{\sqrt{n}}
\Bigg[(\bI_p - \bM_\bxi )^{-1} (\bM_\bxi - \bM_\bxi ^{\lfloor nt \rfloor+1})
\frac{1}{\sqrt{N_n}} \sum_{r=1}^{N_n} (\bX_0^{(r)} - \EE(\bX_0^{(r)}))\Bigg] \\
&\quad
- \frac{1}{\sqrt{n}}
\Bigg[(\bI_p - \bM_\bxi)^{-1}
\sum_{m=1}^\nt
\bM_\bxi ^{\lfloor nt \rfloor-m+1} \frac{1}{\sqrt{N_n}} \sum_{r=1}^{N_n} \bU_m^{(r)}\Bigg]
+ (\bI_p - \bM_\bxi)^{-1} \frac{1}{\sqrt{nN_n}}
\sum_{m=1}^\nt \sum_{r=1}^{N_n} \bU_m^{(r)} ,
\end{align*}
implying the statement using Slutsky's lemma, since \ $\varrho(\bM_\bxi) < 1$.
\ Indeed, \ $\lim_{n\to\infty} \bM_\bxi ^{\lfloor nt \rfloor+1} = \bzero$ \ by \eqref{Gelfand}, thus
\[
\lim_{n\to\infty} (\bI_p-\bM_\bxi )^{-1} (\bM_\bxi - \bM_\bxi ^{\lfloor nt \rfloor+1})
= (\bI_p - \bM_\bxi )^{-1} \bM_\bxi ,
\]
and, by Proposition \ref{simple_aggregation},
\[
\frac{1}{\sqrt{N_n}} \sum_{r=1}^{N_n} (\bX_0^{(r)} - \EE(\bX_0^{(r)}))
\distr \cN_p(\bzero, \var(\bX_0))
\qquad \text{as \ $n\to\infty$,}
\]
where \ $\cN_p(\bzero, \var(\bX_0))$ \ denotes a \ $p$-dimensional normal distribution with zero mean and with
covariance matrix \ $\var(\bX_0)$, \ and then Slutsky's lemma yields that
\[
\frac{1}{\sqrt{n}}
\Bigg[(\bI_p - \bM_\bxi )^{-1} (\bM_\bxi -\bM_\bxi ^{\lfloor nt \rfloor+1})
\frac{1}{\sqrt{N_n}} \sum_{r=1}^{N_n} (\bX_0^{(r)} - \EE(\bX_0^{(r)}))\Bigg]
\stoch \bzero \qquad \text{as \ $n \to \infty$.}
\]
Further,
\begin{align*}
&\EE\biggl(\biggl\|\frac{1}{\sqrt{n}}
\sum_{m=1}^\nt
\bM_\bxi^{\lfloor nt \rfloor-m+1} \frac{1}{\sqrt{N_n}}
\sum_{r=1}^{N_n}\bU_m^{(r)}\biggr\|\biggr)
\leq \frac{1}{\sqrt{n}}
\sum_{m=1}^\nt
\EE\biggl(\biggl\|\bM_\bxi^{\lfloor nt \rfloor-m+1} \frac{1}{\sqrt{N_n}}
\sum_{r=1}^{N_n} \bU_m^{(r)}\biggr\|\biggr) \\
&\leq \frac{1}{\sqrt{n}}
\sum_{m=1}^\nt
\|\bM_\bxi^{\lfloor nt \rfloor-m+1}\| \EE\biggl(\biggl\|\frac{1}{\sqrt{N_n}}
\sum_{r=1}^{N_n} \bU_m^{(r)}\biggr\|\bigg) \\
&\leq \frac{1}{\sqrt{n}}
\sum_{m=1}^\nt
\|\bM_\bxi^{\lfloor nt \rfloor-m+1}\|
\sum_{j=1}^p
\EE\biggl(\biggl|\frac{1}{\sqrt{N_n}} \sum_{r=1}^{N_n} U^{(r)}_{m,j} \biggr|\biggr)
\end{align*}
\begin{align*}
&\leq \frac{1}{\sqrt{n}}
\sum_{m=1}^\nt
\|\bM_\bxi^{\lfloor nt \rfloor-m+1}\|
\sum_{j=1}^p
\sqrt{\EE\biggl(\biggl(\frac{1}{\sqrt{N_n}} \sum_{r=1}^{N_n} U^{(r)}_{m,j}\biggr)^2\biggr)} \\
&= \frac{1}{\sqrt{n}}
\sum_{m=1}^\nt \|\bM_\bxi^{\lfloor nt \rfloor-m+1}\| \sum_{j=1}^p \sqrt{\EE((U^{(1)}_{m,j})^2)} \\
&\leq \frac{1}{\sqrt{n}}
\sum_{m=1}^\nt
\|\bM_\bxi^{\lfloor nt \rfloor-m+1}\|
\sum_{j=1}^p \sqrt{V_{j,j}}
\to 0 \qquad \text{as \ $n \to \infty$,}
\end{align*}
by \eqref{Gelfand2}, where for the last inequality we used \eqref{help8}. Consequently, the second term in the
decomposition above converges to \ $\bzero$ \ in \ $L_1$, \ and hence in probability, as \ $n \to \infty$.
This completes the proof.
\proofend
\section*{Acknowledgements}
We would like to thank the referees for their comments that helped us improve the paper.
\bibliographystyle{plain}
In this paper, we present an empirical law, with theoretical justification, linking the number of learning iterations to the mini-batch size. From this result, we derive a principled methodology for selecting mini-batch size w.r.t. \emph{training performance}\footnote{Any connections with testing/generalization performance are left for future work.} for data-parallel machine learning.
This methodology saves training time and provides both intuition and a principled approach for optimizing machine learning algorithms and machine learning hardware system design.
Further, we use our methodology to show that focusing on weak scaling can lead to suboptimal training times: because weak scaling neglects the dependence of
convergence time on the mini-batch size, it does not always minimize the total training time.
All of these results derive from a novel insight presented in this paper, that
understanding the average algorithmic behavior of learning, decoupled
from hardware implementation details, can lead to deep insights into machine learning.
Our results have direct relevance to ongoing research aimed at accelerating training time.
For example, significant research effort has been devoted to
accelerating mini-batch SGD \cite{dekel2012optimal,keskar2016large,li2014efficient}, primarily focused on
faster hardware \cite{coates2013deep,krizhevsky2014one,tan2011fast,chetlur2014cudnn}, parallelization using multiple learners
\cite{cho2017powerai,goyal2017accurate,dean2012large}, and improved algorithms and system designs for
efficient communication (\emph{e.g.}, parameter servers, efficient passing of
update vectors \cite{goyal2017accurate,watcharapichat2016ako,lian2015asynchronous,recht2011hogwild,seide2014parallelizability,zhang2015deep,li2014scaling}, \emph{etc.}).
To assess the impact of these acceleration methods, published research typically evaluates
parallel improvements based on the time to complete an epoch for a fixed
mini-batch size, what is commonly known as ``weak'' scaling \cite{cho2017powerai,goyal2017accurate,watcharapichat2016ako,recht2011hogwild}.
The next section explains how the insight of decoupling algorithmic and implementation details is used to develop a close-form model for learning convergence time. We then derive implications for hardware system design. These implications are followed by sections with experimental support and theoretical justification for the empirical law connecting learning iterations with mini-batch size. We close with a discussion section.
\section{Modeling SGD Convergence Time\label{sec:Decompose}}
Given a learning problem represented by a data set, SGD as the learning
algorithm, and a learning model topology, we define the learning time,
$T_{\rm{Conv}}$, to be the average total time required for SGD to converge to a
specified achievable training error threshold.
The average is over all possible sources of randomness in the
process, including random initializations of the model, ``noise'' from the SGD
updates, noise in the system hardware, etc.
Focusing on the average
learning behavior allows us to identify fundamental properties of the
learning process. In particular, we can write the \emph{analytical complexity} as the product of \emph{iteration complexity}, $N_{\rm{Update}}$, and the \emph{per iteration computational complexity}, $T_{\rm{Update}}$.
\begin{equation}
\label{eqn:Decompose}
T_{\rm{Conv}}=N_{\rm{Update}} \cdot T_{\rm{Update}}
\end{equation}
In other words, $N_{\rm{Update}}$ is the average number of updates
required to converge, and $T_{\rm{Update}}$ is the average
time to compute and communicate one update.
This decomposition of $T_{\rm{Conv}}$ into an implementation-dependent and implementation-independent components is useful because it helps decouple the tasks of understanding how
implementation and algorithmic choices impact learning time, and allows us to
understand algorithmic choices, independent of system design choices.
\subsection{Modeling Average Convergence Time\label{sec:ModelingT}}
To analyze SGD convergence, we model $N_{\rm{Update}}$ as a function of the
mini-batch size, $M$, and model $T_{\rm{Update}}$ as a function of $M$ and the number of parallel
learners, $P$.
All other hyperparameters are held constant.
For simplicity, we refer to ``learner'' parallelism when multiple learners share the task of learning a single model.
Each learner is mapped to a compute element from a suitable level of parallelism, \emph{e.g.}, a server, a CPU, a GPU, \emph{etc. }
In general, the level of parallelism
selected will have implications for the software
implementation, communication requirements, and system performance.
However, our analysis below is independent of these details.
\subsection{Modeling $N_{\rm{Update}}(M)$}
Since $N_{\rm{Update}}$ is independent of the hardware, it
is independent of the number of compute elements used, and therefore
depends only on the mini-batch size, $M$.
Even with this simplification, measuring $N_{\rm{Update}}$ from hardware is generally
impractical due to the computational expense of running SGD to
convergence for all values of $M$. Fortunately, there is an easier
way: We have discovered a robust empirical inverse relationship between
$N_{\rm{Update}}$ and $M$ given by
\begin{equation}
\label{eqn:Inverse}
N_{\rm{Update}} = N_{\infty} + \frac{\alpha}{M}
\end{equation}
where $N_{\infty}$ and $\alpha$ are empirical parameters depending
on the data, model topology and learning algorithm used.
This inverse relationship captures the intuitively obvious result that
even if we compute exact gradients, \emph{i.e.}, even when $M$ equals all
of the data in a given data set, gradient descent still requires a
non-zero number of steps to converge.
Furthermore, the Central Limit Theorem tells us that the variance of the
SGD gradient is inversely proportional to $M$, for large $M$.
Thus, $N_{\rm{Update}}$ increases approximately linearly
with the SGD gradient variance, and $\alpha$ can be thought of as the
algorithm's sensitivity to noise in the gradient. We define $\alpha$ to be the ``noise sensitivity'' of the algorithm.
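As a concrete illustration, both parameters can be recovered from a handful of measurements by an ordinary least-squares fit in $1/M$. The sketch below is a minimal example with hypothetical numbers; it is not drawn from our experiments.

\begin{verbatim}
import numpy as np

# Hypothetical measurements: mini-batch sizes and updates-to-convergence.
M = np.array([16.0, 64.0, 256.0, 1024.0])
N = np.array([52000.0, 14500.0, 5200.0, 2900.0])

# N_Update(M) = N_inf + alpha / M is linear in 1/M, so ordinary least
# squares on x = 1/M recovers both parameters.
X = np.column_stack([np.ones_like(M), 1.0 / M])
(n_inf, alpha), *_ = np.linalg.lstsq(X, N, rcond=None)

print("N_inf ~", round(float(n_inf)))
print("alpha ~", round(float(alpha)))
print("predicted N_Update at M = 512:", n_inf + alpha / 512)
\end{verbatim}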
\subsection{Novelty of this Result}
Numerous papers have addressed the behavior and benefits of mini-batch training \cite{cotter2011better,jain2017parallelizing,bottou2018optimization}.
These papers frequently derive training error bounds in terms of the number of iterations, and/or the mini-batch size, often exhibiting $1/M$ and $1/N$ related behavior.
Although these results superficially look similar to Eqn.~\ref{eqn:Inverse}, they
do not address the functional relationship between the number of iterations and $M$.
Our work is the first to demonstrate this relationship empirically, on a wide variety of learning problems, in a much simpler way.
Furthermore, for some of the results in \cite{cotter2011better,jain2017parallelizing,bottou2018optimization}, it is possible to go beyond what the original authors intended by inverting their results to find relationships between number of iterations and $M$; however, without empirical evidence, one does not know how tight the bounds are in practice, and even if one assumes they are tight, the resulting inversion leads to relationships which are not the same as Eqn.~\ref{eqn:Inverse}.
To make this explicit with an example, consider the second equation on P.~31 of \cite{bottou2018optimization} which, with simplified notation and constants $A$ and $B$,
can be written as
$$E_N \leq A/M + (1-B/M)^{N-1}(E_1-A/M)$$
If we replace $E_N$ with a target error threshold, $\epsilon$, this equation implies the following inequality:
$$N\geq 1+\frac{M}{M-B}\ln\left(\frac{\epsilon M-A}{E_1M-A}\right)$$
which is clearly different from Eqn.~\ref{eqn:Inverse}.
One can perform similar analyses on the other published results to demonstrate that they are not equivalent to Eqn.~\ref{eqn:Inverse}.
\subsection{Modeling $\mathbf{T_{\rm{Update}}(M,P)}$}
Measuring $T_{\rm{Update}}$ is comparatively
straightforward: One need only run enough iterations of SGD for a single learner to estimate the average
time to perform an update for a specified mini-batch size. This process
is possible because $T_{\rm{Update}}(M,P)$ is approximately constant throughout SGD learning.
This approach can be used to compare differences between specific types of
hardware, software implementations, \emph{etc.}
One can then use the measured
$T_{\rm{Update}}$ to fit an analytic model which, together with $N_{\rm{Update}}$, models $T_{\rm{Conv}}(M,P)$.
To analyze the generic behavior, we model
\begin{equation}
T_{\rm{Update}}(M,P) = \Gamma(M) + \Delta(P)
\end{equation}
where $\Gamma(M)$ is the average time to compute an SGD update using
$M$ samples, and $\Delta(P)$ is the average time to communicate
gradient updates between $P$ learners.\footnote{Without loss of generality, we subsume any communication time internal to a single learner (\emph{e.g.}, the time to read/write data from memory) into the computation time.}
If some of the communication between learners
can occur during computation, then $\Delta(P)$ represents the
portion of communication that is not overlapping with computation.\footnote{An efficient SGD system will
attempt to overlap computation and
communication. For example, in backpropagation, gradient updates for all but the
input layer can in principle be transferred during the calculation of updates for
subsequent layers. In such systems, the communication time,
$\Delta(P)$, is understood to mean the portion that does
not overlap with computation time.}
Since computation and communication are handled by separate hardware, it
is a good approximation to assume that they can be decoupled in this
way.
Since the computation underlying $\Gamma(M)$ typically involves the same amount of work
for each data sample, one might expect a linear relationship,
$\Gamma(M) = \gamma M$, for some constant,
$\gamma$. However, in
practice, hardware and software implementation inefficiencies lead to a
point where reducing $M$ does not reduce compute time linearly.\footnote{Here we are neglecting the generally insignificant time
required to sum over $M$ data samples.}
This effect can be approximated using
\begin{equation}
\label{eqn:HW}
\Gamma(M) = \gamma\max(M,M_T)
\end{equation}
where $M_T$ is the threshold at which the linear relationship
begins, \emph{i.e.}, the knee in the curve. For example, $M_T$ could be the number of cores per CPU, if
each sample is processed by a different core; or $M_T$ could be 1 if
a single core processes all samples. Ideally, efficient SGD hardware
systems should achieve low $\gamma$ and $M_T$. In practice, an
empirical measurement of this relationship provides more fidelity; but
for the purposes of this paper, this model is sufficient.
For $P = 1$, the communication time is zero, \emph{i.e.}, $\Delta(P)=0$.
For $P > 1$, $\Delta(P)$ depends on various
hardware and software implementation factors. We assume model updates exploit an optimized communication protocol,
such as the Message Passing Interface
(MPI) function \texttt{MPIAllReduce()} \cite{kumar2016optimization} on a high-end compute cluster.
Such systems provide a powerful network switch and an efficient
\texttt{MPIAllReduce()} implementation that delivers near perfect scaling of
\texttt{MPIAllReduce()} bandwidth, and so communication time is approximately constant, \emph{i.e.},
$\Delta(P)=\delta$ for some constant $\delta$ approximately inversely proportional to
the aggregate bandwidth between learners.
For comparison purposes, a synchronous parameter server
has a communication time that grows linearly with $P$, \emph{i.e.}, $\Delta(P) = \delta P$.
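As a minimal sketch, the update-time model above can be written as a small helper function; the all-reduce versus parameter-server distinction follows the assumptions in the text, and all constants below are placeholders for illustration.

\begin{verbatim}
def t_update(M, P, gamma, M_T, delta, topology="allreduce"):
    """Sketch of T_Update(M, P) = Gamma(M/P) + Delta(P).

    Compute: gamma * max(M / P, M_T) per learner (Eqn. HW applied to the
    per-learner share M/P). Communication: zero for a single learner,
    roughly constant for an efficient all-reduce, and linear in P for a
    synchronous parameter server. All names and values are illustrative.
    """
    compute = gamma * max(M / P, M_T)
    if P == 1:
        comm = 0.0
    elif topology == "allreduce":
        comm = delta
    else:  # synchronous parameter server
        comm = delta * P
    return compute + comm

# Example: per-update time for M = 512 split across P = 8 learners.
print(t_update(512, 8, gamma=1e-3, M_T=32, delta=5e-3))
\end{verbatim}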
\subsection{Modeling $\mathbf{T_{\rm{Conv}}(M,P)}$}
Using Eqn.~\ref{eqn:Decompose} to combine our estimates for $N_{\rm{Update}}$,
$T_{\rm{Update}}$ yields the following general
approximation to the total convergence time for SGD running on $P$
parallel learners:
\begin{equation}
\label{eqn:TC}
T_{\rm{Conv}}(M,P) = \left(N_{\infty} + \frac{\alpha}{M}\right)\left[\gamma\max\left( \frac{M}{P},M_T \right) + \delta \right]
\end{equation}
We can now use this model to optimize training time and analyze the impact of system design on training time
in numerous ways.
\emph{E.g.}, in the experiments we show how to
select the mini-batch size that minimizes SGD training time.
Note that Eqn.~\ref{eqn:TC} relies on certain assumptions about the hardware
that might not be true in general, \emph{e.g.}, that $\delta$ is a constant.
We have chosen these assumptions to simplify the analysis; but in
practice, one can easily choose a different model for $T_{\rm{Conv}}$, or even measure the
exact form of $T_{\rm{Conv}}$, and still follow through with the analysis below.
One final consideration arises regarding cross-validation (CV) since SGD
training is rarely performed without some form of CV stopping criterion.
We can accommodate the effect of CV in our model by including a CV term,
such that
\begin{equation}
\Gamma(M) = \gamma N\max(M,M_T) + \gamma_{\rm CV}\max(M_{\rm CV},M_T)
\end{equation}
where $N$ is the number of SGD updates per CV calculation and
$M_\text{CV}$ is the number of CV samples to calculate. For
simplicity, we ignore CV in our analysis below, but the analysis follows
the same path as before. Additionally, the calculation of a CV subset
adds virtually no communication, since the parallel learners computing the
CV estimate communicate only a single number when they are done.
\section{Optimal Mini-Batch Size Selection\label{sec:Optimalmini-batch}}
For the single learner case ($P=1$), there is no inter-learner communication cost, so Eqn.~\ref{eqn:TC} yields
\begin{equation}
T_{\rm{Conv}}(M,1) = \left(N_{\infty} + \frac{\alpha}{M}\right)\gamma\max\left(M,M_T \right).
\end{equation}
Optimizing over $M$, we find the optimal mini-batch size to be
\begin{equation}
M_{\rm Opt} = M_T.
\end{equation}
One can easily identify $M_T$ for a given system by timing a few SGD updates per value of $M$ to obtain an estimate of $T_{\rm{Update}}(M)$, and then selecting the knee in the $T_{\rm{Update}}(M)$ curve as an estimate for $M_T$. If the simple model used in Eqn.~\ref{eqn:HW} is not accurate for a given system configuration, the methodology below can be used.
\begingroup
\setlength{\tabcolsep}{2pt}
\begin{table}[ht]
\centering
\begin{tabular}{ll}
\toprule
\multicolumn{2}{l}{\textbf{Methodology:} Optimal Mini-Batch Size Selection}\\
\midrule
1. & For a range of $M$ values: \\ & \quad Measure $T_{\rm{Update}}(M)$ over a few SGD updates. \\
2. & For at least two values of $M$: \\ & \quad Estimate $N_{\rm{Update}}(M)$ by running to convergence. \\
3. & Fit $N_\infty$ and $\alpha$ to estimate $N_{\rm{Update}}(M)$ values.\\
4. & Use $N_{\rm{Update}}(M,N_\infty,\alpha)$, $T_{\rm{Update}}(M)$ to select $M_{\rm Opt}$.\\
\bottomrule
\end{tabular}
\end{table}
\endgroup
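A minimal sketch of step 1 of this methodology, locating the knee of a hypothetical $T_{\rm{Update}}(M)$ sweep, is shown below; the timing numbers and the 20\% tolerance are arbitrary choices for illustration.

\begin{verbatim}
import numpy as np

# Hypothetical timing sweep: average seconds per SGD update per mini-batch size.
Ms    = np.array([1, 2, 4, 8, 16, 32, 64, 128, 256])
t_upd = np.array([0.010, 0.010, 0.010, 0.011, 0.010, 0.011, 0.019, 0.037, 0.074])

# Estimate M_T as the knee: the largest M whose per-update time is still within
# a small tolerance of the small-batch plateau (a simple heuristic choice).
plateau = t_upd[:3].mean()
M_T = int(Ms[t_upd <= 1.2 * plateau].max())
print("estimated M_T (and hence M_Opt for a single learner):", M_T)
\end{verbatim}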
For the multi-learner case ($P\geq 1$), the optimal $M$ is given by
\begin{equation}
\label{eqn:MOpt}
M_{\rm Opt}(P) = \max\left( \sqrt{\frac{\alpha\delta P}{N_\infty\gamma}}~~,~~M_T P\right)
\end{equation}
which demonstrates that linearly increasing the training data with the number of learners (\emph{i.e.}, ``weak scaling'') is not always the optimal choice because $M_TP$ can be less than $\sqrt{\alpha\delta P/N_\infty\gamma}$. Note that the methodology above can also be used to optimize $M$ in the multi-learner case.
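For illustration, the closed-form quantities above can be evaluated directly. In the sketch below the fitted constants are hypothetical placeholders, and, per the text, communication is taken to be zero for $P=1$.

\begin{verbatim}
import math

def n_update(M, n_inf, alpha):
    return n_inf + alpha / M                                   # Eqn. (Inverse)

def m_opt(P, n_inf, alpha, gamma, M_T, delta):
    d = 0.0 if P == 1 else delta                               # no communication for P = 1
    return max(math.sqrt(alpha * d * P / (n_inf * gamma)), M_T * P)   # Eqn. (MOpt)

def t_conv(M, P, n_inf, alpha, gamma, M_T, delta):
    d = 0.0 if P == 1 else delta
    return n_update(M, n_inf, alpha) * (gamma * max(M / P, M_T) + d)  # Eqn. (TC)

# Hypothetical fitted constants, for illustration only.
params = dict(n_inf=3000.0, alpha=8.0e5, gamma=1.0e-3, M_T=32, delta=5.0e-3)
for P in (1, 4, 16, 64):
    M = m_opt(P, **params)
    print(P, round(M), round(t_conv(M, P, **params), 1))
\end{verbatim}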
\input{Sections/DataParallelScaling}
\input{Sections/SystemDesign}
\section{Experimental Results}\label{sec:Results}
We have observed that, to a reasonable approximation, the relationship
\begin{equation}
N_{\rm{Update}} = N_{\infty} + \frac{\alpha}{M}
\end{equation}
persists over a broad range of $M$, and a variety of machine learning
dimensions, including the choice of learning domain (image recognition and machine translation),
data set, model topology, number of
classes, convergence threshold and learning rate. This section describes
the methodology used to assess this relationship and the results obtained.
\begin{table*}[!ht]
\centering
\caption{Image recognition training experiments performed.}
\label{tab:Experiments}
\begin{tabular}{llllll}
\rowcolor{Gray}
\toprule
\textbf{Data Set} & \textbf{Model} & \textbf{\# Parameters} & \textbf{\#
Layers} & \textbf{Learning Rate} & \textbf{$M$ }\tabularnewline
\midrule
MNIST & Small -- LeNet \cite{lecun1995learning} & 43,158 & 4 & 0.01 & 1 - 1024\tabularnewline
& Medium -- LeNet \cite{lecun1995learning} & 169,506 & 4 & 0.01, 0.05 & 1 - 1024\tabularnewline
& Even/Odd -- LeNet \cite{lecun1995learning} & 169,506 & 4 & 0.05 & 1 - 1024\tabularnewline
& Large -- LeNet \cite{lecun1995learning} & 671,802 & 4 & 0.01 & 1 - 1024\tabularnewline
\midrule
CIFAR10 & Small -- LeNet \cite{lecun1995learning} & 487,458 & 4 & 0.05 & 1 - 1024\tabularnewline
& Medium -- VGG \cite{simonyan2014very}& 1,125,090 & 9 & 0.01, 0.05, Adam & 1 -
1024\tabularnewline
& Large -- VGG \cite{simonyan2014very}& 1,862,754 & 11 & 0.025 & 1 - 1024\tabularnewline
& ResNet \cite{he2016deep}& 270,410 & 20 & 0.05 & 1 - 1024\tabularnewline
\midrule
CIFAR100 & Small -- LeNet \cite{lecun1995learning} & 487,458 & 4 & 0.05 & 16 -
1024\tabularnewline
& Medium -- VGG \cite{simonyan2014very}& 1,125,090 & 9 & 0.05 & 16 - 1024\tabularnewline
& Large - VGG \cite{simonyan2014very}& 1,862,754 & 11 & 0.025, 0.05 & 16 - 1024\tabularnewline
\bottomrule
\end{tabular}
\end{table*}
\subsection{Image Recognition Task}
To measure the robustness of our observations for image recognition, we conducted a range of
experiments as described in Table~\ref{tab:Experiments}.
Adam \cite{kingma2014adam} adaptive learning rate was also used for one
of the models. Light regularization was used with a decay constant of
$10^{-4}$ on the $L_2$-norm of the weights. For each model
architecture, we varied the size in terms of width (\emph{i.e.}, parameters per
layer) and depth (\emph{i.e.}, number of layers) to measure the training
behavior across model topologies. In addition, we experimented with the
same model across all three data sets (LeNet). Training was performed
using the Torch library on a single K80 GPU.\footnote{None of our
measurements required data-level parallelism because our decomposition
of $T_{\rm{Conv}}$ allows us to estimate $N_{\rm{Update}}$ and $T_{\rm{Update}}$
separately, and $N_{\rm{Update}}$ is independent of
$P$, the level of data parallelism used.}
Training and cross-validation
losses were recorded after each update for MNIST and after every 100
updates for CIFAR10 and CIFAR100, using two distinct randomly selected
sets of 20\% of the available data. The recorded results were
``scanned'' to find the first update at which the desired training
loss level, $\epsilon$, was achieved, yielding $N_{\rm{Update}}$.
This approach is equivalent to a stopping criterion with no patience.
Each MNIST experiment was averaged over ten runs with different random
initializations to get a clean estimate of
$N_{\rm{Update}}$ as a function of $M$. Averaging was not
used with the other experiments, and as our results show, was not generally
needed.
The results of our experiments in Figure~\ref{fig:ExperimentalResults} show a robust inverse
relationship between $N_{\rm{Update}}$ and $M$ measured
across all the data sets, models, and learning rates we have
considered. The fit lines, from which we estimated $N_{\infty}$ and
$\alpha$, match the observed data closely. Because of the large number of
possible combinations of experiments performed, we only show a
representative subset of the graphs to illustrate the behavior that was
observed in all experiments. This empirical behavior exists for training error,
cross-validation error, varying $\epsilon$, changing the number of
output classes, etc.
\begin{figure*}[!htb]
\centering
\includegraphics[width=6.10449in,trim={0 11cm 0 0},clip]{Figures/ExperimentalResults.png}
\includegraphics[width=6.10449in,trim={0 0 0 5cm},clip]{Figures/ExperimentalResults.png}
\caption{$N_{\rm{Update}}$ as a function of $M$ for a
variety of SGD learning problems for a variety of conditions.
The y-axis represents the training loss. The plots
generally show an inverse relationship between
$N_{\rm{Update}}$ and $M$. Since $N_{\rm{Update}}$ is a random variable, we see noise in
these graphs. Additional averaging over multiple runs removes this
noise.}
\label{fig:ExperimentalResults}
\end{figure*}
These results show that large learning rates (shown as ``lR'' in the
graphs) are associated with small $N_{\infty}$, which is not
unexpected. However, for the experiment with adaptive learning rate
(cifar10\_medium\_adam), $N_{\infty}$ is negative, which is likely the
result of noise in our estimates or a failure of the model for adaptive
learning rates. Further study is needed to understand this. Even
so, this indicates that $N_{\infty}$ is small compared to $\alpha$, and
hence good parallel efficiency is possible.
\subsection{Machine Translation Task}
Our translation system implements the attentional model of translation \cite{DBLP:journals/corr/BahdanauCB14} consisting of an encoder-decoder network with an attention mechanism.
The encoder uses a bidirectional GRU recurrent neural network \cite{DBLP:journals/corr/ChoMBB14} to encode a source sentence ${\bf{x}}=(x_1,...,x_l)$, where $x_i$ is the embedding vector for the $i$th word and $l$ is the sentence length. The encoded form is a sequence of hidden states ${\bf{h}} = (h_1, ..., h_l)$ where each $h_i$ is computed as follows
\begin{equation}
h_i =
\begin{bmatrix}
\overleftarrow{{h}_i} \\
\overrightarrow{{h}_i}
\end{bmatrix}
=
\begin{bmatrix}
\overleftarrow{f}(x_i, \overleftarrow{h}_{i+1}) \\
\overrightarrow{f}(x_i, \overrightarrow{h}_{i-1})
\end{bmatrix},
\end{equation}
where $\overrightarrow{h_0} = \overleftarrow{h_0} = 0$. Here $\overleftarrow{f}$ and $\overrightarrow{f}$ are GRU cells.
Given $\bf{h}$, the decoder predicts the
target translation ${\bf y}$ by computing the output token sequence $(y_1,\ldots,y_m)$, where
$m$ is the length of the sequence.
At each time $t$, the probability of each token
$y_t$ from a target vocabulary is
\begin{equation}
p(y_t \mid {\bf h}, y_{t-1},\ldots,y_1) = g(s_t, y_{t-1}, H_t),
\end{equation}
where
$g$ is a two layer feed-forward network over the embedding of the
previous target word ($y_{t-1}$), the decoder hidden state ($s_t$), and the weighted sum of encoder states
${\bf h}$ ($H_t$), followed by a softmax to predict the
probability distribution over the output vocabulary.
We use a two layer GRU for $s_t$.
The two GRU units together with the attention constitute
the conditional GRU layer of \cite{sennrich-EtAl:2017:EACLDemo}. $H_t$ is computed as
\begin{equation}
H_t =
\begin{bmatrix}
\sum^l_{i=1} (\alpha_{t,i} \cdot \overleftarrow{h}_i) \\
\sum^l_{i=1} (\alpha_{t,i} \cdot \overrightarrow{h}_i)
\end{bmatrix},
\end{equation}
where $\alpha_{t,i}$ are the elements of $\alpha_{t}$ which is the output vector of the attention model.
This is computed with a two layer feed-forward network
\begin{equation}
\alpha'_t = v(\textrm{tanh}(w(h_i) + u(s'_{t-1}))),
\end{equation}
where $w$ and $u$ are weight matrices, and $v$ is another matrix resulting in one real value per encoder state $h_i$. $\alpha_t$ is then the softmax over $\alpha'_t$.
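To make this computation concrete, the following is a minimal PyTorch sketch of the additive attention score $\alpha'_t$ and the context $H_t$; the module name, layer dimensions, and tensor shapes are illustrative assumptions, not a description of our actual implementation.

\begin{verbatim}
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    """Sketch of the two-layer feed-forward attention described above."""
    def __init__(self, enc_dim, dec_dim, attn_dim):
        super().__init__()
        self.w = nn.Linear(enc_dim, attn_dim, bias=False)   # w(h_i)
        self.u = nn.Linear(dec_dim, attn_dim, bias=False)   # u(s'_{t-1})
        self.v = nn.Linear(attn_dim, 1, bias=False)         # one score per state

    def forward(self, h, s_prev):
        # h: (l, enc_dim) encoder states; s_prev: (dec_dim,) previous decoder state.
        scores = self.v(torch.tanh(self.w(h) + self.u(s_prev))).squeeze(-1)  # alpha'_t
        alpha = torch.softmax(scores, dim=0)                                  # alpha_t
        context = (alpha.unsqueeze(-1) * h).sum(dim=0)                        # H_t
        return alpha, context

# Usage sketch with arbitrary dimensions.
attn = AdditiveAttention(enc_dim=512, dec_dim=256, attn_dim=128)
h = torch.randn(7, 512)        # 7 encoder states
s = torch.randn(256)
alpha, H = attn(h, s)
\end{verbatim}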
We train our MT model with the Pytorch framework \cite{paszke2017automatic}. Learning is
performed with the Pytorch implementation of SGD. No adjustment to learning
rates was performed. We used the Europarl \cite{Koehn05} German-English data set for training and newstest2013 data set for testing.
For our experiments, we trained with mini-batch sizes of 100, 200, 400, 800, 1600, 3200, and 6400 words per
mini-batch; a size of 12800 was too large for GPU memory. Each mini-batch is created by
adding sentences until the number of target language words meets or exceeds
the batch size. For each batch size, we ran with learning rates of 0.1, 0.2, 0.5, and 1.0.
Each combination of parameters was run with 5 different random
seeds. The results in Fig.~\ref{fig:MTResults} are again well fit by the $N_{\rm{Update}}=N_\infty+\alpha/M$ model.
\begin{figure}[!htb]
\centering
\includegraphics[width = 2.5in]{Figures/nvm-lr01-fit.pdf}\\
\includegraphics[width = 2.5in]{Figures/nvm-lr02-fit.pdf}\\
\includegraphics[width = 2.5in]{Figures/nvm-lr05-fit.pdf}\\
\includegraphics[width = 2.5in]{Figures/nvm-lr1-fit.pdf}
\caption{MT results are shown for learning rate, LR, equal to 0.1, 0.2, 0.5, and 1.0. For four learning error targets (i.e., MT BLEU scores), the measured $N$ values (y-axis) are plotted against their corresponding $M$ values (x-axis). The solid lines show that this data is well fit by the $N_{\rm{Update}}=N_\infty+\alpha/M$ model. Points are omitted where training did not achieve a specified BLEU score.}
\label{fig:MTResults}
\end{figure}
\section{Theoretical Bound on $N_{\rm{Update}}$ for SGD\label{sec:Theory}}
For completeness, we provide a theoretical analysis of mini-batch SGD
convergence that supports our finding of a robust empirical inverse relation between $N_{\rm{Update}}$ and $M$. We begin by defining the SGD update step as
\begin{equation*}
w^{k + 1} = w^{k} - \eta\left( \nabla f\left( w^{k} \right) + \xi^{k} \right)
\end{equation*}
where $f$ is the function to be optimized, $w^{k}$ is a vector of
neural net weights at the $k^{\rm th}$ update of the SGD algorithm, $\xi^k$
is a zero-mean noise term with variance smaller than $\phi^{2}$, and $\eta$ is
the SGD step size. We assume that $\nabla f$ is Lipschitz continuous
with constant $L$, which implies that
\begin{equation*}
f\left( x \right) \leq f\left( y \right) + \nabla f\left( y \right) \cdot \left( x - y \right) + \frac{L}{2}\ \left| x - y \right|^{2}
\end{equation*}
for some constant $L$.
Standard convex analysis steps, together with
$E[\xi^k] = 0$ and ${\rm Var}[\xi^k] \leq \phi^2$, give
\begin{small}
\begin{equation*}
E_{\xi^k}\hspace{-0.1cm}\left\lbrack f\hspace{-0.1cm}\left(w^{k+1}\right) \right\rbrack \hspace{-0.1cm}\leq f\left(w^{k}\right) - \eta\left(1-\frac{\eta L}{2} \right)\hspace{-0.1cm}\left| \nabla f\left(w^{k}\right)\right|^{2}\ + \eta^{2}\frac{L}{2}\phi^{2}.
\end{equation*}
\end{small}
We define $\Delta_{k}$ to be the residual at
the $k^{\rm th}$ step, \emph{i.e.},
\begin{equation*}
\Delta_{k} \equiv f\left(w^k \right)-f\left(w^{*}\right)
\end{equation*}
where $w^{*}$ is the global minimum of $f$.
Using the residual and assuming convexity, we get
\begin{equation*}
\frac{\Delta_{k}}{|w^{0} - w^{*}|} \leq \frac{\Delta_{k}}{\left| w^{k} - w^{*} \right|} \leq \left| \nabla f\left(w^{k} \right) \right|.
\end{equation*}
Choosing the learning rate $\eta$ such that
\begin{equation*}
\left( 1 - \frac{\eta L}{2} \right) > 0
\end{equation*}
results in
\begin{equation*}
E_{\xi^k}[\Delta_{k+1}] \leq \Delta_{k} - \lambda\Delta_{k}^{2} + \lambda\sigma^{2}
\end{equation*}
where
\begin{equation*}
\lambda \equiv \eta\left( 1 - \frac{\eta L}{2} \right)\ \frac{1}{\left( w^{0} - w^{*} \right)^{2}}\text{~~ and~~ }\sigma^{2} \equiv \frac{\eta^{2}L}{2\lambda}\phi^{2}.
\end{equation*}
Now we take the average over the full history and use the fact that
\begin{equation*}
(E[X])^2\leq E[X^2]
\end{equation*}
to obtain
\begin{equation*}
E[\Delta_{k+1}]\leq E[\Delta_{k}]-\lambda(E[\Delta_k])^2+\lambda\sigma^2.
\end{equation*}
For simplicity, from here forward we omit the expectation sign by writing
$\Delta_k \equiv E_{\xi^1\xi^2\cdots\xi^{k-1}}[\Delta_k]$.
We rearrange this inequality as
\begin{equation*}
(\Delta_{k + 1} - \sigma) \leq (\Delta_{k} - \sigma)(1 - \lambda(\Delta_{k} + \sigma))
\end{equation*}
and observe that $\Delta_{k}$ cannot be smaller than $\sigma$,
because of the constant learning rate and additive noise, which implies
\begin{equation*}
1 - \lambda\left(\Delta_{k} + \sigma \right) \geq 0.
\end{equation*}
By taking the inverse and using the fact that
\begin{equation*}
\frac{1}{1 - x} \geq 1 + x \ \ \ \ {\rm for}\ \ \ \ x \leq 1
\end{equation*}
we obtain
\begin{equation*}
\frac{1}{\Delta_{k + 1} - \sigma} \geq \frac{1 + \lambda\left( \Delta_{k} + \sigma \right)}{\Delta_{k} - \sigma} = \frac{1 + 2\lambda\sigma}{\Delta_{k} - \sigma} + \lambda.
\end{equation*}
Then, telescoping this recurrence inequality results in
\begin{equation*}
\frac{1}{\Delta_{k + 1} - \sigma} + \frac{1}{2\sigma} \geq \left( 1 + 2\lambda\sigma \right)^{k + 1}\left( \frac{1}{\Delta_{0} - \sigma} + \frac{1}{2\sigma} \right).
\end{equation*}
Finally, solving for $\Delta_{k}$, gives
\begin{equation}
\label{eqn:PowerDenom}
\Delta_{k} \leq \frac{1}{\left( 1 + 2\lambda\sigma \right)^{k}\left( \frac{1}{\Delta_{0} - \sigma} + \frac{1}{2\sigma} \right) - \frac{1}{2\sigma}} + \sigma
\end{equation}
and, Taylor expanding for small $\sigma$, the number of updates to reach
$\Delta_{k} \leq \epsilon$ is given by
\begin{align*}
N_{\rm{Update}} &\geq \frac{\log\left\lbrack \frac{\epsilon + \sigma}{\epsilon - \sigma} \right\rbrack + \log\left\lbrack \frac{\Delta_{0} - \sigma}{\Delta_{0} + \sigma} \right\rbrack}{\log\left\lbrack 1 + 2\lambda\sigma \right\rbrack} \\
&\approx \frac{1}{\lambda}\left( \frac{1}{\epsilon} - \frac{1}{\Delta_{0}} \right)\hspace{-0.1cm}\left( 1 + \frac{\sigma^{2}}{3}\left( \frac{1}{\epsilon^{2}} + \frac{1}{\Delta_{0}^{2}} + \frac{1}{\epsilon\Delta_{0}} \right)\hspace{-0.1cm}\right).
\end{align*}
Using the Central Limit Theorem, we observe that
$\sigma^{2} \approx \theta/M$ for some constant $\theta$,
and therefore obtain
\begin{equation*}
N_{\text{Update}} \geq \frac{1}{\lambda}\ \left( \frac{1}{\epsilon} - \frac{1}{\Delta_{0}} \right)\hspace{-0.1cm}\left( 1 + \frac{\theta}{M}\left( \frac{1}{\epsilon^{2}} + \frac{1}{\Delta_{0}^{2}} + \frac{1}{\epsilon\Delta_{0}} \right)\hspace{-0.1cm}\right).
\end{equation*}
The fact that this bound exhibits the same inverse $M$ relationship as
\begin{equation*}
N_{\rm{Update}} = N_{\infty} + \frac{\alpha}{M}
\end{equation*}
reinforces the robustness of our empirical finding.
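As a sanity check of this qualitative behavior, the toy simulation below runs noisy gradient descent on a one-dimensional quadratic, with gradient-noise variance scaling as $1/M$ per the Central Limit Theorem, and counts the updates needed to reach a loss target. All constants are arbitrary choices for illustration; the point is only that the measured counts follow the $N_{\infty} + \alpha/M$ trend.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def updates_to_eps(M, eta=0.05, L=1.0, noise_var=1.0, eps=1e-3,
                   w0=5.0, max_iter=50000):
    # Minimize f(w) = (L/2) w^2 with stochastic gradients of variance noise_var / M.
    w = w0
    for k in range(1, max_iter + 1):
        grad = L * w + rng.normal(0.0, np.sqrt(noise_var / M))
        w -= eta * grad
        if 0.5 * L * w * w <= eps:
            return k
    return max_iter

for M in (1, 4, 16, 64, 256):
    runs = [updates_to_eps(M) for _ in range(20)]
    print(M, round(float(np.mean(runs))))
\end{verbatim}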
\section{Discussion\label{sec:Summary}}
Our model separates algorithmic convergence properties from implementation details. This separation provides machine learning researchers and practitioners a new way of thinking about their algorithms: $N_{\infty}$ provides a lower bound on the number of updates required to converge and fundamentally limits the benefits of parallelization for accelerating machine learning; and $\alpha$ introduces a new concept, that of an algorithm's ``noise sensitivity'', as a key property in the optimization of machine learning. Using these new principles to guide algorithmic design may help researchers develop improved algorithms.
We close with a few observations about challenges and opportunities ahead.
\emph{Noise Sensitivity and Complexity}: Our experiments suggest that as the learning
problem grows in complexity from MNIST to CIFAR10 to CIFAR100, its
sensitivity to noise grows (\emph{i.e.}, $\alpha$ grows). See for example
the medium size model results for fixed learning rate. Thus, the onset
of the $N_{\infty}$ ``floor'' is pushed to larger mini-batch values.
This suggests that the benefit of parallelism may grow as the research
community explores more complex learning challenges. However, this
benefit must be balanced by any related increase in $N_{\infty}$,
which will in general also grow with complexity.
\emph{Improved Learning Algorithms}: The research community should
be encouraged to develop algorithms with lower $N_{\infty}$ as this
will lead to better data-parallel scaling.
\emph{Beyond SGD}:
The core methodology
presented in this paper is not limited to SGD.
Research is required to explore whether other robust
mini-batch (or other) relationships exist for different algorithms,
such as Expectation Maximization \cite{dempster1977maximum}. In this way, the methodology
described in this paper provides a new way of comparing the
parallelization effectiveness of algorithms.
\clearpage
\bibliographystyle{apalike}
\subsection{Comparison to Convergence Rate of Gradient Descent Method}
Note that Eqn.~\ref{eqn:PowerDenom} appears to suggest exponential convergence because of the power of $k$ term in the denominator. A closer
analysis shows that this is not correct. Specifically, in the limit
$\sigma \rightarrow 0$, the well-known $1/k$ convergence rate of
gradient descent is recovered:
\begin{equation}
\Delta_{k} \leq {\frac{2\sigma}{\left( 1 + 2\lambda\sigma k + \cdots \right)\left( \frac{2\sigma}{\Delta_{0} - \sigma} + 1 \right)\ - 1} + \sigma} = \frac{1}{\frac{1}{\Delta_{0}} + \lambda k}.
\end{equation}
Also, one can show that the bound is always bigger than the limit:
\[\text{\ \ }\frac{1}{\left( 1 + 2\lambda\sigma \right)^{k}\left( \frac{1}{\Delta_{0} - \sigma} + \frac{1}{2\sigma} \right)\ - \frac{1}{2\sigma}} + \sigma \geq \frac{1}{\frac{1}{\Delta_{0}} + \lambda k},\]
and thus, the exponential term cannot converge faster than $1/k$. The proof follows from expanding $\left( 1 + 2\lambda\sigma \right)^{k}$
to the first order and simplifying, and using
$\Delta_{0} \geq \sigma.$
\section{Data Parallel Scaling of Parallel SGD\label{sec:Scaling}}
Scaling measures the total time to solution, as a function of the number
of compute nodes.
Traditionally, there are two scaling schemes,
\emph{Strong Scaling} and \emph{Weak Scaling}. We discuss these below
and note that neither is ideal for SGD-based machine learning. We
therefore introduce \emph{Optimal Scaling} and compare the three
approaches.
Our analysis assumes data parallelism, which
leads to
node-level load imbalance (and corresponding inefficiency) when the
mini-batch size is not a multiple of the number of nodes, $P$. For
convenience, the analysis below ignores these effects and is therefore
slightly optimistic.
\subsection{Strong Scaling}
Strong scaling occurs when the problem size remains fixed. This means
that the amount of compute per node decreases as $P$ increases. For
training tasks, this implies that $M$ is fixed, \emph{i.e.}, $M=M_{\rm Strong}$.
In this case, $N_{\rm{Update}}$ does not change, so the training time improves only when $T_{\rm{Update}}$ decreases. Thus, strong scaling stops improving once
$P > M_{\rm Strong}/M_T$, at which point the per-learner compute time reaches its floor of $\gamma M_T$.
\subsection{Weak Scaling}
Weak scaling occurs when the problem size grows proportionately with $P$.
This implies that for training tasks, $M$ grows linearly with $P$
(\emph{i.e.}, $M = mP$) and therefore $N_{\rm{Update}}$
decreases as $P$ increases, while $T_{\rm{Update}}$ remains
constant, for constant $m$. Weak scaling can be optimized by
selecting $m$ appropriately, which leads to the optimal scaling
described below.
\subsection{Optimal Scaling}
The constant $M$ of strong scaling and the linear $M$ of weak
scaling prevent these methods from achieving optimal performance, and
are therefore inappropriate for SGD-based machine learning. We propose
an alternative approach to scaling that, unlike strong and weak scaling,
minimizes $T_{\rm{Conv}}(M,P)$ over $M$ for each value of $P$. This
approach allows better performance than either strong or weak scaling.
If we combine Eqn.~\ref{eqn:TC} with Eqn.~\ref{eqn:MOpt}, we get a closed-form solution for the minimum time to convergence:
\begin{equation}
T_{\rm{Conv}}(P)=
\begin{cases}
\left(\sqrt{\strut\delta N_{\infty}} + \sqrt{\strut\frac{\alpha\gamma}{P}} \right)^2, & P < \frac{\alpha\delta}{\gamma M_T^{2}N_{\infty}}\\
\left(N_{\infty} + \frac{\alpha}{M_{T}P} \right)\left(\delta + \gamma M_T\right), & {\rm otherwise.}
\end{cases}
\end{equation}
Note that for large $P$ (\emph{i.e.}, the second condition above), optimal
scaling is identical to weak scaling. In this
way, optimal scaling naturally defines the per-node mini-batch size for
weak scaling.
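To illustrate the comparison, the sketch below evaluates the $T_{\rm{Conv}}$ model under strong, weak, and optimal scaling for a set of hypothetical constants; the absolute numbers are meaningless, only the relative trends matter.

\begin{verbatim}
def t_conv(M, P, n_inf, alpha, gamma, M_T, delta):
    d = 0.0 if P == 1 else delta
    return (n_inf + alpha / M) * (gamma * max(M / P, M_T) + d)

# Hypothetical constants; M_strong is the fixed batch for strong scaling and
# m_weak the per-learner batch for weak scaling.
n_inf, alpha, gamma, M_T, delta = 3000.0, 8.0e5, 1.0e-3, 32, 5.0e-3
M_strong, m_weak = 256, 32

print(" P   strong     weak  optimal")
for P in (1, 2, 4, 8, 16, 32, 64):
    d = 0.0 if P == 1 else delta
    M_o = max((alpha * d * P / (n_inf * gamma)) ** 0.5, M_T * P)
    row = (t_conv(M_strong, P, n_inf, alpha, gamma, M_T, delta),
           t_conv(m_weak * P, P, n_inf, alpha, gamma, M_T, delta),
           t_conv(M_o, P, n_inf, alpha, gamma, M_T, delta))
    print(f"{P:2d}  " + "  ".join(f"{t:7.1f}" for t in row))
\end{verbatim}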
\subsection{Practical Considerations: Estimating
$\mathbf{N_{\infty}}$ and $\mathbf{\alpha}$\label{sec:Estimation}}
In order to exploit the inverse relationship for efficient system
design, one needs to estimate $\alpha$ and $N_{\infty}$ from an
empirical $N_{\rm{Update}}$ curve, and do so in a
computationally efficient way. This goal can be achieved by evaluating
$N_{\rm{Update}}$ at two values of $M$ and averaging
as needed to remove noise from random initialization, SGD, etc. If the
values of $M$ are chosen strategically, the overhead of measuring
$\alpha$ and $N_{\infty}$ can be reduced. In practice, many experiments
are run as one explores a learning model, so the cost of
estimating $\alpha$ and $N_{\infty}$ is amortized.
One must of course recognize that when significant changes are made to
the learning task (e.g., major topology change, learning rate change,
target loss change, etc.) $\alpha$ and $N_{\infty}$ might need to be
re-estimated. Thus, looking for other ways to further improve efficiency
is an important research direction.
Our theoretical analysis suggests another path forward: Section~\ref{sec:Theory}
suggests that $N_{\infty}$ behaves like a constant plus a $1/\epsilon$ term. Inspired by
that result, we fit $\alpha$ and $N_{\infty}$ for various values of the
training loss, $\epsilon$. From the corresponding plots shown in Figure~\ref{fig:EpsilonDependence}, one
can see that the fits are very good for small $\epsilon$ but grow noisier as $\epsilon$
increases. This might be because stopping early, at a large $\epsilon$,
admits many very different solutions.
\begin{figure}[ht]
\centering
\includegraphics[width=5.99306in,height=3.24792in]{Figures/EpsilonDependence.png}
\caption{$N_{\infty}$ and $\alpha$ with various $\epsilon$ for the
CIFAR10 dataset for a constant learning rate.}
\label{fig:EpsilonDependence}
\end{figure}
We then plotted $\alpha$ and $N_{\infty}$ versus $\epsilon$ in Figure~\ref{fig:NAvsEpsilon}, from
which one can see that both exhibit a $1/\epsilon$ relationship for small $\epsilon$. If
this relation holds in general, then one could estimate $\alpha$ and
$N_{\infty}$ once for a given $\epsilon$ and then use the $1/\epsilon$ relationship to
calculate updated values of $\alpha$ and $N_{\infty}$ for other $\epsilon$. However, these preliminary results need further study.
\begin{figure}[ht]
\centering
\includegraphics[width=2.75774in,height=2.37778in]{Figures/NAvsEpsilon.png}
\caption{$N_{\infty}$ and $\alpha$ versus $\epsilon$ in blue and
red, respectively, and $1/\epsilon$ fits shown as dashed lines. Both
$N_{\infty}$ and $\alpha$ exhibit a $1/\epsilon$ relationship.}
\label{fig:NAvsEpsilon}
\end{figure}
\subsection{Optimal System Hardware Design\label{sec:SystemDesign}}
We now show how optimal scaling can be used to optimize learning system hardware design.
The principle behind optimal system design is to balance the trade-offs between various system parameters so as to optimize some system performance metric, like time to convergence. If we assign a cost for each system component, such as the number of nodes, the size of the communication network, the amount of memory, etc., we can then find the values of these elements that optimize the system metric. This approach can be used to optimize system design for multiple machine learning problems, e.g., determining the correct ratio of compute to communication in a system; or to efficiently allocate resources in a data center running multiple learning problems concurrently, e.g., to decide how many learners to apply to each learning task running in a data center.
To make this explicit, consider optimizing a system for bandwidth and compute only. If we have a fixed amount of money to spend, then the cost, $C$, for each component must be constrained by
\begin{equation}
C_{\rm Compute}(P,\gamma) + C_{\rm Bandwidth}(\delta) = {\rm constant.}
\end{equation}
Combining this constraint with $T_{\rm{Conv}}$ implies that optimal design occurs when the mix of compute and bandwidth satisfies
\begin{equation}
\frac{{\Delta}T_{\rm{Conv}}(M_{\rm Opt}(P),P,\gamma)}{{\Delta}C_{\rm Compute}(P,\gamma)} = \frac{{\Delta}T_{\rm{Conv}}(M_{\rm Opt}(P),P,\gamma)}{{\Delta}C_{\rm Bandwidth}(\delta)}.
\end{equation}
In other words, performance gain per unit price must be balanced at the
optimal design point. Since we have a closed-form solution for $T_{\rm{Conv}}$, we can easily find this optimal point given the various costs. This approach can be generalized to optimization under multiple constraints.
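As an illustration of this balancing argument, the sketch below grid-searches a hypothetical budget split between learners and network bandwidth, rather than solving the marginal-balance condition analytically; the cost models and all constants are invented for the example.

\begin{verbatim}
def t_conv(M, P, n_inf, alpha, gamma, M_T, delta):
    d = 0.0 if P == 1 else delta
    return (n_inf + alpha / M) * (gamma * max(M / P, M_T) + d)

def m_opt(P, n_inf, alpha, gamma, M_T, delta):
    d = 0.0 if P == 1 else delta
    return max((alpha * d * P / (n_inf * gamma)) ** 0.5, M_T * P)

# Hypothetical model constants and cost models (illustrative assumptions only).
n_inf, alpha, gamma, M_T = 3000.0, 8.0e5, 1.0e-3, 32
budget = 100.0
cost_per_learner = 1.0          # C_Compute(P) = P * cost_per_learner
bandwidth_price  = 10.0         # spending c on the network buys delta = bandwidth_price / c

best = None
for P in range(1, int(budget / cost_per_learner)):
    remaining = budget - P * cost_per_learner
    delta = bandwidth_price / remaining      # more network spend -> lower comm. time
    M = m_opt(P, n_inf, alpha, gamma, M_T, delta)
    t = t_conv(M, P, n_inf, alpha, gamma, M_T, delta)
    if best is None or t < best[0]:
        best = (t, P, delta, M)

print("best (T_Conv, P, delta, M_Opt):", best)
\end{verbatim}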
\section{Introduction}
At its best, the study of machine learning is a principled scientific exploration of the properties of learning, coupled with solid engineering to take these ideas from the laboratory to real-world practice.
However, machine learning can sometimes appear to be voodoo
\cite{recht2017reflections}: \emph{e.g.}, hyperparameter tuning is often more of an art than a science, with significant time and effort spent searching for the best parameters, often without sufficient solid evidence nor theoretical underpinning to guide the search.
In this paper, we present an empirical law, with theoretical justification, from which we derive a principled methodology for selecting mini-batch size w.r.t. \emph{training performance}.\footnote{Any connections with testing/generalization performance are left for future work.}
This result provides two primary benefits:
$(i)$ it saves time; and
$(ii)$ it provides a deeper understanding of the relationship between model parameters.
The main contribution of this paper is the idea that
understanding the average algorithmic behavior of learning, decoupled
from implementation details, can lead to deep insight that can be used to
minimize training time and guide algorithmic development.
Our results have direct relevance to on-going research that accelerate training time.
For example, significant research effort has been focused on
accelerating mini-batch SGD \cite{dekel2012optimal}, \cite{keskar2016large} \cite{li2014efficient}, primarily focused on
faster hardware \cite{coates2013deep}, \cite{krizhevsky2014one}, \cite{tan2011fast}, \cite{chetlur2014cudnn}, parallelization using multiple learners
\cite{cho2017powerai}, \cite{goyal2017accurate}, \cite{dean2012large}, and improved algorithms and system designs for
efficient communication (\emph{e.g.}, parameters servers, efficient passing of
update vectors, \emph{etc.}) \cite{goyal2017accurate}, \cite{watcharapichat2016ako}, \cite{lian2015asynchronous}, \cite{recht2011hogwild}, \cite{seide2014parallelizability}, \cite{zhang2015deep}, \cite{li2014scaling}.
To assess the impact of these acceleration methods, published research typically evaluates
parallel improvements based on the time to complete an epoch for a fixed
mini-batch size, what is commonly known as ``weak'' scaling \cite{cho2017powerai}, \cite{goyal2017accurate}, \cite{watcharapichat2016ako}, \cite{recht2011hogwild}.
In this paper, we show that focusing on weak scaling can lead to suboptimal training times because, by neglecting the dependence of
convergence time on the size of the mini-batch used, it is not always minimizing the training time.
We explore the implications of this observation and provide specific guidance on optimal mini-batch size.
\section{Modeling SGD Convergence Time\label{sec:Decompose}}
Given a ML problem represented by a data set, SGD as the learning
algorithm, and a learning model topology, we define the learning time,
$T_{\rm{Conv}}$, to be the average total time required for SGD to converge to a
specified achievable training error threshold.
The average is over all possible sources of randomness in the
process, including random initializations of the model, ``noise'' from the SGD
updates, noise in the system hardware, etc.
Focusing on the average
learning behavior allows us to identify fundamental properties of the
learning process. In particular, we can write the \emph{analytical complexity} as the product of \emph{iteration complexity}, $N_{\rm{Update}}$, and the \emph{per iteration computational complexity}, $T_{\rm{Update}}$.
\begin{equation}
\label{eqn:Decompose}
T_{\rm{Conv}}=N_{\rm{Update}} \cdot T_{\rm{Update}};
\end{equation}
in other words, $N_{\rm{Update}}$ is the average number of updates
required to converge, and $T_{\rm{Update}}$ is the average
time to compute and communicate one update.
This decomposition of $T_{\rm{Conv}}$ into an implementation-dependent and implementation-independent components is useful because it helps decouple the tasks of understanding how
implementation and algorithmic choices impact learning time, and allows us to
understand algorithmic choices independent of system design choices.
\section{Modeling Average Convergence Time\label{sec:ModelingT}}
To analyze SGD convergence, we model $N_{\rm{Update}}$ as a function of the
mini-batch size, $M$, and model $T_{\rm{Update}}$ as a function of $M$ and the number of parallel
learners, $P$.
All other hyperparameters are held constant.
For simplicity, we refer to ``learner'' parallelism when multiple learners share the task of learning a single model.
Each learner is mapped to a compute element from a suitable level of parallelism, \emph{e.g.}, a server, a CPU, a CPU core, a GPU, \emph{etc. }
In general, the level of parallelism
selected will have implications for the software
implementation, communication requirements, and system performance.
However, our analysis below remains largely the same.
\subsection{Modeling $N_{\rm{Update}}(M)$}
Since $N_{\rm{Update}}$ is independent of the hardware, it
is independent of the number of compute elements used, and therefore
depends only on the mini-batch size, $M$.
Even with this simplification, measuring $N_{\rm{Update}}$ from hardware is generally
impractical due to the computational expense of running SGD to
convergence for all values of $M$. Fortunately, there is an easier
way: We have discovered a robust empirical inverse relationship between
$N_{\rm{Update}}$ and $M$ given by
\[N_{\rm{Update}} = N_{\infty} + \frac{\alpha}{M}\]
where $N_{\infty}$ and $\alpha$ are empirical parameters depending
on the data, model topology and learning algorithm used.
This inverse relationship captures the intuitively obvious result that
even if we compute exact gradients, \emph{i.e.}, even when $M$ equals all
of the data in a given data set, gradient descent still requires a
non-zero number of steps to converge.
Furthermore, the Central Limit Theorem tells us that the variance of the
SGD gradient is inversely proportional to $M$, for large $M$.
Thus, $N_{\rm{Update}}$ increases approximately linearly
with the SGD gradient variance, and $\alpha$ can be thought of the
system's sensitivity to noise in the gradient. We define $\alpha$ to be the ``noise sensitivity'' of the algorithm.
\subsection{Modeling $\mathbf{T_{\rm{Update}}(M,P)}$}
Measuring $T_{\rm{Update}}$ is comparatively
straightforward: One need only run enough iterations of SGD for a single learner to estimate the average
time to perform an update for a specified mini-batch size. This process
is possible because $T_{\rm{Update}}(M,P)$ is approximately constant throughout SGD learning; so it need only be
measured once for each $(M,P)$ pair of interest, which is
generally a small computation compared to running full SGD learning to convergence.
This approach can be used to compare differences between specific types of
hardware, software implementations, etc. One can then use the measured
$T_{\rm{Update}}$ to fit an analytic model to be used in
conjunction with $N_{\rm{Update}}$ to model $T_{\rm{Conv}}(M,P)$.
To analyze the generic behavior, we model
\begin{equation}
T_{\rm{Update}}(M,P) = \Gamma(M) + \Delta(P)
\end{equation}
where $\Gamma(M)$ is the average time to compute an SGD update using
$M$ samples, and $\Delta(P)$ is the average time to communicate
gradient updates between $P$ learners.\footnote{Without loss of generality, we subsume any communication time internal to a single learner (\emph{e.g.},the time to read/write data from memory) into the computation time.}
If some of the communication between learners
can occur during computation, then $\Delta(P)$ represents the
portion of communication that is not overlapping with computation.\footnote{An efficient SGD system will
attempt to overlap computation and
communication. For example, in backpropagation, gradient updates for all but the
input layer can in principle be transferred during the calculation of updates for
subsequent layers. In such systems, the communication time,
$\Delta(P)$, is understood to mean the portion that does
not overlap with computation time.}
Since computation and communication are handled by separate hardware, it
is a good approximation to assume that they can be decoupled in this
way.
Since $\Gamma(M)$ typically performs the same amount of computation
for each data sample, one might expect a linear relationship,
$\Gamma(M) = \gamma M$, for some constant,
$\gamma$. Here we are neglecting the generally insignificant time
required to sum over $M$ data samples. However, in
practice, hardware and software implementation inefficiencies lead to a
point where reducing $M$ does not reduce compute time linearly.
This effect can be approximated using
\begin{equation}
\label{eqn:HW}
\Gamma(M) = \gamma\max(M,M_T)
\end{equation}
where $M_T$ is the threshold at which the linear relationship
begins, \emph{i.e.}, the knee in the curve. For example, $M_T$ could be the number of cores per CPU, if
each sample is processed by a different core; or $M_T$ could be 1 if
a single core processes all samples. Ideally, efficient SGD hardware
systems should achieve low $\gamma$ and $M_T$. In practice, an
empirical measurement of this relationship provides more fidelity; but
for the purposes of this paper, this model is sufficient.
When $P = 1$, the communication time is vanished, \emph{i.e.}, $\Delta(P)=0$.
When $P > 1$, $\Delta(P)$ depends on various
hardware and software implementation factors. For optimal performance,
we assume model updates exploit an optimized communication protocol,
such as the Message Passing Interface
(MPI) function \texttt{MPIAllReduce()} \cite{kumar2016optimization} on a high-end compute cluster.
Such systems provide a powerful network switch and an efficient
\texttt{MPIAllReduce()} implementation that delivers near perfect scaling of
\texttt{MPIAllreduce()} bandwidth, and so communication time is approximately constant, \emph{i.e.},
$\Delta(P)=\delta$ for some constant $\delta$ approximately inversely proportional to
the aggregate bandwidth between learners.
For comparison purposes, a synchronous parameter server
has a communication time that grows linearly with $P$, \emph{i.e.}, $\Delta(P) = \delta P$.
\subsection{Modeling $\mathbf{T_{\rm{Conv}}(M,P)}$}
Using Eqn.~\ref{eqn:Decompose} to combine our estimates for $N_{\rm{Update}}$,
$T_{\rm{Update}}$ yields the following general
approximation to the total convergence time for SGD running on $P$
parallel learners:
\begin{small}
\begin{equation}
\label{eqn:TC}
T_{\rm{Conv}}(M,P) = \left(N_{\infty} + \frac{\alpha}{M}\right)\left[\gamma\max\left( \frac{M}{P},M_T \right) + \delta \right].
\end{equation}
\end{small}
We can now use this model to optimize training time and analyze the impact of system design on training time
in numerous ways.
\emph{E.g.}, in the experiments we show how to
select the mini-batch size that minimizes SGD training time.
Note that Eqn.~\ref{eqn:TC} relies on certain assumptions about the hardware
that might not be true in general, \emph{e.g.}, that $\delta$ is a constant.
We have chosen these assumptions to simplify the analysis; but in
practice, one can easily choose a different model for $T_{\rm{Conv}}$, or even measure the
exact form of $T_{\rm{Conv}}$, and still follow through with the analysis below.
One final consideration arises regarding cross-validation (CV) since SGD
training is rarely performed without some form of CV stopping criterion.
We can accommodate the effect of CV in our model by including a CV term,
such that
\begin{small}
\begin{equation}
\Gamma(M) = \gamma N\max(M,M_T) + \gamma_{\rm CV}\max(M_{\rm CV},M_T)
\end{equation}
\end{small}
where $N$ is the number of SGD updates per CV calculation and
$M_\text{CV}$ is the number CV samples to calculate. For
simplicity, we ignore CV in our analysis below, but the analysis follows
the same path as before. Additionally, the calculation of a CV subset
adds virtually no communication, since the parallel learners computing the
CV estimate communicate only a single number when they are done.
\section{Optimal mini-batch Selection\label{sec:Optimalmini-batch}}
For the single learner case ($P=1$), there is no inter-learner communication cost, so Eqn.~\ref{eqn:TC} yields
\begin{equation}
T_{\rm{Conv}}(M,1) = \left(N_{\infty} + \frac{\alpha}{M}\right)\gamma\max\left(M,M_T \right).
\end{equation}
Optimizing over $M$, we find the optimal mini-batch size to be
\begin{equation}
M_{\rm Opt} = M_T.
\end{equation}
One can easily identify $M_T$ for a given system by timing a few SGD updates per value of $M$ to obtain an estimate of $T_{\rm{Update}}(M)$, and then selecting the knee in the $T_{\rm{Update}}(M)$ curve as an estimate for $M_T$. If the simple model used in Eqn.~\ref{eqn:HW} is not accurate for a given system configuration, the methodology below can be used.
\begingroup
\setlength{\tabcolsep}{2pt}
\begin{table}[ht]
\centering
\begin{tabular}{ll}
\toprule
\multicolumn{2}{l}{\textbf{Methodology:} Optimal mini-batch Size Selection}\\
\midrule
1. & For a range of $M$: \\ & \quad Measure $T_{\rm{Update}}(M)$ over a few SGD updates. \\
2. & For a least two values of $M$: \\ & \quad Estimate $N_{\rm{Update}}(M)$ by running to convergence. \\
3. & Use estimate $N_{\rm{Update}}(M)$ to calculate $N_\infty$ and $\alpha$.\\
4. & Use $N_{\rm{Update}}(M,N_\infty,\alpha)$, $T_{\rm{Update}}(M)$ to select $M_{\rm Otp}$.\\
\bottomrule
\end{tabular}
\end{table}
\endgroup
For the multiple learning case ($P\geq 1$), the optimal $M$ is given by
\begin{equation}
M_{\rm Opt}(P) = \max\left( \sqrt{\frac{\alpha\delta P}{N_\infty\gamma}}~~,~~M_T P\right)
\end{equation}
which demonstrates that linearly increasing the training data with the number of learners (\emph{i.e.}, ``weak scaling'') is not always the optimal choice because $M_TP$ can be less than $\sqrt{\alpha\delta P/N_\infty\gamma}$. Note that the methodology above can also be used to optimize $M$ in the multi-learner case.
\begin{remark}
Numerous papers have addressed the behavior and benefits of minibatch training \cite{cotter2011better}, \cite{jain2017parallelizing}, \cite{bottou2018optimization}.
These papers frequently derive training error bounds in terms of the number of iterations, and/or the mini-batch size, often exhibiting $1/M$ and $1/N$ related behavior.
However these results do not address the functional relationship between the number of iterations and $M$.
Our work is the first to demonstrate this relationship empirically, on a wide variety of learning problems, in a much simpler way.
Furthermore, for some of the results in [1-11], it is possible to go beyond what the original authors intended by inverting their results to find relationships between number of iterations and $M$; however, without empirical evidence, one does not know how tight the bounds are in practice, and even if one assumes they are tight, the resulting inversion leads to relationships which are not the same as the $\alpha/M + N_\infty$ relationship presented here.
\end{remark}
\section{Experimental Results}\label{sec:Results}
We have observed that, to a reasonable approximation, the relationship
\begin{equation}
N_{\rm{Update}} = N_{\infty} + \frac{\alpha}{M}
\end{equation}
persists over a broad range of $M$, and a variety of machine learning
dimensions, including the choice of data set, model topology, number of
classes, convergence threshold and learning rate. This section describes
the methodology used and the results obtained.
\begin{table*}[!ht]
\centering
\caption{Training experiments performed.}
\label{tab:Experiments}
\begin{tabular}{llllll}
\rowcolor{Gray}
\toprule
\textbf{Data Set} & \textbf{Model} & \textbf{\# Parameters} & \textbf{\#
Layers} & \textbf{Learning Rate} & \textbf{Batch Sizes}\tabularnewline
\midrule
MNIST & Small -- LeNet & 43,158 & 4 & 0.01 & 1 - 1024\tabularnewline
& Medium -- LeNet & 169,506 & 4 & 0.01, 0.05 & 1 - 1024\tabularnewline
& Even/Odd -- LeNet & 169,506 & 4 & 0.05 & 1 - 1024\tabularnewline
& Large -- LeNet & 671,802 & 4 & 0.01 & 1 - 1024\tabularnewline
\midrule
CIFAR10 & Small -- LeNet & 487,458 & 4 & 0.05 & 1 - 1024\tabularnewline
& Medium -- VGG & 1,125,090 & 9 & 0.01, 0.05, Adam & 1 -
1024\tabularnewline
& Large -- VGG & 1,862,754 & 11 & 0.025 & 1 - 1024\tabularnewline
& ResNet & 270,410 & 20 & 0.05 & 1 - 1024\tabularnewline
\midrule
CIFAR100 & Small -- LeNet & 487,458 & 4 & 0.05 & 16 -
1024\tabularnewline
& Medium -- VGG & 1,125,090 & 9 & 0.05 & 16 - 1024\tabularnewline
& Large - VGG & 1,862,754 & 11 & 0.025, 0.05 & 16 - 1024\tabularnewline
\bottomrule
\end{tabular}
\end{table*}
To ensure the robustness of our observations, we conducted a range of
experiments over batch sizes from 1 to 1024 on benchmark image
classification data sets. Experiments covered a variety of common model
architectures (including LeNet \cite{lecun1995learning}, VGG \cite{simonyan2014very}, and ResNet \cite{he2016deep}) run
on the MNIST \cite{lecun2010mnist}, CIFAR10 \cite{krizhevsky2009learning} and CIFAR100 data sets. The models
were trained for a fixed number of updates with a slowly decaying
learning rate. Adam \cite{kingma2014adam} adaptive learning rate was also used for one
of the models. Light regularization was used with a decay constant of
$10^{-4}$ on the $L_2$-norm of the weights. For each model
architecture, we varied the size in terms of width (\emph{i.e.}, parameters per
layer) and depth (\emph{i.e.}, number of layers) to measure the training
behavior across model topologies. In addition, we experimented with the
same model across all three data sets (LeNet). Training was performed
using the Torch library on a single K80 GPU.\footnote{None of our
measurements required data-level parallelism because our decomposition
of $T_{\rm{Conv}}$ allows us to estimate $N_{\rm{Update}}$ and $T_{\rm{Update}}$
separately, and $N_{\rm{Update}}$ is independent of
$P$, the level of data parallelism used.} Table~\ref{tab:Experiments} summarizes the
various experiments that were performed. Training and cross-validation
losses were recorded after each update for MNIST and after every 100
updates for CIFAR10 and CIFAR100, using two distinct randomly selected
sets of 20\% of the available data. The recorded results were
``scanned'' to find the $T_{\rm{Update}}$ value
that first achieves the desired training loss level, $\epsilon$. Note
that this approach is equivalent to a stopping criterion with no
patience.
Each MNIST experiment was averaged over ten runs with different random
initializations to get a clean estimate of
$N_{\rm{Update}}$ as a function of $M$. Averaging was not
used with the other experiments, and as our results show, was not generally
needed.
The results of our experiments in Figure~\ref{fig:ExperimentalResults} show a robust inverse
relationship between $N_{\rm{Update}}$ and $M$ measured
across all the data sets, models, learning rates for each case we have
considered. The fit lines match the observed data closely and we
estimated $N_{\infty}$ and $\alpha$. Because of the large number of
possible combinations of experiments performed, we only show a
representative subset of the graphs to illustrate the behavior that was
observed in all experiments. This empirical behavior exists for training error,
cross-validation error, varying $\epsilon$, changing the number of
output classes, etc.
\begin{figure*}[htb]
\centering
\includegraphics[width=6.10449in,height=6.25278in]{Figures/ExperimentalResults.png}
\caption{$N_{\rm{Update}}$ as a function of $M$ for a
variety of SGD learning problems for a variety of conditions.
The y-axis represents the training loss. The plots
generally show an inverse relationship between
$N_{\rm{Update}}$ and $M$. Since $N_{\rm{Update}}$ is a random variable, we see noise in
these graphs. Additional averaging over multiple runs removes this
noise.}
\label{fig:ExperimentalResults}
\end{figure*}
These results show that large learning rates (shown as ``lR'' in the
graphs) are associated with small $N_{\infty}$, which is not
unexpected. However, for the experiment with adaptive learning rate
(cifar10\_medium\_adam), $N_{\infty}$ is negative, which is likely the
result of noise in our estimates or a failure of the model for adaptive
learning rates. Further study is needed to understand this. Even
so, this indicates that $N_{\infty}$ is small compared to the $\alpha$, and
hence good parallel efficiency is possible.
\section{Theoretical Bound on $N_{\rm{Update}}$ for SGD\label{sec:Theory}}
For completion, we provide a theoretical analysis of mini-batch SGD
convergence that supports our finding of a robust empirical inverse relation between $N_{\rm{Update}}$ and $M$. We begin by defining the SGD update step as
\begin{equation*}
w^{k + 1} = w^{k} - \eta\left( \nabla f\left( w^{k} \right) + \xi^{k} \right)
\end{equation*}
where $f$ is the function to be optimized, $w^{k}$ is a vector of
neural net weights at the $k^{\rm th}$ update of the SGD algorithm, $\xi^k$
is a zero-mean noise term with variance smaller than $\phi^{2}$, and $\eta$ is
the SGD step size. We assume that $\nabla f$ is Lipschitz continuous,
\emph{i.e.}, that
\begin{equation*}
f\left( x \right) \leq f\left( y \right) + \nabla f\left( y \right) \cdot \left( x - y \right) + \frac{L}{2}\ \left| x - y \right|^{2}
\end{equation*}
for some constant $L$.
Standard convex analysis steps, along with
$E[\xi^k] = 0$ and $\mathrm{Var}[\xi^k] \leq \phi^2$, give
\begin{small}
\begin{equation*}
E_{\xi^k}\left\lbrack f\left(w^{k+1}\right) \right\rbrack \leq f\left(w^{k}\right) - \eta\left(1-\frac{\eta L}{2} \right)\left| \nabla f\left(w^{k}\right)\right|^{2}\ + \eta^{2}\frac{L}{2}\phi^{2}.
\end{equation*}
\end{small}
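In more detail, applying the inequality above at $x = w^{k+1}$, $y = w^{k}$ and substituting $w^{k+1} - w^{k} = -\eta\left(\nabla f(w^{k}) + \xi^{k}\right)$ gives
\begin{small}
\begin{equation*}
f\left(w^{k+1}\right) \leq f\left(w^{k}\right) - \eta\, \nabla f\left(w^{k}\right)\cdot\left(\nabla f\left(w^{k}\right)+\xi^{k}\right) + \frac{\eta^{2}L}{2}\left|\nabla f\left(w^{k}\right)+\xi^{k}\right|^{2}\ ;
\end{equation*}
\end{small}
taking $E_{\xi^k}$ and using $E[\xi^{k}]=0$ together with $E\left|\xi^{k}\right|^{2} \leq \phi^{2}$ yields the displayed bound.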
We define $\Delta_{k}$ to be the residual at
the $k^{\rm th}$ step, \emph{i.e.},
\begin{equation*}
\Delta_{k} \equiv f\left(w^k \right)-f\left(w^{*}\right)
\end{equation*}
where $w^{*}$ is the global minimum of $f$.
Using the residual and
assuming convexity, we get
\begin{equation*}
\frac{\Delta_{k}}{|w^{0} - w^{*}|} \leq \frac{\Delta_{k}}{\left| w^{k} - w^{*} \right|} \leq \left| \nabla f\left(w^{k} \right) \right|.
\end{equation*}
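In more detail, convexity gives $f\left(w^{*}\right) \geq f\left(w^{k}\right) + \nabla f\left(w^{k}\right)\cdot\left(w^{*}-w^{k}\right)$, so that by Cauchy--Schwarz
\begin{equation*}
\Delta_{k} \leq \nabla f\left(w^{k}\right)\cdot\left(w^{k} - w^{*}\right) \leq \left|\nabla f\left(w^{k}\right)\right|\left|w^{k} - w^{*}\right|\ ,
\end{equation*}
which is the second inequality; the first inequality additionally uses the assumption, implicit here, that the iterates remain no farther from $w^{*}$ than the initial point, $\left|w^{k} - w^{*}\right| \leq \left|w^{0} - w^{*}\right|$.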
Choosing the learning rate $\eta$ such that
\begin{equation*}
\left( 1 - \frac{\eta L}{2} \right) > 0
\end{equation*}
results in
\begin{equation*}
E_{\xi^k}[\Delta_{k+1}] \leq \Delta_{k} - \lambda\Delta_{k}^{2} + \lambda\sigma^{2}
\end{equation*}
where
\begin{equation*}
\lambda \equiv \eta\left( 1 - \frac{\eta L}{2} \right)\ \frac{1}{\left( w^{0} - w^{*} \right)^{2}}\text{~~ and~~ }\sigma^{2} \equiv \frac{\eta^{2}L}{2\lambda}\phi^{2}.
\end{equation*}
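In more detail, since $\eta\left(1 - \frac{\eta L}{2}\right) > 0$, subtracting $f\left(w^{*}\right)$ from both sides of the expectation bound above and inserting $\left|\nabla f\left(w^{k}\right)\right|^{2} \geq \Delta_{k}^{2}/\left|w^{0} - w^{*}\right|^{2}$ gives
\begin{equation*}
E_{\xi^k}[\Delta_{k+1}] \leq \Delta_{k} - \eta\left(1 - \frac{\eta L}{2}\right)\frac{\Delta_{k}^{2}}{\left|w^{0} - w^{*}\right|^{2}} + \frac{\eta^{2} L}{2}\phi^{2}\ ,
\end{equation*}
which is the displayed recursion with $\lambda$ and $\sigma^{2}$ as above.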
Now, we take average over the full history and use the fact that
\begin{equation*}
(E[X])^2\leq E[X^2]
\end{equation*}
to obtain
\begin{equation*}
E[\Delta_{k+1}]\leq E[\Delta_{k}]-\lambda(E[\Delta_k])^2+\lambda\sigma^2.
\end{equation*}
For simplicity, from here forward we omit the expectation sign and write
$\Delta_k \equiv E_{\xi^1\xi^2\cdots\xi^{k-1}}[\Delta_k]$.
We rearrange this inequality as
\begin{equation*}
(\Delta_{k + 1} - \sigma) \leq (\Delta_{k} - \sigma)(1 - \lambda(\Delta_{k} + \sigma))
\end{equation*}
and observe that $\Delta_{k}$ cannot be smaller than $\sigma$
(because of the constant learning rate and additive noise), which implies
\begin{equation*}
1 - \lambda\left(\Delta_{k} + \sigma \right) \geq 0.
\end{equation*}
By taking the inverse and using the fact that
\begin{equation*}
\frac{1}{1 - x} \geq 1 + x \ \ \ \ {\rm for}\ \ \ \ x \leq 1
\end{equation*}
we obtain
\begin{equation*}
\frac{1}{\Delta_{k + 1} - \sigma} \geq \frac{1}{\Delta_{k} - \sigma}\left( 1 + \lambda\left( \Delta_{k} + \sigma \right) \right) = \frac{1 + 2\lambda\sigma}{\Delta_{k} - \sigma} + \lambda.
\end{equation*}
Then, adding $\frac{1}{2\sigma}$ to both sides (note that $\frac{1+2\lambda\sigma}{2\sigma} = \frac{1}{2\sigma} + \lambda$) and telescoping the resulting recurrence inequality results in
\begin{equation*}
\frac{1}{\Delta_{k + 1} - \sigma} + \frac{1}{2\sigma} \geq \left( 1 + 2\lambda\sigma \right)^{k + 1}\left( \frac{1}{\Delta_{0} - \sigma} + \frac{1}{2\sigma} \right).
\end{equation*}
Finally, solving for $\Delta_{k}$, gives
\begin{equation*}
\label{eqn:PowerDenom}
\Delta_{k} \leq \frac{1}{\left( 1 + 2\lambda\sigma \right)^{k}\left( \frac{1}{\Delta_{0} - \sigma} + \frac{1}{2\sigma} \right)\ - \frac{1}{2\sigma}} + \sigma
\end{equation*}
and, Taylor expanding for small $\sigma$, the number of updates needed to guarantee
$\Delta_{k} \leq \epsilon$ satisfies
\begin{align*}
N_{\rm{Update}} &\geq \frac{\log\left\lbrack \frac{\epsilon + \sigma}{\epsilon - \sigma} \right\rbrack + \log\left\lbrack \frac{\Delta_{0} - \sigma}{\Delta_{0} + \sigma} \right\rbrack}{\log\left\lbrack 1 + 2\lambda\sigma \right\rbrack} \\ &\approx \frac{1}{\lambda}\ \left( \frac{1}{\epsilon} - \frac{1}{\Delta_{0}} \right)\left( 1 + \frac{\sigma^{2}}{3}\left( \frac{1}{\epsilon^{2}} + \frac{1}{\Delta_{0}^{2}} + \frac{1}{\epsilon\Delta_{0}} \right) \right).
\end{align*}
Using the Central Limit Theorem, we observe that
$\sigma^{2} \approx \frac{\theta}{M}$
and therefore obtain
\begin{equation*}
N_{\text{Update}} \geq \frac{1}{\lambda}\ \left( \frac{1}{\epsilon} - \frac{1}{\Delta_{0}} \right)\left( 1 + \frac{\theta}{M}\left( \frac{1}{\epsilon^{2}} + \frac{1}{\Delta_{0}^{2}} + \frac{1}{\epsilon\Delta_{0}} \right) \right).
\end{equation*}
The fact that this bound exhibits the same inverse $M$ relationship as
\begin{equation*}
N_{\rm{Update}} = N_{\infty} + \frac{\alpha}{M}
\end{equation*}
reinforces the robustness of our empirical finding.
\section{Discussion\label{sec:Summary}}
Our model separates algorithmic convergence properties from implementation details. This separation gives machine learning researchers and practitioners a new way of thinking about their algorithms: $N_{\infty}$ provides a lower bound on the number of updates required to converge and fundamentally limits the benefits of parallelization for accelerating machine learning, while $\alpha$ introduces a new concept, that of an algorithm's ``noise sensitivity'', as a key property in the optimization of machine learning. Using these new principles to guide algorithmic design may help researchers develop improved algorithms.
Our current research extends our results to new domains. For example,
preliminary results for machine translation using recurrent neural networks are very promising.
We close with a few observations about challenges and opportunities ahead.
\emph{Noise Sensitivity and Complexity}: Our experiments suggest that as the learning
problem grows in complexity from MNIST to CIFAR10 to CIFAR100, its
sensitivity to noise grows (\emph{i.e.}, $\alpha$ grows). See for example
the medium size model results for fixed learning rate. Thus, the onset
of the $N_{\infty}$ ``floor'' is pushed to larger mini-batch values.
This suggests that the benefit of parallelism may grow as the research
community explores more complex learning challenges. However, this
benefit must be balanced by any related increase in $N_{\infty}$,
which will in general also grow with complexity.
\emph{Improved Learning Algorithms}: The research community should
be encouraged to develop algorithms with lower $N_{\infty}$ as this
will lead to better data-parallel scaling. Algorithms that make better
use of the data to generate improved update estimates and thereby
reduce $N_{\infty}$ (\emph{e.g.}, perhaps second-order methods) are prime
candidates. Of course, this reduction needs to be understood in the
context of a tradeoff with a concomitant increase in $T_{\rm{Update}}$.
\emph{Local Minima}: It has been shown \cite{keskar2016large} that increasing
mini-batch size generally has negative effects on generalization.
Intuitively, the reduced gradient stochasticity of larger mini-batches
leads to increased risk of getting stuck in local minima. This problem
is fundamental to data-parallel scaling of SGD and needs to be addressed as
parallelization efficiency improves.
\emph{Beyond SGD}: It is important to note that the core methodology
presented in this paper is not limited to SGD. It is applicable to any
algorithm that has a calculation phase followed by a model update
phase. Research is required to explore whether other robust
mini-batch (or other) relationships exist for different algorithms,
such as Expectation Maximization \cite{dempster1977maximum}. In this way, the methodology
described in this paper provides a new way of comparing the
parallelization effectiveness of algorithms.
\clearpage
\bibliographystyle{apalike}
\section{Introduction}
\subsection{Background}
In addition to its rich stochastic geometric structure, first-passage percolation on $\mathbb{Z}^d$ provides a model for the study of fluctuations of a non-linear function of a large number of independent random variables. For recent surveys, see \cite{BStahn, GK, Howard}.
In this paper, we are concerned with the variance of the passage time $\tau(0,x)$ from $0$ to $x\in \mathbb{Z}^d$. The passage time is the random variable defined as
\begin{equation}\label{eq: tau_def}
\tau(0,x) =\inf_{\gamma: 0\rightarrow x}\sum_{e\in \gamma} t_e\ ,
\end{equation}
where the infimum is taken over all lattices paths $\gamma=(v_0=0,e_0,v_1,\ldots,e_N,v_N=x)$ joining $0$ to $x$. The collection $(t_e)_{e\in \mathcal{E}^d}$ consists of nonnegative independent random variables with common distribution $\mu$ and $\mathcal{E}^d$ is the set of nearest-neighbor edges.
When $d=1$, \eqref{eq: tau_def} is simply a sum of i.i.d. random variables for each $x$, and the variance of $\tau(0,x)$ is of order $\|x\|_1$. In contrast, when $d\ge 2$, \eqref{eq: tau_def} is a minimum over a collection of correlated sums of i.i.d. random variables. This correlation structure has led physicists to conjecture a sublinear scaling of the form $\|x\|_1^\alpha$, $\alpha<1$, for the variance of the passage time. In the case $d=2$, the model is expected to have \emph{KPZ scaling} \cite{kpz}, with $\alpha=\frac{2}{3}$, and the recentered passage time approximately follows the Tracy-Widom distribution. Except for K. Johansson's work \cite{johansson} on a related exactly solvable model, there has been little success in rigorously confirming these predictions.
In \cite{kestenspeed}, H. Kesten showed that the variance of $\tau(0,x)$ is at most linear in the distance of $x$ to the origin:
\[\Var \tau(0,x)\le C\|x\|_1,\]
for some constant $C$. Kesten also showed that if $\mu$ has exponential moments:
\begin{equation}
\int e^{\delta x}\,\mu(\mathrm{d}x)<\infty \text{ for some } \delta>0\ ,\label{eqn: exp-moments}
\end{equation}
then the passage time is exponentially concentrated around its mean:
\begin{equation}\label{eqn: kesten}
\mathbb{P}(|\tau(0,x)-\mathbb{E}\tau(0,x)|\ge \lambda \sqrt{\|x\|_1})\le Ce^{-c\lambda},
\end{equation}
for $\lambda \le C\|x\|_1$.
M. Talagrand improved this result to Gaussian concentration on the scale $\sqrt{\|x\|_1}$: see \cite[Proposition~8.3]{talagrand}. These results have been used to derive concentration of the mean of the passage time around the ``time constant.'' Some relevant papers include \cite{alexander, rhee, zhang}. In the other direction, lower bounds have been given for the variance of the passage time, but the strongest results are dimension-dependent; see \cite{AD11, kestenspeed, newman, zhang2}.
In a remarkable paper \cite{BKS}, I. Benjamini, G. Kalai, and O. Schramm used an inequality due to Talagrand \cite{talagrand-russo} to prove that if the edge-weight distribution is uniform on a set of two positive values, the variance is sublinear in the distance:
\[\Var \tau(0,x) \le C(a,b)\frac{\|x\|_1}{\log\|x\|_1},~ d \geq 2\]
for $0<a<b$ and $\mathbb{P}(t_e=a)=\mathbb{P}(t_e = b) =\frac{1}{2}.$
M. Benaim and R. Rossignol \cite{benaimrossignol} introduced their ``modified Poincar\'e inequality,'' itself based on an inequality of D. Falik and A. Samorodnitsky (a corresponding inequality appears in Rossignol \cite[Equations~(11)-(14)]{Rossignol}), to extend the variance estimate to a class of continuous distributions which they termed ``nearly gamma.'' Nearly gamma distributions satisfy an entropy bound analogous to the logarithmic Sobolev inequality for the gamma distribution, which explains their name; for a nearly gamma $\mu$ and, for simplicity, $f$ smooth,
\begin{equation}\label{eqn: nearlygamma}
Ent_\mu f^2 := \int f^2(x) \log \frac{f^2(x)}{\mathbb{E}_\mu f^2}\,\mu(\mathrm{d}x) \le C\int \left(\sqrt{x}f'(x)\right)^2\,\mu(\mathrm{d}x).
\end{equation}
Benaim and Rossignol also show exponential concentration at scale $\sqrt{\|x\|_1/\log \|x\|_1}$ for nearly gamma distributions with exponential moments: if $\mu$ satisfies \eqref{eqn: nearlygamma} and \eqref{eqn: exp-moments},
then
\begin{equation}\label{eqn: concentration}
\mathbb{P}_\mu(|\tau(0,x)-\mathbb{E}_\mu \tau(0,x)| \ge \lambda \sqrt{\|x\|_1/\log \|x\|_1}) \le Ce^{-c\lambda}.
\end{equation}
The nearly gamma condition excludes many natural distributions, including all power law distributions and distributions with infinite support which decay too quickly, mixtures of continuous and discrete distributions, singular continuous distributions, and continuous distributions with disconnected support, or whose density has zeros on its support.
\subsection{Main result}
The purpose of the present work is to extend the sublinear variance results mentioned above to general distributions with $2+\log$ moments. We make two assumptions:
\begin{equation}\label{eqn: 2log}
\int x^2(\log x)_+~\mu(\text{d}x) < \infty\ ,
\end{equation}
\begin{equation}\label{eqn: geodesics}
\mu(\{0\})<p_c(d)\ ,
\end{equation}
where $p_c(d)$ is the critical parameter for bond percolation on $\mathbb{Z}^d$.
Our main result is the following:
\begin{thm}\label{thm: sub linear} Let $\mu$ be a Borel probability measure supported on $[0,\infty)$ satisfying \eqref{eqn: 2log} and \eqref{eqn: geodesics}. In i.i.d. first-passage percolation on $(\mathbb{Z}^d,\mathcal{E}^d)$, $d\ge 2$, with edge-weight distribution $\mu$, there exists a constant $C=C(\mu,d)$ such that
\[
\Var \tau(0,x) \le C\frac{\|x\|_1}{\log \|x\|_1} \text{ for all } x \in \mathbb{Z}^d\ .
\]
\end{thm}
\begin{remark}
When \eqref{eqn: geodesics} fails, the passage time is known to be bounded by $C\|x\|_1^\epsilon$ for any $\epsilon$. See \cite{chayes,zhang3} for more details.
\end{remark}
\begin{remark}
The moment condition $\mathbb{E}t_e^2(\log t_e)_+<\infty$ may be able to be weakened, perhaps as low as $\mathbb{E}t_e^{(2/d)+a}<\infty$ for some $a>0$ by tensorizing entropy over small blocks, as in \cite[Lemma~2.6]{DK}. The main reason is that, due to \cite[Lemma~3.1]{coxdurrett}, $\Var \tau(x,y) < \infty$ for all $x,y$ under the condition $\mathbb{E} t_e^{(1/d)+a} < \infty$ for some $a>0$.
\end{remark}
Our method of proof may be of independent interest. Following \cite{benaimrossignol}, we use a martingale difference decomposition and the inequality of Falik and Samorodnitsky to control the variance of an averaged version of $\tau(0,x)$ by the entropy times a $1/\log \|x\|_1$ factor. Instead of representing the measure $\mu$ as the pushforward of a Gaussian by an invertible transformation and using the Gaussian logarithmic Sobolev inequality, we represent $\mu$ as the image of an infinite sequence of uniform Bernoulli variables, and use A. Bonami and L. Gross's two-point entropy inequality \cite{bonami, gross} (the ``discrete log-Sobolev inequality'') to control the entropy. A central part of the argument is then to estimate the discrete derivatives of $\tau(0,x)$ with respect to variations of the Bernoulli variables.
\subsection{Outline of the paper}
The plan of the paper is as follows: in Section~\ref{sec: entropy}, we review some basic properties of the entropy functional with respect to a probability measure, and present the inequality of Falik and Samorodnitsky which we will use. In Section~\ref{sec: bernoulli}, we apply this inequality to first-passage percolation, using the martingale decomposition introduced in \cite{benaimrossignol}. We then briefly explain Benaim and Rossignol's approach based on the Gaussian log-Sobolev inequality in Section~\ref{sec: BR}, and show that a modification of their method using positive association already allows one to deal with a larger class of continuous distributions than the ones handled in \cite{benaimrossignol}. The purpose of Section~\ref{sec: BR} is only to clarify the role of conditions appearing in \cite{benaimrossignol}. This section is independent of the derivation of our main result.
In Section~\ref{sec: lower}, we provide a lower bound for the quantity $\sum_{k=1}^\infty (\mathbb{E} |V_k|)^2$ appearing in the variance bound, which will give the logarithmic factor in the final inequality. Next, in Section~\ref{sec: derivatives} we represent the passage time variables through a Bernoulli encoding and, after applying Bonami's inequality, bound a sum of discrete derivatives with the help of estimates on greedy lattice animals.
\subsection{Notation and preliminary results}
We will work on the space $\Omega = [0,\infty)^{\mathcal{E}^d}$ and let $\mu$ be a Borel probability measure on $[0,\infty)$. The product measure $\prod_{e \in \mathcal{E}^d} \mu$ will be denoted by $\mathbb{P}$. A realization of passage times (edge-weights) $\omega \in \Omega$ will be written as $\omega = (t_e)$ with point-to-point passage time $\tau(x,y)$ given by \eqref{eq: tau_def}. Throughout the paper, the letter $I$ will refer to the infimum of the support of $\mu$: writing
\begin{equation}\label{eq: F_def}
F(x) = \mu((-\infty,x])
\end{equation}
for the distribution function of $\mu$, set
\begin{equation}\label{eq: I_def}
I = \inf\{x : F(x) > 0\}\ .
\end{equation}
A fundamental object in first-passage percolation is a geodesic, and we spend some time here giving some basic properties of geodesics. Any path $\gamma$ from $x$ to $y$ with passage time $\tau(\gamma) = \sum_{e \in \gamma} t_e$ satisfying $\tau(\gamma) = \tau(x,y)$ will be called a geodesic from $x$ to $y$. From the shape theorem of Cox-Durrett \cite{coxdurrett} and the fact that under \eqref{eqn: geodesics}, the limiting shape for the model is bounded \cite[Theorem~6.1]{kesten}, assumptions \eqref{eqn: 2log} and \eqref{eqn: geodesics} ensure the existence of geodesics:
\begin{equation}\label{eq: exist_geodesics}
\mathbb{P}( \text{for all }x,y \in \mathbb{Z}^d \text{ there exists a geodesic from }x \text{ to } y) = 1\ .
\end{equation}
There is almost surely a unique geodesic between $x$ and $y$ if and only if $\mu$ is continuous, so this need not be true in general. For any $x,y \in \mathbb{Z}^d$ we then use the notation
\begin{equation}\label{eq: geo_def}
Geo(x,y) = \{e \in \mathcal{E}^d : e \in \gamma \text{ for all geodesics } \gamma \text{ from } x \text{ to } y\}\ .
\end{equation}
Central to the current proofs of variance bounds for the passage time are estimates on the length of geodesics. The key theorem is due to Kesten \cite[(2.25)]{kestenspeed} and is listed below. We will need to derive two generalizations of this result. The first is Lemma~\ref{lem: exp_intersections} and concerns the number of intersections of $Geo(0,x)$ with arbitrary edge sets. The second, Theorem~\ref{thm: low_density}, gives a bound on the number of edges of $Geo(0,x)$ whose weight lies in a given Borel set.
\begin{thm}[Kesten]\label{thm: kesten_length}
Assume $\mathbb{E}t_e<\infty$ and \eqref{eqn: geodesics}. There exists $\mathbf{C}_1$ such that for all $x$,
\[
\mathbb{E}\#Geo(0,x) \leq \mathbf{C}_1\|x\|_1\ .
\]
\end{thm}
The second tool we shall need is \cite[Proposition~5.8]{kesten} and shows that under assumption \eqref{eqn: geodesics}, it is unlikely that long paths have small passage time.
\begin{thm}[Kesten]\label{thm: kesten_exp}
Assuming \eqref{eqn: geodesics}, there exist constants $a, \mathbf{C}_2>0$ such that for all $n \in \mathbb{N}$,
\[
\mathbb{P}\bigg(\exists \text{ self-avoiding } \gamma \text{ starting at }0 \text{ with } \#\gamma \geq n \text{ but with } \tau(\gamma) < an \bigg) \leq \exp(-\mathbf{C}_2 n)\ .
\]
\end{thm}
\subsection{Proof sketch}
\noindent
{\bf The setup.} Our argument begins with the setup of Benaim and Rossignol: to bound the variance, we use the inequality of Falik-Samorodnitsky. That is, if $T = \tau(0,x)$ is the passage time, then we enumerate the edges of the lattice as $\{e_1, e_2, \ldots\}$ and perform a martingale decomposition
\[
T - \mathbb{E}T = \sum_{k=1}^\infty V_k\ ,
\]
where $V_k = \mathbb{E}[T \mid \mathcal{F}_k] - \mathbb{E}[T \mid \mathcal{F}_{k-1}]$ and $\mathcal{F}_k$ is the sigma-algebra generated by the edge weights $t_{e_1}, \ldots, t_{e_k}$. Then one has
\[
\Var T~\log \left[ \frac{\Var T}{\sum_{k=1}^\infty (\mathbb{E}|V_k|)^2} \right] \leq \sum_{k=1}^\infty Ent(V_k^2)\ .
\]
(See Lemma~\ref{lem: lower_bound}.) If $\Var T \leq \|x\|^{7/8}$, then the required bound holds; otherwise, one has $\Var T \geq \|x\|^{7/8}$ and the bound is
\[
\Var T~\log \left[ \frac{\|x\|^{7/8}}{\sum_{k=1}^\infty (\mathbb{E}|V_k|)^2} \right] \leq \sum_{k=1}^\infty Ent(V_k^2)\ .
\]
By working with an averaged version $F_m$ of $T$ (similar to that used in \cite{BKS}, but a different definition that simplifies the analysis and requires a new argument) one can ensure that the sum in the denominator on the left is at most order $\|x\|^{3/4}$. (See Proposition~\ref{prop: 5.3}.) Thus we begin our analysis with
\begin{equation}\label{eq: setup}
\Var T \leq \frac{C}{\log \|x\|} \sum_{k=1}^\infty Ent(V_k^2)\ .
\end{equation}
\medskip
\noindent
{\bf Step 1.} Bernoulli encoding. If one knows a log-Sobolev inequality (LSI) of the form $Ent~f^2 \leq C \mathbb{E}\|\nabla f\|_2^2$, then the argument of Benaim-Rossignol would give $\sum_{k=1}^\infty Ent(V_k^2) \leq C\mathbb{E}\|\nabla T\|_2^2$ and the method of Kesten can give an upper bound on this term by $C\|x\|_1$. Combining with \eqref{eq: setup} gives the sub-linear variance bound.
Unfortunately very few distributions satisfy a LSI of the above type. Benaim-Rossignol deal with this by exhibiting certain edge-weight distributions (those in the ``nearly gamma'' class) as images of Gaussian random variables and using the Gaussian LSI. This does not work for all distributions, so our main idea is to encode general edge-weights using infinite sequences of Bernoulli variables and use the Bernoulli (two-point) LSI.
For simplicity, assume that the edge-weights $t_e$ are uniformly distributed on $[0,1]$, so that we can encode their values using the binary expansion and i.i.d. Bernoulli$(1/2)$ sequences
\[
t_e = \sum_{i=1}^\infty \omega_{e,i} 2^{-i}, \text{ where } (\omega_{e,1}, \omega_{e,2}, \ldots) \text{ is i.i.d. Bernoulli}(1/2)\ .
\]
(For general distributions, we compose with the right-continuous inverse of the distribution function of $t_e$.) Then using the Bernoulli LSI and the argument of Benaim-Rossignol,
\begin{equation}\label{eq: LSI}
\sum_{k=1}^\infty Ent(V_k^2) \leq 2\sum_{k=1}^\infty \sum_{i=1}^\infty \mathbb{E}(\Delta_{e_k,i} T)^2\ ,
\end{equation}
where $\Delta_{e_k,i}$ is the discrete derivative of $T$ relative to flipping the $i$-th bit in the binary expansion of $t_{e_k}$. This is done in Lemma~\ref{eqn: entropybydelta}.
\medskip
\noindent
{\bf Step 2.} The bulk of the paper is devoted to bounding these discrete derivatives: giving the inequality
\[
\sum_{k=1}^\infty \sum_{i=1}^\infty \mathbb{E}(\Delta_{e_k,i} T)^2 \leq C \|x\|_1\ .
\]
This is not a priori clear because flipping bits in the binary expansion can produce a large change in $t_e$ if there are gaps in the support of the edge-weight distribution. We deal with this by considering this influence ``on average.'' That is, letting $\mathbb{E}_{< i}$ be the expectation over the binary variables $\omega_{e_k,1}, \ldots, \omega_{e_k,i-1}$, one has
\[
\mathbb{E}_{< i}(\Delta_{e_k,i}T)^2 = \frac{1}{2^{i-1}} \sum_{\sigma \in \{0,1\}^{i-1}} (T(\sigma,1)-T(\sigma,0))^2\ ,
\]
where we have indicated dependence of $T$ only on the first $i$ binary variables. Because the weights are bounded in $[0,1]$, the differences above are at most 1 (and nonzero only when $e_k$ is in a geodesic from $0$ to $x$ for some value of $t_e$), so we can telescope them, obtaining the upper bound
\[
\mathbf{1}_{\{e_k \in Geo(0,x) \text{ for some value of }t_{e_k}\}} \frac{1}{2^{i-1}} \sum_{\sigma \in \{0,1\}^{i-1}} (T(\sigma,1) - T(\sigma,0)) \leq \frac{1}{2^{i-1}} \mathbf{1}_{\{e_k \in Geo(0,x) \text{ for some value of } t_{e_k}\}}\ .
\]
Pretending for the moment that the indicator is actually of the event that $e_k \in Geo(0,x)$, we can sum over $i$ to give the bound $2 \mathbf{1}_{\{e_k \in Geo(0,x)\}}$, and sum over $k$, using Theorem~\ref{thm: kesten_length}, to obtain
\[
\sum_{k=1}^\infty Ent(V_k^2) \leq C \sum_k \mathbb{P}(e_k \in Geo(0,x)) \leq C\|x\|_1\ .
\]
\medskip
\noindent
{\bf Step 3.} General case. We are not using only uniform $[0,1]$ edge weights, so several complications arise, due both to large edge-weights and to edge-weights near the infimum of the support. The first problem forces the moment condition $\mathbb{E}t_e^2(\log t_e)_+<\infty$ and the second is related to the change from $\mathbf{1}_{\{e \in Geo(0,x) \text{ for some } t_e\}}$ to $\mathbf{1}_{\{e \in Geo(0,x)\}}$. However, careful bounding (for example, keeping track of the value $D_{z,e}$ of the edge-weight above which the edge leaves the geodesic -- see Lemma~\ref{lem: deriv}) leads to the inequality in Proposition~\ref{prop: intermediate}:
\begin{equation}\label{eq: last_entropy_bound}
\sum_{k=1}^\infty Ent(V_k^2) \leq C \mathbb{E} \sum_e (1-\log F(t_e)) \mathbf{1}_{\{e \in Geo(0,x)\}}\ ,
\end{equation}
where $F(t_e)$ is the distribution function of the weight $t_e$. Note that this is large when $t_e$ is near its infimum. In a sense, \eqref{eq: last_entropy_bound} is our version of an LSI, with the penalties due to the fact that we do not have a traditional LSI.
For certain distributions, we can bound $(1-\log F(t_e)) \leq C$ and sum as above. In particular, this is possible when there is an atom at the infimum of the support. But for general distributions, we must analyze the number of edges in the geodesic which have weight near the infimum. For this we use the theory of greedy lattice animals. Theorem~6.6 shows that without such an atom, for any $\epsilon>0$, the expected number of edges in $Geo(0,x)$ with weight within $\epsilon$ of the infimum of the support $I$ satisfies
\[
\mathbb{E}\#\{e \in Geo(0,x) : t_e \in [I,I+\epsilon]\} \leq C \|x\|_1 \beta(\epsilon)\ ,
\]
where $\beta(\epsilon) \to 0$ as $\epsilon \to 0$. Combining this with another dyadic partition of the interval $[I,\infty)$ (see Section~\ref{sec: finishing_deriv}) provides the required control on $(1-\log F(t_e))$ and allows the bound
\[
\mathbb{E} \sum_e (1-\log F(t_e)) \mathbf{1}_{\{e \in Geo(0,x)\}} \leq C\|x\|_1\ .
\]
Along with \eqref{eq: last_entropy_bound}, we obtain $\sum_{k=1}^\infty Ent(V_k^2) \leq C\|x\|$ and complete the proof.
\section{Entropy} \label{sec: entropy}
Recall the definition of entropy with respect to a probability measure $\mu$:
\begin{df}
Let $(\Omega,\mathcal{F},\mu)$ be a probability space and $X\in L^1(\Omega,\mu)$ be nonnegative. Then
\[
Ent_\mu X = \mathbb{E}_\mu X\log X - \mathbb{E}_\mu X \log \mathbb{E}_\mu X .
\]
\end{df}
Note that by Jensen's inequality, $Ent_\mu X \geq 0$. We will make use of the variational characterization of entropy (see \cite[Section 5.2]{ledoux}):
\begin{prop}\label{prop: characterization}
If $X$ is nonnegative, then
\[
Ent_\mu (X) = \sup \{\mathbb{E}_\mu XY : \mathbb{E}_\mu e^Y \leq 1\}\ .
\]
\end{prop}
This characterization will let us prove the ``tensorization'' of entropy.
\begin{thm}\label{thm: tensorization}
Let $X$ be a non-negative $L^2$ random variable on a product probability space
\[\left(\prod_{i=1}^\infty \Omega_i,\mathcal{F}, \mu = \prod_{i=1}^\infty \mu_i \right),\]
where $\mathcal{F} = \bigvee_{i=1}^\infty \mathcal{G}_i$ and each triple $(\Omega_i,\mathcal{G}_i,\mu_i)$ is a probability space. Then
\begin{equation}
Ent_\mu X \leq \sum_{i=1}^\infty \mathbb{E}_\mu Ent_{i}X\ ,\label{eqn: tensor}
\end{equation}
where $Ent_i X$ is the entropy of $X(\omega)=X(\omega_1,\ldots, \omega_i, \ldots )$ with respect to $\mu_i$, as a function of the $i$-th coordinate (with all other values fixed).
\end{thm}
\begin{proof}
We use a telescoping argument: write $\mathcal{F}_k$ for the sigma algebra generated by $\mathcal{G}_1\cup \cdots \cup \mathcal{G}_k$ (with $\mathcal{F}_0$ trivial) and compute for any $n$
\begin{align*}
Ent_\mu X &= \mathbb{E}_\mu X \left[ \log X - \log \mathbb{E}_\mu X \right] \\
&= \sum_{k=1}^n \mathbb{E}_\mu X \left[ \log \mathbb{E}_\mu [ X \mid \mathcal{F}_k] - \log \mathbb{E}_\mu [X \mid \mathcal{F}_{k-1}] \right] + \mathbb{E}_\mu X \left[ \log X - \log \mathbb{E}_\mu [X \mid \mathcal{F}_n] \right]\\
&= \sum_{k=1}^n \mathbb{E}_\mu \mathbb{E}_{\mu_k} X \left[ \log \mathbb{E}_\mu [ X \mid \mathcal{F}_k] - \log \mathbb{E}_\mu [X \mid \mathcal{F}_{k-1}] \right] \\
&+ \mathbb{E}_\mu X \left[ \log X - \log \mathbb{E}_\mu [ X \mid \mathcal{F}_n] \right]\ .
\end{align*}
Here $\mathbb{E}_{\mu_k}$ is expectation with respect to the coordinate $\omega_k$. Because for almost all realizations of $\{(\omega_i) : i \neq k\}$,
\[
\mathbb{E}_{\mu_k} \exp \left( \log \mathbb{E}_\mu [ X \mid \mathcal{F}_k] - \log \mathbb{E}_\mu [ X \mid \mathcal{F}_{k-1}] \right) = 1\ ,
\]
we use Proposition~\ref{prop: characterization} to get the bound
\[
Ent_\mu X \leq \sum_{k=1}^n \mathbb{E}_\mu Ent_k X + \mathbb{E}_\mu X \log X - \mathbb{E}_\mu X \log \mathbb{E}_\mu [ X \mid \mathcal{F}_n]\ .
\]
Putting $X_n = \mathbb{E}_\mu [ X \mid \mathcal{F}_n]$, one has
\[
\mathbb{E}_\mu X \log \mathbb{E}_\mu [X \mid \mathcal{F}_n] = \mathbb{E}_\mu X_n \log X_n\ .
\]
By martingale convergence (since $X \in L^1$), one has $X_n \to X$ almost surely. Furthermore, since $X \in L^2$, the sequence $(X_n \log X_n)$ is uniformly integrable. Therefore
\[
\mathbb{E}_\mu X \log X - \mathbb{E}_\mu X \log \mathbb{E}_\mu [ X \mid \mathcal{F}_n] \to 0
\]
and the proof is complete.
\end{proof}
We end this section with the lower bound from Falik and Samorodnitsky \cite[Lemma~2.3]{FS}.
\begin{prop}[Falik-Samorodnitsky]\label{prop: lb}
If $X \geq 0$ almost surely,
\[
Ent_\mu (X^2) \geq \mathbb{E}_\mu X^2 \log \frac{\mathbb{E}_\mu X^2}{(\mathbb{E}_\mu X)^2}\ .
\]
\end{prop}
\begin{proof}
First assume $X > 0$ almost surely and define $Y = X/ \|X\|_2$. Then
\begin{align*}
Ent_\mu (Y^2) = \mathbb{E}_\mu Y^2 \log Y^2 - \mathbb{E}_\mu Y^2 \log \mathbb{E}_\mu Y^2 &= \mathbb{E}_\mu Y^2 \log Y^2 \\
& = -2 \mathbb{E}_\mu Y^2 \log (1/Y)\ .
\end{align*}
Apply Jensen's inequality to the measure $\mathbf{E}(\cdot)=\mathbb{E}_\mu (\cdot ~ Y^2)$ and the function $-\log$ to obtain
\[
Ent_\mu (Y^2) \geq -2\mathbb{E}_\mu Y^2 \log \frac{\mathbf{E}(1/Y)}{\mathbb{E}_\mu Y^2} = \mathbb{E}_\mu Y^2 \log \frac{\mathbb{E}_\mu Y^2}{(\mathbb{E}_\mu Y)^2}\ ,
\]
proving the proposition for $Y$. Now for $X$,
\[
Ent_\mu (X^2) = \|X\|_2^2 Ent_\mu (Y^2) \geq \|X\|_2^2 \mathbb{E}_\mu Y^2 \log \frac{\mathbb{E}_\mu Y^2}{(\mathbb{E}_\mu Y)^2} = \mathbb{E}_\mu X^2 \log \frac{\mathbb{E}_\mu X^2}{(\mathbb{E}_\mu X)^2}\ .
\]
If $X = 0$ with positive probability, we can conclude by a limiting argument applied to $X_n = \max\{1/n,X\}$.
\end{proof}
\section{Variance bound for $\tau(0,x)$} \label{sec: bernoulli}
The mechanism for sublinear behavior of the variance which was identified in \cite{BKS} can be understood as follows. Since a geodesic from the origin to $x$ is ``one-dimensional,'' one expects that most edges in the lattice have small probability to lie in it: the edges have small influence. This is not true of edges very close to the origin. To circumvent this difficulty, Benjamini, Kalai and Schramm considered an averaged version of the passage time (see \cite[Lemma~3]{BKS}), which they subsequently compare to the actual passage time from $0$ to $x$. It was brought to our attention by S. Sodin (see \cite[Section~3]{sodin13}) that their argument can be replaced by a geometric average. This observation was made earlier by K. Alexander and N. Zygouras in \cite{AZ} for polymer models. Let $x \in \mathbb{Z}^d$ and $B_m$ be a box of the form $[-m,m]^d$ for $m=\lceil \|x\|_1\rceil ^{1/4}$. Define
\begin{equation}\label{eqn: Fdef}
F_m = \frac{1}{\# B_m} \sum_{z \in B_m} \tau(z,z+x)\ .
\end{equation}
Note that by \eqref{eqn: 2log}, $\Var F_m < \infty$.
\subsection{Approximating $\tau(0,x)$ by $F_m$}
Because of the choice of $m$, the variance of $F_m$ closely approximates that of $\tau$:
\begin{prop}\label{prop: approximating}
Assume $\mathbb{E} t_e^2<\infty$. Then there exists $\mathbf{C}_3>0$ such that
\[
|\Var \tau(0,x) - \Var F_m| \leq \mathbf{C}_3 \|x\|_1^{3/4} \text{ for all } x\ .
\]
\end{prop}
\begin{proof}
By subadditivity, for each $z \in B_m$, $|\tau(0,x) - \tau(z,z+x)| \leq \tau(0,z) + \tau(x,z+x)$. Therefore, writing $M_x = \max \{\tau(0,z) : z \in B_m \}$ and $\hat X = X - \mathbb{E} X$,
\[
|\Var \tau(0,x)-\Var F_m| \leq (\|\hat\tau(0,x)\|_2 + \|\hat F_m\|_2) \|\hat \tau(0,x) - \hat F_m\|_2\ .
\]
Using $\|\hat F_m\|_2 \leq \|\hat \tau(0,x)\|_2$, we get the bound
\[
2\|\hat \tau(0,x)\|_2 ( \|\tau(0,x) - F_m\|_2 + \mathbb{E} |\tau(0,x) - F_m|) \leq 4\|\hat \tau(0,x)\|_2 \|M_x\|_2\ .
\]
Since we assume \eqref{eqn: 2log}, \cite[Theorem~1]{kestenspeed} gives $\|\hat \tau(0,x)\|_2 \leq \mathbf{C}_4 \|x\|_1^{1/2}$. On the other hand, we can bound $M_x$ using the following lemma.
\begin{lma}\label{lem: max_lemma}
If $\mathbb{E}t_e^2<\infty$, there exists $\mathbf{C}_5$ such that for all finite subsets $S$ of $\mathbb{Z}^d$,
\[
\mathbb{E} \left[\max_{x,y \in S} \tau(x,y)\right]^2 \leq \mathbf{C}_5 (\text{diam }S)^2\ .
\]
\end{lma}
\begin{proof}
We start with the argument of \cite[Lemma~3.5]{kesten}. Given $x,y \in S$, we can build $2d$ disjoint (deterministic) paths from $x$ to $y$ of length at most $\mathbf{C}_6 \|x-y\|_1$ for some integer $\mathbf{C}_6$. This means that $\tau(x,y)$ is bounded above by the minimum of $2d$ variables $T_1, \ldots, T_{2d}$, the collection being i.i.d. and each variable distributed as the sum of $\mathbf{C}_6 \text{diam}(S)$ i.i.d. variables $t_e$, so
\[
\mathbb{P}(\tau(x,y) \geq \lambda) \leq \prod_{i=1}^{2d} \mathbb{P}(T_i \geq \lambda) \leq \left[ \frac{\mathbf{C}_6\text{diam}(S)\Var t_e }{(\lambda-\mathbf{C}_6 \text{diam}(S) \mathbb{E} t_e)^2} \right]^{2d}\ .
\]
Therefore if we fix some $x_0 \in S$, for $M = \lceil 2\mathbf{C}_6 \mathbb{E} t_e \rceil$,
\[
\sum_{\lambda = M \text{diam}(S)}^\infty \lambda \max_{y \in S} \mathbb{P}(\tau(x_0,y) \geq \lambda) \leq (4 \mathbf{C}_6 \text{diam}(S) \Var t_e)^{2d} \sum_{\lambda=M \text{diam}(S)}^\infty \lambda^{1-4d} = \mathbf{C}_7 (\text{diam }S)^{2-2d}\ .
\]
Last, by subadditivity,
\begin{align*}
\mathbb{E}\left[ \max_{x,y \in S} \tau(x,y) \right]^2 \leq 4 \mathbb{E} \left[ \max_{y \in S} \tau(x_0,y) \right]^2 &\leq 4(M \text{diam }S)^2 \\
&+ 8 (\text{diam }S)\sum_{\lambda=M\text{diam}(S)}^\infty \lambda \max_{y \in S} \mathbb{P}(\tau(x_0,y) \geq \lambda) \\
&\leq \mathbf{C}_8(\text{diam } S)^2\ .
\end{align*}
\end{proof}
Using the lemma, we find $\|M_x\|_2 \leq \mathbf{C}_9\text{diam}(B_m) \leq \mathbf{C}_{10}\|x\|_1^{1/4}$. This means
\[
|\Var \tau(0,x) - \Var F_m| \leq 4\mathbf{C}_4\mathbf{C}_{10} \|x\|_1^{1/2} \|x\|_1^{1/4} = \mathbf{C}_{11}\|x\|_1^{3/4}\ .
\]
\end{proof}
\subsection{Bounding the variance by the entropy}
Enumerate the edges of $\mathcal{E}^d$ as $e_1, e_2, \ldots$. We will bound the variance of $F_m$ using the martingale decomposition
\[F_m-\mathbb{E}F_m = \sum_{k=1}^\infty V_k,\]
where
\begin{equation} \label{eqn: Vkdef}
V_k = \mathbb{E}[F_m \mid \mathcal{F}_k] - \mathbb{E}[F_m \mid \mathcal{F}_{k-1}],
\end{equation}
and we have written $\mathcal{F}_k$ for the sigma-algebra generated by the weights $t_{e_1}, \ldots, t_{e_k}$ (with $\mathcal{F}_0$ trivial). In particular if $X\in L^1(\Omega, \mathbb{P})$, we have
\begin{equation}
\mathbb{E}[X\mid \mathcal{F}_k] = \int X\big((t_e)_{e\in \mathcal{E}^d}\big)\, \prod_{i\ge k+1}\mu(\mathrm{d}t_{e_i}).
\end{equation}
The idea now is to compare the variance of $F_m$ to $\sum_{k=1}^\infty Ent(V_k^2)$. The lower bound comes from the proof of \cite[Theorem~2.2]{FS}.
\begin{lma}[Falik-Samorodnitsky]\label{lem: lower_bound}
We have the lower bound
\begin{equation}\label{eq: BR_lower_bound}
\sum_{k=1}^\infty Ent(V_k^2) \geq \Var F_m ~ \log \left[ \frac{\Var F_m}{\sum_{k=1}^\infty (\mathbb{E} |V_k|)^2}\right] \ .
\end{equation}
\end{lma}
\begin{proof}
For $M \in \mathbb{N}$, define $\tilde F_m = \mathbb{E} [F_m \mid \mathcal{F}_M]$. We first use Proposition~\ref{prop: lb} and the fact that $\sum_{k=1}^M \mathbb{E} V_k^2 = \Var \tilde F_m$:
\[
\sum_{k=1}^M Ent(V_k^2) \geq \sum_{k=1}^M \mathbb{E} V_k^2 \log \left[ \frac{\mathbb{E}V_k^2}{(\mathbb{E} |V_k|)^2} \right] = - \Var \tilde F_m ~ \sum_{k=1}^M \frac{\mathbb{E}V_k^2}{\Var \tilde F_m} \log \left[ \frac{(\mathbb{E} |V_k|)^2}{\mathbb{E}V_k^2} \right]\ .
\]
Next use Jensen's inequality with the function $-\log$ and sum $\sum_{k=1}^M \frac{\mathbb{E} V_k^2}{\Var \tilde F_m} (\cdot)$ to get the lower bound
\[
-\Var \tilde F_m~ \log \left[ \sum_{k=1}^M \frac{\mathbb{E} V_k^2}{\Var \tilde F_m} \cdot \frac{(\mathbb{E} |V_k|)^2}{\mathbb{E} V_k^2} \right]\ ,
\]
which gives the lemma, after a limiting argument to pass to a countable sum.
\end{proof}
\section{Benaim and Rossignol's approach} \label{sec: BR}
In this section, we explain how the argument developed in \cite{benaimrossignol} can be extended, isolating a more general condition than the ``nearly gamma'' condition. It includes, for example, all power law distributions with $2+\epsilon$ moments. We emphasize that the content of this section is independent of the derivation of our main result.
In \cite{benaimrossignol}, the authors assume that the distribution
\[\mu = h(x)\,\mathrm{d}x,~ h \text{ continuous}\]
is an absolutely continuous measure such that
\[(\operatorname{supp} h)^\circ := \{x:h(x)>0\} \subset (0,\infty)\]
is an interval. Denoting by $G(x)$ the distribution function of the standard normal distribution and by $g$ its density, $H(x) =\int_{-\infty}^xh(t)\,\mathrm{d}t$, and $X$ an $N(0,1)$ variable, the random variable
\begin{equation}\label{eqn: cov}
Y=T(X),
\end{equation}
with $T=H^{-1}\circ G$, has distribution $\mu$. Recall the Gaussian logarithmic Sobolev inequality \cite{federbusch, gross, stam}: for any smooth $f:\mathbb{R}\rightarrow \mathbb{R}$
\begin{equation}
\label{eqn: gaussianLSI}
\mathbb{E} f^2(X)\log \frac{f^2(X)}{\mathbb{E}f^2(X)} \le 2\mathbb{E}(f'(X))^2.
\end{equation}
Combining \eqref{eqn: cov} and \eqref{eqn: gaussianLSI}, a calculation yields
\begin{equation}\label{eq: generalLSI}
Ent_\mu \left(f(Y)^2\right) \le 2\mathbb{E}_\mu\left[((\psi\cdot f')(Y))^2\right],
\end{equation}
where
\[\psi(Y) = \frac{(g\circ G^{-1} \circ H)(Y)}{h(Y)}\]
for any $f$ in a suitable Sobolev space.
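To spell out the calculation: differentiating the identity $H(T(x)) = G(x)$ gives $h(T(x))\,T'(x) = g(x)$, so that for smooth $f$,
\begin{equation*}
(f\circ T)'(X) = f'(T(X))\,T'(X) = f'(Y)\,\frac{(g\circ G^{-1}\circ H)(Y)}{h(Y)} = (\psi\cdot f')(Y)\ ,
\end{equation*}
and \eqref{eq: generalLSI} follows upon applying \eqref{eqn: gaussianLSI} to $f\circ T$.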
Benaim and Rossignol apply this inequality to the passage time, using inequality \eqref{eq: BR_lower_bound}.
It is shown in \cite{benaimrossignol}, along the same lines as the proof of Lemma \ref{eqn: entropybydelta} that \eqref{eq: generalLSI} implies
\begin{equation}\label{eqn: applyLSI}
\sum_{k=1}^\infty Ent_\mu(V_k^2)\le 2\sum_{j=1}^\infty \mathbb{E}[(\psi(t_{e_j})\partial_{t_{e_j}}F_m)^2],
\end{equation}
with $F_m$ as in \eqref{eqn: Fdef}. The derivative with respect to the edge weight can be expressed as
\begin{equation}\label{eqn: Fderivative}
\partial_{t_{e_j}}F_m = \frac{1}{\sharp B_m}\sum_{z\in B_m}\mathbf{1}_{\{e_j\in Geo(z,z+x)\}}\ .
\end{equation}
Observe that the right side of \eqref{eqn: Fderivative} is a decreasing function of the edge weight $t_{e_j}$.
The following simple asymptotics appear in \cite[Lemma~5.2]{benaimrossignol}:
\begin{lma}
\begin{align}
\label{eqn: Gasymp}
g\circ G^{-1}(y) &\sim y \sqrt{-2\log y}, \quad y\rightarrow 0,\\
g\circ G^{-1}(y) &\sim (1-y)\sqrt{-2\log(1-y)}, \quad y\rightarrow 1.
\end{align}
That is, in each case the ratio of the left to the right side tends to 1.
\end{lma}
Suppose that there is a constant $\mathbf{C}_{12}>0$ such that
\begin{equation}
\frac{H(t)\sqrt{-\log H(t)}}{h(t)}\le \mathbf{C}_{12} \label{eqn: leftendpoint}
\end{equation}
for all $t$ with $I\leq t \le I+ \delta$, with $\delta>0$ and $I$ the left endpoint of the interval $(\operatorname{supp} h)^\circ$ (as in \eqref{eq: I_def}). The condition \eqref{eqn: leftendpoint} holds, for example, if the density $h$ is monotone near $I$ or if $h(t)\asymp (t-I)^\alpha$ for some (integrable) power $\alpha$. The latter condition appears in \cite[Lemma~5.3]{benaimrossignol}.
For $M>0$ such that $F(M)<1$, the expectation in \eqref{eqn: applyLSI} can be computed as
\begin{align}
\mathbb{E}[(\psi(t_{e_j})\partial_{t_{e_j}}F_m)^2] &= \mathbb{E}\mathbb{E}_{\mu_j}[(\psi(t_{e_j})\partial_{t_{e_j}}F_m)^2] \nonumber \\
&= \mathbb{E}\mathbb{E}_{\mu_j}[(\psi(t_{e_j})\partial_{t_{e_j}}F_m)^2; t_{e_j} \le M]+ \mathbb{E}\mathbb{E}_{\mu_j}[(\psi(t_{e_j})\partial_{t_{e_j}}F_m)^2; t_{e_j} > M]\ .\label{eqn: split}
\end{align}
For $t_{e_j}\le M$, \eqref{eqn: Gasymp} implies that the first term in \eqref{eqn: split} is bounded by
\[\left(\max\left\{ \mathbf{C}_{12},\sup_{I+\delta \le t\le M} h(t)^{-1}\right\}\right)^2\cdot \mathbb{E}_{\mu_j}(\partial_{t_{e_j}}F_m)^2.\]
The maximum is finite by assumption, and we have, by Cauchy-Schwarz,
\[\mathbb{E}(\partial_{t_{e_j}}F_m)^2 \le \frac{1}{\sharp B_m}\sum_{z\in B_m}\mathbb{E}(\mathbf{1}_{\{e_j\in Geo(z,z+x)\}}).\]
From there, one can conclude the argument as in Sections~\ref{sec: finishing_deriv} and \ref{sec: proof}.
As for the second term in \eqref{eqn: split}, assume first that
\begin{equation}\label{eqn: sqrtnearlygamma}
\psi(t_{e_j})\le \mathbf{C}_{13}\sqrt{t_{e_j}}.
\end{equation}
This is the ``nearly gamma'' condition of Benaim and Rossignol. The right side of \eqref{eqn: sqrtnearlygamma} is increasing in $t_{e_j}$. Using this in \eqref{eqn: split} together with the Chebyshev association inequality \cite[Theorem~2.14]{BLM}, we find
\begin{align} \label{eqn: fkgapplication}
\mathbb{E}\mathbb{E}_{\mu_j}[(\psi(t_{e_j})\partial_{t_{e_j}}F_m)^2; t_{e_j} > M] &\le \mathbf{C}^2_{13}\mathbb{E}\mathbb{E}_{\mu_j}(\sqrt{t_{e_j}}\cdot \partial_{t_{e_j}}F_m)^2\\
&\le \mathbf{C}_{13}^2\mathbb{E}(t_{e_j})\cdot\mathbb{E}(\partial_{t_{e_j}}F_m)^2. \nonumber
\end{align}
The previous argument shows that the condition \eqref{eqn: sqrtnearlygamma} is not necessary: it is sufficient that $\psi$ be bounded by some increasing, square integrable function of $t_{e_j}$. Suppose for example that $t\mapsto h(t)$ is decreasing for $t>M$. In this case, by \eqref{eqn: Gasymp}, we have
\begin{align}
\psi(t_{e_j})\mathbf{1}_{\{t_{e_j} > M\}} &= \frac{(g\circ G^{-1} \circ H)(t_{e_j})}{h(t_{e_j})}\mathbf{1}_{\{t_{e_j} > M\}} \nonumber\\
&\le \mathbf{C}_{14} \frac{(1-H(t_{e_j}))\cdot \sqrt{-2\log(1-H(t_{e_j}))}}{h(t_{e_j})}\mathbf{1}_{\{t_{e_j} > M\}}. \label{eqn: big}
\end{align}
Let us denote by $K(t_{e_j})$ the expression in \eqref{eqn: big}. For $t>M$, we have
\begin{align}\label{eqn: weights}
1-H(t) = \int_t^\infty h(s)\,\mathrm{d}s &= \int_t^\infty s^{2/3+\epsilon} s^{-2/3-\epsilon} h(s)\,\mathrm{d}s \nonumber \\
&\le \left( \int_t^\infty s^{2+3 \epsilon} \, h(s)\mathrm{d}s\right)^{1/3}\left(\int_t^\infty s^{-1-3\epsilon/2}h(s)\,\mathrm{d}s\right)^{2/3} \nonumber \\
&\le \mathbf{C}_{15} h(t)^{2/3},\nonumber
\end{align}
assuming $h(s)$ is decreasing for $s> M$ and that the distribution possesses $2+3 \epsilon$ moments. We have used the $L^3-L^{3/2}$ H\"older inequality. This gives
\[K(t) \le \mathbf{C}_{14}\mathbf{C}_{15} h(t)^{-1/3}\sqrt{-2\log(1-H(t))} \cdot \mathbf{1}_{\{t>M\}}.\]
Thus $K(t)$ is bounded by a quantity which is increasing in $t$. Using the Chebyshev association inequality as in \eqref{eqn: fkgapplication}, we find
\[\mathbb{E}\mathbb{E}_{\mu_j}[(\psi(t_{e_j})\partial_{t_{e_j}}F_m)^2; t_{e_j} > M] \le (\mathbf{C}_{14}\mathbf{C}_{15})^2\mathbb{E}\left(h^{-1/3}(t_{e_j}) \sqrt{-2\log(1-H(t_{e_j}))} \right)^2\cdot\mathbb{E}(\partial_{t_{e_j}}F_m)^2.\]
We are left with the task of estimating the first expectation, which is
\[\int h(s)^{-2/3}(-2\log(1-H(s))) h(s)\,\mathrm{d}s=\int h(s)^{1/3}(-2\log(1-H(s)))\,\mathrm{d}s.\]
We again use polynomial weights and $L^3-L^{3/2}$:
\begin{align*}
\int h(s)^{1/3}(-2\log(1-H(s)))\,\mathrm{d}s &= \int s^{-2/3-\epsilon} s^{2/3+\epsilon}h(s)^{1/3}(-2\log(1-H(s)))\,\mathrm{d}s\\
&\le \left(\int s^{-1-3\epsilon/2}\,\mathrm{d}s\right)^{2/3}\left(\int s^{2+3\epsilon}(-2\log(1-H(s)))^3h(s)\,\mathrm{d}s\right)^{1/3}.
\end{align*}
A further application of H\"older's inequality allows one to control the logarithm, at the cost of an arbitrarily small increase in the moment assumption. It follows that
\[\mathbb{E}(\psi(t_{e_j})\partial_{t_{e_j}}F_m)^2\le \mathbf{C}_{16}\mathbb{E}(\partial_{t_{e_j}}F_m)^2 \]
if the distribution $\mu$ has $2+\epsilon'$ moments. In conclusion, Benaim and Rossignol's argument extends to the case of distributions with $2+\epsilon$ moments whose densities are positive and eventually decreasing.
One can derive many variants of the above, the key point being the application of positive association in \eqref{eqn: fkgapplication}.
\section{The lower bound} \label{sec: lower}
In this section we derive the first generalization of Kesten's geodesic length estimate and show how it is used to bound the sum $\sum_{k=1}^\infty (\mathbb{E}|V_k|)^2$ appearing in \eqref{eq: BR_lower_bound}. Let $\mathcal{G}$ be the set of all finite self-avoiding geodesics.
\begin{lma}\label{lem: exp_intersections}
Assuming \eqref{eqn: 2log} and \eqref{eqn: geodesics}, there exists $\mathbf{C}_{17}>0$ such that for all $x$ and all finite $E \subset \mathcal{E}^d$,
\[
\mathbb{E} \max_{\gamma \in \mathcal{G}} \# (E \cap \gamma) \leq \mathbf{C}_{17} \text{diam}(E)\ .
\]
\end{lma}
\begin{proof}
Choose $a, \mathbf{C}_2>0$ from Theorem~\ref{thm: kesten_exp}.
If $\# (E \cap \gamma) \geq \lambda$ for some $\gamma \in \mathcal{G}$, then we may find the first and last intersections (say $y$ and $z$ respectively) of $\gamma$ with $V$, the set of endpoints of edges in $E$. The portion of $\gamma$ from $y$ to $z$ is then a geodesic with at least $\lambda$ edges.
This means
\[
\mathbb{P}(\# (E \cap \gamma) \geq \lambda \text{ for some }\gamma \in \mathcal{G}) \leq (\# V) \exp(-\mathbf{C}_2 \lambda) + \mathbb{P}\left( \max_{y,z \in V}\tau(y,z) \geq a \lambda \right)\ .
\]
Therefore
\[
\mathbb{E} \max_{\gamma \in \mathcal{G}} \#(E \cap \gamma) \leq \text{diam}(E) + \sum_{\lambda=\text{diam}(E)}^\infty (\#V) \exp(-\mathbf{C}_2 \lambda) + \sum_{\lambda = \text{diam}(E) }^\infty \mathbb{P}\left( \max_{y,z \in V}\tau(y,z) \geq a \lambda \right)\ .
\]
By the inequality $\text{diam}(E) \geq \mathbf{C}_{18}(\#V)^{1/d}$ for some universal $\mathbf{C}_{18}$, the middle term is bounded uniformly in $E$, so we get the upper bound
\[
\mathbf{C}_{19}\text{diam}(E) + \frac{1}{a} \mathbb{E} \max_{y,z \in V} \tau(y,z)\ .
\]
By Lemma~\ref{lem: max_lemma}, this is bounded by $\mathbf{C}_{20} \text{diam}(E)$.
\end{proof}
We will now apply Lemma~\ref{lem: exp_intersections} to get an upper bound on $\sum_{k=1}^\infty (\mathbb{E}|V_k|)^2$. To do so, we use a simple lemma, a variant of which is already found in various places, including the work of Benaim-Rossignol \cite[Lemma~5.9]{benaimrossignol}. For its statement, we write an arbitrary element $\omega \in \Omega$ as $(t_{e^c},t_e)$, where $t_{e^c} = (t_f : f \neq e)$. Further, set
\[
S := \sup \text{supp}(\mu) = \sup \{x : F(x)<1\} \in \mathbb{R} \cup \{\infty\}\ .
\]
We use the following short-hand:
\[
\tau_z = \tau(z,z+x)\ .
\]
\begin{lma}\label{lem: deriv}
For $e \in \mathcal{E}^d$ and $z \in \mathbb{Z}^d$, the random variable
\[
D_{z,e} := \sup \{r < S : e \text{ is in a geodesic from }z \text{ to } z+x \text{ in } (t_{e^c},r)\}\
\]
has the following properties almost surely.
\begin{enumerate}
\item $D_{z,e} < \infty$.
\item For $s \leq t < S$,
\[
\tau_z(t_{e^c}, t) - \tau_z(t_{e^c},s) = \min\{t-s, (D_{z,e}-s)_+\}\ .
\]
\item For $s<D_{z,e}$, $e\in Geo(z,z+x)$ in $(t_{e^c},s)$.
\end{enumerate}
\end{lma}
\begin{proof}
Part 1 is clear if $S < \infty$. Otherwise choose any path $\gamma$ not including $e$. Then for $r$ larger than the passage time of this path, $e$ cannot be in a geodesic in $(t_{e^c},r)$, giving $D_{z,e} < \infty$.
If $e$ is in a geodesic $\gamma$ in $(t_{e^c},t)$ and $t \geq s$ then the passage time of $\gamma$ decreases by $t-s$ in $(t_{e^c},s)$. Since the passage time of no other path decreases by more than $t-s$, $\gamma$ is still a geodesic in $(t_{e^c},s)$. This shows that
\begin{equation}\label{eq: dec}
\mathbf{1}_{\{e \text{ is in a geodesic from } z \text{ to } z+x\}} \text{ is a non-increasing function of }t_e\ .
\end{equation}
Therefore if $t < D_{z,e}$, $e$ is in a geodesic in $(t_{e^c},t)$ and by the above argument, for any $s \leq t$, part 2 holds. We can then extend to $s \leq t \leq D_{z,e}$ by continuity.
If $D_{z,e} < s \leq t$ then $e$ is not in a geodesic from $z$ to $z+x$ in $(t_{e^c},s)$. By \eqref{eq: exist_geodesics}, we can almost surely find a geodesic $\gamma$ in $(t_{e^c},s)$ not containing $e$ and this path has the same passage time in $(t_{e^c},t)$. However all other paths have no smaller passage time, so $\tau_z(t_{e^c},t) - \tau_z(t_{e^c},s) = (D_{z,e}-s)_+$ almost surely, proving part 2 in this case. We can then extend the result to $D_{z,e} \leq s \leq t$ by continuity and for $s \leq D_{z,e} \leq t$ write
\[
\tau_z(t_{e^c},t) - \tau_z(t_{e^c},s) = \tau_z(t_{e^c},t) - \tau_z(t_{e^c},D_{z,e}) + \tau_z(t_{e^c},D_{z,e}) - \tau_z(t_{e^c},s)\ ,
\]
and use the other cases to complete the proof.
For part 3, let $s<D_{z,e}$, so that by \eqref{eq: dec}, $e$ is in a geodesic $\gamma_1$ in $(t_{e^c},\frac{s+D_{z,e}}{2})$ from $z$ to $x+z$. Assume for a contradiction that $e$ is not in every geodesic from $z$ to $x+z$ in $(t_{e^c},s)$, and choose $\gamma_2$ as one that does not contain $e$. Because $\frac{s + D_{z,e}}{2} \geq s$, $\gamma_1$ is still a geodesic in $(t_{e^c},s)$ and therefore has the same passage time in this configuration as $\gamma_2$. But then in $(t_{e^c},\frac{s+D_{z,e}}{2})$ it has strictly larger passage time, contradicting the fact that it is a geodesic.
\end{proof}
\begin{prop}\label{prop: 5.3}
Assuming \eqref{eqn: 2log} and \eqref{eqn: geodesics}, there exists $\mathbf{C}_{21}$ such that
\[
\sum_{k=1}^\infty (\mathbb{E}|V_k|)^2 \leq \mathbf{C}_{21} \|x\|_1^{\frac{5-d}{4}} \text{ for all } x\ .
\]
\end{prop}
\begin{proof}
Using the definition of $V_k$,
\begin{align}
\mathbb{E}|V_k| &= \frac{1}{\# B_m} \mathbb{E} \left| \mathbb{E} \left[ \sum_{z \in B_ m} \tau_z \mid \mathcal{F}_k \right] - \mathbb{E} \left[ \sum_{z \in B_m} \tau_z \mid \mathcal{F}_{k-1} \right] \right| \nonumber \\
&\leq \frac{1}{\# B_m} \sum_{z \in B_m} \mathbb{E} \left| \mathbb{E} \left[ \tau_z \mid \mathcal{F}_k \right] - \mathbb{E} \left[ \tau_z \mid \mathcal{F}_{k-1} \right] \right| \label{eq: tomato_salsa}\ .
\end{align}
Write a configuration $\omega$ as $(t_{<k},t_{e_k},t_{>k})$, where
\[
t_{<k} = (t_{e_j}:j < k) \text{ and } t_{>k} = (t_{e_j} : j > k)\ .
\]
The summand in \eqref{eq: tomato_salsa} becomes
\begin{align*}
&~\int \left| \int \tau_z(t_{<k},t,t_{>k}) \mathbb{P}(\text{d}t_{>k}) - \int \tau_z(t_{<k},s,t_{>k}) \mu(\text{d}s)\mathbb{P}(\text{d}t_{>k}) \right| \mu(\text{d}t) \mathbb{P}(\text{d}t_{<k}) \\
\leq 2 &~ \mathbb{E} \iint_{t \geq s} \left| \tau_z(t_{<k},t,t_{>k}) - \tau_z(t_{<k},s,t_{>k}) \right| \mu(\text{d}s) \mu(\text{d}t)\ .
\end{align*}
By Lemma~\ref{lem: deriv} and $\mathbb{E}t_e<\infty$, this equals
\[
\mathbb{E} \iint_{t \geq s} \min\{ t-s, (D_{z,e_k}-s)_+\} \mu(\text{d}s) \mu(\text{d}t) \leq 2 \int t~ \int_{s < D_{z,e_k}} \mu(\text{d}s) \mu(\text{d}t) = \mathbf{C}_{22} F(D_{z,e_k}^-)\ .
\]
Using part 3 of the same lemma, $\mathbb{E} F(D_{z,e_k}^-) \leq \mathbb{P}(e_k \in Geo(z,z+x))$. Therefore
\[
\mathbb{E}|V_k| \leq \mathbf{C}_{22} \frac{1}{\#B_m} \sum_{z \in B_m} \mathbb{P}(e_k \in Geo(z,z+x))\ .
\]
By translation invariance, the above probability equals $\mathbb{P}(e_k+z \in Geo(0,x))$, so we get the bound $\frac{\mathbf{C}_{22}}{\#B_m} \mathbb{E} \# \left[ Geo(0,x) \cap \{e_k+z : z \in B_m\} \right]$. Lemma~\ref{lem: exp_intersections} provides $\mathbf{C}_{23}$ such that this is no bigger than $\mathbf{C}_{23}\frac{\text{ diam }B_m}{ \#B_m}$. Hence
\[
\mathbb{E} |V_k| \leq \mathbf{C}_{24} \|x\|_1^{\frac{1-d}{4}}\ .
\]
This leads to
\[
\sum_{k=1}^\infty (\mathbb{E} |V_k|)^2 \leq \mathbf{C}_{24} \|x\|_1^{\frac{1-d}{4}} \sum_{k=1}^\infty \mathbb{E}|V_k| \leq \mathbf{C}_{25} \|x\|_1^{\frac{1-d}{4}} \frac{1}{\# B_m}\sum_{z\in B_m} \sum_{k=1}^\infty \mathbb{P}(e_k \in Geo(z,z+x)) \leq \mathbf{C}_{26} \|x\|_1^{\frac{5-d}{4}}\ .
\]
In the last inequality we have used Theorem~\ref{thm: kesten_length}.
\end{proof}
\section{Sublinear variance for general distributions}\label{sec: derivatives}
Combining the results from the previous sections, we have shown so far that if \eqref{eqn: 2log} and \eqref{eqn: geodesics} hold then
\begin{equation}\label{eq: breakdown}
\Var \tau(0,x) \leq \Var F_m + \mathbf{C}_3\|x\|_1^{3/4} \leq \mathbf{C}_{3}\|x\|_1^{3/4} + \left[ \log \left[ \frac{\Var F_m}{\|x\|_1^{\frac{5-d}{4}}} \right] \right]^{-1} \sum_{k=1}^\infty Ent(V_k^2)\ .
\end{equation}
Our goal now is to bound the sum by $C\|x\|_1$. We will do this using a Bernoulli encoding.
\subsection{Bernoulli encoding} \label{sec: encoding}
We will now view our edge variables as the push-forward of a Bernoulli sequence. Specifically, for each edge $e$, let $\Omega_e$ be a copy of $\{0,1\}^\mathbb{N}$ with the product sigma-algebra. We will construct a measurable map $T_e:\Omega_e \to \mathbb{R}$ using the distribution function $F$. To do this, we create a sequence of partitions of the support of $\mu$. Recalling $I := \inf \text{supp}(\mu) = \inf \{x : F(x) > 0\}$, set
\[
a_{0,j} = I \text{ and }a_{i,j} = \min\left\{x : F(x) \geq \frac{i}{2^j}\right\} \text{ for } j \geq 1 \text{ and } 1 \leq i \leq 2^j-1\ .
\]
Note that by right continuity of $F$, the minimum above is attained; that is,
\begin{equation}\label{eq: rt_cty}
F(a_{i,j}) \geq \frac{i}{2^j} \text{ for } j \geq 1 \text{ and } 0 \leq i \leq 2^j -1\ .
\end{equation}
Let us note two properties of the sequence.
\begin{equation}\label{eq: partition}
\text{For }j \geq 1,~ a_{0,j} \leq a_{1,j} \leq \cdots \leq a_{2^j-1,j}\ .
\end{equation}
\begin{equation}\label{eq: if_f}
\text{For }i=0, \ldots, 2^j-1, ~ x \geq a_{i,j} \text{ if and only if }F(x) \geq \frac{i}{2^j} \text{ and } x \geq a_{0,j}\ .
\end{equation}
Each $\omega \in \Omega_e$ gives us an ``address'' for a point in the support of $\mu$. Given $\omega = (\omega_1, \omega_2, \ldots)$ and $j \geq 1$, we associate a number $T_j(\omega)$ by
\[
T_j(\omega) = a_{i(\omega,j),j}, \text{ where } i(\omega,j) = \sum_{l=1}^j 2^{j-l} \omega_l\ .
\]
$i(\omega,j)$ is just the number between $0$ and $2^j - 1$ that corresponds to the binary number $\omega_1 \cdots \omega_j$. It will be important to note that if $\omega_i \leq \hat \omega_i$ for all $i \geq 1$ (written $\omega \leq \hat \omega$), then $i(\omega,j) \leq i(\hat \omega,j)$ for all $j \geq 1$. This, combined with the monotonicity statement \eqref{eq: partition}, implies
\begin{equation}\label{eq: monotone_2}
\omega \leq \hat \omega \Rightarrow T_j(\omega) \leq T_j(\hat \omega) \text{ for all } j \geq 1\ .
\end{equation}
It is well-known that one can represent Lebesgue measure on $[0,1]$ using binary expansions and Bernoulli sequences. One way to view the encoding $T$ in Lemma~\ref{lem: encoding} is a composition of this representation with the right-continuous inverse of the distribution function $F$. The function $T_j$ instead uses an inverse approximated by simple functions taking dyadic values.
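Although no computation enters the proofs, the encoding is simple to evaluate numerically. The following sketch is an illustration only; the function names and the quantile routine $Q(p)=\min\{x: F(x)\ge p\}$ are our own notation. For example, with $j=3$ and $\omega=(1,0,1,\ldots)$ one gets $i(\omega,3)=5$ and $T_3(\omega)=a_{5,3}$.
\begin{verbatim}
from math import log

def T_j(bits, j, Q, I):
    """Dyadic approximation T_j(omega) of the Bernoulli encoding.

    bits -- 0/1 values omega_1, omega_2, ... (at least j of them)
    Q    -- quantile function: Q(p) = min{x : F(x) >= p} for p > 0
    I    -- infimum of the support of mu (value of a_{0,j})
    """
    i = sum(b << (j - l) for l, b in enumerate(bits[:j], start=1))
    return I if i == 0 else Q(i / 2 ** j)

# Illustration with the Exp(1) distribution, whose quantile
# function is Q(p) = -log(1 - p):
Q_exp = lambda p: -log(1.0 - p)
print(T_j([1, 0, 1], 3, Q_exp, 0.0))  # a_{5,3} = Q(5/8)
\end{verbatim}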
\begin{lma}\label{lem: encoding}
For each $\omega$, the numbers $(T_j(\omega))$ form a non-decreasing sequence and have a limit $T(\omega)$. This map $T:\Omega_e \to \mathbb{R} \cup \{\infty\}$ is measurable and has the following properties.
\begin{enumerate}
\item (Monotonicity) If $\omega \leq \hat \omega$ then $T(\omega) \leq T(\hat \omega)$.
\item (Nesting) For any $\omega \in \Omega_e$ and $j \geq 1$, if $i(\omega,j) < 2^j-1$ then
\[
a_{i(\omega,j),j} \leq T(\omega) \leq a_{i(\omega,j)+1,j}\ .
\]
\item If $\omega_k = 0$ for some $k \geq 1$ then $T(\omega) < \infty$.
\item Letting $\pi$ be the product measure $\prod_{l \in \mathbb{N}} \pi_l$, with each $\pi_l$ uniform on $\{0,1\}$, we have
\[
\pi \circ T^{-1} = \mu\ .
\]
By part 3, $T$ is $\pi$-almost surely finite.
\end{enumerate}
\end{lma}
\begin{proof}
The functions $T_j$ are each measurable since their ranges are finite and the pre-image of each point is a cylinder in $\Omega_e$. If we show that $T_j \to T$ pointwise then $T$ will also be measurable. Given $\omega \in \Omega_e$, we have
\[
\frac{i(\omega,j)}{2^j} = \frac{1}{2^j} \sum_{l=1}^j 2^{j-l}\omega_l = \sum_{l=1}^j 2^{-l} \omega_l \leq \sum_{l=1}^{j+1} 2^{-l} \omega_l = \frac{i(\omega,j+1)}{2^{j+1}}\ .
\]
Therefore if $x$ is such that $F(x) \geq \frac{i(\omega,j+1)}{2^{j+1}}$ then also $F(x) \geq \frac{i(\omega,j)}{2^j}$. This means that if $i(\omega,j) > 0$,
\[
T_j(\omega) = \min \left\{x : F(x) \geq \frac{i(\omega,j)}{2^j} \right\} \leq \min \left\{x : F(x) \geq \frac{i(\omega,j+1)}{2^{j+1}} \right\}= T_{j+1}(\omega)\ .
\]
Otherwise if $i(\omega,j)=0$ then $T_{j+1}(\omega) \geq a_{0,j+1}=a_{0,j}=T_j(\omega)$. In either case, $(T_j(\omega))$ is monotone and has a limit $T(\omega)$.
For part 1, we simply take limits in \eqref{eq: monotone_2}. To prove part 2, we note the lower bound follows from monotonicity. For the upper bound, take $\omega \in \Omega_e$ and let $k \geq j$. Then
\[
\frac{i(\omega,k)}{2^k} = \sum_{l=1}^k 2^{-l} \omega_l \leq \sum_{l=1}^j 2^{-l} \omega_l + \sum_{l=j+1}^\infty 2^{-l} = \frac{i(\omega,j)+1}{2^j} \leq \frac{2^j-1}{2^j}\ .
\]
If $\omega$ is the zero sequence then $T(\omega) = I$ and $T(\omega) \leq a_{i(\omega,j)+1,j}$. Otherwise we can find $k \geq j$ such that $i(\omega,k) \neq 0$. For this $k$, $F(x) \geq \frac{i(\omega,j)+1}{2^j}$ implies $F(x) \geq \frac{i(\omega,k)}{2^k}$, giving
\[
T_k(\omega) = a_{i(\omega,k),k} \leq a_{i(\omega,j)+1,j}\ .
\]
Taking the limit in $k$ gives the result.
In part 3, we assume that $\omega_k=0$ for some $k \geq 1$. Then $i(\omega,k+1) < 2^{k+1}-1$ and therefore by part 2,
\[
T(\omega) \leq a_{i(\omega,k+1)+1,k+1} \leq a_{2^{k+1}-1,k+1} <\infty\ .
\]
Last we must show that $\pi \circ T^{-1} = \mu$. The first step is to show that for each $x \in \mathbb{R}$,
\[
\pi \circ T_j^{-1} ((-\infty,x]) \to \pi \circ T^{-1}((-\infty,x])\ .
\]
Consider the sets
\[
S_j(x) = \{\omega \in \Omega_e : T_j(\omega) \leq x\}\ .
\]
If $T_{j+1}(\omega) \leq x$ then $T_j(\omega) \leq T_{j+1}(\omega) \leq x$, so these sets are decreasing. If $\omega$ is in their intersection then $T_j(\omega) \leq x$ for all $j$. Since $T_j(\omega) \to T(\omega)$ this means $T(\omega) \leq x$ and thus $\omega \in S(x) := \{\omega \in \Omega_e : T(\omega) \leq x\}$. Conversely, if $\omega \in S(x)$ then $T(\omega) \leq x$ and so $T_j(\omega) \leq T(\omega) \leq x$ for all $j$, meaning $\omega \in \cap_j S_j(x)$. Therefore $\pi \circ T_j^{-1} ((-\infty,x])$ converges to $\pi \circ T^{-1}((-\infty,x])$.
Next we claim that
\begin{equation}\label{eq: equiv}
x \geq a_{0,j} \Rightarrow \pi \circ T_j^{-1} ((-\infty,x]) = 2^{-j} \max \left\{i+1 : F(x) \geq \frac{i}{2^j} \right\}\ .
\end{equation}
The left side of the equality is $\pi(\{\omega : T_j(\omega) \leq x\})$. The function $T_j$ is constant on sets of $\omega$ with the same first $j$ entries. By definition, if $\omega$ has first $j$ entries $\omega_1 \cdots \omega_j$ then $T_j(\omega) = T_j(\omega_1 \cdots \omega_j) = a_{i(\omega,j),j}$. So
\[
\pi\circ T_j^{-1}((-\infty,x]) = 2^{-j} \# \left\{ (\omega_1, \cdots, \omega_j) : a_{i(\omega,j),j} \leq x\right\}\ .
\]
Also, since $x \geq a_{0,j}$, \eqref{eq: if_f} gives
\[
\pi\circ T_j^{-1} ((-\infty,x]) = 2^{-j} \# \left\{(\omega_1, \cdots, \omega_j) : F(x) \geq \sum_{l=1}^j 2^{-l} \omega_l \right\}\ .
\]
This is exactly the right side of \eqref{eq: equiv}.
By \eqref{eq: equiv}, $\left| \pi \circ T_j^{-1}((-\infty,x]) - F(x) \right| \leq 2^{-j}$ whenever $x \geq a_{0,j}$ (and both quantities vanish when $x < a_{0,j}$), so $\pi\circ T_j^{-1} ((-\infty,x]) \to F(x)$. Together with the convergence $\pi \circ T_j^{-1}((-\infty,x]) \to \pi \circ T^{-1}((-\infty,x])$ established above, this gives $\pi \circ T^{-1}((-\infty,x]) = F(x)$ for all $x$, completing the proof of part 4.
\end{proof}
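As a simple illustration of the encoding (not needed in what follows), take $\mu = \tfrac12(\delta_0+\delta_1)$, so that $I=0$, $F \equiv \tfrac12$ on $[0,1)$ and $F \equiv 1$ on $[1,\infty)$. Then $a_{0,j}=0$, $a_{i,j}=0$ whenever $0 < i/2^j \le \tfrac12$ and $a_{i,j}=1$ whenever $i/2^j > \tfrac12$, so
\[
T(\omega) = \begin{cases} 1 & \text{ if } \omega_1 = 1 \text{ and } \omega_l = 1 \text{ for some } l \geq 2,\\ 0 & \text{ otherwise,} \end{cases}
\]
and indeed $\pi(T=1)=\tfrac12 = \mu(\{1\})$.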
\subsection{Bound on discrete derivatives}
In this section we prove the result:
\begin{thm}\label{thm: derivative_bound}
Assume \eqref{eqn: 2log} and \eqref{eqn: geodesics}. There exists $\mathbf{C}_{27}$ such that
\[
\sum_{k=1}^\infty Ent(V_k^2) \leq \mathbf{C}_{27}\|x\|_1\ .
\]
\end{thm}
The proof will be broken into subsections. In the first we apply Bonami's inequality to the Bernoulli encoding of $F_m$ to get a sum involving discrete derivatives. The next subsection uses the quantities $D_{z,e}$ from Lemma~\ref{lem: deriv} to control the sum of derivatives. In the third subsection, we give a lemma based on the theory of greedy lattice animals and in the final subsection, we use this lemma to achieve the bound $\mathbf{C}_{27}\|x\|_1$.
\subsubsection{Application of Bonami's inequality}
We will view $F_m$ as a function of sequences of Bernoulli variables, so define
\[
\Omega_B = \prod_e \Omega_e
\]
where $\Omega_e$ is, as in the last section, a copy of $\{0,1\}^\mathbb{N}$. The measure on $\Omega_e$ is $\pi_e$, a product of the form $\prod_{j \geq 1} \pi_{e,j}$ with $\pi_{e,j}$ uniform on $\{0,1\}$ and the measure on $\Omega_B$ is $\pi : = \prod_e \pi_e$. Here as usual we use the product sigma-algebra. A typical element of $\Omega_B$ is denoted $\omega_B$ and we list the collection of individual Bernoulli variables as
\[
\omega_B = \left\{ \omega_{e,j} : e \in \mathcal{E}^d,~ j \geq 1\right\}\ .
\]
Last, calling $T_e$ the map from Lemma~\ref{lem: encoding} on $\Omega_e$, the product map $T:=\prod_e T_e : \Omega_B \to \Omega$ is defined
\[
T (\omega_B) = (T_e(\omega_e) : e \in \mathcal{E}^d)\ .
\]
It is measurable and, by Lemma~\ref{lem: encoding}, pushes the measure $\pi$ forward to $\mathbb{P}$, our original product measure on $\Omega$.
We consider $F_m$ as a function on $\Omega_B$; that is, we set $G = F_m \circ T$. The goal is to estimate the derivative of $G$, so define the derivative relative to $\omega_{e,j}$ of a function $f:\Omega_B \to \mathbb{R}$ as
\[
\left( \Delta_{e,j} f\right) (\omega) = f(\omega^{e,j,+}) - f(\omega^{e,j,-})\ ,
\]
where $\omega^{e,j,+}$ agrees with $\omega$ except possibly at $\omega_{e,j}$, where it is 1, and $\omega^{e,j,-}$ agrees with $\omega$ except possibly at $\omega_{e,j}$, where it is 0. Then the following analogue of \cite[Eq.~(3)]{benaimrossignol} holds.
\begin{lma} \label{eqn: entropybydelta}
We have the following inequality:
\[
\sum_{k=1}^\infty Ent(V_k^2) \leq \sum_e \sum_{j=1}^\infty \mathbb{E}_\pi (\Delta_{e,j} G)^2\ .
\]
\end{lma}
\begin{proof}
Define a filtration of $\Omega_B$ by enumerating the edges of $\mathcal{E}^d$ as $\{e_1, e_2, \ldots\}$ as before and setting $\mathcal{G}_k$ as the sigma-algebra generated by $\{\omega_{e_r,j} : r \leq k, j \in \mathbb{N}\}$. Also define the martingale differences $W_k = \mathbb{E}_\pi \left[ G \mid \mathcal{G}_k\right] - \mathbb{E}_\pi \left[ G \mid \mathcal{G}_{k-1}\right]$ (with $\mathcal{G}_0$ the trivial sigma-algebra). It is straightforward to verify that, because $\mathbb{P} = \pi \circ T^{-1}$,
\[
\mathbb{E}[F_m \mid \mathcal{F}_k] (T(\omega_B)) = \mathbb{E}_\pi[G \mid \mathcal{G}_k](\omega_B) \text{ for } \pi\text{-almost every }\omega_B \in \Omega_B\ .
\]
Therefore $V_k \circ T = W_k$ $\pi$-almost surely, so $Ent(V_k^2) = Ent_\pi(W_k^2)$ for each $k$. Using tensorization of entropy (Theorem~\ref{thm: tensorization}),
\[
\sum_{k=1}^\infty Ent_\pi(W_k^2) \leq \sum_{k=1}^\infty \mathbb{E}_\pi \sum_e \sum_{j=1}^\infty Ent_{\pi_{e,j}}W_k^2\ .
\]
For this to be true, we need to check the condition $W_k^2 \in L^2$, or that $V_k \in L^4$. Since $V_k$ is a difference of martingale sequence terms, it suffices to show that $\tau(0,x)$ is in $L^4$. But this follows from \cite[Lemma~3.1]{coxdurrett}: if $Y = \min\{t_1, \ldots, t_{2d}\}$ is a minimum of $2d$ i.i.d. variables distributed as $t_e$, then $\tau(0,x) \in L^4$ if and only if $\mathbb{E}Y^4<\infty$. Since $\mathbb{E}t_e^2 < \infty$ by \eqref{eqn: 2log}, Chebyshev's inequality gives $\mathbb{P}(t_e > y) \leq \min\{1, Cy^{-2}\}$, so
\[
\mathbb{E}Y^\alpha = \alpha \int_0^\infty y^{\alpha-1} \mathbb{P}(t_e > y)^{2d}~\text{d}y \leq \alpha\int_0^1 y^{\alpha-1}~\text{d}y + C'\int_1^\infty y^{\alpha-1-4d}~\text{d}y\ ,
\]
which is finite if $\alpha < 4d$. In particular, since $d \geq 2$, one has $W_k^2 \in L^2$.
Recall the Bonami-Gross inequality \cite{bonami, gross}, which says that if $f:\{0,1\} \to \mathbb{R}$ and $\nu$ is uniform on $\{0,1\}$ then
\[
Ent_\nu f^2 \leq (1/2)(f(1)-f(0))^2\ .
\]
Therefore we get the upper bound $\sum_{j=1}^\infty \sum_e \sum_{k=1}^\infty\mathbb{E}_\pi(\Delta_{e,j}W_k)^2$. For fixed $j \geq 1$ and $e = e_i$ (that is, $e$ is the $i$-th edge in the enumeration),
\[
\Delta_{e,j}W_k = \begin{cases}
0 & \text{ if } k<i \\
\mathbb{E}_\pi[\Delta_{e,j}G \mid \mathcal{G}_k] & \text{ if } k=i \\
\mathbb{E}_\pi[\Delta_{e,j}G \mid \mathcal{G}_k] - \mathbb{E}_\pi[\Delta_{e,j}G \mid \mathcal{G}_{k-1}] & \text{ if } k > i
\end{cases}\ .
\]
The first follows because when $k<i$, neither $\mathbb{E}_\pi[G \mid \mathcal{G}_k]$ nor $\mathbb{E}_\pi[G \mid \mathcal{G}_{k-1}]$ depends on $\omega_{e,j}$, as this variable is integrated out. A similar idea works for the second, noting that $\Delta_{e,j} \mathbb{E}_\pi[G \mid \mathcal{G}_{k-1}] = 0$ when $k=i$. The third is straightforward. Using orthogonality of martingale differences, $\sum_{k=1}^\infty \mathbb{E}_\pi (\Delta_{e,j}W_k)^2 = \mathbb{E}_\pi(\Delta_{e,j} G)^2$ and this completes the proof.
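For completeness, we spell out the orthogonality step: for $k \geq i$ the differences $\Delta_{e,j}W_k$ are exactly the increments of the $L^2$-bounded martingale $k \mapsto \mathbb{E}_\pi[\Delta_{e,j}G \mid \mathcal{G}_k]$, so that
\[
\sum_{k=1}^\infty \mathbb{E}_\pi \left(\Delta_{e,j}W_k\right)^2 = \lim_{K \to \infty} \mathbb{E}_\pi \left( \mathbb{E}_\pi\left[\Delta_{e,j}G \mid \mathcal{G}_K\right] \right)^2 = \mathbb{E}_\pi \left(\Delta_{e,j} G\right)^2\ .
\]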
\end{proof}
\subsubsection{Control by edges in geodesics}
The first major step is to bound the sum of discrete derivatives by a weighted average of edge-weights in geodesics. The bound we give is analogous to what would appear if we had a log-Sobolev inequality for $\mu$ (see the approach in Benaim-Rossignol \cite{benaimrossignol}); however, we get a logarithmic singularity as $t_e \downarrow I$.
\begin{prop}\label{prop: intermediate}
There exists $\mathbf{C}_{28}$ such that for all $x$,
\[
\sum_e \sum_{j=1}^\infty \mathbb{E}_\pi (\Delta_{e,j} G)^2 \leq \mathbf{C}_{28} \mathbb{E} \sum_e (1-\log F(t_e)) \mathbf{1}_{\{e \in Geo(0,x)\}}\ .
\]
\end{prop}
\begin{proof}
We begin by using convexity of the square function to get
\begin{equation}\label{eq: first_eq}
\sum_e \sum_{j=1}^\infty \mathbb{E}_\pi \left(\Delta_{e,j} G \right)^2 \leq \frac{1}{\#B_m} \sum_{z \in B_m} \left[ \sum_e \sum_{j=1}^\infty \mathbb{E}_\pi \left( \Delta_{e,j} \tau_z \right)^2 \right]\ ,
\end{equation}
where $\tau_z=\tau(z,z+x)$. Write $\mathbb{E}_{e^c}$ for expectation relative to $\prod_{f \neq e} \pi_f$ and for any $i\geq 1$, let $\pi_{e,\geq i}$ be the measure $\prod_{k \geq i} \pi_{e,k}$. Further, for $j \geq 1$ write
\[
\omega_B = (\omega_{e^c}, \omega_{e,< j}, \omega_{e,j}, \omega_{e,> j})\ ,
\]
where $\omega_{e^c}$ is the configuration $\omega_B$ projected on the coordinates $(\omega_{f,k} : f \neq e,~ k \geq 1)$, $\omega_{e,<j}$ is $\omega_B$ projected on the coordinates $(\omega_{e,k} : k < j)$ and $\omega_{e, > j}$ is $\omega_B$ projected on the coordinates $(\omega_{e,k} : k > j)$.
The expectation in \eqref{eq: first_eq} is now
\begin{align}
&~\mathbb{E}_{e^c} \mathbb{E}_{\pi_{e,1}} \cdots \mathbb{E}_{\pi_{e,j-1}} \left[ \mathbb{E}_{\pi_{e,\geq j}} \left( \Delta_{e,j} \tau_z \right)^2 \right] \nonumber \\
=& ~\mathbb{E}_{e^c} \left[ \frac{1}{2^{j-1}} \sum_{\sigma \in \{0,1\}^{j-1}} \left[ \mathbb{E}_{\pi_{e,\geq j}} \left( \Delta_{e,j} \tau_z (\omega_{e^c}, \sigma, \omega_{e,j}, \omega_{e, > j}) \right)^2 \right] \right] \label{eq: second_eq}\ ,
\end{align}
and the innermost term is
\begin{equation}\label{eq: sum_on_home}
\mathbb{E}_{\pi_{e, \geq j}} \left( \tau_z(\omega_{e^c},\sigma, 1, \omega_{e,>j}) - \tau_z(\omega_{e^c}, \sigma,0, \omega_{e,>j}) \right)^2 \ .
\end{equation}
Because of Lemma~\ref{lem: deriv}, we can rewrite \eqref{eq: sum_on_home} as
\begin{equation}\label{eq: hopefully_last}
\mathbb{E}_{\pi_{e,\geq j}} \min\left\{ (T_e(\sigma,1,\omega_{e,>j})-T_e(\sigma,0,\omega_{e,>j}))^2, (D_{z,e}-T_e(\sigma,0,\omega_{e,>j}))_+^2\right\}\ .
\end{equation}
Since $T_e \geq I$, the minimum in \eqref{eq: hopefully_last} vanishes unless $D_{z,e} > I$; this allows us to assume $D_{z,e} > I$:
\begin{equation}\label{eq: D_{z,e}_reduction}
\mathbb{E}_\pi (\Delta_{e,j} \tau_z)^2 = \mathbb{E}_{e^c} \left[ \frac{1}{2^{j-1}} \sum_{\sigma \in \{0,1\}^{j-1}} \left[ \mathbb{E}_{\pi_{e,\geq j}} \left( \Delta_{e,j} \tau_z (\omega_{e^c}, \sigma, \omega_{e,j}, \omega_{e, > j}) \right)^2 \right] \mathbf{1}_{\{I<D_{z,e}\}}\right] \ .
\end{equation}
To simplify notation in the case $j \geq 2$, we write the values $a_{1,j-1}, \ldots, a_{2^{j-1}-1,j-1}$ as $a_1, \ldots, a_{2^{j-1}-1}$ and for a fixed $\sigma \in \{0,1\}^{j-1}$, $a_\sigma$ for $a_{i((\sigma,0,\omega_{e,>j}),j-1),j-1}$ (note that this does not depend on the configuration outside of $\sigma$). Also we write $a'_\sigma$ for the element of the partition that follows $a_\sigma$ (when there is one; that is, when $\sigma$ is not $(1, \ldots, 1)$). Last, we abbreviate $T_e(\sigma,c,\omega_{e,>j})$ by $T_{e,j}(\sigma,c)$ for $c=0,1$. With this notation, we claim the inequalities
\[
a_\sigma \leq T_{e,j}(\sigma,0) \leq T_{e,j}(\sigma,1) \leq a'_\sigma \text{ when } \sigma \neq (1, \ldots, 1) \text{ and } j \geq 2\ .
\]
The first and third inequalities follow from the nesting part of Lemma~\ref{lem: encoding}. The second holds because of the monotonicity part. Therefore we can give an upper bound for \eqref{eq: hopefully_last} when $j \geq 2$ of
\[
\begin{cases}
0 & \text{ if } D_{z,e} \leq a_\sigma \\
\mathbb{E}_{\pi_{e,\geq j}} \min\{D_{z,e}-a_\sigma, T_{e,j}(\sigma,1) - a_\sigma\}^2 \mathbf{1}_{\{T_{e,j}(\sigma,0) < D_{z,e}\}} & \text{ if } \sigma \neq (1, \ldots, 1) \text{ and } a_\sigma < D_{z,e} \leq a'_\sigma\\
&\text{ or } \sigma = (1, \ldots, 1) \\
(a'_\sigma - a_\sigma )^2 & \text{ if } a'_\sigma \leq D_{z,e}
\end{cases}\ .
\]
(Here and above we have strict inequality in the condition of the indicator function since when $T_e(\sigma,0,\omega_{e,>j})=D_{z,e}$, \eqref{eq: hopefully_last} is zero.) With this, when $j \geq 2$, the integrand of $\mathbb{E}_{e^c}$ in \eqref{eq: D_{z,e}_reduction} is no bigger than
\begin{align}
\frac{1}{2^{j-1}} &\bigg[ (a_1-a_0)^2 + \cdots + (a_s-a_{s-1})^2 \nonumber \\
&+ \mathbb{E}_{\pi_{e,\geq j}} \min\{D_{z,e}-a_s,T_{e,j}(\sigma(D_{z,e}),1)-a_s\}^2 \mathbf{1}_{\{T_{e,j}(\sigma(D_{z,e}),0) < D_{z,e}\}} \bigg] \mathbf{1}_{\{I<D_{z,e}\}} \label{eq: descending}\ .
\end{align}
Here we have written $s$ for the largest index $i$ such that $a_i < D_{z,e}$ and $\sigma(D_{z,e})$ for the configuration such that $a_{\sigma(D_{z,e})} = a_s$. In the case $j=1$, we have the similar upper bound
\begin{equation}\label{eq: j_one}
\mathbb{E}_{\pi_{e,\geq j}} \min\{ D_{z,e}-I, T_{e,1}(1)-I\}^2 \mathbf{1}_{\{T_{e,1}(0) < D_{z,e}\}} \mathbf{1}_{\{I<D_{z,e}\}}\ .
\end{equation}
Either way, writing $\vec{1}_j$ (respectively $\vec{0}_j$) for the configuration $(1, \ldots, 1)$ (respectively $(0, \ldots, 0)$) of length $j$,
\begin{equation}\label{eq: new_eq_1}
\mathbb{E}_{\pi_e} (\Delta_{e,j} \tau_z)^2 \leq \frac{1}{2^{j-1}} \mathbb{E}_{\pi_{e,\geq j}} \left[ \min\{D_{z,e},T_{e,j}(\vec 1_{j-1},1)\}^2 \mathbf{1}_{\{T_{e,j}(\vec 0_{j-1},0) < D_{z,e}\}}\right] \mathbf{1}_{\{I<D_{z,e}\}}\ .
\end{equation}
Note that $\min\{D_{z,e},T_{e,j}(\vec 1_{j-1},1)\}^2$ is an increasing function of $\omega_{e,\geq j}$ (with all other variables fixed), whereas $\mathbf{1}_{\{T_{e,j}(\vec 0_{j-1},0)< D_{z,e}\}}$ is decreasing (here we use monotonicity of $T_e$). Therefore we can apply the Harris-FKG inequality \cite[Theorem~2.15]{BLM} and sum over $j$ for the upper bound
\begin{equation}\label{eq: new_eq_2}
\mathbb{E}_{\pi_e} \sum_{j=1}^\infty (\Delta_{e,j} \tau_z)^2 \leq \sum_{j=1}^\infty \frac{1}{2^{j-1}} \left[ \mathbb{E}_{\pi_{e,\geq j}} \min\{D_{z,e},T_{e,j}(\vec 1_{j-1},1)\}^2~ \pi_{e,\geq j}(T_{e,j}(\vec 0_{j-1},0) < D_{z,e}) \right]\mathbf{1}_{\{I<D_{z,e}\}}\ .
\end{equation}
The goal is now to give a useful bound for this sum. To do this, we consider two types of values of $j$. Note that $F(D_{z,e}^-)>0$ and therefore for some $j$, $F(D_{z,e}^-) \geq 2^{-j}$. So define
\[
J(D_{z,e}) = \min \{j \geq 2 : F(D_{z,e}^-) \geq 2^{-(j-1)}\}\ .
\]
Note that
\begin{equation}\label{eq: J_bound}
1-\log_2 F(D_{z,e}^-) \leq J(D_{z,e}) \leq 2-\log_2 F(D_{z,e}^-)\ .
\end{equation}
We will estimate the term $\pi_{e,\geq j}(T_{e,j}(\vec 0_{j-1},0) < D_{z,e})$ only when $j < J(D_{z,e})$. By definition, it is
\[
\left( \prod_{k \geq j} \pi_{e,k}\right) (\{\omega_e : T_e(0,\ldots, 0,\omega_{e,j+1}, \ldots) < D_{z,e}\}) = \pi_e (\{\omega_e : T_e(0, \ldots, 0, \omega_{e,j+1}, \ldots) < D_{z,e}\})\ .
\]
The event in $\Omega_e$ listed on the right depends only on $\omega_{e,k}$ for $k > j$, so it is independent (under $\pi_e$) of the state of the first $j$ coordinates. Thus the above equals
\[
2^j \pi_e (T_e(0, \ldots, 0, \omega_{e,j+1}, \ldots) < D_{z,e},~ \omega_{e,1}, \ldots, \omega_{e,j} = 0) \leq 2^j \pi_e(T_e(\omega_e) < D_{z,e}) = 2^j F(D_{z,e}^-)\ .
\]
Using this inequality for $j <J(D_{z,e})$, \eqref{eq: new_eq_2} becomes
\begin{align}
\mathbb{E}_{\pi_e} \sum_{j=1}^\infty (\Delta_{e,j} \tau_z)^2 &\leq 2F(D_{z,e}^-) \mathbb{E}_{\pi_{e,\geq 1}}T_{e,1}(1)^2 \mathbf{1}_{\{I<D_{z,e}\}} + 2F(D_{z,e}^-) \sum_{j=2}^{J(D_{z,e})-1} D_{z,e}^2 \mathbf{1}_{\{I<D_{z,e}\}} \label{eq: new_eq_3a}\\
&+ \sum_{j=J(D_{z,e})}^\infty \frac{1}{2^{j-1}} \left[ \mathbb{E}_{\pi_{e,\geq j}} \min\{D_{z,e},T_{e,j}(\vec 1_{j-1},1)\}^2 \right]\mathbf{1}_{\{I<D_{z,e}\}} \label{eq: new_eq_3b}\ .
\end{align}
The second term on the right of \eqref{eq: new_eq_3a} is bounded by noting that when this sum is nonempty (that is, $J(D_{z,e})>2$), it follows that $F(D_{z,e}^-) <1/2$ and so $D_{z,e} \leq a_{1,1}$. Using this with \eqref{eq: J_bound} we obtain
\begin{equation}\label{eq: mid_bound}
2F(D_{z,e}^-) \sum_{j=2}^{J(D_{z,e})-1} D_{z,e}^2 \mathbf{1}_{\{I<D_{z,e}\}} \leq 2F(D_{z,e}^-)(1-\log_2 F(D_{z,e}^-)) a_{1,1}^2 \mathbf{1}_{\{I<D_{z,e}\}}\ .
\end{equation}
We next bound $\mathbb{E}_{\pi_{e,\geq j}} T_{e,j}(\vec 1_{j-1},1)^2$. Because $T_{e,j}(\vec 1_{j-1},1)$ only depends on $\omega_e$ through $\omega_{e,>j}$,
\[
\mathbb{E}_{\pi_e} T_{e,j}(\vec 1_{j-1},1)^2 = 2^j \mathbb{E}_{\pi_e} T_{e,j}(\vec 1_{j-1},1)^2 \mathbf{1}_{\{\omega_{e,\leq j} = \vec 1_j\}} = 2^j \mathbb{E}_{\pi_e} T_e^2 \mathbf{1}_{\{\omega_{e,\leq j} = \vec 1_j\}}\ .
\]
Thus in \eqref{eq: new_eq_3a},
\begin{equation}\label{eq: beginning_bound}
2F(D_{z,e}^-) \mathbb{E}_{\pi_{e,\geq 1}} T_{e,1}(1)^2 \mathbf{1}_{\{I<D_{z,e}\}} \leq 4F(D_{z,e}^-) \mathbb{E}_\mu t_e^2 \mathbf{1}_{\{I<D_{z,e}\}}
\end{equation}
and
\[
\eqref{eq: new_eq_3b} \leq 2\sum_{j=J(D_{z,e})}^\infty \left[ \mathbb{E}_{\pi_e} \min\{D_{z,e}, T_e\}^2 \mathbf{1}_{\{\omega_{e,\leq j} = \vec 1_j\}}\right] \mathbf{1}_{\{I < D_{z,e}\}}\ .
\]
We now consider two cases. If $D_{z,e} \leq a_{1,1}$ then we use \eqref{eq: J_bound} to obtain the upper bound
\begin{align*}
\eqref{eq: new_eq_3b} \leq 2a_{1,1}^2 \sum_{j=J(D_{z,e})}^\infty \pi_e(\omega_{e,\leq j} = \vec 1_j) \mathbf{1}_{\{I < D_{z,e}\}} &= 2a_{1,1}^2 \sum_{j=J(D_{z,e})}^\infty 2^{-j} \mathbf{1}_{\{I<D_{z,e}\}} \\
&\leq 4a_{1,1}^2 2^{-J(D_{z,e})} \mathbf{1}_{\{I<D_{z,e}\}} \\
&\leq 2 a_{1,1}^2 F(D_{z,e}^-) \mathbf{1}_{\{I<D_{z,e}\}}\ .
\end{align*}
On the other hand, if $D_{z,e}>a_{1,1}$ then we use the bound
\[
\eqref{eq: new_eq_3b} \leq 2 \left[\mathbb{E}_{\pi_e} T_e^2 N\right] \mathbf{1}_{\{I<D_{z,e}\}}, \text{ where } N = \max\{j \geq 1 : \omega_{e,\leq j} = \vec 1_j\}\ .
\]
This is bounded by the variational characterization of entropy, Proposition~\ref{prop: characterization}. The expectation is no larger than
\[
2~Ent_\mu t_e^2 + 2 \mathbb{E}_\mu t_e^2 \log \mathbb{E}_{\pi_e} e^{N/2}\ .
\]
Because $N$ has a geometric distribution, this is bounded by $\mathbf{C}_{29}$ independently of $e$. As $D_{z,e}>a_{1,1}$, one has $F(D_{z,e}^-) \geq 1/2$ and so we obtain
\[
\eqref{eq: new_eq_3b} \leq 4\mathbf{C}_{29}F(D_{z,e}^-) \mathbf{1}_{\{I<D_{z,e}\}}\ .
\]
Combined with the case $D_{z,e} \leq a_{1,1}$, our final bound is
\begin{equation}\label{eq: real_end_bound}
\eqref{eq: new_eq_3b} \leq (4\mathbf{C}_{29}+2a_{1,1}^2) F(D_{z,e}^-) \mathbf{1}_{\{I<D_{z,e}\}}\ .
\end{equation}
Putting together the pieces, \eqref{eq: beginning_bound} with \eqref{eq: mid_bound} and \eqref{eq: real_end_bound},
\begin{equation}\label{eq: lasagna}
\mathbb{E}_{\pi_e}\sum_{j=1}^\infty (\Delta_{e,j} \tau_z)^2 \leq \mathbf{C}_{30} F(D_{z,e}^-)\mathbf{1}_{\{I<D_{z,e}\}} - \mathbf{C}_{31} F(D_{z,e}^-) \log F(D_{z,e}^-) \mathbf{1}_{\{I<D_{z,e}\}}\ .
\end{equation}
To bound terms of the second form we use a lemma.
\begin{lma}\label{lem: ibp}
For any $y>I$, we have
\begin{equation}
-F(y^-)\log F(y^-)\le -\int_{[I,y)}\log F(a)\,\mu(\mathrm{d}a)\ .\label{eqn: ibpbound}
\end{equation}
\end{lma}
\begin{proof}
Let $\epsilon>0$. The function $\log F(x)$ is increasing on $(I, \infty)$. The usual Lebesgue construction gives a measure $\nu$ on $(I,\infty)$ such that
\[\nu(a,b] = \log F(b)-\log F(a)\ge 0\]
for $a,b\in (I,\infty)$.
Fix $x\in (I+\epsilon,\infty)$, and consider the square
\[\square= (I+\epsilon, x]\times (I+\epsilon, x]\ .\]
It has two parts:
\begin{gather}
\{(a,b):I+\epsilon < a<b \le x\},\label{eqn: one}\\
\{(a,b):I+\epsilon < b\le a \le x\} \label{eqn: two}.
\end{gather}
Thus,
\[(\mu\times \nu)(\square) = \iint_\eqref{eqn: one} (\mu\times\nu)(\mathrm{d}a\mathrm{d}b)+ \iint_\eqref{eqn: two} (\mu\times\nu)(\mathrm{d}a\mathrm{d}b).\]
By Fubini's theorem, the double integrals may be computed as iterated integrals
\begin{align}
\iint_\eqref{eqn: one} (\mu\times\nu)(\mathrm{d}a\mathrm{d}b)&=\int_{(I+\epsilon,x]} \mu((I+\epsilon,b))\nu(\mathrm{d}b)= \int_{(I+\epsilon,x]}(F(b^-)-F(I+\epsilon))\log F(\mathrm{d}b)\label{eqn: int1}\\
\iint_\eqref{eqn: two} (\mu\times\nu)(\mathrm{d}a\mathrm{d}b)&=\int_{(I+\epsilon,x]} \nu((I+\epsilon,a])\mu(\mathrm{d}a)= \int_{(I+\epsilon,x]}(\log F(a)-\log F(I+\epsilon)) F(\mathrm{d}a) \label{eqn: int2}.
\end{align}
By definition of the product measure,
\[(\mu\times \nu)(\square) = (F(x)-F(I+\epsilon))\cdot(\log F(x)-\log F(I+\epsilon)).\]
This gives the equality:
\begin{align*}
(F(x)-F(I+\epsilon))\cdot(\log F(x)-\log F(I+\epsilon)) &= \int_{(I+\epsilon,x]}F(b^-)\log F(\mathrm{d}b) +\int_{(I+\epsilon,x]}\log F(a)F(\mathrm{d}a) \\
&\quad -F(I+\epsilon)(\log F(x)-\log F(I+\epsilon)) \\
&\quad -\log F(I+\epsilon) (F(x)-F(I+\epsilon)).
\end{align*}
After performing cancellations, we obtain
\begin{equation} \label{eqn: F}
F(x)\log F(x)-F(I+\epsilon)\log F(I+\epsilon) = \int_{(I+\epsilon,x]}F(b^-)\log F(\mathrm{d}b)+\int_{(I+\epsilon,x]}\log F(a)F(\mathrm{d}a).
\end{equation}
Since $F(b^-)\ge 0$, this implies the estimate
\[-\int_{(I+\epsilon,x]} \log F(a)\,\mu(\mathrm{d}a)-F(I+\epsilon)\log F(I+\epsilon) \ge -F(x)\log F(x).\]
Taking $\epsilon \downarrow 0$ and using the right continuity of $F$,
\[-\int_{(I,x]}\log F(a)\,\mu(\mathrm{d}a) -F(I)\log F(I) \ge -F(x)\log F(x),\]
where the second term is interpreted as $0$ if $F(I)=0$. Since $F(I)=\mu(\{I\})$,
\[
-F(x)\log F(x) \leq - \int_{[I,x]} \log F(a) \mu(\text{d}a)\ .
\]
Taking $x \uparrow y$, \eqref{eqn: ibpbound} is proved.
\end{proof}
Apply the last lemma in \eqref{eq: lasagna} with $y=D_{z,e}$:
\begin{align*}
\sum_e \mathbb{E}_\pi \sum_{j=1}^\infty (\Delta_{e,j} \tau_z)^2 &\leq \mathbf{C}_{32} \sum_e \mathbb{E}_{e^c} \int_{[I,D_{z,e})} (1-\log F(a))~\mu(\text{d}a) \\
&= \mathbf{C}_{32} \mathbb{E} \sum_e (1-\log F(t_e)) \mathbf{1}_{\{I\leq t_e < D_{z,e}\}}\ .
\end{align*}
By Lemma~\ref{lem: deriv}, if $t_e < D_{z,e}$ then $e$ is in $Geo(z,z+x)$, so this is bounded above by
\[
\mathbf{C}_{32} \mathbb{E} \sum_e (1- \log F(t_e)) \mathbf{1}_{\{e \in Geo(z,z+x)\}}\ .
\]
Translating back from $z$ to $0$ and putting this in \eqref{eq: first_eq} proves the proposition.
\end{proof}
\subsubsection{Lattice animals}
To bound the right side of the inequality in Proposition~\ref{prop: intermediate} we need finer control than what is given by Kesten's geodesic length estimates, due to the possible singularity of $\log F(t_e)$ as $t_e \downarrow I$. The idea will be that very few edges $e$ on a geodesic have $t_e$ close to $I$. To bound the precise number, we give the main result of this section:
\begin{thm}\label{thm: low_density}
Assume \eqref{eqn: geodesics} and $\mathbb{E} Y^\alpha<\infty$ for some $\alpha>1$, where $Y$ is the minimum of $2d$ i.i.d. variables distributed as $t_e$. There exists $\mathbf{C}_{33}$ such that for all $x \in \mathbb{Z}^d$ and any Borel set $B \subset \mathbb{R}$,
\[
\mathbb{E} \#\{e \in Geo(0,x) : t_e \in B\} \leq \mathbf{C}_{33}\|x\|_1 \mu(B)^{\frac{\alpha-1}{\alpha d}}\ .
\]
\end{thm}
The proof will require an excursion into the theory of greedy lattice animals. We say that a finite set of vertices $\alpha \subseteq \mathbb{Z}^d$ is a lattice animal if it is connected (under graph connectedness on $\mathbb{Z}^d$). One fundamental result on lattice animals is the following, taken from \cite[Lemma 1]{coxetal}, which describes how a lattice animal may be covered by boxes. We set the notation $B(l) = [-l,l]^d.$
\begin{lma}[Cox-Gandolfi-Griffin-Kesten]
\label{thm:coxetal_cover}
Let $\alpha$ be a lattice animal with $0 \in \alpha$ and $\# \alpha = n,$ and let $1 \leq l \leq n.$
There exists a sequence $x_0, x_1, \ldots, x_r \in \mathbb{Z}^d$, where $r = \lfloor 2n / l \rfloor$, such that $x_0 = 0$,
\begin{equation}
\label{eq:alpha_cover}
\alpha \subseteq \bigcup_{i=0}^r (lx_i + B(2l)),
\end{equation}
and
\[\|x_{i+1} - x_i\|_{\infty} \leq 1, \quad 0\leq i \leq r-1. \]
\end{lma}
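Purely as an illustration of the scales involved: in $d=2$, for a lattice animal $\alpha$ with $0 \in \alpha$, $\#\alpha = n = 100$ and the choice $l=10$, the lemma provides
\[
r = \left\lfloor \frac{2n}{l} \right\rfloor = 20, \qquad \alpha \subseteq \bigcup_{i=0}^{20}\left(10\, x_i + B(20)\right)\ ,
\]
with $x_0=0$ and consecutive $x_i$ differing by at most $1$ in $\ell^\infty$-distance.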
We will use the above lemma in a setting similar to the original model for which it was proved.
Let $\Xi_n$ denote the set of all self-avoiding paths
\[\gamma = (0 = v_0, e_1, v_1, \ldots, e_n, v_n) \]
which begin at the origin and contain $n$ edges.
Denote by $V(\gamma)$ the vertex set of $\gamma.$ Assume that we have an edge-indexed set of i.i.d. variables $\{X_e\}_e$, where
$X_e = 1$ with probability $p$ and $0$ otherwise, and denote the joint distribution of $\{X_e\}$ by $\mathbb{P}_p$; we denote expectation under this measure by $\mathbb{E}_p.$ If $\gamma \in \Xi_ n$ for some $n$, we define
$X(\gamma) = \sum_{e \in \gamma} X_e.$ Last, let
\begin{equation}\label{eq: N_def}
N_n := \max_{\gamma \in \Xi_n} X(\gamma).
\end{equation}
The following lemma (and the proof thereof) is an adaptation of J. Martin's \cite[Prop. 2.2]{martin} extension of a theorem of S. Lee \cite{lee}.
\begin{lma}\label{lem: lee}
There is a constant $C_d < \infty$ depending only on the dimension $d$ such that, for all $p \in (0,1]$ and all $n \in \mathbb{N},$
\begin{equation}
\label{eq:pd_scaling}
\frac{\mathbb{E}_p N_n}{n p^{1/d}} < C_d.
\end{equation}
\end{lma}
\begin{proof}
Let $p\in (0,1]$ be arbitrary. We first consider the case that $np^{1/d} \leq 1.$ In this case, we have
\[
\frac{\mathbb{E}_p N_n}{n p^{1/d}} \leq \frac{1}{n p^{1/d}}\sum_{e \in [-n,n]^d} \mathbb{E}_pX_e \leq \frac{2d(2 n + 1)^d p}{n p^{1/d}} \leq 2d(3^d) (p^{1/d}n)^{d-1} \leq 3^{d+1}d.
\]
In the case that $n p^{1/d} > 1,$ we set $ l = \lceil p^{-1/d} \rceil$. Note that for any $\gamma \in \Xi_n,$ $V(\gamma)$ is a lattice animal with $n + 1$ vertices. In particular, it can be covered using Lemma \ref{thm:coxetal_cover}.
So for any $s \geq 0,$
\begin{align}
\mathbb{P}_p \left(\frac{N_n}{n p^{1/d}} \geq s\right) = \mathbb{P}_p \left(\max_{\gamma \in \Xi_n} X(\gamma) \geq n p^{1/d} s \right) &\leq \mathbb{P}_p \left( \max_{x_0, \ldots, x_r} \sum_{\substack{e = \{ x, y \} \\ x,y \in \cup_{i=0}^r (lx_i + B(2l))}} X_e \geq n p^{1/d} s \right)\nonumber\\
\label{eq:la_sum_bd}
&\leq \sum_{x_0, \ldots, x_r} \mathbb{P}_p \left( \sum_{\substack{e = \{ x, y \} \\ x,y \in \cup_{i=0}^r (lx_i + B(2l))}} X_e \geq n p^{1/d} s \right),
\end{align}
where the outer sum is over all connected subsets of $\mathbb{Z}^d$ of cardinality $r+1 = 1 + \lfloor 2(n+1)/l \rfloor \leq 5 n p^{1/d}$ which contain the origin.
The expression in \eqref{eq:la_sum_bd} is bounded above by
\begin{align}
\sum_{x_0, \ldots, x_r} \exp(-np^{1/d} s) \mathbb{E}_p& \exp\left( \sum_{\substack{e = \{ x, y \} \\ x,y \in \cup_{i=0}^r (lx_i + B(2l))}} X_e \right)\nonumber\\
\label{eq:la_prod_bd}
&\leq \sum_{x_0, \ldots, x_r} \exp(-np^{1/d} s) \left[\mathbb{E}_p \exp(X_e)\right]^{\# \{e = \{ x, y \}: x,y \in \cup_{i=0}^r (lx_i + B(2l))\}}.
\end{align}
Now, note that
\begin{itemize}
\item $\mathbb{E}_p \exp(X_e) = 1 - p + p\mathrm{e};$
\item The number of vertices in $B(2l)$ is $(4l + 1)^d,$ so
\[
\# \{e = \{ x, y \} : x,y \in \cup_0^r (lx_i + B(2l))\} \leq (r+1) (2d)(4l+1)^d \leq \mathbf{C}_{34}(d) n p^{1/d-1}\ .
\]
\item The number of terms in the sum \eqref{eq:la_prod_bd} is at most $3^{d(r+1)} \leq 3^{5dnp^{1/d}}.$
\end{itemize}
Putting the above into \eqref{eq:la_prod_bd}, we have
\begin{align}
\mathbb{P}_p\left(\frac{N_n}{n p^{1/d}} \geq s\right) &\leq \exp(-n p^{1/d} s) 3^{5 d n p^{1/d}} \left[ 1 - p + p \mathrm{e}\right]^{\mathbf{C}_{34}(d) n p^{1/d - 1}}\nonumber\\
&= \exp(-n p^{1/d} s) 3^{5 d n p^{1/d}} \left(\left[ 1 - p + p \mathrm{e}\right]^{1/p}\right)^{\mathbf{C}_{34}(d) n p^{1/d}}\nonumber\\
&\leq \exp(-n p^{1/d} s) 3^{5 d n p^{1/d}} \left[ \mathrm{e}^{\mathrm{e}-1}\right]^{\mathbf{C}_{34}(d) n p^{1/d }}\nonumber\\
&=: \exp(-np^{1/d} s + \mathbf{C}_{35} n p^{1/d}) \label{eq:eminusone},
\end{align}
where $\mathbf{C}_{35} = \mathbf{C}_{35}(d)$ again does not depend on $p$ or $n$. Then we have, for $np^{1/d} > 1,$
\begin{align*}
\mathbb{E}_p\left(\frac{N_n}{n p^{1/d}}\right) \leq \mathbf{C}_{35} + \mathbb{E}_p \left[ \frac{N_n}{n p^{1/d}} - \mathbf{C}_{35} \right]_+ &= \mathbf{C}_{35} + \int_{\mathbf{C}_{35}}^{\infty} \mathbb{P}_p\left(\frac{N_n}{np^{1/d}} \geq s \right) \mathrm{d} s\\
&\leq \mathbf{C}_{35} + \int_{\mathbf{C}_{35}}^{\infty} \exp\left( - np^{1/d}(s-\mathbf{C}_{35})\right) \mathrm{d} s\\
&\leq \mathbf{C}_{35} + \int_{\mathbf{C}_{35}}^{\infty} \exp\left(-(s-\mathbf{C}_{35})\right)\mathrm{d} s \leq \mathbf{C}_{36}
\end{align*}
for some $\mathbf{C}_{36} = \mathbf{C}_{36}(d).$
\end{proof}
We are now ready to prove the theorem.
\begin{proof}[Proof of Theorem~\ref{thm: low_density}]
Consider any deterministic ordering of all finite self-avoiding lattice paths and denote by $\pi(x,y)$ the first geodesic from $x$ to $y$ in this ordering. Writing $Y_B(0,x)$ for the number of edges in $\pi(0,x)$ with weight in $B$, note that it suffices to give the bound for $\mathbb{E} Y_B(0,x)$. Define a set of edge weights $X_e$ as a function of $t_e$:
\[ X_e = \begin{cases}
1 & \text{ if $t_e \in B$ }\\
0 & \text{ otherwise}
\end{cases}\]
and build the random variables $N_n$ for these weights as in \eqref{eq: N_def}.
On the event $\{\# \pi(0,x) \leq i \},$ we have $Y_B(0,x) \leq N_i$. Therefore, for all $x \in \mathbb{Z}^d$ and $\kappa \in \mathbb{N},$
\begin{align*}
\mathbb{E} Y_B(0,x) &\leq \mathbb{E} N_{\kappa \|x\|_1} + \mathbb{E}\left[\# \pi(0,x) \mathbf{1}_{\{\#\pi(0,x)>\kappa \|x\|_1\}}\right] \\
&= \mathbb{E} N_{\kappa \|x\|_1} + \kappa\|x\|_1\, \mathbb{P}\left(\#\pi(0,x) > \kappa\|x\|_1\right) + \int_{\kappa\|x\|_1}^\infty \mathbb{P}\left(\#\pi(0,x) > s\right) \mathrm{d} s\\
&\leq C_d \kappa \|x\|_1 \mu(B)^{1/d} + \kappa\|x\|_1\, \mathbb{P}\left(\#\pi(0,x) > \kappa\|x\|_1\right) + \int_{\kappa\|x\|_1}^\infty \mathbb{P}\left(\#\pi(0,x) > s\right) \mathrm{d} s.
\end{align*}
To bound the remaining two terms, we use the technique of Kesten (see Eq.~(2.26)--(2.27) in \cite{kestenspeed}). For $b, j > 0,$ denote by $D(j,b,x)$ the event that there exists a self-avoiding path $r$ starting at the origin with at least $j \|x\|_1$ steps but $\tau(r) < j b \|x\|_1.$ Then for any $b > 0,$
\begin{align}
\label{eq:boundlength}
\mathbb{P}\left(\#\pi(0,x) > j\|x\|_1\right) &\leq \mathbb{P}\left(\tau(0,x) \geq bj\|x\|_1\right) + \mathbb{P}(D(j,b,x)).
\end{align}
By our assumption $\mathbb{E} Y^\alpha<\infty$, \cite[Lemma~3.1]{coxdurrett} implies that there exists $\mathbf{C}_{37}$ such that for all $x$, $\mathbb{E} \tau(0,x)^\alpha \leq \mathbf{C}_{37} \|x\|_1^\alpha$.
Thus for arbitrary $x \in \mathbb{Z}^d,$
\[\mathbb{P}\left(\tau(0,x) \geq b j \|x\|_1\right) \leq \mathbf{C}_{37} / (b j)^\alpha.\]
Due to assumption \eqref{eqn: geodesics}, we may use Theorem~\ref{thm: kesten_exp} to see that, for $b$ smaller than some $b_0>0$ (which depends on $d$ and $\mu$), the probability of $D(j,b,x)$ is bounded above uniformly in $j$ and $x$ by $\exp (-\mathbf{C}_{38} j \|x\|_1)$.
Inserting this bound into \eqref{eq:boundlength}, we see that for $b$ small enough,
\[\mathbb{P}\left(\#\pi(0,x) > j\|x\|_1\right) \leq \frac{\mathbf{C}_{37}}{(b j)^\alpha} + \exp(- \mathbf{C}_{38} j\|x\|_1).\]
In particular, setting $r = s/\|x\|_1,$
\begin{align}
\mathbb{E} Y_B(0,x) &\leq C_d \kappa \|x\|_1 \mu(B)^{1/d} + \kappa\|x\|_1\left( \frac{\mathbf{C}_{37}}{(b\kappa)^\alpha} + \exp(- \mathbf{C}_{38} \kappa \|x\|_1)\right) + \|x\|_1\int_{\kappa}^{\infty}\left( \frac{\mathbf{C}_{37}}{(br)^\alpha} + \exp(- \mathbf{C}_{38} r \|x\|_1)\right)\mathrm{d} r\nonumber\\
&\leq C_d \kappa \|x\|_1 \mu(B)^{1/d} + \frac{\mathbf{C}_{39} \|x\|_1}{\kappa^{\alpha-1}}\nonumber
\end{align}
for some constant $\mathbf{C}_{39}.$ Choosing $\kappa = \lceil \mu(B)^{-1/(\alpha d)}\rceil$ completes the proof.
\end{proof}
\subsubsection{Finishing the proof of Theorem~\ref{thm: derivative_bound}}\label{sec: finishing_deriv}
We use Theorem~\ref{thm: low_density}, with a dyadic partition of $[I,\infty)$: let
\[
x_0 = \infty \text{ and } x_n = \min \{x : F(x) \geq 2^{-n}\} \text{ for } n \in \mathbb{N}\ .
\]
Note that for any edge $e$, $t_e$ almost surely lies in one of the intervals $[x_i,x_{i-1})$ for $i \geq 1$. This is clear if $I<t_e$. Otherwise we must have $\mu(\{I\})>0$ and we simply take $i$ to be minimal such that $2^{-i} \leq \mu(\{I\})$.
Now the right side of the inequality in Proposition~\ref{prop: intermediate} can be rewritten as
\begin{align*}
\mathbf{C}_{28} \sum_{i=1}^\infty \sum_e \mathbb{E} &\left[ (1-\log F(t_e)) \mathbf{1}_{\{e \in Geo(0,x)\}} \mathbf{1}_{\{t_e \in [x_i,x_{i-1})\}}\right] \\
&\leq \mathbf{C}_{28} \sum_{i=1}^\infty (1-\log F(x_i)) \mathbb{E} \#\{e \in Geo(0,x) : t_e \in [I,x_{i-1})\}\ .
\end{align*}
By Theorem~\ref{thm: low_density} with $\alpha=2$, this is bounded by
\[
\mathbf{C}_{28}\mathbf{C}_{33} \|x\|_1 \sum_{i=1}^\infty (1-\log F(x_i)) F(x_{i-1}^-)^{1/(2d)} \leq \mathbf{C}_{28}\mathbf{C}_{33} \|x\|_1 \sum_{i=1}^\infty \frac{1+i}{2^{(i-1)/(2d)}} \leq \mathbf{C}_{27} \|x\|_1\ .
\]
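For the reader's convenience, we note that, by construction of the dyadic partition, $F(x_i) \geq 2^{-i}$ while $\mu([I,x_{i-1})) = F(x_{i-1}^-) \leq 2^{-(i-1)}$, so that
\[
1-\log F(x_i) \leq 1 + i\log 2 \leq 1+i \qquad \text{and} \qquad F(x_{i-1}^-)^{1/(2d)} \leq 2^{-(i-1)/(2d)}\ .
\]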
\subsection{Proof of Theorem~\ref{thm: sub linear}}\label{sec: proof}
For $x \in \mathbb{Z}^d$, if $\Var F_m \leq \|x\|_1^{7/8}$ then we are done by Proposition~\ref{prop: approximating}. Otherwise, under assumptions \eqref{eqn: 2log} and \eqref{eqn: geodesics} we can use \eqref{eq: breakdown} to find, for some constant $\mathbf{C}_3$,
\[
\Var~\tau(0,x) \leq \mathbf{C}_3\|x\|_1^{3/4} + \left[ \log \left[ \|x\|_1^{1/8}\right] \right]^{-1} \sum_{k=1}^\infty Ent(V_k^2)\ .
\]
By Theorem~\ref{thm: derivative_bound}, $\sum_{k=1}^\infty Ent(V_k^2) \leq \mathbf{C}_{27}\|x\|_1$, so
\[
\Var~\tau(0,x) \leq \mathbf{C}_3\|x\|_1^{3/4} + \frac{8\mathbf{C}_{27} \|x\|_1}{\log \|x\|_1} \leq \frac{\mathbf{C}_{40} \|x\|_1}{\log \|x\|_1}\ .
\]
\bigskip
\noindent
{\bf Acknowledgements}. We thank S. Sodin for helpful conversations and for pointing out the use of geometric averaging in his paper. We also thank him for careful reading of a previous draft. We are grateful to T. Sepp\"al\"ainen for pointing out an error in the entropy section and A. Auffinger for finding various typos. Last, we thank an anonymous referee for comments that led to a better organized presentation.
Here we provide an extensive analysis of how modifying the Schr\"{o}dinger equation with non-linearities changes the spectral density of the radiation emitted by a general two-level system. We will show that the non-linearity manifests itself as a broadening and a shift in the spectral density of the emitted radiation. We first discuss how the non-linearity can be introduced into the dynamics. Then, following standard QED formulations, we obtain the broadening and the shift induced by the non-linearity. We finally compute analytically and quantify these radiative corrections for the two most studied collapse models.
\section*{S1: Dynamical equations.} It was first proven by Gisin~\cite{signal0} that, in order to avoid superluminal signalling, nonlinear terms can be added to the Schr\"odinger equation only in combination with stochastic terms, in such a way that the equivalence relation among statistical ensembles of states is preserved by the dynamics~\cite{signal1,signal2,signal3}. In more mathematical terms, this means that the modified dynamics for the wave function must generate a closed linear dynamics for the density matrix. Given these premises, it was recently proven~\cite{signal4} that such a dynamics must be of the Lindblad type. It is important to notice that in the proof, complete positivity---usually required in order to derive Lindblad's theorem---is not a necessary hypothesis, but comes about from the existence of a (Markovian) dynamics for the wave function.
Therefore, we start from a dynamics for the density matrix of the form:
\begin{equation}\label{Lindblad}
\frac{\text d\hat{\rho}_t}{\text dt} = -\frac{i}{\hbar}[\hat{H}_0,\hat{\rho}_t] + \lambda\sum_{k=1}^n\left(\hat{L}_k\hat{\rho}_t\hat{L}_k^{\dagger} - \frac12 \hat{L}_k^{\dagger}\hat{L}_k\hat{\rho}_t - \frac12 \hat{\rho}_t\hat{L}_k^{\dagger}\hat{L}_k\right)\,
\end{equation}
where the Lindblad operators $\hat{L}_k$ can describe decoherence effect or, as it is the case here, intrinsic non-linearities in the dynamics for the wave function. Collapse models induce a dynamics of this type, but here we want to stay more general. The most convenient unraveling of Eq.~\eqref{Lindblad} for solving the equations of motions, is given in terms of a stochastic potential added to the Schr\"odinger equation~\cite{stoch1,stoch2,stoch3,stoch4,stoch5}:
\begin{equation}
i\hbar \frac{d}{dt} \psi_t = \left[ \hat{H}_0 + \hat{V}_t \right ] \psi_t, \qquad \hat{V}_t = - \hbar\,\sqrt{\lambda}\sum_{k=1}^n \hat{L}_k \xi_t^{(k)}
\end{equation}
where $\xi_t^{(k)}$ are $n$ independent white noises. Here, we have assumed that the Lindblad operators $\hat{L}_k$ are self-adjoint, which is the case for most proposals for nonlinear and stochastic modifications of the Schr\"odinger equation.
Since by violations of the superposition principle we mean superpositions in space, the Lindblad operators are taken as functions of the position variables; therefore we have:
\begin{equation} \label{eq:v}
\hat{V}_t = - \hbar\,\sqrt{\lambda} \int d^3x\; \hat{L}({\bf x}) \xi_t({\bf x}),
\end{equation}
where $\xi_t({\bf x})$ is a noise-field, white both in space and time. Note that, in this form, the dynamical equation is still linear. As discussed several times in the literature~\cite{stoch1,stoch2,stoch3,stoch4,stoch5}, the effects of nonlinear terms introduced in the Schr\"odinger equation can, at the statistical level, also be mimicked by linear random potentials. For individual realizations of the noise, the effects are very different (those of a linear dynamics vs.\ those of a nonlinear one), while at the statistical level they coincide, if the potential is suitably chosen.
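For completeness, we sketch at a formal level why this unraveling reproduces Eq.~\eqref{Lindblad}. Writing $dW^{(k)}_t = \xi^{(k)}_t\,dt$ and converting the Stratonovich equation above to It\^o form, the linear dynamics reads
\[
d\psi_t = \left[-\frac{i}{\hbar}\hat{H}_0\,dt - \frac{\lambda}{2}\sum_{k=1}^n \hat{L}_k^2\, dt + i\sqrt{\lambda}\sum_{k=1}^n \hat{L}_k\, dW^{(k)}_t\right]\psi_t\ ;
\]
applying the It\^o rule $dW^{(k)}_t\, dW^{(l)}_t = \delta_{kl}\, dt$ to $\hat{\rho}_t = \mathbb{E}\left(|\psi_t\rangle\langle\psi_t|\right)$ then gives back Eq.~\eqref{Lindblad} for self-adjoint $\hat{L}_k$.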
\vspace{2 mm}
\section*{S2: Two-level systems.} We consider the situation in which the system's dynamics effectively involves only two levels, whose transition dipole matrix element is not zero ($\mathbf{d}_{12}=\langle\varepsilon_1|\hat{\mathbf{d}}|\varepsilon_2\rangle\neq0$, with the dipole operator defined as $\hat{{\bf d}} = \sum_i e_i \hat{{\bf q}}_i$). This means that the higher energy level eventually decays to the lower one by emitting a photon. Therefore, the standard quantum Hamiltonian characterizing the interaction between a two-level system and a quantized radiation field can be, in the dipole approximation, written in the form~\cite{agar,QO_MW,QO_GC,QT_rad}:
\begin{eqnarray}
\hat{H}_0&=&\hbar
\left(
\sum_{s,\mathbf{k}}
\omega\,\hat{a}^\dagger_{s,\mathbf{k}}\hat{a}_{s,\mathbf{k}}
+
(\omega_{0}/2)\hat{\sigma}_z
-i\omega_{0} \sum_{s,\mathbf{k}} \left\lbrace
g_{s,\mathbf{k}}\left(\hat{\sigma}_+-\hat{\sigma}_-\right)\hat{a}_{s,\mathbf{k}}
- \text{H.C.}\right\rbrace\right),
\end{eqnarray}
with $g_{s,\mathbf{k}} = (2\epsilon_0\hbar\omega L^3)^{-1/2}\,\mathbf{d}_{12}\cdot\mathbf{e}_{s,\mathbf{k}}$ the radiation--matter coupling constant,
$\hat{\sigma}_+ = |\varepsilon_2\rangle\langle\varepsilon_1|$, $\hat{\sigma}_- = |\varepsilon_1\rangle\langle\varepsilon_2|$,
$\hat{\sigma}_z=\hat{\sigma}_+\hat{\sigma}_--\hat{\sigma}_-\hat{\sigma}_+$, $\omega_0=(\varepsilon_2-\varepsilon_1)/\hbar$
and $\{|\varepsilon_1\rangle,\,|\varepsilon_2\rangle\}$ the two levels of matter.
All other terms have the usual meanings.
The two-level representation of $\hat{V}_t$ is obtained by calculating $\langle\varepsilon_\alpha|\hat{V}_t|\varepsilon_\beta\rangle$ with $\alpha,\beta=1,2$.
In general, one has:
\begin{equation}
\hat{V}_t = -\hbar\left(
\sqrt{\lambda_z}\,w_t^{(z)}\,\hat{\sigma}_z
+\sqrt{\lambda_x}\,w_t^{(x)}\,\hat{\sigma}_x+\sqrt{\lambda_x}\,w_t^{(y)}\,\hat{\sigma}_y
\right)
\end{equation}
where $\lambda_i$ ($i=x,y,z$) are collapse rates and $w^{(i)}_t$ are three white noises. The energy eigenfunctions can be taken real in most cases; thus one finds $\lambda_y=0$. Therefore $\hat{V}_t$ simplifies to:
\begin{eqnarray}
\label{eq:noise}
\hat{V}_t&=&- \hbar\left(
\sqrt{\lambda_z}\,w_t^{(z)}\,\hat{\sigma}_z+\sqrt{\lambda_x}\,w_t^{(x)}\,\hat{\sigma}_x
\right).
\end{eqnarray}
\section*{S3: Solution of the equations of motion: shift and broadening.}
The radiative corrections due to the non-linearities appear in a very natural way in the spectral density of the emitted light~\cite{agar,QO_MW,QO_GC,QT_rad}, which is given by the stochastic expectation (average) of the Fourier transform of the normalized dipole-dipole autocorrelation function $\langle\hat{\sigma}_+(t+\tau)\hat{\sigma}_-(t)\rangle$.
Accordingly, in order to obtain the spectral density, we proceed in the following steps. First, we solve the Heisenberg equations for the system operators. Then, we derive the corresponding differential equations for the dipole-dipole autocorrelation functions and solve them using the previous results. From these solutions we finally obtain the dipole-dipole autocorrelation function, and thus the spectral density.
With the Hamiltonian $\hat{H}=\hat{H}_0+\hat{V}_t$, the Heisenberg equations of motion for the system operators take the forms:
\begin{eqnarray}
\label{eq:sigma_z}
\frac{d}{dt}\hat{\sigma}_z(t)&=&
-2\beta_{\text{\tiny QED}}\left(\hat{\sigma}_z(t)+\hat{I}\right)
-2\omega_0 \left(\hat{\sigma}_x(t)\,
\mathbf{d}_{12}\cdot\hat{\mathbf{A}}^{(+)}_{\text{\tiny free}}(0,t)
+\mathbf{d}_{12}\cdot\hat{\mathbf{A}}^{(-)}_{\text{\tiny free}}(0,t)\,\hat{\sigma}_x(t)\right)
-2\sqrt{\lambda_x}\,w^{(x)}_t\,\hat{\sigma}_y(t),
\\
\label{eq:sigma_-}
\frac{d}{dt}\hat{\sigma}_y(t)&=&
-\left(\Omega_{\text{\tiny QED}}-2\sqrt{\lambda_z}\,w^{(z)}_t\right)\hat{\sigma}_x(t)
-\beta_{\text{\tiny QED}}\hat{\sigma}_y(t)
+2\sqrt{\lambda_x}\,w^{(x)}_t\,\hat{\sigma}_z(t),
\\
\label{eq:sigma_+}
\frac{d}{dt}\hat{\sigma}_x(t)&=&
-\left(\Omega_{\text{\tiny QED}}-2\sqrt{\lambda_z}\,w^{(z)}_t\right)\hat{\sigma}_y(t)
-\beta_{\text{\tiny QED}}\,\hat{\sigma}_x(t)
+2\omega_0
\left(\hat{\sigma}_z(t)\,
\mathbf{d}_{12}\cdot\hat{\mathbf{A}}^{(+)}_{\text{\tiny free}}(0,t)
+\mathbf{d}_{12}\cdot\hat{\mathbf{A}}^{(-)}_{\text{\tiny free}}(0,t)\,\hat{\sigma}_z(t)
\right),
\end{eqnarray}
where the last terms on the right-hand side of Eqs.~\eqref{eq:sigma_z} and~\eqref{eq:sigma_-} are the new contributions to the dynamics due to the noise terms describing nonlinear effects. The other terms are obtained using standard derivations given by quantum statistical theories of spontaneous emission~\cite{agar,QO_MW,QO_GC,QT_rad}. In the above equations, $\Omega_{\text{\tiny QED}} = \omega_0-\gamma_{\text{\tiny QED}}$, where
\begin{eqnarray}
\gamma_{\text{\tiny QED}}&=&
\frac{\omega_0^2|\mathbf{d}_{12}|^2}{3\epsilon_0\hbar\pi^2c^3}
\left(\mathcal{P}\int_0^\infty \frac{d\omega \,\omega}{\omega-\omega_0}
-\mathcal{P}\int_0^\infty \frac{d\omega \,\omega}{\omega+\omega_0}\right),
\end{eqnarray}
is the Lamb shift, with $\mathcal{P}$ the Cauchy principal value; the divergent integral can be renormalized in the standard fashion~\cite{bethe}; the parameter
\begin{eqnarray}
\beta_{\text{\tiny QED}}&=&\frac{\omega_0^3|\mathbf{d}_{12}|^2}{6\pi\epsilon_0\hbar c^3}
\end{eqnarray}
is the standard spontaneous emission rate; and:
\begin{eqnarray}
\mathbf{d}_{12}\cdot\hat{\mathbf{A}}^{(+)}_{\text{\tiny free}}(0,t)&=&
\sum_{s,\mathbf{k}} g_{s,\mathbf{k}}\,e^{-i\omega t} \,
\hat{a}_{s,\mathbf{k}}(0).
\end{eqnarray}
We now average Eqs.~(\ref{eq:sigma_z})--(\ref{eq:sigma_+}) over the initial state $|\psi\rangle|0\rangle$, in which matter is in a generic state $|\psi\rangle$ and the radiation field is in the vacuum state.
Therefore we get:
\begin{eqnarray}
\frac{d}{dt}\langle\hat{\sigma}_z(t)\rangle&=&
-2\beta_{\text{\tiny QED}}\left(\langle\hat{\sigma}_z(t)\rangle+1\right)
-2\sqrt{\lambda_x}\,w^{(x)}_t\,\langle\hat{\sigma}_y(t)\rangle
\\
\frac{d}{dt}\langle\hat{\sigma}_y(t)\rangle&=&
\left(\Omega_{\text{\tiny QED}}-2\sqrt{\lambda_z}\,w^{(z)}_t\right)\langle\hat{\sigma}_x(t)\rangle
-\beta_{\text{\tiny QED}}\langle\hat{\sigma}_y(t)\rangle
+2\sqrt{\lambda_x}\,w^{(x)}_t\,\langle\hat{\sigma}_z(t)\rangle
\\
\frac{d}{dt}\langle\hat{\sigma}_x(t)\rangle&=&
-\left(\Omega_{\text{\tiny QED}}-2\sqrt{\lambda_z}\,w^{(z)}_t\right)\langle\hat{\sigma}_y(t)\rangle
-\beta_{\text{\tiny QED}}\langle\hat{\sigma}_x(t)\rangle.
\end{eqnarray}
The above stochastic differential equations should be understood in Stratonovich sense. Since we want to compute stochastic averages, it is more convenient to switch to the It\^o formalism. To this end, one can use Eqs.~(10.2.5) to~(10.2.7) of Ref.~\cite{sto_book}. Then, once expressed in the It\^{o} form, by using theorem (8.5.5) of Ref.~\cite{sto_book}, one can prove that the stochastic expectations satisfy the following equations:
\begin{eqnarray}
\frac{d}{dt}\mathbb{E}(\langle\hat{\sigma}_z(t)\rangle)&=&
-2(\beta_{\text{\tiny QED}}+\lambda_x)\,\mathbb{E}(\langle\hat{\sigma}_z(t)\rangle)-2\beta_{\text{\tiny QED}}
\\
\frac{d}{dt}\mathbb{E}(\langle\hat{\sigma}_y(t)\rangle)&=&
\Omega_{\text{\tiny QED}}\,\mathbb{E}(\langle\hat{\sigma}_x(t)\rangle)
-(\beta_{\text{\tiny QED}}+2\lambda_x+2\lambda_z)\,\mathbb{E}(\langle\hat{\sigma}_y(t)\rangle)
\\
\frac{d}{dt}\mathbb{E}(\langle\hat{\sigma}_x(t)\rangle)&=&
-\Omega_{\text{\tiny QED}}\,\mathbb{E}(\langle\hat{\sigma}_y(t)\rangle)
-(\beta_{\text{\tiny QED}}+2\lambda_z)\mathbb{E}(\langle\hat{\sigma}_x(t)\rangle)
\end{eqnarray}
Using the solutions of above equations, one can find:
\begin{eqnarray}
\label{eq:decay}
\mathbb{E}(\langle\hat{\sigma}_z(t)\rangle)&=&
\left(
\frac{\beta_{\text{\tiny QED}}}{\beta_{\text{\tiny QED}}+\lambda_x}+\langle\hat{\sigma}_z(0)\rangle
\right)e^{-2(\beta_{\text{\tiny QED}}+\lambda_x)t}
-\frac{\beta_{\text{\tiny QED}}}{\beta_{\text{\tiny QED}}+\lambda_x},
\\
\label{eq:intensity0}
\mathbb{E}(\langle\hat{\sigma}_+(t)\hat{\sigma}_-(t)\rangle)&=&
\frac{1}{2}\left[\left(
\frac{\beta_{\text{\tiny QED}}}{\beta_{\text{\tiny QED}}+\lambda_x}+\langle\hat{\sigma}_z(0)\rangle
\right)e^{-2(\beta_{\text{\tiny QED}}+\lambda_x)t}
+\frac{\lambda_x}{\beta_{\text{\tiny QED}}+\lambda_x}\right],
\end{eqnarray}
where $\hbar\omega_0\,\mathbb{E}(\langle\hat{\sigma}_z(t)\rangle)$ gives the rate of energy emission by matter; and $\mathbb{E}(\langle\hat{\sigma}_+(t)\hat{\sigma}_-(t)\rangle)$ represents the change in the population of the excited state $|\varepsilon_2\rangle$. When the initial state is $|\varepsilon_1\rangle$, we have: $\langle\hat{\sigma}_z(0)\rangle=-1$, and for $|\varepsilon_2\rangle$, we have:
$\langle\hat{\sigma}_z(0)\rangle=1$. On the other hand, for both initial states $|\varepsilon_{1,2}\rangle$ we get:
\begin{eqnarray}
\label{eq:in-val}
\mathbb{E}(\langle\hat{\sigma}_-(t)\rangle)=\mathbb{E}(\langle\hat{\sigma}_+(t)\rangle)=0&\text{for}&|\varepsilon_{1,2}\rangle|0\rangle.
\end{eqnarray}
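For clarity, we note that Eq.~\eqref{eq:intensity0} follows directly from Eq.~\eqref{eq:decay} through the operator identity
\[
\hat{\sigma}_+\hat{\sigma}_- = |\varepsilon_2\rangle\langle\varepsilon_2| = \frac{\hat{I}+\hat{\sigma}_z}{2}\ ,
\qquad\text{so that}\qquad
\mathbb{E}(\langle\hat{\sigma}_+(t)\hat{\sigma}_-(t)\rangle) = \frac{1+\mathbb{E}(\langle\hat{\sigma}_z(t)\rangle)}{2}\ .
\]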
Using Eq.~\eqref{eq:intensity0}, one can compute for example the mean light intensity of emitted radiation in the far-field limit~\cite{agar,QO_MW,QO_GC,QT_rad} as follows:
\begin{eqnarray}\label{eq:intensity}
\langle\hat{I}(\mathbf{r},t)\rangle &=&
\left(\frac{\omega_0^2|\mathbf{d}_{12}|}{8\pi\varepsilon_0c^2r}\right)^2
\left(1-\frac{1}{2}\sin^2\theta\right)
\left[\left(\frac{\beta_{\text{\tiny QED}}}{\beta_{\text{\tiny QED}}+\lambda_x}+\langle\hat{\sigma}_z(0)\rangle
\right)e^{-2(\beta_{\text{\tiny QED}}+\lambda_x)(t-\frac{r}{c})}
+\frac{\lambda_x}{\beta_{\text{\tiny QED}}+\lambda_x}\right],\quad t>\frac{r}{c},
\end{eqnarray}
with $\theta$ the polar angle of the vector $\mathbf{r}$ connecting the center of mass of the system to the detector, while the complex dipole moment $\mathbf{d}_{12}$ lies in the $xy$-plane. $\langle\hat{I}(\mathbf{r},t)\rangle$ is a very interesting quantity for experimental research; however, here we are concerned with the spectral density of the emitted light, which we now compute.
The exponential nature of the energy decay, as given by Eqs.~(\ref{eq:decay}) and~(\ref{eq:intensity}), suggests that the spectral distribution of the emitted radiation is Lorentzian. The explicit mathematical form of the spectral density can be obtained by computing the dipole-dipole autocorrelation function, $\mathbb{E}(\langle\hat{\sigma}_+(t+\tau)\hat{\sigma}_-(t)\rangle)$, and then using the Wiener-Khinchin theorem~\cite{QT_rad}. The time derivative of this autocorrelation function can be obtained by making the change $t\rightarrow t+\tau$ in Eqs.~(\ref{eq:sigma_z}), (\ref{eq:sigma_-}) and~(\ref{eq:sigma_+}), and then writing down the derivatives with respect to $\tau$. After multiplying the result from the right by $\hat{\sigma}_-(t)$ and then taking the quantum average over the initial state $|\psi\rangle|0\rangle$, we find:
\begin{eqnarray}
\frac{d}{d\tau}\langle\hat{\sigma}_z(t+\tau)\hat{\sigma}_-(t)\rangle&=&
-2\beta_{\text{\tiny QED}}\left(\langle\hat{\sigma}_z(t+\tau)\hat{\sigma}_-(t)\rangle
+\langle\hat{\sigma}_-(t)\rangle\right)
-2\sqrt{\lambda_x}\,w^{(x)}_\tau\,\langle\hat{\sigma}_y(t+\tau)\hat{\sigma}_-(t)\rangle,
\\
\frac{d}{d\tau}\langle\hat{\sigma}_y(t+\tau)\hat{\sigma}_-(t)\rangle&=&
\left(\Omega_{\text{\tiny QED}}-2\sqrt{\lambda_z}\,w^{(z)}_\tau\right)\langle\hat{\sigma}_x(t+\tau)\hat{\sigma}_-(t)\rangle
-\beta_{\text{\tiny QED}}\langle\hat{\sigma}_y(t+\tau)\hat{\sigma}_-(t)\rangle\\\nonumber&&
+2\sqrt{\lambda_x}\,w^{(x)}_\tau\,
\langle\hat{\sigma}_z(t+\tau)\hat{\sigma}_-(t)\rangle,
\\
\frac{d}{d\tau}\langle\hat{\sigma}_x(t+\tau)\hat{\sigma}_-(t)\rangle&=&
-\beta_{\text{\tiny QED}}\langle\hat{\sigma}_x(t+\tau)\hat{\sigma}_-(t)\rangle
-\left(\Omega_{\text{\tiny QED}}-2\sqrt{\lambda_z}\,w^{(z)}_t\right)
\langle\hat{\sigma}_y(t+\tau)\hat{\sigma}_-(t)\rangle.
\end{eqnarray}
Using the aforementioned theorems to switch between the Stratonovich and It\^{o} forms and to obtain the stochastic expectations, and taking into account the result given by Eq.~\eqref{eq:in-val}, we find:
\begin{eqnarray}
\frac{d}{d\tau}\mathbb{E}(\langle\hat{\sigma}_z(t+\tau)\hat{\sigma}_-(t)\rangle)&=&
-2(\beta_{\text{\tiny QED}}+\lambda_x)\,\mathbb{E}(\langle\hat{\sigma}_z(t+\tau)\hat{\sigma}_-(t)\rangle)
\\
\frac{d}{d\tau}\mathbb{E}(\langle\hat{\sigma}_y(t+\tau)\hat{\sigma}_-(t)\rangle)&=&
\Omega_{\text{\tiny QED}}\,\mathbb{E}(\langle\hat{\sigma}_x(t+\tau)\hat{\sigma}_-(t)\rangle)
-(\beta_{\text{\tiny QED}}+2\lambda_x+2\lambda_z)\,\mathbb{E}(\langle\hat{\sigma}_y(t+\tau)\hat{\sigma}_-(t)\rangle)
\\
\frac{d}{d\tau}\mathbb{E}(\langle\hat{\sigma}_x(t+\tau)\hat{\sigma}_-(t)\rangle)&=&
-(\beta_{\text{\tiny QED}}+2\lambda_z)\,\mathbb{E}(\langle\hat{\sigma}_x(t+\tau)\hat{\sigma}_-(t)\rangle)
-\Omega_{\text{\tiny QED}}\,\mathbb{E}(\langle\hat{\sigma}_y(t+\tau)\hat{\sigma}_-(t)\rangle)
\end{eqnarray}
Accordingly, for the dipole-dipole autocorrelation function we get:
\begin{eqnarray}
\mathbb{E}(\langle\hat{\sigma}_+(t+\tau)\hat{\sigma}_-(t)\rangle)
&=&
e^{-(\beta_{\text{\tiny QED}}+\lambda_x+2\lambda_z)\tau}
\left(
\cosh(i\Omega\tau)+\frac{\Omega_{\text{\tiny QED}}\sinh(i\Omega\tau)}{\Omega}
\right)
\mathbb{E}(\langle\hat{\sigma}_+(t)\hat{\sigma}_-(t)\rangle)
\\
&=&
e^{-(\beta_{\text{\tiny QED}}+\lambda_x+2\lambda_z)\tau}
\left(
\cos\Omega \tau+i\frac{\Omega_{\text{\tiny QED}}}{\Omega}\sin \Omega\tau
\right)
\mathbb{E}(\langle\hat{\sigma}_+(t)\hat{\sigma}_-(t)\rangle)
\end{eqnarray}
with $\Omega=\sqrt{{\Omega_{\text{\tiny QED}}}^2-\lambda_x^2}$ for $\Omega_{\text{\tiny QED}}>\lambda_x$. For $\Omega_{\text{\tiny QED}}\gg\lambda_x$, which holds in most cases of experimental interest, we can expand $\Omega_{\text{\tiny QED}}/\Omega$ to the first leading term in $\lambda_x/\Omega_{\text{\tiny QED}}$, and we get:
\begin{eqnarray}
\mathbb{E}(\langle\hat{\sigma}_+(t+\tau)\hat{\sigma}_-(t)\rangle)
&\simeq&
e^{-(\beta_{\text{\tiny QED}}+\lambda_x+2\lambda_z)\tau}\,e^{i\Omega\tau}\,
\mathbb{E}(\langle\hat{\sigma}_+(t)\hat{\sigma}_-(t)\rangle)
\end{eqnarray}
Using Wiener-Khinchin relation, for the spectral density of emitted radiation we obtain:
\begin{eqnarray}
\mathcal{S}(\omega)&=&
\frac{1}{\pi}\left(
\frac{\beta_{\text{\tiny QED}}+\lambda_x+2\lambda_z}{(\beta_{\text{\tiny QED}}+\lambda_x+2\lambda_z)^2+(\omega-\Omega)^2}
\right) \; \equiv \; \frac{1}{\pi}\left(
\frac{\beta}{\beta^2+(\omega-\Omega)^2}\right)\,,
\end{eqnarray}
with half width at half maximum $\beta$, where
\begin{equation}
\beta = \beta_{\text{\tiny QED}} + \beta_{\text{\tiny N}}, \quad\text{with}\quad \beta_{\text{\tiny N}} = \lambda_x+2\lambda_z
\end{equation}
and the central frequency of
\begin{eqnarray}
\Omega=\sqrt{{\Omega_{\text{\tiny QED}}}^2-\lambda_x^2}\simeq \Omega_{\text{\tiny QED}}-\frac{\lambda_x^2}{2\Omega_{\text{\tiny QED}}}\simeq \Omega_{\text{\tiny QED}}-\frac{\lambda_x^2}{2\omega_0}
=\Omega_{\text{\tiny QED}}-\Omega_{\text{\tiny N}},\quad \text{with}\quad
\Omega_{\text{\tiny N}}=\frac{\lambda_x^2}{2\omega_0}.
\end{eqnarray}
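For clarity, the Lorentzian form of $\mathcal{S}(\omega)$ is simply the one-sided Fourier transform of the exponentially damped correlation obtained above:
\[
\frac{1}{\pi}\,\mathrm{Re}\int_0^\infty d\tau\; e^{-i\omega\tau}\, e^{-(\beta_{\text{\tiny QED}}+\lambda_x+2\lambda_z)\tau}\, e^{i\Omega\tau}
= \frac{1}{\pi}\,\frac{\beta}{\beta^{2}+(\omega-\Omega)^{2}}\ .
\]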
Accordingly, the radiative corrections given by violations of the quantum superposition principle produce two observable effects: a frequency shift and a line broadening, whose magnitudes are controlled by the rates $\lambda_{x,z}$. Their numerical values depend on the specific model used to describe nonlinear (collapse) effects.
\vspace{2 mm}
\section*{S4: Calculation of rates $\lambda_{x,z}$}
We now derive the rates $\lambda_{x,z}$ as predicted by the two most-studied collapse models in the literature: the mass proportional Continuous Spontaneous Localization (CSL) model~\cite{Cslmass}, and the Di\'{o}si-Penrose gravitational (DP) model~\cite{d,p}.
\noindent {\it Rates for the CSL model}.
The stochastic potential $\hat{V}_t$ associated to the CSL model is~\cite{Grw,Csl,Cslmass}:
\begin{eqnarray}
\label{eq:noise-csl}
\hat{V}_t&=&-\frac{\hbar\,\sqrt{\gamma}}{m_0}\int\,d\mathbf{x}\,\xi_t(\mathbf{x})\hat{L}(\mathbf{x}), \quad \text{with}\quad \hat{L}({\bf x})=
\int d\mathbf{y}\,g\left(\mathbf{x-y}\right)\,
\sum_{\mathfrak{j}}\,m_\mathfrak{j}\,\sum_s\,\hat{a}_{\mathfrak{j}}^{\dagger}(s,\mathbf{y})\hat{a}_{\mathfrak{j}}\left(s,\mathbf{y}\right),
\end{eqnarray}
where $m_0=1\,$amu, $\gamma \simeq 10^{-22}\text{cm}^{3}\text{s}^{-1}$~\cite{adlerphoto},
$\xi_t(\mathbf{x})$ is a white noise with correlation $\mathbb{E}(\xi_t(\mathbf{x})\xi_{\tau}(\mathbf{y}))=\delta(t-\tau)\,\delta(\mathbf{x}-\mathbf{y})$,
and
$\hat{a}_{\mathfrak{j}}\left(s,\mathbf{y}\right)$ is the annihilation operator of the particle type-$\mathfrak{j}$ with mass $m_\mathfrak{j}$ and the spin $s$ at position $\mathbf{y}$; and
$g({\bf r}) = \exp(-\mathbf{r}^{2}/2r_{C}^{2})/(\sqrt{2\pi}r_{C})^{3}$
with $r_C \simeq 10^{-5}\text{cm}$ the correlation length.
In the two-level representation,
the matrix elements of $\hat{V}_t$ are given by:
\begin{eqnarray}
\label{eq:element}
V^{\alpha\beta}_t=\langle\varepsilon_\alpha|\hat{V}_t|\varepsilon_\beta\rangle
&=&
-\frac{\hbar\,\sqrt{\gamma}}{m_0}\,\int\, d\mathbf{Q}\,\psi_\alpha(\mathbf{Q})\,\psi_\beta(\mathbf{Q})
\,\int\,d\mathbf{x}\,\xi_t(\mathbf{x})\,\sum_{j=1}^N\,m_j\,g(\mathbf{x}-\mathbf{q}_j),
\end{eqnarray}
with $\alpha,\beta=1,2$, and $\psi_{\alpha}(\mathbf{Q})=\langle\mathbf{Q}|\varepsilon_{\alpha}\rangle$ where we use improper states:
$|\mathbf{Q}\rangle\equiv
|\mathbf{q}_{1};\mathbf{q}_{2};\cdots;\mathbf{q}_{N}\rangle$ (with $\mathbf{q}_{j}$ the position of $j$-th particle) for which we have:
$\hat{L}(\mathbf{x})|\mathbf{Q}\rangle=
\left(\sum_{j=1}^{N}m_j\,g(\mathbf{x}-\mathbf{q}_{j})\right)|\mathbf{Q}\rangle$.
We also assume that the wave functions $\psi_{\alpha}$ are real.
Since the right side of Eq.~\eqref{eq:element} contains a Gaussian white noise,
$\lambda_{x,z}$ can be calculated as follows:
\begin{eqnarray}
\label{eq:lambda}
\mathbb{E}\left(V^{\alpha\beta}_{t_1}V^{\alpha'\beta'}_{t_2}\right)&=&
\frac{\delta(t_1-t_2)\,\hbar^2\,\gamma}{8\pi^{3/2}r_C^3}\int d\mathbf{Q}\,d\mathbf{Q}'\,
\psi_\alpha(\mathbf{Q})\,\psi_\beta(\mathbf{Q})\,
\psi_{\alpha'}(\mathbf{Q}')\,\psi_{\beta'}(\mathbf{Q}')\,
\sum_{j,l=1}^N\frac{m_j\,m_l}{m_0^2}\,\exp[-\frac{(\mathbf{q}_j-\mathbf{q}'_l)^2}{4r_C^2}],
\\\nonumber&=&\delta(t_1-t_2)\,\hbar^2\,\lambda^{\alpha\beta}_{\alpha'\beta'}.
\end{eqnarray}
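For the reader's convenience, we note that the prefactor and the Gaussian in Eq.~\eqref{eq:lambda} come from the white-noise correlation of $\xi_t(\mathbf{x})$ together with the convolution identity for the smearing functions,
\[
\int d\mathbf{x}\; g(\mathbf{x}-\mathbf{q}_j)\, g(\mathbf{x}-\mathbf{q}'_l)
= \frac{1}{8\pi^{3/2} r_C^{3}}\,\exp\!\left[-\frac{(\mathbf{q}_j-\mathbf{q}'_l)^{2}}{4r_C^{2}}\right]\ .
\]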
We consider the situation where the effective size of the region in which $\psi_{1,2}$ is different from zero is smaller than $r_C\simeq10^{-7}$\,m, which is the case for atomic and molecular systems. This is the small-scale limit of the CSL model. Accordingly, by expanding the exponential term in Eq.~\eqref{eq:lambda} to first order in $(\mathbf{q}_j-\mathbf{q}'_l)^2/4r_C^2$ and then by performing the integrations, we get:
\begin{eqnarray}
\label{eq:lambda_ap}
\lambda_{11}^{11}&\simeq&\frac{\Lambda_{\text{\tiny CSL}}\,M^2}{m_0^2}
\left(
1-
\frac{1}{M}\,\int d\mathbf{Q}\,|\psi_1(\mathbf{Q})|^2\,
\sum_{j}m_j\left(\frac{\mathbf{q}_j}{2r_C}\right)^2\right),
\\
\lambda_{22}^{22}&\simeq&\frac{\Lambda_{\text{\tiny CSL}}\,M^2}{m_0^2}
\left(
1-
\frac{1}{M}\,\int d\mathbf{Q}\,|\psi_2(\mathbf{Q})|^2\,
\sum_{j}m_j\left(\frac{\mathbf{q}_j}{2r_C}\right)^2\right),
\\
\lambda_{12}^{12}=\lambda_{21}^{21}&\simeq&\frac{\Lambda_{\text{\tiny CSL}}}{2r_C^2\,m_0^2}
\left(
\int d\mathbf{Q}\,\psi_1(\mathbf{Q})\,\psi_2(\mathbf{Q})
\sum_{j}m_j \mathbf{q}_j
\right)^2,
\end{eqnarray}
with $\Lambda_{\text{\tiny CSL}}=\gamma/(8\pi^{3/2}r_C^3)=1.12\times10^{-9}\,$s$^{-1}$ and $M=\sum_jm_j$. In the derivation of the above equations, we used parity considerations and the orthogonality of $\psi_1$ and $\psi_2$.
Accordingly, we have:
\begin{eqnarray}
\label{eq:l_z}
\lambda_z&=&\frac{\lambda_{11}^{11}-\lambda_{22}^{22}}{2}=
\frac{\Lambda_{\text{\tiny CSL}}\,M}{8r_C^2\,m_0^2}
\,\int d\mathbf{Q}\,\left(|\psi_2(\mathbf{Q})|^2-|\psi_1(\mathbf{Q})|^2\right)\,
\sum_{j}m_j\,\mathbf{q}_j^2,
\\
\label{eq:l_x}
\lambda_x&=&\lambda^{12}_{12}=\frac{\Lambda_{\text{\tiny CSL}}}{2r_C^2\,m_0^2}
\left(
\int d\mathbf{Q}\,\psi_1(\mathbf{Q})\,\psi_2(\mathbf{Q})
\sum_{j}m_j \mathbf{q}_j
\right)^2.
\end{eqnarray}
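For later use in the examples of Section S6 below, it is convenient to record the single-particle form of these rates: for one particle of mass $\mu$ we have $M=\mu$ and the sums over $j$ reduce to a single term, so that
\begin{equation}
\lambda_z=\frac{\Lambda_{\text{\tiny CSL}}\,\mu^2}{8\,r_C^2\,m_0^2}
\left(\langle\varepsilon_2|\mathbf{q}^2|\varepsilon_2\rangle
-\langle\varepsilon_1|\mathbf{q}^2|\varepsilon_1\rangle\right),
\qquad
\lambda_x=\frac{\Lambda_{\text{\tiny CSL}}\,\mu^2}{2\,r_C^2\,m_0^2}
\left(\langle\varepsilon_1|\mathbf{q}|\varepsilon_2\rangle\right)^2 .
\end{equation}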
\section*{S5: Rates for the Di\'{o}si-Penrose (DP) model}
The stochastic potential in the DP model is given by:
\begin{eqnarray}
\label{eq:noise-dp}
\hat{V}_t&=& - \hbar \int\,d\mathbf{x}\,\xi_t(\mathbf{x})\hat{L}(\mathbf{x}),
\end{eqnarray}
where $\hat{L}(\mathbf{x})$ is the same as in the CSL model, and $\xi_t(\mathbf{x})$ is a white noise with correlation $\mathbb{E}(\xi_t(\mathbf{x})\xi_s(\mathbf{y}))=G\,\delta(t-s)/\hbar\,|\mathbf{x}-\mathbf{y}|$ with $G$ the gravitational constant. This form of $\hat{L}(\mathbf{x})$ is different from Di\'{o}si's original proposal~\cite{d}. To avoid the divergence due to the Newton self-energy, Di\'{o}si initially proposed a Lindblad operator with a length cutoff equal to the nuclear size. Then, it was shown~\cite{ggr} that with this cutoff, predictions of the DP model are in contradiction with known observations. To avoid this problem, Ghirardi, Grassi and Rimini~\cite{ggr} proposed this new form of Lindblad operator whose cutoff is $r_C\simeq10^{-7}\,$m. This adjustment of the model was eventually acknowledged by Di\'{o}si~\cite{braz}.
For the matrix elements of $\hat{V}_t$ in the two-level representation, one gets:
\begin{eqnarray}
\label{eq:element-dp}
V^{\alpha\beta}_t=\langle\varepsilon_\alpha|\hat{V}_t|\varepsilon_\beta\rangle
&=&\hbar
\,\int\, d\mathbf{Q}
\,\psi_\alpha(\mathbf{Q})
\,\psi_\beta(\mathbf{Q})
\,\int\,d\mathbf{x}\,\xi_t(\mathbf{x})\,\sum_{j=1}^N\,m_j\,g(\mathbf{x}-\mathbf{q}_j).
\end{eqnarray}
Following the same approach that we used for the CSL model, we find:
\begin{eqnarray}
\label{eq:lambda-dp}
\mathbb{E}\left(V^{\alpha\beta}_{t_1}V^{\alpha'\beta'}_{t_2}\right)&=&
\delta(t_1-t_2)\,G\,\hbar\,\int d\mathbf{Q}\,d\mathbf{Q}'\,
\psi_\alpha(\mathbf{Q})\,\psi_\beta(\mathbf{Q})\,
\psi_{\alpha'}(\mathbf{Q}')\,\psi_{\beta'}(\mathbf{Q}')\times
\\\nonumber&&
\sum_{j,l}m_j\,m_l\,\int \frac{d\mathbf{x}\,d\mathbf{x}'}
{|\mathbf{x}-\mathbf{x}'|}\,
g(\mathbf{x}-\mathbf{q}_j)\,g(\mathbf{x}'-\mathbf{q}'_l)
\\\label{eq:erf}&=&
\frac{\delta(t_1-t_2)\,G\,\hbar}{4\pi}\int d\mathbf{Q}\,d\mathbf{Q}'\,
\psi_\alpha(\mathbf{Q})\,\psi_\beta(\mathbf{Q})\,
\psi_{\alpha'}(\mathbf{Q}')\,\psi_{\beta'}(\mathbf{Q}')\,
\sum_{j,l}m_j\,m_l\,\int d\mathbf{x}\,
g(\mathbf{x})\,\Phi_{jl}(\mathbf{x})
\\\nonumber
&=&\delta(t_1-t_2)\,\,\hbar^2\,\lambda^{\alpha\beta}_{\alpha'\beta'}
\end{eqnarray}
with
\begin{eqnarray}
\Phi_{jl}(\mathbf{x})&=&
\frac{\text{erf}\left(
\frac{|\mathbf{x}-(\mathbf{q}_j-\mathbf{q}'_l)|}{\sqrt{2}r_C}\right)}
{|\mathbf{x}-(\mathbf{q}_j-\mathbf{q}'_l)|},
\end{eqnarray}
where erf is the error function.
Here $\Phi_{jl}(\mathbf{x})$ is slowly varying with respect to $g(\mathbf{x})$ (for more detail, see Section (4.2) of Ref.~\cite{jack}). Therefore $g(\mathbf{x})$ acts like a Dirac delta, effectively selecting the value of $\Phi_{jl}(\mathbf{x})$ at the origin $\mathbf{x} = 0$. Accordingly, we can write:
\begin{eqnarray}
\label{eq:lambda-dp-app}
\mathbb{E}\left(V^{\alpha\beta}_{t_1}V^{\alpha'\beta'}_{t_2}\right)&\simeq&
\frac{\delta(t_1-t_2)\,G\,\hbar}{4\pi}
\int d\mathbf{Q}\,d\mathbf{Q}'\,
\psi_\alpha(\mathbf{Q})\,\psi_\beta(\mathbf{Q})\,
\psi_{\alpha'}(\mathbf{Q}')\,\psi_{\beta'}(\mathbf{Q}')\,
\sum_{j,l}m_j\,m_l\,\frac{\text{erf}
\left(\frac{|\mathbf{q}_j-\mathbf{q}'_l|}{\sqrt{2}r_C}\right)}
{|\mathbf{q}_j-\mathbf{q}'_l|}
\end{eqnarray}
As before, we are interested in the cases where the spatial widths of the eigenfunctions $\psi_{1,2}$ are smaller than $r_C$, meaning $|\mathbf{q}_j-\mathbf{q}'_l|\ll r_C$.
This implies that $\Phi_{jl}(\mathbf{0})$ can be Taylor expanded in $|\mathbf{q}_j-\mathbf{q}'_l|/r_C$, and to leading order one finds:
\begin{eqnarray}
\frac{\text{erf}
\left(\frac{|\mathbf{q}_j-\mathbf{q}'_l|}{\sqrt{2}r_C}\right)}
{|\mathbf{q}_j-\mathbf{q}'_l|}\simeq
\frac{2}{r_C\sqrt{2\pi}}\left(
1-\frac{|\mathbf{q}_j-\mathbf{q}'_l|^2}{6r_C^2}
\right),&\text{where}&|\mathbf{q}_j-\mathbf{q}'_l|\ll r_C.
\end{eqnarray}
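This is simply the leading behavior of the error function, ${\rm erf}(z)=\tfrac{2}{\sqrt{\pi}}\left(z-\tfrac{z^3}{3}+\dots\right)$, evaluated at $z=|\mathbf{q}_j-\mathbf{q}'_l|/(\sqrt{2}\,r_C)$: dividing by $|\mathbf{q}_j-\mathbf{q}'_l|=\sqrt{2}\,r_C\,z$ gives
\begin{equation}
\frac{{\rm erf}(z)}{\sqrt{2}\,r_C\,z}
=\frac{2}{\sqrt{2\pi}\,r_C}\left(1-\frac{z^2}{3}+\dots\right)
=\frac{2}{r_C\sqrt{2\pi}}\left(1-\frac{|\mathbf{q}_j-\mathbf{q}'_l|^2}{6\,r_C^2}+\dots\right).
\end{equation}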
Then, one can follow the same line of reasoning that we followed from Eq.~\eqref{eq:lambda}
to Eqs.~(\ref{eq:l_z}) and~(\ref{eq:l_x}) to solve the rest of integrations. Accordingly, the rates $\lambda_{x,z}$ of the DP model become:
\begin{eqnarray}
\lambda_z&=&
\frac{\Lambda_{\text{\tiny DP}}\,M}{8r_C^2\,m_0^2}
\,\int d\mathbf{Q}\,\left(|\psi_2(\mathbf{Q})|^2-|\psi_1(\mathbf{Q})|^2\right)\,
\sum_{j}m_j\,\mathbf{q}_j^2,
\\
\lambda_x&=&\lambda^{12}_{12}=\frac{\Lambda_{\text{\tiny DP}}}{2r_C^2\,m_0^2}
\left(
\int d\mathbf{Q}\,\psi_1(\mathbf{Q})\,\psi_2(\mathbf{Q})
\sum_{j}m_j \mathbf{q}_j
\right)^2.
\end{eqnarray}
with $\Lambda_{\text{\tiny DP}}=\frac{G\,m_0^2}{3\sqrt{2}\pi^{3/2}\,\hbar\,r_C}\simeq 7.39\times10^{-25}\,$s$^{-1}$. These rates have the same form as those of the CSL model, but with a different coupling constant, which is $10^{-16}$ times smaller than the CSL coupling constant, and therefore negligible. Accordingly, in the subsequent analysis we will report just the CSL values.
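As a rough numerical cross-check of the quoted value (a back-of-the-envelope estimate, using the standard constants $G\simeq6.67\times10^{-11}\,{\rm m^3\,kg^{-1}\,s^{-2}}$, $\hbar\simeq1.05\times10^{-34}\,{\rm J\,s}$, $r_C\simeq10^{-7}\,$m, and taking the reference mass $m_0$ to be the nucleon mass $\simeq1.66\times10^{-27}\,$kg):
\begin{equation}
\Lambda_{\text{\tiny DP}}=\frac{G\,m_0^2}{3\sqrt{2}\,\pi^{3/2}\,\hbar\,r_C}
\simeq\frac{6.67\times10^{-11}\times(1.66\times10^{-27})^2}
{4.24\times5.57\times1.05\times10^{-34}\times10^{-7}}\;{\rm s^{-1}}
\simeq 7.4\times10^{-25}\,{\rm s^{-1}},
\end{equation}
and correspondingly $\Lambda_{\text{\tiny DP}}/\Lambda_{\text{\tiny CSL}}\simeq 7\times10^{-16}$, consistent with the $10^{-16}$ suppression invoked above.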
\vspace{2mm}
\section*{S6: Application to relevant physical systems} We now provide quantitative estimates of these rates for some physical systems of interest.
\noindent{\it Hydrogen-like atoms.}
For an atom that contains only one electron, we have: $\mathbf{D}_{12}=(m_e/e)\,\mathbf{d}_{12}$ with $m_e$ the mass and $e$ the charge of the electron, and $\mathbf{d}_{12}$ the off-diagonal element of the dipole moment with typical values of a few Debye. In addition, $\sqrt{I_\alpha/m_e}$ has typical values of a few Bohr radii. Accordingly, we get:
\begin{eqnarray}
\lambda_z\sim10^{-20}-10^{-18}\,\text{s}^{-1}; &&
\lambda_x\sim10^{-23}-10^{-21}\,\text{s}^{-1}.
\end{eqnarray}
For example, for the transition $2P\rightarrow1S$ of the hydrogen atom, where the emitted light is K-level X-ray radiation~\cite{IUPAC},
we find: $\lambda_z\simeq5.2\times10^{-19}\,$s$^{-1}$ and
$\lambda_x\simeq1.4\times10^{-22}\,$s$^{-1}$.
\noindent{\it Harmonic oscillator.}
We consider the two lowest states of a harmonic oscillator with mass $\mu$ and frequency $\omega_0$. Introducing these eigenstates into Eqs.~(\ref{eq:l_z}) and~(\ref{eq:l_x}) and performing the integration, we find:
\begin{eqnarray}
\lambda_x =
4\lambda_z = \frac{\Lambda}{2}\left(\frac{\mu\, x_0}{m_0\,r_C}\right)^2,
\end{eqnarray}
with $x_0=\sqrt{\hbar/\mu\omega_0}$ the zero-point fluctuation amplitude.
\noindent{\it Double-well potential.} We consider a system of mass $\mu$ moving in a symmetric double-well potential at low temperatures, where the meaningful eigenstates are the two lowest ones: $|\varepsilon_1\rangle=\frac{1}{\sqrt{2}}(|R\rangle+|L\rangle)$ and $|\varepsilon_2\rangle=\frac{1}{\sqrt{2}}(|R\rangle-|L\rangle)$.
The tunnelling frequency is $\omega_0=(\varepsilon_2-\varepsilon_1)/\hbar$. We denote the separation of the minima by $q_0$. The states $|L\rangle$ and $|R\rangle$ are localized states at the left and right minima, respectively. They can be transformed into each other by a displacement over the distance $q_0$, and $\langle L|R\rangle\simeq0$. Accordingly, using Eqs.~(\ref{eq:l_z}) and~(\ref{eq:l_x}) and performing the integration, we find:
\begin{eqnarray}
\lambda_z \simeq 0;&&\lambda_x \simeq \frac{\Lambda}{8}\left(\frac{\mu\, q_0}{m_0\,r_C}\right)^2.
\end{eqnarray}
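These expressions follow from Eqs.~(\ref{eq:l_z}) and~(\ref{eq:l_x}) for a single particle of mass $\mu$: placing the two minima at $\pm q_0/2$ (an arbitrary but convenient choice) and neglecting the exponentially small overlaps between the localized states, one has
\begin{equation}
\langle\varepsilon_1|q|\varepsilon_2\rangle
=\tfrac12\left(\langle R|q|R\rangle-\langle L|q|L\rangle\right)\simeq \frac{q_0}{2},
\qquad
\langle\varepsilon_2|q^2|\varepsilon_2\rangle
-\langle\varepsilon_1|q^2|\varepsilon_1\rangle
=-\left(\langle L|q^2|R\rangle+\langle R|q^2|L\rangle\right)\simeq 0,
\end{equation}
so that $\lambda_x\simeq\frac{\Lambda}{2\,r_C^2\,m_0^2}\left(\mu\,\frac{q_0}{2}\right)^2
=\frac{\Lambda}{8}\left(\frac{\mu\,q_0}{m_0\,r_C}\right)^2$, while $\lambda_z$ vanishes to this order.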
For example, some internal motions of non-rigid molecules and complexes (e.g., the inversion motion in ammonia~\cite{chi_book1} or hindered torsional rotation in X-Y-Y-X molecules~\cite{chi_book2}) can be effectively described by a double-well potential. For these systems, we have $q_0\sim1-10\,$\AA$\;$ and $\mu\sim1-100\,$amu~\cite{chi_book1,chi_book2}.
Accordingly, the order of magnitude of the strongest collapse rate for the internal motions of molecular systems is $\lambda_x\sim10^{-9}\,$s$^{-1}$.
\end{widetext}
Quantitative understanding of the long-standing $\Delta I=1/2$ puzzle and more importantly a reliable calculation
of the important direct CP-violation parameter in $K \to \pi \pi$, $\epsilon^\prime/\epsilon$ were in fact the primary
motivation for my entry into lattice methods for calculating weak matrix elements, about thirty years ago~\cite{BDHRS,BrGa,CMP,BDSPW,SS86}. Infact to tackle this difficult problem and bring it to our current level of understanding and progress has so far taken at least six
Ph D theses~\cite{TD_th,CC_th,JL_th,SL_th,mL_th,QL_th}.
Indeed, at the time the experimental measurement
of $\epsilon'$ was a huge challenge, and it took close to 20 years to completely nail it down experimentally.
For the lattice there were numerous obstacles that had to be overcome. First and foremost was the lack of chiral symmetry of Wilson fermions, entailing mixing with lower dimensional operators~\cite{Boc85,BDHS87}. While this severe difficulty thwarted early attempts
at all applications to kaon physics (even for the kaon-mixing parameter $B_K$~\cite{BS88}), it motivated us to consider applications to
heavy-light physics, as it was felt that there chiral symmetry would be less of an issue~\cite{BDHS88,BES91,BLS93,BBS98}.
Many of the important applications to
observables relevant to the Unitarity Triangle are in fact offshoots of these efforts.
While the primary focus of this article is on developments exclusively from the lattice perspective, we want to use the opportunity
to mention some prominent studies of $K \to \pi \pi$, the $\Delta~ I=1/2$ puzzle and $\epsilon'$ using continuum techniques
which offered interesting and very useful insights~\cite{cont}.
In 1996-97 the first simulations with domain wall quarks (DWQ) demonstrated the feasibility of using this 5-dimensional formulation~\cite{FS_94}; even with a modest extent of about 10 sites in the 5th dimension, DWQ exhibited excellent chiral symmetry, as the first application to kaon matrix elements, in the quenched approximation,
showed~\cite{BS96,BS97}. With the formation of the RIKEN-BNL-Columbia (RBC) Collaboration around 1998, the first large scale simulations, in the quenched
approximation~\cite{RBC_cl}, with domain wall
quarks applied to $K \to \pi \pi$, $\Delta I=1/2$ and $\epsilon'$ began. These continued to use chiral perturbation
theory (as was the case with the previous attempts with Wilson fermions) to reduce the problem to
a calculation of $K \to \pi$ and $K \to {\rm vac}$ following~\cite{BDSPW}. The first results from this approach showed that
for $\epsilon'$ the quenched approximation is highly pathological~\cite{RBC_eps01}. In particular, the QCD penguin operator
$Q_6$, which is an (8,1), suffers from mixing with (8,8) operators such as $Q_8$, emphasizing to us the need for full QCD in so far as the calculation of $\epsilon'$ is concerned~\cite{GP1,GP2,RBC_JL}.
It took several years to finish the first calculation of $K \to \pi \pi$ with DWQ in full (2+1) flavor QCD, again using ChPT, only to discover that the kaon is simply too heavy for ChPT to be reliable~\cite{RBC_UKQCD_SU2}; the systematic errors for the matrix elements of many of the key operators
were O(50\%) or even more~\cite{SL_NHC,SL_th}.
That brings us to the efforts of the past $\approx$ 6 years, jointly by the RBC and UKQCD collaborations, to go instead for {\it direct} calculations of $K \to \pi \pi$ using finite volume correlation functions as suggested by Lellouch-L\"uscher~\cite{LL00}.
The results reported in this
talk~\cite{RBC_UKQCD_PRL13} primarily use three different lattices (see Tab.~\ref{tab:params}) accumulated over the past several years. The $16^3$ and $24^3$ lattices only allow for threshold studies, whereas the $32^3$ lattice of volume $\approx (4.5\,{\rm fm})^3$
is used to study $K \to \pi \pi$ with physical kinematics.
While all three lattices have already been used for the simpler $I=2$
final state, for the more challenging $I=0$ final state the studies at physical kinematics on the $32^3$ lattice are still not
complete. Fortunately, as will be explained, for the $\Delta I=1/2$ puzzle it turns out that understanding the simpler $I=2$ channel proves to be crucial.
\subsection{The Puzzle}
Let's briefly recapitulate the so-called $\Delta I=1/2$ puzzle.
The issue boils down to the huge (factor of about 450) disparity between the life-time of the neutral kaon ({\it i.e.} $K_S$)
and that of $K^\pm$. Thus, basically, changing the spectator u-quark in $K^+$ to a d-quark in $K_S$ results in
this huge change in their life-times. Their main decay mode is just to two pions. However, whereas $\pi^+ \pi^0$ (resulting from the decays of $K^+$) is in
a pure $I=2$ final state, $\pi^+ \pi^-$ or $\pi^0 \pi^0$ are mixtures of $I=0$ and $I=2$; thus
the ratio of the two relevant amplitudes, $Re A_0/ReA_2 \approx 22$, for the $I=0$ and $I = 2$ final states, is a lot bigger than
unity. Since for the charged K
the change in isospin is $\Delta I=3/2$, whereas for the neutral K it is either 1/2 or 3/2, this implies that the $\Delta I=1/2$
amplitude is significantly larger than the $\Delta I=3/2$ one; this is the long-standing puzzle (see {\it e.g.}~\cite{DGH_book}).
While it has long been speculated that QCD corrections may be responsible for this huge enhancement,
at this scale highly non-perturbative effects are anticipated; of course, over the years there have been numerous
suggestions~\cite{cont}, including new physics (see {\it e.g.}~\cite{AK94}), as the cause for this large enhancement.
\section{Weak Effective Hamiltonian and 4-quark operators}
Using the OPE apparatus, one arrives at the effective Hamiltonian for $\Delta S=1$ weak decays~\cite{GMLR,AJBRMP,RBC_eps01},
\begin{equation}
H^{\Delta S=1}=\frac{G_F}{\sqrt{2}}V_{ud}^*V_{us}
\sum_{i=1}^{10}[(z_i(\mu)+\tau y_i(\mu))] Q_i.
\label{eq: Eff_H}
\end{equation}
Here, $Q_i$ are the well-known 4-quark operators,
\begin{subequations}
\allowdisplaybreaks[2]
\begin{align}
Q_1 &= (\bar{s}_\alpha d_\alpha)_{V-A}(\bar{u}_\beta u_\beta )_{V-A}, \\
Q_2 &= (\bar{s}_\alpha d_\beta)_{V-A}(\bar{u}_\beta u_\alpha)_{V-A},\\
Q_3 &= (\bar{s}_\alpha d_\alpha)_{V-A}\sum_{q=u,d,s} (\bar{q}_\beta q_\beta )_{V-A} , \\
Q_4 &= (\bar{s}_\alpha d_\beta )_{V-A}\sum_{q=u,d,s} (\bar{q}_\beta q_\alpha)_{V-A},\\
Q_5 &= (\bar{s}_\alpha d_\alpha)_{V-A}\sum_{q=u,d,s} (\bar{q}_\beta q_\beta )_{V+A}, \\
Q_6 &= (\bar{s}_\alpha d_\beta )_{V-A}\sum_{q=u,d,s}(\bar{q}_\beta q_\alpha)_{V+A} ,\\
Q_7 &= \frac{3}{2} (\bar{s}_\alpha d_\alpha)_{V-A}\sum_{q=u,d,s} e_q (\bar{q}_\beta q_\beta )_{V+A},\\
Q_8 &= \frac{3}{2} (\bar{s}_\alpha d_\beta )_{V-A}\sum_{q=u,d,s} e_q (\bar{q}_\beta q_\alpha)_{V+A},\\
Q_9 &= \frac{3}{2} (\bar{s}_\alpha d_\alpha)_{V-A}\sum_{q=u,d,s} e_q (\bar{q}_\beta q_\beta )_{V-A},\\
Q_{10} &= \frac{3}{2} (\bar{s}_\alpha d_\beta )_{V-A}\sum_{q=u,d,s} e_q (\bar{q}_\beta q_\alpha)_{V-A},
\end{align}
\label{eq:ops}
\end{subequations}
\noindent where $\alpha$, $\beta$ are color indices and (V - A) means $\gamma_\mu ( 1 - \gamma_5)$.
It is important to recognize that $Q_2$ is the original 4-quark (tree) operator of the basic charged current
weak decay, $[ \bar{s}_\alpha \gamma_\mu (1 - \gamma_5) u_\alpha] [\bar{u}_\beta \gamma_\mu ( 1 - \gamma_5) d_\beta]$, conventionally written here in the Fierz transformed basis. When QCD is switched on,
$Q_2$ is not multiplicatively renormalizable and, as was realized long ago~\cite{GL74,AM74}, it mixes with another tree operator
$Q_1$. On the other hand, $Q_3$ to $Q_6$ are the QCD penguin operators~\cite{SVZ75} and $Q_7$ to $Q_{10}$ are the electroweak (EW)
penguin operators~\cite{GW78,GW79}.
On the lattice, in the absence of exact chiral symmetry, each of these dim-6, 4-quark operators of the $\Delta S=1$ Hamiltonian can mix with lower
dimensional operators, {\it e.g.} $\bar{s}d$, $\bar{s}\gamma_5d$, etc. The effects of these mixings are purely
unphysical and need to be subtracted away. As the lattice spacing is made finer and
one moves towards the continuum limit, these unphysical contributions tend to become huge, and the subtraction can become very demanding and delicate, quite akin to
fine tuning. The chiral behavior of Wilson fermions was so bad that the original methods~\cite{BDSPW,KS96,Maiani87} that were proposed to deal with such subtraction issues proved to be
quite inadequate. Because of the excellent chiral symmetry of DWQs, this became by and large a non-issue
provided the extent of the 5th dimension is not too small.
The matrix element $\langle\pi\pi|Q_i|K^0\rangle$ for each operator entails an evaluation of 48 different Wick contractions,
which can be grouped into four different types~\cite{BS88,RBC_UKQCD_QL,QL_th}. Of these, type-4 involve disconnected diagrams and are therefore computationally the most demanding. Type-3 contain ``eye'' contractions~\cite{BDHRS}, type-2 correspond to
``figure-eight'' diagrams,
and type-1 correspond to the original weak interaction tree graphs; see Fig.~\ref{4_types}. In particular, it is to be stressed that only type-1 contributes
to the $\Delta I=3/2$ transitions and the corresponding $I=2$ final state of the two pions, whereas the
$\Delta I=1/2$ transitions to the $I=0$ final state receive contributions from all four types and consequently
are much more intricate and challenging to tackle than the $\Delta I=3/2$ case.
\begin{figure}[htb]
\centering
\includegraphics[width=8cm]{4_types.pdf}
\caption{Four general types of quark flow diagrams contributing to $K^0 \to \pi^+ \pi^-$; (a) corresponds
to spectator types in the continuum literature, (b) and (d) to annihilation and (c) to penguins; (d) though
requires disconnected contributions, which on the lattice are extremely demanding. Taken from~\protect\cite{BS88}}
\label{4_types}
\end{figure}
\section{$\Delta I=3/2$}
As indicated already, somewhat ironically it turned out at the end of the day that it is the simpler $\Delta I=3/2$
$K \to \pi \pi$ amplitude that is very revealing in so far as the enhancement of the ratio is concerned. The 3/2 amplitude involves only type-1 contractions~\cite{RBC_UKQCD_QL}.
The Wick contractions for $\langle\pi\pi|Q_{1,2}|K^0\rangle$, for the dominant operators
$Q_2$ or $Q_1$, entail two contributions: one goes as the product of two traces in color space ($\sim N^2$) and
the other as a
single trace in color space ($\sim N$), where for QCD $N = 3$.
As is well known, continuum folklore says that the $N^2$ term dominates and the two terms add~\cite{GL74,DGH_book_p,HG_book_p,CL_book_p}.
Our data using three different lattices~\cite{RBC_UKQCD_PRL13}, collected over the past few years, allow us to study these contributions
as a function of the pion mass (with $m_K \approx 2 m_\pi$). In fact the relative sign between the terms is
negative, and
the cancellation between the two terms increases as the pion mass is lowered. Indeed at physical kinematics, with $m_{\pi} \simeq 142$\,MeV
and $m_K \simeq 511$\,MeV (cf.\ Tab.~\ref{tab:params}), the single trace contribution is around $-0.7$ of the trace $\times$ trace term.
So the observed amplitude is only around $2.7/12 \approx 0.25$ of the naive expectation (counting the trace $\times$ trace term as $N^2=9$ and the single trace term as $N=3$, added with the same sign). In other words,
out of the observed enhancement of the ratio of the two amplitudes by a factor of around 22, as much
as a factor of 4 may simply be coming from this cancellation, which makes the 3/2
amplitude only about 0.25 of the naive expectation.
\begin{figure}[htb]
\begin{minipage}{0.47\linewidth}
\centerline{\includegraphics[width=7cm]{Fig1a_C1.pdf}}
\end{minipage}
\hfill
\begin{minipage}{0.47\linewidth}
\centerline{\includegraphics[width=7cm]{Fig1b_C2.pdf}}
\end{minipage}
\caption{The two contractions contributing to Re$A_2$; $\bold i$ and $\bold j$ denote color indices. $s$ denotes the strange quark and $L$ that the currents are left-handed; taken from~\protect\cite{RBC_UKQCD_PRL13}.}
\label{c1c2}
\end{figure}
Another notable feature of the $I = 2$ channel is that its amplitude, $ReA_2$, shows a significant dependence
on $m_{\pi}$. We attribute this largely to the cancellation mentioned above. From Tab.~\ref{tab:params} we see that as the pion mass decreases from about 420 MeV to 140 MeV, $ReA_2$ decreases by about a factor of 3.5, and with physical $\pi$, $K$ masses
it is in good agreement (within $\approx$ 15\%) with its measured value from experiment~\cite{RBC_UKQCD_PRD12}.
Moreover, recall that $Re A_2$ is closely related to $B_K$, the neutral kaon mixing parameter, as has long been known since
the famous work of~\cite{DGH_BK}, who obtained $B_K^{LOChPT} \approx 0.3$ by exploiting its relationship with
the experimentally measured value of $ReA_2$ from the charged kaon lifetime, assuming SU(3) and
lowest order chiral perturbation theory. Lattice studies have of course
also long shown that $B_K$ changes from about 0.3 to 0.6 as one moves from the chiral limit to the physical $m_K$~\cite{RBC_BKCL,LL11}.
\section{Implications for $ReA_0$ and the $\Delta I=1/2$ Rule}
What is even more striking is how this cancellation that is responsible for the suppression of
$Re A_2$ actually also ends up enhancing $Re A_0$.
First let's just look at the dominant operator, $Q_2$. Its contribution to
$Re A_2$ and to $Re A_0$ is as follows~\cite{RBC_UKQCD_QL}:
\begin{eqnarray}
Re A_{2,2} & = &i\sqrt{\frac{2}{3}}(ST + TSQ),\\
Re A_{0,2} & = &i\sqrt{\frac{1}{3}}(-ST + 2 TSQ)
\label{eq:Q2_cont}
\end{eqnarray}
\noindent where in the notation $A_{i,j}$ the first index $i=0$ or $2$ labels the isospin of the two-pion final state and the second index $j=1,2$ labels the operator $Q_j$ (so $j=2$, for example, refers to $Q_2$); ST denotes the single trace over color indices and TSQ the trace $\times$ trace contribution. Thus, recalling
that at physical kinematics $ST/TSQ \approx - 0.7$, the ratio $Re A_0 /Re A_2 \approx 6.4$. So far
we have only looked at the contribution of the dominant tree operator $Q_2$. Let us next also
retain the next most important operator, which happens to be the tree operator $Q_1$. One finds,
\begin{eqnarray}
Re A_{2,1} & = &i\sqrt{\frac{2}{3}}(ST + TSQ),\\
Re A_{0,1} & = &i\sqrt{\frac{1}{3}}(2 ST - TSQ).
\label{eq:Q1_cont}
\end{eqnarray}
\noindent Thus, incorporating the Wilson coefficients ($Z_j$, with $j = 1,2$) for these two operators~\cite{RBC_UKQCD_QL},
\noindent $Z_1 = -0.30$ and $Z_2 = 1.14$, one gets,
\begin{equation}\label{eq:ReA1A2}
Re A_i = \sum_{j=1,2} Z_j A_{i,j}
\end{equation}
\noindent for $i=0, 2$ corresponding to $I = 0, 2$ for the two final states. Then given ST $\approx$ -0.7 $\times$ TSQ,
we get $ReA_0/ReA_2 \approx 10.8$; thus accounting for almost half of the experimental number
$\approx$ 22.5.
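For concreteness, it may help to spell out the simple arithmetic behind these two ratios; this is just Eqs.~(\ref{eq:Q2_cont}) and (\ref{eq:Q1_cont}) evaluated with the rounded values $ST \approx -0.7\, TSQ$, $Z_1=-0.30$ and $Z_2=1.14$ quoted above. Keeping $Q_2$ alone,
\begin{equation}
\frac{Re A_{0,2}}{Re A_{2,2}} = \frac{1}{\sqrt{2}}\,\frac{-ST+2\,TSQ}{ST+TSQ}
\approx \frac{1}{\sqrt{2}}\,\frac{2.7}{0.3} \approx 6.4,
\end{equation}
while including $Q_1$ as well,
\begin{equation}
\frac{Re A_0}{Re A_2} = \frac{1}{\sqrt{2}}\,
\frac{Z_1(2\,ST-TSQ)+Z_2(-ST+2\,TSQ)}{(Z_1+Z_2)(ST+TSQ)}
\approx \frac{1}{\sqrt{2}}\,\frac{0.72+3.08}{0.84\times 0.3} \approx 10.7,
\end{equation}
reproducing the value $\approx 10.8$ quoted above up to the rounding of $ST/TSQ$.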
Note also that the cancellation between the single color trace and the trace $\times$ trace term
not only causes a further suppression of $ReA_2$ (since the sign of the Wilson coefficient
$Z_1$ is opposite to that of $Z_2$), but, as an interesting coincidence, it also ends up enhancing $ReA_0$.
This is easily understood from the simple Eqs.~(\ref{eq:Q2_cont}) and (\ref{eq:Q1_cont}) above, as the relative sign
between the single trace and the trace $\times$ trace terms switches from $Re A_2$ to $Re A_0$.
While our calculation of $ReA_0$ at physical kinematics is not yet complete, there are several interesting features
of the existing calculations summarised in Tab.\,\ref{tab:params} that are noteworthy. One item to note is the ratio
$ReA_0/ReA_2$ resulting from our two completed calculations on the $16^3$ and $24^3$ lattices. It is 9
and 12 respectively. These numbers are for amplitudes calculated at threshold. As commented before
the corresponding $ReA_2$ on these lattices are factors of $\approx$ 3.5 and $\approx$ 2 times
the value of $Re A_2$ at physical kinematics. This is mostly the result of significant mass dependence
of $Re A_2$. In contrast, our numbers for $Re A_0$ show a milder mass dependence and in fact
the value we obtain on our $24^3$ lattice (whose volume is about three times bigger compared
to the smaller lattice) at threshold is quite consistent with experiment; whether this feature will remain true
at physical kinematics or not remains to be seen.
\begin{table*}[h!]
\caption{Reproduced from ~\protect\cite{RBC_UKQCD_PRL13}. Summary of simulation parameters and results obtained on three domain wall fermion ensembles. The errors with the Iwasaki action are statistical only, the second error for Re$A_2$ at physical kinematics from the IDSDR simulation is systematic and is dominated by an estimated 15\%
discretization uncertainty as explained in ~\protect\cite{RBC_UKQCD_PRD12}.}
\label{tab:params}
\vspace{0.4cm}
\begin{center}
\footnotesize
\begin{tabular}{c | c c c c c c | l}
&$~a^{-1}$&$m_\pi$ & $m_K$ & Re$A_2$ & Re$A_0$ &$\frac{\mathrm{Re}A_0}{\mathrm{Re}A_2}$ & notes \\
&$[\text{GeV}]$&$[\text{MeV}]$ & $[\text{MeV}]$ & $[10^{\textrm{-}8}\, \text{GeV}]$ & $[10^{\textrm{-}8}\, \text{GeV}]$ & & \\
\hline
$16^3$ {\bf Iwasaki} &1.73(3)& 422(7) & 878(15) &4.911(31) &45(10)&9.1(2.1) &\,threshold calculation\\
$24^3$ {\bf Iwasaki} & 1.73(3)& 329(6) & 662(11) &2.668(14) &32.1(4.6)&12.0(1.7) &\,threshold calculation \\
${\bf IDSDR}$ & 1.36(1)& 142.9(1.1) & 511.3(3.9) &1.38(5)(26)&-& - &\,physical kinematics \\
${\bf Experiment}$ &--& 135\,-\,140&494\,-\,498&1.479(4)&33.2(2)&22.45(6)&
\end{tabular}
\end{center}
\end{table*}
In passing let us note that from the SU(2) ChPT description of $K \to \pi \pi$ one also finds
a significantly stronger dependence on the pion mass for $ReA_2$ than for $ReA_0$~\cite{BC09}, in qualitative
agreement with the lattice observations.
\begin{figure}[h!]
\centering
\includegraphics[width=9cm]{Fig2_c1c2data_DSDR_p2_dt24.pdf}
\caption{Contractions $\protect\circled{1}$ (which goes as trace $\times$ trace in color space and is also
called TSQ here), -$\protect\circled{2}$ (which goes as a single trace in color space and is also
called ST here) and $\protect\circled{1}+\protect\circled{2}$ as functions of $t$ from the simulation at physical kinematics. Taken from~\protect\cite{RBC_UKQCD_PRL13}. \label{c1c2physical}}
\end{figure}
\begin{figure}[h!]
\centering
\includegraphics[width=9cm]{Fig3_c1c2data_24cube_dt20.pdf}
\caption{Contractions $\protect\circled{1}$, -$\protect\circled{2}$ and $\protect\circled{1}+\protect\circled{2}$ as functions of $t$ from the simulation at threshold with $m_\pi\simeq$ 330\,MeV; see also Fig.~\ref{c1c2physical}. Taken from~\protect\cite{RBC_UKQCD_PRL13}. \label{c1c2330}}
\end{figure}
\subsection{The role of penguins in the $\Delta I=1/2$ Puzzle}
From Tab.\,\ref{tab:a0breakdown} we see that at a scale of $\approx$ 2.15 GeV, the tree operators
$Q_2$ and $Q_1$ account for almost 97\% of $ReA_0$, so the contribution of the remaining
operators, in particular the QCD penguins, is only a few \%,
and that of the EW penguins around 0.1\%. In fact, roughly similar conclusions were arrived at previously
when we used the chiral perturbation approach, both in the quenched approximation~\cite{RBC_eps01}
as well as in dynamical 2+1 flavor QCD~\cite{SL_NHC,SL_th}.
We stress again that these calculations of $Re A_0$ (Tab.\,\ref{tab:a0breakdown})
are not at physical kinematics, so the relative importance of the penguin and tree contributions
may well change to some degree; however, the fact remains that the cancellation and suppression of
$ReA_2$ and enhancement of $ReA_0$, in the tree contributions, which are the new aspects being reported
here, imply a diminished role for the penguin contributions at least at a renormalization point
around 2 GeV.
\begin{table}[h!]
\caption{\label{tab:a0breakdown}
Contributions from each operator to Re$A_0$ for $m_K=662$\,MeV and $m_\pi=329$\,MeV. The second column contains the contributions from the 7 linearly independent lattice operators with $1/a=1.73(3)$\,GeV and the third column those in the 10-operator basis in the $\overline{\text{MS}}$-$\mathrm{NDR}$ scheme at $\mu=2.15$\,GeV. Numbers in parentheses represent the statistical errors. Taken from~\protect\cite{RBC_UKQCD_PRL13}}
\begin{center}
\vspace{0.2cm}
\begin{tabular}{c|c|c}
i & $Q_i^{\text{lat}}\;$[GeV]&$Q_i^{\overline{\text{MS}} \text{-NDR}} \;$[GeV]\\ \hline
1&\,\phantom{-}8.1(4.6) $10^{-8}\,\,$&\phantom{-}6.6(3.1) $10^{-8}\,\,$\\
2&\,\phantom{-}2.5(0.6) $10^{-7}\,\,$&\phantom{-}2.6(0.5) $10^{-7}\,\,$\\
3&\,-0.6(1.0) $10^{-8}\,\,$&\phantom{-}5.4(6.7) $10^{-10}$\\
4&--&\phantom{-}2.3(2.1)~$10^{-9}\,\,$\\
5&\,-1.2(0.5) $10^{-9}\,\,$&\phantom{-}4.0(2.6) $10^{-10}$\\
6&\,\phantom{-}4.7(1.7) $10^{-9}\,\,$&-7.0(2.4) $10^{-9}\,\,$\\
7&\,\phantom{-}1.5(0.1) $10^{-10}$&\phantom{-}6.3(0.5) $10^{-11}$\\
8& -4.7(0.2) $10^{-10}$&-3.9(0.1) $10^{-10}$\\
9&--&\phantom{-}2.0(0.6) $10^{-14}$\\
10& --& \phantom{-}1.6(0.5) $10^{-11}$\\ \hline
Re$A_0$ &\,\phantom{-}3.2(0.5) $10^{-7}\,\,$ & \phantom{-}3.2(0.5) $10^{-7}\,\,$
\end{tabular}
\end{center}
\end{table}
\subsection{The role of disconnected diagrams for $Re A_0$}
Our calculation of $A_0$ discussed here is not yet at physical kinematics. It is actually at threshold
and, perhaps more importantly, the (valence)
pion masses are relatively heavy. As Tab.\,\ref{tab:params} shows, we completed the threshold calculation
of $Re A_0$ on two different lattices ($16^3$ and $24^3$) with pion masses around 420 MeV and 330 MeV,
attaining statistical accuracies around 25\% and 15\% respectively. These calculations include the
contribution from disconnected diagrams as well.
Within the stated accuracy, we do not seem to see any discernible contribution from
the disconnected diagrams in so far as $Re A_0$ is concerned. Again we emphasize that this is with pion mass around 330 MeV and
not with physical pion masses.
Given that the dominant contribution to $Re A_0$ seems to come from tree operators,
which do not receive contribution from any disconnected diagrams, it is understandable that the
disconnected diagrams contribution to $Re A_0$ is most likely rather small.
\subsection{The role of disconnected diagrams for $Im A_0$}
Tree level operators cannot contribute to $Im A_0$. Only eye-contractions and disconnected
diagrams make contributions to $Im A_0$; thus one expects an enhanced role for disconnected graphs in $Im A_0$.
This is why our calculation of $Im A_0$, even with $m_\pi \approx 330$\,MeV, has statistical errors of around 50\%.
\subsection{Status of $\epsilon^{\prime}$}
As is well known, contributions to $\epsilon'$ can be divided into two categories: QCD penguins and EW penguins~\cite{GMLR,BJ04},
originating respectively from $Q_3,Q_4,Q_5,Q_6$ and from $Q_7, Q_8, Q_9, Q_{10}$.
Amongst these $Q_6$ and $Q_8$ are the dominant players.
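For orientation, recall the standard approximate relation (see, {\it e.g.},~\cite{GMLR,BJ04}), valid up to phase factors that are numerically close to unity,
\begin{equation}
\frac{\epsilon'}{\epsilon} \simeq \frac{\omega}{\sqrt{2}\,|\epsilon|}
\left[ \frac{Im A_2}{Re A_2} - \frac{Im A_0}{Re A_0} \right],
\qquad \omega \equiv \frac{Re A_2}{Re A_0},
\end{equation}
which makes explicit that the lattice inputs needed are $Im A_2$ and $Im A_0$, in addition to the real parts discussed above.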
RBC and UKQCD have already finished their computation of $Im A_2$ as indicated in Tab.\,\ref{tab:params} at physical
kinematics~\cite{RBC_UKQCD_PRD12} with an estimated statistical error of $\approx$ 20\% and roughly similar error
for systematics. This means the EWP contribution to $\epsilon'$ has already been computed; indeed improved calculations of $Im A_2$ are well underway
and are expected rather soon with appreciable reduction in errors.
From a purely personal perspective, $\epsilon'$ has always been the main focus of the $K \to \pi \pi$ effort from
the very beginning. The calculation of $Im A_0$ relevant to $\epsilon'$ is even more challenging than that of
$Re A_0$ relevant for the $\Delta I=1/2$ rule. This is because $Im A_0$ does not receive any contribution
from the tree operators. This is understandable as in the SM all three generations have to participate
to make a non-vanishing contribution to any CP violation phenomena. Thus penguin graphs
and consequently eye contractions become essential on the lattice.
While that renders the calculation quite challenging, perhaps another order of magnitude in the
complexity is added by the fact that the $I=0$ channel receives contributions from disconnected
diagrams. The error on our $\epsilon'$ calculation is around 100\% at present.
\subsection{Possible implications for other weak decays}
Our lattice studies of {\it direct} $K \to \pi \pi$ seem to show that for QCD, {\it i.e.} $N=3$, the large $N$ approximation
is subject to rather large corrections. Since its use, as well as that of the closely related notion of factorization, is so pervasive in weak decays, perhaps D and B decays ought to be re-examined in light of these lattice findings. Moreover, because the cancellation discussed above accounts for a significant fraction
of the enhancement of $Re A_0/Re A_2$ and also because the penguin contribution to $Re A_0$ seems to be so small,
it tells us that the penguin contribution in D-decays (in the I=0 channel) is bound to be quite small as
\begin{equation}
\frac{P_D}{T_D} \approx \delta_{Uspin} \times \frac{P_K}{T_K}
\label{eq:charm}
\end{equation}
\noindent where the subscript D or K refers to exclusive $D \to PP$ or $K \to PP$, respectively, with P a pseudoscalar meson, and
$\delta_{Uspin}$ is indicative of U-spin violation, reflecting the cancellation between the s
and d virtual quarks in the penguin.
\section{Summary \& Outlook for the near future}
Summarizing, lattice studies of {\it direct} $K \to \pi \pi$ show that in the simpler $I=2$ channel,
at physical kinematics,
the contribution to the amplitude from the original tree 4-quark $\Delta S=1$ weak operators that goes as
$N^2$ cancels significantly against the one that goes as $N$, causing an appreciable suppression of the
$\Delta I=3/2$ transition~\cite{BBG_PR}. This seems to account for a considerable fraction of the enhancement in the
ratio $Re A_0/Re A_2$. These results suggest that expectations from large $N$ may receive
large corrections for QCD in weak decays.
Understanding $K \to \pi \pi$ decays and calculation of $\epsilon'$ remains a very important goal of the RBC
and UKQCD collaborations. I am hopeful that the first calculation of $Re A_0$ and $\epsilon'$ in full QCD
with physical kinematics would be completed in about two years.
\section*{Acknowledgements}
I want to thank my colleagues from RBC and UKQCD collaborations and especially Norman Christ,
Christoph Lehner, Qi Liu and Chris Sachrajda
for numerous valuable discussions. I have also benefitted from many conversations with Andrzej Buras
and Enrico Lunghi.
Also I must thank the organizers of the EW Moriond 2013, and in
particular Jean-Marie Fr\`ere for inviting me.
This work is supported in part by the US DOE contract No.
DE-AC02-98CH10886.
\section*{References}
\subsection{Motivating example}
Let $(\B(t))_{t\in \Theta}$ with $\Theta=\Z$ or $\Theta=\R$
be a wide sense stationary process with discrete or continuous
time. The classical linear prediction problem consists of
finding an element in $\span{B(s),s\le t}$
providing the best possible mean square approximation to
the variable $\B(\tau)$ with $\tau>t$, see \cite{Kolm} and
\cite{Doob1, Doob2, GS, Roz, Yagl, W}.
Below we investigate this and some other similar problems where, in
addition to prediction quality, optimization takes into account
other features of the objects we search for, such as the smoothness
properties of approximation processes.
Here and elsewhere in the article $\span{\cdot}$ stands for the closed
linear span of a set in a Hilbert space.
All mentioned processes are assumed to be complex valued and all Hilbert spaces
are assumed to be complex.
\begin{example} \label{ex:kinetic} {\rm (Approximation saving kinetic
energy, \cite{KabLi}). By the instant {\it kinetic energy} of a process
$(\X(t))_{t\in \R}$ we understand just its squared derivative
$|\X'(t)|^2$. It is more than natural to search for an approximation
of a given stationary process $(\B(t))_{t\in \R}$ by a differentiable
stationary process $(\X(t))_{t\in \R}$ taking
into account the kinetic energy that $\X$ spends in its approximation
efforts. The goals of the approximation quality and energy saving may
be naturally combined with averaging in time by minimization of the
functional
\[
\lim_{N\to\infty} \frac 1N\ \int_0^N \left[ |\X(t)-\B(t)|^2 +
\al^2|\X'(t)|^2\right] dt.
\]
Here $\al>0$ is a fixed scaling regularization parameter balancing the quality
of approximation and the spent energy.
If, additionally, the process $\X(t)-\B(t)$ and the derivative $\X'(t)$
are stationary processes in the strict sense, in many situations
ergodic theorem applies and the limit above is equal to
$\E |\X(0)-\B(0)|^2 + \al^2 \E |\X'(0)|^2$.
Therefore, we may simplify our task to solving the problem
\be \label{EE0}
\E |\X(0)-\B(0)|^2 + \al^2 \E |\X'(0)|^2 \to \min,
\ee
and setting aside ergodicity issues.
The problem \eqref{EE0} makes sense either in a simpler
{\it linear non-adaptive setting}, i.e. with
\[
\X(t) \in \span{\B(s), s\in \R}, \quad t\in \R,
\]
or in {\it linear adaptive setting} by requiring additionally
\[
\X(t) \in \span{\B(s), s\le t}, \quad t\in \R.
\]
In other words, this means that we only allow approximations based on
the current and past values of $\B$.
}
\end{example}
\bigskip
Let us start with a basic notation. Let $(\B(t))_{t\in \Theta}$ be
a complex-valued random process satisfying
$\E\B(t)=0$, $\E|\B(t)|^2<\infty$ for all $t\in \Theta$.
Consider $H:=\span{\B(t), t\in \Theta}$ as a Hilbert space equipped
with the scalar product $(\xi,\eta)=\E (\xi\overline{\eta})$. For
$T\subset \Theta$ let $H(T):= \span{\B(t), t\in T}$.
Furthermore, let $L$ be a linear operator with values in $H$ and
defined on a linear subspace $\DD(L)\subset H$. For a fixed
$\tau\in \Theta$, consider the extremal problem
\be\label{A}
\E|Y-\B(\tau)|^2 + \E|L(Y)|^2 \to \min,
\ee
where the minimum is taken over all $Y\in H(T) \bigcap \DD(L)$.
The first term in the sum describes approximation, prediction, or
interpolation quality while the second term stands for additional
properties of the object we are searching for, e.g. for the
smoothness of the approximating process.
This is the most general form of the problem we are interested in.
Below we restrict the class of processes under consideration to
one-parameter wide sense stationary processes with discrete or continuous
time, introduce an appropriate class of operators $L$, and
explain in Subsection \ref{ss:kin_as_L} why Example \ref{ex:kinetic}
is a special case of problem \eqref{A}.
\subsection{Spectral representation: brief reminder}
Let now $(\B(t))_{t\in \Theta}$ be the main object of our investigation
-- a centered wide sense stationary random process with univariate discrete
($\Theta=\Z$) or continuous ($\Theta=\R$) time. In case of continuous
time we additionally assume that $\B$ is mean square continuous.
Here and elsewhere we assume that all random variables under consideration
are centered. In particular, $\E \B(t)=0$ for all $t\in \Theta$.
By Khinchin theorem, the covariance function
\[
\K(t) := \E \B(t)\overline{\B(0)}
\]
admits the spectral representation
\[
\K(t) := \int e^{itu} \mu(du).
\]
Here and in the sequel integration in similar integrals is performed
over the interval $[-\pi,\pi)$ in case of the discrete time
processes and over real line $\R$ in case of the continuous time processes.
The finite measure $\mu$ on $[-\pi,\pi)$ or on $\R$,
respectively, is the spectral measure of the process $\B$. The process
$\B$ itself admits the spectral representation as a stochastic integral
\be \label{specBs}
\B(t) = \int e^{itu} \W(du)
\ee
where $\W$ is an orthogonal random measure on $[-\pi,\pi)$ or on $\R$,
respectively, with $\E|\W(A)|^2=\mu(A)$.
Let
\[
\LL = L_2(\mu)
=\left\{\phi:\ \int |\phi(u)|^2 \mu(du) <\infty \right\}
\]
be equipped with the usual scalar product
\[
\left(\phi, \psi \right)_\LL
= \int \phi(u)\, \overline{\psi(u)}\, \mu(du).
\]
Recall that for any
\[
\xi= \int \phi(u)\, \W(du), \qquad \eta= \int \psi(u)\, \W(du)
\]
it is true that
\[
\E \xi\overline{\eta} = \int \phi(u)\, \overline{\psi(u)}\, \mu(du)
= \left(\phi, \psi \right)_\LL.
\]
It follows that the correspondence $\B(t) \rightleftarrows e^{itu}$
extends to the linear isometry between $H$ and the closed linear
span of the exponents in $\LL$. Actually, the latter span coincides
with entire space $\LL$, cf.\,\cite[Section 1.9]{GS}, and we obtain a linear
isometry between $H$ and $\LL$ provided by stochastic integral.
In other words, every element $\xi$ of Hilbert space $H$ can be
represented as a stochastic integral
\be \label{specxi}
\xi = \int \phi_\xi(u) \W(du)
\ee
with some complex valued function $\phi_\xi\in\LL$, and every
random variable $\xi$ admitting the representation \eqref{specxi}
belongs to $H$.
\bigskip
An analogous theory exists for processes with wide sense
stationary increments.
Let $\B$ be such process with zero mean (for continuous time we
additionally assume mean square continuity). Similarly to
\eqref{specBs}, the process $\B$ admits a spectral representation
\[
\B(t) = \B_0 + \B_1 t + \int (e^{itu}-1) \W(du)
\]
where $\W(du)$ is an orthogonal random measure controlled by the spectral
measure $\mu$ and $\B_0,\B_1$ are centered random variables uncorrelated
with $\W$, see \cite[p.213]{Yagl}. Notice that in case of processes with
wide sense stationary increments the spectral measure $\mu$ need not be finite but
it must satisfy $\mu\{0\}=0$ and L\'evy's integrability condition
\[
\int_{-\pi}^\pi u^2\, \mu(du) < \infty
\]
for discrete time and
\[
\int_\R \min\{u^2,1\} \mu(du) < \infty
\]
for continuous time.
In the following we let $\B_0=\B_1=0$ because for the prediction problems
we handle here the finite rank part is uninteresting. We also do not
lose any interesting example with this restriction. Therefore, we
consider the processes
\[
\B(t) = \int (e^{itu}-1) \W(du).
\]
\subsection{Probabilistic problem setting}
The operators $L$ we are going to handle are those of the form
\be \label{L}
L\xi = L\left( \int \phi_\xi(u) \W(du) \right)
:= \int \ell(u) \phi_\xi(u) \W(du),
\ee
where $\ell$ is a measurable function on $\R$ or on $[-\pi,\pi)$,
respectively. The domain $\DD(L)$ consists of $\xi$ such that
\[
\E|L\xi|^2 = ||L\xi||_H^2
= \int |\ell(u) \phi_\xi(u) |^2 \mu(du)<\infty.
\]
Such operators are often called {\it linear filters} while the
function $\ell$ is called the {\it frequency characteristic} of
a filter.
Below we consider problem \eqref{A} applied to wide sense stationary processes
with discrete or continuous time and operators $L$ from \eqref{L}.
For the space $H(T)$ we consider a variety of choices. Most
typically, we take $H(T)=H:=\span{\B(s), -\infty <s<\infty}$, the
space generated by all variables, or
$H(T)=H_t:=H((-\infty,t])= \span{\B(s),s\le t}$, the space
generated by the past of the process, or
$H(T)=\HTO_t:=\span{\B(s),|s|\ge t}$.
In problem \eqref{A}, we take the value of $\B$ at some point $\tau$ as
a subject of approximation. When $\B$ is a wide sense stationary process,
we may take $\tau=0$ without loss of generality.
Therefore, three following variations of problem \eqref{A} are
considered below.
Problem I (approximation):
\[
\E |Y-\B(0)|^2+ \E|LY|^2 \to \min, \qquad Y\in H.
\]
Problem II (prediction):
\[
\E |Y-\B(0)|^2+ \E|LY|^2 \to \min, \qquad Y\in H_t.
\]
Problem III (interpolation):
\[
\E |Y-\B(0)|^2+ \E|LY|^2 \to \min, \qquad Y\in \HTO_t.
\]
Notice that, due to the presence of $L$, Problems II and III represent
an extension of the classical prediction and interpolation problems.
As for Problem I, once $L$ is omitted, it is trivial. In our setting
it is also easy but provides non-trivial results (even in the simplest
cases) and therefore is sufficiently interesting.
Sometimes we call the setting of Problem II adaptive, because the best
approximation is based on (adapted to) the known past values of the
process. In contrast, the setting of Problem I is called
non-adaptive.
In the classical case, i.e. with $L=0$, Problem II, as stated here, is
non-trivial only for negative $t$. However, in the presence of $L$ it
makes sense for arbitrary $t$.
\medskip
\subsection{Analytic problem setting}
Due to the spectral representation \eqref{specxi} problems I -- III
admit the following analytic setting.
Problem I$'$:
\be \label{P1}
\int |\psi(u)-1|^2 \mu(du) + \int |\ell(u)\psi(u)|^2 \mu(du)
\to \min, \qquad \psi\in \LL.
\ee
Problem II$'$:
\be \label{P2}
\int |\psi(u)-1|^2 \mu(du) + \int |\ell(u)\psi(u)|^2 \mu(du) \to \min,
\ \psi\in\span{e^{isu}, s\le t}.
\ee
Problem III$'$:
\be \label{P3}
\int\! |\psi(u)-1|^2 \mu(du)\!
+\! \int\! |\ell(u)\psi(u)|^2 \mu(du) \to \min,
\ \psi\in\span{e^{isu}, |s|\ge t}.
\ee
The spans in Problems II$'$ and III$'$ are taken in $\LL$.
\subsection{Energy saving approximation as a special case of
extended prediction problem}
\label{ss:kin_as_L}
Consider the setting of Example \ref{ex:kinetic}:
given a zero mean wide sense stationary process $B=(\B(t))_{t\in\R}$
with spectral representation \eqref{specBs}, the problem is to
minimize the functional
\[
\E |\X(0)-\B(0)|^2 + \al^2 \E |\X'(0)|^2
\]
over all mean square differentiable processes $X=(\X(t))_{t\in\R}$ such
that the processes $\X$ and $\B$ are jointly wide sense stationary.
The latter means that each of them is wide sense stationary and also the
cross-covariance $\E\X(t)\overline{\B(s)}$ depends only on $t-s$.
First of all, we show that while solving this minimization problem
one may only consider approximating processes of special type,
namely,
\be \label{psistat}
\tX(t)=\int e^{itu} \psi(u)\W(du), \qquad \psi\in \LL.
\ee
Indeed, for arbitrary $\X$, we may decompose its initial value as
$\X(0)=\X^\bot +\tX(0)$ with $\tX(0)\in H$, $\X^\bot$ orthogonal to
$H$. By representation \eqref{specxi} there exists $\psi\in\LL$ such
that
\[
\tX(0)= \int \psi(u)\, \W(du).
\]
For this $\psi$ define the process $\tX(t)$ by \eqref{psistat}.
We show that the process $\tX$ is at least as good as $\X$ in the
sense of \eqref{EE0}.
Due to the joint wide sense stationarity, for any $s,t$ we have
\begin{eqnarray*}
\E \X(t)\overline{\B(s)}&=& \E \X(0)\overline{\B(s-t)}
=\E \tX(0)\overline{\B(s-t)}
=\int \psi(u)e^{i(t-s)u} \mu(du);
\\
\E \tX(t)\overline{\B(s)}&=& \int [e^{itu}\psi(u)]e^{-isu}\mu(du)
= \int \psi(u)e^{i(t-s)u} \mu(du).
\end{eqnarray*}
It follows that $\X(t)-\tX(t)$ is orthogonal to $\B(s)$ for each $s$,
hence, it is orthogonal to $H$. Furthermore, it is easy to show
that if $\X$ is mean square differentiable then so are its components
$\tX$ and $\X-\tX$. For their derivatives, we know that
\[
\tX'(t)= \int e^{itu}(iu)\,\psi(u)\, \W(du) \in H
\]
and $(\X-\tX)'(t)$ is orthogonal to $H$. Hence,
\begin{eqnarray*}
&&\E |\X(0)-\B(0)|^2 + \al^2 \E |\X'(0)|^2
\\
&=& \E |(\X(0)-\tX(0))+ (\tX(0)-\B(0))|^2
+ \al^2 \E |(\X'(0)-\tX'(0))+ \tX'(0)|^2
\\
&=& \E |\X(0)-\tX(0)|^2 +\E |\tX(0)-\B(0)|^2
\\
&& + \al^2 \E |(\X'-\tX')(0)|^2 + \al^2 \E |\tX'(0)|^2
\\
&\ge& \E |\tX(0)-\B(0)|^2 + \al^2 \E |\tX'(0)|^2.
\end{eqnarray*}
Therefore, $\tX$ is at least as good for \eqref{EE0}, as $\X$.
Finally, notice that for the processes defined by \eqref{psistat}
the expression in \eqref{EE0} is equal to
\[
\int |\psi(u)-1|^2 \mu(du) + \int |\ell(u)\psi(u)|^2 \mu(du)
\]
with $\ell(u)=\al iu$, exactly as in the analytical versions of our
problems \eqref{P1} and \eqref{P2}. In the non-adaptive version
of the approximation problem we have to optimize over $\LL$, as
in \eqref{P1}, while for adaptive version the requirement
$\tX\in H_t$ for all $t$ is satisfied iff
$\psi\in\span{e^{isu}, s\le 0}$, as in \eqref{P2}.
One may consider other types of energy, e.g. based on higher
order derivatives of $\X$. This option leads to the same problems
with arbitrary polynomials $\ell$.
For discrete time case it is natural to replace the derivative
$\X'$ by the difference $\X(1)-\X(0)$. Then we obtain the same
problem with $\ell(u)=\al(e^{iu}-1)$, for $u\in[-\pi,\pi)$.
Examples of optimal non-adaptive energy saving approximation are given
in Section \ref{s:ex} below.
One may also consider the energy saving approximation for the
processes with wide sense stationary increments. Consider such a process
$\B$ and its approximation $\X$ such that $(\X(t),\B(t))_{t\in \Theta}$ with
$\Theta=\Z$ or $\Theta=\R$ is a
two-dimensional process with wide sense stationary increments and
$(\X(t)-\B(t))_{t\in \Theta}$ is a wide sense stationary process.
Since $\B(0)=0$, the analogue of \eqref{EE0} is
\be \label{EE0si}
\E |\X(0)|^2+ \al^2 \E|\X'(0)|^2 \to \min.
\ee
Similarly to the case of wide sense stationary processes one can show that,
analogously to \eqref{psistat}, it is sufficient only to consider
approximating processes of the special form
\[
\tX(t):=\int \left( e^{itu} \psi(u)-1\right) \W(du),
\qquad \psi-1\in \LL.
\]
Then problem \eqref{EE0si} takes the familiar analytical form
\[
\int |\psi(u)-1|^2 \mu (du) + \int |\psi(u) (iu)|^2 \mu (du)
\to\min
\]
with requirements $\psi-1\in \LL$ for non-adaptive setting and
\[
e^{itu}\psi(u)-1\in \span{e^{isu}-1, s\le t}
\]
in adaptive setting. The latter may be simplified
to
\[
\psi-1\in \span{e^{isu}-1, s\le 0}.
\]
\bigskip
\section{Abstract Hilbert space setting}
The basic matters about our problems such as the existence of
the solution or its uniqueness are easier to handle in a more
abstract setting.
A formal extension of problem \eqref{A} looks as follows. Let
$H$ be a separable Hilbert space with the corresponding scalar
product $(\cdot,\cdot)$ and norm $||\cdot||$. Let $L$ be
a linear operator taking values in $H$ and defined on a linear
subspace $\DD(L)\subset H$. Consider a problem
\be \label{A1}
G(y):= ||y-x||^2 +||L y||^2 \to \min.
\ee
Here $x$ is a given element of $H$ and minimum is taken over all
$y\in H_0 \bigcap \DD(L)$ where $H_0\subset H$ is a given closed
linear subspace.
The following results are probably well known, yet for
completeness we give their proofs in Section \ref{s:HSproofs}.
\begin{prop}\label{p:exist}
If $L$ is a closed operator
then the problem \eqref{A1} has a solution $\xi\in H_0 \bigcap \DD(L)$.
\end{prop}
\begin{prop}\label{p:uniq}
The problem \eqref{A1} has at most one solution.
\end{prop}
\begin{rem} {\rm
Unlike Proposition $\ref{p:exist}$, the assertion of
Proposition $\ref{p:uniq}$ holds without additional assumptions
on the operator $L$.
}
\end{rem}
\begin{prop} \label{p:LL}
Assume that in problem \eqref{A1} we have $H_0=H$ and $L$ is
a closed operator with the domain dense in $H$.
Then the unique solution of \eqref{A1} exists and is given by the formula
\[
\xi=(I+L^*L)^{-1} x,
\]
where $I:H\to H$ is the identity operator.
\end{prop}
\begin{prop} \label{p:euler} If $\xi$ is a solution of problem \eqref{A1},
then $\xi$ provides the unique solution of equations
\be \label{euler}
(\xi-x,h)+(L\xi,L h)=0 \qquad \textrm{for all } h\in H_0\cap \DD(L).
\ee
\end{prop}
\begin{rem} {\rm
If the operator $L$ is bounded, one may rewrite equations \eqref{euler}
as
\[
((I+L^*L)\xi -x, h)=0 \qquad \textit{for all } h\in H_0,
\]
where $I$ is the identity operator in $H$.
}
\end{rem}
\section{Solution of the non-adaptive problem}
\begin{thm} \label{t:nonad} Let $\B$ be a centered wide sense
stationary process with discrete or continuous time. Let $L$ be a
linear filter \eqref{L} with arbitrary measurable frequency characteristic
$\ell(\cdot)$. Then the unique solution of Problem I exists and is given by the
formula
\be \label{sol_na}
\xi=\int \frac{1}{1+|\ell(u)|^2} \ \W(du).
\ee
The error of optimal approximation, i.e. the minimum in Problem I,
and in its equivalent form \eqref{P1}, is given by
\begin{eqnarray} \label{errna}
\sigma^2 &:=& \E |\xi-\B(0)|^2 + \E|L\xi|^2
= \int \frac{|\ell(u)|^2}{1+ |\ell(u)|^2} \, \mu(du)
\\ \nonumber
&=& \E |\B(0)|^2 - \int \frac{\mu(du)}{1+ |\ell(u)|^2} \ .
\end{eqnarray}
\end{thm}
\begin{proof}
The operators of type \eqref{L} are clearly closed and have a dense domain.
Therefore, Proposition \ref{p:exist} provides the existence of
solution. Furthermore, Proposition \ref{p:uniq} confirms that the
solution is unique. Proposition \ref{p:LL} states the form of the
solution
\[
\xi=(I+L^*L)^{-1} \B(0).
\]
By using the definition of $L$, for any $Y\in H$ we easily obtain
\[
(I+L^*L)^{-1} \, Y
= \int \frac{1}{1+|\ell(u)|^2}\ \phi_Y(u)\, \W(du).
\]
For $Y=\B(0)$ we have $\phi_Y(u)\equiv 1$, thus \eqref{sol_na}
follows. Finally, by isometric property,
\begin{eqnarray*}
\sigma^2 &=& \E |\xi-\B(0)|^2 + \E|L\xi|^2
\\
&=& \int \left[ \left| \frac{1}{1+|\ell(u)|^2}-1\right|^2
+ \left|\frac{\ell(u)}{1+|\ell(u)|^2}\right|^2 \right] \, \mu(du)
\\
&=& \int \frac{|\ell(u)|^4+|\ell(u)|^2}{(1+|\ell(u)|^2)^2} \, \mu(du)
= \int \frac{|\ell(u)|^2}{1+|\ell(u)|^2} \, \mu(du),
\end{eqnarray*}
as claimed in \eqref{errna}.
\end{proof}
\begin{rem} {\rm
For the equivalent Problem I$'$ one can arrive at the same conclusion
as in Theorem \ref{t:nonad} in a fairly elementary way. Using the
full square identity
\[
|\psi(u)-1|^2 + |\ell(u)\psi(u) |^2 =
\left(1+ |\ell(u)|^2 \right)
\left| \psi(u)-\frac{1}{ 1+ |\ell(u)|^2}\right|^2
+\frac{|\ell(u)|^2}{1+ |\ell(u)|^2}\ ,
\]
one immediately observes that
\be \label{sol_na_anal}
\psi_\xi(u):=\frac{1}{1+ |\ell(u)|^2}
\ee
solves Problem I$'$, while the error is given by \eqref{errna}.
}
\end{rem}
\begin{rem} {\rm
For processes with wide sense stationary increments the extremal problem
is the same, hence the solution \eqref{sol_na} is the same.
It should be noticed however that the solution is correct only
if the quantity $\sigma^2$ above is finite (for finite measure
$\mu$ it is always finite but for infinite measure and some
choices of $\ell$ it may be infinite). For example if $\ell$
is a polynomial without free term, $\ell(u)=\sum_{k=1}^m c_ku^k$,
then $\sigma^2$ above is finite. This includes kinetic energy
case $\ell(u)=i \alpha u$. Otherwise, if
\[
\int \frac{|\ell(u)|^2}{1+|\ell(u)|^2} \, \mu(du) =\infty,
\]
the quantity in Problem I$'$ is infinite for all admissible $\psi$.
}
\end{rem}
\section{Some examples of non-adaptive approximation}
\label{s:ex}
In this section we illustrate general results by some typical
examples. In all examples we consider kinetic energy, i.e. we let
$\ell(u)=\al(e^{iu}-1)$ in the discrete time case and
$\ell(u)=\al i u$ in the continuous time.
For discrete time we get
\begin{eqnarray*}
|\ell(u)|^2+1 &=& \al^2(e^{iu}-1)(e^{-iu}-1) +1
\\
&=& \frac{\al^2}{\b} (e^{iu}-\b)(e^{-iu}-\b)
\end{eqnarray*}
where
\be \label{beta}
\b=\frac{2\al^2+1+\sqrt{1+4\al^2}}{2\al^2}>1
\ee
is the larger root of the equation
\[
\b^2-\frac{2\al^2+1}{\al^2}\, \b +1=0.
\]
For the integrand in the solution \eqref{sol_na} of the non-adaptive problem, we easily derive
an expansion
\begin{eqnarray} \nonumber
\frac{1}{|\ell(u)|^2+1}
&=& \frac{\b}{\al^2} \, \frac{1}{(e^{i u}-\b)(e^{-i u}-\b)}
\\ \label{series1}
&=& \frac{1}{\sqrt{1+4\al^2}} \left( 1+ \sum_{k=1}^\infty \b^{-k}
\left( e^{i k u}+ e^{-i k u}\right)\right).
\end{eqnarray}
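Let us indicate where the prefactor in \eqref{series1} comes from. Expanding each factor in a geometric series in $\b^{-1}$ gives
\[
\frac{1}{(e^{i u}-\b)(e^{-i u}-\b)}
=\frac{1}{\b^2}\sum_{j,k\ge 0}\b^{-j-k}\,e^{i(j-k)u}
=\frac{1}{\b\,(\b-\b^{-1})}\left(1+\sum_{k=1}^\infty \b^{-k}
\left(e^{i k u}+e^{-i k u}\right)\right),
\]
and multiplying by $\b/\al^2$ produces the factor $1/(\al^2(\b-\b^{-1}))$; by \eqref{beta} we have $\b+\b^{-1}=(2\al^2+1)/\al^2$, whence
$\al^2(\b-\b^{-1})=\sqrt{(2\al^2+1)^2-4\al^4}=\sqrt{1+4\al^2}$.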
By plugging this expression into \eqref{sol_na}, it follows that the solution
of discrete non-adaptive problem involving kinetic energy
is given by the moving average with bilateral geometric progression weight:
\be \label{XgB_star_discr}
\xi = \frac{1}{\sqrt{1+4\al^2}} \left( \B(0) + \sum_{k=1}^\infty \b^{-k}
\left( \B(k)+ \B(-k) \right)\right).
\ee
By \eqref{errna}, the error of optimal non-adaptive approximation
in the discrete time case is given by
\be \label{errna_kin_discr}
\sigma^2 = \int_{-\pi}^{\pi}
\frac{\al^2 |e^{iu}-1|^2}{ \al^2 |e^{iu}-1|^2 +1}\ \mu(du).
\ee
For continuous time we get similar results.
By using inverse Fourier transform, we have
\[
\frac{1}{ |\ell(u)|^2 +1} = \frac{1}{ \al^2 u^2 +1}
= \frac 1{2\al}\ \int_\R \exp\{i\tau u - |\tau|/\al \} \, d\tau.
\]
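Indeed, the integral is elementary:
\[
\int_\R e^{i\tau u-|\tau|/\al}\,d\tau
=2\,\mathrm{Re}\int_0^\infty e^{-\tau(1/\al-iu)}\,d\tau
=\frac{2/\al}{1/\al^2+u^2}
=\frac{2\al}{1+\al^2u^2},
\]
and division by $2\al$ reproduces the left-hand side.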
By plugging this expression into \eqref{sol_na}, it follows that
the solution of continuous non-adaptive problem involving kinetic
energy is given by the moving average
\be \label{XgB_star}
\xi = \frac 1{2\al} \int_\R \exp\{-|\tau|/\al\} \B(\tau) d\tau.
\ee
By \eqref{errna}, the error of optimal non-adaptive approximation
in the continuous time case is given by
\be \label{errna_kin}
\sigma^2 = \int_\R \frac{\al^2 u^2}{ \al^2 u^2 +1}\ \mu(du).
\ee
Notice that both solutions \eqref{XgB_star_discr} and
\eqref{XgB_star} are indeed not adaptive at all, because they involve
future values of $\B$. Let us also stress that these solution formulae
are the same for any spectral (covariance) structure of $\B$.
The formulae \eqref{XgB_star} and \eqref{errna_kin} were obtained
earlier in \cite{KabLi}.
We start with discrete time examples.
\begin{example} \label{ex:iid} {\rm
A sequence $(B(t))_{t\in \Z}$ of centered non-correlated random variables
with constant variance $V\ge 0$ has the spectral measure
\[
\mu(du):= \frac{\V du}{2\pi}.
\]
Surprisingly, the answer to the non-adaptive problem taking kinetic
energy into account even for this sequence is already non-trivial, since
the best non-adaptive approximation is given by the series
\eqref{XgB_star_discr}. The error formula \eqref{errna_kin_discr} yields
\[
\sigma^2 = \frac{\V}{2\pi} \int_{-\pi}^{\pi}
\frac{\al^2 |e^{iu}-1|^2}{\al^2 |e^{iu}-1|^2 +1}\, du
= \V \left (1 - \frac{1}{\sqrt{1+4\al^2}} \right),
\]
cf. an extension below in \eqref{errna_ar}.
}
\end{example}
\smallskip
\begin{example} \label{ex:ar}
{\rm
A sequence of random variables $(\B(t))_{t\in\Z}$ is called
autoregressive, if it satisfies the equation
$\B(t)=\rho \B(t-1) + \xi(t)$, where $|\rho|<1$ and
$(\xi(t))_{t\in\Z}$ is a sequence of centered non-correlated
random variables with some variance $\V$. In this case we have
a representation
\[
\B(t)= \sum_{j=0}^\infty \rho^j \xi(t-j), \qquad t\in \Z.
\]
Given the spectral representation
$\xi(t) = \int_{-\pi}^\pi e^{i t u} \W(du)$ from the previous
example, we obtain
\[
\B(t)= \int_{-\pi}^\pi \sum_{j=0}^\infty \rho^j e^{i(t-j)u} \W(du)
= \int_{-\pi}^\pi \frac{1}{ 1- \rho\, e^{-i u}} \ e^{i t u} \W(du).
\]
We see that the spectral measure for $\B$ is
\be \label{mu_ar}
\mu(du):= \frac {\V du}{2\pi|1- \rho\, e^{-i u}|^2}\, .
\ee
The best non-adaptive approximation is given by the series
\eqref{XgB_star_discr}. By \eqref{errna_kin_discr} and \eqref{mu_ar},
the error of non-adaptive approximation is
\begin{eqnarray*}
\sigma^2 &=& \frac{\V}{2\pi} \int_{-\pi}^\pi
\frac{\al^2 |e^{iu}-1|^2}{ \al^2 |e^{iu}-1|^2 +1}\
\frac {du}{|1- \rho\, e^{-iu}|^2}
\\
&=& \frac{\V}{2\pi} \left[ \int_{-\pi}^\pi
\frac {du}{|1- \rho\, e^{-iu}|^2}
- \int_{-\pi}^\pi \frac{1}{ \al^2 |e^{iu}-1|^2 +1} \
\frac {du}{|1-\rho\, e^{-iu}|^2} \right].
\end{eqnarray*}
By using the expansion
\be\label{series2}
\frac {1}{|1-\rho\, e^{-iu}|^2}
= \frac{1}{1-\rho^2} \left( 1+ \sum_{k=1}^\infty \rho^{k}
\left( e^{iku}+ e^{-iku}\right)\right)
\ee
we obtain immediately that
\[
\int_{-\pi}^\pi \frac {du}{|1- \rho\, e^{-iu}|^2}
= \frac{2\pi}{1-\rho^2} \, .
\]
Moreover, it follows from \eqref{series1} and \eqref{series2} that
\[
\int_{-\pi}^\pi \frac{1}{ \al^2 |e^{iu}-1|^2 +1}
\ \frac {du}{|1-\rho\, e^{-iu}|^2}
= \frac{2\pi}{\sqrt{1+4\al^2}(1-\rho^2)}\
\frac{\b+\rho}{\b-\rho}
\]
with $\b=\b(\al)$ defined in \eqref{beta}. Finally,
\be \label{errna_ar}
\sigma^2= \frac{\V}{1-\rho^2} \left( 1- \frac{1}{\sqrt{1+4\al^2}}\
\frac{\b+\rho}{\b-\rho} \right).
\ee
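Here the mixed integral was evaluated by multiplying the expansions \eqref{series1} and
\eqref{series2} and integrating term by term: only the products of matching frequencies contribute,
which yields the factor
\[
1+ 2\sum_{k=1}^\infty \left(\frac{\rho}{\b}\right)^{k} = \frac{\b+\rho}{\b-\rho}\, .
\]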
Notice that Example \ref{ex:iid} is a special case of this one
with $\rho=0$.
}
\end{example}
\smallskip
\begin{example}
{\rm
We call a sequence of random variables $(\B(t))_{t\in\Z}$
a simplest moving average sequence if it admits a representation
$\B(t)= \xi(t)+\rho\, \xi(t-1)$ where $\xi(t)$ is the same as
in Example $\ref{ex:ar}$. Proceeding as above, we obtain
\[
\B(t)= \int_{-\pi}^\pi \left(1+\rho e^{-iu} \right) e^{itu} \W(du),
\qquad t\in \Z.
\]
We see that the spectral measure for $\B$ is
\be\label{mu_ma1}
\mu(du):= \frac {\V |1+ \rho\, e^{-iu}|^2 du}{2\pi}\,.
\ee
The best non-adaptive approximation is given by
\eqref{XgB_star_discr}. By \eqref{mu_ma1} and \eqref{errna_kin_discr},
the error of non-adaptive approximation is
\begin{eqnarray*}
\sigma^2 &=& \frac {\V}{2\pi}
\int_{-\pi}^\pi \frac{\al^2 |e^{iu}-1|^2}{ \al^2 |e^{iu}-1|^2 +1}\
|1+ \rho\, e^{-iu}|^2 du
\\
&=& \frac {\V}{2\pi}
\int_{-\pi}^\pi \left[ 1- \frac{1}{ \al^2 |e^{iu}-1|^2 +1}\right]\
|1+ \rho\, e^{-iu}|^2 du
\\
&=& \frac {\V}{2\pi} \left[
\int_{-\pi}^\pi |1+ \rho\, e^{-iu}|^2 du
-
\int_{-\pi}^\pi \frac{|1+ \rho\, e^{-iu}|^2}
{\al^2 |e^{iu}-1|^2 +1}\ du \right].
\end{eqnarray*}
We easily get
\[
\int_{-\pi}^\pi |1+ \rho\, e^{-iu}|^2 du
= \int_{-\pi}^\pi (1+\rho^2 + \rho(e^{-iu}+e^{iu})) du
= 2\pi (1+\rho^2)
\]
and, by using \eqref{series1},
\begin{eqnarray*}
&& \int_{-\pi}^\pi \frac{|1+ \rho\, e^{-iu}|^2}
{ \al^2 |e^{iu}-1|^2 +1}\ du = \frac{1}{\sqrt{1+4\al^2}} \times
\\
&\times& \int_{-\pi}^\pi (1+\rho^2 + \rho(e^{-iu}+e^{iu}))
\left( 1+ \sum_{k=1}^\infty \b^{-k} \left( e^{iku}
+ e^{-iku}\right)\right) du
\\
&=& \frac{2\pi}{\sqrt{1+4\al^2}}
\left( 1+\rho^2 + \frac{2\rho}{\b} \right),
\end{eqnarray*}
whence
\[
\sigma^2= \V \left( 1+\rho^2 - \frac{1}{\sqrt{1+4\al^2}}
\left( 1+\rho^2 + \frac{2\rho}{\b} \right)\right)
\]
with $\b=\b(\al)$ defined in \eqref{beta}.
}
\end{example}
\begin{example} \label{ex:sums}
{\rm Consider the partial sums of a sequence of centered
non-correlated random variables $(\xi(j))_{j\ge 1}$ each having
variance $\V$. Let $\B(0)=0$ and $\B(t):=\sum_{j=1}^t \xi(j)$,
$t\in\N$. Given the spectral representation in the form
$\xi(j) = \int_{-\pi}^\pi e^{i j u} \W(du)$, we have
\[
\B(t)= \int_{-\pi}^\pi \Big( \sum_{j=1}^t e^{i j u} \Big) \W(du)
= \int_{-\pi}^\pi \frac{e^{i u}}{e^{i u}-1} \ (e^{i t u}-1) \W(du)
\]
and we obtain the spectral measure
\be \label{mu_sums}
\mu(du):= \frac {\V du}{2\pi |e^{iu}-1|^2}.
\ee
The best non-adaptive approximation is given by \eqref{XgB_star_discr}.
By \eqref{mu_sums} and \eqref{errna_kin_discr}, the corresponding
approximation error is
\[
\sigma^2 = \frac{\V}{2\pi}
\int_{-\pi}^\pi \frac{\al^2 \, du}{ \al^2 |e^{iu}-1|^2 +1}\, .
\]
Furthermore, by using expansion \eqref{series1} we have
\be \label{errna_sums}
\sigma^2 = \frac{\V \al^2}{2\pi} \cdot \frac{2\pi}{\sqrt{1+4\al^2}}
= \frac{\V \al^2}{\sqrt{1+4\al^2}}\, .
\ee
}
\end{example}
\bigskip
We pass now to continuous time examples.
\begin{example}
{\rm
The Ornstein--Uhlenbeck process is a centered Gaussian stationary
process with covariance $\K_\B(t)=e^{-|t|/2}$ and the spectral measure
\be \label{mu_OU}
\mu(du):= \frac {2 du}{\pi(4u^2+1)}.
\ee
Since we are developing only a linear theory, we do not need
the Gaussianity assumption and may call Ornstein--Uhlenbeck any wide
sense stationary process having the mentioned covariance and spectral
measure.
The best non-adaptive approximation is given by \eqref{XgB_star}.
By \eqref{mu_OU} and \eqref{errna_kin}, the error of non-adaptive
approximation is
\[
\sigma^2 = \int_\R \frac{\al^2 u^2}{ \al^2 u^2 +1}\
\frac{2 du}{\pi(4 u^2+1)} = \frac{\al}{2+\al}\, .
\]
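The last equality follows by writing $\frac{\al^2 u^2}{\al^2 u^2+1} = 1-\frac{1}{\al^2 u^2+1}$
and using the elementary partial fraction decomposition
\[
\frac{1}{(\al^2 u^2+1)(4u^2+1)}
= \frac{1}{4-\al^2}\left( \frac{4}{4u^2+1}-\frac{\al^2}{\al^2 u^2+1}\right), \qquad \al\neq 2,
\]
together with $\int_\R \frac{du}{\al^2u^2+1}=\frac{\pi}{\al}$ and
$\int_\R \frac{du}{4u^2+1}=\frac{\pi}{2}$; the case $\al=2$ follows by continuity.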
}
\end{example}
\begin{example} \label{ex:fbm}
{\rm
Fractional Brownian motion $(B^H(t))_{t\in\R}$,
$0<H\le 1$, is a centered Gaussian process with covariance
\[
\cov (B^H(t_1),B^H(t_2))
= \frac 12\left( |t_1|^{2H} + |t_2|^{2H} - |t_1-t_2|^{2H}\right).
\]
For any process with this covariance (interesting non-Gaussian
examples of this type are also known, see e.g. Telecom processes
in \cite{KajTaq, RPE}) the spectral measure is
\be \label{mu_fbm}
\mu(du):= \frac { M_H du}{|u|^{2H+1}}\,
\ee
where $M_H=\tfrac{\Gamma(2H+1) \sin(\pi H)}{2\pi}$\,.
The best non-adaptive approximation is given by \eqref{XgB_star}.
By \eqref{mu_fbm} and \eqref{errna_kin}, the error of non-adaptive
approximation is
\begin{eqnarray} \nonumber
\sigma^2 &=& \int_\R \frac{\al^2 u^2}{ \al^2 u^2 +1}\ \frac { M_H du}{|u|^{2H+1}}
= M_H \al^2 \int_\R \ \frac {|u|^{1-2H} du}{\al^2 u^2 +1 }
\\ \nonumber
&=& 2 M_H \al^{2H} \int_0^\infty \ \frac {w^{1-2H} dw}{w^2 +1 }
= M_H \al^{2H} \int_0^\infty \ \frac {v^{-H} dv}{v +1}
\\ \label{errna_fbm}
&=& M_H \al^{2H} \cdot \frac{\pi}{\sin(\pi H)}
= \frac{\Gamma(2H+1)\, \al^{2H} }{2}\, .
\end{eqnarray}
This result was obtained in \cite{KabLi}.
}
\end{example}
\begin{example}
{\rm
Consider a centered L\'evy process $(\B(t))_{t\ge 0}$ with finite
variance (this class includes the Wiener process and centered Poisson
processes of any constant intensity). Let $\Var \B(1)= \V$. For any
such process the spectral measure is
\be \label{mu_w}
\mu(du):= \frac {\V du}{2\pi u^2}.
\ee
This is a continuous version of Example $\ref{ex:sums}$, as well
as a special case of Example \ref{ex:fbm} with $H=\tfrac 12$.
Notice, however, that in the more delicate problems of
{\it adaptive} approximation (that are not studied here) the cases
$H=\tfrac 12$ and $H \not =\tfrac 12$ are totally different.
The best non-adaptive approximation is given by \eqref{XgB_star}.
The calculation of non-adaptive approximation error based
on \eqref{mu_w} and \eqref{errna_kin} is a special case
of \eqref{errna_fbm} with $H=\tfrac 12$, up to a scaling constant
$\V$. We have thus
\be \label{errna_w}
\sigma^2= \frac{\al\,\V}{2}\,.
\ee
}
\end{example}
\bigskip
Finally, notice that there is a natural interplay between continuous
time and discrete time approximation. Let $(\B(t))_{t\in\R}$ be a
continuous time wide sense stationary process.
For any small $\delta>0$ consider its discrete time version
$(B_\delta(s))_{s\in\Z}:= (\B(\delta s ))_{s\in\Z}$.
The discrete time counterpart of the kinetic energy $\al^2 \X'(t)^2$
of the approximating process is
$\tfrac{\al^2(\X(\delta(s+1))- \X(\delta s))^2}{\delta^2}$,
thus one should consider discrete time approximation with parameter
$\al_\delta:=\tfrac{\al}{\delta}$. Using \eqref{beta}, we see that
\[
\b_\delta:= \b(\al_\delta) = 1 + \frac{1+o(1)}{\al_\delta}\, ,
\qquad \textrm{as } \al_\delta\to \infty,
\]
hence
$\b_\delta^{\al_\delta} \to e$ as $\al_\delta\to \infty$.
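Indeed, $\al_\delta \ln \b_\delta = \al_\delta \ln\left(1+\frac{1+o(1)}{\al_\delta}\right) \to 1$.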
Therefore, for the best discrete time approximations
\eqref{XgB_star_discr} of $B_\delta$ we have
\begin{eqnarray*}
\X_\delta(s) &=&
\frac{1}{\sqrt{1+4\al_\delta^2}}
\left( \B(s) + \sum_{k=1}^\infty \b_\delta^{-k}
\left( \B(s+k\delta)+ \B(s-k\delta) \right)\right)
\\
&=& \frac{(1+o(1)) \delta}{2\al}
\left( \B(s) + \sum_{k=1}^\infty [\b_\delta^{\al_\delta}]^{-k\delta/\al}
\left( \B(s+k\delta)+ \B(s-k\delta) \right)\right)
\\
&\to& \frac 1{2\al} \int_\R \exp\{-|\tau|/\al\} \B(s+\tau)\, d\tau,
\qquad \textrm{as } \delta\to 0,
\end{eqnarray*}
which is the solution of continuous time approximation problem
\eqref{XgB_star}. Similarly, one has the convergence of the optimal
errors of approximation, cf. e.g. \eqref{errna_sums} and
\eqref{errna_w}.
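For instance, if $\B$ is a centered L\'evy process with $\Var \B(1)=\V$, then the increments of
the sequence $B_\delta$ are non-correlated with variance $\V\delta$, so that \eqref{errna_sums},
applied with this variance and with the parameter $\al_\delta=\al/\delta$, gives
\[
\frac{\V\delta\, \al_\delta^2}{\sqrt{1+4\al_\delta^2}}
= \frac{\V \al^2/\delta}{\sqrt{1+4\al^2/\delta^2}}
\to \frac{\al\,\V}{2}\, , \qquad \textrm{as } \delta\to 0,
\]
which is exactly the error \eqref{errna_w} of the continuous time problem.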
\section{An extension of A.N. Kolmogorov and M.G. Krein theorems
on error-free prediction}
\subsection{Discrete time}
\label{ss:discr}
In this subsection we assume that $(\B(t))_{t\in \Z}$ is a centered,
wide sense stationary sequence and $\mu$ is its spectral measure. Let us represent
$\mu$ as the sum $\mu=\mu_a+\mu_s$ of its absolutely continuous and
singular components. We denote by $f_a$ the density of $\mu_a$ with
respect to the Lebesgue measure.
Consider Problem II and let
\[
\sigma^2(t):=\inf_{Y\in H_t} \left\{\E|Y-\B(0)|^2+ \E|L Y|^2\right\},
\qquad t\in \Z,
\]
be the corresponding prediction errors. We also let $\sigma^2(\infty)$
denote the similar quantity with $H_t$ replaced by $H$. It is easy to
see that the sequence $\sigma^2(t)$ is non-increasing in $t$ and
\[
\lim_{t\to+\infty} \sigma^2(t) = \sigma^2(\infty).
\]
For the classical prediction problem, i.e. for $L=0$, by Kolmogorov's
theorem (singularity criterion, see \cite[Chapter II, Theorem 5.4]{Roz})
we have
\[
\sigma^2(t) = \sigma^2(\infty) = 0 \qquad \textrm{for all } t\in \Z,
\]
iff
\be \label{logdiv}
\int_{-\pi}^\pi |\ln f_a(u)| du = \infty.
\ee
In our case, for $L\not=0$, we have $\sigma^2(t)\ge \sigma^2(\infty)>0$
unless $\ell(\cdot)\equiv 0$\ $\mu$-a.s. Therefore, we
state the problem as follows: when does $\sigma^2(t)=\sigma^2(\infty)$
hold for a given $t\in\Z$?
In other words: when does approximation based on the knowledge of
the process up to time $t$ work as well as approximation based on
the knowledge of the whole process?
\begin{thm} \label{t:logdiv}
If \eqref{logdiv} holds, then we have
$\sigma^2(t)=\sigma^2(\infty)$ for all $t\in \Z$.
\end{thm}
\begin{proof}
If \eqref{logdiv} holds, then for all $t$ we have $H_t=H$, see e.g.
\cite[Chapter XII, Section 4]{Doob1} or
\cite[Chapter II, Section 2]{Roz}. Therefore, by Theorem \ref{t:nonad}
\[
\sigma^2(t)=\sigma^2(\infty)
=\int_{-\pi}^{\pi} \frac{|\ell(u)|^2}{1+|\ell(u)|^2}\, \mu(du).
\]
\end{proof}
\begin{thm} \label{t:logconv}
If the process $\B$ is such that the density $f_a$ satisfies
\be \label{logconv}
\int_{-\pi}^\pi |\ln f_a(u)| du < \infty,
\ee
then for every fixed $t\in \Z$ we have $\sigma^2(t)=\sigma^2(\infty)$ iff
the function
$\frac{1}{1+|\ell(u)|^2}$
is a trigonometric polynomial of degree not exceeding $t$, i.e.
\[
\frac{1}{1+|\ell(u)|^2}= \sum_{|j|\le t} b_{j}\, e^{iju}
\]
Lebesgue-a.e. with some coefficients $b_j\in \C$.
In particular, if $t<0$, then $\sigma^2(t)<\sigma^2(\infty)$;
equality $\sigma^2(0)=\sigma^2(\infty)$ holds iff $|\ell(\cdot)|$ is
a constant Lebesgue-a.e.
\end{thm}
\begin{proof}
The analytic form of the prediction error is
\[
\sigma^2(t) =
\inf_{\psi \in \LL(t)} \int_{-\pi}^\pi
\left\{ |\psi(u)-1|^2 + |\ell(u)\psi(u)|^2 \right\} \mu(du),
\]
where $\LL(t)= \span{e^{isu}, s\le t}$ in $\LL$.
The solution $\psi_t(u)$ of our problem is unique by Proposition \ref{p:uniq};
we know from \eqref{sol_na_anal} that for $t=\infty$ the solution is
$\psi_\infty(u)=\tfrac{1}{1+|\ell(u)|^2}$. It follows that
$\sigma^2(t)=\sigma^2(\infty)$ iff $\psi_t=\psi_\infty$,
i.e. iff $\psi_\infty\in\LL(t)$. The latter is equivalent to
the existence of the trigonometric polynomials
\[
\vartheta_k(u):=\sum_{j\le t} a_{j,k}\, e^{iju}
\]
such that
\[
\lim_{k\to\infty} \int_{-\pi}^\pi
\left| \vartheta_k(u)- \frac{1}{1+|\ell(u)|^2}\right|^2 \mu(du) =0.
\]
It follows that
\begin{eqnarray} \nonumber
&& \lim_{k\to\infty} \int_{-\pi}^\pi
\left| \overline{\vartheta_k(u)}- \frac{1}{1+|\ell(u)|^2}\right|^2 f_a(u)\, du
\\ \label{psikconv}
&=& \lim_{k\to\infty} \int_{-\pi}^\pi
\left|\vartheta_k(u)-\frac{1}{1+|\ell(u)|^2}\right|^2 f_a(u)\,du
=0.
\end{eqnarray}
Due to assumption \eqref{logconv}
the density $f_a$ admits a representation
\[
f_a(u)=|g_*(e^{iu})|^2
\]
where $g_*(e^{iu}), u\in[-\pi,\pi),$ is the boundary value of the function
\[
g(z) := \exp\left\{ \frac{1}{4\pi} \int_{-\pi}^\pi \ln f_a(u)\
\frac{e^{iu}+z}{e^{iu}-z}\ du \right\}, \qquad
|z|<1,
\]
which is an analytic function in the unit disc $\D:=\{z\in\C, |z|<1\}$.
In other words, $g$ is an outer function from the Hardy class $\HH_2(\D)$;
we refer to \cite[Chapter 17]{Ru} for the facts and definitions mentioned in this subsection
concerning $\HH_2(\D)$, outer and inner functions.
Notice that $\tfrac{1}{g}$ is also an outer analytic function.
Rewrite \eqref{psikconv} as
\[
\lim_{k\to\infty} \int_{-\pi}^\pi \left| \overline{\vartheta_k(u)} g_*(e^{iu})
- \frac{g_*(e^{iu})}{1+|\ell(u)|^2} \right|^2 du = 0.
\]
Assume for the moment that $t\le 0$. Then
$\overline{\vartheta_k} g_*$ is also the boundary value of a function from $\HH_2(\D)$.
Since the class of such boundary functions is closed in $L_2$ (with respect to Lebesgue
measure), this implies that
\[
h_*(e^{iu}):= \frac{g_*(e^{iu})}{1+|\ell(u)|^2}
\]
is the boundary value of a function $h\in \HH_2(\D)$.
Moreover, we have a power series representation
\be \label{hpower}
h(z)=\sum_{j\ge - t} h_j \, z^j, \qquad z\in \D.
\ee
Let us denote
\[
A_1(u):=\frac{1}{1+|\ell(u)|^2} = h_*(e^{iu}) \cdot \frac{1}{g_*(e^{iu})}\, ,
\qquad u\in [-\pi,\pi).
\]
The function $e^{iu}\mapsto A_1(u)$ admits an analytic continuation
from the unit circle to $\D$ given by
$A(z)=h(z)\cdot \tfrac{1}{g(z)}$.
Notice that in the power series representation
\[
A(z) = \sum_{j=0}^\infty a_j\, z^j, \qquad z\in\D,
\]
the terms with $j<-t$ vanish due to \eqref{hpower}.
We have therefore
\[
A(z) = \sum_{j\ge -t} a_j \,z^j, \qquad z\in\D.
\]
Let us prove that $A(\cdot)$ is bounded on $\D$. Write the
factorization $h=M_h\cdot Q_h$ where $M_h$ is an inner function and $Q_h$ is an outer function.
Then $A= M_h\cdot \frac{Q_h}{g}$. The function $M_h$ is bounded on $\D$ by the
definition of an inner function while $\frac{Q_h}{g}$ is an outer function
with bounded boundary values, because Lebesgue-a.e.
\[
\left|\left(\tfrac{Q_h}{g}\right)_*(e^{iu})\right|
=\left|\frac{A_1(u)}{(M_h)_*(e^{iu})}\right|
= \left| A_1(u)\right|\le 1.
\]
Since for outer functions the boundedness on the boundary implies,
via Poisson kernel representation, the boundedness on $\D$, we see that
the factor $\frac{Q_h}{g}$ is also bounded on $\D$. We conclude that
$A(\cdot)$ is bounded on $\D$.
For each $r\in (0,1)$ consider the function
\[
A_r(u):=A(re^{iu})= \sum_{j\ge -t} a_j\, r^j\, e^{iju}\, ,
\qquad u\in [-\pi,\pi).
\]
Since $A$ is bounded, the family $\{A_r\}_{0<r<1}$ is uniformly bounded.
Since $A_r\to A_1$ Lebesgue-a.e., as $r\nearrow 1$, the convergence also holds
in $L_2$. In particular, all Fourier coefficients converge and we have
\[
A_1(u) = \frac{1}{1+|\ell(u)|^2} = \sum_{j\ge -t} a_j \,e^{iju}.
\]
Since the left hand side is real, for $t<0$ the latter representation
is impossible. For $t=0$ it is only possible when both sides are equal
(Lebesgue-a.e.) to the constant $a_0$.
For $t>0$ the same reasoning gives a representation
\[
\frac{e^{itu}}{1+|\ell(u)|^2} = \sum_{j\ge 0} a_j e^{iju}
\]
which implies that
\[
\frac{1}{1+|\ell(u)|^2} = \sum_{j=-t}^{t} a_{j+t} e^{iju}
\]
is a trigonometric polynomial of degree not exceeding $t$.
\medskip
The converse assertion is obvious: if for $t\ge 0$
we have a representation
\[
\psi_\infty(u) = \frac{1}{1+|\ell(u)|^2} = \sum_{j=-t}^{t} b_j e^{iju},
\]
then $\psi_\infty \in \LL(t)$ by the definition of $\LL(t)$.
\end{proof}
Theorems \ref{t:logdiv} and \ref{t:logconv} immediately yield the following
final result.
\begin{thm} \label{t:log}
Let $\B$ be a discrete time, wide sense stationary process. Let $L$ be a
linear filter with frequency characteristic $\ell(\cdot)$. Then for
every fixed $t\in \Z$
the equality $\sigma^2(t)=\sigma^2(\infty)$ holds iff either
\eqref{logdiv} holds, or \eqref{logconv} holds and
$\frac{1}{1+|\ell(u)|^2}$ is a trigonometric polynomial of degree
not exceeding $t$.
\end{thm}
\subsection{Continuous time}
In this subsection we assume that $(\B(t))_{t\in \R}$ is a continuous time,
mean square continuous, wide sense stationary process and $\mu$ is its spectral
measure. As before, we represent $\mu$ as the sum $\mu=\mu_a+\mu_s$ of
its absolutely continuous and singular components, denote $f_a$ the density
of $\mu_a$ with respect to Lebesgue measure and let
\[
\sigma^2(t) :=\inf_{Y\in H_t} \left\{ \E|Y-\B(0)|^2+ \E|L Y|^2 \right\},
\qquad t\in \R,
\]
denote the corresponding prediction errors. We also let $\sigma^2(\infty)$
denote the similar quantity with $H_t$ replaced by $H$.
The statement analogous to Theorem \ref{t:log} is as follows.
\begin{thm} \label{t:log_c}
Let $\B$ be a continuous time, mean square continuous, wide sense stationary process.
Let $L$ be a linear filter with frequency characteristic $\ell(\cdot)$.
Then for every fixed $t\in \R$ the equality
\[
\sigma^2(t)=\sigma^2(\infty)
= \int \frac{|\ell(u)|^2}{1+ |\ell(u)|^2} \, \mu(du)
\]
holds iff either
\noindent(a)
\be \label{logdiv_c}
\int_{-\infty}^{\infty} \frac{|\ln f_a(u)|}{1+u^2}\, du = \infty
\ee
holds or
\noindent(b)
\be \label{logconv_c}
\int_{-\infty}^{\infty} \frac{|\ln f_a(u)|}{1+u^2}\, du < \infty
\ee
holds, $t>0$ and $\frac{1}{1+|\ell(u)|^2}$ is a restriction (to $\R$) of
an entire analytic function of exponential type not exceeding $t$,
or
\noindent(c) inequality \eqref{logconv_c} holds, $t=0$, and
$|\ell(\cdot)|$ is Lebesgue a.e. equal to a constant.
\end{thm}
\begin{proof}
If \eqref{logdiv_c} holds, then by the M.G. Krein singularity criterion,
we have $H_t=H$ for all $t\in \R$, see e.g.
\cite[Chapter XII, Section 4]{Doob1} or \cite[Chapter II, Section 2]{Roz},
and the assertion a) of the theorem follows.
Let $\Pi:=\{z\in\C: \Im(z)>0\}$ denote the upper half-plane. If \eqref{logconv_c} holds, then we
have a representation $f_a(u)=|g_*(u)|^2$, where $g_*(u)$ is the boundary value
of the function
\[
g(z) := \exp\left\{ \frac{1}{2\pi\, i} \int_{-\infty}^\infty
\ln f_a(u) \frac{1+z u}{z-u}\, \frac{du}{1+u^2} \right\}.
\]
Therefore $g$ is an outer function from Hardy class $\HH_2(\Pi)$;
we refer to \cite[Chapter 8]{Hof} for the facts and definitions mentioned in this subsection
concerning Hardy classes on $\Pi$ and related outer and inner functions.
Let $t\le 0$. The same arguments as those given in the proof of Theorem \ref{t:logconv}
show that
\[
h_*(u):= \frac{g_*(u)}{1+|\ell(u)|^2}
\]
is the boundary value of a function $h\in \HH_2(\Pi)$.
The function
\be \label{idenR}
A_*(u):=\frac{1}{1+|\ell(u)|^2} = h_*(u) \cdot \frac{1}{g_*(u)}\, ,
\qquad u\in \R,
\ee
admits an analytic continuation from the real line to $\Pi$ given by
$A(z)=h(z)\cdot \tfrac{1}{g(z)}$.
Let us prove that $A(\cdot)$ is bounded on $\Pi$. Write the factorization
$h=M_h\cdot Q_h$ where $M_h$ is an inner function and $Q_h$ is an outer function.
Then $A= M_h\cdot \frac{Q_h}{g}$. The function $M_h$ is bounded on $\Pi$ by the
definition of an inner function while $\frac{Q_h}{g}$ is an outer function
with bounded boundary values, because Lebesgue-a.e.
\[
\left|\left(\tfrac{Q_h}{g}\right)_*(u)\right|
=\left|\frac{A_*(u)}{(M_h)_*(u)}\right|
= \left| A_*(u)\right|\le 1, \qquad u\in \R.
\]
Since for outer functions the boundedness on the boundary implies,
via Poisson kernel representation, the boundedness on $\Pi$, we see that
the factor $\frac{Q_h}{g}$ is also bounded on $\Pi$. We conclude that
$A(\cdot)$ is bounded on $\Pi$. In other words, $A\in \HH_\infty(\Pi)$.
Furthermore, the function $A$ admits an analytic reflection to the lower
half-plane $\Pi_-:=\{z\in\C: \overline{z}\in \Pi\}$ by letting
$A_-(z):=\overline{A(\overline{z})}$. This reflection agrees with $A$ on the
real line because the boundary values $A_*(u),u\in\R,$ are real.
Consider now an auxiliary function $\SS_*(u):=\tfrac{\sin u}{u}\ e^{iu}$ on $\R$
and its analytic continuation $\SS(z):=\tfrac{\sin z}{z}\ e^{iz} \in \HH_2(\Pi)$.
Then $\SS\cdot A\in \HH_2(\Pi)$ and the corresponding boundary function
$\SS_*\cdot A_*\in L_2(\R)$. According to the Fourier representation of the elements of
$\HH_2(\Pi)$ and that of their boundary values we have
\begin{eqnarray}
\nonumber
&&\SS_*(u) A_*(u) = \frac{\sin u}{u}\ e^{iu} A_*(u)
\\ \label{Hu}
&=& \int_0^\infty e^{ixu} q(x)\, dx, \qquad u\in \R, \textrm{ Lebesgue-a.e.,}
\\ \nonumber
&& \SS(z) A(z) = \frac{\sin z}{z}\ e^{iz} A(z)
\\ \label{Hz}
&=& \int_0^\infty e^{ixz} q(x)\, dx, \qquad z\in \Pi,
\end{eqnarray}
with some $q(\cdot)\in L_2(\R_+)$.
We may rewrite \eqref{Hu} as
\[
\frac{\sin u}{u}\, A_*(u) = \int_{-1}^\infty e^{iyu} q(y+1)\,dy.
\]
Moreover, since the left hand side is real, its Fourier transform is symmetric.
Therefore, $q(\cdot+1)$ must vanish on $[1,\infty)$, i.e.
$q(\cdot)$ must vanish on $[2,\infty)$. Thus \eqref{Hz} writes as
\be \label{SAQ}
\SS(z)\, A(z) = \int_{0}^2 e^{ixz} q(x)\,dx:=Q(z)
\ee
and $Q$ is an entire function.
Consider the holomorphic function
\[
V(z):=\frac{Q(z)}{\SS(z)}= \frac{z e^{-iz} Q(z)}{\sin z}.
\]
Since $V=A$ on $\Pi$ and $V=A_*$ on $\R$, we see that $V$ is bounded on $\Pi\cup\R$.
Proceeding in the same way with the lower half-plane $\Pi_-$ instead of $\Pi$, we find ``another''
holomorphic function $V_-$ such that $V_-=A_-$ on $\Pi_-$ and $V_-=A_*=V$ on $\R$. The latter
equality yields $V=V_-$ on $\C$; moreover, $V$ is bounded on $\C$. Hence $V$ is a constant and
$A_*=V$ is a constant, too, as required in the assertion c) of our theorem.
\medskip
If \eqref{logconv_c} holds and $t>0$, the same reasoning leads to
a representation analogous to \eqref{idenR}, namely,
\[
A_*(u):=\frac{1}{1+|\ell(u)|^2} = e^{-itu} \AA_*(u)\, ,
\qquad u\in \R,
\]
where $\AA_*$ is the boundary function of some $\AA\in\HH_\infty(\Pi)$.
Using that $\SS\cdot\AA\in \HH_2(\Pi)$
and proceeding as before, we have a representation analogous to \eqref{SAQ}
\[
\SS(z)\, \AA(z) = \int_{0}^{2(t+1)} e^{ixz} q(x)\,dx:=Q(z)
\]
and $Q$ is an entire function. Let
\[
V(z)= \frac{Q(z)e^{-itz}}{\SS(z)}= \AA(z) e^{-itz}.
\]
Then
$V=A_*$ on $\R$ and we have
\[
|V(z)| \le e^{t|\Im(z)|} ||A||_\infty\, , \qquad z\in \Pi.
\]
Proceeding in the same way with the lower half-plane $\Pi_-$ instead of $\Pi$, we find ``another''
holomorphic function $V_-$ such that
\[
|V_-(z)| \le e^{t|\Im(z)|} ||A_-||_\infty\ , \qquad z\in \Pi_-,
\]
and $V_-=A_*=V$ on $\R$. The latter equality yields $V=V_-$ on $\C$. We conclude that
\[
|V(z)| \le e^{t|\Im(z)|} ||A||_\infty\, , \qquad z\in \C,
\]
i.e. $V$ is an entire function of exponential type not exceeding $t$,
as required in the assertion b) of our theorem.
\medskip
If \eqref{logconv_c} holds and $t<0$, we still see from the previous reasoning that
$\psi_\infty$ must be a constant.
Due to M.G. Krein's regularity criterion (see \cite[Chapter III, Theorem 2.4]{Roz}),
we know that under \eqref{logconv_c}
constants do not belong to $\LL(t)$ with $t<0$. Hence the equality
$\sigma^2(t)=\sigma^2(\infty)$ is not possible for $t<0$.
\medskip
Now we prove the sufficiency. Assume that $t>0$ and that
$A_*(u):=\frac{1}{1+|\ell(u)|^2}$ is a restriction (to $\R$) of
an entire analytic function $A(\cdot)$ of exponential type not exceeding $t$.
Write
\[
A_*(u)=A_*(0)+ u \cdot \frac{A_*(u)-A_*(0)}{u}:= A_*(0)+u \cdot \widetilde{A_*}(u).
\]
It is sufficient to show that the function $u\mapsto u \cdot \widetilde{A_*}(u)$ belongs to $H_t$.
Furthermore, since we have
\[
\frac{1-e^{-i\delta u}}{i\delta} \widetilde{A_*}(u) \to u \cdot \widetilde{A_*}(u),
\qquad \textrm{as } \delta\to 0,
\]
in $L_2(\R,\mu)$, it is sufficient to show that $\widetilde{A_*}\in H_t$.
Notice that $\widetilde{A_*}$ belongs to $L_2(\R^1)$ w.r.t. Lebesgue measure and is a restriction
of the analytic function of exponential type not exceeding $t$ given by $\widetilde{A}(z):=\frac{A(z)-A(0)}{z}$.
Hence, by the Paley--Wiener theorem (see \cite[Chapter IV]{Akh}) we have a representation
\[
\widetilde{A_*}(u) = \widetilde{A}(u) = \int_{-t}^t a(\tau) e^{i\tau u} d\tau
\]
with $a(\cdot)\in L_2[-t,t]\subset L_1[-t,t]$.
Since the exponentials $u \mapsto e^{i\tau u}$ belong to $H_t$ for $\tau\le t$, it follows that
$\widetilde{A_*}\in H_t$.
This concludes the proof of our theorem.
\end{proof}
\begin{rem}
{\rm In 1923, S.N. Bernstein introduced a class of entire functions of exponential type
not exceeding $t$ and bounded on the real line, cf. \cite{Ber} or \cite[Chapter 4, Section 83]{Akh}.
The functions that appear in Theorem \ref{t:log_c} belong to this class.
}
\end{rem}
\subsection{An extension of regularity criterion}
Consider again the discrete time case. We handle wide sense stationary sequences
and use the notation from Subsection \ref{ss:discr}. Let
$\sigma_0^2(t)$ be the prediction error for $\B(0)$ given the past $H_{t}$
in the classical prediction problem (with $L=0$).
The sequence $\B$ is called {\it regular}, if we have
\[
\lim_{t\to -\infty} \sigma_0^2(t) = \E |\B(0)|^2.
\]
By the classical Kolmogorov regularity criterion, see
\cite[Chapter II, Theorem 5.1]{Roz}, a wide sense stationary sequence $\B$ is regular
iff its spectral measure is absolutely continuous and \eqref{logconv} is
satisfied.
The following result provides an extension of this assertion to our
settings.
\begin{thm}
Let $\B$ be a wide sense stationary sequence. Let $L$ be a linear filter with
frequency characteristic $\ell(\cdot)$. If \eqref{logconv} holds, then
\[
\lim_{t\to-\infty} \sigma^2(t) = \E |\B(0)|^2
- \int_{-\pi}^\pi \frac{\mu_s(du)}{1+|\ell(u)|^2}\ .
\]
\end{thm}
\begin{proof}
Consider first the sequences with absolutely continuous spectral measure.
In that case, by Kolmogorov's criterion, $\B$ is regular. Hence,
\[
\lim_{t\to-\infty} \sigma^2(t)
\ge \lim_{t\to-\infty} \inf_{Y\in H_t} \E|Y-\B(0)|^2
= \E|\B(0)|^2.
\]
On the other hand, since $Y=0\in H_{t}$ for each $t\in\Z$,
we have $\sigma^2(t) \le \E|\B(0)|^2$
and the theorem is proved in the form
\be \label{regabsc}
\lim_{t\to-\infty} \sigma^2(t) = \E|\B(0)|^2.
\ee
In the general case,
by \cite[Chapter II, Theorem 2.2]{Roz} our sequence splits into a sum
$B=B^{(a)}+B^{(s)}$ of mutually orthogonal wide sense stationary processes such that
the regular part $B^{(a)}$ has the spectral measure $\mu_a$, the singular part
$B^{(s)}$ has the spectral measure $\mu_s$ and
the corresponding spaces
$H_{t}^{(a)}:=\span{B^{(a)}(v),v\le t}$,
$H_{t}^{(s)}:=\span{B^{(s)}(v), v\le t}$ are not only orthogonal but
also satisfy subordination inclusions $H_{t}^{(a)}\subset H_t$,
$H_{t}^{(s)}\subset H_t$. The latter yield
$H_t=H_{t}^{(a)}\oplus H_{t}^{(s)}$.
Moreover, for any $\xi \in H_t$ representation \eqref{L} implies
\begin{eqnarray*}
\E |L\xi|^2 &=& \int_{-\pi}^{\pi} |\ell(u)\phi_\xi(u)|^2\mu(du)
\\
&=& \int_{-\pi}^{\pi} |\ell(u)\phi_\xi(u)|^2\mu_a(du)
+ \int_{-\pi}^{\pi} |\ell(u)\phi_\xi(u)|^2\mu_s(du)
\\
&=& \E |L\xi^{(a)}|^2 + \E |L\xi^{(s)}|^2,
\end{eqnarray*}
where $\xi^{(a)}, \xi^{(s)}$ denote the projections of $\xi$
onto $H_{t}^{(a)}$ and $H_{t}^{(s)}$, respectively.
Therefore, for any $\xi\in H_t$ we have
\begin{eqnarray*}
&&\E |\xi-\B(0)|^2+ \E|L\xi|^2
\\
&=& \E |\xi^{(a)}-B^{(a)}(0)|^2+ \E |\xi^{(s)}-B^{(s)}(0)|^2
\\
&& + \E|L \xi^{(a)}|^2 + \E|L \xi^{(s)}|^2.
\end{eqnarray*}
In this situation, the minimization problem splits into two independent
ones and is solved by $\xi_t= \xi_t^{(a)}+\xi_t^{(s)}$, where
$\xi_t^{(a)},\xi_t^{(s)}$ are the solutions for the processes
$ B^{(a)}, B^{(s)}$, respectively.
For the extended prediction errors
we obtain, by using Theorem \ref{t:logdiv},
\be \label{sigma_as}
\sigma^2(t)= \sigma^{(a)2}(t) + \sigma^{(s)2}(t)
= \sigma^{(a)2}(t) + \sigma^{(s)2}(\infty).
\ee
By applying consequently \eqref{sigma_as} and \eqref{regabsc}, we have
\begin{eqnarray*}
\lim_{t\to-\infty} \sigma^2(t)&=&
\lim_{t\to-\infty} \sigma^{(a)2}(t)+ \sigma^{(s)2}(\infty)
\\
&=& \E|B^{(a)}(0)|^2
+ \int_{-\pi}^\pi \frac{|\ell(u)|^2 \mu_s(du)}{1+|\ell(u)|^2}
\\
&=& \E|\B(0)|^2 - \E|B^{(s)}(0)|^2
+ \int_{-\pi}^\pi \frac{|\ell(u)|^2\mu_s(du)}{1+|\ell(u)|^2}
\\
&=& \E|\B(0)|^2 + \int_{-\pi}^\pi
\left(\frac{|\ell(u)|^2}{1+|\ell(u)|^2}-1\right) \mu_s(du)
\\
&=& \E|\B(0)|^2 -
\int_{-\pi}^\pi \frac{ \mu_s(du)}{1+|\ell(u)|^2}\, ,
\end{eqnarray*}
as claimed.
\end{proof}
\section{Interpolation}
Consider the simplest case of the interpolation problem (our Problem III)
in discrete time. Let $(B(t))_{t\in \Z}$ be a wide sense stationary sequence having
spectral density $f$ and let $L$ be a linear filter with frequency
characteristic $\ell(\cdot)$. Consider the extremal problem
\[
\E |Y-\B(0)|^2+ \E|LY|^2 \to \min, \qquad Y\in \HTO_1.
\]
Recall that $\HTO_1=\span{\B(s),|s|\ge 1}$.
Let
\[
\sigma_{\textrm{int}}^2 = \inf_{Y\in \HTO_1} \left(\E |Y-\B(0)|^2+ \E|LY|^2\right)
\]
denote the interpolation error.
The classical case of this problem, i.e. $L=0$, was considered by A.N. Kolmogorov
\cite{Kolm}. He proved that precise interpolation with $ \sigma_{\textrm{int}}^2=0$ is possible iff
\be \label{interdiv}
\int_{-\pi}^\pi \frac{du}{f(u)} =\infty.
\ee
If the integral in \eqref{interdiv} is convergent, then
\[
\sigma_{\textrm{int}}^2 = 4 \pi^2 \left( \int_{-\pi}^\pi \frac{du}{f(u)}\right)^{-1}.
\]
We extend this result to the case of general $L$ as follows.
\begin{thm}
If \eqref{interdiv} holds, then
\[
\sigma_{\textrm{int}}^2
= \int_{-\pi}^\pi \frac{|\ell(u)|^2}{1+|\ell(u)|^2} \ f(u)\, du.
\]
Otherwise,
\begin{eqnarray*}
\sigma_{\textrm{int}}^2
&=& \int_{-\pi}^\pi \frac{|\ell(u)|^2 f(u) du}{1+|\ell(u)|^2}
\\
&& + \left( \int_{-\pi}^\pi \frac{du}{1+|\ell(u)|^2} \right)^2
\left( \int_{-\pi}^\pi \frac{du}{f(u)(1+|\ell(u)|^2)} \right)^{-1}.
\end{eqnarray*}
\end{thm}
\begin{proof}
If \eqref{interdiv} holds, then by Kolmogorov's theorem we have
$\B(0)\in \HTO_1$, thus $\HTO_1=H$ and by \eqref{errna}
\begin{eqnarray*}
\sigma_{\textrm{int}}^2
&=& \inf_{Y\in H} \left( \E |Y-\B(0)|^2+ \E|LY|^2\right)
\\
&=& \int_{-\pi}^\pi \frac{|\ell(u)|^2}{1+|\ell(u)|^2} \ f(u)\, du,
\end{eqnarray*}
proving the first assertion of the theorem.
Assume now that
\be \label{interconv}
\int_{-\pi}^\pi \frac{du}{f(u)} <\infty.
\ee
Let us define a function $\phi$ on $[-\pi,\pi)$
by the relations
\begin{eqnarray*}
\phi(u) &:=& \frac {c+f(u)}{f(u)(1+|\ell(u)|^2)}\ ,
\\
c &=& - \left( \int_{-\pi}^\pi \frac{du}{1+|\ell(u)|^2} \right)
\left( \int_{-\pi}^\pi \frac{du}{f(u)(1+|\ell(u)|^2)} \right)^{-1}.
\end{eqnarray*}
We will prove that the random variable
\[
\xi = \int_{-\pi}^\pi \phi(u)\, \W(du)
\]
solves our interpolation problem. To this aim, according to Propositions \ref{p:exist},
\ref{p:uniq}, and \ref{p:euler}, it is sufficient to prove that $\xi\in \DD(L)$, that
$\xi\in\HTO_1$, and that $\xi$ satisfies equations \eqref{euler} of Proposition \ref{p:euler}.
First, we have
\begin{eqnarray*}
\E|L\xi|^2 &=& \int_{-\pi}^\pi |\phi(u)\ell(u)|^2 f(u) du
\\
&=& \int_{-\pi}^\pi \frac {(c+f(u))^2}{f(u)}\ \frac{|\ell(u)|^2 }{(1+|\ell(u)|^2)^2} \ du
\\
&\le& \frac{1}{4} \int_{-\pi}^\pi \frac {(c+f(u))^2}{f(u)} \, du
\\
&=& \frac{c^2}{4} \int_{-\pi}^\pi \frac {du}{f(u)} + \pi\,c
+ \frac{1}{4} \int_{-\pi}^\pi f(u) \, du <\infty,
\end{eqnarray*}
whence $\xi\in \DD(L)$.
Second, we show that $\xi\in\HTO_1$. Consider an orthogonal decomposition
$\xi=\eta+\eta^{\bot}$ with $\eta\in\HTO_1$, $\eta^{\bot}\in(\HTO_1)^{\bot}$.
We also have the corresponding analytic decomposition
$\phi=\psi+\psi^{\bot}$ with
$\psi\in \LLO :=\span{e^{isu},|s|\ge 1}$
and $\psi^{\bot}\in (\LLO)^{\bot}$.
Let us show that any $h\in \LLO$ satisfies equation
\be \label{h0}
\int_{-\pi}^\pi h(u) \, du = 0.
\ee
Indeed under assumption \eqref{interconv} the linear functional
$h\mapsto \int_{-\pi}^\pi h(u) \, du$ is bounded and continuous
on $\LL$ because by H\"older's inequality
\begin{eqnarray*}
\left| \int_{-\pi}^\pi h(u) \, du \right|^2
&\le& \int_{-\pi}^\pi \frac{du}{f(u)}\ \int_{-\pi}^\pi |h(u)|^2 f(u) \, du
\\
&=& \int_{-\pi}^\pi \frac{du}{f(u)} \ \, ||h||_\LL^2 .
\end{eqnarray*}
Therefore, equality \eqref{h0}, which is true for every exponential from the set
$\{h(u)=e^{isu}, |s|\ge 1\}$, extends to their span $\LLO$.
In particular, we obtain
\[
\int_{-\pi}^\pi \psi(u) \, du = 0.
\]
Since the constant $c$ in the definition of $\phi$ was chosen so that
\be \label{phi0}
\int_{-\pi}^\pi \phi(u) \, du = 0,
\ee
we obtain
\be \label{psi0}
\int_{-\pi}^\pi \psi^{\bot}(u) \, du = \int_{-\pi}^\pi (\phi-\psi)(u)\, du = 0.
\ee
On the other hand, we have $\psi^{\bot} f \in L_1[-\pi,\pi)$ and
$\psi^{\bot}\in (\LLO)^{\bot}$. The latter means that
\[
\int_{-\pi}^\pi \psi^{\bot}(u) e^{isu} f(u) \, du =0, \qquad s\in \Z, s\not= 0.
\]
Therefore $\psi^{\bot} f$ is a constant, say, $\psi^{\bot} f=a$. By plugging
$\psi^{\bot}= \frac{a}{f}$ into \eqref{psi0} we obtain $a=0$.
It follows that $\psi^{\bot}=0$ and $\phi=\psi\in \LLO$, which is equivalent to
$\xi\in \HTO_1$.
It remains to check that $\xi$ satisfies equations \eqref{euler}. The analytical
form of these equations is
\be \label{euler_inter}
\int_{-\pi}^\pi \left[ \phi(u)-1+|\ell(u)|^2 \phi(u) \right] h(u) f(u) du =0,
\qquad h\in\LLO.
\ee
By the definition of $\phi$ we have
\[
\phi(u)-1+|\ell(u)|^2 \phi(u) =\frac{c}{f(u)}.
\]
Therefore, \eqref{euler_inter} reduces to
\[
\int_{-\pi}^\pi h(u) \, du = 0, \qquad h\in\LLO.
\]
The latter was already verified in \eqref{h0}, and we have proved that $\xi$ is a solution of
the interpolation problem. Now a direct computation using the definitions of $\phi$ and $c$,
as well as \eqref{phi0}, shows
\begin{eqnarray*}
\sigma_{\textrm{int}}^2
&=& \int_{-\pi}^\pi \left[(\phi(u)-1)^2+|\ell(u)|^2 \phi(u)^2 \right] f(u)\, du
\\
&=& \int_{-\pi}^\pi \left[\phi(u)^2(1+|\ell(u)|^2) + (1-2\phi(u)) \right] f(u)\, du
\\
&=& \int_{-\pi}^\pi \left[\phi(u)(c+f(u)) + (1-2\phi(u)) f(u) \right] du
\\
&=& \int_{-\pi}^\pi (1-\phi(u)) f(u) du
= \int_{-\pi}^\pi \frac{ |\ell(u)|^2f(u)- c }{1+|\ell(u)|^2}\, du
\\
&=& \int_{-\pi}^\pi \frac{ |\ell(u)|^2f(u)}{1+|\ell(u)|^2}\, du
\\
&& + \left( \int_{-\pi}^\pi \frac{du}{1+|\ell(u)|^2} \right)^2
\left( \int_{-\pi}^\pi \frac{du}{f(u)(1+|\ell(u)|^2)} \right)^{-1},
\end{eqnarray*}
as claimed in the theorem's assertion.
\end{proof}
\begin{rem} {\rm
The first term in $\sigma_{\textrm{int}}^2$ is just the optimal error \eqref{errna} in the
easier optimization problem with $Y\in H$. The second term is the price we must pay
for optimization over the smaller space $\HTO_1$ instead of $H$.
}
\end{rem}
\section{Proofs for abstract Hilbert space setting}
\label{s:HSproofs}
\begin{proof}[ of Proposition \ref{p:exist}]
Note that the set $H_0 \bigcap \DD(L)$ is non-empty, since it contains zero.
Let
\[
\sigma^2:= \inf_{y\in H_0 \bigcap \DD(L)} G(y).
\]
There exists a sequence $(\xi_n)_{n\in \N}$ in $H_0 \bigcap \DD(L)$ such that
\[
\lim_{n\to\infty} G(\xi_n) =\sigma^2.
\]
Clearly, for all $m,n\in \N$ we have $(\xi_m+\xi_n)/2\in H_0 \bigcap \DD(L)$
and by convexity of $||\cdot||^2$,
\begin{eqnarray*}
\left\|\frac{\xi_m+\xi_n}{2} - x\right\|^2
&\le& \frac 12 \ \left( ||\xi_m-x||^2 + ||\xi_n-x||^2 \right),
\\
\left\| L\left( \frac{\xi_m+\xi_n}{2} \right) \right\|^2
&=& \frac 14 \ \left\| L \xi_m +L \xi_n \right\|^2
\\
&\le& \frac 12 \ \left( ||L \xi_m||^2 + ||L \xi_n||^2 \right).
\end{eqnarray*}
It follows that
\[
\limsup_{m,n\to\infty} G\left( \frac{\xi_m+\xi_n}{2} \right)
\le \frac 12 \left( \lim_{m\to\infty} G(\xi_m) + \lim_{n\to\infty} G(\xi_n)\right)
=\sigma^2.
\]
Since $G\ge \sigma^2$ on $H_0 \bigcap \DD(L)$ by the definition of $\sigma^2$, it follows that
\[
\lim_{m,n\to\infty} G\left(\frac{\xi_m+\xi_n}{2} \right) =\sigma^2.
\]
The parallelogram identity
\be \label{para}
2||f||^2 +2 ||g||^2 = ||f+g||^2+||f-g||^2
\ee
yields
\begin{eqnarray*}
&& 2 \left( G(\xi_m)+G(\xi_n)\right)
\\
&=& 2\left(||\xi_m-x||^2+||L\xi_m||^2\right)
+ 2\left(||\xi_n-x||^2+||L \xi_n||^2\right)
\\
&=& ||\xi_m+\xi_n-2x||^2 + ||\xi_m-\xi_n||^2
+ ||L(\xi_m+\xi_n)||^2
\\
&& + ||L(\xi_m-\xi_n)||^2.
\end{eqnarray*}
Therefore, as $m,n\to\infty$, we have
\begin{eqnarray*}
&& ||\xi_m-\xi_n||^2 + ||L(\xi_m-\xi_n)||^2
\\
&=& 2 \left( G(\xi_m)+G(\xi_n)\right) - 4 G\left( \frac{\xi_m+\xi_n}{2} \right)
\to 0.
\end{eqnarray*}
It follows that
\[
\lim_{m,n\to\infty} ||\xi_m-\xi_n|| =0, \qquad
\lim_{m,n\to\infty} ||L\xi_m-L\xi_n|| =0.
\]
Therefore, the sequence $\xi_n$ converges in norm to an element $\xi\in H_0$, and the sequence
$L \xi_n$ converges in norm to an element $g\in H$. Since $L$ is a closed operator, we have
$\xi\in\DD(L)$ and $g=L\xi$. Moreover, we have
\[
\lim_{n\to\infty} ||L\xi_n||= ||L\xi||.
\]
Finally, we obtain
\[
\sigma^2 =\lim_{n\to\infty} G(\xi_n) = G(\xi), \qquad \xi\in H_0\cap \DD(L),
\]
as required.
\end{proof}
\begin{proof} [ of Proposition \ref{p:uniq}]
Assume that $\xi_1$ and $\xi_2$ are two distinct solutions
of the problem \eqref{A1}, i.e.
\[
G(\xi_1)=G(\xi_2)= \sigma^2.
\]
By using the parallelogram identity \eqref{para}, we have
\begin{eqnarray*}
&& G\left( \frac{\xi_1+\xi_2}{2}\right)
\\ &=& \frac 12 \left(G(\xi_1)+G(\xi_2)\right)
- \left\| \frac{\xi_1-\xi_2}{2} \right\|^2
- \left\| \frac{L\xi_1-L\xi_2}{2} \right\|^2
\\
&\le& \sigma^2 - \frac 14 \ \left\| \xi_1-\xi_2 \right\|^2
<\sigma^2,
\end{eqnarray*}
which is impossible. The contradiction proves the proposition.
\end{proof}
\begin{proof}[ of Proposition \ref{p:LL}]
Let $L^*$ be the operator adjoint to $L$. Since $L$ is a closed
operator with dense domain, $L^*L$ is a self-adjoint
non-negative operator, cf.\,\cite[Chapter IV]{AG}. Therefore,
$(I+L^*L)^{-1}$ is a bounded self-adjoint operator. By letting
$A:=I+L^*L$, we have
\begin{eqnarray*}
&& ||y-x||^2+||Ly||^2 = ||x||^2-(x,y)-(y,x) +(Ay,y)
\\
&=& ||x||^2 + ||A^{1/2} y-A^{-1/2} x||^2 - ||A^{-1/2} x||^2
\\
&\ge& ||x||^2 - ||A^{-1/2} x||^2,
\end{eqnarray*}
and the equality is attained iff $y=\xi=A^{-1}x$.
\end{proof}
\begin{proof}[ of Proposition \ref{p:euler}]
Let $\xi$ be a solution of Problem \eqref{A1}. Then for every
$Y\in H_0\cap \DD(L)$ we have
\[
m:=||\xi-x||^2+||L\xi||^2\le ||Y-x||^2+||L Y||^2.
\]
Fix an arbitrary $h\in H_0\cap \DD(L)$; then $\xi+\eps h\in H_0\cap \DD(L)$
for all real $\eps$. We have
\begin{eqnarray*}
m &\le& ||\xi+\eps h-x||^2+||L(\xi+\eps h)||^2
\\
&=& m + 2 \eps \left[ \Re(\xi-x,h) + \Re(L \xi, L h) \right]
\\
&& + \eps^2 \left[ ||h||^2+||L h||^2 \right].
\end{eqnarray*}
Since the right-hand side, as a function of the real variable $\eps$, attains its minimum $m$ at $\eps=0$, the coefficient of $\eps$ must vanish:
\[
\Re(\xi-x,h) + \Re(L \xi, L h) =0.
\]
By replacing $\eps$ with $i\eps$ we obtain
\[
\Im(\xi-x,h) + \Im(L \xi, L h) =0.
\]
Combining the two equalities, we arrive at \eqref{euler}.
We establish now the uniqueness of the solution for \eqref{euler}.
Let $\xi_1,\xi_2\in H_0\cap \DD(L)$ satisfy equalities
\begin{eqnarray*}
&& (\xi_1-x,h)+( L\xi_1,L h)=0,
\\
&& (\xi_2-x,h)+( L\xi_2,L h)=0,
\end{eqnarray*}
for all $h\in H_0\cap \DD(L)$. It follows that
\[
(\xi_1-\xi_2,h)+( L(\xi_1-\xi_2),L h)=0.
\]
By plugging $h=\xi_1-\xi_2$ into this equality, we arrive at
\[
||\xi_1-\xi_2||^2+ ||L(\xi_1-\xi_2)||^2=0,
\]
hence, $\xi_1=\xi_2$.
\end{proof}
\section*{Acknowledgements} The work of M.A. Lifshits was supported by RFBR grant
16-01-00258.
\begin{abstract}
We study the dynamics of magnetic flows on Heisenberg groups. Let $H$ denote the three-dimensional simply connected Heisenberg Lie group endowed with a left-invariant Riemannian metric and an exact, left-invariant magnetic field. Let $\Gamma$ be a lattice subgroup of $H,$ so that $\Gamma\backslash H$ is a closed nilmanifold. We first find an explicit description of magnetic geodesics on $H$, then determine all closed magnetic geodesics and their lengths for $\Gamma \backslash H$. We then consider two applications of these results: the density of periodic magnetic geodesics and marked magnetic length spectrum rigidity. We show that tangent vectors to periodic magnetic geodesics are dense for sufficiently large energy levels. We also show that if $\Gamma_1, \Gamma_2 < H$ are two lattices such that $\Gamma_1 \backslash H$ and $\Gamma_2 \backslash H$ have the same marked magnetic length spectrum, then they are isometric as Riemannian manifolds. Both results show that this class of magnetic flows carries significant information about the underlying geometry. Finally, we provide an example to show that extending this analysis of magnetic flows to the Heisenberg type setting is considerably more difficult.
\end{abstract}
\section{Introduction}
From the perspective of classical mechanics, the geodesics of a Riemannian manifold $(M,g)$ are the possible trajectories of a point mass moving in the absence of any forces and in zero potential. A magnetic field can be introduced by choosing a closed 2-form $\Omega$ on $M$. A charged particle moving on $M$ now experiences a Lorentz force, and its trajectory is called a magnetic geodesic. As with Riemannian geodesics, they can be handled collectively as a single object called the magnetic geodesic flow on $TM$ or $T^*M$ (see Section \ref{sec:mag flows} for precise definitions). Many classical questions concerning geodesic flows have corresponding analogs for magnetic flows. Indeed, magnetic flows display a number of remarkable properties. See \cite{grognet1999maginnegcurv}, \cite{paternain2006horocycleflows}, \cite{burns2006rigidity}, \cite{butlerpaternain2008magnetic}, and \cite{abbondandolo2017exactflowsonsurfaces} for a sampling of results.
One can interpret magnetic flows as a particular type of perturbation of the underlying geodesic flow. Much is known about the underlying geodesic flow of nilmanifolds, and we are interested in what properties persist or fail to persist for magnetic flows. This perspective is adopted for the property of topological entropy in \cite{epstein2017topent} in the setting of two-step nilmanifolds and in \cite{butlerpaternain2008magnetic} in the setting of $\text{SOL}$ manifolds; and for topological entropy and the Anosov property in \cite{paternainpaternain1995}, \cite{paternainpaternain1997} and \cite{burnspaternain2002anosov}. In \cite{peyerimhoff2003} the authors show that at high enough energy levels the magnetic geodesics are quasi-geodesics with respect to the underlying Riemannian structure. An important classical question of geodesic flows concerns the existence of closed geodesics and related properties such as their lengths and their density. This paper focuses on these properties in the context of magnetic flows generated by left-invariant magnetic fields on Riemannian two-step nilmanifolds. Although this setting is more complicated than the Euclidean setting (i.e. 1-step nilmanifolds), many explicit computations are still tractable, and it has been a rich source of conjectures and counter-examples.
Let $H$ denote a simply connected $(2n+1)$-dimensional Heisenberg group endowed with a left-invariant Riemannian metric. The Lie group $H$ admits cocompact discrete subgroups (i.e. lattices) $\Gamma$ and, because the Riemannian metric is left-invariant, the quotient inherits a metric such that $\Gamma \backslash H$ is a compact Riemannian manifold and $H \to \Gamma \backslash H$ is a Riemannian covering. A geodesic $\sigma(t)$ in $H$ is said to be translated by an element $\gamma \in H$ if $\gamma \sigma(t) = \sigma(t + \omega)$ for all $t$ and for some $\omega > 0$. A geodesic that is translated by $\gamma$ is said to be $\gamma$-periodic. When $\gamma \in \Gamma$, each geodesic translated by $\gamma$ will project to a smoothly closed geodesic in $\Gamma \backslash H$. Geodesic behavior in $\Gamma \backslash H$ and, more generally, in $\Gamma\backslash N$, where $N$ denotes a simply connected two-step nilpotent Lie group with a left-invariant metric, is fairly well understood. In the general Riemannian two-step case, it is possible to describe precisely the set of smoothly closed geodesics in $\Gamma \backslash N$, along with their lengths. See Eberlein \cite{eberlein1994geometry} for the Heisenberg case and Gornet-Mast \cite{gornetmast2000lengthspectrum} for the more general setting. Our main result is a complete analysis of left-invariant, exact magnetic flows on three-dimensional Heisenberg groups.
\begin{theorem*}[See Section \ref{sec:magnetic geodesic equations on simply connected 2n+1 dim Heis}, Lemma \ref{lem:effect of energy constraint on range of ells} and Theorem \ref{thm:periods of central elt in 3D Heisenberg}]
Let $H$ be a three-dimensional simply connected Heisenberg group, $g$ a left-invariant metric on $H$, and $\Omega$ a left-invariant, exact magnetic field on $H$. For any $\gamma \in H$, there is an explicit description of all the $\gamma$-periodic magnetic geodesics of the magnetic flow generated by $(H, g, \Omega)$ satisfying $\sigma(0) = e$, the identity element. The lengths of closed magnetic geodesics may be explicitly computed in terms of metric Lie algebra information.
\end{theorem*}
This theorem allows for the explicit computation of all closed magnetic geodesics in the free homotopy class determined by each $\gamma \in \Gamma$. Unlike the Riemannian case, closed magnetic geodesics exist in all nontrivial homotopy classes only for sufficiently large energy. In addition, there exist closed and contractible magnetic geodesics on sufficiently small energy levels.
We give two applications of our main result. The first concerns the density of tangent vectors to closed magnetic geodesics. Eberlein analyzes this property for Riemannian geodesic flows on two-step nilmanifolds with a left-invariant metric, showing that for certain types of two-step nilpotent Lie groups (including Heisenberg groups), the vectors tangent to smoothly closed unit speed geodesics in the corresponding nilmanifold are dense in the unit tangent bundle \cite{eberlein1994geometry}; Mast \cite{mast1994closedgeods} and Lee-Park \cite{leepark1996density} broadened this result. In Theorem \ref{thm:density of closed mag geods}, we show that the density property continues to hold for magnetic flows on sufficiently high energy levels on the Heisenberg group. The second is a marked length spectrum rigidity result (see Section \ref{sec:rigidity} for the definition). It is known that within certain classes of Riemannian manifolds, if two have the same marked length spectrum then they are isometric. This is true in the class of negatively curved surfaces (see \cite{croke1990rigidity} and \cite{otal1990commentarii, otal1990annals}) and compact flat manifolds (see \cite{berger1971lecturenotes}, \cite{berard1986spectralgeom}, \cite{miatellorossetti2003}). In \cite{grognet2005marked}, S. Grognet studies marked length spectrum rigidity of magnetic flows on surfaces with pinched negative curvature. In Theorem \ref{thm:MLS rigidity}, we show that the marked magnetic length spectrum of left-invariant magnetic systems on compact quotients of the Heisenberg group determines the Riemannian metric. Although it is a perturbation of the geodesic flow, the magnetic flow still carries information about the underlying Riemannian manifold.
This paper is organized as follows. In Section \ref{sec:preliminaries}, we present the necessary preliminaries in order to state and prove the main theorems. The definition and basic properties of magnetic flows are given in Section \ref{sec:mag flows} and the necessary background on nilmanifolds is given in Section \ref{sec:geoemtry of two-step nilpotent lie groups}. Next, we show how a left-invariant Hamiltonian system on the cotangent bundle of a Lie group reduces to a so-called Euler flow on the dual to the Lie algebra. Such Hamiltonians are known as collective Hamiltonians, and this process is outlined in Section \ref{sec:left-inv hamiltonian on Lie groups}. Section \ref{sec:exact, left-inv mag forms} specializes the preceding to the case of exact, left-invariant magnetic flows on two-step nilpotent Lie groups. In Section \ref{sec:magnetic geodesic equations on simply connected 2n+1 dim Heis}, the magnetic geodesic equations on the $(2n+1)$-dimensional Heisenberg group are solved. In Section \ref{sec:compact quotients}, we apply these formulas to obtain our main theorem and the applications described above. Many geometric results for the Heisenberg group have been shown to hold for the larger class of Heisenberg type manifolds. In Section \ref{sec:HT manifolds}, we use a specific example to show why our analysis of magnetic flows on Heisenberg type manifolds is considerably more difficult. Lastly, the so-called $j$-maps are a central part of the theory of two-step Riemannian nilmanifolds. In the appendix, we provide an alternative approach to studying the magnetic geodesics using $j$-maps instead of collective Hamiltonians.
\section{Preliminaries} \label{sec:preliminaries}
\subsection{Magnetic flows} \label{sec:mag flows}
A \emph{magnetic structure} on a Riemannian manifold $\left(M,g\right)$
is a choice of closed $2$-form $\Omega$ on $M$, called the \emph{magnetic
$2$-form}. The \emph{magnetic flow} of $\left(M,g,\Omega\right)$
is the Hamiltonian flow $\Phi_{t}$ on $TM$ determined by the symplectic
form
\begin{align} \label{eq:magnetic symplectic form}
\varpi_\text{mag} = \bar\varpi + \pi^*\Omega
\end{align}
and the kinetic energy Hamiltonian $H_0:TM\rightarrow \bb{R},$ given by
\begin{align}
H_0 \left( v \right)=\frac{1}{2}g\left(v,v\right) = \frac{1}{2}|v|^2 \text{.}
\end{align}
Here $\pi:TM \rightarrow M$ denotes the canonical projection and
$\bar\varpi$ denotes the pullback via the Riemannian metric of the canonical symplectic form
on $T^*M$.
The magnetic flow models the motion of a charged particle under the effect of a magnetic field whose Lorentz force $F:TM\rightarrow TM$
is the bundle map defined via
\begin{align*}
\Omega_{x}\left(u,v\right)=g_{x}\left(F_{x}u,v\right)
\end{align*}
for all $x\in M$ and all $u,v\in T_{x}M$. The orbits of the magnetic flow have the form $t\mapsto\dot{\sigma}\left(t\right)$, where $\sigma$ is a curve in $M$ such that
\begin{equation} \label{eq:magnetic geodesic equation}
\nabla_{\dot{\sigma}}\dot{\sigma}=F\dot{\sigma}\text{.}
\end{equation}
In the case that $\Omega=0$, the magnetic flow reduces to Riemannian geodesic flow. A curve $\sigma$ that satisfies \eqref{eq:magnetic geodesic equation} is called a \emph{magnetic geodesic}. The physical interpretation of a magnetic geodesic is that it is the path followed by a particle with unit mass and charge under the influence of the magnetic field. Because $F$ is skew-symmetric, the acceleration of the magnetic geodesic is perpendicular to its velocity.
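In particular, magnetic geodesics have constant speed: along any solution of
\eqref{eq:magnetic geodesic equation},
\begin{align*}
\frac{d}{dt}\, g\left(\dot{\sigma},\dot{\sigma}\right)
= 2\, g\left(\nabla_{\dot{\sigma}}\dot{\sigma},\dot{\sigma}\right)
= 2\, g\left(F\dot{\sigma},\dot{\sigma}\right)
= 2\, \Omega\left(\dot{\sigma},\dot{\sigma}\right)=0\text{.}
\end{align*}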
\begin{rmk}
As computed above, magnetic geodesics have constant
speed. In contrast to the Riemannian setting, a unit speed reparametrization
of a solution to \eqref{eq:magnetic geodesic equation} may no longer be a solution.
To see this, let $\sigma(s)$ be a solution that is not unit speed
and denote the energy $E=\left\vert \dot{\sigma}\right\vert >0.$ Define
$\tau\left(s\right)=\sigma\left(s/E\right)$, which is unit speed.
Then
\begin{align*}
\nabla_{\dot{\tau}}\dot{\tau}=\frac{1}{E^{2}}\nabla_{\dot{\sigma}}\dot{\sigma}=\frac{1}{E^{2}}F\dot{\sigma}=\frac{1}{E}F\dot{\tau}\neq F\dot{\tau}\text{,}
\end{align*}
in general. Therefore, one views a magnetic geodesic as the path,
not the parameterized curve. (Observe that $\tau$ is a solution
to the magnetic flow determined by the magnetic form $\frac{1}{\sqrt{E}}\Omega$.)
\end{rmk}
Recall that the tangent and cotangent bundles of a Riemannian manifold are canonically identified, and the Riemannian metric on $TM \to M$ induces a non-degenerate, symmetric 2-tensor on $T^*M \to M$. We will present most of the theory in the setting of the cotangent bundle, while occasionally indicating how to translate to the tangent bundle. Note that many authors use the tangent bundle approach. See for example \cite{burns2006rigidity}.
Slightly abusing notation, we now let $\pi$ denote the basepoint map of the cotangent bundle, let $g$ denote the metric on the cotangent bundle, and define $H_0 : T^*M \to \bb{R}$ as $H_0(p) = \frac{1}{2} g(p,p) = \frac{1}{2} |p|^2$. Accordingly, the magnetic flow of $(M,g,\Omega)$ is the Hamiltonian flow $\Phi_t$ on the symplectic manifold $(T^*M, \varpi + \pi^* \Omega)$ determined by the Hamiltonian $H_0$. Regardless of approach, the projections of the orbits to the base manifold will be the same magnetic geodesics determined by \eqref{eq:magnetic geodesic equation}.
On the cotangent bundle
\begin{align} \label{eq:cotangent magnetic symplectic form}
\varpi_\text{mag} = \varpi + \pi^*\Omega
\end{align}
defines a symplectic form as long as $\Omega$ is closed; $\Omega$ may be non-exact or exact. In the former case, $\Omega$ is referred to as a \emph{monopole}. In the latter case, when $\Omega$ is exact, the magnetic flow can be realized either as the Euler-Lagrange flow of an appropriate Lagrangian, or (via the Legendre transform) as a Hamiltonian flow on $T^\ast M$ endowed with its canonical symplectic structure. Note that even if two magnetic fields represent the same cohomology class, they generally determine distinct magnetic flows.
Suppose that $\Omega = d\theta$ for some 1-form $\theta$. A computation in local coordinates shows that the diffeomorphism $f:T^*M \to T^*M$ defined by $f(x,p) = (x,p-\theta_x)$ conjugates the Hamiltonian flow of $(T^*M, \varpi + \pi^{\ast}\Omega, H_0)$ with the Hamiltonian flow of $(T^*M, \varpi, H_1)$ where
\begin{align} \label{eq:modified Hamiltonian}
H_1(x,p) = \frac{1}{2}|p + \theta_x|^2.
\end{align}
\subsection{Example: Magnetic Geodesics in the Euclidean Plane} \label{sec:Euclidean plane example}
Before introducing two-step nilmanifolds in the following subsection, we first provide an example of a left-invariant magnetic system in a simpler context.
Let $M = \bb{R}^2$ endowed with the standard Euclidean metric $g$. Let $\Omega = B \ dx \wedge dy$ denote a magnetic 2-form, where $(x,y)$ denote global coordinates and $B$ is a real parameter that can be interpreted as modulating the strength of the magnetic field.
Let $\sigma_{v}\left(t\right) = (x(t),y(t))$ denote the magnetic geodesic through the identity $e=(0,0)$ with initial velocity $v = (x_0,y_0) = x_{0} \frac{\partial}{\partial x} + y_{0} \frac{\partial}{\partial y} \neq 0$ and energy $E=\sqrt{x_{0}^{2}+y_{0}^{2}}$. The Lorentz force $F$ satisfies $F \left( 1, 0 \right) = B (0,1)$ and $F \left( 0, 1 \right) = -B (1, 0)$. By \eqref{eq:magnetic geodesic equation} $\sigma_v(t)$ satisfies
\begin{align*}
\left(\ddot{x},\ddot{y}\right)=F\left(\dot{x},\dot{y}\right)=B\left(-\dot{y},\dot{x}\right)\text{.}
\end{align*}
The unique solution satisfying $\sigma_v(0) = e$ and $\dot{\sigma}_v(0) = v$ is
\begin{align*}
x\left(t\right) & = -\frac{y_{0}}{B}\left(1-\cos\left(tB\right)\right)+\frac{x_{0}}{B}\sin\left(tB\right) \\
y\left(t\right) & = \frac{x_{0}}{B}\left(1-\cos\left(tB\right)\right)+\frac{y_{0}}{B}\sin\left(tB\right)\text{.}
\end{align*}
Then $\sigma_{v}\left(t\right)$ is a \emph{circle} of radius $\frac{E}{|B|}$ and center $\left(-\frac{y_{0}}{B},\frac{x_{0}}{B}\right)$. It is immediate that magnetic geodesics cannot be reparameterized. For if $\sigma_{v'}(t)$ is another magnetic geodesic through the identity with $v'$ parallel to $v$ but with $|v| \neq |v'|$, then $\sigma_{v'}(t)$ will describe a circle of different radius. Furthermore magnetic geodesics are not even time-reversible. The magnetic geodesic $\sigma_{-v}\left(t\right)$ is a circle of radius $\frac{E}{|B|}$ and center $\left(\frac{y_{0}}{B},-\frac{x_{0}}{B}\right)$; in particular, $\sigma_{-v}\left(t\right)$ and $\sigma_{v}\left(t\right)$ are both circles of the same radius but trace different paths. Note that every magnetic geodesic in this setting is periodic. This will not be the case for two-step nilmanifolds.
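Note also that, by the explicit formulas above, each of these magnetic geodesics is periodic with period $\frac{2\pi}{\left|B\right|}$, independent of the initial velocity, while the radius $\frac{E}{\left|B\right|}$ of the circle it traces grows linearly in the energy and shrinks as the strength $\left|B\right|$ of the magnetic field increases.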
\subsection{Left-invariant Hamiltonians on Lie groups} \label{sec:left-inv hamiltonian on Lie groups} Let $G$ be a Lie group with Lie algebra $\mf{g}$. On the one hand, $T^*G$ ( $=G\times \mathfrak{g}^*$) is a symplectic manifold and each function $H : T^*G \to \bb{R}$ generates a Hamiltonian flow with infinitesimal generator $X_H$. On the other hand, $\mf{g}^*$ is a Poisson manifold and each function $f : \mf{g}^* \to \bb{R}$ determines a derivation of $C^\infty(\mf{g}^*)$ and hence a vector field $E_f$, called the Euler vector field associated to $f$. When the function $H$ is left-invariant, i.e. $H((L_x)^* \alpha) = H(\alpha)$ for all $x \in G$ and all $\alpha \in T^*G$, it induces a function $h : \mf{g}^* \to \bb{R}$ and the flow of $X_H$ factors onto the flow of $E_h$. Moreover, the flow of $X_H$ can be reconstructed from $E_h$ and knowledge of the group structure of $G$. Note that this is a special case of a more general class of Hamiltonians, called collective Hamiltonians. More details and physical motivation can be found in Sections 28 and 29 of \cite{guillemin1990symplectic}. We outline below how we will use this approach to study magnetic flows.
A Poisson manifold is a smooth manifold $M$ together with a Lie bracket $\{ \cdot, \cdot \}$ on the algebra $C^\infty(M)$ that also satisfies the property
\begin{align} \label{eq:Poisson Leibniz rule}
\{f, gh\} = \{f,g\}h + g\{f, h\}
\end{align}
for all $f,g,h \in C^\infty(M)$. Hence, for a fixed function $h \in C^\infty(M)$, the map $C^\infty(M) \to C^\infty(M)$ defined by $f \mapsto \{ f, h \}$ is a derivation of $C^\infty(M)$. Therefore, there is an Euler vector field $E_h$ on $M$ such that $E_h ( \cdot ) = \{ \cdot, h \}$.
An important source of Poisson manifolds is the vector space dual to a Lie algebra. We will make use of the standard identifications $T_p \mf{g}^* \simeq \mf{g}^*$ and $T^*_p\mf{g}^* \simeq (\mf{g}^*)^* \simeq \mf{g}$, and $\langle \ \cdot \ , \ \cdot \ \rangle$ will denote the natural pairing between $\mf{g}$ and $\mf{g}^*$. For a function $f \in C^\infty(\mf{g}^*)$, its differential $df_p$ at $p \in \mathfrak{g}^*$ is identified with an element of the Lie algebra $\mf{g}$. The Lie bracket structure on $\mf{g}$ induces the Poisson structure on $\mf{g}^*$ by
\begin{align} \label{eq:Poisson structure on g^*}
\{ f, g \}(p) = - \langle p, \left[df_p, dg_p\right] \rangle= -p\left(\left[df_p,dg_p\right]\right).
\end{align}
Antisymmetry and the Jacobi Identity follow from the properties of the Lie bracket $[ \ \cdot \ , \ \cdot \ ]$, while the derivation property \eqref{eq:Poisson Leibniz rule} follows from the Leibniz rule for the exterior derivative.
It is useful to express the Euler vector field $E_h$ in terms of $h$ and the representation $\ad^* : \mf{g} \to \mf{gl}(\mf{g}^*)$ dual to the adjoint representation, defined as
\begin{align} \label{eq:Lie algebra dual}
\langle \ad^*_X p, Y \rangle = - \langle p, \ad_X Y \rangle.
\end{align}
From the definition of the differential of a function,
\begin{align*}
\langle E_h(p), df_p \rangle = E_h(f)(p) = \{ f, h \}(p) = - \langle p, [df_p, dh_p] \rangle = - \langle \ad^*_{dh_p} p, df_p \rangle.
\end{align*}
From this we conclude that
\begin{align} \label{eq:Euler v.f. on Lie algebra dual}
E_h(p) = - \ad^*_{dh_p} p.
\end{align}
Now consider $T^*G \simeq G \times \mf{g}^*$ trivialized via left-multiplication. Let $r: G \times \mf{g}^* \to \mf{g}^*$ be projection onto the second factor. If $h: \mf{g}^* \to \bb{R}$ is any smooth function, then $H = h \circ r$ is a left-invariant Hamiltonian on $T^*G$. Conversely, any left-invariant Hamiltonian $H$ factors as $H = h \circ r$. Recall that the canonical symplectic structure $\varpi$ on $T^*G \simeq G \times \mf{g}^*$ is
\begin{align} \label{eq:canon symp structure on Lie group}
\varpi_{(x,p)} ( (U_1, \alpha_1), (U_2, \alpha_2) ) = \alpha_2 (U_1) - \alpha_1(U_2) + p([U_1, U_2])
\end{align}
where we identify $T_{(g,p)}T^*G \simeq \mf{g} \times \mf{g}^*$ (see section 4.3 of \cite{eberlein2004Cubo} for more details). To find an expression for the Hamiltonian vector field $X_H(x,p) = (X,\lambda)$ of a left-invariant Hamiltonian, first consider vectors of the form $(0,\alpha)$ in the equation $\varpi( X_H, \ \cdot \ ) = dH( \ \cdot \ )$. We have
\begin{align*}
\varpi_{(x,p)} ( (X, \lambda), (0, \alpha) ) &= dH_{(x,p)}(0,\alpha) =d(h\circ r)_{(x,p)}(0,\alpha), \\
\alpha (X) - \lambda(0) + p([X, 0]) &= dh_p(\alpha), \\
\alpha (X) &= \alpha( dh_p ).
\end{align*}
Since this is true for all choices of $\alpha$, we get $X = dh_p$. Next consider vectors of the form $(U,0)$. Since $H$ is left-invariant,
\begin{align*}
\varpi_{(x,p)}((dh_p, \lambda), (U,0)) &= dH_{(x,p)}(U,0), \\
- \langle \lambda, U \rangle + \langle p, [dh_p, U] \rangle &= 0, \\
\langle \lambda, U \rangle &= - \langle \ad_{dh_p}^* p, U \rangle.
\end{align*}
Since this must be true for every $U$, we have that $\lambda = -\ad_{dh_p}^* p = E_h(p)$. For a left-invariant Hamiltonian, the equations of motions for its associated Hamiltonian flow are
\begin{align} \label{eq:equations of motion for left-inv ham}
X_H(x,p) = \begin{cases}
\dot{x} = (L_x)_*(dh_p) \\
\dot{p} = E_h(p) = - \ad^*_{dh_p} p
\end{cases}.
\end{align}
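As a degenerate check of \eqref{eq:equations of motion for left-inv ham} (our observation): if $G = \bb{R}^n$ is abelian, then $\ad^* \equiv 0$, so $E_h = 0$ and $p$ is constant along the flow; for $h(p) = \frac{1}{2}|p|^2$ the first equation becomes $\dot{x} = \sharp(p)$, and the solutions are straight lines traversed at constant speed, i.e. the Euclidean geodesics.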
\subsection{The Geometry of Two-Step Nilpotent Metric Lie Groups} \label{sec:geoemtry of two-step nilpotent lie groups} Our objects of study in this paper are simply connected two-step nilpotent Lie groups endowed with a left-invariant metric. For an excellent reference regarding the geometry of these manifolds, see \cite{eberlein1994geometry}.
Let $\mf{g}$ denote a two-step nilpotent Lie algebra with Lie bracket $[\ ,\ ]$ and non-trivial center $\mf{z}$. That is, $\mf{g}$ is nonabelian and $[X,Y]\in\mf{z}$ for all $X,Y\in\mf{g}$. Let $G$ denote the unique, simply connected Lie group with Lie algebra $\mf{g}$; then $G$ is a two-step nilpotent Lie group. The Lie group exponential map $\exp : \mf{g} \to G$ is a diffeomorphism, with inverse map denoted by $\log : G \to \mf{g}$. Using the Campbell-Baker-Hausdorff formula, the multiplication law can be expressed as
\begin{align} \label{eq:multiplication law}
\exp(X)\exp(Y) = \exp \left( X + Y + \frac{1}{2}[X,Y] \right).
\end{align}
For any $A \in \mf{g}$ and any $X \in T_A \mf{g} \simeq \mf{g}$, the push-forward of the Lie group exponential at $A$ is
\begin{align*}
(\exp_*)_A (X) = (L_{\exp(A)})_*\left( X + \frac{1}{2}[X,A] \right).
\end{align*}
Using this, the tangent vector to any smooth path $\sigma(t) = \exp(U(t))$ in $G$ is given by
\begin{align} \label{eq:tangent vector field of path in two-step nilpotent group}
\sigma'(t) =(L_{\sigma(t)})_* \left( U'(t) + \frac{1}{2}[U'(t),U(t)] \right).
\end{align}
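As a quick check (ours): if $U(t) = tU_0$ for a fixed $U_0 \in \mf{g}$, then $[U'(t), U(t)] = t[U_0, U_0] = 0$, and \eqref{eq:tangent vector field of path in two-step nilpotent group} reduces to $\sigma'(t) = (L_{\sigma(t)})_* U_0$, as expected for the one-parameter subgroup $\sigma(t) = \exp(tU_0)$.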
When a two-step nilpotent Lie algebra $\mf{g}$ is endowed with an inner product $g$, then there is a natural decomposition $\mf{g} = \mf{v} \oplus \mf{z}$, where $\mf{z}$ is the center of $\mf{g}$ and $\mf{v}$ is the orthogonal complement to $\mf{z}$ in $\mf{g}$. Every central vector $Z \in \mf{z}$ determines a skew-symmetric linear transformation of $\mf{v}$ (relative to the restriction of $g$), denoted $j(Z)$, as follows:
\begin{align} \label{eq:j-map def}
g(j(Z)V_1, V_2) = g([V_1, V_2],Z)
\end{align}
for any vectors $V_1, V_2 \in \mf{v}$. In fact, this correspondence is a linear map $j : \mf{z} \to \mf{so}(\mf{v})$. These maps, first introduced by Kaplan \cite{kaplan1981Cliffordmodules}, capture all of the geometry of a two-step nilpotent metric Lie group. For example, the $j$-maps provide a very useful description of the Levi-Civita connection. For $V_1, V_2 \in \mf{v}$ and $Z_1, Z_2 \in \mf{z}$,
\begin{align}
& \nabla_{V_1}V_2 = \frac{1}{2}[V_1, V_2], \nonumber \\
& \nabla_{V_1}Z_1 = \nabla_{Z_1} V_1 = - \frac{1}{2}j(Z_1) V_1, \label{eq:Levi-Civita eqns} \\
& \nabla_{Z_1}Z_2 = 0. \nonumber
\end{align}
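For example (our illustration, anticipating the notation of Section \ref{sec:magnetic geodesic equations on simply connected 2n+1 dim Heis}): in the three-dimensional Heisenberg algebra with $[X,Y] = Z$ and orthonormal basis $\{X/\sqrt{A}, Y/\sqrt{A}, Z\}$, equation \eqref{eq:j-map def} gives
\begin{align*}
j(Z)\frac{X}{\sqrt{A}} = \frac{1}{A}\frac{Y}{\sqrt{A}}, \qquad j(Z)\frac{Y}{\sqrt{A}} = -\frac{1}{A}\frac{X}{\sqrt{A}},
\end{align*}
so $j(Z)$ acts on $\mf{v}$ as $1/A$ times a quarter-turn rotation; the frequencies $z_0/A$ appearing in the magnetic geodesic equations of that section reflect this scaling.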
\subsection{Exact, Left-Invariant Magnetic Forms on Simply Connected Two-Step Nilpotent Lie Groups} \label{sec:exact, left-inv mag forms}
We use the formalism of Subsection \ref{sec:left-inv hamiltonian on Lie groups} to express the equations of motion for the magnetic flow of an exact, left-invariant magnetic form on a simply connected two-step nilpotent Lie group. Throughout this section, $\mf{g}$ denotes a two-step nilpotent Lie algebra with an inner product and $G$ denotes the simply connected Lie group with Lie algebra $\mf{g}$ endowed with the left-invariant Riemannian metric determined by the inner product on $\mf{g}$.
As a reminder, angled brackets denote the natural pairing of a vector space and its dual. Recall that any (finite dimensional) vector space $V$ is naturally identified with $V^{**}$ by sending any vector $v \in V$ to the linear functional $V^* \to \bb{R}$ defined by evaluation on $v$. Using this identification, we can and do view elements of $V$ simultaneously as elements of $V^{**}$. The inner product on $\mf{g}^*$ is specified by a choice of linear map $\sharp: \mf{g}^* \to \mf{g}$ such that (a) $\langle p, \sharp(p) \rangle > 0$ for all $p \neq 0$ and (b) $\langle p, \sharp(q) \rangle = \langle \sharp(p), q \rangle$ for all $p,q \in \mf{g}^*$. The inner product of $p, q \in \mf{g}^*$ is then given by $\langle p, \sharp(q) \rangle$ and the induced norm is $|p| = \sqrt{\langle p, \sharp(p) \rangle}$. Conversely, any inner product on $\mf{g}^*$ induces a map $\sharp : \mf{g}^* \to \mf{g}^{**} \simeq \mf{g}$ with the properties (a) and (b). Of course, $\sharp^{-1} = \flat$ is then the flat map and the inner product of $X$ and $Y$ in $\mf{g}$ can be computed as $\langle X, \flat(Y) \rangle$.
Let $\mf{g} = \mf{v} \oplus \mf{z}$ be the decomposition of $\mathfrak{g}$ into the center and its orthogonal complement. Let $\mf{g}^* = \mf{v}^* \oplus \mf{z}^*$ be the corresponding decomposition where $\mf{v}^*$ is the set of functionals that vanish on $\mf{z}$ and vice versa.
\begin{lem} \label{lem:every exact left-invariant 2-form is d of something central}
If $\Omega$ is an exact, left-invariant 2-form on $G$, then there exists $B\in \bb{R}$ and $\zeta_m \in \mf{z}^*$ such that $|\zeta_m| = 1$ and $\Omega = d(B\zeta_m)$.
\end{lem}
\begin{proof}
By hypothesis, $\Omega = d\theta$ for some left-invariant 1-form $\theta$. By left-invariance, $\theta$ can be expressed as $\theta = \theta_\mf{v} + \theta_\mf{z}$, where $\theta_\mf{v} \in \mf{v}^*$ and $\theta_\mf{z} \in \mf{z}^*$, and $d\theta_\mf{v}(X,Y) = -\theta_\mf{v}([X,Y]) $ for any $X,Y \in \mf{g}$. Because $[X,Y] \in \mf{z}$, $d\theta_\mf{v} = 0$. Hence
\begin{align*}
\Omega = d\theta = d(\theta_\mf{v} + \theta_\mf{z}) = d\theta_\mf{z}.
\end{align*}
Lastly, set $\zeta_m = \theta_\mf{z} / |\theta_\mf{z}|$ and $B = |\theta_\mf{z}|$.
\end{proof}
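For instance (our illustration, anticipating the notation of Section \ref{sec:magnetic geodesic equations on simply connected 2n+1 dim Heis}): on the three-dimensional Heisenberg algebra, the left-invariant 1-form $\zeta$ dual to the central vector $Z$ satisfies $d\zeta(X,Y) = -\zeta([X,Y]) = -1$ and $d\zeta(\,\cdot\,, Z) = 0$, so that $d\zeta = -\alpha \wedge \beta$; hence every exact, left-invariant 2-form on $H_1$ is a real multiple of $\alpha \wedge \beta$.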
Given $B\in\mathbb{R}$ and $\zeta_m\in\mf{g}^*$, we define the function $H:T^*G \to \bb{R}$ by
\begin{align} \label{eq:magnetic Hamiltonian}
H(x,p) = \frac{1}{2}|p + B\zeta_m|^2.
\end{align}
By the previous lemma, we may assume $\zeta_m$ is a unit element in $\mf{g}^*$ that vanishes on $\mf{v}$. Because $\zeta_m$ is left-invariant, $H$ is left-invariant and factors as $H = h \circ r$, where $h : \mf{g}^* \to \bb{R}$ is the function
\begin{align} \label{eq:reduced magnetic Hamiltonian}
h(p) = \frac{1}{2}|p + B\zeta_m|^2.
\end{align}
Note that when $B = 0$, the Hamiltonian flow of $H$ is the geodesic flow of the chosen Riemannian metric.
\begin{lem} \label{lem:differential of magnetic hamiltonian}
The differential of $h$ is $dh_p = \sharp(p + B \zeta_m)$.
\end{lem}
\begin{proof}
For any $p \in \mf{g}^*$ and any $q \in T_p\mf{g}^* \simeq \mf{g}^*$, we compute
\begin{align*}
\langle q, dh_p \rangle &= \frac{d}{dt}\bigg|_{t = 0} h(p + tq) \\
&= \frac{1}{2} \frac{d}{dt}\bigg|_{t = 0} |p + tq + B\zeta_m |^2 \\
&= \frac{1}{2} \frac{d}{dt}\bigg|_{t = 0} \langle p + B\zeta_m + tq, \sharp(p + B\zeta_m + tq) \rangle \\
&= \frac{1}{2} \frac{d}{dt}\bigg|_{t = 0} \left( | p + B\zeta_m |^2 + 2t\langle p + B\zeta_m , \sharp(q) \rangle + t^2 |q|^2 \right) \\
&= \langle p + B\zeta_m, \sharp(q) \rangle.
\end{align*}
The Lemma now follows from the properties of $\sharp$.
\end{proof}
We now prove that the Euler vector field on $\mf{g}^*$ is independent of the choice of exact magnetic field, including the choice $\Omega = 0$.
\begin{lem} \label{lem:independent of B}
Let $h \in C^\infty(\mf{g}^*)$ be any function of the form \eqref{eq:reduced magnetic Hamiltonian} and define the function $h_0 \in C^\infty(\mf{g}^*)$ by $h_0(p) = \frac{1}{2}|p|^2$. Then $E_{h_0} = E_h$.
\end{lem}
\begin{proof}
For any $\zeta \in \mf{z}^*$ and any $V \in \mf{v}$, $\langle V, \flat(\sharp(\zeta)) \rangle = \langle V, \zeta \rangle = 0$, which shows that $\sharp(\mf{z}^*) \subseteq \mf{z}$ and hence, by a dimension count, $\sharp(\mf{z}^*) = \mf{z}$. For any $X \in \mf{g}$,
by the previous lemma,
\begin{align*}
\langle \ad^*_{dh_p} p, X \rangle = - \langle p, [\sharp(p + B\zeta_m), X] \rangle = - \langle p, [\sharp p, X] \rangle = \langle \ad^*_{(dh_0)_p} p, X \rangle.
\end{align*}
Hence $\ad^*_{dh_p}\, p = \ad^*_{(dh_0)_p}\, p$ for every $p$, and the proof follows from the expression \eqref{eq:Euler v.f. on Lie algebra dual} for the Euler vector field.
\end{proof}
We now describe the structure of the Euler vector field. Much of this can be gleaned from the results of \cite{eberlein1994geometry}. However, we include it here for the sake of self-containment. For any $X \in \mf{g}$ and $p \in \mf{g}^*$, we write $X = X_\mf{v} + X_\mf{z}$ and $p = p_\mf{v} + p_\mf{z}$ for the respective orthogonal decomposition according to $\mf{g} = \mf{v} \oplus \mf{z}$ and $\mf{g}^* = \mf{v}^* \oplus \mf{z}^*$.
\begin{lem} \label{lem:structure of Euler vector field on two-step}
The integral curves of the Euler vector field $E_h$ are of the form $p(t) = p_\mf{v}(t) + \zeta_0$, where $\zeta_0 \in \mf{z}^*$ and $p_\mf{v}(t) \in \mf{v}^*$ is a path that satisfies $p_\mf{v}'(t) = A(p_\mf{v}(t))$ for some skew-symmetric linear transformation $A$ of $\mf{v}^*$ (depending on $\zeta_0$).
\end{lem}
\begin{proof}
From \eqref{eq:Lie algebra dual}, the dual adjoint representation clearly has the following properties: $\ad^*_{Z} = 0$ for every $Z \in \mf{z}$, $\ad^*_X(\mf{g}^*) \subset \mf{v}^*$ for all $X \in \mf{g}$, and $\ad^*_X(\mf{v}^*) = \{ 0 \}$ for every $X \in \mf{g}$. From this, if $p(t) = p_\mf{v}(t) + p_\mf{z}(t)$ is an integral curve of $E_h$, then $p_\mf{z}(t) = p_\mf{z}(0) = \zeta_0$ is constant, and, using Lemmas \ref{lem:differential of magnetic hamiltonian} and \ref{lem:independent of B}, $p_{\mf{v}}(t)$ must satisfy the system
\begin{align*}
p'_{\mf{v}}(t) = E_h(p(t)) = -\ad^*_{dh_{p(t)}} p(t) = -\ad^*_{\sharp(p_{\mf{v}}(t))} p_\mf{z}(t) = -\ad^*_{\sharp(p_{\mf{v}}(t))} \zeta_0.
\end{align*}
Here $A : \mf{v}^* \to \mf{v}^*$ denotes the linear map $A(q) = -\ad^*_{\sharp(q)}\zeta_0$; note that $A(q) \in \mf{v}^*$ by the properties listed above. It is skew-symmetric with respect to the inner product restricted to $\mf{v}^*$ because $\langle A(q_1), \sharp(q_2) \rangle = \langle \zeta_0, [\sharp(q_1), \sharp(q_2)] \rangle$ is antisymmetric in $q_1$ and $q_2$. This completes the proof.
\end{proof}
Let $(G,g,\Omega)$ be a magnetic system, where $G$ is a simply connected two-step nilpotent Lie group, $g$ is a left-invariant metric, and $\Omega$ is an exact, left-invariant magnetic form. Let $\flat : \mf{g} \to \mf{g}^*$ and $\sharp = \flat^{-1}$ be the associated flat and sharp maps, and let $\zeta_m$ be as in Lemma \ref{lem:every exact left-invariant 2-form is d of something central}. The magnetic flow can be found as follows. First, compute the coadjoint representation $\ad^* : \mf{g} \to \mf{gl}(\mf{g}^*)$ and integrate the vector field $E(p) = -\ad_{dh_p}^* p$. By \eqref{eq:equations of motion for left-inv ham}, the curves $\sigma(t)$ satisfying $\sigma'(t) = (L_{\sigma(t)})_*(dh_{p(t)})$, where $p(t)$ is an integral curve of $E$, are magnetic geodesics. To make this step more explicit, let $\mf{g} = \mf{v} \oplus \mf{z}$ be the decomposition of $\mf{g}$ where $\mf{z}$ is the center and $\mf{v}$ is its orthogonal complement. Suppose that $p(t) = p_1(t) + \zeta_0$ is an integral curve of $E$, where $p_1(t) \in \mf{v}^*$ and $\zeta_0 \in \mf{z}^*$, and $\sigma(t) = \exp({\bf X}(t) + {\bf Z}(t))$ is a path in $G$, where ${\bf X}(t) \in \mf{v}$ and ${\bf Z}(t) \in \mf{z}$. Using \eqref{eq:tangent vector field of path in two-step nilpotent group}, we can decompose the equation ${\bf X}'(t) + {\bf Z}'(t) + \frac{1}{2}[{\bf X}'(t),{\bf X}(t)] = dh_{p(t)} = \sharp(p(t) + B\zeta_m)$ as
\begin{align}
& {\bf X}'(t) = \sharp(p_1(t)), \label{eq:mag goed equations v2a} \\
& {\bf Z}'(t) + \frac{1}{2}[{\bf X}'(t),{\bf X}(t)] = \sharp(\zeta_0 + B \zeta_m). \label{eq:mag goed equations v2b}
\end{align}
Assuming that the path satisfies $\sigma(0) = e$, the first equation can be integrated to find ${\bf X}(t)$, which then allows the second equation to be integrated to find ${\bf Z}(t)$.
\begin{rmk}
The presence of the magnetic field can be thought of as a perturbation of the geodesic flow of $(G,g)$, modulated by the parameter $B$. In the procedure outlined here for two-step nilpotent Lie groups, the magnetic field only appears in the final step. The Euler vector field, and hence its integral curves, are unchanged by the magnetic field. In addition, the non-central component of a magnetic geodesic is the same as that of the corresponding Riemannian geodesic. A left-invariant exact magnetic field therefore only perturbs the central component of the Riemannian geodesics.
\end{rmk}
\begin{rmk} \label{rmk:how to calculate energy of mag geod from initial conditions}
For a magnetic geodesic $\sigma(t)$, we will call $|\sigma'(t)|$ its \emph{energy}. Note that this is a conserved quantity for magnetic flows. Since we are not considering a potential, the total energy of a charged particle in a magnetic system is its kinetic energy $|\sigma'(t)|^2 /2$. Although this would be commonly referred to as the energy in the physics and dynamics literature, we find our convention to be more convenient from our geometric viewpoint.
\end{rmk}
\begin{rmk} \label{rmk:general expression for energy squared}
Although $t \mapsto (\sigma(t), p(t))$ is an integral curve of the Hamiltonian vector field, the Hamiltonian $h$ is not the standard kinetic-energy Hamiltonian $\frac{1}{2}|p|^2$, and the left-translated velocity is $\sharp(p + B\zeta_m)$ rather than $\sharp(p)$; hence the energy of the magnetic geodesic is not equal to $|p(0)|$. Instead, by \eqref{eq:mag goed equations v2a} and \eqref{eq:mag goed equations v2b}, the energy squared is
\begin{align} \label{eq:energy of mag geod}
|\sigma'(t)|^2 = |\sharp(p(0)) + B \sharp(\zeta_m)|^2 = |\sharp(p_1(0))|^2 + |\sharp(\zeta_0 + B \zeta_m)|^2.
\end{align}
\end{rmk}
\section{Simply Connected $(2n + 1)$-Dimensional Heisenberg Groups}
\label{sec:magnetic geodesic equations on simply connected 2n+1 dim Heis}
Let $\mf{h}_{n} = \myspan\{ X_1, \ldots, X_n, Y_1, \ldots, Y_n, Z\}$ and define a bracket structure on $\mf{h}_n$ by declaring the only nonzero brackets among the basis vectors to be $[X_i, Y_i] = Z$ and extending $[ \ \cdot \ , \ \cdot \ ]$ to all of $\mf{h}_n \times \mf{h}_n$ by bilinearity and skew-symmetry. Then $\mf{h}_n$ is a two-step nilpotent Lie algebra called the Heisenberg Lie algebra of dimension $2n + 1$, and the simply connected Lie group $H_n$ with Lie algebra $\mf{h}_n$ is called the Heisenberg group of dimension $2n+1$. Let $\{ \alpha_1, \beta_1, \ldots, \alpha_n, \beta_n, \zeta \}$ be the dual basis of $\mf{h}_n^*$. The following Lemma, proven in Lemma 3.5 of \cite{gordon1986spectrum}, shows that to consider every inner product on $\mf{h}_n$, we need only consider inner products on $\mf{h}_n$ that have a simple relationship to the bracket structure.
\begin{lem}
Let $g$ be any inner product on $\mf{h}_n$. There exists $\varphi \in \Aut(\mf{h}_n)$ such that
\begin{align} \label{eq:standard orthonomral basis for Heisenberg algebra}
\left\{ \frac{X_1}{\sqrt{A_1}}, \ldots, \frac{X_n}{\sqrt{A_n}}, \frac{Y_1}{\sqrt{A_1}}, \ldots, \frac{Y_n}{\sqrt{A_n}}, Z \right\}
\end{align}
is an orthonormal basis relative to $\varphi^*g$, where $A_1, \ldots, A_n$ are positive real numbers.
\end{lem}
\begin{proof}
Consider the linear map defined by
\begin{align*}
X_i \mapsto \frac{X_i}{|Z|} \qquad Y_i \mapsto Y_i \qquad Z \mapsto \frac{Z}{|Z|}.
\end{align*}
This is an automorphism of $\mf{h}_n$ and $Z$ is a unit vector relative to the pullback of the metric. Hence we can and will assume that $|Z| = 1$.
Let $\psi_1$ be the linear map defined by $\psi_1(X_i) = \allowbreak X_i - g(X_i,Z) Z$, $\psi_1(Y_i) = Y_i - g(Y_i,Z)Z$, and $\psi_1(Z) = Z$. Now $\psi_1 \in \Aut(\mf{h}_n)$ and $\mf{v} = \myspan\{ X_1, \ldots, X_n, \allowbreak Y_1, \ldots, Y_n \}$ is orthogonal to $\mf{z} = \myspan\{ Z \}$ relative to $\psi_1^* g$.
Next consider the map $j(Z) \in \so(\mf{v}, \psi_1^*g)$. Because it is skew-symmetric, there exists a $\psi_1^*g$-orthonormal basis $\{ \wt{X}_1, \ldots, \wt{X}_n, \wt{Y}_1, \ldots, \wt{Y}_n \}$ of $\mf{v}$ such that $j(Z)\wt{X}_i = d_i \wt{Y}_i$ and $j(Z)\wt{Y}_i = -d_i\wt{X}_i$
for some real numbers $d_i > 0$. Because
\begin{align*}
(\psi_1^*g)(Z, [\wt{X}_i, \wt{Y}_i]) = (\psi_1^*g)(j(Z)\wt{X}_i, \wt{Y}_i) = (\psi_1^*g)(d_i\wt{Y}_i, \wt{Y}_i) = d_i
\end{align*}
we see that $[\wt{X}_i, \wt{Y}_i] = d_i Z$. Define the linear map $\psi_2$ by
\begin{align*}
\psi_2(X_i) = \frac{1}{\sqrt{d_i}} \wt{X}_i \qquad \psi_2(Y_i) = \frac{1}{\sqrt{d_i}} \wt{Y}_i \qquad \psi_2(Z) = Z.
\end{align*}
Then $\psi_2 \in \Aut(\mf{h}_n)$ because
\begin{align*}
[\psi_2(X_i),\psi_2(Y_i)] = Z = \psi_2(Z)= \psi_2(\left[ X_i, Y_i \right])
\end{align*}
and, setting $A_i = 1/d_i$, it is clear that the basis \eqref{eq:standard orthonomral basis for Heisenberg algebra} is orthonormal relative to $\psi_2^* (\psi_1^* g)$, since $|\psi_2(X_i)|_{\psi_1^*g} = |\psi_2(Y_i)|_{\psi_1^*g} = 1/\sqrt{d_i}$. Hence $\varphi = \psi_1 \circ \psi_2$ is the desired automorphism of $\mf{h}_n$.
\end{proof}
When \eqref{eq:standard orthonomral basis for Heisenberg algebra} is an orthonormal basis of $\mf{h}_n$, the sharp and flat maps are given by
\begin{center}
\begin{tabular}{l l}
$\flat( X_i / \sqrt{A_i} ) = \sqrt{A_i} \alpha_i,$ & $\sharp(\sqrt{A_i}\alpha_i) = X_i / \sqrt{A_i},$ \\
$\flat( Y_i / \sqrt{A_i} ) = \sqrt{A_i} \beta_i,$ & $\sharp(\sqrt{A_i}\beta_i) = Y_i / \sqrt{A_i},$ \\
$\flat( Z ) = \zeta,$ & $\sharp(\zeta) = Z.$
\end{tabular}
\end{center}
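Equivalently, $\flat(X_i) = A_i \alpha_i$, $\flat(Y_i) = A_i \beta_i$, $\sharp(\alpha_i) = X_i/A_i$, and $\sharp(\beta_i) = Y_i/A_i$; these are the forms in which the maps are used below.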
Relative to the basis $\{ X_1, \ldots, X_n, Y_1, \ldots, Y_n, Z\}$, the adjoint representation is
\begin{align*}
\ad_U = \begin{bmatrix}
0 & \cdots & & & & 0 & 0 \\
\vdots & & & & & \vdots & \vdots \\
0 & \cdots & & & & 0 & 0 \\
-y_1 & \cdots & -y_n & x_1 & \cdots & x_n & 0
\end{bmatrix}
\end{align*}
where $U = \sum x_i X_i + \sum y_i Y_i + z Z$. Relative to the dual basis, the coadjoint representation is the negative transpose
\begin{align*}
\ad^*_U = -(\ad_U)^T = \begin{bmatrix}
0 & \cdots & & & & 0 & y_1 \\
\vdots & & & & & \vdots & \vdots \\
0 & \cdots & & & & 0 & y_n \\
0 & \cdots & & & & 0 & -x_1 \\
\vdots & & & & & \vdots & \vdots \\
0 & \cdots & & & & 0 & -x_n \\
0 & \cdots & & & & 0 & 0
\end{bmatrix}.
\end{align*}
Because the center of $\mf{h}_n$ is one-dimensional, we may take $\zeta_m = \zeta$ (absorbing a sign into $B$ if necessary), where $\zeta_m$ is as specified in Lemma \ref{lem:every exact left-invariant 2-form is d of something central}.
Letting $p = \sum_i a_i \alpha_i + \sum_i b_i \beta_i + c\zeta$ be a point in $\mf{h}_n^*$, the differential of the Hamiltonian is
\begin{align*}
dh_p = \sharp(p + B\zeta) = \sum_i \frac{a_i}{A_i}X_i + \sum_i \frac{b_i}{A_i}Y_i + (c+ B)Z
\end{align*}
and the Euler vector field is
\begin{align*}
E_h(p) = -\ad_{dh_p}^* p = \sum_i \frac{-cb_i}{A_i} \alpha_i + \sum_i \frac{ca_i}{A_i} \beta_i.
\end{align*}
To integrate the system $p' = E_h(p)$, note that $E_h$ involves only $p$ and not $B$, in accordance with Lemma \ref{lem:independent of B}, and that, by Lemma \ref{lem:structure of Euler vector field on two-step}, the central component of any integral curve is constant.
Suppose that $p(t) = \sum a_i(t) \alpha_i + \sum b_i(t) \beta_i + c(t) \zeta$ is a solution that satisfies the initial condition $p(0) = \sum u_i \alpha_i + \sum v_i \beta_i + z_0 \zeta$. Then $c(t) = z_0$ and the remaining components form a linear system,
\begin{align*}
a_i'(t) = -\frac{z_0}{A_i}b_i(t) \qquad \qquad b_i'(t) = \frac{z_0}{A_i} a_i(t)
\end{align*}
that is directly integrated to find
\begin{align*}
a_i(t) &= u_i \cos \left( \frac{z_0 t}{A_i} \right) - v_i \sin \left( \frac{z_0 t}{A_i} \right), \\
b_i(t) &= u_i \sin \left( \frac{z_0 t}{A_i} \right) + v_i \cos \left( \frac{z_0 t}{A_i} \right).
\end{align*}
With an expression for the integral curves of the Euler vector field now established, we use equations \eqref{eq:mag goed equations v2a} and \eqref{eq:mag goed equations v2b} to obtain a coordinate expression for the magnetic geodesics through the identity. Let ${\bf X}(t) = \sum x_i(t) X_i + \sum y_i(t) Y_i$. If $z_0 \neq 0$, a direct integration of \eqref{eq:mag goed equations v2a} together with ${\bf X}(0) = 0$ yields
\begin{align*}
x_i(t) &= \frac{u_i}{z_0}\sin \left( \frac{z_0 t}{A_i} \right) + \frac{v_i}{z_0}\cos \left( \frac{z_0 t}{A_i} \right) - \frac{v_i}{z_0},\\
y_i(t) &= -\frac{u_i}{z_0} \cos \left( \frac{z_0 t}{A_i} \right) + \frac{v_i}{z_0}\sin \left( \frac{z_0 t}{A_i} \right) + \frac{u_i}{z_0}.
\end{align*}
If $z_0 = 0,$ we obtain
\begin{align*}
x_i(t) &= \frac{u_i}{A_i}t, \\
y_i(t) &= \frac{v_i}{A_i}t.
\end{align*}
Because the center is one-dimensional, the central component ${\bf Z}(t)$ in \eqref{eq:mag goed equations v2b} can be expressed as ${\bf Z}(t) = z(t)Z$. To integrate \eqref{eq:mag goed equations v2b} in the case that $z_0 \neq 0$, first compute
\begin{align*}
[{\bf X}'(t), {\bf X}(t)] = \sum\left(x'_iy_i-x_iy'_i\right)Z=\sum \frac{u_i^2 + v_i^2}{A_i z_0} \left( \cos \left(\frac{z_0 t}{A_i} \right) - 1 \right) Z
\end{align*}
so that
\begin{align*}
{\bf Z}'(t) = z'(t)Z &= \sharp(z_0 \zeta + B \zeta) - \frac{1}{2}[{\bf X}'(t), {\bf X}(t)] \\
&= (z_0 + B)Z - \sum \frac{u_i^2 + v_i^2}{2 A_i z_0} \left( \cos \left(\frac{z_0 t}{A_i} \right) - 1 \right) Z \\
&= \left(z_0 + B + \sum \frac{u_i^2 + v_i^2}{2 A_i z_0} \right)Z - \sum \frac{u_i^2 + v_i^2}{2 A_i z_0} \cos \left(\frac{z_0 t}{A_i} \right) Z
\end{align*}
and hence
\begin{align*}
z(t) &= \left( z_0 + B + \sum \frac{u_i^2 + v_i^2}{2 A_i z_0} \right)t - \sum \frac{u_i^2 + v_i^2}{2z_0^2} \sin \left(\frac{z_0 t}{A_i} \right).
\end{align*}
In summary, when $z_0 \neq 0$, every magnetic geodesic $\sigma(t) = \exp( \sum x_i(t) X_i + \sum y_i(t) Y_i + z(t)Z)$ satisfying $\sigma(0) = e$ has the form
\begin{align}
x_i(t) &= \frac{u_i}{z_0}\sin \left( \frac{z_0 t}{A_i} \right) - \frac{v_i}{z_0} \left(1- \cos \left( \frac{z_0 t}{A_i} \right) \right), \label{eq:x_i component of mag geod} \\
y_i(t) &= \frac{u_i}{z_0} \left( 1 - \cos \left( \frac{z_0 t}{A_i} \right) \right) + \frac{v_i}{z_0}\sin \left( \frac{z_0 t}{A_i} \right), \label{eq:y_i component of mag geod} \\
z(t) &= \left( z_0 + B + \sum \frac{u_i^2 + v_i^2}{2 A_i z_0} \right)t - \sum \frac{u_i^2 + v_i^2}{2z_0^2} \sin \left(\frac{z_0 t}{A_i} \right). \label{eq:z component of mag geod}
\end{align}
When $z_0 = 0,$ we obtain
\begin{align}
x_i(t) &= \frac{u_i}{A_i}t, \label{eq:x_i component of nonspiraling mag geod} \\
y_i(t) &= \frac{v_i}{A_i}t, \label{eq:y_i component of nonspiraling mag geod} \\
z(t) &= B t. \label{eq:z component of nonspiraling mag geod}
\end{align}
\begin{rmk} \label{rmk:spiraling terminology}
A magnetic geodesic $\sigma(t)$ will be a one-parameter subgroup if and only if either $z_0 = 0$, or $z_0 \neq 0$ and $u_i = v_i = 0$ for all $i$. We will sometimes call a magnetic geodesic \emph{spiraling} if it is not a one-parameter subgroup, and \emph{non-spiraling} if it is. We will also call a magnetic geodesic \emph{central} if $\sigma(t) \in Z(H_n)$ for all $t$.
\end{rmk}
The initial velocity of the magnetic geodesic $\sigma(t)$ is
\begin{align*}
\sigma'(0) = \sum \left( \frac{u_i}{A_i} X_i + \frac{v_i}{A_i} Y_i \right) + (z_0 + B)Z.
\end{align*}
Because $|X_i|^2 = |Y_i|^2 = A_i$, we can compute the square of the energy $E = |\sigma'(t)| = |\sigma'(0)|$ as (see Remark \ref{rmk:how to calculate energy of mag geod from initial conditions})
\begin{align} \label{eq:energy of spiraling mag geod}
E^2 = |\sigma'(0)|^2 = \sum_i \frac{u_i^2 + v_i^2}{A_i} + (z_0 + B)^2
\end{align}
Note that this expression is valid for all values of $z_0$.
\begin{thm} \label{thm:closed contractible magnetic geodesics}
There exist periodic magnetic geodesics with energy $E$ if and only if $0 < E < |B|$. For any $0 < E < |B|$, let $z_0 = -\sgn(B)\sqrt{B^2 - E^2}$ and let $u_i$ and $v_i$ be any numbers satisfying \eqref{eq:energy of spiraling mag geod}. Then the spiraling magnetic geodesics determined by $u_1,v_1, \ldots, u_n, v_n, z_0$ will be periodic of energy $E$. Moreover, the period of such a geodesic is $\omega = 2\pi A / z_0$.
\end{thm}
\begin{proof}
Note first that a non-spiraling magnetic geodesic is a one-parameter subgroup with nonzero initial velocity and hence cannot be periodic, since $\exp$ is a diffeomorphism. Inspection of the coordinate functions \eqref{eq:x_i component of mag geod}-\eqref{eq:z component of mag geod} of a spiraling magnetic geodesic shows that they yield a periodic magnetic geodesic if and only if the coefficient of $t$ in \eqref{eq:z component of mag geod} is zero. This condition is
\begin{align*}
0 &= z_0 + B + \sum \frac{u_i^2 + v_i^2}{2 A_i z_0} = z_0 + B + \frac{1}{2z_0} \left( E^2 - (z_0 + B)^2 \right)
\end{align*}
or
\begin{align*}
z_0^2 = B^2 - E^2.
\end{align*}
This condition can only be satisfied when $E < |B|$. To obtain a spiraling magnetic geodesic we also need $(z_0 + B)^2 < E^2$ or, equivalently, $z_0 \in (-B - E, -B + E)$. Since this interval contains only negative or only positive numbers, depending on the sign of $B$, we must choose $z_0 = -\sgn(B)\sqrt{B^2 - E^2}$. Finally, to see that $z_0$ is indeed contained in this interval, note that $|z_0| = \sqrt{B^2 - E^2} = \sqrt{(-B-E)(-B+E)}$ is the geometric mean of the endpoints of the interval, which have the same sign.
\end{proof}
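For instance (a numerical illustration with values chosen by us): taking $A = 1$, $B = 2$ and $E = 1$, the theorem gives $z_0 = -\sqrt{3}$, any $u_1, v_1$ with $u_1^2 + v_1^2 = 1 - (2-\sqrt{3})^2 = 4\sqrt{3} - 6$, and period $|\omega| = 2\pi/\sqrt{3}$; the resulting closed magnetic geodesics have length $E|\omega| = 2\pi/\sqrt{3}$, in agreement with \eqref{eq:lengths in trivial free homotopy class} below.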
\begin{ex} \label{ex:mag geods on 3D Heis}
For convenience we state the component functions of a magnetic geodesic $\sigma(t) = \exp(x(t)X + y(t)Y + z(t)Z)$ in the 3-dimensional Heisenberg group (i.e. $n = 1$) with $\sigma(0) = e$. To ease notation, we write $\{X,Y,Z\}$ for the basis of $\mf{h}_1$ and $\{ \alpha, \beta, \zeta\}$ for the dual basis of $\mf{h}_1^*$, and we let $A = A_1$. Given a point $p(0) = u_0 \alpha + v_0 \beta + z_0 \zeta$ with $z_0 \neq 0$, the corresponding magnetic geodesic has component functions
\begin{align*}
x(t) &= \frac{u_0}{z_0}\sin \left( \frac{z_0 t}{A} \right) - \frac{v_0}{z_0} \left(1- \cos \left( \frac{z_0 t}{A} \right) \right), \\
y(t) &= \frac{u_0}{z_0} \left( 1 - \cos \left( \frac{z_0 t}{A} \right) \right) + \frac{v_0}{z_0}\sin \left( \frac{z_0 t}{A} \right), \\
z(t) &= \left( z_0 + B + \frac{u_0^2 + v_0^2}{2 A z_0} \right)t - \frac{u_0^2 + v_0^2}{2z_0^2} \sin \left(\frac{z_0 t}{A} \right).
\end{align*}
When $z_0 = 0,$ we obtain
\begin{align*}
x(t) = \frac{u_0}{A}t \qquad y(t) = \frac{v_0}{A}t \qquad z(t) = B t.
\end{align*}
\end{ex}
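The component functions above can also be checked numerically against the general procedure of Subsection \ref{sec:exact, left-inv mag forms}. The following sketch (ours, not part of the argument; \texttt{numpy} is assumed and the parameter values are arbitrary) integrates the reduced system $p' = E_h(p)$, ${\bf X}' = \sharp(p_1)$, ${\bf Z}' = \sharp(\zeta_0 + B\zeta_m) - \frac{1}{2}[{\bf X}',{\bf X}]$ on $\mf{h}_1$ with a fourth-order Runge--Kutta scheme and compares the result with the closed-form expressions of Example \ref{ex:mag geods on 3D Heis}.
\begin{verbatim}
import numpy as np

A, B = 2.0, 0.7              # metric parameter A = A_1 and magnetic strength B
u0, v0, z0 = 1.3, -0.4, 0.9  # p(0) = u0*alpha + v0*beta + z0*zeta

def closed_form(t):
    # Closed-form magnetic geodesic on H_1 through e (case z0 != 0).
    w = z0 * t / A
    x = (u0 / z0) * np.sin(w) - (v0 / z0) * (1.0 - np.cos(w))
    y = (u0 / z0) * (1.0 - np.cos(w)) + (v0 / z0) * np.sin(w)
    z = (z0 + B + (u0**2 + v0**2) / (2 * A * z0)) * t \
        - (u0**2 + v0**2) / (2 * z0**2) * np.sin(w)
    return np.array([x, y, z])

def rhs(s):
    # s = (a, b, x, y, z); Euler system with central component frozen at z0,
    # X' = sharp(p_v), Z' = (z0 + B)Z - [X', X]/2 with [X', X] = (x'y - xy')Z.
    a, b, x, y, z = s
    da, db = -z0 * b / A, z0 * a / A
    dx, dy = a / A, b / A
    dz = (z0 + B) - 0.5 * (dx * y - x * dy)
    return np.array([da, db, dx, dy, dz])

def rk4(s, h, steps):
    for _ in range(steps):
        k1 = rhs(s); k2 = rhs(s + 0.5 * h * k1)
        k3 = rhs(s + 0.5 * h * k2); k4 = rhs(s + h * k3)
        s = s + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return s

T, N = 3.0, 30000
num = rk4(np.array([u0, v0, 0.0, 0.0, 0.0]), T / N, N)
print(np.abs(num[2:] - closed_form(T)).max())   # small (numerical error only)
\end{verbatim}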
\begin{rmk}
\label{rmk:RiemannVsMagGeods}
It is instructive to compare the magnetic geodesics on $\bb{R}^2$ given in Section \ref{sec:Euclidean plane example} and the magnetic geodesics on $H_1$ given in Example \ref{ex:mag geods on 3D Heis}. In the former, all magnetic geodesics are closed circles with radii that depend on the energy. In the latter, when $z_0 \neq 0$, the non-central components $x(t)X + y(t)Y$ also trace circles, whose radii depend on both the energy and $z_0$. It is also worth noting some qualitative differences between Riemannian geodesics and magnetic geodesics on Heisenberg groups. In the Riemannian case, one-parameter subgroups of the form $\exp(t(x_0 X + y_0 Y))$ are always geodesics. In contrast, when $B \neq 0$ the central component $z(t)$ of a non-constant magnetic geodesic can never vanish identically. Finally, note that in the Riemannian setting there are never closed geodesics in $H_n$ (compare with Theorem \ref{thm:closed contractible magnetic geodesics}).
\end{rmk}
\section{Compact Quotients of Heisenberg Groups} \label{sec:compact quotients}
A geodesic $\sigma: \bb{R} \to M$ in a Riemannian manifold $M$ is called periodic or (smoothly) closed if there exists $\omega \neq 0$ such that $\sigma(t + \omega) = \sigma(t)$ for all $t \in \bb{R}$. A periodic or closed magnetic geodesic is defined similarly, and we now investigate the closed magnetic geodesics on manifolds of the form $\Gamma\backslash H_n$, where $\Gamma$ is a cocompact (i.e., $\Gamma\backslash H_n$ is compact) discrete subgroup of the $(2n+1)$-dimensional simply connected Heisenberg group $H_n$. As is common, we proceed by considering $\gamma$-periodic magnetic geodesics on the universal cover $H_n$. An important distinction between the magnetic and Riemannian settings is that in the former one needs to address each energy level separately, because magnetic geodesics cannot be reparameterized.
\subsection{$\gamma$-Periodic Magnetic Geodesics}
\begin{defn}
Let $N$ be a simply connected nilpotent Lie group with left invariant metric and magnetic form. For any $\gamma \in N$ not equal to the identity, a magnetic geodesic $\sigma\left(t\right)$ is called \emph{$\gamma$-periodic with period $\omega$} if $\omega \neq 0$ and for all $t\in\mathbb{R}$
\begin{align} \label{eq:gamma-periodic geodeisc condition}
\gamma\sigma\left(t\right)=\sigma\left(t+\omega\right).
\end{align}
We also say that \emph{$\gamma$ translates the magnetic geodesic $\sigma(t)$ by amount $\omega$}. The number $\omega$ is called a \emph{period of $\gamma$}.
\end{defn}
When $\Gamma < N$ is a cocompact discrete subgroup and $\gamma \in \Gamma$, a $\gamma$-periodic magnetic geodesic will project to a smoothly closed magnetic geodesic under the mapping $N \rightarrow \Gamma\backslash N$ and will be contained in the free homotopy class represented by $\gamma$. Every periodic magnetic geodesic on $\Gamma\backslash N$ arises as the image of a $\gamma$-periodic magnetic geodesic on $N$.
\begin{lem} \label{lem:non-central elts only translate non-spiraling mag goeds}
Let $\gamma = \exp(V_\gamma + Z_\gamma) \in H_n$, where $Z_\gamma \in Z(\mf{h}_n)$ and $V_\gamma$ is orthogonal to $Z(\mf{h}_n)$, and let $\sigma(t) = \exp({\bf X}(t) + {\bf Z}(t))$ be a $\gamma$-periodic magnetic geodesic. If $V_\gamma \neq 0$, then $\sigma$ is a noncentral 1-parameter subgroup (see Remark \ref{rmk:spiraling terminology}).
\end{lem}
\begin{proof}
Repeated use of \eqref{eq:gamma-periodic geodeisc condition} shows that $\gamma^k \sigma(t) = \sigma(t + k\omega)$ for every $k \in \bb{Z}$. Using the multiplication formula \eqref{eq:multiplication law} on each side of the equation, the non-central components must satisfy $kV_\gamma + {\bf X}(t) = {\bf X}(t + k\omega)$. If $V_\gamma \neq 0$, then ${\bf X}(t + k\omega) - {\bf X}(t) = kV_\gamma$ is unbounded as $k \to \infty$, so ${\bf X}$ must be unbounded. Inspection of the magnetic geodesic equations \eqref{eq:x_i component of mag geod}-\eqref{eq:z component of nonspiraling mag geod} shows that this can only happen if $z_0 = 0$, i.e. $\sigma$ is a 1-parameter subgroup. Moreover $\sigma$ cannot be a central 1-parameter subgroup because then the left-hand side of \eqref{eq:gamma-periodic geodeisc condition} would be noncentral and the right-hand side would be central, a contradiction.
\end{proof}
\begin{thm} \label{thm:gamma-periodic mag geods when gamma is not central}
Let $\gamma = \exp(V_\gamma + z_\gamma Z) \in H_n$, with $V_\gamma \neq 0$. For each $E > |B|$, there exist two $\gamma$-periodic magnetic geodesics with energy $E$, with periods $\omega = \pm |V_\gamma|/\sqrt{E^2 - B^2}$ respectively. There do not exist any $\gamma$-periodic magnetic geodesics with energy $E \leq |B|$.
\end{thm}
\begin{proof}
By Lemma \ref{lem:non-central elts only translate non-spiraling mag goeds}, we need only consider non-spiraling magnetic geodesics. The energy of any such magnetic geodesic satisfies
\begin{align*}
E^2 = \sum \frac{u_i^2 + v_i^2}{A_i} + B^2 \geq B^2
\end{align*}
If equality holds, then $\sigma$ is a central 1-parameter subgroup, which is excluded by Lemma \ref{lem:non-central elts only translate non-spiraling mag goeds}. Hence $E > |B|$.
Fix $V_0 \in \mf{v}$ proportional to $V_\gamma$ and such that $|V_0|^2 + B^2 = E^2$; write $V_0 = (B/k)V_\gamma$ for some $k \in \bb{R}_{\neq0}$. Define $\gamma^* = \exp(V_\gamma + kZ)$ and $\sigma^*(t) = \exp(t(V_0 + BZ))$. Then
\begin{align*}
\gamma^* \sigma^*(t) &= \exp\left( \frac{k}{B}\left(\frac{B}{k}V_\gamma + BZ\right)\right) \exp(t(V_0 + BZ)) \\
&= \exp\left( \frac{k}{B}\left(V_0 + BZ\right)\right) \exp(t(V_0 + BZ)) \\
&= \exp\left( \left( t + \frac{k}{B} \right) \left(V_0 + BZ\right) \right) \\
&= \sigma^*\left( t + \frac{k}{B} \right)
\end{align*}
shows that $\sigma^*$ is a $\gamma^*$-periodic magnetic geodesic of energy $E$ with period $\omega = k/B$. Using the multiplication formula \eqref{eq:multiplication law} and the fact that $Z(\mf{h}_n)$ is one-dimensional, it is straightforward to see that $\gamma$ and $\gamma^*$ are conjugate in $H_n$. Thus, there exists $a \in H_n$ such that $a \gamma^* a^{-1} = \gamma$. Now $\sigma = a \cdot \sigma^*$ is a magnetic geodesic of energy $E$ and
\begin{align*}
\gamma \cdot \sigma(t) = a \gamma^* a^{-1} \sigma(t) = a \gamma^* \sigma^*(t) = a\sigma^*(t + \omega) = \sigma(t + \omega)
\end{align*}
shows that it is $\gamma$-periodic of period $\omega$. The expression for $\omega$ follows from $\pm k/B=|V_\gamma|/|V_0|,$ and $|V_0| = \sqrt{E^2 - B^2}$.
\end{proof}
Having dealt with the periods of a non-central element of $H_n$, we now consider the case when $\gamma = \exp(z_\gamma Z)$ is central. In this case, there exist $\gamma$-periodic magnetic geodesics starting at the identity of energy both greater than and less than $|B|$. For a fixed energy $E > |B|$, there will be finitely many distinct periods associated with $\gamma$-periodic magnetic geodesics, while there will be infinitely many distinct periods when $E < |B|$.
\begin{lem} \label{lem:central elements translate only central 1-param subgroups}
Let $\gamma = \exp(z_\gamma Z)$ for some $z_\gamma \in \bb{R}^*$ and suppose that $\sigma(t)$ is a $\gamma$-periodic magnetic geodesic and a 1-parameter subgroup. Then $\sigma(t) = \exp(tz_0Z)$ for some $z_0 \in \bb{R}^*$. Moreover, for every $E>0$, there exist two $\gamma$-periodic magnetic geodesics of energy $E$, $\sigma(t) = \exp(t(\pm E)Z)$, with period $\omega = z_\gamma / (\pm E)$.
\end{lem}
\begin{proof}
By Remark \ref{rmk:spiraling terminology}, a magnetic geodesic through the identity that is a 1-parameter subgroup either has $z_0 \neq 0$ and $u_i = v_i = 0$ for all $i$, in which case $\sigma(t) = \exp(t(z_0 + B)Z)$ is already central, or has $z_0 = 0$, in which case $\sigma(t) = \exp(tV_0 + BtZ)$ for some $V_0 \in \mf{v}$. In the second case, on the one hand $\gamma \sigma(t) = \exp(t V_0 + (Bt + z_\gamma)Z)$ and on the other $\sigma(t + \omega) = \exp((t + \omega)V_0 + B(t + \omega)Z)$. Hence $\omega V_0 = 0$ and since $\omega \neq 0$, we conclude that $V_0 = 0$, showing the first claim.
For each energy $E > 0$, let $z_0 = -B \pm E$ and let $\sigma(t)$ be the magnetic geodesic $\sigma(t) = \exp(t(\pm E)Z)$. Then $\sigma$ is a magnetic geodesic of energy $E$ and
\begin{align*}
\gamma \sigma(t) = \exp( (z_\gamma \pm Et) Z) = \exp\left( \pm E \left( \frac{z_\gamma}{\pm E} + t \right) Z \right) = \sigma(t + \omega)
\end{align*}
shows that it is $\gamma$-periodic of period $\omega$.
\end{proof}
Next suppose that $\sigma(t)$ is a spiraling magnetic geodesic, so that the component functions of $\sigma(t)$ have the form \eqref{eq:x_i component of mag geod}-\eqref{eq:z component of mag geod}. Comparing the coefficients of $X_1, \ldots, X_n, Y_1, \ldots, Y_n$ in $\gamma \sigma(t)$ and $\sigma(t + \omega)$ gives the conditions
\begin{align} \label{eq:resonance condition}
\sin\left( \frac{z_0}{A_i}(t + \omega) \right) = \sin\left( \frac{z_0}{A_i} t \right) \qquad \cos\left( \frac{z_0}{A_i}(t + \omega) \right) = \cos\left( \frac{z_0}{A_i} t \right)
\end{align}
for each $i = 1, \ldots, n$ such that $u_i^2 + v_i^2 \neq 0$.
We now specialize to the case of the three-dimensional Heisenberg group and obtain a complete description of the spiraling $\gamma$-periodic magnetic geodesics through the identity. Since the left-invariant metric is determined by one parameter, and a magnetic geodesic through the identity is determined by $z_0$ and a single pair $u_1, v_1$, we write $A = A_1$, $u_0 = u_1$ and $v_0 = v_1$ to ease notation. In general, the analysis depends on the relative size of $E$ and $B$, and hence breaks up naturally into the three cases $E> |B|$, $E<|B|$ and $E = |B|$. In each case, we first establish the range of permissible integers $\ell$ (introduced below). Next, for each permissible $\ell$, we describe the magnetic geodesics through the identity translated by $\gamma$ along with their respective periods.
For a spiraling magnetic geodesic, the conditions \eqref{eq:resonance condition} force the period $\omega$ and the coordinate $z_0$ to be related by $\omega z_0 = 2\pi A \ell$ for some $\ell \in \bb{Z}$. Comparing the central components in $\gamma \sigma(t)$ and $\sigma(t + \omega)$ gives the condition $z(t) + z_\gamma = z(t + \omega)$. That is,
\begin{align*}
& \left(z_0 + B + \frac{u_0^2 + v_0^2}{2A z_0} \right) t - \frac{u_0^2 + v_0^2}{2 z_0^2} \sin \left(\frac{z_0 t}{A}\right) + z_\gamma \\ & \qquad = \left(z_0 + B + \frac{u_0^2 + v_0^2}{2A z_0} \right) (t + \omega) - \frac{u_0^2 + v_0^2}{2 z_0^2} \sin \left(\frac{z_0 }{A} (t + \omega) \right).
\end{align*}
This simplifies to
\begin{align} \label{eq:central periodic condition 1}
z_\gamma = \left(z_0 + B + \frac{u_0^2 + v_0^2}{2A z_0} \right) \omega,
\end{align}
and using \eqref{eq:energy of spiraling mag geod} to eliminate the fraction and $\omega z_0 = 2\pi A \ell$ to eliminate $\omega$ this can be written as
\begin{align} \label{eq:central periodic condition 2}
z_\gamma &= \left( z_0 + B + \frac{1}{2z_0}(E^2 - (z_0 + B)^2 ) \right) \frac{2\pi A \ell}{z_0}.
\end{align}
Since $z_0 + B + \frac{1}{2z_0}\left(E^2 - (z_0+B)^2\right) = \frac{z_0^2 + E^2 - B^2}{2z_0}$, condition \eqref{eq:central periodic condition 2} can be rewritten as $z_\gamma = \pi A \ell \, \frac{z_0^2 + E^2 - B^2}{z_0^2}$. If $E = |B|$, this simplifies to $z_\gamma = \pi A \ell$. If $E \neq |B|$, then solving for $z_0$, we obtain the expression
\begin{align} \label{eq:central periodic condition 3}
z_0^2 = \frac{E^2 - B^2}{\frac{z_\gamma}{\pi A \ell} - 1}.
\end{align}
\begin{lem} \label{lem:effect of energy constraint on range of ells}
Let $\gamma = \exp(z_\gamma Z)$ be a central element of the Heisenberg group. For each nonzero energy level, the range of admissible integers $\ell$ and the corresponding choices of $z_0$ for which there exists a $\gamma$-periodic magnetic geodesic through the identity are given by the following table.
\begin{center}
{\renewcommand{\arraystretch}{2}
\begin{tabular}{l|l|l|l}
& & \multicolumn{1}{|c}{$\ell$} & \multicolumn{1}{|c}{$z_0$} \\ \hline
(1a) & $E > |B|$ & $1 < \frac{2E}{E + B} < \frac{z_\gamma}{\pi A \ell}$ & $-\sqrt{\frac{E^2 - B^2}{\frac{z_\gamma}{\pi A \ell} - 1}}$ \\ \hline
(1b) & $E > |B|$ & $1 < \frac{2E}{E - B} < \frac{z_\gamma}{\pi A \ell}$ & $+\sqrt{\frac{E^2 - B^2}{\frac{z_\gamma}{\pi A \ell} - 1}}$ \\ \hline
(2a) & $0 < E < B$ & $\frac{2E}{E-|B|} < \frac{z_\gamma}{\pi A \ell} < \frac{2E}{E+|B|} < 1$ & $-\sqrt{\frac{E^2 - B^2}{\frac{z_\gamma}{\pi A \ell} - 1}}$ \\ \hline
(2b) & $0 < E < -B$ & $\frac{2E}{E-|B|} < \frac{z_\gamma}{\pi A \ell} < \frac{2E}{E+|B|} < 1$ & $+\sqrt{\frac{E^2 - B^2}{\frac{z_\gamma}{\pi A \ell} - 1}}$ \\ \hline
(3a) & $E = B$ & $\ell = \frac{z_\gamma}{\pi A}$ & $-2B < z_0 < 0$ \\ \hline
(3b) & $E = -B$ & $\ell = \frac{z_\gamma}{\pi A}$ & $0 < z_0 < -2B$
\end{tabular}}
\end{center}
In all cases, the associated period is $\omega = 2\pi A\ell / z_0$ and one can choose any $u_0$ and $v_0$ such that $u_0^2 + v_0^2 = A(E^2 - (z_0 + B)^2)$.
\end{lem}
\begin{proof}
The condition $(z_0 + B)^2 < E^2$ is equivalent to
\begin{align} \label{eq:basic energy constraint}
-E-B < \pm \sqrt{\frac{E^2 - B^2}{\frac{z_\gamma}{\pi A \ell} - 1}} < E-B.
\end{align}
In case (1), $-E - B < 0$ and $E - B > 0$, so this leads to the two inequalities
\begin{align*}
-E-B < - \sqrt{\frac{E^2 - B^2}{\frac{z_\gamma}{\pi A \ell} - 1}} < 0, \hspace{2cm} 0 < \sqrt{\frac{E^2 - B^2}{\frac{z_\gamma}{\pi A \ell} - 1}} < E - B.
\end{align*}
After squaring both inequalities and isolating $z_\gamma/(\pi A \ell)$, these become
\begin{align*}
1 < \frac{2E}{E+B} < \frac{z_\gamma}{\pi A \ell} \hspace{2cm} 1 < \frac{2E}{E-B} < \frac{z_\gamma}{\pi A \ell}
\end{align*}
yielding cases (1a) and (1b), respectively. Notice that one of these ranges for $\ell$ is a subset of the other. We keep them separate as they affect the choice of sign for $z_0$.
In case (2), either $-B-E < -B+E < 0$ if $B > 0$, or $0 < -B-E < -B+E$ if $B < 0$. A computation similar to the one above leads to the inequalities
\begin{align*}
\frac{2E}{E-B} < \frac{z_\gamma}{\pi A \ell} < \frac{2E}{E + B} < 1, \hspace{1cm} \frac{2E}{E+B} < \frac{z_\gamma}{\pi A \ell} < \frac{2E}{E-B} < 1.
\end{align*}
Both of these ranges can be expressed simultaneously in terms of $|B|$ as in the Lemma statement. However, in case (2a), when $B> 0$, $z_0$ is chosen according to the negative branch, and vice versa in case (2b).
For case (3), it was noted above \eqref{eq:central periodic condition 3} that if $E = |B|$, then $z_\gamma = \pi A \ell$. Choose $z_0$ so that $(z_0 + B)^2 < E^2$. When $B>0$, this inequality is the same as $-2B < z_0 < 0$. Setting $\omega = 2\pi A \ell / z_0 = 2z_\gamma / z_0$, it is straightforward to check that $\frac{z_0}{A}(t + \omega) = \frac{z_0}{A}t + 2\pi\ell$, so that the conditions \eqref{eq:resonance condition} hold, and that \eqref{eq:central periodic condition 1} holds. The case when $B<0$ is handled similarly.
\end{proof}
\begin{rmk}
In case (2), the condition that $E< |B|$ ensures that $E^2 - B^2 < 0$, while the conditions on $\ell$ ensure that $z_\gamma/(\pi A \ell) - 1 < 0$. Hence the expression under the radical in $z_0$ will be positive.
\end{rmk}
\begin{rmk}
In every case, for each admissible $z_0$ there is a 1-parameter family of $\gamma$-periodic magnetic geodesics.
\end{rmk}
\begin{rmk} \label{rmk:sometimes critical case is empty}
The cases where $E = |B|$ are to be interpreted as follows. When $z_\gamma$ and $A$ are such that $z_\gamma / \pi A \in \bb{Z}$, then there exist $\gamma$-periodic magnetic geodesics with energy $E$ and $z_0$ as described in the table. Otherwise, the collection of such magnetic geodesics is empty.
\end{rmk}
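For instance (a numerical illustration with values chosen by us): take $A = 1$, $B = 3$, $E = 1$ and $z_\gamma = 20\pi$. Case (2a) requires $-1 < \frac{20}{\ell} < \frac{1}{2}$, i.e. $\ell > 40$ or $\ell < -20$. For $\ell = 41$ one obtains $z_0 = -\sqrt{8/(1 - 20/41)} \approx -3.95$, which indeed satisfies $(z_0 + B)^2 < E^2$, and any $u_0, v_0$ with $u_0^2 + v_0^2 = A(E^2 - (z_0+B)^2) \approx 0.09$ yield a $\gamma$-periodic magnetic geodesic of energy $1$.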
\subsection{Lengths of Closed Magnetic Geodesics}
We are now in a position to compute the lengths of closed magnetic geodesics on $\Gamma \backslash H$ in the free homotopy class of $\gamma \in \Gamma$. If $\Gamma < H$ is a cocompact discrete subgroup, and $\gamma \in \Gamma$, then the length of the corresponding closed magnetic geodesic on the compact quotient $\Gamma \backslash H$ will be
\begin{align} \label{eq:length of gamma-periodic geodesic}
\int_0^{|\omega|} |\sigma'(t)| dt = E|\omega|.
\end{align}
Previous results concerning the lengths of closed geodesics in the Riemannian case include \cite{gordon1986spectrum}, \cite{eberlein1994geometry}, \cite{gornetmast2000lengthspectrum}, \cite{gornetmast2003}. Unlike in the Riemannian case, magnetic geodesics cannot be reparameterized to have a different energy, so it is more natural to consider the collection of lengths of closed magnetic geodesics of a fixed energy. Let $L(\gamma; E)$ denote the set of distinct lengths of closed magnetic geodesics of energy $E$ in the free homotopy class of $\gamma$.
\begin{thm} \label{thm:periods of central elt in 3D Heisenberg}
Let $\Gamma < H$ be a cocompact discrete subgroup of the Heisenberg group $H$ and let $\gamma = \exp(V_\gamma + z_\gamma Z) \in \Gamma$.
\begin{itemize}
\item If $\gamma = e$ is the identity ($V_\gamma = 0$ and $z_\gamma = 0$), then
\begin{align} \label{eq:lengths in trivial free homotopy class}
\displaystyle L(e;E) = \begin{cases} \emptyset & \text{if } E \geq |B| \\ \left\{\frac{2\pi A}{\sqrt{\frac{B^2}{E^2} - 1}}\right\} & \text{if } 0 < E < |B| \end{cases}
\end{align}
\item If $\gamma$ is not central ($V_\gamma \neq 0$) then
\begin{align} \label{eq:lengths in noncentral free homotopy classes}
\displaystyle L(\gamma;E) = \begin{cases} \emptyset & \text{if } 0 < E \leq |B| \\ \left\{ \frac{|V_\gamma|}{\sqrt{1 - \frac{B^2}{E^2}}} \right\} & \text{if } E > |B| \end{cases}
\end{align}
\item If $\gamma$ is central ($V_\gamma = 0$ and $z_\gamma \neq 0$) then
\begin{align} \label{eq:lengths in central free homotopy classes}
& L(\gamma; E) = \\ & \begin{cases} \left\{ \frac{\sqrt{4 \pi A \ell (z_\gamma - \pi A \ell)}}{\sqrt{1 - \frac{B^2}{E^2}}} : \ell \in \bb{Z}, \frac{2E}{E + |B|} < \frac{z_\gamma}{\pi A \ell} \right\} \cup \left\{ |z_\gamma| \right\} & E > |B| \\ \left\{ \frac{\sqrt{4 \pi A \ell (\pi A \ell - z_\gamma)}}{\sqrt{\frac{B^2}{E^2} - 1}} : \ell \in \bb{Z}, \frac{2E}{E-|B|} < \frac{z_\gamma}{\pi A \ell} < \frac{2E}{E+|B|} \right\} \cup \left\{ |z_\gamma| \right\} & 0 < E < |B| \\ \left\{ \frac{2E |z_\gamma|}{|z_0|} : z_0 \in \bb{R}, (z_0 + B)^2 < E^2 \right\} \cup \left\{ |z_\gamma| \right\} & E = |B| \end{cases} \nonumber
\end{align}
\end{itemize}
\end{thm}
\begin{proof}
The case when $\gamma = e$ follows from Theorem \ref{thm:closed contractible magnetic geodesics}. The length of the closed magnetic geodesics obtained in that theorem is
\begin{align*}
E|\omega| = E \left| \frac{2\pi A}{-\sgn(B)\sqrt{B^2 - E^2}} \right| = \frac{2\pi AE}{\sqrt{B^2 - E^2}}
\end{align*}
The case when $\gamma = \exp(V_\gamma + z_\gamma Z)$ is not central follows from Theorem \ref{thm:gamma-periodic mag geods when gamma is not central}. The length of the closed magnetic geodesics obtained in that theorem is
\begin{align*}
E|\omega| = E \left| \frac{|V_\gamma|}{\sqrt{E^2 - B^2}} \right| = \frac{E|V_\gamma|}{\sqrt{E^2 - B^2}}.
\end{align*}
The case when $\gamma$ is central follows from Lemma \ref{lem:central elements translate only central 1-param subgroups} and Lemma \ref{lem:effect of energy constraint on range of ells}. In the former case, which applies to every energy, the length of the closed magnetic geodesic is
\begin{align*}
E|\omega| = E \left| \frac{z_\gamma}{\pm E} \right| = |z_\gamma|.
\end{align*}
In the latter case, when $E > |B|$ the lengths are
\begin{align*}
E|\omega| = E \left| \frac{2\pi A \ell}{z_0} \right| = 2\pi AE |\ell| \sqrt{\frac{\frac{z_\gamma}{\pi A \ell} - 1}{E^2 - B^2}} = \frac{2E \sqrt{\pi A \ell (z_\gamma - \pi A \ell)}}{\sqrt{E^2 - B^2}}
\end{align*}
and when $E < |B|$ the lengths are
\begin{align*}
E|\omega| = E \left| \frac{2\pi A \ell}{z_0} \right| = 2\pi AE |\ell| \sqrt{\frac{1 - \frac{z_\gamma}{\pi A \ell}}{B^2 - E^2}} = \frac{2E \sqrt{\pi A \ell (\pi A \ell - z_\gamma)}}{\sqrt{B^2 - E^2}}.
\end{align*}
The lengths when $E = |B|$ depend not on $\ell$ (which must be $\ell = z_\gamma / (\pi A)$) but instead on $z_0$ and are given by
\begin{align*}
E|\omega| = E\left| \frac{2\pi A \ell}{z_0} \right| = E\left| \frac{2 z_\gamma }{z_0} \right|.
\end{align*}
\end{proof}
\begin{rmk}
As $E \to \infty$ or $B \to 0$, the denominator $\sqrt{1 - B^2/E^2} \to 1$. Roughly speaking, the cases $E \leq |B|$ will be eliminated, and the collection of lengths in the case $E > |B|$ will approach the length spectrum in the Riemannian case, which was computed in \cite{gornetmast2000lengthspectrum}. This reflects the following physical intuition: when the magnetic field is very weak charged particles will behave more like they would in the absence of any forces, and when a particle is very energetic the magnetic field will have less of an effect on its trajectory.
\end{rmk}
\begin{rmk}
The dynamics of the magnetic flow on the various energy levels splits roughly into three regimes:
\begin{itemize}
\item For fixed energy levels $E > |B|$, there exist closed magnetic geodesics in every free homotopy class and the set of their lengths is finite. This reflects the paradigm that the dynamics on high energy levels resembles that of the underlying geodesic flow.
\item For fixed energy levels $E<|B|$, there exist free homotopy classes without any closed magnetic geodesics, and in the case that there are closed magnetic geodesics, the set of their lengths is countably infinite.
\item Finally, when $E = |B|$, $\gamma$ is central, and $z_\gamma \in \pi A \bb{Z}$ (i.e. the set of lengths is nonempty), then the infinite set of lengths is not discrete.
\end{itemize}
\end{rmk}
The following three lemmas address bounds on the collection of lengths of closed magnetic geodesics in a given central free homotopy class.
\begin{lem} \label{lem:upper bound for supercritical lengths}
Consider the case $|B|<E$ in \eqref{eq:lengths in central free homotopy classes}. The set
\begin{align*}
\left\{ \frac{\sqrt{4 \pi A \ell (z_\gamma - \pi A \ell)}}{\sqrt{1 - \frac{B^2}{E^2}}} : \ell \in \bb{Z}, \frac{2E}{E + |B|} < \frac{z_\gamma}{\pi A \ell} \right\}
\end{align*}
is bounded above by $|z_\gamma|/\sqrt{1-B^2/E^2},$ which is larger than $|z_\gamma|.$ The example below shows that this upper bound is the best possible.
\end{lem}
\begin{proof}
Without loss of generality, we assume $z_\gamma>0.$ The condition on $\ell$ implies $0<\ell<\frac{z_\gamma}{2\pi A}\left(1+\frac{|B|}{E}\right). $ We define
\begin{align*}
\lambda(\ell)=\frac{4 \pi A \ell (z_\gamma - \pi A \ell)}{\left(1 - \frac{B^2}{E^2}\right)}.
\end{align*}
The parabola $\lambda(\ell)$ opens downward and has zeroes at $\ell=0$ and $\ell=z_\gamma/\pi A,$ hence achieves a maximum of $z^2_\gamma/\left(1-B^2/E^2\right)$ at $\ell=z_\gamma/2\pi A.$ See the example below for values of $A,B,z_\gamma$ such that this maximum is achieved. The result follows.
\end{proof}
\begin{ex} \label{ex:maximal length need not be straight line}
Consider the particular example where $A = 1$, $B = 1$ and $E=2$. Choose the central element $\gamma = \exp(20\pi Z)$ so that $z_\gamma = 20\pi$. In this case, for each $\ell$ such that $0 < \ell < 15$, there is a closed magnetic geodesic with length given by \eqref{eq:lengths in central free homotopy classes}. In particular, when $\ell = 10$ the corresponding length is $(2/\sqrt{3})20 \pi > z_\gamma$.
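Explicitly, substituting $A = 1$, $B = 1$, $E = 2$, $z_\gamma = 20\pi$ and $\ell = 10$ into \eqref{eq:lengths in central free homotopy classes} gives
\begin{align*}
\frac{\sqrt{4\pi A \ell\,(z_\gamma - \pi A \ell)}}{\sqrt{1 - B^2/E^2}} = \frac{\sqrt{4\pi \cdot 10\,(20\pi - 10\pi)}}{\sqrt{3}/2} = \frac{2}{\sqrt{3}}\,20\pi,
\end{align*}
which is exactly the upper bound $|z_\gamma|/\sqrt{1 - B^2/E^2}$ of Lemma \ref{lem:upper bound for supercritical lengths}.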
\end{ex}
\begin{rmk} \label{rmk:maximal mag length spec not well behaved}
In the setting of Riemannian two-step nilmanifolds, the maximal length of a closed magnetic geodesic in a central free homotopy class is the length of the central geodesic. In fact, the maximal length spectrum determines the length spectrum for central free homotopy classes (see Proposition 5.15 of \cite{eberlein1994geometry}). Example \ref{ex:maximal length need not be straight line} shows that this is no longer true in the magnetic setting.
\end{rmk}
\begin{lem}
Consider the case $|B|>E$ in \eqref{eq:lengths in central free homotopy classes}. The set
\begin{align}
\left\{ \frac{\sqrt{4 \pi A \ell (\pi A \ell - z_\gamma)}}{\sqrt{\frac{B^2}{E^2} - 1}} : \ell \in \bb{Z}, \frac{2E}{E-|B|} < \frac{z_\gamma}{\pi A \ell} < \frac{2E}{E+|B|} \right\}
\end{align}
is bounded below by $|z_\gamma|$.
\end{lem}
\begin{proof}
Without loss of generality, we assume $z_\gamma>0.$
We define $$\lambda(\ell)=\frac{4 \pi A \ell (\pi A \ell-z_\gamma)}{\left(\frac{B^2}{E^2}-1\right)}.$$
The parabola $\lambda(\ell)$ opens upward and has zeroes at $\ell=0$ and $\ell=z_\gamma/\pi A.$
The condition on $\ell$ implies $\ell>\frac{z_\gamma}{2\pi A}\left(1+\frac{|B|}{E}\right)>\frac{z_\gamma}{\pi A}$ or $\ell<\frac{z_\gamma}{2\pi A}\left(1-\frac{|B|}{E}\right)<0.$ A lower bound of the set is thus provided by the minimum of
$\sqrt{\lambda\left(\frac{z_\gamma}{2\pi A}\left(1+\frac{|B|}{E}\right)\right)}$
and $\sqrt{\lambda\left(\frac{z_\gamma}{2\pi A}\left(1-\frac{|B|}{E}\right)\right)}.$
However, both of these evaluate to $z_\gamma,$ and the result follows.
\end{proof}
\begin{lem}
Consider the case $|B|=E$ in \eqref{eq:lengths in central free homotopy classes}. If the set
\begin{align*}
\left\{ \frac{2E |z_\gamma|}{|z_0|} : z_0 \in \bb{R}, (z_0 + B)^2 < E^2 \right\}
\end{align*}
is nonempty (see Remark \ref{rmk:sometimes critical case is empty}), then it is unbounded above and has an infimum of $|z_\gamma|.$
\end{lem}
\begin{proof}
If $B > 0$, then $z_0$ can be chosen in the interval $-2B < z_0 < 0$. As $z_0 \to 0^-$, the length diverges to infinity, and as $z_0 \to (-2B)^+$ the lengths converge to $|z_\gamma|$. The case when $B<0$ is analogous.
\end{proof}
\subsection{Density of Closed Magnetic Geodesics}
Given a Riemannian manifold $M$, define $S^E M=\{V \in TM: |V| = E\}$ and let $S_x^E M$ denote the tangent sphere of radius $E$ at the point $x$. Given a vector $V \in TM$, let $\sigma_V$ denote the magnetic geodesic such that $\sigma_V'(0) = V$. We are interested in the size of the set of vectors that determine periodic magnetic geodesics. In the Riemannian case, this set is scale invariant: if $V$ determines a periodic geodesic, then so does $cV$ for any $c \neq 0$, so it is natural in that case to restrict attention to unit vectors. However, this property does not hold for magnetic geodesics. Therefore, in the following definition we include a dependence on the energy of the vectors.
\begin{align} \label{eq:defn of set of periodic vectors}
\Per^E(M) := \{ V \in S^E M \ : \ \sigma_V \text{ is periodic} \} \subset S^EM.
\end{align}
In the context of Riemannian two-step nilmanifolds, the density of this set was first investigated in \cite{eberlein1994geometry}, and subsequently in \cite{mast1994closedgeods}, \cite{mast1997resonance}, \cite{leepark1996density}, \cite{demeyer2001compactnilmanifolds}, \cite{decoste2008chevalleyratstructs}. The following result shows that for magnetic flows on the Heisenberg group, density persists for sufficiently high energy.
\begin{thm} \label{thm:density of closed mag geods}
For each $E > |B|$, $\Per^E(\Gamma \backslash H)$ is dense in $S^E(\Gamma \backslash H)$.
\end{thm}
\begin{proof}
We begin with a series of reductions. First, it suffices to show that the set of $V \in S^E(H)$ such that $\sigma_V$ is $\gamma$-periodic for some $\gamma \in \Gamma$ is dense in $S^E(H)$. Indeed, let $\pi$ denote the differential of the quotient map $H \to \Gamma \backslash H$. For any $V \in S^E(\Gamma \backslash H)$, let $W \in \pi^{-1}(V)$ and let $\{ W_i \} \subset S^E(H)$ be such that $\sigma_{W_i}$ is $\gamma_i$-periodic for some $\gamma_i \in \Gamma$ and $W_i \to W$. Then $\{ V_i = \pi(W_i) \} \subset S^E(\Gamma \backslash H)$ is a sequence of tangent vectors such that $\sigma_{V_i}$ is periodic and $V_i \to V$.
Next, we claim that it suffices to show that the set of $W \in S^E_e H$ such that $\sigma_W$ is $\gamma$-periodic for some $\gamma \in Z(\Gamma)$ is dense in $S^E_e H$. For if $\sigma_W$ is such a magnetic geodesic and $\phi \in H$ is any element, then $\phi \cdot \sigma_W(t)$ is a $(\phi \gamma \phi^{-1})$-periodic magnetic geodesic satisfying $\phi \cdot \sigma_W(0) = \phi$ and $(\phi \cdot \sigma_W)'(0) = L_{\phi *}(W)$. Because $\gamma$ is central, $\phi \gamma \phi^{-1} = \gamma$ and $\phi \cdot \sigma_W$ is a $\gamma$-periodic magnetic geodesic. Because $L_{\phi *} : S_e^E(H) \to S_\phi^E(H)$ is a diffeomorphism, this proves the claim.
Lastly, we claim that it suffices to show that the set of $z_0 \in [-B-E, -B+E]$ chosen according to cases (1a) and (1b) in Lemma \ref{lem:effect of energy constraint on range of ells} (for some choice of $\gamma \in Z(\Gamma)$) is dense in $[-B-E, -B+E]$. As noted in Lemma \ref{lem:effect of energy constraint on range of ells}, for any such $z_0$ there is a one parameter family of $\gamma$-periodic magnetic geodesics given by any choice of $u_0, v_0$ such that $u_0^2 + v_0^2 = A(E^2 - (z_0 + B)^2)$. Hence if the resulting $z_0$ are dense in $[-B-E, -B+E]$, then there is a dense set of latitudes in the ellipsoid $\{ (u_0, v_0, z_0) \in \bb{R}^3 : \frac{u_0^2 + v_0^2}{A} + (z_0 + B)^2 = E^2 \}$ such that those vectors yield $\gamma$-periodic magnetic geodesics for some $\gamma \in \Gamma$. The initial conditions $(u_0, v_0, z_0) \in \bb{R}^3$ determine the magnetic geodesic $\sigma_V$ where $V = (u_0/A)X + (v_0/A)Y + (z_0 + B)Z$, showing that the set of $V \in S^E_e H$ tangent to $\gamma$-periodic magnetic geodesics ($\gamma \in \Gamma$) is dense in $S^E_e H$.
By Proposition 5.4 of \cite{eberlein1994geometry}, $\Gamma \cap Z(H) = Z(\Gamma)$ is a lattice in $Z(H)$. Hence there exists $\bar{z} \in \bb{R}^*$ such that $\Gamma \cap Z(H) = \{ \exp(h\bar{z}Z) \ : \ h \in \bb{Z} \}$. By replacing $\bar{z}$ with $-\bar{z}$, if necessary, we can assume that $\bar{z} > 0$. Consider the set of numbers
\begin{align*}
\left\{ \frac{h}{\ell} \ : \ h, \ell \in \bb{Z}^+ \text{ and } \left( \frac{2\pi AE}{\bar{z}(E + B)} \right) \ell < h \right\}.
\end{align*}
This set is dense in the interval $(2\pi AE/(\bar{z}(E + B)), \infty)$. Via a sequence of continuous mappings of $\bb{R}$, each of which preserves density, \begin{align*}
\left\{ - \sqrt{\frac{E^2 - B^2}{\frac{h\bar{z}}{\pi A \ell} - 1}} \ : \ h, \ell \in \bb{Z}^+ \text{ and } \frac{2 E}{E + B} < \frac{h\bar{z}}{\pi A \ell} \right\}
\end{align*}
is dense in the interval $(-E-B,0)$. These are precisely the values for $z_0$ appearing in case (1a) of Lemma \ref{lem:effect of energy constraint on range of ells}. Starting instead with the set
\begin{align*}
\left\{ \frac{h}{\ell} \ : \ h, \ell \in \bb{Z}^+ \text{ and } \left( \frac{2\pi AE}{\bar{z}(E - B)} \right) \ell < h \right\}
\end{align*}
and using a parallel sequence of transformations shows that
\begin{align*}
\left\{ \sqrt{\frac{E^2 - B^2}{\frac{h\bar{z}}{\pi A \ell} - 1}} \ : \ h, \ell \in \bb{Z}^+ \text{ and } \frac{2 E}{E - B} < \frac{h\bar{z}}{\pi A \ell} \right\}
\end{align*}
is dense in $(0, E - B)$. These numbers are the $z_0$ appearing in case (1b) of Lemma \ref{lem:effect of energy constraint on range of ells}. This shows the density of permissible $z_0$ in the interval $[-E-B, E-B]$ and hence proves the theorem.
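For concreteness, one admissible chain of continuous, density-preserving maps behind the case (1a) computation above is the following (this is only one possible choice; any equivalent chain works). Writing $t = h/\ell$, which ranges over a dense subset of $(2\pi AE/(\bar{z}(E+B)), \infty)$, apply successively
\begin{align*}
t \ \mapsto \ \frac{\bar{z}t}{\pi A} \ \mapsto \ \frac{\bar{z}t}{\pi A} - 1 \ \mapsto \ \frac{E^2 - B^2}{\frac{\bar{z}t}{\pi A} - 1} \ \mapsto \ -\sqrt{\frac{E^2 - B^2}{\frac{\bar{z}t}{\pi A} - 1}},
\end{align*}
whose images are dense in $(2E/(E+B), \infty)$, $((E-B)/(E+B), \infty)$, $(0, (E+B)^2)$, and $(-E-B, 0)$, respectively; each map is a homeomorphism onto its image, so density is preserved at every step. The case (1b) computation is parallel, with $E+B$ replaced by $E-B$ and without the final sign change.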
\end{proof}
\subsection{Rigidity and the Marked Magnetic Length Spectrum}
\label{sec:rigidity}
We begin by recalling the notion of marked length spectrum for a compact Riemannian manifold $M$. For each nontrivial free homotopy class $\mc{C}$, there exists at least one smoothly closed Riemannian geodesic. Let $L(\mc{C})$ denote the collection of all lengths of smooth closed geodesics that belong to $\mc{C}$. Recall that free homotopy classes of closed curves on $M$ are in bijection with conjugacy classes of $\pi_1(M)$. If $\bar{M}$ is another compact Riemannian manifold and $\phi : \pi_1(M) \to \pi_1(\bar{M})$ is an isomorphism, then $\phi$ maps conjugacy classes of $\pi_1(M)$ bijectively onto conjugacy classes of $\pi_1(\bar{M})$. Hence $\phi$ induces a bijection $\phi_*$ of the set of free homotopy classes of closed curves on $M$ onto the set of free homotopy classes of closed curves on $\bar{M}$. Two compact Riemannian manifolds $M$ and $\bar{M}$ are said to have the \emph{same marked length spectrum} if there exists an isomorphism $\phi : \pi_1(M) \to \pi_1(\bar{M})$ such that $L(\phi_* \mc{C}) = L(\mc{C})$ for all nontrivial free homotopy classes of closed curves on $M$. Specializing to the case at hand, let $G_1$ and $G_2$ be two simply connected 2-step nilpotent Lie groups and $\Gamma_1 < G_1$ and $\Gamma_2 < G_2$ cocompact discrete subgroups. Then $\pi_1(\Gamma_i \backslash G_i) \simeq \Gamma_i$. With these identifications, we say the nilmanifolds $\Gamma_1 \backslash G_1$ and $\Gamma_2 \backslash G_2$ have the same marked length spectrum if there is an isomorphism $\phi: \Gamma_1 \to \Gamma_2$ such that $L(\phi_* \mc{C}) = L(\mc{C})$ for all nontrivial free homotopy classes of closed curves on $\Gamma_1 \backslash G_1$. See \cite{eberlein1994geometry} and \cite{gornetmast2004} for previous results on marked length spectrum rigidity of Riemannian two-step nilmanifolds.
While the above definition could be used in the context of magnetic flows on nilmanifolds, it seems more natural to modify it in light of the dependence of the dynamics on the relative magnitudes of $E$ and $|B|$. For a fixed homotopy class $\mc{C}$, the collection of lengths of closed magnetic geodesics over all energies could be an infinite open interval. Therefore, let $L(\mc{C}; E)$ denote the collection of all lengths of smoothly closed magnetic geodesics that belong to $\mc{C}$ and have energy $E$. By Theorem \ref{thm:periods of central elt in 3D Heisenberg}, $L$ is not well-defined for $E \leq |B|$. In order to avoid this, we only define $L$ for $E > |B|$.
\begin{defn}
Let $H$ be the simply connected three-dimensional Heisenberg group. Let $g_1$ and $g_2$ be two left-invariant Riemannian metrics on $H$ with parameters $A_1$ and $A_2$. Let $\Omega_1$ and $\Omega_2$ be two left-invariant magnetic forms on $H$ with parameters $B_1$ and $B_2$. Let $\Gamma_1, \Gamma_2 < H$ be two cocompact discrete subgroups, and $\phi: \Gamma_1 \to \Gamma_2$ an isomorphism. The nilmanifolds $\Gamma_1 \backslash H$ and $\Gamma_2 \backslash H$ with corresponding magnetic structures are said to have the \emph{same marked magnetic length spectrum} if $L(\phi_* \mc{C}; E_2) = L(\mc{C}; E_1)$ for some $E_1 > |B_1|$ and some $E_2 > |B_2|$ for each nontrivial free homotopy class $\mc{C}$.
\end{defn}
Even though the magnetic flow is a perturbation away from the underlying geodesic flow, it reflects enough of the underlying Riemannian geometry to exhibit a degree of geometric rigidity.
\begin{thm} \label{thm:MLS rigidity}
Let $H$ be the simply connected, three-dimensional Heisenberg group endowed with left-invariant Riemannian metric $g$ and left-invariant magnetic form $\Omega$, with corresponding parameters $A$ and $B$ respectively. Let $\Gamma_1, \Gamma_2 < H$ be two cocompact lattices. Suppose that for some $E > |B|$, the two manifolds $\Gamma_1 \backslash H$ and $\Gamma_2 \backslash H$ have the same marked magnetic length spectrum at energy $E$. Then $\Gamma_1 \backslash H$ and $\Gamma_2\backslash H$ are isometric.
\end{thm}
The proof of Theorem \ref{thm:MLS rigidity} is similar to the proof of Theorem 5.20 in \cite{eberlein1994geometry}, with one notable exception. The latter uses the maximal marked length spectrum, i.e. only the length of the longest closed geodesic in each free homotopy class. For Riemannian geodesics in central free homotopy classes (on two-step nilpotent Lie groups), this is always the length of the one-parameter subgroup. Example \ref{ex:maximal length need not be straight line} and Remark \ref{rmk:maximal mag length spec not well behaved} show that the maximal magnetic marked length spectrum is not so well behaved. To circumvent this, we consider all the lengths of closed magnetic geodesics in central free homotopy classes. This argument is given in the following lemma.
\begin{lem} \label{lem:rigidity of central lattice}
Under the same hypotheses as Theorem \ref{thm:MLS rigidity}, let $\exp(\bar{z}_1 Z)$ and $\exp(\bar{z}_2 Z)$ be generators for the central lattices $\Gamma_1 \cap Z(H)$ and $\Gamma_2 \cap Z(H)$, respectively. Then $|\bar{z}_1| = |\bar{z}_2|$.
\end{lem}
\begin{proof}
First, we claim that
\begin{align} \label{eq:normalized lengths of closed mag geods in central free homotopy classes}
\sup_{h \in \bb{Z}} \left\{ \frac{\max(L([\exp(h\bar{z}_1 Z)]_1; E))}{|h|} \right\} = \sup_{h \in \bb{Z}} \left\{ \frac{\max(L([\exp(h\bar{z}_2 Z)]_2; E))}{|h|} \right\}
\end{align}
where $[\gamma]_i$ denotes the free homotopy class of closed curves on $\Gamma_i \backslash H$ determined by $\gamma \in \Gamma_i$. Let $\phi: \Gamma_1 \to \Gamma_2$ be an isomorphism. Since $\phi$ is an isomorphism of $Z(\Gamma_1)$ onto $Z(\Gamma_2)$, $\phi(\exp(h\bar{z}_1 Z)) = \exp(\pm h \bar{z}_2 Z)$, and so $\phi_*[\exp(h\bar{z}_1 Z)]_1 = [\exp(\pm h \bar{z}_2 Z)]_2$. By hypothesis, the sets of lengths of closed magnetic geodesics in these two classes are equal. Moreover, the positive integer $|h|$ is the same for both free homotopy classes. Hence the sets over which the supremums are taken are equal.
Next we evaluate the supremums in \eqref{eq:normalized lengths of closed mag geods in central free homotopy classes}. By Lemma \ref{lem:upper bound for supercritical lengths}, the set of lengths of smoothly closed magnetic geodesics in the free homotopy class determined by an element of the form $\exp(h\bar{z}_i Z)$ is bounded above by $|h\bar{z}_i|/\sqrt{1 - B^2 / E^2}$. After dividing all the lengths in each set by $|h|$, respectively, we obtain a uniform upper bound,
\begin{align} \label{eq:uniform upper bound for normalized lengths}
\sup_{h \in \bb{Z}} \left\{ \frac{\sqrt{4 \pi A \ell (h\bar{z}_i - \pi A \ell)}}{|h| \sqrt{1 - \frac{B^2}{E^2}}} : \ell \in \bb{Z}, \ \frac{2E}{E + |B|} < \frac{h\bar{z}_i}{\pi A \ell} \right\} \leq \frac{|\bar{z}_i|}{\sqrt{1 - \frac{B^2}{E^2}}}.
\end{align}
We now claim that the inequality in \eqref{eq:uniform upper bound for normalized lengths} is actually an equality. If the quantity $\bar{z}_i/(2\pi A) \in \bb{Q}$, then for suitable $h$ the quantity $(h\bar{z}_i)/(2\pi A)$ is an allowable integer value of $\ell$, and $\max_{\ell}(\sqrt{4 \pi A \ell (h\bar{z}_i - \pi A \ell)}) = |h\bar{z}_i|$. If the quantity $\bar{z}_i/(2\pi A) \notin \bb{Q}$, then the numbers $(h\bar{z}_i)/(2\pi A)$ come arbitrarily close to an integer. In either case, the supremum is $|\bar{z}_i|/\sqrt{1 - B^2/E^2}$. The lemma now follows.
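For instance, in the rational case the equality can be seen concretely: if $\ell = (h\bar{z}_i)/(2\pi A)$ is an allowable integer value of $\ell$ (as in the argument above), then $\pi A \ell = h\bar{z}_i/2$, so
\begin{align*}
\sqrt{4 \pi A \ell\,(h\bar{z}_i - \pi A \ell)} = \sqrt{2h\bar{z}_i \cdot \tfrac{h\bar{z}_i}{2}} = |h\bar{z}_i|,
\end{align*}
and the corresponding normalized length is exactly $|\bar{z}_i|/\sqrt{1 - B^2/E^2}$.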
\end{proof}
We now proceed with the proof of Theorem \ref{thm:MLS rigidity}.
\begin{proof}
First, extend the marking $\phi : \Gamma_1 \to \Gamma_2$ to an automorphism $\phi: H \to H$. Because $\phi_*$ is a Lie algebra automorphism of $\mf{h} = \mf{v} \oplus \mf{z}$, we can decompose it as $\phi_* = R_1 + R_2 + S$, where $R_1 : \mf{v} \to \mf{v}$, $R_2 : \mf{v} \to \mf{z}$, and $S : \mf{z} \to \mf{z}$ are linear maps. Using Lemma \ref{lem:rigidity of central lattice},
\begin{align*}
|S(\bar{z}_1Z)| = |\pm \bar{z}_2 Z| = |\bar{z}_2| = |\bar{z}_1| = |\bar{z}_1 Z|
\end{align*}
shows that $S$ is an isometry of $\mf{z}$. Let $\pi_\mf{v}:\mf{h} \to \mf{v}$ denote the projection. For any $V \in \pi_\mf{v} \log \Gamma_1$, there is some $\xi \in \log \Gamma_1$ such that $\xi = V + Z$ and $Z \in \mf{z}$. By hypothesis, $\exp(\xi)$ and
\begin{align*}
\phi(\exp(V+Z)) = \phi(\exp(\xi)) = \exp(\phi_* \xi) = \exp( R_1(V) + R_2(V) + S(Z))
\end{align*}
have the same lengths of closed magnetic geodesics. By \eqref{eq:lengths in noncentral free homotopy classes}, we have
\begin{align*}
\frac{|V|}{\sqrt{1 - \frac{B^2}{E^2}}} = \frac{|R_1(V)|}{\sqrt{1 - \frac{B^2}{E^2}}}
\end{align*}
and we conclude that $R_1$ is an isometry of $\mf{v}$. It is straightforward to check that $R_1 + S : \mf{h} \to \mf{h}$ is an isometric Lie algebra isomorphism. Let $\phi_1 : H \to H$ be the Lie group isomorphism such that $(\phi_1)_* = R_1 + S$. Define $T : \mf{h} \to \mf{h}$ by $T(V + Z) = V + Z + (S^{-1} \circ R_2)(V)$. One can verify directly that $T$ is an inner automorphism of $\mf{h}$ and $(\phi_1)_* \circ T = \phi_*$. Let $\phi_2$ be the inner automorphism of $H$ such that $(\phi_2)_* = T$.
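For the reader's convenience, the identity $(\phi_1)_* \circ T = \phi_*$ is the one-line computation
\begin{align*}
(\phi_1)_*\,T(V + Z) = (R_1 + S)\bigl(V + Z + (S^{-1} \circ R_2)(V)\bigr) = R_1(V) + S(Z) + R_2(V) = \phi_*(V + Z),
\end{align*}
valid for all $V \in \mf{v}$ and $Z \in \mf{z}$.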
Now we have that $\phi = \phi_1 \circ \phi_2$, where $\phi_1$ is an isometric automorphism of $H$ and $\phi_2$ is an inner automorphism of $H$. Because $\phi_2$ is an inner automorphism, $\Gamma_1 \backslash H$ is isometric to $\phi_2(\Gamma_1) \backslash H$ (via a left-translation), and $\phi_2(\Gamma_1) \backslash H$ is isometric to $\phi_1(\phi_2(\Gamma_1)) \backslash H = \phi(\Gamma_1) \backslash H = \Gamma_2 \backslash H$.
\end{proof}
\begin{rmk}
For $E < |B|$, the set of lengths in any noncentral free homotopy class is empty. Hence the marked magnetic length spectrum at such energies carries no information about the noncentral free homotopy classes and cannot be expected to determine the isometry class of the quotient; this is one reason the definition above is restricted to energies exceeding $|B|$.
\end{rmk}
\begin{rmk}
The isometry $\Gamma_1 \backslash H \to \Gamma_2 \backslash H$ preserves the magnetic form $\Omega = d(B \zeta)$ up to sign. Since the isometry is realized by $\phi_1 \circ L_x$ for some $x \in H$,
\begin{align*}
(\phi_1 \circ L_x)^* \zeta = L_x^* (\phi_1^* \zeta) = L_x^*(\pm \zeta) = \pm \zeta.
\end{align*}
\end{rmk}
\section{Heisenberg Type Manifolds} \label{sec:HT manifolds}
Heisenberg type manifolds are Riemannian manifolds that generalize the Heisenberg group endowed with a left-invariant metric. A metric two-step nilpotent Lie algebra $\mf{h}$ is of \emph{Heisenberg type} if
\begin{align*}
j(Z)^2 = -|Z|^2 I_{\mf{v}}
\end{align*}
for every $Z \in \mf{z}$ (see \eqref{eq:j-map def}). A simply connected two-step nilpotent Lie group with left-invariant metric is of Heisenberg type if its metric Lie algebra is of Heisenberg type. It is often the case that theorems concerning the Heisenberg group endowed with a left-invariant metric, or their analogous formulations, are also true for Heisenberg type manifolds. For an example, see the results of \cite{gornetmast2000lengthspectrum}. This paradigm does not appear to extend to the setting of Heisenberg type manifolds endowed with a left-invariant magnetic field. In this section, we show by way of a simple example that the computation of the lengths of closed magnetic geodesics becomes significantly more complex for Heisenberg type manifolds.
Let $\mf{h} = \myspan \{ X_1, \ldots, X_4, Z_1, Z_2 \}$ and define a bracket structure by
\begin{align*}
[X_1, X_2] = Z_1 \qquad [X_1, X_3] = Z_2 \qquad [X_2, X_4] = -Z_2 \qquad [X_3, X_4] = Z_1
\end{align*}
extended by bilinearity and skew-symmetry to all of $\mf{h}$. Define the metric on $\mf{h}$ by declaring $\{ X_1, X_2,X_3, \allowbreak X_4, Z_1, Z_2 \}$ to be an orthonormal basis. It is straightforward to check that $\mf{h}$ is of Heisenberg type with two-dimensional center $\mf{z} = \myspan\{ Z_1, Z_2 \}$. Let $\{ \alpha_1, \alpha_2,\alpha_3, \alpha_4, \zeta_1, \allowbreak \zeta_2 \}$ be the basis of $\mf{h}^*$ dual to $\{ X_1, X_2,X_3, X_4, Z_1, Z_2 \}$. Let $H$ be the simply connected Lie group with left-invariant metric with metric Lie algebra $\mf{h}$. As in Lemma \ref{lem:every exact left-invariant 2-form is d of something central}, any exact, left-invariant 2-form is of the form $\Omega = d(\zeta_m)$ for some $\zeta_m \in \mf{z}^*$. For simplicity we take as magnetic field $\Omega = d(B\zeta_1)$. For each $\gamma \in H$, we wish to understand the $\gamma$-periodic geodesics and the associated periods.
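The Heisenberg type condition asserted above can be verified directly. Using the characterization $g(j(Z)V, W) = g(Z, [V,W])$ for $V, W \in \mf{v}$ (as in \eqref{eq:j-map def}; the conclusion is insensitive to the sign convention), one finds for $Z = z_1 Z_1 + z_2 Z_2$ that
\begin{align*}
j(Z)X_1 &= z_1 X_2 + z_2 X_3, & j(Z)X_2 &= -z_1 X_1 - z_2 X_4, \\
j(Z)X_3 &= -z_2 X_1 + z_1 X_4, & j(Z)X_4 &= z_2 X_2 - z_1 X_3,
\end{align*}
and a short computation then gives $j(Z)^2 X_i = -(z_1^2 + z_2^2) X_i = -|Z|^2 X_i$ for $i = 1, \ldots, 4$.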
Proceeding as in the Heisenberg case in Section \ref{sec:compact quotients}, let $p(t) = \sum a_i(t) \alpha_i + \sum c_i(t) \zeta_i$ be the integral curve of the Euler vector field on $\mf{h}^*$ with initial condition $p(0) = \sum u_i \alpha_i + \sum z_i \zeta_i$. Then the component functions are
\begin{align*}
a_1(t) &= u_1 \cos(\hat{z}t) + \left( \frac{-z_1 u_2 - z_2 u_3}{\hat{z}} \right) \sin(\hat{z}t) \\
a_2(t) &= u_2 \cos(\hat{z}t) + \left( \frac{z_1 u_1 + z_2 u_4}{\hat{z}} \right) \sin(\hat{z}t) \\
a_3(t) &= u_3 \cos(\hat{z}t) + \left( \frac{z_2 u_1 - z_1 u_4}{\hat{z}} \right) \sin(\hat{z}t) \\
a_4(t) &= u_4 \cos(\hat{z}t) + \left( \frac{-z_2 u_2 + z_1 u_3}{\hat{z}} \right) \sin(\hat{z}t) \\
c_1(t) &= z_1 \\
c_2(t) &= z_2
\end{align*}
where $\hat{z} = \sqrt{z_1^2 + z_2^2}$. Next, let $\sigma(t)$ be the magnetic geodesic through the identity determined by the integral curve $p(t)$. Hence $\sigma(t)$ solves $\sigma'(t) = dh_{p(t)}$, where $h : \mf{h}^* \to \bb{R}$ is the Hamiltonian (see \eqref{eq:reduced magnetic Hamiltonian}). Writing $\sigma(t) = \Exp({\bf X}(t) + {\bf Z}(t))$, where ${\bf X}(t) = \sum x_i(t) X_i$ and ${\bf Z}(t) = \sum z_i(t) Z_i$, we have on the one hand
under trivialization by left-multiplication,
\begin{align*}
\sigma'(t) = {\bf X}'(t) + {\bf Z}'(t) + \frac{1}{2}[{\bf X}'(t), {\bf X}(t)] = \sum x_i'(t) X_i + {\bf Z}'(t) + \frac{1}{2}[{\bf X}'(t), {\bf X}(t)],
\end{align*}
and on the other hand
\begin{align*}
dh_{p(t)} &= \sharp(p(t) + B\zeta_1) = \sum a_i(t)X_i + (z_1 + B)Z_1 + z_2 Z_2.
\end{align*}
Matching up the non-central components shows that
\begin{align*}
x_1(t) &= \frac{u_1}{\hat{z}} \sin(\hat{z}t) + \frac{- z_1 u_2 - z_2 u_3}{\hat{z}^2}\left( 1 - \cos(\hat{z}t) \right) \\
x_2(t) &= \frac{u_2}{\hat{z}} \sin(\hat{z}t) + \frac{z_1 u_1 + z_2 u_4}{\hat{z}^2} \left( 1 - \cos(\hat{z}t) \right)\\
x_3(t) &= \frac{u_3}{\hat{z}} \sin(\hat{z}t) + \frac{z_2 u_1 - z_1 u_4}{\hat{z}^2}\left( 1 - \cos(\hat{z}t) \right) \\
x_4(t) &= \frac{u_4}{\hat{z}} \sin(\hat{z}t) + \frac{- z_2 u_2 + z_1 u_3}{\hat{z}^2}\left( 1 - \cos(\hat{z}t) \right)
\end{align*}
while the central component satisfies
\begin{align*}
{\bf Z}'(t) = (z_1 + B)Z_1 + z_2 Z_2 - \frac{1}{2}[{\bf X}'(t), {\bf X}(t)].
\end{align*}
A tedious computation shows
\begin{align*}
\left[{\bf X}'(t), {\bf X}(t)\right] &= \left( - \frac{z_1 \hat{u}^2}{\hat{z}^2} + \frac{z_1 \hat{u}^2}{\hat{z}^2} \cos(\hat{z}t) \right)Z_1 + \left( - \frac{z_2 \hat{u}^2}{\hat{z}^2} + \frac{z_2 \hat{u}^2}{\hat{z}^2} \cos(\hat{z}t) \right)Z_2 \\
&= - \frac{z_1 \hat{u}^2}{\hat{z}^2} \left( 1 - \cos(\hat{z}t) \right)Z_1 - \frac{z_2 \hat{u}^2}{\hat{z}^2} \left( 1 - \cos(\hat{z}t) \right)Z_2 \\
&= \left( 1 - \cos(\hat{z}t) \right) \left( - \frac{z_1 \hat{u}^2}{\hat{z}^2} Z_1 - \frac{z_2 \hat{u}^2}{\hat{z}^2} Z_2 \right)
\end{align*}
where $\hat{u} = \sqrt{u_1^2 + \cdots + u_4^2}$. A final integration now provides the central components:
\begin{align*}
z_1(t) &= \left( z_1 + B + \frac{z_1 \hat{u}^2}{2\hat{z}^2} \right)t - \frac{z_1 \hat{u}^2}{2\hat{z}^3} \sin(\hat{z}t) \\
z_2(t) &= \left( z_2 + \frac{z_2 \hat{u}^2}{2\hat{z}^2} \right) t - \frac{z_2 \hat{u}^2}{2\hat{z}^3} \sin(\hat{z}t).
\end{align*}
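As a consistency check on the formulas just obtained, differentiation gives
\begin{align*}
x_1'(t) = u_1 \cos(\hat{z}t) + \left( \frac{-z_1 u_2 - z_2 u_3}{\hat{z}} \right) \sin(\hat{z}t) = a_1(t),
\end{align*}
and similarly $x_i'(t) = a_i(t)$ for $i = 2, 3, 4$, while
\begin{align*}
z_1'(t) = (z_1 + B) + \frac{z_1 \hat{u}^2}{2\hat{z}^2}\left( 1 - \cos(\hat{z}t) \right),
\end{align*}
which is precisely the $Z_1$-component of ${\bf Z}'(t)$ required by the equation above; the $Z_2$-component is checked in the same way.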
Let $\gamma = \exp(\xi_1 Z_1 + \xi_2 Z_2)$ be a central element of $H$. Comparing the components of $\gamma \sigma(t) = \sigma(t + \omega)$ shows that $\omega = 2 \pi k / \hat{z}$, $k \in \bb{Z}$. With this choice of $\omega$, the non-central components are equal, while the central components yield the system
\begin{align}
& \left( z_1 + B + \frac{z_1 \hat{u}^2}{2\hat{z}^2} \right) \frac{2\pi k}{\hat{z}} = \xi_1 \label{eq:HT central periods 1}\\
& \left( z_2 + \frac{z_2 \hat{u}^2 }{2 \hat{z}^2} \right) \frac{2\pi k}{\hat{z}} = \xi_2. \label{eq:HT central periods 2}
\end{align}
Each choice of $u_1, \ldots, u_4, z_1, z_2$ satisfying this system and the energy constraint
\begin{align*}
1 = \hat{u}^2 + (z_1 + B)^2 + z_2^2 = \hat{u}^2 + \hat{z}^2 + 2z_1 B + B^2
\end{align*}
will yield a unit speed magnetic geodesic translated by $\gamma$. In the case that the magnetic field and $\gamma$ are ``parallel'', i.e. $\xi_2 = 0$, then \eqref{eq:HT central periods 2} becomes
\begin{align*}
z_2 \left(1 + \frac{\hat{u}^2 }{2 \hat{z}^2} \right) \frac{2\pi k}{\hat{z}} = 0.
\end{align*}
The second and third factors are necessarily nonzero, so $z_2 = 0$. This reduces \eqref{eq:HT central periods 1} to an equation that can be solved in the same way as the Heisenberg case, according to the strength of the magnetic field relative to the energy. When $\gamma$ is an arbitrary element of the center, it is much more difficult to completely solve \eqref{eq:HT central periods 1} and \eqref{eq:HT central periods 2}, and hence to obtain an explicit description of all the $\gamma$-periodic geodesics.
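Explicitly, in the parallel case just described one has $z_2 = 0$ and $\hat{z} = |z_1|$, so \eqref{eq:HT central periods 1} reduces to
\begin{align*}
\left( z_1 + B + \frac{\hat{u}^2}{2 z_1} \right) \frac{2\pi k}{|z_1|} = \xi_1,
\end{align*}
which has the same form as the period condition for central elements of the three-dimensional Heisenberg group and can be analyzed according to the size of the energy relative to $|B|$.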
\section{Appendix: Tangent Bundle Viewpoint: Periodic Magnetic Geodesics in Heisenberg manifolds}
In \cite{kaplan1981Cliffordmodules}, A. Kaplan introduced so-called $j$-maps to study Clifford modules (see Section \ref{sec:geoemtry of two-step nilpotent lie groups} for the definition). A metric two-step nilpotent Lie algebra is completely characterized by its associated $j$-maps. Since being introduced, they have proven very useful in the study of two-step nilpotent geometry. In this appendix we show how the magnetic geodesic equations can be characterized in terms of the $j$-maps.
Let $G$ be a two-step nilpotent Lie group endowed with a left-invariant metric $g$ and an exact, left-invariant magnetic form $\Omega$. Let $\mf{g} = \mf{v} \oplus \mf{z}$ be the decomposition of the Lie algebra into the center and its orthogonal complement. By Lemma \ref{lem:every exact left-invariant 2-form is d of something central}, there is $\zeta_m \in \mf{z}^*$ such that $\Omega = d(B\zeta_m)$.
\begin{lem} \label{lem:Lorenta force is j of something}
The Lorentz force associated to the magnetic field $\Omega$ satisfies $F_\mf{v} = j(-BZ_m)$ and $F_\mf{z} = 0$, where $Z_m = \sharp(\zeta_m)$.
\end{lem}
\begin{proof}
Let $X \in \mf{g}$, $V \in \mf{v}$ and $Z \in \mf{z}$. Then
\begin{align*}
g(F(Z), X) &= \Omega(Z, X) = d(B \zeta_m)(Z,X) = -B\zeta_m([Z,X]) = 0,
\end{align*}
and
\begin{align*}
g(F(V), X) &= \Omega(V, X) = d(B \zeta_m)(V,X) = -B\zeta_m([V,X]) \\
&= -Bg(\sharp(\zeta_m), [V,X]) = -Bg(j(Z_m)V, X) \\
&= g(j(-B Z_m)V, X).
\end{align*}
\end{proof}
Because of Lemma \ref{lem:Lorenta force is j of something}, we will write $F = j(-BZ_m)$ with the understanding that $F$ vanishes on central vectors and agrees with $j(-BZ_m)$ on vectors in $\mf{v}$. Let $\gamma(t) = \exp(X(t) + Z(t))$ be a magnetic geodesic on $G$ where $X(t) \in \mf{v}$ and $Z(t) \in \mf{z}$. By \eqref{eq:tangent vector field of path in two-step nilpotent group}, we can express the velocity vector of $\gamma$ as $\gamma'(t) = X'(t) + \frac{1}{2}[X'(t), X(t)] + Z'(t)$. The condition for $\gamma$ to be a magnetic geodesic is $\nabla_{\gamma'(t)} \gamma'(t) = F(\gamma'(t))$. Using \eqref{eq:Levi-Civita eqns} to expand this condition and imposing the initial conditions $\gamma(0) = e$ and $\gamma'(0) = X_0 + Z_0$, the geodesic equations on $\mf{v}$ and $\mf{z}$ separately are
\begin{align}
& X''(t) = j(Z_0 - BZ_m)X'(t) \\
& Z'(t) + \frac{1}{2}[X'(t),X(t)] = Z_0.
\end{align}
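Since $j(Z_0 - BZ_m)$ is a fixed skew-symmetric endomorphism of $\mf{v}$, the first of these equations is a linear ODE with constant coefficients and integrates to
\begin{align*}
X'(t) = e^{\,t\, j(Z_0 - BZ_m)}\, X'(0) = e^{\,t\, j(Z_0 - BZ_m)}\, X_0
\end{align*}
(note that $X'(0) = X_0$ because $X(0) = 0$), after which $Z(t)$ is obtained from the second equation by one further integration.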
We restrict to the three-dimensional Heisenberg case and consider the magnetic geodesics in this context. Following the approach illustrated in Prop. 3.5 on pages 625--628 of \cite{eberlein1994geometry}, a straightforward calculation gives the following result.
\begin{cor}
\label{GeodEqsThm}If $z_{0}-B=0$ (or if $z_{0}-B\neq 0$ and $x_{0}=y_{0}=0$%
) then $\sigma \left( t\right) $ is the one parameter subgroup \
\begin{equation}
\sigma \left( t\right) =\exp \left( x\left( t\right) X+y\left( t\right)
Y+z\left( t\right) Z\right) =\exp \left( t\left( x_{0}X+y_{0}Y+z_{0}Z\right)
\right) \text{.} \label{StraightGeodEqs}
\end{equation}%
If $z_{0}-B\neq 0$, the solution is
\begin{equation}
\left(
\begin{array}{c}
x\left( t\right) \\
y\left( t\right)%
\end{array}%
\right) =\frac{A}{z_{0}-B}\left(
\begin{array}{cc}
\sin \left( t\left( \frac{z_{0}-B}{A}\right) \right) & -\left( 1-\cos \left(
t\left( \frac{z_{0}-B}{A}\right) \right) \right) \\
1-\cos \left( t\left( \frac{z_{0}-B}{A}\right) \right) & \sin \left( t\left(
\frac{z_{0}-B}{A}\right) \right)%
\end{array}%
\right) \left(
\begin{array}{c}
x_{0} \\
y_{0}%
\end{array}%
\right) \text{,} \label{SpiralingGeodEqsNoncentral}
\end{equation}%
and%
\begin{equation}
\begin{array}{c}
z\left( t\right)
=\left( z_{0}+\frac{A\left(x_0^2+y_0^2\right)}{2\left( z_{0}-B\right) }\right) t-\frac{A^2%
\left(x_0^2+y_0^2\right)}{2\left( z_{0}-B\right) ^{2}}\sin \left( t\left( \frac{z_{0}-B}{A%
}\right) \right). \label{SpiralingGeodEqsCentral}
\end{array}
\end{equation}
\end{cor}
\begin{rmk}
The coordinate functions \eqref{SpiralingGeodEqsNoncentral} and \eqref{SpiralingGeodEqsCentral} are equivalent to those obtained in \eqref{eq:x_i component of mag geod}-\eqref{eq:z component of mag geod} in the following sense. In order to obtain the magnetic geodesic through the origin determined by $(u_0, v_0, z_0)$ as in Section \ref{sec:magnetic geodesic equations on simply connected 2n+1 dim Heis}, take the initial tangent vector in \eqref{SpiralingGeodEqsNoncentral} and \eqref{SpiralingGeodEqsCentral} to be $(x_0, y_0, z_0) = (u_0/A, v_0/A, z_0 + B)$.
\end{rmk}
We now present some of the main results about the three-dimensional Heisenberg manifold proved in the body of the paper, but expressed using the tangent bundle, rather than the cotangent bundle.
Continuing the notation from the previous sections, we fix energy $E,$ magnetic strength $B,$ and metric parameter $A.$
Let $\left( H, g_A, \Omega\right)$ denote a simply connected Heisenberg manifold. The theorems in this section state precisely the set of periods $\omega$ such that
there exists an initial velocity $v_{p}\in TH$ such that $\sigma_{v_{p}}\left(t\right)$
is periodic with period $\omega$. We also precisely state the
set of initial velocities $v_{p}$, hence the set of geodesics, that
produce each period $\omega$.
Let $\Gamma$ denote a cocompact discrete subgroup of $H$ and, as above, denote the resulting compact Heisenberg manifold by $\left(\Gamma\backslash H, g_A, \Omega\right).$ For all $\gamma \in \Gamma,$ we
state below precisely the set of periods $\omega$ such that
there exists an initial velocity $v_{p}\in TH$ such that $\sigma_{v_{p}}\left(t\right)$
is $\gamma$-periodic with period $\omega$. We also precisely state the
set of initial velocities $v_{p}$, hence the set of geodesics, that
produce each period $\omega$.
\subsection{Periodic Magnetic Geodesics on the Simply Connected Heisenberg Group}
We now consider the existence
of periodic geodesics in $\left(H,g_{A}, d\left(B\zeta\right)\right)$,
the three-dimensional Heisenberg Lie group $H$ with left-invariant
metric determined by the orthonormal basis $\left\{ \frac{1}{\sqrt{A}}X,\frac{1}{\sqrt{A}}Y,Z\right\} $
and magnetic form $\Omega=-B\,\alpha\wedge\beta$. \ Recall that for
a vector $v\in\mathfrak{h}$, $\sigma_{v}\left(t\right)$ denotes
the magnetic geodesic through the identity with initial velocity $v$.
Note that if $v_{p}\in T_{p}H$, then $\sigma_{v_{p}}\left(t\right)$
denotes the magnetic geodesic through $p=\sigma_{v_{p}}\left(0\right)$
with initial velocity $v_{p}$. Also note that because $g_{A}$ and
$\Omega$ are left-invariant, $\sigma_{v_{p}}\left(t\right)=L_{p}\sigma_{v}\left(t\right)$,
where $v_{p}=L_{p\ast}\left(v\right)$; i.e., magnetic geodesics through
$p\in H$ are just left translations of magnetic geodesics through
the identity. Clearly, a magnetic geodesic through $p\in H$ is periodic
with period $\omega$ if and only if its left translation by $p^{-1}$
is a magnetic geodesic through the identity with period $\omega$.
\begin{thm}
\label{thm:PreciseThmSimplyConnected}With notation as above, fix energy
$E,$ magnetic strength $B,$ and metric parameter $A.$
\begin{enumerate}
\item If $B^2>E^2$ then there exists a one-parameter
family of vectors $v\in\mathfrak{h}$, $\left\vert v\right\vert =E,$
such that $\sigma_{v}\left(t\right)$ is periodic. In particular,
$\sigma_{v}\left(t\right)$ is periodic if and only if $z_{0}=B-\mathrm{sgn}\left(B\right)\sqrt{B^{2}-E^{2}}$ and
\[
x_{0}^{2}+y_{0}^{2}=-2z_{0}\left(z_{0}-B\right)/A.
\]
The set of periods of $\sigma_{v}\left(t\right)$ is $\frac{2\pi A}{\sqrt{B^{2}-E^{2}}}\mathbb{Z}_{\neq0}$
and the smallest positive period is $\omega=\left\vert \frac{2\pi A}{z_{0}-B}\right\vert =\frac{2\pi A}{\sqrt{B^{2}-E^{2}}}$.
\item If $B^2\leq E^2$ then there does not exist a
vector $v$ with $\left\vert v\right\vert =E$ such that $\sigma_{v}\left(t\right)$
is periodic.
\end{enumerate}
\end{thm}
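As a quick consistency check of the formulas in part (1), note that $A\left(x_{0}^{2}+y_{0}^{2}\right)=-2z_{0}\left(z_{0}-B\right)$ and $z_{0}=B-\mathrm{sgn}\left(B\right)\sqrt{B^{2}-E^{2}}$ together give
\[
\left\vert v\right\vert ^{2}=A\left(x_{0}^{2}+y_{0}^{2}\right)+z_{0}^{2}=z_{0}\left(2B-z_{0}\right)=\left(B-\mathrm{sgn}\left(B\right)\sqrt{B^{2}-E^{2}}\right)\left(B+\mathrm{sgn}\left(B\right)\sqrt{B^{2}-E^{2}}\right)=E^{2},
\]
as required; moreover $-2z_{0}\left(z_{0}-B\right)>0$ in this range, so such $x_{0},y_{0}$ exist.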
\subsection{Periodic Geodesics on Compact Quotients of the Heisenberg group}
We ultimately wish to consider closed magnetic geodesics on Heisenberg
manifolds of the form $\Gamma\backslash H$, where $\Gamma$ is a
cocompact discrete subgroup of $H.$ As above, we proceed by considering
$\gamma$-periodic magnetic geodesics on the cover $H$.
The purpose of this section is to state more precisely, and to prove,
the following result, which is divided into several cases. See Theorem \ref{thm:NonCentralThm},
Theorem \ref{thm:StraightThm}, and Theorem \ref{thm:SpiralingThm} below.
\begin{thm}
Consider the three-dimensional Heisenberg Lie group $H$ with left
invariant metric $g_{A}$ determined by the orthonormal basis $\left\{ \frac{1}{\sqrt{A}}X,\frac{1}{\sqrt{A}}Y,Z\right\} $
and magnetic form $\Omega=d\left(B\zeta\right)=-B\alpha\wedge\beta$. Fix $\gamma\in H$
and fix energy $E,$ magnetic strength $B$ and metric parameter $A.$
Then we can state precisely the set of periods $\omega$ such that
there exists an initial velocity $v_{p}\in TH$ with $\left\vert v_{p}\right\vert =E$
such that $\sigma_{v_{p}}\left(t\right)$ is $\gamma$-periodic with
period $\omega$. We can also precisely state the set of initial velocities
$v_{p}$, hence the set of geodesics, that produce each period $\omega$.
\end{thm}
\subsubsection{Noncentral Case}
Let $\gamma=\exp\left(x_\gamma X+y_\gamma Y+z_\gamma Z\right)\in H$ with
$x_\gamma^{2}+y_\gamma^{2}\neq0$. Let $a=\exp\left(a_{x}X+a_{y}Y+a_{z}Z\right)\in H$.\ From
(\ref{eq:multiplication law}),
the conjugacy class of $\gamma$ in $H$ is $\exp\left(x_\gamma X+y_\gamma Y+\mathbb{R}Z\right)$.
\begin{thm}
\label{thm:NonCentralThm} Fix energy $E,$ magnetic strength $B$ and metric parameter $A.$
Let $\gamma=\exp\left(x_\gamma X+y_\gamma Y+z_\gamma Z\right)\in H$ with
$x_\gamma^{2}+y_\gamma^{2}\neq0$.
\begin{enumerate}
\item If $E^2>B^2$ (i.e., if $\mu>1$) then there exists
a two-parameter family of elements $a\in H$ such that $a\gamma a^{-1}=\exp\left(x_\gamma X+y_\gamma Y+z_\gamma^{\prime}Z\right)$
where $z_\gamma^{\prime}=\pm B\sqrt{A\left(x_\gamma^{2}+y_\gamma^{2}\right)/\left(E^{2}-B^{2}\right)}$.
Letting $v=\frac{B}{z_\gamma^{\prime}}\left(x_\gamma X+y_\gamma Y+z_\gamma^{\prime}Z\right)$,
which satisfies $\left\vert v\right\vert =E$, the element $\gamma$ translates
the (non-spiraling) magnetic geodesic $a^{-1}\exp\left(tv\right)$
with period $\omega=\pm\sqrt{A\left(x_\gamma^{2}+y_\gamma^{2}\right)/\left(E^{2}-B^{2}\right)}$.
These are the only magnetic geodesics with energy $E$ translated
by $\gamma$.
\item If $E^2\leq B^2$ (i.e., if $\mu\leq1$) then neither
$\gamma$ nor any of its conjugates in $H$ translate a magnetic geodesic
with energy $E$.
\end{enumerate}
\end{thm}
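The normalization of $v$ in part (1) can be checked directly: since $\left(z_\gamma^{\prime}\right)^{2}=AB^{2}\left(x_\gamma^{2}+y_\gamma^{2}\right)/\left(E^{2}-B^{2}\right)$,
\[
\left\vert v\right\vert ^{2}=\frac{B^{2}}{\left(z_\gamma^{\prime}\right)^{2}}\,A\left(x_\gamma^{2}+y_\gamma^{2}\right)+B^{2}=\left(E^{2}-B^{2}\right)+B^{2}=E^{2}.
\]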
\subsubsection{Central Case}
Throughout this subsection, we assume that $x_\gamma=y_\gamma=0$;
i.e., that $\gamma$ lies in $Z\left(H\right),$ the center of the three-dimensional
Heisenberg group $H.$ Recall that since $\gamma$ is central, $\gamma$
translates a magnetic geodesic $\sigma\left(t\right)$ through the
identity $e\in H$ with period $\omega$ if and only if for all $a\in H$,
$\gamma$ translates a magnetic geodesic through $a$ with period
$\omega$. \ That is, without loss of generality, if $\gamma$ lies
in the center, we may assume that $\sigma\left(0\right)=e$.
Recall that magnetic geodesics in $H$ are either spiraling or one
parameter subgroups. We first consider the case of one-parameter subgroups.
\begin{thm}
\label{thm:StraightThm} Fix energy $E,$ magnetic strength $B$ and metric parameter $A.$
\ Let $\gamma=\exp\left(z_\gamma Z\right)\in Z\left(H\right)$, with
$z_\gamma \neq0$. The element $\gamma$ translates the magnetic geodesics
$\sigma\left(t\right)=$ $\exp\left(\pm tEZ\right)$ with initial
velocities $v=\pm EZ$ and periods $\omega=\pm z_\gamma /E$. This pair
of one-parameter subgroups and their left translates are the only
straight magnetic geodesics translated by $\gamma$.
\end{thm}
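The periods in Theorem \ref{thm:StraightThm} can be read off directly: since $Z$ is central, $\gamma\,\sigma\left(t\right)=\exp\left(z_\gamma Z\right)\exp\left(\pm tEZ\right)=\exp\left(\left(\pm tE+z_\gamma\right)Z\right)=\sigma\left(t\pm z_\gamma/E\right)$ (with matching signs), which exhibits the periods $\omega=\pm z_\gamma/E$.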
\begin{thm}
\label{thm:SpiralingThm}Fix energy $E,$ magnetic strength $B, $ and metric parameter $A.$
Denote $\mu=\frac{E}{\left\vert B\right\vert }$. Let $\gamma=\exp\left(z_\gamma Z\right)\in Z\left(H\right)$,
$z_\gamma\neq0$. If there exists a vector $v=x_{0}X+y_{0}Y+z_{0}Z$
and a period $\omega\neq0$ such that the spiraling geodesic $\sigma_{v}\left(t\right)$
is $\gamma$-periodic with period $\omega$, then there exists $\ell\in\mathbb{Z}_{\neq0}$
such that $\zeta_{\ell}=\frac{z_\gamma}{\pi \ell}$ satisfies the
conditions relative to $\mu$ specified in the following six cases
and $A\left(x_{0}^{2}+y_{0}^{2}\right)$, $z_{0}$, and
$\omega$ are as expressed below. Conversely, for every choice of
$\ell\in\mathbb{Z}_{\neq0}$ such that $\zeta_{\ell}=\frac{z_\gamma}{\pi \ell}$
satisfies the conditions in one of the cases below, there exists at
least one vector $v$ as given below such that $\sigma_{v}\left(t\right)$
is a $\gamma$-periodic (spiraling) geodesic with period $\omega$ as
given below. \ Note that Case 1 requires $E^2<B^2$.
Cases 2 through 5 require $E^2>B^2,$ and Case
6 requires $E^2=B^2.$ Note that in all cases, $\zeta_\ell\neq0.$
\begin{enumerate}
\item $\frac{-2\mu}{1-\mu}<\frac{\zeta_{\ell}}{A}<\frac{2\mu}{1+\mu}<1$,
\item $1<\frac{2\mu}{1+\mu}<\frac{\zeta_{\ell}}{A}<2$,
\item $2<\frac{\zeta_{\ell}}{A}\leq\frac{2\mu}{\mu-1}$,
\item $2<\frac{2\mu}{\mu-1}<\frac{\zeta_{\ell}}{A}$,
\item $\frac{\zeta_{\ell}}{A}=2$ and $\mu>1$,
\item $\frac{\zeta_{\ell}}{A}=1$ and $\mu=1.$
\end{enumerate}
In Cases 1 through 4, we choose any $x_{0},y_{0}\in\mathbb{R}$ so
that
\begin{equation}
A\left(x_0^2+y_0^2\right)=B^{2}\left(\frac{\mu^{2}-1}{\frac{\zeta_{\ell}}{A}-1}\left(\frac{\zeta_{\ell}}{A}-2\right)+2\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{\ell}}{A}-1}}\right)\label{Ebar-plus}
\end{equation}
and let
\begin{equation}
z_{0}=-B\left(-1+\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{\ell}}{A}-1}}\right)\label{z0-plus}
\end{equation}
and
\[
\omega=\frac{2z_\gamma A}{\zeta_{\ell}\left(z_{0}-B\right)}=\frac{2\pi \ell\sqrt{\left\vert \frac{\zeta_{\ell}}{A}-1\right\vert }}{\sqrt{E^{2}-B^{2}}}\text{.}
\]
In Case 4, we may also choose any $x_{0},y_{0}\in\mathbb{R}$ so that
\begin{equation}
A\left(x_0^2+y_0^2\right)=B^{2}\left(\frac{\mu^{2}-1}{\frac{\zeta_{\ell}}{A}-1}\left(\frac{\zeta_{\ell}}{A}-2\right)-2\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{\ell}}{A}-1}}\right)\label{Ebar-minus}
\end{equation}
and let
\begin{equation}
z_{0}=-B\left(-1-\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{\ell}}{A}-1}}\right)\label{z0-minus}
\end{equation}
and
\[
\omega=\frac{2 z_\gamma A}{\zeta_{\ell}\left(z_{0}-B\right)}=-\frac{2\pi \ell\sqrt{\left\vert \frac{\zeta_{\ell}}{A}-1\right\vert }}{\sqrt{E^{2}-B^{2}}}\text{.}
\]
The conditions on $\mu,\zeta_{\ell},x_{0},y_{0}$, and $z_{0}$
imply $\frac{\mu^{2}-1}{\frac{\zeta_{\ell}}{A}-1}>0$, $x_0^2+y_0^2>0$,
$E^{2}=A\left(x_0^2+y_0^2\right) \allowbreak +z_{0}^{2}$, and the (spiraling) magnetic geodesic
through the identity $\sigma_{v}\left(t\right)$ with initial velocity
$v=x_{0}X+y_{0}Y+z_{0}Z$ is $\gamma=\exp\left(z_\gamma Z\right)$-periodic
with energy $E$ and period $\omega$ as given.
In Case 5, which only occurs if $\frac{z_\gamma}{A}\in2\pi\mathbb{Z}_{\neq0}$,
we choose any $x_{0},y_{0}\in\mathbb{R}$ so that
\[
A\left(x_0^2+y_0^2\right)=2\left\vert B\right\vert \sqrt{E^{2}-B^{2}}
\]
and
\[
z_{0}=B-\frac{A\left(x_0^2+y_0^2\right)}{2B}\text{.}
\]
Then the conditions on $\mu,\zeta_{\ell},x_{0},y_{0}$, and $z_{0}$ imply
that $E^{2}=A\left(x_0^2+y_0^2\right)+z_{0}^{2}$ and the (spiraling) magnetic geodesic
$\sigma_{v}\left(t\right)$ starting at the identity with initial
velocity $v=x_{0}X+y_{0}Y+z_{0}Z$ is $\gamma$-periodic with energy
$E$ and period
\[
\omega=-\mathrm{sgn}\left(B\right)\frac{z_\gamma} {\sqrt{E^{2}-B^{2}}}\text{.}
\]
In Case 6, which only occurs if $\frac{z_\gamma}{A}\in\pi\mathbb{Z}_{\neq0}$,
we choose any $x_{0},y_{0},z_{0}\in\mathbb{R}$ so that $E^{2}=B^{2}=A\left(x_{0}^{2}+y_{0}^{2}\right)+z_{0}^{2}$
and $z_{0}\neq\pm B$. The conditions on $\mu$ and $\zeta_{\ell}$ imply
that the (spiraling) magnetic geodesic $\sigma_{v}\left(t\right)$
with initial velocity $v=x_{0}X+y_{0}Y+z_{0}Z$ will yield a $\gamma$-periodic
magnetic geodesic with energy $E$ and period
\[
\omega=\frac{2 z_\gamma} {z_{0}-B}\text{.}
\]
\end{thm}
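One convenient way to organize the verification in Cases 1 through 4 is the following observation: since $\zeta_{\ell}=z_\gamma/\left(\pi \ell\right)$, the stated period satisfies
\[
\omega\left(\frac{z_{0}-B}{A}\right)=\frac{2z_\gamma}{\zeta_{\ell}}=2\pi \ell\in2\pi\mathbb{Z}_{\neq0},
\]
so the noncentral coordinates in \eqref{SpiralingGeodEqsNoncentral} are automatically $\omega$-periodic, and only the central condition $z\left(t+\omega\right)-z\left(t\right)=z_\gamma$ remains to be verified with the stated choices of $x_{0},y_{0},z_{0}$.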
\begin{rmk}
In Case 1, there are infinitely many values of $\ell$ that satisfy the
conditions, hence infinitely many distinct periods $\omega.$ In particular,
if $\mu<1$ and there exists $\ell_{0}\in\mathbb{Z}_{>0}$ such that
$\frac{\zeta_{\ell_{0}}}{A}\in\left(\frac{-2\mu}{1-\mu},\frac{2\mu}{1+\mu}\right)$,
then for all $\ell>\ell_{0}$, $\frac{\zeta_{\ell}}{A}\in\left(\frac{-2\mu}{1-\mu},\frac{2\mu}{1+\mu}\right)$.
Likewise if there exists $\ell_{0}\in\mathbb{Z}_{<0}$ such that $\frac{\zeta_{\ell_{0}}}{A}\in\left(\frac{-2\mu}{1-\mu},\frac{2\mu}{1+\mu}\right)$,
then for all $\ell<\ell_{0}$, $\frac{\zeta_{\ell}}{A}\in\left(\frac{-2\mu}{1-\mu},\frac{2\mu}{1+\mu}\right)$.
\end{rmk}
\begin{rmk}
In Case 6, the magnitudes of the periods take all values in the interval
$\left(\left\vert z_\gamma\right\vert /E,\infty\right)$. The period
$\omega=\left\vert z_\gamma\right\vert /E$ is achieved when $v=-BZ$,
which implies $\sigma_{v}$ is a one-parameter subgroup; i.e., non-spiraling.
The magnitude of the period approaches $\infty$ as $v\rightarrow BZ$.
This behavior is in contrast to the Riemannian case; i.e., the case
$B=0$. In the Riemannian case, there are finitely many periods associated
to each element $\gamma.$ However, if there exists $\gamma\in\Gamma$
such that $\log\gamma\in2\pi\mathbb{Z},$ then $\Gamma\backslash H$
does not satisfy the Clean Intersection Hypothesis, so the fact that
unusual magnetic geodesic behavior occurs in this case is not unprecedented (see \cite{gornet2005trace}).
\end{rmk}
\begin{comment}
\begin{proof}
We now prove Theorem \ref{SpiralingThm}. We fix energy $E$ and magnetic
strength $\left\vert c\right\vert /A$. We first show that if the
spiraling magnetic geodesic $\sigma_{v}\left(t\right)$ is $\gamma$-periodic
with energy $E$, then there exists $k\in\mathbb{Z}_{\neq0}$ such
that $\mu=E/\left\vert c\right\vert $, $\zeta_{k}=\hat{z}/\pi k$,
$v,$ and $\omega$ satisfy one of the six conditions above. We then
show that any choice of $k$ that satisfies one of the above six conditions,
will yield a (spiraling) magnetic geodesic $\sigma_{v}\left(t\right)$
that is $\gamma$-periodic with energy $E$ with the given period.
Let $\gamma=\exp\left(\hat{z}Z\right)$, $\hat{z}\neq0$, and let
$\sigma_{v}\left(t\right)$ be $\gamma$-periodic with energy $E=\left\vert v\right\vert $
and period $\omega$, where $\sigma_{v}\left(t\right)$ is the magnetic
geodesic starting at the identity with initial velocity $v=x_{0}X+y_{0}Y+z_{0}Z$.
\ We assume $\sigma_{v}\left(t\right)$ is a spiraling geodesic;
i.e., not a one-parameter subgroup, so by Corollary \ref{GeodEqsThm},
$z_{0}+c\neq0,$and $x\left(t\right)$, $y\left(t\right)$, and $z\left(t\right)$
satisfy (\ref{SpiralingGeodEqsNoncentral}) and (\ref{SpiralingGeodEqsCentral}).
\ Because $\sigma_{v}\left(t\right)$ is $\gamma$-periodic, we must
have, for all $t$, $x\left(t+\omega\right)=x\left(t\right)$ and
$y\left(t+\omega\right)=y\left(t\right)$, which by (\ref{SpiralingGeodEqsNoncentral})
implies
\[
\omega\left(\frac{z_{0}+c}{A}\right)\in2\pi\mathbb{Z}_{\neq0}.
\]
Also because $\sigma_{v}\left(t\right)$ is $\gamma$-periodic, we
must have for all $t,$ $z\left(t+\omega\right)-z\left(t\right)=\hat{z}$.
This together with (\ref{SpiralingGeodEqsCentral}) implies
\begin{eqnarray}
\hat{z} & = & z\left(t+\omega\right)-z\left(t\right)\label{ConditionOne}\\
& = & \frac{z_{0}\left(z_{0}+2c\right)+E^{2}}{2\left(z_{0}+c\right)}\omega\nonumber \\
& = & \frac{z_{0}\left(z_{0}+2c\right)+A\left(x_{0}^{2}+y_{0}^{2}\right)+z_{0}^{2}}{2\left(z_{0}+c\right)}\omega\nonumber
\end{eqnarray}
Let $k\in\mathbb{Z}_{\neq0}$ be such that $\ \omega\left(\frac{z_{0}+c}{A}\right)=2k\pi$.
Substituting this into (\ref{ConditionOne}), we obtain the equivalent
conditions, written in terms of $\zeta_{k}=\hat{z}/k\pi$,
\begin{eqnarray}
\omega\left(\frac{z_{0}+c}{A}\right) & = & 2k\pi,\nonumber \\
\left(2-\frac{\zeta_{k}}{A}\right)z_{0}^{2}+2c\left(1-\frac{\zeta_{k}}{A}\right)z_{0}+\left(\bar{E}^{2}-c^{2}\frac{\zeta_{k}}{A}\right) & = & 0\text{.}\label{EquivConds}
\end{eqnarray}
Cases 1-4 assume $\frac{\zeta_{k}}{A}\neq1,2$ so that the second
equation is quadratic in $z_{0}$. Case 5 assumes $\frac{\zeta_{k}}{A}=2$,
and the second equation is linear in $z_{0}$. Case 6 assumes $\frac{\zeta_{k}}{A}=1$,
in which case the second equation is an equation in $E$, and not
$z_{0}$.
We first consider the case that $\frac{\zeta_{k}}{A}=1=\frac{\hat{z}}{Ak\pi}$;
i.e., Case 6. Plugging this into (\ref{EquivConds}), we obtain
\begin{eqnarray*}
\omega\left(z_{0}+c\right) & = & 2\hat{z}\\
E^{2} & = & c^{2}\text{,}
\end{eqnarray*}
which implies $\mu=1$. Also, if $z_{0}=\pm c$, then $x_{0}=y_{0}=0$
and the geodesic is a one-parameter subgroup, a contradiction. The
forward direction of Case 6 has now been proved.
For the reverse direction, let $\mu=1$ and let $k\in\mathbb{Z}_{\neq0}$
be such that $\frac{\hat{z}}{A}=k\pi$; i.e., $\zeta_{k}=$ $\frac{\hat{z}}{k\pi}=A$.
Let $v=x_{0}X+y_{0}Y+z_{0}Z$ \ satisfy
\begin{eqnarray*}
\left\vert v\right\vert & = & E=\left\vert c\right\vert \\
v & \neq & \pm cZ\text{.}
\end{eqnarray*}
Note that this implies that $z_{0}+c\neq0$ and $\bar{E}\neq0$, i.e.,
the magnetic geodesic is spiraling. Let
\[
\omega=\frac{2\hat{z}}{z_{0}+c}\text{.}
\]
Then $\sigma_{v}\left(t+\omega\right)=\exp\left(x\left(t+\omega\right)X+y\left(t+\omega\right)Y+z\left(t+\omega\right)Z\right)$.
Because $\omega=\frac{2\hat{z}}{z_{0}+c}$ and $2\frac{\hat{z}}{A}\in2\pi\mathbb{Z}_{\neq0}$,
clearly $\cos\left(t\left(\frac{z_{0}+c}{A}\right)\right)$ and $\sin\left(t\left(\frac{z_{0}+c}{A}\right)\right)$
are periodic with period $\omega$. From this, one easily verifies
from the magnetic geodesic equations (\ref{SpiralingGeodEqsNoncentral})
and (\ref{SpiralingGeodEqsCentral}) that for all $t$, $x\left(t+\omega\right)=x\left(t\right)$
and $y\left(t+\omega\right)=y\left(t\right)$ and
\[
z\left(t+\omega\right)-z\left(t\right)=\frac{z_{0}\left(z_{0}+2c\right)+c^{2}}{2\left(z_{0}+c\right)}\omega=\left(\frac{z_{0}+c}{2}\right)\left(\frac{2\hat{z}}{z_{0}+c}\right)=\hat{z}\text{,}
\]
as desired. Because $\left\vert z_{0}\right\vert <\left\vert c\right\vert =E$,
the periods take all values
\[
\left\vert \frac{\hat{z}}{c}\right\vert <\left\vert \omega\right\vert =\left\vert \frac{2\hat{z}}{z_{0}+c}\right\vert <\infty\text{.}
\]
The period $\left\vert \omega\right\vert $ approaches $\left\vert \frac{\hat{z}}{c}\right\vert $
as $v\rightarrow cZ$ and the period $\left\vert \omega\right\vert $
approaches $\infty$ as $v\rightarrow-cZ.$ The left-hand inequality
is achieved for the one-parameter subgroup $\exp\left(tcZ\right)$,
i.e., by a non-spiraling magnetic geodesic. Recall from Theorem \ref{GeodEqsThm}
above, the magnetic geodesic with initial velocity $-cZ$ is a one-parameter
subgroup of the form $\sigma\left(t\right)=\exp\left(-tcZ\right)$
and yields a $\gamma$-periodic geodesic, $\gamma=\exp\left(\hat{z}Z\right)$
with period $\omega=\left\vert \hat{z}\right\vert $, and does not
require $\hat{z}\in\pi\mathbb{Z}$. Case 6 has now been proved.
We now consider the case $\frac{\zeta_{k}}{A}=2$; i.e.,
Case 5. Plugging $\frac{\zeta_{k}}{A}=2$, or $\hat{z}=2k\pi A$,
into (\ref{EquivConds}) we obtain
\begin{eqnarray*}
\omega\left(\frac{z_{0}+c}{A}\right) & = & 2k\pi,\\
z_{0} & = & \frac{\bar{E}^{2}-c^{2}2}{2c}=-c+\frac{\bar{E}^{2}}{2c}
\end{eqnarray*}
so that
\[
z_{0}+c=\frac{\bar{E}^{2}}{2c}\text{.}
\]
Because $E^{2}=\bar{E}^{2}+z_{0}^{2}$,
\begin{eqnarray*}
E^{2} & = & \bar{E}^{2}+\left(-c+\frac{\bar{E}^{2}}{2c}\right)^{2}\\
& = & c^{2}+\frac{\bar{E}^{4}}{4c^{2}}
\end{eqnarray*}
which implies, since $\bar{E}\neq0$,
\[
\bar{E}^{4}=4c^{2}\left(E^{2}-c^{2}\right)>0\text{,}
\]
which implies $\mu=E/\left\vert c\right\vert >1$. Therefore
\[
\bar{E}^{2}=A\left(x_{0}^{2}+y_{0}^{2}\right)=2\left\vert c\right\vert \sqrt{E^{2}-c^{2}}
\]
The period of $\omega$ then satisfies
\[
\omega=\frac{2k\pi A}{z_{0}+c}=\frac{4ck\pi A}{\bar{E}^{2}}=\frac{2c\hat{z}}{\bar{E}^{2}}=\frac{c\hat{z}}{\left\vert c\right\vert \sqrt{E^{2}-c^{2}}}\text{,}
\]
so that
\[
\omega=\mathrm{sgn}\left(c\right)\frac{\hat{z}}{\sqrt{E^{2}-c^{2}}}\text{.}
\]
The forward direction of Case 5 is now complete.
For the reverse direction, assume $\mu>1$ and let $k\in\mathbb{Z}_{\neq0}$
be such that $\hat{z}=2k\pi A$; i.e., $\frac{\zeta_{k}}{A}=$ $\frac{\hat{z}}{Ak\pi}=2$.
Let $v=x_{0}X+y_{0}Y+z_{0}Z$ \ satisfy
\begin{eqnarray*}
\bar{E}^{2} & = & A\left(x_{0}^{2}+y_{0}^{2}\right)=2\left\vert c\right\vert \sqrt{E^{2}-c^{2}}\\
z_{0} & = & -c+\frac{\bar{E}^{2}}{2c}
\end{eqnarray*}
and let
\[
\omega=\mathrm{sgn}\left(c\right)\frac{\hat{z}}{\sqrt{E^{2}-c^{2}}}\text{.}
\]
Note that $\bar{E}\neq0$ since $\mu=E/\left\vert c\right\vert >1$,
which also implies $z_{0}+c\neq0$ and the magnetic geodesic $\sigma_{v}\left(t\right)$
is spiraling. Now
\begin{eqnarray*}
\omega\left(\frac{z_{0}+c}{A}\right) & = & \mathrm{sgn}\left(c\right)\frac{\hat{z}}{\sqrt{E^{2}-c^{2}}}\frac{\bar{E}^{2}}{2cA}\\
& = & \frac{2\left\vert c\right\vert }{2cA}\sqrt{E^{2}-c^{2}}\mathrm{sgn}\left(c\right)\frac{\hat{z}}{\sqrt{E^{2}-c^{2}}}\\
& = & \frac{\hat{z}}{A}=2k\pi
\end{eqnarray*}
Because $\omega\left(\frac{z_{0}+c}{A}\right)=$ $\frac{\hat{z}}{A}$
and $\frac{\hat{z}}{A}\in2\pi\mathbb{Z}_{\neq0}$, clearly $\cos\left(t\left(\frac{z_{0}+c}{A}\right)\right)$
and $\sin\left(t\left(\frac{z_{0}+c}{A}\right)\right)$ are periodic
with period $\omega$. From this, one easily verifies from the magnetic
geodesic equations\ in Corollary \ref{GeodEqsThm} for $\sigma_{v}\left(t\right)=\exp\left(x\left(t\right)X+y\left(t\right)Y+z\left(t\right)Z\right)$,
that for all $t$, $x\left(t+\omega\right)=x\left(t\right)$ and $y\left(t+\omega\right)=y\left(t\right)$
and
\begin{eqnarray*}
z\left(t+\omega\right)-z\left(t\right) & = & \frac{z_{0}\left(z_{0}+2c\right)+\bar{E}^{2}+z_{0}^{2}}{2\left(z_{0}+c\right)}\omega\\
& = & \frac{2z_{0}\left(z_{0}+c\right)+\bar{E}^{2}}{2\left(z_{0}+c\right)^{2}}\hat{z}\\
& = & \frac{2\left(-c+\frac{\bar{E}^{2}}{2c}\right)\frac{\bar{E}^{2}}{2c}+\bar{E}^{2}}{2\frac{\bar{E}^{4}}{4c^{2}}}\hat{z}=\hat{z}
\end{eqnarray*}
as desired. Finally, we verify
\[
\bar{E}^{2}+z_{0}^{2}=2\left\vert c\right\vert \sqrt{E^{2}-c^{2}}+\left(-c+\frac{2\left\vert c\right\vert \sqrt{E^{2}-c^{2}}}{2c}\right)^{2}=E^{2}
\]
In other words, this choice of $v$ produces a spiraling magnetic
$\gamma$-periodic geodesic of energy $E$ and period $\omega=\mathrm{sgn}\left(c\right)\frac{\hat{z}}{\sqrt{E^{2}-c^{2}}}$.
Case 5 has now been proved.
It remains to consider when $\zeta_{k}\neq1,2$. By (\ref{EquivConds})
and the quadratic equation,
\begin{equation}
z_{0}=-c+\frac{1}{\left(2-\frac{\zeta_{k}}{A}\right)}\left(c\pm\sqrt{c^{2}+\bar{E}^{2}\left(\frac{\zeta_{k}}{A}-2\right)}\right)\text{,}\label{z0-from-quadeq}
\end{equation}
which must be real, hence
\begin{equation}
\bar{E}^{2}\left(2-\frac{\zeta_{k}}{A}\right)\leq c^{2}\text{.}\label{E-bar-condition}
\end{equation}
The remaining four cases follow from (\ref{E-bar-condition}) and
the facts that $E^{2}=\bar{E}^{2}+z_{0}^{2}$ and the magnetic geodesic
is spiraling; i.e., $\bar{E}$ is real and positive.
To solve for $\bar{E}$ \ in terms of $E,c,\mu$, and $\zeta_{k}$,
we plug (\ref{z0-from-quadeq}) into $E^{2}=\bar{E}^{2}+z_{0}^{2}$
to obtain
\[
E^{2}=\bar{E}^{2}+\left(-c+\frac{1}{\left(2-\frac{\zeta_{k}}{A}\right)}\left(c\pm\sqrt{c^{2}+\bar{E}^{2}\left(\frac{\zeta_{k}}{A}-2\right)}\right)\right)^{2}\text{.}
\]
We eliminate the denominator,
\begin{eqnarray*}
\text{OMIT}\text{:\ } & & E^{2}\left(2-\frac{\zeta_{k}}{A}\right)^{2}=\left(\bar{E}^{2}+c^{2}\right)\left(2-\frac{\zeta_{k}}{A}\right)^{2}-2c\left(2-\frac{\zeta_{k}}{A}\right)\left(c\pm\sqrt{c^{2}+\bar{E}^{2}\left(\frac{\zeta_{k}}{A}-2\right)}\right)\\
& & +\left(c^{2}\pm2c\sqrt{c^{2}+\bar{E}^{2}\left(\frac{\zeta_{k}}{A}-2\right)}+c^{2}+\bar{E}^{2}\left(\frac{\zeta_{k}}{A}-2\right)\right)
\end{eqnarray*}
isolate the square root,
\[
E^{2}\left(2-\frac{\zeta_{k}}{A}\right)^{2}-\bar{E}^{2}\left(\frac{\zeta_{k}}{A}-2\right)\left(\frac{\zeta_{k}}{A}-1\right)-c^{2}\left(\frac{\zeta_{k}^{2}}{A^{2}}-2\frac{\zeta_{k}}{A}+2\right)=\pm2c\left(\frac{\zeta_{k}}{A}-1\right)\sqrt{c^{2}+\bar{E}^{2}\left(\frac{\zeta_{k}}{A}-2\right)}
\]
and square both sides and collect $\bar{E}$ terms
\[
0=\bar{E}^{4}\left(\frac{\zeta_{k}}{A}-1\right)^{2}+2\bar{E}^{2}\left(E^{2}-c^{2}\right)\left(\frac{\zeta_{k}}{A}-1\right)\left(2-\frac{\zeta_{k}}{A}\right)+\left(E^{2}-c^{2}\right)\left(E^{2}\left(2-\frac{\zeta_{k}}{A}\right)^{2}-c^{2}\frac{\zeta_{k}^{2}}{A^{2}}\right)\text{.}
\]
Finally, by the quadratic equation, recalling that $\mu=E/\left\vert c\right\vert >0$
and $c\neq0$,
\[
\bar{E}^{2}=\frac{\left(E^{2}-c^{2}\right)}{\frac{\zeta_{k}}{A}-1}\left(\frac{\zeta_{k}}{A}-2\right)\pm2c\sqrt{\frac{E^{2}-c^{2}}{\frac{\zeta_{k}}{A}-1}}
\]
so that
\[
\frac{\bar{E}^{2}}{c^{2}}=\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}\left(\frac{\zeta_{k}}{A}-2\right)\pm2\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}\text{.}
\]
As $\bar{E}^{2}$ is real and positive, $\mu\neq1$ and we must have
\begin{equation}
\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}>0\text{,}\label{positivity-restriction}
\end{equation}
or equivalently $\zeta_{k}\,<A$ if and only if $\mu\,<1$
It will be convenient to denote
\begin{equation}
\bar{E}_{+}^{2}=c^{2}\left(\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}\left(\frac{\zeta_{k}}{A}-2\right)+2\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}\right)\label{EplusDefn}
\end{equation}
and
\begin{equation}
\bar{E}_{-}^{2}=c^{2}\left(\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}\left(\frac{\zeta_{k}}{A}-2\right)-2\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}\right)\text{.}\label{EminusDefn}
\end{equation}
{[}UPDATED WORDING OF THIS CHOICE{]} For $\bar{E}_{+}^{2}$, by choosing
the $\pm$ sign in (\ref{z0-from-quadeq}) to have the same sign (parity?)
as
\[
-c\left(\frac{\zeta_{k}}{A}-2\right)\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}-1,
\]
a straightforward calculation shows that (\ref{z0-plus}) agrees with
(\ref{z0-from-quadeq}). Note also that from (\ref{z0-plus})
\begin{eqnarray}
\frac{\bar{E}_{+}^{2}+z_{0}^{2}}{c^{2}} & = & \left(\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}\left(\frac{\zeta_{k}}{A}-2\right)+2\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}\right)+\left(-1+\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}\right)^{2}\label{correct-energy-plus}\\
\text{OMIT} & = & \frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}\left(\frac{\zeta_{k}}{A}-2\right)+2\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}+1-2\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}+\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}\nonumber \\
& = & \mu^{2}\nonumber
\end{eqnarray}
as desired. Likewise, for $\bar{E}_{-}^{2}$, by choosing the $\pm$
sign in (\ref{z0-from-quadeq}) to have the same sign (parity?) as
\[
c\left(\frac{\zeta_{k}}{A}-2\right)\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}-1\text{,}
\]
a straightforward calculation shows that (\ref{z0-minus}) agrees
with (\ref{z0-from-quadeq}). From (\ref{z0-minus})
\begin{eqnarray}
\frac{\bar{E}_{-}^{2}+z_{0}^{2}}{c^{2}} & = & \left(\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}\left(\frac{\zeta_{k}}{A}-2\right)-2\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}\right)+\left(-1-\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}\right)^{2}\label{correct-energy-minus}\\
\text{OMIT} & = & \frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}\left(\frac{\zeta_{k}}{A}-2\right)-2\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}+1+2\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}+\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}\nonumber \\
& = & \mu^{2}\nonumber
\end{eqnarray}
as desired. For both $\bar{E}_{\pm}^{2}$, if we choose the opposite
sign in (\ref{z0-from-quadeq}), the initial velocity will not have
energy $E$.
Now observe that a restriction on $\mu$ and $\frac{\zeta_{k}}{A}$
arises from the positivity of
\[
\frac{\bar{E}_{\pm}^{2}}{c^{2}}\sqrt{\frac{\frac{\zeta_{k}}{A}-1}{\mu^{2}-1}}=\left(\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}\left(\frac{\zeta_{k}}{A}-2\right)\pm2\right)>0
\]
To complete the proof of the forward direction in Cases 1 through
4, it remains to show the following:
\begin{enumerate}
\item For $\frac{-2\mu}{1-\mu}<\frac{\zeta_{k}}{A}<\frac{2\mu}{1+\mu}<1,\frac{\zeta_{k}}{A}\neq0$
we must choose $\bar{E}_{+}$.
\item For $1<\frac{\zeta_{k}}{A}<2$ and $1<\frac{2\mu}{1+\mu}<\frac{\zeta_{k}}{A}<2$,
which implies $\mu\,>1$, we must choose $\bar{E}_{+}$.
\item If $\frac{\zeta_{k}}{A}>2$ and $2<\frac{\zeta_{k}}{A}<\frac{2\mu}{\mu-1}$,
which implies $\mu>1$, we must choose $\bar{E}_{+}$.
\item If $\frac{\zeta_{k}}{A}>2$ and $2<\frac{2\mu}{\mu-1}<\frac{\zeta_{k}}{A}$,
which implies $\mu>1$, we can choose either of $\bar{E}_{\pm}$.
\end{enumerate}
Using sign considerations on $\frac{\zeta_{k}}{A}-2$, we easily see
that for the cases where $\frac{\zeta_{k}}{A}<1$ or $1<\frac{\zeta_{k}}{A}<2$,
then the only permittable choice for $\bar{E}$ is $\bar{E}_{+}$.
When $\frac{\zeta_{k}}{A}>2$, then sign considerations alone do not
eliminate either of $\bar{E}_{\pm}$. The additional restrictions
on the relative values of $\mu$ and $\frac{\zeta_{k}}{A}$ follow
from further exploration of the restriction that $\bar{E}_{\pm}^{2}>0$,
or equivalently
\[
\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}\left(\frac{\zeta_{k}}{A}-2\right)\pm2>0\text{.}
\]
We first consider Case 1, where $\frac{\zeta_{k}}{A}<1$, which by
(\ref{positivity-restriction}) implies $\mu<1$. As $\bar{E}^{2}>0$,
and since $\frac{\zeta_{k}}{A}-2$ is negative, by (\ref{EminusDefn})
$\bar{E}_{-}^{2}$ cannot occur and by (\ref{EplusDefn}) $\bar{E}_{+}^{2}$
must satisfy
\[
\frac{\bar{E}_{+}^{2}}{c^{2}}\sqrt{\frac{\frac{\zeta_{k}}{A}-1}{\mu^{2}-1}}=\left(\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}\left(\frac{\zeta_{k}}{A}-2\right)+2\right)>0
\]
which is equivalent to
\[
\sqrt{\frac{1-\mu^{2}}{1-\frac{\zeta_{k}}{A}}}<\frac{2}{2-\frac{\zeta_{k}}{A}}
\]
iff
\[
\left(\frac{\zeta_{k}}{A}-\frac{2\mu}{1+\mu}\right)\left(\frac{\zeta_{k}}{A}+\frac{2\mu}{1-\mu}\right)<0\text{.}
\]
Since $\frac{-2\mu}{1-\mu}<\frac{2\mu}{1+\mu}$, we obtain $\frac{-2\mu}{1-\mu}<\frac{\zeta_{k}}{A}<\frac{2\mu}{1+\mu}<1,\frac{\zeta_{k}}{A}\neq0$.
The forward direction of Case 1 is now complete.
We next consider Case 2, where $1<\frac{\zeta_{k}}{A}<2$, which by
\ (\ref{positivity-restriction}) implies $\mu>1$. As $\bar{E}^{2}>0$,
and since $\frac{\zeta_{k}}{A}-2$ is negative, by (\ref{EminusDefn})
$\bar{E}_{-}^{2}$ cannot occur and by (\ref{EplusDefn}) $\bar{E}_{+}^{2}$
must satisfy
\[
\frac{\bar{E}_{+}^{2}}{c^{2}}\sqrt{\frac{\frac{\zeta_{k}}{A}-1}{\mu^{2}-1}}=\left(\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}\left(\frac{\zeta_{k}}{A}-2\right)+2\right)>0
\]
which is equivalent to, since $\mu>1$,
\begin{eqnarray*}
\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}} & < & \frac{2}{2-\frac{\zeta_{k}}{A}}\\
\text{OMIT}\zeta_{k}^{2}-\frac{\zeta_{k}}{A}\frac{4\mu^{2}}{\mu^{2}-1}+\frac{4\mu^{2}}{\mu^{2}-1} & < & 0
\end{eqnarray*}
or
\[
\left(\frac{\zeta_{k}}{A}-\frac{2\mu}{\mu-1}\right)\left(\frac{\zeta_{k}}{A}-\frac{2\mu}{\mu+1}\right)<0\text{.}
\]
Since
\[
\frac{2\mu}{\mu+1}<2<\frac{2\mu}{\mu-1}
\]
the positivity restriction is equivalent to
\[
1<\frac{2\mu}{1+\mu}<\frac{\zeta_{k}}{A}<2\text{.}
\]
The forward direction of Case 2 is now complete.
We next consider Cases 3 and 4,where \ $\frac{\zeta_{k}}{A}>2$,
which by (\ref{positivity-restriction}) implies $\mu>1$. \ As $\bar{E}^{2}>0$,
and since $\frac{\zeta_{k}}{A}-2$ is positive, by (\ref{EplusDefn}),
(\ref{EminusDefn}), $\bar{E}_{\pm}^{2}$ must satisfy
\[
\frac{\bar{E}_{\pm}^{2}}{c^{2}}\sqrt{\frac{\frac{\zeta_{k}}{A}-1}{\mu^{2}-1}}=\left(\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}\left(\frac{\zeta_{k}}{A}-2\right)\pm2\right)>0\text{.}
\]
So $\bar{E}_{+}^{2}$is automatically positive. However, $\bar{E}_{-}^{2}$
requires
\[
2<\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}\left(\frac{\zeta_{k}}{A}-2\right)
\]
equivalently
\[
0<\left(\frac{\zeta_{k}}{A}-\frac{2\mu}{\mu-1}\right)\left(\frac{\zeta_{k}}{A}-\frac{2\mu}{\mu+1}\right)\text{.}
\]
Since $\mu>1$ and therefore
\[
\frac{2\mu}{\mu+1}<2<\frac{2\mu}{\mu-1}
\]
the positivity restriction is equivalent to
\[
2<\frac{2\mu}{\mu-1}<\frac{\zeta_{k}}{A}\text{.}
\]
The forward direction of Cases 3 and 4 are now complete.
For the reverse direction, fixing energy $E$ and magnetic strength
$\left\vert c\right\vert /A$, we first note that if we pick $k\in\mathbb{Z}_{\neq0}$
$\ $such that $\frac{\zeta_{k}}{A}=\frac{\hat{z}}{\pi k}$ lies in
one of Case 1 through\ Case 4, then we have a corresponding positive
value $\bar{E}^{2}$. To see this, note that the positivity calculations
in the proof of the forward direction for Cases 1 through 4 hold in
the reverse direction. We then choose $x_{0},y_{0}$ so that $\bar{E}^{2}=A\left(x_{0}^{2}+y_{0}^{2}\right).$
Note that Cases 1 through 4 all clearly satisfy $\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}>0$.
The fact that $E^{2}=\bar{E}^{2}+z_{0}^{2}$ follows from the computations
in (\ref{correct-energy-plus}) and (\ref{correct-energy-minus}).
It remains to show that in all four cases, the (spiraling) magnetic
geodesic through the identity $\sigma_{v}\left(t\right)$ with initial
velocity $v=x_{0}X+y_{0}Y+z_{0}Z$ is $\gamma=\exp\left(\hat{z}Z\right)$-periodic
with energy $E$ and period
\[
\omega=\frac{2\hat{z}A}{\zeta_{k}\left(z_{0}+c\right)}.
\]
Observe that
\[
\frac{2\hat{z}}{\zeta_{k}}=2\pi k\in2\pi\mathbb{Z}_{\neq0}
\]
so by the expression (\ref{SpiralingGeodEqsNoncentral}) $x\left(t\right)$
and $y\left(t\right)$ are clearly periodic with period $\omega=\frac{2\hat{z}A}{\zeta_{k}\left(z_{0}+c\right)}$.
\ Also by (\ref{SpiralingGeodEqsCentral})
\begin{gather*}
z\left(t+\frac{2\hat{z}A}{\zeta_{k}\left(z_{0}+c\right)}\right)-z\left(t\right)=\frac{z_{0}\left(z_{0}+2c\right)+E^{2}}{2\left(z_{0}+c\right)}\left(\frac{2\hat{z}A}{\zeta_{k}\left(z_{0}+c\right)}\right)\\
=\frac{2z_{0}\left(z_{0}+c\right)+\bar{E}^{2}}{\left(z_{0}+c\right)^{2}}\left(\frac{\hat{z}A}{\zeta_{k}}\right)
\end{gather*}
We denote the constant
\[
p=z\left(t+\frac{2\hat{z}A}{\zeta_{k}\left(z_{0}+c\right)}\right)-z\left(t\right)\text{,}
\]
and need to show that $p=\hat{z}$. In Cases 1, 2, and 3,
\[
\bar{E}^{2}=c^{2}\left(\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}\left(\frac{\zeta_{k}}{A}-2\right)+2\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}\right)
\]
and
\[
z_{0}=c\left(-1+\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}\right)\text{.}
\]
So
\begin{eqnarray*}
\frac{p\zeta_{k}}{A\hat{z}} & = & \frac{2c\left(-1+\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}\right)c\left(\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}\right)+\bar{E}^{2}}{c^{2}\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}\\
& = & \frac{2\left(-\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}+\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}\right)+\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}\left(\frac{\zeta_{k}}{A}-2\right)+2\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}}{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}\\
& = & \frac{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}\frac{\zeta_{k}}{A}}{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}=\frac{\zeta_{k}}{A}
\end{eqnarray*}
In Case 4, since
\[
z_{0}=c\left(-1-\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}\right)\text{,}
\]
we have
\begin{eqnarray*}
\frac{p\zeta_{k}}{A\hat{z}} & = & \frac{2c\left(-1-\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}\right)c\left(-\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}\right)+\bar{E}^{2}}{c^{2}\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}\\
& = & \frac{2\left(\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}+\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}\right)+\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}\left(\frac{\zeta_{k}}{A}-2\right)-2\sqrt{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}}{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}\\
& = & \frac{+\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}\frac{\zeta_{k}}{A}}{\frac{\mu^{2}-1}{\frac{\zeta_{k}}{A}-1}}=\frac{\zeta_{k}}{A}\text{.}
\end{eqnarray*}
Thus $p=\hat{z}$, and the reverse direction of the proof is complete.
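The cancellations used in the last two computations can be double-checked symbolically. The following sketch (Python with \texttt{sympy}; not part of the proof, writing $w$ for $\frac{\zeta_{k}}{A}$ and $s$ for $\sqrt{\frac{\mu^{2}-1}{w-1}}$) verifies that in all cases $\frac{p\zeta_{k}}{A\hat{z}}$ reduces to $\frac{\zeta_{k}}{A}$:
\begin{verbatim}
import sympy as sp

c, mu, w = sp.symbols('c mu w', positive=True)   # w stands for zeta_k/A
s = sp.sqrt((mu**2 - 1)/(w - 1))                  # the recurring square root

# Cases 1, 2 and 3: z0 = c(-1 + s), Ebar^2 = c^2*(s^2*(w - 2) + 2*s)
z0 = c*(-1 + s)
Ebar2 = c**2*(s**2*(w - 2) + 2*s)
print(sp.simplify((2*z0*(z0 + c) + Ebar2)/(c**2*s**2) - w))   # expected: 0

# Case 4: z0 = c(-1 - s), Ebar^2 = c^2*(s^2*(w - 2) - 2*s)
z0 = c*(-1 - s)
Ebar2 = c**2*(s**2*(w - 2) - 2*s)
print(sp.simplify((2*z0*(z0 + c) + Ebar2)/(c**2*s**2) - w))   # expected: 0
\end{verbatim}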
\end{proof}
\end{comment}
\bibliographystyle{alpha}
\section{Introduction}\label{sec:intro}
Noncommutative Quantum Field Theories (NCQFT's), in particular those
obtained by Moyal deformation of the usual (pointwise) product of
functions, have been a subject of intense research in recent
years~\cite{nekrasov}, because of many different reasons. Among them is
their relevance to open string dynamics~\cite{SW} and, in a quite different
context, they are important tools for an effective description of the
Quantum Hall Effect (QHE)~\cite{Suss}. In this realization of an
incompressible quantum fluid~\cite{Jackiw:2004nm}, the projection to the
lowest Landau level under the existence of a strong magnetic field amounts,
for a two-dimensional system, to the noncommutativity of the spatial
coordinates~\cite{Duval:2000xr}.
In this paper, we calculate one-loop quantum effects around both trivial
and non-trivial saddle points, for the NCQFT of a self-interacting complex
scalar field equipped with a Grosse-Wulkenhaar (GW) term~\cite{bulquen}
(see also~\cite{R} and \cite{0710.2652}).
One of the interests for carrying out this explicit calculation is that,
in spite of the many important general results for this kind of
NCQFT~\cite{bulquen} there are, we believe, still few concrete results
obtained by actually evaluating quantum effects in models that
include a GW-term. In particular, we shall focus on the divergent terms in
the effective action, and on the first quantum corrections to the effective
action around non trivial minima, in the case of a spontaneous symmetry
breaking potential.
We deal with a $2+1$ dimensional model, something which makes it
more attractive from the point of view of its potential applications to the
situation of a planar system in an external magnetic field. At the same
time, it provides an opportunity to probe the effect to the GW term in an
odd number of spacetime dimensions where, necessarily, some of the
coordinates do commute. Finally, we also consider the important issue of
calculating quantum corrections on top of non-trivial minima that arise
when there is spontaneous symmetry breaking.
This article is organized as follows: in section~\ref{sec:themodel} we
write down the action that defines the model, selecting the basis of
functions to be used in the loopwise expansion, and extracting the
resulting Feynman rules. We analyze the renormalizability
of the theory in~\ref{sec:renormalization}, while
the one-loop corrections to the two and four-point functions are evaluated in
section~\ref{sec:renormalizedgeneratingfunctional}. We consider quantum
effects around non-trivial minima in section~\ref{ntvc}. In
section~\ref{sec:conclusions}, we present our conclusions.
\section{The model}\label{sec:themodel}
We are concerned with a noncommutative model whose dynamical variable is a
complex scalar field in $2+1$ space-time dimensions, such that the
coordinates satisfy:
\begin{equation}
[x_\mu \,,\, x_\nu ] \;=\; i \, \theta_{\mu\nu}
\;\;, \;\;\; \mu, \, \nu \,=\, 0,\,1,\,2 \;,
\end{equation}
where $\theta_{\mu\nu}$ are the elements of a constant real antisymmetric
matrix.
In $2+1$ dimensions this matrix is necessarily singular; thus we shall
assume that its (only) null eigenvalue corresponds to the time direction,
$x_0$, since we are not interested in introducing noncommutativity for the
time coordinate. Although there are some general arguments to discard those
theories~\cite{alvarezgaume}, in our case the reason is simpler: we want to
consider theories that might be interpreted in terms of effective field
theory models in strong magnetic fields~\cite{cesar_ana_lopez}. Thus we
have the more explicit commutation relations:
\begin{equation}
[ x_0, x_j ] \,=\, 0 \;\;,\;\;\;\; [ x_j, x_k ] \,=\,i \,
\theta_{jk} \;\;\;,\;\;\; j,\,k = 1,\, 2 \;
\end{equation}
where $\theta_{jk}= \theta \,\epsilon_{jk}$, and we shall assume that
$\theta > 0$.
The model is defined by the following Euclidean action:
\begin{equation}
\mathcal{S}=\int_{x,t} \left(\partial_{\mu} \varphi^{*} \partial_{\mu} \varphi+m^{2} \varphi^{*} \varphi+\Omega^2
\varphi^{*} \star z_j \star \varphi \star z_j \right)\,+\,\mathcal{S}_{int}
\end{equation}
(with $z_j\equiv\theta^{-1}_{jk} x_k$), which is of the kind proposed in~\cite{bulquen}.
Under the extra assumption that $\Omega^2\equiv 2$, the system is said to be
at the self-dual point since it is invariant under a combined Fourier
transformation and rescaling~\cite{szabo} of spatial coordinates:
\begin{equation}
\mathcal{S}[\varphi,\varphi^{*},\theta,g]=\mathcal{S}[\frac{1}{\theta}\hat \varphi_{(\frac{x}{\theta})},\frac{1}{\theta}\hat \varphi^{*}_{(\frac{x}{\theta})},\theta,g] \;\; ,
\end{equation}
a symmetry that survives, as we shall see explicitly, one loop quantum
corrections.
At that special point, one of the terms in the commutators used to define
the spatial (inner) derivatives is canceled with a like one coming from
the confining potential term, leading to an action with the form:
\begin{equation} \label{eq: accion}
\mathcal{S}=\int_{x,t} \left(\dot \varphi^* \dot \varphi + m^2 \varphi^{*} \varphi
+\frac{1}{\theta^2} \varphi^{*} \star x_{j} \star x_{j}\star \varphi
+\frac{1}{\theta^2} \varphi \star x_{j} \star x_{j}\star \varphi^{*} \right)
+\mathcal{S}_{int} \;,
\end{equation}
where the dot denotes differentiation with respect to $x_0$. The
interaction term that we shall consider may be regarded as the orientable
analog of the $\varphi^4$ vertex, namely,
\begin{equation} \label{eq:vertice}
\mathcal{S}_{int}=\frac{g}{4!}\int_{x,t} \varphi^{*} \star \varphi \star \varphi^{*}
\star \varphi \;.
\end{equation}
Note that there is, indeed, yet another inequivalent analog to the
$\varphi^4$ vertex, namely:
\begin{equation}
\mathcal{S}_{int}=\frac{g}{4!}\int_{x,t}
\varphi^{*} \star \varphi^{*} \star \varphi \star \varphi \;.
\end{equation}
We shall not, however, deal here with a theory including this term since
its UV properties seem to be qualitatively different~\cite{bulquen2}.
The interaction term (\ref{eq:vertice}) yields a
super-renormalizable theory, as we shall see in section
\ref{sec:renormalization}.
To carry out explicit calculations it is convenient to choose the so-called
{\em matrix basis}, since, as can be shown, their
$\star$-product adopts a `diagonal form':
\begin{itemize}
\item $f_{nk} \star f_{k'n'}=\delta_{kk'} f_{nn'}$
\item $(f_{nk})^{*}=f_{kn}$ \;.
\end{itemize}
In Appendix A, a brief summary of this and related properties is presented.
Careful demonstrations may be found, for example, in~\cite{gracia}.
The coefficients $\varphi_{nk}(t)$, that appear in the expansion of the field
in such a basis,
\begin{equation}
\varphi(x,t)\;=\; \sum_{nk} \varphi_{nk}(t) f_{nk}(x) \;,
\end{equation}
become then the dynamical variables.
In terms of these coefficients, the action integral reads:
\begin{equation}
\mathcal{S}=\int_{t_1 t_2} \varphi^*_{ln}(t_1)
G_{ln,kr}(t_1-t_2)\varphi_{kr}(t_2)+\frac{2\pi \theta
g}{4!}\int_{t}\varphi^{*}_{n_1,n_4}\varphi_{n_1,n_2}\varphi^{*}_{n_3,n_2}\varphi_{n_3,n_4}\;\; ,
\end{equation}
where
\begin{equation}
G_{ln,kr}(t_1-t_2)=2\pi \theta
\delta(t_1-t_2) \delta_{lk}\delta_{nr}(-\partial_t^2+m^2+\frac{2}{\theta}(k+n+1))
\end{equation}
is a kernel that defines the quadratic (free) part of the action. To
derive the Feynman rules corresponding to this action, we need an explicit
expression for $\Delta=G^{-1}$. Since $G$ is already diagonal with respect
to its discrete indices, we only need to deal with the temporal
coordinates. In Fourier (frequency) space:
\begin{equation} \label{eq:propa_impulso}
\hat \Delta_{ln,kr}(\nu)=\frac{\delta_{lk}\delta_{nr}}{2 \pi \theta} \frac{1}{\omega_{nk}^2+\nu^2} \;\; ,
\end{equation}
and after Fourier transformation:
\begin{equation}
\Delta_{ln,kr}(t_1-t_2)=\braket{\varphi_{ln}(t_1)\varphi^*_{kr}(t_2)}_0=\frac{\delta_{lk}
\delta_{nr}}{2\pi\theta}\frac{e^{-\omega_{kn}|t_1-t_2|}}{2\omega_{kn}}
\;\; ,
\end{equation}
where
\begin{equation}
\omega_{kn}^2=m^2+\frac{2}{\theta}(k+n+1) \;\; .
\end{equation}
The Feynman rules and conventions used for the diagrammatic expansion that
follows from this model are better introduced in terms of diagrams with a
double line notation, to cope with matrix indices. Orientation is, on the
other hand, assigned according to the usual convention for creation and
annihilation operators.
The free propagator and the interaction vertex correspond to the diagrams
of figures~\ref{fig:propa} and~\ref{fig:vertex}, respectively:
\begin{figure}[!ht]
\begin{center}
\begin{picture}(0,0)%
\includegraphics{propagator.pstex}%
\end{picture}%
\setlength{\unitlength}{4144sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(3714,294)(529,-208)
\end{picture}%
\end{center}
\caption{The free propagator} \label{fig:propa}
\end{figure}
\begin{figure}[!ht]
\begin{center}
\begin{picture}(0,0)%
\includegraphics{vertex.pstex}%
\end{picture}%
\setlength{\unitlength}{3108sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(5580,5400)(2701,-7261)
\end{picture}%
\end{center}
\caption{The interaction vertex $\frac{g}{4!}
\varphi^{*}_{n_1,n_4}\varphi_{n_1,n_2}\varphi^{*}_{n_3,n_2}\varphi_{n_3,n_4}$} \label{fig:vertex}
\end{figure}
A dot attached to a line indicates that it corresponds to the first index.
So when two vertices are connected with a double line, both the dots and
the orientation of the lines must coincide (note that it is not necessary
to attach a dot to the propagator). Equipped with this notation, we may
easily group all the inequivalent diagrams corresponding to a given class.
Symmetry factors can, of course, be calculated by standard application of
Wick's theorem.
Thus we are ready to construct perturbatively the generating functional of
1PI graphs which we shall calculate explicitly up to the one-loop order.
This analysis is adapted for a propagator with a simple form in the matrix
basis. Other basis can be of interest (such as plane waves) depending on
the structure of the propagator and the vertex~\cite{seiberg}.
\section{Renormalization}\label{sec:renormalization}
\subsection{One-loop divergences}
It is easy to see
that the only divergent diagram of the theory at the one-loop level is
the~\emph{tadpole} graph of Figure~\ref{fig:tadpole}.
\begin{figure}[!ht]
\begin{center}
\begin{picture}(0,0)%
\includegraphics[scale=0.6]{tadpole.pstex}%
\end{picture}%
\setlength{\unitlength}{3947sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(10149,2823)(814,-2900)
\end{picture}%
\end{center}
\caption{The tadpole graph. Two contractions are possible.}\label{fig:tadpole}
\end{figure}
As shown in the figure, there is a `free' internal index (not fixed by the
external ones). This leads to a UV divergent contribution to the two point
function:
\begin{equation}
\Gamma_{n_0 n_1,n_2 n_3}^{(2), planar}(x-y)=\frac{2 g}{4!} \delta_{n_0 n_2} \delta_{n_1 n_3} \delta(x-y) \sum_{k\geq0}
\frac{1}{\omega_{k, n_2}} \;.
\end{equation}
The same amplitude is obtained writing $n_3$ instead of $n_2$, which would yield a symmetric expression in $\varphi$ and $\varphi^*$; this corresponds to the other contraction shown in Figure \ref{fig:tadpole}. For the sake of simplicity we now concentrate on one of them, and we finally give the symmetrized expression in Eq.~(\ref{eq:contribution_two}).\\
A Euclidean cutoff can be implemented simply by limiting the number of
modes we sum. Denoting by $k_{max}$ the maximum index in the (convergent)
sum, we split it up into two parts: one of them shall give a mass
renormalization term, while the other will be a
function of $n_2$ with a finite limit as $k_{max}\to \infty$. We choose as
subtraction point $n_2=0$; in this way the singular contribution to
$\Gamma$ is:
\begin{equation}
\delta \Gamma = \frac{2 g}{\pi \theta 4!}
\Big{(}\sum_{k=0}^{k=k_{max}}\frac{1}{\omega_{k, 0}}\Big{)}\int_{x,t} \varphi^*_{(x,t)} \star
\varphi_{(x,t)} \;,
\end{equation}
which can be absorbed by the definition of the mass
parameter.
\noindent On the other hand, the finite part reads:
\begin{equation}
\label{eq:finita} \sum_{k=0}^{k=k_{max}} (\frac{1}{\omega_{k,
n_2}}-\frac{1}{\omega_{k, 0}}) \;,
\end{equation}
where the $k_{max}\to \infty$ limit can be taken to get a finite
contribution to the generating functional. This yields a function of $n_2$
that, as we shall see, can be written as a (one-body) potential term.
\subsection{Renormalizability and power counting}
Let us first show the theory is at least renormalizable (by power
counting). For a given Green's function the most important contributions
are given by the planar graphs. But taking into account the structure of
the propagator (\ref{eq:propa_impulso}) any amplitude must converge better
than a fermionic theory with a quartic vertex in two dimensions (and without
infrared problems). In order to show this we recall the standard definition:
\begin{equation}
\omega_{vertex}=(\frac{d-1}{2})F_{\nu} \;\; ,
\end{equation}
where $F_{\nu}$ is the number of fermions in the vertex. So in our case the
theory behaves better than $\omega_{\nu}=2$, i.e. a renormalizable
theory.
In order to see that the theory is super-renormalizable, note that there
must be at least two propagators in each loop (otherwise it
would be the one-loop tadpole contribution, which has already been
considered), but products of two or more propagators of the form
(\ref{eq:propa_impulso}) yield
convergent integrals, because the argument can be summed or integrated in any
order and each of the iterated operations converges~\cite{folland}. One way
to see this is to integrate in the worst possible order, that is, to
perform the continuous integral first and then the sum. But if one of the
propagators is multiplied by a rational function of the discrete variable
the sum converges, and this is indeed the case (as can be easily verified
by performing the integral asymptotically).\\ There remain non-trivial cases,
namely overlapping loop graphs such as the one shown in Figure~\ref{fig:self_2loops}.
\begin{figure}[!ht] \begin{center} \begin{picture}(0,0)%
\includegraphics{2loop_self.pstex}%
\end{picture}%
\setlength{\unitlength}{4144sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(3534,1888)(439,-1725)
\end{picture}%
\caption{Two loop self energy diagram.} \label{fig:self_2loops}
\end{center}
\end{figure}
The amplitude associated with this diagram is proportional to:
\begin{eqnarray} \nonumber
\int_{\omega_1\omega_2}\sum_{n_1 n_2}\frac{1}{\omega_1^2+m^2+\frac{2}{\theta}(k_1+n_1+1)}\frac{1}{\omega_2^2+m^2+\frac{2}{\theta}(k_2+n_2+1)}\times\\ \frac{1}{(\omega_1+\omega_2-q)^2+m^2+\frac{2}{\theta}(n_1+n_2+1)} \; ,
\end{eqnarray}
where $k_1$, $k_2$ and $q$ are external variables. This graph is convergent iff the following integral is convergent:
\begin{equation}
\int d^4x \frac{1}{x_1^2+|x_2|+1}\frac{1}{x_3^2+|x_4|+1}\frac{1}{(x_1+x_3-\beta)^2+|x_1|+|x_2|+1}
\;\; ,
\end{equation}
but this is indeed the case, because it is an integral of a positive function and the integration in each variable is convergent. Any other multiloop planar diagram is convergent for the same reason. In this way we see that it is enough to renormalize the tadpole graph.
\section{Renormalized generating functional}\label{sec:renormalizedgeneratingfunctional}
We construct here the generating functional of 1PI graphs for the one loop
renormalized perturbation series up to fourth order in the field variable.
\subsection{Two point function}
We need to consider the expression in (\ref{eq:finita}) in more detail.
This is a convergent series which defines a holomorphic function of $n_2$.
Introducing coefficients $\alpha_{\lambda}$, so that:
\begin{equation}
\sum_{k \geq0} (\frac{1}{\omega_{k,
n_2}}-\frac{1}{\omega_{k, 0}})=\sum_{\lambda\geq1}\alpha_{\lambda}
n_{2}^{\lambda} \;,
\end{equation}
the relation:
\begin{equation}
\alpha_{\lambda}=\sqrt{\frac{\theta}{2}}
\beta^{(\lambda)}_{(1+\frac{m^2\theta}{2})} \end{equation} \begin{equation}
\beta^{(\lambda)}(z)=\frac{1}{\lambda!}\frac{\partial^{\lambda}}{\partial
w^{\lambda}}\Big{(}\mathcal{Z}[\frac{1}{2},w+z]-\mathcal{Z}[\frac{1}{2},z]\Big{)}|_{w=0}
\end{equation}
where $\mathcal{Z}$ is the Hurwitz zeta function, is easily obtained.
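Resumming the Taylor series, the finite part (\ref{eq:finita}) can also be written in closed form as $\sqrt{\theta/2}\,\big{(}\mathcal{Z}[\frac{1}{2},n_2+z]-\mathcal{Z}[\frac{1}{2},z]\big{)}$ with $z=1+\frac{m^2\theta}{2}$, the $\alpha_\lambda$ being its Taylor coefficients in $n_2$. This can be cross-checked numerically; the sketch below (Python with \texttt{mpmath}; the values of $\theta$, $m$ and $n_2$ are arbitrary test choices) compares a truncation of the convergent series with the Hurwitz zeta expression:
\begin{verbatim}
import math
import mpmath as mp

theta, m, n2 = 1.0, 1.5, 3          # arbitrary test values (m^2*theta > 0)
z = 1 + m**2*theta/2                # argument entering the Hurwitz zeta function

# truncation of the convergent series (eq:finita), using
# 1/omega_{k,n} = sqrt(theta/2)/sqrt(k + n + z)
K = 10**6
direct = math.sqrt(theta/2)*sum(1/math.sqrt(k + n2 + z) - 1/math.sqrt(k + z)
                                for k in range(K))

# closed form in terms of the Hurwitz zeta function Z(1/2, .)
closed = math.sqrt(theta/2)*float(mp.zeta(0.5, n2 + z) - mp.zeta(0.5, z))

print(direct, closed)               # agree up to the truncation error ~ n2/sqrt(K)
\end{verbatim}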
It is important to note the smooth behavior with respect to the product
$m^2\theta$: this number is greater than zero, so the argument of the
function $\beta$ is always greater than
one (i.e., in this domain the function is regular). Now we are ready to
include the contribution of the two point function to the generating
functional. The singular part is absorbed in a mass renormalization, while
the finite part is:
\begin{equation}
\Gamma_{n_0 n_1,n_2
n_3}^{(2), finite}(x-y)=\frac{2 g}{4!} \delta_{n_0 n_2}
\delta_{n_1 n_3} \delta(x-y) \left(\sum_{\lambda\geq1}\alpha_{\lambda}
n_{2}^{\lambda}\right)\;.
\end{equation}
Taking now into account the correspondence with the functional
representation (see Appendix A), we can use the number operator to get an
expression in the original functional space:
\begin{equation} \label{eq:contribution_two}
\delta \Gamma[\varphi,\varphi^*]=\frac{2 g}{4!2\pi \sqrt{2\theta}}\int_{x,t}\big{(}\varphi^* \star
V(\frac{x}{\sqrt{\theta}}) \star \varphi + \varphi \star V(\frac{x}{\sqrt{\theta}})
\star \varphi^* \big{)}\;,
\end{equation}
where we have used the definition:
\begin{equation}
V(\frac{x}{\sqrt{\theta}}) \,=\,
\sum_{\lambda\geq1}\frac{\beta^{(\lambda)}}{2^\lambda}\big{(}\frac{x_{j}
\star x_{j}}{\theta}-1\big{)}^{\star \lambda} \;.
\end{equation}
This shows the explicit form of the one-body potential.
The first three terms in the expansion of this potential are plotted in
Figure \ref{fig:pote}, for the values $m^2\theta = 0$, $m^2\theta = 2$ and $m^2\theta \rightarrow \infty$.
\begin{figure}[!ht]
\begin{center}
\begin{picture}(0,0)%
\includegraphics{potencial_corregido.pstex}%
\end{picture}%
\setlength{\unitlength}{4144sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(5625,4050)(1,-3211)
\put(451,614){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$V(\frac{x}{\sqrt{\theta}})/V(0)$}%
}}}}
\put(5356,-1546){\makebox(0,0)[lb]{\smash{{\SetFigFont{12}{14.4}{\rmdefault}{\mddefault}{\updefault}{\color[rgb]{0,0,0}$x/\sqrt{\theta}$}%
}}}}
\end{picture}%
\end{center}
\caption{One-body potential due to quantum corrections. $m^2\theta=0$ (Short dashed),$m^2\theta=2$ (Long dashed), $m^2\theta\rightarrow \infty$ (bold).} \label{fig:pote}
\end{figure}
It is clear that this quantum correction tends to deconfine the system, as
should be expected from the repulsive character of the interaction.
\subsection{Four-point functions}
Now we deal with the four-point contributions, which correspond to four
inequivalent diagrams, which we study below, together with their
corresponding contributions to the action.
\begin{figure}[!ht]
\begin{center}
\begin{picture}(0,0)%
\includegraphics{4ptosA.pstex}%
\end{picture}%
\setlength{\unitlength}{3108sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(6300,2700)(3151,-4201)
\end{picture}%
\end{center}
\caption{} \label{fig:4A}
\end{figure}
The diagram of Figure \ref{fig:4A} contributes with:
\begin{equation}
\delta \Gamma=- \frac{S(2\pi \theta
g)^2}{2(4!)^2}\delta_{(t_1-t_2)}\delta_{(t_3-t_4)}\delta^{n_1}_{n_3}
\delta^{n_2}_{n_0}\delta^{n_5}_{n_7}\delta^{n_4}_{n_6}
(\Delta^{n_4 n_1,n_4 n_1}_{(t_1-t_3)})^2
\;\; ,
\end{equation}
where $S$ is a symmetry factor.
To obtain an explicit expression for the quantum correction to the action we
will consider a low energy approximation, assuming we are concerned with the
physics of this system up to $n_i^{max}$ (which is a kind of low-momentum
approximation).
\noindent Thus, assuming the condition $\theta m^2 \gg
n_i^{max}$ for the external indices, we can write:
\begin{equation} \label{eq:np1}
\delta \Gamma= - \alpha
\frac{g^2}{m^3 \theta^2} \int_t (\int_x \varphi^*_{(x,t)} \star
\varphi_{(x,t)})^2 \; , \;\;\;\alpha>0
\;\; ,
\end{equation}
where $\alpha$ is independent of the parameters of the problem.
Another contribution is the one represented in Figure \ref{fig:4B}.
\begin{figure}[!ht]
\begin{center}
\begin{picture}(0,0)%
\includegraphics{4ptosB.pstex}%
\end{picture}%
\setlength{\unitlength}{3108sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(3600,2700)(3601,-5101)
\end{picture}%
\end{center}
\caption{}\label{fig:4B}
\end{figure}
Its analytic expression is:
\begin{equation}
\delta \Gamma=\frac{-S(2\pi \theta
g)^2}{2(4!)^2}\delta_{(t_1-t_3)}\delta_{(t_2-t_4)}\delta^{n_6}_{n_4}\delta^{n_3}_{n_1}\delta^{n_2}_{n_0}\delta^{n_5}_{n_7}
\Delta^{n_1 n_4,n_1 n_4}_{(t_2-t_3)} \Delta^{n_0 n_5,n_0 n_5}_{(t_2-t_3)}
\;\; .
\end{equation}
Using the same approximation as for the previous diagram, we see that it
may be approximated by
\begin{equation}\label{eq:np2}
\delta \Gamma= - \alpha \frac{g^2}{m^3 \theta^2} \int_t (\int_x \varphi^*_{(x,t)}
\star \varphi_{(x,t)})^2 \;\;,\;\;\;\alpha>0 \;.
\end{equation}
Another nonequivalent diagram of this class is represented in Figure \ref{fig:4C}.
\begin{figure}[!ht]
\begin{center}
\begin{picture}(0,0)%
\includegraphics{4ptosC.pstex}%
\end{picture}%
\setlength{\unitlength}{3108sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(3600,2880)(4276,-5101)
\end{picture}%
\end{center}
\caption{}\label{fig:4C}
\end{figure}
Under the same approximation we used before, it contributes with:
\begin{equation} \label{eq:np3}
\delta \Gamma= - \alpha \frac{g^2}{m^3 \theta^2} \int_t (\int_x \varphi^*_{(x,t)}
\star \varphi_{(x,t)})^2 \;\;,\;\;\;\alpha>0 \;.
\end{equation}
Thus under this approximation all non-planar contributions have the same expression. Numerical factors (which we call $\alpha$ in Eqs.~(\ref{eq:np1}), (\ref{eq:np2}) and (\ref{eq:np3})) can of course be different.\\
There is also a planar diagram with one of its indices not fixed by the
external ones, see Figure \ref{fig:4D}.
\begin{figure}[!ht]
\begin{center}
\begin{picture}(0,0)%
\includegraphics{4ptosD.pstex}%
\end{picture}%
\setlength{\unitlength}{3108sp}%
\begingroup\makeatletter\ifx\SetFigFont\undefined%
\gdef\SetFigFont#1#2#3#4#5{%
\reset@font\fontsize{#1}{#2pt}%
\fontfamily{#3}\fontseries{#4}\fontshape{#5}%
\selectfont}%
\fi\endgroup%
\begin{picture}(5760,3150)(3601,-5461)
\end{picture}%
\end{center}
\caption{} \label{fig:4D}
\end{figure}
Because of this its contribution is more important than the previous ones:
\begin{equation}
\delta \Gamma=\frac{-S(2\pi \theta
g)^2}{2(4!)^2}\delta_{(t_1-t_2)}\delta_{(t_3-t_4)}\delta^{n_0}_{n_2}\delta^{n_6}_{n_4}\delta^{n_3}_{n_5}\delta^{n_1}_{n_7}
\sum_{\lambda \geq 0} \Delta^{\lambda n_3,\lambda n_3}_{(t_1-t_3)} \Delta^{\lambda
n_1,\lambda n_1}_{(t_1-t_3)}
\;\; ,
\end{equation}
which we again approximate, with the result:
\begin{equation}
\delta \Gamma= - \alpha
(g\theta^{\frac{1}{2}})\mathcal{Z}_{(\frac{3}{2},\frac{2+\theta m^2}{2})} g
\int \varphi^* \star \varphi \star \varphi^* \star \varphi \;\; , \;\;\;\alpha>0
\;\; .
\end{equation}
This diagram has a finite $\theta m^2 \rightarrow \infty$ limit.
Indeed,
\begin{equation}
\lim_{x \to \infty} \sqrt{x}
\mathcal{Z}_{(\frac{3}{2},\frac{2+x}{2})}=\beta
\;\; ,
\end{equation}
where $\beta$ is a positive number of order unity.
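This can be checked numerically; the short sketch below (Python with \texttt{mpmath}) shows the product approaching a constant close to $2\sqrt{2}\simeq 2.8$, consistent with the large-argument behavior $\mathcal{Z}_{(\frac{3}{2},q)}\simeq 2q^{-1/2}$:
\begin{verbatim}
import mpmath as mp

for x in (1e2, 1e4, 1e6, 1e8):
    print(x, mp.sqrt(x)*mp.zeta(1.5, (2 + x)/2))
# the product tends to 2*sqrt(2) = 2.828..., a positive number of order unity
\end{verbatim}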
So we have
\begin{equation}
\delta \Gamma= - \alpha \beta
\frac{g^2}{m}\int \varphi^* \star \varphi \star \varphi^*
\star \varphi \;\; , \;\;\; \alpha>0 \;.
\end{equation}
In this way, we see that only the last graph is leading when
$\theta m^2 \rightarrow \infty$. This is a consequence of the free internal
line (loop) which gives the most important contribution to the generating
functional in this limit.
\subsection{Approximate generating functional}
Joining all the previous pieces, we get an approximate expression for the
1PI functional, in the $\theta m^2 \rightarrow \infty$ limit.
\begin{equation} \nonumber
\Gamma=\mathcal{S}+\frac{g}{4!2\pi \sqrt{2\theta}}\int_{x,t}\big{(}\varphi^* \star
V(\frac{x}{\sqrt{\theta}}) \star \varphi + \varphi \star V(\frac{x}{\sqrt{\theta}})
\star \varphi^*\big{)} \end{equation} \begin{equation} -\alpha \beta \frac{g^2}{m}\int \varphi^* \star \varphi
\star \varphi^* \star \varphi
\;\; .
\end{equation}
The approximation has been used to eliminate some of the four-point
contributions. Taking into account the form of the coefficients in the two-point function, it is easily verified that if the series which defines the
one-body potential is truncated, this correction vanishes as well. We do
not have, however, a closed analytical expression for that correction, so this term should be kept.\\
If further corrections are taken into account under the approximation $\theta m^2 \rightarrow \infty$, non-planar diagrams can be eliminated as in the four-point function case. It is easily seen that if a series of internal lines connected to external legs is replaced by an internal loop, the amplitude turns out to be a factor $\theta m^2$ larger than in the non-planar case. So, for example, if the two-loop self-energy diagram of Figure \ref{fig:self_2loops} is considered, the non-planar case would be suppressed by a factor $\mathcal{O}(\frac{1}{(\theta m^2)^2})$, and so the latter correction would not be important.
\section{Non trivial vacuum configurations}\label{ntvc}
Using the properties of the matrix basis, exact classical solutions to the
equations of motion can be found. A natural question is whether we
can define a sensible quantum theory around those non trivial vacuum
configurations. As we shall see, this is indeed the case.
We will also analyze how the vacuum energy is shifted under variations of
the parameters that characterize the solutions.
\subsection{Classical solutions}
Considering the real-time action associated to the Euclidean one of
(\ref{eq: accion}), we see that a classical solution must satisfy:
\begin{equation}
\ddot \varphi + m^2 \varphi + \frac{1}{\theta^2}(x_{\mu}\star x_{\mu}\star
\varphi)+\frac{1}{\theta^2}(\varphi \star x_{\mu}\star x_{\mu})+\frac{2g}{4!} \varphi
\star \varphi^{*} \star \varphi = 0 \;.
\end{equation}
Using the ansatz
\begin{equation}\label{ansatz}
\varphi_{nk}(x,t)=e^{\dot\imath \Omega_{nk} t} f_{nk}(x)
\;\; ,
\end{equation}
we have a solution to the nonlinear problem if the following dispersion
relation is satisfied:
\begin{equation}
\Omega_{nk}^2=m^2+\frac{2}{\theta}(n+k+1)+\frac{2g}{4!}
\;\; .
\end{equation}
This means that, at the classical level, objects with typical size $\theta$
can be stable (note the difference with the commutative case). There is a vast literature on the subject of solitonic solutions for noncommutative theories, some basic references are \cite{nekrasov} and \cite{Schaposnik:2004rc}.\\
In order to study the quantum corrections, we deal next with
the Euclidean version of the problem.
\subsection{Quantum case}
Consider again the Euclidean action (\ref{eq: accion}). The condition for an extremum
with an ansatz such as (\ref{ansatz}) is
\begin{equation}
\Omega_{nk}^2+m^2+\frac{2}{\theta}(n+k+1)+\frac{2g}{4!}=0 \;.
\end{equation}
If we focus on time-independent solutions, a symmetry-breaking like
potential is needed in order to have an extremum. We will, however, continue
the discussion for a different kind of solution. As may be easily verified,
$\varphi=\eta f_{00}$ is a solution of the equation of motion if:
\begin{equation} \label{eq:condition}
m^2+\frac{2}{\theta}+\frac{2g\eta^2}{4!}=0
\;\; .
\end{equation}
In the same way it is possible to generate more solutions of the form $\varphi=\eta f_{nk}$, with a non-linear condition for the amplitude. We will focus on the fundamental one ($\varphi=\eta f_{00}$) for an explicit analysis.\\
A first question is whether a generating functional (in the path
integral formalism) can be constructed by expanding around this extremum. Next we want to know the dependence
of the vacuum energy on the parameters of the problem. Let us first deal with
the first (stability) condition. The second-order correction about the
extremum of the Euclidean action is parameterized as follows:
\begin{equation}
\frac{1}{2} \left(
\begin{array}{ll}
\chi \\
\chi^*
\end{array}
\right)^{\dagger} \mathbb{H}(\mathcal{S}) \left(
\begin{array}{ll}
\chi \\
\chi^*
\end{array}
\right)
\end{equation}
where $\chi$ is the fluctuation around the non trivial solution,
and $\mathbb{H}(\mathcal{S})$ is the Hessian matrix:
\begin{equation}
\left(
\begin{array}{ll}
\frac{\delta^2 \mathcal{S}}{\delta \varphi_1 \delta \varphi^*_2} &
\frac{\delta^2 \mathcal{S}}{\delta \varphi^*_1 \delta
\varphi^*_2} \\\frac{\delta^2 \mathcal{S}}{\delta \varphi_1 \delta
\varphi_2} & \frac{\delta^2 \mathcal{S}}{\delta \varphi^*_1 \delta
\varphi_2}
\end{array}
\right)
\end{equation}
with the usual notation for kernels. So the consistency condition is
equivalent to checking that all eigenvalues of this matrix are positive.
In fact we already have a basis of eigenvectors $\{\varphi/\varphi=e^{i\Omega
t}f_{nk}(x),~n,k \in \mathbb{N},~\Omega \in \mathbb{R}\}$,
and the eigenvalues are: \begin{displaymath} \left\{
\begin{array}{ll} \Omega^2+m^2+\frac{2}{\theta}(n_1+n_2+1) & n_1,n_2 \geq 1
\\ \\ \Omega^2+m^2+\frac{2}{\theta}(n_1+n_2+1)+\frac{2g\eta^2}{4!} &
n_{1,2}=0, n_{2,1} \geq 1\\
\\\Omega^2+m^2+\frac{2}{\theta}+\frac{6g\eta^2}{4!} & n_1=n_2=0
\end{array}\right.
\end{displaymath}
Using condition (\ref{eq:condition}), the set of eigenvalues is:
\begin{displaymath} \left\{
\begin{array}{ll} \Omega^2+\frac{2}{\theta}(n_1+n_2)-\frac{2g\eta^2}{4!} & n_1,n_2 \geq 1
\\ \\ \Omega^2+\frac{2}{\theta}(n_1+n_2) &
n_{1,2}=0, n_{2,1} \geq 1\\
\\\Omega^2+\frac{g\eta^2}{3!} & n_1=n_2=0,
\end{array}\right.
\end{displaymath}
which are all positive if $g\eta^2<\frac{2\;4!}{\theta}$.\\
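This bound can be probed numerically. The sketch below (Python; the parameter values are arbitrary and satisfy $g\eta^2<\frac{2\;4!}{\theta}$) scans the eigenvalues listed above over a grid of $n_1$, $n_2$ and $\Omega$ and confirms that they are all positive:
\begin{verbatim}
import itertools

theta, g, eta = 1.0, 10.0, 1.5   # arbitrary; g*eta^2 = 22.5 < 2*4!/theta = 48

def eigenvalue(n1, n2, Omega):
    # eigenvalues after imposing condition (eq:condition)
    if n1 == 0 and n2 == 0:
        return Omega**2 + g*eta**2/6.0
    if n1 == 0 or n2 == 0:
        return Omega**2 + (2.0/theta)*(n1 + n2)
    return Omega**2 + (2.0/theta)*(n1 + n2) - 2.0*g*eta**2/24.0

eigs = [eigenvalue(n1, n2, Om)
        for n1, n2 in itertools.product(range(20), repeat=2)
        for Om in (0.0, 0.5, 1.0)]
print(min(eigs) > 0)             # True whenever g*eta^2 < 2*4!/theta
\end{verbatim}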
Now the vacuum energy shift between two sets of parameters associated with the
eigenvalues $\{\lambda'_n(\Omega)\}$ and $\{\lambda_n(\Omega)\}$ can be
evaluated as:
\begin{equation}
\Delta E=\frac{1}{2\pi}\int_{d\Omega}
\sum_{n}\log \Big{(}\frac{\lambda'_n(\Omega)}{\lambda_n(\Omega)}\Big{)} \;\; .
\end{equation}
This shows there is a way of changing the parameters such that the energy remains constant: if we maintain $m$ and $\theta$ constant and the product $g\eta^2$ does not change, then $\Delta E=0$. Note that we can still change the coupling constant $g$ and the amplitude of the solution $\eta$, subject to just this one constraint.
In reference~\cite{0709}, a thorough study of non-trivial vacuum
configurations in (real and complex) scalar models
in $2$ and $4$ spacetime dimensions is presented.
The kind of ansatz that we consider here may be regarded as an embedding
to $2+1$ dimensions, of one of the solutions considered there for the
$2$-dimensional case.
\section{Conclusions}\label{sec:conclusions}
We have shown explicitly that the self-dual model is a super-renormalizable
theory, carrying out the explicit one-loop renormalization procedure, and
evaluating the corresponding contributions to the effective action to that
order. We have also found an approximate expression for the generating
functional of proper vertices, under the assumption $m^2\theta\gg1$.
Besides, some non trivial solutions in the presence of the GW term and a
symmetry breaking potential have been found at classical level, and it was
shown that they are stable under the leading quantum corrections, by
evaluating the exact eigenvalues of the Hessian around those extrema. The
resulting dependence of the vacuum energy on the model's parameters has
also been explicitly found.
\section*{Acknowledgments}
C.~D.~F. was supported by CONICET, ANPCyT and
UNCuyo. G.~A.~M. is supported by a Petroenergy SA - Trafigura studentship at Instituto Balseiro UNCuyo.
We thank Professors A. de Goursac, A. Tanasa and J-C Wallet for pointing
out reference~\cite{0709}, and the relation between our classical
configuration and some non-trivial vacuum configurations considered
there, for even-dimensional models.
\section*{Appendix A}
In this section we will briefly derive the properties of the basis which `diagonalizes' Moyal product: \begin{equation} (f \star g)_{(x)}=f_{(x)} e^{\frac{\dot\imath}{2}
\theta_{\mu \nu}\overleftarrow{\partial_\mu} \overrightarrow{\partial_\nu }}
g_{(x)} \;\; .\end{equation} First we build an operatorial representation of the algebra.
Consider two hermitian operators such that $[x_1, x_2]=i \theta$, and
define the creation and annihilation operators $a$ and $a^{\dagger}$: \begin{equation}
a=\frac{x_1 + i x_2}{\sqrt{2\theta}} \hspace{8mm} [a,a^\dagger]=1 \;\; .
\end{equation} To connect the algebra of functions $\mathcal{G}$ with the algebra of
operators $\mathcal{G}'$ consider a map $\mathcal{S}^{-1}:
f\in\mathcal{G}\rightarrow \mathcal{O}_{(f)}\in \mathcal{G}'$ defined by the
following equation (here time is just a parameter): \begin{equation} \label{eq: symbol}
\mathcal{O}_f(t)=\int_{\bar k\in \mathbb{R}^2}\frac{1}{(2\pi)^2}\hat f(\bar
k,t):e^{i \sqrt{\frac{\theta}{2}}(k^* a+ka^\dagger)}: \hspace{4mm}; k=k_1+i
k_2 \;\; , \end{equation} where $::$ denotes normal ordering and $\hat f$ is the usual Fourier
transform: \begin{equation} \hat f(\bar k,t)=\int_{\bar x\in \mathbb{R}^2} f(\bar x,t)
e^{-i \bar k_j \bar x_j} \;\; . \end{equation} It is the work of a moment to verify the properties:
\begin{itemize}
\item \begin{equation} \label{eq: Moyal_ope} \mathcal{O}_{f\star
g}=\mathcal{O}_f\mathcal{O}_g \end{equation}
\item \begin{equation} Tr(\mathcal{O}_f)=\frac{1}{2\pi \theta}\int_{x} f(x, t)
\end{equation}
\item \begin{equation} \mathcal{O}_{(\partial_{x_1} f)}=[\frac{\dot\imath}{\theta} x_2,
\mathcal{O}_f] \hspace{4mm} \mathcal{O}_{(\partial_{x_2}
f)}=[-\frac{\dot\imath}{\theta} x_1, \mathcal{O}_f] \;\; .\end{equation}
\end{itemize}
So the Moyal product in the algebra of functions is mapped to composition of
operators. On the other hand, there is a special class of operators that allow
a very easy way to perform composition, namely the ones which have the form
$\ket{n}\bra{k}$. So if we know $f_{nk}\in
\mathcal{G}/\mathcal{O}_{(f_{nk})}=\ket{n}\bra{k}$ we would have \begin{equation} f_{nk}
\star f_{k'n'}=\delta_{kk'} f_{nn'} \hspace{4mm} f_{nk}^{*}=f_{kn} \; , \end{equation}
because of equation (\ref{eq: Moyal_ope}). This is the basis we mentioned above. To get an explicit form of the condition
$\mathcal{O}_{f_{nk}}=\ket{n}\bra{k}$ it is enough to take matrix elements in equation
(\ref{eq: symbol}) and use the fact that associated Laguerre polynomials ($\mathbb{L}^{(n-j)}_j$) are complete
(very useful identities can be found in \cite{gango}). Looking at the
coefficients we find that the Fourier transform of such a function in
polar coordinates is: \begin{equation} \hat f_{nj(\rho,\varphi)}=2\pi\theta
\sqrt{\frac{j!}{n!}}(i \sqrt{\frac{\theta}{2}})^{j-n} e^{-i \varphi (n-j)}
\rho^{j-n} e^{-\frac{\theta \rho^2}{4}} \mathbb{L}^{(n-j)}_j(\frac{\theta
\rho^2}{2}) \;\; ,\end{equation} so for example a diagonal one is a Gaussian times a
polynomial \begin{equation} \hat f_{nn(k)}=2\pi\theta \; e^{-\frac{\theta k^2}{4}}
\; \mathbb{L}^{(n)}\big{(}\frac{\theta k^2}{2}\big{)} \;\; . \end{equation}
\section{Introduction}
\label{section:introduction}
The problem of keeping the state of a dynamical system in a compact subset of its state space over an infinite time horizon in the presence of uncertainties was introduced more than four decades ago \cite{bertsekas1971minimax,Bertsekas72a}. Since then, \textit{reachability} analysis has been exploited in different applications, e.g., it is used in \textit{model checking} and \textit{safety verification} to ensure the system does not enter an unsafe region, some specific situations are avoided, and some properties for an acceptable design are met \cite{prajna2004safety,bouajjani1997reachability,gillula2010design}. Reachability analysis has several applications in \textit{model predictive control} such as \textit{terminal set} design, \textit{persistence feasibility} \cite{rawlings2008unreachable}, and robust invariant set computation for initial conditions \cite{gupta2019full}.
Recently, progress in wireless communication technologies has provided new opportunities but also new challenges to control theorists. On the one hand, the use of communication in the control loop has several benefits, such as reduced system wiring, ease of maintenance and diagnosis, and ease of implementation \cite{zhang2001stability}. On the other hand, wireless links are subject to noise, time varying delay, packet loss, jitter, limited bandwidth, and quantization errors, which are not negligible in the stability and performance analysis. Feedback control systems that are closed through a network are called \textit{networked control systems} (NCS). See \cite{zhang2016survey} for recent advances in the field.
An interesting subset of NCS problems, which has received little attention so far, regards a scenario where a central decision maker is in charge of keeping the states of a set of independent systems within given admissible sets, receiving measurement data and deciding which local controller(s) should receive state measurement update(s) through the common communication channel(s), see Fig.~\ref{fig:network}.
\begin{figure}[t]
\centering
\includegraphics[width=1\columnwidth]{network2.png}
\caption{A set of agents sharing a common channel to exchange state measurements data based on decisions of the Central Scheduler.}
\label{fig:network}
\end{figure}
This is a model, for instance, of remotely sensed and actuated robotic systems based on CAN communication or, as we will discuss in our application example, of remote multi-agent control setups for intelligent transportation systems field testing. This scenario shares some similarities with event-driven control~\cite{Heemels12a}. However, in our case, the core problem is to guarantee invariance of the admissible sets despite the communication constraints, rather than to ensure stability while minimizing communication costs, which does not explicitly guarantee satisfaction of hard constraints. As we will see, this shift in focus brings about a corresponding shift in the set of available tools.
We establish a connection between the control design and classical scheduling problems in order to use available results from the scheduling literature. These scheduling problems are the \textit{pinwheel problem} (PP) and the \textit{windows scheduling problem} (WSP)~\cite{Chan92a,Holte89a,bar2003windows}. Both problems have been extensively studied and, though they are NP-hard, several heuristic algorithms have been proposed, which are able to solve a significant amount of problem instances~\cite{bar2002minimizing,bar2003windows,bar2007windows}.
In this paper we target a reachability and safety verification problem, in discrete time, for the described NCS. With respect to a standard reachability problem, the limitations of the communication channel imply that only a subset of the controllers can receive state measurements and/or transmit the control signals at any given time. Therefore, a suitable \textit{scheduler} must be designed concurrently with the control law to guarantee performance. We formalize a general model for the control problem class, and propose a heuristic that solves the schedule design problem exploiting its analogy with PP. Furthermore, we show that in some cases, our problem is equivalent to either WSP or PP. This gives us a powerful set of tools to co-design scheduling and control algorithms, and to provide guarantees on persistent schedulability. Building on these results, we propose online scheduling techniques to improve performance and cope with lossy communication channels.
The rest of the paper is organized as follows. In Section~\ref{section:problem-statement}, the problem is formulated. Relevant mathematical background is stated in Section~\ref{section:math-background}. Our main contributions are divided into an offline and an online strategy, presented in Section~\ref{section:results_offline} and Section~\ref{section:results_online}, respectively. Examples and numerical simulations are provided in Section~\ref{section:numerics}.
\section{Problem statement}
\label{section:problem-statement}
In this section, we define the control problem for uncertain constrained multi-agent NCS and provide the background knowledge needed to support Section~\ref{section:results_offline}. We first formulate the problem in general terms; afterwards, we provide some examples with different network topologies.
\subsection{Problem Formulation}
For each agent, $x$, $\hat{x}$ and $u$ denote the state of the plant, the state of the observer and the state of the controller, respectively. Note that in NCS the controller typically has memory to cope with packet losses. The discrete-time dynamics of the agent is described by
\begin{equation}
\label{eq:general_system}
z^+=
\begin{cases}
F\big(z,v\big), &\textrm{if } \delta = 1,\\
\hat{F}\big(z,v\big), &\textrm{if } \delta = 0,\\
\end{cases}
\end{equation}
with
\begin{equation}
F:=\begin{pmatrix}
f\big(z,v\big)\\
g\big(z\big)\\
c\big(z\big)
\end{pmatrix}, ~
\hat{F}:=\begin{pmatrix}
\hat{f}\big(z,v\big)\\
\hat{g}\big(z\big)\\
\hat{c}\big(z \big)
\end{pmatrix}, ~
z:=\begin{pmatrix}
x\\
\hat{x}\\
u
\end{pmatrix},
\end{equation}
where $z \in\m{X}\times \m{X}\times \m{U}\subseteq\mathbb{R}^{n}\times\mathbb{R}^{n}\times\mathbb{R}^{r}$ and
$v \in \m{V} \subset \mathbb{R}^{p}$ denotes an external disturbance. The two dynamics correspond to the \textit{connected mode}, i.e., $\delta=1$ and the \textit{disconnected mode}, i.e., $\delta=0$. The latter models the case in which the agent cannot communicate though the network and evolves in open loop.
We consider $q$ agents of the form \eqref{eq:general_system}, possibly with different functions $F_i$, $\hat{F}_i$, $i\in\{1,\ldots,q\}$, and with state spaces of different sizes. We use the set $\boldsymbol{\mathcal{C}}=\{C_j\},~j \in \{1,\ldots,l\}$ of \textit{connection patterns} to represent the sets of agents that can be connected simultaneously.
A connection pattern is an ordered tuple
\begin{equation}
\label{eq:binary_vectors}
C_j:=(c_{1,j},\ldots,c_{m_j,j}),
\end{equation}
with~$c_{k,j} \in \{1,\ldots,q\}$, $m_j \leq q$, and
\begin{equation}
i\in C_j \Leftrightarrow \delta_i=1 \textrm{ in connection pattern } j.
\end{equation}
For example, $C_1=(1,5)$ means that agents $1$ and $5$ are connected when $C_1$ is chosen.
We can now formulate the control task as follows.
\begin{problem}
\label{prob:the_problem}
Design a communication allocation which guarantees that the agent state $z$ remains inside an \textit{admissible} set $\m{A}\subset \m{X}\times\m{X}\times\m{U}$ at all time.
\end{problem}
\subsection{Examples of models belonging to framework \eqref{eq:general_system}}
The modeling structure introduced in the previous subsection is a general framework for modeling a number of different NCS with limited capacity in the communication links between controller, plant, and sensors.
As a special case, the two dynamics in~\eqref{eq:general_system} could describe the evolution of a NCS, together with a predictor and a dynamic feedback controller. The connection signal~$\delta$, set by a central coordinator in a \emph{star} communication topology (e.g., cellular network), can describe (see Figure~\ref{fig:system_1})
\begin{itemize}
\item sensor-controller (SC) networks: $\delta_u=1, \ \delta_s =\delta$;
\item controller-actuator (CA) networks: $\delta_u=\delta,\ \delta_s =1$;
\item sensor-controller-actuator (SCA) networks: $\delta_u=\delta_s =\delta$.
\end{itemize}
For the three cases and in a linear setting, the functions~$F,~\hat{F}$ can be written as in the following examples.
\begin{example}[Static state-feedback, SC network]
\label{Exp:case i}
\begin{equation}
\label{eq:sc_network}
F=\begin{pmatrix}
Ax+BKx+Ev\\
Ax+BKx\\
\emptyset
\end{pmatrix},~\hat{F}=\begin{pmatrix}
Ax+BK\hat{x}+Ev\\
A\hat{x}+BK\hat{x}\\
\emptyset
\end{pmatrix}.
\end{equation}
\end{example}
\begin{example}[Static state-feedback, CA network]
\begin{multline}
F=\begin{pmatrix}
\begin{pmatrix}
A & 0\\
0 & 0
\end{pmatrix}x+
\begin{pmatrix}
B\\1
\end{pmatrix}Kx+
\begin{pmatrix}
E\\0
\end{pmatrix}v\\
\emptyset\\
\emptyset
\end{pmatrix},\\
\hat{F}=\begin{pmatrix}
\begin{pmatrix}
A & B\\
0 & 1
\end{pmatrix}x+
\begin{pmatrix}
E\\0
\end{pmatrix}v\\
\emptyset\\
\emptyset
\end{pmatrix},
\end{multline}
where $x$ is $n+1$ dimensional if $A$ is $n\times n$, and $K_{n+1}=0$. Element $n+1$ is a memory variable, used to implement a hold function on the last computed input, when no new state information is available.
\end{example}
\begin{example}[Dynamic state-feedback, SCA network]
\begin{multline}
F=\begin{pmatrix}
\begin{pmatrix}
A & 0\\
0 & 0
\end{pmatrix}x+
\begin{pmatrix}
B\\1
\end{pmatrix}C_{c} u+
\begin{pmatrix}
E\\0
\end{pmatrix}v\\
Ax+BC_{c}u\\
A_{c}u+B_{c}x
\end{pmatrix},\\
\hat{F}=\begin{pmatrix}
\begin{pmatrix}
A & B\\
0 & 1
\end{pmatrix}x+
\begin{pmatrix}
E\\0
\end{pmatrix}v\\
A\hat{x}+BC_{c}u\\
A_{c}u+B_{c}\hat{x}
\end{pmatrix},
\end{multline}
where $x$ is $n+1$ dimensional if $A$ is $n\times n$, and $K_{n+1}=0$. Element $n+1$ is a memory variable, used to implement a hold function on the last computed input, when no new state information is available.
\end{example}
\begin{figure}[t]
\centering
\includegraphics[width=0.5\columnwidth]{system_1.png}
\caption{An example of a system described by the model~\eqref{eq:general_system}.}
\label{fig:system_1}
\end{figure}
In all of the above cases, the decision variable $\delta$ is selected by a central scheduler. This might have access to the exact state, or might be subject to the same limitations on state information as the controller. As we discuss in Section \ref{Sec_online_subsec_online}, the available information influences the scheduler design.
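As an illustration of the first of these cases, the following sketch (Python with \texttt{numpy}) implements the switched update \eqref{eq:sc_network}; the matrices $A$, $B$, $E$, $K$, the disturbance bound and the connection sequence are placeholder values chosen only for the example:
\begin{verbatim}
import numpy as np

# illustrative placeholder data (not taken from the paper)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.0], [0.1]])
E = np.array([[0.0], [0.1]])
K = np.array([[-2.0, -3.0]])           # static gain, u = K x (chosen ad hoc)

def step(x, xhat, delta, v):
    """One step of the SC network (eq:sc_network)."""
    if delta == 1:                      # connected: controller sees the true state
        u = K @ x
        x_next = A @ x + B @ u + E @ v
        xhat_next = A @ x + B @ u
    else:                               # disconnected: controller uses the predictor
        u = K @ xhat
        x_next = A @ x + B @ u + E @ v
        xhat_next = A @ xhat + B @ u
    return x_next, xhat_next

x = xhat = np.array([1.0, 0.0])
rng = np.random.default_rng(0)
for t in range(20):
    delta = 1 if t % 3 == 0 else 0      # this agent is connected once every 3 steps
    v = rng.uniform(-0.1, 0.1, size=1)
    x, xhat = step(x, xhat, delta, v)
print(x, xhat)
\end{verbatim}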
\section{Mathematical background}
\label{section:math-background}
In order to translate control problem~\ref{prob:the_problem} into a scheduling problem, we will define the concept of \emph{safe time interval} by relying on robust invariance and reachability analysis.
Set $\m{S}\subset\m{X}\times\m{X}\times\m{U}$ is \textit{robust invariant} for \eqref{eq:general_system} in connected mode, i.e., $\delta=1$, if
\begin{align}
F(z,v) \in \mathcal{S} ,&& \,\forall\, z \in \m{S},~ v\in\mathcal{V}.
\end{align}
Any robust invariant set $\mathcal{S}$ contains all forward-time trajectories of the agent \eqref{eq:general_system} in the connected mode, provided $z(0) \in\m{S}$, regardless of the disturbance $v$.
Let $\{\m{S}\}$ be the set of all robust invariant sets of \eqref{eq:general_system} that are contained in the admissible set $\m{A}$. We call $\MR{} \in \{\mathcal{S}\}$ the \textit{maximal robust invariant} set:
\begin{align}
\mathcal{S} \subseteq \mathcal{S}_{\infty},&& \forall \mathcal{S} \in \{\mathcal{S}\}.
\end{align}
We define the \emph{$1$-step reachable set} as the set of states $z$ that can be reached in \emph{one step} from a set of initial states~$\mathcal{O}$ with dynamics $H \in \{F,\hat{F}\}$:
\begin{align}\label{eq:reach}
\Reach{H}{1}{\mathcal{O}}:=\{H(z,v): ~ z \in \m{O}, ~ v \in \m{V}\}.
\end{align}
The $t$-step reachable set, $t=2,\ldots$ is defined recursively as
\begin{align}\label{eq:reach_t_steps}
\Reach{H}{t}{\mathcal{O}}:=\Reach{H}{1}{\Reach{H}{t-1}{\mathcal{O}}}.
\end{align}
Numerical tools for the calculation of~$\mathcal{S}_\infty$ and $\Reach{H}{t}{\mathcal{O}}$ can be found in~\cite{borrelli2017predictive}, for linear $H$.
\begin{definition}[Safe time interval, from \cite{ECC}]
\label{def:safe_time}
We define the \emph{safe time interval} for agent $i$ as
\begin{multline}\label{eq:safe_time}
\alpha_i:=1+\max_{\tau} \Big{\{} \tau : \Reach{\hat{F}_i}{\tau}{\Reach{F_i}{1}{\MR{i}}} \subseteq \MR{i} \Big{\}}.
\end{multline}
\end{definition}
Essentially, $\alpha_i$ counts the amount of time steps during which agent $i$ can be disconnected while maintaining its state in $\MR{i}$, provided that its initial state is in $\MR{i}$.
Note that, by definition of $\MR{i}$, agent $i$ remains in $\MR{i}$ for all future times when connected.
The following example illustrates the effect of the disturbance and of missing measurement updates on the reachable sets and on the safe time interval, in a system with static feedback structured as an SC network.
\begin{example}\label{ex:alpha_sets}
Consider an agent described by
\begin{equation}
x(t+1)=Ax(t)+Bu(t)+Ev(t)
\end{equation}
where
\begin{equation}
A=\begin{bmatrix}
1 & 0.5\\-0.5 & 1
\end{bmatrix}, \quad B=E=\begin{bmatrix}
0\\1
\end{bmatrix},
\end{equation}
with admissible sets
\begin{equation}
\mathcal{X}=\Big{\{} x \in \mathbb{R}^2: \begin{bmatrix}
-2\\-2
\end{bmatrix} \leq x \leq \begin{bmatrix}
2\\2
\end{bmatrix} \Big{\}}
\end{equation}
and
\begin{equation}
\mathcal{U}= \{u \in \mathbb{R} : |u| \leq 5 \} ,~ \mathcal{V}=\{ v \in \mathbb{R} : |v| \leq 0.45 \} ,
\end{equation}
and $u(t)=-K\hat{x}(t)$. The state is estimated according to
\begin{equation}
\hat{x}(t)=\begin{cases}
x(t),& \text{if }C(t)=1\\
A\hat{x}(t-1)+Bu(t-1), & \text{if } C(t)=0
\end{cases}
\end{equation}
and the control gain is
\begin{equation}
K=\begin{bmatrix}
0.2263 & 1.2988
\end{bmatrix}.
\end{equation}
Consider an SC network, i.e., $\delta_u=1$ and $\delta_s(t)=C(t),~\forall t>0$. Furthermore, assume the agent is connected at $t=0$, i.e., $C(0)=1$, and disconnected afterwards, i.e., $C(t)=0, \forall t>0$. This implies $\hat{x}(0)=x(0)$ and
\begin{equation}
\hat{F}=\begin{bmatrix}
Ax(t)-BK\hat{x}(t)+Ev(t)\\
A\hat{x}(t)-BK\hat{x}(t)\\
\emptyset
\end{bmatrix}.
\end{equation}
The sets of states which can be reached in $t$ steps are displayed in Figure~\ref{fig:reachable_sets} where notation $\mathcal{X}_t$ indicates the reachable set for $x(t)$.
One can observe that in this example $\mathcal{X}_4 \not \subseteq \mathcal{S}_{\infty}$, which implies $\alpha=3$.
\begin{figure}
\centering
\includegraphics[width=0.7\columnwidth]{fig3_reachable_set}
\caption
{
Maximum robust invariant set and $\mathcal{X}_t=\Reach{\hat{F}}{t}{\mathcal{S}_{\infty}}$ in Example~\ref{ex:alpha_sets} projected along $x$.
}
\label{fig:reachable_sets}
\end{figure}
\end{example}
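The role of the safe time interval can also be illustrated by simulation. The sketch below (Python with \texttt{numpy}; an empirical illustration, not a substitute for the set computations above) propagates the closed loop of Example~\ref{ex:alpha_sets} from $x(0)=\hat{x}(0)=0\in\mathcal{S}_{\infty}$ with randomly signed extreme disturbances, reconnecting the agent every $T$ steps; in the sampled runs the admissible bound $\|x\|_{\infty}\leq 2$ is respected for $T\leq\alpha=3$, consistently with the invariance guarantee, while no such guarantee is available for larger $T$:
\begin{verbatim}
import numpy as np

A = np.array([[1.0, 0.5], [-0.5, 1.0]])
B = np.array([[0.0], [1.0]])
E = np.array([[0.0], [1.0]])
K = np.array([[0.2263, 1.2988]])

rng = np.random.default_rng(1)

def worst_excursion(T, runs=200, horizon=60):
    """Largest |x|_inf observed when the agent is reconnected every T steps."""
    worst = 0.0
    for _ in range(runs):
        x = np.zeros(2)                 # x(0) = 0 lies in S_infinity
        xhat = x.copy()
        u = np.zeros(1)
        for t in range(horizon):
            if t % T == 0:              # connected: measurement update
                xhat = x.copy()
            else:                       # disconnected: open-loop prediction
                xhat = A @ xhat + B @ u
            u = -(K @ xhat)
            v = 0.45*rng.choice([-1.0, 1.0], size=1)   # extreme disturbance values
            x = A @ x + B @ u + E @ v
            worst = max(worst, np.max(np.abs(x)))
    return worst

for T in (1, 2, 3, 4, 5):
    print(T, worst_excursion(T))
\end{verbatim}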
The task of keeping the state of each agent in its admissible set can now be formulated as follows.
\begin{problem}[P\ref{pr:schedulability}]
\label{pr:schedulability}
Given the set of $q$ agents, each described by \eqref{eq:general_system}, an admissible set $\m{A}:= \m{A}_1\times\ldots\times\m{A}_q$, and the set $\boldsymbol{\mathcal{C}}$ of connection patterns \eqref{eq:binary_vectors}, determine if there exists an infinite sequence over the elements of $\boldsymbol{\mathcal{C}}$ such that,
\begin{multline}
z_i(t) \in \MR{i},~\forall z_i(0) \in \MR{i}, \ v_i(t)\in\m{V}_i,\\
i\in\{1,\ldots,q\},\ t>0.
\end{multline}
\end{problem}
In other words, we seek an infinite sequence of connection patterns
\begin{equation}
\label{eq:feasibleSchedule}
\mathbf{C}:= \big{(}C(0),~C(1),\ldots\big{)},
\end{equation}
with~$C(t)\in \boldsymbol{\mathcal{C}}$ that keeps $(x,\hat{x},u)$ of all $q$ agents within $\MR{}$ despite the fact that, due to the structure of set $\boldsymbol{\mathcal{C}}$, at each time step some agents are disconnected. Note that the set $\boldsymbol{\mathcal{C}}$ is assumed to be fixed and given \textit{a priori}, e.g., based on the network structure.
A \textit{schedule} solving P\ref{pr:schedulability} is any sequence of $C_j$ such that every agent $i$ is connected at least once every $\alpha_i$ steps. Instance $I:=\{\boldsymbol{\mathcal{C}},\{\alpha_i\} \}$ is accepted, denoted
\begin{equation}
\label{eq:instanceP1}
I \in \text{P\ref{pr:schedulability}},
\end{equation}
if and only if a schedule $\mathbf{C}$ exists that satisfies P\ref{pr:schedulability}.
In order to find a feasible schedule for P\ref{pr:schedulability}, we will translate this problem into a PP or WSP. In this section, we formally introduce these two problems and discuss their fundamental properties.
\subsection{The Pinwheel Problem}
\begin{PinwheelProblem}[From \cite{Han96a}]
Given a set of integers $\{\alpha_i\}$ with $\alpha_i\geq 1$, determine the existence of an infinite sequence of the symbols $1,\ldots,q$ such that there is at least one symbol $i$ within any subsequence of $\alpha_i$ consecutive symbols.
\end{PinwheelProblem}
A schedule solving PP can be defined by using the notation in Section~\ref{section:problem-statement}, as
\begin{equation}
\mathbf{C}:=c(1),c(2),\ldots~,
\end{equation}
with $c(t)\in \{1,\ldots,q\}$.
\begin{definition}[Feasible schedule]
\label{def:Schedule}
A schedule $\mathbf{C}$ that solves a schedulability problem is called a \textit{feasible schedule} for that problem.
\end{definition}
Instance $I:=\{\alpha_i\}$ is accepted by PP, denoted
\begin{equation}
\label{eq:instancePP}
I\in \text{PP},
\end{equation}
if and only if a feasible schedule $\mathbf{C}$ exists for the problem.
Conditions for schedulability, i.e., existence of a feasible solution of PP, have been formulated in terms of the $\textit{density}$ of a problem instance $I$, defined as
\begin{equation}
\rho(I):=\sum_i \frac{1}{\alpha_i}.
\end{equation}
\begin{theorem}[Schedulability conditions]
\label{thm:schedulability_threshold}
Given an instance $I:=\{\alpha_i\}$ of PP,
\begin{enumerate}
\item if $\rho(I)> 1$ then $I \notin \text{PP}$,
\item if $\rho(I)\leq 0.75$ then $I \in \text{PP}$,
\item if $\rho(I)\leq \frac{5}{6}$ and there exists $i:\alpha_i=2$ then $I \in \text{PP}$,
\item if $\rho(I)\leq \frac{5}{6}$ and $I$ has only three symbols then $I \in \text{PP}$,
\item if $\rho(I)\leq 1$ and $I$ has only two symbols $\alpha_1$ and $\alpha_2$ then $I \in \text{PP}$.
\end{enumerate}
\end{theorem}
\begin{proof}
Condition $1$ is a simple consequence of the definition of density: if $\rho(I)>1$ there are not enough time slots to schedule all symbols $\{\alpha_i \}$. Conditions $2$ and $3$ are proved in \cite{Fishburn02a}. Conditions $4$ and $5$ are proved in \cite{Chen04a}.
\end{proof}
It has been conjectured that any instance of PP with $\rho(I) \leq \frac{5}{6}$ is schedulable; however, this conjecture has not been proved yet \cite{Chan92a}. Whether a general instance of PP with $\frac{5}{6}<\rho(I) \leq 1$ is schedulable cannot be determined from the density $\rho(I)$ alone (e.g., the instance $\{2,2\}$ with $\rho=1$ is schedulable, whereas $\{2,3,12\}$ with $\rho=\frac{11}{12}$ is not). Furthermore, determining the schedulability of dense instances, i.e., when $ \rho(I) = 1$, is NP-hard in general \cite{Holte89a}.
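For concreteness, the density conditions of Theorem~\ref{thm:schedulability_threshold} are easy to check programmatically. The following Python sketch (ours; the function names are not part of any scheduling library) evaluates $\rho(I)$ with exact rational arithmetic and applies the conditions of the theorem, returning \texttt{None} when the density alone is inconclusive.
\begin{verbatim}
from fractions import Fraction

def density(alphas):
    # rho(I) = sum_i 1/alpha_i, computed exactly
    return sum(Fraction(1, a) for a in alphas)

def pinwheel_density_test(alphas):
    rho = density(alphas)
    if rho > 1:
        return False                              # condition 1
    if rho <= Fraction(3, 4):
        return True                               # condition 2
    if rho <= Fraction(5, 6) and 2 in alphas:
        return True                               # condition 3
    if rho <= Fraction(5, 6) and len(alphas) == 3:
        return True                               # condition 4
    if len(alphas) == 2:                          # condition 5 (rho <= 1 holds here)
        return True
    return None                                   # undecided by density alone

print(pinwheel_density_test([2, 2]))      # True (condition 5)
print(pinwheel_density_test([2, 3, 12]))  # None (rho = 11/12, undecided)
\end{verbatim}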
Since a schedule for PP is an infinite sequence of symbols, the scheduling search space is also infinite dimensional. Fortunately, the following theorem alleviates this issue.
\begin{theorem}[Theorem 2.1 in \cite{Holte89a}]
All instances of PP that admit a schedule admit a \textit{cyclic schedule}, i.e., a schedule whose symbols repeat periodically.
\end{theorem}
\subsection{The Windows Scheduling Problem}
WSP is a more general version of PP, in which multiple symbols can be scheduled at the same time. We refer to the parallel strings of symbols that constitute a windows schedule as \textit{channels}.
\begin{WindowsSchedulingProblem}[From \cite{bar2003windows}]
Given the set of integers $\{\alpha_i\}$ with $\alpha_i\geq 1$, determine the existence of an infinite sequence of ordered tuples with $m_{\mathrm{c}} \geq 1$ elements of the set $\{1,\ldots,q\}$ such that there is at least one tuple that contains the symbol $i$ within any subsequence of $\alpha_i$ consecutive tuples.
An instance $\{m_{\mathrm{c}},\{\alpha_i\}\}$ of WSP is accepted, and denoted as
\begin{equation}
\label{eq:instanceWSP}
I=\{m_{\mathrm{c}},\{\alpha_i\}\} \in \text{WSP},
\end{equation}
if and only if a feasible schedule
\begin{equation}
\mathbf{C}=C(1),C(2),\ldots
\end{equation}
with
\begin{equation}
C(t)=\left( c_1(t),\ldots,c_{m_{\mathrm{c}}}(t) \right)
\end{equation}
exists for the problem.
\end{WindowsSchedulingProblem}
WSP is equivalent to PP when $m_{\mathrm{c}}=1$. Similarly to PP, if a schedule for the WSP exists, then a cyclic schedule exists.
Furthermore, the following schedulability conditions are known.
\begin{theorem}
\label{thm:schedulability_thresholdmc}
Given an instance $I=\{m_{\mathrm{c}},\{\alpha_i\}\}$ of WSP,
\begin{enumerate}
\item if $\rho(I)> m_{\mathrm{c}}$ then $I \notin \text{WSP}$ ,
\item if $\rho(I)\leq 0.5m_{\mathrm{c}}$ then $I \in \text{WSP}$.
\end{enumerate}
\end{theorem}
\begin{proof}
Condition 1) is a direct consequence of the definition of the schedule density; condition 2) is proved in Lemmas 4 and 5 of \cite{bar2003windows}.
\end{proof}
The results on WSP used next rely on special schedules of a particular form, defined as follows.
\begin{definition}[Migration and perfect schedule, from \cite{bar2003windows,bar2007windows}]
A \textit{migrating} symbol is a symbol that is assigned to different channels at different time instants of a schedule. A schedule with no migrating symbols is called a \textit{perfect schedule}.
\end{definition}
An instance $I:=\{m_{\mathrm{c}},\{\alpha_i\}\}$ of WSP is accepted with a perfect schedule if and only if a feasible schedule $\mathbf{C}$ exists for the problem such that
\begin{equation}
c_i(t_1)=c_k(t_2) \implies i=k,
\label{eq:perfect_schedule_condition}
\end{equation}
for any $ i,k \in \{1,\ldots,m_{\mathrm{c}}\}$ and $t_1,t_2 \in \mathbb{N}$; we denote this as:
\begin{equation}
\label{eq:instanceWSP_Pf}
I \in \text{WSP-perfect}.
\end{equation}
Equation~\eqref{eq:perfect_schedule_condition} ensures that agents do not appear on different channels of the schedule.
\section{Main Results: offline scheduling}
\label{section:results_offline}
In this section, we provide theoretical results and algorithms to solve P\ref{pr:schedulability}. In Subsection~\ref{subsec_V_A}, P\ref{pr:schedulability} is considered in the most general form and we prove that the problem is decidable, i.e., there is an algorithm that determines whether an instance is accepted by the problem \cite{margenstern2000frontier}. In Subsection~\ref{subsec_V_B}, we provide a heuristic to find a feasible schedule. In the last subsection, we consider a fixed number of communication channels. In this case, we show that the scheduling problem is equivalent to the WSP. We propose a technique to solve the scheduling problem in this case and illustrate the merits of the proposed heuristic with respect to the existing ones. We also refute a standing conjecture regarding perfect schedules in WSP~\cite{bar2007windows}.
\subsection{Solution of P\ref{pr:schedulability}}
\label{subsec_V_A}
In this subsection we show that P\ref{pr:schedulability} is decidable by showing that if there exists a feasible schedule for the problem, then there also exists a cyclic schedule with bounded period. Finally, we provide an optimization problem to find a feasible cyclic schedule.
Consider a sequence $\mathbf{C}$ as the schedule for P\ref{pr:schedulability}, and define the sequence
\begin{equation}
\mathbf{D}:=D(1),D(2),\ldots
\label{eq:def_D}
\end{equation}
with the vector $D(t)$ defined as
\begin{equation}
\label{Ctilde}
D(t):=\left( d_1(t),d_2(t),\ldots,d_q(t)\right),
\end{equation}
where $d_i(t):=t-\tau_i^{\mathbf{C}}(t)$, and the \textit{latest connection time} $\tau_i^{\mathbf{C}}(t)$ is defined as:
\begin{equation}
\tau_i^{\mathbf{C}}(t):=\max \{t^\prime\le t:~ i \in C(t^\prime)\},
\label{eq:TauLastMeasurement}
\end{equation}
with the convention that $\tau_i^{\mathbf{C}}(t):=0$ when the above set is empty.
\begin{lemma}
\label{schedule_Cbar}
The schedule $\mathbf{C}$ is feasible for P\ref{pr:schedulability} if and only if $0 \leq d_i(t) \leq \alpha_i -1$, $\forall i \in \{1,\ldots,q\},~ \forall t>0$.
\end{lemma}
\begin{proof}
If $\mathbf{C}$ is a feasible schedule, then $0 \leq d_i(t) \leq \alpha_i -1$ for all $i \in \{1,\ldots,q\}$ and all $t>0$ by construction. Conversely, if $0 \leq d_i(t) \leq \alpha_i -1$ holds for all $i$ and all $t>0$, then agent $i$ is connected at least once every $\left( 1+\max_t d_i(t) \right) \leq \alpha_i$ time instants. Therefore, $\mathbf{C}$ is a feasible schedule.
\end{proof}
\begin{theorem}
\label{thm:Cyclic_schedulability}
Consider the set of integers $\{ \alpha_i \}$ defined in \eqref{eq:safe_time}. If P\ref{pr:schedulability} has a feasible schedule $\mathbf{C}$, then it has a cyclic schedule whose period is no greater than $m=\prod_{i=1}^q \alpha_i$.
\end{theorem}
\begin{proof}
We define $\mathbf{D}$ as in \eqref{Ctilde}, so that $ 0 \leq d_i(t) \leq \alpha_i -1 $ holds by Lemma~\ref{schedule_Cbar}. Hence, each $d_i(t)$ can take no more than $\alpha_i$ different values, which implies that $D(t)$ can take at most $m~:=~\prod_{i=1}^q \alpha_i$ different values. By the pigeonhole principle,
\begin{equation}
\exists ~t_1,t_2:~D(t_1)=D(t_2),~m \leq t_1 < t_2 \leq 2m.
\end{equation}
Now, consider the sequence
\begin{equation}
\mathbf{C}_{\mathrm{r}}:=\big{(}C(t_1),C(t_1+1),\ldots,C(t_2-1)\big{)}
\end{equation}
as the cyclic part of the cyclic schedule $\mathbf{C}_{\mathrm{c}}$ for P\ref{pr:schedulability}, defined as
\begin{equation}
\mathbf{C}_c:=\big{(} \mathbf{C}_{\mathrm{r}}, \mathbf{C}_{\mathrm{r}},\ldots\big{)}.
\end{equation}
Define $\mathbf{D}_{\mathrm{c}}$ as in \eqref{eq:def_D} for the new schedule $\mathbf{C}_{\mathrm{c}}$. One can conclude that
\begin{equation}
D_{\mathrm{c}}(\tau) \leq D(\tau+t_1-1), \qquad \forall \tau \in \{1,\ldots,t_2-t_1\},
\end{equation}
since for any $ i \in \{1,\ldots,q\}$ we have
\begin{equation}
{d_{\mathrm{c}}}_i(\tau)=\tau-\tau_i^{\mathbf{C}_{\mathrm{c}}}(\tau)=(\tau+t_1-1)-\big(\tau_i^{\mathbf{C}_{\mathrm{c}}}(\tau)+t_1-1\big)\leq d_i(\tau+t_1-1).
\end{equation}
Furthermore, $D(t_1)=D(t_2)$ implies that every agent $i \in \{1,\ldots,q\}$ is connected within $\mathbf{C}_{\mathrm{r}}$. As a result, ${d_{\mathrm{c}}}_i(t_2-t_1)=d_i(t_2-1)$. This implies
\begin{equation}
{d_{\mathrm{c}}}_i(k(t_2-t_1)+\tau)=d_i(t_1-1+\tau), \qquad k \in \mathbb{N}.
\end{equation}
Since $d_i(t)\leq \alpha_i-1$ holds for any $t>0$, then ${d_{\mathrm{c}}}_i(t) \leq \alpha_i-1$ also holds for any $t>0$. Inequality ${d_{\mathrm{c}}}_i(t) \leq \alpha_i-1$ implies that $\mathbf{C}_c$ is a feasible schedule by Lemma~\ref{schedule_Cbar}.
\end{proof}
Theorem~\ref{thm:Cyclic_schedulability} implies that a feasible schedule can always be searched for within the finite set of cyclic schedules of a length no greater than $m$. An important consequence of this theorem is the following.
\begin{theorem}
P\ref{pr:schedulability} is decidable.
\end{theorem}
\begin{proof}
By Theorem~\ref{thm:Cyclic_schedulability}, the search can be restricted to the finite set of cyclic schedules of period at most $m$, which can be enumerated in finite time.
\end{proof}
Theorem~\ref{thm:Cyclic_schedulability} allows us to solve P\ref{pr:schedulability} by solving the following optimization problem, which searches for a feasible periodic schedule among all cyclic schedules whose period $T_{\mathrm{r}}$ does not exceed the bound $m$.
\begin{subequations}
\label{eq:qnary_formulation_exact_solution}
\begin{align}
\min_{C(1),\ldots,C(T_{\mathrm{r}}),T_{\mathrm{r}}} & \ \ T_{\mathrm{r}}\\
\mathrm{s.t.}& \quad C(1),\ldots,C(T_{\mathrm{r}}) \in \boldsymbol{\mathcal{C}}, \label{eq:qnary_formulation_exact_solution_b}\\
&\quad T_{\mathrm{r}} \leq \prod_{i=1}^q \alpha_i,~T_{\mathrm{r}} \in \mathbb{N}, \label{eq:qnary_formulation_exact_solution_c}\\
& \quad \sum_{k=t}^{t+\alpha_j-1} \eta_j ( k ) \geq 1, \label{eq:qnary_formulation_exact_solution_d}\\
& \quad \forall j \in \{1,\ldots,q\},~ \forall t \in \{1,\ldots,T_{\mathrm{r}}\}, \nonumber\\
&\quad \eta_j(k) =\begin{cases}
1 & \text{if } j\in C(k \bmod T_{\mathrm{r}}),\\
0 & \textrm{ otherwise}.
\end{cases} \label{eq:qnary_formulation_exact_solution_e}
\end{align}
\end{subequations}
Note that we define $C(0):=C(T_{\mathrm{r}})$ in \eqref{eq:qnary_formulation_exact_solution_e}. Equation \eqref{eq:qnary_formulation_exact_solution_b} enforces the schedule elements to be chosen from the set of connection patterns $\boldsymbol{\mathcal{C}}$; \eqref{eq:qnary_formulation_exact_solution_c} limits the search space by giving an upper bound for the length of the periodic part, i.e., $T_{\mathrm{r}}$; and \eqref{eq:qnary_formulation_exact_solution_d} ensures that label $j$ appears at least once in every $\alpha_j$ successive elements of the schedule sequence.
Note that the main challenge in Problem~\eqref{eq:qnary_formulation_exact_solution} is finding a feasible solution; minimization of $T_{\mathrm{r}}$ is a secondary goal, since any solution of \eqref{eq:qnary_formulation_exact_solution} provides a feasible schedule for P\ref{pr:schedulability}. Unfortunately, constraint \eqref{eq:qnary_formulation_exact_solution_e} makes the problem combinatorial in the number of agents and connection patterns.
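For small instances, the search suggested by Theorem~\ref{thm:Cyclic_schedulability} can be carried out by brute force. The following Python sketch (ours, for illustration only; the function name and the toy instance are hypothetical) enumerates candidate periods and cyclic schedules over $\boldsymbol{\mathcal{C}}$ and checks constraint \eqref{eq:qnary_formulation_exact_solution_d}; its running time grows combinatorially, which is precisely the issue addressed next.
\begin{verbatim}
from itertools import product

def find_cyclic_schedule(patterns, alphas, max_period):
    # patterns: list of sets of agent indices (the connection patterns)
    # alphas:   safe time intervals alpha_1, ..., alpha_q
    q = len(alphas)
    for T in range(1, max_period + 1):
        for cycle in product(patterns, repeat=T):
            feasible = all(
                any(i in cycle[(t + k) % T] for k in range(alphas[i - 1]))
                for i in range(1, q + 1)
                for t in range(T)
            )
            if feasible:
                return list(cycle)
    return None

# a toy instance (ours): three agents, three connection patterns
C = [{1, 2}, {2, 3}, {3}]
print(find_cyclic_schedule(C, alphas=[3, 2, 2], max_period=4))
# -> [{1, 2}, {2, 3}]  (a feasible cyclic part of period 2)
\end{verbatim}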
In order to tackle this issue, we next propose a strategy to simplify the computation of a feasible schedule.
\subsection{A heuristic solution to P\ref{pr:schedulability}}
\label{subsec_V_B}
In this subsection, we propose a heuristic to solve P\ref{pr:schedulability} based on the assumption that the satisfaction of the constraints for each agent $i$ is a duty assigned to a single connection pattern $C_j$.
To give some intuition on the assignment of connection patterns, we propose the following example.
\begin{example}
\label{ex:toy_general_case}
\begin{figure}[h]
\centering
\includegraphics[width=0.4\columnwidth]{connection_pattern_density}
\caption{A model with $5$ agents and $4$ connection patterns.}
\label{fig:connection_pattern_example}
\end{figure}
Consider the network displayed in Figure~\ref{fig:connection_pattern_example}, with five agents which can be connected according to $4$ connection patterns: $C_1:=(1,2)$, $C_2:=(2,4)$, $C_3:=(3,4)$, $C_4:=(5)$.
Assume that the safe time intervals of the $5$ agents are $\alpha_1=10$, $\alpha_2=2$, $\alpha_3=10$, $\alpha_4=2$, $\alpha_5=100$. This means that agents $2$ and $4$ must be connected at least once every $2$ steps, while the other agents have less demanding requirements.
As a first try, let us attempt a schedule using only connection patterns $C_1$, $C_3$, $C_4$.
In this case, one can see that the sequence $(C_1,C_3,C_1,C_3,\ldots)$ is the only possible schedule satisfying the requirements of agents $2$ and $4$. There is however no space to connect agent $5$ within this schedule. As an alternative solution we therefore propose to utilize the patterns $C_1$, $C_2$, $C_3$, $C_4$ and design $(C_2,C_1,C_2,C_3,C_2,C_4)$ as the cyclic part of a schedule. One can verify that this schedule is feasible.
With the first choice of connection patterns, the duty of satisfying the constraints for agents $2$ and $4$ is assigned to the patterns~$C_1$ and $C_3$, respectively, which, therefore, must be scheduled every 2 steps. On the other hand, with the second choice, this duty is assigned to~$C_2$, while agents~$1$ and $3$ are assigned to~$C_1$ and $C_3$, respectively. As a consequence, $C_2$ must be scheduled every $2$ steps, but~$C_1$ and $C_3$ can be scheduled once every $10$ steps. This makes room for $C_4$. Borrowing the terminology of PP, with the first choice, $C_1$ and $C_3$ are symbols of density $0.5$ each and $C_4$ has density $0.01$; hence, the three symbols are not schedulable. With the second choice, instead, $C_2$ has density $0.5$, $C_1$ and $C_3$ have density $0.1$, and $C_4$ has density $0.01$; hence, the total density is $0.71$ and the four symbols are schedulable.
\end{example}
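The feasibility claims made in Example~\ref{ex:toy_general_case} can be checked programmatically via the characterization of Lemma~\ref{schedule_Cbar}: agent $i$ is connected at least once every $\alpha_i$ steps if and only if the gaps between its successive connections (including the initial gap from $t=0$) never exceed $\alpha_i$. A minimal Python sketch (ours; the function name is hypothetical):
\begin{verbatim}
def is_feasible_cycle(cycle, alphas):
    # cycle: cyclic part of the schedule, as a list of sets of agent indices
    T = len(cycle)
    for i, alpha in enumerate(alphas, start=1):
        pos = [t for t, pattern in enumerate(cycle, start=1) if i in pattern]
        if not pos:
            return False                       # agent i is never connected
        gaps = [pos[0]]                        # gap from t = 0 to first connection
        gaps += [b - a for a, b in zip(pos, pos[1:])]
        gaps += [pos[0] + T - pos[-1]]         # wrap-around gap
        if max(gaps) > alpha:                  # violates d_i(t) <= alpha_i - 1
            return False
    return True

C1, C2, C3, C4 = {1, 2}, {2, 4}, {3, 4}, {5}
alphas = [10, 2, 10, 2, 100]
print(is_feasible_cycle([C1, C3, C1, C3], alphas))          # False: agent 5 missing
print(is_feasible_cycle([C2, C1, C2, C3, C2, C4], alphas))  # True
\end{verbatim}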
Example \ref{ex:toy_general_case} shows how we assign duties to the connection patterns, and also how
schedulability is affected. In the following, we formulate a problem that selects the connection patterns in order to minimize the total density.
Let us now represent the assignment of agent $i$ to the connection pattern $C_j$ with a binary variable $\eta_{i,j}$ and---with a slight abuse of notation---the density of symbol $C_j$ with $\hat{\rho}_j$. The proposed strategy is to decide the set of $\eta_{i,j}$ such that $\sum_j \hat{\rho}_j$ is minimized. This is performed by solving
\begin{subequations}%
\label{eq:min_connection}
\begin{align}
& \min_{\hat{\rho}_j,~\eta_{i,j}} \sum_{j=1}^l \hat{\rho}_j \label{eq:min_connection_cost}\\
\mathrm{s.t.} \quad &\hat{\rho}_j \geq \frac{1}{\alpha_i} \eta_{i,j}, \ \forall j \in \{1,\ldots,l\}, \ \forall i \in C_j, \label{eq:min_connection_beta}\\
& \sum_{j:i \in C_j}\eta_{i,j} \geq 1, \ \forall i \in \{1,\ldots,q\}, \label{eq:min_connection_eta}\\
& \eta_{i,j} \in \{0,1\}, \ \forall i \in \{1,\ldots,q\}, \ \forall j \in \{1,\ldots,l\}.
\end{align}
\end{subequations}
Constraint~\eqref{eq:min_connection_eta} guarantees that every agent~$i$ is connected by at least one connection pattern. Variables~$\hat{\rho}_j$ bound the density of the resulting scheduling problem, where~$1/\hat{\rho}_j$ is the maximum number of steps between two occurrences of connection pattern~$C_j$ in~$\mathbf{C}$ that is sufficient to enforce $(x_i,\hat{x}_i,u_i) \in \MR{i}$. If $\hat{\rho}_j=0$, then connection pattern $j$ is not used. Without loss of generality (relabeling the patterns if necessary), assume that the solution to \eqref{eq:min_connection} uses the first $l$ connection patterns, i.e., $\hat{\rho}_1,\ldots,\hat{\rho}_l>0$, and define
\begin{equation}
\hat{\alpha}_i:=\frac{1}{\hat{\rho}_i}, \qquad \forall i \in \{1,\ldots,l\}.
\label{eq:new_alpha_GWSP}
\end{equation}
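Since assigning an agent to more than one pattern can only increase the cost in \eqref{eq:min_connection}, the assignment can, for small instances, be computed by enumerating one pattern choice per agent. The following Python sketch (ours, brute force; not a substitute for an integer-programming solver) reproduces the total assigned density $0.71$ of Example~\ref{ex:toy_general_case}.
\begin{verbatim}
from fractions import Fraction
from itertools import product

def assign_agents(patterns, alphas):
    # patterns: list of sets of agent indices; alphas: safe time intervals
    q = len(alphas)
    choices = [[j for j, Cj in enumerate(patterns) if i in Cj]
               for i in range(1, q + 1)]
    best_cost, best_rho = None, None
    for pick in product(*choices):          # pick[i-1]: pattern chosen for agent i
        rho = [Fraction(0)] * len(patterns)
        for i, j in enumerate(pick, start=1):
            rho[j] = max(rho[j], Fraction(1, alphas[i - 1]))
        cost = sum(rho)
        if best_cost is None or cost < best_cost:
            best_cost, best_rho = cost, rho
    return best_cost, best_rho

patterns = [{1, 2}, {2, 4}, {3, 4}, {5}]    # C_1, ..., C_4 of the example
cost, rho = assign_agents(patterns, [10, 2, 10, 2, 100])
print(cost, rho)   # 71/100 and the per-pattern densities 1/10, 1/2, 1/10, 1/100
\end{verbatim}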
\begin{theorem}
\label{thm:schedulability_thresholdmcS}
$\{\hat{\alpha}_i\} \in \text{PP} \implies \{\boldsymbol{\mathcal{C}},\{\alpha_i\} \} \in \text{P\ref{pr:schedulability}}$.
\end{theorem}
\begin{proof}
Consider any schedule $\mathbf{C}_{\mathrm{P}}$ which is feasible for the instance $\{ \hat{\alpha}_i \}$ of PP.
Define the schedule $\mathbf{C}$ by $C(t):=C_j$ whenever ${c_{\mathrm{P}}}(t)=j$. By the statement of PP, ${c_{\mathrm{P}}}(t)=j$ at least once in every \(\hat{\alpha}_j\) successive time instants. By \eqref{eq:min_connection}, for every agent $i$ there exists a pattern \(C_j\) with $i \in C_j$, $\eta_{i,j}=1$, and \(\hat{\alpha}_j \leq \alpha_i\). Hence, agent $i$ is connected at least once in every $\alpha_i$ successive time instants, and $\mathbf{C}$ is a feasible schedule for P\ref{pr:schedulability}.
\end{proof}
Using Theorem~\ref{thm:schedulability_thresholdmcS}, we propose the following algorithm to find a feasible schedule for P\ref{pr:schedulability}.
\begin{algorithm}
\begin{algorithmic}[1]
\footnotesize
\State Define $\hat{\alpha}_i$ as in \eqref{eq:new_alpha_GWSP} by solving the optimization problem~\eqref{eq:min_connection}
\If {$\{\hat{\alpha}_i\} \in \text{PP}$}
\State find a schedule $\mathbf{C}_{\mathrm{P}}$ for instance $\{\hat{\alpha}_i\}$ of PP using \eqref{eq:qnary_formulation_exact_solution} or any other suitable scheduling technique
\State define $C(t) := C_j$ given $c_{\mathrm{P}}(t)= j$
\State \Return {$\mathbf{C}:= \big{(}C(1),~C(2),\ldots\big{)}$}
\Else
\State \Return{no schedule was found}
\EndIf
\end{algorithmic}
\caption{A heuristic scheduling for P\ref{pr:schedulability} (\textbf{input}: $\{\{\alpha_i\}, \boldsymbol{\mathcal{C}}\}$, \textbf{output}: $\boldsymbol{C}$)}
\label{alg:General_P1}
\end{algorithm}
As shown by the following example, the converse of Theorem~\ref{thm:schedulability_thresholdmcS} does not hold in general. That is, even if Algorithm~\ref{alg:General_P1} does not find a schedule, a feasible schedule may still exist for P\ref{pr:schedulability}.
\begin{example}[Converse of Theorem~\ref{thm:schedulability_thresholdmcS}] \label{ex:conv1}
Consider five agents with \(\alpha_1=\alpha_3=3,\ \alpha_2=\alpha_4=\alpha_5=5\) and $\boldsymbol{\mathcal{C}}:=\{C_1,C_2,C_3,C_4\}$ where
\begin{equation}
C_1=(1,2),~C_2=3,~C_3=4,~C_4=(1,5).
\end{equation}
Using \eqref{eq:min_connection}, one obtains
\begin{equation}
\hat{\alpha}_1=\hat{\alpha}_3=5, \quad \hat{\alpha}_2=\hat{\alpha}_4=3.
\end{equation}
The reduced PP instance $\{\hat{\alpha}_i\}$ is not schedulable, since its assigned density is $\hat{\rho}(\{3,3,5,5\})=\frac{16}{15}>1$, see Theorem~\ref{thm:schedulability_threshold}. However, one can verify that
the following schedule is feasible for the original problem
\begin{equation}
\mathbf{C}_c:=\big{(}\mathbf{C}_{\mathrm{r}},\mathbf{C}_{\mathrm{r}},\ldots \big{)},
\end{equation}
where
\begin{equation}
\mathbf{C}_{\mathrm{r}}:=\big{(}C_1, C_2, C_4, C_2, C_3 \big{)}.
\end{equation}
\end{example}
\subsection{Solution of P\ref{pr:schedulability} in the $m_{\mathrm{c}}$-channel case}
\label{subsection:WSP}
In the previous subsection, $\boldsymbol{\mathcal{C}}$ was an arbitrary set of connection patterns. Assume now that the set~$\boldsymbol{\mathcal{C}}$ is
\begin{equation}
\label{eq:mc_channels}
\boldsymbol{\mathcal{C}}:=\{C:~C \subseteq \{1,\ldots,q\},~ |C|=m_{\mathrm{c}}\},
\end{equation}
i.e., the set of all subsets of $ \{1,\ldots,q\}$ with cardinality $m_{\mathrm{c}}$. This is a special case of P\ref{pr:schedulability} in which any combination of $m_{\mathrm{c}}$ agents can be connected at the same time. One application of such a case arises, for instance, when the connection patterns model a multi-channel star communication topology between a set of agents and a central controller. This class of problems is easily mapped to the class of WSP:
\begin{theorem}
\label{thm:equivalencemc}
When $\boldsymbol{\mathcal{C}}$ is as in \eqref{eq:mc_channels}, then
\begin{equation}
\{\boldsymbol{\mathcal{C}},\{\alpha_i\} \} \in \text{P\ref{pr:schedulability}} \iff \{m_{\mathrm{c}},\{\alpha_i\} \} \in \text{WSP}.
\end{equation}
\end{theorem}
\begin{proof}
By definition, any schedule solving P\ref{pr:schedulability} must be such that, for all $i$ and $t$, $i \in C(t)$ implies $i \in C(t+\tau)$ for some $\tau\leq \alpha_i$. Provided that $|C(t)|=m_\mathrm{c}$ for all $t$, such a sequence is also a schedule solving WSP, and vice versa.
\end{proof}
We exploit this result to solve P\ref{pr:schedulability} indirectly by solving WSP. To that end, we propose a heuristic which replaces WSP with a PP relying on modified safe time intervals defined as
\begin{equation}
\label{new_alpha}
\tilde{\alpha}_i:=m_{\mathrm{c}} \alpha_i, \qquad \forall i \in \{1,\ldots,q\}.
\end{equation}
\begin{theorem}
\label{malpha}
$\{\tilde{\alpha}_i\} \in \text{PP} \implies \{m_{\mathrm{c}},\{\alpha_i\}\} \in \text{WSP}$.
\end{theorem}
\begin{proof}
Given a feasible schedule $\mathbf{C}_{\mathrm{P}}$ for PP, ${{c_{\mathrm{P}}}(t)=i}$ at least once every \(\tilde{\alpha}_i=m_{\mathrm{c}}\alpha_i\) successive time instants. Define schedule $\mathbf{C}$ as
\begin{equation}
C(t):=\left( c_{\mathrm{P}}(m_{\mathrm{c}}(t-1)+1),\ldots, c_{\mathrm{P}}(m_{\mathrm{c}}t) \right).
\end{equation}
In this schedule, $i \in C(t)$ at least once every $\alpha_i$ successive time instants. This implies that $\mathbf{C}$ is a feasible schedule for WSP.
\end{proof}
Theorem~\ref{malpha} can be used to find a feasible schedule for WSP using a feasible schedule for PP. The converse of this theorem does not hold: if this method does not find a feasible schedule,
a feasible schedule for WSP may still exist. Nevertheless, Lemma~\ref{mc_lemma_1} provides a sufficient condition to determine when a feasible schedule for WSP does not exist. Without loss of generality, assume \(\alpha_1 \leq \alpha_2 \leq \ldots \leq \alpha_q \) and define
\begin{equation}
\zeta_i:=\begin{cases}
m_{\mathrm{c}} \alpha_i & i \leq m_{\mathrm{c}}\\
m_{\mathrm{c}} \alpha_i+(m_{\mathrm{c}}-1) & i > m_{\mathrm{c}}
\end{cases}.
\label{converse_alpha}
\end{equation}
\begin{lemma}
\label{mc_lemma_1}
$ \{\zeta_i\} \notin \text{PP} \implies \{m_{\mathrm{c}},\{\alpha_i\}\} \notin \text{WSP}.$
\end{lemma}
\begin{proof}
We proceed by contradiction. Assume $\{m_{\mathrm{c}},\{\alpha_i\}\} \in \text{WSP}$ with a corresponding feasible schedule $\mathbf{C}$, while $\{\zeta_i\}~\notin~\text{PP}$.
Without loss of generality, assume that the labels $i \in \{1,\ldots,m_{\mathrm{c}}\}$ are arranged in $C(t)$ so as to satisfy
\begin{equation}
\label{eq:orderC}
i \in C(t) \implies c_i(t)=i
\end{equation}
while labels $i \in \{m_{\mathrm{c}}+1,\ldots,q \}$ are arranged in an arbitrary order.
Using the ordered $C(t)$, construct a schedule $\mathbf{C}_{\mathrm{P}}$ as
\begin{equation}
\label{eq:CP1}
\mathbf{C}_{\mathrm{P}}=\left(c_1(1),\ldots,c_{m_{\mathrm{c}}}(1),\ldots,c_1(t),\ldots,c_{m_{\mathrm{c}}}(t),\ldots \right).
\end{equation}
If $\{\zeta_i\} \notin \text{PP}$, then there exists a $t_0>0$ and an $i \in \{1,\ldots,q\}$ such that the sequence $\left( {c_{\mathrm{P}}}(t_0),\ldots,{c_{\mathrm{P}}}(t_0+\zeta_i-1)\right)$ does not contain label $i$, where $c_{\mathrm{P}}(t_0)$ is the entry $c_k(t)$ of $\mathbf{C}$. A pair of integers $(t,k)$ can be found such that $t \geq 1$, $k \in \{1,\ldots,m_{\mathrm{c}}\}$, and
\begin{equation}
t_0=m_{\mathrm{c}}(t-1)+k.
\end{equation}
Consider the case $i \in \{1,\ldots,m_{\mathrm{c}}\}$. By \eqref{converse_alpha} we have $\zeta_i=m_{\mathrm{c}}\alpha_i$, such that the sequence $\left( {c_{\mathrm{P}}}(t_0),\ldots,{c_{\mathrm{P}}}(t_0+\zeta_i-1)\right)$ contains exactly $\alpha_i$ vectors $C(t),\ldots,C(t+\alpha_i-1)$ if $k=1$, or spans $\alpha_i+1$ vectors $C(t),\ldots,C(t+\alpha_i)$ if $k>1$. Hence, if $k\leq i$, by \eqref{eq:orderC} one can conclude $i \notin \left(C(t),\ldots,C(t+\alpha_i-1) \right)$, while if $k>i$, by \eqref{eq:orderC}, $i \notin \left(C(t+1),\ldots,C(t+\alpha_i) \right)$. In both cases $\mathbf{C}$ is not feasible, which contradicts our assumption.
\\
Consider the case $i \in \{m_{\mathrm{c}}+1,\ldots,q\}$. By \eqref{converse_alpha} we have $\zeta_i=m_{\mathrm{c}}(\alpha_i+1)-1$, such that the sequence $\left( {c_{\mathrm{P}}}(t_0),\ldots,{c_{\mathrm{P}}}(t_0+\zeta_i-1)\right)$ contains $\alpha_i$ consecutive vectors of the schedule $\mathbf{C}$, none of which contains label $i$. This implies that $\mathbf{C}$ is not a feasible schedule, which again contradicts our assumption.
\end{proof}
According to Theorem~\ref{malpha} one can find a schedule for an instance of WSP using a schedule for an instance of PP. A common approach proposed in the literature consists in restricting the search to perfect schedules. We prove next that our heuristic
returns a feasible schedule if a perfect schedule exists; in addition, it can also return non-perfect schedules. As we will prove, cases exist in which the WSP does not admit a perfect schedule while it does admit a non-perfect one. We will provide such an example and show that our heuristic is able to solve it.
The following lemma provides a sufficient condition to exclude existence of a perfect schedule. An immediate corollary of this lemma and of Theorem~\ref{malpha} is that the heuristic in Theorem~\ref{malpha} can schedule all WSP instances that admit a perfect schedule.
\begin{lemma}
\label{mc_lemma_2}
$ \{\tilde{\alpha}_i\} \notin \text{PP} \implies \{m_{\mathrm{c}},\{\alpha_i\}\} \notin \text{WSP-perfect}$.
\end{lemma}
\begin{proof}
Assume $\mathbf{C}$ is a perfect schedule for WSP. Then, ${c}_i(t)={c}_j(t^\prime)=k$ implies $i=j$ for all $i,j \in \{1,\ldots,m_{\mathrm{c}}\}$ and $k \in \{1,\ldots,q\}$. Consider the sequence $\mathbf{C}_{\mathrm{P}}$ as in \eqref{eq:CP1} with $t \geq 1$. Since $\mathbf{C}$ is a perfect schedule, every symbol $k \in \{1,\ldots,q\}$ appears on a single channel, i.e., all of its occurrences are of the form ${c}_i(\cdot)$ for one fixed channel index $i$. Furthermore, two successive occurrences ${c}_i(t)={c}_i(t^\prime)=k$ satisfy $|t-t^\prime|\leq \alpha_k$ and correspond to ${c_{\mathrm{P}}}(t_1)={c_{\mathrm{P}}}(t_2)=k$ with $t_1=m_{\mathrm{c}}(t-1)+i,~t_2=m_{\mathrm{c}}(t^\prime-1)+i$. Hence, $|t_1-t_2|=m_{\mathrm{c}}|t-t^\prime|\leq m_{\mathrm{c}} \alpha_k=\tilde{\alpha}_k$. Consequently, $\mathbf{C}_{\mathrm{P}}$ is a feasible schedule for the instance $\{\tilde{\alpha}_i\}$ of PP.
\end{proof}
The following example shows that the converse of Theorem~\ref{malpha} does not hold in general, i.e., $\exists~ \{ m_{\mathrm{c}},\{\alpha_i\} \} \in \text{WSP}$ while $\{\tilde{\alpha}_i\} \notin \text{PP}$. This also indicates the importance of non-perfect schedules.
\begin{example}[Converse of Theorem~\ref{malpha}] \label{exp:conv2}
Consider problem instance
\begin{equation}
\label{eq:Ex3_instance}
\{m_{\mathrm{c}},\{\alpha_i\}\}= \{2,\{2,3,3,4,5,5,10\}\}.
\end{equation}
While $\{\tilde{\alpha}_i\} \notin \text{PP}$, a schedule with the cyclic part
\begin{align}
\mathbf{C}_{\mathrm{r}}=&C_1,C_2,C_3,C_4,C_5,C_6,C_1,C_7,C_8,C_9,\nonumber\\
&C_5,C_9,C_3,C_{10},C_1,C_6,C_5,C_{11},C_8,C_2~,
\end{align}
is feasible for WSP where
\begin{align}
C_1&=(1,2),&C_2=(3,4),~&C_3=(1,6),&C_4=(2,5) \nonumber \\
C_5&=(1,3),&C_6=(4,7),~&C_7=(3,6),&C_8=(1,5)\nonumber \\
C_9&=(2,4),&C_{10}=(3,5),~&C_{11}=(2,6).&
\end{align}
\end{example}
\begin{remark}
Since $\{\tilde{\alpha}_i\} \notin \text{PP}$ in Example~\ref{exp:conv2}, $\{m_{\mathrm{c}},\{\alpha_i\}\} \notin \text{WSP-perfect}$ by Lemma~\ref{mc_lemma_2}. However, $\{m_{\mathrm{c}},\{\alpha_i\}\} \in \text{WSP}$, which provides a negative answer to an open problem in the scheduling community, i.e., whether all feasible instances of WSP admit perfect schedules too, see \cite{bar2007windows}.
\end{remark}
The next example provides a case in which $ \{\tilde{\alpha}_i\} \in \text{PP} $ while $\{m_{\mathrm{c}},\{\alpha_i\}\} \notin \text{WSP-perfect}$. This implies that the proposed heuristic for WSP can return feasible schedules for instances in which there is no perfect schedule.
\begin{example}[] \label{exp:conv3}
Consider the problem instance
\begin{equation}
\label{eq:Ex4_instance}
\{m_{\mathrm{c}},\{\alpha_i\}\}= \{2,\{2,3,4,5,5,5,7,14\}\}.
\end{equation}
In order to find a perfect schedule, one can first compute all possible allocations of agents to one channel or the other, see Table~\ref{Table:bins}. One can verify that $I_1 \notin \text{PP}$ for all allocations in Table~\ref{Table:bins} where $I_1$ is the instance allocated to the first channel. Consequently, $\{m_{\mathrm{c}},\{\alpha_i\}\} \notin \text{WSP-perfect}$.
\begin{table}[H]
\caption{Channel allocation of the problem instance~\eqref{eq:Ex4_instance}}
\label{Table:bins}
\begin{center}
\begin{tabular}{ c | c c }
& Channel 1 & Channel 2\\
\hline
\multicolumn{1}{ c| } {Allocation 1 }& $\{2,3,7\}$ & $\{4,5,5,5,14\}$ \\
\hline
\multicolumn{1}{ c| } {Allocation 2} & $\{2,3,14\}$ & $\{4,5,5,5,7\}$ \\
\hline
\multicolumn{1}{ c | } {Allocation 3} & $\{3,5,5,7,14\}$ & $\{2,4,5\}$ \\
\hline
\multicolumn{1}{ c | } {Allocation 4} & $\{2,4,7,14\}$ & $\{3,5,5,5\}$ \\
\hline
\multicolumn{1}{ c | } {Allocation 5} & $\{3,4,5,7\}$ & $\{2,5,5,14\}$ \\
\hline
\multicolumn{1}{ c | } {Allocation 6} & $\{3,4,5,7,14\}$ & $\{2,5,5\}$ \\
\hline
\multicolumn{1}{ c | } {Allocation 7} & $\{3,4,5,5\}$ & $\{2,5,7,14\}$ \\
\hline
\end{tabular}
\end{center}
\end{table}
However, a schedule with the following cyclic part is feasible for instance $ \{\tilde{\alpha}_i\}$ of PP
\begin{align}
{\mathbf{C}_{\mathrm{P}}}_{\mathrm{r}}=&2,3,4,1,7,6,2,1,5,3,2,1,4,3, \nonumber\\
& 6,1,2,5,7,1,4,3,2,1,6,8,5,1.
\end{align}
This schedule can be transformed into a feasible schedule for WSP with the cyclic part
\begin{align}
\mathbf{C}_{\mathrm{r}}=&(2,3),(4,1),(7,6),(2,1),(5,3),(2,1),(4,3), \nonumber \\
&(6,1),(2,5),(7,1),(4,3),(2,1),(6,8),(5,1).
\end{align}
\end{example}
We propose the following algorithm to compute (possibly non-perfect) schedules for WSP.
\begin{algorithm}
\begin{algorithmic}[1]
\footnotesize
\State Define $\tilde{\alpha}_i$ as in \eqref{new_alpha}
\If {$\{\tilde{\alpha}_i\} \in \text{PP}$}
\State find the feasible schedule $\mathbf{C}_{\mathrm{P}}$ for instance $\{\tilde{\alpha}_i\}$ of PP
\State define $C(t):=\left(c_{\mathrm{P}}(m_{\mathrm{c}}(t-1)+1),\ldots,c_{\mathrm{P}}(m_{\mathrm{c}}t)\right)$
\State \Return {$\mathbf{C}:= \big{(}C(1),~C(2),\ldots\big{)}$ }
\Else
\State \Return{no schedule was found}
\EndIf
\end{algorithmic}
\caption{A heuristic scheduling for WSP (\textbf{input}: $\{\{\alpha_i\}, m_{\mathrm{c}}\}$, \textbf{output}: $\boldsymbol{C}$) }
\label{alg:WSP}
\end{algorithm}
Algorithm~\ref{alg:WSP} checks whether $\{\tilde{\alpha}_i\}$ is accepted by PP or not. Since
\begin{itemize}
\item $\{\tilde{\alpha}_i\} \in \text{PP} \implies \{m_{\mathrm{c}},\{\alpha_i\}\} \in \text{WSP}$,
\item $ \{\tilde{\alpha}_i\} \notin \text{PP} \implies \{m_{\mathrm{c}},\{\alpha_i\}\} \notin \text{WSP-perfect}$,
\item $ \exists~\{m_{\mathrm{c}},\{\alpha_i\}\}:~ \{\tilde{\alpha}_i\} \in \text{PP},~\{m_{\mathrm{c}},\{\alpha_i\}\} \notin \text{WSP-perfect}$,
\end{itemize}
Algorithm~\ref{alg:WSP} outperforms the current heuristics in the literature in the sense that it accepts more instances of WSP.
While $\rho(I) \leq 0.5 m_{\mathrm{c}}$ is a sufficient condition for schedulability of WSP~\cite{bar2003windows}, we provide alternative, less restrictive sufficient conditions in the following theorem.
\begin{theorem}
\label{thm:schedulability_thresholdmc2}
Given an instance $I=\{m_{\mathrm{c}},\{\alpha_i\}\}$ of WSP,
\begin{enumerate}
\item if $\rho(I)\leq 0.75m_{\mathrm{c}}$ then $I \in \text{WSP}$,
\item if $\rho(I)\leq \frac{5}{6} m_{\mathrm{c}}$ and $I$ has only three symbols then $I \in \text{WSP}$.
\end{enumerate}
\end{theorem}
\begin{proof}
This is a direct consequence of Theorems~\ref{thm:schedulability_threshold} and \ref{malpha}.
\end{proof}
\section{Main Results: online scheduling}
\label{section:results_online}
The scheduling techniques proposed in Section~\ref{section:results_offline} are computed offline, solely based on the information available \textit{a priori} and without any online adaptation. The main drawback of this setup is the conservativeness stemming from the fact that the robust invariance condition \eqref{eq:safe_time} must hold for all admissible initial conditions and disturbances. Moreover, packet losses are not explicitly accounted for. This issue has been partially addressed in \cite{ECC}, where an online adaptation of the schedule has been proposed for the single-channel case.
Here we exploit the fact that, differently from offline scheduling, information about the state is available through current or past measurements and can be used to compute less conservative reachable sets in a similar fashion to \eqref{eq:safe_time}. In Section~\ref{Sec_online_subsec_online}, we show that online scheduling significantly reduces conservativeness. Then, in Section~\ref{subsec:PacketLoss} we extend the results in \cite{ECC} and also provide necessary and sufficient conditions for the existence of a feasible schedule in the case of a lossy communication link.
\subsection{Online Scheduling without Packet Losses}
\label{Sec_online_subsec_online}
In this subsection we show, under the assumption of no packet losses, how the schedule can be optimized online, based on the available information.
Our strategy is to start with a feasible offline schedule, which we call the \textit{baseline schedule}. This schedule is then shifted based on estimates of the safe time intervals, which are built upon the current state. In fact, while in equation \eqref{eq:safe_time} the safe time interval is defined as the solution of a reachability problem with $\mathcal{S}_{i,\infty}$ as the initial set, the scheduler may have a better set-valued estimate of the current state of each agent than the whole $\mathcal{S}_{i,\infty}$. This estimate, which we call $\mathcal{O}$, can in general be any set with the following properties, for all $t \geq 0$:
\begin{subequations}
\begin{align}
\label{eq:SchedulerCoDomainDefProperties}
&\left( x(\tau),\hat{x}(\tau),u(\tau) \right) \in \mathcal{O}(\tau), &\forall \tau \in \{0,\ldots,t\}, \\
& \mathcal{O}(t) \subseteq \mathcal{S}_{\infty}, & \text{if}~\delta=1,\\
&\mathcal{O}(t) \subseteq \Reach{\hat{F}}{1}{\mathcal{O}(t-1)}.
\end{align}
\end{subequations}
\begin{example}
Consider a case in which several automated vehicles are to cross an intersection and the crossing order is communicated to them from the infrastructure, which is equipped with cameras to measure the states of the vehicles. This corresponds to an SC network, as described by Equation~\eqref{eq:sc_network} in Example~\ref{Exp:case i}, with a scheduler that can measure the state of all agents at all times, but the state measurements are affected by additive noise, i.e., $x_i^s(t)=x_i(t)+w_i(t)$ where $w_i(t) \in \mathcal{W}_i$ and $0 \in \mathcal{W}_i$.
Then,
\begin{equation}
\mathcal{O}_i(t):=\begin{bmatrix}
x_i^s(t) \oplus (-\mathcal{W}_i)\\
(A_i+B_iK_i)^{(t-\tau_i^C(t-1))}x_i^s(\tau_i^C(t-1))\\
K_i (A_i+B_iK_i)^{(t-\tau_i^C(t-1))}x_i^s(\tau_i^C(t-1))
\end{bmatrix},
\end{equation}
where subscript $i$ refers to agent $i$, and $\tau_i^C$ is the last time when agent $i$ was connected.
\end{example}
Based on the set $\mathcal{O}_i(t)$ available at time $t$, we can compute a better estimate of the safe time interval. Let us define this estimate, as a function of $t$, as follows:
\begin{equation}
\gamma^x_i(t):=\max \left\{ t^\prime: \Reach{\hat{F}_i}{t^\prime}{\mathcal{O}_i(t)} \subseteq \MR{i} \right\}.
\label{eq:GammaOfXDef}
\end{equation}
Equations \eqref{eq:safe_time} and \eqref{eq:GammaOfXDef} imply that, for any feasible schedule $\mathbf{C}$,
\begin{equation}
\label{eq:alpha_less_than_gamma}
\gamma_i^x(t)\geq \alpha_i-(t-\tau_i^{\mathbf{C}}(t)), \qquad \forall i \in \{1,\ldots,q\},~ \forall \, t>0.
\end{equation}
Let us now introduce, for any given schedule $\mathbf{C}$ (in particular, for a candidate online schedule $\mathbf{C}_{\mathrm{o}}$), the quantity
\begin{equation}
\gamma^{\mathbf{C}}_i(t):=\min \{t^\prime \ge t:~ i \in C(t^\prime)\}-t,
\label{eq:GammaDef}
\end{equation}
which measures how long agent $i$ will have to wait, at time $t$, before being connected. Using \eqref{eq:GammaDef} and \eqref{eq:GammaOfXDef}, we can formulate a condition for the schedule $\mathbf{C}_{\mathrm{o}}$ to be feasible.
\begin{definition}[Online feasible schedule]
\label{pr:gamma_scheduling}
A schedule~$\mathbf{C}_{\mathrm{o}}$ is online feasible if the \emph{safety residuals} $\mathbf{r}(\mathbf{C}_{\mathrm{o}},t)$ defined as
\begin{equation}\label{eq:residuals}
r_i(\mathbf{C}_{\mathrm{o}},t):=\gamma^x_i(t)-\gamma^{\mathbf{C}_{\mathrm{o}}}_i(t), ~ \forall i \in \{1,\ldots,q\},
\end{equation}
are non-negative, for all $t$, with $\gamma^x(t)$ defined in \eqref{eq:GammaOfXDef} and $\gamma^{\mathbf{C}_{\mathrm{o}}}(t)$ defined in \eqref{eq:GammaDef}.
\end{definition}
In the job scheduling literature (e.g., \cite{Pinedo08}), the quantities $\gamma^{\mathbf{C}}_i$ correspond to the \textit{completion times} of job $i$, the quantities $\gamma^x_i$ are the \textit{deadlines}, and the quantity $\gamma^{\mathbf{C}}_i-\gamma^x_i = -r_i$ is the \textit{job lateness}. A schedule for $q$ jobs with deadlines is feasible provided that the maximum lateness is non-positive, that is, all safety residuals are non-negative.
In the following, we formulate an optimization problem to find a recursively feasible online schedule using safety residuals and shifts of the baseline schedule.
Given a cycle~$\mathbf{C}_{\mathrm{r}}:=\left( C(1),\ldots,C(T_{\mathrm{r}})\right)$ of the baseline schedule, let
\begin{equation}
\label{eq:rotated_schedule}
R(\mathbf{C}_\mathrm{r},j):=\left( C(j),\ldots,C(T_{\mathrm{r}}),C(1),\ldots,C(j-1) \right)
\end{equation}
be a rotation of the sequence $\mathbf{C}_{\mathrm{r}}$ with $j \in \{1,\ldots,T_{\mathrm{r}}\}$. Then, one can compute the shift of the baseline schedule $\mathbf{C}$ which maximizes the minimum safety residual by solving the following optimization problem
\begin{equation}
j_t^*:= \arg \max_{j} ~ \min_i \ r_i(R(\mathbf{C}_\mathrm{r},j),t). \label{eq:ResOpt}
\end{equation}
The online schedule maximizing the safety residual is then
\begin{align}
\label{eq:Optimal_Online_Schedule}
C^*(t) := C(j_t^*).
\end{align}
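A minimal Python sketch (ours) of the rotation selection in \eqref{eq:ResOpt}; the reachability-based quantities $\gamma^x_i(t)$ of \eqref{eq:GammaOfXDef} are abstracted as an input list \texttt{gamma\_x}, and the helper names are hypothetical.
\begin{verbatim}
def gamma_C(cycle, i, j):
    # Wait of agent i before its next connection under the rotation R(C_r, j);
    # assumes agent i appears in the cycle (true for any feasible baseline).
    T = len(cycle)
    return next(k for k in range(T) if i in cycle[(j - 1 + k) % T])

def best_rotation(cycle, gamma_x):
    # Return j maximizing the minimum safety residual r_i = gamma_x_i - gamma_C_i.
    q = len(gamma_x)
    def min_residual(j):
        return min(gamma_x[i - 1] - gamma_C(cycle, i, j) for i in range(1, q + 1))
    return max(range(1, len(cycle) + 1), key=min_residual)

cycle = [{1}, {2}, {3}]                           # toy baseline cycle (ours)
print(best_rotation(cycle, gamma_x=[1, 2, 3]))    # -> 1
\end{verbatim}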
\begin{proposition}
\label{prop1}
Assume that the baseline schedule $\mathbf{C}$ is feasible for P\ref{pr:schedulability}. Then, the online schedule $\mathbf{C}^*$ is feasible for P\ref{pr:schedulability}.
\end{proposition}
\begin{proof}
At time $t=1$, the baseline schedule is a feasible schedule, which implies $\min_i r_i(R(\mathbf{C}_\mathrm{r},1),1) \geq 0$. As a result, $\min_i r_i(R(\mathbf{C}_\mathrm{r},j_1^*),1) \geq 0$ by construction, and the schedule $\tilde{\mathbf{C}}$, defined as
\begin{equation}
\tilde{\mathbf{C}}:=C(j_1^*),\ldots,C(T_{\mathrm{r}}),\mathbf{C}_{\mathrm{r}},\mathbf{C}_{\mathrm{r}},\ldots~,
\end{equation}
is a feasible schedule. Since $C^*(1)=C(j_1^*)$,
$$\min_i r_i(R(\mathbf{C}_\mathrm{r},j_1^*+1),2) \geq 0,$$
which implies
$$\min_i r_i(R(\mathbf{C}_\mathrm{r},j_2^*),2) \geq 0.$$
Consequently,
\begin{equation}
\tilde{\mathbf{C}}:=C(j_1^*),C(j_2^*),C(j_2^*+1),\ldots,C(T_{\mathrm{r}}),\mathbf{C}_{\mathrm{r}},\ldots~,
\end{equation}
is a feasible schedule. This argument can be used recursively which implies $\mathbf{C}^*$ is a feasible schedule.
\end{proof}
\begin{remark}
The schedule~\eqref{eq:Optimal_Online_Schedule} maximizes the minimum residual, as shown in~\eqref{eq:ResOpt}. That is, the communication is scheduled for the system which is closest to exiting~$\mathcal{S}_{\infty}$. Clearly, any function of the residuals could be used. For example, the residuals could be weighted, thus reflecting the priority given to the constraints to be satisfied. \end{remark}
\subsection{Robustness Against Packet Loss}
\label{subsec:PacketLoss}
In this subsection, we drop the assumption of no packet losses in the communication link and consider a communication protocol with packet delivery acknowledgment. We provide a reconnection strategy to overcome packet losses when the baseline schedule satisfies a necessary condition. Furthermore, we provide necessary and sufficient conditions for the existence of robust schedules in the presence of packet losses. Then, using these and the results in Section~\ref{Sec_online_subsec_online}, we provide an algorithm to compute an online schedule that is robust to packet losses.
Let us consider the binary variable $\nu(t) \in \{0,1\}$, with $\nu(t)=1$ indicating that the packet sent at time $t$ was lost. This binary variable is known to the scheduler if, as we assume, an acknowledgment-based protocol is used for communication.
Let us also assume that the maximum number of packets that can be lost in a given amount of time is bounded.
\begin{assumption} No more than $n_{l,i}$ packets are lost in $\alpha_i$ consecutive steps, i.e.,
\label{ass:packet_losses}
\begin{align}
\sum_{j=t}^{t+\alpha_i-1}\nu(j) \leq n_\mathrm{l,i}, && \forall\ i \in \{1,\ldots,q\},\ \forall \, t \geq 0.
\label{eq:PacketLoss}
\end{align}
\end{assumption}
Note that \eqref{eq:PacketLoss} defines $q$ different inequalities which must be satisfied at the same time. Additionally, we assume that when a packet is lost, the whole information exchanged at time $t$ is discarded.
\begin{problem}[P\ref{pr:schedulability_losses}]
\label{pr:schedulability_losses}
Given the set of $q$ agents, each described by \eqref{eq:general_system}, an admissible set $\m{A}:= \m{A}_1\times\ldots\times\m{A}_q$, and the set $\boldsymbol{\mathcal{C}}$ of connection patterns \eqref{eq:binary_vectors}, determine if there exists an infinite sequence over the elements of $\boldsymbol{\mathcal{C}}$ such that,
\begin{equation}
z_i(t) \in \MR{i},~\forall z_i(0) \in \MR{i}, \ v_i(t)\in\m{V}_i,~i\in\{1,\ldots,q\},
\end{equation}
for $t>0$, provided that Assumption~\ref{ass:packet_losses} holds.
\end{problem}
A schedule solving P\ref{pr:schedulability_losses} is any sequence of $C_j$ such that every agent $i$ is connected at least once every $\alpha_i$ steps in the presence of packet losses satisfying Assumption~\ref{ass:packet_losses}. Instance $I:=\{\boldsymbol{\mathcal{C}},\{\alpha_i\},\{n_\mathrm{l,i} \} \}$ is accepted, i.e.,
\begin{equation}
\label{eq:instanceP2}
I \in \text{P\ref{pr:schedulability_losses}},
\end{equation}
if and only if a schedule $\mathbf{C}$ exists that satisfies the scheduling requirements.
Given a feasible baseline schedule $\mathbf{C}$, we define a \textit{shifted schedule} $\bar{\mathbf{C}}$ as
\begin{equation}
\label{eq:schedule_with_losses}
\bar{C}(t) :=C \left(t-\sum_{j=0}^{t-1} \nu(j)\right),
\end{equation}
to compensate the effects of packet losses. We define the maximum time between two successive connections of agent $i$, based on the schedule $\mathbf{C}$, as
\begin{equation}
T_i := 1+\max_t \left( t-\tau_i^{\mathbf{C}}(t)\right),
\end{equation}
where the latest connection time $\tau_i^{\mathbf{C}}(t)$ is defined in \eqref{eq:TauLastMeasurement}. Feasibility of the baseline schedule implies $T_i\leq \alpha_i$, for all $i$.
We prove next that Assumption~\ref{ass:packet_losses} can be used to provide a sufficient condition for the shifted schedule~$\bar{\mathbf{C}}$ to be feasible under packet losses.
\begin{theorem}[Schedulability under packet losses]
\label{thm:PacketLoss}
Let Assumption~\ref{ass:packet_losses} be verified.
Schedule~$\bar{\mathbf{C}}$ defined in~\eqref{eq:schedule_with_losses} is feasible for $\mathrm{P}\ref{pr:schedulability_losses}$ if and only if
\begin{equation}
\alpha_i-T_i\geq n_{l,i}, \qquad \forall i \in \{1,\ldots,q\}.
\end{equation}
\end{theorem}
\begin{proof}
In the error-free schedule $\mathbf{C}$, two consecutive appearances of a symbol $i$ are at most $T_i$ steps apart. In the shifted schedule $\bar{\mathbf{C}}$, at most $n_{l,i}$ retransmissions take place during any $\alpha_i$ steps. Hence, if $\alpha_i-T_i\geq n_{l,i}$, two consecutive successful connections of agent $i$ are never spaced more than $\alpha_i$ steps apart, ensuring feasibility of the schedule. Conversely, if $\alpha_i-T_i< n_{l,i}$ for some $i$, a loss pattern with $n_{l,i}$ losses between two successive scheduled connections of agent $i$, which is admissible under Assumption~\ref{ass:packet_losses}, delays the second connection to more than $\alpha_i$ steps after the first, so that $\bar{\mathbf{C}}$ is not feasible.
\end{proof}
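The condition of Theorem~\ref{thm:PacketLoss} is straightforward to check on a cyclic baseline schedule. A minimal Python sketch (ours; function names are hypothetical), assuming every agent appears in the cycle:
\begin{verbatim}
def max_gap(cycle, i):
    # T_i: maximum number of steps between two successive connections of agent i
    # under the periodic baseline schedule (including the initial gap from t = 0).
    T = len(cycle)
    pos = [t for t, pattern in enumerate(cycle, start=1) if i in pattern]
    gaps = [pos[0]] + [b - a for a, b in zip(pos, pos[1:])] + [pos[0] + T - pos[-1]]
    return max(gaps)

def robust_to_losses(cycle, alphas, n_losses):
    # Check alpha_i - T_i >= n_{l,i} for every agent i.
    return all(alphas[i - 1] - max_gap(cycle, i) >= n_losses[i - 1]
               for i in range(1, len(alphas) + 1))

cycle = [{1}, {2}, {1}, {2}]                                     # toy cycle (ours)
print(robust_to_losses(cycle, alphas=[4, 4], n_losses=[2, 1]))   # True
\end{verbatim}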
In the sequel, we provide necessary and sufficient conditions for the existence of a baseline schedule which is robust to packet losses. To that end, we define a new set of safe time intervals as
\begin{equation}
\big{\{}\beta_i: \beta_i=\alpha_i-n_{l,i},\ \forall i \in \{1,\ldots,q\} \big{\}}.
\label{eq:thcon}
\end{equation}
\begin{theorem}
Assume that the communication channel satisfies Assumption~\ref{ass:packet_losses}. Then,
\begin{align*}
\{\boldsymbol{\mathcal{C}},\{\alpha_i\},\{n_\mathrm{l,i} \} \}\in \mathrm{P}\ref{pr:schedulability_losses}&&\Leftrightarrow&&\{\boldsymbol{\mathcal{C}},\{\beta_i\} \}\in \mathrm{P}\ref{pr:schedulability},
\end{align*}
with $\beta_i$ defined in \eqref{eq:thcon}.
\label{thm:PacketLoss2}
\end{theorem}
\begin{proof}
We first prove
$$
\{\boldsymbol{\mathcal{C}},\{\alpha_i\},\{n_\mathrm{l,i} \} \}\in \mathrm{P}\ref{pr:schedulability_losses} \implies \{\boldsymbol{\mathcal{C}},\{\beta_i\} \}\in \mathrm{P}\ref{pr:schedulability}.
$$
Assume that there exists a feasible schedule for instance $\{\boldsymbol{\mathcal{C}},\{\alpha_i\},\{n_\mathrm{l,i} \} \}$ of $\mathrm{P}\ref{pr:schedulability_losses}$ while it is not feasible for instance $\{\boldsymbol{\mathcal{C}},\{\beta_i\} \}$ of $\mathrm{P}\ref{pr:schedulability}$. This implies
\begin{equation}
\exists \, i, t>0: t-\tau_i^{\mathbf{C}}(t) \geq \beta_i+1=\alpha_i-n_{l,i}+1,
\label{eq:ConExamPro}
\end{equation}
where the latest connection time \(\tau_i^{\mathbf{C}}(t)\) is defined in \eqref{eq:TauLastMeasurement}. Assume \(n_{l,i}\) consecutive packets are lost starting from time \(t+1\), such that $\tau_i^{\mathbf{C}}(t+n_{l,i})=\tau_i^{\mathbf{C}}(t)$. This implies
\begin{equation}
\left( t+n_{l,i} \right)-\tau_i^{\mathbf{C}}(t+n_{l,i}) \geq \alpha_i+1,
\end{equation}
such that agent $i$ did not receive any packet for $\alpha_i$ consecutive steps, i.e., $\{\boldsymbol{\mathcal{C}},\{\alpha_i\},\{n_\mathrm{l,i} \} \}\notin \mathrm{P}\ref{pr:schedulability_losses}$.
In order to prove
$$
\{\boldsymbol{\mathcal{C}},\{\beta_i\} \}\in \mathrm{P}\ref{pr:schedulability} \implies \{\boldsymbol{\mathcal{C}},\{\alpha_i\},\{n_\mathrm{l,i} \} \}\in \mathrm{P}\ref{pr:schedulability_losses},
$$
consider a feasible schedule $\mathbf{C}$ for instance $\{\boldsymbol{\mathcal{C}},\{\beta_i\} \}$ of $\mathrm{P}\ref{pr:schedulability}$. Each packet loss causes a one-step delay in receiving the measurement for agent $i$, see \eqref{eq:schedule_with_losses}, and since these packet losses can cause at most \(n_{l,i}\) steps of delay between two connection times, agent $i$ is connected at least once during each \(\beta_i+n_{l,i} = \alpha_i\) time steps. This implies that $\bar{\mathbf{C}}$ defined in \eqref{eq:schedule_with_losses} is a feasible schedule for instance $\{\boldsymbol{\mathcal{C}},\{\alpha_i\},\{n_\mathrm{l,i} \} \}$ of $\mathrm{P}\ref{pr:schedulability_losses}$.
\end{proof}
Theorem~\ref{thm:PacketLoss2} implies that $\mathrm{P}\ref{pr:schedulability_losses}$ can be cast in the framework of $\mathrm{P}\ref{pr:schedulability}$ by using equation~\eqref{eq:thcon} to define $\{\beta_i\}$ based on $\{\alpha_i\}$ and $\{n_\mathrm{l,i}\}$.
Since the shifted schedule $\bar{\mathbf{C}}$ provides a feasible robust schedule against packet losses, one can use the online scheduling method proposed in the previous subsection to improve safety of this robust schedule. To that end, we define the number of packet losses that can occur before agent $i$ receives a measurement from time $t$ as
\begin{equation}
n_i^{\mathbf{C}}(t):=\min_{t^\prime,\,n} \big\{ n: t^\prime \geq t,~ i \in C(t^\prime),~\sum_{j=t}^{t^\prime}\nu(j)\leq n \big\}.
\label{eq:packet_loss_remained}
\end{equation}
\begin{definition}[Robust online feasible schedule]
\label{pr:gamma_scheduling2}
A schedule~${\mathbf{C}}$ is robust online feasible if the \emph{robust safety residuals} $\bar{\mathbf{r}}(\mathbf{C},t)$ defined as
\begin{equation}\label{eq:residuals2}
\bar{r}_i(\mathbf{C},t):=\gamma^x_i(t)-\gamma^{\mathbf{C}}_i(t)-n_i^{\mathbf{C}}(t),~\forall i \in \{1,\ldots,q\},
\end{equation}
are non-negative for all $t$, with $\gamma^x(t)$ defined in \eqref{eq:GammaOfXDef}, $\gamma^{\mathbf{C}}(t)$ defined in \eqref{eq:GammaDef}, and $n_i^{\mathbf{C}}(t)$ defined in \eqref{eq:packet_loss_remained}.
\end{definition}
Similarly to the case without packet losses, the following optimization problem maximizes the minimum robust safety residuals for a lossy channel.
\begin{equation}
\bar{j}_t^*:= \arg \max_{j} ~ \min_i \ \bar{r}_i(R(\mathbf{C}_\mathrm{r},j),t).
\label{eq:ResOpt2}
\end{equation}
The online schedule is then given by
\begin{align}
\label{eq:Optimal_Online_Schedule_losses}
\bar{C}^*(t) := C(\bar{j}_t^*).
\end{align}
\begin{proposition} \label{prop2}
Assume that the baseline schedule $\bar{\mathbf{C}}$ is feasible for P\ref{pr:schedulability_losses}. Then, the online schedule $\bar{\mathbf{C}}^*$ defined by \eqref{eq:Optimal_Online_Schedule_losses} is feasible for P\ref{pr:schedulability_losses}.
\end{proposition}
\begin{proof}
This can be proved similarly to Proposition~\ref{prop1} and Theorem~\ref{thm:PacketLoss}.
\end{proof}
The following algorithm, based on Theorem~\ref{thm:PacketLoss2}, returns an online robust feasible schedule for P\ref{pr:schedulability_losses}.
\begin{algorithm}
\begin{algorithmic}[1]
\footnotesize
\State Define $\beta_i$ using \eqref{eq:thcon}
\If {$\{\beta_i\} \in \text{PP}$}
\State find a schedule $\mathbf{C}_{\mathrm{P}}$ for instance $\{\beta_i\}$ of PP
\State define $C(t):=C_j$ when $c_{\mathrm{P}}(t)=j,~\forall j$
\ForAll{t}
\State find $\bar{C}^*(t)$ as in $\eqref{eq:Optimal_Online_Schedule_losses}$
\EndFor
\State \Return $\bar{\mathbf{C}}^*$
\Else
\State \Return {no schedule was found}
\EndIf
\end{algorithmic}
\caption{Robust online scheduling for P\ref{pr:schedulability_losses} (\textbf{input}: ($\{\alpha_i\},\{(n_{l,i},T_i)\}, \ldots $) , \textbf{output}: $\bar{\mathbf{C}}^*$)}
\label{alg:robust_online}
\end{algorithm}
\section{Numerical results}
\label{section:numerics}
We now discuss some numerical examples in order to illustrate and evaluate the effectiveness of the proposed methods. First, we evaluate Algorithms~\ref{alg:General_P1}~and~\ref{alg:WSP}. Then, we provide a trajectory tracking scenario for remotely controlled vehicles with a limited number of lossy communication channels.
\subsection{Evaluation of Algorithm~\ref{alg:General_P1}}
We have considered 1000 networks with a random number of agents (2 to 4), random safe time intervals (2 to 8), and random connection patterns (1 to 4). These random instances are used for evaluating the three following implementations:
\begin{itemize}
\item $M_1$: solve optimization problem~\eqref{eq:qnary_formulation_exact_solution};
\item $M_2^{\mathrm{A}1}$: use Algorithm~\ref{alg:General_P1} in combination with optimization problem~\eqref{eq:qnary_formulation_exact_solution} to solve PP;
\item $M_3^{\mathrm{A}1}$: use Algorithm~\ref{alg:General_P1} in combination with the double-integer method proposed in \cite{Chan92a} to solve PP.
\end{itemize}
We have used Gurobi to solve the integer problems; a summary of the results is provided in Table~\ref{table:comparison}. $M_1$ is exact and returns neither false positives nor false negatives. Although $M_2^{\mathrm{A}1}$ and $M_3^{\mathrm{A}1}$ do not return any false positives, they might return false negatives. Furthermore, a false negative answer in $M_2^{\mathrm{A}1}$ implies the same for $M_3^{\mathrm{A}1}$, since the latter uses a heuristic to solve PP while the former finds a schedule for PP whenever it exists. Note also that $M_2^{\mathrm{A}1}$ and $M_3^{\mathrm{A}1}$ do not necessarily return a solution with the minimum period length.
\begin{table}[H]
\caption{Comparison of $M_1$, $M_2^{\mathrm{A}1}$,~ and $M_3^{\mathrm{A}1}$}
\label{table:comparison}
\begin{center}
\begin{tabular}{ c| c c c }
& $M_1$ & $M_2^{\mathrm{A}1}$ & $M_3^{\mathrm{A}1}$\\
\hline
\multicolumn{1}{ c| } {True Positive }& 960 & 958 & 958 \\
\hline
\multicolumn{1}{ c| } {False Positive} & 0 & 0 & 0 \\
\hline
\multicolumn{1}{ c | } {True Negative} & 40 & 40 & 40 \\
\hline
\multicolumn{1}{ c | } {False Negative} & 0 & 2 & 2 \\
\hline
\end{tabular}
\end{center}
\end{table}
In Table~\ref{table:comparison3} and Fig.~\ref{fig:AlgComparisons1}, we tested $M_1$, $M_2^{\mathrm{A}1}$ and $M_3^{\mathrm{A}1}$ on larger randomly generated networks (2 to 11 agents, safe time intervals from 2 to 21, 1 to 11 random connection patterns). To limit the computation time, we had to halt the execution of $M_1$ and $M_2^{\mathrm{A}1}$ when no schedule of period $ \leq 70$ was found. We labeled \textit{undecided} the instances for which these two methods were halted.
\begin{table}[H]
\caption{Comparison of $M_1$, $M_2^{\mathrm{A}1}$,~ and $M_3^{\mathrm{A}1}$}
\label{table:comparison3}
\begin{center}
\begin{tabular}{c |c c c}
& $M_1$ & $M_2^{\mathrm{A}1}$ & $M_3^{\mathrm{A}1}$\\
\hline
\multicolumn{1}{ c| } {accepted instances} & 887 & 872 & 853 \\
\multicolumn{1}{ c| } {average time (sec)} & 1.7915 & 1.3119 & 0.5143 \\
\hline
\multicolumn{1}{ c| } {undecided instances} & 102 & 32 & 0 \\
\multicolumn{1}{ c| } {average time (sec)} & 61.0570 & 48.9156 & 0 \\
\hline
\multicolumn{1}{ c| } {rejected instances} & 11 & 96 & 147 \\
\multicolumn{1}{ c| } {average time (sec)} & 53.7730 & 1.5442 & 0.5333 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\centering
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{fig_n_1}
\caption{}
\label{fig:M4M5Mrho}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{fig_n_2}
\caption{}
\label{fig:M1M2_times_accepted}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{fig_n_3}
\caption{}
\label{fig:M1M2_times_rejected}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{fig_n_4}
\caption{}
\label{fig:M3_times_accepted_rejected}
\end{subfigure}
\caption
{Comparison of acceptance ratio, with respect to the assigned density function $\hat{\rho}$, and the Cumulative Distribution Function (CDF) of the computation time for $M_1$, $M_2^{\mathrm{A}1}$, and $M_3^{\mathrm{A}1}$ methods}
\label{fig:AlgComparisons1}
\end{figure}
Although $M_3^{\mathrm{A}1}$ might result in a few false negatives, Table~\ref{table:comparison3} indicates that its average computation time is significantly lower than the corresponding average computation times of $M_1$ and $M_2^{\mathrm{A}1}$. More importantly, Figs.~\ref{fig:M1M2_times_accepted}, \ref{fig:M1M2_times_rejected}, and \ref{fig:M3_times_accepted_rejected} demonstrate that $M_3^{\mathrm{A}1}$ has a lower computation time for almost all considered instances. Note that $M_1$ accepts some instances with an assigned density $\hat{\rho}$ greater than one, as shown in Fig.~\ref{fig:M4M5Mrho}, which implies that the converse of Theorem~\ref{thm:schedulability_thresholdmcS} does not hold.
\subsection{Evaluation of Algorithm~\ref{alg:WSP}}
In this subsection we evaluate the heuristic proposed in Algorithm~\ref{alg:WSP} to find a feasible schedule for P\ref{pr:schedulability} when $\boldsymbol{\mathcal{C}}$ is defined as in \eqref{eq:mc_channels}. We have generated 1000 networks with five agents, random safe time intervals (from 2 to 7), and the minimum number of channels required for schedulability, i.e., $m_{\mathrm{c}}=\lceil \sum_i \frac{1}{\alpha_i} \rceil $. These random instances are used for evaluating the three following implementations:
\begin{itemize}
\item $M_1$: solve optimization problem~\eqref{eq:qnary_formulation_exact_solution};
\item $M_2^{\mathrm{A}2}$: use Algorithm~\ref{alg:WSP} in combination with optimization problem~\eqref{eq:qnary_formulation_exact_solution} to solve PP;
\item $M_3^{\mathrm{A}2}$: use Algorithm~\ref{alg:WSP} in combination with the double-integer method proposed in \cite{Chan92a} to solve PP.
\end{itemize}
The simulation results are provided in Table~\ref{table:comparison4}.
\begin{table}[H]
\caption{Comparison of $M_1$, $M_2^{\mathrm{A}2}$,~ and $M_3^{\mathrm{A}2}$}
\label{table:comparison4}
\begin{center}
\begin{tabular}{ c | c c c }
& $M_1$ & $M_2^{\mathrm{A}2}$ & $M_3^{\mathrm{A}2}$\\
\hline
\multicolumn{1}{ c| } {True Positive }& 994 & 994 & 987 \\
\hline
\multicolumn{1}{ c| } {False Positive} & 0 & 0 & 0 \\
\hline
\multicolumn{1}{ c | } {True Negative} & 6 & 6 & 6\\
\hline
\multicolumn{1}{ c | } {False Negative} & 0 & 0 & 7 \\
\hline
\end{tabular}
\end{center}
\end{table}
Once again, the three implementations were tested on larger instances by halting $M_1$ and $M_2^{\mathrm{A}2}$ if no schedule of length $\leq 70$ was found. The results are reported in Table~\ref{table:comparison5} and Fig.~\ref{fig:AlgComparisons2}. Define a \textit{normalized density function} as
\begin{equation}
\tilde{\rho}:=\frac{1}{m_{\mathrm{c}}} \left(\sum_i \frac{1}{\alpha_i}\right),
\end{equation}
for the sake of a meaningful comparison.
\begin{table}[H]
\caption{Comparison of $M_1$, $M_2^{\mathrm{A}2}$,~ and $M_3^{\mathrm{A}2}$}
\label{table:comparison5}
\begin{center}
\begin{tabular}{c |c c c}
& $M_1$ & $M_2^{\mathrm{A}2}$ & $M_3^{\mathrm{A}2}$\\
\hline
\multicolumn{1}{ c| } {accepted instances} & 984 & 980 & 909 \\
\multicolumn{1}{ c| } {average time (sec)} & 5.0353 & 5.5204 & 1.3043e-04 \\
\hline
\multicolumn{1}{ c| } {undecided instances} & 16 & 20 & 0 \\
\multicolumn{1}{ c| } {average time (sec)} & 267.1501 & 139.8464 & 0 \\
\hline
\multicolumn{1}{ c| } {rejected instances} & 0 & 0 & 91 \\
\multicolumn{1}{ c| } {average time (sec)} & 0 & 0 & 1.0384e-04 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\centering
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{fig_n_5}
\caption{}
\label{fig:M4M5M6rho}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{fig_n_6}
\caption{}
\label{fig:M4M5_times_accepted}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{fig_n_7}
\caption{}
\label{fig:M4M5_times_rejected}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{fig_n_8}
\caption{}
\label{fig:M6_times_accepted_rejected}
\end{subfigure}
\caption
{
Comparison of acceptance ratio, with respect to the normalized density function $\tilde{\rho}$, and the Cumulative Distribution Function (CDF) of the computation time for $M_1$, $M_2^{\mathrm{A}2}$, and $M_3^{\mathrm{A}2}$ methods
}
\label{fig:AlgComparisons2}
\end{figure}
Although $M_3^{\mathrm{A}2}$ might result in a few false negatives, Table~\ref{table:comparison5} indicates that its average computation time is drastically lower than the corresponding average computation times of $M_1$ and $M_2^{\mathrm{A}2}$. More importantly, Figs.~\ref{fig:M4M5_times_accepted}, \ref{fig:M4M5_times_rejected}, and \ref{fig:M6_times_accepted_rejected} demonstrate that $M_3^{\mathrm{A}2}$ has a lower computation time for almost all considered instances. Note that the simulations confirm the result of Theorem~\ref{thm:schedulability_thresholdmc2}: as displayed in Fig.~\ref{fig:M4M5M6rho}, any instance with $\tilde{\rho}=\frac{\rho}{m_{\mathrm{c}}} \leq 0.75$ is schedulable.
\subsection{Remotely Controlled Vehicles}
In this subsection, two numerical examples are given to illustrate the introduced concepts and algorithms. First we consider a tracking problem for vehicles with performance/safety constraints on the errors; this problem can be translated to P\ref{pr:schedulability} (see \cite{colombo2018invariance}). Then, we consider a tracking problem where the communication network is subject to packet losses.
\begin{example}[Networked control vehicles without packet loss]\label{ex:ex_tracking}
Consider a case of eight remotely controlled vehicles, described by the models
\begin{align}\label{eq:vehicles}
x_i(t+1) &= A_ix_i(t)+B_iu_i(t)+E_iv_i(t),\,\forall\, i
\end{align}
where~
\begin{equation}
A_i=\left[
\begin{array}{ccc}
1 & h & 0 \\
0 & 1 & h \\
0 & 0 & 1-\frac{h}{\tau_i}
\end{array}
\right],~B_i=\left[
\begin{array}{c}
0\\
0\\
\frac{h}{\tau_i}
\end{array}
\right],~E_i=\left[
\begin{array}{c}
0\\
0\\
1
\end{array}
\right],
\end{equation}
and~$\tau_1=0.1$,~$\tau_2=0.2$,~$\tau_3=0.3$,~$\tau_4=0.4$,~$\tau_5=\ldots=\tau_8=0.5$, and~$h=0.2$.
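For concreteness, the model matrices of a single vehicle can be assembled as in the following Python sketch (our own illustration; the function name is hypothetical):
\begin{verbatim}
import numpy as np

def vehicle_matrices(tau, h=0.2):
    """Triple (A, B, E) of the discrete-time vehicle model above,
    for engine time constant tau and sampling time h."""
    A = np.array([[1.0, h,   0.0],
                  [0.0, 1.0, h  ],
                  [0.0, 0.0, 1.0 - h / tau]])
    B = np.array([[0.0], [0.0], [h / tau]])
    E = np.array([[0.0], [0.0], [1.0]])
    return A, B, E

A1, B1, E1 = vehicle_matrices(tau=0.1)   # first vehicle of this example
\end{verbatim}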
The longitudinal motion of these vehicles must track their reference trajectories within prescribed error bounds, to realize a specified traffic scenario. Such situations occur, for instance, when setting up full-scale test scenarios for driver-assist systems. The reference state trajectories are generated by
\begin{equation}
x_i^d(t+1)=A_i x_i^d(t)+B_iu_i^d(t),~\forall i,
\end{equation}
while the tracking inputs are defined as
\begin{equation}
u_i(t):=u_i^d(t)+\tilde{u}_i(t).
\end{equation}
The error dynamics for each vehicle is
\begin{align}\label{eq:vehicles2}
e_i(t+1) &= A_ie_i(t)+B_i \tilde{u}_i(t)+E_iv_i(t),\,\forall\, i
\end{align}
where $e_i(t):=x_i(t)-x_i^d(t)$ is the difference between the state and the desired state, and $\tilde{u}_i(t)=u_i(t)-u_i^d(t)$ is the difference between the system input and its feed-forward term.
We assume that the controller is always connected with the actuator (SC network), i.e., $\delta_u=1$ in Fig.~\ref{fig:system_1}, while the sensor is connected to the controller through a network, i.e., $\delta_s(t)=C(t)$ in Fig.~\ref{fig:system_1}.
We consider feedback terms of the form $\tilde{u}_i(t)=-K_i\hat{e}_i(t)$, where $\hat{e}_i(t)$ is the tracking error estimate, specified by
\begin{equation}
\hat{e}_i(t)=\begin{cases}
e_i(t), & \text{if } i \in C(t)\\
A_i \hat{e}_i(t-1)+B_i \tilde{u}_i(t-1), & \text{if } i \notin C(t)
\end{cases}.
\end{equation}
Feedback gains $K_i$ are calculated by solving LQR problems with cost gains \(Q=\text{diag}([10,1,0.1]),\ R=0.1\). Furthermore, $\mathcal{U}_i=\{-10\le \tilde{u}_i \le 10\},$ and $\mathcal{V}_i=\{ |v_i| \le \tilde{v}_i\}$ with \(\tilde{v}_1=3.4,\ \tilde{v}_2=2.1,\ \tilde{v}_3=1.1,\ \tilde{v}_4=0.6,\ \tilde{v}_5=\ldots=\tilde{v}_8=0.4\) are the set of admissible control inputs and the bound on the disturbances, respectively.
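A minimal sketch of how the gains and estimates above could be computed is given below (Python; this is our own illustration with hypothetical function names, not the code used to generate the results of this example):
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_discrete_are

def lqr_gain(A, B, Q=np.diag([10.0, 1.0, 0.1]), R=np.array([[0.1]])):
    """Discrete-time LQR gain K for the error dynamics, using the
    cost weights quoted in the text."""
    P = solve_discrete_are(A, B, Q, R)
    return np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)

def estimate_update(A, B, K, e_hat_prev, e_meas, scheduled):
    """Tracking-error estimate: use the measurement when the agent is
    scheduled (i in C(t)); otherwise propagate the model open loop
    with the previously applied input u = -K e_hat."""
    if scheduled:
        return e_meas
    return A @ e_hat_prev + B @ (-K @ e_hat_prev)
\end{verbatim}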
For each system, the admissible tracking errors belong to the set
\begin{equation}
\mathcal{E}_i=\left\{ e_i \in \mathbb{R}^3: \left[
\begin{array}{c}
-1\\
-5\\
-10
\end{array}
\right] \le
e_i \le \left[
\begin{array}{c}
1\\
5\\
10
\end{array}
\right]\right\},~\,\forall\, i.
\end{equation}
In this example, the safe time intervals obtained using \eqref{eq:safe_time} are \(\alpha_1=2,~\alpha_2=3,~\alpha_3=4,~\alpha_4=5,~\alpha_5=\ldots=\alpha_8=6\). Assuming there are two communication channels, i.e., \(m_{\mathrm{c}}=2\), finding a feasible schedule is not straightforward. Nevertheless, one can use Algorithm~\ref{alg:WSP} to find one; the cyclic part of such a schedule is as follows
\begin{equation}
\mathbf{C}_{\mathrm{r}}=C_1,C_2,C_3,C_4,C_5,C_6,C_1,C_7,C_3,C_8,C_5,C_9~,
\end{equation}
where
\begin{align}
C_1&=(1,2),&C_2=(3,5),~&C_3=(1,6),&C_4=(4,2) \nonumber \\
C_5&=(1,7),&C_6=(3,8),~&C_7=(4,5),&C_8=(3,2) \nonumber \\
C_9&=(4,8).
\end{align}
The tracking errors for the above schedule, along with the corresponding feedback control actions, are reported in Fig.~\ref{fig:vehicle_errors}, which shows that they remain within their admissible sets. Note that in this example the scheduler is designed offline.
\begin{figure}
\centering
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{fig6}
\caption{}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{fig7}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{fig8}
\caption{}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{fig9}
\caption{}
\end{subfigure}
\caption
{ Example \ref{ex:ex_tracking}:
The shaded bands identify values outside the admissible sets
}
\label{fig:vehicle_errors}
\end{figure}
\end{example}
\begin{example}[Networked control vehicles with packet loss]\label{ex:ex_tracking2}
Consider the first five vehicles in Example \ref{ex:ex_tracking} in which \(\tilde{v}_1=2,\ \tilde{v}_2=1,\ \tilde{v}_3=0.45,\ \tilde{v}_4=0.25,\ \tilde{v}_5=0.15\). Using \eqref{eq:safe_time}, one can compute safe time intervals as $\alpha_1=4$, $\alpha_2=6$, $\alpha_3=8$, $\alpha_4=10$, and $\alpha_5=12$.
Assuming only two packets can be lost every four successive packets, one can use \eqref{eq:thcon} to calculate new safe time intervals as $\beta_1=\beta_2=2$, $\beta_3=\beta_4=4$, and $\beta_5=6$.
One can verify that the following $\mathbf{C}_{\mathrm{r}}$ is the cyclic part of a robust feasible schedule.
\begin{align}
\mathbf{C}_{\mathrm{r}}=&(1,2),(3,4),(1,2),(1,3),(2,4), \nonumber \\
&(1,5),(2,3),(1,4),(2,5)
\end{align}
By considering $\mathbf{C}_{\mathrm{r}}$ as the cyclic part of the baseline schedule, one can obtain a robust schedule using either \eqref{eq:schedule_with_losses} or \eqref{eq:Optimal_Online_Schedule_losses}. Note that in this example, we assume that the scheduler has access to all states and there is no measurement noise. The two strategies are compared in Fig.~\ref{fig:vehicle_residual_deadlines}. In this figure, we note:
\begin{itemize}
\item The robust safety residuals are non-negative, i.e., $\bar{r}_i(t) \geq 0$, and the update deadlines are positive, i.e., $\gamma_i^x(t) > 0$, both of which imply no constraint is violated;
\item For the online schedule $\bar{r}_i \geq 3$ most of the time and $\min_i \bar{r}_i =1$ at four time instants; however, in the shifted schedule $\bar{r}_i =1$ at many time instants and $\min_i \bar{r}_i =0$;
\item In the online schedule $\max_i \bar{r}_i \leq 6$ most of the time and $\max_i \bar{r}_i =9$ at just one time instant; however, in the shifted schedule $\max_i \bar{r}_i \leq 8$ most of the time and $\max_i \bar{r}_i =12$ at two time instants. These observations imply that the online schedule increases the safety of the least safe system at the expense of decreasing the safety of safer systems;
\item One can also see the trade-off made by the online schedule by comparing the measurement update deadlines. For instance, $\min_t \gamma^x_4(t)=2$ and $\min_t \gamma^x_5(t)=4$ for the online schedule, whereas $\min_t \gamma^x_4(t)=6$ and $\min_t \gamma^x_5(t)=10$ for the shifted schedule.
\end{itemize}
\begin{figure}
\centering
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{fig11}
\caption{}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{fig13}
\caption{}
\end{subfigure}
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{fig12}
\caption{}
\end{subfigure}
\hfill
\begin{subfigure}[b]{0.24\textwidth}
\centering
\includegraphics[width=\textwidth]{fig14}
\caption{}
\end{subfigure}
\caption
{ Example \ref{ex:ex_tracking2}:
Comparison between the robust online schedule $\bar{\mathbf{C}}^*$, defined in \eqref{eq:Optimal_Online_Schedule_losses}, and the shifted schedule $\bar{\mathbf{C}}$, defined in \eqref{eq:schedule_with_losses}, shown in the left and right columns, respectively. The first row shows the measurement update deadlines and the second the robust safety residuals defined in \eqref{eq:residuals2}.
}
\label{fig:vehicle_residual_deadlines}
\end{figure}
\end{example}
\section{Conclusions}
\label{section:conclusion}
In this paper we proposed strategies to guarantee that networked control systems are kept within an assigned admissible set. We provided such guarantees by translating the control problem into a scheduling problem. To that end, we introduced PP and WSP, reviewed the state-of-the-art knowledge and refined some results on their schedulability. This allowed us to design offline schedules, i.e., schedules which can be applied to the NCS regardless of the actual noise realization. To reduce conservatism, we then proposed an online scheduling strategy based on a suitable shift of a pre-computed offline schedule, which preserves robust positive invariance.
Future research directions include designing control laws that maximize the safe time intervals; adopting a probabilistic packet loss model instead of the deterministic one; and considering systems with coupled dynamics or admissible sets.
\bibliographystyle{IEEEtran}
\section*{Abstract}{\small
We present optical (UBVRI) linear polarimetric observations of 8 Wolf--Rayet (WR) massive
binaries and single stars. We have corrected the observed values for the interstellar extinction
and polarization by the interstellar medium to obtain the intrinsic polarization and position angle.
We find three highly polarized stars, with polarization between 5\% and 10\% (WR1, WR5 and WR146), three between 3\%
and 4\% (WR2, WR3 and WR4), and two between 1\% and 2\% (WR137 and WR140). Moreover, 5 stars show
an increasing degree of polarization towards shorter wavelengths (e.g. WR 146), indicative of an asymmetric
circumstellar envelope, and 3 have nearly constant polarization within the errors (e.g. WR 140).
\vspace{10mm}
\normalsize}
\end{minipage}
\section{Introduction}
Massive stars are one of the most puzzling components of stellar evolution and many
questions related to them still remain unanswered. They show high mass-loss rates (10$^{-6}$ to 10$^{-7}$ M$_{\odot}$ y$^{-1}$),
in the form of winds or ejecta. Specifically, Wolf--Rayet (WR) stars represent a late evolutionary
phase of massive stars with faster winds and significantly higher mass-loss rates (10$^{-4}$ to 10$^{-5}$ M$_{\odot}$ y$^{-1}$)
than OB massive stars (Nugis \& Lamers 2000). 39\% of the Galactic WR stars may have a binary companion (van der Hucht 2001). Despite the fact that
the intense radiation field of the young companion should prevent the formation of dust in these binaries,
Allen, Swings \& Harvey (1972) found substantial IR excess in a subset of WR stars, indicative of the
presence of circumstellar dust at 1500 K associated with strong colliding winds. Such stars are expected, therefore,
to be highly polarized due to free-free scattering (e.g. WR 97, $\sim$5.5\%, Niemela et al. 1996).
Nevertheless, Harries, Hillier \& Howarth (1998) did not find evidence of linear polarization among WR binaries.
Polarimetry is, therefore, an important observational technique to study dust formation and properties
(distribution, composition) and the stellar winds of massive stars.
\section{Observations}
Optical (UBVRI) linear polarimetry of 8 massive stars was performed
at the 0.84m, f/15 telescope at Observatorio Astronomico Nacional in Mexico
using a CCD camera and the POLIMA polarimeter (Hiriart et al. 2005) in August 2012 and April 2013.
The normalized Stokes parameters $Q$ and $U$ were calculated by means of the following equations:
$Q=[f(0)-f(90)]/[f(0)+f(90)]$ and $U=[f(45)-f(135)]/[f(45)+f(135)]$,
where $f(\theta)$ is the flux of the object in the image at a position angle $\theta$ of the polarizer optical
axis. The $Q$ and $U$ parameters are related to the degree ($P$) and position angle ($\theta$) of polarization by
$P=(Q^2+U^2)^{1/2}$ and $\theta=\frac{1}{2}\arctan(U/Q)$.
In doing so, we assumed that the circular polarization is negligible.
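In practice this reduces to a few lines of code; the following Python sketch (a minimal illustration without error propagation or corrections for instrumental polarization) shows the computation, using the two-argument arctangent to resolve the quadrant of the position angle:
\begin{verbatim}
import numpy as np

def linear_polarization(f0, f45, f90, f135):
    """Degree P and position angle theta (in degrees) of linear
    polarization from fluxes at four polarizer angles."""
    Q = (f0 - f90) / (f0 + f90)
    U = (f45 - f135) / (f45 + f135)
    P = np.sqrt(Q**2 + U**2)
    theta = 0.5 * np.degrees(np.arctan2(U, Q))
    return P, theta
\end{verbatim}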
\section{Preliminary Results}
\begin{figure}
\includegraphics[scale=0.31]{fig1a.eps}
\includegraphics[scale=0.31]{fig1b.eps}
\caption{Polarization (upper panel) and position angle of the polarization vector (lower panel)
as a function of wavelengths for WR140 (left panels) and WR146 (right panels).}
\end{figure}
Our preliminary results suggest scattering of light from dust grains in axisymmetric circumstellar
envelopes around massive WR stars. WR140 is a well-known colliding-wind binary star and periodic dust-maker (Williams et al. 1990).
Its degree of polarization is found to be independent of wavelength at about 2\%, indicative of large dust-grain scatterers
(Fig. 1, left panels). WR3 and WR137 show a similarly flat polarization wavelength dependence at 2.5\% and 1\%, respectively.
WR146 is found to be the most highly polarized star in our sample, with increasing polarization towards
shorter wavelengths (Fig. 1, right panels), consistent with an asymmetric circumstellar
envelope and multiple scattering (Wood et al. 1996). WR1, WR2, WR3 and WR5 show a similar
wavelength dependence and maximum polarization in the B band of 6.5\%, 4.0\%, 3\% and 4.5\%, respectively.
WR1 shows small time variations of polarization and position angle (Fig. 2). Responsible for these time
variations may be: (i) asymmetric circumstellar/binary envelopes, (ii) episodic dust formation due
to colliding winds, (iii) inhomogeneities in WR winds (e.g. blobs) and (iv) the orbital phase for binary systems.
However, low-level variations can also occur due to weather conditions, calibration and technical problems.
\section{Conclusion}
Our main conclusions are: (i) 6 WR stars are clearly polarized: three of them
show intrinsic linear polarization higher than 5\%, three have polarization between 3\% and
4\%, and two lower than 2\%; (ii) we did not find a difference between the WN and WC subclasses,
indicating no correlation between the polarization and T$_{\rm eff}$ (evolutionary stage of WR stars);
(iii) variable polarization might be found due to dust formation in the colliding-wind zone
and asymmetric circumstellar/binary envelopes; and (iv) the linear polarization seems to increase
with J--H color. Surprisingly, we did not find a similar trend for the J--K and H--K colors.
\begin{figure}
\center
\includegraphics[scale=0.46]{fig2.eps}
\caption{Polarization (left panels) and position angle (right panels) vs Julian Date for the WR1 in the
U, B, V, R and I bands}
\end{figure}
\small
\section*{Acknowledgments}
S. A. gratefully acknowledges a postdoctoral scholarship from DGAPA--UNAM (IN100410).
R.--V. J. and H. D. also acknowledge support from CONACyT, Mexico through grants 180817 and 106719.
\section*{References}
\bibliographystyle{aj}
\small
Allen D. A., Swings J. P., \& Harvey P. M., 1972, A\&A, 20, 333 \\
Harries T. J., Hillier D. J., \& Howarth I. D., 1998, MNRAS, 296, 1072 \\
Hiriart D., Valdez J., Quiros F., et al., 2005, POLIMA Manual de Usuario, UNAM\\
Niemela V. S., Rovero A., C., \& Cerruti M. A., 1996, RMxAC, 5, 126 \\
Nugis T., \& Lamers H. J. G. L. M., A\&A, 2000, 360, 227 \\
van der Hucht K. A., 2001, New Astr. Rev. 45, 135 \\
Williams P. M., van der Hucht K. A., Pollock A. M. T., et al., 1990, MNRAS, 243, 662\\
Wood K., Bjorkman J. E., Whitney B., et al., 1996, ApJ, 461, 847
\section*{Introduction}
The attractive possibility that the observed baryon asymmetry of the
universe was created at the electroweak phase transition has
received a lot of attention in recent years \cite{RE}.
It was realized \cite{KURUSH} that
all the conditions formulated by Sakharov \cite{SA} (for generating
the baryon asymmetry) can be met
in the Standard Model (SM). In particular, the sphaleron transitions were
recognized as unsuppressed at high temperatures and thus as playing a
central role in the mechanisms generating the
amount of $\Delta B$ we finally observe today.
These sphaleron processes, if unsuppressed after the electroweak phase
transition, will drive the previously created baryon asymmetry to
zero, and so their rate $\Gamma$ at that time \cite{KURUSH,ARLE}
($\Gamma \sim \exp(-E_{sph}/T)$) should be smaller than the Hubble
expansion rate $H$ for them to fall out of equilibrium.
Due to the exponential sensitivity of $\Gamma$ to the ratio
$E_{sph}(T)/T$, imposing $\Gamma < H$ gives the model independent
condition \cite{SH}:
\begin{equation}
\label{central}
\frac{E_{sph}(T_c)}{T_c} \geq 45.
\end{equation}
After relating the sphaleron energy \cite{KLMA} $E_{sph}$ to the vacuum
expectation value of the Higgs field at the critical
temperature $T_c$ ($E_{sph}(T_c)\sim \langle\phi(T_c)\rangle/g$) the central
relation Eq. (\ref{central}) translates into
\begin{equation}
\label{strong}
\frac{\langle\phi (T_c)\rangle}{T_c} \stackrel{>}{{}_\sim} 1,
\end{equation}
which tells us that the transition should be sufficiently strongly
first order to avoid the erasure of the baryon asymmetry.
When the dependence of $\langle\phi (T_c)\rangle$ on the parameters
of the Higgs potential is taken into account,
condition (2) gives an upper bound on the mass of the
Higgs boson (in the Standard Model where this calculation was
originally performed and also in more general cases).
The study of
this bound is the central aim of the work summarized here.
All along this paper we will say that a phase transition is strongly
first order if condition (\ref{strong}) is satisfied and weakly first
order (or even second order) if it is not.
\section*{The Standard Model}
The effective potential at non-zero temperature is the central
tool in studying the electroweak phase transition. In the one-loop
approximation the temperature-dependent
potential describes a first-order phase transition between the
symmetric phase (at high T) and the $SU(2)\times U(1)$ breaking one.
Using a high temperature expansion the form of the potential is:
\begin{equation}
V(\phi,T)=D(T^2-T_0^2)\phi^2-E T\phi^3+\frac{1}{4}\lambda(T)\phi^4,
\end{equation}
where $D, T_0, E$ and $\lambda(T)$ are easily calculable.
Approximating the critical temperature $T_c$ as the temperature $T_D$
at which two degenerate minima
coexist ($T_c$ will be somewhere in the narrow range
between $T_D$ and the temperature $T_0$ at which the
minimum at $\phi=0$ is destabilized) one gets
\begin{equation}
\label{MAIN}
\frac{\langle\phi (T_c)\rangle}{T_c} = \frac{2 E}{\lambda(T_c)}.
\end{equation}
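(For completeness, Eq.~(\ref{MAIN}) follows from requiring two degenerate minima at $T_D$: dividing $V(\phi_c,T_D)=0$ and $\partial V/\partial\phi\,|_{\phi_c,T_D}=0$ by $\phi_c^2$ and $\phi_c$, respectively, gives
\begin{eqnarray}
D(T_D^2-T_0^2)-E T_D\phi_c+\frac{1}{4}\lambda\phi_c^2 &=& 0, \nonumber \\
2D(T_D^2-T_0^2)-3E T_D\phi_c+\lambda\phi_c^2 &=& 0, \nonumber
\end{eqnarray}
and subtracting twice the first equation from the second one finds $\phi_c=2ET_D/\lambda$.)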
We see from Eq.~(\ref{MAIN}) that the strength of the transition is
governed by $E$ (which gives the cubic term in the potential).
Now, $E$ comes from bosonic loops only (from $n=0$ Matsubara
frequencies). Each bosonic degree of freedom, with field-dependent
mass $M_i(\phi)$, contributes to the
potential a term
\begin{equation}
\Delta_i V = -\frac{T}{12 \pi}\left[ M_i^2(\phi)\right]^{3/2}.
\end{equation}
In the SM, the main contribution is the one coming from gauge boson
loops:
\begin{equation}
\label{GB}
\Delta V = -\frac{T}{12 \pi}\left[ 6 \left(
\frac{1}{4}g^2\phi^2\right)^{3/2} + 3 \left(
\frac{1}{4}(g^2+g'^2)\phi^2\right)^{3/2}\right].
\end{equation}
Using the value of $E$ obtained from Eq. (\ref{GB}) and relating
$\lambda$ to the Higgs mass one gets the bound \cite{SH,DIHUSI}
\begin{equation}
M_h\stackrel{<}{{}_\sim} (45 - 50)\ GeV,
\end{equation}
region which is already ruled out by LEP.
This bound is obtained within the one-loop approximation to the
effective potential and, as is well known, this is only reliable
for $\phi>T$ (the expansion parameter is $g^2 T^2 / M^2(\phi)$). For
small $\phi$ the perturbative expansion is plagued with infrared
divergences. To cure this, one has to reorganize the loop expansion
resumming the most important infrared divergent
diagrams \cite{CASH} (the so-called daisy diagrams). The effect of this daisy resummation is to
shift the mass of the $n=0$ modes contributing to the cubic term by a
thermal mass (Debye screening effect).
In the SM this thermal mass is of order $g^2 T^2$ to leading order
for the longitudinal gauge bosons, and zero (to this order) for
the transverse ones.
The effect of this thermal mass is to shield the cubic term of the
longitudinal modes while leaving the transverse modes
unaffected and so to effectively reduce $E$ by a factor $2/3$.
The improved bound on the Higgs mass is then even lower \cite{DHLLL}:
\begin{equation}
M_h\stackrel{<}{{}_\sim} \sqrt{2/3} (45 - 50)\ GeV\sim (35 - 40)\ GeV.
\end{equation}
There exist further refinements of this calculation in the
literature. If the screening of the transverse modes at subleading
order $(g^4 T^2)$ is taken into account \cite{MM}, the strength
of the transition is further reduced.
On the other hand, some non-perturbative effects may give a lower
value for the critical temperature \cite{KRS}
translating into a larger ratio
$\langle\phi (T_c)\rangle/T_c$. Also there are some two-loop
(resummed) calculations \cite{TWOL}
that give larger $\langle\phi (T_c)\rangle$.
Even with these uncertainties the SM has other problems in generating
the observed baryon asymmetry. In fact the $CP$-violating
effects are by far too small to give the correct amount of $\Delta B$,
and some recent attempts \cite{FS} to solve this problem have been
shown \cite{GA} to be flawed. So it seems necessary to
move to some extensions of the SM to explain the generation of the
baryon asymmetry at the electroweak phase transition.
{}From what we have learned by studying the Standard Model case we
can work out a strategy \cite{ANHA} for extensions of it
to give a strong first-order phase transition.
A look at Eq. (\ref{MAIN}) shows that we need a large cubic term
in our potential.
So, adding more bosonic degrees of freedom may help, since only bosons contribute
to this cubic term.
The screened mass of such bosons will have the
general form
\begin{equation}
\label{GENMASS}
\overline{M}^2(\phi) = M^2 + \lambda \phi^2 + \Pi (T),
\end{equation}
where $M$ is an invariant mass, $\lambda$ is a generic coupling of
the new bosons to the Higgs (here $\phi$ stands generically for the
fields driving the transition) and $\Pi$ is the thermal mass.
For this to give a non-negligible contribution to the cubic term,
small values of $M$ and $\Pi$ and large values of $\lambda$ are
required. Usually, there will exist an experimental lower bound for
$M$ so that one cannot take it to be arbitrarily small.
Concerning the coupling $\lambda$ the requirement of perturbativity
up to some large scale will set an upper bound on it. In a given
model one has to take all this conditions into account when looking
for a region in the parameter space where the phase transition is
strong enough.
Another way of increasing the strength of the transition has
been considered in the
literature \cite{ANHA,BENSON,ZHANG}, namely
the possibility of decreasing the value of the coupling in the
denominator of Eq. (\ref{MAIN}) due to one loop corrections (even at zero
temperature). But usually for this
mechanism to give a sizeable effect large couplings
and/or many bosonic degrees of freedom are needed.
Before moving to the discussion of extended models some words about
the validity of the perturbative calculations are needed. Roughly
speaking, one can trust the (resummed) perturbative calculations in
the SM if $m_h^2\stackrel{<}{{}_\sim} m_W^2$ (or equivalently $\lambda / g^2 \stackrel{<}{{}_\sim}
1$). When trying to improve the "sphaleron bound"
$m_h\stackrel{<}{{}_\sim} m_{h,crit}$, one can face cases where $m_{h,crit}\sim m_W$
raising some doubts about the reliability of the calculation of
$m_{h,crit}$. In the cases we are going to consider the relevant
parameter will no longer be $\lambda/g^2$. In fact $g^2$ will be
substituted by $\zeta^2$ in the SM + singlet case and by $h_t^2$
in the MSSM. As we will be interested in large values of these
couplings, for a given Higgs mass the expansion parameter will be
smaller than in the SM, and the perturbative calculation more reliable.
\section*{The Standard Model with a gauge Singlet}
This is the simplest extension of the Standard Model that can in principle
improve the situation concerning the electroweak baryogenesis
problem, as already noted in Ref. \cite{ANHA}.
The lagrangian of the model is defined as:
\begin{equation}
\label{lag}
{\cal L}={\cal L}_{SM}+\partial^{\mu}S^*\partial_{\mu}S-
M^2S^*S-\lambda_S(S^*S)^2-2\zeta^2 S^*SH^*H,
\end{equation}
where $H$ is the SM doublet with $\langle H
\rangle=\phi/\sqrt{2}$, $\phi$ is the classical field, and $M^2,
\lambda_S, \zeta^2 \ge 0$, to guarantee that $\langle S\rangle=0$ at all
temperatures.
The temperature dependent effective potential in the one loop
approximation was studied in Ref. \cite{ANHA}. The daisy improvement
was performed in Refs. \cite{EQ,BENSON}. The potential is a
function of the classical field $\phi$ as in the SM, but with
a new contribution coming from the $S$ bosonic
loops of the form (imposing renormalization conditions preserving the tree
level value of the $vev$ $v$)
\begin{equation}
\Delta V_S=g_S
\left\{ \frac{m_S^2(\phi)T^2}{24}-\frac{\overline{
M}_S^3(\phi)T}{12\pi} -\frac{m_S^4(\phi)}{64\pi^2}
\left[\log\frac{m_S^2(v)}{c_BT^2}-2\frac{m_S^2(v)}{m_S^2(\phi)}
\right] \right\},
\end{equation}
where $g_S=2$ is the number of degrees of freedom of the (complex) singlet.
$m_S(\phi)$ is the field dependent mass of $S$
and $\overline{M}_S(\phi)$ its Debye screened mass. They are given by
\be\begin{array}{ccl}
m_S^2(\phi)&=&M^2+\zeta^2 \phi^2,\vspace{.5cm}\\
\overline{M}_S^2&=&m_S^2(\phi)+\Pi_S(T).
\end{array}\ee
Here $\Pi_S$ is, to leading order
\begin{equation}
\Pi_S(T)=\frac{\lambda_S+\zeta^2}{3}T^2.
\end{equation}
The thermal polarizations for the rest of SM particles can receive
extra contributions coming from $S$ loops (see Refs. \cite{EQ,BENSON}).
To understand the numerical results presented at the end of this
section it is convenient to study analytically the effective potential
assuming, as will be the case, that the bosonic contribution is
dominated by the singlet (and this is the reason for the relevant
expansion parameter to be $\lambda/\zeta^2$ rather than
$\lambda/g^2$). The high temperature expansion of the
effective potential takes then the form
\begin{equation}
V(\phi)=A(T)\phi^2+B(T)\phi^4+C(T)\left(\phi^2+K^2(T)\right)^{3/2},
\end{equation}
where
\begin{equation}
\label{C}
C(T)=-\frac{\zeta^3T}{6\pi},
\end{equation}
\begin{equation}
\label{K2}
K^2(T)=\frac{(\zeta^2+\lambda_S)T^2+3M^2}{3\zeta^2}.
\end{equation}
{}From these expressions and the discussion at the end of the previous
section it is clear that the best case for the phase transition to be
strongly first order is to have low values of $M$ and $\lambda_S$ and
large values of $\zeta^2$. This is what was found numerically in
Ref. \cite{EQ}.
For large values of $M$ the transition is
weaker and eventually, when $M\gg T$,
the singlet decouples and the SM
result is recovered. The dependence
on $\zeta^2$ is the most interesting one. Imposing the condition that
all the couplings in the theory remain perturbative up to some high
scale $\Lambda$ an upper bound for $\zeta^2$ at the electroweak scale
is derived (as a function, of course, of the scale $\Lambda$ and the
unknown value of the top Yukawa coupling $h_t$). This renormalization
group study was also performed in Ref. \cite{EQ}.
Translating the requirement of a strong phase transition
($E_{sph}(T_c)/T_c>45$ or $\langle\phi (T_c)\rangle/T_c\stackrel{>}{{}_\sim} 1.3$ with our
definition of $T_c$) into a bound
on the Higgs mass $M_H^{crit}$, one finds that in this model
one can evade
the stringent bound of the SM and go to larger values of
$M_H^{crit}$. The numerical value of this critical mass depends on
how large the coupling $\zeta^2$ is, or equivalently, on
the mass of the top and the scale of new physics $\Lambda$. Assuming
that this scale is not lower than $10^6\ GeV$ the value of
$M_H^{crit}$ can be of order $80\ GeV$ (see Fig. 1) which is higher than the
experimental bound. Larger values of this critical mass can only be
obtained at the price of introducing new physics at lower scales.
\section*{The\, Minimal\, Supersymmetric\, Standard Model}
The Minimal Supersymmetric Standard Model (MSSM) is the physically
most motivated and phenomenologically most acceptable among the
extensions of the SM.
Concerning the generation of the baryon asymmetry the MSSM allows
for extra CP-violating phases besides the Kobayashi-Maskawa one,
which could help in generating the observed baryon
asymmetry \cite{CONE}.
On the other hand the MSSM has a lot of new degrees of freedom and
the nature of the phase
transition could be significantly modified with respect to the SM.
The main tool for our study is the one-loop, daisy-improved (for
previous studies in the one-loop approximation see
Refs. \cite{DIHUSI,MSSM1})
finite-temperature
effective potential of the MSSM, $V_{\rm{eff}}(\phi,T)$. Now the
potential is actually a function of two fields: $\phi_1 \equiv {\rm
Re}
\, H_1^0$ and $\phi_2 \equiv {\rm Re} \, H_2^0$, where $H_1^0$
and $H_2^0$ are the neutral
components of the Higgs doublets $H_1$ and $H_2$, thus $\phi$ will
stand
for $(\phi_1,\phi_2)$. Working in the 't~Hooft-Landau gauge and in
the
$\overline{DR}$-scheme, we can write
\begin{equation}
\label{total}
V_{\rm{eff}}(\phi,T) = V_0(\phi)
+ V_1(\phi,0) + \Delta V_1(\phi,T)
+\Delta V_{\rm{daisy}}(\phi,T) \, ,
\end{equation}
where
\begin{eqnarray}
\label{v0}
V_0(\phi) & = & m_1^2 \phi_1^2 + m_2^2 \phi_2^2 + 2 m_3^2 \phi_1
\phi_2
+ {g^2+g'\,^2 \over 8} (\phi_1^2 -\phi_2^2)^2 \, ,
\vspace*{.5cm}\\
\label{deltav}
V_1(\phi,0) & = & \sum_i {n_i \over
64 \pi^2} m_i^4 (\phi) \left[ \log {m_i^2 (\phi) \over Q^2} - {3
\over 2} \right] \, ,
\vspace*{.5cm}\\
\label{deltavt}
\Delta V_1(\phi,T) & = & {T^4 \over 2 \pi^2} \left\{ \sum_i
n_i \, J_i \left[ { m^2_i (\phi) \over T^2 } \right] \right\} \, ,
\vspace*{.5cm}\\
\label{dvdaisy}
\Delta V_{\rm{daisy}}(\phi,T) & = & - {T \over 12 \pi} \sum_i n_i
\left[ \overline{m}_i^3 (\phi, T ) - m_i^3 (\phi) \right] \, .
\end{eqnarray}
The first term,
Eq.~(\ref{v0}),
is the tree-level potential. The second term, Eq.~(\ref{deltav}), is
the one-loop contribution at $T=0$: $Q$ is the renormalization scale,
where we
choose for
definiteness $Q^2 = m_Z^2$, $m_i^2 (\phi)$ is the field-dependent
mass of
the $i^{th}$ particle, and $n_i$ is the corresponding number of
degrees of
freedom, taken negative for fermions. Since $V_1(\phi,0)$ is
dominated by
top ($t$) and stop ($\tilde{t}_1,\tilde{t}_2$) contributions, only these will be
included
in the following. The third term, Eq.~(\ref{deltavt}), is the
additional
one-loop
contribution due to temperature effects. Here $J_i=J_+ (J_-)$ if the
$i^{th}$
particle is a boson (fermion), and
\begin{equation}
\label{ypsilon}
J_{\pm} (y^2) \equiv \int_0^{\infty} dx \, x^2 \,
\log \left( 1 \mp e^{- \sqrt{x^2 + y^2}} \right) \, .
\end{equation}
Since the relevant contributions to $\Delta V_1(\phi,T)$ are due to
top ($t$),
stops ($\tilde{t}_1,\tilde{t}_2$) and gauge bosons ($W,Z$), only these will be
considered
in the following. Finally, the last term, Eq.~(\ref{dvdaisy}), is the
correction
coming from daisy diagrams. The sum
runs over bosons only. As usual, the masses $\overline{m}_i^2 (\phi,T)$ are obtained
from the ${m}_i^2 (\phi)$ by adding the leading $T$-dependent
self-energy
contributions, which are proportional to $T^2$.
The relevant degrees of freedom for our calculation are:
\begin{equation}
\label{multi}
n_t = - 12 \, , \;\;
n_{\tilde{t}_1} = n_{\tilde{t}_2} = 6 \, , \;\;
n_W=6 \, , \;\; n_Z=3 \, , \;\;
n_{W_L}=2 \, , \;\; n_{Z_L}=n_{\gamma_L}=1 \, .
\end{equation}
The field-dependent top mass is
\begin{equation}
\label{tmass}
m_t^2(\phi)=h_t^2 \phi_2^2 \, .
\end{equation}
The entries of the field-dependent stop mass matrix are
\begin{eqnarray}
\label{tlmass}
m_{\tilde{t}_L}^2 (\phi) & = & m_{Q_3}^2 + m_t^2 (\phi) +
D_{\tilde{t}_L}^2 (\phi) \, ,
\vspace{.5cm}\\
\label{trmass}
m_{\tilde{t}_R}^2 (\phi) & = & m_{U_3}^2 + m_t^2 (\phi) +
D_{\tilde{t}_R}^2 (\phi) \, ,
\vspace{.5cm}\\
\label{mixmass}
m_X^2 (\phi) & = & h_t (A_t \phi_2 + \mu \phi_1) \, ,
\end{eqnarray}
where $m_{Q_3}$, $m_{U_3}$ and $A_t$ are soft supersymmetry-breaking
mass
parameters, $\mu$ is a superpotential Higgs mass term, and
\begin{eqnarray}
\label{dterms}
D_{\tilde{t}_L}^2(\phi)& = & \left( {1 \over 2}-
{2 \over 3}\sin^2 \theta_W \right)
{g^2 + g'\,^2 \over 2}(\phi_1^2-\phi_2^2),\\
D_{\tilde{t}_R}^2(\phi) &=&\left( {2 \over 3}\sin^2 \theta_W \right)
{g^2 + g'\,^2 \over 2}(\phi_1^2-\phi_2^2)
\end{eqnarray}
are the $D$-term contributions. The field-dependent stop masses are
then
\begin{equation}
\label{mstop}
m_{\tilde{t}_{1,2}}^2 (\phi) = {m^2_{\tilde{t}_L} (\phi) +
m^2_{\tilde{t}_R} (\phi) \over 2} \pm \sqrt{ \left[
{m^2_{\tilde{t}_L} (\phi)- m^2_{\tilde{t}_R} (\phi) \over 2}
\right]^2 + \left[ m_X^2(\phi) \right]^2 } \, .
\end{equation}
The corresponding effective $T$-dependent masses,
$\overline{m}^2_{\tilde{t}_{1,2}}
(\phi,T)$, are given by expressions identical to (\ref{mstop}), apart
from the
replacement
\begin{equation}
\label{repl}
m^2_{\tilde{t}_{L,R}} (\phi) \, \rightarrow \,
\overline{m}^2_{\tilde{t}_{L,R}}(\phi,T) \equiv
m^2_{\tilde{t}_{L,R}} (\phi)+ \Pi_{\tilde{t}_{L,R}}(T) \, .
\end{equation}
The $\Pi_{\tilde{t}_{L,R}}(T)$ are the leading parts of the
$T$-dependent
self-energies of $\tilde{t}_{L,R}$\,,
\begin{eqnarray}
\label{pistl}
\Pi_{\tilde{t}_L}(T)& = &
{4 \over 9}g_s^2 T^2 +
{1 \over 4}g^2 T^2 +
{1 \over 108}g'\,^2 T^2 +
{1 \over 6}h_t^2 T^2 \, ,
\vspace{.5cm}\\
\label{pistr}
\Pi_{\tilde{t}_R}(T) & = &
{4 \over 9}g_s^2 T^2 +
{4 \over 27} g'\,^2 T^2 +
{1 \over 3}h_t^2 T^2 \, ,
\end{eqnarray}
where $g_s$ is the strong gauge coupling constant. Only loops of
gauge
bosons, Higgs bosons and third generation squarks have been included,
implicitly assuming that all remaining supersymmetric particles are
heavy and decouple. If some of these are also light, the plasma
masses for the stops will be even larger, further suppressing the
effects of the associated cubic terms, and therefore weakening the
first-order nature of the phase transition.
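To make the interplay between these contributions explicit, the thermally corrected stop masses defined above can be evaluated as in the following Python sketch (our own illustrative transcription, not the code used for the results of this paper; all couplings and soft parameters are taken as inputs):
\begin{verbatim}
import numpy as np

def stop_masses_squared(phi1, phi2, T, mQ3, mU3, At, mu, ht, gs, g, gp):
    """Field- and T-dependent stop mass eigenvalues squared, following
    the expressions above (gp denotes g')."""
    sw2 = gp**2 / (g**2 + gp**2)                  # sin^2(theta_W)
    mt2 = ht**2 * phi2**2                         # top mass squared
    Dfac = (g**2 + gp**2) / 2.0 * (phi1**2 - phi2**2)
    DtL = (0.5 - 2.0/3.0 * sw2) * Dfac            # D-term, left stop
    DtR = (2.0/3.0 * sw2) * Dfac                  # D-term, right stop
    PiL = (4.0/9.0*gs**2 + g**2/4.0 + gp**2/108.0 + ht**2/6.0) * T**2
    PiR = (4.0/9.0*gs**2 + 4.0/27.0*gp**2 + ht**2/3.0) * T**2
    mL2 = mQ3**2 + mt2 + DtL + PiL                # screened left mass^2
    mR2 = mU3**2 + mt2 + DtR + PiR                # screened right mass^2
    mX2 = ht * (At*phi2 + mu*phi1)                # off-diagonal entry
    avg = 0.5 * (mL2 + mR2)
    rad = np.sqrt(0.25 * (mL2 - mR2)**2 + mX2**2)
    return avg + rad, avg - rad                   # heavier, lighter stop
\end{verbatim}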
We expect the stops to play an important role in making the
transition stronger as they have a large number of degrees of freedom
and couple to the Higgses with strength $h_t$. On the other hand, as they are
coloured scalars, their thermal mass is of order $g_s^2 T^2$, as shown
above, and they are not protected by any symmetry from having an
invariant mass. Their final importance for the strength of the
transition will result from the interplay between these opposing
properties.
Finally, the
field-dependent
gauge boson masses are
\begin{equation}
\label{gauge}
m_W^2 (\phi) = {g^2 \over 2} (\phi_1^2 + \phi_2^2) \, ,
\;\;\;\;\;\;
m_Z^2 (\phi) = {g^2 + g'\,^2 \over 2} (\phi_1^2 + \phi_2^2) \, ,
\end{equation}
and the effective $T$-dependent masses of the longitudinal gauge
bosons
are
\begin{eqnarray}
\label{wmass}
\phantom{b}&\phantom{b}&
\overline{m}_{W_L}^2(\phi,T) \ \ = m_W^2 (\phi)+ \Pi_{W_L}(T) \, ,
\\
\label{zphlmass}
\phantom{b}&\phantom{b}&
\overline{m}^2_{Z_L,\gamma_L}(\phi,T) =
\frac{1}{2} \left[ m_Z^2 (\phi) + \Pi_{W_L}(T) + \Pi_{B_L}(T) \right]
\nonumber\\
& \pm &
\sqrt{ \frac{1}{4} \left[ {g^2 - g'\,^2 \over 2} (\phi_1^2 +
\phi_2^2)
+ \Pi_{W_L}(T) - \Pi_{B_L}(T) \right]^2
+ \left[ {gg' \over 2}(\phi_1^2 + \phi_2^2) \right]^2 } \, .
\end{eqnarray}
In eqs.~(\ref{wmass}) and (\ref{zphlmass}), $\Pi_{W_L}(T)$ and
$\Pi_{B_L}(T)$
are the leading parts of the $T$-dependent self-energies of $W_L$ and
$B_L$,
given by
\begin{equation}
\label{piwb}
\Pi_{W_L}(T) = {5 \over 2}g^2 T^2 \, ,
\;\;\;\;\;
\Pi_{B_L}(T) = {47 \over 18}g'\,^2 T^2 \, ,
\end{equation}
where only loops of Higgs bosons, gauge bosons, Standard Model
fermions
and third-generation squarks have been included.
We need to analyse the effective potential (\ref{total}) as a
function of
$\phi$ and $T$. Before doing this, however, we trade the parameters
$m_1^2,
m_2^2,m_3^2$ appearing in the tree-level potential (\ref{v0}) for
more
convenient parameters. To this purpose, we first minimize the
zero-temperature
effective potential, i.e. we impose the vanishing of the first
derivatives of
$V_0(\phi) + V_1(\phi,0)$ at $(\phi_1,\phi_2)=(v_1,v_2)$, where
$(v_1,v_2)$
are the one-loop vacuum expectation values at $T=0$. This allows us
to eliminate
$m_1^2$ and $m_2^2$ in favour of $m_Z^2$ and $\tan\beta\equiv
v_2/v_1$ in the standard way.
Moreover, $m_3^2$ can be traded for the one-loop-corrected mass
$m_A^2$
of the CP-odd neutral Higgs boson \cite{erz2}.
Therefore the whole effective potential (\ref{total}) is completely
determined, in our approximation, by the parameters $(m_A,\tan\beta)$ of
the Higgs sector, and by the parameters ($m_t$, $m_{Q_3}$, $m_{U_3}$,
$\mu$, $A_t$) of the top/stop sector. The same set of parameters also
determines the one-loop-corrected masses and couplings of the MSSM
Higgs bosons.
The next steps are the computation of the critical temperature and
of the location of the minimum of the effective potential at the
critical temperature.
We define here $T_0$ as the temperature at which the determinant of
the
second derivatives of $V_{\rm eff}(\phi,T)$ at $\phi=0$ vanishes:
\begin{equation}
\label{det}
\det\left[
{{\partial^2 V_{\rm eff}(\phi,T_0)}\over{\partial \phi_i \partial
\phi_j}}
\right]_{\phi_{1,2}=0} = 0 \, .
\end{equation}
It is straightforward to compute the derivatives in eq.~(\ref{det})
from the previous formulae; the explicit expressions are given in
Ref. \cite{beqz}.
Once eq.~(\ref{det}) is solved (numerically) and $T_0$ is found, one
can
minimize (numerically) the potential $V_{\rm eff}(\phi,T_0)$ and find
the
minimum $[v_1(T_0),v_2(T_0)]$. The quantity of interest is indeed, as
will
be discussed later, the ratio $v(T_0)/T_0$, where $v(T_0)\equiv
\sqrt{v_1^2
(T_0)+v_2^2(T_0)}$.
Before presenting the numerical results we discuss the experimental
constraints on the parameters of the top/stop sector and of the Higgs
sector. We treat $m_{Q_3}$, $m_{U_3}$ and the other soft mass terms as
independent parameters, even if they can be related in specific
SUGRA models. We want to be as general as possible and in this way
the region of parameters we are able to exclude will be excluded in
any particular model.
Direct and indirect searches
at LEP imply \cite{coignet} that $m_{\tilde{b}_L} \stackrel{>}{{}_\sim} 45 {\rm \; GeV}$,
which
in turn translates into a bound in the $(m_{Q_3},\tan\beta)$ plane.
Electroweak
precision measurements \cite{altarelli} put stringent constraints on
a light
stop-sbottom sector: in first approximation, and taking into account
possible
effects \cite{abc} of other light particles of the MSSM, we
conservatively
summarize the constraints by $\Delta \rho (t,b) + \Delta \rho
(\tilde{t},
\tilde{b}) < 0.01$.
We finally need to consider the constraints coming from LEP searches
for
supersymmetric Higgs bosons \cite{coignet}.
Experimentalists put limits on the processes $Z \rightarrow h Z^*$
and
$Z \rightarrow h A$, where $h$ is the lighter neutral CP-even boson.
We
need to translate these limits into exclusion contours in the $(m_A,\tan \beta)$
plane,
for given values of the top/stop parameters. In order to do this, we
identify
the value of $BR(Z \rightarrow h Z^*)$, which corresponds to the
limit
$m_{\phi} > 63.5 {\rm \; GeV}$ on the SM Higgs, and the value of $BR(Z
\rightarrow
h A)$, which best fits the published limits for the representative
parameter
choice $m_t = 140 {\rm \; GeV}$, $m_{Q_3} = m_{U_3} \equiv \tilde{m} = 1
\rm \; TeV$,
$A_t = \mu = 0$. We then compare those values of $BR(Z \rightarrow h
Z^*)$
and $BR(Z \rightarrow h A)$ with the theoretical predictions of the
MSSM,
for any desired parameter choice and after including the radiative
corrections
associated to top/stop loops \cite{pioneer,erz2}. Of course,
this procedure is not entirely correct, since it ignores the
variations of
the efficiencies with the Higgs masses and branching ratios, as well
as
the possible presence of candidate events at some mass values, but it
is adequate for our purposes.
We now present our numerical results, based on the effective
potential of eq.~(\ref{total}), concerning the strength of the
electroweak phase transition and the condition for preserving the
baryon asymmetry.
Particularizing to the MSSM the
studies
of sphalerons in general two-Higgs models \cite{two}, we obtain
that
\begin{equation}
E_{sph}^{MSSM} (T) \le E_{sph}^{SM} (T) \, ,
\end{equation}
where, in our conventions,
\begin{equation}
\label{esph}
{E_{sph}^{SM} (T) \over T} = {4 \sqrt{2} \pi \over g} B
\left\{ {\lambda_{\rm eff} (T) \over 4 g^2 } \right\} {v(T)
\over T} \, ,
\end{equation}
and $B$ is a smoothly varying function whose values can be found in
Ref. \cite{KLMA}. For example, $B(10^{-2})=1.67$, $B(10^{-1})=1.83$, $B(1)
=2.10$. It can also be shown that
\begin{equation}
{v(T_D) \over T_D} < {v(T_C) \over T_C} <
{v(T_0) \over T_0} \, ,
\end{equation}
where $T_C$ is the actual temperature at which the phase transition
occurs, satisfying the inequalities
\begin{equation}
T_0 < T_C < T_D \, ,
\end{equation}
if $T_0$ is defined by (\ref{det}) and $T_D$ is the temperature at
which
there are two degenerate minima.
Finally, the corrections in $E_{sph}^{SM}$ due to $g'\neq 0$
have been estimated and shown to be
small \cite{mixing}. Therefore, a conservative bound to be imposed
is
\begin{equation}
\label{erre}
R \equiv {v(T_0) \over T_0} {4 \sqrt{2} \pi B
\left\{ {\lambda_{\rm eff} (T_0) \over 4 g^2 } \right\} \over
45 g} > 1 \, .
\end{equation}
The last point to be discussed is the determination of the value of
$\lambda_{\rm eff} (T_0)$ to be plugged into
eq.~(\ref{erre}). The $B$-function we use is taken from
Ref.~\cite{KLMA}, where the sphaleron energy was computed using the
zero-temperature `Mexican-hat' potential, $V = \frac{\lambda}{4}
(\phi^2-v^2)^2$. The sphaleron energy at finite temperature was
computed in Ref.~\cite{belgians}, where it was proven that it scales
like $v(T)$, i.e.
\begin{equation}
\label{aaa}
E^{SM}_{sph}(T)=E^{SM}_{sph}(0) \; \frac{v(T)}{v} \, ,
\end{equation}
with great accuracy. Therefore, to determine the value of
$\lambda_{\rm eff} (T_0)$ we have fitted $V_{\rm
eff}(\phi,T_0)$, in the direction of the minimum,
to the one-dimensional approximate
potential,
\begin{equation}
\label{bbb}
V_{\rm eff}(\phi,T_0) \simeq \frac{1}{4} \lambda_{\rm
eff}(T_0)
[\phi^2-v^2(T_0)]^2 \,
,
\end{equation}
The value
of $\lambda_{\rm eff}$ obtained from (\ref{bbb}),
\begin{equation}
\label{ccc}
\lambda_{\rm eff}(T_0)=4 \;
\frac{V_{\rm eff}(0,T_0)-V_{\rm eff}[v(T_0),T_0]} {v^4(T_0)} \, ,
\end{equation}
where all quantities on the right-hand side are calculated
numerically
from the potential of eq.~(\ref{total}), is then plugged into
eq.~(\ref{erre}) to obtain our bounds. Of course, the quality of the
fit is good only for values of $\phi \stackrel{<}{{}_\sim} v(T_0)$ but this is
precisely the region of interest to determine the sphaleron energy.
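As a simple illustration of this last step, $\lambda_{\rm eff}(T_0)$ can be extracted from any numerically tabulated potential along the lines of the following Python sketch (a minimal illustration with hypothetical function names, not the code used here):
\begin{verbatim}
def lambda_eff(V_eff, v_T0, T0):
    """Effective quartic coupling from the depth of the broken minimum;
    V_eff(phi, T) is a callable returning the finite-T potential along
    the direction of the minimum, and v_T0 its location at T0."""
    return 4.0 * (V_eff(0.0, T0) - V_eff(v_T0, T0)) / v_T0**4
\end{verbatim}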
Our numerical results are summarized in fig.~2, in the $(m_A,\tan \beta)$ plane
and for two representative values of the top quark mass: $m_t = 130
{\rm \; GeV}$ (fig.~2a) and $m_t = 170 {\rm \; GeV}$ (fig.~2b). In each case, the
values of the remaining free parameters have been chosen in order
to maximize the strength of the phase transition, given the
experimental constraints on the top-stop sector. Notice that
arbitrarily small values
of $m_{U_3}$ cannot be excluded on general grounds, even if they are
disfavoured by model calculations. Also, we have explicitly checked
that,
as in Ref.~\cite{eqz3}, mixing effects in the stop mass matrix
always
worsen the case. In fig.~2, solid lines correspond to contours of
constant $R$: one can see that the requirement
of large values of $R$ favours small
$\tan\beta$ and $m_A \gg m_Z$. The thick solid line corresponds to the
limits coming from Higgs searches at LEP: for our parameter choices,
the
allowed regions correspond to large $\tan\beta$ and/or $m_A \gg m_Z$. For
reference, contours of constant $m_h$ (in GeV) have also been plotted
as dashed lines. One can see that, even for third-generation
squarks as
light as allowed by all phenomenological constraints,
only a very small globally allowed region can exist
in the $(m_A,\tan \beta)$ plane, and that the most favourable
situation
is the one already discussed in Ref.~\cite{eqz3}. More precisely,
the region that is still marginally allowed corresponds to $m_A \gg
m_Z$, $\tan\beta \sim 2$, stop and sbottom sectors as light as otherwise
allowed, a heavy top, and a light Higgs boson with SM-like properties
and
mass $m_h \sim 65 {\rm \; GeV}$, just above the present experimental limit.
A less conservative interpretation of the limits from precision
measurements, the inclusion of some theoretically motivated
constraints on the model parameters, or a few GeV improvement in the
SM Higgs mass limit, would each be enough to fully exclude
electroweak baryogenesis in the MSSM.
\section*{Acknowledgments}
The results presented here are based on joint work done in collaboration with
A.~Brignole, M.~Quir\'os and F.~Zwirner. I must say it is always a pleasure
to work with them.
\section{Introduction}
The prominent absorption features blue-ward of the Lyman-$\alpha$~ emission in the
spectra of high-redshift quasars (QSOs) are believed to arise from
smooth density fluctuations of a photoionised warm intergalactic medium
which trace the dark matter distribution in a relatively simple manner
(see Rauch 1998 and Weinberg et al. 1999 for reviews). As a result,
the flux power spectrum of this `Lyman-$\alpha$~ forest' has become a powerful
quantitative probe of the matter power spectrum on scales of
$1\,h^{-1}$ to $40\,h^{-1}$ Mpc at redshifts $z=2-4$. At these scales
and redshifts, the matter distribution is linear or mildly non-linear,
a regime that can be accurately modelled with numerical
simulations. Such simulations have been used to obtain quantitative
estimates of the clustering amplitude and constraints on cosmological
parameters from the Lyman-$\alpha$~ forest (Croft et al. 1998; Croft et al. 1999;
McDonald et al. 2000; Hui et al. 2001; Croft et al. 2002; McDonald
2003; Viel et al. 2003; Viel, Haehnelt \& Springel 2004; Viel, Weller
\& Haehnelt 2004; McDonald et al. 2004a; Desjacques \& Nusser 2004) or
on astro-physical parameters (Theuns et al. 1998, Meiksin et
al. 2001, McDonald et al. 2004b, Bolton et al. 2005).
Unfortunately, the flux power spectrum does not only depend on the dark
matter (DM) distribution but also on the thermal state of the
intergalactic medium (IGM), and possibly on feedback effects due to
star formation and active galactic nuclei (AGN). Ideally, one would
like to use simulations which not only take into account the non-linear
gravitational clustering of the matter distribution but also all the
relevant hydrodynamics of the gas, including effects of galaxy
formation physics, such as radiative cooling and heating, star
formation and winds driven by stellar associations or AGN. However,
full hydrodynamical simulations of the Lyman-$\alpha$~ forest are computationally
very demanding. This makes their use for extensive parameter studies
difficult. In addition, some physical processes, such as the feedback
mechanisms, are still poorly understood. Thus, the use of approximate
numerical calculations of the flux distribution of the Lyman-$\alpha$~ forest is very
attractive, an approach that has been widely applied in previous work
(e.g. McGill 1990; Hui et al. 1997; Meiksin \& White 2001; Viel et
al. 2002b; Zhan et al. 2005). Note that such approximate calculations
of the Lyman-$\alpha$~ flux distribution have been crucial in establishing the
modern paradigm for the origin of the Lyman-$\alpha$~ forest in the first place
(Bi 1993, Bi \& Davidsen 1997, Viel et al. 2002a).
In 1998, Gnedin \& Hui (GH) proposed the `Hydrodynamic
Particle-Mesh method' (HPM) as an efficient numerical method to
approximate the formation and evolution of the Lyman-$\alpha$~ forest. This
technique is based on a particle-mesh (PM) approach for following the
evolution of dark matter. The gravitational potential of the PM solver
is then modified with an effective potential which mimics the effect of
gas pressure. GH found that global statistical properties of the flux
distribution in HPM simulations are accurate to $\sim 5-20\%$ when
compared to full hydrodynamical simulations. This prompted e.g.
McDonald et al.~(2004a) to use HPM simulations that were calibrated with
a small number of hydrodynamical simulations to obtain predictions of
the flux power spectrum for a wide range of cosmological and physical
parameters describing the thermal state of the gas.
The statistical errors of the flux power spectrum obtained from
high-resolution Echelle spectra are $\sim 4$~\% and can in principle
become as small as a few percent for large samples of low-resolution
spectra (e.g. Kim et al.~2004, McDonald et al. 2005). This has opened
up the exciting prospect to use the Lyman-$\alpha$~ forest to constrain
inflationary parameters and the nature of dark matter, based on high
accuracy measurements of the DM power spectrum inferred from the Lyman-$\alpha$~
forest (Viel, Weller \& Hahenelt 2004; Seljak et al. 2004; Viel et al.
2005). However, a prerequisite is the availability of accurate
predictions of the flux power spectrum for a wide range of parameters.
The hydrodynamical code \mbox{\small{GADGET-2}\,\,} (Springel, Yoshida \& White 2001; Springel
2005), which we have used extensively in earlier work for full
hydrodynamical simulations of the Lyman-$\alpha$~ forest (Viel, Haehnelt \&
Springel 2004; Bolton et al. 2005), is a TreeSPH code which also
offers a PM algorithm which can optionally be used to calculate
long-range gravitational forces. In this code the HPM method of GH can
therefore be easily implemented. This makes \mbox{\small{GADGET-2}\,\,} well suited for a
detailed analysis of the accuracy and systematic uncertainties of the
HPM method by comparing simulations run with it to full hydrodynamical
TreeSPH-PM simulations.
In this paper, we perform such an analysis and investigate the
dependence of the discrepancies between HPM and full hydrodynamical
simulations on a range of numerical parameters for the relevant
redshift range $z=2-4$. Note that we here do not intend to optimize
the HPM method.
The outline of the paper is as follows. In
Section \ref{hydro}, we briefly describe the hydrodynamical code \mbox{\small{GADGET-2}\,\,}
and we review the basic equations of the HPM formalism. We also show
that the HPM implementation of \mbox{\small{GADGET-2}\,\,} and the HPM code by Gnedin \& Hui
give similar results for a suitable choice of numerical parameters. In
Section \ref{compare2}, we discuss the differences between our HPM
implementation and full hydrodynamical simulation by analysing the
statistical properties of the flux distribution. We further analyse the
effect of shock heating, the influence of various numerical parameters
on the results, and the CPU time and memory requirements. Finally,
Section \ref{conclu} contains a summary and our conclusions.
\section{Simulation methods of the Lyman-$\alpha$~ forest}
\label{hydro}
\subsection{Full hydrodynamical simulations}
The hydrodynamical simulation code {\small GADGET-2} (Springel, Yoshida
\& White 2001; Springel 2005) can optionally employ a PM technique to
calculate long-range gravitational forces, resulting in a `TreePM'
scheme for gravitational forces. We will use hydrodynamical
simulations run with this SPH/TreePM implementation of \mbox{\small{GADGET-2}\,\,} as
``reference'' simulations to assess in detail the accuracy and
systematic uncertainties of the approximate HPM method. The TreePM
approach speeds up the calculation of long-range gravitational forces
considerably compared to a tree-only implementation.
All our simulations were performed with periodic boundary conditions
and an equal number of dark matter and gas particles. We employ the
`entropy-formulation' of SPH proposed by Springel \& Hernquist (2002).
Radiative cooling and heating processes are followed using an
implementation similar to that of Katz et al.~(1996) for a primordial
mix of hydrogen and helium. We have assumed a mean UV background
produced by quasars as given by Haardt \& Madau (1996), which leads to
reionisation of the Universe at $z\simeq 6$. The simulations are run
with heating rates increased by a factor of 3.3 in order to achieve
temperatures which are close to observed temperatures (Abel \& Haehnelt
1999, Schaye et al. 2000, Ricotti et al. 2000).
In order to maximise the speed of the dissipative hydrodynamical
simulations we have employed a simplified star-formation criterion in
the majority of our runs. All gas at densities larger than 1000 times
the mean density was turned into collisionless stars. The absorption
systems producing the Lyman-$\alpha$~ forest have small overdensity so this
criterion has little effect on flux statistics, while speeding up the
calculation by a factor of $\sim 6$, because the small dynamical times
that would otherwise arise in the highly overdense gas need not to be
followed. In a pixel-to-pixel comparison with a simulation which
adopted the full multi-phase star formation model of Springel \&
Hernquist (2003) we explicitly checked for any differences introduced
by this approximation. We found that the differences in the flux
probability distribution function were smaller than 2\%, while the
differences in the flux-power spectrum were smaller than 0.2 \%. We
have also turned off all feedback options of {\small GADGET-2} in our
simulations. An extensive resolution and box size study has been
performed in Viel, Haehnelt \& Springel (2004) and in Bolton et
al. (2005).
For all simulations presented here we have adopted a box size of 30
comoving \mpch and the cosmological parameters $\Omega_{0{\rm m}}=
0.26$, $\Omega_{0\Lambda} = 0.74$, $\Omega_{0{\rm b}} = 0.0463$ and
$H_0=72\,{\rm km\,s^{-1}Mpc^{-1}}$, $\sigma_8=0.85$ and $n=0.95$ (the
parameters of the B2 simulation in Viel, Haehnelt \& Springel 2004).
The CDM transfer functions of all models have been taken from
Eisenstein \& Hu (1999).
\begin{figure*}
\center\resizebox{0.5\textwidth}{!}{\includegraphics{f1.ps}}
\caption{Power spectrum of the gas density field of the HPM simulations
run with \mbox{\small{GADGET-2}\,\,} at $z=3$, at two different resolutions and for several
different values of the parameter $N_{\rm grid}$. The power spectrum of
the full hydrodynamical simulation is represented by the filled
triangles.}
\label{f1}
\end{figure*}
\begin{figure*}
\center\resizebox{1.0\textwidth}{!}{\includegraphics{f2.ps}}
\caption{{\it Left:} differences in the probability distribution
functions of the dark matter density field between \mbox{\small{GADGET-2}\,\,} (G2) and the Gnedin
\& Hui (GH) code. Both of them have been run in the PM mode with a grid
of $200^3$ for GH and $400^3$ for G2. {\it Right:} Fractional
differences in the 3D matter power spectrum. The results are shown at
three different redshifts $z=2,3,4$ as dashed, continuous and dotted
curves, respectively.}
\label{f2}
\end{figure*}
\begin{figure*}
\center\resizebox{1.0\textwidth}{!}{\includegraphics{f3.ps}}
\caption{{\it Left:} differences in the probability distribution
functions of the gas density field between simulations run with \mbox{\small{GADGET-2}\,\,}
(G2) and the Gnedin
\& Hui (GH) code. Both of them have been run in the HPM mode with a grid
of $200^3$ for GH and $400^3$ for G2. {\it Right:} Fractional
differences in the 3D matter power spectrum. The results are shown at
three different redshifts $z=2,3,4$ as dashed, continuous and dotted
curves, respectively.}
\label{f3}
\end{figure*}
\subsection{HPM implementation of \mbox{\small{GADGET-2}\,\,}}
\label{hpm}
GH proposed to introduce an effective potential that mimics gas pressure
into an otherwise collisionless dark matter simulation, carried out with a
particle mesh code. This method has become known as Hydro-Particle-Mesh (HPM)
approximation. The idea of the HPM approximation is to take advantage of the
fact that the low density IGM responsible for most of the Lyman-$\alpha$~ forest
absorption obeys a simple relation between gas density and gas temperature,
which is well described by a power-law `equation of state': \begin{equation}
T=T_0(z) \, (1+\delta)^{\gamma (z)-1}\;.
\label{eqst}
\end{equation}
The evolution of $T_0$ and $\gamma$ with redshift depends on the reionisation
history (Hui \& Gnedin 1997). The `equation of state' predicts the
temperature of gas of given density to better than 10\% for the low density
IGM where shock heating is not important. Instead, the temperature is set by
a balance between photoionisation heating and adiabatic cooling due to the
expansion of the universe.
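As a simple numerical illustration of equation (\ref{eqst}), with values similar to those adopted later in this paper, for $T_0 \simeq 2\times 10^4\,$K and $\gamma \simeq 1.5$ a gas element at four times the mean density has
\[
T = T_0\,(1+\delta)^{\gamma-1} \simeq 2\times 10^4\,{\rm K}\times 4^{0.5} = 4\times 10^4\,{\rm K},
\]
i.e.~it is a factor of two hotter than gas at the mean density.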
Based on the density alone, equation (\ref{eqst}) also allows an estimate of
the thermal pressure which enters the equation of motion for a cosmic gas
element. We know from full hydrodynamical simulations that the baryons
generally follow the dark matter well, apart from high density regions where
pressure effects on small scales become important. GH therefore suggested
using the density of the dark matter in a PM simulation together with
Eqn.~(\ref{eqst}) to estimate the temperature and pressure of the gas. One can
then obtain the acceleration on a cosmic gas element due to the gradient of
the pressure as
\begin{equation}
{d{\bmath v}\over dt} + H{\bmath v} = -\nabla\phi -
{1\over\rho}\nabla P,
\label{eom}
\end{equation}
where ${\bmath v}$ is the gas peculiar velocity, $\phi$ is the gravitational
potential, and $P$ is the thermal pressure. If the gas is highly ionised (so
that the mean molecular weight is roughly constant, which is true for the
Lyman-alpha forest), and the temperature is a function of density only, so
that $P=P(\rho)$, equation (\ref{eom}) can be reduced to the expression
\begin{equation}
{d{\bmath v}\over dt} + H{\bmath v} = -\nabla\psi ,
\label{eompsi}
\end{equation}
where
\begin{equation}
\psi = \phi + {\cal H},
\label{psidef}
\end{equation}
and ${\cal H}$, the {\it specific enthalpy\/}, is
\begin{equation}
{\cal H}(\rho) = {P(\rho)\over\rho} +
\int_1^\rho {P(\rho^\prime)\over\rho^\prime}
{d\rho^\prime\over\rho^\prime}.
\label{enthalpy}
\end{equation}
Equation (\ref{eompsi}) is identical to the equation of motion for the
collisionless dark matter except that the usual gravitational potential $\phi$
is replaced by an effective potential $\psi$, which takes into account both
gravity and thermal pressure. Since the gravitational potential $\phi$ has to
be computed from the density field in a regular PM simulation anyway,
computing the enthalpy adds only a modest computational overhead.
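As an aside, for a strict power-law `equation of state' the enthalpy has a simple closed form. Writing the pressure of the highly ionised gas as $P=A\rho^{\gamma}$ (with $\rho$ in units of the mean density and $A$ a normalisation constant proportional to $T_0$ in suitable units), equation (\ref{enthalpy}) gives, for $\gamma\neq 1$,
\[
{\cal H}(\rho) = \frac{\gamma}{\gamma-1}\,A\,\rho^{\gamma-1} - \frac{A}{\gamma-1}\, ,
\]
so that the effective potential is fully determined by the density field and the two parameters $(T_0,\gamma)$.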
\begin{figure*}
\center\resizebox{1.0\textwidth}{!}{\includegraphics{f4.ps}}
\caption{{\it Left:} differences in the probability distribution
functions of the dark matter density field between simulations run with
\mbox{\small{GADGET-2}\,\,} in its HPM and in its TreePM mode (the PM grid for the TreePM run
is fixed to the value $N_{\rm grid}=200$). {\it Right:} Fractional
differences in the 3D matter power spectrum. The results are shown at
$z=3$ and for three different values of $N_{\rm grid}$ (400,600,1200)
as continuous, dashed and dot-dashed curves, respectively.}
\label{f4}
\end{figure*}
\begin{figure*}
\center\resizebox{1.0\textwidth}{!}{\includegraphics{f5.ps}}
\caption{{\it Left:} differences in the probability distribution
functions of the gas density field between simulations run with \mbox{\small{GADGET-2}\,\,} in
its HPM and in its TreePM mode (the PM grid for the TreePM run is fixed
to the value 200). {\it Right:} Fractional differences in the 3D matter
power spectrum. The results are shown at $z=3$ and for three different
values of $N_{\rm grid}$ (400,600,1200) as continuous, dashed and
dot-dashed curves, respectively.}
\label{f5}
\end{figure*}
We have implemented this HPM method in the simulation code {\small GADGET-2}.
We closely follow the approach of GH with only a few minor differences. In
the HPM code of GH, only one set of particles was used, i.e.~the fact that
the dark matter does not feel the pressure on small scales was neglected. As
\mbox{\small{GADGET-2}\,\,} is an SPH code which treats DM and baryons separately, we kept this
distinction in our HPM implementation. This may result in some small
differences on small scales. In Section \ref{compare1}, we will compare
simulations with the HPM implementation of \mbox{\small{GADGET-2}\,\,} to runs carried out with the
HPM code of GH (kindly provided by Nick Gnedin).
There are three numerical parameters defining the technical details of
our HPM implementation in \mbox{\small{GADGET-2}\,\,}. The first parameter is the number of
cells of the PM grid. We describe this by $N_{\rm grid}$, the number of
cells per dimension. The second parameter, $H_{\rm s}$, describes the
scale of the smoothing applied to the enthalpy field before taking its
spatial derivative. The density and enthalpy fields are more sensitive
to shot noise than the gravitational potential, because for the latter,
high frequency noise is suppressed as $\phi(k)\propto \delta_k \,
k^{-2}$. We have thus followed GH and apply a Gaussian smoothing to
the density field before computing the enthalpy and its spatial
derivative. We apply a smoothing factor $\exp(-{k}^2 h_{\rm s}^2)$ to
the density field in Fourier space, where $h_{\rm s}= H_{\rm s}
L/N_{\rm grid}$. The third numerical parameter, $r_{\rm s}= A_{\rm s} L
/N_{\rm grid} $, is the scale of the smoothing of the PM force, which
we usually express in terms of $A_{\rm s}$, i.e.~in units of the mesh
cell size. The parameter $A_{\rm s}$ hence controls the level of
residual force anisotropies in the PM force. In the TreePM code,
$r_{\rm s}$ also gives the scale of the short-range/long-range force
split. We will discuss the choice of numerical values for these
parameters in Section \ref{numparam}. Note that the HPM code of GH has
only two parameters $N_{\rm grid}$ and $H_{\rm s}$, i.e.~no attempt is
made to make the PM force more isotropic on the scale of the mesh. GH
have adopted the choice $H_{\rm s}=3$.
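For concreteness, the Fourier-space smoothing of the density field described above could be implemented along the following lines. This is only an illustrative sketch (in Python/NumPy, not code from {\small GADGET-2}); the array layout and unit conventions are assumptions.
\begin{verbatim}
# Illustrative sketch only: Gaussian smoothing of the density field in
# Fourier space with the kernel exp(-k^2 h_s^2), where
# h_s = H_s * L / N_grid.
import numpy as np

def smooth_density(delta, box_size, H_s):
    N = delta.shape[0]                        # N_grid cells per dimension
    h_s = H_s * box_size / N                  # smoothing length
    k1d = 2.0 * np.pi * np.fft.fftfreq(N, d=box_size / N)
    kx, ky, kz = np.meshgrid(k1d, k1d, k1d, indexing='ij')
    k2 = kx**2 + ky**2 + kz**2
    delta_k = np.fft.fftn(delta) * np.exp(-k2 * h_s**2)
    return np.real(np.fft.ifftn(delta_k))
\end{verbatim}
The smoothed field would then be used to evaluate the temperature, pressure and enthalpy before taking the spatial derivative of the effective potential.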
To fix the slope and normalisation of the power-law temperature-density
relation of the IGM, our code follows the thermal history of two fiducial gas
elements at density values equal to the mean cosmic density and at 1.1 times
the mean cosmic density. For a specified evolution of the ionising UV
background, we can then compute the values of $T_0$ and $\gamma$ from the
temperatures attained by these two fiducial gas elements.
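The two parameters presumably follow directly from equation (\ref{eqst}): denoting by $T_{\bar\rho}$ and $T_{1.1\bar\rho}$ the temperatures attained by the fiducial elements at the mean density and at 1.1 times the mean density (our notation),
\[
T_0 = T_{\bar\rho}\, , \qquad
\gamma = 1 + \frac{\ln\!\left(T_{1.1\bar\rho}/T_{\bar\rho}\right)}{\ln 1.1}\, .
\]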
In Figure \ref{f1}, we compare the 3D gas power spectrum for a range of HPM
simulations with different particle numbers and mesh sizes with a full
hydrodynamical simulation with $200^3$ dark matter and $200^3$ gas particles
(shown as triangles). All simulations were run with \mbox{\small{GADGET-2}\,\,}. We only show
results at $z=3$, but note that the results at $z=2$ and $z=4$ are very
similar. On large scales ($k< 6 h$/Mpc), the power spectrum of the gas
distribution of HPM simulations converges nicely to that of the full
hydrodynamical simulation when the resolution of the mesh used for calculating
the gravitational forces is increased. Note, however, that even for very high
resolution (six times more mesh cells in the HPM simulation than particles in
the full hydrodynamical simulation) the power on small scales in the HPM
simulations is significantly smaller than that in the full hydrodynamical
simulations. Note further that changing the mesh resolution is more important
than changing the particle number in the HPM simulations. The thin and thick
solid curves are for HPM simulations with the same grid resolution but a
factor eight different particle number. They are virtually identical. We also
note that the results and trends for the dark matter power spectrum are
qualitatively similar. In the runs discussed in the following, we will use the
HPM implementation of \mbox{\small{GADGET-2}\,\,} with $2\times 200^3$ particles and with $N_{\rm
grid} \ge 200$.
\subsection{Comparison between the HPM implementation of \mbox{\small{GADGET-2}\,\,} and the
HPM code of Gnedin \& Hui}
\label{compare1}
In this section we compare the gas and dark matter distribution of simulations
run with the HPM implementation of the \mbox{\small{GADGET-2}\,\,} code and the HPM code of GH. We
use the same initial conditions and temperature-density relation.
At $z=2$, $3$, and $4$, the values of $(T_0,\gamma)$ are
(21500 K, 1.505), (21500 K, 1.524) and (19200 K, 1.536), respectively.
In Figures \ref{f2} and \ref{f3}, we show the relative differences of
the probability distribution and the power spectrum of the {\it dark
matter} and {\it gas} density at redshifts $z=2$, $z=3$, and $z=4$. We
have varied the resolution of the mesh to calculate the gravitational
force in the HPM implementation of \mbox{\small{GADGET-2}\,\,} in steps of factors of two.
The other two relevant parameters in the \mbox{\small{GADGET-2}\,\,} runs have been set to
$H_{\rm s} = 3$ and $A_{\rm s}= 1.25$. For the case shown in Figures
\ref{f2} and \ref{f3}, the grid resolution for the HPM implementation
of \mbox{\small{GADGET-2}\,\,} was a factor two higher than that used for the HPM code of
GH. In this case the agreement was best, better than 5\% (dark matter)
and 8\% (gas) for the probability distribution function\footnote{The
pdf is defined as the number of points or pixels in a given bin of the
x-axis, normalised such that its integral along the x-coordinate is one.} (pdf), and better than 2\% for power spectra at
wavenumbers relevant for constraining cosmological parameters with the
Lyman-$\alpha$~ forest, $0.3\,\mincir k \, (h/{\rm Mpc}) \mincir 3$ (Viel,
Haehnelt \& Springel 2004). Because of the smoothing applied to the PM
force in \mbox{\small{GADGET-2}\,\,}, a somewhat finer mesh is needed to match the results of
the HPM code by GH, where such a smoothing is not carried out and
larger force anisotropies on the mesh scale are accepted. By
reducing $A_{\rm s}$, the agreement of the two codes could be improved
further. The two HPM codes agree very well. In the following we will
only use the HPM implementation of \mbox{\small{GADGET-2}\,\,} but our results should apply
similarly to the GH code.
\section{Comparison between full hydrodynamical and HPM simulations}
\label{compare2}
\subsection{The dark matter and gas density fields}
\label{dmgas}
\begin{table}
\begin{tabular}{llll}
\hline
\small (CODE, \# part.) & $N_{\rm grid}$ & CPU-time (ks) & Mem. (Gb) \\
\hline
HPM-$200^3$ & 100 & 1.5 & 3 \\
HPM-$200^3$ & 200 & 3.1 & 3.5 \\
HPM-$200^3$ & 400 & 4.7 & 4.5 \\
HPM-$200^3$ & 600 & 11.2 & 12 \\
HPM-$200^3$ & 1200 & 15 & 76 \\
HPM-$400^3$ & 100 & 33 & 26 \\
HPM-$400^3$ & 200 & 35 & 28 \\
HPM-$400^3$ & 400 & 40 & 30 \\
HPM-$400^3$ & 600 & 44 & 36 \\
Hydro-$200^3$ & 200 & 183 & 3.2 \\
Hydro-$400^3$ & 400 & 11700 & 26 \\
GH HPM-$200^3$ & 200 & 5.4 & 3.5 \\
\hline
\end{tabular}
\caption{CPU-time required to reach $z=2$ for simulations of a 30 Mpc/h box
$\Lambda$CDM model and for several different resolutions and values of
the parameter $N_{\rm grid}$. The memory required is shown in the last
column. All the values are wall-clock times for 32 CPUs (1.3 GHz
Itanium 2) of the SGI Altix 3700 (COSMOS) at DAMTP (Cambridge).}
\end{table}
We first want to check the agreement of the dark matter and gas
distributions between simulations run with the TreePM and HPM
implementations of \mbox{\small{GADGET-2}\,\,}. In Figure \ref{f4}, we show the
differences in the density pdf and the power spectrum for the dark
matter distribution at $z=3$, for three different values of $N_{\rm
grid}$ used in the HPM simulation. The results at $z=2$ and $z=4$ are
similar. The simulations were run with $200^3$ and $2\times 200^3$
particles, respectively. In the simulation with the TreePM
implementation, the number of mesh cells of the PM grid was set equal
to the number of particles. As also expected from the results shown in
Fig.~\ref{f1}, the differences become smaller with increasing
resolution of the PM grid used for the HPM implementation. The
differences in the pdf of the DM density are smaller than 10\% (20\%)
for $N_{\rm grid}$=600 (400). If a very fine mesh of dimension $N_{\rm
grid}$=1200 is used, the pdf of the HPM simulation is indistinguishable
from that of the full hydrodynamical TreePM simulation. For $N_{\rm
grid}$=600 (400) the discrepancy in the dark matter power spectrum
(right panel) is less than 2\% (4\%) for $0.2< k (h/{\rm Mpc}) <2$.
For $N_{\rm grid}$=1200, the difference is less than 0.5\% in the same
range of wavenumbers. At larger wavenumber the differences in the
power spectra become much larger due to the much higher resolution
achieved with the TreePM code. Note, however, that these small scales
are not used for the recovery of the dark matter power spectrum from
the Lyman-$\alpha$~ forest because of the uncertainties in the flux power spectrum
due to the thermal history and the metal contamination of the IGM (Kim
et al. 2004).
Figure \ref{f5} shows the difference in the gas distributions between
simulations with the HPM and TreePM implementations. The differences are
similar to those found in the dark matter distribution.
\subsection{Flux statistics}
\label{fluxstats}
\subsubsection{The flux probability distribution function}
\begin{figure*}
\center\resizebox{1.0\textwidth}{!}{\includegraphics{f6.ps}}
\caption{Effect of the parameter $N_{\rm grid}$ for simulations of a 30
Mpc$/h$ box with $2\times 200^3$ (gas and dm) particles. {\it Left
panel:} Fractional differences between the probability distribution
functions of simulations with $N_{\rm grid}$=200 (dotted), $N_{\rm
grid}$=400 (continuous), $N_{\rm grid}$=600 (dashed), and $N_{\rm
grid}$=1200 (dot-dashed) at $z=2$. Also shown is the full
hydrodynamical TreePM simulation with $N_{\rm grid}$=200 and with the
same initial conditions. The long-dashed line with filled diamonds
represents results for the higher resolution run with $2\times 400^3$
particles and with $N_{\rm grid}$=600.
The other HPM parameters have been fixed to
the fiducial values described in the text. The dot-dashed curve with
overplotted empty triangles is for a simulation with $N_{\rm
grid}$=1200 for which the simulated flux has been scaled to match the
value of the full hydro simulations; in the other cases the spectra
have not been scaled (see text for the details). {\it Middle Panel:}
Results at $z=3$. {\it Right Panel:} Results at $z=4$. }
\label{f6}
\end{figure*}
The flux distribution in the Lyman-$\alpha$~ forest depends on the spatial distribution,
the peculiar velocity field and the thermal properties of the gas. In the last
section, we have shown that the gas distribution of the HPM simulations
converges rather well to that of the full hydrodynamical simulations when the
resolution of the PM mesh is improved. For the flux distribution the
situation is more complicated, however. In Figure \ref{f6}, we plot the
differences in the pdf of the flux for HPM simulations with a range of $N_{\rm
grid}$ values compared with the full hydrodynamical simulations at
$z=2,3,4$. The simulations are the same as those discussed in section
\ref{compare2} and shown in figures \ref{f4} and \ref{f5} (these figures
show results only at $z=3$). The curves without symbols show the results
for the same amplitude of the ionising UV background as in the full
hydrodynamical simulations. Note that this means that the the flux
distribution has {\it not} been rescaled to a fixed mean flux, as it is often
done. Such a rescaling would mask the numerical effects we seek to identify
here. However, to facilitate comparison with other work (e.g. ~McDonald et al.
~2004a), the curves with triangles show the pdf of the flux after re-scaling
the flux distribution of the $N_{\rm grid}=1200$ HPM simulation such that the
mean transmitted flux is the same as in the full hydrodynamical simulations.
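A common way to perform such a rescaling (sketched here for illustration; the details of the procedure actually used are not essential for the comparison) is to multiply all optical depths by a constant factor chosen such that the mean transmitted flux matches the target value:
\begin{verbatim}
# Illustrative sketch: rescale optical depths tau -> A*tau so that the
# mean transmitted flux <exp(-A*tau)> matches a target value; the
# bracketing interval for the root search is an assumption.
import numpy as np
from scipy.optimize import brentq

def rescale_flux(tau, target_mean_flux):
    f = lambda A: np.mean(np.exp(-A * tau)) - target_mean_flux
    A = brentq(f, 1.0e-4, 1.0e4)
    return np.exp(-A * tau)
\end{verbatim}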
At $z=3$, the flux distribution of the HPM simulations converges reasonably
well to that of the full hydrodynamical simulations. With the exception of
flux levels $F>0.8$, the differences are smaller than 5\% for $N_{\rm
grid}=600$ and even smaller for higher resolutions of the PM mesh. In
regions of low absorption ($F> 0.8$) the differences are, however, large
(10-20\%), change sign with increasing resolution, and do not converge. We
have inspected a few spectra individually and found that the discrepancy is
due to differences in both density and temperature in the lowest density
regions. At $z=4$ these differences in regions of low absorption are
substantially larger. Because of the strong decrease of the mean flux with
increasing redshift, these regions correspond to significantly more underdense
regions than at $z=3$. At $z=2$ additional large differences up to 50\% arise
in regions of strong absorption, which also do not vanish with increasing
resolution. For the $N_{\rm grid}=1200$ HPM simulation, the overall agreement
with the full hydrodynamical simulation is of the order of 2\% for $F<0.85$ at
$z=3,4$, while at $z=2$, discrepancies of the order of $\magcir 10$\% remain
both in underdense and very dense regions. The differences at $z=2$ and for
$F<0.15$ are due to the gas in dense regions being substantially colder in the
HPM simulations than in the full hydrodynamical simulation where a significant
portion of the dense gas is shock heated. In Figure \ref{f6} we overplot the
results from a higher resolution HPM run with $2\times 400^3$ particles
and $N_{\rm grid}=600$ as a long dashed line with filled diamonds. The
results are very similar to the HPM simulation with $2\times 200^3$
particles and $N_{\rm grid}=200$.
We hence confirm the findings of
GH that the differences in the flux pdf between HPM and full hydrodynamical
simulations are of the order of 10-15\%.
\subsubsection{The flux power spectrum}
The main motivation of the use of HPM simulations comes presently from the
need for accurate predictions of the flux power spectrum for a wide range of
astrophysical and cosmological parameters. Such a grid of predictions allows a
detailed comparison with observational data and a determination of best-fit
values and confidence intervals of cosmological parameters (McDonald et
al. 2004a).
In Figure \ref{f7}, we plot the differences of the flux power spectrum of
HPM simulations with a range of mesh sizes compared with full hydrodynamical
simulations at $z=2,3,4$. The simulations are the same as those discussed in
the previous sections. As in figure \ref{f6}, the curves without symbols
show results for the same amplitude of the ionising UV background while
the curves
with empty triangles show the flux power spectrum after rescaling the flux
distribution of the $N_{\rm grid}=1200$ HPM simulation such that the mean flux
is the same as in the full hydrodynamical simulations. In Figure
\ref{f7} we show the results from a higher resolution HPM run with
$2\times 400^3$ particles and $N_{\rm grid}=600$, as the long-dashed
line with overplotted filled diamonds. At redshift $z=4$ and $z=3$
there is perfect agreement with the $N_{\rm grid}=1200$ HPM
simulation, in the wave number range of interest here. At $z=2$ there
are small differences of the order of $< 5\%$. Thus, increasing the
number of particles does not improve the agreement significantly.
At redshifts $z=3$ and $z=4$, the flux power spectra of the HPM
simulations converge well to those of the full hydrodynamical
simulations, but only for resolutions of the PM mesh where the number
of mesh cells is substantially larger than the number of
particles in the full hydrodynamical simulations. At $z=3$, the HPM
simulations with $N_{\rm grid}$=400 (600) have {\it scale-dependent}
differences of about 10\% (7\%) in the wavenumber range relevant for
inferring the matter power spectrum. For $N_{\rm grid}$=1200, there is
a scale-independent offset of about 5\% (3\% when rescaled to the same
mean flux). At redshift $z=4$ the situation is very similar. However,
at redshift $z=2$, the flux power spectrum of the HPM simulations does
not converge to that of the full hydrodynamical simulation. The
differences are here actually smallest for the HPM simulation with
lowest resolution ($N_{\rm grid}$=200). However, even in this case the
discrepancies are large and strongly scale dependent, of the order of
25-30\% at the largest scales. At small scales $k > 0.02$ s/km, the
size of the disagreement and its scale dependence is similar to that
found by McDonald et al. (2004a, their figure 5). Note that because of
the smaller box size of their hydro simulations, McDonald et al.~were
not able to probe scales $k < 0.007$ s/km (at $z=2$), where the
differences increase dramatically. Note that the amount of
shock-heated gas is significantly larger in simulations with larger
box-size. To test further to what extent these discrepancies at large
scales depend on the resolution of the hydro-simulation, we have run an
additional hydro-simulation with 64 times higher mass resolution
($2\times 200^3$ particles in a 7.5 Mpc$/h$ box). There is good
agreement with the results shown in Figure \ref{f7}. We stress here
that our goal is to get a good convergence of the flux power in the
range $0.003 < k$ (s/km) $<0.03$, which is the range which is used for
the matter power spectrum reconstruction as in Viel et al. (2004).
These large differences and the lack of convergence may appear
counterintuitive considering the rather good convergence of the gas and dark
matter distribution. However, they simply originate in the large differences
in the pdf of the flux distribution, which in turn are due to the different
thermal state of the gas in high density regions in the HPM and full
hydrodynamical simulations.
At redshift $z=2$, a larger proportion of the
absorption is from gas in high density regions, which is shock-heated in the
full hydrodynamical simulations and therefore on average hotter than in the
HPM simulations. This tends to mainly affect the strong absorption systems
which contribute significantly to the flux power spectrum at large scales
(Viel et al. 2004a; McDonald et al. 2004b). We will discuss this further in
the next section.
\begin{figure*}
\center\resizebox{1.0\textwidth}{!}{\includegraphics{f7.ps}}
\caption{Effect of the parameter $N_{\rm grid}$ for simulations of a 30
Mpc$/h$ box with $2\times 200^3$ (gas and dm) particles. {\it Left
panel:} Fractional differences between the 1D flux power spectra of
simulations with $N_{\rm grid}$=200 (dotted), $N_{\rm grid}$=400
(continuous), $N_{\rm grid}$=600 (dashed), and $N_{\rm grid}$=1200
(dot-dashed) at $z=2$. Also shown is the full hydrodynamical TreePM
simulation with $N_{\rm grid}$=200 and the same initial conditions. The
other HPM parameters have been fixed to the fiducial values described
in the text. The dot-dashed curve with overplotted empty triangles is
for a simulation with $N_{\rm grid}$=1200 for which the simulated flux
has been scaled to match the value of the full hydro simulations; in
the other cases the spectra have not been scaled (see text for the
details). The long-dashed line with filled diamonds represents results
for the higher resolution run with $2\times 400^3$ particles and with
$N_{\rm grid}$=600 (results scaled to reproduce the same $\tau_{\rm eff}$).
{\it Middle Panel:} Results at $z=3$. {\it Right
Panel:} Results at $z=4$. In all the panels the dashed area represents
the range of wavenumbers used by Viel, Haehnelt \& Springel (2004) to
recover cosmological parameters. }
\label{f7}
\end{figure*}
\subsubsection{Temperature effects on the flux pdf and the
flux power spectrum}
\label{shock}
We have argued that the approximation of the relation between
gas density and gas temperature as a power-law breaks down at low redshift. This
approximation inevitably fails to account for the moderately
shock-heated gas that is falling into the potential wells of the dark matter
haloes. In this section, we want to check this explicitly.
For this purpose we use the hydrodynamical simulation and the HPM
run with $N_{\rm grid}$=600.
As a first step we perform the following simple test. We superimpose
onto the full hydrodynamical SPH simulation the temperature-density
relation of the HPM runs, and then recompute the QSO spectra. We find
that this results in differences much smaller than those in Figure \ref{f7},
of order 8\%, 5\% and 4\%, at $z=2$, $3$ and $4$, respectively, at the
largest scales. Most of the discrepancy is thus indeed due to the
differences in the thermal state, especially at low
redshift. Differences in the thermal state will lead, however, also to
pressure differences during the dynamical evolution, which will modify
the mass distribution and the peculiar velocity field. In shock fronts,
the change in particle trajectories can be substantial. Since the HPM
implementation does not capture shocks, it would not treat the dynamics
correctly even if the temperatures were accurate at all times. To
investigate this further we have run an SPH simulation with artificial
viscosity set to zero and the temperature-density relation of the HPM
simulation. This should mimic an `ideal' HPM simulation: the
gravitational force is resolved with high accuracy and in an isotropic
way, while the pressure gradients are smooth and resolved everywhere
with the maximum resolution allowed by the local particle sampling. The
standard HPM method has a less well resolved gravitational force and
should be sensitive to over- or under-smoothing of the pressure field
in regions of high or low particle density, respectively. The results
are shown in Figure \ref{f8}. The SPH simulation is represented by
the dashed line while the dotted line is for the HPM simulation with
$N_{\rm grid}=1200$ (both runs have the same number of particles,
$2\times 200^3$). There is good agreement with the HPM
simulations, suggesting that the discrepancy in the flux power is
primarily caused by the different thermal state of the gas arising from shocks,
and not by any artefacts of our particular HPM implementation. The
total effect on the flux power spectrum should therefore be a combination
of an increase of the overall amount of shock-heated gas with
decreasing redshift and the change of the mean effective optical depth.
The flux power spectrum becomes increasingly sensitive to
higher-density gas with decreasing redshift due to the decreasing
effective optical depth.
We will now investigate how the differences between HPM and
SPH relate to gradients in the velocity field of the gas. Negative gradients in
the peculiar velocity field along the line-of-sight should
represent a signature of infalling material and may thus serve as
a rough guide to where shocks occur.
In the left panel of Figure~\ref{f9}, we show the ratio of the gas temperatures for
the full hydrodynamical simulation and the HPM simulation
as a function of the velocity gradient of the gas.
We first average the temperature in pixels within 100
km/s from a minimum in the gradient of the peculiar velocity field in
real space. Then, we average over the corresponding flux values, in
redshift space. Before selecting the negative gradients in the
hydrodynamical simulations, we have explicitly checked that the
peculiar velocity fields are very similar in both simulations.
\begin{figure*}
\center\resizebox{1.0\textwidth}{!}{\includegraphics{f8.ps}}
\caption{Fractional differences between: 1) an HPM simulation with $2\times
200^3$ and with $N_{\rm grid}=1200$ and a full SPH hydrodynamical
simulation (dotted line); 2) between a SPH simulation with zero artificial
viscosity and with a superimposed temperature-density relation of the
HPM runs and a full SPH hydrodynamical simulation (dashed
line). Results are shown at $z=2,3,4$ in the left, middle and right
panels, respectively. Spectra have been scaled to reproduce the same
effective optical depth.}
\label{f8}
\end{figure*}
\begin{figure*}
\center\resizebox{1.0\textwidth}{!}{\includegraphics{f9.ps}}
\caption{Role of shock-heated gas. {\it Left:} Ratio of the
temperatures in simulations run with the full hydro and HPM
implementation with $N_{\rm grid}$=600 with the same initial
conditions, plotted as a function of the gradient of the peculiar
velocity field along the line of sight. {\it Right:} Differences in the
simulated flux values. The contour plots represent the number density
of points in the 2D plane and the number density increases by an order
of magnitude at each contour level. The dashed, continuous and dotted
lines are for $z=2,3,4$, respectively.}
\label{f9}
\end{figure*}
The contour plots indicate the number density of points, which varies
by a factor of 10 between adjacent contour levels. The bulk of the pixels in
this panel is in regions with $\delta v/\delta x \sim -0.5$ and at
`hydrodynamical' temperatures that are about 10\% lower than the corresponding
temperatures of the HPM simulation. The simulation at $z=2$ shows a
significantly increased number of pixels at $\delta v/\delta x \sim -1$ with
HPM temperatures that are colder than the corresponding temperatures in the
hydrodynamical simulation. These differences in the temperatures have an
important effect on the simulated flux.
In the right panel of Figure~\ref{f9}, we plot the differences in the flux for
the same regions of infalling gas. Since the flux is observed in redshift
space we have averaged the flux within 100 km/s velocity bins. There
are no obvious trends for smaller values of $\delta v/\delta x$ (i.e.
``stronger'' shocks). This is due to the fact that stronger shocks have a more
complex temperature and density structure which in the hydro simulation is
represented more faithfully than in the HPM simulation. As a result, the
differences in the temperatures and fluxes actually tend to be averaged out
for strong shocks. For positive values of the gradient $\delta
v/\delta x$ the scatter is also smaller, both in the temperature ratio
and in the flux differences. Most of the differences at $z=2$ arise from
regions of infalling gas that are not modelled accurately by the HPM method.
This suggests that at least part of the discrepancy at low
redshift is due to the increased amount of shock-heated gas probed by
the Ly-$\alpha$ forest at lower redshift.
\subsection{The effect of the numerical parameters $H_{\rm s}$ and $A_{\rm s}$ }
\label{numparam}
As discussed in section \ref{hpm}, we need to specify the parameters $H_{\rm
s}$ and $A_{\rm s}$ which describe the smoothing of the gas density and of
the gravitational force field in the HPM simulations. There is no obvious
optimum choice for these parameters, so a choice needs to be made by
comparing to the full hydrodynamical simulations. For changes of $H_{\rm s}$,
which controls the smoothing of the pressure field, the resulting differences
are very small at large scales (less than 1\%). They are only weakly scale-
and redshift-dependent and only slightly increase at small scales for $H_{\rm
s}$ in the range 1.5-3. We have therefore fixed $H_{\rm s} = 3$ for all
simulations.
Varying the parameter $A_{\rm s}$, which controls the smoothing of the
gravitational force field, has a somewhat larger effect. In Figure
\ref{f10}, we show the differences between the flux power spectrum of a HPM
($N_{\rm grid}=600$) simulation and that of the full hydrodynamical simulation
for different values of $A_{\rm s}$, at three different redshifts $z=2,3,4$.
The differences are typically a few percent but can be as large as 10\%. It is
not obvious which value for $A_{\rm s}$ represents an optimum choice. One
possibility is to impose a certain requirement for the maximum allowed force
anisotropy generated by a point mass in the PM scheme. Using such a criterion,
we have set $A_{\rm s}= 1.25$, which gives typical PM force errors less than 1
per cent.
\begin{figure*}
\center\resizebox{0.5\textwidth}{!}{\includegraphics{f10.ps}}
\caption{Effect of the smoothing parameter $A_{\rm s}$ of the HPM
implementation in $\mbox{\small{GADGET-2}\,\,}$ for simulations of a 30 Mpc$/h$ box with
$2\times 200^3$ particles. We plot the fractional differences between
the 1D flux power spectra of a model with $A_{\rm s}$ set to 1.25 (thin
line with filled triangles), 1.1 (thin line with empty triangles), 1
(thick line), 0.9 (thin line) and the full hydrodynamical simulation.
The results are shown for three different redshifts $z=2,3,4$ as
dashed, continuous and dotted curve, respectively. Note that the flux
power spectra have not been scaled to reproduce the same effective optical
depth.}
\label{f10}
\end{figure*}
\subsection{CPU time and memory requirements}
\label{cputime}
In Table 1, we summarise the total CPU time (in kiloseconds of wall-clock time) required
by the simulations to run to $z=2$, and their memory requirement (in Gbytes).
We include simulations with a range of particle numbers and resolutions of the
PM mesh, all for a box size of 30 Mpc$/h$. The HPM simulation with $N_{\rm
grid}$=600 has run about 20 times faster than the hydrodynamical SPH/TreePM
simulation at the corresponding resolution, but has a three times larger
memory requirement. The $N_{\rm grid}$=1200 HPM simulation, which as we saw
gave a good agreement with the full hydrodynamical simulations in terms of the
gas and dark matter distribution, is still faster than the SPH simulation by a
factor 10, but its memory requirement is very large. We note that the
simulations with a very high resolution of the PM mesh ($N_{\rm grid} = 1200$)
have been difficult to run because of their very large memory requirement,
which is close to the total amount available on the COSMOS computer we used.
\section{Discussion and Conclusions}
\label{conclu}
We have compared full hydrodynamical simulations carried out with the
SPH/TreePM code \mbox{\small{GADGET-2}\,\,} to simulations that used the HPM method. The latter
scheme was implemented by us in \mbox{\small{GADGET-2}\,\,}, and we compared this implementation with
the independent code of GH. Our comparison was performed at redshifts
$z=2$,$\,z=3$ and $z=4$. Our main results can be summarised as follows.
\begin{itemize}
\item{The dark matter and gas distributions of HPM simulations with \mbox{\small{GADGET-2}\,\,}
converge well to the full hydrodynamical simulations with the SPH/TreePM
code. For a PM mesh with at least $6^3$ times more mesh cells than particles in the SPH
simulations, the differences in the pdf of the gas and matter distributions
are less than 1 per cent. The same is true for the matter power spectrum
at wavenumbers up to 20 times the fundamental mode of the box for a mesh
with $1200^3$ cells. At smaller scales the differences in the power spectra
strongly increase due to lack of resolution of the HPM grid.}
\item{The pdf of the flux distribution of HPM simulations with \mbox{\small{GADGET-2}\,\,} does not
converge to that of the full hydro simulations. At low levels of
absorption ($F>0.8$) the differences (10\% and more) do not decrease with
increasing resolution at all three redshifts examined. At $z=2$, there is
an additional large difference at low flux levels which rises to 50\% at
the lowest flux levels. The latter difference is most likely due to the
larger proportion of absorption by dense shock-heated gas at $z=2$ which
is not modelled well by the HPM method. }
\item{At redshifts $z=3$ and $z=4$, the flux power spectrum of HPM simulations
with \mbox{\small{GADGET-2}\,\,} does converge to that of the full hydrodynamical simulations up
to a scale-independent offset. For an HPM simulation with a box size of
30\mpch and a PM mesh with $1200^3$ cells this offset is about $5\%-7\%$
at wave numbers 0.002 s/km $<k<$ 0.05 s/km. At $z=2$, however, there are
large scale-dependent differences between the flux power spectrum of the
HPM simulation and the full hydrodynamical simulation which are as large
as 20-40\%. These differences are again most likely due to the larger
proportion of absorption by dense shock-heated gas at $z=2$.}
\item{The HPM implementation of \mbox{\small{GADGET-2}\,\,} and the code by GH give similar results
(to within a few percent) for the same initial conditions, provided a
slightly higher resolution of the PM grid is used for \mbox{\small{GADGET-2}\,\,}. This
offset is a result of the PM force smoothing done by \mbox{\small{GADGET-2}\,\,}, which is
adjustable. The results obtained above should thus hold in a similar
form for the GH code. }
\end{itemize}
The HPM method involves two main simplifications compared to full
hydro-simulations: calculating the pressure in an approximate way and
estimating the temperatures based on the density alone. The HPM
approximation does a good job in modelling the gas and matter
distribution on the scales relevant for the Lyman-$\alpha$ forest,
suggesting that the first approximation works well. The situation for
an accurate prediction of the flux distribution is quite different and
we have shown that the treatment of the thermal state in the HPM
approximation is the main problem for accurate predictions of the flux
distribution. The strong dependence of the transmitted flux on the
thermal state of the gas together with the crude approximation of the
thermal state in the HPM approximation leads to large and not always
intuitive scale- and redshift-dependent differences in the flux
distribution between HPM and the full hydrodynamical simulations.
For the flux power spectrum, these
differences are less important than for the pdf of the flux distribution. Our
results suggest that at $z=3$ and $z=4$ the gain in speed offered by HPM
simulations may still make them an attractive tool to obtain predictions of
the flux power spectrum for a wide range of parameters. This will, however,
require very careful calibration with full hydrodynamical simulations, and it
appears doubtful that HPM simulations are suitable to model the dependence of
the flux power spectrum on the thermal state of the gas accurately. The rather
large memory requirement of HPM simulations with sufficient resolution to
reach convergence also partially offsets the advantage of their higher speed.
Our results further suggest that at lower redshift the larger proportion of
absorption by dense shock-heated gas makes HPM simulations unsuitable for
accurate predictions of the flux power spectrum.
Currently the observational uncertainties regarding the
thermal state of the IGM are still rather large. The results
of quantitative studies of the matter power spectrum with Lyman-alpha
forest data are therefore generally marginalized over a wide
range of simple temperature-density relations. The
difficulties of simple HPM implementations with modeling the
effect of the thermal state accurately may therefore
be less important than suggested by our discussion so far.
However, improved measurements of the thermal
state of the gas utilizing the Doppler parameter distribution,
the flux PDF and the small scale flux power spectrum
are an important prerequisite for reducing the errors of
measurements of the matter power spectrum from Lyman-$\alpha$~ forest data.
Accurate modelling of the thermal state of the gas will be
required to take full advantage of a reduced uncertainty
regarding the thermal state of the IGM. For HPM simulations
this will almost certainly require a significant improvement
of the modelling of the thermal state, e.g. by introducing some
scatter in the temperature-density relation. Full
hydrodynamical simulations could thereby be used to quantify
and calibrate this scatter and to investigate possible
correlations of the scatter with physical quantities. Such
modelling would obviously greatly benefit from more precise
observational estimates of the parameters describing the
temperature-density relation which may be possible with the use
of the flux power at smaller scales and from an estimate of the
scatter in the temperature-density relation using higher-order
statistics such as the bispectrum (Mandelbaum et al. 2003, Viel
et al. 2004, Fang \& White 2004). It will then also be
important (in HPM and full hydro simulations) to model other
physical aspects affecting the thermal state of the gas, such as the
presence of galactic winds and temperature/UV fluctuations due
to the reionization of HeII.
\section*{Acknowledgements.}
The simulations were run on the COSMOS (SGI Altix 3700) supercomputer
at the Department of Applied Mathematics and Theoretical Physics in
Cambridge and on the Sun Linux cluster at the Institute of Astronomy in
Cambridge. COSMOS is a UK-CCC facility which is supported by HEFCE and
PPARC. MV thanks PPARC for financial support and Adam Lidz for useful
discussions. MV, MGH and VS thank the Kavli Institute for Theoretical
Physics in Santa Barbara, where part of this work was done, for
hospitality during the workshop on ``Galaxy-Intergalactic Medium
Interactions''. This work is partly supported by the European Community
Research and Training Network ``The Physics of the Intergalactic
Medium''. We thank Nick Gnedin for providing us with a copy of
his HPM code.
In this paper, we consider the problem of function computation over a directed acyclic communication network, called {\it network function computation}. A general setup of the problem can be as follows. A directed acyclic graph is used to model a communication network, where the edges model the communication links (noiseless or noisy) with capacity constraints and the nodes are assumed to have unlimited computing capability and infinite storage. In such a network, a set of nodes, referred to as the {\it source nodes}, generate possibly correlated messages, while another set of nodes, referred to as the {\it sink nodes}, are required to compute possibly different functions of the source messages with fidelity constraints. In particular, the network transmission problem, where the sink nodes are required to reconstruct certain subsets of the source messages, is a special case of network function computation with the sink nodes computing the corresponding identity functions.
The straightforward approach to network function computation is to transmit the required source messages to the sink nodes over the network and then compute the desired functions at the sink nodes. Instead of first transmitting the source messages to the sink nodes, network function computation can in general be done more efficiently in a distributed manner by means of network coding \cite{flow}. In recent years, network function computation has received considerable attention due to its important applications in sensor networks \cite{Giridhar05, Kowshik-Kumar-TIT12}, Big Data processing \cite{Infocom16}, Internet of Things (IoT) \cite{IEEEAccess16}, machine learning~\cite{IEEEAccess16},~etc.
\subsection{Related Works}
From the information theoretic point of view,\footnote{This problem has also been studied widely from the computational complexity point of view (e.g., \cite{Yao-STOC-1979,Ahlswede-Cai-TIT94,book-CommComp-1997}).} we are interested in the achievable rate region for the sink nodes to reliably compute their desired functions over the network. However, the problem with the general setup described in the foregoing is very difficult, because it encompasses various topics in information theory, including multi-terminal source coding, multi-terminal channel coding, network coding, separation of these three types of coding, etc. There are well-known open problems in each of these topics. We refer the reader to the comprehensive book by El~Gamal and Kim \cite{EIGamal_Kim_12b}. The overwhelming complexity and difficulty of network function computation necessitate the consideration of different simplifications of the setup in order to be able to make progress.
One simplification is to consider the problem under the setting that the messages generated by the source nodes are correlated but the network topology is very simple. This line of research can be traced back to Shannon's seminal works in which the transmission of a source message over a point-to-point channel was discussed. These include the classical source coding theorem \cite{Shannon48}, channel coding theorem \cite{Shannon48}, separation of source coding and channel coding \cite{Shannon48}, and rate-distortion theorem \cite{Shannon59-rate-distorsion} (see also~\cite{Cover91b}). Witsenhausen~\cite{Witsenhausen-IT-76} considered the source coding problem with side information at the decoder. In this model, the encoder compresses a source variable $X$. The decoder, in addition to receiving the output of the encoder, also observes a source variable $Y$ which is correlated with $X$. The decoder is required to reconstruct $X$ with zero error. Orlitsky and Roche \cite{Orlitsky-Roche-IT01-Coding_Compu} generalized Witsenhausen's model by requiring the decoder to compute an arbitrary function of $X$ and $Y$.
Multi-terminal source coding was launched by Slepian and Wolf \cite{Slepian-Wolf-IT73}, in which the following model was considered. Two correlated sources are compressed separately by two encoders. The decoder, which receives the output of both encoders, is required to reconstruct the two sources almost perfectly. Building on the Slepian-Wolf model, K\"{o}rner and Marton~\cite{Korner-Marton-IT73} investigated the computation of the modulo $2$ sum of two correlated binary sources.
To our knowledge, the K\"{o}rner-Marton problem was the first non-identity function computation problem over a network. Doshi~\textit{et al.}~\cite{doshi10} generalized the K\"{o}rner-Marton model by requiring the decoder to compute an arbitrary function of two correlated sources. More recently, Feizi and M\'{e}dard~\cite{Feizi-Medard-netw_funct_compre-IT-2014} investigated the computation of an arbitrary function of multiple correlated sources over a tree network. In such a network, the leaf nodes are the source nodes where correlated sources are generated, and the root node is the unique sink node where an arbitrary function of the sources is computed.
Another simplification is to consider network function computation under the setting that the messages generated by all the source nodes are mutually independent but the network topology can be general (an arbitrary directed network). Under this setting, when the sink nodes are required to reconstruct different subsets of the source messages, the network function computation problem degenerates to network coding \cite{flow,linear,alg} (see also \cite{Zhang-book,yeung08b}). For single-source network coding, i.e., the message generated by the single source node is required to be transmitted to every sink node, the capacity is completely characterized by a max-flow min-cut bound theorem \cite{flow}, and linear network coding is sufficient to achieve the capacity \cite{linear,alg}. For multi-source network coding, i.e., the source nodes generate mutually independent messages and each one of them is multicast to a certain subset of the sink nodes, the capacity region can only be characterized implicitly in terms of achievable entropy functions when the network is acyclic \cite{Yan-Yeung-Zhang-IT12-NC-region}. More explicit characterizations of the capacity region for some special cases can be found in \cite{Yan-Yang-Zhang-IT06-NetSharingBound, DFZ-IT07, Wang-Shroff-IT10-PairwInters-NC, Chan-IT16-Cut-setbounds-NC,Kamath-IT17-2unicast}.
To our knowledge, the first non-identity function computation problem over a directed acyclic network is the following so-called {\it sum-network} problem~\cite{Koetter-CISS2004,Ramamoorthy-ISIT08-sum-networks, Ramamoorthy-Langberg-JSAC13-sum-networks, Shenvi-Dey-ISIT10_3-3-sum-networks,
Rai-Dey-TIT-2012, Rai-Das-IEEEComm13}.
In a directed acyclic network, the multiple sink nodes are required to compute an algebraic sum of the messages observed by all the source nodes over a finite field (e.g., the foregoing modulo $2$ sum is an algebraic sum over the finite field $\mathbb{F}_2$). When there exists only one sink node, linear network coding achieves the computing capacity \cite{Koetter-CISS2004}. Ramamoorthy \cite{Ramamoorthy-ISIT08-sum-networks} first proved that if the number of source nodes and the number of sink nodes are at most $2$, all the sink nodes can compute the algebraic sum of the source messages with zero error by using scalar linear network coding if and only if there exists a directed path for every pair of source and sink nodes. Subsequently, Ramamoorthy and Langberg \cite{Ramamoorthy-Langberg-JSAC13-sum-networks} proved that if there are $3$ source nodes and $3$ sink nodes in the network, the existence of a single path for every pair of source and sink nodes is in general not sufficient for computing the algebraic sum of the source messages by using the network only once.\footnote{Using the network once means that each edge is used at most once.} Instead, it is sufficient if every pair of source and sink nodes can be connected by $2$ edge-disjoint paths.\footnote{The similar results were obtained independently by Shenvi and Dey \cite{Shenvi-Dey-ISIT10_3-3-sum-networks}.} However, Rai and Das~\cite{Rai-Das-IEEEComm13} showed by a counterexample that even this condition is not always sufficient if there are $7$ source nodes and $7$ sink nodes in the network.
In \cite{Appuswamy11,Appuswamy13,Appuswamy14,huang15}, the following network function computation model was considered. In a directed acyclic network, the single sink node is required to compute with zero error a function of the source messages separately observed by multiple source nodes. The network topology and the function are arbitrary. Appuswamy~\textit{et~al.}~\cite{Appuswamy11} investigated the fundamental {\it computing capacity}, i.e., the maximum average number of times that the function can be computed with zero error for one use of the network, and gave a cut-set based upper bound that is valid under certain constraints on either the network topology or the target function. Huang~\textit{et~al.}~\cite{huang15} obtained an enhancement of Appuswamy~\textit{et~al.}'s upper bound that can be applied for arbitrary functions and arbitrary network topologies. Specifically, for the case of computing an arbitrary function of the source messages over a multi-edge tree network and the case of computing the identity function or the algebraic sum function of the source messages over an arbitrary network topology, the above two upper bounds coincide and are tight (see \cite{Appuswamy11} and \cite{huang15}). However, both of these bounds are in general quite loose. Building on this model, Appuswamy~\textit{et~al.}~\cite{Appuswamy13} introduced the notions of routing, linear, and nonlinear computing capacities that respectively correspond to performing routing operations, linear network coding and nonlinear network coding at the nodes, and then compared the three different computing capacities. Recently, Appuswamy and Franceschetti~\cite{Appuswamy14} investigated the solvability (rate-$1$ achievable) of linear network codes when the single sink node is required to compute a vector-linear function of the source messages over a directed acyclic network.
\subsection{Contributions and Organization of the Paper}
In this paper, we consider the network function computation model discussed in \cite{Appuswamy11,Appuswamy13,Appuswamy14,huang15,Guang_NFC_ITW16}. To be specific, in a directed acyclic network, a single sink node is required to compute with zero error a function, called the {\it target function}, of which the arguments are the source messages generated by the multiple source nodes. The edges in the network are assumed to be error-free and have limited (unit) capacity. The nodes in the network are assumed to have unlimited computing capability and perform network coding, i.e., each node can encode the messages it receives or generates and then transmit the output of the encoding function. From the information-theoretic point of view, we are interested in the fundamental computing capacity, which is the average number of times that the target function can be computed with zero error for one use of the network.
One main contribution of this work is an improved upper bound on the computing capacity, which is applicable to arbitrary target functions and arbitrary network topologies. Our improved upper bound not only is an enhancement of the previous upper bounds (cf.~\cite{Appuswamy11,huang15}), but also is the first tight upper bound on the computing capacity for computing an arithmetic sum over a certain ``non-tree'' network (cf.~Example~\ref{eg:1} in Section~\ref{sec:preliminaries} of the current paper).\footnote{This example, first introduced by Appuswamy \textit{et al.} in \cite{Appuswamy11}, is used to show that both their upper bound and lower bounds proposed are not always tight and illustrate the combinatorial nature of the computing problem.}
An important application of our improved upper bound is in computing a vector-linear function of the source messages over an arbitrary directed acyclic network, which has been considered by Appuswamy and Franceschetti~\cite{Appuswamy14}. One of the main results in \cite{Appuswamy14} is that the {\it min-cut condition} (cf.~\cite{Appuswamy14} or Section~\ref{subsec:computing_linear_func} of the current paper), inherited from network coding, is not always sufficient for computing a vector-linear function over a network by using any rate-$1$ {\em linear} network code. The proof of this result in \cite{Appuswamy14} is rather complicated and relies on the use of some advanced algebraic tools. In contrast, by applying our improved upper bound, we can provide a simple proof of the stronger result that the min-cut condition is not always sufficient to compute a vector-linear function over a network by using any rate-$1$ network code (linear or nonlinear).
For all previously considered network function computation problems whose computing capacities are known, our improved upper bound is achievable if the computing capacity is rational, or is asymptotically achievable if the computing capacity is irrational. Another main contribution of this work is to prove that for computing the binary maximum function over the ``reverse butterfly network'', our improved upper bound is not achievable. Here, a novel network splitting approach is used to prove this result and the proof is highly nontrivial.
The paper is organized as follows. In Section~\ref{sec:preliminaries}, we formally present the network function computation model considered throughout the paper and the existing upper bounds on the computing capacity, and then give an example that suggests how the existing upper bounds can be improved. The improved upper bound is stated in Section~\ref{sec:improved_upp_bou}, followed by two discussions. The first is about evaluating the improved upper bound by using multi-dimensional arrays. The second is an application of the improved upper bound to enhance a result in \cite{Appuswamy14} on computing a vector-linear function over a network, as discussed in the foregoing. Section~\ref{sec:proof} is devoted to the proof of the improved upper bound. We show in Section~\ref{sec:non_tight} that the improved upper bound for computing the binary maximum function over the reverse butterfly network is not achievable. In Section~\ref{sec:concl}, we conclude with a summary of our results and a remark on future research.
\begin{itw2016}
\textbf{ITW Introduction} In Big Data processing, sensor networks~\cite{Giridhar05} and many
other scenarios, a target function is calculated repeatedly across a
network, where the input symbols of the function are generated
at multiple source nodes, and the function value (output) is calculated
cooperatively by all network nodes and desired at a sink node. The bottleneck of the capacity of the network function computation is the network bandwidth, rather than (or in addition to) the computing capability of the network nodes. This network function
computation problem is a generalization of the network communication
problem, where the latter is just considered as the computation of the
\emph{identity function}.
Various models and special cases of the network function computation
problem have been studied in the literature. One line of related work
studies functional data compression, where the input symbols are
generated by the source nodes according to a joint distribution, and the
sink node, directly and reliably connected with all the source nodes,
recovers a function of the input symbols (see references in
\cite{doshi10}). For a variety of settings, the achievable rate region of
functional data compression is related to graph entropy, which is in
general difficult to characterize in a single-letter form.
We are interested in the \emph{network coding model} for network function
computation, which has been studied in
\cite{Appuswamy11,Kowshik12,Rai12,Ramam13,Appuswamy14,huang15}. Specifically,
we consider a \emph{directed acyclic network} where the network links
have limited (unit) capacity and are error-free. Each source node
generates multiple input symbols, and the network codes can perform {\em vector
network coding} by using the network multiple times, where one use of the network means using each link at most once. Each intermediate network node is assumed to have unbounded computing capability and can transmit the output of a certain fixed function of
the symbols it receives. We do not assume any particular distribution
on the input symbols and require the target function to be computed correctly for all input combinations. The \emph{computing rate} is the average number of times that the target function can be computed correctly per use of the network. The maximum computing rate is called the \emph{computing capacity}.
When computing the identity function, the problem degenerates to the
extensively studied network coding problem \cite{flow}, and it is known
that in general linear network codes are sufficient to achieve the
multicast capacity when each sink node requires all sources \cite{linear, alg}. For linear target functions over a finite field, a complete characterization of the computing
capacity is not available for networks with one sink node. Certain
necessary and sufficient conditions have been obtained for the sufficiency of linear network codes for computing a linear target function \cite{Ramam13, Appuswamy14}. However, linear network codes are in general not sufficient to achieve the computing capacity of linear
target functions \cite{Rai12}.
In this paper, we discuss networks with a single sink node, where
both the target function and the network code can be nonlinear. In
this case, the computing capacity is known when the network is a
multi-edge tree \cite{Appuswamy11} or when the target function is the
identity function. For the general case, lower and upper bounds on
the computing capacity based on cut sets have been studied
\cite{Appuswamy11,Kowshik12,huang15}. In this paper, we derive a
general upper bound on the computing capacity that is
strictly better than the existing ones.
To obtain the best existing upper bound, Huang \textit{et al.}~\cite{huang15} define
an equivalence relation associated with subsets of the inputs of
the target function
and propose a cut-set bound on the computing capacity by counting the
number of equivalence classes associated with each cut set. This upper
bound is not tight in general, as demonstrated by a counterexample. In this paper, by examining partitions of a cut set, we find that
certain distinct inputs that are in the same equivalence class of
Huang \textit{et al.} must be represented by different messages transmitted on the cut set. These cut-set partitions motivate us to define another equivalence relation
and thereby derive a better upper bound. We also show via an example that our bound is strictly better than the one in \cite{huang15}.
\end{itw2016}
\section{Model and Preliminaries}\label{sec:preliminaries}
\subsection{Network Function Computation Model}
\label{sec:net-funct-comp-model}
Let $G=(\mathcal{V},\mathcal{E})$ be a directed acyclic graph with a finite vertex set $\mathcal{V}$ and an edge set $\mathcal{E}$,
where multiple edges are allowed between two nodes. A {\em
network} over $G$ is denoted by $\mathcal{N}=(G,S,\rho)$, where
$S\subset \mathcal{V}$ is the set of {\em source nodes}, say $S=\{\sigma_1, \sigma_2, \cdots, \sigma_s \}$ with $|S|=s$, and $\rho\in
\mathcal{V}\backslash S$ is the single {\em sink node}. The tail and the head of an edge $e$ are denoted by $\mathrm{tail}(e)$ and $\mathrm{head}(e)$, respectively. Moreover, for each node $u\in\mathcal{V}$, let $\mathcal{E}_{\mathrm{i}}(u) = \{e\in \mathcal{E}: \mathrm{head}(e)=u\}$ and $\mathcal{E}_{\mathrm{o}}(u)=\{e\in\mathcal{E}:\mathrm{tail}(e)=u\}$, which are the set of incoming edges and the set of outgoing edges of $u$, respectively. Without loss of generality, we assume that every source node has no incoming edges, because otherwise we can introduce a new source node and add a directed edge from the new source node to the original source node, which is then regarded as a non-source node. We further assume that there exists a directed path from every node $u\in \mathcal{V}\setminus \{\rho\}$ to $\rho$ in $G$. Then it follows from the acyclicity of $G$ that the sink node $\rho$ has no outgoing edges. Let $\mathcal{B}$ be a finite alphabet, and we assume that a symbol in $\mathcal{B}$ can be transmitted reliably on each edge for each use.
Let $\mathcal{A}$ and $\mathcal{O}$ be finite alphabets, and $f:\mathcal{A}^s\to \mathcal{O}$ be the {\em target function}. For the target function $f$, the $i$th argument is generated at the $i$th source node $\sigma_i$ and all outputs of the function are demanded by the sink node $\rho$. We will compute $f$ over the network $\mathcal{N}$ by using the network multiple times. Computation units with unbounded computing capability are available at all nodes in the network. However, the computing capability of the whole network is constrained by the network transmission capability.
Assume that the $i$th source node $\sigma_i$ generates $k$ symbols in $\mathcal{A}$, denoted by
$\vec{x}_i=(x_{i,1},x_{i,2},\cdots,x_{i,k})^\top$, which is called the
\textit{source vector} generated by $\sigma_i$. The symbols generated
by all the source nodes constitute the \textit{source matrix}
$\vec{x}_S=(\vec{x}_{1}, \vec{x}_{2}, \cdots, \vec{x}_{s})$ of size
$k\times s$. Let
\begin{equation*}
f(\vec{x}_S) = \big(f(x_{1,j},x_{2,j},\cdots,x_{s,j}):\ j=1,2,\ldots,k\big)^{\top}
\end{equation*}
be the $k$ outputs of the target function $f$ corresponding to the $k$ inputs of the source nodes. For any subset $J\subseteq S$, we let
$\vec{x}_J=(\vec{x}_{i}: \sigma_i\in J)$ and use $\mathcal{A}^{k\times J}$ (instead of $\mathcal{A}^{k\times |J|}$ for simplicity) to denote the set of all possible $k\times |J|$ matrices taken by $\vec{x}_J$.
In particular, for $k=1$, we omit the symbol ``\;$\vec{\cdot}$\;'' for notational simplicity, e.g., $x_J\in \mathcal{A}^J$. Moreover, whenever we write $\vec{x}_J$ as $x_J$, we implicitly assume that $k=1$. Throughout this paper, we adopt the convention that $\mathcal{A}^0$ is the singleton that contains an empty vector of dimension $0$ taking value in $\mathcal{A}$. As such, for $J=\emptyset$, we have $\mathcal{A}^J=\mathcal{A}^{|J|}=\mathcal{A}^0$. It also follows that for $J=\emptyset$, $\mathcal{A}^{k\times J}=\mathcal{A}^{k\times |J|}=(\mathcal{A}^k)^0$.
For two positive integers $k$ and $n$, a $(k,n)$ \textit{(function-computing) network code} over the network $\mathcal{N}$ with the target function $f$ is defined as follows. Let $\vec{x}_S\in \mathcal{A}^{k\times S}$ be the source matrix generated by all the source nodes. The purpose of such a network code is to compute $f(\vec{x}_S)$ by transmitting at most $n$ symbols in $\mathcal{B}$ on each edge in $\mathcal{E}$, i.e., using the network at most $n$ times. A $(k,n)$ (function-computing) network code consists of a {\em local encoding function} $\theta_{e}$ for each edge $e$, where
\begin{equation}\label{defn_local_function}
\theta_{e}:
\begin{cases}
\qquad \mathcal{A}^k \rightarrow \mathcal{B}^n, & \text{if }\ e\in \mathcal{E}_{\mathrm{o}}(\sigma) \text{ for some $\sigma\in S$}; \\
\prod\limits_{d\in \mathcal{E}_{\mathrm{i}}(\mathrm{tail}(e))} \mathcal{B}^n \rightarrow
\mathcal{B}^n, & \text{otherwise.}
\end{cases}
\end{equation}
With the encoding mechanism described above, the local encoding functions $\theta_{e}$, $e\in \mathcal{E}$, recursively determine the symbols transmitted on each edge $e$, denoted by $g_{e}(\vec{x}_S)$, which can be regarded as vectors in $\mathcal{B}^n$. Specifically, if $e$ is an outgoing edge of the $i$th source node $\sigma_{i}$, then
$g_{e}(\vec{x}_S) = \theta_{e}(\vec{x}_i)$;
if $e$ is an outgoing edge of some non-source node $u$ in $\mathcal{V}$, then
$g_{e}(\vec{x}_S) = \theta_{e}\big(g_{\mathcal{E}_{\mathrm{i}}(u)}(\vec{x}_S)\big)$. Similar to the classical network codes (see \cite{Zhang-book, yeung08b}), for each edge $e$, we call $g_{e}$ the {\em global encoding function} for $e$. For an edge set $E\subset\mathcal{E}$, we let
$$g_{E}(\vec{x}_S)=\big( g_{e}(\vec{x}_S):\ e\in E \big).$$
Furthermore, the $(k,n)$ network code consists of a decoding function
\begin{equation*}
\varphi: \prod_{e\in \mathcal{E}_{\mathrm{i}}(\rho)} \mathcal{B}^n \rightarrow
\mathcal{O}^k
\end{equation*}
at the sink node $\rho$. Define
$\psi(\vec{x}_S) = \varphi\big(g_{\mathcal{E}_{\mathrm{i}}(\rho)}(\vec{x}_S)\big)$.
If the network code can {\em compute} $f$, i.e., $\psi(\vec{x}_S)=f(\vec{x}_S)$ for all source matrices $\vec{x}_S\in\mathcal{A}^{k\times S}$, then $\frac{k}{n}\log_{|\mathcal{B}|}|\mathcal{A}|$ is called an {\em achievable
computing rate}. Further, a nonnegative real number $r$ is called {\it asymptotically achievable} if $\forall~\epsilon>0$, there exists a $(k,n)$ network code that can compute $f$ such that
\begin{align*}
\frac{k}{n}\log_{|\mathcal{B}|}|\mathcal{A}|>r-\epsilon.
\end{align*}
Clearly, any achievable computing rate must be asymptotically achievable. The {\em rate region} for computing $f$ over $\mathcal{N}$ is defined as
\begin{align}\label{def:rate_region}
\mathfrak{R}(\mathcal{N}, f)=\Big\{ r:\ r \text{ is asymptotically achievable for computing $f$ over $\mathcal{N}$} \Big\},
\end{align}
which is evidently closed and bounded. The {\em computing capacity} of the network $\mathcal{N}$ with respect
to the target function $f$ is defined as
\begin{align}\label{def:comp_cap}
\mathcal{C}(\mathcal{N},f)=\max~\mathfrak{R}(\mathcal{N}, f).
\end{align}
Without loss of generality, we assume throughout the paper that $\mathcal{A}=\mathcal{B}$, so that $\frac{k}{n}\log_{|\mathcal{B}|}|\mathcal{A}|$ in the above is simplified to $\frac{k}{n}$. Although the definition of the computing capacity $\mathcal{C}(\mathcal{N},f)$ here is a little different from the one used in \cite{Appuswamy11} and \cite{huang15}, i.e.,
$\sup\big\{ k/n:\ k/n\textrm{ is achievable} \big\}$, it is easy to see that they are equivalent. Our definition has the advantage that it is more consistent with the usual concept of rate region in information theory problems. In this paper we are interested in general upper bounds on $\mathcal{C}(\mathcal{N},f)$, where ``general'' means that the upper bounds are applicable to arbitrary network $\mathcal{N}$ and arbitrary function~$f$.
\subsection{Existing Upper Bounds}\label{subsec:pre_B}
Let us first discuss a simple upper bound. For two nodes $u$ and $v$ in $\mathcal{V}$, if there exists a directed path from $u$ to $v$ in $G$, denote this relation by $u\rightarrow v$. If there exists no such directed path from $u$ to $v$, we say that $u$ is
\emph{separated} from $v$. Given a set of edges $C\subseteq
\mathcal{E}$, define $I_C$ as the set of the source nodes that are separated from the sink node $\rho$ if $C$ is deleted from $\mathcal{E}$, i.e.,
\begin{align*}
I_C=\left\{ \sigma\in S:\ \sigma \text{ is separated from } \rho \text{ upon deleting the edges in $C$ from $\mathcal{E}$} \right\}.
\end{align*}
Equivalently, $I_C$ is the set of source nodes from which all directed paths to the sink node $\rho$ pass through~$C$. For two edge sets $C_1, C_2\subseteq \mathcal{E}$, it is clear that $I_{C_i}\subseteq I_{C_1\cup\, C_2}$, $i=1,2$. Thus,
\begin{align}\label{eq:I_C_subseteq}
I_{C_1}\cup I_{C_2}\subseteq I_{C_1\cup\, C_2}.
\end{align}
However, $I_{C_1}\cup I_{C_2}\neq I_{C_1\cup\, C_2}$ in general.
An edge set $C$ is said to be a {\em cut set} if $I_C\neq \emptyset$, and let $\Lambda(\mathcal{N})$ be the family of all cut sets in the network~$\mathcal{N}$, i.e.,
$$\Lambda(\mathcal{N})=\{ C\subseteq \mathcal{E}:\ I_C \neq \emptyset \}.$$
In particular, we refer to a cut set $C$ with $I_C=S$ as a {\em global cut set}.
Denote by $f(\mathcal{A}^s)$ the set of all possible images of $f$ on $\mathcal{O}$, i.e.,
\begin{align*}
f(\mathcal{A}^s)=\big\{o\in \mathcal{O}:\ o=f(x_S) \text{ for some } x_S\in \mathcal{A}^S \big\}.
\end{align*}
A $(k,n)$ network code that can compute $f$ has to distinguish all images in $f(\mathcal{A}^s)$ on every global cut set $C$. We elaborate this as follows. Let $\{g_e: e\in \mathcal{E} \}$ be the set of all global encoding functions of a given $(k,n)$ network code. By the acyclicity of $G$, since $C$ is a global cut set, $g_{\mathcal{E}_{\mathrm{i}}(\rho)}(\vec{x}_S)$ is a function of $g_C(\vec{x}_S)$. For any two source matrices $\vec{a}_S$ and $\vec{b}_S$ in $\mathcal{A}^{k\times S}$, if $f(\vec{a}_S)\neq f(\vec{b}_S)$, then $g_C(\vec{a}_S)\neq g_C(\vec{b}_S)$, because otherwise we have $g_{\mathcal{E}_{\mathrm{i}}(\rho)}(\vec{a}_S)=g_{\mathcal{E}_{\mathrm{i}}(\rho)}(\vec{b}_S)$, a contradiction to the assumption that this $(k,n)$ network code can compute $f$ over $\mathcal{N}$. Hence, the following inequality is satisfied:
\begin{align*}
|\mathcal{A}|^{n\cdot|C|}\geq |f(\mathcal{A}^s)|^k.
\end{align*}
This implies the following upper bound (also see \cite[Proposition~2]{huang15}):
\begin{equation}\label{eq:1}
\mathcal{C}(\mathcal{N},f) \leq \min_{C\in
\Lambda(\mathcal{N}): I_C=S}
\frac{|C|}{\log_{|\mathcal{A}|}|f(\mathcal{A}^s)|}.
\end{equation}
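As a quick numerical illustration of \eqref{eq:1} (the following Python sketch and its helper names are ours, not part of the formal development), consider the arithmetic sum of three binary sources and a global cut set of cardinality two, as in Example~\ref{eg:1} below; the bound evaluates to $1$.
\begin{verbatim}
# A minimal sketch (ours, not from the paper): evaluating the cut-set bound
# above for the arithmetic sum f(x1,x2,x3) = x1 + x2 + x3 of three binary
# sources, assuming a global cut set of cardinality two.
from itertools import product
from math import log2

A = [0, 1]
f = lambda xs: sum(xs)

image = {f(xs) for xs in product(A, repeat=3)}    # f(A^s) = {0, 1, 2, 3}
cut_size = 2                                      # |C| for the global cut set

bound = cut_size / (log2(len(image)) / log2(len(A)))   # |C| / log_|A| |f(A^s)|
print(len(image), bound)                          # 4 and 1.0
\end{verbatim}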
In \cite{Appuswamy11}, Appuswamy~\textit{et~al.} proposed an enhanced upper bound by considering an equivalence relation defined on the input vectors of the target function $f$ with respect to the cut sets. However, it was subsequently pointed out by Huang~\textit{et~al.}~\cite{huang15} that the proof in \cite{Appuswamy11} is incorrect, and in fact the claimed upper bound is valid only for either arbitrary target functions over special network topologies or special target functions over arbitrary network topologies. They then fixed the upper bound in \cite{Appuswamy11} by modifying the equivalence relation considered there. However, it was also pointed out in \cite{huang15} that this enhanced upper bound is not tight for an example first studied in \cite{Appuswamy11}. A main contribution of this work is a further enhanced upper bound that is tight for this example. These results will be discussed in detail in the rest of the paper.
Next, we review the upper bound obtained in \cite{huang15}. Define a set $K_C$ for a cut set $C\in \Lambda(\mathcal{N})$ as
\begin{align}\label{def_K_C}
K_C=\left\{ \sigma\in S:\ \exists\ e\in C \text{ s.t. } \sigma\rightarrow\mathrm{tail}(e) \right\}.
\end{align}
Recall that $u\rightarrow \rho$ for all $u\in\mathcal{V}\setminus\{\rho\}$. In particular, $\mathrm{tail}(e)\rightarrow \rho$ for all $e\in C$. Then we can easily see that $K_C$ is the set of source nodes from which there exists a directed path to the sink node $\rho$ that passes through $C$. Evidently, $I_C\subseteq K_C$. Further, let $J_C=K_C\backslash I_C$, and hence $K_C=I_C\cup J_C$ and $I_C \cap J_C=\emptyset$. Note that once $C$ is given, $K_C$, $I_C$ and $J_C$ are determined.
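The sets $I_C$, $K_C$ and $J_C$ can be computed directly from the graph by reachability tests. The following Python sketch (ours, not part of the formal development; for simplicity it does not model parallel edges, and the node labels are our own) illustrates this for an edge list corresponding to the network of Fig.~\ref{fig:1} in Example~\ref{eg:1} below.
\begin{verbatim}
# A minimal sketch (ours, not from the paper): computing I_C, K_C and J_C for
# an edge set C in a directed acyclic graph. Edges are (tail, head) pairs.
def reaches(edge_list, u, v):
    # True iff there is a directed path from u to v (u = v counts as reachable)
    frontier, seen = [u], {u}
    while frontier:
        x = frontier.pop()
        if x == v:
            return True
        for t, h in edge_list:
            if t == x and h not in seen:
                seen.add(h)
                frontier.append(h)
    return False

def I_K_J(edges, sources, sink, C):
    remaining = [e for e in edges if e not in C]
    I_C = {s for s in sources if not reaches(remaining, s, sink)}
    K_C = {s for s in sources if any(reaches(edges, s, t) for t, _ in C)}
    return I_C, K_C, K_C - I_C

edges = [('s1', 'u'), ('s2', 'u'), ('s2', 'v'),
         ('s3', 'v'), ('u', 'r'), ('v', 'r')]
print(I_K_J(edges, {'s1', 's2', 's3'}, 'r', [('u', 'r')]))
# For C = {e5}: I_C = {sigma_1}, K_C = {sigma_1, sigma_2}, J_C = {sigma_2}
\end{verbatim}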
For notational convenience in the rest of the paper, we suppose that the argument of the target function $f$ with subscript $i$ always stands for the symbol generated by the $i$th source node $\sigma_i$, so that we can ignore the order of the arguments of $f$. For example, let $S=\{\sigma_1, \sigma_2, \sigma_3, \sigma_4\}$, $I=\{\sigma_2, \sigma_4\}$, and $J=\{\sigma_1, \sigma_3\}$ (clearly, $S=I\cup J$ and $I\cap J=\emptyset$). Then we regard $f(x_I, x_J)$ and $f(x_J, x_I)$ as being the same as $f(x_S)$, i.e.,
\begin{align*}
f(x_2,x_4,x_1,x_3)=f(x_1,x_3,x_2,x_4)=f(x_1,x_2,x_3,x_4).
\end{align*}
This abuse of notation should cause no ambiguity and greatly simplifies the notation.
\begin{defn}\label{def:ec}
Consider two disjoint sets $I,J\subseteq S$ and a fixed $\vec{a}_{J}\in \mathcal{A}^{k\times J}$ for a positive integer $k$. For any $\vec{b}_I, \vec{b}'_I \in \mathcal{A}^{k\times I}$, we say $\vec{b}_I$ and $\vec{b}'_I$ are $(I,\vec{a}_J)$-equivalent if
\begin{align*}
f(\vec{b}_I, \vec{a}_{J}, \vec{d})=f(\vec{b}'_I, \vec{a}_{J},\vec{d}),\quad \forall\ \vec{d} \in \mathcal{A}^{k\times S\setminus (I\cup J)}.
\end{align*}
\end{defn}
We remark that Definition~\ref{def:ec} depends only on the target function $f$ but not on the network $\mathcal{N}$. It is easily seen that the above relation is an equivalence relation. Now, we consider a fixed cut set $C\in \Lambda(\mathcal{N})$ and let $I$ and $J$ in Definition~\ref{def:ec} be $I_C$ and $J_C$, respectively (evidently, $I\cap J=\emptyset$ by definition). Fix $\vec{a}_J\in \mathcal{A}^{k\times J}$. Then the $(I,\vec{a}_J)$-equivalence relation induces a partition of $\mathcal{A}^{k\times I}$ and the blocks in the partition are called {\em $(I,\vec{a}_J)$-equivalence classes}.
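For the scalar case $k=1$, the $(I,a_J)$-equivalence classes can be enumerated by grouping the vectors in $\mathcal{A}^{I}$ according to the tuple of function values obtained over all completions of the remaining sources. The following Python sketch (ours; the function and variable names are our own conventions) does exactly this and, for the arithmetic sum of three binary sources with $I=S$ and $J=\emptyset$, reproduces the four classes listed in Example~\ref{eg:1} below.
\begin{verbatim}
# A minimal sketch (ours, not from the paper) of the (I, a_J)-equivalence
# relation for k = 1: group the vectors b_I in A^I by their "signature",
# i.e., the tuple of f-values over all completions d, with x_J fixed to a_J.
from itertools import product

def equivalence_classes(f, A, s, I, J, a_J):
    rest = [i for i in range(s) if i not in I and i not in J]
    def signature(b_I):
        values = []
        for d in product(A, repeat=len(rest)):
            x = [None] * s
            for pos, i in enumerate(I):
                x[i] = b_I[pos]
            for pos, j in enumerate(J):
                x[j] = a_J[pos]
            for pos, r in enumerate(rest):
                x[r] = d[pos]
            values.append(f(tuple(x)))
        return tuple(values)
    classes = {}
    for b_I in product(A, repeat=len(I)):
        classes.setdefault(signature(b_I), []).append(b_I)
    return list(classes.values())

# Arithmetic sum of three binary sources with I = S and J empty:
for cl in equivalence_classes(lambda x: sum(x), [0, 1], 3, [0, 1, 2], [], ()):
    print(cl)
# prints the four classes Cl_1, ..., Cl_4 of Example 1
\end{verbatim}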
Let $\{g_e:\ e\in \mathcal{E}\}$ be the set of all global encoding functions of a given $(k,n)$ network code that can compute $f$ over $\mathcal{N}$. Then this network code has to distinguish all the $(I,\vec{a}_J)$-equivalence classes on the cut set $C$. Intuitively, for any two source matrices $\vec{b}_I$ and $\vec{b}'_I$ in
$\mathcal{A}^{k\times I}$ that are not $(I, \vec{a}_J)$-equivalent, it is
necessary that
\begin{align}\label{neq_g_C}
g_C(\vec{b}_I, \vec{a}_J)\neq g_C(\vec{b}'_I, \vec{a}_J).
\end{align}
This can be formally proved as follows. First, since no directed path exists from any source node in $S\setminus (I\cup J)$ to any node in $\{\mathrm{tail}(e): e\in C\}$, the input symbols $\vec{x}_{S\setminus(I\cup J)}$ do not contribute to the values of $g_C=(g_e: e\in C)$. Hence, we write $g_C(\vec{x}_I, \vec{x}_J, \vec{x}_{S\setminus(I\cup J)})$ as $g_C(\vec{x}_I, \vec{x}_J)$. Consider any $\vec{b}_I$ and $\vec{b}'_I$ in $\mathcal{A}^{k\times I}$ that are not $(I, \vec{a}_J)$-equivalent, i.e., $\exists~\vec{d}\in \mathcal{A}^{k \times S\setminus (I\cup J)}$ such that
\begin{align}\label{equ:pf_lem1_1}
f(\vec{b}_I, \vec{a}_J, \vec{d}) \neq f(\vec{b}'_I, \vec{a}_J, \vec{d}).
\end{align}
Let $D=\bigcup_{\sigma\in (S\setminus I)}\mathcal{E}_{\mathrm{o}}(\sigma)$, an edge subset of $\mathcal{E}$. Then $\widehat{C}=C\cup D$ is a global cut set, i.e., $I_{\widehat{C}}=S$. Since $g_{\mathcal{E}_{\mathrm{i}}(\rho)}(\vec{x}_S)$ is a function of $g_{\widehat{C}}(\vec{x}_S)$ and the network code can compute $f$, \eqref{equ:pf_lem1_1} implies that
\begin{align}\label{neq_g_C_1}
g_{\widehat{C}}(\vec{b}_I, \vec{a}_J, \vec{d})\neq g_{\widehat{C}}(\vec{b}'_I, \vec{a}_J, \vec{d}).
\end{align}
Together with $K_C=I\cup J$ and $K_{D}=S\setminus I$, we have
\begin{align*}
\big( g_{C}(\vec{b}_I, \vec{a}_{J}),\ g_{D}(\vec{a}_J, \vec{d})\big)
=g_{\widehat{C}}(\vec{b}_I, \vec{a}_J, \vec{d})
\neq
g_{\widehat{C}}(\vec{b}'_I, \vec{a}_J, \vec{d})
=\big( g_{C}(\vec{b}'_I, \vec{a}_J),\ g_{D}(\vec{a}_J, \vec{d})\big).
\end{align*}
By comparing the ordered pairs on the left and right above, we obtain \eqref{neq_g_C}.
Let $W_{C,f}^{(\vec{a}_J)}$ denote the number of all $(I, \vec{a}_J)$-equivalence classes. Then it follows from the above discussion that $|\mathcal{A}|^{n\cdot|C|}\geq W_{C,f}^{(\vec{a}_J)}$, and furthermore that
\begin{align}\label{ineq}
|\mathcal{A}|^{n\cdot|C|}\geq \max_{ \vec{a}_J\in \mathcal{A}^{k\times J}} W_{C,f}^{(\vec{a}_J)}.
\end{align}
In \eqref{ineq}, for $k=1$, $\max\limits_{ \vec{a}_J\in \mathcal{A}^{k\times J}} W_{C,f}^{(\vec{a}_J)}$ becomes $\max\limits_{ {a}_J\in \mathcal{A}^{J}} W_{C,f}^{({a}_J)}$, and we denote it by $w_{C,f}$. Together with the claim that $\max\limits_{ \vec{a}_J\in \mathcal{A}^{k\times J}} W_{C,f}^{(\vec{a}_J)}=(w_{C,f})^k$ in \cite{huang15}, we obtain the upper bound therein:
\begin{equation}
\label{eq:2}
\mathcal{C}(\mathcal{N},f)\leq
\min_{C\in\Lambda(\mathcal{N})}\dfrac{|C|}{\log_{|\mathcal{A}|}w_{C,f}}.
\end{equation}
When the cut set $C$ is a global cut set, i.e., $I=I_C=S$, we have $J=J_C=\emptyset$. Then we can see that two source inputs $a_S$ and $b_S$ in $\mathcal{A}^S$ are $(I,{a}_J)$-equivalent if and only if $f(a_S)=f(b_S)$ (note that $\forall~{a}_J\in \mathcal{A}^J$, $a_J$ is an empty vector). This implies that $w_{C,f}=|f(\mathcal{A}^s)|$. Considering the right hand side of \eqref{eq:2}, we have
\begin{align*}
\min_{C\in\Lambda(\mathcal{N})}\dfrac{|C|}{\log_{|\mathcal{A}|}w_{C,f}}\leq \min_{C\in\Lambda(\mathcal{N}): I_C=S}\dfrac{|C|}{\log_{|\mathcal{A}|}w_{C,f}}=
\min_{C\in\Lambda(\mathcal{N}): I_C=S} \frac{|C|}{\log_{|\mathcal{A}|}|f(\mathcal{A}^s)|}.
\end{align*}
Hence, the upper bound in \eqref{eq:2} is an enhancement of the one in \eqref{eq:1}. It was shown in \cite{huang15} that this bound is in fact tighter than the one in \eqref{eq:1} and is tight for {\em multi-edge tree networks}, where a multi-edge tree network is a tree with multiple edges allowed between two adjacent nodes (see \cite{Appuswamy11} and \cite{huang15}). However, it was also demonstrated in \cite{huang15} that this bound is not tight for an example that was first studied in \cite{Appuswamy11}.
\begin{example}[\!\!\cite{Appuswamy11,huang15}]\label{eg:1}
\begin{figure}[t]
\centering
{
\begin{tikzpicture}[x=0.6cm]
\draw (0,0) node[vertex] (2) [label=above:$\sigma_2$] {};
\draw (-2,-1.5) node[vertex] (1') [label=left:] {};
\draw (2,-1.5) node[vertex] (2') [label=right:] {};
\draw (0,-3) node[vertex] (0) [label=below:$\rho$] {};
\draw[->,>=latex] (2) -- (1') node[midway, auto,swap, left=0mm] {$e_2$};
\draw[->,>=latex] (2) -- (2') node[midway, auto, right=0mm] {$e_3$};
\draw[->,>=latex] (1') -- (0) node[midway, auto,swap, left=0mm] {$e_5$};
\draw[->,>=latex] (2') -- (0) node[midway, auto, right=0mm] {$e_6$};
\draw node[vertex,label=above:$\sigma_1$] at (-4,0) (1) {};
\draw[->,>=latex] (1) -- (1') node[midway, auto,swap, left=0mm] {$e_1$};
\draw node[vertex,label=above:$\sigma_3$] at (4,0) (3) {};
\draw[->,>=latex] (3) -- (2') node[midway, auto, right=0mm] {$e_4$};
\end{tikzpicture}
}
\caption{The network $\mathcal{N}$ has three binary sources
$\sigma_1$, $\sigma_2$, $\sigma_3$ and one sink $\rho$ that
computes the {\em arithmetic sum} of the source messages as the target function $f$, i.e., $f(x_1,x_2,x_3)=x_1+x_2+x_3$, with $\mathcal{A}=\{0,1\}$ and $\mathcal{O}=\{0,1,2,3\}$.}
\label{fig:1}
\end{figure}
For the network function computation problem in Fig.~\ref{fig:1}, denoted by $(\mathcal{N}, f)$, both the upper bounds in \eqref{eq:1} and \eqref{eq:2} are equal to $1$, giving $\mathcal{C}(\mathcal{N},f)\leq 1$. To be specific, the right hand side of \eqref{eq:1} is minimized by the global cut set $C=\{e_5, e_6\}$ with cardinality $2$ and $|f(\mathcal{A}^s)|=4$, giving the upper bound $1$. It was shown in \cite{huang15} that the right hand side of \eqref{eq:2} is minimized by the cut set $C=\{e_5, e_6\}$. Denote $I_C$ and $J_C$ by $I$ and $J$, respectively. Evidently, $I=S$ and $J=\emptyset$. Then, $\forall~{a}_J\in \mathcal{A}^J$, ${a}_J$ is an empty vector. Thus, the $(I, {a}_J)$-equivalence classes are
\begin{align*}
&{\mathrm{Cl}}_1= \{(0,0,0)\}, \quad {\mathrm{Cl}}_2=\{(0,0,1),(0,1,0),(1,0,0) \}, \\
&{\mathrm{Cl}}_3= \{(0,1,1),(1,0,1),(1,1,0)\}, \quad {\mathrm{Cl}}_4=\{(1,1,1)\},
\end{align*}
and for the input vectors in the same equivalence class ${\mathrm{Cl}}_i$, $1\leq i \leq 4$, the function $f$ takes the same value.\footnote{In fact, for any network computation problem $(\mathcal{N},f)$, we can easily see that for every global cut set $C$, i.e., $I=S$ and $J=\emptyset$, two source inputs $b_S$ and $b_S'$ in $\mathcal{A}^S$ are $(I, a_J)$-equivalent if and only if $f(b_S)=f(b_S')$.} Hence, we have $w_{C,f}=4$ and $|C|/\log_{|\mathcal{A}|} w_{C,f}=1$, giving the upper bound $1$.
However, it was shown in \cite{Appuswamy11} that the exact computing capacity of $(\mathcal{N},f)$ is $2/(1+\log_2 3)\approx 0.77$, which is considerably smaller than $1$. The proof of this computing capacity is non-trivial.
Note that the network in Fig.~\ref{fig:1} has a very simple ``non-tree'' structure. Thus, this example indicates that the existing upper bounds are far from being tight for general non-tree network topologies.
\end{example}
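
To illustrate the role of $J_C$ in \eqref{eq:2}, the following Python sketch (ours, not from \cite{Appuswamy11} or \cite{huang15}) computes $w_{C,f}$ for the non-global cut set $C=\{e_5\}$ of Example~\ref{eg:1}, for which $I_C=\{\sigma_1\}$ and $J_C=\{\sigma_2\}$; the resulting bound $|C|/\log_{|\mathcal{A}|}w_{C,f}$ is again $1$.
\begin{verbatim}
# A minimal sketch (ours, not from the paper): w_{C,f} for the non-global cut
# set C = {e5} of Example 1, where I_C = {sigma_1} and J_C = {sigma_2}.
from math import log2

A = [0, 1]
f = lambda x1, x2, x3: x1 + x2 + x3

def num_classes(a2):
    # number of (I_C, a_J)-equivalence classes of A^{I_C} for a_J = a2
    signatures = {x1: tuple(f(x1, a2, x3) for x3 in A) for x1 in A}
    return len(set(signatures.values()))

w_C_f = max(num_classes(a2) for a2 in A)   # = 2
print(w_C_f, 1 / log2(w_C_f))              # bound |C| / log_2 w_{C,f} = 1
\end{verbatim}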
We now use Example~\ref{eg:1} to give an intuitive (but not complete) explanation of why the upper bound $1$ on $\mathcal{C}(\mathcal{N},f)$ in \eqref{eq:2} is not tight. Suppose this upper bound is tight so that the rate $1$ is achievable, i.e., there exists a $(k,k)$ network code for some positive integer $k$. Let us for the time being assume that $k=1$, and let the set of global encoding functions be $\{ g_{e_i}: 1\leq i \leq 6 \}$. Since this code can compute $f$, according to the upper bound \eqref{eq:2}, it is necessary for it to distinguish the four $(I,a_J)$-equivalence classes ${\mathrm{Cl}}_1$, ${\mathrm{Cl}}_2$, ${\mathrm{Cl}}_3$ and ${\mathrm{Cl}}_4$ at the cut set $C=\{e_5, e_6\}$.
However, we now show that this condition is not sufficient for the code to compute $f$. Note that with $k=n=1$, $g_C$ takes at most $|\mathcal{A}|^{n|C|}=4$ distinct values, while it must distinguish the four equivalence classes; hence $g_C$ must take the same value for all the inputs in each equivalence class, and in particular in ${\mathrm{Cl}}_2$. Since the two inputs $(0,0,1)$ and $(1,0,0)$ are in ${\mathrm{Cl}}_2$, we have
\begin{align*}
g_C(0,0,1)=&\big( g_{e_5}(x_1=0,x_2=0), g_{e_6}(x_2=0,x_3=1) \big)\\
=&\big( g_{e_5}(x_1=1,x_2=0), g_{e_6}(x_2=0,x_3=0) \big)=g_C(1,0,0),
\end{align*}
implying that
$$g_{e_5}(x_1=0,x_2=0)=g_{e_5}(x_1=1,x_2=0).$$
On the other hand, by considering the input $(1,0,1)$ in ${\mathrm{Cl}}_3$ and the input $(0,0,1)$ in ${\mathrm{Cl}}_2$, we obtain
\begin{align*}
g_C(1,0,1)=&\big(g_{e_5}(x_1=1,x_2=0), g_{e_6}(x_2=0,x_3=1) \big)\\
=&\big(g_{e_5}(x_1=0, x_2=0), g_{e_6}(x_2=0,x_3=1) \big)=g_C(0,0,1),
\end{align*}
implying that the code cannot distinguish these two inputs and hence cannot compute $f$, because $2=f(1,0,1) \neq f(0,0,1)=1$.
In other words, the necessary condition that has been used to obtain \eqref{eq:2} is not strong enough to also be sufficient, and hence the upper bound \eqref{eq:2} is not tight. Nevertheless, based on the intuition obtained in the above discussion, we will propose a new upper bound that is applicable to arbitrary network topologies and target functions. This upper bound not only enhances the upper bound in \eqref{eq:2}, but is also tight for the network function computation problem in Example~\ref{eg:1}.
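
For the special case $k=n=1$, the above intuitive argument can also be confirmed exhaustively: the following Python sketch (ours, not part of the formal development) enumerates all $(1,1)$ network codes on the network of Fig.~\ref{fig:1} and verifies that none of them computes the arithmetic sum.
\begin{verbatim}
# A brute-force sketch (ours, not from the paper): verify that no (1,1) network
# code computes the arithmetic sum over the network of Fig. 1. With k = n = 1,
# a code is determined by the local encoding functions on the six edges, and a
# decoder exists iff the pair (g_{e5}, g_{e6}) received by rho separates inputs
# with different function values.
from itertools import product

A = (0, 1)
inputs = list(product(A, repeat=3))          # all (x1, x2, x3)
f = lambda x: sum(x)

unary  = list(product(A, repeat=2))          # truth tables of maps A -> A
binary = list(product(A, repeat=4))          # truth tables of maps A^2 -> A

def computes_f(t1, t2, t3, t4, t5, t6):
    seen = {}
    for x1, x2, x3 in inputs:
        y5 = t5[2 * t1[x1] + t2[x2]]         # symbol on e5 from e1 and e2
        y6 = t6[2 * t3[x2] + t4[x3]]         # symbol on e6 from e3 and e4
        if seen.setdefault((y5, y6), f((x1, x2, x3))) != f((x1, x2, x3)):
            return False                     # inputs with different f-values collide
    return True

exists = any(computes_f(*ts) for ts in product(unary, unary, unary, unary,
                                               binary, binary))
print(exists)        # False: rate 1 is not achievable with k = n = 1
\end{verbatim}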
\section{Improved Upper Bound}\label{sec:improved_upp_bou}
In this section, we state our improved upper bound with some discussions. The proof of this bound is deferred to Section~\ref{sec:proof}.
\subsection{The Improved Upper Bound}
\begin{defn}\label{def:strong_parti}
Let $C\in \Lambda(\mathcal{N})$ be a cut set and $\mathcal{P}_C=\{C_1,C_2,\cdots, C_m \}$ be a partition of the cut set $C$. The partition $\mathcal{P}_C$ is said to be a {\em strong partition} of $C$ if the following two conditions are satisfied:
\begin{enumerate}
\item $I_{C_l} \neq \emptyset$, $\forall~1\leq l \leq m$;
\item $I_{C_i}\cap I_{C_j}=\emptyset$, $\forall~1\leq i, j \leq m$ and $i\neq j$.
\end{enumerate}
\end{defn}
For any cut set $C$ in $\Lambda(\mathcal{N})$, the partition $\{C \}$ is called the \textit{trivial strong partition} of $C$.
\begin{defn}
\label{defn:Par_Equ_Relation_ScalarCase}
Let $I$ and $J$ be two disjoint subsets of $S$. Let $I_l$, $l=1,2, \cdots, m$, be $m$ disjoint subsets of $I$ and let $L=I\setminus(\bigcup_{l=1}^m I_l)$. For given ${a}_J\in \mathcal{A}^{J}$ and ${a}_L\in \mathcal{A}^{L}$, we say that ${b}_{I_l}$ and $b'_{I_l}$ in $\mathcal{A}^{I_l}$ are $(I_l, {a}_L, {a}_J)$-equivalent for $1 \leq l \leq m$, if for each ${c}_{I_j} \in \mathcal{A}^{I_j}$ with $1 \leq j \leq m$ and $j\neq l$, $({b}_{I_l},{a}_{L}, {c}_{I_j},\ 1 \leq j \leq m, j\neq l)$ and $({b}'_{I_l}, {a}_{L}, {c}_{I_j},\ 1 \leq j \leq m, j\neq l)$ in $\mathcal{A}^{I}$ are $(I, {a}_J)$-equivalent.
\end{defn}
It is easily seen that the above relation for every $l$ is an equivalence relation and thus partitions $\mathcal{A}^{I_l}$ into $(I_l, {a}_L, {a}_J)$-equivalence classes. Similar to the $(I, {a}_J)$-equivalence relation, the $(I_l, {a}_L, {a}_J)$-equivalence relation does not depend on any cut set or the network topology.
Note that Definition~\ref{defn:Par_Equ_Relation_ScalarCase} subsumes Definition~\ref{def:ec} because the former reduces to the latter when $m=1$ and $I_1=I$ ($L=\emptyset$ and $a_L$ is an empty vector), i.e., the $(I_1, {a}_L, {a}_J)$-equivalence relation becomes the $(I, {a}_J)$-equivalence relation and the $(I_1, {a}_L, {a}_J)$-equivalence classes become the $(I, {a}_J)$-equivalence classes.
Fix a cut set $C\in \Lambda(\mathcal{N})$ and let $\mathcal{P}_C=\{ C_1,C_2,\cdots, C_m \}$ be a strong partition of $C$. For notational simplicity, let $I=I_C$, $J=J_C$ and $I_l=I_{C_l}$ for $l=1,2, \cdots, m$. By \eqref{eq:I_C_subseteq}, $\bigcup_{l=1}^m I_l\subseteq I_{\bigcup_{l=1}^m C_l}= I$, and accordingly we let $L=I\setminus(\bigcup_{l=1}^m I_l)$. Then we see that $\{ I_1, I_2, \cdots, I_m, L \}$ forms a partition of $I$.
We use ${\mathrm{Cl}}[{a}_J]$ to denote an $(I,{a}_J)$-equivalence class. For $l=1,2,\cdots,m$, we use ${\mathrm{cl}}_{I_l}[{a}_{L}, {a}_J]$ to denote an $(I_l,{a}_L,{a}_J)$-equivalence class, and use $V_{I_l}^{[{a}_{L},{a}_J]}$ to denote the number of the $(I_l,{a}_L,{a}_J)$-equivalence classes. In particular, when ${a}_{L}$ and ${a}_J$ are clear from the context, we write ${\mathrm{cl}}_{I_l}$ and $V_{I_l}$ to simplify the notation. Now, we define the set
\begin{align}\label{def:langle_set_rangle}
\big\langle {\mathrm{cl}}_{I_1}, {\mathrm{cl}}_{I_2}, \cdots, {\mathrm{cl}}_{I_m}, {a}_{L}\big \rangle
\triangleq \Big\{ ({b}_{I_1}, {b}_{I_2}, \cdots, {b}_{I_m}, {a}_{L}):\ {b}_{I_l}\in {\mathrm{cl}}_{I_l}, l=1,2,\cdots,m \Big\}\subseteq \mathcal{A}^{I}
\end{align}
and state the following lemma. The proof is deferred to Section~\ref{sec:proof}.
\begin{lemma}\label{prop_cl}
For any set of $(I_l, {a}_{L}, {a}_J)$-equivalence classes ${\mathrm{cl}}_{I_l}$, $l=1, 2, \cdots, m$, all source inputs $({b}_{I_1}, {b}_{I_2}, \cdots, {b}_{I_m}, {a}_{L})$ in $\big\langle {\mathrm{cl}}_{I_1}, {\mathrm{cl}}_{I_2}, \cdots, {\mathrm{cl}}_{I_m}, {a}_{L}\big \rangle$ are $(I,{a}_J)$-equivalent. In other words, there exists an $(I,{a}_J)$-equivalence class ${\mathrm{Cl}}[{a}_J]$ such that
\begin{align}\label{equ:lem:prop_cl}
\big\langle {\mathrm{cl}}_{I_1}, {\mathrm{cl}}_{I_2}, \cdots, {\mathrm{cl}}_{I_m}, {a}_{L} \big\rangle \subseteq {\mathrm{Cl}}[{a}_J].
\end{align}
\end{lemma}
With Lemma~\ref{prop_cl}, we can define a function $h$ that maps $({\mathrm{cl}}_{I_1}, {\mathrm{cl}}_{I_2}, \cdots, {\mathrm{cl}}_{I_m})$ to the corresponding $(I, {a}_J)$-equivalence class for given $a_L\in \mathcal{A}^L$ and $a_J\in \mathcal{A}^J$.
For each $(I,{a}_J)$-equivalence class ${\mathrm{Cl}}[{a}_J]$, we define
\begin{align}\label{no_finer_eq_cl}
N\big({a}_{L}, {\mathrm{Cl}}[{a}_J]\big)=&
\# \Big\{ \big( {\mathrm{cl}}_{I_1}, {\mathrm{cl}}_{I_2}, \cdots, {\mathrm{cl}}_{I_m} \big):\
{\mathrm{cl}}_{I_l} \text{ is an $(I_l, {a}_{L}, {a}_J)$-equivalence class, }
l=1,2,\cdots,m,\nonumber\\
&\qquad \qquad \qquad \qquad \qquad \quad \textrm{and } h({\mathrm{cl}}_{I_1}, {\mathrm{cl}}_{I_2}, \cdots, {\mathrm{cl}}_{I_m})={\mathrm{Cl}}[{a}_J]\Big\}\nonumber\\
=&\# \Big\{ \big( {\mathrm{cl}}_{I_1}, {\mathrm{cl}}_{I_2}, \cdots, {\mathrm{cl}}_{I_m} \big):\
{\mathrm{cl}}_{I_l} \text{ is an $(I_l, {a}_{L}, {a}_J)$-equivalence class, }
l=1,2,\cdots,m,\nonumber\\
&\qquad \qquad \qquad \qquad \qquad \quad \textrm{and } \big\langle {\mathrm{cl}}_{I_1}, {\mathrm{cl}}_{I_2}, \cdots, {\mathrm{cl}}_{I_m}, {a}_{L} \big\rangle \subseteq {\mathrm{Cl}}[{a}_J]\Big\},
\end{align}
where ``$\#\{\cdot\}$'' denotes the cardinality of a set. Further, let
\begin{align}\label{equ:N_Cl}
N\big({\mathrm{Cl}}[{a}_J]\big)=\max_{a_{L}\in \mathcal{A}^{L}} N\big(a_{L}, {\mathrm{Cl}}[{a}_J]\big).
\end{align}
Note that $N\big(a_{L}, {\mathrm{Cl}}[{a}_J]\big)$ can be equal to $0$. On the other hand, $N\big({\mathrm{Cl}}[{a}_J]\big)$ is always positive, which is explained as follows. Note that ${\mathrm{Cl}}[{a}_J]$ is an $(I, a_J)$-equivalence class and hence non-empty. Therefore, there exists $b_I$ in $\mathcal{A}^{I}$ such that $b_I\in {\mathrm{Cl}}[{a}_J]$, and we write
$$b_I=({b}_{I_1}, {b}_{I_2}, \cdots, {b}_{I_m}, {b}_{L}),$$
where ${b}_{L}$ is equal to some $a_L\in \mathcal{A}^L$. For $l=1,2,\cdots,m$, since $\mathcal{A}^{I_l}$ is partitioned into $(I_l,a_L,a_J)$-equivalence classes, $b_{I_l}$ is in some ${\mathrm{cl}}_{I_l}[a_L,a_J]$, abbreviated as ${\mathrm{cl}}_{I_l}$. Also, since $b_I\in {\mathrm{Cl}}[{a}_J]$, we have $h({\mathrm{cl}}_{I_1}, {\mathrm{cl}}_{I_2}, \cdots, {\mathrm{cl}}_{I_m})={\mathrm{Cl}}[{a}_J]$ by Lemma~\ref{prop_cl}. Therefore, we see that $N\big(a_{L}, {\mathrm{Cl}}[{a}_J]\big)\geq 1$, and hence $N\big({\mathrm{Cl}}[{a}_J]\big)\geq 1$ by \eqref{equ:N_Cl}.
Next, we consider the summation of $N\big({\mathrm{Cl}}[{a}_J]\big)$ over all the $(I,{a}_J)$-equivalence classes, i.e.,
\begin{align}\label{eq:sum_N_Cl}
\sum_{\text{all }{\mathrm{Cl}}[{a}_J]} N\big({\mathrm{Cl}}[{a}_J]\big).
\end{align}
Let
\begin{align}\label{def:{a}_J_star0}
{a}_J^* \in \arg\max_{{a}_J\in \mathcal{A}^{J}} \sum_{\text{all }{\mathrm{Cl}}[{a}_J]} N\big({\mathrm{Cl}}[{a}_J]\big),
\end{align}
i.e.,
\begin{align}\label{def:{a}_J_star}
\sum_{\text{all }{\mathrm{Cl}}[{a}^*_J]} N\big({\mathrm{Cl}}[{a}^*_J]\big)=\max_{{a}_J\in \mathcal{A}^{J}} \sum_{\text{all }{\mathrm{Cl}}[{a}_J]} N\big({\mathrm{Cl}}[{a}_J]\big),
\end{align}
and further
\begin{align}\label{n_C_Parti_1st}
n_C(\mathcal{P}_C) = \sum_{\text{all }{\mathrm{Cl}}[{a}_J^*]}
N\big({\mathrm{Cl}}[{a}_J^*]\big).
\end{align}
Denote by $n_{C,f}$ the maximum of $n_C(\mathcal{P}_C)$ over all strong partitions $\mathcal{P}_C$ of $C$,
i.e.,
\begin{align}\label{n_C_f_1st}
n_{C,f}=\max_{\text{all strong partitions }\mathcal{P}_C \text{ of } C} n_C(\mathcal{P}_C).
\end{align}
Based on the above,
we give in the following theorem our improved upper bound which is applicable to arbitrary networks and target functions.
\begin{thm}
\label{thm:upper_bound}
Let $\mathcal{N}$ be a network and $f$ be a target function. Then
\begin{align}\label{equ:improved_upper_bound}
\mathcal{C}(\mathcal{N},f)\leq \min_{C\in\Lambda(\mathcal{N})}\dfrac{|C|}{\log_{|\mathcal{A}|}n_{C,f}}.
\end{align}
\end{thm}
For a cut set $C\in \Lambda(\mathcal{N})$, if we consider its trivial strong partition $\mathcal{P}_C=\{ C \}$, then we have $m=1$ and $I_1=I$ ($L=\emptyset$ and $a_L$ is an empty vector). Following the discussion in the second paragraph below Definition~\ref{defn:Par_Equ_Relation_ScalarCase}, we see from \eqref{no_finer_eq_cl} that $N\big({a}_{L}, {\mathrm{Cl}}[{a}_J]\big)=1$, and from \eqref{equ:N_Cl} that
$$N\big({\mathrm{Cl}}[{a}_J]\big)=N\big({a}_{L}, {\mathrm{Cl}}[{a}_J]\big)=1.$$
Then the summation in \eqref{eq:sum_N_Cl} becomes $W_{C,f}^{(a_J)}$ and the right hand side of \eqref{def:{a}_J_star} becomes $w_{C,f}$. Finally, it follows from \eqref{n_C_Parti_1st} and \eqref{n_C_f_1st} that $n_{C}(\{C\})=w_{C,f}\leq n_{C,f}$. Hence, the upper bound in \eqref{equ:improved_upper_bound} is an enhancement of the one in \eqref{eq:2}.
\subsection{Evaluation of the Improved Upper Bound}
An important step toward evaluating the upper bound \eqref{equ:improved_upper_bound} in Theorem~\ref{thm:upper_bound} is to calculate the value of $N\big(a_L, {\mathrm{Cl}}[{a}_J]\big)$ in \eqref{no_finer_eq_cl} for $a_{L} \in \mathcal{A}^L$ and an $(I,{a}_J)$-equivalence class ${\mathrm{Cl}}[{a}_J]$. In this subsection, we introduce a {\em multi-dimensional array} to facilitate this calculation.
For $l=1,2,\cdots, m$, let ${\mathrm{cl}}_{I_l}$ be an $(I_l,a_L,
{a}_J)$-equivalence class. By Lemma \ref{prop_cl},
$({\mathrm{cl}}_{I_1},{\mathrm{cl}}_{I_2},\cdots,{\mathrm{cl}}_{I_m})$ uniquely
determines an $(I, {a}_J)$-equivalence class ${\mathrm{Cl}}[{a}_J]$ through the function $h$, namely, $$h({\mathrm{cl}}_{I_1},{\mathrm{cl}}_{I_2},\cdots,{\mathrm{cl}}_{I_m})={\mathrm{Cl}}[{a}_J].$$
We can then define the following $m$-dimensional array (when $m=2$, this array can be regarded as a matrix):
\begin{align}\label{equ:matrix_M}
M(a_L,{a}_J)=
\Big[h\big({\mathrm{cl}}_{I_1,i_1}[a_L,{a}_J], {\mathrm{cl}}_{I_2,i_2}[a_L,{a}_J], \cdots, {\mathrm{cl}}_{I_m,i_m}[a_L,{a}_J]\big)\Big]
_{
\substack{1\leq i_1 \leq V_{I_1}^{[a_L,{a}_J]}\\
1\leq i_2 \leq V_{I_2}^{[a_L,{a}_J]}\\
\vdots\quad\quad\ \\
1\leq i_m \leq V_{I_m}^{[a_L,{a}_J]}
}}
\end{align}
where ${\mathrm{cl}}_{I_l,i_l}[a_L,{a}_J]$, $1\leq i_l \leq V_{I_l}^{[a_L,{a}_J]}$ are all the $(I_l, a_L, {a}_J)$-equivalence classes partitioning $\mathcal{A}^{I_l}$ for $1\leq l \leq m$. We observe that $N\big(a_{L}, {\mathrm{Cl}}[{a}_J]\big)$ is simply the number of the entries equal to ${\mathrm{Cl}}[{a}_J]$ in the array $M(a_L, {a}_J)$.
We continue to use the setup in Example~\ref{eg:1} to illustrate the computation of the upper bound in Theorem~\ref{thm:upper_bound} by using the array $M(a_L,{a}_J)$. We will also see that our improved upper bound is tight for the network function computation problem in Example~\ref{eg:1}.
\begin{journalonly}
Let $k=1$ and then we obtain the corresponding results for the scalar case. Concretely, for the given source vectors ${a}_J\in \mathcal{A}^{J}$ and $a_L\in \mathcal{A}^{L}$, we have the following matrices:
\begin{align*}
M(a_L)=\Big[M_{i,j}(a_L)\Big]_{1\leq i \leq V_{I_1}(a_{L}, {a}_J),\ 1\leq j \leq V_{I_2}(a_{L}, {a}_J)}.
\end{align*}
Subsequently, we write
\begin{align}
\vec{a}_J&=({a}_J^{(1)}, {a}_J^{(2)}, \cdots, {a}_J^{(k)})^\top,\label{nota_{a}_J}\\
\vec{a}_{L}&=(a_{L}^{(1)}, a_{L}^{(2)}, \cdots, a_{L}^{(k)})^\top,\label{nota_a_L}
\end{align}
and the corresponding equivalence class ${\mathrm{Cl}}^{(k)}[\vec{a}_J]$ is written as
\begin{align}
{\mathrm{Cl}}^{(k)}[\vec{a}_J]=\big({\mathrm{Cl}}^{(1)}({a}_J^{(1)}),\ {\mathrm{Cl}}^{(1)}({a}_J^{(2)}),\ \cdots,\ {\mathrm{Cl}}^{(1)}({a}_J^{(k)})\big)^\top,\label{nota_Cl_J}
\end{align}
which is considered as a column vector constituted by $k$ scalar equivalence classes. For all $p$, $1\leq p \leq k$, one has the $k$ corresponding matrices as follows
\begin{align*}
M(a_L^{(p)})=\Big[M_{i,j}(a_L^{(p)})\Big]_{1\leq i \leq V_{I_1}(a_{L}^{(p)}, c_{J,p}),\ 1\leq j \leq V_{I_2}(a_{L}^{(p)}, c_{J,p})},
\end{align*}
and the value $N(a_{L}^{(p)}, {\mathrm{Cl}}^{(1)}(c_{J,p}))$ is the number of the entries $M_{i,j}(a_L^{(p)})$ equal to ${\mathrm{Cl}}^{(1)}(c_{J,p})$ in the matrix $M(a_L^{(p)})$. Furthermore, it is not difficult to see that
\begin{align}\label{prod}
N(\vec{a}_{L}, {\mathrm{Cl}}^{(k)}[\vec{a}_J])=\prod_{p=1}^{k} N(a_{L}^{(p)}, {\mathrm{Cl}}^{(1)}(c_{J,p})).
\end{align}
Therefore, we derive that for $1\leq i \leq W_{C,f}^{(\vec{a}_J)}$,
\begin{align}\label{ineq7}
(\ref{ineq6})=N(\vec{a}_{L,(i)}, {\mathrm{Cl}}^{(k)}_i(\vec{a}_J))=\prod_{p=1}^{k} N(a_{L,(i)}^{(p)}, {\mathrm{Cl}}^{(1)}_i(c_{J,p})).
\end{align}
To clarify the proof, before proceeding further we review all the inequalities from (\ref{ineq1}) to (\ref{ineq7}) as follows:
\begin{align*}
&|\mathcal{A}|^{n|C|}\\
\geq &\#\{ g_C(\vec{x}_I, \vec{x}_J):\ \text{\ all } \vec{x}_I\in \mathcal{A}^{k\times I}, \vec{x}_J\in \mathcal{A}^{k\times J}\} \tag*{// by (\ref{ineq1})}\\
\geq & \#\{ g_C(\vec{x}_I, \vec{a}_J): \text{ all } \vec{x}_I\in \mathcal{A}^{k\times I}\} \tag*{ // for every $\vec{a}_J\in \mathcal{A}^{k\times J}$ by (\ref{ineq2})}\\
=&\sum_{i=1}^{W_{C,f}^{(\vec{a}_J)}}\#\{ g_C(\vec{b}_I, \vec{a}_J):\text{ all } \vec{b}_I\in {\mathrm{Cl}}^{(k)}_i(\vec{a}_J)\}\tag*{// by (\ref{ineq3})}\\
=&\sum_{i=1}^{W_{C,f}^{(\vec{a}_J)}}\#\{ \big(g_{C_1}(\vec{a}_{I_1}, \vec{a}_{L}, \vec{a}_J), g_{C_2}(\vec{a}_{I_2}, \vec{a}_{L}, \vec{a}_J) \big): \\
& \qquad\qquad\qquad\quad \text{ all } \vec{b}_I=(\vec{a}_{I_1}, \vec{a}_{L}, \vec{a}_{I_2})\in {\mathrm{Cl}}^{(k)}_i(\vec{a}_J)\} \tag*{// by (\ref{ineq4})}\\
=&\sum_{i=1}^{W_{C,f}^{(\vec{a}_J)}}\#\{ \big(g_{C_1}(\vec{a}_{I_1}, \vec{a}_{L}), g_{C_2}(\vec{a}_{I_2}, \vec{a}_{L}) \big): \\
&\qquad\qquad\qquad\quad \text{ all } \vec{b}_I=(\vec{a}_{I_1}, \vec{a}_{L}, \vec{a}_{I_2})\in {\mathrm{Cl}}^{(k)}_i(\vec{a}_J)\} \tag*{// just for notation simplicity by (\ref{ineq5}) }\\
\geq &\sum_{i=1}^{W_{C,f}^{(\vec{a}_J)}}\#\mathop{\cup}_{\text{all }\vec{a}_{L}\in \mathcal{A}^{k\times L}} \{ \big(g_{C_1}(\vec{a}_{I_1}, \vec{a}_{L}), g_{C_2}(\vec{a}_{I_2}, \vec{a}_{L}) \big):\ \text{all } \\
& \vec{a}_{I_1}\in \mathcal{A}^{k\times I_1}, \vec{a}_{I_2}\in \mathcal{A}^{k\times I_2} \textrm{ s.t. } (\vec{a}_{I_1}, \vec{a}_{L}, \vec{a}_{I_2})\in {\mathrm{Cl}}^{(k)}_i(\vec{a}_J)\}\nonumber\\
\geq & \sum_{i=1}^{W_{C,f}^{(\vec{a}_J)}}\#\{ \big(g_{C_1}(\vec{a}_{I_1}, \vec{a}_{L,(i)}), g_{C_2}(\vec{a}_{I_2}, \vec{a}_{L,(i)}) \big):\ \\
&\quad \vec{a}_{I_l}\in \mathcal{A}^{k\times I_l}, l=1,2, \textrm{ s.t. } (\vec{a}_{I_1}, \vec{a}_{L,(i)}, \vec{a}_{I_2})\in {\mathrm{Cl}}^{(k)}_i(\vec{a}_J)\}\tag*{ // for all possible $\vec{a}_{L,(i)}$ by (\ref{ineq_for_every})}\\
\geq & \sum_{i=1}^{W_{C,f}^{(\vec{a}_J)}} \#\{ \big({\mathrm{cl}}^{(k)}_{I_1}(\vec{a}_{L,(i)},\vec{a}_J),\ {\mathrm{cl}}^{(k)}_{I_2}(\vec{a}_{L,(i)}, \vec{a}_J)\big): \\
&\qquad \text{ all } {\mathrm{cl}}^{(k)}_{I_1}(\vec{a}_{L,(i)},\vec{a}_J), \text{ and } {\mathrm{cl}}^{(k)}_{I_2}(\vec{a}_{L,(i)},\vec{a}_J), \textrm{ s.t. } \\
& \big({\mathrm{cl}}^{(k)}_{I_1}(\vec{a}_{L,(i)},\vec{a}_J),\ \vec{a}_{L,(i)},\ {\mathrm{cl}}^{(k)}_{I_2}(\vec{a}_{L,(i)}, \vec{a}_J)\big)\subseteq {\mathrm{Cl}}^{(k)}_i(\vec{a}_J)\}\tag*{}\\
= & \sum_{i=1}^{W_{C,f}^{(\vec{a}_J)}} N(\vec{a}_{L,(i)}, {\mathrm{Cl}}^{(k)}_i(\vec{a}_J)) \tag*{// by (\ref{ineq6})}\\
= & \sum_{i=1}^{W_{C,f}^{(\vec{a}_J)}} \prod_{p=1}^{k} N(a_{L,(i)}^{(p)}, {\mathrm{Cl}}^{(1)}_i(c_{J,p})), \tag*{// by (\ref{ineq7})}
\end{align*}
where recall (\ref{nota_{a}_J}), (\ref{nota_a_L}), and (\ref{nota_Cl_J}) for the meanings of notation $a_{L,(i)}^{(p)}$, $c_{J,p}$ and ${\mathrm{Cl}}^{(1)}_i(c_{J,p})$. Consequently, we summarize that
\begin{align}\label{ineq_final}
|\mathcal{A}|^{n|C|}\geq \sum_{i=1}^{W_{C,f}^{(\vec{a}_J)}} \prod_{p=1}^{k} N(a_{L,(i)}^{(p)}, {\mathrm{Cl}}^{(1)}_i(c_{J,p})),
\end{align}
for arbitrary ${a}_J^{(p)}\in \mathcal{A}^{J}$, and arbitrary $a_{L,(i)}^{(p)}\in \mathcal{A}^{L}$ corresponding to the $(I, J, {a}_J^{(p)})$-equivalence class ${\mathrm{Cl}}^{(1)}_i(c_{J,p})$, $1\leq p \leq k$, $1 \leq i \leq W_{C,f}^{(\vec{a}_J)}$.
On the other hand, notice that for any ${a}_J\in \mathcal{A}^{J}$, the $(I,{a}_J)$-equivalence relation induces different equivalence classes ${\mathrm{Cl}}^{(1)}_j({a}_J)$, $1 \leq j \leq W_{C,f}^{({a}_J)}$, constituting a partition of $\mathcal{A}^{I}$. Then for every equivalence class ${\mathrm{Cl}}^{(1)}_j({a}_J)$, let
\begin{align*}
a_L^*({\mathrm{Cl}}^{(1)}_j({a}_J))=\arg\max_{a_L\in \mathcal{A}^{L}}N(a_{L}, {\mathrm{Cl}}^{(1)}_j({a}_J)).
\end{align*}
Let further
\begin{align*}
{a}_J^*=\arg\max_{{a}_J\in \mathcal{A}^{J}} \sum_{j=1}^{W_{C,f}^{({a}_J)}} N(a_L^*({\mathrm{Cl}}^{(1)}_j({a}_J)), {\mathrm{Cl}}^{(1)}_j({a}_J)),
\end{align*}
and thus we can obtain
\begin{align*}
&\max_{{a}_J\in \mathcal{A}^{J}} \max_{a_{L,j}\in \mathcal{A}^{L}:\atop 1\leq j \leq W_{C,f}^{({a}_J)}} \sum_{j=1}^{W_{C,f}^{({a}_J)}} N(a_{L,j}, {\mathrm{Cl}}^{(1)}_j({a}_J))\\
=&\sum_{j=1}^{W_{C,f}^{({a}_J^*)}} N(a_{L}^*({\mathrm{Cl}}^{(1)}_j({a}_J^*)), {\mathrm{Cl}}^{(1)}_j({a}_J^*))\\
\triangleq &\ n_C(C_1,C_2).
\end{align*}
Together with (\ref{ineq_final}), we deduce that
\begin{align}\label{ineq_k_n}
|\mathcal{A}|^{n|C|}\geq & \prod_{p=1}^{k} \sum_{j=1}^{W_{C,f}^{({a}_J^*)}} N(a_{L}^*({\mathrm{Cl}}^{(1)}_j({a}_J^*)), {\mathrm{Cl}}^{(1)}_j({a}_J^*))\nonumber\\
= & \Big[ \sum_{j=1}^{W_{C,f}^{({a}_J^*)}} N(a_{L}^*({\mathrm{Cl}}^{(1)}_j({a}_J^*)), {\mathrm{Cl}}^{(1)}_j({a}_J^*))\Big]^k \nonumber\\
= & \big[ n_C(C_1,C_2) \big]^k.
\end{align}
Furthermore, notice that the above inequalities hold for all partitions $C=C_1\cup C_2$ with $I_{C_i}\neq \emptyset$, $i=1,2$. Thus, we choose the partition such that $n_C(C_1,C_2)$ achieves the maximum and denote it by $C=C_1^*\cup C_2^*$, i.e.,
$$n_C(C_1^*, C_2^*)=\max_{C=C_1\cup C_2 \text{ with} \atop I_{C_l}\neq \emptyset,\ l=1,2} n_C(C_1,C_2),$$
further denoted by $n_{C,f}$ because this maximum value $n_C(C_1^*, C_2^*)$ only depends on the cut set $C$ for the given $f$.
Then by (\ref{ineq_k_n}), we can show that
\begin{align*}
|\mathcal{A}|^{n|C|}\geq \big[ n_C(C_1^*,C_2^*) \big]^k=n_{C,f}^k,
\end{align*}
and equivalently,
\begin{align}\label{ineq_upp_bound_k_n}
\frac{k}{n}\leq \dfrac{|C|}{\log_{|\mathcal{A}|}n_{C,f}}.
\end{align}
Finally, the above inequality (\ref{ineq_upp_bound_k_n}) is satisfied for all cut sets $C$ with $I_C \neq \emptyset$, that is, for all $C\in \Lambda(\mathcal{N})$. Therefore, we obtain the upper bound
\begin{align*}
\frac{k}{n}\leq \min_{C\in\Lambda(\mathcal{N})}\dfrac{|C|}{\log_{|\mathcal{A}|}n_{C,f}}.
\end{align*}
Now, we can give the main result below.
\begin{thm}
\label{thm:upper_bound}
Let $\mathcal{N}=(G, S, \rho)$ be a network and $f$ be a target function. Then
\begin{align*}
\mathcal{C}(\mathcal{N},f)\leq \min_{C\in\Lambda(\mathcal{N})}\dfrac{|C|}{\log_{|\mathcal{A}|}n_{C,f}}.
\end{align*}
\end{thm}
The upper bound herein is better than the previous one in (\ref{eq:2}), since $n_{C,f}\geq w_{C,f}$ always holds for every $C\in \Lambda(\mathcal{N})$. Subsequently, we continue using Example~\ref{eg:1} to illustrate this bound and show that it is tight for this example, i.e., it gives the exact computing capacity.
\end{journalonly}
\begin{example}\label{eg:2}
For the network function computation problem $(\mathcal{N},f)$ depicted in Fig.~\ref{fig:1}, consider the cut set $C=\{e_5, e_6\}$, and let $I=I_C$ and $J=J_C$. Then $I=S$, $J=\emptyset$, and $a_J$ is an empty vector. Recall in Example~\ref{eg:1} that all the $(I, {a}_J)$-equivalence classes are
\begin{align*}
&{\mathrm{Cl}}_1= \{(0,0,0)\}, \quad {\mathrm{Cl}}_2=\{(0,0,1),(0,1,0),(1,0,0) \}, \\
&{\mathrm{Cl}}_3= \{(0,1,1),(1,0,1),(1,1,0)\}, \quad {\mathrm{Cl}}_4=\{(1,1,1)\}.
\end{align*}
Furthermore, the only nontrivial (strong) partition of $C$ is $\mathcal{P}_C=\{C_1=\{e_5\},
C_2=\{e_6\}\}$. Let $I_1= I_{C_1}=\{\sigma_1\}$,
$I_2= I_{C_2}=\{\sigma_3\}$ and accordingly $L = I_C\setminus (I_1\cup I_2)=\{ \sigma_2 \}$.
When $\sigma_2$ generates $0$ (i.e., $a_L=0$), since $(0,0,0)\in{\mathrm{Cl}}_1$ and $(1,0,0)\in
{\mathrm{Cl}}_2$, i.e., $0$ and $1$ in $\mathcal{A}^{I_1}=\{0,1\}$ are not $(I_1, a_L=0, a_J)$-equivalent, $\mathcal{A}^{I_1}$ is partitioned into two $(I_1,a_L=0, {a}_J)$-equivalence classes
$${\mathrm{cl}}_{I_1,1}[0]=\{0\} \quad \text{ and }\quad {\mathrm{cl}}_{I_1,2}[0]=\{1\}.$$
Here, we have simplified ${\mathrm{cl}}_{I_1,1}[0,a_J]$ to ${\mathrm{cl}}_{I_1,1}[0]$ since $a_J$ is an empty vector, and similarly for the other classes. Symmetrically, since $(0,0,0)\in{\mathrm{Cl}}_1$ and $(0,0,1)\in {\mathrm{Cl}}_2$, $\mathcal{A}^{I_2}=\{0,1\}$ is also partitioned
into two $(I_2, a_L=0, {a}_J)$-equivalence classes
$${\mathrm{cl}}_{I_2,1}[0]=\{0\} \quad \text{ and } \quad {\mathrm{cl}}_{I_2,2}[0]=\{1\}.$$
Similarly, when $\sigma_2$ generates $1$ (i.e., $a_L=1$), $\mathcal{A}^{I_1}$ is partitioned into two
$(I_1, a_L=1, {a}_J)$-equivalence classes
$${\mathrm{cl}}_{I_1,1}[1]=\{0\} \quad \text{ and } \quad {\mathrm{cl}}_{I_1,2}[1]=\{1\},$$
and $\mathcal{A}^{I_2}$ is partitioned into two
$(I_2, a_L=1, {a}_J)$-equivalence classes
$${\mathrm{cl}}_{I_2,1}[1]=\{0\} \quad \text{ and } \quad {\mathrm{cl}}_{I_2,2}[1]=\{1\}.$$
Denote the matrices $M(a_L=0, {a}_J)$ and $M(a_L=1, {a}_J)$ respectively by $M(0)$ and $M(1)$ for simplicity. By \eqref{equ:matrix_M}, we have
\begin{align}\label{matrix_EX2_1}
M(0)=\bordermatrix[{[]}]{%
&\text{\footnotesize{${\mathrm{cl}}_{I_2,1}[0]$}}&\text{\footnotesize{${\mathrm{cl}}_{I_2,2}[0]$}} \cr
\text{\footnotesize{${\mathrm{cl}}_{I_1,1}[0]$}}& {\mathrm{Cl}}_1 & {\mathrm{Cl}}_2 \cr
\text{\footnotesize{${\mathrm{cl}}_{I_1,2}[0]$}}& {\mathrm{Cl}}_2 & {\mathrm{Cl}}_3 \cr
},
\end{align}
and
\begin{align}\label{matrix_EX2_2}
M(1)=\bordermatrix[{[]}]{%
&\text{\footnotesize{${\mathrm{cl}}_{I_2,1}[1]$}}&\text{\footnotesize{${\mathrm{cl}}_{I_2,2}[1]$}} \cr
\text{\footnotesize{${\mathrm{cl}}_{I_1,1}[1]$}}& {\mathrm{Cl}}_2 & {\mathrm{Cl}}_3 \cr
\text{\footnotesize{${\mathrm{cl}}_{I_1,2}[1]$}}& {\mathrm{Cl}}_3 & {\mathrm{Cl}}_4 \cr
}.
\end{align}
As an explanation, for the $\big({\mathrm{cl}}_{I_1,1}[0], {\mathrm{cl}}_{I_2,1}[0]\big)$-th entry of $M(0)$, since ${\mathrm{cl}}_{I_1,1}[0]=\{0\}$ and ${\mathrm{cl}}_{I_2,1}[0]=\{0\}$, and $a_L=0$, we have
$\big\langle {\mathrm{cl}}_{I_1,1}[0], {\mathrm{cl}}_{I_2,1}[0], a_L=0 \big\rangle \subseteq {\mathrm{Cl}}_1$, and so this entry is equal to ${\mathrm{Cl}}_1$.
From \eqref{matrix_EX2_1} and \eqref{matrix_EX2_2}, we see that
\begin{align*}
N(0,{\mathrm{Cl}}_1)=1,\ N(0,{\mathrm{Cl}}_2)=2,\ N(0,{\mathrm{Cl}}_3)=1,\ N(0,{\mathrm{Cl}}_4)=0,\\
N(1,{\mathrm{Cl}}_1)=0,\ N(1,{\mathrm{Cl}}_2)=1,\ N(1,{\mathrm{Cl}}_3)=2,\ N(1,{\mathrm{Cl}}_4)=1,
\end{align*}
and further $N({\mathrm{Cl}}_1)=1$, $N({\mathrm{Cl}}_2)=2$, $N({\mathrm{Cl}}_3)=2$, and $N({\mathrm{Cl}}_4)=1$ by \eqref{equ:N_Cl}. Since $a_J$ is an empty vector, $\forall~a_J\in \mathcal{A}^J$, it follows from \eqref{n_C_Parti_1st} that
$$n_C(\mathcal{P}_C)=N({\mathrm{Cl}}_1)+N({\mathrm{Cl}}_2)+N({\mathrm{Cl}}_3)+N({\mathrm{Cl}}_4)=6.$$
By Theorem~\ref{thm:upper_bound}, we have
\begin{align*}
\mathcal{C}(\mathcal{N},f)\leq \frac{|C|}{\log_{|\mathcal{A}|} n_{C,f}}=\frac{|C|}{\log_{|\mathcal{A}|} n_C(\mathcal{P}_C)}=\frac{2}{\log_2 6}=\frac{2}{1+\log_2 3}.
\end{align*}
On the other hand, the network code designed in \cite{Appuswamy11} achieves the rate $2/(1+\log_2 3)$. Hence, the upper bound in Theorem \ref{thm:upper_bound} is tight for the network function computation problem $(\mathcal{N}, f)$.
\end{example}
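
The calculation in Example~\ref{eg:2} can also be reproduced mechanically. The following Python sketch (ours; it hard-codes the arithmetic sum, the strong partition $\{C_1, C_2\}$, and the identification of an $(I,a_J)$-equivalence class with its function value) recomputes $N({\mathrm{Cl}}_i)$ for $i=1,\ldots,4$ and the resulting value $n_C(\mathcal{P}_C)=6$.
\begin{verbatim}
# A minimal sketch (ours, not from the paper) reproducing Example 2:
# f(x1,x2,x3) = x1+x2+x3 over A = {0,1}, cut set C = {e5,e6} with strong
# partition {C1,C2}, so I_1 = {sigma_1}, I_2 = {sigma_3}, L = {sigma_2}, J empty.
from itertools import product
from math import log2

A = [0, 1]
f = lambda x1, x2, x3: x1 + x2 + x3      # the class Cl of an input is its f-value

def local_classes(a_L, block):
    # (I_l, a_L, a_J)-equivalence classes of A^{I_l} for l = block (1 or 2):
    # b ~ b' iff f agrees for every choice of the symbol in the other block.
    def sig(b):
        return tuple(f(b, a_L, c) if block == 1 else f(c, a_L, b) for c in A)
    groups = {}
    for b in A:
        groups.setdefault(sig(b), []).append(b)
    return list(groups.values())

def N(cl_value):
    # N(Cl) = max over a_L of the number of class pairs mapped to Cl by h
    return max(sum(1 for c1, c2 in product(local_classes(a_L, 1),
                                           local_classes(a_L, 2))
                   if f(c1[0], a_L, c2[0]) == cl_value)
               for a_L in A)

n_C = sum(N(v) for v in (0, 1, 2, 3))    # N(Cl_1..4) = 1, 2, 2, 1
print(n_C, 2 / log2(n_C))                # 6 and 2/(1 + log2 3) ~ 0.774
\end{verbatim}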
\subsection{Computing a Linear Function over a Network}\label{subsec:computing_linear_func}
In \cite{Appuswamy14}, Appuswamy and Franceschetti considered the achievability of rate $1$ for computing a {\em linear function} over a network, where a linear function is defined as follows. A target function $f:\mathcal{A}^s\to \mathcal{O}$ is {\em linear}, if
\begin{enumerate}
\item the alphabet $\mathcal{A}$ is a finite field $\mathbb{F}_q$, where $q$ is a prime power;
\item the alphabet $\mathcal{O}$ is $\mathbb{F}_q^{l}$, where $l$ is a positive integer;
\item there exists an $l\times s$ matrix $T$ over $\mathcal{A}$ such that $f(x_S)=T\cdot x_S^{\top}$, $\forall~x_S\in \mathcal{A}^S$, where `$\top$' denotes matrix transposition.
\end{enumerate}
Without loss of generality we assume that $T$ is full-rank over $\mathbb{F}_q$ and has no all-zero columns. Hence, we can regard the size of $T$ as $l\times s$, where $1\leq l \leq s$. Note that if $l=s$, i.e., $T$ is a full-rank matrix of size $s\times s$, this network function computation problem reduces to a network coding problem.
Let $A$ and $B$ be two matrices in $\mathbb{F}_q^{l\times s}$. We write $A\sim B$ if there exists an $l\times l$ invertible matrix $Q$ over $\mathbb{F}_q$ and an $s\times s$ permutation matrix $\Pi$ such that $Q\cdot A\cdot \Pi=B$. Now, we consider a special linear target function $f$ corresponding to a matrix $T\in \mathbb{F}_q^{l\times s}$ with $T\sim (I~P)$, where $I$ is an $l\times l$ identity matrix and at least one element of $P\in\mathbb{F}_q^{l\times (s-l)}$ is zero. For this target function $f$, denote the columns of the matrix $T$ by $T_1,T_2,\cdots,T_s$, i.e., $T=(T_1~T_2~\cdots~T_s)$. Then the following so-called {\em min-cut condition} was given in \cite{Appuswamy14}:
\begin{align}\label{mincut_T}
\textrm{min-cut}(\mathcal{N}, T)\triangleq \min_{C\in \Lambda(\mathcal{N})}\dfrac{|C|}{{\mathrm{Rank}}\big(\big[T_i: \sigma_i\in I_C\big]\big)}=1,
\end{align}
which is an important necessary condition used throughout \cite{Appuswamy14} for determining the rate-$1$ achievability of a linear function over a network. Theorem~\Rmnum{3}.5 in \cite{Appuswamy14}, one of the main results therein, shows that there always exists a network $\mathcal{N}$ such that, even if the min-cut condition \eqref{mincut_T} is satisfied, there does not exist a rate-$1$ linear network code for computing $f$ over $\mathcal{N}$, where this rate-$1$ linear network code is allowed to be over any extension field of $\mathbb{F}_q$. More specifically, we consider a linear network code over an extension field $\mathbb{F}_{q^n}$ of $\mathbb{F}_q$ ($n$ is a positive integer), and a source matrix $\vec{x}_S\in \mathbb{F}_q^{n\times S}$ can be regarded as an $s$-dimensional row vector in $\mathbb{F}_{q^n}^{S}$.
\begin{figure}[!t]
\centering
\begin{minipage}[b]{0.5\textwidth}
\centering
\begin{tikzpicture}[x=0.7cm]
\draw (0,0) node[vertex] (2) [label=above:$\sigma_2$] {};
\draw (-2,-1.5) node[vertex] (1') [label=left:] {};
\draw (2,-1.5) node[vertex] (2') [label=right:] {};
\draw (0,-3) node[vertex] (0) [label=below:$\rho$] {};
\draw[->,>=latex] (2) -- (1') node[pos=0.3, left=0mm] {$e_2$};
\draw[->,>=latex] (2) -- (2') node[pos=0.3, right=0mm] {$e_3$};
\draw[->,>=latex] (1') -- (0) node[pos=0.3, right=0mm] {$e_5$};
\draw[->,>=latex] (2') -- (0) node[pos=0.3, left=0mm] {$e_6$};
\draw node[vertex,label=above:$\sigma_1$] at (-4,0) (1) {};
\draw[->,>=latex] (1) -- (1') node[pos=0.3, right=0mm] {$e_1$};
\draw node[vertex,label=above:$\sigma_3$] at (4,0) (3) {};
\draw[->,>=latex] (3) -- (2') node[pos=0.3, left=0mm] {$e_4$};
\draw[->,>=latex] (2) -- (1') node[pos=0.7, right=0mm] {$g_2$};
\draw[->,>=latex] (2) -- (2') node[pos=0.7, left=0mm] {$g_3$};
\draw[->,>=latex] (1') -- (0) node[pos=0.7, left=0mm] {$g_5$};
\draw[->,>=latex] (2') -- (0) node[pos=0.7, right=0mm] {$g_6$};
\draw[->,>=latex] (1) -- (1') node[pos=0.7, left=0mm]{$g_1$};
\draw[->,>=latex] (3) -- (2') node[pos=0.7, right=0mm] {$g_4$};
\end{tikzpicture}
\vspace{5mm}
\caption{The problem $(\widehat{\mathcal{N}}, \widehat{T})$.\newline\newline}
\label{fig:vector_fig}
\end{minipage}%
\begin{minipage}[b]{0.5\textwidth}
\centering
\begin{tabular}{p{4mm}p{2cm}p{0mm}p{4mm}p{2cm}}
\hline
\specialrule{0em}{2pt}{2pt}
$\sigma_1$: & $(x_{1,1}, x_{1,2})$ & & $\sigma_2$: & $(x_{2,1}, x_{2,2})$ \\
\specialrule{0em}{2pt}{2pt}
$\sigma_3$: & $(x_{3,1}, x_{3,2})$ & & \\
\specialrule{0em}{2pt}{2pt}
$g_1$: & ${\scriptsize \begin{bmatrix} x_{1,1} \\ x_{1,2} \end{bmatrix}}$ & & $g_4$: & ${\scriptsize \begin{bmatrix} x_{3,1} \\ x_{3,2} \end{bmatrix}}$\\
\specialrule{0em}{2pt}{2pt}
$g_2$: & $x_{2,1}$ & & $g_3$: & $x_{2,2}$\\
\specialrule{0em}{2pt}{2pt}
$g_5$: & ${\scriptsize \begin{bmatrix} x_{1,1} \\ x_{1,2} \\ x_{2,1} \end{bmatrix}}$ & & $g_6$: & ${\scriptsize \begin{bmatrix} x_{3,1} \\ x_{3,2} \\ x_{2,2} \end{bmatrix}}$\\
\specialrule{0em}{2pt}{2pt}
\hline
\end{tabular}
\caption{A trivial rate-$\frac{2}{3}$ network code $\{g_i:1\leq i \leq 6\}$, where the linear function $\hat{f}$ can be computed at the sink node $\rho$ from its inputs.}
\label{fig:vector_scheme}
\end{minipage}
\end{figure}
To prove this result, \cite{Appuswamy14} restricts attention to the rate-$1$ achievability of the linear function $\hat{f}$ corresponding to the matrix
\begin{align}\label{eq:T1}
\widehat{T}=\begin{pmatrix}
1 & 0 & \gamma \\
0 & 1 & 0 \\
\end{pmatrix}
, \quad \gamma\neq 0,
\end{align}
over $\mathbb{F}_q$ on the network $\widehat{\mathcal{N}}$ as shown in Fig.~\ref{fig:vector_fig}. Further, this specific network function computation problem $(\widehat{\mathcal{N}}, \widehat{T})$ is used as a building block to establish a general network function computation problem $(\mathcal{N}, T)$. Then, it is proved that the existence of a rate-$1$ linear network code for computing $T$ over $\mathcal{N}$ implies the existence of a rate-$1$ linear network code for computing $\widehat{T}$ over $\widehat{\mathcal{N}}$. Equivalently, $(\mathcal{N}, T)$ is not rate-$1$ achievable by a linear network code provided that $(\widehat{\mathcal{N}}, \widehat{T})$ is not rate-$1$ achievable by a linear network code. Hence, the key here is to prove that $(\widehat{\mathcal{N}}, \widehat{T})$ is not rate-$1$ achievable by a linear network code (i.e., \cite[Lemma~\Rmnum{3}.4]{Appuswamy14}).
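
As a side remark (a check of ours, not from \cite{Appuswamy14}), the min-cut condition \eqref{mincut_T} itself does not rule out rate $1$ for $(\widehat{\mathcal{N}}, \widehat{T})$: the following Python sketch evaluates $\textrm{min-cut}(\widehat{\mathcal{N}}, \widehat{T})$ for the instance $q=2$, $\gamma=1$ and obtains $1$, with node labels of our own and edges following Fig.~\ref{fig:vector_fig}.
\begin{verbatim}
# A quick check (ours, not from the paper): the min-cut condition for the
# problem (N-hat, T-hat), instantiated with q = 2 and gamma = 1.
from itertools import combinations

edges = {'e1': ('s1', 'u'), 'e2': ('s2', 'u'), 'e3': ('s2', 'v'),
         'e4': ('s3', 'v'), 'e5': ('u', 'r'),  'e6': ('v', 'r')}
sources, sink = ['s1', 's2', 's3'], 'r'
T = {'s1': (1, 0), 's2': (0, 1), 's3': (1, 0)}   # columns of T-hat, gamma = 1

def reaches(remaining, u, v):
    frontier, seen = [u], {u}
    while frontier:
        x = frontier.pop()
        if x == v:
            return True
        for t, h in remaining:
            if t == x and h not in seen:
                seen.add(h)
                frontier.append(h)
    return False

def rank_gf2(cols):
    rows = [list(row) for row in zip(*cols)]     # 2 x |cols| matrix over GF(2)
    r = 0
    for j in range(len(cols)):
        pivot = next((i for i in range(r, len(rows)) if rows[i][j]), None)
        if pivot is None:
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][j]:
                rows[i] = [(a + b) % 2 for a, b in zip(rows[i], rows[r])]
        r += 1
    return r

min_cut = None
for size in range(1, len(edges) + 1):
    for C in combinations(edges, size):
        remaining = [edges[e] for e in edges if e not in C]
        I_C = [s for s in sources if not reaches(remaining, s, sink)]
        if I_C:                                  # C is a cut set
            ratio = size / rank_gf2([T[s] for s in I_C])
            min_cut = ratio if min_cut is None else min(min_cut, ratio)
print(min_cut)   # 1.0: the min-cut condition holds, while the capacity is 2/3
\end{verbatim}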
In \cite{Appuswamy14}, the proof that $(\widehat{\mathcal{N}}, \widehat{T})$ is not rate-$1$ achievable by a linear network code is complicated and relies on advanced algebraic tools. Specifically, by applying Hilbert's Nullstellensatz (theorem of zeros) together with the Gr\"{o}bner basis of an ideal generated by a subset of a polynomial ring, a necessary and sufficient condition for the existence of a rate-$1$ linear network code over the algebraic closure $\bar{\mathbb{F}}_q$ of $\mathbb{F}_q$ for computing a linear function $f$ over the field $\mathbb{F}_q$ on a network $\mathcal{N}$ was given (see Theorem~\Rmnum{2}.4 in \cite{Appuswamy14}). It was then proved that this condition is not satisfied for the network function computation problem $(\widehat{\mathcal{N}}, \widehat{T})$.
In contrast, by applying our upper bound in Theorem~\ref{thm:upper_bound}, we can easily prove that $\mathcal{C}(\widehat{\mathcal{N}}, \widehat{T})\leq 2/3$ (the upper bound on $\mathcal{C}(\widehat{\mathcal{N}}, \widehat{T})$ in \eqref{eq:2} is $1$), and in fact $\mathcal{C}(\widehat{\mathcal{N}}, \widehat{T})= 2/3$ (see Example~\ref{linear_vector_case} below). This not only implies that no rate-$1$ linear network codes exist for $(\widehat{\mathcal{N}}, \widehat{T})$ but also that no rate-$1$ network codes (linear or nonlinear) exist for $(\widehat{\mathcal{N}}, \widehat{T})$, which strengthens Lemma~\Rmnum{3}.4 in \cite{Appuswamy14}. This further implies that no rate-$1$ network codes, linear or nonlinear, exist for $(\mathcal{N},T)$, as stated in the next proposition.
\begin{prop}
Consider a linear target function $f$ corresponding to a matrix $T\in \mathbb{F}_q^{l\times s}$ with $T\sim (I~P)$ so that at least one element of $P\in\mathbb{F}_q^{l\times (s-l)}$ is zero. Then there exists a network $\mathcal{N}$ such that no rate-$1$ network codes (linear or nonlinear) exist for computing $f$ over $\mathcal{N}$.
\end{prop}
\begin{example}\label{linear_vector_case}
In Fig.~\ref{fig:vector_fig}, consider the cut set $C=\{e_5, e_6\}$, and let $I=I_C$ and $J=J_C$ with $I=S$ and $J=\emptyset$. Then, ${a}_J$ is an empty vector. For the linear target function $\hat{f}$ corresponding to the matrix $\widehat{T}$ in \eqref{eq:T1}, the $(I, {a}_J)$-equivalence classes are:
\begin{align}\label{Cl_alpha_beta}
{\mathrm{Cl}}_{\alpha, \beta}&=\big\{x_S=(x_1,x_2,x_3)\in \mathbb{F}_q^3:\ \widehat{T}\cdot x_S^{\top}=\big(\begin{smallmatrix}\alpha \\ \beta \end{smallmatrix}\big) \big\}\nonumber\\
&=\big\{x_S=(x_1,x_2,x_3)\in \mathbb{F}_q^3:\ x_1+\gamma x_3=\alpha\text{ and } x_2=\beta \big\}\nonumber\\
&=\big\{x_S=(x_1,\beta,x_3)\in \mathbb{F}_q^3:\ x_1+\gamma x_3=\alpha \big\}
\end{align}
for all pairs $(\alpha, \beta)\in \mathbb{F}_q\times \mathbb{F}_q$. Then the total number of $(I, {a}_J)$-equivalence classes is $q^2$.
Let $\mathcal{P}_C=\{C_1=\{e_5\}, C_2=\{e_6\}\}$, the only nontrivial (strong) partition of $C$, and further let $I_1= I_{C_1}=\{\sigma_1\}$, $I_2= I_{C_2}=\{\sigma_3\}$ and accordingly $L=I\setminus (I_1\cup I_2)=\{ \sigma_2 \}$. First, fix $a_L=x_2=\beta$. Since for any two distinct elements $\xi$ and $\eta$ in $\mathbb{F}_q^{I_1}=\mathbb{F}_q$,
\begin{align*}
\xi+\gamma x_3\neq \eta+\gamma x_3, \quad \forall\ x_3\in \mathbb{F}_q,
\end{align*}
every element in $\mathbb{F}_q$ ($=\mathbb{F}_q^{I_1}$) itself constitutes an $(I_1, a_L=\beta, {a}_J)$-equivalence class so that the total number of $(I_1, a_L=\beta, {a}_J)$-equivalence classes is $q$. Further, for any two distinct elements $\xi$ and $\eta$ in $\mathbb{F}_q^{I_2}=\mathbb{F}_q$, since $\gamma\neq 0$, we have
\begin{align*}
x_1+\gamma \xi \neq x_1+\gamma \eta, \quad \forall\ x_1\in \mathbb{F}_q.
\end{align*}
This implies that every element in $\mathbb{F}_q$ ($=\mathbb{F}_q^{I_2}$) itself constitutes an $(I_2, a_L=\beta, {a}_J)$-equivalence class and so the total number of $(I_2, a_L=\beta, {a}_J)$-equivalence classes is also $q$.
Furthermore, note that
\begin{align}\label{equ_x1x3}
\big|\big\{ (x_1,x_3)\in \mathbb{F}_q^2: x_1+\gamma x_3=\alpha \big\}\big|=q,\ \forall\ \alpha\in \mathbb{F}_q.
\end{align}
Therefore, from the above discussion, we obtain that for any pair $(\alpha, \beta)\in \mathbb{F}_q\times \mathbb{F}_q$,
\begin{align*}
N(a_L=\tau, {\mathrm{Cl}}_{\alpha, \beta})=
\begin{cases}
q, & \text{if } \tau=\beta,\\
0, & \text{otherwise;}
\end{cases}
\end{align*}
({\rm cf}.~\eqref{no_finer_eq_cl}) and consequently,
\begin{align*}
N({\mathrm{Cl}}_{\alpha, \beta})=\max_{\tau\in \mathbb{F}_q}N(a_L=\tau, {\mathrm{Cl}}_{\alpha, \beta})=q.
\end{align*}
Hence,
\begin{align*}
n_C(\mathcal{P}_C)=\sum_{{\rm all}\ {\mathrm{Cl}}_{\alpha, \beta}}N({\mathrm{Cl}}_{\alpha, \beta})=q^3.
\end{align*}
By Theorem~\ref{thm:upper_bound}, we have
\begin{align}\label{ineq:C_N1_T1_up_bound}
\mathcal{C}(\widehat{\mathcal{N}},\widehat{T})\leq \frac{|C|}{\log_{|\mathbb{F}_q|} n_{C,\widehat{T}}}=\frac{|C|}{\log_{|\mathbb{F}_q|} n_C(\mathcal{P}_C)}=\frac{2}{\log_q q^3}=\dfrac{2}{3}.
\end{align}
On the other hand, a trivial rate-$\frac{2}{3}$ linear network code for $(\widehat{\mathcal{N}}, \widehat{T})$ is given in Fig.~\ref{fig:vector_scheme}. Together with \eqref{ineq:C_N1_T1_up_bound}, we obtain $\mathcal{C}(\widehat{\mathcal{N}},\widehat{T})=2/3$.
\end{example}
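The counting in Example~\ref{linear_vector_case} can also be checked computationally. The following brute-force sketch (written in Python; the function and variable names are ours and purely illustrative) enumerates the $(I,{a}_J)$- and $(I_l,a_L,{a}_J)$-equivalence classes of $\hat{f}$ for a small field size and confirms that $n_C(\mathcal{P}_C)=q^3$, and hence that the resulting upper bound equals $2/3$.
\begin{verbatim}
from itertools import product
from math import log

q, gamma = 3, 1                     # illustrative choices: a small prime q and gamma != 0
F = range(q)

def fhat(x1, x2, x3):               # target function given by T-hat: (x1 + gamma*x3, x2)
    return ((x1 + gamma * x3) % q, x2 % q)

# With I = S and J empty, two inputs are (I, a_J)-equivalent iff fhat agrees on them,
# so every class Cl_{alpha,beta} is labelled by its function value (alpha, beta).
def label(x):
    return fhat(*x)

# (I_1, a_L, a_J)- and (I_2, a_L, a_J)-equivalence classes, where I_1 = {sigma_1},
# I_2 = {sigma_3}, L = {sigma_2}, and a_L is the fixed value of x_2.
def classes_I1(aL):
    groups = {}
    for b in F:
        groups.setdefault(tuple(label((b, aL, c)) for c in F), []).append(b)
    return list(groups.values())

def classes_I2(aL):
    groups = {}
    for b in F:
        groups.setdefault(tuple(label((c, aL, b)) for c in F), []).append(b)
    return list(groups.values())

# N(Cl) = max over a_L of the number of pairs (cl_1, cl_2) with <cl_1, cl_2, a_L> in Cl.
def N(cl):
    best = 0
    for aL in F:
        cnt = sum(1 for c1 in classes_I1(aL) for c2 in classes_I2(aL)
                  if all(label((b1, aL, b2)) == cl for b1 in c1 for b2 in c2))
        best = max(best, cnt)
    return best

n_C = sum(N(cl) for cl in {label(x) for x in product(F, repeat=3)})
print(n_C == q ** 3)                # True: n_C(P_C) = q^3
print(2 / log(n_C, q))              # 0.666..., i.e., the upper bound 2/3
\end{verbatim}
Running the sketch with other small primes $q$ and other nonzero values of $\gamma$ leads to the same conclusion.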
\section{Proof of Theorem~\ref{thm:upper_bound}}\label{sec:proof}
Consider $\{g_e(\vec{x}_S)\in \mathcal{A}^n:\ e\in \mathcal{E}\}$, the set of global encoding functions of a given $(k,n)$ network code that can compute the target function $f$ over $\mathcal{N}$.
Fix a cut set $C\in \Lambda(\mathcal{N})$ and let $I=I_C$ and $J=J_C$. Then,
\begin{align}\label{ineq1}
|\mathcal{A}|^{n|C|}& \geq \#\big\{ g_C(\vec{x}_S):\ \vec{x}_S\in \mathcal{A}^{k\times S} \big\}\\
& = \#\big\{ g_C(\vec{x}_I, \vec{x}_J):\ \vec{x}_I\in \mathcal{A}^{k\times I} \text{ and } \vec{x}_J\in \mathcal{A}^{k\times J} \big\}\\
&= \#\ \bigcup_{\vec{x}_J\in \mathcal{A}^{k\times J}}\big\{ g_C(\vec{x}_I, \vec{x}_J):\ \vec{x}_I\in \mathcal{A}^{k\times I}\big\}.
\end{align}
Hence, for every $\vec{a}_J\in \mathcal{A}^{k\times J}$, we have
\begin{align}\label{ineq2}
|\mathcal{A}|^{n|C|}\geq \# \big\{ g_C(\vec{x}_I, \vec{a}_J):\ \vec{x}_I\in \mathcal{A}^{k\times I} \big\}.
\end{align}
We use ${\mathrm{Cl}}^{(k)}[\vec{a}_J]$ to denote an $(I,\vec{a}_J)$-equivalence class. Since all $(I,\vec{a}_J)$-equivalence classes form a partition of $\mathcal{A}^{k\times I}$, we can write the right hand side of \eqref{ineq2} as
\begin{align*}
\#\bigcup_{\text{all } {\mathrm{Cl}}^{(k)}[\vec{a}_J]} \big\{ g_C(\vec{b}_I, \vec{a}_J):\ \vec{b}_I\in {\mathrm{Cl}}^{(k)}[\vec{a}_J]\big\}.
\end{align*}
Applying the assertion in the second paragraph below Definition~\ref{def:ec} that $g_C(\vec{b}_I, \vec{a}_J)\neq g_C(\vec{b}'_I, \vec{a}_J)$ for any $\vec{b}_I, \vec{b}'_I \in \mathcal{A}^{k\times I}$ that are not $(I, \vec{a}_J)$-equivalent, we further obtain that
\begin{align}
|\mathcal{A}|^{n|C|}\geq&\#\bigcup_{\text{all } {\mathrm{Cl}}^{(k)}[\vec{a}_J]} \big\{ g_C(\vec{b}_I, \vec{a}_J):\ \vec{b}_I\in {\mathrm{Cl}}^{(k)}[\vec{a}_J]\big\} \label{ineq_add_1}\\
=&\sum_{\text{all } {\mathrm{Cl}}^{(k)}[\vec{a}_J]}\#\{ g_C(\vec{b}_I, \vec{a}_J):\ \vec{b}_I\in {\mathrm{Cl}}^{(k)}[\vec{a}_J]\}. \label{ineq3}
\end{align}
\subsection{Partition Equivalence Relation}
For the cut set $C\in \Lambda(\mathcal{N})$, let $\mathcal{P}_C=\{C_1,C_2,\cdots, C_m \}$ be a strong partition of $C$ (cf.~Definition~\ref{def:strong_parti}). Let $I_l=I_{C_l}$ for $l=1,2, \cdots, m$ and accordingly $L=I\setminus(\bigcup_{l=1}^m I_l)$. Now, we rewrite the sets in the summation in \eqref{ineq3} as follows:
\begin{align}
& \big\{ g_C(\vec{b}_I, \vec{a}_J): \ \vec{b}_I\in {\mathrm{Cl}}^{(k)}[\vec{a}_J] \big\}\label{ineq:1}\\
&= \big\{ g_C(\vec{b}_{I_1}, \vec{b}_{I_2}, \cdots, \vec{b}_{I_m}, \vec{b}_{L}, \vec{a}_J):
\ \vec{b}_I=(\vec{b}_{I_1}, \vec{b}_{I_2}, \cdots, \vec{b}_{I_m}, \vec{b}_{L})\in {\mathrm{Cl}}^{(k)}[\vec{a}_J] \big\}\\
&= \big\{ \big(g_{C_l}(\vec{b}_{I_l}, \vec{b}_{L}, \vec{a}_J),\ l=1,2,\cdots,m \big):
\ \vec{b}_I=(\vec{b}_{I_1}, \vec{b}_{I_2}, \cdots, \vec{b}_{I_m}, \vec{b}_{L})\in {\mathrm{Cl}}^{(k)}[\vec{a}_J]\big\} \label{ineq4} \\
&= \bigcup_{\vec{b}_{L}\in \mathcal{A}^{k\times L}}
\big\{ \big(g_{C_l}(\vec{b}_{I_l}, \vec{b}_{L}, \vec{a}_J),\ l=1,2,\cdots,m \big): \ \vec{b}_{I_l} \in \mathcal{A}^{k\times I_l}, l=1,2,\cdots,m, \textrm{ and } \nonumber\\[-4mm]
&\quad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad\ (\vec{b}_{I_1}, \vec{b}_{I_2}, \cdots, \vec{b}_{I_m}, \vec{b}_{L})
\in {\mathrm{Cl}}^{(k)}[\vec{a}_J]\big\},\label{ineq5_0}
\end{align}
where \eqref{ineq4} follows from the fact that for each $l$, the value of $g_{C_l}$ does not depend on $\vec{b}_{I_j}$ for $1\leq j \leq m$ with $j\neq l$. Further, for any $\vec{a}_{L}\in \mathcal{A}^{k\times L}$, we have
\begin{align}
{\rm RHS\ of\ }\eqref{ineq5_0}&\supseteq \big\{ \big(g_{C_l}(\vec{b}_{I_l}, \vec{a}_{L}, \vec{a}_J),\ l=1,2,\cdots,m \big): \ \vec{b}_{I_l} \in \mathcal{A}^{k\times I_l}, l=1,2,\cdots,m, \textrm{ and } \nonumber\\
& \quad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad\ (\vec{b}_{I_1}, \vec{b}_{I_2}, \cdots, \vec{b}_{I_m}, \vec{a}_{L})
\in {\mathrm{Cl}}^{(k)}[\vec{a}_J]\big\}.\label{ineq5}
\end{align}
Next, we give the definition of {\em partition equivalence relation}, and observe that Definition~\ref{defn:Par_Equ_Relation_ScalarCase} is the special case with $k=1$. The importance of this relation will become clear in Lemma~\ref{lem_second_equi_relation}.
\begin{defn}[{Partition Equivalence Relation}]\label{defn_P_E_Relation}
Let $I$ and $J$ be two disjoint subsets of $S$. Let $I_l$, $l=1,2, \cdots, m$ be $m$ disjoint subsets of $I$ and accordingly $L=I\setminus(\bigcup_{l=1}^m I_l)$. Given $\vec{a}_J\in \mathcal{A}^{k\times J}$ and $\vec{a}_L\in \mathcal{A}^{k\times L}$, for $1 \leq l \leq m$, we say that $\vec{b}_{I_l}$ and $\vec{b}'_{I_l}$ in $\mathcal{A}^{k\times I_l}$ are $(I_l, \vec{a}_L, \vec{a}_J)$-equivalent if for each $\vec{c}_{I_j} \in \mathcal{A}^{k\times I_j}$ with $1 \leq j \leq m$ and $j\neq l$, $(\vec{b}_{I_l}, \vec{a}_{L}, \vec{c}_{I_j},\ 1 \leq j \leq m, j\neq l)$ and $(\vec{b}'_{I_l}, \vec{a}_{L}, \vec{c}_{I_j},\ 1 \leq j \leq m, j\neq l)$ in $\mathcal{A}^{k \times I}$ are $(I, \vec{a}_J)$-equivalent.
\end{defn}
We remark that Definition~\ref{defn_P_E_Relation} depends only on the target function $f$ but not on the network $\mathcal{N}$, and evidently, every relation above is an equivalence relation.
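For small alphabets, the partition equivalence relation can be determined by brute force directly from Definition~\ref{defn_P_E_Relation}. The following sketch (Python, for the scalar case $k=1$; all names are ours) groups the elements of $\mathcal{A}^{I_l}$ by their complete profile of values of $f$ over all completions of the remaining source inputs, which is equivalent to the definition; note that it takes only the target function $f$ as input, in accordance with the remark above.
\begin{verbatim}
from itertools import product

def eq_classes_Il(f, S, A, I, J, aJ, parts, l, aL):
    """(I_l, a_L, a_J)-equivalence classes for k = 1, by brute force:
    b ~ b' iff f agrees on every completion of (b, a_L, a_J)."""
    L = [s for s in I if not any(s in p for p in parts)]
    others = [s for j, p in enumerate(parts) if j != l for s in p]
    rest = [s for s in S if s not in I and s not in J]
    Il = parts[l]

    def profile(b):                 # all values f takes as the free sources range over A
        vals = []
        for c in product(A, repeat=len(others)):
            for d in product(A, repeat=len(rest)):
                x = dict(zip(Il, b))
                x.update(zip(L, aL)); x.update(zip(others, c))
                x.update(zip(J, aJ)); x.update(zip(rest, d))
                vals.append(f(tuple(x[s] for s in S)))
        return tuple(vals)

    groups = {}
    for b in product(A, repeat=len(Il)):
        groups.setdefault(profile(b), []).append(b)
    return list(groups.values())

# Example: the target function of the example above over F_3 with gamma = 1; every
# (I_1, a_L, a_J)-equivalence class is a singleton, matching the count found there.
q = 3
fhat = lambda x: ((x[0] + x[2]) % q, x[1])   # sources ordered (sigma_1, sigma_2, sigma_3)
S = ("s1", "s2", "s3")
print(eq_classes_Il(fhat, S, range(q), S, (), (), [["s1"], ["s3"]], 0, (0,)))
\end{verbatim}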
\begin{lemma}\label{lem_second_equi_relation}
Let $\{g_e: e\in \mathcal{E}\}$ be the set of global encoding functions of a $(k,n)$ network code that can compute $f$ over $\mathcal{N}$. For a cut set $C$ in $\Lambda(\mathcal{N})$ with a strong partition $\mathcal{P}_C=\{C_1,C_2,\cdots, C_m \}$, let $I=I_{C}$, $J=J_{C}$, and $I_l=I_{C_l}$ for $l=1,2, \cdots, m$ and accordingly $L=I\setminus(\bigcup_{l=1}^m I_l)$. Fix $\vec{a}_J\in \mathcal{A}^{k\times J}$ and $\vec{a}_L\in \mathcal{A}^{k\times L}$. Then for each $1\leq l \leq m$ and any two source inputs $\vec{b}_{I_l}$ and $\vec{b}'_{I_l}$ in $\mathcal{A}^{k\times I_l}$ that are not $(I_l,\vec{a}_L, \vec{a}_J)$-equivalent, it is necessary that $g_{C_l}(\vec{b}_{I_l}, \vec{a}_{L}, \vec{a}_J)\neq g_{C_l}(\vec{b}'_{I_l}, \vec{a}_{L}, \vec{a}_J)$.
\end{lemma}
\begin{IEEEproof}
Without loss of generality, it suffices to prove the lemma for $l=1$ only.
Consider two source inputs $\vec{b}_{I_1}$ and $\vec{b}'_{I_1}$ in $\mathcal{A}^{k\times I_1}$ that are not $(I_1, \vec{a}_L, \vec{a}_J)$-equivalent. Then there exist $\vec{c}_{I_j}\in \mathcal{A}^{k\times I_j}$ for $j=2,3,\cdots,m$ such that $\vec{b}_I\triangleq(\vec{b}_{I_1}, \vec{c}_{I_2}, \cdots, \vec{c}_{I_m}, \vec{a}_{L})$ and $\vec{b}'_I\triangleq(\vec{b}'_{I_1}, \vec{c}_{I_2}, \cdots, \vec{c}_{I_m}, \vec{a}_{L})$ are not $(I,\vec{a}_J)$-equivalent. In other words, there exists $\vec{d}\in \mathcal{A}^{k\times S\backslash (I\cup J)}$ such that
\begin{align}\label{ineq_pf_lem_second_equi_relation}
f(\vec{b}_I, \vec{a}_J, \vec{d})\neq f(\vec{b}'_I, \vec{a}_J, \vec{d}).
\end{align}
Next, let $D=\bigcup_{\sigma\in (S\setminus I)}\mathcal{E}_{\mathrm{o}}(\sigma)$, an edge subset of $\mathcal{E}$. Then $\widehat{C}=C\cup D$ is a global cut set, i.e., $I_{\widehat{C}}=S$. Since $g_{\mathcal{E}_{\mathrm{i}}(\rho)}(\vec{x}_S)$ is a function of $g_{\widehat{C}}(\vec{x}_S)$ and the network code can compute $f$, \eqref{ineq_pf_lem_second_equi_relation} implies that
\begin{align*}
g_{\widehat{C}}(\vec{b}_I, \vec{a}_J, \vec{d})\neq g_{\widehat{C}}(\vec{b}'_I, \vec{a}_J, \vec{d}).
\end{align*}
Equivalently,
\begin{align*}
\big( g_{C}(\vec{b}_I, \vec{a}_J),\ g_{D}(\vec{a}_J, \vec{d})\big)=g_{\widehat{C}}(\vec{b}_I, \vec{a}_J, \vec{d})\neq g_{\widehat{C}}(\vec{b}'_I, \vec{a}_J, \vec{d})
=\big( g_{C}(\vec{b}'_I, \vec{a}_J),\ g_{D}(\vec{a}_J, \vec{d})\big).
\end{align*}
By comparing the left hand side and the right hand side above, we immediately obtain $g_{C}(\vec{b}_I, \vec{a}_J)\neq g_{C}(\vec{b}'_I, \vec{a}_J)$, i.e.,
\begin{align*}
&\big(g_{C_1}(\vec{b}_{I_1}, \vec{a}_{L}, \vec{a}_J),\ g_{{C}_j}(\vec{c}_{I_j}, \vec{a}_{L}, \vec{a}_J), j=2,3,\cdots,m \big)\\
&\neq \big(g_{C_1}(\vec{b}'_{I_1}, \vec{a}_{L}, \vec{a}_J),\ g_{C_j}(\vec{c}_{I_j}, \vec{a}_{L}, \vec{a}_J), j=2,3,\cdots,m \big),
\end{align*}
which implies $g_{C_1}(\vec{b}_{I_1}, \vec{a}_{L}, \vec{a}_J)\neq g_{C_1}(\vec{b}'_{I_1}, \vec{a}_{L}, \vec{a}_J)$. The lemma is proved.
\end{IEEEproof}
For $l=1,2,\cdots,m$, we use ${\mathrm{cl}}_{I_l}[\vec{a}_{L}, \vec{a}_J]$ to denote an $(I_l, \vec{a}_L, \vec{a}_J)$-equivalence class. All $(I_l, \vec{a}_L, \vec{a}_J)$-equivalence classes form a partition of $\mathcal{A}^{k\times I_l}$. When $\vec{a}_{L}$ and $\vec{a}_J$ are clear from the context, we write ${\mathrm{cl}}_{I_l}[\vec{a}_{L}, \vec{a}_J]$ as ${\mathrm{cl}}^{(k)}_{I_l}$ to simplify notation. In the following, we give a lemma that reduces to Lemma~\ref{prop_cl} for the case $k=1$.
\begin{lemma}\label{prop_cl_vector_type}
For any set of $(I_l, \vec{a}_{L}, \vec{a}_J)$-equivalence classes ${\mathrm{cl}}^{(k)}_{I_l}$, $l=1, 2, \cdots, m$, define the set
\begin{align*}
\big\langle {\mathrm{cl}}^{(k)}_{I_1}, {\mathrm{cl}}^{(k)}_{I_2}, \cdots, {\mathrm{cl}}^{(k)}_{I_m}, \vec{a}_{L}\big \rangle
\triangleq \Big\{ (\vec{b}_{I_1}, \vec{b}_{I_2}, \cdots, \vec{b}_{I_m}, \vec{a}_{L}):
\ \vec{b}_{I_l}\in {\mathrm{cl}}^{(k)}_{I_l}, l=1,2,\cdots,m \Big\}\subseteq \mathcal{A}^{k\times I}.
\end{align*}
Then all source inputs $(\vec{b}_{I_1}, \vec{b}_{I_2}, \cdots, \vec{b}_{I_m}, \vec{a}_{L})$ in $\big\langle {\mathrm{cl}}^{(k)}_{I_1}, {\mathrm{cl}}^{(k)}_{I_2}, \cdots, {\mathrm{cl}}^{(k)}_{I_m}, \vec{a}_{L}\big \rangle$ are $(I,\vec{a}_J)$-equivalent. In other words, there exists an $(I,\vec{a}_J)$-equivalence class ${\mathrm{Cl}}^{(k)}[\vec{a}_J]$ such that
$$\big\langle {\mathrm{cl}}^{(k)}_{I_1}, {\mathrm{cl}}^{(k)}_{I_2}, \cdots, {\mathrm{cl}}^{(k)}_{I_m}, \vec{a}_{L} \big\rangle \subseteq {\mathrm{Cl}}^{(k)}[\vec{a}_J].$$
\end{lemma}
\begin{IEEEproof}
Let $\vec{b}_{I_l}$ and $\vec{b}'_{I_l}$ be two arbitrary source matrices in ${\mathrm{cl}}^{(k)}_{I_l}$ for $l=1,2,\cdots,m$. Throughout this proof, we write $\vec{x}_I \sim \vec{y}_I$ for $\vec{x}_I,\vec{y}_I\in \mathcal{A}^{k\times I}$ if $\vec{x}_I$ and $\vec{y}_I$ are $(I, \vec{a}_J)$-equivalent.
Next, we will prove that for $1\leq l \leq m$,
\begin{align*}
(\vec{b}_{I_1}, \vec{b}_{I_2}, \cdots, \vec{b}_{I_m}, \vec{a}_{L}) \sim (\vec{b}'_{I_1}, \vec{b}'_{I_2} \cdots, \vec{b}'_{I_l}, \vec{b}_{I_{l+1}}, \cdots, \vec{b}_{I_{m}}, \vec{a}_{L})
\end{align*}
by induction on $l$. In particular, when $l=m$, we have
\begin{align*}
(\vec{b}_{I_1}, \vec{b}_{I_2}, \cdots, \vec{b}_{I_m}, \vec{a}_{L}) \sim (\vec{b}'_{I_1}, \vec{b}'_{I_2} \cdots, \vec{b}'_{I_m}, \vec{a}_{L}).
\end{align*}
This proves the lemma.
First, since $\vec{b}_{I_1}$ and $\vec{b}'_{I_1}$ are $(I_1,\vec{a}_L,\vec{a}_J)$-equivalent, by Definition~\ref{defn_P_E_Relation}, we have \begin{align*}
(\vec{b}_{I_1}, \vec{b}_{I_2}, \cdots, \vec{b}_{I_m}, \vec{a}_{L}) \sim
(\vec{b}'_{I_1}, \vec{b}_{I_2}, \cdots, \vec{b}_{I_m}, \vec{a}_{L}).
\end{align*}
Assume that
\begin{align}\label{equ:assumption}
(\vec{b}_{I_1}, \vec{b}_{I_2}, \cdots, \vec{b}_{I_m}, \vec{a}_{L}) \sim
(\vec{b}'_{I_1}, \cdots, \vec{b}'_{I_l}, \vec{b}_{I_{l+1}}, \cdots, \vec{b}_{I_{m}}, \vec{a}_{L})
\end{align}
for some $1 \leq l < m$. We now prove that
\begin{align}\label{equ:tobeproved}
(\vec{b}_{I_1}, \vec{b}_{I_2}, \cdots, \vec{b}_{I_m}, \vec{a}_{L}) \sim (\vec{b}'_{I_1}, \cdots, \vec{b}'_{I_{l+1}}, \vec{b}_{I_{l+2}}, \cdots, \vec{b}_{I_{m}}, \vec{a}_{L}).
\end{align}
Since $\vec{b}_{I_{l+1}}$ and $\vec{b}'_{I_{l+1}}$ are $(I_{l+1},\vec{a}_L,\vec{a}_J)$-equivalent, we see that
$$(\vec{b}'_{I_1}, \cdots, \vec{b}'_{I_l}, \vec{b}_{I_{l+1}}, \vec{b}_{I_{l+2}}, \cdots, \vec{b}_{I_{m}}, \vec{a}_{L}) \sim (\vec{b}'_{I_1}, \cdots, \vec{b}'_{I_l}, \vec{b}'_{I_{l+1}}, \vec{b}_{I_{l+2}}, \cdots, \vec{b}_{I_{m}}, \vec{a}_{L}).$$
Together with the assumption \eqref{equ:assumption} and the transitivity of the $(I, \vec{a}_J)$-equivalence relation ``$\sim$'', we have proved \eqref{equ:tobeproved} and hence accomplished the proof.
\end{IEEEproof}
\subsection{Derivation of the Improved Upper Bound}
From \eqref{ineq:1} to \eqref{ineq5}, we obtain that for every $\vec{a}_{L}$ in $\mathcal{A}^{k\times L}$,
\begin{align}
& \# \big\{ g_C(\vec{b}_I, \vec{a}_J): \ \vec{b}_I\in {\mathrm{Cl}}^{(k)}[\vec{a}_J] \big\}\nonumber\\
& \geq \# \big\{ \big(g_{C_l}(\vec{b}_{I_l}, \vec{a}_{L}, \vec{a}_J),\ l=1,2,\cdots,m \big): \ \vec{b}_{I_l} \in \mathcal{A}^{k\times I_l}, l=1,2,\cdots,m, \textrm{ and }\nonumber\\
&\quad\ \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad \ \ (\vec{b}_{I_1}, \vec{b}_{I_2}, \cdots, \vec{b}_{I_m}, \vec{a}_{L})
\in {\mathrm{Cl}}^{(k)}[\vec{a}_J]\big\}. \label{ineq_for_every}
\end{align}
We now derive a lower bound on the right-hand side of \eqref{ineq_for_every}; the steps are explained after the derivation.
\begin{align}
&\# \big\{ \big(g_{C_l}(\vec{b}_{I_l}, \vec{a}_{L}, \vec{a}_J),\ l=1,2,\cdots,m \big): \ \vec{b}_{I_l} \in \mathcal{A}^{k\times I_l}, l=1,2,\cdots,m, \textrm{ and }\nonumber\\
&\quad\ \qquad \qquad \qquad \qquad \qquad \qquad \qquad\quad (\vec{b}_{I_1}, \vec{b}_{I_2}, \cdots, \vec{b}_{I_m}, \vec{a}_{L})
\in {\mathrm{Cl}}^{(k)}[\vec{a}_J]\big\}\nonumber\\
&=\# \big\{ \big(g_{C_l}(\vec{b}_{I_l}, \vec{a}_{L}, \vec{a}_J),\ l=1,2,\cdots,m \big): \vec{b}_{I_l} \in {\mathrm{cl}}^{(k)}_{I_l}, \text{ an $(I_l, \vec{a}_{L}, \vec{a}_J)$-equivalence class, } 1 \leq l \leq m,\nonumber\\
& \qquad\qquad \qquad \qquad \qquad\qquad\qquad\qquad\ \textrm{ and }\big\langle {\mathrm{cl}}^{(k)}_{I_1}, {\mathrm{cl}}^{(k)}_{I_2}, \cdots, {\mathrm{cl}}^{(k)}_{I_m}, \vec{a}_{L} \big\rangle \subseteq {\mathrm{Cl}}^{(k)}[\vec{a}_J]\big\}\label{ineq_for_every_2}\\
&\geq \# \big\{ \big( {\mathrm{cl}}^{(k)}_{I_1}, {\mathrm{cl}}^{(k)}_{I_2}, \cdots, {\mathrm{cl}}^{(k)}_{I_m} \big):\
{\mathrm{cl}}^{(k)}_{I_l} \text{ is an $(I_l, \vec{a}_{L}, \vec{a}_J)$-equivalence class, } l=1,2,\cdots,m, \nonumber \\
& \qquad\qquad \qquad \qquad \qquad\qquad\qquad\qquad\ \textrm{ and } \big\langle {\mathrm{cl}}^{(k)}_{I_1}, {\mathrm{cl}}^{(k)}_{I_2}, \cdots, {\mathrm{cl}}^{(k)}_{I_m}, \vec{a}_{L} \big\rangle \subseteq {\mathrm{Cl}}^{(k)}[\vec{a}_J]\big\}.\label{ineq6}
\end{align}
\begin{itemize}
\item The equality~\eqref{ineq_for_every_2} is justified by establishing the following:
\begin{align}
&\big\{ (\vec{b}_{I_1}, \vec{b}_{I_2}, \cdots, \vec{b}_{I_m}, \vec{a}_L): \vec{b}_{I_l} \in \mathcal{A}^{k\times I_l}, l=1,2,\cdots,m, \textrm{ and } (\vec{b}_{I_1}, \vec{b}_{I_2}, \cdots, \vec{b}_{I_m}, \vec{a}_{L})\in {\mathrm{Cl}}^{(k)}[\vec{a}_J]\big\}\nonumber\\
&=\bigcup_{ \textrm{ all } \left({\mathrm{cl}}^{(k)}_{I_1}, {\mathrm{cl}}^{(k)}_{I_2}, \cdots, {\mathrm{cl}}^{(k)}_{I_m}\right) \textrm{ s.t. } \atop \left\langle {\mathrm{cl}}^{(k)}_{I_1}, {\mathrm{cl}}^{(k)}_{I_2}, \cdots, {\mathrm{cl}}^{(k)}_{I_m}, \vec{a}_{L} \right\rangle \subseteq {\mathrm{Cl}}^{(k)}[\vec{a}_J]} \big\langle {\mathrm{cl}}^{(k)}_{I_1}, {\mathrm{cl}}^{(k)}_{I_2}, \cdots, {\mathrm{cl}}^{(k)}_{I_m}, \vec{a}_{L} \big\rangle. \label{equ_2sets}
\end{align}
To see \eqref{equ_2sets}, we first consider an arbitrary $(\vec{b}_{I_1}, \vec{b}_{I_2}, \cdots, \vec{b}_{I_m}, \vec{a}_L)$ in LHS of \eqref{equ_2sets}, i.e.,
\begin{align}\label{equ1_pf_equ_2sets}
(\vec{b}_{I_1}, \vec{b}_{I_2}, \cdots, \vec{b}_{I_m}, \vec{a}_{L})\in {\mathrm{Cl}}^{(k)}[\vec{a}_J].
\end{align}
Let ${\mathrm{cl}}^{(k)}_{I_l}$ be the corresponding $(I_l, \vec{a}_{L}, \vec{a}_J)$-equivalence class containing $\vec{b}_{I_l}$ for $1\leq l \leq m$. Then
\begin{align}\label{equ2_pf_equ_2sets}
(\vec{b}_{I_1}, \vec{b}_{I_2}, \cdots, \vec{b}_{I_m}, \vec{a}_L)\in \big\langle {\mathrm{cl}}^{(k)}_{I_1}, {\mathrm{cl}}^{(k)}_{I_2}, \cdots, {\mathrm{cl}}^{(k)}_{I_m}, \vec{a}_{L} \big\rangle.
\end{align}
Combining \eqref{equ1_pf_equ_2sets} and \eqref{equ2_pf_equ_2sets} and by Lemma~\ref{prop_cl_vector_type}, we have
\begin{align}\label{equ3_pf_equ_2sets}
(\vec{b}_{I_1}, \vec{b}_{I_2}, \cdots, \vec{b}_{I_m}, \vec{a}_L)\in \big\langle {\mathrm{cl}}^{(k)}_{I_1}, {\mathrm{cl}}^{(k)}_{I_2}, \cdots, {\mathrm{cl}}^{(k)}_{I_m}, \vec{a}_{L} \big\rangle \subseteq {\mathrm{Cl}}^{(k)}[\vec{a}_J],
\end{align}
which shows that LHS of \eqref{equ_2sets} is a subset of RHS of \eqref{equ_2sets}. On the other hand, it is evident that RHS of \eqref{equ_2sets} is a subset of LHS of \eqref{equ_2sets}, proving \eqref{equ_2sets}. Immediately, \eqref{equ_2sets} implies \eqref{ineq_for_every_2}.
\item The inequality~\eqref{ineq6} is proved as follows. For every $\big( {\mathrm{cl}}^{(k)}_{I_1}, {\mathrm{cl}}^{(k)}_{I_2}, \cdots, {\mathrm{cl}}^{(k)}_{I_m} \big)$ in the set on the RHS of \eqref{ineq6}, we arbitrarily choose a vector $(\vec{b}_{I_1}, \vec{b}_{I_2}, \cdots, \vec{b}_{I_m})$ such that $\vec{b}_{I_l}\in {\mathrm{cl}}^{(k)}_{I_l}$, for $l=1,2,\cdots,m$. For any two distinct $\big( {\mathrm{cl}}^{(k)}_{I_1}, {\mathrm{cl}}^{(k)}_{I_2}, \cdots, {\mathrm{cl}}^{(k)}_{I_m} \big)$ and $\big( {\mathrm{cl}}'^{(k)}_{I_1}, {\mathrm{cl}}'^{(k)}_{I_2}, \cdots, {\mathrm{cl}}'^{(k)}_{I_m} \big)$, let $(\vec{b}_{I_1}, \vec{b}_{I_2}, \cdots, \vec{b}_{I_m})$ and $(\vec{b}'_{I_1}, \vec{b}'_{I_2}, \cdots, \vec{b}'_{I_m})$ be the corresponding vectors that have been chosen. Then by Lemma~\ref{lem_second_equi_relation}, we have
\begin{align}\label{equ1_pf_ineq6}
\big( g_{C_l}(\vec{b}_{I_l}, \vec{a}_{L}, \vec{a}_J),\ l=1,2,\cdots,m\big)\neq
\big( g_{C_l}(\vec{b}'_{I_l}, \vec{a}_{L}, \vec{a}_J),\ l=1,2,\cdots,m\big),
\end{align}
which implies \eqref{ineq6}.
\end{itemize}
We denote the RHS of \eqref{ineq6} by $N\big(\vec{a}_{L}, {\mathrm{Cl}}^{(k)}[\vec{a}_J]\big)$, which is consistent with the notation $N\big({a}_{L}, {\mathrm{Cl}}[{a}_J]\big)$ in \eqref{no_finer_eq_cl} for the case $k=1$.
We write $\vec{a}_J=\left(a_{J,1}, a_{J,2}, \cdots, a_{J,k}\right)^\top$, where $a_{J,p} \in \mathcal{A}^{J}$, $p=1,2,\cdots,k$, are the rows of $\vec{a}_J\in \mathcal{A}^{k\times J}$. Let $\vec{b}_I$ and $\vec{b}'_I$ in $\mathcal{A}^{k\times I}$ be two source inputs. Similarly, we write $\vec{b}_I= \big(b_{I,1}, b_{I,2}, \cdots, b_{I,k}\big)^\top$ and $\vec{b}'_I= \big(b'_{I,1}, b'_{I,2}, \cdots, b'_{I,k}\big)^\top$ with $b_{I,p}, b'_{I,p}\in \mathcal{A}^{I}$ for $1\leq p \leq k$.
By Definition~\ref{def:ec}, we see that $\vec{b}_I$ and $\vec{b}'_I$ are $(I,\vec{a}_{J})$-equivalent if and only if $b_{I,p}$ and $b'_{I,p}$ are $(I,a_{J,p})$-equivalent for all $1\leq p \leq k$. Thus, every $(I,\vec{a}_J)$-equivalence class corresponds to a set of $(I,a_{J,p})$-equivalence classes, $p=1,2,\cdots,k$. On the other hand, every set of $(I,a_{J,p})$-equivalence classes, $p=1,2,\cdots,k$, also corresponds to an $(I,\vec{a}_J)$-equivalence class.
For an $(I,\vec{a}_{J})$-equivalence class ${\mathrm{Cl}}^{(k)}[\vec{a}_J]$, denote the corresponding set of $(I,a_{J,p})$-equivalence classes by $\big\{{\mathrm{Cl}}_{p}[a_{J,p}],\ p=1,2,\cdots,k\big\}$.
Next, we consider the $(I_l, \vec{a}_{L}, \vec{a}_J)$-equivalence relation, $1\leq l\leq m$, and obtain a similar result. To be specific, we also write $\vec{a}_{L}=\left(a_{L,1}, a_{L,2}, \cdots, a_{L,k}\right)^\top$ with $a_{L,p} \in \mathcal{A}^{L}$, $p=1,2,\cdots,k$ being the rows of $\vec{a}_L\in \mathcal{A}^{k\times L}$, and consider two source inputs $\vec{b}_{I_l}= \big(b_{I_l,1}, b_{I_l,2}, \cdots, b_{I_l,k}\big)^\top$ and $\vec{b}'_{I_l}= \big(b'_{I_l,1}, b'_{I_l,2}, \cdots, b'_{I_l,k}\big)^\top$ in $\mathcal{A}^{k\times I_l}$ with $b_{I_l,p}, b'_{I_l,p}\in \mathcal{A}^{I_l}$ for $1\leq p \leq k$. Similarly, by Definition~\ref{defn_P_E_Relation}, $\vec{b}_{I_l}$ and $\vec{b}'_{I_l}$ are $(I_l,\vec{a}_{L}, \vec{a}_{J})$-equivalent if and only if $b_{I_l,p}$ and $b'_{I_l,p}$ are $(I_l,a_{L,p},a_{J,p})$-equivalent for all $1\leq p \leq k$. Thus, every $(I_l,\vec{a}_{L}, \vec{a}_{J})$-equivalence class corresponds to a set of $(I_l, a_{L,p}, a_{J,p})$-equivalence classes, $1\leq p \leq k$, and vice versa. For an $(I_l,\vec{a}_{L}, \vec{a}_{J})$-equivalence class ${\mathrm{cl}}^{(k)}_{I_l}$, denote the corresponding set of $(I_l, a_{L,p}, a_{J,p})$-equivalence classes by $\big\{{\mathrm{cl}}_{I_l,p},\ p=1,2,\cdots,k\big\}$.
We now consider $(I_l, \vec{a}_{L}, \vec{a}_J)$-equivalence classes ${\mathrm{cl}}^{(k)}_{I_l}$, $1\leq l \leq m$. Based on the above arguments, we obtain that
\begin{align}
\big\langle {\mathrm{cl}}^{(k)}_{I_1}, {\mathrm{cl}}^{(k)}_{I_2}, \cdots, {\mathrm{cl}}^{(k)}_{I_m}, \vec{a}_{L}\big \rangle \subseteq {\mathrm{Cl}}^{(k)}[\vec{a}_J]
\end{align}
if and only if
\begin{align}
\big\langle {\mathrm{cl}}_{I_1,p}, {\mathrm{cl}}_{I_2,p}, \cdots, {\mathrm{cl}}_{I_m,p}, a_{L,p}\big\rangle \subseteq {\mathrm{Cl}}_p[a_{J,p}],\quad \forall~p=1,2,\cdots,k.
\end{align}
Consequently, the number of such tuples factorizes over the $k$ rows, i.e.,
\begin{align}\label{prod}
N\left(\vec{a}_{L}, {\mathrm{Cl}}^{(k)}[\vec{a}_J]\right)=\prod_{p=1}^{k} N\big(a_{L,p}, {\mathrm{Cl}}_{p}[a_{J,p}]\big).
\end{align}
By considering all $\vec{a}_L$ in $\mathcal{A}^{k\times L}$ and combining \eqref{ineq_for_every}-\eqref{ineq6} with \eqref{prod}, we have
\begin{align}
& \#\Big\{ g_C(\vec{b}_I, \vec{a}_J): \ \vec{b}_I\in {\mathrm{Cl}}^{(k)}[\vec{a}_J]\Big\}\nonumber\\
& \geq \max_{\vec{a}_L \in \mathcal{A}^{k\times L}} N\big(\vec{a}_{L}, {\mathrm{Cl}}^{(k)}[\vec{a}_J]\big)\label{prod-1}\\
& = \max_{\vec{a}_L \in \mathcal{A}^{k\times L}} \prod_{p=1}^{k} N\big(a_{L,p}, {\mathrm{Cl}}_{p}[a_{J,p}]\big)\label{prod-2}\\
& = \max_{a_{L,1} \in \mathcal{A}^{L}}~\max_{a_{L,2} \in \mathcal{A}^{L}}\cdots\max_{a_{L,k} \in \mathcal{A}^{L}} \prod_{p=1}^{k} N\big(a_{L,p}, {\mathrm{Cl}}_{p}[a_{J,p}]\big)\\
& = \prod_{p=1}^{k} \max_{a_{L,p} \in \mathcal{A}^{L}} N\big(a_{L,p}, {\mathrm{Cl}}_{p}[a_{J,p}]\big)\\
& = \prod_{p=1}^{k} N\big({\mathrm{Cl}}_{p}[a_{J,p}]\big),\label{a_star}
\end{align}
where the maximization and the product can be interchanged because the $p$th factor depends only on $a_{L,p}$, and \eqref{a_star} follows from the definition in \eqref{equ:N_Cl}.
We now combine \eqref{ineq_add_1}, \eqref{ineq3}, and \eqref{prod-1}-\eqref{a_star} to obtain
\begin{align}
|\mathcal{A}|^{n|C|}\geq & \sum_{\text{all }{\mathrm{Cl}}^{(k)}[\vec{a}_J]}\left[\prod_{p=1}^{k} N\big({\mathrm{Cl}}_{p}[a_{J,p}]\big)\right]\label{ineq_final-1}\\
= & \sum_{\text{all }{\mathrm{Cl}}_{1}[a_{J,1}]~}\sum_{\text{all }{\mathrm{Cl}}_{2}[a_{J,2}]}\cdots\sum_{\text{all }{\mathrm{Cl}}_{k}[a_{J,k}]}
\left[\prod_{p=1}^{k} N\big({\mathrm{Cl}}_{p}[a_{J,p}]\big)\right]\label{ineq_final-2}\\
= & \prod_{p=1}^{k}\left[ \sum_{\text{all }{\mathrm{Cl}}_{p}[a_{J,p}]} N\big({\mathrm{Cl}}_{p}[a_{J,p}]\big)\right].\label{ineq_final-3}
\end{align}
Note that the inequality \eqref{ineq_final-1} holds for an arbitrary $\vec{a}_J\in \mathcal{A}^{k\times J}$, or equivalently, arbitrary $a_{J,p} \in \mathcal{A}^{J}$, $p=1,2,\cdots, k$. Let
\begin{align*}
{a}_J^*\in \arg\max_{{a}_J\in \mathcal{A}^{J}} \sum_{\text{all }{\mathrm{Cl}}[{a}_J]} N\big({\mathrm{Cl}}[{a}_J]\big),
\end{align*}
i.e.,
\begin{align*}
\sum_{\text{all }{\mathrm{Cl}}[{a}_J^*]} N\big({\mathrm{Cl}}[{a}_J^*]\big)=\max_{{a}_J\in \mathcal{A}^{J}} \sum_{\text{all }{\mathrm{Cl}}[{a}_J]} N\big({\mathrm{Cl}}[{a}_J]\big).
\end{align*}
Then it follows from \eqref{ineq_final-1}-\eqref{ineq_final-3} that
\begin{align}
&|\mathcal{A}|^{n|C|}\geq \left[ \sum_{\text{all }{\mathrm{Cl}}[{a}_J^*]} N\big( {\mathrm{Cl}}[{a}_J^*]\big)\right]^k. \label{ineq_k_n}
\end{align}
For the strong partition $\mathcal{P}_C=\{ C_1,C_2,\cdots,C_m \}$ of the cut set $C$, recall from \eqref{n_C_Parti_1st} and \eqref{n_C_f_1st} the definitions of $n_C(\mathcal{P}_C)$ and $n_{C,f}$, respectively. Since the inequality \eqref{ineq_k_n} is valid for all strong partitions $\mathcal{P}_C$ of $C$, we have
$$|\mathcal{A}|^{n|C|}\geq n_{C,f}^k,$$
or equivalently,
\begin{align}\label{ineq_upp_bound_k_n}
\frac{k}{n}\leq \dfrac{|C|}{\log_{|\mathcal{A}|}n_{C,f}}.
\end{align}
Finally, considering all cut sets $C\in \Lambda(\mathcal{N})$, we obtain by \eqref{ineq_upp_bound_k_n} that
\begin{align}\label{equ:improved_upper_bound_re}
\mathcal{C}(\mathcal{N},f)\leq \min_{C\in\Lambda(\mathcal{N})}\dfrac{|C|}{\log_{|\mathcal{A}|}n_{C,f}}.
\end{align}
Therefore, we have proved Theorem~\ref{thm:upper_bound}.
\section{A Nontrivial Example}\label{sec:non_tight}
In the previous section, we proved an improved upper bound on the network function computing capacity in Theorem~\ref{thm:upper_bound}. For all previously considered network function computation problems whose computing capacities are known, our improved upper bound is achievable if the computing capacity is rational, and is asymptotically achievable if the computing capacity is irrational, e.g., arbitrary target functions over a multi-edge tree network, the identity function or the algebraic sum function over an arbitrary network topology, and the problem previously considered in \cite{Appuswamy11,huang15} (see Fig.~\ref{fig:1} and Example~\ref{eg:2}). Nevertheless, in this section we prove that our improved upper bound is not necessarily achievable even when its value is rational. The result is stated in the following theorem, whose proof is highly nontrivial. This is the first example showing the non-achievability of the improved upper bound when its value is rational.
\begin{figure}[t]
\centering
{
\begin{tikzpicture}[x=0.6cm]
\draw (-3,0) node[vertex] (1) [label=above:$\sigma_1$] {};
\draw ( 3,0) node[vertex] (2) [label=above:$\sigma_2$] {};
\draw ( 0,-1.5) node[vertex] (3) [label=left:] {};
\draw (-3,-5) node[vertex] (4) [label=left:] {};
\draw ( 3,-5) node[vertex] (5) [label=right:] {};
\draw ( 0,-3.5) node[vertex] (6) [label=right:] {};
\draw ( 0,-6.5) node[vertex] (7) [label=below: $\rho$] {};
\draw[->,>=latex] (1) -- (4) node[midway, auto,swap, left=-1mm] {$e_1$};
\draw[->,>=latex] (1) -- (3) node[midway, auto, right=-0.5mm] {$e_2$};
\draw[->,>=latex] (2) -- (3) node[midway, auto,swap, left=-0.5mm] {$e_3$};
\draw[->,>=latex] (2) -- (5) node[midway, auto, right=-1mm] {$e_4$};
\draw[->,>=latex] (3) -- (6) node[midway, auto, right=-1mm] {$e_5$};
\draw[->,>=latex] (6) -- (4) node[midway, auto, left=-0.5mm] {$e_6$};
\draw[->,>=latex] (6) -- (5) node[midway, auto,swap, right=-0.5mm] {$e_7$};
\draw[->,>=latex] (4) -- (7) node[midway, auto,swap, right=-0.5mm] {$e_8$};
\draw[->,>=latex] (5) -- (7) node[midway, auto, left=-0.5mm] {$e_9$};
\end{tikzpicture}
}
\caption{The reverse butterfly network $\mathcal{N}$ has two binary sources
$\sigma_1$ and $\sigma_2$, and one sink $\rho$ that
computes the binary maximum function of the source messages, i.e.,
$f(x_1,x_2)=\max\{ x_1, x_2 \}$, where $\mathcal{A}=\mathcal{O}=\{0,1\}$ and the elements in $\mathcal{A}$ and $\mathcal{O}$ are taken as real numbers.}
\label{fig:butterfly_network}
\end{figure}
\begin{thm}\label{thm_butterfly_network_non-tight}
For the computation problem of the binary maximum function $f=\max$ over the reverse butterfly network $\mathcal{N}$ (depicted in Fig.~\ref{fig:butterfly_network}), the upper bound in Theorem~\ref{thm:upper_bound} on the computing capacity $\mathcal{C}(\mathcal{N}, f=\max)$ is not achievable, i.e., for any $(k,n)$ network code that can compute $f=\max$ over $\mathcal{N}$, the rate
\begin{align*}
\frac{k}{n}< \min_{C\in\Lambda(\mathcal{N})}\dfrac{|C|}{\log_{|\mathcal{A}|}n_{C,f}}=2.
\end{align*}
\end{thm}
We first show that the upper bound in Theorem~\ref{thm:upper_bound} for this computation problem $(\mathcal{N},f)$ is equal to $2$, i.e.,
\begin{align}\label{equ:upb=2}
\min_{C\in\Lambda(\mathcal{N})}\dfrac{|C|}{\log_{|\mathcal{A}|}n_{C,f}}=2.
\end{align}
We claim that for any cut set $C\in\Lambda(\mathcal{N})$,
\begin{align}\label{reverse_BF_UppB}
\left\{
\begin{array}{ll}
|C|\geq 4\text{ and }n_{C,f}\leq 4, & \hbox{if $C$ has a nontrivial strong partition;} \\
|C|\geq 2\text{ and }n_{C,f}\leq 2, & \hbox{otherwise.}
\end{array}
\right.
\end{align}
To see this, we first prove the following for an \textit{arbitrary} network function computation problem $(\mathcal{N},f)$:
\begin{align}\label{ineq:N_Cf_leq_AI}
n_{C,f}\leq \big|\mathcal{A}^{I_C}\big|=|\mathcal{A}|^{|I_C|},\quad \forall~C\in\Lambda(\mathcal{N}).
\end{align}
Let $C$ be a cut set in $\Lambda(\mathcal{N})$ and $\mathcal{P}_C=\{C_1,C_2,\cdots,C_m\}$, $m\geq 1$, be an arbitrary strong partition of~$C$. For notational simplicity, let $I=I_C$, $J=J_C$, $I_l=I_{C_l}$, $1\leq l \leq m$, and $L=I\setminus(\bigcup_{l=1}^m I_l)$.
Recall the definition of $N\big({a}_{L}, {\mathrm{Cl}}[{a}_J]\big)$ in \eqref{no_finer_eq_cl},
where ${\mathrm{Cl}}[{a}_J]$ stands for an arbitrary $(I,a_J)$-equivalence class. It follows from \eqref{equ_2sets} in Section~\ref{sec:proof} that for any $a_L\in \mathcal{A}^{L}$ and $a_J\in \mathcal{A}^{J}$,
\begin{align*}
\big|{\mathrm{Cl}}[{a}_J]\big| & \geq \# \Big\{ (b_{I_1}, b_{I_2}, \cdots, b_{I_m}, a_L): b_{I_l} \in \mathcal{A}^{I_l}, l=1,2,\cdots,m, \textrm{ and } (b_{I_1}, b_{I_2}, \cdots, b_{I_m}, a_{L})\in {\mathrm{Cl}}[a_J]\Big\}\\
&= \sum_{\textrm{ all } \left({\mathrm{cl}}_{I_1}, {\mathrm{cl}}_{I_2}, \cdots, {\mathrm{cl}}_{I_m}\right) \textrm{ s.t. } \atop \left\langle {\mathrm{cl}}_{I_1}, {\mathrm{cl}}_{I_2}, \cdots, {\mathrm{cl}}_{I_m}, a_{L} \right\rangle \subseteq {\mathrm{Cl}}[a_J]} \Big|\big\langle {\mathrm{cl}}_{I_1}, {\mathrm{cl}}_{I_2}, \cdots, {\mathrm{cl}}_{I_m}, a_{L} \big\rangle \Big|\\
&\geq \sum_{\textrm{ all } \left({\mathrm{cl}}_{I_1}, {\mathrm{cl}}_{I_2}, \cdots, {\mathrm{cl}}_{I_m}\right) \textrm{ s.t. } \atop \left\langle {\mathrm{cl}}_{I_1}, {\mathrm{cl}}_{I_2}, \cdots, {\mathrm{cl}}_{I_m}, a_{L} \right\rangle \subseteq {\mathrm{Cl}}[a_J]} 1 = N\big({a}_{L}, {\mathrm{Cl}}[{a}_J]\big).
\end{align*}
Thus,
\begin{align}\label{ineq:N_Cf_leq_AI_pf_1}
N\big({a}_{L}, {\mathrm{Cl}}[{a}_J]\big)\leq \big|{\mathrm{Cl}}[{a}_J]\big|, \quad \forall~a_L\in \mathcal{A}^{L} \text{ and }\forall~a_J\in \mathcal{A}^{J}.
\end{align}
By \eqref{equ:N_Cl}, \eqref{ineq:N_Cf_leq_AI_pf_1} immediately implies that $N\big({\mathrm{Cl}}[{a}_J]\big)\leq \big|{\mathrm{Cl}}[{a}_J]\big|$, and thus
\begin{align*}
\sum_{\text{all }{\mathrm{Cl}}[{a}_J]} N\big({\mathrm{Cl}}[{a}_J]\big)\leq \sum_{\text{all }{\mathrm{Cl}}[{a}_J]} \big|{\mathrm{Cl}}[{a}_J]\big| = |\mathcal{A}^I|,
\end{align*}
where the last equality follows from the fact that all $(I,a_J)$-equivalence classes constitute a partition of~$\mathcal{A}^I$. Finally, by \eqref{def:{a}_J_star0}, \eqref{def:{a}_J_star}, and \eqref{n_C_Parti_1st}, we have
\begin{align*}
n_C(\mathcal{P}_C) = \sum_{\text{all }{\mathrm{Cl}}[{a}_J^*]}
N\big({\mathrm{Cl}}[{a}_J^*]\big) \leq |\mathcal{A}^I|,
\end{align*}
and hence $n_{C,f}\leq |\mathcal{A}^I|$ by \eqref{n_C_f_1st}, proving \eqref{ineq:N_Cf_leq_AI}.
Now, let us return to the proof of \eqref{reverse_BF_UppB} for the network computation problem $(\mathcal{N}, f)$ in Theorem~\ref{thm_butterfly_network_non-tight} by considering the following two cases:
\noindent\textbf{Case 1:} A cut set $C\in\Lambda(\mathcal{N})$ has a nontrivial strong partition.
Let $\mathcal{P}_C=\{C_1,C_2,\cdots,C_m\}$ be a nontrivial strong partition of $C$. Clearly, $m\geq 2$. Since $\mathcal{P}_C$ is a strong partition (see Definition~\ref{def:strong_parti}), we have $I_{C_l} \neq \emptyset$, $\forall~1\leq l \leq m$ and $I_{C_i}\cap I_{C_j}=\emptyset$, $\forall~1\leq i, j \leq m$ and $i\neq j$. Together with $\bigcup_{l=1}^m I_{C_l}\subseteq I_C \subseteq S$, we obtain that
\begin{align*}
2\leq m\leq \sum_{l=1}^m |I_{C_l}| \leq |S| = 2,
\end{align*}
which implies $m=2$, i.e., $\mathcal{P}_C$ is a two-partition given by $\{C_1, C_2\}$, and $|I_{C_1}|=|I_{C_2}|=1$.
We first prove that $|C|\geq 4$. It is readily seen from the network $\mathcal{N}$ that the minimum cut capacity between $\sigma_i$ and $\rho$ is equal to $2$, $i=1,2$. Then, for any cut set $C_i'$ such that $I_{C_i'}=\{\sigma_i\}$, we have $|C_i'|\geq 2$, $i=1,2$. This implies that $|C|=|C_1|+|C_2|\geq 2+2=4$ (e.g., $C=\{e_1,e_2,e_3,e_4\}$ has a unique nontrivial strong partition $\mathcal{P}_C=\big\{C_1=\{e_1,e_2\}, C_2=\{e_3,e_4\}\big\}$ with $I_{C_1}=\{\sigma_1\}$ and $I_{C_2}=\{\sigma_2\}$).
We now prove that $n_{C,f}\leq 4$. This can be obtained from \eqref{ineq:N_Cf_leq_AI} with $|\mathcal{A}|=2$ and $I_C=S$, so that $|\mathcal{A}|^{|I_C|}=| \mathcal{A}|^{|S|}=4$.
\noindent\textbf{Case 2:} A cut set $C\in\Lambda(\mathcal{N})$ has no nontrivial strong partition.
Following the discussion in Case 1, it is easy to see that $|C|\geq 2$, $\forall~C\in\Lambda(\mathcal{N})$, because $C$ separates at least one source node from the sink node $\rho$. To obtain $n_{C,f}\leq 2$, we consider the following two subcases:
\begin{itemize}
\item if $|I_C|=2$, i.e., $C$ is a global cut set (e.g., $C=\{e_8,e_9\}$ with $I_C=\{\sigma_1, \sigma_2\}$), then $n_{C,f}=|f(\mathcal{A}^2)|=2$ (see the discussion below Theorem~\ref{thm:upper_bound} and the discussion below \eqref{eq:2} in Section~\ref{subsec:pre_B});
\item if $|I_C|=1$ (e.g., $C=\{e_1,e_2\}$ with $I_C=\{\sigma_1\}$), then $n_{C,f}\leq |\mathcal{A}|^{|I_C|}=2$ by \eqref{ineq:N_Cf_leq_AI}.
\end{itemize}
\begin{remark}
By means of an evaluation of $n_{C,f}$ specific to the network computation problem $(\mathcal{N},f)$ in Theorem~\ref{thm_butterfly_network_non-tight}, it can be shown that the upper bounds on $n_{C,f}$ in \eqref{reverse_BF_UppB} are in fact tight. Since we do not need this result in the sequel, the details are omitted here.
\end{remark}
\bigskip
Consequently, we can obtain from \eqref{reverse_BF_UppB} that for each cut set $C\in\Lambda(\mathcal{N})$,
\begin{align}\label{equ:upb=2-1}
\dfrac{|C|}{\log_{|\mathcal{A}|}n_{C,f}}\geq 2.
\end{align}
In particular, for the global cut set $C=\{e_8,e_9\}$, we have $|I_C|=2$ and $n_{C,f}=2$ (cf. the first bullet in Case 2 above), so that $|C|/\log_{|\mathcal{A}|}n_{C,f}=2$. Thus, we have proved \eqref{equ:upb=2}.
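These values can also be verified numerically. The short sketch below (Python; the names are ours) evaluates $n_C(\mathcal{P}_C)$ for the cut set $C=\{e_1,e_2,e_3,e_4\}$ with its nontrivial strong partition and for the global cut set $C=\{e_8,e_9\}$ with the trivial partition; in both cases the resulting bound $|C|/\log_{|\mathcal{A}|}n_C(\mathcal{P}_C)$ evaluates to $2$, consistent with \eqref{equ:upb=2}.
\begin{verbatim}
from itertools import product
from math import log2

A = (0, 1)
f = lambda x1, x2: max(x1, x2)          # binary maximum target function

# Cut C = {e1,e2,e3,e4}: I_C = S, J = L = empty, strong partition with I_1 = {sigma_1}
# and I_2 = {sigma_2}; the (I_l)-classes are computed exactly as in the scalar
# partition equivalence relation.
cls1, cls2 = {}, {}
for b in A:
    cls1.setdefault(tuple(f(b, c) for c in A), []).append(b)
    cls2.setdefault(tuple(f(c, b) for c in A), []).append(b)

n_C = 0
for out in {f(x1, x2) for x1, x2 in product(A, repeat=2)}:   # (I, a_J)-classes ~ outputs
    n_C += sum(1 for c1 in cls1.values() for c2 in cls2.values()
               if all(f(b1, b2) == out for b1 in c1 for b2 in c2))
print(4 / log2(n_C))          # cut {e1,e2,e3,e4}: n_C = 4 and 4/log2(4) = 2.0

# Global cut C = {e8,e9} with the trivial partition: n_C = |f(A^2)| = 2.
n_global = len({f(x1, x2) for x1, x2 in product(A, repeat=2)})
print(2 / log2(n_global))     # 2/log2(2) = 2.0
\end{verbatim}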
Toward proving Theorem~\ref{thm_butterfly_network_non-tight}, it remains to prove that $k/n<2$ for any $(k,n)$ network code that can compute $f$ on $\mathcal{N}$, which will be done by contradiction. Assume that the upper bound $2$ is achievable. To be specific, for some positive integer~$n$, there exists a rate-$2$ $(2n, n)$ network code $\mathbf{C}=\{ g_{e_i}(\vec{x}_1, \vec{x}_2): 1\leq i \leq 9 \}$ that can compute the target function $f$ at the sink node $\rho$, where $\vec{x}_l\in \mathcal{A}^{2n}$ stands for $2n$ symbols in $\mathcal{A}$ generated by the source node $\sigma_l$, $l=1,2$, and $g_{e_i}(\vec{x}_1, \vec{x}_2)\in \mathcal{A}^{n}$ is the global encoding function of $e_i$ that contains at most $n$ symbols in $\mathcal{A}$ transmitted on the edge $e_i$, $1\leq i \leq 9$. For notational simplicity, we write $g_{e_i}(\vec{x}_1, \vec{x}_2)$ as $g_{i}(\vec{x}_1, \vec{x}_2)$ for all $1\leq i \leq 9$. We may further simplify $g_{i}(\vec{x}_1, \vec{x}_2)$ to $g_{i}$ when its dependence on $\vec{x}_1$ and $\vec{x}_2$ is implicitly assumed.
Consider the edge set $C=\{e_1, e_4, e_5\}$, which is a global cut set. Since the $(2n,n)$ network code $\mathbf{C}$ can compute the target function $f$, there must exist a decoding function $\psi_C$ from $\mathcal{A}^n \times \mathcal{A}^n \times \mathcal{A}^n$ to $\mathcal{A}^{2n}$ such that
\begin{align}\label{equ:decoding_function_psi_C}
\psi_C\big(g_1(\vec{a}_1, \vec{a}_2), g_4(\vec{a}_1, \vec{a}_2), g_5(\vec{a}_1, \vec{a}_2)\big)=f(\vec{a}_1, \vec{a}_2), \quad \forall\ \vec{a}_l\in \mathcal{A}^{2n},\ l=1,2.
\end{align}
With this, we split the network $\mathcal{N}$ into two sub-networks $\mathcal{N}_1$ and $\mathcal{N}_2$, depicted in Fig.~\ref{fig:butterfly_subnetwork1} and Fig.~\ref{fig:butterfly_subnetwork2}, respectively, where in $\mathcal{N}_1$ an artificial sink node $\rho'$ is created that takes $e_1$, $e_4$, and $e_5$ as its input edges.
\begin{figure}[!t]
\centering
\begin{minipage}[b]{0.5\textwidth}
\centering
\begin{tikzpicture}[x=0.6cm]
\draw (-3.5,0) node[vertex] (1) [label=above:$\sigma_1$] {};
\draw ( 3.5,0) node[vertex] (2) [label=above:$\sigma_2$] {};
\draw ( 0,-1.5) node[vertex] (3) [label=left:] {};
\draw ( 0,-4) node[vertex] (6) [label=below:$\rho'$] {};
\draw[->,>=latex] (1) -- (6) node[midway, auto, swap, left=0mm] {$e_1$};
\draw[->,>=latex] (1) -- (3) node[midway, auto, right=0mm] {$e_2$};
\draw[->,>=latex] (2) -- (3) node[midway, auto,swap, left=0mm] {$e_3$};
\draw[->,>=latex] (2) -- (6) node[midway, auto, right=0mm] {$e_4$};
\draw[->,>=latex] (3) -- (6) node[pos=0.4, right=-1mm] {$e_5$};
\end{tikzpicture}
\caption{The network computation $(\mathcal{N}_1, f)$.}
\label{fig:butterfly_subnetwork1}
\end{minipage}%
\centering
\begin{minipage}[b]{0.5\textwidth}
\centering
\begin{tikzpicture}[x=0.6cm]
\draw (0,0) node[vertex] (2) [label=above:$\sigma_2'$] {};
\draw (-4,0)node[vertex] (1) [label=above:$\sigma_1'$] {};
\draw (4,0) node[vertex] (3) [label=above:$\sigma_3'$] {};
\draw (-2,-2) node[vertex] (1') [label=left:] {};
\draw (2,-2) node[vertex] (2') [label=right:] {};
\draw (0,-4) node[vertex] (0) [label=below:$\rho$] {};
\draw[->,>=latex] (2) -- (1') node[midway, auto, swap, pos=0.3, left=0mm] {$e_6$};
\draw[->,>=latex] (2) -- (2') node[midway, auto,pos=0.3, right=0mm] {$e_7$};
\draw[->,>=latex] (1') -- (0) node[midway, auto, pos=0.3, left=0mm] {$e_8$};
\draw[->,>=latex] (2') -- (0) node[midway, auto, swap, pos=0.3, right=0mm] {$e_9$};
\draw[->,>=latex] (1) -- (1') node[midway, auto, pos=0.3, right=0mm] {$e_1$};
\draw[->,>=latex] (3) -- (2') node[midway, auto, swap, pos=0.3, left=0mm] {$e_4$};
\end{tikzpicture}
\caption{The network computation $(\mathcal{N}_2, F)$.}
\label{fig:butterfly_subnetwork2}
\end{minipage}
\end{figure}
We first consider computing $f$ over $\mathcal{N}_1$, i.e., the network computation problem $(\mathcal{N}_1, f)$ depicted in Fig.~\ref{fig:butterfly_subnetwork1}. Here, $\mathcal{N}_1$ contains two source nodes $\sigma_1$ and $\sigma_2$, and one sink node $\rho'$ that is required to compute the maximum function of the source messages, i.e., $f(x_1,x_2)=\max\{ x_1, x_2 \}$ with $\mathcal{A}=\mathcal{O}=\{0,1\}$. For the $(2n,n)$ network code $\mathbf{C}=\{ g_i(\vec{x}_1, \vec{x}_2): 1\leq i \leq 9 \}$ on $(\mathcal{N},f)$, let $\mathbf{C}_1=\{g_i(\vec{x}_1, \vec{x}_2): 1\leq i \leq 5\}$; then $\mathbf{C}_1$ is a $(2n,n)$ network code induced on $(\mathcal{N}_1, f)$.
On the other hand, we consider another network computation problem $(\mathcal{N}_2, F)$, where the network $\mathcal{N}_2$ is depicted in Fig.~\ref{fig:butterfly_subnetwork2} and the target function $F$, which is induced by the rate-$2$ network code $\mathbf{C}_1$ on $(\mathcal{N}_1,f)$, is given as follows. Let the alphabet of the source messages and the transmitted messages be $\mathcal{A}^n$. The source node $\sigma_l'$ in $\mathcal{N}_2$ generates the source vector in $\mathcal{A}^n$, denoted by $\vec{y}_l$, $l=1,2,3$.
The target function $F$ is defined as
\begin{align}\label{defn:F}
F:\ \big(\mathcal{A}^n\big)\times \big(\mathcal{A}^n\big) \times \big(\mathcal{A}^n\big) \ \longrightarrow &\ \ \ \big(\mathcal{A}^{2n}\big)\nonumber\\
(\vec{y}_1,\vec{y}_2,\vec{y}_3) \ \longmapsto &\ \ \ \psi_C(g_1=\vec{y}_1, g_4=\vec{y}_3, g_5=\vec{y}_2),
\end{align}
where $\psi_C$ is the decoding function of the network code $\mathbf{C}_1$ (cf. \eqref{equ:decoding_function_psi_C}). Note that the target function $F$ is defined in terms of the network code $\mathbf{C}_1$, and we will prove later that $F$ is indeed well-defined.
With the $(2n,n)$ network code $\mathbf{C}$ on $(\mathcal{N}, f)$, let $\mathbf{C}_2=\{g_i(\vec{x}_1, \vec{x}_2): i=1,4,6,7,8,9 \}$. Then $\mathbf{C}_2$ is a $(1,1)$ network code on $(\mathcal{N}_2, F)$. Here $\big(\mathcal{A}^n\big)$ corresponds to $\mathcal{A}$ and $\big(\mathcal{A}^{2n}\big)$ corresponds to $\mathcal{O}$ in the definition of a network code in Section~\ref{sec:preliminaries}. To be specific, for source inputs $(\vec{y}_1, \vec{y}_2, \vec{y}_3)$ generated by $\sigma_1'$, $\sigma_2'$ and $\sigma_3'$, respectively, let $g_1=\vec{y}_1$, $g_4=\vec{y}_3$, $g_6=\theta_6(\vec{y}_2)$, $g_7=\theta_7(\vec{y}_2)$ (which is equivalent to letting $g_5=\vec{y}_2$), $g_8=\theta_8(g_1, g_6)$, and $g_9=\theta_9(g_4, g_7)$, where $\theta_i$ denotes the local encoding function of $e_i$ for $i=6,7,8,9$ in $\mathbf{C}$. The construction of $\mathbf{C}_2$ implies that $\mathcal{C}(\mathcal{N}_2, F)\geq 1$.
However, we will prove in the rest of the section that for any function $F$ induced by a rate-$2$ network code on $(\mathcal{N}_1, f)$, the rate $1$ is not achievable on $(\mathcal{N}_2, F)$, i.e., $\mathcal{C}(\mathcal{N}_2, F)<1$. This immediately leads to a contradiction, which implies that $2$ is not achievable.
\subsection{The Network Computation Problem $(\mathcal{N}_1,f)$}
In this subsection, we will give some properties that every rate-$2$ network code on $(\mathcal{N}_1, f)$ must satisfy. First, we have $\mathcal{C}(\mathcal{N}_1, f)=2$, because Theorem~\ref{thm:upper_bound} implies $\mathcal{C}(\mathcal{N}_1, f)\leq 2$ (for example, consider the cut set $C=\{e_1,e_2\}$, for which $|C|=2$ and $n_{C,f}=2$), and Fig.~\ref{fig:butterfly_network_Part1_b} gives a coding scheme achieving the rate $2$.
\begin{figure}[!t]
\centering
\begin{minipage}[b]{0.5\textwidth}
\centering
\begin{tikzpicture}[x=0.6cm]
\draw (-3,0) node[vertex] (1) [label=above:$\sigma_1$] {};
\draw ( 3,0) node[vertex] (2) [label=above:$\sigma_2$] {};
\draw ( 0,-1.5) node[vertex] (3) [label=left:] {};
\draw ( 0,-4) node[vertex] (6) [label=below:$\rho'$] {};
\draw[->,>=latex] (1) -- (6) node[midway, auto, swap, left=0mm] {$g_1$};
\draw[->,>=latex] (1) -- (3) node[midway, auto, right=0mm] {$g_2$};
\draw[->,>=latex] (2) -- (3) node[midway, auto,swap, left=0mm] {$g_3$};
\draw[->,>=latex] (2) -- (6) node[midway, auto, right=0mm] {$g_4$};
\draw[->,>=latex] (3) -- (6) node[pos=0.4, right=-1mm] {$g_5$};
\end{tikzpicture}
\label{fig:butterfly_network_Part1_a}
\end{minipage}%
\begin{minipage}[b]{0.5\textwidth}
\centering
\begin{tabular}{p{5mm}p{2.1cm}}
\hline
\specialrule{0em}{1pt}{1pt}
$g_1$: & $x_{11}$\\
\specialrule{0em}{1pt}{1pt}
$g_2$: & $x_{12}$\\
\specialrule{0em}{1pt}{1pt}
$g_3$: & $x_{22}$\\
\specialrule{0em}{1pt}{1pt}
$g_4$: & $x_{21}$\\
\specialrule{0em}{1pt}{1pt}
$g_5$: & $\max\{x_{12}, x_{22}\}$\\
\specialrule{0em}{2pt}{2pt}
\hline
\end{tabular}
\vspace{3mm}
\end{minipage}
\caption{A rate-$2$ $(2,1)$ network code $\{g_i:1\leq i \leq 5\}$ on $(\mathcal{N}_1, f=\max)$, where $\vec{x}_1=(x_{11}, x_{12})^\top$ and $\vec{x}_2=(x_{21}, x_{22})^\top$ are source vectors generated by $\sigma_i$, $i=1,2$, respectively.}
\label{fig:butterfly_network_Part1_b}
\end{figure}
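Before proceeding, one can verify exhaustively that the $(2,1)$ code of Fig.~\ref{fig:butterfly_network_Part1_b} indeed computes $f=\max$ at $\rho'$. A minimal sketch is given below (Python), where the decoder applied at $\rho'$ is our own choice of $\psi_C$ and is not specified in the figure.
\begin{verbatim}
from itertools import product

ok = True
for x11, x12, x21, x22 in product((0, 1), repeat=4):
    # edge messages of the (2,1) code in the figure
    g1, g2, g3, g4 = x11, x12, x22, x21
    g5 = max(g2, g3)
    # rho' receives (g1, g4, g5) and applies this decoder (our choice of psi_C)
    decoded = (max(g1, g4), g5)
    ok = ok and decoded == (max(x11, x21), max(x12, x22))
print(ok)   # True: f = max is computed componentwise at rate k/n = 2
\end{verbatim}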
In general, we let $\mathbf{C}_1=\{g_i(\vec{x}_1, \vec{x}_2): 1\leq i \leq 5\}$ be a $(2n, n)$ network code on $\mathcal{N}_1$ with respect to $f$, where $n$ is a positive integer. Since $K_{\{e_1\}}=\{\sigma_1\}$ (cf. \eqref{def_K_C}) in $\mathcal{N}_1$, the global encoding function $g_1(\vec{x}_1,\vec{x}_2)$ only depends on the source inputs $\vec{x}_1$ of $\sigma_1$ and hence we write $g_1(\vec{x}_1,\vec{x}_2)$ as $g_1(\vec{x}_1)$, a function from $\mathcal{A}^{2n}$ to $\mathcal{A}^n$. In fact, $g_1$ is the local encoding function $\theta_1$ of the edge $e_1$ (cf. \eqref{defn_local_function} for the definition), i.e., $\theta_1(\vec{x}_1)=g_1(\vec{x}_1)$. Similarly, we can write $g_2(\vec{x}_1, \vec{x}_2)$, $g_3(\vec{x}_1, \vec{x}_2)$, and $g_4(\vec{x}_1, \vec{x}_2)$ as $g_2(\vec{x}_1)$, $g_3(\vec{x}_2)$, and $g_4(\vec{x}_2)$, respectively. They are also the local encoding functions $\theta_2(\vec{x}_1)$, $\theta_3(\vec{x}_2)$, and $\theta_4(\vec{x}_2)$ corresponding to the edges $e_2$, $e_3$, and $e_4$, respectively. For the edge $e_5$, since $K_{\{e_5\}}=\{\sigma_1, \sigma_2\}$, which means that $g_5(\vec{x}_1,\vec{x}_2)$ may depend on both $\vec{x}_1$ and $\vec{x}_2$, we keep the notation $g_5(\vec{x}_1,\vec{x}_2)$. Then
\begin{align}
g_5(\vec{a}_1, \vec{a}_2)=\theta_5\big(g_2(\vec{a}_1), g_3(\vec{a}_2)\big), \quad \forall\ \vec{a}_l\in \mathcal{A}^{2n},\ l=1,2,
\end{align}
where $\theta_5$ is the local encoding function of the edge $e_5$. With the above, we rewrite the network code $\mathbf{C}_1=\{g_i(\vec{x}_1, \vec{x}_2): 1\leq i \leq 5\}$ as
\begin{align*}
\mathbf{C}_1=\big\{g_1(\vec{x}_1), g_2(\vec{x}_1), g_3(\vec{x}_2), g_4(\vec{x}_2), g_5(\vec{x}_1, \vec{x}_2)\big\},
\end{align*}
or $\mathbf{C}_1=\{g_i: 1\leq i \leq 5\}$ for simplicity.
\begin{lemma}\label{lemma1:non_tight}
Let $\mathbf{C}_1=\{g_i: 1\leq i \leq 5\}$ be a $(2n,n)$ network code on $(\mathcal{N}_1, f=\max)$. Then for any two distinct vectors $\vec{a}$ and $\vec{b}$ in $\mathcal{A}^{2n}$,
\begin{align}
\big( g_1(\vec{a}), g_2(\vec{a}) \big)&\neq
\big( g_1(\vec{b}), g_2(\vec{b}) \big),\label{equ:1_lemma7}\\
\big( g_3(\vec{a}), g_4(\vec{a}) \big)&\neq
\big( g_3(\vec{b}), g_4(\vec{b}) \big).\label{equ:2_lemma7}
\end{align}
In other words, $\big( g_1(\vec{x}_1), g_2(\vec{x}_1) \big)$ (resp. $\big( g_3(\vec{x}_2), g_4(\vec{x}_2) \big)$), regarded as a function from $\mathcal{A}^{2n}$ to $\mathcal{A}^n\times \mathcal{A}^n$, is a bijection.
\end{lemma}
\begin{IEEEproof}
We first prove by contradiction that \eqref{equ:1_lemma7} holds for any two distinct vectors $\vec{a}$ and $\vec{b}$ in $\mathcal{A}^{2n}$. Assume to the contrary that there exist two distinct vectors $\vec{a}$ and $\vec{b}$ in $\mathcal{A}^{2n}$ such that
\begin{align}
\big( g_1(\vec{a}), g_2(\vec{a}) \big)=
\big( g_1(\vec{b}), g_2(\vec{b}) \big),
\end{align}
i.e., $g_1(\vec{a})=g_1(\vec{b})$ and $g_2(\vec{a})=g_2(\vec{b})$.
Let $\vec{x}_2=\vec{0}$, the all-zero $2n$-vector in $\mathcal{A}^{2n}$. By $g_2(\vec{a})=g_2(\vec{b})$, we obtain
\begin{align}
g_5\big(\vec{a}, \vec{0}\big)
=\theta_5\big(g_2(\vec{a}), g_3(\vec{0})\big)
=\theta_5\big(g_2(\vec{b}), g_3(\vec{0})\big)=g_5\big(\vec{b}, \vec{0}\big).
\end{align}
Together with $g_1(\vec{a})=g_1(\vec{b})$, we immediately have
\begin{align}\label{equ:non_tight_lemma1_1}
\big(g_1(\vec{a}), g_4(\vec{0}), g_5(\vec{a}, \vec{0}) \big)
=\big(g_1(\vec{b}), g_4(\vec{0}), g_5(\vec{b}, \vec{0}) \big),
\end{align}
i.e., for the distinct source inputs $(\vec{a}, \vec{0})$ and $(\vec{b}, \vec{0})$, the two corresponding messages transmitted on $C=\{e_1,e_4,e_5\}$ are the same.
Since the network code $\mathbf{C}_1$ can compute $f$ with zero error and the cut set $C$ is global, we obtain
\begin{align}
\vec{a}=f(\vec{a}, \vec{0})=\psi_C\big(g_1(\vec{a}), g_4(\vec{0}), g_5(\vec{a}, \vec{0})\big)=\psi_C\big(g_1(\vec{b}), g_4(\vec{0}), g_5(\vec{b}, \vec{0})\big)=f(\vec{b}, \vec{0})=\vec{b},
\end{align}
where $\psi_C$ is the decoding function of $\mathbf{C}_1$. This contradicts the assumption that $\vec{a}\neq\vec{b}$.
The same result for $(g_3, g_4)$ can be proved by using the same argument. The proof is completed.
\end{IEEEproof}
We now introduce some notation that will be used frequently in the sequel:
\begin{itemize}
\item Denote by $g_i(\mathcal{A}^{2n})$ the {\em image} of $\mathcal{A}^{2n}$ under $g_i$ for $i=1,2,3,4$, i.e.,
\begin{align}\label{notation_image_set_g_i}
g_i(\mathcal{A}^{2n})=\big\{g_i(\vec{a}):\ \vec{a}\in \mathcal{A}^{2n}\big\}\subseteq \mathcal{A}^n, \ i=1,2,3,4.
\end{align}
Similarly, denote by $g_5(\mathcal{A}^{2n}, \mathcal{A}^{2n})$ the {\em image} of $\mathcal{A}^{2n}\times\mathcal{A}^{2n}$ under $g_5$, i.e.,
\begin{align}\label{equ:35}
g_5(\mathcal{A}^{2n}, \mathcal{A}^{2n})=\big\{g_5(\vec{a}, \vec{b}):\ (\vec{a}, \vec{b})\in \mathcal{A}^{2n}\times\mathcal{A}^{2n} \big\}\subseteq \mathcal{A}^n.
\end{align}
\item Let $\vec{\gamma}\in \mathcal{A}^n$. For $1\leq i \leq 5$, denote by $g_i^{-1}(\vec{\gamma})$ the {\em inverse image} of $\vec{\gamma}$ under $g_i$, i.e.,
\begin{align}
g_i^{-1}(\vec{\gamma})&=\big\{\vec{a}\in \mathcal{A}^{2n}:\ g_i(\vec{a})=\vec{\gamma} \big\}\subseteq \mathcal{A}^{2n},\quad i=1,2,3,4; \\
g_5^{-1}(\vec{\gamma})&=\big\{(\vec{a}, \vec{b}) \in \mathcal{A}^{2n}\times \mathcal{A}^{2n}:\ g_5(\vec{a}, \vec{b})=\vec{\gamma} \big\}\subseteq \mathcal{A}^{2n}\times\mathcal{A}^{2n}.
\end{align}
\end{itemize}
\begin{lemma}\label{thm:non-tight}
Let $\mathbf{C}_1=\{g_i: 1\leq i \leq 5\}$ be a $(2n,n)$ network code on $(\mathcal{N}_1, f=\max)$.
Then
\begin{enumerate}
\item All global encoding functions $g_i$, $1\leq i \leq 5$, are surjective, i.e.,
\begin{align*}
g_i(\mathcal{A}^{2n})=g_5(\mathcal{A}^{2n}, \mathcal{A}^{2n})=\mathcal{A}^n,\quad i=1,2,3,4.
\end{align*}
\item For every $\vec{\gamma}\in \mathcal{A}^n$ and each $i=1,2,3,4$,
\begin{align*}
\left|g_i^{-1}(\vec{\gamma})\right|=2^n.
\end{align*}
In other words, $\left\{g_i^{-1}(\vec{\gamma}):\ \vec{\gamma}\in \mathcal{A}^n \right\}$ forms an equipartition of $\mathcal{A}^{2n}$.
\end{enumerate}
\end{lemma}
\begin{IEEEproof}
We first prove $g_i(\mathcal{A}^{2n})=\mathcal{A}^n$ for $i=1,2,3,4$. Since $\big( g_1(\vec{x}_1), g_2(\vec{x}_1) \big)$ (resp. $\big( g_3(\vec{x}_2), g_4(\vec{x}_2) \big)$) is a bijection from $\mathcal{A}^{2n}$ to $\mathcal{A}^n\times \mathcal{A}^n$ by Lemma~\ref{lemma1:non_tight}, we obtain
\begin{align}\label{equ:cal_g1_g2}
g_1(\mathcal{A}^{2n})=g_2(\mathcal{A}^{2n})=\mathcal{A}^n\quad (\text{resp. } g_3(\mathcal{A}^{2n})=g_4(\mathcal{A}^{2n})=\mathcal{A}^n).
\end{align}
Before proving $g_5(\mathcal{A}^{2n}, \mathcal{A}^{2n})=\mathcal{A}^n$, we first prove 2) in Lemma~\ref{thm:non-tight} that $\big|g_i^{-1}(\vec{\gamma})\big|=2^n$, $\forall~\vec{\gamma}\in \mathcal{A}^n$ for $i=1,2,3,4$. Consider $g_1$ and assume that there exists $\vec{\gamma}$ in $\mathcal{A}^n$ such that $\big|g_1^{-1}(\vec{\gamma})\big|\neq 2^n$. We further assume that $\big|g_1^{-1}(\vec{\gamma})\big|>2^n$, which does not lose any generality. This is explained as follows. Since $g_1(\mathcal{A}^{2n})=\mathcal{A}^n$ by \eqref{equ:cal_g1_g2}, we obtain that $g_1^{-1}(\vec{\gamma})\neq \emptyset$, $\forall~\vec{\gamma}\in \mathcal{A}^n$ and so $\big\{ g_1^{-1}(\vec{\gamma}):~\vec{\gamma}\in \mathcal{A}^n \big\}$ constitutes a partition of $\mathcal{A}^{2n}$ that contains $|\mathcal{A}^n|=2^n$ blocks.\footnote{The definition of a partition requires that every subset of a partition is nonempty and these subsets are called {\em blocks}.} Hence, if $\big|g_1^{-1}(\vec{\gamma})\big|<2^n$, there must exist another $\vec{\gamma}' \in \mathcal{A}^{n}$ such that $\big|g_1^{-1}(\vec{\gamma}')\big|> 2^n$ because otherwise $\sum_{\vec{\gamma} \in \mathcal{A}^{n}}|g_1^{-1}(\vec{\gamma})\big|<|\mathcal{A}|^{2n}$, a contradiction.
Now, since $\big|g_1^{-1}(\vec{\gamma})\big|> 2^n$ and $\big|g_2(\mathcal{A}^{2n})\big|=2^n$ by \eqref{equ:cal_g1_g2}, there exist two distinct $2n$-column vectors $\vec{a}, \vec{b} \in g_1^{-1}(\vec{\gamma})$ such that
\begin{align}\label{equ:non_tight_thm1_2}
g_2(\vec{a})=g_2(\vec{b}).
\end{align}
Consider $\vec{x}_2=\vec{0}$. By \eqref{equ:non_tight_thm1_2}, we see that
\begin{align}\label{equ:non_tight_thm1_3}
g_5\big( \vec{a}, \vec{0} \big)=\theta_5\big( g_2(\vec{a}), g_3(\vec{0}) \big)=\theta_5\big( g_2(\vec{b}), g_3(\vec{0}) \big)=g_5\big( \vec{b}, \vec{0} \big).
\end{align}
Together with $g_1(\vec{a})=g_1(\vec{b})=\vec{\gamma}$, this immediately implies that for $C=\{e_1, e_4, e_5 \}$,
\begin{align}\label{equ85}
g_C\big(\vec{a}, \vec{0}\big)=\big( g_1(\vec{a}), g_4(\vec{0}), g_5(\vec{a}, \vec{0}) \big)=\big( g_1(\vec{b}), g_4(\vec{0}), g_5(\vec{b}, \vec{0}) \big)=g_C\big( \vec{b}, \vec{0} \big).
\end{align}
Since $C$ is global ($I_C=S$) and $\psi_C$ is the decoding function of the network code $\mathbf{C}_1$, we obtain by \eqref{equ85} that
\begin{align}
\vec{a}=f(\vec{a}, \vec{0})&=\psi_C\big(g_C(\vec{a}, \vec{0})\big)=\psi_C\big(g_C(\vec{b}, \vec{0})\big)=f(\vec{b}, \vec{0})=\vec{b},\label{equ:non_tight_thm1_4}
\end{align}
a contradiction to $\vec{a}\neq \vec{b}$. Thus, we have proved that $\big|g_1^{-1}(\vec{\gamma})\big|=2^n$, $\forall~\vec{\gamma}\in \mathcal{A}^n$.
By a symmetrical argument, we can prove that $\big|g_2^{-1}(\vec{\gamma})\big|=2^n$, $\forall~\vec{\gamma}\in \mathcal{A}^n$. Similarly, we can prove that $\big|g_3^{-1}(\vec{\gamma})\big|=\big|g_4^{-1}(\vec{\gamma})\big|=2^n$, $\forall~\vec{\gamma}\in \mathcal{A}^n$.
Now, we proceed to prove that $g_5(\mathcal{A}^{2n}, \mathcal{A}^{2n})=\mathcal{A}^n$. For any $n$-vector $\vec{\gamma}$ in $\mathcal{A}^n$, we will prove that
\begin{align}\label{equ:pf1}
\{ g_5(\vec{a}, \vec{0}):\ \vec{a}\in g_1^{-1}(\vec{\gamma})\} = \mathcal{A}^n,
\end{align}
which, together with $g_5(\mathcal{A}^{2n}, \mathcal{A}^{2n})\subseteq \mathcal{A}^n$, implies that $g_5(\mathcal{A}^{2n}, \mathcal{A}^{2n})=\mathcal{A}^n$.
We assume the contrary of \eqref{equ:pf1}, or equivalently,
\begin{align}\label{equ:pf2}
\big|\{ g_5(\vec{a}, \vec{0}):\ \vec{a}\in g_1^{-1}(\vec{\gamma}) \} \big|<2^n.
\end{align}
Since we have proved that $\big|g_1^{-1}(\vec{\gamma})\big|=2^n$, by \eqref{equ:pf2} there exist two distinct vectors $\vec{a}$ and $\vec{b}$ in $g_1^{-1}(\vec{\gamma})$ such that
\begin{align}\label{equ:non_tight_thm1_6}
g_5(\vec{a}, \vec{0})=g_5(\vec{b}, \vec{0}).
\end{align}
By comparing \eqref{equ:non_tight_thm1_6} with \eqref{equ:non_tight_thm1_3} and applying the argument following \eqref{equ:non_tight_thm1_3}, we obtain \eqref{equ:non_tight_thm1_4}, a contradiction to $\vec{a}\neq \vec{b}$. Hence, we have proved \eqref{equ:pf1}. This completes the proof of the lemma.
\end{IEEEproof}
\begin{removeEX4}
We can immediately obtain the following corollary from Lemmas~\ref{lemma1:non_tight}~and~\ref{thm:non-tight}.
\begin{cor}\label{cor_non-tight-2}
Let $\mathbf{C}_1=\{g_i: 1\leq i \leq 5\}$ be a $(2n,n)$ network code on $(\mathcal{N}_1, f=\max)$. For each $\vec{\gamma}\in \mathcal{A}^n$,
\begin{align*}
\left\{ g_1(\vec{a}):\ \vec{a}\in g_2^{-1}(\vec{\gamma})\right\}=\left\{ g_2(\vec{a}):\ \vec{a}\in g_1^{-1}(\vec{\gamma})\right\}=\mathcal{A}^n,
\end{align*}
and
\begin{align*}
\left\{ g_3(\vec{a}):\ \vec{a}\in g_4^{-1}(\vec{\gamma})\right\}=\left\{ g_4(\vec{a}):\ \vec{a}\in g_3^{-1}(\vec{\gamma})\right\}=\mathcal{A}^n.
\end{align*}
\end{cor}
\begin{IEEEproof}
For each $\vec{\gamma}\in \mathcal{A}^n$, it follows from Lemma~\ref{lemma1:non_tight} that $g_1(\vec{a})$, $\vec{a}\in g_2^{-1}(\vec{\gamma})$ are distinct.
Together with $\big|g_2^{-1}(\vec{\gamma})\big|=2^n$ by Lemma~\ref{thm:non-tight}, we obtain that $g_1(\vec{a})$, $\vec{a}\in g_2^{-1}(\vec{\gamma})$ are $2^n$ distinct vectors in $\mathcal{A}^n$, i.e., $\left\{ g_1(\vec{a}):\ \vec{a}\in g_2^{-1}(\vec{\gamma})\right\}=\mathcal{A}^n$. By the same argument, we can prove that
\begin{align*}
\left\{ g_2(\vec{a}):\ \vec{a}\in g_1^{-1}(\vec{\gamma})\right\}=\left\{ g_3(\vec{a}):\ \vec{a}\in g_4^{-1}(\vec{\gamma})\right\}=\left\{ g_4(\vec{a}):\ \vec{a}\in g_3^{-1}(\vec{\gamma})\right\}=\mathcal{A}^n,
\end{align*}
and so the proof is accomplished.
\end{IEEEproof}
In the following example, we use the coding scheme depicted in Fig.~\ref{fig:butterfly_network_Part1_b} to illustrate the concepts and conclusions mentioned above.
\begin{example}\label{eg:N_1}
The coding scheme depicted in Fig.~\ref{fig:butterfly_network_Part1_b} is a $(2,1)$ network code for the network computation problem $(\mathcal{N}_1, f)$. First, we see that $\big( g_1(\vec{x}_1), g_2(\vec{x}_1) \big)=(x_{11}, x_{12})$ and $\big( g_3(\vec{x}_2), g_4(\vec{x}_2) \big)=(x_{22}, x_{21})$ are two bijections from $\mathcal{A}^2$ to $\mathcal{A} \times \mathcal{A}$, where $\vec{x}_1=(x_{11}, x_{12})^\top$ and $\vec{x}_2=(x_{21}, x_{22})^\top$. Also,
\begin{align*}
g_1(\mathcal{A}^2)&=\{ g_1(\left[\begin{smallmatrix}0\\0\end{smallmatrix}\right])
=g_1(\left[\begin{smallmatrix}0\\1\end{smallmatrix}\right])=0,
g_1(\left[\begin{smallmatrix}1\\0\end{smallmatrix}\right])
=g_1(\left[\begin{smallmatrix}1\\1\end{smallmatrix}\right])=1\}=\mathcal{A},\\
g_2(\mathcal{A}^2)&=\{ g_2(\left[\begin{smallmatrix}0\\0\end{smallmatrix}\right])
=g_2(\left[\begin{smallmatrix}1\\0\end{smallmatrix}\right])=0,
g_2(\left[\begin{smallmatrix}0\\1\end{smallmatrix}\right])
=g_2(\left[\begin{smallmatrix}1\\1\end{smallmatrix}\right])=1\}=\mathcal{A},\\
g_3(\mathcal{A}^2)&=\{ g_3(\left[\begin{smallmatrix}0\\0\end{smallmatrix}\right])
=g_3(\left[\begin{smallmatrix}1\\0\end{smallmatrix}\right])=0,
g_3(\left[\begin{smallmatrix}0\\1\end{smallmatrix}\right])
=g_3(\left[\begin{smallmatrix}1\\1\end{smallmatrix}\right])=1\}=\mathcal{A},\\
g_4(\mathcal{A}^2)&=\{ g_4(\left[\begin{smallmatrix}0\\0\end{smallmatrix}\right])
=g_4(\left[\begin{smallmatrix}0\\1\end{smallmatrix}\right])=0,
g_4(\left[\begin{smallmatrix}1\\0\end{smallmatrix}\right])
=g_4(\left[\begin{smallmatrix}1\\1\end{smallmatrix}\right])=1\}=\mathcal{A},\\
g_5(\mathcal{A}^2, \mathcal{A}^2)&=\{\max\{x_{12}, x_{22}\}: x_{12}\in \mathcal{A}, x_{22}\in \mathcal{A}\}=\mathcal{A}.
\end{align*}
In addition, we see that
\begin{align*}
g_1^{-1}(0)=\{\left[\begin{smallmatrix}0\\0\end{smallmatrix}\right], \left[\begin{smallmatrix}0\\1\end{smallmatrix}\right]\},\ g_1^{-1}(1)=\{\left[\begin{smallmatrix}1\\0\end{smallmatrix}\right], \left[\begin{smallmatrix}1\\1\end{smallmatrix}\right]\},
\end{align*}
and thus
$|g_1^{-1}(0)|=|g_1^{-1}(1)|=|\mathcal{A}|=2$. We also have
\begin{align*}
|g_i^{-1}(0)|=|g_i^{-1}(1)|=|\mathcal{A}|=2, \quad i=2,3,4.
\end{align*}
Then, the $(2,1)$ network code in Fig.~\ref{fig:butterfly_network_Part1_b} for $(\mathcal{N}_1,f)$ satisfies Lemmas~\ref{lemma1:non_tight}~and~\ref{thm:non-tight}, and Corollary~\ref{cor_non-tight-2}.
\end{example}
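Since the alphabet is binary and $n=1$, all of the above properties can also be verified exhaustively by computer. The following short Python sketch (our own illustration; the helper names are not part of the formal development) checks the zero-error computation of $f=\max$ as well as the bijection and equipartition properties stated in Lemmas~\ref{lemma1:non_tight} and~\ref{thm:non-tight}.
\begin{verbatim}
from itertools import product

A = (0, 1)  # the binary alphabet

def f_max(x1, x2):
    # the target function f = max, applied componentwise
    return tuple(max(a, b) for a, b in zip(x1, x2))

def encode(x1, x2):
    # global encoding functions of the (2,1) code above
    g1, g2 = x1[0], x1[1]
    g3, g4 = x2[1], x2[0]
    g5 = max(g2, g3)
    return (g1, g2, g3, g4, g5)

def decode(g1, g4, g5):
    # decoding function psi_C at the sink, cut set C = {e1, e4, e5}
    return (max(g1, g4), g5)

# zero-error computation of f on all 16 pairs of source inputs
for x1, x2 in product(product(A, repeat=2), repeat=2):
    g1, g2, g3, g4, g5 = encode(x1, x2)
    assert decode(g1, g4, g5) == f_max(x1, x2)

# (g1, g2) and (g3, g4) are bijections from A^2 to A x A
assert len({encode(x1, (0, 0))[:2] for x1 in product(A, repeat=2)}) == 4
assert len({encode((0, 0), x2)[2:4] for x2 in product(A, repeat=2)}) == 4

# |g_i^{-1}(gamma)| = 2 for i = 1, 2, 3, 4 and every gamma in A
for i in range(4):
    for gamma in A:
        assert sum(1 for x in product(A, repeat=2)
                   if encode(x, x)[i] == gamma) == 2
print("all checks passed")
\end{verbatim}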
\end{removeEX4}
\subsection{The Network Computation Problem $(\mathcal{N}_2,F)$}
\begin{figure}[t]
\centering
{
\begin{tikzpicture}[x=0.7cm]
\draw (0,0) node[vertex] (2) [label=above:$\sigma_2'$] {};
\draw (-2,-1.5) node[vertex] (1') [label=left:] {};
\draw (2,-1.5) node[vertex] (2') [label=right:] {};
\draw (0,-3) node[vertex] (0) [label=below:$\rho$] {};
\draw[->,>=latex] (2) -- (1') node[midway, auto, swap, pos=0.3, left=0mm] {$e_6$};
\draw[->,>=latex] (2) -- (1') node[midway, auto, pos=0.7, right=0mm] {$g_6$};
\draw[->,>=latex] (2) -- (2') node[midway, auto,pos=0.3, right=0mm] {$e_7$};
\draw[->,>=latex] (2) -- (2') node[midway, auto, swap, pos=0.7, left=0mm] {$g_7$};
\draw[->,>=latex] (1') -- (0) node[midway, auto, swap, pos=0.7, left=0mm] {$g_8$};
\draw[->,>=latex] (1') -- (0) node[midway, auto, pos=0.3, right=0mm] {$e_8$};
\draw[->,>=latex] (2') -- (0) node[midway, auto, pos=0.7, right=0mm] {$g_9$};
\draw[->,>=latex] (2') -- (0) node[midway, auto, swap, pos=0.3, left=0mm] {$e_9$};
\draw node[vertex,label=above:$\sigma_1'$] at (-4,0) (1) {};
\draw[->,>=latex] (1) -- (1') node[midway, auto,swap,pos=0.7, left=0mm] {$g_1$};
\draw[->,>=latex] (1) -- (1') node[midway, auto, pos=0.3, right=0mm] {$e_1$};
\draw node[vertex,label=above:$\sigma_3'$] at (4,0) (3) {};
\draw[->,>=latex] (3) -- (2') node[midway, auto, pos=0.7, right=0mm] {$g_4$};
\draw[->,>=latex] (3) -- (2') node[midway, auto, swap, pos=0.3, left=0mm] {$e_4$};
\end{tikzpicture}
}
\caption{The network $\mathcal{N}_2=(G_2, S_2=\{\sigma_1', \sigma_2', \sigma_3' \}, \rho)$.}
\label{fig:2}
\end{figure}
From the first paragraph of the last subsection, we have $\mathcal{C}(\mathcal{N}_1,f)=2$. Then, let $\mathbf{C}_1=\{g_i: 1\leq i \leq 5\}$ be an arbitrary rate-$2$ $(2n,n)$ network code on $(\mathcal{N}_1,f)$, where $n$ is a positive integer. Consider the target function $F$, induced by the network code $\mathbf{C}_1$ as given in \eqref{defn:F}, which is required to be computed on the network $\mathcal{N}_2$ (see Fig.~\ref{fig:2}).
To show that the target function $F$ is well-defined, we need to show that $F(\vec{y}_1,\vec{y}_2,\vec{y}_3)$ is defined for every input $(\vec{y}_1,\vec{y}_2,\vec{y}_3)\in(\mathcal{A}^n)\times (\mathcal{A}^n) \times (\mathcal{A}^n)$. To see this, we only need to observe that $g_1(\mathcal{A}^{2n})=g_5(\mathcal{A}^{2n}, \mathcal{A}^{2n})=g_4(\mathcal{A}^{2n})=\mathcal{A}^n$ by Lemma~\ref{thm:non-tight} and thus the domain of $F$ is $(\mathcal{A}^n) \times (\mathcal{A}^n) \times (\mathcal{A}^n)$.
The following theorem asserts that for any target function $F$ induced by a rate-$2$ network code $\mathbf{C}_1$ for $(\mathcal{N}_1, f)$, it is impossible for $(\mathcal{N}_2, F)$ to achieve the rate $1$, i.e., $\mathcal{C}(\mathcal{N}_2, F)<1$.
\begin{thm}\label{thm:Cap_N_2_F}
Let $F$ be a target function induced by a rate-$2$ network code on $(\mathcal{N}_1, f=\max)$ as given in \eqref{defn:F}. Then $\mathcal{C}(\mathcal{N}_2, F)<1$.
\end{thm}
To prove Theorem~\ref{thm:Cap_N_2_F}, we first prove Lemma~\ref{thm:no_equ_classes} after explicitly characterizing two equivalence relations. Consider a global cut set $\widehat{C}=\{ e_8, e_9 \}$ in the network $\mathcal{N}_2$. Denote $I_{\widehat{C}}$ and $J_{\widehat{C}}$ by $I$ and $J$, respectively for notational simplicity. Then, $I=S_2=\{\sigma_1', \sigma_2', \sigma_3'\}$, the set of source nodes in $\mathcal{N}_2$, and $J=\emptyset$ so that $\vec{a}_J$ is an empty vector. Hence, the $(I, \vec{a}_J)$-equivalence relation is given as follows (see Definition~\ref{def:ec}):
$\vec{\alpha}_{S_2}$ and $\vec{\beta}_{S_2}$ in $(\mathcal{A}^n)^3$ are $(I,\vec{a}_J)$-equivalent, if
\begin{align}\label{equ_F_A}
F(\vec{\alpha}_{S_2})=F(\vec{\beta}_{S_2}),\quad \text{ or equivalently, }\quad \psi_C(\vec{\alpha}_{S_2})=\psi_C(\vec{\beta}_{S_2}).
\end{align}
For an $(I,\vec{a}_J)$-equivalence class, let $\vec{m}$ be the common value of $F(\vec{\alpha}_{S_2})$ for all $\vec{\alpha}_{S_2}$ in the equivalence class. Then we see that the equivalence class is uniquely identified by $\vec{m}$. We claim that $\forall~\vec{m}\in \mathcal{A}^{2n}$,
\begin{align*}
F^{-1}(\vec{m})\triangleq \big\{ \vec{\alpha}_{S_2}\in (\mathcal{A}^n)\times (\mathcal{A}^n) \times (\mathcal{A}^n): F(\vec{\alpha}_{S_2})=\vec{m} \big\}\neq \emptyset.
\end{align*}
It then follows that the total number of $(I,\vec{a}_J)$-equivalence classes is $|\mathcal{A}^{2n}|=2^{2n}$.
Consider a fixed $\vec{m}\in\mathcal{A}^{2n}$. Note that $f(\vec{0},\vec{m})=\max\{\vec{0}, \vec{m}\}=\vec{m}$, and it follows from \eqref{defn:F} that
\begin{align*}
F\big(\vec{y}_1=g_1(\vec{0}), \vec{y}_2=g_5(\vec{0},\vec{m}), \vec{y}_3=g_4(\vec{m})\big)=\vec{m}.
\end{align*}
This shows that $F^{-1}(\vec{m})\neq \emptyset$, proving the claim.
Next, we consider the partition equivalence relation (see Definition~\ref{defn_P_E_Relation}) with respect to $\widehat{C}$. The unique nontrivial (strong) partition of $\widehat{C}$ is $\big\{\widehat{C}_1=\{ e_8 \}, \widehat{C}_2=\{ e_9 \}\big\}$, denoted by $\mathcal{P}_{\widehat{C}}$, and $I_{\widehat{C}_1}=\{\sigma_1'\}$, $I_{\widehat{C}_2}=\{\sigma_3'\}$, and $I\setminus(I_{\widehat{C}_1}\cup I_{\widehat{C}_2})=\{\sigma_2'\}$. Let $I_1=I_{\widehat{C}_1}$, $I_{2}=I_{\widehat{C}_2}$, and $L=I\setminus(I_{1}\cup I_{2})=\{\sigma_2'\}$. For $\vec{y}_L=(\vec{y}_i: \sigma_i'\in L)=\vec{y}_2=\vec{\gamma}$, an arbitrary vector in $(\mathcal{A}^n)^{L}=\mathcal{A}^n$, by Definition~\ref{defn_P_E_Relation}, we say that $\vec{\alpha}$ and $\vec{\beta}$ in $(\mathcal{A}^n)^{I_1}=\mathcal{A}^n$ are $(I_1, \vec{\gamma}, \vec{a}_J)$-equivalent if for each $\vec{\eta}\in (\mathcal{A}^n)^{I_2}=\mathcal{A}^n$,
\begin{align}\label{equ:F_B}
F(\vec{y}_1=\vec{\alpha}, \vec{y}_2=\vec{\gamma}, \vec{y}_3=\vec{\eta})=
F(\vec{y}_1=\vec{\beta}, \vec{y}_2=\vec{\gamma}, \vec{y}_3=\vec{\eta}),
\end{align}
or equivalently,
\begin{align}
\psi_C(g_1=\vec{\alpha}, g_4=\vec{\eta}, g_5=\vec{\gamma})=
\psi_C(g_1=\vec{\beta}, g_4=\vec{\eta}, g_5=\vec{\gamma}),
\end{align}
where as given in \eqref{defn:F}, $g_1=\vec{y}_1$, $g_4=\vec{y}_3$, and $g_5=\vec{y}_2$. Similarly, we can define the {\em $(I_2, \vec{\gamma}, \vec{a}_J)$-equivalence relation}.
\begin{lemma}\label{thm:no_equ_classes}
For every $\vec{\gamma} \in (\mathcal{A}^n)^L=\mathcal{A}^n$, the total number of $(I_l, \vec{\gamma}, \vec{a}_J)$-equivalence classes is $2^n$, and every vector $\vec{\alpha}$ in $(\mathcal{A}^n)^{I_l}=\mathcal{A}^n$ by itself forms an $(I_l, \vec{\gamma}, \vec{a}_J)$-equivalence class, $l=1,2$.
\end{lemma}
\begin{IEEEproof}
By symmetry, we only need to prove the lemma for $l=1$.
Fix $\vec{\gamma} \in \mathcal{A}^n$. We will prove that $(\mathcal{A}^n)^{I_1}=\mathcal{A}^n$ is partitioned into $2^n$ $(I_1, \vec{\gamma}, \vec{a}_J)$-equivalence classes. Equivalently, we will prove that any two distinct vectors $\vec{\alpha}$ and $\vec{\beta}$ in $(\mathcal{A}^n)^{I_1}=\mathcal{A}^n$ are not $(I_1, \vec{\gamma}, \vec{a}_J)$-equivalent, i.e.,
\begin{align*}
\exists\ \vec{\eta}\in (\mathcal{A}^n)^{I_2}=\mathcal{A}^n,\ \text{ s.t. }\
\psi_C(g_1=\vec{\alpha}, g_4=\vec{\eta}, g_5=\vec{\gamma})\neq
\psi_C(g_1=\vec{\beta}, g_4=\vec{\eta}, g_5=\vec{\gamma}).
\end{align*}
Let $\mathcal{L}$ be the set of all possible image values under the local encoding function $\theta_5$, i.e.,
\begin{align}\label{eqU:95}
\mathcal{L} = \{ \theta_5(\vec{\xi},\vec{\eta}):~(\vec{\xi}, \vec{\eta})\in {\mathcal{A}^n}\times \mathcal{A}^n \}.
\end{align}
Since $g_2(\mathcal{A}^{2n})=g_3(\mathcal{A}^{2n})=\mathcal{A}^{n}$ by Lemma~\ref{thm:non-tight}, it follows from \eqref{eqU:95} that
\begin{align}
\mathcal{L}=\big\{ \theta_5\big(g_2(\vec{a}_1),g_3(\vec{a}_2)\big):~(\vec{a}_1, \vec{a}_2)\in {\mathcal{A}^{2n}}\times \mathcal{A}^{2n} \big\}=g_5(\mathcal{A}^{2n}, \mathcal{A}^{2n})=\mathcal{A}^n,
\end{align}
where the last equality follows from Lemma~\ref{thm:non-tight}.
Now, we let
\begin{align}
\mathcal{L}_1=\big\{ \theta_5\big(g_2=\vec{\xi}, g_3(\vec{0})\big):\ \vec{\xi} \in \mathcal{A}^n \big\},
\end{align}
which is the subset of $\mathcal{L}$ obtained by fixing $\vec{\eta}=g_3(\vec{0})\in \mathcal{A}^n$, where $\vec{0}$ is the all-zero vector in $\mathcal{A}^{2n}$. In the following, we prove by contradiction that $\mathcal{L}_1=\mathcal{L}=\mathcal{A}^n$. Assume otherwise, i.e., $\mathcal{L}_1\subsetneq\mathcal{A}^n$. Then, since $\vec{\xi}$ ranges over all $2^n$ elements of $\mathcal{A}^n$ while $|\mathcal{L}_1|<2^n$, there exist two distinct $\vec{\xi}_1, \vec{\xi}_2 \in \mathcal{A}^n$ such that
$$\theta_5\big(\vec{\xi}_1, g_3(\vec{0})\big)=\theta_5\big(\vec{\xi}_2, g_3(\vec{0})\big).$$
Now, for any $\vec{\alpha}\in \mathcal{A}^n$, we have
\begin{align}
\Big( g_1=\vec{\alpha}, g_4(\vec{0}), \theta_5\big(\vec{\xi}_1, g_3(\vec{0})\big) \Big)
=\Big( g_1=\vec{\alpha}, g_4(\vec{0}), \theta_5\big(\vec{\xi}_2, g_3(\vec{0})\big) \Big),
\end{align}
and hence
\begin{align}\label{equ1:thm:no_equ_classes}
\psi_C\Big( g_1=\vec{\alpha}, g_4(\vec{0}), \theta_5\big(\vec{\xi}_1, g_3(\vec{0})\big) \Big)
=\psi_C\Big( g_1=\vec{\alpha}, g_4(\vec{0}), \theta_5\big(\vec{\xi}_2, g_3(\vec{0})\big) \Big).
\end{align}
By Lemma~\ref{lemma1:non_tight}, we let $g_1^{-1}(\vec{\alpha})\cap g_2^{-1}(\vec{\xi}_1)=\{\vec{a}_1\}$ and $g_1^{-1}(\vec{\alpha})\cap g_2^{-1}(\vec{\xi}_2)=\{\vec{a}_2\}$, where $\vec{a}_1\neq \vec{a}_2$. Together with \eqref{equ1:thm:no_equ_classes}, we obtain
\begin{align}
\vec{a}_1=f(\vec{a}_1, \vec{0})=&~\psi_C\Big( g_1=\vec{\alpha}, g_4(\vec{0}), \theta_5\big(\vec{\xi}_1, g_3(\vec{0})\big) \Big)\\
=&~\psi_C\Big( g_1=\vec{\alpha}, g_4(\vec{0}), \theta_5\big(\vec{\xi}_2, g_3(\vec{0})\big) \Big)=f(\vec{a}_2, \vec{0})=\vec{a}_2,
\end{align}
a contradiction. Thus, we have proved that
\begin{align}\label{equ2:thm:no_equ_classes}
\mathcal{L}_1=\Big\{ \theta_5\big(\vec{\xi}, g_3(\vec{0})\big):\ \vec{\xi}\in \mathcal{A}^n \Big\}=\mathcal{A}^n=\mathcal{L}.
\end{align}
Now, by \eqref{equ2:thm:no_equ_classes}, we see that $\theta_5\big(\cdot, g_3(\vec{0})\big)$ is a bijection from $\mathcal{A}^n$ to $\mathcal{A}^n$. Hence, for the fixed $\vec{\gamma}$ in $\mathcal{A}^n=\mathcal{L}$, there exists exactly one $\vec{\xi}$ in $\mathcal{A}^n$ such that $\theta_5\big(\vec{\xi}, g_3(\vec{0})\big)=\vec{\gamma}.$
Next, we prove that any two distinct $\vec{\alpha}$ and $\vec{\beta}$ in $\mathcal{A}^n$ are not $(I_1, \vec{\gamma}, \vec{a}_J)$-equivalent. By Lemma~\ref{lemma1:non_tight}, let
\begin{align}
g_1^{-1}(\vec{\alpha})\cap g_2^{-1}(\vec{\xi})=\{\vec{b}_1\},\quad g_1^{-1}(\vec{\beta})\cap g_2^{-1}(\vec{\xi})=\{\vec{b}_2\},
\end{align}
where $\vec{b}_1\neq \vec{b}_2$. With this, we obtain that
\begin{align}
&\psi_C\Big(g_1(\vec{b}_1)=\vec{\alpha}, g_4(\vec{0}), g_5(\vec{b}_1, \vec{0})=\theta_5\big(g_2(\vec{b}_1)=\vec{\xi}, g_3(\vec{0})\big)=\vec{\gamma} \Big)=f(\vec{b}_1, \vec{0})=\vec{b}_1\\
& \neq \vec{b}_2=f(\vec{b}_2, \vec{0})=\psi_C\Big(g_1(\vec{b}_2)=\vec{\beta}, g_4(\vec{0}), g_5(\vec{b}_2, \vec{0})=\theta_5\big(g_2(\vec{b}_2)=\vec{\xi}, g_3(\vec{0})\big)=\vec{\gamma}\Big),
\end{align}
which implies that $\vec{\alpha}$ and $\vec{\beta}$ are not $(I_1, \vec{\gamma}, \vec{a}_J)$-equivalent. Immediately, we see that every vector $\vec{\alpha}$ in $(\mathcal{A}^n)^{I_1}=\mathcal{A}^n$ by itself forms an $(I_1, \vec{\gamma}, \vec{a}_J)$-equivalence class. The proof is accomplished.
\end{IEEEproof}
\begin{IEEEproof}[Proof of Theorem~\ref{thm:Cap_N_2_F}]
We proceed to prove that $\mathcal{C}(\mathcal{N}_2, F)<1$ by applying our improved upper bound in Theorem~\ref{thm:upper_bound}.
Recall the discussion following Theorem~\ref{thm:Cap_N_2_F} and the definition of $N\big({a}_{L}, {\mathrm{Cl}}[{a}_J]\big)$ in \eqref{no_finer_eq_cl}. Here, $a_L$ corresponds to $\vec{y}_2$ and we let $\vec{y}_2=\vec{\gamma}$ in $\mathcal{A}^n$, and each $(I, \vec{a}_J)$-equivalence class ${\mathrm{Cl}}[{a}_J]$ can be indexed by one and only one image value $\vec{m}\in\mathcal{A}^{2n}$ under the function $F$ (see the second paragraph before Lemma~\ref{thm:no_equ_classes}). So we write $N\big({a}_{L}, {\mathrm{Cl}}[{a}_J]\big)$ as $N(\vec{\gamma}, \vec{m})$ in the sequel to simplify notation. By Lemma~\ref{thm:no_equ_classes}, \eqref{no_finer_eq_cl} and \eqref{equ:F_B}, for every $\vec{\gamma}\in \mathcal{A}^n$ and every image value $\vec{m} \in \mathcal{A}^{2n}$ under the target function $F=\psi_C$, we have
\begin{align}\label{equ:N_gamma_f}
N(\vec{\gamma}, \vec{m})=&\#\left\{ (\vec{\alpha}, \vec{\beta})\in \mathcal{A}^{n}\times \mathcal{A}^{n}:\ \psi_C\big(g_1=\vec{\alpha},g_4=\vec{\beta}, g_5=\vec{\gamma} \big)=\vec{m} \right\}.
\end{align}
Similar to $N\big({\mathrm{Cl}}[{a}_J]\big)$ (see \eqref{equ:N_Cl}), we let
\begin{align}\label{equ:max_N_gamma_f}
N(\vec{m})=&\max_{\vec{\gamma}\in\mathcal{A}^{n}} N(\vec{\gamma}, \vec{m}).
\end{align}
Next, we will evaluate the value of $N(\vec{m})$. For each $\vec{m}\in \mathcal{A}^{2n}$, there exists at least one inverse image $(\vec{\alpha}, \vec{\beta}, \vec{\gamma})\in (\mathcal{A}^n)\times (\mathcal{A}^n)\times (\mathcal{A}^n)$ of $\vec{m}$ under $F$, i.e., $F(\vec{\alpha}, \vec{\beta}, \vec{\gamma})=\vec{m}$ (cf. the second paragraph before Lemma~\ref{thm:no_equ_classes}). This implies
\begin{align}\label{equ:N_m_geq_1}
N(\vec{m})\geq 1,\ \forall\ \vec{m}\in \mathcal{A}^{2n}.
\end{align}
Consider the image value $\vec{m}=\vec{0}$, the all-zero $2n$-vector in $\mathcal{A}^{2n}$. Clearly, the unique inverse image of $\vec{0}$ under the function $f=\max$ is $(\vec{x}_1=\vec{0}, \vec{x}_2=\vec{0})$. This implies that the inverse image of $\vec{0}$ under the function $F=\psi_C$ is also unique and the unique inverse image is $$\big( g_1(\vec{x}_1=\vec{0}), g_4(\vec{x}_2=\vec{0}), g_5(\vec{x}_1=\vec{0}, \vec{x}_2=\vec{0}) \big),$$
or equivalently,
$$\big( g_1(\vec{x}_1=\vec{0}), g_4(\vec{x}_2=\vec{0}), \theta_5\big(g_2(\vec{x}_1=\vec{0}), g_3(\vec{x}_2=\vec{0})\big) \big).$$
Now, we let $\vec{\gamma}^*=\theta_5\big(g_2(\vec{0}), g_3(\vec{0})\big)$. Then, for each $\vec{\gamma}$ in $\mathcal{A}^n$ such that $\vec{\gamma}\neq \vec{\gamma}^*$,
\begin{align}\label{equ:110}
\psi_C\big(g_1=\vec{\alpha}, g_4=\vec{\beta}, g_5=\vec{\gamma} \big)\neq \vec{0},
\quad \forall\ (\vec{\alpha}, \vec{\beta})\in \mathcal{A}^n \times \mathcal{A}^n,
\end{align}
and so the set
\begin{align*}
\Big\{ \psi_C\big(g_1=\vec{\alpha}, g_4=\vec{\beta}, g_5=\vec{\gamma} \big):\ \forall\ (\vec{\alpha}, \vec{\beta})\in \mathcal{A}^n \times \mathcal{A}^n \Big\}\subsetneq \mathcal{A}^{2n},
\end{align*}
because it does not contain $\vec{0}$. Hence,
\begin{align}
\#\Big\{\psi_C\big(g_1=\vec{\alpha}, g_4=\vec{\beta}, g_5=\vec{\gamma} \big):\ \forall\ (\vec{\alpha}, \vec{\beta})\in \mathcal{A}^n \times \mathcal{A}^n \Big\}<2^{2n}.
\end{align}
Together with $|\mathcal{A}^n\times \mathcal{A}^n|=2^{2n}$, there must exist two distinct pairs $(\vec{\alpha}_1, \vec{\beta}_1)$ and $(\vec{\alpha}_2, \vec{\beta}_2)$ in $\mathcal{A}^n\times \mathcal{A}^n$ such that
\begin{align}\label{equ:psi_C_value}
\psi_C\big(g_1=\vec{\alpha}_1, g_4=\vec{\beta}_1, g_5=\vec{\gamma} \big)=\psi_C\big(g_1=\vec{\alpha}_2, g_4=\vec{\beta}_2, g_5=\vec{\gamma} \big)
\neq \vec{0},
\end{align}
(cf.~\eqref{equ:110}). Denote the common value of $\psi_C$ in \eqref{equ:psi_C_value} by $\vec{m}'$. Immediately, we obtain that $N(\vec{\gamma}, \vec{m}')\geq 2$ (cf.~\eqref{equ:N_gamma_f}), which, together with \eqref{equ:max_N_gamma_f}, further implies that
\begin{align}\label{equ:N_m'}
N(\vec{m}')\geq 2.
\end{align}
Consequently, by \eqref{eq:sum_N_Cl}-\eqref{n_C_Parti_1st} and $\mathcal{P}_{\widehat{C}}=\big\{\widehat{C}_1=\{ e_8 \}, \widehat{C}_2=\{ e_9 \}\big\}$, we have
\begin{align}\label{defn:n_psi_C}
n_{\widehat{C}}(\mathcal{P}_{\widehat{C}})=\sum_{\vec{m}\in \mathcal{A}^{2n}} N(\vec{m}).
\end{align}
Hence, by combining \eqref{equ:N_m'} with \eqref{equ:N_m_geq_1}, it follows from \eqref{defn:n_psi_C} that
\begin{align}
n_{\widehat{C}}(\mathcal{P}_{\widehat{C}})> 2^{2n}.
\end{align}
From the definition of $n_{C,f}$ in \eqref{n_C_f_1st}, we obtain $n_{\widehat{C},F}\geq n_{\widehat{C}}(\mathcal{P}_{\widehat{C}})$. It then follows from our improved upper bound in Theorem~\ref{thm:upper_bound} that
\begin{align}
\mathcal{C}(\mathcal{N}_2, F)\leq \dfrac{|\widehat{C}|}{\log_{|\mathcal{A}^n|}n_{\widehat{C},F}}\leq \dfrac{|\widehat{C}|}{\log_{|\mathcal{A}^n|}n_{\widehat{C}}(\mathcal{P}_{\widehat{C}})}
<\dfrac{2}{\log_{2^{n}}2^{2n}}=1.
\end{align}
Therefore, the theorem is proved.
\end{IEEEproof}
In the following example, we use the rate-$2$ network code for $(\mathcal{N}_1,f)$ depicted in Fig.~\ref{fig:butterfly_network_Part1_b}
to induce a network computation problem $(\mathcal{N}_2, F)$, and then illustrate that the computing capacity $\mathcal{C}(\mathcal{N}_2,F)$ is strictly smaller than $1$.
\begin{example}
Consider the $(2,1)$ network code depicted in Fig.~\ref{fig:butterfly_network_Part1_b}. According to \eqref{defn:F}, the target function $F$ ($=\psi_C$) induced by the network code is given as follows:
\begin{align*}
F:\ \{0,1\}^3 \ \longrightarrow &\ \ \ \{0,1\}^2\\
({y}_1,{y}_2,{y}_3) \ \longmapsto &\ \ \
\begin{bmatrix}
\max\{y_1,y_3\}\\ y_2
\end{bmatrix}.
\end{align*}
Clearly, we see that
$g_1(\mathcal{A}^2)=g_5(\mathcal{A}^2, \mathcal{A}^2)=g_4(\mathcal{A}^2)=\mathcal{A}$, namely, the domain of $F$ is $\mathcal{A} \times \mathcal{A} \times \mathcal{A}$. Hence, $F$ is well-defined.
For the network computation problem $(\mathcal{N}_2, F)$, by \eqref{equ_F_A} the $(I, {a}_J)$-equivalence classes are:
\begin{align*}
{\mathrm{Cl}}_1=&\left\{ ({y}_1,{y}_2,{y}_3)\in \mathcal{A}\times\mathcal{A}\times\mathcal{A}: F({y}_1,{y}_2,{y}_3)=\left[\begin{smallmatrix}0\\0\end{smallmatrix}\right] \right\}=\{(0,0,0)\},\\
{\mathrm{Cl}}_2=&\left\{ ({y}_1,{y}_2,{y}_3)\in \mathcal{A}\times\mathcal{A}\times\mathcal{A}: F({y}_1,{y}_2,{y}_3)=\left[\begin{smallmatrix}0\\1\end{smallmatrix}\right] \right\}=\{(0,1,0)\},\\
{\mathrm{Cl}}_3=&\left\{ ({y}_1,{y}_2,{y}_3)\in \mathcal{A}\times\mathcal{A}\times\mathcal{A}: F({y}_1,{y}_2,{y}_3)=\left[\begin{smallmatrix}1\\0\end{smallmatrix}\right] \right\}=\{(0,0,1),(1,0,0),(1,0,1)\},\\
{\mathrm{Cl}}_4=&\left\{ ({y}_1,{y}_2,{y}_3)\in \mathcal{A}\times\mathcal{A}\times\mathcal{A}: F({y}_1,{y}_2,{y}_3)=\left[\begin{smallmatrix}1\\1\end{smallmatrix}\right] \right\}=\{(0,1,1),(1,1,0),(1,1,1)\}.
\end{align*}
Further, by \eqref{equ:N_gamma_f}, the values of $N(\vec{\gamma}, \vec{m})$ for all $\vec{\gamma}\in \mathcal{A}$ and $\vec{m}\in \mathcal{A}^2$ are:
\begin{align*}
N\left(0, \left[\begin{smallmatrix}0\\0\end{smallmatrix}\right]\right)
&=\#\left\{({y}_1,{y}_3)\in \mathcal{A}\times\mathcal{A}: F({y}_1,{y}_2=0,{y}_3)=\left[\begin{smallmatrix}0\\0\end{smallmatrix}\right] \right\}=|\{(0,0)\}|=1,\\
N\left(0, \left[\begin{smallmatrix}0\\1\end{smallmatrix}\right]\right)
&=\#\left\{({y}_1,{y}_3)\in \mathcal{A}\times\mathcal{A}: F({y}_1,{y}_2=0,{y}_3)=\left[\begin{smallmatrix}0\\1\end{smallmatrix}\right] \right\}=0,\\
N\left(0, \left[\begin{smallmatrix}1\\0\end{smallmatrix}\right]\right)
&=\#\left\{({y}_1,{y}_3)\in \mathcal{A}\times\mathcal{A}: F({y}_1,{y}_2=0,{y}_3)=\left[\begin{smallmatrix}1\\0\end{smallmatrix}\right] \right\}=|\{(0,1),(1,0),(1,1)\}|=3,\\
N\left(0, \left[\begin{smallmatrix}1\\1\end{smallmatrix}\right]\right)
&=\#\left\{({y}_1,{y}_3)\in \mathcal{A}\times\mathcal{A}: F({y}_1,{y}_2=0,{y}_3)=\left[\begin{smallmatrix}1\\1\end{smallmatrix}\right] \right\}=0,\\
N\left(1, \left[\begin{smallmatrix}0\\0\end{smallmatrix}\right]\right)
&=N\left(1, \left[\begin{smallmatrix}1\\0\end{smallmatrix}\right]\right)=0,\
N\left(1, \left[\begin{smallmatrix}0\\1\end{smallmatrix}\right]\right)=1,\
N\left(1, \left[\begin{smallmatrix}1\\1\end{smallmatrix}\right]\right)=3.
\end{align*}
By \eqref{equ:max_N_gamma_f} and \eqref{defn:n_psi_C}, we obtain
\begin{align*}
N\left(\left[\begin{smallmatrix}0\\0\end{smallmatrix}\right]\right)
=N\left(0, \left[\begin{smallmatrix}0\\0\end{smallmatrix}\right]\right)=1,\ &\
N\left(\left[\begin{smallmatrix}0\\1\end{smallmatrix}\right]\right)
=N\left(1, \left[\begin{smallmatrix}0\\1\end{smallmatrix}\right]\right)=1,\\
N\left(\left[\begin{smallmatrix}1\\0\end{smallmatrix}\right]\right)
=N\left(0, \left[\begin{smallmatrix}1\\0\end{smallmatrix}\right]\right)=3,\ &\
N\left(\left[\begin{smallmatrix}1\\1\end{smallmatrix}\right]\right)
=N\left(1, \left[\begin{smallmatrix}1\\1\end{smallmatrix}\right]\right)=3,
\end{align*}
and consequently,
\begin{align*}
n_{\widehat{C}}(\mathcal{P}_{\widehat{C}})=1+1+3+3=8,
\end{align*}
which implies that
\begin{align*}
\mathcal{C}(\mathcal{N}_2, F)\leq \dfrac{|\widehat{C}|}{\log_{|\mathcal{A}^n|}n_{\widehat{C},F}} \leq \dfrac{|\widehat{C}|}{\log_{|\mathcal{A}^n|}n_{\widehat{C}}(\mathcal{P}_{\widehat{C}})} = \frac{2}{\log_2 8}=\frac{2}{3}<1.
\end{align*}
\end{example}
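The quantities in the example above can also be computed mechanically. The following Python sketch (our own illustration, not part of the formal development) enumerates $N(\vec{\gamma}, \vec{m})$, $N(\vec{m})$, and $n_{\widehat{C}}(\mathcal{P}_{\widehat{C}})$ for the induced target function $F$.
\begin{verbatim}
from itertools import product

A = (0, 1)

def F(y1, y2, y3):
    # induced target function: F(y1, y2, y3) = (max{y1, y3}, y2)
    return (max(y1, y3), y2)

# N(gamma, m): number of pairs (alpha, beta) with
# psi_C(g1 = alpha, g4 = beta, g5 = gamma) = m
N = {(gamma, m): sum(1 for alpha, beta in product(A, repeat=2)
                     if F(alpha, gamma, beta) == m)
     for gamma in A for m in product(A, repeat=2)}

# N(m) = max over gamma, and n_C(P_C) = sum over m of N(m)
N_m = {m: max(N[(gamma, m)] for gamma in A) for m in product(A, repeat=2)}
n_C = sum(N_m.values())
assert N_m == {(0, 0): 1, (0, 1): 1, (1, 0): 3, (1, 1): 3} and n_C == 8
print("C(N_2, F) <= 2 / log2(8) = 2/3")
\end{verbatim}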
\begin{figure}[!t]
\centering
\begin{minipage}[h]{0.4\textwidth}
\begin{tikzpicture}[x=0.6cm]
\draw (-3,0) node[vertex] (1) [label=above:{\scriptsize $(x_{11}\ x_{12}\ x_{13})$}] {};
\draw ( 3,0) node[vertex] (2) [label=above:{\scriptsize $(x_{21}\ x_{22}\ x_{23})$}] {};
\draw ( 0,-1.5) node[vertex] (3) [label=left:] {};
\draw (-3,-5) node[vertex] (4) [label=left:] {};
\draw ( 3,-5) node[vertex] (5) [label=right:] {};
\draw ( 0,-3.5) node[vertex] (6) [label=right:] {};
\draw ( 0,-6.5) node[vertex] (7) [label=below: $\rho$] {};
\draw[->,>=latex] (1) -- (4) node[midway, auto,swap, left=-1mm] {$g_1$};
\draw[->,>=latex] (1) -- (3) node[midway, auto, right=-0.5mm] {$g_2$};
\draw[->,>=latex] (2) -- (3) node[midway, auto,swap, left=-0.5mm] {$g_3$};
\draw[->,>=latex] (2) -- (5) node[midway, auto, right=-1mm] {$g_4$};
\draw[->,>=latex] (3) -- (6) node[midway, auto, right=-1mm] {$g_5$};
\draw[->,>=latex] (6) -- (4) node[midway, auto, left=-0.5mm] {$g_6$};
\draw[->,>=latex] (6) -- (5) node[midway, auto,swap, right=-0.5mm] {$g_7$};
\draw[->,>=latex] (4) -- (7) node[midway, auto,swap, right=-0.5mm] {$g_8$};
\draw[->,>=latex] (5) -- (7) node[midway, auto, left=-0.5mm] {$g_9$};
\end{tikzpicture}
\end{minipage}%
\quad
\begin{minipage}{0.4\textwidth}
\centering
\begin{tabular}{p{5mm}p{2cm}p{0mm}p{5mm}p{2cm}}
\hline
\specialrule{0em}{2pt}{2pt}
$g_1$: & {$\small x_{11}$} & & $g_4$: & {$\small x_{21}$}\\
\specialrule{0em}{2pt}{2pt}
$g_2$: & ${\scriptsize \begin{bmatrix} x_{12} \\ x_{13} \end{bmatrix}}$ & & $g_3$: & ${\scriptsize \begin{bmatrix} x_{22} \\ x_{23} \end{bmatrix}}$\\
\specialrule{0em}{2pt}{2pt}
$g_5$: & {\scriptsize $\begin{bmatrix} \max\{x_{12}, x_{22}\} \\ \max\{x_{13}, x_{23}\} \end{bmatrix}$} & & \\
\specialrule{0em}{2pt}{2pt}
$g_6$: & {\scriptsize $\max\{x_{12}, x_{22}\}$} & & $g_7$: & {\scriptsize $\max\{x_{13}, x_{23}\}$}\\
\specialrule{0em}{2pt}{2pt}
$g_8$: & {\scriptsize $\begin{bmatrix} x_{11} \\ \max\{x_{12}, x_{22}\} \end{bmatrix}$} & & $g_9$: & {\scriptsize $\begin{bmatrix} x_{21} \\ \max\{x_{13}, x_{23}\} \end{bmatrix}$}\\
\specialrule{0em}{2pt}{2pt}
\hline
\end{tabular}
\end{minipage}
\caption{A coding scheme of the computing rate $3/2$ to compute the binary maximum function of the source messages, i.e., $f(x_1,x_2)=\max\{ x_1, x_2 \}$, where $\mathcal{A}=\mathcal{O}=\{0,1\}$.}
\label{butfly_net_rate_3_over_2}
\end{figure}
\begin{remark}
For the original network computation problem $(\mathcal{N}, f=\max)$, we give a coding scheme (see Fig.~\ref{butfly_net_rate_3_over_2}) achieving the computing rate $3/2$. Together with Theorem~\ref{thm_butterfly_network_non-tight}, we obtain that for any $(k,n)$ network code that can compute $\max$ over $\mathcal{N}$, the achievable rate $k/n$ satisfies
\begin{align*}
\frac{3}{2}\leq \frac{k}{n} < 2.
\end{align*}
\end{remark}
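The correctness of the rate-$3/2$ scheme in Fig.~\ref{butfly_net_rate_3_over_2} can likewise be checked exhaustively. In the following Python sketch (our own illustration, with hypothetical helper names), only the messages $g_8$ and $g_9$ entering the sink are needed for decoding.
\begin{verbatim}
from itertools import product

A = (0, 1)

def f_max(x1, x2):
    # componentwise binary maximum of the two length-3 source vectors
    return tuple(max(a, b) for a, b in zip(x1, x2))

def encode(x1, x2):
    # messages on the two edges entering the sink; each carries 2 symbols
    g8 = (x1[0], max(x1[1], x2[1]))   # (x11, max{x12, x22})
    g9 = (x2[0], max(x1[2], x2[2]))   # (x21, max{x13, x23})
    return g8, g9

def decode(g8, g9):
    # the sink recovers max componentwise from g8 and g9
    return (max(g8[0], g9[0]), g8[1], g9[1])

# k = 3 source symbols per source, n = 2 symbols per edge: rate k/n = 3/2
for x1, x2 in product(product(A, repeat=3), repeat=2):
    assert decode(*encode(x1, x2)) == f_max(x1, x2)
print("the (3,2) scheme computes max on all 64 source inputs")
\end{verbatim}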
\begin{remark}
By symmetry, we have $\mathcal{C}(\mathcal{N}, \max)=\mathcal{C}(\mathcal{N}, \min)$, where $\min$ is the binary minimum function. So, for any $(k,n)$ network code computing $\min$ over $\mathcal{N}$, $3/2 \leq k/n < 2$. Note that the function $\min$ is in fact equivalent to the multiplication over $\mathbb{F}_2$, i.e., $f(x_1,x_2)=x_1\cdot x_2$. On the other hand, we note that if $f$ is the summation over $\mathbb{F}_2$, i.e., $f(x_1,x_2)=x_1+x_2$, $\mathcal{C}(\mathcal{N}, f)$ can be determined \cite{Koetter-CISS2004}. Therefore, for the function computation over a network, there is an intrinsic difference between addition and multiplication.
\end{remark}
\begin{journalonly}
\section{Future Works}
We believe that if the cuts $C_1$ and $C_2$ can be partitioned further by the method used herein, better upper bounds can possibly be obtained. Thus, an interesting problem is to determine the best bound that can be achieved by using the same analysis approach. Besides, notice that all cuts $C\in \Lambda(\mathcal{N})$ are considered for the latter two non-trivial upper bounds, which results in a high searching complexity. Hence, another research problem is how to significantly reduce the search scope while still obtaining the minimum.
\end{journalonly}
\section{Conclusion}\label{sec:concl}
In this paper, we have proved a new upper bound on the computing capacity in network function computation, which can be applied to arbitrary target functions and arbitrary network topologies. Our bound is not only a strict improvement over the previous ones, but also the first tight upper bound on the computing capacity for computing an arithmetic sum over a certain ``non-tree'' network. The previously reported upper bounds for general target functions and network topologies are tight only for tree networks.
On the other hand, we have shown that our improved upper bound is in general not achievable. Specifically, the bound is not achievable for computing the binary maximum function over the reverse butterfly network. However, whether the bound is in general asymptotically achievable remains open.
\section*{Acknowledgement}
This work was partially supported by NSFC Grant (Nos. 61771259 and 61471215), the University Grants Committee of the Hong Kong SAR, China (Project No. AoE/E-02/08), and the Vice-Chancellor's One-off Discretionary Fund of CUHK (Project Nos. VCF2014030 and VCF2015007).
\section{Introduction}
Let $f = f(x,v,t)$ be the particle distribution function at $(x,v) \in \mathbb R^d \times \mathbb R^d$ and at time $t \in \mathbb R_+$ for the following kinetic equation:
\begin{equation}\label{main_eq}
\partial_t f + v\cdot\nabla_x f - \nabla_v \cdot \left((\gamma v + \lambda\left(\nabla_x V + \nabla_x W \star \rho\right))f \right) = \nabla_v \cdot (\beta (v - u)f)
\end{equation}
subject to the initial data
\[
f(x,v,t)|_{t=0} =: f_0(x,v), \quad (x,v) \in \mathbb R^d \times \mathbb R^d,
\]
where $u$ is the local particle velocity, i.e.,
$$
u = \frac{ \int_{\mathbb R^d} vf\,dv} {\rho}\quad \mbox{ with } \rho := \int_{\mathbb R^d} f\,dv\,,
$$
$V$ and $W$ are the confinement and the interaction potentials, respectively. In \eqref{main_eq}, the first two terms take into account the free transport of the particles, and the third term consists of linear damping with a strength $\gamma>0$ and the particle confinement and interaction forces in position due to the potentials with strength $\lambda>0$. The right hand side of \eqref{main_eq} is the local alignment force for particles as introduced in \cite{KMT13} for swarming models. In fact, it can also be understood as the localized version of the nonlinear damping term introduced in \cite{MT} as a suitable normalization of the Cucker-Smale model \cite{CS}.
Notice that this alignment term is also a nonlinear damping relaxation towards the local velocity used in classical kinetic theory \cite{CIP,Vilkinetic}. Throughout this paper, we assume that $f$ is a probability density, i.e., $\|f(\cdot,\cdot,t)\|_{L^1} = 1$ for $t\geq 0$, since the total mass is preserved in time.
In the current work, we are interested in the asymptotic analysis of \eqref{main_eq} when considering singular parameters. More specifically, we deal with the large friction limit to a continuity type equation from the kinetic equation \eqref{main_eq} when the parameters $\gamma, \lambda > 0$, and $\beta > 0$ get large enough. Computing the moments of the kinetic equation \eqref{main_eq}, we find that the local density $\rho$ and the local velocity $u$ satisfy
$$\begin{aligned}
&\partial_t \rho + \nabla_x \cdot (\rho u) = 0,\cr
&\partial_t (\rho u) + \nabla_x \cdot (\rho u \otimes u) + \nabla_x \cdot \left( \int_{\mathbb R^d} (v-u)\otimes (v-u) f(x,v,t)\,dv\right) \cr
&\hspace{2.5cm} = - \gamma \rho u - \lambda\rho(\nabla_x V + \nabla_x W \star \rho).
\end{aligned}$$
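Note that the local alignment term on the right-hand side of \eqref{main_eq} does not contribute to the momentum balance: formally integrating by parts in $v$,
\[
\int_{\mathbb R^d} v\, \nabla_v \cdot \big(\beta (v-u) f\big)\,dv = -\beta \int_{\mathbb R^d} (v-u) f\,dv = -\beta(\rho u - \rho u) = 0.
\]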
As usual, the moment system is not closed. If the friction in the equation \eqref{main_eq} is very strong, i.e., $\gamma, \lambda, \beta \gg 1$, for instance, $\gamma, \lambda, \beta = \mathcal{O}\left(\varepsilon^{-1}\right) \to + \infty$ with $\beta/\gamma \to 1$ and $\lambda/\gamma \to \kappa>0$ as $\varepsilon \to 0$, then at the formal level, we find
$$
\nabla_v \cdot \left((2v -u + \kappa \left(\nabla_x V + \nabla_x W \star \rho\right))f \right) =0\,,
$$
and thus,
\[
f(x,v,t) \simeq \rho(x,t) \otimes \delta_{v - u(x,t)} \quad \mbox{and} \quad \rho u \simeq -\kappa\rho(\nabla_x V + \nabla_x W \star \rho) \quad \mbox{for} \quad \varepsilon \ll 1
\]
is the element in its kernel with the initial monokinetic distribution $\rho(x,0) \otimes \delta_{v - u(x,0)}$.
These relations imply that the density $\rho$ satisfies the following continuity type equation with a nonlocal velocity field, the so-called {\it aggregation equation}, see for instance \cite{BCL,BLR,CCH}
and the references therein,
\begin{equation}\label{main_conti}
\partial_t \rho + \nabla_x \cdot (\rho u) = 0, \quad \rho u = -\kappa\rho\left(\nabla_x V + \nabla_x W \star \rho\right).
\end{equation}
The large friction limit has been considered in \cite{Jabin00}, where the macroscopic limit of a Vlasov type equation with friction is studied by using a PDE approach, and later the restrictions on the functional spaces for the solutions and the conditions for interaction potentials are relaxed in \cite{FS15} by employing PDE analysis and the method of characteristics. More recently, these results have been extended in \cite{FST16} for more general Vlasov type equations; Vlasov type equations with nonlocal interaction and nonlocal velocity alignment forces. However, all of these results in \cite{FS15,FST16,Jabin00} are based on compactness arguments, and to our best knowledge, quantitative estimates for the large friction limit have not yet been obtained. The large friction limit has received a lot of attention at the hydrodynamic level by the conservation laws community, see for instance \cite{CG,MM,LC,GLT,LT}, but due to their inherent difficulties, it has been elusive at the kinetic level.
The main purpose of this work is to render the above formal limit to the nonlocal aggregation equation completely rigorous with quantitative bounds. Our strategy of the proof uses an intermediate system to divide the error estimates as depicted in Figure \ref{schemeofproof}.
\begin{figure}[!ht]
\centering
\includegraphics[width=1\textwidth,clip]{diagram.eps}
\caption{
Schematic illustration of the strategy of the proof.
}
\label{schemeofproof}
\end{figure}
We first fix $\lambda$ and $\gamma$ with $\kappa \gamma = \lambda$ and take $\beta = 1/\varepsilon$. We denote by $f^{\gamma,\varepsilon}$ the solution to the associated kinetic equation \eqref{main_eq}. We then introduce an intermediate system, given by the pressureless Euler equations with nonlocal interactions, between the kinetic equation \eqref{main_eq} and the limiting equation \eqref{main_conti}:
\begin{align}\label{main_eq2}
\begin{aligned}
&\partial_t \rho^{\gamma} + \nabla_x \cdot (\rho^{\gamma} u^{\gamma}) = 0,\cr
&\partial_t (\rho^{\gamma} u^{\gamma}) + \nabla_x \cdot (\rho^{\gamma} u^{\gamma} \otimes u^{\gamma}) = - \gamma \left(\rho^{\gamma} u^{\gamma} + \kappa\rho^{\gamma}(\nabla_x V + \nabla_x W \star \rho^{\gamma})\right).
\end{aligned}
\end{align}
In order to estimate the error between two solutions $\rho^{\gamma,\varepsilon}$ and $\rho^{\gamma}$ to \eqref{main_eq} and \eqref{main_eq2}, respectively, where
$$
\rho^{\gamma,\varepsilon} := \int_{\mathbb R^d} f^{\gamma,\varepsilon}\,dv\,,
$$
we use the Wasserstein distance which is defined by
\[
W_p(\mu, \nu):= \inf_{\pi \in \Pi(\mu,\nu)}\left(\int_{\mathbb R^d \times \mathbb R^d} |x - y|^p d\pi(x,y) \right)^{1/p}
\]
for $p\geq 1$ and $\mu,\nu \in \mathcal{P}_p(\mathbb R^d)$, where $\Pi(\mu,\nu)$ is the set of all probability measures on $\mathbb R^d \times \mathbb R^d$ with first and second marginals $\mu$ and $\nu$ and bounded $p$-moments, respectively. We refer to \cite{AGS08,Vil08} for discussions of various topics related to the Wasserstein distance.
Employing the $2$-Wasserstein distance, we first obtain the quantitative estimate for $W_2^2(\rho^{\gamma,\varepsilon},\rho^{\gamma})$ with the aid of the relative entropy argument. It is worth mentioning that the entropy for the system \eqref{main_eq2} is not strictly convex with respect to $\rho$ due to the absence of pressure in the system, see Section \ref{sec_re} for more details. Thus the relative entropy estimate is not enough to provide the error estimates between the spatial density $\rho^{\gamma,\varepsilon}$ and the density $\rho^{\gamma}$. We also want to emphasize that the relative entropy estimate is not even closed due to the nonlinearity and nonlocality of the interaction term $\nabla_x W \star \rho$. We provide a new inequality which gives a remarkable relation between the $2$-Wasserstein distance and the relative entropy, see Lemma \ref{prop_rho_wa}. Using this new observation and combining the relative entropy estimate with the $2$-Wasserstein distance between the solutions in a hypocoercivity-type argument, we obtain the quantitative error estimate for the vertical part of the diagram in Figure \ref{schemeofproof}. Let us point out that in order to make this step rigorous, we need to work with strong solutions to the pressureless Euler system \eqref{main_eq2} for two reasons. On one hand, strong solutions are needed for making sense of the integration by parts required for the relative entropy argument. On the other hand, some regularity on the velocity field, namely the boundedness of its spatial derivatives uniformly in $\gamma$, is needed in order to control terms appearing due to the time derivatives of $W_2^2(\rho^{\gamma,\varepsilon},\rho^{\gamma})$ and the relative entropy.
We finally remark that the closest result in the literature to ours is due to Figalli and Kang in \cite{FK19}. It concerns the vertical part of the diagram in Figure \ref{schemeofproof} for a related system without interaction forces but with Cucker-Smale alignment terms. Even though they already combined the $2$-Wasserstein distance and the relative entropy between $\rho^{\gamma,\varepsilon}$ and $\rho^{\gamma}$, they did not take full advantage of the $2$-Wasserstein distance, see Remark \ref{rmk_hydro} for more details. This is our main contribution in this step.
The final step, corresponding to the bottom part of the diagram in Figure \ref{schemeofproof}, is inspired by a recent work of some of the authors \cite{CCTpre}. Actually, we can estimate the error between the solutions $\rho^{\gamma}$ and $\rho$ to \eqref{main_eq2} and \eqref{main_conti}, respectively, in the $2$-Wasserstein distance again. Here, it is again crucial to use the boundedness of the spatial derivatives of the velocity field uniformly in $\gamma$. Combining the above arguments, we finally conclude the main result of our work: the quantitative error estimate between two solutions $\rho^{\gamma,\varepsilon}$ and $\rho$ to the equations \eqref{main_eq} and \eqref{main_conti}, respectively, in the $2$-Wasserstein distance.
Before writing our main result, we remind the reader of a well-known estimate for the total energy of the kinetic equation \eqref{main_eq}. For this, we define the total energy $\mathcal{F}$ and the associated dissipations $\mathcal{D}_1$ and $\mathcal{D}_2$ as follows:
\begin{equation}\label{freen}
\mathcal{F}(f) := \frac12\int_{\mathbb R^d \times \mathbb R^d} |v|^2 f\,dxdv + \frac{\lambda}{2}\int_{\mathbb R^d \times \mathbb R^d} W(x-y)\rho(x) \rho(y)\,dxdy + \lambda \int_{\mathbb R^d} V \rho\,dx,
\end{equation}
$$\begin{aligned}
\mathcal{D}_1(f) &:= \int_{\mathbb R^d \times \mathbb R^d} f\left|u-v\right|^2 dxdv, \quad \mbox{and} \quad \mathcal{D}_2(f):= \int_{\mathbb R^d \times \mathbb R^d} |v|^2 f\,dxdv,
\end{aligned}$$
respectively. If $f$ is a solution of \eqref{main_eq} with sufficient integrability, then it is straightforward to check that
\begin{equation}\label{lem_energy}
\frac{d}{dt}\mathcal{F}(f) + \beta \mathcal{D}_1(f) + \gamma \mathcal{D}_2(f) = 0.
\end{equation}
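For the reader's convenience, we sketch the formal computation behind \eqref{lem_energy}, assuming enough integrability and decay to justify the integrations by parts. Testing \eqref{main_eq} against $|v|^2/2$ gives
\begin{align*}
\frac{d}{dt}\frac12\int_{\mathbb R^d \times \mathbb R^d} |v|^2 f\,dxdv &= -\gamma \int_{\mathbb R^d \times \mathbb R^d} |v|^2 f\,dxdv - \beta \int_{\mathbb R^d \times \mathbb R^d} |v-u|^2 f\,dxdv \\
&\quad - \lambda \int_{\mathbb R^d} \rho u \cdot \left(\nabla_x V + \nabla_x W \star \rho\right)dx,
\end{align*}
where we used $\int_{\mathbb R^d} u\cdot(v-u) f\,dv = 0$, while the continuity equation for $\rho$ and the symmetry of $W$ yield
\[
\frac{d}{dt}\left( \lambda \int_{\mathbb R^d} V\rho\,dx + \frac{\lambda}{2}\int_{\mathbb R^d \times \mathbb R^d} W(x-y)\rho(x)\rho(y)\,dxdy \right) = \lambda \int_{\mathbb R^d} \rho u \cdot \left(\nabla_x V + \nabla_x W \star \rho\right)dx.
\]
Adding the two identities yields \eqref{lem_energy}.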
Notice that weak solutions may only satisfy an inequality in \eqref{lem_energy}, which is enough for our purposes.
In order to control the velocity field for the intermediate pressureless Euler equations \eqref{main_eq2}, we assume that the confinement potential $V$ and the interaction potential $W$ satisfy:
\begin{itemize}
\item[{\bf (H)}] The confinement potential $V(x) = c_V|x|^2/2$, and the interaction potential $W$ satisfies $W(-x) = W(x)$, $\nabla_x W \in (\mathcal{W}^{1,\infty} \cap \mathcal{W}^{[d/2]+1,\infty})(\mathbb R^d)$, and $c_V + c_W > 0$ with
\[
c_W := \inf_{x \neq y} \frac{\left\langle x-y, \nabla_x W(x) - \nabla_y W(y) \right\rangle}{|x-y|^2}.
\]
\end{itemize}
We are now in position to state the main result of this work.
\begin{theorem}\label{thm_main} Assume that initial data $f_0^\varepsilon$ satisfy
\[
\sup_{\varepsilon > 0}\|(1+ |v|^2 + V)f_0^\varepsilon\|_{L^1} < \infty, \quad f_0^\varepsilon \in L^\infty(\mathbb R^d \times \mathbb R^d), \quad \mbox{and} \quad \rho^\varepsilon_0(W\star \rho^\varepsilon_0) \in L^1(\mathbb R^d)
\]
for all $\varepsilon > 0$. Let $f^\varepsilon$ be a solution to the equation \eqref{main_eq} with $\beta = 1/\varepsilon$, $\kappa\gamma = \lambda =1/\varepsilon$ with $\kappa>0$ up to time $T>0$, such that $f^\varepsilon \in L^\infty(0,T; (L^1\cap L^\infty)(\mathbb R^d \times \mathbb R^d))$ satisfying the energy inequality \eqref{lem_energy} with initial data $f_0^\varepsilon$. Let $\rho$ be a solution of \eqref{main_conti} up to the time $T$, such that $\rho \in \mathcal C([0,T];\mathcal{P}_2(\mathbb R^d))$ with initial data $\rho_0$ satisfying
\[
\rho_0 \in \mathcal{P}_2(\mathbb R^d) \quad \mbox{and} \quad \int_{\mathbb R^d} \left(|u_0|^2 + V+W\star \rho_0\right)\rho_0\,dx < \infty.
\]
Suppose that ${\bf (H)}$ holds. Then, for $\varepsilon,\kappa > 0$ small enough, we have the following quantitative bound:
\[
\int_0^T W_2^2(\rho^\varepsilon(t), \rho(t))\,dt \leq \mathcal{O}(\varepsilon) + CW_2^2(\rho^\varepsilon_0,\rho_0),
\]
where $\rho^\varepsilon = \int_{\mathbb R^d} f^\varepsilon\,dv$ and $C > 0$ is independent of $\varepsilon$.
\end{theorem}
\begin{remark} As mentioned above, our strategy consists in using \eqref{main_eq2} as an intermediate system and comparing the errors from the kinetic equation \eqref{main_eq} to the pressureless Euler equations \eqref{main_eq2} and from \eqref{main_eq2} to the aggregation equation \eqref{main_conti}. These estimates hold as long as there exist strong solutions to the system \eqref{main_eq2} up to the given time $T>0$. Strong solutions can be obtained locally in time by only assuming $\nabla_x W \in (\mathcal{W}^{1,1} \cap \mathcal{W}^{1,\infty})(\mathbb R^d)$, see Theorem \ref{thm_local}. However, in order to ensure existence on any arbitrarily large time interval $[0,T)$, additional regularity for $\nabla_x W$ is required, see Theorem \ref{thm_glo}. Moreover, our error estimates in Section \ref{sec_quanti} and Section \ref{sec_lft} only need the regularity $\nabla_x W \in (\mathcal{W}^{1,1} \cap \mathcal{W}^{1,\infty})(\mathbb R^d)$ as well.
\end{remark}
The rest of the paper is organized as follows. In Section \ref{sec_quanti}, we provide a quantitative error estimate between the kinetic equation \eqref{main_eq} and the intermediate pressureless Euler system with nonlocal forces \eqref{main_eq2} by means of the relative entropy argument together with the $2$-Wasserstein distance. Section \ref{sec_lft} is devoted to giving the details of the proof of our main result on the large friction limit, and the required global-in-time existence theories for the equations \eqref{main_eq}, \eqref{main_conti}, and \eqref{main_eq2} are presented in Section \ref{sec_ext}.
\section{Quantitative error estimate between \eqref{main_eq} and \eqref{main_eq2}}\label{sec_quanti}
In this section, we provide the quantitative error estimate between weak solutions to the kinetic equation \eqref{main_eq} and a unique strong solution to the system \eqref{main_eq2} by employing the relative entropy estimate together with the $2$-Wasserstein distance. As mentioned in the Introduction, we estimate the $2$-Wasserstein distance between the spatial density of \eqref{main_eq} and the density of \eqref{main_eq2}. This together with the standard relative entropy estimate gives our desired quantitative estimate. Note that in this section the result allows more general potentials $V$ and $W$; the particular choice $V = c_V|x|^2/2$ is not required, and the condition $c_V + c_W > 0$ appearing in {\bf (H)} is not needed. The assumption {\bf (H)} implies that the sum of the last two terms in the total energy \eqref{freen}, which involve $V$ and $W$ and depend only on the macroscopic density $\rho$, is displacement convex with respect to the $2$-Wasserstein distance. This fact will be used for the estimate of the large friction limit from \eqref{main_eq2} to \eqref{main_conti} in Section \ref{sec_lft}.
For notational simplicity, we drop the $\gamma$-dependence in solutions and denote by $f^\varepsilon := f^{\gamma,\varepsilon}, \rho:= \rho^{\gamma}, u:=u^{\gamma}$ throughout this section. In the following two subsections, we prove the proposition below on the quantitative estimate of $2$-Wasserstein distance between solutions to \eqref{main_eq} and \eqref{main_eq2}.
\begin{proposition}\label{prop_main} Let $f^\varepsilon$ be the solution to the equation \eqref{main_eq} and $(\bar\rho,\bar u)$ be the strong solution to the system \eqref{main_eq2} on the time interval $[0,T]$.
Suppose that $\gamma > 0$ is large enough such that $\gamma -C\lambda - e^{C_{\bar u}}(1+\lambda) > 0$, where $C_{\bar u} := C\|\nabla_x \bar u\|_{L^\infty(0,T;L^\infty)}$. Furthermore, we assume that the confinement potential $V$ is bounded from below and the interaction potential $W$ is symmetric and $\nabla_x W \in \mathcal{W}^{1,\infty}(\mathbb R^d)$.
Then we have
\[
W_2^2(\rho^\varepsilon(t), \bar\rho(t)) \leq e^{C_{\bar u}}\left( W_2^2(\rho^\varepsilon_0, \bar\rho_0) + \frac{\mathcal{I}(U^\varepsilon_0, \bar U_0) + C_{\bar u}\max\{1,\lambda\}\varepsilon + e^{C_{\bar u}}\lambda W_2^2(\rho^\varepsilon_0,\bar\rho_0)}{\gamma -C\lambda - e^{C_{\bar u}}(1+\lambda)} \right),
\]
where $\mathcal{I}(U^\varepsilon_0, U_0)$ is given by
\[
\mathcal{I}(U^\varepsilon_0, \bar U_0) := \int_{\mathbb R^d} \rho^\varepsilon_0(x)| u^\varepsilon_0(x) - \bar u_0(x)|^2\,dx + \int_{\mathbb R^d} \left( \int_{\mathbb R^d} f_0^\varepsilon |v|^2\,dv - \bar \rho_0|\bar u_0|^2\right)dx,
\]
and $C>0$ is independent of $\gamma, \lambda$ and $\varepsilon$, but depends on $T$. \end{proposition}
\begin{remark}Without loss of generality, we assume that $V \geq 0$ in the rest of this section.
\end{remark}
\subsection{Relative entropy estimate}\label{sec_re}
We rewrite the equations \eqref{main_eq2} in conservative form:
\[
U_t + \nabla_x \cdot A(U) = F(U),
\qquad
\mbox{where }
m := \rho u, \quad U := \begin{pmatrix}
\rho \\
m
\end{pmatrix},
\quad
A(U) := \begin{pmatrix}
m \\
\frac{m \otimes m}{\rho}
\end{pmatrix},
\]
and
\[
F(U) := -\begin{pmatrix}
0 \\
\displaystyle \gamma \rho u + \lambda \rho\left(\nabla_x V + \nabla_x W \star \rho \right)
\end{pmatrix}.
\]
Then the above system admits the following macroscopic entropy: $E(U) := \tfrac{|m|^2}{2\rho}$.
Note that the entropy defined above is not strictly convex with respect to $\rho$.
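To see this degeneracy more explicitly, we record the following direct computation: for $\rho > 0$ and any variation $(\delta\rho, \delta m) \in \mathbb R \times \mathbb R^d$,
\[
\big\langle D^2E(U)(\delta\rho,\delta m),(\delta\rho,\delta m)\big\rangle
= \frac{|m|^2}{\rho^3}(\delta\rho)^2 - \frac{2\,\delta\rho}{\rho^2}\, m\cdot\delta m + \frac{1}{\rho}|\delta m|^2
= \frac{1}{\rho}\big|\delta m - u\,\delta\rho\big|^2 \geq 0,
\]
where $u = m/\rho$, so that $E$ is convex but its Hessian vanishes in the direction $\delta m = u\,\delta\rho$. In particular, the relative entropy $\mathcal{H}$ introduced below vanishes whenever $u = \bar u$, regardless of the distance between $\rho$ and $\bar\rho$; this is why the relative entropy estimate of this subsection is later combined with a $2$-Wasserstein estimate on the densities.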
We now define the relative entropy functional $\mathcal{H}$ as follows.
\begin{equation}\label{def_rel}
\mathcal{H}(U| \bar U) := E( U) - E(\bar U) - DE(\bar U)( U-\bar U) \quad \mbox{with} \quad \bar U := \begin{pmatrix}
\bar\rho \\
\bar m \\
\end{pmatrix},
\end{equation}
where $D E(U)$ denotes the derivative of $E$ with respect to $(\rho, m)$, i.e.,
\[
DE(U) = \begin{pmatrix}
\displaystyle -\frac{|m|^2}{2\rho^2} \\[3mm]
\displaystyle \frac{m}{\rho}
\end{pmatrix}.
\]
This yields
\[
\mathcal{H}(U|\bar U) = \frac{\rho|u|^2}{2} - \frac{\bar \rho|\bar u|^2}{2} - \frac{|\bar u|^2}{2}(\bar \rho - \rho) - \bar u\cdot (\rho u - \bar\rho \bar u)= \frac{\rho}{2}|u - \bar u|^2.
\]
We next derive an evolution equation for the integrated relative entropy in the lemma below.
\begin{lemma}\label{lem_rel}The relative entropy $\mathcal{H}$ defined in \eqref{def_rel} satisfies the following equality:
\begin{align*}
\begin{aligned}
\frac{d}{dt}\int_{\mathbb R^d} \mathcal{H}(U|\bar U)\,dx &= \int_{\mathbb R^d} \partial_t E(U)\,dx - \int_{\mathbb R^d} \nabla_x (DE(\bar U)):A(U|\bar U)\,dx\cr
&\quad - \int_{\mathbb R^d} DE(\bar U)\left[ \partial_t U + \nabla_x \cdot A(U) - F(U)\right]dx\cr
&\quad -\gamma \int_{\mathbb R^d} \rho| \bar u - u|^2 - \rho |u|^2\,dx + \lambda \int_{\mathbb R^d} \nabla_x V \cdot \rho u\,dx\cr
&\quad + \lambda \int_{\mathbb R^d} \rho (u - \bar u) \cdot \nabla_x W \star (\bar \rho - \rho) + \rho u \cdot \nabla_x W \star \rho\,dx,
\end{aligned}
\end{align*}
where $A(U|\bar U) := A(U) - A(\bar U) - DA(\bar U)(U-\bar U)$ is the relative flux functional.
\end{lemma}
\begin{proof}It follows from \eqref{def_rel} that
\begin{align*}
\begin{aligned}
\frac{d}{dt}\int_{\mathbb R^d} \mathcal{H}(U|\bar U)\,dx & = \int_{\mathbb R^d} \partial_t E(U)\,dx - \int_{\mathbb R^d} DE(\bar U)(\partial_t U + \nabla_x \cdot A(U)- F(U))\,dx \cr
&\quad +\int_{\mathbb R^d} D^2 E(\bar U) \nabla_x \cdot A(\bar U)(U-\bar U) + DE(\bar U) \nabla_x \cdot A(U)\,dx\cr
&\quad -\int_{\mathbb R^d} D^2 E(\bar U)F(\bar U)(U-\bar U) + DE(\bar U)F(U)\,dx\cr
&=: \sum_{i=1}^4 I_i.
\end{aligned}
\end{align*}
Integrating by parts, we obtain the following identity:
\[
\int_{\mathbb R^d} D^2 E(\bar U) : \nabla_x \cdot A(\bar U) (U - \bar U)\,dx = \int_{\mathbb R^d} \nabla_x DE(\bar U) : DA(\bar U)(U - \bar U)\,dx,
\]
see \cite[Lemma 4.1]{KMT15} for the details of the proof. Moreover, we also find from \cite{KMT15} that
\[
\int_{\mathbb R^d} \nabla_x DE(\bar U): A(\bar U)\,dx = 0.
\]
Thus we obtain
\begin{align*}
\begin{aligned}
I_3 &= \int_{\mathbb R^d} \left(\nabla_x DE(\bar U)\right):\left( DA(\bar U)(U-\bar U) - A(U)\right)dx \cr
&= -\int_{\mathbb R^d} \left(\nabla_x DE(\bar U)\right):\left(A(U|\bar U) + A(\bar U)\right)dx\cr
&=-\int_{\mathbb R^d} \left(\nabla_x DE(\bar U)\right):A(U|\bar U)\,dx.
\end{aligned}
\end{align*}
For the estimate of $I_4$, we notice that
\begin{equation*}
DE(\bar U) = \begin{pmatrix}
\displaystyle -\frac{|\bar m|^2}{2\bar\rho^2} \\[4mm]
\displaystyle \frac{\bar m}{\bar\rho}
\end{pmatrix}
\quad \mbox{and} \quad
D^2E(\bar U) = \begin{pmatrix}
* & \displaystyle - \frac{\bar m}{\bar \rho^2} \\[4mm]
* & \displaystyle \frac{1}{\bar \rho}
\end{pmatrix}.
\end{equation*}
Then, by a direct calculation, we find
\[
D^2 E(\bar U)F(\bar U)(U-\bar U) = -\rho(x) (u(x) - \bar u(x))\cdot \left(\gamma \bar u + \lambda \nabla_x V + \lambda \nabla_x W \star \bar\rho \right)
\]
and
\[
DE(\bar U)F(U) = - \rho \bar u \cdot \left(\gamma u + \lambda \nabla_x V + \lambda \nabla_x W \star \rho \right).
\]
Thus we obtain
$$\begin{aligned}
-I_4 &= -\int_{\mathbb R^d} \rho(x) (u(x) - \bar u(x))\cdot\left(\gamma \bar u(x) + \lambda(\nabla_x V(x) + (\nabla_x W \star \bar\rho)(x)) \right) \,dx \cr
&\quad - \int_{\mathbb R^d} \rho(x) \bar u(x)\cdot\left(\gamma u(x)+ \lambda \nabla_x V(x) + \lambda (\nabla_x W \star \rho)(x) \right) \,dx \cr
&= \gamma \int_{\mathbb R^d} \rho| \bar u - u|^2 - \rho |u|^2\,dx - \lambda \int_{\mathbb R^d} \nabla_x V \cdot \rho u\,dx\cr
&\quad - \lambda \int_{\mathbb R^d} \rho (u - \bar u) \cdot \nabla_x W \star (\bar \rho - \rho) + \rho u \cdot \nabla_x W \star \rho\,dx.
\end{aligned}$$
Combining the above estimates yields the desired identity.
\end{proof}
In light of the previous lemma, we provide the following proposition.
\begin{proposition}\label{prop_re}Let $f^\varepsilon$ be the solution to the equation \eqref{main_eq} and $(\bar \rho,\bar u)$ be the strong solution to the system \eqref{main_eq2} on the time interval $[0,T]$. Then we have
\begin{align}\label{eqn_rel}
\begin{aligned}
&\int_{\mathbb R^d} \mathcal{H}(U^\varepsilon(t)|\bar U(t))\,dx + \gamma\int_0^t \int_{\mathbb R^d} \rho^\varepsilon(x)| u^\varepsilon(x) - \bar u(x)|^2\,dxds\cr
&\qquad \leq \int_{\mathbb R^d} \mathcal{H}(U^\varepsilon_0|\bar U_0)\,dx + \int_{\mathbb R^d} \left( \int_{\mathbb R^d} f_0^\varepsilon |v|^2\,dv - \bar \rho_0|\bar u_0|^2\right)dx + C\|\nabla_x \bar u\|_{L^\infty}\max\{1,\lambda\} \varepsilon \cr
&\qquad \quad + C\|\nabla_x \bar u\|_{L^\infty}\int_0^t \int_{\mathbb R^d} \mathcal{H}(U^\varepsilon(s)|\bar U(s))\,dxds \cr
&\qquad \quad + \lambda\int_0^t \int_{\mathbb R^d} \rho^\varepsilon(x) ( u^\varepsilon(x) - \bar u(x)) \cdot (\nabla_x W \star (\bar \rho - \rho^\varepsilon))(x)\,dxds.
\end{aligned}
\end{align}
\end{proposition}
\begin{proof}It follows from Lemma \ref{lem_rel} that
\begin{align*}
\begin{aligned}
&\int_{\mathbb R^d} \mathcal{H}(U^\varepsilon(t)|\bar U(t))\,dx + \gamma\int_0^t \int_{\mathbb R^d} \rho^\varepsilon(x)| u^\varepsilon(x) - \bar u(x)|^2\,dxds\cr
&\quad = \int_{\mathbb R^d} \mathcal{H}(U_0^\varepsilon|\bar U_0)\,dx + \int_{\mathbb R^d} E(U^\varepsilon) - E(\bar U_0)\,dx - \int_0^t \int_{\mathbb R^d} \nabla_x (DE(\bar U)):A(U^\varepsilon|\bar U)\,dxds \cr
&\qquad - \int_0^t \int_{\mathbb R^d} DE(\bar U)\left[ \partial_s U^\varepsilon + \nabla_x \cdot A(U^\varepsilon) - F(U^\varepsilon)\right]dxds \cr
&\qquad + \gamma\int_0^t \int_{\mathbb R^d} \rho^\varepsilon(x) |u^\varepsilon(x)|^2\,dxds + \lambda \int_0^t \int_{\mathbb R^d} \nabla_x V(x) \cdot \rho^\varepsilon(x) u^\varepsilon(x)\,dxds\cr
&\qquad + \lambda\int_0^t \int_{\mathbb R^d} \rho^\varepsilon(x) ( u^\varepsilon(x) - \bar u(x)) \cdot (\nabla_x W \star (\bar \rho - \rho^\varepsilon))(x) + \rho^\varepsilon(x) u^\varepsilon(x) \cdot (\nabla_x W \star \rho^\varepsilon)(x)\,dxds\cr
&\quad =: \sum_{i=1}^7 J_i^\varepsilon.
\end{aligned}
\end{align*}
Here $J_i^\varepsilon, i =2,\cdots,7$ can be estimated as follows. \newline
\noindent {\bf Estimate of $J_2^\varepsilon$}: Note that, by the Cauchy--Schwarz inequality,
\begin{equation}\label{est_uf}
|u^\varepsilon|^2 = \left|\frac{\displaystyle \int_{\mathbb R^d} vf^\varepsilon\,dv}{\displaystyle\int_{\mathbb R^d} f^\varepsilon\,dv} \right|^2 \leq \frac{\displaystyle \int_{\mathbb R^d} |v|^2 f^\varepsilon\,dv}{\rho^\varepsilon}, \quad \mbox{i.e.,} \quad \rho^\varepsilon|u^\varepsilon|^2 \leq \int_{\mathbb R^d} |v|^2 f^\varepsilon\,dv.
\end{equation}
This gives
\[
E(U^\varepsilon) = \frac12 \rho^\varepsilon|u^\varepsilon|^2\, \leq \frac12\int_{\mathbb R^d} |v|^2 f^\varepsilon\,dv =: K(f^\varepsilon).
\]
Thus, by adding and subtracting the functional $K(f^\varepsilon)$, we find
$$\begin{aligned}
J_2^\varepsilon &= \int_{\mathbb R^d} E(U^\varepsilon)\,dx - \int_{\mathbb R^d} K(f^\varepsilon)\,dx + \int_{\mathbb R^d} K(f^\varepsilon)\,dx - \int_{\mathbb R^d} K(f^\varepsilon_0)\,dx\cr
&\quad + \int_{\mathbb R^d} K(f^\varepsilon_0)\,dx - \int_{\mathbb R^d} E(\bar U_0)\,dx\cr
&\leq 0 + \int_{\mathbb R^d} K(f^\varepsilon)\,dx - \int_{\mathbb R^d} K(f^\varepsilon_0)\,dx + \int_{\mathbb R^d} K(f^\varepsilon_0)\,dx - \int_{\mathbb R^d} E(\bar U_0)\,dx.
\end{aligned}$$
\noindent {\bf Estimate of $J_3^\varepsilon$}: It follows from \cite[Lemma 4.3]{KMT15} that
\[
A(U^\varepsilon|\bar U) = \begin{pmatrix}
0 \\[4mm]
\rho^\varepsilon (u^\varepsilon - \bar u) \otimes (u^\varepsilon - \bar u)
\end{pmatrix}.
\]
This together with the fact $DE(\bar U) = \binom{-|\bar u|^2/2}{\bar u}$ yields
\[
J_3^\varepsilon \leq C\|\nabla_x \bar u\|_{L^\infty}\int_0^t\int_{\mathbb R^d} \rho^\varepsilon|u^\varepsilon-\bar u|^2\,dxds = C\|\nabla_x \bar u\|_{L^\infty}\int_0^t \int_{\mathbb R^d} \mathcal{H}(U^\varepsilon(s)|\bar U(s))\,dxds.
\]
\noindent {\bf Estimate of $J_4^\varepsilon$}: A direct computation asserts
\[
|J_4^\varepsilon| \leq \|\nabla_x \bar u\|_{L^\infty}\int_0^t \int_{\mathbb R^d} \left|\int_{\mathbb R^d} (u^\varepsilon \otimes u^\varepsilon - v \otimes v)f^\varepsilon\,dv \right|dxds.
\]
On the other hand, we get
\[
\int_{\mathbb R^d} (u^\varepsilon \otimes u^\varepsilon - v \otimes v)f^\varepsilon\,dv = \int_{\mathbb R^d} (u^\varepsilon - v) \otimes (v - u^\varepsilon)\,f^\varepsilon\,dv.
\]
This together with \eqref{lem_energy} gives
\[
|J_4^\varepsilon| \leq C\|\nabla_x \bar u\|_{L^\infty}\max\{1,\lambda\} \varepsilon,
\]
for some constant $C > 0$. \newline
\noindent {\bf Estimate of $J_5^\varepsilon + J_6^\varepsilon$}: Integrating by parts gives
$$\begin{aligned}
\lambda \int_0^t \int_{\mathbb R^d} \nabla_x V(x) \cdot \rho^\varepsilon(x) u^\varepsilon(x)\,dxds &= -\lambda\int_0^t \int_{\mathbb R^d} V(x) \nabla_x \cdot (\rho^\varepsilon(x,s) u^\varepsilon(x,s))\,dxds \cr
&= \lambda\int_0^t \int_{\mathbb R^d} V(x) \partial_s \rho^\varepsilon(x,s)\,dxds \cr
&= \lambda\int_{\mathbb R^d} V(x)\rho^\varepsilon(x,t)\,dx - \lambda\int_{\mathbb R^d} V(x)\rho^\varepsilon_0(x)\,dx.
\end{aligned}$$
Thus we get
\[
J_5^\varepsilon + J_6^\varepsilon = \gamma\int_0^t \int_{\mathbb R^d} \rho^\varepsilon(x) |u^\varepsilon(x)|^2\,dxds + \lambda\int_{\mathbb R^d} V(x)\rho^\varepsilon(x,t)\,dx - \lambda\int_{\mathbb R^d} V(x)\rho^\varepsilon_0(x)\,dx.
\]
\noindent {\bf Estimate of $J_7^\varepsilon$}: Note that
$$\begin{aligned}
&\lambda\int_0^t \int_{\mathbb R^d} \rho^\varepsilon(x,s) u^\varepsilon(x,s) \cdot (\nabla_x W \star \rho^\varepsilon)(x,s)\,dxds \cr
&\quad = \lambda\int_0^t \int_{\mathbb R^d} \partial_s (\rho^\varepsilon(x,s) ) (W \star \rho^\varepsilon)(x,s)\,dxds\cr
&\quad =\frac\lambda2 \int_0^t \frac{\partial}{\partial s}\left(\int_{\mathbb R^d \times \mathbb R^d} W(x-y) \rho^\varepsilon(x,s) \rho^\varepsilon(y,s)\,dxdy\right)ds\cr
&\quad =\frac\lambda2\left(\int_{\mathbb R^d \times \mathbb R^d} W(x-y) \rho^\varepsilon(x,t)\rho^\varepsilon(y,t)\,dxdy - \int_{\mathbb R^d \times \mathbb R^d} W(x-y) \rho^\varepsilon_0(x) \rho^\varepsilon_0(y)\,dxdy \right).
\end{aligned}$$
This yields
$$\begin{aligned}
J_7^\varepsilon &= \lambda\int_0^t \int_{\mathbb R^d} \rho^\varepsilon(x) ( u^\varepsilon(x) - \bar u(x)) \cdot (\nabla_x W \star (\bar \rho - \rho^\varepsilon))(x)\,dxds \cr
&\quad + \frac\lambda2\left(\int_{\mathbb R^d \times \mathbb R^d} W(x-y) \rho^\varepsilon(x,t)\rho^\varepsilon(y,t)\,dxdy - \int_{\mathbb R^d \times \mathbb R^d} W(x-y) \rho^\varepsilon_0(x) \rho^\varepsilon_0(y)\,dxdy \right).
\end{aligned}$$
We now combine the estimates $J_i^\varepsilon, i=2, 5, 6,7$ to get
$$\begin{aligned}
\sum_{i \in \{2,5,6,7\}} J_i^\varepsilon &= \int_{\mathbb R^d} K(f^\varepsilon_0)\,dx - \int_{\mathbb R^d} E(\bar U_0)\,dx + \mathcal{F}(f^\varepsilon) - \mathcal{F}(f_0^\varepsilon) \cr
&\quad + \lambda\int_0^t \int_{\mathbb R^d} \rho^\varepsilon(x) ( u^\varepsilon(x) - \bar u(x)) \cdot (\nabla_x W \star (\bar \rho - \rho^\varepsilon))(x)\,dxds\cr
&\quad + \gamma\int_0^t \int_{\mathbb R^d} \rho^\varepsilon(x) |u^\varepsilon(x)|^2\,dxds.
\end{aligned}$$
We then use \eqref{lem_energy} and \eqref{est_uf} to find
$$\begin{aligned}
\sum_{i \in \{2,5,6,7\}} J_i^\varepsilon &\leq \int_{\mathbb R^d} K(f^\varepsilon_0)\,dx - \int_{\mathbb R^d} E(\bar U_0)\,dx \cr
&\quad + \lambda\int_0^t \int_{\mathbb R^d} \rho^\varepsilon(x) ( u^\varepsilon(x) - \bar u(x)) \cdot (\nabla_x W \star (\bar \rho - \rho^\varepsilon))(x)\,dxds.
\end{aligned}$$
We finally combine all the above estimates to conclude the proof.
\end{proof}
\begin{remark}
Note that we proved $J_4^\varepsilon = \mathcal{O}(\varepsilon)$ for fixed $\lambda$, in contrast with \cite[Lemma 4.4]{KMT15}, where only $J_4^\varepsilon = \mathcal{O}(\sqrt\varepsilon)$ was obtained due to the pressure term in the Euler equations.
\end{remark}
\subsection{Relative entropy combined with $2$-Wasserstein distance}
In this part, we show that the $2$-Wasserstein distance can be bounded by the relative entropy.
Note that the local densities $\bar \rho$ and $\rho^\varepsilon$ satisfy
\[
\partial_t \bar \rho + \nabla_x \cdot (\bar \rho \bar u) = 0 \quad \mbox{and} \quad \partial_t \rho^\varepsilon + \nabla_x \cdot (\rho^\varepsilon u^\varepsilon) = 0,
\]
respectively. Let us define forward characteristics $X(t) := X(t;0,x)$ and $X^\varepsilon(t) := X^\varepsilon(t;0,x)$, $t \in [0,T]$ which solve the following ODEs:
\begin{equation}\label{eq_char2}
\partial_t X(t) = \bar u(X(t),t) \quad \mbox{and} \quad \partial_t X^\varepsilon(t) = u^\varepsilon(X^\varepsilon(t),t)
\end{equation}
with $X(0) = X^\varepsilon(0) = x \in \mathbb R^d$, respectively. Since we assumed that $\bar u$ is bounded and Lipschitz continuous on the time interval $[0,T]$, there exists a unique solution $\bar \rho$, which is determined as the push-forward of its initial density through the flow map $X$, i.e., $\bar\rho(t) = X(t;0,\cdot) \# \bar\rho_0$. Here $\#$ stands for the push-forward of a probability measure by a measurable map; more precisely, $\nu = \mathcal{T} \# \mu$ for a probability measure $\mu$ and a measurable map $\mathcal{T}$ means
\[
\int_{\mathbb R^d} \varphi(y) \,d\nu(y) = \int_{\mathbb R^d} \varphi(\mathcal{T}(x)) \,d\mu(x),
\]
for all $\varphi \in \mathcal C_b(\mathbb R^d)$. Note that the solution $X(t;0,x)$ is Lipschitz in $x$ with Lipschitz constant $e^{\|\nabla_x \bar u\|_{L^\infty}t}$. Indeed, we estimate
\begin{align*}
|X(t;0,x) - X(t;0,y)| &\leq |x-y| + \int_0^t |\bar u(X(s;0,x)) - \bar u(X(s;0,y))|\,ds\cr
&\leq |x-y| + \|\nabla_x \bar u\|_{L^\infty}\int_0^t |X(s;0,x) - X(s;0,y)|\,ds.
\end{align*}
Apply Gr\"onwall's lemma to the above gives
\begin{equation}\label{est_xlip}
|X(t;0,x) - X(t;0,y)| \leq e^{\|\nabla_x \bar u\|_{L^\infty}t}|x-y|.
\end{equation}
On the other hand, the regularity of $u^\varepsilon$ is not enough to guarantee the existence of solutions $X^\varepsilon(t)$ to the second differential equation in \eqref{eq_char2}. We overcome this difficulty by means of the following proposition from \cite[Theorem 8.2.1]{AGS08}; see also \cite[Proposition 3.3]{FK19}.
\begin{proposition}\label{prop_am}Let $T>0$ and let $\rho : [0,T] \to \mathcal{P}(\mathbb R^d)$ be a narrowly continuous solution of the continuity equation $\partial_t \rho + \nabla_x \cdot (\rho u) = 0$, that is, $\rho$ is continuous in duality with bounded continuous functions, for a Borel vector field $u$ satisfying
\begin{equation}\label{est_p1}
\int_0^T\int_{\mathbb R^d} |u(x,t)|^p\rho(x,t)\,dx dt < \infty,
\end{equation}
for some $p > 1$. Let $\Xi_T := \mathcal C([0,T];\mathbb R^d)$ denote the space of continuous curves from $[0,T]$ to $\mathbb R^d$. Then there exists a probability measure $\eta$ on $\Xi_T \times \mathbb R^d$ satisfying the following properties:
\begin{itemize}
\item[(i)] $\eta$ is concentrated on the set of pairs $(\xi,x)$ such that $\xi$ is an absolutely continuous curve satisfying
\[
\dot\xi(t) = u(\xi(t),t)
\]
for almost every $t \in (0,T)$ with $\xi(0) = x \in \mathbb R^d$.
\item[(ii)] $\rho$ satisfies
\[
\int_{\mathbb R^d} \varphi(x)\rho(x,t)\,dx = \int_{\Xi_T \times \mathbb R^d}\varphi(\xi(t))\,d\eta(\xi,x)
\]
for all $\varphi \in \mathcal C_b(\mathbb R^d)$, $t \in [0,T]$.
\end{itemize}
\end{proposition}
Note that it follows from \eqref{lem_energy}, see also \eqref{est_uf}, that
\[
\int_{\mathbb R^d} |u^\varepsilon|^2 \rho^\varepsilon\,dx \leq \int_{\mathbb R^d \times \mathbb R^d} |v|^2 f^\varepsilon\,dxdv < \infty,
\]
i.e., \eqref{est_p1} holds for $p=2$, and thus by Proposition \ref{prop_am}, we have the existence of a probability measure $\eta^\varepsilon$ in $\Xi_T \times \mathbb R^d$, which is concentrated on the set of pairs $(\xi,x)$ such that $\xi$ is a solution of
\begin{equation}\label{eq_gam}
\dot{\xi}(t) = u^\varepsilon(\xi(t),t)
\end{equation}
with $\xi(0) = x \in \mathbb R^d$. Moreover, we have
\begin{equation}\label{eq_gam2}
\int_{\mathbb R^d} \varphi(x) \rho^\varepsilon(x,t)\,dx = \int_{\Xi_T \times \mathbb R^d}\varphi(\xi(t))\,d\eta^\varepsilon(\xi,x)
\end{equation}
for all $\varphi \in \mathcal C_b(\mathbb R^d)$, $t \in [0,T]$.
\begin{lemma}\label{prop_rho_wa} Let $f^\varepsilon$ be the solution to the equation \eqref{main_eq} and $(\bar \rho,\bar u)$ be the strong solution to the system \eqref{main_eq2} on the time interval $[0,T]$. Then we have
\[
W_2^2(\rho^\varepsilon(t), \bar\rho(t)) \leq C\exp\left(C\|\nabla_x \bar u\|_{L^\infty(0,T;L^\infty)}\right)\left(W_2^2(\rho^\varepsilon_0, \bar \rho_0) + \int_0^t \int_{\mathbb R^d} \mathcal{H}(U^\varepsilon(s)|\bar U(s))\,dx\,ds\right),
\]
for $0 \leq t \leq T$, where $\rho^\varepsilon = \int_{\mathbb R^d} f^\varepsilon\,dv$ and $C > 0$ depends only on $T$.
\end{lemma}
\begin{proof}
Let us introduce a density $\hat \rho^\varepsilon$ which is determined by the push-forward of $\rho_0^\varepsilon$ through the flow map $X$, i.e., $\hat\rho^\varepsilon = X \# \rho_0^\varepsilon$. For the proof, we estimate $W_2(\bar\rho, \hat\rho^\varepsilon)$ and $W_2(\rho^\varepsilon, \hat \rho^\varepsilon)$ to have the error estimate between $\bar\rho$ and $\rho^\varepsilon$ in $2$-Wasserstein distance. Let us first show the estimate of $W_2(\bar\rho, \hat\rho^\varepsilon)$. We choose an optimal transport map $\mathcal{T}_0^\varepsilon(x)$ between $\rho_0^\varepsilon$ and $\bar\rho_0$ such that $\rho_0^\varepsilon = \mathcal{T}_0^\varepsilon \# \bar\rho_0$. Then since $\bar\rho = X \# \bar\rho_0$ and $\hat \rho^\varepsilon = X \# \rho_0^\varepsilon$, we find
\[
W_2^2(\bar\rho(t), \hat\rho^\varepsilon(t)) \leq \int_{\mathbb R^d} |X(t;0,x) - X(t;0,\mathcal{T}_0^\varepsilon(x))|^2 \bar\rho_0(x)\,dx.
\]
Then this, together with the Lipschitz estimate \eqref{est_xlip} of $X$, yields
\[
W_2^2(\bar\rho(t), \hat\rho^\varepsilon(t)) \leq e^{2\|\nabla_x \bar u\|_{L^\infty} t}\int_{\mathbb R^d} |x - \mathcal{T}_0^\varepsilon(x)|^2 \bar\rho_0(x)\,dx = e^{2\|\nabla_x \bar u\|_{L^\infty} t}W_2^2(\bar\rho_0, \rho^\varepsilon_0),
\]
that is,
\[
W_2(\bar\rho(t), \hat\rho^\varepsilon(t)) \leq e^{\|\nabla_x \bar u\|_{L^\infty} t}W_2(\bar\rho_0, \rho^\varepsilon_0).
\]
For the estimate of $W_2(\rho^\varepsilon, \hat \rho^\varepsilon)$, we use the disintegration theorem of measures (see \cite{AGS08}) to write
\[
d\eta^\varepsilon(\xi,x) = \eta^\varepsilon_x(d\xi) \otimes \rho^\varepsilon_0(x)\,dx,
\]
where $\{\eta^\varepsilon_x\}_{x \in \mathbb R^d}$ is a family of probability measures on $\Xi_T$ concentrated on solutions of \eqref{eq_gam}. We then introduce a measure $\nu^\varepsilon$ on $\Xi_T \times \Xi_T \times \mathbb R^d$ defined by
\[
d\nu^\varepsilon(\xi, \sigma, x) = \eta^\varepsilon_x(d\xi) \otimes \delta_{X(\cdot;0,x)}(d\sigma) \otimes \rho^\varepsilon_0(x)\,dx.
\]
We also introduce an evaluation map $E_t : \Xi_T \times \Xi_T \times \mathbb R^d \to \mathbb R^d \times \mathbb R^d$ defined as $E_t(\xi, \sigma, x) = (\xi(t), \sigma(t))$. Then we readily show that the measure $\pi^\varepsilon_t:= (E_t)\# \nu^\varepsilon$ on $\mathbb R^d \times \mathbb R^d$ has marginals $\rho^\varepsilon(x,t)\,dx$ and $\hat\rho^\varepsilon(y,t)\,dy$ for $t \in [0,T]$; see \eqref{eq_gam2}. This yields
\begin{align}\label{est_rho2}
\begin{aligned}
W_2^2(\rho^\varepsilon(t), \hat\rho^\varepsilon(t)) &\leq \int_{\mathbb R^d \times \mathbb R^d} |x-y|^2\,d\pi^\varepsilon_t(x,y)\cr
&=\int_{\Xi_T \times \Xi_T \times \mathbb R^d} |\sigma(t) - \xi(t) |^2 \,d\nu^\varepsilon(\xi, \sigma, x) \cr
&= \int_{\Xi_T \times \mathbb R^d} |X(t;0,x) - \xi(t)|^2 \,d\eta^\varepsilon(\xi,x).
\end{aligned}
\end{align}
In order to estimate the right hand side of \eqref{est_rho2}, we use \eqref{eq_char2} and \eqref{eq_gam} to have
\begin{align*}
&\left|X(t;0,x) -\xi(t)\right| \cr
&\quad = \left|\int_0^t \bar u(X(s;0,x)) - u^\varepsilon(\xi(s),s)\,ds\right|\cr
&\qquad \leq \int_0^t \left|\bar u(X(s;0,x)) - \bar u(\xi(s),s)\right|ds + \int_0^t \left|\bar u(\xi(s),s) - u^\varepsilon(\xi(s),s)\right|ds\cr
&\qquad \leq \|\nabla_x \bar u\|_{L^\infty}\int_0^t \left|X(s;0,x) - \xi(s)\right|ds + \int_0^t \left|\bar u(\xi(s),s) - u^\varepsilon(\xi(s),s)\right|ds,
\end{align*}
and then Gr\"onwall's lemma yields
\[
\left|X(t;0,x) -\xi(t)\right| \leq Ce^{C\|\nabla_x \bar u\|_{L^\infty}}\int_0^t \left|\bar u(\xi(s),s) - u^\varepsilon(\xi(s),s)\right|ds,
\]
where $C>0$ is independent of $\varepsilon>0$. Combining this with \eqref{est_rho2}, we have
$$\begin{aligned}
W_2^2(\rho^\varepsilon(t), \hat\rho^\varepsilon(t)) &\leq Ce^{C\|\nabla_x \bar u\|_{L^\infty}}\int_{\Xi_T \times \mathbb R^d} \left|\int_0^t \left|\bar u(\xi(s),s) - u^\varepsilon(\xi(s),s)\right|ds\right|^2 d\eta^\varepsilon(\xi,x)\cr
&\leq Cte^{C\|\nabla_x \bar u\|_{L^\infty}}\int_0^t\int_{\Xi_T \times \mathbb R^d} \left|\bar u(\xi(s),s) - u^\varepsilon(\xi(s),s)\right|^2 d\eta^\varepsilon(\xi,x)\,ds\cr
&\leq Ce^{C\|\nabla_x \bar u\|_{L^\infty}}\int_0^t \int_{\mathbb R^d} |\bar u(x,s) - u^\varepsilon(x,s)|^2 \rho^\varepsilon(x,s)\,dxds\cr
&=Ce^{C\|\nabla_x \bar u\|_{L^\infty}}\int_0^t \int_{\mathbb R^d} \mathcal{H}(U^\varepsilon(s)|\bar U(s))\,dx ds,
\end{aligned}$$
where $C>0$ is independent of $\varepsilon > 0$, and we used the relation \eqref{eq_gam2}. Combining all of the above estimates, we obtain
\begin{align*}
W_2^2(\bar \rho(t), \rho^\varepsilon(t)) &\leq 2 W_2^2(\bar \rho(t), \hat\rho^\varepsilon(t)) + 2 W_2^2(\rho^\varepsilon(t), \hat\rho^\varepsilon(t))\cr
&\leq 2 e^{2\|\nabla_x \bar u\|_{L^\infty} t}W_2^2(\bar\rho_0, \rho^\varepsilon_0) + Ce^{C\|\nabla_x \bar u\|_{L^\infty}}\int_0^t \int_{\mathbb R^d} \mathcal{H}(U^\varepsilon(s)|\bar U(s))\,dx ds\cr
&\leq Ce^{C\|\nabla_x \bar u\|_{L^\infty}}\left( W_2^2(\bar\rho_0, \rho^\varepsilon_0) + \int_0^t \int_{\mathbb R^d} \mathcal{H}(U^\varepsilon(s)|\bar U(s))\,dx ds\right),
\end{align*}
where $C>0$ is independent of $\varepsilon > 0$. This completes the proof.
\end{proof}
\begin{proposition}\label{prop_re2} Let $f^\varepsilon$ be the solution to the equation \eqref{main_eq} and $(\bar \rho,\bar u)$ be the strong solution to the system \eqref{main_eq2} on the time interval $[0,T]$. Then we obtain
$$
\begin{aligned}
&\int_{\mathbb R^d} \mathcal{H}(U^\varepsilon(t)|\bar U(t))\,dx + (\gamma -C\lambda - e^{C_{\bar u}}(1+\lambda))\int_0^t \int_{\mathbb R^d} \mathcal{H}(U^\varepsilon(s)|\bar U(s))\,dxds\cr
&\qquad \leq \int_{\mathbb R^d} \mathcal{H}(U^\varepsilon_0|\bar U_0)\,dx + \int_{\mathbb R^d} \left( \int_{\mathbb R^d} f_0^\varepsilon |v|^2\,dv - \bar \rho_0|\bar u_0|^2\right)dx + C_{\bar u}\max\{1,\lambda\}\varepsilon + e^{C_{\bar u}}\lambda W_2^2(\rho^\varepsilon_0,\bar \rho_0).
\end{aligned}
$$
Here $C_{\bar u} = C\|\nabla_x \bar u\|_{L^\infty(0,T;L^\infty)}$ and $C>0$ is independent of $\gamma, \lambda$ and $\varepsilon$, but depends on $T$.
\end{proposition}
\begin{proof} Since $\nabla_x W \in \mathcal{W}^{1,\infty}(\mathbb R^d)$, we get
\[
\left|\int_{\mathbb R^d} \nabla_x W(x-y)(\bar \rho(y) - \rho^\varepsilon(y))\,dy \right| \leq CW_1(\rho^\varepsilon,\bar \rho).
\]
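Here we used the Kantorovich--Rubinstein duality, which we recall for completeness: for probability measures $\mu, \nu$ on $\mathbb R^d$ with finite first moments,
\[
W_1(\mu,\nu) = \sup\left\{ \int_{\mathbb R^d} \varphi \,d(\mu-\nu) \,:\, \varphi:\mathbb R^d \to \mathbb R \ \mbox{ with } \ \mathrm{Lip}(\varphi) \leq 1 \right\}.
\]
Applying this componentwise with $\varphi(y) = \partial_{x_i} W(x-y)$, whose Lipschitz constant is bounded by $\|\nabla_x^2 W\|_{L^\infty}$, gives the above bound with a constant $C$ depending only on $d$ and $\|\nabla_x W\|_{\mathcal{W}^{1,\infty}}$.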
This enables us to estimate the last term on the right hand side of \eqref{eqn_rel} as
$$\begin{aligned}
&\lambda\left|\int_{\mathbb R^d} \rho^\varepsilon(x) ( u^\varepsilon(x) - \bar u(x)) \cdot (\nabla_x W \star (\bar \rho - \rho^\varepsilon))(x)\,dx \right| \cr
&\quad \leq C\lambda W_1(\rho^\varepsilon,\bar\rho)\left(\int_{\mathbb R^d} \rho^\varepsilon|u^\varepsilon - \bar u|^2\,dx \right)^{1/2} \leq C\lambda W_2^2 (\rho^\varepsilon,\bar \rho) + C\lambda \int_{\mathbb R^d} \rho^\varepsilon|u^\varepsilon - \bar u|^2\,dx,
\end{aligned}$$
where we used Young's inequality and $W_1 \leq W_2$. This together with Proposition \ref{prop_re} gives
$$\begin{aligned}
&\int_{\mathbb R^d} \mathcal{H}(U^\varepsilon(t)|\bar U(t))\,dx + (\gamma - C\lambda)\int_0^t \int_{\mathbb R^d} \rho^\varepsilon(x)| u^\varepsilon(x) - \bar u(x)|^2\,dxds\cr
&\quad \leq \int_{\mathbb R^d} \mathcal{H}(U^\varepsilon_0|\bar U_0)\,dx + \int_{\mathbb R^d} \left( \int_{\mathbb R^d} f_0^\varepsilon |v|^2\,dv - \bar\rho_0|\bar u_0|^2\right)dx + C\|\nabla_x \bar u\|_{L^\infty}\varepsilon\max\{1,\lambda\} \cr
&\qquad + C\lambda\int_0^t W_2^2(\rho^\varepsilon(s),\bar \rho(s)) \,ds + C\|\nabla_x \bar u\|_{L^\infty}\int_0^t \int_{\mathbb R^d} \mathcal{H}(U^\varepsilon(s)|\bar U(s))\,dxds.
\end{aligned}$$
We then combine the above inequality and Lemma \ref{prop_rho_wa} to have
$$\begin{aligned}
&\int_{\mathbb R^d} \mathcal{H}(U^\varepsilon(t)|\bar U(t))\,dx + (\gamma - C\lambda)\int_0^t \int_{\mathbb R^d} \rho^\varepsilon(x)| u^\varepsilon(x) - \bar u(x)|^2\,dxds\cr
&\quad \leq \int_{\mathbb R^d} \mathcal{H}(U^\varepsilon_0|\bar U_0)\,dx + \int_{\mathbb R^d} \left( \int_{\mathbb R^d} f_0^\varepsilon |v|^2\,dv - \bar\rho_0|\bar u_0|^2\right)dx + C_{\bar u}\varepsilon\max\{1,\lambda\} \cr
&\qquad + e^{C_{\bar u}}\lambda W_2^2(\rho^\varepsilon_0,\bar\rho_0) + e^{C_{\bar u}}(1 + \lambda)\int_0^t \int_{\mathbb R^d} \mathcal{H}(U^\varepsilon(s)|\bar U(s))\,dxds,
\end{aligned}$$
where $C_{\bar u} = C\|\nabla_x \bar u\|_{L^\infty(0,T;L^\infty)}$. This completes the proof.
\end{proof}
\begin{remark}\label{rmk_hydro} If we study the hydrodynamic limit $\varepsilon \to 0$ with fixed $\gamma, \lambda > 0$, then the assumption
\[
\int_{\mathbb R^d} \mathcal{H}(U^\varepsilon_0|\bar U_0)\,dx + \int_{\mathbb R^d} \left( \int_{\mathbb R^d} f_0^\varepsilon |v|^2\,dv - \bar\rho_0|\bar u_0|^2\right)dx + W_2(\rho^\varepsilon_0,\bar\rho_0) = \mathcal{O}(\varepsilon)
\]
implies that the relative entropy and the $2$-Wasserstein distance between the solutions decay to zero as $\varepsilon \to 0$:
\[
\sup_{0 \leq t \leq T} \left(\int_{\mathbb R^d} \rho^\varepsilon(x,t)| u^\varepsilon(x,t) - \bar u(x,t)|^2\,dx + W_2^2(\rho^\varepsilon(t),\bar \rho(t))\right) \to 0 \quad \mbox{as} \quad \varepsilon \to 0.
\]
In this case, the limit of $f^\varepsilon$ is also determined by
\[
f^\varepsilon \rightharpoonup \bar \rho\, \delta_{v - \bar u} \quad \mbox{weakly-$*$ as} \quad \varepsilon \to 0
\]
for a.e. $t \in (0,T)$. Indeed, for $\phi \in \mathcal C^\infty_c(\mathbb R^d \times \mathbb R^d \times [0,T])$, we have
$$\begin{aligned}
&\int_0^T \int_{\mathbb R^d \times \mathbb R^d} \left(f^\varepsilon(x,v,t) - \bar \rho(x,t) \, \delta_{(v - \bar u(x,t))}\right)\phi(x,v,t)\,dxdvdt\cr
&\quad = \int_0^T \int_{\mathbb R^d \times \mathbb R^d} f^\varepsilon(x,v,t) \left(\phi(x,v,t) - \phi(x,\bar u(x,t),t) \right) dxdvdt \cr
&\qquad + \int_0^T \int_{\mathbb R^d} \left( \rho^\varepsilon(x,t) - \bar\rho(x,t) \right)\phi(x,\bar u(x,t),t)\,dxdt \cr
&=: R_1^\varepsilon + R_2^\varepsilon,
\end{aligned}$$
where $R_1^\varepsilon$ can be estimated as
\[
\left|R_1^\varepsilon\right| \leq C(\|\nabla_{x,v,t}\phi\|_{L^\infty})\left(\int_0^T\int_{\mathbb R^d \times \mathbb R^d} f^\varepsilon|v - \bar u|^2\,dxdvdt \right)^{1/2} \to 0
\]
as $\varepsilon \to 0$, due to \eqref{lem_energy}. For the estimate of $R_2^\varepsilon$, we obtain
\[
\left| R_2^\varepsilon\right| \leq C(\|\phi\|_{L^\infty}, \|\nabla_{x,v,t}\phi\|_{L^\infty}, \|\nabla_x \bar u\|_{L^\infty} )\int_0^T W_2(\rho^\varepsilon(t),\bar\rho(t))\,dt \to 0
\]
as $\varepsilon \to 0$. We finally note that in \cite{FK19} the $2$-Wasserstein distance is also used to handle the nonlocal velocity alignment force; however, they need a slightly stronger assumption, namely $\|\rho^\varepsilon_0 - \bar\rho_0\|_{L^1} = \mathcal{O}(\varepsilon)$, rather than $W_2(\rho^\varepsilon_0,\bar\rho_0) = \mathcal{O}(\varepsilon)$. We also want to emphasize that our estimate is more consistent in the sense that we only assume a condition on $W_2(\rho^\varepsilon_0,\bar\rho_0)$ in order to obtain the estimate for $W_2(\rho^\varepsilon(t),\bar\rho(t))$. Moreover, it is not clear that the strategy used in \cite{FK19} works in the whole-space case since they make use of the periodicity and the boundedness of the domain. In a recent work \cite{CYpre}, it is observed that the $1$-Wasserstein distance can also be bounded by the relative entropy.
\end{remark}
\begin{remark}\label{rmk_v2} Suppose that $\gamma$ is large enough such that $\gamma -C\lambda - e^{C_{\bar u}}(1+\lambda) >0$. Then it follows from Proposition \ref{prop_re2} that
\[
\int_0^t \int_{\mathbb R^d} \mathcal{H}(U^\varepsilon(s)|\bar U(s))\,dxds \leq \frac{\mathcal{I}(U^\varepsilon_0, \bar U_0) + C_{\bar u}\max\{1,\lambda\}\varepsilon + e^{C_{\bar u}}\lambda W_2^2(\rho^\varepsilon_0,\bar\rho_0)}{\gamma -C\lambda - e^{C_{\bar u}}(1+\lambda)},
\]
where
\[
\mathcal{I}(U^\varepsilon_0, \bar U_0) = \int_{\mathbb R^d} \mathcal{H}(U^\varepsilon_0|\bar U_0)\,dx + \int_{\mathbb R^d} \left( \int_{\mathbb R^d} f_0^\varepsilon |v|^2\,dv - \bar\rho_0|\bar u_0|^2\right)dx.
\]
\end{remark}
We now provide the details of the proof of Proposition \ref{prop_main}.
\begin{proof}[Proof of Proposition \ref{prop_main}]Combining Lemma \ref{prop_rho_wa} and Remark \ref{rmk_v2} yields
$$\begin{aligned}
W_2^2(\rho^\varepsilon(t), \bar\rho(t)) &\leq e^{C_{\bar u}}\left(W_2^2(\rho^\varepsilon_0, \bar\rho_0) + \int_0^t \int_{\mathbb R^d} \mathcal{H}(U^\varepsilon(s)|\bar U(s))\,dxds\right)\cr
&\leq e^{C_{\bar u}}\left( W_2^2(\rho^\varepsilon_0, \bar\rho_0) + \frac{\mathcal{I}(U^\varepsilon_0, \bar U_0) + C_{\bar u}\max\{1,\lambda\}\varepsilon + e^{C_{\bar u}}\lambda W_2^2(\rho^\varepsilon_0,\bar \rho_0)}{\gamma -C\lambda - e^{C_{\bar u}}(1+\lambda)} \right).
\end{aligned}$$
This completes the proof.
\end{proof}
\section{Proof of Theorem \ref{thm_main}: Large friction limit}\label{sec_lft}
In this section, we provide the details of the proof of Theorem \ref{thm_main} on the large friction limit from the kinetic equation \eqref{main_eq} to the aggregation equation \eqref{main_conti}. Our main strategy is to combine the $2$-Wasserstein distance estimate in Proposition \ref{prop_main} with the recent work \cite{CCTpre}, where the overdamped limit from the damped Euler system with interaction forces to the aggregation equation is established by optimal transport techniques. We notice that the intermediate system \eqref{main_eq2} depends on the parameters $\gamma$ and $\lambda$, and the estimates in Section \ref{sec_quanti} also depend on $\|\nabla_x \bar u\|_{L^\infty(0,T;L^\infty)}$. Thus we need to check how this quantity depends on the parameters $\gamma$ and $\lambda$. Throughout this section, we set $\lambda=\kappa\gamma$.
\subsection{$Lip$-estimate on the velocity field} Let $\bar u$ denote the velocity of the strong solution to the system \eqref{main_eq2}. Our goal in this part is to provide the $L^\infty$-estimate of $\nabla_x \bar u$.
Define the characteristic flow $\bar\eta$ associated to the fluid velocity $\bar u(x,t)$ by
\begin{equation}\label{char}
\partial_t \bar\eta(x,t) = \bar u(\bar\eta(x,t),t) \quad \mbox{for} \quad t > 0 \quad \mbox{subject to} \quad \bar\eta(x,0) = x \in \mathbb R^d.
\end{equation}
\begin{lemma}\label{lem_u} Let $T>0$ and $(\bar\rho,\bar u)$ be the strong solution to the system \eqref{main_eq2} on the time interval $[0,T]$. Then there exist $\gamma_* > 0$ and $\kappa_*>0$ such that
\[
\|\nabla_x \bar u\|_{L^\infty(0,T;L^\infty)} \leq \|\nabla_x \bar u_0\|_{L^\infty} + 1
\]
for $\gamma \geq \gamma_*$ and $\kappa \leq \kappa_*$.
\end{lemma}
\begin{proof}
It follows from the momentum equations in \eqref{main_eq2} that
\[
\partial_t \nabla_x \bar u + \bar u \cdot \nabla_x^2 \bar u + (\nabla_x \bar u)^2 = - \gamma\nabla_x \bar u -\lambda (c_V\mathbb{I}_d + \nabla_x^2 W \star \bar\rho).
\]
Then, along the characteristic flow defined in \eqref{char}, we find
$$\begin{aligned}
(\nabla_x\bar u)(\bar\eta(x,t),t) &= (\nabla_x\bar u_0)(x)e^{-\gamma t} \cr
&\quad - e^{-\gamma t}\int_0^t \left((\nabla_x\bar u)^2(\bar\eta(x,s),s) + \lambda(c_V \mathbb{I}_d+ \nabla_x^2 W \star \bar \rho(\bar\eta(x,s),s))\right)e^{\gamma s}\,ds,
\end{aligned}$$
and this yields
$$\begin{aligned}
\|\nabla_x\bar u(\cdot,t)\|_{L^\infty} &\leq \|\nabla_x\bar u_0\|_{L^\infty}e^{-\gamma t} + Ce^{-\gamma t}\int_0^t \left(\|\nabla_x\bar u(\cdot,s)\|_{L^\infty}^2 + \lambda \right)e^{\gamma s}\,ds\cr
& = \|\nabla_x\bar u_0\|_{L^\infty}e^{-\gamma t} + Ce^{-\gamma t}\int_0^t \|\nabla_x\bar u(\cdot,s)\|_{L^\infty}^2 e^{\gamma s}\,ds + \kappa(1 - e^{-\gamma t}),
\end{aligned}$$
due to $\lambda = \kappa \gamma$. Set $C_*:= \|\nabla_x \bar u_0\|_{L^\infty} + 1$ and
\[
\mathcal{A} := \left\{ t > 0\,:\,\|\nabla_x \bar u(\cdot,s)\|_{L^\infty} < C_*\mbox{ for } s \in [0,t) \right\}.
\]
Since $\mathcal{A} \neq \emptyset$, we can define $T_* := \sup \mathcal{A}$, and if $T_* < T$, then the following holds:
\[
\lim_{t \to T_*\mbox{-}}\|\nabla_x \bar u(\cdot,t)\|_{L^\infty} = C_*.
\]
On the other hand, for $t < T_*$, we get
\[
\|\nabla_x \bar u(\cdot,t)\|_{L^\infty} \leq \|\nabla_x \bar u_0\|_{L^\infty} e^{-\gamma t} + \left(\frac{CC_*^2}{\gamma} + \kappa\right)(1 - e^{-\gamma t}).
\]
We now choose $\gamma_*$ sufficiently large and $\kappa_*$ small enough so that
$
\tfrac{CC_*^2}{\gamma} + \kappa < 1
$
for $\gamma \geq \gamma_*$ and $\kappa \leq \kappa_*$. Thus we obtain
\[
C_* = \lim_{t \to T_*\mbox{-}}\|\nabla_x \bar u(\cdot,t)\|_{L^\infty} \leq \|\nabla_x \bar u_0\|_{L^\infty} e^{-\gamma T_*} + 1 < C_*,
\]
and this is a contradiction. Hence we have $T_* \geq T$, and this completes the proof.
\end{proof}
\subsection{Overdamped limit: from Euler to aggregation equations}
Let us consider the pressureless Euler equations \eqref{main_eq2}:
\begin{align}\label{eq_wo}
\begin{aligned}
&\partial_t \rho^\gamma + \nabla_x \cdot (\rho^\gamma u^\gamma) = 0,\cr
&\partial_t (\rho^\gamma u^\gamma) + \nabla_x \cdot (\rho^\gamma u^\gamma \otimes u^\gamma) = - \gamma \rho^\gamma \left(u^\gamma + \kappa\left(\nabla_x V + \nabla_x W \star \rho^\gamma\right)\right).
\end{aligned}
\end{align}
Then an easy generalization of \cite[Theorem 5]{CCTpre} implies the following error estimate, in the $2$-Wasserstein distance, between $\rho^\gamma$ and the solution $\rho$ to \eqref{main_conti}.
\begin{proposition}\label{prop_od} Let $T>0$ and $(\rho^\gamma,u^\gamma)$ be the strong solution of \eqref{eq_wo} for sufficiently large $\gamma > 0$, and let $(\rho, u)$ be the unique strong solution to the following equation on the time interval $[0,T]$:
\[
\partial_t \rho + \nabla_x \cdot (\rho u) = 0, \quad \rho u = -\kappa\rho(\nabla_x V + \nabla_x W \star \rho).
\]
We further assume that the initial data satisfy
\[
\mathcal{E}(\rho_0,u_0) < \infty, \quad \sup_{\gamma > 0} \mathcal{E}(\rho_0^\gamma,u_0^\gamma) < \infty, \quad \sup_{\gamma > 0}W_2(\rho_0, \rho_0^\gamma) < \infty,
\]
and
\[
\sup_{\gamma > 0} \int_{\mathbb R^d} |u_0 - u_0^\gamma|^2 \rho_0^\gamma\,dx < \infty,
\]
where
\[
\mathcal{E}(\rho,u) := \mathcal{E}_1(\rho,u) + \mathcal{E}_2(\rho,u):= \left(\int_{\mathbb R^d} V \rho\,dx+\frac12\int_{\mathbb R^d \times \mathbb R^d} W(x-y)\rho(x)\rho(y)\,dxdy\right) + \int_{\mathbb R^d} |u|^2 \rho\,dx.
\]
Then we have
\[
\int_0^T W_2^2(\rho^\gamma(t), \rho(t))\,dt \leq \frac{M_{\gamma}}{2c_W\gamma - 1},
\]
where $M_{\gamma} > 0$ is given by
$$\begin{aligned}
M_{\gamma} &:= 4\left(\mathcal{E}_1(\rho_0,u_0) + \mathcal{E}_1(\rho_0^\gamma,u_0^\gamma)\right) + (1 + \gamma)W_2^2(\rho_0, \rho_0^\gamma) \cr
&\quad
+ \frac2\gamma\left( \mathcal{E}_2(\rho_0,u_0) + \mathcal{E}_2(\rho_0^\gamma,u_0^\gamma)\right) + \int_{\mathbb R^d} |u_0 - u_0^\gamma|^2 \rho_0^\gamma\,dx.
\end{aligned}$$
\end{proposition}
\begin{remark}
The improvement of Proposition \ref{prop_od} over \cite[Theorem 5]{CCTpre} lies in the assumptions on the initial data, which are now allowed to depend on $\gamma$.
\end{remark}
We are now in a position to give the details of the proof of Theorem \ref{thm_main}.
\begin{proof}[Proof of Theorem \ref{thm_main}] For a given $\rho_0$ satisfying the assumptions in Theorem \ref{thm_main}, we consider its approximation $0 \leq \bar\rho_0^\varepsilon \in H^s(\mathbb R^d)$ with $s>d/2+1$ satisfying
\[
\sup_{\varepsilon > 0}\|\bar \rho_0^\varepsilon\|_{L^1} < \infty, \quad \sup_{\varepsilon > 0}\mathcal{E}(\bar\rho_0^\varepsilon,\bar u_0^\varepsilon) < \infty, \quad \mbox{and} \quad W_2^2(\rho_0, \bar\rho_0^\varepsilon) = \mathcal{O}(\varepsilon).
\]
Here $\bar u_0^\varepsilon := -\kappa (\nabla_x V + \nabla_x W \star \bar\rho_0^\varepsilon)$. Then it is straightforward to check that
\[
W_2^2(\rho_0^\varepsilon, \bar\rho_0^\varepsilon) \leq 2W_2^2(\rho_0^\varepsilon, \rho_0) + 2W_2^2(\rho_0, \bar\rho_0^\varepsilon) = \mathcal{O}(\varepsilon) + 2W_2^2(\rho_0^\varepsilon, \rho_0)
\]
and
\[
\|\nabla_x \bar u_0^\varepsilon\|_{L^\infty} \leq C\kappa\left(1 + \|\nabla_x^2 W\|_{L^\infty}\|\bar\rho_0^\varepsilon\|_{L^1} \right) \leq C\kappa.
\]
We now consider the pressureless Euler system \eqref{main_eq2} with the above initial data $(\bar\rho_0^\varepsilon, \bar u_0^\varepsilon)$ and the singular parameter $\gamma = 1/\varepsilon$, i.e., $\lambda = \kappa/\varepsilon$. This, together with Lemma \ref{lem_u}, Proposition \ref{prop_main}, and choosing $\varepsilon,\kappa >0$ small enough, yields
$$\begin{aligned}
W_2^2(\rho^\varepsilon(t), \bar\rho^\varepsilon(t)) &\leq e^{C\kappa}\left( W_2^2(\rho^\varepsilon_0, \bar\rho_0^\varepsilon) + \frac{\mathcal{I}(U^\varepsilon_0, \bar U_0^\varepsilon) + C\kappa\varepsilon\lambda + e^{C\kappa}\kappa\gamma W_2^2(\rho^\varepsilon_0,\bar \rho_0^\varepsilon)}{\gamma -C\kappa\gamma - e^{C\kappa}(1+\kappa \gamma)} \right)\cr
&=e^{C\kappa}\left( W_2^2(\rho^\varepsilon_0, \bar\rho_0^\varepsilon) + \frac{\varepsilon\mathcal{I}(U^\varepsilon_0, \bar U_0^\varepsilon) + C\kappa^2\varepsilon + \kappa e^{C\kappa} W_2^2(\rho^\varepsilon_0,\bar \rho_0^\varepsilon)}{1 -\kappa( C+ e^{C\kappa}(1+\varepsilon))} \right)\cr
&=\mathcal{O}(\varepsilon) + C W_2^2(\rho^\varepsilon_0, \bar\rho_0^\varepsilon),
\end{aligned}$$
where $C> 0$ is independent of $\varepsilon$ and
\[
\mathcal{I}(U^\varepsilon_0, \bar U^\varepsilon_0) = \int_{\mathbb R^d} \rho^\varepsilon_0(x)| u^\varepsilon_0(x) - \bar u^\varepsilon_0(x)|^2\,dx + \int_{\mathbb R^d} \left( \int_{\mathbb R^d} f_0^\varepsilon |v|^2\,dv - \bar \rho^\varepsilon_0|\bar u^\varepsilon_0|^2\right)dx.
\]
Note that
\[
\int_{\mathbb R^d} \rho^\varepsilon_0|\bar u^\varepsilon_0|^2\,dx \leq C\int_{\mathbb R^d} \rho^\varepsilon_0 V\,dx + C\|\nabla_x W \star \bar\rho^\varepsilon_0\|_{L^\infty}^2 \int_{\mathbb R^d} \rho^\varepsilon_0\,dx \leq C\left(\int_{\mathbb R^d} \rho^\varepsilon_0 V\,dx + \|f^\varepsilon_0\|_{L^1}\|\bar\rho^\varepsilon_0\|_{L^1}^2 \right),
\]
where $C > 0$ is independent of $\varepsilon$. Then this implies
\[
\sup_{\varepsilon > 0}\mathcal{I}(U^\varepsilon_0, \bar U^\varepsilon_0) \leq C\sup_{\varepsilon > 0}\left(\int_{\mathbb R^d \times \mathbb R^d} f_0^\varepsilon |v|^2\,dxdv + \int_{\mathbb R^d} \rho^\varepsilon_0 V\,dx + \|f^\varepsilon_0\|_{L^1}\|\bar\rho^\varepsilon_0\|_{L^1}^2 \right) < \infty,
\]
where $C > 0$ is independent of $\varepsilon$. Furthermore, since $W_2^2(\rho^\varepsilon_0, \bar\rho_0^\varepsilon) \leq \mathcal{O}(\varepsilon)+2W_2^2(\rho_0^\varepsilon, \rho_0)$, we have
\[
W_2^2(\rho^\varepsilon(t), \bar\rho^\varepsilon(t)) \leq \mathcal{O}(\varepsilon)+ 2W_2^2(\rho_0^\varepsilon, \rho_0).
\]
For the error estimate of solutions to \eqref{main_conti} and \eqref{main_eq2}, we use Proposition \ref{prop_od} with $\gamma = 1/\varepsilon$ to obtain
\[
\int_0^T W_2^2(\bar\rho^\varepsilon(t), \rho(t))\,dt \leq C\varepsilon + CW_2^2(\bar\rho_0^\varepsilon, \rho_0) = \mathcal{O}(\varepsilon).
\]
We finally combine all the above estimates to conclude
$$\begin{aligned}
\int_0^T W_2^2(\rho^\varepsilon(t), \rho(t))\,dt &\leq \int_0^T W_2^2(\rho^\varepsilon(t), \bar\rho^\varepsilon(t))\,dt + \int_0^T W_2^2(\bar\rho^\varepsilon(t), \rho(t))\,dt \cr
&\leq \mathcal{O}(\varepsilon) + CW_2^2(\rho_0^\varepsilon, \rho_0).
\end{aligned}$$
This completes the proof.
\end{proof}
\section{Well-posedness of equations \eqref{main_eq}, \eqref{main_conti}, and \eqref{main_eq2}}\label{sec_ext}
In this section, we show the global-in-time existence of solutions to the equations \eqref{main_eq}, \eqref{main_conti}, and \eqref{main_eq2} under suitable assumptions on the initial data, making our main result completely rigorous.
\subsection{Global-in-time existence of weak solutions to the equation \eqref{main_eq}}\label{sec_weak}
We first present a notion of weak solutions of the equation \eqref{main_eq} and our result on the global-in-time existence of weak solutions.
\begin{definition}\label{def_weak} For a given $T \in (0,\infty)$, we say that $f$ is a weak solution to the equation \eqref{main_eq} if the following conditions are satisfied:
\begin{itemize}
\item[(i)] $f \in L^\infty(0,T;(L^1_+ \cap L^\infty)(\mathbb R^d \times \mathbb R^d))$,
\item[(ii)] for any $\varphi \in \mathcal C^\infty_c(\mathbb R^d \times \mathbb R^d \times [0,T])$,
$$\begin{aligned}
&\int_0^t \int_{\mathbb R^d \times \mathbb R^d} f(\partial_t\varphi + v \cdot \nabla_x \varphi - \left(\gamma v + \lambda(\nabla_x V + \nabla_x W \star \rho) \right) \cdot \nabla_v \varphi)\,dxdvds\cr
&\quad + \int_0^t \int_{\mathbb R^d \times \mathbb R^d} f(\beta(u-v) \cdot \nabla_v \varphi)\,dxdvds + \int_{\mathbb R^d \times \mathbb R^d} f_0 \varphi(\cdot,\cdot,0)\,dxdv= 0.
\end{aligned}$$
\end{itemize}
\end{definition}
We also recall the velocity averaging lemma whose proof can be found in \cite[Lemma 2.7]{KMT13}.
\begin{lemma}\label{lem_vel} For $1 \leq p < (d+2)/(d+1)$, let $\{G_n\}_n$ be bounded in $L^p(\mathbb R^d \times\mathbb R^d \times (0,T))$. Suppose that
\begin{itemize}
\item[(i)] $f_n$ is bounded in $L^\infty(0,T;(L^1 \cap L^\infty)(\mathbb R^d \times \mathbb R^d))$,
\item[(ii)] $(|x|^2 + |v|^2)f_n$ is bounded in $L^\infty(0,T;L^1(\mathbb R^d \times \mathbb R^d))$.
\end{itemize}
If $f_n$ and $G_n$ satisfy the following equation:
$
\partial_t f_n + v \cdot \nabla_x f_n = \nabla_v G_n,
$
then, for any $\varphi(v)$ satisfying $|\varphi(v)| \leq c|v|$ as $|v|\to \infty$, the sequence
$
\left\{ \int_{\mathbb R^d} f_n\varphi(v)\,dv\right\}_n
$
is relatively compact in $L^p(\mathbb R^d \times (0,T))$.
\end{lemma}
We can now state the existence result for this type of solutions.
\begin{theorem}\label{thm_weak} Let $T>0$. Suppose that $f_0$ satisfies
\[
f_0 \in (L^1_+ \cap L^\infty)(\mathbb R^d \times \mathbb R^d) \quad \mbox{and} \quad (|v|^2 + V + W\star \rho_0)f_0 \in L^1(\mathbb R^d \times \mathbb R^d).
\]
Furthermore, we assume
\[
V(x) = \frac{|x|^2}{2}, \quad W \mbox{ is symmetric and bounded from below}, \quad \mbox{and} \quad \nabla_x W \in L^\infty(\mathbb R^d).
\]
Then there exists a weak solution of the equation \eqref{main_eq} in the sense of Definition \ref{def_weak} satisfying
$
(|v|^2 + V + W\star \rho)f \in L^\infty(0,T;L^1(\mathbb R^d \times \mathbb R^d)).
$
Furthermore, the total energy inequality \eqref{lem_energy} holds.
\end{theorem}
For notational simplicity, in the rest of this section, we set $\beta=\lambda = \gamma = 1$.
\begin{remark}
Our strategy can be directly applied to the case where the confinement potential $V$ satisfies
$0 \leq V(x) \to + \infty$ as $|x| \to +\infty$, and $|\nabla_x V(x)|^2 \lesssim V(x)$ for $x \in \mathbb R^d$.
Without loss of generality, we may assume that $W \geq 0$ in the rest of this subsection.
\end{remark}
The global-in-time existence of weak solutions for the Vlasov equation with local alignment forces was studied in \cite{KMT13}. In the presence of diffusion, the global-in-time existence of classical solutions near the global Maxwellian was obtained in \cite{C16kin}. We take a strategy similar to that of \cite{KMT13} and develop it to handle the additional terms, namely the confinement and interaction potentials, in order to prove Theorem \ref{thm_weak}.
\subsubsection{Regularized equation}
In this part, we deal with a regularized equation of \eqref{main_eq}. Inspired by \cite{KMT13}, we regularize the local velocity $u$ and apply the high-velocity cut-off to the regularized local velocity. More precisely, we consider
\begin{equation}\label{eq_reg}
\partial_t f + v \cdot \nabla_x f = \nabla_v \left(f(v - \chi_\zeta(u_\delta)) + f(v + \nabla_x V + \nabla_x W \star \rho)\right)
\end{equation}
with the initial data
$
f(x,v,t)|_{t=0} = f_0(x,v),
$
where
\[
\chi_\zeta(u) = u\mathbf{1}_{|u| \leq \zeta} \quad \mbox{and} \quad u_\delta := \frac{\int_{\mathbb R^d} vf\,dv}{\delta + \int_{\mathbb R^d} f\,dv} = \frac{\rho}{\delta + \rho} u
\]
with $\delta>0$ and $\zeta>0$.
Our goal in this part is to prove the global well-posedness of the regularized equation \eqref{eq_reg}.
\begin{proposition}\label{prop_reg} Let $f_0 \geq 0$ satisfy the condition of Theorem \ref{thm_weak}. Then, for any $\delta, \zeta>0$, there exists a solution $f \in L^\infty(0,T;(L^1\cap L^p)(\mathbb R^d \times \mathbb R^d))$ with $p \in [1,\infty]$ of \eqref{eq_reg} satisfying
\begin{equation}\label{est_lp}
\|f\|_{L^\infty(0,T;L^p)} \leq e^{C/p'}\|f_0\|_{L^p} \quad \mbox{for} \quad p \in [1,\infty]
\end{equation}
and
\[
\sup_{0 \leq t \leq T} \mathcal{F}(f) + \int_0^T \int_{\mathbb R^d \times \mathbb R^d} |v|^2 f\,dxdv \leq \mathcal{F}(f_0),
\]
where $C > 0$ is independent of $\delta$ and $\zeta$.
\end{proposition}
\begin{proof}Since the proof is similar to that of \cite[Proposition 3.1]{KMT13}, we only sketch the main ideas.
\noindent {\bf Step 1 (Setup for fixed point argument):} We first fix $p_0 \in (1,(d+2)/(d+1))$. For a given $\bar u \in L^{p_0}(\mathbb R^d \times (0,T))$, we let $f$ be the solution of
\begin{equation}\label{eq_reg_app}
\partial_t f + v \cdot \nabla_x f = \nabla_v \left(f(v - \chi_\zeta(\bar u_\delta)) + f(v + \nabla_x V + \nabla_x W \star \rho)\right)
\end{equation}
with the initial data
\[
f(x,v,t)|_{t=0} = f_0(x,v).
\]
We then define a map $\mathcal{T}$ by
\[
\bar u \mapsto \mathcal{T}(\bar u) = u_\delta.
\]
\noindent {\bf Step 2 (Existence):} We first show that the operator $\mathcal{T}$ is well-defined. In fact, the global-in-time existence and uniqueness of a solution $f \in L^\infty(0,T;(L^1\cap L^p)(\mathbb R^d \times \mathbb R^d))$ to \eqref{eq_reg_app} are standard at this point since $\chi_\zeta(\bar u_\delta) \in L^\infty(\mathbb R^d \times (0,T))$. Furthermore, we can also obtain the uniform $L^p$ estimate \eqref{est_lp}. Indeed, it follows easily from the fact that
\[
\int_{\mathbb R^d} f(\chi_\zeta(\bar u) - \nabla_x V - \nabla_x W \star \rho)\cdot \nabla_v f^{p-1}\,dv = \frac1p (\chi_\zeta(\bar u) - \nabla_x V - \nabla_x W \star \rho) \cdot \int_{\mathbb R^d} \nabla_v f^p\,dv = 0.
\]
For the energy estimate, we obtain
\begin{equation}\label{est_energy_app0}
\frac{d}{dt}\mathcal{F}(f) = -2\int_{\mathbb R^d \times \mathbb R^d} |v|^2 f\,dxdv + \int_{\mathbb R^d \times \mathbb R^d} f v \cdot \chi_\zeta(\bar u)\,dxdv \leq -\int_{\mathbb R^d \times \mathbb R^d} |v|^2 f\,dxdv + \zeta^2,
\end{equation}
and this gives
\begin{equation}\label{est_energy_app}
\sup_{0 \leq t \leq T}\mathcal{F}(f(t)) + \int_0^T\int_{\mathbb R^d \times \mathbb R^d} |v|^2 f\,dxdvdt \leq \mathcal{F}(f_0) + \zeta^2 T.
\end{equation}
The continuity of the operator $\mathcal{T}$ follows from \cite[Lemma 3.3]{KMT13}. We next show that the operator $\mathcal{T}$ is compact. More precisely, if $\{ \bar u_n\}_n$ is a bounded sequence in $L^{p_0}(\mathbb R^d \times (0,T))$, then we show that $\mathcal{T}(\bar u_n)$ converges strongly in $L^{p_0}(\mathbb R^d \times (0,T))$ up to a subsequence. The proof relies on the velocity averaging lemma, Lemma \ref{lem_vel}, and it is enough to establish a uniform $L^q$ bound, with $q \leq 2$, on the force field appearing in \eqref{eq_reg_app}; see \cite[Section 3.2]{KMT13}. Let us denote $G := f(v - \chi_\zeta(\bar u_\delta)) + f(v + \nabla_x V + \nabla_x W \star \rho)$. Then we find from the above $L^p$ estimate of $f$ and \eqref{est_energy_app} that
$$\begin{aligned}
\|G\|_{L^q} &\leq \zeta\|f\|_{L^q} + 2\|(x + v)f\|_{L^q} + \|(\nabla_x W\star\rho)f\|_{L^q}\cr
&\leq \zeta\|f\|_{L^q} + 4\mathcal{F}(f)^{1/2} + \|\nabla_x W\|_{L^\infty}\|f\|_{L^q} <\infty,
\end{aligned}$$
where we used
\[
\|(x + v)f\|_{L^q} \leq 2\left( \int_{\mathbb R^d \times \mathbb R^d} (|x|^2 + |v|^2)f\,dxdv\right)^{1/2}\|f\|_{L^{\frac{q}{2-q}}}^{1/2} \leq 2\mathcal{F}(f)^{1/2}\|f\|_{L^{\frac{q}{2-q}}}^{1/2},
\]
for $q \leq 2$. Then using this, Lemma \ref{lem_vel}, the argument in \cite[Section 3.2]{KMT13}, we can apply Schauder fixed point theorem to conclude the existence of solutions to the regularized equation \eqref{eq_reg}.
\noindent {\bf Step 3 (Uniform energy estimate):} Similarly to \eqref{est_energy_app0}, we find
\[
\frac{d}{dt}\mathcal{F}(f) = -2\int_{\mathbb R^d \times \mathbb R^d} |v|^2 f\,dxdv + \int_{\mathbb R^d \times \mathbb R^d} f v \cdot \chi_\zeta(u_\delta)\,dxdv.
\]
We then use the following facts
\[
|\chi_\zeta(u_\delta)| \leq |u_\delta| \leq |u| \quad \mbox{and} \quad \rho|u|^2 \leq \int_{\mathbb R^d} |v|^2f\,dv
\]
to get
$$\begin{aligned}
\left|\int_{\mathbb R^d \times \mathbb R^d} f v \cdot \chi_\zeta(u_\delta)\,dxdv\right| &\leq \left(\int_{\mathbb R^d \times \mathbb R^d}|v|^2 f\,dxdv \right)^{1/2}\left(\int_{\mathbb R^d}|\chi_\zeta(u_\delta)|^2 \rho\,dx \right)^{1/2}\cr
&\leq \int_{\mathbb R^d \times \mathbb R^d} |v|^2 f\,dxdv.
\end{aligned}$$
Hence we have
\[
\frac{d}{dt}\mathcal{F}(f) \leq -\int_{\mathbb R^d \times \mathbb R^d} |v|^2 f\,dxdv.
\]
This completes the proof.
\end{proof}
\subsubsection{Proof of Theorem \ref{thm_weak}}
In order to conclude the proof of Theorem \ref{thm_weak}, we need to pass to the limits $\zeta \to +\infty$ and $\delta \to 0$. Note that we obtained the uniform $L^p$ estimate and the energy estimate in Proposition \ref{prop_reg}, and a uniform-in-$\zeta$ bound on $G$ in $L^\infty(0,T;L^q(\mathbb R^d \times \mathbb R^d))$ with $q \leq 2$ can be obtained by a similar argument as before. These observations, together with the argument in \cite[Section 4]{KMT13}, conclude the proof of Theorem \ref{thm_weak}.
\subsection{Global-in-time existence of weak solutions to the equation \eqref{main_conti}}\label{sec_conti}
In this subsection, we discuss the global-in-time existence and uniqueness of weak solutions to the continuity-type equation \eqref{main_conti}. We refer to \cite{BLR,CR,BCLR,CCH, NPS01,Pou02} for related results, and we adapt some of these ideas to our particular purposes. We first introduce a definition of weak solutions to the equation \eqref{main_conti} and state our main theorem in this part.
\begin{definition}\label{def_strong3} For a given $T \in (0,\infty)$, we say that $\rho$ is a weak solution to the equation \eqref{main_conti} if the following conditions are satisfied:
\begin{itemize}
\item[(i)] $\rho \in \mathcal C([0,T];\mathcal{P}_2(\mathbb R^d))$,
\item[(ii)] $\rho$ satisfies the system \eqref{main_conti} in the sense of distributions.
\end{itemize}
\end{definition}
\begin{theorem}\label{thm_conti} Let $T>0$. Suppose that the confinement potential $V$ is given by $V = |x|^2/2$ and the interaction potential $W$ is symmetric and $\nabla_x W \in \mathcal{W}^{1,\infty}(\mathbb R^d)$. If $\rho_0 \in \mathcal{P}_2(\mathbb R^d)$, then there exists a unique global solution $\rho$ to the equation \eqref{main_conti} on the time interval $[0,T]$ in the sense of Definition \ref{def_strong3}. In particular, we have $\sqrt{\rho}(\partial_t u + u \cdot \nabla_x u)\in L^2(0,T;L^2(\mathbb R^d))$.
\end{theorem}
\begin{proof}
We first introduce the flow $\Psi: \mathbb R_+ \times \mathbb R_+ \times \mathbb R^d \to \mathbb R^d$, generated by the velocity field $u = -\nabla_x V - \nabla_x W \star \rho$:
\[
\frac{d}{dt}\Psi(t;s,x) = u(\Psi(t;s,x),t), \quad \Psi(s;s,x) = x
\]
for all $s,t \in [0,T]$. Note that the above flow is well-defined globally in time due to the regularity of $\nabla_x W \in \mathcal{W}^{1,\infty}$ and $\nabla_x V = x$. Concerning the integrability $\sqrt{\rho}(\partial_t u + u \cdot \nabla_x u)\in L^2(0,T;L^2(\mathbb R^d))$, we first find
\[
\|\partial_t u\|_{L^\infty} \leq \|\nabla_x W\|_{\mathcal{W}^{1,\infty}}\|\sqrt{\rho}u\|_{L^2} \quad \mbox{and} \quad \|\nabla_x u\|_{L^\infty} \leq C + \|\nabla_x W\|_{\mathcal{W}^{1,\infty}}.
\]
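For the reader's convenience, we sketch how the first bound is obtained; here we use that $\rho(t)$ remains a probability measure and that $\nabla_x V = x$ is independent of time. Since $u = -\nabla_x V - \nabla_x W \star \rho$, the continuity equation gives
\[
\partial_t u = -\nabla_x W \star \partial_t \rho = \nabla_x W \star \nabla_x\cdot(\rho u) = (\nabla_x^2 W) \star (\rho u),
\]
and hence $\|\partial_t u\|_{L^\infty} \leq \|\nabla_x^2 W\|_{L^\infty}\|\rho u\|_{L^1} \leq \|\nabla_x W\|_{\mathcal{W}^{1,\infty}}\|\sqrt{\rho}\,u\|_{L^2}$ by the Cauchy--Schwarz inequality; the second bound follows directly from $\nabla_x u = -\mathbb{I}_d - \nabla_x^2 W \star \rho$.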
This yields
$$\begin{aligned}
\int_{\mathbb R^d} \rho\left(|\partial_t u|^2 + |u|^2|\nabla_x u|^2\right)dx &\leq \int_{\mathbb R^d} \rho |\partial_t u|^2\, dx +\|\nabla_x u\|_{L^\infty}^2\int_{\mathbb R^d} \rho |u|^2\,dx\cr
&\leq \left(C + \|\nabla_x W\|_{\mathcal{W}^{1,\infty}}^2\right)\int_{\mathbb R^d}\rho |u|^2\,dx.
\end{aligned}$$
On the other hand, it follows from \cite{CMV03, CCH, BCLR} that
\[
\int_{\mathbb R^d}\rho |u|^2\,dx \leq \int_{\mathbb R^d}\rho_0 |u_0|^2\,dx.
\]
Hence we have
\[
\int_{\mathbb R^d} \rho\left(|\partial_t u|^2 + |u|^2|\nabla_x u|^2\right)dx \leq C\int_{\mathbb R^d}\rho_0 |u_0|^2\,dx \leq C\left(\int_{\mathbb R^d} \rho_0|x|^2\,dx + 1\right).
\]
This completes the proof.
\end{proof}
\subsection{Global-in-time existence of strong solutions to the system \eqref{main_eq2}}\label{sec_strong}
In this part, we study the global-in-time existence of strong solutions to the following system:
\begin{align}\label{main_eq3}
\begin{aligned}
&\partial_t \rho + \nabla_x \cdot (\rho u) = 0, \quad (x,t) \in \mathbb R^d \times \mathbb R_+,\cr
&\partial_t (\rho u) + \nabla_x \cdot (\rho u \otimes u) = -\gamma \rho u - \lambda \rho(\nabla_x V + \nabla_x W \star \rho)
\end{aligned}
\end{align}
with the initial data
\[
(\rho(x,t),u(x,t))|_{t=0} =: (\rho_0(x), u_0(x)), \quad x \in \mathbb R^d.
\]
We now introduce a notion of strong solution to the system \eqref{main_eq3}.
\begin{definition}\label{def_strong2} Let $s > d/2+1$. For given $T\in(0,\infty)$, the pair $(\rho,u)$ is a strong solution of \eqref{main_eq3} on the time interval $[0,T]$ if and only if the following conditions are satisfied:
\begin{itemize}
\item[(i)] $\rho \in \mathcal C([0,T];H^s(\mathbb R^d))$, $u \in \mathcal C([0,T];Lip(\mathbb R^d)\cap L^2_{loc}(\mathbb R^d))$, and $\nabla_x^2 u \in \mathcal C([0,T];H^{s-1}(\mathbb R^d))$,
\item[(ii)] $(\rho, u)$ satisfy the system \eqref{main_eq3} in the sense of distributions.
\end{itemize}
\end{definition}
We first present the local-in-time existence and uniqueness results for the systems \eqref{main_eq3}.
\begin{theorem}\label{thm_local}Let $s > d/2+1$ and $R>0$. Suppose that the confinement potential $V$ is given by $V = |x|^2/2$ and the interaction potential $W$ is symmetric and $\nabla_x W \in (\mathcal{W}^{1,1} \cap \mathcal{W}^{1,\infty})(\mathbb R^d)$. For any $N<M$, there is a positive constant $T^*$ depending only on $R$, $N$, and $M$ such that if
\[
\|\rho_0\|_{H^s} + \|u_0\|_{L^2(B(0,R))}+\|\nabla_x u_0\|_{L^\infty} + \|\nabla_x^2 u_0\|_{H^{s-1}} < N,
\]
then the Cauchy problem \eqref{main_eq3} has a unique strong solution $(\rho,u)$, in the sense of Definition \ref{def_strong2}, satisfying
\[
\sup_{0 \leq t \leq T^*}\left(\|\rho(\cdot,t)\|_{H^s} + \|u(\cdot,t)\|_{L^2(B(0,R))} +\|\nabla_x u(\cdot,t)\|_{L^\infty} + \|\nabla_x^2 u(\cdot,t)\|_{H^{s-1}}\right) \leq M,
\]
where $B(0,R)$ denotes a ball of radius $R$ centered at the origin.
\end{theorem}
\begin{proof} Since the proof of the local-in-time existence theory is by now classical, we only sketch it here; see \cite[Section 2.1]{CK16} for a detailed discussion. For simplicity, we set $\lambda = \gamma =1$.
\noindent {\bf Step 1 (Linearized system):} We first consider the associate linear system:
\begin{align}\label{lin_sys}
\begin{aligned}
&\partial_t \rho + \tilde u \cdot \nabla_x \rho + \rho \nabla_x \cdot \tilde u = 0,\cr
&\rho\partial_t u + \rho \tilde u \cdot \nabla_x u = - \rho u - \rho(\nabla_x V + \nabla_x W \star \rho)
\end{aligned}
\end{align}
with the initial data $(\rho_0,u_0)$ satisfying the assumptions in Theorem \ref{thm_local}. Here $\tilde u$ satisfies
\begin{equation}\label{lin_reg}
\sup_{0 \leq t \leq T}\left(\|\tilde u(\cdot,t)\|_{L^2(B(0,R))} + \|\nabla_x \tilde u(\cdot,t)\|_{L^\infty} + \|\nabla_x^2 \tilde u(\cdot,t)\|_{H^{s-1}}\right) \leq M.
\end{equation}
We notice that the existence of solutions to the above linear system can be proved by standard linear theory \cite{K73}. Since $\tilde u$ is globally Lipschitz, by using the method of characteristics, we can show the positivity of the density $\rho$. By a straightforward computation, we first find from the continuity equation in \eqref{lin_sys} that
\begin{align}\label{est_rho}
\begin{aligned}
\frac{d}{dt}\int_{\mathbb R^d} \rho^2\,dx &\leq C\|\nabla_x \tilde u\|_{L^\infty}\|\rho\|_{L^2}^2,\cr
\frac{d}{dt}\int_{\mathbb R^d} |\nabla_x \rho|^2\,dx &\leq C\|\nabla_x \tilde u\|_{L^\infty}\|\nabla_x \rho\|_{L^2}^2 + C\|\nabla_x^2 \tilde u\|_{L^2}\|\rho\|_{L^\infty}\|\nabla_x \rho\|_{L^2}.
\end{aligned}
\end{align}
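For instance, the first bound is a standard computation, sketched here for the reader: using the continuity equation in \eqref{lin_sys} and integrating by parts,
\[
\frac12\frac{d}{dt}\int_{\mathbb R^d} \rho^2\,dx
= -\int_{\mathbb R^d} \rho\,\tilde u \cdot \nabla_x \rho\,dx - \int_{\mathbb R^d} \rho^2\,\nabla_x \cdot \tilde u\,dx
= -\frac12\int_{\mathbb R^d} \rho^2\,\nabla_x \cdot \tilde u\,dx
\leq C\|\nabla_x \tilde u\|_{L^\infty}\|\rho\|_{L^2}^2,
\]
and the second bound is obtained similarly after differentiating the equation once in $x$, the extra term coming from the derivative falling on $\tilde u$.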
For $2 \leq k \leq s$, we obtain
$$\begin{aligned}
&\frac12\frac{d}{dt}\int_{\mathbb R^d} |\nabla_x^k \rho|^2\,dx \cr
&\quad = - \int_{\mathbb R^d} \nabla_x^k \rho \cdot (\tilde u \cdot \nabla_x^{k+1} \rho)\,dx - \int_{\mathbb R^d} \nabla_x^k \rho \cdot (\nabla_x^k (\nabla_x \rho \cdot \tilde u) - \tilde u \cdot \nabla_x^{k+1} \rho)\,dx\cr
&\qquad - \int_{\mathbb R^d} \nabla_x^k \rho \cdot (\nabla_x^k (\nabla_x \cdot \tilde u)) \rho\,dx - \int_{\mathbb R^d} \nabla_x^k \rho \cdot (\nabla_x^k(\rho \nabla_x \cdot \tilde u) - \rho\nabla_x^k (\nabla_x \cdot \tilde u))\,dx\cr
&\quad =: \sum_{i=1}^4 I_i,
\end{aligned}$$
where $\nabla_x^k$ denotes any partial derivative $\partial_x^\alpha$ with multi-index $\alpha$, $|\alpha| = k$, and we estimate
$$\begin{aligned}
I_1 &\leq \|\nabla_x \tilde u\|_{L^\infty}\|\nabla_x^k \rho\|_{L^2}^2,\cr
I_2 &\leq \|\nabla_x^k (\nabla_x \rho \cdot \tilde u) - \tilde u \cdot \nabla_x^{k+1} \rho\|_{L^2}\|\nabla_x^k\rho\|_{L^2}\cr
&\leq C\left(\|\nabla_x^k \tilde u\|_{L^2}\|\nabla_x \rho\|_{L^\infty} + \|\nabla_x \tilde u\|_{L^\infty}\|\nabla_x^k\rho\|_{L^2}\right)\|\nabla_x^k\rho\|_{L^2},\cr
I_3 &\leq \|\rho\|_{L^\infty}\|\nabla_x^k\rho\|_{L^2}\|\nabla_x^{k+1}\tilde u\|_{L^2},\cr
I_4 &\leq \|\nabla_x^k(\rho \nabla_x \cdot \tilde u) - \rho\nabla_x^k (\nabla_x \cdot \tilde u)\|_{L^2}\|\nabla_x^k\rho\|_{L^2}\cr
&\leq C\left(\|\nabla_x^k \rho\|_{L^2}\|\nabla_x \tilde u\|_{L^\infty} + \|\nabla_x \rho\|_{L^\infty}\|\nabla_x^k \tilde u\|_{L^2} \right)\|\nabla_x^k\rho\|_{L^2}.
\end{aligned}$$
Here, in order to bound $I_2$ and $I_4$, we used the Moser-type inequality \cite[Lemma 2.1]{C16} in the form
\[
\|\nabla_x^k (fg) - f\nabla_x^kg\|_{L^2} \leq C\left(\|\nabla_x f\|_{L^\infty}\|\nabla_x^{k-1}g\|_{L^2} + \|\nabla_x^k f\|_{L^2}\|g\|_{L^\infty} \right)
\]
for $f,g \in (H^k \cap L^\infty)(\mathbb R^d)$ and $\nabla_x f \in L^\infty(\mathbb R^d)$. This, together with \eqref{lin_reg}, yields
\begin{equation}\label{est_lrho}
\frac{d}{dt}\|\rho\|_{H^s}^2 \leq CM\|\rho\|_{H^s}^2, \quad \mbox{i.e.,} \quad \sup_{0 \leq t \leq T} \|\rho(\cdot,t)\|_{H^s} \leq \|\rho_0\|_{H^s}e^{CMT},
\end{equation}
due to $s > d/2+1$. For the estimate of $u$, we use the positivity of $\rho$ to divide the momentum equation in \eqref{lin_sys} by $\rho$ and use a similar argument to that in Lemma \ref{lem_u} to get
\[
\|\nabla_x u\|_{L^\infty}e^t \leq \|\nabla_x u_0\|_{L^\infty} + CM\int_0^t \|\nabla_x u\|_{L^\infty}e^s\,ds + C(e^t - 1).
\]
Applying Gronwall's inequality to the above, we obtain
\begin{equation}\label{est_supu}
\sup_{0 \leq t \leq T}\|\nabla_x u(\cdot,t)\|_{L^\infty} \leq \|\nabla_x u_0\|_{L^\infty} e^{CMT} + C(e^{CMT} - 1).
\end{equation}
For $2 \leq k \leq s+1$, similarly as above, we next estimate
$$\begin{aligned}
&\frac12\frac{d}{dt}\int_{\mathbb R^d} |\nabla_x^k u|^2\,dx \cr
&\quad = - \int_{\mathbb R^d} \nabla_x^k u \cdot (\tilde u \cdot \nabla_x^{k+1} u)\,dx - \int_{\mathbb R^d} \nabla_x^k u \cdot ( \nabla_x^k(\tilde u \cdot \nabla_x u) - \tilde u \cdot \nabla_x^{k+1} u)\,dx\cr
&\qquad - \int_{\mathbb R^d} |\nabla_x^k u|^2\,dx - \int_{\mathbb R^d} \nabla_x^k u \cdot (\nabla_x^2 W \star \nabla_x^{k-1}\rho)\,dx\cr
&\leq \|\nabla_x \tilde u\|_{L^\infty}\|\nabla_x^k u\|_{L^2}^2 + C\left(\|\nabla_x^k \tilde u\|_{L^2}\|\nabla_x u\|_{L^\infty} + \|\nabla_x \tilde u\|_{L^\infty} \|\nabla_x^k u\|_{L^2} \right)\|\nabla_x^k u\|_{L^2}\cr
&\quad - \|\nabla_x^k u\|_{L^2}^2 + \|\nabla_x^k u\|_{L^2}\|\nabla_x^2 W\|_{L^1}\|\nabla_x^{k-1}\rho\|_{L^2}\cr
&\leq CM\|\nabla_x^k u\|_{L^2}^2 + CM\|\nabla_x u\|_{L^\infty}\|\nabla_x^k u\|_{L^2} + C\|\nabla_x^k u\|_{L^2}\|\nabla_x^{k-1}\rho\|_{L^2}.
\end{aligned}$$
Summing the above inequality over $2 \leq k \leq s+1$ gives
\[
\frac{d}{dt}\|\nabla_x^2 u\|_{H^{s-1}} \leq CM\|\nabla_x^2 u\|_{H^{s-1}} + C\|\nabla_x \rho\|_{H^{s-1}} + CM\|\nabla_x u\|_{L^\infty}.
\]
Then we combine the above, \eqref{est_lrho}, and \eqref{est_supu} to have
\[
\frac{d}{dt}\|\nabla_x^2 u\|_{H^{s-1}} \leq CM\|\nabla_x^2 u\|_{H^{s-1}} + C\|\rho_0\|_{H^s}e^{CMT} + C\|\nabla_x u_0\|_{L^\infty} e^{CMT} + C(e^{CMT} - 1).
\]
Thus we obtain
\[
\|\nabla_x^2 u\|_{H^{s-1}} \leq \|\nabla_x^2 u_0\|_{H^{s-1}} e^{CMT} + C(\|\rho_0\|_{H^s} + \|\nabla_x u_0\|_{L^\infty} + 1)Te^{CMT}.
\]
On the other hand, we get
$$\begin{aligned}
\frac12\frac{d}{dt}\int_{B(0,R)} |u|^2\,dx &= -\int_{B(0,R)} u \cdot ((\tilde u \cdot \nabla_x)u)\,dx - \int_{B(0,R)} |u|^2\,dx\cr
&\quad - \int_{B(0,R)} u \cdot \nabla_x V\,dx - \int_{B(0,R)} u \cdot (\nabla_x W \star \rho)\,dx\cr
&\leq \|\nabla_x u\|_{L^\infty}\|\tilde u\|_{L^2(B(0,R))}\|u\|_{L^2(B(0,R))} + C\|u\|_{L^2(B(0,R))} - \|u\|_{L^2(B(0,R))}^2\cr
&\leq C\|u\|_{L^2(B(0,R))} - \|u\|_{L^2(B(0,R))}^2,
\end{aligned}$$
due to \eqref{lin_reg} and \eqref{est_supu}. Applying Gronwall's inequality, we find
\[
\frac{d}{dt}\|u\|_{L^2(B(0,R))} \leq C - \|u\|_{L^2(B(0,R))}, \quad \mbox{i.e.,} \quad \|u\|_{L^2(B(0,R))} \leq \|u_0\|_{L^2(B(0,R))} + C(e^T - 1).
\]
Combining all of the above observations yields
$$\begin{aligned}
&\|\rho\|_{H^s} + \|u\|_{L^2(B(0,R))}+ \|\nabla_x u\|_{L^\infty} + \|\nabla_x^2 u\|_{H^{s-1}} \cr
&\quad \leq (\|\rho_0\|_{H^s} + \|u_0\|_{L^2(B(0,R))} +\|\nabla_x^2 u_0\|_{H^{s-1}}) e^{CMT}\cr
&\qquad + C(\|\rho_0\|_{H^s} + \|\nabla_x u_0\|_{L^\infty} + 1)Te^{CMT} + C(e^{CMT} - 1)\cr
&\quad \leq (N + (N + 1)T)e^{CMT} + C(e^{CMT} - 1).
\end{aligned}$$
We finally choose $T^* >0$ small enough such that the right hand side of the above inequality is less than $M$. Hence we have
\[
\sup_{0 \leq t \leq T^*}\left(\|\rho(\cdot,t)\|_{H^s} + \|u(\cdot,t)\|_{L^2(B(0,R))}+ \|\nabla_x u(\cdot,t)\|_{L^\infty} + \|\nabla_x^2 u(\cdot,t)\|_{H^{s-1}}\right) \leq M.
\]
Notice that $T^*$, $N$, and $M$ do not depend on $\tilde u$. \newline
\noindent {\bf Step 2 (Existence):} We now construct the approximated solutions $(\rho^n,u^n)$ for the system \eqref{main_eq3} by solving the following linear system:
$$\begin{aligned}
&\partial_t \rho^{n+1} + u^n \cdot \nabla_x \rho^{n+1} + \rho^{n+1} \nabla_x \cdot u^n = 0,\cr
&\rho^{n+1}\partial_t u^{n+1} + \rho^{n+1} u^n \cdot \nabla_x u^{n+1} = - \rho^{n+1}u^{n+1} - \rho^{n+1}(\nabla_x V + \nabla_x W \star \rho^{n+1}),
\end{aligned}$$
with the initial data and first iteration step defined by
\[
(\rho^n(x,0),u^n(x,0))=(\rho_0(x),u_0(x)) \quad \mbox{for all} \quad n \geq 1, \quad x\in \mathbb R^d,
\]
and
\[
(\rho^0(x,t),u^0(x,t)) = (\rho_0(x),u_0(x)), \quad (x,t) \in \mathbb R^d \times \mathbb R_+.
\]
Then it follows from {\bf Step 1} that for any $N < M$, there exists $T^* > 0$ such that if $\|\rho_0\|_{H^s} + \|u_0\|_{L^2(B(0,R))} + \|\nabla_x u_0\|_{L^\infty} + \|\nabla_x^2 u_0\|_{H^{s-1}} < N$, then we have
\[
\sup_{n \geq 0} \sup_{0 \leq t \leq T^*}\left(\|\rho^n(\cdot,t)\|_{H^s} + \|u^n(\cdot,t)\|_{L^2(B(0,R))} + \|\nabla_x u^n(\cdot,t)\|_{L^\infty} + \|\nabla_x^2 u^n(\cdot,t)\|_{H^{s-1}}\right) \leq M.
\]
Note that $\rho^{n+1} - \rho^n$ and $u^{n+1} - u^n$ satisfy
$$\begin{aligned}
&\partial_t (\rho^{n+1} - \rho^n) + (u^n - u^{n-1})\cdot \nabla_x \rho^{n+1} + u^{n-1} \cdot \nabla_x (\rho^{n+1} - \rho^n) \cr
&\qquad + (\rho^{n+1} - \rho^n) \nabla_x \cdot u^n + \rho^n \nabla_x \cdot (u^n - u^{n-1}) = 0
\end{aligned}$$
and
$$\begin{aligned}
&\partial_t (u^{n+1} - u^n) + (u^n - u^{n-1})\cdot \nabla_x u^{n+1} + u^{n-1} \cdot \nabla_x (u^{n+1} - u^n) \cr
&\qquad = - (u^{n+1} - u^n) - \nabla_x W \star (\rho^{n+1} - \rho^n),
\end{aligned}$$
respectively. Then a straightforward computation gives
\[
\|(\rho^{n+1} - \rho^n)(\cdot,t)\|_{L^2}^2 \leq C\int_0^t \left(\|(\rho^{n+1} - \rho^n)(\cdot,s)\|_{L^2}^2 + \|(u^n - u^{n-1})(\cdot,s)\|_{H^1}^2\right)ds,
\]
where $C > 0$ depends on $\|\nabla_x \rho^{n+1}\|_{L^\infty}$, $\|\nabla_x u^n\|_{L^\infty}$, $\|\nabla_x u^{n+1}\|_{L^\infty}$, and $\|\rho^n\|_{L^\infty}$.
We also find
\[
\|(u^{n+1} - u^n)(\cdot,t)\|_{H^1}^2 \leq C\int_0^t \left(\|(\rho^{n+1} - \rho^n)(\cdot,s)\|_{L^2}^2 + \|(u^n - u^{n-1})(\cdot,s)\|_{H^1}^2\right)ds,
\]
where $C > 0$ depends on $\|\nabla_x u^{n+1}\|_{\mathcal{W}^{1,\infty}}$, $\|\nabla_x u^{n-1}\|_{\mathcal{W}^{1,\infty}}$, and $\|\nabla_x W\|_{\mathcal{W}^{1,1}}$. This shows that $(\rho^n,u^n)$ is a Cauchy sequence in $\mathcal C([0,T^*];L^2(\mathbb R^d)) \times \mathcal C([0,T^*];H^1(\mathbb R^d))$. Interpolating this strong convergence with the above uniform-in-$n$ bounds gives
\[
\rho^n \to \rho \quad \mbox{in }\mathcal C([0,T^*]; H^{s-1}(\mathbb R^d)), \quad u^n \to u \quad \mbox{in }\mathcal C([0,T^*]; H^1(B(0,R))) \quad \mbox{as } n\to\infty,
\]
\[
\nabla_x u^n \to \nabla_x u \quad \mbox{in } \mathcal C(\mathbb R^d \times [0,T^*]), \quad \mbox{and} \quad \nabla_x^2 u^n \to \nabla_x^2 u \quad \mbox{in } \mathcal C([0,T^*];H^{s-2}(\mathbb R^d)) \quad \mbox{as } n\to\infty,
\]
due to $s > d/2+1$. In order to show that the limiting functions $\rho$ and $u$ satisfy the regularity stated in Theorem \ref{thm_local}, we can use standard functional analytic arguments. For more details, we refer to \cite[Section 2.1]{CK16} and \cite[Appendix A]{CCZ16}. We also notice that it is straightforward to show that the limiting functions $\rho$ and $u$ are solutions to \eqref{main_eq3} in the sense of Definition \ref{def_strong2}.
\noindent {\bf Step 3 (Uniqueness):} Let $(\rho_1,u_1)$ and $(\rho_2,u_2)$ be the strong solutions obtained in the previous step with the same initial data $(\rho_0,u_0)$. Then it directly follows from the Cauchy estimate in {\bf Step 2} that
\[
\|(\rho_1 - \rho_2)(\cdot,t)\|_{L^2}^2 + \|(u_1 - u_2)(\cdot,t)\|_{H^1}^2 \leq C\int_0^t \left(\|(\rho_1 - \rho_2)(\cdot,s)\|_{L^2}^2 + \|(u_1 - u_2)(\cdot,s)\|_{H^1}^2\right)ds.
\]
Thus we obtain
\[
\|(\rho_1 - \rho_2)(\cdot,t)\|_{L^2}^2 + \|(u_1 - u_2)(\cdot,t)\|_{H^1}^2 \equiv 0
\]
for all $t \in [0,T^*]$. Hence we have the uniqueness of strong solutions.
\end{proof}
We next show global-in-time existence of strong solutions to the system \eqref{main_eq3} under additional assumptions on the parameters $\gamma$ and $\lambda$ (see below) and on the interaction potential $W$. We remark that the assumption on $\gamma$ and $\lambda$ is used in Lemma \ref{lem_u} for the uniform bound estimate of $\nabla_x u$. The strong regularity of $\nabla_x W$ is needed for the global-in-time existence of solutions. Note that we do not require any smallness assumptions on the initial data.
\begin{theorem}\label{thm_glo} Let $s> d/2 + 1$, $T>0$, and $R>0$. Suppose that the confinement potential $V$ is given by $V = |x|^2/2$ and the interaction potential $W$ is symmetric and $\nabla_x W \in (\mathcal{W}^{1,1} \cap \mathcal{W}^{[d/2]+1,\infty})(\mathbb R^d)$. Suppose that initial data $(\rho_0, u_0)$ satisfy
\[
\rho_0 \in H^s(\mathbb R^d), \quad u_0 \in (Lip \cap L^2_{loc})(\mathbb R^d), \quad \mbox{and} \quad \nabla_x^2 u_0 \in H^{s-1}(\mathbb R^d).
\]
Then there exist $\gamma_* > 0$ and $\kappa_*>0$ such that
\[
\sup_{0 \leq t \leq T}\left( \|\rho(\cdot,t)\|_{H^s} + \|u(\cdot,t)\|_{L^2(B(0,R))}+\|\nabla_x u(\cdot,t)\|_{L^\infty} + \|\nabla_x^2 u(\cdot,t)\|_{H^{s-1}} \right) \leq C
\]
for $\gamma \geq \gamma_*$ and $\kappa \leq \kappa_*$, where $C$ depends on the initial data $(\rho_0, u_0)$, $T$, $\gamma_*$, $\kappa_*$, and $\|\nabla_x W\|_{\mathcal{W}^{[d/2]+1,1}}$. Here $\gamma_*$ and $\kappa_*$ are the constants appearing in Lemma \ref{lem_u}.
\end{theorem}
\begin{proof}Similarly as \eqref{est_rho}, we estimate
\[
\frac{d}{dt}\|\rho\|_{H^1}^2 \leq C\|\nabla_x u\|_{L^\infty}\|\rho\|_{H^1}^2 + C\|\nabla_x^2 u\|_{L^2}\|\rho\|_{L^\infty}\|\nabla_x \rho\|_{L^2}
\]
and
\[
\frac{d}{dt}\|\nabla_x^2 u\|_{L^2}^2 \leq C\|\nabla_x u\|_{L^\infty}\|\nabla_x^2 u\|_{L^2}^2 + C\|\nabla_x^2 u\|_{L^2}\|\rho\|_{H^1}.
\]
This yields
\[
\frac{d}{dt}\left(\|\rho\|_{H^1}^2 + \|\nabla_x^2 u\|_{L^2}^2 \right) \leq C\left(\|\nabla_x u\|_{L^\infty} + \|\rho\|_{L^\infty} +1\right)\left(\|\rho\|_{H^1}^2 + \|\nabla_x^2 u\|_{L^2}^2 \right),
\]
i.e.,
$$\begin{aligned}
&\sup_{0 \leq t \leq T} \left(\|\rho(\cdot,t)\|_{H^1} + \|\nabla_x^2 u(\cdot,t)\|_{L^2}\right) \cr
&\qquad \leq C\left(\|\rho_0\|_{H^1} + \|\nabla_x^2 u_0\|_{L^2} \right) \exp\left( \int_0^T \|\nabla_x u(\cdot,t)\|_{L^\infty} + \|\rho(\cdot,t)\|_{L^\infty} \,dt \right).
\end{aligned}$$
On the other hand, it follows from Lemma \ref{lem_u} that there exist $\gamma_* > 0$ and $\kappa_*>0$ such that
\[
\sup_{0 \leq t \leq T}\|\nabla_x u(\cdot,t)\|_{L^\infty} \leq \|\nabla_x u_0\|_{L^\infty} + 1
\]
for $\gamma \geq \gamma_*$ and $\kappa \leq \kappa_*$. This, together with the method of characteristics, gives
\[
\sup_{0 \leq t \leq T}\|\rho(\cdot,t)\|_{L^\infty} \leq \|\rho_0\|_{L^\infty} \exp\left(\int_0^T\|\nabla_x u(\cdot,t)\|_{L^\infty}\,dt\right) \leq C\|\rho_0\|_{L^\infty}\exp\left( \|\nabla_x u_0\|_{L^\infty} + 1 \right).
\]
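For the reader's convenience, we recall the formal computation behind this bound: along the characteristic flow $\dot X(t) = u(X(t),t)$, the continuity equation gives
\[
\frac{d}{dt}\,\rho(X(t),t) = -\rho(X(t),t)\,(\nabla_x \cdot u)(X(t),t),
\quad \mbox{i.e.} \quad
\rho(X(t),t) = \rho_0(X(0))\exp\left(-\int_0^t (\nabla_x \cdot u)(X(s),s)\,ds\right),
\]
and taking the supremum over characteristics and bounding $|\nabla_x \cdot u|$ by a constant multiple of $\|\nabla_x u\|_{L^\infty}$ yields the stated estimate.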
Combining all of the above observations, we obtain
\begin{align}\label{est_f1}
\begin{aligned}
&\sup_{0 \leq t \leq T}\left( \|\rho(\cdot,t)\|_{H^1} + \|\nabla_x u(\cdot,t)\|_{L^\infty} + \|\nabla_x^2 u(\cdot,t)\|_{L^2} \right)\cr
&\qquad \leq C\exp\left(\|\rho_0\|_{H^1} + \|\nabla_x u_0\|_{L^\infty} + \|\nabla_x^2 u_0\|_{L^2} \right)
\end{aligned}
\end{align}
for $\gamma \geq \gamma_*$ and $\kappa \leq \kappa_*$. We also easily estimate
\[
\sup_{0 \leq t \leq T}\|u(\cdot,t)\|_{L^2(B(0,R))} \leq C\|u_0\|_{L^2(B(0,R))}.
\]
For $0 \leq k \leq s$, we find
\begin{equation}\label{est_rho1}
\frac{d}{dt} \|\nabla_x^k \rho\|_{L^2} \leq C\|\nabla_x^k \rho\|_{L^2} + C\|\nabla_x^k u\|_{L^2}\|\nabla_x \rho\|_{L^\infty} + C\|\nabla_x^{k+1}u\|_{L^2}
\end{equation}
and
\begin{equation}\label{est_u1}
\frac{d}{dt} \|\nabla_x^{k+1} u\|_{L^2} \leq C\|\nabla_x^{k+1}u\|_{L^2} + C\|\nabla_x^{k+1}(\nabla_x W \star \rho)\|_{L^2}.
\end{equation}
Then we have from \eqref{est_u1}
$$\begin{aligned}
\frac{d}{dt} \|\nabla_x^2 u\|_{H^{[d/2]}} &\leq C\|\nabla_x^2 u\|_{H^{[d/2]}} + C\sum_{1 \leq k \leq [d/2]+1}\|\nabla_x^{k+1}(\nabla_x W \star \rho)\|_{L^2}\cr
&\leq C\|\nabla_x^2 u\|_{H^{[d/2]}} + C\sum_{1 \leq k \leq [d/2]+1}\|\nabla_x^{k+1}W\|_{L^1}\|\nabla_x \rho\|_{L^2}\cr
&\leq C\|\nabla_x^2 u\|_{H^{[d/2]}} + C\|\nabla_x W\|_{\mathcal{W}^{[d/2]+1,1}}\|\nabla_x \rho\|_{L^2}.
\end{aligned}$$
This together with \eqref{est_f1} implies
\begin{equation}\label{est_u2}
\sup_{0 \leq t \leq T}\|\nabla_x^2 u(\cdot,t)\|_{H^{[d/2]}} \leq C,
\end{equation}
where $C$ depends on the initial data $(\rho_0, u_0)$, $\nabla_x W$, and $T$. We now go back to \eqref{est_rho1} to obtain
$$\begin{aligned}
\frac{d}{dt} \|\nabla_x^2 \rho\|_{H^{[d/2]}} &\leq C \|\nabla_x^2 \rho\|_{H^{[d/2]}} + C\|\nabla_x^2 u\|_{H^{[d/2]}}\|\nabla_x \rho\|_{L^\infty} + C\|\nabla_x^3 u\|_{H^{[d/2]}}\cr
&\leq C \|\nabla_x^2 \rho\|_{H^{[d/2]}} + C\|\nabla_x \rho\|_{H^{[d/2]+1}} + C\|\nabla_x^3 u\|_{H^{[d/2]}}\cr
&\leq C + C\|\nabla_x^2 \rho\|_{H^{[d/2]}} + C\|\nabla_x^3 u\|_{H^{[d/2]}},
\end{aligned}$$
where we used \eqref{est_u2} and \eqref{est_f1}. It also follows from \eqref{est_u1} that
$$\begin{aligned}
\frac{d}{dt}\|\nabla_x^3 u\|_{H^{[d/2]}} &\leq C\|\nabla_x^3 u\|_{H^{[d/2]}} + \|\nabla_x W\|_{\mathcal{W}^{[d/2]+ 1,1}}\|\nabla_x^2 \rho\|_{L^2}\cr
&\leq C\|\nabla_x^3 u\|_{H^{[d/2]}} + \|\nabla_x^2 \rho\|_{H^{[d/2]}}.
\end{aligned}$$
Combining the above two differential inequalities yields
\[
\frac{d}{dt}\left(\|\nabla_x^2 \rho\|_{H^{[d/2]}} + \|\nabla_x^3 u\|_{H^{[d/2]}} \right) \leq C + C\left(\|\nabla_x^2 \rho\|_{H^{[d/2]}} + \|\nabla_x^3 u\|_{H^{[d/2]}} \right),
\]
and subsequently we find
\[
\sup_{0 \leq t \leq T}\left(\|\rho(\cdot,t)\|_{H^{[d/2]+2}} + \|\nabla_x^2 u(\cdot,t)\|_{H^{[d/2]+1}}\right) \leq C,
\]
where $C > 0$ depends on the initial data $\|\rho_0\|_{H^{[d/2]+2}}$, $\|\nabla_x^2 u_0\|_{H^{[d/2]+1}}$, $\|\nabla_x u_0\|_{L^\infty}$, $\|\nabla_x W\|_{\mathcal{W}^{[d/2]+ 1,1}}$, and $T$. We next estimate \eqref{est_rho1} and \eqref{est_u1} as
\[
\frac{d}{dt}\left(\|\nabla_x^k \rho\|_{L^2} + \|\nabla_x^{k+1} u\|_{L^2}\right) \leq C\left(\|\nabla_x^k \rho\|_{L^2} + \|\nabla_x^{k+1} u\|_{L^2}\right) + C\|\nabla_x^k u\|_{L^2},
\]
where we used
\[
\|\nabla_x^{k+1}(\nabla_x W \star \rho)\|_{L^2} \leq \|\nabla_x^2 W\|_{L^1}\|\nabla_x^k \rho\|_{L^2} \quad \mbox{and} \quad \|\nabla_x \rho\|_{L^\infty} \leq C\|\nabla_x \rho\|_{H^{[d/2]+1}} \leq C.
\]
Here the first bound follows from Young's convolution inequality, and the second from the Sobolev embedding $H^{[d/2]+1}(\mathbb R^d) \hookrightarrow L^\infty(\mathbb R^d)$ together with the bound on $\|\rho\|_{H^{[d/2]+2}}$ obtained above. Summing over $2 \leq k \leq s$ and applying Gronwall's inequality to the resulting differential inequality, we finally have
\[
\sup_{0 \leq t \leq T}\left(\|\nabla_x^2 \rho(\cdot,t)\|_{H^{s-2}} + \|\nabla_x^2 u(\cdot,t)\|_{H^{s-1}}\right) \leq C,
\]
where $C > 0$ depends on the initial data $\|\rho_0\|_{H^s}$, $\|\nabla_x^2 u_0\|_{H^{s-1}}$, $\|\nabla_x u_0\|_{L^\infty}$, $\|\nabla_x W\|_{\mathcal{W}^{[d/2]+ 1,1}}$, and $T$. Combining all of the above discussion concludes the desired result.
\end{proof}
\section*{Acknowledgements}
JAC was partially supported by the EPSRC grant number EP/P031587/1. YPC was supported by NRF grant(No. 2017R1C1B2012918 and 2017R1A4A1014735) and POSCO Science Fellowship of POSCO TJ Park Foundation.
\section{Introduction}\label{sec:intro}
In this paper we investigate the relationship between the absorption and dispersion experienced by off-resonant radiation interacting with an inhomogeneously broadened atomic medium. Of particular interest is the response of the medium when the detuning is larger than the inhomogeneous linewidth. In this region scattering of photons is reduced, but this does not necessarily mean that the dispersive atom-light coupling suffers accordingly. The large dispersion of off-resonant Doppler-broadened systems has been exploited in a number of studies. Slow-light experiments~\cite{Vanner08,Camacho07} and theoretical studies~\cite{Shakh08} utilise the large group refractive index associated with the medium to control the propagation of broadband optical pulses. Slow-light
interferometers using monochromatic~\cite{ShiOptLett, Purves} and broadband light~\cite{Shi07} have also been demonstrated. The off-resonant Faraday effect can be used to separate the sidebands of Raman light with high fidelity~\cite{Abel09}, and can be used as a dispersive probe with continuous-wave or pulsed light~\cite{Siddons09}. The non-invasive nature of off-resonant dispersive probing could also lead to the possibility of ``weak'' measurements~\cite{Ahanorov88} in the context of quantum non-demolition (QND). In many of these experiments there is a trade-off between the absorption of the optically-thick medium and the magnitude of the effect of interest. As the real and imaginary parts of the medium's susceptibility show different spectral dependences it is not obvious
at which detuning to perform the experiments. The motivation of this study is to take previously developed analytic results for the susceptibility~\cite{Siddons08} and to investigate the domain of validity of two approximations which facilitate the analysis of experimental data.
The structure of this paper is as follows. In section~\ref{sec:chi} we describe the function which governs the absorption and dispersion of a Doppler-broadened medium, and go on to make approximations to this function. We then use these analytic approximations to compare the absorptive and dispersive characteristics of an atomic resonance. In section~\ref{sec:results} we compare the theoretical expressions to experiment, and in section~\ref{sec:conclusion} we draw our conclusions.
\section{The electric susceptibility}\label{sec:chi}
The electric susceptibility of a medium, $\chi$, describes the medium's absorptive and dispersive properties. For the case of an isolated resonance in a Doppler-broadened atomic medium the susceptibility as a function of detuning from resonance, $\Delta$, is~\cite{Siddons08}:
\begin{eqnarray}
\chi(\Delta) &=& c_{\mathrm{m_F}}^2\frac{d^2\mathcal{N}}{\hbar \epsilon_0}s(\Delta).\label{eq:chi}
\end{eqnarray}
Here $c_{\mathrm{m_F}}^2$ is the transition strength factor for the transition \mbox{$\left|F_{\mathrm{g}},m_{\mathrm{Fg}}\right\rangle\rightarrow\left|F_{\mathrm{e}},m_{\mathrm{Fe}}\right\rangle$}, \mbox{$d=\langle L_{\mathrm{g}}||e\textrm{\textbf{r}}||L_{\mathrm{e}}\rangle$} is the reduced dipole matrix element of the \mbox{$|L_{\mathrm{g}}\rangle\rightarrow |L_{\mathrm{e}}\rangle$} transition, $\mathcal{N}$ is the atomic number density of state $\left|F_{\mathrm{g}},m_{\mathrm{Fg}}\right\rangle$, and $s(\Delta)$ is the lineshape factor. The total susceptibility of the medium is obtained by summing over all transitions which the light is stimulating. The absorption coefficient is proportional to the imaginary part of the susceptibility, $\chi^{\mathrm{I}}$, and has the form of the well-known Voigt profile. Dispersion results from the real part, $\chi^{\mathrm{R}}$.
The lineshape factor $s(\Delta)$ is the convolution of $f(\Delta)$, the homogeneous atomic lineshape, and $g(v)$, the Gaussian distribution of velocities $v$. This convolution is given by:
\begin{equation}
s(\Delta) =\int ^{+\infty}_{-\infty} f(\Delta-kv)\times g(v) dv,
\label{eq:vintegral}
\end{equation}
where $k$ is the wavenumber of the radiation, and
\begin{eqnarray}
f(\Delta)&=&\frac{i}{\Gamma/2-i\Delta},\label{eq:convl}\\
g(v)&=&\frac{1}{u\sqrt{\pi}}\mathrm{e}^{-(v/u)^2}.\label{eq:convg}
\end{eqnarray}
Here $\Gamma$ is the FWHM (full-width at half-maximum) of the homogeneous broadening, and $u$ is the $1/\mathrm{e}$ width of the inhomogeneous broadening mechanism (and the RMS atomic speed). The lineshape $s(\Delta)$ is related to the Faddeeva (or complex error) function, $w(iz)$, of complex argument $z$, via
\begin{eqnarray}
s(\Delta) &=& \frac{i\sqrt{\pi}}{ku}w(iz),\label{eq:s}\\
w(iz)&=&\frac{i}{\pi}\int^{+\infty}_{-\infty}\frac{\mathrm{e}^{-x^2}}{iz-x}\textrm{d}x=\mathrm{e}^{z^2}\mathrm{erfc}(z),\label{eq:wiz}\\
z(\Delta)&=&\frac{1}{2}\frac{\Gamma}{ku}-i\frac{\Delta}{ku}.\label{eq:z}
\end{eqnarray}
Equation (\ref{eq:s}) is the exact analytic lineshape of the Doppler-broadened susceptibility; unfortunately this exact result can be difficult to use. Although algorithms exist for evaluating the Faddeeva function and the complementary error function $\mathrm{erfc}(z)$, these functions are not easy to manipulate analytically, and can be time-consuming to evaluate numerically. Consequently it is difficult to relate $\chi$ to $z$ and the parameters of which they are composed (namely the widths $\Gamma$ and $ku$). This in turn makes it difficult to see the relationship between the absorptive and dispersive properties. The preceding reasons motivate our study of approximations to the analytic result by looking at the Faddeeva function in two regimes, named the Gaussian and Lorentzian approximations for reasons that will become apparent.
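Although the analytic manipulations are our main concern, we note in passing that the exact lineshape is straightforward to evaluate numerically. The following minimal Python sketch (an illustration only; the parameter values and variable names are assumptions made for the example and are not those of our experiment) evaluates equation~(\ref{eq:s}) via the \texttt{scipy} implementation of the Faddeeva function, together with the Gaussian and Lorentzian approximations derived below.
\begin{verbatim}
import numpy as np
from scipy.special import wofz, erfi   # wofz(z) = exp(-z^2) erfc(-iz)

# Illustrative parameters (assumed values only)
Gamma = 2 * np.pi * 6.0e6        # homogeneous FWHM (rad/s)
ku    = 2 * np.pi * 0.35e9       # 1/e Doppler width k*u (rad/s)
Delta = np.linspace(-6, 6, 2001) * ku   # detuning grid

a, x = Gamma / (2 * ku), Delta / ku     # z = a - i*x, so iz = x + i*a

# Exact lineshape: s = i*sqrt(pi)/(ku) * w(iz)
s_exact = 1j * np.sqrt(np.pi) / ku * wofz(x + 1j * a)

# Gaussian approximation: s = sqrt(pi)/(ku) * exp(-x^2) * (i - erfi(x))
s_gauss = np.sqrt(np.pi) / ku * np.exp(-x**2) * (1j - erfi(x))

# Lorentzian approximation: s = i / (Gamma/2 - i*Delta)
s_lorentz = 1j / (Gamma / 2 - 1j * Delta)
\end{verbatim}
Comparing the imaginary (absorptive) and real (dispersive) parts of these three arrays should reproduce the behaviour discussed in section~\ref{sec:val}: the Gaussian approximation tracks the exact lineshape near resonance, while the Lorentzian approximation takes over beyond roughly two Doppler widths.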
\subsection{The Gaussian approximation}\label{sec:gapprox}
We consider the situation where the broadening due to atomic motion is much larger than natural broadening, which is the case for typical room temperature alkali-metal atoms. For this approximation we therefore look at the limit that $\Gamma/ku\rightarrow0$ in the derivation of the Faddeeva function. Starting from the homogeneous lineshape of equation~(\ref{eq:convl}),
\begin{equation}
\lim_{\Gamma/ku\rightarrow0}f(z)=-\frac{ku}{\Delta}+i\pi\delta(\frac{\Delta}{ku}),
\end{equation}
where the imaginary part is given by a Dirac delta function. With this expression substituted into the convolution~(\ref{eq:vintegral}), the real and imaginary parts of the lineshape are, respectively
\begin{eqnarray}
s^{\mathrm{R}}&=&-\frac{\sqrt{\pi}}{ku}\:\mathrm{e}^{-(\Delta/ku)^2}
\mathrm{erfi}(\Delta/ku),\label{eq:gsr}\\
s^{\mathrm{I}}&=&\ \ \ \: \frac{\sqrt{\pi}}{ku}\:\mathrm{e}^{-(\Delta/ku)^2}.
\label{eq:gsi}
\end{eqnarray}
The real part contains the imaginary error function $\mathrm{erfi}(z)$ which is similar to the Faddeeva function in that it needs to be evaluated numerically. The imaginary term is the convolution of a Gaussian and a Dirac delta function, and as expected evaluates to the Gaussian function responsible for Doppler-broadening, with the FWHM Doppler width $\Delta\omega_{\mathrm{D}}=2\sqrt{\ln2}\:ku$.
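(For numerical work it may be noted, although we do not use this below, that the combination appearing in equation (\ref{eq:gsr}) is proportional to the Dawson function $F(x)=\mathrm{e}^{-x^2}\int_0^x \mathrm{e}^{t^2}\,\mathrm{d}t=\tfrac{\sqrt{\pi}}{2}\,\mathrm{e}^{-x^2}\,\mathrm{erfi}(x)$ evaluated at $x=\Delta/ku$, so that $s^{\mathrm{R}}=-\tfrac{2}{ku}F(\Delta/ku)$; unlike $\mathrm{erfi}$, the Dawson function is bounded and can be evaluated stably at large detuning.)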
In this approximation both $s^{\mathrm{R}}$ and $s^{\mathrm{I}}$ have a Gaussian detuning dependence, whose exponential decrease means that it decays rapidly away from resonance. $s^{\mathrm{R}}$ has an additional imaginary error function dependence which increases rapidly with detuning; hence dispersion contains the long-range characteristics associated with the Faddeeva function. Thus the imaginary part will only be valid close to resonance, whereas the real part of the Gaussian approximation is expected to be in good agreement with the Faddeeva function for all detunings.
\subsection{The Lorentzian approximation}\label{sec:lapprox}
In the Gaussian approximation we made the assumption that homogeneous broadening was negligible compared to inhomogeneous broadening, based on the ratio of their frequency widths. We saw that this is not true far from resonance. In this section we will find regimes under which homogeneous broadening dominates the susceptibility. We begin by noting that the complementary error function can be written in the form of a continued fraction~\cite{ContFrac1,ContFrac2}
\begin{equation}
\sqrt{\pi}\:\mathrm{erfc}(z)= 2\int ^{\infty}_{z}\mathrm{e}^{-t^2}\mathrm{d}t=\frac{2\mathrm{e}^{-z^2}}{2z+\frac{2}{2z+\frac{4}{2z+\frac{6}{2z+\ldots}}}}.
\label{eq:efn}
\end{equation}
For $|z|\gg1$ the continued fraction can be approximated to $\mathrm{e}^{-z^2}/z$. This requires either of the following conditions to be fulfilled:
\begin{tabular}{cc}
(\textit{i})& $|\Delta|\gg\Delta\omega_{\mathrm{D}}$\\
(\textit{ii})& $\;\Gamma\ \gg\Delta\omega_{\mathrm{D}}$
\end{tabular}
\noindent The first condition is that the laser is detuned from resonance further than the Doppler width and is essentially a property of the light source; the second is that natural broadening dominates over Doppler broadening and is a property of the medium. The Doppler width can be reduced by, for example, using cold atoms at sub-milliKelvin temperatures. Many experiments, including the ones considered in this paper, are conducted with alkali-metal atoms on the D-line at room temperature (or hotter); the parameters of interest are then $\Gamma\sim 2\pi\times 5$~MHz, and $\Delta\omega_{\mathrm{D}}\sim 2\pi\times 0.5$~GHz, thus $\Gamma/\Delta\omega_{\mathrm{D}}\sim10^{-2}$. Therefore, for the limit $|z|\gg1$ to be valid, it is necessary to be detuned far from resonance, $|\Delta|\gg\Delta\omega_{\mathrm{D}}$.
Substituting the approximated $\mathrm{erfc}(z)$ into (\ref{eq:s}) we get the result
\begin{eqnarray}
s&=&\frac{i}{ku}\frac{1}{z}=\frac{i}{\Gamma/2-i\Delta}.
\label{eq:lapprox}
\end{eqnarray}
Note that this is identical to the case for homogeneous broadening, e.g. an ensemble of stationary atoms, or atoms at ultralow temperatures for which Doppler broadening is negligible~\cite{Shultz2008}. The real part of the susceptibility gives the dispersion function, and the imaginary part the Lorentzian function; specifically
\begin{eqnarray}
s^{\mathrm{R}}&=&-\frac{\Delta}{(\Gamma/2)^2+\Delta^2},\label{eq:lsr}\\
s^{\mathrm{I}}&=&\frac{\Gamma/2}{(\Gamma/2)^2+\Delta^2}.
\label{eq:lsi}
\end{eqnarray}
Furthermore, since $|\Delta|\gg\Delta\omega_{\mathrm{D}}\gg\Gamma$, these relations simplify further to $s^{\mathrm{R}}=-1/\Delta$, and $s^{\mathrm{I}}=\Gamma/2\Delta^{2}$ respectively. These detuning dependences are discussed further in section~\ref{sec:comabsdis}.
The physical interpretation of the Lorentzian approximation is that the Gaussian lineshape responsible for inhomogeneous broadening decreases exponentially with detuning, whereas the homogeneous lineshape decreases much more slowly in the wings. Hence the contribution to the overall lineshape far from resonance is dominated by the Lorentzian function, and both absorption and dispersion will be well approximated.
\subsection{Validity of the approximations}\label{sec:val}
Figure~\ref{fig:sapprox} shows the lineshape, $s$, of the Faddeeva function and its Gaussian and Lorentzian approximations, for a typical room temperature alkali-metal atomic ensemble where the Doppler-broadening is two orders of magnitude larger than natural broadening. It can be seen in figure~\ref{fig:sapprox}(a) that for $|\Delta|<1.5\times\Delta\omega_{\mathrm{D}}$ the imaginary part of the Faddeeva function is adequately described by the Gaussian approximation, and for $|\Delta|>2\Delta\omega_{\mathrm{D}}$ the Lorentzian approximation holds. Therefore, close to resonance, Doppler-broadening dominates the absorptive interaction; whereas natural broadening dominates at large detuning. A similar situation for the real part of the Lorentzian approximation is seen in figure~\ref{fig:sapprox}(b), i.e. it is valid for $|\Delta|>2\Delta\omega_{\mathrm{D}}$. However, the Gaussian approximation is in good agreement with the Faddeeva function over the whole spectral range, as predicted in section~\ref{sec:gapprox}.
\subsection{Comparing absorption and dispersion}\label{sec:comabsdis}
Figure~\ref{fig:chiapprox} shows the ratio $|\chi^{\mathrm{R}}/\chi^{\mathrm{I}}|$, calculated using the Faddeeva function and its Gaussian and Lorentzian approximations. It shows that the ratio between dispersion and absorption continually increases with detuning, with the asymptotic limit $2|\Delta|/\Gamma$ from the Lorentzian approximation. Note, however, that dispersion also decreases linearly in this limit. Hence, any dispersive effects which require low absorption are best performed far from resonance under conditions which increase the atom-light interaction e.g. high atomic density~\cite{Siddons09} or stronger coupling~\cite{Kubasik09}.
\subsection{Hyperfine structure}
We have shown that Doppler-broadening can effectively be ignored for detunings $|\Delta|>2\Delta\omega_{\mathrm{D}}$. However, this situation is somewhat complicated due to the presence of hyperfine structure. For alkali-metal atoms the hyperfine structure is such that the ground state splitting, $\Delta\omega_{\mathrm{hfs}}$, is much larger than the room temperature Doppler width. This is not the case for the excited states, which tend to have intervals of comparable size to $\Delta\omega_{\mathrm{D}}$. Hence, in order to calculate $\chi$ near to the line-centre, each individual hyperfine transition needs to be modelled individually, although for some purposes excited state splitting can be ignored (for example, on the $\mathrm{D}_2$ lines of Rb~\cite{Shi07} and Cs~\cite{Camacho07}). Far from line-centre, at detunings larger than the ground state hyperfine splitting, it is possible to approximate all hyperfine transitions to a single Lorentzian function. By performing this calculation we find that there is a less than 5\% error for $|\Delta|>3.5\times\Delta\omega_{\mathrm{hfs}}$.
\section{Comparison between theory and experiment}\label{sec:results}
In order to test experimentally the validity of the approximations to the Faddeeva function, the transmission of a probe beam on the $\mathrm{D}_1$ line of rubidium was recorded. The output from an external cavity diode laser at 795~nm was attenuated to be less than $1\:\mu$W such that it is in the weak probe limit (see ref.~\cite{weak}) and passed through a 75~mm heated vapour cell, based on the design of~\cite{Danny}. A solenoid provided the heating and magnetic field, when required. The cell contained Rb isotopes according to the ratio $^{87}$Rb:$^{85}$Rb of 99:1.
\subsection{Absorption}\label{sec:abs}
The solid curve in figure~\ref{fig:trans}(a) shows the transmission measured at $132^\circ$C as a function of detuning from the weighted line-centre in units of Doppler width, $\Delta\omega_{\mathrm{D}}=2\pi\times584\:\mathrm{MHz}$. Absorption in the region shown is due to the $^{87}$Rb $F_{\mathrm{g}}=2\rightarrow F_{\mathrm{e}}=1,2$ transitions. Theoretical transmission (dashed curves) was calculated using equations~(\ref{eq:s}), (\ref{eq:gsi}) and (\ref{eq:lsi}) to model each hyperfine transition involved. The transmission difference between theory and experiment is shown in figure~\ref{fig:trans}(b). It can be seen that the Faddeeva function agrees with the measured data to within the noise level; any discrepancy beyond this is due to the fitting of the frequency axis. Both the Gaussian and Lorentzian approximations agree on resonance. This is because the transition is optically thick so any variation between the two approximations is obscured. At a detuning of about one Doppler width the Lorentzian is no longer at zero transmission and only matches with measured data again for detunings greater than two Doppler widths. Conversely, the Gaussian approximation matches the experiment up to about 1.5 Doppler widths before differing significantly for larger detunings. It was stated in the derivation of the Gaussian and Lorentzian approximations that Doppler broadening dominates close to resonance, whilst natural broadening dominates far from resonance. It can be seen that at around two Doppler widths absorption due to the Gaussian lineshape rapidly decreases, and it is from this point that the Lorentzian function becomes the dominant broadening mechanism.
The situation with the two approximations is similar to that seen in previous experiments, where Gaussian fits to data are used when the line-centre is of interest, e.g. ref.~\cite{Gauss1}, and Lorentzian fits for off resonant behaviour, e.g. ref.~\cite{Camacho07}. However, we have derived these lineshapes \textit{ab initio} and quantified their regime of validity.
\subsection{Dispersion}\label{sec:dis}
The dispersive properties of the medium are easily probed via the Faraday effect~\cite{Siddons09}. A rotation of the plane of polarisation arises due to the difference in the refractive indices of left and right circularly polarised light. In a heated vapour this rotation can be as large as tens of radians over a frequency range of many Doppler widths. Using the technique described in ref.~\cite{Siddons09} we measured the rotation of light polarisation using a differencing signal in a balanced polarimeter. Light transmitted through the vapour is sent through a polarisation beam splitter and the resulting vertical and horizontal polarisations directed to a differencing photodiode, where the intensities $I_{x}$ and $I_{y}$ are subtracted. An important feature of this signal is that the period of oscillation is dependent on dispersion, whilst being independent of absorption. From the zero crossings one can extract the rotation angle, $\theta$.
Figure~\ref{fig:nemo}(a) shows the differencing signal for an atomic temperature of $112^\circ$C and applied field of 200~G. The signal is normalised to the maximum intensity, $I_0$, received by one of the photodiodes in the absence of a magnetic field. Also shown is the theoretical signal calculated by solving the complete Hamiltonian of the system, with a Faddeeva lineshape. There is good agreement between the two curves, the main difference being in the amplitude, which is due in part to the differencing photodiodes not being perfectly balanced. Figure~\ref{fig:nemo}(b) compares the Faddeeva function and its approximations to the rotation angle experienced by the linearly polarised probe beam. The data points are taken from the zero crossings in figure~\ref{fig:nemo}(a). Excellent agreement is seen between measured data and both the Faddeeva function and Gaussian approximation over the whole spectral range, whilst the Lorentzian approximation only holds far from resonance, in agreement with the conclusion of sections~\ref{sec:gapprox} and~\ref{sec:lapprox}.
\section{Conclusion}\label{sec:conclusion}
We have seen in the previous two sections that the Faddeeva function gives a good fit to data over the whole frequency range, whereas the Lorentzian approximation is only valid far from resonance. The Gaussian approximation, however, shows contrasting behaviour in its absorptive and dispersive properties. With regard to absorption it is valid close to resonance only, whilst the dispersive properties are accounted for over all detuning. The differing nature of the two approximations stems from the fact that only in the Lorentzian approximation did we assume that $|\Delta|\gg\Delta\omega_{\mathrm{D}}$.
The approximations to the electrical susceptibility we have described facilitate (i) the analytic manipulation of $\chi$, allowing a comparison between the dispersive and absorptive properties of atomic media; and (ii) ease of computation of properties of interest. In particular, we have seen that for a detuning greater than two Doppler-broadened linewidths the fully analytic Lorentzian approximation is valid. From this we have shown that off resonance dispersion increasingly dominates over absorption.
\ack This work is supported by EPSRC. We thank A S Arnold for valuable discussion.
\section*{References}
\section{Introduction}
In the space $\mathcal H_d$ of $d\times d$ complex Hermitian matrices, there is a natural Brownian process $W$, whose covariance is given by the Hilbert--Schmidt norm. It turns out, as was first observed by Dyson in 1962 \cite{Dyson}, that the spectrum of $W$ is Markovian: the law of the spectrum of $s\mapsto W_{t+s}$ depends on $W_t$ only through its spectrum. In this paper, we show that for a natural smoothing of the Brownian motion, the so-called Kinetic Brownian motion, the situation can be different.
Here, a kinetic motion with values in $\mathcal H_d$ is a process of regularity $\mathcal C^1$, such that the couple $(H,\dot H)$ of the position and associated velocity is Markovian. Kinetic Brownian motion is the kinetic motion whose velocity $\dot H$ is a standard Brownian motion on the unit sphere. We say that it is a smoothing of Brownian motion because in the large scale limit, $H$ looks like a Brownian motion; namely, the law of the process $t\mapsto\frac1LH_{L^2t}$ converges to that of $W$ as $L\to\infty$, up to a factor $4/d^2(d^2-1)$. In other words, $\frac1LH_{L^2t}$ is very similar to a Brownian motion, with the important difference that it is actually $\mathcal C^1$. For a proof of this convergence, see \cite[Theorem 1.1]{LiKBM} or \cite[Proposition 2.5]{ABT}. See references below for more on kinetic Brownian motion.
Define $\Lambda_t$ as the vector of eigenvalues of $H_t$, the order being irrelevant provided it depends continuously on time. Then a classical application of the inverse function theorem shows that $\Lambda$ has to be $\mathcal C^1$ whenever the eigenvalues of $H_t$ are distinct; in particular, $\Lambda$ cannot be Markovian, otherwise it would be deterministic. The next natural hope would be for the process $(\Lambda,\dot\Lambda)$ to be Markovian. The main objective of this paper is to prove it is the case if and only if $d=2$.
\begin{theorem}
\label{thm:main}
Let $(H,\dot H)$ be a kinetic Brownian motion on the space $\mathcal H_d$ of $d\times d$ complex Hermitian matrices ($d\geq2$), and $0\leq\tau\leq\infty$ the first time $H$ has multiple eigenvalues. Let $\Lambda$ be the process of eigenvalues of $H$, seen as a continuous process from an interval of $\mathbb R_+$ to $\mathbb R^d$.
Then $\tau = \infty$ whenever $H_0$ has distinct eigenvalues. Moreover, $(\Lambda_t,\dot\Lambda_t)_{0\leq t<\tau}$ is well-defined, and is Markovian if and only if $d=2$.
\end{theorem}
In the case $d=2$, the stochastic differential equations describing the eigenvalues of $H$ are given in Lemma \ref{lem:casesond}, Section \ref{ssec:criterion}. In the general case, we introduce in Section \ref{ssec:LambdaA} a subdiffusion $(\Lambda,A)$ of $(H,\dot H)$. Since this process is Markovian in any dimension (see Lemma \ref{lem:MarkovLambdaA} for defining equations), one may want to consider it as a suitable approach to kinetic Dyson Brownian motion, rather than the more restrictive $(\Lambda,\dot\Lambda)$.
As discussed above, it is known that the process $t\mapsto\frac1LH_{L^2t}$ converges in law to a Brownian motion. From this known fact, we will deduce that $\Lambda$ converges to a standard Dyson Brownian motion, in the following sense.
\begin{proposition}
\label{prop:homogenisation}
Let $(H,\dot H)$ be a kinetic Brownian motion on the space $\mathcal H_d$ of $d\times d$ complex Hermitian matrices ($d\geq2$) such that $H_0$ has distinct eigenvalues almost surely. Let $\Lambda$ be the process of eigenvalues of $H$ in non-decreasing order, seen as a continuous process from an interval of $\mathbb R_+$ to $\mathbb R^d$. Let $D$ be the process of eigenvalues of a standard Brownian motion in $\mathcal H_d$ starting at zero, with the same conventions.
Then we have the following convergence in law as $L$ goes to infinity:
\[ \Big(t\mapsto \frac1L\Lambda_{L^2t}\Big)_{0\leq t\leq1}\xrightarrow{\mathcal L}\frac4{d^2(d^2-1)}\cdot D. \]
\end{proposition}
We give a sketch of proof of Theorem \ref{thm:main} in Sections \ref{ssec:tau} to \ref{ssec:criterion}, using various lemmas proved in Part \ref{sec:proofs}. Proposition \ref{prop:homogenisation} is proved in Section \ref{ssec:homogenisation}.
Kinetic Brownian motion was first introduced by Li in \cite{LiIntertwining}, where Theorem 4.3 proves a stronger convergence theorem to Brownian motion than stated above. A self-contained proof by the same author appeared later in \cite{LiKBM}. This result was generalised by Angst, Bailleul and Tardif \cite{ABT} and the author \cite{Perruchaud}. Kinetic Brownian motion has been given different names in the literature, for instance velocity spherical Brownian motion by Baudoin and Tardif \cite{BaudoinTardif} or circular Langevin diffusion by Franchi \cite{Franchi} in the context of heat kernels. See also \cite{ABP} for considerations in an infinite dimensional setting.
The question of existence and behaviour of kinetic Dyson Brownian motion was raised by Thierry Lévy during the author's PhD defence. It turned out to be a beautiful endeavour, and the latter would like to thank the former for this suggestion.
\section{Definitions and Proof Outline}
Let $\mathcal H_d$ be the space of $d\times d$ complex Hermitian matrices. We assume $d\geq2$, and endow it with the Hilbert--Schmidt inner product:
\[ \|H\|_\mathcal{H}^2:=\Tr(H^2)=\Tr(H^*H)=\sum_{ij}|H_{ij}|^2, \]
where $H^*$ is the conjugate transpose of $H$, $(H_{ij})_{ij}$ are the coefficients of $H$, and $\langle\cdot,\cdot\rangle$ is the Hermitian product on $\mathbb C^d$, with the convention that it is linear in its second argument. It is isometric to the standard Euclidean matrix space $\mathbb R^{d\times d}$, via
\[ M=(m_{ij}) \mapsto
\begin{pmatrix}
m_{11} & \frac{m_{12}+\mathsf i\,m_{21}}{\sqrt 2} & & \cdots & \frac{m_{1d}+\mathsf i\,m_{d1}}{\sqrt 2} \\
\frac{m_{12}-\mathsf i\,m_{21}}{\sqrt 2} & m_{22} & \ddots & & \\[15pt]
\vdots & \ddots & \ddots & \ddots & \vdots \\[15pt]
& & \ddots & m_{d-1,d-1} & \frac{m_{d-1,d}+\mathsf i\,m_{d,d-1}}{\sqrt 2} \\
\frac{m_{1d}-\mathsf i\,m_{d1}}{\sqrt 2} & \cdots & & \frac{m_{d-1,d}-\mathsf i\,m_{d,d-1}}{\sqrt 2} & m_{dd}
\end{pmatrix}. \]
A Brownian motion $W$ in $\mathcal H_d$ associated to this Euclidean structure can be described as a matrix as above, where the $m_{ij}$'s are independent (real standard) Brownian motions.
Let $\dot H$ be a standard Brownian motion on the unit sphere $\mathbb S(\mathcal H_d)$ of $\mathcal H_d$, and $H$ its integral. For instance, one may define $(H,\dot H)$ as the solution of the stochastic differential equation
\begin{align}
\label{eq:defH}
\mathrm d H_t & = \dot H_t\mathrm d t, \\
\label{eq:defdotH}
\mathrm d\dot H_t & = \underbrace{{}\circ\mathrm d W_t - \dot H_t\langle\dot H_t,{}\circ\mathrm d W_t\rangle_\mathcal{H}}_{\text{projection of }\circ\mathrm d W_t\text{ on }\dot H_t^\perp}
= \mathrm d W_t - \dot H_t\Tr(\dot H_t^*\mathrm d W_t) - \frac{d^2-1}2\dot H_t\mathrm d t,
\end{align}
where $\circ\mathrm d W$ (resp. $\mathrm d W$) denotes the Stratonovich (resp. Ito) integral. It is defined for all times, since $\dot H$ is the solution of a SDE with smooth coefficients on a compact manifold, and $H$ is the integral of a process that is uniformly bounded. Let $0\leq\tau\leq\infty$ be the first time $H$ has multiple eigenvalues, with $\tau=\infty$ if its eigenvalues stay distinct for all times. It is a stopping time since it is the hitting time of a closed set.
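Although we will not use it, the system \eqref{eq:defH}--\eqref{eq:defdotH} is easy to simulate, which can help build intuition for the eigenvalue process studied below. The following minimal Python sketch is an illustration under stated assumptions: the step-and-project Euler scheme only approximates the spherical Brownian motion $\dot H$ for small time steps, and all function names and parameter values are ours, not taken from the references.
\begin{verbatim}
import numpy as np

def hermitian_basis(d):
    """Orthonormal basis of d x d Hermitian matrices for the Hilbert-Schmidt product."""
    E = []
    for i in range(d):
        M = np.zeros((d, d), complex)
        M[i, i] = 1.0
        E.append(M)
    for i in range(d):
        for j in range(i + 1, d):
            M = np.zeros((d, d), complex)
            M[i, j] = M[j, i] = 1 / np.sqrt(2)
            E.append(M)
            M = np.zeros((d, d), complex)
            M[i, j] = -1j / np.sqrt(2)
            M[j, i] = 1j / np.sqrt(2)
            E.append(M)
    return E

def kinetic_bm_eigenvalues(d=3, T=10.0, n_steps=100000, seed=0):
    """Euler scheme for (H, Hdot); Hdot is renormalised after each step to stay on the sphere."""
    rng = np.random.default_rng(seed)
    E = hermitian_basis(d)
    dt = T / n_steps
    H = np.diag(np.arange(d, dtype=float)).astype(complex)   # distinct initial eigenvalues
    V = sum(rng.normal() * e for e in E)
    V = V / np.sqrt(np.trace(V @ V).real)                     # unit-norm initial velocity
    eigs = np.empty((n_steps + 1, d))
    eigs[0] = np.linalg.eigvalsh(H)
    for k in range(n_steps):
        H = H + dt * V
        dW = sum(rng.normal(scale=np.sqrt(dt)) * e for e in E)  # Brownian increment in H_d
        V = V + dW
        V = V / np.sqrt(np.trace(V @ V).real)                    # project back onto the unit sphere
        eigs[k + 1] = np.linalg.eigvalsh(H)
    return eigs
\end{verbatim}
Plotting the columns of the returned array gives eigenvalue trajectories that look $\mathcal C^1$ and which, after the rescaling $t\mapsto\frac1L\Lambda_{L^2t}$, should resemble a Dyson Brownian motion, in line with Proposition \ref{prop:homogenisation}.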
In the following, we will work with diagonal matrices and matrices whose diagonal is zero; let $\mathcal H_d=\mathcal H_d^\Delta\oplus\mathcal H_d^{\mathord{\scalerel*{\boxbslash}{gX}}}$ be the associated decomposition. Note that it is actually an orthogonal decomposition. Let us also write $\mathfrak u_d$ for the space of skew-Hermitian matrices ($\mathfrak u_d$ is the Lie algebra of the group $U_d(\mathbb C)$ of unitary matrices).
The remainder of this section is a complete proof outline of Theorem \ref{thm:main}, using a few lemmas proved in the next section.
\subsection{Explosion time}
\label{ssec:tau}
By ``$\tau=\infty$ whenever $H_0$ has distinct eigenvalues'', we mean that the event
\[ \{H_0\text{ has distinct eigenvalues and }\tau<\infty\} \]
has measure zero. It is known, see references in Section \ref{ssec:discriminant}, that the subset of $\mathcal H_d$ consisting of matrices with multiple eigenvalues is contained in a finite collection of submanifolds of codimension 3. Then it is enough to prove the following result, of independent interest.
\begin{proposition}[proved in Section \ref{ssec:codim2}]
\label{prop:codim2}
Let $M$ be a complete Riemannian manifold, and $N\subset M$ a submanifold of codimension at least 2. Let $(H,\dot H)$ be a kinetic Brownian motion in $M$, defined for all times $t\geq0$. Then the event
\[ \{ H_t\in N\text{ for some }t>0\} \]
has measure zero.
\end{proposition}
\subsection{Diagonalisation}
Because $H_t$ is Hermitian, there exists for all $t$ a unitary matrix $U_t$ such that $U_t^*H_tU_t$ is diagonal. Abstract geometric arguments show that, for a fixed realisation of $H$, we can find a $U$ with regularity $\mathcal C^1$, at least as long as the eigenvalues stay distinct. However, we would like $U$ to be described by an explicit stochastic differential equation. Let us look for a candidate.
Given a $\mathcal C^1$ process $U$ with unitary values, we call
\[ \dot u_t := U_t^{-1}\dot U_t = U_t^*\dot U_t \]
its derivative, seen in the Lie algebra of $U_d(\mathbb C)$: $\dot u_t\in\mathfrak u_d$. If we define the $\mathcal C^1$ process
\begin{equation}
\label{eq:defLambda}
\Lambda:t\mapsto U_t^*H_tU_t,
\end{equation}
then $H_t$ will be diagonal in the frame $U_t$ if and only if $\Lambda_t\in\mathcal H^\Delta$. It means that we are looking for a $U$ such that the derivative $\dot\Lambda_t$ stays in $\mathcal H^\Delta$. We have
\[ \dot\Lambda_t
= \big(\dot U_t^*H_tU_t + U_t^*H_t\dot U_t\big) + U_t^*\dot H_tU_t
= (\dot u_t^*\Lambda_t + \Lambda_t\dot u_t) + U_t^*\dot H_tU_t. \]
Assuming $\Lambda$ is indeed diagonal, and since $\dot u_t$ is skew-Hermitian, the coefficients of the first term are
\begin{equation}
\label{eq:productdotuLambda}
(\dot u_t^*\Lambda_t + \Lambda_t\dot u_t)_{ij}
= (\dot u_t^*)_{ij}(\Lambda_t)_{jj} + (\Lambda_t)_{ii}(\dot u_t)_{ij}
= \big((\Lambda_t)_{ii} - (\Lambda_t)_{jj}\big)(\dot u_t)_{ij}.
\end{equation}
In other words, for $\Lambda$ to stay diagonal, we have no choice for the off-diagonal coefficients of the velocity $\dot u_t$: setting
\begin{equation}
\label{eq:defA}
A:t\mapsto U_t^*\dot H_tU_t,
\end{equation}
they have to be
\[ -\frac{(U_t^*\dot H_tU_t)_{ij}}{(U_t^*H_tU_t)_{ii}-(U_t^*H_tU_t)_{jj}}
= -\frac{(A_t)_{ij}}{(\Lambda_t)_{ii}-(\Lambda_t)_{jj}}. \]
It turns out that this choice works, as we will see.
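Indeed, with this choice the off-diagonal entries of $\dot\Lambda_t$ vanish identically: combining the computation above with equation \eqref{eq:productdotuLambda}, for $i\neq j$,
\[
(\dot\Lambda_t)_{ij}
= \big((\Lambda_t)_{ii}-(\Lambda_t)_{jj}\big)(\dot u_t)_{ij} + (U_t^*\dot H_tU_t)_{ij}
= -(A_t)_{ij} + (A_t)_{ij} = 0,
\]
at least as long as $\Lambda_t$ stays diagonal with distinct diagonal entries; Lemma \ref{lem:defULambda} below turns this formal observation into an actual statement.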
For the sake of conciseness, define $\dot u(\Lambda,A)$ as
\begin{equation}
\label{eq:defdotu}
\big(\dot u(\Lambda,A)\big)_{ij} :=
\begin{cases}
\displaystyle-\frac{A_{ij}}{\Lambda_{ii}-\Lambda_{jj}} & \text{ for } i\neq j, \\
0 & \text{ else}
\end{cases}
\end{equation}
whenever $\Lambda$ has distinct diagonal entries. As long as $\Lambda$ stays diagonal with distinct eigenvalues, $\dot u(\Lambda,A)$ stays well-defined.
\begin{lemma}[proved in Section \ref{ssec:proofL12}]
\label{lem:defULambda}
Let $(H_t,\dot H_t)_{t\geq0}$ be a kinetic Brownian motion on $\mathcal H_d$, and $0\leq\tau\leq\infty$ the first time $H$ has multiple eigenvalues. Let $U_0$ be a (random) unitary matrix such that $U_0^*H_0U_0$ is diagonal, and define $U_t$ as the solution of
\begin{align}
\label{eq:defU}
\mathrm d U_t &= U_t\dot u_t\mathrm d t, &
\dot u_t &= \dot u\big(U_t^*H_tU_t,U_t^*\dot H_tU_t\big),
\end{align}
where $\dot u(\Lambda,A)$ is defined in equation \eqref{eq:defdotu}.
Then $U_t$ is defined for all $0\leq t<\tau$, and $U_t^*H_tU_t$ is diagonal for all such $t$.
\end{lemma}
In particular, it means that $\Lambda$ is the process of eigenvalues of $H$, so the potential kinetic Dyson Brownian motion is $(\Lambda,\dot\Lambda)$. If one wishes, we can take $U_0$ so that the diagonal entries of $U_0^*H_0U_0$ are in a given order, for instance non-decreasing. Then, using the fact that the eigenvalues of $H_t$ stay distinct for all $t>0$ (see Section \ref{ssec:tau}), we see that the diagonal entries $U_t^*H_tU_t$ stay in the same order. In the following, we will not need nor assume that $U_0$ satisfies this property.
\subsection{The Markovian process \texorpdfstring{$(\Lambda,A)$}{(Λ,A)}}
\label{ssec:LambdaA}
From $(H,\dot H)$, we constructed $U$ using equation \eqref{eq:defU}. We define then $\Lambda$ and $A$ as in equations \eqref{eq:defLambda} and \eqref{eq:defA}. By Lemma \ref{lem:defULambda} we know that $\Lambda$ is in fact diagonal. We also notice that $A$ takes values in the sphere $\mathbb S(\mathcal H_d)$, since $\dot H$ does, and conjugation by a fixed unitary matrix is an isometry.
Then we see that the triple $(U_t,\Lambda_t,A_t)$ satisfies the system of equations
\begin{align*}
\mathrm d U_t & = U_t\dot u_t\mathrm d t \\
\mathrm d \Lambda_t & = (\dot u_t^*\Lambda_t + \Lambda_t\dot u_t)\mathrm d t + A_t\mathrm d t \\
\mathrm d A_t & = (\dot u_t^*A_t + A_t\dot u_t)\mathrm d t + U_t^*\mathrm d\dot H_tU_t
\end{align*}
for $0\leq t<\tau$, where $\dot u_t = \dot u(\Lambda_t,A_t)$. These $\dot u(\Lambda,A)$ and $\dot u_t$ are still the same, defined respectively in equations \eqref{eq:defdotu} and \eqref{eq:defU}; we are merely emphasising the fact that $\dot u_t$ depends on $(H,\dot H)$ only through $(\Lambda,A)$. Note that the above is really a stochastic differential equation describing $(U,\Lambda,A)$ with a driving noise $\dot H$, rather than an abstract functional of the couple $(H,\dot H)$. If one wishes, one can see $(U,\Lambda,A)$ as a continuous process defined for all times, over the one-point compactification $\mathcal U\cup\{\dagger\}$ of the open set $\mathcal U$ of triples $(U,\Lambda,A)\in U_d(\mathbb C)\times\mathcal H_d^\Delta\times\mathbb S(\mathcal H_d)$ such that $\Lambda$ has distinct eigenvalues, with the convention that $\dagger$ is absorbing. However, since $\tau$ can only be zero or infinity, the benefit of doing so is rather small.
We are interested in the dynamics of $(\Lambda,\dot\Lambda)$. In fact, this pair is nothing but $(\Lambda,\pi^\Delta(A))$, where the second term is the projection of $A$ on $\mathcal H_d^\Delta$, roughly speaking its diagonal. Indeed, this is a direct consequence of the fact that in the driving equation for $\mathrm d\Lambda_t$, the first term is zero on the diagonal as seen in \eqref{eq:productdotuLambda}. As explained in the following lemma, it turns out that $(\Lambda,A)$ is Markovian.
\begin{lemma}[proved in Section \ref{ssec:proofL12}]
\label{lem:MarkovLambdaA}
Let $(H,\dot H)$ be a kinetic Brownian motion in $\mathcal H_d$, $0\leq\tau\leq\infty$ the first time $H$ has multiple eigenvalues, and define $U$, $\Lambda$ and $A$ as in equations \eqref{eq:defU}, \eqref{eq:defLambda} and \eqref{eq:defA}. Then, up to enlarging the underlying probability space, there exists a standard Brownian motion $B$ with values in $\mathcal H_d$ such that
\begin{align*}
\mathrm d \Lambda_t
& = \pi^\Delta(A_t)\mathrm d t \\
\mathrm d A_t
& = \big(\dot u_t^*A_t + A_t\dot u_t\big)\mathrm d t
+ \mathrm d B_t - A_t\Tr\big(A_t^*\mathrm d B_t\big)
- \frac{d^2-1}2A_t\mathrm d t
\end{align*}
for all $0\leq t<\tau$, where $\dot u_t = \dot u(\Lambda_t,A_t)$ as defined in equation \eqref{eq:defdotu} and $\pi^\Delta$ is the projection on the space of diagonal matrices. In particular, the process $(\Lambda,A)$ is a Markovian process.
\end{lemma}
Here, we could replace $\dot u_t$ everywhere by its actual expression, which makes it clear that $(\Lambda,A)$ satisfies a self-contained stochastic differential equation. We can again see $(\Lambda,A)$ as a process defined for all times, over the one-point compactification of an appropriate open set of $\mathcal H_d^\Delta\times\mathbb S(\mathcal H_d)$.
Since $(\Lambda,A)$ is Markovian in all dimensions whereas (as we will show later) $(\Lambda,\dot\Lambda)$ is not, we made the remark in the introduction that one might view the former as a natural definition of kinetic Dyson Brownian motion. Instead of containing only the information about the derivative of the eigenvalues of $\Lambda$ as $\dot\Lambda$ might do, $A$ is the matrix $\dot H$ as seen in a referential that makes $H$ diagonal; in particular, it contains some hints about the motion of the eigenspaces in relation to each other. As we will see below, at least some of this additional information is needed to describe the motion of $\Lambda$ entirely, at least in dimension $d\geq3$.
\subsection{A criterion for a Markovian kinetic Dyson Brownian motion}
\label{ssec:criterion}
It is worth noticing that the equation for $A$ describes a Brownian motion with drift on a sphere; see for instance the definition of $\dot H_t$ in \eqref{eq:defdotH}, describing a Brownian motion without drift. In fact, if it were not for the first term $(\dot u^*A+A\dot u)\mathrm d t$, $A$ would be precisely a standard Brownian motion on the unit sphere of $\mathcal H_d$.
But it is known, and not too difficult to see, that the projection of a spherical Brownian motion $X$ is Markovian. Indeed, once one fixes a subset of coordinates $(X^0,\ldots,X^k)$ of norm $r$, then the remaining coordinates $(X^{k+1},\ldots,X^n)$ can always be reduced to $\big(\sqrt{1-r^2},0,\ldots,0\big)$, up to a rotation fixing the first coordinates; see Section \ref{ssec:sphericalBM} for details and references. In particular, if we continue to ignore this drift term, it would be clear at this point that $(\Lambda,\dot\Lambda)=(\Lambda,\pi^\Delta(A))$ is Markovian. So any obstruction for $(\Lambda,\dot\Lambda)$ to be Markovian must come from the additional term $(\dot u^*A+A\dot u)$. The following lemma describes the situation with a precise criterion.
\begin{lemma}[proved in Section \ref{ssec:proofL3}]
\label{lem:MarkovPhi}
Define $\Phi=\Phi^{(d)}:\mathcal H_d^\Delta\times\mathbb S(\mathcal H_d)\to \mathcal H_d^\Delta$ by
\[ \Phi(\Lambda,A)
:= \pi^\Delta(\dot u^*A+A\dot u), \]
where $\pi^\Delta$ is the projection on $\mathcal H_d^\Delta$ and $\dot u = \dot u(\Lambda,A)$ as defined in \eqref{eq:defdotu}.
Let $(H,\dot H)$ be a kinetic Brownian motion in $\mathcal H_d$, and define $\Lambda$ the continuous process of its eigenvalues. Then $(\Lambda,\dot\Lambda)$ is Markovian if and only if $\Phi$ factors through $(\Lambda,A)\mapsto(\Lambda,\pi^\Delta(A))$.
\end{lemma}
There is a concise expression for the coefficients of $\Phi(\Lambda,A)$. It is of course zero out of the diagonal, and we have
\begin{equation}
\label{eq:concisePhi}
\Phi(\Lambda,A)_{ii}
= \sum_{j\neq i}\Big(- \frac{\overline A_{ji}}{\Lambda_{jj}-\Lambda_{ii}}\cdot A_{ji}
- A_{ij}\cdot\frac{A_{ji}}{\Lambda_{jj}-\Lambda_{ii}}\Big)
= 2\sum_{j\neq i}\frac{|A_{ij}|^2}{\Lambda_{ii}-\Lambda_{jj}}.
\end{equation}
In dimension 2, since $|A| = 1$, we get directly
\[ \Phi(\Lambda,A)_{11}
= - \Phi(\Lambda,A)_{22}
= \frac{|A_{12}|^2+|A_{21}|^2}{\Lambda_{11}-\Lambda_{22}}
= \frac{1-|A_{11}|^2-|A_{22}|^2}{\Lambda_{11}-\Lambda_{22}}. \]
This depends only on $\Lambda$ and the on-diagonal coefficients of $A$, so $(\Lambda,\dot\Lambda)$ is indeed Markovian. In dimension $d\geq3$, it is not obvious that one could use a similar trick, and in fact we can show that it is not possible and that the process is not Markovian.
In Sections \ref{ssec:dim2} and \ref{ssec:dim3+} we carry out the computations in dimension $d=2$ and $d\geq3$ respectively, and we conclude as follows.
\begin{lemma}[proved in Sections \ref{ssec:dim2} and \ref{ssec:dim3+}]
\label{lem:casesond}
~
\begin{itemize}
\item If $d=2$, the eigenvalues $\lambda,\mu$ of $H$ make up a kinetic diffusion, and satisfy the equations
\begin{align*}
\mathrm d\lambda_t &= \dot\lambda_t\mathrm d t, &
\mathrm d\dot\lambda_t &= +\frac{1-\dot\lambda_t^2-\dot\mu_t^2}{\lambda_t-\mu_t}\mathrm d t
+ \mathrm d M^\lambda_t
- \frac{d^2-1}2\dot\lambda_t\mathrm d t, \\
\mathrm d\mu_t &= \dot\mu_t\mathrm d t, &
\mathrm d\dot\mu_t &= -\frac{1-\dot\lambda_t^2-\dot\mu_t^2}{\lambda_t-\mu_t}\mathrm d t
+ \mathrm d M^\mu_t
- \frac{d^2-1}2\dot\mu_t\mathrm d t,
\end{align*}
where $M^\lambda$ and $M^\mu$ are martingales with brackets
\begin{align*}
\mathrm d\langle M^\lambda,M^\lambda\rangle_t &= \big(1 - \dot\lambda_t^2\big)\mathrm d t, &
\mathrm d\langle M^\mu,M^\mu\rangle_t &= \big(1 - \dot\mu_t^2\big)\mathrm d t,
\end{align*}
\[ \mathrm d\langle M^\lambda,M^\mu\rangle_t = - \dot\lambda_t\dot\mu_t\mathrm d t. \]
\item If $d\geq3$, $\Phi^{(d)}$ does not factor, and the process $(\Lambda,\dot\Lambda)$ is not Markovian.
\end{itemize}
\end{lemma}
It is tempting to try to salvage some weaker Markovian properties from $(\Lambda,\dot\Lambda)$ when $d\geq3$. In Section \ref{ssec:dim3+}, we show that in some sense, it is not locally Markovian anywhere.
\subsection{Homogenisation}
\label{ssec:homogenisation}
As stated in the introduction, if we write $H^L$ for the normalised process $t\mapsto\frac1LH_{L^2t}$, then $(H^L_t)_{0\leq t\leq1}$ converges in law to a standard Brownian motion, up to the constant scaling factor $\frac{4}{d^2(d^2-1)}$; see \cite[Theorem 1.1]{LiKBM} or \cite[Proposition 2.5]{ABT}. In this section we prove Proposition \ref{prop:homogenisation}, namely that the process $\Lambda$, although not Markovian, is in some sense Markovian at large scales, in that a similar rescaled limit converges to a Dyson Brownian motion.
The map $\mathcal H_d\to\mathcal H_d^\Delta$ sending a matrix $H$ to the matrix $\Lambda$ whose diagonal entries are the eigenvalues of $H$ with multiplicities according to a chosen order (e.g. non-decreasing) is continuous. One can see this as follows. First, the spectral measure $\frac1d\sum\delta_\lambda$ is a continuous function of $M\in M_d(\mathbb C)$, where $\delta_\lambda$ is the Dirac mass at $\lambda$, and $\lambda$ ranges over the spectrum of $M$. Then, since $H$ has real eigenvalues, we only need continuity of the map sending a spectral measure on $\mathbb R$ to the ordered vector of its atoms, which follows directly from the easy fact that the smallest atom depends continuously on the measure.
This means that $\Lambda_t$ can more or less be described as a continuous function of $H_t$. We say more or less, because the ordering of the eigenvalues depends on our choice of $U_0$: if it was chosen such that the eigenvalues of $U_0^*H_0U_0$ are in non-decreasing order, which we can always impose, then the values of $\Lambda_t$ along the diagonal would stay in non-decreasing order (since the eigenvalues of $H_t$ are distinct for all $t>0$, see Section \ref{ssec:tau}). Suppose we chose $U_0$ as such, and define $\Lambda:\mathcal H_d\to\mathcal H_d^\Delta$ as the map sending a given matrix $H$ to the diagonal matrix whose entries are the eigenvalues of $H$ in non-decreasing order; it should lead to no confusion of notation, since we then have $\Lambda_t=\Lambda(H_t)$. In particular, if one defines $\Lambda^L:t\mapsto\frac1L\Lambda_{L^2t}$, then $\Lambda^L=\Lambda\circ H^L$. Such operations preserve convergence in law, so given a standard Brownian motion $W$ in $\mathcal H_d$, we have the following convergence in law:
\[ (H^L_t)_{0\leq t\leq1}\xrightarrow{\mathcal L}\frac4{d^2(d^2-1)}\cdot W,
\qquad
(\Lambda^L_t)_{0\leq t\leq1}\xrightarrow{\mathcal L}\frac4{d^2(d^2-1)}\cdot\Lambda(W). \]
Since $\Lambda(W)$ is the spectrum of a Brownian motion in $\mathcal H_d$ in the form of a diagonal matrix, it is nothing but a Dyson Brownian motion. As stated above, $\Lambda$ thus looks very much like a Dyson Brownian motion at large scales. Recall that Dyson Brownian motion is Markovian, so the hidden information preventing $\Lambda$ from being Markovian (the off-diagonal coefficients of $A$, but also the derivative $\dot\Lambda$) vanishes in the limit.
It might be interesting to see if one could prove the convergence of $\Lambda^L$ towards the rescaled $\Lambda(W)$ using only the dynamics of $(\Lambda,A)$ as given in Lemma \ref{lem:MarkovLambdaA}. Although the author does not pretend it is impossible, it seems that the non-linearity in the vector field $\Phi(\Lambda,A)$ makes it more difficult to approach than the convergence of $H^L$, using for instance the methods of \cite{ABT}.
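Although it plays no role in the proofs, the convergence above is easy to observe numerically. The following Python sketch is only an illustration: it assumes the normalisation of \eqref{eq:defH} and \eqref{eq:defdotH}, and uses a crude Euler scheme in which $\dot H$ is projected back onto the sphere after each step. It simulates $(H,\dot H)$ and records the ordered spectrum $\Lambda_t$; rescaling time by $L^2$ and space by $L$ then produces trajectories close to those of a Dyson Brownian motion.
\begin{verbatim}
import numpy as np

def hermitian_increment(d, dt, rng):
    # Standard Brownian increment in H_d for <A,B> = Tr(AB):
    # diagonal entries of variance dt, off-diagonal real/imaginary parts of variance dt/2.
    M = np.zeros((d, d), dtype=complex)
    for i in range(d):
        M[i, i] = rng.normal(0.0, np.sqrt(dt))
        for j in range(i + 1, d):
            z = rng.normal(0.0, np.sqrt(dt / 2)) + 1j * rng.normal(0.0, np.sqrt(dt / 2))
            M[i, j], M[j, i] = z, np.conj(z)
    return M

def kinetic_bm_spectrum(d=3, T=1.0, dt=1e-4, seed=0):
    rng = np.random.default_rng(seed)
    H = np.zeros((d, d), dtype=complex)
    Hdot = hermitian_increment(d, 1.0, rng)
    Hdot /= np.linalg.norm(Hdot)                   # start on the unit sphere of H_d
    steps = int(T / dt)
    spectra = np.empty((steps, d))
    for k in range(steps):
        spectra[k] = np.linalg.eigvalsh(H)         # Lambda_t, in non-decreasing order
        dW = hermitian_increment(d, dt, rng)
        H = H + Hdot * dt
        Hdot = Hdot + dW - Hdot * np.trace(Hdot @ dW).real - 0.5 * (d * d - 1) * Hdot * dt
        Hdot /= np.linalg.norm(Hdot)               # project back onto the sphere
    return spectra

# Lambda^L: run with T = L**2 and plot spectra / L against t / L**2.
\end{verbatim}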
\section{Proof of the Lemmas}
\label{sec:proofs}
\subsection{Matrices with multiple eigenvalues}
\label{ssec:discriminant}
We claimed earlier that the set of Hermitian matrices with multiple eigenvalues is covered by finitely many submanifolds of codimension at least 3. This is all we will need for our purposes, although we can actually show an additional structure result. Since this set is a (real) algebraic set (it is the zero locus of the discriminant of the characteristic polynomial), it is actually the disjoint union of at most $d^2-2$ (real analytic) submanifolds of dimensions $0$ to $d^2-3$; see for instance propositions 3.3.11 and 3.3.14 of \cite{BochnakCosteRoy}.
We sketch a proof of the covering result; the approach is carried out in \cite{Arnold} with a detailed discussion of the underlying combinatorial structure. See also \cite{RepeatedEigenvalues} and references therein. Let $d_1+\cdots+d_k=d$ be a partition of $d$ into $k$ positive integers. Consider the space $N$ of all Hermitian matrices $H$ with eigenvalues $\lambda_1\leq\cdots\leq\lambda_d$ such that the first $d_1$ eigenvalues are equal but less than the next, the following $d_2$ are equal but less than the $(d_1+d_2+1)$st, etc. For instance, if $(d_1,d_2,d_3)=(1,2,2)$, we are considering $5\times5$ matrices whose eigenvalues satisfy
\[ \lambda_1<\lambda_2=\lambda_3<\lambda_4=\lambda_5. \]
There is a one-to-one correspondence between such matrices and a choice of $k$ (orthogonal) eigenspaces and associated (real) eigenvalues. There are $k$ degrees of freedom for the choice of the eigenvalues. Using for instance the Gram--Schmidt algorithm, the choice of the eigenspaces is equivalent to the data of a flag $E_1\subset\cdots\subset E_k=\mathbb C^d$ of subspaces of respective dimensions $i_\ell = d_1+\cdots+d_\ell$, $1\leq\ell\leq k$. The space of these flags is known to be a manifold of complex dimension
\[ d_1(d-i_1) + d_2(d-i_2) + \cdots + d_{k-1}(d-i_{k-1})
= \frac12\Big(d^2 - \sum_\ell d_\ell^2\Big). \]
All in all, the set of matrices satisfying this constraint is a manifold of real dimension
\[ d^2-(d_1^2-1)-\cdots-(d_k^2-1) \]
(the restriction to a space of dimension $d_\ell$ could have been any matrix of $\mathcal H_{d_\ell}$, but is instead scalar), so it has codimension at least $3$ when a given $d_\ell$ is not one. Considering all partitions of $d$ except the trivial $1+\cdots+1$, we see that the set of matrices with multiple eigenvalues is included in a finite collection of manifolds of codimension at least 3.
\subsection{Proof of Proposition \ref{prop:codim2}}
\label{ssec:codim2}
Let us turn to the proof of Proposition \ref{prop:codim2}. Let $(M,g)$ be a complete Riemannian manifold of dimension $n$, and $N\subset M$ a submanifold of codimension at least 2. We suppose $N$ is an embedded manifold without boundary, although it will be clear that the proof may be adapted to the more general case of immersed manifolds with boundary. Given a kinetic Brownian motion $(H,\dot H)$ on $M$, we want to show that the event
\[ \{ H_t\in N\text{ for some }t>0\} \]
has probability zero.
We call embedded (closed) disc of dimension $k$ a subset $D$ of $M$ for which there exists an open set $\mathcal U\supset D$ and a diffeomorphism $\phi:\mathcal U\to B_0(1)$ to the unit ball of $\mathbb R^n$ such that $\phi(D)$ is the intersection $(\mathbb R^k\times\{0\}^{n-k})\cap\overline B_0(1/2)$. Then, because $N$ is second countable, it can be covered by countably many embedded discs of codimension 2, say $N\subset\bigcup_{i\geq0}D_i$. It means that
\[ \mathbb P(H_t\in N\text{ for some }t>0)
\leq \sum_{m,i\geq0}\mathbb E\Big[\mathbb P\big(H_t\in D_i\text{ for some }t\in[2^{-m},2^m]\,\big|\,(H_0,\dot H_0)\big)\Big]. \]
We are left to show that for a given compact interval $[a,b]\subset(0,\infty)$, starting point $(H_0,\dot H_0)$ and embedded disc $D$ of codimension 2, we have
\[ \mathbb P\left( H_t\in D\text{ for some } t\in [a,b] \right) = 0. \]
Fix some $\delta>0$, and write $D_\delta$ for the set of points at distance at most $\delta$ from $D$. Since the position process of the kinetic Brownian motion has velocity one, if we have $H_t\in D$ for a given $t>0$, then there must exist $t'\in2\delta\mathbb N$ such that $H_{t'}\in D_\delta$. It means that
\begin{equation}
\label{eq:discretetime}
\mathbb P\left( H_t\in D\text{ for some } t\in [a,b] \right)
\leq \sum_{\ell=\lfloor a/2\delta\rfloor}^{\lceil b/2\delta\rceil}\mathbb P(H_{2\delta\ell}\in D_\delta).
\end{equation}
We will prove that
\[ \limsup_{\delta\to0}\sup_{a/2\leq t\leq2b}\frac{\mathbb P(H_t \in D_\delta)}{\delta^2} < \infty, \]
which will show that the sum in \eqref{eq:discretetime} is bounded by a constant multiple of $\delta$, so the left hand side is zero upon taking the limit $\delta\to0$.
Let $\phi:\mathcal U\to B_0(1)$ be a map compatible with $D$, in the sense described above. It induces a diffeomorphism $T^1\phi$ from $T^1\mathcal U$ to $T^1B_0(1)\simeq B_0(1)\times\mathbb S^{n-1}$, sending $(h,\dot h)$ to
\[ \left(\phi(h),\frac{d\phi_h(\dot h)}{|d\phi_h(\dot h)|}\right). \]
Since $D$ is compact, there is some small $\varepsilon>0$ such that $D_\varepsilon\subset\mathcal U$. The fact that $(H,\dot H)$ is a hypoelliptic diffusion means that the density of $(H_t,\dot H_t)$ is smooth for any given $t>0$, and moreover depends smoothly on $t$. In particular, there exists a smooth function $p$ depending on $t>0$ and $(x,\dot x)\in T^1B_0(1)$ such that
\[ \mathbb P\Big((H_t,\dot H_t)\in(T^1\phi)^{-1}(A)\Big)=\int_Ap_t(x,\dot x)\mathrm d x\mathrm d\dot x, \]
where the integral is considered with respect to Lebesgue measure (normalised however the reader pleases, up to introducing a constant in $p$). Since $p$ is smooth and $D_\varepsilon$ is compact, there exists a constant $\|p\|>0$ such that we have $p_t(x,\dot x)\leq\|p\|$ for all $t\in[a/2,2b]$, $x\in\phi(D_\varepsilon)$, $\dot x\in\mathbb S^{n-1}$. It means that for any such $t$ and $0<\delta\leq\varepsilon$,
\[ \mathbb P(H_t\in D_\delta) \leq \|p\|\cdot\operatorname{Vol}(\phi(D_\delta))\cdot\operatorname{Vol}(\mathbb S^{n-1}). \]
Let $\phi^*g$ be the metric on $B_0(1)$ induced by the identification with $\mathcal U$, seen as an $n\times n$ matrix. Using smoothness and compactness again, there exists a constant $C>0$ such that the Euclidean norm is bounded above by $C$ times the norm induced by $\phi^*g$, uniformly over $\phi(D_\varepsilon)$. In particular, given $0<\delta\leq\varepsilon$ and a point $y$ in $D_\delta$, there exists (by compactness) a point $x$ in $D$ and a smooth curve $\gamma$ of length at most $\delta$ with endpoints $x$ and $y$. The points of $\gamma$ belong to $D_\varepsilon$, so the Euclidean length of $\phi\circ\gamma$ is at most $C$ times the length of $\gamma$, which means that $\phi(y)$ is included in $\phi(D)+B_0(C\delta)$. All in all,
\[ \phi(D_\delta)
\subset \phi(D)+B_0(C\delta)
\subset \big(B_0(1)\cap\mathbb R^{n-2}\big)\times\big(B_0(C\delta)\cap\mathbb R^2\big) \]
for all $\delta$ small enough, and the Euclidean volume of $\phi(D_\delta)$ is bounded by $\delta^2$ up to a constant factor, which concludes.
Note that Proposition \ref{prop:codim2} is obviously optimal in terms of dimension. If any kinetic motion $(H,\dot H)$ were to avoid codimension 1 manifolds, then it wouldn't be able to reach the boundary of small balls around its initial point, so it would have to be stationary: $H_t=H_0$, $\dot H_t=0$. If one wants to show that a set that is not a submanifold is completely avoided by kinetic Brownian motion, the above shows that it would be enough for the $\delta$-fattenings of its bounded subsets to have volume of order $o(\delta)$.
\subsection{Proof of Lemmas \ref{lem:defULambda} and \ref{lem:MarkovLambdaA}}
\label{ssec:proofL12}
Suppose $(H,\dot H)$ is defined as in \eqref{eq:defH} and \eqref{eq:defdotH}. In particular, $\dot H$ is driven by a standard Brownian motion $W$ with values in $\mathcal H_d$. We want to show that the processes $U$, $\Lambda$ and $A$ are well-defined as long as $H$ has distinct eigenvalues, that they take values in the spaces $U_d(\mathbb C)$, $\mathcal H_d^\Delta$ and $\mathbb S(\mathcal H_d)$ respectively, and that $(\Lambda,A)$ satisfies the equations stated in Lemma \ref{lem:MarkovLambdaA}.
For the first few steps, we can actually work with a fixed realisation of $(H,\dot H)$. If $H_0$ has multiple eigenvalues then there is nothing to prove, so assume it is not the case. Writing
\[ \dot u_t = \dot u(U_t^*H_tU_t,U_t^*\dot H_tU_t) \]
and since $\mathrm d U_t = U_t\dot u_t\mathrm d t$ by definition, we can use the usual Picard-Lindelöf theorem to see that $U$ is uniquely well-defined for a maximal time interval $[0,T)$. Note that $\dot u(\Lambda,A)$ is in $\mathfrak u_d$ whenever it is well-defined, even if $\Lambda$ is not diagonal, which means that $U$ is in $U_d(\mathbb C)$ until $T$. This directly implies that $A$ takes values in the sphere $\mathbb S(\mathcal H_d)$, since $\dot H$ does and $K\mapsto U^*KU$ is an isometry of $\mathcal H_d$ for every unitary $U$.
If $U_t^*H_tU_t$, up to time $T$, stays uniformly away from the closed set of matrices with at least two equal diagonal entries, then $\dot u_t$ is bounded for $t\in[0,T)$, which means that $U_t$ converges to a limit as $t\to T$. Since $U_T^*H_TU_T$ has distinct diagonal entries, we can then apply Picard-Lindelöf again at $T$ and get a solution defined on a larger interval $[0,T+\varepsilon)$, which is a contradiction. Therefore, $T$ must be at least as large as the stopping time when at least two diagonal coefficients of $U_t^*H_tU_t$ become equal, or more precisely $T\geq\sup_{\varepsilon>0}T_\varepsilon$ for $T_\varepsilon$ the first time when $U_t^*H_tU_t$ is at distance at most $\varepsilon$ from the closed set of matrices with at least two equal diagonal coefficients. Since $\dot u_t$ is not defined at this instant, we have in fact $T=\sup_{\varepsilon>0}T_\varepsilon$.
It implies that $\Lambda$ and $A$ are well-defined up to this time as well. Moreover, if we show that $\Lambda$ is diagonal up to $T$, then in fact the diagonal entries of $\Lambda_t=U_t^*H_tU_t$ are precisely the eigenvalues of $H_t$, and the collapse of the diagonal entries of the former corresponds to that of the eigenvalues of the latter, i.e. $T=\tau$.
Using the representation of $(H,\dot H)$ involving $W$, we see that for all $t<T$ we must have
\begin{align*}
\mathrm d U_t & = U_t\dot u(\Lambda_t,A_t)\mathrm d t, \\
\mathrm d\Lambda_t & = \big(\dot u_t^*\Lambda_t + \Lambda_t\dot u_t\big)\mathrm d t + A_t\mathrm d t, \\
\mathrm d A_t & = \big(\dot u_t^*A_t + A_t\dot u_t\big)\mathrm d t
+ U_t^*\Big(\mathrm d W_t - \dot H_t\Tr(\dot H_t^*\mathrm d W_t) - \frac{d^2-1}2\dot H_t\mathrm d t\Big)U_t \\
& = \big(\dot u_t^*A_t + A_t\dot u_t\big)\mathrm d t
+ U_t^*\mathrm d W_tU_t - A_t\Tr(A_t^*U_t^*\mathrm d W_tU_t) - \frac{d^2-1}2A_t\mathrm d t.
\end{align*}
Since $U$ is unitary, the integral
\[ B:t\mapsto\int_0^tU_s^*\mathrm d W_sU_s \]
defines a standard Brownian motion in $\mathcal H_d$, and $A$ satisfies the equation described in Lemma \ref{lem:MarkovLambdaA}. Moreover, if $\Lambda$ stays diagonal, then in fact
\[ \mathrm d\Lambda_t
= \pi^\Delta\big(\mathrm d\Lambda_t\big)
= \pi^\Delta\big(\dot u_t^*\Lambda_t + \Lambda_t\dot u_t\big)\mathrm d t
+ \pi^\Delta(A_t)\mathrm d t
= \pi^\Delta(A_t)\mathrm d t \]
according to equation \eqref{eq:productdotuLambda}, so $\Lambda$ satisfies the equation given in Lemma \ref{lem:MarkovLambdaA}. The last thing we need to prove is therefore that $\Lambda$ stays diagonal for all $t<T$.
This last fact is essentially a consequence of uniqueness for strong solutions of stochastic differential equations. Indeed, we can define $(\Lambda^B,A^B)$ as the solution of
\begin{align*}
\mathrm d\Lambda^B_t & = \big((\dot u^B_t)^*\Lambda^B_t + \Lambda^B_t\dot u^B_t\big)\mathrm d t + A^B_t\mathrm d t, \\
\mathrm d A^B_t & = \big((\dot u^B_t)^*A^B_t + A^B_t\dot u^B_t\big)\mathrm d t
+ \mathrm d B_t - A^B_t\Tr((A^B_t)^*\mathrm d B_t) - \frac{d^2-1}2A^B_t\mathrm d t
\end{align*}
for $\dot u^B = \dot u(\Lambda^B,A^B)$ and initial condition $(\Lambda^B,A^B)_0=(\Lambda,A)_0$, seen as a process with values in the open set of $\mathcal H_d^\Delta\times\mathcal H_d$ where the first component has distinct eigenvalues. It is defined on a (random) maximal interval $[0,T^B)$. The pairs $(\Lambda,A)$ and $(\Lambda^B,A^B)$ (or rather the inclusion of the latter in the full space $\mathcal H_d\times\mathcal H_d$) are solutions of the same stochastic differential equation for all times before $T$ and $T^B$, so they are equal and $\Lambda$ is actually diagonal over $[0,T\wedge T^B)$. However, on the event $\{T^B<T\}$, the limit $(\Lambda^B,A^B)_{T^B}$ is well-defined in the large space where $(\Lambda,A)$ takes values, namely it is $(\Lambda,A)_{T^B}$ with $\Lambda_{T^B}\in\mathcal H_d$ having distinct diagonal entries and $A_{T^B}\in\mathcal H_d$. Since $\mathcal H_d^\Delta$ is closed, $(\Lambda^B,A^B)$ then in fact admits a limit in the small space as $t$ approaches $T^B$. But according to the classical explosion criterion for equations with smooth coefficients, this can only happen on an event of measure zero, so the event $\{T^B<T\}$ has measure zero and $T\wedge T^B=T$, hence $\Lambda$ is diagonal for all times $t<T$. Though we don't need it, this also means that $(\Lambda,A)$ and $(\Lambda^B,A^B)$ are solutions of the same equation in the small space, and by maximality $T\leq T^B$, so we have in fact $T=T^B$ and $(\Lambda,A)=(\Lambda^B,A^B)$ always.
As discussed above, this concludes the proof of Lemmas \ref{lem:defULambda} and \ref{lem:MarkovLambdaA}.
\subsection{Projections of spherical Brownian motions}
\label{ssec:sphericalBM}
Let $X$ be a standard Brownian motion on the sphere $\mathbb S(\mathbb R^n)$, and $X^{[k]}$ its projection $(X^1,\ldots,X^k)$. The case we have in mind is $n=d^2$ and $k=d$. One way to define such an $X$ is to fix a standard Brownian motion $B$ with values in $\mathbb R^n$ and set $X$ the solution of
\[ \mathrm d X_t
= {}\circ\mathrm d B_t - X_tX_t^*\circ\mathrm d B_t
= \mathrm d B_t - X_tX_t^*\mathrm d B_t - \frac{n-1}2X_t\mathrm d t. \]
We want to show that $X^{[k]}$ is Markovian. Since the projection $x\mapsto x^{[k]}$ is smooth, by Itô's formula we know that the process is a solution of a stochastic differential equation of the form
\[ \mathrm d X^{[k]}_t = b(X_t)\mathrm d t + \sigma(X_t)\mathrm d B_t \]
with $b$ and $\sigma$ smooth. Such coefficients are uniquely determined, as they can be deduced from the generator $\frac12\Delta_\mathbb{S}$ of $X$ applied to functions of the form $x\mapsto f(x^{[k]})$. Moreover, for any fixed rotation $R$, since the law of $X$ is invariant under $R$, we know that
\[ \mathrm d (RX)^{[k]}_t = (b\circ R)(X_t)\mathrm d t + (\sigma\circ R)(X_t)\mathrm d B_t. \]
For any fixed $x\in\mathbb S(\mathbb R^n)$, let $R$ be a rotation leaving the first $k$ coordinates invariant and such that $Rx = (x^1,\ldots,x^k,\sqrt{1-r^2},0,\ldots,0)$ with $r=\big|x^{[k]}\big|$. Then $(RX)^{[k]}=X^{[k]}$ for all times, so
\[ \mathrm d X^{[k]}_t
= b(X_t)\mathrm d t + \sigma(X_t)\mathrm d B_t
= (b\circ R)(X_t)\mathrm d t + (\sigma\circ R)(X_t)\mathrm d B_t, \]
and by uniqueness
\[ b(x) = b(Rx) = b(x^1,\ldots,x^k,\sqrt{1-r^2},0,\ldots,0) = b^{[k]}\big(x^{[k]}\big) \]
for a fixed function $b^{[k]}$ that is smooth away from $\{x^{[k]}=0\}$. We can use the same reasoning for $\sigma$, so that $X^{[k]}$ is Markovian, at least away from $\{x^{[k]}=0\}$.
Note that if $k\geq2$, which holds in our case, then almost surely, $X^{[k]}$ avoids zero except possibly for $t=0$. One way to see it is to notice that the traces of $X$ and $B/|B|$ have the same distribution, so that the probability that the trace of $X$ contains a point $x$ with $x^{[k]}=0$ is equal to that of $B$ containing a point $y$ with $y^{[k]}=0$. But $B^{[k]}$ is a standard Brownian motion in dimension at least 2, so it avoids zero almost surely.
Similar symmetry considerations can be used to show that $\theta:=X^{[k]}/|X^{[k]}|$ has to be a Brownian motion on the sphere, up to a time change depending only on the $r$ described above. In fact, using the decomposition of $\mathrm d B_t$ along the following subspaces
\begin{align*}
E_t & := \{ x=(x^1,\ldots,x^k,0,\ldots,0)\text{ orthogonal to }X_t \} \\
F_t & := \{ x=(0,\ldots,0,x^{k+1},\ldots,x^n)\text{ orthogonal to }X_t \} \\
G_t & := (E_t\oplus F_t)^\perp = \mathbb R(X^1,\ldots,X^k,0,\ldots,0) \oplus \mathbb R(0,\ldots,0,X^{k+1},\ldots,X^n),
\end{align*}
valid when $0\neq X^{[k]}\neq X$, one can show that
\begin{align*}
\mathrm d(r^2)_t &= 2\sqrt{(1-r^2_t)r^2_t}\,\mathrm d B^r_t + (k-nr^2_t)\mathrm d t \\
\mathrm d\theta_t &= \frac1{\sqrt{r_t^2}}(\mathrm d B^\theta_t - \theta_t\theta_t^*\mathrm d B^\theta_t) - \frac1{r_t^2}\frac{k-1}2\theta_t\mathrm d t \\
\mathrm d\phi_t &= \frac1{\sqrt{1-r_t^2}}(\mathrm d B^\phi_t - \phi_t\phi_t^*\mathrm d B^\phi_t) - \frac1{1-r_t^2}\frac{n-k-1}2\phi_t\mathrm d t
\end{align*}
for $X=(r\theta,\sqrt{1-r^2}\phi)$ and $(B^r,B^\theta,B^\phi)$ a standard Brownian motion on $\mathbb R^{n+1}$. A (different) complete proof, as well as pathwise uniqueness, is described by Mijatović, Mramor and Uribe in \cite{MMU}.
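As a sanity check of the drift $(k-nr_t^2)\,\mathrm d t$ appearing in the first of these equations, one may run a quick Monte Carlo experiment. The following Python sketch (one Euler step of the Itô equation for $X$ followed by a projection back onto the sphere, with illustrative values of $n$, $k$ and $r$, so the agreement is only approximate) does so.
\begin{verbatim}
import numpy as np

def radial_drift(n=9, k=3, r0=0.4, dt=1e-3, samples=400_000, seed=0):
    rng = np.random.default_rng(seed)
    x0 = np.zeros(n)
    x0[0] = r0                                   # |x0^{[k]}| = r0
    x0[k] = np.sqrt(1.0 - r0**2)                 # remaining mass outside the first k coordinates
    dB = rng.normal(0.0, np.sqrt(dt), (samples, n))
    proj = dB - np.outer(dB @ x0, x0)            # dB - X <X, dB>
    Y = x0 + proj - 0.5 * (n - 1) * x0 * dt      # Euler step of the Ito equation for X
    Y /= np.linalg.norm(Y, axis=1, keepdims=True)   # back onto the sphere
    r2 = np.sum(Y[:, :k]**2, axis=1)
    return (r2.mean() - r0**2) / dt              # empirical drift of r^2 at r = r0

print(radial_drift(), 3 - 9 * 0.4**2)            # both close to k - n r0^2 = 1.56
\end{verbatim}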
\subsection{Proof of Lemma \ref{lem:MarkovPhi}}
\label{ssec:proofL3}
We want to show that $(\Lambda,\dot\Lambda)$ is Markovian if and only if the vector field $\Phi$ depends on $A$ only through its diagonal $\pi^\Delta(A)$. The indirect implication is clear: if $\Phi(\Lambda,A) = \overline\Phi(\Lambda,\pi^\Delta(A))$, then
\begin{align*}
\mathrm d\Lambda_t &= \pi^\Delta(A_t)\mathrm d t \\
\mathrm d \pi^\Delta(A)_t
&= \overline\Phi\big(\Lambda_t,\pi^\Delta(A)_t\big)\mathrm d t
+ b^\Delta\big(\pi^\Delta(A_t)\big)\mathrm d t + \sigma^\Delta\big(\pi^\Delta(A_t)\big)\mathrm d B_t,
\end{align*}
where
\[ \mathrm d X^\Delta_t = b^\Delta(X^\Delta_t)\mathrm d t + \sigma^\Delta(X^\Delta_t)\mathrm d B_t \]
is the equation describing the projection $X^\Delta = \pi^\Delta(X)$ on $\mathcal H_d^\Delta$ of a spherical Brownian motion $X$ in $\mathbb S(\mathcal H_d)$, as discussed in the previous section. Then $(\Lambda,\dot\Lambda)=(\Lambda,\pi^\Delta(A))$ is the solution of a self-contained SDE, so it is Markovian.
Conversely, suppose that $(\Lambda,\dot\Lambda)$ is Markovian. Let $L^\Delta$ be its generator, $L$ that of $(\Lambda,A)$. For $f:\mathcal H_d^\Delta\times\mathcal H_d^\Delta\to\mathbb R$ regular enough, we should have
\[ L\big(f\circ(\operatorname{id},\pi^\Delta)\big)(\Lambda_0,A_0)
= \frac{\mathrm d}{\mathrm d t}_{|t=0}\mathbb E_{\Lambda_0,A_0}[f(\Lambda_t,\pi^\Delta(A_t))]
= (L^\Delta f)\big(\Lambda_0,\pi^\Delta(A_0)\big). \]
For instance, one can see that this holds for $f$ smooth with compact support.
Let $X^\Delta=\pi^\Delta(X)$, as above, be the projection of a spherical Brownian motion on $\mathbb S(\mathcal H_d)$, and set $I$ its integral (i.e. $\mathrm d I_t=X^\Delta_t\mathrm d t$). According to the previous section, $(I,X^\Delta)$ is Markovian. In particular, $(I,X)$ and $(I,X^\Delta)$ both admit generators $\widetilde L$ and $\widetilde L^\Delta$, and they are linked by the same relation
\[ \widetilde L\big(f\circ(\operatorname{id},\pi^\Delta)\big)(I_0,X_0)
= (\widetilde L^\Delta f)\big(I_0,\pi^\Delta(X_0)\big), \]
for instance when $f:\mathcal H_d^\Delta\times\mathcal H_d^\Delta\to\mathbb R$ is smooth with compact support.
As mentioned above, the only difference between $L$ and $\widetilde L$ is the additional vector field $\Phi$ acting on the second component:
\[ (L-\widetilde L)g(\Lambda_0,A_0)
= D_{\!A}\,g(\Lambda_0,A_0)(\Phi(\Lambda_0,A_0))=:(\Phi\cdot\nabla_{\!A}\,g)(\Lambda_0,A_0), \]
where $D_{\!A}\,g$ is the differential of $g:\mathcal H_d^\Delta\times\mathcal H_d\to\mathbb R$ with respect to its second variable. In particular,
\[ \big(\Phi\cdot\nabla_A(f\circ(\operatorname{id},\pi^\Delta))\big)(\Lambda_0,A_0)
= (L^\Delta f-\widetilde L^\Delta f)\big(\Lambda_0,\pi^\Delta(A_0)\big). \]
The right hand side depends on $A_0$ only through $\pi^\Delta(A_0)$, so the left hand side must be a function of $(\Lambda_0,\pi^\Delta(A_0))$. Since we can recover a given vector field $\Psi$ with values in $\mathcal H_d^\Delta$ from the action of the operator $\Psi\cdot\nabla_{\!A}$ on functions of the form $f\circ(\operatorname{id},\pi^\Delta)$ (choose for instance a collection of $f$ smooth with compact support such that $f(\Lambda,\dot\Lambda) = \dot\Lambda_{ii}$ on a small open set), $\Phi$ actually factors through $(\Lambda,A)\mapsto(\Lambda,\pi^\Delta(A))$ as expected.
\subsection{The case \texorpdfstring{$d=2$}{d=2}}
\label{ssec:dim2}
We have seen at the end of Section \ref{ssec:criterion} that in dimension $d=2$, the process $(\Lambda,\dot\Lambda)$ is Markovian, using the fact that
\[ \Phi(\Lambda,A)_{11}
= - \Phi(\Lambda,A)_{22}
= \frac{|A_{12}|^2+|A_{21}|^2}{\Lambda_{11}-\Lambda_{22}}
= \frac{1-|A_{11}|^2-|A_{22}|^2}{\Lambda_{11}-\Lambda_{22}}. \]
In fact, we can use this expression and the equation satisfied by $(\Lambda,A)$, given in Lemma \ref{lem:MarkovLambdaA}, to get the equation for the evolution. Write $\lambda$ and $\mu$ for the eigenvalues $\Lambda_{11}$ and $\Lambda_{22}$ of $H$. For $B$ a standard Brownian motion on $\mathcal H_2$, define the martingales
\[ M^\lambda : t \mapsto (B_{11})_t - \int_0^t\dot\lambda_s\Tr(A_s^*\mathrm d B_s)
\quad\text{ and }\quad
M^\mu : t \mapsto (B_{22})_t - \int_0^t\dot\mu_s\Tr(A_s^*\mathrm d B_s). \]
Then
\begin{align*}
\mathrm d\lambda_t &= \dot\lambda_t\mathrm d t, &
\mathrm d\dot\lambda_t &= +\frac{1-\dot\lambda_t^2-\dot\mu_t^2}{\lambda_t-\mu_t}\mathrm d t
+ \mathrm d M^\lambda_t
- \frac{d^2-1}2\dot\lambda_t\mathrm d t, \\
\mathrm d\mu_t &= \dot\mu_t\mathrm d t, &
\mathrm d\dot\mu_t &= -\frac{1-\dot\lambda_t^2-\dot\mu_t^2}{\lambda_t-\mu_t}\mathrm d t
+ \mathrm d M^\mu_t
- \frac{d^2-1}2\dot\mu_t\mathrm d t.
\end{align*}
Writing $A^\Re_{12}$ and $A^\Im_{12}$ for the real and imaginary parts of $A_{12}$, and similarly for the real and imaginary parts of $B_{12}$,
\begin{align*}
\mathrm d M^\lambda_t
& = (1-\dot\lambda^2_t)\mathrm d (B_{11})_t
- \dot\lambda_t(\overline A_{12})_t\mathrm d(B_{12})_t
- \dot\lambda_t(\overline A_{21})_t\mathrm d(B_{21})_t
- \dot\lambda_t\dot\mu_t\mathrm d(B_{22})_t \\
& = (1-\dot\lambda^2_t)\mathrm d (B_{11})_t
- 2\dot\lambda_t(A_{12}^\Re)_t\mathrm d(B_{12}^\Re)_t
- 2\dot\lambda_t(A_{12}^\Im)_t\mathrm d(B_{12}^\Im)_t
- \dot\lambda_t\dot\mu_t\mathrm d(B_{22})_t, \\
\mathrm d M^\mu_t
& = -\dot\lambda_t\dot\mu_t\mathrm d (B_{11})_t
- 2\dot\mu_t(A_{12}^\Re)_t\mathrm d(B_{12}^\Re)_t
- 2\dot\mu_t(A_{12}^\Im)_t\mathrm d(B_{12}^\Im)_t
+ (1-\dot\mu_t^2)\mathrm d(B_{22})_t.
\end{align*}
Since $\sum_{ij}|A_{ij}|^2=1$, we deduce $2|A_{12}^\Re|^2+2|A_{12}^\Im|^2 = 1-\dot\lambda^2-\dot\mu^2$, and we find the bracket of $M^\lambda$:
\[ \mathrm d\langle M^\lambda,M^\lambda\rangle_t
= (1-\dot\lambda_t^2)^2\mathrm d t
+ \dot\lambda_t^2(1-\dot\lambda_t^2 - \dot\mu_t^2)\mathrm d t
+ \dot\lambda_t^2\dot\mu_t^2\mathrm d t
= (1 - \dot\lambda_t^2)\mathrm d t. \]
Similarly the bracket of $M^\mu$ grows as $(1-\dot\mu_t^2)\mathrm d t$. The covariance term is given by
\[ \mathrm d\langle M^\lambda,M^\mu\rangle_t
= - \dot\lambda_t\dot\mu_t(1-\dot\lambda_t^2)\mathrm d t
+ \dot\lambda_t\dot\mu_t(1-\dot\lambda_t^2-\dot\mu_t^2)\mathrm d t
- \dot\lambda_t\dot\mu_t(1-\dot\mu_t^2)\mathrm d t
= - \dot\lambda_t\dot\mu_t\mathrm d t, \]
as stated in Lemma \ref{lem:casesond}.
Note that it corresponds to the diffusion term for the projection of a spherical Brownian motion, as described in \cite{MMU}. Indeed, as explained in the proof outline, the only difference between $A$ and a spherical Brownian motion is a drift term. Alternatively, one can also study the trace and determinant of $H$, and deduce the process satisfied by the eigenvalues, since they are the roots of the polynomial $X^2-\Tr(H)X+\operatorname{det}(H)$.
\subsection{The case \texorpdfstring{$d\geq3$}{d≥3}}
\label{ssec:dim3+}
We show that in this case, the vector field $\Phi(\Lambda,A)$ depends on the off-diagonal elements of $A$. As stated in Lemma \ref{lem:MarkovPhi}, this will show that $(\Lambda,\dot\Lambda)$ cannot be Markovian. We will use the expression given in equation \eqref{eq:concisePhi}.
In dimension 3, it is a direct computation to see that for any $\Lambda$ with distinct eigenvalues, the following two choices for $A$ give different $\Phi(\Lambda,A)_{11}$, although they are equal on the diagonal:
\begin{align*}
A &= \begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix}, &
\widetilde A &= \begin{pmatrix} 0 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix}.
\end{align*}
In fact, one gets $\Phi(\Lambda,A)_{11} = 2/(\Lambda_{11}-\Lambda_{22})$, whereas $\Phi(\Lambda,\widetilde A)_{11} = 2/(\Lambda_{11}-\Lambda_{33})$. In higher dimension, choose $A$ and $\widetilde A$ to be zero except on the top left $3\times3$ block, which is given by the above expressions.
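This is also immediate to check numerically from \eqref{eq:concisePhi}; the following Python lines (purely illustrative, and ignoring the normalisation of $A$, which only rescales $\Phi$) compute the diagonal of $\Phi$ for both choices and an arbitrary $\Lambda$ with distinct eigenvalues.
\begin{verbatim}
import numpy as np

def Phi_diag(lam, A):
    # diagonal of Phi(Lambda, A) as in the concise expression above;
    # lam is the vector of (distinct) diagonal entries of Lambda.
    d = len(lam)
    return np.array([2 * sum(abs(A[i, j])**2 / (lam[i] - lam[j])
                             for j in range(d) if j != i) for i in range(d)])

lam = np.array([0.0, 1.0, 3.0])
A  = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 0]], dtype=complex)
At = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]], dtype=complex)
print(Phi_diag(lam, A))    # first entry 2/(lam[0]-lam[1]) = -2
print(Phi_diag(lam, At))   # first entry 2/(lam[0]-lam[2]) = -2/3: same diagonal of A, different Phi
\end{verbatim}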
One might be interested in weaker versions of Markovian behaviour. For instance, one may try to restrict to a well-chosen family of initial conditions and hope that $(\Lambda,\dot\Lambda)$ be Markovian up to some exit time. Let $N$ be some submanifold of $\mathcal H_d^\Delta\times\mathbb S(\mathcal H_d)$, and consider the process $(\Lambda,A)^\dagger$ as $(\Lambda,A)$ stopped when it exits $N$, with values in the one-point compactification $N\cup\{\dagger\}$ of $N$. By convention, $\dagger$ is absorbing. We will show that this localisation of $(\Lambda,A)$ does not induce a Markovian $(\Lambda,\dot\Lambda)$, except in the uninteresting case $(\Lambda,A)^\dagger_t=\dagger$ for all $t>0$, where the motion was stopped immediately.
If $N$ is not of maximal dimension, then by hypoellipticity of $(\Lambda,A)$ (see the governing equation in Lemma \ref{lem:MarkovLambdaA}), the unstopped $(\Lambda,A)_t$ admits a smooth density for arbitrarily small $t>0$, and the probability that $(\Lambda,A)_t$ belongs to $N$ is zero ($N$ has zero Lebesgue measure as a submanifold of positive codimension). In particular $(\Lambda,A)^\dagger_t=\dagger$ for all $t>0$.
We consider now the case where $N$ is of maximal dimension; in other words, $N$ is an open set. Unwinding the proof of Lemma \ref{lem:MarkovPhi} in Section \ref{ssec:proofL3}, we see that the induced $(\Lambda,\dot\Lambda)$ will be Markovian if and only if the vector field $\Phi(\Lambda,A)$ factors through $(\Lambda,A)\mapsto(\Lambda,\pi^\Delta(A))$ when restricted to $N$.
Since $N$ is open, we can choose $(\Lambda,A)\in N$ such that $\Lambda$ has distinct eigenvalues and $A$ is not diagonal. There exists $i\neq j$ such that $A_{ij}\neq0$; moreover, because $d\geq3$, there exists $k\notin\{i,j\}$. Choose a unit complex number $u$ such that $A_{ik}=|A_{ik}|u$. Set $A^\varepsilon\in\mathcal H_d$ equal to $A$ everywhere but at $(i,j)$, $(i,k)$ and their symmetric counterparts $(j,i)$ and $(k,i)$, which are instead defined by
\begin{align*}
A^\varepsilon_{ij} & = A_{ij}\cos(\varepsilon), &
A^\varepsilon_{ik} & = A_{ik} + \mathsf i|A_{ij}|\sin(\varepsilon)u.
\end{align*}
The important features of this perturbation are that $|A^\varepsilon_{ij}|$ is not constant around $\varepsilon=0$, and $A^\varepsilon$ stays on the sphere $\mathbb S(\mathcal H_d)$; indeed,
\[ |A^\varepsilon_{ij}|^2 + |A^\varepsilon_{ik}|^2
= |A_{ij}|^2\cos(\varepsilon)^2 + |A_{ik}|^2 + |A_{ij}|^2\sin(\varepsilon)^2
= |A_{ij}|^2 + |A_{ik}|^2. \]
In particular, if $\Phi$ factored through $(\Lambda,A)\mapsto(\Lambda,\pi^\Delta(A))$, the entries of $\Phi(\Lambda,A^\varepsilon)$ along the diagonal would not depend on $\varepsilon$. However, writing $\lambda^\ell:=\Lambda_{\ell\ell}$ for the $\ell$th eigenvalue of $\Lambda$,
\begin{align*}
\frac{\mathrm d^2}{\mathrm d\varepsilon^2}\big(\Phi(\Lambda,A^\varepsilon)_{ii}\big)
& = \frac2{\lambda^i-\lambda^j}\cdot\frac{\mathrm d^2}{\mathrm d\varepsilon^2}|A^\varepsilon_{ij}|^2
+ \frac2{\lambda^i-\lambda^k}\cdot\frac{\mathrm d^2}{\mathrm d\varepsilon^2}|A^\varepsilon_{ik}|^2 \\
& = \left(\frac2{\lambda^i-\lambda^j} - \frac2{\lambda^i-\lambda^k}\right)
\cdot\frac{\mathrm d^2}{\mathrm d\varepsilon^2}|A^\varepsilon_{ij}|^2 \\
& = \frac{\lambda^k-\lambda^j}{(\lambda^i-\lambda^j)(\lambda^i-\lambda^k)}\cdot4|A_{ij}|^2\cos(2\varepsilon).
\end{align*}
The first factor is well-defined and non-zero since $\Lambda$ has distinct eigenvalues, and the second is non-zero at $\varepsilon=0$. This shows that $\Phi(\Lambda,A)_{ii}$ depends on the off-diagonal entries of $A$ over $N$, hence $(\Lambda,\dot\Lambda)$ cannot be Markovian according to Lemma \ref{lem:MarkovPhi}.
The issue of ghost fields and their stability has been a subject of interest over the past years \cite{Mukhanov:1990me,ArkaniHamed:2003uy,Krotov:2004if,Adams:2006sv,Muk,Barrow:2009sj,Dvali:2012zc}. The common lore is that fields with negative kinetic energy could undergo an unbounded vacuum decay into photons and gravitons, which places tight bounds on their existence \cite{Trodden,Cline}. However, ghosts have been revisited in the context of infrared modifications to gravity to address a range of phenomenological issues, such as dark energy, cosmic inflation, probes of Lorentz violation, and renormalizable deformations of perturbative quantum gravity.
Ghost fields seem to be inevitable in bouncing and cyclic models such as the Ekpyrotic, Anamorphic and Matter-Bounce scenarios \cite{Steinhardt:2001st,Steinhardt:2001vw,Novello:2008ra,Cai:2012va,Lehners:2008vx,Brandenberger:2012zb}. This is because the Friedmann equations require the Hubble parameter to go to zero at the bounce, and a negative energy contribution to the Hamiltonian is needed to accomplish this. Recently, a cyclic cosmology model was proposed to deal with the fine-tuning issues of the gauge couplings in the standard model. Instead of using the inflationary multiverse idea to populate pocket universes with different couplings, the authors demonstrate that in a cyclic universe the gauge couplings can undergo a quasi-random walk during the bounce and stabilize during the expanding phase. Therefore, we happen to live in an expansion epoch where the couplings have dynamically evolved to be compatible with the measured values in our standard model.
This ``cyclic-multiverse'' model \cite{Alexander:2015pda} implements a dilaton-like field whose value naturally sets the coupling constants of the gauge theories. It is remarkable that in this model the dilaton field plays a dual role, as the agent that leads to the bounce and as the coupling constant. Despite its promise, issues of stability arise since the dilaton field looks like a ghost. However, because of the non-linear couplings and time dependence, it is important to carry out an explicit stability analysis for this theory in particular.
In this work we will derive the gauge invariant cosmological perturbations of the ghost-dilaton-gauge system and perform a classical, linear stability analysis. We then discuss the classical nonlinear vacuum stability issues particular to this model, extensions to ghost condensate models and conclude with a future research directions regarding UV completion and non-perturbative physics.
\section{Background Equations}
We are investigating the stability of a ghost scalar field with a periodic potential and a dilatonic coupling to a $U(1)$ gauge field. The action for our model is
\begin{equation}
\label{eq:action}
S =\int\dd{^4x}\sqrt{-g}\left[\frac{R}{16\pi G}+\frac{1}{2}\partial_\mu \psi\partial^\mu\psi-V(\psi) -\frac{1}{4}g_0 e^{-2\psi/M_p} F_{\mu\nu}F^{\mu\nu}
\right],
\end{equation}
where with our sign convention $(-,+,+,+)$ the kinetic term for the field $\psi$ has the wrong sign. We use a negative periodic potential
\begin{equation}
V(\psi) = -\Lambda^4(1+\cos(\psi/f)).
\end{equation}
First, we will investigate the classical, linear stability. At the background level, we assume an FRW metric in conformal coordinates,
\begin{equation}
ds^2 = a^2(\eta)\left(-d\eta^2+ \gamma_{ij}dx^idx^j\right)
\end{equation}
where $\gamma_{ij} = \delta_{ij}\left[1+\frac{1}{4}\mathcal{K}\left(x^2+y^2+z^2\right)\right]^{-2}$ is the spatial metric and $\mathcal{K}$ the spatial curvature. Let us consider the equation of motion for the homogeneous background part of the ghost field $\psi=\psi(\eta)$,
\begin{equation}
\label{eq:psiEOM}
\psi'' + 2\mathcal{H}\psi' - a^2 \frac{\partial V}{\partial\psi} +a^2 \frac{g_0}{2M_p}e^{-2\psi/M_p}F_{\mu\nu}F^{\mu\nu}=0
\end{equation}
where a prime denotes a derivative with respect to conformal time.
\begin{figure}
\includegraphics[scale=1]{psi_f_0-01}
\caption{\label{fig:psisol}Example of the behaviour of the background solution for the ghost field $\psi$ as a function of conformal time. The inset shows the behaviour between two bounces. The parameters used in this solution are $f=10^{-2}M_p$ and $\Lambda^4=\rho_{F0}$.}
\end{figure}
To solve for the evolution of the homogeneous background quantities, we assume that the gauge field is in the form of radiation. In that case $F_{\mu\nu}F^{\mu\nu}=0$, and we can ignore the last term in equation \eqref{eq:psiEOM}. The Friedmann equations for the evolution of the spacetime are
\begin{gather}
\label{eq:Fried1}
\mathcal{H}^2 = \frac{1}{3M_p^2}\left[-\frac{\psi'^2}{2}-a^2\Lambda^4\left(1+\cos(\psi/f)\right)+\frac{\rho_{F0}}{a^2}\right]-\mathcal{K}\\
\label{eq:Fried2}
\mathcal{H}' = \frac{1}{3M_p^2}\left[\psi'^2-a^2\Lambda^4\left(1+\cos(\psi/f)\right)-\frac{\rho_{F0}}{a^2}\right]
\end{gather}
where $\mathcal{H}\equiv a'/a$ and $\rho_{F0}$ is the radiation energy density at $a=1$. In order to find cyclic solutions, we take the curvature to be positive. In figure \ref{fig:psisol} we show an example of the behaviour of the field $\psi(\eta)$. During this evolution, the scale factor oscillates with the time scale at its minimum very short compared to rest of the evolution. We refer to this period, where the scale factor goes from decreasing to increasing, as a bounce. During each bounce the field $\psi$ jumps quickly from one value to another. Between bounces, $\psi$ exhibits linearly stable oscillations around the maximum of its potential, since the kinetic term is of the wrong sign. These oscillations can be seen in the inset of figure \ref{fig:psisol}. The changing value of $\psi$ from one cycle to another changes the effective coupling constant of the gauge field via the dilatonic coupling in equation \eqref{eq:action}.
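The background system is straightforward to integrate with standard tools. The following Python sketch is purely illustrative: we work in units $M_p=\rho_{F0}=1$, take $\Lambda^4=1$ and $f=10^{-2}$ as in figure \ref{fig:psisol}, assume a spatial curvature $\mathcal{K}=1$ and a starting value of $\psi$ at a maximum of the potential (neither of which is specified above), and fix the initial scale factor by imposing the constraint \eqref{eq:Fried1} at the bounce.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

Lam4, f, K = 1.0, 1e-2, 1.0            # Lambda^4, f, curvature (illustrative units)
psi0, dpsi0 = np.pi * f, 1.0e3         # psi at a maximum of V, psi'(0) as in the captions

def V(psi):
    return -Lam4 * (1.0 + np.cos(psi / f))

def rhs(eta, y):                       # y = (a, H, psi, psi')
    a, H, psi, dpsi = y
    dH = (dpsi**2 + a**2 * V(psi) - 1.0 / a**2) / 3.0             # eq. (eq:Fried2)
    ddpsi = -2.0 * H * dpsi + a**2 * Lam4 * np.sin(psi / f) / f   # eq. (eq:psiEOM2)
    return [a * H, dH, dpsi, ddpsi]

# initial scale factor from the Friedmann constraint (eq:Fried1) with H = 0 at the bounce
a0 = brentq(lambda a: -0.5 * dpsi0**2 + a**2 * V(psi0) + 1.0 / a**2 - 3.0 * K,
            1e-6, 1.0)
bg = solve_ivp(rhs, (0.0, 3.0), [a0, 0.0, psi0, dpsi0], rtol=1e-8, atol=1e-10,
               dense_output=True)
# bg.y[2] shows the oscillations of psi between bounces (cf. fig:psisol);
# the constraint (eq:Fried1) can be monitored along the solution as a check.
\end{verbatim}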
\begin{figure}
\begin{subfigure}{0.47\textwidth}
\includegraphics[scale=0.93]{acyclef0-01}
\caption{Phase portrait showing limit cycle for $f =10^{-2}M_p$}
\label{fig:phaseportE-2}
\end{subfigure}
\begin{subfigure}{0.47\textwidth}
\includegraphics[scale=0.93]{acyclef0-001}
\caption{Phase portrait showing limit cycle for $f =10^{-3}M_p$}
\label{fig:phaseportE-3}
\end{subfigure}
\begin{subfigure}{0.47\textwidth}
\includegraphics[scale=0.93]{ghostcyclef0-01}
\caption{Phase portrait for ghost field, $f=10^{-2}M_p$.}
\label{fig:phaseportE-4}
\end{subfigure}
\begin{subfigure}{0.47\textwidth}
\includegraphics[scale=0.93]{ghostcyclef0-001}
\caption{Phase portrait for ghost field, $f=10^{-3}M_p$.}
\label{fig:phaseportE-5}
\end{subfigure}
\end{figure}
We can study the stability of the background solutions by analyzing phase portraits of the scale factor. These are plotted in figures \ref{fig:phaseportE-2} and \ref{fig:phaseportE-3} for $f=10^{-2}M_p$ and $f=10^{-3}M_p$ respectively. We see that the scale factor solution is almost periodic. The precise evolution differs from cycle to cycle, but the overall solution remains confined to a band, indicating the stability of the solutions. We can also see that for the smaller value of $f$, the band of solutions is much tighter, showing that the solutions become closer to periodic for smaller values of $f$.
The phase portraits for the ghost field $\psi$, figures \ref{fig:phaseportE-4} and \ref{fig:phaseportE-5}, show the overall behaviour of the ghost field. During the expansion and contraction, the evolution of the field is confined to the intersection near the $\psi'=0$ axis. It is only during the bounce phases where the large jumps occur which take the ghost field to different maxima of the potential. As with the scale factor, the evolution of the ghost field becomes more tightly confined to cycles as $f$ becomes smaller.
To investigate linear instabilities caused by inhomogeneous perturbations, we perturb both the ghost field and the metric. For now, we ignore the gauge field. The scalar perturbation of the background Friedmann-Robertson-Walker metric in the Newtonian gauge gives
\begin{equation}
ds^2 = a^2\left(\eta\right)\left(-\left(1+2\phi\right)d\eta^2 + \left(1-2\theta\right)\gamma_{ij}dx^idx^j\right)
\end{equation}
By perturbing the ghost field, $\psi \mapsto \psi(\eta) + \chi(\eta,\mathbf{x})$, we obtain the Einstein equation $G^\mu_{\; \nu} = M_p^{-2} T^\mu_{\; \nu}$ to first order in perturbations, $\phi$, $\theta$ and $\chi$. For a scalar field source, there is no anisotropic stress, and we have $\phi=\theta$. The field equations for the metric perturbation $\phi$ are then
\begin{gather}
\label{eq:PertEE00}
\nabla^2 \phi - 3\mathcal{H}\phi' - 3\phi \left(\mathcal{H}^2 - \mathcal{K}\right) = \frac{1}{2M_p^2} \left(\phi \left(\psi'\right)^2 -\psi'\chi' + a^2 \frac{\partial V}{\partial \psi}\chi\right) \\
\label{eq:PertEE0i}
\mathcal{H}\phi + \phi' = -\frac{1}{2M_p^2}\psi'\chi \\
\label{eq:PertEEij}
\phi'' + 3\mathcal{H}\phi'+\left(2\mathcal{H}'+\mathcal{H}^2 - \mathcal{K}\right)\phi = \frac{1}{2M_p^2}\left(\phi\left(\psi'\right)^2- \psi'\chi' - a^2\frac{\partial V}{\partial \psi}\chi\right).
\end{gather}
These are the same as the standard equations for scalar perturbations sourced by a scalar field except that those terms derived from the kinetic term of the ghost field have the opposite sign \cite{Mukhanov:1990me}. With the second order Lagrangian, the equation of motion for the perturbation of the ghost field can be found. This equation of motion, along with a combination of equations \eqref{eq:PertEE00} and \eqref{eq:PertEEij}, give wave equations for the two perturbative fields
\begin{gather}
\chi'' - \nabla^2\chi + 2\mathcal{H}\chi' - a^2 \frac{\partial^2 V}{\partial \psi^2}\chi = 2\phi\psi'' + 4\psi'\left(\mathcal{H}\phi+\phi'\right) \\
\phi'' - \nabla^2\phi + 6\mathcal{H}\phi' + \left(2\mathcal{H}' + 4\mathcal{H}^2 - 4\mathcal{K}\right)\phi = - \frac{a^2}{M_p^2} \frac{\partial V}{\partial \psi}\chi
\end{gather}
We can simplify the coupling terms in the equations by using the background equation of motion for $\psi$ and equation \eqref{eq:PertEE0i}. We again assume that the gauge field just contributes a radiation density $\rho_F$ to the background evolution. It will then have no direct effect on the background equation for $\psi$ since we will have $F_{\mu\nu}F^{\mu\nu}=0$. The equation of motion for $\psi$ is then
\begin{equation}
\label{eq:psiEOM2}
\psi'' + 2\mathcal{H}\psi' -a^2\frac{\Lambda^4}{f}\sin\left(\frac{\psi}{f}\right) = 0.
\end{equation}
In Fourier space, the equations for the perturbations become
\begin{gather}
\label{eq:chi_k}
\chi_k'' + 2\mathcal{H}\chi_k' + \left(k^2+\frac{2}{M_p^2}\psi'^2-a^2\frac{\Lambda^4}{f^2}\cos(\psi/f)\right)\chi_k = 2\psi''\phi_k\\
\label{eq:phi_k}
\phi_k'' + 2\mathcal{H}\phi_k' + \left(k^2+2\mathcal{H}'-4\mathcal{K}\right)\phi_k = -\frac{1}{M_p^2}\psi''\chi_k.
\end{gather}
\begin{figure}
\includegraphics[scale=1]{Hubble}
\caption{\label{fig:hubble}Conformal Hubble rate, $\mathcal{H}=a'/a$, as a function of conformal time. $\eta$, for single cycle from one bounce to another. The initial ghost field velocity at the bounce is taken to be $\psi'(0)=10^3\rho_{F0}^{1/2}$, while $\Lambda^4=\rho_{F0}$ and $f=10^{-2}M_p$. In the first half of the evolution, $\mathcal{H}$ is positive and decreases as the universe expands. It reaches zero as the universe reaches its maximum extent and turns around. In the second half, $\mathcal{H}$ is negative and decreases as the universe contracts.}
\end{figure}
We can make some approximations to specialize these equations to the period of time near the bounce. Figure \ref{fig:hubble} shows an example of the evolution of the Hubble rate from one bounce to the next. The bounce occurs when the negative kinetic energy of the ghost field, $\psi$, becomes large enough to cancel the radiation energy density such that $\mathcal{H}=0$. At this point the potential term in equation \eqref{eq:psiEOM2} becomes irrelevant and we have approximately
\begin{equation}
\psi'' = -2\mathcal{H}\psi'.
\end{equation}
Using this in equation \eqref{eq:phi_k} and then using equation \eqref{eq:PertEE0i} we can get a decoupled equation for $\phi_k$,
\begin{equation}
\phi_k'' + 6\mathcal{H}\phi_k' + \left(k^2+2\mathcal{H}'+4\mathcal{H}^2-4\mathcal{K}\right)\phi_k = 0.
\end{equation}
Also, since during the bounce $\psi'$ is large, the cosine term in equation \eqref{eq:chi_k} oscillates quickly and averages to zero. We can therefore write the equation for $\chi_k$ as
\begin{equation}
\chi_k''+2\mathcal{H}\chi_k' +\left( k^2+\frac{2}{M_p^2}\psi'^2\right)\chi_k = -4\mathcal{H}\psi'\phi_k.
\end{equation}
\begin{figure}
\begin{subfigure}{0.47\textwidth}
\includegraphics[scale=0.8]{GhostPerturbationsk1}
\caption{$k=\sqrt{\rho_{F0}}/M_p$}
\end{subfigure}
\begin{subfigure}{0.47\textwidth}
\includegraphics[scale=0.8]{GhostPerturbationsk10}
\caption{$k=10\sqrt{\rho_{F0}}/M_p$}
\end{subfigure}
\begin{subfigure}{0.47\textwidth}
\includegraphics[scale=0.8]{MetricPerturbationsk1}
\caption{$k=\sqrt{\rho_{F0}}/M_p$}
\end{subfigure}
\begin{subfigure}{0.47\textwidth}
\includegraphics[scale=0.8]{MetricPerturbationsk10}
\caption{$k=10\sqrt{\rho_{F0}}/M_p$}
\end{subfigure}
\caption{\label{fig:perts}Behaviour of the ghost field perturbation $\chi$, and the metric perturbation, $\phi$ during expansion phase. The parameters used are $\psi'(0)=10^3\rho_{F0}^{1/2}$, $\Lambda^4=\rho_{F0}$ and $f=10^{-2}M_p$. The conformal time, $\eta$ is in units of $(M_p^2/\rho_{F0})^{1/2}$. The amplitude of the perturbations are relative to their initial amplitudes at the bounce.}
\end{figure}
For the evolution of the perturbations during the expansion and contraction phases we use the full equations \eqref{eq:chi_k} and \eqref{eq:phi_k} along with the background equations \eqref{eq:psiEOM2}. In figure \ref{fig:perts} we show the behavior of the perturbations for two different $k$ modes. We can compare the size of the $k$ modes to the Hubble scale by referring to figure \ref{fig:hubble}. The perturbations which enter the horizon early, $k\gtrsim 10\sqrt{\rho_{F0}}/M_p$, are stable throughout the expansion and contraction of the universe. Those that enter the horizon later on in the expansion, $k\lesssim 10\sqrt{\rho_{F0}}/M_p$, are unstable. Linear, short wavelength modes of the ghost field are stable during the expansion and contraction phases. There are however classically unstable modes around the size of the Hubble scale during most of the expansion phase.
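The mode equations \eqref{eq:chi_k} and \eqref{eq:phi_k} can be integrated along the same background; the sketch below extends the background code given after the Friedmann equations (again purely illustrative, with the perturbations normalised to unity at the bounce as in the caption of figure \ref{fig:perts}, and an illustrative value of $k$).
\begin{verbatim}
def rhs_pert(eta, y, k):    # y = (a, H, psi, psi', chi_k, chi_k', phi_k, phi_k')
    a, H, psi, dpsi, chi, dchi, phi, dphi = y
    dH = (dpsi**2 + a**2 * V(psi) - 1.0 / a**2) / 3.0
    ddpsi = -2.0 * H * dpsi + a**2 * Lam4 * np.sin(psi / f) / f
    ddchi = (-2.0 * H * dchi                               # eq. (eq:chi_k), M_p = 1
             - (k**2 + 2.0 * dpsi**2 - a**2 * Lam4 * np.cos(psi / f) / f**2) * chi
             + 2.0 * ddpsi * phi)
    ddphi = (-2.0 * H * dphi                               # eq. (eq:phi_k), M_p = 1
             - (k**2 + 2.0 * dH - 4.0 * K) * phi
             - ddpsi * chi)
    return [a * H, dH, dpsi, ddpsi, dchi, ddchi, dphi, ddphi]

modes = solve_ivp(rhs_pert, (0.0, 3.0),
                  [a0, 0.0, psi0, dpsi0, 1.0, 0.0, 1.0, 0.0],
                  args=(1.0,), rtol=1e-8, atol=1e-10)
# modes.y[4], modes.y[6]: chi_k and phi_k for k = 1 (in units of sqrt(rho_F0)/M_p),
# to be compared with the qualitative behaviour shown in fig:perts.
\end{verbatim}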
The main problem with the ghost field however is really the nonlinear instability due to the negative kinetic term, and its interaction with other fields. This points us towards the idea of a ghost condensate to stabilize the field \cite{ArkaniHamed:2003uy}. The idea is to add higher order derivative terms which stabilize the ghost field around a non-zero value of $\partial_\mu\psi$, analogous to the stabilization of a tachyonic potential by adding higher order polynomials to the potential.
\section{Discussion}
In this work we studied the stability of a ghost field in the context of a cyclic universe scenario. In contrast with the linear instabilities that non-interacting ghosts suffer in flat, time-independent backgrounds, we have argued that it is possible for ghosts in cyclic cosmologies to exhibit stability. We demonstrated that a background time-dependent ghost subject to an oscillatory potential exhibits limit cycles which bound the field trajectories and energies. This reflects the fact that the oscillatory potential bounds the negative-energy background field configuration.
Secondly, we find that the gauge-invariant coupled metric and ghost cosmological perturbations have a peculiar feature. As for well-behaved scalar perturbations, the ghost field undergoes a classical instability for superhorizon modes. However, subhorizon modes are well behaved and are generically oscillatory. One distinct feature of the ghost system is that modes that are marginally sub-horizon are unstable, and this could actually have some interesting phenomenological consequences. Since this class of sub-horizon instabilities is classical, it could signal the formation of a stable condensed configuration. Mukohyama has shown that ghost excitations accreted into black holes behave just like dust and could also serve as a viable dark matter candidate \cite{Muk}. It is therefore possible that these unstable sub-horizon modes could be accreted into primordial black holes, which would then accrete the ghost field; we are currently investigating this interesting possibility.
\begin{figure}
\begin{subfigure}{0.47\textwidth}
\includegraphics[scale=1]{vacuumgraviton}
\caption{\label{fig:gravdecay}Graviton mediated vacuum decay}
\end{subfigure}
\begin{subfigure}{0.47\textwidth}
\includegraphics[scale=1]{vacuumphoton2}
\caption{\label{fig:photdecay}Direct coupling vacuum decay}
\end{subfigure}
\caption{Vacuum decay channels when ghost is present.}
\end{figure}
At the nonlinear level it is well established that the ghost vacuum can undergo graviton-mediated decay into two photons and two ghosts via the process shown in figure \ref{fig:gravdecay}. The decay rate for this process is naively infinite; however, if we introduce a cutoff energy, $E_c$, above which the effective theory with the ghost does not apply, then we can estimate a decay rate of $\Gamma_{0\rightarrow 2\gamma\, 2\psi} \sim \frac{E_c^{8}}{M_{p}^{4}}$. Constraints from diffuse gamma-ray backgrounds then give a UV cutoff of $E_c \leq 3$ MeV \cite{Cline}. Therefore, our theory will likewise be effective up to an MeV cutoff. Moreover, the ghost also directly couples to the photon through the term $\Delta \mathcal{L}= -\frac{1}{4}e^{-2\psi/M_p}F^2$, providing another channel for vacuum decay. To lowest order we will have an interaction $(\psi/2M_p)F^2$ which allows the vacuum to decay to a ghost plus two photons as in figure \ref{fig:photdecay}. The interaction is Planck suppressed just as for the graviton-mediated decay; however, the process involves only a single vertex, so the decay rate goes as $E_c^6/M_p^2$, an enhancement of $M_p^2/E_c^2$ over the graviton-mediated channel. This relatively small UV cutoff points to the need for a ghost-free UV complete theory or another avenue to quantum mechanically stabilize the ghost system. The cutoff induced by direct interactions of the ghost can be relaxed if the gauge field is identified as the dark photon. When the ghost is directly coupled to regular photons, a bound can be found from high-energy photon measurements. However, in the case where the direct coupling is with the dark photon, the analogous constraint would also involve the suppressed coupling between the dark photon and standard model particles \cite{Essig:2013lka,An:2014twa,Ilten:2016tkc}.
One potential way of alleviating the issue of vacuum decay is to realize that the ghost condenses in the IR. In this case, an opposite-sign quartic kinetic self-interaction, together with the modified dispersion relation it induces, renders the vacuum stable against decay. This idea is reminiscent of the Higgs-like phenomenon of spontaneous symmetry breaking. There, the Higgs field $\Phi$ has an unstable tachyonic mode around the false vacuum $\left<\Phi\right>=0$. While fluctuations around this field value are unstable, the full theory is stable since the Higgs potential is bounded from below and has a global vacuum. Likewise, as pointed out by Arkani-Hamed et al. \cite{ArkaniHamed:2003uy}, the ghost field can be seen as part of an effective theory with higher-order kinetic terms which has a stable minimum:
\begin{equation}
S= \int d^{3}x dt\left[\frac{1}{2}M^{4}\dot{\phi}^{2} -\frac{1}{2}\tilde{M}(\nabla^{2}\phi)^{2} - \frac{M^{4}}{2c}\dot{\phi}(\nabla\phi)^{2} + \frac{M^{4}}{8c^{2}}(\nabla \phi)^{4} + ...\right]
\end{equation}
A power-counting analysis of the scaling dimensions of the above operators reveals that there are no large quantum instabilities in the IR. Therefore, if our ghost is a condensate in the IR, we can evade the instabilities provided that there is a UV completion of our theory.
The issue of finding a UV completed theory that gives the ghost condensate is still an open-ended quest. Recently, the authors of \cite{Krotov:2004if} claimed that an Abelian-Higgs like model coupled to fermions can yield a ghost condensate after integrating out the fermions.
A roadblock to UV completion was pointed out in a very interesting work by Adams et al.\ \cite{Adams:2006sv}, where they demonstrated that the constraints of a local, Lorentz-invariant S-matrix prohibit a UV completion of a ghost condensate, further negating such claims. The point is that a translationally invariant ghost configuration that picks a frame necessarily generates superluminal waves which violate causality in the S-matrix. However, there is a very interesting loophole that was pointed out by Dvali and collaborators \cite{Dvali:2012zc}:
The road to UV completion rests on the Wilsonian paradigm, which is based on the existence of weakly coupled degrees of freedom that become relevant in the deep UV. A good example of this is in QCD, where below some cutoff scale strongly coupled states such as pions arise. Above the cutoff scale the pion is no longer a reliable degree of freedom and is replaced by weakly coupled quarks and gluons in the UV. However, the authors of \cite{Dvali:2012zc} show another route, using ghost condensates as an example, where there is a non-Wilsonian completion. In this case it is not new weakly coupled states that appear but collective excitations of multi-particle states composed out of soft original quanta. Interestingly, these field configurations are often non-linear and appear to be non-unitary. The authors show that the issue of superluminality and non-unitarity can be resolved because even if the background classicalon solution produces super-luminal waves, boost transformations that can lead to acausality are not allowed by the background. We trade a short spatial wavelength instability for a different instability due to higher order time derivatives, if we insist on local Lorentz invariance. If we give up local Lorentz invariance then we run afoul of all experiments testing special relativity. However, instabilities do not prove a theory is inconsistent, only that the vacuum and quasiparticles have not been properly identified.
\numberwithin{equation}{section}
\subsection*{1.1}
The aim of the present paper is to prove a variant of the abstract probabilistic version of Szemer\'{e}di's regularity lemma,
due to Tao \cite{Tao1,Tao2,Tao3}. This variant applies to a number of combinatorial structures---including graphs, hypergraphs,
hypercubes, graphons, and many more---and works for random variables in $L_p$ for any $p>1$. A proper exposition of our main
result requires some preparatory work and hence, at this point, we will not discuss it in detail. Instead, we will focus
on the following model case which is representative of the contents of this paper.
\subsection*{1.2}
A very basic fact of probability theory is that the set of simple functions is dense in $L_1$. Actually, this fact is so basic
that it is hardly mentioned when applied. But how do we approximate a given random variable by a simple function? More precisely,
given an integrable random variable $f\colon [0,1]\to\rr$ and a real $0<\ee\mik 1$ (that we regard as an error) we are asking for
an effective method to locate a simple function $s\colon [0,1]\to\rr$ such that $\|f-s\|_{L_1}\mik \ee$.
It turns out that there is a natural greedy algorithm for this problem which we are about to describe. We start by setting
$\calf_0=\big\{\emptyset,[0,1]\big\}$ and $f_0=\ave(f\, | \, \calf_0)$. That is, $\calf_0$ is the trivial $\sigma$-algebra on $[0,1]$
and $f_0$ is the conditional expectation of $f$ with respect to $\calf_0$ (see, e.g., \cite{Du}). Notice that $f_0$ is constant
and equal to the expected value $\ave(f)$ of $f$. Thus, if $\|f-f_0\|_{L_1}\mik \ee$, then we are done. Otherwise, by considering
the support of the positive part or the negative part of $f-f_0$, we may select a measurable subset $A_0$ of $[0,1]$ such that
\begin{equation} \label{e1.1}
\frac{\ee}{2} < \big| \int_{A_0} (f-f_0)\, dt \big|.
\end{equation}
Next we set $\calf_1=\sigma(\calf_0 \cup\{A_0\})$ and $f_1=\ave(f\, | \, \calf_1)$. (That is, $\calf_1$ is the smallest
$\sigma\text{-algebra}$ on $[0,1]$ that contains all elements of $\calf_0$ and $A_0$, and $f_1$ is the conditional expectation
of $f$ with respect to $\calf_1$.) Observe that, by \eqref{e1.1}, we have
\begin{equation} \label{e1.2}
\frac{\ee}{2} < \big| \int_{A_0} (f_1-f_0)\, dt \big| \mik \|f_1-f_0\|_{L_1}.
\end{equation}
Also notice that $f_1$ is a simple function since the $\sigma$-algebra $\calf_1$ is finite, and so if $\|f-f_1\|_{L_1}\mik \ee$,
then we can stop this process. On the other hand, if $\|f-f_1\|_{L_1}>\ee$, then we select a measurable subset $A_1$ of $[0,1]$
such that $|\int_{A_1} (f-f_1)\, dt|> \ee/2$ and we continue similarly.
The next thing that one is led to analyze is whether this algorithm will eventually terminate and, if yes, at what speed.
To this end, notice that if the algorithm runs forever, then it produces an increasing sequence $(\calf_i)$ of finite
$\sigma$-algebras of $[0,1]$ and a sequence $(f_i)$ of random variables with $f_i=\ave(f\, | \, \calf_i)$ for every
$i\in\nn$ and such that $\|f_i-f_{i-1}\|_{L_1}> \ee/2$\, if\, $i\meg 1$. In other words, $(f_i)$ is a martingale adapted to the
filtration $(\calf_i)$ whose successive differences are bounded away from zero in the $L_1$ norm. This last piece of information
is the key observation of this analysis since successive differences of martingales, known as martingale difference sequences,
are highly structured sequences of random variables. In particular, if the given random variable $f$ belongs to $L_p$ for some
$1< p\mik 2$, then for every integer $n\meg 1$ we have
\begin{equation} \label{e1.3}
\Big( \sum_{i=1}^n \|f_i-f_{i-1}\|^2_{L_p}\Big)^{1/2} \mik \Big( \frac{1}{p-1}\Big)^{1/2} \cdot \|f\|_{L_p}.
\end{equation}
This functional analytic estimate is sharp, and was recently proved by Ricard and Xu \cite{RX} who deduced it from
a uniform convexity inequality for $L_p$ spaces. We briefly comment on these results in Appendix A.
Of course, with inequality \eqref{e1.3} at our disposal, it is very easy to analyze the greedy algorithm described above.
Precisely, by \eqref{e1.3} and the monotonicity of the $L_p$ norms, we see that if $f\in L_p$ for some $1<p \mik 2$, then
this algorithm will terminate after at most $\lfloor 4\, \|f\|^2_{L_p} \ee^{-2} (p-1)^{-1}\rfloor+1$ iterations.
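Although no algorithmic claims are made in this paper, the greedy scheme described above is easy to experiment with numerically. The following Python fragment is a minimal sketch of it (not part of the formal development), under the assumption that $f$ is represented by its values on a uniform grid of $[0,1]$, so that conditional expectations become averages over the atoms of the current finite $\sigma$-algebra; all names appearing in it are illustrative.
\begin{verbatim}
import numpy as np

def greedy_simple_approx(f_vals, eps, max_iter=200):
    # Greedy scheme of Subsection 1.2 on a uniform grid of [0,1]: refine a
    # finite sigma-algebra along the support of (f - E(f|F_i))^{+/-}.
    labels = np.zeros(len(f_vals), dtype=np.int64)  # one atom label per point
    for _ in range(max_iter):
        # f_i = E(f | F_i): the average of f on each atom of F_i
        means = np.bincount(labels, weights=f_vals) / np.bincount(labels)
        f_i = means[labels]
        if np.abs(f_vals - f_i).mean() <= eps:      # ||f - f_i||_{L_1} <= eps
            return f_i, labels
        diff = f_vals - f_i
        # A_i = support of the positive (or negative) part of f - f_i; either
        # choice carries at least half of ||f - f_i||_{L_1}, hence > eps/2
        pos_mass = diff[diff > 0].sum()
        neg_mass = -diff[diff < 0].sum()
        A = (diff > 0) if pos_mass >= neg_mass else (diff < 0)
        # F_{i+1} = sigma-algebra generated by F_i and A_i: split every atom
        labels = np.unique(2 * labels + A, return_inverse=True)[1]
    return f_i, labels

x = (np.arange(50_000) + 0.5) / 50_000
f = x ** (-0.4)              # an L_p function for p < 2.5, unbounded near 0
s, atoms = greedy_simple_approx(f, eps=0.05)
print(len(np.unique(atoms)), np.abs(f - s).mean())
\end{verbatim}
In examples of this sort the number of iterations observed is, unsurprisingly, far smaller than the worst-case bound discussed above.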
\subsection*{1.3}
Our main result (Theorem \ref{t3.1} in Section 3) follows the method outlined above, but with two important extra features.
First, our approximation scheme is more demanding in the sense that the simple function we wish to locate is required
to be a linear combination of characteristic functions of sets belonging to a given class. It is useful to view the sets
in this class as being ``structured"\!, though for the purpose of performing the greedy algorithm only some (not particularly
restrictive) stability properties are needed. These properties are presented in Definition \ref{d2.1} in Section 2, together
with several related examples.
Second, the error term of the approximation is controlled not only by the $L_p$ norm but also by a certain ``uniformity norm"
which depends on the class of ``structured" sets with which we are dealing (see Definition \ref{d2.2} in Section 2).
This particular feature is already present in Tao's work and can be traced to \cite{FrK}.
Finally, we note that in Section 4 we discuss some applications, including a regularity lemma for hypercubes and
an extension of the strong regularity lemma to $L_p$ graphons for any $p>1$. More applications will appear in \cite{DKK}.
\subsection*{1.4}
By $\nn=\{0,1,2,\dots\}$ we denote the set of natural numbers. As usual, for every positive integer $n$ we set
$[n]\coloneqq \{1,\dots,n\}$. For every function $f\colon\nn\to\nn$ and every $\ell\in\nn$ by $f^{(\ell)}\colon\nn\to\nn$
we shall denote the $\ell$-th iteration of $f$ defined recursively by $f^{(0)}(n)=n$ and $f^{(\ell+1)}(n)=f\big(f^{(\ell)}(n)\big)$
for every $n\in\nn$. All other pieces of notation we use are standard.
\section{Semirings and their uniformity norms}
\numberwithin{equation}{section}
We begin by introducing the following slight strengthening of the classical concept of a semiring of sets (see also \cite{BN}).
\begin{defn} \label{d2.1}
Let $\Omega$ be a nonempty set and $k$ a positive integer. Also let $\cals$ be a collection of subsets of $\Omega$.
We say that $\mathcal{S}$ is a \emph{$k$-semiring on $\Omega$} if the following properties are satisfied.
\begin{enumerate}
\item[(P1)] We have that\, $\emptyset,\Omega\in\cals$.
\item[(P2)] For every $S,T\in\cals$ we have that $S\cap T\in\cals$.
\item[(P3)] For every $S,T\in\cals$ there exist $\ell\in [k]$ and $R_1,\dots,R_{\ell}\in \cals$ which are pairwise disjoint
and such that $S\setminus T= R_1\cup\cdots\cup R_{\ell}$.
\end{enumerate}
\end{defn}
As we have already indicated in the introduction, we view every element of a $k\text{-semiring}$ $\cals$ as a ``structured" set and
a linear combination of few characteristic functions of elements of $\cals$ as a ``simple" function. We will use the following norm
in order to quantify how far from being ``simple" a given function is.
\begin{defn} \label{d2.2}
Let $(\Omega,\calf,\mathbb{P})$ be a probability space, $k$ a positive integer and $\cals$ a $k$-semiring on $\Omega$ with
$\cals\subseteq \calf$. For every $f\in L_1(\Omega,\calf,\mathbb{P})$ we set
\begin{equation} \label{e2.1}
\|f\|_{\cals} = \sup\Big\{ \big| \int_S f \, d\mathbb{P} \big| : S\in \cals\Big\}.
\end{equation}
The quantity $\|f\|_{\cals}$ will be called the \emph{$\cals$-uniformity norm} of $f$.
\end{defn}
The $\cals$-uniformity norm is, in general, a seminorm. Note, however, that if the $k$-semiring $\cals$ is sufficiently rich, then
the function $\|\cdot\|_{\cals}$ is indeed a norm. More precisely, the function $\|\cdot\|_{\cals}$ is a norm if and only if the family
$\{\mathbf{1}_S:S\in \cals\}$ separates points in $L_1(\Omega,\calf,\mathbb{P})$, that is, for every $f,g\in L_1(\Omega,\calf,\mathbb{P})$
with $f\neq g$ there exists $S\in\cals$ with $\int_S f\, d\mathbb{P}\neq \int_S g\, d\mathbb{P}$.
The simplest example of a $k$-semiring on a nonempty set $\Omega$ is an algebra of subsets of $\Omega$. Indeed, observe that a family
of subsets of $\Omega$ is a $1$-semiring if and only if it is an algebra. Another basic example is the collection of all intervals of
a linearly ordered set, a family which is easily seen to be a $2$-semiring. More interesting (and useful) $k$-semirings can be constructed
with the following lemma.
\begin{lem} \label{l2.3}
Let $\Omega$ be a nonempty set. Also let $m, k_1,\dots, k_m$ be positive integers and set $k=\sum_{i=1}^m k_i$.
If\, $\cals_i$ is a $k_i$-semiring on $\Omega$ for every $i\in [m]$, then the family
\begin{equation} \label{e2.2}
\cals=\Big\{\bigcap_{i=1}^m S_i: S_i\in\cals_i \text{ for every } i\in [m] \Big\}
\end{equation}
is a $k$-semiring on $\Omega$.
\end{lem}
\begin{proof}
Clearly we may assume that $m\meg 2$. Notice, first, that the family $\cals$ satisfies properties (P1) and (P2) in Definition \ref{d2.1}.
To see that property (P3) is also satisfied, fix $S, T\in\cals$ and write $S =\bigcap_{i=1}^m S_i$ and $T=\bigcap_{i=1}^m T_i$ where
$S_i, T_i\in\cals_i$ for every $i\in [m]$. We set $P_1=\Omega\setminus T_1$ and $P_j=T_1\cap\cdots\cap T_{j-1}\cap (\Omega\setminus T_j)$
if $j\in\{2,\dots,m\}$. Observe that the sets $P_1,\dots, P_m$ are pairwise disjoint. Moreover,
\begin{equation} \label{e2.3}
\Omega\setminus \Big(\bigcap_{i=1}^m T_i \Big) = \bigcup_{j=1}^m P_j
\end{equation}
and so
\begin{equation} \label{e2.4}
S\setminus T = \Big( \bigcap_{i=1}^m S_i\Big) \setminus \Big( \bigcap_{i=1}^m T_i\Big) =
\bigcup_{j=1}^m\Big(\bigcap_{i=1}^m S_i \cap P_j\Big).
\end{equation}
Let $j\in [m]$ be arbitrary. Since $\mathcal{S}_j$ is a $k_j$-semiring, there exist $\ell_j\in [k_j]$ and pairwise disjoint sets
$R^j_1,\dots, R^j_{\ell_j}\in\cals_j$ such that $S_j\setminus T_j=R^j_1\cup \cdots \cup R^j_{\ell_j}$. Thus, setting
\begin{enumerate}
\item[(a)] $B_1=\Omega$ and $B_j=\bigcap_{1\mik i<j}(S_i\cap T_i)$ if $j\in\{2,\dots,m\}$,
\item[(b)] $C_j=\bigcap_{j<i\mik m} S_i$ if $j\in \{1,\dots,m-1\}$ and $C_m=\Omega$,
\end{enumerate}
and invoking the definition of the sets $P_1,\dots,P_m$ we obtain that
\begin{equation} \label{e2.5}
S\setminus T = \bigcup_{j=1}^m \Big( \bigcup_{n=1}^{\ell_j} \big( B_j \cap R^j_n\cap C_j \big) \Big).
\end{equation}
Now set $I=\bigcup_{j=1}^{m} \big(\{j\}\times [\ell_j]\big)$ and observe that $|I|\mik k$. For every $(j,n)\in I$ let $U^j_n=B_j \cap R^j_n\cap C_j$
and notice that $U^j_n\in\cals$, $U_n^j\subseteq R_n^j$ and $U_n^j\subseteq P_j$. It follows that the family $\{U^j_n: (j,n)\in I\}$ is contained
in $\cals$ and consists of pairwise disjoint sets. Moreover, by \eqref{e2.5}, we have
\begin{equation} \label{e2.6}
S\setminus T= \bigcup_{(j,n)\in I} U_n^j.
\end{equation}
Hence, the family $\cals$ satisfies property (P3) in Definition \ref{d2.1}, as desired.
\end{proof}
By Lemma \ref{l2.3}, we have the following corollary.
\begin{cor} \label{c2.4}
The following hold.
\begin{enumerate}
\item[(a)] Let $\Omega$ be a nonempty set. Also let $k$ be a positive integer and for every $i\in [k]$ let $\cala_i$ be an algebra on $\Omega$.
Then the family
\begin{equation} \label{e2.7}
\{A_1\cap \cdots \cap A_k: A_i\in \cala_i \text{ for every } i\in [k]\}
\end{equation}
is a $k$-semiring on $\Omega$.
\item[(b)] Let $d,k_1,\dots,k_d$ be positive integers and set $k=\sum_{i=1}^d k_i$. Also let $\Omega_1,\dots,\Omega_d$ be nonempty sets and
for every $i\in [d]$ let $\cals_i$ be a $k_i$-semiring on $\Omega_i$. Then the family
\begin{equation} \label{e2.8}
\{S_1\times \cdots\times S_d: S_i\in \cals_i \text{ for every } i\in [d] \}
\end{equation}
is a $k$-semiring on $\Omega_1\times\cdots\times \Omega_d$.
\end{enumerate}
\end{cor}
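The decomposition constructed in the proof of Lemma \ref{l2.3} is completely explicit and, for small finite examples, it can be checked mechanically. The following Python fragment is such a sanity check for the $2$-semiring of rectangles $S_1\times S_2$ in a product of two finite sets (Corollary \ref{c2.4}(b) with $d=2$ and the full power-set algebras); it is included only to illustrate the bookkeeping in the proof and is not used anywhere in the sequel.
\begin{verbatim}
import random

random.seed(0)
O1, O2 = range(5), range(4)

def rect(S1, S2):
    return {(x, y) for x in S1 for y in S2}

for _ in range(1000):
    S1, T1 = (set(random.sample(O1, random.randint(0, 5))) for _ in range(2))
    S2, T2 = (set(random.sample(O2, random.randint(0, 4))) for _ in range(2))
    S, T = rect(S1, S2), rect(T1, T2)
    # the two pieces produced by the proof of Lemma 2.3 for m = 2:
    piece1 = rect(S1 - T1, S2)        # j = 1: (S_1 \ T_1) x S_2
    piece2 = rect(S1 & T1, S2 - T2)   # j = 2: (S_1 cap T_1) x (S_2 \ T_2)
    assert piece1.isdisjoint(piece2)
    assert piece1 | piece2 == S - T
print("decomposition verified on 1000 random pairs of rectangles")
\end{verbatim}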
Next we isolate some basic properties of the $\cals$-uniformity norm.
\begin{lem} \label{l2.5}
Let $(\Omega,\calf,\mathbb{P})$ be a probability space, $k$ a positive integer and $\cals$ a $k\text{-semiring}$ on $\Omega$
with $\cals\subseteq \calf$. Also let $f\in L_1(\Omega,\calf,\mathbb{P})$. Then the following hold.
\begin{enumerate}
\item[(a)] We have $\|f\|_{\cals}\mik \|f\|_{L_1}$.
\item[(b)] If $\mathcal{B}$ is a $\sigma$-algebra on $\Omega$ with $\mathcal{B}\subseteq \cals$,
then $\|\ave(f\, | \, \mathcal{B})\|_{\cals}\mik \|f\|_{\cals}$.
\item[(c)] If $\cals$ is a $\sigma$-algebra, then $\|f\|_{\cals} \mik \|\ave(f\, | \, \cals)\|_{L_1}\mik 2 \|f\|_{\cals}$.
\end{enumerate}
\end{lem}
\begin{proof}
Part (a) is straightforward. For part (b), fix a $\sigma$-algebra $\mathcal{B}$ on $\Omega$ with $\mathcal{B}\subseteq \cals$
and set $P=\{\omega\in\Omega: \ave(f\, | \, \mathcal{B})(\omega)\meg 0\}$ and $N=\Omega\setminus P$. Notice that
$P,N\in\mathcal{B}\subseteq\cals$. Hence, for every $S\in\cals$ we have
\begin{eqnarray} \label{e2.9}
\big|\int_S \ave(f\, | \, \mathcal{B}) \, d\mathbb{P}\big| & \mik & \max\Big\{ \int_{P\cap S} \ave(f\, | \, \mathcal{B})\, d\mathbb{P},
-\int_{N\cap S} \ave(f\, | \, \mathcal{B})\, d\mathbb{P}\Big\} \\
& \mik & \max\Big\{ \int_{P} \ave(f\, | \, \mathcal{B})\, d\mathbb{P}, -\int_{N} \ave(f\, | \, \mathcal{B})\, d\mathbb{P}\Big\} \nonumber \\
& = & \max\Big\{ \int_{P} f\, d\mathbb{P}, -\int_{N} f\, d\mathbb{P}\Big\} \mik \|f\|_{\cals} \nonumber
\end{eqnarray}
which yields that $\|\ave(f\, | \, \mathcal{B})\|_{\cals}\mik \|f\|_{\cals}$.
Finally, assume that $\cals$ is a $\sigma$-algebra and notice that $\int_S f\, d\pp=\int_S \ave(f\, | \, \cals)\, d\pp$
for every $S\in\cals$. In particular, we have $\|f\|_{\cals}\mik \|\ave(f\, |\, \cals)\|_{L_1}$. Also let, as above,
$P=\{\omega\in\Omega: \ave(f\, | \, \cals)(\omega)\meg 0\}$ and $N=\Omega\setminus P$. Since $P,N\in\cals$ we obtain that
\begin{equation} \label{e2.10}
\|\ave(f\, | \, \cals)\|_{L_1} \mik 2\cdot \max\Big\{ \int_P \ave(f\, | \, \cals)\, d\mathbb{P},
-\int_N \ave(f\, | \, \cals)\, d\mathbb{P}\Big\} \mik 2\|f\|_{\cals}
\end{equation}
and the proof is completed.
\end{proof}
We close this section by presenting some examples of $k$-semirings which are relevant from a combinatorial perspective.
In the first example the underlying space is the Cartesian product of a finite sequence of nonempty finite sets.
The corresponding semirings are related to the development of Szemer\'{e}di's regularity method for hypergraphs.
\begin{examp} \label{ex1}
Let $d\in\nn$ with $d\meg 2$ and $V_1,\dots,V_d$ nonempty finite sets. We view the Cartesian product $V_1\times\cdots \times V_d$
as a discrete probability space equipped with the uniform probability measure. For every nonempty subset $F$ of $[d]$ let
$\pi_F\colon\prod_{i\in [d]} V_i\to \prod_{i\in F} V_i$ be the natural projection and set
\begin{equation} \label{e2.11}
\cala_F=\Big\{ \pi^{-1}_F(A): A\subseteq \prod_{i\in F} V_i \Big\}.
\end{equation}
The family $\cala_F$ is an algebra of subsets of $V_1\times\cdots \times V_d$ and consists of those sets which
depend only on the coordinates determined by $F$.
More generally, let $\calf$ be a family of nonempty subsets of $[d]$. Set $k=|\calf|$ and observe that, by Corollary \ref{c2.4},
we may associate with the family $\calf$ a $k$-semiring $\cals_{\calf}$ on $V_1\times\cdots\times V_d$ defined by the rule
\begin{equation} \label{e2.12}
S\in\cals_{\calf} \Leftrightarrow S=\bigcap_{F\in\calf} A_F \text{ where } A_F\in \cala_F \text{ for every } F\in\calf.
\end{equation}
Notice that if the family $\calf$ satisfies $[d]\notin \calf$ and $\cup\calf=[d]$, then it gives rise to a non-trivial semiring
whose corresponding uniformity norm is a genuine norm.
It turns out that there is a minimal non-trivial semiring $\cals_{\min}$ one can obtain in this way. It corresponds to the family
$\calf_{\min}={[d]\choose 1}$ and is particularly easy to grasp since it consists of all rectangles of $V_1\times\cdots\times V_d$.
The $\cals_{\min}$-uniformity norm is known as the \emph{cut norm} and was introduced by Frieze and Kannan \cite{FrK}.
At the other extreme, this construction also yields a maximal non-trivial semiring $\cals_{\max}$ on $V_1\times\cdots\times V_d$.
It corresponds to the family $\calf_{\max}={[d]\choose d-1}$ and consists of those subsets of the product which can be written as
$A_1\cap\cdots \cap A_d$ where for every $i\in [d]$ the set $A_i$ does not depend on the $i$-th coordinate. The $\cals_{\max}$-uniformity
norm is known as the \emph{Gowers box norm} and was introduced by Gowers \cite{Go1,Go2}.
\end{examp}
In the second example the underlying space is of the form $\bo\times\bo$ where $\bo$ is the sample space of a probability space
$(\bo,\calf,\pp)$. The corresponding semirings are related to the theory of convergence of graphs (see, e.g., \cite{BCLSV,L}).
\begin{examp} \label{ex2}
Let $(\bo,\calf,\pp)$ be a probability space and define
\begin{equation} \label{e2.13}
\cals_{\square}=\big\{ S\times T: S,T\in\calf\big\}.
\end{equation}
That is, $\cals_{\square}$ is the family of all measurable rectangles of $\bo\times\bo$. By Corollary \ref{c2.4}, we see that
$\cals_{\square}$ is a $2$-semiring on $\bo\times\bo$. The $\cals_{\square}$-uniformity norm is also referred to as the
\textit{cut norm} and is usually denoted by $\|\cdot\|_{\square}$. In particular, for every integrable random variable
$f\colon \bo\times\bo\to\rr$ we have
\begin{equation} \label{e2.14}
\|f\|_{\square} = \sup\Big\{ \big| \int_{S\times T} f\, d\pp\big|: S,T\in\calf \Big\}.
\end{equation}
There is another natural semiring in this context which was introduced by Bollob\'{a}s and Nikiforov \cite{BN} and can
be considered as the ``symmetric" version of $\cals_{\square}$. Specifically, let
\begin{equation} \label{e2.15}
\Sigma_{\square}=\big\{ S\times T: S,T\in\calf \text{ and either } S=T \text{ or } S\cap T=\emptyset\big\}
\end{equation}
and observe that $\Sigma_{\square}$ is a $4$-semiring which is contained, of course, in $\cals_{\square}$.
On the other hand, note that the family $\cals_{\square}$ is not much larger than $\Sigma_{\square}$ since every element of
$\cals_{\square}$ can be written as the disjoint union of at most $4$ elements of $\Sigma_{\square}$. Therefore,
for every integrable random variable $f\colon\bo\times\bo\to\rr$ we have
\begin{equation} \label{e2.16}
\|f\|_{\Sigma_{\square}} \mik \|f\|_{\square} \mik 4 \|f\|_{\Sigma_{\square}}.
\end{equation}
\end{examp}
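For step kernels the supremum in \eqref{e2.14} is attained on sets which are unions of atoms of the underlying partition (for fixed $T$ the optimal $S$ is the set where $x\mapsto\int_T f(x,y)\,dy$ is positive, or the set where it is negative), so for small examples the cut norm can be computed by brute force. The following Python fragment, included only as a concrete illustration of Definition \ref{d2.2} in this setting, does this for an $n\times n$ step kernel with uniform atoms of measure $1/n$.
\begin{verbatim}
import itertools
import numpy as np

def cut_norm_step(W):
    # ||W||_box for an n x n step kernel W with uniform atoms of measure 1/n
    n = W.shape[0]
    best = 0.0
    for T_bits in itertools.product([False, True], repeat=n):
        T = np.array(T_bits)
        g = W[:, T].sum(axis=1) / n**2   # g_i = integral of W over A_i x T
        # optimal S for this T: the atoms where g is positive (or negative)
        best = max(best, g[g > 0].sum(), -g[g < 0].sum())
    return best

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
W = (W + W.T) / 2                        # symmetrize, as for a graphon
print(cut_norm_step(W))
\end{verbatim}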
In the last example the underlying space is the hypercube
\begin{equation} \label{e2.17}
A^n=\big\{ (a_0,\dots,a_{n-1}): a_0,\dots,a_{n-1}\in A\big\}
\end{equation}
where $n$ is a positive integer and $A$ is a finite alphabet (i.e., a finite set) with at least two letters.
The building blocks of the corresponding semirings were introduced by Shelah \cite{Sh} in his work on the Hales--Jewett
numbers, and are essential tools in all known combinatorial proofs of the density Hales--Jewett theorem (see \cite{DKT1,P,Tao3}).
\begin{examp} \label{ex3}
Let $n$ be a positive integer and $A$ a finite alphabet with $|A|\meg 2$. As in Example \ref{ex1}, we view the hypercube $A^n$
as a discrete probability space equipped with the uniform probability measure.
Now let $a,b\in A$ with $a\neq b$. Also let $z, y\in A^n$ and write $z=(z_0,\dots,z_{n-1})$ and $y=(y_0,\dots,y_{n-1})$.
We say that $z$ and $y$ are \textit{$(a,b)$-equivalent} provided that for every $i\in\{0,\dots,n-1\}$
and every $\gamma\in A\setminus\{a,b\}$ we have
\begin{equation} \label{e2.18}
z_i=\gamma \ \text{ if and only if } \ y_i=\gamma.
\end{equation}
In other words, $z$ and $y$ are $(a,b)$-equivalent if they possibly differ only in the coordinates taking values in $\{a,b\}$.
Clearly, the notion of $(a,b)$-equivalence defines an equivalence relation on $A^n$. The sets which are invariant under this equivalence
relation are called \textit{$(a,b)$-insensitive}. That is, a subset $X$ of $A^n$ is $(a,b)$-insensitive provided that for every $z\in X$
and every $y\in A^n$ if $z$ and $y$ are $(a,b)$-equivalent, then $y\in X$. We set
\begin{equation} \label{e2.19}
\cala_{\{a,b\}} = \{ X\subseteq A^n: X \text{ is $(a,b)$-insensitive} \}.
\end{equation}
It follows readily from the above definitions that the family $\cala_{\{a,b\}}$ is an algebra of subsets of $A^n$.
The algebras $\big\{\cala_{\{a,b\}}: \{a,b\}\in {A\choose 2}\big\}$ can then be used to construct various $k\text{-semirings}$
on $A^n$. Specifically, let $\calf\subseteq {A\choose 2}$ and set $k=|\calf|$. By Corollary \ref{c2.4}, we see that the family
constructed from the algebras $\{\cala_{\{a,b\}}: \{a,b\}\in\calf\}$ via formula \eqref{e2.7} is a $k$-semiring on $A^n$.
The maximal semiring obtained in this way corresponds to the family ${A\choose 2}$. We shall denote it by $\cals(A^n)$. In
particular, we have that $\cals(A^n)$ is a $K$-semiring on $A^n$ where $K=|A|(|A|-1)2^{-1}$. Note that $K$ is independent of $n$.
Also observe that if $|A|\meg 3$, then the $\cals(A^n)$-uniformity norm is actually a norm.
\end{examp}
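The notions entering Example \ref{ex3} can also be explored directly on small hypercubes. The following Python fragment, given here purely as an illustration (with the arbitrary choices $A=\{0,1,2\}$ and $n=4$), computes the canonical representative of an $(a,b)$-equivalence class and the smallest $(a,b)$-insensitive set containing a given $X\subseteq A^n$.
\begin{verbatim}
from itertools import product

A, n = (0, 1, 2), 4

def ab_class(word, a, b):
    # canonical representative of the (a,b)-equivalence class of word:
    # coordinates taking values in {a,b} are forgotten, the rest are kept
    return tuple('*' if c in (a, b) else c for c in word)

def insensitive_hull(X, a, b):
    # smallest (a,b)-insensitive subset of A^n containing X
    classes = {ab_class(w, a, b) for w in X}
    return {w for w in product(A, repeat=n) if ab_class(w, a, b) in classes}

X = {(0, 1, 2, 0), (1, 1, 0, 2)}
print(sorted(insensitive_hull(X, 0, 1)))  # union of two (0,1)-classes
\end{verbatim}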
\section{The main result}
\numberwithin{equation}{section}
First we introduce some terminology and some pieces of notation. We say that a function $F\colon\nn\to\rr$ is a
\textit{growth function} provided that: (i) $F$ is increasing, and (ii)~$F(n)\meg n+1$ for every $n\in\nn$. Moreover,
for every nonempty set $\Omega$ and every finite partition $\calp$ of $\Omega$ by $\cala_{\calp}$ we shall denote the
$\sigma\text{-algebra}$ on $\Omega$ generated~by~$\calp$. Clearly, the $\sigma$-algebra $\cala_{\calp}$ is finite and
its nonempty atoms are precisely the members of $\calp$. Also note that if $\calq$ and $\calp$ are two finite partitions of
$\Omega$, then $\calq$ is a refinement of $\calp$ if and only if $\cala_{\calq}\supseteq \cala_{\calp}$.
Now for every pair $k,\ell$ of positive integers, every $0<\sigma\mik 1$, every $1<p\mik 2$ and every growth function
$F\colon\nn\to\rr$ we define $h\colon \nn\to\nn$ recursively by the rule
\begin{equation} \label{e3.1}
\begin{cases}
h(0)=0, \\
h(i+1)=h(i)+\lceil \sigma^2\, \ell\, F^{(h(i)+2)}\!(0)^2 (p-1)^{-1}\rceil
\end{cases}
\end{equation}
and we set
\begin{equation} \label{e3.2}
R= h\big(\lceil \ell\, \sigma^{-2}(p-1)^{-1}\rceil -1 \big).
\end{equation}
Finally, we define
\begin{equation} \label{e3.3}
\mathrm{Reg}(k,\ell,\sigma,p,F)= F^{(R)}(0).
\end{equation}
Note that if $F\colon\nn\to\nn$ is a primitive recursive growth function which belongs to the class $\mathcal{E}^n$
of Grzegorczyk's hierarchy for some $n\in\nn$ (see, e.g., \cite{Ros}), then the numbers $\mathrm{Reg}(k,\ell,\sigma,p,F)$
are controlled by a primitive recursive function belonging to the class $\mathcal{E}^m$ where $m=\max\{4, n+2\}$.
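Since the numbers $\mathrm{Reg}(k,\ell,\sigma,p,F)$ are given by a completely explicit recursion, they can also be evaluated mechanically, at least for slowly growing $F$. The following Python fragment is included only to make the recursion \eqref{e3.1}--\eqref{e3.3} concrete; note that $k$ does not enter \eqref{e3.1}--\eqref{e3.3} themselves (it enters the estimates through the quantity $\mathrm{Reg}'$ defined in \eqref{e3.6} below).
\begin{verbatim}
import math

def iterate(F, times, n=0):
    # F^{(times)}(n)
    for _ in range(times):
        n = F(n)
    return n

def Reg(k, ell, sigma, p, F):
    # k is carried along for notational consistency only
    L = math.ceil(ell * sigma ** (-2) / (p - 1))
    h = 0                                  # h(0) = 0
    for _ in range(L - 1):                 # compute h(L - 1), cf. (3.1)-(3.2)
        h += math.ceil(sigma ** 2 * ell * iterate(F, h + 2) ** 2 / (p - 1))
    return iterate(F, h)                   # Reg = F^{(R)}(0) with R = h(L - 1)

print(Reg(k=2, ell=1, sigma=0.5, p=2, F=lambda m: m + 1))
\end{verbatim}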
We are now ready to state the main result of this paper.
\begin{thm} \label{t3.1}
Let $k, \ell$ be positive integers, $0<\sigma\mik 1$, $1<p\mik 2$ and $F\colon\nn\to \rr$ a growth function. Also let
$(\bo,\calf,\pp)$ be a probability space and $(\cals_i)$ an increasing sequence of $k$-semirings on $\Omega$
with $\cals_i\subseteq \calf$ for every $i\in\nn$. Finally, let $\calc$ be a family in $L_p(\bo,\calf,\pp)$ such that
$\|f\|_{L_p}\mik 1$ for every $f\in\calc$ and with $|\calc|=\ell$. Then there exist
\begin{enumerate}
\item[(a)] a natural number $N$ with $N\mik \mathrm{Reg}(k,\ell,\sigma,p,F)$,
\item[(b)] a partition $\calp$ of $\Omega$ with $\calp\subseteq \cals_N$ and $|\calp|\mik (k+1)^N$, and
\item[(c)] a finite refinement $\calq$ of $\calp$ with $\calq\subseteq\cals_i$ for some $i\meg N$
\end{enumerate}
such that for every $f\in\calc$, writing $f=f_{\mathrm{str}}+ f_{\mathrm{err}}+ f_{\mathrm{unf}}$ where
\begin{equation} \label{e3.4}
f_{\mathrm{str}}=\ave(f \, | \, \cala_{\calp}), \ \
f_{\mathrm{err}}=\ave(f \, | \, \cala_{\calq})-\ave(f \, | \, \cala_{\calp}) \ \text{ and } \
f_{\mathrm{unf}}=f-\ave(f \, | \, \cala_{\calq}),
\end{equation}
we have the estimates
\begin{equation} \label{e3.5}
\| f_{\mathrm{err}}\|_{L_p}\mik \sigma \ \text{ and } \ \|f_{\mathrm{unf}}\|_{\cals_i}\mik \frac{1}{F(i)}
\end{equation}
for every $i\in\{0,\dots,F(N)\}$.
\end{thm}
The case ``$p=2$" in Theorem \ref{t3.1} is essentially due to Tao \cite{Tao1,Tao2,Tao3}. His approach, however,
is somewhat different since he works with $\sigma\text{-algebras}$ instead of $k$-semirings.
The increasing sequence $(\cals_i)$ of $k$-semirings can be thought of as the higher-complexity analogue of the
classical concept of a filtration in the theory of martingales. In fact, this is more than an analogy since, by
applying Theorem \ref{t3.1} to appropriately selected filtrations, one is able to recover the fact that, for any
$1<p\mik 2$, every $L_p$ bounded martingale is $L_p$ convergent. We discuss these issues in Appendix B.
We also note that the idea to obtain ``uniformity" estimates with respect to an arbitrary growth function has been
considered by several authors. This particular feature is essential when one wishes to iterate this structural decomposition
(this is the case, for instance, in the context of hypergraphs---see, e.g., \cite{Tao1}). On the other hand, the need
to ``regularize"\!, simultaneously, a finite family of random variables appears frequently in extremal combinatorics and
related parts of Ramsey theory (see, e.g., \cite{DKT2}). Nevertheless, in most applications (including the applications
presented in Section 4), one deals with a single random variable and with a single semiring. Hence, we will isolate
this special case in order to facilitate future references.
To this end, for every positive integer $k$, every $0<\sigma\mik1$, every $1<p\mik 2$ and every growth function
$F\colon\nn\to\rr$ we set
\begin{equation} \label{e3.6}
\mathrm{Reg}'(k,\sigma,p,F)=(k+1)^{\mathrm{Reg}(k,1,\sigma,p,F')}
\end{equation}
where $F'\colon \nn\to\rr$ is the growth function defined by the rule $F'(n)=F\big((k+1)^n\big)$ for every $n\in\nn$.
We have the following corollary.
\begin{cor} \label{c3.2}
Let $k$ be a positive integer, $0<\sigma\mik 1$, $1<p\mik 2$ and $F\colon\nn\to \rr$ a growth function. Also let
$(\bo,\calf,\pp)$ be a probability space and let $\cals$ be a $k\text{-semiring}$ on $\Omega$ with $\cals\subseteq\calf$.
Finally, let $f\in L_p(\bo,\calf,\pp)$ with $\|f\|_{L_p}\mik 1$. Then there exist
\begin{enumerate}
\item[(a)] a positive integer $M$ with $M\mik \mathrm{Reg}'(k,\sigma,p,F)$,
\item[(b)] a partition $\calp$ of $\Omega$ with $\calp\subseteq \cals$ and $|\calp|= M$, and
\item[(c)] a finite refinement $\calq$ of $\calp$ with $\calq\subseteq\cals$
\end{enumerate}
such that, writing $f=f_{\mathrm{str}}+ f_{\mathrm{err}}+ f_{\mathrm{unf}}$ where
\begin{equation} \label{e3.7}
f_{\mathrm{str}}=\ave(f \, | \, \cala_{\calp}), \ \
f_{\mathrm{err}}=\ave(f \, | \, \cala_{\calq})-\ave(f \, | \, \cala_{\calp}) \ \text{ and } \
f_{\mathrm{unf}}=f-\ave(f \, | \, \cala_{\calq}),
\end{equation}
we have the estimates
\begin{equation} \label{e3.8}
\| f_{\mathrm{err}}\|_{L_p}\mik \sigma \ \text{ and } \ \|f_{\mathrm{unf}}\|_{\cals}\mik \frac{1}{F(M)}.
\end{equation}
\end{cor}
Finally, we notice that the assumption that $1< p\mik 2$ in the above results is not restrictive, since the case of random
variables in $L_p$ for $p>2$ is reduced to the case $p=2$. On the other hand, we remark that Theorem \ref{t3.1} does
not hold true for $p=1$ (see Appendix B). Thus, the range of $p$ in Theorem \ref{t3.1} is optimal.
\subsection{Proof of Theorem \ref{t3.1}}
We start with the following lemma.
\begin{lem} \label{l3.3}
Let $k$ be a positive integer, $p\meg 1$ and $0<\delta\mik 1$. Also let $(\bo,\calf,\pp)$ be a probability
space, $\Sigma$ a $k$-semiring on $\Omega$ with $\Sigma\subseteq \calf$, $\calq$ a finite partition of $\Omega$ with
$\calq\subseteq\Sigma$ and $f\in L_p(\bo,\calf,\pp)$ with $\|f-\ave(f \, | \, \cala_{\calq})\|_{\Sigma}> \delta$.
Then there exists a refinement $\calr$ of $\calq$ with $\calr\subseteq \Sigma$ and $|\calr|\mik |\calq| (k+1)$, and
such that $\|\ave(f \, | \, \cala_{\calr})-\ave(f \, | \, \cala_{\calq})\|_{L_p} >\delta$.
\end{lem}
\begin{proof}
By our assumptions, there exists $S\in\Sigma$ such that
\begin{equation} \label{e3.9}
\big| \int_S \big( f-\ave(f \, | \, \cala_{\calq}) \big)\, d\pp \big|> \delta.
\end{equation}
Since $\Sigma$ is a $k$-semiring on $\Omega$, there exists a refinement $\calr$ of $\calq$ such that: (i) $\calr\subseteq\Sigma$,
(ii) $|\calr|\mik |\calq| (k+1)$, and (iii) $S\in\cala_{\calr}$. It follows, in particular, that
\begin{equation} \label{e3.10}
\int_S \ave(f \, | \, \cala_{\calr})\, d\pp=\int_S f\, d\pp.
\end{equation}
Hence, by \eqref{e3.9} and the monotonicity of the $L_p$ norms, we obtain that
\begin{eqnarray} \label{e3.11}
\delta & < & \big| \int_S \big( \ave(f \, | \, \cala_{\calr})-\ave(f \, | \, \cala_{\calq}) \big)\, d\pp \big| \\
& \mik & \|\ave(f \, | \, \cala_{\calr})-\ave(f \, | \, \cala_{\calq})\|_{L_1}
\mik \|\ave(f \, | \, \cala_{\calr})-\ave(f \, | \, \cala_{\calq})\|_{L_p} \nonumber
\end{eqnarray}
and the proof is completed.
\end{proof}
We proceed with the following lemma.
\begin{lem} \label{l3.4}
Let $k, \ell$ be positive integers, $0<\delta,\sigma \mik 1$ and $1<p\mik 2$, and set
\begin{equation} \label{e3.12}
n=\Big\lceil \frac{\sigma^2 \ell}{\delta^2 (p-1)}\Big\rceil.
\end{equation}
Also let $(\bo,\calf,\pp)$ be a probability space and let $(\Sigma_i)$ be an increasing sequence of $k\text{-semirings}$
on $\Omega$ with $\Sigma_i\subseteq \calf$ for every $i\in\nn$. Finally, let $m\in\nn$ and $\calp$ a partition of $\bo$
with $\calp\subseteq\Sigma_m$ and $|\calp|\mik (k+1)^m$. Then for every family $\calc$ in $L_p(\bo,\calf,\pp)$ with $|\calc|=\ell$
there exist $j\in\{m,\dots,m+n\}$ and a refinement $\calq$ of $\calp$ with $\calq\subseteq\Sigma_j$ and $|\calq|\mik (k+1)^j$,
and such that either
\begin{enumerate}
\item[(a)] $\|\ave(f \, | \, \cala_{\calq})-\ave(f \, | \, \cala_{\calp})\|_{L_p}>\sigma$ for some $f\in\calc$, or
\item[(b)] $\|\ave(f \, | \, \cala_{\calq})-\ave(f \, | \, \cala_{\calp})\|_{L_p}\mik \sigma$ and
$\|f-\ave(f \, | \, \cala_{\calq})\|_{\Sigma_{j+1}}\mik \delta$ for every $f\in\calc$.
\end{enumerate}
\end{lem}
The case ``$p=2$" in Lemma \ref{l3.4} can be proved with an ``energy increment strategy" which ultimately depends upon the fact
that martingale difference sequences are orthogonal in $L_2$ (see, e.g., \cite[Theorem 2.11]{Tao2}). In the non-Hilbertian
case (that is, when $1<p<2$) the geometry is more subtle and we will rely, instead, on Proposition \ref{pa.1}. The argument
can therefore be seen as the $L_p$-version of the ``energy increment strategy"\!. More applications of this method
are given in \cite{DKK,DKT3}.
\begin{proof}[Proof of Lemma \emph{\ref{l3.4}}]
Assume that the first part of the lemma is not satisfied. Note that this is equivalent to saying that
\begin{enumerate}
\item[(H1)] for every $j\in\{m,\dots,m+n\}$, every refinement $\calq$ of $\calp$ with $\calq\subseteq\Sigma_j$ and
$|\calq|\mik (k+1)^j$ and every $f\in\calc$ we have $\|\ave(f\, | \, \cala_{\calq})-\ave(f\, | \, \cala_{\calp})\|_{L_p}\mik\sigma$.
\end{enumerate}
We will use hypothesis (H1) to show that part (b) is satisfied.
To this end we will argue by contradiction. Let $j\in\{m,\dots,m+n\}$ and let $\calq$ be a refinement of $\calp$ with
$\calq\subseteq\Sigma_j$ and $|\calq|\mik (k+1)^j$. Observe that hypothesis (H1) and our assumption that part (b) does
not hold true, imply that there exists $f\in\calc$ (possibly depending on the partition $\calq$) such that
$\|f-\ave(f \, | \, \cala_{\calq})\|_{\Sigma_{j+1}}>\delta$. Since the sequence $(\Sigma_i)$ is increasing, Lemma \ref{l3.3}
can be applied to the $k$-semiring $\Sigma_{j+1}$, the partition $\calq$ and the random variable $f$. Hence, we obtain that
\begin{enumerate}
\item[(H2)] for every $j\in\{m,\dots,m+n\}$ and every refinement $\calq$ of $\calp$ with $\calq\subseteq\Sigma_j$ and
$|\calq|\mik (k+1)^j$ there exist $f\in\calc$ and a refinement $\calr$ of $\calq$ with $\calr\subseteq\Sigma_{j+1}$ and
$|\calr|\mik (k+1)^{j+1}$, and such that $\|\ave(f \, | \, \cala_{\calr})-\ave(f \, | \, \cala_{\calq})\|_{L_p}> \delta$.
\end{enumerate}
Recursively and using hypothesis (H2), we select a finite sequence $\calp_0,\dots,\calp_n$ of partitions of $\Omega$
with $\calp_0=\calp$ and a finite sequence $f_1,\dots,f_n$ in $\calc$ such that for every $i\in [n]$ we have:
(P1) $\calp_i$ is a refinement of $\calp_{i-1}$, (P2) $\calp_i\subseteq \Sigma_{m+i}$ and $|\calp_i|\mik (k+1)^{m+i}$,
and (P3) $\|\ave(f_i \, | \, \cala_{\calp_i})-\ave(f_i \, | \, \cala_{\calp_{i-1}})\|_{L_p}>\delta$. It follows,
in particular, that $(\cala_{\calp_i})_{i=0}^n$ is an increasing sequence of finite sub-$\sigma$-algebras of $\calf$.
Also note that, by the classical pigeonhole principle and the fact that $|\calc|=\ell$, there exist $g\in\calc$ and
$I\subseteq [n]$ with $|I|\meg n/\ell$ and such that $g=f_i$ for every $i\in I$.
Next, set $f=g-\ave(g\, | \, \cala_{\calp})$ and let $(d_i)_{i=0}^n$ be the difference sequence associated with the finite
martingale $\ave(f\, | \, \cala_{\calp_0}),\dots,\ave(f\, | \, \cala_{\calp_n})$. Observe that for every $i\in I$ we have
$d_i=\ave(g \, | \, \cala_{\calp_i})-\ave(g \, | \, \cala_{\calp_{i-1}})$ and so, by the choice of $I$ and property (P3),
we obtain that $\|d_i\|_{L_p}>\delta$ for every $i\in I$. Therefore, by Proposition \ref{pa.1}, we have
\begin{eqnarray} \label{e3.13}
\sigma & \stackrel{\eqref{e3.12}}{\mik} & \sqrt{p-1}\,\delta \Big(\frac{n}{\ell}\Big)^{1/2} \mik
\sqrt{p-1}\, \delta |I|^{1/2} \\
& < & \sqrt{p-1} \cdot \Big( \sum_{i=0}^n \|d_i\|^2_{L_p} \Big)^{1/2} \nonumber \\
& \stackrel{\eqref{ea.5}}{\mik} & \big\| \sum_{i=0}^n d_i \big\|_{L_p} =
\|\ave(g \, | \, \cala_{\calp_n})-\ave(g \, | \, \cala_{\calp})\|_{L_p}. \nonumber
\end{eqnarray}
On the other hand, by properties (P1) and (P2), we see that $\calp_n$ is a refinement of $\calp$ with $\calp_n\subseteq\Sigma_{m+n}$
and $|\calp_n|\mik (k+1)^{m+n}$. Therefore, by hypothesis (H1), we must have
$\|\ave(g \, | \, \cala_{\calp_n}\!)-\ave(g \, | \, \cala_{\calp})\|_{L_p}\mik \sigma$ which contradicts, of course, the estimate
in \eqref{e3.13}. The proof of Lemma \ref{l3.4} is thus completed.
\end{proof}
The following lemma is the last step of the proof of Theorem \ref{t3.1}.
\begin{lem} \label{l3.5}
Let $k, \ell$ be positive integers, $0<\sigma \mik 1$, $1<p\mik 2$ and $H\colon\nn\to\rr$ a growth function.
Set $L=\lceil \ell\, \sigma^{-2}(p-1)^{-1}\rceil$ and define $(n_i)$ recursively by the rule
\begin{equation} \label{e3.14}
\begin{cases}
n_0=0, \\
n_{i+1}=n_i+\lceil \sigma^2\, \ell\, H(n_i)^2 (p-1)^{-1}\rceil.
\end{cases}
\end{equation}
Also let $(\bo,\calf,\pp)$ be a probability space and let $(\Sigma_i)$ be an increasing sequence of $k$-semirings on $\Omega$
with $\Sigma_i\subseteq\calf$ for every $i\in\nn$. Finally, let $\calc$ be a family in $L_p(\bo,\calf,\pp)$ such that $\|f\|_{L_p}\mik 1$
for every $f\in\calc$ and with $|\calc|=\ell$. Then there exist $j\in \{0,\dots,L-1\}$, $J\in \{n_j,\dots,n_{j+1}\}$ and two partitions
$\calp, \calq$ of $\Omega$ with the following properties: \emph{(i)} $\calp\subseteq\Sigma_{n_j}$ and $\calq\subseteq \Sigma_J$,
\emph{(ii)} $|\calp|\mik (k+1)^{n_j}$ and $|\calq|\mik (k+1)^J$, \emph{(iii)} $\calq$ is a refinement of\, $\calp$, and \emph{(iv)}
$\|\ave(f \, | \, \cala_{\calq})-\ave(f \, | \, \cala_{\calp})\|_{L_p}\mik\sigma$ and
$\|f-\ave(f \, | \, \cala_{\calq})\|_{\Sigma_{J+1}}\mik 1/H(n_j)$ for every $f\in\calc$.
\end{lem}
\begin{proof}
It is similar to the proof of Lemma \ref{l3.4}. Indeed, assume, towards a contradiction, that the lemma is false.
Recursively and using Lemma \ref{l3.4}, we select a finite sequence $J_0,\dots, J_L$ in $\nn$ with $J_0=0$, a finite
sequence $\calp_0,\dots,\calp_L$ of partitions of $\bo$ with $\calp_0=\{\bo\}$ and a finite sequence $f_1,\dots,f_L$
in $\calc$ such that for every $i\in [L]$ we have that: (P1) $J_i\in \{n_{i-1},\dots,n_i\}$, (P2) the partition $\calp_i$ is
a refinement of $\calp_{i-1}$, (P3) $\calp_i\subseteq \Sigma_{J_i}$ with $|\calp_i|\mik (k+1)^{J_i}$, and (P4)
$\|\ave(f_i \, | \, \cala_{\calp_i})-\ave(f_i \, | \, \cala_{\calp_{i-1}})\|_{L_p}>\sigma$. As in the proof of Lemma \ref{l3.4},
we observe that $(\cala_{\calp_i})_{i=0}^L$ is an increasing sequence of finite sub-$\sigma$-algebras of $\calf$,
and we select $g\in\calc$ and $I\subseteq [L]$ with $|I|\meg L/\ell$ and such that $g=f_i$ for every $i\in I$.
Let $(d_i)_{i=0}^L$ be the difference sequence associated with the finite martingale
$\ave(g\, | \, \cala_{\calp_0}),\dots,\ave(g\, | \, \cala_{\calp_L})$. Notice that, by property (P4), we have $\|d_i\|_{L_p}>\sigma$
for every $i\in I$. Hence, by the choice of $L$, Proposition \ref{pa.1} and the fact that $\|g\|_{L_p}\mik 1$, we conclude that
\begin{eqnarray} \label{e3.15}
1 & \mik & \sqrt{p-1}\, \sigma |I|^{1/2} <
\sqrt{p-1} \cdot \Big( \sum_{i=0}^L \|d_i\|^2_{L_p} \Big)^{1/2} \\
& \stackrel{\eqref{ea.5}}{\mik} & \big\| \sum_{i=0}^L d_i \big\|_{L_p} =
\|\ave(g \, | \, \cala_{\calp_L})\|_{L_p} \mik \|g\|_{L_p}\mik 1 \nonumber
\end{eqnarray}
which is clearly a contradiction. The proof of Lemma \ref{l3.5} is completed.
\end{proof}
We are ready to complete the proof of Theorem \ref{t3.1}.
\begin{proof}[Proof of Theorem \emph{\ref{t3.1}}]
Fix the data $k, \ell, \sigma, p$, the growth function $F$, the sequence $(\cals_i)$ and the family $\calc$. We define
$H\colon\nn\to\rr$ by the rule $H(n)=F^{(n+2)}(0)$ and we observe that $H$ is a growth function. Moreover, for every
$i\in\nn$ let $m_i=F^{(i)}(0)$ and set $\Sigma_i=\cals_{m_i}$. Notice that $(\Sigma_i)$ is an increasing sequence of
$k$-semirings of $\bo$ with $\Sigma_i\subseteq\calf$ for every $i\in\nn$.
Let $j,J,\calp$ and $\calq$ be as in Lemma \ref{l3.5} when applied to $k,\ell,\sigma,p,H$, the sequence $(\Sigma_i)$ and the
family $\calc$. We set
\begin{equation} \label{e3.16}
N=m_{n_j}=F^{(n_j)}(0)
\end{equation}
and we claim that the natural number $N$ and the partitions $\calp$ and $\calq$ are as desired.
Indeed, notice first that $n_j\mik n_{L-1}$. Since $F$ is a growth function, by the choice of $h$ and $R$ in \eqref{e3.1}
and \eqref{e3.2} respectively, we have
\begin{equation} \label{e3.17}
N\mik F^{(n_{L-1})}(0)=F^{(R)}(0) \stackrel{\eqref{e3.3}}{=} \mathrm{Reg}(k,\ell,\sigma,p,F).
\end{equation}
On the other hand, note that $n_j\mik F^{(n_j)}(0)=N$ and so $|\calp|\mik (k+1)^{n_j}\mik (k+1)^N$ and
$\calp\subseteq \Sigma_{n_j}=\cals_N$. Moreover, by Lemma \ref{l3.5}, we see that $\calq$ is a finite refinement of $\calp$
with $\calq\subseteq \cals_i$ for some $i\meg N$. It follows that $N, \calp$ and $\calq$ satisfy the requirements of the theorem.
Finally, let $f\in\calc$ be arbitrary and write $f=f_{\mathrm{str}}+ f_{\mathrm{err}}+ f_{\mathrm{unf}}$ where
$f_{\mathrm{str}}=\ave(f \, | \, \cala_{\calp})$, $f_{\mathrm{err}}=\ave(f \, | \, \cala_{\calq})-\ave(f \, | \, \cala_{\calp})$
and $f_{\mathrm{unf}}=f-\ave(f \, | \, \cala_{\calq})$. Invoking Lemma \ref{l3.5}, we obtain that
\begin{equation} \label{e3.18}
\|f_{\mathrm{err}}\|_{L_p} = \|\ave(f \, | \, \cala_{\calq})-\ave(f \, | \, \cala_{\calp})\|_{L_p} \mik \sigma.
\end{equation}
Also observe that $n_j+1\mik J+1$ which is easily seen to imply that $\cals_{F(N)}\subseteq \Sigma_{J+1}$.
Therefore, using Lemma \ref{l3.5} once again, for every $i\in\{0,\dots, F(N)\}$ we have
\begin{eqnarray} \label{e3.19}
\|f_{\mathrm{unf}}\|_{\cals_i} & = & \|f-\ave(f \, | \, \cala_{\calq}) \|_{\cals_i} \mik
\|f-\ave(f \, | \, \cala_{\calq}) \|_{\Sigma_{J+1}} \\
& \mik & \frac{1}{H(n_j)} = \frac{1}{F\big( F(N)\big)} \mik \frac{1}{F(i)}. \nonumber
\end{eqnarray}
The proof of Theorem \ref{t3.1} is completed.
\end{proof}
\section{Applications}
\numberwithin{equation}{section}
\subsection{Uniform partitions}
In this section we will discuss some applications of our main result (more applications can be found in \cite{DK}).
We start with a consequence of Theorem \ref{t3.1} which is closer in spirit to the original formulation of Szemer\'{e}di's
regularity lemma \cite{Sz}.
Recall that if $(\bo,\calf,\pp)$ is a probability space, $f\in L_1(\bo,\calf,\pp)$ and $S\in\calf$ is an event of
non-zero probability, then $\ave(f\, | \, S)$ stands for the conditional expectation of $f$ with respect to $S$, that is,
$\ave(f\, | \, S) =\big( \int_S f\, d\pp\big)/ \pp(S)$. If $\pp(S)=0$, then by convention we set $\ave(f\, | \, S)=0$.
We have the following definition.
\begin{defn} \label{d4.1}
Let $(\bo,\calf,\pp)$ be a probability space, $k$ a positive integer and $\cals$ a $k$-semiring on $\bo$ with $\cals\subseteq\calf$.
Also let $f\in L_1(\bo,\calf,\pp)$, $0<\eta\mik 1$ and $S\in\cals$. We say that the set $S$ is \emph{$(f,\cals,\eta)$-uniform} if for
every $T\subseteq S$ with $T\in\cals$ we have
\begin{equation} \label{e4.1}
\big| \int_T \big(f-\ave(f\, | \, S)\big)\, d\pp \big| \mik \eta\cdot \pp(S).
\end{equation}
Moreover, for every $\calc\subseteq\cals$ we set $\mathrm{Unf}(\calc,f,\eta) = \{ C\in\calc: C \text{ is $(f,\cals,\eta)$-uniform}\}$.
\end{defn}
Notice that if $S\in\cals$ with $\pp(S)=0$, then the set $S$ is $(f,\cals,\eta)$-uniform for every $0<\eta\mik 1$. The same remark
of course applies if the random variable $f$ is constant on $S$. Also note that the concept of $(f,\cals,\eta)$-uniformity is closely
related to the $\cals\text{-uniformity}$ norm. Indeed, let $S\in\cals$ with $\pp(S)>0$ and observe that the set $S$ is
$(f,\cals,\eta)$-uniform if and only if the function $f-\ave(f \, | \, S)$, viewed as a random variable in $L_1(\bo,\calf,\pp_S)$,
has $\cals$-uniformity norm less than or equal to $\eta$. (Here, $\pp_S$ stands for the conditional probability measure of $\pp$
relative to $S$.) In particular, the set $\bo$ is $(f,\cals,\eta)$-uniform if and only if $\|f-\ave(f)\|_{\cals}\mik\eta$.
We have the following proposition (see also \cite[Section 11.6]{TV}).
\begin{prop} \label{p4.2}
For every positive integer $k$, every $1<p\mik 2$ and every $0<\eta\mik 1$ there exists a positive integer $\mathrm{U}(k,p,\eta)$
with the following property. If $(\bo,\calf,\pp)$ is a probability space, $\cals$ a $k$-semiring on $\bo$ with $\cals\subseteq \calf$
and $f\in L_p(\bo,\calf,\pp)$ with $\|f\|_{L_p}\mik 1$, then there exist a positive integer $M\mik \mathrm{U}(k,p,\eta)$ and a partition
$\calp$ of $\bo$ with $\calp\subseteq\cals$ and $|\calp|=M$, and such that
\begin{equation} \label{e4.2}
\sum_{S\in\mathrm{Unf}(\calp,f,\eta)} \pp(S)\meg 1-\eta.
\end{equation}
\end{prop}
The following lemma will enable us to reduce Proposition \ref{p4.2} to Corollary \ref{c3.2}.
\begin{lem} \label{l4.3}
Let $(\bo,\calf,\pp)$ be a probability space, $k$ a positive integer and $\cals$ a $k\text{-semiring}$ on $\bo$ with
$\cals\subseteq \calf$. Also let $\calp$ be a finite partition of $\bo$ with $\calp\subseteq \calf$, $f\in L_1(\bo,\calf,\pp)$
and $0<\eta\mik 1$. Assume that the function $f$ admits a decomposition $f=f_{\mathrm{str}}+ f_{\mathrm{err}}+f_{\mathrm{unf}}$
into integrable random variables such that $f_{\mathrm{str}}$ is constant on each $S\in\calp$ and the functions $f_{\mathrm{err}}$
and $f_{\mathrm{unf}}$ obey the estimates $\|f_{\mathrm{err}}\|_{L_1}\mik \eta^2/8$ and
$\|f_{\mathrm{unf}}\|_{\cals}\mik (\eta^2/8) |\calp|^{-1}$. Then we have
\begin{equation} \label{e4.3}
\sum_{S\notin\mathrm{Unf}(\calp,f,\eta)} \pp(S) \mik \eta.
\end{equation}
\end{lem}
\begin{proof}
Fix $S\notin \mathrm{Unf}(\calp,f,\eta)$. We select $T\subseteq S$ with $T\in\cals$ such that
\begin{equation} \label{e4.4}
\eta\cdot \pp(S) < \big| \int_T \big(f -\mathbb{E}(f\, | \, S)\big)\, d\pp \big|.
\end{equation}
The function $f_{\mathrm{str}}$ is constant on $S$ and so, by \eqref{e4.4}, we see that
\begin{equation} \label{e4.5}
\eta\cdot\pp(S) < \big|\int_T \big(f_{\mathrm{err}}-\mathbb{E}(f_{\mathrm{err}} \, | \, S)\big)\, d\pp \big| +
\big|\int_T\big(f_{\mathrm{unf}} -\mathbb{E}(f_{\mathrm{unf}} \, | \, S)\big)\, d\pp\big|.
\end{equation}
Next observe that
\begin{equation} \label{e4.6}
\big|\int_T\big(f_{\mathrm{err}}- \mathbb{E}(f_{\mathrm{err}} \, | \, S)\big)\, d\pp \big|\mik
2 \mathbb{E}( |f_{\mathrm{err}}| \, | \, S) \cdot \pp(S)
\end{equation}
and
\begin{equation} \label{e4.7}
\big| \int_T\big(f_{\mathrm{unf}} -\mathbb{E}(f_{\mathrm{unf}} \, | \, S)\big)\, d\pp\big| \mik 2\|f_{\mathrm{unf}}\|_{\cals}.
\end{equation}
Finally, notice that $\pp(S)>0$ since $S\notin\mathrm{Unf}(\calp,f,\eta)$. Thus, setting
\begin{equation} \label{e4.8}
\cala=\{S\in\calp: \mathbb{E}( |f_{\mathrm{err}}| \, | \, S)\meg \eta/4\} \ \text{ and } \
\calb=\{S\in\calp: \pp(S)\mik 4\eta^{-1}\|f_{\mathrm{unf}}\|_{\cals}\}
\end{equation}
and invoking \eqref{e4.5}--\eqref{e4.7}, we obtain that $\calp\setminus \mathrm{Unf}(\calp,f,\eta)\subseteq \cala\cup\calb$.
Since the family $\calp$ is a partition, it consists of pairwise disjoint sets. Hence,
\begin{equation} \label{e4.9}
\sum_{S\in\cala} \pp(S) \mik \frac{4}{\eta}\Big( \sum_{S\in\cala} \int_S |f_{\mathrm{err}}|\, d\pp \Big) \mik
\frac{4}{\eta} \|f_{\mathrm{err}}\|_{L_1} \mik \frac{\eta}{2}.
\end{equation}
Moreover,
\begin{equation} \label{e4.10}
\sum_{S\in\calb} \pp(S) \mik \frac{4\|f_{\mathrm{unf}}\|_{\cals}}{\eta} \cdot |\calb|\mik
\frac{4\|f_{\mathrm{unf}}\|_{\cals}}{\eta} \cdot |\calp| \mik \frac{\eta}{2}.
\end{equation}
By \eqref{e4.9} and \eqref{e4.10} and using the inclusion $\calp\setminus \mathrm{Unf}(\calp,f,\eta)\subseteq \cala\cup\calb$,
we conclude that the estimate in \eqref{e4.3} is satisfied and the proof is completed.
\end{proof}
We proceed to the proof of Proposition \ref{p4.2}.
\begin{proof}[Proof of Proposition \emph{\ref{p4.2}}]
Fix $k,p$ and $\eta$. We set $\sigma=\eta^2/8$ and we define $F\colon\nn\to \rr$ by the rule
$F(n)=(n/\sigma)+1=(8n/\eta^2)+1$ for every $n\in\nn$. Notice that $F$ is a growth function. We set
\begin{equation} \label{e4.11}
\mathrm{U}(k,p,\eta)= \mathrm{Reg}'(k,\sigma,p, F)
\end{equation}
and we claim that $\mathrm{U}(k,p,\eta)$ is as desired. Indeed, let $(\bo,\calf,\pp)$ be a probability
space and $\cals$ a $k$-semiring on $\Omega$ with $\cals\subseteq\calf$. Also let $f\in L_p(\bo,\calf,\pp)$ with
$\|f\|_{L_p}\mik 1$. By Corollary \ref{c3.2}, there exist a positive integer $M\mik \mathrm{U}(k,p,\eta)$, a partition $\calp$
of $\bo$ with $\calp\subseteq\cals$ and $|\calp|=M$, and a finite refinement $\calq$ of $\calp$ with $\calq\subseteq\cals$
such that, setting
\begin{equation} \label{e4.12}
f_{\mathrm{str}}=\ave(f \, | \, \cala_{\calp}), \ \ f_{\mathrm{err}}=\ave(f \, | \, \cala_{\calq})-\ave(f \, | \, \cala_{\calp})
\ \text{ and } \ f_{\mathrm{unf}}=f-\ave(f \, | \, \cala_{\calq}),
\end{equation}
we have the estimates $\|f_{\mathrm{err}}\|_{L_p}\mik \sigma$ and $\|f_{\mathrm{unf}}\|_{\cals}\mik 1/F(M)$. It follows
that $f$ admits a decomposition $f=f_{\mathrm{str}}+f_{\mathrm{err}}+ f_{\mathrm{unf}}$ into integrable random
variables such that $f_{\mathrm{str}}$ is constant on each $S\in\calp$, $\|f_{\mathrm{err}}\|_{L_p}\mik \sigma$ and
$\|f_{\mathrm{unf}}\|_{\cals}\mik 1/F(M)$. Notice that, by the monotonicity of the $L_p$ norms, we have
$\|f_{\mathrm{err}}\|_{L_1}\mik \sigma$. Hence, by Lemma \ref{l4.3} and the choice of $\sigma$ and $F$, we conclude
that the estimate in \eqref{e4.2} is satisfied and the proof of Proposition \ref{p4.2} is completed.
\end{proof}
We close this subsection by presenting an application of Proposition \ref{p4.2} for subsets of hypercubes
(see also \cite[Section 2.1.3]{Tao3}). Specifically, let $A$ be a finite alphabet with $|A|\meg 2$ and set $K=|A|(|A|-1)2^{-1}$.
Also let $n$ be a positive integer. As in Example \ref{ex3}, we view $A^n$ as a discrete probability space equipped
with the uniform probability measure which we shall denote by $\pp$. More generally, for every nonempty subset $S$
of $A^n$ by $\pp_S$ we shall denote the uniform probability measure concentrated on $S$, that is,
$\pp_S(X)=|X\cap S|/|S|$ for every $X\subseteq A^n$. Recall that $\cals(A^n)$ stands for the $K\text{-semiring}$
on $A^n$ consisting of all subsets $X$ of $A^n$ which are written as
\begin{equation} \label{e4.13}
X=\bigcap_{\{a,b\}\in{A\choose 2}} X_{\{a,b\}}
\end{equation}
where $X_{\{a,b\}}$ is $(a,b)$-insensitive for every $\{a,b\} \in {A\choose 2}$.
Now let $D$ be a subset of $A^n$, $0<\ee\mik 1$ and $S\in\cals(A^n)$ with $S\neq\emptyset$. Notice that the set $S$ is
$(\mathbf{1}_D,\cals(A^n),\ee^2)$-uniform if and only if for every nonempty $T\subseteq S$ with $T\in\cals(A^n)$ we have
\begin{equation} \label{e4.14}
|\pp_T(D)-\pp_S(D)| \cdot \pp(T) \mik \ee^2 \cdot\pp(S).
\end{equation}
In particular, if $S$ is nonempty and $(\mathbf{1}_D,\cals(A^n),\ee^2)$-uniform, then for every $T\subseteq S$ with
$T\in\cals(A^n)$ and $|T|\meg\ee |S|$ we have $|\pp_T(D)-\pp_S(D)| \mik\ee$. Thus, by Proposition \ref{p4.2} and taking
into account these remarks, we obtain the following corollary.
\begin{cor} \label{c4.4}
For every integer $k\meg 2$ and every $0<\ee\mik 1$ there exists a positive integer $N(k,\ee)$ with the following property.
If $n$ is a positive integer, $A$ is an alphabet with $|A|=k$ and $D$ is a subset of $A^n$, then there exist a positive integer
$M\mik N(k,\ee)$, a partition $\calp$ of $A^n$ with $\calp\subseteq\cals(A^n)$ and $|\calp|=M$, and a subfamily
$\calp'\subseteq \calp$ with $\pp(\cup\calp')\meg 1-\ee$ such that
\begin{equation} \label{e4.15}
|\pp_T(D)-\pp_S(D)|\mik\ee
\end{equation}
for every $S\in\calp'$ and every $T\subseteq S$ with $T\in\cals(A^n)$ and $|T|\meg\ee |S|$.
\end{cor}
\subsection{$L_p$ graphons}
Our last application is an extension of the so-called \textit{strong regularity lemma for $L_2$ graphons} (see, e.g., \cite{L,LS}).
To state this extension we need to introduce some terminology and notation related to graphons.
Let $(\bo,\calf,\pp)$ be a probability space and recall that a \textit{graphon}\footnote[1]{In several places
in the literature, graphons are required to be $[0,1]$-valued, and the term \textit{kernel} is used for (not necessarily bounded)
integrable, symmetric random variables.} is an integrable random variable $W\colon\bo\times\bo\to\rr$ which is symmetric, that is,
$W(x,y)=W(y,x)$ for every $x,y\in \bo$. If $p>1$ and $W$ is a graphon which belongs to $L_p$, then $W$ is said to be an
\textit{$L_p$ graphon} (see, e.g., \cite{BCCZ}).
Now let $\calr$ be a finite partition of $\bo$ with $\calr\subseteq\calf$ and notice that the family
\begin{equation} \label{e4.16}
\calr^2=\{S\times T: S,T\in\calr\}
\end{equation}
is a finite partition of $\bo\times\bo$. As in Section 3, let $\cala_{\calr^2}$ be the $\sigma$-algebra on $\bo\times\bo$
generated by $\calr^2$ and observe that $\cala_{\calr^2}$ consists of measurable sets. If $W\colon\bo\times\bo\to\rr$ is
a graphon, then the conditional expectation of $W$ with respect to $\cala_{\calr^2}$ is usually denoted by $W_\calr$.
Note that $W_{\calr}$ is also a graphon and satisfies (see, e.g., \cite{L})
\begin{equation} \label{e4.17}
\|W_{\calr}\|_{\square}\mik \|W\|_{\square}
\end{equation}
where $\|\cdot\|_{\square}$ is the cut norm defined in \eqref{e2.14}. On the other hand, by standard properties of the
conditional expectation (see, e.g., \cite{Du}), we have $\|W_{\calr}\|_{L_p}\mik \|W\|_{L_p}$ for any $p\meg 1$. It follows,
in particular, that $W_{\calr}$ is an $L_p$ graphon provided, of course, that $W\in L_p$.
We have the following corollary.
\begin{cor}[Strong regularity lemma for $L_p$ graphons] \label{c4.5}
For every $0<\ee\mik 1$, every $1<p\mik 2$ and every positive function $h\colon\nn\to\rr$ there exists a positive
integer $\mathrm{s}(\ee,p,h)$ with the following property. If $(\bo,\calf,\pp)$ is a probability space
and $W\colon\bo\times\bo\to\rr$ is an $L_p$ graphon with $\|W\|_{L_p}\mik 1$, then there exist a partition $\calr$ of $\bo$
with $\calr\subseteq\calf$ and $|\calr|\mik \mathrm{s}(\ee,p,h)$, and an $L_p$ graphon $U\colon\bo\times\bo\to\rr$
such that $\|W-U\|_{L_p}\mik \ee$ and $\|U-U_{\calr}\|_{\square}\mik h\big(|\calr|\big)$.
\end{cor}
\begin{proof}
Fix the constants $\ee, p$ and the function $h$, and define $F\colon\nn\to\rr$ by the rule
\begin{equation} \label{e4.18}
F(n)=(n+1)+\sum_{i=0}^n \frac{8}{h(i)}.
\end{equation}
Notice that $F$ is a growth function. We set
\begin{equation} \label{e4.19}
\mathrm{s}(\ee,p,h)=\mathrm{Reg}'(4,\ee,p,F)
\end{equation}
and we claim that with this choice the result follows.
Indeed, let $(\bo,\calf,\pp)$ be a probability space and fix an $L_p$ graphon $W\colon\bo\times\bo\to\rr$ with
$\|W\|_{L_p}\mik 1$. Also let $\Sigma_{\square}$ be the $4$-semiring on $\bo\times\bo$ which is defined via formula \eqref{e2.15}
for the given probability space $(\bo,\calf,\pp)$. We apply Corollary \ref{c3.2} to $\Sigma_{\square}$ and the random variable
$W$ and we obtain
\begin{enumerate}
\item[(a)] a partition $\calp$ of $\bo\times\bo$ with $\calp\subseteq\Sigma_{\square}$ and $|\calp|\mik\mathrm{Reg}'(4,\ee,p,F)$, and
\item[(b)] a finite refinement $\calq$ of $\calp$ with $\calq\subseteq\Sigma_{\square}$
\end{enumerate}
such that, writing the graphon $W$ as $W_{\mathrm{str}}+W_{\mathrm{err}}+W_{\mathrm{unf}}$ where
$W_{\mathrm{str}}=\ave(W\, |\, \cala_{\calp})$, $W_{\mathrm{err}}=\ave(W \, | \, \cala_{\calq})-\ave(W \, | \, \cala_{\calp})$
and $W_{\mathrm{unf}}=W-\ave(W \, | \, \cala_{\calq})$, we have the estimates $\|W_{\mathrm{err}}\|_{L_p}\mik \ee$ and
$\|W_{\mathrm{unf}}\|_{\Sigma_{\square}}\mik 1/F\big(|\calp|\big)$. Note that, by (a) and (b) and the definition of the $4$-semiring
$\Sigma_{\square}$ in \eqref{e2.15}, there exist two finite partitions $\calr,\calz$ of $\bo$ with $\calr,\calz\subseteq\calf$ and
such that $\calp=\calr^2$ and $\calq=\calz^2$. It follows, in particular, that the random variables $W_{\mathrm{str}}, W_{\mathrm{err}}$
and $W_{\mathrm{unf}}$ are all $L_p$ graphons.
We will show that the partition $\calr$ and the $L_p$ graphon $U\coloneqq W_{\mathrm{str}}+W_{\mathrm{unf}}$ are as desired.
To this end notice first that
\begin{equation} \label{e4.20}
|\calr| \mik |\calr^2| = |\calp| \mik \mathrm{Reg}'(4,\ee,p,F)\stackrel{\eqref{e4.19}}{=}\mathrm{s}(\ee,p,h).
\end{equation}
Next observe that
\begin{equation} \label{e4.21}
\|W-U\|_{L_p} = \| W_{\mathrm{err}}\|_{L_p} \mik \ee.
\end{equation}
Finally note that, by \eqref{e4.17}, we have $\|(W_{\mathrm{unf}})_{\calr}\|_{\square}\mik \|W_{\mathrm{unf}}\|_{\square}$.
Moreover, the fact that $\calp=\calr^2$ and the choice of $W_{\mathrm{str}}$ yield that $(W_{\mathrm{str}})_{\calr}=W_{\mathrm{str}}$.
Therefore,
\begin{eqnarray} \label{e4.22}
\|U-U_{\calr}\|_{\square} & \mik & 2\|W_{\mathrm{unf}}\|_{\square} \stackrel{\eqref{e2.16}}{\mik}
8\|W_{\mathrm{unf}}\|_{\Sigma_{\square}} \mik \frac{8}{F\big(|\calp|\big)} \\
& \stackrel{\eqref{e4.20}}{\mik} & \frac{8}{F\big(|\calr|\big)} \stackrel{\eqref{e4.18}}{\mik} h\big( |\calr|\big) \nonumber
\end{eqnarray}
and the proof of Corollary \ref{c4.5} is completed.
\end{proof}
\begin{rem} \label{r1}
Recently, Borgs, Chayes, Cohn and Zhao \cite{BCCZ} extended the weak regularity lemma to $L_p$ graphons
for any $p>1$. Their extension follows, of course, from Corollary \ref{c4.5}, but this reduction is rather ineffective
since the bound obtained by Corollary \ref{c4.5} is quite poor. However, this estimate can be significantly improved
if instead of invoking Corollary \ref{c3.2}, one argues directly as in the proof of Lemma \ref{l3.4}. More precisely,
note that for every $0<\ee\mik 1$, every $1< p\mik 2$, every probability space $(\bo,\calf,\pp)$ and every
$L_p$ graphon $W\colon \bo\times\bo\to\rr$ with $\|W\|_{L_p}\mik 1$ there exists a partition $\calr$ of $\bo$
with $\calr\subseteq \calf$ and
\begin{equation} \label{e4.23}
|\calr|\mik 4^{(p-1)^{-1}\ee^{-2}}
\end{equation}
and such that $\|W-W_{\calr}\|_{\square}\mik \ee$. The estimate in \eqref{e4.23} matches the bound for the weak regularity lemma
for the case of $L_2$ graphons (see, e.g., \cite{L}) and is essentially optimal.
\end{rem}
\section{Introduction}
In this paper we study the relation between the BFKL perturbative resummation of leading logarithms \cite{bfkl} at high energy and the JIMWLK/KLWMIJ evolution \cite{jimwlk},\cite{klwmij} equations, which take into account the physics of saturation and multiple scatterings \cite{GLR}. With a slight abuse of language we call BFKL the second quantized formulation which leads not only to the BFKL equation for two gluon exchange, but also to the whole set of BKP equations for $t$-channel exchanges with an arbitrary number of gluons \cite{bkp}.
The basic correspondence between the elements of the two approaches was established in \cite{reggeon} and we will review it briefly below.
Here we are interested in the question of how to relate the eigenvalues and the eigenfunctions of the BFKL Hamiltonian to their counterparts in the JIMWLK/KLWMIJ theory. We will show that the eigenfunctions of the JIMWLK/KLWMIJ Hamiltonians, when expanded in a Taylor series in the appropriate variable ($\rho$ for JIMWLK and $\delta/\delta\rho$ for KLWMIJ), are eigenfunctions of the BFKL Hamiltonian. However, most of the BFKL eigenfunctions, when resummed into solutions of JIMWLK/KLWMIJ, are not normalizable. The question of which are and which are not cannot be settled within the BFKL framework. We also discuss how to calculate higher corrections to the BFKL eigenfunctions. As an example we concentrate on the reggeized gluon \cite{reggeization}. We calculate corrections to the reggeized gluon wave function and show that terms with up to two gluons in the $t$-channel reggeize due to the bootstrap condition inherent in this approach. We also point out that in the JIMWLK/KLWMIJ framework the bootstrap is a necessary consequence of the hermiticity of the JIMWLK/KLWMIJ Hamiltonian, and does not require a specific form of the emission kernel. We calculate corrections to the eigenfunction beyond the two gluon exchange approximation and thereby find the screening correction which appears when at least three gluons are exchanged in the $t$-channel. This is an interesting example of the interrelation between $t$-channel unitarity, which is the origin of Reggeons, and $s$-channel unitarity, which is the inherent feature of the JIMWLK/KLWMIJ approach.
Let us start by recapitulating the JIMWLK/KLWMIJ formalism. In this approach
one considers the scattering of a projectile hadron on a hadronic target. The projectile is described by a distribution of color charge density in the transverse plane $\rho_P(x)$, while the target is viewed as an ensemble of the color fields $\alpha_T(x)$ with probability densities $W^P[\rho_P]$ and $W^T[\alpha_T]$, respectively.
The second quantized $S$ matrix operator is given by its eikonal expression (see \fig{btstrp1})
\begin{equation}
\hat S=\exp\left\{i\int_{0}^{1} dy^-\int d^2x\,\hat\rho_P^a(x,y^-)\,\hat\alpha_T^a(x,y^-)\right\}
\end{equation}
and the forward $S$ matrix element at rapidity $Y$ is given by the functional integral
\begin{equation}\label{smatrix}
S_Y=\,\,\int\, D\alpha_T^{a}\,\, \int d\rho_P\,\,W_Y^P[\rho_P]\,\,W^T[\alpha_T]
\,\exp\left\{i\int_{0}^{1} dy^-\int d^2x\,\rho_P^a(x,y^-)\,\alpha_T^a(x,y^-)\right\}
\end{equation}
\begin{figure}
\includegraphics{bootstrap1.eps}
\caption{\label{btstrp1}The $S$ matrix in the JIMWLK approach with averaging specified by \eq{smatrix}.}
\end{figure}
The rapidity evolution of basic physical observables (including the $S$-matrix) is determined by the second quantized evolution Hamiltonian $H_{RFT}$ (the following pertains to the projectile evolution - we drop the subscript on $\rho$ for brevity)
\begin{equation}\label{evoleq}
-\frac{d}{dY}W[\rho]=H_{RFT}[\rho, \frac{\delta}{\delta\rho}]W[\rho]
\end{equation}
The evolution Hamiltonian $H_{RFT}$ depends on two unitary matrices,
\begin{equation}
S(x)\,\,=\,\,{\cal P}\,\exp\left\{i\int_{0}^{1} dy^-\,T^a\,\alpha^a(x,y^-)\right\} \ \ \ \ \
R(x)\,=\,{\cal P}\exp\left\{\int_0^1 dx^-\, \frac{\delta}{\delta\rho^a(x,x^-)}\,T^a\right\}\,.
\label{alpha1}
\end{equation}
Here the field $\alpha$ is the projectile color field analogous to $\alpha_T$. It is related by a nonlinear transformation to the projectile color charge density $\rho$
\begin{equation}\label{alpha}
\alpha^a(x,x^-)T^a\,\,=g^2\,\,\frac{1}{\partial^2}(x-y)\,
\left\{S^\dagger(y,x^-)\,\,\rho^{a}(y,x^-)\,T^a\,\,S(y,x^-)\right\}\,.
\end{equation}
with $\frac{1}{\partial^2}(x,y)\,=\,\frac{1}{2\pi}\, \ln [(x-y)^2\mu^2]$ and
\begin{equation}
S(x, x^-)\,\,=\,\,{\cal P}\,\exp\left\{i\int_{0}^{x^-} dy^-\,T^a\,\alpha^a(x,y^-)\right\}
\end{equation}
The $S$-matrix at arbitrary rapidity can therefore be represented in terms of the eigenvalues of $H_{RFT}$.
Its evolution is given by
\begin{equation}\label{smatrix1}
\frac{dS_Y}{dY}=-\,\,\int\, D\alpha_T^{a}\,\, \int d\rho_P\,\,H_{RFT}[\rho,\frac{\delta}{\delta\rho}]W_Y^P[\rho]\,\,W^T[\alpha_T]
\,\exp\left\{i\int_{0}^{1} dy^-\int d^2x\,\rho^a(x,y^-)\,\alpha_T^a(x,y^-)\right\}
\end{equation}
Given the set of complete eigenfunctionals of $H_{RFT}$
\begin{equation}
H_{RFT}\Psi_i[\rho]=\omega_i\Psi_i[\rho]
\end{equation}
one can expand the probability distribution at the initial rapidity $Y_0$ as
\begin{equation}\label{psii}
W_{Y_0}[\rho]=\sum_i\gamma_i\Psi_i[\rho]
\end{equation}
and therefore
\begin{equation}\label{psiii}
S_Y=\sum_i e^{-\omega_i(Y-Y_0)}\gamma_i\beta_i
\end{equation}
with
\begin{equation}
\beta_i=\int\, D\alpha_T^{a}\,\, \int d\rho\,\,\Psi_i[\rho]\,\,W^T[\alpha_T]
\,\exp\left\{i\int_{0}^{1} dy^-\int d^2x\,\rho^a(x,y^-)\,\alpha_T^a(x,y^-)\right\}
\end{equation}
Given that the Hamiltonian $H_{RFT}$ is positive definite \cite{yinyang}, the asymptotic behavior of the $S$-matrix is governed by the lowest lying eigenvalues $\omega_i$. The study of the spectrum of $H_{RFT}$ is therefore of direct relevance to understanding the high energy behavior of physical amplitudes.
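As a toy illustration of the structure of eqs.(\ref{psii},\ref{psiii}), one can mimic the spectral decomposition with a finite dimensional positive semi-definite matrix playing the role of $H_{RFT}$. The following Python sketch (purely illustrative: the matrix, the initial distribution and the target overlaps are random stand-ins, not an actual RFT calculation) shows how the contribution of the lowest eigenvalue dominates the analog of the $S$-matrix at large rapidity:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 6))
H = A @ A.T                      # positive semi-definite stand-in for H_RFT
w0 = rng.normal(size=6)          # stand-in for the initial distribution W_{Y_0}
bt = rng.normal(size=6)          # stand-in for the target weight W^T

evals, evecs = np.linalg.eigh(H)
gamma = evecs.T @ w0             # expansion coefficients gamma_i of W_{Y_0}
beta = evecs.T @ bt              # overlaps beta_i of the eigenfunctions with the target

for Y in (0.0, 5.0, 50.0):
    S = np.sum(np.exp(-evals * Y) * gamma * beta)
    print(Y, S)
# at large Y only the smallest eigenvalue(s) of H survive in the sum
\end{verbatim}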
Two limits of Hamiltonian $H_{RFT}$ have been widely discussed in the literature. One
is valid in the limit of a dense projectile, but allows only the exchange of two gluons with the target \cite{jimwlk}. In the other limit one assumes that the projectile is dilute, but resums all possible multiple interactions of the partons of the projectile with the target \cite{klwmij}.
The two limiting cases are related by the dense-dilute duality transformation \cite{duality} and have therefore identical spectra. In the rest of this paper we choose to study the KLWMIJ Hamiltonian.
\begin{equation}
H_{KLWMIJ}=\frac{\alpha_s}{2\pi^2}\int_{x,y,z}{K_{xyz}\left\{J^a_L(x)J^a_L(y)+J^a_R(x)J^a_R(y)-2J^a_L(x)R^{ab}_zJ^b_R(y)\right\}}
\label{klwmij}
\end{equation}
with the kernel
\begin{equation}
K_{xyz}=\frac{(x-z)_i(y-z)_i}{(x-z)^2(y-z)^2}
\end{equation}
and
the left and right rotation generators
\begin{eqnarray}\label{LR}
J^a_L(x)=-tr\left[\frac{\delta}{\delta R^{\dagger}_x}T^aR_x\right] \\
J^a_R(x)=-tr\left[R_xT^a\frac{\delta}{\delta R^{\dagger}_x}\right]
\end{eqnarray}
Alternatively one can write
\begin{equation}
H_{KLWMIJ}=\frac{\alpha_s}{2\pi^2}\int_{x,y,z}{K_{xyz}\left\{J^a_V(x)J^a_V(y)-2J^a_L(x)\left(R_z-1\right)^{ab}J^b_R(y)\right\}}
\end{equation}
with
\begin{equation}
J_V^a(x)=J_R^a(x)-J_L^a(x)
\end{equation}
The operators $J_V^a(x)$ are the generators of $SU_V(N_c)$ - the vector subgroup of $SU_L(N_c)\otimes SU_R(N_c)$.
As noted above, the JIMWLK Hamiltonian is obtained from eq.(\ref{klwmij}) by the dense-dilute duality transformation
\begin{equation}\label{densedilute}
R(x)\rightarrow S(x)
\end{equation}
We note that a generalization of $H_{RFT}$ which interpolates between $H_{KLWMIJ}$ and $H_{JIMWLK}$ has recently been derived \cite{aklp}. However due to its complexity we will not deal with it in the present paper.
The Hamiltonian $H_{KLWMIJ}$ is the limit of $H_{RFT}$ at low color charge density. The left and right rotation operators $J_R$ and $J_L$ eq.(\ref{LR}) that appear in eq.(\ref{klwmij}) are in fact just the color charge density in the hadronic wave function and its conjugate respectively \cite{kl}.
Some aspects of the structure of the spectrum of $H_{KLWMIJ}$ were discussed in \cite{yinyang}. In particular we know that $H_{KLWMIJ}$ is positive definite since it can be written as
\begin{equation}
H_{KLWMIJ}=\frac{\alpha}{2\pi^2}\int_zQ^{a \dagger}_i(z)Q^a_i(z)
\end{equation}
with the Hermitian amplitude
\begin{equation}
Q^a_i(z)=\int_x\frac{(x-z)_i}{(x-z)^2}\left[R^{ab}(z)-R^{ab}(x)\right]J^b_R(x)
\end{equation}
It has two states with zero eigenvalue:
\begin{equation}
H_{KLWMIJ}|Yin\rangle=0; \ \ \ \ \ \ H_{KLWMIJ}|Yang\rangle=0
\end{equation}
The state $|Yang\rangle$ corresponds to the physical vacuum, that is to the state which is annihilated by the color charge density
\begin{equation}
J^a_L(x)|Yang\rangle=J^a_R(x)|Yang\rangle=0
\end{equation}
while $|Yin\rangle$ represents the black disk state
\begin{equation}
R^{ab}(x)|Yin\rangle=\delta^{ab}|Yin\rangle
\end{equation}
We will find it convenient to work in the basis of eigenfunctions of the operator $R$.
Written in this basis the wave functionals of the two states are
\begin{equation}
\langle R|Yang\rangle=1; \ \ \ \ \ \ \langle R|Yin\rangle=\delta\left(R^{ab}(x)-\delta^{ab}\right)
\end{equation}
Each one of these states sustains a tower of excitations above it. The RFT states ``close'' to $|Yang\rangle$ correspond to physical QCD states with a small number of particles in the projectile wave function. This interpretation stems from the fact that, as discussed in detail in \cite{kl}, the probability distribution $W$ has a convenient representation
\begin{equation}
W[\rho]=\Sigma[R]\delta[\rho]
\end{equation}
and thus in the $R$ basis $W$ is simply a regular function of $R$. Every factor of $R$ in $W$ corresponds to a gluon in the projectile wave function. Thus expansion of $W$ in powers of $R-1$, is equivalent to expansion in the number of the projectile gluons that scatter on the target.
In this paper we are interested in the eigenstates of $H_{KLWMIJ}$ which have similar structure, namely in the $R$ basis the eigenfunctions are regular functionals of $R(x)$ which can be expanded in powers of $R-1$. Since $R$ is a regular function of $\delta/\delta\rho$, these same states can also be expanded in powers of $\delta/\delta\rho$. Clearly the state $|Yang\rangle$ is one of those states since the wave function in this case is simply a constant.
On the other hand the state $|Yin\rangle$ does not fall into this category. Its wave function is not expandable in powers of $R-1$. The same is true for other states ``close'' to it. As explained in \cite{yinyang} those states correspond to ``holes'' in the black disk and their eigenfunctions are expandable in powers of $\rho$ rather than $\delta/\delta \rho$.
The discussion of this paper pertains directly only to the $|Yang\rangle$ - like states.
The derivation of the Hamiltonian $H_{KLWMIJ}$ \cite{kl} does not assume anything about the strength of the target fields. If one is interested in the situation when the target fields are small, one can expand $H_{KLWMIJ}$ in powers of $\delta/\delta\rho$. The expansion in powers of $\delta/\delta\rho$ is equivalent to expansion in powers of the target field, since powers of $\delta/\delta\rho$ turn into powers of $\alpha_T$ in the calculation of the scattering matrix eq.(\ref{smatrix1}). The leading order of this expansion gives $H_{BFKL}$, which is the second quantized Hamiltonian that generates the high energy evolution in the BFKL framework.
The form of $H_{BFKL}$ is well known (see for example \cite{msw}). For completeness we present the derivation
of $H_{BFKL}$ in the appendix, carefully keeping track of the path ordering in the definition of $R(x)$. Although this path ordering is not relevant in many cases \cite{reggeon}, in general it cannot be neglected. The result is
\begin{equation}\label{bfkl}
H_{BFKL}=-\frac{\alpha_s}{2\pi^2}\int_{xyz}K_{xyz}(T^aT^b)_{cd}\rho^a_x\left[\frac{\delta}{\delta \rho^c_x}-\frac{\delta}{\delta \rho^c_z}\right]\left[\frac{\delta}{\delta \rho^d_y}-\frac{\delta}{\delta \rho^d_z}\right]\rho^b_y
\end{equation}
where
\begin{equation}
\rho^a_x\equiv\int_0^1dx^-\rho^a(x,x^-); \ \ \ \ \ \frac{\delta}{\delta\rho^a_x}\equiv\int_0^1dx^-\frac{\delta}{\delta\rho^a(x,x^-)}
\end{equation}
In the "normal ordered form" this reads
\begin{equation}\label{bfklno}
H_{BFKL}=-\frac{\alpha_s}{2\pi^2}\int_{xyz}K_{xyz}(T^aT^b)_{cd}\left[\frac{\delta}{\delta \rho^c_x}-\frac{\delta}{\delta \rho^c_z}\right]\left[\frac{\delta}{\delta \rho^d_y}-\frac{\delta}{\delta \rho^d_z}\right]\rho^a_x\rho^b_y+\int_{xz}\beta_{xz}\frac{\delta}{\delta \rho^a_z}\rho^a_x
\end{equation}
with
\begin{equation}\label{beta}
\beta_{x-z}=\frac{\alpha_sN_c}{2\pi^2}\left[\delta^2(x-z)\int_uK(x,x,u)-K(x,x,z)\right]
=\frac{\alpha_sN_c}{ 2\pi^2}\left[\delta^2(x-z)\int_u\frac{1}{u^2}-\frac{1}{(x-z)^2}\right]
\end{equation}
The question we want to address is what is the relation between the eigenvalues and eigenfunctions of $H_{BFKL}$ and $H_{KLWMIJ}$. More importantly, to what extent can we use the results of calculations performed in the framework of BFKL evolution to get information about the spectrum of $H_{KLWMIJ}$.
\section{From BFKL to KLWMIJ}
The calculation of the spectrum of $H_{BFKL}$ is equivalent to solution of the complete set of BKP equations.
The BFKL Hamiltonian is a homogeneous function of the coordinates ($\delta/\delta \rho$) and momenta ($\rho$) and thus its eigenfunctions are pure powers. Taking an eigenfunction in the form
\begin{equation}
\Psi_A[\delta/\delta\rho]=\int_{x_1...x_n}G_A^{a_1a_2...a_n}(x_1...x_n)\frac{\delta}{\delta\rho^{a_1}_{x_1}}...\frac{\delta}{\delta\rho^{a_n}_{x_n}}
\end{equation}
and acting on it with $H_{BFKL}$ leads to an eigenvalue equation of the form
\begin{eqnarray}\label{bkp}
&&-\frac{\alpha}{2\pi^2}\Sigma_{i\ne j}(T^aT^b)_{a_ia_j}\Bigg[\int_zK_{x_ix_jz}G_A^{a_1...a...b...a_n}(x_1...x_n)+\int_{xy}K_{xyx_i}G_A^{a_1...a...b...a_n}(x_1...x...y...x_n)\delta(x_i-x_j)\nonumber\\
&&-\int_yK_{x_iyx_j}G_A^{a_1...a...b...a_n}(x_1...x_i...y...x_n)-\int_xK_{xx_jx_i}G_A^{a_1...a...b...a_n}(x_1...x...x_j...x_n)\Bigg]\nonumber\\
&&+\Sigma_i\int_x\beta_{xx_i}G_A^{a_1...a...b...a_n}(x_1...x...x_n)=\omega_AG_A^{a_1...a_n}(x_1...x_n)
\end{eqnarray}
Eqs.(\ref{bkp}) are precisely the BKP equations \cite{bkp} for the rapidity evolution of $n$ - gluon exchange in $t$-channel.
The index $A$ denotes various quantum numbers that characterize the eigenfunction $G_A$, in particular total momentum, color representation, charge conjugation, parity and so on.
As mentioned above, expanding $H_{KLWMIJ}$ and $\Psi$ in powers of $\delta/\delta\rho$ is equivalent to expanding the $S$ matrix eq.(\ref{smatrix}) in powers of the target color field $\alpha_T$. Every factor of $\alpha_T$ represents an exchange of a gluon in the $t$-channel between the projectile and the target. Thus physically, expansion in powers of $\delta/\delta\rho$ is equivalent to expansion in the number of gluons exchanged in the $t$ - channel \cite{reggeon}. Consequently, the quantum numbers denoted by the index $A$ in eq.(\ref{bkp}) are the quantum numbers of the $n$ - gluon state exchanged in the $t$-channel.
For $n=1$ the solution of eq.(\ref{bkp}) is the reggeized gluon. In this case the color representation of the exchange is obviously adjoint and the eigenvalues are characterized by the transverse momentum. For the singlet two gluon state, $n=2$, this is the celebrated BFKL equation and the solution is the BFKL Pomeron. For arbitrary $n$ in the large $N_c$ approximation these equations have been extensively studied in \cite{korchemsky} where it was shown that the spectrum of the $n$ gluon state is described by an integrable spin chain.
Although the full spectrum of eq.(\ref{bkp}) is not known, the salient features are the following. In the reggeized gluon sector ($n=1$) the eigenvalues are nonnegative. The lowest eigenvalue is vanishing and corresponds to zero transverse momentum exchange $t$. At nonzero $t$ the eigenvalues are logarithmically infrared divergent. For the BFKL Pomeron (singlet $n=2$ exchange) the lowest eigenvalue is actually negative, corresponding to the growth of the amplitude at high energy with the famous BFKL intercept. At $n>2$ the spectrum is rich. Importantly the lowest eigenvalue for the color singlet exchange is always negative and grows proportionally to $n$, corresponding to the $n/2$ Pomeron exchanges \footnote{In fact for very large $n>N_c$ this growth is even faster and the absolute value of the most negative eigenvalue is proportional to $n^2$ \cite{LALE}.}
Now let us consider $H_{KLWMIJ}$. The eigenvalues and eigenfunctions as noted above are determined by solving
\begin{equation}
H_{KLWMIJ}\Psi_A[R]=\omega_A\Psi_A[R]
\end{equation}
Suppose we try to solve this by Taylor expanding $\Psi_A$ in powers of $\delta/\delta\rho$. We thus write
\begin{equation}\label{expfu}
\Psi_A=\Psi^0_A[\frac{\delta}{\delta\rho}]+\Psi^1_A[\frac{\delta}{\delta\rho}]+...
\end{equation}
To find $\Psi$ we need to act on it with $H_{KLWMIJ}$ also expanded in powers of $\delta/\delta\rho$.
It is clear from the mechanics of the expansion of $J_L$ and $J_R$ given in the appendix, that to all orders in $\delta/\delta\rho$ both charge densities are proportional to the first power of $\rho$, and only the power of $\delta/\delta\rho$ in $H_{KLWMIJ}$ grows order by order. Therefore the Hamiltonian $H_{KLWMIJ}$ can be written as
\begin{equation}\label{expeq}
H_{KLWMIJ}=H_{BFKL}+H_1+...
\end{equation}
Here the first correction to the Hamiltonian, $H_1$, schematically has the form
\begin{equation}
H_1=K_1\left(\frac{\delta}{\delta\rho}\right)^3\rho^2+\beta_1\left(\frac{\delta}{\delta\rho}\right)^2\rho
\end{equation}
The structure of eqs.(\ref{expfu},\ref{expeq}) is such that in the leading order the wave function $\Psi^0_A[\frac{\delta}{\delta\rho}]$ must be an eigenfunction of $H_{BFKL}$. Thus the leading order equation determines $\omega_A$ completely, and the role of the higher order equations is only to determine the higher order Taylor series terms in the expansion of the wave function $\Psi_A[R]$.
E.g. having found $\Psi^0_A$ as an eigenfunction of $H_{BFKL}$ with eigenvalue $\omega_A$, to first order one has
\begin{equation}
\Psi^1_A[\rho]=\frac{1}{ \omega_A-H_{BFKL}}H_1\Psi^0_A
\end{equation}
and so on.
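The fact that the eigenvalue is fixed entirely at leading order, while the higher orders only correct the eigenfunction, can be illustrated on a finite dimensional toy model with the same triangular grading as eqs.(\ref{expfu},\ref{expeq}). In the following Python sketch (our own illustration, with random matrices) the analog of $H_{BFKL}$ acts within each sector while the analog of $H_1$ raises the order by one:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
H0a = rng.normal(size=(3, 3)); H0a = H0a + H0a.T   # "H_BFKL" on the leading sector
H0b = rng.normal(size=(4, 4)); H0b = H0b + H0b.T   # "H_BFKL" on the next sector
H1  = rng.normal(size=(4, 3))                      # analog of H_1: raises the order

H = np.block([[H0a, np.zeros((3, 4))], [H1, H0b]]) # full block-triangular "Hamiltonian"

omega_all, V = np.linalg.eigh(H0a)
omega, psi0 = omega_all[0], V[:, 0]                # eigenvalue fixed at leading order
psi1 = np.linalg.solve(omega * np.eye(4) - H0b, H1 @ psi0)  # (omega - H_BFKL)^{-1} H_1 psi_0

psi = np.concatenate([psi0, psi1])
print(np.allclose(H @ psi, omega * psi))           # True: same eigenvalue, corrected eigenfunction
\end{verbatim}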
It thus appears that the eigenvalues of $H_{KLWMIJ}$ can be obtained {\it exactly} from the leading order approximation - the BFKL Hamiltonian. The catch however is that we do not know from the BFKL calculation {\it per se} whether the wave function corresponding to a given ``eigenvalue'' will turn out to be normalizable or not. Within the BFKL approximation itself, none of the wavefunctions are of course normalizable since they are simple monomials of $\delta/\delta\rho$. However only normalizable eigenfunctions of $H_{KLWMIJ}$ should be used in the expansion of the $S$-matrix eqs.(\ref{psii},\ref{psiii}).
A similar situation is encountered in simple quantum mechanics. Take for example a one dimensional harmonic oscillator
\begin{equation}
h=\frac{1}{2}\left(p^2+x^2\right)
\end{equation}
Let us now try to find its ground state wavefunction in Taylor expansion. We take
\begin{equation}
\psi=1+ax^2+cx^4+...
\end{equation}
Acting on it with the Hamiltonian we get
\begin{equation}
h\psi=-a +\frac{1}{2}x^2-6cx^2...
\end{equation}
and the Schroedinger equation
\begin{equation}
-a+\frac{1}{2}(1-12c)x^2=\omega(1+ax^2)
\end{equation}
Thus for arbitrary $\omega$ we simply have
\begin{equation}
a=-\omega; \ \ \ \ \ c=\frac{1}{12}(1+2\omega^2)
\end{equation}
So there is a solution for every possible value of $\omega$ (in fact there are two solutions for each $\omega$, but the other one is an odd function of $x$ and is outside our initial ansatz).
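This little exercise is easily verified symbolically; a minimal sympy sketch reproducing the coefficients above is
\begin{verbatim}
import sympy as sp

x, w, a, c = sp.symbols('x omega a c')
psi = 1 + a*x**2 + c*x**4
h_psi = -sp.Rational(1, 2)*sp.diff(psi, x, 2) + sp.Rational(1, 2)*x**2*psi

expr = sp.expand(h_psi - w*psi)                    # (h - omega) psi
sol = sp.solve([expr.coeff(x, 0), expr.coeff(x, 2)], [a, c])
print(sol)                                         # {a: -omega, c: omega**2/6 + 1/12}
\end{verbatim}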
This calculation however does not carry a lot of information since we do not know {\it a priori} which of the functions found this way are normalizable when the Taylor series is summed to all orders. We know of course that the spectrum in fact is discrete and therefore most of the ``eigenvalues'' do not correspond to normalizable eigenfunctions.
The situation in the KLWMIJ-BFKL system is similar in this respect. As we have noted above, $H_{KLWMIJ}$ is hermitian and positive definite, thus its eigenvalues have to be positive. On the other hand many of the eigenvalues of $H_{BFKL}$ are negative (including, of course, the Pomeron). We are therefore assured that those eigenvalues do not correspond to normalizable eigenfunctions. As for the positive eigenvalues of $H_{BFKL}$, it is tempting to surmise that they correspond to normalizable eigenfunctions of $H_{KLWMIJ}$. Unfortunately, we have no right to do so. The question whether the resummed Taylor series is normalizable or not is very complicated and cannot be answered in any finite order of the Taylor expansion. We note however that the Taylor expansion for $H_{KLWMIJ}$ is in a subtle way different from that for the harmonic oscillator. In the latter case the leading order of the expansion puts no restrictions at all on possible eigenvalues. In the KLWMIJ case however, the leading order itself leads to an eigenvalue problem, so that not every $\omega_A$ is allowed.
Even though it is not clear whether the eigenfunctions of $H_{BFKL}$ give rise to normalizable eigenfunctions of $H_{KLWMIJ}$, it is still interesting to illustrate the procedure discussed above by some concrete example. In the following section we will therefore consider higher order in $\delta/\delta\rho$ corrections to some eigenfunctions. We will concentrate on the eigenvalues corresponding to the reggeized gluon, which is the simplest eigenfunction of $H_{BFKL}$.
\section{The Reggeized gluon and the bootstrap.}
\subsection{The Reggeized gluon}
Let us look for the eigenstate of $H_{KLWMIJ}$ which corresponds to the quantum numbers of one gluon exchange. The state must belong to the adjoint representation of $SU_V(N_c)$
and its wave function when Taylor expanded should start with the linear term in $\delta/\delta\rho$.
Thus we take
\begin{equation}\label{regg}
\Psi_0=\int d^2x \phi(x)\frac{\delta}{\delta\rho^a(x)}
\end{equation}
Acting on it with $H_{BFKL}$
we obtain the eigenvalue equation
\begin{equation}\label{omegabeta}
\int_z\beta_{xz}\phi_z=\omega\phi_x
\end{equation}
This is solved by
\begin{equation}\label{eigenf}
\phi(x)=e^{iqx}
\end{equation}
with the eigenvalue
\begin{equation}\label{intercept}
\omega_q=\frac{\bar{\alpha}}{2\pi}\int_\mu d^2k\frac{q^2}{k^2(q-k)^2}
\end{equation}
This is nothing but the gluon reggeization.
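The logarithmic infrared sensitivity of this eigenvalue is easy to exhibit numerically. The sketch below (our own illustration; we apply the cutoff $\mu$ to both soft regions $k\rightarrow 0$ and $k\rightarrow q$, and use the $k\rightarrow q-k$ symmetry to integrate only over the half plane containing the $k=0$ singularity) evaluates the integral in eq.(\ref{intercept}) without the prefactor $\bar{\alpha}/2\pi$ and checks that it grows like $4\pi\ln(q/\mu)$ as the cutoff is lowered:
\begin{verbatim}
import numpy as np
from scipy import integrate

def trajectory_integral(q, mu, rmax=200.0):
    # \int d^2k q^2/(k^2 (q-k)^2) with |k| > mu and |q-k| > mu;
    # equals twice the integral over the half plane k_x < q/2
    def inner(r):
        th0 = 0.0 if r <= q / 2 else np.arccos(q / (2.0 * r))
        f = lambda th: r * q**2 / (r**2 * ((q - r*np.cos(th))**2 + (r*np.sin(th))**2))
        val, _ = integrate.quad(f, th0, 2.0*np.pi - th0, limit=200)
        return val
    val, _ = integrate.quad(inner, mu, rmax, limit=400)
    return 2.0 * val

q = 1.0
I1 = trajectory_integral(q, 1e-2)
I2 = trajectory_integral(q, 1e-3)
print(I1, I2)                                 # grows as the infrared cutoff is lowered
print(I2 - I1, 4.0*np.pi*np.log(10.0))        # the growth is 4*pi*ln(mu_1/mu_2)
\end{verbatim}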
To calculate first correction to the reggeized gluon state we have to consider $\Psi^1$ which is quadratic in $\delta/\delta \rho$.
We will perform the calculation in a slightly different way which is technically simpler. We know that the wave function $\Psi$ at the end of the day should depend only on $R$. Therefore it makes sense, rather than taking an arbitrary quadratic function, to choose such a function that itself can be obtained from expansion of $R$.
Since $\Psi^0$ is simply represented as expansion of $R$
\begin{equation}
\Psi^a_0[\delta/\delta \rho]=\int d^2x\phi(x)tr \left(T^aR\right)_{\rm first \ order}
\end{equation}
we will take a simple guess
\begin{equation}
\Psi^a_0[\delta/\delta \rho]+\Psi^a_1[\delta/\delta \rho]=\int d^2x\phi(x)tr \left(T^aR\right)_{\rm first+second \ order}
\end{equation}
and will show that it indeed satisfies the KLWMIJ eigenvalue equation to second order in $\delta/\delta \rho$.
The matrix $R$ is taken here in the adjoint representation.
\subsection{Bootstrap of the antisymmetric adjoint}
We now consider the action of $H_{KLWMIJ}$ in the state
\begin{equation}
G^a_A= \int d^2x\phi(x)tr \left(T^aR\right)
\end{equation}
The basic element we need is the action of the left and right rotation generators on the matrix $R$
\begin{equation}
[J^a_L(x),R(y)]=T^aR(y)\delta^2(x-y); \ \ \ \ \ \ [J^a_R(x),R(y)]=R(y)T^a\delta^2(x-y)
\end{equation}
With this it is easy to calculate the action of the real and virtual parts of $H_{KLWMIJ}$:
\begin{equation}
2J^c_L(x)[R(z)-1]^{cd}J^d_R(y)tr[T^aR_u]=2[R_z-1]^{cd} tr[T^aT^cR_xT^d]\delta(x-y)\delta(y-u)
\end{equation}
and
\begin{equation}
[J^c_L(x)-J^c_R(x)][J^c_L(y)-J^c_R(y)]tr[T^aR_u]=tr\left\{T^a[T^c,[T^c,R_x]]\right\}\delta(x-y)\delta(y-u)
\end{equation}
Hence
\begin{equation}
H_{KLWMIJ}G^a_A=-\int_{u,z}K_{uuz}\phi_u\left\{2[R_z-1]^{cd}tr[T^dT^aT^cR_u]+tr\left\{T^a[T^c,[T^c,R_u]]\right\}\right\}
\end{equation}
Using
\begin{equation}
tr\left\{T^a[T^c,[T^c,R_u]]\right\}=N_ctr[T^aR_u]
\end{equation}
we write
\begin{eqnarray}\label{1glue}
H_{KLWMIJ}G^a_A&=&-\frac{\alpha}{ 2\pi^2}\int_{u,z}K_{uuz}\phi_u\left\{2[R_z-1]^{cd}tr[T^dT^aT^cR_u]-N_ctr[T^aR_u]\right\} \nonumber \\
&=&\frac{\alpha}{ \pi^2}\int_{u,z}K_{uuz}\phi_u\left\{-2[R_z-1]^{cd}tr[T^dT^aT^c(R_u-1)]+N_c\left(tr[T^aR_u]-tr[T^aR_z]\right)\right\}
\end{eqnarray}
We now have to expand this to second order in $\delta/\delta\rho$.
The matrix $R$ is expanded as
\begin{equation}
R^{ab}_u=\delta^{ab}+T^{ab}_c\int^1_0du^-\frac{\delta}{\delta \rho^c(u,u^-)}+T^{a e}_c T^{e b}_d \int^1_0du^-_1\int^{u^-_1}_0du^-_2\frac{\delta}{\delta \rho^c(u,u^-_1)}\frac{\delta}{\delta \rho^d(u,u^-_2)}
\end{equation}
The crucial observation is that the first and second order terms come only from the second term in eq.(\ref{1glue}). For the first order term this is obvious. The second order contribution from the first term in eq.(\ref{1glue}) is proportional to
\begin{equation}
T^e_{cd}tr[T^dT^aT^cT^b]\frac{\delta}{\delta\rho_u^b}\frac{\delta}{\delta\rho_z^e}
\end{equation}
However, using the properties of the adjoint SU(N) generators
\begin{equation}
tr(T^aT^b)=f^{a\alpha\beta}f^{b\alpha\beta}=N_c\delta^{ab} ,\ \ tr(T^aT^b T^c)=-if^{\alpha a\beta}f^{\beta b\gamma}f^{\gamma c\alpha}=i\frac{N_c}{2}f^{abc}
\end{equation}
one can easily show that
\begin{equation}
T^e_{cd}tr[T^dT^aT^cT^b]=0
\end{equation}
and so this contribution vanishes.
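Since the vanishing of this color tensor is what makes the second order term local, it is worth checking it explicitly. A short numerical sketch (our own check; the $SU(3)$ structure constants are generated from the Gell-Mann matrices) confirms both the trace identities quoted above and the vanishing of $T^e_{cd}tr[T^dT^aT^cT^b]$:
\begin{verbatim}
import numpy as np

s3 = np.sqrt(3.0)
lam = np.array([                                   # Gell-Mann matrices lambda^a
    [[0, 1, 0], [1, 0, 0], [0, 0, 0]],
    [[0, -1j, 0], [1j, 0, 0], [0, 0, 0]],
    [[1, 0, 0], [0, -1, 0], [0, 0, 0]],
    [[0, 0, 1], [0, 0, 0], [1, 0, 0]],
    [[0, 0, -1j], [0, 0, 0], [1j, 0, 0]],
    [[0, 0, 0], [0, 0, 1], [0, 1, 0]],
    [[0, 0, 0], [0, 0, -1j], [0, 1j, 0]],
    [[1/s3, 0, 0], [0, 1/s3, 0], [0, 0, -2/s3]]], dtype=complex)

# [lam^a, lam^b] = 2 i f^{abc} lam^c  and  tr(lam^a lam^b) = 2 delta^{ab}
trABC = np.einsum('aij,bjk,cki->abc', lam, lam, lam)
f = np.real((trABC - np.transpose(trABC, (1, 0, 2))) / 4j)   # structure constants

T = -1j * f                                        # adjoint generators (T^a)_{bc} = -i f^{abc}

tr2 = np.einsum('aij,bji->ab', T, T)               # should equal N_c delta^{ab}
tr3 = np.einsum('aij,bjk,cki->abc', T, T, T)       # should equal i N_c/2 f^{abc}
print(np.allclose(tr2, 3*np.eye(8)), np.allclose(tr3, 1.5j*f))

tr4 = np.einsum('dij,ajk,ckl,bli->dacb', T, T, T, T)
M = np.einsum('ecd,dacb->eab', T, tr4)             # T^e_{cd} tr[T^d T^a T^c T^b]
print(np.abs(M).max())                             # ~ 1e-13: the tensor vanishes
\end{verbatim}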
Thus to second order the eigenvalue equation is
\begin{eqnarray}\label{antisreg}
&&\frac{\alpha N_c}{2\pi^2}\int_{u,z}K_{uuz}\phi_u\left[\left(\frac{\delta}{\delta\rho_u^a}-\frac{\delta}{\delta\rho_z^a}\right)+\frac{i}{2} f^{abc} \int_{x^->y^-} \left(\frac{\delta}{\delta\rho^b(u,x^-)}\frac{\delta}{ \delta\rho^c(u,y^-)}-\frac{\delta} {\delta\rho^b(z,x^-)}\frac{\delta}{ \delta\rho^c(z,y^-)}\right)\right]\nonumber\\
&&=
\omega\int_u\phi(u)\left[\frac{\delta}{\delta\rho_u^a}+\frac{i}{ 2}f^{abc} \int_{x^->y^-} \frac{\delta}{ \delta\rho^b(u,x^-)}\frac{\delta}{ \delta\rho^c(u,y^-)}\right]
\end{eqnarray}
Obviously, this is satisfied as before by $\phi(x)$ of eq.(\ref{eigenf}) with the eigenvalue eq.(\ref{intercept}).
According to our earlier discussion, the eigenvalue determined in the leading order does not change order by order. An interesting feature of this calculation is that the second order correction to the eigenfunction is the term which describes the two gluon exchange in the $t$-channel such that the two gluons are in the octet. The fact that this term reggeizes precisely in the same way as the one gluon exchange is the essence of the celebrated bootstrap feature in high energy QCD \cite{bootstrap}.
\subsection{Bootstrap of the symmetric adjoint}
The two $t$-channel gluons in the previous calculation are in the antisymmetric adjoint representation. It is easy to show that two gluons in the symmetric adjoint also reggeize (see second paper in \cite{reggeization}).
To see this let us consider a similar calculation but take the matrix $R$ to be in the fundamental representation.
As before we take
\begin{equation}\label{quark}
G^a_F=\int_u\phi_utr[\tau^aR_F(u)]
\end{equation}
where $\tau^a$ are generators of $SU(N_c)$ in the fundamental representation.
The action of $H_{KLWMIJ}$ on this state is calculated just like before using
\begin{equation}
[J^a_L(x),R_F(y)]=\tau^aR_F(y)\delta^2(x-y); \ \ \ \ \ \ [J^a_R(x),R_F(y)]=R_F(y)\tau^a\delta^2(x-y)
\end{equation}
Using the completeness relation of the fundamental SU(N) generators
\begin{equation}
\tau^c_{\alpha \beta}\tau^c_{\gamma \delta}=\frac{1}{2}\left[\delta_{\alpha \delta}\delta_{\beta \gamma}-\frac{1}{N_c}\delta_{\alpha \beta}\delta_{\gamma \delta}\right]
\end{equation}
the properties of fundamental generators
\begin{equation}
tr(\tau^a\tau^b)=\frac{1}{2}\delta^{a b}, \ \ tr(\tau^a\tau^b \tau^c)=\frac{1}{4}(d^{abc}+if^{abc})
\end{equation}
and the representation of an adjoint unitary matrix in terms of fundamental matrices
\begin{equation}
R^{ab}_A(z)=2tr\left[\tau^aR_F(z)\tau^bR^{\dagger}_F(z)\right]
\end{equation}
one can write
\begin{equation}\label{regf}
H_{KLWMIJ}G^a_F=\frac{\alpha}{ 2\pi^2}\int_{u,z}K_{uuz}\phi_u\left\{tr[1-R^{\dagger}_{F}(z)R_{F}(u)]tr[\tau^aR_{F}(z)]+N_ctr\left[\tau^a\left(R_{F}(u)-R_{F}(z)\right)\right]\right\}
\end{equation}
The first term on the RHS starts in the order $(\delta/\delta\rho)^3$. Thus the eigenvalue equation to second order reads
\begin{eqnarray}\label{symreg}
&&\frac{\alpha N_c}{ 2\pi^2}\int_{u,z}K_{uuz}\phi_u\left[\left(\frac{\delta}{\delta\rho_u^a}-\frac{\delta}{\delta\rho_z^a}\right)+\frac{1}{ 2} \left\{if^{abc}+d^{abc}\right\} \int_{x^->y^-} \left(\frac{\delta}{ \delta\rho^b(u,x^-)}\frac{\delta}{ \delta\rho^c(u,y^-)}-\frac{\delta}{ \delta\rho^b(z,x^-)}\frac{\delta}{ \delta\rho^c(z,y^-)}\right)\right]\nonumber\\
&&=
\omega\int_u\phi(u)\left[\frac{\delta}{\delta\rho_u^a}+\frac{1}{ 2} \left\{if^{abc}+d^{abc}\right\} \int_{x^->y^-} \frac{\delta}{ \delta\rho^b(u,x^-)}\frac{\delta}{ \delta\rho^c(u,y^-)}\right]
\end{eqnarray}
Again the plane wave eq.(\ref{eigenf}) is the solution of this equation with the eigenvalue eq.(\ref{intercept}). The second order term proportional to the $d^{abc}$ tensor corresponds to exchange of two $t$ - channel gluons in the symmetric adjoint representation. As we have seen in the previous subsection, the antisymmetric octet (the $f^{abc}$ term) reggeizes. Thus eq.(\ref{symreg}) tells us that the symmetric adjoint reggeizes by itself. This is another example of bootstrap at work.
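As a small consistency check of the relation $R^{ab}_A(z)=2tr\left[\tau^aR_F(z)\tau^bR^{\dagger}_F(z)\right]$ used above, the following sketch (our own illustration; for compactness we take $N_c=2$, with $\tau^a$ half the Pauli matrices) builds a random unitary matrix of unit determinant and verifies that the resulting adjoint matrix is real and orthogonal, as a matrix $R$ in the adjoint representation must be:
\begin{verbatim}
import numpy as np

sigma = np.array([[[0, 1], [1, 0]],
                  [[0, -1j], [1j, 0]],
                  [[1, 0], [0, -1]]], dtype=complex)
tau = sigma / 2.0                                  # fundamental SU(2) generators

rng = np.random.default_rng(2)
Z = rng.normal(size=(2, 2)) + 1j*rng.normal(size=(2, 2))
U, _ = np.linalg.qr(Z)                             # random unitary matrix
U = U / np.sqrt(np.linalg.det(U))                  # unit determinant: U in SU(2)

RA = 2.0 * np.einsum('aij,jk,bkl,li->ab', tau, U, tau, U.conj().T)
print(np.abs(RA.imag).max())                       # ~ 0: the adjoint matrix is real
print(np.allclose(RA.real @ RA.real.T, np.eye(3))) # True: it is orthogonal
\end{verbatim}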
\subsection{The bootstrap in the KLWMIJ/JIMWLK approach.}
Since the bootstrap plays such an important role in the discussions of high energy amplitudes, it is worthwhile explaining how and why the bootstrap condition in the KLWMIJ approach is satisfied {\it by fiat}.
First off, we note that the bootstrap condition, which leads to reggeization of the two gluon exchange can be stated as the relation between the real part of the kernel of the BFKL equation and the reggeized gluon trajectory\cite{levin}.
\begin{equation}
\omega(k_1)-\omega(k_2)-\omega(k_1+k_2)=\frac{1}{ 2}\int_{q_1,q_2}\tilde K(k_1,k_2,q_1,q_2)
\end{equation}
Here $\tilde K$ is the real part of the BFKL kernel which evolves the color singlet two $t$-channel gluon state. It arises from the first term in $H_{BFKL}$ eq.(\ref{bfklno}). Expressing it in coordinate space in terms of the kernel $K(x,y,z)$ we have
\begin{equation}
\tilde K(xy,uv)=2\Bigg[\int_zK_{uvz}\delta(x-u)\delta(y-v)-K_{uvy}\delta(x-u)
-K_{uvx}\delta(y-v)+K_{uvx}\delta(x-y)\Bigg]
\end{equation}
According to eq.(\ref{omegabeta}) and eq.(\ref{eigenf}) the Fourier transform of $\omega(k)$ into coordinate space is $\beta(x)$.
Fourier transforming the bootstrap condition into the coordinate space we have
\begin{equation}\label{configbootstrap}
\beta(x)\delta(y)-\beta(y)\delta(x)-\beta(x)\delta(x-y)=\Bigg[\int_zK_{00z}\delta(x)\delta(y)-K_{00y}\delta(x)-K_{00x}\delta(y)+K_{00x}\delta(x-y)\Bigg]
\end{equation}
We can derive this condition directly by considering the eigenvalue equation in the symmetric adjoint channel.
Take the trial function in the form
\begin{equation}
G^a=\int_{uv}\Psi(u,v)d^{abc}\frac{\delta}{\delta \rho^b_u}\frac{\delta}{\delta \rho^c_v}
\end{equation}
Acting on it by $H_{BFKL}$ we derive the eigenvalue equation
\begin{eqnarray}
-\frac{N_c\alpha}{4\pi^2}\int\Big[2K_{uvz}\Psi(u,v)-2K_{uzv}\Psi(u,z)-2K_{zvu}\Psi(z,v)+2K_{zxu}\delta(u-v)\Psi(z,x)\Big]&+&\Big[\beta_{zv}\Psi(z,u)+\beta_{zu}\Psi(z,v)\Big]\nonumber \\
&=&\omega_q\Psi(u,v)
\end{eqnarray}
Assuming that a solution identical to the single gluon exchange
\begin{equation}
\Psi_q(u,v)=\delta^2(u-v)e^{iqu}
\end{equation}
exists for all $q$, and taking the integral over $q$ we arrive at the configuration space condition eq.(\ref{configbootstrap}) \footnote{We note that the condition Eq.(\ref{configbootstrap}) cannot be derived by acting with the BFKL Hamiltonian on the color octet antisymmetric two gluon state, even though this state does reggeize. The reason is that the reggeization of the antisymmetric octet is technically somewhat different from the symmetric one. The antisymmetric state $\Psi^A_2$ appears in the wave function $\Psi$ in a combination with the one gluon state $\Psi_1$. Contributions to the two gluon antisymmetric term in the left hand side of the eigenvalue equation eq.(\ref{antisreg}) arise both from the action of $H_{BFKL}$ on $\Psi^A_2$ {\bf and} from the action of $H_1$ on $\Psi_1$. On the other hand the action of $H_1$ on $\Psi_1$ does not generate any contributions to the symmetric octet function. Thus the symmetric octet reggeizes by the action of $H_{BFKL}$ alone, but the antisymmetric state does not.}.
With $\beta(x)$ defined in eq.(\ref{beta}), this condition is clearly satisfied. Clearly, the bootstrap condition would be satisfied in the KLWMIJ approach for any functional form of kernel $K_{xyz}$, since the relation between $\beta$ and $K$, eq.(\ref{beta}) is immutable. It appears simply due to normal ordering of $H_{BFKL}$ written in the original form eq.(\ref{bfkl}). Thus even if $K$ is modified in eq.(\ref{bfkl}), the bootstrap condition will still be automatically satisfied. One can contemplate several reasons for such a modification. First, higher order corrections in $\alpha_s$ certainly lead to modification of $K$ \cite{nexttoleading}. Another reason to consider a modification of $K$ is the unphysical infrared behavior of perturbative gluon emission which leads to violation of Froissart bound \cite{froissart}. Cutting off long distance tails of the Weiszacker-Williams field in the emission kernel $K$ is a possible "phenomenological" solution of this problem \cite{cutoff}.
One could question in principle the starting point of our discussion - eq.(\ref{bfkl}).
However the form eq.(\ref{bfkl}) is dictated by the hermiticity of $H_{BFKL}$. The real emission part is given by the first term in eq.(\ref{bfklno}). By itself this term is not hermitian, and only together with the gluon trajectory term, the second term in eq.(\ref{bfklno}), is the hermiticity of the Hamiltonian restored.
Thus we conclude that the bootstrap condition within the KLWMIJ framework is tantamount to the condition of hermiticity of the Hamiltonian which generates the rapidity evolution of the scattering amplitude.
The Hermiticity of the Hamiltonian is related to the unitarity of the high energy evolution, even though the evolution equation is not a Schroedinger equation, but rather a diffusion type equation. The evolution Hamiltonian acts on the probability distribution $W$ eq.(\ref{evoleq}). Since $W[\rho]$ has the meaning of a probability density, it must be positive definite. The eigenvalues of the evolution must therefore be real; otherwise the ``probability density'' will develop an imaginary part even if one starts with a real and positive distribution at the initial rapidity. This is assured if the evolution Hamiltonian is Hermitian.
The origin of reggeization, including the gluon reggeization, is in the $t$-channel unitarity. On the other hand the JIMWLK/KLWMIJ approach is formulated in the $s$-channel and has the most natural interpretation as the evolution of the $s$-channel wave function. It is therefore interesting to see how the $t$- and $s$-channel pictures are interrelated on the example of the gluon reggeization. The two pictures lead to an equivalent description, if the evolution in rapidity can be described by a hermitian Hamiltonian.
We note that the bootstrap equation was used in \cite{levin} to find the generalization of the gluon reggeization for the running QCD coupling case.
The JIMWLK/KLWMIJ Hamiltonian is of course modified when the running is taken into account.
The practical conclusion from our discussion in this subsection is the following: if we know how to include the running of the QCD coupling in $\beta$ of \eq{beta}, then \eq{bfklno} can be used to generalize the full BFKL Hamiltonian to the running coupling case. This idea has been explored in \cite{levin} and led to the ``triumvirate'' structure \cite{KOVWE}. The same form of the Hamiltonian was derived recently in \cite{KOVWE} by direct summation of the Feynman diagrams in the dipole approximation.
\eq{bfklno} with
\begin{equation}\label{interceptf}
\omega_q=\frac{1}{2\pi}\int_\mu d^2k\,\frac{\bar{\alpha}\left( (q - k)^2\right) \,\bar{\alpha}\left( k^2\right) }{
\bar{\alpha}\left(q^2\right)}\frac{q^2}{k^2(q-k)^2}
\end{equation}
gives the correct generalization of the JIMWLK/KLWMIJ Hamiltonian for the running QCD coupling, both for the linear and the non-linear terms, at any value of $N_c$.
\section{Screening corrections}
As we have seen above, expansion in powers of $\delta/\delta\rho$ is the expansion in number of gluons exchanged in the $t$-channel. For example the linear approximation of eq.(\ref{regg}) allows only one gluon exchange between the partons of the projectile and the target. Diagrammatically the approximate diagonalization discussed in the previous section corresponds to summing the diagrams of Fig.\ref{f1}. The evolution therefore allows for emission of an arbitrary number of gluons in the wave function of the projectile, but only for a single gluon exchange between the evolved projectile and the target. The terms quadratic in $\delta/\delta\rho$ are represented in Fig.\ref{f2}. The vanishing of the last diagram on Fig.\ref{f2} leaves the remaining contributions local in transverse coordinate and thereby ensures the reggeization of the two gluon exchange. The third order terms are not local anymore. In particular one encounters the diagrams of Fig.\ref{f3} which give a non vanishing bilocal contribution.
\begin{figure}
\includegraphics{Fig1b.eps}
\caption{\label{f1} KLWMIJ evolution in the approximation where only one gluon exchange in the $t$-channel is allowed.}
\end{figure}
\begin{figure}
\includegraphics{Fig2b.eps}
\caption{\label{f2}KLWMIJ evolution in the approximation which allows exchanges of up to two gluons in the $t$-channel.}
\end{figure}
\begin{figure}
\includegraphics{Fig3b.eps}
\caption{\label{f3}A contribution to the eigenfunction with three $t$-channel gluon exchanges which is nonlocal in the transverse plane.}
\end{figure}
\begin{figure}
\includegraphics{Fig4b.eps}
\caption{\label{f4} Reggeization diagrams with an arbitrary number of $t$-channel gluons that couple to the same parton in the projectile.}
\end{figure}
Since the quadratic terms in the expansion of the wave function reggeize in the same way as the linear term, the significant corrections in a sense start from the cubic order. There is no reggeization of the third order terms in eq.(\ref{1glue}) nor eq.(\ref{regf}), which is another way of stating that three gluon exchange is sensitive to screening corrections. It is thus interesting to calculate the terms of order $(\delta/\delta\rho)^3$ in the wave function. Although this can be done, the calculation is somewhat tedious. In this section we will perform a similar calculation, but one that is more easily implemented and has a somewhat more direct meaning in the framework of $H_{KLWMIJ}$. Instead of expanding the wave function $\Psi[R]$ in powers of $\delta/\delta\rho$ we will expand it in powers of $R-1$. This is similar in spirit to \cite{reggeon}. Since the complete KLWMIJ wave function must depend on $R$, this type of expansion is more direct.
To leading order the expansion in $R-1$ and $\delta/\delta\rho$ are equivalent. Beyond the leading order however they differ both on the calculational and conceptual levels.
Expansion in powers of $R-1$ corresponds to the expansion in the number of the projectile partons which participate in scattering. To linear order in $R-1$ our approximation allows only one parton in the projectile wave function to scatter off the target, but it can scatter by exchanging an arbitrary number of $t$-channel gluons. The diagrams which are resummed in this approximation are depicted in Fig.~\ref{f4}. Clearly the leading order expansion in $\delta/\delta\rho$ is subsumed in the leading order expansion in $R-1$, since if we allow only one $t$-channel gluon, we also allow only one parton of the projectile to scatter. In higher orders this is no longer the case. In this section we will calculate the $O[(R-1)^2]$ correction to the wave function of eq.(\ref{quark}).
Recall eq.(\ref{regf})
\begin{eqnarray}\label{hgaf}
&&H_{KLWMIJ}\int_u\phi_utr[\tau^aR_F(u)]=\frac{\alpha}{ 2\pi^2}\int_{u,z}K_{uuz}\phi_u\left\{tr[1-R^{\dagger}_{F}(z)R_{F}(u)]tr[\tau^aR_{F}(z)]+N_ctr\left[\tau^a\left(R_{F}(u)-R_{F}(z)\right)\right]\right\}\nonumber\\
&&=\frac{\alpha}{ 2\pi^2}\int_{u,z}K_{uuz}\phi_u\left\{N_ctr\left[\tau^a\left(R_{F}(u)-R_{F}(z)\right)\right]+tr[R_{F}(u)-1]tr[\tau^aR_{F}(z)]-tr[R^{\dagger}_{F}(z)-1]tr[\tau^aR_{F}(z)]+...
\right\}
\end{eqnarray}
The linear term on the RHS is the familiar reggeization term. We thus see that reggeization is a property of an arbitrary number of gluon exchanges, as long as all the $t$-channel gluons couple to {\bf the same parton} (in this case a quark) in the projectile.
The second term on RHS is second order in $R-1$. Note that in terms of the expansion in $\delta/\delta\rho$ it actually starts with the cubic order. Clearly, to correct the eigenfunction we have to add to our original $G^a_F$ a term of second order in $R-1$. Guided by the form of the RHS of eq.(\ref{hgaf}) we take the wave function to second order of the form
\begin{eqnarray}
G^a_F&=&G^a_1+G^a_2\\
G^a_1&=&\int_u\phi_utr[\tau^aR_F(u)]\nonumber\\
G^a_2&=&\frac{1}{N_c}\int_{u,v}\psi(u,v)tr[R^{\dagger}_F(v)-1]tr[\tau^aR_F(u)]+\frac{1}{N_c}\int_{u,v}\tilde{\psi}(u,v)tr[R_F(v)-1]tr[\tau^aR_F(u)]\nonumber
\end{eqnarray}
We will see that this ansatz is general enough to satisfy the eigenvalue equation to second order in the large $N_c$ limit.
In the rest of this section all the matrices $R$ are in the fundamental representation and we drop the subscript $F$ for convenience.
The action of $H_{KLWMIJ}$ on $G$ is straightforward to calculate. After some algebra we find
\begin{eqnarray}
&&H_{KLWMIJ}\int \psi(u,v)tr[R^\dagger_v-1]tr[\tau^aR_u]= \nonumber \\
&=&\frac{\alpha}{ 2\pi^2}\int_{u,v,z}\psi(u,v)\Bigg\{K_{uvz}\Big\{tr[R^\dagger_z R^\dagger_vR_z\tau^aR_u]+tr[R^\dagger_zR_u\tau^aR_zR^\dagger_v]-tr[R^\dagger_v\{R_u,\tau^a\}]\Big\}\nonumber \\&&
-K_{vvz}\Big\{tr(R^\dagger_z)tr[(R^\dagger_v-1)(R_z-1)]+tr(R^\dagger_z-1)tr(R^\dagger_v-1)\nonumber\\
&&+tr(R^\dagger_z-1)tr(R^\dagger_z-1)+N_c[tr(R^\dagger_z-1)+tr(R_z-1)]\Big\}tr(\tau^aR_u)\nonumber \\ &&-K_{uuz}\Big\{tr[R^\dagger_v-1]tr[(R^\dagger_z-1)(R_u-1)]+tr[R^\dagger_v-1]tr[R^\dagger_z-1]\nonumber\\
&&+tr[R^\dagger_v-1]tr[R_u-1]+N_ctr[R^\dagger_v-1]\Big\}tr[\tau^aR_u]\nonumber \\&&
+N_cK_{uuz}tr[R^\dagger_v-1]tr[\tau^aR_u]\Bigg\}
\end{eqnarray}
Keeping only $O[(R-1)^2]$ terms and taking the large $N_c$ limit for simplicity, we have
\begin{eqnarray}
&& H_{KLWMIJ}\int\psi(u,v)tr[R^\dagger_v-1]tr[\tau^aR_u]= \\
&&=\frac{\alpha}{ 2\pi^2}\int_{uvz}N_c\Bigg\{\psi(u,z)K_{vvz}tr[1-R_v]tr[\tau^aR_u]+\Big[\psi(u,z)K_{vvz}+\psi(z,v)K_{uuz}-\psi(u,v)K_{uuz}\Big]tr[1-R^\dagger_v]tr[\tau^aR_u]\Bigg\}\nonumber
\end{eqnarray}
Similarly to quadratic order in the large $N_c$ limit
\begin{eqnarray}
&&H_{KLWMIJ}\int\tilde{\psi}(u,v)tr[R_v-1]tr[\tau^aR_u]= \\
&=&\frac{\alpha}{ 2\pi^2}\int_{uvz}N_c\left\{\tilde{\psi}(u,z)K_{vvz}tr[1-R^\dagger_v]tr[\tau^aR_u]+\left(\tilde{\psi}(u,z)K_{vvz}+\tilde{\psi}(z,v)K_{uuz}-\tilde{\psi}(u,v)K_{uuz}\right)tr[1-R_v]tr[\tau^aR_u]\right\}\nonumber
\end{eqnarray}
Finally we have
\begin{eqnarray}
HG^a_2&=&\frac{\alpha}{ 2\pi^2}\int_{u,v,z}\left\{\psi(u,z)K_{vvz}+\tilde{\psi}(u,z)K_{vvz}+\psi(z,v)K_{uuz}-\psi(u,v)K_{uuz}\right\}tr[1-R^\dagger_v]tr[\tau^aR_u]\nonumber\\
&+&\frac{\alpha}{ 2\pi^2}\int_{u,v,z}\left\{\tilde{\psi}(u,z)K_{vvz}+\psi(u,z)K_{vvz}+\tilde{\psi}(z,v)K_{uuz}-\tilde{\psi}(u,v)K_{uuz}\right\}tr[1-R_v]tr[\tau^aR_u]
\end{eqnarray}
We determine the functions $\psi$ and $\tilde\psi$ by solving the eigenvalue equation
\begin{equation}
H_{KLWMIJ}G^a_F=\omega_QG^a_F
\end{equation}
with the eigenvalue $\omega_Q$ given by eq.(\ref{intercept}). As we have seen in eq(\ref{hgaf}) the linear terms reggeize and cancel between the left and right hand side. For the quadratic terms we are then left with the equation
\begin{eqnarray}
&&\frac{\alpha}{ 2\pi^2}\int_{u,z}K_{uuz}\phi_u[tr(1-R^\dagger_z)+tr(1-R_u)]tr(\tau^aR_z)\nonumber \\
&+&\frac{\alpha}{ 2\pi^2}\int_{u,v,z}\left\{\psi(u,z)K_{vvz}+\tilde{\psi}(u,z)K_{vvz}+\psi(z,v)K_{uuz}-\psi(u,v)K_{uuz}\right\}tr[1-R^\dagger_v]tr[\tau^aR_u]\nonumber\\
&+&\frac{\alpha}{ 2\pi^2}\int_{u,v,z}\left\{\tilde{\psi}(u,z)K_{vvz}+\psi(u,z)K_{vvz}+\tilde{\psi}(z,v)K_{uuz}-\tilde{\psi}(u,v)K_{uuz}\right\}tr[1-R_v]tr[\tau^aR_u]\nonumber \\
&=&\frac{\omega_Q}{ N_c}\int_{u,v}[\psi(u,v)tr(R^\dagger_v-1)tr(\tau^aR_u)+\tilde{\psi}(u,v)tr(R_v-1)tr(\tau^aR_u)]
\end{eqnarray}
with $\phi_u=\exp \{iQu\}$.
This reduces to the equation for $\psi$ and $\tilde{\psi}$:
\begin{eqnarray}\label{correction}
\frac{\alpha}{ 2\pi^2}&&\int_z\left[\psi(u,v)K_{uuz}-\psi(u,z)K_{vvz}-\psi(z,v)K_{uuz}-\tilde{\psi}(u,z)K_{vvz}\right]-\frac{\omega_Q}{ N_c}\psi(u,v)=\frac{\alpha}{ 2\pi^2}\int_zK_{zzu}\phi_z\delta(u-v)\\
\frac{\alpha}{ 2\pi^2}&&\int_z\left[\tilde{\psi}(u,v)K_{uuz}-\tilde{\psi}(u,z)K_{vvz}-\tilde{\psi}(z,v)K_{uuz}-\psi(u,z)K_{vvz}\right]-\frac{\omega_Q}{ N_c}\tilde{\psi}(u,v)=\frac{\alpha}{ 2\pi^2}K_{vvu}\phi_v
\end{eqnarray}
These equations are easily solved in momentum space.
Defining
\begin{equation}
\psi(u,v)=\int d^2kd^2qe^{i(qu+kv)}\psi(q,k); \ \ \ \ \
\psi(q,k)=\int d^2ud^2ve^{-i(qu+kv)}\psi(u,v)
\end{equation}
it is straightforward to Fourier transform eq.(\ref{correction}). For example
\begin{equation}
\int_{uv}e^{-i(qu+kv)}\int_zK_{vvz}\psi(u,z)= \ln \frac{\Lambda^2}{ k^2}\psi(q,k); \ \ \ \ \ etc.
\end{equation}
The ultraviolet cutoff $\Lambda$ is needed to regularize the divergence in the Fourier transform of $1/x^2$.
Similarly Fourier transforming the rest of the terms we get
\begin{eqnarray}
\left(\ln \frac{\Lambda^2}{ Q^2}-\ln \frac{\Lambda^2}{ q^2}-\ln \frac{\Lambda^2}{ k^2}\right)\psi(q,k)-\ln \frac{\Lambda^2}{ k^2}\tilde{\psi}(q,k)&=&\ln \frac{\Lambda^2}{ Q^2}\delta^2(Q-q-k) \\
\left(\ln \frac{\Lambda^2}{ Q^2}-\ln \frac{\Lambda^2}{ q^2}-\ln \frac{\Lambda^2}{ k^2}\right)\tilde{\psi}(q,k)-\ln \frac{\Lambda^2}{ k^2}\psi(q,k)&=&\ln \frac{\Lambda^2}{ q^2}\delta^2(Q-q-k)
\end{eqnarray}
The solution is found straightforwardly as
\begin{eqnarray}
\psi(q,k)&=&\frac{\ln \frac{\Lambda^2}{ Q^2}-\ln \frac{\Lambda^2}{ k^2}}{ \ln \frac{\Lambda^2}{ Q^2}-\ln \frac{\Lambda^2}{ q^2}-2\ln \frac{\Lambda^2}{ k^2}}\delta^2(Q-q-k)\rightarrow_{\Lambda\rightarrow\infty}0\nonumber\\
\tilde\psi(q,k)&=&\frac{\ln \frac{\Lambda^2}{ q^2}+\ln \frac{\Lambda^2}{ k^2}}{ \ln \frac{\Lambda^2}{ Q^2}-\ln \frac{\Lambda^2}{ q^2}-2\ln \frac{\Lambda^2}{ k^2}}\delta^2(Q-q-k)\rightarrow_{\Lambda\rightarrow\infty}-\delta^2(Q-q-k)
\end{eqnarray}
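The solution of the linear system above and the limits quoted in the last step are easily verified symbolically; a short sympy sketch (with $\Lambda^2$ and the momenta squared treated as positive symbols, and the overall $\delta^2(Q-q-k)$ stripped off) is
\begin{verbatim}
import sympy as sp

L2, Q2, q2, k2 = sp.symbols('Lambda2 Q2 q2 k2', positive=True)
psi, psit = sp.symbols('psi psitilde')

A, B, C = sp.log(L2/Q2), sp.log(L2/q2), sp.log(L2/k2)
sol = sp.solve([(A - B - C)*psi - C*psit - A,
                (A - B - C)*psit - C*psi - B], [psi, psit])

print(sp.simplify(sol[psi]  - (A - C)/(A - B - 2*C)))   # 0
print(sp.simplify(sol[psit] - (B + C)/(A - B - 2*C)))   # 0
print(sp.limit(sol[psi], L2, sp.oo), sp.limit(sol[psit], L2, sp.oo))  # 0  and  -1
\end{verbatim}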
Transforming back to the configuration space we find the eigenfunction to second order in $R-1$ and in the large $N_c$ limit
\begin{equation}
G^2_F=\int_ue^{iQu}tr[\tau^aR_F(u)]\Big[1-\frac{1}{ N_c}tr[R_F(u)-1]\Big]
\end{equation}
\section{Conclusions}
To summarize, we have discussed the relationship between the KLWMIJ Hamiltonian and its BFKL limit. Eigenfunctions of $H_{KLWMIJ}$, when expanded to leading order in $\delta/\delta\rho$, become eigenfunctions of $H_{BFKL}$. It is however difficult to determine which eigenfunctions of $H_{BFKL}$ become {\it normalizable} eigenfunctions of $H_{KLWMIJ}$ when the Taylor series is resummed to all orders.
The relation of course pertains only to functions (functionals) which are expandable in Taylor series in $\delta/\delta\rho$. We know from the general discussion of \cite{yinyang} that $H_{KLWMIJ}$ also has eigenfunctions which do not have such an expansion. Those are states close to the black disk limit.
For those states the pertinent expansion is in powers of $\rho$. We note that $H_{BFKL}$ is self dual under the transformation
\begin{equation}\label{duality}
\frac{\delta}{\delta\rho(x)}\leftrightarrow \int_y\frac{i}{\partial^2}(x-y)\rho(y)
\end{equation}
It thus contains eigenstates whose eigenfunctions are monomials in $\rho$ rather than $\delta/\delta\rho$. The self duality ensures that those have exactly the same spectrum as the eigenfunctions we discussed in the bulk of this paper, even though these states are indeed formally close to the black disk limit.
The duality transformation eq.(\ref{duality}) is the linearized version of eq.(\ref{densedilute}) which transforms $H_{KLWMIJ}$ to $H_{JIMWLK}$. Thus the same relation as discussed above exists between the second set of the eigenfunctions of $H_{BFKL}$ and the eigenfunctions of $H_{JIMWLK}$.
We have also discussed the gluon reggeization and the appearance of the bootstrap condition in the KLWMIJ formalism. The bootstrap condition is a direct consequence of the Hermiticity of $H_{KLWMIJ}$ and as such is a necessary attribute of the approach. Any modification of the emission kernel $K_{xyz}$ does not ruin the bootstrap property.
Since the JIMWLK picture is an $s$-channel one while the reggeization stems from the $t$-channel unitarity,
we conclude that the Hermiticity of $H_{KLWMIJ}$ is the property that reconciles these two approaches.
Further we have discussed the expansion of the eigenfunctions in powers of $R-1$ rather than in powers of $\delta/\delta\rho$. This corresponds to the expansion in the number of partons in the projectile wave function which participate in the scattering. We have calculated the $O[(R-1)^2]$ correction to the reggeized gluon wave function and found that it has a very simple form in the large $N_c$ limit.
We note in this respect that while the eigenvalue determines the rapidity dependence of the scattering amplitude, the functional form of the wave function determines the impact factor. This follows from the expansion eq.(\ref{psii}) which can be written as the overlap of the "wave function" characterizing the projectile hadron at the initial rapidity and the eigenfunction of the evolution Hamiltonian
\begin{equation}
\gamma_i=\langle W_{Y_0}|\Psi_s\rangle
\end{equation}
Thus for example the eigenfunction of eq.(\ref{1glue}) will not have zero overlap with a single quark state, while that of eq.(\ref{quark}) will not overlap with a gluon projectile, since their incoming color representations cannot be combined into a singlet. Thus in order for the amplitudes of both quark and gluon projectiles to have the same energy dependence, there must be a degeneracy in the spectrum of $H_{KLWMIJ}$. We indeed saw this degeneracy explicitly since both eigenfunctions eq.(\ref{1glue}) and eq.(\ref{quark}) correspond to the same eigenvalue. On the formal level this degeneracy is the consequence of the fact that the perturbative vacuum breaks the global symmetry of $H_{KLWMIJ}$ as $SU_L(N_c)\otimes SU_R(N_c)\rightarrow SU_V(N_c)$, the reggeized gluon being ``the Goldstone boson'' associated with this breaking \cite{reggeon}.
\section*{Acknowledgements}
One of us (E.L.) thanks the Physics Department of the University of Connecticut for its hospitality and creative atmosphere during his visit when this work was done.
This work was supported in part by the DOE grant DE-FG02-92ER40716.00 and Fondecyt (Chile) grant \# 1100648.
\section{Appendix. Derivation of $H_{BFKL}$ from $H_{KLWMIJ}$}
In order to derive the BFKL Hamiltonian we have to expand $R^{ab}(x)$ and $J^a_{L(R)}(x)$ in powers of $\frac{\delta}{\delta \rho^a(x,x^-)}$. The expansion of $R$ is straightforward. To expand $J_R$ and $J_L$ we will use their commutation relations with $R(x)$. We start with the KLWMIJ Hamiltonian
\begin{equation}
H_{KLWMIJ}=\frac{\alpha}{ 2\pi^2}\int_zQ^{a \dagger}_i(z)Q^a_i(z)
\end{equation}
Expansion of $R$ is straightforward. To second order we have
\begin{equation}
R^{\mu \beta}_u=\delta^{\mu \beta}+T^{\mu \beta}_c\int_0^1du^{-}_1\frac{\delta}{\delta \rho^c(u,u^{-}_1)}+T^{\mu \lambda}_cT^{\lambda \beta}_d\int_0^1du^{-}_1\int_0^{u^{-}_1}du^{-}_2\frac{\delta}{\delta \rho^c(u,u^{-}_1)}\frac{\delta}{\delta \rho^d(u,u^{-}_2)}
\end{equation}
To expand $J_L$ we use the commutation relation
\begin{equation}
[J^a_L(x),R(y)]=T^aR(y)\delta^2(x-y)
\end{equation}
It is easy to check that to second order in $\delta/\delta\rho$ the following expression satisfies the correct commutation relation
\begin{equation}
J^{a}_L (u) = -\int_0^1du^-_1\rho^{a} (u,u^{-}_1)-T^{\chi \kappa}_a\int_0^1du^{-}_1\int_0^{u^{-}_1}du^{-}_2\rho^{\chi}(u,u^{-}_2)\frac{\delta}{\delta \rho^{\kappa}(u,u^{-}_1)}
\end{equation}
Now using the fact that $J^a_L(u)=R^{ab}_uJ^b_R(u)$ we have also
\begin{equation}
J^a_R(u)=-\int_0^1du^{-}_1\rho^a(u,u^{-}_1)+T^{\chi \kappa}_a\int_0^1du^{-}_1\int_{u^{-}_1}^1du^{-}_2\rho^\chi(u,u^{-}_2)\frac{\delta}{\delta \rho^\kappa(u,u^{-}_1)}
\end{equation}
Now the expansion of the amplitude $Q$ reads
\begin{eqnarray}
Q^a_i(z)&=&\int_x\frac{(x-z)_i}{(x-z)^2}\left[R^{ab}(z)-R^{ab}(x)\right]J^b_R(x) \nonumber \\
&=&\int_x\frac{(x-z)_i}{(x-z)^2}T^{\alpha \beta}_a\left[\int_0^1dx^{-}_1\int_0^1dx^{-}_2\rho^\alpha(x,x^{-}_1)\frac{\delta}{\delta \rho^\beta(x,x^{-}_2)}-\int_0^1dx^{-}_1\int_0^1dz^{-}_1\rho^\alpha(x,x^{-}_1)\frac{\delta}{\delta \rho^\beta(z,z^{-}_1)}\right]\nonumber\\
&=&\int_x\frac{(x-z)_i}{(x-z)^2}T^{\alpha \beta}_a\left[\rho^\alpha_x\left(\frac{\delta}{\delta \rho^\beta_x}-\frac{\delta}{\delta \rho^\beta_z}\right)\right]
\end{eqnarray}
with
$\int_0^1dx^-\rho^\alpha(x,x^-)\equiv\rho^\alpha_x$ and $\int_0^1dx^{-}\frac{\delta}{\delta \rho^\alpha(x,x^-)}\equiv\frac{\delta}{\delta \rho^\alpha_x}$.
Finally we can write the BFKL Hamiltonian as in eq.(\ref{bfkl}).
\section{Run time experiments}
We document the run time of our proposed algorithm and compare it to that of our baselines, using the implementations provided by the authors (\cf footnote 1 in the main paper).
The timings are for images of size 512$\times$512 pixels, and running all iterative methods for 500 iterations. We average over 15 runs on a single Nvidia GeForce GTX 1080Ti. The results are shown in Tab.~\ref{tab:runtime-study}.
Naturally, one-shot feed-forward methods are a lot faster to compute, at the cost of a bit lower image quality.
Among the iterative methods, the differences are practically negligible. Ours is on par with the two competitors, adding \textless10\% computational overhead over Gatys' original method, while being slightly faster than MM due to a more efficient implementation.
\begin{table}[!h]
\setlength{\tabcolsep}{3pt}
\renewcommand{\arraystretch}{1.4}
\centering
\vspace{0.5em}
\begin{tabularx}{\linewidth}{|*6{>{\centering\arraybackslash}X}|}
\hline
AdaIN* & Gatys & MM & OST* & WCT* & Ours \\
\hline
0.58s & 30.51s & 35.49s & 2.40s & 1.93s & \textbf{33.59s} \\
\hline
\end{tabularx}
\caption{Run time of different \ac{nst} methods, in seconds.}
\label{tab:runtime-study}
\end{table}
\section{Influence of the learning rate}
We further investigate the influence of varying learning rates. As can be seen from Fig.~\ref{fig:differentlr}, increasing the learning rate has a similar effect to reducing the weight $\alpha$ of the content loss in \eqref{eq:nstloss}. This is expected, as the style loss can be decreased more rapidly when disregarding the ``constraint'' to preserve the content, encoded in the content loss. %
With too high a learning rate, only barely recognisable traces of the image content are preserved, as can be seen towards the right side of Fig.~\ref{fig:differentlr}. Also, training becomes increasingly unstable; as so often for deep networks, one must balance learning speed against learning success.
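For completeness, the following PyTorch-style sketch shows where the learning rate and the content weight $\alpha$ enter such an iterative optimisation; it is only meant to make the analogy above explicit (the pixel-level losses are simplified stand-ins for the actual feature-based terms of \eqref{eq:nstloss}, and Adam is assumed as the optimiser, which need not match our exact implementation).
\begin{verbatim}
import torch

def gram(x):                                   # channel-channel correlations; x: (C, H, W)
    c = x.flatten(1)
    return c @ c.t() / c.shape[1]

def content_loss(x, content_img):              # stand-in for the content term
    return torch.mean((x - content_img) ** 2)

def style_loss(x, style_img):                  # stand-in for the style term
    return torch.mean((gram(x) - gram(style_img)) ** 2)

def stylize(content_img, style_img, alpha=1.0, lr=0.1, iters=500):
    x = content_img.clone().requires_grad_(True)        # optimise the image itself
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        loss = alpha * content_loss(x, content_img) + style_loss(x, style_img)
        loss.backward()
        opt.step()            # a larger lr pushes x away from the content image
    return x.detach()         # in fewer steps, much like a smaller alpha does
\end{verbatim}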
\begin{figure}[!htbp]
\centering
\begin{subfigure}[b]{0.16\linewidth}
\centering
\includegraphics[width=\textwidth]{images/different_lr/dancing.jpg}
\caption{Content}
\end{subfigure}
\begin{subfigure}[b]{0.16\linewidth}
\centering
\includegraphics[width=\textwidth]{images/different_lr/smeared_0096.jpg}
\caption{Style}
\end{subfigure}
\begin{subfigure}[b]{0.16\linewidth}
\centering
\includegraphics[width=\textwidth]{images/different_lr/test4_lr_0.01.jpg}
\caption{$\text{lr}=0.01$}
\end{subfigure}
\begin{subfigure}[b]{0.16\linewidth}
\centering
\includegraphics[width=\textwidth]{images/different_lr/test4_lr_0.1.jpg}
\caption{$\text{lr}=0.1$}
\end{subfigure}
\begin{subfigure}[b]{0.16\linewidth}
\centering
\includegraphics[width=\textwidth]{images/different_lr/test4_lr_0.2.jpg}
\caption{$\text{lr}=0.2$}
\end{subfigure}
\begin{subfigure}[b]{0.16\linewidth}
\centering
\includegraphics[width=\textwidth]{images/different_lr/test4_lr_0.3.jpg}
\caption{$\text{lr}=0.3$}
\end{subfigure}
\begin{subfigure}[b]{0.16\linewidth}
\centering
\includegraphics[width=\textwidth]{images/different_lr/modern.jpg}
\caption{Content}
\end{subfigure}
\begin{subfigure}[b]{0.16\linewidth}
\centering
\includegraphics[width=\textwidth]{images/different_lr/udnie.jpg}
\caption{Style}
\end{subfigure}
\begin{subfigure}[b]{0.16\linewidth}
\centering
\includegraphics[width=\textwidth]{images/different_lr/test3_lr_0.01.jpg}
\caption{$\text{lr}=0.01$}
\end{subfigure}
\begin{subfigure}[b]{0.16\linewidth}
\centering
\includegraphics[width=\textwidth]{images/different_lr/test3_lr_0.1.jpg}
\caption{$\text{lr}=0.1$}
\end{subfigure}
\begin{subfigure}[b]{0.16\linewidth}
\centering
\includegraphics[width=\textwidth]{images/different_lr/test3_lr_0.2.jpg}
\caption{$\text{lr}=0.2$}
\end{subfigure}
\begin{subfigure}[b]{0.16\linewidth}
\centering
\includegraphics[width=\textwidth]{images/different_lr/test3_lr_0.3.jpg}
\caption{$\text{lr}=0.3$}
\end{subfigure}
\caption{Impact of learning rate on the output of our \ac{cmd} method.}
\label{fig:differentlr}
\end{figure}
\newpage
\section{Additional qualitative results}
\vspace{-0.6em}
\setlength{\tabcolsep}{1pt}
\renewcommand{\arraystretch}{0.2}
\begin{figure}[!htbp]
\begin{center}
\begin{tabularx}{\textwidth}{*{8}{>{\centering\arraybackslash}X}}
\includegraphics[width=0.124\textwidth]{images/additional_results/076.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/candy.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/en_campo_gris.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kandinsky2_small.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/picasso.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/richter2.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/The_Scream_small.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/vangogh_starry_night_small.jpg}\\
\includegraphics[width=0.124\textwidth]{images/additional_results/12.png_076.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/12.png_candy.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/12.png_en_campo_gris.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/12.png_kandinsky2.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/12.png_picasso.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/12.png_richter2.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/12.png_The_Scream.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/12.png_vangogh_starry_night.jpg.jpg}\\
\includegraphics[width=0.124\textwidth]{images/additional_results/chicago.jpg_076.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/chicago.jpg_candy.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/chicago.jpg_en_campo_gris.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/chicago.jpg_kandinsky2.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/chicago.jpg_picasso.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/chicago.jpg_richter2.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/chicago.jpg_The_Scream.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/chicago.jpg_vangogh_starry_night.jpg.jpg}\\
\includegraphics[width=0.124\textwidth]{images/additional_results/dancing.jpg_076.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/dancing.jpg_candy.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/dancing.jpg_en_campo_gris.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/dancing.jpg_kandinsky2.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/dancing.jpg_picasso.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/dancing.jpg_richter2.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/dancing.jpg_The_Scream.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/dancing.jpg_vangogh_starry_night.jpg.jpg}\\
\includegraphics[width=0.124\textwidth]{images/additional_results/kodim02.png_076.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim02.png_candy.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim02.png_en_campo_gris.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim02.png_kandinsky2.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim02.png_picasso.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim02.png_richter2.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim02.png_The_Scream.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim02.png_vangogh_starry_night.jpg.jpg}\\
\includegraphics[width=0.124\textwidth]{images/additional_results/kodim04.png_076.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim04.png_candy.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim04.png_en_campo_gris.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim04.png_kandinsky2.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim04.png_picasso.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim04.png_richter2.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim04.png_The_Scream.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim04.png_vangogh_starry_night.jpg.jpg}\\
\includegraphics[width=0.124\textwidth]{images/additional_results/kodim05.png_076.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim05.png_candy.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim05.png_en_campo_gris.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim05.png_kandinsky2.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim05.png_picasso.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim05.png_richter2.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim05.png_The_Scream.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim05.png_vangogh_starry_night.jpg.jpg}\\
\includegraphics[width=0.124\textwidth]{images/additional_results/kodim18.png_076.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim18.png_candy.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim18.png_en_campo_gris.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim18.png_kandinsky2.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim18.png_picasso.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim18.png_richter2.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim18.png_The_Scream.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim18.png_vangogh_starry_night.jpg.jpg}\\
\includegraphics[width=0.124\textwidth]{images/additional_results/kodim19.png_076.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim19.png_candy.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim19.png_en_campo_gris.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim19.png_kandinsky2.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim19.png_picasso.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim19.png_richter2.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim19.png_The_Scream.jpg.jpg}
& \includegraphics[width=0.124\textwidth]{images/additional_results/kodim19.png_vangogh_starry_night.jpg.jpg}\\
\end{tabularx}
\end{center}
\vspace{-1em}
\caption{Additional qualitative style transfer results of our \ac{cmd} algorithm. All examples shown also formed part of the user study. Best viewed on screen. Please zoom in to appreciate style details.}
\end{figure}
\newpage
\section{Additional qualitative comparison}
\setlength{\tabcolsep}{1pt}
\begin{figure}[htbp]
\begin{center}
\begin{tabularx}{\textwidth}{*5{>{\centering\arraybackslash}X}}
\includegraphics[width=0.195\textwidth]{images/review_results/contentstyle/1_cs.jpg}
& \includegraphics[width=0.195\textwidth]{images/review_results/gatys/4.jpg}
& \includegraphics[width=0.195\textwidth]{images/review_results/huang/4.png}
& \includegraphics[width=0.195\textwidth, trim=0 4 0 4, clip]{images/review_results/li/4.png}
& \includegraphics[width=0.195\textwidth, trim=8 0 8 0, clip]{images/review_results/ours/c4_s1_1e7.jpg}\\
\includegraphics[width=0.195\textwidth]{images/review_results/contentstyle/2_cs.jpg}
& \includegraphics[width=0.195\textwidth]{images/review_results/gatys/1.jpg}
& \includegraphics[width=0.195\textwidth]{images/review_results/huang/1.png}
& \includegraphics[width=0.195\textwidth, trim=0 4 0 4, clip]{images/review_results/li/1.png}
& \includegraphics[width=0.195\textwidth, trim=8 0 8 0, clip]{images/review_results/ours/c1_s2_1e7.jpg}\\
\includegraphics[width=0.195\textwidth]{images/review_results/contentstyle/4_cs.jpg}
& \includegraphics[width=0.195\textwidth]{images/review_results/gatys/5.jpg}
& \includegraphics[width=0.195\textwidth]{images/review_results/huang/5.png}
& \includegraphics[width=0.195\textwidth, trim=0 4 0 4, clip]{images/review_results/li/5.png}
& \includegraphics[width=0.195\textwidth, trim=8 0 8 0, clip]{images/review_results/ours/c5_s3_1e7.jpg}\\
\includegraphics[width=0.195\textwidth]{images/review_results/contentstyle/3_cs.jpg}
& \includegraphics[width=0.195\textwidth]{images/review_results/gatys/8.jpg}
& \includegraphics[width=0.195\textwidth]{images/review_results/huang/8.png}
& \includegraphics[width=0.195\textwidth, trim=0 4 0 4, clip]{images/review_results/li/8.png}
& \includegraphics[width=0.195\textwidth, trim=8 0 8 0, clip]{images/review_results/ours/c8_s5_1e7.jpg}\\
\includegraphics[width=0.195\textwidth]{images/review_results/contentstyle/5_cs.jpg}
& \includegraphics[width=0.195\textwidth]{images/review_results/gatys/9.jpg}
& \includegraphics[width=0.195\textwidth]{images/review_results/huang/9.png}
& \includegraphics[width=0.195\textwidth, trim=0 4 0 4, clip]{images/review_results/li/9.png}
& \includegraphics[width=0.195\textwidth, trim=8 0 8 0, clip]{images/review_results/ours/c9_s9_1e7.jpg}\\
\includegraphics[width=0.195\textwidth]{images/review_results/contentstyle/6_cs.jpg}
& \includegraphics[width=0.195\textwidth]{images/review_results/gatys/15.jpg}
& \includegraphics[width=0.195\textwidth]{images/review_results/huang/15.png}
& \includegraphics[width=0.195\textwidth, trim=0 4 0 4, clip]{images/review_results/li/15.png}
& \includegraphics[width=0.195\textwidth, trim=8 0 8 0, clip]{images/review_results/ours/c15_s10_1e7.jpg}\\
\rule{0pt}{14pt}\small (a) Input
& \rule{0pt}{12pt}\small (b) Gatys
& \rule{0pt}{12pt}\small (c) AdaIN
& \rule{0pt}{12pt}\small (d) WCT
& \rule{0pt}{12pt}\small (e) Ours
\end{tabularx}
\end{center}
\caption{Style transfer results with our algorithm, and with one competing method per category (MM: AdaIn~\cite{huang2017arbitrary}; MMD: Gatys~\cite{gatys2016image}; OT: WCT~\cite{li2017universal}).
The displayed results for those methods were made available in \cite{jing2019neural}. Best viewed on screen. Please zoom in to appreciate style details.}
\end{figure}
\section{Introduction}
\vspace{-0.1em}
In 2017 \textit{Loving Vincent} was released, the first fully painted feature film with $>$65,000 frames. Indeed, every single frame is an oil painting drawn by one of over 100 artists. The creation of the movie was split into two steps. First, the entire movie was produced with real actors in front of a green screen, which was then replaced by Van Gogh paintings. In a second step, each frame was painted over by an artist with the techniques and style of Van Gogh, which took over six years to complete.
Attempts to automate this form of texture synthesis, termed \textit{style transfer}, date back to at least the mid-90s \cite{heeger1995pyramid}. More recently, Gatys \etal \cite{gatys2016image} pioneered the idea of \textit{\ac{nst}}. It is based on the idea that the deep layers of a pre-trained \ac{cnn} encode high-level semantic information and are insensitive to the actual appearance, whereas shallow layers learn low-level features such as color, texture and brush patterns.
A fundamental question that arises in this context is how to define style. Li \etal \cite{li2017demystifying} proved that the loss introduced in \cite{gatys2016image} can be rewritten as a \ac{mmd}, offering an interpretation of style transfer as aligning feature distributions. In fact, most existing methods can be interpreted in this way.
This has led to a series of works all centered around aligning feature distributions of \acp{cnn}, linking style transfer to \ac{da}.
Here we look deeper into that interpretation. Once \ac{nst} is translated into distribution matching, it becomes amenable to a suite of tools developed to measure the divergence between probability distributions, such as integral probability metrics, $f$-divergences and \ac{ot}.
Divergences $d(P,Q)$ between two distributions (more precisely, probability measures) are in general not metrics, but they should fulfil the weaker conditions of \emph{(i)} non-negativity: $d(P,Q)\!\geq\!0$; and \emph{(ii)} identity of indiscernibles: $d(P,Q)\!=\!0\text{ iff }P\!=\!Q$.
However, \textit{in the light of feature distributions, existing style transfer methods suffer from rather elementary theoretical limitations}.
Broadly, there are two schools. Either the distributions are unrestricted, but the discrepancy between them is measured without adhering to the law of indiscernibles \cite{gatys2016image, li2017demystifying, huang2017arbitrary, risser2017stable}; or the distributions are approximated roughly with simple functions, so that they admit closed-form solutions \cite{pmlr-v108-mroueh20a, kolkin2019style, li2017universal, lu2019closed}.
\vspace{0.5em}
Here, we show how to overcome these limitations with the help of the recently proposed framework of \acfp{cmd} \cite{zellinger2019robust}.
That (pseudo-)metric is based on the representation of distributions as moment sequences on compact intervals. In the limit, \ac{cmd} is an integral probability metric on the set of compactly supported distributions, so it satisfies the identity of indiscernibles (as well as non-negativity) by definition.
Importantly, in its dual formulation the \ac{cmd} is computationally efficient, and approximations can be seamlessly justified with an upper bound on the central moments \cite{zellinger2017central}.
In summary, we make the following contributions:
\emph{(i)}
we systematically categorize existing \ac{nst} methods according to their way of aligning distributions;
\emph{(ii)}
we make explicit the underlying approximations and highlight the corresponding limitations;
\emph{(iii)}
we propose a novel \ac{nst} algorithm based on the \acl{cmd}.
To our knowledge, our method is the first one that aligns style distributions in a rigorous and computationally efficient manner, with theoretically grounded approximation bounds.
Empirically, the method achieves a more perspicuous separation between artistic style and semantic content, and enables visually more compelling style transfer according to a user study with \textgreater50 participants.
\vspace{1em}
\section{Related work}
\vspace{0.5em}
\paragraph{Style Transfer} has been an active research topic in computer vision for at least two decades. Until recently it was based on hand-crafted features and styles. This includes stroke-based rendering \cite{kyprianidis2012state} to repaint an image with a set of brush strokes \cite{hertzmann1998painterly}, image quilting \cite{efros2001image} where texture is synthesized in small patches according to a segmentation map, or image analogies \cite{hertzmann2001image} that learn style filters in a supervised fashion. The shift to \acp{cnn} has given rise to \acl{nst}. Current \ac{nst} techniques can be categorized as being based on either image optimization or model optimization \cite{jing2019neural}. Methods in the first group iteratively transfer style to each new output image, following the seminal paper of \cite{gatys2016image}. That work first introduced the idea to match feature statistics of intermediate layers in a \ac{cnn}. Subsequent works explored different directions to improve the quality of stylization. Risser \etal \cite{risser2017stable} circumvent instabilities of the optimization by incorporating additional histogram and total variation losses.
To further enhance the preservation of low-level content such as edges, Li \etal \cite{li2017laplacian} add a Laplacian loss. In order to transfer style between semantically matching patches (\eg, from eyes of a dog to eyes of a cat), \cite{mechrez2018contextual} defines a loss that compares regions with similar semantic meaning. Similarly, \cite{li2016combining} use MRFs to find the nearest-neighbor patch in the feature space of the style image. Both require similar shapes and boundaries in the content and style images. Gatys \etal \cite{gatys2017controlling} also went on to add user-control for perceptual factors such as color or scale, \eg, by transferring style only in the luminance channel to preserve color. Recently, Kolkin \etal \cite{kolkin2019style} also incorporate user-defined spatial constraints, via appropriate weights in the cost function.
\vspace{0.5em}
Iterative optimization per image is comparatively slow. Model optimization methods instead employ feed-forward networks \cite{johnson2016perceptual,ulyanov2016instance} trained offline on large datasets, to achieve real-time style transfer.
Initially they were restricted to a fixed set of styles \cite{ulyanov2016texture, ulyanov2016instance, dumoulin2016learned, li2017diversified}. Later they were extended to handle unseen styles. Huang and Belongie \cite{huang2017arbitrary} propose an adaptive instance normalization layer that normalizes the content image with affine parameters from the style image, while Chen and Schmidt \cite{chen2016fast} define a swap layer that replaces content feature patches with matching style feature patches. However, there is a price to pay for fast feed-forward inference, as it does not reach the quality of iterative methods. Recently it has been shown that
adaptive instance normalization, as well as the whitening color transform \cite{li2017universal} are special cases of an \ac{ot} map between Gaussian distributions, thus providing some theoretical foundation for feed-forward models \cite{lu2019closed,pmlr-v108-mroueh20a}.
\paragraph{Domain Adaptation}
is a particular instance of transfer learning, \ie, distilling and transferring knowledge across different domains. \acf{da} utilizes supervision in a source domain to guide the learning for a target domain where no labeled data is available \cite{csurka2017domain}. The principle is that the shift between the source and target domains can be measured, and therefore also minimized. Several authors have noted the close relation to \ac{nst} \cite{li2017demystifying, bousmalis2017unsupervised}.
A common approach is to learn a joint feature space by aligning the distributions in the latent feature space with measures such as Kullback-Leibler divergence \cite{zhuang2015supervised}, \acl{mmd} \cite{long2017deep} or correlation alignment \cite{sun2016deep}. Also related to style transfer, another approach to \ac{da} is to directly learn the mapping between the source and target domains, \eg, using GANs \cite{bousmalis2017unsupervised}. For an overview of \ac{da}, see \cite{csurka2017domain,wang2018deep}. Here we make use of yet another idea originally aimed at \ac{da}, emphasizing its close relation to style transfer.
\section{Method}
We first briefly review the core ideas of \textit{\acl{nst}}. In that context, we revisit several existing methods and classify them into three categories. By taking the view of distribution alignment to its logical end, we then go on to provide an alternative loss function that has strong theoretical guarantees, is efficient to compute, and delivers visually appealing results (\cf Fig.~\ref{fig:artistinception}).
\subsection{Neural style transfer}
The fundamental idea of \ac{nst} is to use a pretrained, deep neural network to generate an image $I_o$ with the content-specific features of a content image $I_c$ and the style-specific features from a style image $I_s$. Typically, one minimizes a convex combination of a content and a style loss:
\begin{equation}\label{eq:nstloss}
\mathcal{L} = \alpha \mathcal{L}_\text{content} + (1 - \alpha) \mathcal{L}_\text{style}.
\end{equation}
We further specify those losses following the notation of \cite{pmlr-v108-mroueh20a}.
Let $g$ be a deep encoder, say VGG-19 \cite{simonyan2014very}. For a specific layer $l$ with corresponding output feature map of spatial dimension ${H_l\cdot W_l=n_l}$ and channel depth $C_l$, we denote the $j$\textsuperscript{th} component of the feature map as a (reshaped) function ${F^l_j: \mathbb{R}^{d} \rightarrow \mathbb{R}^{C_l}}$, ${j\in [n_l]}$.
We write ${\mathbf{F}^l=(F^l_j)_{j\in [n_l]} \in \mathbb{R}^{C_l\times n_l}}$ and call $\mathbf{F}^l(I)$ the $l$\textsuperscript{th} (reshaped) \textit{feature map} of image $I$.
\Ie, the $L$\textsuperscript{th} feature map of image $I$ is the activation map after applying all layers $l=1,\dots, L$ to $I$.
Then, the content loss is proportional to
\begin{equation}
\mathcal{L}_\text{content}(I_o, I_c) \propto
\sum_{l}
|| \mathbf{F}^l(I_o) - \mathbf{F}^l(I_c)||^2,
\end{equation}
where $l$ iterates over a set of layers of $g$. Commonly, only a single, deep layer is used to compute the content loss; whereas the style loss is an average over multiple layers, shallow and deep, with hyper-parameters $w_l$:
\begin{equation}
\mathcal{L}_\text{style}(I_o, I_s) =
\sum_l w_l \mathcal{L}^l_\text{style}(I_o, I_s).
\end{equation}
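For concreteness, a minimal PyTorch-style sketch of the combined objective is given below; the lists of feature maps, the weights $w_l$ and the index of the content layer are placeholders for illustration and do not reflect our exact implementation.
\begin{verbatim}
import torch

def nst_loss(feats_out, feats_content, feats_style,
             alpha, w, style_loss_fn, content_layer=3):
    # feats_*: lists of feature maps, one entry per read-out layer.
    # Content loss on a single deep layer (the index is illustrative).
    content = torch.sum(
        (feats_out[content_layer] - feats_content[content_layer]) ** 2)
    # Style loss as a weighted sum over all read-out layers.
    style = sum(w_l * style_loss_fn(f_o, f_s)
                for w_l, f_o, f_s in zip(w, feats_out, feats_style))
    return alpha * content + (1.0 - alpha) * style
\end{verbatim}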
\subsection{Style as feature distribution}\label{sec:stylefeaturedistribution}
Losses proposed for $\mathcal{L}^l_\text{style}$ can be categorized according to how they align distributions. We first need some additional definitions, again following \cite{pmlr-v108-mroueh20a}.
To obtain a distribution, we view the feature map $\mathbf{F}^l(I)$ as a $C_l$-dimensional empirical distribution measure over ${n_l=H_l\cdot W_l}$ samples. Note, by regarding the $n_l$ samples as an unordered set we explicitly discard the spatial layout.
This corresponds to the intuition that style attributes like color, strokes and texture are independent of the location. More formally, we define
\begin{equation}
\nu^l: \mathbb{R}^d \rightarrow\mathscr{P}(\mathbb{R}^{C_l})
\quad,\quad
I \mapsto \frac{1}{n_l} \sum_{j=1}^{n_l}\delta_{F_j^l(I)},
\end{equation}
where $\mathscr{P}(\mathbb{R}^{C_l})$ is the space of empirical measures on $\mathbb{R}^{C_l}$.
We abbreviate ${\nu_I^l = \nu^l(I)}$ and drop the layer index when not needed. With these definitions we now review existing style transfer methods in the light of distribution alignment.
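As an illustration of this view, the following sketch (with assumed tensor shapes, not prescribed by our implementation) turns a single layer's feature map into the $n_l$ unordered samples that constitute $\nu_I^l$:
\begin{verbatim}
import torch

def features_as_samples(feature_map):
    # feature_map: tensor of shape (C_l, H_l, W_l) from one encoder layer.
    C, H, W = feature_map.shape
    # Discarding the spatial layout: each of the n_l = H*W positions becomes
    # one C_l-dimensional sample of the empirical style distribution.
    return feature_map.reshape(C, H * W).t()   # shape (n_l, C_l)
\end{verbatim}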
\vspace{-1.0em}
\paragraph{MMD-based optimization.}
Already the first \ac{nst} paper \cite{gatys2016image} used statistics of feature maps to extract style-specific attributes of $I_s$, via the Gram matrix $G$. The Gram matrix contains 2\textsuperscript{nd}-order statistics, in our case correlations between corresponding channels in the feature map.
The link to aligning distributions may not be obvious, but Li \etal \cite{li2017demystifying} show that the style loss in \cite{gatys2016image} can be rewritten as an unbiased empirical estimate of the \ac{mmd} \cite{gretton2012kernel} with a polynomial kernel ${k(x,y) = (x^Ty)^2}$:
\begin{equation}\label{eq:mmd}
\mathcal{L}^l_\text{style}(I_o, I_s) \propto \mathsf{mmd}^2[\mathbf{F}^l(I_o), \mathbf{F}^l(I_s)].
\end{equation}
Under the assumption that the \ac{rkhs} is \textit{characteristic} \cite{fukumizu2008kernel}, the \ac{mmd} vanishes if and only if the two distributions are the same. By treating the feature maps of $I_o$ and $I_s$ as samples, minimizing the objective (\ref{eq:mmd}) is the same as minimizing the discrepancy between $\nu_{I_o}$ and $\nu_{I_s}$.
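A minimal sketch of this Gram-matrix style loss (up to normalization constants, which we omit here) could look as follows:
\begin{verbatim}
import torch

def gram_style_loss(F_o, F_s):
    # F_o, F_s: reshaped feature maps of shape (C_l, n_l) for output and style.
    n = F_o.shape[1]
    G_o = F_o @ F_o.t() / n   # Gram matrix: non-central 2nd-order statistics
    G_s = F_s @ F_s.t() / n
    # Squared Frobenius distance between the Gram matrices, proportional to
    # the MMD estimate with the polynomial kernel k(x, y) = (x^T y)^2.
    return torch.sum((G_o - G_s) ** 2)
\end{verbatim}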
\vspace{-1.0em}
\paragraph{Moment-based optimization}$\!\!$approaches explicitly minimize the difference between style distributions. Theoretical support for these methods comes from \acp{mgf}. It is known that a distribution is uniquely characterized by its moments if the \ac{mgf} is finite in an open interval containing zero. Hence, if two distributions with finite \ac{mgf} have equal moments, they are identical.
Besides relating style transfer to distribution alignment, Li \etal \cite{li2017demystifying} also introduced a style loss based on \emph{batch normalization} statistics. That loss is the first to explicitly match moments in feature space, namely the means $\mu_{\mathbf{F}^l(I)}$ and the standard deviations $\sigma_{\mathbf{F}^l(I)}$:
\begin{equation}
\mathcal{L}^l_\text{style}(I_o, I_s) \propto\!\sum\limits_{i=1}^{C_l}[ (\mu^i_{\mathbf{F}^l(\!I_o\!)}\!\!-\! \mu^i_{\mathbf{F}^l(\!I_s\!)})^2
\!\!+\! (\sigma^i_{\mathbf{F}^l(\!I_o\!)}\!\!-\! \sigma^i_{\mathbf{F}^l(\!I_s\!)})^2].
\end{equation}
Interestingly, moment alignment can also produce reasonable results when applied in feed-forward mode, without iterative optimization. Based on ideas from \cite{ulyanov2016instance,dumoulin2016learned}, Huang and Belongie \cite{huang2017arbitrary} align the mean and variance with a transformation layer. In summary, matching the mean and variance of the content image's feature space to that of the style image reduces the divergence between $\nu_{I_o}$ and $\nu_{I_s}$ -- but discrepancies due to higher-order moments remain.
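A sketch of the corresponding channel-wise mean/standard-deviation loss (constants again omitted) is:
\begin{verbatim}
import torch

def mean_std_style_loss(F_o, F_s):
    # F_o, F_s: reshaped feature maps of shape (C_l, n_l).
    mu_o, mu_s = F_o.mean(dim=1), F_s.mean(dim=1)
    sd_o, sd_s = F_o.std(dim=1), F_s.std(dim=1)
    # Match the first two (central) moments channel by channel.
    return torch.sum((mu_o - mu_s) ** 2) + torch.sum((sd_o - sd_s) ** 2)
\end{verbatim}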
\vspace{-1.0em}
\paragraph{Optimal Transport-based optimization}$\!\!$provides a principled framework to minimize the discrepancy between distributions, notably taking into account the geometry of the underlying spaces. When working in the space of probability measures ${\mathcal{P}_p(\mathbb{R}^d)}$ with bounded $p$\textsuperscript{th} moment, the \emph{Wasserstein distance} for ${P,Q \in \mathcal{P}_p(\mathbb{R}^d)}$ is defined as
\begin{equation}\label{eq:wasserstein}
\mathcal{W}_p(P, Q)^p = \inf\limits_{\Gamma(P, Q)} \int ||x-y||^p d\pi(x,y).
\end{equation}
We can use the Wasserstein distance for back-propagation to minimize the discrepancy between $\nu_{I_o}$ and $\nu_{I_s}$. In general, computing the \ac{ot} has complexity ${O(n_l^3\log n_l)}$ and is not suitable for iterative optimization schemes.
However, restricting the distributions to Gaussians, ${\Tilde{\nu}_{I_o} \colonequals \mathcal{N}(\mu_{\nu_{I_o}},\Sigma_{\nu_{I_o}})}$ and ${\Tilde{\nu}_{I_s} \colonequals \mathcal{N}(\mu_{\nu_{I_s}},\Sigma_{\nu_{I_s}})}$
admits a closed form solution,
\begin{equation}
\begin{split}
\mathcal{W}_2(\Tilde{\nu}_{I_o},& \Tilde{\nu}_{I_s})^2= \|\mu_{\nu_{I_o}}\!\!- \mu_{\nu_{I_s}}\|^2_2\\
&+ \Tr\big(\Sigma_{\nu_{I_o}}\!\!+ \Sigma_{\nu_{I_s}}\!\!- 2(\Sigma_{\nu_{I_o}}^\frac{1}{2}\Sigma_{\nu_{I_s}}\Sigma_{\nu_{I_o}}^\frac{1}{2})^\frac{1}{2}\big).
\end{split}
\label{eq:ot_approx}
\end{equation}
This is similar to matching the first and second moments as in moment-based optimization (higher-order moments of Gaussians are fully determined by the mean and the covariance). Conveniently, the \ac{ot} map can also be derived in closed form.
If one is willing to accept the Gaussian approximation, the style features can be aligned by iteratively minimizing $\mathcal{W}_2$, or by integrating the \ac{ot} map into the encoder-decoder network \cite{pmlr-v108-mroueh20a, kolkin2019style, lu2019closed, li2017universal}.
It has been shown \cite{pmlr-v108-mroueh20a,lu2019closed} that adaptive instance normalization can be seen as \ac{ot} of Gaussians with diagonal covariances.
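For reference, the closed-form Gaussian $\mathcal{W}_2$ distance of \eqref{eq:ot_approx} can be sketched as follows; the Gaussian fit to the empirical feature distributions is the approximation discussed above, not a property of the true distributions:
\begin{verbatim}
import torch

def gaussian_w2_squared(F_o, F_s):
    # F_o, F_s: reshaped feature maps of shape (C_l, n_l).
    mu_o, mu_s = F_o.mean(dim=1), F_s.mean(dim=1)
    S_o, S_s = torch.cov(F_o), torch.cov(F_s)

    def sqrtm(S):
        # Matrix square root via eigendecomposition (S is symmetric PSD).
        vals, vecs = torch.linalg.eigh(S)
        return vecs @ torch.diag(vals.clamp(min=0).sqrt()) @ vecs.t()

    S_o_half = sqrtm(S_o)
    cross = sqrtm(S_o_half @ S_s @ S_o_half)
    return (torch.sum((mu_o - mu_s) ** 2)
            + torch.trace(S_o + S_s - 2.0 * cross))
\end{verbatim}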
\begin{figure*}[!tb]
\centering
\begin{subfigure}[b]{0.162\linewidth}
\centering
\includegraphics[width=\textwidth, trim=6 3 2 0, clip]{dist-matching/pics-source.pdf}
\caption{Source}
\end{subfigure}
\begin{subfigure}[b]{0.162\linewidth}
\centering
\includegraphics[width=\textwidth, trim=6 3 2 0, clip]{dist-matching/pics-target.pdf}
\caption{Target}
\end{subfigure}
\begin{subfigure}[b]{0.162\linewidth}
\centering
\includegraphics[width=\textwidth, trim=6 3 2 0, clip]{dist-matching/pics-mmd.pdf}
\caption{MMD}
\end{subfigure}
\begin{subfigure}[b]{0.162\linewidth}
\centering
\includegraphics[width=\textwidth, trim=6 3 2 0, clip]{dist-matching/pics-mm.pdf}
\caption{MM / OT}
\end{subfigure}
\begin{subfigure}[b]{0.162\linewidth}
\centering
\includegraphics[width=\textwidth, trim=6 3 2 0, clip]{dist-matching/pics-cmd5.pdf}
\caption{CMD, $K\!=\!5$}
\end{subfigure}
\begin{subfigure}[b]{0.162\linewidth}
\centering
\includegraphics[width=\textwidth, trim=6 3 2 0, clip]{dist-matching/pics-cmd50.pdf}
\caption{CMD, $K\!=\!50$}
\end{subfigure}
\caption{Illustration of distribution matching in 1D. The source $\sim\!Beta(2,3)$ and target $\sim\!Beta(0.5,0.45)$ cannot be aligned with MMD, MM or OT (which in 1D is the same as MM). On the contrary, CMD aligns them well already with five moments, and the residual error decreases asymptotically as more moments are added. See text for details.}
\label{fig:distmatching}
\end{figure*}
\vspace{0.7em}
\subsection{Motivation}\label{sec:motivation}
From a statistical perspective all three categories of methods contradict, to some extent, the goal of optimally aligning feature distributions.
Methods based on \textbf{\ac{mmd}} rely on simplistic (typically, linear or quadratic) kernels \cite{gatys2016image, li2017demystifying}. Previously, \cite{risser2017stable} already identified instabilities during training, as different distributions result in the same \ac{mmd}. They point out that changes in mean and variance can compensate each other, giving rise to the same Gram matrix (and thus the same \ac{mmd} with quadratic kernel), since the Gram matrix is related to \textit{non-central} second moments.
We offer an alternative explanation why the Gram matrix violates the identity of indiscernibles: the quadratic kernel is non-characteristic, \ie, the map ${p\rightarrow \mathbb{E}_{x\sim p}[k(x, \cdot)]}$ is not injective and the distribution $p$ has no unique embedding in the \ac{rkhs}. %
Moreover, the quadratic kernel (resp.\ Gram matrix) is obviously restricted to 2\textsuperscript{nd} moments. It is highly unlikely that those are sufficient statistics for deep feature activations, so ${\mathsf{mmd}(p,q)\!=\!0}$ almost certainly does not imply ${p\!=\!q}$.
A similar argument can be made about existing methods based directly on \textbf{\ac{mm}}, since they match only the means and variances. It is trivial to define two distinct distributions with the same variances -- \eg, a Gaussian ${\mathcal{N}(0, \sqrt{2})}$ and a Laplace distribution ${\mathcal{L}(0, 1)}$.
While \textbf{\ac{ot}} is a powerful framework at the conceptual level, it is hobbled by high computation cost.
The Gaussian approximation makes \ac{ot} tractable, but at the cost of losing information.
There is no evidence that the distributions $\nu_{I_o}$ and $\nu_{I_s}$ are (approximately) Gaussian -- in fact it is very unlikely, unless one artificially constrains them, thus seriously restraining the deep network's expressive power.
We claim that \ac{ot}, at least in its prevalent, restricted form, also mostly reduces to matching the first and second moments -- the approximations in~\eqref{eq:ot_approx} are completely defined in terms of means and covariances.
Finally, we point out the \textit{mean over-penalization} effect: \cite{zellinger2019robust} found instabilities of distribution alignment during \ac{da} training under small perturbations, which arise from the use of raw instead of centralized moments (as in \ac{mmd} with standard polynomial kernel and non-centralized integral probability metrics).
For details, please refer to \cite{zellinger2019robust}.
\vspace{0.7em}
\subsection{CMD for neural style transfer}
Instead of only matching first- and second-order moments, we propose to make use of a suitable integral probability metric, the \acl{cmd} \cite{zellinger2017central}. At its core, that metric utilizes the dual representation of compactly supported distributions as moment sequences. The translation to central moments leads to natural geometric relations such as variance, skewness and kurtosis.
Note that the idea of matching higher moments was investigated in early work on texture synthesis~\cite{portilla2000parametric}, but it has so far been disregarded in \ac{nst}.
In Fig.~\ref{fig:distmatching}, we illustrate the enhanced expressive power of \ac{cmd}.
In our toy example, the source and target are univariate $Beta$-distributions with different parameters, \ie, their third and fourth moments are non-zero. We represent each distribution with 10,000 samples and minimize the respective alignment loss with gradient descent. The example confirms that none of the three approaches based on first and second moments can align the two distributions (note that for the 1D case \ac{mm} and \ac{ot} are identical). On the contrary, \ac{cmd} aligns them nicely.
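The toy experiment can be reproduced along the following lines, using the one-dimensional central moment discrepancy defined below in \eqref{eq:cmd}; the optimizer, learning rate and iteration count are illustrative choices, not tuned values:
\begin{verbatim}
import torch

def cmd_1d(x, y, K=5):
    # x, y: 1-D samples assumed to lie in [0, 1] (compact support).
    mx, my = x.mean(), y.mean()
    loss = torch.abs(mx - my)
    for i in range(2, K + 1):
        loss = loss + torch.abs(((x - mx) ** i).mean()
                                - ((y - my) ** i).mean())
    return loss

torch.manual_seed(0)
source = torch.distributions.Beta(2.0, 3.0).sample((10_000,))
source.requires_grad_(True)
target = torch.distributions.Beta(0.5, 0.45).sample((10_000,))
opt = torch.optim.SGD([source], lr=0.1)
for _ in range(2_000):
    opt.zero_grad()
    cmd_1d(source, target, K=5).backward()
    opt.step()
    with torch.no_grad():
        source.clamp_(0.0, 1.0)   # keep samples on the compact support
\end{verbatim}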
The \ac{cmd} between two compactly supported distributions $P$ and $Q$ is defined as follows \cite{zellinger2019robust}:
\begin{equation}\label{eq:cmd}
\begin{split}
\mathsf{cmd}_k(P,Q) \coloneqq& \sum\limits_{i=1}^k a_i \|c_i(P) - c_i(Q)\|_2\quad\text{, where}\\
c_i(X)=&\begin{cases}
\mathbb{E}_X[x] & i=1\\
\mathbb{E}_X[\eta^{(i)}(x - \mathbb{E}_X[x])] &i\geq 2
\end{cases}
\end{split}
\end{equation}
with $a_i\geq 0$. The $\eta^{(i)}(x)$ are monomial vectors of order $i$ defined as
\begin{equation}
\begin{split}
\eta^{(i)}: &\mathbb{R}^m \rightarrow \mathbb{R}^{\frac{(i+1)^{m-1}}{(m-1)!}}\\
&x \mapsto \big(x_1^{r_1}\dotsm x_m^{r_m}\big)_{\genfrac{}{}{0pt}{1}{(r_1,\cdots,r_m)\in \mathbb{N}_0^m}{r_1+\cdots+r_m=i}}.
\end{split}
\end{equation}
By construction the \ac{cmd} is non-negative, respects the triangle inequality, and if ${P=Q}$ then ${\mathsf{cmd}_k(P,Q) = 0}$. Furthermore, \cite[Theorem~1]{zellinger2017central} states that ${\mathsf{cmd}_k(P,Q) = 0}$ implies ${P=Q}$ for ${k\rightarrow\infty}$, so \ac{cmd} is a metric on compactly supported distributions.
For practical applications computing $\mathsf{cmd}_\infty$ is obviously not possible, and we have to bound the order $k$ by some ${K\!<\!\infty}$. Compared to other approximations used for style transfer \cite{pmlr-v108-mroueh20a, kolkin2019style}, the bounded $\mathsf{cmd}_K$ has a natural theoretical justification. It can be shown \cite[Proposition~1]{zellinger2019robust} that the $i$\textsuperscript{th} term in the summation of \eqref{eq:cmd} admits an upper bound that strictly decreases with the order $i$. \Ie, the contribution of higher-order moment terms in \eqref{eq:cmd} converges monotonically to $0$. To keep the implementation efficient we only compute the marginal moments, by restricting the monomial vectors to ${\eta^{(i)}(x) = (x_1^i,\cdots,x_m^i)}$.
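A minimal sketch of this marginal $\mathsf{cmd}_K$ loss for one layer (uniform weights $a_i\!=\!1$ assumed for illustration) is given below:
\begin{verbatim}
import torch

def cmd_marginal(F_o, F_s, K=5):
    # F_o, F_s: sigmoid-squashed feature maps of shape (C_l, n_l), so that
    # every channel lies in [0, 1] and the compact-support assumption holds.
    mu_o, mu_s = F_o.mean(dim=1), F_s.mean(dim=1)
    loss = torch.norm(mu_o - mu_s, p=2)          # first moments
    xo, xs = F_o - mu_o[:, None], F_s - mu_s[:, None]
    for i in range(2, K + 1):
        # Marginal central moments of order i, one value per channel.
        c_o = (xo ** i).mean(dim=1)
        c_s = (xs ** i).mean(dim=1)
        loss = loss + torch.norm(c_o - c_s, p=2)
    return loss
\end{verbatim}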
\begin{figure*}[!tb]
\setlength{\tabcolsep}{0pt}
\renewcommand{\arraystretch}{0.4}
\begin{center}
\begin{tabularx}{\textwidth}{*7{>{\centering\arraybackslash}X}}
\includegraphics[width=0.14\textwidth]{images/user-study-results/inputs/kodim23.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/adain/kodim23_stylized_by_vangogh_starry_night.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/gram/kodim23.png_vangogh_starry_night.jpg.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/li/kodim23.png_vangogh_starry_night.jpg.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/ost/kodim23_stylized_by_vangogh_starry_night.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/wct/kodim23_stylized_by_vangogh_starry_night.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/ours/kodim23.png_vangogh_starry_night.jpg.jpg}\\
\includegraphics[width=0.14\textwidth]{images/user-study-results/inputs/kodim22.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/adain/kodim22_stylized_by_monet_sunrise.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/gram/kodim22.png_monet_sunrise.jpg.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/li/kodim22.png_monet_sunrise.jpg.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/ost/kodim22_stylized_by_monet_sunrise.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/wct/kodim22_stylized_by_monet_sunrise.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/ours/kodim22.png_monet_sunrise.jpg.jpg}\\
\includegraphics[width=0.14\textwidth]{images/user-study-results/inputs/kodim17.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/adain/kodim17_stylized_by_richter2.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/gram/kodim17.png_richter2.jpg.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/li/kodim17.png_richter2.jpg.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/ost/kodim17_stylized_by_richter2.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/wct/kodim17_stylized_by_richter2.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/ours/kodim17.png_richter2.jpg.jpg}\\
\includegraphics[width=0.14\textwidth]{images/user-study-results/inputs/kodim04.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/adain/kodim04_stylized_by_woman_in_peasant_dress.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/gram/kodim04.png_woman_in_peasant_dress.jpg.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/li/kodim04.png_woman_in_peasant_dress.jpg.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/ost/kodim04_stylized_by_woman_in_peasant_dress.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/wct/kodim04_stylized_by_woman_in_peasant_dress.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/ours/kodim04.png_woman_in_peasant_dress.jpg.jpg}\\
\includegraphics[width=0.14\textwidth]{images/user-study-results/inputs/kodim15.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/adain/kodim15_stylized_by_The_Scream.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/gram/kodim15.png_The_Scream.jpg.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/li/kodim15.png_The_Scream.jpg.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/ost/kodim15_stylized_by_The_Scream.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/wct/kodim15_stylized_by_The_Scream.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/ours/kodim15.png_The_Scream.jpg.jpg}\\
\includegraphics[width=0.14\textwidth]{images/user-study-results/inputs/006.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/adain/006_stylized_by_076.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/gram/006.jpg_076.jpg.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/li/006.jpg_076.jpg.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/ost/006_stylized_by_076.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/wct/006_stylized_by_076.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/ours/006.jpg_076.jpg.jpg}\\
\includegraphics[width=0.14\textwidth]{images/user-study-results/inputs/1.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/adain/1_stylized_by_rain_princess.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/gram/1.png_rain_princess.jpg.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/li/1.png_rain_princess.jpg.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/ost/1_stylized_by_rain_princess.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/wct/1_stylized_by_rain_princess.jpg}
& \includegraphics[width=0.14\textwidth]{images/user-study-results/ours/1.png_rain_princess.jpg.jpg}\\
\rule{0pt}{14pt}\small (a) Input
& \rule{0pt}{12pt}\small (b) AdaIN \cite{huang2017arbitrary}
& \rule{0pt}{12pt}\small (c) Gatys \cite{gatys2016image}
& \rule{0pt}{12pt}\small (d) MM \cite{li2017demystifying}
& \rule{0pt}{12pt}\small (e) OST \cite{lu2019closed}
& \rule{0pt}{12pt}\small (f) WCT \cite{li2017universal}
& \rule{0pt}{12pt}\small (g) Ours
\end{tabularx}
\end{center}
\vspace{-1.5em}
\caption{Style transfer results of our algorithm and of previous methods from all three categories. Best viewed on screen. Please zoom in to appreciate style details.}\label{fig:user-study-results}
\end{figure*}
Adapting \ac{cmd} to our style feature distributions is straightforward.
To fulfil the compact-support requirement, we wrap a sigmoid function $\sigma(\cdot)$ around each feature output so as to restrict the support of the empirical distribution to ${[0, 1]}$.
With a slight abuse of notation we write $\sigma(\nu^l)$ for the $\nu^l$ computed from sigmoid-transformed features and define
\begin{equation}
\mathcal{L}^l_\text{style}(I_o, I_s) \coloneqq \mathsf{cmd}_k\big(\sigma(\nu^l_{I_o}),\sigma(\nu^l_{I_s})\big),
\label{eq:cmd_features}
\end{equation}
for layer $l$. The moments are simply the central moments of the empirical measure, \ie, expectations of element-wise powers ${\mathbb{E}[(\mathbf{F}^l(I) - \mu_{\mathbf{F}^l(I)})^i]} \in \mathbb{R}^{C_l}$.
By adopting \ac{cmd} we have an integral probability metric for \ac{nst} at our disposal that not only has favourable theoretical properties, but is also easy to implement, computationally efficient, and able to handle complex feature distributions with significant higher-order moments.
\section{Results}
In this section, we compare our results with existing methods from each of the categories. After summarizing details of the implementation, we qualitatively evaluate the effects of aligning the style features with \ac{cmd}. Beyond visual comparisons, we report quantitative results from a user study, which support our hypothesis that higher-order moments carry important style information and should not be ignored. Lastly, we further investigate the impact of different moments in an ablation study.
\subsection{Experimental setup}
We employ VGG-19 \cite{simonyan2014very} as feature encoder and read out feature maps at layer levels ${l\in \{1\_1,2\_1,3\_1,4\_1,5\_1\}}$. Deviating slightly from the commonly used \ac{nst} setting, we work with the raw convolution outputs $\textit{conv-l}$ rather than their rectified versions $\textit{relu-l}$, since we clamp them to ${[0, 1]}$ with sigmoid activations for computing the \ac{cmd}, see~\eqref{eq:cmd_features}.
The content loss is computed on $\textit{conv4\_1}$, for the individual layers in the style loss we use the same weighting scheme as proposed in \cite{gatys2016image}.
Optimization is performed with Adam \cite{kingma2014adam}. Instead of blindly stopping after a fixed number of iterations, we implement a stopping criterion based on the difference between the current style loss and a moving average of the style loss.
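A possible form of such a criterion is sketched below; the window size and tolerance are hypothetical values chosen for illustration, not the settings used in our experiments:
\begin{verbatim}
def should_stop(style_losses, window=50, tol=1e-3):
    # Stop when the current style loss is within a small relative tolerance
    # of its moving average over the last `window` iterations.
    if len(style_losses) < window + 1:
        return False
    moving_avg = sum(style_losses[-window - 1:-1]) / window
    return abs(style_losses[-1] - moving_avg) < tol * moving_avg
\end{verbatim}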
We compare our algorithm to five baselines: one from the \ac{mmd} group \cite{gatys2016image}, two based on direct moment differences \cite{li2017demystifying,huang2017arbitrary} and two based on \ac{ot} \cite{li2017universal,lu2019closed}. We use the existing open-source implementations%
\footnote{For \cite{gatys2016image, li2017demystifying,lu2019closed}, original implementations by the authors; for \cite{huang2017arbitrary,li2017universal}, implementation provided by the authors of \cite{lu2019closed}.} %
and keep all hyper-parameters as proposed in the original papers, respectively source codes. Our implementation is based on PyTorch \cite{paszke2019pytorch} and is also publicly available.\footnote{Code: \href{https://github.com/D1noFuzi/cmd_styletransfer}{https://github.com/D1noFuzi/cmd\_styletransfer}} For our experiments we bound the order of the moments to ${K\!=\!5}$, as higher orders have little influence.
\subsection{Qualitative results}
We have pinpointed theoretical limitations of previous \ac{nst} methods in Sec.~\ref{sec:motivation}. To see how these translate to concrete visual differences, we analyze how well the stylized images preserve three different style attributes, \textit{color, texture and stroke, shape}. See Fig.~\ref{fig:user-study-results}, and further results in the supplementary material.
\paragraph{Color and brightness.}
This paper is concerned with fully automatic \ac{nst}, without additional user control. Hence, the output should have the color palette of the style image. \Ie, only the semantic content of the content image should be retained, but colors should be replaced by those representative of the style, and in particular the two color spaces should not be mixed.
Looking at the 1\textsuperscript{st} row of Fig.~\ref{fig:user-study-results}, the red of the right parrot strongly leaks into the results of AdaIN, Gatys and \ac{mm}, and traces are also visible in WCT. Besides our method, those based on \ac{ot} fare best in terms of color palette, but \ac{ot} has a tendency towards exaggerated brightness variations not warranted by the content, \eg, the girl's face in row 5 and the background in row 6.
Indeed, it appears that local color and intensity information is to some degree hidden in higher-order moments. That observation is also supported by the ablation study in Sec.~\ref{sec:ablationstudies}.
\vspace{-1.5em}
\paragraph{Texture and stroke.}
Maintaining strokes and textures is especially important when it comes to artistic style transfer, to preserve the concomitant individual painting techniques. We find that the proposed \ac{cmd} method is particularly good at replicating granular canvas, oriented brush strokes, \etc. Clear cases in point are rows 1 and 5 of Fig.~\ref{fig:user-study-results}, as well as the reflections on the lake in row 2. We also point out the particularly challenging example in the 4\textsuperscript{th} row. Zooming in on the style image, we can see the rough texture of the paper, as well as a preference for oriented shading strokes. While none of the methods is perfect on this difficult instance, the only ones to even partially pick up those patterns are our method and to some degree Gatys (but with strong color artifacts).
In general, we observe that oriented high-frequency patterns appear to benefit from higher (particularly, odd) moments, but further research is needed to explore the relation in depth.
\vspace{-1.5em}
\paragraph{Shape.}
Lastly, we turn our attention to shape. That attribute is somewhat more complex, as ornamental and decorative shape elements such as the square pattern in row 3 of Fig.~\ref{fig:user-study-results} are part of the style, whereas semantically meaningful elements of similar size are part of the content, like the eyes in row 4 or the make-up in row 5.
\ac{cmd} manages to disentangle these two aspects and preserve important boundaries and details of the content rather well, while still imposing the characteristic shape features of the style. Perhaps the most convincing example is row 3. But also in other cases the delicate balance between imposing the style and preserving salient content features appears to benefit from higher-order moments, \eg, rows 4, 5, 6.
\subsection{Quantitative results}
\paragraph{User study.}
There is no clear consensus on how to quantitatively evaluate \ac{nst}. The question of what constitutes a ``correct'' output is clearly ill-posed, and even the judgment of how ``good'' a given stylization is depends on aesthetic preferences and must remain subjective.
In fact one can, with the \emph{same} method, generate very different results only by changing the relative weights of the style and content losses, and it depends on the application and on personal taste which one is preferred.
The current consensus is to perform user studies where participants are shown results without revealing how they were generated, and to collect statistics of user preferences.
We note that, while we agree that aesthetic quality is hard to measure, people can usually pick their favorite among a handful of alternative stylizations without much hesitation. This lends some support to such studies: at the very least, they indicate which of the available methods delivers the result that the largest share of the user group likes best.
We conduct a user study with the same methods as above: AdaIN \cite{huang2017arbitrary}, Gatys \cite{gatys2016image}, Moment Matching \cite{li2017demystifying}, OST \cite{lu2019closed}, WCT \cite{li2017universal} and the proposed \ac{cmd} method.
The study uses parts of the Kodak image dataset \cite{kodakdataset} and additional content images widely used in \ac{nst}, showing a variety of scenes, objects and humans.
The style dataset is made up by paintings and drawings commonly used for \ac{nst}, from a range of artists including Picasso, Kandinsky, Van Gogh and others.
In total we exhaustively combine 31 content images and 20 style images, resulting in 620 stylized images per algorithm.
For the study, the six stylization results were displayed side-by-side in random order, along with the underlying content and style images. Users were asked to pick a single image that would best transfer style aspects such as shape, textures and colors using their own judgement.
Overall, we have collected $>$2700 votes from 56 different participants. The scores are reported in Tab.~\ref{tab:user-study}. The study reveals some interesting insights. Indeed, our proposed \ac{cmd} method performs favorably, with $\approx$10\% more votes than the closest competitor. The classical \ac{nst} of \cite{gatys2016image} attains the second-highest number of votes. This supports our claim that iterative methods still have an edge in terms of quality, as one-shot approaches trade quality for speed.
\begin{table}[!h]
\setlength{\tabcolsep}{3pt}
\renewcommand{\arraystretch}{1.4}
\centering
\begin{tabularx}{\linewidth}{|*6{>{\centering\arraybackslash}X}|}
\hline
AdaIN* & Gatys & MM & OST* & WCT* & Ours \\
\hline
155 & 533 & 443 & 523 & 463 & \textbf{587} \\
5.7\% & 19.7\% & 16.3\% & 19.3\% & 17.1\% & \textbf{21.7\%} \\
\hline
\end{tabularx}
\caption{Number of votes each method received in our user study. * denotes one-shot feed-forward methods.}
\label{tab:user-study}
\end{table}
\makeatletter
\newlength{\tw}
\setlength{\tw}{0.095\textwidth}
\makeatother
\begin{figure}[!tb]
\centering
\setlength{\tabcolsep}{0pt}
\renewcommand{\arraystretch}{0.1}
\begin{tabular}{ccccc}
\vspace{2pt}
\footnotesize 1\textsuperscript{st} moment &
\footnotesize 2\textsuperscript{nd} moment &
\footnotesize 3\textsuperscript{rd} moment &
\footnotesize 4\textsuperscript{th} moment &
\footnotesize 5\textsuperscript{th} moment
\\
\includegraphics[width=\tw]{images/different_k/picasso/picasso_10000.jpg} &
\includegraphics[width=\tw]{images/different_k/picasso/picasso_11000.jpg} &
\includegraphics[width=\tw]{images/different_k/picasso/picasso_11100.jpg} &
\includegraphics[width=\tw]{images/different_k/picasso/picasso_11110.jpg} &
\includegraphics[width=\tw]{images/different_k/picasso/picasso_11111.jpg} \\
&
\includegraphics[width=\tw]{images/different_k/picasso/picasso_01000.jpg} &
\includegraphics[width=\tw]{images/different_k/picasso/picasso_01100.jpg} &
\includegraphics[width=\tw]{images/different_k/picasso/picasso_01110.jpg} &
\includegraphics[width=\tw]{images/different_k/picasso/picasso_01111.jpg} \\
\raisebox{3pt}{\footnotesize Content} &&
\includegraphics[width=\tw]{images/different_k/picasso/picasso_00100.jpg} &
\includegraphics[width=\tw]{images/different_k/picasso/picasso_00110.jpg} &
\includegraphics[width=\tw]{images/different_k/picasso/picasso_00111.jpg} \\
\includegraphics[width=\tw]{images/different_k/picasso/pablo_picasso.jpg} &
\raisebox{3pt}{\footnotesize Style} & &
\includegraphics[width=\tw]{images/different_k/picasso/picasso_00010.jpg} &
\includegraphics[width=\tw]{images/different_k/picasso/picasso_00011.jpg} \\
&
\includegraphics[width=\tw]{images/different_k/picasso/picasso.jpg} &
&&
\includegraphics[width=\tw]{images/different_k/picasso/picasso_00001.jpg}
\end{tabular}
\caption{Ablation study using only selected moments. See text for details.}\label{fig:momentablation}
\end{figure}
\vspace{-0.5em}
\subsection{Ablation studies}\label{sec:ablationstudies}
In our method it is possible to individually reweight or turn off moments. We have conducted an ablation study to better understand the effects of different moments, see Fig.~\ref{fig:momentablation}.
Note that this tuning knob is orthogonal to user control in the spirit of \cite{gatys2017controlling}, where one isolates a specific attribute like color in preprocessing and applies the stylization selectively.
Figure \ref{fig:momentablation} shows style transfer results with different combinations of moments. On the diagonal, only the single moment corresponding to the row/column index is used. Higher-order moments are then progressively added along the rows; for instance, position $(2,2)$ corresponds to only the second moment (weight vector $a=[0,1,0,0,0]$) and element $(2,4)$ corresponds to the 2\textsuperscript{nd}, 3\textsuperscript{rd} and 4\textsuperscript{th} moments (weight vector $a=[0,1,1,1,0]$).
As was to be expected there is no obvious, ``pure" correspondence between moments and visual attributes. Still, the study illustrates some interesting relations. First, one can immediately see that even the 5\textsuperscript{th} order still contributes significant style elements, for instance on the chin and the cap in the first row.
Odd moments appear to primarily modulate overall brightness and contrast, whereas even ones tend to change colors and high-frequency texture.
Our \ac{cmd} method changes only the loss function for distribution alignment and can be seamlessly combined with
other extensions of \ac{nst}. For instance, the user can still control how strongly the style is imprinted on the image content, by adjusting the relative weight of the style and content losses.
To illustrate this, we stylize with our \ac{cmd} method and linearly interpolate the weight $\alpha$ in eq.~\eqref{eq:nstloss}.
Figure \ref{fig:linearinterpolation} shows an example of how putting more weight on the content loss produces increasingly weaker ``partial stylizations'' that stay closer to the content image.
\begin{figure}[!tb]
\centering
\begin{subfigure}[b]{0.32\linewidth}
\centering
\includegraphics[width=\textwidth]{images/different_alpha/face1.jpg}
\caption{Content}
\end{subfigure}
\begin{subfigure}[b]{0.32\linewidth}
\centering
\includegraphics[width=\textwidth]{images/different_alpha/candy.jpg}
\caption{Style}
\end{subfigure}
\begin{subfigure}[b]{0.32\linewidth}
\centering
\includegraphics[width=\textwidth]{images/different_alpha/test2_0.4.jpg}
\caption{$\alpha=0.6$}
\end{subfigure}
\par\medskip
\begin{subfigure}[b]{0.32\linewidth}
\centering
\includegraphics[width=\textwidth]{images/different_alpha/test2_0.8.jpg}
\caption{$\alpha=0.2$}
\end{subfigure}
\begin{subfigure}[b]{0.32\linewidth}
\centering
\includegraphics[width=\textwidth]{images/different_alpha/test2_0.99.jpg}
\caption{$\alpha=0.01$}
\end{subfigure}
\begin{subfigure}[b]{0.32\linewidth}
\centering
\includegraphics[width=\textwidth]{images/different_alpha/test2_1.0.jpg}
\caption{$\alpha=0$}
\end{subfigure}
\caption{Varying the strength of style transfer by varying the relative influence $\alpha$ of the content loss (\cf eq.~\eqref{eq:nstloss}).}
\label{fig:linearinterpolation}
\end{figure}
\section{Limitations and future work}
There are currently two conceptual directions in \ac{nst}: iterative optimization techniques and one-shot feed-forward approaches. Our algorithm belongs to the former.
While iterative methods arguably still produce better results, they are too slow for real-time applications. Our method inherits that shortcoming, \eg, it could not be used for (near) real-time video synthesis.
At the conceptual level, we had to make two simplifying approximations to take the step from the mathematical formalism of \ac{cmd} to a practical implementation.
On the one hand, we limit the order of the central moments to a finite, in practice small $K$.
At least in principle the impact of that restriction can be kept as small as desired by increasing $K$, because the influence of additional central moments provably converges to $0$ with increasing order.
On the other hand, and perhaps more importantly, we only utilize the \textit{marginal} central moments in our loss.
We take this shortcut for computational reasons, but it effectively means that we only achieve exact distribution matching when the marginal distributions are independent.
There is currently no evidence that this is the case, and we do not see a simple way to gauge how much information might be lost due to the approximation.
\section{Conclusion}
We have revisited the interpretation of \acl{nst} as aligning feature distributions. After categorizing existing methods into three groups based on \ac{mmd}, moment matching and \ac{ot}, we show that all of them, in practice, only match first and second moments.
We then went on to propose a novel approach based on \aclp{cmd}. Our method can be interpreted alternatively as minimizing an integral probability metric, or as matching all central moments up to a desired order.
Our method has both theoretical and practical benefits. In terms of theory it comes with strong approximation guarantees. On the practical side it offers a computationally efficient way to account for higher-order moments of complex feature distributions, and achieves visually better transfer of many artistic styles.
On a broader scale, even though Portilla and Simoncelli proposed higher-order moment matching for texture synthesis \cite{portilla2000parametric}, Gatys \etal \cite{gatys2015texture, gatys2016image} disregarded all but second-order moments when pioneering \acl{nst}. In this regard, our method reintroduces higher-order moment matching to \ac{nst}.
{\small
\bibliographystyle{ieee_fullname}
\section{Introduction}
The massive X-ray binary Cygnus X-1 = HD~226868 consists of an O9.7
Iab primary \citep{wal73} with a black hole (BH) companion. The
fundamental properties of this system have been the subject of many
studies, but they continue to be controversial. For example,
\citet{sha07} determined a relationship between black hole mass and
observed X-ray properties in the low frequency, quasi-periodic
oscillation -- spectral index plane to derive a BH mass of $8.7 \pm
0.8 M_{\odot}$ for Cyg~X-1, a value at the low end of previous
estimates \citep{gie86a,abu05}. On the other hand, \citet{zio05} used
temperature -- luminosity relations in conjunction with evolutionary
models to calculate the mass of the bright, mass donor star. Then,
with the orbital mass function from \citet{gie03} and the method
outlined by \citet{pac74}, he estimated the mass of the BH as $13.5-29
M_{\odot}$, at the high end of prior estimates.
Our goal in this paper is to determine if mass estimates from these
two methods can be reconciled through a re-examination of the
supergiant's spectrum to determine the stellar temperature, mass, and
radius. Shortly after the X-ray source Cyg~X-1 was identified with
the star HD~226868 \citep{bol72,web72}, \citet{wal73} classified it as
a normal O9.7~Iab star. The stellar temperature of a star of this
type depends critically upon the model atmosphere assumptions adopted
to match the line spectrum \citep{mar05}. From a classical curve of
growth analysis of the optical spectrum of HD~226868, \citet{can95}
estimated an effective temperature of $T_{\rm eff} = 32\pm 2$~kK, and
they found an overabundance of He in the photosphere. \citet{her95}
also estimated the temperature of the star $T_{\rm eff} \approx 32$~kK
based upon fits of the optical line spectrum with calculated profiles
from unified model atmospheres that included a non-LTE treatment of H
and He but neglected line-blanketing from transitions of heavier
elements. In addition, they determined values for gravity from fits
of the Balmer lines that ranged from $\log g = 3.03$ for
plane-parallel models to $\log g = 3.21$ for spherical models that
included wind effects. Their results led to mass estimates of $17.8$
and $10.1 M_{\odot}$ for the supergiant and BH, respectively. More
recently, \citet{kar08} classified HD~226868 as an ON star with a
temperature of $T_{\rm eff} = 30.4 \pm 0.5$~kK and gravity of $\log g
= 3.31 \pm 0.07$ using a semi-gray model atmosphere that accounts for
non-LTE effects in some lines and for X-ray illumination.
Here we present an analysis of the photospheric parameters for the
supergiant based upon ground-based optical spectra and
high-resolution, UV spectra from the {\it Hubble Space Telescope}
Space Telescope Imaging Spectrograph ({\it STIS}). These {\it STIS}
spectra were first presented by \citet{gie08} and \citet{vrt08} in
discussions of the orbital variations observed in the stellar wind
lines. We compare the optical and UV line profiles of HD~226868 with
synthetic spectra based on line blanketed, non-LTE photospheric models
in order to determine the stellar temperature and gravity (\S2).
Since the continuum flux and spectral lines of the supergiant could be
influenced by X-ray heating, we search for heating effects in the
orbital UV flux variations using the low/hard state {\it International
Ultraviolet Explorer (IUE)} archival spectra and the high/soft state
{\it HST} spectra (\S3). A stellar radius -- distance relation can be
determined from fits of the spectral energy distribution. We use the
observed flux distribution and spectra of field stars in the same
region of the sky to estimate the reddening and extinction in the
direction of Cyg~X-1 and to determine the angular size of the star
(\S3). Finally, we use this radius -- distance relation with the
method developed by \citet{pac74} to set mass limits as a function of
distance and to estimate the probable masses using constraints from
the rotational line broadening and ellipsoidal light curve (\S4).
\section{Ultraviolet and Optical Line Spectrum}
We need to rely on the line spectral features to estimate temperature
since the UV and optical continuum falls in the long wavelength,
Rayleigh-Jeans part of the flux distribution where the shape of the
continuum is insensitive to temperature. Some of the best line
diagnostics for late O-supergiants are found in the optical spectrum
where several ionization state line ratios and the Balmer line
profiles change dramatically with temperature and gravity
\citep{wal90,sea08}. In this section, we use $\chi^2_{\nu}$ fits of
the optical and UV spectra with model spectra to estimate $T_{\rm
eff}$ and $\log g$. Our data consist of high resolution UV spectra
taken with the {\it HST}/STIS (G140M grating, resolving power
$R=14500$) and two sets of optical spectra from the Kitt Peak National
Observatory Coud\'{e} Feed telescope (CF; 3759 -- 5086 \AA , $R=2990$)
and 4~m Mayall Telescope and RC spectrograph (RC; 4182 -- 4942 \AA ,
$R=5700$). Details of these observations are given in Table~1 of
\citet{gie08}. All of these flux-rectified spectra were shifted to
the rest frame (using the orbital solution given by \citealt{gie08})
and co-added to increase the signal-to-noise ratio.
We compared these spectra with model spectra from the TLUSTY/SYNSPEC
codes given in the grids OSTAR2002 \citep{lan03} and BSTAR2006
\citep{lan07}. The model atmospheres are based upon a plane parallel
geometry, solar abundances, line blanketed opacities, and non-LTE
calculations of atomic populations for H, He, and representative atoms
up to Fe. The model spectra are presented as a function of four
parameters: the microturbulent velocity of gas in the line forming
region, $\xi$, the stellar effective temperature, $T_{\rm eff}$,
logarithm of the gravitational acceleration in the photosphere, $\log
g$, and the chemical abundance of the gas. The model spectra were
transformed to the observed wavelength grids by wavelength integration
and convolution with rotational and instrumental broadening functions.
We adopted a projected rotational velocity of $V\sin i = 98$
km~s$^{-1}$ \citep{gie86a}, linear limb darkening coefficients from
\citet{wad85}, and Gaussian representations of instrumental broadening
using the projected slit FWHM \citep{gie08}.
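As an illustration of this broadening step (not the actual reduction pipeline), the following Python sketch convolves a toy line profile with a standard rotational broadening profile for linear limb darkening and with a Gaussian instrumental profile of FWHM $=\lambda/R$; the wavelength grid, the toy line, and the adopted limb-darkening coefficient are placeholders.
\begin{verbatim}
# Illustrative sketch: convolve a toy model line with a rotational profile
# (linear limb darkening) and a Gaussian instrumental profile.
import numpy as np

c_kms = 2.99792458e5

def rotational_kernel(dl, dl_max, eps):
    """Standard rotational broadening profile with limb-darkening eps."""
    y = np.clip(1.0 - (dl / dl_max) ** 2, 0.0, None)
    g = 2*(1 - eps)*np.sqrt(y) + 0.5*np.pi*eps*y
    return g / (np.pi * dl_max * (1 - eps/3.0))

def broaden(wave, flux, vsini=98.0, eps=0.3, R=2990):
    dw = wave[1] - wave[0]
    lam0 = wave[wave.size // 2]
    dl_max = lam0 * vsini / c_kms                    # rotational half-width
    x = np.arange(-dl_max, dl_max + dw, dw)
    flux = np.convolve(flux - 1.0, rotational_kernel(x, dl_max, eps) * dw, "same") + 1.0
    sig = lam0 / R / 2.3548                          # instrumental FWHM -> sigma
    x = np.arange(-5*sig, 5*sig + dw, dw)
    gauss = np.exp(-0.5*(x/sig)**2)
    return np.convolve(flux - 1.0, gauss / gauss.sum(), "same") + 1.0

wave = np.arange(4460.0, 4560.0, 0.02)
line = 1.0 - 0.6*np.exp(-0.5*((wave - 4510.0)/0.15)**2)   # toy photospheric line
print(round(broaden(wave, line).min(), 3))                # shallower, broader line
\end{verbatim}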
It was clear from inspection that a large microturbulence is required
to match the observed and model spectra. The OSTAR2002 grid uses $\xi
= 10$ km~s$^{-1}$ throughout while the BSTAR2006 grid adopts $\xi = 2$
km~s$^{-1}$ for the full grid and $\xi = 10$ km~s$^{-1}$ for a
selection of low gravity (supergiant) models. We found that the best
fits were obtained in all three spectral bands with the $\xi = 10$
km~s$^{-1}$ models, and this was especially true in the FUV where the
observed deep spectral lines were not matched with the lower
microturbulent velocity models. An atmospheric microturbulence of
$\xi = 10$ km~s$^{-1}$ is typical for late-O supergiants \citep{rya02},
and \citet{can95} derived an estimate of $\xi = 10.7$ km~s$^{-1}$ from
a curve of growth analysis of the optical \ion{N}{3} lines in the
spectrum of HD~226868.
We tested the goodness-of-fit for each of the models of interest by
calculating the reduced $\chi^2_{\nu}$ statistic,
\begin{equation}\label{eq1}
\chi^2_{\nu} = \sum^{N}_{i=1} \frac{[F_{obs}(\lambda_{i}) - F_{model}
(\lambda_{i})]^2}{\sigma_{err}^2(\lambda_{i}) (N - 1)}.
\end{equation}
Here $N$ is the total number of wavelength points used in the fit and
$\sigma_{err} (\lambda_i)$ is the standard deviation of the mean
rectified flux (determined from the scatter at wavelength $\lambda_i$
among the individual spectra in the co-added mean). We selectively
omitted from the summation spectral regions that contained stellar
wind features or interstellar lines that are not present in the model
spectra. Our results are listed in Table~1 for a wide range of model
spectra with a microturbulence of $\xi = 10$ km~s$^{-1}$. Column (1)
indicates the spectral region fit by ``HST'' for the FUV spectrum,
``CF'' for the KPNO CF, blue spectrum, and ``RC'' for the KPNO 4~m RC,
green spectrum. Column (2) gives the grid value of gravity $\log g$,
and column (3) gives a code for the spectral model (``O'' for spectra
from the OSTAR2002 grid and ``B'' and ``BCN'' for spectra from the
BSTAR2006 grid). Then follow 10 columns that list the measured
$\chi^2_{\nu}$ for grid values of $T_{\rm eff}$ (at increments of 2.5
kK and 1 kK for the OSTAR2002 and BSTAR2006 grids, respectively).
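The following Python sketch illustrates this grid search schematically with synthetic spectra standing in for the data and models; it is meant only to show how equation~(\ref{eq1}) is evaluated over a small $(T_{\rm eff},\log g)$ grid with masked pixels excluded.
\begin{verbatim}
# Schematic grid search with equation (1); all spectra are synthetic stand-ins.
import numpy as np

def reduced_chi2(f_obs, f_model, sigma, mask):
    """Equation (1) evaluated over the unmasked wavelength points."""
    r = (f_obs[mask] - f_model[mask]) / sigma[mask]
    return np.sum(r**2) / (mask.sum() - 1)

rng = np.random.default_rng(1)
npix = 2000
pix = np.arange(npix)
sigma = np.full(npix, 0.01)
truth = 1.0 - 0.30*np.exp(-0.5*((pix - 1000)/40.0)**2)
f_obs = truth + rng.normal(0.0, 0.01, npix)
mask = np.ones(npix, dtype=bool)
mask[1500:1600] = False                  # e.g. a wind or interstellar feature

# toy grid: line depth stands in for a temperature/gravity-sensitive diagnostic
grid = {(T, g): 1.0 - d*np.exp(-0.5*((pix - 1000)/40.0)**2)
        for T, g, d in [(26.0, 2.75, 0.25), (28.0, 3.00, 0.30), (30.0, 3.25, 0.35)]}
chi2 = {key: reduced_chi2(f_obs, fm, sigma, mask) for key, fm in grid.items()}
print(min(chi2, key=chi2.get))           # -> (28.0, 3.0), the closest model
\end{verbatim}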
\placetable{tab1}
The trends in Table~1 are represented in a combined contour diagram in
Figure~1. Here the gray-scale contours represent the goodness-of-fit
for the FUV spectrum, the solid lines for the blue spectrum, and the
dashed lines for the green spectrum. Since there is not an exact
match between the predictions of the OSTAR2002 and BSTAR2006 grids
at their boundary, Figure~1 shows contours based only on the OSTAR2002
grid for high gravity models $\log g \geq 3.0$, while the contours in the
lower gravity region $\log g \leq 3.0$ are based upon the BSTAR2006 grid.
Note that there are no models available for high temperature, low
gravity region in the lower right part of the diagram because such
atmospheres approach or exceed the Eddington luminosity limit. The
higher gravity, lower temperature region is empty because the
BSTAR2006 grid contains only models for a lower microturbulent
velocity $\xi = 2$ km~s$^{-1}$ for this parameter range. Note that
the $\chi^2_{\nu}$ minima in Table~1 have values much larger than the
expected value of unity. This is due to the inclusion of spectral
regions where there is evident mismatch because of incomplete removal
of interstellar features, marginal differences in the continuum
placement, and real differences between the observed and model spectra even
in the best fit cases. For the purposes of intercomparison in
Figure~1, we have subtracted from each $\chi^2_{\nu}$ set the minimum
minus one value, so that the figure represents variance increases that
result only from changes in the assumed temperature and gravity.
\placefigure{fig1}
The temperature and gravity properties of the $\chi^2_{\nu}$ fits
shown in Figure~1 result primarily from the dependence of the spectral
lines on the ionization levels in the gas. The Saha ionization
equilibrium equation shows that the number ratio of atoms of one ion
to the next higher ionization state is equal to the electron density
times a function that decreases with increasing temperature. Thus, in
order to match the ionization ratios represented in the spectral line
strengths, the best fits are found along a diagonal zone of increasing
$\log g$ with increasing temperature, i.e., increasing the electron
density with $\log g$ compensates for the functional drop related to
temperature increase. However, there are other spectral dependences
that help us determine the minimum $\chi^2_{\nu}$ position along the
valley in the $(T_{\rm eff}, \log g)$ diagram. In particular, the H
Balmer line wings in the optical spectrum are sensitive to pressure
broadening (linear Stark effect) and hence model gravity. The lowest
contours of $\chi^2_{\nu}$ in Figure~1 indicate that the best fits are
found for $T_{\rm eff} = 26.5 - 28.5$ kK and $\log g = 2.9 - 3.1$.
Both of these estimates are lower than found in earlier studies
\citep{can95,her95,kar08}, but they are consistent with recent studies
that demonstrate that the inclusion of line blanketing in stellar
atmosphere models tends to lower the derived effective temperature
\citep{rep04,mar05,lef07,sea08}.
The run of $\chi^2_{\nu}$ fits shown in Figure~1 is for solar
abundance models, but given the reports that the spectrum of HD~226868
has strong N lines, we also explored fits with CN abundance models
included in the BSTAR2006 grid for gravities of $\log g = 2.75$ and
3.00. These models assume a He to H number ratio of 0.2 (compared to
0.085 for the solar abundance models), a C abundance equal to one half
the solar value, and a N abundance five times the solar abundance.
These adjustments demonstrate the kind of changes that are expected
when the atmosphere becomes enriched in CNO-processed gas. While the
CN models do not improve the fits of the FUV spectrum, they are
significantly better fits of the optical spectrum (where a number of
strong \ion{He}{1} and \ion{N}{3} lines are present; see Fig.~2
below). When we compare the $\chi^2_{\nu}$ fit of a solar model to a
CN model at the same gravity in Table \ref{tab1}, the CN models fit
better at higher temperature, especially for fits of the optical
spectra. Following this trend, we estimate that the supergiant's
spectrum is fit best by the CN models with $T_{\rm eff}=28\pm2.5$ kK
and $\log g = 3.00\pm0.25$ dex. Thus, we will focus on these CN models
for the rest of this section.
We compare the two mean optical spectra (CF and RC) with our best fit
model spectrum in Figure~2 ($T_{\rm eff}=28$ kK, $\log g = 3.00$) and
with a marginally acceptable fit in Figure~3 ($T_{\rm eff}=26$ kK,
$\log g = 2.75$). Also shown for comparison is the spectrum of a
similar O9.7~Iab star, $\mu$~Nor (HD~149038; from \citealt{wal90}).
The spectra appear in three plots: the short wavelength range from the
lower resolution CF data is shown in the top panel while the bottom
two panels illustrate the longer wavelength region from the higher
resolution RC spectra (compare with Fig.~1 in \citealt{kar08}). In
general the agreement between the observed and model spectrum is
satisfactory. The largest discrepancies are seen in the \ion{He}{2}
$\lambda 4686$ and H$\beta$ lines where incipient emission from the
stellar wind of the supergiant alters the profiles
\citep{gie86b,nin87a,gie03,gie08}. The H Balmer line emission is
strongest in H$\alpha$, and for simple estimates of the Balmer
decrement for Case B recombination \citep{ost06}, we expect some
measurable degree of wind emission for all the Balmer lines shown in
Figures 2 and 3. We find that the H line cores do appear shallower,
while the Balmer line wings agree well with the models. \citet{sea08}
observed this effect in other supergiants and they suggest that
stellar wind emission from the outer atmosphere tends to fill in the
line core. On the other hand, the Balmer line wings are formed in
higher density gas, deeper in the photosphere where we expect the
TLUSTY/SYNSPEC results to be quite reliable. The H line wings become
narrower with lower gravity, and the predicted H profiles for the
lower gravity model illustrated in Figure~3 appear to be significantly
narrower than the observed ones.
\placefigure{fig2}
\placefigure{fig3}
The \ion{He}{1} and \ion{He}{2} line strengths are well matched in the
He enriched CN model spectra. In particular the temperature sensitive
ratio of \ion{He}{2} $\lambda 4541$ to \ion{Si}{3} $\lambda 4552$
(equal for the O9.7 classification) is better reproduced by the
$T_{\rm eff} = 28$ kK and $\log g = 3.00$ model (Fig.~2) than the
$T_{\rm eff} = 26$ kK and $\log g = 2.75$ model (Fig.~3). The
\ion{N}{3} $\lambda\lambda 4097,4379,4510,4514,4630,4634, 4640$ lines
are also reasonably well fit in the five times overabundant CN models.
On the other hand, O lines like \ion{O}{2} $\lambda\lambda
4069,4072,4075,4590,4596$ are too strong in the model spectra, which
suggests that the O abundance should be revised downwards from solar
values as expected for CNO-processed gas. The other differences
between the observed and model spectra are related to the presence of
interstellar features (\ion{Ca}{2} $\lambda\lambda 3933, 3968$ in the
CF spectrum; most of the deep ISM features were removed from the RC
spectra).
In Figure~4 we present the averaged UV spectrum made with {\it
HST}/STIS with the best TLUSTY/SYNSPEC model superimposed as a lighter
line. Figure~4 also includes an average UV spectrum of $\mu$~Nor,
based upon 34 high resolution, archival spectra from {\it IUE}.
Horizontal line segments indicate those regions where the lines
primarily originate in the photosphere, i.e., free from P~Cygni
stellar wind lines and from regions where interstellar lines were
removed by interpolation \citep{gie08}. Overall, the line features in
the observed UV spectrum agree well with the model UV spectrum based
upon the optimal $T_{\rm eff}$ and $\log g$ parameters derived from
the optical and FUV spectral fits. Note that the \ion{He}{2} $\lambda
1640$ feature appears in absorption as predicted, so there is no
evidence of the Raman scattering emission that was observed by
\citet{kap90} in the massive X-ray binary 4U1700--37. There are,
however, a few specific regions where the match is less satisfactory.
For example, the blends surrounding \ion{Fe}{5} $\lambda$ 1422 and
\ion{Fe}{4} $\lambda\lambda 1596, 1615$ appear stronger in both the
spectra of HD~226868 and $\mu$~Nor, which suggests that the models are
underestimating the Fe line opacity in these wavelength regions. The
\ion{S}{5} $\lambda 1502$ line \citep{how87} has a strength in the
spectrum of HD~226868 that falls between that of the model and of
$\mu$~Nor. The deep feature near 1690 \AA~ is an instrumental flaw
near the edge of the detector at one grating tilt.
\placefigure{fig4}
There are huge variations in the stellar wind lines between the
orbital conjunctions that are due to X-ray ionization of the wind
\citep{gie08,vrt08}, and it is possible that X-ray heating might also
affect some of the photospheric lines. Figure 5 compares the average
UV spectra at the two conjunction phases $\phi$ = 0.0 and 0.5
(inferior and superior conjunction of the supergiant, respectively).
With the exception of the known wind line changes, we find that the
spectra are almost identical between conjunctions. Some slight
differences are seen in very strong features, such as the \ion{Si}{3}
$\lambda 1300$ complex and the \ion{Fe}{5} line blends in the 1600 --
1650 \AA ~region. The deep lines appear somewhat deeper at $\phi =
0.0$ and have slightly extended blue wings compared to those observed
at $\phi = 0.5$ (when the black hole is in the foreground). We
speculate that the deeper cores and blue extensions result from line
opacity that forms in the upper atmosphere where the outward wind
acceleration begins. This outer part of the atmosphere in the
hemisphere facing the black hole may also experience X-ray ionization
(like the lower density wind) that promotes Si and Fe to higher
ionization levels and reduces the line opacity of the observed
transitions.
\placefigure{fig5}
Our spectral fits are all based upon the existing OSTAR2002 and
BSTAR2006 grids, and it would certainly be worthwhile to explore more
specific models, for example, to derive reliable estimates of the He
and N overabundances. A determination of the He abundance in
particular will be important for a definitive temperature estimate.
It is also important in such an analysis to consider the full effects
of the stellar wind in HD~226868. \citet{her95} compared analyses of
the spectrum of HD~226868 from static, plane-parallel models with
unified, spherical models (that treat the photosphere and wind
together), and they found their $\log g$ estimate increased by about
0.2 dex (with no change in temperature) in the unified models. Thus,
we suspect that our gravity estimate derived from the plane-parallel
TLUSTY code is probably a lower limit (approximately consistent with
the results of \citealt{her95} and \citealt{kar05}).
\section{UV -- IR Spectral Energy Distribution}
We can use the derived model flux spectrum to fit the observed
spectral energy distribution (SED) and reassess the interstellar
extinction and the radius -- distance relation. We collected the
archival low dispersion {\it IUE} spectra and combined these fluxes
with the {\it HST} spectra in wavelength bins spanning the FUV and NUV
regions. We transformed the $UBV$ magnitudes from \citet{mas95} into
fluxes using the calibration of \citet{col96}, and the near-IR fluxes
were determined from a calibration of the 2MASS $JHK_s$ magnitudes
\citep{coh03,skr06}. Then we fit the observed fluxes with the optimal
BSTAR2006 flux model (CN model, $\xi = 10$ km~s$^{-1}$, $T_{\rm eff} =
28$ kK, $\log g = 3.0$) to find the best reddening curve using the
extinction law from \citet{fit99}. We placed additional weight on the
six optical and IR points to compensate for the larger number of UV
points. Figure~6 shows the observed and best fit model fluxes for
HD~226868 that we obtained with a reddening $E(B-V) = 1.11\pm0.03$ mag
and a ratio of total to selective extinction $R_V = 3.02\pm 0.03$.
These values agree well with the previous reddening estimates that are
collected in Table~2.
\placefigure{fig6}
\placetable{tab2}
For comparison we examined the colors and reddening of six field stars
within 10 arcminutes of HD~226868 in the sky. These stars were
observed with the KPNO 4~m telescope and RC spectrograph using the
same blue region arrangement selected for our observations of
HD~226868 \citep{gie08}. We made spectral classification of the
stars, and then we used the observed $UBV$ colors from \citet{mas95}
and the intrinsic color and absolute magnitude for the classification
\citep{gra92} to estimate reddening and distances to these stars. Our
results are collected in Table~3 with the reddening estimate from
above listed for HD~226868. \citet{bre73} estimated the distance of
HD~226868 as $d\approx 2.5$ kpc, and set a lower limit of 1~kpc based
upon the colors of other nearby field stars. We find that there are
two stars at distances just under 1~kpc that have a similar reddening
to that of HD~226868, which is consistent with a distance to HD~226868
of $d\gtrsim 1.0$~kpc.
\placetable{tab3}
The normalization of the fit to the SED yields the star's
limb-darkened angular diameter, $\theta = 96 \pm 6$ $\mu$as. Then we
can calculate the luminosity and radius of the star as a function of
distance $d$ (in kpc) to HD~226868,
\begin{equation}\label{eq2}
\frac{L_{1}}{L_{\odot}} = (5.9 \pm 2.1) \times 10^{4} d^{2},
\end{equation}
\begin{equation}\label{eq3}
\frac{R_{1}}{R_{\odot}} = (10.3 \pm 0.7) d.
\end{equation}
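As a quick numerical check of equations~(\ref{eq2}) and (\ref{eq3}), the following Python snippet recovers the radius and luminosity scalings from the fitted angular diameter and the adopted effective temperature; the constants are standard CGS values and the snippet is purely illustrative.
\begin{verbatim}
# Check of equations (2) and (3): theta = 96 micro-arcsec, T_eff = 28 kK.
import numpy as np

kpc, Rsun, Lsun, sigma_sb = 3.0857e21, 6.957e10, 3.828e33, 5.6704e-5
muas_to_rad = np.pi / 180.0 / 3600.0 * 1e-6

theta, Teff = 96.0 * muas_to_rad, 28.0e3
for d in (1.0, 1.5, 2.0):                        # distance in kpc
    R = 0.5 * theta * d * kpc                    # linear radius (cm)
    L = 4.0 * np.pi * R**2 * sigma_sb * Teff**4  # blackbody luminosity (erg/s)
    print(d, round(R / Rsun, 1), f"{L / Lsun:.1e}")
# d = 1 kpc gives R ~ 10.3 R_sun and L ~ 5.9e4 L_sun, as in equations (2)-(3)
\end{verbatim}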
It is important to check that these SED results are not affected by
long term or orbital flux variability, so we examined the archival
{\it IUE} low dispersion spectra \citep{gie08} and the {\it HST}
spectra to determine the amplitude of any flux variations in the UV.
We calculated the average continuum flux over three wavelength spans
(1252 -- 1380 \AA , 1410 -- 1350 \AA , and 1565 -- 1685 \AA) that
excluded the main wind features. We then converted the UV fluxes to
differential magnitudes $\Delta m$. We found no significant
differences between fluxes from times corresponding to the X-ray
low/hard state ({\it IUE}) and high/soft state ({\it IUE} and {\it
HST}; see \citealt{gie08} for X-ray state information), nor were there
any long term variations over the 25 year time span between the {\it
IUE} and {\it HST} observations. On the other hand, we do find
marginal evidence of the orbital flux variations related to the tidal
distortion of the supergiant. We plot in Figure~7 the mean orbital
flux variations of the three wavelength intervals for both {\it IUE}
and {\it HST} spectra that are averaged into eight bins of orbital
phase. For comparison we also include the $V$-band ellipsoidal light
curve from \citet{kha81}. The UV and $V$-band light curves appear to
have similar amplitudes, consistent with past estimates
\citep{tre80,vlo01}. Note that the minima have approximately equal
depths (consistent with the optical results; \citealt{bal81}), which
suggests that there is little if any deep heating by X-rays of the
hemisphere of the supergiant facing the black hole. Since the
amplitude of the light curve is small and the average UV fluxes
plotted in the SED in Figure~6 cover the full orbit, the ellipsoidal
variations have a minimal impact on the quantities derived from the
SED.
\placefigure{fig7}
Finally, we need to consider if the SED has a non-stellar flux
contribution from the accretion disk around the black hole or from
other circumstellar gas. \citet{bru78} estimated that the disk
contributes about 2\% of the optical flux, and there are reports of
small optical variations with superorbital periods that may correspond
to the precession of the accretion disk
\citep{kem87,bro99,szo07,pou08}. Furthermore, \citet{dol01} observed
rapid UV variations that he argued originate in dying pulse trains of
infalling material passing the event horizon of the BH. \citet{mil02}
developed a multi-color disk SED to model the X-ray continuum of
Cyg~X-1, and Dr.~Miller kindly sent us the model fluxes extrapolated
into the UV and optical. These are also plotted in Figure~6 after
accounting for interstellar extinction. Both the photospheric and
disk SEDs correspond to the Rayleigh-Jeans tail of a hot continuum,
and the model predicts that the disk contributes approximately 0.01\%
of the total flux in the UV to IR range. This small fraction is
consistent with our successful fitting of the UV and optical line
features that would otherwise appear shallower by flux dilution if the
disk was a significant flux contributor. Thus, our SED fitting is
probably unaffected by any non-stellar flux source.
\section{Mass of the Supergiant}
In this section we will explore the mass consequences of our relations
for radius and luminosity as a function of distance. \citet{pac74}
derived model-independent, minimum mass estimates for both components
as a function of distance based on the lack of X-ray eclipses (setting
a maximum orbital inclination) and the assumption that HD~226868 is
not larger than its Roche lobe (setting a lower limit on the ratio of
the supergiant to black hole mass, $M_1/M_2$). We repeated his
analysis using our revised radius -- distance relationship (eq.\ [3]),
stellar effective temperature $T_{\rm eff} = 28$~kK, and current
values for the mass function $f(m)$ = 0.251$\pm$0.007 M$_{\odot}$ and
period $P$ = 5.599829 days \citep{gie03}. The resulting minimum
masses are presented in columns 7 and 10 of Table~4 as a function of
distance $d$.
\placetable{tab4}
We can make further progress by assuming the supergiant has attained
synchronous rotation with the orbit since the stellar radius is
probably comparable in size to the Roche radius \citep{gie86b}. We
take the ratio $\Omega$ of the star's spin angular velocity to orbital
angular velocity to be 1. Then the projected rotational velocity
$V\sin i$ is related to the inclination $i$ by
\begin{equation}\label{eq4}
V\sin i = {{2 \pi}\over{P}} R_1 \sin i
\end{equation}
where $P$ is the orbital period. The projected rotational velocity,
after correction for macroturbulent broadening, is estimated to be
$V\sin i = 95\pm 6$ km~s$^{-1}$ \citep{gie86a,nin87b,her95}. Inserting
equation~(\ref{eq3}) for $R_1$ we obtain an inclination estimate in terms of distance $d$ (kpc) of
\begin{equation}\label{eq5}
i = \arcsin ((1.02 \pm 0.09) / d) .
\end{equation}
These inclination estimates are given in column 2 of Table~4. Note
that this argument suggests a lower limit to the distance of $\approx
1.0$ kpc, similar to that found by reddening considerations.
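The following Python snippet illustrates the relation between distance and inclination implied by equations~(\ref{eq4}) and (\ref{eq5}) for the adopted $V\sin i$ and period; it is included only as a numerical illustration of the synchronous-rotation argument.
\begin{verbatim}
# Synchronous rotation: R_1 = 10.3 d R_sun (eq. 3), P = 5.599829 d,
# V sin i = 95 km/s, so sin i follows from equation (4).
import numpy as np

Rsun_km, day = 6.957e5, 86400.0
P, vsini = 5.599829 * day, 95.0                  # s, km/s

for d in (1.0, 1.2, 1.5, 2.0):
    R1 = 10.3 * d * Rsun_km                      # km
    veq = 2.0 * np.pi * R1 / P                   # equatorial speed for Omega = 1
    sin_i = min(vsini / veq, 1.0)                # equation (4) rearranged
    print(d, round(veq, 1), round(np.degrees(np.arcsin(sin_i)), 1))
# sin i = 1.02/d, so a physical solution (sin i <= 1) requires d >~ 1.0 kpc
\end{verbatim}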
\citet{gie86a} showed that the mass ratio can be estimated from the
ratio of the projected rotational velocity to the orbital
semiamplitude of the supergiant,
\begin{equation}\label{eq6}
{{V\sin i}\over{K}} = \rho (Q+1) \Phi(Q)
\end{equation}
where $\rho$ is the fill-out factor, i.e., the ratio of volume
equivalent radii of the star and Roche lobe, $Q=M_1/M_2$ is the mass
ratio, $\Phi$ is the ratio of the Roche lobe radius to the semimajor
axis \citep{egg83}, and synchronous rotation is assumed. Thus, given
the observed values of $V\sin i$ and $K$ and an assumed value of
$\rho$, we can find the mass ratio and, with the inclination, the
masses of each star. These masses are listed in columns 8, 9, 11 and
12 of Table~4 under headings that give the fill-out factor in
parentheses. The run of masses is also shown in Figure~8 that
illustrates the mass solutions as a function of distance and fill-out
factor. Loci of constant $\rho$ are denoted by dotted lines
(increasing right to left from $\rho=0.85$ to 1.0) while loci of
constant distance (and inclination angle) are shown by dashed lines.
The derived gravity values from these masses of $\log g \approx 3.3$
reinforce the idea that our spectral estimate of $\log g = 3.0$ is a
lower limit (see \S2).
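For illustration, the following Python sketch solves equation~(\ref{eq6}) for the mass ratio and then applies the mass function, pairing each fill-out factor with a representative distance from the light-curve solutions; the Roche-lobe function of \citet{egg83} is used for $\Phi$, and the printed values are rounded illustrations rather than the entries of Table~4.
\begin{verbatim}
# Equation (6) plus the mass function, along the lines of Table 4.
import numpy as np
from scipy.optimize import brentq

G, Msun, day = 6.674e-8, 1.989e33, 86400.0           # cgs
P, fm, vsini = 5.599829 * day, 0.251 * Msun, 95.0e5  # s, g, cm/s
K = (2.0 * np.pi * G * fm / P) ** (1.0 / 3.0)        # ~75.6 km/s semiamplitude

def phi(Q):                                          # Roche lobe radius / semimajor axis
    q23 = Q ** (2.0 / 3.0)
    return 0.49 * q23 / (0.6 * q23 + np.log(1.0 + Q ** (1.0 / 3.0)))

for rho, d in ((0.9, 1.5), (1.0, 2.0)):              # representative (rho, d) pairs
    Q = brentq(lambda q: rho * (q + 1.0) * phi(q) - vsini / K, 0.5, 10.0)  # eq. (6)
    sin_i = min(1.02 / d, 1.0)                       # eq. (5)
    M2 = fm * (1.0 + Q) ** 2 / sin_i ** 3 / Msun     # f(m) = M2 sin^3 i / (1+Q)^2
    print(rho, d, round(Q, 2), round(Q * M2, 1), round(M2, 1))
# roughly M_1 ~ 17-30 M_sun and M_2 ~ 8-16 M_sun over this range
\end{verbatim}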
We assumed synchronous rotation in the relations above because both
observations and theory indicate that the orbital synchronization time
scale in close binaries is shorter than the circularization time scale
\citep{cla95}, and since the orbit is circular, it follows that the
star must rotate at close to the synchronous rate. However, it is
straightforward to see how the solutions will change if the
synchronism parameter $\Omega$ differs from unity. In equation (5)
the distance $d$ can be replaced by the product $\Omega d$, while in
equation (6) the fill-out parameter $\rho$ can be replaced by
$\Omega\rho$. If, for example, $\Omega = 0.95$, then the mass
solutions can be obtained from Table~4 and Figure~8 by selecting a
distance of $0.95 d$ and a fill-out ratio of $0.95 \rho$.
\placefigure{fig8}
The other important constraint comes from the ellipsoidal light curve.
The tidal distortion of the star results in a double-wave variation
(Fig.~7) whose amplitude depends on the inclination (maximal at
$i=90^\circ$) and degree of tidal distortion (maximal for fill-out
$\rho=1.0$). In order to determine which parts of the mass plane are
consistent with the observed variation, we constructed model $V$-band
light curves using the GENSYN code \citep{moc72,gie86a} for the four
values of fill-out factor illustrated in Figure~8. There is a unique
solution for the best fit of the light curve along each line of
constant fill-out factor, since the light curve amplitude
monotonically decreases with decreasing inclination (increasing
distance). The solid line in Figure~8 connects these best fit
solutions (indicated by plus sign symbols). These light curve
solutions differ slightly from those presented by \citet{gie86a}
because we chose to fit the light curve from \citet{kha81} instead of
that from \citet{kem83}, and the differences in the solutions reflect
the uncertainties in the observed light curve.
There are several other constraints from hints about the mass transfer
process, luminosity, and distance that can provide additional limits
on the acceptable mass ranges. Both \citet{gie86b} and \citet{nin87a}
presented arguments that the unusual \ion{He}{2} $\lambda 4686$
emission in the spectrum of HD~226868 originates in a tidal stream or
focused wind from the supergiant towards the black hole. Furthermore,
\citet{gie86b} made radiative transfer calculations of the focused
wind emission profiles for models of the asymmetric wind from
\citet{fri82}, and they determined that the fill-out factor must
exceed $\rho=0.90$ in order to increase sufficiently the wind density
between the stars to account for the observed strength of the
\ion{He}{2} $\lambda 4686$ emission. Thus, the presence of a focused
wind implies that the fill-out factor falls in the range $\rho=0.9 -
1.0$.
\citet{pac74} and \citet{zio05} argue that massive stars evolve at
near constant luminosity, and, therefore, the best solutions will obey
the observed mass -- luminosity relation. Table~4 lists the derived
luminosity $\log L_{1}$ (column 4) as a function of distance (eq.\
[2]) plus the predicted luminosities for the mass solutions determined
for the $\rho=0.9$ and 1.0 cases, $\log L_{1}^\star(0.9)$ and $\log
L_{1}^\star(1.0)$, respectively (columns 5 and 6). These predictions
are based upon the mass -- luminosity relations for $T_{\rm
eff}=28$~kK stars from the model evolutionary sequences made by
\citet{sch92}. We find that the observed and predicted luminosities
match over the distance range of $d=1.7$ ($\rho=0.9$) to 2.0 kpc
($\rho=1.0$), closer than the range advocated by \citet{zio05} who
adopted a higher temperature and hence higher luminosity. Note that
some stars in mass transfer binaries appear overluminous for their
mass, so these distances should probably be considered as upper
limits.
Several authors have suggested that the position and proper motion of
HD~226868 indicates that it is a member of the Cyg~OB3 association
\citep{mir03} that has a distance of 1.6-2.5 kpc \citep{uya01}.
However, a radio parallax study by \citet{les99} indicates a smaller
(but possibly consistent) distance of $1.4^{+0.9}_{-0.4}$~kpc for Cyg
X-1. Our fits of the ellipsoidal light curve suggest that the maximum
allowable distance is $d\approx 2.0$~kpc (for $\rho=1.0$). The
interstellar reddening indicates a distance of at least 1.0~kpc (\S3),
which is probably consistent with the strength of interstellar
\ion{Ca}{2} lines. \citet{meg05} present a method for determining the
distance to O supergiants using the equivalent width $W_{\lambda}$ of
the \ion{Ca}{2} $\lambda 3933$ feature. Using their calibration with
the value of $W_{\lambda} = 400 \pm 10$ m\AA ~from \citet{gie86a}
yields a distance $d = 1.2$~kpc. Since the reddening of HD~226868 is
approximately the same as that for the much more distant Cepheid,
V547~Cyg \citep{bre73}, the ISM must have a relatively low density
beyond $\approx 1$~kpc along this line of sight through the Galaxy, so
we suspect that the distance derived from the interstellar \ion{Ca}{2}
line is probably a lower limit.
All of these constraints are consistent with the mass solutions for a
fill-out factor range of $\rho = 0.9 - 1.0$, and the corresponding
mass ranges are listed in Table~5. We also list mass estimates from
earlier investigations. Our downward revision of the effective
temperature results in lower luminosity estimates than adopted by
\citet{zio05}, and consequently, our mass estimates (based upon the
light curve) are significantly lower than his mass estimates (based
upon the mass -- luminosity relation from models). In fact, the lower
limit for the black hole mass now overlaps comfortably with the mass
determined by \citet{sha07} using the correlation between the X-ray
quasi-periodic oscillation frequency and spectral index, so the
apparent discrepancy in black hole mass estimates from X-ray and
optical data is now resolved. If the X-ray derived mass is accurate,
then the mass solution for fill-out factor $\rho=0.91$ is preferred
($M_1 = 19 M_\odot$ and the distance is $d=1.6$~kpc).
\placetable{tab5}
Our analysis of the first high resolution UV spectra of HD~226868 and
of the complementary optical spectra shows that the photospheric line
spectrum can be matched by adopting an atmosphere mixed with
CNO-processed gas with an effective temperature $T_{\rm eff} = 28.0
\pm 2.5$~kK and log $g \gtrsim 3.0 \pm 0.25$. Assuming synchronous
rotation ($\Omega = 1$) and using the fill-out factor range from
above, the mass of the supergiant ranges from $M_1 = 17 - 31
M_{\odot}$ and the black hole mass ranges from $M_2 = 8 -
16M_{\odot}$. This corresponds to an inclination of $i = 31^\circ -
43^\circ$ and a distance of $d = 1.5 - 2.0$ kpc. Better estimates of
the masses may be possible in the future. For example, both the {\it
GAIA} \citep{jor08} and {\it SIM Lite} \citep{unw08} space astrometry
missions will provide an accurate parallax and distance. Furthermore,
pointed observations with SIM Lite will measure the astrometric motion
of the supergiant around the system center of mass, yielding
independent estimates of both the orbital inclination and distance (by
equating the astrometric and radial velocity semi-major axes;
\citealt{tom09}). Finally, future high dispersion X-ray spectroscopy
with the {\it International X-ray
Observatory}\footnote{http://ixo.gsfc.nasa.gov/index.html} will
measure the orbital motion of the black hole through the orbital
Doppler shifts of accretion disk flux in the Fe K$\alpha$ line
\citep{mil07}. By comparing the optical and X-ray orbital velocity
curves, we will have a secure mass ratio that, together with the
distance estimate, will lead to unique and accurate mass
determinations of the supergiant and black hole.
\acknowledgments
We thank the staffs of the Kitt Peak National Observatory and the
Space Telescope Science Institute (STScI) for their support in
obtaining these observations. We are also grateful to Thierry Lanz
and Ivan Hubeny for information about their model atmosphere grid and
to Jon Miller for providing us with details about his accretion disk
model. Support for {\it HST} proposal number GO-9840 was provided by
NASA through a grant from the Space Telescope Science Institute, which
is operated by the Association of Universities for Research in
Astronomy, Incorporated, under NASA contract NAS5-26555. The {\it
IUE} data presented in this paper were obtained from the Multimission
Archive at the Space Telescope Science Institute (MAST). Support for
MAST for non-HST data is provided by the NASA Office of Space Science
via grant NAG5-7584 and by other grants and contracts. This
publication makes use of data products from the Two Micron All Sky
Survey, which is a joint project of the University of Massachusetts
and the Infrared Processing and Analysis Center/California Institute
of Technology, funded by the National Aeronautics and Space
Administration and the National Science Foundation. Bolton's research
is partially supported by a Natural Sciences and Engineering Research
Council of Canada (NSERC) Discovery Grant. Hadrava's research is
funded under grant projects GA\v{C}R 202/06/0041 and LC06014. Herrero
thanks the Spanish MEC for support under project AY 2007-67456-C02-01.
This work was also supported by the National Science Foundation under
grants AST-0205297, AST-0506573, and AST-0606861. Institutional
support has been provided from the GSU College of Arts and Sciences
and from the Research Program Enhancement fund of the Board of Regents
of the University System of Georgia, administered through the GSU
Office of the Vice President for Research. We are grateful for all
this support.
\bibliographystyle{apj}
\section{Summary}
\paragraph{Conclusion.}
We have demonstrated a scalable first-principles quantum Monte Carlo technique for extended systems; this will allow future studies to blend treatment of electron correlation and spin-orbit interactions using QMC calculations.
The method is based on deriving an effective Hamiltonian for the spin-orbit interaction.
In this method, the major computational cost comes from evaluating the spin-orbit interaction energy on the FN-DMC wave functions, so the total cost is less than twice that of a standard FN-DMC calculation.
One can also include electron-electron interactions in the effective Hamiltonian.
We demonstrated this technique in atomic systems and monolayer WS$_2$.
For the main-group atoms, we compute the spin-orbit splittings of the ground state configurations and obtain results in agreement with the experimentally determined fine-structure splittings.
For the monolayer WS$_2$, we compute the band splitting induced by spin-orbit interaction at $K$.
Our first-principles result agrees with previous calculations and experiments, thus demonstrating the ability of this method to be generalized to larger scale systems.
A major promise of this technique is that one can treat electron-electron interactions, spin-orbit effects, and one-body terms in effective Hamiltonians all on the same footing.
As mentioned before, the cost is similar to a standard FN-DMC calculation, which will allow it to be applied to realistic models of materials.
We envision this opening a new frontier for modeling spin-orbit effects in correlated materials.
\begin{acknowledgements}
This work was funded by the grant DOE FG02-12ER46875 (SciDAC). The authors want to thank Cody Melton for precious discussions.
\end{acknowledgements}
\section{Introduction}
In recent years, there has been a growing interest in the application of reinforcement learning (RL) algorithms in networked control systems. One of the most popular RL algorithms in practice is policy gradient, due to its stability and fast convergence. However, from the theoretical point of view, little is known about it. Recently, it was shown in~\cite{fazel2018global} that a single-agent linear quadratic (LQ) optimal control problem enjoys global convergence, despite the fact that the optimization problem is not convex in the policy space. A similar result was obtained for zero-sum LQ games in~\cite{zhang2019policy}. On the other hand, a nonzero-sum LQ game is more challenging than the above problems,
where the existing results on the global (or even local) convergence of the policy gradient methods are generally not encouraging~\cite{mazumdar2019policy}.
Inspired by recent developments in deep structured teams and games~\cite{Jalal2019MFT,Jalal2019Automatica,Jalal2019risk,Jalal2020Nash,Jalal2020CCTA,Vida2020CDC}, we study a class of LQ games wherein the effect of other players on any individual player is characterized by a linear regression of the states and actions of all players. The closest field of research to deep structured games is mean-field games~\cite{Caines2018book}. In a classical LQ mean-field game, one often has: (a) homogeneous individual weights (i.e., players are equally important); (b) the number of players~$n$ is asymptotically large with independent primitive random variables (to be able to predict the trajectory of the mean-field using the strong law of large numbers); (c) the coupling is through the mean of the states, where the control coupling (called extended coupling) is more challenging; (d) the proof technique revolves around the fact that the effect of a single player on others is negligible, reducing the game to a coupled forward-backward optimal control problem; (e) the solution concept is Nash equilibrium; (f) given some fixed-point conditions across the time horizon, the forward-backward equation admits a solution leading to an approximate Nash in the finite-population game; (g) they are often not practical for long-horizon and reinforcement learning applications wherein the common practice is to adopt a weaker solution concept called stationary Nash equilibrium (where the trajectory of the mean-field is stationary), and (h) since the results are asymptotic, the models are limited to those that are uniformly bounded in~$n$. In contrast to mean-field game, LQ deep structured game often has: (a') heterogeneous individual weights that are not necessarily homogeneous; (b') the number of players is arbitrary (not necessarily very large) with possibly correlated primitive random variables; (c') the coupling is through the weighted mean of the states and actions; (d') the proof technique revolves around a gauge transformation initially proposed in~\cite{arabneydi2016new} (not based on the negligible effect); (e') the solution concept is sequential Nash; (f') the solution is exact (not an approximate one) for any arbitrary number of players and it is identified by Riccati equations; (g') since the solution concept is sequential, it is well suited for long-horizon and reinforcement learning, and (h') since the results are also valid for finite-population game, the dynamics and cost are not necessarily limited to uniformly bounded functions with respect to $n$. It is shown in~\cite{Jalal2019Automatica} that the classical LQ mean-field game with the tracking cost formulation is a special case of deep structured games under standard conditions, where the mean-field equilibrium coincides with the sequential mean-field equilibrium. It is to be noted that the LQ mean-field-type game~\cite{elliott2013discrete,bensoussan2013mean,carmona2018probabilistic} is a single-agent
control problem (i.e., it is not a non-cooperative game), which resembles a team problem with social welfare cost function.\footnote{When the mean field is replaced by the expectation of the state of the genetic player, the resultant problem is called mean-field-type game.} In particular, it may be viewed as a special case of risk-neutral LQ mean-field teams introduced in~\cite{arabneydi2016new}, showcased in~\cite{Jalal2017linear,JalalCDC2015,JalalCDC2018,Jalal2019LCSS,JalalACC2018}, and extended to deep structured LQ teams in~\cite{Jalal2019risk}.
The interested reader is referred to~\cite[Section VI]{Jalal2019Automatica} for more details on similarities and differences between mean-field games, mean-field-type games and mean-field teams.
The rest of the paper is organized as follows. In Section~\ref{sec:problem}, the problem of LQ deep structured game is formulated. In Section~\ref{sec:main}, the global convergence of model-based and model-free policy gradient descent and natural policy gradient descent algorithms are presented. In Section~\ref{sec:numerical}, some numerical examples are provided to validate the theoretical results. The paper is concluded in Section~\ref{sec:conclusion}.
\section{Problem Formulation}\label{sec:problem}
Throughout the paper, $\mathbb{R}$, $\mathbb{R}_{>0}$ and $\mathbb{N}$ refer to the sets of real, positive real and natural numbers, respectively. Given any $ n \in \mathbb{N}$, $\mathbb{N}_n$, $x_{1:n}$ and $\mathbf{I}_{n \times n}$ denote the finite set $\{1,\ldots,n\}$, vector $(x_1,\ldots,x_n)$ and the $n\times n$ identity matrix, respectively. $\| \boldsymbol \cdot \|$ is the spectral norm of a matrix, $\NF{\boldsymbol \cdot}$ is the Frobenius norm of a matrix, $\TR(\boldsymbol \cdot)$ is the trace of a matrix, $\sigma_{\text{min}}(\boldsymbol \cdot)$ is the minimum singular value of a matrix, $\rho(\boldsymbol \cdot)$ is the spectral radius of a matrix, and $\DIAG(\Lambda_1, \Lambda_2)$ is the block diagonal matrix $[\Lambda_1\quad 0;0 \quad \Lambda_2]$. For vectors $x,y$ and $z$, $\VEC(x,y,z)=[x^\intercal, y^\intercal,z^\intercal]^\intercal$ is a column vector. The superscript $-i$ refers to all players except the $i$-th player. In addition, $\Poly(\boldsymbol \cdot)$ denotes polynomial function.
Consider a nonzero-sum stochastic dynamic game with $n \in \mathbb{N}$ players. Let $x^i_t \in \mathbb{R}^{d_x}$, $u^i_t \in \mathbb{R}^{d_u}$ and $w^i_t \in \mathbb{R}^{d_x}$ denote the state, action and local noise of player $i \in \mathbb{N}_n$ at time $t \in \mathbb{N}$, where $d_x,d_u \in \mathbb{N}$. Define the weighted averages:
\begin{equation}
\bar x_t:=\sum_{i=1}^n \alpha^i_n x^i_t, \quad \bar u_t:=\sum_{i=1}^n \alpha^i_n u^i_t,
\end{equation}
where $\alpha^i_n \in \mathbb{R}$ is the \emph{influence factor} (weight) of player $i$ among its peers. From~\cite{Jalal2019MFT,Jalal2019Automatica,Jalal2019risk}, we refer to the above linear regressions as \emph{deep state} and \emph{deep action} in the sequel. To ease the presentation, the weights are normalized as follows: $\sum_{i=1}^n \alpha^i_n=1$.
The initial states $\{x^1_1,\ldots,x^n_1 \}$ are random with finite covariance matrices. The evolution of the state of player $i \in \mathbb{N}_n$ at time $t \in \mathbb{N}$ is given by:
\begin{equation}\label{eq:dynamics_original}
x^i_{t+1}=Ax^i_t +Bu^i_t+ \bar A \bar x_t +\bar B \bar u_t +w^i_t,
\end{equation}
where $\{w^i_t\}_{t=1}^\infty$ is an i.i.d. zero-mean noise process with a finite covariance matrix. The primitive random variables $\{ \{x^i_1\}_{i=1}^n, \{w^i_1\}_{i=1}^n,\{w^i_2\}_{i=1}^n,\ldots \}$ are defined on a common probability space and are mutually independent across time. The above random variables can be non-Gaussian and correlated (not necessarily independent) across players. The cost of player $i \in \mathbb{N}_n$ at time $t \in \mathbb{N}$ is given by:
\begin{equation}\label{eq:cost_original}
\begin{split}
c_t^i=&(x_t^i)^{\intercal}Q x_t^i+2(x_t^i)^{\intercal}S^x\bar{x}_t+(\bar{x}_t)^{\intercal}\bar Q\bar{x}_t\\
&+(u_t^i)^{\intercal}R u_t^i+2(u_t^i)^{\intercal}S^u\bar{u}_t+(\bar{u}_t)^{\intercal} \bar R\bar{u}_t,
\end{split}
\end{equation}
where $Q, S^x, \bar{Q}, R, S^u$ and $\bar{R}$ are symmetric matrices with appropriate dimensions.
From~\cite{Jalal2019MFT,Jalal2019Automatica,Jalal2019risk}, an information structure called \emph{deep state sharing} (DSS) is considered wherein each player $i \in \mathbb{N}_n$ at any time $t \in \mathbb{N}$ observes its local state $x^i_t$ and the deep state~$\bar x_t$, i.e., $u^i_t=g^i_t(x^i_{1:t},\bar x_{1:t})$,
where $g^i_t$ is a measurable function adapted to the filtration of the underlying primitive random variables of $\{x^i_{1:t},\bar x_{1:t}\}$.
When the number of players is very large, one can use the \emph{no-sharing} (NS) information structure wherein each player observes only its local state. However, such a fully decentralized information structure comes at a price: one must predict the trajectory of the deep state in time (which introduces computational complexity in the time horizon in terms of storage and computation). For example, if the dynamics of the deep state (i.e., $A+\bar A$ and $B+\bar B$) are known (which is not applicable in model-free applications), the deep state can be predicted ahead of time by the strong law of large numbers when the primitive random variables are mutually independent. Alternatively, one can assume access to an external simulator for the dynamics of the deep state (which is basically the DSS structure). In this paper, we focus on the DSS information structure, wherein there is no loss of optimality in restricting attention to stationary strategies despite the fact that the deep state is not stationary.
The interested reader is referred to~\cite{Jalal2019Automatica} for the convergence analysis of NS (approximate) solution to the DSS solution, as $n \rightarrow \infty$.
Define $\mathbf g^i_n:=\{g^i_t\}_{t=1}^\infty$ and $\mathbf g_n:=\{ \mathbf g^1,\ldots,\mathbf{g}^n\}$. The admissible actions are square integrable such that $\Exp{\sum_{t=1}^\infty \gamma^{t-1}(u^i_t)^\intercal u^i_t} <\infty$. Given a discount factor $\gamma \in (0,1)$, the cost-to-go for any player $i \in \mathbb{N}_n$ is described by:
\begin{equation}\label{cost_function}
J^i_{n,\gamma} (\mathbf g^i_n,\mathbf g^{-i}_n)_{t_0}=(1-\gamma) \Exp{\sum_{t=t_0}^\infty \gamma^{t-1} c^i_t}, \quad t_0 \in \mathbb{N}.
\end{equation}
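For illustration, the following Python sketch (with arbitrary placeholder matrices, gains and horizon) simulates the dynamics~\eqref{eq:dynamics_original}, the per-step cost~\eqref{eq:cost_original} and a truncated version of the discounted cost~\eqref{cost_function} for $n$ players with homogeneous weights; it is a sanity-check sketch rather than part of the proposed algorithms.
\begin{verbatim}
# Minimal simulation of the deep-structured LQ dynamics and costs;
# all matrices, gains, dimensions and the horizon are placeholders.
import numpy as np

rng = np.random.default_rng(0)
n, dx, du, T, gamma = 50, 2, 1, 200, 0.9
A, Abar = 0.8*np.eye(dx), 0.1*np.ones((dx, dx))
B, Bbar = np.ones((dx, du)), 0.2*np.ones((dx, du))
Q, Sx, Qbar = np.eye(dx), 0.1*np.eye(dx), np.eye(dx)
R, Su, Rbar = np.eye(du), 0.1*np.eye(du), np.eye(du)
theta, theta_bar = 0.3*np.ones((du, dx)), 0.5*np.ones((du, dx))  # placeholder gains

x = rng.normal(size=(n, dx))
J = np.zeros(n)                                    # truncated discounted costs
for t in range(T):
    xbar = x.mean(axis=0)                          # deep state (alpha = 1/n)
    u = -(x @ theta.T) - (xbar @ (theta_bar - theta).T)   # linear strategy in (x^i, xbar)
    ubar = u.mean(axis=0)                          # deep action
    c = (np.einsum('id,de,ie->i', x, Q, x) + 2*np.einsum('id,de,e->i', x, Sx, xbar)
         + xbar @ Qbar @ xbar
         + np.einsum('id,de,ie->i', u, R, u) + 2*np.einsum('id,de,e->i', u, Su, ubar)
         + ubar @ Rbar @ ubar)
    J += (1 - gamma) * gamma**t * c                # time indexed from 0 here
    w = 0.1 * rng.normal(size=(n, dx))             # i.i.d. local noise
    x = x @ A.T + u @ B.T + xbar @ Abar.T + ubar @ Bbar.T + w
print(J.mean())
\end{verbatim}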
\begin{problem}\label{prob1}
Suppose that the weights are homogeneous, i.e. $\alpha^i_n=\frac{1}{n}$, $i \in \mathbb{N}_n$. When a sequential Nash strategy $\mathbf{g}^\ast_n $ exists, develop model-based and model-free gradient descent and natural policy gradient descent procedures under DSS information structure such that for any player $i \in \mathbb{N}_n$ at any stage of the game $t_0\in \mathbb{N}$, and any arbitrary strategy $\mathbf g^i$:
\begin{equation}
{J^i_{n,\gamma}(\mathbf g^{\ast,i}_n,\mathbf g^{\ast,-i}_n)}_{t_0} \leq {J^i_{n,\gamma}(\mathbf g^{ i},\mathbf g^{\ast,-i}_n)}_{t_0}.
\end{equation}
\end{problem}
\begin{remark}
\emph{It is to be noted that Problem~\ref{prob1} holds for arbitrary number of players $n$, where the solution depends on $n$. Since the infinite-population solution is easier for analysis and may be viewed as a special case, one can generalize the homogeneous weights $\alpha^i_n=\frac{1}{n}$ to heterogeneous weights $\alpha^i_n=\frac{\beta^i}{n}$, where $\beta^i \in [-\beta_{\text{max}},\beta_{\text{max}}]$, $\beta_{\text{max}} \in \mathbb{R}_{>0}$, $i \in \mathbb{N}_n$. The resultant solution is called \emph{sequential weighted mean-field equilibrium} (SWMFE) in \cite{Jalal2019Automatica}. The SWMFE constructs an approximate solution at any stage of the game $t_0 \in \mathbb{N}$ such that $J^i_{n,\gamma}(\mathbf g^{\ast,i}_\infty,\mathbf g^{\ast,-i}_{\infty})_{t_0} \leq {J^i_{n,\gamma}(\mathbf g^{ i},\mathbf g^{\ast,-i}_\infty)}_{t_0}+\varepsilon(n)$,
where $\lim_{n \rightarrow \infty}\varepsilon(n)=0$. For more details, see~\cite[Theorem 4]{Jalal2019Automatica}.}
\end{remark}
\subsection{Main challenges and contributions}
There are several challenges to solve Problem~\ref{prob1}. The first one is the \emph{curse of dimensionality}, where the computational complexity of the solution increases with the number of players. The second one is the \emph{imperfect information} structure, where players do not have perfect information about the states of other players. The third challenge is that the resultant optimization problem is \emph{non-convex} in the policy space, see a counterexample in~\cite{fazel2018global}. The forth one lies in the fact that policy optimization is \emph{not even locally convergent} in a game with continuous spaces, in general; see a counterexample in~\cite{mazumdar2019policy}. The main contribution of this paper is to present an analytical proof for the global convergence of model-based and model-free policy gradient algorithms. In contrast to the model-based solution in~\cite{Jalal2019Automatica} (whose number of unknowns increases quadratically with $d_x$), the number of unknown parameters in the proposed algorithms increases linearly with $d_x$ and $ d_u$. To the best of our knowledge, this is the first result on the global convergence of policy optimization in nonzero-sum LQ games.
\section{Main Results}\label{sec:main}
In this section, we first present a model-based algorithm introduced in~\cite{Jalal2019Automatica} that requires $2d_x \times 2d_x$ parameters to construct the solution. Then, we propose two model-based gradient algorithms and prove their global convergence to the above solution, where their planning space is the policy space (that requires $2d_u \times d_x$ parameters to identify the solution). Based on the proposed gradient methods, we develop two model-free (reinforcement learning) algorithms and establish their global convergence to the model-based solution.
From~\cite{Jalal2019Automatica}, we use a gauge transformation to define the following variables for any player $i \in \mathbb{N}_n$ at any time $t \in \mathbb{N}$:
$ \mathbf{x}_t^i:= \VEC(x_t^i-\bar{x}_t,\bar x_t)$, $ \mathbf{u}_t^i:= \VEC(u_t^i-\bar{u}_t,\bar u_t)$ and $ \mathbf{w}_t^i:= \VEC(w_t^i-\bar{w}_t,\bar w_t)$,
where $\bar{w}_t:=\frac{1}{n}\sum_{i=1}^{n} w_t^i$. In addition, we define the following matrices: $\mathbf A:=\DIAG(A, A+\bar A)$, $\mathbf B:=\DIAG(B, B+\bar B)$, and
\begin{equation}
\Compress
\mathbf {Q}:=\begin{bmatrix}
Q &Q+S^x\\
Q+S^x&Q+2S^x+\bar Q\\
\end{bmatrix},\quad
\mathbf {R}:=\begin{bmatrix}
R&R+S^u\\
R+S^u&R+2S^u+ \bar R\\
\end{bmatrix}.
\end{equation}
We now express the per-step cost of each player in~\eqref{eq:cost_original} as:
\begin{align}\label{perstep}
c_t^i&=(\mathbf x^i_{t})^{\intercal}\mathbf{Q}\mathbf x^i_{t}+(\mathbf u^i_{t})^{\intercal}\mathbf{R}\mathbf u^i_{t}.
\end{align}
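The following short Python check (with random symmetric matrices) verifies numerically that the per-step cost~\eqref{perstep} in the transformed coordinates coincides with the original cost~\eqref{eq:cost_original}; it is included only as an illustration of the gauge transformation.
\begin{verbatim}
# Numerical sanity check: the transformed per-step cost equals the original,
#   bx' bQ bx + bu' bR bu  with  bx = (x - xbar, xbar),  bu = (u - ubar, ubar).
import numpy as np

rng = np.random.default_rng(0)
dx, du = 3, 2
sym = lambda M: (M + M.T) / 2
Q, Sx, Qbar = (sym(rng.normal(size=(dx, dx))) for _ in range(3))
R, Su, Rbar = (sym(rng.normal(size=(du, du))) for _ in range(3))
bQ = np.block([[Q, Q + Sx], [Q + Sx, Q + 2*Sx + Qbar]])
bR = np.block([[R, R + Su], [R + Su, R + 2*Su + Rbar]])

x, xbar = rng.normal(size=dx), rng.normal(size=dx)
u, ubar = rng.normal(size=du), rng.normal(size=du)
c_orig = (x @ Q @ x + 2*x @ Sx @ xbar + xbar @ Qbar @ xbar
          + u @ R @ u + 2*u @ Su @ ubar + ubar @ Rbar @ ubar)
bx, bu = np.concatenate([x - xbar, xbar]), np.concatenate([u - ubar, ubar])
print(np.isclose(c_orig, bx @ bQ @ bx + bu @ bR @ bu))   # -> True
\end{verbatim}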
To formulate the solution, we present a non-standard algebraic Riccati equation, introduced in~\cite{Jalal2019Automatica}, as follows:
\begin{equation}\label{eq:riccati-bar-m}
\mathbf M(\boldsymbol \theta)=\mathbf Q + \boldsymbol \theta^\intercal \mathbf R \boldsymbol \theta+ \gamma (\mathbf A -\mathbf B \boldsymbol \theta)^\intercal \mathbf M(\boldsymbol \theta) (\mathbf A - \mathbf B \boldsymbol \theta),
\end{equation}
where $\boldsymbol \theta:=\DIAG(\theta(n),\bar \theta(n))$, $\theta(n):=(F_n)^{-1} K_n$, $\bar \theta(n):=(\bar F_n)^{-1} \bar K_n$, and matrices ${{} F_n}$, ${{}\bar F_n}$, ${{} K_n}$ and ${{}\bar K_n}$ are given by:
\begin{align}\label{eq:breve-f}
&{{} F_n}= (1-\frac{1}{n})\Big[R + \gamma B^\intercal {{}\mathbf M}^{ 1,1}(\boldsymbol \theta) B \Big] \nonumber \\
&\quad + \frac{1}{n}\Big[R + S^u +\gamma (B+\bar B)^\intercal {{}\mathbf M}^{1,2}(\boldsymbol \theta) B \Big], \nonumber \\
&{{}\bar F_n}= (1-\frac{1}{n})\left[R+ S^u +\gamma B^\intercal {{}\mathbf M}^{2,1}(\boldsymbol \theta) (B+\bar B) \right] \nonumber \\
&+ \frac{1}{n}\left[R + 2S^u + \bar R +\gamma (B+\bar B)^\intercal {{}\mathbf M}^{2,2}(\boldsymbol \theta) (B + \bar B) \right], \nonumber \\
&{{} K_n}= (1-\frac{1}{n})\gamma B^\intercal {{}\mathbf M}^{1,1}(\boldsymbol \theta) A + \frac{\gamma}{n} (B+ \bar B)^\intercal {{}\mathbf M}^{1,2}(\boldsymbol \theta) A, \nonumber \\
&\bar K_n\hspace{-.1cm}= \hspace{-.1cm}(1\hspace{-.1cm}-\hspace{-.1cm}\frac{1}{n})\gamma B^\intercal {{}\mathbf M}^{ 2,1}(\boldsymbol \theta) (A\hspace{-.1cm}+\hspace{-.1cm}\bar A)\hspace{-.1cm} + \hspace{-.1cm} \frac{\gamma}{n} (B\hspace{-.1cm}+ \hspace{-.1cm}\bar B)^\intercal {{}\mathbf M}^{2,2}(\boldsymbol \theta) (A\hspace{-.1cm}+\hspace{-.1cm}\bar A).
\end{align}
\begin{assumption}\label{ass:existence}
Suppose equations~\eqref{eq:riccati-bar-m} and~\eqref{eq:breve-f} admit a unique stable solution, which is also the limit of the finite-horizon solution. In addition, let ${{} F_n}$ and ${{}\bar F_n}$ be invertible matrices, and $(1-\frac{1}{n}){{} F_n} + \frac{1}{n}{{}\bar F_n}$ be a positive definite matrix.
\end{assumption}
We now provide two sufficient conditions under which Assumption~\ref{ass:existence} holds, i.e., a stationary solution exists. Let $G$ denote the mapping from $\mathbf M$ to $\boldsymbol \theta$ displayed in~\eqref{eq:breve-f} (where $\boldsymbol \theta=G(\mathbf M)$), and $L$ denote the mapping from $\boldsymbol \theta$ to $\mathbf M$ expressed in~\eqref{eq:riccati-bar-m} (where $\mathbf M=L(\boldsymbol \theta)$). Thus, $\mathbf M=L(G(\mathbf M))$ is a fixed-point equation that can be solved by fixed-point methods.
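For illustration, a minimal numerical sketch of such a fixed-point iteration, alternating the maps $G$ and $L$, is given below; the routine and the use of \texttt{numpy}/\texttt{scipy} are our own choices (not part of~\cite{Jalal2019Automatica}), and convergence is only expected under conditions such as Assumption~\ref{ass:contractive} below.
\begin{verbatim}
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

def riccati_fixed_point(A, Ab, B, Bb, Q, Sx, Qb, R, Su, Rb,
                        n, gamma, iters=500):
    # Block matrices of the transformed dynamics and costs.
    dx, du = A.shape[0], B.shape[1]
    zx, zu = np.zeros((dx, dx)), np.zeros((dx, du))
    bA = np.block([[A, zx], [zx, A + Ab]])
    bB = np.block([[B, zu], [zu, B + Bb]])
    bQ = np.block([[Q, Q + Sx], [Q + Sx, Q + 2*Sx + Qb]])
    bR = np.block([[R, R + Su], [R + Su, R + 2*Su + Rb]])
    M = bQ.copy()
    for _ in range(iters):
        M11, M12 = M[:dx, :dx], M[:dx, dx:]
        M21, M22 = M[dx:, :dx], M[dx:, dx:]
        # Map G: from M to the gains, as in (eq:breve-f).
        F = (1 - 1/n)*(R + gamma*B.T @ M11 @ B) \
            + (1/n)*(R + Su + gamma*(B + Bb).T @ M12 @ B)
        Fb = (1 - 1/n)*(R + Su + gamma*B.T @ M21 @ (B + Bb)) \
            + (1/n)*(R + 2*Su + Rb + gamma*(B + Bb).T @ M22 @ (B + Bb))
        K = (1 - 1/n)*gamma*B.T @ M11 @ A \
            + (gamma/n)*(B + Bb).T @ M12 @ A
        Kb = (1 - 1/n)*gamma*B.T @ M21 @ (A + Ab) \
            + (gamma/n)*(B + Bb).T @ M22 @ (A + Ab)
        theta, theta_b = np.linalg.solve(F, K), np.linalg.solve(Fb, Kb)
        zt = np.zeros((du, dx))
        bT = np.block([[theta, zt], [zt, theta_b]])
        # Map L: the non-standard Riccati equation (eq:riccati-bar-m)
        # for fixed gains, solved as a discrete Lyapunov equation.
        Acl = bA - bB @ bT
        M = solve_discrete_lyapunov(np.sqrt(gamma)*Acl.T,
                                    bQ + bT.T @ bR @ bT)
    return M, theta, theta_b
\end{verbatim}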
\begin{assumption}\label{ass:contractive}
Let the mapping $L(G(\boldsymbol \cdot))$ be a contraction, implying that equations~\eqref{eq:riccati-bar-m} and~\eqref{eq:breve-f} admit a unique fixed-point solution. In addition, let ${{} F_n}$ and ${{}\bar F_n}$ be invertible matrices, and $(1-\frac{1}{n}){{} F_n} + \frac{1}{n}{{}\bar F_n}$ be a positive definite matrix.
\end{assumption}
\begin{assumption}[Infinite-population decoupled Riccati equations]\label{ass:decoupled}
\emph{Let $Q$ and $Q+S^x$ be positive semi-definite, $R$ and $R+S^u$ be positive definite, and $\bar A$ and $\bar B$ be zero. Suppose $(A,B)$ is stabilizable, and $(A,Q^{1/2})$ and $(A,(Q+S^x)^{1/2})$ are detectable. When $n$ is asymptotically large, the non-standard Riccati equation~\eqref{eq:riccati-bar-m} decomposes into two decoupled standard Riccati equations; see~\cite[Proposition 2]{Jalal2019Automatica}.}
\end{assumption}
\begin{theorem}[Model-based solution using non-standard Riccati equation~\cite{Jalal2019Automatica}]\label{thm:model_known}
Let Assumption~\ref{ass:existence} hold. There exists a stationary subgame perfect Nash equilibrium such that for any player $ i \in \mathbb{N}_n$ at any time $t\in \mathbb{N}_T$,
\begin{equation}
u^{\ast,i}_t=-\theta^\ast(n) x^i_t - ( \bar \theta^\ast(n) - \theta^\ast(n))\bar x_t,
\end{equation}
where the gains are obtained from \eqref{eq:breve-f}.
In addition, the optimal cost of player~$i \in \mathbb{N}_n$ from the initial time $t_0=1$ is given by:
$ J^{i}_{n,\gamma}(\boldsymbol \theta^\ast)= (1-\gamma) \TR( \mathbf M(\boldsymbol \theta^\ast) \boldsymbol \Sigma^i_x) +\gamma \TR( \mathbf M(\boldsymbol \theta^\ast) \boldsymbol \Sigma^i_w)$,
where $ \boldsymbol \Sigma^i_x:=\Exp{(\VEC(\Delta x^i_1), \bar x_1)(\VEC(\Delta x^i_1), \bar x_1)^\intercal}$ and $ \boldsymbol \Sigma^i_w:=\Exp{(\VEC(\Delta w^i_t), \bar w_t)(\VEC(\Delta w^i_t), \bar w_t)^\intercal }$.
\end{theorem}
\subsection{Model-based solution using policy optimization}
From Theorem~\ref{thm:model_known}, there is no loss of optimality in restricting attention to linear identical stationary strategies of the form $\boldsymbol \theta=\DIAG(\theta,\bar \theta)$. Therefore, we select one arbitrary player~$i$ as a learner and other players as imitators (that are passive during the learning process).
More precisely, at each time instant, player $i$ uses a gradient algorithm to update its strategy, whereas the other players employ the updated strategy to determine their next actions. In this article, we do not elaborate on the process of selecting the learner; however, in order to have a fair implementation, the learner may be chosen randomly at each iteration.\footnote{For the special case of infinite population, it is also possible that all players become learners, i.e., they simultaneously learn the strategies as long as their exploration noises are i.i.d. In such a case, the infinite-population deep state reduces to the weighted mean-field and remains unchanged.
} For simplicity of presentation, we omit the superscript $i$ and the subscript of the cost function. Hence, the strategy of the learner can be described by:
$\mathbf u_t=-\boldsymbol \theta \mathbf x_t, \mathbf u_t \in \mathbb{R}^{2d_u}, \mathbf x_t \in \mathbb{R}^{2 d_x}, t \in \mathbb{N}$.
\begin{lemma}\label{lemma:gradient_formulation} The following holds at the initial time $t_0=1$:
\begin{equation}\label{eq:gradient_1}
[
\nabla_{ \theta} J(\boldsymbol \theta),
\nabla_{\bar \theta} J(\boldsymbol \theta)]
= 2 \mathbf P_n \mathbf E_{\boldsymbol \theta} \boldsymbol \Sigma_{\boldsymbol \theta},
\end{equation}
where
\begin{equation}\label{eq:gradient_matrices}
\begin{cases}
\mathbf P_n:= [
(1-\frac{1}{n}) \mathbf I_{d_u \times d_u},
\frac{1}{n} \mathbf I_{d_u \times d_u}],\\
\mathbf E_{\boldsymbol \theta}:= (\mathbf R + \gamma \mathbf B^\intercal \mathbf M(\boldsymbol \theta) \mathbf B)\boldsymbol \theta
- \gamma \mathbf B^\intercal \mathbf M(\boldsymbol \theta) \mathbf A,\\
\boldsymbol \Sigma_{\boldsymbol \theta}:= \Exp{(1-\gamma)\sum_{t=1}^\infty \gamma^{t-1}\mathbf x_t \mathbf x_t^\intercal}.
\end{cases}
\end{equation}
\end{lemma}
\begin{proof}
The proof is presented in Appendix~\ref{sec:proof_lemma:gradient_formulation}.
\end{proof}
In this paper, we consider two gradient-based methods.
\begin{itemize}
\item \textbf{Policy gradient descent}:
\begin{equation}\label{eq:GD}
\boldsymbol \theta_{k+1}=\boldsymbol \theta_k - \eta \DIAG(
\nabla_{ \theta} J( \boldsymbol \theta_k),\nabla_{\bar \theta} J(\boldsymbol \theta_k)).
\end{equation}
\item \textbf{Natural policy gradient descent}:
\begin{equation}\label{eq:NPGD}
\Compress
\boldsymbol \theta_{k+1}=\boldsymbol \theta_k - \eta \DIAG(
\nabla_{ \theta} J( \boldsymbol \theta_k),\nabla_{\bar \theta} J(\boldsymbol \theta_k)) \boldsymbol \Sigma_{\boldsymbol \theta}^{-1}.
\end{equation}
\end{itemize}
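For illustration, the two updates above can be sketched as follows; this is a minimal sketch with hypothetical helper names, where $\mathbf M(\boldsymbol \theta_k)$ and $\boldsymbol \Sigma_{\boldsymbol \theta_k}$ are assumed to be available (e.g., from a Lyapunov solve for the closed loop and from the model or from rollouts), and the block-wise natural step uses the fact that $\boldsymbol \Sigma_{\boldsymbol \theta}$ is block-diagonal for i.i.d. initial states (cf. Assumption~\ref{ass:positive_sigma}).
\begin{verbatim}
import numpy as np

def model_based_gradient(theta, theta_b, M, Sigma, bA, bB, bR, gamma, n):
    # [grad_theta J, grad_thetabar J] = 2 P_n E_theta Sigma_theta,
    # cf. (eq:gradient_1)-(eq:gradient_matrices).
    du, dx = theta.shape
    zt = np.zeros((du, dx))
    bT = np.block([[theta, zt], [zt, theta_b]])
    E = (bR + gamma*bB.T @ M @ bB) @ bT - gamma*bB.T @ M @ bA  # 2du x 2dx
    P = np.hstack([(1 - 1/n)*np.eye(du), (1/n)*np.eye(du)])    # du x 2du
    g = 2*P @ E @ Sigma                                        # du x 2dx
    return g[:, :dx], g[:, dx:]

def gradient_step(theta, theta_b, g, g_b, eta, Sigma=None, natural=False):
    # One iteration of (eq:GD), or of (eq:NPGD) when natural=True;
    # with a block-diagonal Sigma the DIAG(theta, theta_bar) structure
    # of the policy is preserved.
    dx = theta.shape[1]
    if natural:
        g = g @ np.linalg.inv(Sigma[:dx, :dx])
        g_b = g_b @ np.linalg.inv(Sigma[dx:, dx:])
    return theta - eta*g, theta_b - eta*g_b
\end{verbatim}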
To prove our convergence results, we impose extra standard assumptions, described below.
\begin{assumption}\label{stable_search}
The initial policy is stable. A policy $\boldsymbol \theta$ is said to be stable if $\rho(\mathbf A- \mathbf B \boldsymbol \theta) <1 $.
\end{assumption}
\begin{assumption}\label{ass:positive_sigma}
Given the learner, $\Exp{\mathbf x_1 (\mathbf x_1)^\intercal}$ is positive definite. For the special case of i.i.d. initial states, $\Exp{\mathbf x^i_1 (\mathbf x^i_1)^\intercal} = \DIAG((1-\frac{1}{n}) \text{cov}(x_1), \frac{1}{n}\text{cov}(x_1)+\Exp{x_1} \Exp{x_1}^\intercal)$ is positive definite if $\text{cov}(x^i_1)=:\text{cov}(x_1)$ and $\Exp{x^i_1}\Exp{x^i_1}^\intercal=: \Exp{x_1}\Exp{x_1}^\intercal$, $ i \in \mathbb{N}_n$, are positive definite.
\end{assumption}
\begin{assumption}\label{ass:positive_Q}
For the finite-population model, $\mathbf Q$ and $\mathbf R$ are positive definite matrices. For the infinite-population case satisfying Assumption~\ref{ass:decoupled}, $Q$ and $Q+S^x$ are positive definite.
\end{assumption}
Assumptions~\ref{stable_search}--\ref{ass:positive_Q} are standard conditions in the literature of LQ reinforcement learning~\cite{fazel2018global,malik2020derivative}, which ensure that for any stable $\boldsymbol \theta$, $J(\boldsymbol \theta)$ is properly bounded and $\boldsymbol \Sigma_{\boldsymbol \theta} \succcurlyeq \Exp{\mathbf x_1 (\mathbf x_1)^\intercal}$ is positive definite. We now show that the best-response optimization at the learner satisfies the Polyak-Lojasiewicz (PL) condition~\cite{Polyak1964, Lojasiewicz1963}, which is a relaxation of the notion of strong convexity. Let $\mu:=\sigma_{min}(\Exp{\mathbf x_1 \mathbf x_1^\intercal})$.
\begin{lemma} [PL condition] \label{lemma:GD}
Let Assumptions~\ref{ass:existence},~\ref{stable_search},~\ref{ass:positive_sigma} and~\ref{ass:positive_Q} hold. Let also $\boldsymbol \theta^*$ be the Nash policy in Theorem~\ref{thm:model_known}. There exists a positive constant $L_1 (\boldsymbol \theta^\ast)$ such that
\begin{equation}
J(\boldsymbol \theta)- J(\boldsymbol \theta^*) \leq L_1 (\boldsymbol \theta^\ast)
\NF{[\nabla_{\theta} J(\boldsymbol \theta), \nabla_{ \bar \theta} J( \boldsymbol \theta)]}^2,
\end{equation}
where $L_1(\boldsymbol \theta^\ast)= \frac{n^2\norm{\boldsymbol \Sigma_{\boldsymbol \theta^*}}}{4\mu^2\sigma_{\text{min}}(\mathbf{R})}$. For the special case of infinite population (i.e. $n=\infty$) with i.i.d. initial states under Assumptions~\ref{ass:decoupled},~\ref{stable_search},~\ref{ass:positive_sigma} and~\ref{ass:positive_Q}, one has $L_1(\boldsymbol \theta^\ast)= \frac{\norm{\boldsymbol \Sigma^{1,1}_{\boldsymbol \theta^*}}}{4\sigma_{min}(\text{cov}(x_1))^2\sigma_{\text{min}}(R)} + \frac{\norm{\boldsymbol \Sigma^{2,2}_{\boldsymbol \theta^*}}}{4\sigma_{min}(\mathbb{E}[x_1]\mathbb{E}[x_1]^\intercal)^2\sigma_{\text{min}}(R+S^u)}$.
\end{lemma}
\begin{proof}
The proof is presented in Appendix~\ref{sec:proof_lemma:GD}.
\end{proof}
In the following lemmas, we show that the cost function and its gradient are locally Lipschitz functions.
\begin{lemma}[Locally Lipschitz cost function]\label{CLip}
For any $\boldsymbol \theta '$ satisfying the inequality $\NF{\boldsymbol \theta' - \boldsymbol \theta} < \varepsilon(\boldsymbol \theta)$, there exists a positive constant $L_2(\boldsymbol \theta)$ such that
$ |J(\boldsymbol \theta ')- J(\boldsymbol \theta )|\leq L_2(\boldsymbol \theta )\NF{\boldsymbol \theta'-\boldsymbol \theta}$,
where the explicit expressions of $\varepsilon(\boldsymbol \theta)$ and $L_2(\boldsymbol \theta)$ can be obtained in a similar manner as~\cite[Lemma 15]{malik2020derivative}.
\end{lemma}
\begin{proof}
The proof is omitted due to space limitation.
\end{proof}
\begin{lemma}[Locally Lipschitz gradient] \label{lemma:LLG}
For any $\boldsymbol \theta '$ satisfying the inequality $\NF{\boldsymbol \theta' - \boldsymbol \theta} < \varepsilon(\boldsymbol \theta)$, there exists a positive constant $L_3(\boldsymbol \theta )$ such that
\begin{equation}
\Compress
\NF{[\nabla_{\theta} J(\boldsymbol \theta'), \nabla_{\bar \theta} J(\boldsymbol \theta')] - [\nabla_{\theta} J(\boldsymbol \theta), \nabla_{\bar \theta} J(\boldsymbol \theta)]}\leq L_3(\boldsymbol \theta)\NF{\boldsymbol \theta'-\boldsymbol \theta},
\end{equation}
where the explicit expressions of $\varepsilon(\boldsymbol \theta)$ and $L_3(\boldsymbol \theta)$ can be obtained in a similar manner as~\cite[Lemma 16]{malik2020derivative}.
\end{lemma}
\begin{proof}
The proof is omitted due to space limitation.
\end{proof}
\begin{theorem}[Global convergence via model-based gradient]\label{MDTHm}
Let Assumptions~\ref{ass:existence},~\ref{stable_search},~\ref{ass:positive_sigma} and~\ref{ass:positive_Q} hold. For a sufficiently small fixed step size $\eta$ chosen as
$\eta=\Poly\big(\frac{\mu\sigma_{\text{min}}(\mathbf{Q})}{J (\boldsymbol \theta_1)},\frac{1}{\sqrt{\gamma}\rVert\mathbf{A}\lVert},\frac{1}{\sqrt{\gamma}\rVert\mathbf{B}\lVert},\frac{1}{\rVert\mathbf{R}\lVert},\sigma_{\text{min}}(\mathbf{R})\big)$,
and for a sufficiently large number of iterations $K$ such that
$K \geq\frac{\rVert\mathbf{\Sigma_{\boldsymbol \theta^*}}\lVert}{\mu}\log\frac{J (\boldsymbol \theta_1)-J (\boldsymbol \theta^*)}{\varepsilon} \Poly\big(\frac{J (\boldsymbol \theta_1)}{\mu\sigma_{\text{min}}(\mathbf{Q})},{\sqrt{\gamma}\rVert\mathbf{A}\lVert},{\sqrt{\gamma}\rVert\mathbf{B}\lVert},{\rVert\mathbf{R}\lVert},\frac{1}{\sigma_{\text{min}}(\mathbf{R})}\big)$,
the gradient descent algorithm~\eqref{eq:GD} leads to the following bound:
$
J (\boldsymbol \theta_K)-J (\boldsymbol \theta^*)\leq\varepsilon
$.
In particular, for a fixed step size $
\eta=\frac{1}{\|\mathbf{P}_n^\intercal \mathbf{P}_n\|(\rVert\mathbf{R}\lVert+\frac{\gamma\rVert\mathbf{B}\lVert^2J (\boldsymbol \theta_1)}{\mu})}$
and for a sufficiently large number of iterations $K$, i.e.,
$K\geq\frac{\rVert\mathbf{\Sigma_{\boldsymbol \theta^*}}\lVert \|\mathbf P_n^\intercal \mathbf P_n\| }{\mu}\big(\frac{\rVert\mathbf{R}\lVert}{\sigma_{\text{min}}(\mathbf{R})}+\frac{\gamma\rVert\mathbf{B}\lVert^2J (\boldsymbol \theta_1)}{\mu\sigma_{\text{min}}(\mathbf{R})}\big)\log\frac{J (\boldsymbol \theta_1)-J (\boldsymbol \theta^*)}{\varepsilon}$,
the natural policy gradient descent algorithm~\eqref{eq:NPGD} enjoys the bound: $
J (\boldsymbol \theta_K)-J (\boldsymbol \theta^*)\leq\varepsilon$.
\end{theorem}
\begin{proof}
Following the proof technique in~\cite[Theorem 7]{fazel2018global}, we choose a sufficiently small step size $\eta$ such that the value of the cost decreases at each iteration. More precisely, for the natural policy gradient descent at iteration $K$,
$J ({\boldsymbol \theta}_{K+1})-J (\boldsymbol \theta^*)\leq
(1- \frac{\mu \sigma_{\text{min}}(\mathbf{R})}{\|\mathbf{P}_n^\intercal \mathbf{P}_n\|(\rVert\mathbf{R}\lVert+\frac{\gamma\rVert\mathbf{B}\lVert^2J(\boldsymbol{ \theta_1})}{\mu})\rVert \mathbf{\Sigma_{\boldsymbol \theta^*}}\lVert})(J ({\boldsymbol \theta}_K)-J (\boldsymbol \theta^*)) = (1-\eta \frac{\mu \sigma_{\text{min}}(\mathbf{R})}{\rVert \mathbf{\Sigma_{\boldsymbol \theta^*}}\lVert})(J ({\boldsymbol \theta}_K)-J (\boldsymbol \theta^*))$. The above recursion is contractive for the specified~$\eta$.
\end{proof}
\subsection{Model-free solution using policy optimization}
We now wish to develop a model-free (reinforcement learning) algorithm.
\begin{lemma}[Finite-horizon approximation]\label{lemma:rollout}
For any $\boldsymbol \theta$ with finite $J(\boldsymbol \theta)$, define ${\tilde J_T}(\boldsymbol \theta):=(1-\gamma)\mathbb{E}[\sum_{t=1}^{T} \gamma^{t-1} c_t]$ and $\tilde{\boldsymbol \Sigma}_{\boldsymbol \theta}=(1-\gamma)\mathbb{E}[\sum_{t=1}^{T} \gamma^{t-1} \mathbf x_t \mathbf x_t^\intercal]$. Let $\varepsilon(T):= \frac{d_x(J (\boldsymbol \theta))^2}{(1-\gamma)T\mu\sigma_{\text{min}}^2(\mathbf{Q})}
$ and
$
\bar \varepsilon(T):=\varepsilon(T) (\rVert\mathbf{Q}\lVert+\rVert\mathbf{R}\lVert\rVert\mathbf{\boldsymbol{ \theta}}\lVert^2)
$,
then $\norm{\tilde{\boldsymbol \Sigma}_{\boldsymbol \theta}-\boldsymbol \Sigma_{\boldsymbol \theta}} \leq \varepsilon(T) $ and $|{\tilde J _T}(\boldsymbol \theta)-J (\boldsymbol \theta)|\leq\bar \varepsilon(T)$.
\end{lemma}
\begin{proof}
The proof is omitted due to space limitation.
\end{proof}
Let $\mathbb{S}_r$ denote the uniform distribution over points of norm $r>0$ (i.e., over the surface of a sphere of radius $r$). In addition, let $\mathbb{B}_r$ denote the uniform distribution over points whose norm is at most $r$ (i.e., over the ball of radius $r$). For a matrix $\tilde{\boldsymbol \theta}=\DIAG(\tilde \theta, \tilde{\bar \theta})$, these distributions are defined with respect to the Frobenius norm. Hence,
$J_r (\boldsymbol \theta)=\mathbb{E}_{\tilde{\boldsymbol \theta} \sim \mathbb{B}_r}[J (\boldsymbol \theta+\tilde{\boldsymbol \theta})]$.
Since the expectation can be expressed as an integral function, one can use Stokes' formula to compute the gradient of $J_r (\boldsymbol \theta )$ with only query access to the function values.
\begin{lemma}[Zeroth-order optimization]\label{lemma:smooth}
For a smoothing factor $r >0$,
$
[\nabla_{ \theta} J ( \boldsymbol \theta),
\nabla_{\bar \theta} J (\boldsymbol \theta)
]
= \frac{2d_xd_u}{r^2} \mathbb{E}_{\tilde{\boldsymbol \theta} \sim \mathbb{S}_r }[J (\boldsymbol \theta+\tilde{\boldsymbol \theta})[\tilde \theta, \tilde{\bar \theta}]]$.
\end{lemma}
\begin{proof}
The proof follows directly from the zeroth-order optimization approach~\cite[Lemma 1]{flaxman2004online}.
\end{proof}
\begin{lemma}\label{lemma:ber}
Let $\tilde {\boldsymbol \theta}_1, \ldots, \tilde {\boldsymbol \theta}_L$, $L \in \mathbb{N}$, be i.i.d. samples drawn uniformly from $\mathbb{S}_r$. There exists $\varepsilon(L):= \Poly(1/L) >0$, such that $[
\tilde \nabla^L_{ \theta} J ( \boldsymbol \theta),
\tilde \nabla^L_{\bar \theta} J (\boldsymbol \theta)
]
= \frac{2d_xd_u}{r^2 L} \sum_{l=1}^{L}J (\boldsymbol \theta+\tilde{\boldsymbol \theta}_l)[\tilde \theta_l, \tilde{\bar \theta}_l]$ is $\varepsilon(L)$ close to $[\nabla_{ \theta} J ( \boldsymbol \theta),\nabla_{\bar \theta} J (\boldsymbol \theta)]$ in the Frobenius norm with a probability greater than $1-({\frac{2d_xd_u}{\varepsilon(L)}})^{-2d_xd_u}$.
From Lemma~\ref{lemma:rollout}, there exists $\varepsilon(L,T):=\Poly(1/L, 1/T)>0$ such that
$[
\tilde \nabla^{L,T}_{ \theta} J ( \boldsymbol \theta),
\tilde \nabla^{L,T}_{\bar \theta} J (\boldsymbol \theta)
]
= \frac{2d_xd_u(1-\gamma)}{r^2 L} \sum_{l=1}^{L}[\sum_{t=1}^{T}\gamma^{t-1}(c_t)][\tilde \theta_l, \tilde{\bar \theta}_l]$
is $\varepsilon(L,T)$ close to $[\nabla_{ \theta} J ( \boldsymbol \theta),\nabla_{\bar \theta} J (\boldsymbol \theta)]$ with a probability greater than $1-({\frac{2d_xd_u}{\varepsilon(L,T)}})^{-2d_xd_u}$ in the Frobenius norm.
\end{lemma}
\begin{proof}
The proof is omitted due to space limitation.
\end{proof}
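For illustration, the estimator above can be sketched as follows, where \texttt{rollout\_cost} is a hypothetical routine (not specified here) that returns the $(1-\gamma)$-weighted, $T$-step discounted cost of a single trajectory under the perturbed policy, in the sense of Lemma~\ref{lemma:rollout}.
\begin{verbatim}
import numpy as np

def empirical_gradient(theta, theta_b, rollout_cost, r, L, rng):
    # Zeroth-order estimate of [grad_theta J, grad_thetabar J],
    # cf. Lemma (lemma:ber).
    du, dx = theta.shape
    g = np.zeros((du, 2*dx))
    for _ in range(L):
        # Uniform sample on the Frobenius sphere of radius r in the
        # DIAG(theta_tilde, theta_bar_tilde) parameter space.
        v = rng.standard_normal(2*du*dx)
        v *= r/np.linalg.norm(v)
        t_til = v[:du*dx].reshape(du, dx)
        tb_til = v[du*dx:].reshape(du, dx)
        J = rollout_cost(theta + t_til, theta_b + tb_til)
        g += J*np.hstack([t_til, tb_til])
    return (2*dx*du)/(r**2*L)*g
\end{verbatim}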
\begin{theorem}[Global convergence via model-free gradient]\label{thm:RL}
Let Assumptions~\ref{ass:existence},~\ref{stable_search},~\ref{ass:positive_sigma} and~\ref{ass:positive_Q} hold. For a sufficiently large horizon $T$ and number of samples $L$, model-free gradient descent and natural policy gradient descent with the empirical gradient in Lemma~\ref{lemma:ber} and covariance matrix in Lemma~\ref{lemma:rollout} converge to the model-based solutions in Theorem~\ref{MDTHm}. In particular, the gradient descent algorithm converges with a probability greater than $1-({\frac{2d_xd_u}{\varepsilon(L,T)}})^{-2d_xd_u}$, where $\varepsilon(L,T)=\Poly(1/L, 1/T)$.
\end{theorem}
\begin{proof}
From~\cite[Theorem 31]{fazel2018global} and Theorem~\ref{MDTHm}, one has the following inequality at iteration $K\in \mathbb{N}$ for a sufficiently small step size $\eta \leq \eta_{max}$,
$J (\boldsymbol \theta_{K+1})-J (\boldsymbol \theta^*)\leq (1-\eta\eta_{max}^{-1})(J (\boldsymbol \theta_K)-J (\boldsymbol \theta^*))$.
At iteration $K$, denote by $\tilde{\nabla}_K$ the empirical gradient and by $\hat{\boldsymbol{\theta}}_{K+1}=\boldsymbol{\theta}_K-\eta \tilde{\nabla}_K$ the update with the empirical gradient. From Lemma~\ref{CLip}, $|J (\hat{\boldsymbol \theta}_{K+1})-J (\boldsymbol \theta_{K+1})| \leq \frac{1}{2}\eta \eta_{max}^{-1}\varepsilon(L,T)$, when $\rVert\hat{\boldsymbol \theta}_{K+1}-\boldsymbol{ \theta}_{K+1} \lVert\leq \frac{1}{2}\eta \eta_{max}^{-1}\varepsilon(L,T) (1/L_2(\boldsymbol{ \theta}_{K+1}))$, upon noting that $\hat{\boldsymbol{ \theta}}_{K+1}-\boldsymbol{ \theta}_{K+1}=\eta(\nabla_K-\tilde{\nabla}_K)$ and
$\rVert \nabla_{K}-\tilde{\nabla}_{K}\lVert\leq \frac{1}{2} \eta_{max}^{-1}\varepsilon(L,T) (1/L_2(\boldsymbol{ \theta}_{K+1}))$.
According to the Bernstein inequality, the above inequality holds with a probability greater than $1-({\frac{2d_xd_u}{\varepsilon(L,T)}})^{-2d_xd_u}$. Therefore, from Lemmas~\ref{lemma:LLG} and~\ref{lemma:ber}, the distance between the empirical gradient and the exact one monotonically decreases as the number of samples and rollouts increases, provided that the smoothing factor $r$ is sufficiently small. Consequently, one arrives at
$J (\hat{\boldsymbol \theta}_{K+1})-J (\boldsymbol \theta^*)\leq(1-\frac{1}{2}\eta \eta_{max}^{-1})(J (\boldsymbol \theta_K)-J (\boldsymbol \theta^*))$, when $ J (\boldsymbol \theta_K)-J (\boldsymbol \theta^*) \leq \varepsilon(L,T)$. This recursion is contractive; i.e., the rest of the proof will be similar to that of Theorem~\ref{MDTHm}.
\end{proof}
\section{Simulations}\label{sec:numerical}
In this section, simulations are conducted to demonstrate the global convergence of the proposed gradient methods. To compute the Nash policy, plotted in dashed lines in the figures, we use the solution of equation~\eqref{eq:riccati-bar-m}.
\textbf{Example 1.} Consider a dynamic game with the following parameters: $\eta=0.1$, $n=100$, $T=100$, $L=3$, $A=0.7, B=0.4, \bar{A}=0, \bar{B}=0, Q=1, R=1,S^x=4, S^u=0, \bar{Q}=0$, $\bar{R}=0$, $\Sigma_x=1$ and $\Sigma_w=0.4$. It is observed in Figure~\ref{fig:PN} that natural policy gradient descent reaches the Nash strategy faster than gradient descent.
\begin{figure}[t!]
\hspace{0cm}
\scalebox{1}{
\includegraphics[ trim={0cm 13.6cm 0 7cm},clip,width=\linewidth]{PG_NPG2.pdf}}
\caption{Convergence of the model-based gradient descent and natural policy gradient descent algorithms in Example 1.}\label{fig:PN}
\end{figure}
\textbf{Example 2.} Let the system parameters be $\eta=0.04$, $n=10$, $T=10$, $r=0.09$, $L=1500$, $A=1, B=0.5, \bar{A}=0, \bar{B}=0, Q=1, R=1,S^x=2, S^u=0, \bar{Q}=1$, $\bar{R}=0$, $\Sigma_x=0.05$ and $\Sigma_w=0.01$. The model-free policy gradient algorithm was run on a 2.7 GHz Intel Core i5 processor for $10$ random seeds. After $6000$ iterations, which took roughly $10$ hours, both $\theta$ and $\bar{ \theta}$ reached their optimal values as depicted in Figure~\ref{fig:model_free}.
\begin{figure}[t!]
\centering
\includegraphics[trim={0cm 8.5cm 0 8.7cm},clip,width=\linewidth]{model_free_correct}
\caption{Convergence of the proposed model-free algorithm in Example 2.}\label{fig:model_free}
\end{figure}
\begin{figure}[t!]
\centering
\includegraphics[ trim={0cm 9.3cm 0 9.4cm},clip,width=\linewidth]{theta_k1.pdf}
\caption{The effect of the number of players on the policy~in Example~3.}\label{fig:bar}
\end{figure}
\textbf{Example 3.} In this example, let the system parameters be $\eta=0.1$, $T=100$, $L=3$, $A=0.8, B=0.2, \bar{A}=0, \bar{B}=0, Q=1, R=1,S^x=2, S^u=0, \bar{Q}=4$, $\bar{R}=0$, $\Sigma_x=1$ and $\Sigma_w=0.1$. To investigate the effect of the number of players, we considered five different values for $n \in \{ 2, 5, 10,20,100\}$. It is shown in Figure~\ref{fig:bar} that the policies converge to a limit as the number of players increases, which is known as the mean-field limit.
\section{Conclusions}\label{sec:conclusion}
In this paper, we investigated model-based and model-free gradient descent and natural policy gradient descent algorithms for LQ deep structured games with homogeneous weights. It was shown theoretically, and verified by simulations, that the gradient-based methods enjoy global convergence to the sequential Nash solution.
One of the main features of the proposed solutions is that their planning space is independent of the number of players.
The obtained results naturally extend to asymptotically vanishing weights and other variants of policy gradient algorithms such as REINFORCE and actor-critic methods.
\bibliographystyle{IEEEtran}
\section{Introduction}
CMB anisotropies have been used widely to put constraints on
cosmological parameters (CP hereafter) during the last ten
years. Until recently, the inhomogeneous set of observations and
systematic effects have led to limited constraints on some
parameters (or combinations of parameters). Most of these limitations
seem to disappear with the new WMAP data which encompass a large sky
coverage (full sky) and large scale range (from arc minute to 180
degrees, or in Fourier modes, from $\ell=2$ to
$\ell=800$). Nevertheless, some degeneracies are intrinsic to CMB
physics and can not be broken even with a perfect experiment (when
considering temperature anisotropies only). I focus in this work on
the degeneracy between initial conditions and late time cosmology and
in particular between the shape of the initial power spectrum and the
matter content of the universe.
\section{Shape of the initial power spectrum}
\subsection{Power law spectrum}
The shape of the initial power spectrum (IPS hereafter) is not known
a priori and is often assumed to be a power law characterised by an
amplitude $A$ and an index $n$ ($P \propto A k^n$). The case $n=1$ is
referred to as the Harrison--Zel'dovich spectrum. If such an assumption is
made (in addition to flatness, trivial topology of the universe
and uniqueness of adiabatic initial modes), one can retrieve with good
accuracy most of the cosmological parameters (see Spergel et
al. 2003 and Fig.~1 (right)).
\subsection{Spectrum by intervals}
One may also consider that a unique amplitude for all scales is too
simple and parametrize the IPS by different amplitudes over intervals
in inverse scale ($k$ in $h\,\rm Mpc^{-1}$). Depending on the analysis,
one finds that the best shape (as needed to fit the WMAP CMB angular
power spectrum) is in agreement with a power law (Briddle et
al. 2003) or presents a bend at a particular scale (see Mukherjee et
al. 2003 and figure 1) around $k \sim 0.01 \,\rm Mpc^{-1}$.
\begin{figure}[h]
\centering
\includegraphics[width=4.5cm]{douspis_fig1.ps}\includegraphics[width=4.5cm]{douspis_fig2.ps}
\caption{Left: shape of the IPS when parametrized in amplitudes in bins for WMAP fitting (from Mukherjee et al. 2003).
Right: Effect of such parametrization on the CP constraints: in red the likelihoods with power law IPS and in blue assuming free amplitudes in bins (from Briddle et al. 2003) }
\label{figure_mafig}
\end{figure}
\subsection{Broken initial spectrum}
A second parametrization considered is the broken initial power
spectrum. The idea is to split the spectrum into two parts, two power
laws having independent indices characterising large and small
scales. Blanchard et al. (2003) have shown that in this situation the WMAP
angular power spectrum (in the temperature and temperature--polarization
cross cases), large scale surveys (2dF, APM, PSCz) and Lyman
alpha inferred power spectra are fully in agreement with an Einstein--de Sitter
model. Figure 2 shows that even Planck will be unable to distinguish
between a $\Lambda$-CDM power law cosmology and an
$\Omega_{matter}=1$ model with a broken power spectrum.
\begin{figure}[h]
\centering
\includegraphics[width=4.5cm]{douspis_fig3.ps} \includegraphics[width=4.5cm, height=4.5cm]{douspis_fig4.ps}
\caption{$\Lambda$-CDM
power law angular spectrum and $\Omega_{matter}=1$ model
over-plotted on WMAP data and their difference compared to Planck
estimated error bars}
\label{figure_mafig2}
\end{figure}
\subsection{Reconstructing the shape}
Having such good datasets as WMAP allows one to probe the initial
condition of cosmic scenarios. The angular power spectrum of CMB
anisotropies is a convolution of ``late time cosmology'' effects
(transfer function, $\Delta$) with initial conditions (IPS of
fluctuations, $P^0$ ): $C_{\ell}=4\pi\int\Delta^{2}_{\ell}(k)\, P^0(k)
\, \frac{\rm{d} k}{k} \simeq WP^{0}$, where the last term is in matrix
notation and represents a numerical approximation to the integral.
Generally speaking, it is not possible to invert the matrix $W$ to
solve for the power spectrum, because each $C_{\ell}$ embraces
information about a limited range in $k$-space. However the inversion
is feasible under some assumptions. Basically the lack of information
can be remedied by the introduction of priors, such as the
requirement of a certain degree of smoothness in the
solution. Tocchini-Valentini, Douspis and Silk (2004) have presented a
new method achieving this goal, with the advantage of providing error
bars on the reconstructed IPS. Other methods have been proposed in the
same spirit (Shafieloo et al. 2004; Kogo et al. 2004).
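To fix ideas, a toy version of such a regularized inversion (a simple
smoothness-penalised least-squares fit, which is \emph{not} the actual method of
the papers quoted above) can be written as follows, where the routine name is
arbitrary.
\begin{verbatim}
import numpy as np

def reconstruct_ips(C_ell, W, sigma, lam=1e-2):
    # Minimise ||(C - W p)/sigma||^2 + lam ||D p||^2, where D is a
    # second-difference operator acting as a smoothness prior on p = P^0.
    nk = W.shape[1]
    D = np.diff(np.eye(nk), n=2, axis=0)
    Wn = W / sigma[:, None]
    lhs = Wn.T @ Wn + lam * D.T @ D
    rhs = Wn.T @ (C_ell / sigma)
    return np.linalg.solve(lhs, rhs)   # reconstructed P^0 on the k grid
\end{verbatim}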
Figure 3 (left) shows the reconstructed power spectrum from WMAP
individual $C_{\ell}$'s when ``concordance'' late time cosmological
parameters are assumed. The horizontal black line represents the best
fitting power law IPS found in Spergel et al. 2003. The first remark
is perhaps that the reconstructed IPS is not far from a power
law. Nevertheless, deviations occur and are needed in order to fit
features seen in the angular power spectrum that are responsible for the
``bad'' goodness of fit of a power law spectrum. The reconstructed IPS
allows one to fit the WMAP $C_{\ell}$'s, improving the
log-likelihood ($\sim \chi^2$) by 44 with respect to the power law
$\Lambda$-CDM. Such a spectrum leaves imprints not only in the angular power
spectrum but also in the matter power spectrum. Recent surveys probing
large scales are thus able to test such deviations. Figure 3 (center)
shows the comparison between the IPS found in Fig.~3 (left), evolved
to the present epoch, and the matter power spectrum observed by the Sloan Digital
Sky Survey (SDSS). More than being merely compatible, here again, the
reconstructed IPS allows one to improve the goodness of fit, by nicely fitting
the deviations seen in the SDSS data.
\begin{figure}[h]
\centering
\hspace{-4.3cm}\includegraphics[width=4cm, height=2.3cm]{douspis_fig5.ps}
\hspace{2.5cm}\includegraphics[width=6cm,
height= 2.3cm]{douspis_fig7.ps}\hspace{-8cm}\includegraphics[width=3.5cm]{douspis_fig6.ps}
\caption{Reconstructed IPS from
WMAP under concordance prior (left) -- with corresponding matter power spectrum compared to SDSS
(center)-- and Einstein de Sitter prior
(right); from Tocchini-Valentini et al. 2004. } \label{figure_mafig}
\end{figure}
If such deviations (in the angular and matter PS) remain in the future
(at this level), it will be hard firstly to detect them in the IPS (if
they are not systematic effects) and then to explain such behavior
(especially if they are not coherent; see Martin \& Ringeval (2004) for
coherent oscillations in the IPS). Furthermore, if the ``concordance''
cosmological parameter prior is not set, one is able to find the
``best'' IPS fitting the data for almost any kind of cosmological
parameters given the strong degeneracy between initial conditions and
late time cosmology. Figure 3 (right) shows for example the IPS reconstructed
from WMAP $C_{\ell}$'s by forcing $\Omega_{Matter}=1$ and the
cosmological parameters presented in Blanchard et al. (2003). The
corresponding angular spectrum gives by definition an acceptable
goodness of fit ($\chi^2/(degrees\; of\; freedom) \sim 1$).
\section{Conclusion}
The degeneracy between primordial cosmology, IPS,
and late time cosmology, CP, allows one to mimic
actual (and future) CMB angular power spectrum observations. If the
hypothesis of a unique power law initial power spectrum is relaxed, the
constraints from CMB (and large scale surveys) on cosmological
parameters are weakened. Such a degeneracy then shows the need for other
probes and combinations (clusters of galaxies, supernovae, lensed
galaxies, ...) in order to reach precision cosmology and to set strong
constraints on cosmic scenarios.
\section{Introduction}
This paper studies a fractional version of the Schr\"odinger equation in a magnetic field, or a fractional magnetic Schr\"odinger equation (FMSE), establishing a uniqueness result for a related inverse problem. We thus deal with a non-local counterpart of the classical magnetic Schr\"odinger equation (MSE) (see \cite{NSU95}), which requires one to find, up to a gauge, the scalar and vector potentials existing in a medium from voltage and current measurements on its boundary.
Let $\Omega\subset\mathbb R^n$ be a bounded open set with Lipschitz boundary, representing a medium containing an unknown electromagnetic field. The solution of the Dirichlet problem for the MSE is a function $u$ satisfying
$$\left\{\begin{array}{lr}
(-\Delta)_A u + qu := -\Delta u -i \nabla\cdot(Au) -i A\cdot\nabla u + (|A|^2+q)u =0 & \text{in } \Omega\\
u=f & \text{on } \partial\Omega
\end{array}\right. \;, $$
\noindent where $f$ is the prescribed boundary value and $A,q$ are the vector and scalar potentials in the medium. The boundary measurements are encoded in $\Lambda_{A,q} : H^{1/2}(\partial\Omega)\rightarrow H^{-1/2}(\partial\Omega)\;,$ the Dirichlet-to-Neumann (or DN) map. The inverse problem consists in finding $A, q$ in $\Omega$ up to gauge by knowing $\Lambda_{A,q}$.
The study of the local MSE has both mathematical and practical interest, since it constitutes a substantial generalization of the Calder\'on problem (see \cite{Ca80}). This problem first arose for the prospecting of the ground in search of valuable minerals. In the method known as Electrical Impedance Tomography (EIT), electrodes are placed on the ground in order to deliver voltage and measure current flow; the resulting data carries information about the conductivity of the materials underground, allowing deductions about their composition (\cite{Uh09}). A similar method is also used in medical imaging. Since the tissues of a body have different electrical conductivities (\cite{Jo98}), with the same setup harmless currents can be made to flow in the body of a patient, thus collecting information about its internal structure. This technique can be applied to cancer detection (\cite{GZ03}), monitoring of vital functions (\cite{CGIN90}) and more (see e.g. \cite{Ho05}). Various engineering applications have also been proposed. A recent one (see \cite{HPS14}) describes a sensing skin consisting of a thin layer of conductive copper paint applied on concrete. In case of cracking of the block, the rupture of the surface would result in a local decrease in conductivity, which would in turn be detected by EIT, allowing the timely replacement of the failing block.
Below we introduce a fractional extension of the local problem. Fractional mathematical models are nowadays quite common in many different fields of science, including image processing (\cite{GO08}), physics (\cite{DGLZ2012}, \cite{Er02}, \cite{GL97}, \cite{La00}, \cite{MK00}, \cite{ZD10}), ecology (\cite{Hu10}, \cite{MV18}, \cite{RR09}), turbulent fluid dynamics (\cite{Co06}, \cite{DG13}) and mathematical finance (\cite{AB88}, \cite{Le04}, \cite{Sc03}). For more references, see \cite{BV18}. The common idea in these applications is that the fractional Schr\"odinger equation usefully describes anomalous diffusion, i.e. a diffusion process in which the mean squared displacement does not depend linearly on time. We expect this to be even more the case for FMSE, given its greater generality.
For the fractional case, fix $s\in(0,1)$, and consider the fractional divergence and gradient operators $(\nabla\cdot)^s$ and $\nabla^s$. These are based on the theoretical framework laid down in \cite{DGLZ2012}, \cite{DGLZ2013}, and were introduced in \cite{Co18} as non-local counterparts of the classical divergence and gradient. Fix a vector potential $A$, and consider the magnetic versions $(\nabla\cdot)^s_A$ and $\nabla^s_A$ of the above operators. These correspond to $(-i\nabla+A)\cdot$ and $(-i\nabla+A)$, whose combination results in the local magnetic Laplacian $(-\Delta)_A$. Analogously, we will show how $(\nabla\cdot)^s_A$ and $\nabla^s_A$ can be combined in a fractional magnetic Laplacian $(-\Delta)^s_A$.
\noindent The next step will be setting up the Dirichlet problem for FMSE as
$$\left\{\begin{array}{lr}
(-\Delta)^s_A u + qu =0 & \text{in } \Omega\\
u=f & \text{in } \Omega_e
\end{array}\right. \;.
$$
Since our operators are non-local, the exterior values are taken over $\Omega_e=\mathbb R^n\setminus \overline\Omega$. The well-posedness of the direct problem is granted by the assumption that $0$ is not an eigenvalue for the left hand side of FMSE (see e.g. \cite{RS2017}). We can therefore define the DN map $\Lambda_{A,q}^s : H^s(\Omega_e) \rightarrow (H^{s}(\Omega_e))^*$ from the bilinear form associated to the equation. The inverse problem is to recover $A$ and $q$ in $\Omega$ from $\Lambda_{A,q}^s$. Because of a natural gauge $\sim$ enjoyed by FMSE, solving the inverse problem completely is impossible; however, the gauge class of the solving potentials can be fully recovered:
\begin{The}
Let $\Omega \subset \mathbb R^n, \; n\geq 2$ be a bounded open set, $s\in(0,1)$, and let $(A_i,q_i) \in \cal P$ for $i=1,2$. Suppose $W_1, W_2\subset \Omega_e$ are open sets, and that the DN maps for the FMSEs in $\Omega$ relative to $(A_1,q_1)$ and $(A_2,q_2)$ satisfy $$\Lambda^s_{A_1,q_1}[f]|_{W_2}=\Lambda^s_{A_2,q_2}[f]|_{W_2}, \;\;\;\;\; \forall f\in C^\infty_c(W_1)\;.$$ \noindent Then $(A_1,q_1)\sim(A_2,q_2)$, that is, the potentials coincide up to the gauge $\sim$.
\end{The}
The set $\mathcal P$ of potentials and the gauge $\sim$ are defined in Section 3. $\cal P$ contains all potentials $(A,q)$ satisfying certain properties, among which \emph{(p5)}: supp$(A)\subseteq\Omega^2$. We suspect this assumption to be unnecessary, but we nonetheless prove our Theorem in this easier case, and highlight the occasions when \emph{(p5)} is used.
The proof is based on three preliminary results: the integral identity for the DN map, the weak unique continuation property (WUCP) and the Runge approximation property (RAP). The WUCP is easily proved by reducing our case to that of the fractional Laplacian $(-\Delta)^s$, for which the result is already known (see e.g. \cite{Ru15}, \cite{GSU2017}). For this we use \emph{(p5)}. The proof of the RAP then comes from the WUCP and the Hahn-Banach theorem. Eventually, we use this result, the integral identity and \emph{(p5)} to complete the proof by means of Alessandrini's identity. This technique generalizes the one studied in \cite{GSU2017}.
We consider Theorem 1.1 to be very satisfactory, as gauges show up in the local case $s=1$ as well (again, see \cite{NSU95}). For comparison see \cite{CLR18}, where it is shown that no gauge exists for a certain MSE in which only the highest order term is non-local. This interesting result inspired us to investigate a fully fractional operator, and is thus the main academic motivation for this work.
\section{Preliminaries}
\para{Operators on bivariate vector functions}
\begin{Def}\label{def1} Let $A \in C^{\infty}_c(\mathbb R^{n}\times\mathbb R^{n},\mathbb C^n)$. The \emph{symmetric, antisymmetric, parallel and perpendicular parts of $A$} at points $x,y$ are $$ A_s(x,y):= \frac{A(x,y)+A(y,x)}{2}\,, \;\;\;\;\;\; A_a(x,y) := A(x,y)-A_s(x,y)\;, $$
$$A_\parallel(x,y) := \left\{
\begin{array}{cc}
\frac{A(x,y)\cdot(x-y)}{|x-y|^2}(x-y) & \mbox{if } x\neq y \\
A(x,y) & \mbox{if } x=y
\end{array} \right.\,,\;\; A_\perp(x,y) := A(x,y)-A_\parallel(x,y)\;.$$
\noindent The \emph{$L^2$ norms of $A$ with respect to the first} and \emph{second variable} at point $x$ are $$ {\cal J}_1 A(x) := \left( \int_{\mathbb R^n} |A(y,x)|^2 \,dy \right)^{1/2}\;\;, \;\;\;\; {\cal J}_2 A(x) := \left( \int_{\mathbb R^n} |A(x,y)|^2 \,dy \right)^{1/2}\;.$$
\end{Def}
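For concreteness, a pointwise numerical sketch of the decompositions in Definition~\ref{def1} (our own illustration, not needed in the sequel) reads as follows.
\begin{verbatim}
import numpy as np

def decompose(A_xy, A_yx, x, y):
    # Symmetric/antisymmetric and parallel/perpendicular parts of a
    # bivariate vector field at the pair (x, y), following the
    # definition above.
    A_s = 0.5*(A_xy + A_yx)
    A_a = A_xy - A_s
    d = x - y
    if np.allclose(d, 0):
        A_par = np.array(A_xy, copy=True)
    else:
        A_par = (A_xy @ d)/(d @ d)*d
    A_perp = A_xy - A_par
    return A_s, A_a, A_par, A_perp
\end{verbatim}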
\begin{Rem}
Being $A\in C^\infty_c$, these two integrals are finite and the definitions make sense. Moreover, since $A_a \cdot A_s$ is an antisymmetric scalar function and $A_\parallel \cdot A_\perp = 0$, by the following computations
\begin{equation} \label{pitsim}
\begin{split}
\|A\|_{L^2}^2 & = \|A_a + A_s\|_{L^2}^2 = \|A_a\|_{L^2}^2+\|A_s\|_{L^2}^2+2\langle A_a, A_s\rangle \\ & = \|A_a\|_{L^2}^2+\|A_s\|_{L^2}^2+2\int_{\mathbb R^{2n}}A_a\cdot A_s \,dx\,dy = \|A_a\|_{L^2}^2+\|A_s\|_{L^2}^2\;\;,
\end{split}
\end{equation}
\begin{equation} \label{pitort}
\begin{split}
\|A\|_{L^2}^2 & = \|A_\parallel + A_\perp\|_{L^2}^2 = \|A_\parallel\|_{L^2}^2+\|A_\perp\|_{L^2}^2+2\langle A_\parallel, A_\perp\rangle \\ & = \|A_\parallel\|_{L^2}^2+\|A_\perp\|_{L^2}^2+2\int_{\mathbb R^{2n}}A_\parallel\cdot A_\perp \,dx\,dy = \|A_\parallel\|_{L^2}^2+\|A_\perp\|_{L^2}^2\;\;
\end{split}
\end{equation}
\noindent the four operators $(\cdot)_s, (\cdot)_a, (\cdot)_\parallel, (\cdot)_\perp$ can be extended to act from $L^2(\mathbb R^{2n})$ to $L^2(\mathbb R^{2n})$. This is true of ${\cal J}_1A$ and ${\cal J}_2A$ as well:
\spleq{normpart}{
\| {\cal J}_1A \|^2_{L^2(\mathbb R^n)} & = \int_{\mathbb R^n} |({\cal J}_1A)(x)|^2 \,dx =
\int_{\mathbb R^{2n}} |A(y,x)|^2 \,dy \,dx = \|A\|^2_{L^2(\mathbb R^{2n})}\;. }
\end{Rem}
\begin{Lem}\label{almevlem} The equalities defining $(\cdot)_s, (\cdot)_a, (\cdot)_\parallel, (\cdot)_\perp$ in Definition \ref{def1} for $A\in C^\infty_c$ still hold a.e. for $A\in L^2(\mathbb R^{2n})$.
\end{Lem}
\begin{proof}
We prove the Lemma only for $(\cdot)_s$, as the other cases are similar. For all $i\in\mathbb N$, let $A^{i}\in C^{\infty}_c(\mathbb R^{2n},\mathbb C^n)$ such that $\|A-A^{i}\|_{L^2} \leq 1/i$. By \eqref{pitsim},
\begin{align*}
\bigg\|A_s \bigg.&-\left.\frac{A(x,y)+A(y,x)}{2} \right\|_{L^2} \leq \\ & \leq \|(A-A^{i})_s\|_{L^2} +\left\|A^{i}_s - \frac{A^{i}(x,y)+A^{i}(y,x)}{2} \right\|_{L^2} \\ & \;\;\;\; + \left\| \frac{(A(x,y) - A^{i}(x,y)) +(A(y,x) - A^{i}(y,x))}{2} \right\|_{L^2} \\ & = \|(A-A^{i})_s\|_{L^2} + \left\| \frac{(A(x,y) - A^{i}(x,y)) +(A(y,x) - A^{i}(y,x))}{2} \right\|_{L^2} \\ & \leq 2 \|A-A^{i}\|_{L^2} \leq 2/i\;.\qedhere\end{align*}\end{proof}
\begin{Rem}
If $A\in C^{\infty}_c$, the operators $(\cdot)_s, (\cdot)_a, (\cdot)_\parallel, (\cdot)_\perp$ commute with each other; because of Lemma \ref{almevlem}, this still holds a.e. for $A\in L^2(\mathbb R^{2n})$. Thus in the following we use e.g. the symbol $A_{s\parallel}$ for both $(A_s)_\parallel$ and $(A_\parallel)_s$.
\end{Rem}
\para{Sobolev spaces} Let $\Omega\subset \mathbb R^n$ be open and $r\in \mathbb R$, $p\in(1,\infty)$, $n\in\mathbb{N}\setminus \{0\}$. By the symbols $W^{r,p} = W^{r,p}(\mathbb R^n)$ and $W^{r,p}_c(\Omega)$ we denote the usual $L^p$-based Sobolev spaces. We also let $H^s = H^s(\mathbb{R}^n) = W^{s,2}(\mathbb{R}^n)$ be the standard $L^2$-based Sobolev space with norm $\|u\|_{H^s(\mathbb{R}^n)} = \|\mathcal{F}^{-1}( \langle\xi\rangle^s \hat u ) \|_{L^2(\mathbb{R}^n)}\;,$ where $s\in \mathbb R$, $\langle\xi\rangle := (1+|\xi|^2)^{1/2}$ and the Fourier transform is
$$ \hat u(\xi) = \mathcal F u(\xi) = \int_{\mathbb R^n} e^{-ix\cdot\xi} u(x) dx\;.$$
One should note that there exist many equivalent definitions of fractional Sobolev spaces (see e.g. \cite{hitch}). Using the Sobolev embedding and multiplication theorems (see e.g. \cite{BBM01}, \cite{BH2017}), these spaces can often be embedded into each other:
\begin{Lem} Let $s\in(0,1), \,p :=$\emph{max}$\{2, n/2s\}$ and $h\geq 0$. Then the embeddings
\begin{multicols}{2}
\begin{enumerate}[label=(e\arabic*).]
\item $\; H^s \times H^s \hookrightarrow L^{n/(n/2+s p-2s)} \;,$
\item $\; H^s \times L^{p} \hookrightarrow L^{2n/(n+2s)}\;,$
\item $\; L^{2p} \times L^2 \hookrightarrow L^{2n/(n+2s)}\;,$
\item $\; L^{2p} \times H^s \hookrightarrow L^2\;,$
\item $\; L^{2p} \times L^{2p} \hookrightarrow L^{p}\;,$
\item $\; H^{sp-2s} \hookrightarrow L^{p}\;,$
\item $\; L^{2n/(n+2h)} \hookrightarrow H^{-h}$
\end{enumerate}
\end{multicols}
\noindent hold, where $\times$ indicates the pointwise product.\qed\end{Lem}
Let $U, F\subset \mathbb R^n$ be an open and a closed set. We define the spaces $$H^s(U) = \{ u|_U, u\in H^s(\mathbb R^n) \}\;,$$$$ \tilde H^s(U) = \text{closure of $C^\infty_c(U)$ in $H^s(\mathbb R^n)$}\;, \;\mbox{and}$$ $$ H^s_F(\mathbb R^n) = \{ u\in H^s(\mathbb R^n) : \text{supp}(u) \subset F \}\;, $$
\noindent where $\|u\|_{H^s(U)}= \inf\{ \|w\|_{H^s(\mathbb R^n)} ; w\in H^s(\mathbb R^n), w|_{U}=u \}$. For $s\in(0,1)$ and a bounded open set $U\subset \mathbb R^n$, let $X:= H^s(\mathbb R^n)/\tilde H^s(U)$. If $U$ is a Lipschitz domain, then $\tilde H^s(U)$ and $H^s_{\bar U}(\mathbb R^n)$ can be identified for all $s\in \mathbb R$ (see \cite{GSU2017}); therefore, $X = H^s(\mathbb R^n)/H^s_{\bar U}(\mathbb R^n)$, and its elements are equivalence classes of functions from $H^s(\mathbb R^n)$ coinciding on $U_e$. $X$ is called the \emph{abstract trace space}.
\para{Non-local operators} If $u\in \mathcal S(\mathbb R^n)$, its fractional Laplacian is (see \cite{Kw15}, \cite{hitch}) $$ (-\Delta)^s u(x) := \mathcal C_{n,s} \lim_{\epsilon \rightarrow 0^+}\int_{\mathbb R^n\setminus B_\epsilon (x)} \frac{u(x)-u(y)}{|y-x|^{n+2s}} dy\;,$$
\noindent for a constant $\mathcal C_{n,s}$. Its Fourier symbol is $|\xi|^{2s}$, i.e. $(-\Delta)^s u(x) =\mathcal F^{-1} ( |\xi|^{2s} \hat u (\xi) )$. By \cite{Ho90}, Ch. 4 and \cite{Ta96}, $(-\Delta)^s$ extends as a bounded map $(-\Delta)^s : W^{r,p}(\mathbb R^n) \rightarrow W^{r-2s,p}(\mathbb R^n)$ for $r\in\mathbb R$ and $p\in(1,\infty)$. Let $\alpha(x,y): \mathbb R^{2n}\rightarrow \mathbb R^n$ be the map
\begin{equation*}\alpha(x,y)= \frac{\mathcal C_{n,s}^{1/2}}{\sqrt 2} \frac{y-x}{|y-x|^{n/2 +s+1}}\;.\end{equation*}
\noindent If $u\in C^{\infty}_c(\mathbb R^n)$ and $x,y \in \mathbb R^n$, the \emph{fractional gradient of $u$ at points $x$ and $y$} is \begin{equation}\label{graddef}\nabla^s u(x,y) := (u(x)-u(y))\alpha(x,y)\;,\end{equation}
\noindent and is thus a symmetric and parallel vector function of $x$ and $y$. Since it was proved in \cite{Co18} that $\|\nabla^s u\|^2_{L^2(\mathbb R^{2n})} \leq \|u\|^2_{H^s(\mathbb R^n)}$, and thus that the linear operator $\nabla^s$ maps $C^\infty_c(\mathbb R^n)$ into $L^2(\mathbb R^{2n})$, we see that $\nabla^s$ can be extended to $\nabla^s : H^s(\mathbb R^n) \rightarrow L^2(\mathbb R^{2n})$. Using a proof by density similar to the one for Lemma \ref{almevlem}, one sees that \eqref{graddef} still holds a.e. for $u\in H^s(\mathbb R^n)$.
\noindent If $u\in H^s(\mathbb R^n)$ and $v\in L^2(\mathbb R^{2n})$, the \emph{fractional divergence} is defined as that operator $(\nabla\cdot)^s : L^2(\mathbb R^{2n}) \rightarrow H^{-s}(\mathbb R^n)$ satisfying
\begin{equation}
\label{eq:defdiv}
\langle (\nabla\cdot)^s v,u \rangle_{L^2(\mathbb R^{n})} = \langle v,\nabla^s u \rangle_{L^2(\mathbb R^{2n})} \;,
\end{equation}
\noindent i.e. it is by definition the adjoint of the fractional gradient. As observed in \cite{Co18}, Lemma 2.1, if $u \in H^s(\mathbb R^n)$ the equality $(\nabla\cdot)^s(\nabla^su)(x) = (-\Delta)^su(x)$ holds in weak sense, and $(\nabla\cdot)^s(\nabla^su) \in H^{-s}(\mathbb R^n)$.
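For the reader's convenience, we recall why the last identity holds: for $u,v \in C^\infty_c(\mathbb R^n)$, by \eqref{eq:defdiv}, \eqref{graddef} and the definition of $\alpha$,
$$ \langle (\nabla\cdot)^s(\nabla^s u), v \rangle = \langle \nabla^s u, \nabla^s v \rangle = \frac{\mathcal C_{n,s}}{2} \int_{\mathbb R^{2n}} \frac{(u(x)-u(y))(v(x)-v(y))}{|x-y|^{n+2s}} \, dx\, dy\;, $$
\noindent which is the usual bilinear form associated with $(-\Delta)^s$; the case $u\in H^s(\mathbb R^n)$ then follows by density.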
\begin{Lem} Let $u\in C^\infty_c(\mathbb R^n)$. There exists a constant $k_{n,s}$ such that $${\cal F}(\nabla^s u)(\xi,\eta) = k_{n,s} \left( \frac{\xi}{|\xi|^{n/2+1-s}}+\frac{\eta}{|\eta|^{n/2+1-s}} \right){\cal F} u(\xi+\eta)\;.$$
\end{Lem}
\begin{proof} As $u\in C^\infty_c(\mathbb R^n)$, we know that $\nabla^s u \in L^2(\mathbb R^{2n})$, and we can compute its Fourier transform in the variables $\xi,\eta$. By a change of variables,
\begin{equation*}\begin{split}
{\cal F}(\nabla^s u)(\xi,\eta) & = \frac{\mathcal C_{n,s}^{1/2}}{\sqrt 2}\int_{\mathbb R^n}\int_{\mathbb R^n} e^{-ix\cdot\xi}e^{-iy\cdot\eta} \frac{u(x)-u(y)}{|y-x|^{n/2 +s+1}}(y-x) \; dx\,dy \\ & = k'_{n,s}\int_{\mathbb R^n} \frac{e^{-iz\cdot\eta}}{|z|^{n/2 +s+1}} z \int_{\mathbb R^n} e^{-ix\cdot(\xi+\eta)} (u(x)-u(x+z)) \; dx\,dz \\ & = k'_{n,s}\int_{\mathbb R^n} \frac{z}{|z|^{n/2 +s+1}} e^{-iz\cdot\eta} \,{\cal F}u(\xi+\eta) (1-e^{iz\cdot(\xi+\eta)})\,dz \\ & = k''_{n,s}\, {\cal F}u(\xi+\eta) \int_{\mathbb R^n} (e^{-iz\cdot\eta} - e^{iz\cdot\xi})\nabla_z(|z|^{1-n/2 -s}) \,dz \\ & = k''_{n,s}\, {\cal F}u(\xi+\eta) \left(\eta{\cal F}(|z|^{1-n/2 -s})(\eta)+\xi{\cal F}(|z|^{1-n/2 -s})(-\xi)\right) \\ & = k_{n,s} \left( \frac{\xi}{|\xi|^{n/2+1-s}}+\frac{\eta}{|\eta|^{n/2+1-s}} \right){\cal F} u(\xi+\eta)\;.\qedhere
\end{split}\end{equation*}\end{proof}
\begin{Lem}\label{extgrad}
The fractional gradient extends as a bounded map $$\nabla^s : H^r(\mathbb R^n) \rightarrow \langle D_x+D_y \rangle^{r-s} L^2(\mathbb R^{2n})\;,$$
\noindent and if $r\leq s$ then also $\; \nabla^s : H^r(\mathbb R^n) \rightarrow H^{r-s}(\mathbb R^{2n})\;. $
\end{Lem}
\begin{proof}
\noindent Start with $u\in C^\infty_c(\mathbb R^n)$, and let $r\in\mathbb R$. Then
\spleq{estigrad1}{\|\nabla^s u\|^2_{\langle D_x+D_y \rangle^{r-s} L^2} & = ( \langle D_x+D_y \rangle^{r-s} \nabla^s u, \langle D_x+D_y \rangle^{r-s} \nabla^s u )_{L^2} \\ & = ( \langle D_x+D_y \rangle^{2(r-s)} \nabla^s u, \nabla^s u )_{L^2} \\ & = ({\cal F}( \langle D_x+D_y \rangle^{2(r-s)} \nabla^s u), {\cal F}(\nabla^s u) )_{L^2}\;.
}
\noindent From the previous Lemma we can deduce that
\spleqno{ {\cal F}( \langle D_x+D_y \rangle&^{2(r-s)} \nabla^s u) = (1+|\xi+\eta|^2)^{r-s} {\cal F}(\nabla^s u) \\ & = (1+|\xi+\eta|^2)^{r-s} k_{n,s} \left( \frac{\xi}{|\xi|^{n/2+1-s}}+\frac{\eta}{|\eta|^{n/2+1-s}} \right){\cal F} u(\xi+\eta) \\ & = k_{n,s} \left( \frac{\xi}{|\xi|^{n/2+1-s}}+\frac{\eta}{|\eta|^{n/2+1-s}} \right) {\cal F}( \langle D_x \rangle^{2(r-s)} u )(\xi+\eta) \\ & = {\cal F} (\nabla^s ( \langle D_x\rangle^{2(r-s)} u ))\;.}
Using the properties of the fractional gradient and \eqref{estigrad1},
\spleqno{ \|\nabla^s u\|^2_{\langle D_x+D_y \rangle^{r-s} L^2} & = ( {\cal F} (\nabla^s ( \langle D_x\rangle^{2(r-s)} u )), {\cal F}(\nabla^s u) )_{L^2} \\ & = ( \nabla^s ( \langle D_x\rangle^{2(r-s)} u ), \nabla^s u )_{L^2} = ( \langle D_x\rangle^{2(r-s)} u, (-\Delta)^s u )_{L^2} \\ & = ( \langle D_x\rangle^{r-s} (-\Delta)^{s/2} u, \langle D_x\rangle^{r-s} (-\Delta)^{s/2} u )_{L^2} \\ & = \| (-\Delta)^{s/2} u \|^2_{H^{r-s}} \leq c \|u\|^2_{H^r}\;.
}
\noindent An argument by density completes the proof of the first part of the statement. For the second one, observe that $r \leq s$ implies
\spleqno{ \|v\|^2_{H^{r-s}} & = ( \langle D_{x,y}\rangle^{r-s} v, \langle D_{x,y}\rangle^{r-s} v )_{L^2} = ( \langle D_{x,y}\rangle^{2(r-s)} v, v )_{L^2} \\ & = ( (1+ |\xi|^2 + |\eta|^2)^{r-s} \hat v, \hat v )_{L^2} \leq c( (1+ |\xi+\eta|^2)^{r-s} \hat v, \hat v )_{L^2} \\ & =c ( \langle D_x + D_y \rangle^{2(r-s)}v,v )_{L^2} = c \|v\|^2_{\langle D_x+D_y \rangle^{r-s} L^2}\;,}
\noindent and so $ \langle D_x+D_y \rangle^{r-s} L^2(\mathbb R^{2n}) \subseteq H^{r-s}(\mathbb R^{2n})$.\end{proof}
\noindent As a consequence of the above Lemma, the fractional divergence can be similarly extended as $(\nabla\cdot)^s : H^t(\mathbb R^{2n})\rightarrow H^{t-s}(\mathbb R^n)$ for all $t\geq s$.
\section{Definition and properties of FMSE}
\para{Fractional magnetic Schr\"odinger equation} Let $\Omega\subset \mathbb R^n$ be open, $\Omega_e = \mathbb R^n\setminus\overline\Omega$ be the \emph{exterior domain}, and also recall that $p :=$max$\{2, n/2s\}$. The \emph{vector potential} and \emph{scalar potential} are two functions $A: \mathbb R^{2n}\mapsto \mathbb C^n$ and $q:\mathbb R^n\mapsto\mathbb R$. The following properties are of interest:
\begin{enumerate}[label=\emph{(p\arabic*)}.]
\item $\;{\cal J}_1A, \;{\cal J}_2A \in L^{2p}(\mathbb R^n)\;,$
\item $\;A_{s\parallel} \in H^{sp-s}(\mathbb R^{2n}, \mathbb C^n)\;,$
\item $\;A_{a\parallel}(x,y) \cdot (y-x) \geq 0, \;$ for all $x, y\in \mathbb R^n\;,$
\item $\;q\in L^{p}(\Omega) \;,$
\item $\;A\in L^2(\mathbb R^{2n}), \;\;\;$ supp$(A) \subseteq \Omega^2\;.$
\end{enumerate}
\noindent With respect to the above properties, we define four sets of potentials:
$$ \begin{array}{c}
{\cal A}_0 := \{ \mbox{vector potentials } A \mbox{ verifying } (p1) - (p3) \}, \\
{\cal A} := \{ \mbox{vector potentials } A \mbox{ verifying } (p1) - (p3) \mbox{ and } (p5) \}, \\
{\cal P}_0 := \{ \mbox{pairs of potentials } (A,q) \mbox{ verifying } (p1) - (p4) \}, \\
{\cal P} := \{ \mbox{pairs of potentials } (A,q) \mbox{ verifying } (p1) - (p5) \}. \\
\end{array} $$
\begin{Rem}
The peculiar definitions for the spaces in \emph{(p1)}, \emph{(p2)} and \emph{(p4)} are due to computational necessities: they make the following quantities
$$\|qu\|_{H^{-s}} ,\;\;\; \|(\nabla\cdot)^sA_{s\parallel}\|_{L^{p}} ,\;\;\; \|(\mathcal J_2A)^2\|_{L^{p}},\;\;\; \|u \mathcal J_2A \|_{L^2} $$
\noindent finite for $u\in H^s$, as needed in Remark 3.8, Lemma 3.12 and \eqref{usingp1}. This is easily proved by using Lemma 2.5. However, if $n\geq 4$, then $p=n/2s$, and so in this case $L^{2p} = L^{n/s}$ and $H^{sp-s}=H^{n/2-s}$; this simplifies the assumptions for $n$ large enough.
\end{Rem}
\noindent Let $A \in {\cal A}_0$ and $u\in H^s(\mathbb R^n)$. By \emph{(p1)} and \emph{(e4)},
\spleq{usingp1}{
\| A(x,y)u(x) \|_{L^2(\mathbb R^{2n})} & = \left(\int_{\mathbb R^n} u(x)^2 \int_{\mathbb R^n} |A(x,y)|^2 dy\, dx\right)^{1/2} \\ & = \left(\int_{\mathbb R^n} u(x)^2 \,{\cal J}_2A(x)^2 \,dx\right)^{1/2} = \|u \,{\cal J}_2A\|_{L^2(\mathbb R^n)} \\ & \leq k \|u\|_{H^s} \|{\cal J}_2A\|_{L^{2p}}<\infty\;,
}
\noindent and thus the \emph{magnetic fractional gradient} of $u$ can be defined as the function $\nabla^s_A u : \mathbb R^{2n}\rightarrow \mathbb C^n$ such that \begin{equation}\label{defgradA} \langle \nabla^s_A u, v \rangle := \langle \nabla^s u + A(x,y)u(x), v \rangle\;, \;\;\; \mathrm{for}\;\mathrm{all}\; v\in L^2(\mathbb R^{2n})\;. \end{equation}
\noindent By the same computation, $\nabla^s_A$ acts as an operator $\nabla^s_A : H^s(\mathbb R^n) \rightarrow L^2(\mathbb R^{2n})$.
\noindent Let $A\in {\cal A}_0$, $u\in H^s({\mathbb R^n})$ and $v\in L^2(\mathbb R^{2n})$. The \emph{magnetic fractional divergence} is defined by duality as that operator $(\nabla\cdot)^s_A : L^2(\mathbb R^{2n}) \rightarrow H^{-s}({\mathbb R^n})$ such that
\spleqno{
\langle (\nabla\cdot)^s_A v, u \rangle := \langle v, \nabla^s_A u \rangle\;.
}
\noindent By construction, the magnetic fractional divergence and gradient can be combined; we call \emph{magnetic fractional Laplacian} $(-\Delta)^s_A:=(\nabla\cdot)^s_A(\nabla^s_A)$ that operator from $H^s(\mathbb R^n)$ to $H^{-s}(\mathbb R^n)$ such that, for all $u,v \in H^s(\mathbb R^n)$,
\spleq{deflapA}{
\langle (-\Delta)^s_A u, v \rangle = \langle \nabla^s_A u, \nabla^s_A v \rangle\;.}
\begin{Rem}\label{Anonzero}
If $A\equiv 0$, the magnetic fractional Laplacian $(-\Delta)^s_A$ is reduced to its non-magnetic counterpart $(-\Delta)^s$, as expected. Since the fractional Laplacian is well understood (see e.g. \cite{GSU2017}), from now on we assume $A\not\equiv 0$.
\end{Rem}
\begin{Lem}\label{lemform1} Let $A\in L^2(\mathbb R^{2n}) \cap {\cal A}_0$ and $u\in H^s(\mathbb R^n)$. The equation \spleq{form1}{(-\Delta)^s_A u = (-\Delta)^su + 2\int_{\mathbb R^n} \left(A_{a\parallel} \cdot \nabla^s u\right) \,dy + \left( (\nabla\cdot)^sA_{s\parallel} + \int_{\mathbb R^n} |A|^2 \, dy \right) u}
\noindent holds in weak sense.
\end{Lem}
\begin{proof}
\noindent By \eqref{deflapA}, $(-\Delta)^s_A u \in H^{-s}(\mathbb R^n)$, and in order to prove \eqref{form1} in weak sense one needs to compute $\langle (-\Delta)^s_A u, v \rangle$ for $v\in H^s(\mathbb R^n)$. By \eqref{deflapA} and \eqref{defgradA},
\spleqno{ \langle (-\Delta)^s_A u, v \rangle & = \langle \nabla^s u + A(x,y)u(x), \nabla^s v + A(x,y)v(x) \rangle \\ & = \langle\nabla^s u,\nabla^s v\rangle + \langle Au, Av\rangle + \langle\nabla^s u, Av\rangle + \langle \nabla^s v, Au\rangle\;, }
\noindent where all the above terms make sense, since by formula \eqref{usingp1} $\nabla^s u, \nabla^s v, Au$ and $Av$ all belong to $L^2(\mathbb R^{2n})$. The new term $\langle \nabla^s u, A(y,x)v(x) \rangle$ is also finite, so
\spleq{breaklap}{ \langle (-\Delta)^s_A u, v \rangle = \,& \langle\nabla^s u,\nabla^s v\rangle + \langle Au, Av\rangle + \\ & + \langle\nabla^s u, A(x,y)v(x)\rangle -\langle \nabla^s u, A(y,x)v(x)\rangle + \\ & + \langle \nabla^s u, A(y,x)v(x) \rangle + \langle \nabla^s v, A(x,y)u\rangle\;. }
\noindent For the first term on the right hand side of \eqref{breaklap}, by definition,
\spleq{firstterm}{
\langle\nabla^s u,\nabla^s v\rangle = \langle(\nabla\cdot)^s\nabla^s u, v\rangle = \langle (-\Delta)^s u, v \rangle\;.
}
\noindent For the second one, by the embeddings \emph{(e5)}, \emph{(e2)} and $\emph{(e7)}$,
\spleq{secondterm}{
\langle Au, Av\rangle = \left\langle u(x) \int_{\mathbb R^n} |A(x,y)|^2 dy, v\right\rangle = \langle u ({\cal J}_2A)^2, v\rangle \;.
}
\noindent Since $u\in H^s(\mathbb R^n)$, by \eqref{normpart} we deduce ${\cal J}_2(\nabla^s u) \in L^2(\mathbb R^n)$. Now \emph{(e3)} implies that ${\cal J}_2(\nabla^s u){\cal J}_2A \in L^{\frac{2n}{n+2s}}$. On the other hand, by Cauchy-Schwarz
\spleqno{
\left\| \int_{\mathbb R^n} \right.&\left.\nabla^s u \cdot A dy \right\|^{\frac{2n}{n+2s}}_{L^{\frac{2n}{n+2s}}(\mathbb R^n)} = \int_{\mathbb R^n} \left| \int_{\mathbb R^n} \nabla^s u \cdot A dy \right|^{\frac{2n}{n+2s}} dx \\ & \leq \int_{\mathbb R^n} \left( \int_{\mathbb R^n} |\nabla^s u|\,|A| dy \right)^{\frac{2n}{n+2s}} dx \leq \int_{\mathbb R^n} \left( \int_{\mathbb R^n} |\nabla^s u|^2 dy \int_{\mathbb R^n} |A|^2 dy\right)^{\frac{n}{n+2s}} dx \\ & = \int_{\mathbb R^n} | {\cal J}_2(\nabla^s u) \; {\cal J}_2A |^{\frac{2n}{n+2s}} dx = \left\| {\cal J}_2(\nabla^s u) \; {\cal J}_2A \right\|^{\frac{2n}{n+2s}}_{L^{\frac{2n}{n+2s}}(\mathbb R^n)}\;,
}
\noindent and so $\int_{\mathbb R^n} \nabla^s u \cdot A dy \in L^{\frac{2n}{n+2s}}$. Now $\langle \int_{\mathbb R^n} \nabla^s u \cdot A dy , v\rangle$ is finite by \emph{(e7)}, and
\spleq{thirdfourthterm}{
\left\langle\nabla^s u, A(x,y)\right.&\left.v(x)\right\rangle -\langle \nabla^s u, A(y,x)v(x)\rangle = \\ & = \left\langle \int_{\mathbb R^n} \nabla^s u \cdot A(x,y) dy , v\right\rangle - \left\langle \int_{\mathbb R^n} \nabla^s u \cdot A(y,x) dy , v\right\rangle \\ & = \left\langle \int_{\mathbb R^n} \nabla^s u \cdot (A(x,y)-A(y,x)) dy , v\right\rangle \\ & = \left\langle 2\int_{\mathbb R^n} \nabla^s u \cdot A_a\, dy , v\right\rangle = \left\langle 2\int_{\mathbb R^n} \nabla^s u \cdot A_{a\parallel}\, dy , v\right\rangle\;.
}
\noindent The last steps use Lemma \ref{almevlem} to write $A_a$ for $A\in L^2$ and to see that $\nabla^s u$ is a.e. a parallel vector for $u\in H^s(\mathbb R^n)$, which implies $\nabla^s u \cdot A_{a\perp} = 0 \;a.e.$. This computes the third and fourth terms on the right hand side of \eqref{breaklap}. For the last two terms observe that, since $A(y,x)v(x) - A(x,y)v(y)$ is antisymmetric, by Lemma \ref{almevlem} we have $ \langle \nabla^s u , A(y,x)v(x) - A(x,y)v(y) \rangle =0$, and so
\spleq{lastterm}{ \left\langle \nabla^s u, A(y,x)\right.&\left.v(x) \right\rangle + \langle \nabla^s v, Au\rangle\ \\ & = \int_{\mathbb R^{2n}} A(x,y)\cdot (v(y)\nabla^s u + u(x)\nabla^s v )\; dx\,dy\; \\ & =
\int_{\mathbb R^{2n}} A\cdot \alpha \Big(v(y)(u(x)-u(y)) + u(x)(v(x)-v(y)) \Big)\; dx\,dy \\ & = \int_{\mathbb R^{2n}} A_{s\parallel}\cdot \alpha \Big(u(x)v(x) - u(y)v(y) \Big)\; dx\,dy \\ & = \langle A_{s\parallel}, \nabla^s(uv)\rangle = \langle u (\nabla\cdot)^s A_{s\parallel}, v \rangle \;.
}
\noindent On the third line of \eqref{lastterm} the integrand is the product of a symmetric, parallel vector and $A$; this reduces $A$ to $A_{s\parallel}$. From \emph{(e1)}, \emph{(e7)} and Lemma \ref{extgrad} one sees that $\nabla^s(uv) \in H^{s-sp}$, and now $\langle A_{s\parallel}, \nabla^s(uv)\rangle$ makes sense by \emph{(p2)}. Finally, \eqref{eq:defdiv}, \emph{(e6)}, \emph{(e2)} and \emph{(e7)} justify the last step. Equation \eqref{form1} follows from \eqref{breaklap}, \eqref{firstterm}, \eqref{secondterm}, \eqref{thirdfourthterm} and \eqref{lastterm}. \end{proof}
\begin{Lem}\label{lemsigma} Let $A\in L^2(\mathbb R^{2n}) \cap {\cal A}_0$. There exists a positive, symmetric distribution $\sigma\in {\cal D}'(\mathbb R^{2n})$ such that $A_{a\parallel} = \alpha(\sigma-1)$ a.e..
\end{Lem}
\begin{proof} Because of Lemma \ref{almevlem}, $A_{a\parallel}$ is a parallel vector almost everywhere, and thus $\|A_{a\parallel} - (A_{a\parallel})_\parallel\|_{L^2} =0$. Again by Lemma \ref{almevlem},
\spleqno{
0 & = \|A_{a\parallel} - (A_{a\parallel})_\parallel\|_{L^2} = \left\| A_{a\parallel} - \frac{A_{a\parallel} \cdot (x-y)}{|x-y|^2}(x-y) \right\|_{L^2} \\ & = \left\| A_{a\parallel} - \left( -\frac{\sqrt 2}{\mathcal C_{n,s}^{1/2}} \frac{A_{a\parallel} \cdot (x-y)}{|x-y|^{1-n/2-s}}\right) \frac{\mathcal C_{n,s}^{1/2}}{\sqrt 2} \frac{y-x}{|y-x|^{n/2 +s+1}} \right\|_{L^2} \\ & = \left\| A_{a\parallel} - \left(\left(1+ \frac{\sqrt 2}{\mathcal C_{n,s}^{1/2}} \frac{A_{a\parallel} \cdot (y-x)}{|x-y|^{1-n/2-s}}\right)-1\right) \alpha \right\|_{L^2}\;.
}
\noindent Moreover, if $\phi\in C^\infty_c(\mathbb R^{2n})$ and $B_{r_1}, B_{r_2}$ are balls in $\mathbb R^n$ centered at the origin such that supp$(\phi)\subset B_{r_1} \times B_{r_2}$, then by \eqref{pitsim}, \eqref{pitort} and Cauchy-Schwarz inequality
\spleqno{
\left|\left\langle 1+\frac{\sqrt{2}}{C_{n,s}^{1/2}}\right.\right.&\left.\left. |y-x|^{n/2+s} \left( A_{a\parallel} \cdot \frac{y-x}{|y-x|} \right) ,\phi \right\rangle\right| = \\ & = \left| \int_{\mathbb R^n}\int_{\mathbb R^n} \left(1+\frac{\sqrt{2}}{C_{n,s}^{1/2}}|y-x|^{n/2+s} \left( A_{a\parallel} \cdot \frac{y-x}{|y-x|} \right)\right)\phi\;dy\,dx \right| \\ & \leq \int_{\mathbb R^n}\int_{\mathbb R^n} \left|1+\frac{\sqrt{2}}{C_{n,s}^{1/2}}|y-x|^{n/2+s} \left( A_{a\parallel} \cdot \frac{y-x}{|y-x|} \right)\right||\phi|\;dy\,dx \\ & \leq \|\phi\|_{L^\infty} \int_{B_{r_1}}\int_{B_{r_2}} \left|1+\frac{\sqrt{2}}{C_{n,s}^{1/2}}|y-x|^{n/2+s} \left( A_{a\parallel} \cdot \frac{y-x}{|y-x|} \right)\right|\;dy\,dx \\ & \leq k \|\phi\|_{L^\infty} \left( 1 + \int_{B_{r_1}}\int_{B_{r_2}} |y-x|^{n/2+s} \left| A_{a\parallel} \cdot \frac{y-x}{|y-x|} \right|\;dy\,dx \right) \\ & \leq k \|\phi\|_{L^\infty} \left( 1 + \int_{B_{r_1}}\int_{B_{r_2}} (|x|+|y|)^{n/2+s} | A_{a\parallel}|\;dy\,dx \right) \\ & \leq k'\|\phi\|_{L^\infty} \left( 1 + \int_{B_{r_1}}\int_{B_{r_2}} | A_{a\parallel}|\;dy\,dx \right) \\ & \leq k'\|\phi\|_{L^\infty} \left( 1 + \|A_{a\parallel}\|^2_{L^2(\mathbb R^{2n})} \right) \leq k'\|\phi\|_{L^\infty} \left( 1 + \|A\|^2_{L^2(\mathbb R^{2n})} \right) < \infty\;.
}
\noindent Thus it makes sense to define a distribution $\sigma\in {\cal D}'(\mathbb R^{2n})$ such that $$ \langle\sigma,\phi\rangle= \left\langle 1+\frac{\sqrt{2}}{C_{n,s}^{1/2}} |y-x|^{n/2+s} \left( A_{a\parallel} \cdot \frac{y-x}{|y-x|} \right) ,\phi \right\rangle $$
\noindent holds for all $\phi\in C^\infty_c(\mathbb R^{2n})$. Since $A_{a\parallel}$ is antisymmetric and $y-x$ changes sign under the exchange of $x$ and $y$, it is clear that $\sigma$ is symmetric; moreover, property \emph{(p3)} ensures that $\sigma \geq 1$. \end{proof}
\begin{Rem}\label{lapcond}
If $u\in {\cal S}(\mathbb R^n)$, by the previous Lemma we can rewrite the leading term of $(-\Delta)^s_A$ as
\spleqno{
\mathcal C_{n,s}\; PV \int_{\mathbb R^n} \sigma(x,y) \frac{u(x)-u(y)}{|x-y|^{n+2s}}\, dy\;.
}
\noindent This shows the connection between the magnetic and classical fractional Laplacians: if $\sigma(x,y) \equiv 1$, i.e. if $A_{a\parallel}\equiv 0$, the formula above defines $(-\Delta)^su$. Moreover, if $\sigma(x,y)$ is \emph{separable} (i.e. there are functions $\sigma_1, \sigma_2 : \mathbb R^n \rightarrow \mathbb R$ such that $\sigma(x,y)=\sigma_1(x)\sigma_2(y)$) we get the fractional conductivity operator (see \cite{Co18}).
\end{Rem}
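\noindent For instance, in the separable case of the previous Remark one may take $\sigma_1 = \sigma_2 = \gamma^{1/2}$ for a conductivity $\gamma$ in the sense of \cite{Co18}; then the formula above reads
$$ \mathcal C_{n,s}\; PV \int_{\mathbb R^n} \gamma^{1/2}(x)\,\gamma^{1/2}(y)\, \frac{u(x)-u(y)}{|x-y|^{n+2s}}\, dy\;, $$
\noindent which is how the fractional conductivity operator arises in this framework (compare also the random walk interpretation below, where the conductivity case corresponds to the weight $\sqrt{\gamma(x)\gamma(y)}$).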
Consider $(A,q)\in {\cal P}_0$ and $f\in H^s(\Omega_e)$. We say that $u\in H^s(\mathbb R^n)$ solves FMSE with exterior value $f$ if and only if
$$ \left\{\begin{array}{lr}
(-\Delta)^s_A u + qu = 0 & \text{in } \Omega\,\;\\
u=f & \text{in } \Omega_e
\end{array}\right.$$
\noindent holds in weak sense, that is, if and only if $u-f \in \tilde H^s(\Omega)$ and, for all $v\in \tilde H^s(\Omega)$,\spleq{FMSE}{
\langle(-\Delta)^s_A u, v\rangle + \langle qu, v \rangle = 0\;.
}
\begin{Rem}
By \emph{(p1), (p2)} and \emph{(p4)}, formula \eqref{FMSE} makes sense. This was already partially shown in the above discussion about the magnetic fractional Laplacian. For the last term, just use \emph{(p4)}, \emph{(e2)} and \emph{(e7)}.
\end{Rem}
\para{Old gauges, new gauges} Let $(G,\cdot)$ be the abelian group of all strictly positive functions $\phi\in C^\infty(\mathbb R^n)$ such that $\phi|_{\Omega_e} = 1$. For $(A,q), (A',q') \in {\cal P}_0$, define
\begin{gather}
(A,q) \sim (A',q') \;\;\;\Leftrightarrow\;\;\; (-\Delta)^s_A u + qu = (-\Delta)^s_{A'} u + q'u\;, \label{newg} \\
(A,q) \approx (A',q') \;\;\;\Leftrightarrow\;\;\; \exists \phi \in G : (-\Delta)^s_A (u\phi) + qu\phi = \phi( (-\Delta)^s_{A'} u + q'u) \label{oldg}
\end{gather}
\noindent for all $u\in H^s(\mathbb R^n)$. Both $\sim$ and $\approx$ are equivalence relations on ${\cal P}_0$, and thus we can consider the quotient spaces ${\cal P}_0/\sim$ and ${\cal P}_0/\approx$. Moreover, since $\phi \equiv 1 \in G$, we have $(A,q) \sim (A',q') \Rightarrow (A,q) \approx (A',q')$.
We say that FMSE \emph{has the gauge} $\sim$ if for each $(A,q)\in {\cal P}_0$ there exists $(A',q')\in {\cal P}_0$ such that $(A',q')\neq(A,q)$ and $(A,q)\sim(A',q')$. Similarly, we say that FMSE \emph{has the gauge} $\approx$ if for each $(A,q)\in {\cal P}_0$ there exist $\phi\in G$, $(A',q')\in {\cal P}_0$ such that $\phi\not\equiv 1$, $(A',q')\neq(A,q)$ and $(A,q)\approx(A',q')$.
\begin{Rem}
The definitions \eqref{newg} and \eqref{oldg}, which have been given for FMSE, can be extended to the local case in the natural way.
\end{Rem}
If $s=1$, it is known that $(-\Delta)_A (u\phi) + qu\phi = \phi \left( (-\Delta)_{A+\frac{\nabla\phi}{\phi}} u + qu \right)$ for all $\phi\in G$ and $u\in H^1(\mathbb R^n)$. If we choose $\phi\not\equiv 1$, we have $\left(A+\frac{\nabla\phi}{\phi},q\right)\neq(A,q)$ and $(A,q)\approx\left(A+\frac{\nabla\phi}{\phi},q\right)$, which shows that MSE has the gauge $\approx$. On the other hand, if $(A,q)\sim(A',q')$ then necessarily $A=A'$ and $q=q'$: thus, MSE does not enjoy the gauge $\sim$. We now treat the case $s\in(0,1)$.
\begin{Lem}\label{charofnewg}
Let $(A,q), (A',q') \in {\cal P}_0$. Then $(A,q)\sim(A',q')$ if and only if $A_{a\parallel} = A'_{a\parallel}$ and $Q = Q'$, where $$ Q := q + \int_{\mathbb R^n}|A|^2\, dy + (\nabla\cdot)^sA_{s\parallel}\;,\;\;\;\;\;\; Q' := q' + \int_{\mathbb R^n}|A'|^2\, dy + (\nabla\cdot)^sA'_{s\parallel}\;. $$
\end{Lem}
\begin{proof}
One direction of the implication is trivial: by \eqref{form1} and the definition, it is clear that if $A_{a\parallel} = A'_{a\parallel}$ and $Q=Q'$ then $ (-\Delta)^s_A u + qu = (-\Delta)^s_{A'} u + q'u $.
\noindent For the other direction, use Lemma 3.2 to write $ (-\Delta)^s_A u + qu = (-\Delta)^s_{A'} u + q'u $ as
\spleq{comput1}{
0 &= 2\int_{\mathbb R^n} |\alpha|^2 (\sigma' - \sigma)(u(y)-u(x))\,dy + u(x) (Q-Q') \\ & = \mathcal C_{n,s}\int_{\mathbb R^n} (\sigma' - \sigma)\frac{u(y)-u(x)}{|x-y|^{n+2s}}\,dy + u(x) (Q-Q') \;.
}
\noindent Fix $\psi \in C^{\infty}_c(\mathbb R^n)$, $x\in \mathbb R^n$ and $u(y):= \psi(y)e^{-1/|x-y|} |x-y|^{n+2s}$; one sees that $u\in \cal S$, since it is compactly supported and smooth: all the derivatives of $e^{-1/|x-y|}$ vanish at $y=x$, which compensates for the possible lack of smoothness of $|x-y|^{n+2s}$ there. Thus $u\in H^s$, and we can substitute it in \eqref{comput1}:
$$0= \int_{\mathbb R^n} (\sigma(x,y) - \sigma'(x,y))e^{-1/|x-y|}\psi(y) \,dy = \langle (\sigma(x,\cdot) - \sigma'(x,\cdot))e^{-1/|x-y|} ,\psi\rangle\;.$$
\noindent Since $\psi$ is arbitrary and $e^{-1/|x-y|}$ is positive for $y\neq x$, we deduce that $y\mapsto\sigma(x,y) - \sigma'(x,y)$ vanishes for any fixed $x$, that is, $\sigma = \sigma'$. Then $A_{a\parallel} = A'_{a\parallel}$ by Lemma \ref{lemsigma}, and also $Q=Q'$ by \eqref{comput1}. \end{proof}
\begin{Lem}
Let $A\not\equiv 0$. Then FMSE has the gauge $\sim$.
\end{Lem}
\begin{proof}
If $(A,q) \in {\cal P}_0$ and $A'\in {\cal A}_0$ is such that $A_{a\parallel} = A'_{a\parallel}$, then by the previous Lemma $(A,q)\sim(A',q')$ if and only if $Q=Q'$, that is
$$ q' = q + \int_{\mathbb R^n}|A|^2\, dy + (\nabla\cdot)^sA_{s\parallel} - \int_{\mathbb R^n}|A'|^2\, dy - (\nabla\cdot)^sA'_{s\parallel}\;.$$
Since $A, A' \in {\cal A}_0$, we have $A_{s\parallel}, A'_{s\parallel} \in H^{sp-s}$ and $\mathcal J_2A, \mathcal J_2A' \in L^{2p}$. By the former fact, $(\nabla\cdot)^sA_{s\parallel}, (\nabla\cdot)^sA'_{s\parallel}$ belong to $H^{sp-2s}$ and thus to $L^{p}$ because of \emph{(e6)}. By the latter fact and \emph{(e5)}, $\int_{\mathbb R^n}|A|^2\, dy, \int_{\mathbb R^n}|A'|^2\, dy \in L^{p}$. Also, $q\in L^{p}$ because $(A,q)\in{\cal P}_0$. This implies that \emph{(p4)} holds for the $q'$ computed above. Hence, if we find $A'\in {\cal A}_0$ such that $A_{a\parallel} = A'_{a\parallel}$, and then take $q'$ as above, we get an $(A',q') \in {\cal P}_0$ which is related by the gauge $\sim$ to the given $(A,q) \in {\cal P}_0$. We now show how to do this with $A\neq A'$, which implies that FMSE enjoys $\sim$.
Fix $(A,q) \in {\cal P}_0$, and for the case $A_\perp\not\equiv 0$ let $A' := A_\parallel - A_\perp$. Then $A\neq A'$, because $A_\perp \neq A'_\perp$; moreover, from $A_\parallel = A'_\parallel$ we get $A_{a\parallel} = A'_{a\parallel}$ and $A'_{s\parallel} = A_{s\parallel} \in H^{sp-s}$. Furthermore, $|A'|^2 = |A'_\parallel|^2 + |A'_\perp|^2 = |A_\parallel|^2 + |-A_\perp|^2 = |A_\parallel|^2 + |A_\perp|^2 = |A|^2$ implies $\mathcal J_2A' = \mathcal J_2A$, and $A'$ verifies \emph{(p1)}. If instead we have $A_\perp\equiv 0$, let $A' = A_\parallel + RA_\parallel$, where $R$ is any $\pi/2$ rotation. Then as before $A_{a\parallel} = A'_{a\parallel}$ and $A'_{s\parallel} = A_{s\parallel} \in H^{sp-s}$, because $A_\parallel = A'_\parallel$. We also have $A\neq A'$, because $A_\perp = 0 \neq R A_\parallel = A'_\perp$. Finally, since $\mathcal J_2 A \in L^p$, $A'$ verifies \emph{(p1)}:
\spleqno{
\mathcal J_2A' & = \left( \int_{\mathbb R^n} |A'|^2 dy \right)^{1/2} = \left( \int_{\mathbb R^n} |A'_\parallel|^2 + |A'_\perp|^2 dy \right)^{1/2} \\ & = \left( \int_{\mathbb R^n} |A_\parallel|^2 + |RA_\parallel|^2 dy \right)^{1/2} = \left( \int_{\mathbb R^n} 2|A_\parallel|^2 dy \right)^{1/2} = \sqrt 2\, \mathcal J_2 A\;\qedhere.}\end{proof}
\begin{Lem}
FMSE does not have the gauge $\approx$.
\end{Lem}
\begin{proof}
Let $(A,q), (A',q') \in {\cal P}_0$ such that $(A,q)\approx(A',q')$. Then there exists $\phi\in G$ such that $(-\Delta)^s_A (u\phi) + qu\phi = \phi( (-\Delta)^s_{A'} u + q'u)$ for all $u\in H^s$. Fix $\psi \in C^{\infty}_c(\mathbb R^n)$, $x\in \mathbb R^n$ and $u(y):= \psi(y)e^{-1/|x-y|} |x-y|^{n+2s}$ as in Lemma 3.8. Then $u\in \cal S$, and by Lemma 3.3 and Remark 3.5,
\begin{align*}
0 & = \mathcal C_{n,s} \; PV\int_{\mathbb R^n} \left( \sigma(x,y) \frac{u(x)\phi(x)-u(y)\phi(y)}{|x-y|^{n+2s}} - \sigma'(x,y)\frac{u(x)\phi(x)-u(y)\phi(x)}{|x-y|^{n+2s}} \right) \, dy \\ & \;\;\;\; + u(x)\phi(x)(Q-Q') \\ & = \mathcal C_{n,s} \; PV\int_{\mathbb R^n} \frac{u(y)}{|x-y|^{n+2s}} \left( \sigma'(x,y)\phi(x)-\sigma(x,y) \phi(y) \right) \, dy \\ & = \mathcal C_{n,s} \int_{\mathbb R^n} \psi(y)e^{-1/|x-y|} \left( \sigma'(x,y)\phi(x)-\sigma(x,y) \phi(y) \right) \, dy\;.
\end{align*}
\noindent Here the principal value disappears because the integral is not singular. Given the arbitrariness of $\psi$ and the positivity of the exponential for $y\neq x$, we deduce $\sigma(x,y)\phi(y) = \sigma'(x,y)\phi(x)$ for all $y\neq x$. On the other hand, since $\sigma, \sigma'$ are symmetric and $\phi>0$, by taking the symmetric part of each side
\spleqno{
\sigma(x,y)\frac{\phi(x)+\phi(y)}{2} = ( \sigma(x,y)\phi(y) )_s = ( \sigma'(x,y)\phi(x) )_s = \sigma'(x,y)\frac{\phi(x)+\phi(y)}{2}\;.
}
\noindent This implies $\sigma=\sigma'$, and the equation can be rewritten as $\sigma(x,y) (\phi(y)-\phi(x))=0$. Since $\sigma>0$, it is clear that $\phi$ must be constant, and therefore equal to $1$. This means that whenever $(A,q), (A',q') \in {\cal P}_0$ are such that $(A,q)\approx(A',q')$ with some $\phi\in G$, then $\phi\equiv 1$, i.e. FMSE does not have the gauge $\approx$.
\end{proof}
By the last two Lemmas, FMSE enjoys $\sim$, but not $\approx$. Observe that the opposite is true for the classical magnetic Schr\"odinger equation. This surprising difference is due to the non-local nature of the operators involved: FMSE has $\sim$ because the coefficient of its gradient term is not the whole vector potential $A$, as in the classical case, but just a part of it. On the other hand, the restriction imposed by the antisymmetry of this part explains the absence of $\approx$.
\para{Bilinear form} Let $s\in(0,1),\; u,v\in H^s(\mathbb R^n)$, and define the \emph{bilinear form} $B^s_{A,q} : H^s \times H^s \rightarrow \mathbb R$ as follows:
\begin{equation*}
B^s_{A,q} [u,v] = \int_{\mathbb R^n}\int_{\mathbb R^n} \nabla^s_A u \cdot \nabla^s_A v \,dy dx + \int_{\mathbb R^n} quv \,dx\;.
\end{equation*}
\noindent Observe that by Fubini's theorem and Lemmas 3.3, 3.4
\begin{align*}
B^s_{A,q} [u,u] &= \langle(-\Delta)^s u, u\rangle + 2\langle \nabla^s u, A_{a\parallel}u \rangle + \langle Qu,u\rangle \\ & = \langle\nabla^s u, \nabla^su\rangle + 2\langle \nabla^s u, \alpha(\sigma-1)u \rangle + \langle Qu,u\rangle \\ & = \langle\nabla^s u, \nabla^su+ (\sigma-1)\alpha (u(x)-u(y))\rangle +\langle Qu,u\rangle \\ & = \langle\nabla^s u,\sigma\nabla^s u\rangle+\langle Qu,u\rangle \;.
\end{align*}
\noindent Since again by Lemma 3.4 we have $\sigma\geq 1$, for the first term
$$ \langle\nabla^s u,\sigma\nabla^s u\rangle = \int_{\mathbb R^{2n}}\sigma |\nabla^s u|^2 \, dydx \geq \int_{\mathbb R^{2n}} |\nabla^s u|^2 \, dydx = \langle(-\Delta)^s u, u\rangle\;,$$
\noindent and thus $B^s_{A,q} [u,u] \geq B^s_{0,Q}[u,u]$. Now Lemma 2.6 from \cite{RS2017} gives the well-posedness of the direct problem for FMSE, under the assumption that $0$ is not an eigenvalue for the equation: if $F \in (\tilde H^s(\Omega))^*$ then there exists a unique solution $u_F \in H^s(\Omega)$ to $B^s_{A,q}[u,v]=F(v), \; \forall v\in \tilde H^s(\Omega)$, that is, a unique $u_F\in H^s(\Omega)$ such that $(-\Delta)^s_A u_F +qu_F =F$ in $\Omega$, $u_F|_{\Omega_e}=0$. For non-zero exterior values, see e.g. \cite{Co18} and \cite{GSU2017}; one also gets the estimate
\begin{equation}\label{eq:estidirect}\|u_f\|_{H^s(\mathbb R^n)} \leq c(\|F\|_{(\tilde H^s(\Omega))^*} + \|f\|_{H^s(\mathbb R^n)})\;.\end{equation}
\begin{Lem}\label{Lembil} Let $v,w\in H^s(\mathbb R^n)$, $f,g\in H^s(\Omega_e)$ and $u_f, u_g\in H^s(\mathbb R^n)$ be such that $((-\Delta)^s_A + q) u_f =((-\Delta)^s_A + q) u_g = 0$ in $\Omega$, $u_f|_{\Omega_e} = f$ and $u_g|_{\Omega_e} = g$. Then
\begin{enumerate}
\item $B^s_{A,q}[v,w] = B^s_{A,q}[w,v]\;$ (symmetry),
\item $|B^s_{A,q}[v,w]| \leq k\|v\|_{H^s(\mathbb R^n)}\|w\|_{H^s(\mathbb R^n)}\;$,
\item $B^s_{A,q}[u_f,e_g] = B^s_{A,q}[u_g,e_f]\;$,
\end{enumerate}
\noindent where $e_g, e_f \in H^s(\mathbb R^n)$ are extensions of $g,f$ respectively.
\end{Lem}
\begin{proof} Symmetry follows immediately from the definition. For the second point, use \emph{(e2)}, \emph{(e7)} and the definition of magnetic fractional gradient to write
\spleqno{
|B^s_{A,q}[v,w]| & = |\langle \nabla^s_A v, \nabla^s_A w \rangle + \langle qv,w \rangle| \leq |\langle \nabla^s_A v, \nabla^s_A w \rangle| + |\langle qv,w \rangle| \\ & \leq \|\nabla^s_A v\|_{L^2} \|\nabla^s_A w\|_{L^2} + \|qv\|_{H^{-s}} \|w\|_{H^s} \\ & \leq k'\|v\|_{H^s} \|w\|_{H^s} + k''\|q\|_{L^{p}} \|v\|_{H^s} \|w\|_{H^s} \leq k \|v\|_{H^s} \|w\|_{H^s}\;.
}
\noindent For the third point, first compute
\spleqno{
B^s_{A,q}[u_f, u_g] & = \int_{\mathbb R^n} ((-\Delta)^s_A u_f + q u_f) u_g \,dx = \int_{\Omega_e} ((-\Delta)^s_A u_f + q u_f) u_g \,dx \\ & = \int_{\Omega_e} ((-\Delta)^s_A u_f + q u_f) e_g \,dx = B^s_{A,q}[u_f, e_g]\;, }
\noindent and then $ B^s_{A,q}[u_f, e_g] = B^s_{A,q}[u_f, u_g] = B^s_{A,q}[u_g, u_f] = B^s_{A,q}[u_g, e_f]$.
\end{proof}
\para{The DN-map and the integral identity}
\begin{Lem}\label{DNLem}
There exists a bounded, linear, self-adjoint map $\Lambda_{A,q}^s : X\rightarrow X^*$ defined by $$\langle \Lambda_{A,q}^s [f],[g] \rangle = B^s_{A,q}[u_f,g], \;\;\;\;\;\;\; \forall f,g\in H^s(\mathbb R^n)\; ,$$ \noindent where $X$ is the abstract quotient space $H^s(\mathbb R^n)/\tilde H^s(\Omega)$ and $u_f\in H^s(\mathbb R^n)$ solves $(-\Delta)^s_A u_f + qu_f = 0$ in $\Omega$ with $u_f-f\in \tilde H^s(\Omega)$.
\end{Lem}
\begin{proof}
We first prove that the tentative definition of the DN-map does not depend on the representatives of the equivalence classes involved. Let $\phi,\psi\in \tilde H^s(\Omega)$ and compute by Lemma \ref{Lembil}
\spleqno{
B^s_{A,q}[u_{f+\phi}, g+\psi] & = \int_{\Omega_e} (g+\psi) ((-\Delta)^s_A + q)u_{f+\phi} \, dx \\ & = \int_{\Omega_e} g ((-\Delta)^s_A + q)u_{f} \, dx = B^s_{A,q}[u_{f}, g]\;.
}
\noindent The $\psi$ disappears because it vanishes in $\Omega_e$, while the $\phi$ actually plays no role, since $f=f+\phi$ over $\Omega_e$ implies $u_{f+\phi}=u_f$. The boundedness of $\Lambda_{A,q}^s$ follows from Lemma \ref{Lembil} and \eqref{eq:estidirect}: first compute
\spleqno{
|\langle \Lambda_{A,q}^s [f],[g] \rangle| = |B^s_{A,q}[u_f,g]| \leq k\|u_f\|_{H^s}\|g\|_{H^s} \leq c\|f\|_{H^s}\|g\|_{H^s}\;,
}
\noindent for all $f\in [f],\;g\in [g]$, and then observe that this implies $$ |\langle \Lambda_{A,q}^s [f],[g] \rangle| \leq k \inf_{f\in [f]}\|f\|_{H^s}\inf_{g\in [g]}\|g\|_{H^s}= k\|[f]\|_X\|[g]\|_X\;. $$
\noindent Finally, we prove the self-adjointness using Lemma \ref{Lembil} again: $$ \langle \Lambda_{A,q}^s [f],[g] \rangle = B^s_{A,q} [u_f,e_g] = B^s_{A,q}[u_g,e_f] = \langle \Lambda_{A,q}^s [g],[f] \rangle = \langle [f],\Lambda_{A,q}^s [g] \rangle\;.\qedhere $$
\end{proof}
\vspace{2mm}
The DN-map will now be used to prove an integral identity.
\begin{Lem}
Let $(A_1, q_1), (A_2, q_2) \in {\cal P}, \;f_1, f_2$ be exterior data belonging to $H^s(\mathbb R^n)$ and $u_i \in H^s(\mathbb R^n)$ be the solution of $(-\Delta)^s_{A_i} u_i + q_i u_i = 0$ with $u_i - f_i \in \tilde H^s(\Omega)$ for $i=1,2$. The following integral identity holds:
\spleq{intid}{
\langle (\Lambda^s_{A_1,q_1} & - \Lambda^s_{A_2,q_2}) f_1, f_2 \rangle = \\ & = 2\Big\langle \int_{\mathbb R^n} ((A_1)_{a\parallel} - (A_2)_{a\parallel})\cdot \nabla^su_1\, dy, u_2 \Big\rangle + \langle (Q_1-Q_2)u_1, u_2 \rangle\;.
}
\end{Lem}
\begin{proof}
The proof is a computation based on the results of Lemmas \ref{DNLem} and 3.3:
\spleqno{
\langle (\Lambda^s_{A_1,q_1} & - \Lambda^s_{A_2,q_2}) f_1, f_2 \rangle = B^s_{A_1, q_1}[u_1,u_2] - B^s_{A_2,q_2}[u_1,u_2] \\ & = \langle \nabla^s u_1, \nabla^s u_2 \rangle + 2\Big\langle \int_{\mathbb R^n} (A_1)_{a\parallel} \cdot \nabla^su_1\, dy, u_2 \Big\rangle + \langle Q_1u_1, u_2 \rangle - \\ & \;\;\;\;\; - \langle \nabla^s u_1, \nabla^s u_2 \rangle - 2\Big\langle \int_{\mathbb R^n} (A_2)_{a\parallel} \cdot \nabla^su_1\, dy, u_2 \Big\rangle - \langle Q_2u_1, u_2 \rangle \\ & = 2\Big\langle \int_{\mathbb R^n} ((A_1)_{a\parallel} - (A_2)_{a\parallel})\cdot \nabla^su_1\, dy, u_2 \Big\rangle + \langle (Q_1-Q_2)u_1, u_2 \rangle\;.\qedhere }
\end{proof}
\para{The WUCP and the RAP} Let $W \subseteq \Omega_e$ be open and $u\in H^s(\mathbb R^n)$ be such that $u=0$ and $(-\Delta)^s_A u + qu = 0$ in $W$. If this implies that $u=0$ in $\Omega$ as well, we say that FMSE has the WUCP. It is known that the WUCP holds if both $A$ and $q$ vanish, that is, in the case of the fractional Laplace equation (see \cite{RS2017}).
Let $\mathcal R = \{ u_f|_{\Omega} : f \in C^\infty_c(W) \} \subset L^2(\Omega)$ be the set of the restrictions to $\Omega$ of those functions $u_f$ solving FMSE for some smooth exterior value $f$ supported in $W$. If $\cal R$ is dense in $L^2(\Omega)$, we say that FMSE has the RAP.
\begin{Rem}
The WUCP and the RAP are non-local properties. For example, the RAP shows a certain freedom of the solutions to fractional PDEs, since it states that they can approximate any $L^2$ function. This is not the case for a local operator, e.g. the classical Laplacian, whose solutions are much more rigid.
\end{Rem}
\begin{Lem}
The WUCP implies the RAP in the case of FMSE.
\end{Lem}
\begin{proof} We follow the spirit of the analogous Lemma of \cite{GSU2017}. Let $v\in L^2(\Omega)$, and assume that $\langle v,w\rangle =0$ for all $w\in \cal R$. Then if $f\in C^\infty_c(W)$ and $\phi\in \tilde H^s(\Omega)$ solves $(-\Delta)^s_A \phi + q\phi = v$ in $\Omega$, we have
\spleqno{
0 & = \langle v, u_f|_\Omega \rangle = \langle v, u_f - f \rangle = \int_{\mathbb R^n} v(u_f-f) \, dx \\ & = \int_{\Omega} v(u_f-f) \, dx = \int_{\Omega} ((-\Delta)^s_A \phi + q\phi )(u_f-f) \, dx \\ & = \int_{\mathbb R^n} ((-\Delta)^s_A \phi + q\phi )(u_f-f) \, dx \\ & = B^s_{A,q}[\phi, u_f] - \int_{\mathbb R^n} ((-\Delta)^s_A \phi + q\phi ) f \, dx \;.
}
\noindent However, $B^s_{A,q}[\phi, u_f] = \int_{\mathbb R^n} ((-\Delta)^s_A u_f + qu_f ) \phi \, dx = 0$, and so $\int_{\mathbb R^n} ((-\Delta)^s_A \phi + q\phi ) f \, dx=0$. Given the arbitrariness of $f\in C^\infty_c(W)$, this implies that $(-\Delta)^s_A \phi + q\phi =0$ in $W$. Now we use the WUCP: from $(-\Delta)^s_A \phi + q\phi =0$ and $\phi=0$ in $W$, an open subset of $\Omega_e$, we deduce that $\phi=0$ in $\Omega$ as well. By the definition of $\phi$ and the fact that $v\in L^2(\Omega)$ it now follows that $v\equiv 0$. Thus if $\langle v,w\rangle =0$ holds for all $w\in \cal R$, then $v\in L^2(\Omega)$ must vanish; by the Hahn-Banach theorem this implies that $\cal R$ is dense in $L^2(\Omega)$. \end{proof}
\section{Main results}
\para{The inverse problem} We prove Theorem 1.1 under the assumption $(A,q)\in\cal P$, while for all the previous results we only required $(A,q)\in {\cal P}_0$. We find that \emph{(p5)} makes physical sense, as the random walk interpretation of FMSE suggests; however, we leave the consideration of the general case to future work.
\noindent By \emph{(p5)} and Lemma 3.5 we easily deduce that $\sigma(x,y)\equiv 1$ whenever $(x,y)\not\in\Omega^2$, since in this case $A_{a\parallel}(x,y)=0$. Another consequence of \emph{(p5)} is:
\begin{Lem}
Let $(A,q)\in\cal P$. Then FMSE enjoys the WUCP.
\end{Lem}
\begin{proof}
Suppose $W\subseteq\Omega_e$ is such that $u(x)=0$ and $(-\Delta)^s_Au(x)+q(x)u(x) = 0$ when $x\in W$. Then for $x\in W$ we have $A_{a\parallel}(x,y)=0$ by \emph{(p5)}, and by Lemma 3.3 $(-\Delta)^s u(x) =0$ in $W$. Now the known WUCP for the fractional Laplacian (\cite{RS2017}) gives the result. \end{proof}
We are ready to solve the inverse problem, which we restate here:
\noindent \textbf{Theorem 1.1.} \emph{Let $\Omega \subset \mathbb R^n, \; n\geq 2$ be a bounded open set, $s\in(0,1)$, and let $(A_i,q_i) \in \cal P$ for $i=1,2$. Suppose $W_1, W_2\subset \Omega_e$ are open sets, and that the DN maps for the FMSEs in $\Omega$ relative to $(A_1,q_1)$ and $(A_2,q_2)$ satisfy $$\Lambda^s_{A_1,q_1}[f]|_{W_2}=\Lambda^s_{A_2,q_2}[f]|_{W_2}, \;\;\;\;\; \forall f\in C^\infty_c(W_1)\;.$$ \noindent Then $(A_1,q_1)\sim(A_2,q_2)$, that is, the potentials coincide up to the gauge $\sim$. }
\begin{proof}
Without loss of generality, let $W_1 \cap W_2 = \emptyset$. Let $f_i \in C^\infty_c(W_i)$, and let $u_i \in H^s(\mathbb R^n)$ solve $(-\Delta)^s_{A_i} u_i + q_i u_i = 0$ with $u_i - f_i \in \tilde H^s(\Omega)$ for $i=1,2$. Since the DN maps computed on $f\in C^\infty_c(W_1)$ coincide when restricted to $W_2$, the integral identity \eqref{intid} gives \emph{Alessandrini's identity}:
\spleq{Ale1}{ 0 & = \langle (\Lambda^s_{A_1,q_1} - \Lambda^s_{A_2,q_2}) f_1, f_2 \rangle \\ & = 2\Big\langle \int_{\mathbb R^n} ((A_1)_{a\parallel} - (A_2)_{a\parallel})\cdot \nabla^su_1\, dy, u_2 \Big\rangle + \langle (Q_1-Q_2)u_1, u_2 \rangle\;. }
\noindent We can refine \eqref{Ale1} by substituting every instance of $u_i$ with $u_i|_{\Omega}$. In fact, since $u_i$ is supported in $\Omega\cup W_i$ and $(\Omega\cup W_1)\cap(\Omega\cup W_2)=\Omega$,
\spleqno{ \langle (Q_1-Q_2)u_1, u_2 \rangle & = \int_{\mathbb R^n} u_1 u_2 (Q_1-Q_2)\; dx = \int_{\Omega} u_1 u_2 (Q_1-Q_2)\; dx \\ & = \int_{\Omega} u_1|_\Omega u_2|_\Omega (Q_1-Q_2)\; dx = \int_{\mathbb R^n} u_1|_\Omega u_2|_\Omega (Q_1-Q_2)\; dx .}
\noindent Moreover, by property \emph{(p5)},
\spleqno{
\Big\langle \int_{\mathbb R^n} & \nabla^su_1 \cdot ((A_1)_{a\parallel} - (A_2)_{a\parallel}) \, dy, u_2 \Big\rangle = \\ & = \int_{\mathbb R^n} u_2 \int_{\mathbb R^n} ((A_1)_{a\parallel} - (A_2)_{a\parallel})\cdot \nabla^su_1\, dy \, dx \\ & = \int_{\mathbb R^n} u_2(x) \int_{\mathbb R^n} (\sigma_1(x,y)-\sigma_2(x,y)) \,|\alpha|^2 (u_1(x)-u_1(y))\, dy \, dx \\ & = \int_{\Omega} (u_2|_\Omega)(x) \int_{\Omega} (\sigma_1(x,y)-\sigma_2(x,y)) \,|\alpha|^2 \Big((u_1|_\Omega)(x)-(u_1|_\Omega)(y)\Big)\, dy \, dx\;.
}
\noindent Altogether we get
\spleq{Ale2}{
0 & =2\int_{\mathbb R^n} (u_2|_\Omega)(x) \int_{\mathbb R^n} (\sigma_1(x,y)-\sigma_2(x,y)) \,|\alpha|^2 \Big((u_1|_\Omega)(x)-(u_1|_\Omega)(y)\Big)\, dy \, dx + \\& \;\;\; + \int_{\mathbb R^n} u_1|_\Omega u_2|_\Omega (Q_1-Q_2)\; dx \;.
}
\noindent The RAP holds by Lemmas 3.18 and 3.19. Fix any $f\in L^2(\Omega)$, and let $f_i^{(k)} \in C^\infty_c(W_i)$ for $i=1,2$ and $k\in \mathbb N$ be such that $u_1^{(k)}|_\Omega\rightarrow 1$, $u_2^{(k)}|_\Omega\rightarrow f$ in $L^2$. Inserting these solutions in \eqref{Ale2} and taking the limit as $k\rightarrow\infty$ implies that $\int_{\mathbb R^n} f (Q_1-Q_2)\; dx =0$, so that, given that $f\in L^2(\Omega)$ is arbitrary, we deduce $Q_1(x) = Q_2(x)$ for $x\in\Omega$. Coming back to \eqref{Ale2}, we can write
\spleqno{ \int_{\mathbb R^n} (u_2|_\Omega)(x) & \int_{\mathbb R^n} (\sigma_1(x,y)-\sigma_2(x,y)) \frac{(u_1|_\Omega)(x)-(u_1|_\Omega)(y)}{|x-y|^{n+2s}}\, dy \, dx = 0, }
\noindent where $u_i \in H^s(\mathbb R^n)$ once again solves $(-\Delta)^s_{A_i} u_i + q_i u_i = 0$ with $u_i - f_i \in \tilde H^s(\Omega)$ for some $f_i \in C^\infty_c(W_i)$ and $i=1,2$. Choosing $u_2^{(k)}|_\Omega\rightarrow f$ in $L^2$ for some arbitrary $f\in L^2$, by the same argument
$$ \int_{\mathbb R^n} (\sigma_1(x,y)-\sigma_2(x,y)) \frac{(u_1|_\Omega)(x)-(u_1|_\Omega)(y)}{|x-y|^{n+2s}}\, dy = 0 $$
\noindent for $x\in \Omega$. Fix now some $x \in \Omega$ and an arbitrary $\psi\in C^\infty_c(\Omega)$. Since $g(y):= \psi(y)e^{-1/|x-y|}|x - y|^{n+2s} \in {\cal S} \subset L^2(\Omega)$ as in Lemma \ref{charofnewg}, by the RAP we find a sequence $u_1^{(k)}|_\Omega \rightarrow g$. Substituting these solutions and taking the limit,
$$ \int_{\mathbb R^n} (\sigma_1(x,y)-\sigma_2(x,y))\psi(y)e^{-1/|x-y|}\, dy = 0\;. $$
\noindent Thus we conclude that for all $x\in \Omega$ we must have $\sigma_1(x,y)=\sigma_2(x,y)$ for all $y\in\Omega$, i.e. $\sigma_1 = \sigma_2$ over $\Omega^2$. But then $\sigma_1$ and $\sigma_2$ coincide everywhere, because they are both 1 in $\mathbb R^{2n}\setminus \Omega^2$. This means that $(A_1)_{a\parallel}=(A_2)_{a\parallel}$. Moreover, since by \emph{(p2)}, \emph{(p4)} and \emph{(p5)} we have $Q_1 = 0 = Q_2$ over $\Omega_e$, by the argument above $Q_1 = Q_2$ everywhere. It thus follows from Lemma \ref{charofnewg} that $(A_1,q_1)\sim(A_2,q_2)$.
\end{proof}
\section{A random walk interpretation for FMSE}
Diffusion phenomena can often be seen as continuous limits of random walks. The classical result for the Laplacian was extended in \cite{valdi} to the fractional one by considering long jumps. Similarly, the fractional conductivity equation was shown in \cite{Co18} to arise from a long jump random walk with weight $\gamma^{1/2}$, where $\gamma$ is the conductivity. We now show how the leading term in FMSE is itself the limit of a long jump random walk with weights. For simplicity, here we take $\sigma$ as smooth and regular as needed. Let $h>0,\; \tau=h^{2s}, \; k\in \mathbb{Z}^n$, $x\in h\mathbb{Z}^n$ and $t\in \tau\mathbb{Z}$. We consider a random walk on $h\mathbb{Z}^n$ with time steps from $\tau\mathbb{Z}$. Define
\begin{equation*}
f(x,k) :=
\begin{cases}
\sigma(x,x+hk) |k|^{-n-2s} & \mbox{if} \quad k\neq 0 \\
0 & \mbox{if} \quad k=0
\end{cases}\;,
\end{equation*}
\noindent and then observe that $\forall x\in h\mathbb{Z}^n$
\begin{equation*}
\begin{split}
\sum_{k\in\mathbb{Z}^n} f(x,k) & = \sum_{k\in\mathbb{Z}^n \setminus \{0\}} f(x,k) = \sum_{k\in\mathbb{Z}^n \setminus \{0\}} \sigma(x,x+hk) |k|^{-n-2s} \\ & \leq \|\sigma\|_{L^\infty} \sum_{k\in\mathbb{Z}^n \setminus \{0\}} |k|^{-n-2s} < \infty\;.
\end{split}
\end{equation*}
\noindent Thus we can normalize $f(x,k)$, and get the new function $P(x,k)$
\begin{equation*}
P(x,k) :=
\begin{cases}
\left( \sum_{j\in\mathbb{Z}^n} f(x,j) \right)^{-1} \sigma(x,x+hk) |k|^{-n-2s} & \mbox{if} \quad k\neq 0 \\
0 & \mbox{if} \quad k=0
\end{cases}\;.
\end{equation*}
\noindent $P(x,k)$ takes values in $[0,1]$ and verifies $\sum_{k\in\mathbb{Z}^n} P(x,k)=1$; we interpret it as the probability that a particle will jump from $x+hk$ to $x$ in the next step.
\begin{Rem}
Let us compare $P(x,k)$ for the fractional Laplacian, conductivity and magnetic Laplacian operators. In each case the factor $|k|^{-n-2s}$ makes long jumps less likely. The fractional Laplacian, which has $\sigma(x,y)\equiv 1$, treats all the points of $\mathbb R^n$ equally: no point is intrinsically more likely to be reached at the next jump. The fractional conductivity operator, which has $\sigma(x,y) = \sqrt{\gamma(x)\gamma(y)}$, distinguishes the points of $\mathbb R^n$: those with high conductivity are more likely to be reached; the conductivity field is, however, independent of the current position of the particle. The magnetic fractional Laplacian operator has no special $\sigma(x,y)$, and it distinguishes the points of $\mathbb R^n$ in a more subtle way, as the effective conductivity field depends on the position of the particle: the same point may have a high conductivity if the particle is at $x$ and a low one if it is at $y$.
\end{Rem}
\begin{Rem}
We now see why $\sigma>0$ and $\sigma(x,y)=1$ if $(x,y)\not\in\Omega^2$: these are needed for $y\mapsto \sigma(x,y)$ to be a conductivity as in \cite{Co18} for all $x\in\mathbb R^n$.
\end{Rem}
\noindent Let $u(x,t)$ be the probability that the particle is at point $x$ at time $t$. Then \begin{equation*} u(x, t+\tau) = \sum_{k\in\mathbb{Z}^n\setminus\{0\}}P(x,k)u(x+hk,t) \;\;. \end{equation*}
\noindent We can compute $\partial_t u(x,t)$ as the limit as $\tau\rightarrow 0$ of the difference quotients, and then substitute the above formula (see \cite{Co18}). As the resulting sum approximates a Riemann integral, we get that for some constant $C>0$
$$ \partial_t u(x,t) = C \int_{\mathbb R^n} \sigma(x,y) \frac{u(y,t)-u(x,t)}{|x-y|^{n+2s}}\, dy\;. $$
\noindent If $u(x,t)$ is independent of $t$, the leading term of FMSE is recovered.
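\noindent As a purely illustrative numerical sketch (not part of the derivation above), the master equation $u(x,t+\tau)=\sum_k P(x,k)\,u(x+hk,t)$ can be iterated directly on a truncated, periodic lattice; the weight $\sigma$ used below is a hypothetical smooth choice with $\sigma\geq 1$, inserted only to make the example self-contained.
\begin{verbatim}
# Minimal sketch of the long-jump random walk (assumed sigma, 1D lattice).
import numpy as np

n, s, h = 1, 0.6, 0.05      # dimension, fractional order, lattice spacing
N, K = 400, 100             # number of sites, truncation of the jump range
xs = h * (np.arange(N) - N // 2)

def sigma(x, y):            # hypothetical smooth weight, close to 1 far away
    return 1.0 + np.exp(-x**2) * np.exp(-y**2)

ks = np.concatenate([np.arange(-K, 0), np.arange(1, K + 1)])
kernel = np.abs(ks).astype(float) ** (-(n + 2 * s))

# P[i, j] = probability of jumping from site x_i + h*k_j to x_i
W = np.array([[sigma(x, x + h * k) for k in ks] for x in xs]) * kernel
P = W / W.sum(axis=1, keepdims=True)

u = np.exp(-xs**2 / 0.1)    # initial profile
for _ in range(50):         # iterate the master equation, tau = h^(2s)
    u = np.array([P[i] @ u[(i + ks) % N] for i in range(N)])
print(u.sum(), u.max())
\end{verbatim}
\noindent The truncation $|k|\leq K$ and the periodic wrap-around are only discretization conveniences and play no role in the continuum limit described above.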
\section{One slight generalization}
\noindent We now briefly consider a fractional magnetic \emph{conductivity} equation (FMCE) and show that it shares features similar to those of FMSE. Let $(A,q)\in \cal P$ and let $\gamma$ be a conductivity in the sense of \cite{Co18}. Consider $u\in H^s(\mathbb R^n)$. Since $\nabla^s_A: H^s(\mathbb R^n) \rightarrow L^2(\mathbb R^{2n})$, if $\Theta(x,y):= \sqrt{\gamma(x)\gamma(y)}\,\mathrm{Id}$, then by the properties of $\gamma$ we know that $\Theta \cdot\nabla^s_A u \in L^2(\mathbb R^{2n})$. Thus we define the \emph{fractional magnetic conductivity operator}
$$ \mathrm C^s_{\gamma,A} u(x) := (\nabla\cdot)^s_A(\Theta\cdot\nabla^s_A u)(x)\;,\;\;\;\;\;\;\;\mathrm C^s_{\gamma,A} : H^s(\mathbb R^n) \rightarrow H^{-s}(\mathbb R^n)\;. $$
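\noindent For instance, if $A\equiv 0$ this reduces to $(\nabla\cdot)^s(\Theta\cdot\nabla^s u)$, that is, to the fractional conductivity operator considered in \cite{Co18}, while for $\gamma\equiv 1$ one recovers the magnetic fractional Laplacian $(-\Delta)^s_A$.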
\noindent We say that $u\in H^s(\mathbb R^n)$ solves the FMCE with exterior value $f\in H^s(\Omega_e)$ if
$$ \left\{\begin{array}{lr}
\mathrm C^s_{\gamma,A} u(x) + q(x)u(x) = 0 & \text{in } \Omega\,\;\\
u=f & \text{in } \Omega_e
\end{array}\right.$$
\noindent holds in weak sense.
\begin{Lem}
Let $u\in H^s(\mathbb R^n)$, $g\in H^s(\Omega_e)$, $w=\gamma^{1/2}u$ and $f=\gamma^{1/2}g$. Moreover, let $(A,q)\in {\cal P}$ and
\spleqno{q' := q'_{\gamma,A,q} & = \frac{q}{\gamma} - (\nabla\cdot)^s A_{s\parallel} + \frac{(\nabla\cdot)^s(A \gamma^{1/2}(y))}{\gamma^{1/2}(x)} -\frac{(-\Delta)^s(\gamma^{1/2})}{\gamma^{1/2}(x)} +\\ & + \int_{\mathbb R^n} \Big( - \frac{\nabla^s(\gamma^{1/2}) \cdot A}{\gamma^{1/2}(x)} + |A|^2 \Big(\frac{\gamma^{1/2}(y)}{\gamma^{1/2}(x)}-1\Big) \Big) \,dy\;.}
\noindent FMCE with potentials $(A,q)$, conductivity $\gamma$ and exterior value $g$ is solved by $u$ if and only if $w$ solves FMSE with potentials $(A,q')$ and exterior value $f$, i.e.
\begin{equation*}
\left\{\begin{array}{lr}
\mathrm C^s_{\gamma,A} u + qu = 0 & \text{in } \Omega\,\;\\
u=g & \text{in } \Omega_e
\end{array}\right. \;\;\Leftrightarrow\;\;
\left\{\begin{array}{lr}
(-\Delta)^s_A w +q'w=0 & \text{in } \Omega\,\;\\
w=f & \text{in } \Omega_e
\end{array}\right. \;.
\end{equation*}
\noindent \emph{Moreover, the following formula holds for all $w\in H^s(\mathbb R^n)$:}
$$ \mathrm C^s_{\gamma,A} (\gamma^{-1/2}w) + q\gamma^{-1/2}w = \gamma^{1/2} \Big((-\Delta)^s_A +q'\Big)w\,. $$
\end{Lem}
\begin{proof}
Let us start with some preliminary computations. Writing $\gamma^{1/2}=1+m$, one sees that
\spleqno{
\nabla^s w & = \nabla^s (\gamma^{1/2}u) = \nabla^s u + \nabla^s(mu) = \nabla^s u + m(y)\nabla^s u + u(x)\nabla^s m \\ & = \gamma^{1/2}(y) \nabla^s u + u(x)\nabla^s (\gamma^{1/2}) = \gamma^{1/2}(y) \nabla^s u + w(x)\frac{\nabla^s (\gamma^{1/2})}{\gamma^{1/2}(x)}\;,
} \noindent from which $\nabla^s u = \frac{\nabla^s w}{\gamma^{1/2}(y)}- w(x)\frac{\nabla^s(\gamma^{1/2})}{\gamma^{1/2}(x)\gamma^{1/2}(y)}$, and hence
\spleq{prel1}{
\nabla^s_A u = \frac{\nabla^s w}{\gamma^{1/2}(y)}- w(x)\frac{\nabla^s(\gamma^{1/2})}{\gamma^{1/2}(x)\gamma^{1/2}(y)} + A(x,y)\frac{w(x)}{\gamma^{1/2}(x)}\;.
}
\noindent By the definition of magnetic fractional divergence, if $v\in H^s(\mathbb R^n)$,
\spleqno{
\langle (\nabla\cdot)^s_A&(\Theta\cdot\nabla^s_A u),v \rangle = \langle \gamma^{1/2}(x)\gamma^{1/2}(y)\nabla^s_A u, \nabla^s_A v \rangle
\\ & = \langle \gamma^{1/2}(x)\gamma^{1/2}(y)\nabla^s_A u, \nabla^s v \rangle + \langle \gamma^{1/2}(x)\gamma^{1/2}(y)\nabla^s_A u, A v \rangle \\ & = \langle \gamma^{1/2}(x)\gamma^{1/2}(y)\nabla^s_A u, \nabla^s v \rangle + \Big\langle \int_{\mathbb R^n}\gamma^{1/2}(y)\nabla^s_A u \cdot A \,dy, \gamma^{1/2} v \Big\rangle\;.
}
\noindent Applying formula \eqref{prel1}, we get
\begin{align}
\langle (\nabla&\cdot)^s_A(\Theta\cdot\nabla^s_A u),v \rangle = \langle \gamma^{1/2}(x)\nabla^s w, \nabla^s v \rangle
+ \langle w(x) (A(x,y) \gamma^{1/2}(y) -\nabla^s(\gamma^{1/2}) ) , \nabla^s v \rangle \nonumber\\ & \;\;\;\;\; + \Big\langle \int_{\mathbb R^n}\gamma^{1/2}(y)\Big( \frac{\nabla^s w}{\gamma^{1/2}(y)}- w(x)\frac{\nabla^s(\gamma^{1/2})}{\gamma^{1/2}(x)\gamma^{1/2}(y)} + A(x,y)\frac{w(x)}{\gamma^{1/2}(x)} \Big) \cdot A \,dy, \gamma^{1/2} v \Big\rangle \; \nonumber\\ & = \langle \gamma^{1/2}(x)\nabla^s w, \nabla^s v \rangle
+ \langle w(x) (A(x,y) \gamma^{1/2}(y) -\nabla^s(\gamma^{1/2}) ) , \nabla^s v \rangle \label{mainpart} \\ & \;\;\;\;\; + \Big\langle \int_{\mathbb R^n} \Big( \nabla^s w \cdot A - w(x)\frac{\nabla^s(\gamma^{1/2}) \cdot A}{\gamma^{1/2}(x)} + |A|^2 w(x) \frac{\gamma^{1/2}(y)}{\gamma^{1/2}(x)} \Big) \,dy, \gamma^{1/2} v \Big\rangle\;.\nonumber\end{align}
\noindent We treat the resulting terms separately. For the first one, by symmetry,
\begin{align}
\langle \gamma^{1/2}&(x)\nabla^s w, \nabla^s v \rangle = \langle \nabla^s w, \gamma^{1/2}(x)\nabla^s v \rangle = \langle \nabla^s w, \nabla^s(v\gamma^{1/2})-v(y)\nabla^s (\gamma^{1/2}) \rangle \nonumber \\ & = \langle (-\Delta)^s w, v\gamma^{1/2} \rangle - \langle \nabla^s w, v(y) \nabla^s(\gamma^{1/2}) \rangle = \langle (-\Delta)^s w, v\gamma^{1/2} \rangle - \langle \nabla^s w, v(x) \nabla^s(\gamma^{1/2}) \rangle \nonumber \\ & = \langle (-\Delta)^s w, v\gamma^{1/2} \rangle - \Big\langle \int_{\mathbb R^n} \nabla^s w\cdot \frac{\nabla^s(\gamma^{1/2})}{\gamma^{1/2}(x)}\, dy, \gamma^{1/2}v \Big\rangle\;. \label{firstpart}
\end{align}
\noindent For the second part of \eqref{mainpart}, we will compute as follows:
\begin{align}
& \langle A(x,y) \gamma^{1/2}(y) -\nabla^s(\gamma^{1/2}) , w(x)\nabla^s v \rangle = \nonumber \\ & = \langle A(x,y) \gamma^{1/2}(y) -\nabla^s(\gamma^{1/2}) , \nabla^s (vw) - v(y)\nabla^s w \rangle \nonumber \\ & = \Big\langle (\nabla\cdot)^s\Big(A(x,y) \gamma^{1/2}(y) -\nabla^s(\gamma^{1/2})) , vw \Big\rangle - \Big\langle \Big(A(x,y) \gamma^{1/2}(y) -\nabla^s(\gamma^{1/2})\Big) v(y) , \nabla^s w \Big\rangle \nonumber \\ & = \Big\langle \Big( \frac{(\nabla\cdot)^s(A \gamma^{1/2}(y))}{\gamma^{1/2}(x)} -\frac{(-\Delta)^s(\gamma^{1/2})}{\gamma^{1/2}(x)} \Big) w(x) , v\gamma^{1/2} \Big\rangle - \Big\langle \Big(A(y,x) \gamma^{1/2}(x) -\nabla^s(\gamma^{1/2})\Big) v(x) , \nabla^s w \Big\rangle \nonumber \\ & = \Big\langle \Big( \frac{(\nabla\cdot)^s(A \gamma^{1/2}(y))}{\gamma^{1/2}(x)} -\frac{(-\Delta)^s(\gamma^{1/2})}{\gamma^{1/2}(x)} \Big) w(x) , v\gamma^{1/2} \Big\rangle - \label{secondpart} \\ & \;\;\;\;\; - \Big\langle \int_{\mathbb R^n}A(y,x) \cdot \nabla^s w\, dy , v \gamma^{1/2} \Big\rangle + \Big\langle \int_{\mathbb R^n} \frac{\nabla^s(\gamma^{1/2})}{\gamma^{1/2}(x)} \cdot \nabla^s w \, dy , v\gamma^{1/2} \Big\rangle\;.\nonumber
\end{align}
\noindent Substituting \eqref{firstpart} and \eqref{secondpart} into \eqref{mainpart}, we conclude the proof:
\begin{align*} \langle &(\nabla\cdot)^s_A (\Theta\cdot\nabla^s_A u),v \rangle = \langle (-\Delta)^s w, v\gamma^{1/2} \rangle - \Big\langle \int_{\mathbb R^n} \nabla^s w\cdot \frac{\nabla^s(\gamma^{1/2})}{\gamma^{1/2}(x)}\, dy, \gamma^{1/2}v \Big\rangle + \\ & \;\;\;\;\; + \Big\langle \Big( \frac{(\nabla\cdot)^s(A \gamma^{1/2}(y))}{\gamma^{1/2}(x)} -\frac{(-\Delta)^s(\gamma^{1/2})}{\gamma^{1/2}(x)} \Big) w(x) , v\gamma^{1/2} \Big\rangle - \\ & \;\;\;\;\; - \Big\langle \int_{\mathbb R^n}A(y,x) \cdot \nabla^s w\, dy , v \gamma^{1/2} \Big\rangle + \Big\langle \int_{\mathbb R^n} \frac{\nabla^s(\gamma^{1/2})}{\gamma^{1/2}(x)} \cdot \nabla^s w \, dy , v\gamma^{1/2} \Big\rangle + \\ & \;\;\;\;\; + \Big\langle \int_{\mathbb R^n} \Big( \nabla^s w \cdot A - w(x)\frac{\nabla^s(\gamma^{1/2}) \cdot A}{\gamma^{1/2}(x)} + |A|^2 w(x) \frac{\gamma^{1/2}(y)}{\gamma^{1/2}(x)} \Big) \,dy, \gamma^{1/2} v \Big\rangle \\ & = \Big\langle (-\Delta)^s w + 2 \int_{\mathbb R^n}A_{a\parallel} \cdot \nabla^s w\, dy + w(x)\Big(\int_{\mathbb R^n} |A|^2\, dy + (\nabla\cdot)^s A_{s\parallel} \Big) , v\gamma^{1/2} \Big\rangle + \\ & \;\;\;\;\; + \Big\langle \left\{ - (\nabla\cdot)^s A_{s\parallel} + \frac{(\nabla\cdot)^s(A \gamma^{1/2}(y))}{\gamma^{1/2}(x)} -\frac{(-\Delta)^s(\gamma^{1/2})}{\gamma^{1/2}(x)}+ \right. \\ & \;\;\;\;\; \;\;\;\;\;\left.+ \int_{\mathbb R^n} \Big( - \frac{\nabla^s(\gamma^{1/2}) \cdot A}{\gamma^{1/2}(x)} + |A|^2 \Big(\frac{\gamma^{1/2}(y)}{\gamma^{1/2}(x)}-1\Big) \Big) \,dy \right\} w(x) , v\gamma^{1/2} \Big\rangle \\ & = \langle (-\Delta)^s_A w + (q'-q/\gamma)w, v\gamma^{1/2} \rangle\;. \qedhere\end{align*}\end{proof}
\noindent Thus the FMCEs can be reduced to FMSEs; hence, we know that FMCE enjoys the same gauges as FMSE, and most importantly we can consider and solve an analogous inverse problem.
\vspace{3mm}
\noindent {\bf Acknowledgements.} This work is part of the PhD research of the author, who was partially supported by the European Research Council under Horizon 2020 (ERC CoG 770924). The author wishes to express his sincere gratitude to Professor Mikko Salo for his reliable guidance and constructive discussion in the making of this work.
\section{Introduction}
Soft-gluon resummation formalisms for top quark production and other
hard scattering cross sections beyond leading logarithms are formulated in
terms of exponentials of soft anomalous dimensions that control noncollinear
soft-gluon emission (see e.g. \cite{NKrev}).
The calculations of these anomalous dimensions \cite{NKGS,NKprl} are
performed using the eikonal approximation, which is valid for descibing the
emission of soft gluons from partons in the hard scattering.
The approximation leads to a simplified form of the Feynman rules by
removing the Dirac matrices from the calculation.
When the gluon momentum, $k$, goes to zero,
the Feynman rule for the quark-gluon vertex reduces to
$g_s T_F^c\, v^{\mu} /( v\cdot k)$
with $v$ a dimensionless velocity vector.
Here we calculate diagrams and derive the soft anomalous dimension
for top quark pair production via
$e^+ e^-\rightarrow t {\bar t}$ through two loops \cite{NKprl}.
We also discuss the massless limit and extensions to other processes.
Writing the soft anomalous dimension $\Gamma_S$ as a series in $\alpha_s$
\begin{equation}
\Gamma_S=\frac{\alpha_s}{\pi} \Gamma_S^{(1)}
+\left(\frac{\alpha_s}{\pi}\right)^2 \Gamma_S^{(2)}+\cdots
\end{equation}
we calculate the one- and two-loop expressions for $\Gamma_S$.
We use the Feynman gauge and we employ dimensional
regularization with $n=4-\epsilon$ dimensions.
The soft anomalous dimension can be determined from the coefficients
of the ultraviolet (UV) poles of the eikonal diagrams.
We do not include the color factors in the individual diagrams below, but
take them into account when assembling all the pieces together in the
last section. The Appendix lists some of the integrals calculated for the
evaluation of the diagrams.
\section{One-loop diagrams}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{loopg1.eps}
\caption{One-loop diagrams with heavy-quark eikonal lines.
\label{1loop}}
\end{center}
\end{figure}
In this section we calculate the one-loop diagrams in Fig. 1.
We define $\beta=\sqrt{1-4m^2/s}$ with $m$ the heavy quark mass and
$s$ the squared c.m. energy.
We label the two heavy quark lines by $i$ and $j$.
Using the relation $v=p \, {\sqrt{2/s}}$, with $p$ the momentum, we have
$v_i^2=v_j^2=(1-\beta^2)/2$ and $v_i \cdot v_j=(1+\beta^2)/2$.
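These relations follow directly from the kinematics: $p_i^2=p_j^2=m^2$ and
$(p_i+p_j)^2=s$ give
\begin{displaymath}
v_i^2=\frac{2m^2}{s}=\frac{1-\beta^2}{2}\, , \quad \quad
v_i\cdot v_j=\frac{2\, p_i\cdot p_j}{s}=1-\frac{2m^2}{s}=\frac{1+\beta^2}{2}\, .
\end{displaymath}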
We begin with the integral $I_{1a}$ for the
diagram in Fig. 1(a) given by
\begin{equation}
I_{1a} = g_s^2 \int\frac{d^n k}{(2\pi)^n} \frac{(-i)g_{\mu \nu}}{k^2}
\frac{v_i^{\mu}}{v_i\cdot k} \, \frac{(-v_j^{\nu})}{(-v_j\cdot k)} \, .
\end{equation}
Using Feynman parameterization and after several manipulations and
separation of UV and infrared (IR) singularities,
we find through Eq. (\ref{A1}) the UV poles
\begin{equation}
I_{1a}=\frac{\alpha_s}{\pi} \frac{(1+\beta^2)}{2\, \beta}
\frac{1}{\epsilon} \ln\left(\frac{1-\beta}{1+\beta}\right) \, .
\label{I1aUV}
\end{equation}
Next we calculate the one-loop self-energy diagrams in Fig. 1(b) given by
\begin{equation}
I_{1b}=g_s^2 \int\frac{d^n k}{(2\pi)^n} \frac{(-i)g_{\mu \nu}}{k^2}
\frac{v_i^{\mu}}{v_i\cdot (k'-k)} \, \frac{v_i^{\nu}}{v_i\cdot k'} \, .
\end{equation}
Here we have introduced the regulator $k'$, i.e. the external quark momentum is
$p_i+k'$, so as to use the eikonal rules.
We then expand the above expression around $v_i\cdot k'=0$ at constant
$\epsilon$. The expansion gives an irrelevant $1/v_i \cdot k'$ term,
a constant term,
and linear and higher-order terms in $v_i \cdot k'$ which vanish when
setting $k'=0$. The remaining term is then
\begin{equation}
I_{1b}=\frac{i \, g_s^2 \, v_i^2}{(2\pi)^{n}}
\int \frac{d^n k}{k^2 \, (v_i\cdot k)^2} \, .
\end{equation}
We isolate the UV poles and using Eq. (\ref{A2}) we find
\begin{equation}
I_{1b}=\frac{\alpha_s}{\pi} \frac{1}{\epsilon} \, .
\label{I1bUV}
\end{equation}
The one-loop soft anomalous dimension is then read off the coefficient
of the UV poles of the one-loop diagrams through
\begin{equation}
C_F[I_{1a}+I_{1b}]=-\frac{\alpha_s}{\pi} \frac{\Gamma_S^{(1)}}{\epsilon}
\end{equation}
which gives
\begin{equation}
\Gamma_S^{(1)}=C_F \left[-\frac{(1+\beta^2)}{2 \, \beta}
\ln\left(\frac{1-\beta}{1+\beta}\right) -1\right]
\label{GammaS1}
\end{equation}
with $C_F=(N_c^2-1)/(2 N_c)$ the color factor.
When expressed in terms of the cusp angle
$\gamma=\cosh^{-1}\left(v_i \cdot v_j/\sqrt{v_i^2 v_j^2}\right)
=\ln[(1+\beta)/(1-\beta)]$ this becomes
$\Gamma_S^{(1)}=C_F (\gamma \coth\gamma-1)$ and
is also known as a cusp anomalous dimension \cite{IKR,KR}.
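As a simple check, relevant to the massless limit mentioned in the
introduction, note that for $\beta \rightarrow 1$ (i.e. $m^2/s \rightarrow 0$)
one has $1-\beta \simeq 2m^2/s$ and $\coth\gamma \rightarrow 1$, so that
\begin{displaymath}
\Gamma_S^{(1)} \simeq C_F\left[\ln\left(\frac{s}{m^2}\right)-1\right] ,
\end{displaymath}
i.e. the one-loop result grows logarithmically with $s/m^2$ in this limit.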
\section{Two-loop vertex diagrams}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{loopg2.eps}
\caption{Two-loop vertex diagrams with heavy-quark eikonal lines.
\label{2vloop}}
\end{center}
\end{figure}
The two-loop diagrams can be separated into two groups, shown in Figs. 2 and 3.
The calculations are
challenging due to the presence of a heavy quark mass and they
involve multiple complicated integrals and delicate separations
of infrared and ultraviolet poles, which by construction of the
soft function are opposites of each other \cite{NKGS}.
The analytical structure of the results involves logarithms and polylogarithms.
In this section we calculate the two-loop vertex diagrams of Fig. 2.
We also have to include the one-loop counterterms to these diagrams.
We will denote by $I'_2$ the diagrams in Fig. 2 and by $I^{c.t.}_2$ their
counterterms. The total contribution will then be $I_2=I'_2+I^{c.t.}_2$.
We find it convenient to define
\begin{eqnarray}
M_{\beta}&=&\left(4 \ln 2+\ln \pi-\gamma_E-i\pi\right)
\ln\left(\frac{1-\beta}{1+\beta}\right)
\nonumber \\ &&
{}+\frac{1}{2} \ln^2(1+\beta)-\frac{1}{2} \ln^2(1-\beta)
\nonumber \\ &&
{}-{\rm Li}_2\left(\frac{1+\beta}{2}\right)
+{\rm Li}_2\left(\frac{1-\beta}{2}\right) \, .
\end{eqnarray}
\subsection{$I_{2a}$ and $I_{2b}$}
In this section we calculate the integrals $I_{2a}$ and $I_{2b}$
for the two-loop diagrams in Figs. 2(a) and 2(b).
Note that there is no counterterm for diagram 2(b), so $I_{2b}=I'_{2b}$.
Diagram 2(b) is given by
\begin{eqnarray}
&& \hspace{-12mm}
I_{2b}=g_s^4 \int\frac{d^n k_1}{(2\pi)^n}\frac{d^n k_2}{(2\pi)^n}
\frac{(-i)g_{\mu\nu}}{k_1^2} \frac{(-i)g_{\rho\sigma}}{k_2^2}
\nonumber \\ && \hspace{-9mm} \times
\frac{v_i^{\mu}}{v_i\cdot k_1} \frac{v_i^{\rho}}{v_i\cdot (k_1+k_2)}
\frac{(-v_j^{\nu})}{-v_j\cdot (k_1+k_2)} \frac{(-v_j^{\sigma})}{-v_j\cdot k_2}.
\end{eqnarray}
Now, we begin with the $k_2$ integral and use Feynman parameterization.
Performing first the $k_2$ integration and then an integration over one of
the parameters, we have
\begin{eqnarray}
&& \hspace{-3mm}
I_{2b}= -i \frac{\alpha_s^2}{\pi^2} \, 2^{-4+\epsilon} \,
\pi^{-2+3\frac{\epsilon}{2}} \,
\Gamma\left(1-\frac{\epsilon}{2}\right) \, \Gamma(1+\epsilon)
\nonumber \\ &&
\times (1+\beta^2)^2 \int_0^1 dz \int_0^1 dy \, (1-y)^{-\epsilon}
\nonumber \\ && \times
\left[2\beta^2(1-y)^2 z^2
-2\beta^2(1-y)z-\frac{(1-\beta^2)}{2}\right]^{-1+\frac{\epsilon}{2}}
\nonumber \\ &&
\times \int \frac{d^n k_1}{k_1^2 \, v_i \cdot k_1 \,
\left[\left((v_i-v_j)z+v_j\right)\cdot k_1\right]^{1+\epsilon}} \, .
\label{I2b}
\end{eqnarray}
Next we proceed with the $k_1$ integral
\begin{eqnarray}
&& \hspace{-5mm}\int \frac{d^nk_1}{k_1^2 \, v_i \cdot k_1 \,
\left[\left((v_i-v_j)z+v_j\right)\cdot k_1\right]^{1+\epsilon}}=
-i (-1)^{-3\frac{\epsilon}{2}}
\nonumber \\ && \hspace{-3mm} \times
2^{2+3\epsilon} \pi^{2-\frac{\epsilon}{2}}
\frac{\Gamma\left(1+\frac{3\epsilon}{2}\right)}{\Gamma(1+\epsilon)}
\int_0^1 dx_1 x_1^{-1+2\epsilon} (1-x_1)^{-1-2\epsilon}
\nonumber \\ && \hspace{-3mm}
\times \int_0^1 dx_2 (1-x_2)^{\epsilon}
\left\{\left[x_2 v_i \right. \right.
\nonumber \\ && \hspace{10mm} \left. \left.
+(1-x_2)\left((v_i-v_j)z+v_j\right)\right]^2
\right\}^{-1-3\frac{\epsilon}{2}}.
\end{eqnarray}
The integral over $x_1$ contains both UV and IR singularities. We isolate
the UV singularities via
\begin{eqnarray}
&& \hspace{-11mm}\int_0^1 dx_1 \, x_1^{-1+2\epsilon} (1-x_1)^{-1-2\epsilon}
=\int_0^1 dx_1 \, x_1^{-1+2\epsilon}
\nonumber \\ && \hspace{-10mm}
{}+\int_0^1 dx_1 \, x_1^{-1+2\epsilon}
\left[(1-x_1)^{-1-2\epsilon}-1\right]
=\frac{1}{2\epsilon}+{\rm IR}.
\end{eqnarray}
We set $\epsilon=0$ in the integral over $x_2$ since it is UV finite.
We thus find the UV poles of the $k_1$ integral
\begin{eqnarray}
&& \hspace{-10mm} \int \frac{d^nk_1}{k_1^2 \, v_i \cdot k_1 \,
\left[\left((v_i-v_j)z+v_j\right)\cdot k_1\right]^{1+\epsilon}}
=-i \, 2\pi^2 \frac{1}{\epsilon}
\nonumber \\ && \hspace{-8mm} \times
\frac{1}{\beta (1-z)}
\left[\tanh^{-1}(\beta(1-2z))-\tanh^{-1}(-\beta)\right] \, .
\end{eqnarray}
Using this result in the expression for $I_{2b}$, Eq. (\ref{I2b}),
and performing the $y$ and $z$ integrals, we find
\begin{eqnarray}
&& \hspace{-6mm}
I_{2b}=\frac{\alpha_s^2}{\pi^2} \frac{(1+\beta^2)^2}{8 \, \beta^2}
\frac{1}{\epsilon} \left\{-\frac{1}{3}\ln^3\left(\frac{1-\beta}{1+\beta}\right)
\right.
\nonumber \\ && \hspace{15mm}
{}-\ln\left(\frac{1-\beta}{1+\beta}\right)
\left[{\rm Li}_2\left(\frac{(1-\beta)^2}{(1+\beta)^2}\right)
+\zeta_2\right]
\nonumber \\ && \hspace{15mm} \left.
{}+{\rm Li}_3\left(\frac{(1-\beta)^2}{(1+\beta)^2}\right)-\zeta_3 \right\}.
\label{I2bUV}
\end{eqnarray}
Diagrams 2(a) and 2(b) are related through the equation
$I'_{2a}+I_{2b}=\frac{1}{2} I_{1a}^2$ (where we need to keep UV poles
and constant terms in $I_{1a}$) from which we can calculate $I'_{2a}$.
The one-loop counterterm to $I_{2a}$ is
\begin{eqnarray}
&& \hspace{-15mm}
I_{2a}^{c.t.}=\frac{\alpha_s^2}{\pi^2} \frac{(1+\beta^2)^2}{8 \, \beta^2}
\left\{
-\frac{2}{\epsilon^2} \ln^2\left(\frac{1-\beta}{1+\beta}\right) \right.
\nonumber \\ && \hspace{20mm} \left.
-\frac{1}{\epsilon} M_{\beta} \ln\left(\frac{1-\beta}{1+\beta}\right)\right\} \, .
\end{eqnarray}
Adding this counterterm to $I'_{2a}+I_{2b}=\frac{1}{2} I_{1a}^2$, we find the simple relation
\begin{equation}
I_{2a}+I_{2b}=\frac{\alpha_s^2}{\pi^2} \frac{(1+\beta^2)^2}{8 \, \beta^2}
\frac{(-1)}{\epsilon^2}
\ln^2\left(\frac{1-\beta}{1+\beta}\right) \, .
\end{equation}
\subsection{$I_{2c}$}
In this section we calculate the integrals for the three diagrams
represented by Fig. 2(c). The blob represents a quark, gluon, or ghost loop.
Note that an additional diagram with a four-gluon vertex vanishes.
The quark-loop diagram is given by
\begin{eqnarray}
&& \hspace{-10mm}
I'_{2cq}=(-1) n_f g_s^4 \int\frac{d^n k}{(2\pi)^n}\frac{d^n l}{(2\pi)^n}
\frac{v_i^{\mu}}{v_i\cdot k} \frac{(-v_j^{\rho})}{(-v_j\cdot k)}
\nonumber \\ && \hspace{-8mm} \times
\frac{(-i)g_{\mu\nu}}{k^2} \frac{(-i)g_{\rho\sigma}}{k^2}
{\rm Tr} \left[-i \gamma^{\nu}
\frac{i l\!\!/}{l^2} (-i) \gamma^{\sigma} i \frac{(l\!\!/-k\!\!/)}
{(l-k)^2}\right].
\label{I2cq}
\end{eqnarray}
After several steps, and using Eq. (\ref{A3}), the final result for
the UV poles of $I'_{2cq}$ is
\begin{eqnarray}
&& \hspace{-12mm}
I'_{2cq}=\frac{\alpha_s^2}{\pi^2} n_f \frac{(1+\beta^2)}{6\, \beta}
\left\{-\frac{1}{\epsilon^2} \ln\left(\frac{1-\beta}{1+\beta}\right) \right.
\nonumber \\ && \hspace{17mm} \left.
{}-\frac{1}{\epsilon}\left[\frac{5}{6}\ln\left(\frac{1-\beta}{1+\beta}\right)
+M_{\beta}\right]\right\}
\label{I2cqUV}
\end{eqnarray}
with $n_f$ the number of light quark flavors.
The one-loop counterterm is
\begin{equation}
I_{2cq}^{c.t.}=\frac{\alpha_s^2}{\pi^2} n_f \frac{(1+\beta^2)}{6\, \beta}
\left\{\frac{2}{\epsilon^2} \ln\left(\frac{1-\beta}{1+\beta}\right)
+\frac{1}{\epsilon} M_{\beta}\right\} \, .
\label{I2cqct}
\end{equation}
Then the sum $I_{2cq}=I'_{2cq}+I_{2cq}^{c.t.}$ gives
\begin{equation}
I_{2cq}=\frac{\alpha_s^2}{\pi^2} n_f \frac{(1+\beta^2)}{6 \, \beta}
\left[\frac{1}{\epsilon^2}-\frac{5}{6 \, \epsilon}\right]
\ln\left(\frac{1-\beta}{1+\beta}\right).
\end{equation}
Next we calculate the integral for the gluon-loop diagram
given by
\begin{eqnarray}
&& \hspace{-4mm}
I'_{2cgl}=\frac{1}{2} g_s^4 \int\frac{d^n k}{(2\pi)^n}\frac{d^n l}{(2\pi)^n}
\frac{v_i^{\mu}}{v_i\cdot k} \frac{(-v_j^{\nu})}{(-v_j\cdot k)}
\nonumber \\ && \times
\frac{(-i)g_{\mu\mu'}}{k^2} \frac{(-i)g_{\rho\rho'}}{l^2}
\frac{(-i)g_{\sigma\sigma'}}{(k-l)^2} \frac{(-i)g_{\nu\nu'}}{k^2}
\nonumber \\ && \times
\left[g^{\mu' \rho} (k+l)^{\sigma}+g^{\rho \sigma} (k-2l)^{\mu'}
+g^{\sigma \mu'} (-2k+l)^{\rho}\right]
\nonumber \\ && \times
\left[g^{\rho' \nu'} (l+k)^{\sigma'}+g^{\nu' \sigma'} (-2k+l)^{\rho'}\right.
\nonumber \\ && \hspace{30mm} \left.
{}+g^{\sigma' \rho'} (k-2l)^{\nu'}\right].
\end{eqnarray}
Using Eq. (\ref{A3}) we find for the UV poles of $I'_{2cgl}$,
\begin{eqnarray}
&& \hspace{-12mm}
I'_{2cgl}=\frac{\alpha_s^2}{\pi^2} \frac{19}{96}
\frac{(1+\beta^2)}{\beta}
\left\{-\frac{1}{\epsilon^2} \ln\left(\frac{1-\beta}{1+\beta}\right)\right.
\nonumber \\ && \hspace{15mm} \left.
-\frac{1}{\epsilon}\left[\frac{58}{57}\ln\left(\frac{1-\beta}{1+\beta}\right)
+M_{\beta}\right] \right\}.
\end{eqnarray}
The one-loop counterterm is
\begin{equation}
I_{2cgl}^{c.t.}=\frac{\alpha_s^2}{\pi^2} \frac{19}{96}
\frac{(1+\beta^2)}{\beta}
\left\{\frac{2}{\epsilon^2} \ln\left(\frac{1-\beta}{1+\beta}\right)
+\frac{1}{\epsilon} M_{\beta}\right\}.
\end{equation}
Then
\begin{equation}
I_{2cgl}=\frac{\alpha_s^2}{\pi^2} \frac{19}{96}
\frac{(1+\beta^2)}{\beta}
\left\{\frac{1}{\epsilon^2}-\frac{58}{57\, \epsilon}\right\}
\ln\left(\frac{1-\beta}{1+\beta}\right) .
\label{I2cgl}
\end{equation}
Finally, we calculate the integral for the ghost-loop diagram
given by
\begin{eqnarray}
&& \hspace{-10mm}
I'_{2cgh}=(-1) g_s^4 \int\frac{d^n k}{(2\pi)^n}\frac{d^n l}{(2\pi)^n}
\frac{v_i^{\mu}}{v_i\cdot k} \frac{(-v_j^{\rho})}{(-v_j\cdot k)}
\nonumber \\ && \hspace{3mm} \times
\frac{i}{l^2} l^{\nu} \frac{i}{(l-k)^2} (l-k)^{\sigma}
\frac{(-i)g_{\mu\nu}}{k^2} \frac{(-i)g_{\rho\sigma}}{k^2}.
\end{eqnarray}
Using Eq. (\ref{A3}), we find
\begin{eqnarray}
&& \hspace{-12mm}
I'_{2cgh}= \frac{\alpha_s^2}{\pi^2}
\frac{(1+\beta^2)}{96\beta}
\left\{-\frac{1}{\epsilon^2} \ln\left(\frac{1-\beta}{1+\beta}\right) \right.
\nonumber \\ && \hspace{15mm} \left.
{}-\frac{1}{\epsilon}\left[
\frac{4}{3}\ln\left(\frac{1-\beta}{1+\beta}\right)+M_{\beta}\right]
\right\}.
\end{eqnarray}
The one-loop counterterm is
\begin{equation}
I_{2cgh}^{c.t.}= \frac{\alpha_s^2}{\pi^2}
\frac{(1+\beta^2)}{96 \, \beta}
\left\{\frac{2}{\epsilon^2} \ln\left(\frac{1-\beta}{1+\beta}\right)
+\frac{1}{\epsilon} M_{\beta} \right\}.
\end{equation}
Then
\begin{equation}
I_{2cgh}=\frac{\alpha_s^2}{\pi^2}
\frac{(1+\beta^2)}{96 \, \beta}
\left\{\frac{1}{\epsilon^2} -\frac{4}{3 \, \epsilon}\right\}
\ln\left(\frac{1-\beta}{1+\beta}\right).
\label{I2cgh}
\end{equation}
The sum of the gluon and ghost loops, Eqs. (\ref{I2cgl}) and (\ref{I2cgh}),
denoted by $I_{2cg}$, is then
\begin{equation}
I_{2cg}=\frac{\alpha_s^2}{\pi^2} \frac{5}{24} \frac{(1+\beta^2)}{\beta}
\left[\frac{1}{\epsilon^2}-\frac{31}{30\, \epsilon}\right]
\ln\left(\frac{1-\beta}{1+\beta}\right).
\end{equation}
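\noindent As a quick check on this combination of Eqs. (\ref{I2cgl}) and
(\ref{I2cgh}), note that $19/96+1/96=5/24$ for the $1/\epsilon^2$ coefficient,
while $(19/96)(58/57)+(1/96)(4/3)=31/144=(5/24)(31/30)$ for the $1/\epsilon$
coefficient.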
\subsection{$I_{2d}$ and $I_{2e}$}
In this section we calculate the integral $I'_{2d}$ for the diagram in
Fig. 2(d) given by
\begin{eqnarray}
I'_{2d}&=&g_s^4 \int\frac{d^n k}{(2\pi)^n}\frac{d^n l}{(2\pi)^n}
\frac{(-i)g_{\mu\nu}}{k^2} \frac{(-i)g_{\rho\sigma}}{l^2}
\frac{v_i^{\mu}}{v_i\cdot k}
\nonumber \\ && \quad \times
\frac{v_i^{\sigma}}{v_i\cdot (k-l)}
\frac{v_i^{\rho}}{v_i\cdot k} \frac{(-v_j^{\nu})}{(-v_j\cdot k)}
\end{eqnarray}
as well as the integral $I'_{2e}$ for the diagram in Fig. 2(e)
given by
\begin{eqnarray}
I'_{2e}&=&g_s^4 \int\frac{d^n k}{(2\pi)^n}\frac{d^n l}{(2\pi)^n}
\frac{(-i)g_{\mu\nu}}{k^2} \frac{(-i)g_{\rho\sigma}}{l^2}
\frac{v_i^{\sigma}}{(-v_i\cdot l)}
\nonumber \\ && \quad \times
\frac{v_i^{\mu}}{v_i\cdot (k-l)}
\frac{v_i^{\rho}}{v_i\cdot k} \frac{(-v_j^{\nu})}{(-v_j\cdot k)} \, .
\end{eqnarray}
First we note that
\begin{eqnarray}
I'_{2d}+I'_{2e}&=&\frac{g_s^4}{(2\pi)^{2n}} v_i^2 \, v_i \cdot v_j
\int\frac{d^n k}{k^2 (v_i \cdot k)^2 v_j\cdot k}
\nonumber \\ && \quad \quad \times
\int \frac{d^n l}{l^2 v_i \cdot l}=0
\end{eqnarray}
where the last integral is zero because it is odd in $l$.
Therefore $I'_{2d}=-I'_{2e}$ and similarly for the counterterms,
and then $I_{2d}=-I_{2e}$.
Thus we only have to calculate $I'_{2e}$ and its counterterm.
Using Eq. (\ref{A4}) and including the counterterm
$I_{2e}^{c.t.}=(\alpha_s^2/\pi^2) [(1+\beta^2)/(4\beta)]
[2/ \epsilon^2 \ln((1-\beta)/(1+\beta))+M_{\beta}/\epsilon]$
we find
\begin{eqnarray}
&& \hspace{-5mm}
I_{2e}=\frac{\alpha_s^2}{\pi^2} \frac{(1+\beta^2)}{4 \, \beta}
\left\{\frac{1}{\epsilon^2} \ln\left(\frac{1-\beta}{1+\beta}\right)
-\frac{1}{\epsilon} \left[\ln\left(\frac{1-\beta}{1+\beta}\right)
\right. \right.
\nonumber \\ && \quad \quad
{}+\frac{1}{2} \ln^2\left(\frac{1-\beta}{1+\beta}\right)
-\frac{1}{2}{\rm Li}_2\left(\frac{(1- \beta)^2}{(1+\beta)^2}\right)
\nonumber \\ && \quad \quad \left. \left.
{}+\ln\left(\frac{1-\beta}{1+\beta}\right)
\ln\left(\frac{(1+\beta)^2}{4\beta}\right)
+\frac{\zeta_2}{2}\right] \right\}.
\end{eqnarray}
\subsection{$I_{2f}$}
In this section we calculate the three-gluon diagram in Fig. 2(f)
given by
\begin{eqnarray}
&& \hspace{-3mm} I'_{2f}=g_s^4 \int\frac{d^n k_1}{(2\pi)^n}
\frac{d^n k_2}{(2\pi)^n}
\frac{(-i)g_{\mu \mu'}}{k_1^2} \frac{(-i)g_{\nu \nu'}}{k_2^2}
\frac{(-i)g_{\rho \rho'}}{(k_1+k_2)^2}
\nonumber \\ && \quad \quad
\times \frac{v_i^{\mu}}{v_i\cdot k_1} \, \frac{(-v_j^{\nu})}{(-v_j\cdot k_1)}
\frac{(-v_j^{\rho})}{-v_j\cdot (k_1+k_2)} (-i)
\nonumber \\ && \quad \quad
\times \left[g^{\mu'\nu'} (k_1-k_2)^{\rho'}
+g^{\nu'\rho'} (k_1+2k_2)^{\mu'}\right.
\nonumber \\ && \hspace{15mm} \left.
{}+g^{\rho'\mu'} (-2k_1-k_2)^{\nu'}\right]\, .
\end{eqnarray}
After many steps we find
\begin{eqnarray}
&& \hspace{-10mm}
I_{2f}=\frac{1}{\epsilon} \left\{
\frac{(1+\beta^2)}{12 \, \beta}
\ln^3\left(\frac{1-\beta}{1+\beta}\right)\right.
\nonumber \\ &&
{}-\frac{1}{4}
\left[2 \zeta_2+\ln^2\left(\frac{1-\beta}{1+\beta}\right)\right]
\nonumber \\ && \quad \left. \times
\left[\frac{(1+\beta^2)}{2 \, \beta} \ln\left(\frac{1-\beta}{1+\beta}\right)
+1\right]\right\}.
\end{eqnarray}
\section{Two-loop heavy-quark self-energy diagrams}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{loopg2s.eps}
\caption{Two-loop heavy-quark self-energy diagrams with eikonal lines.}
\end{center}
\label{2sloop}
\end{figure}
In this section we calculate the diagrams in Fig. 3 and their counterterms.
We will denote by $I'_3$ the diagrams shown in Fig. 3 and
by $I^{c.t.}_3$ their counterterms. The total contribution is
then $I_3=I'_3+I^{c.t.}_3$.
As for diagram 1(b) we introduce a regulator $k'$ in the quark momentum
and then expand around $v_i\cdot k'=0$ at constant
$\epsilon$. We also find it convenient to define
\begin{equation}
K_{\beta}= -\ln(1-\beta^2)+5 \ln 2+\ln \pi-\gamma_E-i \pi.
\end{equation}
\subsection{$I_{3a}$}
Here we calculate the three diagrams represented by Fig. 3(a) which are
shown in detail in Fig. 4.
Note that an additional graph involving a three-gluon vertex with
all three gluons attached to the same eikonal line vanishes.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{3a.eps}
\caption{Detail of the diagrams of Fig. 3(a).}
\end{center}
\label{3a}
\end{figure}
We begin with the diagram labeled (3a1i). Using Eq. (\ref{A2}) we find
\begin{eqnarray}
I'_{3a1i}=\frac{\alpha_s^2}{\pi^2}
\left\{\frac{1}{\epsilon^2}+\frac{1}{\epsilon} K_{\beta} \right\}
\end{eqnarray}
with counterterm
$I_{3a1i}^{c.t.}=(\alpha_s^2/\pi^2)
[-2/\epsilon^2-K_{\beta}/\epsilon].$
Then
\begin{equation}
I_{3a1i}=\frac{\alpha_s^2}{\pi^2} \frac{(-1)}{\epsilon^2}.
\label{I3a1i}
\end{equation}
Next we calculate $I'_{3a1ii}$. Using Eq. (\ref{A5}) we find
\begin{equation}
I'_{3a1ii}=\frac{\alpha_s^2}{\pi^2}
\left\{\frac{1}{2 \, \epsilon^2}+\frac{1}{2 \, \epsilon} \left[
1+K_{\beta}\right]\right\}
\end{equation}
with counterterm
$I_{3a1ii}^{c.t.}=(\alpha_s^2/\pi^2)
[-1/\epsilon^2-K_{\beta}/(2 \epsilon)].$
Then
\begin{equation}
I_{3a1ii}=\frac{\alpha_s^2}{\pi^2}
\left\{-\frac{1}{2 \, \epsilon^2}+\frac{1}{2 \, \epsilon}\right\}.
\label{I3a1ii}
\end{equation}
Adding Eqs. (\ref{I3a1i}) and (\ref{I3a1ii}) and denoting the sum by
$I_{3a1}$, we find
\begin{equation}
I_{3a1}=\frac{\alpha_s^2}{\pi^2} \left\{-\frac{3}{2\, \epsilon^2}
+\frac{1}{2\, \epsilon}\right\}.
\end{equation}
Last we calculate $I'_{3a2}$, given by
\begin{eqnarray}
&& \hspace{-3mm}
I'_{3a2}=g_s^4 \int\frac{d^n k_1}{(2\pi)^n} \frac{d^n k_2}{(2\pi)^n}
\frac{v_i^{\mu}}{v_i\cdot k'} \, \frac{v_i^{\rho}}{v_i\cdot (k'-k_1)}
\nonumber \\ && \times
\frac{v_i^{\nu}}{v_i\cdot (k'-k_1-k_2)} \,
\frac{v_i^{\sigma}}{v_i\cdot (k'-k_2)}
\frac{(-i)g_{\mu \nu}}{k_1^2} \frac{(-i)g_{\rho \sigma}}{k_2^2} \, .
\nonumber \\ &&
\end{eqnarray}
Using Eqs. (\ref{A6}), (\ref{A7}), and (\ref{A8}), we find the UV poles of
the integral
\begin{equation}
I'_{3a2}=\frac{\alpha_s^2}{\pi^2}
\left\{-\frac{1}{\epsilon^2}-\frac{1}{\epsilon} \left[\frac{1}{2}
+K_{\beta}\right]\right\}
\end{equation}
with counterterm
$I_{3a2}^{c.t.}=(\alpha_s^2/\pi^2)
[2/\epsilon^2+K_{\beta}/\epsilon].$
Then
\begin{equation}
I_{3a2}=\frac{\alpha_s^2}{\pi^2}
\left\{\frac{1}{\epsilon^2}-\frac{1}{2 \, \epsilon}\right\}.
\end{equation}
\subsection{$I_{3b}$}
In this section we calculate the three self-energy diagrams
(with quark, gluon, and ghost loops) represented by
Fig. 3(b). Note that an additional diagram with a four-gluon vertex vanishes.
After several manipulations the quark-loop diagram becomes
\begin{eqnarray}
&& \hspace{-4mm}
I'_{3bq}=-4 n_f \frac{g_s^4}{(2\pi)^{2n}} \frac{1}{v_i \cdot k'}
\int \frac{d^n k \, d^n l}{k^4 \, v_i\cdot (k-k') \, l^2 \, (l-k)^2}
\nonumber \\ && \times
\left[2(v_i\cdot l)^2-v_i^2 \, l^2 -2 v_i\cdot l \, v_i\cdot k
+v_i^2 \, l\cdot k\right] \, .
\end{eqnarray}
Expanding around $v_i \cdot k'=0$ this gives
\begin{eqnarray}
&& \hspace{-11mm}
I'_{3bq}=4 n_f \frac{g_s^4}{(2\pi)^{2n}}
\int \frac{d^n k}{k^4 \, (v_i\cdot k)^2}
\int \frac{d^n l}{l^2 \, (l-k)^2}
\nonumber \\ && \times
\left[-v_i^2 \, l\cdot k-2(v_i\cdot l)^2+2 v_i\cdot l \, v_i\cdot k \right]\, .
\end{eqnarray}
A calculation of the UV poles of the integral using Eq. (\ref{A9}) gives
\begin{equation}
I'_{3bq}=\frac{\alpha_s^2}{\pi^2} \frac{n_f}{3}
\left\{-\frac{1}{\epsilon^2}-\frac{1}{\epsilon} \left[\frac{5}{6}
+K_{\beta}\right]\right\}
\end{equation}
with counterterm
$I_{3bq}^{c.t.}=(\alpha_s^2/\pi^2) (n_f/3)
[2/\epsilon^2 + K_{\beta}/\epsilon]$.
Then
\begin{equation}
I_{3bq}=\frac{\alpha_s^2}{\pi^2} \frac{n_f}{3}
\left[\frac{1}{\epsilon^2}-\frac{5}{6\, \epsilon}\right].
\end{equation}
Next we calculate the gluon-loop self-energy diagram
in Fig. 3(b) which,
after several manipulations, becomes
\begin{eqnarray}
&& \hspace{-10mm}
I'_{3bgl}=-\frac{g_s^4}{2(2\pi)^{2n}} \frac{1}{v_i \cdot k'}
\int\frac{d^n k}{v_i \cdot (k-k')}
\nonumber \\ && \times
\left\{\left[\frac{4 \, v_i^2}{k^2}
+(n-6)\frac{(v_i \cdot k)^2}{k^4}\right] \int \frac{d^n l}{l^2 (k-l)^2} \right.
\nonumber \\ && \quad \quad
{}-(4n-6)\frac{v_i\cdot k}{k^4} \int d^n l \frac{v_i \cdot l}{l^2 (k-l)^2}
\nonumber \\ && \quad \quad \left.
{}+\frac{(4n-6)}{k^4}
\int d^n l \frac{(v_i \cdot l)^2}{l^2 (k-l)^2} \right\}.
\end{eqnarray}
Expanding around $v_i \cdot k'=0$ this gives
\begin{eqnarray}
&& \hspace{-10mm}
I'_{3bgl}=-\frac{\alpha_s^2}{\pi^2} 2^{-5+2\epsilon}
i \pi^{-2+\frac{3\epsilon}{2}} (1-\beta^2)
\nonumber \\ && \hspace{-5mm} \times
\Gamma\left(\frac{\epsilon}{2}\right)
\left[\Gamma\left(1-\frac{\epsilon}{2}\right)\right]^2
\frac{1}{\Gamma(2-\epsilon)}
\nonumber \\ && \hspace{-5mm}
\times \left[2-\frac{(10-4\epsilon)}{8} \frac{1}{3-\epsilon}\right]
\int \frac{d^n k}{(v_i \cdot k)^2 \, (k^2)^{1+\frac{\epsilon}{2}}}.
\end{eqnarray}
Using Eq. (\ref{A9}) the UV poles of the integral are then
\begin{equation}
I'_{3bgl}=\frac{\alpha_s^2}{\pi^2} \frac{19}{48}
\left\{-\frac{1}{\epsilon^2}-\frac{1}{\epsilon} \left[\frac{58}{57}
+K_{\beta}\right]\right\}.
\end{equation}
The counterterm is
$I_{3bgl}^{c.t.}=(\alpha_s^2/\pi^2) (19/48)
[2/\epsilon^2 +K_{\beta}/\epsilon].$
Then
\begin{equation}
I_{3bgl}=\frac{\alpha_s^2}{\pi^2} \frac{19}{48}
\left\{\frac{1}{\epsilon^2}-\frac{58}{57\, \epsilon}\right\}.
\label{I3bgl}
\end{equation}
Last we calculate the ghost-loop self-energy diagram
in Fig. 3(b).
After several manipulations and
expanding around $v_i \cdot k'=0$ this gives
\begin{eqnarray}
&& \hspace{-8mm}
I'_{3bgh}=\frac{\alpha_s^2}{\pi^2} 2^{-6+2\epsilon}
\pi^{-2+\frac{3\epsilon}{2}} i (1-\beta^2) \Gamma\left(-1+\frac{\epsilon}{2}\right)
\nonumber \\ && \hspace{-3mm}\times
\left[\Gamma\left(2-\frac{\epsilon}{2}\right)\right]^2
\frac{1}{\Gamma(4-\epsilon)}
\int \frac{d^n k}{(v_i \cdot k)^2 \, (k^2)^{1+\frac{\epsilon}{2}}}.
\end{eqnarray}
Using Eq. (\ref{A9}) the UV poles are then given by
\begin{equation}
I'_{3bgh}=\frac{\alpha_s^2}{\pi^2} \frac{1}{48}
\left\{-\frac{1}{\epsilon^2}-\frac{1}{\epsilon} \left[\frac{4}{3}
+K_{\beta}\right]\right\}
\end{equation}
with counterterm
$I_{3bgh}^{c.t.}=(\alpha_s^2/\pi^2) (1/48)
[2/\epsilon^2 +K_{\beta}/\epsilon].$
Then
\begin{equation}
I_{3bgh}=\frac{\alpha_s^2}{\pi^2} \frac{1}{48}
\left\{\frac{1}{\epsilon^2}-\frac{4}{3 \, \epsilon}\right\}.
\label{I3bgh}
\end{equation}
The sum of the gluon and ghost loops, Eqs. (\ref{I3bgl}) and (\ref{I3bgh}),
denoted by $I_{3bg}$, is then
\begin{equation}
I_{3bg}=\frac{\alpha_s^2}{\pi^2} \frac{5}{12}
\left[\frac{1}{\epsilon^2}-\frac{31}{30\, \epsilon}\right].
\end{equation}
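The same arithmetic cross-check (ours) applies here: the coefficients of Eqs. (\ref{I3bgl}) and (\ref{I3bgh}) combine as
\[
\frac{19}{48}+\frac{1}{48}=\frac{5}{12}\, , \qquad
\frac{19}{48}\cdot\frac{58}{57}+\frac{1}{48}\cdot\frac{4}{3}
=\frac{31}{72}=\frac{5}{12}\cdot\frac{31}{30}\, .
\]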
\subsection{$I_{3c}$}
Finally we calculate the diagram in Fig. 3(c).
Using Eqs. (\ref{A1}) and (\ref{A2})
and adding the one-loop counterterm, we find
\begin{equation}
I_{3c}= \frac{\alpha_s^2}{\pi^2} \frac{(1+\beta^2)}{2\beta}
\frac{(-1)}{\epsilon^2}\ln\left(\frac{1-\beta}{1+\beta}\right).
\end{equation}
\section{Two-loop soft anomalous dimension}
We now combine the kinematic results from sections 3 and 4
with color and symmetry factors.
The contribution of the diagrams in Figs. (2) and (3) to the two-loop soft
anomalous dimension is
\begin{eqnarray}
&& C_F^2 \left[I_{2a}+I_{2b}+2 \, I_{2d}
+2 \, I_{2e}+I_{3a1}+I_{3a2}+I_{3c}\right]
\nonumber \\ && \hspace{-3mm}
{}+C_F \, C_A \left[-\frac{1}{2} I_{2b}
+I_{2f} -I_{2cg}
-I_{2e}-I_{3bg}-\frac{1}{2} I_{3a2} \right]
\nonumber \\ && \hspace{-3mm}
{}+\frac{1}{2} C_F \left[I_{2cq}+I_{3bq}\right]
\nonumber \\ &&\hspace{-3mm}
=\frac{\alpha_s^2}{\pi^2} \left[
-\frac{1}{2 \epsilon^2} \left(\Gamma_S^{(1)}\right)^2
+\frac{\beta_0}{4 \epsilon^2} \Gamma_S^{(1)}
-\frac{1}{2 \epsilon} \Gamma_S^{(2)} \right].
\label{S2}
\end{eqnarray}
On the right-hand side of Eq. (\ref{S2})
in addition to $\Gamma_S^{(2)}$, which appears in the coefficient of the
$1/\epsilon$ pole, there also appear terms from the exponentiation
of the one-loop result and the running of the coupling,
with $\beta_0=(11/3) C_A-2n_f/3$,
$C_A=N_c$, which account for all the double poles of the graphs.
From Eq. (\ref{S2}) we solve for the two-loop soft anomalous dimension:
\begin{eqnarray}
&& \hspace{-5mm}\Gamma_S^{(2)}=\frac{K}{2} \, \Gamma_S^{(1)}
+C_F C_A \left\{\frac{1}{2}+\frac{\zeta_2}{2}
+\frac{1}{2} \ln^2\left(\frac{1-\beta}{1+\beta}\right) \right.
\nonumber \\ && \hspace{-5mm}
{}-\frac{(1+\beta^2)^2}{8 \beta^2} \left[\zeta_3
+\zeta_2 \ln\left(\frac{1-\beta}{1+\beta}\right)
+\frac{1}{3} \ln^3\left(\frac{1-\beta}{1+\beta}\right) \right.
\nonumber \\ && \hspace{-3mm}\left.
{}+\ln\left(\frac{1-\beta}{1+\beta}\right)
{\rm Li}_2\left(\frac{(1-\beta)^2}{(1+\beta)^2}\right)
-{\rm Li}_3\left(\frac{(1-\beta)^2}{(1+\beta)^2}\right)\right]
\nonumber \\ && \hspace{-5mm}
{}-\frac{(1+\beta^2)}{4 \beta} \left[\zeta_2
-\zeta_2 \ln\left(\frac{1-\beta}{1+\beta}\right)
+\ln^2\left(\frac{1-\beta}{1+\beta}\right)\right.
\nonumber \\ &&
{}-\frac{1}{3} \ln^3\left(\frac{1-\beta}{1+\beta}\right)
+2 \ln\left(\frac{1-\beta}{1+\beta}\right)
\ln\left(\frac{(1+\beta)^2}{4 \beta}\right)
\nonumber \\ && \hspace{15mm} \left. \left.
{}-{\rm Li}_2\left(\frac{(1-\beta)^2}{(1+\beta)^2}\right)\right]\right\}
\label{Gamma2}
\end{eqnarray}
where
\begin{equation}
K=C_A\left(\frac{67}{18}-\zeta_2 \right)-\frac{5n_f}{9}.
\end{equation}
A slightly different but equivalent expression is given in Ref. \cite{NKprl}.
In terms of the cusp angle $\gamma$ we find
\begin{eqnarray}
&& \hspace{-5mm} \Gamma_S^{(2)}=\frac{K}{2} \, \Gamma_S^{(1)}
+C_F C_A \left\{\frac{1}{2}+\frac{\zeta_2}{2}+\frac{\gamma^2}{2}
-\frac{1}{2}\coth^2\gamma \right.
\nonumber \\ && \quad \times
\left[\zeta_3-\zeta_2\gamma-\frac{\gamma^3}{3}
-\gamma \, {\rm Li}_2\left(e^{-2\gamma}\right)
-{\rm Li}_3\left(e^{-2\gamma}\right)\right]
\nonumber \\ && \hspace{-4mm}
{}-\frac{1}{2} \coth\gamma\left[\zeta_2+\zeta_2\gamma+\gamma^2
+\frac{\gamma^3}{3} \right.
\nonumber \\ && \hspace{14mm} \left. \left.
{}+2\, \gamma \, \ln\left(1-e^{-2\gamma}\right)
-{\rm Li}_2\left(e^{-2\gamma}\right)\right] \right\}.
\end{eqnarray}
This expression is consistent with the form of the two-loop cusp
anomalous dimension given in Ref. \cite{KR} (which involves a few
uncalculated integrals),
and it is also consistent with the two-loop heavy-quark form factor
of Ref. \cite{HQFF}.
Using the above results we can find the soft-gluon logarithms in the cross
section for $e^+ e^- \rightarrow t {\bar t}$, which are
of the form $\ln^{n-1}(\beta^2)/\beta^2$ at $n$-th order in $\alpha_s$.
The first-order soft-gluon corrections are
$\sigma^{(1)}=\sigma^B \, (\alpha_s/\pi) \, 2 \, \Gamma_S^{(1)} / \beta^2$
with $\sigma^B$ the Born cross section.
The second-order soft-gluon corrections are
\begin{eqnarray}
&& \hspace{-8mm}
\sigma^{(2)}=\sigma^B \frac{\alpha_s^2}{\pi^2}
\left\{ \left[4 (\Gamma_S^{(1)})^2-\beta_0 \,\Gamma_S^{(1)} \right]
\frac{\ln(\beta^2)}{\beta^2} \right.
\nonumber \\ && \hspace{15mm} \left.
{}+\left[2 \,T_1 \, \Gamma_S^{(1)} +2 \, \Gamma_S^{(2)}\right]
\frac{1}{\beta^2} \right\}
\end{eqnarray}
with $T_1$ the NLO virtual corrections.
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{gammadpf09plot.eps}
\caption{The two-loop soft anomalous dimension $\Gamma_S^{(2)}$.}
\label{Gamma2Kexp}
\end{center}
\end{figure}
In Figure 5 we plot $\Gamma_S^{(2)}$ versus $\beta$. The insets
show the small and large $\beta$ regions in detail.
Note that both $\Gamma_S^{(1)}$ and $\Gamma_S^{(2)}$ vanish in
the threshold limit, $\beta=0$,
and they both diverge in the massless limit, $\beta=1$.
The small-$\beta$ expansion of Eq. (\ref{Gamma2}) is
\begin{eqnarray}
\Gamma_{S \, \rm exp}^{(2)}&=&-\frac{2}{27} \beta^2
\left[C_F C_A (18 \zeta_2-47)+5 n_f C_F\right]
\nonumber \\ && \hspace{30mm}
{}+{\cal O}(\beta^4) \, .
\end{eqnarray}
$\Gamma_S^{(2)}$ is an even function of $\beta$ and hence
only even powers of $\beta$ appear in the expansion.
As shown in \cite{NKprl} the expansion
provides a good approximation to the complete result for small $\beta$.
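For orientation, an illustrative numerical evaluation (our own, taking the standard values $C_F=4/3$, $C_A=3$ and assuming $n_f=5$) gives
\[
\Gamma_{S \, \rm exp}^{(2)} \approx 2.68\, \beta^2+{\cal O}(\beta^4)\, .
\]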
Next we study the large-$\beta$ behavior of $\Gamma_S^{(2)}$,
Eq. (\ref{Gamma2}).
As $\beta \rightarrow 1$,
\begin{equation}
\Gamma_S^{(2)} \rightarrow \frac{K}{2} \Gamma_S^{(1)}
+C_F C_A \frac{(1-\zeta_3)}{2}
\end{equation}
with $\Gamma_S^{(1)}=C_F\left[\ln\left(2 v_i \cdot v_j/\sqrt{v_i^2 v_j^2}
\right)-1\right]$. Since $\Gamma_S^{(1)}$ diverges at $\beta=1$,
$\Gamma_S^{(2)} \approx (K/2) \Gamma_S^{(1)}$ in that
limit. This is consistent with the massless-case relation $\Gamma_S^{(2)}=\frac{K}{2} \Gamma_S^{(1)}$
with $\Gamma_S^{(1)}=C_F \ln(v_i \cdot v_j)$ \cite{ADS,BN,GM}.
As is clear from Eq. (\ref{Gamma2}) the massive case is much more complicated
than the simple massless relation, and $(K/2) \Gamma_S^{(1)}$ is just the
first of many terms in the expression for $\Gamma_S^{(2)}$.
Figure 6 shows that numerically the ratio
$(K/2) \Gamma_S^{(1)}/\Gamma_S^{(2)}$ goes to
1 as $\beta \rightarrow 1$, the massless limit, as expected from the above
discussion, but it is significantly different from 1 at other $\beta$
and takes the value 1.144 at the other end of the range, $\beta=0$.
In the mixed massive-massless case, with $v_i$ the heavy quark and $v_j$ the
massless quark, we find
\begin{equation}
\Gamma_S^{(2)}=\frac{K}{2} \Gamma_S^{(1)}
+C_F C_A \frac{(1-\zeta_3)}{4}
\end{equation}
with
\begin{equation}
\Gamma_S^{(1)}=C_F \left[\ln\left(\frac{\sqrt{2} \, v_i \cdot v_j}
{\sqrt{v_i^2}}
\right)-\frac{1}{2}\right].
\end{equation}
Given that the small-$\beta$ expansion gives very good
approximations to $\Gamma_S^{(2)}$ at smaller $\beta$ while the
expression $(K/2)\, \Gamma_S^{(1)}$ is the large-$\beta$ limit,
we can derive an approximation to $\Gamma_S^{(2)}$, Eq. (\ref{Gamma2}),
valid for all $\beta$ using the following approximate formula:
\begin{eqnarray}
&& \hspace{-10mm} \Gamma^{(2)}_{S \, \rm approx}=\Gamma^{(2)}_{S \, \rm exp}
+\frac{K}{2} \Gamma_S^{(1)}-\frac{K}{2} \Gamma^{(1)}_{S \, \rm exp}
\nonumber \\ && \hspace{-3mm}
=\frac{K}{2} \Gamma_S^{(1)}+C_F C_A \left(1-\frac{2}{3}\zeta_2\right) \beta^2
+{\cal O}\left(\beta^4\right).
\end{eqnarray}
\begin{figure}
\begin{center}
\includegraphics[width=8cm]{gapprdpf09plot.eps}
\caption{Approximations to $\Gamma_S^{(2)}$.}
\label{Gamma2approx}
\end{center}
\end{figure}
As seen from Fig. 6, adding just the $\beta^2$ terms to
$(K/2)\, \Gamma_S^{(1)}$
provides an excellent approximation to the exact result
for all $\beta$.
If we keep additional terms through order $\beta^{12}$ then the approximation
is extremely good and does not differ by more than one per mille anywhere
in the $\beta$ range.
Finally we discuss extensions to single top \cite{NKtop,NKst} and
top pair \cite{NKtop,NKRV} production at hadron colliders.
As first shown in Ref. \cite{NKprl},
since the two-loop soft anomalous
dimensions for single top and top pair processes involve eikonal graphs with
both massless and massive eikonal lines,
it is clear that the simple massless relation
$\Gamma_S^{(2)}=(K/2)\, \Gamma_S^{(1)}$ will not hold.
This observation was also made later in Refs. \cite{MSS,BNm}.
The two-loop soft anomalous dimension $\Gamma_S^{(2)}$
derived in \cite{NKprl} and in this paper is an essential ingredient
for next-to-next-to-leading logarithm (NNLL) resummation for
heavy quark hadroproduction
and has been used in very recent calculations \cite{BNm,BFS,CMS,FNPY}.
\begin{acknowledgments}
This work was supported by the National Science Foundation under
Grant No. PHY 0855421.
\end{acknowledgments}
\section*{Appendix}
We list results for the UV poles of several of the many integrals needed
in the calculation of the two-loop soft anomalous dimension.
\begin{eqnarray}
&& \hspace{-7mm}
\int\frac{d^n k}{k^2 \, v_i\cdot k \, v_j\cdot k}=\frac{i}{\epsilon}
(-1)^{-1-\frac{\epsilon}{2}} \pi^{2-\frac{\epsilon}{2}}
2^{3+3\frac{\epsilon}{2}}
\nonumber\\ && \quad \times
\Gamma\left(1+\frac{\epsilon}{2}\right)\, {}_2F_1\left(\frac{1}{2},
1+\frac{\epsilon}{2};\frac{3}{2};\beta^2\right)
\nonumber\\ &&
=\frac{i 4\pi^2}{\beta}
\left\{\frac{1}{\epsilon}\ln\left(\frac{1-\beta}{1+\beta}\right) \right.
\nonumber\\ && \quad
{}+\frac{1}{2}(2\ln 2-\ln \pi-\gamma_E-i \pi)
\ln\left(\frac{1-\beta}{1+\beta}\right)
\nonumber\\ && \quad
{}+\frac{1}{4}\ln^2(1+\beta)-\frac{1}{4}\ln^2(1-\beta)
\nonumber\\ && \quad \left.
{}-\frac{1}{2}{\rm Li}_2\left(\frac{1+\beta}{2}\right)
+\frac{1}{2}{\rm Li}_2\left(\frac{1-\beta}{2}\right) \right\}
+{\cal O}(\epsilon)
\label{A1}
\nonumber \\
\end{eqnarray}
where ${}_2F_1$ is the Gauss hypergeometric function.
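As a consistency check (our own remark), the $1/\epsilon$ pole quoted in the second form of Eq. (\ref{A1}) follows from the $\epsilon\to 0$ limit of the hypergeometric function,
\[
{}_2F_1\left(\frac{1}{2},1;\frac{3}{2};\beta^2\right)
=\frac{1}{2\beta}\ln\left(\frac{1+\beta}{1-\beta}\right) ,
\]
which, multiplied by the $\epsilon\to 0$ limit of the prefactor, $-8i\pi^{2}/\epsilon$, reproduces the term $(i4\pi^2/\beta)(1/\epsilon)\ln[(1-\beta)/(1+\beta)]$.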
\begin{eqnarray}
&& \hspace{-9mm}
\int\frac{d^n k}{k^2 \, (v_i\cdot k)^2}
=\frac{i}{\epsilon}\,
(-1)^{1-\frac{\epsilon}{2}} \, \pi^{2-\frac{\epsilon}{2}} \,
2^{3+3\frac{\epsilon}{2}}
\nonumber\\ && \quad \quad \times
(1-\beta^2)^{-1-\frac{\epsilon}{2}}
\Gamma\left(1+\frac{\epsilon}{2}\right)
\nonumber\\ && \hspace{-8mm}
=-\frac{i 8 \pi^2}{1-\beta^2}
\left[\frac{1}{\epsilon} -\frac{1}{2}\ln(1-\beta^2)\right.
\nonumber\\ && \hspace{7mm} \left.
{}+\frac{3}{2} \ln 2-\frac{1}{2}\ln \pi-\frac{\gamma_E}{2}
-\frac{i \pi}{2} \right]
+{\cal O}(\epsilon).
\label{A2}
\end{eqnarray}
\begin{eqnarray}
&& \hspace{-9mm}
\int\frac{d^n k}{(k^2)^{1+\frac{\epsilon}{2}} \, v_i\cdot k \, v_j\cdot k}
=\frac{i}{\epsilon^2} \, \frac{(-1)^{1-\epsilon}}{\beta} \, 2^{2\epsilon}
\nonumber\\ && \hspace{-4mm} \times
\pi^{2-\frac{\epsilon}{2}} \, \Gamma(1+\epsilon)
\frac{1}{\Gamma\left(1+\frac{\epsilon}{2}\right)}
\nonumber\\ && \hspace{-4mm} \times
\left[(1-\beta)^{-\epsilon} {}_2F_1\left(-\epsilon,
1+\epsilon;1-\epsilon;\frac{1-\beta}{2}\right) \right.
\nonumber\\ && \hspace{-2mm} \left.
{}-(1+\beta)^{-\epsilon} {}_2F_1\left(-\epsilon,
1+\epsilon;1-\epsilon;\frac{1+\beta}{2}\right)\right].
\label{A3}
\end{eqnarray}
\begin{eqnarray}
&& \hspace{-9mm}
\int\frac{d^n k}{k^2 \, (v_i\cdot k)^{1+\epsilon} \, v_j\cdot k}
=\frac{i \pi^{2-\frac{\epsilon}{2}}}{\epsilon (1+\epsilon)}
2^{2+\frac{9\epsilon}{2}} (-1)^{-1-\frac{3\epsilon}{2}}
\nonumber\\ && \hspace{-6mm} \times
(1-\beta^2)^{-1-\frac{3\epsilon}{2}} \,
\Gamma\left(1+\frac{3\epsilon}{2}\right) \frac{1}{\Gamma(1+\epsilon)}
\nonumber\\ && \hspace{-6mm} \times
F_1[1+\epsilon;1+\frac{3\epsilon}{2},1+\frac{3\epsilon}{2};2+\epsilon;
\frac{2\beta}{1+\beta},\frac{-2\beta}{1-\beta}]
\label{A4}
\end{eqnarray}
where $F_1$ is the Appell hypergeometric function.
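As an aside (our own remark), in the $\epsilon\to 0$ limit the Appell function in Eq. (\ref{A4}) reduces to an elementary function,
\[
F_1\left[1;1,1;2;x,y\right]=\frac{1}{x-y}\ln\left(\frac{1-y}{1-x}\right)
=\frac{(1-\beta^2)}{2\beta}\ln\left(\frac{1+\beta}{1-\beta}\right)
\]
for the arguments $x=2\beta/(1+\beta)$, $y=-2\beta/(1-\beta)$, which makes the coefficient of the explicit $1/\epsilon$ prefactor straightforward to check.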
\begin{eqnarray}
&& \hspace{-7mm}
\int\frac{d^n k_2}{k_2^2 \, \left[v_i\cdot (k_1+k_2)\right]^2}
=\frac{i}{\epsilon } \, \frac{(-1)^{-1+\frac{\epsilon}{2}}}{(1+\epsilon)} \,
2^{4-\frac{\epsilon}{2}} \, \pi^{\frac{3-\epsilon}{2}}
\nonumber\\ && \quad \times
(1-\beta^2)^{-1+\frac{\epsilon}{2}} \, (v_i \cdot k_1)^{-\epsilon} \,
\Gamma\left(1+\frac{\epsilon}{2}\right)
\nonumber\\ && \quad \times
\Gamma\left(1-\frac{\epsilon}{2}\right) \,
\Gamma\left(\frac{3+\epsilon}{2}\right) \, .
\label{A5}
\end{eqnarray}
\begin{eqnarray}
&& \hspace{-7mm}
\int\frac{d^n k}{k^2 \, (v_i\cdot k)^{2+\epsilon}}
=\frac{i \pi^{2-\frac{\epsilon}{2}}}{\epsilon (1+\epsilon)}
2^{2+\frac{9\epsilon}{2}} (-1)^{-1-\frac{3\epsilon}{2}}
\nonumber\\ && \quad \times
(1-\beta^2)^{-1-\frac{3\epsilon}{2}} \,
\Gamma\left(1+\frac{3\epsilon}{2}\right) \frac{1}{\Gamma(1+\epsilon)} \, .
\label{A6}
\end{eqnarray}
\begin{eqnarray}
&& \hspace{-7mm}
\int\frac{d^n k_1}{k_1^2 \, v_i\cdot k_1 \, v_i\cdot (k_1+k_2)}
=\frac{i}{\epsilon} \, (-1)^{\frac{\epsilon}{2}} \,
2^{2-\frac{\epsilon}{2}} \, \pi^{\frac{3-\epsilon}{2}}
\nonumber\\ && \quad \quad \times
(v_i \cdot k_2)^{-\epsilon} \, (1-\beta^2)^{-1+\frac{\epsilon}{2}} \,
\Gamma\left(1+\frac{\epsilon}{2}\right)
\nonumber\\ && \quad \quad \times
\Gamma\left(1-\frac{\epsilon}{2}\right) \,
\Gamma\left(\frac{\epsilon-1}{2}\right) \, .
\label{A7}
\end{eqnarray}
\begin{eqnarray}
&& \hspace{-7mm}
\int\frac{d^n k_2}{k_2^2 \, v_i\cdot k_2 \, [v_i\cdot (k_1+k_2)]^2}
=\frac{i}{1-\epsilon} \, (-1)^{1+\frac{\epsilon}{2}}
\nonumber\\ && \quad \times
2^{3-\frac{3\epsilon}{2}} \, \pi^{2-\frac{\epsilon}{2}} \,
(v_i \cdot k_1)^{-1-\epsilon} \, (1-\beta^2)^{-1+\frac{\epsilon}{2}}
\nonumber\\ && \quad \times
\Gamma\left(1-\frac{\epsilon}{2}\right) \, \Gamma(1+\epsilon) \, .
\label{A8}
\end{eqnarray}
\begin{eqnarray}
&& \hspace{-7mm}
\int\frac{d^n k}{(k^2)^{1+\frac{\epsilon}{2}} \, (v_i\cdot k)^2}
=\frac{i}{\epsilon} \, (-1)^{-1-\epsilon} \,
2^{2+3\epsilon} \, \pi^{2-\frac{\epsilon}{2}}
\nonumber\\ && \quad \quad \times
(1-\beta^2)^{-1-\epsilon} \, \Gamma(1+\epsilon) \,
\frac{1}{\Gamma\left(1+\frac{\epsilon}{2}\right)} \, .
\label{A9}
\end{eqnarray}
Recent advances in storage and networking technologies have resulted in many applications with interconnected relationships between objects.
This has led to the formation of gigantic inter-related and multi-typed heterogeneous information networks (HINs) across a variety of domains, such as e-government, e-commerce, biology, social media, etc. HINs provide an effective graph model to characterize the diverse relationships among different types of nodes. Understanding the vast amount of semantic information modeled in HINs has received a lot of attention. In particular, the concept of metapaths~\cite{Sun:2011:pathsim}, which connect two nodes through a sequence of relations between node types, is widely used to exploit rich semantics in HINs. In the last few years, many metapath-based algorithms have been proposed to carry out data mining tasks over HINs, including similarity search~\cite{Sun:2011:pathsim}, personalized recommendation~\cite{Jamali:2013:HeteroMF,Shi:2015:semantic}, and object clustering~\cite{Sun:2012:integrating}.
Despite their great potential, data mining tasks in HINs often suffer from high complexity, because real-world HINs are very large and have very complex network structure. For example, when measuring metapath similarity between two distant nodes, all metapath instances need to be enumerated. This makes it very time-consuming to perform mining tasks, such as link prediction or similarity search, across the entire network. This inspires a lot of research interests in network embedding that aims to embed the network into a low-dimensional vector space, such that the proximity (or similarity) between nodes in the original network can be preserved. Analysis and search over large-scale HINs can then be applied in the embedding space, with the help of efficient indexing or parallelized algorithms designed for vector spaces.
Conventional network embedding techniques~\cite{cao2015grarep,grover2016node2vec,perozzi2014deepwalk,tang2015line,wang2016structural,zhang2016homophily,zhang2017user,Zhang:2018:Survey}, however, focus on homogeneous networks, where all nodes and relations are considered to have a single type. Thus, they cannot handle the heterogeneity of node and relation types in HINs. Only very recently, metapath-based approaches~\cite{Chen:2017:task,Dong:2017:metapath2vec}, such as MetaPath2Vec~\cite{Dong:2017:metapath2vec}, are proposed to exploit specific metapaths as guidance to generate random walks and then to learn heterogeneous network embedding. For example, consider a DBLP bibliographic network, Fig.~\ref{fig1:schema} shows the HIN schema, which consists of three node types: Author (A), Paper (P) and Venue (V), and three edge types: an author writes a paper, a paper cites another paper, and a paper is published in a venue. The metapath $\mathcal{P}_{1}$: $A \rightarrow P \rightarrow V \rightarrow P \rightarrow A$ describes the relationship where both authors have papers published in the same venue, while $\mathcal{P}_{2}$: $A \rightarrow P \rightarrow A \rightarrow P \rightarrow A$ describes that two authors share the same co-author. If $\mathcal{P}_{1}$ is used by MetaPath2Vec to generate random walks, a possible random walk could be: $a_1 \rightarrow p_1 \rightarrow v_1 \rightarrow p_2 \rightarrow a_2$. Consider a window size of 2, authors $a_1$ and $a_2$ would share the same context node $v_1$, so they should be close to each other in the embedding space. This way, semantic similarity between nodes conveyed by metapaths is preserved.
\begin{figure}[!htbp]
\begin{scriptsize}
\centering
\subfigure[Schema]{
\label{fig1:schema}
\begin{tikzpicture}
\node (V) at (0,0) [circle, draw, thick] {V};
\node[circle, draw, thick, above = 0.9 cm of V] (P) {P};
\node[circle, draw, thick, above = 0.9 cm of P] (A) {A};
\node[circle, below = 0.3 cm of V] (V0) {};
\draw [->,thick] (A) -- node[midway,left] {$write$} (P);
\draw [->,thick] (V) -- node[midway,left] {$publish$} (P);
\draw [->,thick] (P) edge [loop, out=45, in=315, looseness=7] node[midway,right] {$cite$} (P);
\end{tikzpicture}}
\subfigure[Metapah and Metagraph]{
\label{fig1:meta}
\begin{tikzpicture}
\node (A1) at (0,0) [circle, draw, thick] {A};
\node[circle, draw, thick, right = 1 cm of A1] (P1) {P};
\node[circle, right = 1 cm of P1] (N) {};
\node[circle, draw, thick, right = 1 cm of N] (P2) {P};
\node[circle, draw, thick, right = 1 cm of P2] (A2) {A};
\node[circle, draw, thick, above =0.3 cm of N] (V1) {V};
\node[circle, draw, thick, below = 0.3 cm of N] (A3) {A};
\node[circle, below = 0.1 cm of A3] (V0) {};
\node[circle, draw, thick, above = 0.5 cm of V1] (A4) {A};
\node[circle, draw, thick, left = 1 cm of A4] (P3) {P};
\node[circle, draw, thick, left = 1 cm of P3] (A5) {A};
\node[circle, draw, thick, right = 1 cm of A4] (P4) {P};
\node[circle, draw, thick, right = 1 cm of P4] (A6) {A};
\node[circle, draw, thick, above = 0.5 cm of A4] (V2) {V};
\node[circle, draw, thick, left = 1 cm of V2] (P5) {P};
\node[circle, draw, thick, left = 1 cm of P5] (A7) {A};
\node[circle, draw, thick, right = 1 cm of V2] (P6) {P};
\node[circle, draw, thick, right = 1 cm of P6] (A8) {A};
\node[left = 0.3cm of A1] (metagraph) {$\mathcal{G}:$};
\node[left = 0.3cm of A5] (metapath1) {$\mathcal{P}_{2}:$};
\node[left = 0.3cm of A7] (metapath2) {$\mathcal{P}_{1}:$};
\draw [->,thick] (A1) -- node[midway,above] {\tiny$write$} (P1);
\draw [->,thick] (P2) -- node[midway,above] {\tiny$write^{-1}$} (A2);
\draw [->,thick] (P1) -- node[near end,left] {\tiny$publish^{-1}$} (V1);
\draw [->,thick] (P1) -- node[near end,left] {\tiny$write^{-1}$} (A3);
\draw [->,thick] (V1) -- node[near start,right] {\tiny$publish$} (P2);
\draw [->,thick] (A3) -- node[near start,right] {\tiny$write$} (P2);
\draw [->,thick] (A5) -- node[midway,above] {\tiny$write$} (P3);
\draw [->,thick] (P3) -- node[midway,above] {\tiny$write^{-1}$} (A4);
\draw [->,thick] (A4) -- node[midway,above] {\tiny$write$} (P4);
\draw [->,thick] (P4) -- node[midway,above] {\tiny$write^{-1}$} (A6);
\draw [->,thick] (A7) -- node[midway,above] {\tiny$write$} (P5);
\draw [->,thick] (P5) -- node[midway,above] {\tiny$publish^{-1}$} (V2);
\draw [->,thick] (V2) -- node[midway,above] {\tiny$publish$} (P6);
\draw [->,thick] (P6) -- node[midway,above] {\tiny$write^{-1}$} (A8);
\end{tikzpicture}}
\caption{Schema, Metapath and Metagraph}
\label{fig1}
\end{scriptsize}
\end{figure}
Due to difficulties in information access, however, real-world HINs often have sparse connections or many missing links. As a result, metapath-based algorithms may fail to capture latent semantics between distant nodes. As an example, consider the bibliographic network, where many papers may not have venue information, as they may be preprints submitted to upcoming venues or their venues are simply missing. The lack of paper-venue connection would result in many short random walks, failing to capture hidden semantic similarity between distant nodes. On the other hand, besides publishing papers on same venues, distant authors can also be connected by other types of relations, like sharing common co-authors or publishing papers with similar topics. Such information should be taken into account to augment metapath-based embedding techniques.
\begin{figure}[!htbp]
\begin{scriptsize}
\centering
\begin{tikzpicture}
\node (a1) at (0,0) [thick, scale = 0.8, circle, draw] {$a_{1}$};
\node[scale = 0.8, circle, draw, right = 0.5cm of a1] (p1) {$p_{1}$};
\node[scale = 0.8, circle, draw, right = 0.5cm of p1] (v1) {$v_{1}$};
\node[scale = 1.2, right = of a1] (cross) {\color{red}$\mathbf{\times}$};
\node[scale = 0.8, circle, draw, above = 0.5cm of v1] (a2) {$a_{2}$};
\node[scale = 0.8, circle, draw, right = 0.5cm of v1] (p2) {$p_{2}$};
\node[thick, scale = 0.8, circle, draw, right = 0.5cm of p2] (a3) {$a_{3}$};
\node[scale = 0.8, circle, draw, right = 0.5cm of a3] (p4) {$p_{4}$};
\node[scale = 0.8, circle, draw, right = 0.5cm of p4] (v2) {$v_{2}$};
\node[scale = 0.8, circle, draw, right = 0.5cm of v2] (p5) {$p_{5}$};
\node[thick, scale = 0.8, circle, draw, right = 0.5cm of p5] (a4) {$a_{4}$};
\draw [->] (a1) -- (p1);
\draw [->] (p1) -- (v1);
\draw [->] (v1) -- (p2);
\draw [->] (p2) -- (a3);
\draw [->] (a3) -- (p4);
\draw [->] (p4) -- (v2);
\draw [->] (v2) -- (p5);
\draw [->] (p5) -- (a4);
\draw [->] (p1) edge [out=90, in=180, looseness=0.8] (a2);
\draw [->] (a2) edge [out=0, in=90, looseness=0.8] (p2);
\end{tikzpicture}
\caption{An example of random walk from $a_1$ to $a_4$ based on metagraph $\mathcal{G}$, which cannot be generated using metapaths $\mathcal{P}_1$ and $\mathcal{P}_2$. This justifies the ability of MetaGraph2Vec to provide richer structural contexts to measure semantic similarity between distant nodes.}
\label{fig2:example}
\end{scriptsize}
\end{figure}
Inspired by this observation, we propose a new method for heterogeneous network embedding, called MetaGraph2Vec, that learns more informative embeddings by capturing richer semantic relations between distant nodes. The main idea is to use a metagraph~\cite{Huang:2016:metastructure} to guide random walk generation in an HIN, which fully encodes latent semantic relations between distant nodes at the network level. A metagraph has the strength to describe complex relationships between nodes and to provide more flexible matching when generating random walks in an HIN. Fig.~\ref{fig1:meta} illustrates a metagraph $\mathcal{G}$, which describes that two authors are relevant if they have papers published in the same venue or they share the same co-authors. Metagraph $\mathcal{G}$ can be considered as a union of metapaths $\mathcal{P}_1$ and $\mathcal{P}_2$, but when generating random walks, it can provide a superset of the random walks generated by both $\mathcal{P}_1$ and $\mathcal{P}_2$. Fig.~\ref{fig2:example} gives an example to illustrate the intuition behind this. When one uses metapath $\mathcal{P}_1$ to guide random walks, if paper $p_1$ has no venue information, the random walk would stop at $p_1$ because the link from $p_1$ to $v_1$ is missing. This results in generating too many short random walks that cannot reveal the semantic relation between authors $a_1$ and $a_3$. In contrast, when metagraph $\mathcal{G}$ is used as guidance, the random walk segments $a_1 \rightarrow p_1 \rightarrow a_2 \rightarrow p_2 \rightarrow a_3$ and $a_3 \rightarrow p_4 \rightarrow v_2 \rightarrow p_5 \rightarrow a_4$ are generated by taking the paths through $A$ and through $V$ in $\mathcal{G}$, respectively. This testifies to the ability of MetaGraph2Vec to provide richer structural contexts to measure semantic similarity between distant nodes, thereby enabling more informative network embedding.
Based on this idea, in MetaGraph2Vec, we first propose metagraph guided random walks in HINs to generate heterogeneous neighborhoods that fully encode rich semantic relations between distant nodes. Second, we generalize the Skip-Gram model~\cite{mikolov2013distributed} to learn latent embeddings for multiple types of nodes. Finally, we develop a heterogeneous negative sampling based method that facilitates the efficient and accurate prediction of a node's heterogeneous neighborhood. MetaGraph2Vec has the advantage of offering more flexible ways to generate random walks in HINs so that richer structural contexts and semantics between nodes can be preserved in the embedding space.
The contributions of our paper are summarized as follows:
\begin{enumerate}
\item We advocate a new \textit{metagraph} descriptor which augments metapaths for flexible and reliable relationship description in HINs. Our study investigates the ineffectiveness of existing metapath based node proximity in dealing with sparse HINs, and explains the advantage of metagraph based solutions.
\item We propose a new network embedding method, called MetaGraph2Vec, that uses metagraph to capture richer structural contexts and semantics between distant nodes and to learn latent embeddings for multiple types of nodes in HINs.
\item We demonstrate the effectiveness of our proposed method through various heterogeneous network mining tasks such as node classification, node clustering, and similarity search, outperforming the state-of-the-art.
\end{enumerate}
\section{Preliminaries and Problem Definition}
In this section, we formalize the problem of heterogeneous information network embedding and give some preliminary definitions.
\begin{definition}A \textbf{heterogeneous information network (HIN)} is defined as a directed graph $G=(V,E)$ with a node type mapping function $\phi:V\rightarrow\mathcal{L}$ and an edge type mapping function $\psi:E\rightarrow\mathcal{R}$. $T_{G}=(\mathcal{L},\mathcal{R})$ is the network schema that defines the node type set $\mathcal{L}$ with $\phi(v)\in\mathcal{L}$ for each node $v\in V$, and the allowable link types $\mathcal{R}$ with $\psi(e)\in\mathcal{R}$ for each edge $e\in E$.
\end{definition}
\begin{example}For a bibliographic HIN composed of authors, papers, and venues, Fig.~\ref{fig1:schema} defines its network schema. The network schema contains three node types, author (A), paper (P) and venue (V), and defines three allowable relations, $A\xrightarrow{write}P$, $P\xrightarrow{cite}P$ and $V\xrightarrow{publish}P$. Implicitly, the network schema also defines the reverse relations, i.e., $P\xrightarrow{write^{-1}}A$, $P\xrightarrow{cite^{-1}}P$ and $P\xrightarrow{publish^{-1}}V$.
\end{example}
\begin{definition}Given an HIN $G$, \textbf{heterogeneous network embedding} aims to learn a mapping function $\mathrm{\Phi}:V\rightarrow\mathbb{R}^{d}$ that embeds the network nodes $v\in V$ into a low-dimensional Euclidean space with $d\ll|V|$ and guarantees that nodes sharing similar semantics in $G$ have close low-dimensional representations $\mathrm{\Phi}(v)$.
\end{definition}
\begin{definition} A \textbf{metagraph} is a directed acyclic graph (DAG) $\mathcal{G}=(N,M,n_{s},n_{t})$ defined on the given HIN schema $T_{G}=(\mathcal{L},\mathcal{R})$, which has only a single source node $n_{s}$ (\textit{i.e.}, with 0 in-degree) and a single target node $n_{t}$ (\textit{i.e.}, with 0 out-degree). $N$ is the set of the occurrences of node types with $n\in\mathcal{L}$ for each $n\in N$. $M$ is the set of the occurrences of edge types with $m\in\mathcal{R}$ for each $m\in M$. \end{definition}
As metagraph $\mathcal{G}$ depicts complex composite relations between nodes of type $n_{s}$ and $n_{t}$, $N$ and $M$ may contain duplicate node and edge types. To clarify, we define the \textit{layer} of each node in $N$ as its topological order in $\mathcal{G}$ and denote the number of layers by $d_{\mathcal{G}}$. According to nodes' layer, we can partition $N$ into disjoint subsets $N[i]\ (1\leq i\leq d_{\mathcal{G}})$, which represents the set of nodes in layer $i$. Each $N[i]$ does not contain duplicate nodes. Now each element in $N$ and $M$ can be uniquely described as follows. For each $n$ in $N$, there exists a unique $i$ with $1\leq i\leq d_{\mathcal{G}}$ satisfying $n\in N[i]$ and we define the layer of node $n$ as $l(n)=i$. For each $m\in M$, there exist unique $i$ and $j$ with $1\leq i<j\leq d_{\mathcal{G}}$ satisfying $m\in N[i]\times N[j]$.
\begin{example}Given a bibliographic HIN $G$ and a network schema $T_{G}$ shown in Fig.~\ref{fig1:schema}, Fig.~\ref{fig1:meta} shows an example of metagraph $\mathcal{G}=(N,M,n_{s},n_{t})$ with $n_{s}=n_{t}=A$. There are $5$ layers in $\mathcal{G}$ and node set $N$ can be partitioned into 5 disjoint subsets, one for each layer, where $N[1]=\{A\}, N[2]=\{P\}, N[3]=\{A, V\}, N[4]=\{P\}, N[5]=\{A\}$.
\end{example}
\begin{definition} For a metagraph $\mathcal{G}=(N,M,n_{s},n_{t})$ with $n_{s}=n_{t}$, its \textbf{recursive metagraph} $\mathcal{G}^{\infty}=(N^{\infty},M^{\infty},n^{\infty}_{s},n^{\infty}_{t})$ is a metagraph formed by tail-head concatenation of an arbitrary number of $\mathcal{G}$. $\mathcal{G}^{\infty}$ satisfies the following conditions:
\begin{enumerate}
\item $N^{\infty}[i]=N[i]$ for $1\leq i< d_{\mathcal{G}}$, and $N^{\infty}[i]={N[i\ \mathrm{mod}\ d_{\mathcal{G}}+1]}$ for $i\geq d_{\mathcal{G}}$.
\item For each $m\in N^{\infty}[i]\times N^{\infty}[j]$ with any $i$ and $j$, $m\in M^{\infty}$ if and only if one of the following two conditions is satisfied:
\begin{enumerate}
\item $1\leq i<j\leq d_{\mathcal{G}}$ and $m\in M\bigcap(N[i]\times N[j])$;
\item $i\geq d_{\mathcal{G}}$, $1\leq j-i\leq d_{\mathcal{G}}$ and $m\in M\bigcap(N[i\mod d_{\mathcal{G}}+1]\times {N[j\mod d_{\mathcal{G}}+1]})$.
\end{enumerate}
\end{enumerate}In the recursive metagraph $\mathcal{G}^{\infty}$, for each node $n\in N^{\infty}$, we define its layer as $l^{\infty}(n)$.
\end{definition}
\begin{definition}\label{randwalkseq}
Given an HIN $G$ and a metagraph $\mathcal{G}=(N,M,n_{s},n_{t})$ with $n_{s}=n_{t}$ defined on its network schema $T_{G}$, together with the corresponding recursive metagraph $\mathcal{G}^{\infty}=(N^{\infty},M^{\infty},n^{\infty}_{s},n^{\infty}_{t})$, we define the random walk node sequence constrained by metagraph $\mathcal{G}$ as $\mathcal{S}_{\mathcal{G}}=\{v_{1},v_{2},\cdots,v_{L}\}$ with length $L$ satisfying the following conditions:
\begin{enumerate}
\item For each $v_{i}\ (1\leq i\leq L)$ in $\mathcal{S}_{\mathcal{G}}$, $v_{i}\in V$ and for each $v_{i}\ (1< i\leq L)$ in $\mathcal{S}_{\mathcal{G}}$, $(v_{i-1},v_{i})\in E$. Namely, the sequence $\mathcal{S}_{\mathcal{G}}$ respects the network structure in $G$.
\item $\phi(v_{1})=n_{s}$ and $l^{\infty}(\phi(v_{1}))=1$. Namely, the random walk starts from a node with type $n_{s}$.
\item For each $v_{i}\ (1< i\leq L)$ in $\mathcal{S}_{\mathcal{G}}$, there exists a unique $j$ satisfying $(\phi(v_{i-1}),\phi(v_{i}))\in M^{\infty}\bigcap (N^{\infty}[l^{\infty}(\phi(v_{i-1}))]\times N^{\infty}[j])$ with $j>l^{\infty}(\phi(v_{i-1}))$, $\phi(v_{i})\in N^{\infty}[j]$ and $l^{\infty}(\phi(v_{i}))=j$. Namely, the random walk is constrained by the recursive metagraph $\mathcal{G}^{\infty}$.
\end{enumerate}
\end{definition}
\begin{example}
Given metagraph $\mathcal{G}$ in Fig.~\ref{fig1:meta}, a possible random walk is $a_1 \rightarrow p_1 \rightarrow v_1 \rightarrow p_2 \rightarrow a_2 \rightarrow p_3 \rightarrow a_3 \rightarrow p_4 \rightarrow a_5$. It describes that author $a_1$ and $a_2$ publish papers in the same venue $v_1$ and author $a_2$ and $a_5$ share the common co-author $a_3$. Compared with metapath $\mathcal{P}_1$ given in Fig.~\ref{fig1:meta}, metagraph $\mathcal{G}$ captures richer semantic relations between distant nodes.
\end{example}
\section{Methodology}
In this section, we first present metagraph-guided random walk to generate heterogeneous neighborhood in an HIN, and then present the MetaGraph2Vec learning strategy to learn latent embeddings of multiple types of nodes.
\subsection{MetaGraph Guided Random Walk}
In an HIN $G=(V,E)$, assuming a metagraph $\mathcal{G}=(N,M,n_{s},n_{t})$ with $n_{s}=n_{t}$ is given according to domain knowledge, we can get the corresponding recursive metagraph $\mathcal{G}^{\infty}=(N^{\infty},M^{\infty},n^{\infty}_{s},n^{\infty}_{t})$. After choosing a node of type $n_{s}$, we can start the metagraph guided random walk. We denote the transition probability guided by metagraph $\mathcal{G}$ at $i$th step as $\mathrm{Pr}(v_{i}|v_{i-1};\mathcal{G}^{\infty})$. According to Definition $\ref{randwalkseq}$, if $(v_{i-1},v_{i})\notin E$, or $(v_{i-1},v_{i})\in E$ but there is no link from node type $\phi(v_{i-1})$ at layer $l^{\infty}(\phi(v_{i-1}))$ to node type $\phi(v_{i})$ in the recursive metagraph $\mathcal{G}^{\infty}$, the transition probability $\mathrm{Pr}(v_{i}|v_{i-1};\mathcal{G}^{\infty})$ is $0$. The probability $\mathrm{Pr}(v_{i}|v_{i-1};\mathcal{G}^{\infty})$ for $v_{i}$ that satisfies the conditions of Definition $\ref{randwalkseq}$ is defined as
{\small\begin{equation}
\mathrm{Pr}(v_{i}|v_{i-1};\mathcal{G}^{\infty})=\frac{1}{T_{\mathcal{G}^{\infty}}(v_{i-1})}\times\frac{1}{|\{u|(v_{i-1},u)\in E, \phi(v_{i})=\phi(u)\}|}.
\end{equation}}Above, $T_{\mathcal{G}^{\infty}}(v_{i-1})$ is the number of edge types among the edges starting from $v_{i-1}$ that satisfy the constraints of the recursive metagraph $\mathcal{G}^{\infty}$, which is formalized as
{\footnotesize\begin{equation}
T_{\mathcal{G}^{\infty}}(v_{i-1})={|\{j| (\phi(v_{i-1}),\phi(u))\in M^{\infty}\bigcap (N^{\infty}[l^{\infty}(\phi(v_{i-1}))]\times N^{\infty}[j]), (v_{i-1},u)\in E \}|},
\end{equation}}and $|\{u|(v_{i-1},u)\in E, \phi(v_{i})=\phi(u)\}|$ is the number of $v_{i-1}$'s 1-hop forward neighbors sharing common node type with node $v_{i}$.
At step $i$, the metagraph guided random walk works as follows. Among the edges starting from $v_{i-1}$, it first counts the number of edge types satisfying the constraints and randomly selects one qualified edge type. Then it randomly walks across one edge of the selected edge type to the next node. If there are no qualified edge types, the random walk terminates.
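For concreteness, a minimal Python-style sketch of one such step is given below (our illustration only; the data structures \texttt{G}, \texttt{node\_type} and \texttt{meta\_next} are hypothetical encodings of the HIN and the recursive metagraph, not part of the formal definitions above):
\begin{verbatim}
import random

def walk_step(G, node_type, meta_next, cur, layer):
    # G:         node -> list of neighbour nodes (HIN edges)
    # node_type: node -> type label, e.g. 'A', 'P', 'V'
    # meta_next: (layer, type) -> list of (next_layer, next_type)
    #            pairs allowed by the recursive metagraph
    # Collect the qualified edge types: allowed by the metagraph
    # and realised by at least one neighbour of the current node.
    options = []
    for nl, nt in meta_next.get((layer, node_type[cur]), []):
        nbrs = [u for u in G[cur] if node_type[u] == nt]
        if nbrs:
            options.append((nl, nbrs))
    if not options:                  # no qualified edge type: stop
        return None
    nl, nbrs = random.choice(options)      # uniform over edge types
    return random.choice(nbrs), nl         # uniform over neighbours
\end{verbatim}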
\subsection{MetaGraph2Vec Embedding Learning}
Given a metagraph guided random walk $\mathcal{S}_{\mathcal{G}}=\{v_{1},v_{2},\cdots,v_{L}\}$ with length $L$, the node embedding function $\mathrm{\Phi}(\cdot)$ is learned by maximizing the probability of the occurrence of $v_{i}$'s context nodes within $w$ window size conditioned on $\mathrm{\Phi}(v_{i})$:
{\small\begin{equation}
\min_{\mathrm{\Phi}}-\log\mathrm{Pr}(\{v_{i-w},\cdots,v_{i+w}\}\setminus v_{i}|\mathrm{\Phi}(v_{i})),
\end{equation}}where,
{\small\begin{equation}
\mathrm{Pr}(\{v_{i-w},\cdots,v_{i+w}\}\setminus v_{i}|\mathrm{\Phi}(v_{i}))=\prod_{j=i-w,j\neq i}^{i+w}\mathrm{Pr}(v_{j}|\mathrm{\Phi}(v_{i})).
\end{equation}}Following MetaPath2Vec~\cite{Dong:2017:metapath2vec}, the probability $\mathrm{Pr}(v_{j}|\mathrm{\Phi}(v_{i}))$ is modeled in two different ways:
\begin{enumerate}
\item \textbf{Homogeneous Skip-Gram} that assumes the probability $\mathrm{Pr}(v_{j}|\mathrm{\Phi}(v_{i}))$ does not depend on the type of $v_{j}$, and thus models the probability $\mathrm{Pr}(v_{j}|\mathrm{\Phi}(v_{i}))$ directly by softmax:
{\small\begin{equation}
\mathrm{Pr}(v_{j}|\mathrm{\Phi}(v_{i}))=\frac{\exp(\mathrm{\Psi}(v_{j})\cdot\mathrm{\Phi}(v_{i}))}{\sum_{u\in V}\exp(\mathrm{\Psi}(u)\cdot\mathrm{\Phi}(v_{i}))}.
\end{equation}}
\item \textbf{Heterogeneous Skip-Gram} that assumes the probability $\mathrm{Pr}(v_{j}|\mathrm{\Phi}(v_{i}))$ is related to the type of node $v_{j}$:
{\small\begin{equation}
\mathrm{Pr}(v_{j}|\mathrm{\Phi}(v_{i}))=\mathrm{Pr}(v_{j}|\mathrm{\Phi}(v_{i}),\phi(v_{j}))\mathrm{Pr}(\phi(v_{j})|\mathrm{\Phi}(v_{i})),
\end{equation}}where the probability $\mathrm{Pr}(v_{j}|\mathrm{\Phi}(v_{i}),\phi(v_{j}))$ is modeled via softmax:
{\small\begin{equation}
\mathrm{Pr}(v_{j}|\mathrm{\Phi}(v_{i}),\phi(v_{j}))=\frac{\exp(\mathrm{\Psi}(v_{j})\cdot\mathrm{\Phi}(v_{i}))}{\sum_{u\in V, \phi(u)=\phi(v_{j})}\exp(\mathrm{\Psi}(u)\cdot\mathrm{\Phi}(v_{i}))}.
\end{equation}}
\end{enumerate}
To learn node embeddings, the MetaGraph2Vec algorithm first generates a set of metagraph guided random walks, and then counts the occurrence frequency $\mathbb{F}(v_{i},v_{j})$ of each node context pair $(v_{i},v_{j})$ within $w$ window size. After that, stochastic gradient descent is used to learn the parameters. At each iteration, a node context pair $(v_{i},v_{j})$ is sampled according to the distribution of $\mathbb{F}(v_{i},v_{j})$, and the gradients are updated to minimize the following objective,
{\small\begin{equation}\label{SGD_obj}
\mathcal{O}_{ij}=-\log\mathrm{Pr}(v_{j}|\mathrm{\Phi}(v_{i})).
\end{equation}}To speed up training, negative sampling is used to approximate the objective function:
{\small\begin{equation}\label{SGD_obj_1}
\mathcal{O}_{ij}=-\log\sigma(\mathrm{\Psi}(v_{j})\cdot\mathrm{\Phi}(v_{i}))-\sum_{k=1}^{K}\log\sigma(-\mathrm{\Psi}(v_{N_{j,k}})\cdot\mathrm{\Phi}(v_{i})),
\end{equation}}where $\sigma(\cdot)$ is the sigmoid function, $v_{N_{j,k}}$ is the $k$th negative node sampled for node $v_{j}$ and $K$ is the number of negative samples.
For Homogeneous Skip-Gram, $v_{N_{j,k}}$ is sampled from all nodes in $V$; for Heterogeneous Skip-Gram, $v_{N_{j,k}}$ is sampled from nodes with type $\phi(v_{j})$. Formally, parameters $\mathrm{\Phi}$ and $\mathrm{\Psi}$ are updated as follows:
{\small\begin{equation}\label{update_para}
\begin{aligned}
\mathrm{\Phi}=\mathrm{\Phi}-\alpha\frac{\partial\mathcal{O}_{ij}}{\partial \mathrm{\Phi}};\ \ \ \ \mathrm{\Psi}=\mathrm{\Psi}-\alpha\frac{\partial\mathcal{O}_{ij}}{\partial \mathrm{\Psi}},
\end{aligned}
\end{equation}}where $\alpha$ is the learning rate.
The pseudo code of the MetaGraph2Vec algorithm is given in Algorithm~\ref{alg:metgraph2vec}.
\begin{algorithm}[htb]
\caption{The MetaGraph2Vec Algorithm}
\label{alg:metgraph2vec}
\begin{algorithmic}[1]
\REQUIRE ~~\\
(1) A heterogeneous information network (HIN): $G=(V,E)$;\\
(2) A metagraph: $\mathcal{G}=(N,M,n_{s},n_{t})$ with $n_{s}=n_{t}$;\\
(3) Maximum number of iterations: $MaxIterations$;
\ENSURE ~~\\
Node embedding $\mathrm{\Phi}(\cdot)$ for each $v\in V$;
\STATE $\mathbb{S}$ $\leftarrow$ generate a set of random walks according to $\mathcal{G}$;
\STATE $\mathbb{F}(v_i,v_j)$ $\leftarrow$ count frequency of node context pairs ($v_{i},v_{j})$ in $\mathbb{S}$;
\STATE $Iterations \leftarrow 0;$
\REPEAT
\STATE $(v_i,v_j) \leftarrow$ sample a node context pair according to the distribution of $\mathbb{F}(v_i,v_j)$;
\STATE $(\mathrm{\Phi}, \mathrm{\Psi}) \leftarrow$ update parameters using $(v_i,v_j)$ and Eq.~(\ref{update_para});
\STATE $Iterations \leftarrow Iterations+1$;
\UNTIL {$convergence$ or $Iterations\ge MaxIterations$ }\label{code:iteration}
\STATE \textbf{return} $\mathrm{\Phi}$;
\end{algorithmic}
\end{algorithm}
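To make lines 5--6 of Algorithm~\ref{alg:metgraph2vec} concrete, a minimal Python sketch of one negative-sampling update is shown below (our illustration only; for Heterogeneous Skip-Gram the negatives would be drawn only from nodes of the same type as the context node):
\begin{verbatim}
import numpy as np

def update(Phi, Psi, i, j, negatives, lr=0.025):
    # Phi, Psi:  |V| x d arrays of node and context embeddings
    # (i, j):    sampled node-context pair; negatives: K node ids
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    grad_i = np.zeros_like(Phi[i])
    # positive pair: pull Phi[i] and Psi[j] together
    g = 1.0 - sigmoid(np.dot(Psi[j], Phi[i]))
    grad_i += g * Psi[j]
    Psi[j] += lr * g * Phi[i]
    # negative samples: push Phi[i] away from their context vectors
    for k in negatives:
        g = -sigmoid(np.dot(Psi[k], Phi[i]))
        grad_i += g * Psi[k]
        Psi[k] += lr * g * Phi[i]
    Phi[i] += lr * grad_i
\end{verbatim}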
\section{Experiments}
In this section, we demonstrate the effectiveness of the proposed algorithms for heterogeneous network embedding via various network mining tasks, including node classification, node clustering, and similarity search.
\subsection{Experimental Settings}
For evaluation, we carry out experiments on the DBLP\footnote{https://aminer.org/citation (Version 3 is used)} bibliographic HIN, which is composed of papers, authors, venues, and their relationships. Based on papers' venues, we extract papers falling into four research areas: \textit{Database}, \textit{Data Mining}, \textit{Artificial Intelligence}, \textit{Computer Vision}, and preserve the associated authors and venues, together with their relations. To simulate the paper-venue sparsity, we randomly select 1/5 of the papers and remove their paper-venue relations. This results in a dataset that contains 70,910 papers, 67,950 authors, 97 venues, as well as 189,875 paper-author relations, 91,048 paper-paper relations and 56,728 venue-paper relations.
To evaluate the quality of the learned embeddings, we carry out multi-class classification, clustering and similarity search on author embeddings. The metapaths and metagraph shown in Fig.~\ref{fig1:meta} are used to measure the proximity between authors. Each author's ground-truth label is determined by the research area of his/her major publications.
We evaluate MetaGraph2Vec with Homogeneous Skip-Gram and its variant MetaGraph2Vec++ with Heterogeneous Skip-Gram. We compare their performance with the following state-of-the-art baseline methods:
\begin{itemize}
\item DeepWalk~\cite{perozzi2014deepwalk}: It uses the uniform random walk that treats nodes of different types equally to generate random walks.
\item LINE~\cite{tang2015line}: We use two versions of LINE, namely LINE\_1 and LINE\_2, which model the first-order and second-order proximity, respectively. Both neglect different node types and edge types.
\item MetaPath2Vec and MetaPath2Vec++~\cite{Dong:2017:metapath2vec}: They are the state-of-the-art network embedding algorithms for HINs, with MetaPath2Vec++ being a variant of MetaPath2Vec that uses heterogeneous negative sampling. To demonstrate the strength of metagraph over metapath, we compare with different versions of the two algorithms: $\mathcal{P}_1$ MetaPath2Vec, $\mathcal{P}_2$ MetaPath2Vec and Mixed MetaPath2Vec, which uses $\mathcal{P}_1$ only, $\mathcal{P}_2$ only, or both, to guide random walks, as well as their counterparts, $\mathcal{P}_1$ MetaPath2Vec++, $\mathcal{P}_2$ MetaPath2Vec++, and Mixed MetaPath2Vec++.
\end{itemize}
For all random walk based algorithms, for efficiency reasons we start $\gamma=80$ random walks of length $L=100$ from each author. For the mixed MetaPath2Vec methods, $\gamma/2=40$ random walks are generated by following metapaths $\mathcal{P}_1$ and $\mathcal{P}_2$, respectively. To improve the efficiency, we use our optimization strategy for all random walk based methods: after random walks are generated, we first count the co-occurrence frequencies of node context pairs using a window size $w=5$, and according to the frequency distribution, we then sample one node context pair to do stochastic gradient descent sequentially. For fair comparisons, the total number of samples (iterations) is set to 100 million, for both random walk based methods and LINE. For all methods, the dimension of learned node embeddings $d$ is set to $128$.
\subsection{Node Classification Results}
We first carry out multi-class classification on the learned author embeddings to compare the performance of all algorithms. We vary the ratio of training data from 1\% to 9\%. For each training ratio, we randomly split the data into training and test sets 10 times and report the averaged accuracy.
\begin{table}[!htbp]
\centering
\caption{Multi-class author classification on DBLP}
\label{author_classification}
\begin{scriptsize}
\renewcommand{\arraystretch}{1}
\setlength\tabcolsep{3pt}
\begin{tabular}{lccccccccc}
\toprule
Method & 1\% & 2\% & 3\% & 4\% & 5\% & 6\% & 7\% & 8\% & 9\%\\
\midrule
DeepWalk & 82.39 &86.04 &87.16 &88.15 &89.10 &89.49 &90.02 &90.25 &90.56\\
LINE\_1 &71.25 &79.25 &83.11 &85.60 &87.17 &88.29 &89.05 &89.45 &89.63\\
LINE\_2 &75.70 &80.80 &82.49 &83.88 &84.83 &85.71 &86.58 &86.90 &86.93\\
$\mathcal{P}_1$ MetaPath2Vec & 83.24 &87.70 &88.42 &89.05 &89.26 &89.46 &89.51 &89.76 &89.69\\
$\mathcal{P}_1$ MetaPath2Vec++& 82.14 &86.02 &87.04 &87.96 &88.47 &88.66 &88.90 &88.91 &89.02\\
$\mathcal{P}_2$ MetaPath2Vec & 49.59 &52.12 &53.76 &54.67 &55.68 &55.49 &55.83 &55.68 &56.07\\
$\mathcal{P}_2$ MetaPath2Vec++& 50.31 &52.50 &53.72 &54.47 &55.53 &55.78 &56.30 &56.36 &57.02\\
Mixed MetaPath2Vec & 83.86 &87.34 &88.37 &89.22 &89.70 &90.01 &90.37 &90.42 &90.71 \\
Mixed MetaPath2Vec++& 83.08 &86.91 &88.13 &89.07 &89.69 &90.09 &90.58 &90.68 &90.87\\
MetaGraph2Vec &\textbf{85.76} &\textbf{89.00} &89.79 &90.55 &91.02 &91.30 &91.72 &92.13 &92.25\\
MetaGraph2Vec++ & 85.20 &88.97 &\textbf{89.99} &\textbf{90.78} &\textbf{91.42} &\textbf{91.65} &\textbf{92.13} &\textbf{92.42} &\textbf{92.46}\\
\bottomrule
\end{tabular}
\end{scriptsize}
\end{table}
Table \ref{author_classification} shows the multi-class author classification results in terms of accuracy (\%) for all algorithms, with the highest score highlighted in \textbf{bold}. Our MetaGraph2Vec and MetaGraph2Vec++ algorithms achieve the best performance in all cases. The performance gain over metapath based algorithms demonstrates the capacity of MetaGraph2Vec to capture complex semantic relations between distant authors in sparse networks, and the effectiveness of this semantic similarity in learning informative node embeddings. By considering metapaths between different types of nodes, MetaPath2Vec can capture better proximity properties and learn better author embeddings than DeepWalk and LINE, which neglect different node types and edge types.
\subsection{Node Clustering Results}
We also carry out node clustering experiments to compare different embedding algorithms. We take the learned author embeddings produced by different methods as input and adopt $K$-means to do clustering. With authors' labels as ground truth, we evaluate the quality of clustering using three metrics, including Accuracy, F score and NMI. From Table \ref{clustering}, we can see that MetaGraph2Vec and MetaGraph2Vec++ yield the best clustering results on all three metrics.
\begin{table}[!htbp]
\centering
\caption{Author clustering on DBLP}
\label{clustering}
\begin{scriptsize}
\begin{tabular}{lccc}
\toprule
Method & Accuracy(\%) & F(\%) & NMI(\%)\\
\midrule
DeepWalk & 73.87 & 67.39 & 42.02\\
LINE\_1 & 50.26 & 46.33 & 17.94\\
LINE\_2 & 52.14 & 45.89 & 19.55\\
$\mathcal{P}_1$ MetaPath2Vec & 69.39 & 63.05 & 41.72\\
$\mathcal{P}_1$ MetaPath2Vec++ & 66.11 & 58.68 & 36.45\\
$\mathcal{P}_2$ MetaPath2Vec & 47.51 &43.30 & 6.17\\
$\mathcal{P}_2$ MetaPath2Vec++& 47.65 & 41.48 & 6.56\\
Mixed MetaPath2Vec & 77.20 & 69.50 & 49.43\\
Mixed MetaPath2Vec++& 72.36 & 65.09 & 42.40\\
MetaGraph2Vec & \textbf{78.00} & \textbf{70.96}& \textbf{51.40}\\
MetaGraph2Vec++ &77.48&70.69&50.60\\
\bottomrule
\end{tabular}
\end{scriptsize}
\end{table}
\subsection{Node Similarity Search}
Experiments are also performed on similarity search to verify the ability of MetaGraph2Vec to capture author proximities in the embedding space. We randomly select 1,000 authors and rank their similar authors according to cosine similarity score. Table \ref{search} gives the averaged precision@100 and precision@500 for different embedding algorithms. As can be seen, our MetaGraph2Vec and MetaGraph2Vec++ achieve the best search precisions.
\begin{table}[!htbp]
\centering
\caption{Author similarity search on DBLP}
\label{search}
\begin{scriptsize}
\begin{tabular}{lccc}
\toprule
Methods & Precision$@100$ (\%) & Precision$@500$ (\%) \\
\midrule
DeepWalk & 91.65 & 91.44 \\
LINE\_1 & 91.18 & 89.88 \\
LINE\_2 & 91.92 & 91.38 \\
$\mathcal{P}_1$ MetaPath2Vec & 88.21 & 88.64 \\
$\mathcal{P}_1$ MetaPath2Vec++& 88.68 & 88.58 \\
$\mathcal{P}_2$ MetaPath2Vec & 53.98 & 44.11 \\
$\mathcal{P}_2$ MetaPath2Vec++& 53.39 & 44.11 \\
Mixed MetaPath2Vec & 90.94 & 90.27 \\
Mixed MetaPath2Vec++& 91.49 & 90.69 \\
MetaGraph2Vec & 92.50& \textbf{92.17} \\
MetaGraph2Vec++ & \textbf{92.59} &91.92\\
\bottomrule
\end{tabular}
\end{scriptsize}
\end{table}
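The search protocol can be summarized by the short sketch below (again a hypothetical illustration rather than the authors' code); here a retrieved author is counted as relevant if it shares the query author's label, which is one natural choice when precision@$k$ is computed from class labels.
\begin{verbatim}
# Sketch of cosine-similarity search and precision@k (placeholder data).
import numpy as np

def precision_at_k(embeddings, labels, query_ids, k=100):
    # L2-normalize so that a dot product equals cosine similarity.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    scores = []
    for q in query_ids:
        sims = normed @ normed[q]
        sims[q] = -np.inf                    # exclude the query itself
        topk = np.argpartition(-sims, k)[:k]
        scores.append(np.mean(labels[topk] == labels[q]))
    return float(np.mean(scores))

rng = np.random.default_rng(0)
emb = rng.normal(size=(5000, 128))           # placeholder embeddings
lab = rng.integers(0, 4, size=5000)          # placeholder labels
queries = rng.choice(5000, size=1000, replace=False)
print("precision@100 =", precision_at_k(emb, lab, queries, k=100))
\end{verbatim}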
\subsection{Parameter Sensitivity}
We further analyze the sensitivity of MetaGraph2Vec and MetaGraph2Vec++ to three parameters: (1) $\gamma$: the number of metagraph guided random walks starting from each author; (2) $w$: the window size used for collecting node context pairs; and (3) $d$: the dimension of the learned embeddings. Fig.~\ref{fig:parameter} shows node classification performance at a 5\% training ratio as these parameters are varied. We can see that, as the embedding dimension $d$ increases, MetaGraph2Vec and MetaGraph2Vec++ gradually perform better and then level off. Neither algorithm is very sensitive to the number of random walks or the window size.
\begin{figure}[!htbp]
\centering
\subfigure[$\gamma$]{
\label{fig:parameter:subfig:gamma}
\includegraphics[width=1.5in]{para_gamma.eps}}
\subfigure[$w$]{
\label{fig:parameter:subfig:w}
\includegraphics[width=1.5in]{para_w.eps}}
\subfigure[$d$]{
\label{fig:parameter:subfig:d}
\includegraphics[width=1.5in]{para_d.eps}}
\caption{The effect of parameters $\gamma$, $w$, and $d$ on node classification performance}
\label{fig:parameter}
\end{figure}
\section{Conclusions and Future Work}
This paper studied network embedding learning for heterogeneous information networks. We analyzed the ineffectiveness of existing \textit{metapath} based approaches in handling sparse HINs, which stems mainly from the fact that metapaths are too strict to capture relationships in such networks. Accordingly, we proposed a new \textit{metagraph} relationship descriptor, which augments metapaths for flexible and reliable relationship description in HINs. By using metagraphs to guide the generation of random walks, our newly proposed algorithm, MetaGraph2Vec, can capture rich context and semantic information between different types of nodes in the network. The main contribution of this work, compared to existing research in the field, is twofold: (1) a new metagraph guided random walk approach to capturing rich contexts and semantics between nodes in HINs, and (2) a new network embedding algorithm for very sparse HINs that outperforms the state of the art.
In the future, we will study automatic methods for efficiently learning metagraph structures from HINs and assess the contributions of different metagraphs to network embedding. We will also evaluate the performance of MetaGraph2Vec on other types of HINs, such as heterogeneous biological networks and social networks, for producing informative node embeddings.\\
\noindent\textbf{Acknowledgments}. This work is partially supported by the Australian Research Council (ARC) under discovery grant DP140100545, and by the Program for Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions of Higher Learning. Daokun Zhang is supported by China Scholarship Council (CSC) with No. 201506300082 and a supplementary postgraduate scholarship from CSIRO.
\bibliographystyle{plain}
\section{Introduction}
Advances in the computation of higher-order radiative contributions
in perturbative quantum chromodynamics (PQCD) open opportunities to
predict hadronic observables at an unprecedented level of precision.
Full realization of this potential requires concurrent improvements
in the methods for QCD factorization and resummation of logarithmic
enhancements in hadronic cross sections in infrared kinematic regions.
All-orders resummation of logarithmic corrections, such as the resummation
of transverse momentum ($Q_{T}$) logarithms in Drell-Yan-like processes~\cite{Collins:1984kg},
is increasingly challenging in multi-loop calculations as a result
of algebraic complexity and new types of logarithmic singularities
associated with multi-particle matrix elements.
In this paper, we address new theoretical issues in $Q_{T}$ resummation
at two-loop accuracy. We focus on photon pair production, particularly
on the gluon-gluon subprocess, $gg\rightarrow\gamma\gamma$, one of
the important short-distance subprocesses that contribute to the inclusive
reactions $p\bar{p}\rightarrow\gamma\gamma X$ at the Fermilab Tevatron
and $pp\rightarrow\gamma\gamma X$ at the CERN Large Hadron Collider
(LHC). This hadronic reaction is interesting in its own right, and
it is relevant in searches for the Higgs boson $h$, where it constitutes
an important QCD background to the $pp\rightarrow hX\rightarrow\gamma\gamma X$
production chain \cite{ATLAS:1999fr,CMSTDR:2006,Abdullin:2005yn}.
A reliable prediction of the cross section for $gg\rightarrow\gamma\gamma$
is needed for complete estimates of the $\gamma\gamma$ production
cross sections, a task that we pursue in accompanying papers~\cite{Balazs:2006cc,Balazs:2007hr}.
The lowest-order contribution to the cross section for $gg\rightarrow\gamma\gamma$
arises from a $2\rightarrow2$ diagram of order ${\mathcal{O}}(\alpha^{2}\alpha_{s}^{2})$
involving a 4-vertex virtual quark loop {[}Fig.~\ref{Fig:FeynDiag}(a)].
We evaluate all next-to-leading order (NLO) contributions of order ${\mathcal{O}}(\alpha^{2}\alpha_{s}^{3})$
to the $gg\rightarrow\gamma\gamma$ process shown in Figs.~\ref{Fig:FeynDiag}(b-e).
An important new ingredient in this paper is the inclusion of the
$gq\rightarrow\gamma\gamma q$ process, Fig.~\ref{Fig:FeynDiag}(d),
a necessary component of the resummed NLO contribution. Our complete
treatment of the NLO cross section represents an improvement over
our original publication~\cite{Balazs:1997hv}, in which the large-$Q_{T}$
behavior of the $gg$ subprocess was approximated, and the $gq$ contribution
was not included. Furthermore, we resum to next-to-next-to-leading
logarithmic (NNLL) accuracy the large logarithmic terms of the form
$\ln(Q_{T}^{2}/Q^{2})$ in the limit when $Q_{T}$ of the $\gamma\gamma$
pair is much smaller than its invariant mass $Q$. Our NNLL cross
section includes the exact ${\mathcal{C}}$ coefficients of order
$\alpha_{s}$ for $gg+gq\rightarrow\gamma\gamma X$, and the functions
${\mathcal{A}}$ and ${\mathcal{B}}$ of orders $\alpha_{s}^{3}$
and $\alpha_{s}^{2}$ in all subprocesses, with these functions defined
in Sec.~\ref{Sec:Theory}.
We begin in Sec.~II with a summary of kinematics and our notation,
and we outline the partonic subprocesses that contribute to $\gamma\gamma$
production. In this section, we also derive a matrix element for the
$qg\rightarrow\gamma\gamma q$ process shown in Fig.~\ref{Fig:FeynDiag}(d),
a subprocess whose contribution is required to obtain consistent resummed
predictions for all values of $Q_{T}$. We obtain the ${\mathcal{O}}(\alpha^{2}\alpha_{s}^{3})$
cross section for the $gq\rightarrow\gamma\gamma q$ process from
the color-decomposed $q\bar{q}ggg$ amplitudes in Ref.~\cite{Bern:1994fz}.
The rich helicity structure of the $gg\rightarrow\gamma\gamma$ matrix
element is addressed in Sec.~III. The helicity dependence requires
a new type of transverse-momentum dependent (TMD) parton distribution
function (PDF) associated with the interference of amplitudes for
initial-state gluons of opposite helicities. The existence of the
helicity-flip TMD PDF modifies the azimuthal angle distributions of
the final-state photons, an effect that could potentially be observed
experimentally. By contrast, in vector boson production $pp\!\!\!{}^{{}^{(-)}}\rightarrow VX$
(with $V=\gamma^{*},W,Z,...$), such helicity-flip contributions are
suppressed as a result of the simple spin structure of the lowest-order
$q\bar{q}V$ coupling. In this section, we establish the presence
of helicity interference in the finite-order $2\rightarrow3$ cross
sections by systematically deriving their soft and collinear limits
in the splitting amplitude formalism \cite{Bern:1993mq,Bern:1993qk,Bern:1994zx,Bern:1994fz,Bern:1998sc,Bern:1999ry,Kosower:1999rx}.
We show how the helicity-flip TMD PDF arises from the general structure
of the small-$Q_{T}$ resummed cross section.
Section~IV contains some numerical predictions for the Tevatron and
LHC, where we show the fraction of the rate for $\gamma\gamma$ production
supplied by the $gg+gq$ subprocess. The generally expected prominence
of $gg+gq$ scattering at the LHC is only partially supported by our
findings. The large $gg$ partonic luminosity cannot fully compensate
for the small cross section associated with $gg$ scattering. Our
findings are summarized in Sec.~V. Three Appendices are included.
In Appendix~\ref{Appendix:qgAAq}, we present some of the details
of our derivation of the amplitude for the subprocess $qg\rightarrow\gamma\gamma q$.
In Appendix~\ref{Appendix:ASYgg}, we derive the small-$Q_{T}$ asymptotic
form of the NLO cross section for $gg\rightarrow\gamma\gamma$.
\begin{figure}
\includegraphics[width=12cm,keepaspectratio]{eps/feyn_diags_diphoton_gg0}
\caption{Representative parton scattering subprocesses for diphoton production
in gluon-gluon scattering. \label{Fig:FeynDiag}}
\end{figure}
\section{Notation and Subprocesses \label{Sec:Theory}}
\subsection{Notation \label{sub:Notations}}
We consider the scattering process $h_{1}(P_{1})+h_{2}(P_{2})\rightarrow\gamma(P_{3})+\gamma(P_{4})+X$,
where $h_{1}$ and $h_{2}$ are the initial-state hadrons. In terms
of the center-of-mass collision energy $\sqrt{S}$, the $\gamma\gamma$
invariant mass $Q$, the $\gamma\gamma$ transverse momentum $Q_{T}$,
and the $\gamma\gamma$ rapidity $y$, the momenta $P_{1}^{\mu}$
and $P_{2}^{\mu}$ of the initial hadrons and $q^{\mu}\equiv P_{3}^{\mu}+P_{4}^{\mu}$
of the pair are expressed in the laboratory frame as\begin{eqnarray}
P_{1}^{\mu} & = & \frac{\sqrt{S}}{2}\left\{ 1,0,0,1\right\} ;\\
P_{2}^{\mu} & = & \frac{\sqrt{S}}{2}\left\{ 1,0,0,-1\right\} ;\\
q^{\mu} & = & \left\{ \sqrt{Q^{2}+Q_{T}^{2}}\cosh y,Q_{T},0,\sqrt{Q^{2}+Q_{T}^{2}}\sinh y\right\} .\end{eqnarray}
The light-cone momentum fractions for the boosted $2\rightarrow2$
scattering system are\begin{equation}
x_{1,2}\equiv\frac{2(P_{2,1}\cdot q)}{S}=\frac{\sqrt{Q^{2}+Q_{T}^{2}}e^{\pm y}}{\sqrt{S}}.\end{equation}
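As a quick numerical cross-check of these kinematic relations (illustrative values only, not taken from the analysis), one may verify the identity $x_{1}x_{2}S=Q^{2}+Q_{T}^{2}$ that follows from them:
\begin{verbatim}
# Illustrative check of x_{1,2} = sqrt(Q^2+Q_T^2) exp(+-y)/sqrt(S)
# and of the identity x1*x2*S = Q^2 + Q_T^2 (values are arbitrary).
import math

sqrtS, Q, QT, y = 1960.0, 80.0, 15.0, 0.5   # GeV
mT = math.sqrt(Q**2 + QT**2)                # transverse mass of the pair
x1 = mT * math.exp(+y) / sqrtS
x2 = mT * math.exp(-y) / sqrtS
print(x1, x2, x1 * x2 * sqrtS**2, Q**2 + QT**2)  # last two numbers agree
\end{verbatim}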
Decay of the $\gamma\gamma$ pairs is described in the hadronic Collins-Soper
frame \cite{Collins:1977iv}. The Collins-Soper frame is a rest frame
of the $\gamma\gamma$ pair (with $q^{\mu}=\left\{ Q,0,0,0\right\} $
in this frame), chosen so that (a) the momenta $\vec{P}_{1}$ and
$\vec{P}_{2}$ of the initial hadrons lie in the $Oxz$ plane (with
zero azimuthal angle), and (b) the $z$ axis bisects the angle between
$\vec{P}_{1}$ and $-\vec{P}_{2}$. The photon momenta are antiparallel
in the Collins-Soper frame: \begin{eqnarray}
P_{3}^{\mu} & = & \frac{Q}{2}\left\{ 1,\sin\theta_{*}\cos\varphi_{*},\sin\theta_{*}\sin\varphi_{*},\cos\theta_{*}\right\} ,\label{p3CS}\\
P_{4}^{\mu} & = & \frac{Q}{2}\left\{ 1,-\sin\theta_{*}\cos\varphi_{*},-\sin\theta_{*}\sin\varphi_{*},-\cos\theta_{*}\right\} ,\label{p4CS}\end{eqnarray}
where $\theta_{*}$ and $\varphi_{*}$ are the photon's polar and
azimuthal angles. Our aim is to derive resummed predictions for the
fully differential $\gamma\gamma$ cross section $d\sigma/(dQ^{2}dydQ_{T}^{2}d\Omega_{*}),$
where $d\Omega_{*}=d\cos\theta_{*}d\varphi_{*}$ is a solid angle
element around the direction of $\vec{P}_{3}$ in the Collins-Soper
frame of reference defined in Eq.~(\ref{p3CS}). The parton momenta
and helicities are denoted by lowercase $p_{i}$ and $\lambda_{i}$.
\subsection{Scattering contributions \label{subsection:OverviewFeynmanDiagrams}}
We concentrate on direct production of isolated photons in hard QCD
scattering, the dominant production process at hadron colliders. A
number of hard-scattering contributions to the processes $q\bar{q}+qg\rightarrow\gamma\gamma,$
as well as photon production via fragmentation, have been studied
in the past \cite{Aurenche:1985yk,Bailey:1992br,Binoth:1999qq}. Our
numerical calculations include the lowest-order process $q\bar{q}\rightarrow\gamma\gamma$
of order ${\cal O}(\alpha^{2})$ and contributions from $q\bar{q}\rightarrow\gamma\gamma g$
and $q^{\!\!\!\!\!{}^{(-)}}g\rightarrow\gamma\gamma q^{\!\!\!\!\!{}^{(-)}}$
of order ${\cal O}(\alpha^{2}\alpha_{s}),$ where $\alpha(\mu)=e^{2}/4\pi$
and $\alpha_{s}(\mu)=g^{2}/4\pi$ are the running QED and QCD coupling
strengths.
Gluon-gluon scattering is the next most important direct production channel,
with the full set of NLO contributions shown in Fig.~\ref{Fig:FeynDiag}.
Production of $\gamma\gamma$ pairs via a box diagram in $gg$ scattering
as in Fig.~\ref{Fig:FeynDiag}(a) \cite{Berger:1983yi} is suppressed
by two powers of $\alpha_{s}$ compared to the lowest-order $q\bar{q}\rightarrow\gamma\gamma$
contribution, but is enhanced by a product of two large gluon PDF's
if typical momentum fractions $x$ are small. The main ${\mathcal{O}}(\alpha^{2}\alpha_{s}^{3})$,
or NLO, corrections include one-loop $gg\rightarrow\gamma\gamma g$
diagrams (b) and (c) derived in \cite{Balazs:1999yf,deFlorian:1999tp},
as well as 4-leg two-loop diagrams (e) computed in \cite{Bern:2001df}.
The real and virtual diagrams are combined in Ref.~\cite{Bern:2002jx}
to obtain the full NLO contribution from $gg$ scattering. In this
study we also include subleading NLO contributions from the process
(d), $gq_{S}\rightarrow\gamma\gamma q_{S}$ via the quark loop, where
$q_{S}=\sum_{i=u,d,s,...}(q_{i}+\bar{q}_{i})$ denotes the flavor-singlet
combination of quark scattering channels. The $gq_{S}\rightarrow\gamma\gamma q_{S}$
helicity amplitude is derived from the one-loop $q\bar{q}ggg$ amplitude
\cite{Bern:1994fz} and explicitly presented in Appendix~\ref{Appendix:qgAAq}.
As a cross check, we verified that this amplitude correctly reproduces
the known collinear limits. Our result does not confirm an expression
for this amplitude available in the literature \cite{Yasui:2002bn},
which does not satisfy these limits. When evaluated in our resummation
calculation under typical event selection conditions, $gg+gq_{S}$
scattering contributes about 20\% and 10\% of the total rate at the
LHC and the Tevatron, respectively, but this fraction can be larger
in specific regions of phase space.
\section{Theoretical Presentation}
\subsection{Small-$Q_{T}$ asymptotics of the next-to-leading order cross section\label{subsection:AsymptoticISR}}
When the transverse momentum $Q_{T}$ of the diphoton approaches zero,
the NLO production cross section $d\sigma/(dQ^{2}dy\, dQ_{T}^{2}d\Omega_{*})$,
or briefly $P(Q,Q_{T},y,\Omega_{*}),$ is dominated by $\gamma\gamma$
recoil against soft and collinear QCD radiation. In this subsection
we concentrate on the effects of initial-state QCD radiation and derive
the leading small-$Q_{T}$ part of the NLO differential cross section,
called the asymptotic term $A(Q,Q_{T},y,\Omega_{*})$.
The ${\mathcal{O}}(\alpha_{s})$ asymptotic cross section valid at
$Q_{T}^{2}\ll Q^{2}$ consists of a few generalized functions that
are integrable on an interval $0\leq Q_{T}\leq P_{T}$, with $P_{T}$
being a finite value of transverse momentum: \begin{eqnarray}
& & A(Q,Q_{T},y,\Omega_{*})=F_{\delta}(Q,y,\Omega_{*})\delta(\vec{Q}_{T})\nonumber \\
& + & F_{1}(Q,y,\Omega_{*})\left[\frac{1}{Q_{T}^{2}}\ln\frac{Q^{2}}{Q_{T}^{2}}\right]_{+}+F_{0}(Q,y,\Omega_{*})\left[\frac{1}{Q_{T}^{2}}\right]_{+}+\dots.\label{ASYgeneric}\end{eqnarray}
The {}``$+$'' prescription $\left[f(Q_{T})\right]_{+}$ is defined
for a function $f(Q_{T})$ and a smooth function $g(Q_{T})$ as\begin{eqnarray}
\int_{0}^{P_{T}^{2}}dQ_{T}^{2}\left[f(Q_{T})\right]_{+}g(Q_{T}) & \equiv & \int_{0}^{P_{T}^{2}}dQ_{T}^{2}f(Q_{T})\left(g(Q_{T})-g(0)\right);\\
\left[f(Q_{T})\right]_{+}=f(Q_{T}) & \mbox{ for} & Q_{T}\neq0.\end{eqnarray}
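The role of the subtraction can be illustrated numerically (a sketch with an arbitrary smooth test function, not taken from the calculation): the integral of $\left[1/Q_{T}^{2}\right]_{+}$ against a smooth $g$ is finite, while the unsubtracted integral diverges logarithmically at the lower limit.
\begin{verbatim}
# Numerical illustration of the "+" prescription with a toy test function.
import numpy as np
from scipy.integrate import quad

PT2 = 100.0                               # upper limit P_T^2 in GeV^2
g = lambda qt2: np.exp(-qt2 / 50.0)       # smooth test function of Q_T^2

# [1/Q_T^2]_+ integrated against g: the subtraction g - g(0) makes it finite.
plus_part = lambda qt2: (g(qt2) - g(0.0)) / qt2
val, err = quad(plus_part, 0.0, PT2)
print("plus-prescription integral:", val)

# Without the subtraction the integral diverges logarithmically as eps -> 0.
for eps in (1e-2, 1e-4, 1e-6):
    naive, _ = quad(lambda qt2: g(qt2) / qt2, eps, PT2, limit=200)
    print("unsubtracted, eps = %g:" % eps, naive)
\end{verbatim}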
Subleading terms proportional to $\left(Q/Q_{T}\right)^{p}$ with
$p\leq1$ are neglected in Eq.~(\ref{ASYgeneric}). Its form is influenced
by spin correlations between the initial-state partons and final-state
photons. As a consequence of these spin correlations, the functions
$F_{\delta},$ $F_{0}$, and $F_{1}$ depend on the direction of the
final-state photons in the Collins-Soper frame (the polar angle $\theta_{*}$
and sometimes the azimuthal angle $\varphi_{*}$).
The spin dependence of the small-$Q_{T}$ cross section in the $gg\rightarrow\gamma\gamma g$
and $gq_{S}\rightarrow\gamma\gamma q_{S}$ channels is complex. The
Born-level process $g(p_{1},\lambda_{1})+g(p_{2},\lambda_{2})\rightarrow\gamma(p_{3},\lambda_{3})+\gamma(p_{4},\lambda_{4})$
is described by 16 non-zero helicity amplitudes ${\mathcal{M}}_{4}(p_{1},\lambda_{1};p_{2},\lambda_{2};p_{3},\lambda_{3};p_{4},\lambda_{4})\equiv{\mathcal{M}}_{4}(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4})$
for quark-box diagrams of the type shown in Fig.~\ref{Fig:FeynDiag}(a).
The normalization of ${\mathcal{M}}_{4}(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4})$
is chosen so that the unpolarized Born $gg\rightarrow\gamma\gamma$
cross section reads as\begin{equation}
\left.\frac{d\sigma_{gg}}{dQ^{2}dy\, dQ_{T}^{2}d\Omega_{*}}\right|_{Born}=\delta(\vec{Q}_{T})\frac{\Sigma_{g}(\theta_{*})}{S}f_{g/h_{1}}(x_{1},\mu_{F})f_{g/h_{2}}(x_{2},\mu_{F}),\label{Borngg}\end{equation}
where\begin{equation}
\Sigma_{g}(\theta_{*})\equiv\sigma_{g}^{(0)}L_{g}(\theta_{*}),\label{Sigmag}\end{equation}
with\begin{equation}
\sigma_{g}^{(0)}=\frac{\alpha^{2}(Q)\alpha_{s}^{2}(Q)}{32\pi Q^{2}(N_{c}^{2}-1)}\left(\sum_{i}e_{i}^{2}\right)^{2},\end{equation}
and \begin{equation}
L_{g}(\theta_{*})\equiv\sum_{\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}=\pm1}\left|{\mathcal{M}}_{4}(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4})\right|^{2}.\label{L}\end{equation}
In these equations, $N_{c}=3$ is the number of QCD colors, $e_{i}$
is the fractional electric charge (in units of the positron charge
$e$) of the quark $i$ circulating in the loop, and $f_{g/h}(x,\mu_{F})$
is the gluon PDF evaluated at a factorization scale $\mu_{F}$. The
right-hand side of Eq.~(\ref{L}) includes summation over gluon and
photon helicities $\lambda_{i}$, with $i=1,...,4$.
At NLO, the small-$Q_{T}$ cross section is proportional to the angular
function $\Sigma_{g}(\theta_{*})$ (the same as in the Born cross
section), and another function \begin{eqnarray}
\Sigma_{g}^{\prime}(\theta_{*},\varphi_{*}) & = & \sigma_{g}^{(0)}\sum_{\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}=\pm1}{\mathcal{M}}_{4}^{*}(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}){\mathcal{M}}_{4}(-\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4})\nonumber \\
& \equiv & \sigma_{g}^{(0)}L_{g}^{\prime}(\theta_{*})\cos2\varphi_{*}.\label{Sigmagprime}\end{eqnarray}
The function $\Sigma_{g}^{\prime}(\theta_{*},\varphi_{*})$ is obtained
by spin-averaging the product of the amplitude ${\mathcal{M}}_{4}(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4})$
and the complex-conjugate amplitude ${\mathcal{M}}_{4}^{*}(-\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4})$
evaluated with the reverse sign of the helicity $\lambda_{1}$. The
sign flip for $\lambda_{1}$ results in dependence of $\Sigma_{g}^{\prime}(\theta_{*},\varphi_{*})$
on $\cos2\varphi_{*}$. The $\theta_{*}$ dependence of $\Sigma_{g}^{\prime}(\theta_{*},\varphi_{*})$
enters through the function\begin{eqnarray}
L_{g}^{\prime}(\theta_{*}) & = & -4\mbox{{Re}}\left(M_{1,1,-1,-1}^{(1)}+M_{1,-1,1,-1}^{(1)}+M_{-1,1,1,-1}^{(1)}+1\right),\label{Lprime}\end{eqnarray}
presented in terms of reduced amplitudes $M_{\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}}^{(1)}$
in the notation of Ref.~\cite{Bern:2001df}. For comparison, the
functions $L_{g}(\theta_{*})$ and $L_{g}^{\prime}(\theta_{*})$ are
plotted versus $(1+\cos\theta_{*})/2$ in Fig.~\ref{Fig:LLprime}.
\begin{figure}
\begin{centering}
\includegraphics[width=1\textwidth,height=8cm,keepaspectratio]{eps/LLprime}
\par\end{centering}
\caption{The functions $L_{g}(\theta_{*})$ and $L_{g}^{\prime}(\theta_{*})$
arising in the $gg\rightarrow\gamma\gamma$ asymptotic cross section
(\ref{ASYgg}) and their ratio. \label{Fig:LLprime}}
\end{figure}
The NLO asymptotic term in the sum of the contributions from the $gg\rightarrow\gamma\gamma$
and $gq_{S}\rightarrow\gamma\gamma$ channels (denoted as $gg+gq_{S}$
channel) is \begin{eqnarray}
A(Q,Q_{T},y,\Omega_{*}) & = & \frac{1}{S}\Biggl\{\Sigma_{g}(\theta_{*})\left[\delta(\vec{Q}_{T})F_{g,\delta}(Q,y,\theta_{*})+F_{g,+}(Q,y,Q_{T})\right]\nonumber \\
& & \hspace{12pt}+\Sigma_{g}^{\prime}(\theta_{*},\varphi_{*})F_{g,+}^{\prime}(Q,y,Q_{T})\Biggr\}.\label{ASYgg}\end{eqnarray}
Here\begin{eqnarray}
& & F_{g,\delta}\equiv f_{g/h_{1}}(x_{1},\mu_{F})f_{g/h_{2}}(x_{2},\mu_{F})\left(1+2\frac{\alpha_{s}}{\pi}h_{g}^{(1)}(\theta_{*})\right)\nonumber \\
& + & \frac{\alpha_{s}}{\pi}\Biggl\{\left(\left[{\mathcal{C}}_{g/a}^{(1,c)}\otimes f_{a/h_{1}}\right](x_{1},\mu_{F})-\left[P_{g/a}\otimes f_{a/h_{1}}\right](x_{1},\mu_{F})\,\ln\frac{\mu_{F}}{Q}\right)\, f_{g/h_{2}}(x_{2},\mu_{F})\nonumber \\
& & \hspace{27pt}+f_{g/h_{1}}(x_{1},\mu_{F})\left(\left[{\mathcal{C}}_{g/a}^{(1,c)}\otimes f_{a/h_{2}}\right](x_{2},\mu_{F})-\left[P_{g/a}\otimes f_{a/h_{2}}\right](x_{2},\mu_{F})\,\ln\frac{\mu_{F}}{Q}\right)\Biggr\};\label{FgDelta}\end{eqnarray}
\begin{eqnarray}
F_{g,+} & = & \frac{1}{2\pi}\frac{\alpha_{s}}{\pi}\Biggl\{ f_{g/h_{1}}(x_{1},\mu_{F})f_{g/h_{2}}(x_{2},\mu_{F})\left({\mathcal{A}}_{g}^{(1,c)}\left[\frac{1}{Q_{T}^{2}}\ln\frac{Q^{2}}{Q_{T}^{2}}\right]_{+}+{\mathcal{{\mathcal{B}}}}_{g}^{(1,c)}\left[\frac{1}{Q_{T}^{2}}\right]_{+}\right)\nonumber \\
& + & \left[\frac{1}{Q_{T}^{2}}\right]_{+}\Bigl(\left[P_{g/a}\otimes f_{a/h_{1}}\right](x_{1},\mu_{F})\, f_{g/h_{2}}(x_{2},\mu_{F})\nonumber \\
& & \hspace{47pt}+f_{g/h_{1}}(x_{1},\mu_{F})\left[P_{g/a}\otimes f_{a/h_{2}}\right](x_{2},\mu_{F})\Bigr)\Biggr\};\label{FgPlus}\end{eqnarray}
and\begin{eqnarray}
F_{g,+}^{\prime} & = & \frac{1}{2\pi}\frac{\alpha_{s}}{\pi}\left[\frac{1}{Q_{T}^{2}}\right]_{+}\Bigl(\left[P_{g/g}^{\prime}\otimes f_{g/h_{1}}\right](x_{1},\mu_{F})\, f_{g/h_{2}}(x_{2},\mu_{F})\nonumber \\
& & \hspace{77pt}+f_{g/h_{1}}(x_{1},\mu_{F})\left[P_{g/g}^{\prime}\otimes f_{g/h_{2}}\right](x_{2},\mu_{F})\Bigr).\label{FgPlusPrime}\end{eqnarray}
The ${\cal O}(\alpha_{s}/\pi)$ coefficients ${\mathcal{A}}_{g}^{(1,c)}$,
${\mathcal{B}}_{g}^{(1,c)}$ and functions ${\mathcal{C}}_{g/a}^{(1,c)}(x,b)$,
$h_{g}^{(1)}(\theta_{*})$ are defined and listed explicitly in Ref.~\cite{Balazs:2007hr}.
The function $h_{g}^{(1)}(\theta_{*})$ denotes an ${\cal O}(\alpha_{s}/\pi)$
correction to the hard-scattering contribution ${\cal H}$ in the
resummed cross section, cf. Sec.~\ref{subsection:CompleteResummed}.
The convolutions $\left[P_{g/a}\otimes f_{a/h}\right]$ and $\left[{\mathcal{C}}_{g/a}^{(1,c)}\otimes f_{a/h}\right]$,
defined for two functions $f(x,\mu_{F})$ and $g(x,\mu_{F})$ as\[
[f\otimes g](x,\mu_{F})\equiv\int_{x}^{1}\frac{d\xi}{\xi}f(\xi,\mu_{F})g(\frac{x}{\xi},\mu_{F}),\]
are summed over the intermediate parton's flavors $a=g,\, q_{S}$
(gluon and the flavor-singlet combination of quark-scattering channels).
In addition to the conventional splitting functions $P_{g/g}(x)$
and $P_{g/q_{S}}(x)$ arising in $F_{g,+}$, a new splitting function\begin{equation}
P_{g/g}^{\prime}(x)=2C_{A}(1-x)/x,\label{Pggprime}\end{equation}
where $C_{A}=N_{c}=3,$ enters the $\varphi_{*}$-dependent part
of the asymptotic cross section through $F_{g,+}^{\prime}.$
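A minimal numerical sketch of such a convolution is given below (with a toy gluon distribution chosen only for illustration, not a fitted PDF); it evaluates $\left[P_{g/g}^{\prime}\otimes f_{g/h}\right](x,\mu_{F})$ by direct quadrature.
\begin{verbatim}
# Sketch: convolution [P' x f](x) = int_x^1 dxi/xi P'(xi) f(x/xi)
# with P'_{g/g}(x) = 2 C_A (1-x)/x and a toy gluon distribution.
import numpy as np
from scipy.integrate import quad

CA = 3.0
Pprime = lambda x: 2.0 * CA * (1.0 - x) / x
f_g = lambda x: x**(-1.5) * (1.0 - x)**5     # toy shape, not a fitted PDF

def convolve(P, f, x):
    integrand = lambda xi: P(xi) * f(x / xi) / xi
    val, _ = quad(integrand, x, 1.0)
    return val

for x in (1e-3, 1e-2, 1e-1):
    print(x, convolve(Pprime, f_g, x))
\end{verbatim}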
For completeness, the small-$Q_{T}$ asymptotic form Eq.~(\ref{ASYgg})
for the $gg+gq_{S}$ channels is derived in Appendix~\ref{Appendix:ASYgg}.
The existence of the $\varphi_{*}-$dependent singular contribution
proportional to $\Sigma_{g}^{\prime}(\theta_{*},\varphi_{*})$ is
established by examining the factorization of the $2\rightarrow3$
cross section in the limit of a collinear gluon emission. It follows
directly from factorization rules for helicity amplitudes \cite{Bern:1993mq,Bern:1993qk,Bern:1994zx,Bern:1994fz,Bern:1998sc,Bern:1999ry,Kosower:1999rx},
as well as from the dipole factorization formalism \cite{Catani:1996vz}.
In contrast, the NLO quark-antiquark contribution $q\bar{q}\rightarrow\gamma\gamma$
does not include a spin-flip contribution, as a result of the simple
structure of the Born contribution in $q\bar{q}$ scattering (see
also Sec.~\ref{subsection:Spin-Flip}).
\subsection{Resummation \label{subsection:ResummationOne}}
To predict the shape of $d\sigma/dQ_{T}$ distributions, we perform
an all-orders summation of singularities $\delta(\vec{Q}_{T})$ and
$\left[Q_{T}^{-2}\ln^{p}\left(Q^{2}/Q_{T}^{2}\right)\right]_{+}$
in the asymptotic cross section, which coincides with the perturbative
expansion of the resummed small-$Q_{T}$ cross section obtained within
the Collins-Soper-Sterman formalism \cite{Collins:1981uk,Collins:1981va,Collins:1984kg}.
In this formalism, we write the fully differential cross section as
\begin{eqnarray}
\frac{d\sigma(h_{1}h_{2}\rightarrow\gamma\gamma)}{dQ\, dQ_{T}^{2}\, dy\, d\Omega_{*}}=W(Q,Q_{T},y,\Omega_{*})+Y(Q,Q_{T},y,\Omega_{*}).\label{FullResum}\end{eqnarray}
The term $W$ contains large logarithmic contributions of the form
$\ln^{p}(Q/Q_{T})$ from initial-state radiation, while $Y$ is free
of these logs and calculated using collinear QCD factorization (cf.
the end of Sec.~\ref{subsection:CompleteResummed}).
The function $W$ may be expressed as a Fourier-Bessel transform of
a function $\widetilde{W}(Q,b,y,\Omega_{*})$ in the impact parameter
(${\vec{b}}$) space, \begin{equation}
W(Q,Q_{T},y,\Omega_{*})=\int\frac{d\vec{b}}{(2\pi)^{2}}e^{i\vec{Q}_{T}\cdot\vec{b}}\widetilde{W}(Q,b,y,\Omega_{*}).\label{FourierIntegral}\end{equation}
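For an azimuthally symmetric $\widetilde{W}$, the Fourier integral in Eq.~(\ref{FourierIntegral}) reduces to a one-dimensional Bessel (Hankel) transform, $W=(2\pi)^{-1}\int_{0}^{\infty}db\, b\, J_{0}(Q_{T}b)\,\widetilde{W}(b)$. The sketch below checks this numerically for a toy Gaussian $b$-space form factor (not the resummed one), whose transform is known in closed form.
\begin{verbatim}
# Sketch: Fourier-Bessel transform of a toy b-space form factor.
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

a = 0.25                                   # GeV^2, toy Gaussian width
Wtilde = lambda b: np.exp(-a * b * b)      # toy form factor, not resummed

def W_of_QT(QT, bmax=60.0):
    integrand = lambda b: b * j0(QT * b) * Wtilde(b)
    val, _ = quad(integrand, 0.0, bmax, limit=200)
    return val / (2.0 * np.pi)

for QT in (0.5, 1.0, 2.0):                 # GeV
    exact = np.exp(-QT * QT / (4.0 * a)) / (4.0 * np.pi * a)
    print(QT, W_of_QT(QT), exact)          # numerical and analytic agree
\end{verbatim}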
The generic form of $\widetilde{W}(Q,b,y,\Omega_{*})$ in the $q\bar{q}+qg\rightarrow\gamma\gamma$
and $gg+gq_{S}\rightarrow\gamma\gamma$ channels can be determined
by solving evolution equations for the gauge- and renormalization-group
invariance of $\widetilde{W}(Q,b,y,\Omega_{*})$:\begin{eqnarray}
\widetilde{W}(Q,b,y,\Omega_{*}) & = & \sum_{a}\sum_{\lambda_{1},\lambda_{1}^{\prime},\lambda_{2},\lambda_{2}^{\prime},\lambda_{3},\lambda_{4}}{\mathcal{H}}_{a}^{\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{4}}(Q,\Omega_{*})\left({\mathcal{H}}_{a}^{\lambda_{1}^{\prime}\lambda_{2}^{\prime}\lambda_{3}\lambda_{4}}(Q,\Omega_{*})\right)^{*}\nonumber \\
& & \hspace{84pt}\times{\mathcal{P}}_{a/h_{1}}^{\lambda_{1}\lambda_{1}^{\prime}}(x_{1},\vec{b})\,{\mathcal{P}}_{\bar{a}/h_{2}}^{\lambda_{2}\lambda_{2}^{\prime}}(x_{2},\vec{b})\, e^{-{\mathcal{S}}_{a}(Q,b)}.\label{Eq:Wt}\end{eqnarray}
It is composed of the hard-scattering function ${\mathcal{H}}_{a}^{\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{4}}(Q,\Omega_{*})$
and its complex conjugate, $\left({\mathcal{H}}_{a}^{\lambda_{1}^{\prime}\lambda_{2}^{\prime}\lambda_{3}\lambda_{4}}(Q,\Omega_{*})\right)^{*};$
the Sudakov exponential $\exp\left(-{\mathcal{S}}_{a}(Q,b)\right)$;
and parton distribution matrices ${\mathcal{P}}_{a/h_{i}}^{\lambda_{i}\lambda_{i}^{\prime}}(x_{i},\vec{b})$.
\begin{figure}
\begin{centering}
\includegraphics[width=8cm,keepaspectratio]{eps/soft_diphoton}
\par\end{centering}
\caption{The structure of the resummed form factor $\widetilde{W}(Q,b,y,\Omega_{*})$.
\label{Fig:Factorization}}
\end{figure}
The multiplicative structure of Eq.~(\ref{Eq:Wt}) reflects the topology
of the dominant cut diagrams in the small-$Q_{T}$ cross sections
shown in Fig.~\ref{Fig:Factorization}. The function ${\mathcal{H}}_{a}^{\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{4}}$
describes the hard $2\rightarrow2$ scattering subprocess $a(p_{1},\lambda_{1})+\bar{a}(p_{2},\lambda_{2})\rightarrow\gamma(p_{3},\lambda_{3})+\gamma(p_{4},\lambda_{4})$,
with $a=u,\bar{u},d,\bar{d},...$ in $q\bar{q}\rightarrow\gamma\gamma$,
and $a=\bar{a}=g$ in $gg\rightarrow\gamma\gamma$. All momenta in
${\mathcal{H}}$ have virtualities of order $Q^{2}.$ For now, we
consider the leading contribution to ${\mathcal{H}}_{a}^{\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{4}},$
which reads as ${\mathcal{H}}_{a}^{\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{4}}=\sqrt{\sigma_{a}^{(0)}}{\mathcal{M}}_{4}(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}),$
where the Born helicity amplitude ${\mathcal{M}}_{4}(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4})$
and overall constant normalization $\sigma_{a}^{(0)}$ are introduced
in Sec.~\ref{subsection:AsymptoticISR}. Sometimes ${\mathcal{H}}_{a}^{\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{4}}$
also includes finite parts of higher-order $2\rightarrow2$ virtual
corrections, as discussed in Sec. \ref{subsection:CompleteResummed}.
Similarly, $\left({\mathcal{H}}_{a}^{\lambda_{1}^{\prime}\lambda_{2}^{\prime}\lambda_{3}\lambda_{4}}(Q,\Omega_{*})\right)^{*}$
arises from the complex-conjugate amplitude ${\mathcal{M}}_{4}^{*}(\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4})$
and possible loop corrections to it. The helicities $\lambda_{1}^{\prime}$
and $\lambda_{2}^{\prime}$ in $\left({\mathcal{H}}_{a}^{\lambda_{1}^{\prime}\lambda_{2}^{\prime}\lambda_{3}\lambda_{4}}\right)^{*}$
need not coincide with $\lambda_{1}$ and $\lambda_{2}$ in ${\mathcal{H}}_{a}^{\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{4}}$.
The right-hand side of Eq.~(\ref{Eq:Wt}) is summed over flavors
$a$ and helicities $\lambda_{k},\lambda_{k}^{\prime}$ of the partons
entering ${\mathcal{H}}{\mathcal{{H}^{*}}}$, as well as over helicities
$\lambda_{3}$ and $\lambda_{4}$ of the final-state photons.
The Sudakov exponent \begin{eqnarray}
{\mathcal{S}}_{a}(Q,b)=\int_{C_{1}^{2}/b^{2}}^{C_{2}Q^{2}}\frac{d\bar{\mu}^{2}}{\bar{\mu}^{2}}\left[{\mathcal{A}}_{a}\left(C_{1},\bar{\mu}\right)\ln\left(\frac{C_{2}^{2}Q^{2}}{\bar{\mu}^{2}}\right)+{\mathcal{B}}_{a}\left(C_{1},C_{2},\bar{\mu}\right)\right]\label{Sudakov}\end{eqnarray}
resums contributions from the initial-state soft and soft-collinear
gluon emission (indicated by gluon lines connecting $e^{-{\cal S}}$
to ${\cal H}$, ${\cal H}^{*},$ and ${\cal P}_{a/h}(x,\vec{b})$
in Fig.~\ref{Fig:Factorization}). Here $C_{1}$ and $C_{2}$ are
constants of order unity. The functions ${\mathcal{A}}_{a}\left(C_{1},\bar{\mu}\right)$
and ${\mathcal{B}}_{a}\left(C_{1},C_{2},\bar{\mu}\right)$ can be
evaluated in perturbation theory at large scales $\bar{\mu}^{2}\gg\Lambda_{QCD}^{2}$,
hence for large $Q$ and small $b$.
The collinear emissions are described by parton distribution matrices
${\mathcal{P}}_{a/h}^{\lambda\lambda^{\prime}}(x,\vec{b})$, where
$\lambda$ and $\lambda^{\prime}$ denote the helicity state of the
intermediate parton $a$ to the left and right of the unitarity cut
in Fig.~\ref{Fig:Factorization}. The matrix ${\mathcal{P}}_{a/h}^{\lambda\lambda^{\prime}}(x,\vec{b})$
is derived from a matrix element of the light-cone correlator \cite{Ralston:1979ys,Soper:1976jc,Soper:1979fq,Ali:1992qj,Bashinsky:1998if}
for finding parton $a$ inside the parent hadron $h$.
It is convenient to introduce sums of diagonal and off-diagonal entries
of the helicity matrix ${\mathcal{P}}_{a/h}^{\lambda\lambda^{\prime}}(x,\vec{b})$,\begin{equation}
{\mathcal{P}}_{a/h}(x,\vec{b})=\sum_{\lambda}{\mathcal{P}}_{a/h}^{\lambda\lambda}(x,\vec{b}),\end{equation}
and \begin{equation}
{\mathcal{P}}_{a/h}^{\prime}(x,\vec{b})=\sum_{\lambda}{\mathcal{P}}_{a/h}^{\lambda,-\lambda}(x,\vec{b}).\end{equation}
In this notation, Eq.~(\ref{Eq:Wt}) can be rewritten as \begin{eqnarray}
\widetilde{W}(Q,b,y,\Omega_{*}) & = & \frac{1}{S}e^{-{\mathcal{S}}_{a}(Q,b)}\sum_{a}\Biggl\{\Sigma_{a}(\theta_{*}){\mathcal{P}}_{a/h_{1}}(x_{1},\vec{b})\,{\mathcal{P}}_{\bar{a}/h_{2}}(x_{2},\vec{b})+\nonumber \\
& + & \Sigma_{a}^{\prime}(\theta_{*},\varphi_{*})\left[{\mathcal{P}}_{a/h_{1}}^{\prime}(x_{1},\vec{b})\,{\mathcal{P}}_{\bar{a}/h_{2}}(x_{2},\vec{b})+{\mathcal{P}}_{a/h_{1}}(x_{1},\vec{b})\,{\mathcal{P}}_{\bar{a}/h_{2}}^{\prime}(x_{2},\vec{b})\right]\label{Eq:Wt2}\\
& + & \Sigma_{a}^{\prime\prime}(\theta_{*},\varphi_{*}){\mathcal{P}}_{a/h_{1}}^{\prime}(x_{1},\vec{b})\,{\mathcal{P}}_{\bar{a}/h_{2}}^{\prime}(x_{2},\vec{b})\Biggr\},\nonumber \end{eqnarray}
where \begin{eqnarray}
\Sigma_{a}(\theta_{*}) & \equiv & \sum_{\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}}\left|{\mathcal{H}}_{a}^{\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{4}}\right|^{2},\\
\Sigma_{a}^{\prime}(\theta_{*},\varphi_{*}) & \equiv & \sum_{\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}}{\mathcal{H}}_{a}^{\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{4}}\left({\mathcal{H}}_{a}^{-\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{4}}\right)^{*},\end{eqnarray}
and \begin{equation}
\Sigma_{a}^{\prime\prime}(\theta_{*},\varphi_{*})\equiv\sum_{\lambda_{1},\lambda_{2},\lambda_{3},\lambda_{4}}{\mathcal{H}}_{a}^{\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{4}}\left({\mathcal{H}}_{a}^{-\lambda_{1}-\lambda_{2}\lambda_{3}\lambda_{4}}\right)^{*}.\end{equation}
The unpolarized parton distribution ${\mathcal{P}}_{a/h}(x,\vec{b})$
coincides with the Fourier-Bessel transform of the unpolarized transverse-momentum-dependent
(TMD) parton density ${\mathcal{P}}_{a/h}(x,\vec{k}_{T})$ \cite{Collins:1981uw}
for finding parton $a$ with light-cone momentum fraction $x$ and
transverse momentum $\vec{k}_{T}$. At small $b,$ ${\mathcal{P}}_{a/h}(x,\vec{b})$
is reduced to a convolution of unpolarized $k_{T}-$integrated parton
densities $f_{a/h}(x,\mu)$ and Wilson coefficient functions ${\mathcal{C}}_{a/a^{\prime}}(x,b;C_{1}/C_{2},\mu)$,
evaluated at a factorization scale $\mu$ of order $1/b$:\begin{eqnarray}
\left.{\mathcal{P}}_{a/h}(x,\vec{b})\right|_{b^{2}\ll\Lambda_{QCD}^{-2}} & = & \sum_{a^{\prime}}\left[\int_{x}^{1}{\frac{d\xi}{\xi}}{\mathcal{C}}_{a/a^{\prime}}\left(\frac{x}{\xi},b;\frac{C_{1}}{C_{2}},\mu\right)f_{a^{\prime}/h}(\xi,\mu)\right].\label{Eq:CalP}\end{eqnarray}
Perturbative entries with $\lambda_{i}=\lambda_{i}^{\prime}$ reduce
in total to the product of the unpolarized Born scattering probability
and unpolarized resummed functions:\begin{eqnarray}
\left.\widetilde{W}(Q,b,y,\Omega_{*})\right|_{\lambda_{i}=\lambda_{i}^{\prime}} & = & \sum_{a}\frac{\Sigma_{a}(\theta_{*})}{S}e^{-{\mathcal{S}}_{a}(Q,b)}\nonumber \\
& \times & \left[{\mathcal{C}}_{a/c_{1}}\otimes f_{c_{1}/h_{1}}\right](x_{1},b;\mu)\left[{\mathcal{C}}_{\bar{a}/c_{2}}\otimes f_{c_{2}/h_{2}}\right](x_{2},b;\mu).\label{WSpinDiagonal}\end{eqnarray}
The function $\Sigma_{g}(\theta_{*})$ is shown explicitly in Eq.~(\ref{Sigmag}).
\subsection{Spin-flip term in gluon scattering \label{subsection:Spin-Flip}}
We concentrate in this subsection on the spin-flip distribution ${\mathcal{P}}_{g/h}^{\prime}(x,\vec{b})$
in gluon scattering. Its existence is warranted by basic symmetries
of helicity- and transverse-momentum-dependent gluon distribution
functions \cite{Mulders:2000sh}. This function, which describes interference
of the amplitudes for nearly collinear gluons with opposite helicities,
coincides with the function $H^{\perp}$ in Ref.~\cite{Mulders:2000sh}
up to an overall factor. It contributes to \emph{unpolarized} $Q_{T}$
distributions, because the hard-scattering product ${\mathcal{H}}_{g}^{\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{4}}\left({\mathcal{H}}_{g}^{\lambda_{1}^{\prime}\lambda_{2}^{\prime}\lambda_{3}\lambda_{4}}\right)^{*}$
(with ${\mathcal{H}}_{g}^{\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{4}}$
given by the quark box helicity amplitude in Fig.~\ref{Fig:FeynDiag}(a))
does not vanish for $\lambda_{1}=-\lambda_{1}^{\prime}$ or $\lambda_{2}=-\lambda_{2}^{\prime}$.
The presence of ${\mathcal{P}}_{g/h}^{\prime}(x,\vec{b})$ modifies the
dependence of the resummed cross section on the photon's azimuthal
angle $\varphi_{*}$ in the Collins-Soper frame. It vanishes after
the integration over $\varphi_{*}$ is performed. In contrast, the
helicity-diagonal part of $\widetilde{W}(Q,b,y,\Omega_{*})$ is independent
of $\varphi_{*}$, cf. Eq.~(\ref{WSpinDiagonal}).
The gluon function ${\mathcal{P}}_{g/h}^{\prime}(x,\vec{b})$ is invariant
under time reversal (i.e., is $T$-even) and acquires large contributions
proportional to the unpolarized $T$-even PDF's ${\mathcal{P}}_{g/h}(x,\vec{b})$
in the process of gluon radiation. These contributions require resummation
via PDF evolution equations (similar to Dokshitzer-Gribov-Lipatov-Altarelli-Parisi
equations \cite{Dokshitzer:1977sg,Gribov:1972ri,Gribov:1972rt,Altarelli:1977zs})
in order to predict the $\varphi_{*}$ dependence in the $gg$ channel.
At one loop, the mixing of spin-flip and unpolarized gluon PDF's is
driven by the convolution $\left[P_{g/g}^{\prime}\otimes f_{g/h}\right](x,\mu_{F})$
of the spin-flip splitting function $P_{g/g}^{\prime}(x)$ shown in
Eq.~(\ref{Pggprime}) with the gluon PDF $f_{g/h}(x,\mu_{F})$. This
convolution may be comparable to or exceed the analogous convolution
$\left[P_{g/g}\otimes f_{g/h}\right](x,\mu_{F})$ of the unpolarized
splitting function $P_{g/g}(x)$ for some $x$ and $\mu_{F}$ values,
as shown in Fig.~\ref{fig:PggxQ}. As a result of the mixing, an
additional $\varphi_{*}$-dependent term \begin{eqnarray}
& & \frac{\Sigma_{g}^{\prime}(\theta_{*},\varphi_{*})}{2\pi SQ_{T}^{2}}\frac{\alpha_{s}}{\pi}\Bigl(\left[P_{g/g}^{\prime}\otimes f_{g/h_{1}}\right](x_{1},\mu_{F})\, f_{g/h_{2}}(x_{2},\mu_{F})\nonumber \\
& & \hspace{77pt}+f_{g/h_{1}}(x_{1},\mu_{F})\left[P_{g/g}^{\prime}\otimes f_{g/h_{2}}\right](x_{2},\mu_{F})\Bigr)\label{ASYprime}\end{eqnarray}
arises in the unpolarized ${\mathcal{O}}(\alpha_{s})$ asymptotic
piece, cf. Eq.~(\ref{ASYgg}). It is produced by the perturbative
expansion of the entry proportional to $\Sigma_{g}^{\prime}(\theta_{*},\varphi_{*}){\mathcal{P}}_{g/h}(x_{i},\vec{b}){\mathcal{P}}_{g/h}^{\prime}(x_{j},\vec{b})$
in $\widetilde{W}(Q,b,y,\Omega_{*})$, with $\Sigma_{g}^{\prime}(\theta_{*},\varphi_{*})$
shown explicitly in Eqs.~(\ref{Sigmagprime}) and (\ref{Lprime}).
Generally, the $\varphi_{*}$-dependent contribution is not small,
even though it is suppressed comparatively to the unpolarized collinear
contribution by the ratio $L_{g}^{\prime}(\theta_{*})/L_{g}(\theta_{*})$
shown in Fig.~\ref{Fig:LLprime}. For example, for $Q=100$ GeV at
the LHC, its magnitude constitutes up to about a half of the collinear
unpolarized asymptotic contribution,\begin{equation}
\frac{\Sigma_{g}(\theta_{*})}{2\pi SQ_{T}^{2}}\frac{\alpha_{s}}{\pi}\Bigl(\left[P_{g/a}\otimes f_{a/h_{1}}\right](x_{1},\mu_{F})\, f_{g/h_{2}}(x_{2},\mu_{F})+f_{g/h_{1}}(x_{1},\mu_{F})\left[P_{g/a}\otimes f_{a/h_{2}}\right](x_{2},\mu_{F})\Bigr).\label{ASYNoSpinFlip}\end{equation}
The ${\mathcal{O}}(\alpha_{s})$ spin-flip $gg$ contribution does
not mix with the $gq_{S}$ contribution.
In terms of the reduced matrix elements $M_{\lambda_{1}\lambda_{2}\lambda_{3}\lambda_{4}}^{(1)}$
defined in \cite{Bern:2001df}, the double spin-flip hard vertex function
$\Sigma_{g}^{\prime\prime}(\theta_{*},\varphi_{*})$ is\begin{eqnarray}
\Sigma_{g}^{\prime\prime}(\theta_{*},\varphi_{*}) & = & \sigma_{g}^{(0)}\left(L_{1g}^{\prime\prime}(\theta_{*})+L_{2g}^{\prime\prime}(\theta_{*})\cos\left(4\varphi_{*}\right)\right),\end{eqnarray}
where\begin{eqnarray}
L_{1g}^{\prime\prime}(\theta_{*}) & = & 4{\mbox{Re}}\left(M_{1,1,-1,-1}^{(1)}+1\right),\end{eqnarray}
and\begin{eqnarray}
L_{2g}^{\prime\prime}(\theta_{*}) & = & 4{\mbox{Re}}\left(M_{1,-1,-1,1}^{(1)}M_{1,-1,1,-1}^{(1)*}+1\right).\end{eqnarray}
The perturbative expansion of the resummed entry proportional to
$\Sigma_{g}^{\prime\prime}(\theta_{*},\varphi_{*}){P}_{g/h}^{\prime}(x_{i},\vec{b}){P}_{g/h}^{\prime}(x_{j},\vec{b})$
produces an NNLO term in the unpolarized $gg$ asymptotic piece, \begin{equation}
\frac{\Sigma_{g}^{\prime\prime}(\theta_{*},\varphi_{*})}{2\pi SQ_{T}^{2}}\frac{\alpha_{s}^{2}}{\pi^{2}}\left[P_{g/g}^{\prime}\otimes f_{g/h_{1}}\right](x_{1},\mu_{F})\left[P_{g/g}^{\prime}\otimes f_{g/h_{2}}\right](x_{2},\mu_{F}).\end{equation}
The analogous quark function ${\mathcal{P}}_{q_{i}/h}^{\prime}(x,\vec{k}_{T})$
corresponds to the transversity distribution \cite{Tangerman:1994eh}
and is odd under time reversal ($T$-odd). It cannot be generated
radiatively through conventional PDF evolution from the $T$-even
unpolarized function ${\mathcal{P}}_{q_{i}/h}(x,\vec{k}_{T})$ and
does not contribute to the NLO asymptotic term. We find $\Sigma_{q}^{\prime}=0,$
because the non-vanishing amplitudes ${\cal H}_{q}(q_{1}^{\lambda_{1}},\bar{q}_{2}^{\lambda_{2}},\gamma_{3}^{\lambda_{3}},\gamma_{4}^{\lambda_{4}})$
must have opposite helicities of the quark and antiquark ($\lambda_{1}=-\lambda_{2}$).
Therefore, the functions ${\mathcal{P}}_{q_{i}/h}^{\prime}(x,\vec{b})$
contribute in pairs through the term proportional to $\Sigma_{q}^{\prime\prime}(\varphi_{*})=-(\alpha^{2}e_{i}^{4}\pi/(2N_{c}Q^{2}))\,\cos2\varphi_{*}.$
These contributions are anticipated to be much smaller than the usual
spin-average contribution and negligible at large $Q$, in analogy
to unpolarized Drell-Yan production \cite{Henneman:2001ev,Boer:2001he}.
In summary, the azimuthal angle ($\varphi_{*}$) dependence of photons
in the $gg$ scattering channel is affected by large QCD contributions
associated with interference between gluons of opposite helicities.
These logarithmic corrections may arise at NLO through QCD radiation
from conventional unpolarized PDF's, a mechanism that is unique to
gluon scattering. Other types of spin-interference contributions (not
considered here) involve spin-flip PDF's only. The soft and collinear
logarithms associated with the spin-flip contributions must be resummed
along the lines discussed in Ref.~\cite{Idilbi:2004vb}. Given that
$gg\rightarrow\gamma\gamma$ is the subleading production channel
at the Tevatron and at the LHC, we henceforth neglect the gluon spin-flip
contributions to the resummed $\widetilde{W}(Q,b,y,\Omega_{*})$,
while subtracting the corresponding $\varphi_{*}$-dependent asymptotic
contribution from the finite-order $2\rightarrow3$ cross section.
The nature of $gg$ spin-flip contributions can be explored by measuring
the double-differential distribution in $\varphi_{*}$ and $Q_{T}$
at the LHC, a topic that is interesting also from the point of view
of the Higgs boson search. Full resummation of the gluon spin-flip
contributions may be needed in the future.
\begin{figure}
\includegraphics[width=0.8\textwidth,keepaspectratio]{eps/PggprimePgg}
\caption{Comparison of $[P_{g/g}\otimes f_{g/p}](x,\mu_{F})$ and $[P_{g/g}^{\prime}\otimes f_{g/p}](x,\mu_{F})$
for the gluon PDF $f_{g/p}(x,\mu_{F})$ in the proton (multiplied
by $x^{1.5}$ to better illustrate the small-$x$ region) at several
values of the factorization scale $\mu_{F}.$ \label{fig:PggxQ}}
\end{figure}
\subsection{Complete expressions for resummed cross sections \label{subsection:CompleteResummed}}
In this Section, we review complete expressions for the unpolarized
resummed cross sections, starting from the perturbative QCD approximation
$\widetilde{W}_{pert}(Q,b,y,\Omega_{*})$ valid at small impact parameters
$b^{2}\ll1\mbox{ GeV}^{-2}$. For a hard-scattering function $\sum_{hel.}\left|{\mathcal{H}}(Q,\theta_{*})\right|^{2}\equiv\Sigma_{a}(\theta_{*})h_{a}^{2}(Q,\theta_{*}),$
the form factor $\widetilde{W}_{pert}(Q,b,y,\Omega_{*})$ is \begin{eqnarray}
\widetilde{W}_{pert}(Q,b,y,\theta_{*}) & = & \sum_{a}\frac{\Sigma_{a}(\theta_{*})}{S}h_{a}^{2}(Q,\theta_{*})e^{-{\mathcal{S}}_{a}(Q,b)}\nonumber \\
& \times & \left[{\mathcal{C}}_{a/c_{1}}\otimes f_{c_{1}/h_{1}}\right](x_{1},b;\mu)\left[{\mathcal{C}}_{\bar{a}/c_{2}}\otimes f_{c_{2}/h_{2}}\right](x_{2},b;\mu).\label{UnpW2}\end{eqnarray}
The Sudakov function is defined in Eq.~(\ref{Sudakov}), and the
function $h_{a}(Q,\theta_{*})$ collects radiative contributions to
${\mathcal{H}}(Q,\theta_{*})$ arising at NLO and beyond. We compute
the functions $h_{a},$ ${\mathcal{A}}_{a}$, ${\mathcal{B}}_{a}$
and ${\mathcal{C}}_{a/c}$ up to orders $\alpha_{s},$ $\alpha_{s}^{3},$
$\alpha_{s}^{2},$ and $\alpha_{s},$ respectively. The ${\mathcal{A}}_{a}$,
${\mathcal{B}}_{a}$ and ${\mathcal{C}}_{a/c}$ coefficients are taken
from Refs.~\cite{Balazs:1997hv,Nadolsky:2002gj,Yuan:1991we,Vogt:2004mw,deFlorian:2000pr,Moch:2004pa}
and listed in a consistent notation in Ref.~\cite{Balazs:2007hr}.
We use a procedure outlined in Ref.~\cite{Balazs:1997xd} to join
the small-$Q_{T}$ resummed cross sections $W$ with the large-$Q_{T}$
NLO cross sections $P$. In Eq.~(\ref{FullResum}), $Y\equiv P-A$
is the difference between the perturbative cross section $P$ and
its small-$Q_{T}$ asymptotic expansion $A$, explicitly given in
Eq.~(\ref{ASYgg}). For each value of $Q$ and $y$ of the $\gamma\gamma$
pair, $W+Y$ approaches $P$ from above and eventually becomes smaller
than $P$ as $Q_{T}$ increases. We use $W+Y$ as our prediction at
$Q_{T}$ values below this point of crossing and the finite-order
cross section $P$ at $Q_{T}$ above the crossing point.
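Schematically, this matching can be summarized by the following sketch (hypothetical arrays on a $Q_{T}$ grid and toy curves, not the actual implementation used in this work): the prediction follows $W+Y$ below the first $Q_{T}$ at which $W+Y$ drops below $P$, and follows $P$ above it.
\begin{verbatim}
# Schematic matching of W+Y and the fixed-order P on a Q_T grid (toy curves).
import numpy as np

def matched_spectrum(qt, WplusY, P):
    below = WplusY < P
    # first grid point where W+Y drops below P defines the switching point
    cross = np.argmax(below) if below.any() else len(qt)
    idx = np.arange(len(qt))
    out = np.where(idx < cross, WplusY, P)
    return out, (qt[cross] if cross < len(qt) else None)

qt = np.linspace(1.0, 100.0, 200)
P = 1.0 / qt**3                                        # toy fixed-order tail
WplusY = P * (1.0 + 5.0 * np.exp(-qt / 15.0) - 0.001 * qt)
spectrum, qt_switch = matched_spectrum(qt, WplusY, P)
print("switching point near Q_T =", qt_switch, "GeV")
\end{verbatim}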
The final cross sections depend on several factorization scales: $C_{1}/b$,
$C_{2}Q$, $\mu\equiv C_{3}/b$ in the $W$ term, and $\mu_{F}\equiv C_{4}Q$
in the $Y$ term. Here $C_{i}$ ($i=1,\ldots,4$) are dimensionless constants
of order unity, chosen as $C_{2}=C_{4}=1$, $C_{1}=C_{3}=2e^{-\gamma_{E}}=1.123...$
by default. These choices simplify perturbative coefficients by eliminating
scale-dependent logarithmic terms, cf. the appendix in Ref.~\cite{Balazs:2007hr}.
Dependence on the scale choice is studied in Section~\ref{Sec:Phenomenology}.
In the general formulation of CSS resummation presented in \cite{Collins:1981uk,Collins:1981va},
one has the freedom to choose different {}``resummation schemes'',
resulting effectively in variations in the form of $h_{a}(Q,\theta_{*})$.
These differences are compensated, up to higher-order corrections,
by adjustments in the functions ${\mathcal{B}}$ and ${\mathcal{C}}$.
In {}``the CSS resummation scheme'' \cite{Collins:1984kg}, one
chooses $h_{a}(Q,\theta_{*})=1,$ while including the virtual corrections
to the $2\rightarrow2$ scattering process in ${\mathcal{B}}$ and
${\mathcal{C}}.$ In this scheme, some ${\mathcal{B}}$ and ${\mathcal{C}}$
coefficients depend on the $2\rightarrow2$ hard scattering process
and also on $\theta_{*}$.
In an alternative prescription by Catani, de Florian and Grazzini
\cite{Catani:2000vq}, {}``the CFG resummation scheme'', one keeps
the $2\rightarrow2$ virtual corrections within a single function
$\left|{\mathcal{H}}(Q,\theta_{*})\right|^{2}.$ In this case, the
${\mathcal{B}}$ and ${\mathcal{C}}$ functions depend only on the
initial state. Most of our numerical calculations are realized in
the CSS resummation scheme, with a few made in the CFG scheme for
comparison purposes.
In impact parameter ($b$) space used in the resummation procedure,
we must integrate into the nonperturbative region of large $b$, cf.~Eq.~(\ref{FourierIntegral}).
Contributions from this region are known to be suppressed at high
energies~\cite{Berger:2002ut}, but some residual dependence may
remain. In the $q\bar{q}+qg\rightarrow\gamma\gamma$ channel, our
model for the nonperturbative contributions (denoted as KN1 \cite{Konychev:2005iy})
is derived from the analysis of Drell-Yan pair and $Z$ boson production.
The nonperturbative function in this model is dominated at large $Q$
by a soft contribution, which does not depend on the flavor of initial-state
light quarks. This function is therefore expected to be applicable
to the $q\bar{q}+qg\rightarrow\gamma\gamma$ process.
The nonperturbative function in the $gg+gq_{S}$ channel, which is
yet to be measured directly, is approximated by the nonperturbative
function for the $q\bar{q}+qg$ channel multiplied by the ratio $C_{A}/C_{F}=9/4$
of the color factors $C_{A}$ and $C_{F}$ for the leading soft contributions
in the $gg$ and $q\bar{q}$ channels. This ansatz suggests stronger
dependence of the $gg+gq_{S}$ channel on the nonperturbative input
compared to the $q\bar{q}+qg$ channels. It leads to small differences
from the prescription used in Refs.~\cite{Balazs:1997hv,Balazs:1999yf},
where only the leading $\ln Q$ term of the nonperturbative function
was rescaled. To examine the dependence of the resummed cross sections
on the nonperturbative model, we evaluate some of them assuming an
alternative (BLNY) parameterization of the nonperturbative function
\cite{Landry:2002ix}.
\section{Numerical Results \label{Sec:Phenomenology}}
The analytical results of Sec.~III are implemented in our computer
codes \textsc{Legacy} and \textsc{ResBos} \cite{Ladinsky:1993zn,Landry:2002ix,Balazs:1997xd,Balazs:1999gh}.
We use the same parameters as in the calculation of Ref.~\cite{Balazs:2006cc},
and we concentrate on the region $Q_{T}<Q$ where our calculation
is most reliable \cite{Balazs:2006cc}.
\begin{figure*}
\includegraphics[width=100cm,height=9.5cm,keepaspectratio]{eps/1960_cdf_qt_mid_qqgg_p_w221_css}
\caption{Parton flavor decomposition of the resummed transverse momentum distribution
at the energy $\sqrt{S}=1.96$ TeV of the Tevatron Run-2. The total
(solid), $q{\bar{q}}+qg$ (dashes), and $gg+gq_{S}$ (dash-dots) initial-state
contributions are shown separately. \label{Fig:bStarCDF}}
\end{figure*}
\subsection{Results for Run 2 at the Tevatron}
In this section, we present our results for the Tevatron $p\bar{p}$
collider at $\sqrt{S}=1.96$~TeV. We make the same restrictions on
the final-state photons as those used in the experimental measurement
by the Collider Detector at Fermilab (CDF) collaboration~\cite{Acosta:2004sn}:
transverse momentum $p_{T}^{\gamma}>p_{T\, min}^{\gamma}=14~(13)$~GeV
for the harder (softer) photon, and rapidity $|y^{\gamma}|<0.9$ for
each photon. We impose photon isolation by requiring the hadronic
transverse energy not to exceed $1$ GeV in the cone $\Delta R=0.4$
around each photon, as specified in the CDF publication. We also require
the angular separation $\Delta R_{\gamma\gamma}$ between the photons
to be larger than 0.3.
We focus in this paper on the role of the $gg$ contribution, referring
to our other papers \cite{Balazs:2006cc,Balazs:2007hr} for a more
complete treatment.
\begin{figure*}
\includegraphics[width=0.7\textwidth,keepaspectratio]{eps/qt_tev2_scale_dep}
\caption{Scale dependence of the cross sections in $gg+gq_{S}$ and all scattering
channels. The upper frame shows cross sections for the default choice
of scales specified in Section~\protect\ref{subsection:CompleteResummed}
(solid), as well as for varied scales $C_{2}=C_{4}=2$ (dashes), $C_{2}=C_{4}=0.5$
(dots), and $C_{3}=4e^{-2\gamma_{E}}$ (dot-dashes). The lower two
frames show ratios of the cross sections computed for the varied factorization
scales to the cross section for the default choice of the scales. }
\label{Fig:ScaleTev2}
\end{figure*}
To illustrate the relative importance of the individual initial-state
contributions in the final answer, we provide a parton flavor decomposition
of our resummed transverse momentum distribution $d\sigma/dQ_{T}$
in Fig.~\ref{Fig:bStarCDF}. This distribution is integrated over
all diphoton invariant masses $Q$, subject to the CDF cuts, and receives
dominant contributions from the $Q_{T}<Q$ region. The $gg+gq_{S}$
contribution supplies about one-third of the total rate near $Q_{T}=5$~GeV.
It falls steeply beyond $Q_{T}\approx20$ GeV, because the gluon PDF falls
steeply with parton fractional momentum $x$.
Dependence of the resummed cross sections on the choice of factorization
scales mentioned in Section~\ref{subsection:CompleteResummed} is
examined in Fig.~\ref{Fig:ScaleTev2}. We pick a few characteristic
combinations of alternative scales to probe the scale dependence associated
with the resummed Sudakov function $e^{-{\cal S}}$, the $b$-dependent
PDF's ${\mathcal{P}}_{a/h}(x,\vec{b})\approx[{\mathcal{C}}_{a/c}\otimes f_{c/h}](x,b;\mu)$,
and the regular $Y$ term. The small-$Q_{T}$ region is sensitive
primarily to the scales $C_{1}/b,C_{2}\, Q,C_{3}/b$ in the resummed
term $W$. The event rate at large $Q_{T}$ is controlled by the choice
of the factorization scale $\mu_{F}\equiv C_{4}\, Q$ in the regular
term $Y$.
At the relatively low values of $Q$ relevant for the Tevatron experiments,
the scale dependence of the next-to-leading order $gg+gq_{S}$ cross
section is still substantial, with variations being about $-20\%$
($+50\%$) at $Q_{T}=5-10$ GeV, $\pm10\%$ at $Q_{T}=10-20$ GeV,
and $\pm20\%$ at $Q_{T}=20-40$ GeV. Since the $Y$ term is the lowest-order
approximation for $gg\rightarrow\gamma\gamma g$ at $Q_{T}\sim Q$,
the scale dependence associated with the constant $C_{4}$ remains
pronounced at large $Q_{T}$. The inclusive $gg+gq_{S}$ rate, integrated
over $Q_{T}$, varies by $20-40\%$ almost independently of the $\gamma\gamma$
invariant mass $Q$. The large scale dependence of the NLO $gg+gq_{S}$
cross section reflects slow perturbative convergence in gluon-gluon
scattering, observed also in other similar processes, e.g., $gg\rightarrow\mbox{Higgs}$
via the top quark loop \cite{Harlander:2002wh,Anastasiou:2004xq,Anastasiou:2005qj}.
For this reason, a NNLO calculation would be desirable to reduce the
scale uncertainty in the $gg+gq_{S}$ channel.
On the other hand, the scale dependence of the cross section when
all channels are combined is relatively mild, with variations not
exceeding 10\% at small $Q_{T}$ and 20\% at large $Q_{T}$. Variations
in the integrated inclusive rate for all channels combined are below
10\% at $Q>30$ GeV.
\begin{figure*}
\includegraphics[width=1\textwidth,height=6.5cm,keepaspectratio]{eps/qt_tev2_css_vs_cfg_rat}
(a)
\includegraphics[width=1\textwidth,height=6.5cm,keepaspectratio]{eps/qt_tev2_kn_vs_blny_rat}
(b)
\caption{Ratios of resummed cross sections at the Tevatron Run-2 computed
in (a) the Catani-de Florian-Grazzini (CFG) and Collins-Soper-Sterman
(CSS) resummation schemes and (b) using the BLNY and KN1 nonperturbative
models, as functions of the $\gamma\gamma$ transverse momentum $Q_{T}$.
The ratios are shown in the $q\bar{q}+qg$ (dashed), $gg+gq_{S}$
(dot-dashed), and all (solid) scattering channels. A $Q_{T}<Q$ cut
is imposed in this comparison.}
\label{Fig:VariationsTev2}
\end{figure*}
Another aspect of scale dependence is associated with the assumed
arrangement of logarithmic terms in the resummed $W$ term, i.e.,
the {}``resummation scheme'' that is adopted. This dependence is
yet another indicator of the size of higher-order corrections not
included in the present analysis. Figure~\ref{Fig:VariationsTev2}(a)
shows ratios of the full resummed cross sections in the Catani-de
Florian-Grazzini (CFG) and Collins-Soper-Sterman (CSS) resummation
schemes, as described in Sec.~III. The differences between these
schemes stem from the different treatment of the NLO hard-vertex correction
$h_{a}^{(1)}(\theta_{*})$. The magnitude of $h_{a}^{(1)}(\theta_{*})$
determines whether the channel is sensitive to the choice of the two
resummation schemes. The magnitude of $h_{g}^{(1)}(\theta_{*})$ in
the $gg+gq_{S}$ channel exceeds that of $h_{q}^{(1)}(\theta_{*})$
in the $q\bar{q}+qg$ channel by roughly an order of magnitude for
most values of the $\theta_{*}$ angle \cite{Nadolsky:2002gj}. Consequently,
while the dependence on the resummation scheme is practically negligible
in the dominant $q\bar{q}+qg$ channel (dashed line), it can reach
15\% in the subleading $gg+gq_{S}$ channel (dot-dashed line). The
$Q_{T}$ spectrum in the $gg+gq_{S}$ channel is slightly softer in the
CFG scheme up to the point of switching to the fixed-order cross section
at $Q_{T}\approx60$ GeV. The resummation scheme dependence in all
channels (solid line) is less than 3-4\%, reflecting mostly the scheme
dependence in the $gg+gq_{S}$ channel.
To examine the sensitivity of the resummed predictions to long-distance
nonperturbative dynamics in hadron-hadron scattering, we include in
Fig.~\ref{Fig:VariationsTev2}(b) a comparison with the resummed
cross sections for an alternative choice of the nonperturbative model.
As explained in Sec.~\ref{subsection:CompleteResummed}, our default
calculation is performed in the recent KN1 model \cite{Konychev:2005iy}
for the nonperturbative part of the resummed form factor $\widetilde{W}(Q,b,y,\Omega_{*})$.
Figure~\ref{Fig:VariationsTev2}(b) shows ratios of the predictions
for a different BLNY model \cite{Landry:2002ix} and our default KN1
model in various initial-state scattering channels.
The difference is maximal at the lowest $Q_{T}$, as expected, and
it is less than 5\% for the total cross section. For the $q{\bar{q}}+qg$
and $gg+gq_{S}$ initial states the maximal difference is about 5\%
and 20\%, respectively. The dependence on the nonperturbative function
is stronger in the $gg+gq_{S}$ channel, where the BLNY/KN1 ratio
reaches its maximum of 1.15 at $Q_{T}\approx25$
GeV and slowly decreases toward 1, reached at the switching point
at $Q_{T}\approx60$ GeV. This behavior reflects our assumption of
a larger magnitude of the nonperturbative function in the $gg+gq_{S}$
channel, which is rescaled in our model by $C_{A}/C_{F}=9/4$ compared
to the nonperturbative function in the $q\bar{q}+qg$ channel. In
summary, despite a few-percent uncertainty associated with the nonperturbative
function in the $gg+gq_{S}$ process, the overall dependence of the
Tevatron $\gamma\gamma$ cross section on the nonperturbative input
can be neglected.
\subsection{Results for the LHC}
To obtain predictions for $pp$ collisions at the LHC at $\sqrt{S}=14$~TeV,
we employ the cuts on the individual photons used by the ATLAS collaboration
in their simulations of Higgs boson decay, $h\rightarrow\gamma\gamma$~\cite{ATLAS:1999fr}.
We require transverse momentum~$p_{T}^{\gamma}>40~(25)$~GeV for
the harder (softer) photon, and rapidity~$|y^{\gamma}|<2.5$ for
each photon. We impose the ATLAS isolation criteria, looser than for
the Tevatron study, requiring less than 15~GeV of hadronic and extra
electromagnetic transverse energy inside a $\Delta R=0.4$ cone around
each photon. We also require the separation $\Delta R_{\gamma\gamma}$
between the two isolated photons to be above 0.4. The cuts optimized
for the Higgs boson search may require adjustments in order to test
perturbative QCD predictions in the full $\gamma\gamma$ invariant
mass range accessible at the LHC.
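To make the selection concrete, the following minimal Python sketch applies the cuts listed above to a single diphoton candidate. It is only an illustration: the event representation (plain dictionaries holding each photon's transverse momentum, pseudorapidity, azimuthal angle, and the transverse energy inside a $\Delta R=0.4$ cone) and all function names are our own assumptions and are not tied to the actual analysis code.
\begin{verbatim}
import math

def delta_r(p1, p2):
    """Angular separation Delta R = sqrt(Delta eta^2 + Delta phi^2)."""
    dphi = abs(p1["phi"] - p2["phi"])
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return math.hypot(p1["eta"] - p2["eta"], dphi)

def passes_atlas_like_cuts(photon_hard, photon_soft):
    """Diphoton selection following the ATLAS-like cuts quoted in the text."""
    # Transverse momenta: 40 GeV (harder photon) and 25 GeV (softer photon).
    if photon_hard["pt"] < 40.0 or photon_soft["pt"] < 25.0:
        return False
    # Rapidity (approximated here by pseudorapidity) within |y| < 2.5.
    if abs(photon_hard["eta"]) > 2.5 or abs(photon_soft["eta"]) > 2.5:
        return False
    # Isolation: less than 15 GeV of hadronic + extra EM E_T in a 0.4 cone.
    if photon_hard["iso_et"] >= 15.0 or photon_soft["iso_et"] >= 15.0:
        return False
    # Separation between the two isolated photons above 0.4.
    return delta_r(photon_hard, photon_soft) > 0.4
\end{verbatim}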
\begin{figure*}
\includegraphics[width=0.49\textwidth]{eps/14000_atlas_q_qqgg_p_w2211}\includegraphics[width=0.49\textwidth]{eps/14000_atlas_qt_qqgg_p_w2211_115q140}\\
(a)\hspace{3in}(b)
\includegraphics[width=0.49\textwidth]{eps/14000_atlas_delphi_qqgg_p_w2211}\\
(c)
\caption{Resummed $d\sigma/dQ,$ $d\sigma/dQ_{T},$ and $d\sigma/d\Delta\varphi$
distributions of photon pairs at the LHC for ATLAS kinematic cuts.
\label{Fig:QDelPhiLHC}}
\end{figure*}
Distributions in the invariant mass $Q$, transverse momentum $Q_{T}$,
and azimuthal angle separation $\Delta\varphi\equiv\varphi_{\gamma_{1}}-\varphi_{\gamma_{2}}$
between the two photons in the laboratory frame are shown in Fig.~\ref{Fig:QDelPhiLHC}.
As before, we compare the magnitudes of the $q\bar{q}+qg$ and $gg+gq_{S}$
cross sections. The qualitative features are similar to those at the
Tevatron, but the relative contribution of the various initial states
changes at the LHC. The $gg+gq_{S}$ initial state contributes about
25\% of the total rate at $Q\sim80$~GeV where the mass distribution
peaks, but the $gg+gq_{S}$ rate falls faster than $q\bar{q}+qg$
with increasing invariant mass.
In the invariant mass range relevant for the Higgs boson search, $115<Q<140$
GeV, the transverse momentum distribution in Fig.~\ref{Fig:QDelPhiLHC}(b)
shows that the $gg+gq_{S}$ initial state accounts for about 25\%
of the rate at low $Q_{T}$. At high transverse momentum, on the other
hand, the other channels dominate. The relative size of the $gg+gq_{S}$
contribution drops as the invariant mass or the transverse momentum
of the photon pair grows. The $gg+gq_{S}$ contribution falls more
steeply with $Q_{T}$ for larger masses of the diphoton. These features
are attributable to the steeply falling gluon distribution as a function
of increasing momentum fraction $x$.
\begin{figure*}
\includegraphics[width=0.7\textwidth,keepaspectratio]{eps/qt_lhc_scale_dep}
\caption{Scale dependence in the $gg+gq_{S}$ and all scattering channels
at the LHC for the same scale choices as in Fig.~\protect\ref{Fig:ScaleTev2}.}
\label{Fig:ScaleLHC}
\end{figure*}
The scale dependence at the LHC, presented in Fig.~\ref{Fig:ScaleLHC},
is somewhat reduced compared to the Tevatron (cf. Fig.~\ref{Fig:ScaleTev2}).
Maximum scale variations of about 40\% in the $gg+gq_{S}$ channel
are observed at the peak of the $d\sigma/dQ_{T}$ distribution, and
they are substantially smaller at large $Q_{T}$. The scale variation
in the sum over all channels does not exceed 10\% (15\%) at small
$Q_{T}$ (large $Q_{T}$). Variations in the integrated inclusive
rate at $Q>50$ GeV are below 7\% (30\%) in all channels ($gg+gq_{S}$
channel).
\begin{figure*}
\includegraphics[width=1\textwidth,height=6.5cm,keepaspectratio]{eps/qt_lhc_css_vs_cfg_rat}\\
(a)
\includegraphics[width=1\textwidth,height=6.5cm,keepaspectratio]{eps/qt_lhc_kn_vs_blny_rat}\\
(b)
\caption{Same as Fig.~\ref{Fig:VariationsTev2}, at the LHC. \label{Fig:VariationsLHC}}
\end{figure*}
The dependence on the resummation scheme is mild at the LHC (cf.~Fig.~\ref{Fig:VariationsLHC}(a)),
with the maximal differences between the CSS and CFG schemes below
0.5\%, 10\%, and 2\% in $q\bar{q}+qg$, $gg+gq_{S}$, and all channels.
The scheme dependence is again the largest in the $gg+gq_{S}$ channel,
where it persists up to the point of switching to the fixed-order
cross section at $Q_{T}\approx120$ GeV. The ratios of the resummed
cross sections calculated in the BLNY and KN1 models for nonperturbative
contributions in the CSS scheme are shown in Fig.~\ref{Fig:VariationsLHC}(b).
The influence of the long-distance (large-$b$) contributions is suppressed
at the high center-of-mass energy of the LHC. Differences between
the predictions in the two models do not exceed 2\%, 6\%, and 2\%
in the $q\bar{q}+qg,$ $gg+gq_{S},$ and all scattering channels.
The KN1 and BLNY nonperturbative models neglect the possibility of
a strong $x$ dependence of the nonperturbative function, which may
substantially modify our predictions at the energy of the LHC collider.
Analysis of small-$x$ semi-inclusive deep inelastic scattering data
\cite{Berge:2004nt} suggests that $x$-dependent nonperturbative
corrections of uncertain magnitude may substantially affect the resummed
cross sections. Such corrections can be constrained by studying the
rapidity and energy dependence of the nonperturbative function at
the Tevatron and LHC, for example, from copious production of $Z$
bosons \cite{Berge:2004nt}. We conclude that uncertainties due to
the choice of the resummation scheme and the nonperturbative model
will be small at the LHC, if the resummed nonperturbative function
does not vary strongly with $x$.
\begin{figure}
\includegraphics[width=0.5\textwidth,keepaspectratio]{eps/qt_lhc_ag1}\includegraphics[width=0.5\textwidth,keepaspectratio]{eps/qt_lhc_ag2}\\
(a)\hspace{3in}(b)
\caption{Inclusion of the $qg$ contribution improves the matching of the
resummed and NLO perturbative cross sections at large $Q_{T}$, as
demonstrated by these plots of the resummed and finite-order NLO cross
sections for (a) the $gg$ channel only; (b) the combined $gg+gq_{S}$
channel. The resummed and NLO cross sections are shown by the solid
and dashed lines, respectively. \label{Fig:gqSLHC}}
\end{figure}
\subsection{The role of the $gq_{S}$ contribution \label{subsection:gqS}}
Figures~\ref{Fig:bStarCDF}-\ref{Fig:VariationsLHC} show the contributions
from the $q\bar{q}+qg$ and $gg+gq_{S}$ channels along with their
sum. One may wonder if a further decomposition into $q\bar{q}$ and
$qg$ (or $gg$ and $gq_{S}$) contributions could provide additional
insights into the relative importance of different scattering processes.
We observe in our calculations that the resummed cross sections $W+Y$
and the fixed-order cross sections $P$ in the elementary scattering
subchannels ($q\bar{q}$, $qg$,...) may not cross until $Q_{T}$
is significantly larger than $Q$. This result is at variance with
our expectation that the fixed-order answer should be adequate when
$Q_{T}$ is of order $Q$, where logarithmic effects are small, and
the one-scale nature of the dynamics seems apparent.
Consider, for example, the $gg$ and $gg+gq_{S}$ transverse momentum
distributions in the mass interval $115<Q<140$ GeV at the LHC shown
in Figs.~\ref{Fig:gqSLHC}(a) and (b). In the $gg$ channel alone
(Fig.~\ref{Fig:gqSLHC}(a)), the $W+Y$ cross section remains above
the NLO cross section $P$ until $Q_{T}\sim140\mbox{ GeV}$. However,
after the $gq_{S}$ contribution is included (Fig.~\ref{Fig:gqSLHC}(b)),
$W+Y$ crosses $P$ at $Q_{T}\sim105\mbox{ GeV}$. Our expectation
of the adequacy of the NLO prediction at $Q_{T}\sim Q$ is satisfied
in this case, and this conclusion also holds for other intervals of
$Q$. At the crossing point, the two cross sections satisfy $W+Y=P,$
i.e., $W=A$; the resummed term is equal to its NLO perturbative
expansion, the asymptotic term. Similarly, good matching of the resummed
and NLO cross sections in the $q\bar{q}+qg$ channel requires that
we include both $q\bar{q}$ and $qg$ contributions.
This feature can be understood by noticing that the flavors of the
PDFs $f_{a/h}(x,\mu)$ mix in the process of PDF evolution. Consequently,
the perturbative expansion of $W$ in the $gg$ channel contains the
full NLO asymptotic piece $A$ in the combined $gg+gq_{S}$ channel,
generated from the Sudakov exponential and lowest-order resummed contribution
$\propto f_{g/h_{1}}(x_{1},1/b)\, f_{g/h_{2}}(x_{2},1/b)$ evaluated
at a scale of order $1/b$. The mismatch between the flavor content
in the perturbatively expanded $W$ and $A$ in nominally the same
$gg$ subchannel causes the difference $W-A$ to be large and delays
the crossing. On the other hand, the flavor content of $W$ and $A$
is the same (up to NNLO) when the $gg$ and $gq_{S}$ contributions
are combined, and the matching is improved. The $gq_{S}$ scattering
subchannel has been assumed to be small and neglected in past studies,
and indeed it contributes about one tenth of the $gg+gq_{S}$ inclusive
rate $d\sigma/dQ$. However, we see that the $gq_{S}$ contribution
must be included to correctly predict $d\sigma/dQ_{T}$ and to realize
matching between the resummed and perturbative contributions at large
transverse momenta.
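The switching procedure described above can be summarized by the short Python sketch below, which locates the $Q_{T}$ value where a tabulated $W+Y$ curve, approached from above, first drops below the fixed-order curve $P$, and uses $P$ beyond that point. The common $Q_{T}$ grid and the array names are illustrative assumptions rather than part of our numerical code.
\begin{verbatim}
import numpy as np

def matched_spectrum(qt, w_plus_y, p_fixed_order):
    """Switch from the resummed W+Y to the fixed-order P at their crossing.

    qt, w_plus_y, p_fixed_order: 1D arrays tabulated on a common Q_T grid.
    Below the crossing point the resummed curve is used, above it the NLO one.
    """
    resummed = np.asarray(w_plus_y, dtype=float)
    fixed = np.asarray(p_fixed_order, dtype=float)
    below = np.where(resummed <= fixed)[0]
    if below.size == 0:
        return resummed, None            # no crossing in the tabulated range
    i_switch = below[0]
    matched = np.where(np.arange(len(qt)) < i_switch, resummed, fixed)
    return matched, qt[i_switch]
\end{verbatim}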
\section{Summary and Conclusions}
In this paper, we address new theoretical issues in $Q_{T}$ resummation
at two-loop accuracy that arise in the gluon-gluon subprocess, $gg+gq\rightarrow\gamma\gamma$,
one of the important short-distance subprocesses that contribute to
the inclusive reactions $p\bar{p}\rightarrow\gamma\gamma X$ at the
Fermilab Tevatron and $pp\rightarrow\gamma\gamma X$ at the CERN Large
Hadron Collider (LHC).
We evaluate all next-to-leading order (NLO) contributions of order ${\mathcal{O}}(\alpha^{2}\alpha_{s}^{3})$
to the $gg+gq\rightarrow\gamma\gamma$ process (Fig.~\ref{Fig:FeynDiag}(b-e)).
A new ingredient in this paper is the inclusion of the $gq\rightarrow\gamma\gamma q$
process, Fig.~\ref{Fig:FeynDiag}(d), a necessary component of the
resummed NLO contribution. We resum to next-to-next-to-leading logarithmic
(NNLL) accuracy the large logarithmic terms of the form $\ln(Q_{T}^{2}/Q^{2})$
in the limit when $Q_{T}$ of the $\gamma\gamma$ pair is smaller
than its invariant mass $Q$. The perturbative Sudakov functions ${\mathcal{A}}$
and ${\mathcal{B}}$ and the Wilson coefficient functions ${\mathcal{C}}$
in the resummed cross section $W$ are computed to orders $\alpha_{s}^{3},$
$\alpha_{s}^{2}$, and $\alpha_{s}$. The resummed cross sections
are computed according to the CSS \cite{Collins:1984kg} and CFG \cite{Catani:2000vq}
resummation schemes, with the differences between the two approaches
reflecting the size of higher-order corrections. A new nonperturbative
function \cite{Konychev:2005iy}, dominated by a process-independent
soft correction, is employed to describe the dynamics at large impact
parameters.
Subtraction of the singular logarithmic contributions associated with
initial-state radiation from the NLO cross section $P$ defines a
regular piece $Y$. This regular term is added to the small-$Q_{T}$
resummed cross section $W$ to predict the production rate at small
to moderate values of $Q_{T}$. In the $gg$ channel, we also subtract
from $P$ a new singular spin-flip contribution that affects azimuthal
angle ($\varphi_{*})$ dependence in the Collins-Soper reference frame.
For our final prediction, we switch from the resummed cross section
$W+Y$ to $P$ at the point where $W+Y$ crosses $P$, approaching
$P$ from above, as in Ref.~\cite{Balazs:1997xd}. The location of
this point in $Q_{T}$ is of order $Q$ in the $q\bar{q}+qg$ and
the $gg+gq$ channels. For such matching to happen, it is essential
to combine cross sections in the $q\bar{q}$ and $qg$ ($gg$ and
$gq$) channels, as demonstrated in Sec.~\ref{subsection:gqS}.
At the LHC (Tevatron), the $gg+gq$ subprocess contributes 20\% (10\%)
of the total $\gamma\gamma$ production rate (integrated over the
full range of the photons' momenta). The relative contribution of
$gg+gq$ scattering may reach 25\% for some $Q$ and $Q_{T}$ values.
The $gg+gq$ channel provides an interesting opportunity to test CSS
resummation at a loop level and may be explored in detail at later
stages of the LHC operation. The NNLL/NLO resummed cross section for
the $gg+gq_{S}$ channel is used in Ref.~\cite{Balazs:2007hr} to
predict fully differential distributions of Higgs bosons and QCD background
at the LHC in the $\mbox{Higgs}\rightarrow\gamma\gamma$ decay mode.
\section*{Acknowledgments}
Research in the High Energy Physics Division at Argonne is supported
in part by the US Department of Energy, Division of High Energy Physics,
Contract DE-AC02-06CH11357. The work of C.-P. Y. is supported by the
U. S. National Science Foundation under grant PHY-0555545. P.M.N.
thanks G. Bodwin for discussions of the azimuthal angle dependence
in gluon scattering. We gratefully acknowledge the use of \textit{Jazz},
a 350-node computing cluster operated by the Mathematics and Computer
Science Division at ANL as part of its Laboratory Computing Resource
Center. The Feynman diagrams in Fig.~\ref{Fig:FeynDiag} were drawn
with the aid of the program \textsc{JaxoDraw} \cite{Binosi:2003yf}.
\section{Introduction}
\begin{figure}[t]
\centering
\includegraphics[width = 0.7\linewidth]{figures/main.png}
\caption{
The goal of this paper is to explore a self-supervised domain adaptation approach of skeleton-based action recognition,
which optimizes a model trained on a source domain (\textit{e.g.,} NTU RGB+D dataset~\cite{DBLP:conf/cvpr/ShahroudyLNW16}) to generalize well on a target domain (\textit{e.g.,} Kinetics dataset~\cite{DBLP:journals/corr/KayCSZHVVGBNSZ17}).
Our main idea is to reduce the domain shift by two auxiliary tasks with self-supervision, which aim to recognize the permutation of the segments in the temporal dimension or in the skeleton joint dimension and guide the network to learn more robust and more general features.
In this figure, the frames in different temporal segments are marked by borders in different colors while the joints assumed as different body parts are also distinguished by their colors.
Fig. \ref{fig:spatial} provides a clearer illustration of the spatial Cubism. A more detailed description of the auxiliary tasks is given in sections \ref{subsection:tem_cub} and \ref{subsection:spa_cub}.
All figures are best viewed in color.
}
\label{fig:main}
\vspace{-0.2cm}
\end{figure}
Skeleton-based action recognition has achieved impressive progress in recent years.
As evidence, under the cross-view setting of the NTU RGB+D dataset~\cite{DBLP:conf/cvpr/ShahroudyLNW16}, the recognition accuracy has been significantly improved from 70.27\% to 96.10\%~\cite{Shi_DGNN_2019_CVPR}.
However, most of the methods apply a fully-supervised learning paradigm where the training and testing data are from the same domain.
Meanwhile, there is a lack of exploration under the UDA setting in this field,
where the action labels are only available on a source dataset, but unavailable on a target dataset for performance evaluation.
This is a more \textit{pragmatic} and \textit{challenging} setting because:
(1) It is expensive and often infeasible to obtain annotations for all videos in a target dataset from a new environment.
(2) Due to the domain shift, there will be a significant performance drop on the target dataset when directly utilizing the model trained on the source dataset,
which could not be easily handled by simply pre-processing the skeleton-based data (\textit{e.g.,} rotation). See section \ref{section:main_results} for more details.
To this end, we propose a self-supervised learning framework for cross-dataset skeleton-based action recognition under the UDA setting in this paper.
Different from the mainstream UDA methods which apply an adversarial learning based scheme at the \textit{feature level}~\cite{DANN,JAN,CDAN},
our proposed self-supervision scheme concentrates on the \textit{raw data level}, which better preserves the original data structure to reduce the domain shift and is easier to implement.
In order to design proper self-supervised learning tasks for skeleton-based action,
we draw lessons from Cubism\footnote{https://en.wikipedia.org/wiki/Cubism}, a famous art genre from the early 20th century, which proposes to deconstruct the object and reassemble the pieces into a screwy yet impressive shape to illustrate the object from different views.
Specially,
we devise a temporal spatial Cubism strategy, which guides the network to be aware of the permutation of the segments in the temporal domain and the body parts in the spatial domain separately.
During training phase,
we design the objective function based on two criteria:
(1) minimizing the original action recognition loss on the source domain to improve the discriminative power and
(2) optimizing the self-supervision loss to enhance the generalization ability.
Moreover, there is a scarcity of available datasets for evaluating UDA approaches for skeleton-based action recognition.
Although some efforts have been made in this direction by a recent work~\cite{DBLP:journals/tcsv/gins}, it is still limited by the small scale of the available data (see Section IV.A for details).
To address this problem, we propose a new experiment setting based on the overlapping action classes of the PKU-MMD~\cite{DBLP:conf/mm/LiuHLS017}, NTU RGB+D~\cite{DBLP:conf/cvpr/ShahroudyLNW16} and Kinetics~\cite{DBLP:journals/corr/KayCSZHVVGBNSZ17}, which are three large-scale and widely used datasets for skeleton-based action analysis.
We conduct experiments on a series of UDA methods on these three datasets, as well as on three other datasets evaluated in~\cite{DBLP:journals/tcsv/gins}. Extensive experiments show that our method sets new state-of-the-art results in this field.
Our main contributions are summarized as follows:
\begin{itemize}
\item[1)] Different from conventional works on skeleton-based action recognition under the fully-supervised paradigm,
we explore a new UDA setting in this realm with greater challenge and more pragmatic value.
\item[2)] Unlike the popular adversarial learning based approaches for UDA,
we propose a self-supervised learning framework,
which mines the temporal and spatial dependency for skeleton-based sequence and enhance the generalization ability of the model.
\item[3)] In order to facilitate the performance evaluation on this problem, we present a new experiment setting on three large-scale datasets. To our best knowledge, they are currently the largest datasets for cross-dataset skeleton-based action recognition.
\item[4)]
We conduct experiments on six datasets under the setting proposed in this paper and~\cite{DBLP:journals/tcsv/gins}.
Both quantitative and qualitative results demonstrate the superiority of our approach compared with the state of the art.
\end{itemize}
The remainder of this paper is organized as follows: Section
II briefly reviews some related works. Section III introduces
the proposed approach for cross-dataset skeleton-based action recognition in detail. Section IV reports experimental results
and analysis, and Section V concludes the paper.
\section{Related Work}
In this section, we briefly review four related topics:
1) skeleton-based action recognition,
2) unsupervised domain adaptation,
3) video-based domain adaptation, and
4) self-supervised learning.
\subsection{Skeleton-based Action Recognition}
Skeleton-based action recognition has attracted growing attention in the realm of computer vision and a variety of methods have been proposed over the past decades.
For a detailed survey we refer the reader to~\cite{DBLP:journals/cviu/HanRHZ17,NTU120,DBLP:journals/cviu/WangLO0E18}, while here we provide a brief literature review.
The early works on skeleton-based action recognition are based on hand-crafted features~\cite{Vemulapalli2014Human,JunwuCVPR17,KoniuszCP16,DBLP:conf/eccv/WangYHLZ16},
while recent approaches are devised by designing deep neural networks (DNNs) like convolutional neural networks
(CNNs)~\cite{Liu2017Enhanced,DPRL,DBLP:conf/ijcai/LiZXP18} and recurrent neural networks (RNNs)~\cite{Song2016An,DBLP:conf/iccv/ZhangLXZXZ17,DBLP:conf/eccv/ZhangXLZGZ18}.
In order to better capture the relationship of different joints in the spatial domain or dependency of different frames in the temporal domain,
a number of works utilized the attention mechanisms~\cite{Song2016An,DBLP:conf/eccv/ZhangXLZGZ18,DBLP:conf/cvpr/SiC0WT19} and
graph neural networks (GNNs)~\cite{DBLP:conf/aaai/YanXL18,Shi_DGNN_2019_CVPR,Shi_AGCN_2019_CVPR,Li_2019_CVPR,DBLP:conf/cvpr/SiC0WT19} more recently.
Besides, there are various works using both skeleton joints and RGB videos as inputs for action recognition~\cite{PoTion, CMS_Networks, verma2020deep}. For example, Verma \textit{et al.}~\cite{verma2020deep} design two deep neural network (DNN) models for the multi-modal inputs respectively, and use a weight product model (WPM) to fuse the softmax scores obtained from the two DNNs.
Different from these works which deal with the input videos from the same dataset during training and testing phases,
we study a more practical and challenging UDA setting to deal with the samples across different datasets.
\subsection{Unsupervised Domain Adaptation}
Reducing the domain shift between the source and target datasets is the core of UDA.
In the past few years, a series of models have been built upon deep neural networks for learning domain-independent representations,
which show more promising results than early methods based on hand-crafted features~\cite{DBLP:conf/nips/HuangSGBS06,DA_TCA,gong2013connecting}.
Among these, one representative strategy is to design adaptation layers for aligning different distributions~\cite{JAN},
and another popular scheme is to include a domain discriminator sub-network for adversarial learning~\cite{DANN,JAN,MCDUDA,CDAN}.
More recently, there are several attempts on leveraging self-supervised learning for UDA~\cite{DBLP:conf/cvpr/CarlucciDBCT19,DA_SS}.
Under a multi-task learning paradigm,
they optimized the model with the supervision from the source domain, and the auxiliary self-supervision from both source and target domains.
Motivated by the success of these methods in the image domain,
we move a further step in the field of skeleton-based action recognition.
Note that our exploration is non-trivial since the intrinsic structure of a skeleton-based video is quite different from that of an image, and further generalization and adaptation are required.
\subsection{Video-based Domain Adaptation}
Compared with image-based domain adaptation, video-based domain adaptation is a seldom-explored field.
In the literature, a few works have been proposed for RGB videos,
by foreground-weighted histogram decomposition~\cite{HAR_FHD},
or performing adversarial learning on the video features~\cite{DBLP:conf/bmvc/JamalNDV18}.
More recently, Chen et al.~\cite{video_DA} devised TA$^3$N by introducing a temporal relation module and domain attention mechanism.
For skeleton-based video, Tas et al.~\cite{DBLP:conf/bmvc/TasK18}
and Lin et al.~\cite{DBLP:conf/mm/LinSY020} study the supervised domain adaptation and transfer learning settings, where the action labels of the target dataset are required at the training or fine-tuning stages respectively.
The most relevant work to ours is GINs~\cite{DBLP:journals/tcsv/gins}, which also studied the problem of cross-dataset skeleton-based action recognition under the UDA setting.
In comparison, we propose a setting with three larger-scale datasets, and devise a self-supervised learning framework rather than the adversarial-based method used in~\cite{DBLP:journals/tcsv/gins}. Experimental results also show the advantage of our method.
\subsection{Self-supervised Learning}
The paradigm of self-supervised learning is to design auxiliary task(s) with the supervision of the data itself,
for example, predicting spatial context~\cite{DBLP:conf/iccv/DoerschGE15} or image rotation~\cite{DBLP:conf/iclr/GidarisSK18}, solving jigsaw puzzles~\cite{DBLP:conf/eccv/NorooziF16} and many others~\cite{DBLP:conf/eccv/ZhangIE16,DBLP:conf/cvpr/LarssonMS17,DBLP:conf/cvpr/PathakKDDE16,DBLP:conf/iccv/DoerschZ17,DBLP:conf/cvpr/He0WXG20,DBLP:journals/corr/abs-2002-05709}.
There have been a number of self-supervised learning methods for RGB videos,
according to the information of ordering~\cite{DBLP:conf/eccv/MisraZH16,DBLP:conf/cvpr/FernandoBGG17,DBLP:conf/iccv/LeeHS017}, geometry~\cite{DBLP:conf/cvpr/GanGLSG18}, correspondence~\cite{wang2019learning,DBLP:conf/cvpr/DwibediATSZ19,DBLP:conf/cvpr/LaiLX20}, motion and appearance statistics~\cite{DBLP:conf/cvpr/WangJBHLL19} or spatio-temporal cubic puzzles~\cite{STCP}.
Compared with these works, besides temporal ordering,
we further explore the relationship of different human body parts for skeleton-based videos by learning from spatial Cubism,
and leverage the advantage of self-supervised learning to seek a better alignment between the source and target domains.
\begin{figure*}[!t]
\includegraphics[width = \linewidth]{figures/overview.jpg}
\caption{The pipeline of our proposed method.
(a) During the training stage, the videos from the source and target domains are paired. The videos are randomly chosen for the Cubism transformations and the permuted videos are fed into the networks together with the ordered videos. The network will be optimized according to the Cubism ordering loss from both two domains and the action classification loss only from source domain. The temporal and spatial streams are optimized separately.
(b) During the test stage, the videos from the target domain are fed into the networks to acquire the prediction. The final label is deduced by fusing the predicted scores from two streams.
}
\label{fig:overview}
\end{figure*}
\section{Approach}
\subsection{Problem Formulation}
We use $\mathcal{D}_s = \{(\boldsymbol{x}_i^s,y_i^s)\}_{i=1}^{n_s}$ to denote a source domain,
which contains skeleton-based action videos $\{\boldsymbol{x}_i^s\}_{i=1}^{n_s}$ and their action labels $\{y_i^s\}_{i=1}^{n_s}$. Here $i$ denotes the index of the $i$-{th} video, $n_s$ means the number of videos, and the subscript of $n_s$ denotes the source domain.
Similarly, the target domain is defined as $\mathcal{D}_t = \{\boldsymbol{x}_j^t\}_{j=1}^{n_t}$,
where the action labels are unavailable during the network optimization but can be used for the performance evaluation.
Since the videos in the source and target domains are from different datasets,
they correspond to two different joint distributions as $P(\boldsymbol{X}_s,\boldsymbol{Y}_s)$ and $Q(\boldsymbol{X}_t,\boldsymbol{Y}_t)$.
The training should be performed on the source domain with the action labels, and a split of target domain data where the action labels are unavailable.
The testing process is based on the other split of the target domain data which is invisible during the training phase. See section \ref{section:expe} for more details.
There is a more challenging cross-dataset setting which assumes that data from the target domain are totally unavailable. The experimental results under this setting are introduced in section \ref{without_target_domain}.
\subsection{Pipeline Overview}
The motivation of this work is to leverage self-supervised learning and Cubism transformation to reduce the domain shift between the skeleton-based action videos from the source and target datasets.
The concept ``Cubism'' here is originally an art genre from the early 20th century, which breaks and reassembles the objects to convey a greater context.
Inspired by this idea and the progress in self-supervised learning~\cite{DBLP:conf/cvpr/FernandoBGG17,DBLP:conf/iccv/LeeHS017,DBLP:conf/eccv/MisraZH16}, we design two auxiliary pretext tasks, named temporal Cubism (section \ref{subsection:tem_cub}) and spatial Cubism (section \ref{subsection:spa_cub}), for skeleton-based action recognition under a new cross-dataset scenario. Accordingly, we devise two networks, the Tem-Cub Network and the Spa-Cub Network, where ``Tem-Cub'' and ``Spa-Cub'' are abbreviations of ``temporal Cubism'' and ``spatial Cubism''.
During the training phase, each network is optimized based on one of the self-supervised tasks and the main prediction task jointly.
At the inference period, the final result is obtained by fusing the prediction scores of two networks.
We elaborate on each stage of our pipeline in detail as follows.
\begin{figure}[!t]
\centering
\includegraphics[width =0.85\linewidth]{figures/temporal_jigsaw.png}
\caption{Learning from temporal Cubism.
Given a video from the source or target domain,
we divide it into $N$ segments ($N$=3) and permute them to generate a new sample with a new ordering label.
We send the original and permuted data into a backbone simultaneously.
The network parameters are optimized in a multi-task learning framework with a total loss of two terms:
(1) the cross-entropy loss between the predicted action scores and the action labels in the source domain, and
(2) the cross-entropy loss between the predicted ordering scores and the ordering labels in both source and target domains.
}
\label{fig:tem_jig}
\end{figure}
\subsection{Learning from Temporal Cubism}
\label{subsection:tem_cub}
Fig. \ref{fig:tem_jig} illustrates our proposed strategy of learning from temporal Cubism. The term temporal Cubism here means that we shuffle a video in the temporal dimension and reorganize its segments in a new order.
Mathematically, we organize each skeleton-based action video as a representation with the size of $F\times K \times D$, where $F$ denotes the number of frames, $K$ is the number of joints, and $D$ represents the dimension of the joint coordinates.
Given a video sample $\boldsymbol{x}$, we first divide it into $N$ segments uniformly in the temporal domain as
$
\boldsymbol{x} = \begin{bmatrix}(\boldsymbol{x}^{(1)})^T &
(\boldsymbol{x}^{(2)})^T &
\dots &
(\boldsymbol{x}^{(N)})^T\end{bmatrix}^T
$,
where we choose $N=3$ in this paper empirically.
Then a new video $\boldsymbol{x}_{tem}$ with a corresponding permutation label $l_{tem}$ in the temporal domain is acquired by permuting the segments. This transformation $\varphi_{tem}$ can be represented by a partitioned permutation matrix $\boldsymbol{P}_{tem}$ as:
{\setlength\abovedisplayskip{1pt}
\setlength\belowdisplayskip{1pt}
\begin{equation}
\boldsymbol{x}_{tem} =
\varphi_{tem}(\boldsymbol{x}) =
\boldsymbol{P}_{tem}\boldsymbol{x} .
\end{equation}}
There is exactly one identity matrix in each block row and each block column, and the remaining blocks are zero matrices.
For example, if the permutation is to exchange the order of the first and the third segments, the transformation can be written as below:
\begin{equation}
\boldsymbol{x}_{tem}
= \left[\begin{array}{c c c}
\boldsymbol{0} & \boldsymbol{0} & \boldsymbol{I} \\
\boldsymbol{0} & \boldsymbol{I} & \boldsymbol{0} \\
\boldsymbol{I} & \boldsymbol{0} & \boldsymbol{0}
\end{array}\right] \begin{bmatrix}
\boldsymbol{x}^{(1)} \\
\boldsymbol{x}^{(2)} \\
\boldsymbol{x}^{(3)}
\end{bmatrix} .
\end{equation}
In this way, we build the permuted videos and form the augmented source and target datasets which are presented as follows:
\begin{align}\label{eq:domain}
\mathcal{D}_s^\prime = \{(\boldsymbol{x}_{tem,i}^s, y_i^s, l_{tem,i}^s)\}_{i = 1}^{n_s^\prime},
\quad \textit{where} \; \boldsymbol{x}_{tem,i}^s = \boldsymbol{x}^s~\mbox{or}~\boldsymbol{x}_{tem,i}^s = \varphi_{tem}(\boldsymbol{x}^s), \; \boldsymbol{x}^s \in \mathcal{D}_s. \\ \nonumber
\mathcal{D}_t^\prime = \{(\boldsymbol{x}_{tem,i}^t, l_{tem,i}^t)\}_{i = 1}^{n_t^\prime},
\quad \textit{where} \; \boldsymbol{x}_{tem,i}^t = \boldsymbol{x}^t~\mbox{or}~\boldsymbol{x}_{tem,i}^t = \varphi_{tem}(\boldsymbol{x}^t), \; \boldsymbol{x}^t \in \mathcal{D}_t.
\end{align}
Here $n_s^\prime$ and $n_t^\prime$ denote the overall numbers of videos in the augmented source dataset and the augmented target dataset, respectively. For the $i$-th video, $y_i$ and $l_{tem,i}$ represent the action label and the temporal Cubism permutation label, respectively. Based on the augmented datasets, we design an auxiliary classification task in the temporal domain, which guides the network to learn to recognize the temporal permutation.
During training, the ordered and permuted video samples are packed into batches with a certain ratio which is dependent on a hyper-parameter $p_t$ indicating the percentage of the ordered samples in a batch. This hyper-parameter will be studied in section \ref{subsection:expe_res_and_anal}. Moreover, the permutation way is chosen with equal probability so that transformed videos with different ordering labels are of an identical proportion.
The mixed batches are fed into a CNN-based backbone (we detail it in section \ref{subsection:implement_detail}) followed by two parallel fully-connected classifiers. $f_{cls}$ takes the features of the ordered and disordered samples from the source domain and predicts the action classes, while $f_{tem}$ aims to recognize the ordering label for the samples from both the source and target domains. Two kinds of losses are computed and combined after the classifiers to optimize the network.
Comprising these two parts, the total loss $J_{tem\_total}$ can be formalized as:
\begin{align}\label{eq:tem_loss}
J_{tem\_total} &= J_c + \lambda_t J_{tem} \nonumber \\
&= \frac{1}{n_s^\prime} \sum_{(\boldsymbol{x}_{tem,i}, y_i) \in \mathcal{D}_s^\prime } J_c( f_{cls}(\boldsymbol{x}_{tem,i}|\theta_b,\theta_{cls}),y_i) \\
&+ \frac{\lambda_t}{n_s^\prime+n_t^\prime} \sum_{(\boldsymbol{x}_{tem,i}, l_{tem,i}) \atop \in \mathcal{D}_s^\prime \cup \mathcal{D}_t^\prime }J_{tem}( f_{tem}(\boldsymbol{x}_{tem,i}|\theta_b,\theta_{tem}),l_{tem,i}) . \nonumber
\end{align}
Here we adopt the cross-entropy loss for $J_c$ and $J_{tem}$.
$f_{cls}$ and $f_{tem}$ are the softmax scores of action classes and temporal permutation classes.
$\theta_b, \theta_{cls}$ and $\theta_{tem}$ denote the trainable parameters of the backbone, action recognition fc layer and temporal permutation recognition fc layer respectively.
Here $\lambda_t$ is a hyper-parameter to balance the losses of the main task and the self-supervised learning task, which will be studied in section \ref{subsection:expe_res_and_anal} as well.
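For concreteness, a minimal Python sketch of the temporal Cubism transformation is given below. It assumes that a skeleton clip is stored as an array of shape $F\times K\times D$ and that the ordering label is simply an index into the $3!=6$ possible segment orderings; these encoding choices are illustrative and not the only possible implementation.
\begin{verbatim}
import itertools
import numpy as np

# All 3! = 6 orderings of the N = 3 temporal segments; index 0 is the identity.
TEM_PERMS = list(itertools.permutations(range(3)))

def temporal_cubism(x, perm_label):
    """Permute the temporal segments of a skeleton clip.

    x          : array of shape (F, K, D) -- frames, joints, coordinate dim.
    perm_label : index into TEM_PERMS; 0 keeps the original frame order.
    """
    segments = np.array_split(x, 3, axis=0)     # N = 3 uniform segments
    order = TEM_PERMS[perm_label]
    return np.concatenate([segments[i] for i in order], axis=0)

# Example: build a permuted sample and its ordering label for the pretext task.
clip = np.random.randn(64, 25, 3)               # 64 frames, 25 joints, 3D
label = np.random.randint(len(TEM_PERMS))
x_tem = temporal_cubism(clip, label)
\end{verbatim}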
\begin{figure*}[!t]
\includegraphics[width = \linewidth]{figures/spatial.png}
\caption{Spatial Cubism.
Given a suite of skeleton, we colored the left part with orange and the right with blue.
We build the new samples by directly swapping the coordinates of the two arms or the two legs, which results in an uncoordinated body pose.
This transformation is implemented by swapping the order of the corresponding elements stored in the linear list.
}
\label{fig:spatial}
\end{figure*}
\subsection{Learning from Spatial Cubism}
\label{subsection:spa_cub}
As shown in Fig. \ref{fig:spatial},
we design a new self-supervised classification task based on the spatial Cubism among the different body parts.
Specifically, for a skeleton-based action video $\boldsymbol{x}$ defined in the last subsection,
we organize the body parts according to the following ordered list:
$
\boldsymbol{x} =
\begin{bmatrix}\boldsymbol{x}^{(t)},
\boldsymbol{x}^{(la)},
\boldsymbol{x}^{(ra)},
\boldsymbol{x}^{(ll)},
\boldsymbol{x}^{(rl)}\end{bmatrix}
$.
The five blocks correspond to the trunk, left arm, right arm, left leg and right leg, respectively.
Similar to the temporal Cubism,
we can obtain a new sample by performing spatial transformation $\varphi_{spa}$ with another permutation matrix
$\boldsymbol{P}_{spa}$ as:
\begin{equation}
\boldsymbol{x}_{spa} = \varphi_{spa}(\boldsymbol{x}) = \boldsymbol{x}\boldsymbol{P}_{spa} .
\end{equation}
Here we design two concrete instances for $\boldsymbol{P}_{spa}$ as $\boldsymbol{P}_{a}$ and $\boldsymbol{P}_{\ell}$ to swap the coordinates of the joints of the arms and legs respectively:
\begin{equation}
\begin{aligned}
\boldsymbol{P}_{a} = \left[\begin{array}{ccccc}
\boldsymbol{I} & \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0} \\
\boldsymbol{0} & \boldsymbol{0} & \boldsymbol{I} & \boldsymbol{0} & \boldsymbol{0} \\
\boldsymbol{0} & \boldsymbol{I} & \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0} \\
\boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{I} & \boldsymbol{0} \\
\boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{I}
\end{array}\right],
\quad
\boldsymbol{P}_{\ell} = \left[\begin{array}{ccccc}
\boldsymbol{I} & \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0} \\
\boldsymbol{0} & \boldsymbol{I} & \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0} \\
\boldsymbol{0} & \boldsymbol{0} & \boldsymbol{I} & \boldsymbol{0} & \boldsymbol{0} \\
\boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{I} \\
\boldsymbol{0} & \boldsymbol{0} & \boldsymbol{0} & \boldsymbol{I} & \boldsymbol{0}
\end{array}\right] .
\end{aligned}
\end{equation}
Through these transformations, the skeleton-based video conveys a screwy action, which we refer to as spatial Cubism.
By learning to discover these,
the network would have a better generalization ability on the spatial domain.
Similar to that of the temporal Cubism, we construct the augmented source dataset $\mathcal{D}''_s$ and target dataset $\mathcal{D}''_t$ as follows:
\begin{align}\label{eq:domain_spa}
\mathcal{D}''_s = \{(\boldsymbol{x}_{spa,i}^s, y_i^s, l_{spa,i}^s)\}_{i = 1}^{n''_s},
\quad \textit{where} \; \boldsymbol{x}_{spa,i}^s = \boldsymbol{x}^s~\mbox{or}~\boldsymbol{x}_{spa,i}^s = \varphi_{spa}(\boldsymbol{x}^s), \; \boldsymbol{x}^s \in \mathcal{D}_s. \\ \nonumber
\mathcal{D}''_t = \{(\boldsymbol{x}_{spa,i}^t, l_{spa,i}^t)\}_{i = 1}^{n''_t},
\quad \textit{where} \; \boldsymbol{x}_{spa,i}^t = \boldsymbol{x}^t~\mbox{or}~\boldsymbol{x}_{spa,i}^t = \varphi_{spa}(\boldsymbol{x}^t), \; \boldsymbol{x}^t \in \mathcal{D}_t .
\end{align}
We introduce a hyper-parameter $p_s$ to indicate the percentage of the ordered samples in a batch during the training phase.
The total loss $\textbf{\textit{J}}_{spa\_total}$, in this case, could be formalized as:
\begin{equation} \label{eq:spa_loss}
\begin{aligned}
J_{spa\_total} &= J_c + \lambda_s J_{spa} \\
&= \frac{1}{n''_s} \sum_{(\boldsymbol{x}_{spa,i}, y_i) \in \mathcal{D}''_s} J_c(f_{cls}(\boldsymbol{x}_{spa,i} | \theta_b, \theta_{cls}), y_i) \\
&+ \frac{\lambda_s}{n''_s+n''_t} \sum_{(\boldsymbol{x}_{spa,i}, l_{spa,i}) \atop \in \mathcal{D}''_s \cup \mathcal{D}''_t} J_{spa}(f_{spa}(\boldsymbol{x}_{spa,i} | \theta_b, \theta_{spa}), l_{spa,i}) .
\end{aligned}
\end{equation}
The variables in Equations (\ref{eq:domain_spa}) and (\ref{eq:spa_loss}) have definitions similar to those in Equations (\ref{eq:domain}) and (\ref{eq:tem_loss}).
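A minimal Python sketch of the spatial Cubism transformation is shown below. It assumes that the joints have already been grouped into the five body-part blocks and that the left/right index lists are ordered so that corresponding joints (shoulder--shoulder, elbow--elbow, etc.) are aligned; the concrete joint indices depend on the skeleton layout of each dataset and are therefore left to the user.
\begin{verbatim}
import numpy as np

def spatial_cubism(x, part_idx, swap="arms"):
    """Swap the joint coordinates of the two arms (P_a) or the two legs (P_l).

    x        : array of shape (F, K, D).
    part_idx : dict mapping part names ("left_arm", "right_arm", "left_leg",
               "right_leg") to equally long, correspondingly ordered joint
               index lists for the chosen skeleton layout.
    """
    a, b = (("left_arm", "right_arm") if swap == "arms"
            else ("left_leg", "right_leg"))
    x_spa = x.copy()
    x_spa[:, part_idx[a], :] = x[:, part_idx[b], :]   # left block <- right block
    x_spa[:, part_idx[b], :] = x[:, part_idx[a], :]   # right block <- left block
    return x_spa
\end{verbatim}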
We present a mathematical algorithm of our method in Algorithm \ref{alg:train TS-Cub}.
\begin{algorithm}[t]
\DontPrintSemicolon
\KwIn{skeleton-based videos from the source domain $\mathcal{D}_s = \{(\boldsymbol{x}_i^s,y_i^s)\}_{i=1}^{n_s}$ and the target domain $\mathcal{D}_t = \{\boldsymbol{x}_j^t\}_{j=1}^{n_t}$, number of training epochs $\Gamma$.\;
}
\KwOut{The weights of the Tem-Cub Network $\theta_t$ and Spa-Cub Network $\theta_s$.}
\SetAlgoLined
// \emph{Training the Tem-Cub Network:} \;
Perform temporal transformation to obtain $\mathcal{D}'_s$, $\mathcal{D}'_t$ based on $\mathcal{D}_s$, $\mathcal{D}_t$ and Eqn. (\ref{eq:domain}). \;
Initialize $\theta_t$.\;
\For{$k \leftarrow$ 1, 2, ..., $\Gamma$}
{
Feed the input data through Tem-Cub Network. \;
Calculate the objective function $\textbf{\textit{J}}_{tem\_total}^k$ at the $k$-th epoch by Eqn. (\ref{eq:tem_loss}). \;
Update $\theta_t$ by back propagation.\;
}
// \emph{Training the Spa-Cub Network:} \;
Perform spatial transformation to obtain $\mathcal{D}''_s$, $\mathcal{D}''_t$ based on $\mathcal{D}_s$, $\mathcal{D}_t$ and Eqn. (\ref{eq:domain_spa}). \;
Initialize $\theta_s$.\;
\For{$k \leftarrow$ 1, 2, ..., $\Gamma$}
{
Feed the input data through Spa-Cub Network. \;
Calculate the objective function $\textbf{\textit{J}}_{spa\_total}^k$ at the $k$-th epoch by Eqn. (\ref{eq:spa_loss}). \;
Update $\theta_s$ by back propagation.\;
}
\textbf{Return:} The parameters $\theta_t$ and $\theta_s$ of the Tem-Cub Network and Spa-Cub Network.
\caption{Training Procedure of Our TS-Cub} \label{alg:train TS-Cub}
\end{algorithm}
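As an illustration of how the two terms of Eqn. (\ref{eq:tem_loss}) are combined within one optimization step of Algorithm \ref{alg:train TS-Cub}, we give a PyTorch-style sketch below; the module and tensor names are placeholders, and marking unlabeled target-domain clips with an action label of $-1$ is our own convention. The Spa-Cub branch is trained analogously with the spatial ordering labels and Eqn. (\ref{eq:spa_loss}).
\begin{verbatim}
import torch.nn.functional as F

def tem_cub_step(backbone, cls_head, tem_head, batch, lambda_t, optimizer):
    """One training step of the Tem-Cub branch.

    batch["x"]        : (B, F, K, D) mixed source/target clips after Cubism.
    batch["y_action"] : (B,) action labels; -1 marks target-domain clips.
    batch["y_order"]  : (B,) temporal ordering labels for all clips.
    """
    feat = backbone(batch["x"])                 # shared features
    logits_cls = cls_head(feat)
    logits_tem = tem_head(feat)
    src = batch["y_action"] >= 0                # labeled source samples only
    loss_c = F.cross_entropy(logits_cls[src], batch["y_action"][src])
    loss_tem = F.cross_entropy(logits_tem, batch["y_order"])
    loss = loss_c + lambda_t * loss_tem         # J_c + lambda_t * J_tem
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
\end{verbatim}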
\subsection{Two-Stream Fusion}
In order to further boost performance, we explore several approaches to couple the temporal and spatial Cubism transforms. One is to apply the two kinds of transforms simultaneously and therefore divide the videos into finer-grained atoms (see section \ref{subsection:expe_res_and_anal} for details). However, this results in a more complex task, which makes the network harder to optimize and increases the cost of data pre-processing.
Though feature-level fusion is a common two-stream fusion strategy, we do not apply it in this paper.
This is because spatial and temporal streams implement different auxiliary tasks, and feature-level fusion will make it much more difficult to recognize the ordering label.
Actually, as explored by several previous works~\cite{DBLP:conf/nips/SimonyanZ14,DBLP:conf/eccv/WangXW0LTG16}, it is more effective and efficient to separately deal with the temporal and spatial information and combine them after the network.
Hence, we explore several approaches to fuse softmax scores from the temporal and spatial streams during the inference stage, \textit{e.g.,} Weighted Arithmetic Mean (WAM), Weighted Root Squared Mean (WRSM), Weighted Geometric Mean (WGM) and Max Pooling (MP). The experimental results and more details are shown in Table \ref{tab:other_self_super_tasks} in later section.
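As an illustration, the sketch below fuses the per-class softmax scores of the two streams under the four rules mentioned above; the equal default weight and the exact form taken for the WRSM rule are our own assumptions.
\begin{verbatim}
import numpy as np

def fuse_scores(s_tem, s_spa, rule="wam", w=0.5):
    """Fuse softmax scores of the temporal and spatial streams.

    s_tem, s_spa : arrays of shape (num_classes,).
    rule         : "wam" (weighted arithmetic mean), "wrsm" (weighted root
                   squared mean), "wgm" (weighted geometric mean), or
                   "mp" (element-wise max pooling).
    w            : weight of the temporal stream; the spatial one gets 1 - w.
    """
    if rule == "wam":
        fused = w * s_tem + (1.0 - w) * s_spa
    elif rule == "wrsm":
        fused = np.sqrt(w * s_tem ** 2 + (1.0 - w) * s_spa ** 2)
    elif rule == "wgm":
        fused = s_tem ** w * s_spa ** (1.0 - w)
    elif rule == "mp":
        fused = np.maximum(s_tem, s_spa)
    else:
        raise ValueError("unknown fusion rule: " + rule)
    return int(np.argmax(fused))        # predicted action label
\end{verbatim}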
\section{Experiments}
\label{section:expe}
\subsection{Datasets and Experiment Settings}
To our best knowledge, there are very few benchmarks for cross-dataset skeleton-based action recognition. Although the recent work~\cite{DBLP:journals/tcsv/gins} has proposed two benchmarks for this problem, they mainly have two drawbacks. The first is that the scales of these datasets are relatively small, and the second is that they adopt a ``test-time training'' strategy~\cite{SunICML20}, which utilizes the test data (without using their action labels) during training. This might not be practical in some real-world scenarios.
\begin{table*}[!t]
\small
\centering
\caption{Comparison of the proposed unsupervised domain adaptation setting for skeleton-based action recognition with that of the previous work~\cite{DBLP:journals/tcsv/gins}. Here ``Nm'' (m = 8, 51, 12) denotes the m-action subset of NTU RGB+D.}
\setlength{\tabcolsep}{7pt}
\begin{tabular}{l | c c c c |c c }
\hline
& \multicolumn{4}{c|}{Training} & \multicolumn{2}{c}{Testing}\\
& Source& \# Clips & Target & \# Clips & Target & \# Clips\\
\hline
NTU$\to$SBU~\cite{DBLP:journals/tcsv/gins} & N8 & 7513 & SBU & 282 & SBU & 282 \\
ORGBD$\to$MSRA3D~\cite{DBLP:journals/tcsv/gins} & ORGBD & 240 &MSRA3D & 100 & MSRA3D & 100 \\
\hline
P$\to$N-CV (Ours) & P & 21544 & N51-CV-train & 31989 & N51-CV-test& 16092\\
P$\to$N-CS (Ours) & P &21544 & N51-CS-train & 34068 & N51-CS-test& 14013\\
N$\to$P-CV (Ours) & N51 & 48081 & P-CV-train & 14356 & P-CV-test&7188 \\
N$\to$P-CS (Ours) & N51 & 48081 & P-CS-train & 18840 & P-CS-test&2704 \\
\hline
N$\to$K (Ours) & N12 & 11256 & K-train& 8912 & K-test & 787 \\
K$\to$N (Ours) & K & 9699 &N12-CV-train&7476 & N12-CV-test & 3780 \\
\hline
\end{tabular}
\label{tab:eval_pro}
\end{table*}
To address these issues,
we define two groups of experiment settings crossing three large-scale datasets in this paper: NTU RGB+D~\cite{DBLP:conf/cvpr/ShahroudyLNW16}, PKU-MMD~\cite{DBLP:conf/mm/LiuHLS017}, and Kinetics~\cite{DBLP:journals/corr/KayCSZHVVGBNSZ17}.
We present a comparison of the proposed settings with the previous work~\cite{DBLP:journals/tcsv/gins} in Table \ref{tab:eval_pro}. We detail the corresponding datasets and the experimental settings as follows.
\noindent \textbf{NTU RGB+D}: The NTU RGB+D dataset is a large-scale
dataset for evaluating skeleton-based action recognition.
The dataset contains 56,880 skeleton-based videos of
60 action categories. There are two evaluation protocols as
cross-subject (CS) and cross-view (CV). Under CS setting,
there are videos of 40 subjects used for training while the
videos of the rest 20 subjects are used for the test. Under
CV setting, The videos from view 2 and 3 are used for
training, while the videos from view 1 are used for the test.
\noindent \textbf{PKU-MMD}: There are 1,076 long videos in the PKU-MMD
dataset, which is originally presented for skeleton-based
action detection. In order to construct a dataset for
unsupervised domain adaptation on action recognition, we
trim each long video according to their temporal annotations
and obtain 21,544 video clips of 51 actions. Similar to
the NTU RGB+D dataset, the cross-subject and cross-view
settings are recommended for PKU-MMD dataset. Under
CS setting, there are 18,840 training videos and 2,704 test
videos. Under CV setting, there are 14,356 training videos
and 7,188 test videos.
\noindent \textbf{Kinetics}: Kinetics is a large-scale dataset for action
recognition containing about 300,000 video clips collected from Youtube. Each clip in Kinetics contains around 300
frames. These video clips cover 400 action categories and
under each category, there are more than 400 samples for
training and about 100 for the test. The original Kinetics
dataset releases only raw RGB sequences. Hence we
adopt the estimated poses provided by~\cite{DBLP:conf/aaai/YanXL18} extracted by
OpenPose~\cite{CaoZhe2018_OpenPose} to study the skeleton-based UDA task.
\noindent \textbf{P$\leftrightarrow$N.} We perform unsupervised domain adaptation between PKU-MMD and NTU RGB+D. 51 action categories are extracted from NTU RGB+D to pair with the actions in PKU-MMD.
Both CV and CS settings are adopted for evaluation.
For clarification, we use \textit{N51} to denote the 51-action subset of NTU RGB+D and \textit{P} to denote PKU-MMD. The infixes \textit{CV}, \textit{CS} and suffixes \textit{train}, \textit{test} are used to indicate the subset, \textit{e.g.}, \textit{N51-CS-train} denotes the training set of NTU RGB+D under the cross-subject setting. Due to the limited space, we show the paired action classes of P$\leftrightarrow$N on our project page.
\noindent \textbf{N$\leftrightarrow$K.} Experiments are carried out between NTU RGB+D and Kinetics as well.
We select 12 paired actions from NTU RGB+D and Kinetics for domain adaptation. As the estimated pose data on Kinetics are 2-dimensional, we extract the coordinates of x and y axes from NTU RGB+D to get a similar 2D skeleton.
The Kinetics subset is partitioned into the training and test subsets in accordance with the original division, while NTU RGB+D is used under the CV setting only.
Similarly, the subset of NTU RGB+D is marked as \textit{N12} and Kinetics is marked as \textit{K}. The suffixes \textit{train} and \textit{test} are used to indicate the subset as well.
As before, the paired action classes of N~\cite{DBLP:conf/cvpr/ShahroudyLNW16}$\leftrightarrow$K~\cite{DBLP:journals/corr/KayCSZHVVGBNSZ17} are presented on the project page.
In order to make a better evaluation of our method, we also conduct experiments on the
SBU Kinect Interaction dataset (SBU)~\cite{kiwon_hau3d12},
Online RGBD Action dataset (ORGBD)~\cite{DBLP:conf/accv/YuLY14} and MSRDaily Activity3D dataset (MSRDA3D)~\cite{DBLP:conf/cvpr/WangLWY12}, following the same settings proposed in the previous work~\cite{DBLP:journals/tcsv/gins}.
The experimental results and analysis are described in detail as below.
\subsection{Compared Methods} In the following section, we first conduct experiments and acquire the results on \textit{Source Only} and \textit{Target Only}.
\textit{Source Only} indicates a baseline method which trains a model in the source domain, and directly evaluates the testing data on the target domain without supervision.
\textit{Target Only} denotes to utilize the ground-truth action labels in the target domain for training, which provides an upper bound for this problem.
Besides, because there are very few models designed for skeleton-based action recognition under the UDA setting, we compare our model with some image-based UDA models (\textit{i.e.}, MMD, DANN, JAN, CDAN, BSP, MCC) and an RGB video-based model (\textit{i.e.}, TA$^{3}$N).
We replace those models' backbones with HCN and apply the same experimental setting as our model for a fair comparison.
Specifically, for TA$^{3}$N, besides replacing the feature extractor with HCN, we add a fully-connected layer between HCN and the spatial-temporal adversarial module in TA$^{3}$N to make them compatible.
Moreover, we also compare our method with GINs, which is the most related work on cross-dataset skeleton-based action recognition.
In our paper, there are mainly three kinds of information to be utilized for cross-dataset transfer. They are temporal information (T), spatial information (S) and their combination (TS), which are used for our Tem-Cub Network, Spa-Cub Network and TS-Cub Network respectively.
Most of the compared methods perform transfer based on the temporal-spatial features extracted by the HCN backbone (TS), except GINs~\cite{DBLP:journals/tcsv/gins}, which focuses on the relation of different joints in the spatial domain (S).
\begin{table}[t]
\centering
\caption{
Study on the PReLU activation function and Rotation processing.
Source domain: PKU-MMD (P) dataset. Target domain: NTU RGB+D (N51).}
\begin{tabular}{l c c}
\hline
Method & P$\to$N-CV & P$\to$N-CS\\
\hline
Source Only~\cite{DBLP:conf/ijcai/LiZXP18} & 51.9 & 47.9\\
HCN~\cite{DBLP:conf/ijcai/LiZXP18} & 50.9 & 45.8\\
\hline
HCN + PReLU & 51.9 & 47.9\\
HCN + Rot. & 53.4 & 50.3 \\
HCN + Rot. + PReLU & \textbf{54.9} & \textbf{50.5} \\
\hline
\end{tabular}
\label{table:baseline}
\end{table}
\subsection{Implementation Details}
\label{subsection:implement_detail}
We conduct experiments on a system with an Intel(R) Xeon(R) CPU E5-2698 v4 @ 2.00GHz.
We implement our method with the PyTorch toolbox and train the model on an Nvidia GTX 1080 Ti GPU. The number of training epochs $\Gamma$ is set to 400 for both Tem-Cub and Spa-Cub.
We adopt the HCN~\cite{DBLP:conf/ijcai/LiZXP18} as the backbone because of its effectiveness and efficiency.
In order to improve the performance, we make two modifications.
The PReLU~\cite{DBLP:conf/iccv/HeZRS15} is used as the non-linear activation function instead of the commonly used ReLU, and
a rotation pre-processing step is applied before the networks to alleviate the difficulty of bridging the domain gap induced by videos captured from different views.
Specifically, we rotate the skeleton around the z axis to make the connection of the joints \textit{right shoulder} and \textit{left shoulder} parallel with the x axis and the connection of \textit{spine base} and \textit{spine} parallel with the y axis.
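A minimal Python sketch of this rotation pre-processing is given below, assuming 3D joint coordinates and known indices for the \textit{right shoulder}, \textit{left shoulder}, \textit{spine base} and \textit{spine} joints. Using the first frame as the reference orientation for the whole clip, and realizing the spine alignment as a second rotation about the x axis, are our own simplifications of the procedure.
\begin{verbatim}
import numpy as np

def _rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def _rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rotate_skeleton(x, r_shoulder, l_shoulder, spine_base, spine):
    """Align a clip of shape (F, K, 3); joint indices depend on the dataset."""
    ref = x[0]
    # Rotate about the z axis so the shoulder line is parallel to the x axis.
    v = ref[l_shoulder] - ref[r_shoulder]
    x = x @ _rot_z(-np.arctan2(v[1], v[0])).T
    # Rotate about the x axis so the spine line is parallel to the y axis.
    u = x[0, spine] - x[0, spine_base]
    x = x @ _rot_x(-np.arctan2(u[2], u[1])).T
    return x
\end{verbatim}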
\subsection{Experimental results}
\label{section:main_results}
\begin{table}[t]
\centering
\caption{Comparison of the skeleton-based action recognition accuracy (\%) under the UDA setting between PKU-MMD (P) and NTU RGB+D (N51) datasets.}
\setlength{\tabcolsep}{2.3pt}
\begin{tabular}{l|cccc|c}
\hline
Method & P$\to$N-CV & P$\to$N-CS & N$\to$P-CV& N$\to$P-CS & year\\
\hline
Target Only & 91.3 & 84.6 & 95.7 & 93.1 & -\\
Source Only & 54.9 & 50.5 & 59.6 & 57.6 & -\\
\hline
MMD~\cite{DBLP:journals/corr/Long015} & 55.4 & 51.7 & 61.3 & 59.4 & 2015\\
DANN~\cite{DANN} & 58.1 & 52.4 & 61.9 & 58.8 & 2017\\
JAN~\cite{JAN} & 51.9 & 47.1 & 65.3 & 63.1 & 2017\\
CDAN~\cite{CDAN} & 54.9 & 51.1 & 63.8 & 61.3 & 2018\\
BSP~\cite{BSP} & 55.7 & 49.3 & 64.0 & 63.0 & 2019\\
TA$^3$N~\cite{video_DA} & 55.9 & 51.2 & 66.2 & 65.9 & 2019\\
GINs~\cite{DBLP:journals/tcsv/gins} & 44.9 & 40.0 & 54.7 & 52.4 & 2020\\
MCC~\cite{DBLP:conf/eccv/JinWLW20} & 56.1 & 52.2 & 64.2 & 61.7 & 2020 \\
\hline
Tem-Cub (Ours) & 58.3&52.5 & 65.2 & 62.8 & \\
Spa-Cub (Ours) & 56.5&51.0 & 61.2&59.3 & \\
TS-Cub (Ours) & \textbf{59.2}&\textbf{53.6} & \textbf{65.5}&\textbf{63.3} & \\
\hline
\end{tabular}
\label{tab:SAR_SOTA}
\end{table}
\noindent \textbf{Analysis of the Baseline Model:}
We adopt the HCN\cite{DBLP:conf/ijcai/LiZXP18} as the backbone and make two modifications that are introduced in the last subsection.
As shown in Table \ref{table:baseline}, both modifications improve the performance of HCN, and combining PReLU and rotation pre-processing yields a further gain. Hence, we refer to the improved HCN as the baseline in the sequel.
Though pre-processing can reduce the domain gap to some extent, there are still some phenomena that cannot be easily handled, for example, bridging the gap between 2D and 3D skeleton-based videos.
Moreover, conventional pre-processing methods, \textit{e.g.,} rotation and scale normalization, rely on several specific joints.
In skeleton-based videos, however, these joints might be inaccurate or missing (padded with zeros), which is a common phenomenon in skeleton-based datasets.
In this case, pre-processing might even cause negative effects.
Therefore, our proposed method has greater generalization ability than pre-processing.
\begin{figure*}[t]
\includegraphics[width =\linewidth]{figures/case_visual.png}
\caption{Visualization of two skeleton-based videos under the N$\to$P-CV setting and the predicted results of different methods.}
\label{fig:case_visual}
\end{figure*}
\noindent \textbf{Evaluation on PKU $\leftrightarrow$ NTU:}
We show our results compared with the state of the art under the P$\to$N and N$\to$P settings in Table \ref{tab:SAR_SOTA}.
We first observe the large difference between \textit{Target Only} and \textit{Source Only} caused by the domain shift (an accuracy drop of around 35\%),
which indicates the greater challenge of the UDA setting compared with the conventional fully-supervised setting.
Moreover, according to the results, our proposed method consistently improves the performance of skeleton-based unsupervised domain adaptation.
To be specific, our TS-Cub method exceeds the baseline by 4.3\% and 3.1\% under the CV and CS settings respectively, and achieves remarkable improvement over the state of the art: its performance is 3.8\%, 1.1\%, 7.3\%, 4.3\%, 3.5\% and 3.1\% higher than that of MMD, DANN, JAN, CDAN, BSP and MCC, respectively, under the P$\to$N-CV setting.
Meanwhile, our method consistently outperforms the state-of-the-art video domain adaptation model TA$^{3}$N by 4.4\%, 2.7\%, 1.5\% and 0.7\% under the four P$\leftrightarrow$N settings.
Figure~\ref{fig:case_visual} shows two cases where the compared methods fail but TS-Cub succeeds. For the two skeleton-based videos under the N$\to$P-CV setting, the ground-truth labels are \textit{eat meal/snack} and \textit{take a selfie}; while the compared methods give wrong predictions, our TS-Cub approach obtains the right action labels, which demonstrates its effectiveness for cross-dataset action recognition.
Besides, it is observed that our Tem-Cub method performs consistently better than Spa-Cub. We attribute this to the fact that it is difficult to discriminate between the ordered and permuted instances for actions with strong spatial symmetry, which affects the performance of the Spa-Cub method.
In conclusion, our proposed Cubism methods are able to improve the domain adaptation performance. Tem-Cub appears to be more effective than Spa-Cub, and combining the information from both streams further enhances the recognition accuracy.
\begin{table}[t]
\centering
\caption{Comparison of the skeleton-based action recognition accuracy (\%) under the UDA setting.
Source domain: NTU RGB+D (N12) dataset. Target domain: Kinetics (K) dataset.}
\begin{tabular}{l c c c}
\hline
Method & N$\to$K & K$\to$N & Year\\
\hline
Target Only & 40.8 & 89.1 & -\\
Source Only & 14.4 & 22.5 & -\\
\hline
MMD~\cite{DBLP:journals/corr/Long015} & 14.9 & 22.8 & 2015\\
DANN~\cite{DANN}& 16.5 & 22.9 & 2017\\
JAN~\cite{JAN} & 15.0 & 24.4 & 2017\\
CDAN~\cite{CDAN} & 14.9 & 24.0 & 2018\\
BSP\cite{BSP} & 14.6 & 16.6 & 2019\\
TA$^{3}$N~\cite{video_DA} & 15.6 & 25.6 & 2019\\
GINs~\cite{DBLP:journals/tcsv/gins} & 16.4 & 23.4 & 2020 \\
MCC~\cite{DBLP:conf/eccv/JinWLW20} & 15.2 & 26.2 & 2020 \\
\hline
Tem-Cub (Ours) & 16.4 & 29.3 & 2021\\
Spa-Cub (Ours) & 14.6 & 22.9 & 2021\\
TS-Cub (Ours) & \textbf{16.8} & \textbf{29.6} & 2021 \\
\hline
\end{tabular}
\label{table:SAR_SOTA_N&K}
\end{table}
\begin{figure*}[!t]
\includegraphics[width = \linewidth]{figures/conf.png}
\caption{Visualization of confusion matrices.
We show the ground truth labels and the predicted labels on the vertical axis and the horizontal axis respectively.
The first row displays the results of NTU $\to$ SBU, where the labels are: punching (1), exchanging something (2), hugging (3), handshaking (4), pushing (5), kicking (6), walking apart (7), walking towards (8).
The second row represents the results of ORGBD $\to$ MSRDA3D, where the labels are: drinking (1), eating (2), using laptop (3), making phone call (4), reading book (5).
}
\label{fig:conf}
\vspace{0.1cm}
\end{figure*}
\begin{table}[t]
\centering
\caption{Comparison of the skeleton-based action recognition accuracy (\%) under the unsupervised domain adaptation setting. * Since the target data is not divided into subsets for training and testing~\cite{DBLP:journals/tcsv/gins}, we could not evaluate the ``Target Only'' in this table.}
\vspace{-0.35 cm}
\begin{tabular}{l c c c}
\hline
Method & NTU $\to$ SBU & ORGBD $\to$ MSRDA3D & Year\\
\hline
Target Only* & - & - & -\\
Source Only & 35.8 & 48.3 & -\\
\hline
MMD~\cite{DBLP:journals/corr/Long015} &31.4 & 25.5 & 2015 \\
DANN~\cite{DANN} & 46.3 & 39.3 & 2017\\
JAN~\cite{JAN} &47.6 & 49.2 & 2017\\
CDAN~\cite{CDAN} &39.9 & 48.7 & 2018\\
GAKT~\cite{DBLP:conf/eccv/DingLSF18} & 31.8 & 48.4 & 2018 \\
BSP~\cite{BSP} & 32.4 & 41.3 & 2019\\
GINs~\cite{DBLP:journals/tcsv/gins} & 50.7 & 51.5 & 2020 \\
\hline
Tem-Cub (Ours) & 50.7 & 52.5 & 2021 \\
Spa-Cub (Ours) & 46.8 & \textbf{53.1} & 2021 \\
TS-Cub (Ours) & \textbf{51.1} & 53.0 & 2021 \\
\hline
\end{tabular}
\label{table:SAR_SOTA_S_M}
\end{table}
\noindent \textbf{Evaluation on NTU $\leftrightarrow$ Kinetics:}
We conducted further experiments on the NTU RGB+D and Kinetics datasets to verify the effectiveness of our methods and present the results in Table \ref{table:SAR_SOTA_N&K}.
Compared to P$\leftrightarrow$N, there is an even larger performance gap between \textit{Target Only} and \textit{Source Only} in N$\leftrightarrow$K.
As for our proposed methods, TS-Cub exceeds the baseline by 2.4\% and 7.1\% under the N$\to$K and K$\to$N settings respectively and also exceeds the best of the state-of-the-art methods.
Besides, it is noticeable that the adaptation performance from Kinetics to NTU RGB+D is significantly higher than that from NTU RGB+D to Kinetics, which should be attributed to the characteristics of the two underlying datasets.
Kinetics is captured from a wider domain, YouTube videos in the wild, while NTU RGB+D is collected under artificially predefined settings.
Accordingly, Kinetics conveys more information that might facilitate the adaptation, and NTU RGB+D carries less noise, which makes recognition less difficult.
For these reasons, adaptation from a more complex domain can result in relatively higher performance. This phenomenon also holds for the adaptation between PKU-MMD and NTU RGB+D.
\label{subsection:expe_res_and_anal}
\noindent \textbf{Evaluation on NTU $\to$ SBU:}
We then conducted experiments between the NTU and SBU datasets, following the setting in~\cite{DBLP:journals/tcsv/gins}.
We present the results in Table \ref{table:SAR_SOTA_S_M}, where the performances of the other methods are those reported in \cite{DBLP:journals/tcsv/gins}.
We find that most of the methods boost the accuracy compared with \textit{Source Only}, and our TS-Cub method achieves the highest accuracy of 51.1\%.
We further show the confusion matrices of different methods in the top row of Fig. \ref{fig:conf}. Our ST-Cubism shows strong performance on the actions of \textit{kicking} and \textit{hugging}.
\noindent \textbf{Evaluation on ORGBD $\to$ MSRDA3D:}
Table \ref{table:SAR_SOTA_S_M} shows the experimental results on the ORGBD $\to$ MSRDA3D setting~\cite{DBLP:journals/tcsv/gins}.
Compared with the other aforementioned datasets, the ORGBD and MSRDA3D datasets are rather small and contain only 5 categories.
Referring to Table \ref{table:SAR_SOTA_S_M}, we find that \textit{Source Only} exceeds almost all the compared methods. This may be attributed to the fact that adversarial learning methods require a large amount of training data. Meanwhile, the proposed methods achieve 52.5\% (Tem-Cub), 53.1\% (Spa-Cub) and 53.0\% (TS-Cub) respectively, surpassing all the other compared methods. This shows the robustness of our methods to limited training data in comparison with the mainstream adversarial methods.
We display the corresponding confusion matrices in the bottom row of Fig. \ref{fig:conf}. Our method recognizes the action \textit{reading book} well, but is sometimes confused between the actions \textit{eating} and \textit{using laptop}.
\begin{table}[t]
\centering
\caption{
Comparison of the average computational cost (ms) per video during inference.
Experiments are conducted under the P$\to$N-CV setting.}
\vspace{-0.35 cm}
\setlength{\tabcolsep}{0.7pt}
\begin{tabular}{l|cccccc||cc|c}
\hline
Method & MMD~\cite{DBLP:journals/corr/Long015} & DANN~\cite{DANN} & JAN~\cite{JAN} & CDAN~\cite{CDAN} & BSP~\cite{BSP} & GINs~\cite{DBLP:journals/tcsv/gins} & Tem-Cub & Spa-Cub & TS-Cub \\
\hline
Time & 0.106 & 0.153 & 0.083 & 0.093 & 0.305 & 0.241 & 0.112 & 0.109 & 0.211 \\
\hline
\end{tabular}
\label{tab:SAR_cost}
\vspace{-0.1 cm}
\end{table}
\noindent \textbf{Evaluation of the Computational Cost:}
We run experiments under the P$\to$N-CV setting and report the average computational cost per video in Table \ref{tab:SAR_cost}. As shown, our Tem-Cub and Spa-Cub networks take 0.112 ms and 0.109 ms respectively to predict the action label of a single video, achieving a speed comparable to the fastest method JAN~\cite{JAN}. Though our final model TS-Cub requires more time than some of the compared methods, it still satisfies real-time application requirements.
\begin{figure*}[t]
\includegraphics[width = 0.8\linewidth]{figures/p_lambda_v3.pdf}
\caption{Ablation study of the hyper-parameters in our Cubism methods: the loss trade-off parameter $\lambda$ and the data bias parameter $p$.
}
\label{fig:hyper}
\vspace{-0.1cm}
\end{figure*}
\subsection{Analysis of the TS-Cub}
\noindent \textbf{Analysis of the Hyper-parameters:}
We studied the impact of the hyper-parameters in our method, namely the weight parameter $\lambda$ in the multi-task loss and the ratio $p$ of ordered videos in a batch. These experiments are conducted under the P$\to$N-CV setting.
Fig. \ref{fig:hyper} (a) and (b) study the impact of $\lambda_t$ and $p_t$, the hyper-parameters of the temporal Cubism approach.
Fig. \ref{fig:hyper} (a) shows how the recognition accuracy changes when $p_t$ is fixed to 0.8 while $\lambda_t$ varies from 0 to 10. The best performance is obtained at $\lambda_t = 0.1$.
A value of $\lambda_t$ smaller than 0.1 weakens the loss of the self-supervised task and hence the effect of the self-supervision based method, although the performance still improves over the baseline thanks to the data augmentation. On the other hand, a larger value of $\lambda_t$ may let the auxiliary task overwhelm the main task, which also decreases the performance; as evidence, the performance drops sharply at $\lambda_t = 10$.
Moreover, we studied the impact of $p_t$: Fig. \ref{fig:hyper} (b) shows the accuracy obtained when $\lambda_t$ is fixed to 0.1 while $p_t$ varies from 0.2 to 1. Various values of $p_t$ consistently boost the adaptation performance over the baseline, and the best performance is achieved with $p_t = 0.8$.
Meanwhile, we notice that the performance becomes slightly poorer for relatively large values of $p_t$.
This may be explained by the fact that a larger $p_t$ weakens the role of the auxiliary task, and the method falls back to the baseline when $p_t = 1$.
Likewise, we examined the impact of the hyper-parameters of the spatial Cubism approach, $\lambda_s$ and $p_s$, and obtained the similar curves shown in Fig. \ref{fig:hyper} (c) and (d). The most effective parameters for the spatial Cubism approach appear to be $\lambda_s = 0.1$ and $p_s = 0.6$.
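To make the role of $\lambda$ concrete, the following is a minimal sketch of a joint objective of this form, assuming cross-entropy losses for both the main recognition head and the auxiliary permutation head; the function name and signature are illustrative and not taken from our released code.
\begin{verbatim}
import torch.nn.functional as F

def cubism_loss(action_logits, action_labels,
                perm_logits, perm_labels, lam=0.1):
    # Main action-recognition loss plus the auxiliary Cubism
    # (permutation-prediction) loss, weighted by lambda.
    main_loss = F.cross_entropy(action_logits, action_labels)
    aux_loss = F.cross_entropy(perm_logits, perm_labels)
    return main_loss + lam * aux_loss
\end{verbatim}
The ratio $p$ acts at the data level, controlling the fraction of videos in each batch that keep their original temporal and spatial order.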
\begin{figure}[t]
\includegraphics[width = \linewidth]{figures/tsne2x3.pdf}
\caption{The t-SNE visualization of the feature distribution of the samples from the source and target domains. The samples are extracted from eight action categories.
The plots on the left color the samples in the source domain with blue and the samples in the target domain with brown,
while the plots on the right use different colors to represent the points from the various action classes.
}
\label{fig:tsne}
\vspace{-0.15 cm}
\end{figure}
\noindent \textbf{Analysis of the segment number:}
In Section \ref{subsection:tem_cub}, a video is divided uniformly into $N$ ($N=3$) segments in the temporal domain to perform temporal Cubism.
Here we evaluate other values of $N$ ($N=2$ and $N=4$) under the P$\to$N-CV setting and present the results in Table~\ref{tab:N_seg}.
We observe that $N=3$ outperforms the other segment numbers.
This can be attributed to the fact that dividing a video into three segments yields an appropriate number of permutation categories, which keeps the auxiliary task neither too simple nor too complicated.
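As a rough illustration of why $N$ controls the number of permutation categories ($N!$ classes), the sketch below builds the permuted samples of the temporal Cubism task; it assumes the skeleton sequence is an array with frames on the first axis and is only a simplified view of the data-augmentation step.
\begin{verbatim}
import itertools
import numpy as np

def temporal_cubism(video, n_segments=3):
    # Split the sequence into n_segments along time and return every
    # permuted version together with its permutation label.
    segments = np.array_split(video, n_segments, axis=0)
    samples = []
    perms = itertools.permutations(range(n_segments))
    for label, order in enumerate(perms):
        permuted = np.concatenate([segments[i] for i in order], axis=0)
        samples.append((permuted, label))
    return samples  # 3! = 6 classes for N = 3, 24 classes for N = 4
\end{verbatim}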
\begin{table}[t]
\centering
\small
\setlength{\tabcolsep}{7mm}
\caption{The experimental results for dividing a video into $N$ segments under P$\to$N-CV setting.}
\begin{tabular}{ c|c|c|c}
\hline
Segment number $N$ & 2 & 3 & 4\\
\hline
Accuracy & 56.9 & \textbf{58.3} & 56.5 \\
\hline
\end{tabular}
\vspace{-0.1cm}
\label{tab:N_seg}
\end{table}
\noindent \textbf{t-SNE Visualization:}
We use t-SNE~\cite{MaatenLaurens_van_der2008JMLR_t-SNE} to visualize in Fig. \ref{fig:tsne} the feature distributions of both domains, extracted from experiments under the P$\to$N-CV setting.
The samples from the two domains are not well aligned by the baseline model, even though the source samples are finely grouped into clusters, whereas the distributions of the two domains become much closer when the temporal or spatial Cubism method is applied. These plots intuitively demonstrate the effectiveness of our methods.
\noindent \textbf{Analysis of Different Actions:}
Fig. \ref{fig:actions} illustrates the performance of TS-Cub on the individual action categories compared to the baseline.
Our method achieves consistent improvement on most of the action categories, with the largest gains on the actions \textit{cross hand in front}, \textit{give something to other person}, \textit{shake hand}, \textit{jump up} and \textit{touch chest}. On these actions, the baseline achieves relatively poor performance, and our method manages to enhance the adaptation performance.
On the other hand, our TS-Cub fails to reach the baseline on the actions \textit{eat meal/snack}, \textit{push other person}, \textit{take off hat/cap} and \textit{type on a keyboard}. In fact, the videos of \textit{eat meal/snack} have a considerably longer duration, and the videos of \textit{push other person} a shorter duration, than the videos from other categories, which may bring extra difficulty to the temporal stream. The actions \textit{take off hat/cap} and \textit{type on a keyboard} involve rather subtle spatial features that may confuse the spatial stream.
For both the baseline and our proposed TS-Cub approach, we find that they fail to recognize the action
\textit{put something inside pocket}. This is because in the NTU dataset this action involves two people (one person puts something inside the pocket of another person), whereas in the PKU dataset it is a single-person action (one person puts something inside his/her own pocket). This failure case suggests a limitation of using skeleton-based data as input for recognizing actions that involve a semantic object (\textit{e.g.,} interaction with the pocket). This issue could be tackled by further leveraging the RGB modality, an interesting direction we will explore in the future.
\begin{figure*}[t]
\includegraphics[width = \linewidth]{figures/different_actions.pdf}
\caption{The action recognition accuracy on the individual actions under UDA setting from NTU RGB+D to PKU-MMD (CV). The action label can be best viewed by zooming the PDF file.}
\label{fig:actions}
\end{figure*}
\begin{table}[t]
\centering
\caption{Comparison of the skeleton-based action recognition accuracy (\%) of single actions and interacted actions under the N$\to$P-CV setting.}
\setlength{\tabcolsep}{2.3pt}
\vspace{-0.15 cm}
\begin{tabular}{l|c|c|c}
\hline
Method & single action & interacted action & overall\\
\hline
Baseline (source only) & 59.1 & 66.1 & 59.6 \\
\hline
Tem-Cub (Ours) & 64.7 & 72.0 & 65.2\\
Spa-Cub (Ours) & 61.2 & 71.8 & 61.2\\
TS-Cub (Ours) & 64.8 & 73.0 & 65.5\\
\hline
\end{tabular}
\vspace{-0.2cm}
\label{tab:single_multiple}
\end{table}
\noindent \textbf{Analysis of the Number of People:}
In the NTU and PKU datasets, there are a number of interacted actions that involve multiple people in each video (\textit{e.g.,} \textit{shaking hands}, \textit{hugging}, \textit{etc}.).
For these actions, we follow~\cite{DBLP:conf/ijcai/LiZXP18} and apply an element-wise maximum operation to fuse the features of multiple people.
Furthermore,
we compare the experimental results for single actions and interacted actions under the N$\to$P-CV setting in Table~\ref{tab:single_multiple}.
We observe that our method obtains larger improvements over the interacted action (66.1\% $\to$ 73.0\%) than those over the single action (59.1\% $\to$ 64.8\%).
These experimental results demonstrate the generalization capability of our TS-Cub model, which can effectively deal with multi-person cases through the element-wise maximum operation.
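For clarity, the element-wise maximum fusion of multi-person features can be sketched as follows; the tensor shape is an assumption made only for illustration.
\begin{verbatim}
import torch

def fuse_people(person_features):
    # person_features: (num_people, feature_dim) tensor; the element-wise
    # maximum yields a single feature vector for the whole video.
    return torch.max(person_features, dim=0).values
\end{verbatim}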
\begin{table}[t]
\centering
\caption{Comparison of the skeleton-based action recognition accuracy (\%) when using different numbers of joints $N_j$. Experiments are conducted under the N$\to$P-CV setting.}
\setlength{\tabcolsep}{7pt}
\vspace{-0.15 cm}
\begin{tabular}{l|c|c|c|c}
\hline
Method & $N_j$ = 25 & $N_j$ = 22 & $N_j$ = 18 & $N_j$ = 12\\
\hline
Tem-Cub (Ours) & 65.2 & 65.8 & 64.5 & 63.5 \\
Spa-Cub (Ours) & 61.2 & 61.9 & 61.8 & 60.2 \\
TS-Cub (Ours) & 65.5 & 65.4 & 64.7 & 63.3 \\
\hline
\end{tabular}
\label{tab:number_of_joint}
\end{table}
\noindent \textbf{Analysis of the Number of Joints:}
We further conduct experiments to ablate the number of joints $N_j$ in skeleton-based video\footnote{Based on the 25 joints used in~\cite{DBLP:conf/cvpr/ShahroudyLNW16}, we remove the joints ``middle of the spine'', ``left thumb'' and ``right thumb'' for $N_j$ = 22, remove the joints ``middle of the spine'', ``left hand'', ``right hand'', ``left ankle'', ``right ankle'', ``left thumb'' and ``right thumb'' for $N_j$ = 18, remove the joints ``middle of the spine'', ``left hand'', ``right hand'', ``left ankle'', ``right ankle'', ``left wrist'', ``right wrist'', ``left elbow'', ``right elbow'', ``left knee'', ``right knee'', ``left thumb'' and ``right thumb'' for $N_j$ = 12.}.
As shown in Table~\ref{tab:number_of_joint}, our final TS-Cub model generally achieves better results with more joints as input. When only using 12 major joints, it still obtains a performance comparable to that with $N_j$ = 25, indicating its robustness to the number of joints. We also find that using fewer joints can achieve better results in some cases (\textit{e.g.,} $N_j=22$ versus $N_j=25$); this is because the removed joints (\textit{e.g.,} left thumb and right thumb) are sometimes redundant or even introduce noise into the final prediction.
\begin{table}[t]
\centering
\caption{Comparison of the skeleton-based action recognition accuracy (\%) when influenced by Gaussian noise with different standard deviations $\sigma$. Experiments are conducted under the N$\to$P-CV setting.}
\setlength{\tabcolsep}{5pt}
\begin{tabular}{l|c|c|c|c}
\hline
Method & $\sigma=0$ & $\sigma=0.001$ & $\sigma=0.01$ & $\sigma=0.1$
\\
\hline
Tem-Cub (Ours) & 65.2 & 63.3 & 62.0 & 43.3 \\
Spa-Cub (Ours) & 61.2 & 60.8 & 60.0 & 39.9 \\
TS-Cub (Ours) & 65.5 & 63.1 & 62.5 & 42.5 \\
\hline
\end{tabular}
\label{tab:gaussian}
\end{table}
\noindent \textbf{Analysis of the Gaussian Noise:}
To evaluate the robustness of our method, we add Gaussian noise to the input videos. Specifically, we first normalize the input data to the range [-1, 1], add Gaussian noise with zero mean and different standard deviations $\sigma$, and then re-scale the data to the original interval. As shown in Table \ref{tab:gaussian}, with perturbations of $\sigma=0.001$ and $\sigma=0.01$, our algorithm achieves performance comparable to that at $\sigma=0$.
With stronger noise, however, TS-Cub shows a noticeable decrease from 65.5\% ($\sigma=0$) to 42.5\% ($\sigma=0.1$).
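A minimal sketch of this perturbation procedure is given below, assuming the skeleton data are stored in a NumPy array; it follows the normalize, perturb, and rescale steps described above and is not the exact evaluation script.
\begin{verbatim}
import numpy as np

def perturb(skeleton, sigma, seed=None):
    # Normalize to [-1, 1], add zero-mean Gaussian noise with standard
    # deviation sigma, then map the data back to its original range.
    rng = np.random.default_rng(seed)
    lo, hi = skeleton.min(), skeleton.max()
    normalized = 2.0 * (skeleton - lo) / (hi - lo) - 1.0
    noisy = normalized + rng.normal(0.0, sigma, size=skeleton.shape)
    return (noisy + 1.0) / 2.0 * (hi - lo) + lo
\end{verbatim}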
\begin{table*}[!t]
\centering
\caption{Exploration of other self-supervised learning tasks.
Source domain: PKU-MMD (P) dataset. Target domain: NTU RGB+D (N51) dataset (CV setting).
For the definition of the tasks, refer to Section \ref{subsection:expe_res_and_anal}.}
\begin{tabular}{l|ccc||cc}
\hline
Task & Spa-Cub & Spa-Jigsaw & Freezing Game & Tem-Cub & Tem-Flip \\
\hline
Accuracy & 56.5 & 54.2 & 55.3 & 58.3 & 57.1 \\
\hline
Fusion way & WRSM & WGM & MP & TS-Cub (WAM) & Coupled-Cub \\
\hline
Accuracy & 51.9 & 58.9 & 58.5 & \textbf{59.2} & 55.6 \\
\hline
\end{tabular}
\label{tab:other_self_super_tasks}
\end{table*}
\noindent \textbf{Exploration of Other Self-supervised Learning Tasks:}
Besides our proposed Cubism tasks, we also explored other self-supervised learning tasks but obtained less satisfactory results. A comparison of the results with different self-supervised tasks is shown in Table \ref{tab:other_self_super_tasks}.
For instance, we consider the task \textit{Tem-Flip}, which distinguishes ordered videos from temporally reversed ones. However, it is hard to discriminate between actions such as the ordered \textit{put something inside pocket} and the reversed \textit{take out something from pocket}. Hence this task cannot be applied to all action categories and fails to lead to higher performance.
We also explore a task named \textit{Spa-Jigsaw} in the spatial domain.
The joints comprising a body are stored in a linear list in practice; we simply divide that list uniformly into 3 segments and permute them. This way of permutation thoroughly breaks the spatial relations of the joints and thereby achieves a slightly poorer result.
Meanwhile, we try another task, called \textit{Freezing Game}, which builds augmented data by freezing the pose of the arms and the legs at the first frame throughout the whole video. However, since several actions contain no large-amplitude motion at the beginning, this task turns out to be so difficult that the importance of the intended classification task is degraded.
Though spatial rotation is a conventional transformation, we do not exploit it further, as it is already part of our data pre-processing.
Additionally, several approaches are investigated to combine Tem-Cub and Spa-Cub. Firstly, we explore softmax-score fusion approaches of the form $\lambda_{1}\textbf{s}_{1}+\lambda_{2}\textbf{s}_{2}$, $\sqrt{\lambda_{1}{\textbf{s}_{1}}^{2}+\lambda_{2}{\textbf{s}_{2}}^{2}}$, ${\textbf{s}_{1}}^{\lambda_{1}}{\textbf{s}_{2}}^{\lambda_{2}}$, and max-pool$(\textbf{s}_{1},\textbf{s}_{2})$, named Weighted Arithmetic Mean (WAM), Weighted Root Squared Mean (WRSM), Weighted Geometric Mean (WGM) and Max Pooling (MP) respectively in Table \ref{tab:other_self_super_tasks}. Here $\textbf{s}_{1}$ and $\textbf{s}_{2}$ denote the temporal and spatial softmax scores, and $\lambda_{1}=0.6, \lambda_{2}=0.4$ are two hyper-parameters. We find that simply taking the weighted sum of the temporal and spatial softmax scores (WAM) achieves the best result, and we name this fusion \textit{TS-Cub}. Besides, another strategy applies the temporal and spatial transformations simultaneously to the training samples; this combination couples the temporal and spatial Cubism and is named \textit{Coupled-Cub}. \textit{Coupled-Cub} considerably increases the number of permutation labels and produces more disordered samples, which also raises the difficulty of the auxiliary task.
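For reference, the four score-fusion functions above can be sketched as follows; $\textbf{s}_{1}$ and $\textbf{s}_{2}$ are the temporal and spatial softmax score vectors, and the helper name is illustrative.
\begin{verbatim}
import numpy as np

def fuse_scores(s1, s2, lam1=0.6, lam2=0.4, method="WAM"):
    if method == "WAM":   # weighted arithmetic mean (TS-Cub)
        return lam1 * s1 + lam2 * s2
    if method == "WRSM":  # weighted root squared mean
        return np.sqrt(lam1 * s1 ** 2 + lam2 * s2 ** 2)
    if method == "WGM":   # weighted geometric mean
        return s1 ** lam1 * s2 ** lam2
    if method == "MP":    # element-wise max pooling
        return np.maximum(s1, s2)
    raise ValueError(method)
\end{verbatim}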
In conclusion of our exploration, the auxiliary task should be neither too simple nor too difficult. A task that is too simple does not have enough ability to draw the two domains close, while a task that is too difficult can overwhelm the original classification task.
In other words, an inadequate self-supervised task can result in an even worse adaptation performance, so choosing an appropriate auxiliary task is crucial for such a self-supervision based approach.
\begin{table*}[t]
\centering
\small
\caption{A new cross-dataset skeleton-based action recognition setting where the data from target domain are totally unavailable during training.}
\setlength{\tabcolsep}{7pt}
\begin{tabular}{l | c c |c c }
\hline
& \multicolumn{2}{c|}{Training} & \multicolumn{2}{c}{Testing}\\
& Source& Clips & Target & Clips\\
\hline
P$\to$N-CV & P & 21544 & N51-CV-test& 16092\\
P$\to$N-CS & P &21544 & N51-CS-test& 14013\\
N$\to$P-CV & N51 & 48081 & P-CV-test&7188 \\
N$\to$P-CS & N51 & 48081 & P-CS-test&2704 \\
\hline
N$\to$K & N12 & 11256 & K-test & 787 \\
K$\to$N & K & 9699 & N12-CV-test & 3780 \\
\hline
\end{tabular}
\label{tab:new_setting}
\vspace{-0.1cm}
\end{table*}
\begin{table*}[t]
\small
\centering
\caption{The experimental results for training without target domain data.}
\setlength{\tabcolsep}{3pt}
\vspace{-0.2cm}
\begin{tabular}{l|cccc|cc}
\hline
Method & P$\to$N-CV & P$\to$N-CS & N$\to$P-CV& N$\to$P-CS & N$\to$K & K$\to$N \\
\hline
Target Only & 91.3 & 84.6 & 95.7 & 93.1 & 40.8 & 89.1 \\
Source Only & 54.9 & 50.5 & 59.6 & 57.6 & 14.4 & 22.5 \\
\hline
Tem-Cub & 57.1&52.7 & 62.7&60.0 & \textbf{15.6} & 25.5 \\
Spa-Cub & 54.7&50.8 & 61.9&59.6 & 15.0 & 25.1 \\
TS-Cub & \textbf{57.7}&\textbf{53.8} & \textbf{63.4}&\textbf{61.3} & 15.5& \textbf{25.6} \\
\hline
\end{tabular}
\label{tab:exp_res}
\vspace{-0.35cm}
\end{table*}
\subsection{Training the TS-Cub without Target Domain Data}
\label{without_target_domain}
We further explore testing our proposed TS-Cub under a more challenging cross-dataset setting, which assumes that data from the target domain are totally unavailable during training. We detail this setting in Table~\ref{tab:new_setting}. During the training phase, the permuted video samples from the source domain are fed into the network along with the ordered data. The final loss is composed of the losses of the main classification task and the auxiliary Cubism task.
We present the experimental results in Table \ref{tab:exp_res}. The other methods compared in the previous sections are absent because they all require target data during training, which is not available in this setting.
As shown in Table \ref{tab:exp_res}, our TS-Cub consistently outperforms the baseline \textit{Source Only} on the six tasks, which indicates its robustness for cross-dataset skeleton-based action recognition.
\begin{table}[t]
\centering
\caption{Combining the Cubism with CDAN~\cite{CDAN} under the UDA setting.}
\setlength{\tabcolsep}{1.0pt}
\begin{tabular}{l|cc|cc|cc}
\hline
Method & P$\to$N-CV & P$\to$N-CS & N$\to$P-CV& N$\to$P-CS & N$\to$K & K$\to$N \\
\hline
Tem-Cub (Ours) & 58.3& 52.5 & 65.2 & 62.8 & 16.4 & 29.3 \\
Spa-Cub (Ours) & 56.5& 51.0 & 61.2 & 59.3 & 14.6 & 22.9 \\
TS-Cub (Ours) & 59.2 & 53.6 & 65.5& 63.3 & \textbf{16.8} & \textbf{29.6}\\
\hline
Tem-Cub (Ours) + CDAN~\cite{CDAN,DBLP:conf/eccv/ChoiSSH20} & \textbf{60.2} & 56.5 & 67.6 & 64.6 & 16.2 & 21.5 \\
Spa-Cub (Ours) + CDAN~\cite{CDAN} & 59.3 & 56.4 & 68.3 & 65.4 & 15.6 & 26.8 \\
TS-Cub (Ours) + CDAN~\cite{CDAN} & 59.5 & \textbf{57.8} & \textbf{68.9} & \textbf{66.1} & 16.6 & 27.1 \\
\hline
\end{tabular}
\vspace{-0.2cm}
\label{tab:cubism_and_cdan}
\end{table}
\subsection{Combining the Cubism with Adversarial Learning Method}
Recently, Choi \textit{et al.}~\cite{DBLP:conf/eccv/ChoiSSH20} studied the cross-dataset action recognition problem by combining the domain adversarial task with a clip order prediction task.
Motivated by this work, we further conduct experiments to see whether our self-supervised pretext tasks are complementary to the conventional domain adversarial task.
Since~\cite{DBLP:conf/eccv/ChoiSSH20} is designed for RGB videos while our work focuses on skeleton-based videos, we use HCN~\cite{DBLP:conf/ijcai/LiZXP18} as the backbone, similar to TA$^3$N~\cite{video_DA}.
We then perform temporal Cubism at the raw data level and apply adversarial learning at the feature level based on CDAN~\cite{CDAN} (denoted as ``Tem-Cub (Ours) + CDAN'' in Table \ref{tab:cubism_and_cdan}).
We also conduct experiments combining our Spa-Cub with CDAN (\textit{i.e.,} ``Spa-Cub (Ours) + CDAN''), and ensembling the results of ``Tem-Cub (Ours) + CDAN'' and ``Spa-Cub (Ours) + CDAN'' (\textit{i.e.,} ``TS-Cub (Ours) + CDAN'').
We present the compared results in Table~\ref{tab:cubism_and_cdan}.
Under the P$\leftrightarrow$N settings, we find that the performance can be further improved by combining our approach with CDAN~\cite{CDAN}, which shows the complementary characteristics of our method and the adversarial approach.
However, under the N$\to$K setting, the performance drops slightly when combining with CDAN. This might be attributed to the fact that the videos of Kinetics are collected in the wild and the skeleton inputs are obtained from a 2D pose estimation algorithm rather than from the 3D sensors used for the NTU and PKU datasets. In this case, the adversarial approach (\textit{e.g.,} CDAN) may have more difficulty dealing with such noisier data. In comparison, our method is more robust and generalizes better to this more challenging scenario.
\section{Conclusions}
In this paper, we have investigated the unsupervised domain adaptation setting for skeleton-based action recognition.
In order to reduce the domain shift between different datasets, we have devised a self-supervised learning approach based on temporal and spatial Cubism.
Both quantitative and qualitative experimental results have demonstrated the effectiveness of our method.
We expect this work to provide a new direction for skeleton-based action recognition, and inspire applications to other related tasks, such as group activity recognition~\cite{DBLP:journals/tip/TangLWYZ19}, action quality assessment~\cite{DBLP:conf/cvpr/TangNZZLWZ20} and instructional video analysis~\cite{coin}.
\bibliographystyle{ACM-Reference-Format}
\chapter*{Author's Statements}
\label{chap8}
\hspace{0.1in}\textbf{Accepted Paper}\\
\begin{itemize}
\item Souvik Roy, Subrata Paul and Sukanta Das, ``Temporally Stochastic Elementary Cellular Automata: classes and dynamics'', \textit{International Journal of Bifurcation and Chaos}, 2022.
\end{itemize}
\vspace{0.4in}
\hspace{0.0in}\textbf{Preprint}\\
\begin{itemize}
\item Kamalika Bhattacharjee, Subrata Paul and Sukanta Das, ``Affinity Classification Problem by Stochastic Cellular Automata'', \textit{arXiv preprint arXiv:2207.05446}, 2022.
\end{itemize}
\vspace{0.4in}
\hspace{0.0in}\textbf{Submitted Papers}\\
\begin{itemize}
\item Subrata Paul, Souvik Roy and Sukanta Das, ``Pattern Classification with Temporally Stochastic Cellular Automata'', \textit{AUTOMATA 2022}, 2022.
\item Subrata Paul, Sukanta Das and Biplab K. Sikdar, ``Searching with Cellular Automata on Cayley Tree'', \textit{AUTOMATA 2022}, 2022.
\end{itemize}
\chapter*{Abstract}
\label{abstr}
\textbf{T}his work introduces temporal stochasticity in cellular automata and studies the behavior of such cellular automata. The work also explores the computational ability of such cellular automata by showing that they can solve the affinity classification problem. In addition, a cellular automaton defined over a Cayley tree is shown to solve the classical searching problem.
\textbf{T}he proposed temporally stochastic cellular automata deal with two elementary cellular automata rules, say $f$ and $g$. Here $f$ is the default rule, whereas $g$ is temporally applied to the overall system with some probability $\tau$ and acts as noise in the system. As the mathematical analysis of such a system is a difficult task, we use qualitative and quantitative simulations to get insights into the dynamics of these temporally stochastic cellular automata. We are interested in exploring the question: is it possible that two periodic (resp. chaotic) rules together depict chaotic (resp. periodic) dynamics? To answer this question, we fully classify temporally stochastic cellular automata. Here, we are also particularly interested in studying phase transitions and various types of class transition dynamics.
\textbf{A}fter exploring the dynamics of temporally stochastic cellular automata (TSCAs), we study their dynamical behavior to identify the TSCAs that converge to a fixed point from any seed. We apply each of the convergent TSCAs to some standard datasets and observe the effectiveness of each TSCA as a pattern classifier. It is observed that the proposed TSCA-based classifier shows competitive performance in comparison with existing classifier algorithms.
\textbf{T}he work also introduces a new problem in the field of cellular automata, named the \emph{affinity classification problem}, which is a generalization of the \emph{density classification problem}. To solve this problem, we use temporally stochastic cellular automata where two rules are stochastically applied in each step to all cells of the automaton. Our model is defined on a two-dimensional grid and has affection capability. We show that this model can be used in several applications, such as modeling self-healing systems.
\textbf{F}inally, the work introduces a new model of computing unit developed around cellular automata to reduce the workload of the Central Processing Unit (CPU) of a machine. Each cell of the computing unit acts as a tiny processing element with attached memory. Such a CA is implemented on the Cayley tree to realize efficient solutions for diverse computational problems. To prove the effectiveness of this model, this work targets solutions for the searching problem; however, such a cellular structure can be employed to solve various other computational problems as well.
\chapter{Introduction}
\label{chap1}
The growth of science has always been captivated by the wonders of nature. The way that nature operates is incredibly unpredictable, with each living thing playing a specific role in determining overall behavior. However, the Turing Machine~\cite{turing1937computable, Turing}, whose computation was controlled by a centralized control tape head, formed the foundation of the mathematical model used from the beginning of the modern computer era. Even von Neumann's proposed computer design is managed by the CPU, which is also a centrally controlled mechanism.
From the very first computer to modern smart-phones, all have operated in a centralized manner. In the natural world, however, we see patterns such as those found in snowflakes, ant motion, sea shells, etc., where centralized control only appears to exist. For example, it may appear that a leader is in charge of an entire ant colony; however, each ant makes its own decisions and performs its own duty.
In the early $20^{th}$ century, a new field of research known as Network Science was introduced in this area to examine individuality and parallelism. Various models have been presented throughout this period, many of which are bio-inspired and provide distributed, decentralized computing. One of the most significant advancements in this area was the invention of cellular automata. Decentralization has been a widely used notion in computation since the first widely utilized distributed systems, like Ethernet~\cite{Clark21, Maarten7}, were introduced. Since the distributed system known as the \emph{Internet} caused a ``paradigm shift'', the idea of decentralization has cropped up often in almost all areas of human endeavor.
An array of networked, yet independent, computer components makes up a distributed system. These components communicate with one another only to coordinate their functions. From the standpoint of a process, a distributed system may be seen as a collection of geographically scattered processes that communicate only via message exchange. As a result, the processes in the system can talk to one another only while performing a computational task. In a distributed computing architecture, the supervision and control of the computation are not exercised by a single entity. The components and processes of a distributed computation may be recognized by their unique identifiers. A central organization is required for a system with detectable individual identities, or a ``non-anonymous system'', in order to give the processes their distinctive individuality. This violates the fundamental tenets of distributed control. As a result, a distributed system must be anonymous by definition~\cite{Maarten7, tel73}. A number of formal models have already been published for distributed systems~\cite{consys1, Milner, Reisig}, providing useful insights. On the other hand, because of their innate parallelism, cellular automata (CAs) are always a natural choice for distributed computing frameworks.
In the early 1950s, John von Neumann first introduced a self-reproducing automaton \cite{Neuma66}, which was later named a ``Cellular Automaton''. He introduced constructive universality in cellular automata to study the implementability of self-reproducing machines and the concept of computational universality. A computing machine is said to be computationally universal if it is capable of simulating any other computing machine; for example, John von Neumann's universal constructor is a machine capable of emulating other machines and can be embedded in his cellular automaton. Computational universality and constructive universality are conceptually related properties, but a machine does not need to possess a universal computer to be a universal constructor. John von Neumann, in fact, demonstrated that it is possible to implement a Turing machine in his cellular automaton, but pointed out that a Turing machine is not a necessary component of the universal constructor.
The study of biological phenomena has become a focus for researchers in the domains of science, philosophy, technology, and other related fields. In this direction, Christopher Langton introduced a research area titled ``Artificial Life'' that uses cellular automata (CAs) as a natural basis for artificial life models. Some CAs have inherited characteristics of biological systems, such as \emph{self-replication, self-organization, self-healing}, etc. Conway's ``Game of Life''~\cite{Gardner70} is an important example of this type of CA. The focus will now shift to another attribute, termed \emph{affection}. As is well known, every biological system readily shows affection towards something.
Across the globe, scientists are striving in their own ways to make \emph{intelligent machines}. In general, a living system is regarded as an intelligent system. The Turing Test (TT), a behavioral test developed by Alan Turing in 1950~\cite{TT1950}, is one of the fundamental tools to analyze whether a machine is intelligent. A machine's performance in the TT is evaluated based on how it responds to a series of questions, and passing the TT implies that the machine is acting intelligently. According to Turing, such a machine should be recognized as a thinking machine. However, this test is based on a functional approach, where the machine is labeled \emph{intelligent} if it successfully completes specific tasks, that is, if it acts intelligently. This method does not care whether the system has intrinsic intelligence like a living being. The Chinese Room Argument~\cite{Searle1980}, which distinguishes between what is intelligent and what behaves intelligently, is well known; in fact, this problem inspired the development of the strong AI paradigm~\cite{Kenneth1988}. By going beyond this functional perspective, we want to further the notion of intelligent machines in our study: if a machine has properties similar to those of a living system, then the machine is called an \emph{intelligent machine}. It is crucial to define the term \emph{intelligent} before answering the question, ``How intelligent has a machine become?''. By looking at how the terms \emph{machine} and \emph{intelligent} are typically used, we try to structure the definitions to reflect that use as closely as possible; even so, it is very challenging to pin down the meaning of, and the answer to, this question.
\section{Motivation and Objective of the thesis}
Exploring the processing power of decentralized computational models, namely distributed computing on cellular automata, is the objective of this thesis. Each cell in a cellular automaton (CA) is made up of a finite automaton that communicates with its neighbors in order to move to its next state~\cite{Neuma66}. The CA is distributed across a regular grid. The appeal of a CA is that it generates complicated (global) behavior from simple local interactions. Some people even assert that nature functions as a quantum information processing system~\cite{PhysRevLett.88.237901}, with CA serving as the mechanism for this processing~\cite{Zuse1982,wolfram2002new}. In fact, using CAs as models for concurrency and distributed systems was one of the pioneering efforts in CAs. According to published research, a few distributed systems issues have also been solved computationally using CAs~\cite{Cor,Smith71}. In this dissertation, we investigate how cellular automata (CAs) can be used to resolve the following issues:
\begin{itemize}
\item observing how a system behaves when noise has an impact on it,
\item pattern classification using such a system,
\item the affinity classification problem,
\item the searching problem.
\end{itemize}
When a system evolves in real life and noise enters the system, the behavior of the system may alter. In this direction, researchers have proposed stochastic CAs in the literature~\cite{ Heinz9, fates13, fates:LIPIcs}, where the system employs several rules for different cells rather than a single local rule. This dissertation examines these kinds of CAs as models of noisy real-time systems. For studying such changes, a variant of the CA model, the temporally stochastic cellular automaton, a new form of stochastic CA, has been proposed. The vast majority of scientific research is based on classical CAs; this dissertation, however, explores a variant of CA which we refer to as temporally stochastic cellular automata. Our objective is to analyze the behavior of these CAs and categorize them into different classes. Using this classification, we choose the convergent TSCAs and utilize them to construct a pattern classifier. Certain algorithms have been developed in the past for the same purpose, but our proposed model produces competitive outcomes compared to the existing techniques.
All researchers have had the vision of making an intelligent system from the beginning. Living systems are often used to describe intelligent systems. When a machine can imitate a living system, it is referred to as an intelligent system. One can find that a strength of the living system is \emph{affection}. In our study, we introduce a cellular automaton model with affection capabilities, and we believe that this feature lends our model some intelligence.
The next state of a cell in a standard cellular automaton (CA) depends only on the nearest-neighborhood configuration from the previous time step, making it ahistoric (memoryless). However, the conventional CA framework has been extended by considering automata with memory capacity~\cite{alonso2009memory,CPLX_CPLX21495,doi:10.,Alonso-Sanz2013,Seck12,Das2022}. The update rules of the CA are not changed, but each cell now keeps a history of all previous iterations in the form of a weighted mean of all previous states. The process of performing computations entirely within memory is known as in-memory computation (or in-memory computing). This phrase often refers to extensive, sophisticated computations that must be performed on a cluster of machines using specialist systems software. As a cluster, the machines pool their memory so that the computation is essentially performed over several machines and makes use of their combined memory. In \cite{Das2022}, a variant of CA is introduced to implement such notions, where a memory unit is attached to each CA cell and an additional processing unit is added to the memory to aid in calculation and lessen the workload on the CPU. Using this notion, we develop a model to solve the well-known \emph{searching problem}.
\section{Contribution of the thesis}
In light of the aforementioned goal, we have conducted this study. The following is a summary of the major findings from the research activities:
\begin{itemize}
\item We have explored the behaviors of temporally stochastic cellular automata, where we deal with elementary cellular automata (ECAs).
\item We have found that whereas certain stochastic CAs are insensitive to the temporal noise rate, others are impacted by temporal noise. But even these CAs have produced a wide range of outcomes.
\item It is noteworthy that stochastic CAs with (at least one) chaotic rule have often shown less resistance during phase transition (i.e., the critical value of the noise rate is low), whereas the stochastic CAs devoid of any chaotic rule have shown greater resilience during phase transition (i.e., the critical value of the noise rate is high). This is another intriguing finding of this research.
\item After evaluating their dynamics, we have identified the convergent temporally stochastic cellular automata (TSCAs) and utilized them to develop a two-class pattern classifier. The proposed TSCA-based two-class pattern classifier gives competitive performance in comparison with existing common approaches.
\item We have introduced a new problem known as the affinity classification problem. We have developed a dedicated machine that is integrated into a two-dimensional cellular automaton with periodic boundary conditions and Moore neighborhood dependence. Our model may be described by four parameters, $K$, $\phi(x)$, $\psi(x)$ and $p$, and has affection capability towards a converging point, all-1 or all-0.
\item By adjusting its parameters alone, the dedicated machine may partially solve the density classification problem.
\item Using this concept, we may develop a self-healing system. We are aware that self-healing allows any species to survive and develop.
\item To reduce the workload on the CPU, a new kind of cellular automata based model has been developed, in which each cell of the CA has a memory and an additional processing unit attached to it.
\item In this type of CA, each cell has some additional processing power that helps with the computation at hand. We have shown how the model can solve the searching problem. The model is able to decide the search's outcome by perceiving just one node.
\item In the CA over the Cayley tree, the arrangement of elements in the array does not affect the flow of the scheme. Whether the elements in the array are already sorted, reverse sorted, or randomly placed, the scheme works the same in all these cases, and thus the time complexity is the same, $O(k + \log n)$, where $k$ is the key element and $n$ is the total number of elements present in the set.
\end{itemize}
\section{Organization of the thesis}
In this section, we provide the organization of the thesis along with a summary of each chapter. The contributions of the thesis to the aforementioned topics are divided among the following chapters. Before discussing the individual chapters, an introduction to CAs is provided in Chapter~\ref{chap2}.
\begin{itemize}
\item \textbf{Chapter~\ref{chap2}.} This chapter describes the principle of CAs, different forms of CAs, and a brief history of CAs. After that, we concentrate on some non-classical CAs, such as stochastic CAs, automata networks, and non-uniform CAs. Finally, we briefly discuss the history of CAs as well as various computing tasks and social applications.
\item \textbf{Chapter~\ref{chap3}.} This chapter addresses a non-conventional CA variant where noise has a temporal impact on the entire system. At each time step, a cell is updated by one of two rules, say $f$ and $g$. The rule $f$ may be thought of as the CA's default rule, whereas $g$ is applied temporally to the entire system with a certain probability and behaves as noise in the system. We name this variant the \emph{Temporally Stochastic CA}.
\item \textbf{Chapter~\ref{chap4}.} This chapter is an extension of the previous one, where an application of the \emph{Temporally Stochastic CA} is discussed. The convergence property of TSCAs is discussed, and with the help of the convergent TSCAs we build a pattern classifier. We deploy each convergent TSCA on a few standard datasets and analyze its performance as a pattern classifier. It has been noted that, compared to previous classifier algorithms, the proposed TSCA-based classifier shows competitive performance.
\item \textbf{Chapter~\ref{chap5}.} This chapter introduces the \emph{affinity classification problem} as a generalization of the \emph{density classification problem}. Formally, the problem can be stated as follows: given an initial configuration, a binary cellular automaton should converge to the all-1 configuration if the density of $1$s is more than $\rho$; otherwise, it should converge to the all-0 configuration. A variant of the CA model is proposed to solve this kind of problem.
\item \textbf{Chapter~\ref{chap6}.} This chapter introduces a new model of computing unit developed around cellular automata, where each cell of the computing unit acts as a tiny processing element with attached memory. The Cayley Tree is used to implement such a CA in order to achieve effective solutions for various computing problems. This study focuses on finding solutions to the searching problems to demonstrate the usefulness of this paradigm. Such cellular structures may, however, be used to address a variety of computing problems, the searching problem being one of them.
\item \textbf{Chapter~\ref{chap7}.} This chapter wraps up the thesis with a few unresolved problems that might be addressed later.
\end{itemize}
\chapter{Survey on Cellular Automata}
\label{chap2}
A Cellular Automaton (CA) is a discrete, abstract computational system that consists of a regular network of finite state automata, formally known as cells. A cell changes its state depending on the states of its neighbors using a local update rule, and all the cells change their states simultaneously using the same update rule.
For the last fifty years, cellular automata have been rigorously studied in various fields because of their simple structures. Some theoretical aspects of cellular automata include chaos in CAs~\cite{Devaney,langton90,DCTMitchell93,Cattaneocht,Supreeti_2018_chaos,KAMILYA2019116}, reversibility~\cite{Siap11,Kari90,Kari94,marti2011reversibility,DIGREGORIO75,Bhattacharjee16b,Sarkar12,SethiD14,tome1994necessary,0305-4470-37-22-006,kamalikaThesis,CayLeyid02,uguz2013reversibility,BhattacharjeeNR16,Ángel13,cinkir2011reversibility,Fats2018,Sarkar11}, reachability problems~\cite{naskar12}, primitive polynomials~\cite{ADAK202179,Bardell1992} etc. Also, CAs have been applied to model physical systems~\cite{Suzudo2004185,ulam1962some,uguz14,White51175,liang2001s,BAUDAINS2013211,10.2307/2094159,GUNJI1990317,Margolus198481}, VLSI design and test~\cite{Pries86,vlsi00d,Horte89c,Tsali91,biplab,SukantaTH,santanu00,Chowd92d,Tsali90,vlsi02a,ubist,Misra92b,vlsi00b,vlsi00a,Chandrama,aspdac04,Bardell,dubrova1999multiple,MitraDCN96}, cryptography~\cite{Sethi2016,Seredynski2004753,wolfram1985cryptography,das2013car30}, pattern classification~\cite{CPLX:CPLX21749,maji2003theory,DasMNS09,Sethi6641432,Sethi2015}, random number generator~\cite{Horte89a,Tomassini96,MARSAGLIA19931,L'Ecuyer:2007:TCL:1268776.1268777,opre.48.2.308.12385,S1064827598349033,Rukhin10statisticaltest,Matsumoto:1998:MTE:272991.272995,soto1999statistical,Mitra2014,BhattacharjeeIJMPC2018,Panneton:2005:XRN:1113316.1113319,Doi1004061,Park:1988:RNG:63039.63042,JAMES1990329,LEcuyer:1997,L'Ecuyer:2005:FRN:1162708.1162732,HELLEKALEK1998485,rukhin2001statistical,Rotenberg:1960:NPN:321008.321019,fishman1990multiplicative,doi:10.1137/0907002,Eichenauer1986,L'Ecuyer:1988:EPC:62959.62969,10.2307.2347988,doi:10.1287.opre.44.5.816,doi:10.1287.opre.47.1.159,nance1978some,marsaglia1985current,RNG,BHATTACHARJEE2022100471,saito2011TinyMT,Tsali91,comer2012random}, compression~\cite{Bhatt95,Lafe,lafe2002method,ShawSM04,ShawDS06,Chandrama,ShawMSSRC04}, image processing~\cite{Rosin06,Rosin2010790,Jin2012538,Soto2008,Mariot2017,Okba11,Wongthanavasu03,Sadeghi12,ShawDS06,ye2008novel,Sato2009}, number-conserving~\cite{Morita98,Morita:99,boccara98,das2011characterization,Boccara02,Durand03,Moreira03,Formenti2003269,Kohyama01011989-28,GOLES20113616}, computability~\cite{Kutrib2011,short12,Mark1,Chopard2012,Darabos7,Soto2008,langton90,Smith71,Durand-Lose98,lindgren90,wolfram84,toffoli77,DCTMitchell93,Culik90,Margolus198481,Hem82,naka,tome1994necessary}, natural computing~\cite{Sethi2015,Sato2009,Alberto12}, biological systems~\cite{RUXTON98,Mark1,gaylord1995computer,ermentrout1993cellular}, medical science~\cite{AMTuring,ermentrout1993cellular,RUXTON98,Dabbaghian2012,dhande1984ternary}, social networking~\cite{6108515,Beltran11,Sakoda89791,Epstein96,liebrand1996frontiers,Andreas98,moreno1957first,david2004social,Hegselmann1996,Nowak1996,DABBAGHIAN20101752,Benito1012898,Das7477,LANG201412,10.1007/978-3-642-40495-5_3,iet.cp.2013.2283} etc.
\section{Cellular Automata}
A cellular automaton (CA) is made up of a number of cells arranged in a regular network. Each cell of a CA is a finite automaton that uses a finite state set, called $S$. CAs evolve in discrete time and space. During the evolution, a cell of the CA changes its state based on the states of its neighbors. In other words, a cell updates its state using a next-state function, also referred to as a local rule, whose inputs are the current states of the cell's neighbors. The collection of the states of all cells at a given time is referred to as the CA's configuration. Thus, as it evolves, a CA moves from one configuration to another.
\begin{definition}
A cellular automaton is a quadruple ($\mathcal{L}$, $\mathcal{S}$, $\mathcal{M}$, $\mathcal{R}$) where,
\begin{itemize}
\item $\mathcal{L} \subseteq \mathbb{Z}^\mathcal{D}$ is the $\mathcal{D}-$dimensional cellular space. A set of cells are placed in the locations of $\mathcal{L}$ to form a regular network.
\item $\mathcal{S}$ is the finite set of states; e.g. $\mathcal{S} = \{0, 1,\cdots, d-1\}$.
\item $\mathcal{M}=(\vec{v_1},\vec{v_2},\cdots,\vec{v_m})$ is the neighborhood vector of $m$ distinct elements of $\mathcal{L}$ which associates a cell with its neighbors.
\item The local rule of the CA is $\mathcal{R}:\mathcal{S}^m\rightarrow\mathcal{S}$. The next state of a cell is given by $\mathcal{R}(s_1,s_2,s_3,\cdots,s_m)$, where $s_1,s_2,s_3,\cdots,s_m$ denote the states of its $m$ neighbors.
\end{itemize}
\end{definition}
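To make the definition concrete, the following is a minimal sketch of one synchronous update of a one-dimensional CA; the neighborhood is given as relative offsets and a periodic boundary is assumed (boundary conditions are discussed later in this chapter).
\begin{verbatim}
def ca_step(config, neighborhood, rule):
    # One synchronous update: every cell applies the local rule
    # R : S^m -> S to the states of its m neighbors.
    n = len(config)
    return [rule(tuple(config[(i + v) % n] for v in neighborhood))
            for i in range(n)]

# Example: ECA rule 90 (next state = left XOR right), offsets (-1, 0, +1).
rule90 = lambda s: s[0] ^ s[2]
next_config = ca_step([0, 1, 0, 0, 1, 1, 0, 1], (-1, 0, 1), rule90)
\end{verbatim}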
Cellular automata can be of many types. One of the most fundamental properties of a cellular automaton is the kind of grid on which it evolves. The most fundamental grids of this kind are one-dimensional arrays (1D CAs). In the case of two-dimensional CAs, grids of square, triangular, and hexagonal shapes can be used. Additionally, cellular automata may be built on Cartesian grids in any number of dimensions, with the integer lattice being the most popular option. For example, Wolfram's cellular automata are implemented on a one-dimensional integer lattice.
The three fundamental characteristics of a classical CA are locality, synchronicity, and uniformity.
\begin{itemize}
\item According to the concept of locality, the computation of CAs using a rule is a local computation. A cell's state varies in response to local interactions only.
\item Synchronicity refers to the simultaneous updating of all local computation-performing cells.
\item The employment of the same local rule by all cells is referred to as uniformity.
\end{itemize}
Therefore, the actual computation is carried out by the local rule, which is the same throughout the lattice. The radius of a CA is the number of consecutive cells in each direction on which a cell depends. For instance, if the radius of the CA is $2$, a cell depends on its two consecutive left cells and its two consecutive right cells; in that case, the CA is a $(2 + 2 + 1) = 5$-neighborhood CA.
\begin{figure}[hbt!]\centering
\includegraphics[width=0.50\textwidth]{CAFigure/r.pdf}
\caption{Neighborhood dependence in a one-dimensional cellular automaton with radius $r$.}
\label{r}
\end{figure}
The neighborhood dependence of a one-dimensional CA is shown in Fig.~\ref{r}. A cell uses $r$ left neighbors and $r$ right neighbors in order to move to the next state.
However, the (John) von Neumann neighborhood dependency and the Moore neighborhood dependency are typically used to define the neighborhood of a two-dimensional CA. The CA suggested by John von Neumann is a two-dimensional CA with square grids, where each square box represents a cell. Each cell has one of $29$ possible states. The next state of each cell is determined by the current states of its four orthogonal neighbors and itself (see Fig.~\ref{von}). Later, these CAs were simplified to fewer states while maintaining their capacity for self-reproduction and computation; see~\cite{Arbib66,Codd68,Banks71,thatcher1964,langton1984self,banks1970universality,Smith71,iirgen1987simple,Morita89,Culik90,Goucher2010a,martin1994universal,DUBACQ0202,ollinger2002quest,ollinger2003intrinsic,cook2004universality} for illustration.
\begin{figure}[hbt!]
\subfloat[]{
\begin{minipage}[c][1\width]{
0.30\textwidth}
\label{von}
\centering
\includegraphics[width=1\textwidth]{CAFigure/von_Nei}
\end{minipage}}
\hfill
\subfloat[]{
\begin{minipage}[c][1\width]{
0.30\textwidth}
\label{moor}
\centering
\includegraphics[width=1\textwidth]{CAFigure/moor_Nei}
\end{minipage}}
\caption{The neighborhood dependencies for two-dimensional cellular automata: (a) von Neumann neighborhood; (b) Moore neighborhood.}
\label{fig:neighbors}
\end{figure}
In the Moore neighborhood~\cite{moore1962machine} of two-dimensional CAs, the four non-orthogonal (diagonal) cells are also taken into consideration as neighbors, which gives a $9$-neighborhood dependence. The Moore neighborhood dependence of a CA is depicted in Fig.~\ref{moor}. The renowned Game of Life, a CA presented by John Conway and popularized by Martin Gardner, was developed using this type of neighborhood structure~\cite{Gardner70}.
When the cellular space is infinite, the existence of a boundary and boundary conditions is irrelevant. Several publications, however, assume that the cellular space is finite, which necessarily has boundaries. If the automata are to be put into practice, finite CAs are necessary. A finite CA is usually investigated with two boundary conditions, namely open and periodic. In this study, we primarily consider finite CAs with periodic boundary conditions; in a few instances, nevertheless, we also study dynamics with an open boundary condition. In open boundary CAs, the missing neighbors of the extreme cells (the leftmost and rightmost cells in one-dimensional CAs) are typically given some fixed states. The most common open boundary condition is the null boundary, in which any missing neighbor of a terminal cell is always in state $0$ (null). The works~\cite{Wolfr83, Horte89a,Tsali91, ppc1,das2010scalable} show some uses of CAs with the null boundary condition. On the other hand, under a periodic boundary condition, the boundary cells are immediate neighbors of certain other boundary cells. For one-dimensional CAs, Fig.~\ref{neigh} shows, as an example, that the left neighbor of the leftmost cell is the rightmost cell and the right neighbor of the rightmost cell is the leftmost cell. Some researchers have also looked at periodic boundary conditions for higher-dimensional CAs~\cite{Palas1,Jin2012538,uguz2013reversibility}.
\begin{figure*}[hbt!]
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.4\textwidth]{CAFigure/nullb.pdf} & \includegraphics[width=0.4\textwidth]{CAFigure/pb.pdf} \\
(a) Null Boundary & (b) Periodic Boundary \\
\includegraphics[width=0.4\textwidth]{CAFigure/ab.pdf}&
\includegraphics[width=0.4\textwidth]{CAFigure/rb.pdf} \\
(c) Adiabatic Boundary & (d) Reflexive Boundary \\
\includegraphics[width=0.4\textwidth]{CAFigure/ib.pdf} & \\
(e) Intermediate Boundary &\\
\end{tabular}
\caption{Boundary conditions of one-dimensional finite CAs: (a) Null Boundary; (b) Periodic Boundary; (c) Adiabatic Boundary; (d) Reflexive Boundary; (e) Intermediate Boundary.}
\label{neigh}
\end{center}
\end{figure*}
Although cellular automata were initially conceived on a two-dimensional lattice of cells~\cite{Toffo87,cattell1996synthesis,Siap11,imai2000computation,morita2016universality,margenstern1999polynomial,Margenstern200199,Packa85b,uguz14}, they have also been proposed in higher-dimensional spaces~\cite{gandin19973d,Palas1,miller2005two,mo20143,dennunzio2014multidimensional,tomassini29,Darabos7,Marr20}. Even so, many researchers from all around the world have devoted their careers to studying one-dimensional CAs~\cite{Amoroso72,Pries86,sutner1991bruijn,sikdar2002design}. Elementary Cellular Automata (ECAs) are a subclass of CAs that Wolfram has studied~\cite{Wolfr83,wolfram2002new}. ECAs are one-dimensional, two-state, and dependent on the three nearest neighbors. ECAs and their modifications have received a significant amount of attention in cellular automata research during the past three decades.
\subsection{Elementary Cellular Automata}
A one-dimensional, two-state, three-neighborhood CA is one of the simplest structures capable of complicated behavior. Because of this, such CAs are in significant demand among researchers who study natural phenomena. These CAs are referred to as Wolfram's CAs or Elementary CAs (ECAs). Wolfram conducted significant research on the dynamics of these CAs and categorized them based on their dynamical tendencies. Partly because of this study, ECAs have been further categorized and described by other scholars, both for theoretical advancement and from an application standpoint.
An elementary CA is a one-dimensional (infinite) lattice of cells, where each cell evolves its state based on the states of its left neighbor, itself, and its right neighbor. That is, the radius of a cell is $r = 1$ and $S = \{0, 1\}$. A cell is subjected to a function $f$, or local rule, according to which the cell changes its state from one step to the next. The local rule $f$ can often be represented in tabular form. Table~\ref{Table:ECA} displays three elementary rules with $S = \{0,1\}$ and $r = 1$. Usually, the rules are given as $d$-ary strings (binary for ECAs) or their corresponding decimal integers.
\begin{table}[ht]
\centering
\begin{adjustbox}{width=0.8\columnwidth,center}
\begin{tabular}{cccccccccc} \hline \hline
RMT&\makecell{111\\(7)}&\makecell{110\\(6)}&\makecell{101\\(5)}&\makecell{100\\(4)}&\makecell{011\\(3)}&\makecell{010\\(2)}&\makecell{001\\(1)}&\makecell{000\\(0)}&Rule\\\hline
&0&0&0&1&1&1&1&0&30\\
f&0&1&0&1&1&0&1&0&90\\
&1&0&0&1&0&1&1&0&150\\\hline\hline
\end{tabular}
\end{adjustbox}
\caption{Elementary CA rules $30$, $90$ and $150$.}
\label{Table:ECA}
\end{table}
For CAs with two states and a three-cell neighborhood, there are $2^{2^3}$ possible local functions; accordingly, there are $2^8 = 256$ elementary CA rules. The acronym \emph{RMT} in Table~\ref{Table:ECA} stands for \emph{Rule Min Term}, which is explained in the next paragraph. The \emph{RMTs} of ECA rules $30$, $90$ and $150$ are shown in Table~\ref{Table:ECA}.
\begin{definition}
(Rule Min Term (RMT))~\cite{Bhattacharjee16b}: Let $f : S^{2r+1} \rightarrow S$ be the local rule of a one-dimensional CA. An input $(x_{-r},\cdots, x_0,\cdots, x_r) \in S^{2r+1}$ to $f$ is called a Rule Min Term (RMT). An RMT is commonly represented by the decimal number $r = x_{-r} \cdot d^{2r} + x_{-r+1} \cdot d^{2r-1} + \cdots + x_r$, and $f(x_{-r}, \cdots, x_0, \cdots, x_r)$ is written as $f(r)$. RMT $r$ is called passive if $f(x_{-r}, \cdots, x_0, \cdots, x_r)=x_0$; otherwise the RMT is called active.
\end{definition}
The first row of Table~\ref{Table:ECA} shows the \emph{RMTs}. For a $d$-state CA, the total number of RMTs is $d^{2r+1}$. Hence, the number of \emph{RMTs} in the case of an ECA is $2^3 = 8$, and the number of rules is $2^8=256$. We often omit commas within an \emph{RMT} if it does not lead to any confusion; that is, an RMT $(x_1, x_2, x_3)$ is generally written as $x_1x_2x_3$. In ECA rule $30$, the RMTs $0\,(000)$, $2\,(010)$, $3\,(011)$ and $5\,(101)$ are passive (see Table~\ref{Table:ECA}), whereas the remaining \emph{RMTs} are active.
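To make the notion of RMTs concrete, the following minimal Python sketch decodes an ECA rule number into its $8$-entry truth table and lists its passive RMTs; the helper names (\texttt{rmt\_table}, \texttt{passive\_rmts}) are illustrative only and not taken from any cited work. For rule $30$ it reproduces the passive set $\{0,2,3,5\}$ noted above.
\begin{verbatim}
def rmt_table(rule):
    # {rmt: next state} for an ECA rule; RMT r indexes bit r of the rule number.
    return {r: (rule >> r) & 1 for r in range(8)}

def passive_rmts(rule):
    # RMT r = 4*x_{-1} + 2*x_0 + x_1 is passive when f(r) equals the middle bit x_0.
    table = rmt_table(rule)
    return [r for r in range(8) if table[r] == ((r >> 1) & 1)]

if __name__ == "__main__":
    for rule in (30, 90, 150):
        print(rule, rmt_table(rule), "passive:", passive_rmts(rule))
\end{verbatim}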
\subsection{Wolfram's Classification of Cellular Automata}
A one-dimensional, two-state, three-neighborhood CA has one of the most basic structures yet is capable of displaying sophisticated behavior. As a result, researchers have a strong demand for such CAs in order to study naturally occurring phenomena. The terms ``Wolfram's CAs'' and ``Elementary CAs'' refer to such CAs. After a thorough study of their dynamics, Wolfram categorized the CAs according to their dynamical tendencies. Different scholars have further classified and described ECAs in light of this work, both from a theoretical development and from a practical standpoint~\cite{ppc1,FSSPCA3,mitch98a,Sarkar00Survey,binder1993phase,ninagawa2014classifying,KARI20053,BhattacharjeeNR16}.
The fundamental unit of an elementary cellular automaton is a finite automaton defined over a one-dimensional array. The automaton uses two states, which are updated synchronously based on the automaton's own state and the states of its two nearest neighbors. In~\cite{wolfram84b}, according to the outcome of the system's evolution from a random initial state, Stephen Wolfram proposed classifying cellular automaton rules into four types:
\begin{enumerate}
\item[] \textbf{Class I.} The CA evolves to a homogeneous state,
\item[] \textbf{Class II.} The CA evolves periodically,
\item[] \textbf{Class III.} The CA evolves chaotically,
\item[] \textbf{Class IV.} All remaining cases; a class of complex rules.
\end{enumerate}
This classification was initially only for one-dimensional cellular automata, but it was extended to two-dimensional cellular automata in~\cite{Packa85b}.
Given this classification, it is intuitive to assume that \emph{class IV} cellular automata alone will exhibit gliders, Turing-universality, and comparably complex behaviour. After analyzing the existence of glider-like structures in a \emph{class IV} automaton and their resemblance to structures in Conway's Life, Wolfram states that ``These comparisons lead to the assumption that \emph{class IV} automata are characterized by the possibility for universal computing''.
The number of colors (or distinct states) a cellular automaton can take must also be specified. This number is usually an integer, and the simplest choice is two (binary). In a binary automaton, colors $0$ and $1$ are typically referred to as ``white'' and ``black'', respectively. However, cellular automata over a continuous range of states may also be considered.
\begin{figure}[hbt!]\centering
\includegraphics[width=0.50\textwidth]{CAFigure/rule30}
\caption{Space-time diagram of a one-dimensional cellular automaton (Wolfram rule $30$).}
\label{rule30}
\end{figure}
A space-time diagram of an elementary CA, ECA $30$, is shown in Fig.~\ref{rule30}, where the evolution of the CA starts from a single $1$. White cells denote state $0$, whereas black cells denote state $1$.
\subsection{Li-Packard Classification}
Wolfram made the first attempt to categorize each rule's behavior based on observation, providing researchers with a fresh perspective on understanding the dynamics of CAs. The classification, however, does not completely separate the rules of one class from another: there are several rules in which two kinds of behavior are superimposed. For example, a CA may behave chaotically only in some regions, or two chaotic regions may be divided by a wall; such activity is referred to as local chaos. In 1990, Li and Packard slightly modified Wolfram's classification and divided the ECA rules into five categories: null, fixed point, periodic, locally chaotic, and chaotic. This categorization was founded on rule-space analysis, in which the likelihood of a rule being linked to another rule is evaluated. The classes are reported in Table~\ref{Table:Li_Packard}.
\begin{table}[ht]
\centering
\begin{adjustbox}{width=0.8\columnwidth,center}
\begin{tabular}{c|c} \hline
Classes & ECA Rules \\ \hline \hline
Null & 0, 8, 32, 40, 128, 136, 160, 168\\\hline
Fixed Point & \makecell{2, 4, 10, 12, 13, 24, 34, 36, 42, 44, 46, 56, 57, 58,\\
72, 76, 77, 78, 104, 130, 132, 138, 140, 152, 162, 164,\\
170, 172, 184, 200, 204, 232} \\\hline
Periodic & \makecell{1, 3, 5, 6, 7, 9, 11, 14, 15, 19, 23, 25, 27, 28, 29, 33,\\35, 37, 38, 41, 43, 50, 51, 74, 108, 131, 133, 134,\\ 142, 156, 178} \\\hline
Locally Chaotic & 26, 73, 154\\\hline
Chaotic & 18, 22, 30, 45, 54, 60, 90, 105, 106, 129, 137, 146, 150, 161\\\hline
\end{tabular}
\end{adjustbox}
\caption{Elementary CA rules classification according to Li $\&$ Packard~\cite{Li90thestructure}.}
\label{Table:Li_Packard}
\end{table}
Among the five classes of ECA rules, one new class is introduced, namely the locally chaotic class, which displays an intriguing case of CA behavior. Chaos is typically associated with infinitely long CAs; this class, in contrast to the other classes that correspond to Wolfram's classes, represents chaos confined within a small region. Information contributed by a cell's neighbors cannot cross a wall: the information is impeded at this obstruction, sometimes referred to as the blocking word. Although the behavior inside a bounded region is, in principle, predictable, the authors have characterized it as chaotic by looking at how information spreads among the cells inside that region. Rule $154$ is an example of a locally chaotic ECA rule.
\subsection{von Neumann's Universal Constructor}
During the Second World War, Robert Oppenheimer, Enrico Fermi, Niels Bohr, Hans Bethe, Richard Feynman, Eugene Wigner, John von Neumann, and several more scientists collaborated on the Manhattan project at Los Alamos. The team was remarkable because it consisted of many of the top scientists of the time, and the scientific objective of the project was also quite clear: building the bomb in a race against the Nazis. Around the same period, however, some of these scientists started to consider the challenges of computing, and computer simulation came to serve as a bridge between theory and experimentation. John von Neumann was also interested in self-replicating machines and cellular automata around that time.
The mathematician Stanislaw Ulam, who conducted the initial experiments on one of the earliest stored-program computers at Los Alamos, is credited as the true originator of research on artificial growth and evolution. Ulam was fascinated by the growth patterns of geometrical forms in two and three dimensions produced by remarkably simple recursive procedures. John von Neumann was motivated by Ulam's inventions, which helped him develop the first cellular automaton model. From there the initial notion of artificial life emerges: one of the fundamental concepts in artificial life is that complexity can emerge through the combination of basic principles.
John von Neumann developed the idea of cellular automata at the start of the 1950s~\cite{Neuma66}. von Neumann was interested in the possibility of reproduction and explored the logical prerequisites necessary for non-trivial self-reproduction. He had originally developed a kinematic model of a robot floating in a lake containing all the parts required to construct further robots; he imagined the robot gathering parts and putting them together to create a duplicate of itself. In essence, von Neumann was successful in demonstrating how the floating robot might replicate itself, but a large portion of his analysis was hampered by the issue of motion in the lake. von Neumann therefore adopted Ulam's method and pushed the abstraction further.
Instead of simulating self-reproduction at the genetic level, his approach was to abstract its logical form. If self-reproduction can be described as a logical series of processes, then a universal Turing-machine-like construction capable of self-reproduction exists. John von Neumann defined a two-dimensional cellular automaton with $29$ possible states, where a transition rule is applied to the current cell and its four orthogonal neighbors to determine the next state of each cell.
von Neumann demonstrated that one of life's primary characteristics could be described using logical concepts, rather than relying on some unexplained quality of matter. Sadly, he passed away in 1957 before completing his proof. His contributions were completed and revised by Arthur Burks, who had collaborated with him on the logical design of the EDVAC (one of the earliest computers). John von Neumann is today regarded as the pioneer of the Artificial Life concept for deriving from natural self-reproduction its logical (computational) form.
\subsection{Conway's Game of Life}
In von Neumann's approach~\cite{Neuma66} each cell had $29$ states, so the complexity of computing each cell's state is quite high. Many researchers tried to reduce this computational complexity, that is, to reduce the number of states without compromising the self-replication property of the machine. In 1970, John Horton Conway, a young mathematician at Gonville and Caius College of the University of Cambridge, proposed the Game of Life, which is certainly the best example of the idea that complex worlds can emerge from simple rules. Conway adapted Ulam and von Neumann's approach based on cellular automata. In Conway's Game of Life each cell has only $2$ states, yet the automaton is capable of self-reproduction. The state of each cell (\emph{alive}/\emph{dead}) is the result of a few simple rules applied to the cell and its eight neighbors.
Life's rules are marvelously simple:
\begin{itemize}
\item If a cell has exactly three \emph{alive} neighbors, it will be \emph{alive} in the next generation.
\item If an \emph{alive} cell has exactly two \emph{alive} neighbors, it stays \emph{alive} in the next generation.
\item In every other case (zero, one, or four to eight \emph{alive} neighbors), the cell will be \emph{dead} in the next generation (a minimal sketch of these rules is given after this list).
\end{itemize}
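For concreteness, the following minimal Python/NumPy sketch performs one synchronous Game of Life step using the Moore ($8$-cell) neighborhood; the periodic grid, the grid size, and the name \texttt{life\_step} are illustrative assumptions of the sketch, not part of Conway's original presentation.
\begin{verbatim}
import numpy as np

def life_step(grid):
    # grid: 2-D array of 0 (dead) / 1 (alive); returns the next generation on a
    # periodic grid, counting the eight Moore neighbors of every cell at once.
    neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    # Birth on exactly three alive neighbors; survival on two or three.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

# Usage: the glider displaces itself diagonally once every four steps.
glider = np.zeros((8, 8), dtype=int)
glider[1, 2] = glider[2, 3] = glider[3, 1] = glider[3, 2] = glider[3, 3] = 1
for _ in range(4):
    glider = life_step(glider)
\end{verbatim}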
\begin{figure}[hbt!]\centering
\includegraphics[width=0.8\textwidth]{CAFigure/gol}
\caption{Subsequent stages of the glider pattern in the Game of Life.}
\label{gol}
\end{figure}
Life has been experimented with extensively. Many of the configurations that emerge seem to have a \emph{life} of their own. One of the most remarkable examples of Life's structures is the \emph{glider}, a configuration of period four that displaces itself diagonally. This CA can also perform self-replication by following the above rules.
\subsection{Codd's Cellular Automaton}
Codd's cellular automaton, introduced in 1968 by the British computer scientist Edgar F. Codd~\cite{Codd68}, uses $8$ rather than $29$ states to reproduce the computation and construction universality of von Neumann's CA. Similar to von Neumann's \emph{universal constructor}, Codd demonstrated that it was possible to construct a self-replicating machine in his CA, but he never provided a full implementation.
\begin{figure}[hbt!]\centering
\includegraphics[width=0.8\textwidth]{CAFigure/codd}
\caption{Codd's path-extension machine.}
\label{codd}
\end{figure}
Fig.~\ref{codd} depicts the path-extension instruction of the $8$-state CA introduced by Edgar F. Codd. The loop shown is a self-replicating loop, but the instructions for building a loop might not fit within a loop of that size. Codd's signal sequence, which is needed to build another loop (self-replication), does not appear to allow for this, since the sequence $7-0-1-1-6-0$ is needed merely to lengthen the path by one cell. How might a loop store enough data to build a structure the same size as itself if it takes six cells to store the information needed to generate a single cell of the new machine? The solution is that, if the storage loop is made a perfect square, only the instructions necessary to generate one side and one corner need to be stored; the procedure of making a side and a corner is then repeated four times as these instructions circulate around the loop four times.
The instructions to construct one side and one corner are too lengthy to fit in a loop even when using Codd's signal sequences.
\subsection{Langton's Self-Reproducing Automata}
The writings of von Neumann, Ulam, and Conway had a big influence on Christopher Langton, who later coined the phrase \emph{Artificial Life}. Langton came to Los Alamos in August $1986$ to begin a postdoctoral position at the Center for Nonlinear Studies, and he organized the inaugural ALife workshop at the Los Alamos National Laboratory in September 1987. The Santa Fe Institute, Apple Computer Inc., and the Center for Nonlinear Studies all provided financial support for the program. It gathered $160$ experts from many fields, including anthropology, computer science, biology, and physics, who were all interested in simulating and synthesizing biological systems.
Christopher Langton, another of Burks' students, developed a self-replicating pattern in 1984 based on a very simple configuration of Codd's automaton known as the periodic emitter, which was in turn derived from the periodic pulser organ in von Neumann's 29-state automaton. Langton showed that the ability to construct arbitrary structures is not a requirement for self-reproduction. He changed the meaning of some of Codd's signals, giving them additional power on their own. Instead of modifying array configurations, as in Codd's construction, to change the meaning of signals, the transition rules that govern how the array's configurations behave were changed. By using this method, he shortened the whole sequence required to build one side and one corner to the point that it fits inside the loop that it produces~\cite{langton1984self}.
\begin{figure}[hbt!]\centering
\includegraphics[width=0.7\textwidth]{CAFigure/langton1}
\caption{Self-reproducing loop with initial sequence~\cite{langton1984self}: $7\;0 - 7 \;0 - 7 \;0 - 7 \;0 - 7 \;0 - 7\; 0 - 4 \;0 - 4 \;0$.}
\label{langton1}
\end{figure}
As a result, Codd's requirement that the sequence $7\; 0 - 6\; 0$ be used to extend the data path by one cell is replaced with the single $7\; 0$ signal, which can trigger this extension on its own. Another distinction is that Langton's scheme allows signals to be just three cells apart, whereas Codd's machine required signals to be spaced four cells apart. Likewise, two consecutive $4\; 0$ signals complete a left path extension in place of Codd's required sequence $4\; 0 - 4\; 0 - 5\; 0 - 6\; 0$.
The automaton is based on eight-state cells, which are used:
\begin{enumerate}
\item as information to replicate in the cellular environment, resulting in the birth of a child, and
\item as instructions to be carried out in accordance with the transition rule.
\end{enumerate}
\begin{figure}[hbt!]\centering
\includegraphics[width=0.8\textwidth]{CAFigure/langton2}
\caption{Langton's self-reproducing automaton~\cite{langton1984self}.}
\label{langton2}
\end{figure}
The basic structure successfully replicates itself after $151$ time steps~\cite{langton1984self}. Each of these \emph{loops} then goes on to replicate itself in a similar way, growing a colony of \emph{loops} (see Fig.~\ref{langton2}) in the process. The genotype codes for the elements of a dynamic process in the cell, and it is this dynamic process that is principally responsible for computing the expression of the genotype during development. This experiment illustrates the essence of what happens in natural development.
\section{Artificial life}
In 1986, American computer scientist Christopher Langton coined the phrase \emph{Artificial Life} while planning the inaugural ``Workshop on the Synthesis and Simulation of Living Systems''~\cite{langton86}. Since then, the concept of artificial life has permeated computer science, gaming, artificial intelligence research, and other fields. The Web and the way it enables networked computers to produce complex habitats in which artificial creatures may survive and evolve have played a significant role in its development.
The study of artificial systems that display behavior like that of living things is known as artificial life. It is the endeavor to explain life in all of its conceivable forms, without restriction to the specific instances that have developed on Earth. This comprises computer simulations, biological and chemical investigations, and purely theoretical undertakings. Investigations focus on processes at the molecular, societal, and evolutionary levels~\cite{copeland2004essential,Gotts09,Jeanson2008E,banda2,Bersini94,Sughimura14}. The extraction of the logical form of biological systems is the ultimate objective.
\subsection{Langton's loop}
Christopher Langton developed a two-dimensional, self-replicating cellular automaton in 1984~\cite{langton1984self}. The replicating pattern consists of a loop that contains genomic information. The genetic information is a sequence of cells that flows through the loop's arm and eventually closes to form another loop. Some components of the genetic code are designed to make the loop make three left turns before closing, or else die and cease to reproduce. The colony of loops replicated in an infinite two-dimensional space has no size restriction. This system is regarded as a prime illustration of synthetic self-reproduction.
In this direction, Langton explored some variations of CAs and their abilities~\cite{langton86, LangtonII,langton90}. In the next section Langton's ant, a similar variation of CA, is discussed.
\begin{figure}\centering
\includegraphics[width=0.5\textwidth]{CAFigure/Self_loop_langton}
\captionof{figure}{Self-reproduction of Langton's loop.}
\label{langtonLoop}
\end{figure}
After multiple (more than $250$) iterations, we may observe certain dead loops in the cellular space during the process. The child loop is generated automatically as the genetic information circulates around the loop and flows through its arm. The loop contains six $7$s followed by two $4$s: the arm extends straight ahead, and then turns $90$ degrees counterclockwise in response to the two $4$ commands. Since this pattern of $7$s and $4$s keeps revolving around the loop, the command is repeated until all four sides of the new loop have been formed. The other states are used to advance the construction of new loops. After the parent loop has developed a fully formed child, a $5$ signal is delivered to the parent to instruct it to rotate $90$ degrees and begin developing a child on the opposite side; similarly, a $6$ signal instructs the child to go on and produce children of its own. A loop begins developing the next child once its genome has been used six times: once to construct an arm, once for each of the child's four sides, and once to transfer the genome to the child (its clone). The parent dies once all four children have been produced, leaving behind a spiral shell and an empty genome. The whole colony thus spirals outward from the starting point.
\subsection{Langton's Ant}
Langton's ant is a two-dimensional, $4$-state universal Turing machine invented by Chris Langton in 1986. It is basically an ant sitting on a square lattice of cells that are initially white. The ant moves on the plane and changes the colors of the cells, creating patterns on it. The movement of the ant, however, is not random; it follows the rules below.
\begin{itemize}
\item If the ant is on a black square, it turns right 90 degrees and moves forward one unit.
\item If the ant is on a white square, it turns left 90 degrees and moves forward one unit.
\item When the ant leaves a square, it inverts the color.
\end{itemize}
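A minimal Python sketch of these rules is given below; the grid size, the periodic boundary, and the helper name \texttt{langtons\_ant} are illustrative assumptions of the sketch, and the color convention follows the rules exactly as listed above.
\begin{verbatim}
def langtons_ant(steps=13000, size=101):
    grid = [[0] * size for _ in range(size)]    # 0 = white, 1 = black
    x, y, d = size // 2, size // 2, 0           # d: 0=N, 1=E, 2=S, 3=W
    moves = [(0, -1), (1, 0), (0, 1), (-1, 0)]  # (dx, dy) for N, E, S, W
    for _ in range(steps):
        # black square: turn right; white square: turn left
        d = (d + 1) % 4 if grid[y][x] == 1 else (d - 1) % 4
        grid[y][x] ^= 1                         # invert the color on leaving
        dx, dy = moves[d]
        x, y = (x + dx) % size, (y + dy) % size # periodic boundary for simplicity
    return grid

if __name__ == "__main__":
    pattern = langtons_ant()
    print(sum(map(sum, pattern)), "black cells after 13000 steps")
\end{verbatim}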
As the ant starts its journey, it creates a black-and-white pattern while moving. Initially, the changes are not distinctive, but as the iterations accumulate, an intricate pattern emerges. If we further increase the number of iterations (to approximately $10000$), the ant stops making new patterns and starts repeating its path with a gradual shift. Thus, we obtain an unbounded \emph{highway}-like pattern; the ant keeps moving along this highway and produces the pattern shown below.
\begin{figure}\centering
\includegraphics[width=0.5\textwidth]{CAFigure/ant}
\captionof{figure}{Langton's Ant after $13000$ iterations.}
\label{langtonAnt}
\end{figure}
\section{Summary}
This chapter has presented a brief review of cellular automata and a brief survey of various models that have been developed using cellular automata. To understand their behavior, scientists have investigated these dynamical systems in a variety of ways, and CAs have been classified in several ways to distinguish between their dynamical behaviors. Many researchers have studied the topology of self-replicating cellular automata, which provides insight into Artificial Life. The parametrization approach may be used to forecast the behavior of CAs: by defining parameters one may partially describe their behavior. However, parameters may fail to correctly identify whether a CA is homogeneous, periodic, or chaotic, so developing more precise parameters is a constant need in the research community. Such studies have so far been conducted for traditional CAs; there is no literature on understanding the behavior of Temporally Stochastic CAs (TSCAs). This offers a new field for researchers: to find the dynamics of TSCAs and to develop parameters for them.
\chapter{Temporally Stochastic Elementary Cellular Automata : Classes and Dynamics}
\label{chap3}
\section{Introduction}
During the $1950$s, Alan Turing argued in his article on morphogenesis~\cite{Turing} that the question of randomness is fundamental to understanding the laws of life. To study such natural phenomena, stochastic cellular automata (CAs) have been introduced by several CA researchers~\cite{Fates17, arrighi2013stochastic, fates13}, where the update rules are chosen randomly from a set of rules. Traditionally, however, cellular automata are deterministic and uniform~\cite{BhattacharjeeNR16}, i.e. all the cells of a CA are updated simultaneously by an identical local rule.
In this direction, the present chapter explores another variation of non-conventional CA where the overall system is temporally affected by a noise. Here, at a time step, a cell can be updated using one of two rules, say $f$ and $g$. Rule $f$ can be considered the default rule for the CA, whereas $g$ is temporally applied to the overall system with some probability and acts as a noise in the system. According to Turing (regarding morphogenesis,~\cite{Turing}), the system chooses one of the symmetric directions of evolution, which depicts one of the possible ways to evolve the system with randomness. The proposed framework of cellular automata is suitable for the study of such natural phenomena. We refer to these automata as \emph{temporally stochastic cellular automata}.
In this chapter, we deal with elementary cellular automata (ECAs), which are a one-dimensional array of finite automata. Each automaton takes two states and updates its state in discrete time depending on its own state and the states of its two closest neighbors. All cells update their states synchronously. Wolfram~\cite{wolfram2002new, Wolfram94} has introduced the following general classification of ECA rules:
\begin{itemize}
\item[] Class I: evolving to a homogeneous configuration;
\item[] Class II: evolving periodically;
\item[] Class III: evolving chaotically; and
\item[] Class IV: class of complex rules.
\end{itemize}
Later, Li and Packard identified that some periodic rules are locally chaotic~\cite{Li90thestructure, Genaro13}. Following these classifications, we aim to answer the following questions. When $f$ and $g$ belong to the same class, is it possible that during the evolution of a temporally stochastic CA it shows the behavior of a different class? That is, is it possible that two periodic (resp. chaotic) rules together depict a kind of closeness towards chaos (resp. simplicity)? Further, the behavior of which class will dominate when $f$ and $g$ belong to different classes? Another rich issue is whether it is possible to get a {\em phase transition} at a critical value of the noise (i.e. the probability of rule $g$). Throughout the journey, the classifications of Wolfram~\cite{wolfram2002new, Wolfram94} and Li-Packard~\cite{Li90thestructure} are our points of reference. As we will see, this study is sufficiently rich to provide many worthy examples of these kinds, with potential applications in the study of physical, chemical and biological systems.
Against this background, the chapter is organized as follows. Temporally stochastic cellular automata are introduced in Section~\ref{S2}. The main result of the work is presented in Section~\ref{S3}, followed by detailed results under the temporal noise environment in Section~\ref{S4}; these result sections answer all the questions raised above. Finally, Section~\ref{S5} summarizes this chapter and discusses the wide variety of interesting insights into the temporally stochastic system.
\section{Temporally Stochastic CAs}
\label{S2}
In this work, we consider one-dimensional three-neighborhood binary cellular automata with periodic boundary condition, commonly known as elementary cellular automata (ECAs). The cells are arranged as a ring, and the set of indices that represent the cells is denoted by $\mathcal{L}$ = $\mathbb{Z}/n\mathbb{Z}$, where $n$ is the number of cells. At each time step $t \in \mathbb{N}$, a cell is assigned a state, and we denote by $Q$ = \{$0,1$\} the set of states. The collection of all states at a given time is called a configuration. If $x$ is a configuration, then $x =$($x_i$)$_{i \in \mathcal{L}}$ where $x_i$ is the state of cell $i \in \mathcal{L}$. The set of all configurations is denoted by $Q^{\mathcal{L}}$.
Here, a cell changes its state depending on its left neighbor, itself and its right neighbor. At each time step, the updates are made synchronously according to a local rule $f:$ $Q^3 \rightarrow Q$. Given a local function $f$ and a set of cells $\mathcal{L}$, one can define the global function $G: Q^{\mathcal{L}} \rightarrow Q^{\mathcal{L}}$ such that the image $y = (y_i)_{i \in \mathcal{L}} = G(x)$ of a configuration $x = (x_i)_{i \in \mathcal{L}} \in Q^{\mathcal{L}}$ is given by,
\begin{equation}
\nonumber
\forall i \in \mathcal{L}, y_i = f(x_{i-1},x_i,x_{i+1})
\end{equation}
Each rule $f$ is associated with a `decimal code' $w$, where $w = f(0,0,0) \cdot 2^0 + f(0,0,1) \cdot 2^1 + \cdots + f(1,1,1) \cdot 2^7$, for naming purposes. There are $2^8$ = $256$ ECA rules under the two-state three-neighborhood dependency. Through the use of left/right reflection and $0/1$ complementation, the $256$-rule ECA space can be narrowed down to $88$ classes, each represented by the rule with the smallest number, i.e. the minimal representative ECA rule~\cite{Li90thestructure}. In our work, we consider these $88$ minimal representative ECA rules.
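A minimal Python sketch of how the $256$ rules collapse into $88$ minimal representatives under left/right reflection and $0/1$ complementation is given below; the helper names (\texttt{reflect}, \texttt{complement}, \texttt{minimal\_representative}) are illustrative only.
\begin{verbatim}
def reflect(rule):
    # Left/right reflection: swap the roles of the left and right neighbors.
    table = [(rule >> r) & 1 for r in range(8)]
    out = 0
    for r in range(8):
        a, b, c = (r >> 2) & 1, (r >> 1) & 1, r & 1
        out |= table[(c << 2) | (b << 1) | a] << r
    return out

def complement(rule):
    # 0/1 complementation: complement both the inputs and the output.
    table = [(rule >> r) & 1 for r in range(8)]
    out = 0
    for r in range(8):
        out |= (1 - table[7 - r]) << r
    return out

def minimal_representative(rule):
    # Smallest rule number in the equivalence class of `rule`.
    return min(rule, reflect(rule), complement(rule), complement(reflect(rule)))

# The 256 ECA rules collapse into 88 equivalence classes:
assert len({minimal_representative(r) for r in range(256)}) == 88
\end{verbatim}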
Let us now discuss temporally stochastic cellular automata, where at a time step a cell can be updated using one of two rules, $f$ and $g$. Here, $f$ is the default rule for the CA, whereas $g$ is the noise and is applied with some probability. That is, rule $g$ is applied with probability $\tau \in [0,1]$, whereas rule $f$ is applied with probability $1 - \tau$. We call $\tau$ the {\em temporal noise rate}. This way of looking at these rules makes both of them temporally stochastic. Therefore,
\begin{equation}
\nonumber
y = \begin{cases} G_g (x) & \mbox{with probability } \tau \\ G_f (x) & \mbox{with probability } 1 - \tau \end{cases}
\end{equation}
where $G_g(x)\mid_i = g(x_{i-1},x_i,x_{i+1})$ and $G_f(x)\mid_i = f(x_{i-1},x_i,x_{i+1})$. We write $(f,g)[\tau]$ to denote the proposed system specification.
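The following minimal Python/NumPy sketch illustrates one synchronous update of $(f,g)[\tau]$ with periodic boundary condition; the function names \texttt{eca\_step} and \texttt{tsca\_step} are illustrative assumptions, not part of any cited implementation.
\begin{verbatim}
import numpy as np

def eca_step(x, rule):
    # Apply an ECA rule synchronously to configuration x (1-D array of 0/1),
    # with periodic boundary: cell i sees (x[i-1], x[i], x[i+1]).
    left, right = np.roll(x, 1), np.roll(x, -1)
    rmt = 4 * left + 2 * x + right                       # RMT seen by each cell
    table = np.array([(rule >> r) & 1 for r in range(8)])
    return table[rmt]

def tsca_step(x, f, g, tau, rng):
    # One step of (f, g)[tau]: rule g with probability tau, rule f otherwise.
    return eca_step(x, g) if rng.random() < tau else eca_step(x, f)
\end{verbatim}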
In this study, we are particularly interested in the {\em qualitative} transformations (i.e. visible changes in the space-time diagram) that a cellular system may undergo when one progressively varies the noise rate. In this qualitative approach, we look at the evolution of the configurations, i.e. the space-time diagrams, by eye over a few time steps. This traditional approach provides a good visual comparison. However, we also study the ratio of cells in state one in a formal {\em quantitative} approach, to capture phase-transition-like dynamics more properly. The density of a configuration $x \in Q^{\mathcal{L}}$ can be written as $d(x) = \#_1 x / |x|$, where $\#_1 x$ is the number of $1$s in the configuration $x$ and $|x|$ is the size of the configuration. Without loss of generality, we start with a configuration of initial density $0.5$ constructed using a {\em Bernoulli} process. In both the qualitative and quantitative studies, we start with a configuration of CA size $500$. We let the system evolve for $2000$ time steps and average the density parameter over the last $100$ time steps. It is also possible that the system shows different behavior in different runs; therefore, for each instance (i.e. for each $(f,g)[\tau]$), we study the cellular system's dynamics $25$ times. Although the results reported here are based on CA size $500$, we repeated the experiment for various other sizes to cross-verify the results and observed that the dynamical behavior of an automaton remains almost the same. Therefore, in general, we claim that the cellular systems almost always show similar dynamics for any CA size.
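As an illustration of this protocol, the following sketch evolves a Bernoulli($0.5$) configuration of $500$ cells for $2000$ steps and averages the density over the last $100$ steps; it reuses the \texttt{tsca\_step} helper from the previous sketch, and all names are illustrative.
\begin{verbatim}
import numpy as np

def stationary_density(f, g, tau, n=500, steps=2000, window=100, seed=None):
    rng = np.random.default_rng(seed)
    x = (rng.random(n) < 0.5).astype(int)     # initial density 0.5 (Bernoulli)
    densities = []
    for t in range(steps):
        x = tsca_step(x, f, g, tau, rng)      # helper from the previous sketch
        if t >= steps - window:
            densities.append(x.mean())        # d(x) = #1(x) / |x|
    return float(np.mean(densities))

# As in the protocol above, each instance (f, g)[tau] is averaged over 25 runs:
# np.mean([stationary_density(22, 18, 0.5) for _ in range(25)])
\end{verbatim}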
\section{Main result}
\label{S3}
As this study uses Wolfram's~\cite{wolfram2002new, Wolfram94} and Li-Packard's~\cite{Li90thestructure} classifications, we first note down Wolfram's classification of the $88$ minimal representative ECAs in Table~\ref{Table1}. According to Li-Packard, Wolfram's class II CAs show fixed-point, periodic and locally chaotic behavior, which are respectively marked in plain black, underlined and bold face in Table~\ref{Table1}. Here, $f$ and $g$ are taken from the set of $88$ minimal representative ECAs. Hence, after considering the exchange symmetry between $f$ and $g$, it is sufficient to work with $^{88}C_2$ = $\frac{88 \times 87}{2}$ = $3828$ couples $(f,g)$.
\begin{table}
\centering
\begin{adjustbox}{width=\columnwidth,center}
\begin{tabular}{ccccccccccccc} \hline
Class I & 0 & 8 & 32 & 40 & 128 & 136 & 160 & 168 & & & & \\
Class II & 2 & 4 & 10 & 12 & 13 & 24 & 34 & 36 & 42 & 44 & 46 & 56\\
& 57 & 58 & 72 & 76 & 77 & 78 & 104 & 130 & 132 & 138 & 140 & 152\\
& 162 & 164 & 170 & 172 & 184 & 200 & 204 & 232 & \underline{1} & \underline{3} & \underline{5} & \underline{6}\\
& \underline{7} & \underline{9} & \underline{11} & \underline{14} & \underline{15} & \underline{19} & \underline{23} & \underline{25} & \underline{27} & \underline{28} & \underline{29} & \underline{33}\\
& \underline{35} & \underline{37} & \underline{38} & \underline{43} & \underline{50} & \underline{51} & \underline{62} & \underline{74} & \underline{94} & \underline{108} & \underline{134} & \underline{142}\\
& \underline{156} & \underline{178} & \textbf{26} & \textbf{73} & \textbf{154} & & & & & & & \\
Class III & 18 & 22 & 30 & 45 & 60 & 90 & 105 & 122 & 126 & 146 & 150 & \\
Class IV & 41 & 54 & 106 & 110 & & & & & & & & \\
\hline
\end{tabular}
\end{adjustbox}
\caption{Wolfram's classification for $88$ minimal representative ECAs.}
\label{Table1}
\end{table}
Therefore, this study captures the dynamics of the $3828$ couples $(f,g)$ following the qualitative and (where necessary) quantitative approach, where the dynamics of $(f,g)$ is mapped onto the most similar of Wolfram's classes. However, to make this first study easier, during the mapping we merge Wolfram's classes III and IV, i.e. chaotic and complex behavior, together with the locally chaotic behavior (of Li-Packard's classification). Obviously, the dynamics of chaotic and locally chaotic rules are theoretically different; however, in the space-time diagrams, locally chaotic rules show more closeness to chaotic and complex dynamics than to periodic dynamics. Hence, we map the dynamics of $(f,g)$ into the following three classes.
\begin{itemize}
\item[-] Class A, which is similar to Wolfram's class I (or, Li-Packard's Uniform (U) behavior);
\item[-] Class B, which is similar to Wolfram's class II, except three Locally Chaotic (LC) dynamics (that is, Li-Packard's Fixed Point (FP) and Periodic (P) behavior); and
\item[-] Class C, which is similar to Wolfram's class III (Chaotic (C)) and class IV (Complex (CO)) along with those three locally chaotic behavior.
\end{itemize}
In the later part of the work, we shall use these class names (A, B, C) for individual ECA rules as well; that is, rules of class I will be treated as rules of class A, and so on. However, we are not always able to map the dynamics of a couple $(f,g)$ into the above three classes: some couples $(f,g)$ show the well-known {\em phase transition}~\cite{ROY2019600} and {\em transition of class} (similar to~\cite{Martinez12}) dynamics.
Overall, we have performed a large number of experiments on the pairs of rules. The summary of the outcome is noted in Fig.~\ref{Fig1}, where classes A, B and C are marked in yellow, orange and red respectively. The $88$ rules are plotted horizontally and vertically, where the vertical line shows the $f$ rules and the horizontal line represents $g$. Each box on a line (vertical or horizontal) represents a rule. The rules are numbered as per the sequence of Table~\ref{Table1}; that is, rule $0$ is the first rule and rule $110$ is the $88$th rule. A cell $(i,j)$ in the figure depicts the behavior of the stochastic CA $(f,g)$ where $f$ and $g$ are the rules represented by the $i$th box on the vertical line and the $j$th box on the horizontal line respectively. Here, we consider the exchange symmetry between $f$ and $g$; so, while the stochastic CA $(f,g)$ is plotted, the place for the CA $(g,f)$ remains blank (white) in Fig.~\ref{Fig1}.
Additionally, the place for the CA $(f,f)$ is kept blank (white) in Fig.~\ref{Fig1}. To illustrate the above discussion clearly, the following matrix depicts a partial representation of Fig.~\ref{Fig1}.
\begin{table}[!htbp]
\centering
\scriptsize
\begin{tabular}{c|cccccc} \hline
& 0 & 8 & 32 & 40 & 128 & $\cdots$ \\ \hline
0 & & & & & & \\
8 & (8,0) & & & & & \\
32 & (32,0) & (32,8) & & & & \\
40 & (40,0) & (40,8) & (40,32) & & & \\
128 & (128,0) & (128,8) & (128,32) & (128,40) & & \\
$\vdots $ & $\vdots $ & $\vdots $ & $\vdots $ & $\vdots $ & $\vdots $ & \\
\end{tabular}
\end{table}
Now, there are two possibilities for a couple $(f,g)$: $f$ and $g$ belong to the same class, or $f$ and $g$ are from different classes. We denote the classes of $f$ and $g$ by $\mathcal{C}$($f$) and $\mathcal{C}$($g$) respectively, and the class of $(f,g)$ is denoted by $\mathcal{C}$(($f,g$)). According to Fig.~\ref{Fig1}, the following is the rich set of observations regarding the dynamics of these temporally stochastic rules.
\begin{figure}[hbt!]\centering
\includegraphics[width=5.8in]{TEM_IMAGE/Diagram10.pdf}
\caption{Summarized behavior of temporally stochastic cellular automata.}
\label{Fig1}
\end{figure}
\begin{itemize}
\item[1.] If $\mathcal{C}$($f$) = $\mathcal{C}$($g$), then the first possibility is $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($f$), a dynamics observed in a large number of stochastic CAs ($f,g$). Under this case, a CA ($f,g$) with $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = class A evolves to a homogeneous configuration, as ECA $f$ and ECA $g$ do.
\item[2.] When $\mathcal{C}$($f$) = $\mathcal{C}$($g$), it may also happen that $\mathcal{C}$(($f,g$)) $\neq$ $\mathcal{C}$($f$). That is, these stochastic CAs ($f,g$) are massively affected by the noise. Under this umbrella of dynamics, the two most interesting observations are:
\begin{itemize}
\item[-] A couple of two periodic rules (class B) shows a kind of closeness towards chaotic/complex/locally chaotic dynamics (class C). Observe that, in Fig.~\ref{Fig1}, many couples of orange CAs show red dynamics.
\item[-] On the contrary, a couple of two periodic rules (class B) may show uniform dynamics (evolving to a homogeneous configuration, i.e. class A). See how many couples of orange CAs depict yellow dynamics in Fig.~\ref{Fig1}.
\end{itemize}
\item[3.] Next, a stochastic CA ($f,g$) with $\mathcal{C}$($f$) $\neq$ $\mathcal{C}$($g$) may show dynamics in which one of the rules' classes dominates, i.e. $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($f$) or $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($g$). Under this case, a CA ($f,g$) with $\mathcal{C}$($f$) = class A and $\mathcal{C}$($g$) = class C shows a behavior with $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($f$). Similarly, a stochastic CA ($f,g$) with $\mathcal{C}$($f$) = class B and $\mathcal{C}$($g$) = class C shows dynamics with $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($g$); see Fig.~\ref{Fig1}.
\item[4.] On the other hand, a stochastic CA ($f,g$) with $\mathcal{C}$($f$) $\neq$ $\mathcal{C}$($g$) may show dynamics in which neither rule's class dominates, i.e. $\mathcal{C}$(($f,g$)) $\neq$ $\mathcal{C}$($f$) and $\mathcal{C}$(($f,g$)) $\neq$ $\mathcal{C}$($g$). Here, a CA ($f,g$) with $\mathcal{C}$($f$) = class A and $\mathcal{C}$($g$) = class C may show a behavior with $\mathcal{C}$(($f,g$)) = class B. Similarly, a CA ($f,g$) with $\mathcal{C}$($f$) = class B and $\mathcal{C}$($g$) = class C may show dynamics with $\mathcal{C}$(($f,g$)) = class A; see Fig.~\ref{Fig1}.
\end{itemize}
So far we have not mentioned the temporal noise rate ($\tau$); that is, for the above cases, if we progressively vary the temporal noise rate, the cellular system's dynamics remains unchanged.
To sum up, these stochastic CAs ($f,g$) are not sensitive to the temporal noise rate ($\tau$)\footnote{Note that the cellular system may behave differently for a sufficiently small value of $\tau$ ($\tau \approx 0.01$).}. However, there are cases (below) where the system is $\tau$ sensitive.
\begin{itemize}
\item[5.] Interestingly, some stochastic CAs ($f,g$) show an abrupt change of behavior at a critical value of the temporal noise rate ($\tau$). This type of brutal change of behavior is well known as a second-order {\em phase transition}. In this case, there exists a critical temporal noise rate, say $\tau_c$, which separates a behavior where the system converges to $0^{\mathcal{L}}$ (passive phase) from a behavior with a stationary non-zero density (active phase). In Fig.~\ref{Fig1}, the CAs ($f,g$) with this phase-transition behavior are marked in blue (a sketch of how $\tau_c$ can be located numerically is given after this list).
\item[6.] Lastly, for a set of stochastic CAs ($f,g$), the class dynamics of the system changes after a critical value of $\tau$, say $\tau_t$. That is, $\mathcal{C}$(($f,g$))[$\tau$] $\neq$ $\mathcal{C}$(($f,g$))[$\tau'$] where $\tau \in$ [0,$\tau_t$] and $\tau' \in$ [$\tau_t$,1]. Here, a CA ($f,g$) with $\mathcal{C}$($f$) = class B and $\mathcal{C}$($g$) = class C shows periodic behavior, but slowly transforms into chaotic dynamics after the critical value of noise $\tau_t$. Fig.~\ref{Fig1} marks this kind of behavior in black. In the rest of the chapter, we will refer to this dynamics as {\em class transition}.
\end{itemize}
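As referenced in item 5 above, the following sketch indicates how $\tau_c$ could be located numerically by sweeping $\tau$ and recording the stationary density; it reuses the \texttt{stationary\_density} helper from the earlier sketch in Section~\ref{S2}, and the grid of $\tau$ values and the number of runs are placeholders, not the exact protocol used for Fig.~\ref{Fig1}.
\begin{verbatim}
import numpy as np

def density_profile(f, g, taus=np.linspace(0.0, 1.0, 21), runs=25):
    # Mean stationary density of (f, g)[tau] as a function of the noise rate tau,
    # using stationary_density from the earlier sketch.
    return [(tau, np.mean([stationary_density(f, g, tau) for _ in range(runs)]))
            for tau in taus]

# A density that stays (close to) zero on one side of some tau and becomes
# non-zero on the other side marks the passive and active phases; the value
# of tau separating them estimates the critical noise rate tau_c.
\end{verbatim}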
\begin{figure*}[hbt!]
\begin{center}
\scalebox{0.8}{
\begin{tabular}{ccccc}
ECA 22 & ECA 18 & (22,18)[0.1] & (22,18)[0.5] & (22,18)[0.9] \\[6pt]
\includegraphics[width=31mm]{TEM_IMAGE/NR22-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR18-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR22_18_1-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR22_18_5-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR22_18_9-eps-converted-to.pdf} \\
ECA 150 & ECA 126 & (150,126)[0.1] & (150,126)[0.5] & (150,126)[0.9] \\
\includegraphics[width=31mm]{TEM_IMAGE/NR150-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR126-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR150_126_1-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR150_126_5-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR150_126_9-eps-converted-to.pdf} \\
ECA 43 & ECA 77 & (43,77)[0.1] & (43,77)[0.5] & (43,77)[0.9] \\
\includegraphics[width=31mm]{TEM_IMAGE/NR43-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR77-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR43_77_1-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR43_77_5-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR43_77_9-eps-converted-to.pdf} \\
\end{tabular}}
\caption{Stochastic CAs ($f,g$) dynamics when $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($f$) = $\mathcal{C}$($g$).}
\label{Fig2}
\end{center}
\end{figure*}
\begin{figure*}[hbt!]
\begin{center}
\scalebox{0.8}{
\begin{tabular}{ccccc}
ECA 33 & ECA 5 & (33,5)[0.1] & (33,5)[0.5] & (33,5)[0.9] \\[6pt]
\includegraphics[width=31mm]{TEM_IMAGE/Added/rule33.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/rule5v1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_33_5_0.1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_33_5_0.5.png} & \includegraphics[width=31mm]{TEM_IMAGE//Added/fgT_33_5_0.8999999999999999.png} \\
ECA 45 & ECA 30 & (45,30)[0.1] & (45,30)[0.5] & (45,30)[0.9] \\[6pt]
\includegraphics[width=31mm]{TEM_IMAGE/Added/rule45.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/rule30v1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_45_30_0.1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_45_30_0.5.png} & \includegraphics[width=31mm]{TEM_IMAGE//Added/fgT_45_30_0.8999999999999999.png} \\
ECA 172 & ECA 140 & (172,140)[0.1] & (172,140)[0.5] & (172,140)[0.9] \\[6pt]
\includegraphics[width=31mm]{TEM_IMAGE/Added/rule172.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/rule140v1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_172_140_0.1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_172_140_0.5.png} & \includegraphics[width=31mm]{TEM_IMAGE//Added/fgT_172_140_0.8999999999999999.png} \\
\end{tabular}}
\caption{Stochastic CAs ($f,g$) dynamics when $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($f$) = $\mathcal{C}$($g$).}
\label{TSCA1}
\end{center}
\end{figure*}
The above discussion indicates the rich set of possibilities in the dynamics of temporally stochastic CAs. In the next section, we revisit these dynamics with examples and in more detail.
\section{Detailed results}
\label{S4}
Let us now detail the results presented in the previous section. Here we pick suitable examples for the different cases to illustrate the behavior of the automaton.
\begin{figure*}[hbt!]
\begin{center}
\scalebox{0.8}{
\begin{tabular}{ccccc}
ECA 164 & ECA 131 & (164,131)[0.1] & (164,131)[0.2] & (164,131)[0.3] \\[6pt]
\includegraphics[width=31mm]{TEM_IMAGE/NR164-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR131-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR164_131_1-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR164_131_2-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR164_131_3-eps-converted-to.pdf} \\
ECA 164 & ECA 13 & (164,13)[0.1] & (164,13)[0.2] & (164,13)[0.3] \\
\includegraphics[width=31mm]{TEM_IMAGE/NR164-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR13-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR164_13_1-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR164_13_2-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR164_13_3-eps-converted-to.pdf} \\
ECA 200 & ECA 130 & (200,130)[0.1] & (200,130)[0.2] & (200,130)[0.3] \\
\includegraphics[width=31mm]{TEM_IMAGE/NR200-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR130-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR200_130_1-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR200_130_2-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR200_130_3-eps-converted-to.pdf} \\
\end{tabular}}
\caption{Stochastic CAs ($f,g$) dynamics when $\mathcal{C}$(($f,g$)) $\neq$ $\mathcal{C}$($f$) and $\mathcal{C}$($f$) = $\mathcal{C}$($g$).}
\label{Fig3}
\end{center}
\end{figure*}
\begin{figure*}[hbt!]
\begin{center}
\scalebox{0.8}{
\begin{tabular}{ccccc}
ECA 9 & ECA 77 & (9,77)[0.1] & (9,77)[0.5] & (9,77)[0.9] \\[6pt]
\includegraphics[width=31mm]{TEM_IMAGE/Added/rule9.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/rule77v1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_9_77_0.1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_9_77_0.5.png} & \includegraphics[width=31mm]{TEM_IMAGE//Added/fgT_9_77_0.8999999999999999.png} \\
ECA 130 & ECA 13 & (130,13)[0.1] & (130,13)[0.5] & (130,13)[0.9] \\[6pt]
\includegraphics[width=31mm]{TEM_IMAGE/Added/rule130.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/rule13v1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_130_13_0.1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_130_13_0.5.png} & \includegraphics[width=31mm]{TEM_IMAGE//Added/fgT_130_13_0.8999999999999999.png} \\
ECA 172 & ECA 77 & (172,77)[0.1] & (172,77)[0.5] & (172,77)[0.9] \\[6pt]
\includegraphics[width=31mm]{TEM_IMAGE/Added/rule172.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/rule77v1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_172_77_0.1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_172_77_0.5.png} & \includegraphics[width=31mm]{TEM_IMAGE//Added/fgT_172_77_0.8999999999999999.png} \\
\end{tabular}}
\caption{Stochastic CAs ($f,g$) dynamics when $\mathcal{C}$(($f,g$)) $\neq$ $\mathcal{C}$($f$) and $\mathcal{C}$($f$) = $\mathcal{C}$($g$).}
\label{TSCA2}
\end{center}
\end{figure*}
\begin{figure*}[hbt!]
\begin{center}
\scalebox{0.8}{
\begin{tabular}{ccccc}
ECA 77 & ECA 130 & (77,130)[0.1] & (77,130)[0.5] & (77,130)[0.9] \\[6pt]
\includegraphics[width=31mm]{TEM_IMAGE/Added/rule77.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/rule130v1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_77_130_0.1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_77_130_0.5.png} & \includegraphics[width=31mm]{TEM_IMAGE//Added/fgT_77_130_0.8999999999999999.png} \\
ECA 38 & ECA 232 & (38,232)[0.1] & (38,232)[0.5] & (38,232)[0.9] \\[6pt]
\includegraphics[width=31mm]{TEM_IMAGE/Added/rule38.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/rule232v1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_38_232_0.1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_38_232_0.5.png} & \includegraphics[width=31mm]{TEM_IMAGE//Added/fgT_38_232_0.8999999999999999.png} \\
ECA 134 & ECA 164 & (134,164)[0.1] & (134,164)[0.5] & (134,164)[0.9] \\[6pt]
\includegraphics[width=31mm]{TEM_IMAGE/Added/rule134.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/rule164v1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_134_164_0.1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_134_164_0.5.png} & \includegraphics[width=31mm]{TEM_IMAGE//Added/fgT_134_164_0.8999999999999999.png} \\
\end{tabular}}
\caption{Stochastic CAs ($f,g$) dynamics when $\mathcal{C}$(($f,g$)) $\neq$ $\mathcal{C}$($f$) and $\mathcal{C}$($f$) = $\mathcal{C}$($g$).}
\label{TSCA3}
\end{center}
\end{figure*}
\begin{figure*}[hbt!]
\begin{center}
\begin{adjustbox}{width=\columnwidth,center}
\begin{tabular}{ccccc}
ECA 122 & ECA 37 & (122,37)[0.1] & (122,37)[0.5] & (122,37)[0.9] \\ [6pt]
\includegraphics[width=31mm]{TEM_IMAGE/NR122-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR37-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR122_37_1-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR122_37_5-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR122_37_9-eps-converted-to.pdf} \\
ECA 22 & ECA 7 & (22,7)[0.1] & (22,7)[0.5] & (22,7)[0.9] \\
\includegraphics[width=31mm]{TEM_IMAGE/NR22X-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR7-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR22_7_2-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR22_7_5-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR22_7_9-eps-converted-to.pdf} \\
ECA 22 & ECA 128 & (22,128)[0.1] & (22,128)[0.5] & (22,128)[0.9] \\
\includegraphics[width=31mm]{TEM_IMAGE/DD22-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/DD128-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/DD22_128_1-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/DD22_128_5-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/DD22_128_9-eps-converted-to.pdf} \\
ECA 11 & ECA 8 & (11,8)[0.1] & (11,8)[0.5] & (11,8)[0.9] \\
\includegraphics[width=31mm]{TEM_IMAGE/DD11-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/DD8-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/DD11_8_1-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/DD11_8_5-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/DD11_8_9-eps-converted-to.pdf} \\
ECA 44 & ECA 40 & (44,40)[0.1] & (44,40)[0.5] & (44,40)[0.9] \\
\includegraphics[width=31mm]{TEM_IMAGE/DD44-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/DD40-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/DD44_40_1-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/DD44_40_5-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/DD44_40_9-eps-converted-to.pdf} \\
\end{tabular}
\end{adjustbox}
\caption{Stochastic CAs ($f,g$) dynamics where either $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($f$) or $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($g$). Here, $\mathcal{C}$($f$) $\neq$ $\mathcal{C}$($g$).}
\label{Fig4}
\end{center}
\end{figure*}
\begin{figure*}[hbt!]
\begin{center}
\scalebox{0.8}{
\begin{tabular}{ccccc}
ECA 41 & ECA 23 & (41,23)[0.1] & (41,23)[0.5] & (41,23)[0.9] \\[6pt]
\includegraphics[width=31mm]{TEM_IMAGE/Added/rule41.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/rule23v1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_41_23_0.1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_41_23_0.5.png} & \includegraphics[width=31mm]{TEM_IMAGE//Added/fgT_41_23_0.8999999999999999.png} \\
ECA 105 & ECA 72 & (105,72)[0.1] & (105,72)[0.5] & (105,72)[0.9] \\[6pt]
\includegraphics[width=31mm]{TEM_IMAGE/Added/rule105.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/rule72v1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_105_72_0.1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_105_72_0.5.png} & \includegraphics[width=31mm]{TEM_IMAGE//Added/fgT_105_72_0.8999999999999999.png} \\
ECA 45 & ECA 162 & (45,162)[0.1] & (45,162)[0.5] & (45,162)[0.9] \\[6pt]
\includegraphics[width=31mm]{TEM_IMAGE/Added/rule45.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/rule162v1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_45_162_0.1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_45_162_0.5.png} & \includegraphics[width=31mm]{TEM_IMAGE//Added/fgT_45_162_0.8999999999999999.png} \\
ECA 126 & ECA 78 & (126,78)[0.1] & (126,78)[0.5] & (126,78)[0.9] \\[6pt]
\includegraphics[width=31mm]{TEM_IMAGE/Added/rule126.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/rule78v1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_126_78_0.1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_126_78_0.5.png} & \includegraphics[width=31mm]{TEM_IMAGE//Added/fgT_126_78_0.8999999999999999.png} \\
\end{tabular}}
\caption{Stochastic CAs ($f,g$) dynamics where either $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($f$) or $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($g$). Here, $\mathcal{C}$($f$) $\neq$ $\mathcal{C}$($g$).}
\label{TSCA4}
\end{center}
\end{figure*}
\begin{figure*}[hbt!]
\begin{center}
\scalebox{0.8}{
\begin{tabular}{ccccc}
ECA 13 & ECA 32 & (13,32)[0.1] & (13,32)[0.5] & (13,32)[0.9] \\[6pt]
\includegraphics[width=31mm]{TEM_IMAGE/Added/rule13.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/rule32v1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_13_32_0.1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_13_32_0.5.png} & \includegraphics[width=31mm]{TEM_IMAGE//Added/fgT_13_32_0.8999999999999999.png} \\
ECA 146 & ECA 40 & (146,40)[0.1] & (146,40)[0.5] & (146,40)[0.9] \\[6pt]
\includegraphics[width=31mm]{TEM_IMAGE/Added/rule146.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/rule40v1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_146_40_0.1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_146_40_0.5.png} & \includegraphics[width=31mm]{TEM_IMAGE//Added/fgT_146_40_0.8999999999999999.png} \\
ECA 164 & ECA 136 & (164,136)[0.1] & (164,136)[0.5] & (164,136)[0.9] \\[6pt]
\includegraphics[width=31mm]{TEM_IMAGE/Added/rule164.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/rule136v1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_164_136_0.1.png} & \includegraphics[width=31mm]{TEM_IMAGE/Added/fgT_164_136_0.5.png} & \includegraphics[width=31mm]{TEM_IMAGE//Added/fgT_164_136_0.8999999999999999.png} \\
\end{tabular}}
\caption{Stochastic CAs ($f,g$) dynamics where either $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($f$) or $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($g$). Here, $\mathcal{C}$($f$) $\neq$ $\mathcal{C}$($g$).}
\label{TSCA5}
\end{center}
\end{figure*}
\subsection{Dynamics when $\mathcal{C}$($f$) = $\mathcal{C}$($g$)}
Let us first explore the situation where $\mathcal{C}$($f$) = $\mathcal{C}$($g$). Under this setting, we have observed two kinds of results in our experiments.
\begin{itemize}
\item $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($f$); and
\item $\mathcal{C}$(($f,g$)) $\neq$ $\mathcal{C}$($f$).
\end{itemize}
Whenever $f$ and $g$ are both chosen from class A, the resultant dynamics remains the same (in class A). On the other hand, only in some of the cases where $f$ and $g$ are both chosen either from class B or from class C does the stochastic CA behave similarly, that is, $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($f$). A close observation of Fig.~\ref{Fig1} reveals such $f$ and $g$. Here, we show six such example behaviors for ($f,g$) = ($22,18$), ($150,126$), ($43,77$), ($33,5$), ($45,30$) and ($172,140$).
Fig.~\ref{Fig2} shows the space-time diagrams of ECA $22$ and ECA $18$. Here, both ECAs individually show chaotic behavior (Wolfram's class III), i.e. class C dynamics according to our renaming. Now, if rule $22$ is considered as the default rule ($f$) and rule $18$ is added as noise ($g$) with different probabilities, the resultant dynamics remains chaotic. As a sample, we show the space-time diagrams of the stochastic CA for $\tau \in$ \{$0.1, 0.5, 0.9$\}. Note also that if we progressively change $\tau$, the cellular system's dynamics remains unchanged. Similarly, the temporally stochastic CAs ($150,126$) and ($45,30$) for $\tau \in$ \{$0.1,0.5,0.9$\} also exhibit chaotic dynamics, and the ECAs $150$, $126$, $45$ and $30$ individually show chaotic dynamics (see Fig.~\ref{Fig2} and Fig.~\ref{TSCA1}). Next, we choose both $f$ and $g$ from class B. Fig.~\ref{Fig2} depicts the space-time diagrams of ECA $43$ and ECA $77$, both of which individually show periodic behavior (Wolfram's class II). Now, if rule $43$ is considered as the default rule ($f$) and rule $77$ is added as noise ($g$), the resultant dynamics remains periodic (left shift). As evidence, Fig.~\ref{Fig2} shows the space-time diagrams of the stochastic CA ($43,77$) for $\tau \in$ \{$0.1,0.5,0.9$\}. Similarly, the temporally stochastic CAs ($33,5$) and ($172,140$) for $\tau \in$ \{$0.1,0.5,0.9$\} also exhibit periodic behavior (Wolfram's class II), and the ECAs $33$, $5$, $172$ and $140$ individually show periodic dynamics (see Fig.~\ref{TSCA1}).
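A minimal Python sketch of this temporal mixing is given below. It is only an illustration: the helper name \texttt{eca\_step}, the lattice size, the number of steps and the random seed are our own illustrative choices, not the setup of the reported experiments. At every time step, one of the two ECA rules is selected, the noise $g$ with probability $\tau$ and the default rule $f$ otherwise, and applied synchronously to all cells under periodic boundary conditions.
\begin{verbatim}
import random

def eca_step(config, rule):
    # One synchronous update of an ECA (Wolfram numbering),
    # periodic boundary conditions.
    n = len(config)
    return [(rule >> (4 * config[(i - 1) % n]
                      + 2 * config[i]
                      + config[(i + 1) % n])) & 1
            for i in range(n)]

def tsca_run(f, g, tau, n=100, steps=100, seed=None):
    # Evolve the temporally stochastic CA (f, g)[tau] from a random
    # configuration; returns the space-time data as a list of rows.
    rng = random.Random(seed)
    config = [rng.randint(0, 1) for _ in range(n)]
    history = [config]
    for _ in range(steps):
        rule = g if rng.random() < tau else f   # temporal noise
        config = eca_step(config, rule)
        history.append(config)
    return history

if __name__ == "__main__":
    # e.g. the couple (22, 18) with tau = 0.5 discussed above
    for row in tsca_run(22, 18, tau=0.5, n=60, steps=30, seed=1):
        print("".join(".#"[c] for c in row))
\end{verbatim}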
\begin{figure*}[hbt!]
\begin{center}
\begin{adjustbox}{width=\columnwidth,center}
\begin{tabular}{ccccc}
ECA 22 & ECA 104 & (22,104)[0.1] & (22,104)[0.5] & (22,104)[0.9] \\ [6pt]
\includegraphics[width=31mm]{TEM_IMAGE/D22-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/D104-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/D22_104_1-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/D22_104_5-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/D22_104_9-eps-converted-to.pdf} \\
ECA 105 & ECA 40 & (105,40)[0.1] & (105,40)[0.5] & (105,40)[0.9] \\
\includegraphics[width=31mm]{TEM_IMAGE/D105-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/D40-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/D105_40_1-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/D105_40_5-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/D105_40_9-eps-converted-to.pdf} \\
\end{tabular}
\end{adjustbox}
\caption{Stochastic CAs ($f,g$) dynamics where $\mathcal{C}$(($f,g$)) $\neq$ $\mathcal{C}$($f$) and $\mathcal{C}$(($f,g$)) $\neq$ $\mathcal{C}$($g$). Here, $\mathcal{C}$($f$) $\neq$ $\mathcal{C}$($g$).}
\label{Fig4A}
\end{center}
\end{figure*}
Let us now present the remaining cases, with $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = class B or class C but $\mathcal{C}$(($f,g$)) $\neq$ $\mathcal{C}$($f$). So we have the following four cases:
\begin{itemize}
\item[(i)] $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = class B and $\mathcal{C}$(($f,g$)) = class C;
\item[(ii)] $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = class B and $\mathcal{C}$(($f,g$)) = class A;
\item[(iii)] $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = class C and $\mathcal{C}$(($f,g$)) = class A; and
\item[(iv)] $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = class C and $\mathcal{C}$(($f,g$)) = class B.
\end{itemize}
Let us start with case (i) where $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = class B and $\mathcal{C}$(($f,g$)) = class C. Here, we show two such example behaviors, for ($f,g$) = ($164,131$) and ($164,13$). Observe the class B behavior of ECA $164$ and ECA $131$ in Fig.~\ref{Fig3}. However, in Fig.~\ref{Fig3}, the CA ($164,131$) shows chaotic dynamics for $\tau \in$ \{$0.1,0.2,0.3$\}. The same situation arises for the temporally stochastic CA ($164,13$). One can observe the interesting Pascal's triangle patterns in ($164,13$) for $\tau \in$ \{$0.1,0.2,0.3$\}, see Fig.~\ref{Fig3}. Similarly, observe the class B behavior of ECA $9$, ECA $77$, ECA $130$, ECA $13$, ECA $172$, ECA $38$, ECA $232$, ECA $134$ and ECA $164$ in Fig.~\ref{TSCA2} and Fig.~\ref{TSCA3}. However, in Fig.~\ref{TSCA2}, the CA ($9,77$) shows chaotic dynamics for $\tau \in$ \{$0.1,0.2,0.3$\}. The same situation arises for the temporally stochastic CAs ($130,13$), ($172,77$) and ($77,130$). This dynamics is a point of interest of this study because a couple of two periodic (simple) rules depicts a kind of chaotic dynamics under a temporally stochastic environment. In our previous study \cite{KAMILYA2019116}, we observed that in a spatial mixing environment (i.e. a non-uniform CA), two chaotic rules together can show periodic (simple) behavior. However, the opposite dynamics had never been observed before. Hence, this special dynamics is one of the rich assets of this study.
According to case (ii), $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = class B and $\mathcal{C}$(($f,g$)) = class A. As an example, if rule $200$ is considered as the default rule ($f$) and rule $130$ is added as noise ($g$) at different probabilities, the resultant dynamics evolves to a homogeneous configuration, i.e. class A. Note that ECA $200$ and ECA $130$ individually show periodic behavior (Wolfram's class II). As a sample, we show the space-time diagrams of the couple ($200,130$) for $\tau \in$ \{$0.1, 0.2, 0.3$\}, see Fig.~\ref{Fig3}. The same situation arises for the temporally stochastic CAs ($38,232$) and ($134,164$) (see Fig.~\ref{TSCA3}).
In our experiment, case (iii) and case (iv) dynamics have not been observed for any temporally stochastic CA. However, a similar {\em class transition} dynamics is observed for case (iv); see Section~\ref{ctd} for details.
\begin{figure*}[hbt!]
\begin{center}
\begin{tabular}{cccc}
(28,40)[0.2] & (30,136)[0.08] & (78,104)[0.15] & (60,164)[0.08] \\[6pt]
\includegraphics[width=33mm]{TEM_IMAGE/PT28_40_2-eps-converted-to.pdf} & \includegraphics[width=33mm]{TEM_IMAGE/PT30_136_08-eps-converted-to.pdf} & \includegraphics[width=33mm]{TEM_IMAGE/PT78_104_15-eps-converted-to.pdf} & \includegraphics[width=33mm]{TEM_IMAGE/PT60_164_08-eps-converted-to.pdf}\\
(28,40)[0.3] & (30,136)[0.11] & (78,104)[0.33] & (60,164)[0.1] \\
\includegraphics[width=33mm]{TEM_IMAGE/PT28_40_33-eps-converted-to.pdf} & \includegraphics[width=33mm]{TEM_IMAGE/PT30_136_15-eps-converted-to.pdf} & \includegraphics[width=33mm]{TEM_IMAGE/PT78_104_33-eps-converted-to.pdf} & \includegraphics[width=33mm]{TEM_IMAGE/PT60_164_1-eps-converted-to.pdf}\\
(28,40)[0.35] & (30,136)[0.13] & (78,104)[0.39] & (60,164)[0.12] \\
\includegraphics[width=33mm]{TEM_IMAGE/PT28_40_36-eps-converted-to.pdf} & \includegraphics[width=33mm]{TEM_IMAGE/PT30_136_18-eps-converted-to.pdf} & \includegraphics[width=33mm]{TEM_IMAGE/PT78_104_39-eps-converted-to.pdf} & \includegraphics[width=33mm]{TEM_IMAGE/PT60_164_12-eps-converted-to.pdf}\\
\end{tabular}
\caption{Phase transition behavior of stochastic CAs ($28,40$), ($30,136$), ($78,104$) and ($60,164$).}
\label{Fig5}
\end{center}
\end{figure*}
\begin{figure*}[hbt!]
\begin{center}
\scalebox{0.8}{
\begin{tabular}{cccc}
(90,104)[0.02] & (90,104)[0.07] & (90,104)[0.09] & (90,104)[0.15]\\ [6pt]
\includegraphics[width=33mm]{TEM_IMAGE/Added/fgT_90_104_0.02.png} & \includegraphics[width=33mm]{TEM_IMAGE/Added/fgT_90_104_0.07.png} & \includegraphics[width=33mm]{TEM_IMAGE/Added/fgT_90_104_0.09.png} & \includegraphics[width=33mm]{TEM_IMAGE/Added/fgT_90_104_0.15.png}\\
(90,104)[0.16] & (90,104)[0.17] & (90,104)[0.18] & (90,104)[0.19]\\ [6pt]
\includegraphics[width=33mm]{TEM_IMAGE/Added/fgT_90_104_0.16.png} & \includegraphics[width=33mm]{TEM_IMAGE/Added/fgT_90_104_0.17.png} & \includegraphics[width=33mm]{TEM_IMAGE/Added/fgT_90_104_0.18.png} & \includegraphics[width=33mm]{TEM_IMAGE/Added/fgT_90_104_0.19.png}\\
(90,104)[0.20] & (90,104)[0.22] & (90,104)[0.25] & (90,104)[0.46]\\ [6pt]
\includegraphics[width=33mm]{TEM_IMAGE/Added/fgT_90_104_0.20.png} & \includegraphics[width=33mm]{TEM_IMAGE/Added/fgT_90_104_0.22.png} & \includegraphics[width=33mm]{TEM_IMAGE/Added/fgT_90_104_0.25.png} & \includegraphics[width=33mm]{TEM_IMAGE/Added/fgT_90_104_0.46.png}\\
\end{tabular}}
\caption{Phase transition behavior of stochastic CAs ($90,104$) with different $\tau$ values.}
\label{TSCA6}
\end{center}
\end{figure*}
\begin{figure*}[hbt!]
\begin{center}
\scalebox{0.8}{
\begin{tabular}{cccc}
(156,160)[0.09] & (156,160)[0.20] & (156,160)[0.24] & (156,160)[0.27]\\ [6pt]
\includegraphics[width=33mm]{TEM_IMAGE/Added/fgT_156_160_0.09.png} & \includegraphics[width=33mm]{TEM_IMAGE/Added/fgT_156_160_0.20.png} & \includegraphics[width=33mm]{TEM_IMAGE/Added/fgT_156_160_0.24.png} & \includegraphics[width=33mm]{TEM_IMAGE/Added/fgT_156_160_0.27.png}\\
(156,160)[0.28] & (156,160)[0.29] & (156,160)[0.30] & (156,160)[0.31]\\ [6pt]
\includegraphics[width=33mm]{TEM_IMAGE/Added/fgT_156_160_0.28.png} & \includegraphics[width=33mm]{TEM_IMAGE/Added/fgT_156_160_0.29.png} & \includegraphics[width=33mm]{TEM_IMAGE/Added/fgT_156_160_0.30.png} & \includegraphics[width=33mm]{TEM_IMAGE/Added/fgT_156_160_0.31.png}\\
(156,160)[0.32] & (156,160)[0.40] & (156,160)[0.50] & (156,160)[0.60]\\ [6pt]
\includegraphics[width=33mm]{TEM_IMAGE/Added/fgT_156_160_0.32.png} & \includegraphics[width=33mm]{TEM_IMAGE/Added/fgT_156_160_0.40.png} & \includegraphics[width=33mm]{TEM_IMAGE/Added/fgT_156_160_0.50.png} & \includegraphics[width=33mm]{TEM_IMAGE/Added/fgT_156_160_0.60.png}\\
\end{tabular}}
\caption{Phase transition behavior of stochastic CAs ($156,160$) with different $\tau$ values.}
\label{TSCA7}
\end{center}
\end{figure*}
\subsection{Dynamics when $\mathcal{C}$($f$) $\neq$ $\mathcal{C}$($g$)}
Next, we explore the situation where $\mathcal{C}$($f$) $\neq$ $\mathcal{C}$($g$). Under this setting, we have observed two kinds of results in our experiments.
\begin{itemize}
\item $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($f$) or $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($g$); and
\item $\mathcal{C}$(($f,g$)) $\neq$ $\mathcal{C}$($f$) and $\mathcal{C}$(($f,g$)) $\neq$ $\mathcal{C}$($g$).
\end{itemize}
Let us focus on the first possibility, i.e. $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($f$) or $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($g$). Here, we have the following cases:
\begin{itemize}
\item[(i)] $\mathcal{C}$($f$) = class C, $\mathcal{C}$($g$) = class B, and $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($f$);
\item[(ii)] $\mathcal{C}$($f$) = class C, $\mathcal{C}$($g$) = class B, and $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($g$);
\item[(iii)] $\mathcal{C}$($f$) = class C, $\mathcal{C}$($g$) = class A, and $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($f$);
\item[(iv)] $\mathcal{C}$($f$) = class C, $\mathcal{C}$($g$) = class A, and $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($g$);
\item[(v)] $\mathcal{C}$($f$) = class B, $\mathcal{C}$($g$) = class A, and $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($f$); and
\item[(vi)] $\mathcal{C}$($f$) = class B, $\mathcal{C}$($g$) = class A, and $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($g$).
\end{itemize}
Let us start with case (i) where $\mathcal{C}$($f$) = class C, $\mathcal{C}$($g$) = class B, and $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($f$). Observe the chaotic (class C) and periodic (class B) behavior of ECA $122$ and ECA $37$ respectively. However, in Fig.~\ref{Fig4}, the CA ($122,37$) shows chaotic dynamics for $\tau \in$ \{$0.1,0.5,0.9$\}, that is, $\mathcal{C}$(($122,37$)) = $\mathcal{C}$($122$). Next, according to case (ii), $\mathcal{C}$($f$) = class C, $\mathcal{C}$($g$) = class B, and $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($g$), which is the opposite of case (i). Here also, ECAs $22$ and $7$ individually show chaotic and periodic dynamics respectively. To illustrate this case, if rule $22$ is considered as the default rule and rule $7$ is added as noise at different probabilities, the resultant dynamics is periodic. Observe that the space-time diagrams of ($22,7$) for $\tau \in$ \{$0.1,0.5,0.9$\} show a mildly chaotic nature in the first few time steps. However, as time progresses, they move towards periodic behavior (see Fig.~\ref{Fig4}). If one progressively changes the temporal noise rate, the same dynamics is observed.
In our experiment (see Fig.~\ref{Fig1}), none of the stochastic CAs shows the third case where $\mathcal{C}$($f$) = class C, $\mathcal{C}$($g$) = class A, and $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($f$). However, case (iv) is observed for some of the stochastic CAs. Observe that ECA $22$ and ECA $128$ respectively show chaotic (class C) and evolving-to-homogeneous-configuration (class A) dynamics in Fig.~\ref{Fig4}. The temporally stochastic CA ($22,128$) for $\tau \in$ \{$0.1,0.5,0.9$\} exhibits homogeneous configurations in Fig.~\ref{Fig4}.
The remaining cases are $\mathcal{C}$($f$) = class B, $\mathcal{C}$($g$) = class A, $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($f$) (case (v)) and $\mathcal{C}$($f$) = class B, $\mathcal{C}$($g$) = class A, $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($g$) (case (vi)). In Fig.~\ref{Fig4}, ECA $11$ and ECA $44$ show class B dynamics. On the other hand, ECA $8$ and ECA $40$ depict evolution to a homogeneous configuration, i.e. class A dynamics. As evidence of case (v), if rule $11$ is considered as the default rule ($f$) and rule $8$ is added as noise ($g$) at different probabilities, the resultant dynamics remains periodic. As a sample, we show the space-time diagrams of the stochastic CA for $\tau \in$ \{$0.1, 0.5, 0.9$\} in Fig.~\ref{Fig4}. In case (vi), the stochastic CA ($44,40$) for $\tau \in$ \{$0.1,0.5,0.9$\} shows class A dynamics.
Let us now present the remaining situation where $\mathcal{C}$($f$) $\neq$ $\mathcal{C}$($g$), $\mathcal{C}$(($f,g$)) $\neq$ $\mathcal{C}$($f$) and $\mathcal{C}$(($f,g$)) $\neq$ $\mathcal{C}$($g$). Here we have the following three cases.
\begin{itemize}
\item[(i)] $\mathcal{C}$($f$) = class B, $\mathcal{C}$($g$) = class A, and $\mathcal{C}$(($f,g$)) = class C;
\item[(ii)] $\mathcal{C}$($f$) = class C, $\mathcal{C}$($g$) = class B, and $\mathcal{C}$(($f,g$)) = class A; and
\item[(iii)] $\mathcal{C}$($f$) = class C, $\mathcal{C}$($g$) = class A, and $\mathcal{C}$(($f,g$)) = class B.
\end{itemize}
In our experiment, case (i) dynamics has not been observed for any temporally stochastic CA. So, let us start with case (ii), where $\mathcal{C}$($f$) = class C, $\mathcal{C}$($g$) = class B, and $\mathcal{C}$(($f,g$)) = class A. In Fig.~\ref{Fig4A}, observe the chaotic (class C) and periodic (class B) behavior of ECA $22$ and ECA $104$ respectively. However, Fig.~\ref{Fig4A} shows that the CA ($22,104$) evolves to a homogeneous configuration (class A) for $\tau \in$ \{$0.1,0.5,0.9$\}. For the third case, if rule $105$ is considered as the default rule ($f$) and rule $40$ is added as noise ($g$) at different probabilities, the resultant dynamics exhibits class B behavior (see Fig.~\ref{Fig4A} for $\tau \in$ \{$0.1,0.5,0.9$\}). Note that ECA $105$ belongs to class C, whereas ECA $40$ individually shows class A behavior.
\subsection{Phase transition dynamics}
According to the literature on CAs, the occurrence of phase transitions is an interesting phenomenon in different non-classical CAs \cite{BoureFC12,jca/Fates14,ALONSOSANZ2005383,doi:10.}. A phase transition can be defined in the following way: there exists a critical value of the non-uniformity rate\footnote{Here, non-uniformity rate means the synchrony rate of a non-classical updating scheme, the mixing rate of different CA rules, or the mixing rate of any other non-uniform scheme.} which separates the behavior of the system into two different `phases': the {\em passive phase} (i.e. the system converges to the homogeneous fixed point of all $0$s) and the {\em active phase} (i.e. the system oscillates around a fixed non-zero density). For the $\alpha$-, $\beta$- and $\gamma$-synchronous updating schemes, this abrupt change of behavior was noted in \cite{BoureFC12,jca/Fates09}. The phase transition of ECAs with memory was identified in \cite{ALONSOSANZ2005383,doi:10.}. Recently, this abrupt change of behavior has been studied by Fat\'es \cite{Fates17} for {\em diploid}\footnote{The rules of diploid cellular automata are obtained by randomly mixing two deterministic ECA rules.} cellular automata.
According to Fig.~\ref{Fig1} (in blue), some of the temporally stochastic CAs show this {\em phase transition} behavior. Here, we broadly distinguish these CAs into the following two categories.
\begin{itemize}
\item[(i)] The stochastic CAs which are associated with at least one class C rule; and
\item[(ii)] The stochastic CAs which are not associated with any class C rule.
\end{itemize}
For the first category, we show two such example behaviors, for ($f,g$) = ($30,136$) and ($60,164$). ECAs $30$ and $60$ individually show chaotic class C dynamics (see Fig.~\ref{Fig5}). Observe that the stochastic CAs ($30,136$) and ($60,164$) depict a phase transition for small values of $\tau$, with $\tau_c = 0.131$ for the CA ($30,136$) and $\tau_c = 0.124$ for the CA ($60,164$). Here, in both cases the cellular systems show chaotic dynamics; however, due to a small amount of noise, the system converges to the homogeneous fixed point of all $0$s. Fig.~\ref{Fig6} shows the profile of the density parameter as a function of $\tau$ for the above-mentioned CAs.
On the other hand, for the second category, we again show two such example behaviors, for ($f,g$) = ($28,40$) and ($78,104$). Here, none of the ECAs ($28$, $40$, $78$, $104$) belongs to class C. In Fig.~\ref{Fig5}, the stochastic CAs ($28,40$) and ($78,104$) show a phase transition for relatively large values of $\tau$: $\tau_c = 0.34$ for the CA ($28,40$) and $\tau_c = 0.39$ for the CA ($78,104$). As evidence, see Fig.~\ref{Fig6} for the profile of the density parameter.
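A sketch of how density profiles of this kind can be estimated is given below; it is only an illustration, and the lattice size, the transient length and the sampling window are assumptions of ours, not the parameters of the reported experiments. For each $\tau$, the TSCA is evolved past a transient and the average fraction of $1$-cells over the remaining steps is recorded; a density collapsing to (almost) zero signals the passive phase.
\begin{verbatim}
import random

def eca_step(config, rule):
    # same ECA update helper as in the earlier sketch
    n = len(config)
    return [(rule >> (4 * config[(i - 1) % n]
                      + 2 * config[i]
                      + config[(i + 1) % n])) & 1
            for i in range(n)]

def density_profile(f, g, taus, n=200, transient=500, window=200, seed=0):
    # Mean fraction of 1-cells of the TSCA (f, g)[tau] for each tau.
    profile = []
    for tau in taus:
        rng = random.Random(seed)
        config = [rng.randint(0, 1) for _ in range(n)]
        acc = 0.0
        for t in range(transient + window):
            rule = g if rng.random() < tau else f
            config = eca_step(config, rule)
            if t >= transient:
                acc += sum(config) / n
        profile.append((tau, acc / window))
    return profile

if __name__ == "__main__":
    # (30, 136): the density is expected to collapse near the reported
    # critical value tau_c = 0.131.
    for tau, rho in density_profile(30, 136, [0.05, 0.10, 0.13, 0.16, 0.20]):
        print("tau = %.2f   density = %.3f" % (tau, rho))
\end{verbatim}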
\begin{figure*}[hbt!]
\begin{center}
\begin{tabular}{cc}
(28,40),$\tau_c = 0.34$ & (30,136),$\tau_c = 0.131$\\[6pt]
\includegraphics[width=.45\textwidth]{TEM_IMAGE/graph28-40} & \includegraphics[width=.45\textwidth]{TEM_IMAGE/graph30-136} \\
(78,104),$\tau_c = 0.39$ & (60,164),$\tau_c = 0.124$ \\[6pt] \includegraphics[width=.45\textwidth]{TEM_IMAGE/graph78-104} & \includegraphics[width=.45\textwidth]{TEM_IMAGE/graph60-164}\\
(90,104),$\tau_c = 0.18$ & (156,160),$\tau_c = 0.28$ \\[6pt]
\includegraphics[width=.45\textwidth]{TEM_IMAGE/Added/graph90-104} & \includegraphics[width=.45\textwidth]{TEM_IMAGE/Added/graph156-160}\\
\end{tabular}
\caption{The plot shows the profile of the density parameter as a function of the temporal noise rate ($\tau$) for the CAs (28,40), (30,136), (78,104), (60,164), (90,104) and (156,160).}
\label{Fig6}
\end{center}
\end{figure*}
Therefore, the stochastic CAs with one class C rule show a phase transition for small values of noise; on the contrary, the stochastic CAs that are not associated with any class C rule show a phase transition for relatively large values of noise. As further evidence, $\tau_c = 0.08$ for the CA ($60,168$), $\tau_c = 0.09$ for the CA ($30,160$), $\tau_c = 0.09$ for the CA ($150,200$), and $\tau_c = 0.109$ for the CA ($90,168$), where one rule of each couple is from class C. On the other hand, $\tau_c = 0.49$ for the CA ($50,40$), $\tau_c = 0.48$ for the CA ($178,160$), $\tau_c = 0.44$ for the CA ($58,32$), and $\tau_c = 0.33$ for the CA ($156,168$), where none of these stochastic CAs is associated with a class C rule. Of course, there are exceptions. However, in general, we can argue that the stochastic CAs with class C rules show less resistance against {\em effective noise}. Here, the term `effective noise' indicates noise which is able to (at least) induce the phase transition. If the noise has no impact on the stochastic CA, then the situation is different (we have seen such examples earlier).
\begin{figure*}[hbt!]
\begin{center}
\begin{adjustbox}{width=\columnwidth,center}
\begin{tabular}{ccccc}
ECA 45 & ECA 18 & (45,18)[0.1] & (45,18)[0.2] & (45,18)[0.3] \\[6pt]
\includegraphics[width=31mm]{TEM_IMAGE/T45-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR18-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/T45_18_1-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/T45_18_2-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/T45_18_3-eps-converted-to.pdf} \\
ECA 106 & ECA 18 & (106,18)[0.1] & (106,18)[0.3] & (106,18)[0.5] \\
\includegraphics[width=31mm]{TEM_IMAGE/T106-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/NR18-eps-converted-to.pdf} &\includegraphics[width=31mm]{TEM_IMAGE/T106_18_1-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/T106_18_3-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/T106_18_5-eps-converted-to.pdf} \\
ECA 77 & ECA 44 & (77,44)[0.1] & (77,44)[0.3] & (77,44)[0.5] \\
\includegraphics[width=31mm]{TEM_IMAGE/T77-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/T44-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/T77_44_1-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/T77_44_3-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/T77_44_5-eps-converted-to.pdf} \\
ECA 9 & ECA 58 & (9,58)[0.1] & (9,58)[0.3] & (9,58)[0.7] \\
\includegraphics[width=31mm]{TEM_IMAGE/T9-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/T58-eps-converted-to.pdf} &\includegraphics[width=31mm]{TEM_IMAGE/T9_58_1-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/T9_58_3-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/T9_58_7-eps-converted-to.pdf} \\
\end{tabular}
\end{adjustbox}
\caption{Class transition dynamics of stochastic CAs (45,18), (106,18), (77,44), (9,58) where $\mathcal{C}$($f$) = $\mathcal{C}$($g$).}
\label{Fig7}
\end{center}
\end{figure*}
\subsection{Class transition dynamics}
\label{ctd}
Next, {\em class transition} is another $\tau$-sensitive dynamics, where the cellular system changes its class dynamics at a critical value $\tau_t$. Formally, we write $\mathcal{C}$(($f,g$))[$\tau$] $\neq$ $\mathcal{C}$(($f,g$))[$\tau'$] where $\tau \in$ [0,$\tau_t$] and $\tau' \in$ ($\tau_t$,1]. Under this umbrella of dynamics, we observe two kinds of results in our experiments.
\begin{itemize}
\item $\mathcal{C}$($f$) = $\mathcal{C}$($g$); and
\item $\mathcal{C}$($f$) $\neq$ $\mathcal{C}$($g$).
\end{itemize}
In the first case, $f$ and $g$ are chosen from the same class. So we have the following two cases:
\begin{itemize}
\item[(i)] $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = class C; and
\item[(ii)] $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = class B.
\end{itemize}
Let us start with case (i) where $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = class C. Here, we show two such example behaviors, for ($f,g$) = ($45,18$) and ($106,18$). Fig.~\ref{Fig7} depicts this interesting phenomenon where two chaotic/complex rules are involved in the stochastic CA. Observe the chaotic class C behavior of ECA $45$ and ECA $18$ in Fig.~\ref{Fig7}. Here, the stochastic CA ($45,18$) shows chaotic class C dynamics for $\tau = 0.1$. However, the cellular system ($45,18$) shows periodic class B dynamics for $\tau = 0.3$. Fig.~\ref{Fig7} shows the class transition space-time diagrams of the CA ($45,18$) for $\tau \in$ \{$0.1,0.2,0.3$\}. We denote this dynamics in the following way -- \{chaotic $\rightsquigarrow$ periodic\}. Now, if we consider the CA ($18,45$), which is exchange symmetric to the CA ($45,18$), it shows the \{periodic $\rightsquigarrow$ chaotic\} dynamics. Hence, we can write that \{chaotic $\leftrightsquigarrow$ periodic\} dynamics is observed for this cellular system. Similarly, the stochastic CA ($106,18$) exhibits class B dynamics for $\tau = 0.1$; however, the CA shows chaotic class C dynamics for $\tau = 0.5$. Note that, here also, ECA $106$ and ECA $18$ individually show complex/chaotic class C dynamics. It is interesting to note that, here, a couple of two chaotic rules shows periodic dynamics for a range of noise rates.
Let us now discuss case (ii): $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = class B. Here, we also show two such example behaviors, for ($f,g$) = ($77,44$) and ($9,58$). Observe the periodic class B behavior of ECA $77$ and ECA $44$ in Fig.~\ref{Fig7}. Here, the stochastic CA ($77,44$) shows class C dynamics for $\tau = 0.5$; however, the cellular system shows periodic dynamics for $\tau = 0.1$. Fig.~\ref{Fig7} shows the class transition space-time diagrams of the CA ($77,44$) for $\tau \in$ \{$0.1,0.3,0.5$\}. Similarly, the CA ($9,58$) shows class C dynamics for a high value of $\tau$ (see ($9,58$)[$0.7$] in Fig.~\ref{Fig7}). On the other hand, ($9,58$)[$0.1$] depicts periodic dynamics. Here also, both ECAs $9$ and $58$ individually depict periodic dynamics (class B).
Next, we discuss the class transition dynamics for $\mathcal{C}$($f$) $\neq$ $\mathcal{C}$($g$). So we have the following three situations:
\begin{itemize}
\item[(i)] $\mathcal{C}$($f$) = class C and $\mathcal{C}$($g$) = class B;
\item[(ii)] $\mathcal{C}$($f$) = class C and $\mathcal{C}$($g$) = class A; and
\item[(iii)] $\mathcal{C}$($f$) = class B and $\mathcal{C}$($g$) = class A;
\end{itemize}
Let us start with case (i) where $\mathcal{C}$($f$) = class C and $\mathcal{C}$($g$) = class B. For this case, we show the dynamics of the stochastic CA ($122,14$). Observe that ECA $122$ shows chaotic class C dynamics; on the other hand, ECA $14$ shows class B dynamics. Here, the CA ($122,14$) exhibits chaotic dynamics for $\tau = 0.1$, see Fig.~\ref{Fig8}. However, if we progressively increase the noise rate $\tau$, the stochastic CA shows periodic dynamics (see Fig.~\ref{Fig8} for $\tau \in$ \{$0.4,0.7$\}).
In the next case (case (ii)), $\mathcal{C}$($f$) = class C and $\mathcal{C}$($g$) = class A. In Fig.~\ref{Fig8}, ECA $45$ individually depicts chaotic class C dynamics, and ECA $136$ belongs to class A, i.e. it evolves to a homogeneous configuration. Here, the stochastic CA ($45,136$) shows periodic dynamics for a high value of $\tau$ ($\tau = 0.5$). However, if we decrease the $\tau$ value, the cellular system shows chaotic dynamics. Space-time diagrams of ($45,136$) for $\tau \in$ \{$0.1,0.3,0.5$\} are shown in Fig.~\ref{Fig8}.
For case (iii), where $\mathcal{C}$($f$) = class B and $\mathcal{C}$($g$) = class A, we consider the example of the stochastic CA ($131,136$). According to the classification, ECA $131$ belongs to class B, whereas ECA $136$ evolves to a homogeneous configuration (class A). Now, if rule $131$ is considered as the default rule ($f$) and rule $136$ is added as noise ($g$) at a low probability, the resultant dynamics shows chaotic behavior. However, if we progressively increase the noise rate, the cellular system shows class A dynamics. As evidence, Fig.~\ref{Fig8} depicts the class transition dynamics of the CA ($131,136$) for $\tau \in$ \{$0.1,0.5,0.9$\}. Here also, a couple of a periodic and a homogeneous rule shows chaotic dynamics for a range of noise rates.
\begin{figure*}[hbt!]
\begin{center}
\begin{adjustbox}{width=\columnwidth,center}
\begin{tabular}{ccccc}
ECA 122 & ECA 14 & (122,14)[0.1] & (122,14)[0.4] & (122,14)[0.7] \\[6pt]
\includegraphics[width=31mm]{TEM_IMAGE/T122-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/T14-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/T122_14_1-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/T122_14_4-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/T122_14_7-eps-converted-to.pdf} \\
ECA 45 & ECA 136 & (45,136)[0.1] & (45,136)[0.3] & (45,136)[0.5] \\
\includegraphics[width=31mm]{TEM_IMAGE/T45-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/T136-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/T45_136_1-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/T45_136_3-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/T45_136_5-eps-converted-to.pdf} \\
ECA 131 & ECA 136 & (131,136)[0.1] & (131,136)[0.5] & (131,136)[0.9] \\
\includegraphics[width=31mm]{TEM_IMAGE/T131-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/T136-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/T131_136_1-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/T131_136_5-eps-converted-to.pdf} & \includegraphics[width=31mm]{TEM_IMAGE/T131_136_9-eps-converted-to.pdf} \\
\end{tabular}
\end{adjustbox}
\caption{Class transition dynamics of stochastic CAs (122,14), (45,136), (131,136) where $\mathcal{C}$($f$) $\neq$ $\mathcal{C}$($g$).}
\label{Fig8}
\end{center}
\end{figure*}
\section{Summary}
\label{S5}
This chapter constitutes a first step in the exploration of the space of temporally stochastic CAs. We have identified that some of the stochastic CAs are affected by the temporal noise but are not sensitive to the temporal noise rate (i.e. if we progressively vary the temporal noise rate, the cellular system's dynamical behavior remains unchanged). These CAs have shown a diverse set of results. We should mention some of the richest evidence, where
\begin{itemize}
\item[] $\mathcal{C}$(periodic,periodic) = chaotic;
\item[] $\mathcal{C}$(periodic,periodic) = homogeneous;
\item[] $\mathcal{C}$(chaotic,periodic) = homogeneous; and
\item[] $\mathcal{C}$(chaotic,homogeneous) = periodic.
\end{itemize}
That is, these stochastic CAs ($f,g$) depart totally from the individual classes of $f$ and $g$. However, when $f$ and $g$ are from different classes, there are interesting situations where one of their classes dominates in the stochastic CA. Interestingly, there are many cases where chaotic behavior is dominated by temporal noise of periodic or homogeneous type.
On the other hand, temporal-noise-sensitive stochastic CAs have shown phenomena like phase transition and class transition. It is interesting to note that, in general, stochastic CAs with (at least one) chaotic rule have shown less resistance during phase transition (i.e. the critical value of the noise rate is low). However, the stochastic CAs without any chaotic rule have exhibited more resistance during phase transition (i.e. the critical value of the noise rate is high). This is also one of the exciting observations of this study.
The rarest phenomenon shown by these CAs is class transition. We should mention the following noteworthy examples:
\begin{itemize}
\item[] $\mathcal{C}$(chaotic,chaotic) = \{chaotic $\leftrightsquigarrow$ periodic\};
\item[] $\mathcal{C}$(periodic,periodic) = \{periodic $\leftrightsquigarrow$ chaotic\}; and
\item[] $\mathcal{C}$(periodic,homogeneous) = \{chaotic $\leftrightsquigarrow$ homogeneous\}.
\end{itemize}
In terms of numbers, we have performed a large number of experiments on $3828$ temporally stochastic CAs. Table~\ref{Table:classes_dynamics} depicts the different situations (as mentioned in Section~\ref{S4}) along with the number of temporally stochastic CAs that show each behavior.
\begin{table}[ht]
\centering
\begin{adjustbox}{width=\columnwidth,center}
\begin{tabular}{ccc|c} \hline
Cases & & & No. of CAs \\ \hline \hline
$\mathcal{C}$($f$) = $\mathcal{C}$($g$) & & & \\
& $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($f$) & & \\
& & $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = $\mathcal{C}$(($f,g$)) = class A & 28\\
& & $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = $\mathcal{C}$(($f,g$)) = class B & 1436\\
& & $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = $\mathcal{C}$(($f,g$)) = class C & 149\\
& $\mathcal{C}$(($f,g$)) $\neq$ $\mathcal{C}$($f$) & & \\
& & $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = class A, $\mathcal{C}$(($f,g$)) = class B & 0\\
& & $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = class A, $\mathcal{C}$(($f,g$)) = class C & 0\\
& & $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = class B, $\mathcal{C}$(($f,g$)) = class A & 113\\
& & $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = class B, $\mathcal{C}$(($f,g$)) = class C & 18\\
& & $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = class C, $\mathcal{C}$(($f,g$)) = class A & 0\\
& & $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = class C, $\mathcal{C}$(($f,g$)) = class B & 0\\
$\mathcal{C}$($f$) $\neq$ $\mathcal{C}$($g$) & & & \\ \hline
& $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($f$) or $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($g$) & & \\
& & $\mathcal{C}$($f$) = class B, $\mathcal{C}$($g$) = class A, $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($f$) & 167\\
& & $\mathcal{C}$($f$) = class B, $\mathcal{C}$($g$) = class A, $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($g$) & 297\\
& & $\mathcal{C}$($f$) = class C, $\mathcal{C}$($g$) = class A, $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($f$) & 0\\
& & $\mathcal{C}$($f$) = class C, $\mathcal{C}$($g$) = class A, $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($g$) & 98\\
& & $\mathcal{C}$($f$) = class C, $\mathcal{C}$($g$) = class B, $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($f$) & 89\\
& & $\mathcal{C}$($f$) = class C, $\mathcal{C}$($g$) = class B, $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($g$) & 67\\
& $\mathcal{C}$(($f,g$)) $\neq$ $\mathcal{C}$($f$) and $\mathcal{C}$(($f,g$)) $\neq$ $\mathcal{C}$($g$) & & \\
& & $\mathcal{C}$($f$) = class B, $\mathcal{C}$($g$) = class A, $\mathcal{C}$(($f,g$)) = class C & 0\\
& & $\mathcal{C}$($f$) = class C, $\mathcal{C}$($g$) = class A, $\mathcal{C}$(($f,g$)) = class B & 20\\
& & $\mathcal{C}$($f$) = class C, $\mathcal{C}$($g$) = class B, $\mathcal{C}$(($f,g$)) = class A & 57\\ \hline
Class & & & \\
Transition(CT) & $\mathcal{C}$($f$) = $\mathcal{C}$($g$) & & \\
& & $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = class A, $\mathcal{C}$(($f,g$)) = CT & 0\\
& & $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = class B, $\mathcal{C}$(($f,g$)) = CT & 297\\
& & $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = class C, $\mathcal{C}$(($f,g$)) = CT & 4\\
& $\mathcal{C}$($f$) $\neq$ $\mathcal{C}$($g$) & & \\
& & $\mathcal{C}$($f$) = class B, $\mathcal{C}$($g$) = class A, $\mathcal{C}$(($f,g$)) = CT & 4\\
& & $\mathcal{C}$($f$) = class C, $\mathcal{C}$($g$) = class A, $\mathcal{C}$(($f,g$)) = CT & 12\\
& & $\mathcal{C}$($f$) = class C, $\mathcal{C}$($g$) = class B, $\mathcal{C}$(($f,g$)) = CT & 873\\ \hline
Phase & & & 99\\
Transition & & & \\ \hline
\end{tabular}
\end{adjustbox}
\caption{Different situations according to Section~\ref{S4} and number of temporally stochastic CAs, out of $3828$, corresponding to every situation.}
\label{Table:classes_dynamics}
\end{table}
To sum up, this chapter has indicated the rich possibilities of these temporally stochastic CAs. However, in this chapter, we have explored these CAs primarily through experiments. Therefore, a proper theoretical understanding of temporally stochastic CAs is still an open question for further exploration.
\chapter{Pattern Classification with Temporally Stochastic Cellular Automata}
\label{chap4}
\section{Introduction}
In a classical cellular automaton (CA), a rule ($f$) is applied to each and every cell of the lattice to evolve the CA from one configuration to its next configuration \cite{BhattacharjeeNR16}. In this work, we deviate from the classical CA and introduce another rule, say $g$, in the cellular structure, which is applied to all the cells in a time step with probability $\tau$. The rule $g$ may be considered as noise of the cellular structure and $\tau$ as the noise rate. The rule $f$ can be called the default rule, which is applied to all cells in a time step with probability $1-\tau$. We name these cellular automata (CAs) \emph{Temporally Stochastic Cellular Automata} (TSCAs).
In this work, we take only ECA rules as our default rule and noise to study this class of automata. We further consider the CAs as finite, using periodic boundary conditions. We first study the dynamical behavior of these TSCAs through an extensive experiment and classify them as Class A, Class B and Class C by observing their behavior, following Wolfram's~\cite{Wolfram94} and Li \& Packard's~\cite{LangtonII,Genaro13} classifications. Then we identify a set of TSCAs that converge to fixed points from any initial configuration.
The CAs that converge to a fixed point from any seed have been widely employed for the design of pattern classifiers~\cite{DasMNS09,Sethi2016,RAGHAVAN1993145}. In this work, we also utilize the convergent TSCAs to develop a two-class pattern classifier. However, there are some convergent TSCAs which have a single fixed point (attractor). These CAs cannot act as two-class pattern classifiers. Similarly, a convergent TSCA having an enormous number of fixed points (attractors) is not a good classifier. Using these criteria, we identify a set of convergent TSCAs that can act as good classifiers. To evaluate the performance of the proposed classifier, we choose standard datasets taken from \url{http://www.ics.uci.edu/~mlearn/MLRepository.html}. It is observed that the proposed classifier performs nicely in the \emph{training phase} as well as in the \emph{testing phase}. Finally, we compare the performance of the proposed classifier with that of well-known classifiers. It is found that the proposed classifier is very much competitive with the best-performing classifiers.
\subsection{Dynamical behavior}\label{dynamics}
Stephen Wolfram \cite{Wolfram94} introduced the following general classification of ECAs (defined over $\mathbb{Z}$), depending on their dynamical behavior:
\begin{itemize}
\item[] Class I: evolving to a homogeneous configuration;
\item[] Class II: evolving periodically;
\item[] Class III: evolving chaotically;
\item[] Class IV: class of complex rules.
\end{itemize}
Later, Li and Packard identified some periodic rules (Class II) as locally chaotic \cite{LangtonII,Genaro13}. For TSCAs, we aim to identify their dynamical behavior and to classify the TSCAs as above. We take $f$ and $g$ from the $88$ minimum representative ECA rules and then consider all possible combinations of these $88$ ECA rules. Here, a total of $\frac{88\times87}{2}=3828$ couples $(f, g)$ are sufficient because the rest are exchange symmetric in $f$ and $g$.
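As a quick sanity check of this count, the couples can be enumerated as unordered pairs, as in the small sketch below (here \texttt{MIN\_RULES} is only a placeholder for the actual list of the $88$ minimum representative rule numbers):
\begin{verbatim}
from itertools import combinations
from math import comb

MIN_RULES = list(range(88))   # placeholder for the 88 representative rules
couples = list(combinations(MIN_RULES, 2))
assert len(couples) == comb(88, 2) == 3828
\end{verbatim}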
We arrange a large number of experiments to understand the dynamical behavior of TSCAs. We map the dynamics of TSCAs into the following three classes $-$
\begin{itemize}
\item[] Class A: which is similar to Wolfram's Class I.
\item[] Class B: which is similar to Wolfram's Class II except locally chaotic rules.
\item[] Class C: similar to Wolfram's Class III and Class IV, including three locally chaotic rules.
\end{itemize}
Now, there are two possibilities for a couple ($f$, $g$): $(i)$ $f$ and $g$ belong to the same class; $(ii)$ $f$ and $g$ are from different classes. We denote the classes of $f$ and $g$ by $C(f)$ and $C(g)$ respectively, and the class of ($f$, $g$) is denoted by $C((f, g))$. We find the following notable experimental outcomes:
\begin{itemize}
\item If $C(f) = C(g)$ in a TSCA, one option is $C((f, g)) = C(f)$, which has been seen in a significant number of TSCAs. The TSCA ($f$, $g$), where $C(f) = C(g)$ = Class A, approaches a homogeneous configuration, much like ECA $f$ and ECA $g$. On the other hand, $C((f, g)) \neq C(f)$ is also possible; that is, the noise has a significant impact on these TSCAs ($f$, $g$) (as evidence, see Fig~\ref{fig1:ex1}).
\item The next case, where $C(f)\neq C(g)$, shows dynamics in which one of the rules' classes dominates, i.e. $C((f, g)) = C(f)$ or $C((f, g)) = C(g)$. Under this case, the TSCA ($f$, $g$) with $C(f) =$ Class C and $C(g) =$ Class B can show the dynamics $C((f, g)) = C(g)$ (see Example~\ref{ex3}). Fig~\ref{fig3} shows evidence of this situation. Here, ECA $22$ and ECA $7$ respectively belong to Classes III and II, and the CA $(22, 7)$ shows periodic behavior (like Wolfram's Class II), see Fig~\ref{fig3:e}, where the class of ECA $7$ dominates.
\item A TSCA ($f$, $g$) with $C(f) \neq C(g)$, on the other hand, may depict dynamics in which none of the rules' classes dominates, i.e. $C((f, g)) \neq C(f)$ and $C((f, g)) \neq C(g)$. The TSCA ($f$, $g$) with $C(f) =$ Class C and $C(g) =$ Class A can display $C((f, g)) =$ Class B, with none of the rules' classes dominating (see Fig~\ref{fig4}). In Fig~\ref{fig4:c}, ECA $105$ belongs to Class III and ECA $40$ belongs to Class I. However, the TSCA $(105,40)[0.2]$ shows periodic behavior (like Wolfram's Class II).
\end{itemize}
\begin{table}[!htbp]
\centering
\scriptsize
\begin{adjustbox}{width=\columnwidth,center}
\begin{tabular}{c|c|c|c} \hline
Class & Conditions & Number of TSCAs& $\tau$ \\ \hline
&&&\\
& $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = $\mathcal{C}$(($f,g$)) = Class A & 28&\\
& $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = Class B, $\mathcal{C}$(($f,g$)) = Class A & 113&\\
A& $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = Class C, $\mathcal{C}$(($f,g$)) = Class A & 0&\\
& $\mathcal{C}$($f$) = Class B, $\mathcal{C}$($g$) = Class A, $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($g$) & 297&\\
& $\mathcal{C}$($f$) = Class C, $\mathcal{C}$($g$) = Class A, $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($g$) & 98&\\
& $\mathcal{C}$($f$) = Class C, $\mathcal{C}$($g$) = Class B, $\mathcal{C}$(($f,g$)) = Class A & 57&\\
&&&\\
&&&\\
&&&\\
& $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = $\mathcal{C}$(($f,g$)) = Class B & 1436& \\
& $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = Class A, $\mathcal{C}$(($f,g$)) = Class B & 0&\\
B & $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = Class C, $\mathcal{C}$(($f,g$)) = Class B & 0&\\
& $\mathcal{C}$($f$) = Class B, $\mathcal{C}$($g$) = Class A, $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($f$) & 167& Insensitive\\
& $\mathcal{C}$($f$) = Class C, $\mathcal{C}$($g$) = Class B, $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($g$) & 67&\\
& $\mathcal{C}$($f$) = Class C, $\mathcal{C}$($g$) = Class A, $\mathcal{C}$(($f,g$)) = Class B & 20&\\
&&&\\
&&&\\
&&&\\
& $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = $\mathcal{C}$(($f,g$)) = Class C & 149&\\
& $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = Class A, $\mathcal{C}$(($f,g$)) = Class C & 0&\\
C & $\mathcal{C}$($f$) = $\mathcal{C}$($g$) = Class B, $\mathcal{C}$(($f,g$)) = Class C & 18&\\
& $\mathcal{C}$($f$) = Class C, $\mathcal{C}$($g$) = Class A, $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($f$) & 0&\\
& $\mathcal{C}$($f$) = Class C, $\mathcal{C}$($g$) = Class B, $\mathcal{C}$(($f,g$)) = $\mathcal{C}$($f$) & 89&\\
& $\mathcal{C}$($f$) = Class B, $\mathcal{C}$($g$) = Class A, $\mathcal{C}$(($f,g$)) = Class C & 0&\\
&&&\\
\hline
&&&\\
$*$&$-$&1289& Sensitive\\
&&&\\
\hline
\end{tabular}
\end{adjustbox}
\caption{Distribution of TSCAs under different classes ($*$ indicates that no specific class can be obtained).}
\label{Table4}
\end{table}
\begin{figure}[!ht]
\subfloat[ECA 164]{
\begin{minipage}[c][1\width]{
0.3\textwidth}
\label{fig1:ex1:a}
\centering
\includegraphics[width=1\textwidth]{Images/164}
\end{minipage}}
\hfill
\subfloat[ECA 131]{
\begin{minipage}[c][1\width]{
0.3\textwidth}
\label{fig1:ex1:b}
\centering
\includegraphics[width=1\textwidth]{Images/131}
\end{minipage}}
\hfill
\subfloat[$(164, 131);\: \tau=0.1$ ]{
\begin{minipage}[c][1\width]{
0.33\textwidth}
\label{fig1:ex1:c}
\centering
\includegraphics[width=1.05\textwidth]{Images/164_131_1}
\end{minipage}}
\caption{Stochastic CAs ($f$, $g$) dynamics, when $C((f, g)) \neq C(f)$ and $C(f) = C(g)$.
}
\label{fig1:ex1}
\end{figure}
\begin{example}\label{ex1}
Let us consider a TSCA ($164, 131$)[$0.1$], where $f=164$ and $g=131$ are applied with probabilities $0.9$ and $0.1$ respectively at each step of the evolution. Fig~\ref{fig1:ex1:a} and Fig~\ref{fig1:ex1:b} show the space-time diagrams of ECA $164$ and ECA $131$ from a random initial configuration, whereas Fig~\ref{fig1:ex1:c} shows the space-time diagram of the TSCA ($164, 131$)[$0.1$]. Rule $131$ is applied at the time steps marked by $\leftarrow$ (arrow) in Fig~\ref{fig1:ex1:c}. It is interesting to note here that the dynamical behavior of a TSCA can differ widely from that of the CAs given by the default rule and the noise.
\end{example}
\begin{example}\label{ex3}
Fig~\ref{fig3} shows the dynamics where one of the rules' classes dominates. Rule $22$ belongs to Class III and rule $7$ belongs to Class II, and the CA shows periodic behavior (like Wolfram's Class II) (see Fig~\ref{fig3:e}), where the class of rule $7$ dominates.
\begin{figure}[!ht]
\subfloat[ECA 22]{
\begin{minipage}[c][1.2\width]{
0.23\textwidth}
\label{fig3:a}
\centering
\includegraphics[width=1\textwidth]{Images/rule22.png}
\end{minipage}}
\hfill
\subfloat[ECA 7]{
\begin{minipage}[c][1.2\width]{
0.23\textwidth}
\label{fig3:b}
\centering
\includegraphics[width=1\textwidth]{Images/rule7.png}
\end{minipage}}
\hfill
\subfloat[$\tau=0.1$ ]{
\begin{minipage}[c][1.2\width]{
0.23\textwidth}
\label{fig3:c}
\centering
\includegraphics[width=1\textwidth]{Images/fgT_22_7_0_1}
\end{minipage}}
\hfill
\subfloat[$ \tau=0.8$ ]{
\begin{minipage}[c][1.2\width]{
0.23\textwidth}
\label{fig3:e}
\centering
\includegraphics[width=1\textwidth]{Images/fgT_22_7_0_7999999999999999}
\end{minipage}}
\caption{Dynamics of CA $(22, 7)[0.1]$ and CA $(22, 7)[0.8]$ $(C((f, g)) = C(g)$ and $C(f) \neq C(g))$.
}
\label{fig3}
\end{figure}
\end{example}
\begin{example}\label{ex5}
Fig~\ref{fig4} shows the dynamics where none of the rules' classes dominates. Rule $105$ belongs to Class III and rule $40$ belongs to Class I. The TSCA ($105$, $40$)[$0.6$] shows periodic behavior (like Wolfram's Class II) (see Fig~\ref{fig4:e}).
\end{example}
\begin{figure}[!ht]
\subfloat[ECA 105]{
\begin{minipage}[c][1.2\width]{
0.23\textwidth}
\label{fig4:a}
\centering
\includegraphics[width=1\textwidth]{Images/rule105.png}
\end{minipage}}
\hfill
\subfloat[ECA 40]{
\begin{minipage}[c][1.2\width]{
0.23\textwidth}
\label{fig4:b}
\centering
\includegraphics[width=1\textwidth]{Images/rule40.png}
\end{minipage}}
\hfill
\subfloat[$\tau=0.2$ ]{
\begin{minipage}[c][1.2\width]{
0.23\textwidth}
\label{fig4:c}
\centering
\includegraphics[width=1\textwidth]{Images/fgT_105_40_0_2}
\end{minipage}}
\hfill
\subfloat[$ \tau=0.6$ ]{
\begin{minipage}[c][1.2\width]{
0.23\textwidth}
\label{fig4:e}
\centering
\includegraphics[width=1\textwidth]{Images/fgT_105_40_0_6}
\end{minipage}}
\caption{Dynamics of CA $(105, 40)[0.2]$ and CA $(105, 40)[0.6]$ $(C((f, g)) \neq C(f)$, $C((f, g)) \neq C(g)$ and $C(f) \neq C(g))$.
}
\label{fig4}
\end{figure}
We have found that out of $3828$ TSCAs, $593$, $1690$ and $256$ TSCAs belong to Class A, Class B and Class C respectively. Table~\ref{Table4} summarizes the outcome. It is also found that for a number of TSCAs, the noise rate ($\tau$) does not play a significant role; that is, for these cases, if we progressively vary the temporal noise rate, the cellular system's dynamics remains unchanged. Therefore, these TSCAs are $\tau$-insensitive.
However, there are $1289$ cases, out of $3828$ TSCAs, where the noise rate $(\tau)$ plays a significant role, i.e. they are $\tau$-sensitive. These TSCAs show phase transition~\footnote{Some TSCAs show a discontinuity after a critical value of the temporal noise rate; this abrupt change of behavior is well known as a phase transition.} and class transition~\footnote{For a set of TSCAs, the class dynamics of the system changes after a critical value of $\tau$.} dynamics. Fig~\ref{TSCA7} shows phase transition behavior and Fig~\ref{Fig8} shows class transition behavior. However, the goal of the current work is to explore the pattern classification capability of TSCAs, for which $\tau$-sensitive CAs are not appropriate. Therefore, we next deal with the CAs whose dynamical behavior is independent of $\tau$, i.e. the $\tau$-insensitive ones. For our purpose, we identify the $\tau$-insensitive TSCAs which converge to a fixed point from any initial configuration.
\subsection{Convergence}\label{convergence}
During its evolution, a CA approaches a set of configurations which form an attractor. If the set is a singleton, we call the attractor a fixed point. Whenever all the attractors are fixed points, we call the CA convergent.
\begin{definition}\label{def0}
A TSCA ($f$, $g$)[$\tau$] is called a convergent TSCA if the CA converges to a fixed point from any initial configuration and for any $\tau$ and $n$, where $n$ is the number of cells of the TSCA.
\end{definition}
In other words, for a given seed, a TSCA converges to a fixed point if both $f$ and $g$ converge to a fixed point separately for the same seed. Following a large number of experiments, we identify the set of TSCAs that converge to fixed points. Here, the experimental study shows that $424$ couples of CAs converge to a fixed point starting from any initial configuration and for any $\tau$ and $n$; see Table~\ref{pattern1}.
However, there are some cases where the convergence feature of various TSCAs changes depending on the size $n$ and $\tau$. As evidence, Fig~\ref{Fig5} depicts the dynamics of the TSCA $(30, 136)$, which converges to the all-$0$ configuration after a critical value of $\tau$ (here, $\tau = 0.13$). However, $(30, 136)$ oscillates around a fixed non-zero density for $\tau = 0.08$ and $\tau = 0.11$ in Fig~\ref{Fig5}. Earlier, we mentioned that this type of abrupt change of behavior is well known as a second-order phase transition~\cite{Sethi2016}. Similarly, the couple $(131, 136)$ converges to the all-$1$ configuration for $\tau$ values $0.5$ and $0.9$; see Fig~\ref{Fig8}. On the other hand, it shows chaotic dynamics for $\tau = 0.1$. Here, the class dynamics of the system changes after a critical value of $\tau$, i.e. a class transition. However, for the current study, we exclude these TSCAs.
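A minimal sketch of the kind of convergence test described above is given below; the sampled sizes, noise rates, number of trials and the step budget are illustrative assumptions of ours, not the parameters of the actual experiment. A couple $(f,g)$ is accepted only if every sampled run settles into a configuration that is a fixed point of both $f$ and $g$.
\begin{verbatim}
import random

def eca_step(config, rule):
    # ECA update helper as in the earlier sketches
    n = len(config)
    return [(rule >> (4 * config[(i - 1) % n]
                      + 2 * config[i]
                      + config[(i + 1) % n])) & 1
            for i in range(n)]

def looks_convergent(f, g, sizes=(8, 10, 12), taus=(0.1, 0.5, 0.9),
                     trials=50, max_steps=2000, seed=0):
    rng = random.Random(seed)
    for n in sizes:
        for tau in taus:
            for _ in range(trials):
                config = [rng.randint(0, 1) for _ in range(n)]
                for _ in range(max_steps):
                    if (eca_step(config, f) == config
                            and eca_step(config, g) == config):
                        break                 # fixed point of both rules
                    rule = g if rng.random() < tau else f
                    config = eca_step(config, rule)
                else:
                    return False              # never settled within the budget
    return True

# e.g. looks_convergent(12, 4) is expected to be True, whereas a couple of
# chaotic rules such as (30, 45) is expected to be False.
\end{verbatim}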
Although $424$ convergent TSCAs have been identified for designing a pattern classifier, the general demand is for multiple attractor TSCAs.
\begin{definition}\label{def3}
If a convergent TSCA has more than one fixed point, the TSCA is called a Multiple Attractor TSCA.
\end{definition}
\begin{example}
The TSCA ($12,4$)[$\tau$] with $4$ cells (and any $\tau$) is a convergent TSCA having seven fixed points $-$ $0000$, $0001$, $0010$, $0100$, $1000$, $0101$ and $1010$. Hence, it is a multiple attractor TSCA.
\end{example}
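Since at any time step either $f$ or $g$ may be applied, a configuration is a fixed point of the TSCA exactly when it is a fixed point of both rules. The small sketch below (with illustrative helper names) enumerates them and reproduces the seven fixed points of the example.
\begin{verbatim}
from itertools import product

def eca_step(config, rule):
    # ECA update (Wolfram numbering, periodic boundary), tuple in/out
    n = len(config)
    return tuple((rule >> (4 * config[(i - 1) % n]
                           + 2 * config[i]
                           + config[(i + 1) % n])) & 1
                 for i in range(n))

def tsca_fixed_points(f, g, n):
    # configurations left unchanged by both f and g
    return [c for c in product((0, 1), repeat=n)
            if eca_step(c, f) == c and eca_step(c, g) == c]

print(["".join(map(str, c)) for c in tsca_fixed_points(12, 4, 4)])
# -> ['0000', '0001', '0010', '0100', '0101', '1000', '1010']
\end{verbatim}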
Previously, we have mentioned that, to design a pattern classifier, we exclude the TSCAs which
\begin{itemize}
\item are associated with single fixed point (see Table~\ref{pattern1}, in black); and
\item are associated with large number of attractors, specifically, couples with ECA
$204$ (see Table~\ref{pattern1}, in blue).
\end{itemize}
Therefore, to design a pattern classifier, we need to pick a few couples from Table~\ref{pattern1} after excluding the above situations. Finally, we find $114$ couples (see Table~\ref{pattern1}, in bold) which are the candidates for the proposed pattern classifier.
\begin{table}[htbp]
\begin{center}
\begin{adjustbox}{width=\columnwidth,center}
\begin{tabular}{|cccccccccccc|} \hline\hline
(2, 0) & (4, 0) & (6, 0) & (6, 4) & (8, 0) & (8, 2) & (8, 4) & (8, 6) & (10, 0)& (10, 8) & (12, 0) &
(12, 8) \\ (14, 0) & (14, 4) & (18, 0) & (18, 8) & (22, 0) & (22, 4) & (22, 8) & (24, 0) & (24, 8) & (26, 0) & (26, 8) &
(28, 0) \\ (28, 8) & (30, 0) & (30, 4) & (32, 0) & (32, 2) & (32, 8) & (32, 10) & (32, 18) &(32, 24) & (32, 26) & (34, 0) &
(34, 8) \\ (36, 0) & (36, 6) & (36, 8) & (36, 32) & (38, 0) & (38, 4) & (38, 8) & (38, 32) & (40, 0) & (40, 2) & (40, 8) & (40, 10) \\ (40, 24) & (40, 36) & (40, 38) & (42, 0) & (42, 8) & (44, 0) &
(44, 8) & (44, 32) & (46, 0) & (46, 4) & (46, 32) & (50, 0) \\ (50, 8) & (54, 0) & (54, 4) & (54, 8) & (54, 32) & (54, 40) & (56, 0) & (56, 8) & (58, 0) & (60, 0) & (60, 4) & (60, 8) \\ (60, 32) & (72, 0) & (72, 2) & (72, 4) &
(72, 6) & (72, 8) & (72, 12) & (72, 24) & (72, 28) & (72, 32) & (72, 34) & (72, 36) \\ (72, 38) & (74, 0) & (74, 8) & (74, 32) & (74, 72) & (76, 0) & (76, 8) & (78, 0) & (78, 18) & (90, 0) & (90, 8) & (90, 32) \\ (94, 0) & (104, 0) & (104, 2) & (104, 8) & (104, 24) & (104, 36) & (104, 38) & (104, 44) & (104, 74) & (106, 0) & (106, 8) & (106, 72) \\ (108, 0) & (108, 8) & (108, 32) & (108, 40) & (110, 0) & (110, 4) & (110, 32) & (122, 0) & (122, 8) & (122, 36) & (126, 0) & (126, 4) \\(126, 32) & (128, 0) & (128, 2) & (128, 4) & (128, 6) & (128, 8) & (128, 10) & (128, 12) & (128, 14) & (128, 18) & (128, 22) & (128, 24) \\ (128, 26) & (128, 28) & (128, 32) & (128, 34) & (128, 36) & (128, 38) & (128, 40) & (128, 42) & (128, 44) & (128, 46) & (128, 50) & (128, 54) \\ (128, 56) & (128, 58) & (128, 60) & (128, 72) & (128, 74) & (128, 76) & (128, 78) & (128, 94) & (128, 104) & (128, 106) & (128, 108)& (128, 110) \\ (130, 0) & (130, 8) & (130, 32) & (130, 40) & (130, 72) & (130, 104) & (131, 128) & (132, 0) & (132, 6) & (132, 8) & (132, 14) & (132, 38) \\ (132, 46) & (132, 72) & (134, 0) & (134, 4) & (134, 8) & (134, 36) & (134, 72) & (136, 0) & (136, 2) & (136, 4) & (136, 6)& (136, 8) \\ (136, 10) & (136, 12) & (136, 18) & (136, 22) & (136, 24) & (136, 26) & (136, 28) & (136, 32) & (136, 34) & (136, 36) & (136, 38) & (136, 40) \\ (136, 42) & (136, 44) & (136, 50) & (136, 54) & (136, 56) & (136, 72) & (136, 74) & (136, 76) & (136, 90) & (136, 104) & (136, 106) & (136, 108) \\ (138, 0) & (138, 8) & (138, 32)&\ (138, 40) & (140, 0) & (140, 8) & (140, 72) & (142, 0) & (142, 4) & (146, 0) & (146, 8) & (146, 32) \\ (146, 78) & (150, 0) & (150, 4) & (150, 8) & (152, 0) & (152, 8) & (152, 32) & (152, 40) & (152, 72) & (152, 104) & (154, 0) & (154, 8) \\ (154, 32) & (154, 40) & (156, 0) & (156, 8) & (156, 72) & (156, 126) & (156, 131) & (160, 0) & (160, 2) & (160, 8) & (160, 10) & (160, 18) \\ (160, 24) & (160, 26) & (160, 36) & (160, 38) & (160, 44) & (160, 54) & (160, 72) & (160, 74) & (160, 108) & (160, 131) & (162, 0) & (162, 8) \\ (162, 72) & (164, 0) & (164, 6) & (164, 8) & (164, 32) & (164, 40) & (164, 72) & (164, 104) & (168, 0) & (168, 2) & (168, 8) & (168, 10) \\ (168, 24) & (168, 36) & (168, 38) & (168, 54) & (170, 0) & (170, 8) & (172, 0) & (172, 8) & (172, 32) & (178, 0) & (178, 8) & (178, 131) \\ (184, 0) & (184, 8) & (200, 0) & (200, 2) & (200, 4) & (200, 6) & (200, 8) & (200, 12) & (200, 24) & (200, 28) & (200, 32) & (200, 34) \\ (200, 36) & (200, 38) & (204, 0) & (204, 8) & (232, 0) & (232, 2) & (232, 8) & (232, 24) &(232, 36) & (232, 38) & (232, 44) &
\textbf{(12, 4)} \\ \textbf{(36, 4)} & \textbf{(36, 12)} & \textbf{(44, 4)} &\textbf{ (44, 12)} & \textbf{(44, 36)} & \textbf{(76, 4) }&\textbf{ (76, 12)} &\textbf{(76, 72)} & \textbf{(78, 76) }&\textbf{ (94, 78)} &\textbf{ (104, 72)} &\textbf{ (108, 4)} \\ \textbf{(108, 12)} & \textbf{(108, 36)} & \textbf{(108, 44)} &\textbf{(108, 72)} & \textbf{(130, 128)} &\textbf{(132, 4)}&\textbf{ (132, 12)} & \textbf{(132, 36)} & \textbf{(132, 44)} & \textbf{(132, 76) }&\textbf{ (132, 108)} &\textbf{(132, 128)} \\\textbf{ (134, 128)} &\textbf{ (134, 132)} & \textbf{(136, 128)} & \textbf{(136, 130)} &\textbf{ (136, 132)} & \textbf{(136, 134)} &\textbf{ (138, 128)} &\textbf{(138, 136)} & \textbf{(140, 4)} & \textbf{(140, 12)} & \textbf{(140, 36)} & \textbf{(140, 44)} \\ \textbf{(140, 76)} &\textbf{ (140, 108)} & \textbf{(140, 128)}&\textbf{(140, 132) }&\textbf{ (140, 136)} &\textbf{ (142, 128)} & \textbf{(142, 132)} &\textbf{ (146, 128)} &\textbf{ (146, 136)} & \textbf{(150, 128)} & \textbf{(150, 136) } &\textbf{(152, 128)} \\\textbf{ (152, 136)} & \textbf{(154, 128)}&\textbf{ (154, 136)} &\textbf{ (156, 128)} & \textbf{(156, 136)} &\textbf{ (160, 128) }& \textbf{(160, 130)} &\textbf{(160, 136)} & \textbf{(160, 138)} & \textbf{(160, 146)} & \textbf{(160, 152)} & \textbf{(160, 154)} \\ \textbf{(162, 128)} & \textbf{(162, 136)} & \textbf{(164, 4)} &\textbf{(164, 12)} &\textbf{(164, 36)} & \textbf{(164, 44)} & \textbf{(164, 108) }&\textbf{ (164, 128)} &\textbf{ (164, 132) }& \textbf{(164, 134)}&\textbf{ (164, 136) } &\textbf{(164, 140)}\\\textbf{ (164, 160)} & \textbf{(168, 128)} & \textbf{(168, 130)} &\textbf{ (168, 136)} &\textbf{ (168, 138)} & \textbf{(168, 152)} &\textbf{ (170, 128)} & \textbf{ (170, 136)} & \textbf{(172, 4)} &\textbf{ (172, 12)} &\textbf{ (172, 36)} &\textbf{ (172, 128)} \\\textbf{ (172, 132) }& \textbf{(172, 136)} &\textbf{ (172, 140)} & \textbf{(178, 128)} & \textbf{(178, 136)} &\textbf{ (184, 128) }& \textbf{(184, 136)}&\textbf{ (200, 72)} & \textbf{(200, 76)} &\textbf{ (200, 128)} &\textbf{ (200, 130)} &\textbf{ (200, 132)} \\ \textbf{(200, 134)} & \textbf{(200, 136)} & \textbf{(200, 140)} & \textbf{(200, 152)} & \textbf{(200, 156) }& \textbf{(200, 160)} &\textbf{ (200, 162)} &\textbf{(200, 164) }& \textbf{(232, 72)} & \textbf{(232, 108)} & \textbf{(232, 128)} &\textbf{ (232, 130)} \\ \textbf{(232, 136)} & \textbf{(232, 154)} & \textbf{(232, 164)} &\textbf{(232, 172)} & \textbf{(232, 200)}&
\textcolor{blue}{(204, 4)} & \textcolor{blue}{(204, 12)} & \textcolor{blue}{(204, 36)} & \textcolor{blue}{(204, 72)} & \textcolor{blue}{(204, 76)} & \textcolor{blue}{(204, 78)} & \textcolor{blue}{(204, 128)} \\ \textcolor{blue}{(204, 132)}& \textcolor{blue}{(204, 136)} & \textcolor{blue}{(204, 140)} & \textcolor{blue}{(204, 200)}&&&&&&&&\\\hline \hline
\end{tabular}
\end{adjustbox}
\caption{Pairs of TSCAs that converge to fixed points}
\label{pattern1}
\end{center}
\end{table}
\section{Multiple Attractor TSCA as Pattern Classifier}\label{pattern_classifier}
An $n$-cell TSCA with $k$ fixed points can act as a $k$-class classifier. Each class contains the set of configurations that converge to a single fixed point; hence that fixed point can act as the representative of the set. To design a two-class classifier, a subset of the $k$ fixed points represents one class, while the remaining fixed points represent the other class. From the implementation point of view, all the fixed points along with their class information are stored in memory. Whenever the class of an input pattern ($P$) is to be determined, the TSCA is run with the pattern as seed. Based on the fixed point at which the TSCA settles down, the class of $P$ is declared.
As an example, the $4$-cell convergent TSCA ($108,44$)[$0.1$], which has five attractors, may be used as a two-class pattern classifier. Assume that the fixed points $0000$, $0001$ and $1000$ represent Class I, and the remaining fixed points $0010$ and $0100$ represent Class II. Whenever a pattern, say $1101$, is given, the TSCA is run with $1101$ as seed. After some time, the CA reaches a fixed point, say $1000$. Since $1000$ represents Class I, the class of $1101$ is declared as I. Hence this multiple attractor TSCA can act as a two-class pattern classifier, see Fig.~\ref{fig:pattern1}.
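To make the classification procedure concrete, the following minimal Python sketch (our own illustration, not the implementation used in this thesis) runs an elementary-CA-based TSCA from a seed pattern until a fixed point is reached and then looks up the stored class of that fixed point. It assumes a periodic boundary and that a configuration is an attractor when it is fixed under both $f$ and $g$; the helper names are hypothetical.
\begin{verbatim}
import random

def eca_step(config, rule):
    # Apply an elementary CA rule (Wolfram number) to every cell,
    # assuming a periodic boundary.
    n = len(config)
    nxt = []
    for i in range(n):
        left, mid, right = config[(i - 1) % n], config[i], config[(i + 1) % n]
        idx = (left << 2) | (mid << 1) | right   # neighborhood as a 3-bit number
        nxt.append((rule >> idx) & 1)            # look up that bit of the rule
    return nxt

def tsca_classify(pattern, f, g, tau, class_of, max_steps=10000):
    # Run the TSCA (f, g)[tau] from `pattern`; at every step the noise
    # rule g is chosen with probability tau, otherwise the default rule f.
    config = list(pattern)
    for _ in range(max_steps):
        if eca_step(config, f) == config and eca_step(config, g) == config:
            break                                # fixed point of both rules
        rule = g if random.random() < tau else f
        config = eca_step(config, rule)
    return class_of.get(tuple(config))

# Hypothetical usage for the 4-cell TSCA (108, 44)[0.1] discussed above.
class_of = {(0,0,0,0): 'I', (0,0,0,1): 'I', (1,0,0,0): 'I',
            (0,0,1,0): 'II', (0,1,0,0): 'II'}
print(tsca_classify((1,1,0,1), f=108, g=44, tau=0.1, class_of=class_of))
\end{verbatim}
With a periodic boundary, the five configurations listed above are indeed fixed under both rules $108$ and $44$, so the lookup table in this sketch contains exactly those five entries.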
For a good classifier, the patterns should be distributed evenly throughout the attractor basins. In real-world datasets, however, an attractor basin may mix up patterns of the two classes. We therefore evaluate the classifier's performance in terms of classification accuracy (efficiency), defined as the ratio of properly classified patterns to the total number of patterns:
\begin{align}
\text{Efficiency} &= \frac{\text{No. of properly classified patterns}}{\text{Total no. of patterns}}\times 100\%
\end{align}
\begin{figure}
\centering
\includegraphics[width=0.6\textwidth]{Images/pattern3}
\caption{Two-class pattern classification using the multiple fixed-point attractors of the TSCA ($108$, $44$).}
\label{fig:pattern1}
\end{figure}
However, a multiple-attractor TSCA is not necessarily a good pattern classifier. To measure the performance and effectiveness of a TSCA, we pass it through a \emph{training phase} and a \emph{testing phase}.
\subsection{Training phase}\label{training}
As shown in Table~\ref{pattern1}, there are $114$ TSCAs with multiple fixed-point attractors. These TSCAs are potential candidates for pattern classification. To find the most effective classifier, we train all the candidates using patterns of two disjoint datasets, say $P_1$ and $P_2$. A TSCA from the set of candidates is first loaded with the patterns of $P_1$ and $P_2$, and updated until the TSCA reaches a fixed point. We keep track of all the attractors and the number of patterns that converge to them. If more patterns from pattern set $P_1$ than from pattern set $P_2$ converge to an attractor, the attractor is declared to be of Class I and stored in \emph{attractorset-$1$}; otherwise, the attractor is of Class II and stored in \emph{attractorset-$2$}. At the end, we have two sets of attractors. The following formula is used to determine the efficiency of a TSCA:
\begin{align}
\text{Efficiency} &= \frac{\sum_{i=1}^{m} \max(n_1^i, n_2^i)}{\big|P_1\big|+\big|P_2\big|}
\end{align}
Here, $n^i_1$ and $n^i_2$ are the numbers of patterns from datasets $P_1$ and $P_2$, respectively, that converge to the $i^{th}$ fixed-point attractor of the TSCA, and $m$ is the number of fixed-point attractors. $\big|P_1\big|$ and $\big|P_2\big|$ are the numbers of patterns of the two datasets used for pattern classification. The \emph{training phase} produces the TSCA with the highest efficiency, along with \emph{attractorset-$1$} and \emph{attractorset-$2$}, as output. Note that the output of this phase is used as input to the \emph{testing phase} (see Section~\ref{testing}).
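The training phase can be summarized by the following Python sketch (an illustration under our own assumptions, reusing the hypothetical \texttt{eca\_step} routine of the earlier sketch; ties between $n^i_1$ and $n^i_2$ are broken here in favor of Class I, which is not specified in the text).
\begin{verbatim}
import random
from collections import defaultdict

def run_to_fixed_point(pattern, f, g, tau, max_steps=10000):
    # Same loop as in the classification sketch, but it returns the
    # attractor (fixed point) itself instead of a class label.
    config = list(pattern)
    for _ in range(max_steps):
        if eca_step(config, f) == config and eca_step(config, g) == config:
            break
        rule = g if random.random() < tau else f
        config = eca_step(config, rule)
    return tuple(config)

def train_tsca(P1, P2, f, g, tau):
    counts = defaultdict(lambda: [0, 0])          # attractor -> [n1, n2]
    for cls, patterns in enumerate((P1, P2)):
        for p in patterns:
            counts[run_to_fixed_point(p, f, g, tau)][cls] += 1
    attractorset_1 = {a for a, (n1, n2) in counts.items() if n1 >= n2}
    attractorset_2 = set(counts) - attractorset_1
    efficiency = sum(max(n1, n2) for n1, n2 in counts.values()) \
                 / (len(P1) + len(P2))
    return attractorset_1, attractorset_2, efficiency
\end{verbatim}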
As an example, let us consider the Monk$-1$ dataset ($11$-bit data) for classification. Let us take the TSCA $(76, 72)[0.2]$ as a two-class pattern classifier (similar to Fig.~\ref{fig:pattern1}), with the two pattern sets $P_1$ and $P_2$ loaded into the TSCA as Class I and Class II respectively. There are in total $169$ patterns in $P_1$ and $P_2$, out of which $2$ patterns of $P_2$ and $4$ patterns of $P_1$ are wrongly identified as Class I and Class II respectively. Hence, $163$ patterns are properly classified, which gives a training efficiency of $96.4497\%$.
To get the best candidate TSCA, we train all $114$ TSCAs of Table~\ref{pattern1} on the Monk$-1$ dataset. The result of the training is noted in Table~\ref{table4}. We find the TSCA ($76,72$)[$0.1$], with training efficiency $97.54\%$, to be the best performing TSCA. This TSCA acts as our desired classifier.
\subsection{Testing phase}\label{testing}
In this phase, a new collection of patterns is used to find the efficacy of the designed classifier. The attractor sets \emph{attractorset-1} and \emph{attractorset-2} together with a TSCA (the output of the training phase) and the pattern sets ($P_1$ and $P_2$) are taken as input. The TSCA is loaded with the patterns of $P_1$ and $P_2$ and updated till all the patterns converge to a fixed-point attractor. The number of patterns correctly detected by the classifier is used to measure the TSCA's efficiency: if an attractor is present in \emph{attractorset-1}, only the patterns from dataset $P_1$ that converge to it are counted as correctly identified; similarly, if an attractor is present in \emph{attractorset-2}, only the patterns from dataset $P_2$ that converge to it are counted as correctly identified. TSCAs with their training and testing efficiencies for different datasets are reported in Table~\ref{table5}.
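A matching sketch of the testing phase (again our own illustration, reusing the hypothetical \texttt{run\_to\_fixed\_point} helper from the training sketch) is:
\begin{verbatim}
def test_tsca(P1, P2, f, g, tau, attractorset_1, attractorset_2):
    # A pattern is counted as correctly classified only if it converges
    # to an attractor stored for its own class during training.
    correct = 0
    for patterns, own_set in ((P1, attractorset_1), (P2, attractorset_2)):
        for p in patterns:
            if run_to_fixed_point(p, f, g, tau) in own_set:
                correct += 1
    return 100.0 * correct / (len(P1) + len(P2))
\end{verbatim}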
\begin{table}[htbp]
\begin{center}
\begin{adjustbox}{width=\columnwidth,center}
\begin{tabular}{|ccccccccc|} \hline
TSCAs & \makecell{ Efficiency\\ (in \%)} & \makecell{ Number of\\ Attractors} &TSCAs & \makecell{ Efficiency\\ (in \%)} & \makecell{ Number of\\ Attractors}&TSCAs & \makecell{ Efficiency\\ (in \%)} & \makecell{ Number of\\ Attractors}\\\hline
(12,4)[0.1] & 86.066 & 199 & (36,4)[0.9] & 68.033 & 67 & (36,12)[0.9] & 84.426 & 67\\(44,4)[0.1] & 83.607 & 67 & (44,36)[0.1] & 83.607 & 67 & (44,12)[0.1] & 84.426 & 67\\(76,4)[0.1] & 96.721 & 199 & (76,12)[0.1] & 98.361 & 199 & (76,72)[0.1] & 97.541 & 67\\(78,76)[0.1] & 65.574 & 23 & (94,78)[0.1] & 73.77 & 23 & (104,72)[0.9] & 64.754 & 34\\(108,4)[0.1] & 81.967 & 67 & (108,36)[0.1] & 85.246 & 67 & (108,72)[0.4] & 80.328 & 34\\(108,12)[0.1] & 88.525 & 67 & (108,44)[0.1] & 89.344 & 67 & (130,128)[0.1] & 50.0 & 2\\(132,128)[0.1] & 72.951 & 2 & (132,36)[0.1] & 75.41 & 67 & (132,4)[0.1] & 74.59 & 199\\(132,12)[0.9] & 87.705 & 199 & (132,44)[0.9] & 86.885 & 67 & (132,76)[0.9] & 88.525 & 199\\(132,108)[0.9] & 91.803 & 67 & (134,128)[0.1] & 50.0 & 2 & (134,132)[0.1] & 50.0 & 2\\(136,128)[0.1] & 50.0 & 2 & (136,130)[0.1] & 50.0 & 2 & (136,132)[0.1] & 50.0 & 2\\(136,134)[0.1] & 50.0 & 2 & (138,128)[0.1] & 50.0 & 2 & (138,136)[0.1] & 50.0 & 2\\(140,128)[0.1] & 88.525 & 2 & (140,36)[0.1] & 86.885 & 67 & (140,4)[0.1] & 85.246 & 199\\(140,132)[0.1] & 88.525 & 200 & (140,136)[0.1] & 86.066 & 2 & (140,108)[0.9] & 94.262 & 67\\(140,44)[0.2] & 88.525 & 67 & (140,12)[0.1] & 84.426 & 199 & (140,76)[0.9] & 94.262 & 199\\(142,128)[0.1] & 50.0 & 2 & (142,132)[0.1] & 50.0 & 2 & (146,128)[0.1] & 50.0 & 2\\(146,136)[0.1] & 50.0 & 2 & (150,128)[0.1] & 50.0 & 2 & (150,136)[0.1] & 50.0 & 2\\(152,128)[0.1] & 50.0 & 2 & (152,136)[0.1] & 50.0 & 2 & (154,128)[0.1] & 50.0 & 2\\(154,136)[0.1] & 50.0 & 2 & (156,128)[0.1] & 50.0 & 2 & (156,136)[0.1] & 50.0 & 2\\(160,128)[0.1] & 50.0 & 2 & (160,130)[0.1] & 50.0 & 2 & (160,136)[0.1] & 50.0 & 2\\(160,138)[0.1] & 50.0 & 2 & (160,146)[0.1] & 50.0 & 2 & (160,152)[0.1] & 50.0 & 2\\(160,154)[0.1] & 50.0 & 2 & (162,128)[0.1] & 50.0 & 2 & (162,136)[0.1] & 50.0 & 2\\(164,128)[0.1] & 68.033 & 2 & (164,4)[0.1] & 70.492 & 67 & (164,36)[0.2] & 70.492 & 67\\(164,132)[0.9] & 72.951 & 68 & (164,108)[0.9] & 87.705 & 67 & (164,134)[0.7] & 73.77 & 2\\(164,160)[0.9] & 73.77 & 2 & (164,136)[0.9] & 77.869 & 2 & (164,44)[0.9] & 86.066 & 67\\(164,140)[0.8] & 86.066 & 68 & (164,12)[0.9] & 85.246 & 67 & (168,128)[0.1] & 50.0 & 2\\(168,130)[0.1] & 50.0 & 2 & (168,136)[0.1] & 50.0 & 2 & (168,138)[0.1] & 50.0 & 2\\(168,152)[0.1] & 50.0 & 2 & (170,128)[0.1] & 50.0 & 2 & (170,136)[0.1] & 50.0 & 2\\(172,128)[0.1] & 81.148 & 2 & (172,4)[0.1] & 77.869 & 67 & (172,36)[0.1] & 81.967 & 67\\(172,132)[0.1] & 82.787 & 68 & (172,136)[0.1] & 83.607 & 2 & (172,12)[0.4] & 86.066 & 67\\(172,140)[0.7] & 85.246 & 68 & (178,128)[0.1] & 50.0 & 2 & (178,136)[0.1] & 50.0 & 2\\(184,128)[0.1] & 50.0 & 2 & (184,136)[0.1] & 50.0 & 2 & (200,160)[0.1] & 87.705 & 2\\(200,162)[0.1] & 86.885 & 2 & (200,130)[0.1] & 86.885 & 2 & (200,132)[0.1] & 86.885 & 2\\(200,152)[0.1] & 86.885 & 2 & (200,128)[0.1] & 87.705 & 2 & (200,140)[0.1] & 86.885 & 2\\(200,76)[0.1] & 86.066 & 67 & (200,156)[0.1] & 88.525 & 2 & (200,72)[0.1] & 86.066 & 67\\(200,136)[0.2] & 87.705 & 2 & (200,164)[0.8] & 89.344 & 2 & (200,134)[0.4] & 90.984 & 2\\(232,130)[0.1] & 81.967 & 2 & (232,72)[0.2] & 83.607 & 34 & (232,128)[0.1] & 83.607 & 2\\(232,108)[0.1] & 84.426 & 34 & (232,164)[0.2] & 85.246 & 2 & (232,136)[0.1] & 82.787 & 2\\(232,154)[0.7] & 90.984 & 2 & (232,200)[0.1] & 81.967 & 200 & (232,172)[0.7] & 88.525 & 2\\\hline
\end{tabular}
\end{adjustbox}
\caption{Effectiveness of TSCAs during training on the Monk-1 dataset}
\label{table4}
\end{center}
\end{table}
\begin{table}[htbp]
\begin{center}
\begin{adjustbox}{width=\columnwidth,center}
\begin{tabular}{|ccccccc|} \hline
Datasets & \makecell{TSCA\\ Size} & \makecell{Training \\Efficiency} &\makecell {Margin of Error\\ in Training} & \makecell{Testing\\ Efficiency} & \makecell{Margin of Error\\ in Testing} & \makecell{Proposed\\ TSCAs}\\
\hline
Monk-1 & 11 & 97.54 & 0.223 & 86.08 & 0.3112 & (76, 72)[0.1]\\
Monk-2 & 11 & 96.45 & 0.2012 & 88.22 & 0.2068 & (76, 72)[0.1]\\
Monk-3 & 11 & 98.36 & 0.123 & 94.21 & 0.2406 & (76, 72)[0.1]\\
Haberman & 9 & 80.27 & 0.4321 & 80.76 & 0.6730 & (132, 108)[0.4]\\
Heart-statlog & 16 & 99.26 & 0.541 & 92.59 & 0.7679 & (232, 154)[0.8]\\
Tic-Tac-Toe & 18 & 100 & 0 & 99.48 & 0.1679 & (140, 12)[0.9]\\
Hepatitis & 19 & 100 & 0.6089 & 97.3 & 0.7303 & (232, 172)[0.6]\\
Spect Heart & 22 & 97.33 & 0.4326 & 95.699 & 0.4133 & (172, 140)[0.3]\\
Appendicitis & 28 & 97.56 & 0.1921 & 95 & 0.7874 & (76, 72)[0.4]\\
\hline
\end{tabular}
\end{adjustbox}
\caption{Performance of the proposed pattern classifiers over various datasets}
\label{table5}
\end{center}
\end{table}
\subsection{Margin of error}
As previously stated, the classifier is a \emph{Temporally Stochastic CA}-based classifier, in which the noise rule $g$ is applied with a probability $\tau$ and the cells are thus updated stochastically. This may happen differently in different runs, resulting in varying efficiency, and these variations in classification must be recorded. To capture this information, we determine the margin of error for both the training and testing phases. A margin of error expresses the maximum expected difference between the true parameter and a sample estimate of that parameter \cite{Cochran1977}. We estimate the margin of error for sample size $m$ using Equation~\ref{mr} \cite{Cochran1977}.
\begin{align}\label{mr}
\text{Margin of Error} &= Z_{\alpha/2}\left(\frac{\sigma}{\sqrt{m}}\right)
\end{align}
We have considered $m=30$ samples for the experimentation, with $\sigma$ as the sample standard deviation of the efficiencies, computed as
\begin{align}\label{var}
\sigma &= \sqrt{\frac{\sum_{i=1}^{m}(x_i-\bar{x})^2}{m-1}}
\end{align}
Here, the efficiency of the $i^{th}$ sample is $x_i$, and $\bar{x}$ is the mean of the sample efficiencies. As we consider a confidence level of $95\%$ for our sampling experiments \cite{Cochran1977}, we set $Z_{\alpha/2} = 1.96$. Table~\ref{table5} displays the margin of error of the different classifiers in the training and testing phases. The margin of error is very small in both phases; hence the efficiency of the proposed classifier fluctuates only slightly across different runs.
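For completeness, the margin of error of Equation~\ref{mr} can be computed from the $m=30$ observed efficiencies as in the following short sketch (our own illustration; the list of efficiencies is hypothetical).
\begin{verbatim}
import math

def margin_of_error(efficiencies, z=1.96):
    # z * sigma / sqrt(m), with sigma the sample standard deviation
    # of the m observed efficiencies.
    m = len(efficiencies)
    mean = sum(efficiencies) / m
    sigma = math.sqrt(sum((x - mean) ** 2 for x in efficiencies) / (m - 1))
    return z * sigma / math.sqrt(m)

# e.g. thirty training runs of a classifier:
# margin_of_error([97.5, 97.3, 97.8, ...])
\end{verbatim}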
\subsection{Comparison}
For the study of the efficiency of the proposed two-class pattern classifier, we employ nine datasets: Monk-1, Monk-2, Monk-3, Haberman, Heart-statlog, Tic-Tac-Toe, Spect Heart, Hepatitis and Appendicitis. The datasets are preprocessed suitably to fit the input features of the classifier.
The classification accuracy of the proposed classifier is compared with different existing standard algorithms such as Bayesian, C4.5 \cite{Salzberg1994}, MLP (Multilayer Perceptron), TCC, MTSC, ASVM, LSVM, Sparse grid, Traditional CA \cite{DasMNS09} and Asynchronous CA \cite{Sethi2016}.
\begin{table}[htbp]
\begin{center}
\begin{adjustbox}{width=\columnwidth,center}
\begin{tabular}{|cccc|} \hline
Datasets & \makecell{Algorithm} & \makecell{Efficiency in \%} &\makecell {Efficiency of proposed classifier \\ with Margin of Error} \\
\hline
Monk-1 & Bayesian & 99.9&86.08 $\pm$ 0.3112 \\
&C4.5 & 100 & (TSCA(76, 72)[0.1]) \\
&TCC & 100 &\\
&MTSC & 98.65 &\\
&MLP & 100 &\\
&Traditional CA & 61.111 &\\
&Asynchronous CA & 81.519 &\\\hline
Monk-2 & Bayesian & 69.4&88.22 $\pm$ 0.2068\\
&C4.5 & 66.2 & (TSCA(76, 72)[0.1])\\
&TCC & 78.16 &\\
&MTSC & 77.32 &\\
&MLP & 75.16 &\\
&Traditional CA & 67.129 &\\
&Asynchronous CA & 73.410 &\\\hline
Monk-3 & Bayesian & 92.12&94.21 $\pm$ 0.2406\\
&C4.5 & 96.3 & (TSCA(76, 72)[0.1])\\
&TCC & 76.58 &\\
&MTSC & 97.17 &\\
&MLP & 98.10 &\\
&Traditional CA & 80.645 &\\
&Asynchronous CA & 83.749 &\\\hline
Haberman & Traditional CA & 73.499&80.76 $\pm$ 0.6730 \\
&Asynchronous CA & 77.493 & (TSCA(132, 108)[0.4])\\\hline
Spect Heart & Traditional CA & 91.978 & 95.699 $\pm$ 0.4133 \\
&Asynchronous CA & 100 & (TSCA(172, 140)[0.3])\\\hline
Tic-Tac-Toe & Sparse grid & 98.33 & 99.48 $\pm$ 0.1679 \\
&ASVM & 70.00 & (TSCA(140, 12)[0.9])\\
&LSVM & 93.330 &\\
&Traditional CA & 93.330 &\\
&Asynchronous CA & 99.721 &\\\hline
Heart-statlog&Bayesian&82.56&92.59$\pm$0.7679\\
&C4.5&80.59&(TSCA(232,154)[0.8])\\
&Logit-boost DS &82.22&\\ \hline
Hepatitis&Bayesian&84.18&97.3 $\pm$ 0.7303\\
&C4.5&82.38&(TSCA(232,172)[0.6])\\
&Logit-boost DS &81.58&\\ \hline
Appendicitis&-&-&95$\pm$ 0.7874\\
&&&(TSCA(76,72)[0.4])\\
\hline
\end{tabular}
\end{adjustbox}
\caption{Classification accuracy compared to other well-known classifiers}
\label{table6}
\end{center}
\end{table}
The performance of our proposed TSCA-based classifier is compared to that of other well-known classifiers in Table~\ref{table6}. We observe that our proposed TSCA-based two-class pattern classifier performs much better than the traditional CA-based classifier, and that it is competitive with, and in several cases better than, other well-known classification algorithms.
\section{Summary}
In this chapter, we have proposed a variant of cellular automata, termed \emph{Temporally Stochastic CA} (TSCA), in which, instead of one local rule, two rules ($f$ and $g$) are utilized: rule $f$ acts as the default rule and $g$ acts as a noise rule, applied with probability $\tau$ (the noise rate). After analyzing their dynamics, we have identified the convergent TSCAs, which have been used to design two-class pattern classifiers.
A two-class pattern classifier was developed using these TSCAs ($114$ in number, see Table~\ref{pattern1}). For a given dataset, we have chosen the TSCA with the highest efficiency to build the classifier. In comparison to existing standard algorithms, our proposed TSCA-based two-class pattern classifier offers competitive performance. To improve performance further, one may employ TSCAs with an optimal number of fixed-point attractors as two-class pattern classifiers. This will be the focus of our future work.
\chapter{Affinity Classification Problem by Stochastic Cellular Automata}
\label{chap5}
\section{Introduction}
\noindent The density classification problem is a well-known problem in cellular automata (CAs). Given an initial configuration, this problem asks to find a binary cellular automaton (CA) that converges to the all-$0$ (resp. all-$1$) configuration, a fixed point, if the number of $0$s (resp. $1$s) in the initial configuration is higher than the number of $1$s (resp. $0$s). That is, the CA reaches the all-$1$ configuration if it has an \emph{affinity} towards $1$ in its initial configuration with respect to the density of $1$ in it, and reaches all-$0$ otherwise. However, the requirement of many applications is that this density itself is to be treated as a variable -- still, a binary CA is required that can converge to the all-$1$ (resp. all-$0$) configuration. In this chapter, we introduce this problem as a generalization of the density classification problem. Formally, the problem can be stated as:\\\\
\textbf{Problem Statement:} \emph{Given an initial configuration, find a binary cellular automaton that converges to all-$1$ configuration if density of $1$s is more than $\rho$. Otherwise, it converges to all-$0$ configuration.}\\\\
Here, $\rho$ is calculated as the density of $1$s in the initial configuration, and all-$0$ and all-$1$ are the only fixed points of the CA. We name this problem the \emph{Affinity Classification Problem}, as the CA has an affection towards the all-$1$ configuration. When $\rho = 0.5$, the problem reduces to the classical density classification problem.
In the literature, several attempts have been made to solve the density classification problem. However, in \cite{PhysRevLett.74.5148}, it is proved that it is impossible to solve this problem with $100\%$ accuracy using classical CAs. Because of this, research efforts have shifted towards finding non-classical CAs which can solve the problem {\em almost} perfectly.
In \cite{Fuk05}, it is shown that the density classification task is solvable by running in sequence the trivial combination of elementary rules $184$ and $232$. This solution is extended to two dimensions by adding a stochastic component to each of these two rules in \cite{fuks2015solving}. In \cite{fates13}, a stochastic CA is used to solve the problem with arbitrary precision; in this solution, the cells of a 1-dimensional CA stochastically choose a rule in each step from a set of rules to evolve. These non-classical CAs can be named \emph{spatially} stochastic CAs. Attempts have also been made to tackle this problem with non-uniform CAs, where the cells can use different rules to evolve. A non-uniform CA that performs the density classification task best is identified in \cite{NazmaTh}. However, neither (spatially) stochastic CAs nor non-uniform CAs can perfectly solve the density classification problem, whereas the non-classical CA of Ref.~\cite{Fuk05}, which may be called a \emph{temporally} non-uniform CA, can do it perfectly.
As the affinity classification problem is an extension of the density classification problem, it is most likely unsolvable using classical CAs; we may need a non-classical CA with temporal non-uniformity and a stochastic component. Hence, to solve this problem, we introduce \emph{temporally stochastic} CAs in this work. We define our problem over two-dimensional binary CAs and use two different CA rules uniformly over the grid. The default rule is deterministic, whereas the other rule is stochastic and the time steps at which it is applied depend on some probability. Section~\ref{model} describes the proposed model. The simulation and convergence to the solution for different densities are shown in Section~\ref{simulation}. It is shown that our model is not blind, as it \emph{intelligently} decides and converges to its \emph{point of attraction}.
Finally, we show that this model has several applications, including as a model for \emph{self-healing systems} (Section~\ref{application}).
\section{The Model}\label{model}
\noindent The proposed cellular automaton is defined over a two-dimensional square grid which uses the periodic boundary condition. The CA is binary and considers Moore neighborhood dependency; that is, a cell takes any of the two states $0$ or $1$ and depends on itself and its eight nearest neighbors. At a time step $t$, a cell can be updated using one of the two rules $f$ and $g$. Here, $f$ is deterministic and the default rule for the grid, whereas $g$ is stochastic and is applied with some probability. As the CA is defined over the Moore neighborhood, both $f$ and $g$ have the same domain and range:
\[f:\{0,1\}^9 \rightarrow \{0,1\} \text{ and } g:\{0,1\}^9 \rightarrow \{0,1\} \]
Let us first discuss the default rule $f$. This rule is spatially deterministic -- at any time, it is applied over all cells uniformly. At each time step $t+1$, this rule updates the state of cell ${(i,j)}$ depending on the present states of its neighboring cells:
\begin{scriptsize}
$${(i-1,j)}, {(i-1,j-1)}, {(i,j-1)}, {(i+1,j-1)}, {(i+1,j)}, {(i+1,j+1)}, {(i,j+1)}, {(i-1,j+1)}$$
\end{scriptsize}
Let $s^t_{i,j}$ be the present state of cell ${(i,j)}$, and let $\mathscr{C}^d_{(i,j)}$ be the indicator that cell $(i,j)$ is in state $d$, i.e. $\mathscr{C}^d_{(i,j)}=1$ if $s^t_{i,j}=d$ and $0$ otherwise, where $d \in \{0, 1\}$.
Then $f$ works in the following way:
\begin{align*}
s^{t+1}_{i,j} & = f(s^t_{i-1,j}, s^t_{i-1,j-1}, s^t_{i,j-1} , s^t_{i,j} , s^t_{i+1,j-1}, s^t_{i+1,j}, s^t_{i+1,j+1} , s^t_{i,j+1} , s^t_{i-1,j+1})\\
& = \begin{cases}
0 & \text{ if } s^t_{i,j}=1 \text{ and } \sum\limits_{\substack{i-1 \le l \le i+1,\\ j-1 \le m \le j+1}}\mathscr{C}^0_{(l,m)}> K \\
1 & \text{ if } s^t_{i,j}=0 \text{ and } \sum\limits_{\substack{i-1 \le l \le i+1,\\ j-1 \le m \le j+1}}\mathscr{C}^1_{(l,m)}=8-K\\
s^t_{i,j} & \text{ otherwise }
\end{cases}
\end{align*}
where $K$ is a constant and $0\le K \le 8$.
That means, if a cell is in state $1$ and has more than $K$ neighbors in state $0$, it becomes $0$ in the next step; whereas a cell in state $0$ with $(8-K)$ or more neighbors in state $1$ becomes $1$ in the next step. This number of neighbors required for a state transition ($K$) is the \emph{first parameter} of the model.
The most significant characteristic of our model comes from the second rule $g$. As already mentioned, $g$ is a stochastic rule, that is, it is applied to each cell with some probability. Moreover, the time steps at which this rule is applied are also decided stochastically; hence we call the CA a temporally stochastic CA. However, when selected, this rule is also applied uniformly over all cells. Following is the definition of this rule:
\begin{align*}
s^{t+1}_{i,j} & = g(s^t_{i-1,j}, s^t_{i-1,j-1}, s^t_{i,j-1} , s^t_{i,j} , s^t_{i+1,j-1}, s^t_{i+1,j}, s^t_{i+1,j+1} , s^t_{i,j+1} , s^t_{i-1,j+1})\\
& = \begin{cases}
0 \text{ with probability } \phi(x)& \text{ if } s^t_{i,j}=1 \text{ and } \sum\limits_{\substack{i-1 \le l \le i+1,\\ j-1 \le m \le j+1}}\mathscr{C}^0_{(l,m)}=x \\
1 \text{ with probability } \psi(x) & \text{ if } s^t_{i,j}=0 \text{ and } \sum\limits_{\substack{i-1 \le l \le i+1,\\ j-1 \le m \le j+1}}\mathscr{C}^1_{(l,m)}=x\\
s^t_{i,j} & \text{ otherwise }
\end{cases}
\end{align*}
Here, $\phi(x)$, $\psi(x): \{0,1,\cdots, K\}\rightarrow [0,1]$ are two probability distribution functions.
We denote this $x$ as the number of \textit{supporting neighbors} or simply \emph{support}.
This rule implies that if a cell is in state $1$ and it has $x$ neighbors in state $0$, it updates its value to $0$ with probability $\phi(x)$. Similarly, if a cell is in state $0$ and it has $x$ neighbors in state $1$, it updates its value to $1$ with probability $\psi(x)$. We name $\phi(x)$ the \emph{affection probability} function and $\psi(x)$ the \emph{repulsion probability} function. These two probability distribution functions are the \emph{second} and \emph{third parameters} of our model.
However, this stochastic rule $g$ does not act in every step. When it is to be applied is decided by another probability $p$, which we name the \emph{upgrade probability}. This $p$ is the \emph{fourth} and final \emph{parameter} of our model. Hence, the parameters required by the model are --
\begin{itemize}
\item $K$ = number of neighbors required to change from one state to another
\item $\phi(x)$= affection probability function
\item $\psi(x)$ = repulsion probability function
\item $p$ = upgrade probability
\end{itemize}
Observe that, in our model, the role of $g$ is to give the cells an extra chance to change their status. If, during the evolution of the CA by $f$, some cells are left out which are \emph{eager} to update their states but cannot do so because of the surrounding neighbors (a \emph{hostile environment}), they get a \emph{booster} to upgrade their current status through $g$. This $g$ helps them achieve their desired status even if they have a smaller number of neighboring cells in their \emph{support} (as $x\le K$). But whether the cell is actually updated depends on the probability value. Cells in state $0$ and in state $1$ both get this advantage uniformly, in terms of the two probability distribution functions $\phi(x)$ and $\psi(x)$. As $g$ gives precedence to some cells, it is to be applied with caution -- hence the \emph{upgrade} probability value $p$, which works as a controlling measure. Therefore, when $K=4$, $f$ works as a simple majority rule and, depending on $g$, the system can be inclined towards a specific state.
Note that the parameters give us flexibility to design the model according to the needs of an application. For example, for the model to have an affection to converge to all-$0$ as a fixed point, we can set $\phi(x)$ and $\psi(x)$ accordingly. Similarly, we can change the value of our parameter(s) to get different versions of the model which can be used for specific purposes. In fact, we may also consider that in our model rule $g$ is applied with probability $p$ whereas the rule $f$ is applied with probability $(1-p)$, with $p$ being any probability value. This way of looking at these rules makes both of them \emph{temporally stochastic}.
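As a concrete illustration of one synchronous update of the model, consider the following Python sketch (our own simplified rendering, not the simulator used for the experiments). It assumes the periodic boundary and simply passes the neighbor count $x$ to the user-supplied functions \texttt{phi} and \texttt{psi}; how $\phi(x)$ and $\psi(x)$ behave for $x>K$ is left to the caller, since the text defines them on $\{0,\dots,K\}$.
\begin{verbatim}
import random

def count_neighbors(grid, i, j, state):
    # Number of Moore neighbors of cell (i, j) in the given state,
    # with a periodic boundary.
    n, m = len(grid), len(grid[0])
    return sum(grid[(i + di) % n][(j + dj) % m] == state
               for di in (-1, 0, 1) for dj in (-1, 0, 1)
               if not (di == 0 and dj == 0))

def step(grid, K, phi, psi, p):
    # One synchronous update of the whole grid: with probability p the
    # stochastic rule g is used, otherwise the deterministic rule f.
    use_g = random.random() < p
    n, m = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(m):
            if grid[i][j] == 1:
                x = count_neighbors(grid, i, j, 0)
                if use_g:
                    if random.random() < phi(x):
                        new[i][j] = 0
                elif x > K:                      # rule f: 1 -> 0
                    new[i][j] = 0
            else:
                x = count_neighbors(grid, i, j, 1)
                if use_g:
                    if random.random() < psi(x):
                        new[i][j] = 1
                elif x == 8 - K:                 # rule f: 0 -> 1
                    new[i][j] = 1
    return new
\end{verbatim}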
The next section shows some simulation results of our model to solve the affinity classification problem taking some specific value of the parameters.
\section{Solving Affinity Classification Problem: A Simulation}\label{simulation}
\noindent We now simulate our proposed model to understand its efficacy in solving the affinity classification problem. As mentioned before, the model is a 2-dimensional finite CA that uses the periodic boundary condition. For the simulation, we consider a grid of size $10^3\times10^3$, that is, a total of $10^6$ cells. Further, our model is characterized by four parameters -- $K$, $\phi(x)$, $\psi(x)$ and $p$. In our simulation, we have assumed the following values for the parameters:
\begin{align*}
K&=4\\
\phi(x)&=\begin{cases}
0 &\text{if } x \le 1\\
\log_K(x) &\text{if } 2\le x \le K
\end{cases}\\
\psi(x)&=\begin{cases}
0 &\text{if } x = 0\\
e^{x-K} &\text{if } 1\le x \le K
\end{cases}\\
p&=0.2
\end{align*}
As our model uses Moore neighborhood dependency on the 2-D grid, $K$ is very small ($0\le K \le 8$). Over this small range, the logarithmic function $\log_K(x)$ grows faster than the exponential function $e^{x-K}$. Therefore, since we want to observe the affinity of the model towards the all-$0$ configuration, we take $\phi(x)$ as a logarithmic function and $\psi(x)$ as an exponential function. As per our model, we use $x=0,1,\cdots,K$ to get the probability values for $\phi(x)$ and $\psi(x)$. We have plotted $\phi(x)$ and $\psi(x)$ for different $x$ to see their behavior at $K=4$ (see Figure~\ref{fig:13} and Figure~\ref{fig:24} respectively). We can observe that, at $K=4$, $\phi(1)=0.0$, $\phi(2)=0.5$, $\phi(3)=0.79248$, $\phi(4)=1.0$, whereas $\psi(1)=0.0497$, $\psi(2)=0.1353$, $\psi(3)=0.3679$, $\psi(4)=1.0$.
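These values follow directly from the two distributions; a few lines of Python (our own check, not part of the thesis experiments) reproduce them, up to rounding, for $K=4$:
\begin{verbatim}
import math

K = 4
phi = lambda x: 0.0 if x <= 1 else math.log(x, K)   # affection probability
psi = lambda x: 0.0 if x == 0 else math.exp(x - K)  # repulsion probability
for x in range(1, K + 1):
    print(x, round(phi(x), 5), round(psi(x), 4))
# 1 0.0     0.0498
# 2 0.5     0.1353
# 3 0.79248 0.3679
# 4 1.0     1.0
\end{verbatim}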
\begin{figure}[h]
\vspace{-1.5em}
\subfloat[ \label{fig:12}]{%
\includegraphics[scale = 0.20]{CAFigure/3}
}
\hfill
\subfloat[ \label{fig:13}]{%
\includegraphics[scale = 0.20]{CAFigure/4}
}
\hfill
\subfloat[ \label{fig:14}]{%
\includegraphics[scale = 0.20]{CAFigure/5}
}
\hfill
\subfloat[ \label{fig:15}]{%
\includegraphics[scale = 0.20]{CAFigure/6}
}
\caption{Graph of $\phi(x)$ for different $K$: a) $K=3$ ; b) $K=4$ ; c) $K=5$ ; d) $K=6$}
\label{fig:16}
\end{figure}
\begin{figure}[!htbp]
\vspace{-1.5em}
\subfloat[ \label{fig:23}]{%
\includegraphics[scale = 0.20]{CAFigure/psi3}
}
\hfill
\subfloat[ \label{fig:24}]{%
\includegraphics[scale = 0.20]{CAFigure/psi4}
}
\hfill
\subfloat[ \label{fig:25}]{%
\includegraphics[scale = 0.20]{CAFigure/psi5}
}
\hfill
\subfloat[ \label{fig:26}]{%
\includegraphics[scale = 0.20]{CAFigure/psi6}
}
\caption{Graph of $\psi(x)$ for different $K$: a) $K=3$ ; b) $K=4$ ; c) $K=5$ ; d) $K=6$}
\label{fig:27}
\end{figure}
We observe that both probabilities increase with the count of supporting neighbors $x$: if $x$ is increased, the probability of changing state from $1$ to $0$ increases (Figure~\ref{fig:13}), and the probability of changing state from $0$ to $1$ also increases (Figure~\ref{fig:24}), with the first function growing faster than the latter.
\subsection{Random Initial Configuration}
We have experimented with our model on a large number of random initial configurations having various $\rho$, where
$$\rho= \frac{\text{Number of 1s}}{\text{Total number of cells}} $$
Following are some sample results from our experiments when $K=4$. Here, state $0$ is represented by the color \emph{yellow} and state $1$ by the color \emph{red}.
\begin{figure}[!htbp]
\vspace{-1.5em}
\subfloat[ \label{fig:28}]{%
\includegraphics[scale = 0.20]{CAFigure/demo1}
}
\hfill
\subfloat[ \label{fig:29}]{%
\includegraphics[scale = 0.20]{CAFigure/demo2}
}
\hfill
\subfloat[ \label{fig:30}]{%
\includegraphics[scale = 0.20]{CAFigure/demo3}
}
\caption{For $K=4$ and $\rho=0.475$, the model converges after $150$ iterations: (a) Initial configuration, (b) An intermediate configuration, (c) Final configuration (all-$0$)}
\label{fig:31}
\end{figure}
\noindent Figure~\ref{fig:31} shows that, at $\rho=0.475$, for a random initial configuration, all the cells become yellow after $150$ iterations; that is, the model converges to its converging point (all-$0$). We have experimented with a large number of random initial configurations and seen that, in our experiments, when the initial configuration has $\rho \le 0.675$, the model converges to all-$0$; otherwise it converges to all-$1$.
\begin{figure}[!htbp]
\vspace{-1.5em}
\subfloat[ \label{fig:41}]{%
\includegraphics[scale = 0.20]{CAFigure/demo4}
}
\hfill
\subfloat[ \label{fig:42}]{%
\includegraphics[scale = 0.20]{CAFigure/demo5}
}
\hfill
\subfloat[ \label{fig:43}]{%
\includegraphics[scale = 0.20]{CAFigure/demo6}
}
\caption{For $K=4$ and $\rho=0.6989$, the model converges to all-$1$ after $137$ iterations: (a) Initial configuration, (b) An intermediate configuration, (c) Final configuration (all-$1$)}
\label{fig:4}
\end{figure}
Figure \ref{fig:4} shows another sample random initial configuration with an arbitrary $\rho > 0.675$ (here $\rho = 0.6989$). Here, the model converges to all-$1$ after 137 iterations.
\begin{figure}[!htbp]
\vspace{-1.5em}
\subfloat[ \label{fig:51}]{%
\includegraphics[scale = 0.20]{CAFigure/k31}
}
\hfill
\subfloat[ \label{fig:52}]{%
\includegraphics[scale = 0.20]{CAFigure/k33}
}
\hfill
\subfloat[ \label{fig:53}]{%
\includegraphics[scale = 0.20]{CAFigure/k35}
}
\hfill
\subfloat[ \label{fig:54}]{%
\includegraphics[scale = 0.20]{CAFigure/k36}
}
\caption{For $K=3$ and $\rho=0.969852$, after $3132$ iterations the model converges to all-$0$: (a) Initial configuration, (b) An intermediate configuration, (c) Another intermediate configuration, (d) Final configuration (all-$0$)}
\label{fig:5}
\end{figure}
Naturally, the question arises: \emph{``Can we increase the affection probability so that even if we take a lot of $1$s in the initial configuration, the model still converges to all-$0$?''} To answer this, we have again done a large number of experiments, varying the value of $K$. We have observed that when we decrease the value of $K$, the model converges to all-$0$ even though $\rho > 0.68$. For example, for the initial configuration of Figure~\ref{fig:5} with $K=3$, although $\rho$ is as high as $0.97$, the model converges to all-$0$. By further experimentation, we observe that if the value of $K$ is decreased to $2$, then $\rho$ can be as high as $0.99$ and the model may still converge to all-$0$. Similarly, when we increase the value of $K$, the value of $\rho$ has to be decreased for convergence to all-$0$.
\begin{table}[!h]
\centering
\vspace{-1.5em}
\resizebox{0.99\textwidth}{!}{
\begin{tabular}{cc}
\begin{tabular}{|c|c|c|c|}
\hline\hline
{\bfseries K } & {\bfseries $\rho$} & {\bfseries Number of iterations } & {\bfseries Converge to }\\
\hline
1 & 0.000002 & 1 &all-0\\
1 & 0.0002 & 2 &all-0\\
1 & 0.0051 & 2 &all-0\\
1 & 0.3 & 3 &all-0\\
1 & 0.55 & 4 &all-0\\
1 & 0.67 & 6 &all-0\\
1 & 0.6864 & 8 &all-0\\
1 & 0.943 & 21 &all-0\\
1 & 0.991 & 76 &all-0\\
1 & 0.997 & 170 &all-0\\
1 & 0.9995 & 489 &all-0\\
\hline
2 & 0.1 & 3 &all-0\\
2 & 0.3 & 4 &all-0\\
2 & 0.4 & 5 &all-0\\
2 & 0.61 & 7 &all-0\\
2 & 0.74 & 14 &all-0\\
2 & 0.8 & 16 &all-0\\
2 & 0.9536 & 90 &all-0\\
2 & 0.965 & 151 &all-0\\
2 & 0.982 & 323 &all-0\\
2 & 0.993 & 759 &all-0\\
2 & 0.995 & 3 &all-1\\
2 & 0.9995 & 2 &all-1\\
\hline
3 & 0.1 & 3 &all-0\\
3 & 0.3 & 6 &all-0\\
3 & 0.4 & 9 &all-0\\
3 & 0.55 & 20 &all-0\\
3 & 0.61 & 22 &all-0\\
3 & 0.6864 & 44 &all-0\\
3 & 0.8 & 109&all-0\\
3 & 0.943 & 518 &all-0\\
3 & 0.953 & 2042 &all-0\\
3 & 0.96 & 2372 &all-0\\
3 & 0.965 & 3220 &all-0\\
3 & 0.982 & 3 &all-1\\
3 & 0.991 & 3 &all-1\\
3 & 0.995 & 2 &all-1\\
\hline\hline
\end{tabular}
&
\begin{tabular}{|c|c|c|c|}
\hline\hline
{\bfseries K } & {\bfseries $\rho$} & {\bfseries Number of iterations } & {\bfseries Converge to }\\
\hline
4 & 0.1 & 4 &all-0\\
4 & 0.4 & 66 &all-0\\
4 & 0.55 & 805 &all-0\\
4 & 0.61 & 3074 &all-0\\
4 & 0.65 & 7019 &all-0\\
4 & 0.67 & 12385 &all-0\\
4 & 0.675 & 16186 &all-0\\
4 & 0.6864 & 261 &all-1\\
4 & 0.7 & 159 &all-1\\
4 & 0.74 & 69 &all-1\\
4 & 0.8 & 8 &all-1\\
4 & 0.943 & 4 &all-1\\
4 & 0.965 & 4 &all-1\\
4 & 0.991 & 2 &all-1\\
\hline
5 & 0.06 & 8 &all-0\\
5 & 0.08 & 14 &all-0\\
5 & 0.09512 & 8 &all-0\\
5 & 0.1 & 14 &all-0\\
5 & 0.3 & 304 &all-1\\
5 & 0.4 & 91 &all-1\\
5 & 0.55 & 12 &all-1\\
5 & 0.61 & 10 &all-1\\
5 & 0.686 & 6 &all-1\\
\hline
6 & 0.001 & 2 &all-0\\
6 & 0.0051 & 3 &all-0\\
6 & 0.00994 & 8 &all-0\\
6 & 0.03 & 638&all-1\\
6 & 0.0629 & 230 &all-1\\
6 & 0.076 & 194 &all-1\\
6 & 0.08 & 98 &all-1\\
6 & 0.1 & 58 &all-1\\
\hline
7 & 0.000002 & 2 &all-0\\
7 & 0.0002 & 2 &all-0\\
7 & 0.0004 & 2 &all-0\\
7 & 0.0005 & 976 &all-1\\
7 & 0.0009 & 375 &all-1\\
7 & 0.001 & 417 &all-1\\
7 & 0.00499 & 136 &all-1\\
7 & 0.00994 & 83 &all-1\\
7 & 0.0676 & 25 &all-1\\
7 & 0.1 & 12 &all-1\\
7 & 0.55 & 4 &all-1\\
7 & 0.95 & 2 &all-1\\
\hline\hline
\end{tabular}
\end{tabular}}
\caption{Relationship between the values of $K$ and $\rho$ where the model converges to all-$0$ or all-$1$}\label{tab1}
\end{table}
Figures~\ref{fig:16} and \ref{fig:27} show the variation of the probability distribution functions for different values of $K$: if $K$ is changed, the growth of the probability distribution functions $\phi(x)$ and $\psi(x)$ also changes with respect to $K$. Table~\ref{tab1} gives some of our experimental results. In each of the subtables of this table, columns $1$ and $2$ describe the initial configurations in terms of $K$ and $\rho$, whereas columns $3$ and $4$ show the experimental outcomes.
\begin{table}[!h]
\centering
\resizebox{0.6\textwidth}{!}{
\begin{tabular}{|c|c|c|c|}
\hline\hline
{\bfseries $K$} & {\bfseries Number of $0s$ } & {\bfseries Iterations (Time steps) }& {\bfseries Converges to }\\
\hline
1 & 1 & 1 &all-1\\
1 & 2 & 998 &all-0\\
\hline
2 & 2 & 1 &all-1\\
2 & 3 & 1152 &all-0\\
2 & 4 & 1148 &all-0\\
\hline
3 & 3 & 1 &all-1\\
3 & 4 & 3874 & all-0\\
3 & 25 & 4093 & all-0\\
\hline
4 & 49 & 64 &all-1\\
4 & 64 & 92 &all-1\\
4 & 70 & 492 &all-1\\
4 & 81 & 576 &all-1\\
4 & 100 & 15915 &all-0\\
4 & 144 & 16138 &all-0\\
\hline\hline
\end{tabular}}
\caption{Relationship between the value of $K$ and the size of the block of $0$s for which the model converges to all-$0$ or all-$1$, when the $0$s are placed sequentially in the grid}\label{tab2}
\end{table}
\subsection{Initial Configuration with Block of $0$s and $1$s}\label{sec:block}
The previous subsection shows the results when the $0$s and $1$s in the initial configuration are randomly organized. Now, we experiment with initial configurations where blocks of cells are set to the same value. Tables~\ref{tab2} and \ref{tab3} depict our sample results. In Table~\ref{tab2}, we consider initial configurations with a small number of consecutive cells in state $0$ and the remaining cells in state $1$. For each value of $K$ ($1 \le K \le 7$), column 2 shows the number of consecutive cells having the same value in our experiments, such that the model converges to all-$0$. For instance, when $K=4$, an initial configuration having a block of $100$ consecutive $0$s converges the model to all-$0$. These consecutive $0$s form a cluster which grows in size and converges the model to all-$0$.
\begin{figure}[!h]
\vspace{-1.5em}
\subfloat[ \label{fig:17}]{%
\includegraphics[scale = 0.20]{CAFigure/K3Block25_1}
}
\hfill
\subfloat[ \label{fig:18}]{%
\includegraphics[scale = 0.2]{CAFigure/K3Block25_2}
}
\hfill
\subfloat[ \label{fig:19}]{%
\includegraphics[scale = 0.2]{CAFigure/K3Block25_3}
}
\hfill
\subfloat[ \label{fig:20}]{%
\includegraphics[scale = 0.2]{CAFigure/K3Block25_4}
}
\caption{For $K=3$ and a block of $25$ $0$s, the model converges to all-$0$ after $4093$ iterations: (a) Initial configuration, (b) An intermediate configuration, (c) Another intermediate configuration, (d) Final configuration (all-$0$)}
\label{fig:21}
\end{figure}
Figure~\ref{fig:21} shows a random initial configuration with $10^6$ cells, where only $25$ consecutive cells are in state $0$ and the value of $K$ is $3$. We can observe that, although the number of $0$s is very small, the model still converges to all-$0$ after $4093$ iterations (see Figure~\ref{fig:20}). Therefore, our model has an affinity to converge to all-$0$ even if, in the initial configuration, the number of $0$s is very small in comparison to the grid size.
However, the model does not always converge to the desired fixed point. For example, at $K=4$, for an initial configuration having a block of $0$s of size $81$, the model converges to all-$1$ (Table~\ref{tab2}).
\begin{table}[!htbp]
\centering
\resizebox{0.6\textwidth}{!}{
\begin{tabular}{|l|l|l|l|}
\hline\hline
{\bfseries $K$ } & {\bfseries Number of $1$'s }& {\bfseries Iterations (Time steps) } & {\bfseries Attractor }\\
\hline
5 & 25 & 24 &all-0\\
5 & 225 & 212 &all-0\\
5 & 256 & 483 &all-0\\
5 & 324 & 12148 &all-1\\
5 & 400 & 11870 &all-1\\
\hline
6 & 2 & 1 &all-0\\
6 & 4 & 13 &all-0\\
6 & 5 & 1519 &all-1\\
6 & 6 & 1510 &all-1\\
6 & 8 & 1507 &all-1\\
6 & 9 & 1493 &all-1\\
\hline
7 & 1 & 1 &all-0\\
7 & 2 & 979 &all-1\\
\hline\hline
\end{tabular}}
\caption{Relationship between the value of $K$ and the size of the block of $1$s for which the model converges to all-$0$ or all-$1$, when the $1$s are placed sequentially in the grid}\label{tab3}
\end{table}
Table~\ref{tab3} depicts some sample results from our experiments where we take a small number of $1$s organized in sequential order, that is, forming a cluster. Here, we can see that, for $K>4$, the model sometimes converges to all-$1$ even when the number of $1$s is very small.
Therefore, even if the model has an affection to converge to all-$0$, the value of $K$ plays a major role in the direction (all-$0$ or all-$1$) in which the model converges.
\section{Applications}\label{application}
\noindent As discussed in Section~\ref{model}, the parameters give us flexibility to design our model according to the need of the solution to a particular problem. There are several possible applications of our model. Here we discuss some of them.
\subsection{Modeling Self-healing Systems}\label{sec:application}
Living systems are assumed to be more intelligent than non-living systems. Therefore, to be intelligent, a machine (a non-living system) has to emulate the properties of living systems. Among these properties, self-healing is a basic and important biological property which indicates a sign of life.
Self-healing is the ability to reorganize and heal itself.
If a machine has the self-healing ability, it is likely to mimic other properties of living elements, like self-replication; hence it will be more intelligent, just like a living system. We can show that our proposed CA can be used to model any self-healing system, where the parameters of our abstract model can be interpreted as the characteristics of the self-healing system.
Let us interpret our model as follows. Let the grid of cells embody a collection of living elements (they can be cells, humans, animals -- anything), where state $0$ means the element is healthy and $1$ means it is sick. We want to model how much infection the cells can endure and still heal.
By default, the living system is healthy, that is, all cells are in state $0$. Now, suppose that because of some change in the environment, a number of cells get infected and update their states to $1$ (become sick). This is our initial configuration in the model, and we start to observe the dynamics of the system from here. Let us consider that, in our model, the system's immunity is the immunity of the individual cells, and as a whole the system's \emph{health} is the \emph{majority} of the \emph{individual} cells' health conditions. So, at the initial configuration, if we ask the system, ``\emph{Are you sick?}'', it can answer ``Yes'' or ``No'' depending on the density of $1$s ($\rho$). If, using this model, the system can \emph{heal} itself, that is, come back to all-$0$, then we can call the model a model for self-healing systems. At that time, the answer to ``\emph{Are you sick?}'' will always be ``No''. Therefore, our target is to converge the grid to all-$0$ so that we can say there is no infection and ``\emph{The model is {Not Sick}}''. However, if the model converges to all-$1$, then we have to declare, ``\emph{The model is {Sick}}''.
Now, any living body has some inbuilt immunity status. This immunity is represented by the first parameter $K$. Just as immunity differs between elements, $K$ itself is a variable. When $K=4$, the system can be interpreted as being in a situation of natural immunity with no prevailing sickness. The deterministic rule $f$ plays the role of the natural healing process based on the immunity $K$. Results from Section~\ref{simulation} show that, if the converging point is set to all-$0$ and $K\le4$, then there is a tendency to converge towards all-$0$ even if, in the initial configuration, the number of $1$s exceeds the number of $0$s. This indicates that, like any living body, our model also wants to become \textit{Not Sick}.
\begin{figure}[!h]
\vspace{-1.5em}
\subfloat[ \label{fig:h3}]{%
\includegraphics[scale = 0.2]{CAFigure/s1}
}
\hfill
\subfloat[ \label{fig:h4}]{%
\includegraphics[scale = 0.2]{CAFigure/s2}
}
\hfill
\subfloat[ \label{fig:h5}]{%
\includegraphics[scale = 0.2]{CAFigure/s3}
}
\hfill
\subfloat[ \label{fig:h6}]{%
\includegraphics[scale = 0.2]{CAFigure/s4}
}
\caption{(a) Initial configuration of a sick model; (b) and (c) two intermediate configurations during the evolution; (d) the model is healed}
\label{fig:h7}
\end{figure}
However, even in this condition, if the number of infected cells becomes too large ($\rho$ is high), then, according to our rule, the system is \emph{sick}; the inherent immunity is not enough to restore its health. For example, if we take a random initial configuration with some infected cells (cell state $1$) where $\rho=0.632275$ ($\rho$ = density of $1$s) and $K=4$, then at this stage the model is \textit{Sick} (see Figure~\ref{fig:h3}). At this point, the cells are given some \emph{booster} to improve their immunity in terms of $g$. Here, $g$ may be considered a \emph{vaccine} for the infection, as if it can bypass the \emph{natural justice} process, giving the cells a second chance to live.
But whether the vaccine will be effective for a cell is not deterministic (so $g$ is stochastic). Further, when this vaccine is to be applied to the system is also not pre-determined (a temporally stochastic CA with probability $p$).
Nevertheless, the vaccine does not react similarly for every cell. A large number of sick cells with a favorable environment may become healthy ($\phi(x)$), whereas some healthy cells with an unhealthy environment can become sick ($\psi(x)$). But if we take $\phi(x)$ as logarithmic and $\psi(x)$ as exponential, as defined in Section~\ref{simulation}, then by choosing $K$ and $x$ we can see that, after some iterations, the model converges to all-$0$ (see Table~\ref{tab1}). Then we can say the model is \textit{Not Sick} (Figure~\ref{fig:h6}).
However, if we increase the $\rho$ value further (say, from $0.675$ to $0.68$ or more), then for the same $K$ and $x$ the model may converge to all-$1$ (see Table~\ref{tab1}) and the model becomes \textit{Sick}. Therefore, the roles of $K$ and $x$ are very important in modeling self-healing systems. If we want our system to have a larger tendency to heal, then we need to choose the parameters of our model wisely. Moreover, if the affection probability $\phi(x)$ is large, then the system has a greater tendency to heal.
This probability indicates the ability to repair or heal oneself automatically and evolve oneself according to the demand of the environment.
This is how self-healing is modeled by our CA. It also shows that our abstract model can be a good interpretation of the role of vaccination in a living population. Also, observe that our proposed model takes the global decision democratically, where every single cell takes its own decision and the system comes to a consensus. Because of these properties, we claim that our proposed model is intelligent.
\subsection{Modeling Transformation Process}\label{sec:intelligence}
In nature and in the chemical world, we get glimpses of several transformation processes -- water evaporates into vapor, a drop of color in a glass of liquid dissolves, giving the whole glass of liquid a lighter shade of that color. All these processes happen while conserving mass and energy. This section shows that our CA can be used to model such transformation processes.
During the process of transformation, the particles are divided into smaller-sized particles and dissolve until the system comes to an equilibrium. In our model, if we set $K=3$ (and keep the other parameters the same as in Section~\ref{simulation}), then, for some special initial configurations, the evolution of the CA looks like a transformation process -- the configuration is divided into two or more smaller configurations. It goes on dividing and dissolving until the system converges to a fixed point, which signifies the equilibrium state. For example, in Figure~\ref{fig:7},
\begin{figure}[!htbp]
\vspace{-1.5em}
\subfloat[$t=0$ \label{fig:s1}]{%
\includegraphics[scale = 0.15]{CAFigure/self1}
}
\hfill
\subfloat[$t=42$ \label{fig:s2}]{%
\includegraphics[scale = 0.15]{CAFigure/self22}
}
\hfill
\subfloat[ $t=78$ \label{fig:s3}]{%
\includegraphics[scale = 0.15]{CAFigure/self23}
}
\hfill
\subfloat[$t=108$ \label{fig:s4}]{%
\includegraphics[scale = 0.15]{CAFigure/self24}
}
\hfill
\subfloat[ $t=151$ \label{fig:s5}]{%
\includegraphics[scale = 0.15]{CAFigure/s5}
}
\hfill
\subfloat[$t=180$ \label{fig:s6}]{%
\includegraphics[scale = 0.15]{CAFigure/s6}
}
\hfill
\subfloat[$t=198$ \label{fig:s7}]{%
\includegraphics[scale = 0.15]{CAFigure/s7}
}
\caption{Simulation of a transformation process}
\label{fig:7}
\end{figure}
an initial configuration is shown which, after some iterations, is divided into more than three configurations. These keep getting smaller until the CA converges to the fixed point all-$0$, when the system has reached its equilibrium. Hence, we can say that, by varying the parameters of our model, we can simulate the transformation process from one system to another with our CA.
\subsection{Density Classification Problem}
The \emph{density classification problem} can also be addressed by our model.
According to its definition, the \emph{affinity classification problem} reduces to the former if we take the density of $1$s as $\rho=0.5$.
However, instead of taking $\rho$ as exactly $0.5$, we here take it as a variable and see how close we can get to solving this classical problem using our model.
Previous works have established that the density classification problem is not solvable by spatially stochastic CAs (uniform or non-uniform), but can be solved using temporally non-uniform CAs \cite{Fuk05,fuks2015solving}. So, we also take our CA as temporally non-uniform with a stochastic component ($g$), which perfectly fits our model. However, a property of this problem is that there is no affinity towards any state at any time. Hence, to make the system unbiased, we take the number of neighbors required to change from one state to another ($K$) as $4$. Also, we choose both probability distribution functions $\phi(x)$ and $\psi(x)$ to be the same; that is, a cell in state $1$ with $x$ neighbors in state $0$ updates its value to $0$ with the same probability distribution function with which a cell in state $0$ with $x$ neighbors in state $1$ updates its value to $1$. Further, we consider the upgrade probability value $p=0.1$, so that the stochastic component ($g$) is applied with very low probability.
Here, we show simulation results for two different probability distribution functions -- linear and exponential.
For the first case, the values of the parameters of the model are:
\begin{align*}
K&=4\\
\phi(x)&= \frac{x}{K} \qquad \text{for } 0 \le x \le K\\
\psi(x)&= \frac{x}{K} \qquad \text{for } 0 \le x \le K\\
p&=0.1
\end{align*}
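Under these settings, the only change with respect to the simulation sketch given in Section~\ref{model} is the pair of distributions, e.g. (our own illustration):
\begin{verbatim}
K, p = 4, 0.1
phi = lambda x: x / K        # linear affection probability
psi = lambda x: x / K        # identical linear repulsion probability
# grid = step(grid, K, phi, psi, p)   # reusing the step() sketch above
\end{verbatim}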
We have carried out extensive experimentation with this model on random initial configurations over a $200 \times 200$ grid. Some sample simulation results are shown in Table~\ref{tab4}. Here, the first column indicates some $\rho$ values, whereas the third and fourth columns report, for each of these $\rho$, how many out of $100$ experiments converged to all-$0$ and to all-$1$ respectively. In our experiments, we observe that when the initial configuration is taken randomly, then for $\rho \le 0.4647$ or $\rho \ge 0.54$ our model converges to its fixed point (all-$0$ and all-$1$ respectively).
\begin{table}[h]
\centering
\resizebox{0.9\textwidth}{!}{
\begin{tabular}{|c|c|c|c|} \hline \hline
$\rho$ (density of $1$s) & Number of experiments & Converge to all-$0$ & Converge to all-$1$\\ \hline
$\le$ 0.4647 & 100&100&0 \\
0.4779710& 100 & 96&4 \\
0.49112875& 100 & 73&27 \\
0.5036035& 100 & 37 & 63 \\
0.51548925 & 100 & 9 & 91\\
0.52758225 & 100 & 5 & 95 \\
0.5394037& 100 & 1 & 99\\
$\ge$ 0.54 & 100 & 0 & 100 \\
\hline \hline
\end{tabular}}
\caption{Results on a 2-D square grid ($200 \times 200$) with both $\phi$ and $\psi$ as linear functions}\label{tab4}
\end{table}
For the second case, we take both $\phi$ and $\psi$ as exponential functions with $K=4$ and $p=0.1$. Hence, the changed parameters of the model are:
\begin{align*}
\phi(x)&=\begin{cases}
0 &\text{if } x =0\\
e^{x-K} &\text{for } 1 \le x \le K\\
\end{cases}\\
\psi(x)&=\begin{cases}
0 &\text{if } x =0\\
e^{x-K} &\text{for } 1 \le x \le K\\
\end{cases}
\end{align*}
We again repeat our experiments with a large set of random initial configurations over a $100 \times 100$ grid. Table~\ref{tab5} shows some sample results of this experiment.
\begin{table}[h]
\centering
\resizebox{0.9\textwidth}{!}{
\begin{tabular}{|c|c|c|c|} \hline \hline
$\rho$ (density of $1$s) & Number of experiments & Converge to all-$0$ & Converge to all-$1$\\ \hline
$\le$ 0.4679 & 100 & 100 & 0 \\
0.47837 & 100 & 96 & 4 \\
0.513014 & 100 & 30 & 70 \\
0.5181367 & 100 & 0 & 100 \\
$\ge$ 0.520 & 100 & 0 & 100 \\
\hline \hline
\end{tabular} }
\caption{Results on a 2-D square grid of size $100 \times 100$ with both $\phi$ and $\psi$ as exponential functions}\label{tab5}
\end{table}
\begin{figure}[!hbtp]
\vspace{-1.5em}
\subfloat[\label{fig:10}]{%
\includegraphics[scale = 0.2]{CAFigure/unsolved_pattern}
}
\hfill
\subfloat[\label{fig:11}]{%
\includegraphics[scale = 0.2]{CAFigure/unsolved_pattern2}
}
\caption{Some configurations on the 2-D square grid for which the model fails to solve the density classification problem}
\end{figure}
Here also we observe that, when the initial configuration is random, then for $\rho \le 0.4679$ or $\rho \ge 0.520$ the model reaches its desired fixed point (all-$0$ or all-$1$). However, when the configuration contains blocks of $0$s or $1$s forming a cluster, it can fail to reach the desired fixed point. Figures~\ref{fig:10} and \ref{fig:11} show two such patterns for which the model cannot reach its fixed point (see Section~\ref{sec:block} for more details).
\section{Summary}
\label{future}
\noindent There are several properties of living systems that make them intelligent -- affection is one of them. In this work, we propose a new problem, named the \emph{affinity classification problem}. We develop a dedicated machine, embedded in a 2-dimensional cellular automaton having Moore neighborhood dependency and the periodic boundary condition. Our model has an affection towards a converging point, all-$1$ or all-$0$, and can be characterized by the four parameters $K, \phi(x), \psi(x)$ and $p$.
Using this model, we can develop a self-healing system. We know that, because of self-healing, a species can survive through evolution. As our model has this feature and takes its decisions democratically, we can say that the model acts like a natural living system to some extent, and we can conclude that the model becomes intelligent.
However, there are some other properties of life which an intelligent machine needs to possess; we have to see whether our model possesses them. Similarly, we have considered only the Moore neighborhood here; what kind of behavior might arise if we change the neighborhood dependency of the rules remains to be seen.
Other behaviors might emerge by varying the parameters of our model. And, apart from self-healing systems, our model may be useful in several other areas of application. Answers to these questions remain work for the future.
\chapter{Searching with Cellular Automata on Cayley Tree}
\label{chap6}
\section{Introduction}
In a traditional Cellular Automaton (CA), the components are a lattice of cells and a local rule. The system operates in discrete time and space, and each cell generates its next state using the same rule. The CA has no memory; a cell switches to its next state depending only on the current states of its neighbors. Cellular Automata have been used in different domains, including biology, image processing, encryption, physics, machine learning~\cite{BhattacharjeeNR16,Sethi2016}, etc.
A variant of Cellular Automata (CAs) in which each cell has an attached memory and an additional processing unit has been proposed in~\cite{Das2022}. Our proposed model, similar to this variant of CA, is developed over a Cayley tree~\cite{CayLeyid01,CayLeyid02} of order $\eta$, where each node represents a cell of the CA and a memory unit is attached to each cell. This is introduced here to efficiently solve the \emph{Searching problem} with in-memory computation.
Many algorithms have already been developed using CAs~\cite{parallelsorting} to solve computational problems. It is worth emphasizing the importance of Cellular Automata as a computational tool for effectively solving problems involving large amounts of evenly distributed data. To solve such problems, the elements are often exchanged in the form of cell states. In this study, however, no element exchange is carried out.
We consider a finite CA, where the data elements ($X$) are distributed over the cells' memory. The CA solves the Searching problem, which asks to decide whether a given element (key) exists in a finite set of natural numbers ($X$). If the key $k \in X$, the CA concludes with the positive output \emph{Found}; otherwise, it concludes with the negative output \emph{Not Found}.
In Section~\ref{model}, details of the proposed model are reported. The Cayley tree is introduced in Section~\ref{cayley}. The realization of \emph{In-Memory Searching} is reported in Section~\ref{in-memory}.
\section{Cellular Automata over Cayley Tree}\label{cayley}
A Cellular Automaton (CA) is a discrete, abstract computational model that consists of a regular network of finite-state automata, formally known as cells. This section introduces the Cayley tree; the CA is developed over the Cayley tree, where each cell of the CA is represented as a vertex of the tree.
\begin{definition}
A Cayley tree is a tree in which each non-leaf vertex has a constant number of branches $\eta$ (the order of the tree). The Cayley tree $\kappa^\eta$ of order $\eta \geq 1$ is an infinite tree, from each vertex of which there are exactly $\eta+1$ edges. Formally, $\kappa^\eta = (V, E, \nu)$, where $V$ is the set of vertices of $\kappa^\eta$, $E$ is the set of edges and $\nu$ is the incidence function associating each edge $e\in E$ with its endpoints $v_{neighbors} \in V$.
\end{definition}
\begin{figure}[!ht]
\subfloat[]{
\begin{minipage}[c][1\width]{
0.5\textwidth}
\label{fig1:a}
\centering
\includegraphics[width=1\textwidth]{CAFigure/CayLey0}
\end{minipage}}
\hfill
\subfloat[]{
\begin{minipage}[c][1\width]{
0.5\textwidth}
\label{fig1:b}
\centering
\includegraphics[width=1\textwidth]{CAFigure/nei}
\end{minipage}}
\caption{Cayley tree: (a) height $h=7$, (b) current cell and its neighbors}
\label{fig:CayLAyTree}
\end{figure}
In the current work, we consider the Cayley tree of height $h$ with a finite number of cells. The distances from the root node to all the leaf nodes (boundary cells) are equal. The number of nodes in the tree is
\begin{align}\label{eq1}
n &=\begin{cases}
h &\text{for } h \le 1 \\
1 + \sum_{i=2}^{h} \eta^{i-2}(\eta+1)&\text{Otherwise}
\end{cases}
\end{align}
In Fig.~\ref{fig:CayLAyTree}, the Cayley tree is of order $\eta=2$ and height $h=7$, where each vertex is denoted as a cell. The total number of cells in the tree is $n=190$ (following Eq.~\ref{eq1}). The nearest neighborhood comprises three cells. The root node (cell), marked in \emph{blue}, is placed in the center of the tree. The neighborhood is depicted in Fig.~\ref{fig1:b}, where each cell has three neighbors (for $\eta=2$). The root has no parent cell; instead, it has three children. This model follows null boundary conditions; that is, the missing neighbors of a leaf node are assumed to be in state $0$.
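As a quick sanity check of Eq.~\ref{eq1}, the node count can be reproduced with a few lines of Python; the function name below is ours and is used only for illustration.
\begin{verbatim}
# Sketch: number of cells in a Cayley tree of order eta and height h (Eq. 1).
def cayley_tree_size(eta, h):
    if h <= 1:
        return h
    return 1 + sum(eta ** (i - 2) * (eta + 1) for i in range(2, h + 1))

print(cayley_tree_size(2, 7))   # 190, as quoted for eta = 2, h = 7
print(cayley_tree_size(2, 4))   # 22, the tree used later in the worked example
\end{verbatim}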
\section{Proposed Computational Model}\label{model}
The computational model reported in Section~\ref{cayley} is similar to a cellular automaton, where each cell uses a local rule $(f)$ to switch to its next state depending on the present states of its neighbors. Unlike classical CAs, however, a cell uses an additional function, say $g$, that operates on the previous states stored in the memory.
Let us first define $f$. This is the local rule of the CA, which decides the next state of a cell. The rule is deterministic and, at any point of time, it is applied to all the cells uniformly. At each time step $t + 1$, this rule updates the state of a cell depending on the present states of its neighboring cells and its internal state -- that is, the states of its parent and children, and the internal state determined by $g$.
The function (rule) $g$ is defined in such a way that it has more computational power than $f$. Rule $g$ generates the internal state $s_{internal}$ based on the content of the memory element and the current state of the cell.
\begin{figure}[h]
\includegraphics[width=\linewidth]{CAFigure/model2}\centering
\caption{ A typical cell}
\label{fig:CaModel}
\end{figure}
The schematic of a typical cell is shown in Fig.~\ref{fig:CaModel}, where $s^t$ is the current state of the cell under consideration, $s^t_1 ,\cdots, s^t_{m-1}$ are the present states of its neighbors, $s^t_{internal}$ is the internal state and $m$ is the number of neighbors. Since $g$ has to be computationally more powerful than $f$, $g$ is allowed to use the memory of the cell. Let $\mathcal{S}$ be the set of states that a cell uses and $\Psi$ be the set of memory symbols; then
$g : \mathcal{S} \times \Psi \rightarrow \mathcal{S} \times \Psi$.
Here $f$ is a function of the present states of the $m$ neighbors of the cell and of an internal state generated by $g$. Thus,
\begin{align}\label{eq4}
f&:\mathcal{S}^{m+1} \rightarrow \mathcal{S}
\end{align}
Apart from the neighbors' present states, the internal state is taken into account by $f$ to generate the cell's next state. That is, a cell uses a finite memory which is accessed and modified by $g$.
The proposed model is defined over the Cayley tree, so the parent and children of a node are its neighbors. Here, the rule $f$ depends on the parent cell, the cell itself, the internal state generated by $g$, and the children:
\begin{align}\label{eq5}
s^{t+1}_{self} &= f(s^t_{parent}, s^t_{self}, s^t_{internal},s^t_{child_1},s^t_{child_2},\cdots,s^t_{child_\eta})\\
s^t_{internal} &= g(s^t_{self})
\end{align}
Eq.~\ref{eq5} shows that, for each cell, the next state of the cell at time $t+1$ ($s^{t+1}_{self}$) depends on the cell itself ($s^{t}_{self}$), its corresponding neighbors ($s^t_{parent},s^t_{child_1},s^t_{child_2},\cdots,s^t_{child_\eta}$), where $\eta$ is the order of the tree, and an internal state ($s^t_{internal}$) at time $t$. However, the internal state changes according to the cell's updated memory element, and the update of the memory element depends on the current state of the cell ($s^t_{self}$). Note that the number of children varies for different orders ($\eta$) of the tree. For example, if the order of the tree is $2$ ($\eta=2$), then the model considers the states of two children ($s^t_{child_1},s^t_{child_2}$). For the root, there is no parent, but the number of children is $\eta+1$. For the leaves, on the other hand, there is no child.
A cell is also considered a neighbor of itself, so $f$ is a function of the cell's own state together with the internal state. To define configurations of this automaton, we need to take the set of memory elements into consideration along with the set of states. Hence, here a configuration is an assignment $ c : \mathcal{L} \rightarrow \mathcal{S} \times \Psi$. Let us define the observable configuration ($c_\pi$) of a configuration $c$ as $c_\pi\: :\: \mathcal{L} \rightarrow \mathcal{S}$. During computation, the observable change occurs in the configuration ($c$). At each time $t \in \mathbb{N}$, a cell is assigned an observable state from the set $ \mathcal{S}=\{0,1,2\}$. Rules $f$ and $g$ are defined below; in a traditional CA, the additional function $g$ is void and $f$ alone takes the role of computation.
\begin{figure}[!htbp]
\subfloat[]{
\begin{minipage}[c][1\width]{
0.24\textwidth}
\label{sim:a}
\centering
\includegraphics[width=1\textwidth]{CAFigure/tree/1}
\end{minipage}}
\hfill
\subfloat[]{
\begin{minipage}[c][1\width]{
0.24\textwidth}
\label{sim:b}
\centering
\includegraphics[width=1\textwidth]{CAFigure/tree/2}
\end{minipage}}
\hfill
\subfloat[]{
\begin{minipage}[c][1\width]{
0.24\textwidth}
\label{sim:c}
\centering
\includegraphics[width=1\textwidth]{CAFigure/tree/3}
\end{minipage}}
\hfill
\subfloat[]{
\begin{minipage}[c][1\width]{
0.24\textwidth}
\label{sim:d}
\centering
\includegraphics[width=1\textwidth]{CAFigure/tree/4}
\end{minipage}}
\vfill
\subfloat[]{
\begin{minipage}[c][1\width]{
0.24\textwidth}
\label{sim:e}
\centering
\includegraphics[width=1\textwidth]{CAFigure/tree/5}
\end{minipage}}
\hfill
\subfloat[]{
\begin{minipage}[c][1\width]{
0.24\textwidth}
\label{sim:f}
\centering
\includegraphics[width=1\textwidth]{CAFigure/tree/6}
\end{minipage}}
\hfill
\subfloat[]{
\begin{minipage}[c][1\width]{
0.24\textwidth}
\label{sim:g}
\centering
\includegraphics[width=1\textwidth]{CAFigure/tree/7}
\end{minipage}}
\hfill
\subfloat[]{
\begin{minipage}[c][1\width]{
0.24\textwidth}
\label{sim:h}
\centering
\includegraphics[width=1\textwidth]{CAFigure/tree/8}
\end{minipage}}
\vfill \subfloat[]{
\begin{minipage}[c][1\width]{
0.24\textwidth}
\label{sim:i}
\centering
\includegraphics[width=1\textwidth]{CAFigure/tree/9}
\end{minipage}}
\hfill
\subfloat[]{
\begin{minipage}[c][1\width]{
0.24\textwidth}
\label{sim:j}
\centering
\includegraphics[width=1\textwidth]{CAFigure/tree/11}
\end{minipage}}
\hfill
\subfloat[]{
\begin{minipage}[c][1\width]{
0.24\textwidth}
\label{sim:k}
\centering
\includegraphics[width=1\textwidth]{CAFigure/tree/12}
\end{minipage}}
\hfill
\subfloat[]{
\begin{minipage}[c][1\width]{
0.24\textwidth}
\label{sim:l}
\centering
\includegraphics[width=1\textwidth]{CAFigure/tree/13}
\end{minipage}}
\vfill
\subfloat[]{
\begin{minipage}[c][1\width]{
0.24\textwidth}
\label{sim:m}
\centering
\includegraphics[width=1\textwidth]{CAFigure/tree/14}
\end{minipage}}
\hfill
\subfloat[]{
\begin{minipage}[c][1\width]{
0.24\textwidth}
\label{sim:n}
\centering
\includegraphics[width=1\textwidth]{CAFigure/tree/15}
\end{minipage}}
\hfill
\subfloat[]{
\begin{minipage}[c][1\width]{
0.24\textwidth}
\label{sim:o}
\centering
\includegraphics[width=1\textwidth]{CAFigure/tree/16}
\end{minipage}}
\hfill
\subfloat[]{
\begin{minipage}[c][1\width]{
0.24\textwidth}
\label{sim:p}
\centering
\includegraphics[width=1\textwidth]{CAFigure/tree/17}
\end{minipage}}
\caption{Simulation of Searching, where \emph{red} = state $2$; \emph{blue} = state $1$; \emph{white} = state $0$. (a) Initial Configuration, (p) Final Configuration.}
\label{fig:simulation}
\end{figure}
An initial configuration is given and, after the computation, it converges to some fixed point. The output is read from that configuration, which is reachable from the initial configuration. In the initial configuration, the cells are assigned some states, and the memory of each cell is assigned some memory symbols. Hence, the rule $f$ of the cells apparently plays the decisive role in the computation, but the function $g$ plays a crucial role in realizing a fruitful computation on the internal memory.
\noindent
\section{In-Memory Searching}\label{in-memory}
\noindent In \emph{In-Memory Computing}, the goal is to reduce the workload of the CPU: the data-intensive computations are run in memory. In this section, we report the realization of \emph{In-Memory Computing} around the computing model over the Cayley tree introduced in Section~\ref{model}.
\subsection{Overview}\label{overview}
For searching, the key $k$ is stored at the root of the Cayley tree. The elements are distributed over the nodes' memory (the CA cells' memory). The tree is constructed in such a way that the distance between the root node and each leaf is equal. There should be more nodes (cells) in the tree than the number of elements to be dealt with. The elements are scattered randomly throughout the cells' memory. The key is stored in the memory of the root, and $0$ is placed in the memory of the remaining cells. The memory unit of a node can retain only one element. Initially, the state of the root is set to $1$ and the states of the other cells are set to $0$; the internal state of a cell is set to $0$ if its memory contains $0$, and to $1$ otherwise (this is considered the \emph{initial configuration} of the model).
The search is composed of two modules:
\begin{itemize}
\item[] A. Computation at the root node:
The key $k$ is decremented by $1$ at the root. The root then sends $1$ to its children and sets its next state to $0$ if the updated $k$ is $0$; otherwise it is set to $1$. This process is repeated until $k$ becomes $0$. However, if the state of any child is $2$, then the next state of the root node switches to $2$; otherwise, it remains $0$.
\item[] B. Computation at other nodes:
At every time step, when the state of a node is $1$ (respectively $2$), the node transmits its present state to its children (respectively parent). It then decrements the memory content (element) if the present state is $1$. The next state of the node depends on its nearest neighbors:
\begin{itemize}
\item[]Case 1$-$ If the state of any child is $2$, then the next state of the node is set to $2$; otherwise, it follows the parent's state (see \emph{Case 2}).
\item[]Case 2$-$ The next state of the node is set to $1$ if the parent's state is $1$; otherwise, it depends on the memory element (see \emph{Case 3}).
\item[]Case 3$-$ If the updated memory contains $0$ (the internal state is set to $0$), then the next state of the node switches to $2$; otherwise the next state switches to $0$.
\item[]Case 4$-$ If none of the above criteria match, then the next state remains the same as the present state.
\end{itemize}
The CA converges when the states of all the nodes are $0$ and the root node is either in state $0$ or $2$. If the root node is in state $2$, the execution concludes with the positive result \emph{``Found''}; otherwise, the execution ends with the negative output \emph{``Not Found''}.
\end{itemize}
The memory modification is done by $g$ as:
\begin{align}\label{eq6}
\text{Memory Element}=\begin{cases}
\text{Memory Element}-1 & \text{if } s^t_{self} = 1 \\
\text{Memory Element} & \text{otherwise}
\end{cases}
\end{align}
\begin{algorithm}[h]
\caption{Rule $f$ for the root node:}
\begin{algorithmic}
\State Taking states at time $t$ : $s^t_{self}, s^t_{internal}\: and\: s^t_{child_1},s^t_{child_2},\cdots,s^t_{child_\eta}$ as \textbf{input}
\If{state of any child is $2$}
\State $NS \gets 2$
\ElsIf{$s^t_{self}==2$}
\State $NS \gets 2$
\ElsIf{$ s^t_{self}==1$}
\State Compute $g$
\If {$s^t_{internal}== 1$}
\State $NS \gets 1$
\Else
\State $NS \gets 0$
\EndIf
\Else
\State $NS \gets s^t_{self}$
\EndIf
\State $s^{t+1}_{self} \gets NS$
\State \textbf{Output : }$s^{t+1}_{self}$
\end{algorithmic}
\end{algorithm}
The internal state modification is done by $g$ as:
\begin{align}\label{eq7}
s^t_{internal}&=\begin{cases}
0 &\text{if Memory Element} = 0 \\
1 &\text{Otherwise}
\end{cases}
\end{align}
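For illustration only, Eqs.~(\ref{eq6}) and (\ref{eq7}) can be combined into a single Python helper; the name \texttt{g} mirrors the rule, everything else is our own sketch.
\begin{verbatim}
# Sketch of rule g: decrement the memory element while the cell is in
# state 1 (Eq. 6) and expose the internal state derived from it (Eq. 7).
def g(memory, state):
    if state == 1:
        memory -= 1
    internal = 0 if memory == 0 else 1
    return memory, internal
\end{verbatim}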
\begin{algorithm}[h]
\caption{Rule $f$ for other nodes:}
\begin{algorithmic}
\State Taking states at time $t$ : $s^t_{parent}, s^t_{self}, s^t_{internal} \:\: \text{ and } \:\: s^t_{child_1},s^t_{child_2},\cdots,s^t_{child_\eta}$ as \textbf{input}
\If{state of any child is $2$}
\State $NS \gets 2$
\ElsIf{$s^t_{self}==2$}
\State $NS \gets 0$
\ElsIf{$s^t_{parent}==2$}
\State $NS \gets s^t_{self}$
\ElsIf{$s^t_{parent}==0 \:and\: s^t_{self}==0$}
\State $NS \gets 0$
\ElsIf{$s^t_{parent}==0 \:and\: s^t_{self}==1$}
\State Compute $g$
\If {$s^t_{internal}==0$}
\State $NS \gets 2$
\Else
\State $NS \gets 0$
\EndIf
\ElsIf{$s^t_{parent}==1 \:and\: s^t_{self}==0$}
\State $NS \gets 1$
\ElsIf{$s^t_{parent}==1 \:and\: s^t_{self}==1$}
\State Compute $g$
\State $NS \gets 1$
\EndIf
\State $s^{t+1}_{self} \gets NS$
\State \textbf{Output : }$s^{t+1}_{self}$
\end{algorithmic}
\end{algorithm}
The leaf nodes have the same $f$ as the other nodes; the only distinction is that because there are no children, the next state is determined by the current state, the state of the parent, and the newly updated internal state.
The computational model is better described with the example run in Fig.~\ref{fig:simulation}, which shows a Cayley tree of order two and height $7$. Here, the root node emits consecutive $1$s towards the leaves. The number of $1$s depends on the key element ($k$) stored in the root's memory. In this simulation, we assume that the key is $k=3$. That is, the root emits three consecutive $1$s and then goes to state $0$. The same rule is followed when these three consecutive $1$s pass through the nodes. When the three $1$s have passed through, the memory contents of all nodes have been reduced by $3$ units. A node then switches to state $2$ if its memory element has become $0$, and switches to state $0$ otherwise.
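The following self-contained Python sketch reproduces this kind of run. It is our own illustration of Algorithms~1 and~2 under a synchronous update with null boundary; the helper names, the random placement of the elements and the small slack added to the $k+2\times h$ stopping bound are assumptions of the sketch rather than part of the thesis.
\begin{verbatim}
import random

def build_cayley_tree(eta, h):
    # parent/children index lists; the root (index 0) has eta+1 children,
    # every other internal node has eta children, all leaves at depth h-1
    parent, children, frontier = [None], [[]], [0]
    for _ in range(1, h):
        nxt = []
        for v in frontier:
            for _ in range(eta + 1 if v == 0 else eta):
                u = len(parent)
                parent.append(v); children.append([]); children[v].append(u)
                nxt.append(u)
        frontier = nxt
    return parent, children

def g(memory, state):                       # Eqs. (6)-(7)
    if state == 1:
        memory -= 1
    return memory, (0 if memory == 0 else 1)

def step(states, memory, parent, children):
    new_states, new_memory = states[:], memory[:]
    for v in range(len(states)):
        s = states[v]
        if any(states[c] == 2 for c in children[v]):
            ns = 2                          # a child reports "found"
        elif v == 0:                        # root node, Algorithm 1
            if s == 1:
                new_memory[v], internal = g(memory[v], s)
                ns = 1 if internal == 1 else 0
            else:
                ns = s                      # stays 0, or stays 2 once found
        else:                               # other nodes, Algorithm 2
            p = states[parent[v]]
            if s == 2:
                ns = 0
            elif p == 2:
                ns = s
            elif p == 1 and s == 0:
                ns = 1                      # the 1-wave moves downwards
            elif s == 1:
                new_memory[v], internal = g(memory[v], s)
                ns = 1 if p == 1 else (2 if internal == 0 else 0)
            else:
                ns = 0                      # p == 0 and s == 0
        new_states[v] = ns
    return new_states, new_memory

def search(elements, key, eta=2, h=7):
    parent, children = build_cayley_tree(eta, h)
    n = len(parent)
    memory, states = [0] * n, [0] * n
    memory[0], states[0] = key, 1           # key at the root, root starts in state 1
    for slot, x in zip(random.sample(range(1, n), len(elements)), elements):
        memory[slot] = x
    for _ in range(key + 2 * h + 2):        # k + 2h bound, plus a little slack
        states, memory = step(states, memory, parent, children)
        if states[0] == 2:
            return "Found"
    return "Not Found"

print(search([5, 4, 7, 3, 5, 11, 3, 4, 9, 3], key=3))    # Found
print(search([5, 4, 7, 6, 5, 11, 8, 4, 9, 10], key=3))   # Not Found
\end{verbatim}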
\begin{figure}[!htbp]
\subfloat[]{
\begin{minipage}[c][1\width]{
0.50\textwidth}
\label{ex:cell}
\centering
\includegraphics[width=1\textwidth]{CAFigure/Example/Cayley11}
\end{minipage}}
\hfill
\subfloat[]{
\begin{minipage}[c][1\width]{
0.50\textwidth}
\label{ex:str}
\centering
\includegraphics[width=1\textwidth]{CAFigure/Example/Cayley12}
\end{minipage}}
\caption{Cayley Tree of order $\eta=2$ and height $h=4$: (a) Overview of a single node, (b) Structure of the tree.}
\label{fig:init}
\end{figure}
If the state of any node changes to $2$, then at the next time step the node sends $2$ to its parent and switches to state $0$ (see Fig.~\ref{sim:g}). In this way, state $2$ propagates to the root. If the root's state switches to $2$ (see Fig.~\ref{sim:h}), then we can consider the key as \emph{Found}. Note that, after $k+2\times h$ time steps, if the root node's state remains $0$, the key should be regarded as \emph{Not Found}.
Fig.~\ref{sim:a} shows the initial configuration, where all nodes are in state $0$ and the state of the root is set to $1$. Computation proceeds from the initial configuration to the final configuration (see Fig.~\ref{sim:p}), where the root is in state $2$ (marked \emph{red}), which means \emph{Found}.
\begin{example}
The model is run with $20$ elements ($\Lambda=20$), $X=\{5,4,7,3,5,11,3,4,9,3,9,5,2,1,3,7,9,2,5,7\}$, and the key $k$ is $3$. We consider a Cayley tree of height $h=4$ and order $\eta=2$ (see Fig.~\ref{ex:str}), where the number of nodes in the tree is $22$ (see Eq.~\ref{eq1}). The elements are distributed over the nodes and the key is stored in the root's memory (see Fig.~\ref{ex:a}). Fig.~\ref{fig:example2} illustrates the detailed execution, where Fig.~\ref{ex:str} shows the structure of the tree and Fig.~\ref{ex:cell} describes each node. Fig.~\ref{ex:a} depicts the initial configuration of the search and Fig.~\ref{ex:j} depicts the configuration where the CA converges. Note that, in Fig.~\ref{ex:h}, the observable state of the root node is $2$, which indicates that $k \in X$. In other words, this configuration (see Fig.~\ref{ex:h}) concludes that the key is \emph{Found}.
\end{example}
\begin{figure}[!htbp]
\subfloat[]{
\begin{minipage}[c][1\width]{
0.50\textwidth}
\label{ex:a}
\centering
\includegraphics[width=1\textwidth]{CAFigure/Example/Cayley1}
\end{minipage}}
\hfill
\subfloat[]{
\begin{minipage}[c][1\width]{
0.50\textwidth}
\label{ex:b}
\centering
\includegraphics[width=1\textwidth]{CAFigure/Example/Cayley2}
\end{minipage}}
\vfill
\subfloat[]{
\begin{minipage}[c][1\width]{
0.50\textwidth}
\label{ex:c}
\centering
\includegraphics[width=1\textwidth]{CAFigure/Example/Cayley3}
\end{minipage}}
\hfill
\subfloat[]{
\begin{minipage}[c][1\width]{
0.50\textwidth}
\label{ex:d}
\centering
\includegraphics[width=1\textwidth]{CAFigure/Example/Cayley4}
\end{minipage}}
\caption{In-Memory Searching; (a) Initial Configuration.}
\label{fig:example0}
\end{figure}
\begin{figure}[!htbp]\ContinuedFloat
\subfloat[]{
\begin{minipage}[c][1\width]{
0.50\textwidth}
\label{ex:e}
\centering
\includegraphics[width=1\textwidth]{CAFigure/Example/Cayley5}
\end{minipage}}
\hfill
\subfloat[]{
\begin{minipage}[c][1\width]{
0.50\textwidth}
\label{ex:f}
\centering
\includegraphics[width=1\textwidth]{CAFigure/Example/Cayley6}
\end{minipage}}
\vfill
\subfloat[]{
\begin{minipage}[c][1\width]{
0.50\textwidth}
\label{ex:g}
\centering
\includegraphics[width=1\textwidth]{CAFigure/Example/Cayley7}
\end{minipage}}
\hfill
\subfloat[]{
\begin{minipage}[c][1\width]{
0.50\textwidth}
\label{ex:h}
\centering
\includegraphics[width=1\textwidth]{CAFigure/Example/Cayley8}
\end{minipage}}
\caption{In-Memory Searching; (h) The root node switches to state $2$.}
\label{fig:example1}
\end{figure}
\begin{figure}[!htbp]\ContinuedFloat
\subfloat[]{
\begin{minipage}[c][1\width]{
0.50\textwidth}
\label{ex:i}
\centering
\includegraphics[width=1\textwidth]{CAFigure/Example/Cayley9}
\end{minipage}}
\hfill
\subfloat[]{
\begin{minipage}[c][1\width]{
0.50\textwidth}
\label{ex:j}
\centering
\includegraphics[width=1\textwidth]{CAFigure/Example/Cayley10}
\end{minipage}}
\caption{In-Memory Searching; (j) Final Configuration (where the CA converges).}
\label{fig:example2}
\end{figure}
This searching scheme implements parallelism in the computation: the nodes' local interactions carry out the entire process. If $n$ is the number of elements and $k$ is the search key, the basic intuition behind the complexity is that the root decrements the key $k$ times and then waits for the response from its children for $O(\log n)$ time; thus the total time complexity becomes $O(k+\log n)$.
The arrangement of elements does not affect the flow of the scheme. Whether the elements are already sorted, reverse sorted, or randomly placed, the scheme works the same in all these cases, and thus the time complexity for all of them is the same, $O(k+\log n)$.
The most important point is that, by sensing only the root node, the outcome of a search is determined: if the root is in state $2$, then the key is \emph{Found}; otherwise it is \emph{Not Found}.
\section{Summary}
In this study, a new type of cellular automata model has been proposed to reduce the workload of the CPU. Each cell of the CA has a memory and an additional processing unit attached to it. This is an initial step in the introduction of a new model capable of resolving computational problems. In this work, the CA was implemented over a Cayley tree, with each node denoting a cell. Each CA cell has some additional processing capability, which aids in solving the given problem.
We have shown that the model can deal with the \emph{Searching Problem}. By sensing only one node, the model can decide the outcome of the search, which demonstrates the model's efficacy. The focus of our next study is to find solutions to other computational problems, e.g., the sorting problem, finding the greatest number, etc.
\chapter{Conclusion}
\label{chap7}
This chapter summarizes the thesis' important contributions and discusses potential future directions for study in the subject of cellular automata, which has the potential to bring together scientists from many academic fields across the world.
\section{Main Contribution}
The primary goal of this research work has been to study the dynamics of temporally stochastic cellular automata (TSCAs). In this thesis, the TSCAs have been utilized to classify patterns and to define the affinity classification problem. Moreover, the thesis has explored a new kind of CA proposed to reduce the workload of the CPU: each cell of this CA has a memory and an additional processing unit attached to it, and the CA was implemented over a Cayley tree, with each node denoting a cell. We have demonstrated that the model is capable of handling the searching problem: it can decide the search's conclusion by sensing just one node (the root node).
A survey of cellular automata, elementary cellular automata, artificial life, and the computational and societal applications of cellular automata relevant to this research work is briefly given in Chapter~\ref{chap2}. The introduction of temporally stochastic elementary cellular automata, their classes and dynamics are reported in Chapter~\ref{chap3}. This was the first step in the exploration of the space of temporally stochastic CAs. Firstly, we have identified that some of the stochastic CAs are affected by temporal noise but are not sensitive to the temporal noise rate; even these CAs have shown a diverse set of results. On the other hand, temporally stochastic CAs that are sensitive to temporal noise have demonstrated phenomena like phase transition and class transition. It is noteworthy that stochastic CAs with (at least one) chaotic rule have often displayed lower resistance during phase transition (i.e. the critical value of the noise rate is low), whereas the stochastic CAs devoid of any chaotic rule have shown greater resilience during phase transition (i.e. the critical value of the noise rate is high). This is another exciting finding of the study.
In Chapter~\ref{chap4}, we have proposed a variant of CAs, termed Temporally Stochastic CAs (TSCAs), in which, instead of one local rule, two rules (default rule $f$ and noise rule $g$) are utilized. After analyzing their dynamics, we have identified the convergent TSCAs that have been used to design two-class pattern classifiers. In this context, in comparison to existing common algorithms, the proposed design of a TSCA-based two-class pattern classifier offers competitive performance.
Chapter~\ref{chap5} introduces a new type of problem known as the affinity classification problem, where we build a dedicated machine that is integrated into a two-dimensional cellular automaton with periodic boundary conditions and Moore neighborhood dependence. Our model may be described by the four parameters $K, \phi(x), \psi(x)$ and $p$ and has affection capability up to a convergence point, all-$1$ or all-$0$. We can build a self-healing system using this paradigm. We are aware that any species may live and evolve due to self-healing. We may argue that our model behaves somewhat like a naturally occurring biological system and that it has become intelligent, since it has this property and makes decisions democratically.
The last chapter, Chapter~\ref{chap6}, was devoted to exploring how a new model may be used to solve the well-known searching problem. In this model, each CA cell has some additional processing capability, which aids in solving the given problem.
\section{Future Directions}
Here are a few intriguing future research directions that may be pursued as a result of our current work:
\begin{enumerate}
\item The most comprehensive future possibilities of these temporally stochastic CAs are discussed in Chapter~\ref{chap3}. However, in this research, our primary method of investigating these CAs was experimental. Therefore, there is still a need for investigation into the precise theoretical explanation of the temporally stochastic CA.
\item The natural extensions of the work on the sensitivity of temporally stochastic CAs and pattern classification, which is discussed in Chapter~\ref{chap4}, include:
\begin{itemize}
\item Here, we have only experimentally explored the convergent TSCAs. What can be said about the theoretical understanding behind the convergence?
\item What can be said about the classification time (i.e., convergence time) of these TSCAs?
\end{itemize}
\item In Chapter~\ref{chap5}, we still have to decide whether our model possesses other aspects of life that an intelligent machine must have. The Moore neighborhood is the only one that has been taken into consideration here; therefore, it is still unclear what type of behavior may result from changing the neighborhood dependency of the rules. By changing the parameters of our model, several additional behaviors could appear. In addition, our concept may be applicable to a number of additional applications beyond self-healing systems. These questions still need to be answered in the future.
\item Chapter~\ref{chap6} shows the model's ability to address the searching problem. The model may decide the search's conclusion by sensing just one node, which shows how effective the model is. Finding answers to further computational challenges, such as the sorting problem, the greatest-number problem, etc., is the main goal of our next research.
\end{enumerate}
\section{Introduction}
One of the common approaches to approximate Bayesian inference, especially popular for large-scale models, is maximization of a variational lower bound on the model's evidence. In recent years several new objectives have been proposed. \citet{burda2016} gave a tighter evidence lower bound (ELBO) which exploits multiple samples from the approximating distribution. \citet{li2016} extended traditional variational inference with Renyi divergence-inspired evidence bounds. These bounds were shown to be more effective for certain unsupervised learning problems.
In this paper we consider the problem of unsupervised learning for data which is known to contain many uninformative (noise) objects. First, we replace the traditional log-evidence $\sum_i \log p(x_i)$ with a robust counterpart $\sum_i \log (\varepsilon + p(x_i))$.
This function ignores the objects with low evidence $p(x_i) \ll \varepsilon$. Next, we derive a variational lower bound on robust model evidence which also shares the same robustness property. We show that by maximizing this lower bound we can successfully train variational autoencoders even in the scenarios where the noise objects comprise the majority of the training dataset.
An alternative approach, proposed by \citet{wang2016reweighted}, is to reweight the per-object likelihood terms with additional local latent variables. In the future, we plan to compare with this approach.
\section{Robust variational inference}
In what follows we consider a parametric latent variable model $p(x, z | \theta) = p(x | z, \theta) p(z) $ with local latent variables $z$ and parameter $\theta$. We derive a lower bound for the robust evidence and study its properties. Finally, following \citep{kingma2014}, we propose a training procedure for variational autoencoders with the robust evidence lower bound.
\subsection{Robust evidence lower bound}
We start off with the robust log-evidence $\sum_{i = 1}^N \log (\varepsilon + p(x_i | \theta))$.
First, we rewrite it for each sample:
\begin{equation}
\log \left[ \varepsilon + p(x_i | \theta) \right] = \log \left[ \mathbb E_{q(z_i | x_i, \phi)} \left( \varepsilon + \frac{p(x_i, z_i | \theta)}{q(z_i | x_i, \phi)} \right) \right]
\end{equation}
and then apply Jensen's inequality to obtain the robust evidence lower bound $\mathcal L_{\varepsilon}(X, \theta, \phi)$
\begin{equation}
\sum_{i = 1}^N \log (\varepsilon + p(x_i | \theta)) \geq \sum_{i = 1}^{N} \mathbb E_{q(z_i | x_i, \phi)} \log \left[ \varepsilon + \frac{p(x_i, z_i | \theta)}{q(z_i | x_i, \phi )} \right] = \mathcal L_{\varepsilon}(X, \theta, \phi).
\end{equation}
The robust evidence bound is tight when the variational distribution $q(z_i | x_i, \phi)$ is the true posterior $p(z_i | x_i, \theta)$.
This objective exhibits robustness to the objects with the low value of $\frac{p(x_i, z_i | \theta)}{q(z_i | x_i, \phi )}$.
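As a minimal illustration (assuming a PyTorch setting with single-sample estimates; the helper name is ours), the per-object term of $\mathcal{L}_{\varepsilon}$ can be computed in a numerically stable way as follows.
\begin{verbatim}
import torch

def robust_elbo_terms(log_p_xz, log_q_z, log_eps):
    # log_p_xz: log p(x_i, z_i | theta), log_q_z: log q(z_i | x_i, phi),
    # both of shape (batch,), evaluated at single samples z_i ~ q(.|x_i);
    # log_eps: scalar tensor holding log(epsilon), treated as a constant.
    log_w = log_p_xz - log_q_z                  # log importance weight
    return torch.logaddexp(log_eps, log_w)      # log(eps + p(x,z)/q(z|x))
\end{verbatim}
Summing (or averaging) these terms over a mini-batch and differentiating reproduces the gradient damping discussed next: objects with $\log$-weight far below $\log \varepsilon$ contribute almost nothing to the update.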
For a fixed sample $(x_i, z_i)$ explicit computation gives
\begin{equation}
\nabla \log\left[ \varepsilon + \frac{p(x_i, z_i| \theta)}{q(z_i | x_i, \phi)} \right] =
\left(
\frac{p(x_i, z_i| \theta)}{q(z_i | x_i, \phi)} \left[\varepsilon + \frac{p(x_i, z_i| \theta)}{q(z_i | x_i, \phi)} \right]^{-1}
\right)\nabla \log \frac{p(x_i, z_i | \theta)}{q(z_i | x_i, \phi)}.
\end{equation}
Therefore, the stochastic gradient of the robust evidence lower bound has the same direction as the gradient of the non-regularized ELBO $\mathcal{L}$ for this sample $(x_i, z_i)$:
\begin{equation}
\nabla \log \frac{p(x_i, z_i | \theta)}{q(z_i | x_i, \phi )}.
\end{equation}
When $\frac{p(x_i, z_i | \theta)}{q(z_i | x_i, \phi )} \ll \varepsilon$, the scalar factor before this gradient is close to zero and the sample does not contribute to the parameter update.
On the other hand, when $\frac{p(x_i, z_i | \theta)}{q(z_i | x_i, \phi )} > \varepsilon$, the factor lies in $[ \frac{1}{2}, 1)$ and we obtain almost the same update as for the non-regularized ELBO.
To benefit from the robustness one has to choose $\varepsilon$ carefully. Underestimating $\varepsilon$ results in poor regularization, overestimating $\varepsilon$ results in significant distortion of the evidence.
It is natural to choose the value of $\varepsilon$ to be comparable to the typical per-object likelihood of the data.
We propose to use a \textit{dynamically changing} value for $\varepsilon$, specifically a multiple of the exponentiated mean evidence lower bound:
\begin{equation}
\varepsilon = \alpha \exp{ \left( \frac{\mathcal L(X, \theta, \phi)}{|X|} \right) }.
\label{eqn:eps}
\end{equation}
Here $\alpha > 0$ controls the regularization effect.
As $\alpha \rightarrow 0$ the robust evidence lower bound converges to ELBO.
\subsection{Training procedure}
To train the model, a stochastic gradient based optimizer is used to maximize the objective function. We use Gaussian latent variables and employ reparametrization trick to obtain the gradient estimates. Firstly, we train the model for one epoch with the evidence lower bound as an objective and initialize $\log \varepsilon$ as the mean ELBO value at the first epoch. Secondly, we fix $\log \alpha$ and then train the model using the robust evidence lower bound as an objective. After each gradient step we update $\log \varepsilon$ with the mean ELBO of the previous batch using exponential smoothing. Moreover, after each epoch we update $\log \varepsilon$ with the mean ELBO from the previous epoch.
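A sketch of this schedule is given below (PyTorch-style pseudocode); the \texttt{elbo\_fn} callable, the smoothing coefficient and all other names are assumptions of the sketch, not a prescription.
\begin{verbatim}
import torch

def train_robust_vae(model, loader, optimizer, elbo_fn, log_alpha,
                     epochs, smooth=0.99):
    # elbo_fn(model, x) must return per-object single-sample estimates of
    # log p(x, z) - log q(z | x), shape (batch,).
    warmup = []
    for x in loader:                      # warm-up epoch with the plain ELBO
        elbo = elbo_fn(model, x)
        loss = -elbo.mean()
        optimizer.zero_grad(); loss.backward(); optimizer.step()
        warmup.append(elbo.mean().item())
    # epsilon = alpha * exp(mean ELBO), i.e. log eps = log alpha + mean ELBO
    log_eps = log_alpha + sum(warmup) / len(warmup)

    for _ in range(epochs):               # robust training phase
        epoch_elbo = []
        for x in loader:
            elbo = elbo_fn(model, x)
            # robust bound log(eps + exp(ELBO_i)); eps kept constant here
            robust = torch.logaddexp(elbo.new_tensor(log_eps), elbo)
            loss = -robust.mean()
            optimizer.zero_grad(); loss.backward(); optimizer.step()
            batch_mean = elbo.mean().item()
            # exponential smoothing of log(eps) after every batch ...
            log_eps = smooth * log_eps + (1 - smooth) * (log_alpha + batch_mean)
            epoch_elbo.append(batch_mean)
        # ... and a refresh from the mean ELBO of the whole epoch
        log_eps = log_alpha + sum(epoch_elbo) / len(epoch_elbo)
    return model
\end{verbatim}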
\section{Experiments}
In our experiments we used a model with the following architecture: the fully-connected encoder and decoder had two hidden layers with 200 units each, and the stochastic layer had 50 hidden units. Parametric ReLU \citep{he2015delving} activation units were used for the deterministic layers.
We used Adam (\cite{kingma2015}) with parameters $\beta_1 = 0.99, \beta_2 = 0.999, \epsilon = 10^{-4}$ for objective maximization. Each model was trained for 1000 epochs with a fixed learning rate of $10^{-3}$. Batch size was set to 200. The following rule was used to update $\varepsilon$ after processing of each batch: $\log \varepsilon_{new} = 0.99 \log \varepsilon_{old} + 0.01 \log \varepsilon $, where $\varepsilon$ is estimated using eqn. \eqref{eqn:eps}.
In the first experiment we compared the robust variational autoencoders with standard variational autoencoders on two synthetic datasets. We used MNIST and OMNIGLOT \citep{lake2015human} as real-world base sets, and then added uninformative data points, i.e. $28\times 28$ images with each pixel's intensity equal to the mean pixel intensity of the original dataset. Due to dynamic binarization \citep{burda2016} these data points act as noise. We varied the ratio of the number of original data points to the number of noise data points from 2:1 to 1:2.
To evaluate the models' performance we computed the mean log-likelihood estimate over 200 samples on the MNIST and OMNIGLOT test sets (without any noise).
The range of $\log \alpha$ was selected empirically: we started with $\log \alpha = -50$ and then increased it to find the optimal value with respect to the test likelihood.
\begin{figure}
\centering
\includegraphics[width=0.9\linewidth]{./graphics/graph-crop.pdf}
\caption{\textbf{Left}: noisy MNIST, \textbf{right}: noisy OMNIGLOT. The proportion of (original:noise) data points is varied from 2:1 to 1:2. We compared test log-likelihood of the \textit{original dataset} for variational autoencoders (VAE) and the proposed robust autoencoders (rVAE) with different regularization parameters $\alpha$ (note that the x-axis is not uniform). rVAE successfully ignores the noise data points while VAE's quality degrades significantly.}
\label{fig:a}
\end{figure}
\begin{figure}
\centering
\begin{subfigure}[b]{0.4\linewidth}
\includegraphics[width=\linewidth]{./graphics/pure_mnist.pdf}
\end{subfigure}
~
\begin{subfigure}[b]{0.4\linewidth}
\includegraphics[width=\linewidth]{./graphics/pure_omniglot.pdf}
\end{subfigure}
\caption{\textbf{Left}: MNIST, \textbf{right}: OMNIGLOT. Log-likelihood estimates for the robust autoencoder and the variational autoencoder trained without the synthetic noise. In this setting, choosing a very small value of $\log \alpha$ results in a regularization effect leading to a small improvement over VAE ($0.7$ nats for MNIST, $0.74$ nats for OMNIGLOT).
}
\label{fig:b}
\end{figure}
Results of the first experiment are shown in Figure~\ref{fig:a}. The robust autoencoder managed to fit the data despite the noise. At the same time, the VAE's test log-likelihood decreased significantly as the fraction of noise increased. The optimal value of $\alpha$ depends on the base dataset and the fraction of noise. For example, for OMNIGLOT the best $\log \alpha$ increased monotonically with the fraction of noise. However, for MNIST there was no such simple pattern.
In the second experiment we compared rVAEs and VAEs on datasets without synthetic noise. We used the same network architecture and optimization approach. Test log-likelihoods for the MNIST and OMNIGLOT datasets are presented in Figure~\ref{fig:b}. In this setting the best results were achieved when $\alpha$ was very close to zero. We observed a small improvement of the robust autoencoder over the variational autoencoder, suggesting that the robust VAE provides a beneficial regularization effect.
\section{Conclusion}
We presented a new variational objective for approximate inference and showed its advantage in the training setting where noisy objects comprise the majority of a dataset. Additionally, the proposed robust variational objective provides a small regularization effect on datasets without any artificial noise. In the future we plan to incorporate the regularization parameter into the probabilistic model, design a procedure for automatic selection of the parameter and evaluate the model on a real-world noisy dataset.
\textbf{Acknowledgments.}
This work was supported by RFBR project No. 15-31-20596 (mol-a-ved) and by Microsoft: Moscow State University Joint Research Center (RPD 1053945).
{\small
\bibliographystyle{IEEEtranSN}
\section{Introduction}
The field theory in space-time $D=2+1$ has some interesting features related to the nontrivial topology of the configuration space. For example, solitons of $D=2+1$ theories can carry fractional charge, statistics and spin \cite{Jackiw, Niemi, Marino, Wilczek1}; many such systems have been observed in condensed matter experiments.
Alternatively, other phenomenological models implement the appearance of exotic statistics by the addition of a Chern-Simons term to the effective action for a statistical gauge field \cite{Hansson, Wilczek}. For example, such an interesting situation occurs in the $O\left( 3\right)$ $\sigma$-model proposed by Balachandran \textit{et al.} \cite{Balachandran}, where the Chern-Simons term is constructed from the $SU\left( 2\right)$ connection form on the $\sigma$-model fiber bundle space with the sphere $S^{2}$ as the base; the quantization of this model leads to solitons with fractional spin. In Semenoff's work \cite{Semenoff, Semenoff1} solitons with exotic statistical properties are also obtained when the interaction of the scalar and abelian gauge fields is considered. Other works involving particles with fractional spin can be found in \cite{Plyushchay1}, where a non-Grassmannian approach is formulated on a pseudoclassical basis for the massive as well as for the massless case.
The problem of constructing a consistent field theory for quartions in dimensions $D=2+1$ and $D=3+1$ was considered by Volkov \textit{et al.} in the works \cite{Tkach1, Volkov}. The extension of the free theory to higher dimensional space-times must be performed with special care because there is a theorem which states that in $D\geq 3+1$ the statistics must be either fermionic or bosonic. As we know, this theorem is valid for the finite-dimensional representations of the Lorentz group. However, in the works \cite{Tkach1, Volkov} it is shown that the fractional spin states are described by infinite-dimensional representations of the Lorentz group, so that the existence of quartions in higher dimensions is also possible. It is worthwhile to remark that in $D=3+1$ a pair of linearly independent equations is obtained, which becomes inconsistent when the interaction is included. However, as Volkov \textit{et al.} \cite{Tkach1, Volkov} pointed out, there is the possibility to describe the dynamics of quartions by means of twistor variables, and the interactions can be studied in a consistent way.
For further development of the theory, it would be very useful to establish the fundamental connection between the space-time and twistor descriptions of particles and superparticles at the Lagrangian level. Twistor theory has been developed mainly by Penrose \cite{Penrose, Penrose1} and is in fact largely based on ideas of conformal symmetry, i.e., zero rest mass particles and conformally invariant fields. In this formalism, the basic variables describing the dynamics of massless spinning particles are a pair of spinor variables called a twistor, and the procedure of canonical quantization can be applied to these variables. In this sense the space of twistors can be considered as more basic and fundamental than space-time; in certain cases it allows a simplification of the constraint analysis and a greater transparency of the symmetry properties. Consequently, when the twistor techniques \cite{Tkach2, Tkach3} are implemented into the structure of supersymmetric theories, a new ingredient for studying the different models appears.
The main goal of the present work is to explore the consequences of the vacuum fluctuations of one of these models \cite{Tkach}, which originate precisely from the twistor variables. For this purpose we give a SUSY generalization of this action and study the constraint structure of the model for the free case as well as when an interaction is included.
The paper is organized as follows: in section \textbf{2} we give a brief review of theories that consider particles with fractional spin and statistics (quartions). We discuss the connection between fractional statistics and fractional spin and see that the possible existence of quartions does not contradict the fundamental Pauli principle. In section \textbf{3} we start with the action for a free massless spinning particle in $D=2+1$ that includes twistor variables. Next, considering only the vacuum fluctuations, we construct an action that is invariant under SUSY transformations and $\tau$-reparametrizations; in the following we perform the constraint analysis for the free case as well as for an interacting ``gauge'' field and, finally, a massive term is introduced into the model. In section \textbf{4} we give our final remarks and conclusions.
\section{Particles with fractional spins}
It was shown in \cite{Tkach1} that quantum field theories in $D=2+1$ dimensions have a very interesting structure when the connection between statistical and spin properties is studied. As was pointed out, the existence of objects (quartions) possessing nontrivial (exotic) spin does not contradict the fundamental Pauli principle that establishes the existence of integer or half-integer spin. The existence of quartions is connected with the topological properties of space-time and it is in complete agreement with the group-theoretical description of its dynamical properties.
The Poincare group (or the inhomogeneous Lorentz group $ISO(1,2)$) is constructed from three translation generators $P_{m}$ ($m=0,1,2$) and three angular momentum generators $M_{m}$ of the Lorentz group $SO(1,2)$, which is isomorphic to $SL(2,\mathbf{R})$. It is well known that the $ISO(1,2)$ generators satisfy the following commutation relations
\begin{equation}
\left[ P_{m},P_{n}\right] =0,\quad \left[ M^{m},M^{n}\right] =i\epsilon
^{mnl}M_{l},\quad \left[ M^{m},P^{n}\right] =i\epsilon ^{mnl}P_{l}
\label{p1}
\end{equation}%
here $\epsilon ^{mnl}$ is the totally antisymmetric tensor and the space-time metric is defined by $\eta ^{mn}=\mathrm{diag}\left( +,-,-\right) $. There are three independent Casimir operators
\begin{eqnarray}
C_{1} &=&P^{n}P_{n}=m^{2} \notag \\
C_{2} &=&M_{n}P^{n} \label{p2} \\
C_{3} &=&\frac{P_{0}}{\left\vert P_{0}\right\vert } \notag
\end{eqnarray}%
where we see that the mass shell condition and the Pauli-Lubanski scalar are defined by the first two relations, while the third one is the energy sign.
A consistent relativistic field theory for particles with fractional spin and statistics is constructed on the basis of the Heisenberg-Weyl group \cite{Sannikov, Perelomov}, whose irreducible representations are given by the particle states with spin values $S_{1/4}$ and $S_{3/4}$. As is known, this group is generated by the coordinate $q$ and momentum $p=i\hbar \partial /\partial q$ operators acting on vectors of the Hilbert space and satisfying the usual commutation relations
\begin{equation}
\left[ q,p\right] =i,\quad \left[ q,q\right] =\left[ p,p\right] =0
\label{p4}
\end{equation}
We recall that in the considered theory $q$ parametrizes the quartion spin space. As customary, the action of the raising $a^{+}$ and lowering $a$ operators
\begin{equation}
a^{+}=\frac{1}{\sqrt{2}}\left( q-ip\right) ,\quad a=\frac{1}{\sqrt{2}}\left(
q+ip\right) \label{p5}
\end{equation}
onto the vacuum vector $\left| 0\right\rangle$ generates the corresponding orthonormal basis vectors of the representation space, which have the following form
\begin{equation}
\left| n\right\rangle =\left( n!\right) ^{-1/2}\left( a^{+}\right)
^{n}\left| 0\right\rangle ,\quad n=0,1,2,... \label{p6}
\end{equation}
Defining the Majorana spinor
\begin{equation}
L_{\alpha }=\left(
\begin{array}{c}
q \\
p%
\end{array}
\right) \label{p7}
\end{equation}
it is possible to construct the $SL\left( 2,R\right) $ group generators by means of the Heisenberg-Weyl generators $q,$ $p$ in a Lorentz covariant manner.
With this definition the commutation relation (\ref{p4}) becomes
\begin{equation}
\left[ L_{\alpha },L_{\beta }\right] =-i\hbar \epsilon _{\alpha \beta }
\label{p8}
\end{equation}%
where $\epsilon _{\alpha \beta }$ is the antisymmetric matrix with $\epsilon_{12}=1$. The last relation determines, in our case, the nature of the theory under consideration and implies the possible existence of particles with exotic spin and statistics (quartions).
The $SL\left( 2,R\right) $ generators acting on the representations $%
S_{1/4},S_{3/4}$ are given by the anticommutators of spinors $L_{\alpha }$
components as follows
\begin{equation}
M_{\alpha \beta }=iM_{n}\left( \gamma ^{n}\right) _{\alpha \beta }=\frac{1}{4%
}\left( L_{\alpha }L_{\beta }+L_{\beta }L_{\alpha }\right) =\frac{1}{2}%
\left\{ L_{\alpha },L_{\beta }\right\} \label{p9}
\end{equation}
As is well known, spinors have a richer structure than vectors, and this is connected with the group properties of $SU\left( 2\right)$, which is the covering group of the rotation group $O\left( 3\right) $. In this sense the existence of quartions can be considered as more fundamental than spinors and should have a certain relation with elementary particle physics.
As given in \cite{Tkach1, Volkov}, the equation for quartions in Lorentz covariant form can be written as
\begin{equation}
\left( L^{\alpha }P_{\alpha \beta }-mL_{\beta }\right) \Phi =0 \label{p10}
\end{equation}%
and it resembles the Dirac equation if we put $L_{\alpha }\Phi =\Psi$; thus in our case $\Phi$ has a continuous dependence on the spin parameter. It is important to remark that in these models there are problems concerning the construction of the Lagrangian which generates the equations of motion (\ref{p10}). Another problem, related to the development of the theory based on equation (\ref{p10}), is the difficulty of adding interactions of quartions with common fields, as, for example, the electromagnetic interaction that can be implemented via the minimal coupling procedure. Therefore, other alternatives must be explored to obtain a satisfactory and consistent theory for quartions. We will try to reach our goal by means of SUSY resources.
\section{Relativistic Particle Dynamics}
\subsection{Free case}
We begin with the formulation of massless relativistic particle dynamics in $D=2+1$-dimensional space-time. The momentum vector $p_{\alpha\beta }=\gamma _{\alpha \beta }^{m}p_{m}$ is written as a bilinear combination of twistor components $\lambda _{\alpha }$, so that the proposed action reads \cite{Shirafuji}
\begin{equation}
S=\int d\tau \lambda _{\alpha }\lambda _{\beta }\dot{x}^{\alpha \beta }
\label{f1}
\end{equation}%
which establishes the connection between the space-time formulation and the twistor one. Here $\lambda _{\alpha }$ is a commuting Majorana spinor, the indices $\alpha ,\beta =1,2$, $x^{\alpha \beta }(\tau )=\gamma ^{m\alpha \beta }x_{m}(\tau )$ is the coordinate of the particle ($m=0,1,2$) and $\dot{x}^{\alpha \beta }=\frac{d}{d\tau }x^{\alpha \beta }(\tau )$.
The inclusion of twistor variables enables us to consider the vacuum fluctuations, giving an additional term containing $\dot{\lambda}^{\alpha }$ which is added minimally to the action \cite{Tkach}
\begin{equation}
S_{0}=l\int d\tau \lambda _{\alpha }\dot{\lambda}^{\alpha } \label{f2}
\end{equation}%
where $l$ is an arbitrary parameter with dimension of length, introduced to ensure the correct dimension of the action.
We consider the motion of the particle in the large superspace $\left(X_{m},\Theta _{\alpha }\right)$, whose trajectory is parameterized by the proper supertime $\left( \tau ,\eta \right)$ of dimension $\left(1/1\right)$ ($\eta$ is the Grassmannian real superpartner of the conventional time $\tau$). In this way the coordinates of the particle trajectory constitute scalar superfields in the little superspace $\left(1/1\right)$:
\begin{eqnarray}
X_{m}\left( \tau ,\eta \right) &=&x_{m}\left( \tau \right) +i\eta \psi
_{m}\left( \tau \right) \label{f3} \\
\Theta _{\alpha }\left( \tau ,\eta \right) &=&\theta _{\alpha }\left( \tau
\right) +\eta \lambda _{\alpha }\left( \tau \right) \label{f4}
\end{eqnarray}%
where the Grassmannian variable $\psi _{m}$ is the superpartner of the bosonic coordinate $x_{m}$ and the commuting Majorana spinor $\lambda_{\alpha }$ is the superpartner of the Grassmannian variable $\theta_{\alpha }$.
In order to construct an action which is invariant under general
transformations in superspace we introduce the supereinbein $E_{M}^{A}\left(
\tau ,\eta \right) $, where $M$ [$A$] are curved [tangent] indices and $%
D_{A}=E_{A}^{M}\partial _{M}$ is the supercovariant general derivative, $%
E_{A}^{M}$ is the inverse of $E_{M}^{A}$. In the special gauge \cite{Brink0}
\begin{equation}
E_{M}^{\alpha }=\Lambda \overline{E}_{M}^{\alpha },\quad E_{M}^{a}=\Lambda
^{1/2}\overline{E}_{M}^{a} \label{f5}
\end{equation}%
where
\begin{equation}
\overline{E}_{\mu }^{\alpha }=1,\quad \overline{E}_{\mu }^{a}=0,\quad
\overline{E}_{m}^{\alpha }=-i\eta ,\quad \overline{E}_{m}^{a}=1 \label{f6}
\end{equation}%
is the flat space supereinbein. In this case, the superscalar field $\Lambda
$ and the derivative $D_{A}$ can be written as
\begin{equation}
\Lambda =e+i\eta \chi ,\quad \overline{D}_{a}=\partial _{\eta }+i\eta
\partial _{\tau },\quad \overline{D}_{\alpha }=\partial _{\tau } \label{f7}
\end{equation}%
where $e(\tau )$ is the graviton field and $\chi (\tau )$ is the gravitino field of the $1$-dimensional $n=1$ supergravity. It is not difficult to prove that
\begin{equation*}
\left( \overline{D}_{a}\right) ^{2}\equiv \left( D_{\eta }\right)
^{2}=i\partial _{\tau }
\end{equation*}
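Indeed, using $\partial _{\eta }^{2}=0$, $\eta ^{2}=0$ and the anticommutator $\left\{ \partial _{\eta },\eta \right\} =1$, one finds
\begin{equation*}
\left( D_{\eta }\right) ^{2}=\left( \partial _{\eta }+i\eta \partial _{\tau
}\right) ^{2}=\partial _{\eta }^{2}+i\left\{ \partial _{\eta },\eta \right\}
\partial _{\tau }-\eta ^{2}\partial _{\tau }^{2}=i\partial _{\tau }.
\end{equation*}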
The extension to superspace of the actions (\ref{f1}) and (\ref{f2}) is
given by\footnote{%
As we will see later the presence of the superscalar field $\Lambda $
guarantees the local SUSY invariance.}
\begin{equation}
S=il\int d\tau d\eta \Lambda ^{-1}D_{\eta }X^{\alpha \beta }D_{\eta }\Theta
_{\alpha }D_{\eta }\Theta _{\beta } \label{f8}
\end{equation}%
and
\begin{equation}
S_{0}=\frac{il}{2}\int d\tau d\eta \Lambda ^{-1}D_{\eta }\Theta _{\alpha }%
\overset{.}{\Theta }^{\alpha }, \label{f9}
\end{equation}%
respectively. Here we introduce the length constant $l$ to obtain the correct dimension of the superfield components; however, the final results will be $l$-independent.
From the condition $\Lambda \Lambda ^{-1}=1$ we obtain
\begin{equation}
\Lambda ^{-1}=e^{-1}-ie^{-2}\eta \chi . \label{f10}
\end{equation}
Our main goal is to study the dynamics of the action (\ref{f9}) arising when
we consider the vacuum fluctuations. We also remark that $S_{0}$ appears due
to the twistor variables introduced in the action (\ref{f1}).
After simple manipulations we obtain for the action (\ref{f9}) in the second
order formalism
\begin{equation}
S_{0}=l\int d\tau \left[ \frac{1}{2}e^{-1}\left( i\dot{\theta}_{\alpha }\dot{%
\theta}^{\alpha }+\lambda _{\alpha }\dot{\lambda}^{\alpha }\right) -\frac{i}{%
2}e^{-2}\chi \lambda _{\alpha }\dot{\theta}^{\alpha }\right] . \label{f11}
\end{equation}%
Next, we perform the following redefinition of the fields
\begin{equation}
\lambda _{\alpha }=e^{1/2}\widehat{\lambda }_{\alpha },\quad \chi =e^{1/2}%
\widehat{\chi } \label{f12}
\end{equation}%
which allows us to rewrite the action $S_{0}$ as
\begin{equation}
S_{0}=l\!\int_{\tau _{1}}^{\tau _{2}}\!\!\!\!d\tau \left[ \frac{i}{2}%
e^{-1}\left( \dot{\theta}_{\alpha }-\frac{1}{2}\widehat{\chi }\widehat{%
\lambda }_{\alpha }\right) \left( \dot{\theta}^{\alpha }-\frac{1}{2}\widehat{%
\chi }\widehat{\lambda }^{\alpha }\right) +\frac{1}{2}\widehat{\lambda }%
_{\alpha }\dot{\widehat{\lambda }}^{\alpha }\right] +\frac{l}{2}\widehat{%
\lambda }_{\alpha }\left( \tau _{2}\right) \widehat{\lambda }^{\alpha
}\left( \tau _{1}\right) \label{f13}
\end{equation}%
Note that the ``small'' supersymmetrization of the action (\ref{f2}) generates the kinetic term for the dynamical variable $\theta _{\alpha }$. The boundary term in (\ref{f13}) was introduced to obtain a set of consistent equations of motion, which are given by
\begin{equation}
\dot{\widehat{\lambda }}_{\alpha }=\frac{ie^{-1}}{2}\widehat{\chi }\left(
\dot{\theta}_{\alpha }-\frac{1}{2}\widehat{\chi }\widehat{\lambda }_{\alpha
}\right) ,\quad \widehat{\lambda }^{\alpha }\pi _{\alpha }=0,\quad \pi
_{\alpha }\pi ^{\alpha }=0,\quad \dot{\pi}_{\alpha }=0. \label{f13a}
\end{equation}
We follow the standard Dirac procedure to study the constrained system generated by the action (\ref{f13}). The canonical momenta obtained from (\ref{f13}) are
\begin{eqnarray}
\pi _{\alpha } &=&ie^{-1}\left( \dot{\theta}_{\alpha }-\frac{1}{2}\widehat{%
\chi }\widehat{\lambda }_{\alpha }\right) \label{f16-1} \\
\varkappa _{\alpha } &=&\frac{1}{2}\widehat{\lambda }_{\alpha },\quad \pi
_{\chi }=0,\quad \pi _{e}=0. \label{f16-2}
\end{eqnarray}
The set of primary constraints is
\begin{equation}
\Omega _{\alpha }=\varkappa _{\alpha }-\frac{1}{2}\widehat{\lambda }_{\alpha
}\approx 0,\quad \Omega _{\chi }=\pi _{\chi }\approx 0,\quad \Omega _{e}=\pi
_{e}\approx 0 \label{f17}
\end{equation}
The primary Hamiltonian associated with the action (\ref{f13}), which takes the primary constraints into account, is given by
\begin{equation}
\mathcal{H}_{P}=-\frac{i}{2}e\pi _{\alpha }\pi ^{\alpha }-\frac{1}{2}%
\widehat{\chi }\widehat{\lambda }^{\alpha }\pi _{\alpha }+\Gamma ^{a}\Omega
_{a} \label{f18}
\end{equation}%
where $\Gamma ^{a}\equiv \left\{ \Gamma ^{\alpha },\Gamma ^{\chi },\Gamma^{e}\right\} $ are the Lagrange multipliers. The stability condition applied to the primary constraints gives a set of secondary constraints
\begin{equation}
\Omega _{\chi }^{(2)}=\frac{1}{2}\widehat{\lambda }^{\alpha }\pi _{\alpha
}\approx 0,\quad \Omega _{e}^{(2)}=\frac{i}{2}\pi _{\alpha }\pi ^{\alpha
}\approx 0 \label{f19}
\end{equation}%
which yield a set of first class constraints. With the help of the second
class constraint $\Omega _{\alpha }=\varkappa _{\alpha }-\frac{1}{2}\widehat{%
\lambda }_{\alpha }\approx 0$ we can construct the Dirac Bracket (DB) for
any two variables
\begin{equation}
\left\{ F,G\right\} _{DB}=\left\{ F,G\right\} _{PB}-\left\{ F,\Omega
_{\alpha }\right\} _{PB}C_{\alpha \beta }^{-1}\left\{ \Omega _{\beta
},G\right\} _{PB} \label{f20}
\end{equation}%
where $C_{\alpha \beta }$ is the matrix formed by the Poisson Bracket (PB)
of the second class constraints. Thus we derive the DB for the canonical
variables
\begin{eqnarray}
\left\{ \theta ^{\alpha },\theta ^{\beta }\right\} _{DB} &=&\left\{ \pi
_{\alpha },\pi _{\beta }\right\} _{DB}=0 \label{f21} \\
\left\{ \theta ^{\alpha },\pi _{\beta }\right\} _{DB} &=&-\delta _{\alpha
\beta },\quad \left\{ \widehat{\lambda }_{\alpha },\widehat{\lambda }_{\beta
}\right\} _{DB}=\epsilon _{\alpha \beta } \label{f22}
\end{eqnarray}
There are two types of gauge (super) transformations that leave the action (\ref{f13}) invariant: the local SUSY transformations
\begin{eqnarray}
\delta \theta _{\alpha } &=&\alpha \left( \tau \right) \widehat{\lambda }%
_{\alpha },\quad \delta \widehat{\lambda }_{\alpha }=i\alpha \left( \tau
\right) e^{-1}\left( \dot{\theta }_{\alpha }-\frac{1}{2}\widehat{\chi }%
\widehat{\lambda }_{\alpha }\right) \label{f14} \\
\delta e &=&i\alpha \left( \tau \right) \widehat{\chi },\quad \delta
\widehat{\chi }=2\dot{\alpha }\left( \tau \right) \notag
\end{eqnarray}
and the $\tau$-reparametrizations
\begin{eqnarray}
\delta \theta _{\alpha } &=&a\left( \tau \right) \dot{\theta }_{\alpha
},\quad \delta \widehat{\lambda }_{\alpha }=a\left( \tau \right) \dot{%
\widehat{\lambda }}_{\alpha } \notag \\
\delta e &=&\left( ae\right) ^{.},\quad \delta \widehat{\chi }=\left( a%
\widehat{\chi }\right) ^{.} \label{f15}
\end{eqnarray}
Invariance under $\tau$-reparametrizations reflects the fact that
the evolution parameter can be chosen arbitrarily without altering the physics of the system.
It is instructive to compute the commutator of two SUSY transformations; using
(\ref{f14}) we obtain
\begin{eqnarray}
\left[ \delta _{\alpha },\delta _{\beta }\right] \theta _{\alpha } &=&f\dot{%
\theta}_{\alpha }+\overline{\delta }_{g}\theta _{\alpha },\quad \left[
\delta _{\alpha },\delta _{\beta }\right] \widehat{\lambda }_{\alpha }=f\dot{%
\widehat{\lambda }}_{\alpha }+\overline{\delta }_{g}\widehat{\lambda }%
_{\alpha } \notag \\
\left[ \delta _{\alpha },\delta _{\beta }\right] e &=&\left( fe\right) ^{.}+%
\overline{\delta }_{g}e,\quad \left[ \delta _{\alpha },\delta _{\beta }%
\right] \widehat{\chi }=\left( f\widehat{\chi }\right) ^{.}+\overline{\delta
}_{g}\widehat{\chi } \label{f15a}
\end{eqnarray}
where we have introduced new reparametrization $\left( f\right) $ and SUSY
$\left( g\right) $ transformation parameters
\begin{equation}
f\left( \tau \right) =2i\beta \alpha e^{-1},\quad g\left( \tau \right) =-%
\frac{1}{2}f\widehat{\chi } \label{f15b}
\end{equation}
Thus we see that the commutator of two SUSY transformations yields a
reparametrization (with parameter $f$) plus an additional SUSY
transformation (with parameter $g$). We also remark that the new
transformation parameters are field dependent.
The generator $G$ of the transformations (\ref{f14}) and (\ref{f15}) can be
found by means of \cite{Casalbuoni,Casalbuoni1}
\begin{equation}
\epsilon G=p_{a}\delta a^{a}-\varphi ,\quad \delta L=\frac{d\varphi }{d\tau }
\label{f16a}
\end{equation}%
where $\epsilon ^{a}$ are the transformation parameters and $\varphi $ is
the generating function. The generators must satisfy the relation
\begin{equation}
\delta u=\left\{ u,\epsilon G\right\} _{DB} \label{f16b}
\end{equation}%
where $u$ is any of the coordinates $q^{a}$.
In this way, we get for the local SUSY transformations
\begin{eqnarray}
G &=&-\widehat{\lambda }^{\alpha }\widehat{\pi }_{\alpha }+i\widehat{\chi }%
\pi _{e} \notag \\
\left\{ \theta ^{\alpha },\alpha G\right\} _{DB} &=&\alpha \widehat{\lambda }%
^{\alpha },\quad \left\{ \widehat{\lambda }^{\alpha },\alpha G\right\}
_{DB}=i\alpha \left( \dot{\theta}^{\alpha }-\frac{1}{2}\widehat{\chi }%
\widehat{\lambda }^{\alpha }\right) \label{f17a} \\
\left\{ e,\alpha G\right\} _{DB} &=&i\alpha \widehat{\chi } \notag
\end{eqnarray}
and the following $\tau $-reparametrizations
\begin{eqnarray}
G &=&-\frac{1}{2}e\pi _{\alpha }\pi ^{\alpha }-\frac{1}{2}\widehat{\chi }%
\widehat{\lambda }^{\alpha }\pi _{\alpha } \label{f17b} \\
\left\{ \theta ^{\alpha },aG\right\} _{DB} &=&a\dot{\theta}^{\alpha
},\quad \left\{ \widehat{\lambda }^{\alpha },aG\right\} _{DB}=a\dot{\widehat{%
\lambda }}^{\alpha } \notag
\end{eqnarray}
The last result shows that the canonical Hamiltonian is the generator of the
$\tau $-reparametrizations.
\subsection{Quantization}
The quantization of the model is performed using the correspondence
principle, whereby the Dirac brackets of the dynamical variables are promoted
to commutators or anticommutators according to $\left\{ \widehat{\quad }\,,\widehat{\quad }\right\} \rightarrow
\frac{\hbar }{i}\left\{ \quad ,\quad \right\} _{DB}$, i.e.
\begin{eqnarray}
\left\{ \widehat{\theta }^{\alpha },\widehat{\theta }^{\beta }\right\}
&=&\left\{ \widehat{\pi }_{\alpha },\widehat{\pi }_{\beta }\right\} =0
\label{f23a} \\
\left\{ \widehat{\theta }^{\alpha },\widehat{\pi }_{\beta }\right\}
&=&i\hbar \delta _{\alpha \beta },\quad \left[ \widehat{\lambda }_{\alpha },%
\widehat{\lambda }_{\beta }\right] =-i\hbar \epsilon _{\alpha \beta }.
\label{f23b}
\end{eqnarray}
The first-class constraints are imposed on the quartion vector states
$\left| \Phi \right\rangle $
\begin{eqnarray}
\widehat{\lambda }_{\alpha }\widehat{\pi }^{\alpha }\left| \Phi
\right\rangle &=&0 \label{f24a} \\
\widehat{\pi }_{\alpha }\widehat{\pi }^{\alpha }\left| \Phi \right\rangle
&=&0. \label{f24b}
\end{eqnarray}
After a simple manipulation we can see that $\left( \widehat{\lambda }_{\alpha
}\widehat{\pi }^{\alpha }\right) ^{2}\approx \widehat{\pi }_{\alpha
}\widehat{\pi }^{\alpha }$. In a certain sense this leads us to interpret
(\ref{f24a}) as a Dirac-like equation and (\ref{f24b}) as a Klein-Gordon-like
equation. However, it is necessary to point out that in this model we do not
necessarily have particles with spin $1/2$ or $0$.
We now select a particular realization for the operators satisfying
the commutation relations (\ref{f23a}) and (\ref{f23b})
\begin{eqnarray}
\mathcal{D}\left( \widehat{\theta }_{\alpha }\right) &=&\theta _{\alpha
},\quad \mathcal{D}\left( \widehat{\lambda }_{\alpha }\right) =L_{\alpha }
\label{f25a} \\
\mathcal{D}\left( \widehat{\pi }_{\alpha }\right) &=&i\hbar \frac{\partial }{%
\partial \theta ^{\alpha }}\equiv i\hbar \partial _{\alpha } \label{f25b}
\end{eqnarray}
where $L_{\alpha }$ is the operator given in (\ref{p7}), i.e. precisely the
realization of the operators that describe particles with exotic spin (quartions).
This result enables us to consider the presence of quartions in the
vector state $\left| \Phi \right\rangle $ and a possible supermultiplet
formed by particles with spins $s=1/4,3/4$. We emphasize that this does not
contradict the SUSY principles, since the difference between the minimal
weights equals $1/2$, just as in any SUSY transformation.
\subsection{Interaction}
Now we analyze our system when a \textquotedblleft
gauge\textquotedblright\ field is added. To construct the action that
includes the interaction of the vacuum fluctuations with a certain gauge
field, the functional nature of the latter must be taken into account.
The action then takes the form \cite{Brink}
\begin{equation}
S_{1}=ig\int d\tau d\eta D_{\eta }\Theta _{\alpha }\mathbf{A}^{\alpha
}\left( \Theta \right) \label{in1}
\end{equation}%
where $g$ is the coupling constant for interaction and $\mathbf{A}^{\alpha
}\left( \Theta \right) $ is a \textquotedblleft
functional\textquotedblright\ supergauge field given by
\begin{equation}
\mathbf{A}^{\alpha }\left( \Theta \right) \equiv \mathbf{A}^{\alpha }\left(
\theta ,\eta ;\lambda \right) =A^{\alpha }\left( \theta \right) +\eta
B^{\alpha }\left( \theta ;\lambda \right) \label{in2}
\end{equation}%
with $A^{\alpha }$ being the grassmannian superpartner of the bosonic field $%
B^{\alpha }$. On the other hand, considering (\ref{f4}) we obtain
\begin{equation}
\mathbf{A}^{\alpha }\left( \Theta \right) \equiv \mathbf{A}^{\alpha }\left(
\theta +\eta \lambda \right) =A^{\alpha }\left( \theta \right) +\eta \lambda
_{\beta }\frac{F^{\beta \alpha }\left( \theta \right) }{2} \label{in3}
\end{equation}%
The factor $\frac{1}{2}$ in the last relation is inserted for convenience.
From (\ref{in2}) and (\ref{in3}), we conclude that
\begin{equation}
B^{\alpha }\left( \theta ;\lambda \right) =\frac{1}{2}\lambda _{\beta
}F^{\beta \alpha }\left( \theta \right) \label{in4}
\end{equation}%
Using Eqs. (\ref{f4}), (\ref{f7}) and (\ref{in4}) we can write the
action (\ref{in1}) as
\begin{equation}
S_{1}=ig\int d\tau \left( e\widehat{\lambda }_{\alpha }\widehat{B}^{\alpha }+%
\dot{\theta}_{\alpha }A^{\alpha }\right) =ig\int d\tau \left( \frac{1}{2}e%
\widehat{\lambda }_{\alpha }\widehat{\lambda }_{\beta }F^{\beta \alpha }+%
\dot{\theta}_{\alpha }A^{\alpha }\right) \label{in5}
\end{equation}%
where we have redefined the fields as in (\ref{f12}). Due to the commutation
relation for the spinor $\widehat{\lambda }_{\alpha }$ we infer that only
the symmetric part of the field $F^{\beta \alpha }$ contributes to this
action.
The action (\ref{in5}) is invariant under local SUSY transformations (\ref%
{f14}) with
\begin{eqnarray}
\delta A^{\alpha } &=&i\alpha \left( \tau \right) \widehat{B}^{\alpha }=%
\frac{i}{2}\alpha \left( \tau \right) \widehat{\lambda }_{\beta }F^{\beta
\alpha } \label{in6} \\
\delta \widehat{B}^{\alpha } &=&i\alpha \left( \tau \right) e^{-1}\left[
\dot{A}^{\alpha }-\frac{i}{2}\widehat{\chi }\widehat{B}^{\alpha }\right]
\label{in7}
\end{eqnarray}
This invariance fixes the field $F^{\alpha \beta }$ uniquely,
which results in
\begin{equation}
F^{\alpha \beta }=i\left( \partial ^{\beta }A^{\alpha }+\partial ^{\alpha
}A^{\beta }\right) \label{in7a}
\end{equation}
It is not difficult to show that
\begin{equation}
\partial _{\alpha }F_{\beta \gamma }+\partial _{\beta }F_{\gamma \alpha
}+\partial _{\gamma }F_{\alpha \beta }=0 \label{in7b}
\end{equation}
On account of the connection between the $SL\left( 2,R\right) $ and $O\left(
3\right) $ groups, where the $\sigma ^{m}$ matrices play the role of
Clebsch-Gordan coefficients, we infer the following relation between the
quantities $F_{\alpha \beta }$ and $F_{mn}$%
\begin{eqnarray}
F^{mn} &=&\left( \sigma ^{mn}\right) _{\alpha \beta }F^{\alpha \beta },\quad
\partial _{m}A^{m}=\partial _{\alpha \beta }F^{\alpha \beta }=0 \label{in7c}
\\
F^{mn} &=&\partial ^{m}A^{n}-\partial ^{n}A^{m} \notag
\end{eqnarray}
i.e. $F^{\alpha \beta }$ is the spinor form of the ``electromagnetic
field''. We recall that for this ``spin tensor'' field in $D=2+1$
dimensions there are only 3 linearly independent components.
The invariance under $\tau$-reparametrizations (\ref{f15}) is completed
with
\begin{eqnarray}
\delta A^{\alpha } &=&a\dot{A}^{\alpha } \label{in8} \\
\delta \widehat{B}^{\alpha } &=&a\dot{\widehat{B}}^{\alpha } \label{in9}
\end{eqnarray}
Joining the free action (\ref{f13}) with the interaction action (\ref{in5})
we have
\begin{eqnarray}
S &=&\int d\tau \left[ \frac{i}{2}e^{-1}\left( \dot{\theta}_{\alpha }-\frac{1%
}{2}\widehat{\chi }\widehat{\lambda }_{\alpha }\right) \left( \dot{\theta}%
^{\alpha }-\frac{1}{2}\widehat{\chi }\widehat{\lambda }^{\alpha }\right) +%
\frac{1}{2}\widehat{\lambda }_{\alpha }\dot{\widehat{\lambda }}^{\alpha
}\right. \notag \\
&&\left. +\frac{i}{2}eg\widehat{\lambda }_{\alpha }\widehat{\lambda }_{\beta
}F^{\beta \alpha }+ig\dot{\theta}_{\alpha }A^{\alpha }\right] . \label{in10}
\end{eqnarray}
From this action we obtain the following conjugate canonical momenta
\begin{eqnarray}
\pi _{\alpha } &=&ie^{-1}\left( \dot{\theta}_{\alpha }-\frac{1}{2}\widehat{%
\chi }\widehat{\lambda }_{\alpha }\right) +igA_{\alpha }=\mathcal{P}_{\alpha
}+igA_{\alpha } \notag \\
\varkappa _{\alpha } &=&\frac{1}{2}\widehat{\lambda }_{\alpha },\quad \pi
_{\chi }=0,\quad \pi _{e}=0,\quad \pi _{\alpha }^{A}=0,\quad \pi _{\alpha
}^{B}=0 \label{in11}
\end{eqnarray}
and the primary constraints
\begin{eqnarray}
\Omega _{\alpha } &=&\varkappa _{\alpha }-\frac{1}{2}\widehat{\lambda }%
_{\alpha }\approx 0,\quad \Omega _{\chi }=\pi _{\chi }\approx 0,\quad \Omega
_{e}=\pi _{e}\approx 0 \label{in12} \\
\Omega _{\alpha }^{A} &=&\pi _{\alpha }^{A}\approx 0,\quad \Omega _{\alpha
}^{B}=\pi _{\alpha }^{B}\approx 0 \notag
\end{eqnarray}
The primary Hamiltonian, which takes into account the primary constraints (\ref{in12}),
is
\begin{equation}
\mathcal{H}_{P}=-\frac{i}{2}e\mathcal{P}_{\alpha }\mathcal{P}^{\alpha }-%
\frac{i}{2}eg\widehat{\lambda }_{\alpha }\widehat{\lambda }_{\beta
}F^{\alpha \beta }+\frac{1}{2}\widehat{\chi }\widehat{\lambda }_{\alpha }%
\mathcal{P}^{\alpha }+\Gamma ^{a}\Omega _{a} \label{in13}
\end{equation}
where $\Gamma ^{a}\equiv \left\{ \Gamma ^{\alpha },\Gamma ^{\chi },\Gamma
^{e},\Gamma _{A}^{\alpha },\Gamma _{B}^{\alpha }\right\} $ are the new
lagrange multipliers. The conservation of primary constraints in time leads
to
\begin{equation}
T_{2}\equiv \frac{1}{2}\widehat{\lambda }_{\alpha }\mathcal{P}^{\alpha
}\approx 0,\quad T_{1}\equiv \frac{i}{2}\left( \mathcal{P}_{\alpha }\mathcal{%
P}^{\alpha }+g\widehat{\lambda }_{\alpha }\widehat{\lambda }_{\beta
}F^{\alpha \beta }\right) \approx 0 \label{in14}
\end{equation}
which form a set of first-class constraints satisfying the algebra
\begin{equation}
\left\{ T_{1},T_{2}\right\} _{DB}=0,\quad \left\{ T_{1},T_{1}\right\}
_{DB}=0,\quad \left\{ T_{2},T_{2}\right\} _{DB}=\frac{i}{2}T_{1}.
\label{in14a}
\end{equation}
In the same manner as in (\ref{f20}) we define the DB, which results in
\begin{eqnarray}
\left\{ \theta ^{\alpha },\theta ^{\beta }\right\} _{DB} &=&0,\quad \left\{
\theta ^{\alpha },\mathcal{P}_{\beta }\right\} _{DB}=-\delta _{\alpha \beta }
\notag \\
\left\{ \mathcal{P}_{\alpha },\mathcal{P}_{\beta }\right\} _{DB}
&=&-gF_{\alpha \beta },\quad \left\{ \widehat{\lambda }_{\alpha },\widehat{%
\lambda }_{\beta }\right\} _{DB}=\epsilon _{\alpha \beta } \label{in15}
\end{eqnarray}
Upon quantization the canonical variables become operators and the DBs
are promoted to commutators or anticommutators
\begin{eqnarray}
\left\{ \widehat{\theta }^{\alpha },\widehat{\theta }^{\beta }\right\}
&=&0,\quad \left\{ \widehat{\theta }^{\alpha },\widehat{\mathcal{P}}_{\beta
}\right\} =i\hbar \delta _{\alpha \beta } \notag \\
\left\{ \widehat{\mathcal{P}}_{\alpha },\widehat{\mathcal{P}}_{\beta
}\right\} &=&i\hbar gF_{\alpha \beta },\quad \left[ \widehat{\lambda }%
_{\alpha },\widehat{\lambda }_{\beta }\right] =-i\hbar \epsilon _{\alpha
\beta }. \label{in16}
\end{eqnarray}
The first-class constraints are imposed on the vector state $\left| \Phi
\right\rangle $%
\begin{eqnarray}
\widehat{\lambda }_{\alpha }\mathcal{P}^{\alpha }\left| \Phi \right\rangle
&=&0 \label{in17} \\
\left( \mathcal{P}_{\alpha }\mathcal{P}^{\alpha }+g\widehat{\lambda }%
_{\alpha }\widehat{\lambda }_{\beta }F^{\alpha \beta }\right) \left| \Phi
\right\rangle &=&0 \label{in18}
\end{eqnarray}
We note that the first equation (\ref{in17}) obeys the minimal coupling
principle when a gauge field is added. On the other hand (\ref{in18}) is the
Klein-Gordon-Fock equation when the interaction is considered.
A possible realization of the resulting operators that takes into account
the commutation relations (\ref{in16}) is similar to the free case
(\ref{f25a}), but Eq. (\ref{f25b}) is modified in a manner compatible with
the minimal coupling principle,
\begin{equation}
\mathcal{D}\left( \widehat{\mathcal{P}}_{\alpha }\right) =i\hbar \frac{%
\partial }{\partial \theta ^{\alpha }}+igA_{\alpha }\equiv i\hbar \partial
_{\alpha }+igA_{\alpha }. \label{in19}
\end{equation}
Since the representations (\ref{f25a}) remain the same, the possibility of
obtaining quartions in our analysis is maintained.
\subsection{The massive term}
We consider the possibility of including a mass term in the Lagrangian
(\ref{f11}). The SUSY extension of this term is non-trivial and requires
concepts and methods of spontaneous SUSY breaking. Nevertheless, we give a
possible component form of the model based on ideas of the pseudoclassical
formalism; a consistent action including a mass term is given by
\begin{equation}
S_{m}=\frac{i}{2}\int\limits_{\tau _{1}}^{\tau _{2}}d\tau \left(
em^{2}+i\theta _{5}\dot{\theta}_{5}+im\widehat{\chi }\theta _{5}\right) +%
\frac{i}{2}\theta _{5}\left( \tau _{2}\right) \theta _{5}\left( \tau
_{1}\right) \label{in20}
\end{equation}%
where $\theta _{5}$ is a Grassmannian variable and the boundary term is
added for the consistency of the resulting equations of motion. The action
(\ref{in20}) preserves the invariance under local SUSY transformations
(\ref{f14}) and $\tau $-reparametrizations (\ref{f15}) when $\delta \theta
_{5}=m\alpha $ and $\delta \theta _{5}=a\dot{\theta}_{5}$ are included,
respectively. Thus the new Hamiltonian for the massive free case is
\begin{equation}
\mathcal{H}=-\frac{ie}{2}\left( \pi _{\alpha }\pi ^{\alpha }+m^{2}\right) -%
\frac{1}{2}\widehat{\chi }\left( \widehat{\lambda }^{\alpha }\pi _{\alpha
}-m\theta _{5}\right) \label{in21}
\end{equation}
The constraint analysis of the new system provides the following set of
first class constraints
\begin{equation}
\pi _{\alpha }\pi ^{\alpha }+m^{2}\approx 0,\quad \widehat{\lambda }^{\alpha
}\pi _{\alpha }-m\theta _{5}\approx 0 \label{in22a}
\end{equation}
and second class constraints
\begin{equation}
\varkappa _{\alpha }-\frac{1}{2}\widehat{\lambda }_{\alpha }\approx 0,\quad
\varkappa _{5}-\frac{1}{2}\theta _{5}\approx 0. \label{in22b}
\end{equation}
\section{Conclusions}
In this work we have constructed, in $D=2+1$ dimensional space-time, a
supersymmetric version of the action that describes the vacuum fluctuations
of massless relativistic particles; this contribution appears when
twistor variables are introduced in the theory \cite{Tkach}. The
construction is performed leaving the action invariant under local SUSY
transformations and $\tau $-reparametrizations. The general Dirac
procedure for the analysis of constrained systems was carried out, obtaining
after quantization a very interesting result, i.e., the possibility of
particle states with fractional spin appearing. Our result is preserved even
when a certain \textquotedblleft gauge\textquotedblright\ superfield
$A_{\alpha }$ is switched on. We argued that the proposed action, via the
inclusion of twistor variables, also gives a consistent method to study
interactions of quartions and \textquotedblleft gauge\textquotedblright\
fields. The multiplet formed by these particles is in complete accordance
with the SUSY principles because the difference between the minimal weights
(spins) in the multiplet is equal to $1/2$.
On the other hand, we have added a mass term to the studied action
(\ref{f2}). The SUSY extension of this term is non-trivial and requires concepts
and methods of spontaneous SUSY breaking. Nevertheless, we give a possible
component form of the model based on ideas of the pseudoclassical formalism,
through the introduction of the Grassmannian variable $\theta _{5}$. This
contribution must be added to the action preserving its invariance under
local SUSY transformations and $\tau $-reparametrizations. The meaning of a
massive theory for quartions and the structure of the resulting multiplet
will be explored in future work.
Furthermore, we will study the extension of the model to $D=3+1$ dimensions
and will also explore the possibility of obtaining particles with fractional
statistics and spin. We point out that this requires the use of the
covering group $SL\left( 2,C\right) $, and that two types of
spinors $\left( \alpha ,\dot{\alpha}\right) $ must be considered. This
implies that the contribution to the vacuum fluctuations will have the
additional term $\lambda _{\dot{\alpha}}\dot{\lambda}^{\dot{\alpha}}$, and
the existence of antiparticles could arise in this model.
\section{Acknowledgements}
We would like to thank Prof. B.M. Pimentel and R.A. Casana for the comments
and suggestions given during the writing of this work. MP thanks CAPES for
full support.
\section{Analysis strategy}
\label{sec:analysis}
In this section we describe our analysis strategy.
First of all we discuss the settings
for jet clustering and the strategy for jet $b$-tagging.
Following this we discuss the categorisation of events into different
topologies, and how the different topologies may be prioritised.
We motivate our choice of analysis cuts by comparing signal and background
distributions for representative kinematic variables.
Finally, we describe the simulation of PU and validate
the PU subtraction strategy.
\subsection{Jet reconstruction}
After the parton shower, final state particles
are clustered using the
jet reconstruction algorithms
of
{\tt FastJet}~\cite{Cacciari:2011ma,Cacciari:2005hq},
{\tt v3.1.0}.
Here we use the following jet definitions:
\begin{itemize}
\item {\it Small-$R$ jets}.
These are jets reconstructed with the
anti-$k_T$ clustering algorithm~\cite{Cacciari:2008gp} with $R=0.4$ radius.
%
These small-$R$ jets are required
to have transverse momentum $p_T \ge 40$~GeV
and pseudo-rapidity $|\eta|<2.5$, within the central
acceptance of ATLAS and CMS, and therefore within the region
where $b$-tagging is possible.
\item {\it Large-$R$ jets}.
These jets are also constructed with the
anti-$k_T$ clustering algorithm, now using a $R=1.0$ radius.
%
Large-$R$ jets are required to have
$p_T \ge 200$~GeV and lie in a pseudo-rapidity region of
$|\eta|<2.0$.
%
The more restrictive range in pseudo-rapidity
as compared to the small-$R$ jets
is motivated by mimicking the experimental requirements
in ATLAS and CMS
related to the track-jet based calibration~\cite{Aad:2014bia,ATLAS:2012kla}.
In addition to the basic $p_T$ and $\eta$
acceptance requirements, large-$R$ jets should also
satisfy the BDRS mass-drop tagger (MDT)~\cite{Butterworth:2008iy}
conditions, where the {\tt FastJet} default
parameters of $\mu_{\rm mdt} = 0.67$ and $y_{\rm mdt}=0.09$ are used.
%
Before applying the BDRS tagger, the large-$R$ jet
constituents are reclustered with the Cambridge/Aachen (C/A)
algorithm~\cite{Dokshitzer:1997in,Wobisch:1998wt}
with $R=1.0$.
In the case of the analysis including PU, a trimming
algorithm~\cite{Krohn:2009th}
is applied to all large-$R$ jets to mitigate the effects of PU,
especially on the jet mass.
%
For further details see Sect.~\ref{sec:pileup}.
\item {\it Small-$R$ subjets}.
All final-state particles are clustered using the
anti-$k_T$ algorithm, but this time with
a smaller radius parameter, namely $R=0.3$.
%
The resulting anti-$k_T$ $R=0.3$ (AKT03) jets
are then ghost-associated to each large-$R$ jet
in order to define its subjets~\cite{Aad:2015uka}.
These AKT03 subjets
are required to satisfy
$p_T > 50$~GeV and $|\eta|<2.5$, and
will be the main input for
$b$-tagging in the boosted category.
%
\end{itemize}
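The acceptance requirements for these three jet collections can be
summarised in a few simple selection functions. The sketch below is
illustrative only: it assumes that the clustering itself (anti-$k_T$ with
{\tt FastJet}), the BDRS tagging and the ghost association of subjets have
already been performed, and that each jet is represented by a dictionary
with hypothetical {\tt pt} and {\tt eta} entries; it merely encodes the
$p_T$ and $\eta$ cuts quoted above.
\begin{verbatim}
def pass_smallR(jet):
    """Small-R (anti-kT, R=0.4) acceptance: pT >= 40 GeV, |eta| < 2.5."""
    return jet["pt"] >= 40.0 and abs(jet["eta"]) < 2.5

def pass_largeR(jet):
    """Large-R (anti-kT, R=1.0) acceptance: pT >= 200 GeV, |eta| < 2.0.
    In the analysis the jet must in addition pass the BDRS mass-drop
    tagger (mu_mdt = 0.67, y_mdt = 0.09), which is not sketched here."""
    return jet["pt"] >= 200.0 and abs(jet["eta"]) < 2.0

def pass_subjet(subjet):
    """AKT03 subjet acceptance: pT > 50 GeV, |eta| < 2.5."""
    return subjet["pt"] > 50.0 and abs(subjet["eta"]) < 2.5
\end{verbatim}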
For the boosted and intermediate categories,
which involve the use of large-$R$ jets,
we use jet substructure variables~\cite{Salam:2009jx,Aad:2013gja} to
improve the significance of the discrimination between signal and background
events in the MVA.
In particular we consider the following
substructure variables:
\begin{itemize}
\item The $k_T$-splitting scale~\cite{Butterworth:2002tt,Butterworth:2008iy}.
This variable is obtained by reclustering the constituents of a jet with the
$k_t$ algorithm~\cite{Ellis:1993tq},
which tends to cluster the harder constituents last, and then
taking the $k_t$ distance measure between the two subjets at the final stage of the recombination
procedure,
\begin{equation}
\label{eq:ktsplitting}
\sqrt{d_{12}} \equiv {\rm min}\left( p_{T,1},p_{T,2}\right) \cdot \Delta R_{12} \, ,
\end{equation}
with $p_{T,1}$ and $p_{T,2}$ the transverse momenta of the two subjets merged
in the final step of the clustering, and $\Delta R_{12}$ the corresponding
angular separation.
\item The ratio of 2-to-1 subjettiness $\tau_{21}$~\cite{Thaler:2010tr,Thaler:2011gf}.
The $N$-subjettiness variables $\tau_N$ are defined by clustering the constituents
of a jet with the exclusive $k_t$ algorithm~\cite{Catani:1993hr}
and requiring that $N$ subjets are found,
\begin{equation}
\tau_N \equiv \frac{1}{d_0} \sum_k p_{T,k}\cdot {\rm min}\left( \delta R_{1k}, \ldots,
\delta R_{Nk}\right) \, , \qquad d_0\equiv \sum_k p_{T,k}\cdot R \, ,
\end{equation}
where $p_{T,k}$ is the $p_T$ of the constituent particle $k$ and $\delta R_{ik}$ the distance from
subjet $i$ to constituent $k$.
%
In this work we use as input to the MVA the ratio of 2-subjettiness to 1-subjettiness, namely
\begin{equation}
\label{eq:tau21}
\tau_{21} \equiv \frac{\tau_2}{\tau_1} \, ,
\end{equation}
which provides good discrimination
between QCD jets and jets arising from the decay of
a heavy resonance.
\item The ratios of energy correlation functions (ECFs) $C^{(\beta)}_2$~\cite{Larkoski:2013eya} and
$D_2^{(\beta)}$~\cite{Larkoski:2014gra}.
The ratio of energy correlation functions $C_2^{(\beta)}$ is defined as
\begin{equation}
\label{eq:c2}
C_2^{(\beta)} \equiv \frac{ {\rm ECF}(3,\beta) {\rm ECF}(1,\beta)}{\left[ {\rm ECF}(2,\beta)\right] ^2} \, ,
\end{equation}
while $D_2^{(\beta)}$ is instead defined as a double ratio of ECFs, that is,
\begin{equation}
e_3^{(\beta)}\equiv \frac{ {\rm ECF}(3,\beta)}{\left[ {\rm ECF}(1,\beta)\right]^3} \, , \quad
e_2^{(\beta)}\equiv \frac{ {\rm ECF}(2,\beta)}{\left[ {\rm ECF}(1,\beta)\right]^2} \, , \quad
\label{eq:d2}
D_2^{(\beta)} \equiv \frac{ e_3^{(\beta)}}{\left( e_2^{(\beta)} \right)^3} \, .
\end{equation}
The energy correlation functions ${\rm ECF}(N,\beta)$ are defined
in~\cite{Larkoski:2013eya} with the motivation that $(N+1)$-point correlators
are sensitive to $N$-prong substructure.
%
The free parameter $\beta$ is set to a value of $\beta=2$,
as recommended by Refs.~\cite{Larkoski:2013eya,Larkoski:2014gra}.
\end{itemize}
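In terms of the jet constituents, the three substructure observables defined
above are straightforward to evaluate explicitly. The following sketch is a
toy {\tt numpy} implementation of Eqs.~(\ref{eq:ktsplitting})--(\ref{eq:d2}),
not the {\tt FastJet}/{\tt fjcontrib} code used in the analysis; in
particular, the subjet directions entering $\sqrt{d_{12}}$ and the candidate
axes entering $\tau_N$ are assumed to have been determined beforehand (e.g.
with the exclusive $k_t$ algorithm), and constituents are given as
$(p_T,\eta,\phi)$ triplets.
\begin{verbatim}
import numpy as np
from itertools import combinations

def delta_r(eta1, phi1, eta2, phi2):
    """Angular distance in the (eta, phi) plane."""
    dphi = np.mod(phi1 - phi2 + np.pi, 2.0 * np.pi) - np.pi
    return np.sqrt((eta1 - eta2) ** 2 + dphi ** 2)

def kt_splitting_scale(subjet1, subjet2):
    """sqrt(d12) = min(pT1, pT2) * DeltaR12 from the two subjets
    merged in the last step of the kt reclustering."""
    pt1, eta1, phi1 = subjet1
    pt2, eta2, phi2 = subjet2
    return min(pt1, pt2) * delta_r(eta1, phi1, eta2, phi2)

def tau_N(constituents, axes, R=1.0):
    """N-subjettiness tau_N for given candidate subjet axes."""
    pts, etas, phis = constituents.T
    d0 = np.sum(pts) * R
    dr_min = np.min([delta_r(etas, phis, eta_a, phi_a)
                     for (eta_a, phi_a) in axes], axis=0)
    return np.sum(pts * dr_min) / d0

def ecf(constituents, N, beta=2.0):
    """Energy correlation function ECF(N, beta)."""
    pts, etas, phis = constituents.T
    total = 0.0
    for idx in combinations(range(len(pts)), N):
        term = np.prod(pts[list(idx)])
        for i, j in combinations(idx, 2):
            term *= delta_r(etas[i], phis[i], etas[j], phis[j]) ** beta
        total += term
    return total

def c2_d2(constituents, beta=2.0):
    """ECF ratios C2 and D2 of Eqs. (c2) and (d2)."""
    ecf1, ecf2, ecf3 = (ecf(constituents, N, beta) for N in (1, 2, 3))
    c2 = ecf3 * ecf1 / ecf2 ** 2
    d2 = (ecf3 / ecf1 ** 3) / (ecf2 / ecf1 ** 2) ** 3
    return c2, d2

# tau21 = tau_N(constituents, two_axes) / tau_N(constituents, one_axis).
\end{verbatim}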
\subsection{Tagging of $b$-jets}
\label{sec:btagging}
In this analysis we adopt
a $b$-tagging strategy along the lines
of current ATLAS performance~\cite{Aad:2013gja,Aad:2015ydr},
though differences with respect to
the corresponding CMS
settings~\cite{Khachatryan:2011wq,Chatrchyan:2012jua}
do not modify qualitatively our results.
For each jet definition described above, a different
$b$-tagging strategy is adopted:
\begin{itemize}
\item {\it Small-$R$ jets}.
%
If a small-$R$ jet has at least one $b$-quark among its constituents,
it will be tagged as a $b$-jet with probability $f_b$.
%
In order to be considered in the $b$-tagging algorithm,
$b$-quarks inside the small-$R$ jet
should satisfy $p_T \ge 15$ GeV~\cite{Aad:2015ydr}.
%
The probability of tagging a jet is not modified
if more than one $b$-quark is found among the jet constituents.
If no $b$-quarks are found among the constituents
of this jet, it can still be tagged as a $b$-jet with
a mistag rate of $f_l$, unless a charm quark is present instead,
in which case the mistag rate is $f_c$.
%
Only jets that contain at least one (light or charm)
constituent
with $p_T \ge 15$ GeV can induce a fake $b$-tag.
We attempt to $b$-tag only the four (two) hardest small-$R$ jets
in the resolved (intermediate) category.
%
Attempting to $b$-tag all of the
small-$R$ jets that satisfy the acceptance cuts worsens the
overall performance as the probability of fake $b$-tags increases
substantially.
\item {\it Large-$R$ jets}.
Large-$R$ jets are $b$-tagged by
ghost-associating anti-$k_T$ $R=0.3$ (AKT03)
subjets to the original large-$R$
jets~\cite{Cacciari:2007fd,Aad:2013gja,
ATLAS-CONF-2014-004,Aad:2015uka}.
%
A large-$R$ jet is considered $b$-tagged if
the leading and subleading AKT03 subjets, where the ordering
is done in the subjet $p_T$, are both individually $b$-tagged,
with the same criteria as the small-$R$ jets.
%
Therefore, a large-$R$ jet whose two leading
subjets each contain at least one $b$-quark will be tagged
with probability $f_b^2$.
As in the case
of small-$R$ jets, we only attempt to $b$-tag the two leading subjets,
since otherwise one finds a degradation of the
signal significance.
%
The treatment of the $b$-jet mis-identification
from light and charm jets
is the same as for the small-$R$ jets.
\end{itemize}
For the $b$-tagging probability $f_b$, along with
the $b$-mistag probability of light ($f_l$) and charm ($f_c$) jets,
we use the values $f_b=0.8$, $f_l=0.01$
and $f_c=0.1$.
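As an illustration of how these working points enter the analysis, the
sketch below encodes the per-jet tagging model described above; the function
names are hypothetical helpers, not part of the actual analysis code, and
the truth-flavour flags are assumed to have been determined from the jet
constituents with $p_T \ge 15$ GeV.
\begin{verbatim}
import numpy as np

F_B, F_C, F_L = 0.80, 0.10, 0.01   # b-tag, charm and light mistag rates

def smallR_tag_prob(has_b, has_c):
    """Probability that a small-R jet is b-tagged, given whether it
    contains a b-quark (or, failing that, a charm quark) among its
    constituents with pT >= 15 GeV.  The probability is not enhanced
    if more than one b-quark is present."""
    if has_b:
        return F_B
    if has_c:
        return F_C
    return F_L

def largeR_tag_prob(subjet1_flavours, subjet2_flavours):
    """A large-R jet is b-tagged only if its two leading AKT03 subjets
    are individually tagged, e.g. probability f_b**2 = 0.64 when both
    subjets contain a b-quark."""
    return (smallR_tag_prob(*subjet1_flavours) *
            smallR_tag_prob(*subjet2_flavours))

def is_tagged(prob, rng=np.random.default_rng()):
    """Stochastic per-event tagging decision."""
    return rng.random() < prob
\end{verbatim}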
\subsection{Event categorisation}
\label{sec:categorisation}
The present analysis follows a strategy similar to the
scale-invariant resonance tagging of Ref.~\cite{Gouzevitch:2013qca}.
Rather than restricting ourselves to a specific event topology,
we aim to consistently combine the information from
the three possible topologies: boosted, intermediate and
resolved, with the optimal cuts for each category being determined
separately.
This approach is robust
under variations of
the underlying production model of Higgs pairs,
for instance in the case of
BSM dynamics, which can substantially increase
the degree of boost in the final state.
The three categories are defined as follows:
\begin{itemize}
\item {\it Boosted category}.
An event which
contains at least two large-$R$ jets, with the two leading jets
being $b$-tagged.
%
Each of these two $b$-tagged, large-$R$ jets are
therefore candidates
to contain the decay products of a Higgs boson.
\item {\it Intermediate category}.
An event with exactly one $b$-tagged, large-$R$ jet, which
is assigned to be the leading Higgs candidate.
%
In addition, we require at least two $b$-tagged, small-$R$ jets,
which must be separated with respect to the large-$R$ jet
by an angular distance of $\Delta R\ge 1.2$.
%
The subleading Higgs boson candidate is reconstructed
by selecting the two $b$-tagged small-$R$ jets that minimize the difference
between the invariant mass of the large-$R$ jet
with that of the dijet obtained
from the sum of the two small-$R$ jets.
\item {\it Resolved category}.
An event with at least
four $b$-tagged small-$R$ jets.
%
The two Higgs candidates are reconstructed out of the
leading four small-$R$ jets in the event
by considering all possible combinations of forming two pairs of jets
and then choosing the configuration that minimizes the relative difference of
dijet masses.
%
\end{itemize}
Once a Higgs boson candidate has been identified,
its invariant mass is required to lie within a fixed window
of width $80~{\rm GeV}$ around the nominal Higgs boson mass of $m_h= 125$
GeV.
Specifically we require the condition
\begin{equation}
\label{higgsmasswindow}
|m_{h,j} - 125~{\rm GeV}| < 40~{\rm GeV} \, ,\, j=1,2 \, ,
\end{equation}
where $m_{h,j}$ is the invariant mass of each of the two reconstructed Higgs candidates.
This cut is substantially looser than the corresponding
cut used in the typical ATLAS and CMS $h\to b\bar{b}$
analyses~\cite{Aad:2012gxa,Chatrchyan:2013zna}.
The motivation
for such a loose cut
is that further improvements of the
signal significance will be obtained using an MVA.
Only events where the two Higgs candidates satisfy
Eq.~(\ref{higgsmasswindow}) are classified as signal events.
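To make the pairing and mass-window logic explicit, the following sketch
(illustrative only, with jets represented as {\tt numpy} four-momenta
$(E,p_x,p_y,p_z)$) selects, among the possible pairings of the four leading
$b$-tagged small-$R$ jets of the resolved category, the one that minimises
the relative dijet mass difference, and then applies the window of
Eq.~(\ref{higgsmasswindow}); the particular definition of the relative
difference used below is an assumption made for illustration.
\begin{verbatim}
import numpy as np
from itertools import combinations

def inv_mass(p4):
    """Invariant mass of a four-momentum (E, px, py, pz)."""
    E, px, py, pz = p4
    return np.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

def pair_higgs_candidates(jets):
    """jets: four-momenta of the b-tagged small-R jets, pT-ordered.
    Returns the two dijet masses for the pairing of the four leading
    jets that minimises the relative dijet mass difference."""
    lead = jets[:4]
    best = None
    for pair in combinations(range(4), 2):
        rest = tuple(i for i in range(4) if i not in pair)
        m1 = inv_mass(sum(lead[i] for i in pair))
        m2 = inv_mass(sum(lead[i] for i in rest))
        metric = abs(m1 - m2) / (m1 + m2)
        if best is None or metric < best[0]:
            best = (metric, m1, m2)
    return best[1], best[2]

def passes_mass_window(m1, m2, mh=125.0, half_width=40.0):
    """Eq. (higgsmasswindow): both candidates within 125 +- 40 GeV."""
    return abs(m1 - mh) < half_width and abs(m2 - mh) < half_width
\end{verbatim}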
These three categories are not mutually exclusive:
a given event can satisfy the requirements of more than one category, for
example those of both the intermediate
and resolved
categories at the same time.
The exception is the boosted and intermediate categories, which have
conflicting jet selection requirements.
In order to combine the three categories without double counting, each event
is assigned to a single category as follows.
%
First of all we perform an inclusive analysis, and optimise the
signal significance
$S/\sqrt{B}$ in each of the three categories separately, including
the MVA.
We find that the category with highest significance is
the boosted one,
followed by the intermediate and the resolved topologies, the latter two
with similar significance.
Therefore, when deciding in
which category an event is to be exclusively placed, we first check whether
it satisfies the boosted requirements; if so, it is assigned to
this category, otherwise we check whether it satisfies the intermediate
requirements.
If the event also fails the intermediate category
requirements, we
then check whether it passes the resolved selection criteria.
The resulting exclusive event samples are then separately processed
through the MVA, allowing for a consistent combination
of the significance of the three event categories.
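A minimal sketch of this prioritisation, assuming that boolean selection
flags have already been computed for each event, could read:
\begin{verbatim}
def assign_category(passes_boosted, passes_intermediate, passes_resolved):
    """Exclusive category assignment, ordered by decreasing
    signal significance: boosted, then intermediate, then resolved."""
    if passes_boosted:
        return "boosted"
    if passes_intermediate:
        return "intermediate"
    if passes_resolved:
        return "resolved"
    return None   # event not used in any category
\end{verbatim}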
\subsection{Motivation for basic kinematic cuts}
We now motivate the
kinematic cuts applied to the different categories,
comparing representative kinematic distributions between
signal and background events.
Firstly we present results without PU, and then
discuss
the impact of PU
on the description of the kinematic
distributions.
In the following, all
distributions are normalized to their total integral.
In Fig.~\ref{fig:cutplots1} we show
the $p_T$ distributions
of the
leading and subleading large-$R$ jets in the boosted category.
%
We observe that the background distribution
falls off more rapidly as a function of $p_T$ than the di-Higgs signal.
%
On the other hand, the cut in $p_T$ cannot be too strong, in order to avoid
a substantial degradation of the signal selection efficiency,
especially for the subleading large-$R$ jet.
%
This comparison justifies the cut of $p_T \ge 200$ GeV
for the large-$R$ jets that we impose in the boosted category.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.48\textwidth]{plots/pt_H0_bst_C1d_noPU.pdf}
\includegraphics[width=0.48\textwidth]{plots/pt_H1_bst_C1d_noPU.pdf}
\caption{\small Comparison of the $p_T$ distributions of the
leading (left) and
subleading (right) large-$R$ jets in the boosted category,
for signal and background events.
%
Distributions have been normalized to unity.
%
The total background is the sum of all components
listed in Table~\ref{tab:samples}.
}
\label{fig:cutplots1}
\end{center}
\end{figure}
Another selection requirement for the boosted category is that the two
leading AKT03 subjets of the large-$R$ jet
should satisfy $p_T \ge $ 50 GeV.
To motivate this cut, in Fig.~\ref{fig:cutplots22}
we show the distribution in $p_T$ of the leading
and subleading AKT03 subjets in the subleading large-$R$ jet in events
corresponding to the boosted category.
It is clear from the comparison that the subjet $p_T$ spectrum is
relatively harder in the signal with respect to the background.
On the other hand, considering the subleading AKT03 subjet,
this cut in $p_T$
cannot be too harsh, to maintain a high signal selection
efficiency.
Therefore,
as for the previous distribution, the chosen cut value is
a compromise between suppressing backgrounds and keeping a large fraction of
signal events.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.48\textwidth]{plots/pt_leadSJ_fj2_noPU.pdf}
\includegraphics[width=0.48\textwidth]{plots/pt_subleadSJ_fj2_noPU.pdf}
\caption{\small Same as Fig.~\ref{fig:cutplots1} for the leading (left)
and subleading (right) AKT03
subjets in the subleading Higgs candidate large-$R$ jet.
}
\label{fig:cutplots22}
\end{center}
\end{figure}
Turning to the resolved category, an important aspect to account for
in the selection
cuts is the fact that the $p_T$ distribution
of the four leading small-$R$ jets of the event can be relatively soft,
especially for the subleading jets.
As noted in~\cite{deLima:2014dta}, this is due to the fact
that the boost from the Higgs decay is moderate,
therefore the $p_T$ selection cuts for the small-$R$ jets cannot be too large.
In Fig.~\ref{fig:cutplots23}
we show the distribution in $p_T$ of the four leading
small-$R$ jets in signal and background events: we observe that both
distributions peak at $p_T \le 50$ GeV, with the signal distribution
falling off less steeply at large $p_T$.
The feasibility of triggering on four small-$R$ jets with a relatively
soft $p_T$ distribution is one of the experimental challenges for
exploiting the resolved category in this final state,
and hence the requirement that $p_T \ge 40$ GeV for
the small-$R$ jets.
In Fig.~\ref{fig:cutplots23} we also show the
rapidity distribution of the small-$R$
jets in the resolved category.
As expected, the production
is mostly central, more so in the case
of signal events, since backgrounds are dominated by
QCD $t$-channel exchange; therefore the
selection criteria on the jet rapidity are very efficient.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.48\textwidth]{plots/pt_smallRjets_res_noPU.pdf}
\includegraphics[width=0.48\textwidth]{plots/eta_smallRjets_res_noPU.pdf}
\caption{\small Same as Fig.~\ref{fig:cutplots1}, now for the
$p_T$ and rapidity distributions of the small-$R$
jets corresponding to the resolved selection.
}
\label{fig:cutplots23}
\end{center}
\end{figure}
One of the most discriminating selection cuts is the requirement
that the invariant mass of the Higgs candidate (di)jets must lie within a window
around the nominal Higgs value, Eq.~(\ref{higgsmasswindow}).
In Fig.~\ref{fig:mHHinv} we show the invariant mass
of the leading reconstructed Higgs candidates, before the Higgs mass window
selection
is applied, for the resolved and boosted categories.
While the signal distribution naturally peaks at the
nominal Higgs mass, the background distributions
show no particular
structure.
The
width of the Higgs mass peak is driven both by QCD effects,
such as initial-state radiation (ISR)
and out-of-cone radiation, as well
as by the four-momentum smearing applied to final state particles
as part of our minimal detector simulation.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.48\textwidth]{plots/m_H0_res_C1d_noPU.pdf}
\includegraphics[width=0.48\textwidth]{plots/m_H0_bst_C1d_noPU.pdf}
\caption{\small Same as Fig.~\ref{fig:cutplots1} for the invariant
mass distribution of the leading Higgs candidates in the resolved
(left) and boosted (right) selections.
}
\label{fig:mHHinv}
\end{center}
\end{figure}
The invariant mass of the di-Higgs system is another important
kinematic distribution for this process.
The di-Higgs invariant mass is a direct measure of the boost of the system,
which in BSM scenarios can be substantially
enhanced, for instance due to
specific $d=6$ EFT operators~\cite{Azatov:2015oxa}.
One important advantage of the $b\bar{b}b\bar{b}$ final state for
di-Higgs production is that it significantly increases the reach
in $m_{hh}$ as compared to other channels with smaller branching
ratios,
such as $2b2\gamma$ or $2b2\tau$.
In Fig.~\ref{fig:mhh} we show the invariant mass distribution of the
reconstructed Higgs pairs,
comparing the resolved and the boosted categories.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.48\textwidth]{plots/m_HH_C2_res_noPU.pdf}
\includegraphics[width=0.48\textwidth]{plots/m_HH_C2_bst_noPU.pdf}
\caption{\small
Same as Fig.~\ref{fig:cutplots1} for the invariant
mass distribution of the di-Higgs system $m_{hh}$, in
the resolved (left) and boosted (right) categories.
}
\label{fig:mhh}
\end{center}
\end{figure}
In the resolved case, we see that the distribution
in $m_{hh}$ is rather harder for the signal as compared
to the background,
and therefore one expects that cutting in $m_{hh}$ would help signal
discrimination.
For the boosted category the overall trend of the $m_{hh}$ distribution
is different because of the selection criteria, and the
distribution now peaks at higher values of the invariant mass.
In this case, signal and background distributions are not significantly
differentiated.
Note that at parton-level the $m_{hh}$ distribution
for signal events has a kinematic
cut-off at $m_{hh}^{\rm min}=250$ GeV, which is smeared due
to parton shower and detector resolution effects.
In Fig.~\ref{fig:pthh} we show the transverse momentum of
the di-Higgs system, $p_T^{hh}$,
for the resolved and boosted categories.
Once more we see that the background has a steeper fall-off in $p_T^{hh}$
than the signal, in both categories, therefore this variable
should provide additional discrimination power, motivating its inclusion
as one of the inputs for the MVA.
In our LO simulation the $p_T^{hh}$ distribution is generated
by the parton shower; an improved theoretical
description would require
merging higher-multiplicity
matrix elements~\cite{Maierhofer:2013sha} or matching to
the NLO calculation~\cite{Frederix:2014hta}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.48\textwidth]{plots/pt_HH_C2_res_noPU.pdf}
\includegraphics[width=0.48\textwidth]{plots/pt_HH_C2_bst_noPU.pdf}
\caption{\small Same as Fig.~\ref{fig:cutplots1} for the transverse momentum
distribution of the di-Higgs system $p_T^{hh}$.
}
\label{fig:pthh}
\end{center}
\end{figure}
We shall now investigate the discrimination power
provided by jet substructure
quantities.
In Fig.~\ref{fig:mva_substructure_1}
we show the distributions of representative
substructure variables for the boosted category: the
$k_t$ splitting scale $\sqrt{d_{12}}$, Eq.~(\ref{eq:ktsplitting}),
the ECF ratio $C_2^{(\beta)}$,
Eq.~(\ref{eq:c2}), and
the 2--to--1 subjettiness ratio $\tau_{21}$, Eq.~(\ref{eq:tau21}),
all for the leading
Higgs candidates, and also $\tau_{21}$ for the subleading
Higgs candidates.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.48\textwidth]{plots/split12_h1_C1_boost.pdf}
\includegraphics[width=0.48\textwidth]{plots/EEC_C2_h0_C1_boost.pdf}
\includegraphics[width=0.48\textwidth]{plots/tau21_h0_C1_boost.pdf}
\includegraphics[width=0.48\textwidth]{plots/tau21_h1_C1_boost.pdf}
\caption{\small Distribution of representative substructure variables
in the boosted category at the end of the cut-based
analysis, to be used as input to the MVA.
%
From top to bottom and from left to right we show the
$k_t$ splitting scale $\sqrt{d_{12}}$,
the energy correlation ratio $C_2^{(\beta)}$
and the subjettiness ratio $\tau_{21}$ for the leading
Higgs.
%
In the case of
$\tau_{21}$ the distributions for the subleading Higgs are also given.
}
\label{fig:mva_substructure_1}
\end{center}
\end{figure}
From Fig.~\ref{fig:mva_substructure_1}
we observe how for these substructure variables the shapes of the signal
and background distributions reflect
the inherent differences in the internal structure of
QCD jets and jets originating from Higgs decays.
Signal and background distributions peak
in rather
different regions. For example, the $k_t$ splitting scale $\sqrt{d_{12}}$
peaks around 80 GeV (40 GeV) for signal (background) events, while
the distribution of the
ECF ratio $C_2^{(\beta)}$ is concentrated at small values
for signal and is much broader for background events.
From Fig.~\ref{fig:mva_substructure_1} we also see
that the distributions of the subjettiness ratio $\tau_{21}$ are
reasonably similar
for both the leading and the subleading jets.
\subsection{Impact of pileup}
\label{sec:pileup}
We now turn to discuss how the description of kinematic
distributions for signal
and background processes is
modified in the presence of pileup.
To study the impact of PU,
Minimum Bias events have been generated
with {\tt Pythia8}, and then
superimposed to the signal
and background samples described in Sect.~\ref{mcgeneration}.
We have explored two scenarios,
one with a number of
PU vertices per bunch crossing of $n_{\rm PU}=80$,
and another
with $n_{\rm PU}=150$.
In the following we adopt $n_{\rm PU}=80$ as our baseline,
and denote this scenario by PU80.
We have verified that the combined signal significance is
similar if $n_{\rm PU}=150$ is adopted instead.
In order to subtract PU in hadronic collisions, a number of techniques
are available~\cite{Cacciari:2009dp,TheATLAScollaboration:2013pia,Butterworth:2008iy,Cacciari:2007fd,Krohn:2009th,Krohn:2013lba,Cacciari:2008gd,Ellis:2009me,Bertolini:2014bba,Cacciari:2014gra,Cacciari:2014jta,Berta:2014eza,Larkoski:2014wba}.\footnote{
These techniques have also important applications in the subtraction
of the UE/MPI contamination for jet reconstruction
in heavy ion collisions~\cite{Cacciari:2010te}.
}
In this work, PU is subtracted
with the {\tt SoftKiller} (SK)
method~\cite{Cacciari:2014gra}, as implemented in {\tt FastJet},
whose performance has been shown to
improve on the commonly used area-based subtraction~\cite{Cacciari:2009dp}.
The idea underlying {\tt SoftKiller} consists of eliminating particles
below a given cut-off in their transverse momentum, $p_T^{\rm (cut)}$, whose
value is dynamically determined so that the event-wide
transverse-momentum flow density $\rho$ vanishes, where $\rho$ is
defined as
\begin{equation}
\rho\equiv{\rm median}_i \Bigg\{ \frac{p_{Ti}}{A_i}\Bigg\} \, ,
\end{equation}
and where the median is computed over all the regions $i$ with area
$A_i$ and transverse momentum $p_{Ti}$ in which the $\left( \eta,\phi\right)$ plane
is partitioned.
From its definition in terms of the median,
it follows that the value of $p_T^{(\rm cut)}$
will be dynamically raised until half of the regions have
$p_{Ti}=0$.
The size (and hence number) of these regions is a free parameter of the algorithm;
here we use square regions with edge length $a=0.4$.
We restrict ourselves to the central rapidity region,
$|\eta| \le 2.5$, for the estimation of the
$p_T$ flow density $\rho$.
The {\tt SoftKiller} subtraction is then
applied to particles at the end of the parton shower, before
jet clustering.
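The dynamical determination of $p_T^{\rm (cut)}$ can be illustrated with the
simple grid-based sketch below (a toy version of the procedure, not the
{\tt SoftKiller} implementation shipped with {\tt FastJet}): the
$(\eta,\phi)$ plane with $|\eta|\le 2.5$ is divided into square patches of
edge $a=0.4$, and the cut is taken as the median over patches of the
hardest-particle $p_T$, which is the value at which half of the patches
become empty and hence $\rho$ vanishes.
\begin{verbatim}
import numpy as np

def softkiller_ptcut(particles, eta_max=2.5, a=0.4):
    """particles: array of (pT, eta, phi) rows.  Returns the pT cut
    such that, once softer particles are removed, half of the grid
    patches are empty and the median pT-flow density rho vanishes."""
    pt, eta, phi = particles.T
    sel = np.abs(eta) <= eta_max
    pt, eta, phi = pt[sel], eta[sel], phi[sel]
    eta_edges = np.arange(-eta_max, eta_max + a, a)
    phi_edges = np.arange(-np.pi, np.pi + a, a)
    # hardest-particle pT in each patch (empty patches count as zero)
    grid_max = np.zeros((len(eta_edges) - 1, len(phi_edges) - 1))
    i = np.clip(np.digitize(eta, eta_edges) - 1, 0, grid_max.shape[0] - 1)
    j = np.clip(np.digitize(phi, phi_edges) - 1, 0, grid_max.shape[1] - 1)
    np.maximum.at(grid_max, (i, j), pt)
    return np.median(grid_max)

def softkiller_subtract(particles, eta_max=2.5, a=0.4):
    """Remove all particles below the dynamically determined pT cut."""
    return particles[particles[:, 0] >= softkiller_ptcut(particles,
                                                         eta_max, a)]
\end{verbatim}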
In addition, jet trimming~\cite{Krohn:2009th}, as implemented in {\tt FastJet}, is applied to large-$R$ jets.
The trimming parameters are chosen such that the constituents of a given jet are reclustered into $k_T$ subjets with $R_{\textrm{sub}} = 0.2$.
Subjets with transverse momentum less than 5\% of the total
transverse momentum of the large-$R$ jet are then removed.
The use of trimming in addition to PU removal with {\tt SoftKiller} is necessary to correct the jet mass in the boosted category,
which is particularly susceptible
to soft, wide-angle contaminations.
No trimming is applied to the small-$R$ jets and
to the case without PU.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.49\textwidth]{plots/m_htot_res_signal_PUnoSK.pdf}
\includegraphics[width=0.49\textwidth]{plots/m_htot_bst_signal_PUnoTrim.pdf}
\caption{\small
The invariant mass distributions of Higgs candidates in signal
events in the resolved (left) and boosted
(right) categories.
%
In the resolved category,
we compare the results without PU
with those with PU80
with and without SK subtraction.
%
In the boosted case, the comparison is performed between no PU,
PU with only SK subtraction,
and PU with both SK and trimming.
}
\label{fig:PUvalidation}
\end{center}
\end{figure}
In Fig.~\ref{fig:PUvalidation} we show the
invariant mass distributions of the
Higgs candidates for signal
events in the resolved and boosted categories.
In the resolved category,
we compare the results without PU
with those with PU80, with and without SK subtraction.
%
If PU is not subtracted, there is a large shift in the Higgs mass
peak, by more than 30 GeV.
Once SK subtraction is performed, we recover a distribution much closer
to the no PU case, with only a small shift of a few GeV
and a broadening of the mass
distribution.
%
In the boosted case, the comparison is performed between no PU,
PU with only SK subtraction,
and PU with both SK and trimming.
We find that
the mass distribution for jets to which no trimming
is applied peaks at around 160~GeV, even
after PU subtraction with {\tt SoftKiller}.
When trimming is applied in addition to {\tt SoftKiller},
the distribution peaks close to the nominal Higgs mass, as in the case
of the resolved category.
In Fig.~\ref{fig:mHH_PU}
we compare the transverse momentum of the leading Higgs
candidate, $p_T^{h}$, and the invariant mass of the di-Higgs system
$m_{hh}$, in both the boosted and resolved categories,
between the no PU and the PU+SK+Trim cases.
In the case of the $p_T^{h}$ distribution, the difference between the selection
criteria for the resolved
and boosted categories is reflected in the rightward shift of the latter.
After subtraction,
the effects of PU are small in the two categories.
A similar behaviour is observed in the di-Higgs invariant mass distribution.
We can also assess the impact of PU on the
substructure variables that will be
used as input to the MVA in the boosted
and intermediate categories.
In Fig.~\ref{fig:Substructure_PU} we show the 2-to-1 subjettiness ratio
$\tau_{21}$, Eq.~(\ref{eq:tau21}), and the ratio
of energy correlation functions $C_2^{(\beta)}$,
Eq.~(\ref{eq:c2}), for the leading Higgs candidate.
We observe that
the shapes of both substructure variables
are reasonably robust in an environment including significant PU.
Therefore we can consider the PU subtraction strategy
as validated for the purposes of this study, although
further optimisation should still be possible, both in terms of
the {\tt SoftKiller} and of the trimming
input settings.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.49\textwidth]{plots/pt_H0_C2_res_comp.pdf}
\includegraphics[width=0.49\textwidth]{plots/pt_H0_C2_bst_comp.pdf}
\includegraphics[width=0.49\textwidth]{plots/m_HH_C2_res_comp.pdf}
\includegraphics[width=0.49\textwidth]{plots/m_HH_C2_bst_comp.pdf}
\caption{\small
The
transverse momentum $p_T^h$ of the leading
Higgs candidate (upper plots) and of the invariant mass $m_{hh}$
of the di-Higgs system (lower plots) in the resolved
(left) and boosted (right) categories.
%
We compare the results without PU with those with PU80
and SK+Trim subtraction,
as explained in the text.
}
\label{fig:mHH_PU}
\end{center}
\end{figure}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.49\textwidth]{plots/tau21_h0_bst_comp.pdf}
\includegraphics[width=0.49\textwidth]{plots/D2_h0_bst_comp.pdf}
\caption{\small
Same as Fig.~\ref{fig:mHH_PU} for the
substructure variables $\tau_{21}$ (left)
and $C_2^{(\beta)}$ (right)
for the leading Higgs candidates in the boosted category.
}
\label{fig:Substructure_PU}
\end{center}
\end{figure}
It is also interesting to quantify how
the relative differences between
signal and background distributions are modified by the inclusion of PU.
Considering the boosted category initially,
in Fig.~\ref{fig:signal-vs-back-boosted} we compare
various kinematic distributions for signal and background events,
with and without PU for the leading Higgs candidate: the transverse
momentum distribution $p_T$,
the $p_T$ of the leading AKT03 subjet,
the 2--to--1 subjettiness ratio $\tau_{21}$, and
the $k_T$ splitting scale $\sqrt{d_{12}}$.
%
We verify that the relevant
qualitative differences between signal
and background distributions are maintained in the presence of PU.
%
This is especially noticeable for the substructure variables, which
exhibit a similar discriminatory power both with and without
PU.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.49\textwidth]{plots/pt_h0_bst_comp_back.pdf}
\includegraphics[width=0.49\textwidth]{plots/pt_leadSJ_fj1_bst_comp_back.pdf}
\includegraphics[width=0.49\textwidth]{plots/tau21_h1_bst_comp_back.pdf}
\includegraphics[width=0.49\textwidth]{plots/split12_h0_bst_comp_back.pdf}
\caption{\small
Comparison of kinematic distributions for the leading
Higgs candidate, in
the boosted category, for signal and background events
in the case of PU subtracted with SK+Trim:
its transverse momentum $p_T$,
the $p_T$ of its leading AKT03 subjet,
and the substructure variables $\tau_{21}$ and $\sqrt{d_{12}}$.
}
\label{fig:signal-vs-back-boosted}
\end{center}
\end{figure}
We can also perform a similar comparison for
the resolved category.
In Fig.~\ref{fig:signal-vs-back-resolved} we compare
the kinematic distributions for signal and background events,
with and without PU, for the invariant mass and the
transverse momentum of the leading
Higgs candidate.
%
Again, the PU-subtracted background distributions
appear reasonably close
to their counterparts without PU, and thus
the distinctive features between signal and background
are maintained after PU subtraction.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.49\textwidth]{plots/pt_h0_res_comp_back.pdf}
\includegraphics[width=0.49\textwidth]{plots/m_h0_res_comp_back.pdf}
\caption{\small
Same as Fig.~\ref{fig:signal-vs-back-boosted} for the resolved category.
}
\label{fig:signal-vs-back-resolved}
\end{center}
\end{figure}
\begin{table}[h]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
\multicolumn{4}{|c|}{Resolved category}\\
\hline
\hline
& & $\left\langle m_h^{\rm reco}\right\rangle-m_h$ & $\sigma_{m_h}$ \\
\hline
\multirow{2}{*}{no PU} & leading $h$ & -3.8 GeV & $\left( 8.5\pm 0.2\right)$ GeV \\
& subleading $h$ & -5.8 GeV & $\left( 9.1\pm 0.3\right)$ GeV \\
\hline
\multirow{2}{*}{PU80} & leading $h$ & +33 GeV & $\left( 8.8\pm 1.5\right)$ GeV \\
& subleading $h$ & +31 GeV & $\left( 11.7\pm 3.3\right)$ GeV \\
\hline
\multirow{2}{*}{PU80+SK} & leading $h$ & +3.9 GeV & $\left( 10.7\pm 0.3\right)$ GeV \\
& subleading $h$ & +2.1 GeV & $\left( 10.5\pm 0.3\right)$ GeV \\
\hline
\multicolumn{4}{c}{}\\
\hline
\multicolumn{4}{|c|}{Boosted category}\\
\hline
\hline
& & $\left\langle m_h^{\rm reco}\right\rangle-m_h$ & $\sigma_{m_h}$ \\
\hline
\multirow{2}{*}{no PU} & leading $h$ & +2.0 GeV & $\left( 8.2\pm 0.5\right)$ GeV \\
& subleading $h$ & +1.0 GeV & $\left( 8.8\pm 0.5\right)$ GeV \\
\hline
\multirow{2}{*}{PU80+SK+Trim} & leading $h$ & -2.2 GeV & $\left( 8.7\pm 0.7\right)$ GeV \\
& subleading $h$ & -4.9 GeV & $\left( 9.0\pm 0.8 \right)$ GeV \\
\hline
\end{tabular}
\caption{\label{tab:massresolution}
Resolution of the invariant mass distribution of
reconstructed Higgs candidates in the resolved
and boosted categories.
%
We show three cases: no PU, with PU80
without subtraction (only for resolved),
and the same with SK+Trim subtraction.
%
We indicate the shift of the fitted invariant
mass peak $\left\langle m_h^{\rm reco}\right\rangle$ for
the Higgs candidates as compared
to the nominal Higgs mass $m_h$, as well as
the fitted Gaussian width $\sigma_{m_h}$.
}
\end{table}
It is illustrative to determine
the mass resolution obtained for the
reconstructed Higgs candidates in the various cases considered in
the present study.
In Table~\ref{tab:massresolution}
we indicate the shift of the fitted invariant mass peak as compared
to the nominal Higgs mass, $\left\langle m_h^{\rm reco}\right\rangle-m_h$,
and the corresponding width of the distribution, $\sigma_{m_h}$,
obtained from fitting a Gaussian to the mass distributions
of leading and subleading Higgs candidates in
the resolved and boosted categories.
%
We show results for three cases: without PU, with PU80
but without subtraction (only for the
resolved category), and the same with SK+Trim subtraction.
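The numbers quoted in Table~\ref{tab:massresolution} follow from such a
Gaussian fit to the reconstructed invariant mass distributions; a schematic
version of this fit (with illustrative binning and fit range, not the exact
settings used to produce the table) is sketched below.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def gaussian(m, norm, mean, sigma):
    return norm * np.exp(-0.5 * ((m - mean) / sigma) ** 2)

def fit_mass_peak(masses, mh=125.0, fit_range=(85.0, 165.0), nbins=40):
    """Fit a Gaussian to the Higgs-candidate mass distribution and
    return the peak shift with respect to mh, the fitted width and
    the uncertainty on the width."""
    counts, edges = np.histogram(masses, bins=nbins, range=fit_range)
    centres = 0.5 * (edges[:-1] + edges[1:])
    popt, pcov = curve_fit(gaussian, centres, counts,
                           p0=[counts.max(), mh, 10.0])
    shift = popt[1] - mh
    sigma, sigma_err = abs(popt[2]), np.sqrt(pcov[2, 2])
    return shift, sigma, sigma_err
\end{verbatim}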
In both categories,
we find a mass resolution of around 9 GeV in the case
without PU.
%
In the case of PU
with SK+Trim subtraction,
in the resolved category the mass resolution
worsens only slightly
to around 11 GeV, while in the boosted category we find
the same resolution as in the no PU case.
%
We also note that after SK+Trim subtraction, the peak of
the invariant mass distributions of Higgs candidates
coincides with the nominal values of $m_h$ within a few GeV
for the two categories.
\section{Conclusions and outlook}
\label{sec:conclusions}
In this work we have presented a feasibility study for
the measurement of Higgs pair production in the $b\bar{b}b\bar{b}$
final state at the LHC.
Our strategy is based on the combination of traditional
cut-based analysis with state-of-the-art multivariate techniques.
We take into account
all relevant backgrounds, in particular
the irreducible $4b$
and the reducible
$2b2j$ and $4j$ QCD multijets.
We have illustrated how the $2b2j$ component leads to
a contribution comparable to that of QCD $4b$ production,
due to a combination of parton shower effects, $b$-quark
pair radiation, and selection requirements.
We have also demonstrated the robustness of our analysis strategy
under the addition of significant PU.
In particular, we have explored two scenarios, $n_{\rm PU}=80$ and
$n_{\rm PU}=150$, and found a comparable overall signal significance
in the two cases.
Combining the contributions from the resolved,
intermediate and boosted categories, we find that, for
$\mathcal{L}=3$ ab$^{-1}$, the
signal significance for
the production of Higgs pairs turns out to be $S/\sqrt{B}\simeq 3$.
This indicates that, already from the $b\bar{b}b\bar{b}$
final state alone,
it should be possible to claim observation of Higgs pair production at
the HL-LHC.
Our study also suggests possible avenues that the LHC experiments
could explore to further improve this signal significance.
One handle would be to reduce the contribution from light and charm
jet mis-identification, ensuring that the irreducible $4b$ background
dominates over the $2b2j$ component.
This would allow $S/\sqrt{B}$ to be enhanced almost to the discovery
level, see Table~\ref{table:cutflowMVA_fakes}.
It would also be advantageous to improve the $b$-tagging efficiency, allowing
higher signal yields to be achieved.
Another possibility would be to improve the mass resolution of the Higgs
reconstruction
in high-PU environments and, more generally,
to optimize the PU subtraction
strategy in order
to reduce the impact of PU on the modelling
of kinematic variables and the associated
degradation of the MVA discrimination.
Another challenging aspect of the measurement of Higgs pairs
in the $b\bar{b}b\bar{b}$ final state is achieving an efficient
triggering strategy.
In order to reduce the rate from background QCD processes sufficiently, while
being able
to access the relevant $p_T$ regimes, (multi-)jet triggers
using $b$-quark tagging information online for one or more jets are
likely to be
necessary.
The additional rejection provided by these triggers could
enable events to be selected efficiently, with four
jets down to $p_T=40$ GeV in the resolved category,
and boosted Higgs decays in large-$R$ jets down to jet transverse momenta of
$p_T=200$ GeV.
In addition,
good control of the multijet backgrounds and the
experimental systematics of the MVA inputs will be important to achieve
these sensitivities.
Our strategy relies on the modeling of the kinematic
distributions of signal and background events, since these provide
the inputs to the MVA discriminant.
In this respect, it would be important, having established the key
relevance of the $b\bar{b}b\bar{b}$ channel for the study of
Higgs pair production, to revisit and improve the
theoretical modeling of our signal and background simulation,
in particular using NLO calculations matched to
parton showers both for signal~\cite{Frederix:2014hta,Maierhofer:2013sha}
and for backgrounds~\cite{Alwall:2014hca,Gleisberg:2008ta}.
One important implication of this work is that it should
be possible to significantly
improve the accuracy of the extraction of
the Higgs trilinear coupling $\lambda$ from
a measurement of the
$\sigma\left( hh\to b\bar{b}b\bar{b}\right)$ cross-section, as compared
to existing estimates.
A determination of $\lambda$ in our approach is however
rather
non-trivial, involving
not only regenerating signal samples
for a wide range of values of $\lambda$, but also
repeating the analysis
optimisation, including the MVA training, for each
of these values.
This study is left to a future
publication, where we will also
compare the precision from the $b\bar{b}b\bar{b}$ final state
with the corresponding precision
that has been reported from other final states such as
$b\bar{b}\gamma\gamma$
and $b\bar{b}\tau\tau$.
It will also be interesting to perform
this exercise for a 100 TeV hadron collider~\cite{Barr:2014sga,
Azatov:2015oxa,Papaefstathiou:2015iba,
Arkani-Hamed:2015vfh}.
While the signal yields would increase at 100 TeV, the (gluon-driven) QCD
multijet background would also grow strongly.
Revisiting
the present analysis, including the MVA optimization,
at 100 TeV would also allow us
to assess the accuracy of an extraction of the trilinear
coupling $\lambda$ from the $b\bar{b}b\bar{b}$ final state.
In this work we have considered only the SM production mechanism,
but many BSM scenarios predict deviations
in Higgs pair production, both at the level of total rates
and of
differential distributions.
In the absence of new explicit degrees of freedom,
deviations from the SM can be parametrized in
the EFT framework using higher-dimensional
operators~\cite{Azatov:2015oxa,Goertz:2014qta}.
Therefore, we plan to study the constraints
on the coefficients of these effective
operators that can be obtained from measurements
of various kinematic distributions
in the $hh\to b\bar{b}b\bar{b}$ process.
Note that the higher rates of the $b\bar{b}b\bar{b}$ final state as compared to
other final states, such as
$b\bar{b}\gamma\gamma$, allow for better constraints upon operators
that modify the high-energy behavior
of the theory; for instance,
it would become possible
to access the tail of the $m_{hh}$ distribution.
As in the case of the extraction of the Higgs
trilinear coupling $\lambda$, such a study
would be a computationally intensive task, since
BSM dynamics will modify the shapes of the kinematic
distributions and thus in principle each point in the EFT parameter
space would require a re-optimization with a newly trained
MVA.
In order to explore efficiently the BSM parameters
without having to repeat the full analysis
for each point, modern statistical techniques
such as the Cluster Analysis method proposed
in Ref.~\cite{Dall'Osso:2015aia} might be helpful.
\bigskip
\bigskip
\begin{center}
\rule{5cm}{.1pt}
\end{center}
\bigskip
\bigskip
{\bf\noindent Acknowledgments \\}
We thank F.~Bishara, R.~Contino, A.~Papaefstathiou and
G.~Salam for useful discussions on the topic
of Higgs pair production.
We thank E.~Vryonidou and M.~Zaro for
assistance with di-Higgs production
in {\tt MadGraph5\_aMC@NLO}.
\noindent
The work of K.~B. is supported by a Rhodes Scholarship.
%
D.~B., J.~F. and C.~I. are supported by the STFC.
%
J.~R. and N.~H. are
supported by a European Research Council Starting Grant ``PDF4BSM''.
J.~R. is supported by an STFC Rutherford Fellowship and
Grant ST/K005227/1 and ST/M003787/1.
\section{Introduction}
The measurement of double Higgs production will be one of the central
physics goals of the LHC program in its recently started high-energy
phase, as well as for its future high-luminosity upgrade (HL-LHC)
which aims to accumulate a total integrated
luminosity of 3 ab$^{-1}$~\cite{ATLAS:2013hta,CMS:2013xfa}.
Higgs pair production~\cite{baglio} is directly sensitive to the
Higgs trilinear coupling $\lambda$ and
provides crucial
information on the electroweak symmetry breaking mechanism.
It also probes the underlying strength of the Higgs interactions
at high energies, and can be used to test the composite nature of the
Higgs boson~\cite{Giudice:2007fh,Contino:2010mh}.
While Standard Model (SM) cross-sections are small,
many Beyond the SM (BSM)
scenarios predict enhanced rates for double Higgs production; searches have therefore already been performed by ATLAS and CMS with Run I data~\cite{Aad:2015xja,Aad:2015uka,Aad:2014yja,Khachatryan:2015yea,Chatrchyan:2011wt}
and will continue at Run II.
The study of Higgs pair production will also be relevant to
any future high-energy
collider, either at a 100 TeV circular machine~\cite{Arkani-Hamed:2015vfh,Barr:2014sga,Papaefstathiou:2015iba,Azatov:2015oxa} or at
a linear or circular electron-positron collider~\cite{Contino:2013gna}.
Analogously to single Higgs production~\cite{Dittmaier:2012vm},
in the SM the dominant mechanism for the production of a pair of
Higgs bosons at the LHC is
gluon fusion (see~\cite{baglio,Frederix:2014hta} and
references therein).
For a center-of-mass energy of $\sqrt{s} = 14\,$TeV, the
next-to-next-to-leading order (NNLO)
total cross section is approximately $40\,$fb~\cite{deFlorian:2013jea},
which is increased by a further few percent once
next-to-next-to-leading logarithmic
(NNLL) corrections
are accounted for~\cite{deFlorian:2015moa}.
Feasibility studies in the case of a SM-like Higgs boson
in the gluon-fusion channel
at the LHC have been performed for different final states, including
$b\bar b\gamma\gamma$~\cite{Baur:2003gp,Barger:2013jfa,Lu:2015jza},
$b\bar{b}\tau^+\tau^-$~\cite{Baur:2003gpa,Barr:2013tda,Dolan:2012rv,Dolan:2013rja},
$b\bar{b}W^+W^-$~\cite{Dolan:2012rv,Papaefstathiou:2012qe} and
$b\bar{b}b\bar{b}$~\cite{Baur:2003gpa,Dolan:2012rv,Wardrope:2014kya,deLima:2014dta,Barger:2013jfa}.
While these studies differ in their quantitative conclusions,
a consistent picture emerges
that the ultimate precision in the determination of the Higgs trilinear
coupling $\lambda$ requires the full integrated luminosity
of the HL-LHC, $\mathcal{L}=3$ ab$^{-1}$,
and should rely on the combination of different final states.
The interplay between kinematic
distributions for the
extraction of $\lambda$ from the measured
cross-sections, and the role of the associated theoretical
uncertainties, have been intensely scrutinized
recently~\cite{Slawinska:2014vpa,Chen:2014xra,Goertz:2013kp,
Frederix:2014hta,Dawson:2015oha,Maltoni:2014eza,Maierhofer:2013sha,Grigo:2013rya,Grigo:2014jma}.
In addition to the gluon-fusion channel, Higgs pairs
can also be produced in the vector-boson fusion
channel $hhjj$~\cite{Contino:2010mh,Dolan:2013rja,Dolan:2015zja,
Brooijmans:2014eja},
the associated production modes
$hhW$ and $hhZ$~\cite{Barger:1988jk,baglio,Cao:2015oxx}
(also known as Higgs-Strahlung),
and also in association
with top quark pairs $hht\bar{t}$~\cite{Englert:2014uqa}.
All these channels are challenging due to the small production
rates: at 14 TeV, the inclusive total cross-sections are
2.0 fb for VBF $hhjj$~\cite{Liu-Sheng:2014gxa},
0.5 fb for $W(Z)hh$~\cite{baglio}
and 1.0 fb for $hht\bar{t}$~\cite{Englert:2014uqa}.
While the SM production rates for Higgs
pairs are small, they are substantially
enhanced in a variety of BSM scenarios.
Feasibility studies of Higgs pair production in New Physics
models have been performed in a number of different frameworks,
including Effective Field
Theories (EFTs) with higher-dimensional
operators and anomalous
Higgs couplings~\cite{Nishiwaki:2013cma,Dall'Osso:2015aia,Azatov:2015oxa,Liu:2014rba,Goertz:2014qta,He:2015spf,Grober:2015cwa,Cao:2015oaa}, resonant production
in models such as extra dimensions~\cite{Gouzevitch:2013qca,Cooper:2013kia,No:2013wsa,Wen-Juan:2015gqg}, and Supersymmetry and
Two Higgs Doublet models (2HDMs)~\cite{Belyaev:1999kk,Han:2013sga,Hespel:2014sla,Wu:2015nba,Cao:2014kya,Ellwanger:2013ova,Cao:2013si}.
Since BSM dynamics modify
the kinematic distributions of the Higgs decay products, for
instance boosting the di-Higgs system,
different analysis strategies might be required for BSM
Higgs pair searches as compared to SM measurements.
Searches for the production of Higgs pairs
have already been performed with 8 TeV Run I data
by ATLAS in the $b\bar{b}b\bar{b}$~\cite{Aad:2015uka}
and $b\bar{b}\gamma\gamma$~\cite{Aad:2014yja} final states,
and by
CMS in the same $b\bar{b}b\bar{b}$~\cite{Khachatryan:2015yea}
and $b\bar{b}\gamma\gamma$~\cite{Chatrchyan:2011wt} final
states.
In addition, ATLAS has presented~\cite{Aad:2015xja} a combination
of its di-Higgs searches in the $bb\tau\tau,$
$\gamma\gamma WW^*$, $\gamma\gamma bb$, and $bbbb$ final states.
Many other exotic searches involve Higgs pairs in the final
state, such as the recent
search for heavy Higgs bosons $H$~\cite{Khachatryan:2015tha}.
In the context of SM production,
the main advantage of the $b\bar{b}b\bar{b}$ final state is the
enhancement of the signal yield
from the large branching fraction of Higgs bosons into
$b\bar{b}$
pairs, ${\rm BR}\left( H\to b\bar{b}\right)\simeq 0.57$~\cite{Dittmaier:2012vm}.
However, a measurement in this channel
needs to deal with an overwhelming QCD multi-jet background.
Recent studies of Higgs pair production in this
final state~\cite{Wardrope:2014kya,deLima:2014dta}
estimate that, for an integrated
luminosity of
$\mathcal{L}=3$ ab$^{-1}$,
a signal significance of around $S/\sqrt{B}\simeq 2.0$ can be obtained.
In these analyses, irreducible backgrounds such as $4b$ and
$t\bar{t}$ are included; however, the
reducible components, in particular $bbjj$ and
$jjjj$, are neglected.
These can contribute to the signal yield when
light and charm jets are mis-identified as $b$-jets.
Indeed, due to both
selection effects and $b$-quark radiation in the
parton shower, the
contribution of the $2b2j$ process is as significant as
the irreducible $4b$ component.
In this work, we revisit the feasibility of SM Higgs pair production by
gluon-fusion
in the $b\bar{b}b\bar{b}$ final state at the LHC.
Our strategy is based upon a combination of traditional cut-based
methods and multivariate analysis (MVA).
%
We account for all relevant
backgrounds, including the contribution from mis-identified
light and charm jets.
%
We also assess the robustness of our analysis strategy in
an environment with high pileup (PU).
%
Our results indicate that
the $b\bar{b}b\bar{b}$
final state
alone should allow for the observation of double Higgs production
at the HL-LHC.
%
The structure of this paper proceeds as follows.
In Sect.~\ref{mcgeneration} we present the modeling of the signal
and background processes with Monte Carlo event generators.
In Sect.~\ref{sec:analysis}
we introduce our analysis strategy, in particular
the classification of individual events into
different categories according to their topology.
Results of the cut-based analysis
are then presented in Sect.~\ref{sec:results}.
In Sect.~\ref{sec:mva} we illustrate the enhancement of signal
significance using multivariate techniques, and
assess the robustness of our results against the effects of PU.
In Sect.~\ref{sec:conclusions} we conclude and outline
future studies to estimate the accuracy
in the determination of the trilinear coupling $\lambda$ and
to provide
constraints in
BSM scenarios.
\section{Modeling of signal and background processes}
\label{mcgeneration}
In this section we discuss the Monte Carlo generation of the signal and background
process samples used in this analysis.
We shall also discuss the modelling of detector
resolution effects.
\subsection{Higgs pair production in gluon-fusion}
Higgs pair production is simulated at leading order (LO) using
{\tt MadGraph5\_aMC@NLO}~\cite{Alwall:2014hca}.
We use a tailored model~\cite{Maltoni:2014eza}
for gluon-fusion Higgs boson pair production
which includes mass effects
from the
exact form factors for the top-quark triangle and box
loops~\cite{Plehn:1996wb}.
Equivalent results can be obtained using
the recently available functionalities
for the calculation of loop-induced processes~\cite{Hirschi:2015iia}
in {\tt MadGraph5\_aMC@NLO}.
The calculation is performed in the
$n_f=4$ scheme, accounting for $b$-quark mass effects.
The renormalization and factorization
scales are taken to be $\mu_F=\mu_R=H_T/2$,
with
\begin{equation}
H_T\equiv \sum_i \sqrt{p_{T,i}^2+m_i^2} \, ,
\end{equation}
the scalar sum of the
transverse masses of all final state particles.
For the input parton distribution functions (PDFs) we
adopt the NNPDF 3.0 $n_f=4$ LO set~\cite{Ball:2014uwa} with
$\alpha_s(m_Z^2)=0.118$,
interfaced via {\tt LHAPDF6}~\cite{Buckley:2014ana}.
The Higgs boson couplings
and branching ratios are set to their SM values,
and its mass is taken to be
$m_h=125$ GeV~\cite{Aad:2014aba,Khachatryan:2014jba,Aad:2015zhl}.
In the SM, the Higgs trilinear coupling
is given by $\lambda=m_h^2/2v^2$, with
$v\simeq 246$ GeV the Higgs vacuum expectation
value.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.96\textwidth]{plots/hhFeyn.pdf}
\caption{\small Representative Feynman diagrams
for Higgs pair production in gluon fusion at
leading order.
%
Only the fermion triangle loop diagram (right) is
directly sensitive to the Higgs trilinear coupling
$\lambda$.
%
In the SM, the fermion loops are dominated by the
contribution from the top quark.
}
\label{fig:hhFeyn}
\end{center}
\end{figure}
In Fig.~\ref{fig:hhFeyn} we show representative Feynman diagrams
for LO Higgs pair production in gluon fusion.
%
The non-trivial interplay between the heavy quark box and the triangle loop diagrams
can lead to either constructive or destructive interference
and complicates the extraction of
the trilinear coupling
$\lambda$ from the measurement of the Higgs pair
production cross-section.
%
Higher-order corrections~\cite{deFlorian:2013jea,Frederix:2014hta}
are dominated by gluon radiation
from either the initial state gluons or from the heavy quark loops.
The total inclusive cross-section for this process is
known up to NNLO~\cite{deFlorian:2013jea}.
%
Resummed NNLO+NNLL calculations for Higgs pair production are
also available~\cite{deFlorian:2015moa},
leading to a moderate enhancement of the order of
a few percent as compared to the fixed-order NNLO calculation.
To achieve the correct higher-order value of the
integrated cross-section, we rescale our LO signal sample to match the
NNLO+NNLL
inclusive calculation.
This corresponds to
a $K$-factor $\sigma_{\rm NNLO+NNLL}/\sigma_{\rm LO}=2.4$, as indicated
in Table~\ref{tab:samples}.
Parton level signal events are then showered with the {\tt Pythia8} Monte
Carlo~\cite{Sjostrand:2007gs,Sjostrand:2014zea}, version {\tt v8.201}.
We use the default settings for the modeling
of the underlying event (UE), multiple parton
interactions (MPI), and PU, by means
of the Monash 2013 tune~\cite{Skands:2014pea},
based on the NNPDF2.3LO PDF set~\cite{Ball:2012cx,Ball:2013hta}.
\subsection{Backgrounds}
Background samples are generated at leading order
with {\tt SHERPA}~\cite{Gleisberg:2008ta} {\tt v2.1.1}.
As in the case of the signal generation,
the NNPDF 3.0 $n_f = 4$ LO set with strong coupling
$\alpha_s(m_Z^2)=0.118$ is used for all samples, and
we use as
factorisation and renormalisation scales $\mu_F=\mu_R=H_T/2$.
We account for all relevant background
processes that can mimic the
$hh\to 4b$ signal process.
This includes QCD $4b$ multi-jet production, as well as
QCD $2b2j$ and $4j$ production, and top quark pair
production.
The latter is restricted to the fully hadronic final state,
since
leptonic decays of top quarks can be removed by requiring
a lepton veto.
Single Higgs production processes such as $Z(\to b\bar{b})h(\to b\bar{b})$
and $t\bar{t}h(\to b\bar{b})$ (see Appendix~\ref{app:singlehiggs})
along with electroweak backgrounds, {\it e.g.}, $Z(\to b\bar{b})b\bar{b}$,
are much smaller than the
QCD backgrounds~\cite{Wardrope:2014kya,deLima:2014dta}
and are therefore not included in the present analysis.
The LO cross-sections for
the background samples have been rescaled so that the integrated
distributions reproduce known higher-order QCD results.
For the $4j$ sample, we rescale the LO cross-section
using the {\tt BLACKHAT}~\cite{Bern:2011ep}
calculation, resulting in
an NLO/LO $K$-factor of 0.6.
For the $4b$ and $2b2j$ samples NLO/LO $K$-factors of 1.6 and 1.3
respectively have been determined
using {\tt MadGraph5\_aMC@NLO}~\cite{Alwall:2014hca}.
Finally, the LO cross-section for $t\bar{t}$ production has been rescaled
to match the NNLO+NNLL calculation of Ref.~\cite{Czakon:2013goa}, leading
to a $K$-factor of 1.4.
The $K$-factors that we use to rescale
the signal and background samples are summarised in
Table~\ref{tab:samples}.
\begin{table}[h]
\small
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
Process & Generator & $N_{\mathrm{evt}}$ & $\sigma_{\mathrm{LO}}$ (pb) & $K$-factor \\
\hline
\hline
$pp \to hh\to 4b$ & {\tt MadGraph5\_aMC@NLO} & 1M & $6.2\cdot10^{-3}$ & 2.4 (NNLO+NNLL~\cite{deFlorian:2013jea,deFlorian:2015moa}) \\
\hline
\hline
$pp \to b\bar{b}b\bar{b}$ & {\tt SHERPA} & 3M &$1.1 \cdot10^3$ & 1.6 (NLO~\cite{Alwall:2014hca}) \\
$pp \to b\bar{b}jj$ & {\tt SHERPA} & 3M & $2.7 \cdot 10^5$ & 1.3 (NLO~\cite{Alwall:2014hca}) \\
$pp \to jjjj$ & {\tt SHERPA} & 3M & $9.7\cdot 10^6$ & 0.6 (NLO~\cite{Bern:2011ep})\\
$pp \to t\bar{t}\to b\bar{b}jjjj$ & {\tt SHERPA} & 3M & $2.5\cdot 10^3$ & 1.4 (NNLO+NNLL~\cite{Czakon:2013goa})\\
\hline
\end{tabular}
\caption{\small Details of the signal and background Monte
Carlo samples used in this work.
%
Also provided are the inclusive $K$-factors
which are applied to reproduce the known
higher-order results. \label{tab:samples}
}
\end{center}
\end{table}%
At the generation level, the following loose selection
cuts are applied to
background events.
Each final-state particle in the hard process must have $p_T \ge 20$ GeV, and be located
in the central rapidity
region with
$| \eta | \le 3.0$.
At the matrix-element level
all final-state particles must also be separated by a minimum $\Delta R_{\mathrm{min}} =0.1$.
We have checked that these generator-level cuts are loose enough to have
no influence over the analysis cuts.
From Table~\ref{tab:samples}
we see that the $t\bar{t}$ and QCD $4b$ cross-sections are of
the same order of magnitude. However the former can be efficiently
reduced by using top quark reconstruction criteria.
The $bbjj$ cross-section is more than two orders
of magnitude larger than the $4b$ result, but will be suppressed
by the light- and charm-jet mis-identification rates
required for it to contribute to the $4b$ final state.
As a cross-check of the {\tt SHERPA}
background cross-sections reported in Table~\ref{tab:samples}, we have produced leading-order
multi-jet samples
using {\tt MadGraph5\_aMC@NLO},
benchmarked with the results for the same processes reported in
Ref.~\cite{Alwall:2014hca}.
Using common settings, we find
agreement, within scale uncertainties, between the
{\tt MadGraph5\_aMC@NLO} and {\tt SHERPA} calculations of
the multi-jet backgrounds.
\subsection{Modelling of detector resolution}
\label{sec:detectormodeling}
While it is beyond the scope of this work to perform a full
detector simulation, it is important to include an estimate of detector
effects in the analysis, particularly for the finite energy
and angular resolutions which directly
degrade the reconstruction of important kinematic variables, such as
the invariant mass of the Higgs candidates.
Here we simulate the finite energy resolution of the ATLAS and CMS
hadronic calorimeters by applying a Gaussian smearing of the transverse
momentum $p_T$ with mean zero and standard deviation $\sigma_E$ for all
final-state particles before jet clustering, that is,
\begin{equation}
\label{eq:smearing}
p_T^{(i)} \, \to \, p_T^{(i)\prime}= \left( 1+ r_i\cdot\sigma_E \right)\, p_T^{(i)} \, , \quad
i=1,\ldots,N_{\rm part} \, ,
\end{equation}
with $r_i$ a univariate Gaussian random number, different for each
of the $N_{\rm part}$ particles in the event.
We take as a baseline value for the transverse momentum smearing a
factor of $\sigma_E=5\%$.
To account for the finite angular resolution of the calorimeter,
the $\left( \eta,\phi\right)$ plane is divided into regions of
$\Delta \eta \times \Delta \phi=0.1\times 0.1$,
and each final-state particle
falling within a given cell is assigned the $\eta$
and $\phi$ values of the center of the
corresponding cell.
Finally, the energy
of each final-state particle
is recalculated from the smeared $p_T^\prime$,
$\eta^\prime$ and $\phi^\prime$ values to ensure that the resulting
four-momentum is that of a light-like particle, since we neglect all
jet constituent masses in this analysis.
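For concreteness, the combined effect of the smearing of Eq.~(\ref{eq:smearing}) and of the angular granularity can be written as the following minimal Python sketch; the array-based representation of the particle kinematics and the function name are illustrative assumptions, not part of our analysis code:
\begin{verbatim}
import numpy as np

def smear_and_discretize(pt, eta, phi, sigma_E=0.05, cell=0.1, rng=None):
    """Toy detector model: Gaussian pT smearing as in Eq. (eq:smearing),
    followed by snapping (eta, phi) to the centre of 0.1 x 0.1 cells."""
    rng = np.random.default_rng() if rng is None else rng
    pt = np.asarray(pt, dtype=float)
    # pT -> (1 + r * sigma_E) * pT, with r a standard normal per particle
    pt_s = (1.0 + sigma_E * rng.standard_normal(pt.shape)) * pt
    # angular granularity: assign each particle the centre of its cell
    eta_s = (np.floor(np.asarray(eta) / cell) + 0.5) * cell
    phi_s = (np.floor(np.asarray(phi) / cell) + 0.5) * cell
    return pt_s, eta_s, phi_s
\end{verbatim}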
Our modelling of detector simulation has been tuned
to lead to a mass resolution of
the reconstructed Higgs candidates consistent
with the hadronic mass resolutions of the
ATLAS and CMS detectors~\cite{Aad:2012gxa,Chatrchyan:2013zna,Aad:2014xzb},
as discussed in Sect.~\ref{sec:pileup}.
\section{Multivariate analysis}
\label{sec:mva}
At the end of the loose cut-based analysis,
by combining the three event topologies,
we obtain a signal significance of $S/\sqrt{B}\simeq 0.8~(1.4)$
with all backgrounds (only QCD $4b$) considered.
This section describes how this signal significance
can be enhanced when the cut-based analysis
is complemented by multivariate techniques.
These are by now a mature tool
in high-energy physics data analysis, opening
new avenues to improve the performance
of many measurements and searches at high-energy colliders.
In particular, the classification of events into
signal and
background processes by means of MVAs is
commonly used in LHC
applications~\cite{Baldi:2014pta,Aaltonen:2012qt,
Wardrope:2014kya,Chatrchyan:2013zna,Dall'Osso:2015aia,Kang:2015uoc}.
In this section we first present the specific MVA that we use,
based on feed-forward multi-layer neural networks.
We then introduce the input variables,
including the jet substructure
variables, and present the signal significance obtained
by applying the MVA.
Finally, we assess the robustness of the MVA strategy in
the case of significant contamination from pileup.
\subsection{Deep artificial neural networks}
The specific type of MVA that we use to
disentangle signal and background events is
a multi-layer feed-forward artificial neural network (ANN),
known as a {\it perceptron}.\footnote{These ANNs are the same
as those used to parametrize Parton Distribution Functions
in the NNPDF global analyses~\cite{DelDebbio:2004qj,Ball:2008by,Ball:2011mu,Ball:2010de}.}
This family of ANNs is also known as {\it deep neural networks},
due to their multi-layered architecture.
The MVA inputs are a set of kinematic variables describing the
signal and background
events which satisfy the requirements of the
cut-based analysis.
The output of the trained ANNs also allows for the identification,
in a fully automated way,
of the most relevant variables in the discrimination between
signal and background.
In this work, the ANN that we use has the following architecture:
\begin{equation}
\label{eq:nn1}
N_{\mathrm{var}}\times5\times3\times1 \, ,
\end{equation}
where $N_{\mathrm{var}}$ represents the number of input variables for the MVA,
which is different in the resolved, intermediate, and boosted categories.
All neural-network layers use a sigmoid activation function, allowing
for a probabilistic
interpretation of the ANN output.
In Fig.~\ref{fig:nnarch} we show an illustrative
example of an ANN used in this work, corresponding
to the case of the boosted category (thus $N_{\mathrm{var}}=21$, as we explain below).
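As an illustration of this architecture, the forward pass of Eq.~(\ref{eq:nn1}) can be sketched in a few lines of Python; the random initialization shown here is purely illustrative, the actual weights being fixed by the training described below:
\begin{verbatim}
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def ann_output(k, weights, biases):
    """Forward pass of the N_var x 5 x 3 x 1 architecture of Eq. (eq:nn1);
    all layers use sigmoid activations, so the output lies in (0, 1)."""
    a = np.asarray(k, dtype=float)
    for W, b in zip(weights, biases):
        a = sigmoid(W @ a + b)
    return float(a[0])

# illustrative random initialization for the boosted category (N_var = 21)
sizes = [21, 5, 3, 1]
rng = np.random.default_rng(0)
weights = [rng.normal(size=(m, n)) for n, m in zip(sizes[:-1], sizes[1:])]
biases = [rng.normal(size=m) for m in sizes[1:]]
\end{verbatim}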
\begin{figure}[t]
\begin{center}
\vspace{-1cm}
\includegraphics[width=0.90\textwidth]{plots/bst_nnarch_noPU.pdf}
\vspace{-2cm}
\caption{\small Schematic of the Artificial
Neural Network (ANN)
used for the analysis of the
boosted
category, with $N_{\rm var}=21$ input variables and thus
the same number of neurons
in the first layer.
%
The color code in the neuron connections (the weights) is a heat map obtained
at the end of the Genetic Algorithms training,
with red indicating larger values and black indicating smaller values.
}
\label{fig:nnarch}
\end{center}
\end{figure}
The training of the ANN for the signal/background classification task
proceeds as follows.
Given a set of $N_{\mathrm{var}}$ kinematic variables $\{k\}_i$ associated with the event $i$, and a set of neural network weight
parameters $\{\omega\}$, we interpret the neural network output $y_i$
(the activation state of the
neuron in the last layer)
as the probability that the event $i$ originates from the signal process,
\begin{equation}
y_i = P(y^\prime_i=1|\{k\}_i, \{\omega\} )\, ,
\end{equation}
where $y_i^\prime$ represents the true classification of the event $i$, {\it i.e},
$y^\prime_i = 1$ for signal and $y^\prime_i = 0$ for background events.
With this interpretation, our general classification probability including background events is given by
\begin{equation}
P(y_i^\prime|\{k\}_i, \{\omega\}) = y_i^{y^\prime_i}(1-y_i)^{1-y^\prime_i} \, ,
\end{equation}
consequently we can define an error function $E(\{\omega\})$
to be minimized during the ANN training. In this case, the error function is
the cross-entropy function, defined as
\begin{eqnarray}
&&E(\{\omega\}) \equiv -\log\left(\prod_i^{N_{\text{ev}}} P(y_i^\prime|\{k\}_i, \{\omega\})\right)\nonumber\\
&&=
-\sum_i^{N_{\text{ev}}} \left[ y^\prime_i\log{y_i} + (1-y^\prime_i)\log{(1-y_i)}\right] \, ,
\label{cross-entropy}
\end{eqnarray}
where $N_{\text{ev}}$ is the number of
Monte Carlo events that are used for the ANN training.
%
The ANN is trained both on the signal and background MC events,
so it is important to ensure that the input MC sample is large enough
to avoid contamination from MC statistical fluctuations.
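A minimal sketch of the evaluation of Eq.~(\ref{cross-entropy}), assuming the ANN outputs and true labels are stored as NumPy arrays, reads:
\begin{verbatim}
import numpy as np

def cross_entropy(y_pred, y_true):
    """Cross-entropy error of Eq. (cross-entropy): y_pred are the ANN
    outputs in (0, 1), y_true the labels (1 = signal, 0 = background)."""
    y_pred = np.clip(np.asarray(y_pred, dtype=float), 1e-12, 1.0 - 1e-12)
    y_true = np.asarray(y_true, dtype=float)
    return -float(np.sum(y_true * np.log(y_pred)
                         + (1.0 - y_true) * np.log(1.0 - y_pred)))
\end{verbatim}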
The training of the neural networks therefore consists of the
minimization of the cross-entropy error,
Eq.~(\ref{cross-entropy}), which in this work is achieved using a
Genetic Algorithm (GA).
%
Genetic Algorithms~\cite{quevedo,tau,Abel:2014xta,Nesseris:2012tt} are
non-deterministic
minimization strategies suitable for the solution
of complex optimization problems, for instance when a very large number
of quasi-equivalent minima are present.
%
GAs are inspired by natural selection processes
that emulate biological evolution.
%
In our case, the GA training is performed for a very large
number of generations, $N_{\rm gen}=5\cdot 10^{4}$, to avoid the risk of
under-training.
%
We have verified that if a much larger number of generations
are used, the results are unchanged.
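The listing below is a deliberately simplified sketch of such a mutation-plus-selection GA, shown only to illustrate the logic; the actual training uses the more refined algorithms of the references above, run for $N_{\rm gen}=5\cdot 10^4$ generations:
\begin{verbatim}
import numpy as np

def ga_minimize(loss, n_par, n_gen=1000, pop=40, sigma=0.1, seed=0):
    """Minimal genetic-algorithm sketch (selection + Gaussian mutation)
    minimizing `loss` over a flat vector of n_par network parameters."""
    rng = np.random.default_rng(seed)
    population = [rng.normal(size=n_par) for _ in range(pop)]
    for _ in range(n_gen):
        best = min(population, key=loss)
        # next generation: keep the best member and mutate copies of it
        population = [best] + [best + rng.normal(scale=sigma, size=n_par)
                               for _ in range(pop - 1)]
    return min(population, key=loss)
\end{verbatim}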
%
In addition,
in order to avoid the possibility of over-fitting,
we have used a cross-validation stopping
criterion, in particular the same one as
that used in the NNPDF3.0 analysis~\cite{Ball:2014uwa}.
%
This cross-validation proceeds by dividing the input MC dataset into two disjoint sets,
using one for training the ANN and the other for validation: the optimal
stopping point is then given by the minimum of the error function
Eq.~(\ref{cross-entropy}) evaluated on the validation sub-sample.
%
This indicates the point where
the ANN begins to train upon statistical fluctuations
in the input MC samples, rather than learning
the underlying (smooth) physical distributions.
\subsection{Input kinematic variables}
\label{sec:input}
In this work we use different sets of
input variables for the three categories.
In the case of large-$R$ jets, we exploit the available
information on jet substructure.
For the three categories, boosted, intermediate and resolved,
the following common variables are used as input to the MVA:
\begin{itemize}
\item The transverse momenta of the leading and subleading Higgs, $p_{T,h_1}$ and $p_{T,h_2}$.
\item The transverse momentum of the reconstructed Higgs pair, $p_{T,hh}$.
\item The invariant masses of the leading and sub-leading Higgs candidates, $m_{h,1}$ and $m_{h,2}$.
\item The invariant mass of the reconstructed Higgs pair, $m_{hh}$.
\item The separation in the $\phi$--$\eta$ plane
between the two Higgs candidates, $\Delta R_{hh}$.
\item The separation in $\eta$ between the two Higgs candidates, $\Delta \eta_{hh}$.
\item The separation in $\phi$ between the two Higgs candidates, $\Delta \phi_{hh}$.
\end{itemize}
In addition, in the boosted category we use
the transverse momenta of the AKT03 subjets of the leading
($p_{T,h_{1,1}}$ and $p_{T,h_{1,2}}$) and sub-leading ($p_{T,h_{2,1}}$ and $p_{T,h_{2,2}}$) Higgs candidates.
%
In the resolved category instead,
the corresponding variables are
the transverse momenta $p_{T,i}$ of the four leading
$b$-tagged small-$R$ jets in the event.
%
In the intermediate category, we use the
transverse momenta of the subjets
from the large-$R$ jet $p_{T,h_{1,1}}$ and $p_{T,h_{1,2}}$ and the
transverse momenta $p_{T,i}$ of the two leading
$b$-tagged small-$R$ jets.
%
Therefore, we have 13 variables which are common to the three categories.
In the boosted and intermediate categories, we also include the jet substructure
variables introduced in Sect.~\ref{sec:analysis} for the
large-$R$ jets: the $k_t$ splitting scales
$\sqrt{d_{12}}$, the ratio of 2-to-1 subjettiness $\tau_{12}$,
and the ratios of energy correlation functions $C^{(\beta)}_2$ and
$D_2^{(\beta)}$.
%
This leads to
a total of $N_{\mathrm{var}}=13$, 17 and 21 variables for the
resolved, intermediate, and boosted categories, respectively.
Given that the MVA is able to identify the most discriminatory variables
in an automated way,
and to suppress those which have little effect, it is advantageous to
include a wide array of input variables.
This is one of the main advantages of ANNs in this context:
their inherent redundancy means that
adding additional information, even if it carries very little weight,
should not degrade
the classification power of the MVA.
\subsection{MVA results}
\label{sec:signalsignificance}
We now present the results of the MVA, first without PU, and then
later including the effects of PU.
First of all, in Fig.~\ref{fig:nnresponse} we show the distribution of
the ANN output at the end of the GA minimization,
separately for the
boosted, intermediate and resolved categories.
All distributions are normalized to unit integral.
The separation between signal and background is achieved by introducing
a cut, $y_{\rm cut}$, on the ANN output, so that MC events with $y_i\ge
y_{\rm cut}$ are classified as signal events, and those with
$y_i <
y_{\rm cut}$ as background events.
Therefore,
the more differentiated the distribution of the ANN output is
for signal and background events, the more efficient
the MVA discrimination will be.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.65\textwidth]{plots/Boosted_disc_noPU.pdf}
\includegraphics[width=0.48\textwidth]{plots/Intermediate_disc_noPU.pdf}
\includegraphics[width=0.48\textwidth]{plots/Resolved_disc_noPU.pdf}
\caption{\small The distributions, at the end of the
GA training,
for the signal and background MC events in the three categories:
boosted (upper plot), intermediate (lower left plot) and
resolved (lower right plot), as a function of the ANN output.
}
\label{fig:nnresponse}
\end{center}
\end{figure}
From Fig.~\ref{fig:nnresponse} we see that in the boosted category the MVA can produce
a clear discrimination between signal and background, with the two distributions
forming peaks at their respective optimal limits.
This indicates that introducing a suitable cut
$y_{\rm cut}$
in the ANN output will substantially reduce the background,
while keeping a reasonable signal efficiency.
The performance of the MVA discrimination is similar,
although slightly worse, in the intermediate
and resolved categories.
The results for the signal selection efficiency and the
background rejection rate as a function of the cut in the ANN output
$y_{\rm cut}$
define the so-called Receiver-Operating Characteristic (ROC)
curve, shown in Fig.~\ref{fig:exampleroc}.
It is clear that we can achieve high signal efficiency by using
a small value of $y_{\rm cut}$, but such a choice would be
affected by poor background
rejection.
Conversely, using a higher value of the cut will increase the background rejection at the
cost of a reduced signal efficiency.
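For reference, the ROC points can be obtained by a simple scan over $y_{\rm cut}$; the sketch below assumes the ANN outputs for signal and background MC events are available as arrays:
\begin{verbatim}
import numpy as np

def roc_points(y_sig, y_bkg, n_cuts=101):
    """Signal efficiency and background rejection as functions of y_cut,
    i.e. the points of the ROC curve."""
    y_sig, y_bkg = np.asarray(y_sig), np.asarray(y_bkg)
    cuts = np.linspace(0.0, 1.0, n_cuts)
    eff_s = [(y_sig >= c).mean() for c in cuts]
    rej_b = [1.0 - (y_bkg >= c).mean() for c in cuts]
    return cuts, eff_s, rej_b
\end{verbatim}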
As could already be inferred from the distributions of the neural
network output in Fig.~\ref{fig:nnresponse}, we find
that our MVA is reasonably efficient
in discriminating signal over background.
The performance is best in the case of the boosted category,
and then slightly worse in the resolved
and intermediate categories, consistent with the distributions of
the ANN outputs in
Fig.~\ref{fig:nnresponse}.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.49\textwidth]{plots/roc_noPU.pdf}
\includegraphics[width=0.49\textwidth]{plots/nev2_noPU.pdf}
\caption{\small Left: ROC curve for the background rejection rate as a function of the signal
selection efficiency, as the cut $y_{\rm cut}$
in the ANN output is varied.
%
Right: Number of signal (dashed) and background (solid)
events expected at the HL-LHC as a function of $y_{\rm cut}$.
}
\label{fig:exampleroc}
\label{fig:nev2}
\end{center}
\end{figure}
It is useful to estimate, for each value of
the cut in the ANN output $y_{\rm cut}$, how many
signal and background events are expected at the HL-LHC
with $\mathcal{L}=3$ ab$^{-1}$.
This comparison is shown in
Fig.~\ref{fig:nev2}.
We observe that
in the boosted category, for a value $y_{\rm cut}\simeq 0.9$
we end up with around 300 signal events and $10^4$ background
events.
Similar results are obtained in the intermediate and resolved
categories: in the former we find 130 ($3\cdot 10^3$) signal (background)
events for $y_{\rm cut}\simeq 0.85$ (0.60), and in the latter
630 ($10^5$) signal (background) events for
$y_{\rm cut}\simeq 0.6$.
Therefore, the MVA achieves a
substantial background suppression
with only a
moderate reduction of signal efficiency.
A useful property of MVAs such as the one used in our
analysis
is that they can provide direct physical insight about which of the
input variables contribute to the separation between
signal and background.
In the case of ANNs, this can be quantified by computing the sum
of the absolute values of all the weights connected to a given
input neuron $i$, that is
\begin{equation}
\label{eq:totweight}
\omega^{\rm (tot)}_i \equiv \sum_{k=1}^{n^{(2)}} \Big|\omega^{(2)}_{ki}\Big| \, ,
\qquad i=1,\ldots,N_{\rm var} \, ,
\end{equation}
with $\omega^{(2)}_{ki}$ the value of the weight connecting
the $k$-th neuron of the second layer with the $i$-th neuron of
the first (input) layer, and $n^{(2)}=5$ the number of
neurons in the second layer.
Those input variables with a larger value of $\omega^{\rm (tot)}_i$ will be those
that play a more significant role in enhancing the signal
discrimination using the MVA.
We note however
that the estimate provided
by Eq.~(\ref{eq:totweight}) is necessarily qualitative.
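In practice, Eq.~(\ref{eq:totweight}) amounts to a column-wise sum over the absolute values of the first trained weight matrix; a one-line sketch (assuming the weights are stored as a $(5,N_{\rm var})$ NumPy array) is:
\begin{verbatim}
import numpy as np

def total_input_weights(W2):
    """Eq. (eq:totweight): for each input variable i, sum |w2_{ki}| over
    the n2 = 5 neurons of the second layer; W2 has shape (5, N_var)."""
    return np.sum(np.abs(np.asarray(W2)), axis=0)
\end{verbatim}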
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.49\textwidth]{plots/res_wgthist_noPU.pdf}
\includegraphics[width=0.49\textwidth]{plots/int_wgthist_noPU.pdf}
\includegraphics[width=0.60\textwidth]{plots/bst_wgthist_noPU.pdf}
\vspace{-0.5cm}
\caption{\small
Distribution of the total associated weight,
Eq.~(\ref{eq:totweight}) for each of the $N_{\rm var}$ input
variables of the resolved (upper left), intermediate (upper right)
and boosted (lower plot)
categories.
}
\label{fig:nnweights}
\end{center}
\end{figure}
In Fig.~\ref{fig:nnweights} we show
the distribution of the total associated weight,
Eq.~(\ref{eq:totweight}) for each of the $N_{\rm var}$ input
variables of the three categories, using the
notation for the kinematic variables
as in Sect.~\ref{sec:input}.
In the
resolved category, the variables that carry
a higher discrimination power
are the transverse momenta of the two reconstructed Higgs candidates and
their invariant masses.
In the case of the boosted category, the invariant mass distribution
of the Higgs candidates is also the most discriminatory
variable, followed by the subjet $p_T$ distributions and
substructure variables such as $C_2^{(\beta)}$ and
$D_2^{(\beta)}$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.48\textwidth]{plots/ssb_noPU.pdf}
\includegraphics[width=0.48\textwidth]{plots/sb_noPU.pdf}
\caption{\small
The values of the signal significance, $S/\sqrt{B}$, and of the
signal over background ratio, $S/B$, for the boosted, intermediate
and resolved categories as a function of the cut
$y_{\rm cut}$ in the ANN output.
%
The $y_{\rm cut}=0$
results are those at the end of the cut-based
analysis.
}
\label{fig:sb_mva}
\end{center}
\end{figure}
The results for the signal significance $S/\sqrt{B}$ and
the signal over background ratio
$S/B$ as a function of $y_{\rm cut}$
for the three categories are given in
Fig.~\ref{fig:sb_mva}.
The values
for $y_{\rm cut}=0$ correspond to those at
the end of the loose cut-based analysis.
We observe how in the three
categories there is a marked improvement in signal
significance as compared to the pre-MVA results.
We also observe a substantial enhancement in $S/B$, arising
from the background suppression achieved by the MVA, reaching
values of 1\%, 6\% and 3.5\% in the resolved,
intermediate and boosted categories.
This improvement in $S/B$ is crucial to ensure the feasibility
of this measurement, since it allows systematic
uncertainties in the background determination to
be at most of a similar size.
The optimal value of the cut in the
ANN output, $y_{\rm cut}$, can be determined from the maximisation of $S/\sqrt{B}$,
ensuring that the number of signal events $N_{\rm ev}$
expected at the HL-LHC does not become too low.
In addition, we require
that the number of MC events used to define the signal
category (events with $y_i \ge y_{\rm cut}$)
is sufficiently large in order to avoid the biases and statistical
fluctuations associated with a small training sample.
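A schematic implementation of this optimisation, with an illustrative (not our actual) threshold on the minimum number of expected signal events, is:
\begin{verbatim}
import numpy as np

def optimal_cut(y_sig, y_bkg, w_sig, w_bkg, n_sig_min=100.0):
    """Scan the ANN-output cut and return the value maximizing S/sqrt(B),
    subject to a minimum expected signal yield. The per-event weights
    w = sigma * L / N_MC convert MC counts into expected event numbers."""
    y_sig, y_bkg = np.asarray(y_sig), np.asarray(y_bkg)
    w_sig, w_bkg = np.asarray(w_sig), np.asarray(w_bkg)
    best_cut, best_sig = 0.0, 0.0
    for cut in np.linspace(0.0, 0.99, 100):
        S = w_sig[y_sig >= cut].sum()
        B = w_bkg[y_bkg >= cut].sum()
        if S >= n_sig_min and B > 0.0 and S / np.sqrt(B) > best_sig:
            best_cut, best_sig = cut, S / np.sqrt(B)
    return best_cut, best_sig
\end{verbatim}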
In Table~\ref{table:cutflowMVA} we quote, for the optimal
value of $y_{\rm cut}$
in each category,
the number of signal and background events $N_{\rm ev}$ expected
at the HL-LHC, as well as $S/\sqrt{B}$ and $S/B$.
For completeness, we also include the corresponding
pre-MVA results.
\begin{table}[t]
\centering
\begin{tabular}{|c|l|c|c|c|c|}
\hline
\multicolumn{6}{|c|}{HL-LHC, no PU} \\
\hline
\hline
Category & & $N_{\rm ev}$ signal & $N_{\rm ev}$ back & $S/\sqrt{B}$ & $S/B$ \\
\hline
\hline
\multirow{2}{*}{Boosted} & $y_{\rm cut}=0$ & 440 & $7.6\cdot 10^5$ & 0.5 & $6\cdot 10^{-4}$ \\
& $y_{\rm cut}=0.90$ & 290 & $1.2\cdot 10^4$ & 2.7 & 0.03 \\
\hline
\hline
\multirow{2}{*}{Intermediate} & $y_{\rm cut}=0$ & 280 & $5.3\cdot 10^5$
& 0.4 & $5\cdot 10^{-4}$ \\
& $y_{\rm cut}=0.85$ & 130 & $3.1\cdot 10^3$ & 2.3 & 0.04\\
\hline
\hline
\multirow{2}{*}{Resolved} & $y_{\rm cut}=0$ & 1500 & $1.5\cdot 10^{7}$ & 0.4 &$1\cdot 10^{-4}$ \\
& $y_{\rm cut}=0.60$ & 630 & $1.1\cdot 10^{5}$ & 1.9 & 0.01 \\
\hline
\end{tabular}
\caption{\small Post-MVA results, for the optimal value of the
ANN discriminant $y_{\rm cut}$ in the three categories, compared with the
corresponding
pre-MVA results ($y_{\rm cut}=0$).
%
We quote the number of signal and
background events expected for $\mathcal{L}=3$ ab$^{-1}$,
the signal significance $S/\sqrt{B}$ and
the signal over background ratio $S/B$.
%
The pre-MVA results correspond to row C2 in
Table~\ref{tab:cutflow_noPU_1}.
\label{table:cutflowMVA}
}
\end{table}
From Table~\ref{table:cutflowMVA} we see that
following the application of the MVA,
the signal significance in the boosted category increases
from 0.5 to 2.7, with $S/B$ increasing from $0.06\%$ to $3\%$.
For the intermediate and resolved categories, $S/\sqrt{B}$
increases from 0.4 to 2.3 and 1.9 respectively, with
the signal over background ratio rising from
$0.05\%$ and $0.01\%$ to 4\% and 1\%.
Combining the three categories, taking into
account all background components, we obtain the overall signal
significance:
\begin{equation}
\label{eq:soverb}
\left( \frac{S}{\sqrt{B}}\right)_{\rm tot} \simeq 4.0~(1.3) \, ,\quad
\mathcal{L}=3000~(300)\,{\rm fb}^{-1}\, ,
\end{equation}
The signal significance for
$\mathcal{L}=3$ ab$^{-1}$
is thus
well above the threshold for the observation of Higgs
pair production.
However, given that the HL-LHC will be a high-PU environment,
which will affect the description of the various
kinematic distributions used as input to the MVA,
it is essential to quantify the robustness of these
results
in a realistic environment including the effects of
significant PU.
It should be emphasized that MVAs such as the ANNs used in this work can always be understood as
a combined set of correlated cuts.
Once the ANNs have been trained, it is possible to compare kinematical distributions before and after the ANN cut to verify its impact.
This information would in principle allow a cut-based analysis to be performed, without the need for ANNs,
with similar results.
To illustrate this point,
in Fig.~\ref{fig:pt_H0_sub0_res_noPU_ANNcut} we show
the $p_T$ distribution of the leading AKT04 small-$R$ jets
and the invariant mass of reconstructed Higgs candidates in the resolved
category, comparing the pre-MVA results ($y_{\rm cut}=0$) with the post-MVA
results ($y_{\rm cut}=0.60$) for signal and background events.
%
The distributions are not normalized, to better visualize the effect
of the MVA cut.
%
Unsurprisingly, the ANN cut effectively selects events which
lead to similar kinematical distributions between signal
and background events.
%
In the case of the small-$R$ jets $p_T$ distribution, the
ANN cut favors the
high-$p_T$ region, while for the invariant mass distribution
only the region around the Higgs mass peak is selected for
background events.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.48\textwidth]{plots/pt_H0_sub0_res_noPU_ANNcut.pdf}
\includegraphics[width=0.48\textwidth]{plots/pt_H0_sub0_res_noPU_ANNcut_back_4b.pdf}
\includegraphics[width=0.48\textwidth]{plots/m_H0_res_noPU_ANNcut.pdf}
\includegraphics[width=0.48\textwidth]{plots/m_H0_res_noPU_ANNcut_back_4b.pdf}
\caption{\small
The $p_T$ distribution of the leading AKT04 small-$R$ jets (upper plots)
and
the invariant mass of reconstructed Higgs candidates (lower plots) in the resolved
category, comparing the pre-MVA results ($y_{\rm cut}=0$) with the post-MVA
results ($y_{\rm cut}=0.60$) for signal (left) and background (right plot) events.
%
In this case the distributions are not normalized, to better visualize the effects
of the MVA cut.
}
\label{fig:pt_H0_sub0_res_noPU_ANNcut}
\end{center}
\end{figure}
A particularly challenging aspect of our analysis is the modeling of the $2b2j$ and $4j$ backgrounds,
especially the latter, which requires extremely large MC samples.
In the analysis reported here, out of the original 3M generated $4j$ events, only around 100
survive the analysis cuts, and thus
these low statistics imply a potentially large uncertainty
in the calculation of the post-MVA $4j$ cross-section.
On the other hand, since the $4j$ cross-section
is always much smaller than the sum of the $4b$ and $2b2j$ components,
these low statistics should not qualitatively modify our conclusions above.
To verify this expectation explicitly, and to obtain a more
robust estimate of the background cross-section from mis-identified jets,
we have increased the size of the $2b2j$ and $4j$
background samples by a factor of 10, up to a total of 30M events each.
Processing these events through our analysis, including retraining the MVA, we find
$(S/\sqrt{B})_{\rm tot}=3.9$, consistent with Eq.~(\ref{eq:soverb}), indicating that the limited
statistics of the $4j$ background are not a limiting factor.
\subsection{Impact of PU in the MVA}
In this section we study how the MVA results are modified
when the analysis is performed including significant PU.
The loose cut-based analysis and the subsequent
MVA optimization have been performed using the same
settings as in the case without PU.
In Table~\ref{tab:cutflow_PU80_1}
we provide the pre-MVA cut flow in the case of PU80,
the corresponding version without PU being
Table~\ref{tab:cutflow_noPU_1}.
The interplay between the signal cross-sections and the various
background components is qualitatively unchanged as compared
to the no PU case.
\begin{table}[t]
\centering
\scriptsize
\input{table_PU80.tex}
\caption{\small Same as Table~\ref{tab:cutflow_noPU_1},
now for the case
of PU80+SK+Trim.
\label{tab:cutflow_PU80_1}}
\end{table}
In Table~\ref{table:cutflowMVA_PU} we compare the results
for the PU80+SK+Trim case between
the pre-MVA loose cut-based analysis and
the post-MVA results for the
optimal values of the ANN output cut $y_{\rm cut}$.
%
As in Table~\ref{table:cutflowMVA},
we also quote
the number of signal and
total background events expected
for $\mathcal{L}=3$ ab$^{-1}$
and the values of $S/\sqrt{B}$ and $S/B$.
We observe that the pre-MVA
signal significance is close
to the results of the simulations
without PU for the three categories.
We now find values for $S/\sqrt{B}$ of 0.4, 0.3 and 0.6, in the resolved,
intermediate and boosted categories, respectively, to be compared
with the corresponding values without PU, namely 0.4, 0.4 and 0.5.
The number of selected
signal events in each category at the
end of the cut-based analysis is only mildly affected
by PU.
The slight pre-MVA improvement in $S/\sqrt{B}$ for the
boosted case arises from a reduction in the number
of background events that are classified in this category
as compared to the case without PU.
Once the MVA is applied, the signal significance in the
resolved, intermediate and boosted
categories increases to 2.0, 1.9 and 1.5 respectively,
to be compared with the corresponding values
without PU, namely 1.9, 2.3 and 2.7.
Therefore, the post-MVA effect of PU on $S/\sqrt{B}$ is
a moderate degradation of the boosted and intermediate categories,
especially for the former,
while the resolved category is largely unchanged.\footnote{
The impact of PU on the separate significance of
the three categories exhibits some
dependence on the specific choice for $n_{\rm PU}$ and on the settings
of the PU subtraction strategy.
%
We find however that the
overall signal significance from combining the three
categories is similar in the $n_{\rm PU}=80$ and
$n_{\rm PU}=150$ cases.
}
We also observe that, due
to the MVA, the
signal over background ratio is increased from 0.007\%, 0.03\% and
0.1\% up to 1\%, 3\% and 1\% in the resolved, intermediate
and boosted categories respectively.
This indicates that while this measurement is still highly challenging,
requiring a careful extraction of the QCD
background from the data, it should be within reach.
\begin{table}[t]
\centering
\begin{tabular}{|c|l|c|c|c|c|}
\hline
\multicolumn{6}{|c|}{HL-LHC, PU80+SK+Trim} \\
\hline
\hline
Category & & $N_{\rm ev}$ signal & $N_{\rm ev}$ back & $S/\sqrt{B}$ & $S/B$ \\
\hline
\hline
\multirow{2}{*}{Boosted} & $y_{\rm cut}=0$ & 410 & $4.5\cdot 10^5$ & 0.6 & $ 10^{-3}$ \\
& $y_{\rm cut}=0.8$ & 290 & $3.7\cdot 10^4$ & 1.5 & 0.01 \\
\hline
\hline
\multirow{2}{*}{Intermediate} & $y_{\rm cut}=0$ & 260 & $7.7\cdot 10^5$ & 0.3 &
$3\cdot 10^{-4}$ \\
& $y_{\rm cut}=0.75$ & 140 & $5.6\cdot 10^3$ & 1.9 & 0.03 \\
\hline
\hline
\multirow{2}{*}{Resolved} & $y_{\rm cut}=0$ & 1800 & $2.7\cdot 10^7$
& 0.4 & $7\cdot 10^{-5}$ \\
& $y_{\rm cut}=0.60$ & 640 & $1.0\cdot 10^5$ & 2.0 & 0.01 \\
\hline
\end{tabular}
\caption{\small Same as Table~\ref{table:cutflowMVA}, now for the case
of PU80+SK+Trim.
\label{table:cutflowMVA_PU}
}
\end{table}
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.49\textwidth]{plots/roc_SKPU80.pdf}
\includegraphics[width=0.49\textwidth]{plots/nev2_SKPU80.pdf}
\caption{\small Same as Fig.~\ref{fig:nev2}
for
the PU80+SK+Trim case.
\label{fig:nev2_PU}}
\end{center}
\end{figure}
In Fig.~\ref{fig:nev2_PU}
we show the number of signal and background events that
are expected for $\mathcal{L}=3$ ab$^{-1}$
as a function of
$y_{\rm cut}$, together with the corresponding ROC curve.
The slight degradation of the boosted category in the case
of PU can be seen by comparing with the corresponding
results without PU in Fig.~\ref{fig:nev2}.
In Fig.~\ref{fig:sb_mva_PU} we show the signal significance,
$S/\sqrt{B}$, and the signal over background ratio,
$S/B$, accounting now for the effects of PU.
The corresponding results in the case without PU were shown in
Fig.~\ref{fig:sb_mva}.
As can be seen, the MVA-driven enhancement remains robust in the
presence of PU, with $S/\sqrt{B}$ only moderately degraded.
Therefore, the qualitative conclusions drawn
in the case without PU also hold when the analysis
is performed in a high-PU environment.
Since no specific effort has been made to
optimize PU subtraction, for instance by tuning the values
of the patch length $a$ in {\tt SoftKiller}
or the $p_T$ threshold during jet trimming,
we believe that
there should still be room for further improvement.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.48\textwidth]{plots/ssb_SKPU80.pdf}
\includegraphics[width=0.48\textwidth]{plots/sb_SKPU80.pdf}
\caption{\small
Same as Fig.~\ref{fig:sb_mva} for
the PU80+SK+Trim case.
}
\label{fig:sb_mva_PU}
\end{center}
\end{figure}
It is useful to quantify which of the MVA input variables
carry the highest discrimination power
in the case of PU, by means of
Eq.~(\ref{eq:totweight}),
and compare this with the corresponding
results without PU shown in Fig.~\ref{fig:nnweights}.
We have verified that
the relative weight of the different input variables to the MVA
is mostly unchanged in the case of PU.
In the resolved category, the highest total associated weight is carried
by the Higgs candidates $p_T$ and invariant mass, as well
as by the $p_T$ of the individual small-$R$ jets.
For the boosted category, the highest weight is carried by the
Higgs invariant mass, followed by
the Higgs $p_T$, $m_{hh}$, the $p_T$ of the AKT03 subjets and the
substructure variables, with a similar weighting among them.
In Table~\ref{table:cutflowMVA_fakes} we provide
the post-MVA number of signal and background events expected for
$\mathcal{L}=3$ ab$^{-1}$.
For the backgrounds, we quote
both
the total number, $N_{\rm ev}^{\rm tot}$,
and the QCD $4b$ component only,
$N_{\rm ev}^{\rm 4b}$.
We quote results for the no PU and PU80+SK+Trim cases.
We also quote in each case the corresponding values for the signal
significance and the signal over background ratio.
%
Note that the MVA is always trained on the inclusive background sample,
though differences
in the kinematic distributions of the $4b$ and $2b2j$ processes are
moderate,
see Fig.~\ref{fig:histoBack}.
%
From Table~\ref{table:cutflowMVA_fakes} one observes that
all categories exhibit
a marked improvement from eliminating the contamination
from light and charm jet mis-identification.
%
For instance, in the intermediate category,
$S/\sqrt{B}$ increases from 2.3 to 3.3 (1.9 to 2.9)
in the no PU (PU80) case, with similar improvements in the
resolved and boosted categories.
%
\begin{table}[t]
\centering
\scriptsize
\begin{tabular}{|c|c|c|c|c||c|c||c|c|}
\hline
Category & & signal & \multicolumn{2}{c||}{background} &
$S/\sqrt{B_{\rm tot}}$ & $S/\sqrt{B_{\rm 4b}}$
& $S/B_{\rm tot}$ & $S/B_{\rm 4b}$\\
& & $N_{\rm ev}$ & $N_{\rm ev}^{\rm tot}$ & $N_{\rm ev}^{\rm 4b}$ &
& & & \\
\hline
\hline
\multirow{2}{*}{Boosted} & no PU & 290 & $1.2\cdot 10^4$ & $8.0\cdot 10^3$ &
2.7 & 3.2 & 0.03 & 0.04 \\
& PU80+SK+Trim & 290 &$3.7\cdot 10^4$ & $1.2\cdot 10^4$ & 1.5 & 2.7 & 0.01 & 0.02 \\
\hline
\hline
\multirow{2}{*}{Intermediate} & no PU & 130 & $3.1\cdot 10^3$ & $1.5\cdot 10^3$ &
2.3 & 3.3 & 0.04 & 0.08 \\
& PU80+SK+Trim & 140 & $5.6\cdot 10^3$ & $2.4\cdot 10^3$ & 1.9 & 2.9 & 0.03 & 0.06 \\
\hline
\hline
\multirow{2}{*}{Resolved} & no PU & 630 & $1.1\cdot 10^5$ & $5.8\cdot 10^4$
& 1.9 & 2.7 & 0.01 & 0.01 \\
& PU80+SK & 640 & $1.0\cdot 10^5$ & $7.0\cdot 10^4$ & 2.0 & 2.6 & 0.01 & 0.01 \\
\hline
\hline
\multirow{2}{*}{\bf Combined} & no PU & \multicolumn{3}{c||}{}
& 4.0 & 5.3 & \multicolumn{2}{c|}{} \\
& PU80+SK+Trim & \multicolumn{3}{c||}{} & 3.1 & 4.7 & \multicolumn{2}{c|}{} \\
\hline
\end{tabular}
\caption{\small Post-MVA number of signal and background events
with $\mathcal{L}=3$ ab$^{-1}$.
%
For the backgrounds, both the total number, $N_{\rm ev}^{\rm tot}$,
and the $4b$
component only, $N_{\rm ev}^{\rm 4b}$, are shown.
%
Also provided are the values of the signal
significance and the signal over background ratio,
both separated in categories and for their combination.
%
We quote the results without PU and for PU80+SK+Trim.
\label{table:cutflowMVA_fakes}
}
\end{table}
In Table~\ref{table:cutflowMVA_fakes} we also provide
the results for $S/\sqrt{B}$ obtained by
combining the three categories.
Taking into
account all background components, we obtain for the case
of $n_{\rm PU}=80$
an overall signal
significance of
\begin{equation}
\left( \frac{S}{\sqrt{B}}\right)_{\rm tot} \simeq 3.1~(1.0) \, ,\quad
\mathcal{L}=3000~(300)\,{\rm fb}^{-1}\, ,
\end{equation}
indicating that a measurement of
Higgs pair production in the $b\bar{b}b\bar{b}$ final state at the HL-LHC
should be
above the threshold for observation, even when realistic PU conditions
are accounted for.
A similar signal significance is obtained in the case of
$n_{\rm PU}=150$.
Under the assumption that
the only relevant background would be the irreducible QCD $4b$ component,
one obtains instead
\begin{equation}
\left( \frac{S}{\sqrt{B_{\rm 4b}}}\right)_{\rm tot} \simeq 4.7~(1.5) \, ,\quad
\mathcal{L}=3000~(300)\,{\rm fb}^{-1}\, .
\end{equation}
Therefore, a measurement of Higgs pair production
in the $b\bar{b}b\bar{b}$ final state at the
HL-LHC might be even above the threshold for discovery, provided
the effects due to mis-identification of light and charm jets as
$b$-jets can be reduced.
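The combined significances quoted in this section are obtained by adding the per-category values in quadrature; this combination rule is an assumption of the illustration below, but it reproduces all the quoted totals:
\begin{verbatim}
import numpy as np

def combined_significance(per_category):
    """Combine per-category S/sqrt(B) values in quadrature, treating the
    exclusive categories as statistically independent."""
    return float(np.sqrt(np.sum(np.square(per_category))))

print(combined_significance([2.7, 2.3, 1.9]))  # ~4.0, no PU
print(combined_significance([1.5, 1.9, 2.0]))  # ~3.1, PU80+SK+Trim
\end{verbatim}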
\section{Pre-MVA loose cut-based analysis}
\label{sec:results}
In this section we present the results of the pre-MVA
loose cut-based analysis described in the previous section, and provide
cut-flows for the different analysis steps.
We study how the signal significance
is affected if only the $4b$ component of the
QCD multi-jet background is taken into account.
This section presents the results in an environment
without pileup; the following one contains those
obtained including significant PU.
\subsection{Cut-flow and signal significance}
Here we compare the cross-sections for
signal and background events at various
stages of the analysis.
We consider all relevant backgrounds (see Sect.~\ref{mcgeneration}),
and discuss how results are modified in the case where only the $4b$
background is considered.
In Table~\ref{tab:cutflowdetails}
the different
steps of the cut-flow in the present analysis are summarised,
separated into the boosted, intermediate,
and resolved topologies.
%
The different analysis steps proceed as follows:
\begin{table}[t]
\centering
\begin{tabular}{|c|c|c|c|}
\hline
& Boosted & Intermediate & Resolved \\
\hline
\hline
\multirow{2}{*}{{\bf C1a}} & $N_{\rm jets}^{R10}\ge 2$ & $N_{\rm jets}^{R04}\ge 2$, $N_{\rm jets}^{R10}=1$ &
$N_{\rm jets}^{R04}\ge 4$ \\
& \multicolumn{3}{c|}{+$p_T$ cuts and rapidity cuts} \\
\hline
\multirow{2}{*}{{\bf C1b}} & +$N_{\rm MDT}\ge 2$ & +$N_{\rm jets}^{R10}=1$ with MDT &
+Higgs reconstruction \\
& &
+Higgs reconstruction & \\
\hline
{\bf C1c} & \multicolumn{3}{c|}{ +$m_h$ window cut} \\
\hline
{\bf C2} & \multicolumn{3}{c|}{+$b$-tagging} \\
\hline
\end{tabular}
\caption{\small Definition of the cuts imposed successively for the three selections.
%
\label{tab:cutflowdetails}
}
\end{table}
\begin{itemize}
%
\item {\bf C1a}: check that we have at least
two large-$R$ jets (in the boosted case),
one large-$R$ jet and at least 2 small-$R$ jets (in the intermediate
case) and at least four small-$R$ jets (in the resolved case).
In addition,
require that these jets
satisfy the corresponding $p_T$ thresholds;
$p_T \ge 200$ GeV for large-$R$ jets and
$p_T \ge 40$ GeV for small-$R$ jets, as well as
the associated
rapidity acceptance constraints.
\item {\bf C1b}: the two leading large-$R$ jets must
be mass-drop tagged in the boosted category.
%
In the intermediate category, the large-$R$ jet
must also be mass-drop tagged.
%
\item {\bf C1c}: after the two Higgs candidates have been reconstructed,
their invariant masses are required to lie within a window around $m_H$,
in particular between 85 and 165 GeV, Eq.~(\ref{higgsmasswindow}).
\item {\bf C2}: the
$b$-tagging conditions are
imposed (see
Sect.~\ref{sec:btagging}), and the event is categorised exclusively
into one of the three topologies, according
to the hierarchy determined in Sect.~\ref{sec:categorisation}.
\end{itemize}
Signal and background events satisfying all the analysis cuts up to the
C2 level
are then used as input for the MVA training, to be described next
in Sect.~\ref{sec:mva}.
In Table~\ref{tab:cutflow_noPU_1} we collect
the values for the signal and background cross-sections
at the different analysis steps.
%
Results are divided into the resolved, intermediate and boosted categories,
and are inclusive up to the C2 level, where exclusivity is imposed.
%
In Table~\ref{tab:cutflow_noPU_1} we also provide the signal over
background ratio, $S/B$, and the signal
significance, $S/\sqrt{B}$, corresponding to an integrated
luminosity of $\mathcal{L}=3$ ab$^{-1}$.
%
These are computed either
taking into account all the background components or
the $4b$ QCD background only.
%
We find that after $b$-tagging, the $2b2j$ component is
of the same order of magnitude as the $4b$ component in all categories.
%
This implies that the signal significance at the end of the cut-based
analysis is degraded due to the contribution
of light and charm jets being mis-identified as $b$-jets.
\begin{table}[t]
\centering
\scriptsize
\input{table_noPU_1.tex}
$\,$ \\
\vspace{0.5cm}
\input{table_noPU_2.tex}
$\,$ \\
\vspace{0.5cm}
\input{table_noPU_3.tex}
\caption{\small The cross-sections
for the signal and the background
processes at different steps of the
analysis (see Table~\ref{tab:cutflowdetails}), for the resolved (upper),
intermediate (middle) and boosted
(lower table) categories, for the analysis
without PU.
%
For each step, the signal over
background ratio $S/B$, and the signal
significance $S/\sqrt{B}$ for
$\mathcal{L}=3$ ab$^{-1}$ are also provided, considering either
the total background or only the $4b$ component.
%
\label{tab:cutflow_noPU_1}}
\end{table}
In the boosted category, at the end of the loose cut-based
analysis, we find that around 500 events
are expected
at the HL-LHC, with a large number,
$\simeq 10^6$, of background events.
This leads to a pre-MVA signal significance of
$S/\sqrt{B}=0.5$ and a signal over background
ratio of $S/B=0.06\%$.
From Table~\ref{tab:cutflow_noPU_1}
it is also possible to compute the corresponding pre-MVA
expectations for the LHC Run II with
$\mathcal{L}=300$ fb$^{-1}$: one expects in the boosted
category around
50 signal events, with signal significance dropping down to
$S/\sqrt{B}\simeq 0.16$.
Such signal
significances could have been enhanced
by applying tighter selection requirements,
but our analysis cuts have been left deliberately loose
so that such optimisation may be performed by the MVA.
The resolved category benefits from higher signal yields,
but this enhancement is compensated for by the
corresponding
increase in the QCD multi-jet background.
In both resolved and intermediate categories
the signal significance is
$S/\sqrt{B}\simeq 0.4$,
similar to that of the boosted category.
A further
drawback of the resolved case is
that $S/B$
is substantially reduced as compared to the boosted and
intermediate cases.
Combining the results
from the boosted, intermediate and resolved categories,
we obtain an overall pre-MVA
significance for the observation of the Higgs pair production
in the $b\bar{b}b\bar{b}$ final
state at the HL-LHC
of $(S/\sqrt{B})_{\rm tot} \simeq 0.8$.
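This corresponds to combining the three exclusive (and thus statistically independent)
categories in quadrature, $\sqrt{0.5^2+0.4^2+0.4^2}\simeq 0.8$.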
\subsection{The role of light and charm jet mis-identification}
One of the main differences between the present study
and previous works is the inclusion of both the irreducible
and the reducible background components, which allows us to
quantify
the impact of light and charm jet mis-identification.
Two recent studies that have also addressed the
feasibility of measuring SM Higgs pair production in the $b\bar{b}b\bar{b}$
final state are those of the UCL group~\cite{Wardrope:2014kya} and of
the
Durham group~\cite{deLima:2014dta}.
The UCL study is based
on requiring at least four $b$-tagged $R=0.4$ anti-$k_T$ jets
in central acceptance with $p_T \ge 40$ GeV, which are
then used to construct dijets (Higgs candidates) with
$p_T \ge 150$ GeV, $85 \le m_{\rm dijet} \le 140$ GeV
and $\Delta R \le 1.5$ between the two components
of the dijet.
In addition to the basic selection cuts, the constraints
from additional kinematic variables are included by means of a
Boosted Decision Tree (BDT) discriminant.
The backgrounds included are the $4b$ and
$2b2c$ QCD multijets, as well as
$t\bar{t}$, $Zh$, $t\bar{t}h$ and $hb\bar{b}$.
For the HL-LHC, a signal significance of $S/\sqrt{B}\simeq 2.1$
is obtained.
The Durham group study~\cite{deLima:2014dta} requires events
to have two $R=1.2$ C/A jets with $p_T\ge 200$ GeV, and in
addition
two $b$-tagged subjets inside each large-$R$ jet with
$p_T \ge$ 40 GeV each.
To improve the separation between
signal and background, both the BDRS
method and the Shower Deconstruction (SD)~\cite{Soper:2011cr,Soper:2012pb}
technique are used.
The backgrounds considered are QCD $4b$ as well as $Zb\bar{b}$, $hZ$ and
$hW$.
At the HL-LHC, their best result is obtained by requiring two
SD-tagged large-$R$ jets, which leads to $S/\sqrt{B}\simeq 2.1$.
Using the BDRS tagger
results in slightly poorer performance.
%
From our results in Table~\ref{tab:cutflow_noPU_1}, we observe
that the signal significance for the boosted, intermediate,
and resolved categories is increased to 1.1, 0.6 and 0.6, respectively,
when only the QCD $4b$ background is included.
%
Combining
the signal significance in the three categories,
we
obtain $(S/\sqrt{B_{\rm 4b}})_{\rm tot}\simeq 1.4$, twice
as large as the result found when
all background components are included.
%
Note the importance of
the combination of the three exclusive event topologies,
as opposed to the exploitation of a single specific category.
%
Taking into account the loose selection cuts, we
see that
our pre-MVA results including only the $4b$ background are consistent
with those reported in previous studies.
From Table~\ref{tab:cutflow_noPU_1} we
can also assess the interplay
between the reducible and irreducible components of the
QCD backgrounds.
%
In all cases, the $4b$ and $2b2j$ components have comparable
magnitudes within the uncertainties from missing higher-order
corrections.
%
On the other hand, the $4j$ component
is always substantially smaller.
%
So while the $4j$ component can be safely
neglected, the inclusion of the
$2b2j$ component is essential to assess the feasibility
of measuring Higgs pairs in this final state robustly,
especially
in the boosted category.
%
This has the important
consequence that a promising avenue to improve the prospects
of this measurement would be to reduce, as much as possible,
the light and charm jet mis-identification rate.
In Fig.~\ref{fig:histoBack} we show a
comparison
of the shapes of the $4b$ and $2b2j$
components of the QCD background for the transverse momentum
$p_T^h$ of the leading
Higgs candidate and for the invariant
mass $m_{hh}$ of the
reconstructed di-Higgs system in the resolved
and boosted categories.
%
The two components possess a rather similar shape
for the two distributions, albeit with some
differences.
%
In the boosted
category, the $4b$ component exhibits a less steep fall-off of
the $p_T^h$ distribution at large $p_T$,
while in the resolved case
the $2b2j$ component has a slightly harder
distribution of the invariant
mass $m_{hh}$.
We also observe that the $2b2j$ distributions
are affected by somewhat larger
Monte Carlo fluctuations as compared to $4b$, despite the large size
of the initial sample.
\begin{figure}[t]
\begin{center}
\includegraphics[width=0.49\textwidth]{plots/pt_h0_C2_res_back_noPU.pdf}
\includegraphics[width=0.49\textwidth]{plots/pt_h0_C2_bst_back_noPU.pdf}
\includegraphics[width=0.49\textwidth]{plots/m_hh_C2_res_back_noPU.pdf}
\includegraphics[width=0.49\textwidth]{plots/m_hh_C2_bst_back_noPU.pdf}
\caption{\small
Upper plots: comparison
of the shapes of the $4b$ and $2b2j$
components of the QCD background for the $p_T^h$ of the leading
Higgs candidate in the resolved
(left plot) and boosted (right plot) categories.
%
Lower plots: same comparison for the invariant
mass $m_{hh}$ of the
reconstructed di-Higgs system.
}
\label{fig:histoBack}
\end{center}
\end{figure}
In the resolved category,
the cross-section before
$b$-tagging is two orders
of magnitude larger in the $2b2j$
sample as compared to the $4b$ sample.
After $b$-tagging, a naive assessment would
suggest a suppression of the $2b2j$ cross-section by a factor $(f_l/f_b)^2 \simeq
1.5\cdot 10^{-4}$, as compared to the $4b$ component,
since a total of four $b$-tags are required to classify the
event as a Higgs candidate.
In this case the ratio of $2b2j$ over $4b$ would be
around $3\%$, and therefore negligible.
%
While we have checked that this expectation is borne
out at the parton level,
we find that when parton shower effects
are accounted for, the situation is different, due both to radiation of $b\bar{b}$ pairs
and to selection effects.
Due to these,
the
number of $b$ quarks in the final state is
increased substantially in the $2b2j$ component as compared
to the parton level, while at the same
time the number of events in the $4b$ sample
with 4 $b$-jets passing selection cuts is reduced.
We can make these statements more quantitative in the following way.
To first approximation, neglecting the contribution from
charm mis-identification,
the
overall efficiency of the $b$-tagging requirements in the resolved category will be
given by the following expression:
\begin{equation}
\label{btaggingeff}
{\rm EFF}_{\rm b-tag}\simeq \sum_{j=0}^{4}n^{\rm (b-jet)}_j\cdot f_b^{j}\cdot f_l^{4-j} \, ,
\end{equation}
with $n^{\rm (b-jet)}_j$ being the fraction of events satisfying all the selection
requirements,
where $j$ jets out of the leading four jets of the event
contain $b$ quarks (with $p_T^b\ge 15$
GeV).
Similar expressions can be derived for
the boosted and intermediate categories.
The naive expectation is that all events in the $4b$ sample have $n^{\rm (b-jet)}_4\simeq 1$
and $n^{\rm (b-jet)}_j\simeq 0$ for $j\ne 4$, while the events in the $2b2j$ sample
should have $n^{\rm (b-jet)}_2\simeq 1$ and zero otherwise.
This leads to a ratio of overall $b$-tagging selection efficiencies
\begin{equation}
\label{eq:naive}
\frac{ {\rm EFF}_{\rm b-tag} \left[ 2b2j \right]}{{\rm EFF}_{\rm b-tag} \left[ 4b\right]}
\simeq
\left( \frac{f_l}{f_b}\right)^2 \simeq 1.5\cdot 10^{-4} \, .
\end{equation}
However, after the parton shower, the above estimate is no longer accurate.
First of all, we will have a non-negligible fraction $n^{\rm (b-jet)}_j$
with $j=3,4$ also in the $2b2j$ sample, due to $b$-quark pair radiation
during the shower.
Secondly, not all events in the $4b$ sample will lead to four small-$R$ $b$-jets,
due to a combination of selection cuts and
parton shower effects.
\begin{table}[t]
\centering
\small
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
\multicolumn{2}{|c|}{} & $n^{\rm (b-jet)}_0$ & $n^{\rm (b-jet)}_1$ & $n^{\rm (b-jet)}_2$ & $n^{\rm (b-jet)}_3$ &
$n^{\rm (b-jet)}_4$ & ${\rm EFF}_{\rm b-tag}$ \\
\hline
\hline
Signal & $hh\to 4b$ & 0.1\% & 3\% & 25\% & 53\% & 20\% & 8.5\% \\
\hline
\multirow{3}{*}{Background} & QCD $4b$ & 1\% & 8\% & 27\% & 44\% & 20\% & 8.4\% \\
& QCD $2b2j$ & 9\% & 42\% & 49\% & 1\% & 0.1\% & 0.04\% \\
& QCD $4j$ & 96\% & 3.5\% & 0.5\% & 0.01\% & $3\cdot 10^{-4}$\% &
$2\cdot 10^{-4}$\%\\
\hline
\end{tabular}
\caption{\small
The relative fractions $n^{\rm (b-jet)}_j$ of events in the resolved selection
for which $j$ of the four leading small-$R$ jets of the
event
contain at least one $b$-quark with $p_T^b\ge 15$ GeV.
%
This information is provided
for the di-Higgs signal events and for the three QCD background samples.
%
The last column indicates the overall
$b$-tagging selection efficiency as defined in
Eq.~(\ref{btaggingeff}).
\label{tab:btaggingcheck}
}
\end{table}
In Table~\ref{tab:btaggingcheck} we collect
the values of $n^{\rm (b-jet)}_j$ for the signal and the three QCD background samples.
We find that, rather than following the naive estimate of Eq.~(\ref{eq:naive}),
the ratio of $b$-tagging selection efficiencies is instead
\begin{equation}
\frac{{\rm EFF}_{\rm b-tag} \left[ 2b2j\right]}{{\rm EFF}_{\rm b-tag} \left[ 4b\right]}=
\frac{0.04\%}{8.4\%} \simeq 5\cdot 10^{-3} \, .
\end{equation}
This suppression factor is of the same order as
the ratio of $4b$ to $2b2j$ cross-sections
in the resolved category before $b$-tagging.
%
This explains why the $2b2j$ contribution cannot be neglected as compared
to the irreducible $4b$ component of the QCD background.
%
A similar calculation from the numbers in Table~\ref{tab:btaggingcheck}
shows
that, on the other hand, the $4j$ component of the background
can be neglected.
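As an illustration, the efficiencies in Table~\ref{tab:btaggingcheck} can be
reproduced from Eq.~(\ref{btaggingeff}) with a few lines of code. In the
following Python sketch the tagging rates $f_b=0.8$ and $f_l=0.01$ are
illustrative assumptions (the rates actually used in the analysis are those
specified in Sect.~\ref{sec:btagging}); with these values one recovers
$(f_l/f_b)^2\simeq 1.5\cdot 10^{-4}$ and efficiencies close to the last column
of the table:
\begin{verbatim}
# Cross-check of Eq. (btaggingeff); f_b and f_l below are assumed,
# illustrative tagging rates, not necessarily those of the analysis.
f_b, f_l = 0.80, 0.01

# fractions n_j^(b-jet), j = 0..4, from the table (resolved selection)
n_4b   = [0.01, 0.08, 0.27, 0.44, 0.20]
n_2b2j = [0.09, 0.42, 0.49, 0.01, 0.001]

def eff_btag(n):
    # overall b-tagging efficiency, neglecting charm mis-identification
    return sum(nj * f_b**j * f_l**(4 - j) for j, nj in enumerate(n))

print(eff_btag(n_4b))                     # ~ 8.4e-2, i.e. ~ 8.4%
print(eff_btag(n_2b2j))                   # ~ 5e-4, i.e. ~ 0.05%
print(eff_btag(n_2b2j) / eff_btag(n_4b))  # ~ 6e-3, cf. the 5e-3 quoted above
print((f_l / f_b)**2)                     # naive estimate, ~ 1.5e-4
\end{verbatim}
The residual differences with respect to the table are due to the rounding of
the $n^{\rm (b-jet)}_j$ fractions and to the neglected charm contribution.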
\section{Single Higgs backgrounds}
\label{app:singlehiggs}
As discussed in Sect.~\ref{mcgeneration},
in our analysis we neglect single Higgs production processes,
since they are much smaller than both the signal and the main
QCD multijet backgrounds.
To explicitly demonstrate this, we have generated LO samples
using {\tt MadGraph5\_aMC@NLO}
for the
following single-Higgs processes:
\begin{enumerate}
\item $Z(\to b\bar{b})h(\to b\bar{b})$ (electroweak)
\item $t\bar{t}h(\to b\bar{b})$
\item $b\bar{b}h(\to b\bar{b})$ (QCD)
\end{enumerate}
For each process, we have generated 1M events, and in
Table~\ref{HK} we list the resulting
LO and NLO cross-sections at the generation level.
%
The subsequent decays and the
corresponding branching fractions are not included in these cross-sections,
since
these are taken care of by the {\tt Pythia8} parton shower.
%
The values of these branching fractions
are listed in Table~\ref{HBF}, corresponding
to the most recent averages from the PDG.
%
In the case of the $t\bar{t}h$ process, we
consider only the fully hadronic decays
of the top quark, since leptonic and semi-leptonic decays
can be suppressed
by means of a lepton veto.
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
Sample & LO & NLO & $K$-factor\\
\hline\hline
$Zh$ (13 TeV) & $6.5 \cdot 10^{-1}$ pb & $ 7.7 \cdot 10^{-1}$ pb & 1.19 \\
$t\bar{t}h$ (13 TeV) & $3.8 \cdot 10^{-1}$ pb & $4.6 \cdot 10^{-1}$ pb & 1.29 \\
$b\bar{b}h$ (13 TeV) & $4.9 \cdot 10^{-1}$ pb & $6.1 \cdot 10^{-1}$ pb & 1.22 \\
\hline
\end{tabular}
\caption{\small LO and NLO cross-sections at the generation level for the single-Higgs background
processes listed above, computed using {\tt MadGraph5\_aMC@NLO}.
%
The subsequent decays and the corresponding branching fractions are not included in these generation-level cross-sections. \label{HK}
}
\end{center}
\end{table}%
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|}
\hline
Sample & Decay & Branching Fraction\\
\hline\hline
$Zh$ & ($Z\to b\bar{b}$)($h\to b\bar{b}$) & 0.086 \\
$t\bar{t}h$ & $(W\to q\bar{q})^2$($h\to b\bar{b}$) & 0.26 \\
$b\bar{b}h$ & $h\to b\bar{b}$ & 0.57 \\
\hline
\end{tabular}
\caption{\small The values of the branching fractions applied to the single-Higgs
background processes from Table~\ref{HK}, corresponding to
the most recent PDG values. \label{HBF}}
\end{center}
\end{table}%
In Table~\ref{HBxsec}
we show the signal and background cross-sections at the end of the cut-based analysis, before the MVA is applied,
in the case without PU.
%
We separate the results into the three exclusive categories used in our analysis.
%
From this comparison, we see that, as expected, at the end of the cut-based analysis, the single-Higgs
backgrounds are smaller than the QCD multijet background by several orders of magnitude.
%
In addition, we find that already at the end of the cut-based analysis the di-Higgs
signal is also larger than all the single-Higgs backgrounds in all the selection categories.
%
Since this discrimination can only be improved by the MVA, we
conclude that neglecting single-Higgs backgrounds is a reasonable
approximation.
From Table~\ref{HBxsec} we also observe that in the resolved
and intermediate categories $Zh\to b\bar{b}b\bar{b}$ is
the dominant single-Higgs background, while $t\bar{t}h(\to b\bar{b})$ is
instead the most important one in the boosted category.
\begin{table}[h]
\begin{center}
\begin{tabular}{|c|c|c|c|c|}
\hline
& Sample & \multicolumn{3}{c|}{Pre-MVA cross-section (fb)}\\
& & Boosted & Intermediate & Resolved \\[0.1cm]
\hline\hline
Signal & $hh\to b\bar{b}b\bar{b}$ & $3.5\cdot 10^{-1}$ & $2.2\cdot 10^{-1}$ & $1.2\cdot 10^{0}$ \\[0.1cm]
\hline
\multirow{4}{*}{Backgrounds} & QCD multijet & $2.5\cdot 10^{+2}$ & $1.8\cdot 10^{+2}$ & $4.9\cdot 10^{+3}$ \\
&$Z(\to b\bar{b})h(\to b\bar{b})$ & $2.0\cdot 10^{-2}$ & $1.2\cdot 10^{-1}$ & $7.5\cdot 10^{-1}$ \\
&$t\bar{t}h(\to b\bar{b})$ & $5.1\cdot 10^{-2}$ & $6.3\cdot 10^{-3}$ & $4.0\cdot 10^{-1}$ \\
&$b\bar{b}h(\to b\bar{b})$ & $2.3\cdot 10^{-3}$ & $5.5\cdot 10^{-3}$ & $2.6\cdot 10^{-1}$\\
\hline
\end{tabular}
\end{center}
\caption{\small \label{HBxsec} Signal and background cross-sections at the end of the cut-based analysis
(before the MVA is applied), in the case without PU.
%
We separate the results into the three exclusive categories used in our analysis.
%
}
\end{table}%
\section{Introduction}
Let us consider the (homogeneous) Dirichlet problem
\begin{equation}\label{D1}
\begin{aligned}
\Laplace f &=0 && \text{in $G$},\\
f|_{\partial G}& = h &&\text{on $\partial G$},
\end{aligned}
\end{equation}
where $G\subset \mathbb{R}^d$ is a domain with Lipschitz boundary $\partial G$ and $\Laplace$ denotes the Laplace operator, i.e.\ $\Laplace= \sum_{i=1}^d \frac{\partial^2}{\partial x_i^2}$. In order to show that there exists a solution to \eqref{D1} which belongs to some subspace of $L_p(G)$, say, to the Besov space $B_{pp}^\sigma(G)$, $\sigma>0$, it is necessary that $h$ is an element of the trace space of $B_{pp}^\sigma(G)$ on $\partial G$; it is well known that the trace space is given by $B_{pp}^{\sigma-1/p}(\partial G)$, see Jerison \& Kenig \cite[Theorem 3.1]{JK95}, a more general version can be found in Jonsson \& Wallin
\cite[Chapter VII]{JW84}, and for domains with $C^\infty$-boundary a good reference is Triebel \cite[Sections 3.3.3--4]{Tr83}. The smoothness of the solution $f$, expressed by the parameter $\sigma$ in $B_{pp}^\sigma(G)$, is, however, not only determined by the smoothness of $h$, but also by the geometry of $G$. It seems that Grisvard \cite{Gr85} is the first author to quantify this in the case when $G$ is a non-convex polygon. Subsequently, partly due to its relevance in scientific computing, this problem attracted a lot of attention; for instance, it was studied by Jerison \& Kenig \cite{JK95}, by Dahlke \& DeVore \cite{DD97} in connection with wavelet representations of Besov functions, by Mitrea \& Mitrea \cite{MM03} and Mitrea, Mitrea \& Yan \cite{MMY10} in H\"older spaces, to mention but a few references.
In this note we use a probabilistic approach to the problem and we obtain a probabilistic interpretation in the special case when $G$ is an \emph{$L$-shaped domain} of the form $\LL:= \mathbb{R}^2\backslash \{(x,y):\,x,y\geq 0\}$, see Figure~\ref{L-shape1}, and in an $L_2$-setting.
\begin{figure}\label{L-shape1}
\sidecaption
\includegraphics[scale=0.8]{domain-l}
\caption{The $L$-shaped model domain $\LL\subset\real^2$.}
\end{figure}
This is the model problem for all non-convex domains with a re-entrant corner.
In this case the Besov space $B_{22}^\sigma(\LL)$ coincides with the Sobolev--Slobodetskij space $W_2^\sigma(\LL)$. In particular, we
\begin{itemize}
\item
give a probabilistic interpretation of the solution to \eqref{D1} with $G=\LL$;
\item
provide a different proof of the fact that the critical order of smoothness of $f$ is $\sigma< \pi/\frac{3\pi}{2}=\frac{2}{3}$, i.e.\ even for $h\in C_0^2(\partial \LL)$ we may have
\begin{equation}\label{eq-main}
f\in W_{2,\mathrm{loc}}^{1+ \sigma}(\LL), \quad \sigma< \tfrac{2}{3},
\qquad\text{and}\qquad
f\notin W_2^{1+ \sigma}(\LL), \quad \sigma\geq \tfrac{2}{3};
\end{equation}
\item
apply the ``breakdown of regularity'' result to the Poisson (or inhomogeneous Dirichlet) problem.
\end{itemize}
It is clear that this result holds in a more general setting, if we replace the re-entrant angle $3\pi/2$ by some $\theta\in (\pi, 2\pi)$.
Results of this type were proved for polygons and in a H\"older space setting by Mitrea \& Mitrea \cite{MM03}. Technically, our proof is close to (but different from) that given in \cite{MM03}---yet our starting idea is different. Dahlke \& DeVore \cite{DD97} proved this regularity result analytically using a wavelet basis for $L_p$-Besov spaces.
Problem \eqref{D1} is closely related to the Poisson (or nonhomogeneous Dirichlet) problem
\begin{equation}\label{P1}
\begin{aligned}
\Laplace\poisol &=g && \text{on $G$},\\
\poisol|_{\partial G}& = 0 &&\text{on $\partial G$}.
\end{aligned}
\end{equation}
If $G$ is bounded and has a $C^\infty$-boundary, the problems \eqref{D1} and \eqref{P1} are equivalent. Indeed, in this case for every right-hand side $g\in L_2(G)$ of \eqref{P1} there exists a unique solution $\poisol \in W_2^2(G)$, see Triebel \cite[Theorem 4.3.3]{Tr83}. Denote by $N$ the Newtonian potential on $\mathbb{R}^d$ and define $w:=g* N$; clearly, $\Laplace w=g$ on $G$ and $w\in W_2^2(G)$. Since the boundary is smooth, there is a continuous linear trace operator $\tr: W_2^{2}(G)\to W_2^{3/2}(\partial G)$ as well as a continuous linear extension operator $\ex: W_2^{3/2}(\partial G)\to W_2^2(G)$, such that $\tr \circ \ex = \id$, cf.\ Triebel \cite{Tr83}. Hence, the function $f:= w-\poisol$ solves the inhomogeneous Dirichlet problem \eqref{D1} with $h=\tr w$ on $\partial G$.
On the other hand, let $f$ be the (unique) solution to \eqref{D1}. Since there exists a continuous linear extension operator from $W_2^{3/2}(\partial G)$ to $W_2^2(G)$ given by $\tilde{h}= \ex h$, we see that the function $\poisol := f-\tilde{h}$ satisfies \eqref{P1} with $g= \Laplace \tilde{h}$.
If the boundary $\partial G$ is Lipschitz, the situation is different. It is known (see for example Jerison \& Kenig \cite[Theorem~B]{JK95}) that, in general, on a Lipschitz domain $G$ and for $g\in L_2(G)$ one can only expect that the solution $\poisol$ to \eqref{P1} belongs to $W_2^{3/2}(G)$; there are counterexamples of domains for which $\poisol$ cannot be in $W_2^{\alpha}(G)$ for any $\alpha>3/2$. Thus, the above procedure does not work in a straightforward way. However, by our strategy we can recover the negative result for this concrete domain, cf.\ Theorem~\ref{t2}: \emph{If $g\in H_1(\real^2)\cap W_2^1(\LL) $, then the solution $\poisol$ to \eqref{P1} is not in $W_2^{1+\sigma}(\LL)$ for any $\sigma\geq 2/3$}. Here $H_1(\real^2)\subset L_1(\real^2)$ is the Hardy space, cf.\ Stein \cite{stein}.
If $G$ is unbounded, the solution to \eqref{D1} might not be unique and, in general, it is only in the local space $W^2_{2,\mathrm{loc}}(G)$ even if $\partial G$ is smooth, cf.\ Gilbarg \& Trudinger \cite[Chapter 8]{gil-tru}. On the other hand, if the complement $G^c$ is non-empty, if no component of $G^c$ reduces to a single point, and if the boundary value $h$ is bounded and continuous on $\partial G$, then there exists a unique bounded solution to \eqref{D1} given by the convolution with the Poisson kernel, see Port \& Stone \cite[Theorem IV.2.13]{PS78}.
A strong motivation for this type of results comes from numerical analysis and approximation theory, because the exact Besov smoothness of $u$ is very important for computing $u$ and the feasibility of adaptive computational schemes, see Dahlke \& DeVore \cite{DD97}, Dahlke, Dahmen \& DeVore \cite{DDD97}, DeVore \cite{De98}, Cohen, Dahmen \& DeVore \cite{CDD01}, Cohen \cite{C03}; an application to SPDEs is in Cioika et.\ al.\ \cite{Ci11,Ci14}. More precisely---using the set-up and the notation of \cite{CDD01}---let $\{\psi_\lambda, \,\lambda\in \Lambda\}$ be a basis of wavelets on $G$ and assume that the index set $\Lambda$ is of the form $\Lambda= \bigcup_{i\geq 0} \Lambda_i$ with (usually hierarchical) sets $\Lambda_i$ of cardinality $N_i$. By $u_{\Lambda_i}$ we denote the Galerkin approximation of $u$ in terms of the wavelets $\{\psi_\lambda\}_{\lambda\in \Lambda_i}$ (this amounts to solving a system of linear equations), and by $e_{N_i}(u):= \|u-u_{\Lambda_i}\|_p $ the approximation error in this scheme. Then it is known, cf. \cite[(4.2) and (2.35)]{CDD01}, that
\begin{equation}\label{Gal}
u\in W_p^\sigma(G) \implies e_{N_i}(u) \leq C N_i^{-\sigma/d}, \quad i\geq 1.
\end{equation}
There is also an adaptive algorithm for choosing the index sets $(\Lambda_i)_{i\geq 1}$. Starting with an initial set $\Lambda_0$, this algorithm adaptively generates a sequence of nested sets $(\Lambda_i)_{i\geq 1}$; roughly speaking, in each iteration step we choose the next set $\Lambda_{i+1}$ by partitioning the domain of those wavelets $\psi_\lambda$, $\lambda\in \Lambda_i$ (i.e.\ selectively refining the approximation by considering the next generation of wavelets), whose coefficients $u_\lambda$ make, in an appropriate sense, the largest contribution to the sum $u=\sum_{\lambda\in \Lambda_i} u_\lambda \psi_\lambda$.
\runinhead{Notation.} Most of our notation is standard. By $(r,\theta)\in (0,\infty)\times (0,2\pi]$ we denote polar coordinates in $\real^2$, and $\mathbb{H}$ is the lower half-plane in $\real^2$. We write $f\asymp g$ to say that $c f(t)\leq g(t)\leq C f(t)$ for all $t$ and some fixed constants.
\section{Setting and the main result}\label{mainset}
Let $B = (B_t^x)_{t\geq 0}$ be a Brownian motion started at a point $x\in G$. Suppose that there exists a conformal mapping $\varphi: G\to \mathbb{H}$, where $\mathbb{H}:=\{(x_1,x_2)\in \mathbb{R}^2,\,\,x_2\leq 0\}$ is the lower half-plane in $\mathbb{R}^2$. Using the conformal invariance of Brownian motion, see e.g.\ M\"orters \& Peres \cite[p.~202]{MP10}, we can describe the distribution of the Brownian motion inside $G$ in terms of \emph{some} Brownian motion $W$ in $\mathbb{H}$, which is much easier to handle. Conformal invariance of Brownian motion means that there exists a planar Brownian motion $W=(W_t^y)_{t\geq 0}$ with starting point $y\in \mathbb{H}$ such that, under the conformal map $\varphi:G\to\mathbb{H}$ with boundary identification,
\begin{equation}\label{BW1}
\left(\varphi(B_t^x)\right)_{0\leq t\leq \tau_G}
\quad\text{has the same law as}\quad
\left(W_{\xi(t)}^{\varphi(x)}\right)_{0\leq t \leq \tau_{\mathbb H}};
\end{equation}
the time-change $\xi$ is given by $\xi(t):= \int_0^t |\varphi'(B^x_s)|^2\,{\D}s$; in particular, $\xi(\tau_G)= \tau_\mathbb{H}$, where $\tau_G = \inf\{t>0: B_t^x\in\partial G\}$ and $\tau_\mathbb{H}:= \inf\{ t>0: \, W_t^{\varphi(x)}\in \partial \mathbb{H}\}$ are the first exit times from $G$ and $\mathbb H$, respectively.
Let us recall some properties of a planar Brownian motion in $\mathbb{H}$ killed upon exiting at the boundary $\partial\mathbb{H} = \{(w_1,w_2): \,w_2=0\}$. The distribution of the exit position $W_{\tau_\mathbb{H}}$ has the transition probability density
\begin{equation}\label{WH}
u\mapsto p_\mathbb{H}(w,u)
= \frac{1}{\pi} \frac{|w_2|}{|u-w_1|^2 + w_2^2},\quad w=(w_1,w_2)\in \mathbb{H},
\end{equation}
cf.\ Bass~\cite[p.~91]{Bass}. Recall that a random variable $X$ with values in $\real$ has a Cauchy distribution, $X\sim\Cauchy(m,b)$, $m\in \real$, $b>0$, if it has a transition probability density of the form
$$
p(u)= \frac{1}{\pi}\frac{b}{(u-m)^2 +b^2}, \quad u\in\real;
$$
if $X\sim \Cauchy(m,b)$, then $Z:=(X-m)/b\sim \Cauchy(0,1)$. Thus, the probabilistic interpretation of $W_{\tau_\mathbb{H}}^w$ is
\begin{equation}\label{Cauchy}
W_{\tau_\mathbb{H}}^w \sim Z^w\sim \Cauchy(w_1,|w_2|)
\quad\text{or}\quad
W_{\tau_\mathbb{H}}^w \sim |w_2|\, Z + w_1
\quad\text{where}\quad Z\sim \Cauchy(0,1).
\end{equation}
This observation allows us to simplify the calculation of functionals $\Theta$ of a Brownian motion $B$ on $G$, killed upon exiting from $G$, in the following sense:
\begin{equation}\label{fG0}\begin{aligned}
\Ee \Theta(B_{\tau_G}^x)
= \Ee \left(\Theta\circ \varphi^{-1}\right)(\varphi(B_{\tau_G}^x))
&= \Ee\left(\Theta\circ \varphi^{-1}\right) (W_{\tau_\mathbb{H}}^{\varphi(x)})\\
&= \Ee\left(\Theta\circ \varphi^{-1}\right) \left( |\varphi_2(x)|\, Z+\varphi_1(x)\right).
\end{aligned}\end{equation}
In particular, the formula \eqref{fG0} provides us with a probabilistic representation for the solution $f$ to the Dirichlet problem \eqref{D1}:
\begin{equation}\label{fG}
f(x)= \Ee h(B_{\tau_G}^x)= \Ee\left(h\circ \varphi^{-1}\right) (W_{\tau_\mathbb{H}}^{\varphi(x)}).
\end{equation}
\begin{remark}
The formulae in \eqref{fG0} are very helpful for the numerical calculation of the values $\Ee \Theta(B_{\tau_G}^x)$. In fact, in order to simulate $\Theta(B_{\tau_G}^x)$, it is enough to simulate the Cauchy distribution $Z\sim \Cauchy(0,1)$ and then evaluate \eqref{fG0} using the Monte Carlo method.
\end{remark}
We will now consider the $L$-shaped domain $\LL$. It is easy to see that the conformal mapping of $\LL$ to $\mathbb{H}$ is given by
\begin{equation}\label{Conf1}
\varphi(z)= \eul^{\imag\frac{2\pi}{3}} z^{2/3} = r^{2/3} \exp\left( \tfrac{2}{3} \imag (\theta+\pi)\right)=\varphi_1(r,\theta)+ \imag \varphi_2(r,\theta),
\end{equation}
cf.\ Figure~\ref{fig-conf}, where $\theta = \arg z \in (0,2\pi]$.
\begin{figure}\label{fig-conf}
\centering
\caption{Conformal mapping from $\LL$ to $\mathbb{H}$ and its behaviour at the boundaries.}
\includegraphics[scale=0.8]{conformal-mapping}
\end{figure}
The following lemma uses the conformal mapping $\varphi: \LL\to \mathbb{H}$ and the conformal invariance of Brownian motion to obtain the distribution of $B^x_{\tau_\LL}$.
\begin{lemma}\label{tL}
Let $\LL$ be an $L$-shaped domain as shown in Fig.~\ref{L-shape1}. The exit position $B_{\tau_\LL}$ of Brownian motion from $\LL$ is a random variable on $\partial \LL= \{0\}\times [0,\infty)\cup[0,\infty)\times \{0\}$ which has the following probability distribution:
\begin{equation}\label{PBL}
\begin{aligned}
\Pp\left(B^x_{\tau_\LL}\in {\D}y\right)
&= \frac{1}{\pi} \frac{|\varphi_2(x)|}{|\varphi_1(x)+y_2|^2+|\varphi_2(x)|^2} \,{\D}y_2 \,\delta_0({\D}y_1)\\
&\qquad\mbox{} + \frac{1}{\pi}\frac{|\varphi_2(x)|}{|\varphi_1(x)-y_1|^2+|\varphi_2(x)|^2} \,{\D}y_1 \,\delta_0({\D}y_2).
\end{aligned}
\end{equation}
\end{lemma}
Lemma~\ref{tL} provides us with an explicit representation of the solution $f(x)$ to the Dirichlet problem \eqref{D1} for $G=\LL$. Indeed, since
(cf.\ Figure~\ref{fig-conf})
\begin{equation*}
\left(h\circ \varphi^{-1}\right)(u)
=
\begin{cases}
h(u,0), & u\geq 0, \\
h(0,-u), &u\leq 0,
\end{cases}
\end{equation*}
we get
\begin{equation}\label{fD2}
f(x)
= \int_\real f_0(u) \,p_\mathbb{H}(\varphi(x),u) \,{\D}u
= \frac{1}{\pi} \int_\real f_0(u)\, \frac{|\varphi_2(x)|}{(\varphi_1(x)-u)^2+ |\varphi_2(x)|^2}\,{\D}u,
\end{equation}
where
\begin{equation}\label{f0-def}
f_0(u)
:= h(0,-u)\Id_{(-\infty,0)}(u) + h(u,0)\Id_{[0,\infty)}(u).
\end{equation}
After a change of variables, this becomes
\begin{equation}\label{fD22}
f(x)
= \frac{1}{\pi} \int_\real f_0\left( u |\varphi_2(x)|+ \varphi_1(x)\right) \frac{{\D}u}{u^2+1}.
\end{equation}
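As noted in the remark above, the representation \eqref{fD22} can be evaluated
numerically by plain Monte Carlo: one samples $Z\sim\Cauchy(0,1)$ and averages
$f_0\bigl(|\varphi_2(x)|\,Z+\varphi_1(x)\bigr)$. The following Python sketch
illustrates this procedure; the boundary datum $f_0$ used here is an arbitrary
illustrative choice:
\begin{verbatim}
# Monte Carlo evaluation of (fD22): f(x) = E[ f0(|phi2(x)| Z + phi1(x)) ]
# with Z ~ Cauchy(0,1); the boundary datum f0 is an arbitrary example.
import numpy as np

def phi(x1, x2):
    # conformal map (Conf1): phi(z) = exp(2*pi*i/3) * z^(2/3),
    # with the argument of z taken in [0, 2*pi)
    r = np.hypot(x1, x2)
    theta = np.arctan2(x2, x1) % (2 * np.pi)
    w = r**(2/3) * np.exp(1j * (2/3) * (theta + np.pi))
    return w.real, w.imag

def f0(u):                       # illustrative boundary datum
    return np.exp(-u**2)

def f_mc(x1, x2, n=10**6, seed=0):
    z = np.random.default_rng(seed).standard_cauchy(n)
    p1, p2 = phi(x1, x2)
    return f0(np.abs(p2) * z + p1).mean()

print(f_mc(-1.0, -1.0))          # estimate of f at the point x = (-1,-1)
\end{verbatim}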
If we want to investigate the smoothness of $f$, it is more convenient to rewrite $f$ in polar coordinates. From the right-hand side of \eqref{Conf1} we infer
\begin{equation}\label{Phi1}
\varphi_1(r,\theta)
= r^{2/3} \cos \Phi_\theta
\quad\text{and}\quad
\varphi_2(r,\theta)= r^{2/3} \sin \Phi_\theta,
\end{equation}
where we use the shorthand
\begin{equation*}
\Phi_\theta:= \frac{2}{3}(\pi+\theta).
\end{equation*}
Observe that for $\theta\in (\pi/2, 2\pi]$ we have $\pi< \Phi_\theta\leq 2\pi$, hence $\varphi_2 \leq 0$. This yields
\begin{equation}\label{f1-0}
f(r,\theta)
= \frac{1}{\pi}\int_\real f_0\left(r^{2/3} \cos \Phi_\theta - r^{2/3}v\sin \Phi_\theta\right)\, \frac{{\D}v}{1+v^2}.
\end{equation}
Now we turn to the principal objective of this note: the smoothness of $f$ in the Sobolev--Slobodetskij scale.
\begin{theorem}\label{t1}
Consider the \textup{(}homogeneous\textup{)} Dirichlet problem \eqref{D1} with a boundary term $f_0$, given by \eqref{f0-def}, and let $f$ denote the solution to \eqref{D1}.
\begin{enumerate}\renewcommand{\theenumi}{\textup{\alph{enumi}}}
\item\label{t1-a}
If $f_0\in W_1^2(\real) \cap W_2^2(\real)$ satisfies
\begin{equation}\label{f0}
\liminf_{\epsilon \to 0} \int_{|x|>\epsilon} \frac{f_0'(x)}{x} \,{\D}x\neq 0,
\end{equation}
then $f\notin W_2^{1+\sigma}(\LL)$, even $f\notin W_{2,\mathrm{loc}}^{1+\sigma}(\LL)$, for any $\sigma\geq 2/3$.
\item\label{t1-b}
If $f_0\in W_1^2(\real)\cap W_p^1(\real)$, where $p>\max\{2,\, 2/(2-3\sigma)\}$, then $f\in W_{2,\mathrm{loc}}^{1+\sigma}(\LL)$ for all $\sigma\in (0,2/3)$.
\end{enumerate}
\end{theorem}
\begin{remark}\label{rem1}
By the Sobolev embedding theorem we have $W_1^2(\real) \cap W_2^2(\real) \subset C_b(\real)$ and $W_1^2(\real)\cap W_p^1(\real)\subset C_b(\real)$ if $p>\max\{2,\, 2/(2-3\sigma)\}$. Hence, the function $f$ given by \eqref{fD22} is the unique bounded solution to \eqref{D1}.
\end{remark}
The idea of the proof of Theorem~\ref{t1} makes essential use of the results by Jerison \& Kenig \cite{JK95} combined with the observation that it is, in fact, enough to show the claim for $\LLB:=\LL\cap B(0,1)$, where $B(0,1):= \{ x\in \mathbb{R}^2:\, |x|< 1\}$.
Theorem~\ref{t1} allows us to prove the negative result for the solution to the Poisson problem, which improves \cite[Theorem~B]{JK95}.
Recall that $H_1(\real^2)\subset L_1(\real^2)$ is the usual Hardy space, cf.\ Stein \cite{stein}.
\begin{theorem}\label{t2}
Consider the Poisson \textup{(}inhomogeneous Dirichlet\textup{)} problem \eqref{P1} with right-hand side $g\in H_1(\real^2)\cap W_2^1(\LL)$ such that $f_0(x):=\left((\tr g*N)\circ \varphi^{-1}\right) (x)$ satisfies \eqref{f0}, where $N(x)= (2\pi)^{-1}\log |x|$ is the Newton kernel. Then the solution $\poisol \notin W_2^{1+\sigma} (\LL)$, even $\poisol \notin W_{2,\mathrm{loc}}^{1+\sigma} (\LL)$, for any $\sigma\geq 2/3$.
\end{theorem}
The proofs of Theorem~\ref{t1} and \ref{t2} are deferred to the next section.
\section{Proofs}\label{proofs}
\begin{proof}[Proof of Lemma~\ref{tL}] We calculate the characteristic function of $B^x_{\tau_\LL}$.
As before, let $y=(y_1,y_2)$, $x=(x_1,x_2)$ and $\varphi(x)=(\varphi_1(x),\varphi_2(x))$. We have
\begin{align*}
\Ee \eul^{\imag\xi\cdot B_{\tau_\LL}^x}
&\overset{\eqref{BW1}}{=} \Ee \eul^{\imag\xi \cdot\varphi^{-1} (W_{\tau_\mathbb{H}}^{\varphi(x)})} \\
&= \int_{\mathbb{R}^2} \eul^{\imag\xi\cdot \varphi^{-1} (y)} \,\Pp(W_{\tau_\mathbb{H}}^{\varphi(x)}\in {\D}y)\\
&\overset{\eqref{WH}}{=} \frac{1}{\pi} \int_\real \eul^{\imag\xi \cdot \varphi^{-1}(y_1,0)}\frac{|\varphi_2(x)|}{|\varphi_1(x)-y_1|^2+|\varphi_2(x)|^2}{\D}y_1\\
&= \frac{1}{\pi} \int_{-\infty}^0 \eul^{-\imag \xi_2u} \frac{|\varphi_2(x)|}{|\varphi_1(x)-u|^2+|\varphi_2(x)|^2}\,{\D}u\\
&\qquad\mbox{} + \frac{1}{\pi} \int_0^{+\infty} \eul^{\imag \xi_1u }\frac{|\varphi_2(x)|}{|\varphi_1(x)- u|^2+|\varphi_2(x)|^2}\,{\D}u.\tag*{\qed}
\end{align*}
\end{proof}
For the proof of Theorem~\ref{t1} we need some preparations. In order to keep the presentation self-contained, we quote the classical result by Jerison \& Kenig \cite[Theorem~4.1]{JK95}.
\begin{theorem}[Jerison \& Kenig]\label{JK}
Let $\sigma\in (0,1)$, $k\in \nat_0$ and $p\in [1, \infty]$. For any function $f$ which is harmonic on a bounded domain $\Omega$, the following assertions are equivalent:
\begin{enumerate}\renewcommand{\theenumi}{\textup{\alph{enumi}}}
\item\label{JK-a}
$f\in B_{pp}^{k+\sigma}(\Omega)$;
\item\label{JK-b}
$\dist(x,\partial\Omega)^{1-\sigma}\,\left| \nabla^{k+1} f\right| + \left| \nabla^k f \right| + \left| f\right| \in L_p(\Omega)$.
\end{enumerate}
\end{theorem}
We will also need the following technical lemma. Recall that $\LLB = \LL\cap B(0,1)$.
\begin{lemma}\label{B1}
Suppose that $f_0\in W_p^1(\real)$ for some $p>2$. Then $f\in W_2^1 (\LLB)$.
\end{lemma}
\begin{proof}
Using the representation \eqref{f1-0}, the H\"older inequality and a change of variables, we get
\begin{align*}
&\int_{\pi/2}^{2\pi} \int_0^1 |f(r,\theta)|^2 r\,{\D}r\,{\D}\theta\\
&= \frac{3}{2\pi^2 }\int_{\pi/2}^{2\pi} \int_0^1 \rho^2
\left|\int_\real f_0(w)\frac{|\rho \sin \Phi_\theta|}{(w-\rho \cos \Phi_\theta)^2+ (\rho \sin \Phi_\theta)^2}{\D}w\right|^2 \,{\D}\rho\,{\D}\theta\\
&\leq C_1\int_{\pi/2}^{2\pi} \int_0^1 \rho^2
\left[
\left(\int_\real \left| f_0(v)\right|^p{\D}v\right)^{1/p}
\left(\int_\real \frac{|\rho \sin \Phi_\theta|^q}{(v^2 + |\rho \sin \Phi_\theta|^2)^q}{\D}v \right)^{1/q}
\right]^2\,{\D}\rho\,{\D}\theta\\
&\leq C_2\int_{\pi/2}^{2\pi} \int_0^1 \rho^2 \left|\rho \sin \Phi_\theta\right|^{-2+2/q}
\left(\int_\real \frac{1}{(w^2 +1)^q}{\D}w \right)^{2/q}\,{\D}\rho\,{\D}\theta\\
&= C_3 \int_{\pi/2}^{2\pi} \int_0^1 \rho^{2/q} \left|\sin \Phi_\theta\right|^{-2+2/q} \,{\D}\rho\,{\D}\theta,
\end{align*}
where $p^{-1}+q^{-1}=1$. Because of $p>2$ we have $q<2$, hence $-2+2/q>-1$. Note that the inequalities $2x/\pi \leq \sin x\leq x$ for $x\in [0,\pi/2]$ imply
$$
\int_{\pi/2}^{2\pi} \left|\sin \Phi_\theta\right|^{-1+\epsilon} \,{\D}\theta
= \frac{3}{2} \int_0^{\pi} \left|\sin \varphi\right|^{-1+\epsilon} \,{\D}\varphi
= 3 \int_0^{\pi/2} \left|\sin\varphi\right|^{-1+\epsilon}\,{\D}\varphi
< \infty.
$$
This shows that $f\in L_2(\LLB)$.
Recall that the partial derivatives of the polar coordinates are
\begin{equation}\label{der10}
\frac{\partial}{\partial x_1} r = \cos \theta, \quad
\frac{\partial}{\partial x_1} \theta= - \frac{\sin \theta}{r}, \quad
\frac{\partial}{\partial x_1} \Phi_\theta= \frac{2}{3} \frac{\partial}{\partial x_1} \theta = - \frac{2\sin \theta}{3r}.
\end{equation}
Therefore, we have for $\theta\in (\pi/2,2\pi)$
\begin{align}
\notag
\frac{\partial}{\partial x_1} &f(r,\theta)
= \frac{1}{\pi} \int_\real f_0' \left( r^{2/3} \cos \Phi_\theta- vr^{2/3} \sin \Phi_\theta \right) \, \frac{1}{v^2+1}\times\\
&\notag\quad \mbox{}\times \left[\frac{2\cos \theta }{3 r^{1/3} } \left( \cos \Phi_\theta- v \sin \Phi_\theta \right)\right.
\\
&\label{f1}\quad\qquad\left.\mbox{}+r^{2/3} \left( \frac{-2\sin \theta}{3r}\right) ( -v \cos \Phi_\theta -\sin \Phi_\theta ) \right] {\D}v\\
&\notag= \frac{2}{3 \pi r^{1/3} } \int_\real f_0'\left( r^{2/3} \cos \Phi_\theta- vr^{2/3} \sin \Phi_\theta \right) \, \frac{1}{v^2+1}\times\\
&\notag\quad \mbox{}\times \left[\left( \cos \Phi_\theta- v \sin \Phi_\theta \right) \cos \theta
+( v \cos \Phi_\theta +\sin \Phi_\theta ) \sin \theta \right] {\D}v\\
&\notag= \frac{2}{3\pi r^{1/3} } \int_\real f_0' \left( r^{2/3} \cos \Phi_\theta- vr^{2/3} \sin \Phi_\theta \right) \, \frac{K(\theta,v)}{v^2+1}\,{\D}v,
\end{align}
where
\begin{equation}\label{Kav}
K(\theta,v):=\cos \omega_\theta - v \sin \omega_\theta ,
\end{equation}
and
\begin{equation}\label{tha2}
\omega_\theta= \frac{1}{3} \left(2\pi-\theta\right).
\end{equation}
Note that $\Phi_{\pi/2} =\pi$ and $\omega_{\pi/2}= \pi/2$.
Let us show that the first partial derivatives of $f$ belong to $L_2(\LLB)$. Because of the symmetry of $\LLB$, it is enough to check this for $\frac{\partial}{\partial x_1} f$.
Using the estimate $|K(\theta, v)| (1+v^2)^{-1} \leq C(1+|v|)^{-1}$, a change of variables and the H\"older inequality, we get
\begin{align*}
\int_0^1 &\int_{\pi/2}^{2\pi} \left| \frac{\partial}{\partial x_1} f(r,\theta)\right|^2 r\,{\D}\theta\,{\D}r\\
&= \int_0^ 1 \int_{\pi/2}^{2\pi}
\left|
\int_\real \frac{2}{3 \pi r^{1/3} }\, f_0'
\left(r^{2/3} \cos \Phi_\theta- vr^{2/3} \sin \Phi_\theta \right)
\frac{K(\theta, v)}{1+v^2} \,{\D}v
\right|^2
r\,{\D}\theta \,{\D}r\\
&= \frac{2}{3\pi^2} \int_0^1 \int_{\pi/2}^{2\pi} \rho
\left|
\int_\real f_0'\left(\rho \cos \Phi_\theta- v\rho\sin \Phi_\theta \right) \frac{K(\theta, v)}{1+v^2}\,{\D}v
\right|^2
\,{\D}\theta\,{\D}\rho\\
&\leq C_1 \int_0^1 \int_{\pi/2}^{2\pi}\rho
\left(\int_\real \frac{|f_0'(w)|}{|\rho \sin \Phi_\theta| +|w- \rho \cos \Phi_\theta|}{\D}w \right)^2
\,{\D}\theta \,{\D}\rho\\
&\leq C_2 \left(\int_\real \left| f_0' (w)\right|^p {\D}w\right)^{2/p}
\left( \int_\real \frac{1}{(1+|w|)^q} {\D}w\right)^{2/q} \times\\
&\qquad\mbox{}\times \int_{\pi/2}^{2\pi} \int_0^1\rho \left( |\rho \sin \Phi_\theta|^{-1+1/q}\right)^2\,{\D}\rho\,{\D}\theta\\
&= C_3 \int_{\pi/2}^{2\pi}\int_0^1 |\sin \Phi_\theta|^{-2+2/q} \rho^{-1+2/q} \,{\D}\rho \,{\D}\theta
<\infty;
\end{align*}
in the last line we use again that $-2+2/q>-1$.
\smartqed\qed
\end{proof}
\begin{proof}[Proof of Theorem~\ref{t1}] It is enough to consider the set $\LLB$.
We check whether condition~\ref{JK-b} of Theorem~\ref{JK} holds true, i.e.\ whether
$$
\dist(\,\cdot\,,\partial\LLB)^{1-\sigma} \left| \frac{\partial^2}{\partial x_1^2} f \right| + \left| \frac{\partial}{\partial x_1} f\right|+\left|f\right|
\quad\text{is in $L_2(\LLB)$ or not.}
$$
From Lemma~\ref{B1} we already know that $\left| \frac{\partial}{\partial x_1} f\right|+\left|f\right|\in L_2(\LLB)$. Let us check when
$$
\dist(\,\cdot\,,\partial\LLB)^{1-\sigma} \left| \frac{\partial^2}{\partial x_1^2} f \right| \in L_2(\LLB).
$$
We will only work out the term $\frac{\partial^2}{\partial x_1^2} f(r,\theta)$ since the calculations for $\tfrac{\partial^2}{\partial x_1\partial x_2} f(r,\theta)$ are similar. We have
\begin{align*}
\frac{\partial}{\partial x_1} K(\theta,v)
= \frac{\sin \theta}{3r} \left(- v \cos \omega_\theta - \sin \omega_\theta\right)
=: \frac{\sin \theta}{3r} K^{\star}(\theta,v),
\end{align*}
where we use that $\tfrac{\partial}{\partial x_1} \omega_\theta= -\frac{1}{3} \tfrac{\partial}{\partial x_1} \theta= \frac{ \sin \theta}{3r}$ and set
\begin{equation}\label{K1}
K^{\star}(\theta,v):= -v \cos \omega_\theta - \sin \omega_\theta.
\end{equation}
Therefore, differentiating $\tfrac{\partial}{\partial x_1} f$---we use the representation \eqref{f1}---with respect to $x_1$ gives
\begin{align*}
\tfrac{\partial^2}{\partial x_1^2} f(r,\theta)
&= - \frac{2\cos \theta}{9 \pi r^{4/3}} \int_\real f_0' \left( r^{2/3} (\cos \Phi_\theta- v \sin \Phi_\theta)\right)\frac{K(\theta, v)}{1+v^2} \,{\D}v\\
&\qquad \mbox{}+\frac{4}{9 \pi r^{2/3}} \int_\real f_0'' \left( r^{2/3} (\cos \Phi_\theta- v \sin \Phi_\theta)\right)\frac{K^2(\theta, v)}{1+v^2} \,{\D}v\\
&\qquad \mbox{}+\frac{2\sin \theta}{9 \pi r^{4/3}} \int_\real f_0' \left( r^{2/3} (\cos \Phi_\theta- v \sin \Phi_\theta)\right)\frac{K^{\star}(\theta, v)}{1+v^2} \,{\D}v.
\end{align*}
Note that
\begin{equation}\label{int2}\begin{aligned}
&\int_{\LLB} \dist (x, \partial \LLB)^{2-2\sigma} \left|\frac{\partial^2}{\partial x_1^2} f(x)\right|^2 \,{\D}x\\
&\qquad= \int_0^1 \int_{\pi/2}^{2\pi} \dist ((r,\theta), \partial \LLB)^{2-2\sigma} \left|\frac{\partial^2}{\partial x_1^2}f(r,\theta)\right|^2 r\,{\D}\theta\,{\D}r.
\end{aligned}\end{equation}
Since only the values near the boundary $\Gamma:= \partial \LLB\cap \partial \LL$ determine the convergence of the integral, it is enough to check that
\begin{equation}\label{int22}
\mathbb{I}
= \int_0^1 \int_{\pi/2}^{2\pi} \dist ((r,\theta), \Gamma)^{2-2\sigma} \left|\frac{\partial^2}{\partial x_1^2} f(r,\theta)\right|^2 r\,{\D}\theta\,{\D}r
\end{equation}
is infinite if $\sigma\geq 2/3$ and finite if $\sigma<2/3$.
\enlargethispage{.66\baselineskip}
We split $\LLB$ into three parts. For $\delta>0$ small enough we define, see Figure~\ref{k1-k3},
\begin{figure}\label{k1-k3}
\sidecaption
\includegraphics[scale=0.8]{k-sets}
\caption{The set $\LLB$ is split into three disjoint parts $K_1$, $K_2$, $K_3$.}
\end{figure}
\begin{gather*}
K_1:= \left\{ (r,\theta):\, 0<r<1, \;\; \frac{\pi}{2}+\delta<\theta<2\pi-\delta\right\},\\
K_2:= \left\{ (r,\theta):\, 0<r<1, \;\; \frac{\pi}{2} \leq \theta<\frac{\pi}{2}+\delta\right\},\\
K_3:= \left\{ (r,\theta):\, 0<r<1, \;\; 2\pi-\delta <\theta\leq 2\pi\right\}.
\intertext{Splitting the integral accordingly, we get}
\mathbb{I}
= \left(\int_{K_1}+\int_{K_2}+\int_{K_3}\right)\dist ((r,\theta), \Gamma)^{2-2\sigma} \left|\frac{\partial^2}{\partial x_1^2} f(r,\theta)\right|^2 r\,{\D}\theta\,{\D}r;
\end{gather*}
in order to show that $\mathbb{I}$ is infinite if $\sigma\geq 2/3$, it is enough to see that the integral over $K_1$ is infinite.
Noting that in $K_1$ we have $\dist((r,\theta), \Gamma)\asymp r$, we get
\begin{equation}\label{eq1}
\begin{aligned}
\int_{K_1} &\left|r^{1-\sigma}\tfrac{\partial^2}{\partial x_1^2} f(r,\theta)\right|^2 r\,{\D}\theta \,{\D}r\\
&= \int_{K_1} r\left|r^{1-\sigma} \frac{2}{9 \pi r^{4/3}} \right|^2 \times\\
&\quad \mbox{}\times \left| \int_\real f_0' \left( r^{2/3} (\cos \Phi_\theta- v \sin \Phi_\theta)\right)
\frac{K^{\star}(\theta, v)\sin \theta- K(\theta,v)\cos \theta}{1+v^2} \,{\D}v\right.\\
&\qquad \left.\mbox{}+2r^{2/3} \int_\real f_0'' \left( r^{2/3} (\cos \Phi_\theta- v \sin \Phi_\theta)\right)\frac{K^2(\theta,v)}{1+v^2}\,{\D}v\right|^2 \,{\D}r \,{\D}\theta\\
&= \frac{4}{81 \pi^2 } \int_{K_1} r^{1/3-2\sigma}
\left| \int_\real f_0' \left( r^{2/3} (\cos \Phi_\theta- v \sin \Phi_\theta)\right)\frac{K^{\star\star}(\theta, v)}{1+v^2} \,{\D}v\right.\\
&\qquad \left.\mbox{}+ 2r^{2/3} \int_\real f_0''\left( r^{2/3} (\cos \Phi_\theta- v \sin \Phi_\theta)\right)\frac{K^2(\theta,v)}{1+v^2} \,{\D}v\right|^2 \,{\D}r\,{\D}\theta\\
&= \frac{4}{81 \pi^2 }\int_{K_1} r^{1/3-2\sigma} \left|J(r^{2/3},\theta)+ I(r^{2/3},\theta)\right|^2 \,{\D}r\,{\D}\theta\\
&= \frac{2}{27 \pi^2 }\int_{K_1} \rho^{1-3\sigma} \left|J(\rho,\theta)+ I(\rho,\theta)\right|^2 \,{\D}\rho\,{\D}\theta,
\end{aligned}
\end{equation}
where we use the following shorthand notation
\begin{gather*}
K^{\star\star}(\theta,v)
:= K^{\star}(\theta, v)\sin \theta- K(\theta,v)\cos \theta
= -v \sin (\theta-\omega_\theta)- \cos (\theta-\omega_\theta),
\\
J(\rho,\theta)
:= \int_\real f_0' \left(\rho(\cos \Phi_\theta- v \sin \Phi_\theta)\right)\frac{K^{\star\star}(\theta, v)}{1+v^2}\,{\D}v,
\\
I(\rho,\theta)
:= 2\rho \int_\real f_0'' \left( \rho(\cos \Phi_\theta- v \sin \Phi_\theta)\right)\frac{K^2(\theta, v)}{1+v^2} \,{\D}v.
\end{gather*}
Observe that $\theta-\omega_\theta\in (0,2\pi)$ for $\theta\in (\frac{\pi}{2}, 2\pi)$, and
$\theta-\omega_\theta\in (\frac{4\delta}{3},2\pi-\frac{4\delta}{3})$ whenever $\theta\in (\frac{\pi}{2}+\delta, 2\pi-\delta)$.
Without loss of generality we may assume that $J(\rho,\theta)+ I(\rho,\theta)\not\equiv 0$ on $K_1$. Let us show that
$\lim_{\rho\to 0} |J(\rho,\theta)+ I(\rho,\theta)| = C(f_0, \theta)>0$. This guarantees that we can choose some $K_{11}\subset K_1$ such that
\begin{equation}\label{K11}
|J(\rho,\theta)+ I(\rho,\theta)|\geq C(f_0)>0 \quad \text{on $K_{11}$.}
\end{equation}
Using the change of variables $x=v\rho$ and dominated convergence, we get
\begin{align*}
I(\rho,\theta)
&= 2\int_\real f_0''\left(\rho\cos \Phi_\theta- x \sin \Phi_\theta\right)\frac{\big(\rho \cos \omega_\theta -x \sin \omega_\theta)^2}{\rho^2+x^2} \,{\D}x\\
& \underset{\rho\to 0}{\longrightarrow} \,2\sin^2 \omega_\theta \int_\real f_0''\left(-x \sin \Phi_\theta\right) \,{\D}x=\frac{2\sin^2 \omega_\theta}{\sin \Phi_\theta} \int_\real f_0''(x) \,{\D}x =0,
\end{align*}
since we assume that $f_0\in W_1^2 (\real)$, so that $f_0'$ vanishes at infinity and $\int_\real f_0''(x) \,{\D}x=0$.
For $J(\rho,\theta)$ we have, using the same change of variables,
\begin{align*}
J(\rho, \theta)&= -\int_\real f_0'\left(\rho \cos \Phi_\theta- \rho v \sin \Phi_\theta\right)\frac{\cos (\theta-\omega_\theta) + v \sin(\theta-\omega_\theta)}{1+v^2} \,{\D}v\\
&= -\int_\real f_0'\left(\rho \cos \Phi_\theta- x \sin \Phi_\theta\right)\frac{\rho\cos (\theta-\omega_\theta) + x \sin(\theta-\omega_\theta)}{\rho^2+x^2} \,{\D}x\\
&= - \left( \int_{|x|>\epsilon} +\int_{|x|\leq \epsilon} \right) \left( \dots\right) \,{\D}x.
\end{align*}
The first integral can be treated with the dominated convergence theorem because we have $f_0'\in L_1(\real)$ and
$\rho (\rho^2+x^2)^{-1} \leq x^{-2}$, $x(\rho^2+x^2)^{-1} \leq x^{-1}$ are bounded for $|x|>\epsilon$. Therefore,
$$
\lim_{\rho\to 0} \left[- \int_{|x|>\epsilon}\left(\dots \right)\,{\D}x\right]
= - \sin(\theta-\omega_\theta)\int_{|x|>\epsilon}\frac{f'_0(-x\sin \Phi_\theta)}{x}\,{\D}x.
$$
Now we estimate the two parts of the second integral. For
$$
-\int_{|x|\leq \epsilon} f_0'(\rho \cos \Phi_\theta-x\sin \Phi_\theta)
\frac{\rho\cos(\theta-\omega_\theta)}{\rho^2+x^2}\,{\D}x
$$
we have
$\rho(\rho^2+x^2)^{-1}\leq \epsilon^{-1} \rho x(\rho^2+x^2)^{-1} \leq \epsilon^{-1}$,
so this term tends to $0$ by the dominated convergence theorem. For the second term in this integral we have, using a change of variables and the Cauchy--Schwarz inequality,
\begin{align*}
|\sin& (\theta-\omega_\theta)|\cdot \left| \int_{|x|\leq \epsilon} f_0'
\left( \rho \cos\Phi_\theta -x \sin \Phi_\theta\right) \frac{x}{x^2+\rho^2}
dx\right| \\
& \leq
\left|\int_{|w|\leq \epsilon} \left(f_0' \left( \rho \cos \Phi_\theta- w \sin \Phi_\theta \right)- f_0' \left( \rho \cos \Phi_\theta\right)\right)\frac{w}{\rho^2+w^2}\,{\D}w\right|
\\
&\leq \int_{|w|\leq \epsilon} \int_0^1 |f_0''(\rho \cos \Phi_\theta- r w \sin \Phi_\theta) | \,{\D}r \,{\D}w\\
&\leq \sqrt{2\epsilon} \int_0^1 \left(\int_\real |f_0''(\rho \cos \Phi_\theta- v \sin \Phi_\theta)|^2 \,{\D}v\right)^{1/2} \,{\D}r\\
&\leq C_1(\theta) \sqrt\epsilon \,\|f_0''\|_2.
\end{align*}
Altogether we have, upon letting $\rho \to 0$ and then $\epsilon\to 0$, that
\begin{equation}\label{I10}
\lim_{\rho\to 0} I (\rho,\theta) = 0,
\end{equation}
\begin{equation}\label{J10}
\liminf_{\epsilon \to 0} \lim_{\rho\to 0} J (\rho,\theta)
=\sin (\omega_\theta-\theta) \liminf_{\epsilon\to 0} \int_{|x|>\epsilon} \frac{f_0'(x)}{x}\,{\D}x.
\end{equation}
If the ``$\liminf$'' diverges, it is clear that \eqref{K11} holds; if it converges but is still not equal to $0$, we can choose $K_{11}$ in such a way that $\sin (\omega_\theta-\theta)\neq 0$. Thus, the integral over $K_1$ blows up as $\int_0^1 \rho^{1-3\sigma}\,{\D}\rho=\infty$ for any $\sigma\geq 2/3$.
\medskip
To show the convergence result, we have to estimate $I$ and $J$ from above.
Write
\begin{align*}
J(\rho,\theta)
=& -\int_\real f_0'\left(\rho (\cos \Phi_\theta- \nu \sin \Phi_\theta)\right)\frac{\nu\sin (\theta-\omega_\theta)}{1+\nu^2} \,{\D}\nu \\
&\mbox{} -\int_\real f_0'\left(\rho ( \cos \Phi_\theta- \nu \sin \Phi_\theta)\right)
\frac{\cos(\theta-\omega_\theta)}{1+\nu^2} \,{\D}\nu =: J_1(\rho,\theta) + J_2(\rho,\theta).
\end{align*}
Since $f_0\in W_p^1(\real)$, the H\"older inequality and a change of variables give
\begin{equation}\label{J2}
\begin{aligned}
|J_{1}(\rho,\theta)|
&\leq \left(\int_\real \left| f_0' \left( \rho(\cos \Phi_\theta- v \sin \Phi_\theta)\right)\right|^p\,{\D}v\right)^{\frac 1p}\left( \int_\real \left(\frac{v}{1+v^2}\right)^q \,{\D}v\right)^{\frac 1q}\\
&\leq c |\rho \sin \Phi_\theta|^{-1/p}
\end{aligned}
\end{equation}
for all $\theta \in [\pi/2,2\pi]$ and $\rho>0$. An even simpler calculation yields
\begin{equation}\label{J1}
|J_{2}(\rho,\theta)|\leq c |\rho \sin \Phi_\theta|^{-1/p}
\end{equation}
for all $\theta \in [\pi/2,2\pi]$ and $\rho>0$. Now we estimate $I(\rho,\theta)$.
Note that for every $\theta\in [\pi/2, 2\pi]$ we have $K^2 (\theta,\nu)/(1+\nu^2) \leq C$. By a change of variables we get
\begin{equation}\label{i1}
\left| I(\rho, \theta)\right|
\leq \frac{C_1}{|\sin \Phi_{\theta}|} \int_\real \left|f_0''(w+\rho \cos \Phi_\theta )\right|\,{\D}w
\leq\frac{C_2}{|\sin \Phi_{\theta}|}
\end{equation}
for all $\theta \in [\pi/2,2\pi]$ and $\rho>0$. Note that for $\Phi_\theta\in [\pi + 2\delta/3, 2\pi-2\delta/3]$ it holds that $|\sin\Phi_\theta|>0$. Thus, on $K_1$ we have
\begin{equation}\label{JK-K1}
|I(\rho, \theta)+ J(\rho, \theta)| \leq C\rho^{-1/p},
\quad \theta \in [\pi/2+\delta,2\pi-\delta],\;\rho>0,
\end{equation}
implying
$$
\int_{ K_1}\left|r^{1-\sigma}\frac{\partial^2}{\partial x_1^2} f(r,\theta)\right|^2 r \,{\D}\theta \,{\D}r
\leq C \int_0^1 \rho^{1-3\sigma-2/p} \,{\D}\rho.
$$
The last integral converges if $\sigma\in (0,2/3)$ and $p>\frac{2}{2-3\sigma}$.
In order to complete the proof of the convergence part, let us show that the integrals over $K_2$ and $K_3$ are convergent for all $\sigma\in (0,1)$.
In the regions $K_2$ and $K_3$ we have $\dist ((r,\theta), \Gamma)\leq r |\cos \theta|$ and $\dist ((r,\theta), \Gamma)\leq r |\sin \theta|$, respectively. We will discuss only $K_2$ since $K_3$ can be treated in a similar way. We need to show that
\begin{equation}\label{K2-1}
\int_{K_2} \left| |r\cos \theta| ^{1-\sigma} \frac{\partial^2}{\partial x_1^2} f(r,\theta) \right|^2 r\,{\D}r\,{\D}\theta
< \infty
\quad\text{for all\ } \sigma\in (0,1).
\end{equation}
From \eqref{J2}, \eqref{J1} and \eqref{i1} we derive that for all $(\rho,\theta)\in \LLB$
\begin{equation}\label{j3}
\left|J(\rho, \theta)+I(\rho,\theta) \right|
\leq C \rho^{-\frac 1p} \left( |\sin \Phi_\theta|^{-1}+ |\sin \Phi_\theta|^{-\frac 1p}\right)
\leq C' \rho^{-\frac 1p}|\sin \Phi_\theta|^{-1}.
\end{equation}
Now we can use a calculation similar to \eqref{eq1} for $K_1$ to see that, in order to prove \eqref{K2-1}, it is enough to show that
\begin{equation}\label{K2-2}
\int_{\frac{\pi}{2}}^{\frac{\pi}{2}+\delta} \left( \frac{|\cos \theta|^{1-\sigma}}{\sin\Phi_\theta} \right)^2 {\D}\theta
< \infty.
\end{equation}
Observe that $\lim_{\theta\to \frac{\pi}{2}} \cos \frac13(\pi+ \theta) / \cos \theta=\frac{1}{3}$, implying
$$
\frac{|\cos \theta|^{1-\sigma}}{\sin\Phi_\theta}
= \frac{| \cos \theta|^{1-\sigma}}{2 \sin \frac 13(\pi+ \theta) \cos \frac 13(\pi+ \theta)}
\asymp |\cos\theta|^{-\sigma}
\quad\text{as $\theta\to \dfrac{\pi}{2}$}.
$$
Therefore, it is sufficient to note that for any $\sigma \in (0,1)$
$$
\int_{\pi/2}^{\pi/2+\delta} |\cos \theta|^{-2\sigma} \,{\D}\theta
\asymp \int_0^1 \frac{{\D}x}{(1-x^2)^\sigma}
= \int_0^1 \frac{{\D}x}{(1-x)^\sigma(1+x)^\sigma}
< \infty.
$$
Summing up, we have shown that
\begin{gather*}
\dist(\,\cdot\,,\partial\LLB)^{1-\sigma} \left| \frac{\partial^2}{\partial x_1^2} f \right| \in L_2(\LLB)
\quad\text{resp.}\quad
\notin L_2(\LLB),
\end{gather*}
according to $\sigma\in (0,2/3)$ or $\sigma\in [2/3,1)$.
\end{proof}
\begin{proof}[Proof of Theorem~\ref{t2}]
Let $\poisol$ be the solution to \eqref{P1} on $\LL$ with source function $g$, and define $w= g* N$ for the Newtonian potential $N$ on $\mathbb{R}^2$. As we have already mentioned in the introduction, $f:= w-\poisol$ is the solution to \eqref{D1} on $\LL$ with the boundary condition $h:= \tr w$ on $\partial \LL$.
Note that under the condition $g\in H_1(\real^2)\cap W_2^1(\LL)$ we have $\Delta w= g$ (cf.\ Stein~\cite[Theorem III.3.3, p.~114]{stein}), which implies $w\in W_1^{3}(\real^2)\cap W_2^{3}(\LL)$. By the trace theorem we have $h\in W_1^{2} (\partial \LL )\cap W_2^{5/2}(\partial \LL)$, which in terms of $f_0$ means $f_0\in W_1^{2} (\real)\cap W_2^{5/2}(\real)$. The explosion result of Theorem~\ref{t1} requires $f_0\in W_1^{2} (\real )\cap W_2^2(\real)$ and \eqref{f0}. The latter is guaranteed by the assumption on the trace in the statement of the theorem. Hence, $f\notin W_{2,\mathrm{loc}}^{1+\sigma}(\LL)$, $\sigma\geq 2/3$. Since
$w\in W_{2,\mathrm{loc}}^{2}(\LL)$, this implies that $\poisol\notin W_{2,\mathrm{loc}}^{1+\sigma}(\LL)$, $\sigma\geq 2/3$.
\smartqed\qed
\end{proof}
\subsection*{Acknowledgement} We thank S.\ Dahlke (Marburg) who pointed out the reference \cite{JK95}, N.\ Jacob (Swansea) for his suggestions on the representation of Sobolev--Slobodetskij spaces, and A.\ Bendikov (Wroc{\l}aw) who told us about the papers \cite{MM03}, \cite{MMY10}. We are grateful to B.\ B\"ottcher for drawing the illustrations and commenting on the first draft of this paper. Financial support from NCN grant
2014/14/M/ST1/00600 (Wroc{\l}aw) for V.~Knopova is gratefully acknowledged.
\section{Introduction}
In this paper we present a new code generation methodology for block diagram simulation tools
such as Scicos~\cite{scn}, VisSim SIMULATE\footnote{VisSim SIMULATE is a commercial modeling and simulation product
based on Scicos.} (VSS) and Simulink.
This methodology takes advantage of the operator overloading facilities
available in matrix-based languages such as Matlab, Scilab, Nsp~\cite{nsp} or Octave. The implementation is
done in the Nsp/Scicos and VSS environments \cite{scn}, but the code may easily be ported to
Matlab/Simulink and other
similar environments. In our implementation, the final C code can be generated either
directly, using a C pretty printer, or
through the use of the Gnat Model Compiler \cite{gnat}. In the latter case, an \verb+.xmi+
file is generated and then
\verb+gmc+ is invoked to generate the C code. Either way, the C code may be
reimported automatically into the Scicos and VSS environments as a \verb+CBlock+ for validation.
Not only may the block behaviors be easily and naturally implemented in the Nsp language, but
they can also be simulated directly in Scicos, if implemented as such. The implementation is natural
because most Scicos blocks support matrices and Nsp data types, but also because Nsp is the working
language in the Nsp/Scicos environment. In particular, Scicos model and block parameters are defined via Nsp
scripts and expressions, and evaluated by the Nsp interpreter.
The main reason why only parameter definitions in Scicos blocks use Nsp programs, while run-time block
code is invariably expressed in C, is performance. Parameters are evaluated once at compile time,
so performance is not an issue there, but if the run-time block behavior were expressed in Nsp, the simulation
performance would be significantly reduced.
The methodology proposed in this paper provides the possibility of defining Scicos/VSS blocks entirely
in Nsp. These blocks, which can be developed by Scicos developers, toolbox developers or end-users,
may be tested and validated directly by (low performance because interpreted)
simulations in Scicos. The Nsp code of the
block can then be used for code generation by the methodology presented in this paper.
In this methodology, an Nsp script is generated from the Scicos/VSS model. The generation of this script
is based on the result of the compilation of the model by Scicos or VSS compilers. The compiler result
includes in particular the order in
which the blocks in the model must be executed (their
functions called) and the types and sizes of all the block inputs and outputs. This generated script,
which contains calls to the Nsp functions of the blocks, evaluates block outputs, new discrete states,
state derivatives, zero-crossing surfaces, etc., i.e., all the information needed for both simulation
and code generation.
The main obstacle in generating compilable code (such as C or ADA) from Nsp, and more generally from
all Matlab-like languages, is the inability, in the general case,
to determine the types and sizes of variables before run-time.
But this is not an issue for the script generated from a Scicos or VSS model since in this case,
all the sizes and types of the variables representing block inputs and outputs are known
(determined by the compiler).
The main component of
the code generation tool presented here is the Nsp code generation facility that can produce
compilable code from such Nsp scripts. It should be emphasized that this facility is not a general
purpose code generator for arbitrary Nsp scripts. It is developed especially for Scicos/VSS usage where
in the generated code, data types involved are limited to matrices of doubles and various integer
data types, and all variable sizes can be statically determined.
This Nsp code generation facility is entirely developed in Nsp using its abilities to define
new data types and to perform operator overloading for these new data types.
In particular we define a new data type called \verb!bvar! representing a matrix and containing:
\begin{itemize}
\item type: numerics or symbolics
\item value: a matrix
\item name: string
\end{itemize}
If a variable of type \verb!bvar! is of type numerics, then its value represents its actual value
and it is similar to a regular Nsp matrix with the same value. However using the type \verb!bvar!, it
is possible to associate a name to the variable. If the variable is of type symbolics, then
its value is just a ``nominal value'', often zero, but its data type (double, int8, int16,...)
and its size represent the actual data type and size of the variable.
Operations involving numeric variables (of type \verb!bvar!) and regular Nsp variables never produce
symbolic variables (of type \verb!bvar!). Operations involving symbolic variables produce in general
symbolic variables except in special cases. In particular if \verb!X! is a symbolic variable, then
\verb!size(X)! is not; neither is \verb!datatype(X)!. The overloading of basic operators and language
primitives for these data types automatically overloads all Nsp functions that use these operators and
primitives thanks to dynamic scoping property of the Nsp language.
The main idea of the approach used in this paper is to overload basic Nsp operators
and functions for variables of type \verb!bvar! so that, given an Nsp script that performs
operations on regular matrices, if these matrices are replaced by variables of type \verb!bvar!,
the script runs as before but, in addition, a pseudo-code (a sequence of ``instructions'')
is generated and stored in a global list variable named \verb!code!.
Every time an operation involves a variable of type \verb!bvar!, the overloaded
operation performs the original operation returning the corresponding value as if the arguments
were not of type \verb!bvar! but in addition it adds one or several ``instructions'' to
the list \verb!code!. This mechanism may be seen as spying the script
as it is executed: when an operation involving a type \verb!bvar! variable is performed, it
is recorded in the \verb!code! list. The record includes not only the nature of the operation (for
example matrix multiplication) but also certain information on its operands. The information in
the generated pseudo-code is rich enough so that it can be used to generate code in C, ADA or other
languages, by simple pretty printing operations.
The execution of the script can be seen
as partial evaluation. When the value is unknown, only the type and size information is propagated;
when the value is known, the actual value is propagated. As long as the script can run to the end by
propagating this partial information, the code generation can be performed.
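To illustrate this partial evaluation, consider the following schematic (hypothetical) session, written
with the \verb!symbolics! constructor and the global \verb!code! list presented later in the paper:
the size of a symbolic variable is an ordinary numeric result that can drive tests and loops, whereas an
arithmetic operation on it produces a new symbolic variable and appends an instruction to \verb!code!.
\begin{verbatim}
// hypothetical sketch; symbolics() and the global list "code" are
// presented in the following sections
codegen_init();
u=symbolics(zeros(2,3)); // symbolic bvar: nominal value 0, size 2x3
sz=size(u)  // a plain numeric result [2,3], usable in tests and loops
v=u+1       // a new symbolic bvar; an instruction is appended to code
\end{verbatim}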
Unlike most code generators where the objective is to
transform a program written in a source language into a program in the target language preserving its
functionality, in our approach the program in the source language contains not only the code to
be translated but also specifications on how this transformation should be performed. This is done
by providing a set of functions for use as
code generation directives so that a code generation task may be expressed as a part of the Nsp script.
The Scicos/VSS block Nsp codes however will not make use of these functions.
Scicos users need not know about these directives; as far as they are concerned,
the block codes are written in standard Nsp language with regular Nsp data. During code generation,
the script will be calling these block codes with the variables expressing blocks input/outputs
and states defined as symbolic \verb!bvar! variables unbeknownst to the Scicos user.
There are many advantages in using the technique presented here as opposed to developing an independent
code generator: this code generator does not use a different parser for the scripting language; the script
is run using the tool itself. Similarly, the operators and functions used in the
process of partial evaluation are those of the scripting language itself.
Finally, the operator overloading is
performed in the scripting language, so it can be customized easily for different targets.
The method presented here
is implemented in Nsp but it may be developed in other languages providing similar overloading facilities,
in particular Matlab, Scilab and Octave.
The tool based
on this methodology provides a powerful and easily customizable code generator for Scicos and VSS.
Despite the
existence of code generators for Simulink (Matlab and Simulink coder in particular) a
tool based on the approach presented in this paper would still be of interest since it provides
more control over the generated code.
In this paper we will focus on code generation for discrete-time subsystems of Scicos and VSS
models. This is what is needed in the majority of embedded code generation applications, in
particular for the implementation
of embedded controllers. We consider double, boolean and signed and unsigned 8, 16 and 32 bit integer
data types.
We do not consider the complex data type (seldom used in embedded code), nor do we consider fixed-point
data types, which will be considered in a subsequent paper and for which the methodology presented here is
very promising.
\section{Simple Example}
We start with a very simple example to illustrate the basic idea of our approach. Consider the
Scicos model depicted in Fig.~\ref{m1}.
The objective is to generate C code for the Super Block the contents
of which is depicted in Fig.~\ref{m2}.
\begin{figure}[ht]
\begin{center}
\mybox{\includegraphics[scale=0.5]{ basic01}}
\caption{Main diagram.\label{m1}}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\mybox{\includegraphics[scale=0.5]{ basic02}}
\caption{Content of Super Block for which C code is generated.\label{m2}}
\end{center}
\end{figure}
Scicos code generator produces an Nsp script, based on the result of the Scicos compiler,
corresponding to the subsystem inside the Super Block. This script calls the Nsp functions associated
with the blocks in the subsystem. The block functions have an imposed calling sequence and
provide specific callback functionalities. Here we review the functions associated
with the basic blocks used in this subsystem to illustrate how block functions may be developed for
code generation.
A simplified version of the Nsp code of the delay block \verb!1/z! is given below:
\begin{verbatim}
function block=P_Unit_Delay(block,flag)
if flag==1 then
block.io(2)=block.state(1)
elseif flag==2 then
block.state(1)=block.io(1)
elseif flag==-1 then
z=convert(block.params.p1,datatype(block.io(1)))
if prod(size(z))==1 then z=z*convert(ones(size(block.io(1))),datatype(z));end
block.state(1)=z
end
endfunction
\end{verbatim}
The arguments of the block functions are an Nsp structure and an integer.
The structure, named \verb!block! in this code, contains the fields
\begin{itemize}
\item \verb!io!: a list of matrices of possibly different types and sizes representing block input and output values
\item \verb!state!: a list of matrices of possibly different types and sizes representing block states
\item \verb!params!: an Nsp structure coding the names and values of block parameters.
\end{itemize}
The second argument of the function is a flag that indicates what job the function should perform. The
value $-1$ is used for initialization. The function is called with this flag once at the beginning so that
it can initialize the internal states of the block, if any. Flag values $1$ and $2$ are used
respectively to code the output update and state update phases of the block behavior. Here,
we consider only code generation for discrete-time subsystems. For more general blocks, other
flag values are used for example for the computation of state derivative, evaluation of zero-crossing
surfaces, etc. For more information on the usage of flags see \cite{scn}, Chapter~9.
In the script generated from the Scicos or VSS model,
the Nsp codes of the blocks are called several times with different flag values.
Unlike block parameters, which are known at the time of
code generation, block input/output and state values are not known. Their values are defined as
\verb!bvar! variables when the block function is called. Even though the function has been developed
to receive a structure containing regular matrix arguments, thanks to operator overloading,
the function accepts this new argument.
The structures \verb!io! and \verb!state! are also new data types for which insertion and
extraction operations have been overloaded. But once again, the developer of the block function code
need not know about it.
When this function is called with flag $-1$, normally we would simply copy the parameter value into the
state, but the \verb!Delay! block accepts that the parameter be defined as a scalar, even if the
state is not scalar, in which case all the entries of the state matrix take the value of the parameter. Moreover,
even if the parameter is defined as a double (default data type in Nsp), the block may be of a
different data type, so the parameter should be converted into the
correct data type before being copied into the state. Note that the expressions
\verb!datatype(block.io(1))! and \verb!size(block.io(1))! in this code can be evaluated even if
\verb!block.io(1)! is symbolic (their values depend only on the data type and size of \verb!block.io(1)!).
Needless to say that
the execution of this part of the code does not perform any code generation, and thus no trace
of this code will be seen in the final generated code.
For this block, the state update and output update operations (Flag values $1$ and $2$) consist
simply of copying respectively the input to the state, and the state to the output. The corresponding
pseudo-code is generated thanks to the overloading of the insertion operation in
the \verb!io! and \verb!state! fields.
The simplified version of the code for the Gain block is as follows:
\begin{verbatim}
function block=P_GAINBLK(block,flag)
if flag==1 then
put_annotation("Gain block begins.")
block.io(2)=convert(block.params.p1,datatype(block.io(1)))*block.io(1)
put_annotation("Gain block ends.")
end
endfunction
\end{verbatim}
This block does not have a state, so its function contains no initialization phase or state update phase.
As in the case of the previous block, the block parameter, the gain value in this case, may be defined
as a double even if the block data type is something else, for example any type of integer.
The Gain block in Scicos may perform different types of operations. The gain parameter may be a matrix
and the block output is the product of this matrix with the input vector (which must have compatible sizes).
The gain parameter may also be a scalar, in which case the input and output have identical sizes
and the product multiplies all the elements of the input matrix by this parameter to compute the output.
The parameter may also be a matrix when the input is a scalar. All of this complexity is captured
in the Nsp multiplication operation used in the code. Indeed, the Nsp multiplication operation has
exactly the same properties. This similarity is often seen between basic Nsp operators and primitives, and
basic Scicos and VSS block behaviors. This is the fundamental reason why
using Nsp coding is a convenient way of describing the
behavior of Scicos and VSS blocks, often leading to a compact, readable and maintainable Nsp code.
The function \verb!put_annotation! is used to place comments in the generated code.
We now give the complete code of the \verb!SUMMATION! block function because it shows some advanced
features of block definitions that can directly be handled by our approach:
\begin{verbatim}
function block=P_SUMMATION(block,flag)
if flag==1 then
vars=block.io
nin=length(vars)-1
sgns=block.params.p2
put_annotation("Sum block begins with "+string(nin)+" inputs.")
if nin == 1 then
put_annotation("Using the sum function.")
if sgns(1)==-1 then
out=-sum(vars(1))
elseif sgns(1)==+1 then
out=sum(vars(1))
else
        error("wrong sign: "+string(sgns(1)))
end
else
if sgns(1)==-1 then
out=-vars(1)
elseif sgns(1)==+1 then
out=vars(1)
else
        error("wrong sign: "+string(sgns(1)))
end
for i=2:nin
if sgns(i)==-1 then
out=out-vars(i)
elseif sgns(i)==+1 then
out=out+vars(i)
else
          error("wrong sign: "+string(sgns(i)))
end
end
end
block.io($)=out // $ indicates the last element of the list
end
endfunction
\end{verbatim}
This block performs two very different summing operations. When the block has only one input, the
output of the block is the sum of the entries of the input matrix. When it has more than one input,
the block performs the addition (or subtraction, depending on the value of parameter \verb!p2!) of the
input matrices. The Nsp code contains both algorithms and the selection is made by testing the
number of inputs. This test, and other tests such as those deciding whether the operation to perform is
an addition or a subtraction, are ``partially evaluated'' and will not appear in the generated code.
Note that the conditional expressions of the \verb!if! statements and the range of \verb!for! loops
must be known (through partial evaluation) at the time of code generation. For example in this case,
\verb!nin!, which represents the number of inputs, is known because the length of \verb!block.io! is known.
The generated code will of course contain no trace of these conditional statements or loops.
Clearly if the conditional expression of an \verb!if! statement is not known, the execution of the
script cannot continue. This is a limitation of this approach.
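For instance, assuming a symbolic scalar \verb!u!, a data-dependent test such as the one below cannot be
executed by the script, whereas the expressional form provided by the \verb+If_exp+ function presented
later can be handled:
\begin{verbatim}
// not possible when u is symbolic: the truth value of the test is
// unknown at code-generation time, so the script cannot proceed
if u>0 then y=u, else y=-u, end
// acceptable alternative (see the If_exp function presented later):
// both branches are evaluated and the test is emitted in the target code
y=If_exp(u>0,u,-u)
\end{verbatim}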
Note also that the \verb!error! functions generate errors at the time of code generation if a parameter
value is incorrect. But if the Nsp code is used as run time code for simulation, they generate
error messages at the beginning of the simulation if a parameter does not have a correct value.
The Nsp code for the Scicos \verb!MUX! block is also straightforward since the operation of this block
corresponds to row concatenation in Nsp. The block may have more than two inputs in which case repeated
concatenations are required within a \verb!for! loop:
\begin{verbatim}
function block=P_MUX(block,flag)
if flag==1 then
vars=block.io
y=vars(1)
put_annotation("MUX block begins with "+string(length(vars)-1)+" inputs.")
for i=2:length(vars)-1
y=[y;vars(i)]
end
block.io($)=y
put_annotation("MUX block ends.")
end
endfunction
\end{verbatim}
The generated C code for the Super Block in Fig.~\ref{m2} contains
two functions for state update and output update:
\begin{verbatim}
#include <scicos/scicos_block4.h>
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <math.h>
typedef int boolean;
/* Start1000*/
static double z_10001=0;
static double z_10002=0;
static double link10004=0;
void initialize1000(){
static double tmp_13=0;
static double tmp_14=0;
static double tmp_15=0;
z_10001=tmp_13;
z_10002=tmp_14;
link10004=tmp_15;
}
void updateOutput10001(double *inouts1,double *inouts2){
double tmp_1;
double tmp_5[2];
/* Gain block begins.*/
/* Gain block ends.*/
tmp_1=z_10001;
/* Sum block begins with 2 inputs.*/
link10004=(tmp_1-z_10002);
/* MUX block begins with 2 inputs.*/
tmp_5[0]=tmp_1;
tmp_5[1]=*inouts1;
/* MUX block ends.*/
inouts2[0]=tmp_5[0];
inouts2[1]=tmp_5[1];
}
void updateState10001(double *inouts1,double *inouts2){
z_10001=link10004;
z_10002=*inouts1;
}
/* End1000*/
void toto1000(scicos_block *block,int flag)
{
if (flag == 1) {
updateOutput10001((GetRealInPortPtrs(block,1)),(GetRealOutPortPtrs(block,1)));
}
else if (flag == 2) {
updateState10001((GetRealInPortPtrs(block,1)),(GetRealOutPortPtrs(block,1)));
}
else if (flag == 4) {
initialize1000();
}
\end{verbatim}
The calling sequence of the state update and output update C routines of the block contains
pointers to the input and output data, in that order. The ``state'' of the block, which
consists of the subsystem blocks' input/outputs (link values), and the state of the Delay block
are defined as global variables so that their values
are preserved from the output update phase to state update phase, and from one time step to the next.
Not all link values need to be treated as global, further code optimization may be used to identify
link values that need not be preserved and thus simplify the
code by removing these links out of the global variables.
In this case additional optimization will also
remove the unnecessary temporary variables \verb!tmp_1! and \verb!tmp_5!, but the C compiler does
this anyway.
Even though there is no scalar type in Nsp (a scalar is just a one by one matrix)
the generated C code uses, for example for real values, the
type \verb!double! for scalar variables and not \verb!double *!, which is used for non-scalars only. But
the arguments of function calls are all passed as pointers.
Following the code generation process, the Super Block is automatically replaced in the model
with a \verb!CBlock! containing
the generated C code; see Fig.~\ref{m4}. The simulation of this new model can be used to
compare simulation results before and after code generation. This is a fast and efficient method
for testing the generated code.
\begin{figure}[ht]
\begin{center}
\mybox{\includegraphics[scale=0.5]{ basic-c01}}
\caption{Super Block is automatically replaced with a C block for testing.\label{m4}}
\end{center}
\end{figure}
\section{Operator overloading and pseudo-code generation}
Overloading of operators and primitives is a powerful method in Matlab like environments used
for various applications, such as automatic differentiation~\cite{wr,pwr}. Overloading can be used
to alter the behavior of an existing code, in particular by adding additional operations such as
the evaluation of the derivatives for automatic differentiation, or pseudo-code generation in our case.
\subsection{New data types and overloading}
In Nsp~\cite{nsp}, defining new datatypes for which operator overloading is possible is done by
writing the new datatype definition in C code. Some Nsp internal tools
are available to ease the writing of the code necessary to implement a new datatype definition.
As an example, the C-code needed to implement the \verb+bvar+ datatype was generated from the
following simple definition of the \verb+bvar+ object.
\begin{verbatim}
(define-object Bvar
(in-module "Bvar")
(parent "Object")
(c-name "NspBvar")
(fields
'("gboolean" "sym" "hidden" "FALSE");
'("NspObject*" "value" "hidden" "NULL" );
'("char*" "varname" "hidden" );
)
(gtype-id "Bvar")
)
\end{verbatim}
When the new datatype is defined, the definition of the overloaded operators can be performed in Nsp or in C.
Examples on how to define new data types in Nsp are given in the \verb!src/types-test! directory of
Nsp source code.
The main new data type used in this application is called \verb!bvar! (block variable).
An object of type \verb!bvar! is a wrapper around a standard Nsp object decorated with new attributes:
an attribute containing the name of the variable and an attribute to specify whether the object is
numeric or symbolic.
The following lines of code explain how to create a \verb!bvar! variable, which encapsulates a
\verb!1x1! symbolic matrix of type double named \verb!x! having nominal value~67.
\begin{verbatim}
-nsp->A=bvar(varname="x",value=67,symbolic
A = "x"
-nsp->type(A,'short')
ans = s (1x1)
bvar
-nsp->A.get_value[]
ans = r (1x1)
| 67 |
-nsp->A.get_varname[]
ans = s (1x1)
x
-nsp->A.is_symbolic[]
ans = b (1x1)
| T |
\end{verbatim}
The fields in the \verb!A! variable can be obtained or changed using methods, and the type
of object \verb!A! is obtained by using the \verb!type! function.
The short type name of a variable is the string which is used to define functions used
for overloading. Suppose we define the function \verb!f_bvar! in a function library as follows~:
\begin{verbatim}
function y=f_bvar(A) y=size(A.get_value[]);endfunction;
\end{verbatim}
Then calling \verb!f(A)! will return the size of the value stored in the \verb!bvar! variable \verb!A! (here for a variable whose value is a $3\times 4$ matrix):
\begin{verbatim}
-nsp->f(A)
ans = r (1x2)
| 3 4 |
\end{verbatim}
Nsp operators and functions used for writing Scicos and VSS blocks are overloaded for \verb!bvar!
variables. As an example we give the code
for the function \verb!inv_bvar!, which implements matrix inversion.
When the matrix to be inverted is \verb!2x2! we use the explicit inverse formula.
\begin{verbatim}
function out=inv_bvar(in)
[m,n]=size(in);
if m<>n then error("Division by non square matrix not supported.");end
if m==1 then
out=1/in
elseif m==2 then
out=bvarempty(in)
out(1,1)=in(2,2)
out(2,2)=in(1,1)
out(1,2)=-in(1,2)
out(2,1)=-in(2,1)
out=out/(in(1,1)*in(2,2)-in(1,2)*in(2,1))
else
....
end
endfunction
\end{verbatim}
When used with a \verb!2x2! symbolic matrix, the evaluation of the above
function executes the previous code, which emits corresponding pseudo-code directives stored in
a global variable called \verb!code!. This variable can then be used to produce
code for different targets. Here is the C code obtained after calling
\verb+inv+ on a symbolic \verb!2x2! matrix:
\begin{sessioncmd}
-nsp->codegen_init();
-nsp->A=symbolics(rand(2,2));
-nsp->inv(A);
-nsp->global('code');
-nsp->[L,code,declarations]=code_optimize(list(),code,list(),list(),opt=
-nsp->txt=code_printer_c(code,list())
txt = s (10x1)
tmp_2[0]=(tmp_1[3]);
tmp_2[3]=(tmp_1[0]);
tmp_2[2]=(-(tmp_1[2]));
tmp_2[1]=(-(tmp_1[1]));
tmp_19=(((tmp_1[0])*(tmp_1[3]))-((tmp_1[2])*(tmp_1[1])));
tmp_20[0]=((tmp_2[0])/ tmp_19);
tmp_20[2]=((tmp_2[2])/ tmp_19);
tmp_20[1]=((tmp_2[1])/ tmp_19);
tmp_20[3]=((tmp_2[3])/ tmp_19);
\end{sessioncmd}
By looking at the code of function \verb!inv_bvar!, it may seem
surprising that its execution emits pseudo-code. But note that assignments,
the unary minus operation and multiplications are also overloaded. It is precisely the role of the overloaded
functions to produce the required pseudo-code.
As an example of a
function which contains explicit code directives, we give
the code of the overloaded unary minus function:
\begin{verbatim}
function out = minus_bvar(in)
// unary minus
global overflow_option
if ~is_sym(in) then out= ( - valueof(in));return,end
if prod(size(valueof(in)))==1 then
out=symbolics(-valueof(in),getunique())
rhs=expression("-",list(in),overflow_option)
gen_def(out,rhs)
else
out=bvarempty(in)
sz=size(valueof(in))
for i=1:sz(1)
for j=1:sz(2)
out(i,j)=-in(i,j)
end
end
end
endfunction
\end{verbatim}
When the function argument is not symbolic then the function
just performs \verb!( - valueof(in))! and no code is emitted.
When the variable is symbolic and of size \verb!1x1! the function
returns a new symbolics variable (\verb!out=symbolics(-valueof(in),getunique())!)
and the code is emitted by the call to the \verb!expression! function
(\verb!expression("-",list(in),overflow_option)!) and by the call to the
function \verb!gen_def(out,rhs)!. When the variable is a general matrix, the same function is applied to every entry of the matrix.
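As an illustration of the same pattern, a binary operation such as the elementwise addition of two
\verb!bvar! operands of identical sizes could be overloaded along the following lines. This is only a
sketch written with the helper functions used above (\verb!is_sym!, \verb!valueof!, \verb!symbolics!,
\verb!getunique!, \verb!expression!, \verb!gen_def!, \verb!bvarempty!); the mixed numeric/symbolic cases
are omitted and the actual library code may differ:
\begin{verbatim}
function out = plus_bvar(in1,in2)
  // sketch: elementwise addition of two bvar operands of identical sizes
  global overflow_option
  if ~is_sym(in1) & ~is_sym(in2) then
    out=valueof(in1)+valueof(in2);return,end
  if prod(size(valueof(in1)))==1 then
    out=symbolics(valueof(in1)+valueof(in2),getunique())
    rhs=expression("+",list(in1,in2),overflow_option)
    gen_def(out,rhs)
  else
    out=bvarempty(in1)
    sz=size(valueof(in1))
    for i=1:sz(1)
      for j=1:sz(2)
        out(i,j)=in1(i,j)+in2(i,j)
      end
    end
  end
endfunction
\end{verbatim}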
\section{Code generation directives}
A number of specific Nsp functions are used as directives to control code
generation during the execution of the script. Some are easy to understand, for example
directives to initiate and terminate the code generation process:
\begin{verbatim}
codegen_init()
textout=codegen_finalize()
\end{verbatim}
or to start and end a function definition in the target language:
\begin{verbatim}
StartFunction(f,_io)
EndFunction()
\end{verbatim}
More complex directives are used to specify properties of the generated code that cannot be naturally
expressed in Nsp due to semantic differences between the Nsp language and the target language. The
most important directives are described below.
\subsection{Creating persistent variables}
Persistent or static variables do not exist in Nsp but they exist in the target languages
considered for code
generation (in particular C) and are needed. So special directives are introduced for creating them.
A pool of persistent variables may be created with the functions
\verb+persistent_create()+ and \verb+persistent_insert(...)+.
Each variable in the pool of persistent variables is a \verb+bvar+ object
of type symbolics. When code is emitted, a top level declaration (of static type)
will be generated for each persistent variable used in the generated code.
The function \verb+persistent_extract()+ can be used to obtain the persistent
variable.
When the function \verb+persistent_insert+ is used with a variable name
already present in the pool of persistent variables then a local variable
declaration is emitted and the local variable value is copied into the
persistent variable.
In the generated code a special function called \verb+initialize+ will
contain code which resets all the persistent variables to their default values.
\begin{sessioncmd}
-nsp->// demo {\bf for} creation of a pool of persistent variables
-nsp->codegen_init();
-nsp->states=persistent_create();
-nsp->states=persistent_insert(states,'x1',1:3);
-nsp->states=persistent_insert(states,'x2',7);
-nsp->// note that unused persistent declarations are removed from the
-nsp->// generated code (x3 is unused).
-nsp->states=persistent_insert(states,'x3',1:6);
-nsp->io=inouts();
-nsp->StartFunction("foo",io);
-nsp->code_insert('annotation','copy [4:6] into x1 with memcpy');
-nsp->states=persistent_insert(states,'x1',[4:6]);
-nsp->code_insert('annotation','copy with assign since x2 is 1x1');
-nsp->states=persistent_insert(states,'x2',8);
-nsp->EndFunction("foo",io);
-nsp->txt=codegen_finalize()
txt = s (17x1)
static double x1[]=\{ 1, 2, 3 \};
static double x2=7;
void initialize()\{
static double tmp_4[]=\{ 1, 2, 3 \};
static double tmp_5=7;
memcpy(x1,tmp_4,3*sizeof(double));
x2=tmp_5;
\}
void foo()\{
double tmp_1[]=\{ 4, 5, 6 \};
/* copy [4:6] into x1 with memcpy*/
memcpy(x1,tmp_1,3*sizeof(double));
/* copy with assign since x2 is 1x1*/
x2=8;
\}
\end{sessioncmd}
\subsection{Creating function arguments}
Function arguments are created with the use of the functions
\verb+inouts+ and \verb+inouts_insert+. Arguments are \verb+bvar+ objects
of type symbolics. When the \verb+inouts_insert+ function is used with a variable name
already present in a sequence of function arguments, then a local
variable declaration is emitted and the local variable value is copied into the
function argument.
In the following example a function \verb+foo+ with one argument is created.
When calling the function \verb+foo+, the persistent variable \verb+x1+ is
filled with the value of the argument of \verb+foo+, and then this
argument is filled with the constant value \verb+[7,8,9]+.
\begin{sessioncmd}
-nsp->codegen_init();
-nsp->states=persistent_create();
-nsp->states=persistent_insert(states,'x1',1:3);
-nsp->// in and out arguments of the {\bf function}
-nsp->io=inouts();
-nsp->io=inouts_insert(io,'inouts1',[4,8,9]);
-nsp->//
-nsp->StartFunction("foo",io);
-nsp->code_insert('annotation','copy inouts1 to x1 with memcpy');
-nsp->states=persistent_insert(states,'x1',io.inouts1);
-nsp->code_insert('annotation','copy [7,8,9] into inouts1');
-nsp->io = inouts_insert(io,'inouts1',[7,8,9]);
-nsp->EndFunction("foo",io);
-nsp->txt=codegen_finalize();
\end{sessioncmd}
\subsection{Function constant}
The declaration and initialization of local variables (automatic variables) may
be performed with the \verb+constant+ function.
\begin{sessioncmd}
-nsp->// declaration {\bf for} a constant
-nsp->codegen_init();
-nsp->out=constant([5,0;7,8],'x1'); // declaration + set
-nsp->global('declarations','code','top_declarations','text');
-nsp->txt=code_printer_c(code,declarations)
txt = s (1x1)
double x1[]=\{ 5, 7, 0, 8 \};
\end{sessioncmd}
\subsection{Function expand}
The function \verb+expand+ returns a new symbolic variable. In particular,
\verb+expand(in,m,n)+ returns a new symbolic \verb+bvar+ variable
with the same type as the variable \verb+in+ but with size \verb+mxn+.
It also inserts a declaration for the new returned variable in the generated code.
\begin{sessioncmd}
-nsp->// demo {\bf for} expand
-nsp->codegen_init();
-nsp->in=numerics
-nsp->var=expand(in,2,3)
var = "tmp_2"
-nsp->global('declarations','code','top_declarations','text');
-nsp->txt=code_printer_c(code,declarations)
txt = s (1x1)
int tmp_2[]=\{ TRUE, TRUE, TRUE, TRUE, TRUE, TRUE \};
-nsp->
-nsp->
\end{sessioncmd}
\subsection{Functions bvarcopy and bvarempty}
The functions \verb+bvarcopy+ and \verb+bvarempty+ are used to create
new symbolic \verb+bvar+ variables with the same type and size as their
argument. In the case of \verb+bvarcopy+, code for copying the
value of the argument \verb+in+ to the value of the newly created
variable is also emitted.
\begin{sessioncmd}
-nsp->// demo {\bf for} bvarcopy and bvarempty
-nsp->codegen_init();
-nsp->in=numerics(rand(2,3));
-nsp->var=bvarcopy(in); // var is declared and set or mcopy is generated
-nsp->global('declarations','code','top_declarations','text');
-nsp->txt=code_printer_c(code,declarations)
txt = s (2x1)
Column 1 :
double tmp_2[]=\{ 0.8147236919030547, 0.1354770041070879,
0.9057919341139495, 0.8350085897836834,
0.1269868118688464, 0.9688677710946649 \};
memcpy(tmp_2,tmp_1,6*sizeof(double));
-nsp->
-nsp->codegen_init();
-nsp->in=numerics(rand(2,3));
-nsp->var=bvarempty(in); // var is declared and filled in declaration.
-nsp->global('declarations','code','top_declarations','text');
-nsp->txt=code_printer_c(code,declarations)
txt = s (1x1)
Column 1 :
double tmp_4[]=\{ 0.91337585565634072, 0.22103404277004302,
0.63235924998298287, 0.30816705035977066,
0.09754040162079036, 0.54722059634514153 \};
\end{sessioncmd}
\section{Application to code generation for Scicos}
\subsection{Code generation script}
The code generation script is an Nsp script generated automatically from a Scicos/VSS
model, and in particular from
the content of the Super Block for which code generation is requested. Scicos compiler provides all the
information needed to generate this script. The information includes in particular
the list of blocks and their order of execution for the
initialization, state update and output update phases. The compilation result also contains the data types
and sizes of all the link variables inside the Super Block and those of its input and output ports.
To generate the script, the model variables are first determined. These variables contain the
signals on various links and the internal block states. They lead to static C variables in the final C code
because they have to retain their values from one call to the next. Links that transfer constant values are partially evaluated
and do not appear in the generated code. This is done by propagating constants through the diagram, when possible.
The script starts with a section to determine the variables' initial values, which are then explicitly
defined in the generated code. This section calls the block routines with \verb+flag+~$-1$ and does not use \verb+bvar+ type
variables. Subsequently the generated script calls in proper order the block routines with flags $1$ and $2$. The input and
output values in these calls are of type \verb+bvar+ and the C code is generated thanks to operator overloading.
Unlike the code generation script, which is generated automatically and deals explicitly with variables of
type \verb+bvar+, the developer of the Nsp Scicos/VSS block functions
writes the function codes in standard Nsp as if these functions were only used for simulation.
In fact, he does not even need to know that the
arguments of his functions may at times be of a type other than the regular Nsp data types. There are however certain
limitations on what he can use in his code. In particular, no conditional tests may be based on values that
cannot be constant propagated (partially evaluated to a numeric value) and the compiler should be able to determine
the number of iterations in
\verb+for+ and \verb+while+ loops, assuming the types and sizes of all the input arguments to his functions
are known. Note also that not all Nsp primitives may be used; only those that have been overloaded
for the \verb+bvar+ type may be used (most basic Nsp primitives have already been overloaded).
The constraint on the usage of conditional statements such as the \verb+if then else+ construct may seem severe, but
it turns out that for describing Scicos/VSS block behaviors, conditional statements based on non-constant propagated
conditions are rarely encountered. In most cases such conditions may be expressed as expressional
\verb+if then else+ and \verb+select case+ statements, which are implemented as \verb+If_exp+ and \verb+Select_exp+ functions.
An expressional \verb+if then else+ statement expressed as \verb+If_exp+ function is used as follows:
\begin{verbatim}
out = If_exp(cond, exp1, exp2)
\end{verbatim}
If \verb+cond+ is true, \verb+out+ takes the value of the expression \verb+exp1+, if not, it takes the value
of the expression \verb+exp2+. Note however that in either case both expressions \verb+exp1+ and \verb+exp2+ are
evaluated, unlike for a real \verb+if then else+ statement where only one of the expressions is evaluated. Structural
conditions, used in particular for subsampling in Scicos and VSS, are realized using the special blocks
\verb+IfThenElse+ and \verb+ESelect+.
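As an illustration, a hypothetical block whose output switches between two gains according to the sign
of its input could be written with \verb+If_exp+ as follows (the block and parameter names are chosen
for the example only):
\begin{verbatim}
function block=P_SIGN_GAIN(block,flag)
// hypothetical block: the test u>0 cannot be partially evaluated, so
// both branches are computed and If_exp emits the selection in the
// generated code
if flag==1 then
  u=block.io(1)
  block.io(2)=If_exp(u>0,block.params.p1*u,block.params.p2*u)
end
endfunction
\end{verbatim}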
\subsection{Conditional subsampling}
We have seen so far that in order for the code generation script to function, the expressions of all
conditional statements (\verb!if-then-else!, \verb!when!,...) must be partially evaluated, which means that
they cannot depend on symbolic variables. But Scicos and VSS models may include conditional subsampling using
in particular the \verb!IfThenElse! block. Subsampling is an important mechanism available in Scicos, so a special
method is introduced to handle such models for efficient code generation.
\begin{figure}[htb]
\begin{center}
\mybox{\includegraphics[scale=0.5]{ code01}}
\caption{Model for testing the coding scheme.\label{code}}
\end{center}
\end{figure}
An example of Scicos model with conditional subsampling is given in Fig.~\ref{code}. The content of the Super Block
for which C code is generated is illustrated in Fig.~\ref{code_SB}. This model implements a simple
coding scheme where the Super Block receives a sequence of $0$ and $1$ integers and generates a sequence providing the
number of consecutive $0$'s or $1$'s. The Super Block contains an \verb+IfThenElse+ block for testing whether or not
two successive integers have the same value, and depending on the result, a counter is either incremented or reset to~$1$.
\begin{figure}[htb]
\begin{center}
\mybox{\includegraphics[scale=0.5]{ code02}}
\caption{The Super Block modeling the coding scheme using subsampling.
Code is generated for this Super Block. \label{code_SB}}
\end{center}
\end{figure}
\clearpage
The generated code shows that the test implemented by the \verb+IfThenElse+ block has produced an \verb+if+ statement in~C:
\begin{verbatim}
/* Scicos Computational function
* Generated by Code_Generation toolbox of Scicos with scicos4.4.1
* date : 05 mars 2015
*/
#include <scicos/scicos_block4.h>
#include <string.h>
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <math.h>
/* Start1004*/
static int32_t z_10041=0;
static int32_t z_10042=0;
static int32_t link10046=0;
static int32_t link10048=1;
void initialize1004(){
static int32_t tmp_13=0;
static int32_t tmp_14=0;
static int32_t tmp_15=0;
static int32_t tmp_16=1;
z_10041=tmp_13;
z_10042=tmp_14;
link10046=tmp_15;
link10048=tmp_16;
}
void updateOutput10041(int32_t *inouts1,int32_t *inouts2,int32_t *inouts3){
/* Selct block starts*/
/* Selct block ends*/
link10046=0;
}
void updateOutput10042(int32_t *inouts1,int32_t *inouts2,int32_t *inouts3){
/* Selct block starts*/
/* Selct block ends*/
link10046=*inouts2;
}
void updateOutput10043(int32_t *inouts1,int32_t *inouts2,int32_t *inouts3){
int tmp_6;
int tmp_7;
/* RELATIONALOP block starts*/
tmp_6=(*inouts1!=z_10041);
/* RELATIONALOP block ends*/
*inouts3=tmp_6;
*inouts2=z_10042;
tmp_7=(*inouts3>0);
if (tmp_7) {
updateOutput10041(inouts1,inouts2,inouts3);
} else {
updateOutput10042(inouts1,inouts2,inouts3);
}
/* Sum block begins with 2 inputs.*/
link10048=(link10046+1);
}
void updateState10043(int32_t *inouts1,int32_t *inouts2,int32_t *inouts3){
int tmp_11;
z_10041=*inouts1;
/* RELATIONALOP block starts*/
/* RELATIONALOP block ends*/
z_10042=link10048;
tmp_11=(*inouts3>0);
}
/* End1004*/
void toto1004(scicos_block *block,int flag)
{
if (flag == 1) {
updateOutput10043((Getint32InPortPtrs(block,1)),(Getint32OutPortPtrs(block,1)),(Getint32OutPortPtrs(block,2)));
}
else if (flag == 2) {
updateState10043((Getint32InPortPtrs(block,1)),(Getint32OutPortPtrs(block,1)),(Getint32OutPortPtrs(block,2)));
}
else if (flag == 4) {
initialize1004();
}
}
\end{verbatim}
The code generation script uses the \verb+if_cos+ function, which uses the \verb+If_exp+ function.
The \verb+if_cos+ function is defined as follows:
\begin{verbatim}
function if_cos(in,f1,f2)
// f1 and f2 are expressions of type call(..)
if is_sym(in) then
code_insert("if_expr", in > (0),f1,f2);
elseif valueof(in)>0 then
code_insert("ident",f1)
else
code_insert("ident",f2)
end
endfunction
\end{verbatim}
Note that the \verb+if_cos+ function does not return any value. This is consistent with the fact that the
\verb+IfThenElse+ block has no regular output. The second and third arguments of this function are calls to external
(generated) C routines. The arguments of these calls are the Super Block inputs and outputs, passed as pointers, so that these
external routines can only modify these inputs/outputs (in general only outputs) and global variables.
\subsection{Example}
The methodology presented in this paper may be used to generate code for models using existing Scicos blocks
but it can also be used to generate code for models containing \verb+sciblk+s, i.e. blocks where the user has
expressed the block's behavior in Nsp. The following example is adapted from an extended Kalman filter
application developed in \cite{god} for Simulink with an embedded Matlab block.
\begin{figure}[ht]
\begin{center}
\mybox{\includegraphics[scale=0.5]{ kalman01}}
\caption{Model of an extended Kalman filter.\label{k1}}
\end{center}
\end{figure}
\begin{figure}[ht]
\begin{center}
\mybox{\includegraphics[scale=0.5]{ kalman02}}
\caption{The Nsp block is included in a Super Block for code generation. The delay blocks are used to
hold the state of the filter and the corresponding covariance matrix.\label{k2}}
\end{center}
\end{figure}
The Kalman filter equation is expressed in Nsp in the \verb+sciblk+ block:
\begin{verbatim}
function [blk] = P_sciblk(blk,flag)
if flag==1 then //output computation
meas=blk.io(1);xhat=blk.io(2);P=blk.io(3);
//put_annotation("sciblk block begins.")
// This Nsp Function implements an extended Kalman filter used
// for object tracking.
//
// The states of the process are given by
// x = [x_position; x_velocity; y_position; y_velocity];
//
// and the measurements are given by
// y = [range; bearing]
//
// where
// range = sqrt(x_position^2 + y_position^2)
// bearing = atan(y_position/x_position)
// Author: Phil Goddard (phil@goddardconsulting.ca)
// Date: Q2, 2011.
// Adapted to Nsp/Scicos in 2015
dt=0.1
Q = diag([0 .1 0 .1]);
R = diag([50^2 0.005^2]);
// Calculate the Jacobians for the state and measurement equations
F = [1 dt 0 0;0 1 0 0;0 0 1 dt;0 0 0 1];
rangeHat = sqrt(xhat(1)^2+xhat(3)^2);
bearingHat = atan(xhat(3),xhat(1));
yhat = [rangeHat; bearingHat];
H = [cos(bearingHat) 0 sin(bearingHat) 0;
-sin(bearingHat)/rangeHat 0 cos(bearingHat)/rangeHat 0];
 // Propagate the state and covariance matrices
xhat = F*xhat;
P = F*P*F' + Q;
// Calculate the Kalman gain
K = P*H'/(H*P*H' + R);
// Calculate the measurement residual
resid = meas - yhat;
// Update the state and covariance estimates
xhat = xhat + K*resid;
P = (eye(size(K,1),size(K,1))-K*H)*P;
// Post the results
blk.io(4)=xhat
blk.io(5)=P
elseif flag==2 then //discrete state computation (if any)
end
//put_annotation("sciblk block ends.")
endfunction
\end{verbatim}
Note that this is a very general Nsp code; it uses a number of matrix manipulation operators and functions. All these operators
and functions are overloaded. The constraints on the usage
of conditional statements and loops are also satisfied since there are no such statements in the code.
\section{Conclusion}
This paper has provided a short presentation of a new methodology based on partial evaluation and overloading
used for the implementation of a code generation tool developed for Scicos and VSS.
The code is freely available in Nsp 1.0 (See \cite{nsp}).
\section{Introduction}
\label{sec:intro}
The discovery of quasicrystals~\cite{schechtman84} has focused considerable
interest on quasiperiodic or, more
generally, aperiodic systems~\cite{henley87a}.
In the field of critical
phenomena, due to their intermediate situation between periodic and random
systems, aperiodic models have been intensively studied (for a review,
see~\cite{grimm96}).
Furthermore, aperiodic multilayers are experimentally feasible and should
build a new class of artificial structures exhibiting interesting bulk and
surface properties.
Although aperiodic superlattices have already been produced by molecular beam
epitaxy~\cite{majkrzak91}, nothing has been done experimentally up to now from
the point of view of critical phenomena.
In the perspective of possible future experimental
studies in this context, it seems an interesting and challenging problem to
complete our
understanding through a mean field theory approach.
Surface critical behaviour has indeed been intensively investigated on
the basis of the Ginzburg-Landau theory~\cite{ginzburg50} in the
seventies~\cite{mills71}.
This led to a classification of the transitions which may occur at the
surface and to the derivation of scaling laws between surface and bulk critical
exponents~\cite{bray77a} (for a review,
see~\cite{binder83}). These early papers are known as an important stage in
the further developments of surface critical phenomena.
Seen from the side of critical phenomena, the universal behaviour of
aperiodically perturbed
systems is now well understood since Luck proposed a
relevance-irrelevance criterion~\cite{luck93a,luck93b}.
The characteristic length scale in a critical system is given by the
correlation
length and as
in
the Harris criterion for random systems~\cite{harris74}, the
strength of the fluctuations of the couplings on this scale determines the
critical behaviour.
An aperiodic perturbation can thus be relevant, marginal or irrelevant,
depending on the
sign of a crossover exponent involving the correlation length exponent $\nu$ of
the
unperturbed system and the wandering exponent $\omega$ which governs the
size-dependence
of the fluctuations of the aperiodic couplings~\cite{queffelec87}. In the light
of this criterion, the results obtained in early papers, mainly concentrated on
the
Fibonacci~\cite{igloi88}
and the Thue-Morse~\cite{doria89} sequences, found a consistent
explanation,
since, resulting from
the bounded fluctuations, a critical
behaviour which resembles the periodic case was found for the Ising model in
two dimensions.
In recent years, much progress has been made in the understanding of the
properties of marginal and relevant aperiodically perturbed systems. Exact
results for the
$2d$ layered Ising model and the quantum Ising chain have been
obtained with irrelevant, marginal and relevant aperiodic
perturbations~\cite{lin92a,iglotu94}.
The critical behaviour is in agreement with Luck's criterion leading to
essential singularities
or first-order surface transition when the perturbation is relevant and power
laws with
continuously varying exponents in the marginal situation with logarithmically
diverging
fluctuations. A strongly anisotropic behaviour has been recognized in this
latter
situation~\cite{berche95,berche96}. Marginal surface perturbations have also
been
studied with the Fredholm sequence~\cite{karevski95a} and conformal aspects
have
been
discussed~\cite{grimm94}.
In the present paper, we continue our study of marginal
sequences. The case of the Fibonacci sequence, which leads to irrelevant
behaviour in the
Ising model, should exhibit non-universal properties within the mean field approach according to
the Luck criterion, and it has not yet been studied in this context. The article
the Luck criterion and it has not yet been studied in this context. The article
is organized as
follows: in \sref{sec:GL}, we present the phenomenological Ginzburg-Landau
theory on a discrete lattice with a perturbation following a Fibonacci sequence
and we summarize the scaling arguments leading to Luck's criterion, then we
discuss the definitions of both bulk and surface thermodynamic quantities. We
consider magnetic properties in \sref{sec:mag}. Both bulk and surface
quantities
are computed numerically, leading to the values of the corresponding critical
exponents. In \sref{sec:therm}, we discuss the thermal properties and
eventually in \sref{sec:concl}, we discuss the upper critical dimension of
the model.
\section{Discrete Ginzburg-Landau equations for a Fibonacci aperiodic
perturbation}\label{sec:GL}
\subsection{Landau expansion and equation of state on a one-dimensional
lattice}
Let us first review briefly the essentials of the Ginzburg-Landau theory
formulated on a discrete lattice. We consider a one-dimensional lattice of $L$
sites with a lattice spacing $\ell$ and free boundary conditions. The critical
behaviour would be
the same as in a $d-$dimensional plate of thickness $L\ell$ with
translational invariance along the $d-1$ directions perpendicular to the chain
and extreme axial anisotropy which forces the magnetic moments to keep a
constant direction in the plane of the plate. We investigate the critical
properties of an aperiodically distributed perturbation within the framework of
a $\phi^4$ phenomenological Landau theory~\cite{landau}. The underlying
assumption in this approach is based on the
following expansion of the bulk
free energy density
\begin{equation}
f_b\{\phi_j\}={1\over 2}\mu_j\phi_j^2+{1\over 4}g\phi_j^4-H\phi_j+{1\over
2}c\left({\phi_{j+1}-\phi_j\over\ell}\right)^2,\label{eq-1} \end{equation}
where the aperiodic perturbation of the coupling constants is determined by a
two-digits
substitution rule and enters the $\phi^2$ term only. A dimensional analysis
indeed shows that
the deviation from the critical temperature, $\mu$, is the relevant scaling
field which has to
be modified by the perturbation. The free energy of the whole chain is thus
given by
\begin{equation}
F[\phi_j]=\sum_jf_b\{\phi_j\},
\label{eq-4}\end{equation}
and the spatial distribution of order parameter satisfies the usual functional
minimization:
\begin{equation}
\delta F[\phi_j]=F[\phi_j+\delta\phi_j]-F[\phi_j]=0.
\label{eq-5}\end{equation}
One then obtains the coupled discrete Ginzburg-Landau equations:
\begin{equation}
\mu_j\phi_j+g\phi_j^3-H-{c\over\ell^2}(\phi_{j+1}-2\phi_j+\phi_{j-1})=0.
\label{eq-6}\end{equation}
The coefficients $\mu_j$ depend on the site location and are written as
\begin{equation}
\mu_j=k_BT-(zJ-Rf_j)=a_0\left(1-{1\over\theta}+rf_j\right),
\label{eq-7}\end{equation}
where $J$ is the exchange coupling between neighbour sites in the homogeneous
system, $z$ the
lattice coordination and $f_j$ the aperiodically distributed sequence of $0$
and
$1$.
The prefactor $a_0=k_BT$ is essentially constant in the vicinity of the
critical point, and the temperature $\theta$ is normalized relatively to the
unperturbed system critical temperature:
$\theta=k_BT/zJ$. In the following, we will also use the notation
$\mu=1-1/\theta$. In
order to obtain a dimensionless equation, let us define
$\phi_j=m_j\sqrt{a_0/g}$
leading to the following non-linear equations for the $m_j$'s:
\begin{equation}
(\mu+rf_j)m_j+m_j^3-h-(m_{j+1}-2m_j+m_{j-1})=0,
\label{eq-8}\end{equation}
with boundary conditions
\numparts
\begin{eqnarray}
&(\mu+rf_1)m_1+m_1^3-h-(m_{2}-2m_1)=0,\\
&(\mu+rf_L)m_L+m_L^3-h-(-2m_L+m_{L-1})=0.
\end{eqnarray}
\label{eq-9}\endnumparts
Here,
the lengths are measured in units $\ell=\sqrt{c/a_0}$ and $h=H\sqrt{g/a_0^3}$
is
a reduced magnetic field.
One can point out the absence of a specific surface term
in
the free energy density. The surface equations for the order parameter profile
simply keep the
bulk form with the boundary conditions $m_0=m_{L+1}=0$ and our study will only
concern ordinary
surface transitions~\cite{binder83}.
\subsection{Fibonacci perturbation and Luck's criterion}
The Fibonacci perturbation considered below may be defined as a two digits
substitution
sequence which follows from the inflation rule
\begin{equation}
0\rightarrow S(0)=01,\quad
1\rightarrow S(1)=0,
\label{eq-11}\end{equation}
leading, by iterated application of the rule on the initial word $0$, to
successive words of
increasing lengths:
\begin{equation}
\begin{array}{llllllll}
0&&&&&&&\\
0&1&&&&&&\\
0&1&0&&&&&\\
0&1&0&0&1&&&\\
0&1&0&0&1&0&1&0\\
.&.&.&&&&&\\
\end{array}
\label{eq-12}\end{equation}
It is now well known that most of the properties of such a sequence can be
characterized by a substitution matrix whose elements $M_{ij}$ are given by the
number $n_i^{S(j)}$ of digits of type $i$ in the substitution
$S(j)$~\cite{luck93a,queffelec87}. In the case of the Fibonacci sequence, this
yields \begin{equation}
\mat{M}=
\left(
\begin{array}{cc}
n_0^{S(0)}&n_0^{S(1)}\\
n_1^{S(0)}&n_1^{S(1)}\\
\end{array}
\right) =
\left(
\begin{array}{cc}
1&1\\
1&0\\\end{array}
\right).
\label{eq-13}\end{equation}
The largest eigenvalue of the
substitution matrix
is given by the
golden mean $\Lambda_1={1+\sqrt{5}\over 2}$ and is related to the length of the
sequence after $n$ iterations, $L_n\sim\Lambda_1^n$, while the second
eigenvalue
$\Lambda_2=-1/\Lambda_1$ governs the behaviour of the cumulated deviation
from the asymptotic density of modified couplings
$\rho_\infty=1-{2\over\sqrt 5+1}$:
\begin{equation}
\sum_{j=1}^L(f_j-\bar f)=n_L-\rho_\infty
L\sim\mid\Lambda_2\mid^n\sim (\Lambda_1^\omega)^n,
\label{eq-14}\end{equation}
where we have introduced
the sum $n_L=\sum_{j=1}^Lf_j$ and the wandering exponent \begin{equation}
\omega={\ln\mid\Lambda_2\mid\over\ln\Lambda_1}=-1.
\label{eq-15}\end{equation}
When the scaling field $\mu$ is perturbed as considered in the previous
section,
\begin{equation}
\mu_j=a_0\left(\mu+rf_j\right),
\label{eq-16}\end{equation}
the cumulated deviation of the couplings from the average at a length scale $L$
\begin{equation}
\overline{\delta\mu}(L)={1\over L}\sum_{j=1}^L(\mu_j-\bar\mu)=
{1\over L}a_0r(n_L-\rho_\infty L)
\label{eq-17}\end{equation}
behaves with a size power law:
\begin{equation}
\overline{\delta\mu}(L)\sim L^{\omega-1},
\label{eq-18}\end{equation}
and induces a shift in the critical temperature $\overline{\delta
t}\sim\xi^{\omega -1}$ to be compared with the deviation $t$ from the critical
temperature:
\begin{equation}
{\overline{\delta t}\over t}\sim t^{-(\nu (\omega -1)+1)}.
\label{eq-41}
\end{equation}
This defines the crossover exponent $\phi=\nu (\omega -1)+1$. When $\phi=0$,
the
perturbation is marginal: it remains unchanged under a renormalization
transformation, and the system is thus
governed by a new perturbation-dependent fixed point. With the mean field value $\nu=1/2$ and the
Fibonacci wandering exponent $\omega=-1$ obtained above, one indeed gets
$\phi={1\over 2}(-1-1)+1=0$, so that the Fibonacci modulation of $\mu_j$ considered here is precisely
marginal within the present Ginzburg-Landau description.
A perturbation of the
parameters $g$ or $c$ entering the Landau expansion \eref{eq-1} would be
irrelevant.
\subsection{Bulk and surface thermodynamic quantities}
In the following, we discuss both bulk and surface critical exponents and
scaling functions. We
deal with the surface and boundary magnetizations $m_s$ and $m_1$, surface and
boundary
susceptibilities $\chi_s$ and $\chi_1$, and surface specific heat $C_s$. All
these quantities can be expressed as derivatives of the surface free energy
density $f_s$~\cite{binder83} (see \tref{table4}).
\begin{table}
\caption{Bulk and surface thermodynamic quantities in terms of the
bulk $f_b$ and surface
$f_s$ free energy densities. $h$ and $h_1$ are bulk and surface
magnetic fields respectively
and $t$ is the reduced temperature. }\footnotesize\rm
\begin{tabular}{@{}llllllll} \br
\centre{2}{magnetization}&&\centre{2}{susceptibility}&&\centre{2}{specific
heat}\\
\crule{2}&\quad&\crule{2}&\quad&\crule{2}\\
bulk&surface&\quad&bulk&surface&\quad&bulk&surface\\ \mr
$m_b=-{\partial f_b\over\partial h}$ & $m_s=-{\partial f_s\over\partial h}$
&\quad&
$\chi_b=-{\partial^2 f_b\over\partial h^2}$ & $\chi_s=-{\partial^2
f_s\over\partial h^2}$ &\quad& $C_b=-{\partial^2 f_b\over\partial t^2}$ &
$C_s=-{\partial^2
f_s\over\partial t^2}$ \\ & $m_1=-{\partial f_s\over\partial h_1}$ &\quad& &
$\chi_1=-{\partial^2 f_s\over\partial h\partial h_1}$ &\quad& & \\ & &
&\quad& $\chi_{11}=-{\partial^2 f_s\over\partial h_1^2}$ &\quad& & \\
\br\end{tabular} \label{table4} \end{table}
While there is no special attention to pay to these definitions in a
homogeneous
system, they
have to be carefully rewritten in the perturbed model that we consider here.
First of all, we
shall focus on local quantities such as the boundary magnetization $m_1$ or the
local bulk
magnetization $m_{(n-1)}$, defined, for a chain of size $L_n$ obtained after
$n$
substitutions, by the order parameter at position $L_{n-1}$. This definition
leads to equivalent sites
for different chain sizes (see \fref{fig2}).
\begin{figure}
\epsfxsize=11cm
\begin{center}
\mbox{\epsfbox{berche-mf-01.eps}}
\end{center}
\vskip 0mm
\caption{Fibonacci chain of 21 sites. The local bulk
magnetization, for a chain of $L_n$ sites obtained after $n$ iterations of
the substitution rule, is computed at site $L_{n-1}$, here site 13.}
\label{fig2}
\end{figure}
In addition to these local quantities, one may also calculate both surface and
mean bulk
magnetizations ($m_s$ and $m_b$ respectively),
which should be interesting from an experimental
point of view since any experimental device would average any measurement over
a region that is large compared to the microscopic scale.
In order to keep symmetric sites with respect to the middle of the chain, and
to avoid
surface effects, the mean bulk magnetization $m_b$ is defined by averaging
over $L_{n-2}$ sites around the middle for a chain of size $L_n$.
\begin{equation}
m_b={1\over L_{n-2}}\sum_{j\in L_{n-2}}m_j.\label{eq-100}
\end{equation}
We checked numerically that one recovers the same average as for a chain of
size
$L_{n-2}$ with
periodic boundary conditions.
Following Binder~\cite{binder83}, for a film of size $L_n$ with two free
surfaces, the surface
magnetization is then defined by the deviation of the average magnetization
$\langle
m_j\rangle$ over the whole chain from the bulk mean value:
\begin{equation}
m_s={1\over 2}\left( m_b-{1\over L_n}\sum_{j=1}^{L_n}m_j\right).
\label{eq-36}\end{equation}
A graphical description can be found in \fref{fig10}.
\begin{figure}
\epsfxsize=11cm
\begin{center}
\mbox{\epsfbox{berche-mf-02.eps}}
\end{center}
\vskip 0mm
\caption{Typical shape of the order parameter profile for a perturbed system,
showing the
boundary and local bulk magnetizations $m_1$ and $m_{(n-1)}$, and the average
values $ m_b$ and
$\langle m_j\rangle$.}
\label{fig10} \end{figure}
In the following, we shall use brackets for the
averages over the finite system, taking thus surface effects into account.
In the same way, the bulk free energy density in \tref{table4} has to be
understood as:
\begin{equation}
f_b={1\over L_{n-2}}\sum_{j\in L_{n-2}} f_b\{m_j\}
\label{eq-40}\end{equation}
while the surface free energy density $f_s$
is defined as the excess from
the average bulk free energy
\begin{equation}
F=\sum_{j=1}^{L_n}f_b\{m_j\}=L_n\langle f_b\rangle=L_n f_b+2f_s.
\label{eq-37}\end{equation}
\section{Magnetic properties}
\label{sec:mag}
\subsection{Order parameter profile and critical temperature}
The order parameter profile is determined numerically by a Newton-Raphson
method, starting with arbitrary values for the initial trial profile $\{m_j\}$.
\Eref{eq-8} provides a system of $L$ coupled non-linear equations
\begin{equation}
G_i(m_1,m_2,\dots,m_L)=0,\quad i=1,2,\dots ,L
\label{eq-29}
\end{equation}
for the components of the vector $\vec m=(m_1,m_2,\dots,m_L)$, which can be
expanded in a first
order Taylor series: \begin{equation}
G_i(\vec m+\delta \vec m)=G_i(\vec m)+\sum_{j=1}^L{\partial G_i\over\partial
m_j}\delta
m_j+O(\delta\vec m^2).
\label{eq-30}
\end{equation}
A set of linear equations follows for the corrections $\delta\vec m$
\begin{equation}
\sum_{j=1}^L{\partial G_i\over\partial m_j}\delta
m_j=-G_i(\vec m)
\label{eq-31}
\end{equation}
which moves each function $G_i$ closer to zero simultaneously.
This technique is known to provide a fast convergence towards the exact
solution. Typical examples of the profile obtained for the Fibonacci
perturbation are shown on~\fref{fig1}.
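As an illustration of this procedure, a minimal Newton-Raphson sketch could read as follows (Python). The explicit residual written in the comments is only a generic discrete Landau-Ginzburg equation of state with an assumed cubic coefficient $c$, field $h$ and free boundaries; the actual expressions follow from \eref{eq-8}.
\begin{verbatim}
import numpy as np

def newton_profile(alpha, c, h, m0, tol=1e-12, max_iter=100):
    # Solve G_i(m) = 0 for the profile m = (m_1, ..., m_L).
    # Assumed generic residual (placeholder for the true equation of state):
    #   G_i = alpha_i*m_i - m_{i-1} - m_{i+1} + c*m_i**3 - h,  m_0 = m_{L+1} = 0
    m = np.array(m0, dtype=float)
    L = len(m)
    for _ in range(max_iter):
        left = np.concatenate(([0.0], m[:-1]))
        right = np.concatenate((m[1:], [0.0]))
        G = alpha * m - left - right + c * m**3 - h
        # tridiagonal Jacobian dG_i/dm_j of the linearized system
        J = np.diag(alpha + 3.0 * c * m**2)
        J += np.diag(-np.ones(L - 1), 1) + np.diag(-np.ones(L - 1), -1)
        dm = np.linalg.solve(J, -G)  # corrections moving every G_i towards zero
        m += dm
        if np.max(np.abs(dm)) < tol:
            break
    return m
\end{verbatim}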
\begin{figure}
\epsfxsize=11cm
\begin{center}
\mbox{\epsfbox{berche-mf-03.eps}}
\end{center}
\vskip -0mm
\caption{Order parameter profiles for a perturbation $r=2$ and three values of
the temperature below the critical point. The size of the chain is $L=144$.}
\label{fig1}
\end{figure}
The magnetization profile decreases as the temperature is increased and
vanishes
for some size-dependent effective value of the critical temperature
$\mu_c(L)=1-(\theta_c(L))^{-1}$. This value may be obtained through a recursion
relation deduced from the equation of state. In the high temperature phase,
when
$h=0$,
equation \eref{eq-8} can be rewritten as a homogeneous system of linear
equations:
\begin{equation}
\mat{G}\vec m
=
\left(
\begin{array}{cccccc}
\alpha_1& -1&0&0&\dots&0\\
-1&\alpha_2&-1&0&\dots&0\\
0&-1&\alpha_3&-1&\ddots&\vdots\\
\vdots&\vdots&\ddots&\alpha_j&\ddots&\vdots\\
0&\dots&0&0&-1&\alpha_L\\
\end{array}
\right)
\left(
\begin{array}{c}
m_1\\
m_2\\
\vdots\\
\vdots\\
m_L\\
\end{array}
\right)
=0,
\label{eq-20}\end{equation}
where $\alpha_j=2+\mu+rf_j$.
If the determinant $D_L(\mu)=\hbox{\rm Det}\ \mat{G}(\mu)$ is not
vanishing, the null vector ${\vec m}=\vec 0$ provides the satisfying unique
solution
for the high temperature phase. The critical temperature is then defined
by the limiting value $\mu_c(L)$ which allows a non-vanishing solution for
${\vec m}$,
i.e. $D_L(\mu_c)=0$. Because of the tridiagonal structure of the determinant,
the following recursion relation holds, for any value of $\mu$:
\numparts
\begin{eqnarray}
D_L(\mu)&=&\alpha_LD_{L-1}(\mu)-D_{L-2}(\mu),\\
D_0(\mu)&=&1,\\
D_1(\mu)&=&\alpha_1.
\end{eqnarray}\label{eq-23}
\endnumparts
Thus we can obtain $\mu_c(L)$ for different sizes $L$ from 144 to 46368 and
estimate the
asymptotic critical point by an extrapolation to infinite size. This technique
allows a
determination of the critical temperature with an absolute accuracy in the
range
$10^{-7}$ to
$10^{-9}$ depending on the value of the amplitude $r$.
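A minimal numerical sketch of this determination, based on the recursion \eref{eq-23}, is given below (Python). The construction of the aperiodic sequence $f_j$ uses an assumed convention for the Fibonacci substitution ($A\to AB$, $B\to A$, $f_j=1$ on $B$ sites), the recursion is rescaled at each step so that it does not overflow for the largest chains, and the scan from $\mu=0$ downwards assumes that the physical critical point is the largest root of $D_L(\mu)$.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def fibonacci_sequence(L):
    # assumed convention: substitution A -> AB, B -> A, with f_j = 1 on B sites
    w = "A"
    while len(w) < L:
        w = "".join("AB" if ch == "A" else "A" for ch in w)
    return np.array([1.0 if ch == "B" else 0.0 for ch in w[:L]])

def det_recursion(mu, f, r):
    # rescaled three-term recursion D_L = alpha_L D_{L-1} - D_{L-2}
    alpha = 2.0 + mu + r * f
    d_prev, d = 1.0, alpha[0]              # D_0 and D_1
    for a in alpha[1:]:
        d_prev, d = d, a * d - d_prev
        s = max(abs(d), abs(d_prev), 1.0)  # rescale, preserving the sign
        d, d_prev = d / s, d_prev / s
    return d

def mu_c(L, r, dmu=1e-3):
    # largest root of D_L(mu) = 0, approached from mu = 0 downwards
    f = fibonacci_sequence(L)
    mu_hi = 0.0
    while det_recursion(mu_hi - dmu, f, r) * det_recursion(mu_hi, f, r) > 0.0:
        mu_hi -= dmu
    return brentq(lambda m: det_recursion(m, f, r), mu_hi - dmu, mu_hi,
                  xtol=1e-12)
\end{verbatim}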
\subsection{Surface and bulk spontaneous magnetization behaviours}
The boundary magnetization $m_1$ vanishes at the same temperature as
the profile itself. First of all, the influence
of finite size effects~\cite{barber83} has to be studied. This is done by the
determination of the profiles for different chains of lengths given
by the successive sizes of the Fibonacci sequence $L=1,2,3,5,8,13,21,34\dots$
The boundary and bulk magnetization in zero magnetic field are shown
on~\fref{fig3}
on a log-log scale.
\begin{figure}
\epsfxsize=11cm
\begin{center}
\mbox{\epsfbox{berche-mf-04.eps}}
\end{center}
\vskip -0mm
\caption{Log-Log plot of the bulk and boundary magnetization v.s. the reduced
temperature $t=\mu_c-\mu$ for two values of the aperiodic amplitude $r$ and for
different sizes of the chain from 144 to 46368. Finite-size effects occur when
the curves deviate from the asymptotic straight line. The insert shows the
behaviour
of the magnetization with the temperature.}
\label{fig3}
\end{figure}
The finite size effects appear as the deviation from the straight-line
asymptotic behaviour.
These effects remain weak, as can be seen by considering
the
curve for a size $L=17711$, whose deviation occurs only around
$t=\mu_c-\mu\simeq 10^{-7}$, i.e. very close to the critical point.
The expected marginal
behaviour is furthermore indicated by the variation of the slopes with the
aperiodic modulation
amplitude $r$ and is more noticeable for the boundary magnetization than in the
case of the bulk.
A more detailed inspection of
these curves also shows oscillations resulting from the discrete scale
invariance~\cite{jona75} of the system and the asymptotic
magnetization can thus be written
\begin{equation}
m(t)=t^\beta\tilde m(t^{-\nu})
\label{eq-25}\end{equation}
where $\tilde m(t^{-\nu})$ is a log-periodic scaling function of its argument.
We make use of this oscillating behaviour to obtain a more precise
determination
of
the critical temperature (in the range $10^{-11}$ to $10^{-12}$) and of the
values
of the bulk and
surface exponents by plotting the rescaled magnetization $mt^{-\beta}$ as a
function of $\ln
t^{-\nu}$ as shown on~\fref{fig4} in the case of the
first layer.
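Schematically, the construction of such a plot amounts to the following (Python), where \texttt{mu} and \texttt{m1} hold the numerical temperatures and boundary magnetizations, the mean-field value $\nu=1/2$ is assumed, and \texttt{mu\_c}, \texttt{beta1} are the trial parameters being adjusted.
\begin{verbatim}
import numpy as np
import matplotlib.pyplot as plt

def rescaled_magnetization_plot(mu, m1, mu_c, beta1, nu=0.5):
    # plot m_1 t^(-beta_1) versus ln t^(-nu); a good choice of (mu_c, beta_1)
    # gives pure log-periodic oscillations over the widest range of the abscissa
    t = mu_c - mu                 # reduced temperature
    x = -nu * np.log(t)           # ln t^(-nu)
    y = m1 * t**(-beta1)          # rescaled boundary magnetization
    plt.plot(x, y, ".")
    plt.xlabel("ln t^(-nu)")
    plt.ylabel("m_1 t^(-beta_1)")
    plt.show()
\end{verbatim}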
\begin{figure}
\epsfxsize=11cm
\begin{center}
\mbox{\epsfbox{berche-mf-05.eps}}
\end{center}
\vskip -0mm
\caption{Periodic oscillations of the rescaled boundary magnetization
$m_1t^{-\beta_1}$ v.s. $\ln t^{-\nu}$. The deviation
from the oscillating behaviour for large values of the correlation length
$t^{-\nu}$ is due to
finite-size effects. The insert shows the oscillations of the rescaled boundary
magnetization for
different values of $r$ after subtraction of a constant amplitude.}
\label{fig4}
\end{figure}
The values of $\mu_c$ and $\beta_1$
that we consider suitable are the ones which allow an oscillating behaviour for
the widest
interval in the variable $\ln t^{-\nu}$. A modification of the boundary
exponent
$\beta_1$ would change the average slope of the oscillating regime. This
could be due to corrections to scaling, but, if such corrections really
existed,
they should cancel in this range of temperatures (in the oscillating regime,
$t$
goes to values as small as $10^{-9}$). The other parameter, $\mu_c$, modifies
the number of oscillations and we have chosen a
value leading to the largest number of such oscillations. A poor determination
of the critical point $\mu_c'=\mu_c+\Delta\mu_c$ would indeed artificially
introduce a correction to scaling term, since
$t^\beta=(t'+\Delta\mu_c)^\beta\sim
t'^\beta\left(1+\beta\frac{\Delta\mu_c}{t'}\right)$.
\begin{table}
\caption{Numerical values of the critical temperature and the magnetic
exponents for the surface and bulk magnetizations. The figure in
brackets gives the uncertainty on the last digit.}
\footnotesize\rm
\begin{tabular}{@{}llllllll}
\br
&&\centre{3}{surface}&&\centre{2}{bulk}\\
\ns
&&\crule{3}&\quad&\crule{2}\\
$r$ & $\theta_c$ & $\beta_1$
& $\beta_L$&$\beta_s$&& $\beta_{(n-1)}$ & $\beta_b$ \\
\mr
.1 & .963977634341 (5) & 1.00036 (2) & \full
& .0002 (2) && .500087 (1) & .5002 (2)\\
.2 & .93187679929 (2) & 1.00146 (2) &1.0015 (1)& \full
&& .50033 (1) &\full \\
.3 & .90314503363 (2) & 1.0034 (1) &1.0034 (1)& \full
&& .50072 (6) &\full \\
.5 & .85404149087 (2) & 1.0092 (1) & \full
& .0094 (2) && .50187 (1) & .505 (1)
\\
.8 & .796437160887 (5) & 1.02214 (2) & \full
& .0216 (2) && .50419 (2) & \full\\
1. & .76600595095 (2) & 1.0327 (1) & \full
& .0302 (2) && .505777 (2) & .516
(1)\\
1.5& .70902241601 (2) & 1.0621 (1) & \full
& \full && .50943 (1) & \full\\
2. & .67010909237 (2) & 1.0913 (1) & \full
& .087 (1) && .51186 (3) & .538
(1)\\
2.5& .64234629279 (2) & 1.1178 (1) & \full
& \full && .51294 (1) & \full\\
3. & .621796760462 (5) & 1.1410 (1) &1.1410 (1)& .133 (1)
&& .5132 (1) & .555 (1)
\\
3.5& .60610567508 (2) & 1.1602 (1) & \full
& \full && .51327 (4) & \full\\
4. & .593804120472 (5) & 1.1766 (1) & \full
& .1692 (4) && .5130 (1) & .563
(1)\\
4.5& .58394117369 (2) & 1.1904 (1) & \full
& \full && .5122 (1) &
\full\\
5. & .5758805295248 (5)& 1.2026 (1) & \full
& .195 (1) && .51125 (2) & .567
(1)\\ \br
\end{tabular}
\label{table1}
\end{table}
The corresponding values of $\theta_c$, $\beta_1$ and $\beta_{(n-1)}$ are given
for
several values of the perturbation amplitude $r$ in \tref{table1}. The critical
exponent associated with the right surface ($m_L$) of the Fibonacci chain has
also
been computed for different values of $r$ for the largest chain size. It gives,
with good accuracy,
the same value as for the left surface ($m_1$), as can be seen by inspection
of the table. The aperiodic sequence is indeed the same when read from either end,
apart from
its last two digits.
Furthermore, the profiles of
\fref{fig1} clearly show that the sites of the chain are not all equivalent
and
the
magnetization profiles can be locally rescaled with different values of the
exponents depending
on the site~\cite{berche96}. Thus, in addition to the local quantities, the computation
of the surface
and mean bulk magnetizations enables us to determine the critical exponents
respectively written
$\beta_s$ and $\beta_b$ and given in \tref{table1}.
From our values, one obviously recovers the usual unperturbed ordinary
transition values of
the exponents when the perturbation amplitude goes to zero.
\subsection{Susceptibility and critical isotherm}
Taking into account a non-vanishing bulk magnetic field in equations~\eref{eq-8}
and (7),
one can compute the magnetization in a finite field and then deduce
the critical isotherm exponents $\delta_{(n-1)}$ and $\delta_1$
from the behaviours of the local magnetizations $m_{(n-1)}$ and $m_1$ with
respect to $h$:
\begin{equation}
m_{(n-1)}\sim h^{1/\delta_{(n-1)}},\qquad m_1\sim h^{1/\delta_1},\qquad t=0.
\label{eq-26}\end{equation}
\begin{figure}
\epsfxsize=14cm
\begin{center}
\mbox{\epsfbox{berche-mf-06.eps}}
\end{center}
\vskip -0mm
\caption{Rescaled equations of state for the boundary and bulk magnetization
for
$r=2$. The
values of the temperature are $\theta=0.670090$, $0.670094$, $0.670097$,
$0.670100$, $0.670103$, $0.670105$ below $\theta_c$ and $0.670111$, $0.670113$,
$0.670115$, $0.670118$, $0.670121$, $0.670125$ above $\theta_c$. Top: scaling
functions $f_{m_1}^\pm$, the insert shows the boundary magnetization as a
function of the bulk
magnetic field. Bottom: same as above for the local bulk magnetization.}
\label{fig5} \end{figure}
This time, a direct log-log plot allows a precise determination of the
exponents and the rescaled equation of state confirms the
validity of the estimate since we obtain a good data collapse. In the case of
the boundary
magnetization, the scaling assumption takes the following form under rescaling
by an arbitrary factor
$b$: \begin{equation} m_1(t,h)=b^{-\beta_1/\nu} m_1(b^{y_t}t,b^{y_h}h),
\label{eq-27}\end{equation}
where $y_t$ is given by the inverse of the correlation length exponent
$y_t=1/\nu$ and the
value of the magnetic field anomalous dimension $y_h$ follows the requirements
of~\eref{eq-26}: $y_h=\beta_1\delta_1/\nu=\beta\delta/\nu$. The choice
$b=t^{-\nu}$ for the rescaling factor then
leads to a universal behaviour expressed in terms of a single scaled variable:
\begin{equation} m_1(t,h)=t^{\beta_1}f_{m_1}^\pm(ht^{-\Delta})
\label{eq-28}\end{equation}
where $\Delta=\beta_1\delta_1$ is the so-called gap exponent, $f_{m_1}^\pm$ is
a
universal scaling function and $\pm$ refers to the two phases $\theta
>\theta_c$
and $\theta <\theta_c$.
This may then be checked by a plot of $m_1t^{-\beta_1}$ v.s. $ht^{-\Delta}$
shown on~\fref{fig5} and the same type of universal function has been obtained
for the local bulk site $m_{(n-1)}t^{-\beta_{(n-1)}}=f_{m_{(n-1)}}^\pm
(ht^{-\Delta})$. The values of
$\delta_1$ and $\delta_{(n-1)}$ are given in \tref{table2}.
\begin{table}
\caption{Numerical values of the critical exponents associated to the critical
isotherms and
the susceptibilities. $\gamma_b$ and $ \delta_b$ correspond to the behaviour of
the
mean bulk magnetization $ m_b$. The figure in brackets gives the uncertainty on
the last
digit.}\footnotesize\rm
\begin{tabular}{@{}llllllllll}
\br &\centre{4}{surface}&&\centre{4}{bulk}\\
\ns
&\crule{4}&\quad&\crule{4}\\
$r$& $\gamma_1$
&$\delta_1$&$\gamma_s$&$\delta_s$&&$\gamma_{(n-1)}$&$\delta_{(n-1)}$&$\gamma_b$
&
$\delta_b$
\\ \mr .1 & .5013 (2) &1.5024 (2)&1.498 (1) & \full
&&.9997 (1)& 2.9989 (1) &1.0005 (1)&\full\\
.2 & .5006 (2) &1.5004 (2)& \full &
\full &&.9993 (2)&2.9972 (3) & \full &\full\\
.3 & .4992 (2) &1.4977 (2)& \full &
\full &&.9993 (2)&2.9949 (9) & \full &\full\\
.5 & .4958 (2) &1.4901 (2)&1.493 (1) & 312 (11) &&.9989
(2)&2.9895 (9) &.99790 (2)&2.98136
(2) \\
.8 & .487 (1) &1.4751 (3)&1.486 (1) & 85 (2) &&.9986
(3)&2.981 (2) & \full &\full\\
1. & .4796 (2) &1.4641 (2)&1.480 (2) & 53 (1) &&.9985
(4)&2.9744 (9) &.99253 (2)&2.93144
(2)\\
1.5& .4568 (2) &1.4378 (4)& \full & \full &&.9988
(7)&2.963 (3) & \full &\full\\
2. & .4316 (2) &1.4135 (1)&1.438 (2) & 16.37 (2)&&.999 (2)&2.9571 (1) &.9792
(1) &2.82375
(3) \\
2.5& .412 (1) &1.3845 (6)& \full & \full
&&.9992 (9)&2.954 (2) & \full &\full\\
3. & .388 (1) &1.3484 (6)&1.394 (2) & 11.2 (2) &&.9988
(6)&2.952 (2) &.9660 (1)
&2.7513 (2)\\
3.5& .372 (1) &1.3108 (5)& \full & \full
&&.9986 (5)&2.949 (2) & \full & \full \\
4. & .354 (1) &1.2989 (4)&1.360 (2) & 10.01 (5)&&.9988 (8)&2.948 (2)
&.9619 (5) &2.6976 (3)\\
4.5& .341 (1) &1.2571 (2)& \full & \full
&&.999 (1)&2.950 (2) & \full & \full\\
5. & .328 (1) &1.2467 (6)&1.330 (2) & 8.3 (4) &&.999
(2)&2.953 (2) &.9514 (2) &2.65938
(2)\\ \br\end{tabular}
\label{table2}
\end{table}
The behaviours of $m_s$ and $ m_b$ with $h$ at the critical point lead to the
values of $\delta_s$ and $\delta_b$, also listed in \tref{table2}. We can point
out the
low accuracy in the determination of $\delta_s$ since the slope of the log-log
plot of $m_s$
v.s. $h$ is quite small as $r$ approaches the unperturbed value $r=0$.
The derivative of equation~\eref{eq-27} with respect to the bulk magnetic
field $h$ defines the boundary susceptibility $\chi_1$
which diverges as the critical point is approached with an exponent $\gamma_1$.
Numerically, the boundary magnetization is calculated for several values of
the
bulk
magnetic field (of the order of $10^{-9}$), and $\chi_1$ then follows from a finite
difference
derivative. The bulk local susceptibility $\chi_{(n-1)}$ may be obtained in
the same way. Log-periodic oscillations also occur in these quantities and the
determination
of the exponents can be done in the same way as in the previous section for
the
magnetization.
Again, the accuracy of the result is confirmed by the rescaled curves for the
susceptibilities, for example
$\chi_{(n-1)}t^{\gamma_{(n-1)}}=f_{\chi_{(n-1)}}^\pm (ht^{-\Delta})$
shown on \fref{fig6} exhibits a good data collapse on two universal curves for
$\theta <\theta_c$ and
$\theta>\theta_c$.
\begin{figure}
\epsfxsize=11cm
\begin{center}
\mbox{\epsfbox{berche-mf-07.eps}}
\end{center}
\vskip -0mm
\caption{Rescaled bulk susceptibility giving the behaviour of the
universal functions
$f_{\chi_{(n-1)}}^\pm$ below and above $\theta_c$ for $r=2$. The values of the
temperature are the same as in \fref{fig5}. The inserts show the
behaviours of $\chi_{(n-1)}$ as a function of $h$ for the same temperatures
(left),
and the singularities of both $\chi_{(n-1)}$ and $\chi_1$ in zero magnetic
field
as a
function of $\theta$ (right).} \label{fig6} \end{figure}
The values of the exponents are given in~\tref{table2} which presents also
$\gamma_s$ and
$\gamma_b$, associated to the surface and average bulk magnetization field
derivatives.
\section{Specific heat}
\label{sec:therm}
According to the definitions of \sref{sec:GL},
the surface and bulk free energies are also defined as follows:
\numparts
\begin{eqnarray}
F_s={1\over 2}(F_{FBC}-F_{PBC}),\\
F_b=F_{PBC}.
\end{eqnarray}\label{eq-38}
\endnumparts
where $F_{FBC}$ and $F_{PBC}$ denote the total free energies of aperiodic
chains
with free and periodic boundary conditions respectively and are obtained
numerically using
equations \eref{eq-40} and \eref{eq-37}.
The expected singular behaviours of the free energy densities
\numparts
\begin{eqnarray}
f_s&(t,h)=t^{2-\alpha_s}f_s(ht^{-\Delta}),\\
f_b&(t,h)=t^{2-\alpha_b}f_b(ht^{-\Delta}),
\end{eqnarray}\label{eq-33}
\endnumparts
where the dependence of $f_s$ with the local magnetic surface field $h_1$ has
been omitted
since we always consider the case $h_1=0$, lead to the surface and bulk
specific
heat
exponents.
The values of $\alpha_s$ and $\alpha_b$ are simply deduced from the slopes of
the log-log
plots of $f_s$ and $f_b$ v.s. $t$.
In \fref{fig20}, we show the bulk free energy density amplitude
$f_bt^{\alpha_b-2}$ as a function of $\ln t^{-\nu}$ for $r=2$. It exhibits the
same type of oscillating behaviour as the rescaled magnetisation of
\fref{fig4}.
\begin{figure}
\epsfxsize=11cm
\begin{center}
\mbox{\epsfbox{berche-mf-09.eps}}
\end{center}
\vskip -0mm
\caption{Rescaled bulk free energy density $f_bt^{\alpha_b-2}$ v.s. $\ln
t^{-\nu}$ for $r=2$. The amplitude of the bulk free energy density exhibits
log-periodic oscillations.} \label{fig20}
\end{figure}
The surface and bulk specific heat exponents are collected in \tref{table3}.
The bulk
specific heat discontinuity of the homogeneous system is washed out in the
perturbed system, since $\alpha_b<0$.
\begin{table}
\caption{Numerical values of the specific heat critical exponents. The figure
in
brackets gives the uncertainty on the last digit.}
\begin{indented}
\item[]\begin{tabular}{@{}lll}
\br
$r$ & $\alpha_s$ & $\alpha_b$ \\
\mr
.1&0.51496 (7) &-0.00031 (1) \\
.5&0.50112 (5) &-0.00733 (1) \\
.8&0.48448 (5) &-0.01709 (1) \\
1.&0.47075 (4) &-0.02462 (1) \\
2.&0.40265 (1) &-0.05924 (1) \\
3.&0.35077 (4) &-0.07813 (1) \\
4.&0.31516 (3) &-0.08579 (1) \\
5.&0.28935 (6) &-0.08805 (1) \\
\br
\end{tabular}
\end{indented}
\label{table3}
\end{table}
\section{Discussion}
\label{sec:concl}
We have calculated numerically several surface and bulk critical exponents for
a
marginal
aperiodic system within mean field theory. The marginal aperiodicity leads to
exponents which vary continuously with the amplitude of the perturbation $r$.
The variations
of these exponents are shown on \fref{fig9} as a function of $r$.
\begin{figure}
\epsfxsize=14cm
\begin{center}
\mbox{\epsfbox{berche-mf-08.eps}}
\end{center}
\vskip -0mm
\caption{Variations of the surface and bulk exponents with the
perturbation amplitude $r$ (boundary exponents: \full, surface: \chain, local
bulk: \dashed, mean bulk: \broken).} \label{fig9}
\end{figure}
The comparison in \tref{table1} between the bulk exponent
$\beta_b$ and the local
one $\beta_{(n-1)}$ clearly shows that it is no longer possible, in this
aperiodic system, to define a
unique bulk exponent, as already suggested by the possibility of a local
rescaling of the
profiles with position-dependent exponents, pointing to a multiscaling
behaviour.
A constant value
$y_t=1/\nu$ is consistent with continuously varying exponents, since it keeps
a
vanishing crossover exponent, which ensures that the marginality condition
remains valid for any value
of the aperiodicity amplitude $r$. For $y_h$ on the other hand, there is no
such
reason.
From this point of
view, equations like
\eref{eq-27} are not exact since a unique field anomalous dimension $y_h$ has
no
real significance. It follows that the universal functions in
\fref{fig5} and \fref{fig6} only give an approximate picture of the scaling
behaviour in this system, since they involve the gap exponent
$\Delta=y_h/y_t$. The good data collapse has to be credited to the weak
variation of the exponents with the perturbation amplitude $r$.
On the other hand, the scaling laws involving the dimension of the system are
satisfied in mean field theory with a value of $d$ equal to the upper critical
dimension $d^*$. As for the $2d$ Ising model with a marginal
aperiodicity~\cite{berche95,berche96}, one expects a strongly anisotropic
behaviour in the Gaussian model.
It yields a continuous shift of the upper critical dimension with the
perturbation amplitude, $d^*(r)$,
since the value $d^*=4$ for a critical point in the homogeneous $\phi^4$ theory
follows Ginzburg's criterion for an isotropic behaviour. Hyperscaling
relations should thus be satisfied for the mean field exponents
with $d^*(r)$:
\numparts
\begin{eqnarray}
2-\alpha_b=\nu d^*(r),\\
2-\alpha_s=\nu (d^*(r)-1).
\end{eqnarray}\label{eq-50}
\endnumparts
We can make use of these relations to obtain an estimate of the upper critical
dimension $d^*(r)$ for this aperiodic system. The corresponding results are
given in \tref{table10}.
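As a minimal numerical check of these relations, assuming the mean-field value $\nu=1/2$ and taking the $r=1$ exponents of \tref{table3}:
\begin{verbatim}
nu = 0.5
alpha_b, alpha_s = -0.02462, 0.47075          # r = 1 values from table 3
d_star_bulk = (2.0 - alpha_b) / nu            # -> 4.05
d_star_surface = (2.0 - alpha_s) / nu + 1.0   # -> 4.06
\end{verbatim}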
\begin{table}
\caption{Numerical values of the upper critical dimension $d^*(r)$ deduced
from hyperscaling relations.}
\begin{indented}
\item[]\begin{tabular}{@{}ccc}
\br
$r$ & $(2-\alpha_b)/\nu$ & $(2-\alpha_s)/\nu+1$ \\
\mr
.1&4.00 &3.97 \\
.5&4.01 &4.00 \\
.8&4.03 &4.03 \\
1.&4.05 &4.06 \\
2.&4.12 &4.19 \\
3.&4.16 &4.28 \\
4.&4.17 &4.37 \\
5.&4.18 &4.42 \\
\br
\end{tabular}
\end{indented}
\label{table10}
\end{table}
The two determinations are in good agreement for small values of the
perturbation amplitude. The discrepancy at larger values of $r$
suggests that the precision in the determination of the exponents has
probably been
overestimated, but
the variation of the upper critical dimension with the perturbation amplitude
is
clear and should be attributed
to an anisotropic scaling behaviour in the corresponding Gaussian model.
One can finally mention that a mean field approach for relevant aperiodic
perturbations would be interesting. Many cases of aperiodic sequences with a
wandering exponent $\omega>-1$ are known; they constitute relevant
perturbations in mean field theory. In the case of the $2d$ layered Ising model
with relevant perturbations,
a behaviour reminiscent of that of random systems, with essential
singularities, was found~\cite{iglotu94}, and the same type of situation can be
expected within the mean field approximation.
\ack
We thank L. Turban for valuable discussions and F. Igl\'oi and G. Pal\`agyi
for
informing us of
related work before publication. This work has been supported by the Groupe
CNI/Mati\`ere under
project CNI155C96.
\section*{References}
\section{INTRODUCTION}
For the last decade, measurements of the primordial D/H ratio in QSO sightlines have provided increasingly more precise constraints on the cosmological baryon density. Although the measurement of D/H is simple in principle,
compared to the other light elements produced during big bang nucleosynthesis, finding those QSO absorption lines systems which are suitable for measuring D/H has proven observationally challenging
\citep{tytler_review}.
For a QSO absorption system to show D/H, a number of criteria must be met (see \citet{kirkman03}, hereafter K03, for a more detailed discussion). First, the hydrogen column density must be large enough (since D/H is of the order of one part in $10^5$) such that deuterium can be observed using modern high-resolution spectrographs. Second, the velocity structure of the hydrogen absorption must be simple enough, ideally a single component of gas, so that the deuterium absorption is well resolved given the small 82 km s$^{-1}$\ offset
from the hydrogen Lyman lines.
Third, there can be little to no interloping Ly$\alpha$\ forest or
metal lines at the position of the deuterium absorption, since such absorption strongly complicates attempts to constrain the deuterium column density. Unfortunately, Ly$\alpha$\ forest absorption is both ubiquitous and stochastic in high redshift QSO spectra. Finally, the background QSO must be bright enough to obtain high signal-to-noise, high-resolution spectroscopy at $\lambda < 4000$\AA\
with a reasonable allotment of telescope time. Each one of these criteria acts to decrease the probability that a D/H measurement can be made towards any given QSO, and since \textit{all} the criteria must be met, the resultant probability of a QSO sightline being suitable for measuring D/H is very low, with only approximately 1\% of QSOs at $z \simeq 3$ able to provide a measurement of D/H.
To date, there are few measurements of
D/H in QSO spectra (\citet{bt98a},\citet{bt98b}, \citet{omeara01}, \citet{pettinibowen}, K03, \citet{levshakovDH}, \citet{crighton04}). These measurements constrain the baryon density $\Omega_{b}h^{2}$\ through the framework of standard big bang nucleosynthesis (SBBN), which predicts the abundances of the light elements as a function of the baryon--to--photon ratio $\eta$ and the expansion rate of the universe \citep{kolb},
and through the cosmic microwave background radiation, which provides the photon density. A measurement of the ratio of any of the light nuclei produced in SBBN gives the baryon density, and measurement of additional abundance ratios test the theory (see \citet{steigman05}, \citet{pettinidh}, and references therein for a current census of D/H and the other light element abundances).
Recent measurements of the temperature angular power spectrum of the CMB (Spergel \textit{et al.}\ 2006) also provide a measurement of $\Omega_{b}h^{2}$\,
depending on the assumptions made,
with a level of accuracy roughly equal to or greater than that provided by D/H.
Nevertheless, measurements of D/H are still important for a number of reasons.
First, primordial D/H probes the universe at one of the earliest times in the universe accessible with current observational and theoretical techniques.
Second, the light element abundances predicted from SBBN do not all agree with each other; most notably the observationally inferred abundance of $^7$Li\ is significantly lower than that expected from SBBN and D/H \citep{fields06}.
Third, D/H can help constrain deviations from SBBN, such as inhomogeneous
BBN \citep[e.g.][]{lara06},
relic primordial particle decays \citep[e.g.][]{jedamzik04},
or non-standard neutrino physics \citep[e.g.][]{abazajian05}.
Fourth, the dispersion in the measurements of D/H is larger than would be expected from the individual measurement errors (K03), i.e., the data demand both a better understanding of the errors on the current measurements and new constraints.
Fifth, the value of $\Omega_{b}$\ derived from D/H and SBBN requires many fewer priors than the CMB derived value. Moreover, the $\Omega_{b}$\ from D/H can be used in principle as a prior in the CMB analysis, and the ratio of the values for $\Omega_{b}$\ from D/H and the CMB offer a precision test of the hot Big Bang model.
Finally, the existing measurements of D/H show evidence of a trend of
decreasing D/H with increasing $N_{\rm H I}$.
K03 suggested that this trend
(and the dispersion in D/H values) is due to error under-estimation.
Fortunately, the Sloan Digital Sky Survey \citep[SDSS,][]{sdssdr4},
by virtue of its large sample of high redshift QSO spectra
(the Data Release 4 alone contains 5036 QSOs with $z > 2.7$)
gives us a new data set to find those special sightlines which can show D/H.
In this \textit{Letter}, we present a new measurement of D/H in a
QSO sightline from the SDSS, SDSS1558-0031, which was chosen as part of
our high-resolution survey for Lyman limit absorption
\citep{omeara06}.
\section{OBSERVATIONS}
We have obtained two high-resolution spectra of the $z=2.83$ quasar SDSS1558-0031\ using two different spectrographs, the MIKE \citep{bernstein03}
echelle spectrograph on the 6.5 meter Magellan Clay telescope at Las Campanas,
and the upgraded HIRES \citep{vogt94}
spectrometer on the 10 meter Keck-I telescope on Mauna Kea.
The MIKE spectrum was obtained as part of our ongoing high-resolution
survey for Lyman Limit absorption and was selected from the SDSS
because of the redshift and brightness of the QSO. The MIKE spectrum
was obtained on May 10, 2004 and covers the spectral range
3221--7420 \AA, with an exposure time of 3600 seconds. The data were obtained in sub-arcsecond seeing with a one arcsecond slit, which provides a resolution of $R=28,000$ and $R=22,000$ on the blue and red arms of the spectrograph respectively.
The MIKE data was reduced using the MIKE reduction pipeline\footnote{http://web.mit.edu/$\sim$burles/www/MIKE/}, and has a signal to noise ratio of approximately 12 at $\lambda = 4000$\AA.
The HIRES spectrum was taken on 2006 April 11
and covers the spectral range 3338--6200 \AA, with an exposure time of 4100 seconds. The data were taken in sub-arcsecond seeing with a 1.148 arcsecond slit, which provides a resolution of $R=34,000$. The data were reduced using the HIRES reduction pipeline\footnote{http://www.ucolick.org/$\sim$xavier/HIRedux/index.html},
and have a signal to noise ratio of approximately 20 at $\lambda = 4000$ \AA.
The HIRES data are the primary source for the measurement of D/H presented below owing to the higher signal-to-noise and spectral resolution.
With the exception of the H~I Ly$\alpha$\ line, we used the HIRES spectrum to determine all values for column densities presented in the text. Because
we were more successful at flux calibrating the data from the MIKE spectrometer,
we use the flux calibrated MIKE spectrum to constrain the $N_{\rm H I}$\ value
in the Ly$\alpha$\ line whose profile spans several echelle orders.
\section{Analysis}
Inspection of the MIKE spectrum of SDSS1558-0031\ shows that there is a DLA at $z = 2.70262$ which is also responsible for the break in flux from the Lyman limit of the absorber at $\lambda \approx 3885$ \AA.
The parameters which describe the observed, single component of
absorption ($N$,$b$,$z$) for the Lyman series and metal-lines
in the DLA are presented in Table~\ref{dhlinetab}.
The parameters and their errors are derived
predominately from Voigt profile fits to the data using the VPFIT
routine kindly provided by R. Carswell and J. Webb.
For some of the metal-line
transitions, we opt instead to use the apparent
optical depth technique \citep{savage91} to measure the column densities,
and then use that column density as a fixed input parameter to
VPFIT to determine the $z$ and $b$ values for the absorption in question.
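For reference, a minimal sketch of such an apparent optical depth integration might look as follows (Python); the velocity grid and continuum-normalized flux are assumed inputs, and the numerical prefactor is the standard Savage \& Sembach constant.
\begin{verbatim}
import numpy as np

def aodm_column(velocity, norm_flux, f_osc, wave_A):
    # Apparent optical depth method (Savage & Sembach 1991):
    #   N_a(v) = 3.768e14 * tau_a(v) / (f * lambda[Angstrom])  [cm^-2 (km/s)^-1]
    # velocity in km/s, norm_flux = continuum-normalized flux
    tau = -np.log(np.clip(norm_flux, 1e-5, None))   # apparent optical depth
    Na_v = 3.768e14 * tau / (f_osc * wave_A)
    return np.trapz(Na_v, velocity)                 # total column in cm^-2
\end{verbatim}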
The absence of metal-line absorption at the position of D
argues that the observed feature in the Lyman series is not interloping H.
The hydrogen column density of the DLA is sufficiently large
to show damping wings in the Lyman $\alpha$--$\gamma$ transitions
(Figure~\ref{fig_lyseries}).
These features allow for a precise measurement of the H~I column density.
In the case of SDSS1558-0031\, the DLA Ly$\alpha$\ lies near the QSO emission line and the
assignment of the continuum level over the full
extent of the absorption feature is
non-trivial.
Fortunately, this issue is minimized in two ways. First, the MIKE spectrum is flux calibrated, which allows for an easier assignment of the continuum level to the Ly$\alpha$\ line, although it is still subject to unidentified emission features
inherent to the QSO.
Second, the H~I derived from the damping features in the higher order lines is less susceptible to large continuum shape errors, because the profile spans a
significantly smaller wavelength range than the Ly$\alpha$\ line. In particular,
the Ly$\beta$\ line of the DLA places an excellent constraint on the \ion{H}{1}
column density,
because the line has prominent damping features, covers only $\approx 35$ \AA, and has little interloping hydrogen absorption.
To arrive at the best estimate of $N_{\rm H I}$\ we simultaneously vary the values of $N_{\rm H I}$\ along with the shape and amplitude of the local continuum level. This variation continues until we arrive at a value of the $N_{\rm H I}$\ which best reproduces the data whilst having a reasonable continuum shape.
We adopt a redshift and velocity width of the H~I, $z=2.702646 \pm 0.000010$ and $b=13.56 \pm 1.0$, from a fit to the higher order Lyman series transitions.
Of some concern is the fact that the redshift of the H~I agrees only at the $\approx 3$ km s$^{-1}$\ level with the redshift inferred from low-ion metal-lines (Table~\ref{dhlinetab}). We note, however, that there exists some degeneracy between $b$ and $z$ for the Lyman series transitions we use, along with the increasing effects of poor signal-to-noise for shorter wavelength data
(i.e. higher up the Lyman series). Furthermore, we cannot discount the possibility that the H~I gas is multi-component; however, there is little evidence
from the metal-line transitions that this is the case.
Moreover, the Lyman series lines
all appear to be well fit using a single component,
with a few departures due to interloping
\ion{H}{1} gas at different redshifts from the system which shows deuterium.
When we consider the Lyman $\alpha$--$\gamma$ transitions, we arrive at a best estimate of the H~I of $\log N_{\rm H I} = 20.67 \pm 0.05$\ cm$^{-2}$.
The errors on $N_{\rm H I}$\ are dominated by continuum uncertainties and by signal-to-noise, two effects which correlate, particularly on smaller wavelength scales.
As can be seen in Figure \ref{fig_lyseries}, we observe resolved
absorption by deuterium in the
Lyman series from Ly$\gamma$\ all the way through to Lyman-13.
Because the D~I column density is large,
the absorption is saturated until we reach deuterium Lyman-11,
which offers the best constraint on the $N_{\rm D I}$\ value.
For this transition, we measure $\log N_{\rm D I} = 16.19 \pm 0.04$\ cm$^{-2}$, where the errors come from the error estimate of VPFIT alone (i.e. independent of continuum error). The transition suffers from mild contamination by
interloping hydrogen on $\approx 25$ km s$^{-1}$\ to the red side of the absorption profile, but this absorption has little effect on the $N_{\rm D I}$\ value.
Fits to the data including and excluding a model for the interloping
hydrogen improve the $\chi^2$ for the fit without changing the value
or uncertainty in the $N_{\rm D I}$\ value.
We have also estimated $N_{\rm D I}$\ using the AODM technique, and
arrive at a consistent value $\log N_{\rm D I}$\ $=16.20 \pm 0.04$cm$^{-2}$.
The optical depth of this absorption feature is ideal for measuring a
column density because it is highly insensitive to the local continuum
level placement. If we vary the amplitude of the continuum level by
as much as 20\% about the adopted value, the central value of
$N_{\rm D I}$\ changes by less than the statistical error.
The $N_{\rm D I}$\ value is further constrained by
other Lyman series lines, e.g.\ the depth of the deuterium
Lyman--8 transition rules out significantly larger or smaller values of $N_{\rm D I}$.
We obtain a value of $z=2.702626 \pm 0.000007$
for the deuterium absorption, consistent with
that of the H~I and other metals.
We measure a velocity width of $b=10.48 \pm 0.78$ km s$^{-1}$\ for the deuterium absorption.
Neutral hydrogen
gas with $\log N_{\rm H I}$\ $\simeq 16.2$ cm$^{-2}$\ is not expected to have such a narrow
Doppler parameter \citep{kt97} whereas the value is reasonable for D.
We detect over 30 metal-line transitions in the DLA absorber.
A subset of these are summarized in
Table~\ref{dhlinetab} and are shown in Figure~\ref{fig_velp}.
In particular, we note the presence of \ion{O}{1} absorption
at $z=2.702610 \pm 0.000005$,
which is well described by a single component.
\ion{O}{1} absorption is important for measuring D/H in that \ion{O}{1}
directly traces the \ion{H}{1} gas \citep[see][]{omeara01},
and because \ion{O}{1}/\ion{H}{1} $\approx$ O/H in
most environments (especially DLA).
Adopting the measured value of $\log N_{OI} = 15.86$, we establish a
metallicity of [O/H]$= -1.49$ for the absorber assuming
the solar (atmospheric) oxygen abundance reported by \cite{asplund04}.
This metallicity is higher than all the other extragalactic measurements
of D/H. Nevertheless, a 3\% solar metallicity
implies minimal astration of D and we believe this system
is still representative of primordial gas (see Figure 20 of K03 and \cite{romano06}).
Finally, we note that the velocity structure of the absorber,
as traced by the metal lines, is amongst the simplest yet
observed for a DLA \citep{pw01}.
\section{Discussion}
We now discuss the value of D/H we obtain for the absorber and place it within the context of the combined D/H ratio for all QSO absorption systems.
The best estimates of the \ion{H}{1} and \ion{D}{1} column densities
in the DLA towards SDSS1558-0031\
imply a value of $\log \rm{D/H} = -4.48 \pm 0.06$. The errors on D/H stem primarily from the effect of continuum placement uncertainty on the H~I column density, and the signal-to-noise ratio of the data at Lyman-11 where the \ion{D}{1} column density is best constrained.
Turning now to the combined D/H value from QSO sightlines, Figure \ref{fig_alldh} shows the new value of D/H from SDSS1558-0031\ along with the previous values of D/H taken from the sample discussed in K03. We do not include the result of Crighton \textit{et al.}\ (2004), since we feel that the errors on D/H in this system have been under-estimated, particularly
for the reported $N_{\rm H I}$\ value.
We do not include the results of \citet{levshakovDH} for the reasons given
in K03.
The horizontal solid line shows the value for the weighted mean
of the data
$\log \rm{D/H} = -4.54581 \pm 0.03606$, which we round to
$\log \rm{D/H} = -4.55 \pm 0.04$
to keep consistent with the literature,
and the dashed lines show the $\pm 1 \sigma$ uncertainties
estimated from a jackknife analysis of the weighted means. In the case of asymmetric errors on individual D/H measurements, we have
adopted the larger of the errors for the calculation of the weighting.
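For concreteness, the weighted mean and its jackknife uncertainty can be computed along the following lines (Python); the numbers below are placeholders only, not the actual measurements.
\begin{verbatim}
import numpy as np

# placeholder values -- substitute the six published log(D/H) and 1-sigma errors
logdh = np.array([-4.60, -4.48, -4.62, -4.40, -4.79, -4.48])
sigma = np.array([ 0.04,  0.06,  0.05,  0.07,  0.06,  0.06])

w = 1.0 / sigma**2
wmean = np.sum(w * logdh) / np.sum(w)            # weighted mean

# delete-one jackknife estimate of the uncertainty on the weighted mean
n = len(logdh)
jk = np.array([np.sum(np.delete(w, i) * np.delete(logdh, i)) /
               np.sum(np.delete(w, i)) for i in range(n)])
jk_err = np.sqrt((n - 1.0) / n * np.sum((jk - jk.mean())**2))
\end{verbatim}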
Prior to the addition of SDSS1558-0031\ to the sample of D/H measurements of K03, a $\chi^2$--minimizing linear fit to the data of the form
$\log \rm{D/H} = -2.914 \pm 0.467 - (0.087 \pm 0.025)\times$$\log N_{\rm H I}$\ provided an acceptable fit to the data ($P_{(\chi^2 > \chi^2_{\rm{fit}})} = 0.74$). With the inclusion of SDSS1558-0031, however, we see a significant decrease in the
likelihood that there is a D/H trend with $N_{\rm H I}$.
Although the data are best fit with a non--zero slope,
$\log \rm{D/H} = -3.707 \pm 0.385 - (0.044 \pm 0.021)\times$$\log N_{\rm H I}$,
the slope differs from zero at only the $2\sigma$ level (even before
accounting for the presence of $N_{\rm H I}$\ in both axes).
Furthermore, this is not a good model of the data, with $P_{(\chi^2 > \chi^2_{\rm{fit}})} = 0.04$. As such, the data give little confidence to the existence of a trend of D/H with $N_{\rm H I}$.
Finally, if we were to include
the \cite{crighton04} and \cite{levshakovDH} results, the
likelihood that D/H depends on $N_{\rm H I}$\ is further diminished.
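The corresponding weighted straight-line fit and its goodness-of-fit probability can be sketched as follows (Python), with \texttt{x} the $\log N_{\rm H I}$ values and \texttt{y}, \texttt{sigma} the $\log$ D/H measurements and their errors as assumed inputs.
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

def weighted_line_fit(x, y, sigma):
    # chi^2-minimizing fit y = a + b*x with errors on y only
    w = 1.0 / sigma**2
    A = np.vstack([np.ones_like(x), x]).T
    cov = np.linalg.inv(A.T @ (w[:, None] * A))
    a, b = cov @ (A.T @ (w * y))
    chi2_fit = np.sum(w * (y - a - b * x)**2)
    p_value = chi2.sf(chi2_fit, len(x) - 2)   # P(chi^2 > chi^2_fit)
    return a, b, chi2_fit, p_value
\end{verbatim}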
The
presumption of a single value for D/H, however, is still not supported by the observations; the observed scatter exceeds that expected assuming
the error estimates reported in the literature. Our adopted error on
the best estimate for D/H from the jackknife estimation exceeds the
error on the weighted mean by a factor of two.
Likewise, the $\chi^2$ of the six measurements of D/H about the weighted
mean value of $\log \rm{D/H} = -4.55 \pm 0.04$\ is high,
with $P_{(\chi^2 > \chi^2_{\rm{D/H}})} = 0.01$. Although there is the possibility that the scatter in the individual D/H measurements is real, we prefer
the hypothesis of K03 that the errors in some of the individual measurements, if not all of them, are underestimated. It is likely that a combination of new methods of error analysis and new QSO sightlines is required to fully address the excess scatter in the D/H measurements.
Using the framework from SBBN \citep{bnt2001}, the value for D/H of $\log \rm{D/H} = -4.55 \pm 0.04$\ translates to a value for the cosmological baryon density of $\Omega_{b}h^{2} = 0.0213 \pm 0.0013 \pm 0.0004$, where the first error term comes from the errors on D/H explained above, and the second term from the uncertainties in the nuclear reaction rates. By comparison, the WMAP three year result provides an estimate of $\Omega_{b}h^{2}$ $=0.0223^{+0.0007}_{-0.0009}$, which lies within the $1\sigma$ error estimate on $\Omega_{b}h^{2}$\ from D/H (Spergel \textit{et al.}\ 2006).
Finally, we note that the absorber showing D/H in the spectrum of SDSS1558-0031\ was discovered serendipitously as part of our survey for Lyman limit absorption, and is the first D/H measurement from a QSO first discovered by the SDSS.
The SDSS Data Release 3 alone has 405 DLA with
redshifts optimal for D/H \citep[$2.51 \le z \le 4.0$][]{dla_dr3}.
Assuming a small fraction of these DLA provide measurements of D/H,
the SDSS will give many tens of measurements. The situation improves further
if one considers the Lyman limit systems toward the SDSS QSO sample.
This contrasts with the SLLS and DLA which give the $N_{\rm H I}$\ from the Ly$\alpha$\ and Ly$\beta$\ lines, and the $N_{\rm D I}$\ from the unsaturated D~I lines in the Lyman series. Because of this effect, the DLA and SLLS offer more pathlength per QSO to potentially find D/H. All of these effects combine to give a likely distribution of D/H measurements which is roughly independent of $N_{\rm H I}$, a hypothesis which is already being hinted at in the current sample, since two measurements come from LLS, two from SLLS, and two from DLA.
Altogether, the SDSS offers the best opportunity for investigating
the larger than expected scatter in D/H and correlations with
$N_{\rm H I}$, metallicity, etc.
\acknowledgments
The authors wish to recognize and acknowledge the very significant
cultural role and reverence that the summit of Mauna Kea has always
had within the indigenous Hawaiian community. We are most fortunate
to have the opportunity to conduct observations from this mountain.
We thank M. Pettini, G. Steigman, and the referee, P. Molaro, for insightful questions and comments which improved this letter.
GEP and JXP are supported by NSF grant AST-0307408.
JO and SB acknowledge support from NSF grant AST-0307705.
\section*{Acknowledgements}
The Know‐Center is funded within the Austrian COMET Program – Competence Centers for Excellent Technologies – under the auspices of the Austrian Federal Ministry of Transport, Innovation and Technology, the Austrian Federal Ministry of Economy, Family and Youth and by the State of Styria. COMET is managed by the Austrian Research Promotion Agency FFG.
This work is supported by the H2020 project TRUSTS (GA: 871481) and the "DDAI" COMET Module within the COMET Program, funded by the Austrian Federal Ministry for Transport, Innovation and Technology (bmvit), the Austrian Federal Ministry for Digital and Economic Affairs (bmdw), the Austrian Research Promotion Agency (FFG), the province of Styria (SFG) and partners from industry and academia.
\section{Detailed effectiveness results}
\label{app:detail-result}
Table~\ref{tab:acc} shows the accuracy of GCN~\cite{gcn} for node classification on the considered datasets after changing the structure using different combinations of Structack\xspace with a perturbation rate $r=0.05$. These are the detailed results of what Figure~\ref{fig:critical-distance} summarizes.
\begin{table}[h]
\centering
\footnotesize
\caption{GCN accuracy on each dataset after applying Structack\xspace with each centrality$\times$similarity combination. The lowest accuracy (best combination) in each dataset is shown in boldface.}
\label{tab:acc}
\begin{tabular}{llllll}
\toprule
& Similarity & Community & Distance & Katz & Random \\
Dataset & Centrality & & & & \\
\midrule
\multirow{6}{*}{Citeseer} & Betweenness & 72.06\tiny{$\pm$0.9} & 72.04\tiny{$\pm$1.0} & 71.83\tiny{$\pm$1.0} & 72.22\tiny{$\pm$1.0} \\
& Closeness & 72.75\tiny{$\pm$0.7} & 71.99\tiny{$\pm$1.0} & 72.22\tiny{$\pm$1.1} & 72.88\tiny{$\pm$1.1} \\
& Degree & 71.89\tiny{$\pm$0.9} & 71.66\tiny{$\pm$1.0} & \textbf{71.33\tiny{$\pm$1.1}} & 71.86\tiny{$\pm$0.9} \\
& Eigenvector & 72.44\tiny{$\pm$1.1} & 72.55\tiny{$\pm$0.8} & 72.46\tiny{$\pm$1.3} & 73.05\tiny{$\pm$0.8} \\
& Pagerank & 71.67\tiny{$\pm$1.0} & 71.38\tiny{$\pm$1.0} & 71.67\tiny{$\pm$1.0} & 72.18\tiny{$\pm$0.7} \\
& Random & 73.08\tiny{$\pm$1.1} & 73.08\tiny{$\pm$1.1} & 73.04\tiny{$\pm$0.9} & 72.64\tiny{$\pm$1.2} \\
\cline{1-6}
\multirow{6}{*}{Cora} & Betweenness & 79.05\tiny{$\pm$0.5} & 79.11\tiny{$\pm$0.4} & 78.77\tiny{$\pm$0.5} & 79.65\tiny{$\pm$0.5} \\
& Closeness & 79.99\tiny{$\pm$0.4} & 79.99\tiny{$\pm$0.6} & 79.57\tiny{$\pm$0.5} & 80.33\tiny{$\pm$0.4} \\
& Degree & 78.51\tiny{$\pm$0.6} & 78.80\tiny{$\pm$0.5} & 78.98\tiny{$\pm$0.5} & 79.63\tiny{$\pm$0.5} \\
& Eigenvector & 80.19\tiny{$\pm$0.5} & 79.80\tiny{$\pm$0.5} & 79.93\tiny{$\pm$0.6} & 80.53\tiny{$\pm$0.5} \\
& Pagerank & 78.85\tiny{$\pm$0.5} & 78.53\tiny{$\pm$0.5} & \textbf{78.40\tiny{$\pm$0.5}} & 78.99\tiny{$\pm$0.5} \\
& Random & 80.35\tiny{$\pm$0.5} & 80.44\tiny{$\pm$0.5} & 80.31\tiny{$\pm$0.5} & 80.56\tiny{$\pm$0.5} \\
\cline{1-6}
\multirow{6}{*}{Cora-ML} & Betweenness & 80.50\tiny{$\pm$0.7} & 80.16\tiny{$\pm$0.5} & 80.27\tiny{$\pm$0.7} & 80.85\tiny{$\pm$0.7} \\
& Closeness & 81.42\tiny{$\pm$0.7} & 81.32\tiny{$\pm$0.7} & 81.58\tiny{$\pm$0.6} & 82.14\tiny{$\pm$0.6} \\
& Degree & 80.12\tiny{$\pm$0.6} & 80.15\tiny{$\pm$0.5} & 80.17\tiny{$\pm$0.6} & 80.51\tiny{$\pm$0.7} \\
& Eigenvector & 81.72\tiny{$\pm$0.8} & 81.60\tiny{$\pm$0.5} & 81.48\tiny{$\pm$0.7} & 82.09\tiny{$\pm$0.7} \\
& Pagerank & 80.51\tiny{$\pm$0.6} & 80.19\tiny{$\pm$0.6} & \textbf{80.06\tiny{$\pm$0.7}} & 80.99\tiny{$\pm$0.8} \\
& Random & 82.23\tiny{$\pm$0.7} & 82.00\tiny{$\pm$0.6} & 82.10\tiny{$\pm$0.6} & 82.24\tiny{$\pm$0.6} \\
\cline{1-6}
\multirow{6}{*}{Polblogs} & Betweenness & \textbf{75.19\tiny{$\pm$0.9}} & 77.88\tiny{$\pm$3.0} & 76.41\tiny{$\pm$1.8} & 83.17\tiny{$\pm$2.5} \\
& Closeness & 75.87\tiny{$\pm$1.4} & 79.09\tiny{$\pm$2.4} & 75.72\tiny{$\pm$1.8} & 83.02\tiny{$\pm$2.0} \\
& Degree & 75.65\tiny{$\pm$1.3} & 78.87\tiny{$\pm$2.0} & 76.27\tiny{$\pm$1.4} & 82.48\tiny{$\pm$2.9} \\
& Eigenvector & 76.52\tiny{$\pm$0.9} & 77.69\tiny{$\pm$2.1} & 76.62\tiny{$\pm$1.7} & 82.41\tiny{$\pm$2.7} \\
& Pagerank & 75.25\tiny{$\pm$1.4} & 78.09\tiny{$\pm$2.2} & 75.99\tiny{$\pm$1.6} & 82.73\tiny{$\pm$2.6} \\
& Random & 79.93\tiny{$\pm$3.3} & 80.25\tiny{$\pm$2.9} & 80.13\tiny{$\pm$3.4} & 83.57\tiny{$\pm$3.4} \\
\cline{1-6}
\multirow{6}{*}{Pubmed} & Betweenness & 84.93\tiny{$\pm$0.3} & 84.71\tiny{$\pm$0.2} & 84.38\tiny{$\pm$0.2} & 85.21\tiny{$\pm$0.3} \\
& Closeness & 85.42\tiny{$\pm$0.3} & 85.38\tiny{$\pm$0.2} & 85.24\tiny{$\pm$0.3} & 85.58\tiny{$\pm$0.3} \\
& Degree & 84.79\tiny{$\pm$0.3} & 84.55\tiny{$\pm$0.3} & 84.34\tiny{$\pm$0.3} & 85.08\tiny{$\pm$0.4} \\
& Eigenvector & 85.45\tiny{$\pm$0.3} & 85.49\tiny{$\pm$0.2} & 85.40\tiny{$\pm$0.3} & 85.65\tiny{$\pm$0.2} \\
& Pagerank & 84.54\tiny{$\pm$0.4} & 84.20\tiny{$\pm$0.3} & \textbf{ 84.08\tiny{$\pm$0.3}} & 85.13\tiny{$\pm$0.2} \\
& Random & 85.83\tiny{$\pm$0.3} & 85.74\tiny{$\pm$0.3} & 85.64\tiny{$\pm$0.3} & 85.90\tiny{$\pm$0.3} \\
\bottomrule
\end{tabular}
\end{table}
\newpage
\section{Detailed efficiency results}
\label{app:detail-efficiency}
We show the detailed runtime for Structack\xspace combinations in Table~\ref{tab:runtime-all} and the memory consumption in Table~\ref{tab:memory-all}. Runtime and memory consumption results are obtained after setting random linking with each selection method, and random selection with each linking method. We chose random because its time and memory requirements are negligible for our comparison. For these experiments, we also set the perturbation rate $r$ to 0.05.
\begin{table}[h]
\centering
\footnotesize
\caption{Runtime in seconds for each selection/linking method.}
\label{tab:runtime-all}
\begin{tabular}{lllllll}
\toprule
& Dataset & Citeseer & Cora & Cora-ML & Polblogs & Pubmed \\
\midrule
\multirow{5}{*}{Centrality} & Betweenness & 170.42 & 242.02 & 398.90 & 233.17 & 19,507.80 \\
& Closeness & 36.95 & 54.19 & 91.24 & 66.79 & 3,938.98 \\
& \textbf{Degree} & \textbf{0.18} & \textbf{0.42} & \textbf{0.46} & \textbf{0.96} & \textbf{2.82} \\
& Eigenvector & 1.04 & 2.10 & 6.12 & 5.57 & 21.56 \\
& Pagerank & 2.01 & 2.22 & 3.09 & 4.65 & 20.86 \\
\cline{1-7}
\multirow{3}{*}{Similarity} & \textbf{Community} & \textbf{2.38} & \textbf{ 3.06} & \textbf{ 5.60} & \textbf{13.48} & \textbf{110.42} \\
& Distance & 3.27 & 5.86 & 13.48 & 49.53 & 468.35 \\
& Katz & 58.75 & 79.47 & 85.60 & 38.30 & 5,978.16 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}[h]
\centering
\footnotesize
\caption{Memory consumption in Megabytes for each selection/linking method.}
\label{tab:memory-all}
\begin{tabular}{llrrrrr}
\toprule
& Dataset & Citeseer & Cora & Cora-ML & Polblogs & Pubmed \\
\midrule
\multirow{5}{*}{Centrality} & Betweenness & 367 & 366 & 363 & 364 & 438 \\
& Closeness & 369 & 367 & 365 & 365 & 438 \\
& \textbf{Degree} & \textbf{316} & \textbf{318} & \textbf{321} & \textbf{323} & \textbf{423} \\
& Eigenvector & 357 & 354 & 353 & 361 & 437 \\
& Pagerank & 349 & 349 & 354 & 367 & 500 \\
\cline{1-7}
\multirow{3}{*}{Similarity} & \textbf{Community} & \textbf{368} & \textbf{367} & \textbf{370} & \textbf{387} & \textbf{1,055} \\
& Distance & 387 & 382 & 400 & 409 & 2,150 \\
& Katz & 549 & 609 & 659 & 432 & 13,881 \\
\bottomrule
\end{tabular}
\end{table}
\newpage
\section{Detailed unnoticeability results}
\label{app:detail-unnoticeability}
We show the unnoticeability of different combinations of Structack\xspace and with different perturbation rates $r \in$ \{0.001, 0.002, 0.003, 0.004, 0.005, 0.0075, 0.01, 0.025, 0.05, 0.075, 0.10, 0.15, 0.20\}. These are the detailed results of what Table~\ref{tab:unnoticeability} summarizes.
Notice that the choice of similarity has no impact on $r_{critical}$.
For Citeseer, Cora and Cora-ML, closeness centrality and eigenvector centrality are the least noticeable.
For the other two datasets (Polblogs and Pubmed), all centrality measures have the same $r_{critical}$.
\begin{table}[h]
\centering
\footnotesize
\caption{Attack unnoticeability on each dataset after applying Structack\xspace with each centrality$\times$similarity combination.
The centrality measure (except random) with the highest critical perturbation rate $r_{critical}$ in each dataset is shown in boldface.}
\label{tab:detailed-unnoticeability}
\begin{tabular}{llllll}
\toprule
& Similarity & Community & Distance & Katz & Random \\
Dataset & Centrality & & & & \\
\midrule
\multirow{6}{*}{Citeseer} & Betweenness & 0.0100 & 0.0100 & 0.0100 & 0.0100 \\
& \textbf{Closeness} & 0.0250 & 0.0250 & 0.0250 & 0.0250 \\
& Degree & 0.0100 & 0.0100 & 0.0100 & 0.0100 \\
& \textbf{Eigenvector} & 0.0250 & 0.0250 & 0.0250 & 0.0250 \\
& Pagerank & 0.0100 & 0.0100 & 0.0100 & 0.0100 \\
& Random & 0.0500 & 0.0500 & 0.0500 & 0.0500 \\
\cline{1-6}
\multirow{6}{*}{Cora} & Betweenness & 0.0100 & 0.0100 & 0.0100 & 0.0100 \\
& \textbf{Closeness} & 0.0250 & 0.0250 & 0.0250 & 0.0250 \\
& Degree & 0.0075 & 0.0075 & 0.0075 & 0.0075 \\
& \textbf{Eigenvector} & 0.0250 & 0.0250 & 0.0250 & 0.0250 \\
& Pagerank & 0.0075 & 0.0075 & 0.0075 & 0.0075 \\
& Random & 0.0250 & 0.0250 & 0.0250 & 0.0250 \\
\cline{1-6}
\multirow{6}{*}{Cora-ML} & Betweenness & 0.0100 & 0.0100 & 0.0100 & 0.0100 \\
& \textbf{Closeness} & 0.0100 & 0.0100 & 0.0100 & 0.0100 \\
& Degree & 0.0050 & 0.0050 & 0.0050 & 0.0050 \\
& \textbf{Eigenvector} & 0.0100 & 0.0100 & 0.0100 & 0.0100 \\
& Pagerank & 0.0050 & 0.0050 & 0.0050 & 0.0050 \\
& Random & 0.0250 & 0.0250 & 0.0250 & 0.0250 \\
\cline{1-6}
\multirow{6}{*}{Polblogs} & \textbf{Betweenness} & 0.0020 & 0.0020 & 0.0020 & 0.0020 \\
& \textbf{Closeness} & 0.0020 & 0.0020 & 0.0020 & 0.0020 \\
& \textbf{Degree} & 0.0020 & 0.0020 & 0.0020 & 0.0020 \\
& \textbf{Eigenvector} & 0.0020 & 0.0020 & 0.0020 & 0.0020 \\
& \textbf{Pagerank} & 0.0020 & 0.0020 & 0.0020 & 0.0020 \\
& Random & 0.0100 & 0.0100 & 0.0100 & 0.0100 \\
\cline{1-6}
\multirow{6}{*}{Pubmed} & \textbf{Betweenness} & 0.0030 & 0.0030 & 0.0030 & 0.0030 \\
& \textbf{Closeness} & 0.0030 & 0.0030 & 0.0030 & 0.0030 \\
& \textbf{Degree} & 0.0030 & 0.0030 & 0.0030 & 0.0030 \\
& \textbf{Eigenvector} & 0.0030 & 0.0030 & 0.0030 & 0.0030 \\
& \textbf{Pagerank} & 0.0030 & 0.0030 & 0.0030 & 0.0030 \\
& Random & 0.0050 & 0.0050 & 0.0050 & 0.0050 \\
\bottomrule
\end{tabular}
\end{table}
\section{Conclusion}
We investigated the effectiveness of uninformed adversarial attacks on GNNs, i.e. attacks that have no access to information about node labels or feature vectors in a graph.
With theoretical considerations and experimental support, we demonstrated that uninformed attacks can exploit structural features of the graph, such as node centrality and similarity.
We presented Structack\xspace, a novel uninformed attack strategy that selects nodes with low centrality and links pairs of nodes with low similarity.
In experiments on five graph datasets Structack\xspace showed comparable performance to state-of-the-art attacks, while having less information about the graph (no access to node attributes), exhibiting higher efficiency, and reasonable unnoticeability.
Our work shows that uninformed adversarial attacks are successful with only structural knowledge, sometimes outperforming informed attacks.
The feasibility of Structack\xspace on real-world graphs makes it vital to develop more structure-aware defense mechanisms for more reliable GNN prediction.
\section{Discussion}
\para{Performance trade-off.}
Structack\xspace provides competitive effectiveness and high efficiency.
However, it shows to be relatively noticeable compared to baseline approaches.
On the other hand, optimization-based informed attacks achieve better unnoticeability but with much lower efficiency compared to Structack\xspace.
This low efficiency prevents them from running on larger graphs, for which Pubmed is only a toy example (this has been recently noted by \citet{geisler2021attacking}).
A deeper look into Structack\xspace shows that the selection strategy (i.e., centrality measure) has more impact on effectiveness and unnoticeability.
Conversely, the linking strategy (i.e., similarity measure) has more impact on the efficiency.
All in all, we assume an attack to be effective (i.e., it causes a high misclassification rate) if one of the 7 most effective combinations is picked.
When running on big graphs, attackers would tend to choose efficient combinations such as DG$\times$Comm.
To hide their behavior, attackers would tend to choose less noticeable combinations such as BT$\times$Katz.
\para{Limitations.}
Our study focuses on exploiting the structure information using centrality and similarity measures.
One could study other centrality and similarity measures, and even other graph structural properties.
Moreover, instead of the theoretical strategy defined in Section~\ref{sec:attack-strategy}, one could define a more practical heuristic to exploit these structural features.
Furthermore, other forms of degree normalization in the target GNN model could result in different strategies than Structack\xspace, which is an interesting direction for future work.
However, the aim of our work is to illustrate the extent to which uninformed attacks are successful, and we demonstrate that through our Structack\xspace strategy, which covers a range of possibilities of uninformed attacks.
Our unnoticeability measure was limited to degree and clustering coefficient distributions. Different unnoticeability tests could be investigated for this purpose.
In this regard, Structack\xspace appears more noticeable than existing informed attacks due to its greediness in selecting nodes with the lowest centrality.
The unnoticeability results motivate us to look into approaches that intrinsically consider both effectiveness and unnoticeability.
More careful selection could improve Structack\xspace's unnoticeability, at the possible cost of effectiveness.
Additionally, comparing distributions of the clean graph and the perturbed one is not practical for dynamic networks, where edges and nodes are added and removed constantly.
This comparison does not consider the natural growth of the network.
This type of comparison is a common practice in works on adversarial attacks on graphs, and it should be improved.
This could be alleviated by using graph growth models or dedicated datasets with edge timestamps.
\section{Experimental evaluation}
\label{sec:experiments}
\subsection{Adversarial attack evaluation}
The goal of the experimental evaluation is to test the efficacy of Structack\xspace perturbations on GNNs.
To this end, we evaluate Structack\xspace against informed baseline attacks as well as the random (uninformed) baseline.
\ex{Notice that our attacks as well as the evaluated baselines apply structural perturbations only and not feature perturbations.}
For a perturbation rate $r$, we allow each attack to perturb the graph by adding (or removing in case of some studied baselines) a budget of $k = \lfloor r \times m \rfloor$ edges.
We evaluate each attack on three different criteria: (i) Effectiveness in terms of GNN misclassification rate, (ii) Efficiency in terms of computation time and memory requirements, and (iii) Unnoticeability in terms of changes of degree and clustering coefficient distributions.
With this evaluation, we aim to demonstrate a performance trade-off of these three aspects.
\subsection{Experimental setup}
\label{subsec:experimental-setup}
We evaluate $24$ different combinations of Structack\xspace, derived from combining $6$ different possibilities for node selection (including random selection) with $4$ different possibilities for node linking (including random linking), as listed in Table~\ref{tab:complexity}.
We include random selection and random linking to evaluate whether the effectiveness of certain centrality or similarity choices stems from randomness.
We perform the following evaluations on the $5$ datasets described in Table~\ref{tab:datasets}.
\para{Effectiveness.}
To evaluate effectiveness (misclassification), we train a GNN model on the perturbed graph and report the classification accuracy on its test set.
Aiming for more robust evaluation (inspired by~\cite{shchur2018pitfalls}), we use $5$ different random splits (10\% train, 10\% validation, and 80\% test) for each dataset.
Our GNN model of choice is the well-known GCN~\cite{gcn} model, which we initialize $5$ times with different random weights for each perturbed input graph.
For the effectiveness evaluation, we set the perturbation rate to $0.05$.
\para{Efficiency.}
Another criterion for evaluating adversarial attacks is their ability to efficiently use available resources in terms of computation time and used memory.
More efficient attacks have a lower runtime and use less memory.
Please note that we ran all experiments on a machine running Ubuntu Linux 16.04, equipped with an Intel Xeon E5-2630 processor (40 CPUs), 256GB RAM, and a dedicated NVIDIA Tesla P100 16GB GPU. For these efficiency experiments, we also set the perturbation rate to 0.05.
If an attack did not fit into the GPU memory for a particular dataset, we ran it with CPU settings for that dataset.
\para{Unnoticeability.}
To evaluate attack unnoticeability, we run each attack for different perturbation rates $r \in$ \{0.001, 0.002, 0.003, 0.004, 0.005, 0.0075, 0.01, 0.025, 0.05, 0.075, 0.10, 0.15, 0.20\}.
We report results in terms of the critical perturbation rate $r_{critical}$, i.e., the largest $r$ for which the attack is still deemed \textit{unnoticeable}.
We consider the attack to be unnoticeable if the changes in the node degree and local clustering coefficient values made by the attack are not significant.
A commonly used approach for comparing two node degree distributions is the Two-Sample Kolmogorov-Smirnov statistical test (KS test)~\cite{aliakbary2014quantification}.
Therefore, we use the KS test to determine whether two compared samples (original graph versus perturbed graph) stem from the same distribution.
We apply this test to obtain the significance in change for both degree and local clustering coefficient distributions.
Here the null hypothesis of the KS test is that \textit{two samples are drawn from the same continuous distribution}.
We set the probability of rejecting the null hypothesis $\alpha$ to $0.05$.
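For illustration, such a KS-based check can be sketched as follows (a minimal sketch with NetworkX and SciPy; function and variable names are ours, not the exact evaluation code):
\begin{verbatim}
import networkx as nx
from scipy.stats import ks_2samp

def is_unnoticeable(G_clean, G_pert, alpha=0.05):
    # Degree and local clustering coefficient samples of both graphs
    deg_clean = [d for _, d in G_clean.degree()]
    deg_pert = [d for _, d in G_pert.degree()]
    cc_clean = list(nx.clustering(G_clean).values())
    cc_pert = list(nx.clustering(G_pert).values())
    # Two-sample KS tests; null hypothesis: samples come from the same distribution
    p_deg = ks_2samp(deg_clean, deg_pert).pvalue
    p_cc = ks_2samp(cc_clean, cc_pert).pvalue
    # The attack is unnoticeable if neither test rejects the null hypothesis
    return p_deg > alpha and p_cc > alpha
\end{verbatim}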
\para{Baselines.}
We evaluate the most effective combinations of Structack\xspace against the following baselines in terms of the three evaluation criteria.
\textbf{Random:} A simple uninformed baseline attack that selects random node pairs and adds an edge between them. This is the only \textit{uninformed} baseline against which we compare Structack\xspace.
\textbf{DICE~\cite{waniek2018hiding}:} A simple heuristic, which is explicitly based on disconnecting nodes with the same label and connecting nodes with different labels. This attack is \textit{informed} as it has access to node labels.
\textbf{Metattack~\cite{zugner2019metattack}:} State-of-the-art optimization-based attack on graphs via meta-learning. It treats the adjacency matrix as a parameter of the optimization problem, which is minimizing the accuracy of a surrogate model. Metattack does not require access to the GNN model parameters, and uses the surrogate model instead.
\textbf{PGD and MinMax~\cite{xu2019topology}:} State-of-the-art optimization-based attacks on graphs. Both attacks apply projected gradient descent to solve the optimization problem after convex relaxation. MinMax attempts to build a more robust attack through attacking a re-trainable GNN. These two attacks require access to the GNN model parameters.
In addition to the graph structure, Metattack, PGD, and MinMax have access to the feature vectors of all nodes and the labels of some nodes (typically, nodes in the training set).
Thus, these three attacks are \textit{informed} in our definition.
These attacks involve randomization, which is why we initialize each of them $5$ times with different random weights for each attack setting.
\begin{figure*}[t]
\centering
\includegraphics[width=.9\linewidth]{fig/cd-plot-edited.pdf}
\caption{Comparison of Structack\xspace combinations' effectiveness.
This plot shows combinations from most to least effective (lowest to highest GCN classification accuracy), presented from left to right. Thick horizontal bars represent no significant difference between the combinations they mark. We find that the best seven combinations are not significantly different, while being significantly better than the rest. We also see that the stronger impact lies in the choice of centrality, as degree and Pagerank centralities combined with random linking outperform half of the other combinations.
}
\label{fig:critical-distance}
\end{figure*}
\section{Results and discussion}
Next we present evaluation results, discuss trade-offs, and outline the limitations of our work.
In the results tables, we use the abbreviations defined in Table~\ref{tab:complexity} to describe the centrality and similarity measures of Structack\xspace combinations.
\para{Effectiveness.}
First, we apply Structack\xspace combinations to each graph dataset and obtain the GCN accuracy.
Then we compute the average rank of each combination in terms of classification accuracy.
We visualize the ranking in Figure~\ref{fig:critical-distance} with a critical difference diagram.
The thick horizontal bars in this figure group together the combinations with no significant difference\footnote{For details on the computation of significance, we refer to the documentation of the R package \texttt{scmamp} \url{https://cran.r-project.org/web/packages/scmamp/scmamp.pdf}.} in ranks between them.
The six lowest-ranked combinations (which involve randomness) perform significantly worse than the rest.
This confirms that the improvement of Structack\xspace does not stem from randomness.
We observe that the seven most effective combinations are not significantly different.
Among these combinations, we frequently see Pagerank centrality, degree centrality and Katz similarity, which implies the effectiveness of these three measures.
Node centrality in Structack\xspace has a substantial impact on effectiveness, relative to the node similarity.
For example, performing selection with degree or Pagerank centrality and linking at random (Degree.Random and Pagerank.Random in Figure~\ref{fig:critical-distance}) seems to perform better than some combinations that do not involve random linking.
As the seven most effective combinations do not differ significantly from each other, we consequently compare them to the baselines as presented in Table~\ref{tab:acc-baselines}.
Structack\xspace combinations show a comparable performance to state-of-the-art methods, although they have no access to node attributes.
\begin{table}
\centering
\footnotesize
\caption{Adversarial attack effectiveness. This table gives the accuracy of a GCN model trained on the perturbed graph generated by applying each adversarial attack (lower accuracy $\rightarrow$ more effective attack). Structack\xspace (in boldface) is comparable with the state-of-the-art attacks on most datasets with minimum knowledge. According to a Wilcoxon signed-rank test, each Structack approach is significantly more effective than the uninformed Random approach with $p<0.01$ (after Bonferroni correction).
\textit{*~Metattack could not run for Pubmed with 16GB GPU, and did not finish with CPU settings after 3 weeks of running}
}
\label{tab:acc-baselines}
\begin{tabular}{lllllll}
\toprule
& Dataset & Citeseer & Cora & Cora-ML & Polblogs & Pubmed \\
\midrule
& Clean & 71.90\tiny{$\pm$1.9} & 83.44\tiny{$\pm$1.1} & 85.11\tiny{$\pm$0.7} & 94.66\tiny{$\pm$1.2} & 86.51\tiny{$\pm$0.3} \\
\midrule
\parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{{Informed}}}} & DICE & 70.63\tiny{$\pm$1.8} & 80.09\tiny{$\pm$2.1} & 80.74\tiny{$\pm$2.4} & 82.44\tiny{$\pm$6.1} & 83.32\tiny{$\pm$2.6} \\
& Metattack & 69.21\tiny{$\pm$1.7} & 76.83\tiny{$\pm$1.3} & 80.01\tiny{$\pm$1.0} & 76.98\tiny{$\pm$0.9} & N/A* \\
& MinMax & 68.95\tiny{$\pm$0.8} & 78.45\tiny{$\pm$1.1} & 83.39\tiny{$\pm$0.5} & 85.02\tiny{$\pm$2.2} & 84.97\tiny{$\pm$0.6} \\
& PGD & 63.80\tiny{$\pm$0.9} & 75.15\tiny{$\pm$1.4} & 80.17\tiny{$\pm$0.7} & 83.28\tiny{$\pm$3.3} & 82.58\tiny{$\pm$0.4} \\
\cline{1-7}
\parbox[t]{2mm}{\multirow{8}{*}{\rotatebox[origin=c]{90}{{Uninformed}}}} & Random & 72.64\tiny{$\pm$1.2} & 80.56\tiny{$\pm$0.5} & 82.24\tiny{$\pm$0.6} & 83.57\tiny{$\pm$3.4} & 85.90\tiny{$\pm$0.3} \\
& \textbf{BT*Katz} & 71.83\tiny{$\pm$1.0} & 78.77\tiny{$\pm$0.5} & 80.27\tiny{$\pm$0.7} & 76.41\tiny{$\pm$1.8} & 84.38\tiny{$\pm$0.2} \\
& \textbf{DG*Comm} & 71.89\tiny{$\pm$0.9} & 78.51\tiny{$\pm$0.6} & 80.12\tiny{$\pm$0.6} & 75.65\tiny{$\pm$1.3} & 84.79\tiny{$\pm$0.3} \\
& \textbf{DG*Dist} & 71.66\tiny{$\pm$1.0} & 78.80\tiny{$\pm$0.5} & 80.15\tiny{$\pm$0.5} & 78.87\tiny{$\pm$2.0} & 84.55\tiny{$\pm$0.3} \\
& \textbf{DG*Katz} & 71.33\tiny{$\pm$1.1} & 78.98\tiny{$\pm$0.5} & 80.17\tiny{$\pm$0.6} & 76.27\tiny{$\pm$1.4} & 84.34\tiny{$\pm$0.3} \\
& \textbf{PR*Comm} & 71.67\tiny{$\pm$1.0} & 78.85\tiny{$\pm$0.5} & 80.51\tiny{$\pm$0.6} & 75.25\tiny{$\pm$1.4} & 84.54\tiny{$\pm$0.4} \\
& \textbf{PR*Dist} & 71.38\tiny{$\pm$1.0} & 78.53\tiny{$\pm$0.5} & 80.19\tiny{$\pm$0.6} & 78.09\tiny{$\pm$2.2} & 84.20\tiny{$\pm$0.3} \\
& \textbf{PR*Katz} & 71.67\tiny{$\pm$1.0} & 78.40\tiny{$\pm$0.5} & 80.06\tiny{$\pm$0.7} & 75.99\tiny{$\pm$1.6} & 84.08\tiny{$\pm$0.3} \\
\bottomrule
\end{tabular}
\end{table}
\para{Efficiency.}
In Tables~\ref{tab:runtime} and~\ref{tab:memory}, we respectively show the runtime and the memory consumption of our most effective Structack\xspace combinations and existing adversarial attack methods.
We notice a significant drop in runtime and memory consumption for Structack\xspace compared to the optimization-based attacks (Metattack, PGD, and MinMax).
These three attacks did not fit in the available GPU memory for Pubmed, and therefore we ran them with CPU settings for this dataset.
For Structack\xspace combinations, the similarity measure generally has a substantial effect on runtime and memory consumption, with community-based similarity being the most efficient.
An exception to this rule is the runtime of Betweenness and Closeness centralities.
For example on Pubmed, Betweenness and Closeness computation takes $325$ and $66$ minutes respectively, while the computation of Katz similarity takes $100$ minutes.
The time complexity of computing these two measures (Table~\ref{tab:complexity}) is $\mathcal{O}(nm)$, making them impractically slow for large graphs.
\begin{table}
\centering
\footnotesize
\caption{Runtime in minutes with 0.05 perturbation rate. Structack\xspace is in boldface.
}
\label{tab:runtime}
\begin{tabular}{llrrrrr}
\toprule
& Dataset & Citeseer & Cora & Cora-ML & Polblogs & Pubmed \\
\midrule
\parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{{Informed}}}} & DICE & 0.05 & 0.07 & 0.13 & 0.08 & 3.07 \\
& Metattack & 7.75 & 7.65 & 22.38 & 8.80 & N/A \\
& MinMax & 12.83 & 13.03 & 13.68 & 12.58 & 2,645.87 \\
& PGD & 12.15 & 12.08 & 12.35 & 11.10 & 1,569.55 \\
\cline{1-7}
\parbox[t]{2mm}{\multirow{7}{*}{\rotatebox[origin=c]{90}{{Uninformed}}}} & \textbf{BT*Katz} & 3.33 & 5.12 & 7.42 & 3.87 & 379.00 \\
& \textbf{DG*Comm} & 0.03 & 0.05 & 0.10 & 0.25 & 1.70 \\
& \textbf{DG*Dist} & 0.05 & 0.10 & 0.23 & 0.82 & 8.08 \\
& \textbf{DG*Katz} & 0.98 & 1.30 & 1.55 & 0.63 & 97.98 \\
& \textbf{PR*Comm} & 0.05 & 0.08 & 0.15 & 0.27 & 1.90 \\
& \textbf{PR*Dist} & 0.10 & 0.12 & 0.27 & 0.87 & 8.13 \\
& \textbf{PR*Katz} & 0.93 & 1.32 & 1.52 & 0.72 & 93.28 \\
\bottomrule
\end{tabular}
\end{table}
\begin{table}
\centering
\footnotesize
\caption{Memory consumption in Megabytes with 0.05 perturbation rate. Structack\xspace is in boldface.
\textit{*~Snapshot taken after 3 weeks of running}.
}
\label{tab:memory}
\begin{tabular}{llrrrrr}
\toprule
& Dataset & Citeseer & Cora & Cora-ML & Polblogs & Pubmed \\
\midrule
\parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{{Informed}}}} & DICE & 313 & 1,623 & 1,213 & 1,230 & 773 \\
& Metattack & 2,074 & 2,096 & 2,123 & 2,078 & *58,394 \\
& MinMax & 2,176 & 2,243 & 2,318 & 2,109 & 20,554 \\
& PGD & 2,155 & 2,232 & 2,299 & 2,110 & 19,779 \\
\cline{1-7}
\parbox[t]{2mm}{\multirow{7}{*}{\rotatebox[origin=c]{90}{{Uninformed}}}} & \textbf{BT*Katz} & 578 & 626 & 677 & 442 & 13,918 \\
& \textbf{DG*Comm} & 316 & 322 & 337 & 403 & 901 \\
& \textbf{DG*Dist} & 433 & 431 & 430 & 433 & 1,995 \\
& \textbf{DG*Katz} & 556 & 587 & 662 & 440 & 13,918 \\
& \textbf{PR*Comm} & 445 & 443 & 443 & 460 & 928 \\
& \textbf{PR*Dist} & 445 & 443 & 442 & 450 & 2,021 \\
& \textbf{PR*Katz} & 570 & 617 & 668 & 441 & 13,919 \\
\bottomrule
\end{tabular}
\end{table}
\para{Unnoticeability.}
We report the critical perturbation rate $r_{critical}$ for which the respective attack remains unnoticeable as per our definition in Section~\ref{subsec:experimental-setup}.
We present $r_{critical}$ for each approach in Table~\ref{tab:unnoticeability}.
For most datasets, Structack\xspace's $r_{critical}$ is on par with or slightly lower than that of the informed approaches.
We also observe that the choice for node selection strategy in Structack\xspace has a greater influence on the attack unnoticeability than the node linking strategy.
\begin{table}
\centering
\footnotesize
\caption{Maximum unnoticeable perturbation rate. We evaluate unnoticeability in terms of the critical perturbation rate $r_{critical}$, which we define as the maximum perturbation rate for which the attack remains unnoticeable as per the definition in Section~\ref{subsec:experimental-setup}.
We present the results for $r_{critical}$ per dataset and per adversarial attack. Structack\xspace is in boldface.
}
\begin{tabular}{lllllll}
\toprule
& Dataset & Citeseer & Cora & Cora-ML & Polblogs & Pubmed \\
\midrule
\parbox[t]{2mm}{\multirow{4}{*}{\rotatebox[origin=c]{90}{\scriptsize{Informed}}}} & DICE & 0.0750 & 0.0500 & 0.0500 & 0.0100 & 0.0100 \\
& Metattack & 0.0100 & 0.0100 & 0.0100 & 0.0040 & N/A \\
& MinMax & 0.0750 & 0.0750 & 0.0750 & 0.0500 & 0.0040 \\
& PGD & 0.1000 & 0.0750 & 0.0500 & 0.1000 & 0.0040 \\
\cline{1-7}
\parbox[t]{2mm}{\multirow{8}{*}{\rotatebox[origin=c]{90}{\scriptsize{Uninformed}}}} & Random & 0.0250 & 0.0250 & 0.0250 & 0.0100 & 0.0050 \\
& \textbf{BT*Katz} & 0.0100 & 0.0100 & 0.0075 & 0.0020 & 0.0030 \\
& \textbf{DG*Comm} & 0.0100 & 0.0075 & 0.0050 & 0.0020 & 0.0030 \\
& \textbf{DG*Dist} & 0.0100 & 0.0075 & 0.0050 & 0.0020 & 0.0030 \\
& \textbf{DG*Katz} & 0.0100 & 0.0075 & 0.0050 & 0.0020 & 0.0030 \\
& \textbf{PR*Comm} & 0.0100 & 0.0075 & 0.0050 & 0.0020 & 0.0030 \\
& \textbf{PR*Dist} & 0.0100 & 0.0075 & 0.0050 & 0.0020 & 0.0030 \\
& \textbf{PR*Katz} & 0.0100 & 0.0075 & 0.0050 & 0.0020 & 0.0030 \\
\bottomrule
\end{tabular}
\label{tab:unnoticeability}
\end{table}
\section{Introduction}
\begin{figure*}[t]
\centering
\includegraphics[width=0.9\linewidth]{fig/structack-teaser.pdf}
\caption{Illustration of Structack\xspace on GNN classification. In (a) we show a standard GNN classification with all node features and labels available to the algorithm. In (b) we depict an informed attack, which has access to the same information that the GNN classification task itself has. Based on this information, adversarial edges are added to attack GNN performance.
In (c), we show an uninformed attack strategy, i.e. an attack that has no access to information about node attributes (labels, features), but only to the structure of the graph. A naive strategy could add edges based on topological graph features, for example add edges between pairs of nodes with high centrality or high similarity.
In (d), we show a Structack\xspace attack which also has no access to information about node attributes, but attacks more successfully by adding edges between nodes with low centrality and low similarity.
Structack\xspace approaches the performance of informed attacks such as Metattack~\cite{zugner2019metattack} with less available information.
}
\label{fig:teaser}
\end{figure*}
Graph neural networks (GNNs) are state-of-the-art models for tasks on graphs such as node classification~\cite{gcn}, link prediction~\cite{zhang2018link} and graph classification~\cite{graphsage}.
Recent work has shown that GNNs are vulnerable to adversarial attacks, which can cause GNNs to fail by carefully manipulating node attributes~\cite{ma2020practical}, graph structure~\cite{ma2019rewatt,zugner2019metattack} or both~\cite{zugner2018nettack}.
For example, adversarial attacks on social networks can add links via fake accounts, or change the personal data of a controlled account.
Most existing attacks~\cite{zugner2018nettack,zugner2019metattack,xu2019topology,dai2018rls2v,ma2019rewatt,ma2020practical} assume that information about node attributes (e.g., demographics of users) is available to the attacker.
In practice, however, attackers have limited access to such attribute information.
We thus differentiate between two cases: the \textit{informed} case where both graph structure and node attributes are available to the attacker, and the \textit{uninformed} case where only information about the structure is available (see Figure~\ref{fig:teaser}).
\para{Objectives.}
In this work, we investigate \emph{uninformed} adversarial attacks that aim to reduce the overall accuracy of node classification with GNNs by manipulating the graph structure. Our aim is to study (i) potential strategies for uninformed attacks and (ii) how effective they are in practical settings.
\para{Approach.}
Insights in~\cite{zugner2019metattack,ma2020practical,jin2020prognn} have shown a considerable influence of node degree and shortest paths on GNN robustness.
However, these insights have not been thoroughly investigated.
Therefore, we further inspect the effect of degree centrality and shortest path lengths on GNN adversarial attacks.
First, we theoretically show that with standard degree normalization, low-degree neighbors surprisingly have more influence on a node's representation than higher-degree neighbors.
Second, we discuss the results showing the dependency of GNNs on links within graph communities~\cite{Li2018Deeper,hussain2020impact}, which are ubiquitous in real-world graphs.
Based on that, we argue that adversarial edges should link nodes with longer paths between them.
Experimentally, we verify these insights on degrees and distance through simulating attacks on empirical datasets.
We then introduce our uninformed \textbf{struc}ture-based adversarial at\textbf{tack} (Structack\xspace), which generalizes these findings, and injects links between nodes of low structural centrality and similarity.
Finally, we evaluate Structack\xspace compared to state-of-the-art attacks in terms of (i) reducing GNN accuracy, (ii) computational efficiency, and (iii) the ability to remain undetected.
\para{Contribution and Impact.}
We introduce Structack\xspace\footnote{We provide the implementation of Structack\xspace and the experiments for reproducibility at \url{https://github.com/sqrhussain/structack}.}, a novel structure-based uninformed adversarial attack on GNNs.
In experiments on empirical datasets, Structack\xspace performs on a level that is comparable to more informed state-of-the-art attacks~\cite{xu2019topology,zugner2019metattack}, while using less information about the graph and significantly lower computational requirements.
We give insights on the detection of attacks such as Structack\xspace by analyzing their ability to remain undetected.
With our work, we introduce a new, previously unstudied category of attacks that could be applied to real-world networks.
Our findings highlight the vulnerability of GNNs to uninformed attacks that have no knowledge about node attributes or the attacked model.
Hence, our work contributes toward \ex{building} more robust predictive and defensive models for graph data.
\section{Structack\xspace}
\label{sec:model}
In this section, we introduce our attack strategy Structack\xspace (\textbf{Struc}ture-based at\textbf{tack}), built upon the findings from Section~\ref{sec:hypotheses}.
We outline the attacker's goal, capabilities and knowledge, explain the attack strategy, provide a complexity analysis, and discuss insights on the detection of the attack.
\subsection{Attacker's capabilities and restrictions}
In our setting, the attacker aims to minimize the overall GNN accuracy on node classification.
We limit the knowledge of the attacker to the adjacency matrix, as opposed to existing work~\cite{zugner2018nettack,zugner2019metattack,xu2019topology,ma2019rewatt,dai2018rls2v,ma2020practical}.
The attacker has no access to the features or the label of any node.
They also do not have any information about the attacked GNN model or its parameters.
We assume that the attacker is able to add edges between any pair of nodes\footnote{\ex{This ability might not directly translate to real-world attacks, but it is necessary to study the extent of different attack approaches, including the baselines that we evaluate as well.}} in the graph, up to a limit $k$, called the \textbf{budget}.
As a result, the attack generates a poisoned adjacency matrix $A'$, where $||A-A'||_0 \leq k$.
According to the taxonomy suggested by~\cite{jin2020survey}, our attack is an untargeted (global) poisoning attack on graph structure.
\subsection{Attack strategy}
\label{sec:attack-strategy}
The findings in Section~\ref{sec:hypotheses} show the impact of low node degrees and long node distances in the graph on adversarial attacks.
Following these findings, an efficient strategy to exploit this impact is to (1) \textit{select} nodes with low degrees, and (2) \textit{link} pairs of nodes with high distances.
Node degree is a measure of node centrality, and distance represents one form of node dissimilarity (e.g., Katz similarity~\cite{newman2018networks} gives higher weights to shorter paths).
We generalize node degree and distance to a diverse set of measures of centrality and similarity.
Therefore, Structack\xspace consists of selecting nodes with the lowest \textit{centrality} and linking these nodes so that the \textit{similarity} between linked nodes is minimized.
For a budget $k$, Structack\xspace chooses $2k$ nodes with the lowest centrality.
We then split these nodes into two sets $U_1$ and $U_2$, both of size $k$, based on their centrality, i.e., $U_1$ has the $k$ nodes with lowest centrality.
Then Structack\xspace finds the matching between nodes in $U_1$ and $U_2$ that minimizes the sum of similarities between the matched nodes.
To solve this minimization problem, we use the Hungarian algorithm.
Finally, Structack\xspace adds edges between matched nodes.
For selection and linking steps, we investigate different choices of centrality and similarity measures (Table~\ref{tab:complexity}).
Otherwise, we follow conventional procedures, e.g., splitting the lowest-centrality nodes in order and using the sum of similarities as the criterion for the matching problem.
Please note that investigating other splitting and matching criteria could be interesting, e.g., using an interleaving split.
However, we leave this for future work as we are more interested in the impact of centrality and similarity choices.
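To make these steps concrete, the following sketch outlines one particular combination, degree-based selection with distance-based linking (an illustrative simplification; the reference implementation in our repository may differ in details):
\begin{verbatim}
import numpy as np
import networkx as nx
from scipy.optimize import linear_sum_assignment

def structack_degree_distance(G, k):
    # (1) Selection: the 2k nodes with the lowest degree centrality
    nodes = sorted(G.nodes(), key=lambda v: G.degree(v))[:2 * k]
    U1, U2 = nodes[:k], nodes[k:]   # U1 holds the k lowest-centrality nodes
    # (2) Linking: cost[i, j] = similarity between U1[i] and U2[j];
    # here 1/(1 + shortest-path distance) serves as a distance-based similarity
    n = G.number_of_nodes()
    cost = np.zeros((k, k))
    for i, u in enumerate(U1):
        dist = nx.single_source_shortest_path_length(G, u)   # BFS from u
        for j, v in enumerate(U2):
            cost[i, j] = 1.0 / (1.0 + dist.get(v, n))        # unreachable -> lowest similarity
    # Hungarian algorithm: matching that minimizes the summed similarity
    rows, cols = linear_sum_assignment(cost)
    perturbed = G.copy()
    perturbed.add_edges_from((U1[i], U2[j]) for i, j in zip(rows, cols))
    return perturbed
\end{verbatim}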
\subsection{Complexity analysis}
After computing the centrality for each node, obtaining the $2k$ lowest-centrality nodes for the splitting step requires $\mathcal{O}(n \log k)$ time.
At the final step of Structack\xspace, finding the optimal node matching is a minimum cost maximum bipartite matching problem.
We solve this problem using the Hungarian algorithm, which has a time complexity of $\mathcal{O}(k^3)$.
In Table~\ref{tab:complexity}, we list the centrality and similarity measures we used with their corresponding time and memory complexity.
These measures are well defined in the literature, along with their complexity.
However, to make our paper self-contained, we explain essential details about how we compute similarity and give the resulting time complexity.
\para{Community-based similarity:}
First, we perform community detection using Louvain method~\cite{louvain}, which splits the graph into $C$ disjoint communities.
We then build a community similarity matrix $\mathbf{S} \in \RR^{C \times C}$ encoding the original density of edges, i.e., $\mathbf{S}_{i,j}$ represents the edge density of links between community~$i$ and community~$j$.
Then we set the similarity between two nodes $u$ and $v$ to the similarity of their corresponding communities $\mathbf{S}_{Comm(u), Comm(v)}$, where $Comm(x)$ is the community of node $x$ as per Louvain method.
For the community-based similarity, the time complexity of Louvain community detection is considered to be linear in the number of edges, i.e., $\mathcal{O}(m)$, on typical sparse data~\cite{louvain}, and the edge-density computation step is also of order $\mathcal{O}(m)$, making this similarity calculation of order $\mathcal{O}(m)$ as well.
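A minimal sketch of this community-based similarity is given below (illustrative only; it assumes a NetworkX version that ships \texttt{louvain\_communities} and uses a simplified edge-density definition, so it should not be read as the exact code):
\begin{verbatim}
import networkx as nx

def community_similarity(G, seed=0):
    # Louvain communities (assumes networkx >= 2.8; other Louvain implementations work too)
    comms = nx.community.louvain_communities(G, seed=seed)
    comm_of = {v: c for c, members in enumerate(comms) for v in members}
    C = len(comms)
    # counts[i][j]: number of edges running between community i and community j
    counts = [[0.0] * C for _ in range(C)]
    for u, v in G.edges():
        i, j = comm_of[u], comm_of[v]
        counts[i][j] += 1.0
        if i != j:
            counts[j][i] += 1.0
    # S[i][j]: simplified edge density between communities i and j
    S = [[counts[i][j] / (len(comms[i]) * len(comms[j])) for j in range(C)]
         for i in range(C)]
    # Similarity of two nodes = similarity of their communities
    def sim(u, v):
        return S[comm_of[u]][comm_of[v]]
    return sim
\end{verbatim}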
\para{Distance-based similarity:} We use breadth-first search (BFS) to get single-source shortest paths from each node in $U_1$ to all nodes in $U_2$ (which, in the worst case, means to all nodes in the graph). We choose BFS because we assume that the input graph is unweighted as mentioned in Section~\ref{sec:prelim}. We restrict BFS sources to nodes in $U_1$ since the distance between nodes outside $U_1$ and $U_2$ are not relevant for Structack\xspace.
For the shortest path length computation, and if we do not consider parallelization, the BFS algorithm is repeated $k$ times (once for each node in $U_1$), which gives a time complexity of $O(km)$. Please note that a higher distance indicates a lower similarity.
\para{Katz similarity:}
This notion is a measure of regular equivalence of nodes~\cite{newman2018networks}.
It counts paths of all lengths and weights them differently, i.e., shorter paths receive higher weights.
We can write the Katz similarity matrix as $ \sum_{i=0}^{\infty}{(\alpha A)^i}, $ where $\alpha$ is a constant which needs to be less than the inverse of the largest eigenvalue of $A$.
We approximate the similarity matrix without matrix inversion using \textit{inverse iteration} until the matrix converges after $t$ iterations.
With sparse matrix multiplication, the time complexity turns into $\mathcal{O}(t m)$.
The number of iterations depends on the desired precision of the similarity.
For a typical choice of $\alpha = 0.85$, we obtain a precision of $10^{-6}$ with $t=100$ iterations\footnote{This argument also applies for computing Pagerank which is also $\mathcal{O}(t m)$.}.
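The fixed-point iteration can be sketched as follows, here restricted to a set of source nodes (an illustrative sketch under our assumptions, not the exact implementation):
\begin{verbatim}
import numpy as np

def katz_rows(A, sources, alpha, t=100):
    # Approximate rows of S = sum_i (alpha*A)^i via the fixed point s = e_u + alpha*A*s,
    # iterated t times per source node. A is a scipy.sparse adjacency matrix of an
    # undirected graph (hence symmetric); alpha must stay below 1/lambda_max(A).
    n = A.shape[0]
    rows = {}
    for u in sources:
        e_u = np.zeros(n)
        e_u[u] = 1.0
        s = e_u.copy()              # i = 0 term of the series
        for _ in range(t):          # each step is one sparse matrix-vector product
            s = e_u + alpha * (A @ s)
        rows[u] = s
    return rows
\end{verbatim}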
\begin{table}[]
\centering
\footnotesize
\caption{Description of considered centrality/similarity metrics with their time and memory complexity. We list abbreviations for the metric names to use later in results tables.
$n$ and $m$ represent the number of nodes and edges in the graph respectively, and $k$ is the attack budget. $t$ is the number of iterations for computing Pagerank centrality and Katz similarity.}
\label{tab:complexity}
\begin{tabular}{ccc}
\toprule
Centrality metric & Time complexity & Memory complexity \\
\midrule
Degree (DG) & $\mathcal{O}(m)$ & $\mathcal{O}(m)$ \\
Eigenvector (EV)~\cite{newman2008mathematics} & $\mathcal{O}(m)$ & $\mathcal{O}(n+m)$ \\% https://www.lume.ufrgs.br/bitstream/handle/10183/122516/000971709.pdf?sequence=1
Pagerank (PR)~\cite{page1999pagerank} & $ \mathcal{O}(tm)$ & $\mathcal{O}(n+m)$\\% https://docs.oracle.com/cd/E56133_01/2.4.0/reference/algorithms/pagerank.html
Betweenness (BT)~\cite{newman2008mathematics,brandes2001faster} & $\mathcal{O}(nm)$ & $\mathcal{O}(n+m)$ \\% http://www.uvm.edu/pdodds/research/papers/others/2001/brandes2001a.pdf
Closeness (CL)~\cite{newman2008mathematics,freeman1978centrality} &$\mathcal{O}(nm)$&$\mathcal{O}(n+m)$ \\
\midrule
\midrule
Similarity metric & Time complexity & Memory complexity \\
\midrule
Katz (Katz)~\cite{newman2018networks} & $\mathcal{O}(tm)$ & $\mathcal{O}(n^2)$\\
Community-based (Comm) & $\mathcal{O}(m)$ & $\mathcal{O}(m)$\\
Distance-based (Dist) & $\mathcal{O}(km)$ & $\mathcal{O}(m)$ \\
\bottomrule
\end{tabular}
\end{table}
\subsection{Insights on attack detection}
\label{subsec:attack-unnoticeability-consideration}
Structack\xspace selects nodes with low centrality, which typically have few edges.
Therefore, the attack can cause a significant change in the degree distribution.
We hence suggest observing the changes in the degree distribution, similar to~\cite{zugner2018nettack}.
Moreover, Structack\xspace links pairs of nodes with low structural similarity, which are likely to have few common neighbors.
The local clustering coefficient of a node is lower with fewer edges shared among its neighbors~\cite{newman2018networks}.
Based on that, we expect that Structack\xspace causes a significant change in the local clustering coefficients of the nodes being linked.
Therefore, we also suggest observing the changes in the local clustering coefficient distribution.
We define two criteria to detect the attack by comparing the original and the perturbed graphs.
First, we test if the node degrees of both graphs stem from the same distribution.
Second, we test if the local clustering coefficient values of both graphs also stem from the same distribution.
If values are assumed to stem from the same distribution in both cases, we consider the attack to be \textit{unnoticeable}.
\section{Background}
\label{sec:prelim}
\begin{figure*}
\includegraphics[width=0.95\linewidth]{fig/edge-ends-degree_all.pdf}
\includegraphics[width=.95\textwidth]{fig/distance_range_all.pdf}
\caption{
Impact of degree and distance on adversarial attacks on GNN classification.
\textit{Top}: GNN accuracy when we link nodes of varying degrees to other nodes of varying degrees as well, i.e., low-to-low degrees (top-left corner) up to high-to-high (bottom-right corner).
Linking nodes with lower node degrees appears to result in more effective attacks.
\textit{Bottom}: GNN accuracy when adding edges between pairs of nodes with the lowest distance up to the highest distance.
Linking nodes with higher distance (lower similarity) results in a more effective attack.
Degrees and distances are grouped into 10-quantiles. The presented accuracy comes from training a GNN model (namely GCN\cite{gcn}) on the perturbed graphs of $5$ empirical datasets.
}
\label{fig:injection-experiment}
\end{figure*}
\para{Preliminaries.}
Let $G=(A,X,Y)$ be an attributed undirected graph with an unweighted adjacency matrix $A \in \{0,1\}^{n \times n}$, a feature matrix $X \in \RR^{n \times f}$, and a label matrix $Y \in \{0,1\}^{n \times |L|}$, where $L$ is the set of labels. We refer to the set of nodes as $V=\{1,...,n\}$, and the set of edges as $E$, where $|E|=m$ and $(i,j)\in E$ \textit{iff} $A_{i,j}=1$.
Each node $u \in V$ has a feature vector $x_u \in \RR^f$, where $f$ is the feature vector dimension, and a label $y_u \in L$.
The feature vectors are encoded in $X$, where $u$'s feature vector $x_u^T$ is the row $u$ of matrix~$X$.
The labels of all the nodes are accordingly encoded in $Y$ as well, with one-hot encoding in each row.
We use the notation~$D$ to refer to the degree matrix, a diagonal matrix where $D_{i,i} = d_i$ is the degree of node~$i$.
\para{Graph neural networks.}
GNNs are multi-layer machine learning models that process graph-structured data.
They follow a message passing and aggregation scheme, where nodes aggregate the messages received from their neighbors and update their representation on this basis.
For a GNN with $K$ layers, \citet{gcn} describe the propagation rule in a simple form as follows
\begin{equation}
\label{eq:gcn}
H'^{(k+1)} = \sigma (\Tilde{D}^{-\frac{1}{2}}\Tilde{A}\Tilde{D}^{-\frac{1}{2}} H'^{(k)} W^{(k)})
\end{equation}
for $k = 0,1,...,K-1$, where $\Tilde{A}=A+I$, $\Tilde{D}=D+I$, $H'^{(0)}=X$ are the given node features, $W^{(k)}$ is a trainable weight matrix, and $\sigma$ is an activation function, which is typically non-linear, e.g., ReLU.
This formula is usually written as $H'^{(k+1)} = \sigma (\Hat{A} H'^{(k)} W^{(k)})$ with $\Hat{A} = \Tilde{D}^{-\frac{1}{2}}\Tilde{A}\Tilde{D}^{-\frac{1}{2}}$.
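For illustration, the normalized adjacency matrix $\Hat{A}$ in this propagation rule can be constructed as in the following sketch (our own illustrative code using SciPy sparse matrices, not tied to any specific GNN library):
\begin{verbatim}
import numpy as np
import scipy.sparse as sp

def normalized_adjacency(A):
    # A_hat = D~^{-1/2} (A + I) D~^{-1/2}, with D~ the degree matrix of A + I
    n = A.shape[0]
    A_tilde = A + sp.identity(n, format="csr")           # add self-loops
    d_tilde = np.asarray(A_tilde.sum(axis=1)).ravel()    # degrees including self-loops
    D_inv_sqrt = sp.diags(1.0 / np.sqrt(d_tilde))
    return D_inv_sqrt @ A_tilde @ D_inv_sqrt

# One linearized propagation step (before the nonlinearity): H_next = A_hat @ H
\end{verbatim}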
\para{Node classification with GNNs.}
For node classification, we set the activation function of the last layer of the GNN to softmax
\begin{equation}
\label{eq:softmax}
Z = f(\Theta;A,X) = \text{softmax}(\Hat{A} H'^{(K-1)} W^{(K-1)}),
\end{equation}
where $\Theta=\{W^{(0)},...,W^{(K-1)}\}$ is the set of model parameters which we aim to optimize, and $Z_{u,c}$ represents the model confidence that a node $u$ belongs to label $c$.
Given the labels of a subset of nodes $V' \subseteq V$, the goal of node classification is to find the labels of the unlabeled nodes $V \setminus V'$.
To achieve this with GNNs, a common choice is to minimize the cross entropy error in $V'$
\begin{equation}
\label{eq:loss}
\mathcal{L}(Y,Z) = - \sum_{u \in V'}{\ln Z_{u,y_u}},
\end{equation}
where $Z_{u,y_u}$ represents the model confidence that node $u$ belongs to its ground-truth class $y_u$.
\para{Adversarial attacks on GNNs.}
GNNs are prone to global adversarial attacks on the graph structure (i.e., the adjacency matrix $A$)~\cite{jin2020survey}.
These attacks aim to reduce the overall (i.e., \textit{global}) node classification accuracy of a GNN model.
To that end, these attacks follow different strategies to perturb the graph structure by adding or removing up to a \textit{budget} of $k$ edges.
\section{Impact of the structure on attacks}
\label{sec:hypotheses}
This section presents examples of graph structural properties and analyzes their effect on adversarial attacks on graph structure.
Findings from related work show that node degrees have an impact on graph controllability~\cite{liu2011controllability} and on the selection of targeted nodes in adversarial attacks on node attributes~\cite{ma2020practical}.
Besides, an analysis of Metattack (a state-of-the-art attack) in~\cite{zugner2019metattack} shows a slight tendency of the attack to link pairs of nodes with longer shortest paths (i.e., longer distances\footnote{We refer to shortest path lengths as distances for brevity.}).
However, these works do not particularly focus on the impact of node degrees and distances on adversarial attacks or the reasoning behind it.
Therefore, in the following analysis, we study the impact of node degrees and distances on GNNs from a theoretical perspective, and consequently verify this impact empirically.
\subsection{Impact of degree and distance}
\label{sec:analysis}
\para{Node degree impact.}
\label{sec:node-degree}
We aim to theoretically assess the role of node degree on the propagation in GNNs (Equation~\ref{eq:gcn}).
For this study, we investigate the common degree normalization form as in Equation~\ref{eq:gcn}, i.e., normalization by the degree square root of two adjacent nodes. Using other less common forms of degree normalization or no degree normalization can be investigated in future work.
We first simplify the update rule given in Equation~\ref{eq:gcn} by ignoring the non-linearity in the intermediate layers, i.e., linearizing the equation (inspired by~\cite{zugner2018nettack} and~\cite{wu2019simplifying})
\begin{equation}
\label{eq:linearized}
H'^{(K)} := \text{softmax}(H^{(K)} W)= \text{softmax}(\Hat{A}^K X W),
\end{equation}
where weight matrices $W^{(k)}$ for $k \in \{0,1,..,K-1\}$ are absorbed by $W=W^{(0)} W^{(1)} ... W^{(K-1)} \in \RR^{f \times |L|}$.
We use $H^{(k)} = \Hat{A}^k X \in \RR^{n \times f}$ to represent node intermediate representations at layer $k$ in the linearized model.
Each row $u$ of matrix $H^{(k)}$, denoted as $(h_u^{(k)})^T \in \RR^{f}$, is the intermediate representation of node $u$ at layer $k$.
As $H^{(k)} = \Hat{A} H^{(k-1)}$, we can write the representation in layer $k$ of node $u$ (i.e., $h_u^{(k)}$) in terms of the representations of its neighboring nodes $\Ng{u}$ in the previous layer $k-1$ as follows
\begin{equation}
h_u^{(k)} = \sum_{v \in \Ng{u}}{\frac{1}{\sqrt{d_u d_{v}}} h_{v}^{(k-1)}}.\\
\label{eq:recursion}
\end{equation}
To show the impact of the degree of a specific neighbor $w \in \Ng{u}$ on the node $u$, we compute the derivative of $u$'s final representation $h_u^{(K)}$ with respect to $w$'s initial representation $h_w^{(0)}$ (i.e., the input features for node $w$), that is, the Jacobian matrix $\mathbf{J}^{u,w} \in \RR^{f \times f}$ with $J_{i,j}^{u,w} = \partial h_{u,i}^{(K)}/\partial h_{w,j}^{(0)}$.
Equation~\ref{eq:recursion} shows that the $i$-th vector component of $h_u^{(k)} : k > 0$ (i.e., $h_{u,i}^{(k)}$) only depends on the vector component $h_{w,i}^{(k-1)}$ of the neighbor $w$, and not on any other component $h_{w,j}^{(k-1)}$ with $i \neq j$\footnote{This argument is possible due to linearizing the Equation~\ref{eq:gcn}.}.
By induction, we can show that, for $w \in \Ng{u}$, the $i$-th vector component of $h_{u}^{(k)}$ only depends on the $i$-th vector component of $h_{w}^{(0)}$.
This fact leads to the Jacobian matrix being diagonal. Therefore, it is sufficient to compute the partial derivative for an arbitrary component~$i$
\begin{equation}
J_{i,i}^{u,w} = \frac{\partial h_{u,i}^{(K)}}{\partial h_{w,i}^{(0)}} = \frac{\partial \left(\sum_{v \in \Ng{u}}{\frac{1}{\sqrt{d_u d_{v}}} h_{v,i}^{(K-1)}}\right)}{\partial h_{w,i}^{(0)}}. \\
\end{equation}
By applying the chain rule, we get
\begin{equation}
\label{eq:generalization}
\begin{split}
J_{i,i}^{u,w} & = \sum_{v_1 \in \Ng{u}}{\frac{1}{\sqrt{d_u d_{v_1}}} \frac{\partial h_{v_1,i}^{(K-1)}}{\partial h_{w,i}^{(0)}}}\\
\end{split}
\end{equation}
By repeatedly applying the chain rule $K$ times, we end up at the partial derivative of a node's initial representation $h_{v_K,i}^{(0)}$ with respect to $w$'s initial representation $h_{w,i}^{(0)}$, that is
\begin{equation}
\label{eq:base}
\frac{\partial h_{v_K,i}^{(0)}}{\partial h_{w,i}^{(0)}} =
\begin{cases}
1 : v_K = w\\
0 : v_K \neq w.
\end{cases}
\end{equation}
When we propagate this back to Equation~\ref{eq:generalization}, we arrive at
\begin{equation}
\label{eq:generalization-ext}
\begin{split}
J_{i,i}^{u,w} & = \sum_{v_1 \in \Ng{u}}{\frac{1}{\sqrt{d_u d_{v_1}}} (... (\sum_{v_K \in \{w\}}{\frac{1}{\sqrt{d_{v_{K-1}} d_{v_K}}}}) ...)}\\
\end{split}
\end{equation}
We can rewrite Equation~\ref{eq:generalization-ext} for each (not necessarily simple) path of length $K$ between $u$ and $w$, that is, with $K-1$ intermediate nodes $[v_1, v_2, ..., v_{K-1}] \in \texttt{Paths}(u,w,K)$, as follows
\begin{equation}
\label{eq:generalization-end}
\begin{split}
J_{i,i}^{u,w} & = \frac{1}{\sqrt{d_u d_w}} \sum_{[v_1, v_2, ..., v_{K-1}] \in \texttt{Paths}(u,w,K)}{\prod_{j=1}^{K-1}{\frac{1}{d_{v_j}}}}\\
\end{split}
\end{equation}
This final term is $O((d_u d_w)^{-1/2})$ in terms of the two neighbors' degrees.
This shows that high-degree nodes have less impact on the representations of their neighbors, and that the degree normalization is the main reason.
While the normalization is essential to reduce the bias towards nodes with very high degree, e.g., hubs, it can also make GNNs more vulnerable to attacks from nodes with low degrees.
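This behavior can also be checked numerically with a small toy example (our own illustration; the graph and code below are assumptions, not taken from our experiments). In the linearized model, the influence of a neighbor $w$ on node $u$ after $K$ propagation steps is the entry $(\Hat{A}^K)_{u,w}$:
\begin{verbatim}
import numpy as np
import networkx as nx

# Toy graph: node 0 has two neighbors, node 1 (degree 1) and node 2 (degree 8)
G = nx.Graph()
G.add_edges_from([(0, 1), (0, 2)] + [(2, i) for i in range(3, 10)])
A = nx.to_numpy_array(G) + np.eye(G.number_of_nodes())   # A + I (self-loops)
d = A.sum(axis=1)
A_hat = A / np.sqrt(np.outer(d, d))                       # D~^{-1/2} (A+I) D~^{-1/2}
P = np.linalg.matrix_power(A_hat, 2)                      # linearized influence, K = 2
print(P[0, 1], P[0, 2])  # the low-degree neighbor (node 1) has the larger influence on node 0
\end{verbatim}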
\para{Node distance impact.}
\label{sec:node-distance}
To explain the impact of node distance on attacks, we start by discussing the relationship between GNNs and network communities.
Then we infer the role of the distance in this context.
Communities are densely connected subgraphs, and are very common in empirical networks, e.g., social networks and co-author networks.
The GNN update rule (Equation~\ref{eq:gcn}) is a special form of Laplacian smoothing~\cite{Li2018Deeper}.
Crucially, GNNs assume that
nodes within the same community tend to share the same label and have similar features~\cite{hussain2020impact,Li2018Deeper}.
This indicates that linking nodes from different communities effectively perturbs the GNN accuracy.
Findings in~\cite{bhattacharyya2014community} show that nodes within similar communities tend to have shorter paths between them.
This supports the view that linking nearby nodes likely adds intra-community links, while linking more remote nodes likely adds inter-community links.
Therefore, we hypothesize that linking distant nodes would result in more effective attacks.
\subsection{Empirical validation}
\label{sec:numerical-validation}
\begin{table}[]
\footnotesize
\centering
\caption{Dataset statistics.}
\begin{tabular}{lrrrr}
\toprule
\textbf{Dataset} & \textbf{Nodes} & \textbf{Edges} & \textbf{Features} & \textbf{Labels} \\
\midrule
Citeseer~\cite{sen2008collective} & 2,110 & 3,668 & 3,703 & 6 \\
Cora~\cite{sen2008collective} & 2,485 & 5,069 & 1,433 & 7 \\
Cora-ML~\cite{mccallum2000automating} & 2,810 & 7,981 & 2,879 & 7 \\
Polblogs~\cite{adamic2005political} & 1,222 & 16,714 & 1,490 & 2 \\
Pubmed~\cite{pubmed} & 19,717 & 44,325 & 500 & 3 \\
\bottomrule
\end{tabular}
\label{tab:datasets}
\end{table}
Next, we empirically verify the hypotheses from the previous analysis on the datasets, summarized in Table~\ref{tab:datasets}.
We perform perturbation by adding edges to the graph following different strategies.
Then we observe the accuracy of training a GNN model on the perturbed graph.
We choose the well-known non-linear GCN~\cite{gcn} model\footnote{\ex{As the reader might notice, the analysis in Section~\ref{sec:analysis} does not only apply to this particular family of GNNs since feature propagation and normalization are necessary components of GNNs. Our work studies SGC models theoretically and GCN models empirically.}} to empirically show that our theoretical analysis of a linearized GNN model extends to a non-linear one.
In the next experiments, we have a budget of $k = \lfloor r \times m \rfloor$ edges to add to the graph, where $r$ is the perturbation rate which we set to $0.05$.
\para{Node degree.}
The first experiment aims to compare linking low-degree nodes to linking high-degree nodes.
We group the nodes into $10$ equal-sized subsets based on their degrees.
For each pair of subsets, we try adding $k$ adversarial edges between random pairs of nodes in the two subsets and observe the GCN accuracy.
We obtain the results in Figure~\ref{fig:injection-experiment} (top).
These results support our discussion (Section~\ref{sec:analysis}) and show an increase in accuracy, i.e., a decrease in attack effectiveness, when linking pairs of high-degree nodes.
As a result, we assume that attacks are more effective when they \textit{link pairs of low-degree nodes}.
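A simplified sketch of this perturbation procedure is given below (an illustration under our assumptions; function names and details differ from the actual experiment code):
\begin{verbatim}
import numpy as np
import networkx as nx

def degree_bucket_attack(G, k, i, j, seed=0):
    # Split nodes into 10 equal-sized groups (10-quantiles) by degree and
    # add k random edges between group i and group j
    rng = np.random.default_rng(seed)
    nodes = sorted(G.nodes(), key=lambda v: G.degree(v))
    buckets = np.array_split(np.array(nodes), 10)
    perturbed = G.copy()
    added = 0
    while added < k:
        u, v = rng.choice(buckets[i]), rng.choice(buckets[j])
        if u != v and not perturbed.has_edge(u, v):
            perturbed.add_edge(u, v)
            added += 1
    return perturbed
\end{verbatim}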
\para{Node distance.}
The second experiment aims to compare linking distant pairs of nodes to linking nearby pairs.
We perform this experiment in $10$ trials, with trial $1$ linking nodes with lowest distances and trial $10$ with highest distances.
In each trial, we observe the GCN accuracy after adding $k$ adversarial edges.
In trial $i\in\{1,..,10\}$, for each adversarial edge (to be added), we randomly pick one node $u$ from the graph and attach one end of that edge to $u$.
Then, we group all the nodes in the graph into $10$ equal-sized subsets based on their distance from $u$.
Finally, we link $u$ to a random node in the $i$-th subset.
Figure~\ref{fig:injection-experiment} (bottom) depicts this comparison and shows the accuracy of each trial.
The figure suggests that \textit{linking distant nodes results in more effective attacks} than linking nearby nodes.
\section{Related work}
\para{Information available to attackers.}
Many recent works have introduced attack models for GNNs with different knowledge and capabilities. These models adhere to various restrictions on the practicality of the adversarial attacks and the limitations of the attacker.
However, the majority of these models assume the attacker's knowledge of the targeted GNN model~\cite{xu2019topology,wu2019jaccard} or their access to node attributes~\cite{ma2020practical,zugner2019metattack,zugner2018nettack,sun2019node}, i.e., feature vectors and some labels.
We have referred to such adversarial attacks as \textit{informed attacks}.
A recent survey~\cite{jin2020survey} describes the level of knowledge of (i) the targeted GNN model and (ii) graph data as one characteristic of the attack.
Our work differentiates between these two descriptions and focuses on the knowledge of graph data regardless of the knowledge of the targeted model.
\para{Node centrality.}
Earlier findings in network science on controlling complex networks~\cite{liu2011controllability} show that fewer nodes are needed to control the network, if one aims to control nodes with low degrees.
Another study about the stability of node embedding~\cite{schumacher2020effects} shows that high-centrality nodes have more stable embeddings compared to low-centrality nodes.
In the context of GNNs, Metattack\cite{zugner2019metattack} shows a slight tendency to connect nodes with low degree.
\citet{zhu2019robust} experimentally consider attacks on nodes with degree higher than $10$ for noticeability considerations.
\citet{ma2020practical} introduce practical adversarial attacks by targeting nodes with high importance score, e.g., PageRank, node degree, and betweenness.
The authors argue that nodes with too high importance score, e.g., hubs, are hard to control, hence the attack approach avoids such nodes.
Our work conversely builds theoretical grounds and experimental support to show that attacks are more effective if they focus on low-degree nodes.
\para{Node similarity.}
A study on the behavior of GNNs~\cite{Li2018Deeper} shows that feature and label smoothness are the reason why GNNs work.
Some works on GNN adversarial attacks~\cite{jin2020survey,jin2020prognn} analyze the poisoned graphs of popular attack models and show a tendency of the attackers to add edges between nodes with different labels and low-similarity features.
~\citet{waniek2018hiding} introduce an attack that is explicitly based on disconnecting nodes with the same label and connecting nodes with different labels (Disconnect Internally, Connect Externally - DICE).
More insights on structure in Metattack~\cite{zugner2019metattack} suggest that attacks tend to link pairs of nodes with higher-than-average shortest path length.
Finally, a preprocessing-based defense mechanism for GNNs~\cite{wu2019jaccard} is based on reducing the weight of edges between nodes with a low Jaccard similarity score of their features.
Our work builds on these findings to further investigate structural node similarity and to build an uninformed structure-based adversarial attack strategy.
\section{Introduction}\label{sec1}
Very recently, a new charmed baryon state, named $\Lambda_c(2860)^+$, was found by LHCb in the $D^0p$ channel~\cite{Aaij:2017vbw}. Its mass and decay width were determined as
\begin{equation}
\begin{split}
&m_{\Lambda_c(2860)^+}=2856.1^{+2.0}_{-1.7}\textrm{(stat)}\pm0.5\textrm{(syst)}^{+1.1}_{-5.6}\textrm{(model)~MeV},\\&
\Gamma_{\Lambda_c(2860)^+}=67.6^{+10.1}_{-8.1}\textrm{(stat)}\pm1.4\textrm{(syst)}^{+5.9}_{-20.0}\textrm{(model)~MeV}.\nonumber
\end{split}
\end{equation}
Additionally, experimental analysis indicated that it has the spin-parity quantum number $J^P=3/2^+$. Before the LHCb observation, there was possible evidence for $\Lambda_c(2860)^+$ in the BaBar data~\cite{Aubert:2006sp}. The newly observed $\Lambda_c(2860)^+$ directly confirms the predictions in Refs.~\cite{Chen:2014nyo,Chen:2016iyi,Lu:2016ctt,Chen:2016phw}, where a \emph{D}-wave charmed baryon around 2.85 GeV with $J^P=3/2^+$ was suggested to accompany the observed charmed baryon $\Lambda_c(2880)^+$~\cite{Olive:2016xmw}.
In the past years, more and more charmed baryons have been reported thanks to experimental progress (see the recent review \cite{Chen:2016spr} for more details). Due to joint experimental and theoretical efforts, the \emph{S}-wave and \emph{P}-wave charmed baryon families were established step by step. Obviously, this is not the end of the whole story: the next question is how the \emph{D}-wave charmed baryon family is formed, and the LHCb observation of $\Lambda_c(2860)^+$ is a key point to reveal it.
Firstly, we briefly introduce the experimental information on the other three charmed baryons related to the present work, namely $\Lambda_c(2880)^+$, $\Xi_c(3055)^+$ and $\Xi_c(3080)^+$. $\Lambda_c(2880)^+$ was first observed by the CLEO Collaboration in the $\Lambda_c^+\pi^+\pi^-$ channel~\cite{Artuso:2000xy}, and confirmed by BaBar in the $D^0p$ invariant mass spectrum~\cite{Aubert:2006sp} and by Belle in the $\Sigma_c^{(\ast)}\pi$ channel~\cite{Abe:2006rz}. The available experimental analysis suggests that the $J^P=5/2^+$ assignment to $\Lambda_c(2880)^+$ is favorable. The mass and width of $\Lambda_c(2880)^+$ were measured as
\begin{equation}
\begin{split}
&M_{\Lambda_c(2880)^+}=2881.75\pm0.29\textrm{(stat)}\pm0.07\textrm{(syst)}^{+0.14}_{-0.20}\textrm{(model)},\\&
\Gamma_{\Lambda_c(2880)^+}=5.43^{+0.77}_{-0.71}\textrm{(stat)}\pm0.29\textrm{(syst)}^{+0.75}_{-0.00}\textrm{(model)},\nonumber
\end{split}
\end{equation}
in MeV by LHCb recently~\cite{Aaij:2017vbw}. In addition, the ratio
\begin{equation}
\frac{\mathcal{B}(\Lambda_c(2880)^+\rightarrow\Sigma^\ast_c(2520)\pi)}{\mathcal{B}(\Lambda_c(2880)^+\rightarrow\Sigma_c(2455)\pi)}=0.225\pm0.062\pm0.025, \label{eq1}
\end{equation}
has been given by Belle~\cite{Abe:2006rz}.
$\Xi_c(3055)^+$ and $\Xi_c(3080)^+$ were already found in the $\Lambda_c^+K^-\pi^+$ and $\Sigma^{(\ast)++}_cK^-$ channels by Belle~\cite{Chistov:2006zj} and BaBar~\cite{Aubert:2007dt}. Last year, $\Xi_c(3055)^+$ and $\Xi_c(3080)^+$ were observed for the first time by Belle in the $D^+\Lambda$ channel~\cite{Kato:2016hca}. With a more accurate measurement, the resonance parameters of $\Xi_c(3055)^+$ obtained by Belle~\cite{Kato:2016hca} were
\begin{equation}
\begin{split}
&M_{\Xi_c(3055)^+}=3055.8\pm0.4\textrm{(stat)}\pm0.2\textrm{(syst)~MeV},\\&
\Gamma_{\Xi_c(3055)^+}=7.8\pm1.2\textrm{(stat)}\pm1.5\textrm{(syst)~MeV}.\nonumber
\end{split}
\end{equation}
Meanwhile, the mass and width of $\Xi_c(3080)^+$ were estimated to be $3077.9\pm0.9$ MeV and $3.0\pm0.7\pm0.4$ MeV, respectively. In addition, the following ratios of branching fractions
\begin{equation}
\frac{\mathcal{B}(\Xi_c(3055)^+\rightarrow\Lambda D^+)}{\mathcal{B}(\Xi_c(3055)^+\rightarrow\Sigma_c(2455)^{++}K^-)}=5.09\pm1.01\pm0.76, \label{eq2}
\end{equation}
\begin{equation}
\frac{\mathcal{B}(\Xi_c(3080)^+\rightarrow\Lambda D^+)}{\mathcal{B}(\Xi_c(3080)^+\rightarrow\Sigma_c(2455)^{++}K^-)}=1.29\pm0.30\pm0.15, \label{eq3}
\end{equation}
and
\begin{equation}
\frac{\mathcal{B}(\Xi_c(3080)^+\rightarrow\Sigma_c(2520)^{++}K^-)}{\mathcal{B}(\Xi_c(3080)^+\rightarrow\Sigma_c(2455)^{++}K^-)}=1.07\pm0.27\pm0.01, \label{eq4}
\end{equation}
were also reported~\cite{Kato:2016hca}, where the uncertainties are statistical and systematic.
It is obvious that the information above is crucial for finding the relation among these four states and identifying their properties. In this work, we carry out a mass spectrum analysis combining the observed $\Lambda_c(2860)^+$, $\Lambda_c(2880)^+$, $\Xi_c(3055)^+$, and $\Xi_c(3080)^+$, which may suggest that these four states are 1\emph{D} candidates in the charmed baryon family. To further test this assignment, we study their two-body Okubo-Zweig-Iizuka (OZI) allowed strong decay behavior, where the Eichten-Hill-Quigg (EHQ) decay formula is adopted.
This paper is organized as follows. After the Introduction, we give a mass spectrum analysis of the discussed $\Lambda_c(2860)^+$, $\Lambda_c(2880)^+$, $\Xi_c(3055)^+$, and $\Xi_c(3080)^+$ in Sec. \ref{sec2}. In Sec. \ref{sec3}, the two-body OZI-allowed decays of these four charmed baryons are discussed. The paper ends with a discussion and conclusion in Sec. \ref{sec4}.
\section{The mass spectrum analysis of \emph{D}-wave charmed and charmed-strange baryons}\label{sec2}
\begin{table}[b]
\caption{Comparison of the theoretical results with experimental data for the $\lambda$-mode excited 1\emph{D} $\Lambda_c$ and $\Xi_c$ baryon masses (in MeV).} \label{table1}
\renewcommand\arraystretch{1.2}
\begin{tabular*}{85mm}{@{\extracolsep{\fill}}lcccc}
\toprule[1pt]\toprule[1pt]
Assignments &\multicolumn{2}{c}{$1D(3/2^+)$} & \multicolumn{2}{c}{$1D(5/2^+)$} \\
\cline{2-3}\cline{4-5}
Candidates & $\Lambda_c(2860)^+$ & $\Xi_c(3055)^+$& $\Lambda_c(2880)^+$ & $\Xi_c(3080)^+$ \\
\midrule[0.8pt]
Expt.~\cite{Aaij:2017vbw,Kato:2016hca} & 2856.1 & 3055.8 & 2881.8 & 3077.9 \\
Ref.~\cite{Chen:2014nyo} & 2857 & 3055 & 2879 & 3076 \\
Ref.~\cite{Chen:2016iyi} & 2843 & 3033 & 2851 & 3040 \\
Ref.~\cite{Ebert:2011kk} & 2874 & 3059 & 2880 & 3076 \\
Ref.~\cite{Roberts:2007ni} & 2887 & 3012 & 2887 & 3004 \\
Ref.~\cite{Shah:2016mig} & 2873 & 3080 & 2849 & 3054 \\
\bottomrule[1pt]\bottomrule[1pt]
\end{tabular*}
\end{table}
Many works have focused on the mass spectra of higher excited charmed baryons, where different phenomenological models were adopted which include the relativistic flux tube (RFT) model~\cite{Chen:2014nyo}, the potential models~\cite{Chen:2016iyi,Lu:2016ctt,Ebert:2011kk,Roberts:2007ni,Shah:2016mig}, the QCD sum rule~\cite{Chen:2016phw}, and the Regge phenomenology~\cite{Guo:2008he}. In Table~\ref{table1}, we collect some predicted masses for \emph{D}-wave charmed and charmed-strange baryons. In Refs.~\cite{Chen:2014nyo,Ebert:2011kk,Chen:2016iyi,Roberts:2007ni,Shah:2016mig}, the $\lambda$-mode excitations of charmed baryons were investigated, where
the orbital excitation only exists between the light quark cluster (two light quarks) and the charm quark. As shown in Table \ref{table1}, the mass of the 1\emph{D} charmed baryon with $J^P=3/2^+$ is about $2840\sim 2890$ MeV. Thus, the newly observed $\Lambda_c(2860)^+$ can be well assigned as the \emph{D}-wave $3/2^+$ charmed baryon with a $\lambda$-mode excitation. As expected by theory (see Table \ref{table1}), a $5/2^+$ $\Lambda_c^+$ should accompany $\Lambda_c(2860)^+$ in the nearby mass region.
Therefore, $\Lambda_c(2880)^+$ becomes a good $5/2^+$ candidate for the \emph{D}-wave charmed baryon with a $\lambda$-mode excitation (see the comparison between experimental and theoretical results in Table \ref{table1}). In a word, we may specify that $\Lambda_c(2860)^+$ with $J^P=3/2^+$ and $\Lambda_c(2880)^+$ with $J^P=5/2^+$ form a degenerate \emph{D}-wave doublet $[3/2^+,5/2^+]$ in the heavy quark limit, which is reflected in the small mass difference between $\Lambda_c(2860)^+$ and $\Lambda_c(2880)^+$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=8.4cm,keepaspectratio]{massgaps.eps}
\caption{The experimentally observed $\Lambda_c^+$ and $\Xi_c^+$ states. The \emph{S}-wave ($\Lambda_c(2286)^+$ and $\Xi_c(2470)^+$) and \emph{P}-wave ($\Lambda_c(2595)^+$, $\Lambda_c(2625)^+$, $\Xi_c(2790)^+$, and $\Xi_c(2815)^+$) members have been well established~\cite{Olive:2016xmw}, while the \emph{D}-wave state candidates ($\Lambda_c(2860)^+$, $\Lambda_c(2880)^+$, $\Xi_c(3055)^+$, and $\Xi_c(3080)^+$) are still in dispute. Here, $\delta_{1i}$ ($i=S,\,P,\,D$) denotes the mass gaps between \emph{i}-wave $\Lambda_c$ and $\Xi_c$ states. $\Delta_{1P-1S}$ refers to the mass difference between the \emph{P}-wave and \emph{S}-wave excited states, while $\Delta_{1D-1P}$ denotes the mass gap between the \emph{D}-wave and \emph{P}-wave excited states.}\label{Fig1}
\end{center}
\end{figure}
When checking the masses of $\Lambda_c(2860)^+$, {$\Lambda_c(2880)^+$, $\Xi_c(3055)^+$ and $\Xi_c(3080)^+$}, we find the mass relations
\begin{eqnarray}
&&M_{\Xi_c(3080)^+}-M_{\Lambda_c(2880)^+}\approx M_{\Xi_c(3055)^+}-M_{\Lambda_c(2860)^+},\\
&&M_{\Lambda_c(2880)^+}-M_{\Lambda_c(2860)^+}\approx M_{\Xi_c(3080)^+}-M_{\Xi_c(3055)^+},
\end{eqnarray}
which show that the observed $\Xi_c(3055)^+$ and $\Xi_c(3080)^+$ can be the strange partners of $\Lambda_c(2860)^+$ and $\Lambda_c(2880)^+$, respectively. More specifically, $\Xi_c(3055)^+$ and $\Xi_c(3080)^+$ can be grouped into a doublet $[3/2^+,5/2^+]$ in the \emph{D}-wave $\Xi_c$ family (as shown in Fig. \ref{Fig1}). This assignment of $\Xi_c(3055)^+$ and $\Xi_c(3080)^+$ is also supported by theoretical results, since the 1\emph{D} $\Xi_c$ states with $J^P=3/2^+$ and $J^P=5/2^+$ were predicted with masses of about $3000\sim 3090$ MeV~\cite{Chen:2014nyo,Ebert:2011kk,Chen:2016iyi,Roberts:2007ni,Shah:2016mig} (see Table \ref{table1} for more details).
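Numerically, using the measured central masses collected in Table~\ref{table1}, the two relations read
\begin{eqnarray*}
&&M_{\Xi_c(3080)^+}-M_{\Lambda_c(2880)^+}\approx 3077.9-2881.8=196.1~\mathrm{MeV},\\
&&M_{\Xi_c(3055)^+}-M_{\Lambda_c(2860)^+}\approx 3055.8-2856.1=199.7~\mathrm{MeV},\\
&&M_{\Lambda_c(2880)^+}-M_{\Lambda_c(2860)^+}\approx 2881.8-2856.1=25.7~\mathrm{MeV},\\
&&M_{\Xi_c(3080)^+}-M_{\Xi_c(3055)^+}\approx 3077.9-3055.8=22.1~\mathrm{MeV},
\end{eqnarray*}
i.e., the $\Xi_c$ candidates lie roughly 200 MeV above their $\Lambda_c$ counterparts, and the splittings within the two candidate doublets agree to within a few MeV.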
If we consider the SU(3) flavor symmetry, the comparable mass splittings in the 1\emph{S} and 1\emph{P} charmed and charmed-strange baryons~\cite{Olive:2016xmw} can be explained by the RFT model~\cite{Chen:2014nyo}. Furthermore, the mass splitting (22 MeV) between the $\lambda$-mode excited \emph{D}-wave $3/2^+$ and $5/2^+$ $\Lambda_c$ states predicted in Ref.~\cite{Chen:2014nyo} is in good agreement with the experimental value (26 MeV). In addition, the averaged mass of $\Lambda_c(2860)^+$ and $\Lambda_c(2880)^+$ is about 256 MeV higher than that of the 1\emph{P} states, which is also consistent with the theoretical estimate in Ref. \cite{Chen:2014nyo}.
When further comparing the masses of the well-established \emph{S}-wave and \emph{P}-wave $\Lambda_c/\Xi_c$ states~\cite{Olive:2016xmw}, we also find that the mass gaps between the corresponding $\Lambda_c$ and $\Xi_c$ states are about 190 MeV (see Fig.~\ref{Fig1}). This regularity in the mass difference between the $\Lambda_c$ and $\Xi_c$ families has also been explained by the RFT model~\cite{Chen:2014nyo}, which predicts the strange partners of $\Lambda_c(2860)^+$ and $\Lambda_c(2880)^+$ around 3060 MeV. This estimate provides additional support for interpreting the observed $\Xi_c(3055)^+$~\cite{Aubert:2007dt,Kato:2016hca} and $\Xi_c(3080)^+$~\cite{Chistov:2006zj,Aubert:2007dt,Kato:2016hca} as the strange partners of $\Lambda_c(2860)^+$ and $\Lambda_c(2880)^+$, respectively.
From the above mass spectrum analysis, we may conclude that the four observed states $\Lambda_c(2860)^+$, $\Lambda_c(2880)^+$, $\Xi_c(3055)^+$, and $\Xi_c(3080)^+$ can be categorized as $\lambda$-mode excitations of \emph{D}-wave charmed and charmed-strange baryons. To test this assignment, in the following we perform a detailed study of their two-body OZI-allowed strong decays, from which their partial and total decay widths can be obtained.
\section{The decay behavior of these discussed states}\label{sec3}
In this section, we will employ the EHQ decay formula which was proposed by Eichten, Hill, and Quigg (EHQ)~\cite{Eichten:1993ub} to calculate the two-body OZI-allowed decays of $\Lambda_c(2860)^+$, $\Lambda_c(2880)^+$, $\Xi_c(3055)^+$ and $\Xi_c(3080)^+$. The general expression of the EHQ decay formula reads
\begin{equation}\label{eq7}
\Gamma^{A\rightarrow BC}_{j_C,\ell} = \xi\,\left|\mathcal
{C}^{s_Q,j_B,J_B}_{j_C,j_A,J_A}\mathcal
{M}^{j_A,j_B}_{j_C,\ell}(q)\right|^2 \,q^{2\ell+1}\,
e^{-q^2/\tilde{\beta}^2}.
\end{equation}
Here, the flavor factors, $\xi$, have been given in Ref.~\cite{Chen:2016iyi}. $q=|\vec q|$ denotes the three-momentum of the final states in the rest frame of the initial state. $A$ and $B$ represent the initial and final heavy-light hadrons, respectively. The total angular momenta of $A$ and $B$ and the angular momenta of their light degrees of freedom are denoted as $J_i$ and $j_i$ ($i=A,B$), respectively. Here
$C$ denotes the light flavor hadron. The explicit expression of $\tilde{\beta}$ has been given in our previous work~\cite{Chen:2016iyi} (see Appendix A in Ref. \cite{Chen:2016iyi} for more details). In addition, the normalized coefficient $\mathcal {C}^{s_Q,j_B,J_B}_{j_C,j_A,J_A}$ is given by the following equation
\begin{eqnarray}\label{eq8}
\begin{split}
\mathcal {C}^{s_Q,j_B,J_B}_{j_C,j_A,J_A}=(-1)^{J_A+j_B+j_C+s_Q}~&\sqrt{(2j_A+1)(2J_B+1)}\\ &\times\left\{
\begin{array}{ccc}
s_Q & j_B & J_B\\
j_C & J_A & j_A\\
\end{array}
\right\},
\end{split}
\end{eqnarray}
where $\vec{j}_C \equiv \vec{s}_C + \vec{\ell}$. The symbols $s_C$ and $\ell$ represent the spin of the light hadron $C$ and the orbital angular momentum relative to $B$, respectively. The spin of the heavy quark $Q$ is denoted as $s_Q$. The coefficient given by Eq.~(\ref{eq8}) explicitly incorporates the heavy quark symmetry, which is believed to be important for the strong decays of heavy-light hadrons~\cite{Isgur:1991wq}.
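For illustration, the coefficient in Eq.~(\ref{eq8}) can be evaluated directly from a Wigner $6j$ symbol. The following minimal Python sketch (using the \textsf{sympy} library; the function name and the example quantum numbers are ours and serve only as an illustration, not as a reproduction of Table~\ref{table2}) shows how such a coefficient may be computed.
\begin{verbatim}
# Sketch: evaluate the normalized coefficient of Eq. (8),
#   C = (-1)^(J_A + j_B + j_C + s_Q) * sqrt((2 j_A + 1)(2 J_B + 1))
#         * {s_Q j_B J_B; j_C J_A j_A},
# using sympy's exact Wigner 6j symbol.
from sympy import Rational, sqrt, simplify
from sympy.physics.wigner import wigner_6j

def ehq_coefficient(s_Q, j_B, J_B, j_C, j_A, J_A):
    # The exponent is an integer for all channels considered here.
    phase = (-1) ** int(J_A + j_B + j_C + s_Q)
    sixj = wigner_6j(s_Q, j_B, J_B, j_C, J_A, j_A)
    return simplify(phase * sqrt((2 * j_A + 1) * (2 * J_B + 1)) * sixj)

half = Rational(1, 2)
# Illustrative (assumed) quantum numbers: a lambda-mode 1D state with
# j_A = 2, J_A = 3/2 decaying into a 1S baryon with j_B = 1, J_B = 1/2
# plus a pseudoscalar meson emitted in a P wave (s_C = 0, l = 1, j_C = 1).
print(ehq_coefficient(s_Q=half, j_B=1, J_B=half, j_C=1, j_A=2, J_A=3*half))
\end{verbatim}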
The transition factors $\mathcal {M}^{j_A,j_B}_{j_C,\ell}(q)$, which are related to the non-perturbative dynamics, can be calculated by various phenomenological models. In this work, the transition factors are calculated with the $^3P_0$ model~\cite{Micu:1968mk,LeYaouanc:1972vsx,LeYaouanc:1988fx}. In the following, we briefly introduce the procedure. First, the mock states of the initial and final hadrons are constructed, where simple harmonic oscillator (SHO) wave functions are used to describe the spatial wave functions of the hadrons involved in these decays. Then, the helicity amplitude $\mathcal{M}^{j_A,j_B,j_C}(q)$ is obtained from the transition operator $\hat{\mathcal{T}}$ of the $^3P_0$ model. With the help of the Jacob-Wick formula, the partial wave amplitudes $\mathcal{M}_{LS}(q)$ are related to
the obtained helicity amplitude $\mathcal{M}^{j_A,j_B,j_C}(q)$. Finally, by performing a unitary rotation between the $LS$ coupling and $jj$ coupling schemes, the transition factors $\mathcal {M}^{j_A,j_B}_{j_C,\ell}(q)$ can be extracted directly. More details of this approach can be found in Ref.~\cite{Chen:2016iyi}. For the different decay modes of the \emph{D}-wave charmed baryons, the explicit expressions for $\mathcal {C}^{s_Q,j_B,J_B}_{j_C,j_A,J_A}$ and $\mathcal {M}^{j_A,j_B}_{j_C,\ell}(q)$ are listed in Table \ref{table2}. With the above preparation, the partial and total widths of the \emph{D}-wave charmed baryons can be calculated.
\begin{table}[t]
\caption{The transition factors and the normalized coefficients appearing in the EHQ formula for the decays of \emph{D}-wave $\Lambda_c$ and $\Xi_c$ states. $\mathfrak{B}^\prime_c(1S)$ and $\mathfrak{B}^\ast_c(1S)$ denote the $1/2^+$ and $3/2^+$ $\Sigma_c/\Xi^\prime_c$ states, respectively. $\mathcal {P}$ denotes the light pseudoscalar mesons, while ``LB'' denotes the light baryons in the final states. $\mathfrak{B}^{\prime0,1/2}_c(1P)$ is an abbreviation for $\Sigma_c(2700)$ and $\Xi^\prime_c(2840)$.} \label{table2}
\renewcommand\arraystretch{1.5}
\begin{tabular*}{85mm}{@{\extracolsep{\fill}}ccrcc}
\toprule[1pt]\toprule[1pt]
$~J^P$ & $\mathfrak{B}^\prime_c(1S)$~+~$\mathcal {P}$ & $\mathfrak{B}^\ast_c(1S)$~+~$\mathcal {P}$ & $\mathfrak{B}^{\prime0,1/2}_c(1P)$~+~$\mathcal {P}$ & \emph{D}~+~``LB'' \\
\midrule[0.8pt]
$~\frac{3}{2}^+$ & $\sqrt{\frac{5}{6}}\mathcal {M}^{2,1}_{1,1}(q)$ & $\sqrt{\frac{1}{6}}\mathcal {M}^{2,1}_{1,1}(q)$ & $\mathcal {M}^{2,0}_{2,2}(q)$ & $\sqrt{\frac{5}{8}}\mathcal {M}^{2,1/2}_{3/2,1}(q)$ \\
& & $\mathcal {M}^{2,1}_{3,3}(q)$ & & \\
$~\frac{5}{2}^+$ & & $\mathcal {M}^{2,1}_{1,1}(q)$ & $\mathcal {M}^{2,0}_{2,2}(q)$ & \\
& $-\frac{\sqrt{5}}{3}\mathcal {M}^{2,1}_{3,3}(q)$ & $\frac{2}{3}\mathcal {M}^{2,1}_{3,3}(q)$ & & $-\sqrt{\frac{5}{12}}\mathcal {M}^{2,1/2}_{5/2,3}(q)$ \\
\bottomrule[1pt]\bottomrule[1pt]
\end{tabular*}
\end{table}
\subsection{$\Lambda_c(2860)^+$ and $\Lambda_c(2880)^+$}
\begin{table}[b]
\caption{The partial and total decay widths in MeV, and branching fractions in \%, of $\lambda$-mode excited \emph{D}-wave $\Lambda_c$ states.} \label{table3}
\renewcommand\arraystretch{1.2}
\begin{tabular*}{85mm}{@{\extracolsep{\fill}}lcccc}
\toprule[1pt]\toprule[1pt]
Decay &\multicolumn{2}{c}{$\Lambda_c(2860)^+~[1D(3/2^+)]$} & \multicolumn{2}{c}{$\Lambda_c(2880)^+~[1D(5/2^+)]$} \\
\cline{2-3}\cline{4-5}
modes & $\Gamma_i$ & $\mathcal{B}_i$ & $\Gamma_i$ & $\mathcal{B}_i$ \\
\midrule[0.8pt]
$\Sigma_c(2455)\pi$ & 2.2 & 3.0\% & 0.4 & 1.7\% \\
$\Sigma^\ast_c(2520)\pi$ & 1.0 & 1.4\% & 3.7 & 15.4\% \\
$\Sigma_c(2700)\pi$ & 0.0 & 0.0\% & 0.2 & 0.8\% \\
$D^0p$ & 34.5 & 48.0\% & 10.8 & 44.8\% \\
$D^+n$ & 34.2 & 47.6\% & 9.0 & 37.3\% \\
\midrule[0.8pt]
Theory & 71.9 & 100\% & 24.1 & 100\% \\
Expt.~\cite{Aaij:2017vbw} & \multicolumn{2}{l}{$67.6^{+10.1}_{-8.1}\pm1.4^{+5.9}_{-20.0}$} & \multicolumn{2}{l}{$5.43^{+0.77}_{-0.71}\pm0.29^{+0.75}_{-0.00}$} \\
\bottomrule[1pt]\bottomrule[1pt]
\end{tabular*}
\end{table}
Treating $\Lambda_c(2860)^+$ as a $\lambda$-mode excited \emph{D}-wave state with $J^P=3/2^+$, the obtained partial and total decay widths of $\Lambda_c(2860)^+$ are presented in Table~\ref{table3}. The result shows that the $D^0p$ and $D^+n$ channels are the main decay modes of $\Lambda_c(2860)^+$, which can explain why $\Lambda_c(2860)^+$ was first observed in the $D^0p$ channel. We notice that there is no evidence of $\Lambda_c(2860)^+$ in the $\Sigma_c(2455)\pi$ and $\Sigma^\ast_c(2520)\pi$ channels, in which $\Lambda_c(2880)^+$ has been detected by Belle~\cite{Abe:2006rz}. Our result reflects this fact, since the partial widths for $\Lambda_c(2860)^+$ decaying into $\Sigma_c(2455)\pi$ and $\Sigma^\ast_c(2520)\pi$ are quite small. Additionally, the calculated total decay width of $\Lambda_c(2860)^+$ is 71.9 MeV, which is in good agreement with the LHCb data~\cite{Aaij:2017vbw}. Therefore, we may conclude that {\it $\Lambda_c(2860)^+$ can be well established as the $\lambda$-mode excited D-wave state with $J^P=3/2^+$}.
In the following, we continue with $\Lambda_c(2880)^+$. This state has been detected in the $D^0p$ channel by BaBar~\cite{Aubert:2006sp} and LHCb~\cite{Aaij:2017vbw}. These measurements indicate that $D^0p$ is a primary decay channel of $\Lambda_c(2880)^+$. If $\Lambda_c(2880)^+$ is the $\lambda$-mode excited \emph{D}-wave state with $J^P=5/2^+$, the branching ratio of its $D^0p$ channel can reach 44.8\%, which can explain why $\Lambda_c(2880)^+$ was observed in the $D^0p$ channel. The $D^+n$ channel is also one of its main decay modes. Besides the dominant $D^0 p$ and $D^+ n$ channels, we notice that the $\Sigma^\ast_c(2520)\pi$ channel makes a considerable contribution to its total width.
However, we find some difficulties in assigning $\Lambda_c(2880)^+$ as a \emph{D}-wave state with $J^P=5/2^+$. Firstly, the calculated total decay width is 4.4 times larger than the experimental width of $\Lambda_c(2880)^+$. Secondly, our calculation gives a sizable $\Sigma^\ast_c(2520)\pi$ partial width, whereas the measurements by CLEO~\cite{Artuso:2000xy} and Belle~\cite{Abe:2006rz} indicate that $\Sigma^\ast_c(2520)\pi$ is a less important decay mode of $\Lambda_c(2880)^+$. Thirdly, we cannot reproduce the ratio listed in Eq. (\ref{eq1}). Thus, if we categorize $\Lambda_c(2880)^+$ as the $\lambda$-mode excited \emph{D}-wave state with $J^P=5/2^+$, we have to face these three contradictions and provide a reasonable resolution. We discuss this point in Sec. \ref{sec4}.
\subsection{$\Xi_c(3055)^+$ and $\Xi_c(3080)^+$}
We study the decay properties of $\Xi_c(3055)^+$ as the $\lambda$-mode excited \emph{D}-wave charmed-strange baryon with $J^P=3/2^+$. The results presented in Table \ref{table4} show that $D^+\Lambda$ and $\Sigma_c(2455)K$ are its dominant decay modes, which naturally explains why $\Xi_c(3055)^+$ was first found in the $\Sigma_c(2455)^{++}K^-$ channel by BaBar~\cite{Aubert:2007dt} and confirmed by Belle in the $\Sigma_c(2455)^{++}K^-$ and $D^+\Lambda$ channels~\cite{Kato:2016hca}.
In addition, the theoretical value of the $\Gamma(D^+\Lambda)/\Gamma(\Sigma_c(2455)K)$ ratio is about 2.3, which is comparable with the experimental data (see Eq. (\ref{eq2})) when the experimental error is taken into account.
The obtained total width of $\Xi_c(3055)^+$ as a \emph{D}-wave charmed-strange baryon with $J^P=3/2^+$ is about 15.4 MeV, which is comparable with the upper limit of Belle's result ($5.1\sim10.5$ MeV). Based on the mass spectrum analysis and the study of its decay behavior, $\Xi_c(3055)^+$ can be regarded as the $3/2^+$ \emph{D}-wave state. In other words, $\Xi_c(3055)^+$ may be the strange partner of $\Lambda_c(2860)^+$. This assignment of $\Xi_c(3055)^+$ is also supported by an analysis in the chiral quark model~\cite{Liu:2012sj}.
Under this assignment, we give a definite prediction for the spin-parity quantum number of $\Xi_c(3055)^+$, $i.e.$, $\Xi_c(3055)^+$ has $J^P=3/2^+$. This prediction can be tested directly by further experimental studies.
\begin{table}[htbp]
\caption{The partial and total decay widths in MeV, and branching fractions in \%, of the $\lambda$-mode excited \emph{D}-wave $\Xi_c$ states.} \label{table4}
\renewcommand\arraystretch{1.2}
\begin{tabular*}{85mm}{@{\extracolsep{\fill}}lcccc}
\toprule[1pt]\toprule[1pt]
Decay &\multicolumn{2}{c}{$\Xi_c(3055)~[1D(3/2^+)]$} & \multicolumn{2}{c}{$\Xi_c(3080)~[1D(5/2^+)]$} \\
\cline{2-3}\cline{4-5}
modes & $\Gamma_i$ & $\mathcal{B}_i$ & $\Gamma_i$ & $\mathcal{B}_i$ \\
\midrule[0.8pt]
$\Sigma_c(2455)K$ & 4.2 & 27.3\% & 0.6 & 4.8\% \\
$\Xi^\prime_c(2580)\pi$ & 0.2 & 1.3\% & 0.1 & 0.8\% \\
$\Sigma^\ast_c(2520)K$ & 0.7 & 4.5\% & 5.4 & 42.8\% \\
$\Xi^\ast_c(2645)\pi$ & 0.3 & 2.0\% & 0.5 & 4.0\% \\
$\Xi^\prime_c(2840)\pi$ & 0.4 & 2.6\% & 0.7 & 5.5\% \\
$D^+\Lambda$ & 9.6 & 62.3\% & 5.3 & 42.1\% \\
\midrule[0.8pt]
Theory & 15.4 & 100\% & 12.6 & 100\% \\
Expt.~\cite{Kato:2016hca} & \multicolumn{2}{l}{$7.8\pm1.2\pm1.5$} & \multicolumn{2}{l}{$3.0\pm0.7\pm0.4$} \\
\bottomrule[1pt]\bottomrule[1pt]
\end{tabular*}
\end{table}
As a $\lambda$-mode excited \emph{D}-wave charmed-strange baryon with $J^P=5/2^+$, $\Xi_c(3080)^+$ has two main decay modes, $\Sigma^\ast_c(2520)K$ and $D^+\Lambda$. We notice that $\Xi_c(3080)^+$ was observed in the $\Lambda_c^+K^-\pi^+$~\cite{Chistov:2006zj}, $\Sigma^{(\ast)++}_cK^-$~\cite{Aubert:2007dt,Kato:2016hca}, and $D^+\Lambda$~\cite{Kato:2016hca} channels. Although our theoretical result can reasonably explain why $\Xi_c(3080)^+$ was found in the $\Sigma^{\ast++}_cK^-$~\cite{Aubert:2007dt,Kato:2016hca} and $D^+\Lambda$~\cite{Kato:2016hca} channels, we cannot reproduce the ratios listed in Eqs.~(\ref{eq2})-(\ref{eq3}), since our calculation shows that the partial decay width of the $\Sigma_c(2455)K$ channel is only about 0.6 MeV, far smaller than that of $\Sigma^\ast_c(2520)K$ (see Table~\ref{table4}). Similar to the situation of $\Lambda_c(2880)^+$, the obtained total decay width of $\Xi_c(3080)^+$ is about 4 times larger than the experimental measurement~\cite{Kato:2016hca}. Thus, we again face the contradictions above if we assign $\Xi_c(3080)^+$ as a $\lambda$-mode excited \emph{D}-wave charmed-strange baryon with $J^P=5/2^+$.
Before closing this section, we discuss the theoretical uncertainties of the results presented in Tables \ref{table3} and \ref{table4}.
The main uncertainties may come from the approximations used to obtain the EHQ decay formula, from adopting simple harmonic oscillator wave functions in place of the full hadron wave functions, and from ignoring relativistic effects.
Additionally, we adopt measured widths such as $\Sigma_c(2520)^{++}\rightarrow\Lambda_c(2286)^+\pi^+$~\cite{Chen:2016iyi}
to fix the value of the parameter $\gamma$, and take the experimental masses of the initial and final states as input, which provide extra sources of uncertainty in the obtained results.
\section{Discussion and conclusion}\label{sec4}
Stimulated by the newly reported $\Lambda_c(2860)^+$ \cite{Aaij:2017vbw}, we have carried out a comprehensive study of $\Lambda_c(2860)^+$ together with the three observed resonances $\Lambda_c(2880)^+$, $\Xi_c(3055)^+$, and $\Xi_c(3080)^+$. The mass spectrum analysis shows that
$\Lambda_c(2860)^+$, $\Lambda_c(2880)^+$, $\Xi_c(3055)^+$, and $\Xi_c(3080)^+$ can be grouped into the $\lambda$-mode excited \emph{D}-wave charmed and charmed-strange baryon families. $\Lambda_c(2860)^+$ with $\Lambda_c(2880)^+$, and $\Xi_c(3055)^+$ with $\Xi_c(3080)^+$, may form degenerate \emph{D}-wave doublets of charmed and charmed-strange baryons, respectively, which also indicates that $\Xi_c(3055)^+$ and $\Xi_c(3080)^+$ are the strange partners of $\Lambda_c(2860)^+$ and $\Lambda_c(2880)^+$, respectively.
To further test this assignment, we have carried out a study of their two-body OZI-allowed strong decays, which provides valuable information on their partial and total decay widths.
For $\Lambda_c(2860)^+$, the assignment as a $\lambda$-mode excited \emph{D}-wave charmed baryon with $J^P=3/2^+$ is suggested, since the obtained theoretical partial and total decay widths reproduce the experimental data well. Moreover, $\Xi_c(3055)^+$ as the strange partner of $\Lambda_c(2860)^+$ is also supported by the study of its decay behavior. In fact, the present study gives a strong constraint on the spin-parity quantum number of $\Xi_c(3055)^+$, namely $J^P=3/2^+$. This prediction provides crucial information for further tests of the \emph{D}-wave assignment of $\Xi_c(3055)^+$ in future experiments.
Although the mass spectrum analysis strongly suggests that $\Lambda_c(2880)^+$ and $\Xi_c(3080)^+$ can be explained as the $\lambda$-mode excited \emph{D}-wave charmed and charmed-strange baryons with $J^P=5/2^+$, respectively, there exist some difficulties when we further study their strong decay properties: the total decay widths of $\Lambda_c(2880)^+$ and $\Xi_c(3080)^+$ obtained in our model are far larger than the experimental widths, and some experimental ratios of partial decay widths cannot be reproduced in the present scenario.
If $\Lambda_c(2880)^+$ and $\Xi_c(3080)^+$ cannot be identified as $\lambda$-mode excited \emph{D}-wave charmed and charmed-strange baryons, respectively, we need to search for other possible assignments for them. For example, $\Lambda_c(2880)^+$ and $\Xi_c(3080)^+$ as $\rho$-mode excited states, or as states with simultaneous $\rho$-mode and $\lambda$-mode excitations, in the \emph{D}-wave charmed and charmed-strange baryon families should be tested by a mass spectrum analysis and a study of their decay behavior. Besides, the mixing of different \emph{D}-wave states with $J^P=5/2^+$ should be considered in future work.
In addition, we notice the $D^*N$ and $DN$ molecular state assignments for the observed $\Lambda_c(2940)^+$ and $\Sigma_c(2800)$ in the literature \cite{He:2006is,Dong:2010gu}. For $\Lambda_c(2880)^+$ and $\Xi_c(3080)^+$, however, hadronic molecular state assignments are not suitable, since molecular states are usually predicted to have masses a few MeV below meson-baryon thresholds with the corresponding quantum numbers, whereas there are no such thresholds in the vicinity of the $5/2^+$ states $\Lambda_c(2880)^+$ and $\Xi_c(3080)^+$.
Considering the present situation of $\Lambda_c(2880)^+$ and $\Xi_c(3080)^+$, we suggest further experimental study of the resonance parameters of $\Lambda_c(2880)^+$ and $\Xi_c(3080)^+$ and the ratios of partial widths. Of course, more theoretical studies of $\Lambda_c(2880)^+$ and $\Xi_c(3080)^+$ by different phenomenological models are encouraged.
In summary, in this work we have examined the possibility that the observed $\Lambda_c(2860)^+$, $\Lambda_c(2880)^+$, $\Xi_c(3055)^+$, and $\Xi_c(3080)^+$ are \emph{D}-wave charmed and charmed-strange baryons. We have reason to believe that the \emph{D}-wave charmed and charmed-strange baryon families will become more and more abundant with experimental progress and theoretical effort. Eventually, the \emph{D}-wave $\Lambda_c$ and $\Xi_c$ states will be established, which is an interesting research issue for the coming years.
\section*{Acknowledgement}
This project is supported by the National Natural Science Foundation of China under Grant Nos. 11305003, 11222547, 11175073, 11447604, 11647301 and 11475111. Xiang Liu is also supported by the National Program for Support of Top-notch Young Professionals and the Fundamental Research Funds for the Central Universities.
\section{Introduction} \label{introduction}
Many applications in the social sciences, economics, biostatistics, and medicine argue for ``as-if'' random assignment of units to treatment regimes. Examples include natural experiments, regression discontinuity designs, matching designs, and RCTs with attrition. To support a claim of ``as-if'' random assignment, researchers typically demonstrate that the observed covariates are balanced between treatment and control units; in particular, it is required to show that pre-treatment characteristics cannot predict treatment status.
This paper develops a nonparametric test that formalizes the question of whether the covariates can predict treatment status. The test makes use of classification methods and permutation inference, and we name it the Classification Permutation Test (CPT). The CPT trains a classifier (e.g., logistic regression, random forests) to distinguish treated units from control units. Then, using permutation inference, the CPT tests whether the classifier is in fact able to distinguish treated units from control units more accurately than would be expected by chance.
The CPT may be viewed as a test for equality of multivariate distributions, as it tests whether the joint distribution of the covariates is the same in both the treatment and control groups. Several other nonparametric tests for equality of multivariate distributions have been proposed in the past. \cite{rosenbaum2005} developed the Cross-Match test which compares two multivariate distributions using a matching algorithm. First, the observations are matched into pairs, using a distance metric computed from the covariates (treatment status is ignored). The Cross-Match test statistic is then the number of matched pairs containing one observation from the treatment group and one from the control group; high values of the test statistic imply covariate balance, and for low values the null hypothesis of random assignment is rejected. Applications and extensions of the Cross-Match test are described in \cite{heller2010} and \cite{heller2010b}. \cite{szekely2009, szekely2009b} developed the energy test, another nonparametric test for equality of multivariate distributions. \cite{aronow2012} suggested using the energy test to test for covariate imbalance between groups. \cite{cattaneo2014} proposed a permutation based method for optimal window selection in a regression discontinuity design based on covariate balance on both sides of the cut-point. The method uses only information about the marginal distributions of the covariates, and therefore may not detect imbalances in the joint distribution. Still other methods include \citet{heller2013} and \citet{taskinen2005}.
This paper contributes to the existing literature in four ways:
(1) We show that the CPT is a useful tool in practice. Using both simulated and real data, we find that the CPT is often able to detect covariate imbalance where existing nonparametric methods do not.
(2) The paper illustrates how ``black box'' algorithms from the machine learning literature such as random forests can be used for rigorous inference in the social sciences, without actually relying on any strong modeling assumptions. Classification methods and permutation inference have been previously combined in the computational biology literature \citep{ojala2010}.
(3) We apply the CPT to make a substantive contribution to the political economy and criminal justice literatures. We revisit \cite{eggers2009} and shed new light on the validity of their regression discontinuity design, and provide new evidence in support of the ``judges design'' identification strategy used by \cite{green2010}.
(4) The CPT has a clear and intuitive interpretation. The test statistic is a direct measure of the ability of the covariates to predict treatment assignment. Moreover, the CPT relates equality of multivariate distributions to the propensity score \citep{rosenbaum1983}. Rejection of the null hypothesis implies the covariates are predictive of treatment assignment, or in other words that the distribution of the propensity score is different across the treatment and control groups.
The paper is organized as follows. Section \ref{method} provides a brief overview of the method. Section \ref{simulations} examines the performance of the CPT on simulated data, and Section \ref{applications} looks at real-life data examples. Section \ref{technical} provides further theoretical discussion, including a proof that the CPT is consistent under weak assumptions on the chosen classifier.
\section{Overview of the Method} \label{method}
This section gives an informal description of the CPT; a more detailed treatment is given in Section \ref{technical}.
Suppose there are $n$ units, indexed by $i$. For each unit there is a vector of observed covariates $Z_i \in \mathbb{R}^p$ and a treatment assignment indicator $T_i \in \{0,1\}$. (Presumably there is an outcome variable as well, but it is irrelevant for our purposes.) We model the $(Z_i, T_i)$ pairs as being IID from some unknown distribution. Let $T$ be the $n\times 1$ vector whose $i^{\mathrm{th}}$ entry is $T_i$ and let $Z$ be the $n\times p$ matrix whose $i^{\mathrm{th}}$ row is $Z_i$.
We wish to test whether
\begin{equation}
T \rotatebox[origin=c]{90}{$\models$} Z \label{tindz}
\end{equation}
that is, whether treatment assignment is independent of the observed covariates. This is our notion of ``random treatment assignment.''
The CPT proceeds as follows. First, we train a classifier to predict $T$ from $Z$. The classifier can be anything --- logistic regression, a random forest, K-nearest neighbors, etc. We only require that the classifier provide us with a $n\times 1$ vector $\hat{T}$ of ``predicted'' treatment assignments, where $\hat{T}_i \in \{0,1\}$. We then define the \textit{in-sample classification accuracy rate} $S$ as
\begin{equation}
S \equiv \frac{1}{n}\sum_{i=1}^nI\{\hat{T}_i = T_i\}
\end{equation}
where $I\{\hat{T}_i = T_i\}$ is the indicator function for whether $\hat{T}_i = T_i$. We use $S$ as our test statistic; intuitively, $S$ should be high only if $Z$ is predictive of $T$, implying that $Z$ and $T$ are not independent.
To determine statistical significance, we use permutation inference. We randomly permute the rows of $T$ (but not $Z$) $B$ times. Each time we retrain the classifier and recalculate the classification accuracy rate, which we denote $S^{\star}_b$, where $1 \le b \le B$. We then calculate our $P$-value as
\begin{equation}
\frac{1}{B}\sum_{b=1}^BI\{S^{\star}_b \ge S\}
\end{equation}
where $I\{S^{\star}_b \ge S\}$ is the indicator function for whether $S^{\star}_b \ge S$.
A few comments: (1) Because we use permutation inference, the CPT's $P$-value is valid, even in finite samples, no matter what classifier we use.\footnote{Strictly speaking, this is true only as $B \to \infty$; for finite $B$, the distribution of $\{S^\star_b\}$ is only an approximation to the true permutation null distribution, and thus our $P$-value is only an approximation to the true permutation test $P$-value.} (2) In particular, the CPT's $P$-value is valid despite the fact that we use the in-sample classification accuracy rate. Overfitting may occur, causing $S$ to be quite high, perhaps misleadingly so. However, overfitting would cause the $S^\star_b$ to be high as well; thus, any overfitting problem is also manifested in the null distribution, and thereby effectively accounted for. (3) The choice of classifier does affect the power of the test; the CPT will only have power if the classifier is able to distinguish the distribution of the covariates in the treatment group from the distribution of the covariates in the control group. In this paper we focus on logistic regression (with all pairwise interaction terms included in the design matrix) and also random forests. We select these classifiers because they are able to detect differences in the joint distribution of the covariates, as opposed to merely differences in the marginal distributions.
In addition to the CPT as it is described above, we also consider some variants. In one variant we replace the in-sample classification accuracy rate by an out-of-sample accuracy rate estimated by cross-validation. This makes the CPT very computationally demanding, but gives it nice theoretical properties; see Section \ref{technical}. In Section \ref{rouse} we consider a scenario in which the experimental units are blocked. We implement a variant of the CPT in which we permute treatment assignment only within blocks.
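To make the procedure concrete, a minimal sketch of the basic (in-sample) CPT is given below in Python; we assume the \texttt{numpy} and \texttt{scikit-learn} libraries, the function names are ours, and scikit-learn's logistic regression applies mild regularization by default, so this is an illustration of the algorithm rather than a reproduction of the exact implementation used in this paper.
\begin{verbatim}
# Sketch of the basic CPT, using logistic regression with all
# pairwise interaction terms (the "logistic2" variant) and the
# in-sample classification accuracy rate as the test statistic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import PolynomialFeatures

def accuracy(Z, T):
    """In-sample accuracy of a classifier trained on (Z, T)."""
    design = PolynomialFeatures(degree=2, interaction_only=True,
                                include_bias=False).fit_transform(Z)
    clf = LogisticRegression(max_iter=5000).fit(design, T)
    return np.mean(clf.predict(design) == T)

def cpt(Z, T, B=500, seed=0):
    """Return the observed statistic S and the permutation P-value."""
    rng = np.random.default_rng(seed)
    S = accuracy(Z, T)
    S_star = np.array([accuracy(Z, rng.permutation(T)) for _ in range(B)])
    return S, np.mean(S_star >= S)
\end{verbatim}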
\section{Simulations} \label{simulations}
We use Monte Carlo simulations to study the power of the CPT, the Cross-Match test \citep{rosenbaum2005}, and the energy test \citep{szekely2009, szekely2009b}. In each simulation we generate $n = 200$ observations; 100 are in treatment and 100 in control. For each observation $i$ we generate a vector $Z_i$ of $p = 3$ covariates. In the treatment group, the covariates are drawn from a multivariate normal distribution with mean 0 and variance $\Sigma_{\rho}$, where
\begin{equation}
\Sigma_{\rho} \equiv
\left(\begin{array}{ccc}
1 & \rho & \rho \\
\rho & 1 & \rho \\
\rho & \rho & 1 \\
\end{array} \right).
\end{equation}
In the control group, the covariates are also drawn from a multivariate normal distribution with mean 0, but with variance $\Sigma_0 = I_{3\times 3}$. In other words, the only difference in the distribution of the covariates between the treatment and control groups is the correlation. In particular, the marginal distributions of the covariates are identical between the treatment and control groups. Differences between treatment and control units cannot be detected using a balance table or a main effects regression.
We vary the value of $\rho$ from 0 to 0.75 in increments of 0.05. For each value of $\rho$ we generate 1,000 datasets as described above, and then run the CPT, Cross-Match test, and energy test on each dataset. From this, we are able to approximate the power (at significance levels $\alpha = 0.05$ and $\alpha = 0.01$) of each test as a function of $\rho$. Results are shown in Figure \ref{fig: power-test-main}. In addition, a receiver operating characteristic (ROC) plot\footnote{See \cite{fawcett2006} for a description of ROC curves.} for $\rho = 0.5$ is shown in Figure \ref{fig: roc-test}.
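For concreteness, a single simulated dataset at $\rho = 0.5$ can be generated and tested as in the sketch below (reusing the \texttt{cpt} function sketched in Section \ref{method}; the variable names are ours).
\begin{verbatim}
# Sketch: one simulated draw at rho = 0.5 (100 treated, 100 controls,
# p = 3 covariates), analyzed with the cpt() function sketched earlier.
import numpy as np

rng = np.random.default_rng(1)
rho, n_group, p = 0.5, 100, 3

Sigma_rho = np.full((p, p), rho)
np.fill_diagonal(Sigma_rho, 1.0)
X = rng.multivariate_normal(np.zeros(p), Sigma_rho, size=n_group)  # treated
Y = rng.multivariate_normal(np.zeros(p), np.eye(p), size=n_group)  # control

Z = np.vstack([X, Y])
T = np.concatenate([np.ones(n_group), np.zeros(n_group)])
S, p_value = cpt(Z, T, B=500)
\end{verbatim}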
Figure \ref{fig: power-test-main} shows the CPT has higher power for every level of $\rho$. Figure \ref{fig: roc-test} shows that the CPT has a higher true positive rejection rate for every level of false rejection rate. Together, Figures \ref{fig: power-test-main} and \ref{fig: roc-test} suggest the CPT typically outperforms the Cross-Match test and the Energy test with respect to power in this simulation.
\begin{figure}[ht]
\centering
\caption{Power of the Cross-Match test, the energy test, and the CPT on simulated data.}
\label{fig: power-test-main}
\includegraphics[scale=0.38]{power05.pdf}
\includegraphics[scale=0.38]{power01.pdf}
\begin{minipage}{15cm}
\emph{Notes: Results for three variants of the CPT are shown; one variant (``logistic2'') uses a logistic regression classifier with all two-way interactions included in the model, and another (``forest'') uses random forests. A third (``logistic'') uses logistic regression but does not include interaction terms; as expected, this version is unable to distinguish the two distributions. We used $B = 500$ permutations in the calculation of the $P$-values.}
\end{minipage}
\end{figure}
\begin{figure}[ht]
\centering
\caption{ROC curves (simulated data).}
\label{fig: roc-test}
\includegraphics[scale=0.38]{rocfig1_50.pdf}
\includegraphics[scale=0.38]{rocfig1_25.pdf}
\begin{minipage}{15cm}
\emph{Notes: See Figure \ref{fig: power-test-main}}.
\end{minipage}
\end{figure}
\clearpage
\section{Applications} \label{applications}
\subsection{Indiscriminate Violence in Chechnya: \cite{lyall2009}} \label{lyall}
\cite{lyall2009} investigates the effect of indiscriminate violence, specifically the bombing of villages in Chechnya. Villages are the unit of analysis, and the outcome of interest is insurgent attacks. The identification strategy is a matching procedure that yields almost completely balanced treatment and control groups in all the marginal distributions; see Figure \ref{fig: lyall-balance}. Lyall also presents the results shown in Figure \ref{fig: lyall-balance}, using them to support the claim of covariate balance.
The CPT finds significant evidence of covariate imbalance between the treatment and the control groups. Figure \ref{fig: lyall-distro} shows the distribution of the CPT test statistic under the null and the observed test statistic. The null is clearly rejected. This example illustrates that looking only at the marginal distributions of the covariates is not sufficient.
\begin{figure}[ht]
\caption{Covariate balance between treatment and control villages in \cite{lyall2009}.}
\label{fig: lyall-balance}
\centering
\includegraphics[scale=1]{fig_DIV_balance.pdf}
\begin{minipage}{11cm}
\emph{Notes: Figure \ref{fig: lyall-balance} shows balance on each covariate separately. The points are P-values from t-tests, Wilcoxon rank sum tests, and Kolmogorov-Smirnov (KS) tests.}
\end{minipage}
\end{figure}
\begin{figure}[ht]
\centering
\caption{Distribution of the CPT test statistic under the null hypothesis \newline}
\label{fig: lyall-distro}
\includegraphics[scale=0.48]{lyall-log2.pdf}
\includegraphics[scale=0.48]{lyall-forest.pdf}
\begin{minipage}{14cm}
\emph{\newline Notes: The figure shows the distribution of the CPT test statistic under the null hypothesis of random treatment assignment, and the observed test statistic. Results are shown for both logistic regression with all two-way interactions, and for random forests.}
\end{minipage}
\end{figure}
\subsection{Random assignment of defendants to judge calendars: \cite{green2010}} \label{judges}
\cite{green2010} studied the effect of incarceration length and probation length on recidivism. They argue that defendants are assigned as-if at random to different judge calendars, and that different judges have different punishment propensities. The data consists of a sample of $1,003$ felony drug defendants that are assumed to be randomly allocated between nine different judge calendars. The energy test ($P = 0.447 $) and the CPT both find no evidence of imbalance in the observed characteristics of the defendants across the nine judge calendars ($P = 0.115$). See Appendix \ref{appendix: data green and winik (2010)} for a list of all the observed covariates.
\begin{figure}[ht]
\centering
\caption{The distribution of the estimated P-score using a main effects model and all two-way interactions model. \newline}
\label{fig: distribution of P-score judge calendar 2}
\includegraphics[scale=0.45]{fig_pscore_overfit1.pdf}
\includegraphics[scale=0.45]{fig_pscore_overfit2.pdf}
\end{figure}
An intuitive method for examining whether the observations in two groups are comparable in observable characteristics is to plot fitted propensity score ($e(Z)$) values; however, this method can be sensitive to over-fitting. Consider a binary indicator of whether defendant $i$ was assigned to judge calendar $2$.\footnote{The choice of judge calendar $2$ is arbitrary and was motivated as an example that illustrates the issue of over-fitting a propensity score to the data. Other judge calendar choices are also possible; however, our aim is not to make a statement about judge calendars, but rather to emphasize an estimation and testing issue.} We estimated $e(Z)$ using a logistic regression and plotted the fitted values, $\hat{e}(Z)$, among the treated (assigned to judge calendar $2$) and control units in Figure \ref{fig: distribution of P-score judge calendar 2}. The imbalance in the estimated propensity score could be the result of real differences in observable characteristics between the treated and control units or of over-fitting of the logistic regression model to the observed data. The CPT does not find any difference in the observable characteristics between defendants assigned to judge calendar $2$ and the other defendants ($P = 0.241$). As the CPT re-estimates the logistic regression in each permutation, it avoids over-fitting issues and has exact finite sample coverage.
The likelihood ratio test (LRT) from a logistic regression is a common alternative to the CPT or other permutation based tests. Table \ref{tab: type-I error rate judge calendar} shows the results of testing separately for each judge calendar whether defendants are randomly assigned, using the LRT. The main effects logistic regression usually yields P-values that have correct coverage (i.e., Type-I error rate); however, when all two-way interactions are included, the model over-fits the data and has incorrect coverage. This illustrates the over-fitting problem of the LRT in finite samples. Next we investigate the finite sample performance of the LRT in this data application.
Figure \ref{fig: judges Type-I error rate LRT} shows the distribution of the LRT P-values when the null hypothesis of random assignment is correct. We permuted the treatment at random and tested the null of random assignment. It is clear that the finite sample distribution of the over-fitted LRT P-value has incorrect Type-I error rates. The over-fitting problem of the LRT in finite samples has been previously documented in the literature \citep{hansen2008b}.
\begin{table}[!htbp] \centering
\caption{The Likelihood Ratio Test P-values and Type-I error rates for each judge calendar dummy}
\label{tab: type-I error rate judge calendar}
\begin{tabular}{@{\extracolsep{5pt}} ccccccc}
\\[-1.8ex]\hline
\hline \\[-1.8ex]
& \multicolumn{3}{c}{\emph{Main effects only}} &
\multicolumn{3}{c}{\emph{All two-way interactions}} \\ \\
Judge calendar & P-value & Num. of coefficients & Type-I & P-value & Num. of coefficients & Type-I \\
\hline \\[-1.8ex]
1 & $0.070$ & $22$ & $0.054$ & $0.001$ & $206$ & $0.354$ \\
2 & $0.210$ & $22$ & $0.069$ & $0.007$ & $206$ & $0.363$ \\
3 & $0.435$ & $22$ & $0.051$ & $0.123$ & $206$ & $0.343$ \\
4 & $0.852$ & $22$ & $0.063$ & $0.010$ & $206$ & $0.346$ \\
5 & $0.408$ & $22$ & $0.067$ & $0.129$ & $206$ & $0.339$ \\
6 & $0.767$ & $22$ & $0.067$ & $0.231$ & $206$ & $0.348$ \\
7 & $0.159$ & $22$ & $0.053$ & $0.017$ & $206$ & $0.354$ \\
8 & $0.917$ & $22$ & $0.066$ & $0.841$ & $206$ & $0.367$ \\
9 & $0.618$ & $22$ & $0.053$ & $0.090$ & $206$ & $0.363$ \\
\hline \\[-1.8ex]
\end{tabular}
\end{table}
\subsection{MPs for Sale: \cite{eggers2009}}
\cite{eggers2009} (henceforth EH) studied the effect of membership in the UK parliament on personal wealth. EH use a regression discontinuity design (RDD) in which candidates for parliament who just barely won an election are compared to candidates who just barely lost. In a RDD the observations just above and just below the threshold are assumed to be comparable, with the same distribution of observed and unobserved characteristics \citep{caughy2011}. Testable implications of a valid RDD include covariate balance and no manipulation around the winning threshold \citep{imbens2008, lee2010}.
The aim of this data application is to illustrate the performance of the CPT in a RDD setting. To begin, we cast doubt on the RDD used by EH. We demonstrate manipulation of the running variable (vote share) around the cut-point. We also find imbalance in the party identity close to the cut point.
Second, we drop party identity from the covariate set, and examine how well the CPT succeeds at identifying an imbalance in observables using only the remaining covariates. This can be thought of as a power test of how well the CPT can identify that the RD design is not valid.
One possible explanation for our findings is that the EH design breaks the RD pairing of barely winners and losers by comparing individuals who attempted to run a \emph{different} number of times across \emph{multiple} elections. For example, the barely winners (and losers) could have run several times before their first winning (or best losing) race, and those elections are ignored in the EH design. If, for example, the design used only a single election, this issue would not arise. The concern stems from comparing, across multiple elections, candidates who are not necessarily comparable, due to differences in the characteristics that motivate a candidate to continue trying to be elected after losing a race.
If the two populations of candidates, barely winners and losers, are indeed different in observable (and non-observable) characteristics, this can explain our findings.
Figure \ref{fig: Density histograms MPs} shows the distribution of the winning margin by party. There is clear evidence of manipulation around the winning threshold by the non-Labour party candidates, and the McCrary test \citep{mccrary2008} for manipulation around the cut-point finds significant evidence of manipulation.
\begin{figure}[ht]
\centering
\caption{The distribution of the winning margin by party identity \newline}
\label{fig: Density histograms MPs}
\includegraphics[scale=.3]{hist_all.pdf}
\includegraphics[scale=.3]{hist_nonlabour.pdf}
\includegraphics[scale=.3]{hist_labour.pdf}
\begin{minipage}{15cm}
\footnotesize
\emph{Notes: The bin size is at the default level in the {\bf{\textsf{R} }} package ``ggplot2''.}
\end{minipage}
\end{figure}
To demonstrate the added value of the CPT relative to a standard balance table, we will look at a specific window around the winning threshold.
Table 4 in EH shows the main estimates of the treatment effect. The estimates use a window of 164 to 223 observations around the winning threshold. We restricted the sample to a window containing 164 observations and examined the covariate balance within that window. Table \ref{tab: MPs balance window} and Figure \ref{fig: MPs balance table } in the Appendix suggest the covariate balance is not bad: except for imbalance on party identity, most covariates seem to be balanced. Furthermore, a joint F-test of the null hypothesis that the covariates have no predictive power rejects only at the 10\% significance level ($P = 0.075$), and without the party indicator the joint F-test does not reject the null of no predictive power ($P = 0.412$), finding no evidence of imbalance when party identity is excluded.
We remove the party indicator from the covariate set and check whether the multivariate balance tests can detect a difference between the winners and the losers based on the remaining covariates.
The Energy test and the Cross-Match test do not detect covariate imbalance ($P=0.88$ and $P=0.13$, respectively); however, the CPT finds significant imbalance between the two groups (see Figure \ref{fig: Null distribution CPT MPs} in the Appendix).
In Figure \ref{fig: P-values for different window sizes} we compare the Energy test, the Cross-Match test, and the CPT over a grid of different window sizes. The results suggest that the CPT has higher power than the Energy and Cross-Match tests. The CPT detects significant covariate imbalance at window sizes that are half of the one used by EH. We used a random forest as the classifier, because logistic regression with all two-way interactions had more parameters than observations. This is an example of how machine learning algorithms combined with permutation inference can be used to complement existing econometric tools.
\begin{figure}[ht]
\centering
\caption{P-values of each of the multivariate balance test at different window sizes. \newline}
\label{fig: P-values for different window sizes}
\includegraphics[width=0.78\textwidth]{MPs_rdd_window.pdf}
\begin{minipage}{13.5cm}
\footnotesize
\emph{Notes:
The CPT uses a random forest classifier, and the test statistic is the in-sample classification accuracy rate.
In the two smallest window sizes the Cross-Match test statistic was not well defined, as the covariance matrix could not be inverted.
EH used a window containing between 164 to 223 observations in their RD treatment effect estimation, see Table 4 in EH. }
\end{minipage}
\end{figure}
\subsection{The effect of community college on educational attainment, \cite{rouse1995} and \cite{heller2010}} \label{rouse}
In a matching design it is common to use Fisherian inference after conducting the matching procedure, see \cite{rosenbaum2010}. A key question is whether after matching the researcher should imagine that units have been assigned at random within matched blocks, or whether each unit has been assigned independently to treatment. In other words, in the hypothetical experiment that the matching design is meant to mimic, is the randomization within a group (match) or across groups?
In this data application we show that it is essential to specify the probability model, because the two choices may lead to opposite conclusions when conducting balance diagnostics.
\cite{rouse1995} compared the educational attainment of students who started at a two-year college to that of students who started at a four-year college. \cite{heller2010} used this data to demonstrate the use of the Cross-Match test for testing imbalance between multivariate distributions. We use this data to demonstrate methodological issues in conducting inference after matching, not to make any inference about the effects of two-year college on educational attainment relative to four-year college.
In Rouse's data, prior to matching there is clear imbalance in the observable characteristics of students who started at a two-year college and those who started at a four-year college (see Figure \ref{fig: college balance} in the Appendix). After matching, with or without replacement, the balance tables comparing the treated (two-year) and control (four-year) units show that the groups are comparable in the observed characteristics, suggesting the matching procedure worked well. To test whether there is imbalance in the joint distribution of the covariates we use the CPT, and Figure \ref{fig: college null distribution CPT} shows the results. Figure \ref{fig: college null distribution CPT} yields opposite results depending on the randomization structure that is used. When the randomization structure is across blocks, the observed test statistic is to the left of the null distribution, implying more balance than would be likely under random assignment. When the randomization structure is within blocks, the observed test statistic is to the right of the null distribution, implying the covariates can predict the treatment assignment better than under random assignment. The difference between the left and right plots in Figure \ref{fig: college null distribution CPT} is the matching method, with or without replacement; as can be seen, the matching procedure has no effect on our discussion of within versus across block randomization.
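The two null distributions differ only in how the treatment labels are permuted. A minimal sketch of the two permutation schemes is given below (in Python; \texttt{blocks} is an assumed vector of matched-set labels).
\begin{verbatim}
# Sketch: across-block vs. within-block permutation of treatment labels.
# T is the treatment indicator vector; blocks gives each unit's matched set.
import numpy as np

def permute_across_blocks(T, rng):
    """Shuffle treatment labels over the whole sample (blocks ignored)."""
    return rng.permutation(T)

def permute_within_blocks(T, blocks, rng):
    """Shuffle treatment labels separately within each matched block."""
    T_new = np.array(T, copy=True)
    for b in np.unique(blocks):
        idx = np.where(blocks == b)[0]
        T_new[idx] = rng.permutation(T_new[idx])
    return T_new
\end{verbatim}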
\begin{figure}[ht]
\begin{center}
\caption{The distribution of the test statistic under the null according to randomization within blocks and across blocks for matching designs with and without replacement}
\label{fig: college null distribution CPT}
\end{center}
\includegraphics[scale=0.5]{college_norep_null_rate.pdf}
\includegraphics[scale=0.5]{college_rep_null_rate.pdf}
\begin{minipage}{17cm}
\footnotesize
\emph{Notes:} The difference between the left and right panels is whether the matching was done with replacement or without, and as can be seen from the figure the matching method has no effect on our conclusions concerning within versus across block randomization.
\end{minipage}
\end{figure}
\section{Theory} \label{technical}
In this section we explicitly reformulate the CPT as a two-sample test for equality of multivariate distributions (\ref{reformulation}), describe an idealized version of the CPT (\ref{idealized}), and then show that the idealized CPT is consistent under weak conditions (\ref{consist}). We conclude with some comments (\ref{techcomments}).
\subsection{Reformulation} \label{reformulation}
In Section \ref{method} we assume that the $(Z_i, T_i)$ pairs are IID from some unknown distribution. Let $\mathcal{F}$ be the conditional distribution of $Z_i$ given $T_i = 1$ and $\mathcal{G}$ be the conditional distribution of $Z_i$ given $T_i = 0$. Then $Z \rotatebox[origin=c]{90}{$\models$} T$ if and only if $\mathcal{F} = \mathcal{G}$. We may therefore reformulate the CPT as a test for equality of the multivariate distributions $\mathcal{F}$ and $\mathcal{G}$.
Suppose there are $l > 0$ values of $i$ for which $T_i = 1$, and let $X_1$, $X_2$, ..., $X_l$ denote the $l$ corresponding $Z_i$. Similarly suppose there are $m > 0$ values of $i$ for which $T_i = 0$ and let $Y_1$, $Y_2$, ..., $Y_m$ denote the $m$ corresponding $Z_i$. Let $X$ be the $l\times p$ matrix whose rows are $X_1$, ..., $X_l$, and let $Y$ be the $m \times p$ matrix whose rows are $Y_1$, ..., $Y_m$. Note that the rows of $X$ are IID draws from $\mathcal{F}$ and the rows of $Y$ are IID draws from $\mathcal{G}$. In this context, the CPT is simply a two-sample test comparing $X$ and $Y$.
Let us now \textit{redefine} $Z$ to be the $n\times p$ matrix
\begin{equation}
Z \equiv \left(\begin{array}{c} X \\ Y \end{array} \right).
\end{equation}
Note that in our redefinition of $Z$ we have simply reordered the rows so that the first $l$ rows are from the treatment group and the remaining $m$ rows are from the control group.
\subsection{Description of an Idealized CPT} \label{idealized}
Let $s:\mathbb{R}^{n \times p}\mapsto \mathbb{R}$ be some fixed but otherwise arbitrary measurable function that maps an $n \times p$ matrix to a real number. We will use $S \equiv s(Z)$ as our test statistic. (We specify possible choices for $s$ below, but for now we allow $s$ to be arbitrary.)
Let $\Pi_1$, ..., $\Pi_{n!}$ denote some ordering of the $n!$ permutation matrices of dimension $n \times n$. We assume that $\Pi_1 = I$, but the ordering may otherwise be arbitrary. Define $S^{(i)} \equiv s(\Pi_i Z)$ for $1 \le i \le n!$. The values of $S^{(i)}$ are the re-calculated values of the test statistic we obtain after shuffling the observations (i.e., after shuffling the rows of $Z$).
Now define
\begin{equation}
P \equiv \frac{\#\left\{i : S \le S^{(i)} \right\}}{n!}.
\end{equation}
Proposition \ref{validity} states that $P$ is a valid $P$-value for testing the hypothesis that $\mathcal{F}=\mathcal{G}$.
\begin{proposition} \label{validity}
Assume that $\mathcal{F} = \mathcal{G}$. Then for any real number $\alpha$ such that $0 \le \alpha \le 1$, it follows that $\mathbb{P}\left(P \le \alpha \right) \le \alpha$.
\end{proposition}
A proof is given in Appendix \ref{proofs}. We must point out that there is of course nothing fundamentally new here; we have simply outlined a classical permutation test. The key point we wish to make is that we may choose any function $s$ that we like, and the test remains valid. Indeed, our choice of $s$ is a rather complicated function. We use $Z$ to train a classifier that classifies observations as coming from either $\mathcal{F}$ or $\mathcal{G}$, and $s(Z)$ is some measure of the accuracy of the classifier. (The function $s$ encapsulates both the training of the classifier, and the measurement of its accuracy.) We also point out that we are describing here an idealized version of the CPT, because it is usually infeasible to compute $P$ in practice, since that would require us to compute all $n!$ values of $S^{(i)}$. Lastly, we note that the assumption that $s$ is fixed is somewhat restrictive. It excludes the possibility that the classifier might use a randomized algorithm. This would exclude, for example, random forests. In Appendix \ref{proofs} we discuss a generalization of Proposition \ref{validity} that allows for $s$ to be random.
We next discuss how we might construct our function $s$, and present two possibilities. One possibility calculates the in-sample classification accuracy rate. The other calculates the out-of-sample classification rate. Both require us to specify a \textit{classification function}, which in practice amounts to choosing a classification algorithm (e.g. logistic regression).
\paragraph{Classification Function}
The classification function, which we denote $\hat{f}$, is a function that takes an observation and classifies it as coming from either $\mathcal{F}$ or $\mathcal{G}$. Somewhat informally, we may think of $\hat{f}$ as a function that maps a $p$-dimensional vector (i.e., a single observation) to $\{0, 1\}$, with ``1'' meaning the observation is classified as coming from $\mathcal{F}$, and ``0'' meaning the observation is classified as coming from $\mathcal{G}$. However, the prediction rule used by $\hat{f}$ to classify observations is learned from training data, and thus, strictly speaking, the function $\hat{f}$ is a function not only of the observation to be classified, but also of the training data. We therefore write $\hat{f}$ as a function of two variables, i.e.\ $\hat{f}(u,v)$, where $u$ is the observation to be classified (a $p$-dimensional vector) and $v$ is the training data (an $n_0\times p$ matrix, where $n_0$ is the number of observations included in the training set; $n_0$ will usually be defined implicitly, depending on how we construct the training set).
In what follows, we allow $\hat{f}$ to be any fixed, measurable function that maps $\mathbb{R}^p\times\mathbb{R}^{n_0 \times p}$ to $\{0, 1\}$. We do not place any other restrictions on $\hat{f}$. In practice, we might choose $\hat{f}$ to be, for example, a logistic regression classifier. Note that we require $\hat{f}$ here to be fixed, which excludes randomized algorithms such as random forests.
\paragraph{In-sample Classification Accuracy Rate} Once we have chosen a function $\hat{f}$, we may then define $s$ in terms of $\hat{f}$. One simple option is the \textit{in-sample classification accuracy rate}:
\begin{equation}
s_{\mathrm{in}}(z) = \frac{1}{n} \left\{\sum_{i = 1}^l \hat{f}(z_i, z) + \sum_{i = l+1}^n \left[1 - \hat{f}(z_i, z)\right]\right\}
\end{equation}
where $z_i$ denotes the $i^{\mathrm{th}}$ row of $z$ (note that the variable $z$ does not have any special meaning of its own; it is simply used here to define the function $s_{\mathrm{in}}$ in terms of the function $\hat{f}$). Here, we use the entire dataset $z$ as the training data (so $n_0 = n$). We then count the number of observations that are correctly classified and divide by $n$.
\paragraph{Out-of-sample Classification Accuracy Rate} An alternative to the in-sample classification accuracy rate would be the \textit{out-of-sample classification accuracy rate}. As defined below, the out-of-sample classification accuracy rate essentially amounts to cross validation except that we consider all possible training sets of a fixed size, instead of just 5 or 10 disjoint training sets. In addition, we require that exactly half of the observations in the test set come from $\mathcal{F}$ and exactly half come from $\mathcal{G}$ (see discussion in Section \ref{techcomments}).
Let $\kappa$ be some integer such that $1 \le \kappa < \mathrm{min}(l,m)$. If $z$ is a $n \times p$ matrix, let $\mathbf{z}$ (in bold) denote
\[
\left( \begin{array}{c} z_1 \\ \vdots \\ z_{l-\kappa} \\ z_{l} \\ \vdots \\ z_{n-\kappa} \end{array} \right).
\]
In other words, $\mathbf{z}$ is equal to $z$, but with the following $2\kappa$ rows removed: $l-\kappa+1$, $l-\kappa+2$, ..., $l$ and $n-\kappa+1$, $n-\kappa+2$, ..., $n$. Note that the definition of $\mathbf{z}$ depends on $\kappa$, even though this is not reflected explicitly in the notation. (We do not write, for example, $\mathbf{z}(\kappa)$. This is to avoid notational clutter.) The motivation for this ``bold'' notation is that we can use $\mathbf{z}$ as a training set.
The remaining $2\kappa$ rows of $z$ can be used as a test set.
Next, define the function $a(z)$ as follows:
\begin{equation}
a(z) \equiv \frac{1}{2\kappa} \left\{\sum_{i=l-\kappa+1}^{l} \hat{f}(z_i, \mathbf{z}) + \sum_{i=n-\kappa+1}^{n} \left[1 - \hat{f}(z_i, \mathbf{z})\right] \right\}.
\end{equation}
Here we use only $\mathbf{z}$ as the training set, so $n_0 = n-2\kappa$. The remaining $2\kappa$ observations are the test set. We count how many of the test-set observations are correctly classified, and divide by $2\kappa$. Thus, $a(z)$ may be interpreted as the out-of-sample classification accuracy rate for one specific partition of $z$ into a training set and test set.
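Continuing the sketch above (again with an illustrative logistic-regression classifier), the quantity $a(z)$ for this specific train/test partition could be computed as follows; the row indexing is zero-based, so rows $l-\kappa,\dots,l-1$ and $n-\kappa,\dots,n-1$ form the test set:
\begin{verbatim}
import numpy as np
from sklearn.linear_model import LogisticRegression

def a(z, l, kappa):
    # Train on the reduced matrix (bold z) and evaluate on the 2*kappa
    # held-out rows, kappa from each group.
    n = z.shape[0]
    train_idx = np.r_[np.arange(l - kappa), np.arange(l, n - kappa)]
    test_F = np.arange(l - kappa, l)        # held-out rows from F
    test_G = np.arange(n - kappa, n)        # held-out rows from G
    y_train = np.r_[np.ones(l - kappa), np.zeros(n - l - kappa)]
    clf = LogisticRegression(max_iter=1000).fit(z[train_idx], y_train)
    correct = (clf.predict(z[test_F]).sum()
               + (1 - clf.predict(z[test_G])).sum())
    return correct / (2 * kappa)
\end{verbatim}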
We may now define $s_{\mathrm{out}}(z)$ as:
\begin{equation}
s_{\mathrm{out}}(z) = \frac{1}{l!m!}\sum_{i,j}a\left(\Pi_i^{(X)} \Pi_j^{(Y)} z\right) \label{oosrate}
\end{equation}
where $\Pi_1^{(X)}$, $\Pi_2^{(X)}$, ..., $\Pi_{l!}^{(X)}$ denotes some ordering of the $l!$ permutation matrices that permute only the first $l$ rows of $z$, and $\Pi_1^{(Y)}$, $\Pi_2^{(Y)}$, ..., $\Pi_{m!}^{(Y)}$ denotes some ordering of the $m!$ permutation matrices that permute only the final $m$ rows of $z$. In other words, $\{\Pi_i^{(X)}\}$ is the set of all $n\times n$ permutation matrices whose lower-right $m\times m$ submatrix is equal to $I_{m\times m}$, and $\{\Pi_i^{(Y)}\}$ is the set of all $n\times n$ permutation matrices whose upper-left $l\times l$ submatrix is equal to $I_{l\times l}$. (Note that $\Pi_i^{(X)}$ and $\Pi_j^{(Y)}$ commute.) Equation \ref{oosrate} is our definition of the \textit{out-of-sample classification accuracy rate}. We will drop the subscript ``out'' on $s_{\mathrm{out}}(z)$ when it is clear from context.
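Averaging $a(\cdot)$ over all $l!\,m!$ within-group permutations is infeasible for all but tiny samples, so in practice one would approximate Equation \ref{oosrate} by Monte Carlo. A hedged sketch, reusing the function \texttt{a} defined above and drawing random within-group permutations (the number of draws is an arbitrary choice of ours), could look as follows:
\begin{verbatim}
def s_out_mc(z, l, kappa, n_draws=200, seed=0):
    # Monte Carlo approximation of the out-of-sample accuracy rate:
    # average a(.) over random permutations that shuffle the first l
    # rows and the last m rows separately.
    rng = np.random.default_rng(seed)
    n = z.shape[0]
    total = 0.0
    for _ in range(n_draws):
        perm = np.r_[rng.permutation(l), l + rng.permutation(n - l)]
        total += a(z[perm], l, kappa)
    return total / n_draws
\end{verbatim}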
\subsection{Consistency} \label{consist}
The CPT is a consistent test if (1) we use the out-of-sample classification accuracy rate, and (2) the classification function $\hat{f}$ has at least some predictive power to discriminate $\mathcal{F}$ from $\mathcal{G}$. The exact sense in which we mean ``$\hat{f}$ has at least some predictive power to discriminate $\mathcal{F}$ from $\mathcal{G}$'' is specified in Definition \ref{kdgpredictive}.
\begin{definition} \label{kdgpredictive}
Let $Z$, $\kappa$, and $\mathbf{Z}$ be defined as above. Let $\tilde{X} \sim \mathcal{F}$ and $\tilde{Y} \sim \mathcal{G}$ be $1 \times p$ random vectors, and assume that $\tilde{X}$ and $\tilde{Y}$ are independent of $Z$ and of each other. We say that a function $\hat{f}: \mathbb{R}^p\times\mathbb{R}^{(n-2\kappa) \times p} \mapsto \{0, 1\}$ is $(\kappa, \delta, \gamma)$-predictive under $\mathcal{F}$ and $\mathcal{G}$ if and only if both of the following are true:
$$
\mathbb{P}\left\{\mathbb{P}\left[\hat{f}\left(\tilde{X}, \mathbf{Z}\right) = 1 \, \middle| \, \mathbf{Z}\right] > 0.5 + \delta\right\} > 1-\gamma
$$
and
$$
\mathbb{P}\left\{\mathbb{P}\left[\hat{f}\left(\tilde{Y}, \mathbf{Z}\right) = 0 \, \middle| \, \mathbf{Z}\right] > 0.5 + \delta\right\} > 1-\gamma.
$$
\end{definition}
In other words, if we use $\mathbf{Z}$ as a training set, then with probability at least $1 - \gamma$ the function $\hat{f}$ will be able to correctly classify new, independent observations at least somewhat better than a coin flip. Under this assumption, if $\gamma$ is sufficiently small and if $\kappa$ is sufficiently large, it follows that with high probability the test statistic $S$ will be at least some finite amount larger than 0.5. More precisely:
\begin{proposition} \label{Sdistro}
Assume that $\mathcal{F} \ne \mathcal{G}$ and that $\hat{f}$ is $(\kappa, \delta, \gamma)$-predictive under $\mathcal{F}$ and $\mathcal{G}$. Then
\begin{equation}
\mathbb{P}\left[S \le 0.5 + \delta/4 \right] < \frac{8\gamma + 4\exp\left(-\kappa\delta^2\right)}{\delta}.
\end{equation}
\end{proposition}
\begin{proof}
See Appendix \ref{proofs}.
\end{proof}
Moreover, if $\kappa$ is large, then most of the values of $S^{(i)}$ concentrate right around 0.5.
\begin{proposition} \label{sibound}
Let $\xi$ be some real number such that $0 < \xi < 0.5$. Then
\begin{equation}
\frac{\#\{i : S^{(i)} > 0.5 + \xi\}}{n!} < \frac{1}{\xi}\left(\frac{1+\sqrt{\pi}}{2\sqrt{2}}\right) \sqrt{\frac{1}{\kappa}}.
\end{equation}
\end{proposition}
\begin{proof}
See Appendix \ref{proofs}.
\end{proof}
Combining Propositions \ref{Sdistro} and \ref{sibound}, and recalling the definition of $P$, we see that the power of the CPT goes to 1 as $n \to \infty$ as long as $\kappa \to \infty$, $\gamma \to 0$, and $\delta \to \delta_0 > 0$. Note that slightly stronger statements are possible using the results in Appendix \ref{proofs}, but we do not pursue them here.
\subsection{Comments} \label{techcomments}
To summarize, when constructing the test statistic we must make two main choices: (1) what classifier to use, and (2) what accuracy measure to use. Neither decision affects the validity of the test (that is guaranteed by Proposition \ref{validity}) but our choices affect the power of the test, and also the computational complexity.
In practice, the most important choice is usually the classifier (see below). The better the classifier can distinguish $\mathcal{F}$ from $\mathcal{G}$, the more powerful the test. This is both a feature and a bug. On one hand, a researcher may have some intuition about what type of classifier might best fit her data (e.g.\ a linear vs.\ non-linear classifier), and thus ``customize'' the CPT to her particular application. We feel that this is a major strength of the method. On the other hand, since the choice is arbitrary, it could easily lead to data snooping. We therefore suggest, as a default, that researchers run the CPT once with logistic regression and once with random forests, and report the results of both. If it is felt that a third classifier is more appropriate, we suggest reporting its result as well, in addition to the first two. Of course, when the CPT is merely being used as a diagnostic tool to discover covariate imbalance, data snooping may not be a serious concern --- if there are serious imbalances, we would like to find them, even if it requires a little searching.
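For concreteness, the default workflow suggested above could be organized as in the sketch below. The statistic (for example the out-of-sample accuracy from the previous sketches, computed once with a logistic-regression classifier and once with a random-forest classifier whose random seed is fixed) is passed in as a function argument; the simple ``add-one'' Monte Carlo permutation p-value used here is our own shorthand and is not meant to replace the exact definition of $P$ given earlier in the paper:
\begin{verbatim}
import numpy as np

def cpt_pvalue(z, l, kappa, statistic, n_perm=500, seed=0):
    # Schematic permutation test: compare the observed statistic with
    # its distribution under random relabelling of all n rows.
    rng = np.random.default_rng(seed)
    s_obs = statistic(z, l, kappa)
    exceed = 0
    for _ in range(n_perm):
        s_perm = statistic(z[rng.permutation(z.shape[0])], l, kappa)
        exceed += (s_perm >= s_obs)
    return (1 + exceed) / (1 + n_perm)
\end{verbatim}
One would call this once per classifier, e.g.\ \texttt{cpt\_pvalue(z, l, kappa, s\_out\_mc)}, and report the resulting p-values side by side.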
The choice of accuracy measure seems to be much less important than the choice of classifier in practice. See Figure \ref{fig: roc-inout} in the appendix, which compares the in-sample vs out-of-sample CPT on simulated data. Little, if any, difference can be seen. Our focus here on the out-of-sample classification rate is primarily theoretical; it is more difficult (and requires further assumptions) to prove consistency of the in-sample CPT. To see why, consider a K-nearest neighbors classifier, with K = 1. This classifier may be able to discriminate $\mathcal{F}$ from $\mathcal{G}$ in the sense described above (Section \ref{consist}), but the in-sample CPT will have 0 power. Assuming the $Z_i$ are all distinct, the in-sample classification accuracy rate will always be 1, over all permutations, and thus the CPT will never reject.
Another theoretical detail is that in our definition of the out-of-sample classification accuracy rate we force the test set to have an equal number of observations from treatment and control. The idea here is that, to the extent that the classifier approximates an ideal Bayes classifier, it should approximate a Bayes classifier that has a 50/50 prior on the class label. If the prior is not uniform on the class label, and especially if there is a large imbalance, it is possible that the Bayes classifier would always classify every observation to a single class. In such cases, the classification accuracy rate would be constant over all permutations, and the CPT would have 0 power. In practice, this implies that some caution may be required when applying the CPT to datasets with a large imbalance in the number of observations from treatment and control. In such cases, it may be preferable to use the out-of-sample classification accuracy rate (instead of in-sample), and to ensure the classifier effectively places a uniform prior on the class label.
\section{Discussion} \label{discussion}
The CPT reformulates the problem of testing whether a binary treatment was assigned at random as a test for equality of multivariate distributions. The test combines classification methods with Fisherian permutation inference. We illustrate the power of the method relative to existing procedures using Monte-Carlo simulations as well as four real data examples. We hope the CPT will illustrate the gains of using machine learning tools for the construction of powerful new test statistics, and Fisherian inference for conducting hypothesis testing and inference.
The paper emphasizes the importance of the joint distribution rather than the marginal distributions when testing for equality of multivariate distributions. The CPT is \emph{not} a substitute for standard methods such as a balance table that tests for differences in the means of each pre-treatment characteristic separately. The CPT is targeted to complement a balance table and provide a summary measure of the covariates' imbalance.
The CPT can be easily generalized: although we focus in this paper on binary treatments, a similar method could be implemented for continuous treatments by replacing the classification algorithm with some form of regression, and replacing the classification accuracy rate with some other goodness-of-fit measure. This flexibility, combined with exact finite-sample inference, allows researchers to verify random assignment to treatment in a variety of situations. The four empirical applications aim to illustrate the applicability of the method to different situations that arise in applied research.
\clearpage
\singlespacing
\bibliographystyle{aer}
\section{Introduction} \label{sec:intro}
Current developments in fluorescence microscopy have allowed the generation of image data with progressively increasing resolution and, accordingly, increasing data size \cite{economo2016,strnad2016}.
3D microscopy data sets can easily reach terabyte-scale, which requires a high degree of automation of analytical processes including image segmentation \cite{meijering2020}.
Machine learning-based segmentation approaches require annotated training data sets, which often must be created manually and are rarely available, to produce reliable and robust outcomes.
Although some tools are available \cite{mcquin2018,spina18,dereuille2015,sommer2011}, this issue is even more severe for 3D microscopy image data, since manual annotation of 3D cellular structures is very time-consuming and often infeasible, which causes the acquisition of annotated 3D data sets to be highly expensive.
2D approaches have been applied in a slice-wise manner to overcome this issue \cite{stringer2020}, diminishing the need for expensive 3D annotations, but resulting in potential sources of errors and inaccuracies.
Segmentation errors are more likely to occur at slice transitions, leading to noisy segmentations, and omitting information from the third spatial dimension, \textit{i.e.}, working with a decreased field of view, potentially reduces segmentation accuracy.
Conversely, if there are fully-annotated 3D data sets available, robust and generalist approaches are desired to make full use of those data sets \cite{stringer2020, isensee2021, schmidt2018}.
Stringer \emph{et al.} proposed the Cellpose algorithm \cite{stringer2020} and demonstrated that this approach serves as a reliable and generalist approach for cellular segmentation on a large variety of data sets.
However, this method was designed for 2D application.
Although a concept to obtain a 3D segmentation from a successive application of the 2D method in different spatial dimensions was proposed, the approach is still prone to the aforementioned sources of errors, missing out on the full potential of 3D image data.
In this paper we propose (1) an extension of the Cellpose approach to fully exploit the available 3D information for improved segmentation smoothness and increased robustness.
Furthermore, we demonstrate (2) how the training objective can be simplified without losing accuracy for segmentation of fluorescently labeled cell membranes and (3) we propose a concept for instance reconstruction allowing for stable runtimes.
(4) All of our code is publicly available and has been integrated into the open source applications XPIWIT \cite{bartschat2016} and MorphoGraphX \cite{dereuille2015}.
\section{Method} \label{sec:method}
The proposed method extends the Cellpose algorithm proposed by Stringer \emph{et al.} \cite{stringer2020}, which formulates the instance segmentation problem as a prediction of directional gradients pointing towards the center of each cell.
These gradients are derived from mathematically modelling a heat diffusion process, originating at the centroid of a cell and extending to the cell boundary (Fig.~\ref{fig:grad_maps}, upper row).
\begin{figure}[h]
\centering
\includegraphics[width=0.49\textwidth]{images/flows.eps}
\caption{Gradient map representation of cell instances in $x$ (red), $y$ (green) and $z$ (blue) directions. The upper row shows heat diffusion simulations \cite{stringer2020}, while the lower row shows the simplified hyperbolic tangent distributions. Brightness indicates the gradient direction.}
\label{fig:grad_maps}
\end{figure}
Gradients are divided into separate maps for each spatial direction, which can be predicted by a neural network alongside a foreground probability map to prevent background segmentations.
After prediction of the foreground and gradient maps using a U-Net architecture \cite{ronneberger2015}, each cell instance is reconstructed by tracing the multidimensional gradient maps to the respective simulated heat origin.
All voxels that end up in the same sink are ultimately assigned to the same cell instance.
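To make this representation concrete, the following 2D Python sketch generates heat-diffusion gradient maps from a label image. It is a simplified re-implementation of the idea (fixed number of iterations, heat injected at the centroid, gradients of the log-diffused map), not the reference Cellpose code, and details such as the centre definition and the stopping criterion may differ:
\begin{verbatim}
import numpy as np

def heat_gradients_2d(labels, n_iter=200):
    # For every labelled cell: repeatedly inject heat at the cell
    # centre and average over the 4-neighbourhood inside the cell
    # mask; the normalised spatial gradients of the diffused map
    # point towards the heat origin.
    gy = np.zeros(labels.shape, dtype=float)
    gx = np.zeros(labels.shape, dtype=float)
    for lab in np.unique(labels[labels > 0]):
        mask = labels == lab
        ys, xs = np.nonzero(mask)
        cy, cx = int(ys.mean()), int(xs.mean())  # centroid (assumed inside)
        heat = np.zeros(labels.shape, dtype=float)
        for _ in range(n_iter):
            heat[cy, cx] += 1.0
            p = np.pad(heat, 1)
            heat = 0.25 * (p[:-2, 1:-1] + p[2:, 1:-1] +
                           p[1:-1, :-2] + p[1:-1, 2:])
            heat *= mask                  # no diffusion across the boundary
        dy, dx = np.gradient(np.log1p(heat))
        norm = np.hypot(dy, dx) + 1e-12
        gy[mask] = (dy / norm)[mask]
        gx[mask] = (dx / norm)[mask]
    return gy, gx
\end{verbatim}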
For processing images directly in 3D and benefiting from the full 3D information, we propose multiple additions to the Cellpose approach and necessary changes to overcome memory limitations and prevent long run-times.
An overview of the entire processing pipeline is visualized in Fig.~\ref{fig:pipeline}.
We propose to use a different objective for the generation of the gradient maps, since calculation of a heat diffusion in 3D (Fig.~\ref{fig:grad_maps}, top) is computationally complex.
Instead, we rely on a hyperbolic tangent spanning values in the range of $(-1,1)$ between cell boundaries in each spatial direction (Fig.~\ref{fig:grad_maps}, bottom), which can be interpreted as the relative directional distance to reach the cell center axis.
Note that the different formulation constitutes a trade-off between lower complexity and the ability to reliably segment highly non-convex cell shapes.
Predictions are obtained by a 3D U-Net~\cite{cicek2016}, including all three spatial gradient maps for the $x$, $y$ and $z$ directions, respectively, and an additional 3D foreground map highlighting cellular regions.
Since the training objective is formulated by a hyperbolic tangent, the output activation of the U-Net is set to a hyperbolic tangent accordingly.
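A minimal sketch of how such a target could be generated from a 3D label image is given below; the per-cell bounding-box centre and the tanh shaping factor are our own simplifications of the ``relative directional distance'' described above and may differ from the released implementation:
\begin{verbatim}
import numpy as np

def tanh_gradient_map(labels, axis):
    # For every cell, the value along the chosen axis runs from
    # roughly +1 at one boundary to -1 at the other and is zero at
    # the cell centre, i.e. it encodes the relative distance to the
    # centre along that axis.
    grad = np.zeros(labels.shape, dtype=float)
    coords = np.indices(labels.shape)[axis].astype(float)
    for lab in np.unique(labels[labels > 0]):
        mask = labels == lab
        c = coords[mask]
        centre = 0.5 * (c.min() + c.max())
        half_extent = max(0.5 * (c.max() - c.min()), 1.0)
        grad[mask] = np.tanh(2.0 * (centre - c) / half_extent)
    return grad
\end{verbatim}
The three gradient targets would then be \texttt{tanh\_gradient\_map(labels, 0)}, \texttt{(labels, 1)} and \texttt{(labels, 2)}, complemented by the binary foreground map \texttt{labels > 0}.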
Processing images directly in the 3D space has the advantage of performing predictions only once for an entire 3D region, as opposed to a slice-wise 2D application, which has to be applied repetitively along each axis to obtain precise gradient maps for each of the three spatial directions.
Reconstruction of cell instances is done in an iterative manner, by moving each voxel within the foreground region along the predicted 3D gradient field $g$ by a step $\delta_\textrm{recon}(x,y,z) = g(x,y,z) \cdot s_\textrm{recon}$, given by the predicted gradient $g(x,y,z)$ at the respective position scaled by a fixed integer factor $s_\textrm{recon}$.
The number of iterations is defined by a fixed integer $N_\textrm{recon}$, which is adjusted to represent the number of steps necessary to certainly move a boundary voxel to the cell center.
Ultimately, each voxel ends up in the vicinity of the corresponding cell center, where they can be grouped and assigned a unique cell label.
Instead of utilizing clustering techniques to assign labels, we rely on mathematical morphology to identify connected voxel groups at each cell center.
As opposed to clustering, this allows for a constant and fast run-time, independent of the potentially large quantity of cells in 3D image data.
A morphological closing operation using a spherical structuring element with a radius $r_\textrm{closing}$ smaller than the estimated average cell radius is applied to determine those unique connected components.
Reconstructed cell instances are filtered by their size, requiring each instance to be within a predefined range of $(r_\textrm{min},r_\textrm{max})$ and by their overlap with the predicted foreground region with a required overlap ratio of at least $p_\textrm{overlap}$.
Similar to the post-processing used in \cite{stringer2020}, gradient maps are computed for each reconstructed instance and compared to the corresponding gradient maps predicted by the network.
In an ideal case, both gradient maps are equivalent, but if their mean absolute error exceeds $err_\textrm{gradient}$ this instance is assumed to be falsely reconstructed and discarded from the final result.
\begin{figure}[h]
\centering
\includegraphics[width=0.48\textwidth]{images/processing_pipeline.eps}
\caption{Representative visualization of our 3D processing pipeline, including the input patch \cite{wolny2020}, the neural network and the predicted outputs. Instances are reconstructed from the predicted gradient maps and filtered ($f$) by utilizing foreground predictions and flow errors.}
\label{fig:pipeline}
\end{figure}
\section{Experiments and Results} \label{sec:results}
\begin{figure*}[ht]
\centering
\includegraphics[width=0.49\textwidth]{images/PNASmeristem_dice.eps}
\includegraphics[width=0.49\textwidth]{images/VALmeristem_dice.eps}
\caption{Dice scores computed from segmentation results obtained for the public Meristem data set \cite{willis2016} (left) and for the manually annotated 3D image stack (right). The experiments include pre-trained Cellpose provided by \cite{stringer2020} ($\text{Baseline}_\textrm{pretrained}$), a retrained version of the original Cellpose architecture ($\text{Baseline}_\textrm{specialized}$), our 3D extension with the original ($\text{Extension}_\textrm{heat}$) and the hyperbolic tangent gradient formulation ($\text{Extension}_\textrm{tanh}$). Whiskers represent the $5^\text{th}$ and $95^\text{th}$ percentile, the box spans from the $1^\text{st}$ to the $3^\text{rd}$ quartile, the orange line shows the median value and the green triangle indicates the mean value.}
\label{fig:seg_scores}
\end{figure*}
To evaluate our method, we conduct different experiments comparing the 3D extension to the 2D baseline \cite{stringer2020} and assessing the generation of gradient maps.
For a comparison to various state-of-the-art segmentation approaches, we refer to the results reported in \cite{stringer2020}.
The data set we use for evaluation is publicly available \cite{willis2016} and includes $125$ 3D stacks of confocal microscopy image data showing fluorescently labeled cell membranes in the Meristem of the plant model organism \textit{A.~thaliana}.
We divided the data set into training and test sets, where plants 1,2,4 and 13 were used for training and plants 15 and 18 were used for testing.
The test data set comprises about 37,000 individual cells.
Furthermore, we show how well each of the approaches performs on unseen data to assess the generalizability to different microscopy settings.
Therefore, we use a manually annotated 3D confocal stack of \textit{A. thaliana}, comprising a total of 972 fully annotated cells.
Annotations were manually obtained using the \mbox{SEGMENT3D} online platform \cite{spina18}.
Experiments are structured as follows:
\vspace{-1em}\paragraph*{$\text{Baseline}_\textrm{pretrained}$} As a baseline we use the publicly available Cellpose \textit{cyto}-model and apply it to the test data set using the 3D pipeline published in \cite{stringer2020}.
Since the model was trained on a large and highly varying data set, we do not perform any further training.
\vspace{-1em}\paragraph*{$\text{Baseline}_\textrm{specialized}$} The second experiment is designed to be a second baseline, as we use the original Cellpose approach \cite{stringer2020} and train it from scratch using the above-mentioned training data set from Willis \emph{et al.} \cite{willis2016} for 1000 epochs, which we find sufficient for convergence.
To reduce memory consumption and prevent high redundancies, we extract every fourth slice of each 3D stack of the training data set to construct the new 2D training data.
This experiment constitutes a specialized case of the baseline approach.
\vspace{-1em}\paragraph*{$\text{Extension}_\textrm{heat}$} Our proposed 3D extension is first trained with the original representation of the gradient maps, formulated as a heat diffusion process \cite{stringer2020}.
The 3D U-Net is trained on patches of size $128 \times 128 \times 64$ voxels for 1000 epochs using the Meristem training data set from Willis \emph{et al.} \cite{willis2016}.
In every epoch, one randomly located patch is extracted from each image stack of the training data set.
To obtain the final full-size image, a weighted tile merging strategy as proposed in \cite{de-bel19a} is used on the predicted foreground and gradient maps, before reconstructing individual instances.
Instance reconstruction parameters are empirically set to $s_\textrm{recon}=4$, $N_\textrm{recon}=100$ and $r_\textrm{closing}=3$.
Filtering parameters are set to $(r_\textrm{min},r_\textrm{max})=(5,100)$, $p_\textrm{overlap}=0.2$ and $err_\textrm{gradient}=0.8$.
\vspace{-1em}\paragraph*{$\text{Extension}_\textrm{tanh}$} For the fourth experiment, we change the representation of the gradient maps, formulated by hyperbolic tangent functions as described in Sec.~\ref{sec:method}.
Otherwise, the setup is identical to the setup of $\text{Extension}_\textrm{heat}$.\newline
Final mean Dice values for the public test data set \cite{willis2016} are $0.656$ for $\text{Baseline}_\textrm{pretrained}$, $0.723$ for $\text{Baseline}_\textrm{specialized}$ and $0.905$ and $0.897$ for $\text{Extension}_\textrm{heat}$ and $\text{Extension}_\textrm{tanh}$, respectively (Fig.~\ref{fig:seg_scores}, left).
Although $\text{Baseline}_\textrm{specialized}$ benefits from specialized knowledge and shows improved segmentation scores, both baseline approaches result in poor instance segmentations in regions with low fluorescence intensity, leading to a large spread of obtained scores.
The full 3D extensions, however, are able to exploit the structural information from the third dimension to successfully outline poorly visible cell instances.
Furthermore, similar scores are obtained for both gradient formulations.
Results for the manually annotated data are shown as boxplots in Fig.~\ref{fig:seg_scores} (right) with mean Dice scores of $0.383$ for $\text{Baseline}_\textrm{pretrained}$, $0.877$ for $\text{Baseline}_\textrm{specialized}$ and scores of $0.883$ for both $\text{Extension}_\textrm{heat}$ and $\text{Extension}_\textrm{tanh}$.
This confirms that the lack of structural 3D information leads to a loss in accuracy and that both gradient formulations perform similarly well.
\begin{figure*}[h!]
\centering
\includegraphics[width=1.0\textwidth]{images/atlas_segmentations.eps}
\caption{Slices of raw 3D image data (top), corresponding ground truth and predictions of the $\text{Extension}_\textrm{tanh}$ model (center) and 3D views of the predictions (bottom) including the obtained Dice scores (DSC) for different plant organs in \textit{A.~thaliana} \cite{wolny2020}.}
\label{fig:atlas_seg}
\end{figure*}
To further demonstrate the generalizability of the proposed approach, we generated results for the publicly available 3D image data from a variety of different plant organs in \textit{A.~thaliana} \cite{wolny2020}.
We used the model trained in $\text{Extension}_\textrm{tanh}$ and each image was scaled to roughly match the cell sizes of the training data set in each spatial direction.
Since the ground truth did not include segmentations of all cells visible in the image data, we limited the computation of Dice scores to annotated cells.
Average Dice scores (DSC) obtained for instance segmentations of each image stack range from 0.54 to 0.91 and slices of raw image data, ground truth and instance predictions are shown in Fig.~\ref{fig:atlas_seg}.
This data set contains cellular shapes never seen during training, which causes overly elongated cells to be split up into different sections for some of the images.
Nevertheless, overall results demonstrate robustness and applicability to different cellular structures.
\section{Conclusion and Availability} \label{sec:conclusion}
In this work we demonstrated how the concept of Cellpose~\cite{stringer2020} can be extended to increase segmentation accuracy for 3D image data.
The utilization of the full 3D information and the prediction of 3D gradient maps, help to improve segmentation of cells in regions of poor image quality and low intensity signals.
Our alternate formulation of the proposed gradient maps leads to a comparable accuracy of segmentation results, while offering a lower complexity with respect to training data preparation and instance reconstruction.
The morphology-based approach proposed for the instance reconstruction enables the application to 3D microscopy image data independent from the quantity of captured cells.
Results obtained on completely different data sets never seen during training support the claim that this approach is generalist and robust.
Code, training and application pipelines are publicly available at https://github.com/stegmaierj/Cellpose3D.
Furthermore, we integrated the approach into the existing open source applications XPIWIT \cite{bartschat2016} and MorphoGraphX \cite{dereuille2015} to make it accessible to a broad range of community members.
\section{ACKNOWLEDGEMENTS}
This work was funded by the German Research Foundation DFG with grant STE2802/2-1 (DE) and by an Institute Strategic Programme Grant from the BBSRC to the John Innes Centre (BB/P013511/1).
\vfill\pagebreak
\bibliographystyle{IEEEbib}